{
  "_schema_url": "https://fenre.github.io/splunk-monitoring-use-cases/docs/catalog-schema.md",
  "_agents_url": "https://fenre.github.io/splunk-monitoring-use-cases/AGENTS.md",
  "_readme": "Splunk monitoring use case catalog. Keys are abbreviated — see _schema_url for full field reference. DATA contains categories with subcategories and use cases. CAT_META has per-category metadata. CAT_GROUPS maps domain groups to category IDs. EQUIPMENT lists technology/TA filter definitions. implementationRoadmap groups UC ids into crawl / walk / run / unassigned buckets per category for the 'where do I start?' planner view.",
  "_field_map": {
    "_about": "Abbreviated key → full field name. Category: i=id, n=name, s=subcategories. Subcategory: i=id, n=name, u=use_cases. Use case fields below.",
    "i": "id",
    "n": "title",
    "c": "criticality",
    "f": "difficulty",
    "v": "value",
    "ge": "grandmaExplanation",
    "t": "app_ta",
    "d": "dataSources",
    "q": "spl",
    "qs": "cimSpl",
    "m": "implementation",
    "md": "detailedImplementation",
    "z": "visualization",
    "a": "cimModels",
    "dma": "dataModelAcceleration",
    "schema": "schema",
    "mtype": "monitoringType",
    "kfp": "knownFalsePositives",
    "refs": "references",
    "mitre": "mitreAttack",
    "dtype": "detectionType",
    "sdomain": "securityDomain",
    "reqf": "requiredFields",
    "script": "scriptExample",
    "premium": "premiumApps",
    "hw": "equipmentModels",
    "e": "equipmentIds",
    "em": "equipmentModelIds",
    "status": "status",
    "reviewed": "lastReviewed",
    "sver": "splunkVersions",
    "rby": "reviewer",
    "wv": "wave",
    "pre": "prerequisiteUseCases"
  },
  "DATA": [
    {
      "s": [
        {
          "i": "1.1",
          "n": "Linux Servers",
          "u": [
            {
              "i": "1.1.1",
              "n": "CPU Utilization Trending (Linux)",
              "c": "high",
              "f": "intermediate",
              "v": "Detects overloaded hosts before they cause application degradation. Enables capacity planning and right-sizing.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=cpu` (from `cpu.sh` scripted input)",
              "q": "index=os sourcetype=cpu host=*\n| eval cpu_used = 100 - pctIdle\n| timechart span=1h avg(cpu_used) as avg_cpu by host\n| where avg_cpu > 90",
              "m": "Install Splunk_TA_nix on Universal Forwarders. Enable the `cpu` scripted input in `inputs.conf` (`[script://./bin/cpu.sh]`, interval=60). The cpu sourcetype provides fields: `pctUser`, `pctSystem`, `pctIowait`, `pctIdle`, etc. Create an alert for sustained >90% over 15 minutes using a rolling window.",
              "z": "Line chart (timechart by host), Single value panels for current/peak CPU, Table of hosts exceeding threshold.",
              "kfp": "Sustained high CPU during backups, batch jobs, or maintenance; correlate with change windows.",
              "refs": "[Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [inputs.conf](https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=cpu` (from `cpu.sh` scripted input).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall Splunk_TA_nix on Universal Forwarders. Enable the `cpu` scripted input in `inputs.conf` (`[script://./bin/cpu.sh]`, interval=60). The cpu sourcetype provides fields: `pctUser`, `pctSystem`, `pctIowait`, `pctIdle`, etc. Create an alert for sustained >90% over 15 minutes using a rolling window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=cpu host=*\n| eval cpu_used = 100 - pctIdle\n| timechart span=1h avg(cpu_used) as avg_cpu by host\n| where avg_cpu > 90\n```\n\nUnderstanding this SPL\n\n**CPU Utilization Trending (Linux)** — Detects overloaded hosts before they cause application degradation. Enables capacity planning and right-sizing.\n\nDocumented **Data sources**: `sourcetype=cpu` (from `cpu.sh` scripted input). **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: cpu. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=cpu. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cpu_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CPU Utilization Trending (Linux)** — Detects overloaded hosts before they cause application degradation. Enables capacity planning and right-sizing.\n\nDocumented **Data sources**: `sourcetype=cpu` (from `cpu.sh` scripted input). **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (timechart by host), Single value panels for current/peak CPU, Table of hosts exceeding threshold.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
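              "_example": "A minimal sketch of the 'sustained >90% for 15 minutes' alert from Step 1, assuming the stock `cpu.sh` fields; three consecutive 5-minute buckets approximate the 15-minute rolling window.\n\n```spl\nindex=os sourcetype=cpu host=*\n| eval cpu_used = 100 - pctIdle\n| bin _time span=5m\n| stats avg(cpu_used) as cpu_5m by _time, host\n| streamstats window=3 avg(cpu_5m) as rolling_15m by host\n| where rolling_15m > 90\n```\n\nSchedule the alert every 5 minutes over the last 30 minutes and throttle per host to avoid repeat pages.",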
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "Enable acceleration for the Performance data model; set summary range to cover your alert window (e.g. 30 days).",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "9.2+, Cloud",
              "rby": "",
              "ge": "We watch how busy each Linux server's CPU is, hour by hour. When a server's been pinned for an hour straight, we ring the bell — because that's when the apps on top of it start feeling slow to the people using them. We also keep the trend so we can see months in advance which servers need to be replaced or have work moved off them.",
              "wv": "crawl",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "1.1.2",
              "n": "Memory Pressure Detection (Linux)",
              "c": "high",
              "f": "intermediate",
              "v": "Prevents OOM kills, application crashes, and unresponsive systems by detecting memory exhaustion early.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=vmstat`",
              "q": "index=os sourcetype=vmstat host=*\n| timechart span=5m avg(memUsedPct) as memory_pct, avg(swapUsedPct) as swap_pct by host",
              "m": "Enable `vmstat` scripted input in Splunk_TA_nix (interval=60). Key fields: `memTotalMB`, `memFreeMB`, `memUsedMB`, `memUsedPct` (memory), `swapUsedPct` (swap percentage), `loadAvg1mi` (1-min load avg). Set alert when swapUsedPct exceeds 20% or memUsedPct exceeds 95% sustained for 10 minutes.",
              "z": "Area chart (memory + swap stacked), Single value panels showing current utilization, Gauge widget for threshold display.",
              "kfp": "Memory use rises during warm-up, cache fill, and after service restarts. Some runtimes keep large allocations by design. Correlate with deploys, batch jobs, and heap settings.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=vmstat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable `vmstat` scripted input in Splunk_TA_nix (interval=60). Key fields: `memTotalMB`, `memFreeMB`, `memUsedMB`, `memUsedPct` (memory), `swapUsedPct` (swap percentage), `loadAvg1mi` (1-min load avg). Set alert when swapUsedPct exceeds 20% or memUsedPct exceeds 95% sustained for 10 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=vmstat host=*\n| timechart span=5m avg(memUsedPct) as memory_pct, avg(swapUsedPct) as swap_pct by host\n```\n\nUnderstanding this SPL\n\n**Memory Pressure Detection (Linux)** — Prevents OOM kills, application crashes, and unresponsive systems by detecting memory exhaustion early.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: vmstat. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=vmstat. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Memory Pressure Detection (Linux)** — Prevents OOM kills, application crashes, and unresponsive systems by detecting memory exhaustion early.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where mem_pct > 95 OR swap_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (memory + swap stacked), Single value panels showing current utilization, Gauge widget for threshold display.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
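              "_example": "A minimal sketch of the 'sustained for 10 minutes' alert from Step 1, assuming the stock `vmstat` fields; if the minimum of two consecutive 5-minute buckets stays above the threshold, the condition held for the full window.\n\n```spl\nindex=os sourcetype=vmstat host=*\n| bin _time span=5m\n| stats avg(memUsedPct) as mem_pct, avg(swapUsedPct) as swap_pct by _time, host\n| streamstats window=2 min(mem_pct) as mem_floor, min(swap_pct) as swap_floor by host\n| where mem_floor > 95 OR swap_floor > 20\n```\n\nSchedule every 5 minutes over the last 15 minutes and throttle per host to avoid repeat pages.",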
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on memory and swap so you get warning before the machine runs out of room, starts swapping hard, or the system starts killing programs to stay alive.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "1.1.3",
              "n": "Disk Capacity Forecasting (Linux)",
              "c": "critical",
              "f": "advanced",
              "v": "Prevents outages caused by full filesystems. A full /var or / can bring down services, databases, and logging. Enables proactive storage procurement.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=df`",
              "q": "index=os sourcetype=df host=myserver Filesystem=\"/dev/sda1\"\n| timechart span=1d avg(UsePct) as disk_pct\n| predict disk_pct as predicted future_timespan=30",
              "m": "Enable `df` scripted input (interval=300). Create a saved search that runs daily, identifying filesystems above 85%. Use `predict` command for 30-day forecasting. Set tiered alerts at 85% (warning), 90% (high), 95% (critical).",
              "z": "Line chart with predict trendline, Table sorted by usage descending, Gauge per critical mount point.",
              "kfp": "Usage may jump during log rotation, large temporary files, index or database maintenance, and release deployments. One-time growth on a data volume can be expected.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=df`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable `df` scripted input (interval=300). Create a saved search that runs daily, identifying filesystems above 85%. Use `predict` command for 30-day forecasting. Set tiered alerts at 85% (warning), 90% (high), 95% (critical).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=df host=myserver Filesystem=\"/dev/sda1\"\n| timechart span=1d avg(UsePct) as disk_pct\n| predict disk_pct as predicted future_timespan=30\n```\n\nUnderstanding this SPL\n\n**Disk Capacity Forecasting (Linux)** — Prevents outages caused by full filesystems. A full /var or / can bring down services, databases, and logging. Enables proactive storage procurement.\n\nDocumented **Data sources**: `sourcetype=df`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: df; **host** filter: myserver. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=df. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Disk Capacity Forecasting (Linux)**): predict disk_pct as predicted future_timespan=30\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.storage_used_percent) as disk_pct\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host, Performance.mount span=1h\n| where disk_pct > 85\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Disk Capacity Forecasting (Linux)** — Prevents outages caused by full filesystems. A full /var or / can bring down services, databases, and logging. Enables proactive storage procurement.\n\nDocumented **Data sources**: `sourcetype=df`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where disk_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart with predict trendline, Table sorted by usage descending, Gauge per critical mount point.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
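              "_example": "A minimal sketch of the tiered 85/90/95 alert from Step 1, assuming the stock `df` fields; the `replace` guards against TA versions that report `UsePct` with a trailing percent sign.\n\n```spl\nindex=os sourcetype=df host=*\n| stats latest(UsePct) as disk_pct by host, Filesystem\n| eval disk_pct = tonumber(replace(disk_pct, \"%\", \"\"))\n| eval severity = case(disk_pct >= 95, \"critical\", disk_pct >= 90, \"high\", disk_pct >= 85, \"warning\")\n| where isnotnull(severity)\n| sort -disk_pct\n```\n\nRun daily as described in Step 1 and route each severity tier to a different alert action.",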
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full your disks are getting so you can add space, clean up, or plan storage before a full filesystem stops apps or logging.",
              "wv": "walk",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.storage_used_percent) as disk_pct\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host, Performance.mount span=1h\n| where disk_pct > 85",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "1.1.4",
              "n": "Disk I/O Saturation (Linux)",
              "c": "high",
              "f": "intermediate",
              "v": "High I/O wait degrades application performance even when CPU and memory look healthy. Catches storage bottlenecks before users complain.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=iostat`",
              "q": "index=os sourcetype=iostat host=*\n| timechart span=5m avg(avgWaitMillis) as avg_wait, avg(avgSvcMillis) as avg_svc by host\n| where avg_wait > 20",
              "m": "Enable `iostat` scripted input (interval=60). Key fields: `avgWaitMillis` (await — avg wait in ms), `avgSvcMillis` (svctm — avg service time in ms), `bandwUtilPct` (disk utilization %), `rReq_PS`/`wReq_PS` (read/write IOPS). Alert when avgWaitMillis >20ms sustained over 10 minutes. Correlate with application latency metrics for root cause.",
              "z": "Line chart (latency over time by host), Heatmap of I/O wait across hosts.",
              "kfp": "High wait or service times can track heavy batch windows, backup jobs, storage firmware upgrades, or transient path issues. Cross-check with application latency and array or hypervisor health.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=iostat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable `iostat` scripted input (interval=60). Key fields: `avgWaitMillis` (await — avg wait in ms), `avgSvcMillis` (svctm — avg service time in ms), `bandwUtilPct` (disk utilization %), `rReq_PS`/`wReq_PS` (read/write IOPS). Alert when avgWaitMillis >20ms sustained over 10 minutes. Correlate with application latency metrics for root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=iostat host=*\n| timechart span=5m avg(avgWaitMillis) as avg_wait, avg(avgSvcMillis) as avg_svc by host\n| where avg_wait > 20\n```\n\nUnderstanding this SPL\n\n**Disk I/O Saturation (Linux)** — High I/O wait degrades application performance even when CPU and memory look healthy. Catches storage bottlenecks before users complain.\n\nDocumented **Data sources**: `sourcetype=iostat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: iostat. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=iostat. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_wait > 20` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.read_latency) as read_ms avg(Performance.write_latency) as write_ms\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=5m\n| eval worst_ms=max(read_ms, write_ms)\n| where worst_ms > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Disk I/O Saturation (Linux)** — High I/O wait degrades application performance even when CPU and memory look healthy. Catches storage bottlenecks before users complain.\n\nDocumented **Data sources**: `sourcetype=iostat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• `eval` defines or adjusts **worst_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where worst_ms > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency over time by host), Heatmap of I/O wait across hosts.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
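              "_example": "A minimal sketch of the 'sustained >20ms over 10 minutes' alert from Step 1, assuming the stock `iostat.sh` fields (drop `Device` from the by clauses if your TA version does not emit a per-disk field).\n\n```spl\nindex=os sourcetype=iostat host=*\n| bin _time span=5m\n| stats avg(avgWaitMillis) as avg_wait by _time, host, Device\n| streamstats window=2 min(avg_wait) as sustained_wait by host, Device\n| where sustained_wait > 20\n```\n\nA minimum above 20ms across two consecutive 5-minute buckets means the latency held for the full 10 minutes rather than spiking once.",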
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for storage that is taking too long to answer, even when the machine does not look busy — that is often where slowness starts before anything obvious breaks.",
              "wv": "walk",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.read_latency) as read_ms avg(Performance.write_latency) as write_ms\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=5m\n| eval worst_ms=max(read_ms, write_ms)\n| where worst_ms > 20",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "1.1.5",
              "n": "System Load Anomalies",
              "c": "medium",
              "f": "advanced",
              "v": "Load average exceeding CPU core count indicates process queuing. Useful as an early warning for runaway processes or unexpected workloads.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=vmstat` (includes `loadAvg1mi`) or custom `uptime` scripted input",
              "q": "index=os sourcetype=vmstat host=*\n| stats latest(loadAvg1mi) as load1 by host\n| lookup server_inventory host OUTPUT cpu_count\n| eval load_ratio = round(load1 / cpu_count, 2)\n| where load_ratio > 1.5\n| sort -load_ratio\n| table host load1 cpu_count load_ratio",
              "m": "The `vmstat` sourcetype provides `loadAvg1mi` (1-minute load average). For CPU core count, use either the `hardware` sourcetype (`CPU_COUNT` field) or a server inventory lookup. Alternatively, create a custom `uptime` scripted input parsing all three load averages. Alert when load ratio exceeds 1.5 for 15+ minutes.",
              "z": "Line chart (load1/5/15 over time), Table of high-load hosts with core count context.",
              "kfp": "Load can exceed core count during short traffic bursts, build or backup jobs, or when the CPU count in your lookup is wrong. Verify `cpu_count` from inventory or the `hardware` feed.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=vmstat` (includes `loadAvg1mi`) or custom `uptime` scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe `vmstat` sourcetype provides `loadAvg1mi` (1-minute load average). For CPU core count, use either the `hardware` sourcetype (`CPU_COUNT` field) or a server inventory lookup. Alternatively, create a custom `uptime` scripted input parsing all three load averages. Alert when load ratio exceeds 1.5 for 15+ minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=vmstat host=*\n| stats latest(loadAvg1mi) as load1 by host\n| lookup server_inventory host OUTPUT cpu_count\n| eval load_ratio = round(load1 / cpu_count, 2)\n| where load_ratio > 1.5\n| sort -load_ratio\n| table host load1 cpu_count load_ratio\n```\n\nUnderstanding this SPL\n\n**System Load Anomalies** — Load average exceeding CPU core count indicates process queuing. Useful as an early warning for runaway processes or unexpected workloads.\n\nDocumented **Data sources**: `sourcetype=vmstat` (includes `loadAvg1mi`) or custom `uptime` scripted input. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: vmstat. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=vmstat. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **load_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where load_ratio > 1.5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **System Load Anomalies**): table host load1 cpu_count load_ratio\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as cpu_pct\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=5m\n| where cpu_pct > 90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**System Load Anomalies** — Load average exceeding CPU core count indicates process queuing. Useful as an early warning for runaway processes or unexpected workloads.\n\nDocumented **Data sources**: `sourcetype=vmstat` (includes `loadAvg1mi`) or custom `uptime` scripted input. **App/TA** (typical add-on context): `Splunk_TA_nix`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where cpu_pct > 90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (load1/5/15 over time), Table of high-load hosts with core count context.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
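              "_example": "A minimal sketch of the custom `uptime` scripted input mentioned in Step 1, assuming a Linux host with /proc and coreutils; pair it with an inputs.conf stanza like the generic one above.\n\n```bash\n#!/usr/bin/env bash\n# Emit all three load averages plus the core count as one key=value event.\nread load1 load5 load15 rest < /proc/loadavg\ncores=$(nproc)\necho \"load1=$load1 load5=$load5 load15=$load15 cpu_count=$cores\"\n```\n\nEmitting `cpu_count` alongside the load averages removes the dependency on the `server_inventory` lookup for hosts where the script runs.",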
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Load average exceeding CPU core count indicates process queuing — so you find out before users do when something is slowing down or breaking.",
              "wv": "run",
              "pre": [
                "UC-1.1.2"
              ],
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as cpu_pct\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=5m\n| where cpu_pct > 90",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "1.1.6",
              "n": "Process Crash Detection (Linux)",
              "c": "high",
              "f": "intermediate",
              "v": "Immediate awareness of unexpected process terminations. Critical for services that don't auto-restart or have no watchdog.",
              "t": "`Splunk_TA_nix`, Splunk Add-on for Syslog",
              "d": "`sourcetype=syslog`, `/var/log/messages`, `/var/log/syslog`",
              "q": "index=os sourcetype=syslog (\"segfault\" OR \"killed process\" OR \"core dumped\" OR \"terminated\" OR \"SIGABRT\" OR \"SIGSEGV\")\n| rex \"(?<process_name>\\w+)\\[\\d+\\]\"\n| stats count by host, process_name, _time\n| sort -count",
              "m": "Forward `/var/log/messages` and `/var/log/syslog` via UF inputs.conf. Create an alert on keywords: `segfault`, `killed process`, `core dumped`. Enrich with service/owner lookup.",
              "z": "Events list (timeline view), Stats table grouped by host and process, Bar chart of crash counts by process.",
              "kfp": "Planned restarts, package upgrades, and noisy kernel or library log lines can resemble crashes. Tune keywords and severity for your distros, and match to change records.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, Splunk Add-on for Syslog.\n• Ensure the following data sources are available: `sourcetype=syslog`, `/var/log/messages`, `/var/log/syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward `/var/log/messages` and `/var/log/syslog` via UF inputs.conf. Create an alert on keywords: `segfault`, `killed process`, `core dumped`. Enrich with service/owner lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog (\"segfault\" OR \"killed process\" OR \"core dumped\" OR \"terminated\" OR \"SIGABRT\" OR \"SIGSEGV\")\n| rex \"(?<process_name>\\w+)\\[\\d+\\]\"\n| stats count by host, process_name, _time\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Process Crash Detection (Linux)** — Immediate awareness of unexpected process terminations. Critical for services that don't auto-restart or have no watchdog.\n\nDocumented **Data sources**: `sourcetype=syslog`, `/var/log/messages`, `/var/log/syslog`. **App/TA** (typical add-on context): `Splunk_TA_nix`, Splunk Add-on for Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, process_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Process Crash Detection (Linux)** — Immediate awareness of unexpected process terminations. Critical for services that don't auto-restart or have no watchdog.\n\nDocumented **Data sources**: `sourcetype=syslog`, `/var/log/messages`, `/var/log/syslog`. **App/TA** (typical add-on context): `Splunk_TA_nix`, Splunk Add-on for Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events list (timeline view), Stats table grouped by host and process, Bar chart of crash counts by process.",
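              "_example": "A minimal sketch of the service/owner enrichment from Step 1, assuming a hypothetical `service_owners` CSV lookup with process_name, service, and owner columns; create and share the lookup before saving the alert.\n\n```spl\nindex=os sourcetype=syslog (\"segfault\" OR \"killed process\" OR \"core dumped\")\n| rex \"(?<process_name>\\w+)\\[\\d+\\]\"\n| lookup service_owners process_name OUTPUT service owner\n| bin _time span=1h\n| stats count by host, process_name, service, owner\n| sort -count\n```\n\nRouting the alert by `owner` turns a generic crash page into a ticket for the team that actually runs the service.",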
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Immediate awareness of unexpected process terminations — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count",
              "e": [
                "linux",
                "syslog"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.7",
              "n": "OOM Killer Events",
              "c": "critical",
              "f": "intermediate",
              "v": "OOM killer invocations mean the system ran out of memory and Linux chose to kill a process to survive. This often takes out critical services silently.",
              "t": "`Splunk_TA_nix`, Syslog",
              "d": "`sourcetype=syslog`, `dmesg`",
              "q": "index=os sourcetype=syslog \"Out of memory\" OR \"oom-killer\" OR \"Killed process\"\n| rex \"Killed process (?<killed_pid>\\d+) \\((?<killed_process>[^)]+)\\)\"\n| rex \"total-vm:(?<total_vm>\\d+)kB\"\n| table _time host killed_process killed_pid total_vm\n| sort -_time",
              "m": "Forward syslog and dmesg output. Create a real-time alert on `oom-killer` or `Out of memory` keywords. Consider setting up a triggered action to also capture current process list via scripted input when OOM occurs.",
              "z": "Events timeline, Single value panel (count of OOM events last 24h), Table with affected hosts and processes.",
              "kfp": "Memory tuning tests, container limits, and one-off jobs can force OOM during maintenance. Correlate with deploys, cgroup or Kubernetes limits, and large batch work.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, Syslog.\n• Ensure the following data sources are available: `sourcetype=syslog`, `dmesg`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward syslog and dmesg output. Create a real-time alert on `oom-killer` or `Out of memory` keywords. Consider setting up a triggered action to also capture current process list via scripted input when OOM occurs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"Out of memory\" OR \"oom-killer\" OR \"Killed process\"\n| rex \"Killed process (?<killed_pid>\\d+) \\((?<killed_process>[^)]+)\\)\"\n| rex \"total-vm:(?<total_vm>\\d+)kB\"\n| table _time host killed_process killed_pid total_vm\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**OOM Killer Events** — OOM killer invocations mean the system ran out of memory and Linux chose to kill a process to survive. This often takes out critical services silently.\n\nDocumented **Data sources**: `sourcetype=syslog`, `dmesg`. **App/TA** (typical add-on context): `Splunk_TA_nix`, Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **OOM Killer Events**): table _time host killed_process killed_pid total_vm\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Single value panel (count of OOM events last 24h), Table with affected hosts and processes.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We notice when the system is so short on memory that it starts force-quitting programs — so you can act before a critical app disappears with little warning.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux",
                "syslog"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.8",
              "n": "SSH Brute-Force Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Detects active password-guessing attacks against SSH services. Can be early indicator of compromised credentials or targeted intrusion attempts.",
              "t": "`Splunk_TA_nix`, Syslog",
              "d": "`sourcetype=linux_secure` (`/var/log/auth.log` or `/var/log/secure`)",
              "q": "index=os sourcetype=linux_secure \"Failed password\"\n| rex \"from (?<src>\\d+\\.\\d+\\.\\d+\\.\\d+)\"\n| stats count as attempts, dc(user) as users_targeted, values(user) as usernames by src, host\n| where attempts > 10\n| sort -attempts\n| iplocation src",
              "m": "Forward `/var/log/auth.log` (Debian/Ubuntu) or `/var/log/secure` (RHEL/CentOS). Create alert for >10 failed attempts from a single IP in 5 minutes. Consider integrating with a GeoIP lookup for geographic context.",
              "z": "Table of source IPs with attempt counts, Choropleth map (GeoIP), Timechart of brute-force events.",
              "kfp": "Repeated failures from security scans, mis-typed passwords, automation with stale keys, or shared jump hosts can look like attacks. Pair with asset purpose, pen-test windows, and source reputation before paging out of hours.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, Syslog.\n• Ensure the following data sources are available: `sourcetype=linux_secure` (`/var/log/auth.log` or `/var/log/secure`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward `/var/log/auth.log` (Debian/Ubuntu) or `/var/log/secure` (RHEL/CentOS). Create alert for >10 failed attempts from a single IP in 5 minutes. Consider integrating with a GeoIP lookup for geographic context.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_secure \"Failed password\"\n| rex \"from (?<src>\\d+\\.\\d+\\.\\d+\\.\\d+)\"\n| stats count as attempts, dc(user) as users_targeted, values(user) as usernames by src, host\n| where attempts > 10\n| sort -attempts\n| iplocation src\n```\n\nUnderstanding this SPL\n\n**SSH Brute-Force Detection** — Detects active password-guessing attacks against SSH services. Can be early indicator of compromised credentials or targeted intrusion attempts.\n\nDocumented **Data sources**: `sourcetype=linux_secure` (`/var/log/auth.log` or `/var/log/secure`). **App/TA** (typical add-on context): `Splunk_TA_nix`, Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_secure. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by src, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where attempts > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **SSH Brute-Force Detection**): iplocation src\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SSH Brute-Force Detection** — Detects active password-guessing attacks against SSH services. Can be early indicator of compromised credentials or targeted intrusion attempts.\n\nDocumented **Data sources**: `sourcetype=linux_secure` (`/var/log/auth.log` or `/var/log/secure`). **App/TA** (typical add-on context): `Splunk_TA_nix`, Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of source IPs with attempt counts, Choropleth map (GeoIP), Timechart of brute-force events.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for many failed remote login attempts so you can spot someone trying to guess their way in before a weak or stolen password gives them a foothold.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10",
              "e": [
                "linux",
                "syslog"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.9",
              "n": "Unauthorized Sudo Usage",
              "c": "high",
              "f": "beginner",
              "v": "Repeated failed `sudo` attempts indicate attacker probing after account compromise; unexpected `sudo` success with destructive commands (e.g. `rm -rf`, `chmod 777`) may signal insider misuse or stolen credentials. Pair detection with IR steps: disable the account, revoke SSH keys, and review command history for lateral movement.",
              "t": "`Splunk_TA_nix`, Syslog",
              "d": "`sourcetype=linux_secure`",
              "q": "index=os sourcetype=linux_secure \"sudo:\" (\"NOT in sudoers\" OR \"authentication failure\" OR \"incorrect password\")\n| rex \"user (?<sudo_user>\\w+)\"\n| rex \"COMMAND=(?<command>.+)\"\n| stats count by host, sudo_user, command\n| sort -count",
              "m": "Forward auth logs. Alert immediately on `NOT in sudoers` events. For successful sudo, create audit dashboard showing who ran what with root privileges.",
              "z": "Table (user, host, command, count), Bar chart of sudo failures by user, Events list for investigation.",
              "kfp": "Automation, break-glass accounts, and honest typos can look like policy violations. Whitelist service principals and require change or ticket context for out-of-band alerts.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, Syslog.\n• Ensure the following data sources are available: `sourcetype=linux_secure`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward auth logs. Alert immediately on `NOT in sudoers` events. For successful sudo, create audit dashboard showing who ran what with root privileges.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_secure \"sudo:\" (\"NOT in sudoers\" OR \"authentication failure\" OR \"incorrect password\")\n| rex \"user (?<sudo_user>\\w+)\"\n| rex \"COMMAND=(?<command>.+)\"\n| stats count by host, sudo_user, command\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Unauthorized Sudo Usage** — Repeated failed `sudo` attempts indicate attacker probing after account compromise; unexpected `sudo` success with destructive commands (e.g. `rm -rf`, `chmod 777`) may signal insider misuse or stolen credentials. Pair detection with IR steps: disable the account, revoke SSH keys, and review command history for lateral movement.\n\nDocumented **Data sources**: `sourcetype=linux_secure`. **App/TA** (typical add-on context): `Splunk_TA_nix`, Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_secure. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, sudo_user, command** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| search Authentication.user=*admin* OR Authentication.user=root\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Unauthorized Sudo Usage** — Repeated failed `sudo` attempts indicate attacker probing after account compromise; unexpected `sudo` success with destructive commands (e.g. `rm -rf`, `chmod 777`) may signal insider misuse or stolen credentials. Pair detection with IR steps: disable the account, revoke SSH keys, and review command history for lateral movement.\n\nDocumented **Data sources**: `sourcetype=linux_secure`. **App/TA** (typical add-on context): `Splunk_TA_nix`, Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, host, command, count), Bar chart of sudo failures by user, Events list for investigation.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch sudo-related messages so you can see repeated failed attempts or fishy privilege use — a sign someone may be poking at admin access or misusing a normal account.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| search Authentication.user=*admin* OR Authentication.user=root",
              "e": [
                "linux",
                "syslog"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.10",
              "n": "Cron Job Failure Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Failed cron jobs can silently break batch processing, backups, log rotation, and maintenance tasks. Catching failures early prevents cascading issues.",
              "t": "`Splunk_TA_nix`, Syslog",
              "d": "`sourcetype=cron` or `sourcetype=syslog` source=\"/var/log/cron\"",
              "q": "index=os (sourcetype=cron OR source=\"/var/log/cron\") (\"error\" OR \"failed\" OR \"EXIT STATUS\" OR \"ORPHAN\")\n| rex \"CMD \\((?<cron_cmd>[^)]+)\\)\"\n| rex \"CROND\\[(?<pid>\\d+)\\]\"\n| stats count by host, cron_cmd, _time\n| sort -_time",
              "m": "Forward `/var/log/cron`. For critical cron jobs, create a \"heartbeat\" approach: expect a success message within a window, alert on absence. Use `| inputlookup expected_crons | join` pattern for missing run detection.",
              "z": "Table of failed cron jobs, Single value panel (failures last 24h), Missing job detection table.",
              "kfp": "The word 'failed' in unrelated text, non-critical crons, and noisy exit-status lines can trigger alerts. Tie critical jobs to heartbeats and allowlists for known-good noise.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, Syslog.\n• Ensure the following data sources are available: `sourcetype=cron` or `sourcetype=syslog` source=\"/var/log/cron\".\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward `/var/log/cron`. For critical cron jobs, create a \"heartbeat\" approach: expect a success message within a window, alert on absence. Use `| inputlookup expected_crons | join` pattern for missing run detection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os (sourcetype=cron OR source=\"/var/log/cron\") (\"error\" OR \"failed\" OR \"EXIT STATUS\" OR \"ORPHAN\")\n| rex \"CMD \\((?<cron_cmd>[^)]+)\\)\"\n| rex \"CROND\\[(?<pid>\\d+)\\]\"\n| stats count by host, cron_cmd, _time\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Cron Job Failure Monitoring** — Failed cron jobs can silently break batch processing, backups, log rotation, and maintenance tasks. Catching failures early prevents cascading issues.\n\nDocumented **Data sources**: `sourcetype=cron` or `sourcetype=syslog` source=\"/var/log/cron\". **App/TA** (typical add-on context): `Splunk_TA_nix`, Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: cron. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=cron. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, cron_cmd, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of failed cron jobs, Single value panel (failures last 24h), Missing job detection table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Failed cron jobs can silently break batch processing, backups, log rotation, and maintenance tasks — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux",
                "syslog"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.11",
              "n": "Kernel Panic Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Kernel panics cause immediate system crashes and potential data corruption. Often indicates hardware failure, driver issues, or memory corruption.",
              "t": "`Splunk_TA_nix`, Syslog",
              "d": "`sourcetype=syslog`, `dmesg`",
              "q": "index=os sourcetype=syslog (\"kernel panic\" OR \"Kernel panic\" OR \"BUG:\" OR \"Oops:\" OR \"Call Trace:\")\n| table _time host _raw\n| sort -_time",
              "m": "Forward syslog and enable dmesg scripted input. Create critical alert on `kernel panic` or `Oops:` keywords. Correlate with hardware health data (IPMI) for root cause.",
              "z": "Events timeline, Count by host, Alert panel (critical).",
              "kfp": "Some 'Oops' and driver messages are recoverable. Hardware tests and stress runs can also emit crash-like text. Read the full log line and correlate with real outages.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, Syslog.\n• Ensure the following data sources are available: `sourcetype=syslog`, `dmesg`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward syslog and enable dmesg scripted input. Create critical alert on `kernel panic` or `Oops:` keywords. Correlate with hardware health data (IPMI) for root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog (\"kernel panic\" OR \"Kernel panic\" OR \"BUG:\" OR \"Oops:\" OR \"Call Trace:\")\n| table _time host _raw\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Kernel Panic Detection** — Kernel panics cause immediate system crashes and potential data corruption. Often indicates hardware failure, driver issues, or memory corruption.\n\nDocumented **Data sources**: `sourcetype=syslog`, `dmesg`. **App/TA** (typical add-on context): `Splunk_TA_nix`, Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Kernel Panic Detection**): table _time host _raw\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Count by host, Alert panel (critical).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Kernel panics cause immediate system crashes and potential data corruption — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux",
                "syslog"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.12",
              "n": "NTP Time Sync Drift (Linux)",
              "c": "medium",
              "f": "intermediate",
              "v": "Clock drift causes authentication failures (Kerberos), log correlation issues, transaction ordering problems, and certificate validation failures.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=ntp` (scripted input via `ntpq -p` or `chronyc tracking`)",
              "q": "index=os sourcetype=ntp host=*\n| eval offset_ms = abs(offset)\n| stats latest(offset_ms) as drift_ms, latest(stratum) as stratum by host\n| where drift_ms > 100 OR stratum > 5\n| sort -drift_ms",
              "m": "Enable the `ntp` scripted input in Splunk_TA_nix (interval=300). It runs `ntpq -pn` and outputs peer data. The `offset` field is in milliseconds. Alert when offset exceeds 100ms or stratum exceeds 5.",
              "z": "Line chart (drift over time by host), Table of hosts with excessive drift.",
              "kfp": "NTP or chrony can step the clock and change stratum briefly during source changes, VM moves, or maintenance. Alert on persistent drift, not a single sample.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=ntp` (scripted input via `ntpq -p` or `chronyc tracking`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable the `ntp` scripted input in Splunk_TA_nix (interval=300). It runs `ntpq -pn` and outputs peer data. The `offset` field is in milliseconds. Alert when offset exceeds 100ms or stratum exceeds 5.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=ntp host=*\n| eval offset_ms = abs(offset)\n| stats latest(offset_ms) as drift_ms, latest(stratum) as stratum by host\n| where drift_ms > 100 OR stratum > 5\n| sort -drift_ms\n```\n\nUnderstanding this SPL\n\n**NTP Time Sync Drift (Linux)** — Clock drift causes authentication failures (Kerberos), log correlation issues, transaction ordering problems, and certificate validation failures.\n\nDocumented **Data sources**: `sourcetype=ntp` (scripted input via `ntpq -p` or `chronyc tracking`). **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: ntp. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=ntp. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **offset_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where drift_ms > 100 OR stratum > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (drift over time by host), Table of hosts with excessive drift.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Clock drift causes authentication failures (Kerberos), log correlation issues, transaction ordering problems, and certificate validation failures — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.13",
              "n": "Zombie Process Accumulation",
              "c": "medium",
              "f": "intermediate",
              "v": "Zombie processes indicate parent processes not properly reaping children. Accumulation can exhaust PID space and indicates application bugs.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=ps` (process listing from Splunk_TA_nix)",
              "q": "index=os sourcetype=ps host=*\n| search S=\"Z\"\n| stats count as zombie_count, values(COMMAND) as zombie_processes by host\n| where zombie_count > 5\n| sort -zombie_count\n| table host zombie_count zombie_processes",
              "m": "Enable `ps` scripted input (interval=300). The `ps` sourcetype includes a `S` (state) field where `Z` = zombie. This is more reliable than parsing the `top` header. Alert when zombie count exceeds 5. Investigate parent PIDs with `PPID` field to identify the root cause process.",
              "z": "Single value panel, Table of hosts with zombie counts.",
              "kfp": "A few defunct children are normal. Counts can spike if `ps` samples during a fork-heavy job. Focus on growth over time and parent processes that never reap.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=ps` (process listing from Splunk_TA_nix).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable `ps` scripted input (interval=300). The `ps` sourcetype includes a `S` (state) field where `Z` = zombie. This is more reliable than parsing the `top` header. Alert when zombie count exceeds 5. Investigate parent PIDs with `PPID` field to identify the root cause process.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=ps host=*\n| search S=\"Z\"\n| stats count as zombie_count, values(COMMAND) as zombie_processes by host\n| where zombie_count > 5\n| sort -zombie_count\n| table host zombie_count zombie_processes\n```\n\nUnderstanding this SPL\n\n**Zombie Process Accumulation** — Zombie processes indicate parent processes not properly reaping children. Accumulation can exhaust PID space and indicates application bugs.\n\nDocumented **Data sources**: `sourcetype=ps` (process listing from Splunk_TA_nix). **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: ps. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=ps. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where zombie_count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Zombie Process Accumulation**): table host zombie_count zombie_processes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value panel, Table of hosts with zombie counts.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Zombie processes indicate parent processes not properly reaping children — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.14",
              "n": "File Descriptor Exhaustion",
              "c": "high",
              "f": "intermediate",
              "v": "File descriptor exhaustion causes \"too many open files\" errors, breaking network connections, log writing, and inter-process communication. Common in Java apps and databases.",
              "t": "`Splunk_TA_nix`, custom scripted input",
              "d": "`sourcetype=openfiles` (custom) or `/proc/sys/fs/file-nr`",
              "q": "index=os sourcetype=openfiles host=*\n| eval usage_pct = round(open_fds / max_fds * 100, 1)\n| where usage_pct > 80\n| sort -usage_pct\n| table host process open_fds max_fds usage_pct",
              "m": "Create scripted input: `cat /proc/sys/fs/file-nr` (system-wide) or `ls /proc/<pid>/fd | wc -l` for per-process tracking. Alert at 80% of system or per-process limit.",
              "z": "Gauge (system-wide), Table per process, Line chart trend.",
              "kfp": "Short spikes happen during log rotation, build jobs, and database work. Compare to `ulimit -n` and the application's expected footprint.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, custom scripted input.\n• Ensure the following data sources are available: `sourcetype=openfiles` (custom) or `/proc/sys/fs/file-nr`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: `cat /proc/sys/fs/file-nr` (system-wide) or `ls /proc/<pid>/fd | wc -l` for per-process tracking. Alert at 80% of system or per-process limit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=openfiles host=*\n| eval usage_pct = round(open_fds / max_fds * 100, 1)\n| where usage_pct > 80\n| sort -usage_pct\n| table host process open_fds max_fds usage_pct\n```\n\nUnderstanding this SPL\n\n**File Descriptor Exhaustion** — File descriptor exhaustion causes \"too many open files\" errors, breaking network connections, log writing, and inter-process communication. Common in Java apps and databases.\n\nDocumented **Data sources**: `sourcetype=openfiles` (custom) or `/proc/sys/fs/file-nr`. **App/TA** (typical add-on context): `Splunk_TA_nix`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: openfiles. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=openfiles. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **usage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where usage_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **File Descriptor Exhaustion**): table host process open_fds max_fds usage_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (system-wide), Table per process, Line chart trend.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "File descriptor exhaustion causes \"too many open files\" errors, breaking network connections, log writing, and inter-process communication — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.15",
              "n": "Network Interface Errors",
              "c": "medium",
              "f": "intermediate",
              "v": "Interface errors (CRC, drops, overruns) indicate bad cables, failing NICs, or duplex mismatches. Catching early prevents intermittent application failures.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=interfaces`",
              "q": "index=os sourcetype=interfaces host=*\n| stats latest(RXerrors) as rx_errors, latest(TXerrors) as tx_errors, latest(Collisions) as collisions by host, Name\n| where rx_errors > 0 OR tx_errors > 0\n| sort -rx_errors",
              "m": "Enable `interfaces` scripted input (interval=300). Use `| delta` or `| streamstats` to track error rate deltas. Alert on increasing error counts.",
              "z": "Table (interface, error type, count), Line chart of error rate over time.",
              "kfp": "Non-zero error counters can appear after a cable wiggle, driver reload, or known firmware bug. Use deltas over time and physical checks, not a single non-zero read.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=interfaces`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable `interfaces` scripted input (interval=300). Use `| delta` or `| streamstats` to track error rate deltas. Alert on increasing error counts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=interfaces host=*\n| stats latest(RXerrors) as rx_errors, latest(TXerrors) as tx_errors, latest(Collisions) as collisions by host, Name\n| where rx_errors > 0 OR tx_errors > 0\n| sort -rx_errors\n```\n\nUnderstanding this SPL\n\n**Network Interface Errors** — Interface errors (CRC, drops, overruns) indicate bad cables, failing NICs, or duplex mismatches. Catching early prevents intermittent application failures.\n\nDocumented **Data sources**: `sourcetype=interfaces`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: interfaces. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=interfaces. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, Name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where rx_errors > 0 OR tx_errors > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.thruput) as thruput_bps\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m\n| where thruput_bps > 0\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network Interface Errors** — Interface errors (CRC, drops, overruns) indicate bad cables, failing NICs, or duplex mismatches. Catching early prevents intermittent application failures.\n\nDocumented **Data sources**: `sourcetype=interfaces`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where thruput_bps > 0` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (interface, error type, count), Line chart of error rate over time.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Interface errors (CRC, drops, overruns) indicate bad cables, failing NICs, or duplex mismatches — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.thruput) as thruput_bps\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m\n| where thruput_bps > 0",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.16",
              "n": "Package Vulnerability Tracking",
              "c": "medium",
              "f": "expert",
              "v": "Maintains visibility into known vulnerable packages across the fleet, supporting vulnerability management and compliance programs.",
              "t": "`Splunk_TA_nix`, custom scripted input",
              "d": "`sourcetype=package` (Splunk_TA_nix), vulnerability scanner output",
              "q": "index=os sourcetype=package host=*\n| stats values(VERSION) as version by host, NAME\n| join max=1 NAME [| inputlookup known_cves.csv]\n| table host NAME version cve_id severity\n| sort -severity",
              "m": "Enable `package` scripted input in Splunk_TA_nix (daily interval). Cross-reference with a CVE lookup table updated from vulnerability scan exports. Alternatively, ingest Qualys/Tenable scan results directly.",
              "z": "Table (host, package, CVE, severity), Stats panel of critical/high vuln counts, Bar chart by severity.",
              "kfp": "Lookup tables and scanner feeds can lag; backported security fixes sometimes keep the same version string. Triage with your vulnerability scanner as the source of truth for 'fixed' or 'not vulnerable'.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, custom scripted input.\n• Ensure the following data sources are available: `sourcetype=package` (Splunk_TA_nix), vulnerability scanner output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable `package` scripted input in Splunk_TA_nix (daily interval). Cross-reference with a CVE lookup table updated from vulnerability scan exports. Alternatively, ingest Qualys/Tenable scan results directly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=package host=*\n| stats values(VERSION) as version by host, NAME\n| join max=1 NAME [| inputlookup known_cves.csv]\n| table host NAME version cve_id severity\n| sort -severity\n```\n\nUnderstanding this SPL\n\n**Package Vulnerability Tracking** — Maintains visibility into known vulnerable packages across the fleet, supporting vulnerability management and compliance programs.\n\nDocumented **Data sources**: `sourcetype=package` (Splunk_TA_nix), vulnerability scanner output. **App/TA** (typical add-on context): `Splunk_TA_nix`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: package. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=package. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, NAME** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Package Vulnerability Tracking**): table host NAME version cve_id severity\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, package, CVE, severity), Stats panel of critical/high vuln counts, Bar chart by severity.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Maintains visibility into known vulnerable packages across the fleet, supporting vulnerability management and compliance programs — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.17",
              "n": "Service Availability Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects stopped services before users notice. Essential for any SLA-bound service where uptime matters.",
              "t": "`Splunk_TA_nix`, custom scripted input",
              "d": "Custom scripted input (`systemctl is-active <service>`)",
              "q": "index=os sourcetype=service_status host=*\n| stats latest(status) as status by host, service_name\n| where status != \"active\"\n| table host service_name status",
              "m": "Create a scripted input that checks key service statuses: `systemctl is-active httpd sshd mysqld | paste - - -`. Run every 60 seconds. Alert immediately when critical services stop. Maintain a lookup of expected services per host role.",
              "z": "Status indicator panels (green/red per service), Table of down services, Icon grid.",
              "kfp": "Socket-activated services may look 'inactive' when idle. Planned stops and blue/green deploys also look like outages. Add host-role context and change-window suppressions.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, custom scripted input.\n• Ensure the following data sources are available: Custom scripted input (`systemctl is-active <service>`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that checks key service statuses: `systemctl is-active httpd sshd mysqld | paste - - -`. Run every 60 seconds. Alert immediately when critical services stop. Maintain a lookup of expected services per host role.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=service_status host=*\n| stats latest(status) as status by host, service_name\n| where status != \"active\"\n| table host service_name status\n```\n\nUnderstanding this SPL\n\n**Service Availability Monitoring** — Detects stopped services before users notice. Essential for any SLA-bound service where uptime matters.\n\nDocumented **Data sources**: Custom scripted input (`systemctl is-active <service>`). **App/TA** (typical add-on context): `Splunk_TA_nix`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: service_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=service_status. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status != \"active\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Service Availability Monitoring**): table host service_name status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status indicator panels (green/red per service), Table of down services, Icon grid.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch stopped services before users notice — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.18",
              "n": "User Account Changes",
              "c": "high",
              "f": "beginner",
              "v": "Detects unauthorized user creation or modification. Key for security auditing and compliance (SOX, PCI, HIPAA).",
              "t": "`Splunk_TA_nix`, Syslog",
              "d": "`sourcetype=linux_secure`, `sourcetype=linux_audit`",
              "q": "index=os sourcetype=linux_secure (\"useradd\" OR \"userdel\" OR \"usermod\" OR \"groupadd\" OR \"passwd\")\n| rex \"by (?<admin_user>\\w+)\"\n| table _time host admin_user _raw\n| sort -_time",
              "m": "Forward auth logs. Enable auditd rules for user management commands. Alert on any user creation/deletion events. Consider correlating with change management tickets.",
              "z": "Events timeline, Table of account changes with who/what/when.",
              "kfp": "Help desk and directory automation create many legitimate account and group events. Route on unexpected actors, off-hours, or missing change numbers.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, Syslog.\n• Ensure the following data sources are available: `sourcetype=linux_secure`, `sourcetype=linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward auth logs. Enable auditd rules for user management commands. Alert on any user creation/deletion events. Consider correlating with change management tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_secure (\"useradd\" OR \"userdel\" OR \"usermod\" OR \"groupadd\" OR \"passwd\")\n| rex \"by (?<admin_user>\\w+)\"\n| table _time host admin_user _raw\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**User Account Changes** — Detects unauthorized user creation or modification. Key for security auditing and compliance (SOX, PCI, HIPAA).\n\nDocumented **Data sources**: `sourcetype=linux_secure`, `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix`, Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_secure. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **User Account Changes**): table _time host admin_user _raw\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**User Account Changes** — Detects unauthorized user creation or modification. Key for security auditing and compliance (SOX, PCI, HIPAA).\n\nDocumented **Data sources**: `sourcetype=linux_secure`, `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix`, Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Table of account changes with who/what/when.\n\n
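Audit rules (hedged example)\nStep 1 says to enable auditd rules for user management. A minimal, widely used starting set (watch the identity files; key names are illustrative):\n\n```bash\n# Flag writes and attribute changes to the identity files\nauditctl -w /etc/passwd -p wa -k identity\nauditctl -w /etc/group -p wa -k identity\nauditctl -w /etc/shadow -p wa -k identity\nauditctl -w /etc/sudoers -p wa -k identity\n```\n\nPersist the rules under /etc/audit/rules.d/ so they survive reboots, and confirm `sourcetype=linux_audit` events reach the `os` index.",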
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch unauthorized user creation or modification — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Configuration",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "linux",
                "syslog"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.19",
              "n": "Filesystem Read-Only Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "A filesystem remounting as read-only indicates disk failure, corruption, or mount issues. Applications will fail silently when they can't write.",
              "t": "`Splunk_TA_nix`, Syslog",
              "d": "`sourcetype=syslog`, `dmesg`",
              "q": "index=os sourcetype=syslog (\"Remounting filesystem read-only\" OR \"EXT4-fs error\" OR \"I/O error\" OR \"read-only file system\")\n| table _time host _raw\n| sort -_time",
              "m": "Forward syslog and dmesg. Create critical alert on read-only remount messages. Also add a scripted input: `mount | grep \"ro,\"` to periodically verify all expected read-write mounts.",
              "z": "Alert panel (critical), Events list, Host status table.",
              "kfp": "Filesystem remounts during checks or admin action look like read-only flips. Storage incidents and `dmesg` on the host tell you if it is protective rather than a false alarm.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, Syslog.\n• Ensure the following data sources are available: `sourcetype=syslog`, `dmesg`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward syslog and dmesg. Create critical alert on read-only remount messages. Also add a scripted input: `mount | grep \"ro,\"` to periodically verify all expected read-write mounts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog (\"Remounting filesystem read-only\" OR \"EXT4-fs error\" OR \"I/O error\" OR \"read-only file system\")\n| table _time host _raw\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Filesystem Read-Only Detection** — A filesystem remounting as read-only indicates disk failure, corruption, or mount issues. Applications will fail silently when they can't write.\n\nDocumented **Data sources**: `sourcetype=syslog`, `dmesg`. **App/TA** (typical add-on context): `Splunk_TA_nix`, Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Filesystem Read-Only Detection**): table _time host _raw\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert panel (critical), Events list, Host status table.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "A filesystem remounting as read-only indicates disk failure, corruption, or mount issues — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux",
                "syslog"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.20",
              "n": "Reboot Detection (Linux)",
              "c": "high",
              "f": "advanced",
              "v": "Unexpected reboots may indicate kernel panics, hardware failure, or unauthorized changes. Distinguishing planned vs. unplanned reboots is key.",
              "t": "`Splunk_TA_nix`, Syslog",
              "d": "`sourcetype=syslog`, `sourcetype=who` (wtmp)",
              "q": "index=os sourcetype=syslog (\"Initializing cgroup subsys\" OR \"Linux version\" OR \"Command line:\" OR \"systemd.*Started\" OR \"Booting Linux\")\n| stats latest(_time) as last_boot by host\n| eval hours_since_boot = round((now() - last_boot) / 3600, 1)\n| join max=1 host [| inputlookup maintenance_windows.csv | where status=\"approved\"]\n| sort hours_since_boot",
              "m": "Forward syslog. Detect boot-up log patterns. Cross-reference boot times with maintenance window lookups to flag unplanned reboots. Alert on any reboot outside approved windows.",
              "z": "Table (host, last boot, planned/unplanned), Timeline of reboots, Single value panel (unexpected reboots last 7d).",
              "kfp": "Planned reboots, patching, and hypervisor operations match surprise reboots in the data. Correlate with change windows and VM platform events.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, Syslog.\n• Ensure the following data sources are available: `sourcetype=syslog`, `sourcetype=who` (wtmp).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward syslog. Detect boot-up log patterns. Cross-reference boot times with maintenance window lookups to flag unplanned reboots. Alert on any reboot outside approved windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog (\"Initializing cgroup subsys\" OR \"Linux version\" OR \"Command line:\" OR \"systemd.*Started\" OR \"Booting Linux\")\n| stats latest(_time) as last_boot by host\n| eval hours_since_boot = round((now() - last_boot) / 3600, 1)\n| join max=1 host [| inputlookup maintenance_windows.csv | where status=\"approved\"]\n| sort hours_since_boot\n```\n\nUnderstanding this SPL\n\n**Reboot Detection (Linux)** — Unexpected reboots may indicate kernel panics, hardware failure, or unauthorized changes. Distinguishing planned vs. unplanned reboots is key.\n\nDocumented **Data sources**: `sourcetype=syslog`, `sourcetype=who` (wtmp). **App/TA** (typical add-on context): `Splunk_TA_nix`, Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since_boot** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, last boot, planned/unplanned), Timeline of reboots, Single value panel (unexpected reboots last 7d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Unexpected reboots may indicate kernel panics, hardware failure, or unauthorized changes — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux",
                "syslog"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "both",
              "_qs": 45,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.21",
              "n": "Kernel Module Loading Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Detects unauthorized kernel module insertions which can indicate rootkits or malware persistence.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit, auditctl syscall logs`",
              "q": "index=os sourcetype=linux_audit action=* syscall=init_module OR syscall=finit_module\n| stats count by host, name, exe\n| where count > 0",
              "m": "Configure auditctl rules to monitor syscalls for module loading (init_module, finit_module). Create a search that alerts on any unexpected module loads outside maintenance windows. Correlate against a whitelist of approved modules per host.",
              "z": "Table of module name, executable path, and loading user sorted by time; timechart of module load counts per host to spot anomalous spikes; single-value panel showing new (first-seen) modules in the last 24 hours for SOC triage.",
              "kfp": "Valid `modprobe` and out-of-tree drivers load new modules. Use an allowlist and change history before treating a load as malicious.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit, auditctl syscall logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure auditctl rules to monitor syscalls for module loading (init_module, finit_module). Create a search that alerts on any unexpected module loads outside maintenance windows. Correlate against a whitelist of approved modules per host.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit action=* syscall=init_module OR syscall=finit_module\n| stats count by host, name, exe\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Kernel Module Loading Tracking** — Detects unauthorized kernel module insertions which can indicate rootkits or malware persistence.\n\nDocumented **Data sources**: `sourcetype=linux_audit, auditctl syscall logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, name, exe** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of module name, executable path, and loading user sorted by time; timechart of module load counts per host to spot anomalous spikes; single-value panel showing new (first-seen) modules in the last 24 hours for SOC triage.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch unauthorized kernel module insertions which can indicate rootkits or malware persistence — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.22",
              "n": "Sysctl Parameter Changes Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Identifies modifications to kernel parameters that affect system behavior, security posture, or performance tuning.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit, /proc/sys monitoring`",
              "q": "index=os sourcetype=linux_audit action=modified path=/proc/sys/*\n| stats count by host, path, exe, auid\n| where count > 0",
              "m": "Set up auditctl rules to monitor changes to /proc/sys and /etc/sysctl.conf. Create alerts for unexpected sysctl modifications, especially those affecting network (ip_forward, tcp_syncookies) or IPC parameters.",
              "z": "Table, Timeline",
              "kfp": "Hardening, containers, and orchestration can change `sysctl` often. Distinguish policy drift from expected automation using host class and change tickets.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit, /proc/sys monitoring`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSet up auditctl rules to monitor changes to /proc/sys and /etc/sysctl.conf. Create alerts for unexpected sysctl modifications, especially those affecting network (ip_forward, tcp_syncookies) or IPC parameters.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit action=modified path=/proc/sys/*\n| stats count by host, path, exe, auid\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Sysctl Parameter Changes Detection** — Identifies modifications to kernel parameters that affect system behavior, security posture, or performance tuning.\n\nDocumented **Data sources**: `sourcetype=linux_audit, /proc/sys monitoring`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, path, exe, auid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot modifications to kernel parameters that affect system behavior, security posture, or performance tuning — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.23",
              "n": "Kernel Core Dump Generation",
              "c": "critical",
              "f": "beginner",
              "v": "Core dumps indicate process crashes at kernel level, enabling root cause analysis of system stability issues.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, /var/log/kern.log`",
              "q": "index=os sourcetype=syslog \"segfault at\" OR \"general protection fault\" OR \"double fault\"\n| stats count by host, message, user\n| eval severity=\"high\"",
              "m": "On the UF, enable `[monitor:///var/log/kern.log]` (Debian/Ubuntu) or `[monitor:///var/log/messages]` (RHEL/CentOS) with `sourcetype=syslog`; for journald-only distros use the `[journald://]` input. Alert on first occurrence of `segfault` or `core dumped` per host. Configure `systemd-coredump` to write dumps to `/var/lib/systemd/coredump/` and monitor the directory for new files to correlate dump metadata with syslog events.",
              "z": "Alert, Stats Table",
              "kfp": "Developers and support teams generate core dumps on purpose. Scope production hosts or critical services before paging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, /var/log/kern.log`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nOn the UF, enable `[monitor:///var/log/kern.log]` (Debian/Ubuntu) or `[monitor:///var/log/messages]` (RHEL/CentOS) with `sourcetype=syslog`; for journald-only distros use the `[journald://]` input. Alert on first occurrence of `segfault` or `core dumped` per host. Configure `systemd-coredump` to write dumps to `/var/lib/systemd/coredump/` and monitor the directory for new files to correlate dump metadata with syslog events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"segfault at\" OR \"general protection fault\" OR \"double fault\"\n| stats count by host, message, user\n| eval severity=\"high\"\n```\n\nUnderstanding this SPL\n\n**Kernel Core Dump Generation** — Core dumps indicate process crashes at kernel level, enabling root cause analysis of system stability issues.\n\nDocumented **Data sources**: `sourcetype=syslog, /var/log/kern.log`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, message, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Stats Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Core dumps indicate process crashes at kernel level, enabling root cause analysis of system stability issues — so you find out before users do when something is slowing down or breaking.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "1.1.24",
              "n": "Kernel Ring Buffer Error Rate",
              "c": "high",
              "f": "intermediate",
              "v": "Ring buffer errors signal kernel-level problems including driver issues, hardware failures, or module conflicts.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, dmesg output`",
              "q": "index=os sourcetype=syslog \"kernel:\" \"error\" OR \"warning\" OR \"BUG\"\n| timechart count by host\n| where count > 5",
              "m": "Create a scripted input that periodically parses dmesg output and forwards errors to Splunk. Build a dashboard that shows error trends over time. Set thresholds for alerting on sustained error rates.",
              "z": "Timechart, Line Chart",
              "kfp": "Error-rate spikes at boot, after driver updates, or during hardware work are common. Filter well-known benign strings for your server models.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, dmesg output`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that periodically parses dmesg output and forwards errors to Splunk. Build a dashboard that shows error trends over time. Set thresholds for alerting on sustained error rates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"kernel:\" \"error\" OR \"warning\" OR \"BUG\"\n| timechart count by host\n| where count > 5\n```\n\nUnderstanding this SPL\n\n**Kernel Ring Buffer Error Rate** — Ring buffer errors signal kernel-level problems including driver issues, hardware failures, or module conflicts.\n\nDocumented **Data sources**: `sourcetype=syslog, dmesg output`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Line Chart\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Ring buffer errors signal kernel-level problems including driver issues, hardware failures, or module conflicts — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.25",
              "n": "NUMA Imbalance Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "NUMA imbalance causes memory locality issues and performance degradation on multi-socket systems.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:numa_stats`",
              "q": "index=os sourcetype=custom:numa_stats\n| stats avg(numa_hit) as avg_hits, avg(numa_miss) as avg_misses by host\n| eval miss_pct=(avg_misses/(avg_hits+avg_misses))*100\n| where miss_pct > 10",
              "m": "Create a custom script that reads /proc/zoneinfo or numactl output and monitors NUMA hit/miss ratios. Alert when local NUMA hits drop below 90% on systems with multiple sockets, indicating memory is being accessed remotely.",
              "z": "Single Value, Gauge",
              "kfp": "Single-socket or pinned workloads can look 'imbalanced' by design. Compare to baseline and app placement, not a universal threshold only.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:numa_stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a custom script that reads /proc/zoneinfo or numactl output and monitors NUMA hit/miss ratios. Alert when local NUMA hits drop below 90% on systems with multiple sockets, indicating memory is being accessed remotely.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:numa_stats\n| stats avg(numa_hit) as avg_hits, avg(numa_miss) as avg_misses by host\n| eval miss_pct=(avg_misses/(avg_hits+avg_misses))*100\n| where miss_pct > 10\n```\n\nUnderstanding this SPL\n\n**NUMA Imbalance Detection** — NUMA imbalance causes memory locality issues and performance degradation on multi-socket systems.\n\nDocumented **Data sources**: `sourcetype=custom:numa_stats`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:numa_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:numa_stats. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **miss_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where miss_pct > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single Value, Gauge",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "NUMA imbalance causes memory locality issues and performance degradation on multi-socket systems — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.26",
              "n": "CPU Frequency Scaling Events",
              "c": "medium",
              "f": "beginner",
              "v": "Frequency scaling changes indicate thermal throttling or power management adjustments affecting workload performance.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit OR custom:cpufreq`",
              "q": "index=os sourcetype=linux_audit path=\"/sys/devices/system/cpu/cpu*/cpufreq/*\" action=modified\n| stats count by host, path\n| where count > 10",
              "m": "Monitor /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq for rapid changes. Create alerts when frequency scaling events occur frequently, indicating thermal or power issues.",
              "z": "Table, Timeline",
              "kfp": "Governor and P-state changes are normal on idle and power-capped systems. Escalate sustained misconfiguration or unexpected limits, not every transition.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit OR custom:cpufreq`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq for rapid changes. Create alerts when frequency scaling events occur frequently, indicating thermal or power issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit path=\"/sys/devices/system/cpu/cpu*/cpufreq/*\" action=modified\n| stats count by host, path\n| where count > 10\n```\n\nUnderstanding this SPL\n\n**CPU Frequency Scaling Events** — Frequency scaling changes indicate thermal throttling or power management adjustments affecting workload performance.\n\nDocumented **Data sources**: `sourcetype=linux_audit OR custom:cpufreq`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, path** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Frequency scaling changes indicate thermal throttling or power management adjustments affecting workload performance — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.27",
              "n": "CPU Steal Time Elevation (Virtual Machines)",
              "c": "high",
              "f": "intermediate",
              "v": "High steal time indicates VM is contending with host resources, affecting application performance.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=vmstat`",
              "q": "index=os sourcetype=vmstat host=*\n| stats avg(st) as avg_steal_time by host\n| where avg_steal_time > 5",
              "m": "Use Splunk_TA_nix vmstat input which automatically extracts steal time percentage. Create alerts for hosts where average steal time exceeds 5% over a 10-minute window, indicating overcommitment on hypervisor.",
              "z": "Timechart, Gauge",
              "kfp": "Steal time bounces with short host contention, not only chronic overcommit. Correlate with hypervisor CPU headroom and neighbor VMs before rightsizing.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=vmstat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk_TA_nix vmstat input which automatically extracts steal time percentage. Create alerts for hosts where average steal time exceeds 5% over a 10-minute window, indicating overcommitment on hypervisor.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=vmstat host=*\n| stats avg(st) as avg_steal_time by host\n| where avg_steal_time > 5\n```\n\nUnderstanding this SPL\n\n**CPU Steal Time Elevation (Virtual Machines)** — High steal time indicates VM is contending with host resources, affecting application performance.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: vmstat. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=vmstat. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_steal_time > 5` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CPU Steal Time Elevation (Virtual Machines)** — High steal time indicates VM is contending with host resources, affecting application performance.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where mem_pct > 95 OR swap_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Gauge",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We notice when a virtual machine keeps losing CPU time to other work on the same physical host — a sign the host is crowded or overbooked before users only see a slow app.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.28",
              "n": "IRQ Imbalance Across CPU Cores",
              "c": "medium",
              "f": "intermediate",
              "v": "Imbalanced IRQ handling causes uneven CPU utilization and can bottleneck network or storage throughput.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:irq_stats, /proc/interrupts`",
              "q": "index=os sourcetype=custom:irq_stats\n| stats avg(count) as avg_irq, stdev(count) as stddev_irq by host, irq_type\n| eval cv=stddev_irq/avg_irq\n| where cv > 0.5",
              "m": "Create a scripted input that parses /proc/interrupts and calculates the coefficient of variation (stdev/mean) of IRQ distribution across CPUs. Alert when imbalance is detected; use irqbalance daemon or kernel parameters to correct.",
              "z": "Heatmap, Table",
              "kfp": "Some apps pin IRQ load to a core by design. Prefer baseline or group-specific thresholds, not a single static limit.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:irq_stats, /proc/interrupts`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that parses /proc/interrupts and calculates the coefficient of variation (stdev/mean) of IRQ distribution across CPUs. Alert when imbalance is detected; use irqbalance daemon or kernel parameters to correct.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:irq_stats\n| stats avg(count) as avg_irq, stdev(count) as stddev_irq by host, irq_type\n| eval cv=stddev_irq/avg_irq\n| where cv > 0.5\n```\n\nUnderstanding this SPL\n\n**IRQ Imbalance Across CPU Cores** — Imbalanced IRQ handling causes uneven CPU utilization and can bottleneck network or storage throughput.\n\nDocumented **Data sources**: `sourcetype=custom:irq_stats, /proc/interrupts`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:irq_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:irq_stats. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, irq_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cv** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cv > 0.5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap, Table\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Imbalanced IRQ handling causes uneven CPU utilization and can bottleneck network or storage throughput — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.29",
              "n": "Context Switch Rate Anomaly Detection (Linux)",
              "c": "medium",
              "f": "intermediate",
              "v": "Excessive context switching reduces CPU cache effectiveness and indicates scheduler overload or contention.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=vmstat`",
              "q": "index=os sourcetype=vmstat host=*\n| bin _time span=5m\n| stats avg(cs) as avg_ctx_switch by host, _time\n| streamstats window=100 avg(avg_ctx_switch) as baseline stdev(avg_ctx_switch) as stddev by host\n| eval upper_bound=baseline+(2*stddev)\n| where avg_ctx_switch > upper_bound",
              "m": "Monitor vmstat context switch counter (cs field). Use baseline and anomaly detection to alert on sustained context switch rates that exceed 2 standard deviations above normal, indicating scheduler pressure.",
              "z": "Timechart, Anomaly Detector",
              "kfp": "Deploys, tracing, and load tests spike context switches. Require enough samples in the stats window; tune by role.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=vmstat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor vmstat context switch counter (cs field). Use baseline and anomaly detection to alert on sustained context switch rates that exceed 2 standard deviations above normal, indicating scheduler pressure.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=vmstat host=*\n| bin _time span=5m\n| stats avg(cs) as avg_ctx_switch by host, _time\n| streamstats window=100 avg(avg_ctx_switch) as baseline stdev(avg_ctx_switch) as stddev by host\n| eval upper_bound=baseline+(2*stddev)\n| where avg_ctx_switch > upper_bound\n```\n\nUnderstanding this SPL\n\n**Context Switch Rate Anomaly Detection (Linux)** — Excessive context switching reduces CPU cache effectiveness and indicates scheduler overload or contention.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: vmstat. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=vmstat. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `streamstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **upper_bound** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_ctx_switch > upper_bound` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Context Switch Rate Anomaly Detection (Linux)** — Excessive context switching reduces CPU cache effectiveness and indicates scheduler overload or contention.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where mem_pct > 95 OR swap_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Anomaly Detector",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Excessive context switching reduces CPU cache effectiveness and indicates scheduler overload or contention — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.30",
              "n": "Scheduler Latency and Run Queue Depth",
              "c": "high",
              "f": "intermediate",
              "v": "High run queue depth with elevated scheduling latency causes visible application performance degradation.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=vmstat, top, custom:sched_latency`",
              "q": "index=os sourcetype=vmstat host=*\n| eval runq_to_cpu=r/procs_cpu\n| stats avg(runq_to_cpu) as avg_ratio by host\n| where avg_ratio > 2",
              "m": "Monitor run queue (r) field from vmstat and correlate with process count. When run queue exceeds 2x CPU count, alert on scheduler saturation. Create SPL to identify top CPU-consuming processes during high latency periods.",
              "z": "Timechart, Multi-series Line Chart",
              "kfp": "Boot, batch peaks, and wrong `procs_cpu` in your eval can inflate the ratio. Fix inventory fields before changing scheduler alerts.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=vmstat, top, custom:sched_latency`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor run queue (r) field from vmstat and correlate with process count. When run queue exceeds 2x CPU count, alert on scheduler saturation. Create SPL to identify top CPU-consuming processes during high latency periods.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=vmstat host=*\n| eval runq_to_cpu=r/procs_cpu\n| stats avg(runq_to_cpu) as avg_ratio by host\n| where avg_ratio > 2\n```\n\nUnderstanding this SPL\n\n**Scheduler Latency and Run Queue Depth** — High run queue depth with elevated scheduling latency causes visible application performance degradation.\n\nDocumented **Data sources**: `sourcetype=vmstat, top, custom:sched_latency`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: vmstat. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=vmstat. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **runq_to_cpu** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_ratio > 2` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Scheduler Latency and Run Queue Depth** — High run queue depth with elevated scheduling latency causes visible application performance degradation.\n\nDocumented **Data sources**: `sourcetype=vmstat, top, custom:sched_latency`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where mem_pct > 95 OR swap_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Multi-series Line Chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "High run queue depth with elevated scheduling latency causes visible application performance degradation — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.31",
              "n": "Hugepage Allocation and Usage",
              "c": "medium",
              "f": "intermediate",
              "v": "Hugepage contention or allocation failures impact database and large memory workload performance.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:hugepages, /proc/meminfo`",
              "q": "index=os sourcetype=custom:hugepages host=*\n| stats avg(HugePages_Total) as total, avg(HugePages_Free) as free by host\n| eval usage_pct=(total-free)/total*100\n| where usage_pct > 90",
              "m": "Create a scripted input parsing /proc/meminfo for hugepage metrics. Track HugePages_Total, HugePages_Free, HugePages_Rsvd, and HugePages_Surp. Alert when free hugepages fall below 10% or when failed allocations occur.",
              "z": "Gauge, Single Value",
              "kfp": "Databases and JVMs use huge pages on purpose. Compare to the expected mode for that host's role before reacting.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:hugepages, /proc/meminfo`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input parsing /proc/meminfo for hugepage metrics. Track HugePages_Total, HugePages_Free, HugePages_Rsvd, and HugePages_Surp. Alert when free hugepages fall below 10% or when failed allocations occur.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:hugepages host=*\n| stats avg(HugePages_Total) as total, avg(HugePages_Free) as free by host\n| eval usage_pct=(total-free)/total*100\n| where usage_pct > 90\n```\n\nUnderstanding this SPL\n\n**Hugepage Allocation and Usage** — Hugepage contention or allocation failures impact database and large memory workload performance.\n\nDocumented **Data sources**: `sourcetype=custom:hugepages, /proc/meminfo`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:hugepages. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:hugepages. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **usage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where usage_pct > 90` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge, Single Value\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Hugepage contention or allocation failures impact database and large memory workload performance — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.32",
              "n": "Transparent Hugepage Compaction Stalls",
              "c": "high",
              "f": "intermediate",
              "v": "THP compaction stalls indicate severe memory fragmentation affecting latency-sensitive workloads.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, /sys/kernel/debug/thp_collapse_alloc`",
              "q": "index=os sourcetype=syslog \"compact_stall\" OR \"collapses_alloc_failed\"\n| stats count by host\n| where count > 0",
              "m": "Monitor kernel logs for THP compaction failures. Enable debug logging on /sys/kernel/debug/thp* paths via custom input. Alert when compaction stalls occur during peak application hours, indicating need to tune THP settings.",
              "z": "Alert, Stats Table",
              "kfp": "THP and compaction show up at boot and after memory tuning. Treat repeated stalls in production, not one-off dmesg lines, as the signal.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, /sys/kernel/debug/thp_collapse_alloc`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor kernel logs for THP compaction failures. Enable debug logging on /sys/kernel/debug/thp* paths via custom input. Alert when compaction stalls occur during peak application hours, indicating need to tune THP settings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"compact_stall\" OR \"collapses_alloc_failed\"\n| stats count by host\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Transparent Hugepage Compaction Stalls** — THP compaction stalls indicate severe memory fragmentation affecting latency-sensitive workloads.\n\nDocumented **Data sources**: `sourcetype=syslog, /sys/kernel/debug/thp_collapse_alloc`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Stats Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "THP compaction stalls indicate severe memory fragmentation affecting latency-sensitive workloads — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.33",
              "n": "Inode Exhaustion Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Inode exhaustion causes file creation failures even when disk space remains available, stopping applications.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=df`",
              "q": "index=os sourcetype=df host=*\n| stats latest(inode_usage) as inode_pct by host, mount_point\n| where inode_pct > 85",
              "m": "Use Splunk_TA_nix df input which includes inode usage percentages. Create alerts for filesystems exceeding 85% inode usage. Add search to identify which directories consuming excessive inodes to guide cleanup.",
              "z": "Table, Gauge",
              "kfp": "Many small files can use most inodes while block space is still free. Set thresholds with `df -i` baselines and directory clean-up playbooks, not only block-usage SLOs.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=df`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk_TA_nix df input which includes inode usage percentages. Create alerts for filesystems exceeding 85% inode usage. Add search to identify which directories consuming excessive inodes to guide cleanup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=df host=*\n| stats latest(inode_usage) as inode_pct by host, mount_point\n| where inode_pct > 85\n```\n\nUnderstanding this SPL\n\n**Inode Exhaustion Detection** — Inode exhaustion causes file creation failures even when disk space remains available, stopping applications.\n\nDocumented **Data sources**: `sourcetype=df`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: df. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=df. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, mount_point** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where inode_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.storage_used_percent) as disk_pct\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=1h\n| where disk_pct > 85\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Inode Exhaustion Detection** — Inode exhaustion causes file creation failures even when disk space remains available, stopping applications.\n\nDocumented **Data sources**: `sourcetype=df`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where disk_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Gauge",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Inode exhaustion causes file creation failures even when disk space remains available, stopping applications — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.storage_used_percent) as disk_pct\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=1h\n| where disk_pct > 85",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.34",
              "n": "RAID Array Degradation Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Degraded RAID arrays mean data loss risk and potential performance impact during rebuild.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:raid, /proc/mdstat`",
              "q": "index=os sourcetype=custom:raid host=*\n| regex _raw=\"^\\[.*_.*\\]\"\n| stats count by host, device, status\n| where status=degraded",
              "m": "Create a scripted input that parses /proc/mdstat and reports RAID device status. Alert immediately on any degradation detected. Track rebuild progress and time to completion.",
              "z": "Alert, Table",
              "kfp": "Rebuilds, copyback, and slow parity operations can show 'degraded' until the array finishes. Check storage events for auto-recovery before calling a Sev-1.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:raid, /proc/mdstat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that parses /proc/mdstat and reports RAID device status. Alert immediately on any degradation detected. Track rebuild progress and time to completion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:raid host=*\n| regex _raw=\"^\\[.*_.*\\]\"\n| stats count by host, device, status\n| where status=degraded\n```\n\nUnderstanding this SPL\n\n**RAID Array Degradation Detection** — Degraded RAID arrays mean data loss risk and potential performance impact during rebuild.\n\nDocumented **Data sources**: `sourcetype=custom:raid, /proc/mdstat`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:raid. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:raid. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters rows matching a pattern with `regex`.\n• `stats` rolls up events into metrics; results are split **by host, device, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status=degraded` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Degraded RAID arrays mean data loss risk and potential performance impact during rebuild — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.35",
              "n": "LVM Thin Pool Capacity Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Thin pool exhaustion causes I/O errors on all logical volumes in the pool, causing application failures.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:lvm_thin, lvs output`",
              "q": "index=os sourcetype=custom:lvm_thin host=*\n| stats latest(data_percent) as pool_usage by host, pool_name\n| where pool_usage > 80",
              "m": "Create a scripted input running 'lvs' to extract thin pool metrics. Monitor Data% and Metadata% separately. Alert at 80% capacity and again at 95%, with escalation at 99%.",
              "z": "Gauge, Single Value",
              "kfp": "Snapshots, clones, and VM images swing thin-pool use. A jump can be expected provisioning, not a leak.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:lvm_thin, lvs output`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input running 'lvs' to extract thin pool metrics. Monitor Data% and Metadata% separately. Alert at 80% capacity and again at 95%, with escalation at 99%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:lvm_thin host=*\n| stats latest(data_percent) as pool_usage by host, pool_name\n| where pool_usage > 80\n```\n\nUnderstanding this SPL\n\n**LVM Thin Pool Capacity Monitoring** — Thin pool exhaustion causes I/O errors on all logical volumes in the pool, causing application failures.\n\nDocumented **Data sources**: `sourcetype=custom:lvm_thin, lvs output`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:lvm_thin. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:lvm_thin. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, pool_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where pool_usage > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge, Single Value\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Thin pool exhaustion causes I/O errors on all logical volumes in the pool, causing application failures — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.36",
              "n": "Multipath I/O Failover Events",
              "c": "critical",
              "f": "beginner",
              "v": "Multipath failovers indicate storage path degradation requiring immediate investigation to prevent I/O loss.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, multipathd logs`",
              "q": "index=os sourcetype=syslog \"multipathd\" \"failover\" OR \"path failed\" OR \"path recovered\"\n| stats count by host, device\n| timechart count by host",
              "m": "Configure multipathd logging to syslog. Create alerts on any failover event. Include search to show path status before/after failover to help storage team troubleshoot.",
              "z": "Timechart, Alert",
              "kfp": "Planned path failovers, cable work, and driver resets can trigger multipath messages without user impact. Require loss of all paths or a sustained state before top severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, multipathd logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure multipathd logging to syslog. Create alerts on any failover event. Include search to show path status before/after failover to help storage team troubleshoot.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"multipathd\" \"failover\" OR \"path failed\" OR \"path recovered\"\n| stats count by host, device\n| timechart count by host\n```\n\nUnderstanding this SPL\n\n**Multipath I/O Failover Events** — Multipath failovers indicate storage path degradation requiring immediate investigation to prevent I/O loss.\n\nDocumented **Data sources**: `sourcetype=syslog, multipathd logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, device** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time with a separate series **by host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Multipath failovers indicate storage path degradation requiring immediate investigation to prevent I/O loss — so you find out before users do when something is slowing down or breaking.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "1.1.37",
              "n": "NFS Mount Stale Handle Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Stale NFS handles cause application hangs and I/O failures that severely impact users.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, kernel NFS logs`",
              "q": "index=os sourcetype=syslog \"nfs\" (\"stale\" OR \"stale NFS file handle\" OR \"Stale NFS\")\n| stats count by host, nfs_server\n| where count > 0",
              "m": "Monitor kernel logs and NFS client logs for stale handle errors. Create immediate alerts with escalation to storage team. Add search to identify affected processes and suggest remount or NFS server recovery.",
              "z": "Alert, Table",
              "kfp": "NFS blips during network or storage work look like stale handles in logs. Confirm app-level I/O errors, not a single message.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, kernel NFS logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor kernel logs and NFS client logs for stale handle errors. Create immediate alerts with escalation to storage team. Add search to identify affected processes and suggest remount or NFS server recovery.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"nfs\" (\"stale\" OR \"stale NFS file handle\" OR \"Stale NFS\")\n| stats count by host, nfs_server\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**NFS Mount Stale Handle Detection** — Stale NFS handles cause application hangs and I/O failures that severely impact users.\n\nDocumented **Data sources**: `sourcetype=syslog, kernel NFS logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, nfs_server** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Stale NFS handles cause application hangs and I/O failures that severely impact users — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.38",
              "n": "Filesystem Journal Errors",
              "c": "high",
              "f": "beginner",
              "v": "Filesystem journal errors indicate potential corruption risk and can lead to data loss or recovery timeouts.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, kernel logs`",
              "q": "index=os sourcetype=syslog (\"ext4\" OR \"xfs\" OR \"jbd2\") (\"error\" OR \"EIO\" OR \"metadata error\")\n| stats count by host, filesystem_type\n| where count > 0",
              "m": "Configure kernel logging to capture filesystem journal messages. Create alerts for any journal errors. Include fsck recommendations in alert description and track error rates over time.",
              "z": "Alert, Timechart",
              "kfp": "Journals recover from many transient issues; some lines are normal after a crash or reset. Pair with `fsck` results and app-visible errors.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, kernel logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure kernel logging to capture filesystem journal messages. Create alerts for any journal errors. Include fsck recommendations in alert description and track error rates over time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog (\"ext4\" OR \"xfs\" OR \"jbd2\") (\"error\" OR \"EIO\" OR \"metadata error\")\n| stats count by host, filesystem_type\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Filesystem Journal Errors** — Filesystem journal errors indicate potential corruption risk and can lead to data loss or recovery timeouts.\n\nDocumented **Data sources**: `sourcetype=syslog, kernel logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, filesystem_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Timechart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Filesystem journal errors indicate potential corruption risk and can lead to data loss or recovery timeouts — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.39",
              "n": "Ext4 Filesystem Errors and Recovery",
              "c": "high",
              "f": "intermediate",
              "v": "Ext4 errors may indicate filesystem corruption or hardware issues requiring immediate diagnostic action.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, dmesg`",
              "q": "index=os sourcetype=syslog host=* (\"ext4\" AND (\"error\" OR \"abort\" OR \"FS-error\"))\n| stats count by host, mount_point\n| eval severity=\"high\"",
              "m": "Monitor for ext4-specific error messages in kernel logs. Create a baseline of expected errors and alert on increases. Correlate with disk smart data and I/O error rates to identify hardware vs. filesystem issues.",
              "z": "Table, Timechart",
              "kfp": "ext4 can log during self-repair; boot-time checks add noise. Distinguish repeated data-loss indicators from a one-time recovery run.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, dmesg`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor for ext4-specific error messages in kernel logs. Create a baseline of expected errors and alert on increases. Correlate with disk smart data and I/O error rates to identify hardware vs. filesystem issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog host=* (\"ext4\" AND (\"error\" OR \"abort\" OR \"FS-error\"))\n| stats count by host, mount_point\n| eval severity=\"high\"\n```\n\nUnderstanding this SPL\n\n**Ext4 Filesystem Errors and Recovery** — Ext4 errors may indicate filesystem corruption or hardware issues requiring immediate diagnostic action.\n\nDocumented **Data sources**: `sourcetype=syslog, dmesg`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, mount_point** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timechart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Ext4 errors may indicate filesystem corruption or hardware issues requiring immediate diagnostic action — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.40",
              "n": "XFS Filesystem Errors and Recovery",
              "c": "high",
              "f": "beginner",
              "v": "XFS errors indicate potential corruption in high-performance storage systems commonly used in enterprise environments.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, kernel logs`",
              "q": "index=os sourcetype=syslog host=* (\"XFS\" OR \"xfs_*\" AND (\"error\" OR \"IO Error\" OR \"shutdown\"))\n| stats count by host, mount_point\n| where count > 0",
              "m": "Monitor XFS-specific kernel messages. Create alerts for any XFS I/O errors or shutdown messages. Include xfs_repair suggestions and track patterns across storage arrays.",
              "z": "Alert, Table",
              "kfp": "XFS may log retryable conditions under load or a short storage stall. Escalate on persistent and application-visible file errors.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, kernel logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor XFS-specific kernel messages. Create alerts for any XFS I/O errors or shutdown messages. Include xfs_repair suggestions and track patterns across storage arrays.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog host=* (\"XFS\" OR \"xfs_*\" AND (\"error\" OR \"IO Error\" OR \"shutdown\"))\n| stats count by host, mount_point\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**XFS Filesystem Errors and Recovery** — XFS errors indicate potential corruption in high-performance storage systems commonly used in enterprise environments.\n\nDocumented **Data sources**: `sourcetype=syslog, kernel logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, mount_point** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "XFS errors indicate potential corruption in high-performance storage systems commonly used in enterprise environments — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.41",
              "n": "Disk SMART Health Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "SMART errors predict disk failure, enabling proactive replacement before data loss occurs.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:smartctl, smartmontools output`",
              "q": "index=os sourcetype=custom:smartctl host=*\n| stats latest(smart_health) as health, latest(reallocated_sectors) as realloc by host, device\n| where health!=\"PASSED\" OR realloc > 100",
              "m": "Create a scripted input running 'smartctl' on all disks and parsing output. Monitor SMART attributes including reallocated sectors, pending sectors, and CRC errors. Alert on any non-PASSED status immediately.",
              "z": "Alert, Table",
              "kfp": "A small number of reallocated or pending sectors is often advisory; vendors define when to replace. Trend the metric, do not act on a single read.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:smartctl, smartmontools output`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input running 'smartctl' on all disks and parsing output. Monitor SMART attributes including reallocated sectors, pending sectors, and CRC errors. Alert on any non-PASSED status immediately.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:smartctl host=*\n| stats latest(smart_health) as health, latest(reallocated_sectors) as realloc by host, device\n| where health!=\"PASSED\" OR realloc > 100\n```\n\nUnderstanding this SPL\n\n**Disk SMART Health Monitoring** — SMART errors predict disk failure, enabling proactive replacement before data loss occurs.\n\nDocumented **Data sources**: `sourcetype=custom:smartctl, smartmontools output`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:smartctl. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:smartctl. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, device** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where health!=\"PASSED\" OR realloc > 100` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "SMART errors predict disk failure, enabling proactive replacement before data loss occurs — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.42",
              "n": "SSD Wear Leveling and Health",
              "c": "high",
              "f": "intermediate",
              "v": "SSD wear metrics indicate remaining lifespan, enabling proactive replacement planning.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:nvme, nvme-cli output`",
              "q": "index=os sourcetype=custom:nvme host=*\n| stats latest(percentage_used) as wear_pct, latest(available_spare) as spare by host, device\n| where wear_pct > 80 OR spare < 5",
              "m": "Create a scripted input running 'nvme smart-log' for NVMe drives. Track percentage_used, available_spare, and media_errors. Alert when wear exceeds 80% or spare drops below 5%.",
              "z": "Gauge, Single Value",
              "kfp": "Wear and health percentages move slowly; firmware can change how values are reported. Use the vendor's on-box tool to confirm before replacement.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:nvme, nvme-cli output`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input running 'nvme smart-log' for NVMe drives. Track percentage_used, available_spare, and media_errors. Alert when wear exceeds 80% or spare drops below 5%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:nvme host=*\n| stats latest(percentage_used) as wear_pct, latest(available_spare) as spare by host, device\n| where wear_pct > 80 OR spare < 5\n```\n\nUnderstanding this SPL\n\n**SSD Wear Leveling and Health** — SSD wear metrics indicate remaining lifespan, enabling proactive replacement planning.\n\nDocumented **Data sources**: `sourcetype=custom:nvme, nvme-cli output`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:nvme. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:nvme. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, device** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where wear_pct > 80 OR spare < 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge, Single Value\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "SSD wear metrics indicate remaining lifespan, enabling proactive replacement planning — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.43",
              "n": "Fstrim and TRIM Command Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "Fstrim failures indicate potential SSD performance degradation from lack of proper space reclamation.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, custom:fstrim_status`",
              "q": "index=os sourcetype=custom:fstrim_status host=*\n| stats latest(status) as trim_status, latest(bytes_discarded) as discarded by host, mount_point\n| where trim_status!=\"success\"",
              "m": "Create a cron job that runs fstrim -v and logs output to syslog. Create alerts for any failures. Track bytes discarded over time to ensure TRIM operations are completing successfully.",
              "z": "Table, Timechart",
              "kfp": "Regular `fstrim` and vendor maintenance can cluster related log lines. Distinguish scheduled from unexpected manual TRIMs.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, custom:fstrim_status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a cron job that runs fstrim -v and logs output to syslog. Create alerts for any failures. Track bytes discarded over time to ensure TRIM operations are completing successfully.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:fstrim_status host=*\n| stats latest(status) as trim_status, latest(bytes_discarded) as discarded by host, mount_point\n| where trim_status!=\"success\"\n```\n\nUnderstanding this SPL\n\n**Fstrim and TRIM Command Monitoring** — Fstrim failures indicate potential SSD performance degradation from lack of proper space reclamation.\n\nDocumented **Data sources**: `sourcetype=syslog, custom:fstrim_status`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:fstrim_status. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:fstrim_status. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, mount_point** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where trim_status!=\"success\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timechart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Fstrim failures indicate potential SSD performance degradation from lack of proper space reclamation — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.44",
              "n": "Memory Leak Detection Per Process",
              "c": "high",
              "f": "intermediate",
              "v": "Process memory leaks cause gradual performance degradation and eventual OOM situations.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=top, custom:proc_rss_tracking`",
              "q": "index=os sourcetype=top host=*\n| stats latest(rss) as latest_rss, earliest(rss) as earliest_rss by host, process\n| eval rss_growth=(latest_rss-earliest_rss)/earliest_rss*100\n| where rss_growth > 20\n| stats latest(latest_rss), max(rss_growth) by process, host",
              "m": "Use Splunk_TA_nix top input to track RSS memory per process. Calculate linear regression or growth trends over 1-week windows. Alert on processes with sustained >20% RSS growth in a week, indicating memory leaks.",
              "z": "Table, Scatter Chart",
              "kfp": "Intentional cache growth, heap growth after deploy, and large batch jobs add RSS. Require a sustained upward trend and baseline, not one interval.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=top, custom:proc_rss_tracking`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk_TA_nix top input to track RSS memory per process. Calculate linear regression or growth trends over 1-week windows. Alert on processes with sustained >20% RSS growth in a week, indicating memory leaks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=top host=*\n| stats latest(rss) as latest_rss, earliest(rss) as earliest_rss by host, process\n| eval rss_growth=(latest_rss-earliest_rss)/earliest_rss*100\n| where rss_growth > 20\n| stats latest(latest_rss), max(rss_growth) by process, host\n```\n\nUnderstanding this SPL\n\n**Memory Leak Detection Per Process** — Process memory leaks cause gradual performance degradation and eventual OOM situations.\n\nDocumented **Data sources**: `sourcetype=top, custom:proc_rss_tracking`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: top. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=top. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, process** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **rss_growth** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where rss_growth > 20` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by process, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Scatter Chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Process memory leaks cause gradual performance degradation and eventual OOM situations — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.45",
              "n": "Swap Thrashing Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Swap thrashing causes severe performance degradation and can make systems unresponsive.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=vmstat`",
              "q": "index=os sourcetype=vmstat host=*\n| where si > 100 AND so > 100\n| stats count by host\n| eval swap_thrash=\"YES\"\n| where count > 10",
              "m": "Monitor vmstat si (swap in) and so (swap out) rates. Alert when both exceed 100 pages/sec simultaneously for 10+ consecutive samples. Include memory pressure metrics and process identification in alert context.",
              "z": "Alert, Timechart",
              "kfp": "A short burst of swap activity can start a large process or follow a one-time memory spike. Sustain the pattern or add memory-usage and app context before a critical page.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=vmstat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor vmstat si (swap in) and so (swap out) rates. Alert when both exceed 100 pages/sec simultaneously for 10+ consecutive samples. Include memory pressure metrics and process identification in alert context.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=vmstat host=*\n| where si > 100 AND so > 100\n| stats count by host\n| eval swap_thrash=\"YES\"\n| where count > 10\n```\n\nUnderstanding this SPL\n\n**Swap Thrashing Detection** — Swap thrashing causes severe performance degradation and can make systems unresponsive.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: vmstat. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=vmstat. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where si > 100 AND so > 100` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **swap_thrash** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Swap Thrashing Detection** — Swap thrashing causes severe performance degradation and can make systems unresponsive.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where mem_pct > 95 OR swap_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Timechart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Swap thrashing causes severe performance degradation and can make systems unresponsive — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.46",
              "n": "Slab Cache Growth Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Unbounded slab cache growth consumes memory that could be used for page cache or application memory.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:slabinfo, /proc/slabinfo`",
              "q": "index=os sourcetype=syslog \"slab\"\n| bin _time span=1d\n| stats sum(slab_size) as total_slab by host, _time\n| streamstats window=30 avg(total_slab) as baseline stdev(total_slab) as stddev by host\n| eval upper=baseline+(2*stddev)\n| where total_slab > upper",
              "m": "Create a scripted input that parses /proc/slabinfo monthly and tracks total slab size. Use anomaly detection to alert when slab grows beyond 2 standard deviations, indicating slab leak.",
              "z": "Timechart, Anomaly Chart",
              "kfp": "A one-off spike right after a major package upgrade, driver reload, or cache-heavy batch job that legitimately pin more slab pages; a logging change that alters or drops `slab_size` and makes the sum jump without real growth.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:slabinfo, /proc/slabinfo`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that parses /proc/slabinfo monthly and tracks total slab size. Use anomaly detection to alert when slab grows beyond 2 standard deviations, indicating slab leak.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"slab\"\n| bin _time span=1d\n| stats sum(slab_size) as total_slab by host, _time\n| streamstats window=30 avg(total_slab) as baseline stdev(total_slab) as stddev by host\n| eval upper=baseline+(2*stddev)\n| where total_slab > upper\n```\n\nUnderstanding this SPL\n\n**Slab Cache Growth Monitoring** — Unbounded slab cache growth consumes memory that could be used for page cache or application memory.\n\nDocumented **Data sources**: `sourcetype=custom:slabinfo, /proc/slabinfo`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `streamstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **upper** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where total_slab > upper` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Anomaly Chart\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch whether the system’s special fast memory pools grow in an unusual way over time, so we can step in if the machine is quietly eating memory in the background.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.47",
              "n": "Page Cache Pressure and Reclaim Activity",
              "c": "medium",
              "f": "intermediate",
              "v": "High page cache reclaim activity indicates memory pressure affecting application performance.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:meminfo_delta, /proc/vmstat`",
              "q": "index=os sourcetype=custom:meminfo_delta host=*\n| stats avg(pgscan_direct) as scan_avg, avg(pgsteal_direct) as steal_avg by host\n| eval steal_ratio=steal_avg/scan_avg\n| where steal_ratio > 0.7",
              "m": "Create a scripted input that parses /proc/vmstat delta between samples. Track pgscan_direct and pgsteal_direct rates. Alert when steal ratio exceeds 0.7, indicating aggressive memory reclaim.",
              "z": "Timechart, Single Value",
              "kfp": "Short bursts during backups or large file copies that intentionally churn the page cache; missing or zero `pgscan_direct` in a sample, which can inflate ratios—tune the search to require `scan_avg>0` if needed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:meminfo_delta, /proc/vmstat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that parses /proc/vmstat delta between samples. Track pgscan_direct and pgsteal_direct rates. Alert when steal ratio exceeds 0.7, indicating aggressive memory reclaim.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:meminfo_delta host=*\n| stats avg(pgscan_direct) as scan_avg, avg(pgsteal_direct) as steal_avg by host\n| eval steal_ratio=steal_avg/scan_avg\n| where steal_ratio > 0.7\n```\n\nUnderstanding this SPL\n\n**Page Cache Pressure and Reclaim Activity** — High page cache reclaim activity indicates memory pressure affecting application performance.\n\nDocumented **Data sources**: `sourcetype=custom:meminfo_delta, /proc/vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:meminfo_delta. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:meminfo_delta. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **steal_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where steal_ratio > 0.7` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Single Value\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the server is so tight on memory that it keeps reclaiming file cache, which is often when things start feeling slow for everyone on that machine.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.48",
              "n": "NUMA Memory Imbalance Per Node",
              "c": "medium",
              "f": "intermediate",
              "v": "NUMA memory imbalance causes remote memory access latency affecting NUMA-aware applications.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:numa_meminfo`",
              "q": "index=os sourcetype=custom:numa_meminfo host=*\n| stats avg(node_free) as avg_free by host, numa_node\n| stats max(avg_free) as max_free, min(avg_free) as min_free by host\n| eval imbalance_ratio=max_free/min_free\n| where imbalance_ratio > 1.5",
              "m": "Create a scripted input parsing /sys/devices/system/node/node*/meminfo. Calculate free memory per NUMA node monthly. Alert when free memory distribution becomes imbalanced, indicating suboptimal memory allocation.",
              "z": "Gauge, Heatmap",
              "kfp": "Non-NUMA or single-socket machines mis-tagged in inventory; short-lived skew right after boot or a large one-node allocation that later balances; `min_free` at or near zero causing unstable ratios—consider a `min_free` floor in `eval`.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:numa_meminfo`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input parsing /sys/devices/system/node/node*/meminfo. Calculate free memory per NUMA node monthly. Alert when free memory distribution becomes imbalanced, indicating suboptimal memory allocation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:numa_meminfo host=*\n| stats avg(node_free) as avg_free by host, numa_node\n| stats max(avg_free) as max_free, min(avg_free) as min_free by host\n| eval imbalance_ratio=max_free/min_free\n| where imbalance_ratio > 1.5\n```\n\nUnderstanding this SPL\n\n**NUMA Memory Imbalance Per Node** — NUMA memory imbalance causes remote memory access latency affecting NUMA-aware applications.\n\nDocumented **Data sources**: `sourcetype=custom:numa_meminfo`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:numa_meminfo. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:numa_meminfo. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, numa_node** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **imbalance_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where imbalance_ratio > 1.5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge, Heatmap\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch when one side of a big server is starved of memory while the other still has room, so you can fix placement before jobs run slow in hard-to-explain ways.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.49",
              "n": "Memory Cgroup Limit Enforcement",
              "c": "high",
              "f": "intermediate",
              "v": "Cgroup limits prevent runaway processes but enforcement indicates containers at memory limits need scaling.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, custom:cgroup_memory`",
              "q": "index=os sourcetype=syslog \"memory.max_usage_in_bytes\" OR \"Out of memory\" AND cgroup\n| stats count by host, cgroup_id\n| where count > 0",
              "m": "Create a scripted input that tracks /sys/fs/cgroup/memory/* metrics. Monitor max_usage_in_bytes vs. limit_in_bytes ratio. Alert when usage exceeds 90% of limit, indicating need for more memory allocation or right-sizing.",
              "z": "Table, Gauge",
              "kfp": "Test clusters that intentionally trigger cgroup OOM; log-format changes that still contain the word cgroup but refer to a healthy accounting line; missing `cgroup_id` parsing so every row lands in null—fix extractions first.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, custom:cgroup_memory`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that tracks /sys/fs/cgroup/memory/* metrics. Monitor max_usage_in_bytes vs. limit_in_bytes ratio. Alert when usage exceeds 90% of limit, indicating need for more memory allocation or right-sizing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"memory.max_usage_in_bytes\" OR \"Out of memory\" AND cgroup\n| stats count by host, cgroup_id\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Memory Cgroup Limit Enforcement** — Cgroup limits prevent runaway processes but enforcement indicates containers at memory limits need scaling.\n\nDocumented **Data sources**: `sourcetype=syslog, custom:cgroup_memory`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, cgroup_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Gauge\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that a container or service group is bumping into its memory cap, so you can add memory or move work before things get stopped unexpectedly.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.50",
              "n": "Transparent Hugepage Defragmentation Stalls",
              "c": "high",
              "f": "beginner",
              "v": "THP defrag stalls cause application latency spikes affecting real-time and interactive workloads.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, THP metrics`",
              "q": "index=os sourcetype=syslog (\"thp_defrags\" OR \"khugepaged\" OR \"thp_collapse\")\n| stats count by host\n| where count > 5",
              "m": "Enable THP statistics logging via /sys/kernel/debug/thp*. Create alerts when defrag stalls occur during peak application hours. Recommend adjusting THP settings to madvise mode for latency-sensitive workloads.",
              "z": "Alert, Table",
              "kfp": "Noise from unrelated subsystems that share substrings; hosts where logging verbosity was turned up; legitimate one-off maintenance that still crosses a low count—tighten keywords or time bounds per environment.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, THP metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable THP statistics logging via /sys/kernel/debug/thp*. Create alerts when defrag stalls occur during peak application hours. Recommend adjusting THP settings to madvise mode for latency-sensitive workloads.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog (\"thp_defrags\" OR \"khugepaged\" OR \"thp_collapse\")\n| stats count by host\n| where count > 5\n```\n\nUnderstanding this SPL\n\n**Transparent Hugepage Defragmentation Stalls** — THP defrag stalls cause application latency spikes affecting real-time and interactive workloads.\n\nDocumented **Data sources**: `sourcetype=syslog, THP metrics`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the operating system is busy reshuffling its large memory pages, which can make some programs stutter, so you can change settings or move fussy work elsewhere.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.51",
              "n": "TCP Retransmission Rate Elevation",
              "c": "high",
              "f": "advanced",
              "v": "High retransmission rates indicate network congestion, packet loss, or application issues affecting throughput.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:tcp_stats, /proc/net/tcp`",
              "q": "index=os sourcetype=netstat host=*\n| bin _time span=5m\n| stats sum(retransSegs) as retrans by host, _time\n| streamstats window=100 avg(retrans) as baseline stdev(retrans) as stddev by host\n| eval upper=baseline+(2*stddev)\n| where retrans > upper",
              "m": "Create a scripted input that parses /proc/net/snmp for TCP retransmission metrics. Track TcpRetransSegs and TcpOutSegs to calculate retransmission percentage. Alert when above 2% or 3x baseline.",
              "z": "Timechart, Anomaly Chart",
              "kfp": "Wi‑Fi or wide-area test hosts with inherently noisy retrans; counter resets on driver reloads that make `sum(retransSegs)` jump; a too-short `window` that never stabilizes the baseline on bursty services.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:tcp_stats, /proc/net/tcp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that parses /proc/net/snmp for TCP retransmission metrics. Track TcpRetransSegs and TcpOutSegs to calculate retransmission percentage. Alert when above 2% or 3x baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=netstat host=*\n| bin _time span=5m\n| stats sum(retransSegs) as retrans by host, _time\n| streamstats window=100 avg(retrans) as baseline stdev(retrans) as stddev by host\n| eval upper=baseline+(2*stddev)\n| where retrans > upper\n```\n\nUnderstanding this SPL\n\n**TCP Retransmission Rate Elevation** — High retransmission rates indicate network congestion, packet loss, or application issues affecting throughput.\n\nDocumented **Data sources**: `sourcetype=custom:tcp_stats, /proc/net/tcp`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: netstat. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=netstat. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `streamstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **upper** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where retrans > upper` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Anomaly Chart\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs the computer is resending a lot of network data it already tried to send, which usually means a rough connection or a sick link somewhere along the way.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.52",
              "n": "Connection Tracking Table Exhaustion",
              "c": "critical",
              "f": "intermediate",
              "v": "Conntrack table full prevents new network connections, causing application failures.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:conntrack, /proc/net/nf_conntrack`",
              "q": "index=os sourcetype=custom:conntrack host=*\n| stats latest(current_count) as current, latest(max_size) as maximum by host\n| eval usage_pct=(current/maximum)*100\n| where usage_pct > 80",
              "m": "Create a scripted input that parses /proc/net/nf_conntrack_count and /proc/sys/net/netfilter/nf_conntrack_max. Alert when usage exceeds 80%, with escalation at 95%. Include recommendations to increase nf_conntrack_max.",
              "z": "Gauge, Alert",
              "kfp": "Short-lived spikes on hosts doing heavy one-shot scans; `max_size` read errors defaulting to zero; virtual appliances where limits change during live migration without updating your baseline.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:conntrack, /proc/net/nf_conntrack`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that parses /proc/net/nf_conntrack_count and /proc/sys/net/netfilter/nf_conntrack_max. Alert when usage exceeds 80%, with escalation at 95%. Include recommendations to increase nf_conntrack_max.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:conntrack host=*\n| stats latest(current_count) as current, latest(max_size) as maximum by host\n| eval usage_pct=(current/maximum)*100\n| where usage_pct > 80\n```\n\nUnderstanding this SPL\n\n**Connection Tracking Table Exhaustion** — Conntrack table full prevents new network connections, causing application failures.\n\nDocumented **Data sources**: `sourcetype=custom:conntrack, /proc/net/nf_conntrack`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:conntrack. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:conntrack. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **usage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where usage_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge, Alert\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a safety table that tracks open network connections is almost full, so new connections can still get through when things get busy.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.53",
              "n": "Socket Buffer Overflow Detection",
              "c": "high",
              "f": "advanced",
              "v": "Socket buffer overflows cause packet drops and connection resets, indicating network saturation or misconfiguration.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:socket_stats, /proc/net/sockstat`",
              "q": "index=os sourcetype=custom:socket_stats host=*\n| stats avg(TCPBacklogDrop) as avg_drop by host\n| where avg_drop > 0",
              "m": "Create a scripted input parsing /proc/net/sockstat and monitor TCP_alloc, sockets_inuse, and TCP backlog. Also track netstat LISTEN state queue counts. Alert on backlog drops indicating insufficient buffer space.",
              "z": "Table, Timechart",
              "kfp": "Kernels that do not publish the field, leaving a literal zero and hiding issues—validate field mapping; microbursts on restart that clear immediately; load tests in lower environments you should suppress by host prefix.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:socket_stats, /proc/net/sockstat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input parsing /proc/net/sockstat and monitor TCP_alloc, sockets_inuse, and TCP backlog. Also track netstat LISTEN state queue counts. Alert on backlog drops indicating insufficient buffer space.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:socket_stats host=*\n| stats avg(TCPBacklogDrop) as avg_drop by host\n| where avg_drop > 0\n```\n\nUnderstanding this SPL\n\n**Socket Buffer Overflow Detection** — Socket buffer overflows cause packet drops and connection resets, indicating network saturation or misconfiguration.\n\nDocumented **Data sources**: `sourcetype=custom:socket_stats, /proc/net/sockstat`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:socket_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:socket_stats. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_drop > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timechart\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch when the system is starting to turn away work at the front door of busy network listeners, so you can fix sizing or load before people see long waits and errors.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.54",
              "n": "Network Namespace Monitoring",
              "c": "medium",
              "f": "advanced",
              "v": "Network namespace monitoring detects container escape attempts and validates network isolation in containerized environments.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:netns, /var/run/netns/`",
              "q": "index=os sourcetype=custom:netns host=*\n| stats count by host, netns_name\n| where count > 10",
              "m": "Create a scripted input that enumerates /var/run/netns/ and tracks namespace creation/deletion. Baseline expected namespaces per host. Alert on unexpected new namespaces which may indicate container escape or compromise.",
              "z": "Table, Alert",
              "kfp": "Kubernetes or Mesos nodes that legitimately create many short-lived namespaces during rolling upgrades; CI hosts that spin up dozens of netns for tests; threshold set for bare metal but applied to dense multi-tenant workers without retuning.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:netns, /var/run/netns/`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that enumerates /var/run/netns/ and tracks namespace creation/deletion. Baseline expected namespaces per host. Alert on unexpected new namespaces which may indicate container escape or compromise.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:netns host=*\n| stats count by host, netns_name\n| where count > 10\n```\n\nUnderstanding this SPL\n\n**Network Namespace Monitoring** — Network namespace monitoring detects container escape attempts and validates network isolation in containerized environments.\n\nDocumented **Data sources**: `sourcetype=custom:netns, /var/run/netns/`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:netns. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:netns. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, netns_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network Namespace Monitoring** — Network namespace monitoring detects container escape attempts and validates network isolation in containerized environments.\n\nDocumented **Data sources**: `sourcetype=custom:netns, /var/run/netns/`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you notice when a server suddenly has far more separate network spaces than usual, which is worth a second look on machines that should stay simple and predictable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.55",
              "n": "DNS Resolution Failure Rate",
              "c": "high",
              "f": "intermediate",
              "v": "DNS failures impact application availability and user experience, requiring immediate investigation.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, systemd-resolved logs`",
              "q": "index=os sourcetype=syslog \"systemd-resolved\" (\"SERVFAIL\" OR \"NXDOMAIN\" OR \"TIMEOUT\")\n| stats count as failures by host, query_name\n| eval failure_rate=count\n| where failure_rate > 10",
              "m": "Monitor systemd-resolved or BIND logs for DNS query failures. Track NXDOMAIN, SERVFAIL, and TIMEOUT responses. Alert on failure rate spikes with correlation to specific nameservers or query types.",
              "z": "Table, Timechart",
              "kfp": "Intentional NXDOMAIN from security or sinkhole tools; build jobs that resolve thousands of not-yet-registered names; false `eval` of `count` in older copies of this use case—this version uses the `failures` field directly from `stats`.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, systemd-resolved logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor systemd-resolved or BIND logs for DNS query failures. Track NXDOMAIN, SERVFAIL, and TIMEOUT responses. Alert on failure rate spikes with correlation to specific nameservers or query types.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"systemd-resolved\" (\"SERVFAIL\" OR \"NXDOMAIN\" OR \"TIMEOUT\")\n| stats count as failures by host, query_name\n| eval failure_rate=count\n| where failure_rate > 10\n```\n\nUnderstanding this SPL\n\n**DNS Resolution Failure Rate** — DNS failures impact application availability and user experience, requiring immediate investigation.\n\nDocumented **Data sources**: `sourcetype=syslog, systemd-resolved logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, query_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **failure_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failure_rate > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timechart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the machine’s name-lookup diary for a jump in “couldn’t look that up” lines, so teams can fix the phone book before apps start failing in big numbers.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.56",
              "n": "Firewall Rule Hit Tracking (iptables/nftables)",
              "c": "medium",
              "f": "beginner",
              "v": "Firewall rule tracking identifies blocked traffic patterns, helping optimize rules and detect attack attempts.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, kernel ufw/firewall logs`",
              "q": "index=os sourcetype=syslog \"ufw\" (\"DENY\" OR \"REJECT\" OR \"DROP\")\n| stats count by host, src, dst_port, protocol\n| where count > 100",
              "m": "Enable firewall logging in iptables/nftables. Configure kernel logging for denied traffic. Create alerts for spike in dropped packets to specific ports, and trending reports on top blocked IPs.",
              "z": "Table, Bar Chart",
              "kfp": "Honeypot or intentionally noisy deny-all segments; UFW re-reads during reload that duplicate the same line; fields not parsed, collapsing rows incorrectly—verify extractions in Search before you alert.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, kernel ufw/firewall logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable firewall logging in iptables/nftables. Configure kernel logging for denied traffic. Create alerts for spike in dropped packets to specific ports, and trending reports on top blocked IPs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"ufw\" (\"DENY\" OR \"REJECT\" OR \"DROP\")\n| stats count by host, src, dst_port, protocol\n| where count > 100\n```\n\nUnderstanding this SPL\n\n**Firewall Rule Hit Tracking (iptables/nftables)** — Firewall rule tracking identifies blocked traffic patterns, helping optimize rules and detect attack attempts.\n\nDocumented **Data sources**: `sourcetype=syslog, kernel ufw/firewall logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, src, dst_port, protocol** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 100` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar Chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when a firewall is saying “no” a lot more than usual, which can show an attack, a wrong setting, or a client pointed at the wrong place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.57",
              "n": "ARP Table Overflow Detection",
              "c": "high",
              "f": "advanced",
              "v": "ARP table overflow causes network connectivity issues and may indicate ARP spoofing attacks or network misconfiguration.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:arp, /proc/net/arp`",
              "q": "index=os sourcetype=custom:arp host=*\n| stats count as arp_entry_count by host\n| eval max_entries=1024\n| where arp_entry_count > (max_entries * 0.8)",
              "m": "Create a scripted input that counts /proc/net/arp entries and monitors /proc/sys/net/ipv4/neigh/*/gc_thresh* limits. Alert when ARP table approaches limits. Correlate with network scans or spoofing indicators.",
              "z": "Gauge, Alert",
              "kfp": "VM mobility or anycast designs with legitimately huge neighbor tables; a too-low `max_entries` constant; duplicate polls double-counting if you emit multiple sourcetypes to the same index without host keys.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:arp, /proc/net/arp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that counts /proc/net/arp entries and monitors /proc/sys/net/ipv4/neigh/*/gc_thresh* limits. Alert when ARP table approaches limits. Correlate with network scans or spoofing indicators.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:arp host=*\n| stats count as arp_entry_count by host\n| eval max_entries=1024\n| where arp_entry_count > (max_entries * 0.8)\n```\n\nUnderstanding this SPL\n\n**ARP Table Overflow Detection** — ARP table overflow causes network connectivity issues and may indicate ARP spoofing attacks or network misconfiguration.\n\nDocumented **Data sources**: `sourcetype=custom:arp, /proc/net/arp`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:arp. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:arp. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **max_entries** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where arp_entry_count > (max_entries * 0.8)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge, Alert\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full the local address list is, because when that list is full, the server can’t place new devices on the map and traffic stops making sense.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.58",
              "n": "Network Bond Failover Events (Linux)",
              "c": "critical",
              "f": "beginner",
              "v": "Network bond failovers indicate NIC or port failures requiring immediate remediation to prevent connectivity loss.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, bonding driver logs`",
              "q": "index=os sourcetype=syslog \"bonding:\" (\"slave\" OR \"primary\") (\"failed\" OR \"recovering\" OR \"detected\")\n| stats count by host, bond_interface, slave_interface\n| timechart count by host",
              "m": "Configure kernel bonding driver logging to syslog. Create immediate alerts on slave link failures. Include search to show bond status and recommend manual failover tests post-recovery.",
              "z": "Alert, Timechart",
              "kfp": "Planned failovers or switch maintenance during which operators already know links flap; very chatty `debug` bonding logs after a driver upgrade; hosts without bonding that pick up the substring from unrelated processes—tighten with `sourcetype` filters or `bond_interface` fields if extracted.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, bonding driver logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure kernel bonding driver logging to syslog. Create immediate alerts on slave link failures. Include search to show bond status and recommend manual failover tests post-recovery.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"bonding:\" (\"slave\" OR \"primary\") (\"failed\" OR \"recovering\" OR \"detected\")\n| stats count by host, bond_interface, slave_interface\n| timechart count by host\n```\n\nUnderstanding this SPL\n\n**Network Bond Failover Events (Linux)** — Network bond failovers indicate NIC or port failures requiring immediate remediation to prevent connectivity loss.\n\nDocumented **Data sources**: `sourcetype=syslog, bonding driver logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, bond_interface, slave_interface** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time with a separate series **by host** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network Bond Failover Events (Linux)** — Network bond failovers indicate NIC or port failures requiring immediate remediation to prevent connectivity loss.\n\nDocumented **Data sources**: `sourcetype=syslog, bonding driver logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Timechart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a server’s paired network lines stop taking turns nicely, so someone can check cables and switches before the last good line goes bad too.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "1.1.59",
              "n": "Network Team Failover Detection (Linux)",
              "c": "critical",
              "f": "intermediate",
              "v": "Teamed interface failovers indicate critical network path failures affecting server connectivity.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, teamd logs`",
              "q": "index=os sourcetype=syslog \"teamd\" (\"port\" OR \"link\") (\"down\" OR \"up\" OR \"enabled\" OR \"disabled\")\n| stats count by host, team_interface\n| where count > 0",
              "m": "Monitor teamd daemon logs for port state changes. Create alerts on any port disable/enable events. Correlate with physical switch logs to validate network-side issues.",
              "z": "Alert, Table",
              "kfp": "Brief log storms during daemon restarts; duplicate lines if you capture both **journald** and **rsyslog**; environments without libteam that match the teamd string in unrelated app logs—tighten with `sourcetype` host allowlists for critical segments.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, teamd logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor teamd daemon logs for port state changes. Create alerts on any port disable/enable events. Correlate with physical switch logs to validate network-side issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"teamd\" (\"port\" OR \"link\") (\"down\" OR \"up\" OR \"enabled\" OR \"disabled\")\n| stats count by host, team_interface\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Network Team Failover Detection (Linux)** — Teamed interface failovers indicate critical network path failures affecting server connectivity.\n\nDocumented **Data sources**: `sourcetype=syslog, teamd logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, team_interface** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network Team Failover Detection (Linux)** — Teamed interface failovers indicate critical network path failures affecting server connectivity.\n\nDocumented **Data sources**: `sourcetype=syslog, teamd logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch log lines when a “team” of network ports changes who is up or down, so you can fix a flaky port before the backup port is the only one left.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.60",
              "n": "MTU Mismatch Detection",
              "c": "medium",
              "f": "advanced",
              "v": "MTU mismatches cause fragmentation and performance issues, especially with jumbo frame configurations.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:mtu, ip link show output`",
              "q": "index=os sourcetype=custom:mtu host=*\n| stats latest(mtu) as interface_mtu by host, interface\n| stats values(interface_mtu) as mtus by host\n| where mvcount(mtus) > 1",
              "m": "Create a scripted input that runs 'ip link show' and parses MTU values per interface. Alert when mixed MTUs are detected on a host, indicating potential mismatch with switch/network.",
              "z": "Table, Alert",
              "kfp": "Intentional designs: jumbo on storage VLAN, 1500 on management; down interfaces still reporting an old MTU; containers or VRFs with deliberate differences—add a **lookup** of `expected_mismatch_ok` for those hosts.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:mtu, ip link show output`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that runs 'ip link show' and parses MTU values per interface. Alert when mixed MTUs are detected on a host, indicating potential mismatch with switch/network.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:mtu host=*\n| stats latest(mtu) as interface_mtu by host, interface\n| stats values(interface_mtu) as mtus by host\n| where mvcount(mtus) > 1\n```\n\nUnderstanding this SPL\n\n**MTU Mismatch Detection** — MTU mismatches cause fragmentation and performance issues, especially with jumbo frame configurations.\n\nDocumented **Data sources**: `sourcetype=custom:mtu, ip link show output`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:mtu. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:mtu. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, interface** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mvcount(mtus) > 1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the same computer’s network doors are not all the same size, which can make some big packets fit on one path but get chopped up or stuck on another.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.61",
              "n": "TCP TIME_WAIT Accumulation",
              "c": "medium",
              "f": "intermediate",
              "v": "Excessive TIME_WAIT sockets can exhaust ephemeral port space, causing connection failures under load.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:netstat, netstat output`",
              "q": "index=os sourcetype=custom:netstat host=*\n| stats count(status) as time_wait_count by host where status=\"TIME_WAIT\"\n| eval warning_level=32000\n| where time_wait_count > warning_level",
              "m": "Create a scripted input that runs 'netstat -tan | grep TIME_WAIT | wc -l'. Alert when TIME_WAIT count exceeds 32K. Include recommendations to tune tcp_tw_reuse or tcp_tw_recycle on load-generation hosts.",
              "z": "Gauge, Single Value",
              "kfp": "Load generators that open huge fan-out short-lived connections on purpose; `warning_level=32000` may be too low on massive connection brokers—raise per service tier; `search` that misses your field name if the parser used `ST` instead of `state`.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:netstat, netstat output`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that runs 'netstat -tan | grep TIME_WAIT | wc -l'. Alert when TIME_WAIT count exceeds 32K. Include recommendations to tune tcp_tw_reuse or tcp_tw_recycle on load-generation hosts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:netstat host=*\n| stats count(status) as time_wait_count by host where status=\"TIME_WAIT\"\n| eval warning_level=32000\n| where time_wait_count > warning_level\n```\n\nUnderstanding this SPL\n\n**TCP TIME_WAIT Accumulation** — Excessive TIME_WAIT sockets can exhaust ephemeral port space, causing connection failures under load.\n\nDocumented **Data sources**: `sourcetype=custom:netstat, netstat output`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:netstat. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:netstat. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host where status=\"TIME_WAIT\"** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **warning_level** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where time_wait_count > warning_level` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge, Single Value\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a computer is keeping an unusually large pile of “recently finished” network connections, which can get in the way of opening new ones when traffic is very busy.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.62",
              "n": "Network Bandwidth Utilization by Interface (Linux)",
              "c": "medium",
              "f": "intermediate",
              "v": "High bandwidth utilization indicates potential capacity constraints or unexpected traffic patterns.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=interfaces`",
              "q": "index=os sourcetype=interfaces host=*\n| stats latest(bytes_in) as latest_in, earliest(bytes_in) as earliest_in by host, interface\n| eval bytes_transferred=(latest_in-earliest_in)\n| stats sum(bytes_transferred) as total_bytes by host\n| eval bandwidth_util_pct=(total_bytes/interface_capacity_bits)*100",
              "m": "Use Splunk_TA_nix interfaces input to track bytes in/out. Calculate bandwidth percentage based on interface speed. Create alerts for sustained utilization above 70% or unexpected spikes.",
              "z": "Timechart, Heatmap",
              "kfp": "Counter resets (driver reload) that make **last-first** look huge; very short intervals inside the bin; interfaces that were down part of the window—consider `| where dbytes_in>=0 AND dbytes_out>=0` and a lower bound on `mbps` noise.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=interfaces`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk_TA_nix interfaces input to track bytes in/out. Calculate bandwidth percentage based on interface speed. Create alerts for sustained utilization above 70% or unexpected spikes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=interfaces host=*\n| stats latest(bytes_in) as latest_in, earliest(bytes_in) as earliest_in by host, interface\n| eval bytes_transferred=(latest_in-earliest_in)\n| stats sum(bytes_transferred) as total_bytes by host\n| eval bandwidth_util_pct=(total_bytes/interface_capacity_bits)*100\n```\n\nUnderstanding this SPL\n\n**Network Bandwidth Utilization by Interface (Linux)** — High bandwidth utilization indicates potential capacity constraints or unexpected traffic patterns.\n\nDocumented **Data sources**: `sourcetype=interfaces`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: interfaces. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=interfaces. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, interface** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **bytes_transferred** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **bandwidth_util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network Bandwidth Utilization by Interface (Linux)** — High bandwidth utilization indicates potential capacity constraints or unexpected traffic patterns.\n\nDocumented **Data sources**: `sourcetype=interfaces`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Heatmap",
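\n\nCapacity-normalized sketch (assumes a hypothetical lookup file `interface_capacity.csv` with columns host, interface, capacity_bps that you maintain yourself; the 70% floor follows the implementation note above):\n\n```spl\nindex=os sourcetype=interfaces host=*\n| stats latest(bytes_in) as latest_in, earliest(bytes_in) as earliest_in, range(_time) as window_secs by host, interface\n| where window_secs > 0\n| eval bits_per_sec=((latest_in-earliest_in)*8)/window_secs\n| lookup interface_capacity.csv host, interface OUTPUT capacity_bps\n| eval bandwidth_util_pct=round((bits_per_sec/capacity_bps)*100, 1)\n| where bandwidth_util_pct > 70\n```",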
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how much data each network card is moving in each few-minute slice, and we raise a hand when that slice is so busy it’s probably time to add speed or move work.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.63",
              "n": "Dropped Packets by Network Interface",
              "c": "high",
              "f": "intermediate",
              "v": "Dropped packets indicate network issues, buffer overflow, or driver problems affecting reliability.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=interfaces`",
              "q": "index=os sourcetype=interfaces host=*\n| stats latest(dropped_in) as dropped_in, latest(dropped_out) as dropped_out by host, interface\n| eval total_dropped=dropped_in+dropped_out\n| where total_dropped > 0\n| timechart sum(total_dropped) by host, interface",
              "m": "Monitor interface drop counters from /proc/net/dev or ethtool. Alert on any dropped packets, which should be zero in healthy networks. Correlate with driver errors and ring buffer exhaustion.",
              "z": "Timechart, Alert",
              "kfp": "A single non-zero value from a one-time micro-burst; counters that only increase on reboot; bond/team parents vs slave stats double-counting if you do not pick the right **interface** names—tighten to `| where total_dropped>threshold` and compare delta over time, not the absolute if counters never reset.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=interfaces`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor interface drop counters from /proc/net/dev or ethtool. Alert on any dropped packets, which should be zero in healthy networks. Correlate with driver errors and ring buffer exhaustion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=interfaces host=*\n| stats latest(dropped_in) as dropped_in, latest(dropped_out) as dropped_out by host, interface\n| eval total_dropped=dropped_in+dropped_out\n| where total_dropped > 0\n| timechart sum(total_dropped) by host, interface\n```\n\nUnderstanding this SPL\n\n**Dropped Packets by Network Interface** — Dropped packets indicate network issues, buffer overflow, or driver problems affecting reliability.\n\nDocumented **Data sources**: `sourcetype=interfaces`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: interfaces. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=interfaces. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, interface** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total_dropped** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where total_dropped > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time with a separate series **by host, interface** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Dropped Packets by Network Interface** — Dropped packets indicate network issues, buffer overflow, or driver problems affecting reliability.\n\nDocumented **Data sources**: `sourcetype=interfaces`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Alert",
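\n\nDelta-based sketch (per the known-false-positives guidance for this use case: compare successive samples of the cumulative counters so resets and old bursts do not alert forever; field names follow the TA's interfaces sourcetype):\n\n```spl\nindex=os sourcetype=interfaces host=*\n| eval total_dropped=dropped_in+dropped_out\n| sort 0 host interface _time\n| streamstats current=f last(total_dropped) as prev_dropped by host, interface\n| eval drop_delta=total_dropped-prev_dropped\n| where drop_delta > 0\n| timechart span=5m sum(drop_delta) by host\n```",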
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a network card is quietly throwing away pieces of traffic, which is often the first sign of a busy or mis-sized connection between the card and the rest of the network.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.64",
              "n": "Network Latency Monitoring (Ping RTT)",
              "c": "medium",
              "f": "advanced",
              "v": "Elevated latency to critical services impacts application performance and user experience.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:ping_rtt`",
              "q": "index=os sourcetype=custom:ping_rtt host=*\n| stats avg(rtt_ms) as avg_latency, max(rtt_ms) as max_latency, stdev(rtt_ms) as stddev by host, target\n| eval upper_bound=avg_latency+(2*stddev)\n| where avg_latency > baseline OR max_latency > upper_bound",
              "m": "Create a scripted input that pings critical infrastructure hosts and captures RTT. Baseline normal latencies per target. Alert when average exceeds baseline or max exceeds 2x standard deviation.",
              "z": "Timechart, Gauge",
              "kfp": "Wi‑Fi test hosts; ICMP deprioritised on some routers; a single outlier `max` from one lost/replied packet pair—tighten with `| where count>20` on the stats or move to `perc95` for robustness.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:ping_rtt`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that pings critical infrastructure hosts and captures RTT. Baseline normal latencies per target. Alert when average exceeds baseline or max exceeds 2x standard deviation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:ping_rtt host=*\n| stats avg(rtt_ms) as avg_latency, max(rtt_ms) as max_latency, stdev(rtt_ms) as stddev by host, target\n| eval upper_bound=avg_latency+(2*stddev)\n| where avg_latency > baseline OR max_latency > upper_bound\n```\n\nUnderstanding this SPL\n\n**Network Latency Monitoring (Ping RTT)** — Elevated latency to critical services impacts application performance and user experience.\n\nDocumented **Data sources**: `sourcetype=custom:ping_rtt`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:ping_rtt. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:ping_rtt. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, target** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **upper_bound** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_latency > baseline OR max_latency > upper_bound` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network Latency Monitoring (Ping RTT)** — Elevated latency to critical services impacts application performance and user experience.\n\nDocumented **Data sources**: `sourcetype=custom:ping_rtt`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Gauge\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
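\n\nRTT collector for this use case (a minimal sketch; TARGETS is a placeholder list, and the awk pattern assumes a Linux iputils or BSD ping summary line, where the average RTT is the fifth '/'-separated field):\n\n```bash\n#!/usr/bin/env bash\n# Emit one RTT sample per critical target: target=<host> rtt_ms=<avg>\nTARGETS=\"10.0.0.1 core-router.example.com\"\nfor t in $TARGETS; do\n  rtt=$(ping -c 3 -q \"$t\" | awk -F'/' '/^rtt|^round-trip/ { print $5 }')\n  [ -n \"$rtt\" ] && echo \"target=$t rtt_ms=$rtt\"\ndone\n```",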
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the little “how long did the answer take” test between servers, and we get worried when the worst answers are a lot later than the usual ones, not just when the average is off.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.65",
              "n": "Auditd Rule Violation Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Auditd violations provide forensic evidence of security incidents and unauthorized system access.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=linux_audit`",
              "q": "index=os sourcetype=linux_audit type=AVC\n| stats count by host, avc_type, comm\n| where count > threshold",
              "m": "Configure comprehensive auditd rules covering file access, syscalls, and privilege escalation. Monitor AVC (Access Vector Cache) denials. Create alerts on violation patterns indicating potential compromise.",
              "z": "Table, Alert",
              "kfp": "Planned app releases that hit new file paths under enforcing policy until policy is updated; backup or scanner accounts that always trip the same `avc_type`; a too-low `>5` on chatty but benign automation—tune to `>25` in mature estates.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure comprehensive auditd rules covering file access, syscalls, and privilege escalation. Monitor AVC (Access Vector Cache) denials. Create alerts on violation patterns indicating potential compromise.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit type=AVC\n| stats count by host, avc_type, comm\n| where count > threshold\n```\n\nUnderstanding this SPL\n\n**Auditd Rule Violation Detection** — Auditd violations provide forensic evidence of security incidents and unauthorized system access.\n\nDocumented **Data sources**: `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, avc_type, comm** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > threshold` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the security layer on a Linux box is saying “no, you may not do that” more often than usual, so the right people can check policy or a possible break-in early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.12",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.12 (Record-keeping (logging)) is enforced — Splunk UC-1.1.65: Auditd Rule Violation Detection.",
                  "ea": "Saved search 'UC-1.1.65' running on sourcetype=linux_audit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2024/1689/oj"
                },
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-1.1.65: Auditd Rule Violation Detection.",
                  "ea": "Saved search 'UC-1.1.65' running on sourcetype=linux_audit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-21/chapter-I/subchapter-A/part-11"
                },
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "SI-4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FedRAMP SI-4 (System monitoring) is enforced — Splunk UC-1.1.65: Auditd Rule Violation Detection.",
                  "ea": "Saved search 'UC-1.1.65' running on sourcetype=linux_audit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "09.aa",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HITRUST 09.aa (Audit logging) is enforced — Splunk UC-1.1.65: Auditd Rule Violation Detection.",
                  "ea": "Saved search 'UC-1.1.65' running on sourcetype=linux_audit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                },
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 6.4 (Logging and monitoring) is enforced — Splunk UC-1.1.65: Auditd Rule Violation Detection.",
                  "ea": "Saved search 'UC-1.1.65' running on sourcetype=linux_audit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.swift.com/myswift/customer-security-programme-csp/security-controls"
                }
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.66",
              "n": "SELinux Denial Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "SELinux denials indicate policy violations that may require tuning or signal legitimate attack attempts.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, /var/log/audit/audit.log`",
              "q": "index=os sourcetype=syslog \"SELinux\" \"denied\"\n| stats count by host, source_context, target_context, action\n| where count > 5",
              "m": "Enable SELinux audit logging. Monitor /var/log/audit/audit.log for denial messages. Create alerts for denial spikes indicating possible policy misconfigurations or attacks. Include context in alerts to help debugging.",
              "z": "Table, Timechart",
              "kfp": "Deploy Fridays where new paths are first touched; `count>5` may be one noisy minute after a `restorecon`—pair with a **per-hour** `stats` for slower environments or raise the floor.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, /var/log/audit/audit.log`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable SELinux audit logging. Monitor /var/log/audit/audit.log for denial messages. Create alerts for denial spikes indicating possible policy misconfigurations or attacks. Include context in alerts to help debugging.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"SELinux\" \"denied\"\n| stats count by host, source_context, target_context, action\n| where count > 5\n```\n\nUnderstanding this SPL\n\n**SELinux Denial Monitoring** — SELinux denials indicate policy violations that may require tuning or signal legitimate attack attempts.\n\nDocumented **Data sources**: `sourcetype=syslog, /var/log/audit/audit.log`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, source_context, target_context, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timechart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when the “extra safety wrapper” on the system keeps saying no, often for the same file or the same app role, so people can fix policy or look for mischief.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.67",
              "n": "AppArmor Profile Violation Detection",
              "c": "high",
              "f": "intermediate",
              "v": "AppArmor violations indicate policy breaches that may reflect policy misconfigurations or attack attempts.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, AppArmor audit logs`",
              "q": "index=os sourcetype=syslog \"apparmor\" (\"DENIED\" OR \"ALLOWED\" AND \"mode=enforce\")\n| stats count by host, profile, operation\n| where count > baseline",
              "m": "Enable AppArmor audit mode logging to syslog. Monitor for DENIED operations in enforce mode. Create alerts for violation spikes by profile. Include operation context to guide policy tuning.",
              "z": "Table, Alert",
              "kfp": "Developers who intentionally trigger denies in a lab host that shares an index; profiles that `complain` first (no deny) — you may need a separate use case; short spikes during `aa-enforce` rollouts—suppress by **change** window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, AppArmor audit logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable AppArmor audit mode logging to syslog. Monitor for DENIED operations in enforce mode. Create alerts for violation spikes by profile. Include operation context to guide policy tuning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"apparmor\" (\"DENIED\" OR \"ALLOWED\" AND \"mode=enforce\")\n| stats count by host, profile, operation\n| where count > baseline\n```\n\nUnderstanding this SPL\n\n**AppArmor Profile Violation Detection** — AppArmor violations indicate policy breaches that may reflect policy misconfigurations or attack attempts.\n\nDocumented **Data sources**: `sourcetype=syslog, AppArmor audit logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, profile, operation** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > baseline` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when a safety profile blocks an action more than a few times, which is a good time to see if a program changed or if someone is poking at things they shouldn’t.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.68",
              "n": "Rootkit Detection via File Integrity",
              "c": "critical",
              "f": "intermediate",
              "v": "File integrity changes indicate potential rootkit installation or unauthorized system modification.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:aide, AIDE database changes`",
              "q": "index=os sourcetype=custom:aide host=*\n| stats count by host, file_path, change_type\n| where change_type IN (\"added\", \"changed\", \"removed\") AND count > 0",
              "m": "Deploy AIDE (Advanced Intrusion Detection Environment) with daily scans. Create alerts for unexpected file changes in /bin, /sbin, /lib directories. Include baseline scans on system initialization.",
              "z": "Alert, Table",
              "kfp": "Monthly patch Tuesday churn; containers bind-mounting over paths AIDE also scans; a scanner that rewrites the same library twice in one run—use **first**-seen dedupe if needed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:aide, AIDE database changes`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy AIDE (Advanced Intrusion Detection Environment) with daily scans. Create alerts for unexpected file changes in /bin, /sbin, /lib directories. Include baseline scans on system initialization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:aide host=*\n| stats count by host, file_path, change_type\n| where change_type IN (\"added\", \"changed\", \"removed\") AND count > 0\n```\n\nUnderstanding this SPL\n\n**Rootkit Detection via File Integrity** — File integrity changes indicate potential rootkit installation or unauthorized system modification.\n\nDocumented **Data sources**: `sourcetype=custom:aide, AIDE database changes`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:aide. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:aide. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, file_path, change_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where change_type IN (\"added\", \"changed\", \"removed\") AND count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you get a list when a strict file checker notices new, changed, or missing important files, so the right people can see if a patch, a person, or a problem caused it.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.69",
              "n": "SUID/SGID Binary Changes",
              "c": "critical",
              "f": "beginner",
              "v": "Unauthorized SUID/SGID binary modifications enable privilege escalation attacks.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit`",
              "q": "index=os sourcetype=linux_audit type=EXECVE \"suid\" OR \"sgid\"\n| stats count by host, name, comm\n| where count > 0",
              "m": "Monitor /proc/fs/pstore or auditctl for SUID/SGID attribute changes. Create alerts on any changes to SUID/SGID binaries. Maintain a whitelist of expected SUID/SGID files for comparison.",
              "z": "Alert, Table",
              "kfp": "Package post-install scripts that chmod during patching; a missing field so every row is null; duplicate lines from exec and path in the same rule—add dedup on time, host, and name in a follow-on search.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor /proc/fs/pstore or auditctl for SUID/SGID attribute changes. Create alerts on any changes to SUID/SGID binaries. Maintain a whitelist of expected SUID/SGID files for comparison.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit type=EXECVE \"suid\" OR \"sgid\"\n| stats count by host, name, comm\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**SUID/SGID Binary Changes** — Unauthorized SUID/SGID binary modifications enable privilege escalation attacks.\n\nDocumented **Data sources**: `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, name, comm** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you notice when something tries to hand extra powers to a normal program, which is a serious thing to look at on a well-run server.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 65,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier"
              ]
            },
            {
              "i": "1.1.70",
              "n": "/etc/passwd Modifications",
              "c": "critical",
              "f": "beginner",
              "v": "/etc/passwd changes indicate user account creation/modification requiring immediate investigation for unauthorized access.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit`",
              "q": "index=os sourcetype=linux_audit path=\"/etc/passwd\" action=modified\n| stats count by host, auid, exe\n| where count > 0",
              "m": "Configure auditctl rules to monitor /etc/passwd for all modifications. Create immediate alerts with escalation to security team. Include user context showing who made the change.",
              "z": "Alert, Table",
              "kfp": "Configuration management tools that legitimately rewrite **passwd** every hour; duplicate **PATH** + **SYSCALL** pairs—`dedup` or key on **key=** only; container hosts where only the image builder should touch **passwd**—alert there with higher severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure auditctl rules to monitor /etc/passwd for all modifications. Create immediate alerts with escalation to security team. Include user context showing who made the change.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit path=\"/etc/passwd\" action=modified\n| stats count by host, auid, exe\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**/etc/passwd Modifications** — /etc/passwd changes indicate user account creation/modification requiring immediate investigation for unauthorized access.\n\nDocumented **Data sources**: `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, auid, exe** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the master list of user names on a machine is edited, so only the right people and tools are the ones doing it.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 65,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier"
              ]
            },
            {
              "i": "1.1.71",
              "n": "/etc/shadow Modifications",
              "c": "critical",
              "f": "beginner",
              "v": "/etc/shadow changes indicate password hash tampering or unauthorized privilege escalation attempts.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit`",
              "q": "index=os sourcetype=linux_audit path=\"/etc/shadow\" action=modified\n| stats count by host, auid, exe\n| where count > 0",
              "m": "Configure auditctl rules to monitor /etc/shadow modifications. Create immediate critical alerts. Include process context showing which application attempted the change.",
              "z": "Alert, Table",
              "kfp": "Monthly forced password-rotation playbooks that are still manual; `auid=4294967295` on some kernels when login uid cannot be resolved—tune **props**; backup tools that re-copy **shadow** during DR tests—**suppress** by host group.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure auditctl rules to monitor /etc/shadow modifications. Create immediate critical alerts. Include process context showing which application attempted the change.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit path=\"/etc/shadow\" action=modified\n| stats count by host, auid, exe\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**/etc/shadow Modifications** — /etc/shadow changes indicate password hash tampering or unauthorized privilege escalation attempts.\n\nDocumented **Data sources**: `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, auid, exe** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the secret password list on the machine is written to, which should almost never happen without a very clear, approved reason.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.72",
              "n": "SSH Public Key Changes",
              "c": "critical",
              "f": "beginner",
              "v": "SSH key modifications enable persistent unauthorized access, indicating potential account compromise.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit`",
              "q": "index=os sourcetype=linux_audit path~=\"\\.ssh/authorized_keys\" action=modified\n| stats count by host, auid, user\n| where count > 0",
              "m": "Monitor ~/.ssh/authorized_keys files for all users via auditctl. Create alerts on any modifications. Include user and source process information to determine if change was authorized.",
              "z": "Alert, Table",
              "kfp": "Configuration management re-pushing the same key each hour—dedupe on hash if you can extract it; **lxd** or **lxc** home dirs that are ephemeral; developers who frequently rotate personal keys on Fridays—**lookup**-based suppressions per team.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor ~/.ssh/authorized_keys files for all users via auditctl. Create alerts on any modifications. Include user and source process information to determine if change was authorized.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit path~=\"\\.ssh/authorized_keys\" action=modified\n| stats count by host, auid, user\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**SSH Public Key Changes** — SSH key modifications enable persistent unauthorized access, indicating potential account compromise.\n\nDocumented **Data sources**: `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, auid, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SSH Public Key Changes** — SSH key modifications enable persistent unauthorized access, indicating potential account compromise.\n\nDocumented **Data sources**: `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when someone changes the file that says who may log in without a password, because that is a common way to keep a back door open.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.73",
              "n": "PAM Authentication Failure Tracking",
              "c": "high",
              "f": "beginner",
              "v": "PAM failures indicate authentication issues or brute-force attack attempts against user accounts.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_secure`",
              "q": "index=os sourcetype=linux_secure pam \"authentication failure\"\n| stats count as failures by host, user, src\n| where failures > 5",
              "m": "Monitor PAM logs for authentication failures. Track failures per user and source IP. Create alerts for multiple failures from single IP within short timeframe indicating brute force.",
              "z": "Table, Timechart",
              "kfp": "Shared jump hosts where many analysts mistype a password; vulnerability scanners; accounts stuck in a reboot loop; lockout not aligned with Splunk time zone—**always** align **TZ** in props.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_secure`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor PAM logs for authentication failures. Track failures per user and source IP. Create alerts for multiple failures from single IP within short timeframe indicating brute force.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_secure pam \"authentication failure\"\n| stats count as failures by host, user, src\n| where failures > 5\n```\n\nUnderstanding this SPL\n\n**PAM Authentication Failure Tracking** — PAM failures indicate authentication issues or brute-force attack attempts against user accounts.\n\nDocumented **Data sources**: `sourcetype=linux_secure`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_secure. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, user, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failures > 5` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**PAM Authentication Failure Tracking** — PAM failures indicate authentication issues or brute-force attack attempts against user accounts.\n\nDocumented **Data sources**: `sourcetype=linux_secure`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timechart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when someone keeps typing the wrong password or the wrong account is trying over and over, which is often the first sign of a break-in attempt.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.74",
              "n": "Login from Unusual Source IPs",
              "c": "high",
              "f": "intermediate",
              "v": "Logins from unusual source IPs may indicate account compromise or unauthorized access.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_secure`",
              "q": "index=os sourcetype=linux_secure \"Accepted publickey\" OR \"Accepted password\"\n| stats dc(src) as unique_ips by host, user\n| eventstats avg(unique_ips) as baseline_ips by user\n| where unique_ips > baseline_ips + 3",
              "m": "Baseline normal login source IPs per user. Alert when login occurs from new IP addresses outside expected pattern. Include geolocation data for context on alert.",
              "z": "Table, Scatter Plot",
              "kfp": "Any corporate **NAT** that rotates exit IPs; **CI** users; **SRE** jump hosts that legitimately have dozens of home IPs a day; **zero**-history first day of deployment (no baseline) — the SPL guards with `isnotnull(baseline_ips)` but still think about warm-up time.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_secure`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline normal login source IPs per user. Alert when login occurs from new IP addresses outside expected pattern. Include geolocation data for context on alert.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_secure \"Accepted publickey\" OR \"Accepted password\"\n| stats dc(src) as unique_ips by host, user\n| eventstats avg(unique_ips) as baseline_ips by user\n| where unique_ips > baseline_ips + 3\n```\n\nUnderstanding this SPL\n\n**Login from Unusual Source IPs** — Logins from unusual source IPs may indicate account compromise or unauthorized access.\n\nDocumented **Data sources**: `sourcetype=linux_secure`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_secure. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_ips > baseline_ips + 3` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Login from Unusual Source IPs** — Logins from unusual source IPs may indicate account compromise or unauthorized access.\n\nDocumented **Data sources**: `sourcetype=linux_secure`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Scatter Plot",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you notice when someone signs in from more different places than they usually do in a day, which can be travel—or someone else using their name.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.75",
              "n": "Failed su Attempts",
              "c": "high",
              "f": "beginner",
              "v": "Failed su attempts indicate potential privilege escalation attempts or credential compromise.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_secure`",
              "q": "index=os sourcetype=linux_secure \"su:\" \"FAILED\" OR \"su:\" \"authentication failure\"\n| stats count as failures by host, user\n| where failures > 3",
              "m": "Monitor /var/log/auth.log for su command failures. Create alerts for multiple failures by same user in short window. Include target user context showing privilege escalation target.",
              "z": "Table, Alert",
              "kfp": "Monitoring scripts that intentionally exercise **su** in **cron**; **failures=1** for a person mis-typing once—`>3` already dampens; **user** misparsed from **PAM** lines—**props** work first.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_secure`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor /var/log/auth.log for su command failures. Create alerts for multiple failures by same user in short window. Include target user context showing privilege escalation target.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_secure \"su:\" \"FAILED\" OR \"su:\" \"authentication failure\"\n| stats count as failures by host, user\n| where failures > 3\n```\n\nUnderstanding this SPL\n\n**Failed su Attempts** — Failed su attempts indicate potential privilege escalation attempts or credential compromise.\n\nDocumented **Data sources**: `sourcetype=linux_secure`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_secure. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failures > 3` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Failed su Attempts** — Failed su attempts indicate potential privilege escalation attempts or credential compromise.\n\nDocumented **Data sources**: `sourcetype=linux_secure`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the old “become another user” command keeps failing, which can be someone guessing a password for a powerful account or a script that’s stuck in a loop.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.76",
              "n": "Privilege Escalation Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Privilege escalation indicates successful security breach enabling attacker to gain administrative access.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_secure`",
              "q": "index=os sourcetype=linux_secure \"sudo:\" AND \"command=\"\n| stats count by host, user, command\n| where user!=\"root\"",
              "m": "Monitor sudo logs for privilege escalation attempts. Create alerts for unexpected sudo usage by specific users. Correlate with auditctl syscall logs showing actual command execution after privilege gain.",
              "z": "Alert, Table",
              "kfp": "Legitimate one-off `sudo` during vendor support windows; `user` field parsing that maps **root** differently; CI users that are supposed to `sudo` dozens of times per hour—move those to an explicit allow class.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_secure`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor sudo logs for privilege escalation attempts. Create alerts for unexpected sudo usage by specific users. Correlate with auditctl syscall logs showing actual command execution after privilege gain.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_secure \"sudo:\" AND \"command=\"\n| stats count by host, user, command\n| where user!=\"root\"\n```\n\nUnderstanding this SPL\n\n**Privilege Escalation Detection** — Privilege escalation indicates successful security breach enabling attacker to gain administrative access.\n\nDocumented **Data sources**: `sourcetype=linux_secure`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_secure. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, user, command** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where user!=\"root\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| search Authentication.user=*admin* OR Authentication.user=root\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privilege Escalation Detection** — Privilege escalation indicates successful security breach enabling attacker to gain administrative access.\n\nDocumented **Data sources**: `sourcetype=linux_secure`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see who used the “run as superuser” path and what they tried to run, so odd surprises can be checked while they still matter.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| search Authentication.user=*admin* OR Authentication.user=root",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.05",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of ASD E8 E8.05 (Restrict administrative privileges) — Splunk UC-1.1.76: Privilege Escalation Detection.",
                  "ea": "Saved search 'UC-1.1.76' running on sourcetype=linux_secure, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/essential-eight/essential-eight-maturity-model"
                },
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "SI-4",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of FedRAMP SI-4 (System monitoring) — Splunk UC-1.1.76: Privilege Escalation Detection.",
                  "ea": "Saved search 'UC-1.1.76' running on sourcetype=linux_secure, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "RBI Cyber",
                  "v": "2016 (as amended)",
                  "cl": "Annex-A",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of RBI Cyber Annex-A (Baseline cyber-security controls) — Splunk UC-1.1.76: Privilege Escalation Detection.",
                  "ea": "Saved search 'UC-1.1.76' running on sourcetype=linux_secure, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://rbi.org.in/Scripts/NotificationUser.aspx?Id=10435"
                },
                {
                  "r": "SAMA CSF",
                  "v": "v1.0 (2017)",
                  "cl": "3.3.5",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of SAMA CSF 3.3.5 (Security monitoring) — Splunk UC-1.1.76: Privilege Escalation Detection.",
                  "ea": "Saved search 'UC-1.1.76' running on sourcetype=linux_secure, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.sama.gov.sa/en-US/Laws/BankingRules/SAMA%20Cyber%20Security%20Framework.pdf"
                }
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.77",
              "n": "Unauthorized Cron Job Additions",
              "c": "critical",
              "f": "beginner",
              "v": "Unauthorized cron jobs enable persistent malware execution and data exfiltration.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit`",
              "q": "index=os sourcetype=linux_audit path~=\"/var/spool/cron/crontabs/*\" action=modified\n| stats count by host, auid, file_name\n| where count > 0",
              "m": "Monitor /var/spool/cron/crontabs/ and /etc/cron.d/ for modifications via auditctl. Create alerts on any new cron job additions. Compare against known application cron jobs from baseline.",
              "z": "Alert, Table",
              "kfp": "Patching that drops files into /etc/cron.d every month; configuration tools that rewrite the same line; login UID showing as 4294967295 on some kernels—SMEs must decide whether to keep or filter.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor /var/spool/cron/crontabs/ and /etc/cron.d/ for modifications via auditctl. Create alerts on any new cron job additions. Compare against known application cron jobs from baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit path~=\"/var/spool/cron/crontabs/*\" action=modified\n| stats count by host, auid, file_name\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Unauthorized Cron Job Additions** — Unauthorized cron jobs enable persistent malware execution and data exfiltration.\n\nDocumented **Data sources**: `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, auid, file_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the hidden clock-driven task list is edited, which is a classic place for a bad actor to leave something running in the background.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.78",
              "n": "Open Port Changes",
              "c": "critical",
              "f": "intermediate",
              "v": "New listening ports indicate service configuration changes or malware opening backdoors.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=openPorts`",
              "q": "index=os sourcetype=openPorts host=*\n| stats latest(port_list) as current_ports by host\n| eval new_ports=port_list - previous_ports\n| where isnotnull(new_ports)",
              "m": "Use Splunk_TA_nix openPorts input to track listening ports per host. Baseline expected ports. Create alerts on new listening ports with escalation to change management. Include process information showing which service opened port.",
              "z": "Alert, Table",
              "kfp": "Apps that only bind on schedule; `openPorts` interval longer than the short-lived service; **Docker** or **K8s** that churn ports often—separate UCs for those estates.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=openPorts`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk_TA_nix openPorts input to track listening ports per host. Baseline expected ports. Create alerts on new listening ports with escalation to change management. Include process information showing which service opened port.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=openPorts host=*\n| stats latest(port_list) as current_ports by host\n| eval new_ports=port_list - previous_ports\n| where isnotnull(new_ports)\n```\n\nUnderstanding this SPL\n\n**Open Port Changes** — New listening ports indicate service configuration changes or malware opening backdoors.\n\nDocumented **Data sources**: `sourcetype=openPorts`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: openPorts. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=openPorts. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **new_ports** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(new_ports)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the list of “who is listening on the network” on a box changes, which can show a new service or a new hole an attacker could use.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.79",
              "n": "Setcap Binary Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Setcap binary modifications enable privilege escalation bypassing traditional privilege boundary enforcement.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit`",
              "q": "index=os sourcetype=linux_audit type=CAPABILITY_CHANGE\n| stats count by host, name, cap_changes\n| where count > 0",
              "m": "Monitor setcap changes via auditctl CAPABILITY_CHANGE events. Create alerts on any setcap modifications. Maintain whitelist of expected capability assignments by application.",
              "z": "Alert, Table",
              "kfp": "Vendors that `setcap` in `%post` every upgrade; the broad **search** in the `spl` field until you add your **key=**; duplicate **PATH**+**capset**—**dedup** in props.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor setcap changes via auditctl CAPABILITY_CHANGE events. Create alerts on any setcap modifications. Maintain whitelist of expected capability assignments by application.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit type=CAPABILITY_CHANGE\n| stats count by host, name, cap_changes\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Setcap Binary Monitoring** — Setcap binary modifications enable privilege escalation bypassing traditional privilege boundary enforcement.\n\nDocumented **Data sources**: `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, name, cap_changes** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when special “extra powers” are glued onto normal programs, which can be a careful admin—or someone trying a clever way around normal login rules.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.80",
              "n": "Systemd Unit Failures",
              "c": "high",
              "f": "beginner",
              "v": "Service unit failures indicate application issues or configuration problems requiring immediate remediation.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog`",
              "q": "index=os sourcetype=syslog \"systemd\" AND (\"Failed\" OR \"ERROR\" OR \"not-found\")\n| stats count by host, unit\n| where count > 0",
              "m": "Monitor systemd logs via journalctl. Create alerts for service failures. Include restart policy status and recent unit logs to help troubleshooting. Correlate with dependency failures.",
              "z": "Alert, Table",
              "kfp": "Short spikes during **package** upgrades; units that **restart** often and log `Failed` even when **systemd** eventually recovers—tighten with `| where count>5` in 15**m**; multi-line events split wrong in **props**.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor systemd logs via journalctl. Create alerts for service failures. Include restart policy status and recent unit logs to help troubleshooting. Correlate with dependency failures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"systemd\" AND (\"Failed\" OR \"ERROR\" OR \"not-found\")\n| stats count by host, unit\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Systemd Unit Failures** — Service unit failures indicate application issues or configuration problems requiring immediate remediation.\n\nDocumented **Data sources**: `sourcetype=syslog`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, unit** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the built-in service manager keeps saying it could not start or find an important service, so you can fix the app or the machine before users feel it.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.81",
              "n": "Systemd Timer Missed Triggers",
              "c": "medium",
              "f": "beginner",
              "v": "Missed systemd timers indicate scheduling issues or system overload preventing scheduled tasks.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog`",
              "q": "index=os sourcetype=syslog \"systemd\" \"timer\" (\"cannot run\" OR \"Skipping\")\n| stats count by host, timer_unit\n| where count > 0",
              "m": "Monitor systemd timer logs for \"cannot run\" or skipped trigger messages. Create alerts when timers miss scheduled runs. Include impact assessment based on timer purpose.",
              "z": "Alert, Table",
              "kfp": "Hosts that sleep or **hibernate**; **DST** jumps; **timer** units that intentionally **skip** overlaps—read the specific **systemd** wording in `_raw` before paging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor systemd timer logs for \"cannot run\" or skipped trigger messages. Create alerts when timers miss scheduled runs. Include impact assessment based on timer purpose.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"systemd\" \"timer\" (\"cannot run\" OR \"Skipping\")\n| stats count by host, timer_unit\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Systemd Timer Missed Triggers** — Missed systemd timers indicate scheduling issues or system overload preventing scheduled tasks.\n\nDocumented **Data sources**: `sourcetype=syslog`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, timer_unit** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a scheduled “do this later” task on the machine did not run when it should, which can quietly break backups or clean-up jobs.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.82",
              "n": "D-State (Uninterruptible Sleep) Process Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "D-state processes indicate hanging I/O operations or kernel deadlocks requiring immediate investigation.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=ps`",
              "q": "index=os sourcetype=ps host=* state=\"D\"\n| stats count as dstate_count by host\n| where dstate_count > 0",
              "m": "Monitor ps output for D-state (uninterruptible sleep) processes. Create alerts when any D-state processes exist for >5 minutes. Include wchan (wait channel) showing what I/O operation is blocking.",
              "z": "Alert, Table",
              "kfp": "Short **D** during heavy **NFS** metadata—trend duration in a v2 use case; **ps** interval so large you only ever see one row; field `S` vs `state` mismatch—fix **props** first.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=ps`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor ps output for D-state (uninterruptible sleep) processes. Create alerts when any D-state processes exist for >5 minutes. Include wchan (wait channel) showing what I/O operation is blocking.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=ps host=* state=\"D\"\n| stats count as dstate_count by host\n| where dstate_count > 0\n```\n\nUnderstanding this SPL\n\n**D-State (Uninterruptible Sleep) Process Detection** — D-state processes indicate hanging I/O operations or kernel deadlocks requiring immediate investigation.\n\nDocumented **Data sources**: `sourcetype=ps`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: ps. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=ps. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where dstate_count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when programs are stuck waiting on the disk or network in a way they can’t be interrupted, which is a strong hint to look at storage before blaming the app.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.83",
              "n": "Process CPU Affinity Changes",
              "c": "medium",
              "f": "beginner",
              "v": "CPU affinity changes can indicate attempted performance optimization or malicious CPU isolation attempts.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit`",
              "q": "index=os sourcetype=linux_audit type=SCHED_SETAFFINITY\n| stats count by host, pid, comm\n| where count > 0",
              "m": "Monitor sched_setaffinity syscalls via auditctl. Create alerts on unexpected CPU affinity changes. Correlate with application deployment or configuration management changes.",
              "z": "Table, Alert",
              "kfp": "Legit **numactl** orchestration; **Java** runtimes that set affinity on start; **kubernetes** device plugins—add allow **lookup** on **exe**.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor sched_setaffinity syscalls via auditctl. Create alerts on unexpected CPU affinity changes. Correlate with application deployment or configuration management changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit type=SCHED_SETAFFINITY\n| stats count by host, pid, comm\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Process CPU Affinity Changes** — CPU affinity changes can indicate attempted performance optimization or malicious CPU isolation attempts.\n\nDocumented **Data sources**: `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, pid, comm** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when something tries to pin programs to specific processor cores, which is normal for some big number crunching, and odd for everyday small programs.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.84",
              "n": "Runaway Process Detection (CPU Hog)",
              "c": "high",
              "f": "intermediate",
              "v": "Runaway processes consuming excessive CPU degrade performance for all workloads on the host.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=top`",
              "q": "index=os sourcetype=top host=*\n| stats avg(cpu_pct) as avg_cpu by host, process\n| where avg_cpu > 80",
              "m": "Use Splunk_TA_nix top input to track per-process CPU usage. Create alerts for processes consistently exceeding 80% CPU. Include user, parent process, and command line context. Suggest kill or scaling actions.",
              "z": "Table, Timechart",
              "kfp": "**gzip** during backup hour; **java** GC spikes; **containers** where **process** names collapse to `java`—add **container** id from **cgroup** in a follow-on use case.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=top`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk_TA_nix top input to track per-process CPU usage. Create alerts for processes consistently exceeding 80% CPU. Include user, parent process, and command line context. Suggest kill or scaling actions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=top host=*\n| stats avg(cpu_pct) as avg_cpu by host, process\n| where avg_cpu > 80\n```\n\nUnderstanding this SPL\n\n**Runaway Process Detection (CPU Hog)** — Runaway processes consuming excessive CPU degrade performance for all workloads on the host.\n\nDocumented **Data sources**: `sourcetype=top`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: top. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=top. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, process** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_cpu > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timechart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you spot one program that is eating most of the processor for a while, so you can stop it or fix it before everything else on the machine feels slow.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.85",
              "n": "Memory Hog Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Memory-consuming processes can cause OOM conditions affecting all applications on the host.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=top`",
              "q": "index=os sourcetype=top host=*\n| stats avg(mem_pct) as avg_mem by host, process\n| where avg_mem > 40",
              "m": "Monitor per-process memory percentage from top input. Create alerts for processes consistently exceeding 40% of system memory. Include growth trend and suggest right-sizing or memory limit enforcement.",
              "z": "Table, Gauge",
              "kfp": "**Elasticsearch** / **JVM** heaps that are large by design; **k8s** pause containers; **40** may be too low on 1**TB** RAM hosts—scale with a **lookup** of `host_ram_gb`.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=top`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor per-process memory percentage from top input. Create alerts for processes consistently exceeding 40% of system memory. Include growth trend and suggest right-sizing or memory limit enforcement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=top host=*\n| stats avg(mem_pct) as avg_mem by host, process\n| where avg_mem > 40\n```\n\nUnderstanding this SPL\n\n**Memory Hog Detection** — Memory-consuming processes can cause OOM conditions affecting all applications on the host.\n\nDocumented **Data sources**: `sourcetype=top`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: top. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=top. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, process** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_mem > 40` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Gauge",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you find one program that is using a big share of the machine’s memory for a while, which is a common path to crashes when memory runs out.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.86",
              "n": "Fork Bomb Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Fork bombs exhaust PID space and system resources, making systems unusable.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:process_count`",
              "q": "index=os sourcetype=custom:process_count host=*\n| stats avg(process_count) as avg_procs, stdev(process_count) as stddev by host\n| where process_count > (avg_procs + 4*stddev)",
              "m": "Track /proc process count or 'ps aux | wc -l'. Create alerts when process count spikes suddenly. Include threshold based on baseline plus 4x standard deviation to detect sudden fork activity.",
              "z": "Alert, Anomaly Chart",
              "kfp": "**CI** builds that spawn thousands of short **gcc** jobs once a day; **K8s** nodes with real **pod** churn—move this UC to bare metal or **VM** estates first or add a host allowlist filter for physical servers only.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:process_count`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack /proc process count or 'ps aux | wc -l'. Create alerts when process count spikes suddenly. Include threshold based on baseline plus 4x standard deviation to detect sudden fork activity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:process_count host=*\n| stats avg(process_count) as avg_procs, stdev(process_count) as stddev by host\n| where process_count > (avg_procs + 4*stddev)\n```\n\nUnderstanding this SPL\n\n**Fork Bomb Detection** — Fork bombs exhaust PID space and system resources, making systems unusable.\n\nDocumented **Data sources**: `sourcetype=custom:process_count`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:process_count. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:process_count. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where process_count > (avg_procs + 4*stddev)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Anomaly Chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a machine suddenly has far more running programs than it usually does, which can be a runaway script that keeps starting children until the system chokes.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.87",
              "n": "Process Namespace Breakout Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Process namespace breakout indicates container escape or privilege escalation enabling access to host.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit`",
              "q": "index=os sourcetype=linux_audit type=CONTAINER_ESCAPE OR syscall=setns\n| stats count by host, pid, comm\n| where count > 0",
              "m": "Monitor setns syscalls via auditctl. Create alerts on namespace escape attempts. Correlate with process name and user to identify unauthorized actors.",
              "z": "Alert, Table",
              "kfp": "**CNI** / **kubelet** **setns** during normal **pod** starts; **debug** sidecars—add **exe** allow lists; duplicate **SYSCALL**+**PATH** lines.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor setns syscalls via auditctl. Create alerts on namespace escape attempts. Correlate with process name and user to identify unauthorized actors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit type=CONTAINER_ESCAPE OR syscall=setns\n| stats count by host, pid, comm\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Process Namespace Breakout Detection** — Process namespace breakout indicates container escape or privilege escalation enabling access to host.\n\nDocumented **Data sources**: `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, pid, comm** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when programs try to slip out of the little fenced area they were put in, which is one of the scarier signs on machines that run many small apps in boxes.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.88",
              "n": "Container Escape Attempt Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Container escape attempts are critical security events indicating sophisticated attack against containerized infrastructure.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, AppArmor/SELinux denials`",
              "q": "index=os sourcetype=syslog (AppArmor OR SELinux) \"container\" \"denied\"\n| stats count by host, container_id\n| where count > threshold",
              "m": "Monitor container runtime logs and SELinux/AppArmor denials for escape signatures. Create immediate critical alerts. Correlate with process syscalls and capability usage anomalies.",
              "z": "Alert, Table",
              "kfp": "**Build** hosts that compile in **unconfined** modes; **shift-left** scans; log text that contains the word **container** in an unrelated sense—tighten keywords per your **CRI** vendor.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, AppArmor/SELinux denials`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor container runtime logs and SELinux/AppArmor denials for escape signatures. Create immediate critical alerts. Correlate with process syscalls and capability usage anomalies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog (AppArmor OR SELinux) \"container\" \"denied\"\n| stats count by host, container_id\n| where count > threshold\n```\n\nUnderstanding this SPL\n\n**Container Escape Attempt Detection** — Container escape attempts are critical security events indicating sophisticated attack against containerized infrastructure.\n\nDocumented **Data sources**: `sourcetype=syslog, AppArmor/SELinux denials`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, container_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > threshold` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the safety rules around boxed-in programs say “no” more than a few times in a way that mentions those little program boxes, which is worth a fast look.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.89",
              "n": "Syslog Flood Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Syslog floods can overwhelm log infrastructure and mask real security events in log noise.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=syslog`",
              "q": "index=os sourcetype=syslog host=*\n| timechart count by host\n| where count > 10000 in 5 minute window",
              "m": "Monitor syslog event rate per host. Create alerts for rate spikes indicating syslog flood. Include source identification and recommend investigation of root cause or log source throttling.",
              "z": "Timechart, Alert",
              "kfp": "**SIEM** **forwarders** that batch; **k8s** **node** **syslog** during **rollouts**; too-low **10**k on a **log** **aggregator**—run this **per** **source** **type** child, not on the **agg** only, or you will never learn the true offender.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor syslog event rate per host. Create alerts for rate spikes indicating syslog flood. Include source identification and recommend investigation of root cause or log source throttling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog host=*\n| timechart count by host\n| where count > 10000 in 5 minute window\n```\n\nUnderstanding this SPL\n\n**Syslog Flood Detection** — Syslog floods can overwhelm log infrastructure and mask real security events in log noise.\n\nDocumented **Data sources**: `sourcetype=syslog`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where count > 10000 in 5 minute window` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when one machine suddenly starts writing a truly huge diary, so a broken program or attack does not fill the log system and hide other problems.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.90",
              "n": "Journal Disk Usage Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Journal disk usage growth can consume valuable storage space, potentially filling disks.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:journalctl_usage`",
              "q": "index=os sourcetype=custom:journalctl_usage host=*\n| stats latest(disk_usage_mb) as journal_size by host\n| where journal_size > 1000",
              "m": "Create a scripted input running 'journalctl --disk-usage' monthly. Alert when journal size exceeds 1GB. Include recommendations to prune old journal entries using journalctl --vacuum-time or --vacuum-size.",
              "z": "Gauge, Single Value",
              "kfp": "Hosts that intentionally keep a **large** **journal** for **compliance**; **MB** vs **MiB** confusion in the script—document units in the **event**; **docker** **graph** drivers that store journal elsewhere—scope by **CMDB** role.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:journalctl_usage`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input running 'journalctl --disk-usage' monthly. Alert when journal size exceeds 1GB. Include recommendations to prune old journal entries using journalctl --vacuum-time or --vacuum-size.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:journalctl_usage host=*\n| stats latest(disk_usage_mb) as journal_size by host\n| where journal_size > 1000\n```\n\nUnderstanding this SPL\n\n**Journal Disk Usage Monitoring** — Journal disk usage growth can consume valuable storage space, potentially filling disks.\n\nDocumented **Data sources**: `sourcetype=custom:journalctl_usage`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:journalctl_usage. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:journalctl_usage. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where journal_size > 1000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge, Single Value\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the machine’s own event diary is using a lot of disk space, so it does not quietly fill the drive and break other programs that need room to work.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.91",
              "n": "Log Rotation Failures",
              "c": "medium",
              "f": "beginner",
              "v": "Log rotation failures can cause log files to grow unbounded, consuming disk space and impacting log analysis.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, logrotate output`",
              "q": "index=os sourcetype=syslog \"logrotate\" (\"error\" OR \"failed\" OR \"ERROR\")\n| stats count by host, log_file\n| where count > 0",
              "m": "Monitor logrotate errors via syslog. Create alerts for rotation failures. Include recommended actions to fix permissions or free disk space blocking rotation.",
              "z": "Alert, Table",
              "kfp": "Kernel or application messages that contain the word \"error\" near \"logrotate\" during a manual `logrotate -f` test, or syslog noise from non-rotation tools, can match without a failed rotation. Sparse `rex` extractions of file paths can leave `log_file` empty on some formats—tune the sample `rex` to your OS’s logrotate template.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, logrotate output`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor logrotate errors via syslog. Create alerts for rotation failures. Include recommended actions to fix permissions or free disk space blocking rotation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"logrotate\" (\"error\" OR \"failed\" OR \"ERROR\")\n| stats count by host, log_file\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Log Rotation Failures** — Log rotation failures can cause log files to grow unbounded, consuming disk space and impacting log analysis.\n\nDocumented **Data sources**: `sourcetype=syslog, logrotate output`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, log_file** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for logrotate failures in syslog so log files do not silently grow forever and fill the disk or hide what happened on the server.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.92",
              "n": "Auditd Daemon Health",
              "c": "high",
              "f": "beginner",
              "v": "Auditd daemon failure results in loss of security audit trail, creating compliance and forensic gaps.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, linux_audit`",
              "q": "index=os sourcetype=linux_audit host=*\n| stats count as audit_events, max(_time) as last_event by host\n| where audit_events == 0 OR (now() - last_event) > 300",
              "m": "Monitor auditd process status and audit event flow. Create alerts when no audit events are received for 5+ minutes. Include daemon status checks and restart recommendations.",
              "z": "Alert, Single Value",
              "kfp": "Hosts that rarely emit audit events by design (minimal `audit.rules`) can look “stale” even though auditd is healthy. Indexing lag, forwarder buffer backlog, or a narrow `index=` scope that omits the host’s audit data can falsely suggest a gap. During maintenance we also pause or redirect audit shipping on purpose.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor auditd process status and audit event flow. Create alerts when no audit events are received for 5+ minutes. Include daemon status checks and restart recommendations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit host=*\n| stats count as audit_events, max(_time) as last_event by host\n| where audit_events == 0 OR (now() - last_event) > 300\n```\n\nUnderstanding this SPL\n\n**Auditd Daemon Health** — Auditd daemon failure results in loss of security audit trail, creating compliance and forensic gaps.\n\nDocumented **Data sources**: `sourcetype=syslog, linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where audit_events == 0 OR (now() - last_event) > 300` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Auditd Daemon Health** — Auditd daemon failure results in loss of security audit trail, creating compliance and forensic gaps.\n\nDocumented **Data sources**: `sourcetype=syslog, linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Single Value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We make sure security audit records keep flowing from Linux hosts so we are not blind to changes and logins when auditors or incident response need the trail.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.93",
              "n": "Rsyslog Queue Backlog Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "Rsyslog queue backlog indicates log forwarding issues or overload of remote syslog infrastructure.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:rsyslog_stats`",
              "q": "index=os sourcetype=custom:rsyslog_stats host=*\n| stats latest(queue_size) as backlog by host\n| where backlog > 100",
              "m": "Enable rsyslog statistics logging via $ActionFileDefaultTemplate and stats modules. Monitor queue_size metric. Alert when backlog accumulates indicating destination unavailability.",
              "z": "Gauge, Table",
              "kfp": "Short spikes in `queue_size` during remote indexer restarts or TLS renegotiation often clear without data loss. imuxsock or local-only actions can still show queue metrics depending on how you named `queue_size` in the custom parser—confirm field names against a sample event.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:rsyslog_stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable rsyslog statistics logging via $ActionFileDefaultTemplate and stats modules. Monitor queue_size metric. Alert when backlog accumulates indicating destination unavailability.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:rsyslog_stats host=*\n| stats latest(queue_size) as backlog by host\n| where backlog > 100\n```\n\nUnderstanding this SPL\n\n**Rsyslog Queue Backlog Monitoring** — Rsyslog queue backlog indicates log forwarding issues or overload of remote syslog infrastructure.\n\nDocumented **Data sources**: `sourcetype=custom:rsyslog_stats`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:rsyslog_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:rsyslog_stats. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where backlog > 100` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch rsyslog queue depth so we know when the log shipper is falling behind before messages are dropped or the disk fills.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.94",
              "n": "Failed Log Forwarding",
              "c": "high",
              "f": "beginner",
              "v": "Failed log forwarding creates data loss in centralized logging infrastructure, creating gaps in monitoring and compliance.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, rsyslog/syslog-ng error logs`",
              "q": "index=os sourcetype=syslog (\"rsyslog\" OR \"syslog-ng\") (\"error\" OR \"connection refused\" OR \"name resolution failed\")\n| stats count by host, remote_host\n| where count > 0",
              "m": "Monitor rsyslog/syslog-ng logs for forwarding failures. Create alerts on connection or name resolution errors. Include impact assessment showing how many events are being dropped.",
              "z": "Alert, Table",
              "kfp": "Transient \"connection refused\" lines during Splunk HEC or LF rotation, or a one-off DNS blip, can fire without sustained loss. Some distros log routine TLS warnings that mention \"error\" even when messages were retried successfully.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, rsyslog/syslog-ng error logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor rsyslog/syslog-ng logs for forwarding failures. Create alerts on connection or name resolution errors. Include impact assessment showing how many events are being dropped.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog (\"rsyslog\" OR \"syslog-ng\") (\"error\" OR \"connection refused\" OR \"name resolution failed\")\n| stats count by host, remote_host\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Failed Log Forwarding** — Failed log forwarding creates data loss in centralized logging infrastructure, creating gaps in monitoring and compliance.\n\nDocumented **Data sources**: `sourcetype=syslog, rsyslog/syslog-ng error logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, remote_host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We detect when this Linux host cannot ship logs to the central collector so monitoring and compliance evidence do not quietly go missing.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.95",
              "n": "TCP Connection Establishment Rate",
              "c": "medium",
              "f": "intermediate",
              "v": "High connection establishment rate may indicate application behavior changes or DDoS attack preparation.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:netstat_stats, /proc/net/snmp`",
              "q": "index=os sourcetype=custom:netstat_stats host=*\n| stats avg(TcpActiveOpens) as avg_active by host\n| streamstats avg(avg_active) as baseline, stdev(avg_active) as stddev\n| where avg_active > baseline + 3*stddev",
              "m": "Monitor TcpActiveOpens from /proc/net/snmp. Track baseline and detect anomalies. Create alerts for sustained elevation in connection rate indicating potential DDoS or application issues.",
              "z": "Timechart, Anomaly Chart",
              "kfp": "Batch jobs, OS updates pulling many packages, or a new service that legitimately opens many short-lived TCP connections can raise `TcpActiveOpens` without an attack. Cold start after reboot also shifts the baseline until `streamstats` has enough history.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:netstat_stats, /proc/net/snmp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor TcpActiveOpens from /proc/net/snmp. Track baseline and detect anomalies. Create alerts for sustained elevation in connection rate indicating potential DDoS or application issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:netstat_stats host=*\n| stats avg(TcpActiveOpens) as avg_active by host\n| streamstats avg(avg_active) as baseline, stdev(avg_active) as stddev\n| where avg_active > baseline + 3*stddev\n```\n\nUnderstanding this SPL\n\n**TCP Connection Establishment Rate** — High connection establishment rate may indicate application behavior changes or DDoS attack preparation.\n\nDocumented **Data sources**: `sourcetype=custom:netstat_stats, /proc/net/snmp`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:netstat_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:netstat_stats. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `streamstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where avg_active > baseline + 3*stddev` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Anomaly Chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how fast the system opens new TCP connections so we notice traffic spikes that might mean a misbehaving app or an early-stage flood.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.96",
              "n": "NUMA Hit/Miss Ratio Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Tracking NUMA hit/miss ratio identifies opportunities for workload optimization on NUMA systems.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:numa_zone`",
              "q": "index=os sourcetype=custom:numa_zone host=*\n| stats sum(numa_hit) as hits, sum(numa_miss) as misses by host\n| eval hit_ratio=hits/(hits+misses)\n| where hit_ratio < 0.9",
              "m": "Parse /proc/zoneinfo for NUMA statistics per zone. Calculate hit ratio monthly. Alert when ratio drops below 90%, suggesting memory allocation pattern misalignment with NUMA topology.",
              "z": "Gauge, Timechart",
              "kfp": "Non-NUMA or single-socket hosts may emit sparse or zero `numa_miss` events; ratios can be misleading on VMs where the hypervisor rebalances memory. Short sampling windows during migrations or kernel upgrades can dip the hit ratio without a chronic workload problem.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:numa_zone`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse /proc/zoneinfo for NUMA statistics per zone. Calculate hit ratio monthly. Alert when ratio drops below 90%, suggesting memory allocation pattern misalignment with NUMA topology.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:numa_zone host=*\n| stats sum(numa_hit) as hits, sum(numa_miss) as misses by host\n| eval hit_ratio=hits/(hits+misses)\n| where hit_ratio < 0.9\n```\n\nUnderstanding this SPL\n\n**NUMA Hit/Miss Ratio Tracking** — Tracking NUMA hit/miss ratio identifies opportunities for workload optimization on NUMA systems.\n\nDocumented **Data sources**: `sourcetype=custom:numa_zone`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:numa_zone. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:numa_zone. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hit_ratio < 0.9` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge, Timechart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track NUMA local versus remote memory use so we can see when workloads are sitting on the wrong socket and wasting performance.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.97",
              "n": "CPU C-State Residency Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "CPU C-state residency tracking optimizes power consumption and identifies power management issues.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:cpuidle, /sys/devices/system/cpu/cpu*/cpuidle/state*/`",
              "q": "index=os sourcetype=custom:cpuidle host=*\n| stats avg(c_state_time) as avg_time by host, c_state\n| eval idle_pct=avg_time/total_time*100",
              "m": "Create a scripted input reading CPU idle state residency times. Track time spent in each C-state. Alert when C-state distribution changes unexpectedly, indicating power management changes.",
              "z": "Pie Chart, Heatmap",
              "kfp": "Laptop or datacenter power policies legitimately change C-state mix after BIOS, `cpupower`, or `intel_pstate` updates. The SPL references `total_time` only if your collector emits it—without that field, `idle_pct` is empty rather than wrong.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:cpuidle, /sys/devices/system/cpu/cpu*/cpuidle/state*/`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input reading CPU idle state residency times. Track time spent in each C-state. Alert when C-state distribution changes unexpectedly, indicating power management changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:cpuidle host=*\n| stats avg(c_state_time) as avg_time by host, c_state\n| eval idle_pct=avg_time/total_time*100\n```\n\nUnderstanding this SPL\n\n**CPU C-State Residency Monitoring** — CPU C-state residency tracking optimizes power consumption and identifies power management issues.\n\nDocumented **Data sources**: `sourcetype=custom:cpuidle, /sys/devices/system/cpu/cpu*/cpuidle/state*/`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:cpuidle. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:cpuidle. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, c_state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **idle_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie Chart, Heatmap\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We measure how long CPUs stay in each sleep state so we can spot policy or firmware changes that save power at the cost of latency.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.98",
              "n": "TLB Shootdown Rate Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "High TLB shootdown rates indicate excessive cross-CPU cache invalidations affecting performance on multi-CPU systems.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:tlb_stats, /proc/interrupts`",
              "q": "index=os sourcetype=custom:tlb_stats host=*\n| stats avg(tlb_shootdown_rate) as avg_rate by host\n| where avg_rate > threshold",
              "m": "Monitor TLB shootdown interrupts from /proc/interrupts. Create alerts when shootdown rate exceeds baseline. Recommend application profiling and memory access pattern optimization.",
              "z": "Timechart, Alert",
              "kfp": "Kernel compile-time symbols and CPU counts change how `tlb_shootdown_rate` is derived; compare hosts with similar core counts. Large-page or VM balloon operations can legitimately raise shootdowns during narrow windows. Replace the literal `threshold` token in SPL with a numeric macro before production use.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:tlb_stats, /proc/interrupts`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor TLB shootdown interrupts from /proc/interrupts. Create alerts when shootdown rate exceeds baseline. Recommend application profiling and memory access pattern optimization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:tlb_stats host=*\n| stats avg(tlb_shootdown_rate) as avg_rate by host\n| where avg_rate > threshold\n```\n\nUnderstanding this SPL\n\n**TLB Shootdown Rate Monitoring** — High TLB shootdown rates indicate excessive cross-CPU cache invalidations affecting performance on multi-CPU systems.\n\nDocumented **Data sources**: `sourcetype=custom:tlb_stats, /proc/interrupts`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:tlb_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:tlb_stats. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_rate > threshold` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch TLB shootdowns so we know when CPUs are invalidating each other’s memory mappings too often—a hidden cause of slowdowns on big servers.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.99",
              "n": "Kernel Lock Contention Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Kernel lock contention degrades multi-core scalability and application throughput.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:lock_stats, /proc/lock_stat`",
              "q": "index=os sourcetype=custom:lock_stats host=*\n| stats avg(contentions) as avg_contention by host, lock_name\n| where avg_contention > threshold",
              "m": "Enable kernel lock statistics via /proc/lock_stat or perf tools. Monitor lock contention per lock. Create alerts for high-contention locks with recommendations for kernel/application tuning.",
              "z": "Table, Bar Chart",
              "kfp": "`/proc/lock_stat` and lock names vary by kernel; a lock that spikes during boot or `kexec` is often benign. Replace `threshold` in SPL with a tuned macro. Profiling with `perf` on a lab host should confirm the hot lock before we chase it in production.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:lock_stats, /proc/lock_stat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable kernel lock statistics via /proc/lock_stat or perf tools. Monitor lock contention per lock. Create alerts for high-contention locks with recommendations for kernel/application tuning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:lock_stats host=*\n| stats avg(contentions) as avg_contention by host, lock_name\n| where avg_contention > threshold\n```\n\nUnderstanding this SPL\n\n**Kernel Lock Contention Detection** — Kernel lock contention degrades multi-core scalability and application throughput.\n\nDocumented **Data sources**: `sourcetype=custom:lock_stats, /proc/lock_stat`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:lock_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:lock_stats. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, lock_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_contention > threshold` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar Chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for hot kernel locks so we can tell when applications are tripping over each other inside the operating system, not just in user code.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.100",
              "n": "Softirq Rate Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "High softirq rates indicate kernel workload distribution issues or network stack pressure.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=vmstat`",
              "q": "index=os sourcetype=vmstat host=*\n| stats avg(si) as avg_softirq by host\n| where avg_softirq > 1000",
              "m": "Monitor softirq field from vmstat. Create alerts when softirq rate exceeds 1000 per second. Correlate with network packet rate to identify if networking-driven or other kernel subsystem.",
              "z": "Timechart, Alert",
              "kfp": "Right after boot, large container image pulls, or backup restores can cause short `si` bursts that are expected. If the TA’s sampling interval differs from classic vmstat’s 1s default, rescale thresholds to match your `si` units. Very large `si` thresholds (e.g. 1000) should be validated against your host class.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=vmstat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor softirq field from vmstat. Create alerts when softirq rate exceeds 1000 per second. Correlate with network packet rate to identify if networking-driven or other kernel subsystem.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=vmstat host=*\n| stats avg(si) as avg_softirq by host\n| where avg_softirq > 1000\n```\n\nUnderstanding this SPL\n\n**Softirq Rate Monitoring** — High softirq rates indicate kernel workload distribution issues or network stack pressure.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: vmstat. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=vmstat. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_softirq > 1000` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Softirq Rate Monitoring** — High softirq rates indicate kernel workload distribution issues or network stack pressure.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where mem_pct > 95 OR swap_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Linux memory swap-in from vmstat so we can tell when the server is shuffling data to disk because it is out of RAM—before everything feels slow.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.101",
              "n": "Context Switch Anomalies Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Context switch anomalies indicate scheduler issues or unexpected process workload changes.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=vmstat`",
              "q": "index=os sourcetype=vmstat host=*\n| stats avg(cs) as baseline\n| eventstats stdev(cs) as stddev\n| where cs > baseline + 3*stddev",
              "m": "Use vmstat context switch field with statistical anomaly detection. Alert on 3-sigma deviations from baseline. Include process state analysis to identify cause.",
              "z": "Timechart, Anomaly Detector",
              "kfp": "Short time windows after boot, during `fork` storms from CI, or when `vmstat` sampling cadence changes can make `stdev` unstable. Hosts with very low `cs` can have tiny standard deviations so the 3-sigma rule trips on noise—set a floor on `stddev` or minimum sample count if needed.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=vmstat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse vmstat context switch field with statistical anomaly detection. Alert on 3-sigma deviations from baseline. Include process state analysis to identify cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=vmstat host=*\n| stats avg(cs) as baseline\n| eventstats stdev(cs) as stddev\n| where cs > baseline + 3*stddev\n```\n\nUnderstanding this SPL\n\n**Context Switch Anomalies Detection** — Context switch anomalies indicate scheduler issues or unexpected process workload changes.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: vmstat. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=vmstat. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where cs > baseline + 3*stddev` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Context Switch Anomalies Detection** — Context switch anomalies indicate scheduler issues or unexpected process workload changes.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where mem_pct > 95 OR swap_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Anomaly Detector",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how often the Linux kernel switches between processes so we can catch surprising churn that usually means something is thrashing or stuck in a loop.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.102",
              "n": "EDAC Memory Error Tracking",
              "c": "critical",
              "f": "beginner",
              "v": "EDAC memory errors indicate hardware failures predicting imminent memory or system failure.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, EDAC kernel driver logs`",
              "q": "index=os sourcetype=syslog \"EDAC\" OR \"MCE\" (\"error\" OR \"correctable\" OR \"uncorrectable\")\n| stats count by host, error_type\n| where count > 0",
              "m": "Monitor EDAC (Error Detection and Correction) and MCE (Machine Check Exception) logs. Create immediate alerts on memory errors with escalation to hardware team for memory replacement.",
              "z": "Alert, Table",
              "kfp": "Correctable ECC errors are often auto-scrubbed and may be acceptable up to a vendor threshold; tune for “uncorrectable” vs “correctable”. MCE strings from non-memory subsystems can match the same keywords—verify DIMM vs CPU cache in the full message.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, EDAC kernel driver logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor EDAC (Error Detection and Correction) and MCE (Machine Check Exception) logs. Create immediate alerts on memory errors with escalation to hardware team for memory replacement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"EDAC\" OR \"MCE\" (\"error\" OR \"correctable\" OR \"uncorrectable\")\n| stats count by host, error_type\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**EDAC Memory Error Tracking** — EDAC memory errors indicate hardware failures predicting imminent memory or system failure.\n\nDocumented **Data sources**: `sourcetype=syslog, EDAC kernel driver logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, error_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We scan kernel logs for memory hardware errors so we can replace bad DIMMs before random crashes or data corruption spread.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.103",
              "n": "IPMI Sensor Threshold Violations",
              "c": "critical",
              "f": "intermediate",
              "v": "IPMI sensor violations indicate hardware conditions (thermal, voltage, power) requiring immediate remediation.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:ipmi, ipmitool sensor output`",
              "q": "index=os sourcetype=custom:ipmi host=*\n| stats latest(sensor_status) as status by host, sensor_name\n| where status IN (\"CRITICAL\", \"WARNING\")",
              "m": "Create a scripted input running ipmitool sensor list and parsing status. Alert on CRITICAL or WARNING status. Include sensor readings and recommended actions per sensor type.",
              "z": "Alert, Table",
              "kfp": "Transient fan or voltage warnings during chassis hot-swap or PDU work can clear in minutes. Some vendors map “non-critical” strings that still match `WARNING` in your parser—align enum values with `ipmitool` output for your BMC firmware.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:ipmi, ipmitool sensor output`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input running ipmitool sensor list and parsing status. Alert on CRITICAL or WARNING status. Include sensor readings and recommended actions per sensor type.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:ipmi host=*\n| stats latest(sensor_status) as status by host, sensor_name\n| where status IN (\"CRITICAL\", \"WARNING\")\n```\n\nUnderstanding this SPL\n\n**IPMI Sensor Threshold Violations** — IPMI sensor violations indicate hardware conditions (thermal, voltage, power) requiring immediate remediation.\n\nDocumented **Data sources**: `sourcetype=custom:ipmi, ipmitool sensor output`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:ipmi. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:ipmi. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, sensor_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status IN (\"CRITICAL\", \"WARNING\")` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We pull IPMI sensor health from the management controller so we see overheating or bad power before the operating system panics.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.104",
              "n": "Thermal Throttling Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Thermal throttling reduces CPU performance to prevent overheating, indicating cooling system issues.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, kernel thermal logs`",
              "q": "index=os sourcetype=syslog \"thermal\" OR \"CPU\" AND \"throttling\"\n| stats count by host\n| where count > 0",
              "m": "Monitor kernel thermal throttling messages. Create alerts on any throttling events. Include thermal zone temperatures and recommendations for cooling investigation.",
              "z": "Alert, Table",
              "kfp": "The search uses loose `OR`/`AND` precedence; strings that mention both “CPU” and “throttling” in unrelated contexts (some drivers, benchmarks) can match. Short laptop thermal events under heavy load may be expected—correlate with sustained duration and temperature sensors.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, kernel thermal logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor kernel thermal throttling messages. Create alerts on any throttling events. Include thermal zone temperatures and recommendations for cooling investigation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"thermal\" OR \"CPU\" AND \"throttling\"\n| stats count by host\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Thermal Throttling Detection** — Thermal throttling reduces CPU performance to prevent overheating, indicating cooling system issues.\n\nDocumented **Data sources**: `sourcetype=syslog, kernel thermal logs`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We alert when the kernel says the CPU had to slow down for heat so we can fix fans, airflow, or paste before performance tanks.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.105",
              "n": "Fan Speed Anomalies",
              "c": "critical",
              "f": "beginner",
              "v": "Fan speed anomalies indicate cooling system degradation potentially leading to thermal overload.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:ipmi, ipmitool reading`",
              "q": "index=os sourcetype=custom:ipmi host=* sensor_type=fan\n| stats latest(reading_pct) as fan_speed by host, fan_name\n| where fan_speed < 20 AND fan_speed > 0",
              "m": "Monitor fan speed readings via IPMI. Alert on anomalously low fan speeds (< 20%) even when speed is non-zero. Correlate with temperature readings to assess thermal risk.",
              "z": "Gauge, Table",
              "kfp": "PWM fans idling low in a cold datacenter row can sit under 20% by design; pair with temperature and redundant fan loss. IPMI may report “na” or zero when a sensor is absent—exclude those rows in parsing.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:ipmi, ipmitool reading`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor fan speed readings via IPMI. Alert on anomalously low fan speeds (< 20%) even when speed is non-zero. Correlate with temperature readings to assess thermal risk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:ipmi host=* sensor_type=fan\n| stats latest(reading_pct) as fan_speed by host, fan_name\n| where fan_speed < 20 AND fan_speed > 0\n```\n\nUnderstanding this SPL\n\n**Fan Speed Anomalies** — Fan speed anomalies indicate cooling system degradation potentially leading to thermal overload.\n\nDocumented **Data sources**: `sourcetype=custom:ipmi, ipmitool reading`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:ipmi. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:ipmi. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, fan_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fan_speed < 20 AND fan_speed > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch fan RPM from IPMI so we catch failed or lazy fans while the server still has room to cool down.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.106",
              "n": "Power Supply State Changes",
              "c": "critical",
              "f": "beginner",
              "v": "Power supply state changes indicate hardware failures requiring immediate physical intervention.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog, IPMI power events`",
              "q": "index=os sourcetype=syslog (\"PSU\" OR \"power supply\") (\"failed\" OR \"degraded\" OR \"offline\")\n| stats count by host\n| where count > 0",
              "m": "Monitor power supply status via IPMI. Create immediate alerts on any PSU status changes. Include redundancy status and escalation to datacenter ops for physical inspection.",
              "z": "Alert, Table",
              "kfp": "Planned PDU or grid work, firmware updates that re-enumerate PSUs, and vendor log text that contains “failed” during a successful self-test can false-match. Narrow keywords to your platform’s actual syslog phrases.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog, IPMI power events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor power supply status via IPMI. Create immediate alerts on any PSU status changes. Include redundancy status and escalation to datacenter ops for physical inspection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog (\"PSU\" OR \"power supply\") (\"failed\" OR \"degraded\" OR \"offline\")\n| stats count by host\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Power Supply State Changes** — Power supply state changes indicate hardware failures requiring immediate physical intervention.\n\nDocumented **Data sources**: `sourcetype=syslog, IPMI power events`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We alert when syslog says a power supply went bad or offline so datacenter staff can swap hardware before redundancy is gone.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.107",
              "n": "Hardware Clock Drift Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Hardware clock drift affects system time accuracy impacting application consistency and audit trails.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=time`",
              "q": "index=os sourcetype=time host=*\n| stats latest(time_offset_ms) as offset by host\n| where abs(offset) > 100",
              "m": "Use Splunk_TA_nix time input to track system time vs. reference. Monitor offset from NTP server. Alert when offset exceeds 100ms. Recommend NTP service investigation or hardware RTC replacement.",
              "z": "Gauge, Timechart",
              "kfp": "VM clock sync through the hypervisor, leap seconds, or a one-time `chronyc makestep` can move `time_offset_ms` sharply without ongoing drift. Containers sharing the host clock can duplicate the same offset—dedupe by asset if needed.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk_TA_nix time input to track system time vs. reference. Monitor offset from NTP server. Alert when offset exceeds 100ms. Recommend NTP service investigation or hardware RTC replacement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=time host=*\n| stats latest(time_offset_ms) as offset by host\n| where abs(offset) > 100\n```\n\nUnderstanding this SPL\n\n**Hardware Clock Drift Detection** — Hardware clock drift affects system time accuracy impacting application consistency and audit trails.\n\nDocumented **Data sources**: `sourcetype=time`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: time. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=time. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where abs(offset) > 100` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge, Timechart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare the Linux clock to real time so logins, certificates, and cross-system correlation stay trustworthy.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.108",
              "n": "Password Policy Violation Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Password policy violations indicate accounts with weak credentials vulnerable to compromise.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit, /etc/shadow audit`",
              "q": "index=os sourcetype=linux_audit path=\"/etc/shadow\"\n| stats count by host, user\n| eval policy_violation=\"yes\"",
              "m": "Periodically scan /etc/shadow for passwords that violate policy (too simple, too old, etc.) via custom scripts. Create alerts for violations. Include remediation instructions.",
              "z": "Table, Alert",
              "kfp": "The sample SPL flags audit events on `/etc/shadow`, not the cryptographic strength of passwords—puppet, PAM, or `vipw` edits create noise. High-volume automation accounts can touch shadow during rotation windows; correlate `auid` and change tickets before treating as a policy breach.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit, /etc/shadow audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPeriodically scan /etc/shadow for passwords that violate policy (too simple, too old, etc.) via custom scripts. Create alerts for violations. Include remediation instructions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit path=\"/etc/shadow\"\n| stats count by host, user\n| eval policy_violation=\"yes\"\n```\n\nUnderstanding this SPL\n\n**Password Policy Violation Detection** — Password policy violations indicate accounts with weak credentials vulnerable to compromise.\n\nDocumented **Data sources**: `sourcetype=linux_audit, /etc/shadow audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **policy_violation** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use audit records around password files to spot unexpected changes that should go through your normal account-management process.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CJIS",
                  "v": "v5.9.4",
                  "cl": "5.5.1",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of CJIS 5.5.1 (Access control - identification) — Splunk UC-1.1.108: Password Policy Violation Detection.",
                  "ea": "Saved search 'UC-1.1.108' running on sourcetype=linux_audit, /etc/shadow audit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://le.fbi.gov/cjis-division/cjis-security-policy-resource-center"
                },
                {
                  "r": "Cyber Essentials",
                  "v": "Montpellier (2025)",
                  "cl": "CE.SAU.1",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of Cyber Essentials CE.SAU.1 (Secure authentication & access) — Splunk UC-1.1.108: Password Policy Violation Detection.",
                  "ea": "Saved search 'UC-1.1.108' running on sourcetype=linux_audit, /etc/shadow audit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ncsc.gov.uk/cyberessentials/overview"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "01.b",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HITRUST 01.b (User access management) is enforced — Splunk UC-1.1.108: Password Policy Violation Detection.",
                  "ea": "Saved search 'UC-1.1.108' running on sourcetype=linux_audit, /etc/shadow audit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                }
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.109",
              "n": "Account Expiry Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Account expiry tracking ensures user accounts remain valid and prevents access with expired credentials.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:account_expiry`",
              "q": "index=os sourcetype=custom:account_expiry host=*\n| where days_until_expiry <= 30\n| stats by host, user, days_until_expiry",
              "m": "Create a scripted input that parses /etc/shadow and calculates days until account expiry. Alert 30 days before expiry with reminders. Track expired accounts on production systems.",
              "z": "Table, Alert",
              "kfp": "Service accounts with `chage -I -1` or non-expiring shadow fields never reach `days_until_expiry==0` in your feed—validate script logic. Contractor accounts that should expire may be extended by approved HR actions; suppress with a lookup of expected extensions.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:account_expiry`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that parses /etc/shadow and calculates days until account expiry. Alert 30 days before expiry with reminders. Track expired accounts on production systems.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:account_expiry host=*\n| where days_until_expiry <= 30\n| stats by host, user, days_until_expiry\n```\n\nUnderstanding this SPL\n\n**Account Expiry Monitoring** — Account expiry tracking ensures user accounts remain valid and prevents access with expired credentials.\n\nDocumented **Data sources**: `sourcetype=custom:account_expiry`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:account_expiry. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:account_expiry. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where days_until_expiry <= 30` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, user, days_until_expiry** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We list accounts nearing expiry so owners renew or disable them before contractors and temp users keep access too long.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.110",
              "n": "Inactive User Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Inactive users with enabled accounts represent security risk and should be disabled to reduce attack surface.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_secure`",
              "q": "index=os sourcetype=linux_secure \"Accepted\"\n| stats max(_time) as last_login by user, host\n| eval days_inactive=(now()-last_login)/86400\n| where days_inactive > 90",
              "m": "Track user login activity from /var/log/auth.log. Calculate days since last login. Alert on users inactive >90 days. Include list of inactive accounts for review and disabling.",
              "z": "Table, Alert",
              "kfp": "Break-glass or shared functional accounts may show no SSH “Accepted” lines if people only use `sudo` or console—join with directory data. Short `last_login` windows after a password reset or key-only logins without matching `linux_secure` patterns can false-negative; validate against identity tools.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_secure`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack user login activity from /var/log/auth.log. Calculate days since last login. Alert on users inactive >90 days. Include list of inactive accounts for review and disabling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_secure \"Accepted\"\n| stats max(_time) as last_login by user, host\n| eval days_inactive=(now()-last_login)/86400\n| where days_inactive > 90\n```\n\nUnderstanding this SPL\n\n**Inactive User Detection** — Inactive users with enabled accounts represent security risk and should be disabled to reduce attack surface.\n\nDocumented **Data sources**: `sourcetype=linux_secure`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_secure. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_inactive** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_inactive > 90` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Inactive User Detection** — Inactive users with enabled accounts represent security risk and should be disabled to reduce attack surface.\n\nDocumented **Data sources**: `sourcetype=linux_secure`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We list people who have not logged in for a long time so we can disable stale Linux accounts that attackers could try next.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.111",
              "n": "World-Writable File Detection",
              "c": "high",
              "f": "advanced",
              "v": "World-writable files can be modified by any user enabling privilege escalation or system compromise.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:file_perms`",
              "q": "index=os sourcetype=custom:file_perms host=*\n| where permissions=\"777\" OR permissions=\"666\"\n| stats count by host, directory\n| where count > 0",
              "m": "Create a scripted input that finds world-writable files via 'find / -perm /002'. Exclude expected world-writable locations like /tmp. Alert on unexpected world-writable files in sensitive directories.",
              "z": "Table, Alert",
              "kfp": "Paths under `/tmp`, container layers, and package managers can legitimately be world-writable; exclude with lookups. `find` output volume can be large—cap depth or pre-filter in the collector. Occasional 777 on installer staging dirs during upgrades is normal if time-boxed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:file_perms`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that finds world-writable files via 'find / -perm /002'. Exclude expected world-writable locations like /tmp. Alert on unexpected world-writable files in sensitive directories.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:file_perms host=*\n| where permissions=\"777\" OR permissions=\"666\"\n| stats count by host, directory\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**World-Writable File Detection** — World-writable files can be modified by any user enabling privilege escalation or system compromise.\n\nDocumented **Data sources**: `sourcetype=custom:file_perms`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:file_perms. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:file_perms. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where permissions=\"777\" OR permissions=\"666\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, directory** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We flag files that anyone can change so a mistake or intruder cannot leave a trapdoor in critical directories.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.112",
              "n": "Unowned File Detection",
              "c": "high",
              "f": "advanced",
              "v": "Unowned files indicate potential file system corruption or security issue requiring investigation.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:unowned_files`",
              "q": "index=os sourcetype=custom:unowned_files host=*\n| stats count by host, directory\n| where count > 0",
              "m": "Create a scripted input running 'find / -nouser -o -nogroup' to identify unowned files. Alert on any findings. Include recommendations to investigate origin and correct ownership.",
              "z": "Table, Alert",
              "kfp": "NFS, overlayfs, and unpacked tarballs can surface `nouser` files during migrations without an attack. Excluding `/proc` and ephemeral mounts in the `find` keeps noise down—mirror those excludes in the alert.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:unowned_files`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input running 'find / -nouser -o -nogroup' to identify unowned files. Alert on any findings. Include recommendations to investigate origin and correct ownership.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:unowned_files host=*\n| stats count by host, directory\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Unowned File Detection** — Unowned files indicate potential file system corruption or security issue requiring investigation.\n\nDocumented **Data sources**: `sourcetype=custom:unowned_files`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:unowned_files. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:unowned_files. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, directory** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for files with no valid owner so odd leftovers from bad restores or intruders do not sit on the disk undetected.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.113",
              "n": "SETUID Audit and Tracking",
              "c": "critical",
              "f": "beginner",
              "v": "SETUID binary execution enables privilege escalation requiring tracking for security monitoring.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit`",
              "q": "index=os sourcetype=linux_audit type=EXECVE exe=\"*\" AND suid\n| stats count by host, exe, user\n| where count > threshold",
              "m": "Configure auditctl rules monitoring EXECVE syscalls with setuid. Create alerts on SETUID binary execution by unexpected users. Baseline expected SETUID usage per host.",
              "z": "Table, Alert",
              "kfp": "Legitimate setuid programs (`sudo`, `passwd`, `ping`) are expected on some paths—maintain an allowlist of `exe` values. `threshold` in SPL is a token you must replace with a numeric or macro. Auditd field names (`suid` vs `auid`) vary by rule set—tune to your `auditd` version.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure auditctl rules monitoring EXECVE syscalls with setuid. Create alerts on SETUID binary execution by unexpected users. Baseline expected SETUID usage per host.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit type=EXECVE exe=\"*\" AND suid\n| stats count by host, exe, user\n| where count > threshold\n```\n\nUnderstanding this SPL\n\n**SETUID Audit and Tracking** — SETUID binary execution enables privilege escalation requiring tracking for security monitoring.\n\nDocumented **Data sources**: `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, exe, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > threshold` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track when setuid programs run from Linux audit data so only the binaries and users we expect can elevate privileges.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.114",
              "n": "Open File Handle Per-Process Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "High open file handle counts per process can exhaust system limits causing application failures.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=lsof`",
              "q": "index=os sourcetype=lsof host=*\n| stats count as open_files by host, process, pid\n| where open_files > 1000",
              "m": "Use Splunk_TA_nix lsof input to track open files per process. Create alerts for processes approaching system limit. Include breakdown of file types (sockets, regular files, pipes).",
              "z": "Table, Gauge",
              "kfp": "Databases, reverse proxies, and `java` apps legitimately keep thousands of FDs; threshold per `process` or use percent of `ulimit -n`. TA sampling may count one `lsof` line per FD—confirm whether `count` in SPL matches that model.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=lsof`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk_TA_nix lsof input to track open files per process. Create alerts for processes approaching system limit. Include breakdown of file types (sockets, regular files, pipes).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=lsof host=*\n| stats count as open_files by host, process, pid\n| where open_files > 1000\n```\n\nUnderstanding this SPL\n\n**Open File Handle Per-Process Monitoring** — High open file handle counts per process can exhaust system limits causing application failures.\n\nDocumented **Data sources**: `sourcetype=lsof`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: lsof. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=lsof. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, process, pid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where open_files > 1000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Gauge",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count open files per process on Linux so runaway programs cannot exhaust the file descriptor table and crash the box.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.115",
              "n": "Listening Port Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Port compliance ensures only authorized services are listening, reducing attack surface.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=openPorts, netstat`",
              "q": "index=os sourcetype=openPorts host=*\n| where NOT (port IN (approved_port_list))\n| stats count by host, port\n| where count > 0",
              "m": "Use Splunk_TA_nix openPorts input with baseline of expected listening ports per host. Create alerts for unexpected listening ports. Include service identification and change management correlation.",
              "z": "Table, Alert",
              "kfp": "Ephemeral ports used for outbound client connections can appear in some `netstat`/`ss` extractions; scope to `LISTEN` or use Splunk’s openPorts field definitions. `approved_port_list` is a placeholder macro—update it to your CMDB- or lookup-driven allowlist. Dynamic Kubernetes or Nomad services may legitimately add listeners during deploys.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=openPorts, netstat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk_TA_nix openPorts input with baseline of expected listening ports per host. Create alerts for unexpected listening ports. Include service identification and change management correlation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=openPorts host=*\n| where NOT (port IN (approved_port_list))\n| stats count by host, port\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Listening Port Compliance** — Port compliance ensures only authorized services are listening, reducing attack surface.\n\nDocumented **Data sources**: `sourcetype=openPorts, netstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: openPorts. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=openPorts. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where NOT (port IN (approved_port_list))` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Listening Port Compliance** — Port compliance ensures only authorized services are listening, reducing attack surface.\n\nDocumented **Data sources**: `sourcetype=openPorts, netstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare open listening ports to an approved list so new services that nobody authorized show up right away.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.116",
              "n": "Installed Package Drift Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Package drift indicates unauthorized software installation or configuration management failures.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=package`",
              "q": "index=os sourcetype=package host=*\n| stats dc(package) as installed_count by host\n| stats avg(installed_count) as baseline\n| where installed_count > baseline + threshold",
              "m": "Use Splunk_TA_nix package input to track installed software. Baseline expected packages per host. Alert on unexpected new packages with name and version details.",
              "z": "Table, Alert",
              "kfp": "The second `stats` in the search collapses all hosts when computing `baseline`—tune the pipeline so each host compares to its own history (e.g. `eventstats` by host). Security errata and automatic updates can add packages in minutes with full change approval.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=package`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk_TA_nix package input to track installed software. Baseline expected packages per host. Alert on unexpected new packages with name and version details.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=package host=*\n| stats dc(package) as installed_count by host\n| stats avg(installed_count) as baseline\n| where installed_count > baseline + threshold\n```\n\nUnderstanding this SPL\n\n**Installed Package Drift Detection** — Package drift indicates unauthorized software installation or configuration management failures.\n\nDocumented **Data sources**: `sourcetype=package`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: package. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=package. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where installed_count > baseline + threshold` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare installed software to what we expect so unknown packages show up as soon as they land on the server.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.117",
              "n": "Configuration File Change Tracking (/etc)",
              "c": "critical",
              "f": "beginner",
              "v": "Configuration file changes can indicate system compromise or unauthorized configuration modifications.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit`",
              "q": "index=os sourcetype=linux_audit path=\"/etc/*\" action=modified\n| stats count by host, path, auid\n| where count > 0",
              "m": "Configure auditctl rules to monitor all /etc/ modifications. Create alerts on unexpected config changes. Include before/after comparison using file integrity tools.",
              "z": "Alert, Table",
              "kfp": "Puppet, Ansible, or `dnf` can rewrite `/etc` during approved changes—join to change records. The glob `path=\"/etc/*\"` may not match every auditd `path` field format; use `path=/etc/...` with `match` in `where` if needed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure auditctl rules to monitor all /etc/ modifications. Create alerts on unexpected config changes. Include before/after comparison using file integrity tools.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit path=\"/etc/*\" action=modified\n| stats count by host, path, auid\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Configuration File Change Tracking (/etc)** — Configuration file changes can indicate system compromise or unauthorized configuration modifications.\n\nDocumented **Data sources**: `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, path, auid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We get alerts when key files under `/etc` change from Linux audit so surprise edits do not go unnoticed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.118",
              "n": "System Reboot Frequency Anomaly",
              "c": "medium",
              "f": "intermediate",
              "v": "Unexpected reboot frequency indicates system instability, crashes, or possible security incident response.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=syslog`",
              "q": "index=os sourcetype=syslog \"Kernel panic\" OR \"reboot\" OR \"system shutdown\"\n| stats count as reboot_count by host\n| where reboot_count > 2 in 7 days",
              "m": "Monitor system boot/reboot messages in syslog. Create alerts when reboot frequency exceeds normal baseline. Include reboot cause analysis and incident correlation.",
              "z": "Timechart, Alert",
              "kfp": "Patch Tuesdays, kernel CVE rollouts, or fleet-wide maintenance can push many reboot messages that are still approved work. The substring “reboot” appears in user-facing apps and non-kernel subsystems—tighten `search` to your distro’s real shutdown/startup phrases.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor system boot/reboot messages in syslog. Create alerts when reboot frequency exceeds normal baseline. Include reboot cause analysis and incident correlation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"Kernel panic\" OR \"reboot\" OR \"system shutdown\"\n| stats count as reboot_count by host\n| where reboot_count > 2 in 7 days\n```\n\nUnderstanding this SPL\n\n**System Reboot Frequency Anomaly** — Unexpected reboot frequency indicates system instability, crashes, or possible security incident response.\n\nDocumented **Data sources**: `sourcetype=syslog`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where reboot_count > 2 in 7 days` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart, Alert",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count reboot and panic messages in syslog so unstable hosts or surprise maintenance do not go unnoticed in the last week.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.119",
              "n": "Defunct (Zombie) Process Accumulation",
              "c": "medium",
              "f": "beginner",
              "v": "Accumulating zombie processes indicate application resource leaks causing process table exhaustion risk.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=ps`",
              "q": "index=os sourcetype=ps host=* state=\"Z\"\n| stats count as zombie_count by host\n| where zombie_count > 10",
              "m": "Monitor ps output for Z (zombie) state processes. Create alerts when zombie count exceeds 10. Include parent process information to identify resource leak culprit.",
              "z": "Gauge, Table",
              "kfp": "Short parentless `Z` states during container stop or CI teardown can clear quickly—use a sustained window. Field name for state may be `STAT` in some `ps` extractions, not `state`—align props to the TA.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=ps`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor ps output for Z (zombie) state processes. Create alerts when zombie count exceeds 10. Include parent process information to identify resource leak culprit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=ps host=* state=\"Z\"\n| stats count as zombie_count by host\n| where zombie_count > 10\n```\n\nUnderstanding this SPL\n\n**Defunct (Zombie) Process Accumulation** — Accumulating zombie processes indicate application resource leaks causing process table exhaustion risk.\n\nDocumented **Data sources**: `sourcetype=ps`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: ps. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=ps. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where zombie_count > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count zombie processes from `ps` data so a buggy service cannot leave behind endless dead children and starve the process table.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.120",
              "n": "Symbolic Link Chain Depth Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Excessive symbolic link chains can cause performance issues and may indicate directory traversal vulnerabilities.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=custom:symlink_scan`",
              "q": "index=os sourcetype=custom:symlink_scan host=*\n| stats max(chain_depth) as max_depth by host, directory\n| where max_depth > 10",
              "m": "Create a scripted input that recursively follows symbolic links counting chain depth. Alert when exceeding 10 levels. Include directory path for investigation of circular or excessive chains.",
              "z": "Table, Alert",
              "kfp": "Build trees, language package caches, and vendor installers sometimes create long but benign chains—allowlist those roots. Your scanner should detect cycles; without it, a loop can blow up the scripted input before Splunk.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=custom:symlink_scan`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that recursively follows symbolic links counting chain depth. Alert when exceeding 10 levels. Include directory path for investigation of circular or excessive chains.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=custom:symlink_scan host=*\n| stats max(chain_depth) as max_depth by host, directory\n| where max_depth > 10\n```\n\nUnderstanding this SPL\n\n**Symbolic Link Chain Depth Monitoring** — Excessive symbolic link chains can cause performance issues and may indicate directory traversal vulnerabilities.\n\nDocumented **Data sources**: `sourcetype=custom:symlink_scan`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: custom:symlink_scan. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=custom:symlink_scan. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, directory** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where max_depth > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Alert\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We measure how deep symlink chains go so odd loops or runaway indirections do not slow programs or hide attacks.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.121",
              "n": "Bootloader Configuration Changes",
              "c": "critical",
              "f": "beginner",
              "v": "Bootloader changes can enable persistence mechanisms or bypass security controls at boot time.",
              "t": "`Splunk_TA_nix, custom scripted input`",
              "d": "`sourcetype=linux_audit`",
              "q": "index=os sourcetype=linux_audit path~=\"/boot/(grub|efi)\" action=modified\n| stats count by host, path, auid\n| where count > 0",
              "m": "Monitor /boot/grub/ and UEFI boot directories via auditctl. Create immediate critical alerts on any bootloader modifications. Include file hash comparison to detect tampering.",
              "z": "Alert, Table",
              "kfp": "Kernel updates and `grub2-mkconfig` runs touch bootloader files in normal patch cycles—suppress with change windows. The `path~` regex may need adjustment for your distro’s UEFI path (`/boot/efi/EFI/...`).",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix, custom scripted input`.\n• Ensure the following data sources are available: `sourcetype=linux_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor /boot/grub/ and UEFI boot directories via auditctl. Create immediate critical alerts on any bootloader modifications. Include file hash comparison to detect tampering.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux_audit path~=\"/boot/(grub|efi)\" action=modified\n| stats count by host, path, auid\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Bootloader Configuration Changes** — Bootloader changes can enable persistence mechanisms or bypass security controls at boot time.\n\nDocumented **Data sources**: `sourcetype=linux_audit`. **App/TA** (typical add-on context): `Splunk_TA_nix, custom scripted input`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, path, auid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We alert on changes under `/boot` from audit records so the machine cannot boot a surprise kernel or initramfs without us knowing.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.122",
              "n": "Systemd Unit State Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Track failed/inactive systemd services, auto-restart counts, and service startup time to prevent cascading failures and identify misconfigured or unhealthy units.",
              "t": "`Splunk_TA_nix` (scripted input)",
              "d": "`systemctl list-units` output, systemd journal",
              "q": "index=os sourcetype=systemd_units host=* NRestarts>0\n| stats sum(NRestarts) as total_restarts by host, Unit\n| where total_restarts > 5\n| sort -total_restarts",
              "m": "Create a scripted input that runs `systemctl list-units --all --no-pager --plain` and `systemctl show --property=ActiveState,SubState,NRestarts` for critical units. Parse ActiveState, SubState, and NRestarts. Run every 60 seconds. For startup time, use `systemd-analyze` output. Alert on any failed units; alert when NRestarts exceeds 5 in 1 hour for critical services.",
              "z": "Table (failed/inactive units by host), Single value (count of failed units), Timechart of restart counts.",
              "kfp": "`NRestarts` is cumulative in many versions—reset expectations after upgrades. Short bursts from health checks on aggressive `RestartSec` can exceed five without user impact. Scope the search to a lookup of `Unit` names that actually matter to SLAs.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix` (scripted input).\n• Ensure the following data sources are available: `systemctl list-units` output, systemd journal.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that runs `systemctl list-units --all --no-pager --plain` and `systemctl show --property=ActiveState,SubState,NRestarts` for critical units. Parse ActiveState, SubState, and NRestarts. Run every 60 seconds. For startup time, use `systemd-analyze` output. Alert on any failed units; alert when NRestarts exceeds 5 in 1 hour for critical services.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=systemd_units host=* NRestarts>0\n| stats sum(NRestarts) as total_restarts by host, Unit\n| where total_restarts > 5\n| sort -total_restarts\n```\n\nUnderstanding this SPL\n\n**Systemd Unit State Monitoring** — Track failed/inactive systemd services, auto-restart counts, and service startup time to prevent cascading failures and identify misconfigured or unhealthy units.\n\nDocumented **Data sources**: `systemctl list-units` output, systemd journal. **App/TA** (typical add-on context): `Splunk_TA_nix` (scripted input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: systemd_units. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=systemd_units. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, Unit** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where total_restarts > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed/inactive units by host), Single value (count of failed units), Timechart of restart counts.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We read systemd’s restart counts and unit states so flapping services get fixed before they take down the whole host.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.123",
              "n": "Linux Cgroup Resource Pressure (PSI)",
              "c": "medium",
              "f": "advanced",
              "v": "Monitor Pressure Stall Information (PSI) for CPU, memory, and I/O at cgroup level to detect resource contention before it causes application latency.",
              "t": "`Splunk_TA_nix` (scripted input)",
              "d": "`/proc/pressure/cpu`, `/proc/pressure/memory`, `/proc/pressure/io`",
              "q": "index=os sourcetype=psi host=* cgroup=*\n| stats latest(avg10) as pressure by host, cgroup, resource\n| where pressure > 20\n| sort -pressure",
              "m": "Create a scripted input that reads `/proc/pressure/cpu`, `/proc/pressure/memory`, and `/proc/pressure/io`. Parse avg10, avg60, avg300, and total fields (format: `avg10=0.00 avg60=0.00 avg300=0.00 total=12345`). Optionally collect per-cgroup PSI from `/sys/fs/cgroup/<cgroup>/cpu.pressure` etc. Run every 60 seconds. Alert when avg10 exceeds 10% or avg60 exceeds 5% for any resource.",
              "z": "Line chart (pressure over time by resource), Table of hosts with elevated pressure, Gauge per resource type.",
              "kfp": "Kernel <=5.0 may lack full PSI; some platforms expose only system-wide files. `some` vs `full` line semantics differ for I/O—match your doc before thresholding. Container churn can briefly spike one cgroup while siblings are fine.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix` (scripted input).\n• Ensure the following data sources are available: `/proc/pressure/cpu`, `/proc/pressure/memory`, `/proc/pressure/io`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that reads `/proc/pressure/cpu`, `/proc/pressure/memory`, and `/proc/pressure/io`. Parse avg10, avg60, avg300, and total fields (format: `avg10=0.00 avg60=0.00 avg300=0.00 total=12345`). Optionally collect per-cgroup PSI from `/sys/fs/cgroup/<cgroup>/cpu.pressure` etc. Run every 60 seconds. Alert when avg10 exceeds 10% or avg60 exceeds 5% for any resource.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=psi host=* cgroup=*\n| stats latest(avg10) as pressure by host, cgroup, resource\n| where pressure > 20\n| sort -pressure\n```\n\nUnderstanding this SPL\n\n**Linux Cgroup Resource Pressure (PSI)** — Monitor Pressure Stall Information (PSI) for CPU, memory, and I/O at cgroup level to detect resource contention before it causes application latency.\n\nDocumented **Data sources**: `/proc/pressure/cpu`, `/proc/pressure/memory`, `/proc/pressure/io`. **App/TA** (typical add-on context): `Splunk_TA_nix` (scripted input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: psi. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=psi. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, cgroup, resource** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where pressure > 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (pressure over time by resource), Table of hosts with elevated pressure, Gauge per resource type.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Linux PSI metrics so cgroups that wait on CPU, memory, or disk show up before user-facing latency spikes.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.124",
              "n": "Linux Entropy Pool Depletion",
              "c": "medium",
              "f": "intermediate",
              "v": "Low entropy blocks /dev/random and can stall crypto operations (SSL handshakes, key generation). Detecting depletion prevents application hangs and security failures.",
              "t": "`Splunk_TA_nix` (scripted input)",
              "d": "`/proc/sys/kernel/random/entropy_avail`",
              "q": "index=os sourcetype=entropy host=*\n| timechart span=5m avg(entropy_avail) as entropy by host\n| where entropy < 500",
              "m": "Create a scripted input that reads `cat /proc/sys/kernel/random/entropy_avail` and optionally `poolsize`. Run every 60 seconds. Parse entropy_avail as integer. Alert when entropy drops below 200 (warning) or 100 (critical). Consider haveged or rng-tools for entropy generation on VMs.",
              "z": "Single value (entropy_avail), Line chart (entropy over time by host), Table of hosts below threshold.",
              "kfp": "Many modern kernels and `getrandom()` behavior mean `entropy_avail` hovers low while crypto still succeeds—pair with application symptoms. ChaCha20-based `/dev/urandom` models do not block the same way older systems did; validate threshold on your distro.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix` (scripted input).\n• Ensure the following data sources are available: `/proc/sys/kernel/random/entropy_avail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that reads `cat /proc/sys/kernel/random/entropy_avail` and optionally `poolsize`. Run every 60 seconds. Parse entropy_avail as integer. Alert when entropy drops below 200 (warning) or 100 (critical). Consider haveged or rng-tools for entropy generation on VMs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=entropy host=*\n| timechart span=5m avg(entropy_avail) as entropy by host\n| where entropy < 500\n```\n\nUnderstanding this SPL\n\n**Linux Entropy Pool Depletion** — Low entropy blocks /dev/random and can stall crypto operations (SSL handshakes, key generation). Detecting depletion prevents application hangs and security failures.\n\nDocumented **Data sources**: `/proc/sys/kernel/random/entropy_avail`. **App/TA** (typical add-on context): `Splunk_TA_nix` (scripted input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: entropy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=entropy. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where entropy < 500` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (entropy_avail), Line chart (entropy over time by host), Table of hosts below threshold.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the kernel’s entropy counter so secure services do not block when the random number pool runs dry on busy VMs.",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.125",
              "n": "Linux Journal / Journald Health",
              "c": "medium",
              "f": "intermediate",
              "v": "Journal corruption, excessive disk usage, and rate-limited entries indicate logging problems that can hide critical events and fill disk.",
              "t": "`Splunk_TA_nix` (scripted input)",
              "d": "`journalctl --disk-usage`, `journalctl --verify`",
              "q": "index=os sourcetype=journal_health host=*\n| timechart span=1h avg(disk_usage_mb) as journal_mb by host",
              "m": "Create a scripted input that runs `journalctl --disk-usage` (parse \"Archived and active: X.XG\" or similar) and `journalctl --verify 2>&1` (check exit code and output for \"corrupt\" or \"inconsistent\"). For suppressed messages, parse `journalctl -u systemd-journald` for \"Suppressed\" or use `journalctl --output=short-full` rate stats. Run every 300 seconds. Alert on corruption; alert when journal exceeds 4GB or suppressed count is high.",
              "z": "Table (host, size, corruption status), Line chart (journal size over time), Single value (corruption count).",
              "kfp": "The sample SPL is trending-only—add `where disk_usage_mb` thresholds for true alerts. `journalctl --verify` is heavy; do not run every minute in production. VPN or offline hosts can have stale `disk_usage_mb` that still trends upward on next successful poll.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix` (scripted input).\n• Ensure the following data sources are available: `journalctl --disk-usage`, `journalctl --verify`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that runs `journalctl --disk-usage` (parse \"Archived and active: X.XG\" or similar) and `journalctl --verify 2>&1` (check exit code and output for \"corrupt\" or \"inconsistent\"). For suppressed messages, parse `journalctl -u systemd-journald` for \"Suppressed\" or use `journalctl --output=short-full` rate stats. Run every 300 seconds. Alert on corruption; alert when journal exceeds 4GB or suppressed count is high.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=journal_health host=*\n| timechart span=1h avg(disk_usage_mb) as journal_mb by host\n```\n\nUnderstanding this SPL\n\n**Linux Journal / Journald Health** — Journal corruption, excessive disk usage, and rate-limited entries indicate logging problems that can hide critical events and fill disk.\n\nDocumented **Data sources**: `journalctl --disk-usage`, `journalctl --verify`. **App/TA** (typical add-on context): `Splunk_TA_nix` (scripted input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: journal_health. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=journal_health. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, size, corruption status), Line chart (journal size over time), Single value (corruption count).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track systemd journal size and health so the machine’s own logs do not break or hide what happened when we need them most.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.126",
              "n": "Chrony / NTP Time Synchronization Drift",
              "c": "high",
              "f": "intermediate",
              "v": "Clock offset, stratum, and reachability issues cause authentication failures, log correlation errors, and certificate validation problems. Time drift is a root cause of many subtle failures.",
              "t": "`Splunk_TA_nix` (scripted input)",
              "d": "`chronyc tracking`, `ntpq -p`",
              "q": "index=os sourcetype=ntp_status host=*\n| timechart span=15m avg(offset_ms) as offset_ms by host\n| where abs(offset_ms) > 50",
              "m": "Create a scripted input that runs `chronyc tracking` (parse Last offset, Stratum, Leap status) or `ntpq -p` for ntpd. Extract offset_ms (convert to milliseconds), stratum, and reachability (octal 377 = all peers reachable). Run every 300 seconds. Alert when offset exceeds 100ms; alert when stratum > 10 or reachability indicates no peers.",
              "z": "Line chart (offset over time by host), Table (host, offset, stratum), Single value (hosts with drift).",
              "kfp": "After `chronyc` slew or NTP step, `timechart` can show a one-bucket spike. VMs sharing an RTC with snapshot/revert can jump offset without chronic skew. The `where abs(offset_ms) > 50` on a timechart output often needs `mvexpand` or a `stats` post-process—tune the alert pipeline for how your `ntp_status` field is shaped.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix` (scripted input).\n• Ensure the following data sources are available: `chronyc tracking`, `ntpq -p`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that runs `chronyc tracking` (parse Last offset, Stratum, Leap status) or `ntpq -p` for ntpd. Extract offset_ms (convert to milliseconds), stratum, and reachability (octal 377 = all peers reachable). Run every 300 seconds. Alert when offset exceeds 100ms; alert when stratum > 10 or reachability indicates no peers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=ntp_status host=*\n| timechart span=15m avg(offset_ms) as offset_ms by host\n| where abs(offset_ms) > 50\n```\n\nUnderstanding this SPL\n\n**Chrony / NTP Time Synchronization Drift** — Clock offset, stratum, and reachability issues cause authentication failures, log correlation errors, and certificate validation problems. Time drift is a root cause of many subtle failures.\n\nDocumented **Data sources**: `chronyc tracking`, `ntpq -p`. **App/TA** (typical add-on context): `Splunk_TA_nix` (scripted input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: ntp_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=ntp_status. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where abs(offset_ms) > 50` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (offset over time by host), Table (host, offset, stratum), Single value (hosts with drift).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow chrony/ntp offset on Linux so certificates, Kerberos, and log times stay lined up with the rest of the world.",
              "mtype": [
                "Configuration",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.127",
              "n": "Swap Activity Rate Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Pages swapped in/out per second (distinct from swap usage %) indicates memory pressure and I/O load. High swap I/O rate degrades performance even before swap usage is critical.",
              "t": "`Splunk_TA_nix`",
              "d": "`sourcetype=vmstat`",
              "q": "index=os sourcetype=vmstat host=*\n| eval swap_rate = si + so\n| bin _time span=1h\n| stats avg(swap_rate) as avg_rate, stdev(swap_rate) as std_rate by host, _time\n| eventstats avg(avg_rate) as baseline stdev(avg_rate) as baseline_std by host\n| eval threshold = baseline + (2 * coalesce(baseline_std, 50))\n| where avg_rate > threshold",
              "m": "Enable vmstat scripted input in Splunk_TA_nix (interval=60). Fields `si` (swap in) and `so` (swap out) represent pages per interval. Create baseline of normal swap rate per host; alert when swap I/O rate exceeds 2x baseline or exceeds 100 pages/sec sustained for 10 minutes.",
              "z": "Line chart (swap in/out rates by host), Table of hosts with elevated swap I/O, Single value (current swap rate).",
              "kfp": "Heavy `si`+`so` can appear during transparent huge page compaction or one-off `swapoff` even when steady-state memory is fine—use duration filters. The `coalesce(baseline_std, 50)` floor embeds a magic number; replace with a macro for your page size and polling cadence.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `sourcetype=vmstat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable vmstat scripted input in Splunk_TA_nix (interval=60). Fields `si` (swap in) and `so` (swap out) represent pages per interval. Create baseline of normal swap rate per host; alert when swap I/O rate exceeds 2x baseline or exceeds 100 pages/sec sustained for 10 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=vmstat host=*\n| eval swap_rate = si + so\n| bin _time span=1h\n| stats avg(swap_rate) as avg_rate, stdev(swap_rate) as std_rate by host, _time\n| eventstats avg(avg_rate) as baseline stdev(avg_rate) as baseline_std by host\n| eval threshold = baseline + (2 * coalesce(baseline_std, 50))\n| where avg_rate > threshold\n```\n\nUnderstanding this SPL\n\n**Swap Activity Rate Trending** — Pages swapped in/out per second (distinct from swap usage %) indicates memory pressure and I/O load. High swap I/O rate degrades performance even before swap usage is critical.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: vmstat. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=vmstat. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **swap_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **threshold** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_rate > threshold` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.mem_used) as agg_value from datamodel=Performance.Memory by Performance.host span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Swap Activity Rate Trending** — Pages swapped in/out per second (distinct from swap usage %) indicates memory pressure and I/O load. High swap I/O rate degrades performance even before swap usage is critical.\n\nDocumented **Data sources**: `sourcetype=vmstat`. **App/TA** (typical add-on context): `Splunk_TA_nix`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.Memory` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (swap in/out rates by host), Table of hosts with elevated swap I/O, Single value (current swap rate).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how many pages the kernel moves in and out of swap so we see memory stress early—not only when the disk is full.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.mem_used) as agg_value from datamodel=Performance.Memory by Performance.host span=5m | sort - agg_value",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.128",
              "n": "Filesystem Inode Exhaustion",
              "c": "critical",
              "f": "intermediate",
              "v": "Inode usage approaching 100% blocks file creation even with free disk space. Applications fail with \"No space left on device\" despite available blocks — a common misdiagnosis.",
              "t": "`Splunk_TA_nix` (scripted input)",
              "d": "`df -i` output",
              "q": "index=os sourcetype=df_inode host=*\n| stats latest(IUsePct) as inode_pct by host, MountedOn\n| where inode_pct > 80",
              "m": "Create a scripted input that runs `df -i` and parses output. Extract Filesystem, Inodes, IUsed, IFree, IUse%, MountedOn. Run every 300 seconds. Set tiered alerts: 80% (warning), 90% (high), 95% (critical). Include `find` or `du --inodes` to identify directories consuming inodes for remediation.",
              "z": "Table (filesystem, host, inode %), Gauge per critical mount, Line chart (inode % over time).",
              "kfp": "Tiny `tmpfs` or `squashfs` images can have low inode totals that sit high in percent without production risk. Field names vary (`IUsePct` vs `IUse%`)—align extractions. Snapshot-heavy backup volumes can have inode pressure that is still in a change window.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix` (scripted input).\n• Ensure the following data sources are available: `df -i` output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that runs `df -i` and parses output. Extract Filesystem, Inodes, IUsed, IFree, IUse%, MountedOn. Run every 300 seconds. Set tiered alerts: 80% (warning), 90% (high), 95% (critical). Include `find` or `du --inodes` to identify directories consuming inodes for remediation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=df_inode host=*\n| stats latest(IUsePct) as inode_pct by host, MountedOn\n| where inode_pct > 80\n```\n\nUnderstanding this SPL\n\n**Filesystem Inode Exhaustion** — Inode usage approaching 100% blocks file creation even with free disk space. Applications fail with \"No space left on device\" despite available blocks — a common misdiagnosis.\n\nDocumented **Data sources**: `df -i` output. **App/TA** (typical add-on context): `Splunk_TA_nix` (scripted input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: df_inode. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=df_inode. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, MountedOn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where inode_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (filesystem, host, inode %), Gauge per critical mount, Line chart (inode % over time).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch inode use, not just gigabytes free, so the server cannot run out of file slots when space still looks okay.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.129",
              "n": "Linux Softirq / Hardirq Time",
              "c": "medium",
              "f": "advanced",
              "v": "Detect interrupt storms (softirq/hardirq) that degrade system performance. High IRQ time indicates network, block I/O, or timer storms.",
              "t": "`Splunk_TA_nix` (scripted input)",
              "d": "`/proc/interrupts`, `/proc/softirqs`, `mpstat` output",
              "q": "index=os sourcetype=irq_stats host=* cpu=*\n| stats latest(softirq_pct) as softirq, latest(hardirq_pct) as hardirq by host, cpu\n| where softirq > 30 OR hardirq > 15",
              "m": "Create a scripted input that parses `/proc/softirqs` and `/proc/interrupts` (or use `mpstat -I SUM` for softirq/hardirq percentages). Calculate softirq and hardirq as percentage of CPU time. Run every 60 seconds. Alert when combined IRQ time exceeds 20% sustained for 10 minutes. Correlate with network/block device activity.",
              "z": "Line chart (softirq/hardirq % over time), Table of hosts with elevated IRQ, Stacked area chart by IRQ type.",
              "kfp": "Packet captures, storage benchmarks, or DPDK can legitimately run CPUs hot with high softirq. `irq_stats` field definitions differ when sourced from `mpstat` vs parsed `/proc`—ensure percentages are 0–100. Replace any literal `threshold` tokens in derived searches with real macros.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix` (scripted input).\n• Ensure the following data sources are available: `/proc/interrupts`, `/proc/softirqs`, `mpstat` output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that parses `/proc/softirqs` and `/proc/interrupts` (or use `mpstat -I SUM` for softirq/hardirq percentages). Calculate softirq and hardirq as percentage of CPU time. Run every 60 seconds. Alert when combined IRQ time exceeds 20% sustained for 10 minutes. Correlate with network/block device activity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=irq_stats host=* cpu=*\n| stats latest(softirq_pct) as softirq, latest(hardirq_pct) as hardirq by host, cpu\n| where softirq > 30 OR hardirq > 15\n```\n\nUnderstanding this SPL\n\n**Linux Softirq / Hardirq Time** — Detect interrupt storms (softirq/hardirq) that degrade system performance. High IRQ time indicates network, block I/O, or timer storms.\n\nDocumented **Data sources**: `/proc/interrupts`, `/proc/softirqs`, `mpstat` output. **App/TA** (typical add-on context): `Splunk_TA_nix` (scripted input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: irq_stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=irq_stats. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, cpu** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where softirq > 30 OR hardirq > 15` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Linux Softirq / Hardirq Time** — Detect interrupt storms (softirq/hardirq) that degrade system performance. High IRQ time indicates network, block I/O, or timer storms.\n\nDocumented **Data sources**: `/proc/interrupts`, `/proc/softirqs`, `mpstat` output. **App/TA** (typical add-on context): `Splunk_TA_nix` (scripted input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (softirq/hardirq % over time), Table of hosts with elevated IRQ, Stacked area chart by IRQ type.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how much CPU time goes to interrupt handling so storms from networking or storage show up before average utilization looks bad.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host span=5m | sort - count",
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.130",
              "n": "TCP Connection State Distribution (Linux)",
              "c": "medium",
              "f": "intermediate",
              "v": "Count of ESTABLISHED, TIME_WAIT, CLOSE_WAIT, SYN_RECV connections. Detects connection leaks (accumulating CLOSE_WAIT), exhaustion (TIME_WAIT), and half-open buildup.",
              "t": "`Splunk_TA_nix` (scripted input)",
              "d": "`ss -s` or `netstat` output",
              "q": "index=os sourcetype=tcp_states host=*\n| timechart span=15m avg(CLOSE-WAIT) as close_wait by host\n| where close_wait > 500",
              "m": "Create a scripted input that runs `ss -s` (parse TCP: inuse X orphaned X tw X alloc X mem X) or `netstat -an | awk` to count by state. Parse ESTAB, TIME-WAIT, CLOSE-WAIT, SYN-RECV. Run every 60 seconds. Alert when CLOSE_WAIT exceeds 1000 (possible connection leak); alert when TIME_WAIT exceeds 10000 (port exhaustion risk).",
              "z": "Stacked bar chart (state distribution by host), Line chart (CLOSE_WAIT over time), Table of hosts exceeding thresholds.",
              "kfp": "Your collector must emit a numeric `close_wait` field; hyphenated `CLOSE-WAIT` from raw `ss` needs renaming at ingest. Load balancers and chatty microservices can hold many `CLOSE_WAIT` sockets briefly during deploys. Tune `_time` span to avoid single-sample spikes.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix` (scripted input).\n• Ensure the following data sources are available: `ss -s` or `netstat` output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that runs `ss -s` (parse TCP: inuse X orphaned X tw X alloc X mem X) or `netstat -an | awk` to count by state. Parse ESTAB, TIME-WAIT, CLOSE-WAIT, SYN-RECV. Run every 60 seconds. Alert when CLOSE_WAIT exceeds 1000 (possible connection leak); alert when TIME_WAIT exceeds 10000 (port exhaustion risk).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=tcp_states host=*\n| timechart span=15m avg(CLOSE-WAIT) as close_wait by host\n| where close_wait > 500\n```\n\nUnderstanding this SPL\n\n**TCP Connection State Distribution (Linux)** — Count of ESTABLISHED, TIME_WAIT, CLOSE_WAIT, SYN_RECV connections. Detects connection leaks (accumulating CLOSE_WAIT), exhaustion (TIME_WAIT), and half-open buildup.\n\nDocumented **Data sources**: `ss -s` or `netstat` output. **App/TA** (typical add-on context): `Splunk_TA_nix` (scripted input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: tcp_states. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=tcp_states. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where close_wait > 500` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar chart (state distribution by host), Line chart (CLOSE_WAIT over time), Table of hosts exceeding thresholds.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow half-open and stuck TCP states from Linux so connection leaks and port exhaustion are visible before services refuse new work.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.1.131",
              "n": "Linux OOM Killer Invocation Tracking",
              "c": "critical",
              "f": "intermediate",
              "v": "Track which processes were killed by the OOM killer and how often. OOM events indicate severe memory pressure and often precede application outages.",
              "t": "`Splunk_TA_nix`",
              "d": "`/var/log/kern.log`, `dmesg`, `sourcetype=syslog`",
              "q": "index=os (sourcetype=syslog OR sourcetype=linux_secure) host=*\n| search \"oom-kill\" OR \"Out of memory\" \"Killed process\"\n| stats count by host\n| where count > 0",
              "m": "Ensure kernel messages are forwarded via syslog or Splunk_TA_nix. The OOM killer logs to kernel ring buffer; rsyslog typically captures to kern.log. Use `dmesg -T` or journalctl for immediate capture. Create alert on any OOM event. Parse process name and PID for context. Correlate with memory metrics before the event.",
              "z": "Alert (immediate on OOM), Table (host, process, count), Timeline of OOM events.",
              "kfp": "User-space messages that contain the phrase “out of memory” without an actual `oom-killer` action (e.g. Java heap text) can match—tighten with `host=kern` or facility filters if noisy. Container restarts and cgroup OOM differ slightly in wording by runtime; extend the `search` OR list for your stack. Log volume from `kern` can be high during GPU or ML jobs without service impact if limits are expected.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`.\n• Ensure the following data sources are available: `/var/log/kern.log`, `dmesg`, `sourcetype=syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure kernel messages are forwarded via syslog or Splunk_TA_nix. The OOM killer logs to kernel ring buffer; rsyslog typically captures to kern.log. Use `dmesg -T` or journalctl for immediate capture. Create alert on any OOM event. Parse process name and PID for context. Correlate with memory metrics before the event.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os (sourcetype=syslog OR sourcetype=linux_secure) host=*\n| search \"oom-kill\" OR \"Out of memory\" \"Killed process\"\n| stats count by host\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Linux OOM Killer Invocation Tracking** — Track which processes were killed by the OOM killer and how often. OOM events indicate severe memory pressure and often precede application outages.\n\nDocumented **Data sources**: `/var/log/kern.log`, `dmesg`, `sourcetype=syslog`. **App/TA** (typical add-on context): `Splunk_TA_nix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog, linux_secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert (immediate on OOM), Table (host, process, count), Timeline of OOM events.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We capture when Linux kills a process to save the rest of the system, so we know memory ran out before apps crash in a less obvious way.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 38.6,
          "qd": {
            "gold": 5,
            "silver": 5,
            "bronze": 121,
            "none": 0
          }
        },
        {
          "i": "1.2",
          "n": "Windows Servers",
          "u": [
            {
              "i": "1.2.1",
              "n": "CPU Utilization Trending (Windows)",
              "c": "high",
              "f": "intermediate",
              "v": "Sustained high CPU causes application timeouts and service degradation. Trending enables capacity planning and helps identify runaway processes.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:CPU` (Perfmon input: `Processor`)",
              "q": "index=perfmon sourcetype=\"Perfmon:CPU\" counter=\"% Processor Time\" instance=\"_Total\"\n| timechart span=1h avg(Value) as avg_cpu by host\n| where avg_cpu > 90",
              "m": "Configure Perfmon inputs in `inputs.conf` on the UF: `[perfmon://CPU]`, object=Processor, counters=% Processor Time, instances=_Total, interval=60. Alert on sustained >90% for 15+ minutes.",
              "z": "Line chart (timechart), Single value (current), Heatmap across hosts.",
              "kfp": "Sustained high CPU is normal during Windows Update, antivirus scans, IIS or SQL maintenance, and large batch jobs. Short spikes often clear without action.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:CPU` (Perfmon input: `Processor`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon inputs in `inputs.conf` on the UF: `[perfmon://CPU]`, object=Processor, counters=% Processor Time, instances=_Total, interval=60. Alert on sustained >90% for 15+ minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:CPU\" counter=\"% Processor Time\" instance=\"_Total\"\n| timechart span=1h avg(Value) as avg_cpu by host\n| where avg_cpu > 90\n```\n\nUnderstanding this SPL\n\n**CPU Utilization Trending (Windows)** — Sustained high CPU causes application timeouts and service degradation. Trending enables capacity planning and helps identify runaway processes.\n\nDocumented **Data sources**: `sourcetype=Perfmon:CPU` (Perfmon input: `Processor`). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:CPU. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:CPU\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CPU Utilization Trending (Windows)** — Sustained high CPU causes application timeouts and service degradation. Trending enables capacity planning and helps identify runaway processes.\n\nDocumented **Data sources**: `sourcetype=Perfmon:CPU` (Perfmon input: `Processor`). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (timechart), Single value (current), Heatmap across hosts.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how busy each server’s processors are over the hours, and we help you know when a machine is working too hard for too long to serve apps well.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.2",
              "n": "Memory Utilization & Paging (Windows)",
              "c": "high",
              "f": "intermediate",
              "v": "High memory and excessive paging degrade performance. Page file usage indicates the system is under memory pressure.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:Memory`",
              "q": "index=perfmon sourcetype=\"Perfmon:Memory\" (counter=\"% Committed Bytes In Use\" OR counter=\"Pages/sec\")\n| timechart span=5m avg(Value) by counter, host",
              "m": "Configure Perfmon input for Memory object: counters = `% Committed Bytes In Use`, `Available MBytes`, `Pages/sec`. Alert when committed bytes >90% or pages/sec sustained >1000.",
              "z": "Dual-axis line chart (memory % + pages/sec), Gauge widget.",
              "kfp": "Memory and paging can spike during patch installs, large queries, and backup windows. Windows may commit aggressively under load without immediate user impact.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:Memory`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon input for Memory object: counters = `% Committed Bytes In Use`, `Available MBytes`, `Pages/sec`. Alert when committed bytes >90% or pages/sec sustained >1000.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:Memory\" (counter=\"% Committed Bytes In Use\" OR counter=\"Pages/sec\")\n| timechart span=5m avg(Value) by counter, host\n```\n\nUnderstanding this SPL\n\n**Memory Utilization & Paging (Windows)** — High memory and excessive paging degrade performance. Page file usage indicates the system is under memory pressure.\n\nDocumented **Data sources**: `sourcetype=Perfmon:Memory`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:Memory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:Memory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by counter, host** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Memory Utilization & Paging (Windows)** — High memory and excessive paging degrade performance. Page file usage indicates the system is under memory pressure.\n\nDocumented **Data sources**: `sourcetype=Perfmon:Memory`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where mem_pct > 95 OR swap_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Dual-axis line chart (memory % + pages/sec), Gauge widget.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see when the server is short on memory or swapping a lot, so you can add RAM or stop a runaway app before things freeze up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.3",
              "n": "Disk Space Monitoring (Windows)",
              "c": "critical",
              "f": "intermediate",
              "v": "Full disks crash applications, stop logging, and corrupt databases. Windows can become unbootable if the system drive fills.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:LogicalDisk`",
              "q": "index=perfmon sourcetype=\"Perfmon:LogicalDisk\" counter=\"% Free Space\" instance!=\"_Total\"\n| stats latest(Value) as free_pct by host, instance\n| eval used_pct = 100 - free_pct\n| where used_pct > 85\n| sort -used_pct",
              "m": "Perfmon input: LogicalDisk, counters = `% Free Space`, `Free Megabytes`. Alert at 85%/90%/95% thresholds. Use `predict` for forecasting.",
              "z": "Table sorted by usage, Gauge per drive, Line chart trend per volume.",
              "kfp": "Low free space can be short-lived during upgrades, VSS, backups, and temp file bursts. Some volumes are intentionally small; alert by tier.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:LogicalDisk`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPerfmon input: LogicalDisk, counters = `% Free Space`, `Free Megabytes`. Alert at 85%/90%/95% thresholds. Use `predict` for forecasting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:LogicalDisk\" counter=\"% Free Space\" instance!=\"_Total\"\n| stats latest(Value) as free_pct by host, instance\n| eval used_pct = 100 - free_pct\n| where used_pct > 85\n| sort -used_pct\n```\n\nUnderstanding this SPL\n\n**Disk Space Monitoring (Windows)** — Full disks crash applications, stop logging, and corrupt databases. Windows can become unbootable if the system drive fills.\n\nDocumented **Data sources**: `sourcetype=Perfmon:LogicalDisk`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:LogicalDisk. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:LogicalDisk\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, instance** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.storage_used_percent) as disk_pct\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=1h\n| where disk_pct > 85\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Disk Space Monitoring (Windows)** — Full disks crash applications, stop logging, and corrupt databases. Windows can become unbootable if the system drive fills.\n\nDocumented **Data sources**: `sourcetype=Perfmon:LogicalDisk`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where disk_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table sorted by usage, Gauge per drive, Line chart trend per volume.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check how full your disks are on each drive, and we help you add space or clean up before the server runs out of room to work.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.storage_used_percent) as disk_pct\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=1h\n| where disk_pct > 85",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.4",
              "n": "Windows Service Failures",
              "c": "critical",
              "f": "beginner",
              "v": "Stopped critical services directly impact application availability. Auto-restart doesn't always work, and some services can't auto-restart.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:System`, Event IDs 7034 (crash), 7036 (state change), 7031 (unexpected termination)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:System\" (EventCode=7034 OR EventCode=7031 OR EventCode=7036)\n| eval status=case(EventCode=7034, \"Crashed\", EventCode=7031, \"Terminated Unexpectedly\", EventCode=7036 AND Message LIKE \"%stopped%\", \"Stopped\", 1=1, \"Changed\")\n| stats count by host, EventCode, status, Message\n| sort -count",
              "m": "Enable Windows Event Log collection for the System log. Create alerts on EventCode 7034 and 7031. Maintain a lookup of critical services per server role to filter noise.",
              "z": "Status panel (red/green per service), Table of recent events, Timeline.",
              "kfp": "Planned restarts, patching, and automation stop services on purpose. Pair alerts with a list of in-scope business-critical services and change windows.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:System`, Event IDs 7034 (crash), 7036 (state change), 7031 (unexpected termination).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Windows Event Log collection for the System log. Create alerts on EventCode 7034 and 7031. Maintain a lookup of critical services per server role to filter noise.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:System\" (EventCode=7034 OR EventCode=7031 OR EventCode=7036)\n| eval status=case(EventCode=7034, \"Crashed\", EventCode=7031, \"Terminated Unexpectedly\", EventCode=7036 AND Message LIKE \"%stopped%\", \"Stopped\", 1=1, \"Changed\")\n| stats count by host, EventCode, status, Message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Windows Service Failures** — Stopped critical services directly impact application availability. Auto-restart doesn't always work, and some services can't auto-restart.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System`, Event IDs 7034 (crash), 7036 (state change), 7031 (unexpected termination). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:System. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, EventCode, status, Message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Windows Service Failures** — Stopped critical services directly impact application availability. Auto-restart doesn't always work, and some services can't auto-restart.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System`, Event IDs 7034 (crash), 7036 (state change), 7031 (unexpected termination). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Services` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status panel (red/green per service), Table of recent events, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see when a Windows service stopped, crashed, or failed in a way you care about, so your team can get it back online before the app is down for long.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.5",
              "n": "Event Log Flood Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Abnormal event log volumes often indicate error loops, misconfiguration, or an active attack. Also protects Splunk license from unexpected spikes.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:*`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:*\"\n| timechart span=1h count by host\n| eventstats avg(count) as avg_count, stdev(count) as stdev_count by host\n| eval threshold = avg_count + (3 * stdev_count)\n| where count > threshold",
              "m": "Use `timechart` + standard deviation to baseline normal volumes. Alert when volume exceeds 3 standard deviations. Investigate the top EventCode contributing to the spike.",
              "z": "Line chart with dynamic threshold overlay, Table of spike events.",
              "kfp": "Big releases, AD churn, and mis-tuned health checks can raise volume. Seasonality and move weekends shift baselines; retrain before paging.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse `timechart` + standard deviation to baseline normal volumes. Alert when volume exceeds 3 standard deviations. Investigate the top EventCode contributing to the spike.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:*\"\n| timechart span=1h count by host\n| eventstats avg(count) as avg_count, stdev(count) as stdev_count by host\n| eval threshold = avg_count + (3 * stdev_count)\n| where count > threshold\n```\n\nUnderstanding this SPL\n\n**Event Log Flood Detection** — Abnormal event log volumes often indicate error loops, misconfiguration, or an active attack. Also protects Splunk license from unexpected spikes.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:*`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:*. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:*\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• `eventstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **threshold** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where count > threshold` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart with dynamic threshold overlay, Table of spike events.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count how many log lines each machine is sending, and we help you know when one box is chattering way more than normal so you can dig in or save license.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.6",
              "n": "Failed Login Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Detects credential stuffing, brute-force attacks, and compromised account usage. Key for security monitoring and compliance.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security`, EventCode=4625",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4625\n| eval src=coalesce(src, IpAddress)\n| stats count as failures, dc(TargetUserName) as accounts_targeted, values(TargetUserName) as usernames by src, host\n| where failures > 10\n| sort -failures\n| iplocation src",
              "m": "Enable Security Event Log collection (already default in most deployments). Create alert for >10 failures from single source in 5 minutes. Correlate with successful logins (4624) from same source.",
              "z": "Table (source, failures, targets), Map (GeoIP), Timechart of failure trends.",
              "kfp": "Service accounts with bad passwords, scanners, and SSO hiccups cluster failures. Private IP geolocation is often wrong; pair with your asset and account lists.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security`, EventCode=4625.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Security Event Log collection (already default in most deployments). Create alert for >10 failures from single source in 5 minutes. Correlate with successful logins (4624) from same source.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4625\n| eval src=coalesce(src, IpAddress)\n| stats count as failures, dc(TargetUserName) as accounts_targeted, values(TargetUserName) as usernames by src, host\n| where failures > 10\n| sort -failures\n| iplocation src\n```\n\nUnderstanding this SPL\n\n**Failed Login Monitoring** — Detects credential stuffing, brute-force attacks, and compromised account usage. Key for security monitoring and compliance.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security`, EventCode=4625. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **src** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by src, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failures > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Failed Login Monitoring**): iplocation src\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Failed Login Monitoring** — Detects credential stuffing, brute-force attacks, and compromised account usage. Key for security monitoring and compliance.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security`, EventCode=4625. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (source, failures, targets), Map (GeoIP), Timechart of failure trends.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count failed sign-ins by user and source so you can tell a guessing attack, a stuck script, and a person mistyping a password apart more quickly.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.7",
              "n": "Account Lockout Tracking",
              "c": "high",
              "f": "beginner",
              "v": "Lockouts frustrate users and can indicate active attacks. Identifying the source computer of the lockout dramatically speeds resolution.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security`, EventCode=4740",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4740\n| table _time TargetUserName TargetDomainName CallerComputerName\n| sort -_time",
              "m": "Collect Security logs from domain controllers (critical). The CallerComputerName field identifies which machine caused the lockout. Create alert per lockout and an aggregate alert for mass lockouts.",
              "z": "Table (user, source computer, time), Single value (lockouts last 24h), Bar chart by user.",
              "kfp": "Users with multiple devices, cached creds, and help-desk lockouts are common. The caller computer field tells you where to look first.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security`, EventCode=4740.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Security logs from domain controllers (critical). The CallerComputerName field identifies which machine caused the lockout. Create alert per lockout and an aggregate alert for mass lockouts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4740\n| table _time TargetUserName TargetDomainName CallerComputerName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Account Lockout Tracking** — Lockouts frustrate users and can indicate active attacks. Identifying the source computer of the lockout dramatically speeds resolution.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security`, EventCode=4740. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Account Lockout Tracking**): table _time TargetUserName TargetDomainName CallerComputerName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Account Lockout Tracking** — Lockouts frustrate users and can indicate active attacks. Identifying the source computer of the lockout dramatically speeds resolution.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security`, EventCode=4740. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, source computer, time), Single value (lockouts last 24h), Bar chart by user.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We show when someone’s account is locked and which machine caused it, so support can fix the right password or app instead of guessing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.8",
              "n": "Privileged Group Changes",
              "c": "critical",
              "f": "intermediate",
              "v": "Additions to Domain Admins, Enterprise Admins, or Schema Admins grant extreme privilege. Unauthorized changes could mean full domain compromise.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security`, EventCodes 4728, 4732, 4756 (member added); 4729, 4733, 4757 (member removed)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" (EventCode=4728 OR EventCode=4732 OR EventCode=4756 OR EventCode=4729 OR EventCode=4733 OR EventCode=4757)\n| eval action=case(EventCode IN (4728,4732,4756), \"Added\", EventCode IN (4729,4733,4757), \"Removed\")\n| table _time action TargetUserName MemberName Group_Name SubjectUserName host\n| sort -_time",
              "m": "Collect Security logs from all domain controllers. Create a real-time alert on these event codes filtered to privileged groups (Domain Admins, Enterprise Admins, Schema Admins, Administrators). Require correlation with change ticket.",
              "z": "Events timeline, Table with action details, Alert panel (critical).",
              "kfp": "Joiners, break-glass, and vendor scripts change privileged groups during approved work. Match events to your change and approval system.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security`, EventCodes 4728, 4732, 4756 (member added); 4729, 4733, 4757 (member removed).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Security logs from all domain controllers. Create a real-time alert on these event codes filtered to privileged groups (Domain Admins, Enterprise Admins, Schema Admins, Administrators). Require correlation with change ticket.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" (EventCode=4728 OR EventCode=4732 OR EventCode=4756 OR EventCode=4729 OR EventCode=4733 OR EventCode=4757)\n| eval action=case(EventCode IN (4728,4732,4756), \"Added\", EventCode IN (4729,4733,4757), \"Removed\")\n| table _time action TargetUserName MemberName Group_Name SubjectUserName host\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Privileged Group Changes** — Additions to Domain Admins, Enterprise Admins, or Schema Admins grant extreme privilege. Unauthorized changes could mean full domain compromise.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security`, EventCodes 4728, 4732, 4756 (member added); 4729, 4733, 4757 (member removed). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Privileged Group Changes**): table _time action TargetUserName MemberName Group_Name SubjectUserName host\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| search Authentication.user=*admin* OR Authentication.user=root\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privileged Group Changes** — Additions to Domain Admins, Enterprise Admins, or Schema Admins grant extreme privilege. Unauthorized changes could mean full domain compromise.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security`, EventCodes 4728, 4732, 4756 (member added); 4729, 4733, 4757 (member removed). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Table with action details, Alert panel (critical).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see when who belongs to the most powerful groups changes, so security can check each change was planned and correct.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| search Authentication.user=*admin* OR Authentication.user=root",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.9",
              "n": "Windows Update Compliance",
              "c": "medium",
              "f": "advanced",
              "v": "Unpatched systems are primary attack vectors. Tracking patch compliance across the fleet supports vulnerability management and regulatory requirements.",
              "t": "`Splunk_TA_windows`, custom scripted input",
              "d": "`sourcetype=WinEventLog:System` (Event ID 19/20/43), WSUS logs, or scripted input",
              "q": "index=wineventlog sourcetype=\"WinEventLog:System\" EventCode=19\n| rex \"(?<kb_article>KB\\d+)\"\n| stats latest(_time) as last_update, count as updates_installed by host\n| eval days_since_update = round((now() - last_update) / 86400, 0)\n| where days_since_update > 30\n| sort -days_since_update",
              "m": "Forward System event logs. EventCode 19 = successful update install. Create scripted input running `Get-HotFix` for comprehensive view. Dashboard showing days since last patch per host, flagging >30 days.",
              "z": "Table (host, last update, days since), Bar chart (compliance %), Heatmap by team/location.",
              "kfp": "Staged rollouts, deferred reboots, air-gaps, and WSUS scope changes all shift “compliant” timing. Do not use this as your only patch KPI.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, custom scripted input.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:System` (Event ID 19/20/43), WSUS logs, or scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward System event logs. EventCode 19 = successful update install. Create scripted input running `Get-HotFix` for comprehensive view. Dashboard showing days since last patch per host, flagging >30 days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:System\" EventCode=19\n| rex \"(?<kb_article>KB\\d+)\"\n| stats latest(_time) as last_update, count as updates_installed by host\n| eval days_since_update = round((now() - last_update) / 86400, 0)\n| where days_since_update > 30\n| sort -days_since_update\n```\n\nUnderstanding this SPL\n\n**Windows Update Compliance** — Unpatched systems are primary attack vectors. Tracking patch compliance across the fleet supports vulnerability management and regulatory requirements.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (Event ID 19/20/43), WSUS logs, or scripted input. **App/TA** (typical add-on context): `Splunk_TA_windows`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:System. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since_update** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since_update > 30` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, last update, days since), Bar chart (compliance %), Heatmap by team/location.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which servers are behind on important updates, so the team can catch up before scanners or attackers do.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "15",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that APRA CPS 234 15 (Policy framework) is enforced — Splunk UC-1.2.9: Windows Update Compliance.",
                  "ea": "Saved search 'UC-1.2.9' running on sourcetype=WinEventLog:System (Event ID 19/20/43), WSUS logs, or scripted input, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.apra.gov.au/information-security"
                },
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.06",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.06 (Patch operating systems) is enforced — Splunk UC-1.2.9: Windows Update Compliance.",
                  "ea": "Saved search 'UC-1.2.9' running on sourcetype=WinEventLog:System (Event ID 19/20/43), WSUS logs, or scripted input, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/essential-eight/essential-eight-maturity-model"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NERC CIP CIP-010-4 R1 (Configuration change management) is enforced — Splunk UC-1.2.9: Windows Update Compliance.",
                  "ea": "Saved search 'UC-1.2.9' running on sourcetype=WinEventLog:System (Event ID 19/20/43), WSUS logs, or scripted input, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                }
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.10",
              "n": "Scheduled Task Failures",
              "c": "medium",
              "f": "beginner",
              "v": "Failed scheduled tasks break batch jobs, cleanup scripts, and automated processes. Often goes unnoticed until downstream effects appear.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-TaskScheduler/Operational`, EventCode=201",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-TaskScheduler/Operational\" EventCode=201\n| where ActionName!=\"0\" AND ResultCode!=\"0\"\n| table _time host TaskName ResultCode ActionName\n| sort -_time",
              "m": "Enable Task Scheduler operational log collection. Alert on non-zero ResultCode values. Maintain a lookup of critical tasks per server role.",
              "z": "Table of failures, Single value (failures last 24h), Bar chart by task name.",
              "kfp": "Build scripts, MDT, and dev boxes fail tasks in ways that are noisy. Name and owner context separate noise from business batch failures.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-TaskScheduler/Operational`, EventCode=201.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Task Scheduler operational log collection. Alert on non-zero ResultCode values. Maintain a lookup of critical tasks per server role.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-TaskScheduler/Operational\" EventCode=201\n| where ActionName!=\"0\" AND ResultCode!=\"0\"\n| table _time host TaskName ResultCode ActionName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Scheduled Task Failures** — Failed scheduled tasks break batch jobs, cleanup scripts, and automated processes. Often goes unnoticed until downstream effects appear.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-TaskScheduler/Operational`, EventCode=201. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-TaskScheduler/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-TaskScheduler/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where ActionName!=\"0\" AND ResultCode!=\"0\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Scheduled Task Failures**): table _time host TaskName ResultCode ActionName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of failures, Single value (failures last 24h), Bar chart by task name.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch scheduled jobs that did not run successfully, so backups and cleanups you rely on are not failing quietly in the background.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.11",
              "n": "Blue Screen of Death (BSOD)",
              "c": "critical",
              "f": "beginner",
              "v": "BSODs indicate severe system instability — driver bugs, hardware failure, or memory corruption. Repeated BSODs on the same host demand immediate attention.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:System`, EventCode=1001 (BugCheck)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:System\" EventCode=1001 SourceName=\"BugCheck\"\n| rex \"(?<bugcheck_code>0x[0-9a-fA-F]+)\"\n| table _time host bugcheck_code Message\n| sort -_time",
              "m": "Enable System event log collection. Alert on EventCode 1001 from BugCheck source. Correlate bugcheck codes with known issues. Track frequency per host to identify chronic instability.",
              "z": "Events timeline, Table per host, Single value (BSOD count last 30d).",
              "kfp": "One-off driver loads, flaky RAM, and manual crash tests can produce a single bugcheck. Recurrence and new drivers matter more than a lone event.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:System`, EventCode=1001 (BugCheck).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable System event log collection. Alert on EventCode 1001 from BugCheck source. Correlate bugcheck codes with known issues. Track frequency per host to identify chronic instability.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:System\" EventCode=1001 SourceName=\"BugCheck\"\n| rex \"(?<bugcheck_code>0x[0-9a-fA-F]+)\"\n| table _time host bugcheck_code Message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Blue Screen of Death (BSOD)** — BSODs indicate severe system instability — driver bugs, hardware failure, or memory corruption. Repeated BSODs on the same host demand immediate attention.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System`, EventCode=1001 (BugCheck). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:System. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **Blue Screen of Death (BSOD)**): table _time host bugcheck_code Message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Table per host, Single value (BSOD count last 30d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a server had a full blue-screen class crash so hardware and drivers can be fixed before it keeps happening.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.12",
              "n": "RDP Session Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Tracks who connected via Remote Desktop, from where, and when. Essential for compliance auditing and detecting lateral movement.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4624, LogonType=10), `sourcetype=WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational`",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational\" (EventCode=21 OR EventCode=23 OR EventCode=24 OR EventCode=25)\n| table _time host User EventCode",
              "m": "Enable Security log + TerminalServices operational log. Alert on RDP sessions to servers from unexpected sources. Create session audit report correlating logon/logoff events.",
              "z": "Table (user, source IP, host, time), Choropleth map for source IPs, Session timeline.",
              "kfp": "Jump boxes, help desk, and vendors create many sessions. Map sources to your admin roster and maintenance windows first.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4624, LogonType=10), `sourcetype=WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Security log + TerminalServices operational log. Alert on RDP sessions to servers from unexpected sources. Create session audit report correlating logon/logoff events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational\" (EventCode=21 OR EventCode=23 OR EventCode=24 OR EventCode=25)\n| table _time host User EventCode\n```\n\nUnderstanding this SPL\n\n**RDP Session Monitoring** — Tracks who connected via Remote Desktop, from where, and when. Essential for compliance auditing and detecting lateral movement.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4624, LogonType=10), `sourcetype=WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **RDP Session Monitoring**): table _time host User EventCode\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, source IP, host, time), Choropleth map for source IPs, Session timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We list who used Remote Desktop to which systems and when, so access reviews and investigations have a clear trail to trust.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.13",
              "n": "PowerShell Script Execution",
              "c": "high",
              "f": "beginner",
              "v": "PowerShell is the most common tool in modern Windows attacks (Cobalt Strike, Empire, fileless malware). Script block logging captures the actual code executed.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-PowerShell/Operational`, EventCode=4104",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104\n| search ScriptBlockText=\"*EncodedCommand*\" OR ScriptBlockText=\"*Invoke-Mimikatz*\" OR ScriptBlockText=\"*Net.WebClient*\" OR ScriptBlockText=\"*-nop -w hidden*\"\n| table _time host ScriptBlockText\n| sort -_time",
              "m": "Enable PowerShell Script Block Logging via GPO: `Administrative Templates > Windows Components > Windows PowerShell > Turn on PowerShell Script Block Logging`. Forward the PowerShell Operational log. Create alerts on suspicious keywords (encoded commands, invoke-expression, web client downloads).",
              "z": "Events list (full script block text), Table of suspicious commands, Volume timechart.",
              "kfp": "IT automation, Intune, and installers can match “suspicious” patterns. Baseline with script paths, signers, and parent process context.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-PowerShell/Operational`, EventCode=4104.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable PowerShell Script Block Logging via GPO: `Administrative Templates > Windows Components > Windows PowerShell > Turn on PowerShell Script Block Logging`. Forward the PowerShell Operational log. Create alerts on suspicious keywords (encoded commands, invoke-expression, web client downloads).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104\n| search ScriptBlockText=\"*EncodedCommand*\" OR ScriptBlockText=\"*Invoke-Mimikatz*\" OR ScriptBlockText=\"*Net.WebClient*\" OR ScriptBlockText=\"*-nop -w hidden*\"\n| table _time host ScriptBlockText\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**PowerShell Script Execution** — PowerShell is the most common tool in modern Windows attacks (Cobalt Strike, Empire, fileless malware). Script block logging captures the actual code executed.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-PowerShell/Operational`, EventCode=4104. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-PowerShell/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-PowerShell/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **PowerShell Script Execution**): table _time host ScriptBlockText\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events list (full script block text), Table of suspicious commands, Volume timechart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at powerful commands that match patterns attackers use, and we help you know when a script is more likely trouble than a normal admin job.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.15",
              "n": "DNS Server Health",
              "c": "high",
              "f": "beginner",
              "v": "DNS is foundational infrastructure — when DNS is slow or failing, everything fails. Monitoring query rates and failures ensures resolution reliability.",
              "t": "`Splunk_TA_windows`, Microsoft DNS Analytical logs",
              "d": "`sourcetype=WinEventLog:DNS Server`, DNS debug/analytical logs",
              "q": "index=dns sourcetype=\"MSAD:NT6:DNS\"\n| timechart span=5m count as query_count by QTYPE",
              "m": "Enable DNS analytical logging via Event Viewer (disabled by default for performance). Alternatively use DNS debug logging to a file and forward it. Monitor query volume, SERVFAIL rate, and zone transfer events.",
              "z": "Line chart (query rate), Pie chart (query types), Single value (SERVFAIL count).",
              "kfp": "Start-of-day, Wi-Fi, and guest VLANs spike DNS. Compare to a sane baseline; some failure types are normal during re-pointing and upgrades.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, Microsoft DNS Analytical logs.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:DNS Server`, DNS debug/analytical logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable DNS analytical logging via Event Viewer (disabled by default for performance). Alternatively use DNS debug logging to a file and forward it. Monitor query volume, SERVFAIL rate, and zone transfer events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns sourcetype=\"MSAD:NT6:DNS\"\n| timechart span=5m count as query_count by QTYPE\n```\n\nUnderstanding this SPL\n\n**DNS Server Health** — DNS is foundational infrastructure — when DNS is slow or failing, everything fails. Monitoring query rates and failures ensures resolution reliability.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:DNS Server`, DNS debug/analytical logs. **App/TA** (typical add-on context): `Splunk_TA_windows`, Microsoft DNS Analytical logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns; **sourcetype**: MSAD:NT6:DNS. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns, sourcetype=\"MSAD:NT6:DNS\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by QTYPE** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (query rate), Pie chart (query types), Single value (SERVFAIL count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the pattern of name lookups the DNS server is handling, and we help you know when the service is under unusual load or odd query types show up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.16",
              "n": "DHCP Scope Exhaustion",
              "c": "high",
              "f": "intermediate",
              "v": "When DHCP scopes run out of addresses, new devices can't get network access. Often manifests as \"network down\" complaints.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=DhcpSrvLog`, DHCP audit logs",
              "q": "index=dhcp sourcetype=\"DhcpSrvLog\"\n| where EventID=13 OR EventID=14\n| stats count by Description",
              "m": "Forward DHCP server audit logs from `%windir%\\System32\\Dhcp`. Create scripted input running `Get-DhcpServerv4ScopeStatistics` to get scope utilization. Alert when any scope exceeds 90% utilization.",
              "z": "Gauge per scope, Table (scope, used, available, % full), Trend line.",
              "kfp": "New sites, big events, and guest surges can exhaust a scope. Some exhaustion alerts are by design in lab or isolation VLANs; tune with subnet owners.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=DhcpSrvLog`, DHCP audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward DHCP server audit logs from `%windir%\\System32\\Dhcp`. Create scripted input running `Get-DhcpServerv4ScopeStatistics` to get scope utilization. Alert when any scope exceeds 90% utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dhcp sourcetype=\"DhcpSrvLog\"\n| where EventID=13 OR EventID=14\n| stats count by Description\n```\n\nUnderstanding this SPL\n\n**DHCP Scope Exhaustion** — When DHCP scopes run out of addresses, new devices can't get network access. Often manifests as \"network down\" complaints.\n\nDocumented **Data sources**: `sourcetype=DhcpSrvLog`, DHCP audit logs. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dhcp; **sourcetype**: DhcpSrvLog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dhcp, sourcetype=\"DhcpSrvLog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where EventID=13 OR EventID=14` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by Description** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge per scope, Table (scope, used, available, % full), Trend line.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a network ran out of addresses to hand out, so new phones and laptops can still get online in busy places.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.17",
              "n": "Certificate Expiration",
              "c": "high",
              "f": "advanced",
              "v": "Expired certificates cause TLS failures, broken websites, authentication failures, and service outages. Among the most preventable outage causes.",
              "t": "`Splunk_TA_windows`, custom scripted input",
              "d": "Custom scripted input (`certutil` or PowerShell `Get-ChildItem Cert:\\`)",
              "q": "index=os sourcetype=certificate_inventory host=*\n| eval days_until_expiry = round((expiry_epoch - now()) / 86400, 0)\n| where days_until_expiry < 90\n| sort days_until_expiry\n| table host cert_subject issuer days_until_expiry expiry_date",
              "m": "Create a PowerShell scripted input: `Get-ChildItem -Path Cert:\\LocalMachine -Recurse | Select Subject, NotAfter, Issuer`. Run daily. Alert at 90/60/30/7 day thresholds.",
              "z": "Table sorted by days to expiry, Single value (certs expiring within 30d), Status indicator (red/yellow/green).",
              "kfp": "Short internal CA lifetimes, test hosts, and auto-rotating certs are noisy. Allowlist your automation and standard templates.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, custom scripted input.\n• Ensure the following data sources are available: Custom scripted input (`certutil` or PowerShell `Get-ChildItem Cert:\\`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a PowerShell scripted input: `Get-ChildItem -Path Cert:\\LocalMachine -Recurse | Select Subject, NotAfter, Issuer`. Run daily. Alert at 90/60/30/7 day thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=certificate_inventory host=*\n| eval days_until_expiry = round((expiry_epoch - now()) / 86400, 0)\n| where days_until_expiry < 90\n| sort days_until_expiry\n| table host cert_subject issuer days_until_expiry expiry_date\n```\n\nUnderstanding this SPL\n\n**Certificate Expiration** — Expired certificates cause TLS failures, broken websites, authentication failures, and service outages. Among the most preventable outage causes.\n\nDocumented **Data sources**: Custom scripted input (`certutil` or PowerShell `Get-ChildItem Cert:\\`). **App/TA** (typical add-on context): `Splunk_TA_windows`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: certificate_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=certificate_inventory. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_until_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_until_expiry < 90` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Certificate Expiration**): table host cert_subject issuer days_until_expiry expiry_date\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table sorted by days to expiry, Single value (certs expiring within 30d), Status indicator (red/yellow/green).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when machine or service certificates are about to expire, so renewals happen before someone’s browser or app suddenly stops trusting the server.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.19",
              "n": "Group Policy Processing Failures",
              "c": "medium",
              "f": "beginner",
              "v": "GPO failures mean security policies, drive mappings, software deployments, and configurations aren't being applied. Systems may be running with stale or missing policies.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-GroupPolicy/Operational`, EventCodes 1085, 1096",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-GroupPolicy/Operational\" (EventCode=1085 OR EventCode=1096 OR EventCode=7016 OR EventCode=7320)\n| stats count by host, EventCode, ErrorDescription\n| sort -count",
              "m": "Enable Group Policy operational log forwarding. Alert on persistent GPO failures per host. Correlate with network connectivity (DC reachability) and DNS resolution issues.",
              "z": "Table (host, error, count), Bar chart by error type.",
              "kfp": "Policy version bumps, site links, and WMI filter re-evaluation can cause short bursts. Compare across domain controllers and change windows first.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-GroupPolicy/Operational`, EventCodes 1085, 1096.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Group Policy operational log forwarding. Alert on persistent GPO failures per host. Correlate with network connectivity (DC reachability) and DNS resolution issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-GroupPolicy/Operational\" (EventCode=1085 OR EventCode=1096 OR EventCode=7016 OR EventCode=7320)\n| stats count by host, EventCode, ErrorDescription\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Group Policy Processing Failures** — GPO failures mean security policies, drive mappings, software deployments, and configurations aren't being applied. Systems may be running with stale or missing policies.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-GroupPolicy/Operational`, EventCodes 1085, 1096. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-GroupPolicy/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-GroupPolicy/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, EventCode, ErrorDescription** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Group Policy Processing Failures** — GPO failures mean security policies, drive mappings, software deployments, and configurations aren't being applied. Systems may be running with stale or missing policies.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-GroupPolicy/Operational`, EventCodes 1085, 1096. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, error, count), Bar chart by error type.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see when group policy is not applying the way the business expects, so security and software settings are not quietly out of date on some machines.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.20",
              "n": "Print Spooler Issues",
              "c": "low",
              "f": "beginner",
              "v": "Print spooler crashes affect print services and have historically been attack vectors (PrintNightmare). Monitoring catches both operational and security issues.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-PrintService/Operational`",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-PrintService/Operational\" (EventCode=372 OR EventCode=805 OR EventCode=842)\n| stats count by host, EventCode\n| sort -count",
              "m": "Enable PrintService operational log on print servers. Alert on spooler crash (EventCode 372) and driver installation events (security relevance). Consider disabling the print spooler on servers that don't need it (attack surface reduction).",
              "z": "Table, Events timeline.",
              "kfp": "Session hosts, driver push days, and print migrations spike spooler events. Distinguish security-relevant events from day-two operational churn.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-PrintService/Operational`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable PrintService operational log on print servers. Alert on spooler crash (EventCode 372) and driver installation events (security relevance). Consider disabling the print spooler on servers that don't need it (attack surface reduction).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-PrintService/Operational\" (EventCode=372 OR EventCode=805 OR EventCode=842)\n| stats count by host, EventCode\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Print Spooler Issues** — Print spooler crashes affect print services and have historically been attack vectors (PrintNightmare). Monitoring catches both operational and security issues.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-PrintService/Operational`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, EventCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Events timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the print spooler is crashing or getting into trouble, which matters for print uptime and for security on older attack paths.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.21",
              "n": "Disk I/O Queue Length (Windows)",
              "c": "high",
              "f": "intermediate",
              "v": "Sustained high disk queue lengths indicate storage bottlenecks invisible to CPU/memory monitoring. Causes application hangs and timeout errors.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:LogicalDisk` (counter: Current Disk Queue Length)",
              "q": "index=perfmon sourcetype=\"Perfmon:LogicalDisk\" counter=\"Current Disk Queue Length\" instance!=\"_Total\"\n| timechart span=5m avg(Value) as avg_queue by host, instance\n| where avg_queue > 2",
              "m": "Add `Current Disk Queue Length` and `Avg. Disk sec/Transfer` to Perfmon LogicalDisk inputs (interval=30). A sustained queue >2 per spindle indicates saturation. Correlate with application latency. For SSDs, thresholds differ — focus on `Avg. Disk sec/Transfer` >20ms.",
              "z": "Line chart (queue by drive), Heatmap (hosts × drives), Single value (worst queue).",
              "kfp": "Backups, storage vMotion, and large copies flood queues briefly. CIM tstats is a related capacity signal only—tune the raw Perfmon alert for the real queue SLO.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:LogicalDisk` (counter: Current Disk Queue Length).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAdd `Current Disk Queue Length` and `Avg. Disk sec/Transfer` to Perfmon LogicalDisk inputs (interval=30). A sustained queue >2 per spindle indicates saturation. Correlate with application latency. For SSDs, thresholds differ — focus on `Avg. Disk sec/Transfer` >20ms.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:LogicalDisk\" counter=\"Current Disk Queue Length\" instance!=\"_Total\"\n| timechart span=5m avg(Value) as avg_queue by host, instance\n| where avg_queue > 2\n```\n\nUnderstanding this SPL\n\n**Disk I/O Queue Length (Windows)** — Sustained high disk queue lengths indicate storage bottlenecks invisible to CPU/memory monitoring. Causes application hangs and timeout errors.\n\nDocumented **Data sources**: `sourcetype=Perfmon:LogicalDisk` (counter: Current Disk Queue Length). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:LogicalDisk. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:LogicalDisk\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host, instance** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_queue > 2` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.storage_used_percent) as disk_pct\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=1h\n| where disk_pct > 85\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Disk I/O Queue Length (Windows)** — Sustained high disk queue lengths indicate storage bottlenecks invisible to CPU/memory monitoring. Causes application hangs and timeout errors.\n\nDocumented **Data sources**: `sourcetype=Perfmon:LogicalDisk` (counter: Current Disk Queue Length). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where disk_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (queue by drive), Heatmap (hosts × drives), Single value (worst queue).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a disk is stuck behind a long line of I/O, which can make apps on that drive feel frozen even if CPU looks fine.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.storage_used_percent) as disk_pct\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=1h\n| where disk_pct > 85",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.22",
              "n": "Process Handle Leak Detection",
              "c": "high",
              "f": "advanced",
              "v": "Handle leaks cause resource exhaustion and eventual application crashes or system instability. Detecting the leak early prevents unplanned outages.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:Process` (counter: Handle Count)",
              "q": "index=perfmon sourcetype=\"Perfmon:Process\" counter=\"Handle Count\" instance!=\"_Total\" instance!=\"Idle\"\n| timechart span=1h max(Value) as handles by host, instance\n| streamstats window=24 current=f avg(handles) as avg_handles by host, instance\n| eval pct_increase = round((handles - avg_handles) / avg_handles * 100, 1)\n| where pct_increase > 50 AND handles > 5000",
              "m": "Configure Perfmon Process inputs with `Handle Count` counter, all instances, interval=300. Alert when a process shows sustained handle growth >50% over 24-hour baseline. Common leakers: w3wp.exe, svchost.exe, custom .NET apps. Correlate with application restarts.",
              "z": "Line chart (handle trend per process), Table (top handle consumers), Alert on sustained growth.",
              "kfp": "IIS, Java, and long-lived services can hold many handles by design. Growth rate matters more than a high static count. Match against known noisy instances.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:Process` (counter: Handle Count).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon Process inputs with `Handle Count` counter, all instances, interval=300. Alert when a process shows sustained handle growth >50% over 24-hour baseline. Common leakers: w3wp.exe, svchost.exe, custom .NET apps. Correlate with application restarts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:Process\" counter=\"Handle Count\" instance!=\"_Total\" instance!=\"Idle\"\n| timechart span=1h max(Value) as handles by host, instance\n| streamstats window=24 current=f avg(handles) as avg_handles by host, instance\n| eval pct_increase = round((handles - avg_handles) / avg_handles * 100, 1)\n| where pct_increase > 50 AND handles > 5000\n```\n\nUnderstanding this SPL\n\n**Process Handle Leak Detection** — Handle leaks cause resource exhaustion and eventual application crashes or system instability. Detecting the leak early prevents unplanned outages.\n\nDocumented **Data sources**: `sourcetype=Perfmon:Process` (counter: Handle Count). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:Process. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:Process\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by host, instance** — ideal for trending and alerting on this use case.\n• `streamstats` rolls up events into metrics; results are split **by host, instance** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct_increase** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct_increase > 50 AND handles > 5000` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Process Handle Leak Detection** — Handle leaks cause resource exhaustion and eventual application crashes or system instability. Detecting the leak early prevents unplanned outages.\n\nDocumented **Data sources**: `sourcetype=Perfmon:Process` (counter: Handle Count). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (handle trend per process), Table (top handle consumers), Alert on sustained growth.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for programs that keep grabbing more and more system handles, which can end in crashes. The detailed handle-count search is what actually catches the problem; other summarized views are only supporting context.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.23",
              "n": "Non-Paged Pool Exhaustion",
              "c": "critical",
              "f": "intermediate",
              "v": "Non-paged pool memory is limited kernel memory. Exhaustion causes BSOD (DRIVER_IRQL_NOT_LESS_OR_EQUAL). Often caused by driver leaks.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:Memory` (counter: Pool Nonpaged Bytes)",
              "q": "index=perfmon sourcetype=\"Perfmon:Memory\" counter=\"Pool Nonpaged Bytes\"\n| eval pool_MB = Value / 1048576\n| timechart span=15m avg(pool_MB) as nonpaged_pool_MB by host\n| where nonpaged_pool_MB > 256",
              "m": "Add `Pool Nonpaged Bytes` and `Pool Nonpaged Allocs` to Memory Perfmon inputs (interval=60). Default limit is ~75% of RAM or registry-defined. Alert at 256MB+ or when growth is sustained over hours. Use `poolmon.exe` or `xperf` to identify the leaking driver tag on affected hosts.",
              "z": "Line chart (pool growth over time), Single value (current pool size), Alert threshold marker.",
              "kfp": "Network drivers, filters, and AV can grow nonpaged pool during heavy I/O. `poolmon` and tag-level review beat chasing every 256MB cross.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:Memory` (counter: Pool Nonpaged Bytes).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAdd `Pool Nonpaged Bytes` and `Pool Nonpaged Allocs` to Memory Perfmon inputs (interval=60). Default limit is ~75% of RAM or registry-defined. Alert at 256MB+ or when growth is sustained over hours. Use `poolmon.exe` or `xperf` to identify the leaking driver tag on affected hosts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:Memory\" counter=\"Pool Nonpaged Bytes\"\n| eval pool_MB = Value / 1048576\n| timechart span=15m avg(pool_MB) as nonpaged_pool_MB by host\n| where nonpaged_pool_MB > 256\n```\n\nUnderstanding this SPL\n\n**Non-Paged Pool Exhaustion** — Non-paged pool memory is limited kernel memory. Exhaustion causes BSOD (DRIVER_IRQL_NOT_LESS_OR_EQUAL). Often caused by driver leaks.\n\nDocumented **Data sources**: `sourcetype=Perfmon:Memory` (counter: Pool Nonpaged Bytes). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:Memory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:Memory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pool_MB** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where nonpaged_pool_MB > 256` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Non-Paged Pool Exhaustion** — Non-paged pool memory is limited kernel memory. Exhaustion causes BSOD (DRIVER_IRQL_NOT_LESS_OR_EQUAL). Often caused by driver leaks.\n\nDocumented **Data sources**: `sourcetype=Perfmon:Memory` (counter: Pool Nonpaged Bytes). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where mem_pct > 95 OR swap_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (pool growth over time), Single value (current pool size), Alert threshold marker.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a special pool of server memory is growing in a way that can freeze the box—often a bad driver, not a normal app.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.24",
              "n": "Network Interface Utilization (Windows)",
              "c": "high",
              "f": "intermediate",
              "v": "Saturated network interfaces cause packet drops, retransmissions, and application timeouts. Often missed when only CPU/memory are monitored.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:Network_Interface`",
              "q": "index=perfmon sourcetype=\"Perfmon:Network_Interface\" counter=\"Bytes Total/sec\"\n| eval bandwidth_Mbps = (Value * 8) / 1000000\n| timechart span=5m avg(bandwidth_Mbps) as avg_Mbps by host, instance\n| where avg_Mbps > 800",
              "m": "Configure Perfmon Network Interface inputs: counters `Bytes Total/sec`, `Packets Outbound Errors`, `Output Queue Length` (interval=60). For 1Gbps NICs, alert at 80% (~800Mbps). Also monitor `Output Queue Length >2` for congestion even below bandwidth saturation. Exclude loopback and virtual adapters from monitoring.",
              "z": "Line chart (bandwidth by interface), Dual-axis (bandwidth + errors), Table (top talkers).",
              "kfp": "Backups, VM migration, and patch pushes can max an interface. Baseline the same host’s last month before calling it an incident.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:Network_Interface`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon Network Interface inputs: counters `Bytes Total/sec`, `Packets Outbound Errors`, `Output Queue Length` (interval=60). For 1Gbps NICs, alert at 80% (~800Mbps). Also monitor `Output Queue Length >2` for congestion even below bandwidth saturation. Exclude loopback and virtual adapters from monitoring.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:Network_Interface\" counter=\"Bytes Total/sec\"\n| eval bandwidth_Mbps = (Value * 8) / 1000000\n| timechart span=5m avg(bandwidth_Mbps) as avg_Mbps by host, instance\n| where avg_Mbps > 800\n```\n\nUnderstanding this SPL\n\n**Network Interface Utilization (Windows)** — Saturated network interfaces cause packet drops, retransmissions, and application timeouts. Often missed when only CPU/memory are monitored.\n\nDocumented **Data sources**: `sourcetype=Perfmon:Network_Interface`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:Network_Interface. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:Network_Interface\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **bandwidth_Mbps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host, instance** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_Mbps > 800` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network Interface Utilization (Windows)** — Saturated network interfaces cause packet drops, retransmissions, and application timeouts. Often missed when only CPU/memory are monitored.\n\nDocumented **Data sources**: `sourcetype=Perfmon:Network_Interface`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (bandwidth by interface), Dual-axis (bandwidth + errors), Table (top talkers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how much data is moving through each network card on the server, and we help you know when a card is so busy that apps may slow down.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` sum(Performance.bytes_in) as bytes_in\n                        sum(Performance.bytes_out) as bytes_out\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.25",
              "n": "Processor Queue Length",
              "c": "medium",
              "f": "intermediate",
              "v": "Processor queue length >2 per core indicates threads waiting for CPU time. Detects CPU contention even when average utilization looks normal due to burst patterns.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:System` (counter: Processor Queue Length)",
              "q": "index=perfmon sourcetype=\"Perfmon:System\" counter=\"Processor Queue Length\"\n| timechart span=5m avg(Value) as queue_len by host\n| where queue_len > 4",
              "m": "Add `Processor Queue Length` to Perfmon System object inputs (interval=30). A sustained queue >2× number of cores indicates saturation. Correlate with `Context Switches/sec` from the same object to distinguish CPU-bound workloads from excessive threading.",
              "z": "Line chart (queue trend), Heatmap (hosts × time), Single value (current queue).",
              "kfp": "Bursty work can spike queue with modest average CPU. Sustained queue over core count matters. The `>4` thread queue in the main SPL is *not* the same as `>80%` CPU here; tune both in tandem.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:System` (counter: Processor Queue Length).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAdd `Processor Queue Length` to Perfmon System object inputs (interval=30). A sustained queue >2× number of cores indicates saturation. Correlate with `Context Switches/sec` from the same object to distinguish CPU-bound workloads from excessive threading.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:System\" counter=\"Processor Queue Length\"\n| timechart span=5m avg(Value) as queue_len by host\n| where queue_len > 4\n```\n\nUnderstanding this SPL\n\n**Processor Queue Length** — Processor queue length >2 per core indicates threads waiting for CPU time. Detects CPU contention even when average utilization looks normal due to burst patterns.\n\nDocumented **Data sources**: `sourcetype=Perfmon:System` (counter: Processor Queue Length). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:System. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where queue_len > 4` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Processor Queue Length** — Processor queue length >2 per core indicates threads waiting for CPU time. Detects CPU contention even when average utilization looks normal due to burst patterns.\n\nDocumented **Data sources**: `sourcetype=Perfmon:System` (counter: Processor Queue Length). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (queue trend), Heatmap (hosts × time), Single value (current queue).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a server has a line of work waiting for the processor, which can make everything feel slow even if average CPU is not redlining.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.26",
              "n": "Security Log Cleared",
              "c": "critical",
              "f": "beginner",
              "v": "Clearing the Security event log is a classic attacker technique to cover tracks. Legitimate clears are rare and should always be investigated.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 1102), `sourcetype=WinEventLog:System` (EventCode 104)",
              "q": "index=wineventlog (sourcetype=\"WinEventLog:Security\" EventCode=1102) OR (sourcetype=\"WinEventLog:System\" EventCode=104)\n| table _time, host, sourcetype, EventCode, SubjectUserName, SubjectDomainName\n| sort -_time",
              "m": "EventCode 1102 fires when the Security log is cleared; EventCode 104 when any event log is cleared. These should never occur in production outside controlled maintenance windows. Set a real-time alert with critical priority. Enrich with user identity to track who performed the action.",
              "z": "Timeline (clear events), Table (who cleared what), Single value (count — target: 0).",
              "kfp": "Planned log rollovers, support investigations, and evidence exports clear logs. 1102/104 in production AD should still page—pair with your dual-control runbook.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 1102), `sourcetype=WinEventLog:System` (EventCode 104).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEventCode 1102 fires when the Security log is cleared; EventCode 104 when any event log is cleared. These should never occur in production outside controlled maintenance windows. Set a real-time alert with critical priority. Enrich with user identity to track who performed the action.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog (sourcetype=\"WinEventLog:Security\" EventCode=1102) OR (sourcetype=\"WinEventLog:System\" EventCode=104)\n| table _time, host, sourcetype, EventCode, SubjectUserName, SubjectDomainName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Security Log Cleared** — Clearing the Security event log is a classic attacker technique to cover tracks. Legitimate clears are rare and should always be investigated.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 1102), `sourcetype=WinEventLog:System` (EventCode 104). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security, WinEventLog:System. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Security Log Cleared**): table _time, host, sourcetype, EventCode, SubjectUserName, SubjectDomainName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (clear events), Table (who cleared what), Single value (count — target: 0).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch when someone cleared important security or system logs, which is a big deal for both attacks and policy compliance.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.27",
              "n": "New Service Installation",
              "c": "high",
              "f": "beginner",
              "v": "Attackers install malicious services for persistence. Unexpected service installations outside change windows indicate compromise or unauthorized software.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:System` (EventCode 7045)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:System\" EventCode=7045\n| table _time, host, ServiceName, ImagePath, ServiceType, AccountName\n| regex ImagePath!=\"(?i)(C:\\\\\\\\Windows\\\\\\\\|C:\\\\\\\\Program Files)\"\n| sort -_time",
              "m": "EventCode 7045 logs every new service installation. Filter out known/expected services via a lookup of approved service names. Alert on services with binaries outside standard paths (C:\\Windows, C:\\Program Files). Pay special attention to services running as SYSTEM with binaries in temp directories.",
              "z": "Table (new services with paths), Timeline, Alert on non-standard paths.",
              "kfp": "Patching, monitoring agents, and drivers add services in waves. The unusual path and account filters in the main SPL are what separate noise from malware installs.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:System` (EventCode 7045).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEventCode 7045 logs every new service installation. Filter out known/expected services via a lookup of approved service names. Alert on services with binaries outside standard paths (C:\\Windows, C:\\Program Files). Pay special attention to services running as SYSTEM with binaries in temp directories.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:System\" EventCode=7045\n| table _time, host, ServiceName, ImagePath, ServiceType, AccountName\n| regex ImagePath!=\"(?i)(C:\\\\\\\\Windows\\\\\\\\|C:\\\\\\\\Program Files)\"\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**New Service Installation** — Attackers install malicious services for persistence. Unexpected service installations outside change windows indicate compromise or unauthorized software.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (EventCode 7045). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:System. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **New Service Installation**): table _time, host, ServiceName, ImagePath, ServiceType, AccountName\n• Filters rows matching a pattern with `regex`.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**New Service Installation** — Attackers install malicious services for persistence. Unexpected service installations outside change windows indicate compromise or unauthorized software.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (EventCode 7045). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Services` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (new services with paths), Timeline, Alert on non-standard paths.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a new background service is installed, especially when the program lives in an odd folder—something you want to see early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.28",
              "n": "Windows Firewall Rule Changes",
              "c": "high",
              "f": "beginner",
              "v": "Unauthorized firewall rule changes can open attack vectors. Malware often disables the firewall or adds allow rules for C2 communication.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Windows Firewall With Advanced Security/Firewall` (EventCode 2004, 2005, 2006, 2033)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-Windows Firewall With Advanced Security/Firewall\"\n  EventCode IN (2004, 2005, 2006, 2033)\n| eval action=case(EventCode=2004,\"Rule Added\",EventCode=2005,\"Rule Modified\",EventCode=2006,\"Rule Deleted\",EventCode=2033,\"Firewall Disabled\")\n| table _time, host, action, RuleName, ApplicationPath, Direction, Protocol\n| sort -_time",
              "m": "Enable the Windows Firewall audit log. EventCode 2004=rule added, 2005=modified, 2006=deleted, 2033=firewall disabled. Alert immediately on firewall disabled events. Track rule changes against change management records. Focus on inbound allow rules added for non-standard ports.",
              "z": "Table (rule changes), Timeline, Single value (firewall disabled count — target: 0).",
              "kfp": "GPO and Intune can push many rule changes during hardening. Expect bursts during build seasons; use change records to explain volume.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Windows Firewall With Advanced Security/Firewall` (EventCode 2004, 2005, 2006, 2033).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable the Windows Firewall audit log. EventCode 2004=rule added, 2005=modified, 2006=deleted, 2033=firewall disabled. Alert immediately on firewall disabled events. Track rule changes against change management records. Focus on inbound allow rules added for non-standard ports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-Windows Firewall With Advanced Security/Firewall\"\n  EventCode IN (2004, 2005, 2006, 2033)\n| eval action=case(EventCode=2004,\"Rule Added\",EventCode=2005,\"Rule Modified\",EventCode=2006,\"Rule Deleted\",EventCode=2033,\"Firewall Disabled\")\n| table _time, host, action, RuleName, ApplicationPath, Direction, Protocol\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Windows Firewall Rule Changes** — Unauthorized firewall rule changes can open attack vectors. Malware often disables the firewall or adds allow rules for C2 communication.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Windows Firewall With Advanced Security/Firewall` (EventCode 2004, 2005, 2006, 2033). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Windows Firewall Rule Changes**): table _time, host, action, RuleName, ApplicationPath, Direction, Protocol\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Windows Firewall Rule Changes** — Unauthorized firewall rule changes can open attack vectors. Malware often disables the firewall or adds allow rules for C2 communication.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Windows Firewall With Advanced Security/Firewall` (EventCode 2004, 2005, 2006, 2033). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rule changes), Timeline, Single value (firewall disabled count — target: 0).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see when someone adds, changes, or removes Windows firewall rules, which matters for who can reach the box over the network.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.29",
              "n": "Registry Run Key Modification (Persistence)",
              "c": "critical",
              "f": "intermediate",
              "v": "Run/RunOnce registry keys are the most common malware persistence mechanism. Monitoring these keys catches many threats early.",
              "t": "`Splunk_TA_windows`, Sysmon recommended",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 13) or `sourcetype=WinEventLog:Security` (EventCode 4657)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=13\n  TargetObject=\"*\\\\CurrentVersion\\\\Run*\"\n| table _time, host, Image, TargetObject, Details, User\n| sort -_time",
              "m": "Deploy Sysmon with registry monitoring rules targeting HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run, RunOnce, and HKCU equivalents. Alternatively, enable Object Access auditing (EventCode 4657) with SACLs on Run keys. Alert on any modification outside approved deployment tools (SCCM, GPO). Cross-reference with threat intel.",
              "z": "Table (registry changes with process context), Timeline, Alert on non-GPO modifications.",
              "kfp": "IT packaging, RMM tools, and installers write Run keys. Baseline your gold image and corporate signers before every write is a red alert.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, Sysmon recommended.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 13) or `sourcetype=WinEventLog:Security` (EventCode 4657).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Sysmon with registry monitoring rules targeting HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run, RunOnce, and HKCU equivalents. Alternatively, enable Object Access auditing (EventCode 4657) with SACLs on Run keys. Alert on any modification outside approved deployment tools (SCCM, GPO). Cross-reference with threat intel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=13\n  TargetObject=\"*\\\\CurrentVersion\\\\Run*\"\n| table _time, host, Image, TargetObject, Details, User\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Registry Run Key Modification (Persistence)** — Run/RunOnce registry keys are the most common malware persistence mechanism. Monitoring these keys catches many threats early.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 13) or `sourcetype=WinEventLog:Security` (EventCode 4657). **App/TA** (typical add-on context): `Splunk_TA_windows`, Sysmon recommended. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-Sysmon/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Registry Run Key Modification (Persistence)**): table _time, host, Image, TargetObject, Details, User\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Registry Run Key Modification (Persistence)** — Run/RunOnce registry keys are the most common malware persistence mechanism. Monitoring these keys catches many threats early.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 13) or `sourcetype=WinEventLog:Security` (EventCode 4657). **App/TA** (typical add-on context): `Splunk_TA_windows`, Sysmon recommended. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (registry changes with process context), Timeline, Alert on non-GPO modifications.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at changes to the common ‘run at sign-in’ spots in the registry, where attackers like to hide programs that start with Windows.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.30",
              "n": "LSASS Memory Access (Credential Dumping)",
              "c": "critical",
              "f": "intermediate",
              "v": "Accessing LSASS process memory is the primary technique for credential theft (Mimikatz, ProcDump). Detection is critical to stopping lateral movement.",
              "t": "`Splunk_TA_windows`, Sysmon required",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 10)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=10\n  TargetImage=\"*\\\\lsass.exe\"\n  GrantedAccess IN (\"0x1010\",\"0x1410\",\"0x1438\",\"0x143a\",\"0x1fffff\")\n| where NOT match(SourceImage, \"(?i)(MsMpEng|csrss|wininit|svchost|mrt\\.exe)\")\n| table _time, host, SourceImage, GrantedAccess, SourceUser\n| sort -_time",
              "m": "Deploy Sysmon with ProcessAccess (EventCode 10) monitoring. Filter out legitimate LSASS accessors (AV engines, csrss, wininit). The GrantedAccess mask 0x1010 (PROCESS_VM_READ + PROCESS_QUERY_LIMITED_INFORMATION) is the Mimikatz signature. Alert immediately with critical priority. Enable Credential Guard (Windows 10+) as a complementary defense.",
              "z": "Table (LSASS access events), Single value (count — target: 0), Alert with MITRE ATT&CK T1003 reference.",
              "kfp": "EDR, backup, and some AV scan lsass. Parent process, signer, and path exclusions cut most noise on servers.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, Sysmon required.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 10).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Sysmon with ProcessAccess (EventCode 10) monitoring. Filter out legitimate LSASS accessors (AV engines, csrss, wininit). The GrantedAccess mask 0x1010 (PROCESS_VM_READ + PROCESS_QUERY_LIMITED_INFORMATION) is the Mimikatz signature. Alert immediately with critical priority. Enable Credential Guard (Windows 10+) as a complementary defense.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=10\n  TargetImage=\"*\\\\lsass.exe\"\n  GrantedAccess IN (\"0x1010\",\"0x1410\",\"0x1438\",\"0x143a\",\"0x1fffff\")\n| where NOT match(SourceImage, \"(?i)(MsMpEng|csrss|wininit|svchost|mrt\\.exe)\")\n| table _time, host, SourceImage, GrantedAccess, SourceUser\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**LSASS Memory Access (Credential Dumping)** — Accessing LSASS process memory is the primary technique for credential theft (Mimikatz, ProcDump). Detection is critical to stopping lateral movement.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 10). **App/TA** (typical add-on context): `Splunk_TA_windows`, Sysmon required. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-Sysmon/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where NOT match(SourceImage, \"(?i)(MsMpEng|csrss|wininit|svchost|mrt\\.exe)\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **LSASS Memory Access (Credential Dumping)**): table _time, host, SourceImage, GrantedAccess, SourceUser\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**LSASS Memory Access (Credential Dumping)** — Accessing LSASS process memory is the primary technique for credential theft (Mimikatz, ProcDump). Detection is critical to stopping lateral movement.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 10). **App/TA** (typical add-on context): `Splunk_TA_windows`, Sysmon required. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (LSASS access events), Single value (count — target: 0), Alert with MITRE ATT&CK T1003 reference.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a program pokes the login secret holder process in a way that is more like a thief than a normal system tool.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.31",
              "n": "Kerberos Authentication Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Kerberos failures (EventCode 4771) reveal password spraying, expired accounts, clock skew, and misconfigured SPNs. Distinct from NTLM failures and requires separate monitoring.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4771)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4771\n| eval failure=case(Status=\"0x6\",\"Unknown username\",Status=\"0x12\",\"Account disabled/expired/locked\",Status=\"0x17\",\"Password expired\",Status=\"0x18\",\"Bad password\",Status=\"0x25\",\"Clock skew\",1=1,Status)\n| stats count by TargetUserName, IpAddress, failure, host\n| where count > 5\n| sort -count",
              "m": "Collect Security event logs from all domain controllers. EventCode 4771 is Kerberos pre-auth failure. Status codes: 0x18=wrong password (most common attack indicator), 0x12=disabled/locked, 0x25=clock skew (infrastructure issue). Alert on >10 failures per user in 5 minutes (spray detection). Correlate IpAddress with known endpoints.",
              "z": "Table (failures by user and reason), Bar chart (top failing accounts), Timechart (failure rate trending).",
              "kfp": "Clock skew, service accounts with bad secrets, and lab Kerberos hiccups create clusters. NTP and account hygiene fix many rows.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4771).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Security event logs from all domain controllers. EventCode 4771 is Kerberos pre-auth failure. Status codes: 0x18=wrong password (most common attack indicator), 0x12=disabled/locked, 0x25=clock skew (infrastructure issue). Alert on >10 failures per user in 5 minutes (spray detection). Correlate IpAddress with known endpoints.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4771\n| eval failure=case(Status=\"0x6\",\"Unknown username\",Status=\"0x12\",\"Account disabled/expired/locked\",Status=\"0x17\",\"Password expired\",Status=\"0x18\",\"Bad password\",Status=\"0x25\",\"Clock skew\",1=1,Status)\n| stats count by TargetUserName, IpAddress, failure, host\n| where count > 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Kerberos Authentication Failures** — Kerberos failures (EventCode 4771) reveal password spraying, expired accounts, clock skew, and misconfigured SPNs. Distinct from NTLM failures and requires separate monitoring.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4771). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **failure** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by TargetUserName, IpAddress, failure, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Kerberos Authentication Failures** — Kerberos failures (EventCode 4771) reveal password spraying, expired accounts, clock skew, and misconfigured SPNs. Distinct from NTLM failures and requires separate monitoring.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4771). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failures by user and reason), Bar chart (top failing accounts), Timechart (failure rate trending).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count Kerberos sign-in problems so you can tell bad keys and clocks from someone guessing passwords at the domain door.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.32",
              "n": "WMI Event Subscription Persistence",
              "c": "critical",
              "f": "intermediate",
              "v": "WMI event subscriptions are a stealthy persistence mechanism that survives reboots. Used by APT groups and fileless malware.",
              "t": "`Splunk_TA_windows`, Sysmon required",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 19, 20, 21)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\"\n  EventCode IN (19, 20, 21)\n| eval wmi_type=case(EventCode=19,\"Filter Created\",EventCode=20,\"Consumer Created\",EventCode=21,\"Binding Created\")\n| table _time, host, wmi_type, User, Name, Destination, Query\n| sort -_time",
              "m": "Deploy Sysmon v10+ which logs WMI event filter (19), consumer (20), and binding (21) creation. Any new WMI subscription outside management tools (SCCM, monitoring agents) is suspicious. Alert on all new subscriptions. Legitimate ones are rare and well-known (e.g., SCCM client). Correlate consumer CommandLineTemplate with known malware signatures.",
              "z": "Table (WMI subscriptions created), Timeline, Single value (new subscriptions — target: 0 outside SCCM).",
              "kfp": "SCCM, SCOM, and inventory tools create WMI consumers. Compare against a known-good image list before a hunt-only alert goes wide.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, Sysmon required.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 19, 20, 21).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Sysmon v10+ which logs WMI event filter (19), consumer (20), and binding (21) creation. Any new WMI subscription outside management tools (SCCM, monitoring agents) is suspicious. Alert on all new subscriptions. Legitimate ones are rare and well-known (e.g., SCCM client). Correlate consumer CommandLineTemplate with known malware signatures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\"\n  EventCode IN (19, 20, 21)\n| eval wmi_type=case(EventCode=19,\"Filter Created\",EventCode=20,\"Consumer Created\",EventCode=21,\"Binding Created\")\n| table _time, host, wmi_type, User, Name, Destination, Query\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**WMI Event Subscription Persistence** — WMI event subscriptions are a stealthy persistence mechanism that survives reboots. Used by APT groups and fileless malware.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 19, 20, 21). **App/TA** (typical add-on context): `Splunk_TA_windows`, Sysmon required. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-Sysmon/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **wmi_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **WMI Event Subscription Persistence**): table _time, host, wmi_type, User, Name, Destination, Query\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (WMI subscriptions created), Timeline, Single value (new subscriptions — target: 0 outside SCCM).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the hidden hooks attackers use in Windows’ management layer, which can run code on a schedule without a normal .exe in Startup.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.33",
              "n": "Audit Policy Changes",
              "c": "critical",
              "f": "beginner",
              "v": "Attackers modify audit policies to disable logging and hide their activities. Any unauthorized audit policy change must be investigated immediately.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4719)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4719\n| table _time, host, SubjectUserName, SubjectDomainName, CategoryId, SubcategoryGuid, AuditPolicyChanges\n| sort -_time",
              "m": "EventCode 4719 fires when an audit policy is changed via `auditpol.exe` or Group Policy. Any change outside planned GPO updates is suspicious. Alert with critical priority. Pay special attention to \"Success removed\" or \"Failure removed\" changes that reduce auditing coverage. Correlate with GPO change events.",
              "z": "Table (policy changes with user context), Timeline, Single value (count — target: 0 outside maintenance).",
              "kfp": "Hardening projects and GPO rewrites reconfigure subcategories. Pair every burst with a change or GPO number.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4719).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEventCode 4719 fires when an audit policy is changed via `auditpol.exe` or Group Policy. Any change outside planned GPO updates is suspicious. Alert with critical priority. Pay special attention to \"Success removed\" or \"Failure removed\" changes that reduce auditing coverage. Correlate with GPO change events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4719\n| table _time, host, SubjectUserName, SubjectDomainName, CategoryId, SubcategoryGuid, AuditPolicyChanges\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Audit Policy Changes** — Attackers modify audit policies to disable logging and hide their activities. Any unauthorized audit policy change must be investigated immediately.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4719). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Audit Policy Changes**): table _time, host, SubjectUserName, SubjectDomainName, CategoryId, SubcategoryGuid, AuditPolicyChanges\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Audit Policy Changes** — Attackers modify audit policies to disable logging and hide their activities. Any unauthorized audit policy change must be investigated immediately.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4719). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (policy changes with user context), Timeline, Single value (count — target: 0 outside maintenance).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We show when the audit and logging rules on a machine are changed, which can hide mischief if someone turns the cameras off.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-1.2.33: Audit Policy Changes.",
                  "ea": "Saved search 'UC-1.2.33' running on sourcetype=WinEventLog:Security (EventCode 4719), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-21/chapter-I/subchapter-A/part-11"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "09.aa",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HITRUST 09.aa (Audit logging) is enforced — Splunk UC-1.2.33: Audit Policy Changes.",
                  "ea": "Saved search 'UC-1.2.33' running on sourcetype=WinEventLog:Security (EventCode 4719), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                },
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SWIFT CSP 6.4 (Logging and monitoring) is enforced — Splunk UC-1.2.33: Audit Policy Changes.",
                  "ea": "Saved search 'UC-1.2.33' running on sourcetype=WinEventLog:Security (EventCode 4719), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.swift.com/myswift/customer-security-programme-csp/security-controls"
                }
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.34",
              "n": "AppLocker / WDAC Policy Violations",
              "c": "high",
              "f": "intermediate",
              "v": "AppLocker/WDAC blocks track unauthorized application execution attempts. High violation rates indicate persistent threats or misconfigured policies.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-AppLocker/EXE and DLL` (EventCode 8004, 8007)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-AppLocker*\" EventCode IN (8004, 8007)\n| eval block_type=case(EventCode=8004,\"EXE blocked\",EventCode=8007,\"Script blocked\")\n| stats count by host, block_type, RuleNameOrId, FilePath, UserName\n| sort -count",
              "m": "Enable AppLocker EXE, DLL, and Script rules in enforcement or audit mode. EventCode 8003/8006=allowed, 8004/8007=blocked. In audit mode (EventCode 8003), use data to build baseline before enforcement. Track blocked attempts per host — spikes indicate attack attempts or policy gaps. Correlate FilePath with threat intel.",
              "z": "Bar chart (top blocked apps), Table (blocks by host), Timechart (block rate over time).",
              "kfp": "Pilot rings and new version rollouts can spike blocks. A block can be a policy win, not a breach—triage with app owners.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-AppLocker/EXE and DLL` (EventCode 8004, 8007).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable AppLocker EXE, DLL, and Script rules in enforcement or audit mode. EventCode 8003/8006=allowed, 8004/8007=blocked. In audit mode (EventCode 8003), use data to build baseline before enforcement. Track blocked attempts per host — spikes indicate attack attempts or policy gaps. Correlate FilePath with threat intel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-AppLocker*\" EventCode IN (8004, 8007)\n| eval block_type=case(EventCode=8004,\"EXE blocked\",EventCode=8007,\"Script blocked\")\n| stats count by host, block_type, RuleNameOrId, FilePath, UserName\n| sort -count\n```\n\nUnderstanding this SPL\n\n**AppLocker / WDAC Policy Violations** — AppLocker/WDAC blocks track unauthorized application execution attempts. High violation rates indicate persistent threats or misconfigured policies.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-AppLocker/EXE and DLL` (EventCode 8004, 8007). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **block_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, block_type, RuleNameOrId, FilePath, UserName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**AppLocker / WDAC Policy Violations** — AppLocker/WDAC blocks track unauthorized application execution attempts. High violation rates indicate persistent threats or misconfigured policies.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-AppLocker/EXE and DLL` (EventCode 8004, 8007). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top blocked apps), Table (blocks by host), Timechart (block rate over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when Windows stops an app or script you did not want to run, which supports both hardening and finding shadow IT activity.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.35",
              "n": "Windows Defender Threat Detections",
              "c": "critical",
              "f": "intermediate",
              "v": "Real-time visibility into endpoint AV detections across the fleet. Delayed response to malware detections increases blast radius.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Windows Defender/Operational` (EventCode 1006, 1007, 1116, 1117)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\"\n  EventCode IN (1006, 1007, 1116, 1117)\n| eval action=case(EventCode=1006,\"Detected\",EventCode=1007,\"Action taken\",EventCode=1116,\"Detected\",EventCode=1117,\"Action taken\")\n| table _time, host, action, \"Threat Name\", \"Severity ID\", Path, \"Detection User\"\n| sort -_time",
              "m": "Forward Windows Defender Operational log from all endpoints. EventCode 1116=threat detected, 1117=action taken, 1006/1007=malware detected/acted on. Alert immediately on detections with Severity \"Severe\" or \"High\". Track remediation success (1117 following 1116). Monitor for EventCode 5001 (real-time protection disabled) as a separate critical alert.",
              "z": "Table (recent detections), Bar chart (threat categories), Single value (unresolved threats), Map (affected hosts).",
              "kfp": "PUP, cracktools in sandboxes, and tester malware samples are noisy. Scope business-critical OUs to reduce help desk load.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Windows Defender/Operational` (EventCode 1006, 1007, 1116, 1117).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Windows Defender Operational log from all endpoints. EventCode 1116=threat detected, 1117=action taken, 1006/1007=malware detected/acted on. Alert immediately on detections with Severity \"Severe\" or \"High\". Track remediation success (1117 following 1116). Monitor for EventCode 5001 (real-time protection disabled) as a separate critical alert.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\"\n  EventCode IN (1006, 1007, 1116, 1117)\n| eval action=case(EventCode=1006,\"Detected\",EventCode=1007,\"Action taken\",EventCode=1116,\"Detected\",EventCode=1117,\"Action taken\")\n| table _time, host, action, \"Threat Name\", \"Severity ID\", Path, \"Detection User\"\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Windows Defender Threat Detections** — Real-time visibility into endpoint AV detections across the fleet. Delayed response to malware detections increases blast radius.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Windows Defender/Operational` (EventCode 1006, 1007, 1116, 1117). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Windows Defender Threat Detections**): table _time, host, action, \"Threat Name\", \"Severity ID\", Path, \"Detection User\"\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recent detections), Bar chart (threat categories), Single value (unresolved threats), Map (affected hosts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the built-in virus protection sees or blocks something nasty on a server, in plain terms your team can act on fast.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.36",
              "n": "DCSync Attack Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "DCSync uses Directory Replication Service permissions to extract password hashes remotely. Detecting non-DC replication requests catches this attack before credential theft completes.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4662)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4662\n  AccessMask=\"0x100\"\n  (Properties=\"*1131f6aa*\" OR Properties=\"*1131f6ad*\" OR Properties=\"*89e95b76*\")\n| where NOT match(SubjectUserName, \"(?i)(\\\\$$)\")\n| table _time, host, SubjectUserName, SubjectDomainName, ObjectName\n| sort -_time",
              "m": "Enable Directory Service Access auditing on domain controllers. EventCode 4662 with GUID 1131f6aa (DS-Replication-Get-Changes) or 1131f6ad (DS-Replication-Get-Changes-All) from a non-machine account (not ending in $) is a DCSync indicator. Alert immediately with critical priority. Legitimate replication only occurs between DCs (machine accounts). MITRE ATT&CK T1003.006.",
              "z": "Table (replication requests from non-DCs), Single value (count — target: 0), Alert with analyst playbook.",
              "kfp": "Backups, recovery tools, and AAD connect can replicate rights that look like DCSync. Whitelist well-known service principals and paths.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4662).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Directory Service Access auditing on domain controllers. EventCode 4662 with GUID 1131f6aa (DS-Replication-Get-Changes) or 1131f6ad (DS-Replication-Get-Changes-All) from a non-machine account (not ending in $) is a DCSync indicator. Alert immediately with critical priority. Legitimate replication only occurs between DCs (machine accounts). MITRE ATT&CK T1003.006.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4662\n  AccessMask=\"0x100\"\n  (Properties=\"*1131f6aa*\" OR Properties=\"*1131f6ad*\" OR Properties=\"*89e95b76*\")\n| where NOT match(SubjectUserName, \"(?i)(\\\\$$)\")\n| table _time, host, SubjectUserName, SubjectDomainName, ObjectName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**DCSync Attack Detection** — DCSync uses Directory Replication Service permissions to extract password hashes remotely. Detecting non-DC replication requests catches this attack before credential theft completes.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4662). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where NOT match(SubjectUserName, \"(?i)(\\\\$$)\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DCSync Attack Detection**): table _time, host, SubjectUserName, SubjectDomainName, ObjectName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (replication requests from non-DCs), Single value (count — target: 0), Alert with analyst playbook.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the kind of directory access that copies the keys to the kingdom, which is a serious sign someone may be staging a domain-wide attack.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.37",
              "n": "Kerberoasting Detection (SPN Ticket Requests)",
              "c": "critical",
              "f": "intermediate",
              "v": "Kerberoasting requests TGS tickets for service accounts with SPNs, then cracks them offline. Detecting anomalous TGS requests catches this before passwords are compromised.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4769)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4769\n  TicketEncryptionType=0x17\n  ServiceName!=\"krbtgt\" ServiceName!=\"*$\"\n| stats count dc(ServiceName) as unique_spns by TargetUserName, IpAddress\n| where unique_spns > 3\n| sort -unique_spns",
              "m": "Collect Security logs from all DCs. EventCode 4769 = TGS ticket request. Encryption type 0x17 (RC4) is the Kerberoasting indicator — modern environments should use AES (0x12). Alert when a single user requests RC4 tickets for multiple service SPNs. Exclude machine accounts ($) and krbtgt. Remediation: enforce AES-only on service accounts and use Group Managed Service Accounts (gMSAs).",
              "z": "Table (suspicious requestors), Bar chart (TGS requests by encryption type), Timeline.",
              "kfp": "Service accounts, legacy SPNs, and Tier-2 services request many TGS. Hunt on *new* SPN+user pairings, not volume alone.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4769).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Security logs from all DCs. EventCode 4769 = TGS ticket request. Encryption type 0x17 (RC4) is the Kerberoasting indicator — modern environments should use AES (0x12). Alert when a single user requests RC4 tickets for multiple service SPNs. Exclude machine accounts ($) and krbtgt. Remediation: enforce AES-only on service accounts and use Group Managed Service Accounts (gMSAs).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4769\n  TicketEncryptionType=0x17\n  ServiceName!=\"krbtgt\" ServiceName!=\"*$\"\n| stats count dc(ServiceName) as unique_spns by TargetUserName, IpAddress\n| where unique_spns > 3\n| sort -unique_spns\n```\n\nUnderstanding this SPL\n\n**Kerberoasting Detection (SPN Ticket Requests)** — Kerberoasting requests TGS tickets for service accounts with SPNs, then cracks them offline. Detecting anomalous TGS requests catches this before passwords are compromised.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4769). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by TargetUserName, IpAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_spns > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious requestors), Bar chart (TGS requests by encryption type), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for a burst of ticket requests that can mean someone is trying to grab password hashes to crack offline later.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.38",
              "n": "AD Object Deletion Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Accidental or malicious deletion of AD objects (OUs, users, groups, computer accounts) can cause widespread service disruption. AD Recycle Bin has a limited window.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4726, 4730, 4743, 5141)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\"\n  EventCode IN (4726, 4730, 4743, 5141)\n| eval object_type=case(EventCode=4726,\"User deleted\",EventCode=4730,\"Group deleted\",EventCode=4743,\"Computer deleted\",EventCode=5141,\"AD object deleted\")\n| table _time, host, object_type, SubjectUserName, TargetUserName, ObjectDN\n| sort -_time",
              "m": "Enable DS Object Access auditing on domain controllers. EventCode 5141 catches all AD object deletions including OUs. 4726/4730/4743 catch specific account/group/computer deletions. Alert on OU deletions immediately (mass impact). Track deletion volume per admin — spikes indicate accidental bulk operations or insider threats. Ensure AD Recycle Bin is enabled.",
              "z": "Table (deleted objects), Timeline, Bar chart (deletions by admin), Single value (OU deletions — target: 0).",
              "kfp": "Lifecycle decom, cleanup scripts, and merger projects delete many objects. Match time window to your HR offboarding feed.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4726, 4730, 4743, 5141).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable DS Object Access auditing on domain controllers. EventCode 5141 catches all AD object deletions including OUs. 4726/4730/4743 catch specific account/group/computer deletions. Alert on OU deletions immediately (mass impact). Track deletion volume per admin — spikes indicate accidental bulk operations or insider threats. Ensure AD Recycle Bin is enabled.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\"\n  EventCode IN (4726, 4730, 4743, 5141)\n| eval object_type=case(EventCode=4726,\"User deleted\",EventCode=4730,\"Group deleted\",EventCode=4743,\"Computer deleted\",EventCode=5141,\"AD object deleted\")\n| table _time, host, object_type, SubjectUserName, TargetUserName, ObjectDN\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**AD Object Deletion Monitoring** — Accidental or malicious deletion of AD objects (OUs, users, groups, computer accounts) can cause widespread service disruption. AD Recycle Bin has a limited window.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4726, 4730, 4743, 5141). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **object_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **AD Object Deletion Monitoring**): table _time, host, object_type, SubjectUserName, TargetUserName, ObjectDN\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (deleted objects), Timeline, Bar chart (deletions by admin), Single value (OU deletions — target: 0).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when an important user, group, or computer object was removed from the directory and who did it.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.39",
              "n": "Domain Trust Changes",
              "c": "critical",
              "f": "beginner",
              "v": "Unauthorized trust relationships can grant external domains access to internal resources. Trust modifications are rare and high-impact.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4706, 4707, 4716)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4706, 4707, 4716)\n| eval action=case(EventCode=4706,\"Trust created\",EventCode=4707,\"Trust removed\",EventCode=4716,\"Trust modified\")\n| table _time, host, action, SubjectUserName, TrustDirection, TrustType, TrustedDomain\n| sort -_time",
              "m": "EventCode 4706=new trust, 4707=trust removed, 4716=trust modified. These events are extremely rare in stable environments. Alert on all trust changes with critical priority. Verify against approved change requests. Pay attention to trust direction (inbound trusts grant access TO your domain) and trust type (external vs. forest trusts).",
              "z": "Table (trust changes), Single value (count — target: 0 outside planned changes), Alert.",
              "kfp": "Mergers, test forests, and cloud trust work create bursts. A written trust inventory explains most changes.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4706, 4707, 4716).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEventCode 4706=new trust, 4707=trust removed, 4716=trust modified. These events are extremely rare in stable environments. Alert on all trust changes with critical priority. Verify against approved change requests. Pay attention to trust direction (inbound trusts grant access TO your domain) and trust type (external vs. forest trusts).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4706, 4707, 4716)\n| eval action=case(EventCode=4706,\"Trust created\",EventCode=4707,\"Trust removed\",EventCode=4716,\"Trust modified\")\n| table _time, host, action, SubjectUserName, TrustDirection, TrustType, TrustedDomain\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Domain Trust Changes** — Unauthorized trust relationships can grant external domains access to internal resources. Trust modifications are rare and high-impact.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4706, 4707, 4716). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Domain Trust Changes**): table _time, host, action, SubjectUserName, TrustDirection, TrustType, TrustedDomain\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Domain Trust Changes** — Unauthorized trust relationships can grant external domains access to internal resources. Trust modifications are rare and high-impact.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4706, 4707, 4716). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (trust changes), Single value (count — target: 0 outside planned changes), Alert.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a trust relationship between your Windows domains changed, which can open a wide door if it was not planned.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.40",
              "n": "WHEA Hardware Error Reporting",
              "c": "high",
              "f": "intermediate",
              "v": "Windows Hardware Error Architecture (WHEA) reports CPU, memory, and PCIe hardware errors before they cause crashes. Enables proactive hardware replacement.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:System` (Source=Microsoft-Windows-WHEA-Logger, EventCode 17, 18, 19, 20, 47)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:System\" Source=\"Microsoft-Windows-WHEA-Logger\"\n| eval severity=case(EventCode=18,\"Fatal\",EventCode=19,\"Corrected\",EventCode=20,\"Informational\",1=1,\"Other\")\n| stats count by host, severity, ErrorSource, ErrorType\n| sort -count",
              "m": "WHEA events are logged automatically by Windows on hardware error. EventCode 18=fatal (machine check, NMI), 19=corrected (ECC memory correction, CPU thermal), 47=informational. Track corrected error rates — rising counts predict imminent failure. Correlate with specific hardware component (CPU, memory DIMM, PCIe device) from ErrorSource field. Alert on any fatal errors and on corrected error rate >10/hour.",
              "z": "Table (errors by host and component), Line chart (corrected error trend), Single value (fatal errors — target: 0).",
              "kfp": "One-off corrected ECC, thermal, or firmware self-tests can look scary. Recurrence and fatal classes drive priority.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:System` (Source=Microsoft-Windows-WHEA-Logger, EventCode 17, 18, 19, 20, 47).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nWHEA events are logged automatically by Windows on hardware error. EventCode 18=fatal (machine check, NMI), 19=corrected (ECC memory correction, CPU thermal), 47=informational. Track corrected error rates — rising counts predict imminent failure. Correlate with specific hardware component (CPU, memory DIMM, PCIe device) from ErrorSource field. Alert on any fatal errors and on corrected error rate >10/hour.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:System\" Source=\"Microsoft-Windows-WHEA-Logger\"\n| eval severity=case(EventCode=18,\"Fatal\",EventCode=19,\"Corrected\",EventCode=20,\"Informational\",1=1,\"Other\")\n| stats count by host, severity, ErrorSource, ErrorType\n| sort -count\n```\n\nUnderstanding this SPL\n\n**WHEA Hardware Error Reporting** — Windows Hardware Error Architecture (WHEA) reports CPU, memory, and PCIe hardware errors before they cause crashes. Enables proactive hardware replacement.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (Source=Microsoft-Windows-WHEA-Logger, EventCode 17, 18, 19, 20, 47). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:System. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, severity, ErrorSource, ErrorType** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (errors by host and component), Line chart (corrected error trend), Single value (fatal errors — target: 0).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the server is reporting on-the-metal hardware health problems, not just an app error, so you can fix chips and drivers before a crash.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.41",
              "n": "Volume Shadow Copy Failures",
              "c": "high",
              "f": "intermediate",
              "v": "VSS failures break backup chains, System Restore, and SQL/Exchange application-consistent snapshots. Often silent until a restore is attempted.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Application` (Source=VSS, EventCode 12289, 12298, 8193, 8194)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Application\" Source=\"VSS\" EventCode IN (12289, 12298, 8193, 8194)\n| eval issue=case(EventCode=12289,\"VSS writer failed\",EventCode=12298,\"VSS copy failed\",EventCode=8193,\"VSS error\",EventCode=8194,\"VSS error\")\n| stats count by host, issue, EventCode\n| sort -count",
              "m": "VSS events appear in the Application log. EventCode 12289=writer failure (often SQL, Exchange, or Hyper-V writers), 12298=shadow copy creation failure. Common causes: low disk space, I/O timeouts, conflicting backup agents. Alert on any VSS failure — they directly impact RPO. Correlate with backup job logs to identify which backup product is affected.",
              "z": "Table (VSS errors by host), Timeline, Bar chart (failure types).",
              "kfp": "Backup under load, VSS full-volume copies, and low staging disk during dedupe can fail transiently. Re-run jobs and check storage back ends before a sev-1 on VSS only.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Application` (Source=VSS, EventCode 12289, 12298, 8193, 8194).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nVSS events appear in the Application log. EventCode 12289=writer failure (often SQL, Exchange, or Hyper-V writers), 12298=shadow copy creation failure. Common causes: low disk space, I/O timeouts, conflicting backup agents. Alert on any VSS failure — they directly impact RPO. Correlate with backup job logs to identify which backup product is affected.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Application\" Source=\"VSS\" EventCode IN (12289, 12298, 8193, 8194)\n| eval issue=case(EventCode=12289,\"VSS writer failed\",EventCode=12298,\"VSS copy failed\",EventCode=8193,\"VSS error\",EventCode=8194,\"VSS error\")\n| stats count by host, issue, EventCode\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Volume Shadow Copy Failures** — VSS failures break backup chains, System Restore, and SQL/Exchange application-consistent snapshots. Often silent until a restore is attempted.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Application` (Source=VSS, EventCode 12289, 12298, 8193, 8194). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Application. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Application\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, issue, EventCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Volume Shadow Copy Failures** — VSS failures break backup chains, System Restore, and SQL/Exchange application-consistent snapshots. Often silent until a restore is attempted.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Application` (Source=VSS, EventCode 12289, 12298, 8193, 8194). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Services` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VSS errors by host), Timeline, Bar chart (failure types).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the shadow-copy and backup service is failing, which can break restores and some apps that expect those snapshots to work.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.42",
              "n": ".NET CLR Performance Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": ".NET garbage collection pauses and high exception rates cause application latency and instability. CLR monitoring reveals issues invisible to external health checks.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:dotNET_CLR_Memory` (counters: % Time in GC, Gen 2 Collections)",
              "q": "index=perfmon sourcetype=\"Perfmon:dotNET_CLR_Memory\" counter=\"% Time in GC\" instance!=\"_Global_\"\n| timechart span=5m avg(Value) as pct_gc by host, instance\n| where pct_gc > 20",
              "m": "Configure Perfmon inputs for `.NET CLR Memory` object: `% Time in GC`, `# Gen 2 Collections`, `Large Object Heap size`. Also monitor `.NET CLR Exceptions` → `# of Exceps Thrown / sec`. >20% time in GC indicates memory pressure in .NET apps. Frequent Gen 2 collections signal large object allocation issues. Target specific app pool instances (w3wp) for IIS applications.",
              "z": "Line chart (GC time %), Bar chart (Gen 2 collections by app), Dual-axis (GC time + exceptions).",
              "kfp": "Deploys, JIT, and big heap Gen2 collections can spike time-in-GC. Use release timing and the same w3wp/svc instance context before a dev escalation.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:dotNET_CLR_Memory` (counters: % Time in GC, Gen 2 Collections).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon inputs for `.NET CLR Memory` object: `% Time in GC`, `# Gen 2 Collections`, `Large Object Heap size`. Also monitor `.NET CLR Exceptions` → `# of Exceps Thrown / sec`. >20% time in GC indicates memory pressure in .NET apps. Frequent Gen 2 collections signal large object allocation issues. Target specific app pool instances (w3wp) for IIS applications.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:dotNET_CLR_Memory\" counter=\"% Time in GC\" instance!=\"_Global_\"\n| timechart span=5m avg(Value) as pct_gc by host, instance\n| where pct_gc > 20\n```\n\nUnderstanding this SPL\n\n**.NET CLR Performance Monitoring** — .NET garbage collection pauses and high exception rates cause application latency and instability. CLR monitoring reveals issues invisible to external health checks.\n\nDocumented **Data sources**: `sourcetype=Perfmon:dotNET_CLR_Memory` (counters: % Time in GC, Gen 2 Collections). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:dotNET_CLR_Memory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:dotNET_CLR_Memory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host, instance** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where pct_gc > 20` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**.NET CLR Performance Monitoring** — .NET garbage collection pauses and high exception rates cause application latency and instability. CLR monitoring reveals issues invisible to external health checks.\n\nDocumented **Data sources**: `sourcetype=Perfmon:dotNET_CLR_Memory` (counters: % Time in GC, Gen 2 Collections). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (GC time %), Bar chart (Gen 2 collections by app), Dual-axis (GC time + exceptions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how much time the .NET part of a program spends in garbage collection, which can slow web and service apps on Windows even when the server CPU looks only moderately busy.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.43",
              "n": "Failover Cluster Event Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Cluster failovers indicate node failures or network partitions affecting high-availability services. Each failover risks brief downtime and potential data loss.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-FailoverClustering/Operational` (EventCode 1069, 1177, 1205, 1254)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-FailoverClustering/Operational\"\n  EventCode IN (1069, 1177, 1205, 1254)\n| eval event=case(EventCode=1069,\"Resource failed\",EventCode=1177,\"Quorum lost\",EventCode=1205,\"Cluster service stopped\",EventCode=1254,\"Node removed\")\n| table _time, host, event, EventCode, ResourceName, NodeName\n| sort -_time",
              "m": "Enable FailoverClustering Operational log on all cluster nodes. EventCode 1069=cluster resource failure (triggers failover), 1177=quorum loss (cluster at risk), 1205=cluster service stopped. Alert on quorum loss and resource failures immediately. Track failover frequency — frequent failovers indicate underlying instability. Monitor cluster network health via EventCode 1123 (network disconnected).",
              "z": "Timeline (failover events), Table (affected resources), Single value (failovers today), Status panel (cluster health).",
              "kfp": "Planned move groups, patch reboots, and node drains are normal. Sustained resource offline without change is the real signal.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-FailoverClustering/Operational` (EventCode 1069, 1177, 1205, 1254).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable FailoverClustering Operational log on all cluster nodes. EventCode 1069=cluster resource failure (triggers failover), 1177=quorum loss (cluster at risk), 1205=cluster service stopped. Alert on quorum loss and resource failures immediately. Track failover frequency — frequent failovers indicate underlying instability. Monitor cluster network health via EventCode 1123 (network disconnected).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-FailoverClustering/Operational\"\n  EventCode IN (1069, 1177, 1205, 1254)\n| eval event=case(EventCode=1069,\"Resource failed\",EventCode=1177,\"Quorum lost\",EventCode=1205,\"Cluster service stopped\",EventCode=1254,\"Node removed\")\n| table _time, host, event, EventCode, ResourceName, NodeName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Failover Cluster Event Monitoring** — Cluster failovers indicate node failures or network partitions affecting high-availability services. Each failover risks brief downtime and potential data loss.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-FailoverClustering/Operational` (EventCode 1069, 1177, 1205, 1254). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **event** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Failover Cluster Event Monitoring**): table _time, host, event, EventCode, ResourceName, NodeName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (failover events), Table (affected resources), Single value (failovers today), Status panel (cluster health).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a cluster of servers that keep each other online is in trouble, so a storage or network blip does not take the whole app down by surprise.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.44",
              "n": "SMB Share Access Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "Anomalous SMB share access patterns indicate lateral movement, data exfiltration, or ransomware file encryption across network shares.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 5140, 5145)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5140\n| stats dc(ShareName) as unique_shares count by SubjectUserName, IpAddress\n| where unique_shares > 10 OR count > 1000\n| sort -unique_shares",
              "m": "Enable \"Audit File Share\" and \"Audit Detailed File Share\" in Advanced Audit Policy. EventCode 5140=share accessed, 5145=detailed file access with access check results. Alert when a single user accesses many shares rapidly (lateral movement) or when write volume spikes (ransomware indicator). Baseline normal access patterns per user/role. Note: generates high volume — filter to sensitive shares or use summary indexing.",
              "z": "Table (top share accessors), Timechart (access rate), Bar chart (shares accessed per user).",
              "kfp": "Indexers, data movers, and backup software scan many shares. A high share count is often the job, not a person browsing randomly.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 5140, 5145).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable \"Audit File Share\" and \"Audit Detailed File Share\" in Advanced Audit Policy. EventCode 5140=share accessed, 5145=detailed file access with access check results. Alert when a single user accesses many shares rapidly (lateral movement) or when write volume spikes (ransomware indicator). Baseline normal access patterns per user/role. Note: generates high volume — filter to sensitive shares or use summary indexing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5140\n| stats dc(ShareName) as unique_shares count by SubjectUserName, IpAddress\n| where unique_shares > 10 OR count > 1000\n| sort -unique_shares\n```\n\nUnderstanding this SPL\n\n**SMB Share Access Anomalies** — Anomalous SMB share access patterns indicate lateral movement, data exfiltration, or ransomware file encryption across network shares.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 5140, 5145). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by SubjectUserName, IpAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_shares > 10 OR count > 1000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top share accessors), Timechart (access rate), Bar chart (shares accessed per user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for a user or account that is touching a lot more file shares at once than normal, which can be a data hunt in progress.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.45",
              "n": "Windows Time Service (W32Time) Issues",
              "c": "high",
              "f": "beginner",
              "v": "Time synchronization failures break Kerberos authentication (5-minute tolerance), cause log correlation issues, and invalidate audit trails.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:System` (Source=Microsoft-Windows-Time-Service, EventCode 129, 134, 142, 36)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:System\" Source=\"Microsoft-Windows-Time-Service\"\n  EventCode IN (129, 134, 142, 36)\n| eval issue=case(EventCode=129,\"NTP unreachable\",EventCode=134,\"Time difference too large\",EventCode=142,\"Time service stopped\",EventCode=36,\"Time not synced for 24h\")\n| table _time, host, issue, EventCode\n| sort -_time",
              "m": "W32Time events log automatically. EventCode 129=NTP server unreachable, 134=time difference >5 seconds (Kerberos risk), 142=time service stopped, 36=not synced in 24 hours. Domain-joined machines sync to DC; DCs sync to PDC emulator; PDC syncs to external NTP. Alert on any DC time sync failures (Kerberos impact). Monitor non-DC servers for EventCode 36.",
              "z": "Table (time sync issues), Status grid (host × sync status), Single value (unsynced hosts).",
              "kfp": "Short VPN blips, Hyper-V time sync, and manual changes during DR drills. Fix hierarchy before chasing malware on a single 129.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:System` (Source=Microsoft-Windows-Time-Service, EventCode 129, 134, 142, 36).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nW32Time events log automatically. EventCode 129=NTP server unreachable, 134=time difference >5 seconds (Kerberos risk), 142=time service stopped, 36=not synced in 24 hours. Domain-joined machines sync to DC; DCs sync to PDC emulator; PDC syncs to external NTP. Alert on any DC time sync failures (Kerberos impact). Monitor non-DC servers for EventCode 36.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:System\" Source=\"Microsoft-Windows-Time-Service\"\n  EventCode IN (129, 134, 142, 36)\n| eval issue=case(EventCode=129,\"NTP unreachable\",EventCode=134,\"Time difference too large\",EventCode=142,\"Time service stopped\",EventCode=36,\"Time not synced for 24h\")\n| table _time, host, issue, EventCode\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Windows Time Service (W32Time) Issues** — Time synchronization failures break Kerberos authentication (5-minute tolerance), cause log correlation issues, and invalidate audit trails.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (Source=Microsoft-Windows-Time-Service, EventCode 129, 134, 142, 36). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:System. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Windows Time Service (W32Time) Issues**): table _time, host, issue, EventCode\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Windows Time Service (W32Time) Issues** — Time synchronization failures break Kerberos authentication (5-minute tolerance), cause log correlation issues, and invalidate audit trails.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (Source=Microsoft-Windows-Time-Service, EventCode 129, 134, 142, 36). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Services` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (time sync issues), Status grid (host × sync status), Single value (unsynced hosts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a server is struggling to keep its clock in sync, which can break sign-ins, databases, and logs in subtle ways.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.46",
              "n": "DFS-R Replication Backlog",
              "c": "high",
              "f": "intermediate",
              "v": "DFS-R replication backlogs mean file servers are out of sync. Users may access stale data, and a prolonged backlog can trigger an initial sync (full re-replication).",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:DFS Replication` (EventCode 4012, 4302, 4304, 5002, 5008)",
              "q": "index=wineventlog source=\"WinEventLog:DFS Replication\" EventCode IN (4012, 4302, 4304, 5002, 5008)\n| eval issue=case(EventCode=4012,\"Auto-recovery started\",EventCode=4302,\"Staging quota exceeded\",EventCode=4304,\"Backlog exceeded limit\",EventCode=5002,\"Initial sync unexpected\",EventCode=5008,\"Connection failed\")\n| table _time, host, issue, ReplicationGroupName, PartnerName\n| sort -_time",
              "m": "Forward DFS Replication event logs from all DFS members. EventCode 4304=backlog exceeds threshold (default 100 files), 5008=connection failure between partners. Alert on backlog thresholds and connection failures. Monitor EventCode 4012 (auto-recovery) — frequent occurrences indicate unstable replication. Use `dfsrdiag backlog` via scripted input for precise backlog counts.",
              "z": "Table (replication issues), Line chart (backlog trend), Status grid (partner × status).",
              "kfp": "First sync, WAN throttling, and staging size tuning create churn. A backlog during initial build is not the same as a permanent split-brain—check replication health in DFS UI.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:DFS Replication` (EventCode 4012, 4302, 4304, 5002, 5008).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward DFS Replication event logs from all DFS members. EventCode 4304=backlog exceeds threshold (default 100 files), 5008=connection failure between partners. Alert on backlog thresholds and connection failures. Monitor EventCode 4012 (auto-recovery) — frequent occurrences indicate unstable replication. Use `dfsrdiag backlog` via scripted input for precise backlog counts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:DFS Replication\" EventCode IN (4012, 4302, 4304, 5002, 5008)\n| eval issue=case(EventCode=4012,\"Auto-recovery started\",EventCode=4302,\"Staging quota exceeded\",EventCode=4304,\"Backlog exceeded limit\",EventCode=5002,\"Initial sync unexpected\",EventCode=5008,\"Connection failed\")\n| table _time, host, issue, ReplicationGroupName, PartnerName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**DFS-R Replication Backlog** — DFS-R replication backlogs mean file servers are out of sync. Users may access stale data, and a prolonged backlog can trigger an initial sync (full re-replication).\n\nDocumented **Data sources**: `sourcetype=WinEventLog:DFS Replication` (EventCode 4012, 4302, 4304, 5002, 5008). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **DFS-R Replication Backlog**): table _time, host, issue, ReplicationGroupName, PartnerName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (replication issues), Line chart (backlog trend), Status grid (partner × status).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when file copies between sites are piling up, which can make different offices see different versions of the same data.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.47",
              "n": "Application Crash (WER) Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Windows Error Reporting captures crash details for all applications. Trending reveals systemic instability, bad patches, or problematic application versions.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Application` (EventCode 1000, 1001, 1002)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Application\" EventCode IN (1000, 1002)\n| eval crash_app=coalesce(Application, param1)\n| stats count by host, crash_app, EventCode\n| where count > 3\n| sort -count",
              "m": "EventCode 1000=application crash with fault details (module, exception code, offset), 1002=application hang detected. Aggregate by faulting application and module across the fleet. Spikes after patch deployment indicate regression. Alert on critical applications (e.g., w3wp.exe, sqlservr.exe, lsass.exe). Use EventCode 1001 (WER bucket data) for deduplication.",
              "z": "Bar chart (top crashing apps), Timechart (crash rate over time), Table (crash details by module).",
              "kfp": "Browser and Office update waves throw WER noise. Count by the same `crash_app` + version in your main search before a mass pager.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Application` (EventCode 1000, 1001, 1002).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEventCode 1000=application crash with fault details (module, exception code, offset), 1002=application hang detected. Aggregate by faulting application and module across the fleet. Spikes after patch deployment indicate regression. Alert on critical applications (e.g., w3wp.exe, sqlservr.exe, lsass.exe). Use EventCode 1001 (WER bucket data) for deduplication.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Application\" EventCode IN (1000, 1002)\n| eval crash_app=coalesce(Application, param1)\n| stats count by host, crash_app, EventCode\n| where count > 3\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Application Crash (WER) Trending** — Windows Error Reporting captures crash details for all applications. Trending reveals systemic instability, bad patches, or problematic application versions.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Application` (EventCode 1000, 1001, 1002). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Application. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Application\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **crash_app** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, crash_app, EventCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Application Crash (WER) Trending** — Windows Error Reporting captures crash details for all applications. Trending reveals systemic instability, bad patches, or problematic application versions.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Application` (EventCode 1000, 1001, 1002). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Services` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top crashing apps), Timechart (crash rate over time), Table (crash details by module).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the same app is crashing over and over on a server, not just a one-time blip, so you can fix the root cause before users revolt.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.48",
              "n": "PowerShell Script Block Logging",
              "c": "high",
              "f": "intermediate",
              "v": "Script Block Logging captures the full text of every PowerShell script executed, including deobfuscated code. Essential for detecting fileless attacks and encoded commands.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-PowerShell/Operational` (EventCode 4104)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104\n| search ScriptBlockText IN (\"*Invoke-Mimikatz*\",\"*Net.WebClient*\",\"*DownloadString*\",\"*IEX*\",\"*-enc*\",\"*FromBase64*\",\"*Invoke-Expression*\")\n| table _time, host, Path, ScriptBlockText, UserName\n| sort -_time",
              "m": "Enable Script Block Logging via GPO: Computer Configuration → Administrative Templates → Windows PowerShell → Turn on PowerShell Script Block Logging. EventCode 4104 logs the full script text, including auto-deobfuscation. Search for suspicious keywords: `Invoke-Expression`, `Net.WebClient`, `DownloadString`, `FromBase64String`, `Invoke-Mimikatz`. High volume — consider targeted alerting and summary indexing. Complements EventCode 4688 (process creation with command line).",
              "z": "Table (suspicious scripts), Timeline, Bar chart (script execution by host), Search interface for threat hunting.",
              "kfp": "RMM, installers, and admin wrappers trigger the same *Invoke* and *IEX* substrings. Baseline with your gold image and approved paths.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-PowerShell/Operational` (EventCode 4104).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Script Block Logging via GPO: Computer Configuration → Administrative Templates → Windows PowerShell → Turn on PowerShell Script Block Logging. EventCode 4104 logs the full script text, including auto-deobfuscation. Search for suspicious keywords: `Invoke-Expression`, `Net.WebClient`, `DownloadString`, `FromBase64String`, `Invoke-Mimikatz`. High volume — consider targeted alerting and summary indexing. Complements EventCode 4688 (process creation with command line).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104\n| search ScriptBlockText IN (\"*Invoke-Mimikatz*\",\"*Net.WebClient*\",\"*DownloadString*\",\"*IEX*\",\"*-enc*\",\"*FromBase64*\",\"*Invoke-Expression*\")\n| table _time, host, Path, ScriptBlockText, UserName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**PowerShell Script Block Logging** — Script Block Logging captures the full text of every PowerShell script executed, including deobfuscated code. Essential for detecting fileless attacks and encoded commands.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-PowerShell/Operational` (EventCode 4104). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-PowerShell/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-PowerShell/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **PowerShell Script Block Logging**): table _time, host, Path, ScriptBlockText, UserName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious scripts), Timeline, Bar chart (script execution by host), Search interface for threat hunting.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at PowerShell command text for a wider set of sketchy patterns so your team can tune beyond the first hunt list you shipped.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.49",
              "n": "Lateral Movement via Explicit Credentials",
              "c": "critical",
              "f": "advanced",
              "v": "Logon type 9 (NewCredentials / RunAs /netonly) and type 10 (RDP) from unexpected sources reveal credential abuse and lateral movement between systems.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4624, Logon Type 3, 9, 10)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 LogonType IN (9, 10)\n| stats count values(LogonType) as types by TargetUserName, IpAddress, host\n| where count > 5\n| lookup admin_accounts.csv user as TargetUserName OUTPUT is_admin\n| where is_admin=\"true\"\n| sort -count",
              "m": "Collect Security logs from all servers. Logon type 9=NewCredentials (runas /netonly — commonly used with stolen hashes), type 10=RemoteInteractive (RDP). Focus on admin accounts authenticating to servers they don't normally access. Build a baseline of normal admin→server mappings. Alert when an admin authenticates to >3 new hosts in an hour. Correlate with process creation (4688) on the destination.",
              "z": "Network graph (source→destination), Table (unusual logons), Timechart (logon rate by type).",
              "kfp": "Jump boxes, PAM, and VDI can cluster here as ‘explicit creds’ in normal work. Add roster and PAM context before calling IR.",
              "refs": "[new hosts in an hour. Correlate with process creation](https://splunkbase.splunk.com/app/4688), [Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4624, Logon Type 3, 9, 10).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Security logs from all servers. Logon type 9=NewCredentials (runas /netonly — commonly used with stolen hashes), type 10=RemoteInteractive (RDP). Focus on admin accounts authenticating to servers they don't normally access. Build a baseline of normal admin→server mappings. Alert when an admin authenticates to >3 new hosts in an hour. Correlate with process creation (4688) on the destination.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 LogonType IN (9, 10)\n| stats count values(LogonType) as types by TargetUserName, IpAddress, host\n| where count > 5\n| lookup admin_accounts.csv user as TargetUserName OUTPUT is_admin\n| where is_admin=\"true\"\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Lateral Movement via Explicit Credentials** — Logon type 9 (NewCredentials / RunAs /netonly) and type 10 (RDP) from unexpected sources reveal credential abuse and lateral movement between systems.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4624, Logon Type 3, 9, 10). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by TargetUserName, IpAddress, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where is_admin=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Lateral Movement via Explicit Credentials** — Logon type 9 (NewCredentials / RunAs /netonly) and type 10 (RDP) from unexpected sources reveal credential abuse and lateral movement between systems.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4624, Logon Type 3, 9, 10). 
**App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Network graph (source→destination), Table (unusual logons), Timechart (logon rate by type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when people sign in in a way that often shows someone using someone else’s password on purpose, which needs a careful look in admin shops.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.50",
              "n": "DNS Debug Query Logging",
              "c": "medium",
              "f": "intermediate",
              "v": "DNS query logging reveals C2 communication via DNS tunneling, DGA domains, and unauthorized DNS resolution. Essential for security visibility.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-DNS-Server/Analytical` or DNS debug log file",
              "q": "index=dns sourcetype=\"MSAD:NT6:DNS\" query_type IN (TXT, NULL, CNAME)\n| stats count avg(query_length) as avg_len by query, client_ip\n| where avg_len > 50 OR count > 100\n| sort -avg_len",
              "m": "Enable DNS Analytical logging on Windows DNS servers or DNS debug logging to file (dnscmd /config /logfilepath). Forward via Splunk_TA_windows or Splunk Add-on for Microsoft DNS. Long TXT queries (>50 chars) and high-frequency CNAME lookups indicate DNS tunneling. Queries to recently registered domains or high-entropy names suggest DGA malware. Baseline normal query patterns, then alert on anomalies.",
              "z": "Table (suspicious queries), Bar chart (query types), Timechart (query volume), Top domains.",
              "kfp": "ACME, email security, and DevOps TXT churn can look odd. A DNS and security architecture review should label expected odd query types first.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-DNS-Server/Analytical` or DNS debug log file.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable DNS Analytical logging on Windows DNS servers or DNS debug logging to file (dnscmd /config /logfilepath). Forward via Splunk_TA_windows or Splunk Add-on for Microsoft DNS. Long TXT queries (>50 chars) and high-frequency CNAME lookups indicate DNS tunneling. Queries to recently registered domains or high-entropy names suggest DGA malware. Baseline normal query patterns, then alert on anomalies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns sourcetype=\"MSAD:NT6:DNS\" query_type IN (TXT, NULL, CNAME)\n| stats count avg(query_length) as avg_len by query, client_ip\n| where avg_len > 50 OR count > 100\n| sort -avg_len\n```\n\nUnderstanding this SPL\n\n**DNS Debug Query Logging** — DNS query logging reveals C2 communication via DNS tunneling, DGA domains, and unauthorized DNS resolution. Essential for security visibility.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-DNS-Server/Analytical` or DNS debug log file. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns; **sourcetype**: MSAD:NT6:DNS. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns, sourcetype=\"MSAD:NT6:DNS\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by query, client_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_len > 50 OR count > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious queries), Bar chart (query types), Timechart (query volume), Top domains.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you spot unusual name lookups—length and pattern—that can be used to sneak data out or set up a tunnel, beyond normal browsing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.51",
              "n": "Process Creation with Command Line Auditing",
              "c": "high",
              "f": "intermediate",
              "v": "Full command-line visibility on process creation is the foundation of threat detection. Reveals encoded PowerShell, LOLBin abuse, and suspicious child processes.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4688)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4688\n| where match(CommandLine, \"(?i)(certutil.*-urlcache|bitsadmin.*\\/transfer|mshta.*http|regsvr32.*\\/s.*\\/n.*\\/u|rundll32.*javascript)\")\n| table _time, host, SubjectUserName, NewProcessName, CommandLine, ParentProcessName\n| sort -_time",
              "m": "Enable \"Audit Process Creation\" and \"Include command line in process creation events\" via GPO (Computer Configuration → Administrative Templates → System → Audit Process Creation). EventCode 4688 then includes full CommandLine. Search for known LOLBins (Living Off the Land Binaries): certutil, bitsadmin, mshta, regsvr32, rundll32 with suspicious parameters. High volume — use summary indexing or data model acceleration.",
              "z": "Table (suspicious processes), Timeline, Search interface for hunting.",
              "kfp": "IT tools (`certutil`, `bitsadmin`, `mshta`, `rundll32`) are used in patching and MECM. Add parent, hash, and signer to avoid 24/7 red noise.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4688).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable \"Audit Process Creation\" and \"Include command line in process creation events\" via GPO (Computer Configuration → Administrative Templates → System → Audit Process Creation). EventCode 4688 then includes full CommandLine. Search for known LOLBins (Living Off the Land Binaries): certutil, bitsadmin, mshta, regsvr32, rundll32 with suspicious parameters. High volume — use summary indexing or data model acceleration.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4688\n| where match(CommandLine, \"(?i)(certutil.*-urlcache|bitsadmin.*\\/transfer|mshta.*http|regsvr32.*\\/s.*\\/n.*\\/u|rundll32.*javascript)\")\n| table _time, host, SubjectUserName, NewProcessName, CommandLine, ParentProcessName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Process Creation with Command Line Auditing** — Full command-line visibility on process creation is the foundation of threat detection. Reveals encoded PowerShell, LOLBin abuse, and suspicious child processes.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4688). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(CommandLine, \"(?i)(certutil.*-urlcache|bitsadmin.*\\/transfer|mshta.*http|regsvr32.*\\/s.*\\/n.*\\/u|rundll32.*java…` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Process Creation with Command Line Auditing**): table _time, host, SubjectUserName, NewProcessName, CommandLine, ParentProcessName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Process Creation with Command Line Auditing** — Full command-line visibility on process creation is the foundation of threat detection. Reveals encoded PowerShell, LOLBin abuse, and suspicious child processes.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4688). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious processes), Timeline, Search interface for hunting.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for a short list of command-line patterns that are often used when someone is already inside the network and trying to run download or file tricks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.19",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU AI Act Art.19 (Automatically generated logs) is enforced — Splunk UC-1.2.51: Process Creation with Command Line Auditing.",
                  "ea": "Saved search 'UC-1.2.51' running on sourcetype=WinEventLog:Security (EventCode 4688), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2024/1689/oj"
                },
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-1.2.51: Process Creation with Command Line Auditing.",
                  "ea": "Saved search 'UC-1.2.51' running on sourcetype=WinEventLog:Security (EventCode 4688), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-21/chapter-I/subchapter-A/part-11"
                },
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "SI-4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FedRAMP SI-4 (System monitoring) is enforced — Splunk UC-1.2.51: Process Creation with Command Line Auditing.",
                  "ea": "Saved search 'UC-1.2.51' running on sourcetype=WinEventLog:Security (EventCode 4688), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "09.aa",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HITRUST 09.aa (Audit logging) is enforced — Splunk UC-1.2.51: Process Creation with Command Line Auditing.",
                  "ea": "Saved search 'UC-1.2.51' running on sourcetype=WinEventLog:Security (EventCode 4688), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                }
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.52",
              "n": "NIC Teaming / LBFO Failover (Windows)",
              "c": "high",
              "f": "beginner",
              "v": "NIC team member failures reduce redundancy silently. A second failure causes full network loss. Detecting the first failure enables proactive repair.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-NlbFo/Operational` (EventCode 101, 105, 106, 115)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-NlbFo/Operational\"\n  EventCode IN (101, 105, 106, 115)\n| eval event=case(EventCode=101,\"Team degraded\",EventCode=105,\"Member disconnected\",EventCode=106,\"Member reconnected\",EventCode=115,\"Standby activated\")\n| table _time, host, event, TeamName, MemberName\n| sort -_time",
              "m": "NIC Teaming (LBFO) events log automatically. EventCode 101=team degraded (member lost), 105=member disconnected, 106=reconnected, 115=standby activated. Alert immediately when team degrades — the remaining NIC is now a single point of failure. Track flapping (repeated 105→106 cycles) which indicates cable, switch port, or driver issues.",
              "z": "Status grid (team × member status), Timeline (failover events), Single value (degraded teams).",
              "kfp": "Switch maintenance, cable wiggle, and driver reloads flap teams. Pair with your network and hypervisor runbooks, not a single 101/105.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-NlbFo/Operational` (EventCode 101, 105, 106, 115).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNIC Teaming (LBFO) events log automatically. EventCode 101=team degraded (member lost), 105=member disconnected, 106=reconnected, 115=standby activated. Alert immediately when team degrades — the remaining NIC is now a single point of failure. Track flapping (repeated 105→106 cycles) which indicates cable, switch port, or driver issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-NlbFo/Operational\"\n  EventCode IN (101, 105, 106, 115)\n| eval event=case(EventCode=101,\"Team degraded\",EventCode=105,\"Member disconnected\",EventCode=106,\"Member reconnected\",EventCode=115,\"Standby activated\")\n| table _time, host, event, TeamName, MemberName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**NIC Teaming / LBFO Failover (Windows)** — NIC team member failures reduce redundancy silently. A second failure causes full network loss. Detecting the first failure enables proactive repair.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-NlbFo/Operational` (EventCode 101, 105, 106, 115). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **event** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NIC Teaming / LBFO Failover (Windows)**): table _time, host, event, TeamName, MemberName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (team × member status), Timeline (failover events), Single value (degraded teams).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a teamed network card is not healthy as a group, so one bad link does not take down a highly available server the hard way.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.53",
              "n": "BitLocker Recovery Events",
              "c": "high",
              "f": "beginner",
              "v": "BitLocker recovery mode triggers indicate TPM issues, boot configuration changes, or potential tampering with the boot chain. Each event requires investigation.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-BitLocker/BitLocker Management` (EventCode 768, 770, 775, 846)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-BitLocker*\"\n  EventCode IN (768, 770, 775, 846)\n| eval issue=case(EventCode=768,\"Recovery mode entered\",EventCode=770,\"Protection suspended\",EventCode=775,\"Recovery key used\",EventCode=846,\"Encryption failed\")\n| table _time, host, issue, VolumeName, RecoveryReason\n| sort -_time",
              "m": "Forward BitLocker Management and Operational logs. EventCode 768/775=recovery mode (TPM unsealing failed, boot integrity compromised). Common benign triggers: BIOS updates, boot order changes. Alert on recovery events — each one should be correlated with approved change windows. Track EventCode 770 (protection suspended) — ensure it's re-enabled within 24 hours.",
              "z": "Table (recovery events), Timeline, Single value (unresolved recoveries).",
              "kfp": "Re-image, re-key, and TPM/bios update flows generate recovery and suspend entries. A recovery key *use* in the field is the hot signal for stolen device workflows.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-BitLocker/BitLocker Management` (EventCode 768, 770, 775, 846).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward BitLocker Management and Operational logs. EventCode 768/775=recovery mode (TPM unsealing failed, boot integrity compromised). Common benign triggers: BIOS updates, boot order changes. Alert on recovery events — each one should be correlated with approved change windows. Track EventCode 770 (protection suspended) — ensure it's re-enabled within 24 hours.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-BitLocker*\"\n  EventCode IN (768, 770, 775, 846)\n| eval issue=case(EventCode=768,\"Recovery mode entered\",EventCode=770,\"Protection suspended\",EventCode=775,\"Recovery key used\",EventCode=846,\"Encryption failed\")\n| table _time, host, issue, VolumeName, RecoveryReason\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**BitLocker Recovery Events** — BitLocker recovery mode triggers indicate TPM issues, boot configuration changes, or potential tampering with the boot chain. Each event requires investigation.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-BitLocker/BitLocker Management` (EventCode 768, 770, 775, 846). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **BitLocker Recovery Events**): table _time, host, issue, VolumeName, RecoveryReason\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recovery events), Timeline, Single value (unresolved recoveries).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when full-disk protection on the server is in recovery or trouble, so someone can re-verify keys before the volume is wide open with no secrets.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.54",
              "n": "Windows Event Forwarding (WEF) Health",
              "c": "high",
              "f": "advanced",
              "v": "WEF collects events from thousands of endpoints to central collectors. Forwarding failures create visibility gaps across the security monitoring pipeline.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Forwarding/Operational` (EventCode 100, 102, 103, 105, 111)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-Forwarding/Operational\"\n  EventCode IN (102, 103, 105, 111)\n| eval issue=case(EventCode=102,\"Subscription connected\",EventCode=103,\"Subscription error\",EventCode=105,\"Access denied\",EventCode=111,\"Collector unreachable\")\n| stats count by host, issue, SubscriptionName\n| where issue!=\"Subscription connected\"\n| sort -count",
              "m": "Enable Forwarding/Operational log on WEF collectors and clients. EventCode 103=subscription-level error, 105=access denied (Kerberos/permission issue), 111=cannot reach collector. Monitor for expected forwarders going silent — compare against CMDB endpoint list. Alert when error rate exceeds 5% of clients. Use `wecutil gr <subscription>` via scripted input for precise subscription health.",
              "z": "Status grid (subscription × host), Pie chart (healthy vs. error), Table (error details), Single value (connected clients).",
              "kfp": "Short collector restarts and cert rollouts. Sustained 103/111 patterns mean central logging gaps—*that* is the audit and IR risk.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Forwarding/Operational` (EventCode 100, 102, 103, 105, 111).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Forwarding/Operational log on WEF collectors and clients. EventCode 103=subscription-level error, 105=access denied (Kerberos/permission issue), 111=cannot reach collector. Monitor for expected forwarders going silent — compare against CMDB endpoint list. Alert when error rate exceeds 5% of clients. Use `wecutil gr <subscription>` via scripted input for precise subscription health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-Forwarding/Operational\"\n  EventCode IN (102, 103, 105, 111)\n| eval issue=case(EventCode=102,\"Subscription connected\",EventCode=103,\"Subscription error\",EventCode=105,\"Access denied\",EventCode=111,\"Collector unreachable\")\n| stats count by host, issue, SubscriptionName\n| where issue!=\"Subscription connected\"\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Windows Event Forwarding (WEF) Health** — WEF collects events from thousands of endpoints to central collectors. Forwarding failures create visibility gaps across the security monitoring pipeline.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Forwarding/Operational` (EventCode 100, 102, 103, 105, 111). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, issue, SubscriptionName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where issue!=\"Subscription connected\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (subscription × host), Pie chart (healthy vs. error), Table (error details), Single value (connected clients).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
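              "script": "Hedged sketch of the `wecutil` health poll mentioned in the implementation notes (assumes PowerShell on the collector and that raw `wecutil` output lines are acceptable as events; verify the parsing in your environment):\n\n```powershell\n# Enumerate every WEF subscription, then emit its runtime status one line at a time\nforeach ($sub in (wecutil es)) {\n  wecutil gr $sub | ForEach-Object { \"subscription=$sub $_\" }\n}\n```\n\nSchedule it as a Splunk scripted or PowerShell input on the collector (see docs/implementation-guide.md) and alert when expected sources disappear from the output.",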
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a forwarder of Windows logs to a central place is losing connection, so you are not missing security events in one region quietly.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.55",
              "n": "Suspicious Token Manipulation",
              "c": "critical",
              "f": "intermediate",
              "v": "Token impersonation and privilege escalation via token manipulation (SeImpersonatePrivilege abuse) is a common post-exploitation technique.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4673, 4674)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4673, 4674)\n  Privileges IN (\"SeImpersonatePrivilege\", \"SeAssignPrimaryTokenPrivilege\", \"SeTcbPrivilege\", \"SeDebugPrivilege\")\n| where NOT match(ProcessName, \"(?i)(lsass|svchost|services|mssql|w3wp)\")\n| stats count by SubjectUserName, ProcessName, Privileges, host\n| sort -count",
              "m": "Enable \"Audit Sensitive Privilege Use\" in Advanced Audit Policy. EventCode 4673=sensitive privilege used, 4674=operation on privileged object. Focus on SeImpersonatePrivilege (Potato attacks), SeDebugPrivilege (memory injection), SeTcbPrivilege (token creation). Filter known legitimate users (service accounts, SQL Server, IIS). Alert on non-standard processes using these privileges.",
              "z": "Table (privilege usage by process), Bar chart (privilege types), Timeline, Alert on unusual callers.",
              "kfp": "Backup, storage, and defrag tools can trigger privilege-heavy access. A raw match on the privilege *string* is only the start of triage—add process and signer path.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4673, 4674).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable \"Audit Sensitive Privilege Use\" in Advanced Audit Policy. EventCode 4673=sensitive privilege used, 4674=operation on privileged object. Focus on SeImpersonatePrivilege (Potato attacks), SeDebugPrivilege (memory injection), SeTcbPrivilege (token creation). Filter known legitimate users (service accounts, SQL Server, IIS). Alert on non-standard processes using these privileges.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4673, 4674)\n  Privileges IN (\"SeImpersonatePrivilege\", \"SeAssignPrimaryTokenPrivilege\", \"SeTcbPrivilege\", \"SeDebugPrivilege\")\n| where NOT match(ProcessName, \"(?i)(lsass|svchost|services|mssql|w3wp)\")\n| stats count by SubjectUserName, ProcessName, Privileges, host\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Suspicious Token Manipulation** — Token impersonation and privilege escalation via token manipulation (SeImpersonatePrivilege abuse) is a common post-exploitation technique.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4673, 4674). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where NOT match(ProcessName, \"(?i)(lsass|svchost|services|mssql|w3wp)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by SubjectUserName, ProcessName, Privileges, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Suspicious Token Manipulation** — Token impersonation and privilege escalation via token manipulation (SeImpersonatePrivilege abuse) is a common post-exploitation technique.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4673, 4674). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (privilege usage by process), Bar chart (privilege types), Timeline, Alert on unusual callers.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for a small set of very powerful system rights being used in a way that does not look like a normal system tool, which can be part of a break-in path.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.56",
              "n": "Sysmon Network Connection Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Sysmon EventCode 3 logs every outbound TCP/UDP connection with the originating process. Reveals C2 callbacks, data exfiltration, and unauthorized network access.",
              "t": "`Splunk_TA_windows`, Sysmon required",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 3)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=3\n  Initiated=\"true\"\n| where NOT cidrmatch(\"10.0.0.0/8\", DestinationIp) AND NOT cidrmatch(\"172.16.0.0/12\", DestinationIp) AND NOT cidrmatch(\"192.168.0.0/16\", DestinationIp)\n| stats count dc(DestinationIp) as unique_ips by Image, host, User\n| where unique_ips > 50 OR count > 500\n| sort -unique_ips",
              "m": "Deploy Sysmon with network connection logging (EventCode 3, Initiated=true for outbound). Filter RFC1918 addresses to focus on external connections. High unique destination IPs from a single process suggest scanning or C2 beaconing. Alert on processes making external connections that normally shouldn't (e.g., winword.exe, excel.exe connecting outbound). Combine with DNS logs for full picture.",
              "z": "Table (outbound connections by process), Network graph, Timechart (connection rate).",
              "kfp": "Browsers, updaters, and agents reach the whole internet. Baseline the same parent process; rare parent + new rare destination is a better story than a raw denylist miss.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, Sysmon required.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 3).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Sysmon with network connection logging (EventCode 3, Initiated=true for outbound). Filter RFC1918 addresses to focus on external connections. High unique destination IPs from a single process suggest scanning or C2 beaconing. Alert on processes making external connections that normally shouldn't (e.g., winword.exe, excel.exe connecting outbound). Combine with DNS logs for full picture.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=3\n  Initiated=\"true\"\n| where NOT cidrmatch(\"10.0.0.0/8\", DestinationIp) AND NOT cidrmatch(\"172.16.0.0/12\", DestinationIp) AND NOT cidrmatch(\"192.168.0.0/16\", DestinationIp)\n| stats count dc(DestinationIp) as unique_ips by Image, host, User\n| where unique_ips > 50 OR count > 500\n| sort -unique_ips\n```\n\nUnderstanding this SPL\n\n**Sysmon Network Connection Monitoring** — Sysmon EventCode 3 logs every outbound TCP/UDP connection with the originating process. Reveals C2 callbacks, data exfiltration, and unauthorized network access.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 3). **App/TA** (typical add-on context): `Splunk_TA_windows`, Sysmon required. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-Sysmon/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where NOT cidrmatch(\"10.0.0.0/8\", DestinationIp) AND NOT cidrmatch(\"172.16.0.0/12\", DestinationIp) AND NOT cidrmatch(\"192.1…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by Image, host, User** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_ips > 50 OR count > 500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (outbound connections by process), Network graph, Timechart (connection rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at program-started network connections to odd places, after we have removed your usual on-network ranges, so a beacon stands out a bit more clearly.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.57",
              "n": "Thread Count Exhaustion",
              "c": "high",
              "f": "intermediate",
              "v": "Thread leaks or excessive thread creation cause pool exhaustion and application hangs. Windows has a system-wide limit of ~65K threads that affects all processes.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:Process` (counter: Thread Count), `sourcetype=Perfmon:System` (counter: Threads)",
              "q": "index=perfmon sourcetype=\"Perfmon:Process\" counter=\"Thread Count\" instance!=\"_Total\" instance!=\"Idle\"\n| stats max(Value) as threads by host, instance\n| where threads > 500\n| sort -threads",
              "m": "Configure Perfmon Process inputs with `Thread Count` counter (interval=300). Also monitor system-wide threads via Perfmon System → Threads. Alert when any single process exceeds 500 threads or system total exceeds 50K. Common offenders: IIS application pools (w3wp.exe), Java applications, .NET services with async leaks. Correlate with application response times.",
              "z": "Bar chart (top thread consumers), Line chart (thread growth trend), Single value (system total).",
              "kfp": "IIS, Java, and some database engines run hot thread counts in normal steady state. Baseline the same `instance` over weeks before a static cap.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:Process` (counter: Thread Count), `sourcetype=Perfmon:System` (counter: Threads).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon Process inputs with `Thread Count` counter (interval=300). Also monitor system-wide threads via Perfmon System → Threads. Alert when any single process exceeds 500 threads or system total exceeds 50K. Common offenders: IIS application pools (w3wp.exe), Java applications, .NET services with async leaks. Correlate with application response times.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:Process\" counter=\"Thread Count\" instance!=\"_Total\" instance!=\"Idle\"\n| stats max(Value) as threads by host, instance\n| where threads > 500\n| sort -threads\n```\n\nUnderstanding this SPL\n\n**Thread Count Exhaustion** — Thread leaks or excessive thread creation cause pool exhaustion and application hangs. Windows has a system-wide limit of ~65K threads that affects all processes.\n\nDocumented **Data sources**: `sourcetype=Perfmon:Process` (counter: Thread Count), `sourcetype=Perfmon:System` (counter: Threads). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:Process. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:Process\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, instance** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where threads > 500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Thread Count Exhaustion** — Thread leaks or excessive thread creation cause pool exhaustion and application hangs. Windows has a system-wide limit of ~65K threads that affects all processes.\n\nDocumented **Data sources**: `sourcetype=Perfmon:Process` (counter: Thread Count), `sourcetype=Perfmon:System` (counter: Threads). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top thread consumers), Line chart (thread growth trend), Single value (system total).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for a program on the server with an unusually high number of active threads, which can mean a bug, a runaway, or a capacity ceiling hit when you use the main Perfmon search as the SLO.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.58",
              "n": "Storage Spaces Health Monitoring",
              "c": "critical",
              "f": "advanced",
              "v": "Storage Spaces pools degrade silently when physical disks fail. Detection before a second disk fails prevents data loss in mirrored/parity configurations.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-StorageSpaces-Driver/Operational` (EventCode 1, 2, 3, 207)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-StorageSpaces*\" EventCode IN (1, 2, 3, 207)\n| eval status=case(EventCode=1,\"Pool degraded\",EventCode=2,\"Disk failed\",EventCode=3,\"IO error\",EventCode=207,\"Repair started\")\n| table _time, host, status, PhysicalDiskId, PoolName\n| sort -_time",
              "m": "Storage Spaces driver events log automatically. Monitor for pool degradation (lost redundancy) and disk failures. Alert at critical priority on any degradation — the pool is now running without full redundancy. Track repair progress (EventCode 207). Also poll via PowerShell scripted input: `Get-StoragePool | Get-PhysicalDisk | Where OperationalStatus -ne 'OK'` for proactive monitoring beyond event-based detection.",
              "z": "Status grid (pool × disk health), Timeline (degradation events), Single value (degraded pools — target: 0).",
              "kfp": "Resilver, rebalance, and one-slot swap cycles create scary logs that resolve as healthy after repair finishes. A disk that keeps leaving the array is the real RMA case.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-StorageSpaces-Driver/Operational` (EventCode 1, 2, 3, 207).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStorage Spaces driver events log automatically. Monitor for pool degradation (lost redundancy) and disk failures. Alert at critical priority on any degradation — the pool is now running without full redundancy. Track repair progress (EventCode 207). Also poll via PowerShell scripted input: `Get-StoragePool | Get-PhysicalDisk | Where OperationalStatus -ne 'OK'` for proactive monitoring beyond event-based detection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-StorageSpaces*\" EventCode IN (1, 2, 3, 207)\n| eval status=case(EventCode=1,\"Pool degraded\",EventCode=2,\"Disk failed\",EventCode=3,\"IO error\",EventCode=207,\"Repair started\")\n| table _time, host, status, PhysicalDiskId, PoolName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Storage Spaces Health Monitoring** — Storage Spaces pools degrade silently when physical disks fail. Detection before a second disk fails prevents data loss in mirrored/parity configurations.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-StorageSpaces-Driver/Operational` (EventCode 1, 2, 3, 207). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Storage Spaces Health Monitoring**): table _time, host, status, PhysicalDiskId, PoolName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (pool × disk health), Timeline (degradation events), Single value (degraded pools — target: 0).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
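              "script": "Hedged sketch of the PowerShell poll described in the implementation notes (property names come from `Get-PhysicalDisk`; tune the pool filter for your hardware):\n\n```powershell\n# Emit one key=value event per physical disk that is not reporting OK\nGet-StoragePool -IsPrimordial $false | Get-PhysicalDisk |\n  Where-Object { $_.OperationalStatus -ne 'OK' } |\n  ForEach-Object {\n    \"disk=$($_.FriendlyName) serial=$($_.SerialNumber) status=$($_.OperationalStatus) health=$($_.HealthStatus)\"\n  }\n```\n\nRun it as a scheduled scripted input on each Storage Spaces host; no output means every disk reports OK.",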
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a software-defined storage pool on Windows is in a bad state, so you can replace disks and fix the pool before data is lost or offline.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.59",
              "n": "DCOM / COM+ Application Errors",
              "c": "medium",
              "f": "beginner",
              "v": "DCOM errors affect distributed applications, WMI remote management, and MMC snap-ins. Persistent errors indicate permission issues or component registration corruption.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:System` (EventCode 10016, 10028, 10010)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:System\" Source=\"DCOM\" EventCode IN (10016, 10028, 10010)\n| stats count by host, EventCode, param1, param2\n| where count > 10\n| sort -count",
              "m": "EventCode 10016=permission error (most common — often benign for built-in COM objects), 10028=DCOM connection timed out, 10010=server did not register within timeout. Filter known benign 10016 errors (Windows built-in CLSIDs). Alert on 10028/10010 as these indicate application-impacting failures. Persistent 10010 errors for specific CLSIDs indicate broken COM registrations.",
              "z": "Table (DCOM errors by CLSID), Bar chart (error types), Timechart (error frequency).",
              "kfp": "10016 in shared RDS/ Citrix and ‘wrong app identity’ templates are a sea of text. Triage with Microsoft’s per-AppID guidance and a documented GPO exception process.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:System` (EventCode 10016, 10028, 10010).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEventCode 10016=permission error (most common — often benign for built-in COM objects), 10028=DCOM connection timed out, 10010=server did not register within timeout. Filter known benign 10016 errors (Windows built-in CLSIDs). Alert on 10028/10010 as these indicate application-impacting failures. Persistent 10010 errors for specific CLSIDs indicate broken COM registrations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:System\" Source=\"DCOM\" EventCode IN (10016, 10028, 10010)\n| stats count by host, EventCode, param1, param2\n| where count > 10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**DCOM / COM+ Application Errors** — DCOM errors affect distributed applications, WMI remote management, and MMC snap-ins. Persistent errors indicate permission issues or component registration corruption.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (EventCode 10016, 10028, 10010). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:System. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, EventCode, param1, param2** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (DCOM errors by CLSID), Bar chart (error types), Timechart (error frequency).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at COM and DCOM errors that can show up a lot on busy servers, and we help you tell one-off app permission noise from a true outage pattern.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.60",
              "n": "Code Integrity / Driver Signing Violations",
              "c": "critical",
              "f": "intermediate",
              "v": "Unsigned or tampered drivers loading into the kernel are a rootkit indicator. Code Integrity violations detect bypass attempts and driver-level threats.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-CodeIntegrity/Operational` (EventCode 3001, 3002, 3003, 3004, 3033)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-CodeIntegrity/Operational\"\n  EventCode IN (3001, 3002, 3003, 3004, 3033)\n| eval issue=case(EventCode=3001,\"Unsigned driver blocked\",EventCode=3002,\"Unable to verify\",EventCode=3003,\"Unsigned policy\",EventCode=3004,\"File hash not found\",EventCode=3033,\"Unsigned image loaded\")\n| table _time, host, issue, FileNameBuffer, ProcessNameBuffer\n| sort -_time",
              "m": "Code Integrity events log automatically on systems with Secure Boot, HVCI, or WDAC. EventCode 3033=unsigned image loaded (audit mode), 3001=unsigned driver blocked (enforcement). Alert on all blocked events in enforcement mode. In audit mode, use data to build a driver whitelist before enabling enforcement. Cross-reference drivers with known-good hashes from Microsoft catalog.",
              "z": "Table (integrity violations), Timeline, Bar chart (top unsigned files), Single value (blocked loads).",
              "kfp": "Vendors, lab hosts, and Windows Insider builds mix signed and unsigned in ways prod does not. Keep production in ‘production’ in your alert scoping first.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-CodeIntegrity/Operational` (EventCode 3001, 3002, 3003, 3004, 3033).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCode Integrity events log automatically on systems with Secure Boot, HVCI, or WDAC. EventCode 3033=unsigned image loaded (audit mode), 3001=unsigned driver blocked (enforcement). Alert on all blocked events in enforcement mode. In audit mode, use data to build a driver whitelist before enabling enforcement. Cross-reference drivers with known-good hashes from Microsoft catalog.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-CodeIntegrity/Operational\"\n  EventCode IN (3001, 3002, 3003, 3004, 3033)\n| eval issue=case(EventCode=3001,\"Unsigned driver blocked\",EventCode=3002,\"Unable to verify\",EventCode=3003,\"Unsigned policy\",EventCode=3004,\"File hash not found\",EventCode=3033,\"Unsigned image loaded\")\n| table _time, host, issue, FileNameBuffer, ProcessNameBuffer\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Code Integrity / Driver Signing Violations** — Unsigned or tampered drivers loading into the kernel are a rootkit indicator. Code Integrity violations detect bypass attempts and driver-level threats.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-CodeIntegrity/Operational` (EventCode 3001, 3002, 3003, 3004, 3033). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Code Integrity / Driver Signing Violations**): table _time, host, issue, FileNameBuffer, ProcessNameBuffer\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (integrity violations), Timeline, Bar chart (top unsigned files), Single value (blocked loads).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the kernel is refusing a driver or code load because of signing rules, which protects the server from a whole class of rootkits.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.61",
              "n": "Data Deduplication Health",
              "c": "medium",
              "f": "beginner",
              "v": "Windows Data Deduplication saves significant storage on file servers. Job failures or savings degradation indicate volume corruption or configuration issues.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Deduplication/Operational` (EventCode 6153, 6155, 12800, 12802)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-Deduplication*\"\n  EventCode IN (6153, 6155, 12800, 12802)\n| eval status=case(EventCode=6153,\"Optimization completed\",EventCode=6155,\"Optimization failed\",EventCode=12800,\"Scrubbing completed\",EventCode=12802,\"Corruption detected\")\n| table _time, host, status, VolumeName, SavingsRate, CorruptionCount\n| sort -_time",
              "m": "Enable Deduplication Operational log on file servers with dedup enabled. EventCode 6155=optimization job failure, 12802=data corruption detected. Monitor savings rate trending — declining rates suggest changing data patterns or dedup overhead. Alert on any corruption detection (12802) immediately. Track optimization duration — increasing times indicate volume growth outpacing dedup capacity.",
              "z": "Line chart (savings rate over time), Table (job results), Single value (current savings %), Alert on corruption.",
              "kfp": "Optimization after large data moves or a full volume can be slow and look like failure until completion. Distinguish ‘still running’ from ‘gave up’ with the vendor event text.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Deduplication/Operational` (EventCode 6153, 6155, 12800, 12802).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Deduplication Operational log on file servers with dedup enabled. EventCode 6155=optimization job failure, 12802=data corruption detected. Monitor savings rate trending — declining rates suggest changing data patterns or dedup overhead. Alert on any corruption detection (12802) immediately. Track optimization duration — increasing times indicate volume growth outpacing dedup capacity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-Deduplication*\"\n  EventCode IN (6153, 6155, 12800, 12802)\n| eval status=case(EventCode=6153,\"Optimization completed\",EventCode=6155,\"Optimization failed\",EventCode=12800,\"Scrubbing completed\",EventCode=12802,\"Corruption detected\")\n| table _time, host, status, VolumeName, SavingsRate, CorruptionCount\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Data Deduplication Health** — Windows Data Deduplication saves significant storage on file servers. Job failures or savings degradation indicate volume corruption or configuration issues.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Deduplication/Operational` (EventCode 6153, 6155, 12800, 12802). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Data Deduplication Health**): table _time, host, status, VolumeName, SavingsRate, CorruptionCount\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (savings rate over time), Table (job results), Single value (current savings %), Alert on corruption.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the space-savings process on a volume is in trouble, which can cost money on disk and time on rehydrate jobs before restore.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.62",
              "n": "TCP Connection State Monitoring (Windows)",
              "c": "medium",
              "f": "intermediate",
              "v": "Excessive TIME_WAIT, CLOSE_WAIT, or ESTABLISHED connections indicate connection leaks, exhausted ephemeral ports, or application hanging. Causes service unavailability.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:TCPv4` (counters: Connections Established, Connection Failures)",
              "q": "index=perfmon sourcetype=\"Perfmon:TCPv4\" counter IN (\"Connections Established\",\"Connection Failures\",\"Connections Reset\")\n| timechart span=5m avg(Value) as value by counter, host\n| where value > 10000",
              "m": "Configure Perfmon inputs for TCPv4 object: `Connections Established`, `Connection Failures`, `Connections Reset`, `Segments Retransmitted/sec` (interval=60). Also deploy a scripted input running `netstat -an | find /c \"TIME_WAIT\"` for state-level counts. Alert when established connections exceed application baseline by 2x or TIME_WAIT exceeds 5000 (ephemeral port exhaustion risk). Default ephemeral port range: 49152-65535 (16K ports).",
              "z": "Line chart (connection states over time), Gauge (established connections), Single value (TIME_WAIT count).",
              "kfp": "Load tests, health checks, and security scans can push connection counts. Always compare the same work day over day for the app role hosting the work.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:TCPv4` (counters: Connections Established, Connection Failures).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon inputs for TCPv4 object: `Connections Established`, `Connection Failures`, `Connections Reset`, `Segments Retransmitted/sec` (interval=60). Also deploy a scripted input running `netstat -an | find /c \"TIME_WAIT\"` for state-level counts. Alert when established connections exceed application baseline by 2x or TIME_WAIT exceeds 5000 (ephemeral port exhaustion risk). Default ephemeral port range: 49152-65535 (16K ports).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:TCPv4\" counter IN (\"Connections Established\",\"Connection Failures\",\"Connections Reset\")\n| timechart span=5m avg(Value) as value by counter, host\n| where value > 10000\n```\n\nUnderstanding this SPL\n\n**TCP Connection State Monitoring (Windows)** — Excessive TIME_WAIT, CLOSE_WAIT, or ESTABLISHED connections indicate connection leaks, exhausted ephemeral ports, or application hanging. Causes service unavailability.\n\nDocumented **Data sources**: `sourcetype=Perfmon:TCPv4` (counters: Connections Established, Connection Failures). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:TCPv4. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:TCPv4\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by counter, host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where value > 10000` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**TCP Connection State Monitoring (Windows)** — Excessive TIME_WAIT, CLOSE_WAIT, or ESTABLISHED connections indicate connection leaks, exhausted ephemeral ports, or application hanging. Causes service unavailability.\n\nDocumented **Data sources**: `sourcetype=Perfmon:TCPv4` (counters: Connections Established, Connection Failures). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (connection states over time), Gauge (established connections), Single value (TIME_WAIT count).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at the shape of how many network connections a server is holding, which can show hidden saturation before users say the word 'slow'.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.63",
              "n": "Windows Installer Failures",
              "c": "medium",
              "f": "intermediate",
              "v": "MSI installation failures affect patching, software deployment, and SCCM/Intune compliance. Repeated failures indicate corrupted Windows Installer service or disk issues.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Application` (Source=MsiInstaller, EventCode 11708, 11724)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Application\" Source=\"MsiInstaller\" EventCode IN (11708, 11724)\n| table _time, host, EventCode, ProductName, ProductVersion, Message\n| stats count by host, ProductName\n| where count > 2\n| sort -count",
              "m": "EventCode 11708=installation failed, 11724=removal completed (track uninstalls). Track installation failures per host — repeated failures for the same product indicate systematic issues. Correlate with SCCM/Intune deployment status. Common causes: pending reboots, insufficient disk space, corrupted Windows Installer cache. Alert when critical patches fail to install across >5% of fleet.",
              "z": "Table (failed installs), Bar chart (top failing products), Timechart (failure rate).",
              "kfp": "Patch Tuesday, bulk .NET, and C++ redistributable pushes fail on a *subset* of hosts with a bad prereq. Track `ProductName` and version, not a raw fail count, as your SLO.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Application` (Source=MsiInstaller, EventCode 11708, 11724).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEventCode 11708=installation failed, 11724=removal completed (track uninstalls). Track installation failures per host — repeated failures for the same product indicate systematic issues. Correlate with SCCM/Intune deployment status. Common causes: pending reboots, insufficient disk space, corrupted Windows Installer cache. Alert when critical patches fail to install across >5% of fleet.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Application\" Source=\"MsiInstaller\" EventCode IN (11708, 11724)\n| table _time, host, EventCode, ProductName, ProductVersion, Message\n| stats count by host, ProductName\n| where count > 2\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Windows Installer Failures** — MSI installation failures affect patching, software deployment, and SCCM/Intune compliance. Repeated failures indicate corrupted Windows Installer service or disk issues.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Application` (Source=MsiInstaller, EventCode 11708, 11724). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Application. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Application\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Windows Installer Failures**): table _time, host, EventCode, ProductName, ProductVersion, Message\n• `stats` rolls up events into metrics; results are split **by host, ProductName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 2` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Windows Installer Failures** — MSI installation failures affect patching, software deployment, and SCCM/Intune compliance. Repeated failures indicate corrupted Windows Installer service or disk issues.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Application` (Source=MsiInstaller, EventCode 11708, 11724). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Services` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed installs), Bar chart (top failing products), Timechart (failure rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a Windows installer is failing a lot, which can block patches, app deploys, and your change windows if you do not see it early.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.64",
              "n": "Event Log Channel Size / Overflow",
              "c": "high",
              "f": "advanced",
              "v": "When event logs reach maximum size with overwrite-oldest policy, critical security events are lost. With do-not-overwrite policy, the log stops recording entirely.",
              "t": "`Splunk_TA_windows`, custom scripted input",
              "d": "`sourcetype=WinEventLog:System` (EventCode 6005) + custom scripted input (`wevtutil gl Security`)",
              "q": "index=os sourcetype=windows:eventlog:size\n| where used_pct > 90\n| table _time, host, log_name, current_size_MB, max_size_MB, used_pct",
              "m": "Deploy a scripted input that runs `wevtutil gl Security` (and other critical channels) every 15 minutes, parsing current size vs. max size. Default Security log is 20MB — often insufficient on DCs and servers with detailed auditing. Alert when any critical log exceeds 90% capacity. Alternatively, monitor EventCode 1101 (audit log full) in the System log. Recommended: increase Security log to 1GB+ on DCs.",
              "z": "Gauge (log fill percentage), Table (logs near capacity), Bar chart (log sizes by channel).",
              "kfp": "High churn apps on small forward buffers look ‘full’ by percent but are by design. Cross-check with max size policy and your retention, not a once-a-day 90% only.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, custom scripted input.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:System` (EventCode 6005) + custom scripted input (`wevtutil gl Security`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy a scripted input that runs `wevtutil gl Security` (and other critical channels) every 15 minutes, parsing current size vs. max size. Default Security log is 20MB — often insufficient on DCs and servers with detailed auditing. Alert when any critical log exceeds 90% capacity. Alternatively, monitor EventCode 1101 (audit log full) in the System log. Recommended: increase Security log to 1GB+ on DCs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=windows:eventlog:size\n| where used_pct > 90\n| table _time, host, log_name, current_size_MB, max_size_MB, used_pct\n```\n\nUnderstanding this SPL\n\n**Event Log Channel Size / Overflow** — When event logs reach maximum size with overwrite-oldest policy, critical security events are lost. With do-not-overwrite policy, the log stops recording entirely.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (EventCode 6005) + custom scripted input (`wevtutil gl Security`). **App/TA** (typical add-on context): `Splunk_TA_windows`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: windows:eventlog:size. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=windows:eventlog:size. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where used_pct > 90` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Event Log Channel Size / Overflow**): table _time, host, log_name, current_size_MB, max_size_MB, used_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (log fill percentage), Table (logs near capacity), Bar chart (log sizes by channel).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a log file on disk is close to the max size you allowed, so it does not stop writing the events you need for audits and support.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.65",
              "n": "Pass-the-Hash / NTLM Relay Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Pass-the-hash attacks use stolen NTLM hashes to authenticate without knowing the password. Detecting NTLM logons from unusual sources catches this common lateral movement technique.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4624, LogonType 3, AuthenticationPackageName=NTLM)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 LogonType=3\n  AuthenticationPackageName=\"NTLM\" TargetUserName!=\"ANONYMOUS LOGON\"\n| stats count dc(host) as target_hosts values(host) as targets by TargetUserName, IpAddress\n| where target_hosts > 3\n| sort -target_hosts",
              "m": "NTLM type 3 (network) logons from non-standard sources indicate pass-the-hash. In environments enforcing Kerberos, any NTLM logon to a server is suspicious. Focus on admin accounts using NTLM to access multiple hosts. EventCode 4776 on the DC shows the NTLM validation. Remediation: enable \"Restrict NTLM\" GPO settings, enforce Kerberos, deploy Credential Guard. MITRE ATT&CK T1550.002.",
              "z": "Table (NTLM logons from suspicious sources), Network graph (source→targets), Timeline, Single value (NTLM vs Kerberos ratio).",
              "kfp": "File servers, old apps, and legacy NTLM for IoT/print will cluster here. A Tier-0 PTH program uses stack + Rare Destination + PAM, not a raw 4624 only.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4624, LogonType 3, AuthenticationPackageName=NTLM).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNTLM type 3 (network) logons from non-standard sources indicate pass-the-hash. In environments enforcing Kerberos, any NTLM logon to a server is suspicious. Focus on admin accounts using NTLM to access multiple hosts. EventCode 4776 on the DC shows the NTLM validation. Remediation: enable \"Restrict NTLM\" GPO settings, enforce Kerberos, deploy Credential Guard. MITRE ATT&CK T1550.002.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 LogonType=3\n  AuthenticationPackageName=\"NTLM\" TargetUserName!=\"ANONYMOUS LOGON\"\n| stats count dc(host) as target_hosts values(host) as targets by TargetUserName, IpAddress\n| where target_hosts > 3\n| sort -target_hosts\n```\n\nUnderstanding this SPL\n\n**Pass-the-Hash / NTLM Relay Detection** — Pass-the-hash attacks use stolen NTLM hashes to authenticate without knowing the password. Detecting NTLM logons from unusual sources catches this common lateral movement technique.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4624, LogonType 3, AuthenticationPackageName=NTLM). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by TargetUserName, IpAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where target_hosts > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Pass-the-Hash / NTLM Relay Detection** — Pass-the-hash attacks use stolen NTLM hashes to authenticate without knowing the password. Detecting NTLM logons from unusual sources catches this common lateral movement technique.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4624, LogonType 3, AuthenticationPackageName=NTLM). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (NTLM logons from suspicious sources), Network graph (source→targets), Timeline, Single value (NTLM vs Kerberos ratio).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for a pattern of sign-ins that is often part of a stolen-hash style attack, and we help you know when to hand this to a specialist hunt, not a level-one only queue.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.66",
              "n": "Sysmon File Creation in Suspicious Paths",
              "c": "high",
              "f": "intermediate",
              "v": "Files created in temp directories, startup folders, and system paths by unexpected processes indicate malware dropping payloads or establishing persistence.",
              "t": "`Splunk_TA_windows`, Sysmon required",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 11)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=11\n| where match(TargetFilename, \"(?i)(\\\\\\\\Temp\\\\\\\\.*\\\\.exe|\\\\\\\\Startup\\\\\\\\|\\\\\\\\Tasks\\\\\\\\|\\\\\\\\ProgramData\\\\\\\\.*\\\\.exe|\\\\\\\\AppData\\\\\\\\.*\\\\.bat|\\\\\\\\AppData\\\\\\\\.*\\\\.ps1)\")\n| table _time, host, Image, TargetFilename, User\n| sort -_time",
              "m": "Deploy Sysmon with FileCreate (EventCode 11) monitoring, filtered to suspicious target paths: Temp, Startup, ProgramData, AppData. Executables (.exe, .dll, .bat, .ps1, .vbs) created in these paths by non-installer processes are suspicious. Exclude known deployment tools (SCCM client, Intune agent). Cross-reference with process creation events to build full attack chain.",
              "z": "Table (suspicious file creations), Bar chart (top dropping processes), Timeline.",
              "kfp": "Legitimate installers and patches writing to Temp or ProgramData; SCCM, Intune, and other deployment tools; build or IT scripts under AppData; server roles that stage updates in profile or system paths. Tune with path, hash, or parent-process allowlists.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, Sysmon required.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 11).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Sysmon with FileCreate (EventCode 11) monitoring, filtered to suspicious target paths: Temp, Startup, ProgramData, AppData. Executables (.exe, .dll, .bat, .ps1, .vbs) created in these paths by non-installer processes are suspicious. Exclude known deployment tools (SCCM client, Intune agent). Cross-reference with process creation events to build full attack chain.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=11\n| where match(TargetFilename, \"(?i)(\\\\\\\\Temp\\\\\\\\.*\\\\.exe|\\\\\\\\Startup\\\\\\\\|\\\\\\\\Tasks\\\\\\\\|\\\\\\\\ProgramData\\\\\\\\.*\\\\.exe|\\\\\\\\AppData\\\\\\\\.*\\\\.bat|\\\\\\\\AppData\\\\\\\\.*\\\\.ps1)\")\n| table _time, host, Image, TargetFilename, User\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Sysmon File Creation in Suspicious Paths** — Files created in temp directories, startup folders, and system paths by unexpected processes indicate malware dropping payloads or establishing persistence.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 11). **App/TA** (typical add-on context): `Splunk_TA_windows`, Sysmon required. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-Sysmon/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(TargetFilename, \"(?i)(\\\\\\\\Temp\\\\\\\\.*\\\\.exe|\\\\\\\\Startup\\\\\\\\|\\\\\\\\Tasks\\\\\\\\|\\\\\\\\ProgramData\\\\\\\\.*\\\\.exe|\\\\\\\\AppDat…` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Sysmon File Creation in Suspicious Paths**): table _time, host, Image, TargetFilename, User\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious file creations), Bar chart (top dropping processes), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for new files in temp, startup, scheduled-task, and other sensitive folders on your Windows machines so we can notice suspicious drops that often go with malware or unwanted persistence before things get worse.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.67",
              "n": "Golden Ticket Detection (TGT Anomalies)",
              "c": "critical",
              "f": "intermediate",
              "v": "Golden tickets are forged Kerberos TGTs that grant domain-wide access. Detecting anomalous TGT properties catches this catastrophic compromise.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4768, 4769)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4768 TicketEncryptionType=0x17\n| stats count by TargetUserName, IpAddress",
              "m": "Golden tickets typically use RC4 encryption (0x17) with abnormally long lifetimes (default Kerberos max is 10 hours). EventCode 4768=TGT request, 4769=TGS request. Detect TGS requests referencing TGTs older than 10 hours, or TGT requests with RC4 in environments that enforce AES. Also monitor for EventCode 4769 with services accessed that the user normally doesn't touch. Requires KRBTGT password rotation as remediation.",
              "z": "Table (anomalous ticket requests), Timeline, Single value (RC4 TGT count), Alert.",
              "kfp": "Legacy applications or domains that still negotiate RC4; lab or isolated forests with relaxed crypto policy; mixed-mode or migration windows. Baseline TGT volume per account and corroborate with ticket lifetime and source DC before calling incident.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4768, 4769).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nGolden tickets typically use RC4 encryption (0x17) with abnormally long lifetimes (default Kerberos max is 10 hours). EventCode 4768=TGT request, 4769=TGS request. Detect TGS requests referencing TGTs older than 10 hours, or TGT requests with RC4 in environments that enforce AES. Also monitor for EventCode 4769 with services accessed that the user normally doesn't touch. Requires KRBTGT password rotation as remediation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4768 TicketEncryptionType=0x17\n| stats count by TargetUserName, IpAddress\n```\n\nUnderstanding this SPL\n\n**Golden Ticket Detection (TGT Anomalies)** — Golden tickets are forged Kerberos TGTs that grant domain-wide access. Detecting anomalous TGT properties catches this catastrophic compromise.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4768, 4769). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by TargetUserName, IpAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalous ticket requests), Timeline, Single value (RC4 TGT count), Alert.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for strange Kerberos login-ticket patterns on Windows that can mean someone forged a 'golden ticket' and could act as anyone in the directory, so the team can respond while damage is still limited.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.68",
              "n": "NTFS Corruption and Self-Healing",
              "c": "critical",
              "f": "intermediate",
              "v": "NTFS corruption can cause data loss, application failures, and boot issues. Self-healing events indicate disk degradation that will worsen.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:System` (Source=Ntfs, EventCode 55, 98, 137, 140)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:System\" Source=\"Ntfs\" EventCode IN (55, 98, 137, 140)\n| eval issue=case(EventCode=55,\"NTFS corruption detected\",EventCode=98,\"Volume dirty flag set\",EventCode=137,\"Self-healing started\",EventCode=140,\"Self-healing completed\")\n| table _time, host, issue, DriveName, CorruptionType\n| sort -_time",
              "m": "NTFS events log automatically. EventCode 55=structure corruption on volume (critical), 98=volume marked dirty (chkdsk needed at boot), 137/140=self-healing activity. Any EventCode 55 requires immediate attention — indicates metadata corruption that may spread. Correlate with WHEA (hardware) and SMART events to determine if underlying disk is failing. Schedule chkdsk offline and plan disk replacement.",
              "z": "Table (corruption events), Timeline, Single value (affected volumes — target: 0).",
              "kfp": "Planned chkdsk or storage maintenance; known firmware/driver updates that trigger one-time self-heal; virtualization snapshots or backup agents touching the volume. Correlate with disk hardware, backup windows, and change records.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:System` (Source=Ntfs, EventCode 55, 98, 137, 140).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNTFS events log automatically. EventCode 55=structure corruption on volume (critical), 98=volume marked dirty (chkdsk needed at boot), 137/140=self-healing activity. Any EventCode 55 requires immediate attention — indicates metadata corruption that may spread. Correlate with WHEA (hardware) and SMART events to determine if underlying disk is failing. Schedule chkdsk offline and plan disk replacement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:System\" Source=\"Ntfs\" EventCode IN (55, 98, 137, 140)\n| eval issue=case(EventCode=55,\"NTFS corruption detected\",EventCode=98,\"Volume dirty flag set\",EventCode=137,\"Self-healing started\",EventCode=140,\"Self-healing completed\")\n| table _time, host, issue, DriveName, CorruptionType\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**NTFS Corruption and Self-Healing** — NTFS corruption can cause data loss, application failures, and boot issues. Self-healing events indicate disk degradation that will worsen.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (Source=Ntfs, EventCode 55, 98, 137, 140). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:System. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NTFS Corruption and Self-Healing**): table _time, host, issue, DriveName, CorruptionType\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (corruption events), Timeline, Single value (affected volumes — target: 0).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the Windows system log for file-system health messages so we can notice disk and corruption problems on servers before they turn into data loss or long outages.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.69",
              "n": "Page File Usage & Exhaustion",
              "c": "high",
              "f": "intermediate",
              "v": "Page file exhaustion prevents new process creation and causes \"out of virtual memory\" errors. System-managed page files can grow to fill the disk.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:Paging_File` (counter: % Usage, % Usage Peak)",
              "q": "index=perfmon sourcetype=\"Perfmon:Paging_File\" counter=\"% Usage\" instance=\"_Total\"\n| timechart span=15m avg(Value) as pf_pct by host\n| where pf_pct > 70",
              "m": "Configure Perfmon inputs for Paging File object: `% Usage`, `% Usage Peak` (interval=300). Alert when usage exceeds 70% sustained (indicates memory pressure requiring page file). Track peak usage — if it regularly exceeds 80%, the system needs more RAM or has a memory leak. Also monitor EventCode 2004 in System log (page file too small) as a reactive indicator.",
              "z": "Line chart (page file usage over time), Gauge (current usage), Table (hosts with high usage).",
              "kfp": "Short spikes during batch jobs, backups, or memory-heavy reporting; large SQL or analytics nodes with sustained paging that is still within capacity planning. Use host role baselines; pair with `Perfmon:Memory` Available MBytes and process memory if needed.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:Paging_File` (counter: % Usage, % Usage Peak).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon inputs for Paging File object: `% Usage`, `% Usage Peak` (interval=300). Alert when usage exceeds 70% sustained (indicates memory pressure requiring page file). Track peak usage — if it regularly exceeds 80%, the system needs more RAM or has a memory leak. Also monitor EventCode 2004 in System log (page file too small) as a reactive indicator.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:Paging_File\" counter=\"% Usage\" instance=\"_Total\"\n| timechart span=15m avg(Value) as pf_pct by host\n| where pf_pct > 70\n```\n\nUnderstanding this SPL\n\n**Page File Usage & Exhaustion** — Page file exhaustion prevents new process creation and causes \"out of virtual memory\" errors. System-managed page files can grow to fill the disk.\n\nDocumented **Data sources**: `sourcetype=Perfmon:Paging_File` (counter: % Usage, % Usage Peak). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:Paging_File. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:Paging_File\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where pf_pct > 70` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Page File Usage & Exhaustion** — Page file exhaustion prevents new process creation and causes \"out of virtual memory\" errors. System-managed page files can grow to fill the disk.\n\nDocumented **Data sources**: `sourcetype=Perfmon:Paging_File` (counter: % Usage, % Usage Peak). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where mem_pct > 95 OR swap_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (page file usage over time), Gauge (current usage), Table (hosts with high usage).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how full the page file is on Windows servers so we can spot memory pressure before applications start failing or the disk fills from an oversized page file.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.70",
              "n": "Context Switch Rate Anomalies (Windows)",
              "c": "medium",
              "f": "advanced",
              "v": "Abnormally high context switch rates indicate excessive threading, poor application design, or kernel-mode driver issues. Degrades overall system performance.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:System` (counter: Context Switches/sec)",
              "q": "index=perfmon sourcetype=\"Perfmon:System\" counter=\"Context Switches/sec\"\n| timechart span=5m avg(Value) as ctx_switches by host\n| streamstats window=48 avg(ctx_switches) as baseline by host\n| eval deviation = (ctx_switches - baseline) / baseline * 100\n| where deviation > 100",
              "m": "Add `Context Switches/sec` to Perfmon System inputs (interval=60). Normal range varies by workload — establish per-host baselines. >15,000/sec per CPU core is generally concerning. Alert when rate exceeds 2x the rolling baseline. Correlate with `Processor Queue Length` and `% Interrupt Time` to distinguish user-mode threading issues from driver/hardware interrupt storms.",
              "z": "Line chart (context switch rate with baseline), Heatmap (hosts × rate), Single value (anomalous hosts).",
              "kfp": "Virtualization, terminal servers, and heavily threaded apps (web stacks, in-memory DBs) run high context switch rates at baseline; firmware or NIC driver updates can briefly spike rates. Use per-host baselines and compare with `Processor Queue Length` and interrupt time. Not a CIM 1:1 for this use case without custom metrics.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:System` (counter: Context Switches/sec).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAdd `Context Switches/sec` to Perfmon System inputs (interval=60). Normal range varies by workload — establish per-host baselines. >15,000/sec per CPU core is generally concerning. Alert when rate exceeds 2x the rolling baseline. Correlate with `Processor Queue Length` and `% Interrupt Time` to distinguish user-mode threading issues from driver/hardware interrupt storms.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:System\" counter=\"Context Switches/sec\"\n| timechart span=5m avg(Value) as ctx_switches by host\n| streamstats window=48 avg(ctx_switches) as baseline by host\n| eval deviation = (ctx_switches - baseline) / baseline * 100\n| where deviation > 100\n```\n\nUnderstanding this SPL\n\n**Context Switch Rate Anomalies (Windows)** — Abnormally high context switch rates indicate excessive threading, poor application design, or kernel-mode driver issues. Degrades overall system performance.\n\nDocumented **Data sources**: `sourcetype=Perfmon:System` (counter: Context Switches/sec). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:System. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• `streamstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **deviation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where deviation > 100` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Context Switch Rate Anomalies (Windows)** — Abnormally high context switch rates indicate excessive threading, poor application design, or kernel-mode driver issues. Degrades overall system performance.\n\nDocumented **Data sources**: `sourcetype=Perfmon:System` (counter: Context Switches/sec). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (context switch rate with baseline), Heatmap (hosts × rate), Single value (anomalous hosts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for Windows machines that start switching between tasks far more than usual so we can dig into runaway programs or drivers before the whole system feels slow.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.71",
              "n": "Scheduled Task Creation (Persistence)",
              "c": "high",
              "f": "expert",
              "v": "Scheduled tasks are a common persistence mechanism for malware. New tasks created outside change management warrant investigation.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4698), `sourcetype=WinEventLog:Microsoft-Windows-TaskScheduler/Operational` (EventCode 106)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4698\n| rex field=TaskContent \"<Command>(?<command>[^<]+)</Command>\"\n| rex field=TaskContent \"<Arguments>(?<arguments>[^<]+)</Arguments>\"\n| table _time, host, SubjectUserName, TaskName, command, arguments\n| where NOT match(SubjectUserName, \"(?i)(SYSTEM|sccm|intune)\")\n| sort -_time",
              "m": "Enable \"Audit Other Object Access Events\" for EventCode 4698 (task created). The TaskContent XML field contains the full task definition including command, arguments, and triggers. Alert on tasks created by non-SYSTEM/non-admin accounts, tasks with commands in temp/user directories, or tasks executing encoded PowerShell. Cross-reference with Sysmon process creation for execution context.",
              "z": "Table (new tasks with commands), Timeline, Bar chart (tasks created by user).",
              "kfp": "SCCM, Intune, Packer, monitoring agents, and patch tools creating or updating tasks; GPO-based tasks; VDI gold-image builds. Extend exclusions with service accounts, cert-bound parents, and change-ticket correlation.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4698), `sourcetype=WinEventLog:Microsoft-Windows-TaskScheduler/Operational` (EventCode 106).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable \"Audit Other Object Access Events\" for EventCode 4698 (task created). The TaskContent XML field contains the full task definition including command, arguments, and triggers. Alert on tasks created by non-SYSTEM/non-admin accounts, tasks with commands in temp/user directories, or tasks executing encoded PowerShell. Cross-reference with Sysmon process creation for execution context.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4698\n| rex field=TaskContent \"<Command>(?<command>[^<]+)</Command>\"\n| rex field=TaskContent \"<Arguments>(?<arguments>[^<]+)</Arguments>\"\n| table _time, host, SubjectUserName, TaskName, command, arguments\n| where NOT match(SubjectUserName, \"(?i)(SYSTEM|sccm|intune)\")\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Scheduled Task Creation (Persistence)** — Scheduled tasks are a common persistence mechanism for malware. New tasks created outside change management warrant investigation.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4698), `sourcetype=WinEventLog:Microsoft-Windows-TaskScheduler/Operational` (EventCode 106). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **Scheduled Task Creation (Persistence)**): table _time, host, SubjectUserName, TaskName, command, arguments\n• Filters the current rows with `where NOT match(SubjectUserName, \"(?i)(SYSTEM|sccm|intune)\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Scheduled Task Creation (Persistence)** — Scheduled tasks are a common persistence mechanism for malware. New tasks created outside change management warrant investigation.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4698), `sourcetype=WinEventLog:Microsoft-Windows-TaskScheduler/Operational` (EventCode 106). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (new tasks with commands), Timeline, Bar chart (tasks created by user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for new Windows scheduled tasks that are not the usual software rollout jobs so we can catch sneaky persistence before it quietly runs in the background.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.72",
              "n": "WinRM / Remote PowerShell Connections",
              "c": "high",
              "f": "intermediate",
              "v": "WinRM enables remote command execution via PowerShell Remoting. Monitoring inbound WinRM sessions detects lateral movement and unauthorized remote management.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-WinRM/Operational` (EventCode 6, 91, 161)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-WinRM/Operational\"\n  EventCode IN (6, 91, 161)\n| eval action=case(EventCode=6,\"Session created\",EventCode=91,\"Session created (user)\",EventCode=161,\"Auth failed\")\n| stats count by host, action, User, IpAddress\n| sort -count",
              "m": "Enable WinRM Operational log on all servers. EventCode 6/91=new WinRM session established, 161=authentication failure. Baseline expected WinRM sources (jump servers, SCCM, monitoring tools). Alert on WinRM sessions from non-authorized IPs or workstations. In restricted environments, consider disabling WinRM on servers that don't require it. Correlate with PowerShell Script Block Logging for full command visibility.",
              "z": "Table (WinRM sessions by source), Network graph (source→dest), Timeline, Bar chart (sessions per host).",
              "kfp": "Jump boxes, Packer, Ansible, SCCM, approved monitoring, and break-glass admin paths using WinRM. Baseline by source subnet and automation accounts; require correlation for interactive sessions or new client IPs.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-WinRM/Operational` (EventCode 6, 91, 161).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable WinRM Operational log on all servers. EventCode 6/91=new WinRM session established, 161=authentication failure. Baseline expected WinRM sources (jump servers, SCCM, monitoring tools). Alert on WinRM sessions from non-authorized IPs or workstations. In restricted environments, consider disabling WinRM on servers that don't require it. Correlate with PowerShell Script Block Logging for full command visibility.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-WinRM/Operational\"\n  EventCode IN (6, 91, 161)\n| eval action=case(EventCode=6,\"Session created\",EventCode=91,\"Session created (user)\",EventCode=161,\"Auth failed\")\n| stats count by host, action, User, IpAddress\n| sort -count\n```\n\nUnderstanding this SPL\n\n**WinRM / Remote PowerShell Connections** — WinRM enables remote command execution via PowerShell Remoting. Monitoring inbound WinRM sessions detects lateral movement and unauthorized remote management.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-WinRM/Operational` (EventCode 6, 91, 161). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, action, User, IpAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (WinRM sessions by source), Network graph (source→dest), Timeline, Bar chart (sessions per host).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for remote management sessions to your Windows machines so we can see when someone might be running commands from an unexpected place before they move deeper into the network.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.73",
              "n": "LDAP Query Performance (DC Health)",
              "c": "high",
              "f": "intermediate",
              "v": "Slow LDAP queries on domain controllers degrade authentication, group policy processing, and application lookups across the entire domain.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:NTDS` (counters: LDAP Searches/sec, LDAP Successful Binds/sec, LDAP Search Time)",
              "q": "index=perfmon sourcetype=\"Perfmon:NTDS\" counter IN (\"LDAP Searches/sec\",\"LDAP Successful Binds/sec\",\"LDAP Client Sessions\")\n| timechart span=5m avg(Value) as value by counter, host",
              "m": "Configure Perfmon inputs on domain controllers for NTDS object: `LDAP Searches/sec`, `LDAP Successful Binds/sec`, `LDAP Client Sessions`, `LDAP Active Threads` (interval=60). Also enable \"Expensive/Inefficient LDAP searches\" logging via registry (15 Field Engineering diagnostics). Alert when LDAP search rate drops suddenly (DC issues) or when client sessions exceed baseline by 2x (possible LDAP enumeration attack).",
              "z": "Line chart (LDAP operations/sec), Dual-axis (searches + bind rate), Table (DCs by load), Gauge (active sessions).",
              "kfp": "Replication storms, large group changes, inventory or scanner tools, and backup touching AD; expected peak hours. Baseline per DC; correlate with `LDAP Searches/sec` drops (service issue) or client-session spikes (possible enumeration) on the raw Perfmon search.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:NTDS` (counters: LDAP Searches/sec, LDAP Successful Binds/sec, LDAP Search Time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon inputs on domain controllers for NTDS object: `LDAP Searches/sec`, `LDAP Successful Binds/sec`, `LDAP Client Sessions`, `LDAP Active Threads` (interval=60). Also enable \"Expensive/Inefficient LDAP searches\" logging via registry (15 Field Engineering diagnostics). Alert when LDAP search rate drops suddenly (DC issues) or when client sessions exceed baseline by 2x (possible LDAP enumeration attack).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:NTDS\" counter IN (\"LDAP Searches/sec\",\"LDAP Successful Binds/sec\",\"LDAP Client Sessions\")\n| timechart span=5m avg(Value) as value by counter, host\n```\n\nUnderstanding this SPL\n\n**LDAP Query Performance (DC Health)** — Slow LDAP queries on domain controllers degrade authentication, group policy processing, and application lookups across the entire domain.\n\nDocumented **Data sources**: `sourcetype=Perfmon:NTDS` (counters: LDAP Searches/sec, LDAP Successful Binds/sec, LDAP Search Time). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:NTDS. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:NTDS\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by counter, host** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**LDAP Query Performance (DC Health)** — Slow LDAP queries on domain controllers degrade authentication, group policy processing, and application lookups across the entire domain.\n\nDocumented **Data sources**: `sourcetype=Perfmon:NTDS` (counters: LDAP Searches/sec, LDAP Successful Binds/sec, LDAP Search Time). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (LDAP operations/sec), Dual-axis (searches + bind rate), Table (DCs by load), Gauge (active sessions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the load on your domain controllers’ directory service so we can tell when sign-ins and group policy may slow down for the whole company, not just one app.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.76",
              "n": "AdminSDHolder Modification",
              "c": "critical",
              "f": "beginner",
              "v": "The AdminSDHolder container controls ACLs on all privileged AD groups. Modifying it grants persistent hidden admin access that survives permission resets.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 5136)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5136\n  ObjectDN=\"*AdminSDHolder*\"\n| table _time, host, SubjectUserName, AttributeLDAPDisplayName, AttributeValue, OperationType\n| sort -_time",
              "m": "Enable \"Audit Directory Service Changes\" on domain controllers. EventCode 5136=directory object modified. Filter for ObjectDN containing \"AdminSDHolder\". Any modification to this container is highly suspicious — it should only change via approved security hardening. The SDProp process propagates AdminSDHolder ACLs to all protected groups every 60 minutes. Alert immediately with critical priority.",
              "z": "Table (modifications), Single value (count — target: 0), Alert with SOC escalation.",
              "kfp": "Rare but approved hardening that touches AdminSDHolder; AD forest recovery; some third-party IDM or AD management tools. Every hit should have a change record and named approver; default expectation is **zero** unplanned events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 5136).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable \"Audit Directory Service Changes\" on domain controllers. EventCode 5136=directory object modified. Filter for ObjectDN containing \"AdminSDHolder\". Any modification to this container is highly suspicious — it should only change via approved security hardening. The SDProp process propagates AdminSDHolder ACLs to all protected groups every 60 minutes. Alert immediately with critical priority.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5136\n  ObjectDN=\"*AdminSDHolder*\"\n| table _time, host, SubjectUserName, AttributeLDAPDisplayName, AttributeValue, OperationType\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**AdminSDHolder Modification** — The AdminSDHolder container controls ACLs on all privileged AD groups. Modifying it grants persistent hidden admin access that survives permission resets.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 5136). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **AdminSDHolder Modification**): table _time, host, SubjectUserName, AttributeLDAPDisplayName, AttributeValue, OperationType\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| search Authentication.user=*admin* OR Authentication.user=root\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**AdminSDHolder Modification** — The AdminSDHolder container controls ACLs on all privileged AD groups. Modifying it grants persistent hidden admin access that survives permission resets.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 5136). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (modifications), Single value (count — target: 0), Alert with SOC escalation.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We alert when a hidden but powerful part of the directory that controls who stays an admin is changed, because that is a favorite trick for staying in charge without looking like a normal admin change.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| search Authentication.user=*admin* OR Authentication.user=root",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.77",
              "n": "SPN Modification (Targeted Kerberoasting)",
              "c": "critical",
              "f": "intermediate",
              "v": "Attackers add SPNs to admin accounts to make them Kerberoastable. Monitoring SPN changes on sensitive accounts catches this setup before the actual attack.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 5136, attribute servicePrincipalName)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5136\n  AttributeLDAPDisplayName=\"servicePrincipalName\"\n| table _time, host, SubjectUserName, ObjectDN, AttributeValue, OperationType\n| where OperationType=\"%%14674\"\n| sort -_time",
              "m": "EventCode 5136 with AttributeLDAPDisplayName=servicePrincipalName tracks SPN additions (OperationType %%14674=value added) and removals. Alert on any SPN added to user accounts in privileged groups (Domain Admins, Enterprise Admins, Schema Admins). Legitimate SPN changes are rare and tied to service deployments. Cross-reference with Kerberoasting detection (UC-1.2.37).",
              "z": "Table (SPN changes), Single value (changes to admin accounts — target: 0), Timeline.",
              "kfp": "Planned SPNs for new services, cluster virtual names, and SQL/HTTP SPNs added by product deploys; SPN repair tools after duplicate-SPN events. Cross-check with change and service owner; restrict alerts to protected groups where possible.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 5136, attribute servicePrincipalName).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEventCode 5136 with AttributeLDAPDisplayName=servicePrincipalName tracks SPN additions (OperationType %%14674=value added) and removals. Alert on any SPN added to user accounts in privileged groups (Domain Admins, Enterprise Admins, Schema Admins). Legitimate SPN changes are rare and tied to service deployments. Cross-reference with Kerberoasting detection (UC-1.2.37).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5136\n  AttributeLDAPDisplayName=\"servicePrincipalName\"\n| table _time, host, SubjectUserName, ObjectDN, AttributeValue, OperationType\n| where OperationType=\"%%14674\"\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**SPN Modification (Targeted Kerberoasting)** — Attackers add SPNs to admin accounts to make them Kerberoastable. Monitoring SPN changes on sensitive accounts catches this setup before the actual attack.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 5136, attribute servicePrincipalName). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **SPN Modification (Targeted Kerberoasting)**): table _time, host, SubjectUserName, ObjectDN, AttributeValue, OperationType\n• Filters the current rows with `where OperationType=\"%%14674\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SPN Modification (Targeted Kerberoasting)** — Attackers add SPNs to admin accounts to make them Kerberoastable. Monitoring SPN changes on sensitive accounts catches this setup before the actual attack.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 5136, attribute servicePrincipalName). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (SPN changes), Single value (changes to admin accounts — target: 0), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the directory for changes to special login names on powerful accounts, because that can be how an attacker gets ready to crack passwords in the background.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.78",
              "n": "DSRM Account Usage",
              "c": "critical",
              "f": "beginner",
              "v": "The Directory Services Restore Mode (DSRM) account is a local admin on every DC with a rarely-changed password. Its use outside restores indicates compromise.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4794, 4624 with DSRM)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\"\n  (EventCode=4794 OR (EventCode=4624 TargetUserName=\"Administrator\" LogonType=10 AuthenticationPackageName=\"Negotiate\"))\n| eval alert_type=case(EventCode=4794,\"DSRM password changed\",EventCode=4624,\"Possible DSRM logon\")\n| table _time, host, alert_type, SubjectUserName, IpAddress\n| sort -_time",
              "m": "EventCode 4794=DSRM password change (should only happen during planned maintenance). DSRM logons appear as local \"Administrator\" logons on the DC. Since Windows Server 2008 R2, registry key DsrmAdminLogonBehavior allows DSRM logon while AD is running (value=2). Alert on any DSRM password change and any local admin logon to a DC. Set DsrmAdminLogonBehavior=0 (default, deny DSRM logon while AD running).",
              "z": "Table (DSRM events), Single value (count — target: 0 outside restore operations), Alert.",
              "kfp": "Planned DSRM password rotations with a change ticket; DR or AD recovery exercises; rare vendor work at the DC console. Baseline expected `IpAddress` for physical/KVM access; alert on remote or unknown sources.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4794, 4624 with DSRM).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEventCode 4794=DSRM password change (should only happen during planned maintenance). DSRM logons appear as local \"Administrator\" logons on the DC. Since Windows Server 2008 R2, registry key DsrmAdminLogonBehavior allows DSRM logon while AD is running (value=2). Alert on any DSRM password change and any local admin logon to a DC. Set DsrmAdminLogonBehavior=0 (default, deny DSRM logon while AD running).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\"\n  (EventCode=4794 OR (EventCode=4624 TargetUserName=\"Administrator\" LogonType=10 AuthenticationPackageName=\"Negotiate\"))\n| eval alert_type=case(EventCode=4794,\"DSRM password changed\",EventCode=4624,\"Possible DSRM logon\")\n| table _time, host, alert_type, SubjectUserName, IpAddress\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**DSRM Account Usage** — The Directory Services Restore Mode (DSRM) account is a local admin on every DC with a rarely-changed password. Its use outside restores indicates compromise.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4794, 4624 with DSRM). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **alert_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **DSRM Account Usage**): table _time, host, alert_type, SubjectUserName, IpAddress\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (DSRM events), Single value (count — target: 0 outside restore operations), Alert.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for use of a special break-glass administrator on domain controllers so we can tell when someone is not doing a normal directory sign-in during everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.79",
              "n": "Sysmon DNS Query Logging",
              "c": "medium",
              "f": "intermediate",
              "v": "Per-process DNS query logging reveals which applications communicate with which domains. Detects DGA, C2 callbacks, and data exfiltration at the endpoint level.",
              "t": "`Splunk_TA_windows`, Sysmon required",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 22)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=22\n| where NOT match(QueryName, \"(?i)(microsoft\\.com|windowsupdate\\.com|office\\.com|bing\\.com|msftconnecttest)\")\n| stats count dc(QueryName) as unique_domains by Image, host\n| where unique_domains > 100\n| sort -unique_domains",
              "m": "Deploy Sysmon v10+ with DNS query logging (EventCode 22). Each event records the process that made the DNS query and the resolved domain. Filter out known-good domains. Alert on processes with high unique domain counts (DGA indicator), processes that normally don't make DNS queries (LOLBin abuse), or queries to known-bad domains (threat intel lookup). Lower volume than network-level DNS logging since it's per-endpoint.",
              "z": "Table (queries by process), Bar chart (top resolving processes), Sankey diagram (process→domain).",
              "kfp": "Browsers, update agents, AV, and dev tools; first-run profile or image build. Expand Microsoft CDN allowlists; raise thresholds for known noisy parent images on jump hosts.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, Sysmon required.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 22).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Sysmon v10+ with DNS query logging (EventCode 22). Each event records the process that made the DNS query and the resolved domain. Filter out known-good domains. Alert on processes with high unique domain counts (DGA indicator), processes that normally don't make DNS queries (LOLBin abuse), or queries to known-bad domains (threat intel lookup). Lower volume than network-level DNS logging since it's per-endpoint.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=22\n| where NOT match(QueryName, \"(?i)(microsoft\\.com|windowsupdate\\.com|office\\.com|bing\\.com|msftconnecttest)\")\n| stats count dc(QueryName) as unique_domains by Image, host\n| where unique_domains > 100\n| sort -unique_domains\n```\n\nUnderstanding this SPL\n\n**Sysmon DNS Query Logging** — Per-process DNS query logging reveals which applications communicate with which domains. Detects DGA, C2 callbacks, and data exfiltration at the endpoint level.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 22). **App/TA** (typical add-on context): `Splunk_TA_windows`, Sysmon required. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-Sysmon/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where NOT match(QueryName, \"(?i)(microsoft\\.com|windowsupdate\\.com|office\\.com|bing\\.com|msftconnecttest)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by Image, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_domains > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (queries by process), Bar chart (top resolving processes), Sankey diagram (process→domain).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at which programs on a Windows machine ask the network for which website or server names, so a single program asking for a huge mix of names stands out the way some bad software does.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.81",
              "n": "SMBv1 Usage Detection",
              "c": "high",
              "f": "intermediate",
              "v": "SMBv1 is vulnerable to EternalBlue and WannaCry. Detecting remaining SMBv1 traffic identifies systems that need upgrading or have SMBv1 re-enabled.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-SMBServer/Audit` (EventCode 3000)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-SMBServer/Audit\" EventCode=3000\n| stats count values(ClientName) as clients dc(ClientName) as client_count by host\n| sort -client_count",
              "m": "Enable SMB1 audit logging via `Set-SmbServerConfiguration -AuditSmb1Access $true`. EventCode 3000 logs each SMBv1 connection with the client name. Identify all clients still using SMBv1, then upgrade or remediate before disabling SMBv1 entirely. Alert on any new SMBv1 access after remediation is complete. MS17-010 (EternalBlue) affects unpatched SMBv1 systems.",
              "z": "Table (SMBv1 clients), Bar chart (clients per server), Single value (total SMBv1 connections — target: 0).",
              "kfp": "Legacy storage and lab gear; one-off copy jobs during protocol migration. Inventory `ClientName` and retire per subnet; do not clear alerts until clients are upgraded or firewalled.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-SMBServer/Audit` (EventCode 3000).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable SMB1 audit logging via `Set-SmbServerConfiguration -AuditSmb1Access $true`. EventCode 3000 logs each SMBv1 connection with the client name. Identify all clients still using SMBv1, then upgrade or remediate before disabling SMBv1 entirely. Alert on any new SMBv1 access after remediation is complete. MS17-010 (EternalBlue) affects unpatched SMBv1 systems.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-SMBServer/Audit\" EventCode=3000\n| stats count values(ClientName) as clients dc(ClientName) as client_count by host\n| sort -client_count\n```\n\nUnderstanding this SPL\n\n**SMBv1 Usage Detection** — SMBv1 is vulnerable to EternalBlue and WannaCry. Detecting remaining SMBv1 traffic identifies systems that need upgrading or have SMBv1 re-enabled.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-SMBServer/Audit` (EventCode 3000). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (SMBv1 clients), Bar chart (clients per server), Single value (total SMBv1 connections — target: 0).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We list machines still talking the oldest Windows file-sharing dialect so the team can turn it off on purpose instead of by surprise when a new flaw appears.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.82",
              "n": "Credential Guard Status Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Credential Guard protects NTLM hashes and Kerberos tickets in an isolated container. Monitoring ensures it remains enabled and isn't bypassed.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-DeviceGuard/Operational` (EventCode 13, 14, 15)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-DeviceGuard*\"\n| stats latest(EventCode) as status by host\n| eval cg_status=case(status=13,\"Running\",status=14,\"Stopped\",status=15,\"Not configured\",1=1,\"Unknown\")\n| table host, cg_status\n| where cg_status!=\"Running\"",
              "m": "Device Guard/Credential Guard Operational log reports VBS (Virtualization Based Security) status. EventCode 13=VBS running with Credential Guard, 14=stopped, 15=not configured. All domain-joined Windows 10/11 and Server 2016+ should have Credential Guard enabled. Alert when any previously-enabled host reports stopped or not configured. Requires UEFI Secure Boot, TPM 2.0, and compatible hardware.",
              "z": "Pie chart (fleet CG status), Table (non-compliant hosts), Single value (% compliant).",
              "kfp": "Planned VBS or firmware changes; hosts that never met hardware requirements; temporary disable during a vendor-approved break-fix. Exempt only with a named ticket and end date; re-baseline `cg_status` after each OS feature update.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-DeviceGuard/Operational` (EventCode 13, 14, 15).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDevice Guard/Credential Guard Operational log reports VBS (Virtualization Based Security) status. EventCode 13=VBS running with Credential Guard, 14=stopped, 15=not configured. All domain-joined Windows 10/11 and Server 2016+ should have Credential Guard enabled. Alert when any previously-enabled host reports stopped or not configured. Requires UEFI Secure Boot, TPM 2.0, and compatible hardware.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-DeviceGuard*\"\n| stats latest(EventCode) as status by host\n| eval cg_status=case(status=13,\"Running\",status=14,\"Stopped\",status=15,\"Not configured\",1=1,\"Unknown\")\n| table host, cg_status\n| where cg_status!=\"Running\"\n```\n\nUnderstanding this SPL\n\n**Credential Guard Status Monitoring** — Credential Guard protects NTLM hashes and Kerberos tickets in an isolated container. Monitoring ensures it remains enabled and isn't bypassed.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-DeviceGuard/Operational` (EventCode 13, 14, 15). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cg_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Credential Guard Status Monitoring**): table host, cg_status\n• Filters the current rows with `where cg_status!=\"Running\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Credential Guard Status Monitoring** — Credential Guard protects NTLM hashes and Kerberos tickets in an isolated container. Monitoring ensures it remains enabled and isn't bypassed.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-DeviceGuard/Operational` (EventCode 13, 14, 15). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (fleet CG status), Table (non-compliant hosts), Single value (% compliant).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check that a Windows security feature meant to keep login secrets locked away is really turned on across your servers, so a simple setting slip does not leave passwords easy to grab.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.83",
              "n": "Boot Configuration Changes (BCDEdit)",
              "c": "critical",
              "f": "intermediate",
              "v": "Boot configuration changes can disable Secure Boot, enable test signing (rootkit loading), or modify boot chain integrity. Used by advanced threats.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4688, CommandLine containing bcdedit)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4688\n  CommandLine=\"*bcdedit*\"\n| where match(CommandLine, \"(?i)(testsigning|nointegritychecks|safeboot|debug|disableelamdrivers)\")\n| table _time, host, SubjectUserName, CommandLine, ParentProcessName\n| sort -_time",
              "m": "Requires process creation with command line auditing (EventCode 4688). Alert on any bcdedit execution that modifies security settings: `testsigning on` (allows unsigned drivers), `nointegritychecks` (disables code integrity), `debug on` (enables kernel debugging), `disableelamdrivers` (disables early launch anti-malware). All of these weaken the boot chain. Legitimate uses are rare and limited to development environments.",
              "z": "Table (bcdedit commands), Single value (security-affecting changes — target: 0), Alert.",
              "kfp": "MECM, imaging, and vendor repair media running `bcdedit` during build; break-glass recovery. Parent image and host role allowlists; require ticket for servers.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4688, CommandLine containing bcdedit).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequires process creation with command line auditing (EventCode 4688). Alert on any bcdedit execution that modifies security settings: `testsigning on` (allows unsigned drivers), `nointegritychecks` (disables code integrity), `debug on` (enables kernel debugging), `disableelamdrivers` (disables early launch anti-malware). All of these weaken the boot chain. Legitimate uses are rare and limited to development environments.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4688\n  CommandLine=\"*bcdedit*\"\n| where match(CommandLine, \"(?i)(testsigning|nointegritychecks|safeboot|debug|disableelamdrivers)\")\n| table _time, host, SubjectUserName, CommandLine, ParentProcessName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Boot Configuration Changes (BCDEdit)** — Boot configuration changes can disable Secure Boot, enable test signing (rootkit loading), or modify boot chain integrity. Used by advanced threats.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4688, CommandLine containing bcdedit). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(CommandLine, \"(?i)(testsigning|nointegritychecks|safeboot|debug|disableelamdrivers)\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Boot Configuration Changes (BCDEdit)**): table _time, host, SubjectUserName, CommandLine, ParentProcessName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Boot Configuration Changes (BCDEdit)** — Boot configuration changes can disable Secure Boot, enable test signing (rootkit loading), or modify boot chain integrity. Used by advanced threats.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4688, CommandLine containing bcdedit). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (bcdedit commands), Single value (security-affecting changes — target: 0), Alert.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for changes to the Windows boot settings and boot commands so we can see when something other than a planned fix or build job is about to change how the system starts or recovers.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.84",
              "n": "Sysmon Named Pipe Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Named pipes are used for inter-process communication and by tools like Cobalt Strike, Mimikatz, and PsExec. Detecting unusual named pipes reveals C2 and lateral movement.",
              "t": "`Splunk_TA_windows`, Sysmon required",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 17, 18)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\"\n  EventCode IN (17, 18)\n| where match(PipeName, \"(?i)(MSSE-|msagent_|postex_|status_|mojo\\.|cobaltstrike|beacon)\")\n| table _time, host, EventCode, PipeName, Image, User\n| sort -_time",
              "m": "Deploy Sysmon with PipeCreated (17) and PipeConnected (18) monitoring. Known malicious pipe names: `MSSE-*` (Metasploit), `msagent_*` (Cobalt Strike), `postex_*` (Cobalt Strike post-exploitation), `status_*` (default Cobalt Strike). Also detect PsExec pipes (`PSEXESVC`). Baseline normal pipes per application role, then alert on anomalies.",
              "z": "Table (pipe events), Bar chart (top pipe names), Timeline, Alert on known-bad patterns.",
              "kfp": "Defender, SQL, backup, and remote management tools. Baseline by `Image` and pipe name allowlists; expect noise on app and RDS session hosts.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, Sysmon required.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 17, 18).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Sysmon with PipeCreated (17) and PipeConnected (18) monitoring. Known malicious pipe names: `MSSE-*` (Metasploit), `msagent_*` (Cobalt Strike), `postex_*` (Cobalt Strike post-exploitation), `status_*` (default Cobalt Strike). Also detect PsExec pipes (`PSEXESVC`). Baseline normal pipes per application role, then alert on anomalies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\"\n  EventCode IN (17, 18)\n| where match(PipeName, \"(?i)(MSSE-|msagent_|postex_|status_|mojo\\.|cobaltstrike|beacon)\")\n| table _time, host, EventCode, PipeName, Image, User\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Sysmon Named Pipe Monitoring** — Named pipes are used for inter-process communication and by tools like Cobalt Strike, Mimikatz, and PsExec. Detecting unusual named pipes reveals C2 and lateral movement.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 17, 18). **App/TA** (typical add-on context): `Splunk_TA_windows`, Sysmon required. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-Sysmon/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-Sysmon/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(PipeName, \"(?i)(MSSE-|msagent_|postex_|status_|mojo\\.|cobaltstrike|beacon)\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Sysmon Named Pipe Monitoring**): table _time, host, EventCode, PipeName, Image, User\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (pipe events), Bar chart (top pipe names), Timeline, Alert on known-bad patterns.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the hidden pipes programs use to talk to each other on Windows so we can notice odd pipe activity that often rides along with sneaky tools moving between machines.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.86",
              "n": "NTLM Audit and Restriction Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "NTLM is a legacy authentication protocol vulnerable to relay attacks. Auditing NTLM usage identifies applications and systems that need migration to Kerberos.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4776), `sourcetype=WinEventLog:Microsoft-Windows-NTLM/Operational` (EventCode 8001, 8003, 8004)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-NTLM/Operational\"\n  EventCode IN (8001, 8003, 8004)\n| stats count by TargetName, DomainName, WorkstationName\n| sort -count",
              "m": "Enable NTLM auditing via GPO: Network Security → Restrict NTLM → Audit incoming/outgoing NTLM traffic. EventCode 8001=outgoing NTLM, 8003=incoming NTLM to server, 8004=NTLM blocked. Start in audit mode to identify all NTLM-dependent applications before enabling blocking. Goal: reduce NTLM usage to zero where possible, using Kerberos for all domain authentication.",
              "z": "Bar chart (top NTLM sources/destinations), Timechart (NTLM vs Kerberos ratio), Table (NTLM-dependent applications).",
              "kfp": "Audit-only mode before cutover; legacy apps with no Kerberos path; NAS and multi-vendor SMB during migration. Baseline by `TargetName`/`WorkstationName`; do not alert on volume until you know expected business traffic.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4776), `sourcetype=WinEventLog:Microsoft-Windows-NTLM/Operational` (EventCode 8001, 8003, 8004).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable NTLM auditing via GPO: Network Security → Restrict NTLM → Audit incoming/outgoing NTLM traffic. EventCode 8001=outgoing NTLM, 8003=incoming NTLM to server, 8004=NTLM blocked. Start in audit mode to identify all NTLM-dependent applications before enabling blocking. Goal: reduce NTLM usage to zero where possible, using Kerberos for all domain authentication.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-NTLM/Operational\"\n  EventCode IN (8001, 8003, 8004)\n| stats count by TargetName, DomainName, WorkstationName\n| sort -count\n```\n\nUnderstanding this SPL\n\n**NTLM Audit and Restriction Monitoring** — NTLM is a legacy authentication protocol vulnerable to relay attacks. Auditing NTLM usage identifies applications and systems that need migration to Kerberos.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4776), `sourcetype=WinEventLog:Microsoft-Windows-NTLM/Operational` (EventCode 8001, 8003, 8004). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by TargetName, DomainName, WorkstationName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NTLM Audit and Restriction Monitoring** — NTLM is a legacy authentication protocol vulnerable to relay attacks. Auditing NTLM usage identifies applications and systems that need migration to Kerberos.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4776), `sourcetype=WinEventLog:Microsoft-Windows-NTLM/Operational` (EventCode 8001, 8003, 8004). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top NTLM sources/destinations), Timechart (NTLM vs Kerberos ratio), Table (NTLM-dependent applications).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track the older NTLM sign-in style on Windows so the team can move systems to the safer Kerberos way of doing things and shrink the room for account-stealing tricks.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.87",
              "n": "DPAPI Credential Backup (DC)",
              "c": "critical",
              "f": "intermediate",
              "v": "Data Protection API master key backup to domain controllers enables credential theft. Abnormal DPAPI backup activity from unexpected accounts indicates compromise.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4692, 4693)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4692, 4693)\n| eval action=case(EventCode=4692,\"DPAPI backup attempted\",EventCode=4693,\"DPAPI recovery attempted\")\n| table _time, host, action, SubjectUserName, SubjectDomainName, MasterKeyId\n| sort -_time",
              "m": "EventCode 4692=DPAPI master key backup, 4693=recovery. Normal during user password changes. Alert on mass backup attempts (many keys in short time) or recovery from unexpected admin accounts — indicates SharpDPAPI/Mimikatz DPAPI module usage. Correlate with DCSync events (4662) as attackers often combine both techniques.",
              "z": "Table (DPAPI events), Single value (recovery count), Timeline, Alert on mass operations.",
              "kfp": "User password changes and single-key rotation during help-desk reset; EFS or credential-roaming in enabled orgs. Alert on *volume* and unexpected privilege sources; tune `SubjectUserName` allowlists for known backup service accounts only after review.",
              "refs": "[indicates SharpDPAPI/Mimikatz DPAPI module usage. Correlate with DCSync events](https://splunkbase.splunk.com/app/4662), [Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4692, 4693).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEventCode 4692=DPAPI master key backup, 4693=recovery. Normal during user password changes. Alert on mass backup attempts (many keys in short time) or recovery from unexpected admin accounts — indicates SharpDPAPI/Mimikatz DPAPI module usage. Correlate with DCSync events (4662) as attackers often combine both techniques.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4692, 4693)\n| eval action=case(EventCode=4692,\"DPAPI backup attempted\",EventCode=4693,\"DPAPI recovery attempted\")\n| table _time, host, action, SubjectUserName, SubjectDomainName, MasterKeyId\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**DPAPI Credential Backup (DC)** — Data Protection API master key backup to domain controllers enables credential theft. Abnormal DPAPI backup activity from unexpected accounts indicates compromise.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4692, 4693). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **DPAPI Credential Backup (DC)**): table _time, host, action, SubjectUserName, SubjectDomainName, MasterKeyId\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DPAPI Credential Backup (DC)** — Data Protection API master key backup to domain controllers enables credential theft. Abnormal DPAPI backup activity from unexpected accounts indicates compromise.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4692, 4693). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (DPAPI events), Single value (recovery count), Timeline, Alert on mass operations.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for odd bulk backup or recovery of the Windows secret-keeping keys on your domain controllers, because that pattern often shows up when someone is after passwords and tokens.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.88",
              "n": "Windows Search Indexer Issues",
              "c": "low",
              "f": "beginner",
              "v": "Search Indexer crashes and high resource usage affect file server performance and SharePoint crawling. Index corruption requires full rebuild.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Application` (Source=Windows Search Service, EventCode 3028, 3036, 7010, 7040, 7042)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Application\" Source=\"Windows Search Service\"\n  EventCode IN (3028, 3036, 7010, 7040, 7042)\n| eval issue=case(EventCode=3028,\"Index corrupted\",EventCode=3036,\"Indexer failed\",EventCode=7040,\"Catalog corrupted\",EventCode=7042,\"Index rebuild started\")\n| table _time, host, issue, CatalogName\n| sort -_time",
              "m": "Monitor on file servers and SharePoint servers where search indexing is critical. EventCode 3028/7040=index corruption (requires rebuild), 3036=indexer service failure. Also monitor Perfmon `Windows Search Indexer` object for `Items in Progress` and `Index Size`. A stuck \"Items in Progress\" >0 for extended periods indicates a hung indexer.",
              "z": "Table (indexer events), Single value (index health status), Line chart (index size over time).",
              "kfp": "Index reset after feature update; large first-time crawl; antivirus locking files. Check disk space, antivirus exclusions, and `SearchIndexer` CPU vs baseline.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Application` (Source=Windows Search Service, EventCode 3028, 3036, 7010, 7040, 7042).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor on file servers and SharePoint servers where search indexing is critical. EventCode 3028/7040=index corruption (requires rebuild), 3036=indexer service failure. Also monitor Perfmon `Windows Search Indexer` object for `Items in Progress` and `Index Size`. A stuck \"Items in Progress\" >0 for extended periods indicates a hung indexer.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Application\" Source=\"Windows Search Service\"\n  EventCode IN (3028, 3036, 7010, 7040, 7042)\n| eval issue=case(EventCode=3028,\"Index corrupted\",EventCode=3036,\"Indexer failed\",EventCode=7040,\"Catalog corrupted\",EventCode=7042,\"Index rebuild started\")\n| table _time, host, issue, CatalogName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Windows Search Indexer Issues** — Search Indexer crashes and high resource usage affect file server performance and SharePoint crawling. Index corruption requires full rebuild.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Application` (Source=Windows Search Service, EventCode 3028, 3036, 7010, 7040, 7042). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Application. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Application\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Windows Search Indexer Issues**): table _time, host, issue, CatalogName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (indexer events), Single value (index health status), Line chart (index size over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check the health of the built-in file search service on Windows servers so we know when people might not find their files and mail in the place they expect on busy shares.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.89",
              "n": "System Uptime & Unexpected Restarts (Windows)",
              "c": "medium",
              "f": "beginner",
              "v": "Unexpected restarts indicate BSOD, power loss, forced reboots, or patch installations. Tracking uptime reveals instability patterns and unauthorized maintenance.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:System` (EventCode 6005, 6006, 6008, 6009, 1074)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:System\" EventCode IN (6005, 6006, 6008, 1074)\n| eval event=case(EventCode=6005,\"Event log started (boot)\",EventCode=6006,\"Event log stopped (clean shutdown)\",EventCode=6008,\"Unexpected shutdown\",EventCode=1074,\"User-initiated shutdown/restart\")\n| table _time, host, event, User, Reason, Comment\n| sort -_time",
              "m": "EventCode 6008=unexpected shutdown (BSOD, power loss, hard reset) — always investigate. EventCode 1074=planned shutdown with user and reason. Calculate uptime by measuring time between 6005 events. Alert on any EventCode 6008 (unexpected) and on restarts outside maintenance windows. Track monthly uptime percentage per server for SLA reporting.",
              "z": "Table (shutdown events), Line chart (uptime per host), Single value (hosts with unexpected restarts), Calendar view.",
              "kfp": "Patch Tuesday, hypervisor host maintenance, and deliberate cluster move. Correlate with VM migration and change calendar; treat repeated 6008 as hardware or driver priority.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:System` (EventCode 6005, 6006, 6008, 6009, 1074).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEventCode 6008=unexpected shutdown (BSOD, power loss, hard reset) — always investigate. EventCode 1074=planned shutdown with user and reason. Calculate uptime by measuring time between 6005 events. Alert on any EventCode 6008 (unexpected) and on restarts outside maintenance windows. Track monthly uptime percentage per server for SLA reporting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:System\" EventCode IN (6005, 6006, 6008, 1074)\n| eval event=case(EventCode=6005,\"Event log started (boot)\",EventCode=6006,\"Event log stopped (clean shutdown)\",EventCode=6008,\"Unexpected shutdown\",EventCode=1074,\"User-initiated shutdown/restart\")\n| table _time, host, event, User, Reason, Comment\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**System Uptime & Unexpected Restarts (Windows)** — Unexpected restarts indicate BSOD, power loss, forced reboots, or patch installations. Tracking uptime reveals instability patterns and unauthorized maintenance.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (EventCode 6005, 6006, 6008, 6009, 1074). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:System. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **event** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **System Uptime & Unexpected Restarts (Windows)**): table _time, host, event, User, Reason, Comment\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (shutdown events), Line chart (uptime per host), Single value (hosts with unexpected restarts), Calendar view.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep a simple story of when a server last started and whether shutdowns look clean, so the team can tell maintenance restarts from power or software surprises.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.90",
              "n": "Shadow Copy Deletion (Ransomware Indicator)",
              "c": "critical",
              "f": "intermediate",
              "v": "Ransomware deletes volume shadow copies to prevent file recovery. Detecting vssadmin/wmic shadow deletion commands is a high-confidence ransomware indicator.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4688), `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 1)",
              "q": "index=wineventlog EventCode IN (4688, 1)\n| where match(CommandLine, \"(?i)(vssadmin.*delete.*shadows|wmic.*shadowcopy.*delete|bcdedit.*recoveryenabled.*no|wbadmin.*delete.*catalog)\")\n| table _time, host, User, CommandLine, ParentProcessName, Image\n| sort -_time",
              "m": "Monitor process creation (EventCode 4688 or Sysmon 1) for commands: `vssadmin delete shadows`, `wmic shadowcopy delete`, `bcdedit /set {default} recoveryenabled no`, `wbadmin delete catalog`. Any of these commands executed outside backup maintenance is a near-certain indicator of ransomware or destructive attack. Alert with critical priority and trigger automated response (network isolation). MITRE ATT&CK T1490.",
              "z": "Single value (count — target: 0), Table (events), Alert with automated containment trigger.",
              "kfp": "Backup, DR, and admin scripts; uninstallers. Allow parent `Image`+account pairs with tickets; never blanket-suppress on domain controllers without security review.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4688), `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 1).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor process creation (EventCode 4688 or Sysmon 1) for commands: `vssadmin delete shadows`, `wmic shadowcopy delete`, `bcdedit /set {default} recoveryenabled no`, `wbadmin delete catalog`. Any of these commands executed outside backup maintenance is a near-certain indicator of ransomware or destructive attack. Alert with critical priority and trigger automated response (network isolation). MITRE ATT&CK T1490.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode IN (4688, 1)\n| where match(CommandLine, \"(?i)(vssadmin.*delete.*shadows|wmic.*shadowcopy.*delete|bcdedit.*recoveryenabled.*no|wbadmin.*delete.*catalog)\")\n| table _time, host, User, CommandLine, ParentProcessName, Image\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Shadow Copy Deletion (Ransomware Indicator)** — Ransomware deletes volume shadow copies to prevent file recovery. Detecting vssadmin/wmic shadow deletion commands is a high-confidence ransomware indicator.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4688), `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 1). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(CommandLine, \"(?i)(vssadmin.*delete.*shadows|wmic.*shadowcopy.*delete|bcdedit.*recoveryenabled.*no|wbadmin.*del…` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Shadow Copy Deletion (Ransomware Indicator)**): table _time, host, User, CommandLine, ParentProcessName, Image\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (count — target: 0), Table (events), Alert with automated containment trigger.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the built-in tools that remove backup shadows on Windows, because that step often shows up right when ransomware is about to lock files and hide recovery options.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.91",
              "n": "USB / Removable Device Auditing",
              "c": "medium",
              "f": "intermediate",
              "v": "Removable storage devices are a data exfiltration vector. Auditing device connections enables DLP and compliance enforcement.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 6416), `sourcetype=WinEventLog:Microsoft-Windows-DriverFrameworks-UserMode/Operational`",
              "q": "index=wineventlog EventCode=6416\n| eval DeviceClass=coalesce(ClassName, \"Unknown\")\n| where DeviceClass=\"DiskDrive\" OR DeviceClass=\"WPD\" OR DeviceClass=\"USB\"\n| stats count by host, DeviceId, DeviceDescription, DeviceClass, SubjectUserName, _time\n| sort -_time",
              "m": "Enable Audit PnP Activity (EventCode 6416) via Advanced Audit Policy. Track USB mass storage, MTP devices, and portable drives. Correlate with file access events for full data movement picture. Alert on USB connections to servers or high-security workstations. Consider blocking USB storage via Group Policy on sensitive systems.",
              "z": "Timeline (device connections over time), Table (device details), Alert on server USB connections.",
              "kfp": "IT imaging keys; medical or industrial devices; encrypted corporate USB. Map ClassGuid allowlists; separate alert tiers for server vs desktop estates.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 6416), `sourcetype=WinEventLog:Microsoft-Windows-DriverFrameworks-UserMode/Operational`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Audit PnP Activity (EventCode 6416) via Advanced Audit Policy. Track USB mass storage, MTP devices, and portable drives. Correlate with file access events for full data movement picture. Alert on USB connections to servers or high-security workstations. Consider blocking USB storage via Group Policy on sensitive systems.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode=6416\n| eval DeviceClass=coalesce(ClassName, \"Unknown\")\n| where DeviceClass=\"DiskDrive\" OR DeviceClass=\"WPD\" OR DeviceClass=\"USB\"\n| stats count by host, DeviceId, DeviceDescription, DeviceClass, SubjectUserName, _time\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**USB / Removable Device Auditing** — Removable storage devices are a data exfiltration vector. Auditing device connections enables DLP and compliance enforcement.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 6416), `sourcetype=WinEventLog:Microsoft-Windows-DriverFrameworks-UserMode/Operational`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **DeviceClass** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where DeviceClass=\"DiskDrive\" OR DeviceClass=\"WPD\" OR DeviceClass=\"USB\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, DeviceId, DeviceDescription, DeviceClass, SubjectUserName, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (device connections over time), Table (device details), Alert on server USB connections.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log when someone plugs a USB or other removable device into a Windows server so the right people can review it against policy and later explain who moved data in or out that way.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ASD E8 E8.01 (Application control) is enforced — Splunk UC-1.2.91: USB / Removable Device Auditing.",
                  "ea": "Saved search 'UC-1.2.91' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/essential-eight/essential-eight-maturity-model"
                },
                {
                  "r": "IT-Grundschutz",
                  "v": "2023 Edition",
                  "cl": "OPS.1.1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that IT-Grundschutz OPS.1.1.2 (Ordered ICT operation) is enforced — Splunk UC-1.2.91: USB / Removable Device Auditing.",
                  "ea": "Saved search 'UC-1.2.91' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Standards-und-Zertifizierung/IT-Grundschutz/it-grundschutz_node.html"
                }
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.92",
              "n": "Remote Desktop Gateway Session Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "RD Gateway is the entry point for remote workers. Monitoring session lifecycle detects unauthorized access, session hijacking, and resource abuse.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational`",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational\"\n| eval EventAction=case(EventCode=300,\"Connected\", EventCode=302,\"Disconnected\", EventCode=303,\"AuthFailed\", EventCode=304,\"AuthZ_Failed\", 1=1,\"Other\")\n| stats count by host, UserName, ClientIP, ResourceName, EventAction\n| where EventAction=\"AuthFailed\" OR EventAction=\"AuthZ_Failed\"",
              "m": "Collect RD Gateway Operational logs. Track connection (300), disconnect (302), authentication failures (303), and authorization failures (304). Alert on brute-force patterns (multiple 303s from same IP), connections from unusual geolocations, and access to unauthorized resources. Monitor session duration for anomalies.",
              "z": "Geo map (client IPs), Table (session details), Timechart (connections by hour).",
              "kfp": "Pentest or red-team windows; user lockouts; legacy clients. Baseline by country and client version; require velocity per user+source before CRIT.",
              "refs": "[Collect RD Gateway Operational logs. Track connection](https://splunkbase.splunk.com/app/300)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect RD Gateway Operational logs. Track connection (300), disconnect (302), authentication failures (303), and authorization failures (304). Alert on brute-force patterns (multiple 303s from same IP), connections from unusual geolocations, and access to unauthorized resources. Monitor session duration for anomalies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational\"\n| eval EventAction=case(EventCode=300,\"Connected\", EventCode=302,\"Disconnected\", EventCode=303,\"AuthFailed\", EventCode=304,\"AuthZ_Failed\", 1=1,\"Other\")\n| stats count by host, UserName, ClientIP, ResourceName, EventAction\n| where EventAction=\"AuthFailed\" OR EventAction=\"AuthZ_Failed\"\n```\n\nUnderstanding this SPL\n\n**Remote Desktop Gateway Session Monitoring** — RD Gateway is the entry point for remote workers. Monitoring session lifecycle detects unauthorized access, session hijacking, and resource abuse.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **EventAction** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, UserName, ClientIP, ResourceName, EventAction** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where EventAction=\"AuthFailed\" OR EventAction=\"AuthZ_Failed\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Geo map (client IPs), Table (session details), Timechart (connections by hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the front door for remote desktop access through the gateway so failed tries and heavy use line up with what the network and identity teams expect before a small problem spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.93",
              "n": "Group Policy Object (GPO) Modification Auditing",
              "c": "critical",
              "f": "intermediate",
              "v": "GPO changes affect all domain-joined systems. Unauthorized GPO modifications can deploy malware, weaken security, or exfiltrate credentials at scale.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 5136, 5137)",
              "q": "index=wineventlog EventCode IN (5136, 5137) ObjectClass=\"groupPolicyContainer\"\n| eval Action=case(EventCode=5136,\"Modified\", EventCode=5137,\"Created\", 1=1,\"Other\")\n| table _time, host, SubjectUserName, Action, ObjectDN, AttributeLDAPDisplayName, AttributeValue\n| sort -_time",
              "m": "Enable Audit Directory Service Changes. Track GPO creation (5137) and modification (5136) on domain controllers. Alert on GPO changes outside change windows, by non-admin accounts, or modifications to security-sensitive GPOs (password policy, audit policy, software restriction). Correlate with change management tickets.",
              "z": "Timeline (GPO changes), Table (modification details), Alert on unauthorized changes.",
              "kfp": "Planned GPO and AGPM releases; DC replication latency showing duplicate 5136. Use AGPM history and change tickets; diff GPO before alert priority.",
              "refs": "[Enable Audit Directory Service Changes. Track GPO creation](https://splunkbase.splunk.com/app/5137), [and modification](https://splunkbase.splunk.com/app/5136), [Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 5136, 5137).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Audit Directory Service Changes. Track GPO creation (5137) and modification (5136) on domain controllers. Alert on GPO changes outside change windows, by non-admin accounts, or modifications to security-sensitive GPOs (password policy, audit policy, software restriction). Correlate with change management tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode IN (5136, 5137) ObjectClass=\"groupPolicyContainer\"\n| eval Action=case(EventCode=5136,\"Modified\", EventCode=5137,\"Created\", 1=1,\"Other\")\n| table _time, host, SubjectUserName, Action, ObjectDN, AttributeLDAPDisplayName, AttributeValue\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Group Policy Object (GPO) Modification Auditing** — GPO changes affect all domain-joined systems. Unauthorized GPO modifications can deploy malware, weaken security, or exfiltrate credentials at scale.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 5136, 5137). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Group Policy Object (GPO) Modification Auditing**): table _time, host, SubjectUserName, Action, ObjectDN, AttributeLDAPDisplayName, AttributeValue\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Group Policy Object (GPO) Modification Auditing** — GPO changes affect all domain-joined systems. Unauthorized GPO modifications can deploy malware, weaken security, or exfiltrate credentials at scale.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 5136, 5137). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (GPO changes), Table (modification details), Alert on unauthorized changes.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow changes to the rules that group policy uses across the company so a surprise edit does not quietly open holes or break logons for whole offices at once.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.94",
              "n": "Windows Subsystem for Linux (WSL) Activity",
              "c": "medium",
              "f": "intermediate",
              "v": "WSL can be abused to run Linux-based attack tools while evading Windows-focused security tooling. Monitoring WSL activity closes this visibility gap.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 1), `sourcetype=WinEventLog:Security` (EventCode 4688)",
              "q": "index=wineventlog (EventCode=1 OR EventCode=4688)\n| where match(Image, \"(?i)(wsl\\.exe|wslhost\\.exe|bash\\.exe.*windows)\") OR match(ParentImage, \"(?i)wsl\")\n| table _time, host, User, Image, CommandLine, ParentImage, ParentCommandLine\n| sort -_time",
              "m": "Monitor for WSL process execution (wsl.exe, wslhost.exe, bash.exe from WindowsApps). Track what commands are executed inside WSL via Sysmon process creation. On servers, WSL should not be installed — alert on any WSL activity. On workstations, baseline normal usage and alert on anomalies like network tools (nmap, netcat) or credential access tools.",
              "z": "Table (WSL commands), Timechart (usage patterns), Alert on server WSL usage.",
              "kfp": "Dev and data science build hosts; WSL2 updates. Tag approved `host` groups; do not run this UC on explicit Linux-dev estates without retuning.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 1), `sourcetype=WinEventLog:Security` (EventCode 4688).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor for WSL process execution (wsl.exe, wslhost.exe, bash.exe from WindowsApps). Track what commands are executed inside WSL via Sysmon process creation. On servers, WSL should not be installed — alert on any WSL activity. On workstations, baseline normal usage and alert on anomalies like network tools (nmap, netcat) or credential access tools.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog (EventCode=1 OR EventCode=4688)\n| where match(Image, \"(?i)(wsl\\.exe|wslhost\\.exe|bash\\.exe.*windows)\") OR match(ParentImage, \"(?i)wsl\")\n| table _time, host, User, Image, CommandLine, ParentImage, ParentCommandLine\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Windows Subsystem for Linux (WSL) Activity** — WSL can be abused to run Linux-based attack tools while evading Windows-focused security tooling. Monitoring WSL activity closes this visibility gap.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 1), `sourcetype=WinEventLog:Security` (EventCode 4688). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(Image, \"(?i)(wsl\\.exe|wslhost\\.exe|bash\\.exe.*windows)\") OR match(ParentImage, \"(?i)wsl\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Windows Subsystem for Linux (WSL) Activity**): table _time, host, User, Image, CommandLine, ParentImage, ParentCommandLine\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (WSL commands), Timechart (usage patterns), Alert on server WSL usage.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We pay attention when the Linux tools inside Windows show up in places you do not expect, so a hidden shell or unapproved app store install does not sit there unnoticed on servers.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.95",
              "n": "Windows Container Health Monitoring",
              "c": "medium",
              "f": "expert",
              "v": "Windows containers running on Server 2019+ need monitoring for resource limits, failures, and networking issues to ensure application availability.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-Compute-Operational`, `sourcetype=Perfmon:Container`",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-Hyper-V-Compute-Operational\"\n| eval Status=case(EventCode=13001,\"Created\", EventCode=13003,\"Started\", EventCode=13005,\"Stopped\", EventCode=13007,\"Terminated\", 1=1,\"Other\")\n| stats count by host, ContainerName, Status\n| append [search index=perfmon source=\"Perfmon:Container\" counter=\"% Processor Time\"\n  | stats avg(Value) as AvgCPU max(Value) as MaxCPU by host, instance]",
              "m": "Enable Hyper-V Compute Operational log for container lifecycle events. Configure Perfmon inputs for container-specific counters (CPU, memory, network). Track container crashes (unexpected stops), OOM kills, and resource exhaustion. Alert on container restart loops and CPU throttling. Integrate with Docker/containerd logs for application-level visibility.",
              "z": "Table (container status), Timechart (resource usage), Alert on crash loops.",
              "kfp": "Live migration, backup integration service pauses, and planned node drain; dev clusters. Correlates with SCVMM and cluster resource state.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-Compute-Operational`, `sourcetype=Perfmon:Container`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Hyper-V Compute Operational log for container lifecycle events. Configure Perfmon inputs for container-specific counters (CPU, memory, network). Track container crashes (unexpected stops), OOM kills, and resource exhaustion. Alert on container restart loops and CPU throttling. Integrate with Docker/containerd logs for application-level visibility.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-Hyper-V-Compute-Operational\"\n| eval Status=case(EventCode=13001,\"Created\", EventCode=13003,\"Started\", EventCode=13005,\"Stopped\", EventCode=13007,\"Terminated\", 1=1,\"Other\")\n| stats count by host, ContainerName, Status\n| append [search index=perfmon source=\"Perfmon:Container\" counter=\"% Processor Time\"\n  | stats avg(Value) as AvgCPU max(Value) as MaxCPU by host, instance]\n```\n\nUnderstanding this SPL\n\n**Windows Container Health Monitoring** — Windows containers running on Server 2019+ need monitoring for resource limits, failures, and networking issues to ensure application availability.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-Compute-Operational`, `sourcetype=Perfmon:Container`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, ContainerName, Status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Appends rows from a subsearch with `append`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Windows Container Health Monitoring** — Windows containers running on Server 2019+ need monitoring for resource limits, failures, and networking issues to ensure application availability.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-Compute-Operational`, `sourcetype=Perfmon:Container`. **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (container status), Timechart (resource usage), Alert on crash loops.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at the health of Windows containers and the Hyper-V layer under them so application teams see a guest pause or service fault before every app on the host feels it.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.96",
              "n": "DNS Server Zone Transfer Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Zone transfers expose the entire DNS namespace to attackers. Unauthorized zone transfers enable reconnaissance and must be detected immediately.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:DNS Server` (EventID 6001, 6002), `sourcetype=MSAD:NT6:DNS`",
              "q": "index=wineventlog source=\"WinEventLog:DNS Server\" EventCode IN (6001, 6002)\n| eval TransferType=case(EventCode=6001,\"AXFR_Sent\", EventCode=6002,\"IXFR_Sent\", 1=1,\"Other\")\n| table _time, host, Source_Network_Address, Zone, TransferType\n| lookup dns_authorized_transfer_partners Source_Network_Address OUTPUT authorized\n| where NOT authorized=\"yes\"",
              "m": "Enable DNS Server Analytical logging. Track zone transfer events (AXFR/IXFR) and correlate with authorized secondary DNS servers via lookup table. Alert on zone transfers to unauthorized IP addresses. Monitor for AXFR queries from non-DNS-server IPs. This is a high-confidence indicator of DNS reconnaissance.",
              "z": "Table (transfer details), Alert on unauthorized transfers, Geo map (requester IPs).",
              "kfp": "Legit secondary promotion and zone transfer to DR; MNAME misconfig. Allowlist by peer server IP; burst during DNS migration.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:DNS Server` (EventID 6001, 6002), `sourcetype=MSAD:NT6:DNS`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable DNS Server Analytical logging. Track zone transfer events (AXFR/IXFR) and correlate with authorized secondary DNS servers via lookup table. Alert on zone transfers to unauthorized IP addresses. Monitor for AXFR queries from non-DNS-server IPs. This is a high-confidence indicator of DNS reconnaissance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:DNS Server\" EventCode IN (6001, 6002)\n| eval TransferType=case(EventCode=6001,\"AXFR_Sent\", EventCode=6002,\"IXFR_Sent\", 1=1,\"Other\")\n| table _time, host, Source_Network_Address, Zone, TransferType\n| lookup dns_authorized_transfer_partners Source_Network_Address OUTPUT authorized\n| where NOT authorized=\"yes\"\n```\n\nUnderstanding this SPL\n\n**DNS Server Zone Transfer Monitoring** — Zone transfers expose the entire DNS namespace to attackers. Unauthorized zone transfers enable reconnaissance and must be detected immediately.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:DNS Server` (EventID 6001, 6002), `sourcetype=MSAD:NT6:DNS`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **TransferType** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **DNS Server Zone Transfer Monitoring**): table _time, host, Source_Network_Address, Zone, TransferType\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where NOT authorized=\"yes\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (transfer details), Alert on unauthorized transfers, Geo map (requester IPs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for full copy requests of your whole DNS zone so you can see transfers that are not in your list of real secondaries before someone walks away with a map of your names.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.97",
              "n": "Print Spooler Vulnerability Monitoring (PrintNightmare)",
              "c": "critical",
              "f": "intermediate",
              "v": "Print Spooler vulnerabilities (CVE-2021-34527, CVE-2021-1675) enable remote code execution and privilege escalation. Continuous monitoring ensures patches hold and exploitation attempts are caught.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-PrintService/Operational`, `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational`",
              "q": "index=wineventlog ((source=\"WinEventLog:Microsoft-Windows-PrintService/Operational\" EventCode IN (316, 808, 811))\n  OR (EventCode=11 TargetFilename=\"*\\\\spool\\\\drivers\\\\*\"))\n| eval Indicator=case(EventCode=316,\"Driver_Install\", EventCode=808,\"RestrictDriverInstallation\", EventCode=11,\"Driver_File_Drop\", 1=1,\"Other\")\n| table _time, host, Indicator, UserName, DriverName, TargetFilename\n| sort -_time",
              "m": "Audit Print Service Operational log for driver installations (316), and Sysmon for DLL drops into spool\\drivers directory (EventCode 11). On non-print servers, the Print Spooler service should be disabled — alert if running. On print servers, monitor for unsigned driver installations and remote driver additions. Alert on any spoolsv.exe spawning cmd.exe or powershell.exe.",
              "z": "Table (events), Single value (spooler running on non-print servers), Alert on exploitation indicators.",
              "kfp": "Mass driver push after print-nightmare style patching; MFP rollouts. Baseline on print servers; separate file servers without spooler from role holders.",
              "refs": "[Audit Print Service Operational log for driver installations](https://splunkbase.splunk.com/app/316), [Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-PrintService/Operational`, `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAudit Print Service Operational log for driver installations (316), and Sysmon for DLL drops into spool\\drivers directory (EventCode 11). On non-print servers, the Print Spooler service should be disabled — alert if running. On print servers, monitor for unsigned driver installations and remote driver additions. Alert on any spoolsv.exe spawning cmd.exe or powershell.exe.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog ((source=\"WinEventLog:Microsoft-Windows-PrintService/Operational\" EventCode IN (316, 808, 811))\n  OR (EventCode=11 TargetFilename=\"*\\\\spool\\\\drivers\\\\*\"))\n| eval Indicator=case(EventCode=316,\"Driver_Install\", EventCode=808,\"RestrictDriverInstallation\", EventCode=11,\"Driver_File_Drop\", 1=1,\"Other\")\n| table _time, host, Indicator, UserName, DriverName, TargetFilename\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Print Spooler Vulnerability Monitoring (PrintNightmare)** — Print Spooler vulnerabilities (CVE-2021-34527, CVE-2021-1675) enable remote code execution and privilege escalation. Continuous monitoring ensures patches hold and exploitation attempts are caught.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-PrintService/Operational`, `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Indicator** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Print Spooler Vulnerability Monitoring (PrintNightmare)**): table _time, host, Indicator, UserName, DriverName, TargetFilename\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (events), Single value (spooler running on non-print servers), Alert on exploitation indicators.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the print system for the same kinds of driver and remote call patterns that keep showing up in big security problems, so print servers are not a quiet open door into the rest of the network.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.98",
              "n": "NPS / RADIUS Authentication Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Network Policy Server handles VPN, Wi-Fi, and 802.1X authentication. Monitoring NPS detects brute-force attacks, misconfigured policies, and unauthorized network access.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 6272, 6273, 6274, 6278)",
              "q": "index=wineventlog EventCode IN (6272, 6273, 6274)\n| eval Result=case(EventCode=6272,\"Access_Granted\", EventCode=6273,\"Access_Denied\", EventCode=6274,\"Discarded\", 1=1,\"Other\")\n| stats count by Result, UserName, CallingStationID, NASIPAddress, AuthenticationProvider\n| where Result=\"Access_Denied\"\n| sort -count",
              "m": "NPS logs authentication events to the Security log. Track granted (6272), denied (6273), and discarded (6274) requests. Alert on high denial rates from specific users (brute-force) or NAS devices (misconfiguration). Monitor for authentication attempts using disabled accounts or from unknown calling station IDs. Correlate with VPN gateway logs.",
              "z": "Pie chart (grant vs deny ratio), Table (denied requests), Timechart (auth attempts by hour).",
              "kfp": "User fat-fingered passwords; RADIUS client shared-secret rotation drift; new AP or NAS with wrong IP in policy; load-test accounts in lab. Enrich with CallingStationID and NAS allowlists; require sustained `count` per user+NAS before high severity.",
              "refs": "[NPS logs authentication events to the Security log. Track granted](https://splunkbase.splunk.com/app/6272), [denied](https://splunkbase.splunk.com/app/6273), [and discarded](https://splunkbase.splunk.com/app/6274), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 6272, 6273, 6274, 6278).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNPS logs authentication events to the Security log. Track granted (6272), denied (6273), and discarded (6274) requests. Alert on high denial rates from specific users (brute-force) or NAS devices (misconfiguration). Monitor for authentication attempts using disabled accounts or from unknown calling station IDs. Correlate with VPN gateway logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode IN (6272, 6273, 6274)\n| eval Result=case(EventCode=6272,\"Access_Granted\", EventCode=6273,\"Access_Denied\", EventCode=6274,\"Discarded\", 1=1,\"Other\")\n| stats count by Result, UserName, CallingStationID, NASIPAddress, AuthenticationProvider\n| where Result=\"Access_Denied\"\n| sort -count\n```\n\nUnderstanding this SPL\n\n**NPS / RADIUS Authentication Monitoring** — Network Policy Server handles VPN, Wi-Fi, and 802.1X authentication. Monitoring NPS detects brute-force attacks, misconfigured policies, and unauthorized network access.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 6272, 6273, 6274, 6278). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Result** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by Result, UserName, CallingStationID, NASIPAddress, AuthenticationProvider** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where Result=\"Access_Denied\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NPS / RADIUS Authentication Monitoring** — Network Policy Server handles VPN, Wi-Fi, and 802.1X authentication. Monitoring NPS detects brute-force attacks, misconfigured policies, and unauthorized network access.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 6272, 6273, 6274, 6278). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (grant vs deny ratio), Table (denied requests), Timechart (auth attempts by hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the Windows log where VPN and office Wi-Fi sign-in attempts are decided so we can see many failed tries or odd devices before a small login issue turns into a full lockout or an intruder poking the network.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.100",
              "n": "PKI / Certificate Authority Health",
              "c": "critical",
              "f": "intermediate",
              "v": "An enterprise CA issues certificates for authentication, encryption, and code signing. CA failures break SSO, VPN, Wi-Fi, and TLS across the organization.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4886, 4887, 4888), `sourcetype=WinEventLog:Application`",
              "q": "index=wineventlog EventCode IN (4886, 4887, 4888, 4890, 4891, 4893)\n| eval Action=case(EventCode=4886,\"CertRequest_Received\", EventCode=4887,\"CertRequest_Approved\", EventCode=4888,\"CertRequest_Denied\", EventCode=4890,\"CA_Settings_Changed\", EventCode=4891,\"CA_Config_Changed\", EventCode=4893,\"CA_Archived_Key\", 1=1,\"Other\")\n| stats count by Action, host, SubjectUserName, RequesterName\n| sort -count",
              "m": "Enable CA-specific audit events via certsrv MMC. Monitor certificate request lifecycle: received (4886), approved (4887), denied (4888). Alert on CA configuration changes (4890/4891) and key archival (4893). Track CRL publishing failures in Application log. Monitor CA certificate expiration (alert 90/60/30 days before). Detect ESC1-ESC8 ADCS attack patterns (misconfigurations in certificate templates).",
              "z": "Timechart (cert requests), Table (CA changes), Alert on config changes and template modifications.",
              "kfp": "High request volume during laptop refresh; lab CAs; manual approval queues. Track 4890/4891 as break-glass; pair with CRL publish health in Application log.",
              "refs": "[received](https://splunkbase.splunk.com/app/4886), [approved](https://splunkbase.splunk.com/app/4887), [denied](https://splunkbase.splunk.com/app/4888), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4886, 4887, 4888), `sourcetype=WinEventLog:Application`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CA-specific audit events via certsrv MMC. Monitor certificate request lifecycle: received (4886), approved (4887), denied (4888). Alert on CA configuration changes (4890/4891) and key archival (4893). Track CRL publishing failures in Application log. Monitor CA certificate expiration (alert 90/60/30 days before). Detect ESC1-ESC8 ADCS attack patterns (misconfigurations in certificate templates).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode IN (4886, 4887, 4888, 4890, 4891, 4893)\n| eval Action=case(EventCode=4886,\"CertRequest_Received\", EventCode=4887,\"CertRequest_Approved\", EventCode=4888,\"CertRequest_Denied\", EventCode=4890,\"CA_Settings_Changed\", EventCode=4891,\"CA_Config_Changed\", EventCode=4893,\"CA_Archived_Key\", 1=1,\"Other\")\n| stats count by Action, host, SubjectUserName, RequesterName\n| sort -count\n```\n\nUnderstanding this SPL\n\n**PKI / Certificate Authority Health** — An enterprise CA issues certificates for authentication, encryption, and code signing. CA failures break SSO, VPN, Wi-Fi, and TLS across the organization.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4886, 4887, 4888), `sourcetype=WinEventLog:Application`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by Action, host, SubjectUserName, RequesterName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**PKI / Certificate Authority Health** — An enterprise CA issues certificates for authentication, encryption, and code signing. CA failures break SSO, VPN, Wi-Fi, and TLS across the organization.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4886, 4887, 4888), `sourcetype=WinEventLog:Application`. **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (cert requests), Table (CA changes), Alert on config changes and template modifications.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the Windows certificate authority logs so we know when request flows, denials, or config changes could soon break logons, Wi-Fi, VPN, or website trust for everyone.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.101",
              "n": "File Share Access Auditing (SMB)",
              "c": "medium",
              "f": "intermediate",
              "v": "File share access auditing detects unauthorized data access, lateral movement via mapped drives, and ransomware encrypting network shares.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 5140, 5145)",
              "q": "index=wineventlog EventCode IN (5140, 5145)\n| eval AccessType=case(EventCode=5140,\"Share_Access\", EventCode=5145,\"File_Access\", 1=1,\"Other\")\n| stats count dc(RelativeTargetName) as UniqueFiles by SubjectUserName, IpAddress, ShareName, AccessType\n| where count>100 OR UniqueFiles>50\n| sort -count",
              "m": "Enable Audit File Share and Audit Detailed File Share via Advanced Audit Policy. EventCode 5140 logs share-level access; 5145 logs individual file access (high volume — use targeted auditing). Alert on mass file access patterns (ransomware indicator), access from unusual IPs, and access to sensitive shares outside business hours. Use SACL on sensitive folders for granular auditing.",
              "z": "Timechart (access volume), Table (top users/shares), Alert on mass access patterns.",
              "kfp": "Backup, AV scan, and indexing service accounts; desktop support browsing. Enrich with privileged share allowlists and only alert on sensitive path patterns.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 5140, 5145).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Audit File Share and Audit Detailed File Share via Advanced Audit Policy. EventCode 5140 logs share-level access; 5145 logs individual file access (high volume — use targeted auditing). Alert on mass file access patterns (ransomware indicator), access from unusual IPs, and access to sensitive shares outside business hours. Use SACL on sensitive folders for granular auditing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode IN (5140, 5145)\n| eval AccessType=case(EventCode=5140,\"Share_Access\", EventCode=5145,\"File_Access\", 1=1,\"Other\")\n| stats count dc(RelativeTargetName) as UniqueFiles by SubjectUserName, IpAddress, ShareName, AccessType\n| where count>100 OR UniqueFiles>50\n| sort -count\n```\n\nUnderstanding this SPL\n\n**File Share Access Auditing (SMB)** — File share access auditing detects unauthorized data access, lateral movement via mapped drives, and ransomware encrypting network shares.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 5140, 5145). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **AccessType** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by SubjectUserName, IpAddress, ShareName, AccessType** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>100 OR UniqueFiles>50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (access volume), Table (top users/shares), Alert on mass access patterns.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see who hit which file share and when, so if something sensitive moves you have a place to start without guessing which account had their hands in the folder.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.102",
              "n": "Software Restriction / AppLocker Bypass Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Application whitelisting is a primary defense against malware. Detecting bypass attempts reveals both sophisticated attackers and policy gaps.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-AppLocker/EXE and DLL`, `sourcetype=WinEventLog:Microsoft-Windows-AppLocker/MSI and Script`",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-AppLocker*\" EventCode IN (8004, 8007, 8022, 8025)\n| eval BlockType=case(EventCode=8004,\"EXE_Blocked\", EventCode=8007,\"Script_Blocked\", EventCode=8022,\"MSI_Blocked\", EventCode=8025,\"DLL_Blocked\", 1=1,\"Other\")\n| stats count by host, UserName, BlockType, RuleName, FilePath\n| sort -count",
              "m": "Collect all four AppLocker log channels (EXE/DLL, MSI/Script, Packaged app, Script). Track blocked executions (8004/8007/8022/8025) and audit-mode warnings (8003/8006). Alert on repeated blocks from same user (attempted bypass), blocks in admin paths, and execution of known LOLBins that bypass default rules (mshta.exe, regsvr32.exe, msbuild.exe). Correlate with Sysmon for parent process context.",
              "z": "Bar chart (blocks by type), Table (blocked files), Timechart (block trends).",
              "kfp": "Developers on non-prod; Microsoft signed updaters occasionally tripped by strict policy. Tier by OU; use test rings before prod noise.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-AppLocker/EXE and DLL`, `sourcetype=WinEventLog:Microsoft-Windows-AppLocker/MSI and Script`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect all four AppLocker log channels (EXE/DLL, MSI/Script, Packaged app, Script). Track blocked executions (8004/8007/8022/8025) and audit-mode warnings (8003/8006). Alert on repeated blocks from same user (attempted bypass), blocks in admin paths, and execution of known LOLBins that bypass default rules (mshta.exe, regsvr32.exe, msbuild.exe). Correlate with Sysmon for parent process context.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-AppLocker*\" EventCode IN (8004, 8007, 8022, 8025)\n| eval BlockType=case(EventCode=8004,\"EXE_Blocked\", EventCode=8007,\"Script_Blocked\", EventCode=8022,\"MSI_Blocked\", EventCode=8025,\"DLL_Blocked\", 1=1,\"Other\")\n| stats count by host, UserName, BlockType, RuleName, FilePath\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Software Restriction / AppLocker Bypass Detection** — Application whitelisting is a primary defense against malware. Detecting bypass attempts reveals both sophisticated attackers and policy gaps.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-AppLocker/EXE and DLL`, `sourcetype=WinEventLog:Microsoft-Windows-AppLocker/MSI and Script`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **BlockType** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, UserName, BlockType, RuleName, FilePath** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (blocks by type), Table (blocked files), Timechart (block trends).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the tricks people use to get around app-block rules, so a policy on paper still matches what really happens on the machine when someone tries a shortcut.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.103",
              "n": "Terminal Services / RDP Session Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "RDP is a primary lateral movement vector. Complete session tracking from logon to logoff enables detection of compromised credentials and unauthorized access.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational`, `sourcetype=WinEventLog:Security`",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational\" EventCode IN (21, 23, 24, 25)\n| eval Action=case(EventCode=21,\"Logon\", EventCode=23,\"Logoff\", EventCode=24,\"Disconnect\", EventCode=25,\"Reconnect\", 1=1,\"Other\")\n| eval src=if(isnotnull(Address), Address, \"local\")\n| stats earliest(_time) as SessionStart latest(_time) as SessionEnd values(Action) as Actions by host, User, SessionID, src\n| eval Duration=round((SessionEnd-SessionStart)/60,1)",
              "m": "Collect TerminalServices-LocalSessionManager/Operational log for session lifecycle events. Track logon (21), logoff (23), disconnect (24), reconnect (25). Correlate with Security log 4624 Type 10 for source IP. Alert on RDP to servers from non-admin workstations, sessions during off-hours, and multiple concurrent sessions from different IPs for same user.",
              "z": "Timeline (sessions), Table (session details), Alert on anomalous patterns.",
              "kfp": "Jump boxes, night patching, and red-team ranges. Baseline interactive hours; suppress known service accounts used for screen sharing.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational`, `sourcetype=WinEventLog:Security`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect TerminalServices-LocalSessionManager/Operational log for session lifecycle events. Track logon (21), logoff (23), disconnect (24), reconnect (25). Correlate with Security log 4624 Type 10 for source IP. Alert on RDP to servers from non-admin workstations, sessions during off-hours, and multiple concurrent sessions from different IPs for same user.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational\" EventCode IN (21, 23, 24, 25)\n| eval Action=case(EventCode=21,\"Logon\", EventCode=23,\"Logoff\", EventCode=24,\"Disconnect\", EventCode=25,\"Reconnect\", 1=1,\"Other\")\n| eval src=if(isnotnull(Address), Address, \"local\")\n| stats earliest(_time) as SessionStart latest(_time) as SessionEnd values(Action) as Actions by host, User, SessionID, src\n| eval Duration=round((SessionEnd-SessionStart)/60,1)\n```\n\nUnderstanding this SPL\n\n**Terminal Services / RDP Session Tracking** — RDP is a primary lateral movement vector. Complete session tracking from logon to logoff enables detection of compromised credentials and unauthorized access.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-TerminalServices-LocalSessionManager/Operational`, `sourcetype=WinEventLog:Security`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **src** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, User, SessionID, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **Duration** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (sessions), Table (session details), Alert on anomalous patterns.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you read who used remote desktop, from where, and when, so support and security can agree on a timeline when a shared admin account or odd hours need explaining.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.104",
              "n": "Disk Latency and I/O Performance (Windows)",
              "c": "high",
              "f": "intermediate",
              "v": "High disk latency directly impacts application performance and user experience. Proactive monitoring prevents performance degradation and identifies failing storage.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:LogicalDisk`",
              "q": "index=perfmon source=\"Perfmon:LogicalDisk\" counter IN (\"Avg. Disk sec/Read\", \"Avg. Disk sec/Write\", \"Current Disk Queue Length\")\n| eval latency_ms=round(Value*1000, 2)\n| stats avg(latency_ms) as AvgLatency max(latency_ms) as MaxLatency by host, instance, counter\n| where AvgLatency>20 OR MaxLatency>100\n| sort -MaxLatency",
              "m": "Configure Perfmon inputs for LogicalDisk counters: Avg. Disk sec/Read, Avg. Disk sec/Write, Current Disk Queue Length, Disk Transfers/sec. Thresholds: <10ms normal, 10-20ms degraded, >20ms poor, >50ms critical. Alert on sustained latency above 20ms. Correlate with application response times and IOPS counters. Track latency trends for capacity planning and storage migration decisions.",
              "z": "Timechart (latency trend), Gauge (current latency), Table (high-latency volumes).",
              "kfp": "Backup merge, index optimization, and large report jobs; CSV under Hyper-V during live migration. Layer with `PhysicalDisk` queue length and vendor array stats.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:LogicalDisk`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon inputs for LogicalDisk counters: Avg. Disk sec/Read, Avg. Disk sec/Write, Current Disk Queue Length, Disk Transfers/sec. Thresholds: <10ms normal, 10-20ms degraded, >20ms poor, >50ms critical. Alert on sustained latency above 20ms. Correlate with application response times and IOPS counters. Track latency trends for capacity planning and storage migration decisions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon source=\"Perfmon:LogicalDisk\" counter IN (\"Avg. Disk sec/Read\", \"Avg. Disk sec/Write\", \"Current Disk Queue Length\")\n| eval latency_ms=round(Value*1000, 2)\n| stats avg(latency_ms) as AvgLatency max(latency_ms) as MaxLatency by host, instance, counter\n| where AvgLatency>20 OR MaxLatency>100\n| sort -MaxLatency\n```\n\nUnderstanding this SPL\n\n**Disk Latency and I/O Performance (Windows)** — High disk latency directly impacts application performance and user experience. Proactive monitoring prevents performance degradation and identifies failing storage.\n\nDocumented **Data sources**: `sourcetype=Perfmon:LogicalDisk`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, instance, counter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where AvgLatency>20 OR MaxLatency>100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.storage_used_percent) as disk_pct\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=1h\n| where disk_pct > 85\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Disk Latency and I/O Performance (Windows)** — High disk latency directly impacts application performance and user experience. Proactive monitoring prevents performance degradation and identifies failing storage.\n\nDocumented **Data sources**: `sourcetype=Perfmon:LogicalDisk`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where disk_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (latency trend), Gauge (current latency), Table (high-latency volumes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how long disk read and write operations take on Windows servers so the team sees storage getting sick before programs start timing out and people feel the drag everywhere.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.storage_used_percent) as disk_pct\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=1h\n| where disk_pct > 85",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.105",
              "n": "Windows Defender Exclusion Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Attackers add Defender exclusions to hide malware. Monitoring exclusion changes detects evasion techniques and ensures antivirus coverage remains complete.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Windows Defender/Operational` (EventID 5007)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\" EventCode=5007\n| where match(New_Value, \"(?i)exclusions\")\n| rex field=New_Value \"Exclusions\\\\\\\\(?<ExclusionType>[^\\\\\\\\]+)\\\\\\\\(?<ExclusionValue>.+)\"\n| table _time, host, ExclusionType, ExclusionValue, Old_Value, New_Value\n| sort -_time",
              "m": "Monitor Defender configuration changes (EventID 5007) and filter for exclusion modifications. Track path, extension, and process exclusions. Alert on any exclusion added outside of change management, especially for temp directories, user profiles, or common malware paths. Maintain a whitelist of approved exclusions and alert on deviations. Critical for detecting MITRE ATT&CK T1562.001 (Impair Defenses).",
              "z": "Table (exclusion changes), Alert on unauthorized exclusions, Trend chart.",
              "kfp": "Vendor-required paths during app deploy; temporary DevExclusions. Require ticket ID in notes; auto-expire exclusions with GPO or pipeline review.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Windows Defender/Operational` (EventID 5007).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Defender configuration changes (EventID 5007) and filter for exclusion modifications. Track path, extension, and process exclusions. Alert on any exclusion added outside of change management, especially for temp directories, user profiles, or common malware paths. Maintain a whitelist of approved exclusions and alert on deviations. Critical for detecting MITRE ATT&CK T1562.001 (Impair Defenses).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\" EventCode=5007\n| where match(New_Value, \"(?i)exclusions\")\n| rex field=New_Value \"Exclusions\\\\\\\\(?<ExclusionType>[^\\\\\\\\]+)\\\\\\\\(?<ExclusionValue>.+)\"\n| table _time, host, ExclusionType, ExclusionValue, Old_Value, New_Value\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Windows Defender Exclusion Monitoring** — Attackers add Defender exclusions to hide malware. Monitoring exclusion changes detects evasion techniques and ensures antivirus coverage remains complete.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Windows Defender/Operational` (EventID 5007). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(New_Value, \"(?i)exclusions\")` — typically the threshold or rule expression for this monitoring goal.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **Windows Defender Exclusion Monitoring**): table _time, host, ExclusionType, ExclusionValue, Old_Value, New_Value\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (exclusion changes), Alert on unauthorized exclusions, Trend chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We show when someone widens the list of things virus scanning ignores on a machine, so a quick troubleshooting exception does not quietly stay wide open forever.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.106",
              "n": "Local Administrator Group Membership Changes",
              "c": "critical",
              "f": "intermediate",
              "v": "Local admin privileges enable credential theft, persistence, and lateral movement. Monitoring local admin group changes detects privilege escalation attacks.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4732, 4733)",
              "q": "index=wineventlog EventCode IN (4732, 4733) TargetUserName=\"Administrators\"\n| eval Action=case(EventCode=4732,\"Member_Added\", EventCode=4733,\"Member_Removed\", 1=1,\"Other\")\n| table _time, host, Action, MemberName, MemberSid, SubjectUserName, SubjectDomainName\n| sort -_time",
              "m": "Enable Audit Security Group Management. Track additions (4732) and removals (4733) from the local Administrators group. Alert on any additions, especially by non-domain-admin accounts. Monitor for patterns: add user → perform action → remove user (cleanup). Correlate with LAPS password rotations and PAM solutions. On servers, local admin changes should be extremely rare.",
              "z": "Table (membership changes), Alert on all additions, Trend chart.",
              "kfp": "Planned access reviews; GPO and AGPM; break-glass group adds with tickets. Exempt with expiring JIRA and named approver; review quarterly.",
              "refs": "[Enable Audit Security Group Management. Track additions](https://splunkbase.splunk.com/app/4732), [Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4732, 4733).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Audit Security Group Management. Track additions (4732) and removals (4733) from the local Administrators group. Alert on any additions, especially by non-domain-admin accounts. Monitor for patterns: add user → perform action → remove user (cleanup). Correlate with LAPS password rotations and PAM solutions. On servers, local admin changes should be extremely rare.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode IN (4732, 4733) TargetUserName=\"Administrators\"\n| eval Action=case(EventCode=4732,\"Member_Added\", EventCode=4733,\"Member_Removed\", 1=1,\"Other\")\n| table _time, host, Action, MemberName, MemberSid, SubjectUserName, SubjectDomainName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Local Administrator Group Membership Changes** — Local admin privileges enable credential theft, persistence, and lateral movement. Monitoring local admin group changes detects privilege escalation attacks.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4732, 4733). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Local Administrator Group Membership Changes**): table _time, host, Action, MemberName, MemberSid, SubjectUserName, SubjectDomainName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| search Authentication.user=*admin* OR Authentication.user=root\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Local Administrator Group Membership Changes** — Local admin privileges enable credential theft, persistence, and lateral movement. Monitoring local admin group changes detects privilege escalation attacks.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4732, 4733). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (membership changes), Alert on all additions, Trend chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch changes to the local administrators group on Windows servers, because that is still a quick way for anyone with a foothold to hand themselves full control on the box.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| search Authentication.user=*admin* OR Authentication.user=root",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.107",
              "n": "DFS Replication Health Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "DFS-R synchronizes SYSVOL and shared folders across domain controllers and file servers. Replication failures cause inconsistent GPOs and stale data.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:DFS Replication`",
              "q": "index=wineventlog source=\"WinEventLog:DFS Replication\" EventCode IN (4012, 4302, 4304, 5002, 5008, 5014)\n| eval Severity=case(EventCode=4012,\"DFSR_Stopped\", EventCode=4302,\"Staging_Quota_Exceeded\", EventCode=4304,\"Staging_Cleanup_Failed\", EventCode=5002,\"Unexpected_Shutdown\", EventCode=5008,\"Auto_Recovery_Failed\", EventCode=5014,\"USN_Journal_Wrap\", 1=1,\"Warning\")\n| stats count by host, Severity, EventCode\n| sort -count",
              "m": "Monitor DFS Replication event log for critical events. EventCode 4012 (DFSR stopped) and 5014 (USN journal wrap) require immediate attention — USN journal wrap can cause full resync. Track staging quota events (4302) to prevent replication stalls. Monitor SYSVOL replication specifically on domain controllers. Alert on replication backlog exceeding threshold via WMI/PowerShell scripted input collecting DFSR WMI counters.",
              "z": "Table (replication errors), Timechart (error trend), Alert on critical events.",
              "kfp": "Planned file replication across sites; large initial sync. Watch backlog duration vs. steady state; treat sustained growth as incident.",
              "refs": "[USN journal wrap can cause full resync. Track staging quota events](https://splunkbase.splunk.com/app/4302), [Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:DFS Replication`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor DFS Replication event log for critical events. EventCode 4012 (DFSR stopped) and 5014 (USN journal wrap) require immediate attention — USN journal wrap can cause full resync. Track staging quota events (4302) to prevent replication stalls. Monitor SYSVOL replication specifically on domain controllers. Alert on replication backlog exceeding threshold via WMI/PowerShell scripted input collecting DFSR WMI counters.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:DFS Replication\" EventCode IN (4012, 4302, 4304, 5002, 5008, 5014)\n| eval Severity=case(EventCode=4012,\"DFSR_Stopped\", EventCode=4302,\"Staging_Quota_Exceeded\", EventCode=4304,\"Staging_Cleanup_Failed\", EventCode=5002,\"Unexpected_Shutdown\", EventCode=5008,\"Auto_Recovery_Failed\", EventCode=5014,\"USN_Journal_Wrap\", 1=1,\"Warning\")\n| stats count by host, Severity, EventCode\n| sort -count\n```\n\nUnderstanding this SPL\n\n**DFS Replication Health Monitoring** — DFS-R synchronizes SYSVOL and shared folders across domain controllers and file servers. Replication failures cause inconsistent GPOs and stale data.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:DFS Replication`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, Severity, EventCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (replication errors), Timechart (error trend), Alert on critical events.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at whether file copies between sites through DFS are healthy, so a stuck backlog does not turn into different folders and broken logon scripts for half the company.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.108",
              "n": "Kerberos Constrained Delegation Abuse",
              "c": "critical",
              "f": "advanced",
              "v": "Kerberos delegation allows services to impersonate users. Misconfigured or compromised delegation targets enable privilege escalation to domain admin.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4769, 5136)",
              "q": "index=wineventlog EventCode=4769 TransitionedServices!=\"\"\n| eval is_suspicious=if(match(ServiceName, \"(?i)(krbtgt|ldap/)\"), \"High_Risk\", \"Normal\")\n| stats count by ServiceName, TargetUserName, IpAddress, TransitionedServices, is_suspicious\n| where is_suspicious=\"High_Risk\" OR count>50\n| sort -count",
              "m": "Monitor TGS requests (4769) where TransitionedServices is populated (indicates S4U2Proxy delegation). Alert on delegation to sensitive services (krbtgt, LDAP, CIFS on DCs). Track AD object modifications (5136) that change msDS-AllowedToDelegateTo attribute — indicates delegation configuration changes. Detect resource-based constrained delegation attacks by monitoring msDS-AllowedToActOnBehalfOfOtherIdentity attribute changes. MITRE ATT&CK T1550.003.",
              "z": "Table (delegation events), Alert on sensitive service delegation, Network diagram (delegation paths).",
              "kfp": "Legitimate constrained delegation for web/SQL; cluster resource SPNs. Use attribute-level diff and service owner + CAB prior to false SOC pages.",
              "refs": "[Track AD object modifications](https://splunkbase.splunk.com/app/5136), [Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4769, 5136).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor TGS requests (4769) where TransitionedServices is populated (indicates S4U2Proxy delegation). Alert on delegation to sensitive services (krbtgt, LDAP, CIFS on DCs). Track AD object modifications (5136) that change msDS-AllowedToDelegateTo attribute — indicates delegation configuration changes. Detect resource-based constrained delegation attacks by monitoring msDS-AllowedToActOnBehalfOfOtherIdentity attribute changes. MITRE ATT&CK T1550.003.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode=4769 TransitionedServices!=\"\"\n| eval is_suspicious=if(match(ServiceName, \"(?i)(krbtgt|ldap/)\"), \"High_Risk\", \"Normal\")\n| stats count by ServiceName, TargetUserName, IpAddress, TransitionedServices, is_suspicious\n| where is_suspicious=\"High_Risk\" OR count>50\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Kerberos Constrained Delegation Abuse** — Kerberos delegation allows services to impersonate users. Misconfigured or compromised delegation targets enable privilege escalation to domain admin.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4769, 5136). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_suspicious** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by ServiceName, TargetUserName, IpAddress, TransitionedServices, is_suspicious** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where is_suspicious=\"High_Risk\" OR count>50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Kerberos Constrained Delegation Abuse** — Kerberos delegation allows services to impersonate users. Misconfigured or compromised delegation targets enable privilege escalation to domain admin.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4769, 5136). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (delegation events), Alert on sensitive service delegation, Network diagram (delegation paths).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the directory settings that let one service act as another user, because those are a favorite way to move sideways once someone has a toehold on a web or app server.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.109",
              "n": "Windows Time Service (W32Time) Drift",
              "c": "high",
              "f": "advanced",
              "v": "Kerberos authentication fails when clock skew exceeds 5 minutes. Time drift breaks authentication, log correlation, and forensic timelines.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:System` (Source=Microsoft-Windows-Time-Service)",
              "q": "index=wineventlog source=\"WinEventLog:System\" SourceName=\"Microsoft-Windows-Time-Service\" EventCode IN (134, 142, 129)\n| eval Issue=case(EventCode=134,\"Time_Provider_Error\", EventCode=142,\"Time_Skew_Too_Large\", EventCode=129,\"NTP_Unreachable\", 1=1,\"Warning\")\n| stats count latest(_time) as LastSeen by host, Issue, EventCode\n| sort -count",
              "m": "Monitor W32Time events for time provider errors (134), skew warnings (142), and NTP unreachable (129). On DCs, the PDC Emulator must sync to an external NTP source — all other DCs sync to the domain hierarchy. Alert on any DC time skew >2 minutes. Monitor w32tm /query /status output via scripted input for continuous drift tracking. Time-critical for Kerberos (5-min max skew) and forensic log correlation.",
              "z": "Timechart (time offset), Table (time errors), Alert on >2min drift.",
              "kfp": "Leap seconds, hypervisor time sync, and noisy VMs after vMotion. Pair with NTP stratum and hardware clock checks.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:System` (Source=Microsoft-Windows-Time-Service).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor W32Time events for time provider errors (134), skew warnings (142), and NTP unreachable (129). On DCs, the PDC Emulator must sync to an external NTP source — all other DCs sync to the domain hierarchy. Alert on any DC time skew >2 minutes. Monitor w32tm /query /status output via scripted input for continuous drift tracking. Time-critical for Kerberos (5-min max skew) and forensic log correlation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:System\" SourceName=\"Microsoft-Windows-Time-Service\" EventCode IN (134, 142, 129)\n| eval Issue=case(EventCode=134,\"Time_Provider_Error\", EventCode=142,\"Time_Skew_Too_Large\", EventCode=129,\"NTP_Unreachable\", 1=1,\"Warning\")\n| stats count latest(_time) as LastSeen by host, Issue, EventCode\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Windows Time Service (W32Time) Drift** — Kerberos authentication fails when clock skew exceeds 5 minutes. Time drift breaks authentication, log correlation, and forensic timelines.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (Source=Microsoft-Windows-Time-Service). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, Issue, EventCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Windows Time Service (W32Time) Drift** — Kerberos authentication fails when clock skew exceeds 5 minutes. Time drift breaks authentication, log correlation, and forensic timelines.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (Source=Microsoft-Windows-Time-Service). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Services` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (time offset), Table (time errors), Alert on >2min drift.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when the Windows clock is drifting, because time skew turns normal sign-ins and log checks into a mess of mystery errors and duplicate-looking events.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.110",
              "n": "PowerShell Constrained Language Mode Bypass",
              "c": "critical",
              "f": "advanced",
              "v": "Constrained Language Mode limits PowerShell attack surface. Detecting bypasses reveals attackers escalating from restricted to full-language mode for malware execution.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-PowerShell/Operational` (EventCode 4104), `sourcetype=WinEventLog:Windows PowerShell`",
              "q": "index=wineventlog EventCode=4104\n| where match(ScriptBlockText, \"(?i)(FullLanguage|LanguageMode|Add-Type.*DllImport|System\\.Management\\.Automation\\.LanguageMode)\")\n| table _time, host, UserName, ScriptBlockText, Path\n| sort -_time",
              "m": "Enable PowerShell Script Block Logging (EventCode 4104) and Module Logging. Search for scripts that attempt to change LanguageMode, use reflection to bypass CLM, or reference FullLanguage mode. Alert on Add-Type with DllImport (P/Invoke) in constrained environments — this is a common CLM bypass. Correlate with AppLocker and WDAC logs for defense-in-depth monitoring.",
              "z": "Table (bypass attempts), Alert on detection, Single value (count).",
              "kfp": "Build pipelines and config management running full language on purpose; dev hosts. Use OU and certificate-bound script baselines.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-PowerShell/Operational` (EventCode 4104), `sourcetype=WinEventLog:Windows PowerShell`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable PowerShell Script Block Logging (EventCode 4104) and Module Logging. Search for scripts that attempt to change LanguageMode, use reflection to bypass CLM, or reference FullLanguage mode. Alert on Add-Type with DllImport (P/Invoke) in constrained environments — this is a common CLM bypass. Correlate with AppLocker and WDAC logs for defense-in-depth monitoring.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode=4104\n| where match(ScriptBlockText, \"(?i)(FullLanguage|LanguageMode|Add-Type.*DllImport|System\\.Management\\.Automation\\.LanguageMode)\")\n| table _time, host, UserName, ScriptBlockText, Path\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**PowerShell Constrained Language Mode Bypass** — Constrained Language Mode limits PowerShell attack surface. Detecting bypasses reveals attackers escalating from restricted to full-language mode for malware execution.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-PowerShell/Operational` (EventCode 4104), `sourcetype=WinEventLog:Windows PowerShell`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(ScriptBlockText, \"(?i)(FullLanguage|LanguageMode|Add-Type.*DllImport|System\\.Management\\.Automation\\.LanguageMo…` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PowerShell Constrained Language Mode Bypass**): table _time, host, UserName, ScriptBlockText, Path\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (bypass attempts), Alert on detection, Single value (count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ways people get out of the locked-down PowerShell mode, so a policy that should limit risky scripts is not the only line of defense on its own.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.111",
              "n": "Windows Firewall Rule Tampering",
              "c": "critical",
              "f": "intermediate",
              "v": "Attackers disable or modify firewall rules to enable lateral movement, C2 communication, and data exfiltration. Rule changes outside maintenance windows indicate compromise.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Windows Firewall With Advanced Security/Firewall` (EventCode 2004, 2005, 2006, 2033)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-Windows Firewall With Advanced Security/Firewall\" EventCode IN (2004, 2005, 2006, 2033)\n| eval Action=case(EventCode=2004,\"Rule_Added\", EventCode=2005,\"Rule_Modified\", EventCode=2006,\"Rule_Deleted\", EventCode=2033,\"All_Rules_Deleted\", 1=1,\"Other\")\n| table _time, host, Action, RuleName, ApplicationPath, Direction, Protocol, LocalPort, RemotePort, ModifyingUser\n| sort -_time",
              "m": "Collect Windows Firewall With Advanced Security log. Track rule additions (2004), modifications (2005), deletions (2006), and bulk deletion (2033 — extremely suspicious). Alert on: allow-inbound rules for unusual ports, rules permitting all traffic, rules created by non-admin processes, and any rule changes on servers outside change windows. Correlate with process creation to identify the modifying application.",
              "z": "Table (rule changes), Timeline (change frequency), Alert on suspicious modifications.",
              "kfp": "GPO-refresh of approved rules; WDAC co-management. Use GPO version and `who` fields; require ticket for non-GPO-sourced local edits on servers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Windows Firewall With Advanced Security/Firewall` (EventCode 2004, 2005, 2006, 2033).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Windows Firewall With Advanced Security log. Track rule additions (2004), modifications (2005), deletions (2006), and bulk deletion (2033 — extremely suspicious). Alert on: allow-inbound rules for unusual ports, rules permitting all traffic, rules created by non-admin processes, and any rule changes on servers outside change windows. Correlate with process creation to identify the modifying application.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-Windows Firewall With Advanced Security/Firewall\" EventCode IN (2004, 2005, 2006, 2033)\n| eval Action=case(EventCode=2004,\"Rule_Added\", EventCode=2005,\"Rule_Modified\", EventCode=2006,\"Rule_Deleted\", EventCode=2033,\"All_Rules_Deleted\", 1=1,\"Other\")\n| table _time, host, Action, RuleName, ApplicationPath, Direction, Protocol, LocalPort, RemotePort, ModifyingUser\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Windows Firewall Rule Tampering** — Attackers disable or modify firewall rules to enable lateral movement, C2 communication, and data exfiltration. Rule changes outside maintenance windows indicate compromise.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Windows Firewall With Advanced Security/Firewall` (EventCode 2004, 2005, 2006, 2033). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Windows Firewall Rule Tampering**): table _time, host, Action, RuleName, ApplicationPath, Direction, Protocol, LocalPort, RemotePort, ModifyingUser\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rule changes), Timeline (change frequency), Alert on suspicious modifications.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We show when the host firewall rules change, so a rule someone thought was still there for remote support or file sharing is not silently open to the world after an edit.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.112",
              "n": "BITS Transfer Abuse Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Background Intelligent Transfer Service (BITS) is abused by malware for stealthy downloads and persistence. Monitoring BITS jobs detects LOLBin-based attacks.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Bits-Client/Operational`",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-Bits-Client/Operational\" EventCode IN (3, 4, 59, 60, 61)\n| eval Status=case(EventCode=3,\"Transfer_Complete\", EventCode=4,\"Transfer_Cancelled\", EventCode=59,\"Job_Created\", EventCode=60,\"Job_Modified\", EventCode=61,\"Job_Transferred\", 1=1,\"Other\")\n| table _time, host, User, jobTitle, url, fileList, Status, bytesTransferred\n| where NOT match(url, \"(?i)(windowsupdate|microsoft\\.com|msedge)\")\n| sort -_time",
              "m": "Enable BITS Client Operational logging. Track job creation (59), modification (60), and completion (3/61). Filter out legitimate BITS usage (Windows Update, Edge updates). Alert on BITS jobs downloading from unusual URLs, jobs created by unexpected processes (not svchost or system), and BITS persistence via /SetNotifyCmdLine. MITRE ATT&CK T1197.",
              "z": "Table (BITS jobs), Timechart (transfer volume), Alert on non-standard URLs.",
              "kfp": "SCCM/Intune, Windows Update, and Edge updates; VDI gold builds. Correlates parent `Image` and user account; velocity-only alerts.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Bits-Client/Operational`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable BITS Client Operational logging. Track job creation (59), modification (60), and completion (3/61). Filter out legitimate BITS usage (Windows Update, Edge updates). Alert on BITS jobs downloading from unusual URLs, jobs created by unexpected processes (not svchost or system), and BITS persistence via /SetNotifyCmdLine. MITRE ATT&CK T1197.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-Bits-Client/Operational\" EventCode IN (3, 4, 59, 60, 61)\n| eval Status=case(EventCode=3,\"Transfer_Complete\", EventCode=4,\"Transfer_Cancelled\", EventCode=59,\"Job_Created\", EventCode=60,\"Job_Modified\", EventCode=61,\"Job_Transferred\", 1=1,\"Other\")\n| table _time, host, User, jobTitle, url, fileList, Status, bytesTransferred\n| where NOT match(url, \"(?i)(windowsupdate|microsoft\\.com|msedge)\")\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**BITS Transfer Abuse Detection** — Background Intelligent Transfer Service (BITS) is abused by malware for stealthy downloads and persistence. Monitoring BITS jobs detects LOLBin-based attacks.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Bits-Client/Operational`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **BITS Transfer Abuse Detection**): table _time, host, User, jobTitle, url, fileList, Status, bytesTransferred\n• Filters the current rows with `where NOT match(url, \"(?i)(windowsupdate|microsoft\\.com|msedge)\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (BITS jobs), Timechart (transfer volume), Alert on non-standard URLs.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at background file transfers that use the quiet download channel, so a big or odd transfer does not look like a normal patch job when it is not.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.113",
              "n": "COM Object Hijacking Detection",
              "c": "high",
              "f": "advanced",
              "v": "COM hijacking replaces legitimate COM objects with malicious ones for persistence and privilege escalation. It's a stealthy technique that survives reboots.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 13)",
              "q": "index=wineventlog EventCode=13 TargetObject=\"*\\\\Classes\\\\CLSID\\\\*\\\\InprocServer32*\"\n| where NOT match(Image, \"(?i)(msiexec|svchost|TiWorker|TrustedInstaller|DismHost)\")\n| table _time, host, User, Image, TargetObject, Details\n| rex field=TargetObject \"CLSID\\\\\\\\(?<CLSID>[^\\\\\\\\]+)\"\n| sort -_time",
              "m": "Monitor Sysmon registry value set events (EventCode 13) targeting CLSID InprocServer32 and LocalServer32 keys in HKCU and HKLM. Filter out legitimate installers (msiexec, TrustedInstaller). Alert on modifications pointing to unusual DLL paths (temp directories, user profiles, AppData). Maintain baseline of known-good CLSID registrations. MITRE ATT&CK T1546.015.",
              "z": "Table (registry changes), Alert on suspicious CLSID modifications.",
              "kfp": "App upgrades rewriting CLSID; .NET and Office repairs. Use stack of command line, signer, and path under `Software\\Classes`.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 13).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Sysmon registry value set events (EventCode 13) targeting CLSID InprocServer32 and LocalServer32 keys in HKCU and HKLM. Filter out legitimate installers (msiexec, TrustedInstaller). Alert on modifications pointing to unusual DLL paths (temp directories, user profiles, AppData). Maintain baseline of known-good CLSID registrations. MITRE ATT&CK T1546.015.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode=13 TargetObject=\"*\\\\Classes\\\\CLSID\\\\*\\\\InprocServer32*\"\n| where NOT match(Image, \"(?i)(msiexec|svchost|TiWorker|TrustedInstaller|DismHost)\")\n| table _time, host, User, Image, TargetObject, Details\n| rex field=TargetObject \"CLSID\\\\\\\\(?<CLSID>[^\\\\\\\\]+)\"\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**COM Object Hijacking Detection** — COM hijacking replaces legitimate COM objects with malicious ones for persistence and privilege escalation. It's a stealthy technique that survives reboots.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 13). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where NOT match(Image, \"(?i)(msiexec|svchost|TiWorker|TrustedInstaller|DismHost)\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **COM Object Hijacking Detection**): table _time, host, User, Image, TargetObject, Details\n• Extracts fields with `rex` (regular expression).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (registry changes), Alert on suspicious CLSID modifications.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for new or changed links between programs and little shared libraries, because that is an old and still popular way to make a good program run attacker code on purpose.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.114",
              "n": "LSASS Memory Protection Monitoring",
              "c": "critical",
              "f": "advanced",
              "v": "LSASS contains credentials in memory. Monitoring LSASS access attempts and protection status detects credential dumping tools like Mimikatz.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 10), `sourcetype=WinEventLog:Security`",
              "q": "index=wineventlog EventCode=10 TargetImage=\"*\\\\lsass.exe\"\n| where NOT match(SourceImage, \"(?i)(csrss|services|svchost|wininit|MsMpEng|MsSense|CrowdStrike|SentinelAgent)\")\n| eval GrantedAccess_hex=GrantedAccess\n| table _time, host, SourceImage, SourceUser, GrantedAccess_hex, CallTrace\n| where match(GrantedAccess_hex, \"0x1010|0x1FFFFF|0x143A\")\n| sort -_time",
              "m": "Sysmon EventCode 10 (ProcessAccess) targeting lsass.exe. Filter legitimate AV/EDR processes. Focus on suspicious access masks: 0x1010 (PROCESS_QUERY_LIMITED_INFORMATION + PROCESS_VM_READ), 0x1FFFFF (PROCESS_ALL_ACCESS), 0x143A (used by Mimikatz sekurlsa). Enable RunAsPPL for LSASS protection and monitor for its status. Alert on any non-whitelisted LSASS access. MITRE ATT&CK T1003.001.",
              "z": "Table (access events), Alert on suspicious access masks, Single value (LSASS PPL status).",
              "kfp": "EDR, AV, and kernel debug; known dump tools with approval. Whitelist EDR with registry evidence; high severity on new unsigned consumers only.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 10), `sourcetype=WinEventLog:Security`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSysmon EventCode 10 (ProcessAccess) targeting lsass.exe. Filter legitimate AV/EDR processes. Focus on suspicious access masks: 0x1010 (PROCESS_QUERY_LIMITED_INFORMATION + PROCESS_VM_READ), 0x1FFFFF (PROCESS_ALL_ACCESS), 0x143A (used by Mimikatz sekurlsa). Enable RunAsPPL for LSASS protection and monitor for its status. Alert on any non-whitelisted LSASS access. MITRE ATT&CK T1003.001.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode=10 TargetImage=\"*\\\\lsass.exe\"\n| where NOT match(SourceImage, \"(?i)(csrss|services|svchost|wininit|MsMpEng|MsSense|CrowdStrike|SentinelAgent)\")\n| eval GrantedAccess_hex=GrantedAccess\n| table _time, host, SourceImage, SourceUser, GrantedAccess_hex, CallTrace\n| where match(GrantedAccess_hex, \"0x1010|0x1FFFFF|0x143A\")\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**LSASS Memory Protection Monitoring** — LSASS contains credentials in memory. Monitoring LSASS access attempts and protection status detects credential dumping tools like Mimikatz.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 10), `sourcetype=WinEventLog:Security`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where NOT match(SourceImage, \"(?i)(csrss|services|svchost|wininit|MsMpEng|MsSense|CrowdStrike|SentinelAgent)\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **GrantedAccess_hex** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **LSASS Memory Protection Monitoring**): table _time, host, SourceImage, SourceUser, GrantedAccess_hex, CallTrace\n• Filters the current rows with `where match(GrantedAccess_hex, \"0x1010|0x1FFFFF|0x143A\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (access events), Alert on suspicious access masks, Single value (LSASS PPL status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when the main memory area that holds sign-in secrets is less protected or being opened in a way that often comes before big credential theft, so the team can react early.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.115",
              "n": "Logon Session Anomalies (Type 3 / Network Logon)",
              "c": "high",
              "f": "intermediate",
              "v": "Network logons (Type 3) from unexpected sources indicate lateral movement with stolen credentials. Baselining normal patterns reveals compromised accounts.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4624)",
              "q": "index=wineventlog EventCode=4624 Logon_Type=3\n| eval src=coalesce(Source_Network_Address, IpAddress, \"unknown\")\n| stats dc(host) as TargetCount values(host) as Targets count by TargetUserName, src\n| where TargetCount>5\n| sort -TargetCount",
              "m": "Monitor Type 3 (Network) logons across all systems. Build baseline of normal logon patterns: which accounts log into which systems from where. Alert on accounts that suddenly access many more systems than usual (lateral movement), network logons from unusual subnets, and logons using service accounts from non-service IPs. Exclude machine accounts (ending in $) for noise reduction. Combine with EventCode 4648 (explicit credentials).",
              "z": "Network diagram (account-to-host), Timechart (logon volume), Alert on anomalous spread.",
              "kfp": "Scanners, monitoring, and DFS namespace auth noise; file shares from service accounts. Baseline by `src` subnet and share sensitivity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4624).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Type 3 (Network) logons across all systems. Build baseline of normal logon patterns: which accounts log into which systems from where. Alert on accounts that suddenly access many more systems than usual (lateral movement), network logons from unusual subnets, and logons using service accounts from non-service IPs. Exclude machine accounts (ending in $) for noise reduction. Combine with EventCode 4648 (explicit credentials).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode=4624 Logon_Type=3\n| eval src=coalesce(Source_Network_Address, IpAddress, \"unknown\")\n| stats dc(host) as TargetCount values(host) as Targets count by TargetUserName, src\n| where TargetCount>5\n| sort -TargetCount\n```\n\nUnderstanding this SPL\n\n**Logon Session Anomalies (Type 3 / Network Logon)** — Network logons (Type 3) from unexpected sources indicate lateral movement with stolen credentials. Baselining normal patterns reveals compromised accounts.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4624). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **src** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by TargetUserName, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where TargetCount>5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Logon Session Anomalies (Type 3 / Network Logon)** — Network logons (Type 3) from unexpected sources indicate lateral movement with stolen credentials. Baselining normal patterns reveals compromised accounts.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4624). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Network diagram (account-to-host), Timechart (logon volume), Alert on anomalous spread.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at the kind of network sign-ins that do not show a person at a keyboard, so a single bad password in many places is easier to see before it turns into a wide break-in.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.116",
              "n": "WMI Persistence Detection",
              "c": "critical",
              "f": "advanced",
              "v": "WMI event subscriptions provide fileless persistence that survives reboots. Detecting WMI persistence reveals advanced persistent threats.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 19, 20, 21)",
              "q": "index=wineventlog EventCode IN (19, 20, 21)\n| eval WMIType=case(EventCode=19,\"FilterCreated\", EventCode=20,\"ConsumerCreated\", EventCode=21,\"BindingCreated\", 1=1,\"Other\")\n| table _time, host, User, WMIType, EventNamespace, Name, Query, Destination, Consumer\n| where NOT match(Name, \"(?i)(BVTFilter|TSLogonFilter|SCM Event)\")\n| sort -_time",
              "m": "Sysmon EventCodes 19/20/21 track WMI event filter, consumer, and binding creation. Any new WMI subscription (especially CommandLineEventConsumer or ActiveScriptEventConsumer) is suspicious. Filter out known-good subscriptions (BVTFilter, TSLogonFilter). Alert on all new subscriptions and investigate the consumer action. Correlate EventCode 21 (binding) — a complete subscription requires filter + consumer + binding. MITRE ATT&CK T1546.003.",
              "z": "Table (WMI subscriptions), Alert on creation, Timeline (events).",
              "kfp": "SCCM inventory, monitoring agents, and DSC. Whitelist by namespace; alert on `ROOT\\subscription` and permanent consumers outside IT templates.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 19, 20, 21).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSysmon EventCodes 19/20/21 track WMI event filter, consumer, and binding creation. Any new WMI subscription (especially CommandLineEventConsumer or ActiveScriptEventConsumer) is suspicious. Filter out known-good subscriptions (BVTFilter, TSLogonFilter). Alert on all new subscriptions and investigate the consumer action. Correlate EventCode 21 (binding) — a complete subscription requires filter + consumer + binding. MITRE ATT&CK T1546.003.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode IN (19, 20, 21)\n| eval WMIType=case(EventCode=19,\"FilterCreated\", EventCode=20,\"ConsumerCreated\", EventCode=21,\"BindingCreated\", 1=1,\"Other\")\n| table _time, host, User, WMIType, EventNamespace, Name, Query, Destination, Consumer\n| where NOT match(Name, \"(?i)(BVTFilter|TSLogonFilter|SCM Event)\")\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**WMI Persistence Detection** — WMI event subscriptions provide fileless persistence that survives reboots. Detecting WMI persistence reveals advanced persistent threats.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 19, 20, 21). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **WMIType** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **WMI Persistence Detection**): table _time, host, User, WMIType, EventNamespace, Name, Query, Destination, Consumer\n• Filters the current rows with `where NOT match(Name, \"(?i)(BVTFilter|TSLogonFilter|SCM Event)\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (WMI subscriptions), Alert on creation, Timeline (events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for new behind-the-scenes Windows management hooks that can run code without a normal login, because those are a quiet way to stay on a machine for a long time.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.117",
              "n": "NIC Teaming & Network Adapter Failures (Windows)",
              "c": "high",
              "f": "advanced",
              "v": "NIC teaming provides network redundancy for servers. Adapter failures reduce redundancy and can cause outages if the remaining NIC also fails.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-NlbMgr/Operational`, `sourcetype=WinEventLog:System`",
              "q": "index=wineventlog source=\"WinEventLog:System\" SourceName IN (\"Microsoft-Windows-NDIS\", \"e1cexpress\", \"mlx4_bus\", \"vmxnet3ndis6\", \"Tcpip\")\n| eval Issue=case(match(Message, \"(?i)disconnect\"), \"Link_Down\", match(Message, \"(?i)reset\"), \"Adapter_Reset\", match(Message, \"(?i)error\"), \"Adapter_Error\", 1=1, \"Other\")\n| stats count latest(_time) as LastEvent by host, SourceName, Issue\n| sort -LastEvent",
              "m": "Monitor System event log for network adapter events from NIC drivers (e1cexpress for Intel, mlx4_bus for Mellanox, vmxnet3ndis6 for VMware). Track link-down events, adapter resets, and errors. For NIC teams, monitor Microsoft-Windows-MsLbfoProvider events. Alert on: team degradation (standby adapter now active), both adapters down, and frequent adapter resets (hardware failure). Include Perfmon Network Interface counters for bandwidth and error monitoring.",
              "z": "Table (adapter events), Status dashboard (team health), Alert on degradation.",
              "kfp": "Physical maintenance, SFP reseat, and driver update flaps. Pair with switch port and `Team` membership before CRIT for single-nic blips.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-NlbMgr/Operational`, `sourcetype=WinEventLog:System`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor System event log for network adapter events from NIC drivers (e1cexpress for Intel, mlx4_bus for Mellanox, vmxnet3ndis6 for VMware). Track link-down events, adapter resets, and errors. For NIC teams, monitor Microsoft-Windows-MsLbfoProvider events. Alert on: team degradation (standby adapter now active), both adapters down, and frequent adapter resets (hardware failure). Include Perfmon Network Interface counters for bandwidth and error monitoring.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:System\" SourceName IN (\"Microsoft-Windows-NDIS\", \"e1cexpress\", \"mlx4_bus\", \"vmxnet3ndis6\", \"Tcpip\")\n| eval Issue=case(match(Message, \"(?i)disconnect\"), \"Link_Down\", match(Message, \"(?i)reset\"), \"Adapter_Reset\", match(Message, \"(?i)error\"), \"Adapter_Error\", 1=1, \"Other\")\n| stats count latest(_time) as LastEvent by host, SourceName, Issue\n| sort -LastEvent\n```\n\nUnderstanding this SPL\n\n**NIC Teaming & Network Adapter Failures (Windows)** — NIC teaming provides network redundancy for servers. Adapter failures reduce redundancy and can cause outages if the remaining NIC also fails.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-NlbMgr/Operational`, `sourcetype=WinEventLog:System`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, SourceName, Issue** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIC Teaming & Network Adapter Failures (Windows)** — NIC teaming provides network redundancy for servers. Adapter failures reduce redundancy and can cause outages if the remaining NIC also fails.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-NlbMgr/Operational`, `sourcetype=WinEventLog:System`. **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Services` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (adapter events), Status dashboard (team health), Alert on degradation.",
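Step 1 also mentions Perfmon Network Interface counters; here is a minimal sketch of the error-counter side, assuming a `[perfmon://Network Interface]` stanza writing to an index named `perfmon` with a `Perfmon:Network Interface` sourcetype — all three names are assumptions, so rename them to match your deployment:

```spl
index=perfmon sourcetype="Perfmon:Network Interface" counter IN ("Packets Received Errors", "Packets Outbound Errors")
| bin _time span=5m
| stats sum(Value) as error_count by _time, host, instance
| where error_count > 0
```

Any non-zero row is a NIC reporting packet errors; sustained counts on a teamed adapter justify the degradation alert described above.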
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a network team or a cable in the rack hiccuped on Windows, so a whole app is not blamed for a bad card or broken team while the real fix is on the wire.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.118",
              "n": "ASR (Attack Surface Reduction) Rule Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "ASR rules block common attack techniques (Office macro code, credential theft, ransomware). Monitoring ASR ensures rules are enforced and detects blocked attacks.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Windows Defender/Operational` (EventID 1121, 1122)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\" EventCode IN (1121, 1122)\n| eval Mode=case(EventCode=1121,\"Blocked\", EventCode=1122,\"Audit\", 1=1,\"Other\")\n| lookup asr_rule_names ID as RuleId OUTPUT RuleName\n| stats count by host, RuleName, Mode, Path, ProcessName\n| sort -count",
              "m": "Enable ASR rules in Block or Audit mode. EventCode 1121 (blocked) and 1122 (audit) log ASR triggers. Map rule GUIDs to names via lookup table (e.g., 75668C1F = \"Block Office from creating executable content\"). Track most-triggered rules for tuning. Alert on: high block counts (active attack), blocks suddenly stopping (rules disabled), and audit-mode triggers on sensitive rules that should be in block mode.",
              "z": "Bar chart (blocks by rule), Timechart (block trends), Table (event details).",
              "kfp": "Pilot rings with noisy Office macros; VDI non-persist. Require host tier and GPO version before paging prod.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Windows Defender/Operational` (EventID 1121, 1122).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable ASR rules in Block or Audit mode. EventCode 1121 (blocked) and 1122 (audit) log ASR triggers. Map rule GUIDs to names via lookup table (e.g., 75668C1F = \"Block Office from creating executable content\"). Track most-triggered rules for tuning. Alert on: high block counts (active attack), blocks suddenly stopping (rules disabled), and audit-mode triggers on sensitive rules that should be in block mode.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\" EventCode IN (1121, 1122)\n| eval Mode=case(EventCode=1121,\"Blocked\", EventCode=1122,\"Audit\", 1=1,\"Other\")\n| lookup asr_rule_names ID as RuleId OUTPUT RuleName\n| stats count by host, RuleName, Mode, Path, ProcessName\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ASR (Attack Surface Reduction) Rule Monitoring** — ASR rules block common attack techniques (Office macro code, credential theft, ransomware). Monitoring ASR ensures rules are enforced and detects blocked attacks.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Windows Defender/Operational` (EventID 1121, 1122). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Mode** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by host, RuleName, Mode, Path, ProcessName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (blocks by rule), Timechart (block trends), Table (event details).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at the settings that block risky document and script behaviors, so after an upgrade you still know the protections you expect are really on, not just on paper.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.119",
              "n": "Registry Run Key Persistence Monitoring",
              "c": "critical",
              "f": "advanced",
              "v": "Registry Run keys are the most common persistence mechanism for malware. Monitoring autostart registry locations detects new malware installations.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 13)",
              "q": "index=wineventlog EventCode=13\n| where match(TargetObject, \"(?i)(CurrentVersion\\\\\\\\Run|CurrentVersion\\\\\\\\RunOnce|Winlogon\\\\\\\\Shell|Winlogon\\\\\\\\Userinit|Explorer\\\\\\\\Shell Folders)\")\n| where NOT match(Details, \"(?i)(program files|windows\\\\\\\\system32|syswow64)\")\n| table _time, host, User, Image, TargetObject, Details\n| sort -_time",
              "m": "Sysmon EventCode 13 (RegistryValueSet) monitors registry modifications. Track all autostart locations: Run, RunOnce, RunServices, Winlogon Shell/Userinit, Explorer Shell Folders, and AppInit_DLLs. Filter known-legitimate entries (Program Files, System32). Alert on entries pointing to temp directories, AppData, user profiles, or encoded/obfuscated paths. Monitor both HKLM (system-wide) and HKCU (per-user). MITRE ATT&CK T1547.001.",
              "z": "Table (new Run key entries), Alert on suspicious paths, Timeline.",
              "kfp": "SSO and VPN client updates under HKCU; gold-image noise. Use path allowlists and compare to prior week baseline.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 13).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSysmon EventCode 13 (RegistryValueSet) monitors registry modifications. Track all autostart locations: Run, RunOnce, RunServices, Winlogon Shell/Userinit, Explorer Shell Folders, and AppInit_DLLs. Filter known-legitimate entries (Program Files, System32). Alert on entries pointing to temp directories, AppData, user profiles, or encoded/obfuscated paths. Monitor both HKLM (system-wide) and HKCU (per-user). MITRE ATT&CK T1547.001.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode=13\n| where match(TargetObject, \"(?i)(CurrentVersion\\\\\\\\Run|CurrentVersion\\\\\\\\RunOnce|Winlogon\\\\\\\\Shell|Winlogon\\\\\\\\Userinit|Explorer\\\\\\\\Shell Folders)\")\n| where NOT match(Details, \"(?i)(program files|windows\\\\\\\\system32|syswow64)\")\n| table _time, host, User, Image, TargetObject, Details\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Registry Run Key Persistence Monitoring** — Registry Run keys are the most common persistence mechanism for malware. Monitoring autostart registry locations detects new malware installations.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 13). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(TargetObject, \"(?i)(CurrentVersion\\\\\\\\Run|CurrentVersion\\\\\\\\RunOnce|Winlogon\\\\\\\\Shell|Winlogon\\\\\\\\Userinit|Expl…` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where NOT match(Details, \"(?i)(program files|windows\\\\\\\\system32|syswow64)\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Registry Run Key Persistence Monitoring**): table _time, host, User, Image, TargetObject, Details\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Registry Run Key Persistence Monitoring** — Registry Run keys are the most common persistence mechanism for malware. Monitoring autostart registry locations detects new malware installations.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 13). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (new Run key entries), Alert on suspicious paths, Timeline.",
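The known-false-positives note suggests comparing against a prior-week baseline; here is a first-seen sketch of that idea, assuming the search runs over at least seven days so `earliest(_time)` reflects history rather than the alert window:

```spl
index=wineventlog EventCode=13 TargetObject="*CurrentVersion\\Run*"
| stats earliest(_time) as first_seen latest(_time) as last_seen count by host, TargetObject, Details
| where first_seen >= relative_time(now(), "-24h")
| convert ctime(first_seen) ctime(last_seen)
```

Rows whose `first_seen` falls inside the last 24 hours are autorun values that did not exist in the baseline window — a much shorter review list than every registry write.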
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at the list of things Windows runs at login from the registry, so a new auto-start is not the first time you hear about a tool that should not be there on a server.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.120",
              "n": "BitLocker Recovery & Compliance Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "BitLocker protects data at rest. Monitoring recovery events detects unauthorized hardware changes, and compliance tracking ensures all endpoints are encrypted.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-BitLocker/BitLocker Management` (EventCode 770, 771, 773, 774, 775)",
              "q": "index=wineventlog source=\"*BitLocker*\" EventCode IN (770, 771, 773, 774, 775, 776, 778)\n| eval Status=case(EventCode=770,\"Protection_Off\", EventCode=771,\"Protection_Resumed\", EventCode=773,\"Volume_Recovery\", EventCode=774,\"Key_Rotated\", EventCode=775,\"Auto_Unlock_Enabled\", EventCode=776,\"Recovery_Password_Backup\", EventCode=778,\"TPM_Error\", 1=1,\"Other\")\n| stats count by host, Status, VolumeName\n| sort -count",
              "m": "Monitor BitLocker Management log for encryption status changes. Protection off (770) may indicate maintenance or attack — correlate with change tickets. Volume recovery (773) means the recovery key was needed — investigate hardware changes or TPM issues. Track recovery password backup to AD (776) for compliance. Run a scripted input querying `manage-bde -status` for real-time encryption state across all volumes. Alert on any protection suspension on servers.",
              "z": "Dashboard (encryption compliance %), Table (events), Alert on protection suspension.",
              "kfp": "Planned re-image; TPM and firmware work; vendor BitLocker tools during imaging. Distinguish server vs laptop policy; one-off recovery with help desk ticket and asset tag.",
              "refs": "[Monitor BitLocker Management log for encryption status changes. Protection off](https://splunkbase.splunk.com/app/770), [correlate with change tickets. Volume recovery](https://splunkbase.splunk.com/app/773)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-BitLocker/BitLocker Management` (EventCode 770, 771, 773, 774, 775).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor BitLocker Management log for encryption status changes. Protection off (770) may indicate maintenance or attack — correlate with change tickets. Volume recovery (773) means the recovery key was needed — investigate hardware changes or TPM issues. Track recovery password backup to AD (776) for compliance. Run a scripted input querying `manage-bde -status` for real-time encryption state across all volumes. Alert on any protection suspension on servers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"*BitLocker*\" EventCode IN (770, 771, 773, 774, 775, 776, 778)\n| eval Status=case(EventCode=770,\"Protection_Off\", EventCode=771,\"Protection_Resumed\", EventCode=773,\"Volume_Recovery\", EventCode=774,\"Key_Rotated\", EventCode=775,\"Auto_Unlock_Enabled\", EventCode=776,\"Recovery_Password_Backup\", EventCode=778,\"TPM_Error\", 1=1,\"Other\")\n| stats count by host, Status, VolumeName\n| sort -count\n```\n\nUnderstanding this SPL\n\n**BitLocker Recovery & Compliance Monitoring** — BitLocker protects data at rest. Monitoring recovery events detects unauthorized hardware changes, and compliance tracking ensures all endpoints are encrypted.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-BitLocker/BitLocker Management` (EventCode 770, 771, 773, 774, 775). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, Status, VolumeName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Dashboard (encryption compliance %), Table (events), Alert on protection suspension.\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
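Step 4 calls for an encryption-compliance percentage; a minimal sketch derived from the protection on/off events alone (770 = suspended, 771 = resumed). It only counts hosts that have logged either event, so pair it with the scripted `manage-bde -status` inventory above for true fleet coverage:

```spl
index=wineventlog source="*BitLocker*" EventCode IN (770, 771)
| eval protection=if(EventCode=771, 1, 0)
| stats latest(protection) as protected by host
| stats count as reporting_hosts sum(protected) as protected_hosts
| eval compliance_pct=round(100 * protected_hosts / reporting_hosts, 1)
```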
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the BitLocker and protection messages on a machine, so a disk that is supposed to stay encrypted is not off without someone knowing, and recovery is not a surprise mystery.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.121",
              "n": "DNS Client Query Anomalies",
              "c": "medium",
              "f": "intermediate",
              "v": "Monitoring DNS queries from Windows clients reveals C2 beacons, DNS tunneling, and DGA-based malware communicating with attacker infrastructure.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 22), `sourcetype=WinEventLog:Microsoft-Windows-DNS-Client/Operational`",
              "q": "index=wineventlog EventCode=22\n| eval domain=lower(QueryName)\n| eval domain_len=len(domain)\n| eval label_count=mvcount(split(domain, \".\"))\n| where domain_len>50 OR label_count>5\n| stats count dc(QueryName) as UniqueDomains by host, Image\n| where UniqueDomains>100 OR count>500\n| sort -UniqueDomains",
              "m": "Sysmon EventCode 22 logs DNS queries with the originating process. Detect DNS tunneling via long domain names (>50 chars), high label counts, and high-entropy subdomains. Identify DGA patterns: many unique NXDomain responses from a single process. Alert on processes making unusual DNS query volumes. Baseline per-process DNS behavior and alert on deviations.",
              "z": "Timechart (query volume by process), Table (anomalous queries), Alert on tunneling indicators.",
              "kfp": "CDNs, AV cloud lookups, and dev containers; first boot. Baseline by subnet + DHCP scope.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 22), `sourcetype=WinEventLog:Microsoft-Windows-DNS-Client/Operational`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSysmon EventCode 22 logs DNS queries with the originating process. Detect DNS tunneling via long domain names (>50 chars), high label counts, and high-entropy subdomains. Identify DGA patterns: many unique NXDomain responses from a single process. Alert on processes making unusual DNS query volumes. Baseline per-process DNS behavior and alert on deviations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode=22\n| eval domain=lower(QueryName)\n| eval domain_len=len(domain)\n| eval label_count=mvcount(split(domain, \".\"))\n| where domain_len>50 OR label_count>5\n| stats count dc(QueryName) as UniqueDomains by host, Image\n| where UniqueDomains>100 OR count>500\n| sort -UniqueDomains\n```\n\nUnderstanding this SPL\n\n**DNS Client Query Anomalies** — Monitoring DNS queries from Windows clients reveals C2 beacons, DNS tunneling, and DGA-based malware communicating with attacker infrastructure.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 22), `sourcetype=WinEventLog:Microsoft-Windows-DNS-Client/Operational`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **domain** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **domain_len** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **label_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where domain_len>50 OR label_count>5` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, Image** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where UniqueDomains>100 OR count>500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Timechart (query volume by process), Table (anomalous queries), Alert on tunneling indicators.",
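Step 1 mentions high-entropy subdomains; one way to score them is the Shannon-entropy lookup shipped with the URL Toolbox app (Splunkbase app 2734) — install that app first, and treat the `> 4` threshold as a starting point to tune against your own traffic:

```spl
index=wineventlog EventCode=22
| eval domain=lower(QueryName)
| lookup ut_shannon_lookup word as domain OUTPUT ut_shannon
| where ut_shannon > 4
| stats count avg(ut_shannon) as avg_entropy by host, Image, domain
| sort -avg_entropy
```

DGA and tunneling domains tend to score well above dictionary-word domains, so the surviving rows combine naturally with the length and label-count filters in the main search.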
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at the everyday DNS questions each client makes so a single device asking the wrong mix of names stands out the way a sick or misbehaving machine often does.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.122",
              "n": "Local Account Creation & Modification",
              "c": "critical",
              "f": "intermediate",
              "v": "Creating local accounts is a persistence technique. On domain-joined systems, local account creation is rare and suspicious.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4720, 4722, 4724, 4738)",
              "q": "index=wineventlog EventCode IN (4720, 4722, 4724, 4738) NOT TargetDomainName IN (\"NT AUTHORITY\", \"NT-AUTORITÄT\")\n| eval Action=case(EventCode=4720,\"Account_Created\", EventCode=4722,\"Account_Enabled\", EventCode=4724,\"Password_Reset\", EventCode=4738,\"Account_Changed\", 1=1,\"Other\")\n| table _time, host, Action, TargetUserName, TargetDomainName, SubjectUserName\n| sort -_time",
              "m": "Track local account creation (4720), enabling (4722), password reset (4724), and modification (4738). On domain-joined servers, local account creation is almost always suspicious. Alert on any local account creation, especially when performed by non-admin processes or via net.exe/net1.exe. Filter out managed service accounts and known automation. MITRE ATT&CK T1136.001.",
              "z": "Table (account events), Alert on creation, Timeline.",
              "kfp": "Imaging, lab, and break-glass local admins during rack; sysprep. Use naming convention and GPO for LAPS-managed locals.",
              "refs": "[Track local account creation](https://splunkbase.splunk.com/app/4720), [enabling](https://splunkbase.splunk.com/app/4722), [password reset](https://splunkbase.splunk.com/app/4724), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4720, 4722, 4724, 4738).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack local account creation (4720), enabling (4722), password reset (4724), and modification (4738). On domain-joined servers, local account creation is almost always suspicious. Alert on any local account creation, especially when performed by non-admin processes or via net.exe/net1.exe. Filter out managed service accounts and known automation. MITRE ATT&CK T1136.001.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode IN (4720, 4722, 4724, 4738) NOT TargetDomainName IN (\"NT AUTHORITY\", \"NT-AUTORITÄT\")\n| eval Action=case(EventCode=4720,\"Account_Created\", EventCode=4722,\"Account_Enabled\", EventCode=4724,\"Password_Reset\", EventCode=4738,\"Account_Changed\", 1=1,\"Other\")\n| table _time, host, Action, TargetUserName, TargetDomainName, SubjectUserName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Local Account Creation & Modification** — Creating local accounts is a persistence technique. On domain-joined systems, local account creation is rare and suspicious.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4720, 4722, 4724, 4738). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Local Account Creation & Modification**): table _time, host, Action, TargetUserName, TargetDomainName, SubjectUserName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Local Account Creation & Modification** — Creating local accounts is a persistence technique. On domain-joined systems, local account creation is rare and suspicious.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4720, 4722, 4724, 4738). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (account events), Alert on creation, Timeline.",
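A creation-only alert variant reflecting the naming-convention advice in the false-positives note; the prefixes below are placeholders for whatever your LAPS/service-account convention actually is:

```spl
index=wineventlog EventCode=4720
| where NOT match(TargetUserName, "(?i)^(laps-|svc-|img-)")
| table _time, host, TargetUserName, SubjectUserName
| sort -_time
```

Keeping the allowlist in a lookup instead of the regex makes exceptions auditable and editable without touching the saved search.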
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at who creates or renames a local sign-in on a server, so extra local accounts or surprise renames do not sit there outside your central account rules.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.123",
              "n": "Token Manipulation / Privilege Escalation",
              "c": "critical",
              "f": "intermediate",
              "v": "Token manipulation (impersonation, token duplication) allows attackers to escalate privileges. Detecting abuse of SeImpersonatePrivilege catches potato-style attacks.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4673, 4674)",
              "q": "index=wineventlog EventCode IN (4673, 4674) PrivilegeList IN (\"SeImpersonatePrivilege\", \"SeAssignPrimaryTokenPrivilege\", \"SeTcbPrivilege\", \"SeDebugPrivilege\")\n| where NOT match(ProcessName, \"(?i)(lsass|services|svchost|csrss|wininit|smss)\")\n| stats count by host, SubjectUserName, ProcessName, PrivilegeList\n| sort -count",
              "m": "Enable Audit Sensitive Privilege Use. Monitor 4673 (sensitive privilege used) and 4674 (privilege operation on privileged object). Focus on SeImpersonatePrivilege (potato attacks), SeDebugPrivilege (process injection), SeTcbPrivilege (token creation), and SeAssignPrimaryTokenPrivilege. Filter OS processes. Alert on privilege use by service accounts running web apps or databases (common potato attack targets). MITRE ATT&CK T1134.",
              "z": "Table (privilege use events), Alert on suspicious processes, Bar chart.",
              "kfp": "Backup and defrag with SeManageVolumePrivilege; vendors with documented needs. Enrich with process tree from same `_time` window.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4673, 4674).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Audit Sensitive Privilege Use. Monitor 4673 (sensitive privilege used) and 4674 (privilege operation on privileged object). Focus on SeImpersonatePrivilege (potato attacks), SeDebugPrivilege (process injection), SeTcbPrivilege (token creation), and SeAssignPrimaryTokenPrivilege. Filter OS processes. Alert on privilege use by service accounts running web apps or databases (common potato attack targets). MITRE ATT&CK T1134.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode IN (4673, 4674) PrivilegeList IN (\"SeImpersonatePrivilege\", \"SeAssignPrimaryTokenPrivilege\", \"SeTcbPrivilege\", \"SeDebugPrivilege\")\n| where NOT match(ProcessName, \"(?i)(lsass|services|svchost|csrss|wininit|smss)\")\n| stats count by host, SubjectUserName, ProcessName, PrivilegeList\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Token Manipulation / Privilege Escalation** — Token manipulation (impersonation, token duplication) allows attackers to escalate privileges. Detecting abuse of SeImpersonatePrivilege catches potato-style attacks.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4673, 4674). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where NOT match(ProcessName, \"(?i)(lsass|services|svchost|csrss|wininit|smss)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, SubjectUserName, ProcessName, PrivilegeList** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| search Authentication.user=*admin* OR Authentication.user=root\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Token Manipulation / Privilege Escalation** — Token manipulation (impersonation, token duplication) allows attackers to escalate privileges. Detecting abuse of SeImpersonatePrivilege catches potato-style attacks.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4673, 4674). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (privilege use events), Alert on suspicious processes, Bar chart.",
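Step 1 says to baseline and alert on deviations; here is a per-process hourly baseline sketch using a mean-plus-three-standard-deviations threshold — the span and multiplier are assumptions to tune against your noise level:

```spl
index=wineventlog EventCode IN (4673, 4674) PrivilegeList="SeImpersonatePrivilege"
| bin _time span=1h
| stats count by _time, host, ProcessName
| eventstats avg(count) as avg_count stdev(count) as sd_count by host, ProcessName
| where sd_count > 0 AND count > avg_count + 3 * sd_count
```

Run it over a couple of weeks of data so the per-process averages are meaningful before turning it into an alert.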
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the extra rights and hand-offs that turn a normal session into a much more powerful one, so a small foothold is harder to grow into a full take-over in one step.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| search Authentication.user=*admin* OR Authentication.user=root",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.124",
              "n": "Process Injection Detection (Sysmon)",
              "c": "critical",
              "f": "expert",
              "v": "Process injection hides malicious code inside legitimate processes. Detecting injection techniques (CreateRemoteThread, APC, process hollowing) catches advanced malware.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 8, 10)",
              "q": "index=wineventlog EventCode=8\n| where NOT match(SourceImage, \"(?i)(csrss|MsMpEng|SentinelAgent|CrowdStrike)\")\n| eval InjectionTarget=TargetImage\n| table _time, host, SourceImage, InjectionTarget, SourceUser, StartModule, StartFunction\n| append [search index=wineventlog EventCode=10 GrantedAccess IN (\"0x1FFFFF\",\"0x801\",\"0x1FFB\") | where NOT match(SourceImage, \"(?i)(csrss|MsMpEng|lsass)\") | table _time, host, SourceImage, TargetImage, SourceUser, GrantedAccess]\n| sort -_time",
              "m": "Sysmon EventCode 8 (CreateRemoteThread) detects thread injection into remote processes. Filter legitimate EDR/AV injections. EventCode 10 (ProcessAccess) with specific access masks (0x1FFFFF=ALL_ACCESS, 0x801=VM_WRITE+QUERY) detects memory writes for process hollowing. Alert on any remote thread creation targeting system processes (explorer.exe, svchost.exe, services.exe). Correlate with EventCode 1 for full process chain. MITRE ATT&CK T1055.",
              "z": "Table (injection events), Network diagram (source→target), Alert on detection.",
              "kfp": "EDR, debuggers, and .NET JIT in dev; some games and GPU tools. Baseline on gold images; high severity on server roles with no dev toolchain.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 8, 10).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSysmon EventCode 8 (CreateRemoteThread) detects thread injection into remote processes. Filter legitimate EDR/AV injections. EventCode 10 (ProcessAccess) with specific access masks (0x1FFFFF=ALL_ACCESS, 0x801=VM_WRITE+QUERY) detects memory writes for process hollowing. Alert on any remote thread creation targeting system processes (explorer.exe, svchost.exe, services.exe). Correlate with EventCode 1 for full process chain. MITRE ATT&CK T1055.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode=8\n| where NOT match(SourceImage, \"(?i)(csrss|MsMpEng|SentinelAgent|CrowdStrike)\")\n| eval InjectionTarget=TargetImage\n| table _time, host, SourceImage, InjectionTarget, SourceUser, StartModule, StartFunction\n| append [search index=wineventlog EventCode=10 GrantedAccess IN (\"0x1FFFFF\",\"0x801\",\"0x1FFB\") | where NOT match(SourceImage, \"(?i)(csrss|MsMpEng|lsass)\") | table _time, host, SourceImage, TargetImage, SourceUser, GrantedAccess]\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Process Injection Detection (Sysmon)** — Process injection hides malicious code inside legitimate processes. Detecting injection techniques (CreateRemoteThread, APC, process hollowing) catches advanced malware.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 8, 10). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where NOT match(SourceImage, \"(?i)(csrss|MsMpEng|SentinelAgent|CrowdStrike)\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **InjectionTarget** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Process Injection Detection (Sysmon)**): table _time, host, SourceImage, InjectionTarget, SourceUser, StartModule, StartFunction\n• Appends rows from a subsearch with `append`.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (injection events), Network diagram (source→target), Alert on detection.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the extra detail from Sysmon around process and memory use so a jump from normal code into another program’s space is easier to see when you do not have full agent coverage everywhere.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.125",
              "n": "Cluster Shared Volume (CSV) Health",
              "c": "high",
              "f": "intermediate",
              "v": "Cluster Shared Volumes underpin Hyper-V and SQL Server failover clusters. CSV failures cause VM/database unavailability across the cluster.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-FailoverClustering/Operational`",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-FailoverClustering/Operational\" EventCode IN (5120, 5121, 5140, 5142, 5143)\n| eval Status=case(EventCode=5120,\"CSV_Online\", EventCode=5121,\"CSV_Offline\", EventCode=5140,\"CSV_Redirected\", EventCode=5142,\"CSV_IO_Paused\", EventCode=5143,\"CSV_IO_Resumed\", 1=1,\"Other\")\n| stats count latest(_time) as LastEvent by host, VolumeName, Status\n| where Status IN (\"CSV_Offline\", \"CSV_Redirected\", \"CSV_IO_Paused\")\n| sort -LastEvent",
              "m": "Monitor Failover Clustering Operational log for CSV state changes. CSV Offline (5121) is critical — VMs will fail. CSV Redirected (5140) means I/O is going through another node (degraded performance). CSV I/O Paused (5142) freezes all VMs on that volume. Alert immediately on offline and paused states. Monitor CSV latency via Perfmon: Cluster CSV File System counters. Track cluster node membership changes (1069/1070/1135).",
              "z": "Status dashboard (CSV states), Timechart (state changes), Alert on failures.",
              "kfp": "Storage maintenance, redirect-on-write, and node drain; chkdsk on CSV. Check cluster `Quorum` and `PhysicalDisk` resource state in parallel.",
              "refs": "[Monitor Failover Clustering Operational log for CSV state changes. CSV Offline](https://splunkbase.splunk.com/app/5121), [VMs will fail. CSV Redirected](https://splunkbase.splunk.com/app/5140), [CSV I/O Paused](https://splunkbase.splunk.com/app/5142)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-FailoverClustering/Operational`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Failover Clustering Operational log for CSV state changes. CSV Offline (5121) is critical — VMs will fail. CSV Redirected (5140) means I/O is going through another node (degraded performance). CSV I/O Paused (5142) freezes all VMs on that volume. Alert immediately on offline and paused states. Monitor CSV latency via Perfmon: Cluster CSV File System counters. Track cluster node membership changes (1069/1070/1135).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-FailoverClustering/Operational\" EventCode IN (5120, 5121, 5140, 5142, 5143)\n| eval Status=case(EventCode=5120,\"CSV_Online\", EventCode=5121,\"CSV_Offline\", EventCode=5140,\"CSV_Redirected\", EventCode=5142,\"CSV_IO_Paused\", EventCode=5143,\"CSV_IO_Resumed\", 1=1,\"Other\")\n| stats count latest(_time) as LastEvent by host, VolumeName, Status\n| where Status IN (\"CSV_Offline\", \"CSV_Redirected\", \"CSV_IO_Paused\")\n| sort -LastEvent\n```\n\nUnderstanding this SPL\n\n**Cluster Shared Volume (CSV) Health** — Cluster Shared Volumes underpin Hyper-V and SQL Server failover clusters. CSV failures cause VM/database unavailability across the cluster.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-FailoverClustering/Operational`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, VolumeName, Status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where Status IN (\"CSV_Offline\", \"CSV_Redirected\", \"CSV_IO_Paused\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status dashboard (CSV states), Timechart (state changes), Alert on failures.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at the shared disk space that many virtual machines use at once, so a storage hiccup on that shared volume is seen before a whole cluster of services falls over together.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.126",
              "n": "DCOM Activation Failures",
              "c": "medium",
              "f": "intermediate",
              "v": "DCOM failures break distributed applications, WMI remote management, and SCCM client operations. Monitoring identifies misconfigured permissions and network issues.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:System` (EventCode 10016)",
              "q": "index=wineventlog source=\"WinEventLog:System\" EventCode=10016\n| rex field=Message \"CLSID\\s+(?<CLSID>\\{[^}]+\\}).*APPID\\s+(?<APPID>\\{[^}]+\\})\"\n| stats count by host, CLSID, APPID\n| where count>10\n| sort -count",
              "m": "DCOM activation errors (10016) are common but mostly benign. Focus on recurring errors that affect application functionality. Map CLSIDs to application names to identify impacted services. Filter known-benign CLSIDs (RuntimeBroker, PerAppRuntimeBroker, ShellServiceHost). Alert on DCOM errors affecting SCCM ({4991D34B}), WMI ({76A64158}), or custom line-of-business applications. Track error count trends — sudden spikes indicate configuration changes.",
              "z": "Bar chart (top CLSIDs), Timechart (error trend), Table (details).",
              "kfp": "Remote WMI/COM for monitoring, SCOM, and WMI filters; Exchange health. Whitelist by `AppID` to known services with tickets.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:System` (EventCode 10016).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDCOM activation errors (10016) are common but mostly benign. Focus on recurring errors that affect application functionality. Map CLSIDs to application names to identify impacted services. Filter known-benign CLSIDs (RuntimeBroker, PerAppRuntimeBroker, ShellServiceHost). Alert on DCOM errors affecting SCCM ({4991D34B}), WMI ({76A64158}), or custom line-of-business applications. Track error count trends — sudden spikes indicate configuration changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:System\" EventCode=10016\n| rex field=Message \"CLSID\\s+(?<CLSID>\\{[^}]+\\}).*APPID\\s+(?<APPID>\\{[^}]+\\})\"\n| stats count by host, CLSID, APPID\n| where count>10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**DCOM Activation Failures** — DCOM failures break distributed applications, WMI remote management, and SCCM client operations. Monitoring identifies misconfigured permissions and network issues.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (EventCode 10016). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, CLSID, APPID** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DCOM Activation Failures** — DCOM failures break distributed applications, WMI remote management, and SCCM client operations. Monitoring identifies misconfigured permissions and network issues.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (EventCode 10016). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Services` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top CLSIDs), Timechart (error trend), Table (details).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at failures when one program on Windows asks another to start through the old shared-component path, so both everyday misconfiguration and attack-style abuse are easier to see.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Services\n  by Services.dest Services.name Services.status span=5m\n| search Services.status!=\"running\"",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.127",
              "n": "Automatic Windows Update Compliance",
              "c": "high",
              "f": "advanced",
              "v": "Unpatched systems are the primary attack surface. Tracking Windows Update status across all systems ensures timely patching and compliance reporting.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-WindowsUpdateClient/Operational`",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-WindowsUpdateClient/Operational\" EventCode IN (19, 20, 25, 31, 35)\n| eval Status=case(EventCode=19,\"Install_Success\", EventCode=20,\"Install_Failed\", EventCode=25,\"Restart_Required\", EventCode=31,\"Download_Failed\", EventCode=35,\"Download_Success\", 1=1,\"Other\")\n| stats latest(_time) as LastUpdate latest(Status) as LastStatus count(eval(Status=\"Install_Failed\")) as FailCount by host\n| eval DaysSinceUpdate=round((now()-LastUpdate)/86400, 0)\n| where DaysSinceUpdate>30 OR FailCount>0\n| sort -DaysSinceUpdate",
              "m": "Monitor Windows Update Client Operational log. Track successful installs (19), failed installs (20), restart required (25), download failures (31). Calculate days since last successful update for each host. Alert on: systems not updated in 30+ days, repeated installation failures, and systems stuck in \"restart required\" state. Supplement with `wmic qfe list` scripted input for installed KB inventory. Essential for vulnerability management and audit compliance.",
              "z": "Table (compliance status), Single value (% compliant), Bar chart (days since update).",
              "kfp": "Air-gapped or change-frozen servers; WSUS deferrals; feature updates that need multiple reboots. Compare to maintenance groups; allow longer `DaysSinceUpdate` for non-Internet hosts only with exception records.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-WindowsUpdateClient/Operational`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Windows Update Client Operational log. Track successful installs (19), failed installs (20), restart required (25), download failures (31). Calculate days since last successful update for each host. Alert on: systems not updated in 30+ days, repeated installation failures, and systems stuck in \"restart required\" state. Supplement with `wmic qfe list` scripted input for installed KB inventory. Essential for vulnerability management and audit compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-WindowsUpdateClient/Operational\" EventCode IN (19, 20, 25, 31, 35)\n| eval Status=case(EventCode=19,\"Install_Success\", EventCode=20,\"Install_Failed\", EventCode=25,\"Restart_Required\", EventCode=31,\"Download_Failed\", EventCode=35,\"Download_Success\", 1=1,\"Other\")\n| stats latest(_time) as LastUpdate latest(Status) as LastStatus count(eval(Status=\"Install_Failed\")) as FailCount by host\n| eval DaysSinceUpdate=round((now()-LastUpdate)/86400, 0)\n| where DaysSinceUpdate>30 OR FailCount>0\n| sort -DaysSinceUpdate\n```\n\nUnderstanding this SPL\n\n**Automatic Windows Update Compliance** — Unpatched systems are the primary attack surface. Tracking Windows Update status across all systems ensures timely patching and compliance reporting.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-WindowsUpdateClient/Operational`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **DaysSinceUpdate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where DaysSinceUpdate>30 OR FailCount>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (compliance status), Single value (% compliant), Bar chart (days since update).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at how current Windows update is on each computer, so out-of-date machines are visible for both security follow-up and honest answers in an audit, not a surprise after an incident.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.128",
              "n": "Service Account Logon Anomalies",
              "c": "critical",
              "f": "intermediate",
              "v": "Compromised service accounts grant persistent access and often have elevated privileges. Detecting anomalous service account behavior catches credential theft early.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4624, 4625)",
              "q": "index=wineventlog EventCode=4624 Logon_Type IN (2, 10, 11)\n| lookup service_accounts TargetUserName OUTPUT is_service_account\n| where is_service_account=\"yes\"\n| eval src=coalesce(Source_Network_Address, IpAddress)\n| stats count dc(host) as TargetHosts values(Logon_Type) as LogonTypes by TargetUserName, src\n| where LogonTypes!=5 AND LogonTypes!=3\n| sort -count",
              "m": "Define a lookup of known service accounts. Service accounts should only log on with Type 5 (Service) or Type 3 (Network) from expected sources. Alert on interactive logons (Type 2/10/11) by service accounts — this indicates credential compromise and human use. Track source IPs and target hosts — service accounts should access a consistent set of systems. Alert on new source IPs or target hosts for any service account.",
              "z": "Table (anomalous logons), Alert on interactive service account logon, Network diagram.",
              "kfp": "",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4624, 4625).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine a lookup of known service accounts. Service accounts should only log on with Type 5 (Service) or Type 3 (Network) from expected sources. Alert on interactive logons (Type 2/10/11) by service accounts — this indicates credential compromise and human use. Track source IPs and target hosts — service accounts should access a consistent set of systems. Alert on new source IPs or target hosts for any service account.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode=4624 Logon_Type IN (2, 10, 11)\n| lookup service_accounts TargetUserName OUTPUT is_service_account\n| where is_service_account=\"yes\"\n| eval src=coalesce(Source_Network_Address, IpAddress)\n| stats count dc(host) as TargetHosts values(Logon_Type) as LogonTypes by TargetUserName, src\n| where LogonTypes!=5 AND LogonTypes!=3\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Service Account Logon Anomalies** — Compromised service accounts grant persistent access and often have elevated privileges. Detecting anomalous service account behavior catches credential theft early.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4624, 4625). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where is_service_account=\"yes\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **src** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by TargetUserName, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where LogonTypes!=5 AND LogonTypes!=3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Service Account Logon Anomalies** — Compromised service accounts grant persistent access and often have elevated privileges. Detecting anomalous service account behavior catches credential theft early.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4624, 4625). 
**App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalous logons), Alert on interactive service account logon, Network diagram.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Compromised service accounts grant persistent access and often have elevated privileges — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.129",
              "n": "Sysmon Driver/Image Load Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Monitoring driver and DLL loads catches rootkits, vulnerable drivers, and DLL side-loading attacks that evade process-level monitoring.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 6, 7)",
              "q": "index=wineventlog EventCode=6 Signed=\"false\"\n| table _time, host, ImageLoaded, Hashes, Signature, SignatureStatus\n| sort -_time\n| append [search index=wineventlog EventCode=7 Signed=\"false\" | where NOT match(ImageLoaded, \"(?i)(windows\\\\\\\\system32|program files)\") | table _time, host, Image, ImageLoaded, Hashes, SignatureStatus]",
              "m": "Sysmon EventCode 6 (DriverLoad) monitors kernel driver loads. Alert on unsigned drivers — all legitimate drivers should be signed. EventCode 7 (ImageLoad) monitors DLL loads (high volume — use targeted config). Focus on unsigned DLLs loaded from unusual paths. Track BYOVD (Bring Your Own Vulnerable Driver) attacks by maintaining a list of known-vulnerable driver hashes. MITRE ATT&CK T1068, T1574.002.",
              "z": "Table (unsigned loads), Alert on unsigned kernel drivers, Timechart.",
              "kfp": "",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 6, 7).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSysmon EventCode 6 (DriverLoad) monitors kernel driver loads. Alert on unsigned drivers — all legitimate drivers should be signed. EventCode 7 (ImageLoad) monitors DLL loads (high volume — use targeted config). Focus on unsigned DLLs loaded from unusual paths. Track BYOVD (Bring Your Own Vulnerable Driver) attacks by maintaining a list of known-vulnerable driver hashes. MITRE ATT&CK T1068, T1574.002.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog EventCode=6 Signed=\"false\"\n| table _time, host, ImageLoaded, Hashes, Signature, SignatureStatus\n| sort -_time\n| append [search index=wineventlog EventCode=7 Signed=\"false\" | where NOT match(ImageLoaded, \"(?i)(windows\\\\\\\\system32|program files)\") | table _time, host, Image, ImageLoaded, Hashes, SignatureStatus]\n```\n\nUnderstanding this SPL\n\n**Sysmon Driver/Image Load Monitoring** — Monitoring driver and DLL loads catches rootkits, vulnerable drivers, and DLL side-loading attacks that evade process-level monitoring.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode 6, 7). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Sysmon Driver/Image Load Monitoring**): table _time, host, ImageLoaded, Hashes, Signature, SignatureStatus\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Appends rows from a subsearch with `append`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unsigned loads), Alert on unsigned kernel drivers, Timechart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Monitoring driver and DLL loads catches rootkits, vulnerable drivers, and DLL side-loading attacks that evade process-level monitoring — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.130",
              "n": "Scheduled Task Modification for Persistence",
              "c": "critical",
              "f": "advanced",
              "v": "Modifying existing scheduled tasks is stealthier than creating new ones. Attackers replace legitimate task actions to achieve persistence without new artifacts.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-TaskScheduler/Operational` (EventCode 140, 141, 142)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-TaskScheduler/Operational\" EventCode IN (140, 141, 142)\n| eval Action=case(EventCode=140,\"Task_Updated\", EventCode=141,\"Task_Deleted\", EventCode=142,\"Task_Disabled\", 1=1,\"Other\")\n| table _time, host, TaskName, Action, UserContext\n| where NOT match(TaskName, \"(?i)(\\\\\\\\Microsoft\\\\\\\\Windows\\\\\\\\)\")\n| sort -_time",
              "m": "Monitor Task Scheduler Operational log for task modifications (140), deletions (141), and disabling (142). Focus on non-Microsoft tasks being modified. Correlate with Sysmon process creation (EventCode 1) to identify what tool made the change. Alert on modifications to security-related tasks (AV scans, backup tasks). Track task action changes — replacing a legitimate executable with malware. Maintain baseline of critical task configurations.",
              "z": "Table (task changes), Alert on modification of critical tasks, Timeline.",
              "kfp": "",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-TaskScheduler/Operational` (EventCode 140, 141, 142).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Task Scheduler Operational log for task modifications (140), deletions (141), and disabling (142). Focus on non-Microsoft tasks being modified. Correlate with Sysmon process creation (EventCode 1) to identify what tool made the change. Alert on modifications to security-related tasks (AV scans, backup tasks). Track task action changes — replacing a legitimate executable with malware. Maintain baseline of critical task configurations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-TaskScheduler/Operational\" EventCode IN (140, 141, 142)\n| eval Action=case(EventCode=140,\"Task_Updated\", EventCode=141,\"Task_Deleted\", EventCode=142,\"Task_Disabled\", 1=1,\"Other\")\n| table _time, host, TaskName, Action, UserContext\n| where NOT match(TaskName, \"(?i)(\\\\\\\\Microsoft\\\\\\\\Windows\\\\\\\\)\")\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Scheduled Task Modification for Persistence** — Modifying existing scheduled tasks is stealthier than creating new ones. Attackers replace legitimate task actions to achieve persistence without new artifacts.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-TaskScheduler/Operational` (EventCode 140, 141, 142). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **Action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Scheduled Task Modification for Persistence**): table _time, host, TaskName, Action, UserContext\n• Filters the current rows with `where NOT match(TaskName, \"(?i)(\\\\\\\\Microsoft\\\\\\\\Windows\\\\\\\\)\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Scheduled Task Modification for Persistence** — Modifying existing scheduled tasks is stealthier than creating new ones. Attackers replace legitimate task actions to achieve persistence without new artifacts.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-TaskScheduler/Operational` (EventCode 140, 141, 142). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (task changes), Alert on modification of critical tasks, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Modifying existing scheduled tasks is stealthier than creating new ones — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.131",
              "n": "Windows Print Spooler Health",
              "c": "medium",
              "f": "intermediate",
              "v": "Spooler service state, queue depth, and stalled print jobs affect printing availability. Print Spooler failures block all printing on the host.",
              "t": "`Splunk_TA_windows`",
              "d": "`WinEventLog:System` (Event ID 7036 for spooler), Perfmon (Print Queue counters)",
              "q": "index=perfmon sourcetype=Perfmon:PrintQueue host=* counter=\"Jobs\"\n| stats latest(Value) as queue_depth by host, instance\n| where queue_depth > 50\n| sort -queue_depth",
              "m": "Enable `WinEventLog:System` input for EventCode 7036 (service state change). Filter for ServiceName=Spooler. Configure Perfmon input for Print Queue object: counter=Jobs (queue depth). Run every 60 seconds. Alert when Spooler stops; alert when queue depth exceeds 50 for sustained period (stalled jobs).",
              "z": "Table (host, spooler state, queue depth), Single value (failed spooler count), Line chart (queue depth over time).",
              "kfp": "",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `WinEventLog:System` (Event ID 7036 for spooler), Perfmon (Print Queue counters).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable `WinEventLog:System` input for EventCode 7036 (service state change). Filter for ServiceName=Spooler. Configure Perfmon input for Print Queue object: counter=Jobs (queue depth). Run every 60 seconds. Alert when Spooler stops; alert when queue depth exceeds 50 for sustained period (stalled jobs).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=Perfmon:PrintQueue host=* counter=\"Jobs\"\n| stats latest(Value) as queue_depth by host, instance\n| where queue_depth > 50\n| sort -queue_depth\n```\n\nUnderstanding this SPL\n\n**Windows Print Spooler Health** — Spooler service state, queue depth, and stalled print jobs affect printing availability. Print Spooler failures block all printing on the host.\n\nDocumented **Data sources**: `WinEventLog:System` (Event ID 7036 for spooler), Perfmon (Print Queue counters). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:PrintQueue. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=Perfmon:PrintQueue. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, instance** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where queue_depth > 50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, spooler state, queue depth), Single value (failed spooler count), Line chart (queue depth over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Spooler service state, queue depth, and stalled print jobs affect printing availability — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.132",
              "n": "Windows Scheduled Task Failures",
              "c": "medium",
              "f": "intermediate",
              "v": "Detect tasks that failed to run or returned non-zero result codes. Indicates missed backups, sync jobs, or automation failures.",
              "t": "`Splunk_TA_windows`",
              "d": "`WinEventLog:Microsoft-Windows-TaskScheduler/Operational` (EventCode 201, 101)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-TaskScheduler/Operational\" EventCode=201 ResultCode!=0\n| stats count by host, TaskName\n| sort -count",
              "m": "Enable Task Scheduler Operational log input. EventCode 201 = task completed; EventCode 101 = task started. Parse ResultCode (0 = success). Alert on ResultCode != 0. Common codes: 0x1 (incorrect function), 0x2 (file not found), 0x5 (access denied). Exclude known flaky tasks from alert if acceptable.",
              "z": "Table (task, host, result code), Alert on failed tasks, Bar chart (failed task count by task name).",
              "kfp": "",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `WinEventLog:Microsoft-Windows-TaskScheduler/Operational` (EventCode 201, 101).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Task Scheduler Operational log input. EventCode 201 = task completed; EventCode 101 = task started. Parse ResultCode (0 = success). Alert on ResultCode != 0. Common codes: 0x1 (incorrect function), 0x2 (file not found), 0x5 (access denied). Exclude known flaky tasks from alert if acceptable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-TaskScheduler/Operational\" EventCode=201 ResultCode!=0\n| stats count by host, TaskName\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Windows Scheduled Task Failures** — Detect tasks that failed to run or returned non-zero result codes. Indicates missed backups, sync jobs, or automation failures.\n\nDocumented **Data sources**: `WinEventLog:Microsoft-Windows-TaskScheduler/Operational` (EventCode 201, 101). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, TaskName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (task, host, result code), Alert on failed tasks, Bar chart (failed task count by task name).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Detect tasks that failed to run or returned non-zero result codes — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.133",
              "n": "Windows WMI Repository Health",
              "c": "medium",
              "f": "intermediate",
              "v": "WMI corruption breaks many monitoring agents, SCCM, and management tools. Detecting broken WMI enables early remediation before dependent systems fail.",
              "t": "`Splunk_TA_windows` (scripted input)",
              "d": "`winmgmt /verifyrepository` output",
              "q": "index=os sourcetype=wmi_verify host=*\n| search \"inconsistent\" OR \"corrupt\" OR \"failed\"\n| table _time host _raw",
              "m": "Create a scripted input that runs `winmgmt /verifyrepository` and captures output. Parse for \"repository is consistent\" (success) vs \"inconsistent\" or \"corrupt\". Run daily or weekly. Alert immediately on inconsistent. Remediation: `winmgmt /resetrepository` (requires reboot). WMI issues often cause perfmon and other agent inputs to fail.",
              "z": "Table (host, WMI status), Single value (hosts with WMI issues), Alert.",
              "kfp": "",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (scripted input).\n• Ensure the following data sources are available: `winmgmt /verifyrepository` output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that runs `winmgmt /verifyrepository` and captures output. Parse for \"repository is consistent\" (success) vs \"inconsistent\" or \"corrupt\". Run daily or weekly. Alert immediately on inconsistent. Remediation: `winmgmt /resetrepository` (requires reboot). WMI issues often cause perfmon and other agent inputs to fail.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=wmi_verify host=*\n| search \"inconsistent\" OR \"corrupt\" OR \"failed\"\n| table _time host _raw\n```\n\nUnderstanding this SPL\n\n**Windows WMI Repository Health** — WMI corruption breaks many monitoring agents, SCCM, and management tools. Detecting broken WMI enables early remediation before dependent systems fail.\n\nDocumented **Data sources**: `winmgmt /verifyrepository` output. **App/TA** (typical add-on context): `Splunk_TA_windows` (scripted input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: wmi_verify. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=wmi_verify. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Windows WMI Repository Health**): table _time host _raw\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, WMI status), Single value (hosts with WMI issues), Alert.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "WMI corruption breaks many monitoring agents, SCCM, and management tools — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.2.134",
              "n": "Windows Pending Reboot Detection",
              "c": "medium",
              "f": "beginner",
              "v": "Detect servers waiting for reboot after Windows updates. Pending reboots cause inconsistent behavior and can block security patch application.",
              "t": "`Splunk_TA_windows` (scripted input)",
              "d": "Registry keys (RebootRequired, PendingFileRenameOperations)",
              "q": "index=os sourcetype=windows_pending_reboot host=*\n| stats latest(reboot_pending) as pending by host\n| search pending=\"true\"\n| stats count as pending_count",
              "m": "Create a scripted input that checks registry: `HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Component Based Servicing\\RebootPending`, `HKLM\\SYSTEM\\CurrentControlSet\\Control\\Session Manager\\PendingFileRenameOperations`, `HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\WindowsUpdate\\Auto Update\\RebootRequired`. Set reboot_pending=true if any exist. Run every 60-300 seconds. Report reason (e.g., \"Windows Update\", \"Component Based Servicing\"). Include in change management dashboard.",
              "z": "Table (host, pending, reason), Single value (pending reboot count), Pie chart (pending vs. current).",
              "kfp": "",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (scripted input).\n• Ensure the following data sources are available: Registry keys (RebootRequired, PendingFileRenameOperations).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that checks registry: `HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Component Based Servicing\\RebootPending`, `HKLM\\SYSTEM\\CurrentControlSet\\Control\\Session Manager\\PendingFileRenameOperations`, `HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\WindowsUpdate\\Auto Update\\RebootRequired`. Set reboot_pending=true if any exist. Run every 60-300 seconds. Report reason (e.g., \"Windows Update\", \"Component Based Servicing\"). Include in change management dashboard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=windows_pending_reboot host=*\n| stats latest(reboot_pending) as pending by host\n| search pending=\"true\"\n| stats count as pending_count\n```\n\nUnderstanding this SPL\n\n**Windows Pending Reboot Detection** — Detect servers waiting for reboot after Windows updates. Pending reboots cause inconsistent behavior and can block security patch application.\n\nDocumented **Data sources**: Registry keys (RebootRequired, PendingFileRenameOperations). **App/TA** (typical add-on context): `Splunk_TA_windows` (scripted input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: windows_pending_reboot. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=windows_pending_reboot. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, pending, reason), Single value (pending reboot count), Pie chart (pending vs. current).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Detect servers waiting for reboot after Windows updates — so you find out before users do when something is slowing down or breaking.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.4,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 127,
            "none": 0
          }
        },
        {
          "i": "1.3",
          "n": "macOS Endpoints",
          "u": [
            {
              "i": "1.3.1",
              "n": "System Resource Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Endpoint performance visibility helps IT support triage user complaints and identify machines needing replacement or upgrades.",
              "t": "Splunk UF for macOS, custom scripted inputs",
              "d": "Custom scripted inputs (`top -l 1`, `vm_stat`, `df`)",
              "q": "index=os sourcetype=macos_top host=*\n| stats latest(cpu_pct) as cpu, latest(mem_pct) as memory by host\n| where cpu > 80 OR memory > 90",
              "m": "Install Splunk UF on macOS endpoints. Create scripted inputs for `top -l 1 -s 0`, `vm_stat`, and `df -h`. Run every 60-300 seconds. Parse key metrics.",
              "z": "Table of endpoints, Gauge panels, Line chart trending.",
              "kfp": "Sustained high memory during large creative exports or high CPU during operating-system updates, backups, or indexing jobs. A machine can briefly pass thresholds with no user-facing issue; correlate with maintenance windows and known heavy applications.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk UF for macOS, custom scripted inputs.\n• Ensure the following data sources are available: Custom scripted inputs (`top -l 1`, `vm_stat`, `df`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall Splunk UF on macOS endpoints. Create scripted inputs for `top -l 1 -s 0`, `vm_stat`, and `df -h`. Run every 60-300 seconds. Parse key metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=macos_top host=*\n| stats latest(cpu_pct) as cpu, latest(mem_pct) as memory by host\n| where cpu > 80 OR memory > 90\n```\n\nUnderstanding this SPL\n\n**System Resource Monitoring** — Endpoint performance visibility helps IT support triage user complaints and identify machines needing replacement or upgrades.\n\nDocumented **Data sources**: Custom scripted inputs (`top -l 1`, `vm_stat`, `df`). **App/TA** (typical add-on context): Splunk UF for macOS, custom scripted inputs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: macos_top. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=macos_top. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cpu > 80 OR memory > 90` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of endpoints, Gauge panels, Line chart trending.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch each Mac’s processor and memory use so we can see which ones are under heavy load before people complain that their computer feels slow or frozen.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "macos"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.3.2",
              "n": "FileVault Encryption Status",
              "c": "high",
              "f": "intermediate",
              "v": "Unencrypted endpoints are a data breach risk if lost or stolen. Compliance requirement for most security frameworks (SOC2, ISO27001, PCI).",
              "t": "Splunk UF, custom scripted input",
              "d": "Custom scripted input (`fdesetup status`)",
              "q": "index=os sourcetype=macos_filevault host=*\n| stats latest(status) as fv_status by host\n| where fv_status!=\"FileVault is On.\"",
              "m": "Create a scripted input: `fdesetup status`. Run daily. Alert on any endpoint where FileVault is not enabled. Feed into compliance dashboard.",
              "z": "Pie chart (encrypted vs. not), Table of non-compliant hosts, Single value (compliance %).",
              "kfp": "Recovery or in-progress FileVault enablement, deferred encryption during imaging, and loaner machines excluded by policy can show as non-compliant until a reboot or profile finishes. Parse the exact `fdesetup` string your fleet produces and allow approved exceptions in a reference list.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk UF, custom scripted input.\n• Ensure the following data sources are available: Custom scripted input (`fdesetup status`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input: `fdesetup status`. Run daily. Alert on any endpoint where FileVault is not enabled. Feed into compliance dashboard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=macos_filevault host=*\n| stats latest(status) as fv_status by host\n| where fv_status!=\"FileVault is On.\"\n```\n\nUnderstanding this SPL\n\n**FileVault Encryption Status** — Unencrypted endpoints are a data breach risk if lost or stolen. Compliance requirement for most security frameworks (SOC2, ISO27001, PCI).\n\nDocumented **Data sources**: Custom scripted input (`fdesetup status`). **App/TA** (typical add-on context): Splunk UF, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: macos_filevault. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=macos_filevault. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fv_status!=\"FileVault is On.\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (encrypted vs. not), Table of non-compliant hosts, Single value (compliance %).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check whether each Mac’s disk is encrypted so a lost or stolen laptop is much harder to read if someone else gets hold of it.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.3.3",
              "n": "Gatekeeper and SIP Status",
              "c": "medium",
              "f": "intermediate",
              "v": "Disabled Gatekeeper or System Integrity Protection weakens macOS security posture. May indicate developer override or tampering.",
              "t": "Splunk UF, custom scripted input",
              "d": "Custom scripted inputs (`spctl --status`, `csrutil status`)",
              "q": "index=os sourcetype=macos_security host=*\n| stats latest(gatekeeper) as gk, latest(sip) as sip by host\n| where gk!=\"enabled\" OR sip!=\"enabled\"",
              "m": "Scripted inputs for `spctl --status` and `csrutil status`. Run daily. Dashboard showing fleet-wide compliance.",
              "z": "Pie chart (compliant vs. not), Table of non-compliant endpoints.",
              "kfp": "Developers and security researchers sometimes run with protections relaxed; lab or build systems may be in policy. Field values depend on your parser (for example `enabled` vs. `disabled` strings); tune **where** to match exact output. Recovery mode and macOS major upgrades can temporarily change `csrutil` reporting until a reboot.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk UF, custom scripted input.\n• Ensure the following data sources are available: Custom scripted inputs (`spctl --status`, `csrutil status`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted inputs for `spctl --status` and `csrutil status`. Run daily. Dashboard showing fleet-wide compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=macos_security host=*\n| stats latest(gatekeeper) as gk, latest(sip) as sip by host\n| where gk!=\"enabled\" OR sip!=\"enabled\"\n```\n\nUnderstanding this SPL\n\n**Gatekeeper and SIP Status** — Disabled Gatekeeper or System Integrity Protection weakens macOS security posture. May indicate developer override or tampering.\n\nDocumented **Data sources**: Custom scripted inputs (`spctl --status`, `csrutil status`). **App/TA** (typical add-on context): Splunk UF, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: macos_security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=macos_security. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where gk!=\"enabled\" OR sip!=\"enabled\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (compliant vs. not), Table of non-compliant endpoints.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We make sure the Mac’s built-in protections for what software can run and what the system can change are still on, so the machine is not left wide open to tampering or sneaky installs.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.3.4",
              "n": "Software Update Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Unpatched macOS endpoints are vulnerable. Tracking update levels across the fleet supports vulnerability management.",
              "t": "Splunk UF, custom scripted input",
              "d": "Custom scripted input (`softwareupdate -l`, `sw_vers`)",
              "q": "index=os sourcetype=macos_sw_vers host=*\n| stats latest(ProductVersion) as os_version by host\n| eval is_current = if(os_version >= \"14.3\", \"Yes\", \"No\")\n| stats count by is_current",
              "m": "Scripted input for `sw_vers` (weekly) and `softwareupdate -l` (daily). Track OS versions and pending updates. Alert when critical security updates are pending >7 days.",
              "z": "Table (host, OS version, pending updates), Pie chart (version distribution).",
              "kfp": "The example cut-off (`14.3`) is illustrative; replace with your org’s minimum supported build. Deferral profiles, betas, and long-lived developer seeds can look ‘non-current’ when they are still allowed. String comparison of version numbers can mis-rank; prefer a lookup table of approved builds for production.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk UF, custom scripted input.\n• Ensure the following data sources are available: Custom scripted input (`softwareupdate -l`, `sw_vers`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted input for `sw_vers` (weekly) and `softwareupdate -l` (daily). Track OS versions and pending updates. Alert when critical security updates are pending >7 days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=macos_sw_vers host=*\n| stats latest(ProductVersion) as os_version by host\n| eval is_current = if(os_version >= \"14.3\", \"Yes\", \"No\")\n| stats count by is_current\n```\n\nUnderstanding this SPL\n\n**Software Update Compliance** — Unpatched macOS endpoints are vulnerable. Tracking update levels across the fleet supports vulnerability management.\n\nDocumented **Data sources**: Custom scripted input (`softwareupdate -l`, `sw_vers`). **App/TA** (typical add-on context): Splunk UF, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: macos_sw_vers. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=macos_sw_vers. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **is_current** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by is_current** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, OS version, pending updates), Pie chart (version distribution).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track which version of the system each Mac is running so you can see which ones still need a security update before a known problem hits your whole team.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "15",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that APRA CPS 234 15 (Policy framework) is enforced — Splunk UC-1.3.4: Software Update Compliance.",
                  "ea": "Saved search 'UC-1.3.4' running on Custom scripted input (softwareupdate -l, sw_vers), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.apra.gov.au/information-security"
                },
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.06",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.06 (Patch operating systems) is enforced — Splunk UC-1.3.4: Software Update Compliance.",
                  "ea": "Saved search 'UC-1.3.4' running on Custom scripted input (softwareupdate -l, sw_vers), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/essential-eight/essential-eight-maturity-model"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.3.5",
              "n": "Application Crash Monitoring",
              "c": "low",
              "f": "intermediate",
              "v": "Frequent application crashes degrade user experience and may indicate malware, resource issues, or incompatible software.",
              "t": "Splunk UF",
              "d": "`/Library/Logs/DiagnosticReports/*.crash`",
              "q": "index=os sourcetype=macos_crash host=*\n| rex \"Process:\\s+(?<process>\\S+)\"\n| stats count by host, process\n| sort -count",
              "m": "Forward `~/Library/Logs/DiagnosticReports/` and `/Library/Logs/DiagnosticReports/`. Use `monitor` input in inputs.conf. Parse process name and exception type from crash reports.",
              "z": "Table (process, host, count), Bar chart of top crashing apps.",
              "kfp": "A new app version can produce a burst of identical crashes; treat top-level counts with the build number in view. A developer repeatedly stopping a process in Xcode can add noise. Filter out per-user `~/Library` paths for fleet dashboards if you only care about system-wide or shared apps.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk UF.\n• Ensure the following data sources are available: `/Library/Logs/DiagnosticReports/*.crash`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward `~/Library/Logs/DiagnosticReports/` and `/Library/Logs/DiagnosticReports/`. Use `monitor` input in inputs.conf. Parse process name and exception type from crash reports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=macos_crash host=*\n| rex \"Process:\\s+(?<process>\\S+)\"\n| stats count by host, process\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Application Crash Monitoring** — Frequent application crashes degrade user experience and may indicate malware, resource issues, or incompatible software.\n\nDocumented **Data sources**: `/Library/Logs/DiagnosticReports/*.crash`. **App/TA** (typical add-on context): Splunk UF. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: macos_crash. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=macos_crash. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, process** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (process, host, count), Bar chart of top crashing apps.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count which apps are crashing and how often, so you can see unstable software or a troubled computer before it ruins someone’s workday or loses their data.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.3.6",
              "n": "macOS Gatekeeper and XProtect Status",
              "c": "medium",
              "f": "intermediate",
              "v": "Verify Gatekeeper and XProtect are enabled and definitions are current. Disabled or outdated security controls increase malware risk.",
              "t": "`Splunk_TA_nix` (scripted input)",
              "d": "`spctl --status`, `system_profiler SPInstallHistoryDataType`",
              "q": "index=os sourcetype=macos_gatekeeper host=*\n| eval xprotect_age_days = now() - strptime(xprotect_date, \"%Y-%m-%d\")\n| where xprotect_age_days > 30\n| table host xprotect_ver xprotect_date xprotect_age_days",
              "m": "Create a scripted input that runs `spctl --status` (expect \"assessments enabled\" for Gatekeeper on). For XProtect, run `system_profiler SPInstallHistoryDataType` and parse XProtect/XProtect Remediator entries, or check `/Library/Apple/System/Library/CoreServices/XProtect.bundle/Contents/version.plist`. Run daily. Alert when Gatekeeper is disabled; alert when XProtect definitions are older than 30 days.",
              "z": "Table (host, Gatekeeper status, XProtect version), Single value (non-compliant count), Pie chart (enabled vs. disabled).",
              "kfp": "If `xprotect_date` or time zone parsing is off, a healthy Mac can look stale. A machine offline for travel may not pull updates; combine with last-seen network signals. A separate search should cover Gatekeeper disabled without mixing both conditions on one false alarm if your parsers split them into different events.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix` (scripted input).\n• Ensure the following data sources are available: `spctl --status`, `system_profiler SPInstallHistoryDataType`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that runs `spctl --status` (expect \"assessments enabled\" for Gatekeeper on). For XProtect, run `system_profiler SPInstallHistoryDataType` and parse XProtect/XProtect Remediator entries, or check `/Library/Apple/System/Library/CoreServices/XProtect.bundle/Contents/version.plist`. Run daily. Alert when Gatekeeper is disabled; alert when XProtect definitions are older than 30 days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=macos_gatekeeper host=*\n| eval xprotect_age_days = now() - strptime(xprotect_date, \"%Y-%m-%d\")\n| where xprotect_age_days > 30\n| table host xprotect_ver xprotect_date xprotect_age_days\n```\n\nUnderstanding this SPL\n\n**macOS Gatekeeper and XProtect Status** — Verify Gatekeeper and XProtect are enabled and definitions are current. Disabled or outdated security controls increase malware risk.\n\nDocumented **Data sources**: `spctl --status`, `system_profiler SPInstallHistoryDataType`. **App/TA** (typical add-on context): `Splunk_TA_nix` (scripted input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: macos_gatekeeper. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=macos_gatekeeper. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **xprotect_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where xprotect_age_days > 30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **macOS Gatekeeper and XProtect Status**): table host xprotect_ver xprotect_date xprotect_age_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, Gatekeeper status, XProtect version), Single value (non-compliant count), Pie chart (enabled vs. disabled).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when a Mac’s built-in file checks and malware list are out of date, so the machine is not left running with last month’s protection.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 6,
            "none": 0
          }
        },
        {
          "i": "1.4",
          "n": "Bare-Metal / Hardware",
          "u": [
            {
              "i": "1.4.1",
              "n": "Hardware Sensor Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Temperature, voltage, and fan speed anomalies predict impending hardware failures before they cause unplanned downtime.",
              "t": "Custom scripted input (`ipmitool`), SNMP",
              "d": "IPMI sensor data via scripted input, `sourcetype=ipmi:sensor` (custom)",
              "q": "index=hardware sourcetype=ipmi:sensor\n| eval is_critical = if(status=\"Critical\" OR status=\"Non-Recoverable\", 1, 0)\n| where is_critical=1\n| table _time host sensor_name reading unit status\n| sort -_time",
              "m": "Install `ipmitool` on hosts. Create scripted input: `ipmitool sensor list` (interval=300). Parse sensor name, reading, unit, and status. Alert on Critical/Non-Recoverable status. Alternatively, use SNMP to poll vendor-specific MIBs (Dell iDRAC, HP iLO, Lenovo IMM).",
              "z": "Table of critical sensors, Gauge per sensor type, Heatmap across hosts.",
              "kfp": "Chassis intrusion, initial burn-in, and fan surges after cold start or cleaning can trip Critical briefly. Hot-aisle events and missing threshold configuration in the parser can mislabel events; compare to facility environmental data and maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`ipmitool`), SNMP.\n• Ensure the following data sources are available: IPMI sensor data via scripted input, `sourcetype=ipmi:sensor` (custom).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall `ipmitool` on hosts. Create scripted input: `ipmitool sensor list` (interval=300). Parse sensor name, reading, unit, and status. Alert on Critical/Non-Recoverable status. Alternatively, use SNMP to poll vendor-specific MIBs (Dell iDRAC, HP iLO, Lenovo IMM).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hardware sourcetype=ipmi:sensor\n| eval is_critical = if(status=\"Critical\" OR status=\"Non-Recoverable\", 1, 0)\n| where is_critical=1\n| table _time host sensor_name reading unit status\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Hardware Sensor Monitoring** — Temperature, voltage, and fan speed anomalies predict impending hardware failures before they cause unplanned downtime.\n\nDocumented **Data sources**: IPMI sensor data via scripted input, `sourcetype=ipmi:sensor` (custom). **App/TA** (typical add-on context): Custom scripted input (`ipmitool`), SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hardware; **sourcetype**: ipmi:sensor. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hardware, sourcetype=ipmi:sensor. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_critical** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_critical=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Hardware Sensor Monitoring**): table _time host sensor_name reading unit status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of critical sensors, Gauge per sensor type, Heatmap across hosts.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the server’s out-of-band temperature, power, and fan readings so we can catch overheating or a failing part before the machine suddenly stops working.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hardware_bmc",
                "snmp"
              ],
              "em": [
                "hardware_bmc_ipmi",
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.4.2",
              "n": "RAID Degradation Alerts",
              "c": "critical",
              "f": "intermediate",
              "v": "A degraded RAID has lost redundancy — another disk failure means data loss. Requires immediate attention.",
              "t": "Custom scripted input (`megacli`, `storcli`, `ssacli`)",
              "d": "Custom sourcetype (RAID controller output)",
              "q": "index=hardware sourcetype=raid_status\n| where state!=\"Optimal\" AND state!=\"Online\"\n| table _time host vd_name state disks_failed\n| sort -_time",
              "m": "Create scripted input for the RAID controller CLI tool: `storcli /c0/v0 show` or `megacli -LDInfo -Lall -aAll`. Run every 300 seconds. Alert immediately on any non-Optimal state.",
              "z": "Status indicator per array, Table, Alert panel (critical).",
              "kfp": "Rebuild, initialization, and battery learn cycles can show non-Optimal until the operation completes. Vendors use different “healthy” labels (`Online` vs. `Optimal`); your **where** clause must match the strings you ingest. Firmware updates can also temporarily change state strings.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`megacli`, `storcli`, `ssacli`).\n• Ensure the following data sources are available: Custom sourcetype (RAID controller output).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input for the RAID controller CLI tool: `storcli /c0/v0 show` or `megacli -LDInfo -Lall -aAll`. Run every 300 seconds. Alert immediately on any non-Optimal state.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hardware sourcetype=raid_status\n| where state!=\"Optimal\" AND state!=\"Online\"\n| table _time host vd_name state disks_failed\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**RAID Degradation Alerts** — A degraded RAID has lost redundancy — another disk failure means data loss. Requires immediate attention.\n\nDocumented **Data sources**: Custom sourcetype (RAID controller output). **App/TA** (typical add-on context): Custom scripted input (`megacli`, `storcli`, `ssacli`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hardware; **sourcetype**: raid_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hardware, sourcetype=raid_status. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where state!=\"Optimal\" AND state!=\"Online\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **RAID Degradation Alerts**): table _time host vd_name state disks_failed\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status indicator per array, Table, Alert panel (critical).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on your disk groups so you know the moment a mirror or parity setup is no longer fully healthy and another drive failure could lose data.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hardware_bmc"
              ],
              "em": [
                "hardware_bmc_megacli",
                "hardware_bmc_ssacli",
                "hardware_bmc_storcli"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.4.3",
              "n": "Power Supply Failure",
              "c": "critical",
              "f": "beginner",
              "v": "Lost power supply redundancy means a single PSU failure away from an unplanned outage. Replacement needs to happen before the remaining PSU fails.",
              "t": "Custom scripted input (`ipmitool`), SNMP, vendor management syslog (iLO/iDRAC)",
              "d": "IPMI SEL (System Event Log) via scripted input, syslog from BMC",
              "q": "index=hardware sourcetype=ipmi:sel (\"Power Supply\" OR \"PS\" OR \"power_supply\") (\"Failure\" OR \"Absent\" OR \"fault\" OR \"lost\")\n| table _time host sensor event_description\n| sort -_time",
              "m": "Forward IPMI System Event Log data. Enable syslog forwarding from iLO/iDRAC to Splunk. Alert immediately on PSU failure events.",
              "z": "Events timeline, Status indicator per host, Alert panel.",
              "kfp": "Vendors use different phrasing (for example `PS Redundancy Lost` vs. `Power Supply Failed`). Redundant A/B maintenance and hot-swap can generate duplicate SEL rows. Validate against the out-of-band console or a physical site check before a full incident bridge.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`ipmitool`), SNMP, vendor management syslog (iLO/iDRAC).\n• Ensure the following data sources are available: IPMI SEL (System Event Log) via scripted input, syslog from BMC.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward IPMI System Event Log data. Enable syslog forwarding from iLO/iDRAC to Splunk. Alert immediately on PSU failure events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hardware sourcetype=ipmi:sel (\"Power Supply\" OR \"PS\" OR \"power_supply\") (\"Failure\" OR \"Absent\" OR \"fault\" OR \"lost\")\n| table _time host sensor event_description\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Power Supply Failure** — Lost power supply redundancy means a single PSU failure away from an unplanned outage. Replacement needs to happen before the remaining PSU fails.\n\nDocumented **Data sources**: IPMI SEL (System Event Log) via scripted input, syslog from BMC. **App/TA** (typical add-on context): Custom scripted input (`ipmitool`), SNMP, vendor management syslog (iLO/iDRAC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hardware; **sourcetype**: ipmi:sel. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hardware, sourcetype=ipmi:sel. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Power Supply Failure**): table _time host sensor event_description\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Status indicator per host, Alert panel.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for out-of-band messages that say a power feed or supply is missing or bad, so you can fix it while the box still has another power path to lean on.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hardware_bmc",
                "snmp",
                "syslog"
              ],
              "em": [
                "hardware_bmc_idrac",
                "hardware_bmc_ilo",
                "hardware_bmc_ipmi",
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.4.4",
              "n": "Predictive Disk Failure",
              "c": "high",
              "f": "intermediate",
              "v": "SMART attributes can predict disk failure days or weeks in advance, enabling proactive replacement during maintenance windows.",
              "t": "Custom scripted input (`smartctl`)",
              "d": "Custom sourcetype (SMART data)",
              "q": "index=hardware sourcetype=smart_data\n| where Reallocated_Sector_Ct > 0 OR Current_Pending_Sector > 0 OR Offline_Uncorrectable > 0\n| table _time host device Reallocated_Sector_Ct Current_Pending_Sector Temperature_Celsius\n| sort -Reallocated_Sector_Ct",
              "m": "Install `smartmontools`. Scripted input: `smartctl -A /dev/sd[a-z]`. Run every 3600 seconds. Track key attributes: Reallocated Sector Count, Current Pending Sector, Offline Uncorrectable. Alert on any non-zero values.",
              "z": "Table per disk, Trend line for sector counts, Heatmap of disk health.",
              "kfp": "A small stable reallocated count on older drives can be benign; trend velocity matters more than a single point. NVMe and SATA attribute names and scales differ; align field names to your `smartctl` output. A machine under heavy stress tests can show transient noise.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`smartctl`).\n• Ensure the following data sources are available: Custom sourcetype (SMART data).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall `smartmontools`. Scripted input: `smartctl -A /dev/sd[a-z]`. Run every 3600 seconds. Track key attributes: Reallocated Sector Count, Current Pending Sector, Offline Uncorrectable. Alert on any non-zero values.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hardware sourcetype=smart_data\n| where Reallocated_Sector_Ct > 0 OR Current_Pending_Sector > 0 OR Offline_Uncorrectable > 0\n| table _time host device Reallocated_Sector_Ct Current_Pending_Sector Temperature_Celsius\n| sort -Reallocated_Sector_Ct\n```\n\nUnderstanding this SPL\n\n**Predictive Disk Failure** — SMART attributes can predict disk failure days or weeks in advance, enabling proactive replacement during maintenance windows.\n\nDocumented **Data sources**: Custom sourcetype (SMART data). **App/TA** (typical add-on context): Custom scripted input (`smartctl`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hardware; **sourcetype**: smart_data. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hardware, sourcetype=smart_data. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Reallocated_Sector_Ct > 0 OR Current_Pending_Sector > 0 OR Offline_Uncorrectable > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Predictive Disk Failure**): table _time host device Reallocated_Sector_Ct Current_Pending_Sector Temperature_Celsius\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table per disk, Trend line for sector counts, Heatmap of disk health.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We read the early-warning numbers from each disk’s self-checks so you can schedule a drive swap when it is convenient instead of when it suddenly fails.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hardware_bmc"
              ],
              "em": [
                "hardware_bmc_smartctl"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.4.5",
              "n": "Firmware Version Compliance",
              "c": "medium",
              "f": "advanced",
              "v": "Outdated firmware may have security vulnerabilities or known bugs. Fleet-wide firmware tracking supports patch management.",
              "t": "Custom scripted input (`ipmitool`, `dmidecode`), vendor APIs",
              "d": "BMC/BIOS version data via scripted input",
              "q": "index=hardware sourcetype=firmware_inventory\n| stats latest(bios_version) as bios, latest(bmc_version) as bmc by host, model\n| lookup current_firmware model OUTPUT expected_bios, expected_bmc\n| eval bios_current = if(bios=expected_bios, \"Yes\", \"No\")\n| where bios_current=\"No\"",
              "m": "Create scripted input: `ipmitool mc info` or `dmidecode -t bios`. Run daily. Maintain a lookup table of expected firmware versions per server model. Dashboard showing compliance.",
              "z": "Table (host, model, current vs. expected), Pie chart (compliant %), Bar chart by model.",
              "kfp": "OEMs format the same build as different strings; normalize before equality checks. A deliberate golden-image rollout or vendor mandatory update can make many machines look `No` through rollout week—compare to change records and the lookup’s effective date column.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`ipmitool`, `dmidecode`), vendor APIs.\n• Ensure the following data sources are available: BMC/BIOS version data via scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: `ipmitool mc info` or `dmidecode -t bios`. Run daily. Maintain a lookup table of expected firmware versions per server model. Dashboard showing compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hardware sourcetype=firmware_inventory\n| stats latest(bios_version) as bios, latest(bmc_version) as bmc by host, model\n| lookup current_firmware model OUTPUT expected_bios, expected_bmc\n| eval bios_current = if(bios=expected_bios, \"Yes\", \"No\")\n| where bios_current=\"No\"\n```\n\nUnderstanding this SPL\n\n**Firmware Version Compliance** — Outdated firmware may have security vulnerabilities or known bugs. Fleet-wide firmware tracking supports patch management.\n\nDocumented **Data sources**: BMC/BIOS version data via scripted input. **App/TA** (typical add-on context): Custom scripted input (`ipmitool`, `dmidecode`), vendor APIs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hardware; **sourcetype**: firmware_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hardware, sourcetype=firmware_inventory. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, model** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **bios_current** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where bios_current=\"No\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, model, current vs. expected), Pie chart (compliant %), Bar chart by model.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare each server’s built-in and management-controller software versions to what you say they should be, so outdated firmware does not sit there quietly as a security or bug risk.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hardware_bmc"
              ],
              "em": [
                "hardware_bmc_dmidecode",
                "hardware_bmc_ipmi"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.4.6",
              "n": "Memory ECC Error Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Correctable ECC errors that increase over time strongly predict impending DIMM failure. Proactive replacement avoids unrecoverable memory errors and system crashes.",
              "t": "Custom scripted input (`edac-util`, IPMI SEL)",
              "d": "`edac-util`, `/sys/devices/system/edac/mc/`, IPMI SEL",
              "q": "index=hardware sourcetype=ecc_errors\n| timechart span=1d sum(correctable_errors) as ecc_errors by host\n| where ecc_errors > 0\n| streamstats window=7 sum(ecc_errors) as weekly_errors by host\n| where weekly_errors > 10",
              "m": "Create scripted input: `edac-util -s` or parse `/sys/devices/system/edac/mc/mc*/ce_count`. Run hourly. Alert when correctable errors increase by >10/week. Track per-DIMM slot for targeted replacement.",
              "z": "Line chart (errors over time by host), Table (host, DIMM, error count), Trend chart.",
              "kfp": "A few correctable events can be a one-off; sustained growth per slot is what matters. Cosmic events, large-memory scans, and borderline contact after shipping can all noise the counter—trend and validate against vendor diagnostics before wholesale DIMM replacement.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`edac-util`, IPMI SEL).\n• Ensure the following data sources are available: `edac-util`, `/sys/devices/system/edac/mc/`, IPMI SEL.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: `edac-util -s` or parse `/sys/devices/system/edac/mc/mc*/ce_count`. Run hourly. Alert when correctable errors increase by >10/week. Track per-DIMM slot for targeted replacement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hardware sourcetype=ecc_errors\n| timechart span=1d sum(correctable_errors) as ecc_errors by host\n| where ecc_errors > 0\n| streamstats window=7 sum(ecc_errors) as weekly_errors by host\n| where weekly_errors > 10\n```\n\nUnderstanding this SPL\n\n**Memory ECC Error Trending** — Correctable ECC errors that increase over time strongly predict impending DIMM failure. Proactive replacement avoids unrecoverable memory errors and system crashes.\n\nDocumented **Data sources**: `edac-util`, `/sys/devices/system/edac/mc/`, IPMI SEL. **App/TA** (typical add-on context): Custom scripted input (`edac-util`, IPMI SEL). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hardware; **sourcetype**: ecc_errors. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hardware, sourcetype=ecc_errors. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where ecc_errors > 0` — typically the threshold or rule expression for this monitoring goal.\n• `streamstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where weekly_errors > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (errors over time by host), Table (host, DIMM, error count), Trend chart.\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many tiny memory self-corrections each machine logs over time, so you can replace a bad memory stick before the errors get serious enough to crash programs.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hardware_bmc"
              ],
              "em": [
                "hardware_bmc_edac",
                "hardware_bmc_ipmi"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.4.7",
              "n": "BMC Out-of-Band Connectivity Health",
              "c": "high",
              "f": "intermediate",
              "v": "BMC (IPMI/iDRAC/iLO) loss prevents remote power, console, and sensor access. Early detection ensures out-of-band management remains available for recovery.",
              "t": "Custom scripted input, IPMI",
              "d": "`ipmitool lan print`, BMC health sensors, SNMP (if BMC supports it)",
              "q": "index=hardware sourcetype=bmc_health host=*\n| stats latest(channel_voltage) as voltage, latest(link_detected) as link by host\n| where link=\"no\" OR voltage < 3.0\n| table host link voltage _time",
              "m": "Create scripted input: `ipmitool lan print` or vendor-specific tools (racadm, hpasm) to verify BMC reachability and LAN channel. Run every 5 minutes. Alert when BMC becomes unreachable.",
              "z": "Status grid (BMC up/down per host), Table of unreachable BMCs, Single value (count of healthy BMCs).",
              "kfp": "A brief poll failure during a BMC firmware update, switch maintenance, or VLAN change can look like a down link. A machine just powered off has no in-band poll—separate in-band vs. OOB health if you use both. SNMP traps can duplicate syslog events.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input, IPMI.\n• Ensure the following data sources are available: `ipmitool lan print`, BMC health sensors, SNMP (if BMC supports it).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: `ipmitool lan print` or vendor-specific tools (racadm, hpasm) to verify BMC reachability and LAN channel. Run every 5 minutes. Alert when BMC becomes unreachable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hardware sourcetype=bmc_health host=*\n| stats latest(channel_voltage) as voltage, latest(link_detected) as link by host\n| where link=\"no\" OR voltage < 3.0\n| table host link voltage _time\n```\n\nUnderstanding this SPL\n\n**BMC Out-of-Band Connectivity Health** — BMC (IPMI/iDRAC/iLO) loss prevents remote power, console, and sensor access. Early detection ensures out-of-band management remains available for recovery.\n\nDocumented **Data sources**: `ipmitool lan print`, BMC health sensors, SNMP (if BMC supports it). **App/TA** (typical add-on context): Custom scripted input, IPMI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hardware; **sourcetype**: bmc_health. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hardware, sourcetype=bmc_health. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where link=\"no\" OR voltage < 3.0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **BMC Out-of-Band Connectivity Health**): table host link voltage _time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (BMC up/down per host), Table of unreachable BMCs, Single value (count of healthy BMCs).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check that the server’s “remote control” path over the management network is still there, so you are not caught unable to turn a machine on or fix it from afar when something goes wrong in the data hall.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "hardware_bmc_ipmi"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.4.8",
              "n": "PCIe Link Width and Speed Degradation",
              "c": "medium",
              "f": "advanced",
              "v": "PCIe links that downgrade (e.g. x16→x8) indicate slot or cable issues. Affects GPU, NVMe, and HBA performance and can precede full failure.",
              "t": "Custom scripted input (`lspci -vv` or Windows PCI query)",
              "d": "`lspci -vv`, `/sys/bus/pci/devices/*/current_link_width_speed`",
              "q": "index=hardware sourcetype=pcie_link host=*\n| stats latest(link_width) as width, latest(link_speed) as speed by host, slot\n| lookup pcie_expected host slot OUTPUT expected_width expected_speed\n| where width < expected_width OR speed < expected_speed\n| table host slot width speed expected_width expected_speed",
              "m": "Parse `lspci -vv` for \"LnkCap\" and \"LnkSta\" or read sysfs. Run daily. Maintain lookup of expected width/speed per host and slot. Alert on downgrade.",
              "z": "Table (host, slot, current vs. expected), Bar chart of link widths.",
              "kfp": "Some GPUs and NICs downshift link width or speed at idle to save power; compare idle vs. under load before treating a downgrade as hardware failure. Cable reseats and BIOS updates can temporarily change `LnkSta` text—correlate with change records.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`lspci -vv` or Windows PCI query).\n• Ensure the following data sources are available: `lspci -vv`, `/sys/bus/pci/devices/*/current_link_width_speed`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse `lspci -vv` for \"LnkCap\" and \"LnkSta\" or read sysfs. Run daily. Maintain lookup of expected width/speed per host and slot. Alert on downgrade.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hardware sourcetype=pcie_link host=*\n| stats latest(link_width) as width, latest(link_speed) as speed by host, slot\n| lookup pcie_expected host slot OUTPUT expected_width expected_speed\n| where width < expected_width OR speed < expected_speed\n| table host slot width speed expected_width expected_speed\n```\n\nUnderstanding this SPL\n\n**PCIe Link Width and Speed Degradation** — PCIe links that downgrade (e.g. x16→x8) indicate slot or cable issues. Affects GPU, NVMe, and HBA performance and can precede full failure.\n\nDocumented **Data sources**: `lspci -vv`, `/sys/bus/pci/devices/*/current_link_width_speed`. **App/TA** (typical add-on context): Custom scripted input (`lspci -vv` or Windows PCI query). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hardware; **sourcetype**: pcie_link. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hardware, sourcetype=pcie_link. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, slot** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where width < expected_width OR speed < expected_speed` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PCIe Link Width and Speed Degradation**): table host slot width speed expected_width expected_speed\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, slot, current vs. expected), Bar chart of link widths.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare how wide and fast the plug-in cards’ data lanes are running to what they should be, so a loose slot or bad cable does not quietly slow down a whole server.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "6.3.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS unknown is enforced — Splunk UC-1.4.8: PCIe Link Width and Speed Degradation.",
                  "ea": "Saved search 'UC-1.4.8' running on lspci -vv, /sys/bus/pci/devices/*/current_link_width_speed, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "regs": [
                "PCI DSS"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.4.9",
              "n": "Out-of-Band Sensor Threshold Breach (IPMI)",
              "c": "critical",
              "f": "intermediate",
              "v": "IPMI sensor events (temperature, voltage, fan) indicate environmental or hardware problems before they cause crashes. Critical for datacenter and server health.",
              "t": "Splunk Add-on for Unix and Linux (scripted input), IPMI",
              "d": "`ipmitool sdr`, IPMI SEL (System Event Log)",
              "q": "index=hardware sourcetype=ipmi_sdr host=*\n| search sensor_type=\"Temperature\" OR sensor_type=\"Voltage\" OR sensor_type=\"Fan\"\n| eval status=case(sensor_reading >= upper_critical, \"Critical\", sensor_reading >= upper_non_critical, \"Warning\", 1=1, \"OK\")\n| where status != \"OK\"\n| table _time host sensor_name sensor_reading upper_critical status",
              "m": "Create scripted input: `ipmitool sdr type temperature` (and voltage, fan). Parse thresholds and current readings. Forward IPMI SEL for discrete events. Alert on Critical/Warning threshold breach.",
              "z": "Gauges per sensor, Table of breached sensors, Timeline of SEL events.",
              "kfp": "Missing or misparsed `upper_non_critical` and `upper_critical` numbers will mis-classify. Seasonal data-hall changes, fan curves after dust cleaning, and sensors marked `na` in `ipmitool` output can add noise. Compare to facility monitoring before paging.",
              "refs": "[Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Unix and Linux (scripted input), IPMI.\n• Ensure the following data sources are available: `ipmitool sdr`, IPMI SEL (System Event Log).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: `ipmitool sdr type temperature` (and voltage, fan). Parse thresholds and current readings. Forward IPMI SEL for discrete events. Alert on Critical/Warning threshold breach.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hardware sourcetype=ipmi_sdr host=*\n| search sensor_type=\"Temperature\" OR sensor_type=\"Voltage\" OR sensor_type=\"Fan\"\n| eval status=case(sensor_reading >= upper_critical, \"Critical\", sensor_reading >= upper_non_critical, \"Warning\", 1=1, \"OK\")\n| where status != \"OK\"\n| table _time host sensor_name sensor_reading upper_critical status\n```\n\nUnderstanding this SPL\n\n**Out-of-Band Sensor Threshold Breach (IPMI)** — IPMI sensor events (temperature, voltage, fan) indicate environmental or hardware problems before they cause crashes. Critical for datacenter and server health.\n\nDocumented **Data sources**: `ipmitool sdr`, IPMI SEL (System Event Log). **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (scripted input), IPMI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hardware; **sourcetype**: ipmi_sdr. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hardware, sourcetype=ipmi_sdr. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where status != \"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Out-of-Band Sensor Threshold Breach (IPMI)**): table _time host sensor_name sensor_reading upper_critical status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauges per sensor, Table of breached sensors, Timeline of SEL events.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare each out-of-band temperature, voltage, and fan reading to the safe limits, so a room that is too hot or a fan that is too weak is caught before the server has to shut itself down.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "hardware_bmc_ipmi"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.4.10",
              "n": "Disk Controller and HBA Health",
              "c": "high",
              "f": "intermediate",
              "v": "RAID/HBA controller errors and degraded state often precede array failure. Early visibility enables planned maintenance and avoids data loss.",
              "t": "Custom scripted input (MegaRAID, perccli, hpssacli)",
              "d": "Vendor CLI output (e.g. `MegaCli64 -AdpAllInfo -aAll`), `/proc/scsi/`",
              "q": "index=hardware sourcetype=raid_controller host=*\n| stats latest(controller_status) as status, latest(degraded_virtual_drives) as degraded by host, controller_id\n| where status != \"Optimal\" OR degraded > 0\n| table host controller_id status degraded",
              "m": "Run vendor CLI (MegaCli, perccli, hpssacli) via scripted input every 15 minutes. Parse controller and virtual drive state. Alert when status is not Optimal or any array is degraded.",
              "z": "Status panel (Optimal/Degraded/Failed), Table of degraded arrays.",
              "kfp": "Patrol read, copyback, and foreign import operations can set temporary attention flags. A restart after a kernel driver update can reset counters—compare to the vendor’s management utility before a hardware RMA. Vendor strings for `status` differ; tune **where** to your parser.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (MegaRAID, perccli, hpssacli).\n• Ensure the following data sources are available: Vendor CLI output (e.g. `MegaCli64 -AdpAllInfo -aAll`), `/proc/scsi/`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun vendor CLI (MegaCli, perccli, hpssacli) via scripted input every 15 minutes. Parse controller and virtual drive state. Alert when status is not Optimal or any array is degraded.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hardware sourcetype=raid_controller host=*\n| stats latest(controller_status) as status, latest(degraded_virtual_drives) as degraded by host, controller_id\n| where status != \"Optimal\" OR degraded > 0\n| table host controller_id status degraded\n```\n\nUnderstanding this SPL\n\n**Disk Controller and HBA Health** — RAID/HBA controller errors and degraded state often precede array failure. Early visibility enables planned maintenance and avoids data loss.\n\nDocumented **Data sources**: Vendor CLI output (e.g. `MegaCli64 -AdpAllInfo -aAll`), `/proc/scsi/`. **App/TA** (typical add-on context): Custom scripted input (MegaRAID, perccli, hpssacli). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hardware; **sourcetype**: raid_controller. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hardware, sourcetype=raid_controller. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, controller_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status != \"Optimal\" OR degraded > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Disk Controller and HBA Health**): table host controller_id status degraded\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status panel (Optimal/Degraded/Failed), Table of degraded arrays.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
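A parsing sketch for Step 1. The label texts ("Degraded", "Critical Disks", "Failed Disks") and the single-adapter assumption are placeholders; MegaCLI, perccli, and ssacli all format their output differently, so check your own `MegaCli64 -AdpAllInfo -aAll` output and adapt the patterns before trusting the numbers.

```bash
#!/usr/bin/env bash
# Sketch: summarize one adapter's health as a single key=value event.
out=$(MegaCli64 -AdpAllInfo -aAll)
degraded=$(echo "$out" | awk -F: '/Degraded/       {gsub(/ /,"",$2); print $2; exit}')
critical=$(echo "$out" | awk -F: '/Critical Disks/ {gsub(/ /,"",$2); print $2; exit}')
failed=$(echo "$out"   | awk -F: '/Failed Disks/   {gsub(/ /,"",$2); print $2; exit}')
if [ "${degraded:-0}" -eq 0 ] && [ "${failed:-0}" -eq 0 ]; then
  status="Optimal"
else
  status="Degraded"
fi
echo "controller_id=0 controller_status=$status degraded_virtual_drives=${degraded:-0} critical_disks=${critical:-0} failed_disks=${failed:-0}"
```

For multi-controller hosts, loop over adapter IDs and emit one event per controller so the `by host, controller_id` split in the SPL keeps working.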
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at the card that runs your disk storage so you know when it is unhappy or a disk group is in trouble before a full storage failure.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hardware_bmc"
              ],
              "em": [
                "hardware_bmc_perccli",
                "hardware_bmc_ssacli"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "1.4.11",
              "n": "Boot Order and UEFI/BIOS Configuration Drift",
              "c": "medium",
              "f": "advanced",
              "v": "Unauthorized or accidental boot order changes can prevent systems from booting from the correct disk or PXE. Tracking supports change audit and recovery.",
              "t": "Custom scripted input (vendor tools, dmidecode)",
              "d": "`dmidecode -t bios`, vendor REST/CLI (iDRAC, iLO) for boot order",
              "q": "index=hardware sourcetype=boot_config host=*\n| stats latest(boot_order) as current_order, latest(secure_boot) as secure_boot by host\n| inputlookup expected_boot_config append=t\n| eval match=if('current_order'='expected_order', \"Match\", \"Drift\")\n| where match=\"Drift\"\n| table host current_order expected_order secure_boot",
              "m": "Use vendor APIs or scripts to export boot order and Secure Boot state. Compare to a lookup of expected configuration. Alert on drift. Run after changes or daily.",
              "z": "Table (host, current vs. expected boot order), Compliance percentage.",
              "kfp": "OS reinstalls, UEFI/firmware updates, and temporary one-time network boots can change `boot_order` legitimately. Update the `expected_boot_config` lookup the same day as change tickets. The sample `if('current_order'='expected_order'...)` is fragile for multivalue or formatted strings; normalize in your script before index time.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (vendor tools, dmidecode).\n• Ensure the following data sources are available: `dmidecode -t bios`, vendor REST/CLI (iDRAC, iLO) for boot order.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse vendor APIs or scripts to export boot order and Secure Boot state. Compare to a lookup of expected configuration. Alert on drift. Run after changes or daily.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hardware sourcetype=boot_config host=*\n| stats latest(boot_order) as current_order, latest(secure_boot) as secure_boot by host\n| inputlookup expected_boot_config append=t\n| eval match=if('current_order'='expected_order', \"Match\", \"Drift\")\n| where match=\"Drift\"\n| table host current_order expected_order secure_boot\n```\n\nUnderstanding this SPL\n\n**Boot Order and UEFI/BIOS Configuration Drift** — Unauthorized or accidental boot order changes can prevent systems from booting from the correct disk or PXE. Tracking supports change audit and recovery.\n\nDocumented **Data sources**: `dmidecode -t bios`, vendor REST/CLI (iDRAC, iLO) for boot order. **App/TA** (typical add-on context): Custom scripted input (vendor tools, dmidecode). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hardware; **sourcetype**: boot_config. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hardware, sourcetype=boot_config. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **match** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match=\"Drift\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Boot Order and UEFI/BIOS Configuration Drift**): table host current_order expected_order secure_boot\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, current vs. expected boot order), Compliance percentage.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We remember how each server is supposed to start up and compare that to what it would do today, so a surprise change to the boot order does not leave you with a system that will not start the right way after a reboot.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hardware_bmc"
              ],
              "em": [
                "hardware_bmc_dmidecode"
              ],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.9,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 11,
            "none": 0
          }
        }
      ],
      "i": 1,
      "n": "Server & Compute",
      "src": "cat-01-server-compute.md"
    },
    {
      "s": [
        {
          "i": "2.1",
          "n": "VMware vSphere",
          "u": [
            {
              "i": "2.1.1",
              "n": "ESXi Host CPU Contention",
              "c": "high",
              "f": "intermediate",
              "v": "CPU ready time measures how long a VM waits for physical CPU. High values (>5%) mean the host is overcommitted and VMs are starved for compute.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:perf:cpu`, vCenter performance metrics",
              "q": "index=vmware sourcetype=\"vmware:perf:cpu\" counter=\"cpu.ready.summation\"\n| eval ready_pct = round(Value / 20000 * 100, 2)\n| stats avg(ready_pct) as avg_ready by host, vm_name\n| where avg_ready > 5\n| sort -avg_ready",
              "m": "Install `Splunk_TA_vmware` (Splunkbase 3215) on a Heavy Forwarder. Create a read-only vCenter service account with `System.View` and `VirtualMachine.Interact.ConsoleInteract` privileges. Configure collection intervals in the TA setup UI: 300s for performance metrics, 600s for inventory, 3600s for events. Set the `host_segment` to resolve ESXi hostnames. Verify data flow with `index=vmware sourcetype=vmware:perf:cpu | head 5`. Alert when CPU ready exceeds 5% per VM.",
              "z": "Heatmap (VMs vs. hosts, colored by ready %), Bar chart (top VMs by ready time), Line chart (trending).",
              "kfp": "Short CPU ready spikes during boot or cloning; tune threshold or use rolling average.",
              "refs": "[Splunk Add-on for VMware](https://splunkbase.splunk.com/app/3215), [vSphere API](https://developer.vmware.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:perf:cpu`, vCenter performance metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall `Splunk_TA_vmware` (Splunkbase 3215) on a Heavy Forwarder. Create a read-only vCenter service account with `System.View` and `VirtualMachine.Interact.ConsoleInteract` privileges. Configure collection intervals in the TA setup UI: 300s for performance metrics, 600s for inventory, 3600s for events. Set the `host_segment` to resolve ESXi hostnames. Verify data flow with `index=vmware sourcetype=vmware:perf:cpu | head 5`. Alert when CPU ready exceeds 5% per VM.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:perf:cpu\" counter=\"cpu.ready.summation\"\n| eval ready_pct = round(Value / 20000 * 100, 2)\n| stats avg(ready_pct) as avg_ready by host, vm_name\n| where avg_ready > 5\n| sort -avg_ready\n```\n\nUnderstanding this SPL\n\n**ESXi Host CPU Contention** — CPU ready time measures how long a VM waits for physical CPU. High values (>5%) mean the host is overcommitted and VMs are starved for compute.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:cpu`, vCenter performance metrics. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:perf:cpu. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:perf:cpu\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ready_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, vm_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_ready > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ESXi Host CPU Contention** — CPU ready time measures how long a VM waits for physical CPU. High values (>5%) mean the host is overcommitted and VMs are starved for compute.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:cpu`, vCenter performance metrics. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (VMs vs. hosts, colored by ready %), Bar chart (top VMs by ready time), Line chart (trending).",
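One assumption in the search above is worth making explicit: `cpu.ready.summation` accumulates milliseconds of ready time over the sample interval, so ready % = summation / (interval_seconds * 1000) * 100. The hard-coded 20000 corresponds to vCenter's 20-second real-time samples; if your TA delivers 300-second rollups, the divisor is 300000. A sketch that keeps the interval visible (300 is an assumption; set it to your actual collection interval):

```spl
index=vmware sourcetype="vmware:perf:cpu" counter="cpu.ready.summation"
| eval interval_ms = 300 * 1000
| eval ready_pct = round(Value / interval_ms * 100, 2)
| stats avg(ready_pct) as avg_ready by host, vm_name
| where avg_ready > 5
| sort -avg_ready
```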
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track when your virtual machines wait too long for CPU time, so we can catch overload or unfair resource shares before the slowness hits real work.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.2",
              "n": "ESXi Host Memory Ballooning",
              "c": "high",
              "f": "intermediate",
              "v": "Memory ballooning means the hypervisor is reclaiming memory from VMs. Swapping at the hypervisor level is worse — causes severe VM performance degradation invisible to the guest OS.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:perf:mem`, vCenter performance metrics",
              "q": "index=vmware sourcetype=\"vmware:perf:mem\" (counter=\"mem.vmmemctl.average\" OR counter=\"mem.swapped.average\")\n| eval metric=case(counter=\"mem.vmmemctl.average\", \"Balloon_KB\", counter=\"mem.swapped.average\", \"Swap_KB\")\n| stats avg(Value) as avg_value by host, vm_name, metric\n| where avg_value > 0\n| sort -avg_value",
              "m": "Collected automatically by TA-vmware via vCenter API. Alert when balloon or swap values are >0 for extended periods. Investigate by comparing total VM memory allocation vs. host physical memory.",
              "z": "Line chart for balloon/swap trend over time per VM; stacked bar for top 10 VMs by current balloon size; table drill-down to worst offenders with columns for VM name, balloon MB, swap MB, and host.",
              "kfp": "Memory ballooning or pressure may track deliberate overcommit tests, vMotion waves, or TPS changes; some ballooning is normal on hosts that are intentionally dense.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:perf:mem`, vCenter performance metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollected automatically by TA-vmware via vCenter API. Alert when balloon or swap values are >0 for extended periods. Investigate by comparing total VM memory allocation vs. host physical memory.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:perf:mem\" (counter=\"mem.vmmemctl.average\" OR counter=\"mem.swapped.average\")\n| eval metric=case(counter=\"mem.vmmemctl.average\", \"Balloon_KB\", counter=\"mem.swapped.average\", \"Swap_KB\")\n| stats avg(Value) as avg_value by host, vm_name, metric\n| where avg_value > 0\n| sort -avg_value\n```\n\nUnderstanding this SPL\n\n**ESXi Host Memory Ballooning** — Memory ballooning means the hypervisor is reclaiming memory from VMs. Swapping at the hypervisor level is worse — causes severe VM performance degradation invisible to the guest OS.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:mem`, vCenter performance metrics. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:perf:mem. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:perf:mem\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **metric** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, vm_name, metric** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_value > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ESXi Host Memory Ballooning** — Memory ballooning means the hypervisor is reclaiming memory from VMs. Swapping at the hypervisor level is worse — causes severe VM performance degradation invisible to the guest OS.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:mem`, vCenter performance metrics. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where mem_pct > 95 OR swap_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart for balloon/swap trend over time per VM; stacked bar for top 10 VMs by current balloon size; table drill-down to worst offenders with columns for VM name, balloon MB, swap MB, and host.",
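Step 1's "greater than zero for extended periods" condition can be encoded instead of eyeballed. A sketch that only surfaces VMs whose balloon has been nonzero for three consecutive 5-minute buckets (window and span are assumptions; tune them to your collection interval):

```spl
index=vmware sourcetype="vmware:perf:mem" counter="mem.vmmemctl.average"
| bin _time span=5m
| stats avg(Value) as balloon_kb by _time, vm_name
| streamstats window=3 min(balloon_kb) as sustained_min by vm_name
| where sustained_min > 0
```

Taking `min` over the trailing window is the persistence test: it only rises above zero when every bucket in the window did.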
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on host Memory Ballooning and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.mem_used_percent) as mem_pct\n                        avg(Performance.swap_used_percent) as swap_pct\n  from datamodel=Performance where nodename=Performance.Memory\n  by Performance.host span=5m\n| where mem_pct > 95 OR swap_pct > 20",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.3",
              "n": "Datastore Capacity Trending",
              "c": "critical",
              "f": "intermediate",
              "v": "A full datastore prevents VM disk writes, causing crashes and corruption. Datastores fill gradually from VM growth, snapshots, and log accumulation.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:perf:datastore` or `sourcetype=vmware:inv:datastore`",
              "q": "index=vmware sourcetype=\"vmware:inv:datastore\"\n| eval used_pct = round((capacity - freeSpace) / capacity * 100, 1)\n| stats latest(used_pct) as used_pct, latest(freeSpace) as free_GB by name\n| eval free_GB = round(free_GB / 1073741824, 1)\n| where used_pct > 80\n| sort -used_pct",
              "m": "TA-vmware collects datastore inventory automatically. Set alerts at 80% (warning), 90% (high), 95% (critical). Use `predict` command for 30-day forecasting. Include snapshot size in the analysis.",
              "z": "Gauge per datastore, Table (name, capacity, free, % used), Line chart with predict trendline.",
              "kfp": "Storage growth and capacity signals can track expected backups, storage vMotion, or provisioning tests; align spikes with change calendars and storage operations.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:perf:datastore` or `sourcetype=vmware:inv:datastore`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTA-vmware collects datastore inventory automatically. Set alerts at 80% (warning), 90% (high), 95% (critical). Use `predict` command for 30-day forecasting. Include snapshot size in the analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:datastore\"\n| eval used_pct = round((capacity - freeSpace) / capacity * 100, 1)\n| stats latest(used_pct) as used_pct, latest(freeSpace) as free_GB by name\n| eval free_GB = round(free_GB / 1073741824, 1)\n| where used_pct > 80\n| sort -used_pct\n```\n\nUnderstanding this SPL\n\n**Datastore Capacity Trending** — A full datastore prevents VM disk writes, causing crashes and corruption. Datastores fill gradually from VM growth, snapshots, and log accumulation.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:datastore` or `sourcetype=vmware:inv:datastore`. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:datastore. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:datastore\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **free_GB** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.storage_used_percent) as disk_pct\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=1h\n| where disk_pct > 85\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Datastore Capacity Trending** — A full datastore prevents VM disk writes, causing crashes and corruption. Datastores fill gradually from VM growth, snapshots, and log accumulation.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:datastore` or `sourcetype=vmware:inv:datastore`. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where disk_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge per datastore, Table (name, capacity, free, % used), Line chart with predict trendline.",
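Step 1 mentions the `predict` command; a forecasting sketch for a single datastore (the datastore name is a placeholder):

```spl
index=vmware sourcetype="vmware:inv:datastore" name="datastore-prod-01"
| eval used_pct = round((capacity - freeSpace) / capacity * 100, 1)
| timechart span=1d latest(used_pct) as used_pct
| predict used_pct future_timespan=30
```

Run it over at least 30 to 60 days of history so there is a trend to extend, and alert when the predicted value crosses the 90% line.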
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on datastore Capacity Trending and raise the alarm before it drags down real work or real outages start.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.storage_used_percent) as disk_pct\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=1h\n| where disk_pct > 85",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "2.1.4",
              "n": "Datastore Latency Spikes",
              "c": "high",
              "f": "intermediate",
              "v": "Storage latency >20ms significantly impacts VM performance. Detects SAN issues, datastore contention, or storage path problems before applications are affected.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:perf:datastore`",
              "q": "index=vmware sourcetype=\"vmware:perf:datastore\" (counter=\"datastore.totalReadLatency.average\" OR counter=\"datastore.totalWriteLatency.average\")\n| eval latency_ms = Value\n| stats avg(latency_ms) as avg_latency, max(latency_ms) as peak_latency by host, datastore, counter\n| where avg_latency > 20\n| sort -avg_latency",
              "m": "Collected via TA-vmware. Alert when average read/write latency exceeds 20ms over 10 minutes. Correlate with IOPS and throughput counters for full picture.",
              "z": "Line chart (latency over time), Heatmap (datastores vs. hosts), Table with avg/peak.",
              "kfp": "Storage growth and capacity signals can track expected backups, storage vMotion, or provisioning tests; align spikes with change calendars and storage operations.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:perf:datastore`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollected via TA-vmware. Alert when average read/write latency exceeds 20ms over 10 minutes. Correlate with IOPS and throughput counters for full picture.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:perf:datastore\" (counter=\"datastore.totalReadLatency.average\" OR counter=\"datastore.totalWriteLatency.average\")\n| eval latency_ms = Value\n| stats avg(latency_ms) as avg_latency, max(latency_ms) as peak_latency by host, datastore, counter\n| where avg_latency > 20\n| sort -avg_latency\n```\n\nUnderstanding this SPL\n\n**Datastore Latency Spikes** — Storage latency >20ms significantly impacts VM performance. Detects SAN issues, datastore contention, or storage path problems before applications are affected.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:datastore`. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:perf:datastore. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:perf:datastore\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, datastore, counter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_latency > 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.read_latency) as read_ms avg(Performance.write_latency) as write_ms\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=5m\n| eval worst_ms=max(read_ms, write_ms)\n| where worst_ms > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Datastore Latency Spikes** — Storage latency >20ms significantly impacts VM performance. Detects SAN issues, datastore contention, or storage path problems before applications are affected.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:datastore`. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• `eval` defines or adjusts **worst_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where worst_ms > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency over time), Heatmap (datastores vs. hosts), Table with avg/peak.",
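To "correlate with IOPS" as Step 1 suggests, plot latency and read IOPS side by side. `datastore.numberReadAveraged.average` is the standard vSphere read-IOPS counter, but counter collection is configurable in the TA, so confirm it is actually gathered; the datastore name is a placeholder:

```spl
index=vmware sourcetype="vmware:perf:datastore" datastore="datastore-prod-01"
  (counter="datastore.totalReadLatency.average" OR counter="datastore.numberReadAveraged.average")
| eval metric=if(like(counter, "%Latency%"), "read_latency_ms", "read_iops")
| timechart span=5m avg(Value) by metric
```

High latency with flat IOPS points at the array or a path problem; latency that climbs together with IOPS points at contention.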
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on datastore Latency Spikes and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.read_latency) as read_ms avg(Performance.write_latency) as write_ms\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=5m\n| eval worst_ms=max(read_ms, write_ms)\n| where worst_ms > 20",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.5",
              "n": "VM Snapshot Sprawl",
              "c": "high",
              "f": "intermediate",
              "v": "Old snapshots consume datastore space exponentially, degrade VM I/O performance, and complicate backups. Snapshots >72 hours old are generally a problem.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:inv:vm` (inventory data)",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\" snapshot_name=*\n| eval snapshot_age_days = round((now() - strptime(snapshot_createTime, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| where snapshot_age_days > 3\n| table vm_name, host, snapshot_name, snapshot_age_days, snapshot_sizeBytes\n| eval snapshot_size_GB = round(snapshot_sizeBytes / 1073741824, 2)\n| sort -snapshot_age_days",
              "m": "TA-vmware collects VM inventory including snapshots. Run daily report identifying snapshots >72 hours old. Escalate snapshots >7 days to VM owners. Include snapshot size to show storage impact.",
              "z": "Table (VM, snapshot name, age, size), Bar chart (top VMs by snapshot size), Single value (total snapshots >3d).",
              "kfp": "Snapshot age alerts may fire for intentional long-running snapshots during storage migration, tests, or vendor-guided holds; confirm with the owner before delete requests.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:vm` (inventory data).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTA-vmware collects VM inventory including snapshots. Run daily report identifying snapshots >72 hours old. Escalate snapshots >7 days to VM owners. Include snapshot size to show storage impact.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\" snapshot_name=*\n| eval snapshot_age_days = round((now() - strptime(snapshot_createTime, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| where snapshot_age_days > 3\n| table vm_name, host, snapshot_name, snapshot_age_days, snapshot_sizeBytes\n| eval snapshot_size_GB = round(snapshot_sizeBytes / 1073741824, 2)\n| sort -snapshot_age_days\n```\n\nUnderstanding this SPL\n\n**VM Snapshot Sprawl** — Old snapshots consume datastore space exponentially, degrade VM I/O performance, and complicate backups. Snapshots >72 hours old are generally a problem.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:vm` (inventory data). **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **snapshot_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where snapshot_age_days > 3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VM Snapshot Sprawl**): table vm_name, host, snapshot_name, snapshot_age_days, snapshot_sizeBytes\n• `eval` defines or adjusts **snapshot_size_GB** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, snapshot name, age, size), Bar chart (top VMs by snapshot size), Single value (total snapshots >3d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on snapshot Sprawl and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.6",
              "n": "vMotion Tracking",
              "c": "low",
              "f": "beginner",
              "v": "Tracks VM migrations for troubleshooting and change management. Excessive vMotion can indicate DRS instability or resource contention.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:events`, vCenter event data",
              "q": "index=vmware sourcetype=\"vmware:events\" event_type=\"VmMigratedEvent\" OR event_type=\"DrsVmMigratedEvent\"\n| table _time vm_name source_host dest_host user event_type\n| sort -_time",
              "m": "TA-vmware collects vCenter events. Create a report for audit/change tracking. Alert on excessive vMotion frequency (>10 migrations per host per hour may indicate DRS instability).",
              "z": "Table (timeline), Sankey diagram (source to destination host), Count by host/hour.",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:events`, vCenter event data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTA-vmware collects vCenter events. Create a report for audit/change tracking. Alert on excessive vMotion frequency (>10 migrations per host per hour may indicate DRS instability).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" event_type=\"VmMigratedEvent\" OR event_type=\"DrsVmMigratedEvent\"\n| table _time vm_name source_host dest_host user event_type\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**vMotion Tracking** — Tracks VM migrations for troubleshooting and change management. Excessive vMotion can indicate DRS instability or resource contention.\n\nDocumented **Data sources**: `sourcetype=vmware:events`, vCenter event data. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **vMotion Tracking**): table _time vm_name source_host dest_host user event_type\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (timeline), Sankey diagram (source to destination host), Count by host/hour.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track VM migrations for troubleshooting and change management — so you catch performance or capacity problems before they hurt real work.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.7",
              "n": "HA Failover Events",
              "c": "critical",
              "f": "beginner",
              "v": "HA failover means a host failed and VMs were restarted on surviving hosts. Indicates hardware failure and potential capacity risk on remaining hosts.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:events`",
              "q": "index=vmware sourcetype=\"vmware:events\" (event_type=\"DasVmPoweredOnEvent\" OR event_type=\"DasHostFailedEvent\" OR event_type=\"ClusterFailoverActionTriggered\")\n| table _time event_type host vm_name message\n| sort -_time",
              "m": "Collect vCenter events via TA-vmware. Create critical real-time alert on HA failover events. Correlate with host hardware health and ESXi syslog for root cause.",
              "z": "Events timeline (critical alert), Table of affected VMs, Host status panel.",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect vCenter events via TA-vmware. Create critical real-time alert on HA failover events. Correlate with host hardware health and ESXi syslog for root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" (event_type=\"DasVmPoweredOnEvent\" OR event_type=\"DasHostFailedEvent\" OR event_type=\"ClusterFailoverActionTriggered\")\n| table _time event_type host vm_name message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**HA Failover Events** — HA failover means a host failed and VMs were restarted on surviving hosts. Indicates hardware failure and potential capacity risk on remaining hosts.\n\nDocumented **Data sources**: `sourcetype=vmware:events`. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **HA Failover Events**): table _time event_type host vm_name message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline (critical alert), Table of affected VMs, Host status panel.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on hA Failover Events and raise the alarm before it drags down real work or real outages start.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "2.1.8",
              "n": "DRS Imbalance Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "DRS should keep clusters balanced. Frequent or failed DRS recommendations indicate resource constraints, affinity rule conflicts, or misconfiguration.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:events`",
              "q": "index=vmware sourcetype=\"vmware:events\" event_type=\"DrsVmMigratedEvent\"\n| bin _time span=1h\n| stats count by _time, cluster\n| where count > 20",
              "m": "Monitor DRS migration frequency. High migration counts suggest oscillation. Also check for unapplied DRS recommendations (DRS set to manual mode). Correlate with CPU/memory utilization per host.",
              "z": "Line chart (migrations per hour), Table of DRS events, Cluster balance comparison chart.",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor DRS migration frequency. High migration counts suggest oscillation. Also check for unapplied DRS recommendations (DRS set to manual mode). Correlate with CPU/memory utilization per host.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" event_type=\"DrsVmMigratedEvent\"\n| bin _time span=1h\n| stats count by _time, cluster\n| where count > 20\n```\n\nUnderstanding this SPL\n\n**DRS Imbalance Detection** — DRS should keep clusters balanced. Frequent or failed DRS recommendations indicate resource constraints, affinity rule conflicts, or misconfiguration.\n\nDocumented **Data sources**: `sourcetype=vmware:events`. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 20` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (migrations per hour), Table of DRS events, Cluster balance comparison chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on dRS Imbalance Detection and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.9",
              "n": "VM Sprawl Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Orphaned, powered-off, or idle VMs waste storage, IP addresses, backup capacity, and licenses. Regular cleanup frees significant resources.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:inv:vm`",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\"\n| where power_state=\"poweredOff\"\n| eval days_off = round((now() - strptime(lastPowerOffTime, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| where days_off > 30\n| table vm_name, host, days_off, numCpu, memoryMB, storage_committed\n| sort -days_off",
              "m": "Run weekly report on powered-off VMs >30 days. Also identify idle VMs: powered on but CPU usage <5% and network <1Kbps consistently. Send reports to VM owners for review.",
              "z": "Table (VM, state, days, resources), Pie chart (powered on vs. off), Bar chart (resource waste by team).",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:vm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun weekly report on powered-off VMs >30 days. Also identify idle VMs: powered on but CPU usage <5% and network <1Kbps consistently. Send reports to VM owners for review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\"\n| where power_state=\"poweredOff\"\n| eval days_off = round((now() - strptime(lastPowerOffTime, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| where days_off > 30\n| table vm_name, host, days_off, numCpu, memoryMB, storage_committed\n| sort -days_off\n```\n\nUnderstanding this SPL\n\n**VM Sprawl Detection** — Orphaned, powered-off, or idle VMs waste storage, IP addresses, backup capacity, and licenses. Regular cleanup frees significant resources.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:vm`. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where power_state=\"poweredOff\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_off** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_off > 30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VM Sprawl Detection**): table vm_name, host, days_off, numCpu, memoryMB, storage_committed\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, state, days, resources), Pie chart (powered on vs. off), Bar chart (resource waste by team).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on sprawl Detection and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.10",
              "n": "vSAN Health Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "vSAN is the storage fabric for many VMware clusters. Degraded vSAN health can cause VM data loss and cluster-wide outages.",
              "t": "`TA-vmware`, vSAN health service",
              "d": "`sourcetype=vmware:perf:vsan`, vSAN health checks",
              "q": "index=vmware sourcetype=\"vmware:perf:vsan\"\n| stats latest(health_status) as health by cluster, disk_group\n| where health!=\"green\"\n| table cluster disk_group health",
              "m": "TA-vmware collects vSAN metrics. Also enable vSAN health checks in vCenter. Monitor disk group health, resync progress, and capacity. Alert on any non-green health status.",
              "z": "Status indicator per cluster, Table of health issues, Gauge (capacity).",
              "kfp": "vSAN health can warn during disk group rebuilds, component resync, or one-host maintenance; treat as incident only when the condition persists after resync finishes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`, vSAN health service.\n• Ensure the following data sources are available: `sourcetype=vmware:perf:vsan`, vSAN health checks.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTA-vmware collects vSAN metrics. Also enable vSAN health checks in vCenter. Monitor disk group health, resync progress, and capacity. Alert on any non-green health status.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:perf:vsan\"\n| stats latest(health_status) as health by cluster, disk_group\n| where health!=\"green\"\n| table cluster disk_group health\n```\n\nUnderstanding this SPL\n\n**vSAN Health Monitoring** — vSAN is the storage fabric for many VMware clusters. Degraded vSAN health can cause VM data loss and cluster-wide outages.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:vsan`, vSAN health checks. **App/TA** (typical add-on context): `TA-vmware`, vSAN health service. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:perf:vsan. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:perf:vsan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster, disk_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where health!=\"green\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **vSAN Health Monitoring**): table cluster disk_group health\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status indicator per cluster, Table of health issues, Gauge (capacity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on vSAN Health and raise the alarm before it drags down real work or real outages start.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "2.1.11",
              "n": "ESXi Host Hardware Alerts",
              "c": "high",
              "f": "beginner",
              "v": "CIM-based hardware health detects physical component failures (fans, PSU, temperature) at the hypervisor level before they cause host failure.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:events` (vCenter alarms)",
              "q": "index=vmware sourcetype=\"vmware:events\" (event_type=\"AlarmStatusChangedEvent\") alarm_name=\"Host hardware*\"\n| table _time host alarm_name old_status new_status\n| where new_status=\"red\" OR new_status=\"yellow\"\n| sort -_time",
              "m": "vCenter triggers hardware alarms via CIM providers on ESXi hosts. TA-vmware collects these alarm events. Alert on red/yellow hardware alarms. Ensure CIM providers are installed on ESXi (vendor-specific VIBs).",
              "z": "Host health grid (red/yellow/green), Events table, Alert panel.",
              "kfp": "IPMI and hardware agents may flap briefly during firmware runs, power cycles, or cable checks; confirm against maintenance tickets before dispatching on-site help.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:events` (vCenter alarms).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nvCenter triggers hardware alarms via CIM providers on ESXi hosts. TA-vmware collects these alarm events. Alert on red/yellow hardware alarms. Ensure CIM providers are installed on ESXi (vendor-specific VIBs).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" (event_type=\"AlarmStatusChangedEvent\") alarm_name=\"Host hardware*\"\n| table _time host alarm_name old_status new_status\n| where new_status=\"red\" OR new_status=\"yellow\"\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**ESXi Host Hardware Alerts** — CIM-based hardware health detects physical component failures (fans, PSU, temperature) at the hypervisor level before they cause host failure.\n\nDocumented **Data sources**: `sourcetype=vmware:events` (vCenter alarms). **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **ESXi Host Hardware Alerts**): table _time host alarm_name old_status new_status\n• Filters the current rows with `where new_status=\"red\" OR new_status=\"yellow\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Host health grid (red/yellow/green), Events table, Alert panel.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on host Hardware Alerts and raise the alarm before it drags down real work or real outages start.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "2.1.12",
              "n": "VM Resource Over-Allocation",
              "c": "medium",
              "f": "advanced",
              "v": "VMs consistently using <20% of allocated CPU/memory waste resources that other VMs could use. Right-sizing saves money and improves cluster capacity.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:perf:cpu`, `sourcetype=vmware:perf:mem`, `sourcetype=vmware:inv:vm`",
              "q": "index=vmware sourcetype=\"vmware:perf:cpu\" counter=\"cpu.usage.average\"\n| stats avg(Value) as avg_cpu by vm_name\n| join max=1 vm_name [search index=vmware sourcetype=\"vmware:inv:vm\" | table vm_name numCpu memoryMB]\n| where avg_cpu < 20\n| sort avg_cpu\n| table vm_name numCpu memoryMB avg_cpu",
              "m": "Analyze 30-day average CPU and memory utilization vs. allocated resources. Generate monthly right-sizing report. Include peak utilization to avoid right-sizing below burst needs.",
              "z": "Scatter plot (allocated vs. used), Table with recommendations, Bar chart of waste by team.",
              "kfp": "Low average utilization is sometimes intentional for burst headroom, licensed cores, or compliance with fixed templates; do not right-size from averages alone without owner input.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:perf:cpu`, `sourcetype=vmware:perf:mem`, `sourcetype=vmware:inv:vm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAnalyze 30-day average CPU and memory utilization vs. allocated resources. Generate monthly right-sizing report. Include peak utilization to avoid right-sizing below burst needs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:perf:cpu\" counter=\"cpu.usage.average\"\n| stats avg(Value) as avg_cpu by vm_name\n| join max=1 vm_name [search index=vmware sourcetype=\"vmware:inv:vm\" | table vm_name numCpu memoryMB]\n| where avg_cpu < 20\n| sort avg_cpu\n| table vm_name numCpu memoryMB avg_cpu\n```\n\nUnderstanding this SPL\n\n**VM Resource Over-Allocation** — VMs consistently using <20% of allocated CPU/memory waste resources that other VMs could use. Right-sizing saves money and improves cluster capacity.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:cpu`, `sourcetype=vmware:perf:mem`, `sourcetype=vmware:inv:vm`. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:perf:cpu. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:perf:cpu\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where avg_cpu < 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VM Resource Over-Allocation**): table vm_name numCpu memoryMB avg_cpu\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1d\n| where avg_cpu < 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VM Resource Over-Allocation** — VMs consistently using <20% of allocated CPU/memory waste resources that other VMs could use. Right-sizing saves money and improves cluster capacity.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:cpu`, `sourcetype=vmware:perf:mem`, `sourcetype=vmware:inv:vm`. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu < 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter plot (allocated vs. used), Table with recommendations, Bar chart of waste by team.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on resource Over-Allocation and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1d\n| where avg_cpu < 20",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.13",
              "n": "vCenter Alarm Correlation",
              "c": "medium",
              "f": "beginner",
              "v": "Centralizing vCenter alarms in Splunk reduces mean-time-to-repair during compound failures (e.g. datastore latency + host memory pressure) that appear as separate alarms in vSphere. Alarm storm correlation by shared datastore or maintenance window prevents alert fatigue and highlights the root cause rather than symptoms.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:events`",
              "q": "index=vmware sourcetype=\"vmware:events\" event_type=\"AlarmStatusChangedEvent\"\n| stats count by alarm_name, new_status\n| sort -count",
              "m": "TA-vmware automatically collects vCenter events including alarm state changes. Create a dashboard showing all active alarms. Correlate with time of changes, DRS events, and host health.",
              "z": "Table of active alarms, Bar chart by alarm type, Timeline.",
              "kfp": "vCenter can emit overlapping alarms from dependency chains; a single host maintenance may trigger many child alarms that clear when the first root cause is fixed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTA-vmware automatically collects vCenter events including alarm state changes. Create a dashboard showing all active alarms. Correlate with time of changes, DRS events, and host health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" event_type=\"AlarmStatusChangedEvent\"\n| stats count by alarm_name, new_status\n| sort -count\n```\n\nUnderstanding this SPL\n\n**vCenter Alarm Correlation** — Centralizing vCenter alarms in Splunk reduces mean-time-to-repair during compound failures (e.g. datastore latency + host memory pressure) that appear as separate alarms in vSphere. Alarm storm correlation by shared datastore or maintenance window prevents alert fatigue and highlights the root cause rather than symptoms.\n\nDocumented **Data sources**: `sourcetype=vmware:events`. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by alarm_name, new_status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of active alarms, Bar chart by alarm type, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on alarm Correlation and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.14",
              "n": "ESXi Patch Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Unpatched ESXi hosts have known vulnerabilities. Fleet-wide version tracking ensures consistent patching and audit compliance.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:inv:hostsystem`",
              "q": "index=vmware sourcetype=\"vmware:inv:hostsystem\"\n| stats latest(version) as esxi_version, latest(build) as build by host\n| eval is_current = if(build=\"24585291\", \"Yes\", \"No\")\n| sort esxi_version\n| table host esxi_version build is_current",
              "m": "TA-vmware collects host inventory including ESXi version and build number. Maintain a lookup of current expected builds. Dashboard showing compliance percentage and hosts needing updates.",
              "z": "Table (host, version, build, compliant), Pie chart (compliant %), Bar chart by version.",
              "kfp": "Baseline and patch compliance lag expected during large maintenance windows, staged image rollout, or when hosts are in remediation queues.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:hostsystem`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTA-vmware collects host inventory including ESXi version and build number. Maintain a lookup of current expected builds. Dashboard showing compliance percentage and hosts needing updates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:hostsystem\"\n| stats latest(version) as esxi_version, latest(build) as build by host\n| eval is_current = if(build=\"24585291\", \"Yes\", \"No\")\n| sort esxi_version\n| table host esxi_version build is_current\n```\n\nUnderstanding this SPL\n\n**ESXi Patch Compliance** — Unpatched ESXi hosts have known vulnerabilities. Fleet-wide version tracking ensures consistent patching and audit compliance.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:hostsystem`. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:hostsystem. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:hostsystem\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **is_current** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **ESXi Patch Compliance**): table host esxi_version build is_current\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, version, build, compliant), Pie chart (compliant %), Bar chart by version.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on eSXi Patch Compliance and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.15",
              "n": "VM Creation/Deletion Audit",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks VM lifecycle for change management compliance and resource governance. Detects unauthorized VM creation or suspicious deletions.",
              "t": "`TA-vmware`",
              "d": "`sourcetype=vmware:events`",
              "q": "index=vmware sourcetype=\"vmware:events\" (event_type=\"VmCreatedEvent\" OR event_type=\"VmRemovedEvent\" OR event_type=\"VmClonedEvent\")\n| eval action=case(event_type=\"VmCreatedEvent\",\"Created\", event_type=\"VmRemovedEvent\",\"Deleted\", event_type=\"VmClonedEvent\",\"Cloned\")\n| table _time action vm_name user host datacenter\n| sort -_time",
              "m": "Collected automatically via TA-vmware vCenter events. Create daily report. Correlate with change management tickets. Alert on deletions of production VMs.",
              "z": "Table (timeline), Bar chart (create/delete by user), Line chart (VM count trending).",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollected automatically via TA-vmware vCenter events. Create daily report. Correlate with change management tickets. Alert on deletions of production VMs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" (event_type=\"VmCreatedEvent\" OR event_type=\"VmRemovedEvent\" OR event_type=\"VmClonedEvent\")\n| eval action=case(event_type=\"VmCreatedEvent\",\"Created\", event_type=\"VmRemovedEvent\",\"Deleted\", event_type=\"VmClonedEvent\",\"Cloned\")\n| table _time action vm_name user host datacenter\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**VM Creation/Deletion Audit** — Tracks VM lifecycle for change management compliance and resource governance. Detects unauthorized VM creation or suspicious deletions.\n\nDocumented **Data sources**: `sourcetype=vmware:events`. **App/TA** (typical add-on context): `TA-vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **VM Creation/Deletion Audit**): table _time action vm_name user host datacenter\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (timeline), Bar chart (create/delete by user), Line chart (VM count trending).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track VM lifecycle for change management compliance and resource governance — so you catch performance or capacity problems before they hurt real work.",
              "mtype": [
                "Compliance",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.16",
              "n": "VM Network I/O and Dropped Packets",
              "c": "high",
              "f": "intermediate",
              "v": "Dropped packets at the vNIC level indicate network saturation, driver issues, or misconfigured traffic shaping policies. Unlike guest OS network stats, hypervisor-level counters capture drops the VM never sees — making this the only reliable way to detect silent packet loss that degrades application performance.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:perf:net`, vCenter network performance metrics",
              "q": "index=vmware sourcetype=\"vmware:perf:net\" (counter=\"net.droppedRx.summation\" OR counter=\"net.droppedTx.summation\" OR counter=\"net.usage.average\")\n| stats sum(eval(if(counter=\"net.droppedRx.summation\", Value, 0))) as dropped_rx, sum(eval(if(counter=\"net.droppedTx.summation\", Value, 0))) as dropped_tx, avg(eval(if(counter=\"net.usage.average\", Value, 0))) as avg_kbps by host, vm_name\n| where dropped_rx > 0 OR dropped_tx > 0\n| sort -dropped_rx\n| table vm_name, host, avg_kbps, dropped_rx, dropped_tx",
              "m": "Collected via Splunk_TA_vmware performance counters. Alert when any VM shows >0 dropped packets sustained over 5 minutes. Correlate with VM network usage to determine if drops correlate with saturation. Check dvSwitch traffic shaping policies and physical NIC utilization on the host.",
              "z": "Table (VM, host, throughput, drops), Line chart (drops over time), Bar chart (top VMs by drops).",
              "kfp": "Short bursts of link errors or drops may appear during switch upgrades, LAG failovers, cable wiggles, or when VMs migrate; trend duration before treating as a chronic fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:perf:net`, vCenter network performance metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollected via Splunk_TA_vmware performance counters. Alert when any VM shows >0 dropped packets sustained over 5 minutes. Correlate with VM network usage to determine if drops correlate with saturation. Check dvSwitch traffic shaping policies and physical NIC utilization on the host.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:perf:net\" (counter=\"net.droppedRx.summation\" OR counter=\"net.droppedTx.summation\" OR counter=\"net.usage.average\")\n| stats sum(eval(if(counter=\"net.droppedRx.summation\", Value, 0))) as dropped_rx, sum(eval(if(counter=\"net.droppedTx.summation\", Value, 0))) as dropped_tx, avg(eval(if(counter=\"net.usage.average\", Value, 0))) as avg_kbps by host, vm_name\n| where dropped_rx > 0 OR dropped_tx > 0\n| sort -dropped_rx\n| table vm_name, host, avg_kbps, dropped_rx, dropped_tx\n```\n\nUnderstanding this SPL\n\n**VM Network I/O and Dropped Packets** — Dropped packets at the vNIC level indicate network saturation, driver issues, or misconfigured traffic shaping policies. Unlike guest OS network stats, hypervisor-level counters capture drops the VM never sees — making this the only reliable way to detect silent packet loss that degrades application performance.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:net`, vCenter network performance metrics. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:perf:net. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:perf:net\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, vm_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where dropped_rx > 0 OR dropped_tx > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VM Network I/O and Dropped Packets**): table vm_name, host, avg_kbps, dropped_rx, dropped_tx\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, host, throughput, drops), Line chart (drops over time), Bar chart (top VMs by drops).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on network I/O and Dropped Packets and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.17",
              "n": "VM Disk IOPS Trending and Throttling",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks read/write IOPS per VM to identify storage-hungry workloads before they impact other VMs on the same datastore. When Storage I/O Control (SIOC) throttles a VM, it appears as increased latency inside the guest — this use case exposes the throttling at the hypervisor level.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:perf:datastore`, vCenter disk performance metrics",
              "q": "index=vmware sourcetype=\"vmware:perf:datastore\" (counter=\"datastore.numberReadAveraged.average\" OR counter=\"datastore.numberWriteAveraged.average\")\n| eval metric=case(counter=\"datastore.numberReadAveraged.average\", \"read_iops\", counter=\"datastore.numberWriteAveraged.average\", \"write_iops\")\n| stats avg(Value) as avg_val by vm_name, host, datastore, metric\n| eval avg_val=round(avg_val, 0)\n| stats sum(avg_val) as total_iops, values(eval(metric . \"=\" . avg_val)) as breakdown by vm_name, host, datastore\n| where total_iops > 500\n| sort -total_iops\n| table vm_name, host, datastore, total_iops, breakdown",
              "m": "Collected via Splunk_TA_vmware. Baseline per-VM IOPS over 7 days. Alert when a VM exceeds 2x its baseline sustained for 15 minutes. Track SIOC injector latency counters to detect throttling. Correlate high-IOPS VMs with datastore latency spikes from UC-2.1.4.",
              "z": "Line chart (IOPS per VM over time), Stacked bar chart (read vs write), Table (top IOPS consumers).",
              "kfp": "IOPS and latency can spike during backup windows, storage vMotion, or VM snapshots; compare to backup schedules and SIOC/queue depth baselines.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:perf:datastore`, vCenter disk performance metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollected via Splunk_TA_vmware. Baseline per-VM IOPS over 7 days. Alert when a VM exceeds 2x its baseline sustained for 15 minutes. Track SIOC injector latency counters to detect throttling. Correlate high-IOPS VMs with datastore latency spikes from UC-2.1.4.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:perf:datastore\" (counter=\"datastore.numberReadAveraged.average\" OR counter=\"datastore.numberWriteAveraged.average\")\n| eval metric=case(counter=\"datastore.numberReadAveraged.average\", \"read_iops\", counter=\"datastore.numberWriteAveraged.average\", \"write_iops\")\n| stats avg(Value) as avg_val by vm_name, host, datastore, metric\n| eval avg_val=round(avg_val, 0)\n| stats sum(avg_val) as total_iops, values(eval(metric . \"=\" . avg_val)) as breakdown by vm_name, host, datastore\n| where total_iops > 500\n| sort -total_iops\n| table vm_name, host, datastore, total_iops, breakdown\n```\n\nUnderstanding this SPL\n\n**VM Disk IOPS Trending and Throttling** — Tracks read/write IOPS per VM to identify storage-hungry workloads before they impact other VMs on the same datastore. When Storage I/O Control (SIOC) throttles a VM, it appears as increased latency inside the guest — this use case exposes the throttling at the hypervisor level.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:datastore`, vCenter disk performance metrics. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:perf:datastore. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:perf:datastore\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **metric** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by vm_name, host, datastore, metric** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_val** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by vm_name, host, datastore** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where total_iops > 500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VM Disk IOPS Trending and Throttling**): table vm_name, host, datastore, total_iops, breakdown\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(Performance.read_ops) as read_ops sum(Performance.write_ops) as write_ops\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=15m\n| eval total_iops=read_ops + write_ops\n| where total_iops > 500\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VM Disk IOPS Trending and Throttling** — Tracks read/write IOPS per VM to identify storage-hungry workloads before they impact other VMs on the same datastore. When Storage I/O Control (SIOC) throttles a VM, it appears as increased latency inside the guest — this use case exposes the throttling at the hypervisor level.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:datastore`, vCenter disk performance metrics. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• `eval` defines or adjusts **total_iops** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where total_iops > 500` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (IOPS per VM over time), Stacked bar chart (read vs write), Table (top IOPS consumers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track read/write IOPS per VM to identify storage-hungry workloads before they impact other VMs on the same datastore — so you catch performance or capacity problems before they hurt real work.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` sum(Performance.read_ops) as read_ops sum(Performance.write_ops) as write_ops\n  from datamodel=Performance where nodename=Performance.Storage\n  by Performance.host Performance.mount span=15m\n| eval total_iops=read_ops + write_ops\n| where total_iops > 500",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.18",
              "n": "VMware Tools Status and Version Compliance",
              "c": "medium",
              "f": "beginner",
              "v": "Outdated or missing VMware Tools causes loss of guest-host integration — no graceful shutdown, no quiesced snapshots, no balloon driver, degraded network/disk performance, and inaccurate guest OS reporting. Tools must be current for vMotion to function optimally.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:inv:vm` (inventory data)",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\"\n| stats latest(toolsStatus) as tools_status, latest(toolsVersionStatus) as version_status, latest(toolsRunningStatus) as running_status by vm_name, host, guest_os\n| where tools_status!=\"toolsOk\" OR version_status!=\"guestToolsCurrent\"\n| sort tools_status\n| table vm_name, host, guest_os, tools_status, version_status, running_status",
              "m": "Collected automatically via Splunk_TA_vmware inventory. Run daily compliance report. toolsStatus values: toolsOk, toolsOld, toolsNotInstalled, toolsNotRunning. Alert on toolsNotInstalled for production VMs. Track version_status to ensure Tools are current across the fleet.",
              "z": "Pie chart (Tools status distribution), Table (non-compliant VMs), Bar chart (by guest OS).",
              "kfp": "Baseline and patch compliance lag expected during large maintenance windows, staged image rollout, or when hosts are in remediation queues.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:vm` (inventory data).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollected automatically via Splunk_TA_vmware inventory. Run daily compliance report. toolsStatus values: toolsOk, toolsOld, toolsNotInstalled, toolsNotRunning. Alert on toolsNotInstalled for production VMs. Track version_status to ensure Tools are current across the fleet.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\"\n| stats latest(toolsStatus) as tools_status, latest(toolsVersionStatus) as version_status, latest(toolsRunningStatus) as running_status by vm_name, host, guest_os\n| where tools_status!=\"toolsOk\" OR version_status!=\"guestToolsCurrent\"\n| sort tools_status\n| table vm_name, host, guest_os, tools_status, version_status, running_status\n```\n\nUnderstanding this SPL\n\n**VMware Tools Status and Version Compliance** — Outdated or missing VMware Tools causes loss of guest-host integration — no graceful shutdown, no quiesced snapshots, no balloon driver, degraded network/disk performance, and inaccurate guest OS reporting. Tools must be current for vMotion to function optimally.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:vm` (inventory data). **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_name, host, guest_os** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where tools_status!=\"toolsOk\" OR version_status!=\"guestToolsCurrent\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VMware Tools Status and Version Compliance**): table vm_name, host, guest_os, tools_status, version_status, running_status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (Tools status distribution), Table (non-compliant VMs), Bar chart (by guest OS).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on vMware Tools Status and Version and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.19",
              "n": "Distributed vSwitch Port Health and Errors",
              "c": "high",
              "f": "intermediate",
              "v": "VDS port errors indicate VLAN misconfiguration, MTU mismatches, uplink failures, or teaming policy problems. VDS health check results (available since vSphere 5.1) detect common misconfigurations that cause intermittent connectivity issues that are extremely hard to troubleshoot from the guest OS.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:events`, VDS health check results, vCenter network events",
              "q": "index=vmware sourcetype=\"vmware:events\" (event_type=\"*Dvs*\" OR event_type=\"*dvPort*\" OR event_type=\"*VmnicDisconnectedEvent*\")\n| stats count by event_type, host, dvs_name\n| sort -count\n| table event_type, host, dvs_name, count",
              "m": "Enable VDS health checks in vCenter (VLAN/MTU check, Teaming/Failover check). Collect vCenter events via Splunk_TA_vmware. Alert on VmnicDisconnectedEvent (physical uplink loss), DvsPortBlockedEvent, and any health check failure. Create a network topology dashboard showing VDS → uplink → VLAN mapping.",
              "z": "Status grid (VDS health per host), Events table, Network topology diagram.",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:events`, VDS health check results, vCenter network events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable VDS health checks in vCenter (VLAN/MTU check, Teaming/Failover check). Collect vCenter events via Splunk_TA_vmware. Alert on VmnicDisconnectedEvent (physical uplink loss), DvsPortBlockedEvent, and any health check failure. Create a network topology dashboard showing VDS → uplink → VLAN mapping.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" (event_type=\"*Dvs*\" OR event_type=\"*dvPort*\" OR event_type=\"*VmnicDisconnectedEvent*\")\n| stats count by event_type, host, dvs_name\n| sort -count\n| table event_type, host, dvs_name, count\n```\n\nUnderstanding this SPL\n\n**Distributed vSwitch Port Health and Errors** — VDS port errors indicate VLAN misconfiguration, MTU mismatches, uplink failures, or teaming policy problems. VDS health check results (available since vSphere 5.1) detect common misconfigurations that cause intermittent connectivity issues that are extremely hard to troubleshoot from the guest OS.\n\nDocumented **Data sources**: `sourcetype=vmware:events`, VDS health check results, vCenter network events. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by event_type, host, dvs_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Distributed vSwitch Port Health and Errors**): table event_type, host, dvs_name, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (VDS health per host), Events table, Network topology diagram.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on distributed vSwitch Port Health and Errors and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.20",
              "n": "Resource Pool Utilization and Limits",
              "c": "medium",
              "f": "intermediate",
              "v": "Resource pools with hard limits can silently throttle VMs even when the cluster has spare capacity. Pools without reservations provide no guarantees during contention. Monitoring utilization vs. configured limits/reservations reveals misconfigurations that cause unpredictable performance.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:inv:resourcepool`, `sourcetype=vmware:perf:cpu`",
              "q": "index=vmware sourcetype=\"vmware:inv:resourcepool\"\n| eval cpu_limit_ghz=if(cpuAllocation_limit=-1, \"Unlimited\", round(cpuAllocation_limit/1000, 1))\n| eval mem_limit_gb=if(memoryAllocation_limit=-1, \"Unlimited\", round(memoryAllocation_limit/1024, 1))\n| table name, cluster, cpuAllocation_reservation, cpu_limit_ghz, cpuAllocation_shares, memoryAllocation_reservation, mem_limit_gb, memoryAllocation_shares\n| sort cluster, name",
              "m": "Collect resource pool inventory via Splunk_TA_vmware. Alert when resource pool utilization approaches its configured limit (>80%). Flag resource pools with unlimited limits and zero reservations in production — they offer no guarantees. Cross-reference with VM performance to detect pool-level throttling.",
              "z": "Table (pool hierarchy, limits, utilization), Tree map (pools by resource allocation), Gauge (utilization vs limit).",
              "kfp": "Pool limit breaches may come from time-boxed test spikes or month-end jobs; use baselines and known batch windows to avoid paging for expected bursts.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:resourcepool`, `sourcetype=vmware:perf:cpu`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect resource pool inventory via Splunk_TA_vmware. Alert when resource pool utilization approaches its configured limit (>80%). Flag resource pools with unlimited limits and zero reservations in production — they offer no guarantees. Cross-reference with VM performance to detect pool-level throttling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:resourcepool\"\n| eval cpu_limit_ghz=if(cpuAllocation_limit=-1, \"Unlimited\", round(cpuAllocation_limit/1000, 1))\n| eval mem_limit_gb=if(memoryAllocation_limit=-1, \"Unlimited\", round(memoryAllocation_limit/1024, 1))\n| table name, cluster, cpuAllocation_reservation, cpu_limit_ghz, cpuAllocation_shares, memoryAllocation_reservation, mem_limit_gb, memoryAllocation_shares\n| sort cluster, name\n```\n\nUnderstanding this SPL\n\n**Resource Pool Utilization and Limits** — Resource pools with hard limits can silently throttle VMs even when the cluster has spare capacity. Pools without reservations provide no guarantees during contention. Monitoring utilization vs. configured limits/reservations reveals misconfigurations that cause unpredictable performance.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:resourcepool`, `sourcetype=vmware:perf:cpu`. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:resourcepool. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:resourcepool\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cpu_limit_ghz** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mem_limit_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Resource Pool Utilization and Limits**): table name, cluster, cpuAllocation_reservation, cpu_limit_ghz, cpuAllocation_shares, memoryAllocation_reservation, mem_limit_gb, memoryAl…\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (pool hierarchy, limits, utilization), Tree map (pools by resource allocation), Gauge (utilization vs limit).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on resource Pool Utilization and Limits and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.21",
              "n": "ESXi Host Unexpected Reboot Detection",
              "c": "critical",
              "f": "beginner",
              "v": "Unexpected ESXi reboots indicate hardware failure (memory ECC errors, CPU machine checks), kernel panics (PSODs), or firmware bugs. Each reboot triggers HA failover of all VMs on that host, causing widespread service disruption. Early detection enables root cause analysis before the issue recurs.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:events`, ESXi syslog",
              "q": "index=vmware sourcetype=\"vmware:events\" (event_type=\"HostConnectionLostEvent\" OR event_type=\"HostDisconnectedEvent\" OR event_type=\"HostShutdownEvent\")\n| table _time, host, event_type, message, user\n| sort -_time",
              "m": "Collect vCenter events via Splunk_TA_vmware. Also forward ESXi syslog directly to Splunk for boot-time messages. Alert immediately on HostConnectionLostEvent (ungraceful). Correlate with IPMI/iLO/iDRAC logs if available. Differentiate planned reboots (HostShutdownEvent with a user) from unplanned (HostConnectionLostEvent).",
              "z": "Timeline (host events), Status grid (host connectivity), Alert panel (critical).",
              "kfp": "Power-state changes may reflect approved automation, DRS, HA, or host evacuation; correlate to change and incident records before calling an outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:events`, ESXi syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect vCenter events via Splunk_TA_vmware. Also forward ESXi syslog directly to Splunk for boot-time messages. Alert immediately on HostConnectionLostEvent (ungraceful). Correlate with IPMI/iLO/iDRAC logs if available. Differentiate planned reboots (HostShutdownEvent with a user) from unplanned (HostConnectionLostEvent).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" (event_type=\"HostConnectionLostEvent\" OR event_type=\"HostDisconnectedEvent\" OR event_type=\"HostShutdownEvent\")\n| table _time, host, event_type, message, user\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**ESXi Host Unexpected Reboot Detection** — Unexpected ESXi reboots indicate hardware failure (memory ECC errors, CPU machine checks), kernel panics (PSODs), or firmware bugs. Each reboot triggers HA failover of all VMs on that host, causing widespread service disruption. Early detection enables root cause analysis before the issue recurs.\n\nDocumented **Data sources**: `sourcetype=vmware:events`, ESXi syslog. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **ESXi Host Unexpected Reboot Detection**): table _time, host, event_type, message, user\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (host events), Status grid (host connectivity), Alert panel (critical).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on host Unexpected Reboot Detection and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.22",
              "n": "vCenter Service Health",
              "c": "critical",
              "f": "intermediate",
              "v": "vCenter is the management plane for the entire VMware environment. If VPXD, SSO, or the content library service fails, you lose visibility into your VMs and cannot perform management operations. Monitoring vCenter appliance health ensures the control plane is operational.",
              "t": "`Splunk_TA_vmware`, vCenter syslog",
              "d": "`sourcetype=vmware:events`, vCenter VAMI health API, vCenter syslog (`/var/log/vmware/vpxd/vpxd.log`)",
              "q": "index=vmware sourcetype=\"syslog\" source=\"/var/log/vmware/vpxd/*\" (\"ERROR\" OR \"CRITICAL\" OR \"FATAL\")\n| stats count as errors by host, source\n| where errors > 10\n| sort -errors\n| table host, source, errors",
              "m": "Forward vCenter appliance syslog to Splunk. Monitor VPXD, STS (SSO), content library, and PostgreSQL logs. Also create a scripted input to poll the VAMI health API (`https://vcsa:5480/rest/applmgmt/health`). Alert when any service reports unhealthy status. Monitor vCenter disk space (database growth).",
              "z": "Status grid (service health), Line chart (error rate over time), Table (recent errors).",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, vCenter syslog.\n• Ensure the following data sources are available: `sourcetype=vmware:events`, vCenter VAMI health API, vCenter syslog (`/var/log/vmware/vpxd/vpxd.log`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward vCenter appliance syslog to Splunk. Monitor VPXD, STS (SSO), content library, and PostgreSQL logs. Also create a scripted input to poll the VAMI health API (`https://vcsa:5480/rest/applmgmt/health`). Alert when any service reports unhealthy status. Monitor vCenter disk space (database growth).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"syslog\" source=\"/var/log/vmware/vpxd/*\" (\"ERROR\" OR \"CRITICAL\" OR \"FATAL\")\n| stats count as errors by host, source\n| where errors > 10\n| sort -errors\n| table host, source, errors\n```\n\nUnderstanding this SPL\n\n**vCenter Service Health** — vCenter is the management plane for the entire VMware environment. If VPXD, SSO, or the content library service fails, you lose visibility into your VMs and cannot perform management operations. Monitoring vCenter appliance health ensures the control plane is operational.\n\nDocumented **Data sources**: `sourcetype=vmware:events`, vCenter VAMI health API, vCenter syslog (`/var/log/vmware/vpxd/vpxd.log`). **App/TA** (typical add-on context): `Splunk_TA_vmware`, vCenter syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, source** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where errors > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **vCenter Service Health**): table host, source, errors\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (service health), Line chart (error rate over time), Table (recent errors).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
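              "script": "#!/usr/bin/env bash\n# Hedged sketch of the scripted input this use case describes: poll the vCenter\n# VAMI health API and emit one key=value event per run. The endpoint comes from\n# this use case's implementation notes; the hostname and credentials below are\n# hypothetical placeholders. Swap in your VCSA and an auth method your appliance\n# accepts (many deployments require a session token rather than basic auth).\nVCSA=\"vcsa.example.com\"\nENDPOINT=\"https://${VCSA}:5480/rest/applmgmt/health\"\n# -s silent, -k skip TLS verification (only if you trust the network path)\nRESPONSE=$(curl -sk -u \"admin@vsphere.local:CHANGEME\" \"$ENDPOINT\")\necho \"timestamp=$(date +%s) endpoint=$ENDPOINT response=$RESPONSE\"",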
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on service Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog",
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware",
                "vmware_vcenter"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.23",
              "n": "VM Unexpected Power State Changes",
              "c": "critical",
              "f": "beginner",
              "v": "Unexpected VM shutdowns or resets indicate guest OS crashes, resource exhaustion, or unauthorized actions. Unlike planned maintenance, unplanned power state changes disrupt services without warning. Correlating with the initiating user distinguishes admin actions from automated or malicious changes.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:events`",
              "q": "index=vmware sourcetype=\"vmware:events\" (event_type=\"VmPoweredOffEvent\" OR event_type=\"VmResettingEvent\" OR event_type=\"VmGuestShutdownEvent\" OR event_type=\"VmGuestRebootEvent\")\n| eval planned=if(match(user, \"^(admin|svc_|scheduled)\"), \"Planned\", \"Unplanned\")\n| where planned=\"Unplanned\"\n| table _time, vm_name, host, event_type, user, message\n| sort -_time",
              "m": "Collect vCenter events via Splunk_TA_vmware. Maintain a lookup of authorized service accounts and scheduled maintenance windows. Alert on any power-off or reset outside of maintenance windows or by non-authorized users. Cross-reference with guest OS event logs for crash evidence.",
              "z": "Timeline (power events), Table (unplanned shutdowns), Bar chart (by VM and user).",
              "kfp": "Power-state changes may reflect approved automation, DRS, HA, or host evacuation; correlate to change and incident records before calling an outage.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect vCenter events via Splunk_TA_vmware. Maintain a lookup of authorized service accounts and scheduled maintenance windows. Alert on any power-off or reset outside of maintenance windows or by non-authorized users. Cross-reference with guest OS event logs for crash evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" (event_type=\"VmPoweredOffEvent\" OR event_type=\"VmResettingEvent\" OR event_type=\"VmGuestShutdownEvent\" OR event_type=\"VmGuestRebootEvent\")\n| eval planned=if(match(user, \"^(admin|svc_|scheduled)\"), \"Planned\", \"Unplanned\")\n| where planned=\"Unplanned\"\n| table _time, vm_name, host, event_type, user, message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**VM Unexpected Power State Changes** — Unexpected VM shutdowns or resets indicate guest OS crashes, resource exhaustion, or unauthorized actions. Unlike planned maintenance, unplanned power state changes disrupt services without warning. Correlating with the initiating user distinguishes admin actions from automated or malicious changes.\n\nDocumented **Data sources**: `sourcetype=vmware:events`. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **planned** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where planned=\"Unplanned\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VM Unexpected Power State Changes**): table _time, vm_name, host, event_type, user, message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VM Unexpected Power State Changes** — Unexpected VM shutdowns or resets indicate guest OS crashes, resource exhaustion, or unauthorized actions. Unlike planned maintenance, unplanned power state changes disrupt services without warning. Correlating with the initiating user distinguishes admin actions from automated or malicious changes.\n\nDocumented **Data sources**: `sourcetype=vmware:events`. **App/TA** (typical add-on context): `Splunk_TA_vmware`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (power events), Table (unplanned shutdowns), Bar chart (by VM and user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on unexpected Power State Changes and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "both",
              "_qs": 45,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.24",
              "n": "ESXi Host NTP Clock Drift",
              "c": "medium",
              "f": "intermediate",
              "v": "Clock drift on ESXi hosts causes VM time drift, Kerberos authentication failures, log correlation issues, and vSAN timing problems. NTP misconfiguration is a common root cause of intermittent authentication failures that are difficult to diagnose.",
              "t": "`Splunk_TA_vmware`, ESXi syslog",
              "d": "ESXi syslog, `sourcetype=vmware:inv:hostsystem`",
              "q": "index=vmware sourcetype=\"vmware:inv:hostsystem\"\n| stats latest(ntpConfig_server) as ntp_servers, latest(dateTimeInfo_timeZone) as timezone by host\n| eval ntp_configured=if(isnotnull(ntp_servers) AND ntp_servers!=\"\", \"Yes\", \"No\")\n| table host, ntp_configured, ntp_servers, timezone\n| sort ntp_configured",
              "m": "Collect host inventory via Splunk_TA_vmware. Also monitor ESXi syslog for NTP daemon messages. Create a scripted input using `esxcli system time get` via PowerCLI to capture actual time offset. Alert when NTP is not configured or when time offset exceeds 1 second.",
              "z": "Table (host, NTP status, servers), Status grid (NTP health), Gauge (drift in ms).",
              "kfp": "NTP and clock step events may occur when hosts rejoin a site after maintenance, or after stratum and peer changes; verify against expected maintenance and time-server work.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, ESXi syslog.\n• Ensure the following data sources are available: ESXi syslog, `sourcetype=vmware:inv:hostsystem`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect host inventory via Splunk_TA_vmware. Also monitor ESXi syslog for NTP daemon messages. Create a scripted input using `esxcli system time get` via PowerCLI to capture actual time offset. Alert when NTP is not configured or when time offset exceeds 1 second.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:hostsystem\"\n| stats latest(ntpConfig_server) as ntp_servers, latest(dateTimeInfo_timeZone) as timezone by host\n| eval ntp_configured=if(isnotnull(ntp_servers) AND ntp_servers!=\"\", \"Yes\", \"No\")\n| table host, ntp_configured, ntp_servers, timezone\n| sort ntp_configured\n```\n\nUnderstanding this SPL\n\n**ESXi Host NTP Clock Drift** — Clock drift on ESXi hosts causes VM time drift, Kerberos authentication failures, log correlation issues, and vSAN timing problems. NTP misconfiguration is a common root cause of intermittent authentication failures that are difficult to diagnose.\n\nDocumented **Data sources**: ESXi syslog, `sourcetype=vmware:inv:hostsystem`. **App/TA** (typical add-on context): `Splunk_TA_vmware`, ESXi syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:hostsystem. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:hostsystem\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ntp_configured** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **ESXi Host NTP Clock Drift**): table host, ntp_configured, ntp_servers, timezone\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, NTP status, servers), Status grid (NTP health), Gauge (drift in ms).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
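              "script": "#!/usr/bin/env bash\n# Hedged sketch of the scripted input this use case describes: compare an ESXi\n# host's clock against the Splunk collector's clock and emit the drift as\n# key=value. Assumptions: SSH key access to the host, GNU date on the collector,\n# and that `esxcli system time get` (the command referenced in Step 1) returns\n# an ISO 8601 timestamp. The hostname is a hypothetical placeholder.\nESXI_HOST=\"esxi01.example.com\"\nHOST_TIME=$(ssh \"root@${ESXI_HOST}\" esxcli system time get)\nLOCAL_EPOCH=$(date +%s)\nHOST_EPOCH=$(date -d \"$HOST_TIME\" +%s 2>/dev/null || echo \"$LOCAL_EPOCH\")\necho \"timestamp=$LOCAL_EPOCH host=$ESXI_HOST host_time=\\\"$HOST_TIME\\\" drift_s=$((LOCAL_EPOCH - HOST_EPOCH))\"",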
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on host NTP Clock Drift and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog",
                "vmware"
              ],
              "em": [
                "vmware_esxi",
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.25",
              "n": "Storage I/O Control (SIOC) Throttling",
              "c": "high",
              "f": "advanced",
              "v": "SIOC throttles VM disk I/O when datastore latency exceeds thresholds (default 30ms). When SIOC activates, VMs experience injected latency that appears as slow storage from the guest perspective. Detecting SIOC activation reveals contention invisible from the guest OS.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:perf:datastore`, vCenter events",
              "q": "index=vmware sourcetype=\"vmware:perf:datastore\" counter=\"datastore.sizeNormalizedDatastoreLatency.average\"\n| stats avg(Value) as avg_latency by datastore, host\n| where avg_latency > 25\n| sort -avg_latency\n| table datastore, host, avg_latency",
              "m": "Collected via Splunk_TA_vmware. SIOC triggers when datastore latency exceeds its configured threshold. Monitor the sizeNormalizedDatastoreLatency counter which SIOC uses for its decisions. Alert when latency approaches the SIOC threshold (default 30ms). Correlate with per-VM IOPS from UC-2.1.17 to identify the VM causing contention.",
              "z": "Line chart (datastore latency over time with SIOC threshold line), Table (datastores near threshold), Heatmap (datastores by latency).",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:perf:datastore`, vCenter events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollected via Splunk_TA_vmware. SIOC triggers when datastore latency exceeds its configured threshold. Monitor the sizeNormalizedDatastoreLatency counter which SIOC uses for its decisions. Alert when latency approaches the SIOC threshold (default 30ms). Correlate with per-VM IOPS from UC-2.1.17 to identify the VM causing contention.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:perf:datastore\" counter=\"datastore.sizeNormalizedDatastoreLatency.average\"\n| stats avg(Value) as avg_latency by datastore, host\n| where avg_latency > 25\n| sort -avg_latency\n| table datastore, host, avg_latency\n```\n\nUnderstanding this SPL\n\n**Storage I/O Control (SIOC) Throttling** — SIOC throttles VM disk I/O when datastore latency exceeds thresholds (default 30ms). When SIOC activates, VMs experience injected latency that appears as slow storage from the guest perspective. Detecting SIOC activation reveals contention invisible from the guest OS.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:datastore`, vCenter events. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:perf:datastore. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:perf:datastore\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by datastore, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_latency > 25` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Storage I/O Control (SIOC) Throttling**): table datastore, host, avg_latency\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (datastore latency over time with SIOC threshold line), Table (datastores near threshold), Heatmap (datastores by latency).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on storage I/O Control (SIOC) Throttling and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.26",
              "n": "VM Hardware Version Compliance",
              "c": "medium",
              "f": "beginner",
              "v": "Older VM hardware versions lack support for newer features — vNVMe, UEFI secure boot, vTPM, higher vCPU/memory limits, and improved device emulation. Running mixed hardware versions complicates fleet management and limits what features you can enable cluster-wide.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:inv:vm`",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\"\n| stats latest(hw_version) as hw_version, latest(guest_os) as guest_os by vm_name, host\n| eval hw_num=tonumber(replace(hw_version, \"vmx-\", \"\"))\n| where hw_num < 19\n| stats count by hw_version\n| sort hw_version\n| table hw_version, count",
              "m": "Collected via Splunk_TA_vmware inventory. Define target hardware version per cluster (e.g., vmx-19 for vSphere 7.0 U2+, vmx-20 for vSphere 8.0). Generate weekly compliance reports. Coordinate upgrades during maintenance windows as they require VM power cycle.",
              "z": "Pie chart (hardware version distribution), Table (VMs needing upgrade), Bar chart (versions by cluster).",
              "kfp": "IPMI and hardware agents may flap briefly during firmware runs, power cycles, or cable checks; confirm against maintenance tickets before dispatching on-site help.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:vm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollected via Splunk_TA_vmware inventory. Define target hardware version per cluster (e.g., vmx-19 for vSphere 7.0 U2+, vmx-20 for vSphere 8.0). Generate weekly compliance reports. Coordinate upgrades during maintenance windows as they require VM power cycle.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\"\n| stats latest(hw_version) as hw_version, latest(guest_os) as guest_os by vm_name, host\n| eval hw_num=tonumber(replace(hw_version, \"vmx-\", \"\"))\n| where hw_num < 19\n| stats count by hw_version\n| sort hw_version\n| table hw_version, count\n```\n\nUnderstanding this SPL\n\n**VM Hardware Version Compliance** — Older VM hardware versions lack support for newer features — vNVMe, UEFI secure boot, vTPM, higher vCPU/memory limits, and improved device emulation. Running mixed hardware versions complicates fleet management and limits what features you can enable cluster-wide.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:vm`. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hw_num** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hw_num < 19` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by hw_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VM Hardware Version Compliance**): table hw_version, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (hardware version distribution), Table (VMs needing upgrade), Bar chart (versions by cluster).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on hardware Version and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.27",
              "n": "VM Disk Consolidation Needed",
              "c": "high",
              "f": "beginner",
              "v": "After a failed snapshot deletion, VMs can have orphaned delta disks that keep growing and degrading I/O performance. The \"consolidation needed\" flag indicates the disk chain is broken and needs manual intervention before it causes datastore exhaustion.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:inv:vm`",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\" consolidationNeeded=\"true\"\n| table vm_name, host, datastore, consolidationNeeded\n| sort vm_name",
              "m": "Collected via Splunk_TA_vmware inventory. Alert immediately on any VM with consolidationNeeded=true. Consolidation should be performed during low-I/O periods as it can temporarily stun the VM. Track datastore free space for affected VMs as orphaned deltas grow continuously.",
              "z": "Table (VMs needing consolidation), Single value (count), Status indicator.",
              "kfp": "Consolidation-needed flags are common right after large snapshot deletes, backup chains, or clones; allow operations to complete before second-guessing the datastore.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:vm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollected via Splunk_TA_vmware inventory. Alert immediately on any VM with consolidationNeeded=true. Consolidation should be performed during low-I/O periods as it can temporarily stun the VM. Track datastore free space for affected VMs as orphaned deltas grow continuously.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\" consolidationNeeded=\"true\"\n| table vm_name, host, datastore, consolidationNeeded\n| sort vm_name\n```\n\nUnderstanding this SPL\n\n**VM Disk Consolidation Needed** — After a failed snapshot deletion, VMs can have orphaned delta disks that keep growing and degrading I/O performance. The \"consolidation needed\" flag indicates the disk chain is broken and needs manual intervention before it causes datastore exhaustion.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:vm`. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **VM Disk Consolidation Needed**): table vm_name, host, datastore, consolidationNeeded\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VMs needing consolidation), Single value (count), Status indicator.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on disk Consolidation Needed and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.28",
              "n": "Thin-Provisioned Disk Growth Rate",
              "c": "high",
              "f": "intermediate",
              "v": "Thin-provisioned disks start small but grow to their provisioned maximum as the guest writes data. If the total provisioned size of all thin disks on a datastore exceeds physical capacity (over-provisioning), the datastore can fill unexpectedly. Tracking actual growth rate predicts when this will happen.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:inv:vm` (storage_committed, storage_uncommitted)",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\"\n| eval committed_gb=round(storage_committed/1073741824, 1)\n| eval provisioned_gb=round((storage_committed+storage_uncommitted)/1073741824, 1)\n| eval thin_ratio=round(committed_gb/provisioned_gb*100, 1)\n| stats latest(committed_gb) as used_gb, latest(provisioned_gb) as provisioned_gb, latest(thin_ratio) as thin_pct by vm_name, datastore\n| where thin_pct < 80\n| sort thin_pct\n| table vm_name, datastore, used_gb, provisioned_gb, thin_pct",
              "m": "Collect VM storage inventory via Splunk_TA_vmware. Calculate total provisioned vs. total physical per datastore to determine over-provisioning ratio. Track committed bytes over time to calculate daily growth rate. Alert when datastore over-provisioning ratio exceeds 200% and growth rate will fill physical capacity within 30 days.",
              "z": "Table (VM, used vs provisioned), Gauge (datastore over-provisioning ratio), Line chart (growth trend with prediction).",
              "kfp": "Storage growth and capacity signals can track expected backups, storage vMotion, or provisioning tests; align spikes with change calendars and storage operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:vm` (storage_committed, storage_uncommitted).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect VM storage inventory via Splunk_TA_vmware. Calculate total provisioned vs. total physical per datastore to determine over-provisioning ratio. Track committed bytes over time to calculate daily growth rate. Alert when datastore over-provisioning ratio exceeds 200% and growth rate will fill physical capacity within 30 days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\"\n| eval committed_gb=round(storage_committed/1073741824, 1)\n| eval provisioned_gb=round((storage_committed+storage_uncommitted)/1073741824, 1)\n| eval thin_ratio=round(committed_gb/provisioned_gb*100, 1)\n| stats latest(committed_gb) as used_gb, latest(provisioned_gb) as provisioned_gb, latest(thin_ratio) as thin_pct by vm_name, datastore\n| where thin_pct < 80\n| sort thin_pct\n| table vm_name, datastore, used_gb, provisioned_gb, thin_pct\n```\n\nUnderstanding this SPL\n\n**Thin-Provisioned Disk Growth Rate** — Thin-provisioned disks start small but grow to their provisioned maximum as the guest writes data. If the total provisioned size of all thin disks on a datastore exceeds physical capacity (over-provisioning), the datastore can fill unexpectedly. Tracking actual growth rate predicts when this will happen.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:vm` (storage_committed, storage_uncommitted). **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **committed_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **provisioned_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **thin_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by vm_name, datastore** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where thin_pct < 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Thin-Provisioned Disk Growth Rate**): table vm_name, datastore, used_gb, provisioned_gb, thin_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, used vs provisioned), Gauge (datastore over-provisioning ratio), Line chart (growth trend with prediction).",
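              "_spl_sketch": "Optional extension: a sketch of the per-datastore over-provisioning check from Step 1. Physical capacity is not in the VM inventory feed, so this assumes a hypothetical `datastore_capacity` lookup (columns: datastore, capacity_gb) that you maintain yourself:\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\"\n| stats latest(storage_committed) as committed_b, latest(storage_uncommitted) as uncommitted_b by vm_name, datastore\n| stats sum(committed_b) as committed_b, sum(uncommitted_b) as uncommitted_b by datastore\n| eval provisioned_gb=round((committed_b+uncommitted_b)/1073741824, 1)\n| lookup datastore_capacity datastore OUTPUT capacity_gb\n| eval overprov_pct=round(provisioned_gb/capacity_gb*100, 1)\n| where overprov_pct > 200\n| table datastore, provisioned_gb, capacity_gb, overprov_pct\n```\n\nThe 200% threshold mirrors the implementation note; tune it to your storage team's risk appetite.",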
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on thin-Provisioned Disk Growth Rate and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.29",
              "n": "VM Affinity and Anti-Affinity Rule Violations",
              "c": "medium",
              "f": "intermediate",
              "v": "Anti-affinity rules ensure redundant VMs (e.g., HA pairs, database replicas) run on different hosts. Rule violations mean a single host failure can take down both instances. Affinity rules keep related VMs together for performance. DRS may violate rules when resources are constrained.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:events`, cluster rule configuration",
              "q": "index=vmware sourcetype=\"vmware:events\" event_type=\"DrsRuleViolatedEvent\"\n| table _time, cluster, rule_name, vm_name, host, message\n| sort -_time",
              "m": "Collect vCenter events via Splunk_TA_vmware. DRS logs rule violations as events. Also create a scripted input using PowerCLI to enumerate cluster rules and check current VM placement. Alert immediately on anti-affinity violations in production. Review affinity rule compliance weekly.",
              "z": "Table (violated rules), Status grid (rule compliance), Alert panel.",
              "kfp": "Rule violations can show during rolling moves, DRS rebalancing, or host evacuations; treat as urgent only if the cluster cannot restore compliance on its own.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:events`, cluster rule configuration.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect vCenter events via Splunk_TA_vmware. DRS logs rule violations as events. Also create a scripted input using PowerCLI to enumerate cluster rules and check current VM placement. Alert immediately on anti-affinity violations in production. Review affinity rule compliance weekly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" event_type=\"DrsRuleViolatedEvent\"\n| table _time, cluster, rule_name, vm_name, host, message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**VM Affinity and Anti-Affinity Rule Violations** — Anti-affinity rules ensure redundant VMs (e.g., HA pairs, database replicas) run on different hosts. Rule violations mean a single host failure can take down both instances. Affinity rules keep related VMs together for performance. DRS may violate rules when resources are constrained.\n\nDocumented **Data sources**: `sourcetype=vmware:events`, cluster rule configuration. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **VM Affinity and Anti-Affinity Rule Violations**): table _time, cluster, rule_name, vm_name, host, message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violated rules), Status grid (rule compliance), Alert panel.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on affinity and Anti-Affinity Rule Violations and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.30",
              "n": "Storage DRS Recommendations and Actions",
              "c": "medium",
              "f": "intermediate",
              "v": "Storage DRS (SDRS) balances VM storage across datastores within a datastore cluster. Frequent SDRS migrations indicate capacity or performance imbalance. Unapplied recommendations (when SDRS is in manual mode) mean datastores are unbalanced and latency may be inconsistent.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:events`",
              "q": "index=vmware sourcetype=\"vmware:events\" (event_type=\"StorageDrsRecommendation*\" OR event_type=\"StorageMigratedEvent\")\n| eval action=case(match(event_type, \"Recommendation\"), \"Recommended\", event_type=\"StorageMigratedEvent\", \"Migrated\")\n| stats count by action, datastore_cluster\n| sort -count\n| table datastore_cluster, action, count",
              "m": "Collect vCenter events via Splunk_TA_vmware. Track SDRS migration frequency per datastore cluster. Alert when manual-mode SDRS has unapplied recommendations older than 24 hours. Monitor datastore cluster balance — alert when any datastore deviates >20% from the cluster average utilization.",
              "z": "Table (recommendations and actions), Bar chart (migrations per cluster), Line chart (cluster balance over time).",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect vCenter events via Splunk_TA_vmware. Track SDRS migration frequency per datastore cluster. Alert when manual-mode SDRS has unapplied recommendations older than 24 hours. Monitor datastore cluster balance — alert when any datastore deviates >20% from the cluster average utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" (event_type=\"StorageDrsRecommendation*\" OR event_type=\"StorageMigratedEvent\")\n| eval action=case(match(event_type, \"Recommendation\"), \"Recommended\", event_type=\"StorageMigratedEvent\", \"Migrated\")\n| stats count by action, datastore_cluster\n| sort -count\n| table datastore_cluster, action, count\n```\n\nUnderstanding this SPL\n\n**Storage DRS Recommendations and Actions** — Storage DRS (SDRS) balances VM storage across datastores within a datastore cluster. Frequent SDRS migrations indicate capacity or performance imbalance. Unapplied recommendations (when SDRS is in manual mode) mean datastores are unbalanced and latency may be inconsistent.\n\nDocumented **Data sources**: `sourcetype=vmware:events`. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by action, datastore_cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Storage DRS Recommendations and Actions**): table datastore_cluster, action, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recommendations and actions), Bar chart (migrations per cluster), Line chart (cluster balance over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on storage DRS Recommendations and Actions and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.31",
              "n": "Fault Tolerance Status and Replication Lag",
              "c": "critical",
              "f": "intermediate",
              "v": "VMware Fault Tolerance provides zero-downtime protection by maintaining a live secondary copy of a VM. If FT replication falls behind or becomes disabled, the VM loses its zero-downtime protection. FT lag indicates network bandwidth or CPU contention on the secondary host.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:inv:vm`, `sourcetype=vmware:events`",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\" ftInfo_role=*\n| stats latest(ftInfo_role) as ft_role, latest(ftInfo_instanceUuid) as ft_pair by vm_name, host\n| eval ft_status=if(isnotnull(ft_role), ft_role, \"Not Configured\")\n| table vm_name, host, ft_status, ft_pair\n| append [search index=vmware sourcetype=\"vmware:events\" event_type=\"*FaultTolerance*\" | stats count by event_type, vm_name | table event_type, vm_name, count]",
              "m": "Collect VM inventory and events via Splunk_TA_vmware. Monitor FT state changes (enabled, disabled, failover occurred). Alert when FT is disabled on a protected VM or when FT failover events occur. Track FT vMotion log latency counters to detect replication lag.",
              "z": "Status grid (FT-protected VMs), Events timeline (FT state changes), Table (FT configuration).",
              "kfp": "FT and replication may lag during network maintenance or when secondary paths fail over; ignore brief lag if the UI shows a clean return to full protection.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:vm`, `sourcetype=vmware:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect VM inventory and events via Splunk_TA_vmware. Monitor FT state changes (enabled, disabled, failover occurred). Alert when FT is disabled on a protected VM or when FT failover events occur. Track FT vMotion log latency counters to detect replication lag.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\" ftInfo_role=*\n| stats latest(ftInfo_role) as ft_role, latest(ftInfo_instanceUuid) as ft_pair by vm_name, host\n| eval ft_status=if(isnotnull(ft_role), ft_role, \"Not Configured\")\n| table vm_name, host, ft_status, ft_pair\n| append [search index=vmware sourcetype=\"vmware:events\" event_type=\"*FaultTolerance*\" | stats count by event_type, vm_name | table event_type, vm_name, count]\n```\n\nUnderstanding this SPL\n\n**Fault Tolerance Status and Replication Lag** — VMware Fault Tolerance provides zero-downtime protection by maintaining a live secondary copy of a VM. If FT replication falls behind or becomes disabled, the VM loses its zero-downtime protection. FT lag indicates network bandwidth or CPU contention on the secondary host.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:vm`, `sourcetype=vmware:events`. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ft_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Fault Tolerance Status and Replication Lag**): table vm_name, host, ft_status, ft_pair\n• Appends rows from a subsearch with `append`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (FT-protected VMs), Events timeline (FT state changes), Table (FT configuration).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on fault Tolerance Status and Replication Lag and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.32",
              "n": "ESXi Host Certificate Expiration",
              "c": "high",
              "f": "intermediate",
              "v": "ESXi hosts use certificates for secure communication with vCenter and other hosts. Expired certificates cause vCenter disconnection, vMotion failures, and HA communication breakdowns. The VMCA-signed certificates have a 5-year default lifetime, but custom certificates may expire sooner.",
              "t": "`Splunk_TA_vmware`, custom scripted input",
              "d": "Custom scripted input (PowerCLI certificate query)",
              "q": "index=vmware sourcetype=\"esxi_certificates\"\n| eval days_to_expiry=round((strptime(not_after, \"%Y-%m-%dT%H:%M:%S\") - now()) / 86400, 0)\n| where days_to_expiry < 90\n| sort days_to_expiry\n| table host, subject, issuer, days_to_expiry, not_after",
              "m": "Create a PowerCLI scripted input: `Get-VMHost | Get-VMHostCertificate | Select VMHost, NotAfter, Subject, Issuer`. Run daily. Alert at 90 days (warning), 30 days (high), 7 days (critical). Also check vCenter VMCA certificate and STS signing certificate which cause widespread failures when expired.",
              "z": "Table (host, cert, expiry), Single value (certs expiring within 30 days), Timeline (upcoming expirations).",
              "kfp": "Certificate warnings can repeat during staged re-issuance, mixed-version clusters, or when a replacement CA is not yet trusted on every management client.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, custom scripted input.\n• Ensure the following data sources are available: Custom scripted input (PowerCLI certificate query).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a PowerCLI scripted input: `Get-VMHost | Get-VMHostCertificate | Select VMHost, NotAfter, Subject, Issuer`. Run daily. Alert at 90 days (warning), 30 days (high), 7 days (critical). Also check vCenter VMCA certificate and STS signing certificate which cause widespread failures when expired.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"esxi_certificates\"\n| eval days_to_expiry=round((strptime(not_after, \"%Y-%m-%dT%H:%M:%S\") - now()) / 86400, 0)\n| where days_to_expiry < 90\n| sort days_to_expiry\n| table host, subject, issuer, days_to_expiry, not_after\n```\n\nUnderstanding this SPL\n\n**ESXi Host Certificate Expiration** — ESXi hosts use certificates for secure communication with vCenter and other hosts. Expired certificates cause vCenter disconnection, vMotion failures, and HA communication breakdowns. The VMCA-signed certificates have a 5-year default lifetime, but custom certificates may expire sooner.\n\nDocumented **Data sources**: Custom scripted input (PowerCLI certificate query). **App/TA** (typical add-on context): `Splunk_TA_vmware`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: esxi_certificates. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"esxi_certificates\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_to_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_expiry < 90` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **ESXi Host Certificate Expiration**): table host, subject, issuer, days_to_expiry, not_after\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, cert, expiry), Single value (certs expiring within 30 days), Timeline (upcoming expirations).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
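              "_spl_sketch": "Optional extension: a sketch encoding the 90/30/7-day tiers from Step 1 as a severity field so one saved search can drive both warning and critical alert actions (field names follow the main search):\n\n```spl\nindex=vmware sourcetype=\"esxi_certificates\"\n| eval days_to_expiry=round((strptime(not_after, \"%Y-%m-%dT%H:%M:%S\") - now()) / 86400, 0)\n| eval severity=case(days_to_expiry < 7, \"critical\", days_to_expiry < 30, \"high\", days_to_expiry < 90, \"warning\", true(), \"ok\")\n| where severity!=\"ok\"\n| sort days_to_expiry\n| table host, subject, issuer, days_to_expiry, severity\n```\n\nRoute on `severity` in the alert action rather than maintaining three near-identical searches.",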
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on host Certificate Expiration and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.33",
              "n": "ESXi Host Lockdown Mode Compliance",
              "c": "medium",
              "f": "beginner",
              "v": "Lockdown mode restricts direct ESXi access, forcing all management through vCenter. Hosts not in lockdown mode can be accessed directly via SSH or the DCUI, bypassing vCenter audit trails and RBAC. Required by security frameworks like CIS, DISA STIG, and PCI DSS.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:inv:hostsystem`",
              "q": "index=vmware sourcetype=\"vmware:inv:hostsystem\"\n| stats latest(lockdownMode) as lockdown, latest(sshEnabled) as ssh by host, cluster\n| where lockdown!=\"lockdownNormal\" OR ssh=\"true\"\n| table host, cluster, lockdown, ssh",
              "m": "Collect host inventory via Splunk_TA_vmware. Define expected lockdown mode per cluster (lockdownNormal or lockdownStrict). Alert when any production host has lockdown disabled or SSH enabled outside a maintenance window. Generate weekly compliance reports for security audits.",
              "z": "Status grid (lockdown compliance), Table (non-compliant hosts), Pie chart (compliance rate).",
              "kfp": "Baseline and patch compliance lag expected during large maintenance windows, staged image rollout, or when hosts are in remediation queues.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:hostsystem`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect host inventory via Splunk_TA_vmware. Define expected lockdown mode per cluster (lockdownNormal or lockdownStrict). Alert when any production host has lockdown disabled or SSH enabled outside a maintenance window. Generate weekly compliance reports for security audits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:hostsystem\"\n| stats latest(lockdownMode) as lockdown, latest(sshEnabled) as ssh by host, cluster\n| where lockdown!=\"lockdownNormal\" OR ssh=\"true\"\n| table host, cluster, lockdown, ssh\n```\n\nUnderstanding this SPL\n\n**ESXi Host Lockdown Mode Compliance** — Lockdown mode restricts direct ESXi access, forcing all management through vCenter. Hosts not in lockdown mode can be accessed directly via SSH or the DCUI, bypassing vCenter audit trails and RBAC. Required by security frameworks like CIS, DISA STIG, and PCI DSS.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:hostsystem`. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:hostsystem. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:hostsystem\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where lockdown!=\"lockdownNormal\" OR ssh=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ESXi Host Lockdown Mode Compliance**): table host, cluster, lockdown, ssh\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (lockdown compliance), Table (non-compliant hosts), Pie chart (compliance rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on host Lockdown Mode and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.34",
              "n": "Orphaned VMDK Files on Datastores",
              "c": "medium",
              "f": "advanced",
              "v": "When VMs are deleted without cleaning up their disk files, or when snapshots leave behind delta VMDKs, orphaned files accumulate and waste datastore space. In large environments, orphaned files can consume terabytes of storage that cannot be identified through normal VM inventory.",
              "t": "`Splunk_TA_vmware`, custom scripted input",
              "d": "Custom scripted input (datastore file browser vs VM inventory comparison)",
              "q": "index=vmware sourcetype=\"datastore_orphans\"\n| stats sum(size_gb) as total_waste_gb, count as orphan_count by datastore\n| sort -total_waste_gb\n| table datastore, orphan_count, total_waste_gb",
              "m": "Create a PowerCLI scripted input that lists all VMDK files on each datastore and compares against registered VM disk paths. Files not attached to any VM are orphans. Run weekly during off-peak hours (datastore browsing is I/O intensive). Alert when total orphan size exceeds 100GB per datastore.",
              "z": "Table (datastore, orphan count, wasted GB), Bar chart (waste by datastore), Single value (total waste).",
              "kfp": "Storage growth and capacity signals can track expected backups, storage vMotion, or provisioning tests; align spikes with change calendars and storage operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, custom scripted input.\n• Ensure the following data sources are available: Custom scripted input (datastore file browser vs VM inventory comparison).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a PowerCLI scripted input that lists all VMDK files on each datastore and compares against registered VM disk paths. Files not attached to any VM are orphans. Run weekly during off-peak hours (datastore browsing is I/O intensive). Alert when total orphan size exceeds 100GB per datastore.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"datastore_orphans\"\n| stats sum(size_gb) as total_waste_gb, count as orphan_count by datastore\n| sort -total_waste_gb\n| table datastore, orphan_count, total_waste_gb\n```\n\nUnderstanding this SPL\n\n**Orphaned VMDK Files on Datastores** — When VMs are deleted without cleaning up their disk files, or when snapshots leave behind delta VMDKs, orphaned files accumulate and waste datastore space. In large environments, orphaned files can consume terabytes of storage that cannot be identified through normal VM inventory.\n\nDocumented **Data sources**: Custom scripted input (datastore file browser vs VM inventory comparison). **App/TA** (typical add-on context): `Splunk_TA_vmware`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: datastore_orphans. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"datastore_orphans\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by datastore** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Orphaned VMDK Files on Datastores**): table datastore, orphan_count, total_waste_gb\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (datastore, orphan count, wasted GB), Bar chart (waste by datastore), Single value (total waste).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
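              "_spl_sketch": "Optional alert sketch: the 100GB-per-datastore threshold from Step 1, restricted to the most recent scan so stale results do not re-fire (the 7-day window assumes the weekly schedule described above):\n\n```spl\nindex=vmware sourcetype=\"datastore_orphans\" earliest=-7d\n| stats sum(size_gb) as total_waste_gb, count as orphan_count by datastore\n| where total_waste_gb > 100\n| sort -total_waste_gb\n```\n\nTrend the same stats with `timechart span=1w` to show whether cleanup is keeping pace with new orphans.",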
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on orphaned VMDK Files on Datastores and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.35",
              "n": "VM Guest OS Disk Space via VMware Tools",
              "c": "high",
              "f": "beginner",
              "v": "VMware Tools reports guest OS filesystem utilization to vCenter, enabling disk space monitoring without an in-guest agent. Particularly valuable for appliances, embedded systems, and VMs where you cannot install a Splunk forwarder. Catches disk-full conditions before they crash applications.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:inv:vm` (guest disk info)",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\" guest_disk_path=*\n| eval used_pct=round((guest_disk_capacity - guest_disk_freeSpace) / guest_disk_capacity * 100, 1)\n| where used_pct > 85\n| sort -used_pct\n| table vm_name, host, guest_disk_path, used_pct, guest_disk_capacity, guest_disk_freeSpace",
              "m": "Requires VMware Tools running in the guest. Splunk_TA_vmware collects guest disk info as part of VM inventory. Alert at 85% (warning) and 95% (critical). Note: this is less granular than an in-guest agent — it reports per-partition but with slower refresh intervals (typically 5-10 minutes).",
              "z": "Table (VM, disk, usage), Gauge per critical VM, Bar chart (top full disks).",
              "kfp": "VMware Tools status can report transiently during reboots, upgrades, or when guests are powered off for imaging; re-check on next poll before escalation.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:vm` (guest disk info).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequires VMware Tools running in the guest. Splunk_TA_vmware collects guest disk info as part of VM inventory. Alert at 85% (warning) and 95% (critical). Note: this is less granular than an in-guest agent — it reports per-partition but with slower refresh intervals (typically 5-10 minutes).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\" guest_disk_path=*\n| eval used_pct=round((guest_disk_capacity - guest_disk_freeSpace) / guest_disk_capacity * 100, 1)\n| where used_pct > 85\n| sort -used_pct\n| table vm_name, host, guest_disk_path, used_pct, guest_disk_capacity, guest_disk_freeSpace\n```\n\nUnderstanding this SPL\n\n**VM Guest OS Disk Space via VMware Tools** — VMware Tools reports guest OS filesystem utilization to vCenter, enabling disk space monitoring without an in-guest agent. Particularly valuable for appliances, embedded systems, and VMs where you cannot install a Splunk forwarder. Catches disk-full conditions before they crash applications.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:vm` (guest disk info). **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VM Guest OS Disk Space via VMware Tools**): table vm_name, host, guest_disk_path, used_pct, guest_disk_capacity, guest_disk_freeSpace\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, disk, usage), Gauge per critical VM, Bar chart (top full disks).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on guest OS Disk Space via VMware Tools and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.36",
              "n": "VM Encryption and vTPM Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "VM encryption protects data at rest on shared storage. vTPM enables Credential Guard, BitLocker, and measured boot inside VMs. Compliance frameworks increasingly require encryption for workloads handling sensitive data. Tracking which VMs are encrypted vs. which should be ensures policy adherence.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:inv:vm`",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\"\n| stats latest(cryptoState) as encryption, latest(vtpm_present) as vtpm by vm_name, host, guest_os\n| eval encrypted=if(encryption=\"encrypted\", \"Yes\", \"No\")\n| eval has_vtpm=if(vtpm_present=\"true\", \"Yes\", \"No\")\n| table vm_name, host, guest_os, encrypted, has_vtpm\n| sort encrypted, has_vtpm",
              "m": "Collect VM inventory via Splunk_TA_vmware. Maintain a lookup defining which VMs require encryption (based on data classification). Cross-reference inventory with the requirements lookup. Alert when a VM that should be encrypted is not. Generate quarterly compliance reports for audit.",
              "z": "Table (VM encryption status), Pie chart (encrypted vs not), Bar chart (compliance by department).",
              "kfp": "Baseline and patch compliance lag expected during large maintenance windows, staged image rollout, or when hosts are in remediation queues.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:vm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect VM inventory via Splunk_TA_vmware. Maintain a lookup defining which VMs require encryption (based on data classification). Cross-reference inventory with the requirements lookup. Alert when a VM that should be encrypted is not. Generate quarterly compliance reports for audit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\"\n| stats latest(cryptoState) as encryption, latest(vtpm_present) as vtpm by vm_name, host, guest_os\n| eval encrypted=if(encryption=\"encrypted\", \"Yes\", \"No\")\n| eval has_vtpm=if(vtpm_present=\"true\", \"Yes\", \"No\")\n| table vm_name, host, guest_os, encrypted, has_vtpm\n| sort encrypted, has_vtpm\n```\n\nUnderstanding this SPL\n\n**VM Encryption and vTPM Compliance** — VM encryption protects data at rest on shared storage. vTPM enables Credential Guard, BitLocker, and measured boot inside VMs. Compliance frameworks increasingly require encryption for workloads handling sensitive data. Tracking which VMs are encrypted vs. which should be ensures policy adherence.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:vm`. **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_name, host, guest_os** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **encrypted** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **has_vtpm** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **VM Encryption and vTPM Compliance**): table vm_name, host, guest_os, encrypted, has_vtpm\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM encryption status), Pie chart (encrypted vs not), Bar chart (compliance by department).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on encryption and vTPM and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.37",
              "n": "VM Template Inventory and Staleness",
              "c": "low",
              "f": "beginner",
              "v": "Stale VM templates with outdated OS patches, expired certificates, or old application versions get deployed into production and immediately become vulnerable. Tracking template age and last update ensures new VMs start from a secure, current baseline.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:inv:vm` (templates are VMs with isTemplate=true)",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\" isTemplate=\"true\"\n| eval age_days=round((now() - strptime(modifiedTime, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| sort -age_days\n| table vm_name, guest_os, hw_version, age_days, modifiedTime",
              "m": "Collect VM inventory via Splunk_TA_vmware (templates appear as VMs with isTemplate=true). Flag templates older than 30 days as needing refresh. Alert on templates older than 90 days. Track deployment frequency per template to identify popular templates that should be prioritized for updates.",
              "z": "Table (template, OS, age), Bar chart (templates by age bucket), Single value (templates >90 days).",
              "kfp": "Stale template alerts may be expected for images kept only for break-glass restore; exclude those catalogs or extend freshness for DR-only content.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:vm` (templates are VMs with isTemplate=true).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect VM inventory via Splunk_TA_vmware (templates appear as VMs with isTemplate=true). Flag templates older than 30 days as needing refresh. Alert on templates older than 90 days. Track deployment frequency per template to identify popular templates that should be prioritized for updates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\" isTemplate=\"true\"\n| eval age_days=round((now() - strptime(modifiedTime, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| sort -age_days\n| table vm_name, guest_os, hw_version, age_days, modifiedTime\n```\n\nUnderstanding this SPL\n\n**VM Template Inventory and Staleness** — Stale VM templates with outdated OS patches, expired certificates, or old application versions get deployed into production and immediately become vulnerable. Tracking template age and last update ensures new VMs start from a secure, current baseline.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:vm` (templates are VMs with isTemplate=true). **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VM Template Inventory and Staleness**): table vm_name, guest_os, hw_version, age_days, modifiedTime\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (template, OS, age), Bar chart (templates by age bucket), Single value (templates >90 days).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on template Inventory and Staleness and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.38",
              "n": "ESXi Host Syslog Forwarding Health",
              "c": "medium",
              "f": "beginner",
              "v": "If ESXi syslog forwarding breaks, you lose visibility into host-level events — PSOD messages, hardware errors, authentication attempts, and kernel warnings. Since syslog is often the only real-time data source from ESXi (vs. the polling-based TA), silent forwarding failures create dangerous blind spots.",
              "t": "`Splunk_TA_vmware`, ESXi syslog",
              "d": "ESXi syslog, `sourcetype=vmware:inv:hostsystem`",
              "q": "index=vmware sourcetype=\"vmware:inv:hostsystem\"\n| stats latest(syslogConfig_logHost) as syslog_target by host\n| eval syslog_configured=if(isnotnull(syslog_target) AND syslog_target!=\"\", \"Yes\", \"No\")\n| append [search index=esxi sourcetype=syslog | stats latest(_time) as last_seen by host]\n| stats latest(syslog_configured) as configured, latest(last_seen) as last_event by host\n| eval hours_silent=round((now()-last_event)/3600, 1)\n| where configured=\"No\" OR hours_silent > 2\n| table host, configured, syslog_target, last_event, hours_silent",
              "m": "Verify syslog configuration via Splunk_TA_vmware host inventory. Monitor for gaps in syslog data per host — if a host stops sending syslog for >1 hour, investigate. Alert on hosts without syslog configured. Validate syslog protocol (UDP vs TCP vs TLS) meets security requirements.",
              "z": "Status grid (syslog health per host), Table (misconfigured hosts), Single value (hosts with gaps).",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, ESXi syslog.\n• Ensure the following data sources are available: ESXi syslog, `sourcetype=vmware:inv:hostsystem`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nVerify syslog configuration via Splunk_TA_vmware host inventory. Monitor for gaps in syslog data per host — if a host stops sending syslog for >1 hour, investigate. Alert on hosts without syslog configured. Validate syslog protocol (UDP vs TCP vs TLS) meets security requirements.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:hostsystem\"\n| stats latest(syslogConfig_logHost) as syslog_target by host\n| eval syslog_configured=if(isnotnull(syslog_target) AND syslog_target!=\"\", \"Yes\", \"No\")\n| append [search index=esxi sourcetype=syslog | stats latest(_time) as last_seen by host]\n| stats latest(syslog_configured) as configured, latest(last_seen) as last_event by host\n| eval hours_silent=round((now()-last_event)/3600, 1)\n| where configured=\"No\" OR hours_silent > 2\n| table host, configured, syslog_target, last_event, hours_silent\n```\n\nUnderstanding this SPL\n\n**ESXi Host Syslog Forwarding Health** — If ESXi syslog forwarding breaks, you lose visibility into host-level events — PSOD messages, hardware errors, authentication attempts, and kernel warnings. Since syslog is often the only real-time data source from ESXi (vs. the polling-based TA), silent forwarding failures create dangerous blind spots.\n\nDocumented **Data sources**: ESXi syslog, `sourcetype=vmware:inv:hostsystem`. **App/TA** (typical add-on context): `Splunk_TA_vmware`, ESXi syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:hostsystem. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:hostsystem\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **syslog_configured** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_silent** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where configured=\"No\" OR hours_silent > 2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ESXi Host Syslog Forwarding Health**): table host, configured, syslog_target, last_event, hours_silent\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (syslog health per host), Table (misconfigured hosts), Single value (hosts with gaps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on host Syslog Forwarding Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog",
                "vmware"
              ],
              "em": [
                "vmware_esxi",
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.39",
              "n": "ESXi Host Firewall Rule Audit",
              "c": "medium",
              "f": "intermediate",
              "v": "ESXi has a built-in firewall that controls which services are accessible. Overly permissive rules (e.g., SSH from any IP, open NFC ports) expand the attack surface. CIS benchmarks and DISA STIGs require specific firewall configurations on ESXi hosts.",
              "t": "`Splunk_TA_vmware`, custom scripted input",
              "d": "Custom scripted input (PowerCLI `Get-VMHostFirewallException`)",
              "q": "index=vmware sourcetype=\"esxi_firewall\"\n| where enabled=\"true\" AND allowedAll=\"true\"\n| table host, rule_name, protocol, port, direction, allowedAll\n| sort host, rule_name",
              "m": "Create a PowerCLI scripted input: `Get-VMHost | Get-VMHostFirewallException | Where Enabled | Select VMHost, Name, Enabled, IncomingPorts, OutgoingPorts, Protocols`. Run daily. Alert on rules with AllHosts=true for sensitive services (SSH, NFC, vSAN). Compare against a baseline lookup of approved rules.",
              "z": "Table (host, rule, ports, scope), Bar chart (rules allowing all IPs), Compliance percentage.",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, custom scripted input.\n• Ensure the following data sources are available: Custom scripted input (PowerCLI `Get-VMHostFirewallException`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a PowerCLI scripted input: `Get-VMHost | Get-VMHostFirewallException | Where Enabled | Select VMHost, Name, Enabled, IncomingPorts, OutgoingPorts, Protocols`. Run daily. Alert on rules with AllHosts=true for sensitive services (SSH, NFC, vSAN). Compare against a baseline lookup of approved rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"esxi_firewall\"\n| where enabled=\"true\" AND allowedAll=\"true\"\n| table host, rule_name, protocol, port, direction, allowedAll\n| sort host, rule_name\n```\n\nUnderstanding this SPL\n\n**ESXi Host Firewall Rule Audit** — ESXi has a built-in firewall that controls which services are accessible. Overly permissive rules (e.g., SSH from any IP, open NFC ports) expand the attack surface. CIS benchmarks and DISA STIGs require specific firewall configurations on ESXi hosts.\n\nDocumented **Data sources**: Custom scripted input (PowerCLI `Get-VMHostFirewallException`). **App/TA** (typical add-on context): `Splunk_TA_vmware`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: esxi_firewall. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"esxi_firewall\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where enabled=\"true\" AND allowedAll=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ESXi Host Firewall Rule Audit**): table host, rule_name, protocol, port, direction, allowedAll\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, rule, ports, scope), Bar chart (rules allowing all IPs), Compliance percentage.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on host Firewall Rule Audit and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.40",
              "n": "VM NUMA Alignment",
              "c": "medium",
              "f": "advanced",
              "v": "VMs that span NUMA nodes experience cross-node memory access latency — 2-3x slower than local access. Large VMs (8+ vCPUs or 32+ GB RAM) are most affected. Proper NUMA alignment can improve performance by 10-30% for memory-intensive workloads like databases and in-memory caches.",
              "t": "`Splunk_TA_vmware`, custom scripted input",
              "d": "`sourcetype=vmware:perf:mem`, host NUMA topology",
              "q": "index=vmware sourcetype=\"vmware:perf:mem\" counter=\"mem.llSwapUsed.average\"\n| stats avg(Value) as ll_swap_kb by vm_name, host\n| join max=1 host [search index=vmware sourcetype=\"vmware:inv:hostsystem\" | eval numa_nodes=numNumaNodes | table host, numa_nodes, numCpuCores]\n| join max=1 vm_name [search index=vmware sourcetype=\"vmware:inv:vm\" | table vm_name, numCpu, memoryMB]\n| eval vcpus_per_node=round(numCpuCores/numa_nodes, 0)\n| eval spans_numa=if(numCpu > vcpus_per_node, \"Yes\", \"No\")\n| where spans_numa=\"Yes\"\n| table vm_name, host, numCpu, memoryMB, vcpus_per_node, numa_nodes, spans_numa, ll_swap_kb\n| sort -memoryMB",
              "m": "Collect host NUMA topology from inventory and VM sizing. Flag VMs whose vCPU count exceeds a single NUMA node's core count. For critical workloads, set `numa.vcpu.preferHT=true` and consider vNUMA configuration. Monitor `mem.llSwapUsed` for cross-NUMA penalties.",
              "z": "Table (VM, vCPUs, NUMA alignment), Scatter plot (VM size vs NUMA fit), Single value (misaligned VMs).",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, custom scripted input.\n• Ensure the following data sources are available: `sourcetype=vmware:perf:mem`, host NUMA topology.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect host NUMA topology from inventory and VM sizing. Flag VMs whose vCPU count exceeds a single NUMA node's core count. For critical workloads, set `numa.vcpu.preferHT=true` and consider vNUMA configuration. Monitor `mem.llSwapUsed` for cross-NUMA penalties.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:perf:mem\" counter=\"mem.llSwapUsed.average\"\n| stats avg(Value) as ll_swap_kb by vm_name, host\n| join max=1 host [search index=vmware sourcetype=\"vmware:inv:hostsystem\" | eval numa_nodes=numNumaNodes | table host, numa_nodes, numCpuCores]\n| join max=1 vm_name [search index=vmware sourcetype=\"vmware:inv:vm\" | table vm_name, numCpu, memoryMB]\n| eval vcpus_per_node=round(numCpuCores/numa_nodes, 0)\n| eval spans_numa=if(numCpu > vcpus_per_node, \"Yes\", \"No\")\n| where spans_numa=\"Yes\"\n| table vm_name, host, numCpu, memoryMB, vcpus_per_node, numa_nodes, spans_numa, ll_swap_kb\n| sort -memoryMB\n```\n\nUnderstanding this SPL\n\n**VM NUMA Alignment** — VMs that span NUMA nodes experience cross-node memory access latency — 2-3x slower than local access. Large VMs (8+ vCPUs or 32+ GB RAM) are most affected. Proper NUMA alignment can improve performance by 10-30% for memory-intensive workloads like databases and in-memory caches.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:mem`, host NUMA topology. **App/TA** (typical add-on context): `Splunk_TA_vmware`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:perf:mem. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:perf:mem\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **vcpus_per_node** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **spans_numa** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where spans_numa=\"Yes\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VM NUMA Alignment**): table vm_name, host, numCpu, memoryMB, vcpus_per_node, numa_nodes, spans_numa, ll_swap_kb\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, vCPUs, NUMA alignment), Scatter plot (VM size vs NUMA fit), Single value (misaligned VMs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on nUMA Alignment and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.41",
              "n": "ESXi Host Coredump Configuration",
              "c": "medium",
              "f": "beginner",
              "v": "When an ESXi host experiences a PSOD (Purple Screen of Death), the coredump contains critical diagnostic information. Without a properly configured dump target (network or local), the coredump is lost on reboot and root cause analysis becomes impossible. Particularly important for diskless/boot-from-SAN hosts.",
              "t": "`Splunk_TA_vmware`, custom scripted input",
              "d": "Custom scripted input (`esxcli system coredump`)",
              "q": "index=vmware sourcetype=\"esxi_coredump\"\n| stats latest(network_configured) as net_dump, latest(partition_configured) as part_dump by host\n| eval dump_ok=if(net_dump=\"true\" OR part_dump=\"true\", \"Yes\", \"No\")\n| where dump_ok=\"No\"\n| table host, net_dump, part_dump, dump_ok",
              "m": "Create scripted input via PowerCLI or SSH: `esxcli system coredump network get` and `esxcli system coredump partition get`. Run daily. Alert on hosts without any dump target configured. For stateless/diskless hosts, ensure network dump collector is configured and reachable.",
              "z": "Status grid (dump config per host), Table (unconfigured hosts), Compliance percentage.",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, custom scripted input.\n• Ensure the following data sources are available: Custom scripted input (`esxcli system coredump`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input via PowerCLI or SSH: `esxcli system coredump network get` and `esxcli system coredump partition get`. Run daily. Alert on hosts without any dump target configured. For stateless/diskless hosts, ensure network dump collector is configured and reachable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"esxi_coredump\"\n| stats latest(network_configured) as net_dump, latest(partition_configured) as part_dump by host\n| eval dump_ok=if(net_dump=\"true\" OR part_dump=\"true\", \"Yes\", \"No\")\n| where dump_ok=\"No\"\n| table host, net_dump, part_dump, dump_ok\n```\n\nUnderstanding this SPL\n\n**ESXi Host Coredump Configuration** — When an ESXi host experiences a PSOD (Purple Screen of Death), the coredump contains critical diagnostic information. Without a properly configured dump target (network or local), the coredump is lost on reboot and root cause analysis becomes impossible. Particularly important for diskless/boot-from-SAN hosts.\n\nDocumented **Data sources**: Custom scripted input (`esxcli system coredump`). **App/TA** (typical add-on context): `Splunk_TA_vmware`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: esxi_coredump. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"esxi_coredump\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dump_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where dump_ok=\"No\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ESXi Host Coredump Configuration**): table host, net_dump, part_dump, dump_ok\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (dump config per host), Table (unconfigured hosts), Compliance percentage.\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on host Coredump Configuration and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.42",
              "n": "VM CPU Ready Time Percentage",
              "c": "high",
              "f": "advanced",
              "v": "Measures time VMs wait for physical CPU — distinct from host utilization. High CPU ready time indicates over-committed CPU; VMs are queued waiting for scheduler time even when host CPU % appears acceptable. Critical for identifying latent contention invisible from guest metrics.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:perf:cpu` (counter=cpu.ready.summation)",
              "q": "index=vmware sourcetype=\"vmware:perf:cpu\" counter=\"cpu.ready.summation\"\n| eval ready_pct = round(Value / 20000 * 100, 2)\n| stats avg(ready_pct) as avg_ready_pct, max(ready_pct) as peak_ready_pct by host, vm_name\n| where avg_ready_pct > 5\n| sort -avg_ready_pct\n| table vm_name, host, avg_ready_pct, peak_ready_pct",
              "m": "TA-vmware collects cpu.ready.summation (milliseconds VM waited per 20s interval). Formula: ready_pct = Value / 20000 * 100 (20s = 20000ms). Alert when avg_ready_pct >5% over 10 minutes. Use rolling 15-min average to smooth spikes. Correlate with cluster CPU utilization and DRS migrations.",
              "z": "Heatmap (VMs vs hosts, colored by ready %), Bar chart (top VMs by ready time), Line chart (ready % trend).",
              "kfp": "CPU ready time can spike during vMotion, host maintenance mode, DRS rebalancing, or right after power-on; correlate with change windows and cluster load before re-sizing VMs.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:perf:cpu` (counter=cpu.ready.summation).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTA-vmware collects cpu.ready.summation (milliseconds VM waited per 20s interval). Formula: ready_pct = Value / 20000 * 100 (20s = 20000ms). Alert when avg_ready_pct >5% over 10 minutes. Use rolling 15-min average to smooth spikes. Correlate with cluster CPU utilization and DRS migrations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:perf:cpu\" counter=\"cpu.ready.summation\"\n| eval ready_pct = round(Value / 20000 * 100, 2)\n| stats avg(ready_pct) as avg_ready_pct, max(ready_pct) as peak_ready_pct by host, vm_name\n| where avg_ready_pct > 5\n| sort -avg_ready_pct\n| table vm_name, host, avg_ready_pct, peak_ready_pct\n```\n\nUnderstanding this SPL\n\n**VM CPU Ready Time Percentage** — Measures time VMs wait for physical CPU — distinct from host utilization. High CPU ready time indicates over-committed CPU; VMs are queued waiting for scheduler time even when host CPU % appears acceptable. Critical for identifying latent contention invisible from guest metrics.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:cpu` (counter=cpu.ready.summation). **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:perf:cpu. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:perf:cpu\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ready_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, vm_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_ready_pct > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VM CPU Ready Time Percentage**): table vm_name, host, avg_ready_pct, peak_ready_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (VMs vs hosts, colored by ready %), Bar chart (top VMs by ready time), Line chart (ready % trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track when your virtual machines wait too long for CPU time, so we can catch overload or unfair resource shares before the slowness hits real work.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.43",
              "n": "VM Disk I/O Latency per Datastore",
              "c": "high",
              "f": "intermediate",
              "v": "Correlate VM disk latency to specific datastores to identify storage bottlenecks. When multiple VMs on the same datastore show high latency, the datastore or underlying storage is the culprit rather than individual VM workload.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:perf:datastore` (datastore.totalReadLatency.average, datastore.totalWriteLatency.average — per VM when object is VM)",
              "q": "index=vmware sourcetype=\"vmware:perf:datastore\" (counter=\"datastore.totalReadLatency.average\" OR counter=\"datastore.totalWriteLatency.average\")\n| eval read_latency = if(counter=\"datastore.totalReadLatency.average\", Value, null())\n| eval write_latency = if(counter=\"datastore.totalWriteLatency.average\", Value, null())\n| stats avg(read_latency) as avg_read_ms, avg(write_latency) as avg_write_ms by vm_name, host, datastore\n| eval avg_latency = max(coalesce(avg_read_ms, 0), coalesce(avg_write_ms, 0))\n| where avg_latency > 20\n| sort -avg_latency\n| table vm_name, host, datastore, avg_read_ms, avg_write_ms, avg_latency",
              "m": "TA-vmware collects per-VM disk latency. Use datastore dimension to group VMs by backing storage. Alert when any VM-datastore pair exceeds 20ms average latency over 10 minutes. Correlate with datastore-level latency (UC-2.1.4) to distinguish VM workload from shared storage contention.",
              "z": "Heatmap (VMs vs datastores, colored by latency), Table (top latency by VM/datastore), Line chart (latency trend per datastore).",
              "kfp": "Storage growth and capacity signals can track expected backups, storage vMotion, or provisioning tests; align spikes with change calendars and storage operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:perf:datastore` (datastore.totalReadLatency.average, datastore.totalWriteLatency.average — per VM when object is VM).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTA-vmware collects per-VM disk latency. Use datastore dimension to group VMs by backing storage. Alert when any VM-datastore pair exceeds 20ms average latency over 10 minutes. Correlate with datastore-level latency (UC-2.1.4) to distinguish VM workload from shared storage contention.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:perf:datastore\" (counter=\"datastore.totalReadLatency.average\" OR counter=\"datastore.totalWriteLatency.average\")\n| eval read_latency = if(counter=\"datastore.totalReadLatency.average\", Value, null())\n| eval write_latency = if(counter=\"datastore.totalWriteLatency.average\", Value, null())\n| stats avg(read_latency) as avg_read_ms, avg(write_latency) as avg_write_ms by vm_name, host, datastore\n| eval avg_latency = max(coalesce(avg_read_ms, 0), coalesce(avg_write_ms, 0))\n| where avg_latency > 20\n| sort -avg_latency\n| table vm_name, host, datastore, avg_read_ms, avg_write_ms, avg_latency\n```\n\nUnderstanding this SPL\n\n**VM Disk I/O Latency per Datastore** — Correlate VM disk latency to specific datastores to identify storage bottlenecks. When multiple VMs on the same datastore show high latency, the datastore or underlying storage is the culprit rather than individual VM workload.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:datastore` (datastore.totalReadLatency.average, datastore.totalWriteLatency.average — per VM when object is VM). **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:perf:datastore. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:perf:datastore\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **read_latency** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **write_latency** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by vm_name, host, datastore** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_latency > 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VM Disk I/O Latency per Datastore**): table vm_name, host, datastore, avg_read_ms, avg_write_ms, avg_latency\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (VMs vs datastores, colored by latency), Table (top latency by VM/datastore), Line chart (latency trend per datastore).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on disk I/O Latency per Datastore and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.44",
              "n": "ESXi Host Certificate Renewal Compliance",
              "c": "critical",
              "f": "intermediate",
              "v": "ESXi hosts use SSL certificates for vCenter communication, vMotion, and HA. Expired certificates break vCenter connectivity, prevent migrations, and cause HA communication failures. Proactive monitoring prevents unexpected outages when certs expire.",
              "t": "Custom scripted input (openssl, ESXi API, or PowerCLI)",
              "d": "Certificate expiry from ESXi hosts (scripted input querying host API or certificate store)",
              "q": "index=vmware sourcetype=\"esxi_certificates\"\n| eval days_to_expiry = round((strptime(not_after, \"%Y-%m-%dT%H:%M:%S\") - now()) / 86400, 0)\n| eval severity = case(days_to_expiry < 0, \"Expired\", days_to_expiry < 7, \"Critical\", days_to_expiry < 30, \"High\", days_to_expiry < 90, \"Warning\", 1==1, \"OK\")\n| where days_to_expiry < 90\n| sort days_to_expiry\n| table host, subject, issuer, not_after, days_to_expiry, severity",
              "m": "Create scripted input: use `openssl s_client -connect <host>:443 -servername <host> 2>/dev/null | openssl x509 -noout -enddate` or PowerCLI `Get-VMHost | Get-VMHostCertificate`. Run daily. Alert at 90 days (warning), 30 days (high), 7 days (critical). Include vCenter VMCA and STS certs — their expiry affects all hosts.",
              "z": "Table (host, cert, days to expiry), Single value (certs expiring within 30 days), Timeline (upcoming expirations).",
              "kfp": "Baseline and patch compliance lag expected during large maintenance windows, staged image rollout, or when hosts are in remediation queues.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (openssl, ESXi API, or PowerCLI).\n• Ensure the following data sources are available: Certificate expiry from ESXi hosts (scripted input querying host API or certificate store).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: use `openssl s_client -connect <host>:443 -servername <host> 2>/dev/null | openssl x509 -noout -enddate` or PowerCLI `Get-VMHost | Get-VMHostCertificate`. Run daily. Alert at 90 days (warning), 30 days (high), 7 days (critical). Include vCenter VMCA and STS certs — their expiry affects all hosts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"esxi_certificates\"\n| eval days_to_expiry = round((strptime(not_after, \"%Y-%m-%dT%H:%M:%S\") - now()) / 86400, 0)\n| eval severity = case(days_to_expiry < 0, \"Expired\", days_to_expiry < 7, \"Critical\", days_to_expiry < 30, \"High\", days_to_expiry < 90, \"Warning\", 1==1, \"OK\")\n| where days_to_expiry < 90\n| sort days_to_expiry\n| table host, subject, issuer, not_after, days_to_expiry, severity\n```\n\nUnderstanding this SPL\n\n**ESXi Host Certificate Renewal Compliance** — ESXi hosts use SSL certificates for vCenter communication, vMotion, and HA. Expired certificates break vCenter connectivity, prevent migrations, and cause HA communication failures. Proactive monitoring prevents unexpected outages when certs expire.\n\nDocumented **Data sources**: Certificate expiry from ESXi hosts (scripted input querying host API or certificate store). **App/TA** (typical add-on context): Custom scripted input (openssl, ESXi API, or PowerCLI). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: esxi_certificates. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"esxi_certificates\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_to_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_expiry < 90` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **ESXi Host Certificate Renewal Compliance**): table host, subject, issuer, not_after, days_to_expiry, severity\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (host, cert, days to expiry), Single value (certs expiring within 30 days), Timeline (upcoming expirations).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on host Certificate Renewal and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_esxi"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.45",
              "n": "VM Snapshot Age Alerting",
              "c": "high",
              "f": "intermediate",
              "v": "Snapshots older than N days degrade VM I/O performance and complicate backups — distinct from snapshot count or space. Old snapshots cause delta disk growth, extended backup windows, and increased risk of consolidation failures. Age-based alerting ensures timely cleanup.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:inv:vm` (snapshot info: snapshot_createTime, snapshot_name)",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\" snapshot_name=*\n| eval snapshot_age_days = round((now() - strptime(snapshot_createTime, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| where snapshot_age_days > 7\n| eval snapshot_size_gb = round(snapshot_sizeBytes / 1073741824, 2)\n| sort -snapshot_age_days\n| table vm_name, host, snapshot_name, snapshot_age_days, snapshot_size_gb, snapshot_createTime",
              "m": "TA-vmware collects VM inventory including snapshot metadata. Define policy: alert on snapshots >7 days (high), >3 days (warning). Run daily report. Escalate to VM owners. Include snapshot size to prioritize cleanup. Correlate with datastore capacity for storage impact.",
              "z": "Table (VM, snapshot, age, size), Bar chart (snapshots by age bucket), Single value (snapshots >7 days).",
              "kfp": "Snapshot age alerts may fire for intentional long-running snapshots during storage migration, tests, or vendor-guided holds; confirm with the owner before delete requests.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:inv:vm` (snapshot info: snapshot_createTime, snapshot_name).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTA-vmware collects VM inventory including snapshot metadata. Define policy: alert on snapshots >7 days (high), >3 days (warning). Run daily report. Escalate to VM owners. Include snapshot size to prioritize cleanup. Correlate with datastore capacity for storage impact.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\" snapshot_name=*\n| eval snapshot_age_days = round((now() - strptime(snapshot_createTime, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| where snapshot_age_days > 7\n| eval snapshot_size_gb = round(snapshot_sizeBytes / 1073741824, 2)\n| sort -snapshot_age_days\n| table vm_name, host, snapshot_name, snapshot_age_days, snapshot_size_gb, snapshot_createTime\n```\n\nUnderstanding this SPL\n\n**VM Snapshot Age Alerting** — Snapshots older than N days degrade VM I/O performance and complicate backups — distinct from snapshot count or space. Old snapshots cause delta disk growth, extended backup windows, and increased risk of consolidation failures. Age-based alerting ensures timely cleanup.\n\nDocumented **Data sources**: `sourcetype=vmware:inv:vm` (snapshot info: snapshot_createTime, snapshot_name). **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **snapshot_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where snapshot_age_days > 7` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **snapshot_size_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VM Snapshot Age Alerting**): table vm_name, host, snapshot_name, snapshot_age_days, snapshot_size_gb, snapshot_createTime\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, snapshot, age, size), Bar chart (snapshots by age bucket), Single value (snapshots >7 days).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on snapshot Age Alerting and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.46",
              "n": "vCenter Alarm Acknowledgment Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Track alarms that remain unacknowledged for extended periods. Unacknowledged alarms indicate ignored issues — either operational gaps or alarm fatigue. Ensures critical alerts receive follow-up and supports SLA tracking for incident response.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:events` (AlarmStatusChangedEvent)",
              "q": "index=vmware sourcetype=\"vmware:events\" event_type=\"AlarmStatusChangedEvent\"\n| eval alarm_id = coalesce(alarm, alarm_name)\n| stats latest(_time) as last_change, latest(new_status) as status, latest(acknowledged) as ack, latest(alarm_name) as alarm_name, latest(entity) as entity by alarm_id\n| where status=\"red\" OR status=\"yellow\"\n| eval hours_unack = round((now() - last_change) / 3600, 1)\n| where ack!=\"true\" AND hours_unack > 4\n| sort -hours_unack\n| table alarm_name, entity, status, last_change, hours_unack, ack",
              "m": "TA-vmware collects AlarmStatusChangedEvent. Parse acknowledged field if present; otherwise infer from event sequence. Alert when red/yellow alarms remain unacknowledged >4 hours. Maintain lookup of alarm ownership for escalation. Correlate with incident tickets.",
              "z": "Table (unacknowledged alarms, age), Timeline (alarm state changes), Single value (count unacknowledged >4h).",
              "kfp": "vCenter can emit overlapping alarms from dependency chains; a single host maintenance may trigger many child alarms that clear when the first root cause is fixed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:events` (AlarmStatusChangedEvent).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTA-vmware collects AlarmStatusChangedEvent. Parse acknowledged field if present; otherwise infer from event sequence. Alert when red/yellow alarms remain unacknowledged >4 hours. Maintain lookup of alarm ownership for escalation. Correlate with incident tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" event_type=\"AlarmStatusChangedEvent\"\n| eval alarm_id = coalesce(alarm, alarm_name)\n| stats latest(_time) as last_change, latest(new_status) as status, latest(acknowledged) as ack, latest(alarm_name) as alarm_name, latest(entity) as entity by alarm_id\n| where status=\"red\" OR status=\"yellow\"\n| eval hours_unack = round((now() - last_change) / 3600, 1)\n| where ack!=\"true\" AND hours_unack > 4\n| sort -hours_unack\n| table alarm_name, entity, status, last_change, hours_unack, ack\n```\n\nUnderstanding this SPL\n\n**vCenter Alarm Acknowledgment Tracking** — Track alarms that remain unacknowledged for extended periods. Unacknowledged alarms indicate ignored issues — either operational gaps or alarm fatigue. Ensures critical alerts receive follow-up and supports SLA tracking for incident response.\n\nDocumented **Data sources**: `sourcetype=vmware:events` (AlarmStatusChangedEvent). **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **alarm_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by alarm_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status=\"red\" OR status=\"yellow\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **hours_unack** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ack!=\"true\" AND hours_unack > 4` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **vCenter Alarm Acknowledgment Tracking**): table alarm_name, entity, status, last_change, hours_unack, ack\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unacknowledged alarms, age), Timeline (alarm state changes), Single value (count unacknowledged >4h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on alarm Acknowledgment and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.47",
              "n": "VM Network Packet Loss and Retransmit",
              "c": "high",
              "f": "advanced",
              "v": "Per-VM network quality metrics including packet loss and retransmission. Dropped packets at the vNIC indicate congestion, driver issues, or misconfigured traffic shaping. Hypervisor-level counters capture drops invisible to the guest — essential for diagnosing application network issues.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:perf:net` (net.droppedRx.summation, net.droppedTx.summation)",
              "q": "index=vmware sourcetype=\"vmware:perf:net\" (counter=\"net.droppedRx.summation\" OR counter=\"net.droppedTx.summation\" OR counter=\"net.packetsRx.summation\" OR counter=\"net.packetsTx.summation\")\n| stats sum(eval(if(counter=\"net.droppedRx.summation\", Value, 0))) as dropped_rx, sum(eval(if(counter=\"net.droppedTx.summation\", Value, 0))) as dropped_tx, sum(eval(if(counter=\"net.packetsRx.summation\", Value, 0))) as packets_rx, sum(eval(if(counter=\"net.packetsTx.summation\", Value, 0))) as packets_tx by vm_name, host\n| eval total_packets = packets_rx + packets_tx\n| eval loss_pct = if(total_packets > 0, round((dropped_rx + dropped_tx) / total_packets * 100, 4), 0)\n| where dropped_rx > 0 OR dropped_tx > 0\n| sort -dropped_rx\n| table vm_name, host, dropped_rx, dropped_tx, total_packets, loss_pct",
              "m": "TA-vmware collects net.droppedRx/Tx.summation. Alert when any VM shows >0 dropped packets sustained over 5 minutes. Compute loss percentage when packet counters available. Correlate with net.usage.average for saturation. Check dvSwitch policies, physical NIC utilization, and VMXNET3 driver version.",
              "z": "Table (VM, host, drops, loss %), Line chart (drops over time), Bar chart (top VMs by packet loss).",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:perf:net` (net.droppedRx.summation, net.droppedTx.summation).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTA-vmware collects net.droppedRx/Tx.summation. Alert when any VM shows >0 dropped packets sustained over 5 minutes. Compute loss percentage when packet counters available. Correlate with net.usage.average for saturation. Check dvSwitch policies, physical NIC utilization, and VMXNET3 driver version.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:perf:net\" (counter=\"net.droppedRx.summation\" OR counter=\"net.droppedTx.summation\" OR counter=\"net.packetsRx.summation\" OR counter=\"net.packetsTx.summation\")\n| stats sum(eval(if(counter=\"net.droppedRx.summation\", Value, 0))) as dropped_rx, sum(eval(if(counter=\"net.droppedTx.summation\", Value, 0))) as dropped_tx, sum(eval(if(counter=\"net.packetsRx.summation\", Value, 0))) as packets_rx, sum(eval(if(counter=\"net.packetsTx.summation\", Value, 0))) as packets_tx by vm_name, host\n| eval total_packets = packets_rx + packets_tx\n| eval loss_pct = if(total_packets > 0, round((dropped_rx + dropped_tx) / total_packets * 100, 4), 0)\n| where dropped_rx > 0 OR dropped_tx > 0\n| sort -dropped_rx\n| table vm_name, host, dropped_rx, dropped_tx, total_packets, loss_pct\n```\n\nUnderstanding this SPL\n\n**VM Network Packet Loss and Retransmit** — Per-VM network quality metrics including packet loss and retransmission. Dropped packets at the vNIC indicate congestion, driver issues, or misconfigured traffic shaping. Hypervisor-level counters capture drops invisible to the guest — essential for diagnosing application network issues.\n\nDocumented **Data sources**: `sourcetype=vmware:perf:net` (net.droppedRx.summation, net.droppedTx.summation). **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:perf:net. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:perf:net\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total_packets** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **loss_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where dropped_rx > 0 OR dropped_tx > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VM Network Packet Loss and Retransmit**): table vm_name, host, dropped_rx, dropped_tx, total_packets, loss_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, host, drops, loss %), Line chart (drops over time), Bar chart (top VMs by packet loss).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on network Packet Loss and Retransmit and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.1.48",
              "n": "VMware DRS Effectiveness",
              "c": "medium",
              "f": "advanced",
              "v": "DRS migrations per hour, cluster imbalance score, and recommendations vs. applied. High migration frequency may indicate oscillation; low application of recommendations suggests manual mode or constraint conflicts. Tracks whether DRS is effectively balancing the cluster.",
              "t": "`Splunk_TA_vmware`",
              "d": "`sourcetype=vmware:events` (DrsVmMigratedEvent, DrsVmPoweredOnEvent, DrsRecommendationAppliedEvent)",
              "q": "index=vmware sourcetype=\"vmware:events\" (event_type=\"DrsVmMigratedEvent\" OR event_type=\"DrsVmPoweredOnEvent\")\n| eval migration_type = case(event_type=\"DrsVmMigratedEvent\", \"Migration\", event_type=\"DrsVmPoweredOnEvent\", \"PowerOn\")\n| bin _time span=1h\n| stats count as migrations by _time, cluster\n| eventstats avg(migrations) as avg_migrations, stdev(migrations) as stdev_migrations by cluster\n| eval is_high = if(migrations > avg_migrations + (2 * coalesce(stdev_migrations, 0)) AND migrations > 10, 1, 0)\n| where is_high = 1\n| table _time, cluster, migrations, avg_migrations, stdev_migrations",
              "m": "Collect DRS events via TA-vmware. Baseline migrations per hour per cluster. Alert when migrations exceed 2 stdev above mean (possible oscillation). For recommendations: query DrsRecommendationAppliedEvent vs. total recommendations. Manual DRS mode will show recommendations without corresponding applied events.",
              "z": "Line chart (migrations per hour by cluster), Table (cluster, migrations, baseline), Bar chart (recommendations vs applied).",
              "kfp": "Host and VM health signals can move during vMotion, DRS, backup jobs, and scheduled maintenance; trend persistence and match alerts to vCenter or ESXi change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`.\n• Ensure the following data sources are available: `sourcetype=vmware:events` (DrsVmMigratedEvent, DrsVmPoweredOnEvent, DrsRecommendationAppliedEvent).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect DRS events via TA-vmware. Baseline migrations per hour per cluster. Alert when migrations exceed 2 stdev above mean (possible oscillation). For recommendations: query DrsRecommendationAppliedEvent vs. total recommendations. Manual DRS mode will show recommendations without corresponding applied events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" (event_type=\"DrsVmMigratedEvent\" OR event_type=\"DrsVmPoweredOnEvent\")\n| eval migration_type = case(event_type=\"DrsVmMigratedEvent\", \"Migration\", event_type=\"DrsVmPoweredOnEvent\", \"PowerOn\")\n| bin _time span=1h\n| stats count as migrations by _time, cluster\n| eventstats avg(migrations) as avg_migrations, stdev(migrations) as stdev_migrations by cluster\n| eval is_high = if(migrations > avg_migrations + (2 * coalesce(stdev_migrations, 0)) AND migrations > 10, 1, 0)\n| where is_high = 1\n| table _time, cluster, migrations, avg_migrations, stdev_migrations\n```\n\nUnderstanding this SPL\n\n**VMware DRS Effectiveness** — DRS migrations per hour, cluster imbalance score, and recommendations vs. applied. High migration frequency may indicate oscillation; low application of recommendations suggests manual mode or constraint conflicts. Tracks whether DRS is effectively balancing the cluster.\n\nDocumented **Data sources**: `sourcetype=vmware:events` (DrsVmMigratedEvent, DrsVmPoweredOnEvent, DrsRecommendationAppliedEvent). **App/TA** (typical add-on context): `Splunk_TA_vmware`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **migration_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **is_high** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_high = 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VMware DRS Effectiveness**): table _time, cluster, migrations, avg_migrations, stdev_migrations\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (migrations per hour by cluster), Table (cluster, migrations, baseline), Bar chart (recommendations vs applied).
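\n\nFor the recommendations-vs-applied comparison, a starting sketch that trends the applied side per cluster (only DrsRecommendationAppliedEvent is documented for this feed; add the event type your vCenter emits for generated recommendations to complete the ratio):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" event_type=\"DrsRecommendationAppliedEvent\"\n| timechart span=1d count by cluster\n```",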
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on vMware DRS Effectiveness and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.6,
          "qd": {
            "gold": 0,
            "silver": 4,
            "bronze": 44,
            "none": 0
          }
        },
        {
          "i": "2.2",
          "n": "Microsoft Hyper-V",
          "u": [
            {
              "i": "2.2.1",
              "n": "VM Performance Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Per-VM CPU, memory, and disk metrics identify resource contention and performance bottlenecks within the Hyper-V environment.",
              "t": "`Splunk_TA_windows` (Perfmon inputs for Hyper-V counters)",
              "d": "`sourcetype=Perfmon:HyperV` (Hyper-V Virtual Machine Health Summary, Hyper-V Hypervisor Logical Processor)",
              "q": "index=perfmon sourcetype=\"Perfmon:HyperV\" object=\"Hyper-V Hypervisor Virtual Processor\" counter=\"% Guest Run Time\"\n| stats avg(Value) as avg_cpu by instance, host\n| sort -avg_cpu",
              "m": "Configure Perfmon inputs on Hyper-V hosts for Hyper-V specific objects: `Hyper-V Hypervisor Virtual Processor`, `Hyper-V Dynamic Memory - VM`, `Hyper-V Virtual Storage Device`. Set interval=60.",
              "z": "Table (VM, CPU%, Memory%), Line chart per VM, Heatmap.",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Perfmon inputs for Hyper-V counters).\n• Ensure the following data sources are available: `sourcetype=Perfmon:HyperV` (Hyper-V Virtual Machine Health Summary, Hyper-V Hypervisor Logical Processor).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon inputs on Hyper-V hosts for Hyper-V specific objects: `Hyper-V Hypervisor Virtual Processor`, `Hyper-V Dynamic Memory - VM`, `Hyper-V Virtual Storage Device`. Set interval=60.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:HyperV\" object=\"Hyper-V Hypervisor Virtual Processor\" counter=\"% Guest Run Time\"\n| stats avg(Value) as avg_cpu by instance, host\n| sort -avg_cpu\n```\n\nUnderstanding this SPL\n\n**VM Performance Monitoring** — Per-VM CPU, memory, and disk metrics identify resource contention and performance bottlenecks within the Hyper-V environment.\n\nDocumented **Data sources**: `sourcetype=Perfmon:HyperV` (Hyper-V Virtual Machine Health Summary, Hyper-V Hypervisor Logical Processor). **App/TA** (typical add-on context): `Splunk_TA_windows` (Perfmon inputs for Hyper-V counters). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:HyperV. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:HyperV\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by instance, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VM Performance Monitoring** — Per-VM CPU, memory, and disk metrics identify resource contention and performance bottlenecks within the Hyper-V environment.\n\nDocumented **Data sources**: `sourcetype=Perfmon:HyperV` (Hyper-V Virtual Machine Health Summary, Hyper-V Hypervisor Logical Processor). **App/TA** (typical add-on context): `Splunk_TA_windows` (Perfmon inputs for Hyper-V counters). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, CPU%, Memory%), Line chart per VM, Heatmap.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on performance and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 90",
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.2.2",
              "n": "Hyper-V Replication Health",
              "c": "high",
              "f": "intermediate",
              "v": "Replication lag means your DR site is behind. If replication breaks, you lose your recovery point objective (RPO).",
              "t": "`Splunk_TA_windows` (Hyper-V)",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin\" (\"replication\" AND (\"error\" OR \"warning\" OR \"critical\" OR \"failed\"))\n| stats count by host, EventCode, Message\n| sort -count",
              "m": "Enable Hyper-V VMMS event log collection. Also create a PowerShell scripted input: `Get-VMReplication | Select VMName, State, Health, LastReplicationTime`. Alert on replication state != Normal or health != Normal.",
              "z": "Table (VM, replication state, health, last sync), Status indicators, Events list.",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Hyper-V).\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Hyper-V VMMS event log collection. Also create a PowerShell scripted input: `Get-VMReplication | Select VMName, State, Health, LastReplicationTime`. Alert on replication state != Normal or health != Normal.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin\" (\"replication\" AND (\"error\" OR \"warning\" OR \"critical\" OR \"failed\"))\n| stats count by host, EventCode, Message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Hyper-V Replication Health** — Replication lag means your DR site is behind. If replication breaks, you lose your recovery point objective (RPO).\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin`. **App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, EventCode, Message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, replication state, health, last sync), Status indicators, Events list.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on replication Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.2.3",
              "n": "Cluster Shared Volume Health",
              "c": "critical",
              "f": "beginner",
              "v": "CSV issues can cause VM storage access failures across the entire cluster. Redirected I/O mode significantly degrades performance.",
              "t": "`Splunk_TA_windows` (Hyper-V)",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-FailoverClustering/Operational`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-FailoverClustering/Operational\" (\"CSV\" OR \"Cluster Shared Volume\") (\"error\" OR \"redirected\" OR \"failed\")\n| table _time host Message\n| sort -_time",
              "m": "Enable Failover Clustering operational log. Alert on CSV ownership changes, redirected I/O mode, and disk health issues. Monitor Perfmon counters for CSV latency.",
              "z": "Status panel per CSV, Events timeline, Table.",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Hyper-V).\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-FailoverClustering/Operational`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Failover Clustering operational log. Alert on CSV ownership changes, redirected I/O mode, and disk health issues. Monitor Perfmon counters for CSV latency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-FailoverClustering/Operational\" (\"CSV\" OR \"Cluster Shared Volume\") (\"error\" OR \"redirected\" OR \"failed\")\n| table _time host Message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Cluster Shared Volume Health** — CSV issues can cause VM storage access failures across the entire cluster. Redirected I/O mode significantly degrades performance.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-FailoverClustering/Operational`. **App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-FailoverClustering/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-FailoverClustering/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Cluster Shared Volume Health**): table _time host Message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status panel per CSV, Events timeline, Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on cluster Shared Volume Health and raise the alarm before it drags down real work or real outages start.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "2.2.4",
              "n": "Live Migration Tracking",
              "c": "low",
              "f": "beginner",
              "v": "Audit trail for VM mobility. Excessive live migrations may indicate cluster imbalance or storage issues.",
              "t": "`Splunk_TA_windows` (Hyper-V)",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin\" \"migration\" (\"completed\" OR \"started\")\n| rex \"Virtual machine '(?<vm_name>[^']+)'\"\n| table _time host vm_name Message\n| sort -_time",
              "m": "Collected via standard Hyper-V event log monitoring. Create an audit report. Alert on migration failures or excessive frequency.",
              "z": "Table (timeline), Count by host/day.",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Hyper-V).\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollected via standard Hyper-V event log monitoring. Create an audit report. Alert on migration failures or excessive frequency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin\" \"migration\" (\"completed\" OR \"started\")\n| rex \"Virtual machine '(?<vm_name>[^']+)'\"\n| table _time host vm_name Message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Live Migration Tracking** — Audit trail for VM mobility. Excessive live migrations may indicate cluster imbalance or storage issues.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin`. **App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **Live Migration Tracking**): table _time host vm_name Message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (timeline), Count by host/day.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on live Migration and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.2.5",
              "n": "Integration Services Version",
              "c": "low",
              "f": "intermediate",
              "v": "Outdated integration services cause performance issues and prevent features like time sync, heartbeat, and data exchange from working correctly.",
              "t": "`Splunk_TA_windows` (Hyper-V), custom scripted input",
              "d": "PowerShell scripted input (`Get-VMIntegrationService`)",
              "q": "index=hyperv sourcetype=integration_services\n| stats latest(version) as ic_version by vm_name, host\n| where ic_version != \"latest\"",
              "m": "Replace `\"latest\"` in the SPL with the actual expected integration services version. Create a PowerShell scripted input on Hyper-V hosts: `Get-VM | Get-VMIntegrationService | Select VMName, Name, Enabled, PrimaryOperationalStatus`. Run daily.",
              "z": "Table (VM, version, status), Pie chart (current vs. outdated).",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Hyper-V), custom scripted input.\n• Ensure the following data sources are available: PowerShell scripted input (`Get-VMIntegrationService`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nReplace `\"latest\"` in the SPL with the actual expected integration services version. Create a PowerShell scripted input on Hyper-V hosts: `Get-VM | Get-VMIntegrationService | Select VMName, Name, Enabled, PrimaryOperationalStatus`. Run daily.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hyperv sourcetype=integration_services\n| stats latest(version) as ic_version by vm_name, host\n| where ic_version != \"latest\"\n```\n\nUnderstanding this SPL\n\n**Integration Services Version** — Outdated integration services cause performance issues and prevent features like time sync, heartbeat, and data exchange from working correctly.\n\nDocumented **Data sources**: PowerShell scripted input (`Get-VMIntegrationService`). **App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V), custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hyperv; **sourcetype**: integration_services. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hyperv, sourcetype=integration_services. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ic_version != \"latest\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, version, status), Pie chart (current vs. outdated).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on integration Services Version and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.2.6",
              "n": "Hyper-V Host Resource Utilization",
              "c": "high",
              "f": "intermediate",
              "v": "Host-level CPU and memory utilization across all VMs determines capacity headroom. Unlike per-VM monitoring, host-level metrics reveal when the hypervisor itself is under pressure — affecting all VMs simultaneously. Tracks the root partition overhead which is invisible from within VMs.",
              "t": "`Splunk_TA_windows` (Hyper-V Perfmon inputs)",
              "d": "`sourcetype=Perfmon:HyperV` (Hyper-V Hypervisor Logical Processor, Memory counters)",
              "q": "index=perfmon sourcetype=\"Perfmon:HyperV\" object=\"Hyper-V Hypervisor Logical Processor\" instance=\"_Total\" counter=\"% Total Run Time\"\n| bin _time span=5m\n| stats avg(Value) as avg_cpu by host, _time\n| where avg_cpu > 85\n| table _time, host, avg_cpu",
              "m": "Configure Perfmon inputs for `Hyper-V Hypervisor Logical Processor` (% Total Run Time, % Hypervisor Run Time) and `Memory` (Available MBytes, Committed Bytes). Set interval=60. Alert when host CPU exceeds 85% or available memory drops below 10% of physical. Track root partition overhead separately.",
              "z": "Line chart (CPU/memory over time per host), Gauge (current utilization), Heatmap (hosts by load).",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Hyper-V Perfmon inputs).\n• Ensure the following data sources are available: `sourcetype=Perfmon:HyperV` (Hyper-V Hypervisor Logical Processor, Memory counters).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon inputs for `Hyper-V Hypervisor Logical Processor` (% Total Run Time, % Hypervisor Run Time) and `Memory` (Available MBytes, Committed Bytes). Set interval=60. Alert when host CPU exceeds 85% or available memory drops below 10% of physical. Track root partition overhead separately.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:HyperV\" object=\"Hyper-V Hypervisor Logical Processor\" instance=\"_Total\" counter=\"% Total Run Time\"\n| bin _time span=5m\n| stats avg(Value) as avg_cpu by host, _time\n| where avg_cpu > 85\n| table _time, host, avg_cpu\n```\n\nUnderstanding this SPL\n\n**Hyper-V Host Resource Utilization** — Host-level CPU and memory utilization across all VMs determines capacity headroom. Unlike per-VM monitoring, host-level metrics reveal when the hypervisor itself is under pressure — affecting all VMs simultaneously. Tracks the root partition overhead which is invisible from within VMs.\n\nDocumented **Data sources**: `sourcetype=Perfmon:HyperV` (Hyper-V Hypervisor Logical Processor, Memory counters). **App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V Perfmon inputs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:HyperV. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:HyperV\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_cpu > 85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Hyper-V Host Resource Utilization**): table _time, host, avg_cpu\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 85\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Hyper-V Host Resource Utilization** — Host-level CPU and memory utilization across all VMs determines capacity headroom. Unlike per-VM monitoring, host-level metrics reveal when the hypervisor itself is under pressure — affecting all VMs simultaneously. Tracks the root partition overhead which is invisible from within VMs.\n\nDocumented **Data sources**: `sourcetype=Perfmon:HyperV` (Hyper-V Hypervisor Logical Processor, Memory counters). 
**App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V Perfmon inputs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where avg_cpu > 85` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU/memory over time per host), Gauge (current utilization), Heatmap (hosts by load).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on host Resource Utilization and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as avg_cpu\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where avg_cpu > 85",
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.2.7",
              "n": "Dynamic Memory Pressure and Effectiveness",
              "c": "high",
              "f": "intermediate",
              "v": "Dynamic Memory allows Hyper-V to adjust VM memory allocations based on demand. When memory pressure is high, the host reduces VM allocations below their startup RAM — causing in-guest paging. Monitoring reveals whether Dynamic Memory is helping or hurting, and which VMs are being starved.",
              "t": "`Splunk_TA_windows` (Hyper-V Perfmon inputs)",
              "d": "`sourcetype=Perfmon:HyperV` (Hyper-V Dynamic Memory - VM)",
              "q": "index=perfmon sourcetype=\"Perfmon:HyperV\" object=\"Hyper-V Dynamic Memory - VM\" (counter=\"Current Pressure\" OR counter=\"Average Pressure\" OR counter=\"Guest Visible Physical Memory\")\n| stats avg(eval(if(counter=\"Current Pressure\", Value, null()))) as pressure, avg(eval(if(counter=\"Guest Visible Physical Memory\", Value, null()))) as visible_mb by instance, host\n| where pressure > 100\n| sort -pressure\n| table instance, host, pressure, visible_mb",
              "m": "Configure Perfmon inputs for `Hyper-V Dynamic Memory - VM` counters. Pressure >100 means the VM wants more memory than it has. Track over time — sustained pressure >80 indicates the VM needs a higher minimum RAM setting. Alert when pressure exceeds 100 for production VMs.",
              "z": "Line chart (pressure over time per VM), Table (VMs under pressure), Gauge (average pressure).",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Hyper-V Perfmon inputs).\n• Ensure the following data sources are available: `sourcetype=Perfmon:HyperV` (Hyper-V Dynamic Memory - VM).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon inputs for `Hyper-V Dynamic Memory - VM` counters. Pressure >100 means the VM wants more memory than it has. Track over time — sustained pressure >80 indicates the VM needs a higher minimum RAM setting. Alert when pressure exceeds 100 for production VMs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:HyperV\" object=\"Hyper-V Dynamic Memory - VM\" (counter=\"Current Pressure\" OR counter=\"Average Pressure\" OR counter=\"Guest Visible Physical Memory\")\n| stats avg(eval(if(counter=\"Current Pressure\", Value, null()))) as pressure, avg(eval(if(counter=\"Guest Visible Physical Memory\", Value, null()))) as visible_mb by instance, host\n| where pressure > 100\n| sort -pressure\n| table instance, host, pressure, visible_mb\n```\n\nUnderstanding this SPL\n\n**Dynamic Memory Pressure and Effectiveness** — Dynamic Memory allows Hyper-V to adjust VM memory allocations based on demand. When memory pressure is high, the host reduces VM allocations below their startup RAM — causing in-guest paging. Monitoring reveals whether Dynamic Memory is helping or hurting, and which VMs are being starved.\n\nDocumented **Data sources**: `sourcetype=Perfmon:HyperV` (Hyper-V Dynamic Memory - VM). **App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V Perfmon inputs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:HyperV. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:HyperV\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by instance, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where pressure > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Dynamic Memory Pressure and Effectiveness**): table instance, host, pressure, visible_mb\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (pressure over time per VM), Table (VMs under pressure), Gauge (average pressure).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on dynamic Memory Pressure and Effectiveness and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.2.8",
              "n": "Checkpoint Age and Sprawl",
              "c": "high",
              "f": "beginner",
              "v": "Hyper-V checkpoints (snapshots) accumulate AVHDX differencing disks that grow over time and degrade I/O performance. Old checkpoints complicate backup and recovery, consume unexpected storage, and cause merge storms when finally deleted. Production checkpoints are safer but still grow.",
              "t": "`Splunk_TA_windows` (Hyper-V), custom scripted input",
              "d": "PowerShell scripted input (`Get-VMCheckpoint`)",
              "q": "index=hyperv sourcetype=\"hyperv_checkpoints\"\n| eval age_days=round((now() - strptime(creation_time, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| where age_days > 3\n| sort -age_days\n| table vm_name, host, checkpoint_name, age_days, size_gb, checkpoint_type",
              "m": "Create scripted input: `Get-VM | Get-VMCheckpoint | Select VMName, Name, CreationTime, CheckpointType, @{N='SizeGB';E={[math]::Round((Get-VHD $_.HardDrives.Path).FileSize/1GB,2)}}`. Run daily. Alert on checkpoints >3 days old. Distinguish production checkpoints (application-consistent) from standard (crash-consistent).",
              "z": "Table (VM, checkpoint, age, size), Bar chart (checkpoints by age bucket), Single value (total checkpoint count).",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Hyper-V), custom scripted input.\n• Ensure the following data sources are available: PowerShell scripted input (`Get-VMCheckpoint`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: `Get-VM | Get-VMCheckpoint | Select VMName, Name, CreationTime, CheckpointType, @{N='SizeGB';E={[math]::Round((Get-VHD $_.HardDrives.Path).FileSize/1GB,2)}}`. Run daily. Alert on checkpoints >3 days old. Distinguish production checkpoints (application-consistent) from standard (crash-consistent).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hyperv sourcetype=\"hyperv_checkpoints\"\n| eval age_days=round((now() - strptime(creation_time, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| where age_days > 3\n| sort -age_days\n| table vm_name, host, checkpoint_name, age_days, size_gb, checkpoint_type\n```\n\nUnderstanding this SPL\n\n**Checkpoint Age and Sprawl** — Hyper-V checkpoints (snapshots) accumulate AVHDX differencing disks that grow over time and degrade I/O performance. Old checkpoints complicate backup and recovery, consume unexpected storage, and cause merge storms when finally deleted. Production checkpoints are safer but still grow.\n\nDocumented **Data sources**: PowerShell scripted input (`Get-VMCheckpoint`). **App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V), custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hyperv; **sourcetype**: hyperv_checkpoints. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hyperv, sourcetype=\"hyperv_checkpoints\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Checkpoint Age and Sprawl**): table vm_name, host, checkpoint_name, age_days, size_gb, checkpoint_type\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, checkpoint, age, size), Bar chart (checkpoints by age bucket), Single value (total checkpoint count).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on checkpoint Age and Sprawl and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.2.9",
              "n": "Virtual Switch Dropped Packets and Network Errors",
              "c": "high",
              "f": "intermediate",
              "v": "Virtual switch dropped packets indicate congestion, misconfigured VLAN tagging, or bandwidth management policy throttling. Hyper-V Extensible Switch drops are invisible from within the VM, making hypervisor-level monitoring the only way to detect them.",
              "t": "`Splunk_TA_windows` (Hyper-V Perfmon inputs)",
              "d": "`sourcetype=Perfmon:HyperV` (Hyper-V Virtual Switch, Hyper-V Virtual Network Adapter)",
              "q": "index=perfmon sourcetype=\"Perfmon:HyperV\" object=\"Hyper-V Virtual Network Adapter\" (counter=\"Dropped Packets Incoming\" OR counter=\"Dropped Packets Outgoing\")\n| stats sum(Value) as total_drops by instance, host, counter\n| where total_drops > 0\n| sort -total_drops\n| table instance, host, counter, total_drops",
              "m": "Configure Perfmon inputs for `Hyper-V Virtual Network Adapter` (Dropped Packets Incoming/Outgoing, Packets Received/Sent Errors) and `Hyper-V Virtual Switch` (Dropped Packets/sec). Alert when any adapter shows >0 dropped packets sustained over 5 minutes. Correlate with bandwidth usage to distinguish congestion from misconfiguration.",
              "z": "Table (adapter, host, drops), Line chart (drops over time), Bar chart (top adapters by drops).",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Hyper-V Perfmon inputs).\n• Ensure the following data sources are available: `sourcetype=Perfmon:HyperV` (Hyper-V Virtual Switch, Hyper-V Virtual Network Adapter).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon inputs for `Hyper-V Virtual Network Adapter` (Dropped Packets Incoming/Outgoing, Packets Received/Sent Errors) and `Hyper-V Virtual Switch` (Dropped Packets/sec). Alert when any adapter shows >0 dropped packets sustained over 5 minutes. Correlate with bandwidth usage to distinguish congestion from misconfiguration.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:HyperV\" object=\"Hyper-V Virtual Network Adapter\" (counter=\"Dropped Packets Incoming\" OR counter=\"Dropped Packets Outgoing\")\n| stats sum(Value) as total_drops by instance, host, counter\n| where total_drops > 0\n| sort -total_drops\n| table instance, host, counter, total_drops\n```\n\nUnderstanding this SPL\n\n**Virtual Switch Dropped Packets and Network Errors** — Virtual switch dropped packets indicate congestion, misconfigured VLAN tagging, or bandwidth management policy throttling. Hyper-V Extensible Switch drops are invisible from within the VM, making hypervisor-level monitoring the only way to detect them.\n\nDocumented **Data sources**: `sourcetype=Perfmon:HyperV` (Hyper-V Virtual Switch, Hyper-V Virtual Network Adapter). **App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V Perfmon inputs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:HyperV. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:HyperV\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by instance, host, counter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where total_drops > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Virtual Switch Dropped Packets and Network Errors**): table instance, host, counter, total_drops\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (adapter, host, drops), Line chart (drops over time), Bar chart (top adapters by drops).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on virtual Switch Dropped Packets and Network Errors and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.2.10",
              "n": "Failover Cluster Node Health and Quorum",
              "c": "critical",
              "f": "intermediate",
              "v": "Hyper-V failover clusters require quorum to operate. A node leaving the cluster reduces fault tolerance and can trigger mass VM failover. Quorum loss means the entire cluster stops, downing all VMs. Monitoring node health and quorum status prevents catastrophic cluster outages.",
              "t": "`Splunk_TA_windows` (Hyper-V)",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-FailoverClustering/Operational`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-FailoverClustering/Operational\" (EventCode=1069 OR EventCode=1177 OR EventCode=1135 OR EventCode=1564 OR EventCode=1566)\n| eval severity=case(EventCode=1135, \"Node Down\", EventCode=1177, \"Quorum Lost\", EventCode=1069, \"Resource Failed\", EventCode=1564, \"Quorum Degraded\", EventCode=1566, \"Quorum Restored\")\n| table _time, host, EventCode, severity, Message\n| sort -_time",
              "m": "Collect Failover Clustering operational event log. Key EventCodes: 1135 (node removed), 1177 (quorum lost), 1069 (cluster resource failed), 1564 (quorum degraded). Alert immediately on quorum events. Also create a PowerShell scripted input: `Get-ClusterNode | Select Name, State, StatusInformation`. Run every 60 seconds.",
              "z": "Status grid (node health), Events timeline, Single value (active nodes / total nodes), Alert panel.",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Hyper-V).\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-FailoverClustering/Operational`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Failover Clustering operational event log. Key EventCodes: 1135 (node removed), 1177 (quorum lost), 1069 (cluster resource failed), 1564 (quorum degraded). Alert immediately on quorum events. Also create a PowerShell scripted input: `Get-ClusterNode | Select Name, State, StatusInformation`. Run every 60 seconds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-FailoverClustering/Operational\" (EventCode=1069 OR EventCode=1177 OR EventCode=1135 OR EventCode=1564 OR EventCode=1566)\n| eval severity=case(EventCode=1135, \"Node Down\", EventCode=1177, \"Quorum Lost\", EventCode=1069, \"Resource Failed\", EventCode=1564, \"Quorum Degraded\", EventCode=1566, \"Quorum Restored\")\n| table _time, host, EventCode, severity, Message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Failover Cluster Node Health and Quorum** — Hyper-V failover clusters require quorum to operate. A node leaving the cluster reduces fault tolerance and can trigger mass VM failover. Quorum loss means the entire cluster stops, downing all VMs. Monitoring node health and quorum status prevents catastrophic cluster outages.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-FailoverClustering/Operational`. **App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-FailoverClustering/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-FailoverClustering/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Failover Cluster Node Health and Quorum**): table _time, host, EventCode, severity, Message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (node health), Events timeline, Single value (active nodes / total nodes), Alert panel.\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on failover Cluster Node Health and Quorum and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.2.11",
              "n": "Storage Spaces Direct (S2D) Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Storage Spaces Direct pools local storage across cluster nodes into a shared storage fabric. Disk failures, network partitions, or capacity exhaustion degrade the storage pool, risking data loss. S2D self-heals by rebuilding data on remaining disks, consuming significant I/O during repair.",
              "t": "`Splunk_TA_windows` (Hyper-V), custom scripted input",
              "d": "PowerShell scripted input (`Get-StorageSubsystem`, `Get-PhysicalDisk`, `Get-StoragePool`)",
              "q": "index=hyperv sourcetype=\"s2d_health\"\n| stats latest(pool_health) as health, latest(pool_operational_status) as op_status, latest(capacity_pct) as capacity by pool_name, host\n| where health!=\"Healthy\" OR capacity > 80\n| sort -capacity\n| table pool_name, host, health, op_status, capacity",
              "m": "Create scripted inputs: `Get-StoragePool | Select FriendlyName, HealthStatus, OperationalStatus, Size, AllocatedSize` and `Get-PhysicalDisk | Select FriendlyName, HealthStatus, OperationalStatus, MediaType, Size, CanPool`. Run every 5 minutes. Alert on any non-Healthy status or capacity >80%.",
              "z": "Status grid (pool health), Table (disk status), Gauge (capacity utilization), Events (repair operations).",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Hyper-V), custom scripted input.\n• Ensure the following data sources are available: PowerShell scripted input (`Get-StorageSubsystem`, `Get-PhysicalDisk`, `Get-StoragePool`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted inputs: `Get-StoragePool | Select FriendlyName, HealthStatus, OperationalStatus, Size, AllocatedSize` and `Get-PhysicalDisk | Select FriendlyName, HealthStatus, OperationalStatus, MediaType, Size, CanPool`. Run every 5 minutes. Alert on any non-Healthy status or capacity >80%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hyperv sourcetype=\"s2d_health\"\n| stats latest(pool_health) as health, latest(pool_operational_status) as op_status, latest(capacity_pct) as capacity by pool_name, host\n| where health!=\"Healthy\" OR capacity > 80\n| sort -capacity\n| table pool_name, host, health, op_status, capacity\n```\n\nUnderstanding this SPL\n\n**Storage Spaces Direct (S2D) Health** — Storage Spaces Direct pools local storage across cluster nodes into a shared storage fabric. Disk failures, network partitions, or capacity exhaustion degrade the storage pool, risking data loss. S2D self-heals by rebuilding data on remaining disks, consuming significant I/O during repair.\n\nDocumented **Data sources**: PowerShell scripted input (`Get-StorageSubsystem`, `Get-PhysicalDisk`, `Get-StoragePool`). **App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V), custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hyperv; **sourcetype**: s2d_health. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hyperv, sourcetype=\"s2d_health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by pool_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where health!=\"Healthy\" OR capacity > 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Storage Spaces Direct (S2D) Health**): table pool_name, host, health, op_status, capacity\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (pool health), Table (disk status), Gauge (capacity utilization), Events (repair operations).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on storage Spaces Direct (S2D) Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.2.12",
              "n": "VM Generation and Secure Boot Compliance",
              "c": "medium",
              "f": "beginner",
              "v": "Generation 1 VMs use legacy BIOS and cannot support Secure Boot, vTPM, or UEFI features required by modern security policies. Generation 2 VMs with Secure Boot enabled prevent rootkits and bootkits from loading unauthorized firmware or OS loaders.",
              "t": "`Splunk_TA_windows` (Hyper-V), custom scripted input",
              "d": "PowerShell scripted input (`Get-VM`)",
              "q": "index=hyperv sourcetype=\"hyperv_vm_config\"\n| stats latest(generation) as gen, latest(secure_boot) as secure_boot by vm_name, host\n| eval compliant=if(gen=2 AND secure_boot=\"On\", \"Yes\", \"No\")\n| where compliant=\"No\"\n| table vm_name, host, gen, secure_boot, compliant\n| sort gen",
              "m": "Create scripted input: `Get-VM | Select Name, Generation, @{N='SecureBoot';E={(Get-VMFirmware $_).SecureBoot}}`. Run daily. Define compliance policy — all new VMs should be Gen 2 with Secure Boot enabled. Generate weekly compliance reports. Note: Gen 1 → Gen 2 migration requires VM rebuild.",
              "z": "Pie chart (Gen 1 vs Gen 2), Table (non-compliant VMs), Bar chart (by host).",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Hyper-V), custom scripted input.\n• Ensure the following data sources are available: PowerShell scripted input (`Get-VM`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: `Get-VM | Select Name, Generation, @{N='SecureBoot';E={(Get-VMFirmware $_).SecureBoot}}`. Run daily. Define compliance policy — all new VMs should be Gen 2 with Secure Boot enabled. Generate weekly compliance reports. Note: Gen 1 → Gen 2 migration requires VM rebuild.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hyperv sourcetype=\"hyperv_vm_config\"\n| stats latest(generation) as gen, latest(secure_boot) as secure_boot by vm_name, host\n| eval compliant=if(gen=2 AND secure_boot=\"On\", \"Yes\", \"No\")\n| where compliant=\"No\"\n| table vm_name, host, gen, secure_boot, compliant\n| sort gen\n```\n\nUnderstanding this SPL\n\n**VM Generation and Secure Boot Compliance** — Generation 1 VMs use legacy BIOS and cannot support Secure Boot, vTPM, or UEFI features required by modern security policies. Generation 2 VMs with Secure Boot enabled prevent rootkits and bootkits from loading unauthorized firmware or OS loaders.\n\nDocumented **Data sources**: PowerShell scripted input (`Get-VM`). **App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V), custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hyperv; **sourcetype**: hyperv_vm_config. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hyperv, sourcetype=\"hyperv_vm_config\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where compliant=\"No\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VM Generation and Secure Boot Compliance**): table vm_name, host, gen, secure_boot, compliant\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (Gen 1 vs Gen 2), Table (non-compliant VMs), Bar chart (by host).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on generation and Secure Boot and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.2.13",
              "n": "Hyper-V Event Log Error Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Trending Hyper-V event log errors reveals emerging hardware issues, driver problems, and configuration drift. A sudden increase in VMMS, VMWP, or VID errors often precedes VM failures. Baseline comparison distinguishes noise from genuine problems.",
              "t": "`Splunk_TA_windows` (Hyper-V)",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-*`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-*\" Type=\"Error\"\n| bin _time span=1h\n| stats count by _time, host, sourcetype\n| eventstats avg(count) as avg_errors, stdev(count) as stdev_errors by host, sourcetype\n| eval upper=avg_errors + (2*stdev_errors)\n| where count > upper AND count > 5\n| table _time, host, sourcetype, count, avg_errors, upper",
              "m": "Collect all Hyper-V event log channels (VMMS-Admin, Worker-Admin, VID-Admin, Hypervisor-Admin). Baseline error rates over 30 days per host. Alert when error count exceeds 2 standard deviations above the mean. Investigate by drilling into specific EventCodes.",
              "z": "Line chart (errors over time), Table (anomalous periods), Bar chart (error types).",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Hyper-V).\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect all Hyper-V event log channels (VMMS-Admin, Worker-Admin, VID-Admin, Hypervisor-Admin). Baseline error rates over 30 days per host. Alert when error count exceeds 2 standard deviations above the mean. Investigate by drilling into specific EventCodes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-*\" Type=\"Error\"\n| bin _time span=1h\n| stats count by _time, host, sourcetype\n| eventstats avg(count) as avg_errors, stdev(count) as stdev_errors by host, sourcetype\n| eval upper=avg_errors + (2*stdev_errors)\n| where count > upper AND count > 5\n| table _time, host, sourcetype, count, avg_errors, upper\n```\n\nUnderstanding this SPL\n\n**Hyper-V Event Log Error Trending** — Trending Hyper-V event log errors reveals emerging hardware issues, driver problems, and configuration drift. A sudden increase in VMMS, VMWP, or VID errors often precedes VM failures. Baseline comparison distinguishes noise from genuine problems.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-*`. **App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-Hyper-V-*. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-*\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by host, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **upper** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where count > upper AND count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Hyper-V Event Log Error Trending**): table _time, host, sourcetype, count, avg_errors, upper\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (errors over time), Table (anomalous periods), Bar chart (error types).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on event Log Error Trending and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.2.14",
              "n": "VM Resource Metering for Chargeback",
              "c": "low",
              "f": "intermediate",
              "v": "Hyper-V's built-in resource metering tracks per-VM CPU, memory, disk, and network consumption for chargeback and showback. Without metering data, cost allocation is based on allocation rather than actual usage — leading to disputes and over-provisioning.",
              "t": "`Splunk_TA_windows` (Hyper-V), custom scripted input",
              "d": "PowerShell scripted input (`Measure-VM`)",
              "q": "index=hyperv sourcetype=\"hyperv_metering\"\n| bin _time span=1d\n| stats avg(avg_cpu_mhz) as avg_cpu, avg(avg_memory_mb) as avg_mem, sum(disk_bytes_read) as disk_read, sum(disk_bytes_written) as disk_write, sum(network_bytes_in) as net_in, sum(network_bytes_out) as net_out by vm_name, host, _time\n| eval disk_total_gb=round((disk_read+disk_write)/1073741824, 2)\n| eval net_total_gb=round((net_in+net_out)/1073741824, 2)\n| table _time, vm_name, host, avg_cpu, avg_mem, disk_total_gb, net_total_gb",
              "m": "Enable resource metering: `Enable-VMResourceMetering -VMName *`. Create scripted input: `Measure-VM | Select VMName, AvgCPU, AvgRAM, TotalDisk*, AggregatedAverageNormalizedIOPS, AggregatedDiskDataRead, AggregatedDiskDataWritten, NetworkMeteredTrafficReport`. Run hourly. Maintain cost-per-unit lookups for chargeback calculations.",
              "z": "Table (VM, resource usage, estimated cost), Bar chart (cost by department), Timechart (usage trending).",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Hyper-V), custom scripted input.\n• Ensure the following data sources are available: PowerShell scripted input (`Measure-VM`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable resource metering: `Enable-VMResourceMetering -VMName *`. Create scripted input: `Measure-VM | Select VMName, AvgCPU, AvgRAM, TotalDisk*, AggregatedAverageNormalizedIOPS, AggregatedDiskDataRead, AggregatedDiskDataWritten, NetworkMeteredTrafficReport`. Run hourly. Maintain cost-per-unit lookups for chargeback calculations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hyperv sourcetype=\"hyperv_metering\"\n| bin _time span=1d\n| stats avg(avg_cpu_mhz) as avg_cpu, avg(avg_memory_mb) as avg_mem, sum(disk_bytes_read) as disk_read, sum(disk_bytes_written) as disk_write, sum(network_bytes_in) as net_in, sum(network_bytes_out) as net_out by vm_name, host, _time\n| eval disk_total_gb=round((disk_read+disk_write)/1073741824, 2)\n| eval net_total_gb=round((net_in+net_out)/1073741824, 2)\n| table _time, vm_name, host, avg_cpu, avg_mem, disk_total_gb, net_total_gb\n```\n\nUnderstanding this SPL\n\n**VM Resource Metering for Chargeback** — Hyper-V's built-in resource metering tracks per-VM CPU, memory, disk, and network consumption for chargeback and showback. Without metering data, cost allocation is based on allocation rather than actual usage — leading to disputes and over-provisioning.\n\nDocumented **Data sources**: PowerShell scripted input (`Measure-VM`). **App/TA** (typical add-on context): `Splunk_TA_windows` (Hyper-V), custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hyperv; **sourcetype**: hyperv_metering. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hyperv, sourcetype=\"hyperv_metering\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by vm_name, host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **disk_total_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **net_total_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **VM Resource Metering for Chargeback**): table _time, vm_name, host, avg_cpu, avg_mem, disk_total_gb, net_total_gb\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (VM, resource usage, estimated cost), Bar chart (cost by department), Timechart (usage trending).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on resource Metering for Chargeback and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hyperv",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.2.15",
              "n": "Hyper-V VM State Changes",
              "c": "high",
              "f": "intermediate",
              "v": "Unexpected VM power state changes (shutdowns, paused, critical saves) indicate host issues, resource contention, or unauthorized administrative actions.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin` (EventCode 12320, 12322, 12324, 18304)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-Hyper-V-VMMS*\"\n  EventCode IN (12320, 12322, 12324, 18304, 18310, 18312)\n| eval action=case(EventCode=12320,\"VM started\",EventCode=12322,\"VM stopped\",EventCode=12324,\"VM saved\",EventCode=18304,\"VM critical\",EventCode=18310,\"VM paused\",EventCode=18312,\"VM resumed\")\n| table _time, host, action, VmName, VmId\n| sort -_time",
              "m": "Forward Hyper-V VMMS Admin logs from all Hyper-V hosts. EventCode 18304=VM entered critical state (memory pressure, lost storage), 18310=VM paused (out of disk, integration services failure). Alert on any critical state transitions. Track unexpected shutdowns (12322 without preceding 12320 by admin). Correlate with host-level resource monitoring to identify the root cause.",
              "z": "Timeline (VM state changes), Status grid (VM × state), Table (critical events), Single value (VMs in critical state).",
              "kfp": "Hyper-V counters can wobble when guests reboot, when integration services update, or during quick live migration; align spikes with host maintenance and live migration events.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin` (EventCode 12320, 12322, 12324, 18304).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Hyper-V VMMS Admin logs from all Hyper-V hosts. EventCode 18304=VM entered critical state (memory pressure, lost storage), 18310=VM paused (out of disk, integration services failure). Alert on any critical state transitions. Track unexpected shutdowns (12322 without preceding 12320 by admin). Correlate with host-level resource monitoring to identify the root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-Hyper-V-VMMS*\"\n  EventCode IN (12320, 12322, 12324, 18304, 18310, 18312)\n| eval action=case(EventCode=12320,\"VM started\",EventCode=12322,\"VM stopped\",EventCode=12324,\"VM saved\",EventCode=18304,\"VM critical\",EventCode=18310,\"VM paused\",EventCode=18312,\"VM resumed\")\n| table _time, host, action, VmName, VmId\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Hyper-V VM State Changes** — Unexpected VM power state changes (shutdowns, paused, critical saves) indicate host issues, resource contention, or unauthorized administrative actions.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin` (EventCode 12320, 12322, 12324, 18304). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Hyper-V VM State Changes**): table _time, host, action, VmName, VmId\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Hyper-V VM State Changes** — Unexpected VM power state changes (shutdowns, paused, critical saves) indicate host issues, resource contention, or unauthorized administrative actions.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Hyper-V-VMMS-Admin` (EventCode 12320, 12322, 12324, 18304). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (VM state changes), Status grid (VM × state), Table (critical events), Single value (VMs in critical state).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on vM State Changes and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.3,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 14,
            "none": 0
          }
        },
        {
          "i": "2.3",
          "n": "KVM / Proxmox / oVirt",
          "u": [
            {
              "i": "2.3.1",
              "n": "Guest VM Resource Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Per-VM resource tracking for capacity planning and performance troubleshooting in KVM environments.",
              "t": "Custom scripted input (`virsh domstats`)",
              "d": "Custom sourcetype from `virsh domstats` or `virt-top`",
              "q": "index=virtualization sourcetype=virsh_stats\n| stats latest(cpu_pct) as cpu, latest(mem_used_mb) as memory by vm_name, host\n| sort -cpu",
              "m": "Create scripted input: `virsh domstats --cpu-total --balloon --interface --block`. Run every 60 seconds. Parse per-VM CPU time, balloon current, block read/write, and net rx/tx.",
              "z": "Table, Line chart per VM, Heatmap.",
              "kfp": "KVM and scripted metrics may gap when collectors restart, SSH keys roll, or virsh is blocked by lockfiles; a single null interval is not always a full outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`virsh domstats`).\n• Ensure the following data sources are available: Custom sourcetype from `virsh domstats` or `virt-top`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: `virsh domstats --cpu-total --balloon --interface --block`. Run every 60 seconds. Parse per-VM CPU time, balloon current, block read/write, and net rx/tx.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=virsh_stats\n| stats latest(cpu_pct) as cpu, latest(mem_used_mb) as memory by vm_name, host\n| sort -cpu\n```\n\nUnderstanding this SPL\n\n**Guest VM Resource Monitoring** — Per-VM resource tracking for capacity planning and performance troubleshooting in KVM environments.\n\nDocumented **Data sources**: Custom sourcetype from `virsh domstats` or `virt-top`. **App/TA** (typical add-on context): Custom scripted input (`virsh domstats`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: virsh_stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=virsh_stats. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Line chart per VM, Heatmap.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how hard each virtual machine is working on the host (CPU and memory) and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.2",
              "n": "Host Overcommit Detection",
              "c": "high",
              "f": "advanced",
              "v": "Overcommitted KVM hosts cause all VMs to compete for resources. Unlike VMware, KVM doesn't have sophisticated DRS — manual balancing is needed.",
              "t": "Custom scripted input",
              "d": "Custom sourcetype (`virsh nodeinfo` + `virsh list --all`)",
              "q": "index=virtualization sourcetype=kvm_capacity\n| stats sum(vm_vcpus) as total_vcpus, sum(vm_memory_mb) as total_vm_mem, latest(host_cpus) as host_cpus, latest(host_memory_mb) as host_mem by host\n| eval cpu_overcommit = round(total_vcpus / host_cpus, 2)\n| eval mem_overcommit = round(total_vm_mem / host_mem, 2)\n| where cpu_overcommit > 3 OR mem_overcommit > 1.2",
              "m": "Create scripted input combining `virsh nodeinfo` (host resources) with `virsh dominfo <vm>` for each VM. Calculate aggregate allocation vs. physical capacity. Alert when memory overcommit >1.2x or CPU >4x.",
              "z": "Table (host, allocated vs. physical), Gauge per host.",
              "kfp": "KVM and scripted metrics may gap when collectors restart, SSH keys roll, or virsh is blocked by lockfiles; a single null interval is not always a full outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input.\n• Ensure the following data sources are available: Custom sourcetype (`virsh nodeinfo` + `virsh list --all`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input combining `virsh nodeinfo` (host resources) with `virsh dominfo <vm>` for each VM. Calculate aggregate allocation vs. physical capacity. Alert when memory overcommit >1.2x or CPU >4x.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=kvm_capacity\n| stats sum(vm_vcpus) as total_vcpus, sum(vm_memory_mb) as total_vm_mem, latest(host_cpus) as host_cpus, latest(host_memory_mb) as host_mem by host\n| eval cpu_overcommit = round(total_vcpus / host_cpus, 2)\n| eval mem_overcommit = round(total_vm_mem / host_mem, 2)\n| where cpu_overcommit > 3 OR mem_overcommit > 1.2\n```\n\nUnderstanding this SPL\n\n**Host Overcommit Detection** — Overcommitted KVM hosts cause all VMs to compete for resources. Unlike VMware, KVM doesn't have sophisticated DRS — manual balancing is needed.\n\nDocumented **Data sources**: Custom sourcetype (`virsh nodeinfo` + `virsh list --all`). **App/TA** (typical add-on context): Custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: kvm_capacity. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=kvm_capacity. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cpu_overcommit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mem_overcommit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cpu_overcommit > 3 OR mem_overcommit > 1.2` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, allocated vs. physical), Gauge per host.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on host Overcommit Detection and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.3",
              "n": "VM Lifecycle Events",
              "c": "medium",
              "f": "beginner",
              "v": "Audit trail for VM start, stop, migrate, and crash events. Essential for troubleshooting and change management in open-source virtualization.",
              "t": "Syslog, libvirt logs",
              "d": "`/var/log/libvirt/`, syslog",
              "q": "index=virtualization sourcetype=syslog source=\"/var/log/libvirt/*\"\n| search \"shutting down\" OR \"starting up\" OR \"migrating\" OR \"crashed\"\n| rex \"domain (?<vm_name>\\S+)\"\n| table _time host vm_name _raw\n| sort -_time",
              "m": "Forward `/var/log/libvirt/qemu/*.log` and libvirt system logs. Parse VM name and event type. Alert on unexpected VM shutdowns or crashes.",
              "z": "Events timeline, Table (VM, event, time).",
              "kfp": "libvirt and KVM logs can burst during live migration, image imports, or when guests crash and restart; correlate time windows with host maintenance.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Syslog, libvirt logs.\n• Ensure the following data sources are available: `/var/log/libvirt/`, syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward `/var/log/libvirt/qemu/*.log` and libvirt system logs. Parse VM name and event type. Alert on unexpected VM shutdowns or crashes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=syslog source=\"/var/log/libvirt/*\"\n| search \"shutting down\" OR \"starting up\" OR \"migrating\" OR \"crashed\"\n| rex \"domain (?<vm_name>\\S+)\"\n| table _time host vm_name _raw\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**VM Lifecycle Events** — Audit trail for VM start, stop, migrate, and crash events. Essential for troubleshooting and change management in open-source virtualization.\n\nDocumented **Data sources**: `/var/log/libvirt/`, syslog. **App/TA** (typical add-on context): Syslog, libvirt logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **VM Lifecycle Events**): table _time host vm_name _raw\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Table (VM, event, time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on lifecycle Events and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.4",
              "n": "KVM Guest Agent Heartbeat",
              "c": "high",
              "f": "intermediate",
              "v": "Guest agent (QEMU GA) unavailability prevents graceful shutdown, snapshot consistency, and time sync. Detecting agent loss ensures proper VM management.",
              "t": "Custom scripted input (virsh qemu-agent-command)",
              "d": "`virsh qemu-agent-command <vm> '{\"execute\":\"guest-ping\"}'`",
              "q": "index=virtualization sourcetype=kvm_guest_agent host=*\n| stats latest(agent_ok) as ok by host, vm_name\n| where ok != 1\n| table host vm_name _time",
              "m": "Script that iterates VMs and runs `virsh qemu-agent-command <domain> '{\"execute\":\"guest-ping\"}'`. Ingest result (0/1) per VM. Run every 60 seconds. Alert when agent stops responding.",
              "z": "Status grid (VM vs. agent OK), Table of VMs with no agent.",
              "kfp": "KVM and scripted metrics may gap when collectors restart, SSH keys roll, or virsh is blocked by lockfiles; a single null interval is not always a full outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (virsh qemu-agent-command).\n• Ensure the following data sources are available: `virsh qemu-agent-command <vm> '{\"execute\":\"guest-ping\"}'`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScript that iterates VMs and runs `virsh qemu-agent-command <domain> '{\"execute\":\"guest-ping\"}'`. Ingest result (0/1) per VM. Run every 60 seconds. Alert when agent stops responding.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=kvm_guest_agent host=*\n| stats latest(agent_ok) as ok by host, vm_name\n| where ok != 1\n| table host vm_name _time\n```\n\nUnderstanding this SPL\n\n**KVM Guest Agent Heartbeat** — Guest agent (QEMU GA) unavailability prevents graceful shutdown, snapshot consistency, and time sync. Detecting agent loss ensures proper VM management.\n\nDocumented **Data sources**: `virsh qemu-agent-command <vm> '{\"execute\":\"guest-ping\"}'`. **App/TA** (typical add-on context): Custom scripted input (virsh qemu-agent-command). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: kvm_guest_agent. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=kvm_guest_agent. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, vm_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ok != 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **KVM Guest Agent Heartbeat**): table host vm_name _time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (VM vs. agent OK), Table of VMs with no agent.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on guest Agent Heartbeat and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.5",
              "n": "Libvirt Network Filter and Firewall Rule Audit",
              "c": "medium",
              "f": "advanced",
              "v": "VM-level firewall and filter rules can be changed accidentally or maliciously. Auditing ensures network isolation and compliance.",
              "t": "Custom scripted input (`virsh nwfilter-list`, `virsh dumpxml`)",
              "d": "Libvirt XML dump, nwfilter definitions",
              "q": "index=virtualization sourcetype=libvirt_nwfilter host=*\n| stats latest(rule_hash) as current by host, vm_name, filter_name\n| inputlookup expected_nwfilter append=t\n| eval drift=if(current!=expected_hash, \"Yes\", \"No\")\n| where drift=\"Yes\"\n| table host vm_name filter_name",
              "m": "Periodically dump VM network filter config and compute hash. Compare to baseline lookup. Alert on change. Run after change windows or daily.",
              "z": "Table (host, VM, filter, drift), Compliance count.",
              "kfp": "libvirt and KVM logs can burst during live migration, image imports, or when guests crash and restart; correlate time windows with host maintenance.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`virsh nwfilter-list`, `virsh dumpxml`).\n• Ensure the following data sources are available: Libvirt XML dump, nwfilter definitions.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPeriodically dump VM network filter config and compute hash. Compare to baseline lookup. Alert on change. Run after change windows or daily.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=libvirt_nwfilter host=*\n| stats latest(rule_hash) as current by host, vm_name, filter_name\n| inputlookup expected_nwfilter append=t\n| eval drift=if(current!=expected_hash, \"Yes\", \"No\")\n| where drift=\"Yes\"\n| table host vm_name filter_name\n```\n\nUnderstanding this SPL\n\n**Libvirt Network Filter and Firewall Rule Audit** — VM-level firewall and filter rules can be changed accidentally or maliciously. Auditing ensures network isolation and compliance.\n\nDocumented **Data sources**: Libvirt XML dump, nwfilter definitions. **App/TA** (typical add-on context): Custom scripted input (`virsh nwfilter-list`, `virsh dumpxml`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: libvirt_nwfilter. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=libvirt_nwfilter. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, vm_name, filter_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **drift** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drift=\"Yes\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Libvirt Network Filter and Firewall Rule Audit**): table host vm_name filter_name\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, VM, filter, drift), Compliance count.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on libvirt Network Filter and Firewall Rule Audit and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Configuration",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.6",
              "n": "Virtual Disk Backing Chain and Snapshot Age",
              "c": "high",
              "f": "intermediate",
              "v": "Long snapshot chains and old snapshots degrade I/O and complicate recovery. Monitoring supports snapshot hygiene and prevents runaway growth.",
              "t": "Custom scripted input (`virsh domblkinfo`, `qemu-img info`)",
              "d": "Libvirt/QEMU disk info",
              "q": "index=virtualization sourcetype=kvm_disk_chain host=*\n| stats latest(chain_depth) as depth, latest(oldest_snapshot_days) as snapshot_days by host, vm_name, disk\n| where depth > 3 OR snapshot_days > 30\n| table host vm_name disk depth snapshot_days\n| sort -snapshot_days",
              "m": "Script to list VM disks and snapshot chains (e.g. `virsh snapshot-list`, `qemu-img info`). Compute chain depth and oldest snapshot age. Alert when depth >3 or oldest snapshot >30 days.",
              "z": "Table (VM, disk, depth, oldest snapshot), Bar chart of snapshot age.",
              "kfp": "KVM and scripted metrics may gap when collectors restart, SSH keys roll, or virsh is blocked by lockfiles; a single null interval is not always a full outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`virsh domblkinfo`, `qemu-img info`).\n• Ensure the following data sources are available: Libvirt/QEMU disk info.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScript to list VM disks and snapshot chains (e.g. `virsh snapshot-list`, `qemu-img info`). Compute chain depth and oldest snapshot age. Alert when depth >3 or oldest snapshot >30 days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=kvm_disk_chain host=*\n| stats latest(chain_depth) as depth, latest(oldest_snapshot_days) as snapshot_days by host, vm_name, disk\n| where depth > 3 OR snapshot_days > 30\n| table host vm_name disk depth snapshot_days\n| sort -snapshot_days\n```\n\nUnderstanding this SPL\n\n**Virtual Disk Backing Chain and Snapshot Age** — Long snapshot chains and old snapshots degrade I/O and complicate recovery. Monitoring supports snapshot hygiene and prevents runaway growth.\n\nDocumented **Data sources**: Libvirt/QEMU disk info. **App/TA** (typical add-on context): Custom scripted input (`virsh domblkinfo`, `qemu-img info`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: kvm_disk_chain. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=kvm_disk_chain. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, vm_name, disk** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where depth > 3 OR snapshot_days > 30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Virtual Disk Backing Chain and Snapshot Age**): table host vm_name disk depth snapshot_days\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, disk, depth, oldest snapshot), Bar chart of snapshot age.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on virtual Disk Backing Chain and Snapshot Age and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.7",
              "n": "KVM Host CPU Model and Migration Compatibility",
              "c": "medium",
              "f": "advanced",
              "v": "Live migration fails or degrades when host CPU models differ. Tracking CPU compatibility avoids failed migrations and performance surprises.",
              "t": "Custom scripted input (`virsh capabilities`, `virsh dominfo`)",
              "d": "Libvirt capabilities XML, VM CPU config",
              "q": "index=virtualization sourcetype=kvm_cpu_compat host=*\n| stats latest(host_cpu_model) as host_model, values(vm_cpu_model) as vm_models by host\n| eval compatible=if(match(vm_models, host_model), \"Yes\", \"No\")\n| where compatible=\"No\"\n| table host host_model vm_models",
              "m": "Extract host CPU model from `virsh capabilities` and per-VM CPU from `virsh dumpxml`. Compare for migration compatibility. Document and alert when VMs use incompatible CPU.",
              "z": "Table (host, VM, CPU model, compatible), Migration readiness matrix.",
              "kfp": "KVM and scripted metrics may gap when collectors restart, SSH keys roll, or virsh is blocked by lockfiles; a single null interval is not always a full outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`virsh capabilities`, `virsh dominfo`).\n• Ensure the following data sources are available: Libvirt capabilities XML, VM CPU config.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExtract host CPU model from `virsh capabilities` and per-VM CPU from `virsh dumpxml`. Compare for migration compatibility. Document and alert when VMs use incompatible CPU.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=kvm_cpu_compat host=*\n| stats latest(host_cpu_model) as host_model, values(vm_cpu_model) as vm_models by host\n| eval compatible=if(match(vm_models, host_model), \"Yes\", \"No\")\n| where compatible=\"No\"\n| table host host_model vm_models\n```\n\nUnderstanding this SPL\n\n**KVM Host CPU Model and Migration Compatibility** — Live migration fails or degrades when host CPU models differ. Tracking CPU compatibility avoids failed migrations and performance surprises.\n\nDocumented **Data sources**: Libvirt capabilities XML, VM CPU config. **App/TA** (typical add-on context): Custom scripted input (`virsh capabilities`, `virsh dominfo`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: kvm_cpu_compat. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=kvm_cpu_compat. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **compatible** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where compatible=\"No\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **KVM Host CPU Model and Migration Compatibility**): table host host_model vm_models\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, VM, CPU model, compatible), Migration readiness matrix.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on host CPU Model and Migration Compatibility and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.8",
              "n": "Virtio Driver and Balloon Status in Guests",
              "c": "medium",
              "f": "intermediate",
              "v": "Virtio drivers and balloon driver improve I/O and allow memory reclamation. Missing or inactive drivers cause poor performance and overcommit issues.",
              "t": "Custom scripted input (guest agent or in-guest script)",
              "d": "QEMU guest agent, `virsh dommemstat`",
              "q": "index=virtualization sourcetype=kvm_balloon host=*\n| stats latest(balloon_current_kb) as balloon_kb, latest(balloon_max_kb) as max_kb by host, vm_name\n| eval balloon_ratio=round(balloon_kb/max_kb*100, 1)\n| where balloon_ratio > 50\n| table host vm_name balloon_kb max_kb balloon_ratio",
              "m": "Use `virsh dommemstat` to get balloon current and maximum. High ratio indicates host is reclaiming memory from the VM. Alert when ratio >50% for critical VMs.",
              "z": "Table (VM, balloon KB, ratio), Line chart (balloon over time).",
              "kfp": "KVM and scripted metrics may gap when collectors restart, SSH keys roll, or virsh is blocked by lockfiles; a single null interval is not always a full outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (guest agent or in-guest script).\n• Ensure the following data sources are available: QEMU guest agent, `virsh dommemstat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse `virsh dommemstat` to get balloon current and maximum. High ratio indicates host is reclaiming memory from the VM. Alert when ratio >50% for critical VMs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=kvm_balloon host=*\n| stats latest(balloon_current_kb) as balloon_kb, latest(balloon_max_kb) as max_kb by host, vm_name\n| eval balloon_ratio=round(balloon_kb/max_kb*100, 1)\n| where balloon_ratio > 50\n| table host vm_name balloon_kb max_kb balloon_ratio\n```\n\nUnderstanding this SPL\n\n**Virtio Driver and Balloon Status in Guests** — Virtio drivers and balloon driver improve I/O and allow memory reclamation. Missing or inactive drivers cause poor performance and overcommit issues.\n\nDocumented **Data sources**: QEMU guest agent, `virsh dommemstat`. **App/TA** (typical add-on context): Custom scripted input (guest agent or in-guest script). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: kvm_balloon. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=kvm_balloon. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, vm_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **balloon_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where balloon_ratio > 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Virtio Driver and Balloon Status in Guests**): table host vm_name balloon_kb max_kb balloon_ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, balloon KB, ratio), Line chart (balloon over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on virtio Driver and Balloon Status in Guests and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.9",
              "n": "QEMU Process Crash and Zombie Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Each KVM VM runs as a qemu-kvm process on the host. If the process crashes, the VM dies instantly without graceful shutdown. Zombie qemu processes consume resources without running a VM. Detecting crashes enables rapid restart, while zombie detection prevents resource leaks.",
              "t": "`Splunk_TA_nix`, custom scripted input",
              "d": "Syslog, libvirt logs, process monitoring",
              "q": "index=os sourcetype=syslog (\"qemu-kvm\" AND (\"killed\" OR \"segfault\" OR \"core dumped\" OR \"terminated\"))\n| rex \"qemu-kvm\\[(?<pid>\\d+)\\]\"\n| table _time, host, pid, _raw\n| sort -_time",
              "m": "Monitor syslog and `/var/log/libvirt/qemu/*.log` for qemu-kvm crash messages. Create a scripted input to detect zombie processes: `ps aux | grep qemu-kvm | grep -v grep | awk '{if($8==\"Z\") print}'`. Alert immediately on crash events. Cross-reference with libvirt domain list to detect processes without corresponding VMs.",
              "z": "Timeline (crash events), Table (crashed VMs), Single value (active zombies).",
              "kfp": "KVM and scripted metrics may gap when collectors restart, SSH keys roll, or virsh is blocked by lockfiles; a single null interval is not always a full outage.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, custom scripted input.\n• Ensure the following data sources are available: Syslog, libvirt logs, process monitoring.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor syslog and `/var/log/libvirt/qemu/*.log` for qemu-kvm crash messages. Create a scripted input to detect zombie processes: `ps aux | grep qemu-kvm | grep -v grep | awk '{if($8==\"Z\") print}'`. Alert immediately on crash events. Cross-reference with libvirt domain list to detect processes without corresponding VMs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog (\"qemu-kvm\" AND (\"killed\" OR \"segfault\" OR \"core dumped\" OR \"terminated\"))\n| rex \"qemu-kvm\\[(?<pid>\\d+)\\]\"\n| table _time, host, pid, _raw\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**QEMU Process Crash and Zombie Detection** — Each KVM VM runs as a qemu-kvm process on the host. If the process crashes, the VM dies instantly without graceful shutdown. Zombie qemu processes consume resources without running a VM. Detecting crashes enables rapid restart, while zombie detection prevents resource leaks.\n\nDocumented **Data sources**: Syslog, libvirt logs, process monitoring. **App/TA** (typical add-on context): `Splunk_TA_nix`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **QEMU Process Crash and Zombie Detection**): table _time, host, pid, _raw\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (crash events), Table (crashed VMs), Single value (active zombies).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on qEMU Process Crash and Zombie Detection and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.10",
              "n": "Storage Pool Capacity Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Libvirt storage pools (LVM, directory, NFS, Ceph, ZFS) provide disk backing for VMs. A full storage pool prevents new VM creation, snapshot operations, and can cause running VMs to pause when using thin provisioning. Monitoring pool capacity prevents VM outages.",
              "t": "Custom scripted input",
              "d": "`virsh pool-info`, storage pool metrics",
              "q": "index=virtualization sourcetype=kvm_storage_pools\n| eval used_pct=round(used_gb/capacity_gb*100, 1)\n| where used_pct > 80\n| sort -used_pct\n| table host, pool_name, pool_type, capacity_gb, used_gb, used_pct",
              "m": "Create scripted input: `for pool in $(virsh pool-list --name); do virsh pool-info $pool; done`. Parse capacity, allocation, and available fields. Run every 5 minutes. Alert at 80% (warning) and 90% (critical). Include pool type in output — LVM pools cannot auto-extend, while directory pools grow with the filesystem.",
              "z": "Gauge (per pool), Table (pool status), Line chart (capacity trend with prediction).",
              "kfp": "libvirt and KVM logs can burst during live migration, image imports, or when guests crash and restart; correlate time windows with host maintenance.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input.\n• Ensure the following data sources are available: `virsh pool-info`, storage pool metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: `for pool in $(virsh pool-list --name); do virsh pool-info $pool; done`. Parse capacity, allocation, and available fields. Run every 5 minutes. Alert at 80% (warning) and 90% (critical). Include pool type in output — LVM pools cannot auto-extend, while directory pools grow with the filesystem.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=kvm_storage_pools\n| eval used_pct=round(used_gb/capacity_gb*100, 1)\n| where used_pct > 80\n| sort -used_pct\n| table host, pool_name, pool_type, capacity_gb, used_gb, used_pct\n```\n\nUnderstanding this SPL\n\n**Storage Pool Capacity Monitoring** — Libvirt storage pools (LVM, directory, NFS, Ceph, ZFS) provide disk backing for VMs. A full storage pool prevents new VM creation, snapshot operations, and can cause running VMs to pause when using thin provisioning. Monitoring pool capacity prevents VM outages.\n\nDocumented **Data sources**: `virsh pool-info`, storage pool metrics. **App/TA** (typical add-on context): Custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: kvm_storage_pools. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=kvm_storage_pools. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Storage Pool Capacity Monitoring**): table host, pool_name, pool_type, capacity_gb, used_gb, used_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (per pool), Table (pool status), Line chart (capacity trend with prediction).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on storage Pool Capacity and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.11",
              "n": "Proxmox Backup Server Job Status",
              "c": "critical",
              "f": "beginner",
              "v": "Proxmox Backup Server (PBS) provides incremental, deduplicated VM backups. Failed backup jobs mean VMs have no recovery point. Monitoring backup status, duration, and deduplication ratio ensures recoverability and optimizes storage efficiency.",
              "t": "Custom API input, Proxmox syslog",
              "d": "Proxmox Backup Server API, `/var/log/proxmox-backup/tasks/`",
              "q": "index=virtualization sourcetype=\"proxmox_backup\"\n| eval duration_min=round(duration_sec/60, 1)\n| eval status_ok=if(status=\"OK\", 1, 0)\n| stats latest(status) as last_status, latest(duration_min) as last_duration_min, latest(backup_size_gb) as size_gb, latest(dedup_ratio) as dedup by vm_name, backup_type\n| where last_status!=\"OK\"\n| table vm_name, backup_type, last_status, last_duration_min, size_gb, dedup",
              "m": "Poll the PBS API (`/api2/json/nodes/{node}/tasks`) or forward PBS task logs to Splunk. Track backup success/failure per VM, backup duration, transferred size, and deduplication factor. Alert on any failed backup. Also alert when no backup has been taken for a VM in >24 hours. Monitor datastore capacity on PBS.",
              "z": "Table (VM, status, duration, size), Bar chart (backup success rate), Timechart (backup duration trending).",
              "kfp": "KVM and scripted metrics may gap when collectors restart, SSH keys roll, or virsh is blocked by lockfiles; a single null interval is not always a full outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input, Proxmox syslog.\n• Ensure the following data sources are available: Proxmox Backup Server API, `/var/log/proxmox-backup/tasks/`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll the PBS API (`/api2/json/nodes/{node}/tasks`) or forward PBS task logs to Splunk. Track backup success/failure per VM, backup duration, transferred size, and deduplication factor. Alert on any failed backup. Also alert when no backup has been taken for a VM in >24 hours. Monitor datastore capacity on PBS.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=\"proxmox_backup\"\n| eval duration_min=round(duration_sec/60, 1)\n| eval status_ok=if(status=\"OK\", 1, 0)\n| stats latest(status) as last_status, latest(duration_min) as last_duration_min, latest(backup_size_gb) as size_gb, latest(dedup_ratio) as dedup by vm_name, backup_type\n| where last_status!=\"OK\"\n| table vm_name, backup_type, last_status, last_duration_min, size_gb, dedup\n```\n\nUnderstanding this SPL\n\n**Proxmox Backup Server Job Status** — Proxmox Backup Server (PBS) provides incremental, deduplicated VM backups. Failed backup jobs mean VMs have no recovery point. Monitoring backup status, duration, and deduplication ratio ensures recoverability and optimizes storage efficiency.\n\nDocumented **Data sources**: Proxmox Backup Server API, `/var/log/proxmox-backup/tasks/`. **App/TA** (typical add-on context): Custom API input, Proxmox syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: proxmox_backup. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=\"proxmox_backup\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by vm_name, backup_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where last_status!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Proxmox Backup Server Job Status**): table vm_name, backup_type, last_status, last_duration_min, size_gb, dedup\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (VM, status, duration, size), Bar chart (backup success rate), Timechart (backup duration trending).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on backup Server Job and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "proxmox",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.12",
              "n": "Proxmox Cluster Corosync and Quorum Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Proxmox clusters use Corosync for node communication and quorum. A split-brain scenario can cause data corruption on shared storage. Nodes losing corosync connectivity cannot access cluster resources, and quorum loss stops all HA-protected VMs. Early detection of communication issues prevents cluster-wide outages.",
              "t": "Custom scripted input, syslog",
              "d": "Corosync logs, `pvecm status`, Proxmox cluster API",
              "q": "index=virtualization sourcetype=\"proxmox_cluster\"\n| stats latest(quorate) as quorum, latest(total_nodes) as total, latest(online_nodes) as online by cluster_name\n| eval quorum_ok=if(quorum=\"Yes\", \"OK\", \"CRITICAL\")\n| eval nodes_ok=if(online=total, \"All Online\", online . \"/\" . total . \" Online\")\n| table cluster_name, quorum_ok, nodes_ok, total, online",
              "m": "Create scripted input: `pvecm status` to get quorum state, node count, and ring status. Also monitor Corosync syslog for retransmit failures and membership changes. Alert immediately on quorum loss. Alert when any node goes offline. Monitor Corosync ring latency — high latency indicates network issues between nodes.",
              "z": "Status grid (node health), Single value (quorum status), Timeline (membership changes).",
              "kfp": "KVM and scripted metrics may gap when collectors restart, SSH keys roll, or virsh is blocked by lockfiles; a single null interval is not always a full outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input, syslog.\n• Ensure the following data sources are available: Corosync logs, `pvecm status`, Proxmox cluster API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: `pvecm status` to get quorum state, node count, and ring status. Also monitor Corosync syslog for retransmit failures and membership changes. Alert immediately on quorum loss. Alert when any node goes offline. Monitor Corosync ring latency — high latency indicates network issues between nodes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=\"proxmox_cluster\"\n| stats latest(quorate) as quorum, latest(total_nodes) as total, latest(online_nodes) as online by cluster_name\n| eval quorum_ok=if(quorum=\"Yes\", \"OK\", \"CRITICAL\")\n| eval nodes_ok=if(online=total, \"All Online\", online . \"/\" . total . \" Online\")\n| table cluster_name, quorum_ok, nodes_ok, total, online\n```\n\nUnderstanding this SPL\n\n**Proxmox Cluster Corosync and Quorum Health** — Proxmox clusters use Corosync for node communication and quorum. A split-brain scenario can cause data corruption on shared storage. Nodes losing corosync connectivity cannot access cluster resources, and quorum loss stops all HA-protected VMs. Early detection of communication issues prevents cluster-wide outages.\n\nDocumented **Data sources**: Corosync logs, `pvecm status`, Proxmox cluster API. **App/TA** (typical add-on context): Custom scripted input, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: proxmox_cluster. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=\"proxmox_cluster\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **quorum_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **nodes_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Proxmox Cluster Corosync and Quorum Health**): table cluster_name, quorum_ok, nodes_ok, total, online\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (node health), Single value (quorum status), Timeline (membership changes).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on cluster Corosync and Quorum Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.13",
              "n": "Proxmox HA Group and Fence Status",
              "c": "high",
              "f": "intermediate",
              "v": "Proxmox HA automatically restarts VMs on surviving nodes when a host fails. If the HA manager cannot fence (isolate) a failed node, it cannot safely restart VMs — risking split-brain with shared storage. Monitoring HA state, fence status, and migration events ensures the safety net actually works.",
              "t": "Custom scripted input, syslog",
              "d": "Proxmox HA manager logs, `ha-manager status`",
              "q": "index=virtualization sourcetype=\"proxmox_ha\"\n| stats latest(ha_state) as state, latest(node) as current_node, latest(request_state) as requested by vm_id, vm_name\n| where state!=\"started\" OR state!=requested\n| table vm_id, vm_name, state, requested, current_node",
              "m": "Create scripted input: `ha-manager status` to enumerate all HA-managed resources and their states. Monitor HA manager log (`/var/log/pve/ha-manager/`) for fence operations and migration events. Alert on failed fencing (node isolation), VMs in error state, and HA resources that cannot reach their requested state.",
              "z": "Table (HA resources, state, node), Timeline (HA events), Status grid (resource health).",
              "kfp": "KVM and scripted metrics may gap when collectors restart, SSH keys roll, or virsh is blocked by lockfiles; a single null interval is not always a full outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input, syslog.\n• Ensure the following data sources are available: Proxmox HA manager logs, `ha-manager status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: `ha-manager status` to enumerate all HA-managed resources and their states. Monitor HA manager log (`/var/log/pve/ha-manager/`) for fence operations and migration events. Alert on failed fencing (node isolation), VMs in error state, and HA resources that cannot reach their requested state.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=\"proxmox_ha\"\n| stats latest(ha_state) as state, latest(node) as current_node, latest(request_state) as requested by vm_id, vm_name\n| where state!=\"started\" OR state!=requested\n| table vm_id, vm_name, state, requested, current_node\n```\n\nUnderstanding this SPL\n\n**Proxmox HA Group and Fence Status** — Proxmox HA automatically restarts VMs on surviving nodes when a host fails. If the HA manager cannot fence (isolate) a failed node, it cannot safely restart VMs — risking split-brain with shared storage. Monitoring HA state, fence status, and migration events ensures the safety net actually works.\n\nDocumented **Data sources**: Proxmox HA manager logs, `ha-manager status`. **App/TA** (typical add-on context): Custom scripted input, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: proxmox_ha. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=\"proxmox_ha\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_id, vm_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where state!=\"started\" OR state!=requested` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Proxmox HA Group and Fence Status**): table vm_id, vm_name, state, requested, current_node\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (HA resources, state, node), Timeline (HA events), Status grid (resource health).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on hA Group and Fence and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.14",
              "n": "ZFS Pool Health for Proxmox/KVM",
              "c": "critical",
              "f": "beginner",
              "v": "ZFS is the recommended storage backend for Proxmox and many KVM deployments. A degraded ZFS pool means a disk has failed and data is at risk until the pool is resilvered. ZFS capacity above 80% significantly degrades performance due to copy-on-write fragmentation.",
              "t": "`Splunk_TA_nix`, custom scripted input",
              "d": "`zpool status`, `zpool list`, ZFS event daemon (ZED)",
              "q": "index=os sourcetype=\"zfs_pool_status\"\n| stats latest(health) as health, latest(capacity_pct) as capacity, latest(fragmentation) as frag_pct by host, pool_name\n| where health!=\"ONLINE\" OR capacity > 80\n| sort -capacity\n| table host, pool_name, health, capacity, frag_pct",
              "m": "Create scripted input: `zpool list -Hp` for capacity and `zpool status` for health. Parse pool name, size, allocated, free, fragmentation, capacity, dedup ratio, and health. Run every 5 minutes. Alert on any non-ONLINE health status. Alert at 80% capacity. Monitor ZFS Event Daemon (ZED) for disk failures and scrub errors.",
              "z": "Status grid (pool health), Gauge (capacity per pool), Table (pool details), Line chart (capacity trend).",
              "kfp": "KVM and scripted metrics may gap when collectors restart, SSH keys roll, or virsh is blocked by lockfiles; a single null interval is not always a full outage.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, custom scripted input.\n• Ensure the following data sources are available: `zpool status`, `zpool list`, ZFS event daemon (ZED).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: `zpool list -Hp` for capacity and `zpool status` for health. Parse pool name, size, allocated, free, fragmentation, capacity, dedup ratio, and health. Run every 5 minutes. Alert on any non-ONLINE health status. Alert at 80% capacity. Monitor ZFS Event Daemon (ZED) for disk failures and scrub errors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=\"zfs_pool_status\"\n| stats latest(health) as health, latest(capacity_pct) as capacity, latest(fragmentation) as frag_pct by host, pool_name\n| where health!=\"ONLINE\" OR capacity > 80\n| sort -capacity\n| table host, pool_name, health, capacity, frag_pct\n```\n\nUnderstanding this SPL\n\n**ZFS Pool Health for Proxmox/KVM** — ZFS is the recommended storage backend for Proxmox and many KVM deployments. A degraded ZFS pool means a disk has failed and data is at risk until the pool is resilvered. ZFS capacity above 80% significantly degrades performance due to copy-on-write fragmentation.\n\nDocumented **Data sources**: `zpool status`, `zpool list`, ZFS event daemon (ZED). **App/TA** (typical add-on context): `Splunk_TA_nix`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: zfs_pool_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=\"zfs_pool_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, pool_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where health!=\"ONLINE\" OR capacity > 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **ZFS Pool Health for Proxmox/KVM**): table host, pool_name, health, capacity, frag_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (pool health), Gauge (capacity per pool), Table (pool details), Line chart (capacity trend).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on zFS Pool Health for Proxmox/KVM and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.15",
              "n": "VM Disk Cache Mode Audit",
              "c": "medium",
              "f": "intermediate",
              "v": "The disk cache mode determines data safety vs. performance. `writeback` is fast but risks data loss on host crash. `none` (O_DIRECT) provides safe passthrough for guests with their own journaling. `writethrough` is safest but slowest. Incorrect cache modes cause either data loss or unnecessary performance penalties.",
              "t": "Custom scripted input",
              "d": "`virsh dumpxml` disk configuration",
              "q": "index=virtualization sourcetype=\"kvm_disk_config\"\n| stats latest(cache_mode) as cache, latest(io_mode) as io, latest(discard) as discard by host, vm_name, disk_target\n| eval risk=case(cache=\"writeback\", \"High - data loss risk on crash\", cache=\"unsafe\", \"Critical - no fsync\", cache=\"none\", \"Safe - direct I/O\", cache=\"writethrough\", \"Safe - slow\", 1==1, \"Unknown\")\n| where cache=\"writeback\" OR cache=\"unsafe\"\n| table host, vm_name, disk_target, cache, io, risk",
              "m": "Create scripted input: parse `virsh dumpxml <domain>` to extract `<driver cache='...' io='...' discard='...'/>` for each disk. Run daily. Alert on `cache='unsafe'` (never safe for production). Flag `cache='writeback'` for review — acceptable only if the host has battery-backed write cache. Recommend `cache='none'` for most production workloads.",
              "z": "Table (VM, disk, cache mode, risk), Pie chart (cache mode distribution), Bar chart (risky VMs by host).",
              "kfp": "KVM and scripted metrics may gap when collectors restart, SSH keys roll, or virsh is blocked by lockfiles; a single null interval is not always a full outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input.\n• Ensure the following data sources are available: `virsh dumpxml` disk configuration.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: parse `virsh dumpxml <domain>` to extract `<driver cache='...' io='...' discard='...'/>` for each disk. Run daily. Alert on `cache='unsafe'` (never safe for production). Flag `cache='writeback'` for review — acceptable only if the host has battery-backed write cache. Recommend `cache='none'` for most production workloads.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=\"kvm_disk_config\"\n| stats latest(cache_mode) as cache, latest(io_mode) as io, latest(discard) as discard by host, vm_name, disk_target\n| eval risk=case(cache=\"writeback\", \"High - data loss risk on crash\", cache=\"unsafe\", \"Critical - no fsync\", cache=\"none\", \"Safe - direct I/O\", cache=\"writethrough\", \"Safe - slow\", 1==1, \"Unknown\")\n| where cache=\"writeback\" OR cache=\"unsafe\"\n| table host, vm_name, disk_target, cache, io, risk\n```\n\nUnderstanding this SPL\n\n**VM Disk Cache Mode Audit** — The disk cache mode determines data safety vs. performance. `writeback` is fast but risks data loss on host crash. `none` (O_DIRECT) provides safe passthrough for guests with their own journaling. `writethrough` is safest but slowest. Incorrect cache modes cause either data loss or unnecessary performance penalties.\n\nDocumented **Data sources**: `virsh dumpxml` disk configuration. **App/TA** (typical add-on context): Custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: kvm_disk_config. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=\"kvm_disk_config\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, vm_name, disk_target** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cache=\"writeback\" OR cache=\"unsafe\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VM Disk Cache Mode Audit**): table host, vm_name, disk_target, cache, io, risk\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (VM, disk, cache mode, risk), Pie chart (cache mode distribution), Bar chart (risky VMs by host).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on disk Cache Mode Audit and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Configuration",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.16",
              "n": "Libvirt Daemon Health and Responsiveness",
              "c": "critical",
              "f": "beginner",
              "v": "The libvirtd daemon is the management layer for all KVM operations — VM start/stop, migration, storage, networking. If libvirtd hangs or crashes, no VM management operations are possible. Existing VMs keep running but become unmanageable. Detecting libvirtd health issues enables proactive restart before they cascade.",
              "t": "`Splunk_TA_nix`, custom scripted input",
              "d": "Syslog, systemd service status, libvirtd response time",
              "q": "index=os sourcetype=syslog \"libvirtd\" (\"error\" OR \"warning\" OR \"failed\" OR \"timed out\")\n| bin _time span=5m\n| stats count as errors by host, _time\n| where errors > 5\n| table _time, host, errors",
              "m": "Monitor libvirtd syslog output for errors. Create a scripted input that runs `virsh list` and measures response time — if it takes >5 seconds, libvirtd is likely overloaded. Also monitor the systemd service status: `systemctl is-active libvirtd`. Alert if libvirtd is not active or response time exceeds 10 seconds.",
              "z": "Status indicator (libvirtd per host), Line chart (response time), Events table (errors).",
              "kfp": "libvirt and KVM logs can burst during live migration, image imports, or when guests crash and restart; correlate time windows with host maintenance.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, custom scripted input.\n• Ensure the following data sources are available: Syslog, systemd service status, libvirtd response time.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor libvirtd syslog output for errors. Create a scripted input that runs `virsh list` and measures response time — if it takes >5 seconds, libvirtd is likely overloaded. Also monitor the systemd service status: `systemctl is-active libvirtd`. Alert if libvirtd is not active or response time exceeds 10 seconds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"libvirtd\" (\"error\" OR \"warning\" OR \"failed\" OR \"timed out\")\n| bin _time span=5m\n| stats count as errors by host, _time\n| where errors > 5\n| table _time, host, errors\n```\n\nUnderstanding this SPL\n\n**Libvirt Daemon Health and Responsiveness** — The libvirtd daemon is the management layer for all KVM operations — VM start/stop, migration, storage, networking. If libvirtd hangs or crashes, no VM management operations are possible. Existing VMs keep running but become unmanageable. Detecting libvirtd health issues enables proactive restart before they cascade.\n\nDocumented **Data sources**: Syslog, systemd service status, libvirtd response time. **App/TA** (typical add-on context): `Splunk_TA_nix`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where errors > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Libvirt Daemon Health and Responsiveness**): table _time, host, errors\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status indicator (libvirtd per host), Line chart (response time), Events table (errors).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on libvirt Daemon Health and Responsiveness and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.3.17",
              "n": "Proxmox VE Cluster Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Node status, storage usage, and HA fence events for Proxmox VE clusters. Ensures all nodes are online, storage is healthy, and HA operations complete successfully. Critical for multi-node Proxmox deployments.",
              "t": "Custom (Proxmox API input)",
              "d": "Proxmox REST API (`/api2/json/cluster/status`), cluster resources, HA manager",
              "q": "index=virtualization sourcetype=\"proxmox_cluster_status\"\n| stats latest(node) as node, latest(status) as status, latest(quorum) as quorum, latest(name) as cluster_name by node\n| eval node_ok = if(status=\"online\", \"OK\", \"CRITICAL\")\n| where node_ok=\"CRITICAL\" OR quorum!=\"1\"\n| table cluster_name, node, status, quorum, node_ok",
              "m": "Create scripted input polling Proxmox API: `GET /api2/json/cluster/status` for node membership and quorum; `GET /api2/json/nodes/{node}/storage` for storage usage; `GET /api2/json/cluster/ha/status` for HA resources. Authenticate via API token or ticket. Run every 60 seconds. Alert on node offline, quorum loss, or storage >85% used. Correlate with Corosync logs for fence events.",
              "z": "Status grid (node health per cluster), Table (storage usage by node), Timeline (HA fence events).",
              "kfp": "KVM and scripted metrics may gap when collectors restart, SSH keys roll, or virsh is blocked by lockfiles; a single null interval is not always a full outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Proxmox API input).\n• Ensure the following data sources are available: Proxmox REST API (`/api2/json/cluster/status`), cluster resources, HA manager.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input polling Proxmox API: `GET /api2/json/cluster/status` for node membership and quorum; `GET /api2/json/nodes/{node}/storage` for storage usage; `GET /api2/json/cluster/ha/status` for HA resources. Authenticate via API token or ticket. Run every 60 seconds. Alert on node offline, quorum loss, or storage >85% used. Correlate with Corosync logs for fence events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=\"proxmox_cluster_status\"\n| stats latest(node) as node, latest(status) as status, latest(quorum) as quorum, latest(name) as cluster_name by node\n| eval node_ok = if(status=\"online\", \"OK\", \"CRITICAL\")\n| where node_ok=\"CRITICAL\" OR quorum!=\"1\"\n| table cluster_name, node, status, quorum, node_ok\n```\n\nUnderstanding this SPL\n\n**Proxmox VE Cluster Monitoring** — Node status, storage usage, and HA fence events for Proxmox VE clusters. Ensures all nodes are online, storage is healthy, and HA operations complete successfully. Critical for multi-node Proxmox deployments.\n\nDocumented **Data sources**: Proxmox REST API (`/api2/json/cluster/status`), cluster resources, HA manager. **App/TA** (typical add-on context): Custom (Proxmox API input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: proxmox_cluster_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=\"proxmox_cluster_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by node** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **node_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where node_ok=\"CRITICAL\" OR quorum!=\"1\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Proxmox VE Cluster Monitoring**): table cluster_name, node, status, quorum, node_ok\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (node health per cluster), Table (storage usage by node), Timeline (HA fence events).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on vE Cluster and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "proxmox"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 17,
            "none": 0
          }
        },
        {
          "i": "2.4",
          "n": "Cross-Platform Virtualization",
          "u": [
            {
              "i": "2.4.1",
              "n": "Guest OS End-of-Life Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "VMs running end-of-life operating systems no longer receive security patches, creating unmitigated vulnerabilities. Tracking guest OS versions across all hypervisors against vendor EOL dates enables proactive migration planning before support ends. Required for PCI DSS, HIPAA, and SOC 2 compliance.",
              "t": "`Splunk_TA_vmware`, `Splunk_TA_windows`, custom OS inventory",
              "d": "VM inventory from all hypervisors, OS EOL lookup table",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\"\n| stats latest(guest_os) as os_name by vm_name\n| append [search index=hyperv sourcetype=\"hyperv_vm_config\" | stats latest(os_name) as os_name by vm_name]\n| append [search index=virtualization sourcetype=kvm_guest_agent | stats latest(os_name) as os_name by vm_name]\n| lookup os_eol_dates os_name OUTPUT eol_date, eol_status\n| eval days_to_eol=round((strptime(eol_date, \"%Y-%m-%d\") - now()) / 86400, 0)\n| where days_to_eol < 180 OR eol_status=\"EOL\"\n| sort days_to_eol\n| table vm_name, os_name, eol_date, days_to_eol, eol_status",
              "m": "Collect guest OS information from all hypervisor platforms. Maintain a lookup table (`os_eol_dates.csv`) mapping OS names to vendor EOL dates (Microsoft, Red Hat, Canonical, etc.). Alert at 180 days before EOL (planning), 90 days (action required), and on any VM running an already-EOL OS. Generate quarterly reports for management.",
              "z": "Table (VM, OS, EOL date), Bar chart (VMs by EOL status), Timeline (upcoming EOL dates).",
              "kfp": "Citrix and NetScaler signals may reflect planned catalog publishes, autoscale, or ADC failover exercises; use delivery-group and change metadata before treating as a defect.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, `Splunk_TA_windows`, custom OS inventory.\n• Ensure the following data sources are available: VM inventory from all hypervisors, OS EOL lookup table.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect guest OS information from all hypervisor platforms. Maintain a lookup table (`os_eol_dates.csv`) mapping OS names to vendor EOL dates (Microsoft, Red Hat, Canonical, etc.). Alert at 180 days before EOL (planning), 90 days (action required), and on any VM running an already-EOL OS. Generate quarterly reports for management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\"\n| stats latest(guest_os) as os_name by vm_name\n| append [search index=hyperv sourcetype=\"hyperv_vm_config\" | stats latest(os_name) as os_name by vm_name]\n| append [search index=virtualization sourcetype=kvm_guest_agent | stats latest(os_name) as os_name by vm_name]\n| lookup os_eol_dates os_name OUTPUT eol_date, eol_status\n| eval days_to_eol=round((strptime(eol_date, \"%Y-%m-%d\") - now()) / 86400, 0)\n| where days_to_eol < 180 OR eol_status=\"EOL\"\n| sort days_to_eol\n| table vm_name, os_name, eol_date, days_to_eol, eol_status\n```\n\nUnderstanding this SPL\n\n**Guest OS End-of-Life Tracking** — VMs running end-of-life operating systems no longer receive security patches, creating unmitigated vulnerabilities. Tracking guest OS versions across all hypervisors against vendor EOL dates enables proactive migration planning before support ends. Required for PCI DSS, HIPAA, and SOC 2 compliance.\n\nDocumented **Data sources**: VM inventory from all hypervisors, OS EOL lookup table. **App/TA** (typical add-on context): `Splunk_TA_vmware`, `Splunk_TA_windows`, custom OS inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Appends rows from a subsearch with `append`.\n• Appends rows from a subsearch with `append`.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **days_to_eol** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_eol < 180 OR eol_status=\"EOL\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Guest OS End-of-Life Tracking**): table vm_name, os_name, eol_date, days_to_eol, eol_status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VM, OS, EOL date), Bar chart (VMs by EOL status), Timeline (upcoming EOL dates).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on guest OS End-of-Life and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware",
                "windows"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.4.2",
              "n": "VM Backup Coverage Validation",
              "c": "critical",
              "f": "intermediate",
              "v": "VMs without recent successful backups have no recovery point — a single failure causes permanent data loss. By comparing VM inventory across all hypervisors against backup job success records, this use case identifies VMs that have fallen through the cracks of the backup policy.",
              "t": "`Splunk_TA_vmware`, `Splunk_TA_windows`, backup vendor TA",
              "d": "VM inventory from all hypervisors, backup job logs (Veeam, Commvault, Cohesity, PBS)",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\" power_state=\"poweredOn\"\n| stats latest(vm_name) as vm_name by vm_name\n| append [search index=hyperv sourcetype=\"hyperv_vm_config\" state=\"Running\" | stats latest(vm_name) as vm_name by vm_name]\n| sort 0 vm_name, -_time\n| dedup vm_name\n| join type=left max=1 vm_name [search index=backup sourcetype=\"backup_jobs\" status=\"Success\" earliest=-48h | stats latest(_time) as last_backup, latest(status) as backup_status by vm_name]\n| eval backup_age_hours=if(isnotnull(last_backup), round((now()-last_backup)/3600, 0), 999)\n| where backup_age_hours > 48 OR isnull(last_backup)\n| sort -backup_age_hours\n| table vm_name, backup_status, last_backup, backup_age_hours",
              "m": "Combine VM inventory from all hypervisors with backup job results from your backup product. Left-join to find VMs with no matching backup job. Alert on VMs with no backup in >24 hours (for daily policy) or >48 hours (with buffer). Exclude development/test VMs via a lookup if appropriate. Run daily and send report to backup administrators.",
              "z": "Table (unprotected VMs), Single value (backup coverage %), Pie chart (backed up vs unprotected), Bar chart (backup age distribution).",
              "kfp": "Citrix and NetScaler signals may reflect planned catalog publishes, autoscale, or ADC failover exercises; use delivery-group and change metadata before treating as a defect.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, `Splunk_TA_windows`, backup vendor TA.\n• Ensure the following data sources are available: VM inventory from all hypervisors, backup job logs (Veeam, Commvault, Cohesity, PBS).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCombine VM inventory from all hypervisors with backup job results from your backup product. Left-join to find VMs with no matching backup job. Alert on VMs with no backup in >24 hours (for daily policy) or >48 hours (with buffer). Exclude development/test VMs via a lookup if appropriate. Run daily and send report to backup administrators.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\" power_state=\"poweredOn\"\n| stats latest(vm_name) as vm_name by vm_name\n| append [search index=hyperv sourcetype=\"hyperv_vm_config\" state=\"Running\" | stats latest(vm_name) as vm_name by vm_name]\n| sort 0 vm_name, -_time\n| dedup vm_name\n| join type=left max=1 vm_name [search index=backup sourcetype=\"backup_jobs\" status=\"Success\" earliest=-48h | stats latest(_time) as last_backup, latest(status) as backup_status by vm_name]\n| eval backup_age_hours=if(isnotnull(last_backup), round((now()-last_backup)/3600, 0), 999)\n| where backup_age_hours > 48 OR isnull(last_backup)\n| sort -backup_age_hours\n| table vm_name, backup_status, last_backup, backup_age_hours\n```\n\nUnderstanding this SPL\n\n**VM Backup Coverage Validation** — VMs without recent successful backups have no recovery point — a single failure causes permanent data loss. By comparing VM inventory across all hypervisors against backup job success records, this use case identifies VMs that have fallen through the cracks of the backup policy.\n\nDocumented **Data sources**: VM inventory from all hypervisors, backup job logs (Veeam, Commvault, Cohesity, PBS). **App/TA** (typical add-on context): `Splunk_TA_vmware`, `Splunk_TA_windows`, backup vendor TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Appends rows from a subsearch with `append`.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Removes duplicate values with `dedup` — pair with `sort` when order matters.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **backup_age_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where backup_age_hours > 48 OR isnull(last_backup)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VM Backup Coverage Validation**): table vm_name, backup_status, last_backup, backup_age_hours\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unprotected VMs), Single value (backup coverage %), Pie chart (backed up vs unprotected), Bar chart (backup age distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on backup Coverage Validation and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware",
                "windows"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.4.3",
              "n": "VM-to-Host Density Trending",
              "c": "medium",
              "f": "beginner",
              "v": "VM density (VMs per host) is a key capacity metric. Rising density indicates growing consolidation ratios that may exceed host capacity. Density spikes after HA failovers reveal hosts running at unsustainable loads. Trending density over time supports procurement planning and workload distribution decisions.",
              "t": "`Splunk_TA_vmware`, `Splunk_TA_windows`",
              "d": "VM inventory from all hypervisors",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\" power_state=\"poweredOn\"\n| stats dc(vm_name) as vm_count, sum(numCpu) as total_vcpus, sum(memoryMB) as total_mem_mb by host\n| eval total_mem_gb=round(total_mem_mb/1024, 0)\n| sort -vm_count\n| table host, vm_count, total_vcpus, total_mem_gb",
              "m": "Count powered-on VMs per host from inventory data. Track daily for trend analysis. Calculate vcpu-to-pcpu ratio and memory overcommit per host. Alert when any host exceeds your density threshold (e.g., >30 VMs, >4:1 vCPU ratio, or >1.5:1 memory overcommit). Useful after HA events to verify surviving hosts aren't overloaded.",
              "z": "Bar chart (VMs per host), Line chart (density trend over months), Table (host, VM count, ratios), Heatmap (density by cluster).",
              "kfp": "Citrix and NetScaler signals may reflect planned catalog publishes, autoscale, or ADC failover exercises; use delivery-group and change metadata before treating as a defect.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, `Splunk_TA_windows`.\n• Ensure the following data sources are available: VM inventory from all hypervisors.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCount powered-on VMs per host from inventory data. Track daily for trend analysis. Calculate vcpu-to-pcpu ratio and memory overcommit per host. Alert when any host exceeds your density threshold (e.g., >30 VMs, >4:1 vCPU ratio, or >1.5:1 memory overcommit). Useful after HA events to verify surviving hosts aren't overloaded.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\" power_state=\"poweredOn\"\n| stats dc(vm_name) as vm_count, sum(numCpu) as total_vcpus, sum(memoryMB) as total_mem_mb by host\n| eval total_mem_gb=round(total_mem_mb/1024, 0)\n| sort -vm_count\n| table host, vm_count, total_vcpus, total_mem_gb\n```\n\nUnderstanding this SPL\n\n**VM-to-Host Density Trending** — VM density (VMs per host) is a key capacity metric. Rising density indicates growing consolidation ratios that may exceed host capacity. Density spikes after HA failovers reveal hosts running at unsustainable loads. Trending density over time supports procurement planning and workload distribution decisions.\n\nDocumented **Data sources**: VM inventory from all hypervisors. **App/TA** (typical add-on context): `Splunk_TA_vmware`, `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total_mem_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VM-to-Host Density Trending**): table host, vm_count, total_vcpus, total_mem_gb\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (VMs per host), Line chart (density trend over months), Table (host, VM count, ratios), Heatmap (density by cluster).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on vM-to-Host Density Trending and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware",
                "windows"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.4.4",
              "n": "VM Provisioning Time Tracking",
              "c": "low",
              "f": "intermediate",
              "v": "Measures the time from VM creation request to operational state. Long provisioning times indicate process bottlenecks — slow template deployment, manual approval delays, storage provisioning issues, or network configuration problems. Supports ITSM service level tracking and infrastructure automation improvement.",
              "t": "`Splunk_TA_vmware`, ITSM TA",
              "d": "vCenter events, ITSM request logs",
              "q": "index=vmware sourcetype=\"vmware:events\" event_type=\"VmCreatedEvent\"\n| eval create_time=_time\n| join max=1 vm_name [search index=vmware sourcetype=\"vmware:events\" event_type=\"VmPoweredOnEvent\" | eval poweron_time=_time | table vm_name, poweron_time]\n| eval provision_minutes=round((poweron_time-create_time)/60, 1)\n| where provision_minutes > 0\n| stats avg(provision_minutes) as avg_min, median(provision_minutes) as median_min, max(provision_minutes) as max_min by datacenter\n| table datacenter, avg_min, median_min, max_min",
              "m": "Correlate VM creation events with first power-on events from vCenter. For full lifecycle tracking, also correlate with ITSM ticket creation time (when the request was submitted). Calculate time from request → approval → creation → power-on. Set SLA targets and alert when provisioning exceeds them.",
              "z": "Bar chart (average provisioning time by DC), Line chart (trend over time), Table (slowest provisions).",
              "kfp": "Citrix and NetScaler signals may reflect planned catalog publishes, autoscale, or ADC failover exercises; use delivery-group and change metadata before treating as a defect.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, ITSM TA.\n• Ensure the following data sources are available: vCenter events, ITSM request logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate VM creation events with first power-on events from vCenter. For full lifecycle tracking, also correlate with ITSM ticket creation time (when the request was submitted). Calculate time from request → approval → creation → power-on. Set SLA targets and alert when provisioning exceeds them.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:events\" event_type=\"VmCreatedEvent\"\n| eval create_time=_time\n| join max=1 vm_name [search index=vmware sourcetype=\"vmware:events\" event_type=\"VmPoweredOnEvent\" | eval poweron_time=_time | table vm_name, poweron_time]\n| eval provision_minutes=round((poweron_time-create_time)/60, 1)\n| where provision_minutes > 0\n| stats avg(provision_minutes) as avg_min, median(provision_minutes) as median_min, max(provision_minutes) as max_min by datacenter\n| table datacenter, avg_min, median_min, max_min\n```\n\nUnderstanding this SPL\n\n**VM Provisioning Time Tracking** — Measures the time from VM creation request to operational state. Long provisioning times indicate process bottlenecks — slow template deployment, manual approval delays, storage provisioning issues, or network configuration problems. Supports ITSM service level tracking and infrastructure automation improvement.\n\nDocumented **Data sources**: vCenter events, ITSM request logs. **App/TA** (typical add-on context): `Splunk_TA_vmware`, ITSM TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **create_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **provision_minutes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where provision_minutes > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by datacenter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **VM Provisioning Time Tracking**): table datacenter, avg_min, median_min, max_min\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (average provisioning time by DC), Line chart (trend over time), Table (slowest provisions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on provisioning Time and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.4.5",
              "n": "Virtualization License Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "VMware licenses are per-CPU, Hyper-V licenses are per-core, and Windows Server Datacenter vs Standard determines VM rights. Running more physical CPUs or cores than licensed risks audit penalties. Tracking socket/core counts against entitlements prevents costly true-up surprises.",
              "t": "`Splunk_TA_vmware`, `Splunk_TA_windows`, license lookup",
              "d": "Host inventory from all hypervisors, license entitlement lookup",
              "q": "index=vmware sourcetype=\"vmware:inv:hostsystem\"\n| stats latest(numCpuPkgs) as sockets, latest(numCpuCores) as cores, latest(version) as esxi_version by host, cluster\n| eval license_units=sockets\n| stats sum(license_units) as total_sockets, sum(cores) as total_cores, dc(host) as host_count by cluster\n| lookup license_entitlements cluster OUTPUT licensed_sockets, license_edition\n| eval compliant=if(total_sockets<=licensed_sockets, \"Yes\", \"No\")\n| table cluster, host_count, total_sockets, total_cores, licensed_sockets, license_edition, compliant",
              "m": "Collect host hardware inventory (socket count, core count) from all hypervisors. Maintain a lookup table of license entitlements per cluster/site. Compare actual vs entitled. Alert when actual exceeds entitled. Generate monthly compliance reports. Track license utilization ratio — under-utilized licenses may be reassignable.",
              "z": "Table (cluster, sockets, entitled, compliant), Gauge (license utilization), Bar chart (compliance by cluster).",
              "kfp": "Citrix and NetScaler signals may reflect planned catalog publishes, autoscale, or ADC failover exercises; use delivery-group and change metadata before treating as a defect.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, `Splunk_TA_windows`, license lookup.\n• Ensure the following data sources are available: Host inventory from all hypervisors, license entitlement lookup.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect host hardware inventory (socket count, core count) from all hypervisors. Maintain a lookup table of license entitlements per cluster/site. Compare actual vs entitled. Alert when actual exceeds entitled. Generate monthly compliance reports. Track license utilization ratio — under-utilized licenses may be reassignable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:hostsystem\"\n| stats latest(numCpuPkgs) as sockets, latest(numCpuCores) as cores, latest(version) as esxi_version by host, cluster\n| eval license_units=sockets\n| stats sum(license_units) as total_sockets, sum(cores) as total_cores, dc(host) as host_count by cluster\n| lookup license_entitlements cluster OUTPUT licensed_sockets, license_edition\n| eval compliant=if(total_sockets<=licensed_sockets, \"Yes\", \"No\")\n| table cluster, host_count, total_sockets, total_cores, licensed_sockets, license_edition, compliant\n```\n\nUnderstanding this SPL\n\n**Virtualization License Compliance** — VMware licenses are per-CPU, Hyper-V licenses are per-core, and Windows Server Datacenter vs Standard determines VM rights. Running more physical CPUs or cores than licensed risks audit penalties. Tracking socket/core counts against entitlements prevents costly true-up surprises.\n\nDocumented **Data sources**: Host inventory from all hypervisors, license entitlement lookup. **App/TA** (typical add-on context): `Splunk_TA_vmware`, `Splunk_TA_windows`, license lookup. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:hostsystem. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:hostsystem\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **license_units** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Virtualization License Compliance**): table cluster, host_count, total_sockets, total_cores, licensed_sockets, license_edition, compliant\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cluster, sockets, entitled, compliant), Gauge (license utilization), Bar chart (compliance by cluster).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on virtualization License and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware",
                "windows"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.4.6",
              "n": "Multi-Hypervisor Fleet Inventory",
              "c": "medium",
              "f": "intermediate",
              "v": "Organizations running multiple hypervisors need a unified view of all VMs regardless of platform. A consolidated inventory enables accurate capacity planning, consistent policy enforcement, and complete asset tracking. Without it, VMs on different platforms become silos with inconsistent governance.",
              "t": "`Splunk_TA_vmware`, `Splunk_TA_windows`, custom KVM inputs",
              "d": "VM inventory from VMware, Hyper-V, KVM/Proxmox",
              "q": "index=vmware sourcetype=\"vmware:inv:vm\"\n| eval platform=\"VMware\", vcpus=numCpu, mem_gb=round(memoryMB/1024,0)\n| table vm_name, platform, host, vcpus, mem_gb, power_state, guest_os\n| append [search index=hyperv sourcetype=\"hyperv_vm_config\" | eval platform=\"Hyper-V\", mem_gb=round(memory_mb/1024,0) | table vm_name, platform, host, vcpus, mem_gb, state, os_name | rename state as power_state, os_name as guest_os]\n| append [search index=virtualization sourcetype=kvm_capacity | eval platform=\"KVM\", mem_gb=round(vm_memory_mb/1024,0) | table vm_name, platform, host, vm_vcpus, mem_gb, power_state, guest_os | rename vm_vcpus as vcpus]\n| stats latest(platform) as platform, latest(host) as host, latest(vcpus) as vcpus, latest(mem_gb) as mem_gb, latest(power_state) as state, latest(guest_os) as os by vm_name\n| sort platform, vm_name\n| table vm_name, platform, host, vcpus, mem_gb, state, os",
              "m": "Normalize VM inventory fields across all hypervisor platforms into a common schema (vm_name, platform, host, vcpus, mem_gb, power_state, guest_os). Use a scheduled search to populate a summary index or KV store for fast lookups. Enrich with CMDB data (owner, department, environment) via lookup. Generate weekly fleet reports showing total VM count, resource allocation, and platform distribution.",
              "z": "Table (unified VM inventory), Pie chart (VMs by platform), Bar chart (resource allocation by platform), Treemap (VMs by department and platform).",
              "kfp": "Citrix and NetScaler signals may reflect planned catalog publishes, autoscale, or ADC failover exercises; use delivery-group and change metadata before treating as a defect.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Compute_Inventory](https://docs.splunk.com/Documentation/CIM/latest/User/Compute_Inventory)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, `Splunk_TA_windows`, custom KVM inputs.\n• Ensure the following data sources are available: VM inventory from VMware, Hyper-V, KVM/Proxmox.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize VM inventory fields across all hypervisor platforms into a common schema (vm_name, platform, host, vcpus, mem_gb, power_state, guest_os). Use a scheduled search to populate a summary index or KV store for fast lookups. Enrich with CMDB data (owner, department, environment) via lookup. Generate weekly fleet reports showing total VM count, resource allocation, and platform distribution.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:inv:vm\"\n| eval platform=\"VMware\", vcpus=numCpu, mem_gb=round(memoryMB/1024,0)\n| table vm_name, platform, host, vcpus, mem_gb, power_state, guest_os\n| append [search index=hyperv sourcetype=\"hyperv_vm_config\" | eval platform=\"Hyper-V\", mem_gb=round(memory_mb/1024,0) | table vm_name, platform, host, vcpus, mem_gb, state, os_name | rename state as power_state, os_name as guest_os]\n| append [search index=virtualization sourcetype=kvm_capacity | eval platform=\"KVM\", mem_gb=round(vm_memory_mb/1024,0) | table vm_name, platform, host, vm_vcpus, mem_gb, power_state, guest_os | rename vm_vcpus as vcpus]\n| stats latest(platform) as platform, latest(host) as host, latest(vcpus) as vcpus, latest(mem_gb) as mem_gb, latest(power_state) as state, latest(guest_os) as os by vm_name\n| sort platform, vm_name\n| table vm_name, platform, host, vcpus, mem_gb, state, os\n```\n\nUnderstanding this SPL\n\n**Multi-Hypervisor Fleet Inventory** — Organizations running multiple hypervisors need a unified view of all VMs regardless of platform. A consolidated inventory enables accurate capacity planning, consistent policy enforcement, and complete asset tracking. Without it, VMs on different platforms become silos with inconsistent governance.\n\nDocumented **Data sources**: VM inventory from VMware, Hyper-V, KVM/Proxmox. **App/TA** (typical add-on context): `Splunk_TA_vmware`, `Splunk_TA_windows`, custom KVM inputs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:inv:vm. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:inv:vm\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **platform** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Multi-Hypervisor Fleet Inventory**): table vm_name, platform, host, vcpus, mem_gb, power_state, guest_os\n• Appends rows from a subsearch with `append`.\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by vm_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Multi-Hypervisor Fleet Inventory**): table vm_name, platform, host, vcpus, mem_gb, state, os\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Compute_Inventory.Hypervisor by Hypervisor.dest, Hypervisor.status | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Multi-Hypervisor Fleet Inventory** — Organizations running multiple hypervisors need a unified view of all VMs regardless of platform. A consolidated inventory enables accurate capacity planning, consistent policy enforcement, and complete asset tracking. Without it, VMs on different platforms become silos with inconsistent governance.\n\nDocumented **Data sources**: VM inventory from VMware, Hyper-V, KVM/Proxmox. **App/TA** (typical add-on context): `Splunk_TA_vmware`, `Splunk_TA_windows`, custom KVM inputs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Compute_Inventory.Hypervisor` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unified VM inventory), Pie chart (VMs by platform), Bar chart (resource allocation by platform), Treemap (VMs by department and platform).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on multi-Hypervisor Fleet and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "Compute_Inventory"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Compute_Inventory.Hypervisor by Hypervisor.dest, Hypervisor.status | sort - count",
              "e": [
                "vmware",
                "windows"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.4.7",
              "n": "oVirt / RHV Data Center Health",
              "c": "medium",
              "f": "intermediate",
              "v": "Data center and storage domain operational status for oVirt and Red Hat Virtualization (RHV). Detects storage domain maintenance mode, data center connectivity issues, and storage domain activation failures that prevent VM operations.",
              "t": "Custom (oVirt REST API input)",
              "d": "oVirt REST API (`/api/datacenters`, `/api/storagedomains`)",
              "q": "index=virtualization sourcetype=\"ovirt_datacenter\"\n| stats latest(status) as dc_status, latest(local) as local_dc, latest(name) as dc_name by id\n| where dc_status!=\"up\"\n| table dc_name, dc_status, local_dc",
              "m": "Create scripted input polling oVirt API: `GET /api/datacenters` and `GET /api/storagedomains`. Authenticate via oVirt SSO (username/password or token). Parse status (up/down/maintenance), active flag, and available space. Run every 5 minutes. Alert when data center status != \"up\" or storage domain status != \"active\". Create separate sourcetypes for datacenter and storagedomain events. Monitor storage domain available percentage for capacity. Correlate with oVirt engine logs for root cause.",
              "z": "Status grid (data centers and storage domains), Table (operational status), Gauge (storage domain capacity).",
              "kfp": "Citrix and NetScaler signals may reflect planned catalog publishes, autoscale, or ADC failover exercises; use delivery-group and change metadata before treating as a defect.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (oVirt REST API input).\n• Ensure the following data sources are available: oVirt REST API (`/api/datacenters`, `/api/storagedomains`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input polling oVirt API: `GET /api/datacenters` and `GET /api/storagedomains`. Authenticate via oVirt SSO (username/password or token). Parse status (up/down/maintenance), active flag, and available space. Run every 5 minutes. Alert when data center status != \"up\" or storage domain status != \"active\". Create separate sourcetypes for datacenter and storagedomain events. Monitor storage domain available percentage for capacity. Correlate with oVirt engine logs for root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=\"ovirt_datacenter\"\n| stats latest(status) as dc_status, latest(local) as local_dc, latest(name) as dc_name by id\n| where dc_status!=\"up\"\n| table dc_name, dc_status, local_dc\n```\n\nUnderstanding this SPL\n\n**oVirt / RHV Data Center Health** — Data center and storage domain operational status for oVirt and Red Hat Virtualization (RHV). Detects storage domain maintenance mode, data center connectivity issues, and storage domain activation failures that prevent VM operations.\n\nDocumented **Data sources**: oVirt REST API (`/api/datacenters`, `/api/storagedomains`). **App/TA** (typical add-on context): Custom (oVirt REST API input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: ovirt_datacenter. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=\"ovirt_datacenter\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where dc_status!=\"up\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **oVirt / RHV Data Center Health**): table dc_name, dc_status, local_dc\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (data centers and storage domains), Table (operational status), Gauge (storage domain capacity).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on oVirt / RHV Data Center Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "ovirt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 7,
            "none": 0
          }
        },
        {
          "i": "2.5",
          "n": "End-User Computing / VDI Endpoints",
          "u": [
            {
              "i": "2.5.1",
              "n": "IGEL Device Fleet Online/Offline Status",
              "c": "critical",
              "f": "beginner",
              "v": "IGEL thin clients are the primary interface for VDI users in healthcare, finance, and enterprise environments. When a device goes offline, the user cannot access virtual desktops or published applications. Monitoring fleet-wide online/offline ratios and identifying persistently offline devices enables rapid remediation before users are affected at scale.",
              "t": "Custom scripted input polling IGEL UMS REST API (`GET /v3/thinclients`)",
              "d": "`index=endpoint` `sourcetype=\"igel:ums:inventory\"` fields `device_name`, `online_status`, `last_ip`, `site`, `directory_path`",
              "q": "index=endpoint sourcetype=\"igel:ums:inventory\"\n| stats latest(online_status) as status, latest(last_ip) as last_ip, latest(directory_path) as site by device_name\n| eval status_label=if(status=\"true\", \"Online\", \"Offline\")\n| stats count as total, sum(eval(if(status=\"true\",1,0))) as online_count by site\n| eval offline_count=total-online_count\n| eval online_pct=round(online_count/total*100,1)\n| table site, total, online_count, offline_count, online_pct\n| sort -offline_count",
              "m": "Create a scripted input that polls `GET /v3/thinclients` from the IGEL UMS REST API (IMI v3) every 5 minutes. Authenticate using a dedicated UMS service account with read-only permissions. Parse each device's `unitID`, `name`, `lastIP`, `movedToBin`, and online status. Index as JSON events. Group by UMS directory path (used as site/location). Alert when fleet-wide online percentage drops below 90% or when more than 10 devices at a single site go offline simultaneously.",
              "z": "Single value (fleet online %), Table (sites ranked by offline count), Status grid (device online/offline by site).",
              "kfp": "Proxmox and hypervisor health can go yellow during Ceph rebalancing, zfs scrubs, or one-node maintenance; check cluster quorum and job progress before reassigning the alert.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input polling IGEL UMS REST API (`GET /v3/thinclients`).\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=\"igel:ums:inventory\"` fields `device_name`, `online_status`, `last_ip`, `site`, `directory_path`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that polls `GET /v3/thinclients` from the IGEL UMS REST API (IMI v3) every 5 minutes. Authenticate using a dedicated UMS service account with read-only permissions. Parse each device's `unitID`, `name`, `lastIP`, `movedToBin`, and online status. Index as JSON events. Group by UMS directory path (used as site/location). Alert when fleet-wide online percentage drops below 90% or when more than 10 devices at a single site go offline simultaneously.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"igel:ums:inventory\"\n| stats latest(online_status) as status, latest(last_ip) as last_ip, latest(directory_path) as site by device_name\n| eval status_label=if(status=\"true\", \"Online\", \"Offline\")\n| stats count as total, sum(eval(if(status=\"true\",1,0))) as online_count by site\n| eval offline_count=total-online_count\n| eval online_pct=round(online_count/total*100,1)\n| table site, total, online_count, offline_count, online_pct\n| sort -offline_count\n```\n\nUnderstanding this SPL\n\n**IGEL Device Fleet Online/Offline Status** — IGEL thin clients are the primary interface for VDI users in healthcare, finance, and enterprise environments. When a device goes offline, the user cannot access virtual desktops or published applications. Monitoring fleet-wide online/offline ratios and identifying persistently offline devices enables rapid remediation before users are affected at scale.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=\"igel:ums:inventory\"` fields `device_name`, `online_status`, `last_ip`, `site`, `directory_path`. **App/TA** (typical add-on context): Custom scripted input polling IGEL UMS REST API (`GET /v3/thinclients`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: igel:ums:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"igel:ums:inventory\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **status_label** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **offline_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **online_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **IGEL Device Fleet Online/Offline Status**): table site, total, online_count, offline_count, online_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (fleet online %), Table (sites ranked by offline count), Status grid (device online/offline by site).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iGEL Device Fleet Online/Offline and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.5.2",
              "n": "IGEL Firmware Version Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Running outdated or unapproved IGEL OS firmware exposes endpoints to known vulnerabilities and breaks standardized VDI session configurations. Tracking firmware versions across the fleet against an approved baseline ensures compliance with patch policies and simplifies troubleshooting by eliminating version drift as a variable.",
              "t": "Custom scripted input polling IGEL UMS REST API (`GET /v3/thinclients?facets=details`, `GET /v3/firmwares`)",
              "d": "`index=endpoint` `sourcetype=\"igel:ums:inventory\"` fields `device_name`, `firmware_id`, `firmware_version`, `product_name`, `directory_path`",
              "q": "index=endpoint sourcetype=\"igel:ums:inventory\"\n| stats latest(firmware_id) as fw_id, latest(firmware_version) as fw_version, latest(device_name) as device_name by unit_id\n| lookup igel_approved_firmware fw_version OUTPUT approved, target_version\n| eval compliant=if(approved=\"yes\", \"Compliant\", \"Non-Compliant\")\n| stats count as device_count by fw_version, compliant, target_version\n| sort -device_count\n| table fw_version, compliant, target_version, device_count",
              "m": "Poll `GET /v3/thinclients?facets=details` to retrieve firmware IDs per device, and `GET /v3/firmwares` to resolve firmware IDs to version strings and product names. Maintain a lookup table (`igel_approved_firmware.csv`) with columns `fw_version`, `approved`, `target_version` mapping each known firmware version to its compliance status. Run the lookup enrichment as a scheduled search daily. Alert when non-compliant device percentage exceeds 20% or when any device runs a firmware version flagged as critical-vulnerability.",
              "z": "Pie chart (compliant vs non-compliant), Table (firmware versions with device counts), Single value (compliance %).",
              "kfp": "Proxmox and hypervisor health can go yellow during Ceph rebalancing, zfs scrubs, or one-node maintenance; check cluster quorum and job progress before reassigning the alert.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input polling IGEL UMS REST API (`GET /v3/thinclients?facets=details`, `GET /v3/firmwares`).\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=\"igel:ums:inventory\"` fields `device_name`, `firmware_id`, `firmware_version`, `product_name`, `directory_path`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET /v3/thinclients?facets=details` to retrieve firmware IDs per device, and `GET /v3/firmwares` to resolve firmware IDs to version strings and product names. Maintain a lookup table (`igel_approved_firmware.csv`) with columns `fw_version`, `approved`, `target_version` mapping each known firmware version to its compliance status. Run the lookup enrichment as a scheduled search daily. Alert when non-compliant device percentage exceeds 20% or when any device runs a firmware version flagged a…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"igel:ums:inventory\"\n| stats latest(firmware_id) as fw_id, latest(firmware_version) as fw_version, latest(device_name) as device_name by unit_id\n| lookup igel_approved_firmware fw_version OUTPUT approved, target_version\n| eval compliant=if(approved=\"yes\", \"Compliant\", \"Non-Compliant\")\n| stats count as device_count by fw_version, compliant, target_version\n| sort -device_count\n| table fw_version, compliant, target_version, device_count\n```\n\nUnderstanding this SPL\n\n**IGEL Firmware Version Compliance** — Running outdated or unapproved IGEL OS firmware exposes endpoints to known vulnerabilities and breaks standardized VDI session configurations. Tracking firmware versions across the fleet against an approved baseline ensures compliance with patch policies and simplifies troubleshooting by eliminating version drift as a variable.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=\"igel:ums:inventory\"` fields `device_name`, `firmware_id`, `firmware_version`, `product_name`, `directory_path`. **App/TA** (typical add-on context): Custom scripted input polling IGEL UMS REST API (`GET /v3/thinclients?facets=details`, `GET /v3/firmwares`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: igel:ums:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"igel:ums:inventory\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by unit_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by fw_version, compliant, target_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **IGEL Firmware Version Compliance**): table fw_version, compliant, target_version, device_count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (compliant vs non-compliant), Table (firmware versions with device counts), Single value (compliance %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iGEL Firmware Version and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.5.3",
              "n": "IGEL UMS Server Health Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "The IGEL UMS server is the central management plane for all IGEL endpoints. If UMS goes down or enters an error state, administrators cannot push policies, update firmware, or manage device configurations. Monitoring the built-in health endpoint provides immediate alerting on database connectivity failures, HA issues, or service degradation.",
              "t": "Custom scripted input polling UMS check-status endpoint",
              "d": "`index=endpoint` `sourcetype=\"igel:ums:health\"` fields `ums_server`, `status`, `message`",
              "q": "index=endpoint sourcetype=\"igel:ums:health\"\n| stats latest(status) as current_status, latest(message) as message, latest(_time) as last_check by ums_server\n| eval status_age_min=round((now()-last_check)/60,0)\n| where current_status!=\"ok\" OR status_age_min > 5\n| table ums_server, current_status, message, status_age_min",
              "m": "Create a scripted input that polls `https://[server]:[port]/ums/check-status` every 60 seconds. The endpoint returns JSON with a `status` field (values: `init`, `ok`, `warn`, `err`) and optional `message` describing the issue. Parse the response and index as events. Alert immediately on `err` status (database connection failure, device communication port not ready). Alert on `warn` status (HA update mode, cloud gateway disconnection, certificate sync issues). Also alert if no health check event has been received in 5 minutes (endpoint unreachable).",
              "z": "Single value (current status with color coding), Timeline (status changes over time), Table (all UMS servers with status).",
              "kfp": "Proxmox and hypervisor health can go yellow during Ceph rebalancing, zfs scrubs, or one-node maintenance; check cluster quorum and job progress before reassigning the alert.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input polling UMS check-status endpoint.\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=\"igel:ums:health\"` fields `ums_server`, `status`, `message`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that polls `https://[server]:[port]/ums/check-status` every 60 seconds. The endpoint returns JSON with a `status` field (values: `init`, `ok`, `warn`, `err`) and optional `message` describing the issue. Parse the response and index as events. Alert immediately on `err` status (database connection failure, device communication port not ready). Alert on `warn` status (HA update mode, cloud gateway disconnection, certificate sync issues). Also alert if no health check event …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"igel:ums:health\"\n| stats latest(status) as current_status, latest(message) as message, latest(_time) as last_check by ums_server\n| eval status_age_min=round((now()-last_check)/60,0)\n| where current_status!=\"ok\" OR status_age_min > 5\n| table ums_server, current_status, message, status_age_min\n```\n\nUnderstanding this SPL\n\n**IGEL UMS Server Health Monitoring** — The IGEL UMS server is the central management plane for all IGEL endpoints. If UMS goes down or enters an error state, administrators cannot push policies, update firmware, or manage device configurations. Monitoring the built-in health endpoint provides immediate alerting on database connectivity failures, HA issues, or service degradation.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=\"igel:ums:health\"` fields `ums_server`, `status`, `message`. **App/TA** (typical add-on context): Custom scripted input polling UMS check-status endpoint. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: igel:ums:health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"igel:ums:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ums_server** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **status_age_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where current_status!=\"ok\" OR status_age_min > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **IGEL UMS Server Health Monitoring**): table ums_server, current_status, message, status_age_min\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Single value (current status with color coding), Timeline (status changes over time), Table (all UMS servers with status).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iGEL UMS Server Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.5.4",
              "n": "IGEL Device Heartbeat Loss Detection",
              "c": "high",
              "f": "intermediate",
              "v": "IGEL OS 12 devices send periodic heartbeat signals to the UMS server to report operational status. When heartbeats stop, the device may be powered off, network-disconnected, or experiencing a crash loop. Detecting heartbeat loss within a configurable window enables proactive remediation before users report issues at shift start.",
              "t": "Custom scripted input polling IGEL UMS REST API (`GET /v3/thinclients?facets=details`)",
              "d": "`index=endpoint` `sourcetype=\"igel:ums:inventory\"` fields `device_name`, `last_contact`, `directory_path`, `last_ip`",
              "q": "index=endpoint sourcetype=\"igel:ums:inventory\"\n| stats latest(last_contact) as last_contact, latest(last_ip) as last_ip, latest(directory_path) as site by device_name\n| eval contact_epoch=strptime(last_contact, \"%Y-%m-%dT%H:%M:%S\")\n| eval hours_since_contact=round((now()-contact_epoch)/3600, 1)\n| where hours_since_contact > 4\n| sort -hours_since_contact\n| table device_name, site, last_ip, last_contact, hours_since_contact",
              "m": "Poll the UMS API with `facets=details` to retrieve `lastContact` timestamps per device. Convert to epoch and compare against current time. Devices that have not contacted UMS within the configured threshold (default 4 hours, adjust for shift patterns) are flagged. Exclude devices in the UMS recycle bin (`movedToBin=true`). Correlate with site/directory to identify location-specific network outages. Trigger escalation if more than 5 devices at the same site lose heartbeat simultaneously.",
              "z": "Table (stale devices sorted by hours since contact), Bar chart (devices per site with lost heartbeat), Single value (total devices with lost heartbeat).",
              "kfp": "Proxmox and hypervisor health can go yellow during Ceph rebalancing, zfs scrubs, or one-node maintenance; check cluster quorum and job progress before reassigning the alert.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input polling IGEL UMS REST API (`GET /v3/thinclients?facets=details`).\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=\"igel:ums:inventory\"` fields `device_name`, `last_contact`, `directory_path`, `last_ip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll the UMS API with `facets=details` to retrieve `lastContact` timestamps per device. Convert to epoch and compare against current time. Devices that have not contacted UMS within the configured threshold (default 4 hours, adjust for shift patterns) are flagged. Exclude devices in the UMS recycle bin (`movedToBin=true`). Correlate with site/directory to identify location-specific network outages. Trigger escalation if more than 5 devices at the same site lose heartbeat simultaneously.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"igel:ums:inventory\"\n| stats latest(last_contact) as last_contact, latest(last_ip) as last_ip, latest(directory_path) as site by device_name\n| eval contact_epoch=strptime(last_contact, \"%Y-%m-%dT%H:%M:%S\")\n| eval hours_since_contact=round((now()-contact_epoch)/3600, 1)\n| where hours_since_contact > 4\n| sort -hours_since_contact\n| table device_name, site, last_ip, last_contact, hours_since_contact\n```\n\nUnderstanding this SPL\n\n**IGEL Device Heartbeat Loss Detection** — IGEL OS 12 devices send periodic heartbeat signals to the UMS server to report operational status. When heartbeats stop, the device may be powered off, network-disconnected, or experiencing a crash loop. Detecting heartbeat loss within a configurable window enables proactive remediation before users report issues at shift start.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=\"igel:ums:inventory\"` fields `device_name`, `last_contact`, `directory_path`, `last_ip`. **App/TA** (typical add-on context): Custom scripted input polling IGEL UMS REST API (`GET /v3/thinclients?facets=details`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: igel:ums:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"igel:ums:inventory\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **contact_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **hours_since_contact** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours_since_contact > 4` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **IGEL Device Heartbeat Loss Detection**): table device_name, site, last_ip, last_contact, hours_since_contact\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale devices sorted by hours since contact), Bar chart (devices per site with lost heartbeat), Single value (total devices with lost heartbeat).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iGEL Device Heartbeat Loss Detection and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.5.5",
              "n": "IGEL OS Endpoint Syslog Error Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "IGEL OS endpoints forward syslog messages via rsyslog with TLS encryption to centralized collectors. Monitoring for error and critical severity messages across the fleet surfaces hardware failures, driver issues, network connectivity problems, and application crashes that users may not report until they become workflow-blocking.",
              "t": "Splunk syslog input (TCP/TLS) receiving IGEL OS rsyslog",
              "d": "`index=endpoint` `sourcetype=\"igel:os:syslog\"` fields `host`, `severity`, `facility`, `process`, `message`",
              "q": "index=endpoint sourcetype=\"igel:os:syslog\" (severity=\"err\" OR severity=\"crit\" OR severity=\"alert\" OR severity=\"emerg\")\n| bin _time span=1h\n| stats count as error_count, dc(host) as affected_devices, values(process) as processes by severity, _time\n| where error_count > 10\n| table _time, severity, error_count, affected_devices, processes",
              "m": "Configure IGEL OS syslog forwarding via UMS profile: System > Logging > Remote mode = Client, with TLS enabled and CA certificate at `/wfs/ca-certs/ca.pem`. Point to Splunk TCP/TLS input on port 6514. Create a props.conf entry for `sourcetype=igel:os:syslog` to parse syslog priority into `severity` and `facility` fields. Alert on cluster patterns (same error across many devices = systemic issue, repeated errors on one device = hardware fault). Exclude known benign messages via a lookup filter.",
              "z": "Timechart (error count by severity), Table (top errors by frequency), Bar chart (affected devices by error type).",
              "kfp": "Proxmox and hypervisor health can go yellow during Ceph rebalancing, zfs scrubs, or one-node maintenance; check cluster quorum and job progress before reassigning the alert.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk syslog input (TCP/TLS) receiving IGEL OS rsyslog.\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=\"igel:os:syslog\"` fields `host`, `severity`, `facility`, `process`, `message`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure IGEL OS syslog forwarding via UMS profile: System > Logging > Remote mode = Client, with TLS enabled and CA certificate at `/wfs/ca-certs/ca.pem`. Point to Splunk TCP/TLS input on port 6514. Create a props.conf entry for `sourcetype=igel:os:syslog` to parse syslog priority into `severity` and `facility` fields. Alert on cluster patterns (same error across many devices = systemic issue, repeated errors on one device = hardware fault). Exclude known benign messages via a lookup filter.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"igel:os:syslog\" (severity=\"err\" OR severity=\"crit\" OR severity=\"alert\" OR severity=\"emerg\")\n| bin _time span=1h\n| stats count as error_count, dc(host) as affected_devices, values(process) as processes by severity, _time\n| where error_count > 10\n| table _time, severity, error_count, affected_devices, processes\n```\n\nUnderstanding this SPL\n\n**IGEL OS Endpoint Syslog Error Monitoring** — IGEL OS endpoints forward syslog messages via rsyslog with TLS encryption to centralized collectors. Monitoring for error and critical severity messages across the fleet surfaces hardware failures, driver issues, network connectivity problems, and application crashes that users may not report until they become workflow-blocking.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=\"igel:os:syslog\"` fields `host`, `severity`, `facility`, `process`, `message`. **App/TA** (typical add-on context): Splunk syslog input (TCP/TLS) receiving IGEL OS rsyslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: igel:os:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"igel:os:syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by severity, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where error_count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **IGEL OS Endpoint Syslog Error Monitoring**): table _time, severity, error_count, affected_devices, processes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (error count by severity), Table (top errors by frequency), Bar chart (affected devices by error type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iGEL OS Endpoint Syslog Error and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.5.6",
              "n": "IGEL UMS Security Audit Log Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "IGEL UMS security audit logs capture critical administrative actions: user logins, failed authentication, password changes, device policy assignments, configuration modifications, and administrator account lifecycle events. Monitoring these events is essential for detecting unauthorized administrative access, policy tampering, and insider threats targeting the endpoint management plane.",
              "t": "Splunk Universal Forwarder monitoring UMS security log files",
              "d": "`index=endpoint` `sourcetype=\"igel:ums:security\"` fields `source_tag`, `event_type`, `user`, `target`, `result`, `detail`",
              "q": "index=endpoint sourcetype=\"igel:ums:security\"\n| eval event_category=case(\n    match(event_type, \"(?i)logon|login|logoff|authentication\"), \"Authentication\",\n    match(event_type, \"(?i)password\"), \"Password Change\",\n    match(event_type, \"(?i)assignment|profile|policy\"), \"Policy Change\",\n    match(event_type, \"(?i)account|user.*creat|user.*delet\"), \"Account Lifecycle\",\n    match(event_type, \"(?i)shutdown|restart\"), \"Service Lifecycle\",\n    1=1, \"Other\"\n  )\n| stats count by event_category, source_tag, result\n| sort -count\n| table event_category, source_tag, result, count",
              "m": "Deploy a Splunk Universal Forwarder on the UMS server (Windows or Linux). Monitor the security log files: `ums-server-security.log`, `ums-admin-security.log`, `wums-app-security.log`. Enable remote security logging in UMS Administration > Global Configuration > Logging. Parse events using source tags (`UMS-Server`, `ICG`, `IMI`, `UMS-Webapp`). Alert on: failed login attempts exceeding 5 within 10 minutes, administrator account creation/deletion, device factory reset commands, and off-hours policy modifications.",
              "z": "Bar chart (events by category), Timeline (authentication events), Table (failed logins by user and source IP).",
              "kfp": "Proxmox and hypervisor health can go yellow during Ceph rebalancing, zfs scrubs, or one-node maintenance; check cluster quorum and job progress before reassigning the alert.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder monitoring UMS security log files.\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=\"igel:ums:security\"` fields `source_tag`, `event_type`, `user`, `target`, `result`, `detail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy a Splunk Universal Forwarder on the UMS server (Windows or Linux). Monitor the security log files: `ums-server-security.log`, `ums-admin-security.log`, `wums-app-security.log`. Enable remote security logging in UMS Administration > Global Configuration > Logging. Parse events using source tags (`UMS-Server`, `ICG`, `IMI`, `UMS-Webapp`). Alert on: failed login attempts exceeding 5 within 10 minutes, administrator account creation/deletion, device factory reset commands, and off-hours polic…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"igel:ums:security\"\n| eval event_category=case(\n    match(event_type, \"(?i)logon|login|logoff|authentication\"), \"Authentication\",\n    match(event_type, \"(?i)password\"), \"Password Change\",\n    match(event_type, \"(?i)assignment|profile|policy\"), \"Policy Change\",\n    match(event_type, \"(?i)account|user.*creat|user.*delet\"), \"Account Lifecycle\",\n    match(event_type, \"(?i)shutdown|restart\"), \"Service Lifecycle\",\n    1=1, \"Other\"\n  )\n| stats count by event_category, source_tag, result\n| sort -count\n| table event_category, source_tag, result, count\n```\n\nUnderstanding this SPL\n\n**IGEL UMS Security Audit Log Monitoring** — IGEL UMS security audit logs capture critical administrative actions: user logins, failed authentication, password changes, device policy assignments, configuration modifications, and administrator account lifecycle events. Monitoring these events is essential for detecting unauthorized administrative access, policy tampering, and insider threats targeting the endpoint management plane.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=\"igel:ums:security\"` fields `source_tag`, `event_type`, `user`, `target`, `result`, `detail`. **App/TA** (typical add-on context): Splunk Universal Forwarder monitoring UMS security log files. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: igel:ums:security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"igel:ums:security\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **event_category** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by event_category, source_tag, result** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **IGEL UMS Security Audit Log Monitoring**): table event_category, source_tag, result, count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IGEL UMS Security Audit Log Monitoring** — IGEL UMS security audit logs capture critical administrative actions: user logins, failed authentication, password changes, device policy assignments, configuration modifications, and administrator account lifecycle events. Monitoring these events is essential for detecting unauthorized administrative access, policy tampering, and insider threats targeting the endpoint management plane.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=\"igel:ums:security\"` fields `source_tag`, `event_type`, `user`, `target`, `result`, `detail`. **App/TA** (typical add-on context): Splunk Universal Forwarder monitoring UMS security log files. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by category), Timeline (authentication events), Table (failed logins by user and source IP).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iGEL UMS Security Audit Log and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Security",
                "Audit"
              ],
              "a": [
                "Authentication",
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.5.7",
              "n": "IGEL Device Resource Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "IGEL thin clients have constrained hardware resources (CPU, memory, flash storage). Monitoring resource utilization across the fleet identifies devices that are under-provisioned for their workload, approaching flash storage capacity, or experiencing performance issues that degrade the VDI user experience. Proactive capacity trending prevents user complaints and supports hardware refresh planning.",
              "t": "Custom scripted input polling IGEL UMS REST API (`GET /v3/thinclients?facets=details`)",
              "d": "`index=endpoint` `sourcetype=\"igel:ums:inventory\"` fields `device_name`, `cpu_speed_mhz`, `mem_size_mb`, `flash_size_mb`, `battery_level`, `network_speed`",
              "q": "index=endpoint sourcetype=\"igel:ums:inventory\"\n| stats latest(cpu_speed_mhz) as cpu_mhz, latest(mem_size_mb) as mem_mb, latest(flash_size_mb) as flash_mb, latest(battery_level) as battery, latest(device_name) as device_name by unit_id\n| eval mem_tier=case(mem_mb<2048, \"Under 2GB\", mem_mb<4096, \"2-4GB\", mem_mb<8192, \"4-8GB\", 1=1, \"8GB+\")\n| eval flash_tier=case(flash_mb<4096, \"Under 4GB\", flash_mb<8192, \"4-8GB\", 1=1, \"8GB+\")\n| stats count as device_count by mem_tier, flash_tier\n| sort mem_tier, flash_tier\n| table mem_tier, flash_tier, device_count",
              "m": "Poll `GET /v3/thinclients?facets=details` to retrieve hardware specifications for each device. The API returns CPU speed, memory size, flash storage, battery level (mobile devices), and network speed. Index these as inventory events with the device `unitID` as a unique key. Build a fleet hardware profile to identify under-provisioned devices. Alert when battery level drops below 20% on mobile IGEL devices. Use trending to forecast flash storage exhaustion. Cross-reference hardware specs against minimum requirements for the VDI workload (e.g., Citrix Workspace App, VMware Horizon Client).",
              "z": "Heatmap (memory tier x flash tier), Bar chart (devices by hardware class), Table (devices below minimum specs).",
              "kfp": "Proxmox and hypervisor health can go yellow during Ceph rebalancing, zfs scrubs, or one-node maintenance; check cluster quorum and job progress before reassigning the alert.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input polling IGEL UMS REST API (`GET /v3/thinclients?facets=details`).\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=\"igel:ums:inventory\"` fields `device_name`, `cpu_speed_mhz`, `mem_size_mb`, `flash_size_mb`, `battery_level`, `network_speed`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET /v3/thinclients?facets=details` to retrieve hardware specifications for each device. The API returns CPU speed, memory size, flash storage, battery level (mobile devices), and network speed. Index these as inventory events with the device `unitID` as a unique key. Build a fleet hardware profile to identify under-provisioned devices. Alert when battery level drops below 20% on mobile IGEL devices. Use trending to forecast flash storage exhaustion. Cross-reference hardware specs against …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"igel:ums:inventory\"\n| stats latest(cpu_speed_mhz) as cpu_mhz, latest(mem_size_mb) as mem_mb, latest(flash_size_mb) as flash_mb, latest(battery_level) as battery, latest(device_name) as device_name by unit_id\n| eval mem_tier=case(mem_mb<2048, \"Under 2GB\", mem_mb<4096, \"2-4GB\", mem_mb<8192, \"4-8GB\", 1=1, \"8GB+\")\n| eval flash_tier=case(flash_mb<4096, \"Under 4GB\", flash_mb<8192, \"4-8GB\", 1=1, \"8GB+\")\n| stats count as device_count by mem_tier, flash_tier\n| sort mem_tier, flash_tier\n| table mem_tier, flash_tier, device_count\n```\n\nUnderstanding this SPL\n\n**IGEL Device Resource Utilization** — IGEL thin clients have constrained hardware resources (CPU, memory, flash storage). Monitoring resource utilization across the fleet identifies devices that are under-provisioned for their workload, approaching flash storage capacity, or experiencing performance issues that degrade the VDI user experience. Proactive capacity trending prevents user complaints and supports hardware refresh planning.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=\"igel:ums:inventory\"` fields `device_name`, `cpu_speed_mhz`, `mem_size_mb`, `flash_size_mb`, `battery_level`, `network_speed`. **App/TA** (typical add-on context): Custom scripted input polling IGEL UMS REST API (`GET /v3/thinclients?facets=details`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: igel:ums:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"igel:ums:inventory\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by unit_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **mem_tier** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **flash_tier** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by mem_tier, flash_tier** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **IGEL Device Resource Utilization**): table mem_tier, flash_tier, device_count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (memory tier x flash tier), Bar chart (devices by hardware class), Table (devices below minimum specs).",
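A minimal sketch of the low-battery alert from the implementation note above, assuming `battery_level` is populated only for mobile devices (as the inventory facet implies) and reusing the documented 20% threshold:

```spl
index=endpoint sourcetype="igel:ums:inventory" battery_level=*
| stats latest(battery_level) as battery, latest(device_name) as device_name by unit_id
| where tonumber(battery) < 20
| table device_name, unit_id, battery
| sort num(battery)
```

Because inventory events arrive on a polling interval, run the alert over at least two polling cycles so `latest()` reflects a current reading.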
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iGEL Device Resource Utilization and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.5.8",
              "n": "IGEL Device Unscheduled Reboot Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Unexpected reboots on thin clients disrupt active VDI sessions, causing users to lose unsaved work and requiring re-authentication. Detecting unscheduled reboots — those not preceded by an administrator-initiated reboot command or firmware update — helps identify hardware failures, power issues, or kernel panics across the fleet before they become widespread.",
              "t": "Splunk syslog input (TCP/TLS) receiving IGEL OS rsyslog",
              "d": "`index=endpoint` `sourcetype=\"igel:os:syslog\"` fields `host`, `process`, `message`",
              "q": "index=endpoint sourcetype=\"igel:os:syslog\" process=\"kernel\" (\"Linux version\" OR \"Booting\" OR \"Command line:\")\n| stats count as boot_events, earliest(_time) as first_boot, latest(_time) as last_boot by host\n| join type=left host [search index=endpoint sourcetype=\"igel:ums:security\" event_type=\"*reboot*\" OR event_type=\"*restart*\" | stats latest(_time) as scheduled_reboot by target]\n| eval unscheduled=if(isnull(scheduled_reboot) OR last_boot > scheduled_reboot + 600, \"Yes\", \"No\")\n| where unscheduled=\"Yes\"\n| eval last_boot_fmt=strftime(last_boot, \"%Y-%m-%d %H:%M:%S\")\n| table host, last_boot_fmt, boot_events\n| sort -boot_events",
              "m": "IGEL OS kernel boot messages appear in syslog when the device starts. Cross-reference boot events against UMS security audit logs for administrator-initiated reboot commands. Boots that occur without a matching reboot command within a 10-minute window are classified as unscheduled. Alert when a single device has more than 3 unscheduled reboots in 24 hours (possible hardware failure) or when more than 5 devices at the same site reboot unexpectedly within 30 minutes (possible power event).",
              "z": "Table (devices with unscheduled reboots), Timechart (reboot events over time), Single value (unscheduled reboot count last 24h).",
              "kfp": "Proxmox and hypervisor health can go yellow during Ceph rebalancing, zfs scrubs, or one-node maintenance; check cluster quorum and job progress before reassigning the alert.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk syslog input (TCP/TLS) receiving IGEL OS rsyslog.\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=\"igel:os:syslog\"` fields `host`, `process`, `message`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIGEL OS kernel boot messages appear in syslog when the device starts. Cross-reference boot events against UMS security audit logs for administrator-initiated reboot commands. Boots that occur without a matching reboot command within a 10-minute window are classified as unscheduled. Alert when a single device has more than 3 unscheduled reboots in 24 hours (possible hardware failure) or when more than 5 devices at the same site reboot unexpectedly within 30 minutes (possible power event).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"igel:os:syslog\" process=\"kernel\" (\"Linux version\" OR \"Booting\" OR \"Command line:\")\n| stats count as boot_events, earliest(_time) as first_boot, latest(_time) as last_boot by host\n| join type=left host [search index=endpoint sourcetype=\"igel:ums:security\" event_type=\"*reboot*\" OR event_type=\"*restart*\" | stats latest(_time) as scheduled_reboot by target]\n| eval unscheduled=if(isnull(scheduled_reboot) OR last_boot > scheduled_reboot + 600, \"Yes\", \"No\")\n| where unscheduled=\"Yes\"\n| eval last_boot_fmt=strftime(last_boot, \"%Y-%m-%d %H:%M:%S\")\n| table host, last_boot_fmt, boot_events\n| sort -boot_events\n```\n\nUnderstanding this SPL\n\n**IGEL Device Unscheduled Reboot Detection** — Unexpected reboots on thin clients disrupt active VDI sessions, causing users to lose unsaved work and requiring re-authentication. Detecting unscheduled reboots — those not preceded by an administrator-initiated reboot command or firmware update — helps identify hardware failures, power issues, or kernel panics across the fleet before they become widespread.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=\"igel:os:syslog\"` fields `host`, `process`, `message`. **App/TA** (typical add-on context): Splunk syslog input (TCP/TLS) receiving IGEL OS rsyslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: igel:os:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"igel:os:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **unscheduled** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where unscheduled=\"Yes\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **last_boot_fmt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **IGEL Device Unscheduled Reboot Detection**): table host, last_boot_fmt, boot_events\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (devices with unscheduled reboots), Timechart (reboot events over time), Single value (unscheduled reboot count last 24h).",
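A simplified sketch of the "more than 3 reboots in 24 hours" condition from the implementation note. It counts every kernel boot banner rather than only unscheduled ones, so treat it as a noisy-device triage list and pair it with the classification search above:

```spl
index=endpoint sourcetype="igel:os:syslog" process="kernel" "Linux version" earliest=-24h
| stats count as boots, latest(_time) as last_boot by host
| where boots > 3
| convert ctime(last_boot)
| sort - boots
```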
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iGEL Device Unscheduled Reboot Detection and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.5.9",
              "n": "IGEL Cloud Gateway Connection Health",
              "c": "high",
              "f": "intermediate",
              "v": "The IGEL Cloud Gateway (ICG) enables remote management of IGEL devices outside the corporate network — essential for work-from-home and branch office deployments. If ICG connectivity fails, remote devices cannot receive policy updates, firmware upgrades, or administrative commands, creating a management blind spot. Monitoring ICG health from both the UMS and ICG perspectives ensures continuous remote device manageability.",
              "t": "Splunk Universal Forwarder monitoring ICG security log, custom scripted input for UMS health",
              "d": "`index=endpoint` `sourcetype=\"igel:icg:security\"` fields `event_type`, `user`, `result`, `source_ip`; `sourcetype=\"igel:ums:health\"` for ICG connection warnings",
              "q": "index=endpoint sourcetype=\"igel:icg:security\"\n| bin _time span=15m\n| stats count as total_events,\n  sum(eval(if(match(event_type, \"(?i)auth.*fail\"), 1, 0))) as failed_auth,\n  sum(eval(if(match(event_type, \"(?i)auth.*success\"), 1, 0))) as success_auth,\n  dc(source_ip) as unique_sources by _time\n| eval fail_pct=if(total_events>0, round(failed_auth/total_events*100,1), 0)\n| where failed_auth > 5 OR fail_pct > 20\n| table _time, total_events, success_auth, failed_auth, fail_pct, unique_sources",
              "m": "Deploy a Splunk Universal Forwarder on the ICG server to monitor `/opt/IGEL/icg/usg/logs/icg-security.log`. The ICG security log records authentication events (success/failure), user creation/deletion, and file uploads. Also monitor the UMS check-status endpoint for ICG-related warnings (cloud gateway disconnection). Alert on: sustained authentication failures from ICG (possible certificate mismatch), ICG going offline (no events for 15+ minutes), or UMS reporting ICG disconnection in its health status.",
              "z": "Timechart (ICG auth success vs failure), Single value (current ICG status), Table (failed auth sources).",
              "kfp": "Proxmox and hypervisor health can go yellow during Ceph rebalancing, zfs scrubs, or one-node maintenance; check cluster quorum and job progress before reassigning the alert.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder monitoring ICG security log, custom scripted input for UMS health.\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=\"igel:icg:security\"` fields `event_type`, `user`, `result`, `source_ip`; `sourcetype=\"igel:ums:health\"` for ICG connection warnings.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy a Splunk Universal Forwarder on the ICG server to monitor `/opt/IGEL/icg/usg/logs/icg-security.log`. The ICG security log records authentication events (success/failure), user creation/deletion, and file uploads. Also monitor the UMS check-status endpoint for ICG-related warnings (cloud gateway disconnection). Alert on: sustained authentication failures from ICG (possible certificate mismatch), ICG going offline (no events for 15+ minutes), or UMS reporting ICG disconnection in its health…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"igel:icg:security\"\n| bin _time span=15m\n| stats count as total_events,\n  sum(eval(if(match(event_type, \"(?i)auth.*fail\"), 1, 0))) as failed_auth,\n  sum(eval(if(match(event_type, \"(?i)auth.*success\"), 1, 0))) as success_auth,\n  dc(source_ip) as unique_sources by _time\n| eval fail_pct=if(total_events>0, round(failed_auth/total_events*100,1), 0)\n| where failed_auth > 5 OR fail_pct > 20\n| table _time, total_events, success_auth, failed_auth, fail_pct, unique_sources\n```\n\nUnderstanding this SPL\n\n**IGEL Cloud Gateway Connection Health** — The IGEL Cloud Gateway (ICG) enables remote management of IGEL devices outside the corporate network — essential for work-from-home and branch office deployments. If ICG connectivity fails, remote devices cannot receive policy updates, firmware upgrades, or administrative commands, creating a management blind spot. Monitoring ICG health from both the UMS and ICG perspectives ensures continuous remote device manageability.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=\"igel:icg:security\"` fields `event_type`, `user`, `result`, `source_ip`; `sourcetype=\"igel:ums:health\"` for ICG connection warnings. **App/TA** (typical add-on context): Splunk Universal Forwarder monitoring ICG security log, custom scripted input for UMS health. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: igel:icg:security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"igel:icg:security\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failed_auth > 5 OR fail_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **IGEL Cloud Gateway Connection Health**): table _time, total_events, success_auth, failed_auth, fail_pct, unique_sources\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(Authentication.src) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=15m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IGEL Cloud Gateway Connection Health** — The IGEL Cloud Gateway (ICG) enables remote management of IGEL devices outside the corporate network — essential for work-from-home and branch office deployments. If ICG connectivity fails, remote devices cannot receive policy updates, firmware upgrades, or administrative commands, creating a management blind spot. Monitoring ICG health from both the UMS and ICG perspectives ensures continuous remote device manageability.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=\"igel:icg:security\"` fields `event_type`, `user`, `result`, `source_ip`; `sourcetype=\"igel:ums:health\"` for ICG connection warnings. **App/TA** (typical add-on context): Splunk Universal Forwarder monitoring ICG security log, custom scripted input for UMS health. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (ICG auth success vs failure), Single value (current ICG status), Table (failed auth sources).",
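A minimal sketch of the "ICG offline" condition from the implementation note (no security-log events for 15+ minutes); the 24-hour lookback and 15-minute threshold are illustrative:

```spl
index=endpoint sourcetype="igel:icg:security" earliest=-24h
| stats latest(_time) as last_seen
| eval minutes_silent=round((now() - last_seen) / 60, 0)
| where minutes_silent > 15
```

If the feed can disappear entirely (zero events in the lookback), also configure the alert to fire on zero results, since `stats` has nothing to measure in that case.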
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iGEL Cloud Gateway Connection Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t dc(Authentication.src) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=15m | sort - agg_value",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.5.10",
              "n": "IGEL Device Configuration Drift Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "IGEL UMS manages device configurations through profiles and priority profiles assigned to devices or directories. Unauthorized or unintended configuration changes — profile reassignments, priority profile overrides, or direct device settings modifications — can break VDI session configurations, disable security controls, or create inconsistent user experiences. Detecting configuration drift from the approved baseline ensures fleet standardization.",
              "t": "Splunk Universal Forwarder monitoring UMS security log files",
              "d": "`index=endpoint` `sourcetype=\"igel:ums:security\"` fields `source_tag`, `event_type`, `user`, `target`, `detail`",
              "q": "index=endpoint sourcetype=\"igel:ums:security\" source_tag=\"UMS-Webapp\" OR source_tag=\"UMS-Server\"\n  (event_type=\"*profile*\" OR event_type=\"*assignment*\" OR event_type=\"*settings*\" OR event_type=\"*configuration*\")\n| eval change_type=case(\n    match(event_type, \"(?i)priority.*profile\"), \"Priority Profile Change\",\n    match(event_type, \"(?i)profile\"), \"Profile Change\",\n    match(event_type, \"(?i)assign\"), \"Assignment Change\",\n    1=1, \"Settings Change\"\n  )\n| stats count as changes, dc(target) as affected_devices, values(user) as changed_by by change_type, _time\n| where changes > 0\n| sort -_time\n| table _time, change_type, changes, affected_devices, changed_by",
              "m": "The UMS security audit log records all profile assignments, priority profile updates, and device configuration modifications with the acting administrator's username. Monitor for: bulk profile reassignments (more than 10 devices in 5 minutes — could be intentional rollout or accidental), off-hours configuration changes, changes by unauthorized users, and removal of security-related profiles (e.g., syslog forwarding, USB lockdown). Maintain a lookup of approved change windows and authorized administrators. Alert on changes outside approved windows or by non-authorized users.",
              "z": "Timeline (configuration changes), Bar chart (changes by type), Table (recent changes with user and target details).",
              "kfp": "Proxmox and hypervisor health can go yellow during Ceph rebalancing, zfs scrubs, or one-node maintenance; check cluster quorum and job progress before reassigning the alert.",
              "refs": "[uberAgent indexer app](https://splunkbase.splunk.com/app/2998), [Splunkbase app 1448](https://splunkbase.splunk.com/app/1448), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder monitoring UMS security log files.\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=\"igel:ums:security\"` fields `source_tag`, `event_type`, `user`, `target`, `detail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe UMS security audit log records all profile assignments, priority profile updates, and device configuration modifications with the acting administrator's username. Monitor for: bulk profile reassignments (more than 10 devices in 5 minutes — could be intentional rollout or accidental), off-hours configuration changes, changes by unauthorized users, and removal of security-related profiles (e.g., syslog forwarding, USB lockdown). Maintain a lookup of approved change windows and authorized admin…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"igel:ums:security\" source_tag=\"UMS-Webapp\" OR source_tag=\"UMS-Server\"\n  (event_type=\"*profile*\" OR event_type=\"*assignment*\" OR event_type=\"*settings*\" OR event_type=\"*configuration*\")\n| eval change_type=case(\n    match(event_type, \"(?i)priority.*profile\"), \"Priority Profile Change\",\n    match(event_type, \"(?i)profile\"), \"Profile Change\",\n    match(event_type, \"(?i)assign\"), \"Assignment Change\",\n    1=1, \"Settings Change\"\n  )\n| stats count as changes, dc(target) as affected_devices, values(user) as changed_by by change_type, _time\n| where changes > 0\n| sort -_time\n| table _time, change_type, changes, affected_devices, changed_by\n```\n\nUnderstanding this SPL\n\n**IGEL Device Configuration Drift Detection** — IGEL UMS manages device configurations through profiles and priority profiles assigned to devices or directories. Unauthorized or unintended configuration changes — profile reassignments, priority profile overrides, or direct device settings modifications — can break VDI session configurations, disable security controls, or create inconsistent user experiences. Detecting configuration drift from the approved baseline ensures fleet standardization.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=\"igel:ums:security\"` fields `source_tag`, `event_type`, `user`, `target`, `detail`. **App/TA** (typical add-on context): Splunk Universal Forwarder monitoring UMS security log files. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: igel:ums:security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"igel:ums:security\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **change_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by change_type, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where changes > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **IGEL Device Configuration Drift Detection**): table _time, change_type, changes, affected_devices, changed_by\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IGEL Device Configuration Drift Detection** — IGEL UMS manages device configurations through profiles and priority profiles assigned to devices or directories. Unauthorized or unintended configuration changes — profile reassignments, priority profile overrides, or direct device settings modifications — can break VDI session configurations, disable security controls, or create inconsistent user experiences. Detecting configuration drift from the approved baseline ensures fleet standardization.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=\"igel:ums:security\"` fields `source_tag`, `event_type`, `user`, `target`, `detail`. **App/TA** (typical add-on context): Splunk Universal Forwarder monitoring UMS security log files. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (configuration changes), Bar chart (changes by type), Table (recent changes with user and target details).",
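A sketch of the off-hours variant described in the implementation note. The 07:00–19:00 business window is a placeholder, and a production version would swap the `where` clause for the approved change-window lookup mentioned above:

```spl
index=endpoint sourcetype="igel:ums:security" (event_type="*profile*" OR event_type="*assignment*" OR event_type="*settings*")
| eval hour=tonumber(strftime(_time, "%H"))
| where hour < 7 OR hour >= 19
| stats count as offhours_changes, values(target) as targets, latest(_time) as last_change by user
| convert ctime(last_change)
| sort - offhours_changes
```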
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iGEL Device Configuration Drift Detection and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 10,
            "none": 0
          }
        },
        {
          "i": "2.6",
          "n": "Citrix Virtual Apps & Desktops",
          "u": [
            {
              "i": "2.6.1",
              "n": "Citrix Session Logon Duration Breakdown",
              "c": "critical",
              "f": "advanced",
              "v": "Slow Citrix logon times are the most common user complaint in CVAD environments. Logon duration is composed of multiple sequential phases — brokering, VM start, HDX connection, authentication, profile load, GPO processing, and script execution. Identifying which phase contributes to slow logons enables targeted remediation rather than broad troubleshooting. A 60-second logon target is typical; exceeding it degrades user satisfaction and productivity.",
              "t": "uberAgent UXM (Splunkbase 1448) — recommended; or Template for Citrix XenDesktop 7 (`TA-XD7-Broker`), Citrix Monitor Service OData API",
              "d": "uberAgent: `sourcetype=\"uberAgent:Logon:LogonDetail\"` (phase-level breakdown including GPO, profile, shell, scripts); or `index=xd` `sourcetype=\"citrix:broker:events\"` fields `logon_duration_ms`, `brokering_duration_ms`, `vm_start_duration_ms`, `hdx_connection_ms`, `authentication_ms`, `profile_load_ms`, `gpo_ms`, `logon_scripts_ms`, `user`, `delivery_group`",
              "q": "index=xd sourcetype=\"citrix:broker:events\" event_type=\"SessionLogon\"\n| eval total_logon_sec=logon_duration_ms/1000\n| bin _time span=1h\n| stats avg(total_logon_sec) as avg_logon, perc95(total_logon_sec) as p95_logon,\n  avg(brokering_duration_ms) as avg_broker, avg(vm_start_duration_ms) as avg_vmstart,\n  avg(hdx_connection_ms) as avg_hdx, avg(profile_load_ms) as avg_profile,\n  avg(gpo_ms) as avg_gpo, count as logon_count by delivery_group, _time\n| where p95_logon > 60\n| table _time, delivery_group, logon_count, avg_logon, p95_logon, avg_broker, avg_vmstart, avg_hdx, avg_profile, avg_gpo",
              "m": "**Preferred:** Deploy uberAgent UXM on VDAs — the Logon Duration dashboard provides automatic phase breakdown (userinit, shell, GPO, profile, scripts) with no OData polling required, and captures per-user detail. **Alternative:** Collect session logon events from the Citrix Broker Service event log on Delivery Controllers using the `TA-XD7-Broker` add-on, or poll the Monitor Service OData API endpoint `Sessions` for `LogOnDuration` breakdown. Alert when p95 logon exceeds 60 seconds for any delivery group. Trend logon duration over weeks to detect gradual regression after GPO or profile changes. Segment by delivery group to isolate problem areas. Common root causes by phase: brokering (controller load), VM start (hypervisor contention), profile load (large profiles or slow file shares), GPO (excessive policies).",
              "z": "Stacked bar chart (logon phases), Line chart (logon duration trending), Table (slowest delivery groups).",
              "kfp": "Session and launch time spikes may track image updates, profile migrations, or broker outages; compare to the same app on other sites before blaming the network only.",
              "refs": "[uberAgent UXM](https://splunkbase.splunk.com/app/1448)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448) — recommended; or Template for Citrix XenDesktop 7 (`TA-XD7-Broker`), Citrix Monitor Service OData API.\n• Ensure the following data sources are available: uberAgent: `sourcetype=\"uberAgent:Logon:LogonDetail\"` (phase-level breakdown including GPO, profile, shell, scripts); or `index=xd` `sourcetype=\"citrix:broker:events\"` fields `logon_duration_ms`, `brokering_duration_ms`, `vm_start_duration_ms`, `hdx_connection_ms`, `authentication_ms`, `profile_load_ms`, `gpo_ms`, `logon_scripts_ms`, `user`, `delivery_group`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n**Preferred:** Deploy uberAgent UXM on VDAs — the Logon Duration dashboard provides automatic phase breakdown (userinit, shell, GPO, profile, scripts) with no OData polling required, and captures per-user detail. **Alternative:** Collect session logon events from the Citrix Broker Service event log on Delivery Controllers using the `TA-XD7-Broker` add-on, or poll the Monitor Service OData API endpoint `Sessions` for `LogOnDuration` breakdown. Alert when p95 logon exceeds 60 seconds for any deliv…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" event_type=\"SessionLogon\"\n| eval total_logon_sec=logon_duration_ms/1000\n| bin _time span=1h\n| stats avg(total_logon_sec) as avg_logon, perc95(total_logon_sec) as p95_logon,\n  avg(brokering_duration_ms) as avg_broker, avg(vm_start_duration_ms) as avg_vmstart,\n  avg(hdx_connection_ms) as avg_hdx, avg(profile_load_ms) as avg_profile,\n  avg(gpo_ms) as avg_gpo, count as logon_count by delivery_group, _time\n| where p95_logon > 60\n| table _time, delivery_group, logon_count, avg_logon, p95_logon, avg_broker, avg_vmstart, avg_hdx, avg_profile, avg_gpo\n```\n\nUnderstanding this SPL\n\n**Citrix Session Logon Duration Breakdown** — Slow Citrix logon times are the most common user complaint in CVAD environments. Logon duration is composed of multiple sequential phases — brokering, VM start, HDX connection, authentication, profile load, GPO processing, and script execution. Identifying which phase contributes to slow logons enables targeted remediation rather than broad troubleshooting. A 60-second logon target is typical; exceeding it degrades user satisfaction and productivity.\n\nDocumented **Data sources**: uberAgent: `sourcetype=\"uberAgent:Logon:LogonDetail\"` (phase-level breakdown including GPO, profile, shell, scripts); or `index=xd` `sourcetype=\"citrix:broker:events\"` fields `logon_duration_ms`, `brokering_duration_ms`, `vm_start_duration_ms`, `hdx_connection_ms`, `authentication_ms`, `profile_load_ms`, `gpo_ms`, `logon_scripts_ms`, `user`, `delivery_group`. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448) — recommended; or Template for Citrix XenDesktop 7 (`TA-XD7-Broker`), Citrix Monitor Service OData API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. 
That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **total_logon_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by delivery_group, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_logon > 60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Session Logon Duration Breakdown**): table _time, delivery_group, logon_count, avg_logon, p95_logon, avg_broker, avg_vmstart, avg_hdx, avg_profile, avg_gpo\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar chart (logon phases), Line chart (logon duration trending), Table (slowest delivery groups).",
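A week-over-week sketch of the "trend logon duration to detect gradual regression" guidance above; the 14-day window and 20% regression threshold are illustrative assumptions:

```spl
index=xd sourcetype="citrix:broker:events" event_type="SessionLogon" earliest=-14d
| eval period=if(_time >= relative_time(now(), "-7d@d"), "this_week", "last_week")
| stats perc95(eval(logon_duration_ms/1000)) as p95_logon by delivery_group, period
| xyseries delivery_group period p95_logon
| eval regression_pct=round((this_week - last_week) / last_week * 100, 1)
| where regression_pct > 20
```

Rows that survive the filter are delivery groups whose p95 logon time worsened by more than a fifth week over week, a useful companion to the absolute 60-second alert, which misses slow drift below the threshold.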
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on session Logon Duration Breakdown and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad",
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.2",
              "n": "ICA/HDX Session Latency and Quality",
              "c": "critical",
              "f": "intermediate",
              "v": "ICA Round Trip Time (RTT) is the primary measure of Citrix session responsiveness — the time from a user keystroke to the response appearing on screen. Citrix defines 0–150ms as optimal, 150–300ms as acceptable, and above 300ms as degraded. Poor ICA latency causes sluggish typing, delayed screen updates, and broken audio/video, directly impacting user productivity. Monitoring ICA RTT across the fleet detects network issues, overloaded session hosts, and endpoint problems.",
              "t": "uberAgent UXM (Splunkbase 1448) — recommended; or Template for Citrix XenDesktop 7 (`TA-XD7-VDA`), Citrix Monitor Service OData API",
              "d": "uberAgent: `sourcetype=\"uberAgent:Session:SessionDetail\"` (ICA RTT, ICA latency, bandwidth, protocol, session quality); or `index=xd_perfmon` `sourcetype=\"citrix:vda:perfmon\"` fields `ica_rtt_ms`, `ica_latency_ms`, `ica_bandwidth_in`, `ica_bandwidth_out`, `session_id`, `user`, `vda_host`",
              "q": "index=xd_perfmon sourcetype=\"citrix:vda:perfmon\" counter_name=\"ICA RTT\"\n| bin _time span=5m\n| stats avg(counter_value) as avg_rtt, perc95(counter_value) as p95_rtt, max(counter_value) as max_rtt by vda_host, _time\n| eval quality=case(p95_rtt<=150, \"Optimal\", p95_rtt<=300, \"Acceptable\", 1=1, \"Degraded\")\n| where quality=\"Degraded\"\n| table _time, vda_host, avg_rtt, p95_rtt, max_rtt, quality",
              "m": "Collect ICA RTT performance counters from VDAs using the `TA-XD7-VDA` add-on (Citrix ICA Session performance object). Alternatively, poll the Monitor Service OData API `SessionMetrics` endpoint. The difference between ICA RTT and ICA Latency indicates application processing time on the session host — if ICA Latency is high but network latency is low, the VDA is overloaded. Alert on sustained p95 RTT above 300ms. Segment by delivery group and VDA host to identify whether the issue is endpoint-specific (user's network), VDA-specific (overloaded host), or site-wide (network infrastructure).",
              "z": "Line chart (ICA RTT over time by VDA), Heatmap (VDA x hour), Single value (fleet average RTT with color threshold).",
              "kfp": "Session and launch time spikes may track image updates, profile migrations, or broker outages; compare to the same app on other sites before blaming the network only.",
              "refs": "[uberAgent UXM](https://splunkbase.splunk.com/app/1448)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448) — recommended; or Template for Citrix XenDesktop 7 (`TA-XD7-VDA`), Citrix Monitor Service OData API.\n• Ensure the following data sources are available: uberAgent: `sourcetype=\"uberAgent:Session:SessionDetail\"` (ICA RTT, ICA latency, bandwidth, protocol, session quality); or `index=xd_perfmon` `sourcetype=\"citrix:vda:perfmon\"` fields `ica_rtt_ms`, `ica_latency_ms`, `ica_bandwidth_in`, `ica_bandwidth_out`, `session_id`, `user`, `vda_host`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect ICA RTT performance counters from VDAs using the `TA-XD7-VDA` add-on (Citrix ICA Session performance object). Alternatively, poll the Monitor Service OData API `SessionMetrics` endpoint. The difference between ICA RTT and ICA Latency indicates application processing time on the session host — if ICA Latency is high but network latency is low, the VDA is overloaded. Alert on sustained p95 RTT above 300ms. Segment by delivery group and VDA host to identify whether the issue is endpoint-spe…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd_perfmon sourcetype=\"citrix:vda:perfmon\" counter_name=\"ICA RTT\"\n| bin _time span=5m\n| stats avg(counter_value) as avg_rtt, perc95(counter_value) as p95_rtt, max(counter_value) as max_rtt by vda_host, _time\n| eval quality=case(p95_rtt<=150, \"Optimal\", p95_rtt<=300, \"Acceptable\", 1=1, \"Degraded\")\n| where quality=\"Degraded\"\n| table _time, vda_host, avg_rtt, p95_rtt, max_rtt, quality\n```\n\nUnderstanding this SPL\n\n**ICA/HDX Session Latency and Quality** — ICA Round Trip Time (RTT) is the primary measure of Citrix session responsiveness — the time from a user keystroke to the response appearing on screen. Citrix defines 0–150ms as optimal, 150–300ms as acceptable, and above 300ms as degraded. Poor ICA latency causes sluggish typing, delayed screen updates, and broken audio/video, directly impacting user productivity. Monitoring ICA RTT across the fleet detects network issues, overloaded session hosts, and endpoint problems.\n\nDocumented **Data sources**: uberAgent: `sourcetype=\"uberAgent:Session:SessionDetail\"` (ICA RTT, ICA latency, bandwidth, protocol, session quality); or `index=xd_perfmon` `sourcetype=\"citrix:vda:perfmon\"` fields `ica_rtt_ms`, `ica_latency_ms`, `ica_bandwidth_in`, `ica_bandwidth_out`, `session_id`, `user`, `vda_host`. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448) — recommended; or Template for Citrix XenDesktop 7 (`TA-XD7-VDA`), Citrix Monitor Service OData API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd_perfmon; **sourcetype**: citrix:vda:perfmon. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd_perfmon, sourcetype=\"citrix:vda:perfmon\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by vda_host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **quality** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where quality=\"Degraded\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ICA/HDX Session Latency and Quality**): table _time, vda_host, avg_rtt, p95_rtt, max_rtt, quality\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (ICA RTT over time by VDA), Heatmap (VDA x hour), Single value (fleet average RTT with color threshold).",
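A sketch of the RTT-versus-latency split described in the implementation note, assuming the per-field extraction listed under Data sources (`ica_rtt_ms`, `ica_latency_ms`) rather than the counter_name/counter_value layout of the main search; the 150ms cutoffs are illustrative:

```spl
index=xd_perfmon sourcetype="citrix:vda:perfmon" ica_rtt_ms=* ica_latency_ms=*
| bin _time span=5m
| stats avg(ica_rtt_ms) as avg_rtt, avg(ica_latency_ms) as avg_net_latency by vda_host, _time
| eval host_processing_ms=avg_rtt - avg_net_latency
| eval likely_bottleneck=case(avg_net_latency > 150, "Network", host_processing_ms > 150, "VDA load", 1=1, "OK")
| where likely_bottleneck != "OK"
```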
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iCA/HDX Session Latency and Quality and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad",
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.3",
              "n": "Citrix Connection Failure Analysis",
              "c": "critical",
              "f": "intermediate",
              "v": "Connection failures prevent users from launching virtual desktops or published applications. Failures can occur at multiple stages: brokering (no available machines), power management (VM failed to start), registration (VDA not registered with controller), or HDX connection (protocol failure). Categorizing failures by type and correlating with infrastructure state enables rapid root-cause identification.",
              "t": "Template for Citrix XenDesktop 7 (`TA-XD7-Broker`), Citrix Monitor Service OData API",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` fields `connection_state`, `failure_reason`, `failure_type`, `delivery_group`, `machine_name`, `user`",
              "q": "index=xd sourcetype=\"citrix:broker:events\" event_type=\"ConnectionFailure\"\n| bin _time span=15m\n| stats count as failures, dc(user) as affected_users, values(failure_reason) as reasons by failure_type, delivery_group, _time\n| where failures > 3\n| sort -failures\n| table _time, delivery_group, failure_type, failures, affected_users, reasons",
              "m": "Collect Broker Service events (Event IDs 1100–1199 for connection lifecycle) from Delivery Controllers. The Monitor Service OData API `ConnectionFailureLogs` endpoint provides structured failure data with `FailureType` (ClientConnectionFailure, MachineFailure, etc.) and `FailureReason`. Alert on: more than 3 failures in 15 minutes for any delivery group, any `MachineFailure` type (indicates infrastructure problem), or rising failure rates across the site. Correlate with machine power state and VDA registration status for root cause.",
              "z": "Bar chart (failures by type), Timeline (failure events), Table (recent failures with user and machine details).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (`TA-XD7-Broker`), Citrix Monitor Service OData API.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` fields `connection_state`, `failure_reason`, `failure_type`, `delivery_group`, `machine_name`, `user`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Broker Service events (Event IDs 1100–1199 for connection lifecycle) from Delivery Controllers. The Monitor Service OData API `ConnectionFailureLogs` endpoint provides structured failure data with `FailureType` (ClientConnectionFailure, MachineFailure, etc.) and `FailureReason`. Alert on: more than 3 failures in 15 minutes for any delivery group, any `MachineFailure` type (indicates infrastructure problem), or rising failure rates across the site. Correlate with machine power state and V…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" event_type=\"ConnectionFailure\"\n| bin _time span=15m\n| stats count as failures, dc(user) as affected_users, values(failure_reason) as reasons by failure_type, delivery_group, _time\n| where failures > 3\n| sort -failures\n| table _time, delivery_group, failure_type, failures, affected_users, reasons\n```\n\nUnderstanding this SPL\n\n**Citrix Connection Failure Analysis** — Connection failures prevent users from launching virtual desktops or published applications. Failures can occur at multiple stages: brokering (no available machines), power management (VM failed to start), registration (VDA not registered with controller), or HDX connection (protocol failure). Categorizing failures by type and correlating with infrastructure state enables rapid root-cause identification.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` fields `connection_state`, `failure_reason`, `failure_type`, `delivery_group`, `machine_name`, `user`. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (`TA-XD7-Broker`), Citrix Monitor Service OData API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by failure_type, delivery_group, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failures > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix Connection Failure Analysis**): table _time, delivery_group, failure_type, failures, affected_users, reasons\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failures by type), Timeline (failure events), Table (recent failures with user and machine details).",
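A minimal sketch of the "any MachineFailure" condition from the implementation note, scheduled over a short window (e.g., the last 15 minutes) so that any returned row can page immediately:

```spl
index=xd sourcetype="citrix:broker:events" event_type="ConnectionFailure" failure_type="MachineFailure"
| stats count as failures, dc(machine_name) as machines, values(failure_reason) as reasons by delivery_group
| sort - failures
```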
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on connection Failure Analysis and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.4",
              "n": "VDA Machine Registration Health",
              "c": "critical",
              "f": "beginner",
              "v": "Virtual Delivery Agents must register with a Delivery Controller to receive user sessions. Unregistered VDAs are effectively offline — they cannot serve users and reduce available capacity. Mass deregistration events indicate controller failures, network issues, or VDA crashes. Monitoring the ratio of registered to total machines ensures session hosting capacity meets demand.",
              "t": "Template for Citrix XenDesktop 7 (`TA-XD7-Broker`), Citrix Monitor Service OData API",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` fields `machine_name`, `registration_state`, `delivery_group`, `catalog_name`, `fault_state`",
              "q": "index=xd sourcetype=\"citrix:broker:events\" event_type=\"MachineStatus\"\n| stats latest(registration_state) as reg_state, latest(fault_state) as fault by machine_name, delivery_group\n| stats count as total,\n  sum(eval(if(reg_state=\"Registered\", 1, 0))) as registered,\n  sum(eval(if(reg_state=\"Unregistered\", 1, 0))) as unregistered,\n  sum(eval(if(fault!=\"None\" AND fault!=\"\", 1, 0))) as faulted by delivery_group\n| eval reg_pct=round(registered/total*100,1)\n| where reg_pct < 95 OR faulted > 0\n| table delivery_group, total, registered, unregistered, faulted, reg_pct",
              "m": "Poll machine status from the Broker Service or Monitor Service OData API `Machines` endpoint. Track `RegistrationState` (Registered, Unregistered, Initializing) and `FaultState` (None, FailedToStart, StuckOnBoot, Unregistered, MaxCapacity). Alert when registration percentage drops below 95% for any delivery group. Alert immediately when more than 5 machines deregister within 5 minutes (mass deregistration = infrastructure problem). Correlate with controller health and hypervisor connectivity.",
              "z": "Single value (registration % with color), Bar chart (registered vs unregistered by delivery group), Table (unregistered machines with fault state).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (`TA-XD7-Broker`), Citrix Monitor Service OData API.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` fields `machine_name`, `registration_state`, `delivery_group`, `catalog_name`, `fault_state`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll machine status from the Broker Service or Monitor Service OData API `Machines` endpoint. Track `RegistrationState` (Registered, Unregistered, Initializing) and `FaultState` (None, FailedToStart, StuckOnBoot, Unregistered, MaxCapacity). Alert when registration percentage drops below 95% for any delivery group. Alert immediately when more than 5 machines deregister within 5 minutes (mass deregistration = infrastructure problem). Correlate with controller health and hypervisor connectivity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" event_type=\"MachineStatus\"\n| stats latest(registration_state) as reg_state, latest(fault_state) as fault by machine_name, delivery_group\n| stats count as total,\n  sum(eval(if(reg_state=\"Registered\", 1, 0))) as registered,\n  sum(eval(if(reg_state=\"Unregistered\", 1, 0))) as unregistered,\n  sum(eval(if(fault!=\"None\" AND fault!=\"\", 1, 0))) as faulted by delivery_group\n| eval reg_pct=round(registered/total*100,1)\n| where reg_pct < 95 OR faulted > 0\n| table delivery_group, total, registered, unregistered, faulted, reg_pct\n```\n\nUnderstanding this SPL\n\n**VDA Machine Registration Health** — Virtual Delivery Agents must register with a Delivery Controller to receive user sessions. Unregistered VDAs are effectively offline — they cannot serve users and reduce available capacity. Mass deregistration events indicate controller failures, network issues, or VDA crashes. Monitoring the ratio of registered to total machines ensures session hosting capacity meets demand.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` fields `machine_name`, `registration_state`, `delivery_group`, `catalog_name`, `fault_state`. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (`TA-XD7-Broker`), Citrix Monitor Service OData API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by machine_name, delivery_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` rolls up events into metrics; results are split **by delivery_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **reg_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where reg_pct < 95 OR faulted > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VDA Machine Registration Health**): table delivery_group, total, registered, unregistered, faulted, reg_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (registration % with color), Bar chart (registered vs unregistered by delivery group), Table (unregistered machines with fault state).
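\n\nOptional companion search (illustrative sketch)\nStep 1 additionally asks for an immediate alert when more than 5 machines deregister within 5 minutes. A minimal sketch, assuming the same index/sourcetype as the main search; note it counts machines reporting `Unregistered` in each window rather than true state transitions, so validate against your event semantics:\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" event_type=\"MachineStatus\" registration_state=\"Unregistered\"\n| bin _time span=5m\n| stats dc(machine_name) as deregistered_machines by _time\n| where deregistered_machines > 5\n```\n\nTune the threshold to your delivery group sizes; 5 machines in 5 minutes is the mass-deregistration rule of thumb from Step 1.",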
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on vDA Machine Registration Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.5",
              "n": "Citrix Delivery Controller Service Health",
              "c": "critical",
              "f": "beginner",
              "v": "Citrix Delivery Controllers run multiple critical Windows services: Broker Service, Configuration Service, Host Service, Machine Creation Service, and others. If the Broker Service stops, no new sessions can be brokered. If both controllers in a site fail, the entire Citrix environment becomes unavailable. Monitoring service health on all controllers ensures rapid detection and failover.",
              "t": "Splunk Add-on for Microsoft Windows",
              "d": "`index=xd_winevents` `sourcetype=\"WinEventLog:System\"` fields `EventCode`, `service_name`, `service_state`, `host`",
              "q": "index=xd_winevents sourcetype=\"WinEventLog:System\" EventCode=7036\n  (service_name=\"Citrix Broker Service\" OR service_name=\"Citrix Configuration Service\"\n  OR service_name=\"Citrix Host Service\" OR service_name=\"CitrixMachineCreationService\"\n  OR service_name=\"Citrix Storefront*\")\n| eval status=if(match(Message, \"running\"), \"Running\", \"Stopped\")\n| stats latest(status) as current_state, latest(_time) as last_change by host, service_name\n| where current_state=\"Stopped\"\n| eval last_change_fmt=strftime(last_change, \"%Y-%m-%d %H:%M:%S\")\n| table host, service_name, current_state, last_change_fmt",
              "m": "Deploy Splunk Universal Forwarder on all Delivery Controllers and monitor Windows System Event Log. Windows Event ID 7036 records service state changes (\"entered the running/stopped state\"). Track all Citrix-specific services. Alert immediately when any critical Citrix service enters the stopped state. Correlate across controllers — if the Broker Service stops on all controllers simultaneously, escalate as P1. Also monitor Event IDs 7031 (service crash) and 7034 (unexpected termination).",
              "z": "Status grid (service x controller), Timeline (state change events), Table (stopped services).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows.\n• Ensure the following data sources are available: `index=xd_winevents` `sourcetype=\"WinEventLog:System\"` fields `EventCode`, `service_name`, `service_state`, `host`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Splunk Universal Forwarder on all Delivery Controllers and monitor Windows System Event Log. Windows Event ID 7036 records service state changes (\"entered the running/stopped state\"). Track all Citrix-specific services. Alert immediately when any critical Citrix service enters the stopped state. Correlate across controllers — if the Broker Service stops on all controllers simultaneously, escalate as P1. Also monitor Event IDs 7031 (service crash) and 7034 (unexpected termination).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd_winevents sourcetype=\"WinEventLog:System\" EventCode=7036\n  (service_name=\"Citrix Broker Service\" OR service_name=\"Citrix Configuration Service\"\n  OR service_name=\"Citrix Host Service\" OR service_name=\"CitrixMachineCreationService\"\n  OR service_name=\"Citrix Storefront*\")\n| eval status=if(match(Message, \"running\"), \"Running\", \"Stopped\")\n| stats latest(status) as current_state, latest(_time) as last_change by host, service_name\n| where current_state=\"Stopped\"\n| eval last_change_fmt=strftime(last_change, \"%Y-%m-%d %H:%M:%S\")\n| table host, service_name, current_state, last_change_fmt\n```\n\nUnderstanding this SPL\n\n**Citrix Delivery Controller Service Health** — Citrix Delivery Controllers run multiple critical Windows services: Broker Service, Configuration Service, Host Service, Machine Creation Service, and others. If the Broker Service stops, no new sessions can be brokered. If both controllers in a site fail, the entire Citrix environment becomes unavailable. Monitoring service health on all controllers ensures rapid detection and failover.\n\nDocumented **Data sources**: `index=xd_winevents` `sourcetype=\"WinEventLog:System\"` fields `EventCode`, `service_name`, `service_state`, `host`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd_winevents; **sourcetype**: WinEventLog:System. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd_winevents, sourcetype=\"WinEventLog:System\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where current_state=\"Stopped\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **last_change_fmt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Citrix Delivery Controller Service Health**): table host, service_name, current_state, last_change_fmt\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (service x controller), Timeline (state change events), Table (stopped services).
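\n\nOptional companion search (illustrative sketch)\nStep 1 also asks you to monitor Event IDs 7031 (service crash) and 7034 (unexpected termination), which the main 7036 search does not cover. A minimal sketch, assuming `service_name` is extracted for those events as well (extraction of the service name from 7031/7034 messages can differ by TA version):\n\n```spl\nindex=xd_winevents sourcetype=\"WinEventLog:System\" (EventCode=7031 OR EventCode=7034) service_name=\"Citrix*\"\n| stats count as crash_events, latest(_time) as last_crash by host, service_name\n| eval last_crash_fmt=strftime(last_crash, \"%Y-%m-%d %H:%M:%S\")\n| table host, service_name, crash_events, last_crash_fmt\n```\n\nAny hit here is alert-worthy for broker-critical services; correlate with the 7036 state-change search above before paging.",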
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on delivery Controller Service Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.6",
              "n": "Citrix Machine Power State Management",
              "c": "high",
              "f": "intermediate",
              "v": "Citrix Delivery Controllers manage VM power states through power policy schedules — powering on machines before business hours and off after hours to save resources. Failed power actions (VM failed to start, hypervisor timeout, stuck in boot) reduce available session capacity during peak hours. Monitoring power action success rates and queue depth ensures machines are ready when users arrive.",
              "t": "Template for Citrix XenDesktop 7 (`TA-XD7-Broker`)",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` fields `power_action`, `power_state`, `machine_name`, `delivery_group`, `action_result`",
              "q": "index=xd sourcetype=\"citrix:broker:events\" event_type=\"PowerAction\"\n| bin _time span=1h\n| stats sum(eval(if(action_result=\"Success\", 1, 0))) as success,\n  sum(eval(if(action_result=\"Failed\", 1, 0))) as failed,\n  sum(eval(if(action_result=\"Pending\", 1, 0))) as pending,\n  count as total by power_action, delivery_group, _time\n| eval fail_pct=if(total>0, round(failed/total*100,1), 0)\n| where failed > 0 OR pending > 10\n| table _time, delivery_group, power_action, total, success, failed, pending, fail_pct",
              "m": "The Broker Service logs power management actions with Event IDs in the 2000–3000 range. Track power actions (TurnOn, TurnOff, Shutdown, Reset, Restart) and their results (Success, Failed, Pending, Canceled). The Broker throttles power actions per hypervisor connection to avoid overloading — a large pending queue indicates throttling bottleneck or hypervisor slowness. Alert on: any failed power actions, pending queue exceeding 10 actions (backlog), or power-on failures during scheduled scale-out windows. Use `Get-BrokerHostingPowerAction` via PowerShell scripted input for real-time queue visibility.",
              "z": "Timechart (power actions by result), Bar chart (failures by delivery group), Single value (pending queue depth).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (`TA-XD7-Broker`).\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` fields `power_action`, `power_state`, `machine_name`, `delivery_group`, `action_result`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe Broker Service logs power management actions with Event IDs in the 2000–3000 range. Track power actions (TurnOn, TurnOff, Shutdown, Reset, Restart) and their results (Success, Failed, Pending, Canceled). The Broker throttles power actions per hypervisor connection to avoid overloading — a large pending queue indicates throttling bottleneck or hypervisor slowness. Alert on: any failed power actions, pending queue exceeding 10 actions (backlog), or power-on failures during scheduled scale-out …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" event_type=\"PowerAction\"\n| bin _time span=1h\n| stats sum(eval(if(action_result=\"Success\", 1, 0))) as success,\n  sum(eval(if(action_result=\"Failed\", 1, 0))) as failed,\n  sum(eval(if(action_result=\"Pending\", 1, 0))) as pending,\n  count as total by power_action, delivery_group, _time\n| eval fail_pct=if(total>0, round(failed/total*100,1), 0)\n| where failed > 0 OR pending > 10\n| table _time, delivery_group, power_action, total, success, failed, pending, fail_pct\n```\n\nUnderstanding this SPL\n\n**Citrix Machine Power State Management** — Citrix Delivery Controllers manage VM power states through power policy schedules — powering on machines before business hours and off after hours to save resources. Failed power actions (VM failed to start, hypervisor timeout, stuck in boot) reduce available session capacity during peak hours. Monitoring power action success rates and queue depth ensures machines are ready when users arrive.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` fields `power_action`, `power_state`, `machine_name`, `delivery_group`, `action_result`. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (`TA-XD7-Broker`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by power_action, delivery_group, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failed > 0 OR pending > 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Machine Power State Management**): table _time, delivery_group, power_action, total, success, failed, pending, fail_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (power actions by result), Bar chart (failures by delivery group), Single value (pending queue depth).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md
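\n\nOptional companion search (illustrative sketch)\nStep 1 flags a pending queue deeper than 10 actions as a throttling or hypervisor bottleneck. A minimal sketch, assuming the same index/sourcetype as the main search; counting `Pending` events only approximates queue depth (the `Get-BrokerHostingPowerAction` scripted input mentioned in Step 1 is the authoritative real-time source):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" event_type=\"PowerAction\" action_result=\"Pending\"\n| stats count as pending_actions, values(power_action) as actions by delivery_group\n| where pending_actions > 10\n```\n\nRun it over a short window (for example the last 15 minutes) so stale Pending events do not inflate the count.",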
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on machine Power State Management and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.7",
              "n": "ICA/HDX Virtual Channel Bandwidth Consumption",
              "c": "medium",
              "f": "intermediate",
              "v": "HDX sessions use multiple virtual channels — graphics, audio, video, printer redirection, drive mapping, clipboard, and USB. Excessive bandwidth consumption on specific channels (e.g., large print jobs, multimedia redirection, USB device streaming) degrades the session experience for all users on the same VDA or network segment. Identifying bandwidth-heavy channels enables targeted policy optimization.",
              "t": "Template for Citrix XenDesktop 7 (`TA-XD7-VDA`)",
              "d": "`index=xd_perfmon` `sourcetype=\"citrix:vda:perfmon\"` fields `counter_name`, `counter_value`, `instance_name`, `vda_host`",
              "q": "index=xd_perfmon sourcetype=\"citrix:vda:perfmon\"\n  (counter_name=\"Output Bandwidth*\" OR counter_name=\"Input Bandwidth*\")\n| bin _time span=15m\n| stats avg(counter_value) as avg_bw_bps by instance_name, counter_name, vda_host, _time\n| eval avg_bw_kbps=round(avg_bw_bps/1024, 1)\n| where avg_bw_kbps > 500\n| sort -avg_bw_kbps\n| table _time, vda_host, instance_name, counter_name, avg_bw_kbps",
              "m": "Collect HDX virtual channel performance counters from VDAs. The Citrix ICA Session performance object exposes per-channel bandwidth metrics (Graphics, Audio, Printing, Drive Mapping, Clipboard, etc.). Alert on abnormal channel bandwidth: graphics channel above 5 Mbps sustained (possible unoptimized video), printing channel spikes (large print jobs), or drive mapping spikes (file copy operations). Use to tune HDX policies: enable adaptive transport, configure video codec, set print quality limits.",
              "z": "Stacked area chart (bandwidth by channel), Table (top bandwidth consumers), Bar chart (channel comparison by VDA).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (`TA-XD7-VDA`).\n• Ensure the following data sources are available: `index=xd_perfmon` `sourcetype=\"citrix:vda:perfmon\"` fields `counter_name`, `counter_value`, `instance_name`, `vda_host`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect HDX virtual channel performance counters from VDAs. The Citrix ICA Session performance object exposes per-channel bandwidth metrics (Graphics, Audio, Printing, Drive Mapping, Clipboard, etc.). Alert on abnormal channel bandwidth: graphics channel above 5 Mbps sustained (possible unoptimized video), printing channel spikes (large print jobs), or drive mapping spikes (file copy operations). Use to tune HDX policies: enable adaptive transport, configure video codec, set print quality limits…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd_perfmon sourcetype=\"citrix:vda:perfmon\"\n  (counter_name=\"Output Bandwidth*\" OR counter_name=\"Input Bandwidth*\")\n| bin _time span=15m\n| stats avg(counter_value) as avg_bw_bps by instance_name, counter_name, vda_host, _time\n| eval avg_bw_kbps=round(avg_bw_bps/1024, 1)\n| where avg_bw_kbps > 500\n| sort -avg_bw_kbps\n| table _time, vda_host, instance_name, counter_name, avg_bw_kbps\n```\n\nUnderstanding this SPL\n\n**ICA/HDX Virtual Channel Bandwidth Consumption** — HDX sessions use multiple virtual channels — graphics, audio, video, printer redirection, drive mapping, clipboard, and USB. Excessive bandwidth consumption on specific channels (e.g., large print jobs, multimedia redirection, USB device streaming) degrades the session experience for all users on the same VDA or network segment. Identifying bandwidth-heavy channels enables targeted policy optimization.\n\nDocumented **Data sources**: `index=xd_perfmon` `sourcetype=\"citrix:vda:perfmon\"` fields `counter_name`, `counter_value`, `instance_name`, `vda_host`. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (`TA-XD7-VDA`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd_perfmon; **sourcetype**: citrix:vda:perfmon. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd_perfmon, sourcetype=\"citrix:vda:perfmon\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by instance_name, counter_name, vda_host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_bw_kbps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_bw_kbps > 500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **ICA/HDX Virtual Channel Bandwidth Consumption**): table _time, vda_host, instance_name, counter_name, avg_bw_kbps\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area chart (bandwidth by channel), Table (top bandwidth consumers), Bar chart (channel comparison by VDA).
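\n\nOptional companion search (illustrative sketch)\nStep 1 singles out the graphics channel above 5 Mbps sustained as a sign of unoptimized video. A minimal sketch, assuming graphics-channel counter instances can be matched with `instance_name=\"*graphics*\"` (instance naming varies by VDA version, so confirm against your perfmon data) and keeping the main search's binary unit convention:\n\n```spl\nindex=xd_perfmon sourcetype=\"citrix:vda:perfmon\" counter_name=\"Output Bandwidth*\" instance_name=\"*graphics*\"\n| bin _time span=15m\n| stats avg(counter_value) as avg_bw_bps by vda_host, instance_name, _time\n| eval avg_bw_mbps=round(avg_bw_bps/1048576, 2)\n| where avg_bw_mbps > 5\n```\n\nSustained hits are candidates for HDX policy tuning (adaptive transport, video codec configuration) as described in Step 1.",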
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iCA/HDX Virtual Channel Bandwidth Consumption and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.8",
              "n": "Citrix Provisioning Services (PVS) vDisk Streaming Health",
              "c": "critical",
              "f": "advanced",
              "v": "In PVS-provisioned environments, target devices boot and run entirely from vDisk images streamed over the network. If PVS streaming degrades — due to network congestion, PVS server overload, or storage bottlenecks — target devices experience slow boot times, application hangs, and blue screens. Monitoring PVS streaming health ensures the foundation of the VDI environment remains solid. Write cache exhaustion on target devices is particularly dangerous as it causes immediate device failure.",
              "t": "Splunk Universal Forwarder on PVS servers, PowerShell scripted input via PVS MCLI",
              "d": "`index=xd` `sourcetype=\"citrix:pvs:stream\"` fields `pvs_server`, `target_device`, `vdisk_name`, `boot_time_sec`, `retries`, `cache_used_pct`, `cache_type`, `status`",
              "q": "index=xd sourcetype=\"citrix:pvs:stream\"\n| stats latest(status) as device_status, latest(boot_time_sec) as boot_sec, latest(retries) as retries, latest(cache_used_pct) as cache_pct by target_device, pvs_server, vdisk_name\n| where device_status!=\"Active\" OR boot_sec > 120 OR retries > 50 OR cache_pct > 80\n| eval risk=case(cache_pct>90, \"Critical-CacheExhaustion\", device_status!=\"Active\", \"Offline\", boot_sec>120, \"SlowBoot\", retries>50, \"HighRetries\", 1=1, \"Warning\")\n| sort -cache_pct\n| table target_device, pvs_server, vdisk_name, device_status, boot_sec, retries, cache_pct, risk",
              "m": "Deploy a Splunk Universal Forwarder on PVS servers and collect Stream Service event logs (enable event logging on each PVS server's Stream Service). Additionally, create a PowerShell scripted input using PVS MCLI commands (`Mcli-Get Device`, `Mcli-Get DiskVersion`) to collect target device status, boot times, retry counts, and write cache utilization. Alert on: boot times exceeding 120 seconds, stream retry counts above 50 (network/disk issues), write cache utilization above 80% (imminent exhaustion), or target devices dropping to inactive status. Monitor vDisk lock status to detect orphan locks preventing updates.",
              "z": "Table (target devices with health metrics), Gauge (write cache utilization), Bar chart (boot times by PVS server).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder on PVS servers, PowerShell scripted input via PVS MCLI.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:pvs:stream\"` fields `pvs_server`, `target_device`, `vdisk_name`, `boot_time_sec`, `retries`, `cache_used_pct`, `cache_type`, `status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy a Splunk Universal Forwarder on PVS servers and collect Stream Service event logs (enable event logging on each PVS server's Stream Service). Additionally, create a PowerShell scripted input using PVS MCLI commands (`Mcli-Get Device`, `Mcli-Get DiskVersion`) to collect target device status, boot times, retry counts, and write cache utilization. Alert on: boot times exceeding 120 seconds, stream retry counts above 50 (network/disk issues), write cache utilization above 80% (imminent exhaus…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:pvs:stream\"\n| stats latest(status) as device_status, latest(boot_time_sec) as boot_sec, latest(retries) as retries, latest(cache_used_pct) as cache_pct by target_device, pvs_server, vdisk_name\n| where device_status!=\"Active\" OR boot_sec > 120 OR retries > 50 OR cache_pct > 80\n| eval risk=case(cache_pct>90, \"Critical-CacheExhaustion\", device_status!=\"Active\", \"Offline\", boot_sec>120, \"SlowBoot\", retries>50, \"HighRetries\", 1=1, \"Warning\")\n| sort -cache_pct\n| table target_device, pvs_server, vdisk_name, device_status, boot_sec, retries, cache_pct, risk\n```\n\nUnderstanding this SPL\n\n**Citrix Provisioning Services (PVS) vDisk Streaming Health** — In PVS-provisioned environments, target devices boot and run entirely from vDisk images streamed over the network. If PVS streaming degrades — due to network congestion, PVS server overload, or storage bottlenecks — target devices experience slow boot times, application hangs, and blue screens. Monitoring PVS streaming health ensures the foundation of the VDI environment remains solid. Write cache exhaustion on target devices is particularly dangerous as it causes immediate…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:pvs:stream\"` fields `pvs_server`, `target_device`, `vdisk_name`, `boot_time_sec`, `retries`, `cache_used_pct`, `cache_type`, `status`. **App/TA** (typical add-on context): Splunk Universal Forwarder on PVS servers, PowerShell scripted input via PVS MCLI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:pvs:stream. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:pvs:stream\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by target_device, pvs_server, vdisk_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where device_status!=\"Active\" OR boot_sec > 120 OR retries > 50 OR cache_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix Provisioning Services (PVS) vDisk Streaming Health**): table target_device, pvs_server, vdisk_name, device_status, boot_sec, retries, cache_pct, risk\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (target devices with health metrics), Gauge (write cache utilization), Bar chart (boot times by PVS server).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md
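\n\nOptional companion search (illustrative sketch)\nStep 1 calls write cache above 80% an imminent-exhaustion condition, and the risk ranking in the main search treats >90% as critical. A minimal sketch that surfaces only the cache dimension, assuming the same index/sourcetype and that `cache_used_pct` is populated:\n\n```spl\nindex=xd sourcetype=\"citrix:pvs:stream\" cache_used_pct=*\n| stats max(cache_used_pct) as peak_cache_pct, latest(cache_type) as cache_type by target_device, pvs_server\n| where peak_cache_pct > 80\n| sort -peak_cache_pct\n```\n\nBecause cache exhaustion causes immediate device failure, consider a tighter alert schedule for this search than for the boot-time and retry checks.",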
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on provisioning Services (PVS) vDisk Streaming Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.9",
              "n": "Citrix Profile Management Load Time",
              "c": "high",
              "f": "intermediate",
              "v": "Citrix User Profile Management (UPM) loads user profiles at session logon — including registry hives, application settings, and redirected folders. Large or corrupted profiles cause logon delays that can extend login times by minutes. Profile streaming can significantly reduce load times (from 54 seconds to 20 seconds in Citrix tests), but only if properly configured. Monitoring profile load times identifies users with bloated profiles and validates that profile optimization features are effective.",
              "t": "Splunk Universal Forwarder on VDAs, Citrix UPM log collection",
              "d": "`index=xd` `sourcetype=\"citrix:upm:log\"` fields `user`, `profile_load_time_sec`, `profile_size_mb`, `streaming_enabled`, `error_message`, `vda_host`",
              "q": "index=xd sourcetype=\"citrix:upm:log\" event_type=\"ProfileLoad\"\n| bin _time span=1h\n| stats avg(profile_load_time_sec) as avg_load, perc95(profile_load_time_sec) as p95_load, avg(profile_size_mb) as avg_size, count as loads by vda_host, _time\n| where p95_load > 15\n| table _time, vda_host, loads, avg_load, p95_load, avg_size",
              "m": "Citrix Profile Management logs to `%SystemRoot%\\System32\\LogFiles\\UserProfileManager` on each VDA. Configure centralized log storage via UPM policy (store logs on a network share). Forward these logs to Splunk via Universal Forwarder. Parse for profile load/unload timing events. Track profile size growth per user over time. Alert on: p95 profile load time exceeding 15 seconds, individual profiles exceeding 500 MB, or UPM errors indicating profile corruption (\"Error while processing profile\" events). Validate that profile streaming is enabled and effective by comparing load times with/without streaming.",
              "z": "Line chart (profile load time trending), Bar chart (top users by profile size), Table (slow profile loads with user details).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder on VDAs, Citrix UPM log collection.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:upm:log\"` fields `user`, `profile_load_time_sec`, `profile_size_mb`, `streaming_enabled`, `error_message`, `vda_host`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCitrix Profile Management logs to `%SystemRoot%\\System32\\LogFiles\\UserProfileManager` on each VDA. Configure centralized log storage via UPM policy (store logs on a network share). Forward these logs to Splunk via Universal Forwarder. Parse for profile load/unload timing events. Track profile size growth per user over time. Alert on: p95 profile load time exceeding 15 seconds, individual profiles exceeding 500 MB, or UPM errors indicating profile corruption (\"Error while processing profile\" even…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:upm:log\" event_type=\"ProfileLoad\"\n| bin _time span=1h\n| stats avg(profile_load_time_sec) as avg_load, perc95(profile_load_time_sec) as p95_load, avg(profile_size_mb) as avg_size, count as loads by vda_host, _time\n| where p95_load > 15\n| table _time, vda_host, loads, avg_load, p95_load, avg_size\n```\n\nUnderstanding this SPL\n\n**Citrix Profile Management Load Time** — Citrix User Profile Management (UPM) loads user profiles at session logon — including registry hives, application settings, and redirected folders. Large or corrupted profiles cause logon delays that can extend login times by minutes. Profile streaming can significantly reduce load times (from 54 seconds to 20 seconds in Citrix tests), but only if properly configured. Monitoring profile load times identifies users with bloated profiles and validates that profile…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:upm:log\"` fields `user`, `profile_load_time_sec`, `profile_size_mb`, `streaming_enabled`, `error_message`, `vda_host`. **App/TA** (typical add-on context): Splunk Universal Forwarder on VDAs, Citrix UPM log collection. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:upm:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:upm:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by vda_host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_load > 15` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Profile Management Load Time**): table _time, vda_host, loads, avg_load, p95_load, avg_size\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (profile load time trending), Bar chart (top users by profile size), Table (slow profile loads with user details).
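\n\nOptional companion search (illustrative sketch)\nStep 1 also asks you to track per-user profile size and flag profiles exceeding 500 MB, which the per-VDA main search does not expose. A minimal sketch, assuming the same index/sourcetype and that `user` and `profile_size_mb` are extracted as documented:\n\n```spl\nindex=xd sourcetype=\"citrix:upm:log\" event_type=\"ProfileLoad\"\n| stats max(profile_size_mb) as max_size_mb, avg(profile_load_time_sec) as avg_load_sec by user\n| where max_size_mb > 500\n| sort -max_size_mb\n```\n\nUse the result as a cleanup worklist: bloated profiles are the usual suspects behind the p95 load-time breaches the main search alerts on.",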
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on profile Management Load Time and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.10",
              "n": "Citrix StoreFront Authentication and Enumeration Health",
              "c": "high",
              "f": "intermediate",
              "v": "Citrix StoreFront authenticates users and enumerates available applications and desktops before the session launch process even begins. StoreFront failures manifest as users seeing a blank application list or receiving authentication errors. Since StoreFront runs on IIS, monitoring IIS response codes, authentication success rates, and enumeration latency provides early warning of issues that block all user access.",
              "t": "Splunk Add-on for Microsoft IIS",
              "d": "`index=xd` `sourcetype=\"ms:iis:auto\"` fields `cs_uri_stem`, `sc_status`, `time_taken`, `cs_username`, `s_computername`",
              "q": "index=xd sourcetype=\"ms:iis:auto\" s_sitename=\"*StoreFront*\"\n| bin _time span=5m\n| stats sum(eval(if(sc_status>=500, 1, 0))) as server_errors,\n  sum(eval(if(sc_status=401, 1, 0))) as auth_failures,\n  sum(eval(if(sc_status>=200 AND sc_status<400, 1, 0))) as success,\n  avg(time_taken) as avg_response_ms, count as total by s_computername, _time\n| eval error_pct=round(server_errors/total*100,1)\n| eval auth_fail_pct=round(auth_failures/total*100,1)\n| where error_pct > 5 OR auth_fail_pct > 20 OR avg_response_ms > 5000\n| table _time, s_computername, total, success, server_errors, error_pct, auth_failures, auth_fail_pct, avg_response_ms",
              "m": "Install the Splunk Add-on for Microsoft IIS on StoreFront servers. StoreFront uses a custom IIS log field order — adjust the `auto_kv_for_iis_default` transform field list per Splunk's Content Pack documentation. Monitor HTTP status codes: 401 (authentication failure), 500+ (server errors), and response times. Key URIs to track: `/Citrix/StoreWeb/` (web interface), `/Citrix/Store/resources/` (resource enumeration), `/Citrix/Authentication/` (auth endpoint). Alert on server error rate exceeding 5% or authentication failure rate exceeding 20%. Correlate StoreFront errors with Active Directory health and Delivery Controller connectivity.",
              "z": "Timechart (requests by status code), Bar chart (error rates by StoreFront server), Table (slowest requests).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft IIS.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"ms:iis:auto\"` fields `cs_uri_stem`, `sc_status`, `time_taken`, `cs_username`, `s_computername`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall the Splunk Add-on for Microsoft IIS on StoreFront servers. StoreFront uses a custom IIS log field order — adjust the `auto_kv_for_iis_default` transform field list per Splunk's Content Pack documentation. Monitor HTTP status codes: 401 (authentication failure), 500+ (server errors), and response times. Key URIs to track: `/Citrix/StoreWeb/` (web interface), `/Citrix/Store/resources/` (resource enumeration), `/Citrix/Authentication/` (auth endpoint). Alert on server error rate exceeding 5…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"ms:iis:auto\" s_sitename=\"*StoreFront*\"\n| bin _time span=5m\n| stats sum(eval(if(sc_status>=500, 1, 0))) as server_errors,\n  sum(eval(if(sc_status=401, 1, 0))) as auth_failures,\n  sum(eval(if(sc_status>=200 AND sc_status<400, 1, 0))) as success,\n  avg(time_taken) as avg_response_ms, count as total by s_computername, _time\n| eval error_pct=round(server_errors/total*100,1)\n| eval auth_fail_pct=round(auth_failures/total*100,1)\n| where error_pct > 5 OR auth_fail_pct > 20 OR avg_response_ms > 5000\n| table _time, s_computername, total, success, server_errors, error_pct, auth_failures, auth_fail_pct, avg_response_ms\n```\n\nUnderstanding this SPL\n\n**Citrix StoreFront Authentication and Enumeration Health** — Citrix StoreFront authenticates users and enumerates available applications and desktops before the session launch process even begins. StoreFront failures manifest as users seeing a blank application list or receiving authentication errors. Since StoreFront runs on IIS, monitoring IIS response codes, authentication success rates, and enumeration latency provides early warning of issues that block all user access.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"ms:iis:auto\"` fields `cs_uri_stem`, `sc_status`, `time_taken`, `cs_username`, `s_computername`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft IIS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: ms:iis:auto. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"ms:iis:auto\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by s_computername, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **auth_fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_pct > 5 OR auth_fail_pct > 20 OR avg_response_ms > 5000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix StoreFront Authentication and Enumeration Health**): table _time, s_computername, total, success, server_errors, error_pct, auth_failures, auth_fail_pct, avg_response_ms\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Citrix StoreFront Authentication and Enumeration Health** — Citrix StoreFront authenticates users and enumerates available applications and desktops before the session launch process even begins. StoreFront failures manifest as users seeing a blank application list or receiving authentication errors. Since StoreFront runs on IIS, monitoring IIS response codes, authentication success rates, and enumeration latency provides early warning of issues that block all user access.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"ms:iis:auto\"` fields `cs_uri_stem`, `sc_status`, `time_taken`, `cs_username`, `s_computername`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft IIS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (requests by status code), Bar chart (error rates by StoreFront server), Table (slowest requests).
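\n\nOptional companion search (illustrative sketch)\nStep 1 lists `/Citrix/Authentication/` as a key URI and sets a 20% authentication-failure threshold. A minimal sketch that scopes the failure-rate check to that endpoint, assuming the same index/sourcetype and IIS field names; adjust the URI pattern to your store configuration:\n\n```spl\nindex=xd sourcetype=\"ms:iis:auto\" s_sitename=\"*StoreFront*\" cs_uri_stem=\"/Citrix/Authentication/*\"\n| bin _time span=5m\n| stats sum(eval(if(sc_status=401, 1, 0))) as auth_failures, count as total by s_computername, _time\n| eval auth_fail_pct=round(auth_failures/total*100,1)\n| where auth_fail_pct > 20\n```\n\nA spike here alongside healthy enumeration URIs usually points at Active Directory or credential issues rather than StoreFront itself, per the correlation guidance in Step 1.",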
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on storeFront Authentication and Enumeration Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=5m | sort - count",
              "e": [
                "iis"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.11",
              "n": "Citrix License Server Utilization and Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Citrix licensing is capacity-based — concurrent user/device licenses or per-user/per-device named licenses. Approaching license limits during peak hours causes session launch failures with \"no licenses available\" errors. While Citrix provides a grace period, operating within grace period indicates a compliance gap. Trending license utilization supports procurement planning and ensures continuous service availability.",
              "t": "Splunk Universal Forwarder on License Server, PowerShell scripted input",
              "d": "`index=xd` `sourcetype=\"citrix:licensing\"` fields `license_type`, `in_use`, `total`, `available`, `grace_period_active`, `expiration_date`",
              "q": "index=xd sourcetype=\"citrix:licensing\"\n| stats latest(in_use) as used, latest(total) as total, latest(available) as available, latest(grace_period_active) as grace, latest(expiration_date) as expiry by license_type\n| eval utilization_pct=round(used/total*100,1)\n| eval days_to_expiry=round((strptime(expiry, \"%Y-%m-%d\")-now())/86400,0)\n| where utilization_pct > 80 OR grace=\"true\" OR days_to_expiry < 90\n| table license_type, used, total, available, utilization_pct, grace, days_to_expiry",
              "m": "Create a PowerShell scripted input on the Citrix License Server that queries license usage via `Get-LicInventory` and `Get-LicUsage` cmdlets or the Citrix Licensing WMI provider. Collect total licenses, in-use count, available count, grace period status, and license expiration dates. Run every 15 minutes. Alert at 80% utilization (capacity planning), 90% (operational warning), and immediately if grace period becomes active. Also alert 90 days before license expiration. Track peak utilization by hour and day of week for procurement planning.",
              "z": "Gauge (license utilization %), Timechart (license usage over time), Table (license types with expiry dates).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder on License Server, PowerShell scripted input.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:licensing\"` fields `license_type`, `in_use`, `total`, `available`, `grace_period_active`, `expiration_date`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a PowerShell scripted input on the Citrix License Server that queries license usage via `Get-LicInventory` and `Get-LicUsage` cmdlets or the Citrix Licensing WMI provider. Collect total licenses, in-use count, available count, grace period status, and license expiration dates. Run every 15 minutes. Alert at 80% utilization (capacity planning), 90% (operational warning), and immediately if grace period becomes active. Also alert 90 days before license expiration. Track peak utilization by …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:licensing\"\n| stats latest(in_use) as used, latest(total) as total, latest(available) as available, latest(grace_period_active) as grace, latest(expiration_date) as expiry by license_type\n| eval utilization_pct=round(used/total*100,1)\n| eval days_to_expiry=round((strptime(expiry, \"%Y-%m-%d\")-now())/86400,0)\n| where utilization_pct > 80 OR grace=\"true\" OR days_to_expiry < 90\n| table license_type, used, total, available, utilization_pct, grace, days_to_expiry\n```\n\nUnderstanding this SPL\n\n**Citrix License Server Utilization and Compliance** — Citrix licensing is capacity-based — concurrent user/device licenses or per-user/per-device named licenses. Approaching license limits during peak hours causes session launch failures with \"no licenses available\" errors. While Citrix provides a grace period, operating within grace period indicates a compliance gap. Trending license utilization supports procurement planning and ensures continuous service availability.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:licensing\"` fields `license_type`, `in_use`, `total`, `available`, `grace_period_active`, `expiration_date`. **App/TA** (typical add-on context): Splunk Universal Forwarder on License Server, PowerShell scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:licensing. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:licensing\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by license_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where utilization_pct > 80 OR grace=\"true\" OR days_to_expiry < 90` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix License Server Utilization and Compliance**): table license_type, used, total, available, utilization_pct, grace, days_to_expiry\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (license utilization %), Timechart (license usage over time), Table (license types with expiry dates).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on license Server Utilization and and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.12",
              "n": "Citrix Application Usage and Popularity Analytics",
              "c": "medium",
              "f": "beginner",
              "v": "Understanding which published applications are most used, by which user groups, and at what times enables informed capacity planning, application retirement decisions, and license optimization. Applications with zero usage can be decommissioned to reduce attack surface and management overhead. High-usage applications may need dedicated delivery groups or additional server capacity.",
              "t": "Template for Citrix XenDesktop 7 (`TA-XD7-Broker`), Citrix Monitor Service OData API",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` fields `app_name`, `user`, `delivery_group`, `session_duration_min`",
              "q": "index=xd sourcetype=\"citrix:broker:events\" event_type=\"ApplicationLaunch\"\n| bin _time span=1d\n| stats dc(user) as unique_users, count as launches, avg(session_duration_min) as avg_duration by app_name, _time\n| sort -unique_users\n| table _time, app_name, unique_users, launches, avg_duration",
              "m": "Collect application launch events from the Broker Service event log or Monitor Service OData API `ApplicationInstances` endpoint. Track application name, launching user, delivery group, and session duration. Generate weekly reports showing: most-used applications (by unique users and total launches), least-used applications (candidates for retirement), peak usage hours per application, and average session duration. Correlate with license costs per application for ROI analysis.",
              "z": "Bar chart (top applications by users), Heatmap (application usage by hour), Table (unused applications).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (`TA-XD7-Broker`), Citrix Monitor Service OData API.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` fields `app_name`, `user`, `delivery_group`, `session_duration_min`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect application launch events from the Broker Service event log or Monitor Service OData API `ApplicationInstances` endpoint. Track application name, launching user, delivery group, and session duration. Generate weekly reports showing: most-used applications (by unique users and total launches), least-used applications (candidates for retirement), peak usage hours per application, and average session duration. Correlate with license costs per application for ROI analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" event_type=\"ApplicationLaunch\"\n| bin _time span=1d\n| stats dc(user) as unique_users, count as launches, avg(session_duration_min) as avg_duration by app_name, _time\n| sort -unique_users\n| table _time, app_name, unique_users, launches, avg_duration\n```\n\nUnderstanding this SPL\n\n**Citrix Application Usage and Popularity Analytics** — Understanding which published applications are most used, by which user groups, and at what times enables informed capacity planning, application retirement decisions, and license optimization. Applications with zero usage can be decommissioned to reduce attack surface and management overhead. High-usage applications may need dedicated delivery groups or additional server capacity.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` fields `app_name`, `user`, `delivery_group`, `session_duration_min`. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (`TA-XD7-Broker`), Citrix Monitor Service OData API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by app_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix Application Usage and Popularity Analytics**): table _time, app_name, unique_users, launches, avg_duration\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top applications by users), Heatmap (application usage by hour), Table (unused applications).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on application Usage and Popularity Analytics and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity",
                "Analytics"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.13",
              "n": "Citrix Federated Authentication Service (FAS) Certificate Health",
              "c": "high",
              "f": "advanced",
              "v": "Citrix FAS dynamically issues short-lived certificates that allow users to log on to VDA sessions as if they had a smart card — enabling passwordless SSO from StoreFront via SAML or other federated identity providers. If FAS cannot reach the Certificate Authority or certificate signing takes too long, user authentication fails entirely. FAS is a privileged component with access to private keys, making its security monitoring equally critical.",
              "t": "Splunk Universal Forwarder on FAS servers",
              "d": "`index=xd` `sourcetype=\"citrix:fas:events\"` fields `event_type`, `user`, `certificate_status`, `signing_time_ms`, `ca_server`, `error_message`",
              "q": "index=xd sourcetype=\"citrix:fas:events\"\n| bin _time span=15m\n| stats sum(eval(if(certificate_status=\"Issued\", 1, 0))) as issued,\n  sum(eval(if(certificate_status=\"Failed\", 1, 0))) as failed,\n  avg(signing_time_ms) as avg_sign_ms, max(signing_time_ms) as max_sign_ms by ca_server, _time\n| eval fail_pct=if((issued+failed)>0, round(failed/(issued+failed)*100,1), 0)\n| where failed > 0 OR avg_sign_ms > 2000\n| table _time, ca_server, issued, failed, fail_pct, avg_sign_ms, max_sign_ms",
              "m": "Deploy a Splunk Universal Forwarder on FAS servers and collect the Citrix FAS application event log. FAS logs certificate issuance attempts, CA connectivity status, and certificate signing operations. Monitor for: certificate issuance failures (CA unreachable, template misconfigured), slow certificate signing (>2 seconds impacts logon), RA certificate expiration (FAS's own registration authority certificate), and unauthorized certificate requests. FAS PowerShell cmdlets (`Get-FasRaCertificateMonitor`, `Test-FasUserCertificateCrypto`) can be used via scripted inputs for proactive health checks. Alert immediately on any certificate issuance failure as it blocks user authentication.",
              "z": "Timechart (certificate issuance success vs failure), Single value (current CA reachability), Table (failed certificate requests with error details).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder on FAS servers.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:fas:events\"` fields `event_type`, `user`, `certificate_status`, `signing_time_ms`, `ca_server`, `error_message`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy a Splunk Universal Forwarder on FAS servers and collect the Citrix FAS application event log. FAS logs certificate issuance attempts, CA connectivity status, and certificate signing operations. Monitor for: certificate issuance failures (CA unreachable, template misconfigured), slow certificate signing (>2 seconds impacts logon), RA certificate expiration (FAS's own registration authority certificate), and unauthorized certificate requests. FAS PowerShell cmdlets (`Get-FasRaCertificateMon…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:fas:events\"\n| bin _time span=15m\n| stats sum(eval(if(certificate_status=\"Issued\", 1, 0))) as issued,\n  sum(eval(if(certificate_status=\"Failed\", 1, 0))) as failed,\n  avg(signing_time_ms) as avg_sign_ms, max(signing_time_ms) as max_sign_ms by ca_server, _time\n| eval fail_pct=if((issued+failed)>0, round(failed/(issued+failed)*100,1), 0)\n| where failed > 0 OR avg_sign_ms > 2000\n| table _time, ca_server, issued, failed, fail_pct, avg_sign_ms, max_sign_ms\n```\n\nUnderstanding this SPL\n\n**Citrix Federated Authentication Service (FAS) Certificate Health** — Citrix FAS dynamically issues short-lived certificates that allow users to log on to VDA sessions as if they had a smart card — enabling passwordless SSO from StoreFront via SAML or other federated identity providers. If FAS cannot reach the Certificate Authority or certificate signing takes too long, user authentication fails entirely. FAS is a privileged component with access to private keys, making its security monitoring equally critical.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:fas:events\"` fields `event_type`, `user`, `certificate_status`, `signing_time_ms`, `ca_server`, `error_message`. **App/TA** (typical add-on context): Splunk Universal Forwarder on FAS servers. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:fas:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:fas:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by ca_server, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failed > 0 OR avg_sign_ms > 2000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Federated Authentication Service (FAS) Certificate Health**): table _time, ca_server, issued, failed, fail_pct, avg_sign_ms, max_sign_ms\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=15m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Citrix Federated Authentication Service (FAS) Certificate Health** — Citrix FAS dynamically issues short-lived certificates that allow users to log on to VDA sessions as if they had a smart card — enabling passwordless SSO from StoreFront via SAML or other federated identity providers. If FAS cannot reach the Certificate Authority or certificate signing takes too long, user authentication fails entirely. FAS is a privileged component with access to private keys, making its security monitoring equally critical.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:fas:events\"` fields `event_type`, `user`, `certificate_status`, `signing_time_ms`, `ca_server`, `error_message`. **App/TA** (typical add-on context): Splunk Universal Forwarder on FAS servers. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (certificate issuance success vs failure), Single value (current CA reachability), Table (failed certificate requests with error details).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on federated Authentication Service (FAS) Certificate Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=15m | sort - count",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.14",
              "n": "Citrix Workspace Environment Management (WEM) Optimization Effectiveness",
              "c": "medium",
              "f": "intermediate",
              "v": "Citrix WEM uses CPU spike protection and CPU clamping to prevent individual processes from monopolizing session host resources. Monitoring WEM optimization actions reveals which processes trigger CPU throttling, how often protection engages, and whether the configured thresholds are appropriate. Excessive WEM interventions may indicate undersized VDAs or resource-hungry applications that need attention.",
              "t": "Splunk Universal Forwarder on VDAs, WEM agent logs",
              "d": "`index=xd` `sourcetype=\"citrix:wem:agent\"` fields `action_type`, `process_name`, `cpu_threshold`, `duration_sec`, `user`, `vda_host`",
              "q": "index=xd sourcetype=\"citrix:wem:agent\" (action_type=\"CpuSpikeProtection\" OR action_type=\"CpuClamping\")\n| bin _time span=1h\n| stats count as interventions, dc(process_name) as unique_processes, dc(user) as affected_users, values(process_name) as throttled_processes by action_type, vda_host, _time\n| where interventions > 10\n| table _time, vda_host, action_type, interventions, unique_processes, affected_users, throttled_processes",
              "m": "Collect WEM agent logs from VDAs. The WEM agent logs CPU spike protection events (process priority lowered) and CPU clamping events (process throttled) with the offending process name, CPU threshold that was exceeded, and duration of the intervention. Alert when a single VDA experiences more than 10 WEM interventions per hour (indicates capacity issue). Track the most frequently throttled processes — these are candidates for application optimization, isolation to dedicated delivery groups, or VDA resource increases. Compare WEM intervention frequency before and after VDA resource changes to validate capacity additions.",
              "z": "Bar chart (top throttled processes), Timechart (WEM interventions over time), Table (VDAs with most frequent interventions).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder on VDAs, WEM agent logs.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:wem:agent\"` fields `action_type`, `process_name`, `cpu_threshold`, `duration_sec`, `user`, `vda_host`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect WEM agent logs from VDAs. The WEM agent logs CPU spike protection events (process priority lowered) and CPU clamping events (process throttled) with the offending process name, CPU threshold that was exceeded, and duration of the intervention. Alert when a single VDA experiences more than 10 WEM interventions per hour (indicates capacity issue). Track the most frequently throttled processes — these are candidates for application optimization, isolation to dedicated delivery groups, or VD…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:wem:agent\" (action_type=\"CpuSpikeProtection\" OR action_type=\"CpuClamping\")\n| bin _time span=1h\n| stats count as interventions, dc(process_name) as unique_processes, dc(user) as affected_users, values(process_name) as throttled_processes by action_type, vda_host, _time\n| where interventions > 10\n| table _time, vda_host, action_type, interventions, unique_processes, affected_users, throttled_processes\n```\n\nUnderstanding this SPL\n\n**Citrix Workspace Environment Management (WEM) Optimization Effectiveness** — Citrix WEM uses CPU spike protection and CPU clamping to prevent individual processes from monopolizing session host resources. Monitoring WEM optimization actions reveals which processes trigger CPU throttling, how often protection engages, and whether the configured thresholds are appropriate. Excessive WEM interventions may indicate undersized VDAs or resource-hungry applications that need attention.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:wem:agent\"` fields `action_type`, `process_name`, `cpu_threshold`, `duration_sec`, `user`, `vda_host`. **App/TA** (typical add-on context): Splunk Universal Forwarder on VDAs, WEM agent logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:wem:agent. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:wem:agent\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by action_type, vda_host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where interventions > 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Workspace Environment Management (WEM) Optimization Effectiveness**): table _time, vda_host, action_type, interventions, unique_processes, affected_users, throttled_processes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top throttled processes), Timechart (WEM interventions over time), Table (VDAs with most frequent interventions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on workspace Environment Management (WEM) Optimization Effectiveness and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.15",
              "n": "Citrix Session Recording Compliance Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Citrix Session Recording captures video recordings of user sessions for compliance auditing in regulated industries (healthcare, finance, government). Monitoring ensures that recording policies are consistently applied — sessions that should be recorded are being recorded, storage capacity is adequate, and recordings maintain integrity via digital signatures. Gaps in recording coverage represent compliance violations.",
              "t": "Splunk Universal Forwarder on Session Recording servers",
              "d": "`index=xd` `sourcetype=\"citrix:sessionrecording\"` fields `recording_status`, `session_id`, `user`, `policy_name`, `file_size_mb`, `storage_used_pct`, `signed`",
              "q": "index=xd sourcetype=\"citrix:sessionrecording\"\n| stats sum(eval(if(recording_status=\"Recording\", 1, 0))) as active_recordings,\n  sum(eval(if(recording_status=\"Failed\", 1, 0))) as failed_recordings,\n  sum(file_size_mb) as total_storage_mb, latest(storage_used_pct) as storage_pct,\n  sum(eval(if(signed=\"false\", 1, 0))) as unsigned by policy_name\n| eval fail_pct=if((active_recordings+failed_recordings)>0, round(failed_recordings/(active_recordings+failed_recordings)*100,1), 0)\n| where failed_recordings > 0 OR storage_pct > 80 OR unsigned > 0\n| table policy_name, active_recordings, failed_recordings, fail_pct, total_storage_mb, storage_pct, unsigned",
              "m": "Deploy a Splunk Universal Forwarder on Session Recording servers to collect session recording events and storage metrics. Monitor for: recording failures (disk full, agent disconnected, policy misconfiguration), storage capacity approaching limits (>80%), unsigned recordings (integrity concern), and sessions matching recording policy criteria that were not actually recorded (coverage gap). Generate daily compliance reports listing all recorded sessions by user, duration, and policy applied. Required for PCI DSS, HIPAA, and SOX environments where privileged access monitoring is mandated.",
              "z": "Single value (recording compliance %), Gauge (storage utilization), Table (failed recordings with error details).",
              "kfp": "Session and launch time spikes may track image updates, profile migrations, or broker outages; compare to the same app on other sites before blaming the network only.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder on Session Recording servers.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:sessionrecording\"` fields `recording_status`, `session_id`, `user`, `policy_name`, `file_size_mb`, `storage_used_pct`, `signed`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy a Splunk Universal Forwarder on Session Recording servers to collect session recording events and storage metrics. Monitor for: recording failures (disk full, agent disconnected, policy misconfiguration), storage capacity approaching limits (>80%), unsigned recordings (integrity concern), and sessions matching recording policy criteria that were not actually recorded (coverage gap). Generate daily compliance reports listing all recorded sessions by user, duration, and policy applied. Requ…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:sessionrecording\"\n| stats sum(eval(if(recording_status=\"Recording\", 1, 0))) as active_recordings,\n  sum(eval(if(recording_status=\"Failed\", 1, 0))) as failed_recordings,\n  sum(file_size_mb) as total_storage_mb, latest(storage_used_pct) as storage_pct,\n  sum(eval(if(signed=\"false\", 1, 0))) as unsigned by policy_name\n| eval fail_pct=if((active_recordings+failed_recordings)>0, round(failed_recordings/(active_recordings+failed_recordings)*100,1), 0)\n| where failed_recordings > 0 OR storage_pct > 80 OR unsigned > 0\n| table policy_name, active_recordings, failed_recordings, fail_pct, total_storage_mb, storage_pct, unsigned\n```\n\nUnderstanding this SPL\n\n**Citrix Session Recording Compliance Monitoring** — Citrix Session Recording captures video recordings of user sessions for compliance auditing in regulated industries (healthcare, finance, government). Monitoring ensures that recording policies are consistently applied — sessions that should be recorded are being recorded, storage capacity is adequate, and recordings maintain integrity via digital signatures. Gaps in recording coverage represent compliance violations.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:sessionrecording\"` fields `recording_status`, `session_id`, `user`, `policy_name`, `file_size_mb`, `storage_used_pct`, `signed`. **App/TA** (typical add-on context): Splunk Universal Forwarder on Session Recording servers. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:sessionrecording. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:sessionrecording\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by policy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failed_recordings > 0 OR storage_pct > 80 OR unsigned > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Session Recording Compliance Monitoring**): table policy_name, active_recordings, failed_recordings, fail_pct, total_storage_mb, storage_pct, unsigned\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (recording compliance %), Gauge (storage utilization), Table (failed recordings with error details).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on session Recording Compliance and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.16",
              "n": "Citrix Cloud Connector Health",
              "c": "critical",
              "f": "beginner",
              "v": "For Citrix DaaS (cloud-managed) deployments, Cloud Connectors are the link between on-premises resources and Citrix Cloud. If all Cloud Connectors in a resource location fail, the site enters Local Host Cache (LHC) mode with limited functionality — no new machine registrations, no power management, and no access to cloud-hosted services. Monitoring connector health ensures continuous cloud management connectivity.",
              "t": "Splunk Universal Forwarder on Cloud Connector hosts",
              "d": "`index=xd` `sourcetype=\"citrix:cloudconnector\"` fields `connector_host`, `connectivity_status`, `last_contact`, `service_status`, `resource_location`",
              "q": "index=xd sourcetype=\"citrix:cloudconnector\"\n| stats latest(connectivity_status) as cloud_status, latest(service_status) as svc_status, latest(_time) as last_seen by connector_host, resource_location\n| eval hours_since_contact=round((now()-last_seen)/3600, 1)\n| where cloud_status!=\"Connected\" OR svc_status!=\"Running\" OR hours_since_contact > 1\n| table connector_host, resource_location, cloud_status, svc_status, hours_since_contact",
              "m": "Deploy a Splunk Universal Forwarder on Cloud Connector hosts and monitor Windows Event Logs for Citrix Cloud Connector events. Also run the Cloud Health Check utility via scheduled PowerShell scripted input to validate connectivity to Citrix Cloud services. Track connectivity status (Connected, Disconnected), service health, and last successful cloud contact time. Alert when: any connector loses cloud connectivity for more than 15 minutes, all connectors in a resource location become disconnected (LHC mode imminent), or Cloud Connector services stop. Ensure at least 2 connectors per resource location for redundancy.",
              "z": "Status grid (connector x resource location), Timeline (connectivity events), Single value (connected connectors count).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[uberAgent indexer app](https://splunkbase.splunk.com/app/2998), [Splunkbase app 1448](https://splunkbase.splunk.com/app/1448)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder on Cloud Connector hosts.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:cloudconnector\"` fields `connector_host`, `connectivity_status`, `last_contact`, `service_status`, `resource_location`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy a Splunk Universal Forwarder on Cloud Connector hosts and monitor Windows Event Logs for Citrix Cloud Connector events. Also run the Cloud Health Check utility via scheduled PowerShell scripted input to validate connectivity to Citrix Cloud services. Track connectivity status (Connected, Disconnected), service health, and last successful cloud contact time. Alert when: any connector loses cloud connectivity for more than 15 minutes, all connectors in a resource location become disconnecte…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:cloudconnector\"\n| stats latest(connectivity_status) as cloud_status, latest(service_status) as svc_status, latest(_time) as last_seen by connector_host, resource_location\n| eval hours_since_contact=round((now()-last_seen)/3600, 1)\n| where cloud_status!=\"Connected\" OR svc_status!=\"Running\" OR hours_since_contact > 1\n| table connector_host, resource_location, cloud_status, svc_status, hours_since_contact\n```\n\nUnderstanding this SPL\n\n**Citrix Cloud Connector Health** — For Citrix DaaS (cloud-managed) deployments, Cloud Connectors are the link between on-premises resources and Citrix Cloud. If all Cloud Connectors in a resource location fail, the site enters Local Host Cache (LHC) mode with limited functionality — no new machine registrations, no power management, and no access to cloud-hosted services. Monitoring connector health ensures continuous cloud management connectivity.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:cloudconnector\"` fields `connector_host`, `connectivity_status`, `last_contact`, `service_status`, `resource_location`. **App/TA** (typical add-on context): Splunk Universal Forwarder on Cloud Connector hosts. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:cloudconnector. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:cloudconnector\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by connector_host, resource_location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since_contact** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cloud_status!=\"Connected\" OR svc_status!=\"Running\" OR hours_since_contact > 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Cloud Connector Health**): table connector_host, resource_location, cloud_status, svc_status, hours_since_contact\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (connector x resource location), Timeline (connectivity events), Single value (connected connectors count).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on cloud Connector Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.17",
              "n": "uberAgent Experience Score Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "uberAgent's Experience Score is a composite 0–10 metric that summarises the end-user experience across multiple dimensions — session responsiveness, application performance, logon speed, and machine health. A single score per user per session makes it possible to answer \"how is the Citrix experience right now?\" without inspecting dozens of individual metrics. Score drops correlate directly with helpdesk call volume.",
              "t": "uberAgent UXM (Splunkbase 1448)",
              "d": "`index=score_uberagent_uxm` — Experience Scores are calculated by saved searches on the search head and stored in a dedicated Splunk index.",
              "q": "index=score_uberagent_uxm earliest=-4h\n| search ScoreType=\"overall\"\n| bin _time span=30m\n| stats avg(Score) as avg_score perc10(Score) as p10_score dc(Host) as hosts by _time\n| eval quality=case(avg_score>=7, \"Good\", avg_score>=4, \"Medium\", 1=1, \"Bad\")\n| table _time, avg_score, p10_score, hosts, quality",
              "m": "uberAgent UXM calculates Experience Scores via saved searches that run every 30 minutes on the search head, evaluating machine, session, and application health. Scores are stored in the `score_uberagent_uxm` index. No additional agent configuration is required beyond uberAgent deployment. Alert when the fleet-wide average drops below 4 (bad) or when p10 drops below 4. The score dashboard is the default entry point of the uberAgent UXM Splunk app. Score thresholds can be customised via lookup files (`score_machine_configuration.csv`, `score_session_configuration.csv`, `score_application_configuration.csv`).",
              "z": "Line chart (score over time), Gauge (fleet average), Heatmap (delivery group x hour), Table (worst-scoring users).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[uberAgent UXM](https://splunkbase.splunk.com/app/1448)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448).\n• Ensure the following data sources are available: `index=score_uberagent_uxm` — Experience Scores are calculated by saved searches on the search head and stored in a dedicated Splunk index..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nuberAgent UXM calculates Experience Scores via saved searches that run every 30 minutes on the search head, evaluating machine, session, and application health. Scores are stored in the `score_uberagent_uxm` index. No additional agent configuration is required beyond uberAgent deployment. Alert when the fleet-wide average drops below 4 (bad) or when p10 drops below 4. The score dashboard is the default entry point of the uberAgent UXM Splunk app. Score thresholds can be customised via lookup fil…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=score_uberagent_uxm earliest=-4h\n| search ScoreType=\"overall\"\n| bin _time span=30m\n| stats avg(Score) as avg_score perc10(Score) as p10_score dc(Host) as hosts by _time\n| eval quality=case(avg_score>=7, \"Good\", avg_score>=4, \"Medium\", 1=1, \"Bad\")\n| table _time, avg_score, p10_score, hosts, quality\n```\n\nUnderstanding this SPL\n\n**uberAgent Experience Score Monitoring** — uberAgent's Experience Score is a composite 0–10 metric that summarises the end-user experience across multiple dimensions — session responsiveness, application performance, logon speed, and machine health. A single score per user per session makes it possible to answer \"how is the Citrix experience right now?\" without inspecting dozens of individual metrics. Score drops correlate directly with helpdesk call volume.\n\nDocumented **Data sources**: `index=score_uberagent_uxm` — Experience Scores are calculated by saved searches on the search head and stored in a dedicated Splunk index. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: score_uberagent_uxm.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=score_uberagent_uxm, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **quality** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **uberAgent Experience Score Monitoring**): table _time, avg_score, p10_score, hosts, quality\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (score over time), Gauge (fleet average), Heatmap (delivery group x hour), Table (worst-scoring users).",
              "script": "",
              "premium": "",
              "hw": "Windows VDAs, Citrix CVAD / DaaS",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on uberAgent Experience Score and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.18",
              "n": "Application Unresponsiveness (UI Hangs) Detection",
              "c": "high",
              "f": "beginner",
              "v": "Application hangs — where the UI becomes unresponsive and shows \"Not Responding\" — are a major source of user frustration in CVAD sessions. Unlike crashes, hangs don't generate Windows Error Reporting events and are invisible to most monitoring tools. uberAgent detects them in real-time by monitoring message pump responsiveness, capturing which application hung, for how long, and what the user was doing.",
              "t": "uberAgent UXM (Splunkbase 1448)",
              "d": "`sourcetype=\"uberAgent:Application:UIDelay\"`",
              "q": "index=uberagent sourcetype=\"uberAgent:Application:UIDelay\" earliest=-24h\n| stats count as hang_count avg(UIDelayDurationMs) as avg_hang_ms max(UIDelayDurationMs) as max_hang_ms dc(User) as affected_users by AppName, AppVersion\n| where hang_count > 5\n| eval avg_hang_sec=round(avg_hang_ms/1000,1)\n| sort -hang_count\n| table AppName, AppVersion, hang_count, avg_hang_sec, affected_users",
              "m": "uberAgent detects UI unresponsiveness automatically. No special configuration required. Use the data to identify problematic applications, correlate hangs with VDA resource contention (CPU, memory), and prioritise application remediation. Alert when a single application generates more than 20 hangs per hour across the fleet.",
              "z": "Bar chart (hangs by application), Line chart (hang frequency over time), Table (worst applications with user impact).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[uberAgent UXM](https://splunkbase.splunk.com/app/1448)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448).\n• Ensure the following data sources are available: `sourcetype=\"uberAgent:Application:UIDelay\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nuberAgent detects UI unresponsiveness automatically. No special configuration required. Use the data to identify problematic applications, correlate hangs with VDA resource contention (CPU, memory), and prioritise application remediation. Alert when a single application generates more than 20 hangs per hour across the fleet.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=uberagent sourcetype=\"uberAgent:Application:UIDelay\" earliest=-24h\n| stats count as hang_count avg(UIDelayDurationMs) as avg_hang_ms max(UIDelayDurationMs) as max_hang_ms dc(User) as affected_users by AppName, AppVersion\n| where hang_count > 5\n| eval avg_hang_sec=round(avg_hang_ms/1000,1)\n| sort -hang_count\n| table AppName, AppVersion, hang_count, avg_hang_sec, affected_users\n```\n\nUnderstanding this SPL\n\n**Application Unresponsiveness (UI Hangs) Detection** — Application hangs — where the UI becomes unresponsive and shows \"Not Responding\" — are a major source of user frustration in CVAD sessions. Unlike crashes, hangs don't generate Windows Error Reporting events and are invisible to most monitoring tools. uberAgent detects them in real-time by monitoring message pump responsiveness, capturing which application hung, for how long, and what the user was doing.\n\nDocumented **Data sources**: `sourcetype=\"uberAgent:Application:UIDelay\"`. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: uberagent; **sourcetype**: uberAgent:Application:UIDelay. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=uberagent, sourcetype=\"uberAgent:Application:UIDelay\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by AppName, AppVersion** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where hang_count > 5` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **avg_hang_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Application Unresponsiveness (UI Hangs) Detection**): table AppName, AppVersion, hang_count, avg_hang_sec, affected_users\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (hangs by application), Line chart (hang frequency over time), Table (worst applications with user impact).",
              "script": "",
              "premium": "",
              "hw": "Windows VDAs, Windows endpoints",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on application Unresponsiveness (UI Hangs) Detection and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.19",
              "n": "Application Startup Duration Tracking",
              "c": "high",
              "f": "beginner",
              "v": "How long applications take to become usable after launch directly impacts perceived performance. A user launching Outlook, SAP, or a browser expects it within seconds. uberAgent measures the time from process start to the application window being interactive, capturing real user-perceived startup times rather than just process creation. Slow startups indicate disk I/O contention, antivirus interference, or application configuration issues.",
              "t": "uberAgent UXM (Splunkbase 1448)",
              "d": "`sourcetype=\"uberAgent:Process:ProcessStartup\"`",
              "q": "index=uberagent sourcetype=\"uberAgent:Process:ProcessStartup\" earliest=-24h\n| stats avg(StartupTimeMs) as avg_startup_ms perc95(StartupTimeMs) as p95_startup_ms count as launches dc(User) as users by AppName\n| eval avg_startup_sec=round(avg_startup_ms/1000,1), p95_startup_sec=round(p95_startup_ms/1000,1)\n| where p95_startup_sec > 10\n| sort -p95_startup_sec\n| table AppName, launches, users, avg_startup_sec, p95_startup_sec",
              "m": "uberAgent measures startup duration automatically for all applications. Baseline normal startup times per application. Alert when p95 startup exceeds thresholds (e.g., >10s for Outlook, >15s for SAP). Trend over time to detect regression after updates or image changes.",
              "z": "Bar chart (p95 startup by app), Line chart (startup trending), Table (slowest applications).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[uberAgent UXM](https://splunkbase.splunk.com/app/1448)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448).\n• Ensure the following data sources are available: `sourcetype=\"uberAgent:Process:ProcessStartup\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nuberAgent measures startup duration automatically for all applications. Baseline normal startup times per application. Alert when p95 startup exceeds thresholds (e.g., >10s for Outlook, >15s for SAP). Trend over time to detect regression after updates or image changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=uberagent sourcetype=\"uberAgent:Process:ProcessStartup\" earliest=-24h\n| stats avg(StartupTimeMs) as avg_startup_ms perc95(StartupTimeMs) as p95_startup_ms count as launches dc(User) as users by AppName\n| eval avg_startup_sec=round(avg_startup_ms/1000,1), p95_startup_sec=round(p95_startup_ms/1000,1)\n| where p95_startup_sec > 10\n| sort -p95_startup_sec\n| table AppName, launches, users, avg_startup_sec, p95_startup_sec\n```\n\nUnderstanding this SPL\n\n**Application Startup Duration Tracking** — How long applications take to become usable after launch directly impacts perceived performance. A user launching Outlook, SAP, or a browser expects it within seconds. uberAgent measures the time from process start to the application window being interactive, capturing real user-perceived startup times rather than just process creation. Slow startups indicate disk I/O contention, antivirus interference, or application configuration issues.\n\nDocumented **Data sources**: `sourcetype=\"uberAgent:Process:ProcessStartup\"`. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: uberagent; **sourcetype**: uberAgent:Process:ProcessStartup. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=uberagent, sourcetype=\"uberAgent:Process:ProcessStartup\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by AppName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_startup_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where p95_startup_sec > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Application Startup Duration Tracking**): table AppName, launches, users, avg_startup_sec, p95_startup_sec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (p95 startup by app), Line chart (startup trending), Table (slowest applications).",
              "script": "",
              "premium": "",
              "hw": "Windows VDAs, Windows endpoints",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on application Startup Duration and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.20",
              "n": "Browser Performance per Web Application",
              "c": "medium",
              "f": "beginner",
              "v": "Many Citrix-delivered workloads are browser-based (SaaS applications, internal portals). uberAgent's browser extensions measure page load time, network latency, and rendering performance per website/URL. This reveals whether slow web application performance is due to the Citrix session, the network, or the web application itself — a critical distinction for troubleshooting.",
              "t": "uberAgent UXM (Splunkbase 1448) + browser extension (Chrome, Edge, Firefox)",
              "d": "`sourcetype=\"uberAgent:Application:BrowserWebRequests2\"`",
              "q": "index=uberagent sourcetype=\"uberAgent:Application:BrowserWebRequests2\" earliest=-24h\n| stats avg(PageLoadTotalDurationMs) as avg_load_ms perc95(PageLoadTotalDurationMs) as p95_load_ms count as page_loads dc(User) as users by Host\n| eval avg_load_sec=round(avg_load_ms/1000,1), p95_load_sec=round(p95_load_ms/1000,1)\n| where p95_load_sec > 5\n| sort -p95_load_sec\n| table Host, page_loads, users, avg_load_sec, p95_load_sec",
              "m": "Deploy the uberAgent browser extension via Group Policy or Citrix Studio. The extension collects W3C Navigation Timing API data per page load. Alert when key internal web applications (intranet, CRM, EHR) exceed acceptable page load thresholds. Segment by Citrix delivery group vs physical endpoint to compare performance.",
              "z": "Table (slowest websites), Line chart (page load trending), Bar chart (comparison by browser).",
              "kfp": "Perceived slowness may be an app or profile issue rather than the network; confirm with a second path or version before a broad ADC change.",
              "refs": "[uberAgent UXM](https://splunkbase.splunk.com/app/1448)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448) + browser extension (Chrome, Edge, Firefox).\n• Ensure the following data sources are available: `sourcetype=\"uberAgent:Application:BrowserWebRequests2\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy the uberAgent browser extension via Group Policy or Citrix Studio. The extension collects W3C Navigation Timing API data per page load. Alert when key internal web applications (intranet, CRM, EHR) exceed acceptable page load thresholds. Segment by Citrix delivery group vs physical endpoint to compare performance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=uberagent sourcetype=\"uberAgent:Application:BrowserWebRequests2\" earliest=-24h\n| stats avg(PageLoadTotalDurationMs) as avg_load_ms perc95(PageLoadTotalDurationMs) as p95_load_ms count as page_loads dc(User) as users by Host\n| eval avg_load_sec=round(avg_load_ms/1000,1), p95_load_sec=round(p95_load_ms/1000,1)\n| where p95_load_sec > 5\n| sort -p95_load_sec\n| table Host, page_loads, users, avg_load_sec, p95_load_sec\n```\n\nUnderstanding this SPL\n\n**Browser Performance per Web Application** — Many Citrix-delivered workloads are browser-based (SaaS applications, internal portals). uberAgent's browser extensions measure page load time, network latency, and rendering performance per website/URL. This reveals whether slow web application performance is due to the Citrix session, the network, or the web application itself — a critical distinction for troubleshooting.\n\nDocumented **Data sources**: `sourcetype=\"uberAgent:Application:BrowserWebRequests2\"`. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448) + browser extension (Chrome, Edge, Firefox). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: uberagent; **sourcetype**: uberAgent:Application:BrowserWebRequests2. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=uberagent, sourcetype=\"uberAgent:Application:BrowserWebRequests2\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_load_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where p95_load_sec > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Browser Performance per Web Application**): table Host, page_loads, users, avg_load_sec, p95_load_sec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (slowest websites), Line chart (page load trending), Bar chart (comparison by browser).",
              "script": "",
              "premium": "",
              "hw": "Windows VDAs with Chrome, Edge, or Firefox",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on browser Performance per Web Application and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.21",
              "n": "Machine Boot and Shutdown Duration Analysis",
              "c": "medium",
              "f": "beginner",
              "v": "VDA boot time directly impacts how quickly machines become available after power-on events triggered by Citrix power management schedules. Slow boots delay session availability for early-morning users. uberAgent decomposes boot duration into phases (BIOS/firmware, kernel, drivers, services, boot processes) to identify bottlenecks — antivirus scans at boot, slow driver initialisation, or disk contention during mass power-on.",
              "t": "uberAgent UXM (Splunkbase 1448)",
              "d": "`sourcetype=\"uberAgent:OnOffTransition:BootDetail2\"`, `sourcetype=\"uberAgent:OnOffTransition:BootProcessDetail\"`",
              "q": "index=uberagent sourcetype=\"uberAgent:OnOffTransition:BootDetail2\" earliest=-7d\n| stats avg(TotalBootTimeMs) as avg_boot_ms perc95(TotalBootTimeMs) as p95_boot_ms count as boots by Host\n| eval avg_boot_sec=round(avg_boot_ms/1000,1), p95_boot_sec=round(p95_boot_ms/1000,1)\n| where p95_boot_sec > 120\n| sort -p95_boot_sec\n| table Host, boots, avg_boot_sec, p95_boot_sec",
              "m": "uberAgent captures boot duration automatically on all endpoints. Correlate boot times with Citrix power management schedules (UC-2.6.6) to validate machines are ready when users arrive. Alert on VDAs with p95 boot time exceeding 2 minutes. Use boot process detail data to identify specific services or drivers causing delays.",
              "z": "Bar chart (boot time by VDA), Stacked bar (boot phases), Line chart (boot time trending).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[uberAgent UXM](https://splunkbase.splunk.com/app/1448)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448).\n• Ensure the following data sources are available: `sourcetype=\"uberAgent:OnOffTransition:BootDetail2\"`, `sourcetype=\"uberAgent:OnOffTransition:BootProcessDetail\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nuberAgent captures boot duration automatically on all endpoints. Correlate boot times with Citrix power management schedules (UC-2.6.6) to validate machines are ready when users arrive. Alert on VDAs with p95 boot time exceeding 2 minutes. Use boot process detail data to identify specific services or drivers causing delays.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=uberagent sourcetype=\"uberAgent:OnOffTransition:BootDetail2\" earliest=-7d\n| stats avg(TotalBootTimeMs) as avg_boot_ms perc95(TotalBootTimeMs) as p95_boot_ms count as boots by Host\n| eval avg_boot_sec=round(avg_boot_ms/1000,1), p95_boot_sec=round(p95_boot_ms/1000,1)\n| where p95_boot_sec > 120\n| sort -p95_boot_sec\n| table Host, boots, avg_boot_sec, p95_boot_sec\n```\n\nUnderstanding this SPL\n\n**Machine Boot and Shutdown Duration Analysis** — VDA boot time directly impacts how quickly machines become available after power-on events triggered by Citrix power management schedules. Slow boots delay session availability for early-morning users. uberAgent decomposes boot duration into phases (BIOS/firmware, kernel, drivers, services, boot processes) to identify bottlenecks — antivirus scans at boot, slow driver initialisation, or disk contention during mass power-on.\n\nDocumented **Data sources**: `sourcetype=\"uberAgent:OnOffTransition:BootDetail2\"`, `sourcetype=\"uberAgent:OnOffTransition:BootProcessDetail\"`. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: uberagent; **sourcetype**: uberAgent:OnOffTransition:BootDetail2. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=uberagent, sourcetype=\"uberAgent:OnOffTransition:BootDetail2\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_boot_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where p95_boot_sec > 120` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Machine Boot and Shutdown Duration Analysis**): table Host, boots, avg_boot_sec, p95_boot_sec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (boot time by VDA), Stacked bar (boot phases), Line chart (boot time trending).",
              "script": "",
              "premium": "",
              "hw": "Windows VDAs (physical and virtual)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on machine Boot and Shutdown Duration Analysis and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.22",
              "n": "Per-Application CPU and Memory Consumption",
              "c": "high",
              "f": "intermediate",
              "v": "Identifying which applications consume the most CPU and memory on shared VDAs is essential for capacity planning and noisy-neighbour detection. A single user running an unoptimised macro or media-heavy application can degrade performance for all other sessions on the same VDA. uberAgent provides per-process, per-user resource consumption with application-level attribution.",
              "t": "uberAgent UXM (Splunkbase 1448)",
              "d": "`sourcetype=\"uberAgent:Process:ProcessDetail\"`",
              "q": "index=uberagent sourcetype=\"uberAgent:Process:ProcessDetail\" earliest=-4h\n| stats avg(ProcCPUPercent) as avg_cpu avg(WorkingSetMB) as avg_ram_mb by AppName, User, Host\n| where avg_cpu > 25 OR avg_ram_mb > 500\n| sort -avg_cpu\n| table Host, User, AppName, avg_cpu, avg_ram_mb",
              "m": "uberAgent collects process-level resource metrics continuously. Identify top resource consumers per VDA and per user. Alert when a single user's process exceeds thresholds that impact co-hosted sessions. Feed into capacity planning: if average RAM per user session is 2 GB and VDAs have 64 GB, the safe session density is ~28 sessions.",
              "z": "Table (top consumers), Bar chart (CPU by application), Heatmap (user x VDA).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[uberAgent UXM](https://splunkbase.splunk.com/app/1448), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448).\n• Ensure the following data sources are available: `sourcetype=\"uberAgent:Process:ProcessDetail\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nuberAgent collects process-level resource metrics continuously. Identify top resource consumers per VDA and per user. Alert when a single user's process exceeds thresholds that impact co-hosted sessions. Feed into capacity planning: if average RAM per user session is 2 GB and VDAs have 64 GB, the safe session density is ~28 sessions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=uberagent sourcetype=\"uberAgent:Process:ProcessDetail\" earliest=-4h\n| stats avg(ProcCPUPercent) as avg_cpu avg(WorkingSetMB) as avg_ram_mb by AppName, User, Host\n| where avg_cpu > 25 OR avg_ram_mb > 500\n| sort -avg_cpu\n| table Host, User, AppName, avg_cpu, avg_ram_mb\n```\n\nUnderstanding this SPL\n\n**Per-Application CPU and Memory Consumption** — Identifying which applications consume the most CPU and memory on shared VDAs is essential for capacity planning and noisy-neighbour detection. A single user running an unoptimised macro or media-heavy application can degrade performance for all other sessions on the same VDA. uberAgent provides per-process, per-user resource consumption with application-level attribution.\n\nDocumented **Data sources**: `sourcetype=\"uberAgent:Process:ProcessDetail\"`. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: uberagent; **sourcetype**: uberAgent:Process:ProcessDetail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=uberagent, sourcetype=\"uberAgent:Process:ProcessDetail\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by AppName, User, Host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_cpu > 25 OR avg_ram_mb > 500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Per-Application CPU and Memory Consumption**): table Host, User, AppName, avg_cpu, avg_ram_mb\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Per-Application CPU and Memory Consumption** — Identifying which applications consume the most CPU and memory on shared VDAs is essential for capacity planning and noisy-neighbour detection. A single user running an unoptimised macro or media-heavy application can degrade performance for all other sessions on the same VDA. 
uberAgent provides per-process, per-user resource consumption with application-level attribution.\n\nDocumented **Data sources**: `sourcetype=\"uberAgent:Process:ProcessDetail\"`. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top consumers), Bar chart (CPU by application), Heatmap (user x VDA).",
              "script": "",
              "premium": "",
              "hw": "Windows VDAs",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on which applications use the most CPU and memory on shared session machines and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host | sort - agg_value",
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.23",
              "n": "Application Crash and Error Reporting",
              "c": "high",
              "f": "beginner",
              "v": "Application crashes in Citrix sessions cause data loss, user frustration, and helpdesk calls. uberAgent captures Windows Error Reporting (WER) crash data including the faulting module, exception code, and application version, enabling crash trending and root-cause identification across the fleet. Crash rate spikes after application or image updates indicate problematic deployments.",
              "t": "uberAgent UXM (Splunkbase 1448)",
              "d": "`sourcetype=\"uberAgent:Application:Errors\"`",
              "q": "index=uberagent sourcetype=\"uberAgent:Application:Errors\" earliest=-7d\n| stats count as crashes dc(User) as affected_users values(ExceptionCode) as exception_codes by AppName, AppVersion\n| sort -crashes\n| table AppName, AppVersion, crashes, affected_users, exception_codes",
              "m": "uberAgent captures crash data automatically from WER. Trend crash rates per application version to detect regressions. Alert on crash rate spikes (>200% increase over 7-day baseline). Correlate exception codes with known bugs and vendor advisories. Track crash resolution over time after patching.",
              "z": "Bar chart (crashes by application), Line chart (crash rate trending), Table (faulting modules).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[uberAgent UXM](https://splunkbase.splunk.com/app/1448)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448).\n• Ensure the following data sources are available: `sourcetype=\"uberAgent:Application:Errors\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nuberAgent captures crash data automatically from WER. Trend crash rates per application version to detect regressions. Alert on crash rate spikes (>200% increase over 7-day baseline). Correlate exception codes with known bugs and vendor advisories. Track crash resolution over time after patching.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=uberagent sourcetype=\"uberAgent:Application:Errors\" earliest=-7d\n| stats count as crashes dc(User) as affected_users values(ExceptionCode) as exception_codes by AppName, AppVersion\n| sort -crashes\n| table AppName, AppVersion, crashes, affected_users, exception_codes\n```\n\nUnderstanding this SPL\n\n**Application Crash and Error Reporting** — Application crashes in Citrix sessions cause data loss, user frustration, and helpdesk calls. uberAgent captures Windows Error Reporting (WER) crash data including the faulting module, exception code, and application version, enabling crash trending and root-cause identification across the fleet. Crash rate spikes after application or image updates indicate problematic deployments.\n\nDocumented **Data sources**: `sourcetype=\"uberAgent:Application:Errors\"`. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: uberagent; **sourcetype**: uberAgent:Application:Errors. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=uberagent, sourcetype=\"uberAgent:Application:Errors\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by AppName, AppVersion** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Application Crash and Error Reporting**): table AppName, AppVersion, crashes, affected_users, exception_codes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (crashes by application), Line chart (crash rate trending), Table (faulting modules).",
              "script": "",
              "premium": "",
              "hw": "Windows VDAs, Windows endpoints",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on application Crash and Error Reporting and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.24",
              "n": "Citrix Site Delivery Group Capacity and Health",
              "c": "critical",
              "f": "beginner",
              "v": "uberAgent's Citrix Site Monitoring queries the Broker Service directly to provide real-time visibility into delivery group capacity — total machines, registered machines, active sessions, load index, and machines in maintenance mode. When available capacity drops below a threshold, new user connections may fail or be delayed.",
              "t": "uberAgent UXM (Splunkbase 1448) with Citrix Site Monitoring enabled",
              "d": "`sourcetype=\"uberAgent:Citrix:DesktopGroups\"`",
              "q": "index=uberagent sourcetype=\"uberAgent:Citrix:DesktopGroups\"\n| stats latest(MachinesTotal) as total, latest(MachinesRegistered) as registered, latest(SessionsActive) as active, latest(MachinesInMaintenanceMode) as maint by DeliveryGroupName\n| eval available=registered-active-maint, avail_pct=round(available/total*100,1)\n| where avail_pct < 20 OR registered < total*0.8\n| table DeliveryGroupName, total, registered, maint, active, available, avail_pct\n| sort avail_pct",
              "m": "Enable uberAgent's Citrix Site Monitoring feature, which queries the Citrix Broker Service at configurable intervals. Alert when available capacity drops below 20% of total machines for any delivery group. Track session density trends for capacity planning. Correlate with VDA registration health (UC-2.6.4) for root cause.",
              "z": "Table (delivery group capacity), Gauge (available capacity %), Bar chart (session counts by group).",
              "kfp": "Autoscale and capacity events can follow maintenance tags, test floods, or catalog updates; align counts with autoscale and power schedules.",
              "refs": "[uberAgent UXM](https://splunkbase.splunk.com/app/1448)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448) with Citrix Site Monitoring enabled.\n• Ensure the following data sources are available: `sourcetype=\"uberAgent:Citrix:DesktopGroups\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable uberAgent's Citrix Site Monitoring feature, which queries the Citrix Broker Service at configurable intervals. Alert when available capacity drops below 20% of total machines for any delivery group. Track session density trends for capacity planning. Correlate with VDA registration health (UC-2.6.4) for root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=uberagent sourcetype=\"uberAgent:Citrix:DesktopGroups\"\n| stats latest(MachinesTotal) as total, latest(MachinesRegistered) as registered, latest(SessionsActive) as active, latest(MachinesInMaintenanceMode) as maint by DeliveryGroupName\n| eval available=registered-active-maint, avail_pct=round(available/total*100,1)\n| where avail_pct < 20 OR registered < total*0.8\n| table DeliveryGroupName, total, registered, maint, active, available, avail_pct\n| sort avail_pct\n```\n\nUnderstanding this SPL\n\n**Citrix Site Delivery Group Capacity and Health** — uberAgent's Citrix Site Monitoring queries the Broker Service directly to provide real-time visibility into delivery group capacity — total machines, registered machines, active sessions, load index, and machines in maintenance mode. When available capacity drops below a threshold, new user connections may fail or be delayed.\n\nDocumented **Data sources**: `sourcetype=\"uberAgent:Citrix:DesktopGroups\"`. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448) with Citrix Site Monitoring enabled. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: uberagent; **sourcetype**: uberAgent:Citrix:DesktopGroups. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=uberagent, sourcetype=\"uberAgent:Citrix:DesktopGroups\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by DeliveryGroupName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **available** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avail_pct < 20 OR registered < total*0.8` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Site Delivery Group Capacity and Health**): table DeliveryGroupName, total, registered, maint, active, available, avail_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (delivery group capacity), Gauge (available capacity %), Bar chart (session counts by group).",
              "script": "",
              "premium": "",
              "hw": "Citrix CVAD / DaaS site",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on site Delivery Group Capacity and Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Capacity",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.25",
              "n": "Citrix NetScaler ADC Performance via uberAgent",
              "c": "high",
              "f": "beginner",
              "v": "uberAgent can monitor Citrix NetScaler (ADC) appliances via NITRO API without requiring a separate add-on on the ADC itself. This provides gateway session counts, SSL TPS, HTTP request rates, and system resource utilisation alongside endpoint and session data in the same Splunk index, enabling end-to-end correlation from ADC to VDA to application.",
              "t": "uberAgent UXM (Splunkbase 1448) with NetScaler Monitoring enabled",
              "d": "`sourcetype=\"uberAgent:CitrixADC:AppliancePerformance\"`, `sourcetype=\"uberAgent:CitrixADC:vServer\"`",
              "q": "index=uberagent sourcetype=\"uberAgent:CitrixADC:AppliancePerformance\"\n| stats latest(CPUUsagePct) as cpu latest(MemUsagePct) as mem latest(HttpRequestsPerSec) as http_rps latest(SSLTransactionsPerSec) as ssl_tps by ADCHost\n| where cpu > 70 OR mem > 80\n| table ADCHost, cpu, mem, http_rps, ssl_tps",
              "m": "Configure uberAgent's NetScaler monitoring with NITRO API credentials. This provides a unified data source — VDA performance, user sessions, and ADC health all in one index. Correlate ADC gateway session counts with VDA session capacity. Alert on ADC resource utilisation exceeding thresholds.",
              "z": "Single value (CPU, memory), Line chart (SSL TPS over time), Table (ADC fleet health).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[uberAgent UXM](https://splunkbase.splunk.com/app/1448)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448) with NetScaler Monitoring enabled.\n• Ensure the following data sources are available: `sourcetype=\"uberAgent:CitrixADC:AppliancePerformance\"`, `sourcetype=\"uberAgent:CitrixADC:vServer\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure uberAgent's NetScaler monitoring with NITRO API credentials. This provides a unified data source — VDA performance, user sessions, and ADC health all in one index. Correlate ADC gateway session counts with VDA session capacity. Alert on ADC resource utilisation exceeding thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=uberagent sourcetype=\"uberAgent:CitrixADC:AppliancePerformance\"\n| stats latest(CPUUsagePct) as cpu latest(MemUsagePct) as mem latest(HttpRequestsPerSec) as http_rps latest(SSLTransactionsPerSec) as ssl_tps by ADCHost\n| where cpu > 70 OR mem > 80\n| table ADCHost, cpu, mem, http_rps, ssl_tps\n```\n\nUnderstanding this SPL\n\n**Citrix NetScaler ADC Performance via uberAgent** — uberAgent can monitor Citrix NetScaler (ADC) appliances via NITRO API without requiring a separate add-on on the ADC itself. This provides gateway session counts, SSL TPS, HTTP request rates, and system resource utilisation alongside endpoint and session data in the same Splunk index, enabling end-to-end correlation from ADC to VDA to application.\n\nDocumented **Data sources**: `sourcetype=\"uberAgent:CitrixADC:AppliancePerformance\"`, `sourcetype=\"uberAgent:CitrixADC:vServer\"`. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448) with NetScaler Monitoring enabled. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: uberagent; **sourcetype**: uberAgent:CitrixADC:AppliancePerformance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=uberagent, sourcetype=\"uberAgent:CitrixADC:AppliancePerformance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ADCHost** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cpu > 70 OR mem > 80` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix NetScaler ADC Performance via uberAgent**): table ADCHost, cpu, mem, http_rps, ssl_tps\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (CPU, memory), Line chart (SSL TPS over time), Table (ADC fleet health).",
              "script": "",
              "premium": "",
              "hw": "Citrix NetScaler / ADC (VPX, MPX, SDX, CPX)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on netScaler ADC Performance via uberAgent and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler",
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.26",
              "n": "Per-Application Network Performance",
              "c": "medium",
              "f": "intermediate",
              "v": "uberAgent measures network latency, data volume, and connection quality per application and per target host. This reveals which applications are generating the most network traffic, connecting to slow endpoints, or experiencing high latency — critical for optimising CVAD network policies and WAN bandwidth allocation.",
              "t": "uberAgent UXM (Splunkbase 1448) with Per-Application Network Monitoring",
              "d": "`sourcetype=\"uberAgent:Process:NetworkTargetPerformance\"`",
              "q": "index=uberagent sourcetype=\"uberAgent:Process:NetworkTargetPerformance\" earliest=-4h\n| stats avg(ConnectDurationMs) as avg_latency_ms sum(DataVolumeSentBytes) as bytes_sent sum(DataVolumeReceivedBytes) as bytes_rcvd dc(User) as users by AppName, NetworkTargetName\n| eval total_mb=round((bytes_sent+bytes_rcvd)/1048576,1)\n| where avg_latency_ms > 100 OR total_mb > 500\n| sort -total_mb\n| table AppName, NetworkTargetName, avg_latency_ms, total_mb, users",
              "m": "Enable uberAgent's per-application network monitoring feature. Identify bandwidth-heavy applications and high-latency network targets. Use to validate that HDX redirection policies are routing multimedia traffic efficiently. Detect applications bypassing proxy or connecting to unexpected external hosts.",
              "z": "Table (top bandwidth consumers), Bar chart (latency by target), Sankey diagram (app to network target flow).",
              "kfp": "Perceived slowness may be an app or profile issue rather than the network; confirm with a second path or version before a broad ADC change.",
              "refs": "[uberAgent UXM](https://splunkbase.splunk.com/app/1448), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448) with Per-Application Network Monitoring.\n• Ensure the following data sources are available: `sourcetype=\"uberAgent:Process:NetworkTargetPerformance\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable uberAgent's per-application network monitoring feature. Identify bandwidth-heavy applications and high-latency network targets. Use to validate that HDX redirection policies are routing multimedia traffic efficiently. Detect applications bypassing proxy or connecting to unexpected external hosts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=uberagent sourcetype=\"uberAgent:Process:NetworkTargetPerformance\" earliest=-4h\n| stats avg(ConnectDurationMs) as avg_latency_ms sum(DataVolumeSentBytes) as bytes_sent sum(DataVolumeReceivedBytes) as bytes_rcvd dc(User) as users by AppName, NetworkTargetName\n| eval total_mb=round((bytes_sent+bytes_rcvd)/1048576,1)\n| where avg_latency_ms > 100 OR total_mb > 500\n| sort -total_mb\n| table AppName, NetworkTargetName, avg_latency_ms, total_mb, users\n```\n\nUnderstanding this SPL\n\n**Per-Application Network Performance** — uberAgent measures network latency, data volume, and connection quality per application and per target host. This reveals which applications are generating the most network traffic, connecting to slow endpoints, or experiencing high latency — critical for optimising CVAD network policies and WAN bandwidth allocation.\n\nDocumented **Data sources**: `sourcetype=\"uberAgent:Process:NetworkTargetPerformance\"`. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448) with Per-Application Network Monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: uberagent; **sourcetype**: uberAgent:Process:NetworkTargetPerformance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=uberagent, sourcetype=\"uberAgent:Process:NetworkTargetPerformance\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by AppName, NetworkTargetName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total_mb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_latency_ms > 100 OR total_mb > 500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Per-Application Network Performance**): table AppName, NetworkTargetName, avg_latency_ms, total_mb, users\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Per-Application Network Performance** — uberAgent measures network latency, data volume, and connection quality per application and per target host. This reveals which applications are generating the most network traffic, connecting to slow endpoints, or experiencing high latency — critical for optimising CVAD network policies and WAN bandwidth allocation.\n\nDocumented **Data sources**: `sourcetype=\"uberAgent:Process:NetworkTargetPerformance\"`. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448) with Per-Application Network Monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top bandwidth consumers), Bar chart (latency by target), Sankey diagram (app to network target flow).",
              "script": "",
              "premium": "",
              "hw": "Windows VDAs, Windows endpoints",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on per-Application Network Performance and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.27",
              "n": "Endpoint Security Analytics (ESA) Threat Detection",
              "c": "critical",
              "f": "advanced",
              "v": "uberAgent ESA provides endpoint-level threat detection within Citrix sessions using Sigma rules, LOLBAS detection, process tampering monitoring, and file system activity analysis. In multi-user CVAD environments, a compromised session can laterally move to shared resources. ESA detects threats inside the session that network-based security tools cannot see.",
              "t": "uberAgent ESA (included with uberAgent UXM, Splunkbase 1448)",
              "d": "`sourcetype=\"uberAgentESA:ActivityMonitoring:ProcessTagging\"`, `sourcetype=\"uberAgent:Process:ProcessStartup\"`",
              "q": "index=uberagent sourcetype=\"uberAgentESA:ActivityMonitoring:ProcessTagging\" earliest=-24h\n| stats count by RuleName, RuleSeverity, User, Host, ProcessName\n| where RuleSeverity IN (\"critical\",\"high\")\n| sort -RuleSeverity, -count\n| table Host, User, ProcessName, RuleName, RuleSeverity, count",
              "m": "Enable uberAgent ESA with default Sigma rule pack. Customise rules for Citrix-specific threats (e.g., lateral movement via published apps, credential dumping in shared sessions). Forward ESA events to Splunk Enterprise Security as notable events. The MITRE ATT&CK integration maps detections to tactics and techniques for SOC workflows.",
              "z": "Table (threat detections), Bar chart (by MITRE tactic), Timeline (detection events), Single value (critical alerts).",
              "kfp": "Citrix environments often coordinate broker, VDA, gateway, and profile layers; a single noisy component may clear when maintenance or a publish job completes.",
              "refs": "[Splunkbase app 1448](https://splunkbase.splunk.com/app/1448), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Intrusion Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent ESA (included with uberAgent UXM, Splunkbase 1448).\n• Ensure the following data sources are available: `sourcetype=\"uberAgentESA:ActivityMonitoring:ProcessTagging\"`, `sourcetype=\"uberAgent:Process:ProcessStartup\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable uberAgent ESA with default Sigma rule pack. Customise rules for Citrix-specific threats (e.g., lateral movement via published apps, credential dumping in shared sessions). Forward ESA events to Splunk Enterprise Security as notable events. The MITRE ATT&CK integration maps detections to tactics and techniques for SOC workflows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=uberagent sourcetype=\"uberAgentESA:ActivityMonitoring:ProcessTagging\" earliest=-24h\n| stats count by RuleName, RuleSeverity, User, Host, ProcessName\n| where RuleSeverity IN (\"critical\",\"high\")\n| sort -RuleSeverity, -count\n| table Host, User, ProcessName, RuleName, RuleSeverity, count\n```\n\nUnderstanding this SPL\n\n**Endpoint Security Analytics (ESA) Threat Detection** — uberAgent ESA provides endpoint-level threat detection within Citrix sessions using Sigma rules, LOLBAS detection, process tampering monitoring, and file system activity analysis. In multi-user CVAD environments, a compromised session can laterally move to shared resources. ESA detects threats inside the session that network-based security tools cannot see.\n\nDocumented **Data sources**: `sourcetype=\"uberAgentESA:ActivityMonitoring:ProcessTagging\"`, `sourcetype=\"uberAgent:Process:ProcessStartup\"`. **App/TA** (typical add-on context): uberAgent ESA (included with uberAgent UXM, Splunkbase 1448). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: uberagent; **sourcetype**: uberAgentESA:ActivityMonitoring:ProcessTagging. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=uberagent, sourcetype=\"uberAgentESA:ActivityMonitoring:ProcessTagging\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by RuleName, RuleSeverity, User, Host, ProcessName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where RuleSeverity IN (\"critical\",\"high\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Endpoint Security Analytics (ESA) Threat Detection**): table Host, User, ProcessName, RuleName, RuleSeverity, count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Endpoint Security Analytics (ESA) Threat Detection** — uberAgent ESA provides endpoint-level threat detection within Citrix sessions using Sigma rules, LOLBAS detection, process tampering monitoring, and file system activity analysis. In multi-user CVAD environments, a compromised session can laterally move to shared resources. ESA detects threats inside the session that network-based security tools cannot see.\n\nDocumented **Data sources**: `sourcetype=\"uberAgentESA:ActivityMonitoring:ProcessTagging\"`, `sourcetype=\"uberAgent:Process:ProcessStartup\"`. **App/TA** (typical add-on context): uberAgent ESA (included with uberAgent UXM, Splunkbase 1448). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (threat detections), Bar chart (by MITRE tactic), Timeline (detection events), Single value (critical alerts).",
              "script": "",
              "premium": "",
              "hw": "Windows VDAs, Windows endpoints",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on endpoint Security Analytics (ESA) Threat Detection and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection",
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.dest | sort - count",
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.28",
              "n": "Local Host Cache (LHC) Sync Status and Mode Transitions",
              "c": "critical",
              "f": "advanced",
              "v": "Local Host Cache (LHC) allows Delivery Controllers to broker sessions when the site database is unreachable. Failures in sync, unexpected mode changes (to or from LHC), or lagging replication indicate risk of logon/brokering issues and split-brain scenarios. Alerting on Citrix High Availability Service events and correlating with broker events surfaces site-database outages and recovery before users see widespread failures.",
              "t": "Splunk Add-on for Microsoft Windows, Template for Citrix XenDesktop 7 (TA-XD7-Broker)",
              "d": "`index=windows` (or your controller log index) `sourcetype=\"WinEventLog:Application\"` `source=\"Citrix High Availability Service\"`; optional correlation: `index=xd` `sourcetype=\"citrix:broker:events\"` for controller health and brokering errors during outages",
              "q": "index=windows sourcetype=\"WinEventLog:Application\" source=\"Citrix High Availability Service\" earliest=-24h\n| rex field=Message \"(?i)mode[\\s:]*(?<lhc_mode>\\w+)|synchroni[sz]e|sync\\s+lag|Local\\s*Host\\s*Cache|HA\\s*state\"\n| eval ha_event=if(match(Message, \"(?i)entering|switched|transition|outage|split.?brain|sync\"), 1, 0)\n| where ha_event=1\n| stats count, earliest(_time) as first_seen, latest(_time) as last_seen by host, EventCode, Message\n| sort - count",
              "m": "Ingest Windows Application log from all Delivery Controllers; confirm `source` and `sourcetype` for Citrix High Availability Service. Add field extractions for sync state, mode transition, and error text if Message format varies by version. Correlate with `citrix:broker:events` for registration and brokering errors. Tune noise from planned failovers. Document expected behavior during site DB maintenance so alerts can be suppressed via lookup.",
              "z": "Timeline (HA and mode events), Table (host, event text, first/last seen), Single value (count of critical HA errors).",
              "kfp": "Planned site database maintenance and rehearsed LHC failovers intentionally flip the Citrix High Availability Service between modes and log transitions. Suppress on published SQL maintenance, then correlate with broker 'database unavailable' and controller pairing before a Sev-1 bridge.",
              "refs": "[Local Host Cache in Citrix Virtual Apps and Desktops](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops-2112/manage-deployment/broker.html), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows, Template for Citrix XenDesktop 7 (TA-XD7-Broker).\n• Ensure the following data sources are available: `index=windows` (or your controller log index) `sourcetype=\"WinEventLog:Application\"` `source=\"Citrix High Availability Service\"`; optional correlation: `index=xd` `sourcetype=\"citrix:broker:events\"` for controller health and brokering errors during outages.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Windows Application log from all Delivery Controllers; confirm `source` and `sourcetype` for Citrix High Availability Service. Add field extractions for sync state, mode transition, and error text if Message format varies by version. Correlate with `citrix:broker:events` for registration and brokering errors. Tune noise from planned failovers. Document expected behavior during site DB maintenance so alerts can be suppressed via lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Application\" source=\"Citrix High Availability Service\" earliest=-24h\n| rex field=Message \"(?i)mode[\\s:]*(?<lhc_mode>\\w+)|synchroni[sz]e|sync\\s+lag|Local\\s*Host\\s*Cache|HA\\s*state\"\n| eval ha_event=if(match(Message, \"(?i)entering|switched|transition|outage|split.?brain|sync\"), 1, 0)\n| where ha_event=1\n| stats count, earliest(_time) as first_seen, latest(_time) as last_seen by host, EventCode, Message\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Local Host Cache (LHC) Sync Status and Mode Transitions** — Local Host Cache (LHC) allows Delivery Controllers to broker sessions when the site database is unreachable. Failures in sync, unexpected mode changes (to or from LHC), or lagging replication indicate risk of logon/brokering issues and split-brain scenarios. Alerting on Citrix High Availability Service events and correlating with broker events surfaces site-database outages and recovery before users see widespread failures.\n\nDocumented **Data sources**: `index=windows` (or your controller log index) `sourcetype=\"WinEventLog:Application\"` `source=\"Citrix High Availability Service\"`; optional correlation: `index=xd` `sourcetype=\"citrix:broker:events\"` for controller health and brokering errors during outages. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows, Template for Citrix XenDesktop 7 (TA-XD7-Broker). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Application. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Application\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **ha_event** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ha_event=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, EventCode, Message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (HA and mode events), Table (host, event text, first/last seen), Single value (count of critical HA errors).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on local Host Cache (LHC) Sync Status and Mode Transitions and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.29",
              "n": "Machine Catalog Image Pipeline Health",
              "c": "high",
              "f": "intermediate",
              "v": "Machine Catalog health depends on current master images, successful preparation or rollout jobs, and timely rollouts. Stale images (>90 days without refresh), pending rollouts stuck in queue, and provisioning errors reduce pool reliability and can leave machines on vulnerable or non-compliant images. Polling the Monitor `MachineCatalog` OData feed gives a single place to see catalog-level status when broker events do not list every field.",
              "t": "Citrix Monitor Service OData API, Template for Citrix XenDesktop 7 (TA-XD7-Broker)",
              "d": "`index=xd` `sourcetype=\"citrix:monitor:odata\"` with OData entity scoping to `MachineCatalog` (e.g. `ODataEntity=MachineCatalog` or `entity_type=MachineCatalog`); fields may include `Name`, `MasterImageVhd`, `ProvisioningType`, `LastApplyImageDate`, `UsedCount`, `PendingTaskCount` depending on your TA field mapping",
              "q": "index=xd sourcetype=\"citrix:monitor:odata\" (ODataEntity=MachineCatalog OR entity_type=MachineCatalog OR Name=*)\n| eval master_age_days=if(isnotnull(LastImageUpdateTime) OR isnotnull(LastMasterImageTime), round((now()-coalesce(LastImageUpdateTime, LastMasterImageTime, _time)) / 86400, 1), null())\n| eval rollout_pending=coalesce(PendingImageRollout, PendingUpdateCount, 0)\n| where master_age_days > 90 OR rollout_pending > 0 OR match(coalesce(ProvisioningStatus, State, ErrorState), \"(?i)fail|error\")\n| table _time, Name, ProvisioningType, master_age_days, rollout_pending, ProvisioningStatus, State, ErrorState, MasterImageVhd, host\n| sort - master_age_days",
              "m": "Enable OData collection for Machine Catalog. Align field names to your add-on; use `fieldalias` in `props.conf` if the vendor uses `LastImageTime` instead of `LastMasterImageTime`. Set thresholds: image age 90+ days, any non-empty pending rollout counter for more than 24 hours, and any `Fail` in provisioning. Join to change tickets for image updates. Cross-check MCS/PVS UCs for prep failures on the same image name.",
              "z": "Table (catalogs at risk), Line chart (pending rollout trend), Single value (catalogs with stale images).",
              "kfp": "Large catalog republishes, image rollouts, and overnight MCS/PVS rewrites can emit sustained 'pipeline' errors while machines churn. Time-box alerts to the change ticket window and key on a failure rate that exceeds the last three similar publishes.",
              "refs": "[Monitor Citrix with OData](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops-221/operations/monitor/odata-connector.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Monitor Service OData API, Template for Citrix XenDesktop 7 (TA-XD7-Broker).\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:monitor:odata\"` with OData entity scoping to `MachineCatalog` (e.g. `ODataEntity=MachineCatalog` or `entity_type=MachineCatalog`); fields may include `Name`, `MasterImageVhd`, `ProvisioningType`, `LastApplyImageDate`, `UsedCount`, `PendingTaskCount` depending on your TA field mapping.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable OData collection for Machine Catalog. Align field names to your add-on; use `fieldalias` in `props.conf` if the vendor uses `LastImageTime` instead of `LastMasterImageTime`. Set thresholds: image age 90+ days, any non-empty pending rollout counter for more than 24 hours, and any `Fail` in provisioning. Join to change tickets for image updates. Cross-check MCS/PVS UCs for prep failures on the same image name.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:monitor:odata\" (ODataEntity=MachineCatalog OR entity_type=MachineCatalog OR Name=*)\n| eval master_age_days=if(isnotnull(LastImageUpdateTime) OR isnotnull(LastMasterImageTime), round((now()-coalesce(LastImageUpdateTime, LastMasterImageTime, _time)) / 86400, 1), null())\n| eval rollout_pending=coalesce(PendingImageRollout, PendingUpdateCount, 0)\n| where master_age_days > 90 OR rollout_pending > 0 OR match(coalesce(ProvisioningStatus, State, ErrorState), \"(?i)fail|error\")\n| table _time, Name, ProvisioningType, master_age_days, rollout_pending, ProvisioningStatus, State, ErrorState, MasterImageVhd, host\n| sort - master_age_days\n```\n\nUnderstanding this SPL\n\n**Machine Catalog Image Pipeline Health** — Machine Catalog health depends on current master images, successful preparation or rollout jobs, and timely rollouts. Stale images (>90 days without refresh), pending rollouts stuck in queue, and provisioning errors reduce pool reliability and can leave machines on vulnerable or non-compliant images. Polling the Monitor `MachineCatalog` OData feed gives a single place to see catalog-level status when broker events do not list every field.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:monitor:odata\"` with OData entity scoping to `MachineCatalog` (e.g. `ODataEntity=MachineCatalog` or `entity_type=MachineCatalog`); fields may include `Name`, `MasterImageVhd`, `ProvisioningType`, `LastApplyImageDate`, `UsedCount`, `PendingTaskCount` depending on your TA field mapping. **App/TA** (typical add-on context): Citrix Monitor Service OData API, Template for Citrix XenDesktop 7 (TA-XD7-Broker). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:monitor:odata. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:monitor:odata\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **master_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **rollout_pending** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where master_age_days > 90 OR rollout_pending > 0 OR match(coalesce(ProvisioningStatus, State, ErrorState), \"(?i)fail|error\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Machine Catalog Image Pipeline Health**): table _time, Name, ProvisioningType, master_age_days, rollout_pending, ProvisioningStatus, State, ErrorState, MasterImageVhd, host\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (catalogs at risk), Line chart (pending rollout trend), Single value (catalogs with stale images).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We check whether the golden desktop images for your virtual machines are up to date and not stuck while updating — so you catch broken or outdated pools before they hurt real work.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Endpoint"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.30",
              "n": "MCS Provisioning and Identity Disk Health",
              "c": "high",
              "f": "advanced",
              "v": "MCS relies on correct identity disk creation, image preparation queues, and healthy snapshot or differencing disk chains. Symptoms include rising provisioning task failures, deep snapshot chains, machines stuck in preparation, and mismatches between on-demand and power-managed capacity that stress storage and identity state. Correlating broker and Monitor data with platform metrics isolates whether Citrix, hypervisor, or storage is the bottleneck.",
              "t": "Citrix Monitor Service OData API, Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix VDA/Monitor TA field mappings",
              "d": "`index=xd` `sourcetype=\"citrix:monitor:odata\"` (`Machines` / machine provisioning fields), `sourcetype=\"citrix:broker:events\"` for `ProvisioningTask` / prep failures, `sourcetype=\"citrix:vda:events\"` for identity or disk attach issues; `index=hyperv` or `index=vmware` optional for underlying snapshot/chain data if you collect it",
              "q": "index=xd (sourcetype=\"citrix:broker:events\" event_type=Provisioning* OR match(_raw, \"(?i)identity|prep|snapshot|MCS|Provision\"))\n     OR (sourcetype=\"citrix:monitor:odata\" (ODataEntity=Machines OR ODataEntity=Machine) match(_raw, \"(?i)identity|disk|provisioning|task\"))\n| eval fail=if(match(coalesce(result, ProvisioningState, State), \"(?i)fail|error\") OR match(_raw, \"(?i)identity.*(fail|error)|disk.*(fail|error)\"), 1, 0)\n| bin _time span=15m\n| stats count as evts, sum(fail) as fail_cnt, dc(host) as hosts, values(machine_name) as sample_machines by _time, catalog_name, delivery_group\n| eval fail_rate=if(evts>0, round(100*fail_cnt/evts,2), 0)\n| where fail_cnt>0 OR fail_rate > 5\n| table _time, catalog_name, delivery_group, evts, fail_cnt, fail_rate, hosts, sample_machines",
              "m": "Ingest broker provisioning and OData machine rows. Normalize `machine_name`, `catalog_name`, and task outcome fields. For snapshot chain bloat, use hypervisor or storage feeds if available; otherwise track prep duration percentiles. Alert on sustained fail rate, queue depth, or `identity`/`prep` error strings. Segment by delivery group to assign ownership.",
              "z": "Stacked area (fail count over time by catalog), Table (top failing prep reasons), Bar chart (on-demand vs power-managed pool sizes).",
              "kfp": "Mass re-provision after a template update, identity disk reseal, or storage migration spikes MCS and identity disk errors in parallel. Correlation with the catalog job ID and the service account that runs image refresh is usually a planned burst, not random corruption.",
              "refs": "[Machine Creation Services (Citrix) - Provisioning](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops-service/install-configure/mcs.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Monitor Service OData API, Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix VDA/Monitor TA field mappings.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:monitor:odata\"` (`Machines` / machine provisioning fields), `sourcetype=\"citrix:broker:events\"` for `ProvisioningTask` / prep failures, `sourcetype=\"citrix:vda:events\"` for identity or disk attach issues; `index=hyperv` or `index=vmware` optional for underlying snapshot/chain data if you collect it.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest broker provisioning and OData machine rows. Normalize `machine_name`, `catalog_name`, and task outcome fields. For snapshot chain bloat, use hypervisor or storage feeds if available; otherwise track prep duration percentiles. Alert on sustained fail rate, queue depth, or `identity`/`prep` error strings. Segment by delivery group to assign ownership.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:broker:events\" event_type=Provisioning* OR match(_raw, \"(?i)identity|prep|snapshot|MCS|Provision\"))\n     OR (sourcetype=\"citrix:monitor:odata\" (ODataEntity=Machines OR ODataEntity=Machine) match(_raw, \"(?i)identity|disk|provisioning|task\"))\n| eval fail=if(match(coalesce(result, ProvisioningState, State), \"(?i)fail|error\") OR match(_raw, \"(?i)identity.*(fail|error)|disk.*(fail|error)\"), 1, 0)\n| bin _time span=15m\n| stats count as evts, sum(fail) as fail_cnt, dc(host) as hosts, values(machine_name) as sample_machines by _time, catalog_name, delivery_group\n| eval fail_rate=if(evts>0, round(100*fail_cnt/evts,2), 0)\n| where fail_cnt>0 OR fail_rate > 5\n| table _time, catalog_name, delivery_group, evts, fail_cnt, fail_rate, hosts, sample_machines\n```\n\nUnderstanding this SPL\n\n**MCS Provisioning and Identity Disk Health** — MCS relies on correct identity disk creation, image preparation queues, and healthy snapshot or differencing disk chains. Symptoms include rising provisioning task failures, deep snapshot chains, machines stuck in preparation, and mismatches between on-demand and power-managed capacity that stress storage and identity state. Correlating broker and Monitor data with platform metrics isolates whether Citrix, hypervisor, or storage is the bottleneck.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:monitor:odata\"` (`Machines` / machine provisioning fields), `sourcetype=\"citrix:broker:events\"` for `ProvisioningTask` / prep failures, `sourcetype=\"citrix:vda:events\"` for identity or disk attach issues; `index=hyperv` or `index=vmware` optional for underlying snapshot/chain data if you collect it. **App/TA** (typical add-on context): Citrix Monitor Service OData API, Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix VDA/Monitor TA field mappings. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events, citrix:monitor:odata. 
Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, catalog_name, delivery_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_cnt>0 OR fail_rate > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MCS Provisioning and Identity Disk Health**): table _time, catalog_name, delivery_group, evts, fail_cnt, fail_rate, hosts, sample_machines\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area (fail count over time by catalog), Table (top failing prep reasons), Bar chart (on-demand vs power-managed pool sizes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on mCS Provisioning and Identity Disk Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Endpoint"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.31",
              "n": "Citrix Zone Topology and Zone Preference Failover",
              "c": "high",
              "f": "advanced",
              "v": "Multi-zone CVAD sites route users to preferred zones; controllers and resources must register and broker in the right order. Unplanned failover traffic, inter-zone brokering storms, or machines registering outside their zone hint at network partitions, site misconfiguration, or loss of a preferred data path. Tracking zone-related broker events and preferred versus failover path selection shows topology stress before end-user latency spikes.",
              "t": "Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix Monitor Service OData API",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` (zone change, brokering path, `ZoneName`, `PreferredController`, `Failover*`), `sourcetype=\"citrix:monitor:odata\"` for `Zones` / `Controllers` if collected; `sourcetype=\"citrix:netscaler:syslog\"` optional for GSLB/health probe correlation",
              "q": "index=xd sourcetype=\"citrix:broker:events\" (match(_raw, \"(?i)zone|failover|preferred|registration|chassis|data.?store|inter.?zone\") OR event_type IN (\"Zone*\", \"Registration\", \"Configuration\"))\n| eval zone=coalesce(ZoneName, zone_name, Zone)\n| eval path=if(match(_raw, \"(?i)failover|secondary|not.?preferred|alternate\"), \"failover_path\", if(match(_raw, \"(?i)preferred|primary|home.?zone\"), \"preferred_path\", \"other\"))\n| where isnotnull(zone) OR path!=\"other\"\n| bin _time span=5m\n| stats count, values(event_type) as event_types, dc(host) as controller_count by _time, zone, path, delivery_group\n| sort -_time, zone, path",
              "m": "Standardize `ZoneName` and delivery group in broker events. Create lookups for expected zone–delivery-group mappings. Alert when failover_path volume exceeds baseline, when zone membership churn appears, or when a zone has zero registered workers during business hours. Enrich with NetScaler or WAN metrics if you need proof of network cause.",
              "z": "Sankey or flow (preferred vs failover), Timeline (zone events), Table (anomalous delivery groups by zone).",
              "kfp": "Disaster recovery drills, intentional zone-preference changes, and datacenter evacuations are supposed to move traffic between zones. Use a change or DR test calendar and compare against expected zone order before calling an unexpected topology fault.",
              "refs": "[Zones in Citrix Virtual Apps and Desktops](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/221/manage-deployment/zones.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix Monitor Service OData API.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` (zone change, brokering path, `ZoneName`, `PreferredController`, `Failover*`), `sourcetype=\"citrix:monitor:odata\"` for `Zones` / `Controllers` if collected; `sourcetype=\"citrix:netscaler:syslog\"` optional for GSLB/health probe correlation.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStandardize `ZoneName` and delivery group in broker events. Create lookups for expected zone–delivery-group mappings. Alert when failover_path volume exceeds baseline, when zone membership churn appears, or when a zone has zero registered workers during business hours. Enrich with NetScaler or WAN metrics if you need proof of network cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" (match(_raw, \"(?i)zone|failover|preferred|registration|chassis|data.?store|inter.?zone\") OR event_type IN (\"Zone*\", \"Registration\", \"Configuration\"))\n| eval zone=coalesce(ZoneName, zone_name, Zone)\n| eval path=if(match(_raw, \"(?i)failover|secondary|not.?preferred|alternate\"), \"failover_path\", if(match(_raw, \"(?i)preferred|primary|home.?zone\"), \"preferred_path\", \"other\"))\n| where isnotnull(zone) OR path!=\"other\"\n| bin _time span=5m\n| stats count, values(event_type) as event_types, dc(host) as controller_count by _time, zone, path, delivery_group\n| sort -_time, zone, path\n```\n\nUnderstanding this SPL\n\n**Citrix Zone Topology and Zone Preference Failover** — Multi-zone CVAD sites route users to preferred zones; controllers and resources must register and broker in the right order. Unplanned failover traffic, inter-zone brokering storms, or machines registering outside their zone hint at network partitions, site misconfiguration, or loss of a preferred data path. Tracking zone-related broker events and preferred versus failover path selection shows topology stress before end-user latency spikes.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` (zone change, brokering path, `ZoneName`, `PreferredController`, `Failover*`), `sourcetype=\"citrix:monitor:odata\"` for `Zones` / `Controllers` if collected; `sourcetype=\"citrix:netscaler:syslog\"` optional for GSLB/health probe correlation. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix Monitor Service OData API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **zone** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **path** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(zone) OR path!=\"other\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, zone, path, delivery_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey or flow (preferred vs failover), Timeline (zone events), Table (anomalous delivery groups by zone).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on zone Topology and Zone Preference Failover and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.32",
              "n": "Hypervisor Connection Health Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Delivery Controllers use hypervisor connections to start, stop, and snapshot virtual machines. VMware vCenter loss, Hyper-V/SCVMM permission errors, certificate trust issues, and storage path failures surface as brokering or power-management failures. Early detection from broker `hosting connection` events, combined with a thin layer of hypervisor health, prevents large-scale session capacity loss during certificate rotations or vCenter maintenance.",
              "t": "Template for Citrix XenDesktop 7 (TA-XD7-Broker), Splunk Add-on for VMware, Splunk Add-on for Microsoft Windows (Hyper-V)",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` fields `hosting_connection_name`, `hypervisor_type`, `connection_state`, `ssl_error`, `certificate`, `hostingunit`, `ErrorMessage` (naming may vary by TA); `index=vmware` `sourcetype=\"vmware:inv:host\"` or vCenter health for `index=hyperv` `sourcetype=\"hyperv_host_health\"` as optional corroboration",
              "q": "index=xd sourcetype=\"citrix:broker:events\" match(_raw, \"(?i)host(ing)?\\s*connection|hypervisor|vCenter|Nutanix|XenServer|scvmm|cert|ssl|storage|connectivity\")\n| eval conn_state=coalesce(connection_state, ConnectionState, hypervisor_state, State)\n| eval hc_name=coalesce(hosting_connection_name, HostingUnitName, HostConnection, catalog_hosting_unit)\n| where match(coalesce(conn_state, \"\"), \"(?i)unknown|unavail|error|down|loss|denied|auth|fail|cert|ssl\") OR match(coalesce(ErrorMessage, Message, _raw), \"(?i)ssl|cert|permission|unauthorized|down|unreachable|storage\")\n| stats earliest(_time) as first_evt, latest(_time) as last_evt, count, values(ErrorMessage) as last_errors by hc_name, host, conn_state\n| sort - count",
              "m": "Map hosting connection event fields from your broker TA. For each `hosting_connection_name`, maintain a lookup for owner team and service window. Add optional append searches from `vmware` and `hyperv` indexes to enrich with upstream platform state. Alert on any new critical error type or sustained connection_state not `OK`.",
              "z": "Table (connection, state, first/last event), Map or swimlane (by hosting unit and hypervisor), Single value (count of bad connections).",
              "kfp": "vCenter or Hyper-V host rolling reboots, storage path failover tests, and hypervisor agent upgrades can briefly sever host connections. Layer hypervisor and storage change records on top; treat Citrix-only warnings as false positives when hosts were in a known cluster upgrade.",
              "refs": "[Citrix - Connections and management interfaces](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/221/install-configure/connections-hypervisor.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (TA-XD7-Broker), Splunk Add-on for VMware, Splunk Add-on for Microsoft Windows (Hyper-V).\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` fields `hosting_connection_name`, `hypervisor_type`, `connection_state`, `ssl_error`, `certificate`, `hostingunit`, `ErrorMessage` (naming may vary by TA); `index=vmware` `sourcetype=\"vmware:inv:host\"` or vCenter health for `index=hyperv` `sourcetype=\"hyperv_host_health\"` as optional corroboration.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap hosting connection event fields from your broker TA. For each `hosting_connection_name`, maintain a lookup for owner team and service window. Add optional append searches from `vmware` and `hyperv` indexes to enrich with upstream platform state. Alert on any new critical error type or sustained connection_state not `OK`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" match(_raw, \"(?i)host(ing)?\\s*connection|hypervisor|vCenter|Nutanix|XenServer|scvmm|cert|ssl|storage|connectivity\")\n| eval conn_state=coalesce(connection_state, ConnectionState, hypervisor_state, State)\n| eval hc_name=coalesce(hosting_connection_name, HostingUnitName, HostConnection, catalog_hosting_unit)\n| where match(coalesce(conn_state, \"\"), \"(?i)unknown|unavail|error|down|loss|denied|auth|fail|cert|ssl\") OR match(coalesce(ErrorMessage, Message, _raw), \"(?i)ssl|cert|permission|unauthorized|down|unreachable|storage\")\n| stats earliest(_time) as first_evt, latest(_time) as last_evt, count, values(ErrorMessage) as last_errors by hc_name, host, conn_state\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Hypervisor Connection Health Monitoring** — Delivery Controllers use hypervisor connections to start, stop, and snapshot virtual machines. VMware vCenter loss, Hyper-V/SCVMM permission errors, certificate trust issues, and storage path failures surface as brokering or power-management failures. Early detection from broker `hosting connection` events, combined with a thin layer of hypervisor health, prevents large-scale session capacity loss during certificate rotations or vCenter maintenance.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` fields `hosting_connection_name`, `hypervisor_type`, `connection_state`, `ssl_error`, `certificate`, `hostingunit`, `ErrorMessage` (naming may vary by TA); `index=vmware` `sourcetype=\"vmware:inv:host\"` or vCenter health for `index=hyperv` `sourcetype=\"hyperv_host_health\"` as optional corroboration. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (TA-XD7-Broker), Splunk Add-on for VMware, Splunk Add-on for Microsoft Windows (Hyper-V). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **conn_state** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **hc_name** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(coalesce(conn_state, \"\"), \"(?i)unknown|unavail|error|down|loss|denied|auth|fail|cert|ssl\") OR match(coalesce(Er…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by hc_name, host, conn_state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (connection, state, first/last event), Map or swimlane (by hosting unit and hypervisor), Single value (count of bad connections).",
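              "_spl_sketch": "Added illustrative sketch (underscore meta key, not part of the published schema): Step 1 suggests an owner/service-window lookup and an alert on sustained non-OK connection state. The `hosting_connection_owners` lookup and the 3-of-4 five-minute-bucket threshold are assumptions to adapt.\n\nindex=xd sourcetype=\"citrix:broker:events\" earliest=-20m\n| eval conn_state=coalesce(connection_state, ConnectionState, hypervisor_state, State)\n| eval hc_name=coalesce(hosting_connection_name, HostingUnitName, HostConnection, catalog_hosting_unit)\n| where isnotnull(hc_name) AND NOT match(coalesce(conn_state, \"OK\"), \"(?i)^ok\")\n| bin _time span=5m\n| stats dc(_time) as bad_intervals, latest(conn_state) as last_state by hc_name\n| where bad_intervals>=3\n| lookup hosting_connection_owners hc_name OUTPUT owner_team, service_window",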
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on hypervisor Connection Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Application_State"
              ],
              "e": [
                "citrix",
                "hyperv",
                "vmware"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.33",
              "n": "Citrix Autoscale Capacity Events",
              "c": "high",
              "f": "intermediate",
              "v": "Autoscale adjusts powered-on machine counts against load and time schedules. Stuck scale-out, aggressive scale-in, schedule drift, or throttled power actions create either idle unassigned capacity (cost) or under-provisioned pools (poor user experience). Aggregating autoscale- and power-related broker events and comparing with powered-on session counts from Monitor highlights drift and failed automation.",
              "t": "Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix Monitor Service OData API",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` for autoscale, power, and schedule-related actions, `sourcetype=\"citrix:monitor:odata\"` for `Autoscale*`, `DeliveryGroup`, or `Session` machine counts; optional `sourcetype=\"citrix:vda:events\"` for power-on success after scale-out",
              "q": "index=xd sourcetype=\"citrix:broker:events\" (match(_raw, \"(?i)autoscale|power.?on|turn.?on|turn.?off|scale.?in|scale.?out|power.?action|capaci\") OR event_type=Power* OR event_type=Autoscale*)\n| eval direction=if(match(_raw, \"(?i)scale.?out|turn.?on|add.*machine|increase\"), \"scale_out\", if(match(_raw, \"(?i)scale.?in|turn.?off|remove.*reduce\"), \"scale_in\", \"other\"))\n| eval dg=coalesce(delivery_group, DeliveryGroup, CatalogName)\n| eval success=if(match(coalesce(result, action_result, state), \"(?i)success|complete\"), 1, if(match(coalesce(result, action_result, state), \"(?i)fail|error|throttl|denied|pending\"), 0, null()))\n| bin _time span=15m\n| stats count, sum(eval(if(success=1,1,0))) as success_hits, sum(eval(if(success=0,1,0))) as fail_hits, dc(machine_name) as machine_moves by _time, dg, direction\n| where fail_hits>0 OR count>100\n| table _time, dg, direction, count, success_hits, fail_hits, machine_moves",
              "m": "Confirm event strings for your CVAD/Cloud version. Build baselines of scale_out vs load per delivery group. Join OData `InUse*`, `Registered*`, and `Unassigned*`-style fields when available. Alert on high fail_hits, zero scale_out during a ramp when usage rises, and sustained unassigned high-water marks outside policy.",
              "z": "Column chart (scale events by direction), Timechart (in-use vs registered machines), Table (failed power actions with delivery group).",
              "kfp": "Autoscale scale-out and scale-in during real peak logins and pilot capacity tests is normal economics, not a storm. Fire when scale is denied, pools hit a hard ceiling, or the schedule conflicts with a disabled or mis-set policy, not for every off-hours scale event.",
              "refs": "[Autoscale in Citrix DaaS](https://docs.citrix.com/en-us/citrix-daas-service/monitor/health-data/autoscale.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix Monitor Service OData API.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` for autoscale, power, and schedule-related actions, `sourcetype=\"citrix:monitor:odata\"` for `Autoscale*`, `DeliveryGroup`, or `Session` machine counts; optional `sourcetype=\"citrix:vda:events\"` for power-on success after scale-out.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfirm event strings for your CVAD/Cloud version. Build baselines of scale_out vs load per delivery group. Join OData `InUse*`, `Registered*`, and `Unassigned*`-style fields when available. Alert on high fail_hits, zero scale_out during a ramp when usage rises, and sustained unassigned high-water marks outside policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" (match(_raw, \"(?i)autoscale|power.?on|turn.?on|turn.?off|scale.?in|scale.?out|power.?action|capaci\") OR event_type=Power* OR event_type=Autoscale*)\n| eval direction=if(match(_raw, \"(?i)scale.?out|turn.?on|add.*machine|increase\"), \"scale_out\", if(match(_raw, \"(?i)scale.?in|turn.?off|remove.*reduce\"), \"scale_in\", \"other\"))\n| eval dg=coalesce(delivery_group, DeliveryGroup, CatalogName)\n| eval success=if(match(coalesce(result, action_result, state), \"(?i)success|complete\"), 1, if(match(coalesce(result, action_result, state), \"(?i)fail|error|throttl|denied|pending\"), 0, null()))\n| bin _time span=15m\n| stats count, sum(eval(if(success=1,1,0))) as success_hits, sum(eval(if(success=0,1,0))) as fail_hits, dc(machine_name) as machine_moves by _time, dg, direction\n| where fail_hits>0 OR count>100\n| table _time, dg, direction, count, success_hits, fail_hits, machine_moves\n```\n\nUnderstanding this SPL\n\n**Citrix Autoscale Capacity Events** — Autoscale adjusts powered-on machine counts against load and time schedules. Stuck scale-out, aggressive scale-in, schedule drift, or throttled power actions create either idle unassigned capacity (cost) or under-provisioned pools (poor user experience). Aggregating autoscale- and power-related broker events and comparing with powered-on session counts from Monitor highlights drift and failed automation.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` for autoscale, power, and schedule-related actions, `sourcetype=\"citrix:monitor:odata\"` for `Autoscale*`, `DeliveryGroup`, or `Session` machine counts; optional `sourcetype=\"citrix:vda:events\"` for power-on success after scale-out. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix Monitor Service OData API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **direction** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dg** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **success** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, dg, direction** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fail_hits>0 OR count>100` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Autoscale Capacity Events**): table _time, dg, direction, count, success_hits, fail_hits, machine_moves\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (scale events by direction), Timechart (in-use vs registered machines), Table (failed power actions with delivery group).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We follow when the system turns virtual computers on and off to match the day, and when that schedule slips or the machines fail to start — so you catch wasted power or a thin pool before it hurts real work.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Change",
                "Performance"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.34",
              "n": "Maintenance Mode and Drain Operations Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Maintenance mode and drain protect users during image updates, hypervisor work, and migrations. A large, unexpected, or long-lived maintenance footprint can silently reduce session capacity, especially if paired with autoscale. Tracking machines and delivery groups in maintenance and correlating with available capacity highlights operational drains versus true outages.",
              "t": "Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix Monitor Service OData API",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` or `sourcetype=\"citrix:monitor:odata\"` for `Machine` with `InMaintenanceMode`, `drain*`, or maintenance flags; `sourcetype=\"citrix:monitor:odata\"` for `Session` or capacity counts to correlate drops",
              "q": "index=xd (sourcetype=\"citrix:monitor:odata\" (ODataEntity=Machine* OR match(_raw, \"(?i)Maintenance|drain|suspend\")))\n| eval mmode=if(match(coalesce(InMaintenanceMode, maintenance_mode, raw_flags), \"(?i)true|1|yes|on\"), 1, 0)\n| eval dg=coalesce(delivery_group, DeliveryGroup, CatalogName)\n| where mmode=1\n| timechart span=1h sum(mmode) as machines_in_maint, dc(dg) as affected_groups, dc(MachineName) as affected_machines",
              "m": "Prefer OData or broker inventory that exposes maintenance state per machine. Add a change lookup to label known maintenance. Compare hourly capacity against baseline when `machines_in_maint` rises. Alert on maintenance outside approved windows or when drain exceeds a percentage of a delivery group without a ticket.",
              "z": "Stacked area (machines in maintenance by group), Bar chart (duration by catalog), Table (open maintenance with owner from lookup).",
              "kfp": "Wide drain and maintenance windows for OS patching can flood the index with 'session not accepting' style messages across many machines. Suppress on maint tags per machine and escalate only on drain failures outside an approved change window or with rising broker reject errors.",
              "refs": "[Put machines in maintenance - Citrix](https://docs.citrix.com/en-us/citrix-daas/deployment-guides/put-machines-into-maintenance.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix Monitor Service OData API.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` or `sourcetype=\"citrix:monitor:odata\"` for `Machine` with `InMaintenanceMode`, `drain*`, or maintenance flags; `sourcetype=\"citrix:monitor:odata\"` for `Session` or capacity counts to correlate drops.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPrefer OData or broker inventory that exposes maintenance state per machine. Add a change lookup to label known maintenance. Compare hourly capacity against baseline when `machines_in_maint` rises. Alert on maintenance outside approved windows or when drain exceeds a percentage of a delivery group without a ticket.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:monitor:odata\" (ODataEntity=Machine* OR match(_raw, \"(?i)Maintenance|drain|suspend\")))\n| eval mmode=if(match(coalesce(InMaintenanceMode, maintenance_mode, raw_flags), \"(?i)true|1|yes|on\"), 1, 0)\n| eval dg=coalesce(delivery_group, DeliveryGroup, CatalogName)\n| where mmode=1\n| timechart span=1h sum(mmode) as machines_in_maint, dc(dg) as affected_groups, dc(MachineName) as affected_machines\n```\n\nUnderstanding this SPL\n\n**Maintenance Mode and Drain Operations Tracking** — Maintenance mode and drain protect users during image updates, hypervisor work, and migrations. A large, unexpected, or long-lived maintenance footprint can silently reduce session capacity, especially if paired with autoscale. Tracking machines and delivery groups in maintenance and correlating with available capacity highlights operational drains versus true outages.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` or `sourcetype=\"citrix:monitor:odata\"` for `Machine` with `InMaintenanceMode`, `drain*`, or maintenance flags; `sourcetype=\"citrix:monitor:odata\"` for `Session` or capacity counts to correlate drops. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix Monitor Service OData API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:monitor:odata. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:monitor:odata\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mmode** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dg** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mmode=1` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area (machines in maintenance by group), Bar chart (duration by catalog), Table (open maintenance with owner from lookup).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We count how many virtual desktops are marked “do not use” for upgrades or fixes, and whether that is planned — so you catch surprise loss of open seats before it hurts real work.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.35",
              "n": "Pre-Launch and Lingering Session Management",
              "c": "medium",
              "f": "intermediate",
              "v": "Pre-launched and lingering sessions keep apps warm and retain user context, but they consume memory, session licenses, and power-managed capacity. Misconfigured idle timers, excessive pre-launch, or sessions stuck in disconnected state can exhaust pools and look like a capacity outage. Tuning visibility from broker and VDA events shows where session lifecycle policy diverges from design.",
              "t": "uberAgent UXM (Splunkbase 1448), Template for Citrix XenDesktop 7 (TA-XD7-Broker)",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` (session, idle, prelaunch, `SessionState`), `sourcetype=\"citrix:vda:events\"` (disconnect, idle timer), `index=uberagent` `sourcetype=\"uberAgent:Session:SessionInfo\"` or `sourcetype=\"uberAgent:Process:ProcessStartup\"` for per-session process counts",
              "q": "index=xd sourcetype=\"citrix:broker:events\" (match(_raw, \"(?i)pre[\\s-]*launch|lingering|ghost|idle|disc\") OR event_type IN (\"SessionDisconnect\", \"SessionInfo\") OR match(event_type, \"(?i)SessionPreLaunch\"))\n| eval session_type=if(match(_raw, \"(?i)pre[\\s-]*launch|prelaunch\"), \"prelaunch\", if(match(_raw, \"(?i)linger|disconnected|idle\"), \"idle_linger\", \"other\"))\n| where session_type!=\"other\"\n| bin _time span=1h\n| stats count, dc(user) as users, values(session_id) as sample_sessions by _time, session_type, delivery_group, published_app\n| table _time, session_type, delivery_group, published_app, count, users, sample_sessions",
              "m": "Map published app and user fields. For ghost capacity, also pull uberAgent session or host CPU to correlate pre-launch with sustained resource use. Compare counts against GPO- or policy-driven idle and disconnect timers. Alert when pre-launch or linger counts exceed rolling baselines, or when idle sessions outnumber active sessions in a business hour window.",
              "z": "Area chart (prelaunch vs idle_linger by group), Table (top published apps with linger), Donut (session type mix).",
              "kfp": "Pre-launch, disconnected-session timeout, and idle policies can leave long-lived sessions that look 'stuck' or 'lingering' by design. Baseline per delivery group, then alert when sessions exceed the documented GPO/Studio cap or when broker and VDA state disagree.",
              "refs": "[uberAgent UXM for Citrix](https://splunkbase.splunk.com/app/1448)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448), Template for Citrix XenDesktop 7 (TA-XD7-Broker).\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` (session, idle, prelaunch, `SessionState`), `sourcetype=\"citrix:vda:events\"` (disconnect, idle timer), `index=uberagent` `sourcetype=\"uberAgent:Session:SessionInfo\"` or `sourcetype=\"uberAgent:Process:ProcessStartup\"` for per-session process counts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap published app and user fields. For ghost capacity, also pull uberAgent session or host CPU to correlate pre-launch with sustained resource use. Compare counts against GPO- or policy-driven idle and disconnect timers. Alert when pre-launch or linger counts exceed rolling baselines, or when idle sessions outnumber active sessions in a business hour window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" (match(_raw, \"(?i)pre[\\s-]*launch|lingering|ghost|idle|disc\") OR event_type IN (\"SessionDisconnect\", \"SessionInfo\") OR match(event_type, \"(?i)SessionPreLaunch\"))\n| eval session_type=if(match(_raw, \"(?i)pre[\\s-]*launch|prelaunch\"), \"prelaunch\", if(match(_raw, \"(?i)linger|disconnected|idle\"), \"idle_linger\", \"other\"))\n| where session_type!=\"other\"\n| bin _time span=1h\n| stats count, dc(user) as users, values(session_id) as sample_sessions by _time, session_type, delivery_group, published_app\n| table _time, session_type, delivery_group, published_app, count, users, sample_sessions\n```\n\nUnderstanding this SPL\n\n**Pre-Launch and Lingering Session Management** — Pre-launched and lingering sessions keep apps warm and retain user context, but they consume memory, session licenses, and power-managed capacity. Misconfigured idle timers, excessive pre-launch, or sessions stuck in disconnected state can exhaust pools and look like a capacity outage. Tuning visibility from broker and VDA events shows where session lifecycle policy diverges from design.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` (session, idle, prelaunch, `SessionState`), `sourcetype=\"citrix:vda:events\"` (disconnect, idle timer), `index=uberagent` `sourcetype=\"uberAgent:Session:SessionInfo\"` or `sourcetype=\"uberAgent:Process:ProcessStartup\"` for per-session process counts. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448), Template for Citrix XenDesktop 7 (TA-XD7-Broker). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **session_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where session_type!=\"other\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, session_type, delivery_group, published_app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Pre-Launch and Lingering Session Management**): table _time, session_type, delivery_group, published_app, count, users, sample_sessions\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (prelaunch vs idle_linger by group), Table (top published apps with linger), Donut (session type mix).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We see when apps are left running in the background to start faster, and when people walk away and leave those sessions open — so you catch wasted space and slow computers before they hurt real work.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Sessions"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad",
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.36",
              "n": "Session Reliability and Auto Client Reconnect",
              "c": "high",
              "f": "intermediate",
              "v": "Session Reliability and Auto Client Reconnect mask brief network blips, but a rising ratio of full disconnects to successful reconnects indicates unstable paths, bad Wi-Fi, or gateway issues. VDA and broker events that mention WCF, keep-alives, reliability channels, and EDT/TCP flips, correlated with network-side syslogs, separate client-side noise from data-center incidents.",
              "t": "Template for Citrix XenDesktop 7 (TA-XD7-Broker), NetScaler/ADC syslog TA, uberAgent UXM (Splunkbase 1448) optional",
              "d": "`sourcetype=\"citrix:vda:events\"` and `sourcetype=\"citrix:broker:events\"` for `Session`, `WCF`, or connection reset messages; `index=netscaler` or `sourcetype=\"citrix:netscaler:syslog\"` for ICA/EDT drops on the gateway; optional `index=uberagent` for network and virtual channel health",
              "q": "index=xd (sourcetype=\"citrix:vda:events\" OR sourcetype=\"citrix:broker:events\") match(_raw, \"(?i)session reliability|reconnect|WCF|keep.?alive|auto.?client|ACR|ICA.*reset|edt|tcp.*(drop|reset)|udp\")\n| eval evt=if(match(_raw, \"(?i)reconnect|re.?establish|re.?connected|back online\"), \"reconnect\", if(match(_raw, \"(?i)disconnect|drop|reset|fail|unreachable\"), \"disrupt\", \"other\"))\n| where evt!=\"other\"\n| eval user=coalesce(user, UserName, ClientName)\n| bin _time span=5m\n| stats count, dc(user) as users by _time, evt, host, delivery_group\n| sort -_time, count",
              "m": "Normalize VDA and broker time zones. For Citrix Cloud or hybrid, ensure universal forwarders label site id. Add optional `append` to NetScaler `citrix:netscaler:syslog` for the same time window. Compute reconnect success ratio: `reconnect` counts vs `disrupt` counts per 5m per delivery group, alert when disrupt exceeds baseline by 2x for 3 intervals.",
              "z": "Multi-series line (disrupt vs reconnect), Timeline (outages), Map or table of affected delivery groups per site.",
              "kfp": "Home users on WiFi, travel VPNs, and unstable cellular paths trigger Session Reliability and auto-reconnect in bulk during benign conditions. Split by site or network profile and raise on sustained reconnection failure for landline and office cohorts, not a single flapping home office.",
              "refs": "[Session Reliability in CVAD / HDX](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/221/hdx/session-reliability.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (TA-XD7-Broker), NetScaler/ADC syslog TA, uberAgent UXM (Splunkbase 1448) optional.\n• Ensure the following data sources are available: `sourcetype=\"citrix:vda:events\"` and `sourcetype=\"citrix:broker:events\"` for `Session`, `WCF`, or connection reset messages; `index=netscaler` or `sourcetype=\"citrix:netscaler:syslog\"` for ICA/EDT drops on the gateway; optional `index=uberagent` for network and virtual channel health.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize VDA and broker time zones. For Citrix Cloud or hybrid, ensure universal forwarders label site id. Add optional `append` to NetScaler `citrix:netscaler:syslog` for the same time window. Compute reconnect success ratio: `reconnect` counts vs `disrupt` counts per 5m per delivery group, alert when disrupt exceeds baseline by 2x for 3 intervals.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:vda:events\" OR sourcetype=\"citrix:broker:events\") match(_raw, \"(?i)session reliability|reconnect|WCF|keep.?alive|auto.?client|ACR|ICA.*reset|edt|tcp.*(drop|reset)|udp\")\n| eval evt=if(match(_raw, \"(?i)reconnect|re.?establish|re.?connected|back online\"), \"reconnect\", if(match(_raw, \"(?i)disconnect|drop|reset|fail|unreachable\"), \"disrupt\", \"other\"))\n| where evt!=\"other\"\n| eval user=coalesce(user, UserName, ClientName)\n| bin _time span=5m\n| stats count, dc(user) as users by _time, evt, host, delivery_group\n| sort -_time, count\n```\n\nUnderstanding this SPL\n\n**Session Reliability and Auto Client Reconnect** — Session Reliability and Auto Client Reconnect mask brief network blips, but a rising ratio of full disconnects to successful reconnects indicates unstable paths, bad Wi-Fi, or gateway issues. VDA and broker events that mention WCF, keep-alives, reliability channels, and EDT/TCP flips, correlated with network-side syslogs, separate client-side noise from data-center incidents.\n\nDocumented **Data sources**: `sourcetype=\"citrix:vda:events\"` and `sourcetype=\"citrix:broker:events\"` for `Session`, `WCF`, or connection reset messages; `index=netscaler` or `sourcetype=\"citrix:netscaler:syslog\"` for ICA/EDT drops on the gateway; optional `index=uberagent` for network and virtual channel health. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (TA-XD7-Broker), NetScaler/ADC syslog TA, uberAgent UXM (Splunkbase 1448) optional. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:vda:events, citrix:broker:events. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:vda:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **evt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where evt!=\"other\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, evt, host, delivery_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Multi-series line (disrupt vs reconnect), Timeline (outages), Map or table of affected delivery groups per site.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on session Reliability and Auto Client Reconnect and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Network_Sessions"
              ],
              "e": [
                "citrix",
                "syslog"
              ],
              "em": [
                "citrix_cvad",
                "citrix_netscaler",
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.37",
              "n": "HDX Adaptive Transport (EDT) and Graphics Mode",
              "c": "high",
              "f": "advanced",
              "v": "HDX Adaptive Transport prefers UDP (EDT) for interactive traffic when the network is healthy, falling back to TCP when loss or delay is high. High packet loss, RTT, or constant fallback reduces perceived responsiveness and can force CPU-biased H.264/HEVC or software rendering, stressing hosts and user experience. uberAgent’s EDT and HDX remoting metrics complement Citrix VDA event strings on encoder choice and display pipeline pressure.",
              "t": "uberAgent UXM (Splunkbase 1448), optional Template for Citrix XenDesktop 7 (TA-XD7-Broker)",
              "d": "`index=uberagent` `sourcetype=\"uberAgent:Network:NetworkPerformanceEDT\"`, `sourcetype=\"uberAgent:Remoting:HDX*\"` or `sourcetype=\"uberAgent:GPU:*\"` for encoder or GPU use; `index=xd` `sourcetype=\"citrix:vda:events\"` for graphics and transport fallback messages; optional `sourcetype=\"citrix:netscaler:syslog\"` for UDP/ICA profile stats",
              "q": "index=uberagent (sourcetype=\"uberAgent:Network:NetworkPerformanceEDT\" OR sourcetype=\"uberAgent:Remoting:HDX*\") earliest=-1h\n| eval loss_pct=coalesce(UDPPacketLossPercent, UdpPacketLoss, PacketLoss), latency_ms=coalesce(UDPRTTms, AvgRttMs, Latency), fallback=if(match(coalesce(Transport, Protocol), \"(?i)tcp\"),1,0)\n| where loss_pct>2 OR latency_ms>150 OR fallback=1\n| bin _time span=5m\n| stats avg(loss_pct) as avg_loss, avg(latency_ms) as avg_rtt, sum(fallback) as fallbacks, dc(user) as users by _time, host, SessionId\n| table _time, host, users, avg_loss, avg_rtt, fallbacks",
              "m": "Deploy uberAgent on VDAs with network and remoting data enabled. Add field extractions for your exact uberAgent 7.x/8.x field names. Side-by-side: run a VDA search for 'policy', 'H264', 'HEVC', or 'YUV' in `citrix:vda:events` for policy-driven changes. Set threshold bands by site (WAN vs LAN).",
              "z": "Dual-axis chart (loss vs RTT), Heatmap (hosts by time), Table (top sessions with fallback).",
              "kfp": "GPU driver rollouts, EDT pilot toggles, and switching graphics modes during golden-image updates can change transport and rendering counters without user-facing outage. Join to build or driver change tickets before treating EDT fallback as a production performance incident.",
              "refs": "[uberAgent documentation - HDX/EDT](https://uberagent.com/docs/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448), optional Template for Citrix XenDesktop 7 (TA-XD7-Broker).\n• Ensure the following data sources are available: `index=uberagent` `sourcetype=\"uberAgent:Network:NetworkPerformanceEDT\"`, `sourcetype=\"uberAgent:Remoting:HDX*\"` or `sourcetype=\"uberAgent:GPU:*\"` for encoder or GPU use; `index=xd` `sourcetype=\"citrix:vda:events\"` for graphics and transport fallback messages; optional `sourcetype=\"citrix:netscaler:syslog\"` for UDP/ICA profile stats.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy uberAgent on VDAs with network and remoting data enabled. Add field extractions for your exact uberAgent 7.x/8.x field names. Side-by-side: run a VDA search for 'policy', 'H264', 'HEVC', or 'YUV' in `citrix:vda:events` for policy-driven changes. Set threshold bands by site (WAN vs LAN).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=uberagent (sourcetype=\"uberAgent:Network:NetworkPerformanceEDT\" OR sourcetype=\"uberAgent:Remoting:HDX*\") earliest=-1h\n| eval loss_pct=coalesce(UDPPacketLossPercent, UdpPacketLoss, PacketLoss), latency_ms=coalesce(UDPRTTms, AvgRttMs, Latency), fallback=if(match(coalesce(Transport, Protocol), \"(?i)tcp\"),1,0)\n| where loss_pct>2 OR latency_ms>150 OR fallback=1\n| bin _time span=5m\n| stats avg(loss_pct) as avg_loss, avg(latency_ms) as avg_rtt, sum(fallback) as fallbacks, dc(user) as users by _time, host, SessionId\n| table _time, host, users, avg_loss, avg_rtt, fallbacks\n```\n\nUnderstanding this SPL\n\n**HDX Adaptive Transport (EDT) and Graphics Mode** — HDX Adaptive Transport prefers UDP (EDT) for interactive traffic when the network is healthy, falling back to TCP when loss or delay is high. High packet loss, RTT, or constant fallback reduces perceived responsiveness and can force CPU-biased H.264/HEVC or software rendering, stressing hosts and user experience. uberAgent’s EDT and HDX remoting metrics complement Citrix VDA event strings on encoder choice and display pipeline pressure.\n\nDocumented **Data sources**: `index=uberagent` `sourcetype=\"uberAgent:Network:NetworkPerformanceEDT\"`, `sourcetype=\"uberAgent:Remoting:HDX*\"` or `sourcetype=\"uberAgent:GPU:*\"` for encoder or GPU use; `index=xd` `sourcetype=\"citrix:vda:events\"` for graphics and transport fallback messages; optional `sourcetype=\"citrix:netscaler:syslog\"` for UDP/ICA profile stats. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448), optional Template for Citrix XenDesktop 7 (TA-XD7-Broker). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: uberagent; **sourcetype**: uberAgent:Network:NetworkPerformanceEDT, uberAgent:Remoting:HDX*. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=uberagent, sourcetype=\"uberAgent:Network:NetworkPerformanceEDT\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **loss_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where loss_pct>2 OR latency_ms>150 OR fallback=1` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, SessionId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **HDX Adaptive Transport (EDT) and Graphics Mode**): table _time, host, users, avg_loss, avg_rtt, fallbacks\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Dual-axis chart (loss vs RTT), Heatmap (hosts by time), Table (top sessions with fallback).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We see when the line between a virtual desktop and a user is noisy or too slow, and when the system stops using the fast path for pictures — so you catch choppy screens before they hurt real work.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad",
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.38",
              "n": "Universal Print Server Health and Printing Failures",
              "c": "medium",
              "f": "intermediate",
              "v": "Citrix Universal Print Server offloads and compresses print traffic, but spooler instability, bad drivers, and printer auto-creation or mapping errors still break end-user print jobs. Monitoring Application and VDA event streams for spooler failures, print manager errors, and user-visible mapping failures differentiates a single bad queue from a site-wide print outage.",
              "t": "Splunk Add-on for Microsoft Windows, Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix Universal Print Server documentation-based field extractions",
              "d": "`sourcetype=\"citrix:vda:events\"` (`WinEventLog:Application` or CtxPrint / spooler events on VDA/UPS), `sourcetype=\"WinEventLog:Application\"` for Citrix Print Manager Service, `sourcetype=\"WinEventLog:System\"` for spooler service stops; `index=windows` for Universal Print forwarders if used",
              "q": "index=windows OR index=xd (sourcetype=\"WinEventLog:Application\" OR sourcetype=\"citrix:vda:events\") (match(_raw, \"(?i)Citrix.*Print|Universal Print|spooler|CtxPrint|render|driver|UPS\") OR match(_raw, \"(?i)printer.*(map|fail|error|offline)\")) OR EventCode=808\n| eval fail=if(match(_raw, \"(?i)fail|error|not found|denied|offline|stuck|abort\") OR match(Message, \"(?i)fail|error\"), 1, 0)\n| eval role=if(match(_raw, \"(?i)Universal Print Server|Citrix.*Print\"),\"UPS_VDA\", \"print_stack\")\n| where fail=1\n| stats count, values(Message) as sample_msg, values(host) as hosts, earliest(_time) as first_seen, latest(_time) as last_seen by role, user, client_name, printer_name, EventCode",
              "m": "Ingest VDA/UPS and brokering hosts into indexes with CIM-agnostic `props.conf` for long Message fields. Add printer allow/deny list lookups. Correlate with NetScaler/ADC only if you split print by site. Throttle on EventCode+printer driver hash to find systemic driver regressions. Alert when distinct hosts with fail=1 in 15m exceeds 3.",
              "z": "Table (failures with sample message), Pie chart (fail by driver or queue), Bar chart (failures per site).",
              "kfp": "Print driver upgrades, spooler restarts, and file-server maintenance on universal print or print servers can look like mass print failures. Align spikes with print infrastructure change windows; exclude known patch waves before rerouting a Citrix SEV to the EUC team alone.",
              "refs": "[Universal Print Server - Citrix](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/221/print/ups.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows, Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix Universal Print Server documentation-based field extractions.\n• Ensure the following data sources are available: `sourcetype=\"citrix:vda:events\"` (`WinEventLog:Application` or CtxPrint / spooler events on VDA/UPS), `sourcetype=\"WinEventLog:Application\"` for Citrix Print Manager Service, `sourcetype=\"WinEventLog:System\"` for spooler service stops; `index=windows` for Universal Print forwarders if used.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest VDA/UPS and brokering hosts into indexes with CIM-agnostic `props.conf` for long Message fields. Add printer allow/deny list lookups. Correlate with NetScaler/ADC only if you split print by site. Throttle on EventCode+printer driver hash to find systemic driver regressions. Alert when distinct hosts with fail=1 in 15m exceeds 3.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows OR index=xd (sourcetype=\"WinEventLog:Application\" OR sourcetype=\"citrix:vda:events\") (match(_raw, \"(?i)Citrix.*Print|Universal Print|spooler|CtxPrint|render|driver|UPS\") OR match(_raw, \"(?i)printer.*(map|fail|error|offline)\")) OR EventCode=808\n| eval fail=if(match(_raw, \"(?i)fail|error|not found|denied|offline|stuck|abort\") OR match(Message, \"(?i)fail|error\"), 1, 0)\n| eval role=if(match(_raw, \"(?i)Universal Print Server|Citrix.*Print\"),\"UPS_VDA\", \"print_stack\")\n| where fail=1\n| stats count, values(Message) as sample_msg, values(host) as hosts, earliest(_time) as first_seen, latest(_time) as last_seen by role, user, client_name, printer_name, EventCode\n```\n\nUnderstanding this SPL\n\n**Universal Print Server Health and Printing Failures** — Citrix Universal Print Server offloads and compresses print traffic, but spooler instability, bad drivers, and printer auto-creation or mapping errors still break end-user print jobs. Monitoring Application and VDA event streams for spooler failures, print manager errors, and user-visible mapping failures differentiates a single bad queue from a site-wide print outage.\n\nDocumented **Data sources**: `sourcetype=\"citrix:vda:events\"` (`WinEventLog:Application` or CtxPrint / spooler events on VDA/UPS), `sourcetype=\"WinEventLog:Application\"` for Citrix Print Manager Service, `sourcetype=\"WinEventLog:System\"` for spooler service stops; `index=windows` for Universal Print forwarders if used. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows, Template for Citrix XenDesktop 7 (TA-XD7-Broker), Citrix Universal Print Server documentation-based field extractions. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, xd; **sourcetype**: WinEventLog:Application, citrix:vda:events. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, index=xd, sourcetype=\"WinEventLog:Application\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **role** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by role, user, client_name, printer_name, EventCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failures with sample message), Pie chart (fail by driver or queue), Bar chart (failures per site).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We read the on-screen printer errors and system messages from your virtual session hosts — so you catch broken printers and stuck jobs before they hurt real work.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Application_State"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.39",
              "n": "USB and Peripheral Redirection Failures",
              "c": "medium",
              "f": "intermediate",
              "v": "USB, scanner, and smart card redirection, plus client drive mapping and clipboard, depend on VDA services, client versions, and Citrix/Windows policies. Failures are often per-device or per-user but can spike when a new policy, endpoint agent, or firmware change blocks channels. VDA and Application logs capture the denial reason, while optional uberAgent peripheral metrics confirm drop-off in hardware attach success rates.",
              "t": "Template for Citrix XenDesktop 7 (TA-XD7-Broker), Splunk Add-on for Microsoft Windows, optional uberAgent UXM (Splunkbase 1448)",
              "d": "`sourcetype=\"citrix:vda:events\"` (USB, TWAIN, WIA, `CtxUsb`, `CtxCam`), `sourcetype=\"WinEventLog:Application\"` for Citrix ICA client driver messages, `index=windows` for Group Policy/clipboard blocks if forwarded; optional `sourcetype=\"uberAgent:Peripheral:USB*\"` if you enable end-point visibility",
              "q": "(index=xd sourcetype=\"citrix:vda:events\" OR (index=windows (sourcetype=\"WinEventLog:Application\" OR sourcetype=\"citrix:vda:events\")) OR (index=uberagent sourcetype=\"uberAgent:Peripheral*\"))\n| search match(_raw, \"(?i)USB|TWAIN|WIA|redirect|peripheral|smart.?card|scard|clipboard|mapped drive|clpb|device.*(fail|deny|block|stop|stall)\")\n| eval channel=if(match(_raw, \"(?i)twain|wia|scan\"), \"imaging\", if(match(_raw, \"(?i)clipboard|clip\"), \"clipboard\", if(match(_raw, \"(?i)drive|mapped\"), \"drives\", \"usb_usb\")))\n| where match(_raw, \"(?i)fail|error|block|deny|policy|restric|not supported|time.?out|stall\")\n| stats count, values(Message) as sample, earliest(_time) as first_t, latest(_time) as last_t, dc(user) as users by host, channel, user\n| sort - count",
              "m": "Ingest a broad slice of VDA logs with USB/TWAIN categories enabled. Add policy lookup by AD group. Separate Help Desk false positives (unsupported devices) with `NOT match(device_class,\"(legacy)\")` style filters where fields exist. Correlate with NetScaler/ADC app flow only if the channel is not negotiated locally. Alert on new denial strings in a 24h compare.",
              "z": "Table (users and channels with sample errors), Pareto chart (error text top 10), Bar chart (failed channel by delivery group if joined).",
              "kfp": "Kiosk, engineering, and healthcare use cases with heavy USB or peripheral redirection are expected. Tune with delivery-group baselines, device-class allow/deny, and a pilot OU so legitimate redirected devices do not page as a blanket exfil risk.",
              "refs": "[HDX features - USB, TWAIN, drives](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops-2112/hdx/hdx-features-2112.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (TA-XD7-Broker), Splunk Add-on for Microsoft Windows, optional uberAgent UXM (Splunkbase 1448).\n• Ensure the following data sources are available: `sourcetype=\"citrix:vda:events\"` (USB, TWAIN, WIA, `CtxUsb`, `CtxCam`), `sourcetype=\"WinEventLog:Application\"` for Citrix ICA client driver messages, `index=windows` for Group Policy/clipboard blocks if forwarded; optional `sourcetype=\"uberAgent:Peripheral:USB*\"` if you enable end-point visibility.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest a broad slice of VDA logs with USB/TWAIN categories enabled. Add policy lookup by AD group. Separate Help Desk false positives (unsupported devices) with `NOT match(device_class,\"(legacy)\")` style filters where fields exist. Correlate with NetScaler/ADC app flow only if the channel is not negotiated locally. Alert on new denial strings in a 24h compare.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=xd sourcetype=\"citrix:vda:events\" OR (index=windows (sourcetype=\"WinEventLog:Application\" OR sourcetype=\"citrix:vda:events\")) OR (index=uberagent sourcetype=\"uberAgent:Peripheral*\"))\n| search match(_raw, \"(?i)USB|TWAIN|WIA|redirect|peripheral|smart.?card|scard|clipboard|mapped drive|clpb|device.*(fail|deny|block|stop|stall)\")\n| eval channel=if(match(_raw, \"(?i)twain|wia|scan\"), \"imaging\", if(match(_raw, \"(?i)clipboard|clip\"), \"clipboard\", if(match(_raw, \"(?i)drive|mapped\"), \"drives\", \"usb_usb\")))\n| where match(_raw, \"(?i)fail|error|block|deny|policy|restric|not supported|time.?out|stall\")\n| stats count, values(Message) as sample, earliest(_time) as first_t, latest(_time) as last_t, dc(user) as users by host, channel, user\n| sort - count\n```\n\nUnderstanding this SPL\n\n**USB and Peripheral Redirection Failures** — USB, scanner, and smart card redirection, plus client drive mapping and clipboard, depend on VDA services, client versions, and Citrix/Windows policies. Failures are often per-device or per-user but can spike when a new policy, endpoint agent, or firmware change blocks channels. VDA and Application logs capture the denial reason, while optional uberAgent peripheral metrics confirm drop-off in hardware attach success rates.\n\nDocumented **Data sources**: `sourcetype=\"citrix:vda:events\"` (USB, TWAIN, WIA, `CtxUsb`, `CtxCam`), `sourcetype=\"WinEventLog:Application\"` for Citrix ICA client driver messages, `index=windows` for Group Policy/clipboard blocks if forwarded; optional `sourcetype=\"uberAgent:Peripheral:USB*\"` if you enable end-point visibility. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (TA-XD7-Broker), Splunk Add-on for Microsoft Windows, optional uberAgent UXM (Splunkbase 1448). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd, windows, uberagent; **sourcetype**: citrix:vda:events, WinEventLog:Application, uberAgent:Peripheral*. 
Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, index=windows, index=uberagent, sourcetype=\"citrix:vda:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **channel** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(_raw, \"(?i)fail|error|block|deny|policy|restric|not supported|time.?out|stall\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, channel, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users and channels with sample errors), Pareto chart (error text top 10), Bar chart (failed channel by delivery group if joined).",
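\n\nExample alert stanza (savedsearches.conf)\nA minimal sketch of how the Step 2 search could be scheduled as an alert; the stanza name, schedule, and threshold are illustrative, not a shipped configuration:\n\n```ini\n# Illustrative alert: fires when any redirection-failure rows appear in the last 24h.\n[Citrix - USB and Peripheral Redirection Failures]\n# search = <paste the Step 2 SPL here on one line>\nenableSched = 1\ncron_schedule = 0 * * * *\ndispatch.earliest_time = -24h@h\ndispatch.latest_time = now\ncounttype = number of events\nrelation = greater than\nquantity = 0\nalert.track = 1\ndisabled = 0\n```\n\nAdjust the cron schedule and dispatch window to your alerting SLA; see docs/implementation-guide.md for where savedsearches.conf lives in your app.",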
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We look for the hidden notes when a camera, USB stick, or copy-paste path is turned off or breaks inside a remote session — so you catch blocked gadgets before they hurt real work.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Endpoint"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad",
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.40",
              "n": "Citrix App Layering Health and Layer Attach Status",
              "c": "high",
              "f": "advanced",
              "v": "Citrix App Layering delivers OS and application layers to MCS, PVS, and elastic deployments. The elastic appliance, packaging connector, and on-VDA mount stack must all stay healthy; a failed package cache, attach timeout, or ELM/connector outage blocks user desktops at boot or sign-in. Windows Application logs on the ELM and connector roles plus VDA layer messages paint an end-to-end path from packaging through attach.",
              "t": "Splunk Add-on for Microsoft Windows, custom scripted or HEC input for App Layering management API, Template for Citrix XenDesktop 7 (TA-XD7-Broker) for VDA",
              "d": "`sourcetype=\"WinEventLog:Application\"` on ELM/connector servers (`unifltr`, `pvs`, `svmgr`), `sourcetype=\"citrix:vda:events\"` for layer attach, `sourcetype=\"citrix:pvs:events\"` when App Layering pairs with PVS, HTTP(S) or scripted inputs from App Layering ELM APIs if you export jobs to text",
              "q": "index=windows OR index=xd (sourcetype=\"WinEventLog:Application\" OR sourcetype=\"citrix:vda:events\" OR sourcetype=\"citrix:pvs:events\") match(_raw, \"(?i)App\\s*Layer|layering|unifl|svmgr|ELM|layer (attach|mount|roll|package|not found|fail|cache)\")\n| eval component=if(match(host, \"(?i)elm|layering|manager\"), \"elm\", if(match(_raw, \"(?i)PVS|vDisk\"), \"pvs\", \"vda\"))\n| where match(_raw, \"(?i)fail|error|timeout|unavail|mismatch|cache.*(miss|full|corrupt)|not mounted\")\n| bin _time span=15m\n| stats count, values(Message) as msg_sample, dc(host) as hosts, dc(user) as users by _time, component, host\n| sort - count",
              "m": "Classify hosts by `elm|connector|vda` using a host lookup. When ELM is Linux-only, push syslog or a JSON HEC path instead of `WinEventLog`. Track cache disk usage for packaging machines via a separate capacity UC. Deduplicate noisy retry loops with `streamstats` or by trimming `count`>100/min bursts.",
              "z": "Swimlane (ELM vs VDA issues), Table (message samples), Single value (open critical errors in 24h).",
              "kfp": "Elastic layering attach on first logon, layer pack upgrades, and user-driven layer repair can show transient 'not ready' or attach warnings. Correlate with a new published layer version and first-boot after publish before assuming broken layering infrastructure.",
              "refs": "[Citrix App Layering - Monitor and troubleshoot](https://docs.citrix.com/en-us/citrix-app-layering/4/monitor/monitor.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows, custom scripted or HEC input for App Layering management API, Template for Citrix XenDesktop 7 (TA-XD7-Broker) for VDA.\n• Ensure the following data sources are available: `sourcetype=\"WinEventLog:Application\"` on ELM/connector servers (`unifltr`, `pvs`, `svmgr`), `sourcetype=\"citrix:vda:events\"` for layer attach, `sourcetype=\"citrix:pvs:events\"` when App Layering pairs with PVS, HTTP(S) or scripted inputs from App Layering ELM APIs if you export jobs to text.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nClassify hosts by `elm|connector|vda` using a host lookup. When ELM is Linux-only, push syslog or a JSON HEC path instead of `WinEventLog`. Track cache disk usage for packaging machines via a separate capacity UC. Deduplicate noisy retry loops with `streamstats` or by trimming `count`>100/min bursts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows OR index=xd (sourcetype=\"WinEventLog:Application\" OR sourcetype=\"citrix:vda:events\" OR sourcetype=\"citrix:pvs:events\") match(_raw, \"(?i)App\\s*Layer|layering|unifl|svmgr|ELM|layer (attach|mount|roll|package|not found|fail|cache)\")\n| eval component=if(match(host, \"(?i)elm|layering|manager\"), \"elm\", if(match(_raw, \"(?i)PVS|vDisk\"), \"pvs\", \"vda\"))\n| where match(_raw, \"(?i)fail|error|timeout|unavail|mismatch|cache.*(miss|full|corrupt)|not mounted\")\n| bin _time span=15m\n| stats count, values(Message) as msg_sample, dc(host) as hosts, dc(user) as users by _time, component, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Citrix App Layering Health and Layer Attach Status** — Citrix App Layering delivers OS and application layers to MCS, PVS, and elastic deployments. The elastic appliance, packaging connector, and on-VDA mount stack must all stay healthy; a failed package cache, attach timeout, or ELM/connector outage blocks user desktops at boot or sign-in. Windows Application logs on the ELM and connector roles plus VDA layer messages paint an end-to-end path from packaging through attach.\n\nDocumented **Data sources**: `sourcetype=\"WinEventLog:Application\"` on ELM/connector servers (`unifltr`, `pvs`, `svmgr`), `sourcetype=\"citrix:vda:events\"` for layer attach, `sourcetype=\"citrix:pvs:events\"` when App Layering pairs with PVS, HTTP(S) or scripted inputs from App Layering ELM APIs if you export jobs to text. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows, custom scripted or HEC input for App Layering management API, Template for Citrix XenDesktop 7 (TA-XD7-Broker) for VDA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, xd; **sourcetype**: WinEventLog:Application, citrix:vda:events, citrix:pvs:events. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, index=xd, sourcetype=\"WinEventLog:Application\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **component** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(_raw, \"(?i)fail|error|timeout|unavail|mismatch|cache.*(miss|full|corrupt)|not mounted\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, component, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Swimlane (ELM vs VDA issues), Table (message samples), Single value (open critical errors in 24h).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
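\n\nHEC forwarding example (generic sketch)\nIf you export ELM job status via the App Layering management API, a small script can post each record to an HTTP Event Collector token instead of writing files for a monitor input. A minimal illustrative sketch; the endpoint, token, sourcetype, and JSON shape are placeholders, not a documented Citrix integration:\n\n```bash\n#!/usr/bin/env bash\n# Post one ELM job record to Splunk HEC (URL and token are placeholders).\nHEC_URL=\"https://splunk.example.com:8088/services/collector/event\"\nHEC_TOKEN=\"00000000-0000-0000-0000-000000000000\"\ncurl -sk \"$HEC_URL\" -H \"Authorization: Splunk $HEC_TOKEN\" -d '{\"sourcetype\": \"citrix:applayering:elm\", \"index\": \"xd\", \"event\": {\"job\": \"PublishLayer\", \"status\": \"Failed\"}}'\n```\n\nSchedule it with cron or a scripted input, and keep the token in a secrets store rather than hard-coded in the script.",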
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We read the service logs for the “layers” of apps stacked onto a virtual desktop, from the back-room packaging server to the moment they snap into place on login — so you catch bad stacks before they hurt real work.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.41",
              "n": "FSLogix and Profile Container Health",
              "c": "high",
              "f": "intermediate",
              "v": "FSLogix profile and Office container disks live on fast SMB file shares. Slow attach, VHD reconnection failures, runaway VHDX growth, and share latency surface as long logon times or read-only profiles. Correlating FSLogix Application events with the profile phase in uberAgent logon data isolates the share path versus client-side issues faster than GPO review alone.",
              "t": "Splunk Add-on for Microsoft Windows, uberAgent UXM (Splunkbase 1448), Microsoft FSLogix policy documentation",
              "d": "`sourcetype=\"WinEventLog:Application\"` source=FSLogix* or `Message=*FSLogix*`, `Perfmon:LogicalDisk` on profile share, `sourcetype=\"WinEventLog:System\"` for VHD/Filter Manager; `index=uberagent` `sourcetype=\"uberAgent:Logon:LogonDetail\"` for profile phase timing; optional SMB `\\Server\\path` path latency with synthetic scripts into `sourcetype=fslogix:synthetic`",
              "q": "index=windows (sourcetype=\"WinEventLog:Application\" (source=\"*FSLogix*\" OR source=\"*frx*\")) OR (sourcetype=\"WinEventLog:System\" EventCode=50)\n| search match(_raw, \"(?i)FSLogix|frx|profile|containe|VHDX|VHD |reparse|reconnect|load.*fail|attach.*fail|size|quota|latency\")\n| eval severity=if(match(_raw, \"(?i)fail|error|could not|denied|timeout|locked|reparse\"), \"error\", if(match(_raw, \"(?i)warn|slow|throttl|retry\"), \"warning\", \"info\"))\n| where severity!=\"info\"\n| join type=left user [search index=uberagent sourcetype=\"uberAgent:Logon:LogonDetail\" earliest=-4h | stats latest(ProfileLoad) as uem_profile_s by user]\n| table _time, host, user, Message, severity, uem_profile_s",
              "m": "Ingest all FSLogix-related Application events and enable logical disk or SMB perf counters for share volumes. Set alerts on new error text patterns and on profile time >30s p95. Track VHD file size with a daily scripted inventory if not in events. For multi-site, tag share names with region and add synthetic SMB probes. Join carefully on `user` to avoid overmatching service accounts.",
              "z": "Timeline (FSLogix errors), Line chart (profile phase from uberAgent), Table (VHD size growth if inventoried).",
              "kfp": "Antivirus on-access scans, profile container rehydration, and one-off VHDX compact jobs spike FSLogix I/O errors briefly on large profiles. Use a short time window, exclude the backup hours job class, and check FSLogix filter and driver version against the last Windows LCU.",
              "refs": "[FSLogix documentation - Microsoft](https://learn.microsoft.com/en-us/fslogix/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows, uberAgent UXM (Splunkbase 1448), Microsoft FSLogix policy documentation.\n• Ensure the following data sources are available: `sourcetype=\"WinEventLog:Application\"` source=FSLogix* or `Message=*FSLogix*`, `Perfmon:LogicalDisk` on profile share, `sourcetype=\"WinEventLog:System\"` for VHD/Filter Manager; `index=uberagent` `sourcetype=\"uberAgent:Logon:LogonDetail\"` for profile phase timing; optional SMB `\\Server\\path` path latency with synthetic scripts into `sourcetype=fslogix:synthetic`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest all FSLogix-related Application events and enable logical disk or SMB perf counters for share volumes. Set alerts on new error text patterns and on profile time >30s p95. Track VHD file size with a daily scripted inventory if not in events. For multi-site, tag share names with region and add synthetic SMB probes. Join carefully on `user` to avoid overmatching service accounts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows (sourcetype=\"WinEventLog:Application\" (source=\"*FSLogix*\" OR source=\"*frx*\")) OR (sourcetype=\"WinEventLog:System\" EventCode=50)\n| search match(_raw, \"(?i)FSLogix|frx|profile|containe|VHDX|VHD |reparse|reconnect|load.*fail|attach.*fail|size|quota|latency\")\n| eval severity=if(match(_raw, \"(?i)fail|error|could not|denied|timeout|locked|reparse\"), \"error\", if(match(_raw, \"(?i)warn|slow|throttl|retry\"), \"warning\", \"info\"))\n| where severity!=\"info\"\n| join type=left user [search index=uberagent sourcetype=\"uberAgent:Logon:LogonDetail\" earliest=-4h | stats latest(ProfileLoad) as uem_profile_s by user]\n| table _time, host, user, Message, severity, uem_profile_s\n```\n\nUnderstanding this SPL\n\n**FSLogix and Profile Container Health** — FSLogix profile and Office container disks live on fast SMB file shares. Slow attach, VHD reconnection failures, runaway VHDX growth, and share latency surface as long logon times or read-only profiles. Correlating FSLogix Application events with the profile phase in uberAgent logon data isolates the share path versus client-side issues faster than GPO review alone.\n\nDocumented **Data sources**: `sourcetype=\"WinEventLog:Application\"` source=FSLogix* or `Message=*FSLogix*`, `Perfmon:LogicalDisk` on profile share, `sourcetype=\"WinEventLog:System\"` for VHD/Filter Manager; `index=uberagent` `sourcetype=\"uberAgent:Logon:LogonDetail\"` for profile phase timing; optional SMB `\\Server\\path` path latency with synthetic scripts into `sourcetype=fslogix:synthetic`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows, uberAgent UXM (Splunkbase 1448), Microsoft FSLogix policy documentation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Application, WinEventLog:System. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Application\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where severity!=\"info\"` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **FSLogix and Profile Container Health**): table _time, host, user, Message, severity, uem_profile_s\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (FSLogix errors), Line chart (profile phase from uberAgent), Table (VHD size growth if inventoried).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
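\n\nCompanion alert sketch (profile phase p95)\nStep 1 suggests alerting when profile load exceeds 30 s at p95. A minimal illustrative SPL sketch; the sourcetype and `ProfileLoad` field come from the uberAgent data documented above, so verify the exact field names in your uberAgent build:\n\n```spl\nindex=uberagent sourcetype=\"uberAgent:Logon:LogonDetail\"\n| stats perc95(ProfileLoad) as p95_profile_s by host\n| where p95_profile_s > 30\n```\n\nSave it as a separate scheduled alert so profile-time regressions page even when no FSLogix error text is logged.",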
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on fSLogix and Profile Container Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Endpoint"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.42",
              "n": "Citrix Configuration Change Audit Trail",
              "c": "high",
              "f": "beginner",
              "v": "Unplanned or unauthorized changes to published resources, machine catalogs, entitlements, and policies are high-impact in VDI. Collecting a tamper-resistant trail from Windows process creation and Citrix admin audit events, plus any broker-side configuration events you expose, gives security and change teams evidence for investigations and attestation, not only for ITIL tickets.",
              "t": "Splunk Add-on for Microsoft Windows, Template for Citrix XenDesktop 7 (TA-XD7-Broker), optional Splunk Enterprise Security for correlation",
              "d": "`sourcetype=\"WinEventLog:Security\"` (4688, 4702) for `powershell*`, `mmc.exe`, `BrokerPowerShell.exe`, `Citrix*Studio*`, `index=xd` `sourcetype=\"citrix:broker:events\"` for admin/audit and publish changes, `sourcetype=\"linux_audit\"` or container logs for Cloud connectors if you separate admin API calls",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" (EventCode=4688 OR EventCode=4702)\n| search match(_raw, \"(?i)BrokerPowerShell|CVAD|XD.*Catalog|XenDesktop|Studio|Publish|Delivery.?Group|Machine.?Catalog|GPO|Broker\\\\bin|Get-Broker|Set-Broker|New-Broker|Remove-Broker\")\n| eval account=coalesce(Security_ID, user, src_user, Account_Name)\n| eval process=New_Process_Name\n| table _time, host, account, process, EventCode, CommandLine\n| append [search index=xd sourcetype=\"citrix:broker:events\" match(_raw, \"(?i)admin|audit|publish|unpublish|add.?desktop|change.?entitlement|polic|Studio\")]\n| sort - _time",
              "m": "Send Security logs from admin jump hosts and all Delivery Controllers. Enable command-line process auditing (4688) per Microsoft guidance. Harden: lock down who can run `BrokerPowerShell`. Enrich with asset identity for admin accounts. For Citrix DaaS, pipe Cloud Director API audit to Splunk. Retention: align to your compliance schedule (e.g. 1 year online).",
              "z": "Timeline (change events by admin), Table (raw command line), Bar chart (changes per day by team via lookup).",
              "kfp": "Studio automation, Terraform, GitOps, and regular scheduled exports to documentation systems generate many configuration diffs. Diff against the automation identity and approved change ID; the false positive is 'noise from script', not a silent attacker edit.",
              "refs": "[Citrix audit logging and reporting](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops-2112/operations/audit/audit-logging.html)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows, Template for Citrix XenDesktop 7 (TA-XD7-Broker), optional Splunk Enterprise Security for correlation.\n• Ensure the following data sources are available: `sourcetype=\"WinEventLog:Security\"` (4688, 4702) for `powershell*`, `mmc.exe`, `BrokerPowerShell.exe`, `Citrix*Studio*`, `index=xd` `sourcetype=\"citrix:broker:events\"` for admin/audit and publish changes, `sourcetype=\"linux_audit\"` or container logs for Cloud connectors if you separate admin API calls.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSend Security logs from admin jump hosts and all Delivery Controllers. Enable command-line process auditing (4688) per Microsoft guidance. Harden: lock down who can run `BrokerPowerShell`. Enrich with asset identity for admin accounts. For Citrix DaaS, pipe Cloud Director API audit to Splunk. Retention: align to your compliance schedule (e.g. 1 year online).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Security\" (EventCode=4688 OR EventCode=4702)\n| search match(_raw, \"(?i)BrokerPowerShell|CVAD|XD.*Catalog|XenDesktop|Studio|Publish|Delivery.?Group|Machine.?Catalog|GPO|Broker\\\\bin|Get-Broker|Set-Broker|New-Broker|Remove-Broker\")\n| eval account=coalesce(Security_ID, user, src_user, Account_Name)\n| eval process=New_Process_Name\n| table _time, host, account, process, EventCode, CommandLine\n| append [search index=xd sourcetype=\"citrix:broker:events\" match(_raw, \"(?i)admin|audit|publish|unpublish|add.?desktop|change.?entitlement|polic|Studio\")]\n| sort - _time\n```\n\nUnderstanding this SPL\n\n**Citrix Configuration Change Audit Trail** — Unplanned or unauthorized changes to published resources, machine catalogs, entitlements, and policies are high-impact in VDI. Collecting a tamper-resistant trail from Windows process creation and Citrix admin audit events, plus any broker-side configuration events you expose, gives security and change teams evidence for investigations and attestation, not only for ITIL tickets.\n\nDocumented **Data sources**: `sourcetype=\"WinEventLog:Security\"` (4688, 4702) for `powershell*`, `mmc.exe`, `BrokerPowerShell.exe`, `Citrix*Studio*`, `index=xd` `sourcetype=\"citrix:broker:events\"` for admin/audit and publish changes, `sourcetype=\"linux_audit\"` or container logs for Cloud connectors if you separate admin API calls. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows, Template for Citrix XenDesktop 7 (TA-XD7-Broker), optional Splunk Enterprise Security for correlation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Security\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **account** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **process** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Citrix Configuration Change Audit Trail**): table _time, host, account, process, EventCode, CommandLine\n• Appends rows from a subsearch with `append`.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (change events by admin), Table (raw command line), Bar chart (changes per day by team via lookup).",
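\n\nInputs example (generic sketch)\nOn Delivery Controllers and admin jump hosts the Security channel is collected by the Splunk Add-on for Microsoft Windows. A minimal illustrative inputs.conf stanza; the index name follows this catalog's conventions, so adjust to yours:\n\n```ini\n[WinEventLog://Security]\ndisabled = 0\nindex = windows\n```\n\nCommand-line capture in 4688 also requires the Windows audit policy 'Include command line in process creation events', as noted in Step 1.",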
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep a clear diary of who changed what in the app farm, from the admin console to scripted commands — so you catch surprises and finger-pointing before they hurt real work.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.43",
              "n": "Citrix Site Database Connectivity from Controllers",
              "c": "critical",
              "f": "intermediate",
              "v": "The Citrix site database is the single source of truth for registrations, entitlements, and broker decisions. If Delivery Controllers cannot reach the site database, users experience brokering failures, registration storms, and eventual site-wide service degradation. Proactively detecting connection retries, timeout errors, and authentication failures to the data store is essential before session launch capacity collapses. Correlate controller Application log events with `citrix:broker:events` to distinguish transient network blips from persistent connectivity loss.",
              "t": "Splunk Add-on for Microsoft Windows; Template for Citrix XenDesktop 7 (`TA-XD7-Broker`) for broker event normalization",
              "d": "`index=windows` `sourcetype=\"WinEventLog:Application\"` from Delivery Controllers for Citrix site database and configuration services; optional `index=xd` `sourcetype=\"citrix:broker:events\"` for correlated broker health, `EventCode` / `EventID` and `Message` text for connection timeouts and failure reasons",
              "q": "index=windows sourcetype=\"WinEventLog:Application\" (source=\"*Citrix*\" OR source=\"*Broker*\" OR Message=\"*site database*\" OR Message=\"*Site database*\")\n| search (Message=\"*connection*\" OR Message=\"*timeout*\" OR Message=\"*failed*\" OR Message=\"*unavailable*\") (Message=\"*database*\" OR Message=\"*SQL*\" OR Message=\"*data store*\")\n| bin _time span=5m\n| stats count as evt_count, values(EventCode) as event_codes, values(Message) as sample_msgs by host, _time\n| where evt_count > 0\n| sort -_time\n| table _time, host, event_codes, evt_count, sample_msgs",
              "m": "Forward Windows Application logs from every Delivery Controller. Add field extractions or `rex` to normalize database connection error text, SQL connectivity codes, and timeout indicators. Ingest or schedule-query `index=xd` broker events to correlate. Alert when any controller reports repeated site database connection failures in a five-minute window, or when a single error pattern exceeds your baseline. Suppress during planned database maintenance using a time-bound lookup. Document escalation to the DBA and Citrix site recovery runbooks.",
              "z": "Single value (open critical events), timechart of database-related errors by controller, table of recent error text with host, linked drilldown to broker event timeline.",
              "kfp": "SQL Always On failovers, index rebuilds, backup locks, and network ACL or firewall change tests can make controller database checks fail for seconds. Suppress on DBA and network maintenance, then look at SQL listener and firewall logs before a site-down declaration.",
              "refs": "[Citrix Databases — CVAD](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/databases.html), [uberAgent UXM (optional correlation on endpoints)](https://splunkbase.splunk.com/app/1448)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows; Template for Citrix XenDesktop 7 (`TA-XD7-Broker`) for broker event normalization.\n• Ensure the following data sources are available: `index=windows` `sourcetype=\"WinEventLog:Application\"` from Delivery Controllers for Citrix site database and configuration services; optional `index=xd` `sourcetype=\"citrix:broker:events\"` for correlated broker health, `EventCode` / `EventID` and `Message` text for connection timeouts and failure reasons.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Windows Application logs from every Delivery Controller. Add field extractions or `rex` to normalize database connection error text, SQL connectivity codes, and timeout indicators. Ingest or schedule-query `index=xd` broker events to correlate. Alert when any controller reports repeated site database connection failures in a five-minute window, or when a single error pattern exceeds your baseline. Suppress during planned database maintenance using a time-bound lookup. Document escalation…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Application\" (source=\"*Citrix*\" OR source=\"*Broker*\" OR Message=\"*site database*\" OR Message=\"*Site database*\")\n| search (Message=\"*connection*\" OR Message=\"*timeout*\" OR Message=\"*failed*\" OR Message=\"*unavailable*\") (Message=\"*database*\" OR Message=\"*SQL*\" OR Message=\"*data store*\")\n| bin _time span=5m\n| stats count as evt_count, values(EventCode) as event_codes, values(Message) as sample_msgs by host, _time\n| where evt_count > 0\n| sort -_time\n| table _time, host, event_codes, evt_count, sample_msgs\n```\n\nUnderstanding this SPL\n\n**Citrix Site Database Connectivity from Controllers** — The Citrix site database is the single source of truth for registrations, entitlements, and broker decisions. If Delivery Controllers cannot reach the site database, users experience brokering failures, registration storms, and eventual site-wide service degradation. Proactively detecting connection retries, timeout errors, and authentication failures to the data store is essential before session launch capacity collapses. Correlate controller Application log events with…\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Application\"` from Delivery Controllers for Citrix site database and configuration services; optional `index=xd` `sourcetype=\"citrix:broker:events\"` for correlated broker health, `EventCode` / `EventID` and `Message` text for connection timeouts and failure reasons. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows; Template for Citrix XenDesktop 7 (`TA-XD7-Broker`) for broker event normalization. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Application. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Application\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where evt_count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix Site Database Connectivity from Controllers**): table _time, host, event_codes, evt_count, sample_msgs\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (open critical events), timechart of database-related errors by controller, table of recent error text with host, linked drilldown to broker event timeline.",
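\n\nMaintenance suppression sketch\nStep 1 suggests suppressing alerts during planned database maintenance with a time-bound lookup. A minimal illustrative pattern; the lookup name `maintenance_windows` and its `window_start`/`window_end` epoch fields are hypothetical:\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Application\" source=\"*Citrix*\"\n| lookup maintenance_windows host OUTPUT window_start, window_end\n| where isnull(window_start) OR _time<window_start OR _time>window_end\n```\n\nMaintain the lookup from your change calendar so DBA windows drop out of alerting automatically.",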
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on site Database Connectivity from Controllers and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Databases"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.44",
              "n": "VDA Disk IOPS and Write Cache Utilization",
              "c": "high",
              "f": "intermediate",
              "v": "MCS, image management service I/O optimization, and PVS all rely on local cache volumes. When the write cache fills or RAM cache spills to disk under burst load, users see freezes, logon failures, and even blue screens. Tracking per-host disk read/write IOPS, queue time, and write-cache utilization on session hosts shows capacity and misconfiguration (undersized cache disk, wrong cache mode, storage latency) before user-visible outages. Combine endpoint metrics with PVS or MCS-specific events to explain growth versus a noisy neighbor on shared storage.",
              "t": "uberAgent UXM (Splunkbase 1448) — recommended for disk and volume metrics; plus Universal Forwarder for `citrix:vda:events` if you stream VDA service events",
              "d": "`index=uberagent` `sourcetype=\"uberAgent:Volume:DiskPerformance\"` (per-volume read/write IOPS, queue depth, percent busy); `index=xd` `sourcetype=\"citrix:vda:events\"` for MCS or image management cache overflow and RAM cache handoff messages; optional `sourcetype=\"citrix:pvs:stream\"` for PVS write-cache percent when you run Provisioning Services",
              "q": "index=uberagent sourcetype=\"uberAgent:Volume:DiskPerformance\"\n| where match(VolumeName, \"Cache|WCD|MCS|WriteCache|PVS|Differencing\", \"i\") OR 1=1\n| bin _time span=15m\n| stats avg(ReadIops) as read_iops, avg(WriteIops) as write_iops, avg(PercentDiskTime) as pct_busy, latest(WriteCacheUtilizationPct) as write_cache_util by host, VolumeName, _time\n| where write_cache_util > 80 OR read_iops > 20000 OR write_iops > 15000 OR pct_busy > 85\n| table _time, host, VolumeName, read_iops, write_iops, pct_busy, write_cache_util",
              "m": "Deploy uberAgent on session hosts. Confirm `uberAgent:Volume:DiskPerformance` (or the equivalent volume performance sourcetype in your build) lands in `index=uberagent`. Add optional scripted or log collection for VDA and PVS cache messages into `index=xd`. Create rolling baselines per hardware tier. Alert when write-cache use crosses a two-tier threshold (for example, 60% warning, 80% critical) or when IOPS and disk busy time together indicate saturation. Group by host and catalog to find mis-sized machines.",
              "z": "Timechart of read/write IOPS and disk busy %, single value for max write-cache use in fleet, table of worst hosts with volume name and cache utilization.",
              "kfp": "PVS vDisk merge, storage vMotion, antivirus full scans, and large patch installs spike disk IOPS and write cache usage together. Key on hosts running those jobs in a job lookup and compare to a same-hour baseline from last week, not a fixed raw cap.",
              "refs": "[uberAgent volume and disk performance](https://docs.uberagent.com/), [Cache for MCS — Citrix CVAD](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops-service/manage-deployment/mcs/mcs-storage.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448) — recommended for disk and volume metrics; plus Universal Forwarder for `citrix:vda:events` if you stream VDA service events.\n• Ensure the following data sources are available: `index=uberagent` `sourcetype=\"uberAgent:Volume:DiskPerformance\"` (per-volume read/write IOPS, queue depth, percent busy); `index=xd` `sourcetype=\"citrix:vda:events\"` for MCS or image management cache overflow and RAM cache handoff messages; optional `sourcetype=\"citrix:pvs:stream\"` for PVS write-cache percent when you run Provisioning Services.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy uberAgent on session hosts. Confirm `uberAgent:Volume:DiskPerformance` (or the equivalent volume performance sourcetype in your build) lands in `index=uberagent`. Add optional scripted or log collection for VDA and PVS cache messages into `index=xd`. Create rolling baselines per hardware tier. Alert when write-cache use crosses a two-tier threshold (for example, 60% warning, 80% critical) or when IOPS and disk busy time together indicate saturation. Group by host and catalog to find mis-s…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=uberagent sourcetype=\"uberAgent:Volume:DiskPerformance\"\n| where match(VolumeName, \"Cache|WCD|MCS|WriteCache|PVS|Differencing\", \"i\") OR 1=1\n| bin _time span=15m\n| stats avg(ReadIops) as read_iops, avg(WriteIops) as write_iops, avg(PercentDiskTime) as pct_busy, latest(WriteCacheUtilizationPct) as write_cache_util by host, VolumeName, _time\n| where write_cache_util > 80 OR read_iops > 20000 OR write_iops > 15000 OR pct_busy > 85\n| table _time, host, VolumeName, read_iops, write_iops, pct_busy, write_cache_util\n```\n\nUnderstanding this SPL\n\n**VDA Disk IOPS and Write Cache Utilization** — MCS, image management service I/O optimization, and PVS all rely on local cache volumes. When the write cache fills or RAM cache spills to disk under burst load, users see freezes, logon failures, and even blue screens. Tracking per-host disk read/write IOPS, queue time, and write-cache utilization on session hosts shows capacity and misconfiguration (undersized cache disk, wrong cache mode, storage latency) before user-visible outages. Combine endpoint metrics with PVS or…\n\nDocumented **Data sources**: `index=uberagent` `sourcetype=\"uberAgent:Volume:DiskPerformance\"` (per-volume read/write IOPS, queue depth, percent busy); `index=xd` `sourcetype=\"citrix:vda:events\"` for MCS or image management cache overflow and RAM cache handoff messages; optional `sourcetype=\"citrix:pvs:stream\"` for PVS write-cache percent when you run Provisioning Services. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448) — recommended for disk and volume metrics; plus Universal Forwarder for `citrix:vda:events` if you stream VDA service events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: uberagent; **sourcetype**: uberAgent:Volume:DiskPerformance. 
That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=uberagent, sourcetype=\"uberAgent:Volume:DiskPerformance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(VolumeName, \"Cache|WCD|MCS|WriteCache|PVS|Differencing\", \"i\") OR 1=1` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, VolumeName, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where write_cache_util > 80 OR read_iops > 20000 OR write_iops > 15000 OR pct_busy > 85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VDA Disk IOPS and Write Cache Utilization**): table _time, host, VolumeName, read_iops, write_iops, pct_busy, write_cache_util\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart of read/write IOPS and disk busy %, single value for max write-cache use in fleet, table of worst hosts with volume name and cache utilization.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
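\n\nBaseline comparison sketch (same hour last week)\nThe false-positive guidance above prefers a same-hour baseline from last week over a fixed raw cap. A minimal illustrative sketch using `timewrap`; the metric, span, and one-week wrap are illustrative choices:\n\n```spl\nindex=uberagent sourcetype=\"uberAgent:Volume:DiskPerformance\"\n| timechart span=15m avg(WriteIops) as write_iops\n| timewrap 1week\n```\n\nPlot the wrapped series together and alert only when this week's curve clears last week's by your chosen margin.",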
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We check how hard each session host is hitting its disks and how full its local write cache is, so you run out of room before a freeze or mass logon failure, not after.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad",
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.45",
              "n": "Machine Boot Storm Detection and Mitigation",
              "c": "high",
              "f": "advanced",
              "v": "A boot storm is a sudden, correlated surge of machine start and registration activity — for example at shift change or after maintenance — that can flood the hypervisor, storage, and broker queues. It causes long queue times, failed registrations, and slow logon even when per-machine health is good. You need a detection that works on the rate of starts per minute per catalog and delivery group, not only on a static machine count, plus a view of whether staggered start configurations are honored. The goal is to trigger proactive throttling, schedule spreading, and communications before users pile into failures.",
              "t": "Template for Citrix XenDesktop 7 (`TA-XD7-Broker`); optional uberAgent UXM (Splunkbase 1448) for boot duration on guests",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` (machine start, registration, power-on, brokering milestones); `index=xd` hypervisor or cloud audit if forwarded (optional burst correlation); `index=uberagent` `sourcetype=\"uberAgent:Machine:Boot\"` or host boot time metrics if you collect them",
              "q": "index=xd sourcetype=\"citrix:broker:events\" (event_type=\"VmPowerOn\" OR event_type=\"MachineStart\" OR event_type=\"MachineRegistration\" OR match(_raw, \"(power.?on|start|registration|boot)\", \"i\"))\n| eval boot_phase=coalesce(power_state, event_type, \"Unknown\")\n| bin _time span=1m\n| stats dc(machine_name) as machines_in_min, count as events_in_min, values(delivery_group) as dgs by _time, catalog_name\n| eventstats median(machines_in_min) as med_boots, stdev(machines_in_min) as stdev_boots by catalog_name\n| eval z_score=if(isnull(stdev_boots) OR stdev_boots=0, 0, (machines_in_min - med_boots) / stdev_boots)\n| where machines_in_min > 20 OR z_score > 3\n| sort - machines_in_min\n| table _time, catalog_name, dgs, machines_in_min, events_in_min, z_score",
              "m": "Ingest broker events with consistent `machine_name`, `catalog_name`, and `delivery_group` fields. Set absolute thresholds (e.g. more than 20 unique machines starting per minute) and relative thresholds (Z-score on the per-catalog rate versus the same time-of-day baseline). Add a secondary search that lists scheduled start tags or autoscale events if you model them. Integrate the alert with power-management policy owners so they can lengthen stagger windows or cap concurrent power operations. For proof, compare to hypervisor CPU ready time and storage latency dashboards.",
              "z": "Overlay timechart: machines started per minute by catalog, optional second axis for failed registrations, table of top peaks with z-score, Sankey or flow optional for maintenance window correlation.",
              "kfp": "Monday open, school-term starts, and single large batch logon events naturally concentrate boot load. Use adaptive or same-weekday seasonality; alert when boot time or queuing outlasts the historical peak for that calendar pattern or is paired with VDA registration failures only.",
              "refs": "[Citrix autoscale and scheduled actions](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops-service/manage-deployments/citrix-autoscale/about-autoscale.html), [Load management — brokering context](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/manage-load-balancing.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (`TA-XD7-Broker`); optional uberAgent UXM (Splunkbase 1448) for boot duration on guests.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` (machine start, registration, power-on, brokering milestones); `index=xd` hypervisor or cloud audit if forwarded (optional burst correlation); `index=uberagent` `sourcetype=\"uberAgent:Machine:Boot\"` or host boot time metrics if you collect them.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest broker events with consistent `machine_name`, `catalog_name`, and `delivery_group` fields. Set absolute thresholds (e.g. more than 20 unique machines starting per minute) and relative thresholds (Z-score on the per-catalog rate versus the same time-of-day baseline). Add a secondary search that lists scheduled start tags or autoscale events if you model them. Integrate the alert with power-management policy owners so they can lengthen stagger windows or cap concurrent power operations. For…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" (event_type=\"VmPowerOn\" OR event_type=\"MachineStart\" OR event_type=\"MachineRegistration\" OR match(_raw, \"(power.?on|start|registration|boot)\", \"i\"))\n| eval boot_phase=coalesce(power_state, event_type, \"Unknown\")\n| bin _time span=1m\n| stats dc(machine_name) as machines_in_min, count as events_in_min, values(delivery_group) as dgs by _time, catalog_name\n| eventstats median(machines_in_min) as med_boots, stdev(machines_in_min) as stdev_boots by catalog_name\n| eval z_score=if(isnull(stdev_boots) OR stdev_boots=0, 0, (machines_in_min - med_boots) / stdev_boots)\n| where machines_in_min > 20 OR z_score > 3\n| sort - machines_in_min\n| table _time, catalog_name, dgs, machines_in_min, events_in_min, z_score\n```\n\nUnderstanding this SPL\n\n**Machine Boot Storm Detection and Mitigation** — A boot storm is a sudden, correlated surge of machine start and registration activity — for example at shift change or after maintenance — that can flood the hypervisor, storage, and broker queues. It causes long queue times, failed registrations, and slow logon even when per-machine health is good. You need a detection that works on the rate of starts per minute per catalog and delivery group, not only on a static machine count, plus a view of whether staggered start…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` (machine start, registration, power-on, brokering milestones); `index=xd` hypervisor or cloud audit if forwarded (optional burst correlation); `index=uberagent` `sourcetype=\"uberAgent:Machine:Boot\"` or host boot time metrics if you collect them. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (`TA-XD7-Broker`); optional uberAgent UXM (Splunkbase 1448) for boot duration on guests. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. 
That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **boot_phase** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, catalog_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by catalog_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where machines_in_min > 20 OR z_score > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Machine Boot Storm Detection and Mitigation**): table _time, catalog_name, dgs, machines_in_min, events_in_min, z_score\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Overlay timechart: machines started per minute by catalog, optional second axis for failed registrations, table of top peaks with z-score, Sankey or flow optional for maintenance window correlation.",
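\n\nCompanion check (registration failures during the surge)\nThe false-positive note above treats a start surge as a real boot storm only when it is paired with VDA registration failures. A minimal illustrative sketch; the failure strings are assumptions to adapt to your broker event text:\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\"\n| where match(_raw, \"(?i)registration.*(fail|reject|time.?out)\")\n| timechart span=1m count as failed_registrations\n```\n\nOverlay this series on the boot-rate timechart and page only when both spike in the same window.",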
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We count how many virtual desktops start at the same time and flag unusual surges, so you calm a boot storm before storage and the hypervisor layer choke and the morning shift cannot log in.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad",
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.46",
              "n": "Citrix Monitor OData Load Index Trending",
              "c": "high",
              "f": "intermediate",
              "v": "The Citrix load evaluator reports a load index (0 to 10,000) that combines session count, application load, and other factors per machine. Trending that index at the machine and delivery group level shows who is overassigned before new sessions fail, validates load-balancing policy effectiveness, and helps capacity planning. Spikes in average or peak load index that persist across hours point to too few hosts, heavy users, or runaway published applications, not one-off blips. Pair this with session counts to separate genuine saturation from a broken load metric source.",
              "t": "Citrix Monitor Service OData poller (custom scripted input) or a supported Splunk Citrix add-on that writes `citrix:monitor:odata`",
              "d": "`index=xd` `sourcetype=\"citrix:monitor:odata\"` (Machines, Sessions, and LoadIndex fields from the Citrix Monitor Service OData collection); field aliases such as `load_index` or `LoadIndex` depending on the collector; `MachineName` / `CatalogName` / `DesktopGroupName` for grouping",
              "q": "index=xd sourcetype=\"citrix:monitor:odata\" odata_resource=\"Machines\"\n| eval li=tonumber(coalesce(load_index, LoadIndex, 0))\n| eval machine=coalesce(machine_name, MachineName, host)\n| eval dg=coalesce(delivery_group, DesktopGroupName, \"Unknown\")\n| where li >= 0\n| bin _time span=1h\n| stats latest(li) as load_index, max(li) as peak_li by machine, dg, _time\n| where peak_li > 5000\n| timechart max(peak_li) by dg",
              "m": "Stand up a scheduled OData poll with authentication to the on-premises Monitor service and persist JSON into `citrix:monitor:odata` with a stable `odata_resource` field. Map OData property names to lowercase Splunk fields for `LoadIndex` and `MachineName`. Create hourly or fifteen-minute baselines. Alert when peak load index exceeds 5,000 (tunable) for any machine for more than two consecutive samples, or when the delivery group average crosses your internal green/yellow line. Onboard a dashboard of top ten machines by load index with drilldowns to process and session data.",
              "z": "Line chart of max load index by delivery group, heatmap of machines over time, table of current worst offenders with load index and session count if joined.",
              "kfp": "Third-party monitoring exporters, Monitor OData version upgrades, and dashboard refreshes can change call patterns to the load index. Match software and connector releases; a sustained small drift without user KPI regression is usually tuning noise, not capacity doom.",
              "refs": "[Monitor Service and OData in CVAD](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/monitor-service.html), [Citrix — load management overview](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/manage-load-balancing.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Monitor Service OData poller (custom scripted input) or a supported Splunk Citrix add-on that writes `citrix:monitor:odata`.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:monitor:odata\"` (Machines, Sessions, and LoadIndex fields from the Citrix Monitor Service OData collection); field aliases such as `load_index` or `LoadIndex` depending on the collector; `MachineName` / `CatalogName` / `DesktopGroupName` for grouping.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStand up a scheduled OData poll with authentication to the on-premises Monitor service and persist JSON into `citrix:monitor:odata` with a stable `odata_resource` field. Map OData property names to lowercase Splunk fields for `LoadIndex` and `MachineName`. Create hourly or fifteen-minute baselines. Alert when peak load index exceeds 5,000 (tunable) for any machine for more than two consecutive samples, or when the delivery group average crosses your internal green/yellow line. Onboard a dashboar…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:monitor:odata\" odata_resource=\"Machines\"\n| eval li=tonumber(coalesce(load_index, LoadIndex, 0))\n| eval machine=coalesce(machine_name, MachineName, host)\n| eval dg=coalesce(delivery_group, DesktopGroupName, \"Unknown\")\n| where li >= 0\n| bin _time span=1h\n| stats latest(li) as load_index, max(li) as peak_li by machine, dg, _time\n| where peak_li > 5000\n| timechart max(peak_li) by dg\n```\n\nUnderstanding this SPL\n\n**Citrix Monitor OData Load Index Trending** — The Citrix load evaluator reports a load index (0 to 10,000) that combines session count, application load, and other factors per machine. Trending that index at the machine and delivery group level shows who is overassigned before new sessions fail, validates load-balancing policy effectiveness, and helps capacity planning. Spikes in average or peak load index that persist across hours point to too few hosts, heavy users, or runaway published applications, not one-off…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:monitor:odata\"` (Machines, Sessions, and LoadIndex fields from the Citrix Monitor Service OData collection); field aliases such as `load_index` or `LoadIndex` depending on the collector; `MachineName` / `CatalogName` / `DesktopGroupName` for grouping. **App/TA** (typical add-on context): Citrix Monitor Service OData poller (custom scripted input) or a supported Splunk Citrix add-on that writes `citrix:monitor:odata`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:monitor:odata. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:monitor:odata\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **li** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **machine** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dg** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where li >= 0` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by machine, dg, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where peak_li > 5000` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time with a separate series **by dg** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart of max load index by delivery group, heatmap of machines over time, table of current worst offenders with load index and session count if joined.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We track how “full” each desktop server is using Citrix’s own load number over time, so you add capacity or balance work before that host stops taking new people.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.47",
              "n": "Workspace App Client Version Distribution",
              "c": "medium",
              "f": "beginner",
              "v": "Citrix Workspace app versions and platforms drift quickly — users defer upgrades, some branches are blocked by legacy tools, and mobile platforms patch on different cadences. A wide long tail of old clients increases your support cost, security exposure, and feature inconsistency. Reporting client version share by platform (Windows, Mac, Linux, iOS, Android) supports compliance with internal standards, tells you which upgrade campaigns worked, and highlights obsolete builds that should be blocked at the gateway. This is not a one-time audit; you want scheduled visibility after every gateway or StoreFront change.",
              "t": "Template for Citrix XenDesktop 7 (`TA-XD7-Broker`) and optional Citrix Monitor OData poller for session details",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` (session or connection events that carry `ClientVersion`, `ClientProductId`, and `ClientAddress` or platform tags); `index=xd` `sourcetype=\"citrix:monitor:odata\"` `Sessions` resource if the broker event lacks detail",
              "q": "index=xd sourcetype=\"citrix:broker:events\" (event_type=\"SessionConnection\" OR event_type=\"ConnectionLogon\" OR event_type=\"SessionInfo\")\n| eval cv=coalesce(client_version, ClientVersion, workspace_version, \"unknown\")\n| eval platform=coalesce(client_platform, os_type, client_os, \"unknown\")\n| where cv!=\"unknown\"\n| stats count as sessions, dc(user) as users, dc(host) as hosts by cv, platform\n| eventstats sum(sessions) as total_sessions\n| eval pct=round(100 * sessions / total_sessions, 2)\n| sort - sessions\n| table cv, platform, users, sessions, pct",
              "m": "Ensure client version fields are present on at least one reliable event (often session start from broker or a Monitor OData `Sessions` backfill). Build a `lookup` of approved `client_version` per platform. Schedule a weekly or daily report, not an alert, unless a version is explicitly banned — then alert when `pct` for that version is nonzero. For executive views, show stacked percentage bars by platform. Feed the data into your software-asset and endpoint-management teams for package targeting.",
              "z": "Pie or treemap of versions, stacked bar by platform, table of versions with percent of sessions, optional single value for count of unapproved clients via lookup match.",
              "kfp": "Ring-based Workspace app rollouts, Intune staged deployments, and pilot AD groups will skew the version mix from week to week. Chart by update channel and OU, not a single org-wide 'must be 100% latest' line that will never hold during pilots.",
              "refs": "[Citrix Workspace app lifecycle matrix](https://docs.citrix.com/en-us/citrix-workspace-app-for-windows/whats-new.html), [Session data from Monitor (context)](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/monitor-service.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (`TA-XD7-Broker`) and optional Citrix Monitor OData poller for session details.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` (session or connection events that carry `ClientVersion`, `ClientProductId`, and `ClientAddress` or platform tags); `index=xd` `sourcetype=\"citrix:monitor:odata\"` `Sessions` resource if the broker event lacks detail.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure client version fields are present on at least one reliable event (often session start from broker or a Monitor OData `Sessions` backfill). Build a `lookup` of approved `client_version` per platform. Schedule a weekly or daily report, not an alert, unless a version is explicitly banned — then alert when `pct` for that version is nonzero. For executive views, show stacked percentage bars by platform. Feed the data into your software-asset and endpoint-management teams for package targeting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" (event_type=\"SessionConnection\" OR event_type=\"ConnectionLogon\" OR event_type=\"SessionInfo\")\n| eval cv=coalesce(client_version, ClientVersion, workspace_version, \"unknown\")\n| eval platform=coalesce(client_platform, os_type, client_os, \"unknown\")\n| where cv!=\"unknown\"\n| stats count as sessions, dc(user) as users, dc(host) as hosts by cv, platform\n| eventstats sum(sessions) as total_sessions\n| eval pct=round(100 * sessions / total_sessions, 2)\n| sort - sessions\n| table cv, platform, users, sessions, pct\n```\n\nUnderstanding this SPL\n\n**Workspace App Client Version Distribution** — Citrix Workspace app versions and platforms drift quickly — users defer upgrades, some branches are blocked by legacy tools, and mobile platforms patch on different cadences. A wide long tail of old clients increases your support cost, security exposure, and feature inconsistency. Reporting client version share by platform (Windows, Mac, Linux, iOS, Android) supports compliance with internal standards, tells you which upgrade campaigns worked, and highlights obsolete builds…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` (session or connection events that carry `ClientVersion`, `ClientProductId`, and `ClientAddress` or platform tags); `index=xd` `sourcetype=\"citrix:monitor:odata\"` `Sessions` resource if the broker event lacks detail. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (`TA-XD7-Broker`) and optional Citrix Monitor OData poller for session details. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cv** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **platform** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cv!=\"unknown\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cv, platform** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` computes `sum(sessions)` as **total_sessions** and, unlike `stats`, appends it to every row instead of collapsing them, which is what lets the next `eval` derive each version's percentage share.\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Workspace App Client Version Distribution**): table cv, platform, users, sessions, pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie or treemap of versions, stacked bar by platform, table of versions with percent of sessions, optional single value for count of unapproved clients via lookup match.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We list which app versions and device types are actually connecting, so you see who is behind on upgrades before support tickets and old bugs come back in waves.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Endpoint"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.48",
              "n": "Published Application Inventory Drift",
              "c": "medium",
              "f": "intermediate",
              "v": "The published application catalog and delivery group assignments define what users can launch and from where. Unplanned additions — for example, an overly broad group entitlement — expand attack surface. Silent removals can break a department. Drift is often caught only after help desk tickets. Collecting and comparing app inventory over time, including who made the last change, supports change management, recertification, and quick forensic review if suspicious publishing appears. The goal is the same as infrastructure drift detection, but for desktop and app entitlements in Citrix rather than for cloud IaaS tags alone.",
              "t": "Template for Citrix XenDesktop 7 (`TA-XD7-Broker`); optional scripted OData application inventory; Splunk add-on for Windows security audit if you capture privileged Citrix admin accounts",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` (admin and configuration change events, published application and delivery group membership changes if forwarded); `index=xd` `sourcetype=\"citrix:monitor:odata\"` optional periodic snapshot of `Applications` and `ApplicationGroups` for diffing; Windows security or FMA audit if you consolidate admin actions",
              "q": "index=xd sourcetype=\"citrix:broker:events\"\n| where event_type IN (\"PublishedAppChange\", \"AppGroupChange\", \"AdminAction\") OR match(_raw, \"(?i)(publish|unpublish|application|delivery\\s*group|entitlement)\")\n| eval app=coalesce(app_name, ApplicationName, published_name, \"Unknown\")\n| eval change=coalesce(change_type, action, operation, event_type, \"change\")\n| eval actor=coalesce(admin_user, Actor, user, \"unknown\")\n| bin _time span=1d\n| stats count as changes, values(change) as change_types, values(delivery_group) as dgs by _time, app, actor\n| where changes>0\n| sort - _time\n| table _time, app, actor, change_types, dgs, changes",
              "m": "If native broker `event_type` values are not present, use daily OData `Applications` and `Outputlookup` a baseline table, then `diff` the next run with a scripted or Splunk custom command. For real-time, parse admin audit entries that include the admin SID or UPN. Alert when an application appears or disappears without a linked change record in your ITSM, or when `Actor` is not a known automation account. For security, pay special care to new publish actions to all-authenticated users or to broad Active Directory groups.",
              "z": "Changelog table with old versus new, timeline of app count by delivery group, single value for new apps in last 24 hours with drilldown to detail.",
              "kfp": "Application packaging sprints, bulk adds during catalog migrations, and nightly sync jobs from automation move published inventory counts. Scope alerts to changes outside a catalog publish or job ID, not every packaging weekend.",
              "refs": "[Publish applications in CVAD](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/publish.html), [Delegating administration and role-based access](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/delegated-administration.html)",
              "mitre": [
                "T1098",
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (`TA-XD7-Broker`); optional scripted OData application inventory; Splunk add-on for Windows security audit if you capture privileged Citrix admin accounts.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` (admin and configuration change events, published application and delivery group membership changes if forwarded); `index=xd` `sourcetype=\"citrix:monitor:odata\"` optional periodic snapshot of `Applications` and `ApplicationGroups` for diffing; Windows security or FMA audit if you consolidate admin actions.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIf native broker `event_type` values are not present, use daily OData `Applications` and `Outputlookup` a baseline table, then `diff` the next run with a scripted or Splunk custom command. For real-time, parse admin audit entries that include the admin SID or UPN. Alert when an application appears or disappears without a linked change record in your ITSM, or when `Actor` is not a known automation account. For security, pay special care to new publish actions to all-authenticated users or to broa…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\"\n| where event_type IN (\"PublishedAppChange\", \"AppGroupChange\", \"AdminAction\") OR match(_raw, \"(?i)(publish|unpublish|application|delivery\\s*group|entitlement)\")\n| eval app=coalesce(app_name, ApplicationName, published_name, \"Unknown\")\n| eval change=coalesce(change_type, action, operation, event_type, \"change\")\n| eval actor=coalesce(admin_user, Actor, user, \"unknown\")\n| bin _time span=1d\n| stats count as changes, values(change) as change_types, values(delivery_group) as dgs by _time, app, actor\n| where changes>0\n| sort - _time\n| table _time, app, actor, change_types, dgs, changes\n```\n\nUnderstanding this SPL\n\n**Published Application Inventory Drift** — The published application catalog and delivery group assignments define what users can launch and from where. Unplanned additions — for example, an overly broad group entitlement — expand attack surface. Silent removals can break a department. Drift is often caught only after help desk tickets. Collecting and comparing app inventory over time, including who made the last change, supports change management, recertification, and quick forensic review if suspicious publishing…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` (admin and configuration change events, published application and delivery group membership changes if forwarded); `index=xd` `sourcetype=\"citrix:monitor:odata\"` optional periodic snapshot of `Applications` and `ApplicationGroups` for diffing; Windows security or FMA audit if you consolidate admin actions. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (`TA-XD7-Broker`); optional scripted OData application inventory; Splunk add-on for Windows security audit if you capture privileged Citrix admin accounts. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where event_type IN (\"PublishedAppChange\", \"AppGroupChange\", \"AdminAction\") OR match(_raw, \"(?i)(publish|unpublish|applicat…` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **app** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **change** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **actor** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, app, actor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where changes>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Published Application Inventory Drift**): table _time, app, actor, change_types, dgs, changes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Changelog table with old versus new, timeline of app count by delivery group, single value for new apps in last 24 hours with drilldown to detail.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep a clear history of which apps are offered to whom and who changed that list, so surprise shortcuts and risky publishing do not sit unnoticed between formal reviews.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.49",
              "n": "Stuck Sessions and Ghost Session Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Sessions that sit disconnected beyond policy, that never complete logoff, or that remain in broker state when the session host is already gone, consume user licenses, load index, and file handles; they are common precursors to ghost or orphaned sessions. Ghost sessions that survive past the machine that hosted them complicate support and can block new connections for the same user. You want detection that works off authoritative session records and time-in-state, with thresholds aligned to your group policy and Citrix session reliability settings, plus a path to session host or broker session reset actions.",
              "t": "Template for Citrix XenDesktop 7 (`TA-XD7-Broker`); optional uberAgent UXM (Splunkbase 1448); optional Citrix Monitor OData",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` (session state, logoff, disconnect, reconnect); `index=xd` `sourcetype=\"citrix:monitor:odata\"` `Sessions` for `SessionState`, `ClientName`, and `LogoffDuration` when broker feed is thin; `index=uberagent` `sourcetype=\"uberAgent:Session:SessionDetail\"` for per-session end-to-end state",
              "q": "index=xd sourcetype=\"citrix:broker:events\" (event_type=\"SessionState\" OR event_type=\"SessionDisconnect\" OR event_type=\"SessionLogoff\")\n| eval state=coalesce(session_state, SessionState, status, \"Unknown\")\n| eval sess=coalesce(session_key, session_id, Uid, \"unknown\")\n| eval user=coalesce(user, UserName, \"unknown\")\n| eval machine=coalesce(machine_name, VDA, host, \"Unknown\")\n| where state IN (\"Disconnected\", \"StuckOnBroker\", \"PendingLogoff\", \"PreparingSession\", \"PreparingApplication\")\n| eval idle_sec=coalesce(idle_time_sec, disconnect_duration_sec, 0)\n| where idle_sec>28800\n| bin _time span=1h\n| stats count as bad_sessions, values(state) as states, max(idle_sec) as max_idle_sec, dc(user) as affected_users by machine, _time\n| where bad_sessions>0\n| table _time, machine, bad_sessions, affected_users, max_idle_sec, states",
              "m": "Align `idle_sec` and disconnect timers with GPO: disconnected session limit, logoff on disconnect, and session linger. Eight hours in the example SPL is a placeholder. Join broker and Monitor OData so you can see broker versus VDA truth; mismatch flags ghosts. For automation, use a runbook with Citrix `Get-BrokerSession` and reset cmdlets, not blind reboots. Alert on a machine with many long-lived disconnected states or a single user with repeated ghosts after migrations.",
              "z": "Table of long-lived sessions, heatmap of affected machines, sparkline of ghost count over time, optional link to a Director-equivalent view.",
              "kfp": "Disconnected session and idle policies leave sessions that NOCs label ghost while they are still within policy. Escalate when the same user or machine stays beyond the documented threshold or when broker and VDA last-seen times diverge, not a single 'old' row.",
              "refs": "[Session reliability and reconnection (Citrix)](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/ica-session-reliability.html), [Troubleshoot user issues — session disconnect](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/troubleshoot-user-issues.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (`TA-XD7-Broker`); optional uberAgent UXM (Splunkbase 1448); optional Citrix Monitor OData.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` (session state, logoff, disconnect, reconnect); `index=xd` `sourcetype=\"citrix:monitor:odata\"` `Sessions` for `SessionState`, `ClientName`, and `LogoffDuration` when broker feed is thin; `index=uberagent` `sourcetype=\"uberAgent:Session:SessionDetail\"` for per-session end-to-end state.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign `idle_sec` and disconnect timers with GPO: disconnected session limit, logoff on disconnect, and session linger. Eight hours in the example SPL is a placeholder. Join broker and Monitor OData so you can see broker versus VDA truth; mismatch flags ghosts. For automation, use a runbook with Citrix `Get-BrokerSession` and reset cmdlets, not blind reboots. Alert on a machine with many long-lived disconnected states or a single user with repeated ghosts after migrations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\" (event_type=\"SessionState\" OR event_type=\"SessionDisconnect\" OR event_type=\"SessionLogoff\")\n| eval state=coalesce(session_state, SessionState, status, \"Unknown\")\n| eval sess=coalesce(session_key, session_id, Uid, \"unknown\")\n| eval user=coalesce(user, UserName, \"unknown\")\n| eval machine=coalesce(machine_name, VDA, host, \"Unknown\")\n| where state IN (\"Disconnected\", \"StuckOnBroker\", \"PendingLogoff\", \"PreparingSession\", \"PreparingApplication\")\n| eval idle_sec=coalesce(idle_time_sec, disconnect_duration_sec, 0)\n| where idle_sec>28800\n| bin _time span=1h\n| stats count as bad_sessions, values(state) as states, max(idle_sec) as max_idle_sec, dc(user) as affected_users by machine, _time\n| where bad_sessions>0\n| table _time, machine, bad_sessions, affected_users, max_idle_sec, states\n```\n\nUnderstanding this SPL\n\n**Stuck Sessions and Ghost Session Detection** — Sessions that sit disconnected beyond policy, that never complete logoff, or that remain in broker state when the session host is already gone, consume user licenses, load index, and file handles; they are common precursors to ghost or orphaned sessions. Ghost sessions that survive past the machine that hosted them complicate support and can block new connections for the same user. You want detection that works off authoritative session records and time-in-state, with…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` (session state, logoff, disconnect, reconnect); `index=xd` `sourcetype=\"citrix:monitor:odata\"` `Sessions` for `SessionState`, `ClientName`, and `LogoffDuration` when broker feed is thin; `index=uberagent` `sourcetype=\"uberAgent:Session:SessionDetail\"` for per-session end-to-end state. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (`TA-XD7-Broker`); optional uberAgent UXM (Splunkbase 1448); optional Citrix Monitor OData. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **state** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sess** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **machine** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where state IN (\"Disconnected\", \"StuckOnBroker\", \"PendingLogoff\", \"PreparingSession\", \"PreparingApplication\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **idle_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where idle_sec>28800` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by machine, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where bad_sessions>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Stuck Sessions and Ghost Session Detection**): table _time, machine, bad_sessions, affected_users, max_idle_sec, states\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of long-lived sessions, heatmap of affected machines, sparkline of ghost count over time, optional link to a Director-equivalent view.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We look for people’s remote sessions that stay half-open for far too long or outlive the computer they were on, so you clear them before they eat licenses and block new logons.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Network_Sessions"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad",
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.50",
              "n": "VDA BSOD and Machine Stability Tracking",
              "c": "critical",
              "f": "intermediate",
              "v": "Blue screens, hard hangs, and unexpected reboots on session hosts are disproportionately disruptive: many users and published apps can fail in one incident. A single bugcheck may be a driver or GPU edge case; a cluster of the same stop code in one catalog points to a bad image, firmware, or policy rollout. You need a unified stream that captures bugcheck parameters from the System log, correlates with Citrix VDA and agent state when available, and enriches with uberAgent reboot analytics so you can trend stability per catalog, per hardware generation, and after every monthly patch. Treat recurring hosts as a candidate for maintenance mode and root-cause with vendor tools.",
              "t": "Splunk Add-on for Microsoft Windows; uberAgent UXM (Splunkbase 1448); optional Template for Citrix XenDesktop 7 for `citrix:vda:events`",
              "d": "`index=windows` (session hosts) `sourcetype=\"WinEventLog:System\"` bugcheck events, unexpected shutdowns, kernel power events; `index=xd` `sourcetype=\"citrix:vda:events\"` for VDA and agent service restarts tied to host instability; `index=uberagent` `sourcetype=\"uberAgent:Machine:Boot\"` and related stability or boot sourcetypes for unexpected reboots and stop codes normalized by uberAgent",
              "q": "index=windows sourcetype=\"WinEventLog:System\" (EventCode=1001 OR EventCode=41 OR EventCode=6008)\n| rex field=Message max_match=0 \"(?<bugcheck>0x[0-9A-Fa-f]+)\"\n| append [ search index=uberagent sourcetype=\"uberAgent:Machine:Boot\" unexpected_reboot=1 | eval EventCode=9999 | eval host=coalesce(host, dest_host) ]\n| bin _time span=1d\n| stats count as instabilities, values(EventCode) as event_codes, values(bugcheck) as stop_codes by host, _time\n| where instabilities>0\n| sort - instabilities\n| table _time, host, instabilities, event_codes, stop_codes",
              "m": "Ingest the full System channel from all session hosts. For bugcheck 1001, parse `Message` to extract the stop code. Join `host` to a CMDB or lookup that supplies `catalog_name` and `delivery_group`. In uberAgent, confirm unexpected reboots flow with the same `host` key. Alert when any host has more than one bugcheck in seven days, or when a new stop code appears in more than 10% of a catalog in a week. Exclude planned reboot windows via a change lookup. For GPU images, add NVIDIA or AMD field extractions in a child search.",
              "z": "Choropleth of stability rate by data center, bar chart of top stop codes, timeline of restarts, table of worst hosts with catalog and patch level.",
              "kfp": "Patch Tuesday, driver hotfix waves, and pool-wide golden-image replays can produce a short burst of per-host BSODs during rollout. Require multiple distinct hosts, repeat crashes, or a driver signature tied to a bad patch, not a one-off on a single noisy VM.",
              "refs": "[Windows bug check reference (Microsoft Learn)](https://learn.microsoft.com/en-us/windows-hardware/drivers/debugger/bug-check-code-reference2), [uberAgent unexpected reboots](https://docs.uberagent.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows; uberAgent UXM (Splunkbase 1448); optional Template for Citrix XenDesktop 7 for `citrix:vda:events`.\n• Ensure the following data sources are available: `index=windows` (session hosts) `sourcetype=\"WinEventLog:System\"` bugcheck events, unexpected shutdowns, kernel power events; `index=xd` `sourcetype=\"citrix:vda:events\"` for VDA and agent service restarts tied to host instability; `index=uberagent` `sourcetype=\"uberAgent:Machine:Boot\"` and related stability or boot sourcetypes for unexpected reboots and stop codes normalized by uberAgent.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest the full System channel from all session hosts. For bugcheck 1001, parse `Message` to extract the stop code. Join `host` to a CMDB or lookup that supplies `catalog_name` and `delivery_group`. In uberAgent, confirm unexpected reboots flow with the same `host` key. Alert when any host has more than one bugcheck in seven days, or when a new stop code appears in more than 10% of a catalog in a week. Exclude planned reboot windows via a change lookup. For GPU images, add NVIDIA or AMD field ex…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:System\" (EventCode=1001 OR EventCode=41 OR EventCode=6008)\n| rex field=Message max_match=0 \"(?<bugcheck>0x[0-9A-Fa-f]+)\"\n| append [ search index=uberagent sourcetype=\"uberAgent:Machine:Boot\" unexpected_reboot=1 | eval EventCode=9999 | eval host=coalesce(host, dest_host) ]\n| bin _time span=1d\n| stats count as instabilities, values(EventCode) as event_codes, values(bugcheck) as stop_codes by host, _time\n| where instabilities>0\n| sort - instabilities\n| table _time, host, instabilities, event_codes, stop_codes\n```\n\nUnderstanding this SPL\n\n**VDA BSOD and Machine Stability Tracking** — Blue screens, hard hangs, and unexpected reboots on session hosts are disproportionately disruptive: many users and published apps can fail in one incident. A single bugcheck may be a driver or GPU edge case; a cluster of the same stop code in one catalog points to a bad image, firmware, or policy rollout. You need a unified stream that captures bugcheck parameters from the System log, correlates with Citrix VDA and agent state when available, and enriches with uberAgent…\n\nDocumented **Data sources**: `index=windows` (session hosts) `sourcetype=\"WinEventLog:System\"` bugcheck events, unexpected shutdowns, kernel power events; `index=xd` `sourcetype=\"citrix:vda:events\"` for VDA and agent service restarts tied to host instability; `index=uberagent` `sourcetype=\"uberAgent:Machine:Boot\"` and related stability or boot sourcetypes for unexpected reboots and stop codes normalized by uberAgent. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows; uberAgent UXM (Splunkbase 1448); optional Template for Citrix XenDesktop 7 for `citrix:vda:events`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:System. 
That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Appends rows from a subsearch with `append`.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where instabilities>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VDA BSOD and Machine Stability Tracking**): table _time, host, instabilities, event_codes, stop_codes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Choropleth of stability rate by data center, bar chart of top stop codes, timeline of restarts, table of worst hosts with catalog and patch level.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We count sudden blue screens and surprise restarts on your shared desktop machines, so you see a bad patch or driver before a whole building keeps dropping connection during work.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Endpoint"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad",
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.51",
              "n": "Citrix StoreFront Server IIS Health",
              "c": "high",
              "f": "intermediate",
              "v": "StoreFront is the first hop for many users after the gateway. If IIS app pools recycle frequently, if worker processes crash, or if HTTP 401/500/503 rates climb, every receiver update, resource enumeration, and single sign-on call suffers — often before the broker shows pressure. You should monitor the IIS access log error mix, time-taken (latency) percentiles, and the Windows event trail for `W3SVC`, `IIS*`, and app pool `Application pool * stopped` or rapid recycling on each StoreFront node. A healthy farm shows symmetric latency across members; asymmetry is a sign of broken authentication provider settings or a sick node still receiving traffic from the load balancer.",
              "t": "Splunk Add-on for Microsoft Windows; enable IIS and HTTP logging on StoreFront servers; consider Splunk add-on for Microsoft IIS for field extraction",
              "d": "`index=windows` `sourcetype=\"iis\"` and `iis:access` or W3C for StoreFront web traffic; `sourcetype=\"WinEventLog:System\"` and `sourcetype=\"WinEventLog:Application\"` for IIS and Microsoft-Windows-IIS* worker process and app pool recycles; optional `sourcetype=\"WinEventLog:Security\"` for authentication noise correlation",
              "q": "index=windows (sourcetype=\"iis\" OR sourcetype=\"W3C*\" OR source=\"*u_ex*.log\")\n| eval site=coalesce(s_sitename, \"default\")\n| search cs_uri_stem=\"*Authentication*\" OR cs_uri_stem=\"*Resources*\" OR cs_uri_stem=\"*Icon*\" OR like(lower(cs_uri_stem), \"%citrix%\")\n| eval sc=tonumber(sc_status)\n| eval is_err=if(sc>=400,1,0)\n| bin _time span=5m\n| stats count as total, sum(is_err) as http_err, avg(timetaken) as avg_ms, perc95(timetaken) as p95_ms by site, _time\n| eval err_pct=if(total>0, round(100*http_err/total,2), 0)\n| where err_pct>1 OR p95_ms>2000\n| table _time, site, total, err_pct, avg_ms, p95_ms",
              "m": "Enable W3C extended logging on StoreFront with `time-taken`, `sc-status`, and `cs-uri-stem` at minimum. Ingest in near real time. Add a second scheduled search on Application/System for IIS worker crashes. Baseline 401 rates versus known maintenance. Alert when 5xx exceeds 0.2% of requests for 15 minutes, or p95 time-taken exceeds 2,000 ms for authentication and resource endpoints, or on app pool recycles more than one per hour per site. De-dupe load-balanced pairs by `cs-host` to avoid double counting a single user action.",
              "z": "Timechart of 4xx/5xx counts, timechart of p95 time-taken by virtual directory, table of app pool recycles, single value for 503 spike.",
              "kfp": "IIS app pool recycles, certificate binding touch-ups, and .NET or URL rewrite module updates on StoreFront are routine. Correlate 5xx with app pool or SSL maintenance; sustained unrecoverable pool failure across nodes is the real signal.",
              "refs": "[Citrix StoreFront 1912 and later (planning and networking)](https://docs.citrix.com/en-us/storefront/1912/plan/considerations.html), [Microsoft: IIS log fields](https://learn.microsoft.com/en-us/iis/get-started/whats-new-in-iis-85/iis-85-rewrite-module-logging-rewrite-tracing)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows; enable IIS and HTTP logging on StoreFront servers; consider Splunk add-on for Microsoft IIS for field extraction.\n• Ensure the following data sources are available: `index=windows` `sourcetype=\"iis\"` and `iis:access` or W3C for StoreFront web traffic; `sourcetype=\"WinEventLog:System\"` and `sourcetype=\"WinEventLog:Application\"` for IIS and Microsoft-Windows-IIS* worker process and app pool recycles; optional `sourcetype=\"WinEventLog:Security\"` for authentication noise correlation.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable W3C extended logging on StoreFront with `time-taken`, `sc-status`, and `cs-uri-stem` at minimum. Ingest in near real time. Add a second scheduled search on Application/System for IIS worker crashes. Baseline 401 rates versus known maintenance. Alert when 5xx exceeds 0.2% of requests for 15 minutes, or p95 time-taken exceeds 2,000 ms for authentication and resource endpoints, or on app pool recycles more than one per hour per site. De-dupe load-balanced pairs by `cs-host` to avoid double c…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows (sourcetype=\"iis\" OR sourcetype=\"W3C*\" OR source=\"*u_ex*.log\")\n| eval site=coalesce(s_sitename, \"default\")\n| search cs_uri_stem=\"*Authentication*\" OR cs_uri_stem=\"*Resources*\" OR cs_uri_stem=\"*Icon*\" OR like(lower(cs_uri_stem), \"%citrix%\")\n| eval sc=tonumber(sc_status)\n| eval is_err=if(sc>=400,1,0)\n| bin _time span=5m\n| stats count as total, sum(is_err) as http_err, avg(timetaken) as avg_ms, perc95(timetaken) as p95_ms by site, _time\n| eval err_pct=if(total>0, round(100*http_err/total,2), 0)\n| where err_pct>1 OR p95_ms>2000\n| table _time, site, total, err_pct, avg_ms, p95_ms\n```\n\nUnderstanding this SPL\n\n**Citrix StoreFront Server IIS Health** — StoreFront is the first hop for many users after the gateway. If IIS app pools recycle frequently, if worker processes crash, or if HTTP 401/500/503 rates climb, every receiver update, resource enumeration, and single sign-on call suffers — often before the broker shows pressure. You should monitor the IIS access log error mix, time-taken (latency) percentiles, and the Windows event trail for `W3SVC`, `IIS*`, and app pool `Application pool * stopped` or rapid recycling on…\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"iis\"` and `iis:access` or W3C for StoreFront web traffic; `sourcetype=\"WinEventLog:System\"` and `sourcetype=\"WinEventLog:Application\"` for IIS and Microsoft-Windows-IIS* worker process and app pool recycles; optional `sourcetype=\"WinEventLog:Security\"` for authentication noise correlation. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows; enable IIS and HTTP logging on StoreFront servers; consider Splunk add-on for Microsoft IIS for field extraction. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: iis, W3C*. 
Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"iis\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **site** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **sc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_err** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by site, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **err_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_pct>1 OR p95_ms>2000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix StoreFront Server IIS Health**): table _time, site, total, err_pct, avg_ms, p95_ms\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart of 4xx/5xx counts, timechart of p95 time-taken by virtual directory, table of app pool recycles, single value for 503 spike.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the web server that hands people their app and desktop lists, so you see broken logon pages and slow sign-ins before a whole site thinks Citrix is down.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Web"
              ],
              "e": [
                "iis"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.52",
              "n": "VDA Software and OS Version Lifecycle Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Citrix releases new VDA builds regularly. Microsoft retires Windows 10/11 builds on a predictable schedule. Running an inventory of `vda_version` and `os_build` per session host supports compliance with internal standard images, tells you which catalogs are still on long-term servicing versus current channel, and highlights stragglers before support tickets or Citrix Cloud health checks flag them. Feed the same list into patch windows, upgrade rings, and golden-image promotion. A simple scheduled report that lists any host not on the approved pair is enough for many organizations; add lookups for end-of-life dates you maintain in a CSV.",
              "t": "Template for Citrix XenDesktop 7 (`TA-XD7-VDA`); optional uberAgent UXM (Splunkbase 1448) for operating system build and patch level",
              "d": "`index=xd` `sourcetype=\"citrix:vda:events\"` (VDA agent version, component build, plug-in status); `index=uberagent` host inventory sourcetypes if you standardize on uberAgent for OS build; optional Windows `sourcetype=\"WinEventLog:Application\"` for Citrix installer success or failure audit",
              "q": "index=xd sourcetype=\"citrix:vda:events\" (event_type=\"AgentInfo\" OR event_type=\"Registration\" OR event_type=\"Heartbeat\")\n| eval vda_ver=coalesce(vda_version, agent_version, VdaVersion, \"unknown\")\n| eval os_b=coalesce(os_build, windows_build, OSBuild, \"unknown\")\n| eval machine=coalesce(machine_name, host, \"Unknown\")\n| where vda_ver!=\"unknown\"\n| stats dc(machine) as host_count, max(_time) as last_seen by vda_ver, os_b\n| rename vda_ver as vda_version, os_b as os_build_value\n| sort vda_version, os_build_value\n| table vda_version, os_build_value, host_count, last_seen",
              "m": "Emit a heartbeat or registration event at least daily that includes VDA and OS build. Create `lookup citrix_supported_vda.csv` with columns `vda_version`, `supported`, `eol_date`. Version the lookup with change control. Schedule the report weekly; alert only for rows on the critical path (for example, Internet-facing or regulated worker pools). Combine with your configuration management database to auto-close when a host is decommissioned.",
              "z": "Bar chart of hosts by VDA version, table of unsupported rows, treemap by catalog if you join a lookup from machine to catalog.",
              "kfp": "Long tails of older VDA or OS versions are normal during phased EOL and site-by-site upgrades. Track trend toward a published deadline; avoid flat 'any old version' alerts that fire every day of a 12-month migration.",
              "refs": "[Citrix Virtual Apps and Desktops — product matrix and lifecycle](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/product-lifecycle.html), [Current release VDA requirements](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/system-requirements.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (`TA-XD7-VDA`); optional uberAgent UXM (Splunkbase 1448) for operating system build and patch level.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:vda:events\"` (VDA agent version, component build, plug-in status); `index=uberagent` host inventory sourcetypes if you standardize on uberAgent for OS build; optional Windows `sourcetype=\"WinEventLog:Application\"` for Citrix installer success or failure audit.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEmit a heartbeat or registration event at least daily that includes VDA and OS build. Create `lookup citrix_supported_vda.csv` with columns `vda_version`, `supported`, `eol_date`. Version the lookup with change control. Schedule the report weekly; alert only for rows on the critical path (for example, Internet-facing or regulated worker pools). Combine with your configuration management database to auto-close when a host is decommissioned.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:vda:events\" (event_type=\"AgentInfo\" OR event_type=\"Registration\" OR event_type=\"Heartbeat\")\n| eval vda_ver=coalesce(vda_version, agent_version, VdaVersion, \"unknown\")\n| eval os_b=coalesce(os_build, windows_build, OSBuild, \"unknown\")\n| eval machine=coalesce(machine_name, host, \"Unknown\")\n| where vda_ver!=\"unknown\"\n| stats dc(machine) as host_count, max(_time) as last_seen by vda_ver, os_b\n| rename vda_ver as vda_version, os_b as os_build_value\n| sort vda_version, os_build_value\n| table vda_version, os_build_value, host_count, last_seen\n```\n\nUnderstanding this SPL\n\n**VDA Software and OS Version Lifecycle Tracking** — Citrix releases new VDA builds regularly. Microsoft retires Windows 10/11 builds on a predictable schedule. Running an inventory of `vda_version` and `os_build` per session host supports compliance with internal standard images, tells you which catalogs are still on long-term servicing versus current channel, and highlights stragglers before support tickets or Citrix Cloud health checks flag them. Feed the same list into patch windows, upgrade rings, and golden-image…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:vda:events\"` (VDA agent version, component build, plug-in status); `index=uberagent` host inventory sourcetypes if you standardize on uberAgent for OS build; optional Windows `sourcetype=\"WinEventLog:Application\"` for Citrix installer success or failure audit. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (`TA-XD7-VDA`); optional uberAgent UXM (Splunkbase 1448) for operating system build and patch level. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:vda:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:vda:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **vda_ver** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **os_b** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **machine** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where vda_ver!=\"unknown\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by vda_ver, os_b** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Renames fields with `rename` for clarity or joins.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VDA Software and OS Version Lifecycle Tracking**): table vda_version, os_build_value, host_count, last_seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart of hosts by VDA version, table of unsupported rows, treemap by catalog if you join a lookup from machine to catalog.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep a simple list of which desktop software and Windows build each shared computer is really running, so you are not caught with old, unsupported versions when audits or upgrades come due.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Endpoint"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad",
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.53",
              "n": "Citrix Delivery Group Desktop Assignment Changes",
              "c": "medium",
              "f": "beginner",
              "v": "Desktop and machine assignments determine who can reach which host pool, including support jump boxes and high-risk clinical or trading desktops. A mistaken assignment can grant a broad security group direct access to a gold image, or remove access during an incident. You should log add/remove actions on assignments with the acting admin, delivery group, user or group principal, and machine where applicable. Day-to-day automation may drive many rows — the control is the unexpected actor, off-hours change, or assignment outside an approved list of groups, not the volume alone.",
              "t": "Template for Citrix XenDesktop 7 (`TA-XD7-Broker`); optional Citrix Monitor OData for snapshot-based diff",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` (desktop assignment, entitlement, and machine assignment changes; admin actions); `index=xd` `sourcetype=\"citrix:monitor:odata\"` `Machines` or `Assignments` if you take periodic inventory snapshots; optional Microsoft-365/Entra sign-in or AD audit for correlating the same `user` principal",
              "q": "index=xd sourcetype=\"citrix:broker:events\"\n| where event_type IN (\"DesktopAssignmentChange\", \"MachineAssignment\", \"EntitlementChange\") OR match(_raw, \"(?i)(assignment|entitlement|desktop.?.?user|user.?.?machine)\")\n| eval user_key=coalesce(user, UPN, sam_account, \"unknown\")\n| eval dg=coalesce(delivery_group, desktop_group, \"Unknown\")\n| eval machine=coalesce(machine_name, machine, \"Unassigned\")\n| eval change=coalesce(change_type, action, event_type, \"change\")\n| eval actor=coalesce(admin_user, Admin, \"unknown\")\n| bin _time span=1h\n| stats count as changes, values(change) as change_types, values(user_key) as users_touched, values(machine) as machines by dg, _time, actor\n| where changes>0\n| sort - _time\n| table _time, actor, dg, change_types, users_touched, machines, changes",
              "m": "Map broker admin events into `citrix:broker:events` with stable field names. Create a `lookup` of approved automation service accounts. Alert when `actor` is not in the list and the hour is outside change windows, or when a new Active Directory group is added to a sensitive delivery group. If your broker is quiet, supplement with hourly OData `Machines` output diffed in a saved search. Feed results to the identity and access team for recertification evidence.",
              "z": "Timeline of changes by admin, table of the last 50 events with before/after if your feed includes it, single value of changes in last 24 h compared to 30-day average.",
              "kfp": "HR bulk moves, department restructuring, and large hiring batches legitimately reassign many desktops the same day. Join to the HR feed or the bulk ticket before treating mass assignment as malicious admin action.",
              "refs": "[Assign machines to users in CVAD](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/delivery-groups-machines.html), [Manage machine catalogs and delivery groups](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/manage-cds.html)",
              "mitre": [
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Template for Citrix XenDesktop 7 (`TA-XD7-Broker`); optional Citrix Monitor OData for snapshot-based diff.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` (desktop assignment, entitlement, and machine assignment changes; admin actions); `index=xd` `sourcetype=\"citrix:monitor:odata\"` `Machines` or `Assignments` if you take periodic inventory snapshots; optional Microsoft-365/Entra sign-in or AD audit for correlating the same `user` principal.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap broker admin events into `citrix:broker:events` with stable field names. Create a `lookup` of approved automation service accounts. Alert when `actor` is not in the list and the hour is outside change windows, or when a new Active Directory group is added to a sensitive delivery group. If your broker is quiet, supplement with hourly OData `Machines` output diffed in a saved search. Feed results to the identity and access team for recertification evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:broker:events\"\n| where event_type IN (\"DesktopAssignmentChange\", \"MachineAssignment\", \"EntitlementChange\") OR match(_raw, \"(?i)(assignment|entitlement|desktop.?.?user|user.?.?machine)\")\n| eval user_key=coalesce(user, UPN, sam_account, \"unknown\")\n| eval dg=coalesce(delivery_group, desktop_group, \"Unknown\")\n| eval machine=coalesce(machine_name, machine, \"Unassigned\")\n| eval change=coalesce(change_type, action, event_type, \"change\")\n| eval actor=coalesce(admin_user, Admin, \"unknown\")\n| bin _time span=1h\n| stats count as changes, values(change) as change_types, values(user_key) as users_touched, values(machine) as machines by dg, _time, actor\n| where changes>0\n| sort - _time\n| table _time, actor, dg, change_types, users_touched, machines, changes\n```\n\nUnderstanding this SPL\n\n**Citrix Delivery Group Desktop Assignment Changes** — Desktop and machine assignments determine who can reach which host pool, including support jump boxes and high-risk clinical or trading desktops. A mistaken assignment can grant a broad security group direct access to a gold image, or remove access during an incident. You should log add/remove actions on assignments with the acting admin, delivery group, user or group principal, and machine where applicable. Day-to-day automation may drive many rows — the control is the…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` (desktop assignment, entitlement, and machine assignment changes; admin actions); `index=xd` `sourcetype=\"citrix:monitor:odata\"` `Machines` or `Assignments` if you take periodic inventory snapshots; optional Microsoft-365/Entra sign-in or AD audit for correlating the same `user` principal. **App/TA** (typical add-on context): Template for Citrix XenDesktop 7 (`TA-XD7-Broker`); optional Citrix Monitor OData for snapshot-based diff. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events. 
That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where event_type IN (\"DesktopAssignmentChange\", \"MachineAssignment\", \"EntitlementChange\") OR match(_raw, \"(?i)(assignment|e…` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **user_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dg** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **machine** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **change** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **actor** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by dg, _time, actor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where changes>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix Delivery Group Desktop Assignment Changes**): table _time, actor, dg, change_types, users_touched, machines, changes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline of changes by admin, table of the last 50 events with before/after if your feed includes it, single value of changes in last 24 h compared to 30-day average.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We record when someone’s desktop is tied to a different person or group than before, so risky or surprise access changes are visible in plain reporting, not only inside the admin console.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.54",
              "n": "RDS Licensing Validation for Multi-Session Hosts",
              "c": "high",
              "f": "intermediate",
              "v": "Session hosts that offer multiple concurrent RDP and Citrix sessions need valid Remote Desktop Services client access licenses, healthy communication with the license server list, and clear visibility into per-device versus per-user mode and any grace period. A host in grace can appear healthy until a deadline passes and new sessions are refused. A broken license server list string — wrong DNS, firewall, or certificate — is a common misconfiguration. Collect license warnings from Application and the Remote Desktop service channels on each multi-session VDA, and aggregate the same on license servers. Pair with your Citrix per-user and Microsoft RDS-CAL entitlements in procurement, not in Splunk, but use Splunk to prove the runtime state matches policy.",
              "t": "Splunk Add-on for Microsoft Windows on session hosts and on Remote Desktop License Servers; optional scripted WMI or `Get-CimInstance` poll for `Win32_TerminalServiceSetting` and grace period on hosts",
              "d": "`index=windows` `sourcetype=\"WinEventLog:Application\"` with `Source=\"*TerminalServices*\"` or `Microsoft-Windows-TerminalServices-LSM*`, Windows license service events, Remote Desktop license server events (42, 4105, 22, 23, 25, 28); `index=windows` on the license server and `sourcetype=\"WinEventLog:RemoteDesktopServices*\"` where available; `index=xd` `sourcetype=\"citrix:broker:events\"` for multi-session brokering context",
              "q": "index=windows (sourcetype=\"WinEventLog:Application\" OR sourcetype=\"WinEventLog:RemoteDesktopServices*\" OR source=\"*TerminalServices*\")\n| where EventCode IN (22, 23, 25, 28, 38, 4105) OR like(lower(_raw),\"%license%\") OR like(lower(_raw),\"%grace%\") OR like(lower(_raw),\"%remote desktop%\") OR like(lower(_raw),\"%rd licen%\")\n| eval kind=if(like(lower(_raw),\"%grace%\"),\"grace\", if(like(lower(_raw),\"%expir%\"),\"expiry\",\"license_event\"))\n| eval server=coalesce(license_server, LicenseServer, host)\n| bin _time span=1d\n| stats count as daily_events, values(EventCode) as codes, values(kind) as kinds by server, _time\n| where daily_events>0\n| sort - daily_events\n| table _time, server, daily_events, codes, kinds",
              "m": "Enable verbose Remote Desktop license logging in Windows where supported. Add a small scripted input to dump `HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\Terminal Server\\RCM\\Licensing` or PowerShell `Get-RDLicense` output daily on license servers. Alert on any `grace` or `0-day` grace start, any event that says license server is unreachable, and any 4105 with severity error. Deduplicate license servers. Document which Citrix and Microsoft agreements cover which host pools. Tune out duplicate Windows noise per build.",
              "z": "Table of last license error per host, timechart of daily_events by server, single value of hosts in grace, network diagram optional with manual overlay.",
              "kfp": "License key renewals, grace period during true-up, and a temporary license server failover can log RDS or Citrix session licensing warnings. Map to the license key expiry calendar and only escalate when users are actually blocked at session connection.",
              "refs": "[Remote Desktop Services and licensing (Microsoft Learn)](https://learn.microsoft.com/en-us/troubleshoot/windows-server/remote/remote-desktop-services-terms), [Citrix — supported operating systems and RDS context](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/system-requirements.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows on session hosts and on Remote Desktop License Servers; optional scripted WMI or `Get-CimInstance` poll for `Win32_TerminalServiceSetting` and grace period on hosts.\n• Ensure the following data sources are available: `index=windows` `sourcetype=\"WinEventLog:Application\"` with `Source=\"*TerminalServices*\"` or `Microsoft-Windows-TerminalServices-LSM*`, Windows license service events, Remote Desktop license server events (42, 4105, 22, 23, 25, 28); `index=windows` on the license server and `sourcetype=\"WinEventLog:RemoteDesktopServices*\"` where available; `index=xd` `sourcetype=\"citrix:broker:events\"` for multi-session brokering context.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable verbose Remote Desktop license logging in Windows where supported. Add a small scripted input to dump `HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\Terminal Server\\RCM\\Licensing` or PowerShell `Get-RDLicense` output daily on license servers. Alert on any `grace` or `0-day` grace start, any event that says license server is unreachable, and any 4105 with severity error. Deduplicate license servers. Document which Citrix and Microsoft agreements cover which host pools. Tune out dupli…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows (sourcetype=\"WinEventLog:Application\" OR sourcetype=\"WinEventLog:RemoteDesktopServices*\" OR source=\"*TerminalServices*\")\n| where EventCode IN (22, 23, 25, 28, 38, 4105) OR like(lower(_raw),\"%license%\") OR like(lower(_raw),\"%grace%\") OR like(lower(_raw),\"%remote desktop%\") OR like(lower(_raw),\"%rd licen%\")\n| eval kind=if(like(lower(_raw),\"%grace%\"),\"grace\", if(like(lower(_raw),\"%expir%\"),\"expiry\",\"license_event\"))\n| eval server=coalesce(license_server, LicenseServer, host)\n| bin _time span=1d\n| stats count as daily_events, values(EventCode) as codes, values(kind) as kinds by server, _time\n| where daily_events>0\n| sort - daily_events\n| table _time, server, daily_events, codes, kinds\n```\n\nUnderstanding this SPL\n\n**RDS Licensing Validation for Multi-Session Hosts** — Session hosts that offer multiple concurrent RDP and Citrix sessions need valid Remote Desktop Services client access licenses, healthy communication with the license server list, and clear visibility into per-device versus per-user mode and any grace period. A host in grace can appear healthy until a deadline passes and new sessions are refused. A broken license server list string — wrong DNS, firewall, or certificate — is a common misconfiguration. Collect license…\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Application\"` with `Source=\"*TerminalServices*\"` or `Microsoft-Windows-TerminalServices-LSM*`, Windows license service events, Remote Desktop license server events (42, 4105, 22, 23, 25, 28); `index=windows` on the license server and `sourcetype=\"WinEventLog:RemoteDesktopServices*\"` where available; `index=xd` `sourcetype=\"citrix:broker:events\"` for multi-session brokering context. 
**App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows on session hosts and on Remote Desktop License Servers; optional scripted WMI or `Get-CimInstance` poll for `Win32_TerminalServiceSetting` and grace period on hosts. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Application, WinEventLog:RemoteDesktopServices*. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Application\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where EventCode IN (22, 23, 25, 28, 38, 4105) OR like(lower(_raw),\"%license%\") OR like(lower(_raw),\"%grace%\") OR like(lower…` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **kind** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **server** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by server, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where daily_events>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **RDS Licensing Validation for Multi-Session Hosts**): table _time, server, daily_events, codes, kinds\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of last license error per host, timechart of daily_events by server, single value of hosts in grace, network diagram optional with manual overlay.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We make sure the shared Windows “terminal server” style licensing stays healthy on machines that run many people at once, so you are not suddenly blocked by grace periods or a dead license server list.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Endpoint"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.55",
              "n": "GPU Driver Version and License Status (NVIDIA GRID / vGPU)",
              "c": "high",
              "f": "advanced",
              "v": "NVIDIA vGPU and GRID licensing tie guest driver versions, hypervisor, and a license service together. A guest can boot but fall back to a restricted mode, lose hardware encode, or see session failures if the license server is unreachable, the wrong `driverVersion` is paired with a host driver, or ECC errors pass a threshold. Citrix 3D workloads, Teams optimization, and browser video offload all depend on a healthy, licensed GPU path. This use case unifies uberAgent (or an equivalent) GPU performance and license state with optional Citrix VDA hardware health. Treat driver skew across a catalog as an image problem; treat isolated license loss as a network or license server problem.",
              "t": "uberAgent UXM (Splunkbase 1448) with GPU monitoring enabled; NVIDIA vGPU on supported hypervisors; optional Splunk add-on for Windows for NVIDIA-sourced events",
              "d": "`index=uberagent` `sourcetype=\"uberAgent:GPU:Performance\"`, `sourcetype=\"uberAgent:GPU:NVIDIA\"`, or host inventory GPU sourcetypes for `driverVersion`, vGPU name, `licenseState`, and utilization; `index=xd` `sourcetype=\"citrix:vda:events\"` for Citrix agent–reported hardware state when the hypervisor and NVIDIA vGPU have integration events; `index=windows` `sourcetype=\"WinEventLog:System\"` for NVIDIA service failures",
              "q": "index=uberagent (sourcetype=\"uberAgent:GPU:NVIDIA\" OR sourcetype=\"uberAgent:GPU:Performance\")\n| eval host_name=coalesce(host, dest_host, machine)\n| eval driver=coalesce(driverVersion, driver_version, nvidia_driver_version, \"unknown\")\n| eval lic=lower(coalesce(licenseState, license_state, vgpu_license_state, \"unknown\"))\n| eval vgpu_name=coalesce(vgpuType, vgpu_type, vgpu, \"Unknown\")\n| eval errs=tonumber(coalesce(fatal_count, 0)) + tonumber(coalesce(uncorrectable_ecc, 0))\n| where (lic!=\"licensed\" AND lic!=\"ok\" AND lic!=\"n/a\" AND lic!=\"active\") OR like(lic, \"%unlic%\") OR like(lic, \"%fail%\") OR errs>0\n| stats latest(driver) as driver_version, max(lic) as license_state, latest(vgpu_name) as vgpu, max(errs) as err_signals by host_name\n| table host_name, driver_version, vgpu, license_state, err_signals",
              "m": "Enable the GPU-related uberAgent options that match your hypervisor. Confirm `index=uberagent` has one row per host per minute at minimum. Build a `lookup` of approved `driverVersion` for each vGPU type and image generation. Alert when `licenseState` is not `Licensed` for more than 15 minutes, or when `driverVersion` is not in the approved list, or when fatal GPU errors increment. Excluded dedicated physical GPUs from vGPU license logic if you run mixed modes. For Citrix, tag hosts that run HDX 3D Pro policies so the alert is routed to the DaaS and NVIDIA contact points.",
              "z": "Table of hosts with `driverVersion`, vGPU type, and license; heatmap of license problems over time; line chart of GPU utilization for affected hosts, linked to a Citrix app session panel.",
              "kfp": "Host driver upgrades, vGPU rebalancing, and cluster rolling reboots can trip NVIDIA GRID license or driver state checks for minutes. Use host maintenance and GPU cluster change windows, with persistence after the last reboot, before a fleet GPU incident.",
              "refs": "[NVIDIA vGPU software documentation](https://docs.nvidia.com/grid/index.html), [HDX 3D Pro — Citrix (context for GPU use)](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/hdx-3d-pro.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448) with GPU monitoring enabled; NVIDIA vGPU on supported hypervisors; optional Splunk add-on for Windows for NVIDIA-sourced events.\n• Ensure the following data sources are available: `index=uberagent` `sourcetype=\"uberAgent:GPU:Performance\"`, `sourcetype=\"uberAgent:GPU:NVIDIA\"`, or host inventory GPU sourcetypes for `driverVersion`, vGPU name, `licenseState`, and utilization; `index=xd` `sourcetype=\"citrix:vda:events\"` for Citrix agent–reported hardware state when the hypervisor and NVIDIA vGPU have integration events; `index=windows` `sourcetype=\"WinEventLog:System\"` for NVIDIA service failures.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable the GPU-related uberAgent options that match your hypervisor. Confirm `index=uberagent` has one row per host per minute at minimum. Build a `lookup` of approved `driverVersion` for each vGPU type and image generation. Alert when `licenseState` is not `Licensed` for more than 15 minutes, or when `driverVersion` is not in the approved list, or when fatal GPU errors increment. Excluded dedicated physical GPUs from vGPU license logic if you run mixed modes. For Citrix, tag hosts that run HDX …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=uberagent (sourcetype=\"uberAgent:GPU:NVIDIA\" OR sourcetype=\"uberAgent:GPU:Performance\")\n| eval host_name=coalesce(host, dest_host, machine)\n| eval driver=coalesce(driverVersion, driver_version, nvidia_driver_version, \"unknown\")\n| eval lic=lower(coalesce(licenseState, license_state, vgpu_license_state, \"unknown\"))\n| eval vgpu_name=coalesce(vgpuType, vgpu_type, vgpu, \"Unknown\")\n| eval errs=tonumber(coalesce(fatal_count, 0)) + tonumber(coalesce(uncorrectable_ecc, 0))\n| where (lic!=\"licensed\" AND lic!=\"ok\" AND lic!=\"n/a\" AND lic!=\"active\") OR like(lic, \"%unlic%\") OR like(lic, \"%fail%\") OR errs>0\n| stats latest(driver) as driver_version, max(lic) as license_state, latest(vgpu_name) as vgpu, max(errs) as err_signals by host_name\n| table host_name, driver_version, vgpu, license_state, err_signals\n```\n\nUnderstanding this SPL\n\n**GPU Driver Version and License Status (NVIDIA GRID / vGPU)** — NVIDIA vGPU and GRID licensing tie guest driver versions, hypervisor, and a license service together. A guest can boot but fall back to a restricted mode, lose hardware encode, or see session failures if the license server is unreachable, the wrong `driverVersion` is paired with a host driver, or ECC errors pass a threshold. Citrix 3D workloads, Teams optimization, and browser video offload all depend on a healthy, licensed GPU path. This use case unifies uberAgent (or an…\n\nDocumented **Data sources**: `index=uberagent` `sourcetype=\"uberAgent:GPU:Performance\"`, `sourcetype=\"uberAgent:GPU:NVIDIA\"`, or host inventory GPU sourcetypes for `driverVersion`, vGPU name, `licenseState`, and utilization; `index=xd` `sourcetype=\"citrix:vda:events\"` for Citrix agent–reported hardware state when the hypervisor and NVIDIA vGPU have integration events; `index=windows` `sourcetype=\"WinEventLog:System\"` for NVIDIA service failures. 
**App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448) with GPU monitoring enabled; NVIDIA vGPU on supported hypervisors; optional Splunk add-on for Windows for NVIDIA-sourced events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: uberagent; **sourcetype**: uberAgent:GPU:NVIDIA, uberAgent:GPU:Performance. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=uberagent, sourcetype=\"uberAgent:GPU:NVIDIA\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **host_name** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **driver** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **lic** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **vgpu_name** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **errs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (lic!=\"licensed\" AND lic!=\"ok\" AND lic!=\"n/a\" AND lic!=\"active\") OR like(lic, \"%unlic%\") OR like(lic, \"%fail%\") OR er…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **GPU Driver Version and License Status (NVIDIA GRID / vGPU)**): table host_name, driver_version, vgpu, license_state, err_signals\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of hosts with `driverVersion`, vGPU type, and license; heatmap of license problems over time; line chart of GPU utilization for affected hosts, linked to a Citrix app session panel.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We check that the shared graphics card inside your virtual desktops has the right software and a current license, so heavy apps and video do not go silent in reduced mode or fail when people need them most.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "Endpoint"
              ],
              "e": [
                "citrix",
                "hyperv"
              ],
              "em": [
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.56",
              "n": "Citrix Cloud Service Health Status Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Citrix Cloud publishes health for core services such as Virtual Apps and Desktops service, StoreFront-related cloud services, and Gateway components. Regional incidents or degraded subcomponents can shrink capacity, break brokering, or strand users before your internal monitors move. Ingesting normalized status events (API or add-on) into a single timeline lets operations correlate internal session drops with upstream Citrix Cloud issues, route communication faster, and avoid fruitless VDI war rooms when the root cause is provider-side.",
              "t": "Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input.",
              "d": "`index=citrix` `sourcetype=\"citrix:cloud:status\"` or `sourcetype=\"citrix:status:api\"` with `component_name`, `region`, `impact`, `status`; optional HEC feed from Citrix Cloud Status page or third-party mirroring; `index=xd` `sourcetype=\"citrix:analytics:health\"` when using Citrix Analytics Add-on for Splunk (Splunkbase 6280) for correlated service signals Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=citrix (sourcetype=\"citrix:cloud:status\" OR sourcetype=\"citrix:status:api\")\n| eval comp=coalesce(component_name, service, product, \"unknown\")\n| eval st=lower(coalesce(status, overall_status, health, \"unknown\"))\n| eval sev=lower(coalesce(impact, incident_severity, \"none\"))\n| where st!=\"operational\" AND st!=\"none\" AND st!=\"healthy\" OR match(sev, \"(major|critical|degraded|partial)\")\n| stats latest(st) as status, latest(sev) as impact, latest(_time) as last_update by comp, region\n| sort - last_update\n| table comp, region, status, impact, last_update",
              "m": "Stand up a collector that polls the Citrix Cloud status API or streams change events at a steady interval (for example every 60 seconds) and writes one event per component per region. Normalize field names across regions. Create a lookup of business-critical components for your tenant (for example brokering, workspace, gateway). Alert when any monitored component leaves an operational state or when incident severity matches major or critical. Feed the same index from the Citrix Analytics Add-on if you use it so internal health metrics and public status share a dashboard. Document a comms template that names the component and region.",
              "z": "Single-value strip of red or yellow components; timeline of status flips by region; table of open incidents with start time and blast radius; overlay with session-failure rate from VDA or gateway logs.",
              "kfp": "Citrix public cloud sub-service blips in one region or short vendor incidents can spike 'service down' without internal cause. Correlation with the official status page and a multi-region health check avoids paging for a 5-minute green-yellow-green flip.",
              "refs": "[Citrix Analytics Add-on for Splunk (Splunkbase 6280)](https://splunkbase.splunk.com/app/6280), [Citrix Cloud service health (product documentation)](https://docs.citrix.com/en-us/citrix-cloud/overview/citrix-cloud-service-availability.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input..\n• Ensure the following data sources are available: `index=citrix` `sourcetype=\"citrix:cloud:status\"` or `sourcetype=\"citrix:status:api\"` with `component_name`, `region`, `impact`, `status`; optional HEC feed from Citrix Cloud Status page or third-party mirroring; `index=xd` `sourcetype=\"citrix:analytics:health\"` when using Citrix Analytics Add-on for Splunk (Splunkbase 6280) for correlated service signals Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStand up a collector that polls the Citrix Cloud status API or streams change events at a steady interval (for example every 60 seconds) and writes one event per component per region. Normalize field names across regions. Create a lookup of business-critical components for your tenant (for example brokering, workspace, gateway). Alert when any monitored component leaves an operational state or when incident severity matches major or critical. Feed the same index from the Citrix Analytics Add-on …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=citrix (sourcetype=\"citrix:cloud:status\" OR sourcetype=\"citrix:status:api\")\n| eval comp=coalesce(component_name, service, product, \"unknown\")\n| eval st=lower(coalesce(status, overall_status, health, \"unknown\"))\n| eval sev=lower(coalesce(impact, incident_severity, \"none\"))\n| where st!=\"operational\" AND st!=\"none\" AND st!=\"healthy\" OR match(sev, \"(major|critical|degraded|partial)\")\n| stats latest(st) as status, latest(sev) as impact, latest(_time) as last_update by comp, region\n| sort - last_update\n| table comp, region, status, impact, last_update\n```\n\nUnderstanding this SPL\n\n**Citrix Cloud Service Health Status Monitoring** — Citrix Cloud publishes health for core services such as Virtual Apps and Desktops service, StoreFront-related cloud services, and Gateway components. Regional incidents or degraded subcomponents can shrink capacity, break brokering, or strand users before your internal monitors move. Ingesting normalized status events (API or add-on) into a single timeline lets operations correlate internal session drops with upstream Citrix Cloud issues, route communication faster, and…\n\nDocumented **Data sources**: `index=citrix` `sourcetype=\"citrix:cloud:status\"` or `sourcetype=\"citrix:status:api\"` with `component_name`, `region`, `impact`, `status`; optional HEC feed from Citrix Cloud Status page or third-party mirroring; `index=xd` `sourcetype=\"citrix:analytics:health\"` when using Citrix Analytics Add-on for Splunk (Splunkbase 6280) for correlated service signals Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. 
**App/TA** (typical add-on context): Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: citrix; **sourcetype**: citrix:cloud:status, citrix:status:api. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=citrix, sourcetype=\"citrix:cloud:status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **comp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **st** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sev** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where st!=\"operational\" AND st!=\"none\" AND st!=\"healthy\" OR match(sev, \"(major|critical|degraded|partial)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by comp, region** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix Cloud Service Health Status Monitoring**): table comp, region, status, impact, last_update\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single-value strip of red or yellow components; timeline of status flips by region; table of open incidents with start time and blast radius; overlay with session-failure rate from VDA or gateway logs.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on cloud Service Health Status and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Application_State"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.57",
              "n": "Citrix Cloud Connector Deep Health (HealthData API)",
              "c": "critical",
              "f": "advanced",
              "v": "Basic connector heartbeats (see UC-2.6.16) prove the service is up; deep health from the HealthData API exposes resource starvation, time drift, failed outbound checks to cloud dependencies, and registration edge cases that still leave the connector process running. These conditions cause intermittent brokering delays, policy refresh gaps, and mysterious registration churn on VDAs. Aggregating API snapshots per connector gives an early, concrete signal to patch, scale out, or fix DNS and TLS paths before a resource location loses effective cloud control.",
              "t": "Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input.",
              "d": "`index=xd` `sourcetype=\"citrix:cloudconnector:healthdata\"` with `connector_id`, `cpu_percent`, `alert_state`, `cloud_registration`, `time_sync_status`, `dependency_check`, `failed_outbound` fields parsed from the Citrix HealthData API or Cloud Connector local health snapshots forwarded on a short interval; complementary `sourcetype=\"citrix:cloudconnector\"` for baseline connectivity from UC-2.6.16 Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd sourcetype=\"citrix:cloudconnector:healthdata\"\n| eval reg_ok=if(match(lower(coalesce(cloud_registration, registration_status, \"\")), \"(registered|ok|success)\"), 1, 0)\n| eval sync_ok=if(match(lower(coalesce(time_sync_status, ntp_status, \"\")), \"(synced|ok|in\\ssync)\"), 1, 0)\n| eval dep_ok=if(tonumber(coalesce(failed_outbound, failed_dependencies, 0))=0, 1, 0)\n| eval cpu=tonumber(coalesce(cpu_percent, cpu, 0))\n| where reg_ok=0 OR sync_ok=0 OR dep_ok=0 OR cpu>90 OR like(lower(coalesce(alert_state, health_alert, \"\")), \"%fail%\") OR like(lower(coalesce(alert_state, health_alert, \"\")), \"%error%\")\n| stats latest(cpu) as cpu_pct, latest(alert_state) as alert_state, latest(cloud_registration) as registration, latest(time_sync_status) as time_sync, max(failed_outbound) as failed_deps by host, connector_id, resource_location\n| sort - cpu_pct",
              "m": "Deploy a least-privilege scheduled collector on each Cloud Connector (or a shared runner that iterates member hosts) that calls the HealthData API and emits JSON events every one to five minutes. Normalize numeric CPU and map alert flags to a small enum. Create correlation searches that ignore brief CPU spikes under two minutes. Require dual-connector hot-spares: alert when the worst two hosts in a site both show dependency failures. Retain 30 days of history for post-incident review. Co-watch with 2.6.16 so disconnections and deep health anomalies appear on one dashboard.",
              "z": "Connector matrix (CPU, registration, NTP, dependency failures); sparklines of failed outbound tests; overlay with VDA registration errors in the same resource location.",
              "kfp": "Connector upgrades, Azure or AWS zone maintenance, and rolling Cloud Connector restarts can dip HealthData API or registration checks briefly. Require minimum healthy connector count and align with a published connector maintenance schedule.",
              "refs": "[Citrix Cloud Connector — system and connectivity requirements](https://docs.citrix.com/en-us/citrix-cloud/citrix-cloud-resource-locations/citrix-cloud-connector-installation.html), [Cloud Connector advanced functionality (troubleshooting context)](https://docs.citrix.com/en-us/citrix-cloud/citrix-cloud-resource-locations/connector-technical-details.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input..\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:cloudconnector:healthdata\"` with `connector_id`, `cpu_percent`, `alert_state`, `cloud_registration`, `time_sync_status`, `dependency_check`, `failed_outbound` fields parsed from the Citrix HealthData API or Cloud Connector local health snapshots forwarded on a short interval; complementary `sourcetype=\"citrix:cloudconnector\"` for baseline connectivity from UC-2.6.16 Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy a least-privilege scheduled collector on each Cloud Connector (or a shared runner that iterates member hosts) that calls the HealthData API and emits JSON events every one to five minutes. Normalize numeric CPU and map alert flags to a small enum. Create correlation searches that ignore brief CPU spikes under two minutes. Require dual-connector hot-spares: alert when the worst two hosts in a site both show dependency failures. Retain 30 days of history for post-incident review. Co-watch w…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:cloudconnector:healthdata\"\n| eval reg_ok=if(match(lower(coalesce(cloud_registration, registration_status, \"\")), \"(registered|ok|success)\"), 1, 0)\n| eval sync_ok=if(match(lower(coalesce(time_sync_status, ntp_status, \"\")), \"(synced|ok|in\\ssync)\"), 1, 0)\n| eval dep_ok=if(tonumber(coalesce(failed_outbound, failed_dependencies, 0))=0, 1, 0)\n| eval cpu=tonumber(coalesce(cpu_percent, cpu, 0))\n| where reg_ok=0 OR sync_ok=0 OR dep_ok=0 OR cpu>90 OR like(lower(coalesce(alert_state, health_alert, \"\")), \"%fail%\") OR like(lower(coalesce(alert_state, health_alert, \"\")), \"%error%\")\n| stats latest(cpu) as cpu_pct, latest(alert_state) as alert_state, latest(cloud_registration) as registration, latest(time_sync_status) as time_sync, max(failed_outbound) as failed_deps by host, connector_id, resource_location\n| sort - cpu_pct\n```\n\nUnderstanding this SPL\n\n**Citrix Cloud Connector Deep Health (HealthData API)** — Basic connector heartbeats (see UC-2.6.16) prove the service is up; deep health from the HealthData API exposes resource starvation, time drift, failed outbound checks to cloud dependencies, and registration edge cases that still leave the connector process running. These conditions cause intermittent brokering delays, policy refresh gaps, and mysterious registration churn on VDAs. 
Aggregating API snapshots per connector gives an early, concrete signal to patch, scale out,…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:cloudconnector:healthdata\"` with `connector_id`, `cpu_percent`, `alert_state`, `cloud_registration`, `time_sync_status`, `dependency_check`, `failed_outbound` fields parsed from the Citrix HealthData API or Cloud Connector local health snapshots forwarded on a short interval; complementary `sourcetype=\"citrix:cloudconnector\"` for baseline connectivity from UC-2.6.16 Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:cloudconnector:healthdata. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:cloudconnector:healthdata\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **reg_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sync_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dep_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cpu** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where reg_ok=0 OR sync_ok=0 OR dep_ok=0 OR cpu>90 OR like(lower(coalesce(alert_state, health_alert, \"\")), \"%fail%\") OR like…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, connector_id, resource_location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Connector matrix (CPU, registration, NTP, dependency failures); sparklines of failed outbound tests; overlay with VDA registration errors in the same resource location.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We go beyond a simple on-or-off check for the link to the cloud and look at time sync, heavy processor load, and failed dependency checks, so you fix the hidden strain before people lose access or machines stop registering.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Application_State"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.58",
              "n": "Citrix Analytics for Performance Data Export",
              "c": "high",
              "f": "advanced",
              "v": "Citrix Analytics for Performance scores sessions and surfaces machine and session lifecycle events with modeled user-experience metrics. When those streams land in a dedicated index, you can trend score regressions by delivery group, catch rising ICA round trip before the help desk floods, and separate image issues from home-network problems. This use case focuses on continuous performance observability and capacity-driven tuning, not on raw security forensics (see related security export use case).",
              "t": "Citrix Analytics Add-on for Splunk (Splunkbase 6280)",
              "d": "`index=citrix` `sourcetype=\"citrix:analytics:performance\"` with `session_id`, `user_principal`, `ux_score`, `logon_duration_ms`, `ica_rtt`, `vda_name`, `machine_event_type` from the Citrix Analytics for Performance data export via Citrix Analytics Add-on for Splunk (Splunkbase 6280); optional join to `sourcetype=\"citrix:vda:events\"` for on-host correlation",
              "q": "index=citrix sourcetype=\"citrix:analytics:performance\"\n| eval score=tonumber(coalesce(ux_score, user_experience_score, session_score, -1))\n| eval rtt=tonumber(coalesce(ica_rtt, round_trip_ms, 0))\n| eval logon=tonumber(coalesce(logon_duration_ms, logon_ms, 0))\n| where (score>0 AND score<70) OR rtt>300 OR logon>15000\n| eval reason=case(score>0 AND score<70, \"low_ux_score\", rtt>300, \"high_ica_rtt\", logon>15000, \"slow_logon\", true(), \"other\")\n| timechart span=1h count by reason, user_principal\n| fillnull value=0",
              "m": "Complete Citrix Cloud onboarding for Analytics, enable the Performance export, and install Splunkbase 6280 on a test search head. Map exported fields to a stable schema: prefer `user_principal` and `session_id` as join keys. Build baseline weekly medians of UX score and logon time per app group. Alert on a sustained drop in median score (for example 15 points for two hours) or on percentile shifts of ICA RTT. Route reports to EUC and network teams. Mask or hash identifiers if exports leave regulated regions. Keep raw exports within retention that matches your DLP policy.",
              "z": "Time chart of median UX score by delivery group; scatter of logon time versus ICA RTT; table of worst sessions in the last hour with drill to machine name and region.",
              "kfp": "Toggling performance analytics, rebinding a tenant, or a connector upgrade can pause or reshape export volume. Check entitlements and connector version before a zero-data alert; lab toggles and pilot tenants should be in an exclusion list.",
              "refs": "[Citrix Analytics Add-on for Splunk (Splunkbase 6280)](https://splunkbase.splunk.com/app/6280), [Citrix Analytics for Performance (overview)](https://docs.citrix.com/en-us/citrix-analytics/performance.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Analytics Add-on for Splunk (Splunkbase 6280).\n• Ensure the following data sources are available: `index=citrix` `sourcetype=\"citrix:analytics:performance\"` with `session_id`, `user_principal`, `ux_score`, `logon_duration_ms`, `ica_rtt`, `vda_name`, `machine_event_type` from the Citrix Analytics for Performance data export via Citrix Analytics Add-on for Splunk (Splunkbase 6280); optional join to `sourcetype=\"citrix:vda:events\"` for on-host correlation.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nComplete Citrix Cloud onboarding for Analytics, enable the Performance export, and install Splunkbase 6280 on a test search head. Map exported fields to a stable schema: prefer `user_principal` and `session_id` as join keys. Build baseline weekly medians of UX score and logon time per app group. Alert on a sustained drop in median score (for example 15 points for two hours) or on percentile shifts of ICA RTT. Route reports to EUC and network teams. Mask or hash identifiers if exports leave regul…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=citrix sourcetype=\"citrix:analytics:performance\"\n| eval score=tonumber(coalesce(ux_score, user_experience_score, session_score, -1))\n| eval rtt=tonumber(coalesce(ica_rtt, round_trip_ms, 0))\n| eval logon=tonumber(coalesce(logon_duration_ms, logon_ms, 0))\n| where (score>0 AND score<70) OR rtt>300 OR logon>15000\n| eval reason=case(score>0 AND score<70, \"low_ux_score\", rtt>300, \"high_ica_rtt\", logon>15000, \"slow_logon\", true(), \"other\")\n| timechart span=1h count by reason, user_principal\n| fillnull value=0\n```\n\nUnderstanding this SPL\n\n**Citrix Analytics for Performance Data Export** — Citrix Analytics for Performance scores sessions and surfaces machine and session lifecycle events with modeled user-experience metrics. When those streams land in a dedicated index, you can trend score regressions by delivery group, catch rising ICA round trip before the help desk floods, and separate image issues from home-network problems. This use case focuses on continuous performance observability and capacity-driven tuning, not on raw security forensics (see related…\n\nDocumented **Data sources**: `index=citrix` `sourcetype=\"citrix:analytics:performance\"` with `session_id`, `user_principal`, `ux_score`, `logon_duration_ms`, `ica_rtt`, `vda_name`, `machine_event_type` from the Citrix Analytics for Performance data export via Citrix Analytics Add-on for Splunk (Splunkbase 6280); optional join to `sourcetype=\"citrix:vda:events\"` for on-host correlation. **App/TA** (typical add-on context): Citrix Analytics Add-on for Splunk (Splunkbase 6280). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: citrix; **sourcetype**: citrix:analytics:performance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=citrix, sourcetype=\"citrix:analytics:performance\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **rtt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **logon** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (score>0 AND score<70) OR rtt>300 OR logon>15000` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **reason** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by reason, user_principal** — ideal for trending and alerting on this use case.\n• Fills null values with `fillnull`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart of median UX score by delivery group; scatter of logon time versus ICA RTT; table of worst sessions in the last hour with drill to machine name and region.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We turn the cloud’s picture of how smooth each session feels into simple charts, so you spot when slowness spreads before the phones light up and you know whether the fix is the app image or the network path.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.59",
              "n": "Citrix Analytics for Security Risk Indicators",
              "c": "critical",
              "f": "advanced",
              "v": "Citrix Analytics for Security aggregates behavioral signals on access to virtual apps and data: anomalous authentication patterns, data-exfiltration heuristics, and composite insider-threat style scores. Forwarding these indicators into a security operations index lets analysts create high-fidelity detections, hunt across users and risk types, and tune response playbooks (step-up, session recording review, or account disable) without only relying on raw gateway noise. The goal is to surface the risk-ranked narrative Microsoft and Citrix already compute, enriched with your corporate identity context in downstream workflows.",
              "t": "Citrix Analytics Add-on for Splunk (Splunkbase 6280) — imports risk insights and associated events from Citrix Analytics for Security. Supports both Performance and Security analytics data.",
              "d": "`index=citrix` `sourcetype=\"citrix:analytics:security\"` with `risk_score`, `risk_type`, `user_principal`, `threat_vector`, `event_subtype` (anomalous sign-in, data exfiltration heuristics, compromised credential signals, insider-risk scores) ingested through Citrix Analytics Add-on for Splunk (Splunkbase 6280); optional append of `sourcetype=\"citrix:gateway:syslog\"` for corroboration",
              "q": "index=citrix sourcetype=\"citrix:analytics:security\"\n| eval risk=tonumber(coalesce(risk_score, score, 0))\n| eval rtype=lower(coalesce(risk_type, event_subtype, category, \"unknown\"))\n| where risk>=70 OR like(rtype, \"%exfil%\") OR like(rtype, \"%anomal%auth%\") OR like(rtype, \"%insider%\")\n| stats latest(risk) as max_risk, values(threat_vector) as vectors, count as event_count by user_principal, rtype\n| sort - max_risk\n| head 200",
              "m": "Enable the Security data export in Citrix Cloud and connect Splunkbase 6280 with least-privilege API credentials. Classify `risk_type` into SOC tiers: authentication anomalies versus exfiltration signals versus insider risk. Send critical scores (for example 85 plus) to your incident queue with a direct link to Citrix Cloud investigation. Deduplicate on `user_principal` and five-minute windows to control noise. Add identity context from your directory or HR feed via lookup. Comply with privacy review before storing raw risk text in long retention.",
              "z": "Stacked bar of risk events by type; top risky users table; Sankey or sequence chart from sign-in to risk event when fields allow.",
              "kfp": "Purple-team, pen-test, and synthetic risk scenarios feed Citrix security analytics the same spiky indicators as real attack. Add project tags, test user lists, and time-bound exercise windows, then tune severity when only analytics risk score moves without corroboration.",
              "refs": "[Citrix Analytics Add-on for Splunk (Splunkbase 6280)](https://splunkbase.splunk.com/app/6280), [Citrix Analytics for Security](https://docs.citrix.com/en-us/citrix-analytics/security-analytics.html)",
              "mitre": [
                "T1078",
                "T1110"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Analytics Add-on for Splunk (Splunkbase 6280) — imports risk insights and associated events from Citrix Analytics for Security. Supports both Performance and Security analytics data..\n• Ensure the following data sources are available: `index=citrix` `sourcetype=\"citrix:analytics:security\"` with `risk_score`, `risk_type`, `user_principal`, `threat_vector`, `event_subtype` (anomalous sign-in, data exfiltration heuristics, compromised credential signals, insider-risk scores) ingested through Citrix Analytics Add-on for Splunk (Splunkbase 6280); optional append of `sourcetype=\"citrix:gateway:syslog\"` for corroboration.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable the Security data export in Citrix Cloud and connect Splunkbase 6280 with least-privilege API credentials. Classify `risk_type` into SOC tiers: authentication anomalies versus exfiltration signals versus insider risk. Send critical scores (for example 85 plus) to your incident queue with a direct link to Citrix Cloud investigation. Deduplicate on `user_principal` and five-minute windows to control noise. Add identity context from your directory or HR feed via lookup. Comply with privacy r…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=citrix sourcetype=\"citrix:analytics:security\"\n| eval risk=tonumber(coalesce(risk_score, score, 0))\n| eval rtype=lower(coalesce(risk_type, event_subtype, category, \"unknown\"))\n| where risk>=70 OR like(rtype, \"%exfil%\") OR like(rtype, \"%anomal%auth%\") OR like(rtype, \"%insider%\")\n| stats latest(risk) as max_risk, values(threat_vector) as vectors, count as event_count by user_principal, rtype\n| sort - max_risk\n| head 200\n```\n\nUnderstanding this SPL\n\n**Citrix Analytics for Security Risk Indicators** — Citrix Analytics for Security aggregates behavioral signals on access to virtual apps and data: anomalous authentication patterns, data-exfiltration heuristics, and composite insider-threat style scores. Forwarding these indicators into a security operations index lets analysts create high-fidelity detections, hunt across users and risk types, and tune response playbooks (step-up, session recording review, or account disable) without only relying on raw gateway noise. The…\n\nDocumented **Data sources**: `index=citrix` `sourcetype=\"citrix:analytics:security\"` with `risk_score`, `risk_type`, `user_principal`, `threat_vector`, `event_subtype` (anomalous sign-in, data exfiltration heuristics, compromised credential signals, insider-risk scores) ingested through Citrix Analytics Add-on for Splunk (Splunkbase 6280); optional append of `sourcetype=\"citrix:gateway:syslog\"` for corroboration. **App/TA** (typical add-on context): Citrix Analytics Add-on for Splunk (Splunkbase 6280) — imports risk insights and associated events from Citrix Analytics for Security. Supports both Performance and Security analytics data. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: citrix; **sourcetype**: citrix:analytics:security. 
That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=citrix, sourcetype=\"citrix:analytics:security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **rtype** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where risk>=70 OR like(rtype, \"%exfil%\") OR like(rtype, \"%anomal%auth%\") OR like(rtype, \"%insider%\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user_principal, rtype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar of risk events by type; top risky users table; Sankey or sequence chart from sign-in to risk event when fields allow.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We take the service’s own warnings about odd sign-ins, risky behavior, and possible data grab attempts and make them easy for your security team to act on, so a stolen login or an insider case does not stay invisible behind normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Alerts"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.60",
              "n": "Identity Provider (SAML/AAD) Integration Failures",
              "c": "critical",
              "f": "advanced",
              "v": "Workspace and StoreFront sign-ins that rely on SAML or Microsoft Entra ID can fail when federation certificates roll without coordination, when conditional access policies block legacy protocols, or when NameID/UPN mapping between directories drifts. Users experience intermittent or total login failure while infrastructure monitors still show green VDAs. Correlating Citrix-side assertion errors with Entra sign-in results isolates the owning team (identity vs Citrix) quickly and prevents prolonged outages during certificate and trust changes.",
              "t": "Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input.",
              "d": "`index=xd` `sourcetype=\"citrix:cloud:connector:saml\"` or `sourcetype=\"citrix:workspace:saml:diag\"` with `error_code`, `idp_name`, `cert_subject`, `assertion_user`; `index=azure` `sourcetype=\"azure:aad:signin\"` for conditional access failures, certificate-based auth issues, and UPN mismatch; `index=citrix` `sourcetype=\"citrix:analytics:security\"` optional risk layer Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd (sourcetype=\"citrix:workspace:saml:diag\" OR sourcetype=\"citrix:cloud:connector:saml\")\n| eval err=lower(coalesce(error_code, error, message, \"\")), src=\"citrix_saml\", user=coalesce(user_principal, saml_nameid, subject)\n| where like(err, \"%cert%\") OR like(err, \"%signature%\") OR like(err, \"%nameid%\") OR like(err, \"%audience%\") OR like(err, \"%mismatch%\") OR match(err, \"(AADSTS|MSIS)\")\n| eval userPrincipalName=user, errorCode=err\n| append [\n  search index=azure sourcetype=\"azure:aad:signin\" result!=\"Success\"\n  (resourceDisplayName=\"*citrix*\" OR resourceDisplayName=\"*Citrix*\" OR appDisplayName=\"Citrix Workspace\")\n  | eval src=\"entra_signin\"\n  | fields _time, userPrincipalName, result, conditionalAccessStatus, errorCode, src\n  ]\n| stats count by userPrincipalName, result, errorCode, src\n| sort - count",
              "m": "Ingest Citrix Workspace or connector SAML diagnostic logs to a dedicated sourcetype and map certificate expiry fields. Ingest Microsoft Entra sign-in logs for applications matching Citrix. Build a time-synced join on user and a five-minute window, not a naive transaction. Alert when SAML signature or certificate errors exceed a small baseline, or when Entra returns conditional access block codes for the Citrix app only. Add change tickets for cert rotations with automatic suppression. Document IdP cert fingerprints in a lookup for drift detection. Review privacy before storing full assertion bodies.",
              "z": "Side-by-side timeline: Citrix SAML errors versus Entra failure codes; table of UPNs with both streams; single value of unique users blocked in one hour.",
              "kfp": "IdP certificate rollover, Azure AD B2B guest quirks, and MFA enrollment bursts create transient SAML or token errors that look like integration failure. Use IdP health feeds, cert expiry lead time, and directory maintenance; ignore sub-minute blips at login peaks.",
              "refs": "[Citrix Cloud identity providers and authentication](https://docs.citrix.com/en-us/citrix-cloud/citrix-cloud-management/identity-providers-in-citrix-cloud.html), [Splunk add-on: Microsoft Cloud Services (Entra / Azure data)](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1556"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input..\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:cloud:connector:saml\"` or `sourcetype=\"citrix:workspace:saml:diag\"` with `error_code`, `idp_name`, `cert_subject`, `assertion_user`; `index=azure` `sourcetype=\"azure:aad:signin\"` for conditional access failures, certificate-based auth issues, and UPN mismatch; `index=citrix` `sourcetype=\"citrix:analytics:security\"` optional risk layer Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Citrix Workspace or connector SAML diagnostic logs to a dedicated sourcetype and map certificate expiry fields. Ingest Microsoft Entra sign-in logs for applications matching Citrix. Build a time-synced join on user and a five-minute window, not a naive transaction. Alert when SAML signature or certificate errors exceed a small baseline, or when Entra returns conditional access block codes for the Citrix app only. Add change tickets for cert rotations with automatic suppression. Document I…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:workspace:saml:diag\" OR sourcetype=\"citrix:cloud:connector:saml\")\n| eval err=lower(coalesce(error_code, error, message, \"\")), src=\"citrix_saml\", user=coalesce(user_principal, saml_nameid, subject)\n| where like(err, \"%cert%\") OR like(err, \"%signature%\") OR like(err, \"%nameid%\") OR like(err, \"%audience%\") OR like(err, \"%mismatch%\") OR match(err, \"(AADSTS|MSIS)\")\n| eval userPrincipalName=user, errorCode=err\n| append [\n  search index=azure sourcetype=\"azure:aad:signin\" result!=\"Success\"\n  (resourceDisplayName=\"*citrix*\" OR resourceDisplayName=\"*Citrix*\" OR appDisplayName=\"Citrix Workspace\")\n  | eval src=\"entra_signin\"\n  | fields _time, userPrincipalName, result, conditionalAccessStatus, errorCode, src\n  ]\n| stats count by userPrincipalName, result, errorCode, src\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Identity Provider (SAML/AAD) Integration Failures** — Workspace and StoreFront sign-ins that rely on SAML or Microsoft Entra ID can fail when federation certificates roll without coordination, when conditional access policies block legacy protocols, or when NameID/UPN mapping between directories drifts. Users experience intermittent or total login failure while infrastructure monitors still show green VDAs. 
Correlating Citrix-side assertion errors with Entra sign-in results isolates the owning team (identity vs Citrix) quickly…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:cloud:connector:saml\"` or `sourcetype=\"citrix:workspace:saml:diag\"` with `error_code`, `idp_name`, `cert_subject`, `assertion_user`; `index=azure` `sourcetype=\"azure:aad:signin\"` for conditional access failures, certificate-based auth issues, and UPN mismatch; `index=citrix` `sourcetype=\"citrix:analytics:security\"` optional risk layer Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:workspace:saml:diag, citrix:cloud:connector:saml. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:workspace:saml:diag\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **err** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where like(err, \"%cert%\") OR like(err, \"%signature%\") OR like(err, \"%nameid%\") OR like(err, \"%audience%\") OR like(err, \"%mi…` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **userPrincipalName** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by userPrincipalName, result, errorCode, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Side-by-side timeline: Citrix SAML errors versus Entra failure codes; table of UPNs with both streams; single value of unique users blocked in one hour.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We line up the sign-in story from the identity side with the one from the virtual-workspace side, so when folks cannot log in you see whether a trust certificate, a policy block, or a name mismatch is the real villain.",
              "mtype": [
                "Availability",
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.61",
              "n": "Citrix HDX Rendezvous Protocol Path Selection",
              "c": "high",
              "f": "advanced",
              "v": "HDX Rendezvous lets sessions establish through direct UDP paths with STUN when possible, and fall back to relayed transport when firewalls, symmetric NAT, or port blocks get in the way. A high relay ratio or STUN failure clusters often point to home-router settings, guest Wi-Fi, or data-center egress rules rather than the VDA image. Monitoring path selection, Rendezvous v2 adoption, and UDP blockage patterns helps the right team tune Gateway, DTLS, and client policy before remote users see chronic latency, dropped multimedia, and unstable Teams inside sessions.",
              "t": "Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input.",
              "d": "`index=xd` `sourcetype=\"citrix:hdx:rendezvous\"` or VDA/Session diagnostics with `rendezvous_path` (`direct` vs `relay`), `stun_status`, `udp_blocked`, `nat_type`, `rendezvous_version`; optional `sourcetype=\"citrix:gateway:connection\"` for combined path; uberAgent or IP flow telemetry in parallel for home-office ISP issues Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd (sourcetype=\"citrix:hdx:rendezvous\" OR (sourcetype=\"citrix:vda:events\" event_type=\"*rendezvous*\"))\n| eval path=lower(coalesce(rendezvous_path, path_mode, connection_path, \"unknown\"))\n| eval stun=lower(coalesce(stun_status, stun, \"unknown\"))\n| eval udp_block=if(match(lower(coalesce(udp_blocked, \"\")), \"(true|yes|1|blocked)\"), 1, 0)\n| eval ver=coalesce(rendezvous_version, rv2_version, \"na\")\n| where path=\"relay\" OR udp_block=1 OR stun!=\"ok\" OR match(lower(coalesce(nat_type, \"\")), \"(sym|symmetric|strict)\")\n| stats count by host, user, path, stun, udp_block, nat_type, ver\n| sort - count",
              "m": "Enable the enhanced rendezvous or HDX connection diagnostics in Citrix that emit path mode. Forward those events to `index=xd` with a stable sourcetype. Parse boolean UDP-block flags when present. Create weekly baselines: percentage relay versus direct by region and by client build. Alert when relay share jumps more than 20 points versus the rolling median for a region, or when symmetric NAT count spikes after a home-router firmware wave. Work with network teams to document required UDP and DTLS allow rules. Pair with Citrix Workspace app version compliance.",
              "z": "Stacked 100% bar: direct vs relay by region; map of STUN failure counts; time chart of rendezvous v2 share across clients.",
              "kfp": "Firewall and routing changes during rendezvous and EDT pilots deliberately shift 'direct' versus 'cloud' path. Document expected path per group; alert when production cohorts fall back across both paths without a matching change, not a single pilot user.",
              "refs": "[HDX direct connections (Rendezvous) — product documentation](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/hdx-direct-connections.html), [Citrix Gateway and rendezvous (deployment context)](https://docs.citrix.com/en-us/citrix-gateway/13-1-citrix-gateway-federation-integration.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input..\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:hdx:rendezvous\"` or VDA/Session diagnostics with `rendezvous_path` (`direct` vs `relay`), `stun_status`, `udp_blocked`, `nat_type`, `rendezvous_version`; optional `sourcetype=\"citrix:gateway:connection\"` for combined path; uberAgent or IP flow telemetry in parallel for home-office ISP issues Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable the enhanced rendezvous or HDX connection diagnostics in Citrix that emit path mode. Forward those events to `index=xd` with a stable sourcetype. Parse boolean UDP-block flags when present. Create weekly baselines: percentage relay versus direct by region and by client build. Alert when relay share jumps more than 20 points versus the rolling median for a region, or when symmetric NAT count spikes after a home-router firmware wave. Work with network teams to document required UDP and DTLS…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:hdx:rendezvous\" OR (sourcetype=\"citrix:vda:events\" event_type=\"*rendezvous*\"))\n| eval path=lower(coalesce(rendezvous_path, path_mode, connection_path, \"unknown\"))\n| eval stun=lower(coalesce(stun_status, stun, \"unknown\"))\n| eval udp_block=if(match(lower(coalesce(udp_blocked, \"\")), \"(true|yes|1|blocked)\"), 1, 0)\n| eval ver=coalesce(rendezvous_version, rv2_version, \"na\")\n| where path=\"relay\" OR udp_block=1 OR stun!=\"ok\" OR match(lower(coalesce(nat_type, \"\")), \"(sym|symmetric|strict)\")\n| stats count by host, user, path, stun, udp_block, nat_type, ver\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Citrix HDX Rendezvous Protocol Path Selection** — HDX Rendezvous lets sessions establish through direct UDP paths with STUN when possible, and fall back to relayed transport when firewalls, symmetric NAT, or port blocks get in the way. A high relay ratio or STUN failure clusters often point to home-router settings, guest Wi-Fi, or data-center egress rules rather than the VDA image. Monitoring path selection, Rendezvous v2 adoption, and UDP blockage patterns helps the right team tune Gateway, DTLS, and client policy before…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:hdx:rendezvous\"` or VDA/Session diagnostics with `rendezvous_path` (`direct` vs `relay`), `stun_status`, `udp_blocked`, `nat_type`, `rendezvous_version`; optional `sourcetype=\"citrix:gateway:connection\"` for combined path; uberAgent or IP flow telemetry in parallel for home-office ISP issues Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. 
**App/TA** (typical add-on context): Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:hdx:rendezvous, citrix:vda:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:hdx:rendezvous\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **path** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **stun** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **udp_block** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ver** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where path=\"relay\" OR udp_block=1 OR stun!=\"ok\" OR match(lower(coalesce(nat_type, \"\")), \"(sym|symmetric|strict)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, user, path, stun, udp_block, nat_type, ver** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked 100% bar: direct vs relay by region; map of STUN failure counts; time chart of rendezvous v2 share across clients.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We see whether the session took the fast direct lane or a slower relayed path through the cloud, and whether home networks block the traffic, so you can fix the right rule or help people with their home gear before video and calls stutter for everyone.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.62",
              "n": "Citrix Workspace Service Feed Availability",
              "c": "high",
              "f": "intermediate",
              "v": "The Citrix Workspace service must enumerate feeds, aggregate resources from brokering and cloud sources, and stay responsive over HTTPS. Certificate problems, API throttling, or connector outages can produce empty start menus, missing apps, or flapping resource lists that mimic client bugs. Synthetics and server-side feed diagnostics measure availability and latency to the user-facing document endpoints and tie failures to a specific store, region, or IDP, shortening mean time to restore before broad ticket spikes.",
              "t": "Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input.",
              "d": "`index=xd` `sourcetype=\"citrix:workspace:feed\"` with `feed_url`, `http_status`, `latency_ms`, `store_name`, `resource_count`, `error_code`; client-side or synthetic probe events from StoreFront/Workspace; optional `sourcetype=\"citrix:cloud:connector:svc\"` for dependency failures affecting aggregation Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd sourcetype=\"citrix:workspace:feed\"\n| eval ok=if(match(coalesce(http_status, status, \"200\"), \"^(200|204)$\"), 1, 0)\n| eval lat=tonumber(coalesce(latency_ms, response_time_ms, 0))\n| where ok=0 OR lat>2000 OR tonumber(coalesce(resource_count, -1))=0\n| eval issue=case(ok=0, \"http_or_feed_error\", lat>2000, \"high_latency\", tonumber(coalesce(resource_count,0))=0, \"empty_catalog\", true(), \"other\")\n| timechart span=5m count by issue, store_name\n| fillnull value=0",
              "m": "Deploy both passive logs (if available from Citrix) and a lightweight synthetic that requests the same feed entry points the clients use, tagged by region. Send results to a dedicated index with five-minute resolution. Set SLOs (for example 99.9 percent under one second) per region. Alert on two consecutive non-200 responses, zero resources returned for any active directory group, or latency above two seconds. Pair alerts with 2.6.16 and 2.6.60 when the failure is identity-related. Keep separate dashboards for on-premises stores versus cloud Workspace.",
              "z": "Uptime and latency SLO by region; heatmap of feed errors; single value: resources returned versus expected baseline from yesterday same hour.",
              "kfp": "CDN or public DNS hiccups and Workspace app cache refresh during brand changes can depress feed availability for everyone. External synthetic and provider status, plus internal app publishing, separate vendor blips from StoreFront or broker issues.",
              "refs": "[Citrix Workspace app — technical overview and connectivity](https://docs.citrix.com/en-us/citrix-workspace-app-for-windows.html), [Configure Workspace experience (Citrix DaaS)](https://docs.citrix.com/en-us/citrix-daas-service/integrate-identity-serve-apps-and-data.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input..\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:workspace:feed\"` with `feed_url`, `http_status`, `latency_ms`, `store_name`, `resource_count`, `error_code`; client-side or synthetic probe events from StoreFront/Workspace; optional `sourcetype=\"citrix:cloud:connector:svc\"` for dependency failures affecting aggregation Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy both passive logs (if available from Citrix) and a lightweight synthetic that requests the same feed entry points the clients use, tagged by region. Send results to a dedicated index with five-minute resolution. Set SLOs (for example 99.9 percent under one second) per region. Alert on two consecutive non-200 responses, zero resources returned for any active directory group, or latency above two seconds. Pair alerts with 2.6.16 and 2.6.60 when the failure is identity-related. Keep separate…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:workspace:feed\"\n| eval ok=if(match(coalesce(http_status, status, \"200\"), \"^(200|204)$\"), 1, 0)\n| eval lat=tonumber(coalesce(latency_ms, response_time_ms, 0))\n| where ok=0 OR lat>2000 OR tonumber(coalesce(resource_count, -1))=0\n| eval issue=case(ok=0, \"http_or_feed_error\", lat>2000, \"high_latency\", tonumber(coalesce(resource_count,0))=0, \"empty_catalog\", true(), \"other\")\n| timechart span=5m count by issue, store_name\n| fillnull value=0\n```\n\nUnderstanding this SPL\n\n**Citrix Workspace Service Feed Availability** — The Citrix Workspace service must enumerate feeds, aggregate resources from brokering and cloud sources, and stay responsive over HTTPS. Certificate problems, API throttling, or connector outages can produce empty start menus, missing apps, or flapping resource lists that mimic client bugs. Synthetics and server-side feed diagnostics measure availability and latency to the user-facing document endpoints and tie failures to a specific store, region, or IDP, shortening mean…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:workspace:feed\"` with `feed_url`, `http_status`, `latency_ms`, `store_name`, `resource_count`, `error_code`; client-side or synthetic probe events from StoreFront/Workspace; optional `sourcetype=\"citrix:cloud:connector:svc\"` for dependency failures affecting aggregation Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. 
For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:workspace:feed. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:workspace:feed\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **lat** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ok=0 OR lat>2000 OR tonumber(coalesce(resource_count, -1))=0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by issue, store_name** — ideal for trending and alerting on this use case.\n• Fills null values with `fillnull`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Uptime and latency SLO by region; heatmap of feed errors; single value: resources returned versus expected baseline from yesterday same hour.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on workspace Service Feed Availability and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Application_State"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.63",
              "n": "DaaS Autoscale Cloud Economics Tracking",
              "c": "high",
              "f": "advanced",
              "v": "DaaS autoscale can chase session demand, but public-cloud bills reflect clock time, instance families, and lingering powered-on capacity as much as user counts. Blending host-pool or delivery-group session peaks with tag-aligned cloud spend shows scale-out efficiency, expensive idle headroom, and cost-per-active-session trends. Finance and platform teams get a defensible way to right-size buffer percentages, change instance SKUs, or tune shutdown aggressiveness without only trusting static dashboards in the admin consoles.",
              "t": "Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input.",
              "d": "`index=cloud` or `index=azure` `sourcetype=\"azure:consume:export\"` with `cost`, `resource_id`, `tag_pool`, `meter`; `index=xd` `sourcetype=\"citrix:mc:autoscale\"` with `power_action`, `machine_count`, `session_count`, `host_pool`; `index=xd` `sourcetype=\"citrix:brokering:summary\"` for active sessions; optional FinOps CSV via `inputlookup`",
              "q": "index=cloud sourcetype=\"azure:consume:export\" (resource_type=\"*compute*\" OR resource_type=\"*virtual*\")\n| eval tag_pool=if(isnotnull(citrix_host_pool) AND citrix_host_pool!=\"\", citrix_host_pool, coalesce(resource_name, resource_id, \"unmapped\"))\n| bin _time span=1d\n| stats sum(tonumber(cost,0)) as daily_cost by _time, tag_pool\n| join type=left _time, tag_pool [\n  search index=xd (sourcetype=\"citrix:mc:autoscale\" OR sourcetype=\"citrix:brokering:summary\")\n  | eval tag_pool=coalesce(host_pool, delivery_group, catalog_name, \"unmapped\")\n  | bin _time span=1d\n  | stats max(session_count) as peak_sessions, latest(machine_count) as reported_machines by _time, tag_pool\n  ]\n| eval cost_per_session=if(peak_sessions>0, round(daily_cost/peak_sessions,3), null())\n| where isnotnull(daily_cost) AND daily_cost>0\n| table _time, tag_pool, daily_cost, peak_sessions, reported_machines, cost_per_session",
              "m": "Tag or label cloud VMs with a stable `citrix_host_pool` value matching Splunk's brokering or MCS data. Ingest a daily (or hourly) cost feed with the same key. Build weekly reports: cost per session by pool, unused powered-on hours, and autoscale event counts versus cost deltas. Set soft alerts for sudden jumps in cost-per-session or sustained idle high-water marks after scale events. Engage FinOps to validate currency and amortization. Never alert on cost alone without a session denominator except for obvious billing anomalies. Document that bursty test traffic can skew short windows.",
              "z": "Line chart of cost per session by pool; stacked area of instance hours paid versus sessions; table of autoscale power actions and next-day cost impact.",
              "kfp": "Reserved instances, committed use, and org-wide 'always on' capacity for compliance can make idle-looking cloud spend look 'bad' in a simple dollars-per-day cap. Join FinOps baselines and the business minimum seat count, not a lone threshold.",
              "refs": "[Autoscale in Citrix DaaS](https://docs.citrix.com/en-us/citrix-daas-service-delivery-machines/delivery-groups/autoscale-daas.html), [Microsoft Cost Management (export cost data to external tools)](https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/analyze-cost-data-azure)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input..\n• Ensure the following data sources are available: `index=cloud` or `index=azure` `sourcetype=\"azure:consume:export\"` with `cost`, `resource_id`, `tag_pool`, `meter`; `index=xd` `sourcetype=\"citrix:mc:autoscale\"` with `power_action`, `machine_count`, `session_count`, `host_pool`; `index=xd` `sourcetype=\"citrix:brokering:summary\"` for active sessions; optional FinOps CSV via `inputlookup`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag or label cloud VMs with a stable `citrix_host_pool` value matching Splunk's brokering or MCS data. Ingest a daily (or hourly) cost feed with the same key. Build weekly reports: cost per session by pool, unused powered-on hours, and autoscale event counts versus cost deltas. Set soft alerts for sudden jumps in cost-per-session or sustained idle high-water marks after scale events. Engage FinOps to validate currency and amortization. Never alert on cost alone without a session denominator exce…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:consume:export\" (resource_type=\"*compute*\" OR resource_type=\"*virtual*\")\n| eval tag_pool=if(isnotnull(citrix_host_pool) AND citrix_host_pool!=\"\", citrix_host_pool, coalesce(resource_name, resource_id, \"unmapped\"))\n| bin _time span=1d\n| stats sum(tonumber(cost,0)) as daily_cost by _time, tag_pool\n| join type=left _time, tag_pool [\n  search index=xd (sourcetype=\"citrix:mc:autoscale\" OR sourcetype=\"citrix:brokering:summary\")\n  | eval tag_pool=coalesce(host_pool, delivery_group, catalog_name, \"unmapped\")\n  | bin _time span=1d\n  | stats max(session_count) as peak_sessions, latest(machine_count) as reported_machines by _time, tag_pool\n  ]\n| eval cost_per_session=if(peak_sessions>0, round(daily_cost/peak_sessions,3), null())\n| where isnotnull(daily_cost) AND daily_cost>0\n| table _time, tag_pool, daily_cost, peak_sessions, reported_machines, cost_per_session\n```\n\nUnderstanding this SPL\n\n**DaaS Autoscale Cloud Economics Tracking** — DaaS autoscale can chase session demand, but public-cloud bills reflect clock time, instance families, and lingering powered-on capacity as much as user counts. Blending host-pool or delivery-group session peaks with tag-aligned cloud spend shows scale-out efficiency, expensive idle headroom, and cost-per-active-session trends. Finance and platform teams get a defensible way to right-size buffer percentages, change instance SKUs, or tune shutdown aggressiveness without only…\n\nDocumented **Data sources**: `index=cloud` or `index=azure` `sourcetype=\"azure:consume:export\"` with `cost`, `resource_id`, `tag_pool`, `meter`; `index=xd` `sourcetype=\"citrix:mc:autoscale\"` with `power_action`, `machine_count`, `session_count`, `host_pool`; `index=xd` `sourcetype=\"citrix:brokering:summary\"` for active sessions; optional FinOps CSV via `inputlookup`. 
**App/TA** (typical add-on context): Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated analytics data. For raw Cloud service data, use the Citrix Cloud System Log Add-on (open-source, see citrix/cc-system-log-addon-for-splunk repository) or poll the Monitor Service OData API (https://api.cloud.com/monitorodata) with a custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:consume:export. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:consume:export\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **tag_pool** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, tag_pool** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **cost_per_session** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(daily_cost) AND daily_cost>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DaaS Autoscale Cloud Economics Tracking**): table _time, tag_pool, daily_cost, peak_sessions, reported_machines, cost_per_session\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart of cost per session by pool; stacked area of instance hours paid versus sessions; table of autoscale power actions and next-day cost impact.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We line up what the virtual desktops cost in the public cloud with how many people actually used them, so you can see waste from machines left running and tune the auto-sizing rules with numbers everyone trusts.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.64",
              "n": "Citrix Endpoint Management Device Enrollment Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Citrix Endpoint Management (CEM) enrollments that fail by Azure AD, identity-provider, or gateway-based flows strand devices without the policies and secure channels you expect. A rising failure rate in one method (for example AAD) often foreshadows certificate or conditional-access changes rather than a bad device. Tracking failures by method and MDM versus MAM split, with hourly trends, helps operations separate widespread identity drift from a flaky Wi-Fi at one site, and it pairs naturally with the certificate and compliance use cases in the same runbooks.",
              "t": "No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention.",
              "d": "`index=mdm` or `index=xd` `sourcetype=\"citrix:endpoint:enrollment\"` with `enrollment_method` (`AAD`, `idp`, `gateway`, `scep`), `mdms_scope` (full MDM vs MAM), `outcome` (`success`/`failed`), `error_code`, `device_platform`, `user_id`; server logs from Citrix Endpoint Management (on-premises or cloud connector) forwarded with stable timestamps Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd sourcetype=\"citrix:endpoint:enrollment\" outcome!=\"success\"\n| eval method=upper(coalesce(enrollment_method, channel, \"UNKNOWN\"))\n| eval mode=coalesce(mdms_scope, mdm_mam, enrollment_mode, \"unknown\")\n| eval platform=coalesce(device_platform, os_type, \"unknown\")\n| bin _time span=1h\n| stats count as failures, dc(error_code) as unique_errors by _time, method, mode, platform\n| where failures>=5\n| sort - _time, failures",
              "m": "Stream enrollment transactions from the CEM service or appliance into a dedicated index and sourcetype. Normalize `outcome` to lower case. Add a small lookup of acceptable error rates per platform. Alert when hourly non-success events exceed a rolling four-hour baseline by 300 percent, or any single error code appears more than 50 times in an hour. Provide a dashboard by enrollment method and region. Separate corporate-owned and BYOD cohorts if your data model supports it. Coordinate with the identity team when AAD- or IdP-tagged failures lead the chart.",
              "z": "Time chart: enrollment failures by method; bar chart: MDM versus MAM failure share; drill table with top error_code and last device sample IDs (masked).",
              "kfp": "New device season, OS refresh, and OTA mass pushes spike enrollment errors in absolute count. Use success-rate floors and the enrollment campaign ID; raw failure counts without cohort size are often false volume.",
              "refs": "[Citrix Endpoint Management product documentation](https://docs.citrix.com/en-us/citrix-endpoint-management.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention..\n• Ensure the following data sources are available: `index=mdm` or `index=xd` `sourcetype=\"citrix:endpoint:enrollment\"` with `enrollment_method` (`AAD`, `idp`, `gateway`, `scep`), `mdms_scope` (full MDM vs MAM), `outcome` (`success`/`failed`), `error_code`, `device_platform`, `user_id`; server logs from Citrix Endpoint Management (on-premises or cloud connector) forwarded with stable timestamps Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStream enrollment transactions from the CEM service or appliance into a dedicated index and sourcetype. Normalize `outcome` to lower case. Add a small lookup of acceptable error rates per platform. Alert when hourly non-success events exceed a rolling four-hour baseline by 300 percent, or any single error code appears more than 50 times in an hour. Provide a dashboard by enrollment method and region. Separate corporate-owned and BYOD cohorts if your data model supports it. Coordinate with the id…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:endpoint:enrollment\" outcome!=\"success\"\n| eval method=upper(coalesce(enrollment_method, channel, \"UNKNOWN\"))\n| eval mode=coalesce(mdms_scope, mdm_mam, enrollment_mode, \"unknown\")\n| eval platform=coalesce(device_platform, os_type, \"unknown\")\n| bin _time span=1h\n| stats count as failures, dc(error_code) as unique_errors by _time, method, mode, platform\n| where failures>=5\n| sort - _time, failures\n```\n\nUnderstanding this SPL\n\n**Citrix Endpoint Management Device Enrollment Failures** — Citrix Endpoint Management (CEM) enrollments that fail by Azure AD, identity-provider, or gateway-based flows strand devices without the policies and secure channels you expect. A rising failure rate in one method (for example AAD) often foreshadows certificate or conditional-access changes rather than a bad device. Tracking failures by method and MDM versus MAM split, with hourly trends, helps operations separate widespread identity drift from a flaky Wi-Fi at one site,…\n\nDocumented **Data sources**: `index=mdm` or `index=xd` `sourcetype=\"citrix:endpoint:enrollment\"` with `enrollment_method` (`AAD`, `idp`, `gateway`, `scep`), `mdms_scope` (full MDM vs MAM), `outcome` (`success`/`failed`), `error_code`, `device_platform`, `user_id`; server logs from Citrix Endpoint Management (on-premises or cloud connector) forwarded with stable timestamps Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for Citrix Endpoint Management. 
Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:endpoint:enrollment. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:endpoint:enrollment\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **method** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mode** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **platform** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, method, mode, platform** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failures>=5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart: enrollment failures by method; bar chart: MDM versus MAM failure share; drill table with top error_code and last device sample IDs (masked).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We count when phones and tablets fail to check in to the company’s control system and we split the reasons, so a broken sign-in path or a bad update shows up in one clear picture before devices stay unmanaged for long.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.65",
              "n": "Citrix Endpoint Management MDM/MAM Policy Compliance",
              "c": "critical",
              "f": "intermediate",
              "v": "MDM and MAM policies express your minimum security bar: no jailbreak or root, current OS patch bands, a real passcode, and no disallowed applications. CEM can emit compliance state per device and per policy package. A rising non-compliant population after an OS release, or a sudden bloom of blacklisted app hits, is often your first sign of shadow IT or stolen devices on the same fleet as your regulated data. This use case drives executive-friendly compliance rate charts and high-severity security alerts in one place.",
              "t": "No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention.",
              "d": "`index=xd` `sourcetype=\"citrix:endpoint:compliance\"` with `jailbreak_flag`, `root_flag`, `os_patch_level`, `passcode_compliant`, `blacklisted_app_hit`, `device_id`, `compliance_state`; CEM device inventory or compliance export jobs Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd sourcetype=\"citrix:endpoint:compliance\"\n| eval jf=if(match(lower(coalesce(jailbreak_flag, jailbroken, is_compromised, \"no\")), \"(1|true|yes)\"), 1, 0)\n| eval rf=if(match(lower(coalesce(root_flag, rooted, \"no\")), \"(1|true|yes)\"), 1, 0)\n| eval pc=if(match(lower(coalesce(passcode_compliant, has_pin, \"yes\")), \"(0|false|no)\"), 0, 1)\n| eval bad_app=if(tonumber(coalesce(blacklisted_app_hit, blocked_app, 0))>0, 1, 0)\n| where jf=1 OR rf=1 OR pc=0 OR bad_app=1 OR lower(coalesce(compliance_state, \"\"))!=\"compliant\"\n| eval reason=case(jf=1, \"jailbreak\", rf=1, \"root\", pc=0, \"passcode\", bad_app=1, \"blacklist_app\", true(), \"other\")\n| stats values(device_id) as sample_devices, latest(os_patch_level) as patch_level, count as events by user_id, reason\n| sort - events",
              "m": "Ingest a daily (or more frequent) compliance snapshot, not only raw real-time if volume is high. Map vendor booleans to consistent integer flags. Create an overall `compliance_percent` for managed devices. Alert on any jailbreak or root true, any blacklisted app install on a corporate-owned tag, and sustained passcode false on more than five percent of a business unit. Pair with asset ownership lookups. For regulated industries, route evidence exports to your GRC archive with retention that matches policy. Reconcile counts with the CEM admin console during rollout.",
              "z": "Donut: compliant versus not; bar: reasons for failure; line: compliance percent by OS major version across months.",
              "kfp": "Grace periods after a new compliance or passcode policy and BYOD catch-up can lag for days. Compare event rate to the policy version rollout and exclude a 24–48 h post-push window; BYOD in the lookup needs different baselines than corporate fully managed.",
              "refs": "[Compliance policies in Citrix Endpoint Management](https://docs.citrix.com/en-us/citrix-endpoint-management/citrix-endpoint-mdm-mam.html)",
              "mitre": [
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention..\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:endpoint:compliance\"` with `jailbreak_flag`, `root_flag`, `os_patch_level`, `passcode_compliant`, `blacklisted_app_hit`, `device_id`, `compliance_state`; CEM device inventory or compliance export jobs Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest a daily (or more frequent) compliance snapshot, not only raw real-time if volume is high. Map vendor booleans to consistent integer flags. Create an overall `compliance_percent` for managed devices. Alert on any jailbreak or root true, any blacklisted app install on a corporate-owned tag, and sustained passcode false on more than five percent of a business unit. Pair with asset ownership lookups. For regulated industries, route evidence exports to your GRC archive with retention that matc…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:endpoint:compliance\"\n| eval jf=if(match(lower(coalesce(jailbreak_flag, jailbroken, is_compromised, \"no\")), \"(1|true|yes)\"), 1, 0)\n| eval rf=if(match(lower(coalesce(root_flag, rooted, \"no\")), \"(1|true|yes)\"), 1, 0)\n| eval pc=if(match(lower(coalesce(passcode_compliant, has_pin, \"yes\")), \"(0|false|no)\"), 0, 1)\n| eval bad_app=if(tonumber(coalesce(blacklisted_app_hit, blocked_app, 0))>0, 1, 0)\n| where jf=1 OR rf=1 OR pc=0 OR bad_app=1 OR lower(coalesce(compliance_state, \"\"))!=\"compliant\"\n| eval reason=case(jf=1, \"jailbreak\", rf=1, \"root\", pc=0, \"passcode\", bad_app=1, \"blacklist_app\", true(), \"other\")\n| stats values(device_id) as sample_devices, latest(os_patch_level) as patch_level, count as events by user_id, reason\n| sort - events\n```\n\nUnderstanding this SPL\n\n**Citrix Endpoint Management MDM/MAM Policy Compliance** — MDM and MAM policies express your minimum security bar: no jailbreak or root, current OS patch bands, a real passcode, and no disallowed applications. CEM can emit compliance state per device and per policy package. A rising non-compliant population after an OS release, or a sudden bloom of blacklisted app hits, is often your first sign of shadow IT or stolen devices on the same fleet as your regulated data. This use case drives executive-friendly compliance rate charts and…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:endpoint:compliance\"` with `jailbreak_flag`, `root_flag`, `os_patch_level`, `passcode_compliant`, `blacklisted_app_hit`, `device_id`, `compliance_state`; CEM device inventory or compliance export jobs Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. 
**App/TA** (typical add-on context): No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:endpoint:compliance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:endpoint:compliance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **jf** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **rf** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **pc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **bad_app** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where jf=1 OR rf=1 OR pc=0 OR bad_app=1 OR lower(coalesce(compliance_state, \"\"))!=\"compliant\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **reason** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user_id, reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Donut: compliant versus not; bar: reasons for failure; line: compliance percent by OS major version across months.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We see which phones and tablets are still safe to trust—patched, locked, and without forbidden apps or hacked systems—so risky devices are spotted before they carry your email and files where rules say they should not be.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "e": [
                "citrix",
                "syslog"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.66",
              "n": "Citrix Endpoint Management App Distribution Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Pushing in-house, store, and volume-purchase programs (VPP) apps through CEM depends on correct tokens, licenses, and platform-specific constraints. A burst of VPP or enterprise install failures is often an Apple Business Manager or Google side issue; steady enterprise failures can point to signing or package corruption. This use case breaks down failures by channel so mobile operations can open the right vendor ticket, roll back a bad build, or fix token drift without re-imaging the whole estate.",
              "t": "No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention.",
              "d": "`index=xd` `sourcetype=\"citrix:endpoint:app:deploy\"` with `app_id`, `app_name`, `action` (`install`/`update`), `outcome`, `error_category` (`VPP`, `app_store`, `enterprise`, `mdm_push`), `device_platform`, `user_id`, `vpp_code` when available Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd sourcetype=\"citrix:endpoint:app:deploy\" outcome!=\"success\"\n| eval cat=lower(coalesce(error_category, failure_bucket, app_source, \"unknown\"))\n| eval app=coalesce(app_name, app_id, \"unknown\")\n| eval plat=coalesce(device_platform, os_type, \"unknown\")\n| where like(cat, \"%vpp%\") OR like(cat, \"%app%store%\") OR like(cat, \"%enterprise%\") OR like(cat, \"%push%\") OR isnotnull(vpp_code)\n| timechart span=1h count by cat, plat\n| fillnull value=0",
              "m": "Ingest CEM app deployment or command-result logs. Tag errors into coarse buckets (VPP, public store, enterprise/internal). Mask user identifiers in shared dashboards. Alert when failures for a specific `app_id` cross a threshold in two consecutive hours, or when VPP-scoped errors exceed the prior week at the same time of day. Keep a runbook for common codes (license exhausted, not compatible OS, app removed from store). Pair with app owners so version bumps are not silent. Rate-limit noisy beta cohorts in test rings.",
              "z": "Stacked area of failures by error bucket; top failing apps table; link from `vpp_code` to Apple’s code reference where applicable.",
              "kfp": "Mass in-house app signings, App Store re-releases, and line-of-business app updates can fail in parallel for reasons unrelated to attack. Time-correlate with MAM app version publish and the signing cert rotation ticket before user-level malware assumptions.",
              "refs": "[Distribute and manage mobile apps in Citrix Endpoint Management](https://docs.citrix.com/en-us/citrix-endpoint-management/mdm-mam/endpoint-management-mdm-mam-mdx-apps.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention..\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:endpoint:app:deploy\"` with `app_id`, `app_name`, `action` (`install`/`update`), `outcome`, `error_category` (`VPP`, `app_store`, `enterprise`, `mdm_push`), `device_platform`, `user_id`, `vpp_code` when available Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CEM app deployment or command-result logs. Tag errors into coarse buckets (VPP, public store, enterprise/internal). Mask user identifiers in shared dashboards. Alert when failures for a specific `app_id` cross a threshold in two consecutive hours, or when VPP-scoped errors exceed the prior week at the same time of day. Keep a runbook for common codes (license exhausted, not compatible OS, app removed from store). Pair with app owners so version bumps are not silent. Rate-limit noisy beta …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:endpoint:app:deploy\" outcome!=\"success\"\n| eval cat=lower(coalesce(error_category, failure_bucket, app_source, \"unknown\"))\n| eval app=coalesce(app_name, app_id, \"unknown\")\n| eval plat=coalesce(device_platform, os_type, \"unknown\")\n| where like(cat, \"%vpp%\") OR like(cat, \"%app%store%\") OR like(cat, \"%enterprise%\") OR like(cat, \"%push%\") OR isnotnull(vpp_code)\n| timechart span=1h count by cat, plat\n| fillnull value=0\n```\n\nUnderstanding this SPL\n\n**Citrix Endpoint Management App Distribution Failures** — Pushing in-house, store, and volume-purchase programs (VPP) apps through CEM depends on correct tokens, licenses, and platform-specific constraints. A burst of VPP or enterprise install failures is often an Apple Business Manager or Google side issue; steady enterprise failures can point to signing or package corruption. This use case breaks down failures by channel so mobile operations can open the right vendor ticket, roll back a bad build, or fix token drift without…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:endpoint:app:deploy\"` with `app_id`, `app_name`, `action` (`install`/`update`), `outcome`, `error_category` (`VPP`, `app_store`, `enterprise`, `mdm_push`), `device_platform`, `user_id`, `vpp_code` when available Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. 
For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:endpoint:app:deploy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:endpoint:app:deploy\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cat** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **app** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **plat** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where like(cat, \"%vpp%\") OR like(cat, \"%app%store%\") OR like(cat, \"%enterprise%\") OR like(cat, \"%push%\") OR isnotnull(vpp_c…` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **series** — here it concatenates **cat** and **plat** into one field, because `timechart` splits on a single by-field.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by series** — ideal for trending and alerting on this use case.\n• Fills null values with `fillnull`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area of failures by error bucket; top failing apps table; link from `vpp_code` to Apple’s code reference where applicable.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We track when the company’s pushed apps fail to install from the public store, the volume-license pool, or your own package, so the right support path opens and people are not stuck without the tools their job needs.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.67",
              "n": "Citrix Endpoint Management Device Certificate Expiry",
              "c": "high",
              "f": "intermediate",
              "v": "Managed devices rely on short-lived user certificates, profile-managed identities, and sometimes enterprise signing or SCEP-issued device identities. A missed renewal quietly breaks Wi-Fi, per-app data protection, and secure mail—often showing up as vague connectivity tickets. CEM and PKI can expose `not_after` and renewal attempts. This use case finds certificates inside a 30-day window, flags renewal failures, and gives compliance teams a defensible, time-bounded list of devices to retire or re-enroll before hard outages.",
              "t": "No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention.",
              "d": "`index=xd` `sourcetype=\"citrix:endpoint:cert\"` with `cert_type` (`user`, `scep`, `profile`, `signing`), `not_after`, `not_before`, `device_id`, `template_name`, `renewal_status`, `error_on_renew`; CEM or SCEP gateway logs; optional public PKI event stream Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd sourcetype=\"citrix:endpoint:cert\"\n| eval expire_epoch=strptime(coalesce(not_after, expiry_utc, \"\"), \"%Y-%m-%dT%H:%M:%S%Z\")\n| eval days_left=floor((expire_epoch-now())/86400)\n| eval renew_ok=if(match(lower(coalesce(renewal_status, \"\")), \"(ok|success|pending)\"), 1, 0)\n| where (days_left<=30 OR isnull(expire_epoch)) OR renew_ok=0 OR like(lower(coalesce(error_on_renew, \"\")), \"%fail%\")\n| sort days_left\n| table device_id, cert_type, template_name, days_left, renewal_status, error_on_renew, _time",
              "m": "Ingest a daily export of all managed certificates, or event-driven renewal logs. Standardize to UTC. Build sliding windows: critical at 7 days, warning at 30 days. Alert on any renewal error with a non-empty `error_on_renew`. Join to asset ownership to email queue owners, not the whole org. Reconcile with your PKI or SCEP service logs; dual-source if possible. If `not_after` is sometimes missing, fall back to last-known `template_name` and schedule forced re-pushes. Document emergency procedures for wide-scale root rotation.",
              "z": "Gantt or bar of devices by days to expiry; single value: count of certs under 7 days; table of failed renewals in the last 24 hours.",
              "kfp": "Staged cert rotation and renewed SCEP profiles on a device subset look like a sudden expiry cluster. Per-profile expected renewal dates in a lookup separate planned rotation from a real expiration crisis.",
              "refs": "[Certificate security in Citrix Endpoint Management (modeled overview)](https://docs.citrix.com/en-us/citrix-endpoint-management/citrix-endpoint-mdm-mam.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention..\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:endpoint:cert\"` with `cert_type` (`user`, `scep`, `profile`, `signing`), `not_after`, `not_before`, `device_id`, `template_name`, `renewal_status`, `error_on_renew`; CEM or SCEP gateway logs; optional public PKI event stream Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest a daily export of all managed certificates, or event-driven renewal logs. Standardize to UTC. Build sliding windows: critical at 7 days, warning at 30 days. Alert on any renewal error with a non-empty `error_on_renew`. Join to asset ownership to email queue owners, not the whole org. Reconcile with your PKI or SCEP service logs; dual-source if possible. If `not_after` is sometimes missing, fall back to last-known `template_name` and schedule forced re-pushes. Document emergency procedures…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:endpoint:cert\"\n| eval expire_epoch=strptime(coalesce(not_after, expiry_utc, \"\"), \"%Y-%m-%dT%H:%M:%S%Z\")\n| eval days_left=floor((expire_epoch-now())/86400)\n| eval renew_ok=if(match(lower(coalesce(renewal_status, \"\")), \"(ok|success|pending)\"), 1, 0)\n| where (days_left<=30 OR isnull(expire_epoch)) OR renew_ok=0 OR like(lower(coalesce(error_on_renew, \"\")), \"%fail%\")\n| sort days_left\n| table device_id, cert_type, template_name, days_left, renewal_status, error_on_renew, _time\n```\n\nUnderstanding this SPL\n\n**Citrix Endpoint Management Device Certificate Expiry** — Managed devices rely on short-lived user certificates, profile-managed identities, and sometimes enterprise signing or SCEP-issued device identities. A missed renewal quietly breaks Wi-Fi, per-app data protection, and secure mail—often showing up as vague connectivity tickets. CEM and PKI can expose `not_after` and renewal attempts. This use case finds certificates inside a 30-day window, flags renewal failures, and gives compliance teams a defensible, time-bounded list of…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:endpoint:cert\"` with `cert_type` (`user`, `scep`, `profile`, `signing`), `not_after`, `not_before`, `device_id`, `template_name`, `renewal_status`, `error_on_renew`; CEM or SCEP gateway logs; optional public PKI event stream Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for Citrix Endpoint Management. 
Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:endpoint:cert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:endpoint:cert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **expire_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **renew_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (days_left<=30 OR isnull(expire_epoch)) OR renew_ok=0 OR like(lower(coalesce(error_on_renew, \"\")), \"%fail%\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix Endpoint Management Device Certificate Expiry**): table device_id, cert_type, template_name, days_left, renewal_status, error_on_renew, _time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gantt or bar of devices by days to expiry; single value: count of certs under 7 days; table of failed renewals in the last 24 hours.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We list which work phones and tablets are about to lose the little digital ID cards the network relies on, and we flag when renewal already failed, so access does not fall off a cliff the day a certificate quietly expires.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Certificates"
              ],
              "e": [
                "citrix",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.68",
              "n": "Citrix Endpoint Management Remote Wipe/Lock Action Tracking",
              "c": "critical",
              "f": "intermediate",
              "v": "Remote lock and wipe are the last line when a device is lost or a user leaves under duress. Stalled, failed, or abnormally slow commands can leave data exposed longer than policy allows, while repeated failures may indicate a rooted device or network blocks. CEM can emit the MDM command lifecycle. This use case reports success rate, median latency, and long-tail timeouts by action type, and it feeds security and audit teams a durable trail of who requested each destructive action and whether it completed.",
              "t": "No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention.",
              "d": "`index=xd` `sourcetype=\"citrix:endpoint:mdm:command\"` with `action` (`wipe`, `lock`, `reset`, `unenroll`), `outcome` (`acknowledged`, `completed`, `failed`, `timeout`), `latency_ms`, `device_id`, `requester`, `incident_id`; CEM admin audit of remote actions if emitted separately to `sourcetype=\"citrix:endpoint:audit\"` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd sourcetype=\"citrix:endpoint:mdm:command\" action IN (\"wipe\",\"lock\",\"reset\",\"unenroll\")\n| eval ok=if(match(lower(coalesce(outcome, status, \"\")), \"(completed|acknowledged|success)\"), 1, 0)\n| eval late=if(tonumber(coalesce(latency_ms, 0))>120000, 1, 0)\n| where ok=0 OR late=1\n| eval action=upper(action)\n| timechart span=1h count by action, outcome\n| fillnull value=0",
              "m": "Ensure both the command result stream and a tamper-resistant admin audit of `requester` and business justification (ticket id) are present. Set RTO expectations (for example lock within two minutes on cellular). Page security operations on any `wipe` or `unenroll` that fails or exceeds latency SLO, with device last-seen time for triage. Weekly review of counts versus HR-driven terminations. Retain 13 months in line with HR and privacy counsel. Suppress test-lab device IDs. Never send full device payloads to a shared room without masking sensitive fields.",
              "z": "Gauge: success rate by action; timeline of long-running commands; table of failed devices with `requester` and `incident_id`.",
              "kfp": "HR terminations, lost-device workflows, and security-driven remote lock are legitimate high-volume events. Join HR tickets or service desk 'lost device' cases; a wipe without a ticket in the batch window is the real finding.",
              "refs": "[Device security actions (enterprise mobility — Apple/Android context)](https://docs.citrix.com/en-us/citrix-endpoint-management/citrix-endpoint-mdm-mam/endpoint-management-mdm-mam-cio.html)",
              "mitre": [
                "T1485"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention..\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:endpoint:mdm:command\"` with `action` (`wipe`, `lock`, `reset`, `unenroll`), `outcome` (`acknowledged`, `completed`, `failed`, `timeout`), `latency_ms`, `device_id`, `requester`, `incident_id`; CEM admin audit of remote actions if emitted separately to `sourcetype=\"citrix:endpoint:audit\"` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure both the command result stream and a tamper-resistant admin audit of `requester` and business justification (ticket id) are present. Set RTO expectations (for example lock within two minutes on cellular). Page security operations on any `wipe` or `unenroll` that fails or exceeds latency SLO, with device last-seen time for triage. Weekly review of counts versus HR-driven terminations. Retain 13 months in line with HR and privacy counsel. Suppress test-lab device IDs. Never send full device…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd sourcetype=\"citrix:endpoint:mdm:command\" action IN (\"wipe\",\"lock\",\"reset\",\"unenroll\")\n| eval ok=if(match(lower(coalesce(outcome, status, \"\")), \"(completed|acknowledged|success)\"), 1, 0)\n| eval late=if(tonumber(coalesce(latency_ms, 0))>120000, 1, 0)\n| where ok=0 OR late=1\n| eval action=upper(action)\n| timechart span=1h count by action, outcome\n| fillnull value=0\n```\n\nUnderstanding this SPL\n\n**Citrix Endpoint Management Remote Wipe/Lock Action Tracking** — Remote lock and wipe are the last line when a device is lost or a user leaves under duress. Stalled, failed, or abnormally slow commands can leave data exposed longer than policy allows, while repeated failures may indicate a rooted device or network blocks. CEM can emit the MDM command lifecycle. This use case reports success rate, median latency, and long-tail timeouts by action type, and it feeds security and audit teams a durable trail of who requested each destructive…\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:endpoint:mdm:command\"` with `action` (`wipe`, `lock`, `reset`, `unenroll`), `outcome` (`acknowledged`, `completed`, `failed`, `timeout`), `latency_ms`, `device_id`, `requester`, `incident_id`; CEM admin audit of remote actions if emitted separately to `sourcetype=\"citrix:endpoint:audit\"` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for Citrix Endpoint Management. 
Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:endpoint:mdm:command. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:endpoint:mdm:command\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **late** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ok=0 OR late=1` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by action, outcome** — ideal for trending and alerting on this use case.\n• Fills null values with `fillnull`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge: success rate by action; timeline of long-running commands; table of failed devices with `requester` and `incident_id`.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on endpoint Management Remote Wipe/Lock Action and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "e": [
                "citrix",
                "syslog"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.69",
              "n": "Citrix Endpoint Management Server Health",
              "c": "high",
              "f": "advanced",
              "v": "The Citrix Endpoint Management application tier sits on a JVM, a relational database, and background schedulers that push policy and app commands. Thread starvation, pool exhaustion, or a stuck job queue can surface as flapping device check-ins and mass policy drift before a simple `ping` fails. Server-side health metrics plus SSL and database utilization give a root-cause-friendly picture that complements per-device UCs. Certificate expiry on the CEM public endpoint is a classic near-miss that full-stack monitoring should never leave to an annual calendar reminder alone.",
              "t": "No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention.",
              "d": "`index=main` or `index=app` with `sourcetype=\"citrix:endpoint:server:health\"` or JMX/log-derived metrics: `jvm_thread_blocked`, `db_pool_in_use`, `db_pool_max`, `scheduler_backlog`, `queue_depth`, `ssl_cert_expiry_date`; `sourcetype=\"WinEventLog:Application\"` for Java and database errors on Windows-hosted CEM; `localhost_access_log` (application server) sourcetype if applicable; Linux `sourcetype=\"syslog\"` for appliances Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=app (sourcetype=\"citrix:endpoint:server:health\" OR sourcetype=\"citrix:cep:jmx\")\n| eval blocked=tonumber(coalesce(jvm_thread_blocked, blocked_threads, 0))\n| eval db_use=tonumber(coalesce(db_pool_in_use, db_active, 0))\n| eval db_max=tonumber(coalesce(db_pool_max, db_total, 1))\n| eval back=tonumber(coalesce(scheduler_backlog, job_queue, async_queue, 0))\n| eval cert_days=if(isnotnull(ssl_cert_expiry_date), round((strptime(ssl_cert_expiry_date,\"%Y-%m-%d\")-now())/86400,0), null())\n| eval db_pct=if(db_max>0, round(100*db_use/db_max,1), 0)\n| where blocked>50 OR back>1000 OR db_pct>90 OR (isnotnull(cert_days) AND cert_days<=30)\n| stats latest(blocked) as blocked, latest(db_pct) as db_pool_util_pct, latest(back) as queue_backlog, latest(cert_days) as ssl_cert_days_left by host, role\n| sort - db_pool_util_pct",
              "m": "Instrument each CEM node: JVM thread dumps on alert, JDBC pool via JMX, scheduler backlog from application logs, and a synthetic login or API every five minutes. Forward Windows or Linux system logs. Alert in stages: queue backlog over a static threshold, DB pool over 90 percent for ten minutes, blocked threads over 50 for two samples, and SSL cert under 30 days. Pair with database server KPIs. Document rolling patch windows and scale-out when a single node saturates. Keep an HA pair or cluster view so you alert on the worst node and the cluster average.",
              "z": "Node grid: pool percent, queue depth, blocked threads; line chart: backlog with deploy markers; cert countdown single value for public VIP.",
              "kfp": "EMM database maintenance, index jobs, and load balancer or SSL work on the CEM server tier can return slow or error responses that resemble outage. EMM maint tags on app and DB, plus a sustained 5xx rate, filter single blips.",
              "refs": "[Citrix Endpoint Management — supported topologies and sizing context](https://docs.citrix.com/en-us/citrix-endpoint-management/citrix-endpoint-mdm-mam.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention..\n• Ensure the following data sources are available: `index=main` or `index=app` with `sourcetype=\"citrix:endpoint:server:health\"` or JMX/log-derived metrics: `jvm_thread_blocked`, `db_pool_in_use`, `db_pool_max`, `scheduler_backlog`, `queue_depth`, `ssl_cert_expiry_date`; `sourcetype=\"WinEventLog:Application\"` for Java and database errors on Windows-hosted CEM; `localhost_access_log` (application server) sourcetype if applicable; Linux `sourcetype=\"syslog\"` for appliances Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstrument each CEM node: JVM thread dumps on alert, JDBC pool via JMX, scheduler backlog from application logs, and a synthetic login or API every five minutes. Forward Windows or Linux system logs. Alert in stages: queue backlog over a static threshold, DB pool over 90 percent for ten minutes, blocked threads over 50 for two samples, and SSL cert under 30 days. Pair with database server KPIs. Document rolling patch windows and scale-out when a single node saturates. Keep an HA pair or cluster …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app (sourcetype=\"citrix:endpoint:server:health\" OR sourcetype=\"citrix:cep:jmx\")\n| eval blocked=tonumber(coalesce(jvm_thread_blocked, blocked_threads, 0))\n| eval db_use=tonumber(coalesce(db_pool_in_use, db_active, 0))\n| eval db_max=tonumber(coalesce(db_pool_max, db_total, 1))\n| eval back=tonumber(coalesce(scheduler_backlog, job_queue, async_queue, 0))\n| eval cert_days=if(isnotnull(ssl_cert_expiry_date), round((strptime(ssl_cert_expiry_date,\"%Y-%m-%d\")-now())/86400,0), null())\n| eval db_pct=if(db_max>0, round(100*db_use/db_max,1), 0)\n| where blocked>50 OR back>1000 OR db_pct>90 OR (isnotnull(cert_days) AND cert_days<=30)\n| stats latest(blocked) as blocked, latest(db_pct) as db_pool_util_pct, latest(back) as queue_backlog, latest(cert_days) as ssl_cert_days_left by host, role\n| sort - db_pool_util_pct\n```\n\nUnderstanding this SPL\n\n**Citrix Endpoint Management Server Health** — The Citrix Endpoint Management application tier sits on a JVM, a relational database, and background schedulers that push policy and app commands. Thread starvation, pool exhaustion, or a stuck job queue can surface as flapping device check-ins and mass policy drift before a simple `ping` fails. Server-side health metrics plus SSL and database utilization give a root-cause-friendly picture that complements per-device UCs. 
Certificate expiry on the CEM public endpoint is a…\n\nDocumented **Data sources**: `index=main` or `index=app` with `sourcetype=\"citrix:endpoint:server:health\"` or JMX/log-derived metrics: `jvm_thread_blocked`, `db_pool_in_use`, `db_pool_max`, `scheduler_backlog`, `queue_depth`, `ssl_cert_expiry_date`; `sourcetype=\"WinEventLog:Application\"` for Java and database errors on Windows-hosted CEM; `localhost_access_log` (application server) sourcetype if applicable; Linux `sourcetype=\"syslog\"` for appliances Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for Citrix Endpoint Management. Ingest via syslog from XenMobile Server, or use the Citrix Analytics Add-on for Splunk (Splunkbase 6280) which imports CEM risk indicators from Citrix Analytics for Security. For on-premises XenMobile, forward syslog and JMX metrics via Universal Forwarder. Suggested custom sourcetypes follow the `citrix:endpoint:*` convention. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: citrix:endpoint:server:health, citrix:cep:jmx. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"citrix:endpoint:server:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **blocked** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **db_use** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **db_max** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **back** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cert_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **db_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where blocked>50 OR back>1000 OR db_pct>90 OR (isnotnull(cert_days) AND cert_days<=30)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, role** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Node grid: pool percent, queue depth, blocked threads; line chart: backlog with deploy markers; cert countdown single value for public VIP.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on endpoint Management Server Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Application_State"
              ],
              "e": [
                "citrix",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.70",
              "n": "Citrix ShareFile Storage Zone Controller Health",
              "c": "high",
              "f": "intermediate",
              "v": "ShareFile content collaboration depends on healthy Storage Zone Controllers and connectors. Monitoring zone online state, synchronization backlog, split between on-premises and cloud-hosted zones, and connector health early exposes outages, replication stalls, and hybrid path failures that block file access, uploads, and business workflows.",
              "t": "No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators.",
              "d": "`index=sharefile` `sourcetype=\"citrix:sharefile:storagezone\"` (zone state, service heartbeat, sync queue depth, connector status); optional `sourcetype=\"citrix:sharefile:connector\"` for Storage Zone Connectors; fields like `zone_id`, `zone_state`, `sync_backlog`, `hosting_mode` (on_prem|cloud), `connector_health` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=sharefile (sourcetype=\"citrix:sharefile:storagezone\" OR sourcetype=\"citrix:sharefile:connector\")\n| eval zone_ok=if(match(lower(zone_state),\"(?i)online|healthy|up\"),1,0), backlog=tonumber(coalesce(sync_backlog, queue_depth, 0)), conn_ok=if(match(lower(connector_health),\"(?i)ok|up|connected\") OR isnull(connector_health),1,0)\n| bin _time span=5m\n| stats min(zone_ok) as min_zone, max(backlog) as max_backlog, min(conn_ok) as min_connector, values(hosting_mode) as hosting by zone_id, _time\n| where min_zone=0 OR max_backlog>10000 OR min_connector=0\n| table _time, zone_id, hosting, min_zone, max_backlog, min_connector",
              "m": "Ingest Storage Zone Controller and Storage Zone Connector logs (syslog, file, or API export) with consistent timestamps and time zones. Tag each zone with `hosting_mode` to separate on-prem vs customer-managed cloud. Define backlog thresholds from your baseline; alert when a zone is not online, backlog grows beyond an agreed cap, or any connector is unhealthy. Pair with Citrix Cloud status and network path tests for the control plane if applicable.",
              "z": "Single-value: zones unhealthy count; timechart: max sync backlog by zone; table: zone_id, hosting_mode, min zone state, max backlog, connector health; pie or bar: on-prem vs cloud event volume (sanity for split reporting).",
              "kfp": "Storage zone controller OS patching, SSL updates, and LB health flaps can look like a storage outage when both nodes bounce. A planned maintenance flag on the SZC pair and a dual-node quorum check avoid false SEV-1s on a rolling upgrade.",
              "refs": "[Citrix — StorageZones Controller](https://docs.citrix.com/en-us/citrix-content-collaboration/storage-zones-controller/4-storage-zones-controllers.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators..\n• Ensure the following data sources are available: `index=sharefile` `sourcetype=\"citrix:sharefile:storagezone\"` (zone state, service heartbeat, sync queue depth, connector status); optional `sourcetype=\"citrix:sharefile:connector\"` for Storage Zone Connectors; fields like `zone_id`, `zone_state`, `sync_backlog`, `hosting_mode` (on_prem|cloud), `connector_health` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Storage Zone Controller and Storage Zone Connector logs (syslog, file, or API export) with consistent timestamps and time zones. Tag each zone with `hosting_mode` to separate on-prem vs customer-managed cloud. Define backlog thresholds from your baseline; alert when a zone is not online, backlog grows beyond an agreed cap, or any connector is unhealthy. Pair with Citrix Cloud status and network path tests for the control plane if applicable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sharefile (sourcetype=\"citrix:sharefile:storagezone\" OR sourcetype=\"citrix:sharefile:connector\")\n| eval zone_ok=if(match(lower(zone_state),\"(?i)online|healthy|up\"),1,0), backlog=tonumber(coalesce(sync_backlog, queue_depth, 0)), conn_ok=if(match(lower(connector_health),\"(?i)ok|up|connected\") OR isnull(connector_health),1,0)\n| bin _time span=5m\n| stats min(zone_ok) as min_zone, max(backlog) as max_backlog, min(conn_ok) as min_connector, values(hosting_mode) as hosting by zone_id, _time\n| where min_zone=0 OR max_backlog>10000 OR min_connector=0\n| table _time, zone_id, hosting, min_zone, max_backlog, min_connector\n```\n\nUnderstanding this SPL\n\n**Citrix ShareFile Storage Zone Controller Health** — ShareFile content collaboration depends on healthy Storage Zone Controllers and connectors. Monitoring zone online state, synchronization backlog, split between on-premises and cloud-hosted zones, and connector health early exposes outages, replication stalls, and hybrid path failures that block file access, uploads, and business workflows.\n\nDocumented **Data sources**: `index=sharefile` `sourcetype=\"citrix:sharefile:storagezone\"` (zone state, service heartbeat, sync queue depth, connector status); optional `sourcetype=\"citrix:sharefile:connector\"` for Storage Zone Connectors; fields like `zone_id`, `zone_state`, `sync_backlog`, `hosting_mode` (on_prem|cloud), `connector_health` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. 
Suggested custom sourcetype: `citrix:sharefile:audit`. Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sharefile; **sourcetype**: citrix:sharefile:storagezone, citrix:sharefile:connector. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sharefile, sourcetype=\"citrix:sharefile:storagezone\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **zone_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by zone_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where min_zone=0 OR max_backlog>10000 OR min_connector=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ShareFile Storage Zone Controller Health**): table _time, zone_id, hosting, min_zone, max_backlog, min_connector\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single-value: zones unhealthy count; timechart: max sync backlog by zone; table: zone_id, hosting_mode, min zone state, max backlog, connector health; pie or bar: on-prem vs cloud event volume (sanity for split reporting).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the boxes that host your shared files and the helpers that move them between the cloud and your own office — so you catch file-sync stalls and down zones before they stop real work.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Application_State"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.71",
              "n": "Citrix ShareFile DLP Policy Violation Tracking",
              "c": "critical",
              "f": "advanced",
              "v": "Data loss prevention in ShareFile surfaces policy hits, enforcements (block vs warn), and classification outcomes. Tracking hit volume, block and warn rates, trends that look like false positives, and file classification mismatches helps security and privacy teams prove control effectiveness, tune policies, and respond before regulated data leaves approved channels.",
              "t": "No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators.",
              "d": "`index=sharefile` `sourcetype=\"citrix:sharefile:dlp\"` or DLP/secure collaboration feed; fields `action` (block|warn|audit), `policy_id`, `policy_name`, `classification_mismatch`, `file_path`, `user`, `false_positive` (if your feed tags analyst overrides) Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=sharefile sourcetype=\"citrix:sharefile:dlp\" earliest=-24h\n| eval action=lower(coalesce(action, outcome, \"unknown\")), is_fp=if(match(lower(false_positive),\"(?i)true|yes|1\"),1,0), mismatch=if(match(lower(classification_mismatch),\"(?i)true|yes|1\"),1,0)\n| bin _time span=1h\n| stats count as hits, sum(eval(action=\"block\")) as blocks, sum(eval(action=\"warn\")) as warns, sum(is_fp) as fp_tags, sum(mismatch) as class_mismatches by _time, policy_id, user\n| eval block_rate=round(100*blocks/hits,2), warn_rate=round(100*warns/hits,2)\n| where blocks>0 OR warns>0 OR class_mismatches>0\n| table _time, policy_id, user, hits, block_rate, warn_rate, fp_tags, class_mismatches",
              "m": "Ingest DLP or ShareFile security events with stable policy identifiers. Retain long enough for compliance reporting. Create hourly rollups and weekly anomaly review for `false_positive` spikes. For mismatches, join to label or sensitivity taxonomy in a lookup. Escalate sudden block-rate drops (possible policy bypass) and sustained warn-only surges (possible business friction).",
              "z": "Stacked bar: block vs warn by policy; timechart: false-positive tagged rate; table: top users and policies by hits; line: classification mismatch count.",
              "kfp": "Legal hold exports, e-discovery, and DLP-authorized large pulls for audits look like user policy violations in volume. Require a DLP case or legal matter ID; exclude known bulk export service accounts in the business lookup.",
              "refs": "[Citrix — Data loss prevention for ShareFile](https://docs.citrix.com/en-us/citrix-content-collaboration/data-loss-prevention.html)",
              "mitre": [
                "T1567",
                "T1039"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators..\n• Ensure the following data sources are available: `index=sharefile` `sourcetype=\"citrix:sharefile:dlp\"` or DLP/secure collaboration feed; fields `action` (block|warn|audit), `policy_id`, `policy_name`, `classification_mismatch`, `file_path`, `user`, `false_positive` (if your feed tags analyst overrides) Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest DLP or ShareFile security events with stable policy identifiers. Retain long enough for compliance reporting. Create hourly rollups and weekly anomaly review for `false_positive` spikes. For mismatches, join to label or sensitivity taxonomy in a lookup. Escalate sudden block-rate drops (possible policy bypass) and sustained warn-only surges (possible business friction).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sharefile sourcetype=\"citrix:sharefile:dlp\" earliest=-24h\n| eval action=lower(coalesce(action, outcome, \"unknown\")), is_fp=if(match(lower(false_positive),\"(?i)true|yes|1\"),1,0), mismatch=if(match(lower(classification_mismatch),\"(?i)true|yes|1\"),1,0)\n| bin _time span=1h\n| stats count as hits, sum(eval(action=\"block\")) as blocks, sum(eval(action=\"warn\")) as warns, sum(is_fp) as fp_tags, sum(mismatch) as class_mismatches by _time, policy_id, user\n| eval block_rate=round(100*blocks/hits,2), warn_rate=round(100*warns/hits,2)\n| where blocks>0 OR warns>0 OR class_mismatches>0\n| table _time, policy_id, user, hits, block_rate, warn_rate, fp_tags, class_mismatches\n```\n\nUnderstanding this SPL\n\n**Citrix ShareFile DLP Policy Violation Tracking** — Data loss prevention in ShareFile surfaces policy hits, enforcements (block vs warn), and classification outcomes. Tracking hit volume, block and warn rates, trends that look like false positives, and file classification mismatches helps security and privacy teams prove control effectiveness, tune policies, and respond before regulated data leaves approved channels.\n\nDocumented **Data sources**: `index=sharefile` `sourcetype=\"citrix:sharefile:dlp\"` or DLP/secure collaboration feed; fields `action` (block|warn|audit), `policy_id`, `policy_name`, `classification_mismatch`, `file_path`, `user`, `false_positive` (if your feed tags analyst overrides) Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. 
Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sharefile; **sourcetype**: citrix:sharefile:dlp. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sharefile, sourcetype=\"citrix:sharefile:dlp\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, policy_id, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **block_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where blocks>0 OR warns>0 OR class_mismatches>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ShareFile DLP Policy Violation Tracking**): table _time, policy_id, user, hits, block_rate, warn_rate, fp_tags, class_mismatches\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar: block vs warn by policy; timechart: false-positive tagged rate; table: top users and policies by hits; line: classification mismatch count.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We count when sensitive file rules are tripped, what happened next, and when labels do not line up with what the rules expect — so you catch risky sharing and weak rules before a leak or audit surprise.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "DLP"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.72",
              "n": "Citrix ShareFile Mass Download and Data Exfiltration Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Unusual file movement through ShareFile can indicate exfiltration: large or repeated downloads, bursts of public link creation, off-hours bulk export, and access from anomalous locations. This use case focuses on high-signal mass behaviors rather than every single file open so analysts can respond quickly to theft or account abuse.",
              "t": "No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators.",
              "d": "`index=sharefile` `sourcetype=\"citrix:sharefile:audit\"` (file download, link access); optional `sourcetype=\"citrix:sharefile:api\"` for public link creation; fields `user`, `event_type`, `bytes`, `file_count`, `link_type` (public|internal), `client_ip` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=sharefile (sourcetype=\"citrix:sharefile:audit\" OR sourcetype=\"citrix:sharefile:api\") earliest=-24h\n| eval evt=lower(coalesce(event_type, action, \"\")), is_dl=if(match(evt, \"(?i)download|fetch|get\"),1,0), is_link=if(match(evt, \"(?i)create.*link|public.*link|share.*link\"),1,0), b=tonumber(bytes), hour=strftime(_time, \"%H\"), fc=tonumber(file_count)\n| eval off_hours=if(tonumber(hour)<6 OR tonumber(hour)>20,1,0)\n| eval bulk=if(is_dl=1 AND (b>200000000 OR fc>200),1,0)\n| bin _time span=1h\n| stats sum(b) as tot_bytes, sum(bulk) as bulk_events, sum(is_link) as link_creates, sum(eval(off_hours=1 AND is_dl=1)) as offh_dl by _time, user, client_ip\n| where tot_bytes>500000000 OR bulk_events>0 OR link_creates>50 OR offh_dl>30\n| table _time, user, client_ip, tot_bytes, bulk_events, link_creates, offh_dl",
              "m": "Ingest high-fidelity audit and API link events. Establish per-role baselines (sales vs finance). Tune byte and count thresholds; exclude known migration service accounts. Correlate with identity risk scores and end-point alerts. Contain with session revoke and link disable playbooks. Review privacy rules before full raw logging of filenames in regulated sectors.",
              "z": "Timechart: total bytes and link creates per hour; table: top users for bulk and off-hours; map or table: source IPs for flagged sessions (where available).",
              "kfp": "Quarterly reporting, finance consolidations, and project teams downloading large project folders in ShareFile are normal bulk downloads. Correlation with the user's business role, folder ACL, and a pre-approved DLP exception suppresses the benign exfil look.",
              "refs": "[Citrix — ShareFile audit logging overview](https://docs.citrix.com/en-us/citrix-content-collaboration/audit-trail-logs.html)",
              "mitre": [
                "T1119",
                "T1567"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators..\n• Ensure the following data sources are available: `index=sharefile` `sourcetype=\"citrix:sharefile:audit\"` (file download, link access); optional `sourcetype=\"citrix:sharefile:api\"` for public link creation; fields `user`, `event_type`, `bytes`, `file_count`, `link_type` (public|internal), `client_ip` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest high-fidelity audit and API link events. Establish per-role baselines (sales vs finance). Tune byte and count thresholds; exclude known migration service accounts. Correlate with identity risk scores and end-point alerts. Contain with session revoke and link disable playbooks. Review privacy rules before full raw logging of filenames in regulated sectors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sharefile (sourcetype=\"citrix:sharefile:audit\" OR sourcetype=\"citrix:sharefile:api\") earliest=-24h\n| eval evt=lower(coalesce(event_type, action, \"\")), is_dl=if(match(evt, \"(?i)download|fetch|get\"),1,0), is_link=if(match(evt, \"(?i)create.*link|public.*link|share.*link\"),1,0), b=tonumber(bytes), hour=strftime(_time, \"%H\"), fc=tonumber(file_count)\n| eval off_hours=if(tonumber(hour)<6 OR tonumber(hour)>20,1,0)\n| eval bulk=if(is_dl=1 AND (b>200000000 OR fc>200),1,0)\n| bin _time span=1h\n| stats sum(b) as tot_bytes, sum(bulk) as bulk_events, sum(is_link) as link_creates, sum(eval(off_hours=1 AND is_dl=1)) as offh_dl by _time, user, client_ip\n| where tot_bytes>500000000 OR bulk_events>0 OR link_creates>50 OR offh_dl>30\n| table _time, user, client_ip, tot_bytes, bulk_events, link_creates, offh_dl\n```\n\nUnderstanding this SPL\n\n**Citrix ShareFile Mass Download and Data Exfiltration Detection** — Unusual file movement through ShareFile can indicate exfiltration: large or repeated downloads, bursts of public link creation, off-hours bulk export, and access from anomalous locations. This use case focuses on high-signal mass behaviors rather than every single file open so analysts can respond quickly to theft or account abuse.\n\nDocumented **Data sources**: `index=sharefile` `sourcetype=\"citrix:sharefile:audit\"` (file download, link access); optional `sourcetype=\"citrix:sharefile:api\"` for public link creation; fields `user`, `event_type`, `bytes`, `file_count`, `link_type` (public|internal), `client_ip` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. 
Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sharefile; **sourcetype**: citrix:sharefile:audit, citrix:sharefile:api. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sharefile, sourcetype=\"citrix:sharefile:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **evt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **off_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **bulk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, user, client_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where tot_bytes>500000000 OR bulk_events>0 OR link_creates>50 OR offh_dl>30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ShareFile Mass Download and Data Exfiltration Detection**): table _time, user, client_ip, tot_bytes, bulk_events, link_creates, offh_dl\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart: total bytes and link creates per hour; table: top users for bulk and off-hours; map or table: source IPs for flagged sessions (where available).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We look for the hallmarks of someone hauling out huge piles of files, sharing links in a sudden rush, or doing it in odd off-hours — so you catch data walking out the door before the damage is done.",
              "mtype": [
                "Security"
              ],
              "a": [
                "DLP"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.73",
              "n": "Citrix ShareFile API Rate Limiting and Auth Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Integrations, automation, and line-of-business apps depend on ShareFile APIs. OAuth failures break sign-in, rate limiting signals abusive or mis-tuned clients, and job errors may leave folders out of sync. Monitoring these patterns protects both availability and security (stolen or misconfigured tokens).",
              "t": "No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators.",
              "d": "`index=sharefile` `sourcetype=\"citrix:sharefile:api\"` (REST responses); fields `http_status` (401, 403, 429), `error_code`, `client_id` or `app_name`, `rate_limit_key`, `integration_job` for sync worker failures Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=sharefile sourcetype=\"citrix:sharefile:api\" earliest=-4h\n| eval sc=tonumber(http_status), is_auth=if(sc IN (401,403),1,0), is_rl=if(sc=429 OR match(lower(error_code),\"(?i)rate|throttl|limit\"),1,0), job_fail=if(match(lower(coalesce(integration_job, \"\")),\"(?i)fail|error\") OR match(lower(error_code),\"(?i)job|sync|worker\"),1,0), client=coalesce(client_id, app_name, \"unknown\")\n| bin _time span=5m\n| stats count as reqs, sum(is_auth) as auth_fails, sum(is_rl) as rate_hits, sum(job_fail) as job_errors by _time, client\n| where auth_fails>10 OR rate_hits>0 OR job_errors>0\n| table _time, client, reqs, auth_fails, rate_hits, job_errors",
              "m": "Collect API and OAuth logs with client identity. Alert on 429 from production integrations first (fix back-off and batch size). For 401/403, spike-check against key rotation and blocked accounts. Tag integration job names in events for MTTR. Compare to synthetic login tests to separate ShareFile service issues from a single app.",
              "z": "Timechart: 429, 401, 403 by client_id; table: top clients for rate limits; single-value: failed job count in the last hour.",
              "kfp": "Scheduled RPA, sync clients, and integration workers legitimately hit API rate limits; pentest and scanner retries do too. An API client allowlist keyed to service principal and job schedule separates automation from account takeover.",
              "refs": "[Citrix — ShareFile API documentation](https://api.sharefile.com/)",
              "mitre": [
                "T1110",
                "T1190"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators..\n• Ensure the following data sources are available: `index=sharefile` `sourcetype=\"citrix:sharefile:api\"` (REST responses); fields `http_status` (401, 403, 429), `error_code`, `client_id` or `app_name`, `rate_limit_key`, `integration_job` for sync worker failures Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect API and OAuth logs with client identity. Alert on 429 from production integrations first (fix back-off and batch size). For 401/403, spike-check against key rotation and blocked accounts. Tag integration job names in events for MTTR. Compare to synthetic login tests to separate ShareFile service issues from a single app.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sharefile sourcetype=\"citrix:sharefile:api\" earliest=-4h\n| eval sc=tonumber(http_status), is_auth=if(sc IN (401,403),1,0), is_rl=if(sc=429 OR match(lower(error_code),\"(?i)rate|throttl|limit\"),1,0), job_fail=if(match(lower(coalesce(integration_job, \"\")),\"(?i)fail|error\") OR match(lower(error_code),\"(?i)job|sync|worker\"),1,0), client=coalesce(client_id, app_name, \"unknown\")\n| bin _time span=5m\n| stats count as reqs, sum(is_auth) as auth_fails, sum(is_rl) as rate_hits, sum(job_fail) as job_errors by _time, client\n| where auth_fails>10 OR rate_hits>0 OR job_errors>0\n| table _time, client, reqs, auth_fails, rate_hits, job_errors\n```\n\nUnderstanding this SPL\n\n**Citrix ShareFile API Rate Limiting and Auth Failures** — Integrations, automation, and line-of-business apps depend on ShareFile APIs. OAuth failures break sign-in, rate limiting signals abusive or mis-tuned clients, and job errors may leave folders out of sync. Monitoring these patterns protects both availability and security (stolen or misconfigured tokens).\n\nDocumented **Data sources**: `index=sharefile` `sourcetype=\"citrix:sharefile:api\"` (REST responses); fields `http_status` (401, 403, 429), `error_code`, `client_id` or `app_name`, `rate_limit_key`, `integration_job` for sync worker failures Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sharefile; **sourcetype**: citrix:sharefile:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sharefile, sourcetype=\"citrix:sharefile:api\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, client** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where auth_fails>10 OR rate_hits>0 OR job_errors>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ShareFile API Rate Limiting and Auth Failures**): table _time, client, reqs, auth_fails, rate_hits, job_errors\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart: 429, 401, 403 by client_id; table: top clients for rate limits; single-value: failed job count in the last hour.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on shareFile API Rate Limiting and Auth Failures and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability",
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.74",
              "n": "Citrix ShareFile User Activity Audit Trail",
              "c": "high",
              "f": "beginner",
              "v": "A complete audit layer for ShareFile supports investigations and compliance: who accessed, changed, or shared which content; administrative actions in zones; and time-bounded reports for internal review or external auditors. The search summarizes daily activity mix and breadth so teams can spot gaps in logging and prove retention of evidence.",
              "t": "No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators.",
              "d": "`index=sharefile` `sourcetype=\"citrix:sharefile:audit\"` (view, download, upload, share, delete, permission change); `sourcetype=\"citrix:sharefile:admin\"` (admin and zone actions); user, target, time, and outcome fields Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=sharefile (sourcetype=\"citrix:sharefile:audit\" OR sourcetype=\"citrix:sharefile:admin\") earliest=-7d\n| eval act=lower(coalesce(event_type, action, operation, \"unknown\"))\n| eval actor=if(isnull(actor_type) OR actor_type=\"\", \"user\", actor_type)\n| bin _time span=1d\n| stats count as events, dc(user) as users, values(act) as actions by _time, actor\n| table _time, actor, events, users, actions",
              "m": "Enable ShareFile audit trail export to Splunk with full coverage (user and admin). Retain per policy (often 1–7 years for regulated data). Create scheduled reports for business reviews and a drill-down form with raw events for cases. Do not over-collect PII; mask where required.",
              "z": "Table: daily event counts; pie: user vs admin share; drill to raw event list; optional PDF/CSV scheduled report for auditors.",
              "kfp": "End-of-quarter audit log exports, SIEM backfills, and helpdesk-driven password and MFA resets spike ShareFile read and auth events. Tag scheduled audit jobs and service desk break-glass accounts to avoid conflating operations with data theft.",
              "refs": "[Citrix — ShareFile audit and logging](https://docs.citrix.com/en-us/citrix-content-collaboration/audit-trail-logs.html)",
              "mitre": [
                "T1074",
                "T1039"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators..\n• Ensure the following data sources are available: `index=sharefile` `sourcetype=\"citrix:sharefile:audit\"` (view, download, upload, share, delete, permission change); `sourcetype=\"citrix:sharefile:admin\"` (admin and zone actions); user, target, time, and outcome fields Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable ShareFile audit trail export to Splunk with full coverage (user and admin). Retain per policy (often 1–7 years for regulated data). Create scheduled reports for business reviews and a drill-down form with raw events for cases. Do not over-collect PII; mask where required.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sharefile (sourcetype=\"citrix:sharefile:audit\" OR sourcetype=\"citrix:sharefile:admin\") earliest=-7d\n| eval act=lower(coalesce(event_type, action, operation, \"unknown\"))\n| eval actor=if(isnull(actor_type) OR actor_type=\"\", \"user\", actor_type)\n| bin _time span=1d\n| stats count as events, dc(user) as users, values(act) as actions by _time, actor\n| table _time, actor, events, users, actions\n```\n\nUnderstanding this SPL\n\n**Citrix ShareFile User Activity Audit Trail** — A complete audit layer for ShareFile supports investigations and compliance: who accessed, changed, or shared which content; administrative actions in zones; and time-bounded reports for internal review or external auditors. The search summarizes daily activity mix and breadth so teams can spot gaps in logging and prove retention of evidence.\n\nDocumented **Data sources**: `index=sharefile` `sourcetype=\"citrix:sharefile:audit\"` (view, download, upload, share, delete, permission change); `sourcetype=\"citrix:sharefile:admin\"` (admin and zone actions); user, target, time, and outcome fields Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for ShareFile. Ingest audit trail via ShareFile REST API (https://api.sharefile.com) to HEC with a custom scripted input or Splunk Add-on Builder. Suggested custom sourcetype: `citrix:sharefile:audit`. Optionally correlate with Citrix Analytics Add-on for Splunk (Splunkbase 6280) for aggregated risk indicators. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sharefile; **sourcetype**: citrix:sharefile:audit, citrix:sharefile:admin. 
Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sharefile, sourcetype=\"citrix:sharefile:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **act** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **actor** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, actor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Citrix ShareFile User Activity Audit Trail**): table _time, actor, events, users, actions\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table: daily event counts; pie: user vs admin share; drill to raw event list; optional PDF/CSV scheduled report for auditors.",
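Where the scheduled auditor export must not expose raw usernames, a masking pass can sit between the base search and the report. A minimal sketch, assuming the suggested custom `user` field; the salt literal and the `sha256` masking choice are illustrative, not part of the documented feed:

```spl
index=sharefile (sourcetype="citrix:sharefile:audit" OR sourcetype="citrix:sharefile:admin") earliest=-7d
| eval user_masked=substr(sha256("replace-with-your-salt".user), 1, 12)
| bin _time span=1d
| stats count as events, dc(user_masked) as users by _time, sourcetype
| table _time, sourcetype, events, users
```

If analysts share the app, keep the salt in a private lookup or macro rather than inline in the saved search.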
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep a clear diary of who touched shared files and who ran the controls behind them — so you can show honest answers when someone asks what happened, or when a regulator asks the same thing.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.75",
              "n": "End-to-End Citrix Session Launch Time",
              "c": "critical",
              "f": "advanced",
              "v": "End-to-end session launch is the user-perceived time from initial click in Workspace through brokering, host start, logon, and HDX connect until a usable session is ready. Splunking all phases in one time series exposes whether delays sit in the broker, the hypervisor, the profile, the identity stack, or the client—so teams do not guess where to invest tuning effort.",
              "t": "uberAgent UXM (Splunkbase 1448) for phase insight; Template for Citrix XenDesktop 7 (TA-XD7-Broker) and Citrix Monitor Service OData; optional ITSI for service aggregation",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` (`event_type=SessionLogon` with `logon_duration_ms` and sub-phase fields); `sourcetype=\"uberAgent:Logon:LogonDetail\"` for VDA phase breakdown; `sourcetype=\"citrix:hdx:connect\"` for client-to-session handshake timing; Director OData exports if used Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd (sourcetype=\"citrix:broker:events\" event_type=\"SessionLogon\" OR sourcetype=\"uberAgent:Logon:LogonDetail\" OR sourcetype=\"citrix:hdx:connect\") earliest=-4h\n| eval total_ms=tonumber(coalesce(logon_duration_ms, total_logon_ms, 0)), phase=coalesce(phase, \"e2e\")\n| bin _time span=15m\n| stats median(total_ms) as p50, perc95(total_ms) as p95, count as n, values(phase) as phases by _time, delivery_group\n| where p95>90000\n| table _time, delivery_group, p50, p95, n, phases",
              "m": "Prefer uberAgent for automatic phase split on the VDA; supplement with broker events for the brokering and VM start portions. Clock-sync all tiers. Set SLOs per delivery group. Create drill-down dashboards from the same search as phase-specific panels. For cloud services, add Citrix DaaS connector latency where exposed in logs. Pair with a synthetic transaction from a test account for continuous proof.",
              "z": "Stacked time by phase, Sankey of phase share for p95, single value SLO, compare regions or delivery groups side by side.",
              "kfp": "Image rollout weeks and semester starts push session launch time for the whole user base at once. A same-day-of-week rolling median and a Studio publish window exclusion reduce Monday-morning false positives on pure latency.",
              "refs": "[uberAgent — Logon and session performance](https://splunkbase.splunk.com/app/1448), [Citrix — Session launch and logon process](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/logon-processes.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448) for phase insight; Template for Citrix XenDesktop 7 (TA-XD7-Broker) and Citrix Monitor Service OData; optional ITSI for service aggregation.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` (`event_type=SessionLogon` with `logon_duration_ms` and sub-phase fields); `sourcetype=\"uberAgent:Logon:LogonDetail\"` for VDA phase breakdown; `sourcetype=\"citrix:hdx:connect\"` for client-to-session handshake timing; Director OData exports if used Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPrefer uberAgent for automatic phase split on the VDA; supplement with broker events for the brokering and VM start portions. Clock-sync all tiers. Set SLOs per delivery group. Create drill-down dashboards from the same search as phase-specific panels. For cloud services, add Citrix DaaS connector latency where exposed in logs. Pair with a synthetic transaction from a test account for continuous proof.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:broker:events\" event_type=\"SessionLogon\" OR sourcetype=\"uberAgent:Logon:LogonDetail\" OR sourcetype=\"citrix:hdx:connect\") earliest=-4h\n| eval total_ms=tonumber(coalesce(logon_duration_ms, total_logon_ms, 0)), phase=coalesce(phase, \"e2e\")\n| bin _time span=15m\n| stats median(total_ms) as p50, perc95(total_ms) as p95, count as n, values(phase) as phases by _time, delivery_group\n| where p95>90000\n| table _time, delivery_group, p50, p95, n, phases\n```\n\nUnderstanding this SPL\n\n**End-to-End Citrix Session Launch Time** — End-to-end session launch is the user-perceived time from initial click in Workspace through brokering, host start, logon, and HDX connect until a usable session is ready. Splunking all phases in one time series exposes whether delays sit in the broker, the hypervisor, the profile, the identity stack, or the client—so teams do not guess where to invest tuning effort.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` (`event_type=SessionLogon` with `logon_duration_ms` and sub-phase fields); `sourcetype=\"uberAgent:Logon:LogonDetail\"` for VDA phase breakdown; `sourcetype=\"citrix:hdx:connect\"` for client-to-session handshake timing; Director OData exports if used Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448) for phase insight; Template for Citrix XenDesktop 7 (TA-XD7-Broker) and Citrix Monitor Service OData; optional ITSI for service aggregation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events, uberAgent:Logon:LogonDetail, citrix:hdx:connect. 
Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **total_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, delivery_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95>90000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **End-to-End Citrix Session Launch Time**): table _time, delivery_group, p50, p95, n, phases\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked time by phase, Sankey of phase share for p95, single value SLO, compare regions or delivery groups side by side.",
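Once the p95 alert fires, the natural drill-down is the same feed re-plotted per phase, so the dominant phase is visible at a glance. A minimal sketch, assuming the `phase` and logon-duration field names suggested above; the 24-hour window is arbitrary:

```spl
index=xd (sourcetype="citrix:broker:events" event_type="SessionLogon" OR sourcetype="uberAgent:Logon:LogonDetail") earliest=-24h
| eval total_ms=tonumber(coalesce(logon_duration_ms, total_logon_ms)), phase=coalesce(phase, "e2e")
| timechart span=15m median(total_ms) by phase
```

Rendered as a stacked area chart, this is the "stacked time by phase" panel the visualization guidance describes.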
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We time the whole path from the moment you click to when the remote desktop is truly ready — so the right team fixes the real slow step instead of everyone blaming the network at once.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "e": [
                "citrix",
                "itsi"
              ],
              "em": [
                "citrix_cvad",
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.76",
              "n": "Citrix Client Ecosystem and Platform Distribution",
              "c": "medium",
              "f": "beginner",
              "v": "Supported, patched clients are a common compliance and support requirement. A live distribution of Workspace app versions, client operating systems, thin-client firmware, and device classes gives IT and security teams a single place to see drift, plan upgrades, and retire unsupported platforms before they become an audit finding or a break-fix incident.",
              "t": "Citrix Cloud / CVAD session metadata export, Syslog or API from supported thin clients, Template for Citrix add-ons in use at your site",
              "d": "`index=xd` `sourcetype=\"citrix:workspace:client\"` (client build, device OS, channel); `sourcetype=\"citrix:hdx:connect\"` (endpoint type, firmware where available); `sourcetype=\"citrix:broker:session\"` (Workspace version from session metadata) when exposed Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd (sourcetype=\"citrix:workspace:client\" OR sourcetype=\"citrix:hdx:connect\" OR sourcetype=\"citrix:broker:session\") earliest=-7d\n| eval os=coalesce(client_os, device_os, platform, \"unknown\"), ver=coalesce(workspace_version, app_version, client_version, \"unknown\"), dev=coalesce(device_type, endpoint_type, \"unknown\")\n| bin _time span=1d\n| stats count as sessions by _time, os, ver, dev\n| sort -sessions\n| head 200",
              "m": "Pull version fields on every new session or daily heartbeat, depending on the feed. Add lookups for LTS/allowed builds. Schedule monthly compliance PDF or CSV. Partner with end-user computing to nudge or block at the gateway for builds below a floor. For BYOD, show OS mix separately from corporate-managed endpoints.",
              "z": "Pie: OS; bar: Workspace app version; treemap: device class; line: unsupported share over time after campaigns.",
              "kfp": "Kiosks, BYOD, contractors, and field forces skew the client OS and form-factor mix. Baseline by OU, region, and device class; a move away from a single internal gold image is often expected diversity, not client sprawl out of control.",
              "refs": "[Citrix Workspace app — Lifecycle milestones](https://docs.citrix.com/en-us/citrix-workspace-app-for-windows/technical-overview-lifecycle-milestones.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Cloud / CVAD session metadata export, Syslog or API from supported thin clients, Template for Citrix add-ons in use at your site.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:workspace:client\"` (client build, device OS, channel); `sourcetype=\"citrix:hdx:connect\"` (endpoint type, firmware where available); `sourcetype=\"citrix:broker:session\"` (Workspace version from session metadata) when exposed Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPull version fields on every new session or daily heartbeat, depending on the feed. Add lookups for LTS/allowed builds. Schedule monthly compliance PDF or CSV. Partner with end-user computing to nudge or block at the gateway for builds below a floor. For BYOD, show OS mix separately from corporate-managed endpoints.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:workspace:client\" OR sourcetype=\"citrix:hdx:connect\" OR sourcetype=\"citrix:broker:session\") earliest=-7d\n| eval os=coalesce(client_os, device_os, platform, \"unknown\"), ver=coalesce(workspace_version, app_version, client_version, \"unknown\"), dev=coalesce(device_type, endpoint_type, \"unknown\")\n| bin _time span=1d\n| stats count as sessions by _time, os, ver, dev\n| sort -sessions\n| head 200\n```\n\nUnderstanding this SPL\n\n**Citrix Client Ecosystem and Platform Distribution** — Supported, patched clients are a common compliance and support requirement. A live distribution of Workspace app versions, client operating systems, thin-client firmware, and device classes gives IT and security teams a single place to see drift, plan upgrades, and retire unsupported platforms before they become an audit finding or a break-fix incident.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:workspace:client\"` (client build, device OS, channel); `sourcetype=\"citrix:hdx:connect\"` (endpoint type, firmware where available); `sourcetype=\"citrix:broker:session\"` (Workspace version from session metadata) when exposed Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): Citrix Cloud / CVAD session metadata export, Syslog or API from supported thin clients, Template for Citrix add-ons in use at your site. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:workspace:client, citrix:hdx:connect, citrix:broker:session. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:workspace:client\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **os** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, os, ver, dev** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie: OS; bar: Workspace app version; treemap: device class; line: unsupported share over time after campaigns.",
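Step 1 suggests a lookup of LTS/allowed builds; one way to wire that in is a flagging search that feeds the "unsupported share over time" panel. A sketch under stated assumptions — the lookup definition `citrix_allowed_builds` and its `ver`/`supported` columns are hypothetical names you would create yourself:

```spl
index=xd (sourcetype="citrix:workspace:client" OR sourcetype="citrix:broker:session") earliest=-7d
| eval ver=coalesce(workspace_version, app_version, client_version, "unknown")
| lookup citrix_allowed_builds ver OUTPUTNEW supported
| fillnull value="unknown" supported
| timechart span=1d count by supported
```

Builds absent from the lookup surface as `unknown` rather than silently passing, which keeps gaps in the allow-list visible.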
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We list who uses which gizmo and which version of the little program that opens the remote office — so you are not caught with old, out-of-support gear when rules or a security scare demand everyone update.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Endpoint"
              ],
              "e": [
                "citrix",
                "syslog"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.77",
              "n": "Citrix Per-Application Perceived Performance (Startup vs Hang vs Network)",
              "c": "high",
              "f": "advanced",
              "v": "Not all “Citrix is slow” tickets are a network problem. Perceived slowness may be slow application startup, long UI busy states, or real ICA network delay. Combining process startup and hang signals from the VDA with network metrics and broker-reported app ready time splits accountability between packaging, the app itself, the profile, and the path between user and host.",
              "t": "uberAgent UXM (Splunkbase 1448) with Citrix templates; optional Citrix Monitor / Director export for the same fields",
              "d": "uberAgent: `sourcetype=\"uberAgent:Process:ProcessStartup\"` and `sourcetype=\"uberAgent:Application:Error\"` (if configured); `sourcetype=\"uberAgent:Network:Performance\"` (latency, loss); `sourcetype=\"citrix:broker:app_usage\"` (app title, `launch_to_ready_ms`); or Director OData for ICA RTT and application load time",
              "q": "index=xd (sourcetype=\"uberAgent:Process:ProcessStartup\" OR sourcetype=\"uberAgent:Network:Performance\" OR sourcetype=\"citrix:broker:app_usage\") earliest=-4h\n| eval app=coalesce(app_name, process_name, title, \"unknown\"), start_ms=tonumber(startup_ms), ica=tonumber(ica_rtt_ms), hang=if(match(_raw, \"(?i)not.?(responding)\"),1,0)\n| bin _time span=15m\n| stats median(start_ms) as med_start, median(ica) as med_ica, sum(hang) as hang_ev by _time, app, host\n| where med_start>10000 OR med_ica>100 OR hang_ev>0\n| table _time, app, host, med_start, med_ica, hang_ev",
              "m": "Standardize on one app name key (avoid publisher vs start-menu title drift). In uberAgent, enable process and network packs for gold images only first. If ICA RTT is missing in uberAgent, add broker or gateway RTT. Build three small alerts: p95 startup, p95 RTT, and not-responding process count, each routed to a different team owner.",
              "z": "Small multiples: one row per app with startup, hang count, and RTT; table: top hosts driving bad p95; overlay change markers on image or app version.",
              "kfp": "Wide-area brownouts, VPN or SD-WAN issues, and WiFi in one building can make many apps look 'network slow' or 'hung' at once. A site or path health overlay and delivery group before/after a thin-client patch differentiate network from a single bad app package.",
              "refs": "[uberAgent — Process and network metrics](https://splunkbase.splunk.com/app/1448), [Citrix — HDX and session performance](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/hdx-adaptive-technologies.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: uberAgent UXM (Splunkbase 1448) with Citrix templates; optional Citrix Monitor / Director export for the same fields.\n• Ensure the following data sources are available: uberAgent: `sourcetype=\"uberAgent:Process:ProcessStartup\"` and `sourcetype=\"uberAgent:Application:Error\"` (if configured); `sourcetype=\"uberAgent:Network:Performance\"` (latency, loss); `sourcetype=\"citrix:broker:app_usage\"` (app title, `launch_to_ready_ms`); or Director OData for ICA RTT and application load time.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStandardize on one app name key (avoid publisher vs start-menu title drift). In uberAgent, enable process and network packs for gold images only first. If ICA RTT is missing in uberAgent, add broker or gateway RTT. Build three small alerts: p95 startup, p95 RTT, and not-responding process count, each routed to a different team owner.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"uberAgent:Process:ProcessStartup\" OR sourcetype=\"uberAgent:Network:Performance\" OR sourcetype=\"citrix:broker:app_usage\") earliest=-4h\n| eval app=coalesce(app_name, process_name, title, \"unknown\"), start_ms=tonumber(startup_ms), ica=tonumber(ica_rtt_ms), hang=if(match(_raw, \"(?i)not.?(responding)\"),1,0)\n| bin _time span=15m\n| stats median(start_ms) as med_start, median(ica) as med_ica, sum(hang) as hang_ev by _time, app, host\n| where med_start>10000 OR med_ica>100 OR hang_ev>0\n| table _time, app, host, med_start, med_ica, hang_ev\n```\n\nUnderstanding this SPL\n\n**Citrix Per-Application Perceived Performance (Startup vs Hang vs Network)** — Not all “Citrix is slow” tickets are a network problem. Perceived slowness may be slow application startup, long UI busy states, or real ICA network delay. Combining process startup and hang signals from the VDA with network metrics and broker-reported app ready time splits accountability between packaging, the app itself, the profile, and the path between user and host.\n\nDocumented **Data sources**: uberAgent: `sourcetype=\"uberAgent:Process:ProcessStartup\"` and `sourcetype=\"uberAgent:Application:Error\"` (if configured); `sourcetype=\"uberAgent:Network:Performance\"` (latency, loss); `sourcetype=\"citrix:broker:app_usage\"` (app title, `launch_to_ready_ms`); or Director OData for ICA RTT and application load time. **App/TA** (typical add-on context): uberAgent UXM (Splunkbase 1448) with Citrix templates; optional Citrix Monitor / Director export for the same fields. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: uberAgent:Process:ProcessStartup, uberAgent:Network:Performance, citrix:broker:app_usage. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"uberAgent:Process:ProcessStartup\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **app** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, app, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where med_start>10000 OR med_ica>100 OR hang_ev>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Per-Application Perceived Performance (Startup vs Hang vs Network)**): table _time, app, host, med_start, med_ica, hang_ev\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Small multiples: one row per app with startup, hang count, and RTT; table: top hosts driving bad p95; overlay change markers on image or app version.",
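Of the three alerts Step 1 proposes, the p95 startup one is the simplest to carve out of the combined search. A minimal sketch, assuming the `startup_ms` field name used above; the 10-second threshold and the 20-event minimum sample are illustrative guards, not product defaults:

```spl
index=xd sourcetype="uberAgent:Process:ProcessStartup" earliest=-60m
| eval app=coalesce(app_name, process_name, "unknown"), start_ms=tonumber(startup_ms)
| stats perc95(start_ms) as p95_start_ms, count as samples by app
| where samples>=20 AND p95_start_ms>10000
| sort -p95_start_ms
```

The minimum-sample guard keeps a single slow launch of a rarely used app from paging the packaging team.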
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We break down whether the pain is a slow start, a frozen window, or a choppy line to the data center — so the app owner, the desktop team, and the line folks stop pointing fingers for the same ticket.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_uberagent"
              ],
              "sapp": [
                {
                  "name": "uberAgent UXM — Digital Employee Experience",
                  "id": 1448,
                  "url": "https://splunkbase.splunk.com/app/1448",
                  "desc": "Citrix uberAgent endpoint monitoring — session performance, logon analysis, application health, browser metrics, and security analytics for Citrix CVAD and Windows",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.78",
              "n": "Citrix Session Recording Pipeline and Storage Health",
              "c": "high",
              "f": "intermediate",
              "v": "Session recording is often a compliance and insider-risk control. The recording service, search index, and long-term file storage must be healthy, searchable within agreed latency, and large enough to retain evidence. Failures in any tier create a gap where activity is not provable even though policy requires recording. Monitoring capacity and playback availability closes that gap operationally and for audits.",
              "t": "No official Splunk TA for Citrix Session Recording. Ingest via Windows Event Logs from the Session Recording Server (Splunk Add-on for Microsoft Windows), IIS logs from the SR web player, and optionally SR database queries via Splunk DB Connect.",
              "d": "`index=xd` `sourcetype=\"citrix:session:recording:server\"` (service up, IIS app pool, admin API), `sourcetype=\"citrix:session:recording:storage\"` (free space, file age), `sourcetype=\"citrix:session:recording:search\"` (indexing lag, playback requests); Windows performance or `sourcetype=WinHostMon` for disk Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd (sourcetype=\"citrix:session:recording:server\" OR sourcetype=\"citrix:session:recording:storage\" OR sourcetype=\"citrix:session:recording:search\") earliest=-24h\n| eval g=tonumber(free_gb), low_disk=if((isnotnull(g) AND g<10) OR match(_raw, \"(?i)low.?space|disk.?full\"),1,0), lag_sec=tonumber(index_lag_sec), down=if(match(_raw, \"(?i)service.?(not.?(start|run)|down|stop|fail)\"),1,0)\n| bin _time span=5m\n| stats max(low_disk) as risk_disk, max(lag_sec) as max_lag, max(down) as down_ev by _time, host\n| where risk_disk=1 OR max_lag>300 OR down_ev=1\n| table _time, host, risk_disk, max_lag, down_ev",
              "m": "Separate alerts: infrastructure (service down, disk, SQL), pipeline lag (ingest to searchable), and product errors on playback. Plan retention tiering: hot, warm, and archive. Test restore and playback quarterly. If storage is object-backed, add bucket health and cost monitors outside Splunk and link the dashboard here.",
              "z": "Gauges: free space and index lag; timeline: down events; table: last successful backup or archive job per site if logged.",
              "kfp": "Replay storage maintenance, index rebuild, NFS or SMB blips, and backlog drain after a recording server patch pause ingestion. Align with storage and CIFS tickets; a growing backlog with no matching maintenance is the sustained failure case.",
              "refs": "[Citrix — Session Recording architecture and storage](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/session-recording.html)",
              "mitre": [
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for Citrix Session Recording. Ingest via Windows Event Logs from the Session Recording Server (Splunk Add-on for Microsoft Windows), IIS logs from the SR web player, and optionally SR database queries via Splunk DB Connect..\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:session:recording:server\"` (service up, IIS app pool, admin API), `sourcetype=\"citrix:session:recording:storage\"` (free space, file age), `sourcetype=\"citrix:session:recording:search\"` (indexing lag, playback requests); Windows performance or `sourcetype=WinHostMon` for disk Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSeparate alerts: infrastructure (service down, disk, SQL), pipeline lag (ingest to searchable), and product errors on playback. Plan retention tiering: hot, warm, and archive. Test restore and playback quarterly. If storage is object-backed, add bucket health and cost monitors outside Splunk and link the dashboard here.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:session:recording:server\" OR sourcetype=\"citrix:session:recording:storage\" OR sourcetype=\"citrix:session:recording:search\") earliest=-24h\n| eval g=tonumber(free_gb), low_disk=if((isnotnull(g) AND g<10) OR match(_raw, \"(?i)low.?space|disk.?full\"),1,0), lag_sec=tonumber(index_lag_sec), down=if(match(_raw, \"(?i)service.?(not.?(start|run)|down|stop|fail)\"),1,0)\n| bin _time span=5m\n| stats max(low_disk) as risk_disk, max(lag_sec) as max_lag, max(down) as down_ev by _time, host\n| where risk_disk=1 OR max_lag>300 OR down_ev=1\n| table _time, host, risk_disk, max_lag, down_ev\n```\n\nUnderstanding this SPL\n\n**Citrix Session Recording Pipeline and Storage Health** — Session recording is often a compliance and insider-risk control. The recording service, search index, and long-term file storage must be healthy, searchable within agreed latency, and large enough to retain evidence. Failures in any tier create a gap where activity is not provable even though policy requires recording. Monitoring capacity and playback availability closes that gap operationally and for audits.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:session:recording:server\"` (service up, IIS app pool, admin API), `sourcetype=\"citrix:session:recording:storage\"` (free space, file age), `sourcetype=\"citrix:session:recording:search\"` (indexing lag, playback requests); Windows performance or `sourcetype=WinHostMon` for disk Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for Citrix Session Recording. Ingest via Windows Event Logs from the Session Recording Server (Splunk Add-on for Microsoft Windows), IIS logs from the SR web player, and optionally SR database queries via Splunk DB Connect. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:session:recording:server, citrix:session:recording:storage, citrix:session:recording:search. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:session:recording:server\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **g** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where risk_disk=1 OR max_lag>300 OR down_ev=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Session Recording Pipeline and Storage Health**): table _time, host, risk_disk, max_lag, down_ev\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauges: free space and index lag; timeline: down events; table: last successful backup or archive job per site if logged.",
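The 10 GB floor in the base search only fires once storage is already critical; a companion trend view makes the approach visible days earlier. A minimal sketch, assuming the same suggested `free_gb` field; the 14-day window and daily granularity are arbitrary choices for the gauge panel:

```spl
index=xd sourcetype="citrix:session:recording:storage" earliest=-14d
| eval free_gb=tonumber(free_gb)
| timechart span=1d min(free_gb) as min_free_gb by host
```

A steadily falling per-host line flags the recording server to retier or expand before the low-disk alert ever triggers.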
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on session Recording Pipeline and Storage Health and raise the alarm before it drags down real work or real outages start.",
              "mtype": [
                "Availability",
                "Compliance",
                "Security"
              ],
              "a": [
                "Network_Sessions"
              ],
              "e": [
                "citrix",
                "db_connect",
                "iis"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "2.6.79",
              "n": "Citrix Secure Private Access (ZTNA) Session Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Zero-trust access to private web and TCP apps should enforce policy, give visibility into application categories, and still feel responsive. Monitoring successful versus blocked sessions, connector path health, and round-trip time highlights misconfiguration, over-broad or over-tight rules, and performance issues on browser-based and agent-based paths alike.",
              "t": "Citrix Analytics Add-on for Splunk (Splunkbase 6280) imports Secure Private Access risk indicators from Citrix Analytics for Security.",
              "d": "`index=ztna` or `index=cloud` with `sourcetype=\"citrix:ztna:session\"` (user, app, policy hit, `tcp|tls` outcome), `sourcetype=\"citrix:ztna:access\"` (web and SaaS via browser, category tags), `sourcetype=\"citrix:ztna:connector\"` (on-prem app reachability); fields `app_category`, `result`, `rtt_ms` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=ztna (sourcetype=\"citrix:ztna:session\" OR sourcetype=\"citrix:ztna:access\" OR sourcetype=\"citrix:ztna:connector\") earliest=-4h\n| eval ok=if(match(lower(result),\"(?i)allow|success|established|up\"),1,0), rtt=tonumber(rtt_ms), cat=coalesce(app_category, category, \"uncategorized\"), bfail=if(match(lower(result),\"(?i)block|deny|fail|down|timeout\"),1,0)\n| bin _time span=5m\n| stats count as n, sum(ok) as okc, sum(bfail) as blks, median(rtt) as medrtt, values(cat) as cats by _time, user, app\n| where blks>0 OR (isnotnull(medrtt) AND medrtt>250)\n| table _time, user, app, n, okc, blks, medrtt, cats",
              "m": "Ingest the cloud service feed in near real time. Map internal app names to a category lookup for business-friendly breakdowns. Alert on block spikes, connector-down patterns, and sustained high RTT by region. Pair with traditional gateway logs during migration. Document split between legacy full tunnel and this access path for the same app families.",
              "z": "Sankey: user to app to outcome; timechart: block rate; map or bar: by region; table: high RTT with category.",
              "kfp": "Onboarding new private ZTNA apps, per-app policy cutovers, and pilot users shift SPA session baselines for days. Cohort- or OU-based baselines and a policy version tag on the alert keep expected ramp from looking like a stealth breach pattern.",
              "refs": "[Citrix — Secure Private Access (ZTNA) overview](https://docs.citrix.com/en-us/citrix-secure-private-access/)",
              "mitre": [
                "T1078",
                "T1021"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Analytics Add-on for Splunk (Splunkbase 6280) imports Secure Private Access risk indicators from Citrix Analytics for Security..\n• Ensure the following data sources are available: `index=ztna` or `index=cloud` with `sourcetype=\"citrix:ztna:session\"` (user, app, policy hit, `tcp|tls` outcome), `sourcetype=\"citrix:ztna:access\"` (web and SaaS via browser, category tags), `sourcetype=\"citrix:ztna:connector\"` (on-prem app reachability); fields `app_category`, `result`, `rtt_ms` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest the cloud service feed in near real time. Map internal app names to a category lookup for business-friendly breakdowns. Alert on block spikes, connector-down patterns, and sustained high RTT by region. Pair with traditional gateway logs during migration. Document split between legacy full tunnel and this access path for the same app families.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ztna (sourcetype=\"citrix:ztna:session\" OR sourcetype=\"citrix:ztna:access\" OR sourcetype=\"citrix:ztna:connector\") earliest=-4h\n| eval ok=if(match(lower(result),\"(?i)allow|success|established|up\"),1,0), rtt=tonumber(rtt_ms), cat=coalesce(app_category, category, \"uncategorized\"), bfail=if(match(lower(result),\"(?i)block|deny|fail|down|timeout\"),1,0)\n| bin _time span=5m\n| stats count as n, sum(ok) as okc, sum(bfail) as blks, median(rtt) as medrtt, values(cat) as cats by _time, user, app\n| where blks>0 OR (isnotnull(medrtt) AND medrtt>250)\n| table _time, user, app, n, okc, blks, medrtt, cats\n```\n\nUnderstanding this SPL\n\n**Citrix Secure Private Access (ZTNA) Session Monitoring** — Zero-trust access to private web and TCP apps should enforce policy, give visibility into application categories, and still feel responsive. Monitoring successful versus blocked sessions, connector path health, and round-trip time highlights misconfiguration, over-broad or over-tight rules, and performance issues on browser-based and agent-based paths alike.\n\nDocumented **Data sources**: `index=ztna` or `index=cloud` with `sourcetype=\"citrix:ztna:session\"` (user, app, policy hit, `tcp|tls` outcome), `sourcetype=\"citrix:ztna:access\"` (web and SaaS via browser, category tags), `sourcetype=\"citrix:ztna:connector\"` (on-prem app reachability); fields `app_category`, `result`, `rtt_ms` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): Citrix Analytics Add-on for Splunk (Splunkbase 6280) imports Secure Private Access risk indicators from Citrix Analytics for Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ztna; **sourcetype**: citrix:ztna:session, citrix:ztna:access, citrix:ztna:connector. 
Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ztna, sourcetype=\"citrix:ztna:session\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, user, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where blks>0 OR (isnotnull(medrtt) AND medrtt>250)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Secure Private Access (ZTNA) Session Monitoring**): table _time, user, app, n, okc, blks, medrtt, cats\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey: user to app to outcome; timechart: block rate; map or bar: by region; table: high RTT with category.",
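Block spikes show up in the search above, but a connector that stops reporting entirely produces no rows to alert on; an absence check covers that case. A minimal sketch — the `connector` field name is an assumption in line with the suggested custom ingestion, and the 15-minute silence threshold is arbitrary:

```spl
index=ztna sourcetype="citrix:ztna:connector" earliest=-24h
| stats latest(_time) as last_seen by connector
| eval minutes_silent=round((now()-last_seen)/60, 1)
| where minutes_silent>15
| sort -minutes_silent
```

Note the lookback bounds detection: a connector silent longer than 24 hours vanishes from the results, so pair this with a static inventory lookup if you need to catch long-dead connectors too.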
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We look at the modern “front door to apps” the same way you would a guarded lobby — who got in, what they opened, and whether the round trip was snappy or stuck — so policy holds up without people feeling the new walls.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Network_Sessions"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 79,
            "none": 0
          }
        }
      ],
      "i": 2,
      "n": "Virtualization",
      "src": "cat-02-virtualization.md"
    },
    {
      "s": [
        {
          "i": "3.1",
          "n": "Docker",
          "u": [
            {
              "i": "3.1.1",
              "n": "Container Crash Loops",
              "c": "critical",
              "f": "intermediate",
              "v": "Containers restarting repeatedly indicate application bugs, misconfiguration, or dependency failures. Crash loops consume resources and never reach healthy state.",
              "t": "Splunk Connect for Docker, Docker events via syslog",
              "d": "`sourcetype=docker:events`, Docker daemon logs",
              "q": "index=containers sourcetype=\"docker:events\" action=\"die\"\n| eval exit_code=exitCode\n| where exit_code != \"0\"\n| stats count as crashes by container_name, image, exit_code\n| where crashes > 3\n| sort -crashes",
              "m": "Install Splunk Connect for Docker or configure Docker logging driver to forward to Splunk HEC. Collect Docker events via `docker events --format '{{json .}}'`. Alert when a container restarts >3 times in 15 minutes.",
              "z": "Table (container, image, crashes, exit code), Bar chart by container, Timeline.",
              "kfp": "Short-lived init containers defined in Compose to run database migrations often exit non-zero on the first attempt by design while a lock waits, then succeed on retry.\nBatch sidecars that pull work from a queue may exit code one when the queue is empty if developers coded aggressive failure modes.\nPreStop hooks that run docker stop --time=5 against lazy apps sometimes surface as sigterm-class exits during rolling updates even though traffic already drained.\nDocker-in-Docker CI jobs intentionally kill helper containers with signal nine to simulate failure injection.\nImage pulls during a rolling restart can overlap with on-failure policies so transient layer errors emit die events before the next pull succeeds.\nOneshot Compose command patterns such as printf diagnostics exit non-zero to signal skip conditions in automation.\nA/B smoke tests deliberately exit non-zero on the canary branch while the stable branch keeps serving.\nBlue-green scripts terminate the old task with a forced kill that looks like oom_kill class if misread without deploy tags.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Docker, Docker events via syslog.\n• Ensure the following data sources are available: `sourcetype=docker:events`, Docker daemon logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall Splunk Connect for Docker or configure Docker logging driver to forward to Splunk HEC. Collect Docker events via `docker events --format '{{json .}}'`. Alert when a container restarts >3 times in 15 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:events\" action=\"die\"\n| eval exit_code=exitCode\n| where exit_code != \"0\"\n| stats count as crashes by container_name, image, exit_code\n| where crashes > 3\n| sort -crashes\n```\n\nUnderstanding this SPL\n\n**Container Crash Loops** — Containers restarting repeatedly indicate application bugs, misconfiguration, or dependency failures. Crash loops consume resources and never reach healthy state.\n\nDocumented **Data sources**: `sourcetype=docker:events`, Docker daemon logs. **App/TA** (typical add-on context): Splunk Connect for Docker, Docker events via syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **exit_code** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where exit_code != \"0\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by container_name, image, exit_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where crashes > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (container, image, crashes, exit code), Bar chart by container, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch the small programs that run inside boxes on our servers, and when one keeps dying with a bad exit code we notice quickly. We also count how fast it is happening so we know if it is a single hiccup or a loop that will upset people using our service.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "3.1.2",
              "n": "Container OOM Kills",
              "c": "critical",
              "f": "intermediate",
              "v": "OOM kills mean the container exceeded its memory limit. The application is either leaking memory or undersized. Data loss is likely.",
              "t": "Splunk Connect for Docker, host syslog",
              "d": "`sourcetype=docker:events`, host `dmesg`/syslog",
              "q": "index=os sourcetype=syslog \"Memory cgroup out of memory\" OR \"oom-kill\"\n| rex \"task (?<process>\\S+)\"\n| table _time host process _raw",
              "m": "Collect Docker events and forward host syslog. Alert immediately on any OOM event. Include container memory limit in the alert context to aid right-sizing decisions.",
              "z": "Events timeline, Single value (OOM count last 24h), Table with container details.",
              "kfp": "Deliberate memory torture with stress-ng or sysbench inside CI agents that carry tiny --memory limits produces expected oom_kill increments without customer impact. Overnight ETL containers configured fail-fast with tight limits may register kills once per run by design. JVM or dotnet runtimes that warm heaps larger than the cgroup cap can be killed once during first boot, then stabilize after garbage-collection tuning lowers peak resident set. Compose helper scripts that spawn short-lived children may log printk victims whose comm is sh or runc while the parent service keeps running, mimicking a false service outage. BuildKit or Kaniko bursts during docker build can spike memory.events.local in builder scopes without affecting production service tasks. Chaos experiments that randomly cap memory to test autoscaling emit oom_kill deltas that should be tagged in container_memory_baselines.csv. Hosts mid-migration from cgroup v1 to v2 can briefly double-report counters if both hierarchies are scraped. Memory pressure warnings may fire during rolling kernel upgrades when counters reset but workloads are healthy.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Docker, host syslog.\n• Ensure the following data sources are available: `sourcetype=docker:events`, host `dmesg`/syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Docker events and forward host syslog. Alert immediately on any OOM event. Include container memory limit in the alert context to aid right-sizing decisions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog \"Memory cgroup out of memory\" OR \"oom-kill\"\n| rex \"task (?<process>\\S+)\"\n| table _time host process _raw\n```\n\nUnderstanding this SPL\n\n**Container OOM Kills** — OOM kills mean the container exceeded its memory limit. The application is either leaking memory or undersized. Data loss is likely.\n\nDocumented **Data sources**: `sourcetype=docker:events`, host `dmesg`/syslog. **App/TA** (typical add-on context): Splunk Connect for Docker, host syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **Container OOM Kills**): table _time host process _raw\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Single value (OOM count last 24h), Table with container details.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch the kernel memory cage each container runs in, and we notice when it runs out of room or inches toward the ceiling. When the system is forced to stop a process to protect the machine, we capture who was stopped and suggest a safer memory ceiling so the next deploy does not surprise customers.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "3.1.3",
              "n": "Container CPU Throttling",
              "c": "high",
              "f": "intermediate",
              "v": "CPU throttling means the container is hitting its CPU limit and being artificially slowed. Causes latency spikes invisible to standard CPU utilization metrics.",
              "t": "Custom scripted input (cgroup stats), Splunk OpenTelemetry Collector",
              "d": "`sourcetype=docker:stats`, cgroup `cpu.stat`",
              "q": "index=containers sourcetype=\"docker:stats\"\n| eval throttle_pct = round(nr_throttled / nr_periods * 100, 1)\n| where throttle_pct > 25\n| stats avg(throttle_pct) as avg_throttle by container_name\n| sort -avg_throttle",
              "m": "Collect Docker stats via `docker stats --format '{{json .}}'` or read cgroup files directly (`/sys/fs/cgroup/cpu/docker/<id>/cpu.stat`). Monitor `throttled_time` and `nr_throttled`. Alert when >25% of periods are throttled.",
              "z": "Line chart (throttle % over time), Table (container, throttle %, CPU limit), Bar chart.",
              "kfp": "Batch extract-transform-load containers, CI compile farms, and big-data shuffle workers often carry intentionally tight docker --cpus settings where chronic nr_throttled growth is an expected cost-control posture rather than an outage precursor; tag them in container_cpu_baseline.csv workload_class or suppress via host-class macros after FinOps approval. Periodic JVM or dotnet garbage-collection pauses can align in time with throttle metrics even when the dominant stall is heap management rather than cgroup quota; corroborate with GC logs before blaming CpuQuota alone. Scheduled cron bursts that spike CPU for two minutes every hour may trip medium_throttled_30_to_70pct_chronic without violating customer SLOs if latency dashboards stay flat; require SLO correlation before paging. NUMA-affinity-constrained ML inference pods may keep narrow cpuset spans by design; pair with GPU or accelerator telemetry so high_cpuset_numa_misplacement does not fire on approved pinning. Observed throttle without user-visible latency on background reindexers or telemetry scrapers is common; downgrade using baseline rows that record acceptable throttle for non-interactive images. Host-wide saturation flagged by procfs:loadavg can make many containers appear throttled simultaneously even when individual quotas are generous; follow the recommended_response host attribution path before opening per-container tickets. Kernel upgrades that reset cgroup counters can create one noisy interval; replay after two collection cycles. Dual ingestion from Splunk_TA_nix and OpenTelemetry without deduplication can double periods_delta; enforce a single primary writer per node class.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (cgroup stats), Splunk OpenTelemetry Collector.\n• Ensure the following data sources are available: `sourcetype=docker:stats`, cgroup `cpu.stat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Docker stats via `docker stats --format '{{json .}}'` or read cgroup files directly (`/sys/fs/cgroup/cpu/docker/<id>/cpu.stat`). Monitor `throttled_time` and `nr_throttled`. Alert when >25% of periods are throttled.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:stats\"\n| eval throttle_pct = round(nr_throttled / nr_periods * 100, 1)\n| where throttle_pct > 25\n| stats avg(throttle_pct) as avg_throttle by container_name\n| sort -avg_throttle\n```\n\nUnderstanding this SPL\n\n**Container CPU Throttling** — CPU throttling means the container is hitting its CPU limit and being artificially slowed. Causes latency spikes invisible to standard CPU utilization metrics.\n\nDocumented **Data sources**: `sourcetype=docker:stats`, cgroup `cpu.stat`. **App/TA** (typical add-on context): Custom scripted input (cgroup stats), Splunk OpenTelemetry Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **throttle_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where throttle_pct > 25` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by container_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (throttle % over time), Table (container, throttle %, CPU limit), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch whether the computer is repeatedly telling a boxed-up program to wait its turn for CPU time, like a car stuck trying to merge when traffic will not open a gap. Even when the box still looks busy-but-fine on average, those waits pile up and make real people feel slowness at the worst moments.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.4",
              "n": "Container Memory Utilization",
              "c": "high",
              "f": "intermediate",
              "v": "Tracking memory usage relative to limits catches containers approaching OOM before they're killed. Enables proactive limit adjustments.",
              "t": "Splunk Connect for Docker, custom scripted input",
              "d": "`sourcetype=docker:stats`",
              "q": "index=containers sourcetype=\"docker:stats\"\n| eval mem_pct = round(mem_usage / mem_limit * 100, 1)\n| stats latest(mem_pct) as mem_pct by container_name\n| where mem_pct > 80\n| sort -mem_pct",
              "m": "Collect `docker stats` output at regular intervals. Alert when memory usage exceeds 80% of limit. Trend over time to catch gradual memory leaks.",
              "z": "Gauge per container, Table with limit context, Line chart (trending).",
              "kfp": "Java and Scala services intentionally park their heaps near cgroup caps when -Xmx tracks the limit; mem_pct and anon_pct_of_used can look alarming even while GC maintains a steady old generation—triage with GC logs before treating as a leak. Sustained high file and active_file bytes can resemble a leak on ratio charts even though the kernel can reclaim page cache; require anon dominance or rising PSI before paging on cache-heavy CDN or build-cache containers. Short-lived spikes during JVM full GC or dotnet compacting GC temporarily inflate memory.current without implying impending kill; correlate timestamps with GC pause metrics. Scheduled machine-learning inference cold starts may ramp RSS in minutes by design; align container_memory_baseline.csv bands to those windows or route alerts to ML platform owners. PostgreSQL containers tuned with shared_buffers hugging the limit are intentional; pair with database KPIs before blaming the cgroup envelope. Chaos experiments that clamp memory.max or inject PSI-heavy neighbors will trip this control on purpose—tag hosts in container_owner.csv. Dual collectors from Splunk_TA_nix and OpenTelemetry without deduplication can double minute buckets; enforce one primary writer per node class. Rootless Docker delegates different cgroup paths; scripted inputs that read the wrong scope emit null mem_pct until paths follow the delegated slice.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Docker, custom scripted input.\n• Ensure the following data sources are available: `sourcetype=docker:stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect `docker stats` output at regular intervals. Alert when memory usage exceeds 80% of limit. Trend over time to catch gradual memory leaks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:stats\"\n| eval mem_pct = round(mem_usage / mem_limit * 100, 1)\n| stats latest(mem_pct) as mem_pct by container_name\n| where mem_pct > 80\n| sort -mem_pct\n```\n\nUnderstanding this SPL\n\n**Container Memory Utilization** — Tracking memory usage relative to limits catches containers approaching OOM before they're killed. Enables proactive limit adjustments.\n\nDocumented **Data sources**: `sourcetype=docker:stats`. **App/TA** (typical add-on context): Splunk Connect for Docker, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mem_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by container_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mem_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge per container, Table with limit context, Line chart (trending).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the fuel gauge and the needle speed for every boxed workload: how full the cgroup allowance is, how fast it is climbing, and whether the kernel is already stalling tasks waiting for memory. That gives crews several minutes to add capacity before the sudden cutoff that customers would feel.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.5",
              "n": "Image Vulnerability Scanning",
              "c": "medium",
              "f": "intermediate",
              "v": "Container images with known CVEs are deployed directly into production. Scanning and tracking vulnerabilities prevents running exploitable workloads.",
              "t": "Custom input (Trivy, Snyk, Grype JSON output)",
              "d": "JSON scan results from vulnerability scanners",
              "q": "index=containers sourcetype=\"trivy:scan\"\n| stats count by image, Severity\n| xyseries image Severity count\n| sort -CRITICAL -HIGH",
              "m": "Run vulnerability scans in CI/CD pipeline (Trivy, Grype, or Snyk). Send Trivy/Grype scan results to Splunk via HEC with `sourcetype=trivy:scan` (or equivalent scanner output); batch results per image digest. Alert on `Severity=CRITICAL` for images whose `image_tag` matches production entries in a `prod_images` lookup. Exclude known-accepted CVEs via a `cve_exceptions.csv` lookup refreshed by the security team.",
              "z": "Table (image, critical, high, medium, low), Stacked bar chart by image, Trend line.",
              "kfp": "Security-approved scan-skip annotations on vendor appliance images or emergency break-glass digests can present as never_scanned until prod_image_scope.csv marks them exempt_with_ticket and a companion macro filters exempt rows from paging queues. Planned scanner maintenance windows legitimately pause heartbeat emissions for short intervals; require sustained silence exceeding two missed heartbeats before declaring pipeline failure, and annotate maintenance tickets in dashboard overlays. Multi-architecture manifests may emit one digest per architecture while admission logs show a manifest list digest; normalize with architecture-specific scope rows or aggregate keys to avoid false MUTABILITY-BYPASS-RISK. Golden base images that intentionally freeze on a digest with a dated attestation can look stale on calendar clocks even when risk is accepted; tie those rows to exception_expires_epoch in lookups instead of muting the sourcetype. Scanner-side rate limiting against huge registries can delay manifests without implying compromise; correlate with UC-3.1.26 pull telemetry before escalating. Air-gapped mirrors sometimes refresh internal vulnerability databases on a slower cadence than public feeds; adjust scanner_db_stale_warn_hours per zone documented in scanner_telemetry_baseline.csv. CI rescans that replay identical digest results can reset oldest-open epochs in ways that temporarily hide SLA breaches; cross-check against UC-3.1.10 historical summaries before closing tickets. Kubernetes audit sampling, if enabled, can drop admission evidence for some mutations; never treat sampled absence alone as proof of safety. Duplicate HEC writers can inflate coverage_pct denominators until deduplication macros land; dedupe on row_key and _time in a summary index when needed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1195",
                "T1525"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom input (Trivy, Snyk, Grype JSON output).\n• Ensure the following data sources are available: JSON scan results from vulnerability scanners.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun vulnerability scans in CI/CD pipeline (Trivy, Grype, or Snyk). Send Trivy/Grype scan results to Splunk via HEC with `sourcetype=trivy:scan` (or equivalent scanner output); batch results per image digest. Alert on `Severity=CRITICAL` for images whose `image_tag` matches production entries in a `prod_images` lookup. Exclude known-accepted CVEs via a `cve_exceptions.csv` lookup refreshed by the security team.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"trivy:scan\"\n| stats count by image, Severity\n| xyseries image Severity count\n| sort -CRITICAL -HIGH\n```\n\nUnderstanding this SPL\n\n**Image Vulnerability Scanning** — Container images with known CVEs are deployed directly into production. Scanning and tracking vulnerabilities prevents running exploitable workloads.\n\nDocumented **Data sources**: JSON scan results from vulnerability scanners. **App/TA** (typical add-on context): Custom input (Trivy, Snyk, Grype JSON output). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: trivy:scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"trivy:scan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by image, Severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pivots fields for charting with `xyseries`.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (image, critical, high, medium, low), Stacked bar chart by image, Trend line.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We check that every serious meal leaving the kitchen still has a fresh safety inspection sticker, not only that yesterday's menu listed ingredients. If the sticker is missing, expired, or the kitchen swapped the plate without a new check, we raise the alarm so teams fix the process—not just one dish.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.6",
              "n": "Privileged Container Detection",
              "c": "high",
              "f": "advanced",
              "v": "Privileged containers have full host access — a container escape gives root on the host. Should be flagged and justified in production.",
              "t": "Docker events, custom audit input",
              "d": "`docker inspect` output, Kubernetes pod security",
              "q": "index=containers sourcetype=\"docker:inspect\"\n| where Privileged=\"true\"\n| table container_name image host Privileged",
              "m": "Create scripted input: `docker inspect --format '{{.Name}} {{.HostConfig.Privileged}}' $(docker ps -q)`. Run every 300 seconds. Alert on any privileged container in production. Maintain an allowlist for justified exceptions.",
              "z": "Table (container, image, host), Single value (count of privileged), Status indicator.",
              "kfp": "Cilium, Calico, kube-proxy, Antrea, and similar node agents sometimes require host networking or elevated capabilities that resemble policy violations until you normalize their names in privileged_container_allowlist.csv with ticket-backed approvals. NVIDIA GPU device plugins, InfiniBand helpers, and hardware performance counter exporters may need SYS_ADMIN or SYS_RAWIO on specific SKUs; document the exact image digest regex and retire approvals when vendors ship reduced-capability builds. BMC or IPMI sidecars in factory environments occasionally run privileged to touch raw devices; route those hosts to OT governance macros rather than global suppression. Ephemeral debug pods launched into non-production namespaces during incidents can trigger Falco and inspect simultaneously; require label-based CI/CD segregation so debug telemetry lands in a lower-severity routing macro. BuildKit, docker buildx, or Kaniko-adjacent builders that still use --privileged for nested virtualization will fire on shared CI executors; maintain parallel allowlists keyed by host class or builder pool names. Service mesh init containers that mount cgroup or proc paths for observability can resemble risky bind hints; corroborate with service mesh vendor documentation before muting. Falco rules that broadly match sensitive mount can flag read-only library paths during legitimate JVM diagnostics; tune only after docker:inspect proves mounts are expected. Audit pipelines that log every docker pull can interleave argv fragments that look like privileged launches; require multi-token matches before paging. Rolling kernel upgrades can reset capability baseline hashes briefly; reconcile container_image_capability_baseline.csv when golden images move. Mirantis Container Runtime field naming differences after upgrades can cause coalesce misses that look like drift until props aliases are refreshed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1611"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Docker events, custom audit input.\n• Ensure the following data sources are available: `docker inspect` output, Kubernetes pod security.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input: `docker inspect --format '{{.Name}} {{.HostConfig.Privileged}}' $(docker ps -q)`. Run every 300 seconds. Alert on any privileged container in production. Maintain an allowlist for justified exceptions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:inspect\"\n| where Privileged=\"true\"\n| table container_name image host Privileged\n```\n\nUnderstanding this SPL\n\n**Privileged Container Detection** — Privileged containers have full host access — a container escape gives root on the host. Should be flagged and justified in production.\n\nDocumented **Data sources**: `docker inspect` output, Kubernetes pod security. **App/TA** (typical add-on context): Docker events, custom audit input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:inspect. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:inspect\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Privileged=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Privileged Container Detection**): table container_name image host Privileged\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (container, image, host), Single value (count of privileged), Status indicator.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We check whether little boxed programs were accidentally given master keys to the whole computer—like seeing every process on the machine, loading deep kernel modules, or turning off the safety profiles that keep programs contained. When that happens without paperwork, we sound the alarm so a small break-in cannot become a takeover.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.7",
              "n": "Container Sprawl",
              "c": "low",
              "f": "advanced",
              "v": "Stopped containers and dangling images waste disk space. In development environments, sprawl can consume all available storage.",
              "t": "Custom scripted input",
              "d": "`docker ps -a`, `docker images`",
              "q": "index=containers sourcetype=\"docker:ps\"\n| where status=\"exited\"\n| eval days_stopped = round((now() - strptime(finished_at, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| where days_stopped > 7\n| stats count by host",
              "m": "Scripted input: `docker ps -a --format '{{json .}}'` and `docker system df`. Run daily. Report on stopped containers >7 days and total disk reclamation possible.",
              "z": "Table, Single value (reclaimable space), Bar chart by host.",
              "kfp": "Legitimate stopped-but-named data containers that hold reference datasets offline can accumulate exited rows without implying waste; require inventory tags or compose labels before auto-prune tickets fire. Golden-image digests pinned for forensic preservation often look unused to naive digest scans; rely on unused_image_flag exporters that honor inventory holds and exempt namespaces in sprawl_retention_policy.csv. Build caches deliberately retained for monorepo compile performance inflate reclaimable totals without negligence; mark builder SKUs exempt or document finance-approved retention in the lookup rather than muting the UC. Forensics and security_hold namespaces should set exempt_ns so cleanup_command stays advisory. Volumes mounted by external orchestrators the docker engine cannot fully reference may appear orphan_heur while still protected upstream; cross-check CSI or Nomad state before volume prune. Multi-architecture manifest lists keep dangling-looking digests that remain referenced; digest-aware unused detection is mandatory before paging unused_images_count. Patch Tuesday image storms can temporarily raise dangling counts during tag churn; corroborate with change calendars. Lab hosts that run continuous integration with intentional none layers will dominate fleet percentiles unless segmented by environment in eventstats. Duplicate telemetry from OpenTelemetry and legacy forwarders without dedup keys can double reclaimable totals; validate source weights before trusting RED tiers. docker system df rounding versus per-object du can disagree slightly; treat sub-gigabyte deltas as noise unless persistent for a week.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input.\n• Ensure the following data sources are available: `docker ps -a`, `docker images`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted input: `docker ps -a --format '{{json .}}'` and `docker system df`. Run daily. Report on stopped containers >7 days and total disk reclamation possible.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:ps\"\n| where status=\"exited\"\n| eval days_stopped = round((now() - strptime(finished_at, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| where days_stopped > 7\n| stats count by host\n```\n\nUnderstanding this SPL\n\n**Container Sprawl** — Stopped containers and dangling images waste disk space. In development environments, sprawl can consume all available storage.\n\nDocumented **Data sources**: `docker ps -a`, `docker images`. **App/TA** (typical add-on context): Custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:ps. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:ps\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status=\"exited\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_stopped** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_stopped > 7` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value (reclaimable space), Bar chart by host.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We count leftover boxes, loose labels, and forgotten storage closets on each machine that runs containers, then add up how much space we could safely take back if we cleaned responsibly. When the pile grows too large compared with the whole disk, we raise a clear flag so teams tidy before the machine runs out of room.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.8",
              "n": "Docker Daemon Errors",
              "c": "high",
              "f": "beginner",
              "v": "Docker daemon errors affect all containers on the host. Network, storage driver, and containerd errors can cause widespread container failures.",
              "t": "Syslog, Docker daemon log forwarding",
              "d": "`/var/log/docker.log` or `journalctl -u docker`",
              "q": "index=containers sourcetype=\"docker:daemon\" level=\"error\" OR level=\"fatal\"\n| stats count by host, msg\n| sort -count",
              "m": "Forward Docker daemon logs (usually via journald or `/var/log/docker.log`). Alert on fatal errors. Track error patterns by host.",
              "z": "Table (host, error, count), Timeline, Bar chart by error type.",
              "kfp": "Legitimate image pull unauthorized or denied: requested access events fire when registry credentials in /etc/docker/daemon.json or systemd drop-in files expire on a planned rotation cadence; the engine is healthy and the fix is the credential rotation pipeline, not a daemon restart, so route low_image_manifest_unauthorized rows to the registry-ops queue rather than the engine on-call rotation. Benign bridge already exists or libnetwork bridge already declared lines surface harmlessly when a node reloads or when docker-compose down and up cycles in CI executors recreate the same default bridge after a prior cleanup script left state behind; require sustained bursts above the macro threshold or pair with engine_api 5xx evidence before paging. The client disconnect during request log line is a routine outcome of kubelet probe-bursts and short-lived docker ps health checks from monitoring containers; do not score it as a daemon panic on its own. OOM-killed dockerd entries occasionally appear when a host is under extreme memory pressure from co-resident workloads; the root cause is host-level capacity and UC-3.1.2 owns the cgroup analysis for memory, so correlate before raising critical_dockerd_panic_or_fatal. Routine live-restore enable or reload events written to journald during scheduled patch windows look like daemon transitions but are intentional; tag the change window in maintenance lookups and downgrade those rows. Storage_driver_error bursts during planned overlay2 to fuse-overlayfs migrations, aggressive image-layer garbage collection runs, or BuildKit cache prune cycles may briefly cross the sustained threshold without indicating filesystem damage; require corroborating dmesg evidence before escalating. CI executors that intentionally run nested Docker-in-Docker for builder pipelines often emit shim_reap_event and shim_disconnect_signal lines as the inner daemon exits cleanly between jobs; tag those host classes in container_owner.csv with a builder owner_team and downgrade non-panic shim signals on builder fleets. Mirantis Container Runtime field-naming differences after a major release can cause coalesce misses that look like a sudden uncategorized_dockerd_error spike until props aliases catch up; reconcile against vendor release notes before paging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Syslog, Docker daemon log forwarding.\n• Ensure the following data sources are available: `/var/log/docker.log` or `journalctl -u docker`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Docker daemon logs (usually via journald or `/var/log/docker.log`). Alert on fatal errors. Track error patterns by host.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:daemon\" level=\"error\" OR level=\"fatal\"\n| stats count by host, msg\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Docker Daemon Errors** — Docker daemon errors affect all containers on the host. Network, storage driver, and containerd errors can cause widespread container failures.\n\nDocumented **Data sources**: `/var/log/docker.log` or `journalctl -u docker`. **App/TA** (typical add-on context): Syslog, Docker daemon log forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:daemon. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:daemon\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, msg** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, error, count), Timeline, Bar chart by error type.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the engine that runs the entire shipping yard's container cranes. When that engine itself sputters, panics, or seizes up, every container in motion stops mid-air and no amount of restarting the cranes helps until the engine is breathing again. We catch that engine sputter early so a host-wide outage never grows into a fleet-wide outage.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.9",
              "n": "Docker Daemon Health and Version Drift",
              "c": "medium",
              "f": "intermediate",
              "v": "Mixed Docker Engine versions break image compatibility and CVE patching cadence; silent `ServerErrors` in the daemon log precede pull and run failures. Standardize on a patch channel and alert before user-facing container failures cascade.",
              "t": "Custom scripted input (docker info, docker version)",
              "d": "docker info JSON output, docker version",
              "q": "index=containers sourcetype=\"docker:info\"\n| stats values(ServerVersion) as versions by host\n| eval version_count = mvcount(versions)\n| where version_count > 1\n| mvexpand versions limit=100\n| table host versions",
              "m": "Create scripted input that runs `docker info --format '{{json .}}'` and `docker version --format '{{json .}}'` every 300 seconds. Parse ServerVersion, ServerErrors, Containers, Images, and DriverStatus. Forward to Splunk via HEC. Alert when Docker daemon is unresponsive (no data for >5 minutes) or when ServerErrors is non-empty. Report version drift: alert when multiple Engine versions exist across the fleet.",
              "z": "Table (host, version, containers, images), Single value (version count), Status grid (host health).",
              "kfp": "Canary segments that deliberately run one newer Engine minor on a fraction of hosts will trip sprawl or drift logic until you annotate host_segment or add exempt_sprawl metadata; this is expected and should be documented rather than muted. Vendor-locked LTS channels sometimes trail docker.com release notes by weeks while still being supported; if your allowlist row is missing, you will see lookup_miss until vulnerability management refreshes docker_engine_versions.csv even though finance signed the vendor exception. Freshly bootstrapped hosts can emit transient empty Server.Version rows for a single poll when dockerd starts slower than the scripted input; require two consecutive misses before paging. Engineers testing bleeding-edge nightly builds on lab hosts will appear as critical or medium rows until those hosts are tagged out of production segments. Hosts promoted from lab to prod without CMDB updates inherit stale segment labels, which distorts distinct_engine_versions_per_segment math until the next CMDB sync. Registry-mirror rebuilds can make Server.BuildTime look ancient relative to the image layer timestamps CI recorded, triggering low_engine_buildtime_stale until you compare package NEVRA from the OS package manager. Planned blue-green upgrades temporarily inflate medium_minor_version_drift when half the segment is mid-flight between two approved versions; pair alerts with deployment orchestration timestamps. Distro-specific package suffixes such as tilde ubuntu break naive string equality until you normalize engine_key. Duplicate CSV rows for the same engine_version create ambiguous known_cves text; dedupe during publish. Finally, remember UC-3.1.8 may show healthy journal streams while this UC still fires on semver debt, and UC-3.1.11 may show healthy FD ceilings while CVE rows remain critical.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (docker info, docker version).\n• Ensure the following data sources are available: docker info JSON output, docker version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input that runs `docker info --format '{{json .}}'` and `docker version --format '{{json .}}'` every 300 seconds. Parse ServerVersion, ServerErrors, Containers, Images, and DriverStatus. Forward to Splunk via HEC. Alert when Docker daemon is unresponsive (no data for >5 minutes) or when ServerErrors is non-empty. Report version drift: alert when multiple Engine versions exist across the fleet.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:info\"\n| stats values(ServerVersion) as versions by host\n| eval version_count = mvcount(versions)\n| where version_count > 1\n| mvexpand versions limit=100\n| table host versions\n```\n\nUnderstanding this SPL\n\n**Docker Daemon Health and Version Drift** — Mixed Docker Engine versions break image compatibility and CVE patching cadence; silent `ServerErrors` in the daemon log precede pull and run failures. Standardize on a patch channel and alert before user-facing container failures cascade.\n\nDocumented **Data sources**: docker info JSON output, docker version. **App/TA** (typical add-on context): Custom scripted input (docker info, docker version). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **version_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where version_count > 1` — typically the threshold or rule expression for this monitoring goal.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Pipeline stage (see **Docker Daemon Health and Version Drift**): table host versions\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, version, containers, images), Single value (version count), Status grid (host health).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We check that every machine running our container engine reports the same approved software generation you expect, like making sure every truck in a delivery fleet passed the same safety inspection on schedule. When too many different versions appear in one region, or a known safety recall still shows up in the field, we raise a hand before anything catches fire.",
              "mtype": [
                "Availability",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.10",
              "n": "Container Image Vulnerability Scanning Results",
              "c": "high",
              "f": "intermediate",
              "v": "Centralizing scanner output (severity, package, image digest) proves compliance and speeds remediation when new CVEs hit production images.",
              "t": "Custom HEC (Trivy, Grype, Snyk JSON), CI pipeline artifacts",
              "d": "`sourcetype=trivy:scan`, `sourcetype=grype:scan`",
              "q": "index=containers (sourcetype=\"trivy:scan\" OR sourcetype=\"grype:scan\")\n| stats latest(Severity) as sev, values(VulnerabilityID) as cves, dc(VulnerabilityID) as vuln_count by image_name, image_digest, Target\n| where mvfind(sev, \"CRITICAL\") OR mvfind(sev, \"HIGH\")\n| sort -vuln_count",
              "m": "Forward CI and registry scan JSON to Splunk with stable fields (`image_name`, `image_digest`, `Target`). Deduplicate on digest+CVE. Alert when CRITICAL/HIGH appears on images referenced by running tags.",
              "z": "Table (image, digest, vuln count, severities), Treemap by repo, Trend (open vulns over time).",
              "kfp": "Trivy and Grype vulnerability databases occasionally lag vendor advisories by hours or days, so a Critical in National Vulnerability Database may already be downgraded in a distro security notice while scanners still scream Critical until the DB sync completes. Minimal images such as distroless or Wolfi builds sometimes report package strings that do not match NVD CPE expectations, producing version mismatch noise until you align scanner flags with the distro’s own security tracker. KEV entries may describe Windows-only exploitation paths yet still list a CVE that also affects a Linux shared library; platform owners must read the KEV description before blocking a Linux digest. When one CVE is Critical under CVSS v3 but the software vendor documents that the vulnerable code path is unreachable in your container entrypoint, vulnerability management needs a documented exception rather than Splunk suppression. Running Trivy, Grype, and Snyk in parallel without a deduplication key yields duplicate CVE rows that inflate counts until you dedup on image digest plus CVE plus scanner as this search does. Brand-new CVE placeholders sometimes arrive as UNKNOWN severity before NVD publishes; treat those as data-quality tickets, not automatic production blocks. Snyk CVSS versus Trivy severity can disagree on the same GHSA identifier; governance should pick a primary scoring source for escalations while still ingesting all feeds for coverage. EPSS scores fluctuate daily; a drop from 0.55 to 0.45 is not automatic closure if KEV or compensating-control debt remains. False positives also come from scanning scratch or intermediate CI layers that never deploy to customer-facing clusters; exclude those namespaces via lookup, not by muting the entire sourcetype.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1195",
                "T1204"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (Trivy, Grype, Snyk JSON), CI pipeline artifacts.\n• Ensure the following data sources are available: `sourcetype=trivy:scan`, `sourcetype=grype:scan`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward CI and registry scan JSON to Splunk with stable fields (`image_name`, `image_digest`, `Target`). Deduplicate on digest+CVE. Alert when CRITICAL/HIGH appears on images referenced by running tags.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers (sourcetype=\"trivy:scan\" OR sourcetype=\"grype:scan\")\n| stats latest(Severity) as sev, values(VulnerabilityID) as cves, dc(VulnerabilityID) as vuln_count by image_name, image_digest, Target\n| where mvfind(sev, \"CRITICAL\") OR mvfind(sev, \"HIGH\")\n| sort -vuln_count\n```\n\nUnderstanding this SPL\n\n**Container Image Vulnerability Scanning Results** — Centralizing scanner output (severity, package, image digest) proves compliance and speeds remediation when new CVEs hit production images.\n\nDocumented **Data sources**: `sourcetype=trivy:scan`, `sourcetype=grype:scan`. **App/TA** (typical add-on context): Custom HEC (Trivy, Grype, Snyk JSON), CI pipeline artifacts. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: trivy:scan, grype:scan. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"trivy:scan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by image_name, image_digest, Target** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mvfind(sev, \"CRITICAL\") OR mvfind(sev, \"HIGH\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (image, digest, vuln count, severities), Treemap by repo, Trend (open vulns over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We treat each shipped image like a recipe on the menu. When a supplier flags an ingredient as dangerous and a replacement exists, we stop serving that dish right away. When the bulletin is dire but no replacement exists, we still need to know which plates are in the dining room and for how long, especially when investigators confirm thieves already exploit the flaw in the wild.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.11",
              "n": "Docker Daemon Resource Limits Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Host-level CPU, memory, and storage pressure on the Docker engine starves containers before per-container limits trigger; early detection avoids fleet-wide slowdowns.",
              "t": "Splunk Connect for Docker, host metrics (Telegraf/OTel)",
              "d": "`sourcetype=docker:info`, `sourcetype=docker:system`",
              "q": "index=containers sourcetype=\"docker:info\"\n| eval mem_total_gb=round(MemTotal/1073741824, 2)\n| eval mem_avail_gb=round(MemAvailable/1073741824, 2)\n| eval mem_used_pct=round((MemTotal-MemAvailable)/MemTotal*100, 1)\n| where mem_used_pct > 85 OR NCPU < 2\n| table _time host mem_total_gb mem_avail_gb mem_used_pct NCPU\n| sort -mem_used_pct",
              "m": "Ingest `docker info` JSON (or `docker system df`) on an interval plus host memory/CPU from the node. Correlate with `docker:events` throttling and OOM. Alert when host memory used >85% or CPU saturation sustained >10 minutes.",
              "z": "Line chart (mem %, CPU load), Table (host, limits), Single value (hosts over threshold).",
              "kfp": "Transient FD spikes during legitimate burst image pulls or massive docker save and docker load operations can push fd_pct into the seventies without implying a leak; require sustained slope from streamstats or repeated polls above baseline_fd_pct_warn before paging application teams. Pull-queue depth may read nonzero during intentional concurrency experiments or when a registry mirror warms caches; cross-check change calendar before treating pull_hot as incident. dockerd restart resets host metrics counters and can briefly inflate inode_pct readings while dentry caches repopulate; suppress duplicate pages for one interval after controlled restarts documented in maintenance lookups. Background docker system prune or aggressive builder cache eviction causes inode churn that resembles exhaustion until the operation completes; validate prune jobs in automation logs. Some monitoring agents open docker.sock briefly each poll; cumulative mis-tuned agents can resemble dockerd leaks but are actually client-side handle pressure—correlate with agent release notes. Plugin drivers that maintain long-lived FUSE mounts may elevate open_fds within policy; verify against plugin vendor baselines in dockerd_resource_baseline.csv. Fleet-wide AMI refresh can shift default max_concurrent_downloads or LimitNOFILE; expect low_baseline_drift until CSV refresh. OTel double-scrape during migrations can duplicate docker:metrics rows; deduplicate before interpreting pull_queue_depth peaks.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Docker, host metrics (Telegraf/OTel).\n• Ensure the following data sources are available: `sourcetype=docker:info`, `sourcetype=docker:system`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest `docker info` JSON (or `docker system df`) on an interval plus host memory/CPU from the node. Correlate with `docker:events` throttling and OOM. Alert when host memory used >85% or CPU saturation sustained >10 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:info\"\n| eval mem_total_gb=round(MemTotal/1073741824, 2)\n| eval mem_avail_gb=round(MemAvailable/1073741824, 2)\n| eval mem_used_pct=round((MemTotal-MemAvailable)/MemTotal*100, 1)\n| where mem_used_pct > 85 OR NCPU < 2\n| table _time host mem_total_gb mem_avail_gb mem_used_pct NCPU\n| sort -mem_used_pct\n```\n\nUnderstanding this SPL\n\n**Docker Daemon Resource Limits Monitoring** — Host-level CPU, memory, and storage pressure on the Docker engine starves containers before per-container limits trigger; early detection avoids fleet-wide slowdowns.\n\nDocumented **Data sources**: `sourcetype=docker:info`, `sourcetype=docker:system`. **App/TA** (typical add-on context): Splunk Connect for Docker, host metrics (Telegraf/OTel). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:info. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mem_total_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mem_avail_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mem_used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mem_used_pct > 85 OR NCPU < 2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Docker Daemon Resource Limits Monitoring**): table _time host mem_total_gb mem_avail_gb mem_used_pct NCPU\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (mem %, CPU load), Table (host, limits), Single value (hosts over threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch whether the kitchen manager who coordinates every food tray is running out of hands, pantry labels, or delivery slots. When their personal limits fill up, new dishes wait with no error bell, and dinner looks late for no obvious reason until someone counts what the manager is already holding.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.12",
              "n": "Compose Service Health",
              "c": "medium",
              "f": "intermediate",
              "v": "Docker Compose stacks power dev/stage and edge; tracking service `healthcheck` state and replica counts catches bad releases before they reach Kubernetes.",
              "t": "Custom script (docker compose ps --format json), vector/file collector",
              "d": "`sourcetype=docker:compose:ps`",
              "q": "index=containers sourcetype=\"docker:compose:ps\"\n| eval healthy=if(Health==\"healthy\",1,0)\n| stats latest(Health) as health, latest(State) as state, values(Service) as services by project, Name\n| where health=0 OR match(state, \"^(exited|restarting)\")\n| table project Name health state services",
              "m": "Scheduled `docker compose -f <file> ps --format json` per project; parse `Health`, `State`, `Service`. Ship to HEC. Alert on unhealthy or restarting services for >5 minutes.",
              "z": "Status grid by service, Table (project, service, health), Timeline of state changes.",
              "kfp": "Legitimate docker compose up single-service restarts during inner-loop development can temporarily raise distinct_config_hashes or dependency warnings until peers reload; tag developer host classes in compose_project_baseline.csv or macro-exclude those host_id patterns after governance approval. Services declared with restart: no or one-shot batch helpers may be intentionally exited while the rest of the project stays running for ETL or migration windows; annotate baseline rows with expected_non_running_services to avoid false chain alarms. Compose file format migrations and simultaneous compose convert drills can produce brief config-hash skew across containers until the operator finishes a full recreate; treat sustained drift only after two inspect cadences. Front-end install hooks such as yarn or npm postinstall spikes can keep a service in starting health while dependencies are actually fine; correlate with UC-3.1.22 probe timelines before declaring dead dependencies. Home-lab or sandbox projects with disposable names may never publish baseline rows yet still appear in inspect; route them to non-prod indexes or maintain a scratch_project_allowlist.csv referenced by a wrapper macro. Rolling kernel or Engine upgrades that restart containers out of order may trip medium_intermittent_health_check_flapping without customer impact; require SLO correlation before sev-one pages. CI builders that reuse com.docker.compose.project names across ephemeral workspaces can inflate fleet_avg_project_health variance; segregate CI indexes from edge production indexes. Manual docker start of a dependency after a compose down partially completes can clear chain_hit while leaving inconsistent volumes; pair with change tickets describing operator actions. Blue-green style experiments that intentionally run two generations with different hashes on separate ports should be labeled in baseline notes so drift alerts route to the right squad.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom script (docker compose ps --format json), vector/file collector.\n• Ensure the following data sources are available: `sourcetype=docker:compose:ps`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScheduled `docker compose -f <file> ps --format json` per project; parse `Health`, `State`, `Service`. Ship to HEC. Alert on unhealthy or restarting services for >5 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:compose:ps\"\n| eval healthy=if(Health==\"healthy\",1,0)\n| stats latest(Health) as health, latest(State) as state, values(Service) as services by project, Name\n| where health=0 OR match(state, \"^(exited|restarting)\")\n| table project Name health state services\n```\n\nUnderstanding this SPL\n\n**Compose Service Health** — Docker Compose stacks power dev/stage and edge; tracking service `healthcheck` state and replica counts catches bad releases before they reach Kubernetes.\n\nDocumented **Data sources**: `sourcetype=docker:compose:ps`. **App/TA** (typical add-on context): Custom script (docker compose ps --format json), vector/file collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:compose:ps. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:compose:ps\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **healthy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by project, Name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where health=0 OR match(state, \"^(exited|restarting)\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Compose Service Health**): table project Name health state services\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid by service, Table (project, service, health), Timeline of state changes.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We treat each multi-container stack like a band where every musician must be tuned before the curtain rises. If the bass player never makes it on stage but the lights already went up, we spot who is missing and which harmony rules broke before the crowd only hears static.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.13",
              "n": "Container Restart Loop Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Rapid start/die cycles burn CPU and obscure root cause; detecting loops early isolates bad images or bad configs before cascading failures.",
              "t": "Splunk Connect for Docker",
              "d": "`sourcetype=docker:events`",
              "q": "index=containers sourcetype=\"docker:events\" action=\"die\" OR action=\"start\"\n| bin _time span=5m\n| stats dc(action) as actions, count by _time, container_name, host\n| where actions>=2 AND count>=6\n| sort -count",
              "m": "Track paired start/die bursts per container in sliding windows. Alert when >3 restart cycles in 15 minutes. Enrich with `exitCode` from die events.",
              "z": "Timeline (start/die), Table (container, cycles, host), Single value (looping containers).",
              "kfp": "Blue-green deploys that terminate an old task once and start a new task once per instance can emit paired die and start events within minutes across many containers without a true loop; require sustained min_cycle_seconds pressure or tag deploy ids in Compose labels before paging. Compose stacks under active developer iteration on shared CI executors often run docker compose down && up loops that recreate services faster than production backoff expectations even when applications are healthy. Watchtower, Diun, or home-grown image pullers restart containers on a predictable cadence by design; maintain an allowlist keyed by image pattern and host class. Cron-style oneshot jobs that use --restart=on-failure with a small MaximumRetryCount may show several tight gaps then silence when the job finishes successfully on a later attempt; treat owner context before calling it an outage. Docker daemon restarts or live-restore toggles can emit a burst of start events across a fleet without matching dies, inflating cycles_1h until the window clears; cross-check syslog for daemon startup lines. Compose profiles that activate only during nightly batch windows can surface clustered start and die pairs when batch containers exit zero after work completes; narrow time filters with profile labels. Swarm service updates with parallelism above one may interleave events from task slots that resemble flapping though traffic was drained intentionally.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Docker.\n• Ensure the following data sources are available: `sourcetype=docker:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack paired start/die bursts per container in sliding windows. Alert when >3 restart cycles in 15 minutes. Enrich with `exitCode` from die events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:events\" action=\"die\" OR action=\"start\"\n| bin _time span=5m\n| stats dc(action) as actions, count by _time, container_name, host\n| where actions>=2 AND count>=6\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Container Restart Loop Detection** — Rapid start/die cycles burn CPU and obscure root cause; detecting loops early isolates bad images or bad configs before cascading failures.\n\nDocumented **Data sources**: `sourcetype=docker:events`. **App/TA** (typical add-on context): Splunk Connect for Docker. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, container_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where actions>=2 AND count>=6` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (start/die), Table (container, cycles, host), Single value (looping containers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch how often a boxed-up program is started again right after it stops, and how long the system waits between tries. When it thrashes quickly or keeps trying forever, we raise a hand; when it quietly gives up after a set number of tries, we still flag that so it is not mistaken for healthy.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.14",
              "n": "Docker Network Overlay Issues",
              "c": "high",
              "f": "advanced",
              "v": "Overlay plugins (VXLAN, weave, custom bridges) failures cause intermittent connectivity between containers on different hosts.",
              "t": "Docker daemon logs, syslog",
              "d": "`sourcetype=docker:daemon`, `sourcetype=syslog`",
              "q": "index=os sourcetype=syslog (docker OR \"br-\" OR \"vxlan\")\n| search fail OR error OR unreachable\n| stats count by host, _raw",
              "m": "Forward daemon logs with network driver context. Pattern-match overlay create/delete errors, IPAM failures, and iptables sync issues. Correlate multi-host with timestamps.",
              "z": "Table (host, error signature, count), Timeline, Bar chart by error type.",
              "kfp": "Endpoint-count drift during rolling deploys is benign because Swarm temporarily holds tasks in pending or starting while images pull, so docker_overlay_network_baseline.csv expected_endpoint_count must be refreshed alongside service-replica changes or medium_endpoint_count_drift_below_threshold will fire harmlessly for several minutes after every release. Transient gossip lag during manager-quorum re-election after a deliberate manager restart is expected: Serf needs ten to thirty seconds for the new leader to converge cluster state, which can show as gossip_lag_or_drop entries even though packets are not actually lost; require sustained gossip_lag_sec above sixty seconds before paging. Iptables drops from intentional egress-policy enforcement on tenant-tier worker nodes look exactly like overlay packet loss in raw counters until you separate DOCKER-USER chain drops attributable to documented egress rules from drops on the docker_gwbridge or vxlan* interfaces themselves; tag those host classes with a known-policy macro rather than silencing the alert globally. The string network not found is routinely emitted in dockerd journal during ephemeral CI builds that create and tear down Compose-style overlay networks within seconds, so high_libnetwork_driver_error_sustained should not fire on transient lifecycle events; the SPL therefore requires driver_error_count of three or more in the window. nf_conntrack exhaustion is sometimes caused by an application bug (a connection-leaking client that refuses to close keep-alive sockets) rather than overlay-plane disease; correlate with docker:container:logs and tcp_tw_recycle counters before blaming the network layer. Mirantis Container Runtime releases occasionally rename libnetwork log keys after major upgrades, which can cause coalesce misses that look like sudden driver-error bursts until props aliases are refreshed. Backup managers that are intentionally demoted via docker swarm leave emit gossip notifications for several minutes that resemble peer disconnects but are routine; pair with maintenance lookups before paging. Overlay networks that exist only on a single node by design (single-host bridge networks misclassified as overlay by a poller bug) will register vxlan_state single_node_observed and must not page; correct the poller mapping rather than tuning thresholds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Docker daemon logs, syslog.\n• Ensure the following data sources are available: `sourcetype=docker:daemon`, `sourcetype=syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward daemon logs with network driver context. Pattern-match overlay create/delete errors, IPAM failures, and iptables sync issues. Correlate multi-host with timestamps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=syslog (docker OR \"br-\" OR \"vxlan\")\n| search fail OR error OR unreachable\n| stats count by host, _raw\n```\n\nUnderstanding this SPL\n\n**Docker Network Overlay Issues** — Overlay plugins (VXLAN, weave, custom bridges) failures cause intermittent connectivity between containers on different hosts.\n\nDocumented **Data sources**: `sourcetype=docker:daemon`, `sourcetype=syslog`. **App/TA** (typical add-on context): Docker daemon logs, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, _raw** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, error signature, count), Timeline, Bar chart by error type.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the underground tunnels between buildings on the same campus that carry packets between containers on different machines. Sometimes the floorplan shows a tunnel exists in both directions, but one end is collapsed; packets going outward arrive while replies coming back vanish, and customers feel half-broken connections long before any single container looks unhealthy.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.15",
              "n": "Image Layer Bloat Analysis",
              "c": "low",
              "f": "intermediate",
              "v": "Large layer stacks slow pulls and increase attack surface; trending layer count and size guides image slimming and base image updates.",
              "t": "Custom input (`docker history --no-trunc`, `docker image inspect`)",
              "d": "`sourcetype=docker:image:history`",
              "q": "index=containers sourcetype=\"docker:image:history\"\n| stats sum(layer_size_bytes) as total_bytes, dc(layer_id) as layer_count by image_id, repository\n| eval total_mb=round(total_bytes/1048576,1)\n| where layer_count>25 OR total_mb>800\n| sort -total_mb",
              "m": "Nightly job exports `docker history` JSON per promoted image. Store per-layer size and count. Report images exceeding policy thresholds.",
              "z": "Bar chart (image vs MB), Table (image, layers, total MB), Trend (average image size).",
              "kfp": "Legitimately large ML and AI training images ship multi-gigabyte model weights; they will breach naive megabyte SLAs until you annotate repository prefixes or owner labels in a governance lookup and align caps per workload class. CUDA and cuDNN stacks often add hundreds of megabytes unrelated to application layer hygiene; treat them as expected heaviness with documented exceptions rather than silent suppression. Full JDK base images exceed slim JRE runtimes by design; compare like with like when setting max_image_mb. Multi-stage builds frequently show high docker history layer counts reflecting intermediate compiler stages even when docker inspect Size remains small; triage with inspect bytes and optional BuildKit metadata before paging teams. Multi-architecture manifest lists can duplicate logical tags per platform; without architecture in the grain, totals appear inflated relative to a single-node pull. Golden-image tags pinned under compliance hold may remain large intentionally; route them through exempt env rows or parallel policy tables instead of disabling the UC. Registry compression upgrades can shrink Content-Length proxies without any Dockerfile change, creating apparent improvement that is not a team accomplishment; note infrastructure changes in tickets. Distroless and minimal bases sometimes under-report intermediate history while still being secure; pair signals. Repeated COPY of the same source without cache mount optimization can look like layer bloat that is actually build-cache miss noise; corroborate with CI cache hit metrics. Security scanning layers added as sidecars may count as extra layers without materially increasing attack surface when they are empty metadata; validate with scanner documentation.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom input (`docker history --no-trunc`, `docker image inspect`).\n• Ensure the following data sources are available: `sourcetype=docker:image:history`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNightly job exports `docker history` JSON per promoted image. Store per-layer size and count. Report images exceeding policy thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:image:history\"\n| stats sum(layer_size_bytes) as total_bytes, dc(layer_id) as layer_count by image_id, repository\n| eval total_mb=round(total_bytes/1048576,1)\n| where layer_count>25 OR total_mb>800\n| sort -total_mb\n```\n\nUnderstanding this SPL\n\n**Image Layer Bloat Analysis** — Large layer stacks slow pulls and increase attack surface; trending layer count and size guides image slimming and base image updates.\n\nDocumented **Data sources**: `sourcetype=docker:image:history`. **App/TA** (typical add-on context): Custom input (`docker history --no-trunc`, `docker image inspect`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:image:history. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:image:history\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by image_id, repository** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total_mb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where layer_count>25 OR total_mb>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (image vs MB), Table (image, layers, total MB), Trend (average image size).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We measure how thick the shipping crate is before it leaves the factory: how many stacked sheets went into the box, how heavy the sealed package really is, and how long it should take to arrive over the road. When the crate breaks the rules for a destination, we flag it early so teams slim the load before anyone waits hours at the loading dock.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.16",
              "n": "Docker Volume Usage Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Named volumes grow with databases and caches; trending usage prevents write failures and emergency disk expansion.",
              "t": "Custom scripted input (`docker system df -v`)",
              "d": "`sourcetype=docker:volumes`",
              "q": "index=containers sourcetype=\"docker:volumes\"\n| eval used_pct=if(SizeGB>0, round(UsedGB/SizeGB*100,1), null())\n| timechart span=1d avg(UsedGB) as used_gb by volume_name",
              "m": "Parse `docker system df -v` or volume inspect into Splunk daily. Alert when volume used GB grows >20% week-over-week or host filesystem backing the volume >85%.",
              "z": "Line chart (used GB over time), Table (volume, host, used %), Single value (largest volume).",
              "kfp": "Planned database bulk-import windows intentionally raise named-volume slopes for hours; align those windows with docker_volume_baseline.csv notes or temporary macro suppressions instead of muting the control globally. Canary rollouts may leave dangling volumes between cutovers while the next task still mounts the successor volume; require two consecutive intervals above threshold or a missing deploy tag before paging storage on-call. Patch Tuesday image promotion waves can spike overlay2 diff measurements while layers extract even when applications are idle; corroborate with docker pull and events timelines before blaming application leaks. Heavy du execution on busy XFS arrays occasionally returns stale size snapshots that flatten regression slopes for a single poll; treat single-interval quiet as suspicious only if df and application logs agree. Legitimate developer clusters may keep large idle volumes by design; route those hosts through FinOps macros rather than production paging. Bind mounts that fill a separate array can trip fs_pct_used on non-docker mounts if your df collector includes them; confirm mount correlation before treating the issue as graph-root only.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`docker system df -v`).\n• Ensure the following data sources are available: `sourcetype=docker:volumes`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse `docker system df -v` or volume inspect into Splunk daily. Alert when volume used GB grows >20% week-over-week or host filesystem backing the volume >85%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:volumes\"\n| eval used_pct=if(SizeGB>0, round(UsedGB/SizeGB*100,1), null())\n| timechart span=1d avg(UsedGB) as used_gb by volume_name\n```\n\nUnderstanding this SPL\n\n**Docker Volume Usage Trending** — Named volumes grow with databases and caches; trending usage prevents write failures and emergency disk expansion.\n\nDocumented **Data sources**: `sourcetype=docker:volumes`. **App/TA** (typical add-on context): Custom scripted input (`docker system df -v`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:volumes. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:volumes\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by volume_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (used GB over time), Table (volume, host, used %), Single value (largest volume).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We track how fast each storage locker on a shared warehouse floor is filling, including the hidden back room where containers scratch temporary marks into the walls. When one locker swells too fast or the whole floor approaches the ceiling, we warn crews before boxes spill into the aisles and everything stops moving.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.17",
              "n": "Container Resource Limit Enforcement",
              "c": "high",
              "f": "intermediate",
              "v": "Verifying cgroup limits match declared `docker run`/`compose` settings catches silent misconfigurations that allow noisy neighbors or false capacity plans.",
              "t": "Splunk Connect for Docker",
              "d": "`sourcetype=docker:inspect`",
              "q": "index=containers sourcetype=\"docker:inspect\"\n| eval mem_limit_bytes=tonumber(HostConfig.Memory)\n| eval nano_cpus=tonumber(HostConfig.NanoCpus)\n| eval cpu_quota=tonumber(HostConfig.CpuQuota)\n| where isnull(mem_limit_bytes) OR mem_limit_bytes=0 OR (nano_cpus=0 AND cpu_quota<=0)\n| table container_name image host mem_limit_bytes nano_cpus cpu_quota",
              "m": "Periodically ingest `docker inspect` for running containers. Flag production workloads with unlimited memory or CPU when policy requires limits. Cross-check with `docker:stats` actual usage.",
              "z": "Table (container, mem limit, CPU), Compliance single value (% with limits), Bar chart by host.",
              "kfp": "Legitimate developer laptops and shared sandbox clusters often run unlimited memory by design; route those hosts through environment tags that never set prod_like unless mis-labeled. Infrastructure agents such as logging forwarders, service-mesh sidecars, and hardware telemetry exporters sometimes ship vendor Dockerfiles without limits until platform teams standardize helm-like wrappers; allowlist them with ticket-backed rows rather than muting the search globally. Containers mid-migration between staging and production tiers can briefly appear under the wrong environment key; reconcile labels before treating ceiling breaches as malicious. Planned load-test windows that temporarily lift limits should carry time-bounded policy_exception metadata referenced in a wrapper macro. Vendor-supplied images may embed runtime-enforced ceilings that differ from operator-supplied docker run flags; pair vendor documentation with inspect output before opening Sev-1 drift tickets. Rootless Docker and cgroup namespace views can make collectors read parent max while inspect shows child limits until path discovery follows delegated subtrees; expect one noisy interval after major upgrades. CI executors that spawn thousands of short-lived ids may miss cgroup polls between creation and teardown; tune host-class macros for builder fleets. Mirantis Container Runtime field renames after patches can cause coalesce misses that resemble policy drift until props aliases refresh. Dual ingestion from experimental eBPF exporters alongside Splunk_TA_nix without deduplication can double-count drift; enforce a single primary writer per node class.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Docker.\n• Ensure the following data sources are available: `sourcetype=docker:inspect`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPeriodically ingest `docker inspect` for running containers. Flag production workloads with unlimited memory or CPU when policy requires limits. Cross-check with `docker:stats` actual usage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:inspect\"\n| eval mem_limit_bytes=tonumber(HostConfig.Memory)\n| eval nano_cpus=tonumber(HostConfig.NanoCpus)\n| eval cpu_quota=tonumber(HostConfig.CpuQuota)\n| where isnull(mem_limit_bytes) OR mem_limit_bytes=0 OR (nano_cpus=0 AND cpu_quota<=0)\n| table container_name image host mem_limit_bytes nano_cpus cpu_quota\n```\n\nUnderstanding this SPL\n\n**Container Resource Limit Enforcement** — Verifying cgroup limits match declared `docker run`/`compose` settings catches silent misconfigurations that allow noisy neighbors or false capacity plans.\n\nDocumented **Data sources**: `sourcetype=docker:inspect`. **App/TA** (typical add-on context): Splunk Connect for Docker. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:inspect. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:inspect\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mem_limit_bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **nano_cpus** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cpu_quota** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(mem_limit_bytes) OR mem_limit_bytes=0 OR (nano_cpus=0 AND cpu_quota<=0)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Container Resource Limit Enforcement**): table container_name image host mem_limit_bytes nano_cpus cpu_quota\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (container, mem limit, CPU), Compliance single value (% with limits), Bar chart by host.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We check that every boxed workload has a painted size on the parking lot, so one vehicle cannot spread across four spaces and block everyone else. We also compare the dashboard sticker to the actual stripes on the pavement, because if those disagree the rules you thought you posted never really applied.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.18",
              "n": "Docker Build Cache Efficiency",
              "c": "low",
              "f": "intermediate",
              "v": "Poor cache reuse lengthens CI pipelines and increases registry churn; measuring cache hits guides Dockerfile ordering and BuildKit settings.",
              "t": "CI log forwarding (BuildKit, docker build --progress=plain)",
              "d": "`sourcetype=docker:build`",
              "q": "index=containers sourcetype=\"docker:build\"\n| eval cache_hit=if(match(_raw, \"(?i)CACHED\"),1,0)\n| stats sum(cache_hit) as hits, count as steps by build_id, image_name\n| eval hit_rate=round(100*hits/steps,1)\n| where hit_rate < 30 AND steps>10\n| sort hit_rate",
              "m": "Ship structured build logs to Splunk. Parse CACHED vs executed steps. Dashboard average cache hit rate per repo branch. Alert on sustained drop after Dockerfile changes.",
              "z": "Line chart (hit rate over builds), Table (repo, hit rate), Bar chart (CI duration vs hit rate).",
              "kfp": "Monthly or weekly upstream base-image refreshes for tags such as ubuntu:24.04 legitimately collapse cache-hit percentages for one build wave even when engineers did nothing wrong; require comparison against a documented base-image rotation calendar before paging. Large Dockerfile refactors that intentionally reorder layers or merge stages produce sharp CACHE-DEGRADED rows that resemble incidents; pair with change records or a lookup refactor_expected flag. Monorepo pipelines that compile many targets in one job report low cache-hit ratios relative to microservice repos because denominator steps include unrelated artifacts; segment by matrix dimension or accept wider floors for those repos in ci_build_sla.csv. Security teams sometimes wipe builder caches after compromise; expect temporary CACHE-DEGRADED across the fleet until warm cycles complete, and annotate ticket ids in dashboards. Self-hosted runner disk pressure evicts local BuildKit cache with LRU behavior, mimicking cache collapse until volumes resize; correlate with host disk_percent and eviction logs. Runner restarts between build stages drop in-memory cache while registry-backed cache remains; duration may spike once without implying Dockerfile faults—check runner uptime versus build start. Parallel CI matrix builds duplicate pipeline_id families or multiply step counts, inflating apparent REGRESSION frequency when streamstats spans heterogeneous jobs; consolidate identifiers upstream or filter matrix labels. Sparse repos with fewer than two builds inside seven days produce noisy drop_vs_baseline_pct; widen earliest for those branches or fall back to daily summaries. Hosted CI brownouts can raise queue_seconds uniformly; confirm provider status before blaming layer cache. Prometheus cardinality or scrape gaps can nullify bk_metrics arms while plain progress still looks healthy; do not silence CACHE-DEGRADED solely because exporter panels are empty.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CI log forwarding (BuildKit, docker build --progress=plain).\n• Ensure the following data sources are available: `sourcetype=docker:build`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nShip structured build logs to Splunk. Parse CACHED vs executed steps. Dashboard average cache hit rate per repo branch. Alert on sustained drop after Dockerfile changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:build\"\n| eval cache_hit=if(match(_raw, \"(?i)CACHED\"),1,0)\n| stats sum(cache_hit) as hits, count as steps by build_id, image_name\n| eval hit_rate=round(100*hits/steps,1)\n| where hit_rate < 30 AND steps>10\n| sort hit_rate\n```\n\nUnderstanding this SPL\n\n**Docker Build Cache Efficiency** — Poor cache reuse lengthens CI pipelines and increases registry churn; measuring cache hits guides Dockerfile ordering and BuildKit settings.\n\nDocumented **Data sources**: `sourcetype=docker:build`. **App/TA** (typical add-on context): CI log forwarding (BuildKit, docker build --progress=plain). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:build. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:build\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cache_hit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by build_id, image_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hit_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hit_rate < 30 AND steps>10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (hit rate over builds), Table (repo, hit rate), Bar chart (CI duration vs hit rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We time how long it takes the kitchen to prep the same lunch when yesterday’s ingredients are still on the counter versus when everything must be fetched fresh. When the team suddenly starts fetching fresh every time, we ask who moved the recipe and whether the stoves are crowded, not only whether the delivery truck was late.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.19",
              "n": "Container Log Driver Health",
              "c": "high",
              "f": "intermediate",
              "v": "When the logging driver backs up or errors, application logs are dropped—blinding security and operations during incidents.",
              "t": "Docker daemon logs, Splunk Connect for Docker",
              "d": "`sourcetype=docker:daemon`",
              "q": "index=containers sourcetype=\"docker:daemon\" (\"log driver\" OR \"failed to log\" OR \"splunk\" OR \"fluentd\")\n| search (level=\"error\" OR level=\"warn\")\n| stats count by host, msg\n| sort -count",
              "m": "Monitor daemon for log driver write failures, buffer overflow, and remote endpoint timeouts. Correlate with missing log volume in Splunk for the same container IDs.",
              "z": "Table (host, error, count), Timeline, Single value (log driver errors/hour).",
              "kfp": "Legitimate startup bursts after container create can spike loglines_per_sec for two to three minutes while health checks warm caches; require sustained intervals above baseline before paging service owners. Incident bridges sometimes enable TRACE or DEBUG intentionally; tag those windows in maintenance lookups so medium_buffer_overrun_warnings rows downgrade after ticket correlation. Collector restarts for fluentd, splunk logging driver targets, or syslog relays increment driver_state_churn and may bump dropped_inc counters briefly without chronic misconfiguration; pair with collector uptime metrics. Hosts participating in deliberate near-full disk stress tests will raise high_rotation_failure_disk_pressure even when engineering expects the condition; exclude tagged hosts or shorten test windows. Structured JSON logging that emits pretty-printed multiline stacks inflates per-line counters versus semantic single events; reconcile baselines using parser-normalized counts when available. Blue-green deploys that recreate containers with new ids but identical images can look like baseline drift until container_log_driver_baseline.csv keys on image alone absorb the churn; add short-lived canary rows if needed. Security patches that only restart dockerd without workload changes may emit benign rotation warnings; compare rotation_cadence_per_hour against application release timelines. Low-volume sidecars that log once per minute may show zero rotation for days; do not treat absence of rotation as failure unless disk pressure is climbing. IPv6-only collector endpoints during migration can present as churn until DNS and routing stabilize; validate dual-stack health before muting. Mirantis field deltas after upgrades occasionally rename MESSAGE tokens until props aliases ship; expect brief uncategorized noise, not sustained forensic loss.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Docker daemon logs, Splunk Connect for Docker.\n• Ensure the following data sources are available: `sourcetype=docker:daemon`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor daemon for log driver write failures, buffer overflow, and remote endpoint timeouts. Correlate with missing log volume in Splunk for the same container IDs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:daemon\" (\"log driver\" OR \"failed to log\" OR \"splunk\" OR \"fluentd\")\n| search (level=\"error\" OR level=\"warn\")\n| stats count by host, msg\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Container Log Driver Health** — When the logging driver backs up or errors, application logs are dropped—blinding security and operations during incidents.\n\nDocumented **Data sources**: `sourcetype=docker:daemon`. **App/TA** (typical add-on context): Docker daemon logs, Splunk Connect for Docker. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:daemon. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:daemon\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, msg** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, error, count), Timeline, Single value (log driver errors/hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We treat application logs like numbered evidence bags on a conveyor. If the belt jams in blocking mode, the whole line slows down; if someone sets the belt to toss overflow bags quietly, evidence disappears and you never know what was inside. We watch belt speed, jam alarms, and how often local log files rotate so the story stays intact.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.20",
              "n": "Docker Registry Mirror Health",
              "c": "medium",
              "f": "intermediate",
              "v": "Registry mirrors reduce pull latency and hub rate limits; a stale or failing mirror causes random image pull delays across the fleet.",
              "t": "Docker daemon config audit, mirror HTTP checks",
              "d": "`sourcetype=docker:info`, `sourcetype=docker:daemon`",
              "q": "index=containers sourcetype=\"http:check\" check_type=\"registry_mirror\"\n| where status!=200 OR response_time_ms>2000\n| table _time mirror_url host status response_time_ms",
              "m": "Log `Registry Mirrors` from `docker info` and probe mirror `/v2/` with auth-less ping where allowed. Alert on daemon errors referencing mirrors or failed probes.",
              "z": "Table (mirror, status, latency), Map or bar by region, Timeline of failures.",
              "kfp": "A fresh golden base rebuild that invalidates manifests can depress cache_hit_pct for one hour without indicating broken mirrors; annotate change records and compare against expected_cache_hit_from_baseline. Scheduled Harbor or Nexus garbage collection produces intentional latency spikes and temporary 503 responses; exclude documented maintenance windows or raise thresholds during those slices. Rolling upgrades of registry pods legitimately generate short 5xx bursts while health checks flap; correlate with deployment pipelines before paging. CI runners that always pull unique digests for inner-loop builds create chronic MISS traffic that looks like cache failure but reflects workflow design, not mirror health. OCI artifacts such as Helm charts or cosign signatures may bypass traditional layer-cache accounting, skewing cache_hit_pct until you add artifact-type filters. Regional failover that shifts pull traffic to a secondary datacenter can elevate p99 while caches warm; treat as expected if geography tags in the lookup explain the shift. When mirrors correctly proxy Docker Hub rate limits, clients may see HTTP 429 with structured bodies; classify those separately from 5xx upstream failures so identity and quota teams own the response instead of storage teams. Artifactory smart remote repositories sometimes log upstream timeouts as 404 or 409 depending on version; tune rex extractions so status classes remain trustworthy.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Docker daemon config audit, mirror HTTP checks.\n• Ensure the following data sources are available: `sourcetype=docker:info`, `sourcetype=docker:daemon`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog `Registry Mirrors` from `docker info` and probe mirror `/v2/` with auth-less ping where allowed. Alert on daemon errors referencing mirrors or failed probes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"http:check\" check_type=\"registry_mirror\"\n| where status!=200 OR response_time_ms>2000\n| table _time mirror_url host status response_time_ms\n```\n\nUnderstanding this SPL\n\n**Docker Registry Mirror Health** — Registry mirrors reduce pull latency and hub rate limits; a stale or failing mirror causes random image pull delays across the fleet.\n\nDocumented **Data sources**: `sourcetype=docker:info`, `sourcetype=docker:daemon`. **App/TA** (typical add-on context): Docker daemon config audit, mirror HTTP checks. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: http:check. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"http:check\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status!=200 OR response_time_ms>2000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Docker Registry Mirror Health**): table _time mirror_url host status response_time_ms\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (mirror, status, latency), Map or bar by region, Timeline of failures.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the regional warehouses that keep extra copies of popular books so nearby libraries can lend them quickly. If a warehouse runs out of shelf space or its sorting machine jams, wait times jump all over the city even when each local library looks healthy from the street.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.21",
              "n": "Container Runtime Security Events",
              "c": "high",
              "f": "advanced",
              "v": "Falco/sysdig/Falco-sidekick style rules surface unexpected shells, sensitive mounts, and syscall anomalies at runtime—complementing image scanning for zero-day behavior.",
              "t": "Falco (JSON to HEC), Sysdig Secure",
              "d": "`sourcetype=falco:alert`, `sourcetype=sysdig:secure`",
              "q": "index=containers sourcetype=\"falco:alert\" priority=\"Critical\" OR priority=\"Error\"\n| stats count by rule, container.name, k8s.pod.name, proc.name\n| sort -count",
              "m": "Forward Falco JSON with `rule`, `priority`, container/k8s metadata. Tune noise with allowlists. Page on Critical; dashboard top rules by container image.",
              "z": "Table (rule, container, count), Timeline, Heatmap (rule vs namespace).",
              "kfp": "GitOps or Helm pre-sync init jobs sometimes run one-shot package installs or curl health checks that resemble T1525 or T1059 until you stamp their image_key rows in falco_rule_baseline.csv with historical fire counts and widen image_process_baseline.csv for the sidecar class. Istio, Linkerd, Consul connect, and Envoy-heavy meshes spawn auxiliary processes that can look like reverse-shell tooling until mesh-owned digests carry explicit expected_proc_regex allowances. Vendor JVM or WebSphere diagnostics may fork bash for thread dumps; document the maintenance digest and tie it to change windows instead of paging as host compromise. Read-only Postgres or MySQL replicas that stream backups through socat or kubectl cp analogues can emit net-like syscall patterns; pre-approve argv templates per backup operator image. The first month after a Falco rules upgrade often looks like novel_rule_signature spam until falco_rule_baseline.csv catches renames—treat notice-only noise as tuning work unless priorities climb to warning. Egress entropy gates can flag autoscaling discovery bursts when a service legitimately fans out to many pod IPs after a rollout; correlate with deployment timestamps before calling lateral movement. Stale c2_iocs.csv rows may label recycled cloud egress IPs; require feed provenance and confidence columns before firewall automation. Blue-green or canary releases that replace every task in a five-minute bucket can spike distinct Falco rule counts without malice; require sustained high priorities or IOC overlap before Sev-1 bridges. Security vendors’ own instrumentation containers sometimes carry permissive Falco macros; negotiate allowlists with the vendor’s threat intel team instead of silent suppression. Red-team exercise tags belong in lookup metadata so purple windows do not exhaust on-call goodwill.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1611",
                "T1525",
                "T1496",
                "T1059",
                "T1543",
                "T1110"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Falco (JSON to HEC), Sysdig Secure.\n• Ensure the following data sources are available: `sourcetype=falco:alert`, `sourcetype=sysdig:secure`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Falco JSON with `rule`, `priority`, container/k8s metadata. Tune noise with allowlists. Page on Critical; dashboard top rules by container image.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"falco:alert\" priority=\"Critical\" OR priority=\"Error\"\n| stats count by rule, container.name, k8s.pod.name, proc.name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Container Runtime Security Events** — Falco/sysdig/Falco-sidekick style rules surface unexpected shells, sensitive mounts, and syscall anomalies at runtime—complementing image scanning for zero-day behavior.\n\nDocumented **Data sources**: `sourcetype=falco:alert`, `sourcetype=sysdig:secure`. **App/TA** (typical add-on context): Falco (JSON to HEC), Sysdig Secure. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: falco:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"falco:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rule, container.name, k8s.pod.name, proc.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rule, container, count), Timeline, Heatmap (rule vs namespace).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We keep motion sensors inside the shop, not only a list of who borrowed the key. If a quiet stockroom suddenly starts rearranging shelves, calling unfamiliar numbers, or tripping a burst of different alarms at once, we treat it like someone turned the back office into a lab without a work order.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.22",
              "n": "Container Health Check Failures",
              "c": "high",
              "f": "beginner",
              "v": "Docker HEALTHCHECK provides a built-in liveness signal. Containers stuck in \"unhealthy\" or \"starting\" state may still appear as \"running\" but are no longer serving traffic correctly, masking outages from basic uptime checks.",
              "t": "Splunk Docker logging driver (HEC), `docker events` scripted input",
              "d": "`sourcetype=docker:events`, `sourcetype=docker:inspect`",
              "q": "index=containers sourcetype=\"docker:events\" type=\"container\" action=\"health_status*\"\n| rex field=action \"health_status: (?<health_status>\\w+)\"\n| where health_status=\"unhealthy\"\n| stats count as unhealthy_count, latest(_time) as last_unhealthy by container_name, container_id, host\n| where unhealthy_count > 3\n| sort -unhealthy_count",
              "m": "Docker emits `health_status` events for containers with a HEALTHCHECK instruction. Forward Docker daemon events to Splunk via HEC or syslog. Alert when a container enters \"unhealthy\" state or stays in \"starting\" for longer than the start period. Correlate with container logs to identify the failing check command. Track health check failure rate per image to identify systemic issues.",
              "z": "Table (unhealthy containers with count), Single value (unhealthy container count), Timeline (health status transitions).",
              "kfp": "Dependency cold-start races on large JVM, Spark, or warehouse images can flip unhealthy briefly while threads warm; pair with start-period tuning and health_check_baseline.csv annotations before paging. Coordinated database failovers emit connection refused lines that look like application bugs; correlate to DB change tickets and downstream restart windows. Probe misconfigurations such as curl against the wrong port or HTTP one versus HTTPS produce chronic unhealthy signals without true user impact until someone fixes the command; validate HEALTHCHECK against the real listener. Legitimate read-only maintenance modes may return HTTP 423 (Locked) on the probe path on purpose; use maintenance lookup flags rather than global suppression. Transient overlay or VPC path flaps during network maintenance can fail HTTP probes intermittently; require sustained streak thresholds on production tiers. Slim images that removed curl after a package cleanup cause probe exit two or executable missing errors; distinguish toolchain failure from app failure before blaming service owners. Aggressive probe intervals on CPU-starved hosts (see UC-3.1.3) can time out under throttle even when dependency health is fine; compare cgroup cpu.stat before opening sev-one bridges. Blue-green cuts that intentionally keep old tasks unhealthy while draining can resemble silent degradation unless deploy ids label the event stream. CI ephemeral runners with toy HEALTHCHECK examples may pollute non-prod indexes unless routed separately.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Docker logging driver (HEC), `docker events` scripted input.\n• Ensure the following data sources are available: `sourcetype=docker:events`, `sourcetype=docker:inspect`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDocker emits `health_status` events for containers with a HEALTHCHECK instruction. Forward Docker daemon events to Splunk via HEC or syslog. Alert when a container enters \"unhealthy\" state or stays in \"starting\" for longer than the start period. Correlate with container logs to identify the failing check command. Track health check failure rate per image to identify systemic issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:events\" type=\"container\" action=\"health_status*\"\n| rex field=action \"health_status: (?<health_status>\\w+)\"\n| where health_status=\"unhealthy\"\n| stats count as unhealthy_count, latest(_time) as last_unhealthy by container_name, container_id, host\n| where unhealthy_count > 3\n| sort -unhealthy_count\n```\n\nUnderstanding this SPL\n\n**Container Health Check Failures** — Docker HEALTHCHECK provides a built-in liveness signal. Containers stuck in \"unhealthy\" or \"starting\" state may still appear as \"running\" but are no longer serving traffic correctly, masking outages from basic uptime checks.\n\nDocumented **Data sources**: `sourcetype=docker:events`, `sourcetype=docker:inspect`. **App/TA** (typical add-on context): Splunk Docker logging driver (HEC), `docker events` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where health_status=\"unhealthy\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by container_name, container_id, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unhealthy_count > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unhealthy containers with count), Single value (unhealthy container count), Timeline (health status transitions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the container equivalent of a doctor checking whether someone can really eat and keep food down, not just whether they are still walking. A box can look running while it serves bad answers for hours; this control notices that kind of quiet failure before customers absorb all the pain.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.23",
              "n": "Container Network I/O Anomalies",
              "c": "medium",
              "f": "intermediate",
              "v": "Per-container network throughput monitoring detects noisy neighbors saturating shared networks, unusual outbound traffic indicating data exfiltration, and connectivity issues causing application timeouts.",
              "t": "`docker stats` scripted input, cAdvisor metrics",
              "d": "`sourcetype=docker:stats`, `sourcetype=cadvisor`",
              "q": "index=containers sourcetype=\"docker:stats\"\n| eval rx_mb=round(rx_bytes/1048576,2), tx_mb=round(tx_bytes/1048576,2)\n| timechart span=5m avg(rx_mb) as rx_avg_mb, avg(tx_mb) as tx_avg_mb by container_name\n| eventstats avg(tx_avg_mb) as baseline_tx, stdev(tx_avg_mb) as stdev_tx by container_name\n| where tx_avg_mb > baseline_tx + 3*stdev_tx",
              "m": "Collect `docker stats` output or cAdvisor metrics at regular intervals. Extract `rx_bytes`, `tx_bytes`, `rx_packets`, `tx_packets`, and `rx_dropped`/`tx_dropped` per container. Baseline per-container network profiles and alert on deviations above 3 standard deviations. High TX from a container that normally has low outbound traffic is a strong exfiltration indicator. Dropped packets signal network saturation.",
              "z": "Line chart (TX/RX per container), Bar chart (top talkers), Table (anomalous containers).",
              "kfp": "Machine-learning data loaders that stream multi-gigabyte training shards hourly will cross three sigma legitimately unless container_egress_baseline.csv encodes their workload class. Log forwarders such as Fluent Bit, Fluentd, Vector, or Promtail batching to remote indexes can spike egress without malicious intent; pair timestamps with known configuration pushes. Backup and DR containers running restic, kopia, or Velero toward object storage produce sustained high share that should be tagged in the baseline lookup. Image layer pulls during rolling upgrades can inflate tx_bytes; always correlate UC-3.1.26 registry telemetry before escalating. Scheduled tc traffic-policy enforcement or temporary rate limits may raise tx_dropped during the window even when applications are healthy. OS package refresh jobs inside privileged maintenance containers create predictable egress bursts after patch Tuesday style events. CI agents and synthetic stress harnesses intentionally saturate links for minutes at a time. Canary deployments that replay production traffic multipliers can look like exfiltration until labeled. Mis-mapped PIDs in procfs collectors occasionally attribute host traffic to the wrong container until the enumerator script is fixed; treat the first firing as a collection bug when host share sums look impossible.",
              "refs": "[CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1048",
                "T1071"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `docker stats` scripted input, cAdvisor metrics.\n• Ensure the following data sources are available: `sourcetype=docker:stats`, `sourcetype=cadvisor`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect `docker stats` output or cAdvisor metrics at regular intervals. Extract `rx_bytes`, `tx_bytes`, `rx_packets`, `tx_packets`, and `rx_dropped`/`tx_dropped` per container. Baseline per-container network profiles and alert on deviations above 3 standard deviations. High TX from a container that normally has low outbound traffic is a strong exfiltration indicator. Dropped packets signal network saturation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:stats\"\n| eval rx_mb=round(rx_bytes/1048576,2), tx_mb=round(tx_bytes/1048576,2)\n| timechart span=5m avg(rx_mb) as rx_avg_mb, avg(tx_mb) as tx_avg_mb by container_name\n| eventstats avg(tx_avg_mb) as baseline_tx, stdev(tx_avg_mb) as stdev_tx by container_name\n| where tx_avg_mb > baseline_tx + 3*stdev_tx\n```\n\nUnderstanding this SPL\n\n**Container Network I/O Anomalies** — Per-container network throughput monitoring detects noisy neighbors saturating shared networks, unusual outbound traffic indicating data exfiltration, and connectivity issues causing application timeouts.\n\nDocumented **Data sources**: `sourcetype=docker:stats`, `sourcetype=cadvisor`. **App/TA** (typical add-on context): `docker stats` scripted input, cAdvisor metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:stats\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **rx_mb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by container_name** — ideal for trending and alerting on this use case.\n• `eventstats` rolls up events into metrics; results are split **by container_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where tx_avg_mb > baseline_tx + 3*stdev_tx` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Container Network I/O Anomalies** — Per-container network throughput monitoring detects noisy neighbors saturating shared networks, unusual outbound traffic indicating data exfiltration, and connectivity issues causing application timeouts.\n\nDocumented **Data sources**: `sourcetype=docker:stats`, `sourcetype=cadvisor`. **App/TA** (typical add-on context): `docker stats` scripted input, cAdvisor metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (TX/RX per container), Bar chart (top talkers), Table (anomalous containers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We weigh every package leaving each loading dock on a shared warehouse floor. When one dock suddenly ships a hundred times its usual weight while the others go quiet, we notice before the highway backs up and other stores miss deliveries.",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=5m | sort - agg_value",
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.24",
              "n": "Docker Exec Session Audit",
              "c": "high",
              "f": "beginner",
              "v": "`docker exec` into a running container is an interactive access event that should be rare in production. Unexpected exec sessions may indicate troubleshooting without change control, unauthorized access, or an attacker establishing a foothold.",
              "t": "`docker events` scripted input, Docker daemon logs",
              "d": "`sourcetype=docker:events`, `sourcetype=docker:daemon`",
              "q": "index=containers sourcetype=\"docker:events\" type=\"container\" action=\"exec_start*\"\n| rex field=action \"exec_start: (?<exec_cmd>.+)\"\n| stats count as exec_count, values(exec_cmd) as commands by container_name, host, _time\n| sort -_time",
              "m": "Docker emits `exec_start` and `exec_create` events when someone runs `docker exec`. Forward daemon events to Splunk. Alert on any exec in production environments, especially during non-business hours. Flag high-risk commands (shells like `/bin/bash`, `/bin/sh`, or commands accessing sensitive paths). Correlate with host SSH/login events to attribute the session to a user.",
              "z": "Table (exec sessions with command and container), Timeline (exec events), Single value (exec count last 24h).",
              "kfp": "Health-check tools that exec into containers for binary-level probes (mysqladmin ping, redis-cli PING, pg_isready) when no probe API exists; classify as business_normal in lookups/docker_exec_command_baseline.csv with a permanent change window so observability platforms do not page leadership at the first health probe of the day. Automated backup tools such as pgBackRest, MongoDB backup operators, and MySQL backup runners exec into containers to invoke pg_dump, mongodump, or mysqldump binaries; pre-approve their image_keys and document the expected schedule alongside the change-window CSV. Sanctioned SRE incident-response shells during declared major incidents that have a CAB pre-approval row are recognized when container_change_windows.csv is refreshed within five minutes of the ticket cutting; if the CSV publisher lags, expect transient critical tiers until the lookup catches up. Kubectl-debug ephemeral debug containers when the Kubernetes runtime still uses Docker emit exec_create and exec_start as if they were ordinary docker exec sessions; route those forwarders through a kubectl-debug allowlist or capture the kube_user attribute via Falco proc_cmdline to differentiate cluster-administrator debug from individual exec activity. CI/CD test runners that use docker exec to run integration tests on builder hosts will produce hundreds of business_normal rows per build; route those to a dev index excluded from this saved search rather than raising thresholds globally. Compose project prefixes can break the change-window CSV join when the CSV stores the bare service name; normalize the publisher to include the prefix exactly as docker:events does. Falco rule renames after vendor upgrades may silence the runtime arm; quarterly replay a lab session to confirm the rlow regex still matches. Cron-driven container maintenance tasks that exec into containers on a fixed schedule will fire if not pre-approved; build a per-image scheduled-task allowlist.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1609"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `docker events` scripted input, Docker daemon logs.\n• Ensure the following data sources are available: `sourcetype=docker:events`, `sourcetype=docker:daemon`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDocker emits `exec_start` and `exec_create` events when someone runs `docker exec`. Forward daemon events to Splunk. Alert on any exec in production environments, especially during non-business hours. Flag high-risk commands (shells like `/bin/bash`, `/bin/sh`, or commands accessing sensitive paths). Correlate with host SSH/login events to attribute the session to a user.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:events\" type=\"container\" action=\"exec_start*\"\n| rex field=action \"exec_start: (?<exec_cmd>.+)\"\n| stats count as exec_count, values(exec_cmd) as commands by container_name, host, _time\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Docker Exec Session Audit** — `docker exec` into a running container is an interactive access event that should be rare in production. Unexpected exec sessions may indicate troubleshooting without change control, unauthorized access, or an attacker establishing a foothold.\n\nDocumented **Data sources**: `sourcetype=docker:events`, `sourcetype=docker:daemon`. **App/TA** (typical add-on context): `docker events` scripted input, Docker daemon logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by container_name, host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Docker Exec Session Audit** — `docker exec` into a running container is an interactive access event that should be rare in production. Unexpected exec sessions may indicate troubleshooting without change control, unauthorized access, or an attacker establishing a foothold.\n\nDocumented **Data sources**: `sourcetype=docker:events`, `sourcetype=docker:daemon`. **App/TA** (typical add-on context): `docker events` scripted input, Docker daemon logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (exec sessions with command and container), Timeline (exec events), Single value (exec count last 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We treat live containers like a bank vault with a security camera inside. When someone opens the door we record who walked in, every thing they touched, and when they walked out, so a quiet three-AM fix and a stranger's secret-grab look completely different in the evidence locker.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count",
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.25",
              "n": "Docker Socket Exposure Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Mounting `/var/run/docker.sock` inside a container grants full Docker API access, effectively giving root-level control over the host. This is the most common Docker privilege escalation vector and should be flagged immediately.",
              "t": "`docker inspect` scripted input, Falco",
              "d": "`sourcetype=docker:inspect`, `sourcetype=falco:alert`",
              "q": "index=containers sourcetype=\"docker:inspect\"\n| spath output=mounts path=Mounts{}\n| mvexpand mounts\n| spath input=mounts output=mount_source path=Source\n| where mount_source=\"/var/run/docker.sock\"\n| table _time, container_name, image, host, mount_source",
              "m": "Periodically run `docker inspect` on all containers and forward the JSON output. Search bind mounts for `/var/run/docker.sock` or the Docker API socket path. Alert on any detection. For runtime detection, use Falco rules that trigger on socket access. Also check for TCP Docker daemon exposure (`-H tcp://0.0.0.0`) in daemon configuration. Only allow socket mounting for explicitly approved infrastructure containers (e.g., Portainer, Traefik) via an allowlist lookup.",
              "z": "Table (containers with socket mount), Single value (exposed container count), Alert (immediate page).",
              "kfp": "Legitimate Portainer, Traefik, Caddy docker-proxy helpers, Watchtower, Diun, and vendor-specific registry mirrors sometimes require docker.sock by design; every such workload must carry an explicit docker_socket_allowlist row with ticket-backed approved_reason text or analysts will keep reopening the same incident. CI build agents that still use Docker-in-Docker with socket forwarding will fire unless you migrate them to rootless Kaniko, BuildKit remote builders, or Podman isolated builds; during transition, tag those hosts with a ci namespace macro rather than global suppression. Local KIND, k3d, minikube, or testcontainers labs on engineer laptops should never ship their feeds into production indexes; route lab forwarders to a dev index and exclude them at the saved-search layer. Splunk Universal Forwarder or Datadog-style host monitoring containers that mount the socket under change control should be pre-approved with expected_container_id pinned to the digest you deploy. Security tools such as Sysdig agents or Falco exporters may legitimately reference docker APIs; document them like any other allowlist consumer. Managed services teams that intentionally expose Engine APIs behind mutual TLS on 2376 can still trigger high severity if argv text lacks verify flags; tune high_2376_no_tls_verify only after engineers prove client certificate enforcement on the load balancer path. Registry pull-through caches that run as privileged sidecars near dockerd can create mount-like strings in docker:events without a true breakout; corroborate with inspect before paging leadership.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `docker inspect` scripted input, Falco.\n• Ensure the following data sources are available: `sourcetype=docker:inspect`, `sourcetype=falco:alert`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPeriodically run `docker inspect` on all containers and forward the JSON output. Search bind mounts for `/var/run/docker.sock` or the Docker API socket path. Alert on any detection. For runtime detection, use Falco rules that trigger on socket access. Also check for TCP Docker daemon exposure (`-H tcp://0.0.0.0`) in daemon configuration. Only allow socket mounting for explicitly approved infrastructure containers (e.g., Portainer, Traefik) via an allowlist lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:inspect\"\n| spath output=mounts path=Mounts{}\n| mvexpand mounts\n| spath input=mounts output=mount_source path=Source\n| where mount_source=\"/var/run/docker.sock\"\n| table _time, container_name, image, host, mount_source\n```\n\nUnderstanding this SPL\n\n**Docker Socket Exposure Detection** — Mounting `/var/run/docker.sock` inside a container grants full Docker API access, effectively giving root-level control over the host. This is the most common Docker privilege escalation vector and should be flagged immediately.\n\nDocumented **Data sources**: `sourcetype=docker:inspect`, `sourcetype=falco:alert`. **App/TA** (typical add-on context): `docker inspect` scripted input, Falco. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:inspect. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:inspect\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Filters the current rows with `where mount_source=\"/var/run/docker.sock\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Docker Socket Exposure Detection**): table _time, container_name, image, host, mount_source\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (containers with socket mount), Single value (exposed container count), Alert (immediate page).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch for little boxes that secretly carry the host’s master key through a shared socket hole, and we watch for the front door of the Docker service being left open on the network. When either happens without paperwork, we raise a loud alarm so strangers cannot drive the whole machine.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.26",
              "n": "Image Pull Failures and Registry Connectivity",
              "c": "high",
              "f": "beginner",
              "v": "Failed image pulls block container starts, scaling operations, and deployments. Common causes include registry rate limits (Docker Hub's 100 pulls/6h for free accounts), expired credentials, network issues, and deleted image tags.",
              "t": "Docker daemon logs, `docker events` scripted input",
              "d": "`sourcetype=docker:daemon`, `sourcetype=docker:events`",
              "q": "index=containers sourcetype=\"docker:daemon\" (\"pull\" AND (\"error\" OR \"denied\" OR \"rate limit\" OR \"not found\" OR \"timeout\"))\n| rex \"Error response from daemon: (?<error_msg>.+)\"\n| stats count as failures, latest(error_msg) as last_error by image, host\n| sort -failures",
              "m": "Forward Docker daemon logs to Splunk. Search for pull-related error messages including authentication failures, rate limit responses (HTTP 429), image-not-found errors, and network timeouts. Alert on any pull failure in production. Track pull failure rate over time to detect intermittent registry connectivity issues. For Docker Hub rate limits, monitor the `RateLimit-Remaining` header if available in debug logs.",
              "z": "Table (failed images with error), Bar chart (failures by registry), Line chart (pull failure rate over time).",
              "kfp": "Monday-morning CI bursts can spike pull_error_rate_pct for a few five-minute buckets while services remain healthy because pipelines retry aggressively; require sustained_bad or ratelimit_pct_used corroboration before paging product teams. Image-pull warming jobs that pre-stage layers before business hours emit many journald lines that resemble failures when verbosity is debug; filter host_class or sourcetype routing for warming pools. Public registry maintenance windows announced on vendor status pages may raise registry_server_5xx without customer deploy defects; pair with vendor incident RSS or status API before blame. Docker Hub anonymous budgets naturally recover as the six-hour window rolls forward even without operator action, so ratelimit_remaining can climb while dashboards still show historical 429 noise; trend forward with time_to_exhaustion_min instead of snapshot panic. Corporate DNS forwarder restarts produce minutes-long dns_p99_ms spikes that affect every registry hostname uniformly; compare fleet_dns_p95 across registries before opening per-registry tickets. ECR CloudTrail throttles during control-plane automation bursts may not imply data-plane pull failure if dockerd already cached layers; correlate with journald pull errors on the same host_id. MITM proxies that rewrite TLS can inflate tls_handshake_p99_ms without registry fault; document proxy class in registry_baseline.csv notes. Dual scrapers emitting docker:metrics duplicates can flatten quantiles oddly until deduplication macros land. Lab clusters that pull only internal mirrors may never populate docker:registry:ratelimit for docker.io; expect null ratelimit_remaining without muting journald arms.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Docker daemon logs, `docker events` scripted input.\n• Ensure the following data sources are available: `sourcetype=docker:daemon`, `sourcetype=docker:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Docker daemon logs to Splunk. Search for pull-related error messages including authentication failures, rate limit responses (HTTP 429), image-not-found errors, and network timeouts. Alert on any pull failure in production. Track pull failure rate over time to detect intermittent registry connectivity issues. For Docker Hub rate limits, monitor the `RateLimit-Remaining` header if available in debug logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:daemon\" (\"pull\" AND (\"error\" OR \"denied\" OR \"rate limit\" OR \"not found\" OR \"timeout\"))\n| rex \"Error response from daemon: (?<error_msg>.+)\"\n| stats count as failures, latest(error_msg) as last_error by image, host\n| sort -failures\n```\n\nUnderstanding this SPL\n\n**Image Pull Failures and Registry Connectivity** — Failed image pulls block container starts, scaling operations, and deployments. Common causes include registry rate limits (Docker Hub's 100 pulls/6h for free accounts), expired credentials, network issues, and deleted image tags.\n\nDocumented **Data sources**: `sourcetype=docker:daemon`, `sourcetype=docker:events`. **App/TA** (typical add-on context): Docker daemon logs, `docker events` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:daemon. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:daemon\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by image, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed images with error), Bar chart (failures by registry), Line chart (pull failure rate over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We track deliveries from outside warehouses to our office: the courier caps how many free trips you get per day, DNS is the address book that must resolve quickly, and HTTPS is the handshake at the door. We warn the office manager when the free-trip budget is almost gone, not only when packages are already refused at the loading dock.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.27",
              "n": "Dangling Images and Volume Cleanup",
              "c": "medium",
              "f": "beginner",
              "v": "Orphaned image layers and anonymous volumes accumulate silently and can consume tens of gigabytes of disk. On CI/CD build hosts this is especially aggressive. Monitoring prevents disk-full incidents caused by Docker storage waste.",
              "t": "`docker system df` scripted input",
              "d": "`sourcetype=docker:system_df`",
              "q": "index=containers sourcetype=\"docker:system_df\"\n| eval reclaimable_gb=round(reclaimable_bytes/1073741824,2)\n| where type IN (\"Images\",\"Volumes\",\"BuildCache\") AND reclaimable_gb > 5\n| table _time, host, type, total_count, active_count, size_gb, reclaimable_gb\n| sort -reclaimable_gb",
              "m": "Run `docker system df -v --format json` on a schedule and forward the output to Splunk. Track `Reclaimable` bytes for images, volumes, and build cache. Alert when reclaimable space exceeds a threshold (e.g., 10 GB or 50% of Docker storage). For automated cleanup, trigger `docker system prune` via a webhook alert action, but only on non-production hosts. Track cleanup events to verify disk recovery.",
              "z": "Gauge (reclaimable space per host), Line chart (storage growth trend), Table (hosts with most waste).",
              "kfp": "Emergency disk-full response teams often run wide manual prunes under incident command; annotate those hosts in prune_schedule_sla.csv with a temporary exempt_manual_prune window so LARGE-MANUAL-PRUNE rows carry ticket context instead of silent suppression. Scheduled maintenance that pauses systemd timers or cron drops CRON-MISSED until automation resumes; pair timer pause records with the change calendar before paging. CI runner fleets legitimately issue many small prunes per pipeline stage; segment those hosts with shorter expected_cadence_hours or a dedicated index so rolling counts do not look like sprawl-response chaos. Multi-stage cleanup steps in build scripts can emit bursts of builder:prune and image:delete events within seconds; treat bursts as one logical run when playbook ids are present in docker:audit:prune_log. Compliance hold dismissals sometimes require deliberate docker image prune -a --force after legal approval; document the approval id beside argv samples so UNSAFE-FLAG-COMBO reviews close quickly. Edge-cluster nodes that manage their own prune cycles outside central automation will show ad-hoc argv patterns; encode those nodes with self_managed_prune=1 in the lookup to expect manual cadence drift without assuming central cron failure.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `docker system df` scripted input.\n• Ensure the following data sources are available: `sourcetype=docker:system_df`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun `docker system df -v --format json` on a schedule and forward the output to Splunk. Track `Reclaimable` bytes for images, volumes, and build cache. Alert when reclaimable space exceeds a threshold (e.g., 10 GB or 50% of Docker storage). For automated cleanup, trigger `docker system prune` via a webhook alert action, but only on non-production hosts. Track cleanup events to verify disk recovery.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:system_df\"\n| eval reclaimable_gb=round(reclaimable_bytes/1073741824,2)\n| where type IN (\"Images\",\"Volumes\",\"BuildCache\") AND reclaimable_gb > 5\n| table _time, host, type, total_count, active_count, size_gb, reclaimable_gb\n| sort -reclaimable_gb\n```\n\nUnderstanding this SPL\n\n**Dangling Images and Volume Cleanup** — Orphaned image layers and anonymous volumes accumulate silently and can consume tens of gigabytes of disk. On CI/CD build hosts this is especially aggressive. Monitoring prevents disk-full incidents caused by Docker storage waste.\n\nDocumented **Data sources**: `sourcetype=docker:system_df`. **App/TA** (typical add-on context): `docker system df` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:system_df. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:system_df\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **reclaimable_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where type IN (\"Images\",\"Volumes\",\"BuildCache\") AND reclaimable_gb > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Dangling Images and Volume Cleanup**): table _time, host, type, total_count, active_count, size_gb, reclaimable_gb\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (reclaimable space per host), Line chart (storage growth trend), Table (hosts with most waste).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We log every automated and manual Docker cleanup like a signed receipt: who ran it, what flags they used, how much space actually came back, and whether the schedule we promised was kept. That way a scary midnight cleanup and a broken weekly job look completely different in the evidence folder.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.28",
              "n": "Docker Swarm Service Replica Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Swarm services may show the correct replica count globally but have tasks stuck in \"pending\", \"rejected\", or \"failed\" state. Monitoring task-level health catches scheduling failures, resource constraints, and rolling update stalls that the service-level view hides.",
              "t": "`docker service` scripted input, Docker daemon logs",
              "d": "`sourcetype=docker:service`, `sourcetype=docker:daemon`",
              "q": "index=containers sourcetype=\"docker:service\"\n| where desired_replicas != running_replicas OR failed_tasks > 0\n| eval gap=desired_replicas - running_replicas\n| table _time, service_name, desired_replicas, running_replicas, gap, failed_tasks, update_status\n| sort -gap",
              "m": "Poll `docker service ls --format json` and `docker service ps <service> --format json` on a schedule. Extract `desired`, `running`, and task states. Alert when running replicas are fewer than desired for more than 2 consecutive checks. Track `update_status` during rolling updates — alert when an update is \"paused\" (hit failure threshold). Monitor task rejection reasons (resource constraints, image pull failures, port conflicts) to diagnose scheduling issues.",
              "z": "Table (services with replica gaps), Single value (unhealthy service count), Timeline (task failures).",
              "kfp": "Planned change windows sometimes set update-failure-action pause on purpose while operators validate canary traffic, which surfaces as critical_rolling_update_paused even though leadership expects the halt. Drain workflows that cordon workers for patching can leave services under-replicated for short intervals when reschedule budgets are tight; sustained_replica_deficit should respect maintenance macros keyed off node availability state. Global services and replicated-job modes change the meaning of desired counts; a flat replicated-mode assumption can mis-state deficit until you branch modes in props. Registry certificate rotations and pull secrets refreshes routinely spike medium_pending_image_pull rows while tasks retry; correlate to credential change tickets before paging. Regional failover drills that temporarily narrow label constraints can create bursts of high_task_rejected_constraint text mentioning no suitable node even though capacity returns minutes later. After Raft leader elections, managers can briefly emit orphaned task rows while state catches up; pair with swarm_event_volume to avoid treating transient bookkeeping as data loss. Blue-green style external traffic cuts may pause updates by design while load balancers drain; tag those stacks in swarm_service_slo.csv with an allowed_pause flag if your governance permits. Low replica counts in development clusters often recover within thirty seconds as images warm; tune ratio thresholds for non-prod tiers. Network attachment delays on dense overlay churn can look like pending storms without true failures; validate with docker network inspect samples before blaming applications. Backup managers that are intentionally demoted should not contribute duplicate service polls; deduplicate host_id in collection. Health-check driven rollbacks can resemble incidents in the alert stream even when service-level availability never dropped; require customer-impact context from load balancer pools.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `docker service` scripted input, Docker daemon logs.\n• Ensure the following data sources are available: `sourcetype=docker:service`, `sourcetype=docker:daemon`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `docker service ls --format json` and `docker service ps <service> --format json` on a schedule. Extract `desired`, `running`, and task states. Alert when running replicas are fewer than desired for more than 2 consecutive checks. Track `update_status` during rolling updates — alert when an update is \"paused\" (hit failure threshold). Monitor task rejection reasons (resource constraints, image pull failures, port conflicts) to diagnose scheduling issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:service\"\n| where desired_replicas != running_replicas OR failed_tasks > 0\n| eval gap=desired_replicas - running_replicas\n| table _time, service_name, desired_replicas, running_replicas, gap, failed_tasks, update_status\n| sort -gap\n```\n\nUnderstanding this SPL\n\n**Docker Swarm Service Replica Health** — Swarm services may show the correct replica count globally but have tasks stuck in \"pending\", \"rejected\", or \"failed\" state. Monitoring task-level health catches scheduling failures, resource constraints, and rolling update stalls that the service-level view hides.\n\nDocumented **Data sources**: `sourcetype=docker:service`, `sourcetype=docker:daemon`. **App/TA** (typical add-on context): `docker service` scripted input, Docker daemon logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:service. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:service\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where desired_replicas != running_replicas OR failed_tasks > 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Docker Swarm Service Replica Health**): table _time, service_name, desired_replicas, running_replicas, gap, failed_tasks, update_status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (services with replica gaps), Single value (unhealthy service count), Timeline (task failures).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the cluster scheduler's promise that every service it agreed to run at full strength really has that many healthy copies running, that rolling upgrades keep moving instead of freezing halfway, and that when a node cannot take work the failure reason is visible instead of quietly piling up in the background.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.1.29",
              "n": "Container Filesystem Write Rate",
              "c": "medium",
              "f": "intermediate",
              "v": "High write rates to the container's writable layer (overlay filesystem) indicate missing volume mounts, excessive application logging to local disk, or tmp file abuse. This degrades performance and fills the Docker storage driver, potentially affecting all containers on the host.",
              "t": "`docker stats` scripted input, cAdvisor metrics",
              "d": "`sourcetype=docker:stats`, `sourcetype=cadvisor`",
              "q": "index=containers sourcetype=\"docker:stats\"\n| eval block_write_mb=round(block_write_bytes/1048576,2)\n| timechart span=5m sum(block_write_mb) as write_mb by container_name\n| where write_mb > 100",
              "m": "Collect `docker stats` or cAdvisor block I/O metrics at regular intervals. Extract `blkio.io_service_bytes_recursive` for read and write operations per container. Alert when any container sustains write rates above 100 MB per 5-minute window. Investigate which process inside the container is writing (use `docker exec` or container logs). Common root causes: application logging to stdout captured by json-file driver with no rotation, temp file accumulation, and missing persistent volume mounts for data directories.",
              "z": "Line chart (write rate per container), Bar chart (top writers), Table (containers exceeding threshold).",
              "kfp": "Deliberately write-heavy ETL containers that materialize large temp files inside the upper layer then exit can spike write_mb_5min legitimately; tag them in volume_mount_inventory.csv or route through CI macros instead of paging production on-call. Database containers in developer sandboxes often lack volume mounts while still writing gigabytes to the graph layer; treat as expected in non-production segments or annotate writable_layer_quota.csv with environment-specific ceilings. CI and image-build runners compile artifacts into ephemeral layers by design; segment those hosts or raise thresholds with FinOps approval. Containers that mount tmpfs for /tmp may still show high block_write_bytes on some collectors when metrics conflate host page cache effects; corroborate with inspect Mounts and process-level writers before calling a log-spam pattern. restart-policy=always loops on flaky dependencies inflate restart_count and cumulative writes even when the application is idle between crashes; pair with UC-3.1.13 before blaming storage abuse. Scrubbing or compliance jobs that intentionally fill scratch space resemble attacks; require maintenance_lookup alignment. Batch extract-transform-load windows that fsync large staging files once per hour can cross one hundred megabytes per five minutes without implying a missing volume; compare to expects_data_volume flags. Prometheus exposition duplicates from dual scrapers can double cadvisor deltas until deduplication macros enforce one writer per host class.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `docker stats` scripted input, cAdvisor metrics.\n• Ensure the following data sources are available: `sourcetype=docker:stats`, `sourcetype=cadvisor`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect `docker stats` or cAdvisor block I/O metrics at regular intervals. Extract `blkio.io_service_bytes_recursive` for read and write operations per container. Alert when any container sustains write rates above 100 MB per 5-minute window. Investigate which process inside the container is writing (use `docker exec` or container logs). Common root causes: application logging to stdout captured by json-file driver with no rotation, temp file accumulation, and missing persistent volume mounts fo…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"docker:stats\"\n| eval block_write_mb=round(block_write_bytes/1048576,2)\n| timechart span=5m sum(block_write_mb) as write_mb by container_name\n| where write_mb > 100\n```\n\nUnderstanding this SPL\n\n**Container Filesystem Write Rate** — High write rates to the container's writable layer (overlay filesystem) indicate missing volume mounts, excessive application logging to local disk, or tmp file abuse. This degrades performance and fills the Docker storage driver, potentially affecting all containers on the host.\n\nDocumented **Data sources**: `sourcetype=docker:stats`, `sourcetype=cadvisor`. **App/TA** (typical add-on context): `docker stats` scripted input, cAdvisor metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: docker:stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"docker:stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **block_write_mb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by container_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where write_mb > 100` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Network by Performance.host span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Container Filesystem Write Rate** — High write rates to the container's writable layer (overlay filesystem) indicate missing volume mounts, excessive application logging to local disk, or tmp file abuse. This degrades performance and fills the Docker storage driver, potentially affecting all containers on the host.\n\nDocumented **Data sources**: `sourcetype=docker:stats`, `sourcetype=cadvisor`. **App/TA** (typical add-on context): `docker stats` scripted input, cAdvisor metrics. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.Network` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (write rate per container), Bar chart (top writers), Table (containers exceeding threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch how fast each short-term rental box scribbles on walls that disappear when the lease ends, and whether those scribbles are big enough to clog the shared hallway trash chute. When one guest marks too fast or the hallway nears capacity, we warn the building crew before everyone gets stuck carrying boxes.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t sum(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Network by Performance.host span=5m | sort - agg_value",
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.6,
          "qd": {
            "gold": 0,
            "silver": 2,
            "bronze": 27,
            "none": 0
          }
        },
        {
          "i": "3.2",
          "n": "Kubernetes",
          "u": [
            {
              "i": "3.2.1",
              "n": "Pod Restart Rate",
              "c": "critical",
              "f": "intermediate",
              "v": "High restart counts indicate application instability. Pods may appear \"Running\" but are constantly crashing and restarting, degrading service quality.",
              "t": "Splunk OpenTelemetry Collector for K8s",
              "d": "`sourcetype=kube:container:meta`, kube-state-metrics",
              "q": "index=k8s sourcetype=\"kube:container:meta\"\n| stats max(restartCount) as restarts by namespace, pod_name, container_name\n| where restarts > 5\n| sort -restarts",
              "m": "Deploy the Splunk OTel Collector as a DaemonSet for kube-state-metrics. The restart counter (`kube_pod_container_status_restarts_total`) is cumulative, so use a windowed increase (`streamstats` delta or `mstats` rate) rather than raw max. Alert when the 1-hour increase exceeds 5 for any pod outside `kube-system`. Filter known CronJob namespaces with a lookup to reduce noise.",
              "z": "Table (namespace, pod, container, restarts), Bar chart by namespace, Trending line.",
              "kfp": "Rolling image refreshes triggered by kubectl set image or GitOps syncs legitimately restart Pods once per ReplicaSet wave; expect a short-lived rise in restarts_window that should collapse within one or two scrape intervals after the rollout stabilizes, and compare timestamps against kube_audit subjects if you ingest API audit JSON. Horizontal Pod Autoscaler scale-in events can delete Pods that still had low restart counters while siblings keep running; the churn concentrates on removed Pods rather than signaling application pathology, so join HPAs only when restarts climb on remaining replicas. Voluntary evictions during node-pressure or priority preemption reorder work without a bad container image; pair eviction messages with node conditions before paging application teams, and read UC-3.2.33 for drain choreography so you do not confuse cordon-and-drain turbulence with spontaneous app failure. Spot and preemptible pools inject deliberate instance loss; bursts that align with cloud preemption notices and node disappearance should downgrade severity unless customer-tier namespaces lack PDB coverage. Batch and CI namespaces may intentionally burn higher restart budgets during chaos drills or data-loader Jobs; pod_inventory.csv should mark those tiers with generous restart_budget_per_hour values so the alert respects expected churn. Cluster upgrades that recycle DaemonSets can elevate kube-system counters briefly; keep platform namespaces on strict budgets but allow maintenance windows via lookup flags such as maintenance_mode=1 with reviewer approval. Dual scrapers emitting duplicate kube-state-metrics samples occasionally double deltas until deduplication lands; watch for mirrored _time rows with identical restart_cumulative values. Short scrape outages followed by catch-up scrapes can synthesize a synthetic spike; require two consecutive intervals above threshold or compare against kube-state-metrics scrape interval metadata when available.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OpenTelemetry Collector for K8s.\n• Ensure the following data sources are available: `sourcetype=kube:container:meta`, kube-state-metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy the Splunk OTel Collector as a DaemonSet for kube-state-metrics. The restart counter (`kube_pod_container_status_restarts_total`) is cumulative, so use a windowed increase (`streamstats` delta or `mstats` rate) rather than raw max. Alert when the 1-hour increase exceeds 5 for any pod outside `kube-system`. Filter known CronJob namespaces with a lookup to reduce noise.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:container:meta\"\n| stats max(restartCount) as restarts by namespace, pod_name, container_name\n| where restarts > 5\n| sort -restarts\n```\n\nUnderstanding this SPL\n\n**Pod Restart Rate** — High restart counts indicate application instability. Pods may appear \"Running\" but are constantly crashing and restarting, degrading service quality.\n\nDocumented **Data sources**: `sourcetype=kube:container:meta`, kube-state-metrics. **App/TA** (typical add-on context): Splunk OpenTelemetry Collector for K8s. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:container:meta. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:container:meta\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, pod_name, container_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where restarts > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (namespace, pod, container, restarts), Bar chart by namespace, Trending line.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We count how often the small programs inside our cloud boxes stop and start again, because steady little stops can wear a service down even when everything still looks green. When the pattern spikes or breaks the rules we set for important spaces, we raise a clear signal so teams fix it early.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "3.2.2",
              "n": "Pod Scheduling Failures",
              "c": "high",
              "f": "beginner",
              "v": "Pods stuck in Pending can't serve traffic. Usually caused by insufficient CPU/memory, node affinity rules, or persistent volume claim issues.",
              "t": "Splunk OTel Collector, Kubernetes event forwarding",
              "d": "`sourcetype=kube:events`",
              "q": "index=k8s sourcetype=\"kube:events\" reason=\"FailedScheduling\"\n| stats count by namespace, involvedObject.name, message\n| sort -count",
              "m": "Forward Kubernetes events to Splunk. Alert on FailedScheduling events persisting >5 minutes. Parse the event message for the specific cause (Insufficient cpu, node affinity, PVC not bound, etc.).",
              "z": "Table (pod, namespace, reason), Single value (pending pods), Timeline.",
              "kfp": "Pods may wait on purpose while new node pools come online during planned capacity expansion; require soak minutes and explicit FailedScheduling text before paging. Development clusters that intentionally run hot will show insufficient resource messages during load tests; dampen via workload_tier sandbox. Brief Pending windows during cluster autoscaler upscale can occur until nodes register and become Ready; correlate UC-3.2.46 before treating as pure scheduler misconfiguration. Single-zone topology spread on tiny clusters often fails until architects relax constraints; treat as education not outage when sandbox namespaces experiment. GPU Pods may wait until GPU nodes finish provisioning; pair with node pool timelines. Taint-based maintenance windows during approved changes should carry time-boxed suppressions. Pods using nodeSelector for spot-instance pools may wait when pools drain; FinOps and reliability should agree on acceptable wait. Headless StatefulSet Pods tied to local PersistentVolumes may wait for hostname alignment; distinguish from generic CPU starvation using volume affinity language in events. Rolling upgrades can transiently raise scheduler_pending_pods; compare to change records. Duplicate kube-state-metrics scrapes without deduplication can distort soak until relabel rules converge. Test namespaces that pin impossible affinities will always fire; exclude via lookup flags.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "N/A (operational reliability monitoring)"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, Kubernetes event forwarding.\n• Ensure the following data sources are available: `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Kubernetes events to Splunk. Alert on FailedScheduling events persisting >5 minutes. Parse the event message for the specific cause (Insufficient cpu, node affinity, PVC not bound, etc.).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:events\" reason=\"FailedScheduling\"\n| stats count by namespace, involvedObject.name, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Pod Scheduling Failures** — Pods stuck in Pending can't serve traffic. Usually caused by insufficient CPU/memory, node affinity rules, or persistent volume claim issues.\n\nDocumented **Data sources**: `sourcetype=kube:events`. **App/TA** (typical add-on context): Splunk OTel Collector, Kubernetes event forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, involvedObject.name, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (pod, namespace, reason), Single value (pending pods), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch when new instances cannot be placed on any machine because the scheduler keeps reporting that nothing fits, and we show how long they waited and why. That helps teams fix capacity, placement rules, or maintenance settings before customers feel missing service.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.3",
              "n": "Node NotReady Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "A NotReady node can't run pods. Existing pods are evicted after the toleration timeout (default 5 min). Causes service disruption if no replacement capacity.",
              "t": "Splunk OTel Collector, kube-state-metrics",
              "d": "`sourcetype=kube:node:meta`, Kubernetes events",
              "q": "index=k8s sourcetype=\"kube:node:meta\"\n| where condition_ready=\"False\"\n| table _time node condition_ready",
              "m": "OTel Collector monitors node conditions. Alert immediately on any node transitioning to NotReady. Correlate with kubelet logs on the affected node for root cause (disk pressure, memory pressure, PID pressure, network).",
              "z": "Node status grid (green/red), Events timeline, Table.",
              "kfp": "Managed Kubernetes layers that roll operating-system security patches through node pools often surface sixty-to-ninety second NotReady windows while kubelets restart; align those windows to provider maintenance bulletins and suppress rows when node_inventory.csv allow_reboot_window matches the observed interval. Spot, preemptible, or deallocated instances emit short NotReady phases immediately before termination; pair cloud termination metadata with this alert so on-call treats the signal as capacity churn rather than kubelet pathology. Fresh nodes joining a pool can report Ready=false during image pull and kubelet registration for roughly thirty to ninety seconds; dampen alerts for roughly ten minutes after node creation timestamps from cloud APIs or from kube_node_created_seconds when that gauge is present. Severe but brief CPU starvation on a worker can delay kubelet status posts without structural failure; when node-exporter or host Performance metrics show a spike that resolves inside thirty seconds, treat the first sample as informational unless lease lag stays elevated. Transient network loss between workers and the apiserver may flip the Ready condition while the data plane still serves east-west traffic; require sustained dwell beyond one minute or corroborate lease failures before paging single-node incidents during known carrier work.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, kube-state-metrics.\n• Ensure the following data sources are available: `sourcetype=kube:node:meta`, Kubernetes events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nOTel Collector monitors node conditions. Alert immediately on any node transitioning to NotReady. Correlate with kubelet logs on the affected node for root cause (disk pressure, memory pressure, PID pressure, network).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:node:meta\"\n| where condition_ready=\"False\"\n| table _time node condition_ready\n```\n\nUnderstanding this SPL\n\n**Node NotReady Detection** — A NotReady node can't run pods. Existing pods are evicted after the toleration timeout (default 5 min). Causes service disruption if no replacement capacity.\n\nDocumented **Data sources**: `sourcetype=kube:node:meta`, Kubernetes events. **App/TA** (typical add-on context): Splunk OTel Collector, kube-state-metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:node:meta. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:node:meta\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where condition_ready=\"False\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Node NotReady Detection**): table _time node condition_ready\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Node status grid (green/red), Events timeline, Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "Each worker machine must check in with the cluster control point every few seconds. When a machine goes quiet, programs on it keep running locally but the cluster may soon force-move them. We watch the quiet duration and count how many programs would be moved if the quiet spell crosses the usual five-minute tolerance.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.4",
              "n": "Resource Quota Exhaustion",
              "c": "high",
              "f": "beginner",
              "v": "When namespace quotas are exhausted, new pods can't be created. Impacts deployments, autoscaling, and job scheduling within the namespace.",
              "t": "Splunk OTel Collector, kube-state-metrics",
              "d": "`sourcetype=kube:resourcequota:meta`",
              "q": "index=k8s sourcetype=\"kube:resourcequota:meta\"\n| eval used_pct = round(used / hard * 100, 1)\n| where used_pct > 80\n| table namespace resource used hard used_pct\n| sort -used_pct",
              "m": "kube-state-metrics exposes resource quota data. Collect via OTel Collector. Alert when any resource (cpu, memory, pods, services) exceeds 80% of quota.",
              "z": "Gauge per namespace/resource, Table, Bar chart by namespace.",
              "kfp": "Horizontal pod autoscaler bursts can pin pod counts at one hundred percent of hard for a few scrape intervals while old pods terminate; dampen by requiring sustained streamstats-qualified exhaustion for at least sixty seconds and by ignoring single-bucket spikes when denial_count_5m is zero. Chaos namespaces that deliberately overflow quotas should carry suppression labels propagated into namespace_ownership_inventory.csv so game-day traffic never pages production bridges. Brand-new namespaces cloned from tight admission templates may flash exhaustion on first real rollout; open a ticket and cross-check UC-3.2.32 history before paging executives, or auto-apply standard hard values via GitOps. After large Deployment deletes, kube-state-metrics used gauges can lag briefly while garbage collection finishes; corroborate with live kubectl describe resourcequota before accusing stale telemetry. CronJobs that launch short-lived pods every minute may rhythmically kiss pod quotas; require denial narratives or raise minimum denial_count_5m for bronze CI namespaces. Duplicate scrapes from overlapping Prometheus and OpenTelemetry collectors without dedup keys can distort ratios; validate unique sample sources per cluster. Managed control planes that hide audit bodies may lack 403 messages even when kubectl shows errors; fall back to kube events. Label renames during kube-state-metrics upgrades can null coalesce paths for one interval; treat missing metrics plus steady denials as scrape incidents.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, kube-state-metrics.\n• Ensure the following data sources are available: `sourcetype=kube:resourcequota:meta`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nkube-state-metrics exposes resource quota data. Collect via OTel Collector. Alert when any resource (cpu, memory, pods, services) exceeds 80% of quota.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:resourcequota:meta\"\n| eval used_pct = round(used / hard * 100, 1)\n| where used_pct > 80\n| table namespace resource used hard used_pct\n| sort -used_pct\n```\n\nUnderstanding this SPL\n\n**Resource Quota Exhaustion** — When namespace quotas are exhausted, new pods can't be created. Impacts deployments, autoscaling, and job scheduling within the namespace.\n\nDocumented **Data sources**: `sourcetype=kube:resourcequota:meta`. **App/TA** (typical add-on context): Splunk OTel Collector, kube-state-metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:resourcequota:meta. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:resourcequota:meta\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Resource Quota Exhaustion**): table namespace resource used hard used_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge per namespace/resource, Table, Bar chart by namespace.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "Each Kubernetes namespace is given a budget for compute, memory, and objects such as pods and storage claims. When that budget is completely spent, new work cannot start and upgrades can stall. We warn you when the budget is empty so the owning team can raise it before customers feel delays.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.5",
              "n": "Persistent Volume Claims",
              "c": "high",
              "f": "intermediate",
              "v": "Unbound PVCs prevent stateful workloads (databases, message queues) from starting. Often caused by storage class misconfiguration or capacity.",
              "t": "Splunk OTel Collector, Kubernetes events",
              "d": "`sourcetype=kube:events`, `sourcetype=kube:pvc:meta`",
              "q": "index=k8s sourcetype=\"kube:events\" reason=\"ProvisioningFailed\" OR reason=\"FailedBinding\"\n| stats count by namespace, involvedObject.name, message\n| sort -count",
              "m": "Forward Kubernetes events and PVC metadata. Alert on PVCs in Pending phase >5 minutes. Include storage class and requested size in alert context.",
              "z": "Table (PVC, namespace, status, storage class), Status indicators.",
              "kfp": "WaitForFirstConsumer StorageClasses legitimately leave new claims in phase Pending until a pod that references the volume is scheduled; treat long soak only as incident-grade when no workload is waiting or when events already show FailedProvisioning. Short-lived Pending rows from CI namespaces that create and delete claims within a few minutes resemble outages in raw metrics; require the ten-minute soak or an explicit provisioning event before paging production. Snapshot restore or clone workflows can keep claims Pending while the external snapshot controller binds objects; cross-check VolumeSnapshotContents before blaming CSI drivers. Capacity-starved development clusters may queue dynamic provisioning for tens of minutes during noisy neighbor tests; dampen dev tiers via critical_namespaces.csv workload_tier=sandbox. Namespaces stuck in Terminating often retain PVC finalizers; kubectl get namespace shows status Terminating and Splunk still lists Pending phase even though no new work is expected—route to namespace finalizer runbooks instead of storage rebuilds. Manual static provisioning with pre-created PersistentVolumes and claimRef can intentionally delay binding until selectors align; suppress known migration windows with CAB macros. RWX requests against a ReadWriteOnce-only class produce immediate FailedProvisioning messages that are real incidents but not disk-full problems; do not hand off to UC-3.2.16 fill-rate analytics. Helm test hooks that mint ephemeral PVCs may spam events; exclude chart test namespaces via lookup flags. Dual kube-state-metrics scrapes without deduplication can double pending_soak calculations until relabel rules converge. Cloud credential or quota errors on EBS, PD, Azure Disk, or vSphere volumes surface as distinct error strings; avoid muting the alert when only the message wording changes between vendor releases.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "N/A (operational reliability monitoring)"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, Kubernetes events.\n• Ensure the following data sources are available: `sourcetype=kube:events`, `sourcetype=kube:pvc:meta`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Kubernetes events and PVC metadata. Alert on PVCs in Pending phase >5 minutes. Include storage class and requested size in alert context.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:events\" reason=\"ProvisioningFailed\" OR reason=\"FailedBinding\"\n| stats count by namespace, involvedObject.name, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Persistent Volume Claims** — Unbound PVCs prevent stateful workloads (databases, message queues) from starting. Often caused by storage class misconfiguration or capacity.\n\nDocumented **Data sources**: `sourcetype=kube:events`, `sourcetype=kube:pvc:meta`. **App/TA** (typical add-on context): Splunk OTel Collector, Kubernetes events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, involvedObject.name, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (PVC, namespace, status, storage class), Status indicators.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch storage claims that never finish attaching so databases and other apps that need saved data cannot start. When a claim stays unfinished too long or the cluster reports a volume setup failure, we raise a clear signal so teams fix settings or capacity before customers feel it.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.6",
              "n": "Deployment Rollout Failures",
              "c": "critical",
              "f": "intermediate",
              "v": "A failed rollout means new code isn't deploying successfully. Pods may be crash-looping, image pulls failing, or health checks not passing.",
              "t": "Splunk OTel Collector, Kubernetes events",
              "d": "`sourcetype=kube:events`",
              "q": "index=k8s sourcetype=\"kube:events\" involvedObject.kind=\"Deployment\" (reason=\"ProgressDeadlineExceeded\" OR reason=\"ReplicaSetUpdated\" OR reason=\"FailedCreate\")\n| table _time namespace involvedObject.name reason message\n| sort -_time",
              "m": "Monitor deployment events. Alert on `ProgressDeadlineExceeded` which means the deployment failed to complete within its configured deadline. Correlate with pod events for root cause.",
              "z": "Table (deployment, namespace, reason), Timeline, Status panel.",
              "kfp": "Deliberate kubectl rollout pause leaves Deployments intentionally mid-rollout with Progressing false and ReplicaSet counts that look alarming until you read pause annotations; inventory pause_expected or GitOps freeze labels should suppress paging. Argo Rollouts, Flagger, or vendor progressive delivery controllers often hold canary phases where available replicas intentionally lag desired counts while traffic weight shifts; require progressive_delivery_tool metadata before treating partial convergence as failure. ProgressDeadlineExceeded can appear on genuinely slow stateful workloads with long initialization even when healthy; extend sla_minutes and document database or migration class services to avoid cruel paging. Brief controller flap during kube-controller-manager leader election or control-plane rolling upgrades can lag observed_generation for one or two scrapes; demand sustained gen_lag with corroborating apiserver latency before executive escalation. Planned kubectl rollout undo during incident response may spike ReplicaSet churn without bad intent; correlate k8s_audit user subjects and incident bridges before reopening blameless reviews. Hot-fix rollouts in emergency windows may relax SLA minutes contractually; reflect relaxed sla_minutes in deployment_inventory.csv for the duration. Heavy clusters with delayed kube-state-metrics scrapes can show stale condition reasons that clear on the next interval; combine with kube events first_seen join to avoid single-scrape noise. Chaos experiments that inject deadline breaches should carry chaos.rollout=expected labels mirrored into suppression lookups. GitOps revert loops that self-heal within five minutes may still trigger rs_thrash_dc; use minimum dwell gates tied to rollout_age_minutes before paging non-production tiers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, Kubernetes events.\n• Ensure the following data sources are available: `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor deployment events. Alert on `ProgressDeadlineExceeded` which means the deployment failed to complete within its configured deadline. Correlate with pod events for root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:events\" involvedObject.kind=\"Deployment\" (reason=\"ProgressDeadlineExceeded\" OR reason=\"ReplicaSetUpdated\" OR reason=\"FailedCreate\")\n| table _time namespace involvedObject.name reason message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Deployment Rollout Failures** — A failed rollout means new code isn't deploying successfully. Pods may be crash-looping, image pulls failing, or health checks not passing.\n\nDocumented **Data sources**: `sourcetype=kube:events`. **App/TA** (typical add-on context): Splunk OTel Collector, Kubernetes events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Deployment Rollout Failures**): table _time namespace involvedObject.name reason message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (deployment, namespace, reason), Timeline, Status panel.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "When we publish new software to our container platform, the system should replace old copies in a controlled way. Sometimes that replacement stalls halfway, misses its deadline, or keeps spinning new copies without finishing. We watch those stall signals so teams fix the rollout before customers get inconsistent service.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.7",
              "n": "Control Plane Health",
              "c": "critical",
              "f": "beginner",
              "v": "The control plane (API server, etcd, scheduler, controller-manager) is the brain of Kubernetes. Degradation affects all cluster operations.",
              "t": "Splunk OTel Collector, control plane component metrics",
              "d": "API server metrics, etcd metrics, scheduler/controller-manager logs",
              "q": "index=k8s sourcetype=\"kube:apiserver\"\n| timechart span=5m avg(apiserver_request_duration_seconds) as avg_latency by verb\n| where avg_latency > 1",
              "m": "Configure OTel Collector to scrape control plane metrics endpoints (/metrics on each component). Monitor API server request latency, etcd request duration, scheduler binding latency. Alert on P99 latency >1s or error rates >1%.",
              "z": "Line chart (latency by verb), Single value (error rate), Multi-panel dashboard per component.",
              "kfp": "Etcd periodic compaction and defragmentation windows routinely amplify apiserver request latency histograms for roughly five to ten minutes while the store is consistent; pair apiserver alerts with etcd maintenance annotations or a maintenance=true label on cluster rows so SLO burn during compaction does not page as control-plane failure. Cluster-autoscaler scale-out storms and massive DaemonSet rollouts create short inflight-request bursts on apiserver that clear within one or two scrape intervals; require two consecutive five-minute buckets above threshold or correlate with a known workload event before paging. Kops, Kubeadm, Cluster API, or vendor rolling control-plane upgrades legitimately increment etcd_server_leader_changes_seen_total and flip leader_election_master_status samples; suppress when change_ticket_id is present on the HEC event or when cluster carries upgrade_in_progress=true in cluster_platform_routing.csv, and alert only on repeated leader churn after the maintenance window closes. Amazon EKS, Google GKE, and Microsoft AKS managed control planes occasionally restart apiserver instances during transparent platform maintenance; cross-check cloud provider health dashboards and provider_event streams, and downgrade to informational when the cloud status page acknowledges regional control-plane work. Single-node Minikube, Kind, K3d, and Docker Desktop clusters exhibit perpetual non-HA etcd and scheduler/controller noise; set suppress_single_node_dev=1 in cluster_platform_routing.csv for those names so developers are not paged. Admission webhook outages (UC-3.2.21) can raise apiserver 5xx and latency; when webhook_failure_total or audit denial narratives spike in sibling indexes, route to service-mesh or admission owners instead of etcd on-call. Backup controllers issuing list-watch storms after watch reconnection can inflate workqueue_depth without user impact until depth persists for fifteen minutes; require slope-based escalation using a secondary saved search if this pattern is common in your estate.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, control plane component metrics.\n• Ensure the following data sources are available: API server metrics, etcd metrics, scheduler/controller-manager logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure OTel Collector to scrape control plane metrics endpoints (/metrics on each component). Monitor API server request latency, etcd request duration, scheduler binding latency. Alert on P99 latency >1s or error rates >1%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:apiserver\"\n| timechart span=5m avg(apiserver_request_duration_seconds) as avg_latency by verb\n| where avg_latency > 1\n```\n\nUnderstanding this SPL\n\n**Control Plane Health** — The control plane (API server, etcd, scheduler, controller-manager) is the brain of Kubernetes. Degradation affects all cluster operations.\n\nDocumented **Data sources**: API server metrics, etcd metrics, scheduler/controller-manager logs. **App/TA** (typical add-on context): Splunk OTel Collector, control plane component metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:apiserver. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:apiserver\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by verb** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_latency > 1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency by verb), Single value (error rate), Multi-panel dashboard per component.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "The control plane is the brain of a Kubernetes cluster. If the brain is unhealthy, you can't deploy, you can't scale, and recovery from any other failure becomes impossible. This use case watches the brain itself.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "3.2.8",
              "n": "etcd Cluster Health",
              "c": "critical",
              "f": "beginner",
              "v": "etcd stores all Kubernetes state. etcd problems (leader elections, compaction failures, high latency) cascade into cluster-wide failures.",
              "t": "Splunk OTel Collector, etcd metrics",
              "d": "etcd Prometheus metrics (scraped by OTel)",
              "q": "index=k8s sourcetype=\"kube:etcd\"\n| timechart span=5m avg(etcd_disk_wal_fsync_duration_seconds) as fsync_latency, sum(etcd_server_leader_changes_seen_total) as leader_changes\n| where fsync_latency > 0.01 OR leader_changes > 0",
              "m": "Scrape etcd metrics via OTel Collector. Monitor disk fsync latency (<10ms healthy), database size, leader changes, and proposal failures. Alert on leader changes (indicates instability) and high fsync latency.",
              "z": "Line chart (fsync latency, db size), Single value (leader changes), Gauge (db size).",
              "kfp": "Periodic MVCC compaction every few minutes produces short WAL fsync and commit latency spikes that clear within one or two scrape intervals; require two consecutive five-minute buckets above threshold or annotate maintenance windows in cluster_platform_routing.csv before paging. Kind, Minikube, K3d single-node clusters lack meaningful peer RTT and Raft churn; set suppress_single_node_dev=1 so leader deltas do not wake on-call. Managed EKS, GKE, and AKS clusters without exported etcd metrics should produce zero rows—if an alert fires from stale test data, fix ingest routing rather than blaming etcd. Cluster-autoscaler bursts that add many nodes can create transient apiserver and etcd write pressure; correlate with audit volume and autoscaler events, and dampen alerts when only one bucket is hot. Rolling etcd upgrades legitimately increment leader change counters once per member; tie suppression to change_ticket_id fields on HEC events or upgrade_in_progress labels. Backup solutions that list enormous object graphs during snapshot windows can inflate snapshot_save averages without user-visible outage; compare with etcd_server_is_leader and apiserver error rates before escalating. CRD installation storms during GitOps reconciliation can spike commit p99 temporarily; pair with git commit timelines. Network micro-bursts during control-plane certificate rotations may raise peer RTT histograms for a single interval; verify continuous packet loss before declaring AZ misplacement. False positives from mis-labeled scrape targets (scraping apiserver instead of etcd) show impossible metric combinations; validate job and pod labels in discovery weekly.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, etcd metrics.\n• Ensure the following data sources are available: etcd Prometheus metrics (scraped by OTel).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScrape etcd metrics via OTel Collector. Monitor disk fsync latency (<10ms healthy), database size, leader changes, and proposal failures. Alert on leader changes (indicates instability) and high fsync latency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:etcd\"\n| timechart span=5m avg(etcd_disk_wal_fsync_duration_seconds) as fsync_latency, sum(etcd_server_leader_changes_seen_total) as leader_changes\n| where fsync_latency > 0.01 OR leader_changes > 0\n```\n\nUnderstanding this SPL\n\n**etcd Cluster Health** — etcd stores all Kubernetes state. etcd problems (leader elections, compaction failures, high latency) cascade into cluster-wide failures.\n\nDocumented **Data sources**: etcd Prometheus metrics (scraped by OTel). **App/TA** (typical add-on context): Splunk OTel Collector, etcd metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:etcd. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:etcd\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where fsync_latency > 0.01 OR leader_changes > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (fsync latency, db size), Single value (leader changes), Gauge (db size).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "etcd is the database at the heart of every Kubernetes cluster — it remembers every running app, secret, and config. When etcd is slow or full, the entire cluster slows or stops. We watch etcd's heartbeat so we catch trouble before workloads suffer.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "3.2.9",
              "n": "Ingress Error Rates",
              "c": "high",
              "f": "intermediate",
              "v": "Ingress controllers are the front door to your services. High error rates mean users are getting errors. Catches backend failures and misconfigurations.",
              "t": "Ingress controller log forwarding (NGINX, Traefik, etc.)",
              "d": "`sourcetype=kube:ingress:nginx` or similar",
              "q": "index=k8s sourcetype=\"kube:ingress:nginx\"\n| eval is_error = if(status >= 500, 1, 0)\n| timechart span=5m sum(is_error) as errors, count as total\n| eval error_rate = if(total>0, round(errors/total*100, 2), 0)\n| where error_rate > 5",
              "m": "Forward ingress controller access logs. Parse status code, upstream response time, and backend server. Alert when 5xx error rate exceeds 5% over 5 minutes.",
              "z": "Line chart (error rate over time), Table (top error paths), Single value (current error rate).",
              "kfp": "Brief 502 or 503 bursts during NGINX Ingress configuration reloads follow every Ingress create or update because the controller replaces worker processes; dampen by requiring consecutive buckets above threshold or by correlating reload timestamps from kubernetes audit logs. Readiness gaps during scale-out can show 502 while new pods pass kubelet probes but cloud load balancers have not yet registered targets; suppress using AWS TargetTransitioning annotations or an internal readiness_delay_minutes column on ingress_oncall_routing.csv. Synthetic monitors that probe /healthz without authentication sometimes produce 404 or 401; the SPL excludes common probe paths—extend the macro if your tool uses uncommon URIs. Internet bot and scanner traffic elevates 4xx rates without customer impact; route those signals to security analytics instead of application SLO pages by splitting indexes or sourcetypes for edge WAF logs. Preview and staging clusters that intentionally return 404 on unknown hosts should set suppress_preview_tier in the lookup so medium 4xx surges never page production bridges. Traefik debug access logs in non-production can inflate upstream_zero flags when routers are intentionally partial; scope alerts with cluster tier columns.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Ingress controller log forwarding (NGINX, Traefik, etc.).\n• Ensure the following data sources are available: `sourcetype=kube:ingress:nginx` or similar.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward ingress controller access logs. Parse status code, upstream response time, and backend server. Alert when 5xx error rate exceeds 5% over 5 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:ingress:nginx\"\n| eval is_error = if(status >= 500, 1, 0)\n| timechart span=5m sum(is_error) as errors, count as total\n| eval error_rate = if(total>0, round(errors/total*100, 2), 0)\n| where error_rate > 5\n```\n\nUnderstanding this SPL\n\n**Ingress Error Rates** — Ingress controllers are the front door to your services. High error rates mean users are getting errors. Catches backend failures and misconfigurations.\n\nDocumented **Data sources**: `sourcetype=kube:ingress:nginx` or similar. **App/TA** (typical add-on context): Ingress controller log forwarding (NGINX, Traefik, etc.). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:ingress:nginx. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:ingress:nginx\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_error** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (error rate over time), Table (top error paths), Single value (current error rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "Most apps in a Kubernetes cluster face the public internet through one or two Ingress Controllers. When those controllers fail or slow down, every customer hits errors no matter how healthy the apps behind them are. This use case watches the front door of the cluster.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nginx",
                "traefik"
              ],
              "em": [
                "nginx_open"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.10",
              "n": "CrashLoopBackOff Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "CrashLoopBackOff is the most common Kubernetes failure mode. The pod is crashing, restarting, and crashing again with exponential backoff. Service is down.",
              "t": "Splunk OTel Collector, kube-state-metrics",
              "d": "`sourcetype=kube:container:meta`, Kubernetes events",
              "q": "index=k8s sourcetype=\"kube:events\" reason=\"BackOff\"\n| stats count by namespace, involvedObject.name, message\n| where count > 3\n| sort -count",
              "m": "Monitor Kubernetes events for `BackOff` reason. Also check container status for `waiting.reason=CrashLoopBackOff`. Alert immediately. Include container logs in alert for diagnostic context.",
              "z": "Table (pod, namespace, count, message), Status panel, Single value (CrashLoop pods count).",
              "kfp": "CronJob or Job pods that exit non-zero by design after processing a batch can enter CrashLoopBackOff if the manifest uses restartPolicy Always; suppress with lookup k8s_workload_routing.csv job_allowed_nonzero=1 or exclude namespace batch-etl-staging. Init containers stuck during cold start while CoreDNS is still warming can show CrashLoopBackOff on the init path; UC-3.2.30 owns init-phase analytics, so gate this alert with kube_pod_container_status_waiting_reason only for non-init container names or add init_container=false in the lookup. Cluster-autoscaler drains that evict pods and reschedule within two minutes often emit BackOff lines without sustained waiting_reason=CrashLoopBackOff; require time_in_clbo_min>=5 before paging prod unless restarts_24h spikes above the macro. Spark or Flink driver containers that intentionally recycle during driver recovery can resemble a loop; route through workload_kind in the routing lookup and downgrade when owner_workload matches documented platform jobs. Short-lived canary pods in mesh ingress namespaces may flap during weighted route changes; tie suppression to deploy annotation hashes in kube_audit within the same window. GitOps controllers that repeatedly apply a bad manifest generate repeating Failed events; keep the alert but attach audit user to avoid blaming the wrong team. StatefulSet pods waiting for PVC binding can show CrashLoopBackOff on the main container when the entrypoint assumes the volume is mounted; pair with PVC phase metrics or a secondary lookup on pvc_bound=false to reroute to storage on-call instead of app on-call.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, kube-state-metrics.\n• Ensure the following data sources are available: `sourcetype=kube:container:meta`, Kubernetes events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Kubernetes events for `BackOff` reason. Also check container status for `waiting.reason=CrashLoopBackOff`. Alert immediately. Include container logs in alert for diagnostic context.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:events\" reason=\"BackOff\"\n| stats count by namespace, involvedObject.name, message\n| where count > 3\n| sort -count\n```\n\nUnderstanding this SPL\n\n**CrashLoopBackOff Detection** — CrashLoopBackOff is the most common Kubernetes failure mode. The pod is crashing, restarting, and crashing again with exponential backoff. Service is down.\n\nDocumented **Data sources**: `sourcetype=kube:container:meta`, Kubernetes events. **App/TA** (typical add-on context): Splunk OTel Collector, kube-state-metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, involvedObject.name, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (pod, namespace, count, message), Status panel, Single value (CrashLoop pods count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "When a program inside our cloud keeps dying as soon as it starts, the system tries again but waits longer each time until it is effectively stuck. That means customers may get errors or silence until we fix the underlying cause, so we raise a clear signal instead of discovering it by accident.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.11",
              "n": "HPA Scaling Events",
              "c": "medium",
              "f": "beginner",
              "v": "HPA scaling events show when applications are hitting capacity. Repeated max-scale events indicate undersized limits or unexpected traffic.",
              "t": "Splunk OTel Collector, Kubernetes events",
              "d": "`sourcetype=kube:events`",
              "q": "index=k8s sourcetype=\"kube:events\" involvedObject.kind=\"HorizontalPodAutoscaler\"\n| stats count by namespace, involvedObject.name, reason, message\n| sort -count",
              "m": "Forward Kubernetes events. Track scaling decisions and current vs. desired replicas. Alert when HPA reaches maxReplicas (application may be under-provisioned).",
              "z": "Line chart (replica count over time), Table of scaling events, Area chart.",
              "kfp": "Scheduled batch jobs that briefly push CPU or memory above target can legitimately trigger several up and down transitions around each run window; align alert schedules with job calendars or exclude batch namespaces via lookup flags. Autoscale-on-startup warm-up after deploys often produces a short burst of transitions while probes and caches settle; require sustained flap_heavy across multiple evaluation intervals before paging executives. Business-hours traffic crests and legitimate lunch-trough dips can produce civil up and down motion without customer harm; pair with golden signal dashboards before opening sev-one bridges. CronJob-style cyclical workloads can create repeating daily patterns that resemble flapping; use seasonality macros or baseline compares. Freshly deployed workloads calibrating requests and limits may oscillate until Vertical Pod Autoscaler or manual tuning stabilizes; suppress known onboarding namespaces for their first week when documented. KEDA cold-start bursts from scale-to-zero can spike transition counts during the first minutes of an event wave; cross-check ScaledObject cooldown before blaming classic HPA misconfiguration. Scrape-interval mismatch between kube-state-metrics and application metrics can alias sharp spikes into multi-bucket transitions; validate scrape alignment and recording rules. Custom-metric API outages or partial responses can make external metrics flap while CPU is flat; pivot to adapter health checks when ScalingActive toggles with sparse SuccessfulRescale lines. Very low minReplicas combined with bursty internet traffic can produce frequent scale-out edges that are economically intentional; review minReplicas policy before widening stabilization windows. Deliberate cluster cost optimization that uses aggressive scale-down policies may increase down transitions during off-hours; confirm FinOps-approved schedules before treating as defect. Node rotation or consolidation that reschedules many pods can create correlated rescales across HPAs that look like fleet-wide flapping; correlate with UC-3.2.46 timelines and node drain windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, Kubernetes events.\n• Ensure the following data sources are available: `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Kubernetes events. Track scaling decisions and current vs. desired replicas. Alert when HPA reaches maxReplicas (application may be under-provisioned).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:events\" involvedObject.kind=\"HorizontalPodAutoscaler\"\n| stats count by namespace, involvedObject.name, reason, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**HPA Scaling Events** — HPA scaling events show when applications are hitting capacity. Repeated max-scale events indicate undersized limits or unexpected traffic.\n\nDocumented **Data sources**: `sourcetype=kube:events`. **App/TA** (typical add-on context): Splunk OTel Collector, Kubernetes events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, involvedObject.name, reason, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (replica count over time), Table of scaling events, Area chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch how often the automatic replica scaler nudges instance counts up and down so we notice thrashing before bills jump or the service starts bouncing. We line that up with the cluster’s own success messages so we know the noise is real, not just a glitch in the charts.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.12",
              "n": "RBAC Audit",
              "c": "high",
              "f": "beginner",
              "v": "RBAC misconfigurations grant excessive permissions. Unauthorized access attempts indicate potential compromise or misconfigured service accounts.",
              "t": "Kubernetes audit log forwarding",
              "d": "`sourcetype=kube:audit`",
              "q": "index=k8s sourcetype=\"kube:audit\" responseStatus.code>=403\n| stats count by user.username, verb, objectRef.resource, objectRef.namespace\n| sort -count",
              "m": "Enable Kubernetes audit logging (audit policy file). Forward audit logs to Splunk. Alert on 403 Forbidden responses, especially from service accounts. Track RBAC changes (ClusterRole, ClusterRoleBinding modifications).",
              "z": "Table (user, resource, verb, denials), Bar chart by user, Timeline.",
              "kfp": "Legitimate platform administration during published change windows produces bursts of RoleBinding updates, cluster-admin elevations for etcd or apiserver recovery, and impersonation headers used by support tooling—correlate with tickets or trusted_admin_users.csv before paging. GitOps controllers such as Argo CD and Flux continuously reconcile RBAC manifests; volume spikes that align with git commits are expected, while divergent live bindings signal drift worth investigating. Break-glass accounts during incidents may temporarily bind cluster-admin; suppress only with time-bound lookup rows and owner notes. CI and CD pipeline service accounts often receive expanded RBAC during rollout windows; tie suppressions to pipeline identities rather than muting entire clusters. Vendor operators including Velero, cert-manager, ingress controllers, and OpenShift platform operators sometimes ship ClusterRoleBindings that appear alarming but are documented; maintain an operator exception inventory keyed by namespace and chart version. Node-bootstrap and kubeadm upgrade paths churn system:node and CSR-related bindings; treat short maintenance-adjacent windows as normal when node provisioning runbooks match timestamps. Fleet migrations from blanket cluster-admin to least-privilege roles create transient elevations and denials while workloads adapt; phase annotations in your lookup prevent false escalations. Pen testers and purple-team exercises will intentionally trigger forbidden responses; tag those namespaces in governance data. Regional cluster name collisions without proper cluster labels can mis-join suppressions—enforce unique cluster_row values in CSVs.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kubernetes audit log forwarding.\n• Ensure the following data sources are available: `sourcetype=kube:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Kubernetes audit logging (audit policy file). Forward audit logs to Splunk. Alert on 403 Forbidden responses, especially from service accounts. Track RBAC changes (ClusterRole, ClusterRoleBinding modifications).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:audit\" responseStatus.code>=403\n| stats count by user.username, verb, objectRef.resource, objectRef.namespace\n| sort -count\n```\n\nUnderstanding this SPL\n\n**RBAC Audit** — RBAC misconfigurations grant excessive permissions. Unauthorized access attempts indicate potential compromise or misconfigured service accounts.\n\nDocumented **Data sources**: `sourcetype=kube:audit`. **App/TA** (typical add-on context): Kubernetes audit log forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user.username, verb, objectRef.resource, objectRef.namespace** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, resource, verb, denials), Bar chart by user, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch who receives powerful control inside your container platform and who tries to touch private settings they should not see. When automation accounts gain dangerous powers or strangers probe sensitive areas, we raise the alarm early.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.13",
              "n": "Certificate Expiration",
              "c": "high",
              "f": "advanced",
              "v": "Kubernetes uses TLS certificates extensively (API server, kubelet, etcd). Expired certs cause cluster communication failures and outages.",
              "t": "cert-manager metrics, custom scripted input",
              "d": "cert-manager events, `kubeadm certs check-expiration` output",
              "q": "index=k8s sourcetype=\"certmanager:metrics\"\n| eval days_left = round((certmanager_certificate_expiration_timestamp_seconds - now()) / 86400, 0)\n| where days_left < 30",
              "m": "Deploy cert-manager and scrape its metrics. Monitor certificate expiration timestamps. Alert at 30/14/7 day thresholds. For kubeadm clusters, scripted input running `kubeadm certs check-expiration`.",
              "z": "Table (cert, namespace, days remaining), Single value (certs expiring soon), Status indicator.",
              "kfp": "cert-manager and kubeadm rotations routinely present two distinct notAfter epochs for the same logical subject_cn inside a twenty-four-hour window while static file scans still read the previous on-disk PEM; treat rotation_in_flight=1 from the dc(not_after_epoch) arm as a suppression flag for red rotation-overdue pages until the newer epoch stabilizes or until the inventory lookup marks rotation_method=cert-manager with an active Certificate Ready=True snapshot. Amazon EKS, Google GKE, and Microsoft AKS rotate some apiserver-facing certificates without exposing the same filesystem paths as self-managed kubeadm; rows tied to cloud-managed rotation_method should skip red tiers when the only signal is a missing /etc/kubernetes/pki/apiserver.crt on nodes that never hosted that material—gate with expected_path_present=0 in k8s_cert_inventory.csv. kind and minikube clusters often run deliberately short-lived bootstrap certs; mark those clusters dev_tier in the lookup and raise yellow only inside thirty days unless prod_criticality_override is set. Self-signed issuers in lab clusters frequently break naive issuer_cn regex expectations; use expected_issuer_regex with a broad ALLOW-LAB pattern rather than deleting telemetry. Nightly kubeadm JSON and continuous filelog can double-count the same serial when both land; dedupe on fingerprint_sha256 or serial_hex before paging. Prometheus text exposition occasionally omits namespace labels on certmanager_certificate_expiration_timestamp_seconds after upgrades; fallback to kube:objects:certs for namespace binding before declaring a missing workload cert. Webhook caBundle rotations sourced only from ConfigMap volume mounts may lag API status fields by one rollout wave; pair object snapshots with admission failure logs from UC-3.2.45 before assuming PKI doom. Inventory model joins can map dest to node name while cluster grain uses eks_cluster_arn; maintain a cluster_alias column in the lookup to prevent false orphan rows. Clock skew between control-plane host and indexer time distorts days_to_expiry; enforce chrony on collectors when variance exceeds two minutes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: cert-manager metrics, custom scripted input.\n• Ensure the following data sources are available: cert-manager events, `kubeadm certs check-expiration` output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy cert-manager and scrape its metrics. Monitor certificate expiration timestamps. Alert at 30/14/7 day thresholds. For kubeadm clusters, scripted input running `kubeadm certs check-expiration`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"certmanager:metrics\"\n| eval days_left = round((certmanager_certificate_expiration_timestamp_seconds - now()) / 86400, 0)\n| where days_left < 30\n```\n\nUnderstanding this SPL\n\n**Certificate Expiration** — Kubernetes uses TLS certificates extensively (API server, kubelet, etcd). Expired certs cause cluster communication failures and outages.\n\nDocumented **Data sources**: cert-manager events, `kubeadm certs check-expiration` output. **App/TA** (typical add-on context): cert-manager metrics, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: certmanager:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"certmanager:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left < 30` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cert, namespace, days remaining), Single value (certs expiring soon), Status indicator.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We track the many digital credentials that let a Kubernetes cluster talk to itself securely. When any of them expire, things can fail in odd order—sometimes quietly—until an important app stops working. We warn weeks ahead so teams can renew them calmly instead of during an emergency.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.14",
              "n": "Container Image Pull Failures",
              "c": "high",
              "f": "intermediate",
              "v": "ImagePullBackOff prevents pods from starting. Caused by wrong image tags, registry auth failures, or network issues. Blocks deployments.",
              "t": "Splunk OTel Collector, Kubernetes events",
              "d": "`sourcetype=kube:events`",
              "q": "index=k8s sourcetype=\"kube:events\" (reason=\"ErrImagePull\" OR reason=\"ImagePullBackOff\" OR reason=\"Failed\" message=\"*pulling image*\")\n| stats count by namespace, involvedObject.name, message\n| sort -count",
              "m": "Forward Kubernetes events. Alert on ImagePullBackOff events. Parse the image name and registry to identify whether it's an auth issue, missing tag, or network issue.",
              "z": "Table (pod, image, error), Single value (pull failures last hour), Bar chart by namespace.",
              "kfp": "Brief ImagePullBackOff during cluster-autoscaler scale-out can appear when a new node downloads large layers for the first time; the condition often clears within thirty to ninety seconds while kubelet retries succeed. Require time_in_ipb_min above the min_dwell gate and corroborate with registry_failed_events_5m before paging production unless a single critical namespace is completely idle. Image-warming DaemonSets or registry preflight jobs sometimes schedule pods that intentionally reference cold or nonexistent layers to prime cache; mark those namespaces with warmup_job=1 in k8s_workload_routing.csv and exclude them from paging macros. Chaos experiments that deliberately fail pulls should carry chaos.imagepull=expected labels and a matching suppression row so game-day traffic does not look like a credential incident. Brief ImagePullBackOff spikes during in-cluster registry pod restarts can occur when pull-through cache pods recycle; dampen with vendor maintenance annotations or require two consecutive alert intervals above threshold. CI namespaces that rebuild unique tags every commit may hammer registries without production customer impact; lower severity using workload_tier=sandbox in k8s_namespace_tier.csv. Dual scrape agents emitting duplicate kube-state-metrics samples can inflate kubelet_join noise until deduplication macros land; watch for identical _raw lines doubling counts. Lab clusters that point at air-gapped mirrors may never populate docker.io storm columns even during real outages; document mirror hostname expectations in image_pull_secret_inventory.csv notes to avoid misinterpreted nulls.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "N/A (operational reliability monitoring)"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, Kubernetes events.\n• Ensure the following data sources are available: `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Kubernetes events. Alert on ImagePullBackOff events. Parse the image name and registry to identify whether it's an auth issue, missing tag, or network issue.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:events\" (reason=\"ErrImagePull\" OR reason=\"ImagePullBackOff\" OR reason=\"Failed\" message=\"*pulling image*\")\n| stats count by namespace, involvedObject.name, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Container Image Pull Failures** — ImagePullBackOff prevents pods from starting. Caused by wrong image tags, registry auth failures, or network issues. Blocks deployments.\n\nDocumented **Data sources**: `sourcetype=kube:events`. **App/TA** (typical add-on context): Splunk OTel Collector, Kubernetes events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, involvedObject.name, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (pod, image, error), Single value (pull failures last hour), Bar chart by namespace.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "Before any app on our Kubernetes clusters can run, the cluster must download its container image. When that download fails—a bad name, expired login, or blocked connection—the app never starts. We watch every download error so we can fix it before customers notice.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.15",
              "n": "DaemonSet Completeness",
              "c": "medium",
              "f": "beginner",
              "v": "DaemonSets (monitoring agents, log forwarders, network plugins) must run on every eligible node. Missing instances create monitoring or networking gaps.",
              "t": "Splunk OTel Collector, kube-state-metrics",
              "d": "`sourcetype=kube:daemonset:meta`",
              "q": "index=k8s sourcetype=\"kube:daemonset:meta\"\n| eval missing = desiredNumberScheduled - numberReady\n| where missing > 0\n| table namespace daemonset_name desiredNumberScheduled numberReady missing",
              "m": "kube-state-metrics reports DaemonSet status. Alert when `numberReady < desiredNumberScheduled` for >5 minutes. Critical for infrastructure DaemonSets (CNI plugins, OTel Collector, kube-proxy).",
              "z": "Table (DaemonSet, desired, ready, missing), Status indicator, Single value.",
              "kfp": "GPU or accelerator DaemonSets legitimately skip CPU-only nodes when nodeSelector excludes non-GPU fleets; confirm workload_class and inventory before paging. OnDelete strategies during deliberate maintenance can show updated counts lagging until operators delete pods; pair with ondelete_expected flags. Tolerations updated on freshly tainted nodes may need one or two reconcile intervals before DaemonSet pods reschedule; dampen short flaps. Recent metadata.generation bumps still rolling within thirty minutes can look like stalls when scrapes are coarse; require rolling_stall_min_sec dwell. Cordoned nodes and voluntary drains evict DaemonSet pods on purpose; correlate cordon state and change tickets. Nodes booting after autoscaler scale-out may lack DaemonSet pods for several minutes; use provider lifecycle metadata. Nightly OS patching cycles with batched reboots create predictable holes; align maintenance windows. Calico or Cilium BGP or policy updates may lag under ten minutes on large fleets; treat sub-threshold gaps as operational noise unless misschedule persists. Hardware-tag taints recently added without matching toleration edits look like incidents until the controller catches up. Intentional Always or manual rollout policies that keep older pod generations by design should carry inventory annotations so alerts mute. EKS Fargate and some AKS constrained profiles cannot run certain DaemonSets; document provider exemptions to avoid false escalations. Test clusters with synthetic partial failures may mimic production breaches; route lab noise with tier filters.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, kube-state-metrics.\n• Ensure the following data sources are available: `sourcetype=kube:daemonset:meta`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nkube-state-metrics reports DaemonSet status. Alert when `numberReady < desiredNumberScheduled` for >5 minutes. Critical for infrastructure DaemonSets (CNI plugins, OTel Collector, kube-proxy).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:daemonset:meta\"\n| eval missing = desiredNumberScheduled - numberReady\n| where missing > 0\n| table namespace daemonset_name desiredNumberScheduled numberReady missing\n```\n\nUnderstanding this SPL\n\n**DaemonSet Completeness** — DaemonSets (monitoring agents, log forwarders, network plugins) must run on every eligible node. Missing instances create monitoring or networking gaps.\n\nDocumented **Data sources**: `sourcetype=kube:daemonset:meta`. **App/TA** (typical add-on context): Splunk OTel Collector, kube-state-metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:daemonset:meta. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:daemonset:meta\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **missing** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where missing > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DaemonSet Completeness**): table namespace daemonset_name desiredNumberScheduled numberReady missing\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (DaemonSet, desired, ready, missing), Status indicator, Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the small fleet-wide helpers that must run on every eligible machine. When counts fall out of line, updates stall, or pods land on the wrong machines, logging, networking, or security layers can go partly blind, so we raise a clear signal.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.16",
              "n": "Kubernetes PersistentVolume Claim Capacity",
              "c": "high",
              "f": "intermediate",
              "v": "PVC approaching storage limits; prevents application failures from full volumes.",
              "t": "Splunk Connect for Kubernetes, metrics from kubelet",
              "d": "kubelet metrics (`kubelet_volume_stats_used_bytes`, `kubelet_volume_stats_capacity_bytes`)",
              "q": "index=k8s sourcetype=\"kube:metrics\" (metric_name=\"kubelet_volume_stats_used_bytes\" OR metric_name=\"kubelet_volume_stats_capacity_bytes\")\n| stats latest(_value) as value by metric_name, namespace, persistentvolumeclaim, node\n| xyseries namespace,persistentvolumeclaim,node metric_name value\n| eval used_pct = round(kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes * 100, 1)\n| where used_pct > 80\n| table namespace persistentvolumeclaim node used_pct kubelet_volume_stats_used_bytes kubelet_volume_stats_capacity_bytes\n| sort -used_pct",
              "m": "Configure Splunk Connect for Kubernetes or OTel Collector to scrape kubelet metrics. The kubelet exposes volume stats at `/metrics` on each node. Extract `kubelet_volume_stats_used_bytes` and `kubelet_volume_stats_capacity_bytes` with labels `namespace`, `persistentvolumeclaim`. Alert when any PVC exceeds 80% capacity. Consider 90% for critical stateful workloads.",
              "z": "Gauge per PVC, Table (namespace, PVC, node, used %, bytes), Line chart (trend over time).",
              "kfp": "Brief saturation spikes during bulk data-loader or restore jobs can cross high thresholds without customer impact; dampen using workload labels, maintenance macros, or temporary suppressions keyed on backup_class in pvc_inventory.csv when the window is CAB-approved. Archival or backup PVCs that are expected to land near full should carry backup_class=archive and criticality bronze so the alert routes to storage operations dashboards instead of product bridges. After PVC migration or node drain, growth rate may read zero until twenty-five hourly samples accumulate; days_to_full stays null and should not page alone without corroborating events. Inode_pct may climb while used_gb stays modest on legitimate small-file workloads such as mail or build artifact caches; pair with workload knowledge before declaring misconfiguration. Teams that deliberately run above eighty percent for cost control should set an inventory suppression flag or custom criticality tier after FinOps sign-off so governance matches intent. Duplicate scrapes from overlapping agents without dedup keys can jitter hour-over-hour growth; validate single writer per kubelet target. CIM node joins may miss when Performance.host naming differs from kubelet node labels; the control remains valid on kubelet metrics alone.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Kubernetes, metrics from kubelet.\n• Ensure the following data sources are available: kubelet metrics (`kubelet_volume_stats_used_bytes`, `kubelet_volume_stats_capacity_bytes`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk Connect for Kubernetes or OTel Collector to scrape kubelet metrics. The kubelet exposes volume stats at `/metrics` on each node. Extract `kubelet_volume_stats_used_bytes` and `kubelet_volume_stats_capacity_bytes` with labels `namespace`, `persistentvolumeclaim`. Alert when any PVC exceeds 80% capacity. Consider 90% for critical stateful workloads.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:metrics\" (metric_name=\"kubelet_volume_stats_used_bytes\" OR metric_name=\"kubelet_volume_stats_capacity_bytes\")\n| stats latest(_value) as value by metric_name, namespace, persistentvolumeclaim, node\n| xyseries namespace,persistentvolumeclaim,node metric_name value\n| eval used_pct = round(kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes * 100, 1)\n| where used_pct > 80\n| table namespace persistentvolumeclaim node used_pct kubelet_volume_stats_used_bytes kubelet_volume_stats_capacity_bytes\n| sort -used_pct\n```\n\nUnderstanding this SPL\n\n**Kubernetes PersistentVolume Claim Capacity** — PVC approaching storage limits; prevents application failures from full volumes.\n\nDocumented **Data sources**: kubelet metrics (`kubelet_volume_stats_used_bytes`, `kubelet_volume_stats_capacity_bytes`). **App/TA** (typical add-on context): Splunk Connect for Kubernetes, metrics from kubelet. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by metric_name, namespace, persistentvolumeclaim, node** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pivots fields for charting with `xyseries`.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Kubernetes PersistentVolume Claim Capacity**): table namespace persistentvolumeclaim node used_pct kubelet_volume_stats_used_bytes kubelet_volume_stats_capacity_bytes\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge per PVC, Table (namespace, PVC, node, used %, bytes), Line chart (trend over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch each piece of permanent storage attached to an app; when it gets close to full the app can stop writing and the data can be locked read-only. We predict when each volume runs out of space so storage can grow before customers notice.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.17",
              "n": "Kubernetes HorizontalPodAutoscaler Status",
              "c": "high",
              "f": "intermediate",
              "v": "HPA at max replicas, unable to scale, or flapping between min and max.",
              "t": "Splunk Connect for Kubernetes",
              "d": "kube-state-metrics (`kube_horizontalpodautoscaler_status_current_replicas`, `kube_horizontalpodautoscaler_spec_min_replicas`, `kube_horizontalpodautoscaler_spec_max_replicas`)",
              "q": "index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_horizontalpodautoscaler_*\"\n| stats latest(_value) as value by metric_name, namespace, horizontalpodautoscaler\n| eval current_replicas = case(metric_name=\"kube_horizontalpodautoscaler_status_current_replicas\", value)\n| eval min_replicas = case(metric_name=\"kube_horizontalpodautoscaler_spec_min_replicas\", value)\n| eval max_replicas = case(metric_name=\"kube_horizontalpodautoscaler_spec_max_replicas\", value)\n| stats max(current_replicas) as current_replicas, max(min_replicas) as min_replicas, max(max_replicas) as max_replicas by namespace, horizontalpodautoscaler\n| eval at_max = if(current_replicas >= max_replicas AND max_replicas > 0, 1, 0)\n| where at_max=1\n| table namespace horizontalpodautoscaler current_replicas min_replicas max_replicas\n| sort -current_replicas",
              "m": "Collect kube-state-metrics HPA series via Splunk Connect for Kubernetes. Alert when `current_replicas == max_replicas` (HPA cannot scale further; application may be under-provisioned). Also alert on rapid replica flapping (e.g. current oscillating between min and max within 10 minutes) indicating unstable scaling.",
              "z": "Table (HPA, namespace, current, min, max), Status indicator (at max = warning), Line chart (replicas over time).",
              "kfp": "Deliberate hard caps for cost-control on dev and staging clusters keep HPAs pinned at maxReplicas for entire business days during load tests that are intentionally smaller than production; suppress those namespaces via hpa_saturation_inventory workload_tier or a macro exclusion list. Planned promotional surges such as Black Friday, game day, or Super Bowl traffic may sustain maxReplicas for many hours with healthy adapters; widen dwell timers or require FailedGet events before paging executives during approved surge windows. Batch jobs that legitimately hit maxReplicas once per day during a short run window can look like saturation if the alert window spans only that spike; align earliest and latest with job schedules or tag batch namespaces in the lookup. HorizontalPodAutoscaler objects under deliberate manual maintenance holds, kubectl scale freezes, or GitOps pause annotations can dwell at max while operators intend the ceiling; tie suppressions to change records. KEDA-managed HPAs that intentionally burst from zero to max on event-driven triggers may flap or pin at max in ways that match incident shapes; cross-check ScaledObject pause and cooldown settings before opening sev-one bridges. KEDA-driven workloads with sub-second event waves can oscillate replica counts between min and max without customer-visible harm; require customer-facing error budget burn or sustained FailedGet evidence before paging. Brief kube-state-metrics scrape gaps after upgrades can null replica fields while events stay quiet; demand two consecutive intervals or corroborating kubectl output before declaring adapter failure. Vendor metric adapters that lag by one or two scrape intervals can emit transient FailedGet lines that self-heal; dampen with rolling counts. Test clusters that hammer impossible metric names to validate alerting can spam FailedGet; exclude qa-only namespaces.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Kubernetes.\n• Ensure the following data sources are available: kube-state-metrics (`kube_horizontalpodautoscaler_status_current_replicas`, `kube_horizontalpodautoscaler_spec_min_replicas`, `kube_horizontalpodautoscaler_spec_max_replicas`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect kube-state-metrics HPA series via Splunk Connect for Kubernetes. Alert when `current_replicas == max_replicas` (HPA cannot scale further; application may be under-provisioned). Also alert on rapid replica flapping (e.g. current oscillating between min and max within 10 minutes) indicating unstable scaling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_horizontalpodautoscaler_*\"\n| stats latest(_value) as value by metric_name, namespace, horizontalpodautoscaler\n| eval current_replicas = case(metric_name=\"kube_horizontalpodautoscaler_status_current_replicas\", value)\n| eval min_replicas = case(metric_name=\"kube_horizontalpodautoscaler_spec_min_replicas\", value)\n| eval max_replicas = case(metric_name=\"kube_horizontalpodautoscaler_spec_max_replicas\", value)\n| stats max(current_replicas) as current_replicas, max(min_replicas) as min_replicas, max(max_replicas) as max_replicas by namespace, horizontalpodautoscaler\n| eval at_max = if(current_replicas >= max_replicas AND max_replicas > 0, 1, 0)\n| where at_max=1\n| table namespace horizontalpodautoscaler current_replicas min_replicas max_replicas\n| sort -current_replicas\n```\n\nUnderstanding this SPL\n\n**Kubernetes HorizontalPodAutoscaler Status** — HPA at max replicas, unable to scale, or flapping between min and max.\n\nDocumented **Data sources**: kube-state-metrics (`kube_horizontalpodautoscaler_status_current_replicas`, `kube_horizontalpodautoscaler_spec_min_replicas`, `kube_horizontalpodautoscaler_spec_max_replicas`). **App/TA** (typical add-on context): Splunk Connect for Kubernetes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:metrics\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by metric_name, namespace, horizontalpodautoscaler** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **current_replicas** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **min_replicas** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **max_replicas** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by namespace, horizontalpodautoscaler** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **at_max** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where at_max=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Kubernetes HorizontalPodAutoscaler Status**): table namespace horizontalpodautoscaler current_replicas min_replicas max_replicas\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (HPA, namespace, current, min, max), Status indicator (at max = warning), Line chart (replicas over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the automatic replica scaler for each application so we know when it is stuck at its upper limit, bouncing up and down because settings are too tight, or cannot read the measurements it needs. That way teams fix capacity or the measurement pipeline before shoppers or employees feel slowdowns.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.18",
              "n": "Kubernetes Ingress Backend Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Ingress controller returning 502/503 due to unhealthy backends.",
              "t": "Splunk Connect for Kubernetes, ingress controller logs",
              "d": "nginx-ingress controller logs, traefik logs",
              "q": "index=k8s (sourcetype=\"kube:ingress:nginx\" OR sourcetype=\"kube:ingress:traefik\")\n| eval is_backend_error = if(status>=502 AND status<=503, 1, 0)\n| bin _time span=5m\n| stats sum(is_backend_error) as backend_errors, count as total by host, path, upstream, _time\n| eval error_rate = if(total>0, round(backend_errors/total*100, 2), 0)\n| where error_rate > 5 OR backend_errors > 10\n| table _time host path upstream backend_errors total error_rate\n| sort -error_rate",
              "m": "Forward ingress controller access logs to Splunk. For NGINX Ingress, enable access log format with `$upstream_addr` and `$upstream_status`. For Traefik, enable access logs with backend info. Parse status, host, path, and upstream. Alert when 502/503 rate exceeds 5% over 5 minutes or absolute count >10. Correlate with pod readiness and service endpoints.",
              "z": "Table (host, path, upstream, errors, rate), Line chart (error rate over time), Single value (current 5xx rate).",
              "kfp": "Chaos experiments that deliberately drop readiness or inject latency in non-production namespaces will reproduce 503 and 504 rows without customer impact unless ingress_oncall_routing.csv suppress_preview_tier flags those clusters. Blue-green cutovers often show a short 502 window while old pods terminate and new pods warm JVM heaps; require two consecutive fifteen-minute buckets above ERR_FLOOR before paging prod. CDN or edge WAF failures occasionally synthesize 502 responses that never hit the cluster; correlate with upstream_addr emptiness and absence of matching kube-state-metrics movement. Bot traffic that hammers deprecated hostnames can elevate 503 counts while business traffic stays healthy; split vhost allow lists. Database maintenance windows that extend application-level timeouts can trip 504 without ingress defects; confirm dominant_symptom 504 against application change calendars. Certificate hot reloads at the sidecar rather than the ingress can spike tls_hint_flag during rotations that are still compliant with PKI policy; dampen when notAfter is not imminently expiring. Autoscaler scale-down races that remove endpoints before in-flight connections drain resemble sticky-session storms; check Pod disruption budgets before blaming code. Mis-parsed namespace fields from incomplete JSON logging widen joins to unknown_ns and inflate kube_pod_ready_ratio significance; fix extraction before trusting severity. Regional DNS failover drills can send traffic to clusters whose backends were intentionally scaled to zero for the drill; exclude those intervals via lookup columns.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Kubernetes, ingress controller logs.\n• Ensure the following data sources are available: nginx-ingress controller logs, traefik logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward ingress controller access logs to Splunk. For NGINX Ingress, enable access log format with `$upstream_addr` and `$upstream_status`. For Traefik, enable access logs with backend info. Parse status, host, path, and upstream. Alert when 502/503 rate exceeds 5% over 5 minutes or absolute count >10. Correlate with pod readiness and service endpoints.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s (sourcetype=\"kube:ingress:nginx\" OR sourcetype=\"kube:ingress:traefik\")\n| eval is_backend_error = if(status>=502 AND status<=503, 1, 0)\n| bin _time span=5m\n| stats sum(is_backend_error) as backend_errors, count as total by host, path, upstream, _time\n| eval error_rate = if(total>0, round(backend_errors/total*100, 2), 0)\n| where error_rate > 5 OR backend_errors > 10\n| table _time host path upstream backend_errors total error_rate\n| sort -error_rate\n```\n\nUnderstanding this SPL\n\n**Kubernetes Ingress Backend Health** — Ingress controller returning 502/503 due to unhealthy backends.\n\nDocumented **Data sources**: nginx-ingress controller logs, traefik logs. **App/TA** (typical add-on context): Splunk Connect for Kubernetes, ingress controller logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:ingress:nginx, kube:ingress:traefik. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:ingress:nginx\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_backend_error** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, path, upstream, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate > 5 OR backend_errors > 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Kubernetes Ingress Backend Health**): table _time host path upstream backend_errors total error_rate\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, path, upstream, errors, rate), Line chart (error rate over time), Single value (current 5xx rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch the doorway between the public internet and the programs running inside our clusters. When that doorway answers with errors because the programs behind it are missing, overloaded, or cannot complete a secure handshake, we raise a clear signal so teams fix the right layer instead of guessing.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.19",
              "n": "Kubernetes DaemonSet Missing Pods",
              "c": "high",
              "f": "intermediate",
              "v": "DaemonSet pods not running on all expected nodes.",
              "t": "Splunk Connect for Kubernetes",
              "d": "kube-state-metrics (`kube_daemonset_status_desired_number_scheduled`, `kube_daemonset_status_current_number_scheduled`, `kube_daemonset_status_number_ready`)",
              "q": "index=k8s sourcetype=\"kube:daemonset:meta\"\n| eval missing_scheduled = desiredNumberScheduled - currentNumberScheduled\n| eval missing_ready = desiredNumberScheduled - numberReady\n| where missing_scheduled > 0 OR missing_ready > 0\n| table namespace daemonset_name desiredNumberScheduled currentNumberScheduled numberReady missing_scheduled missing_ready\n| sort -missing_ready",
              "m": "kube-state-metrics exposes DaemonSet status. Splunk Connect for Kubernetes collects this. Alert when `currentNumberScheduled < desiredNumberScheduled` (pods not scheduled) or `numberReady < desiredNumberScheduled` (pods scheduled but not ready). Critical for CNI, kube-proxy, and monitoring DaemonSets. Investigate node taints, resource constraints, or image pull issues.",
              "z": "Table (DaemonSet, desired, scheduled, ready, missing), Status grid by DaemonSet, Single value (DaemonSets with gaps).",
              "kfp": "Large kubectl get all -A or kubectl get pods -A sweeps during incident triage legitimately spike LIST and WATCH latency histograms for one or two five-minute buckets; require sustained p99_ms across three buckets or correlate with a non-human client signature before paging. cert-manager and similar controllers occasionally renew every certificate at once, producing PATCH and Secret LIST bursts that look like abuse until the window passes; annotate controller backfill windows in splunk_apiserver_client_allowlist.csv. Leader-election lease churn during kube-apiserver rolling restarts or etcd compaction windows can elevate coordination.apiserver.k8s.io lease traffic without user-visible outages; pair with change tickets and UC-3.2.7 posture before declaring a client bug. Scheduled audit-export or security scanning jobs that list the full object graph mirror cluster-autoscaler periodic full-list behavior; mark those service accounts in the allowlist with high budget_qps or informational severity. Very large clusters beyond roughly one thousand nodes may show elevated baseline LIST latency during reconciler-loop iterations; tune thresholds per cluster size class in cluster_platform_routing.csv notes rather than using tiny-cluster defaults. Cluster-autoscaler heartbeats that hit pods with tight intervals are expected; exclude known autoscaler signatures from critical tiers when UC-3.2.46 confirms healthy scale-out. Watch cache invalidation during apiserver restart inflates apiserver_cache_list or watch-cache counters temporarily; treat single-bucket spikes as benign when apiserver_current_inflight_requests is normal. Intentional flow-control rejection during admission storms or load tests raises apiserver_flowcontrol_rejected_requests_total by design; verify maintenance flags before blaming production regressions. Cloud provider transparent control-plane maintenance on Amazon EKS, Google GKE, or Microsoft AKS can reshape scrape alignment; cross-check provider health dashboards when only saturation_plane rows move.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Kubernetes.\n• Ensure the following data sources are available: kube-state-metrics (`kube_daemonset_status_desired_number_scheduled`, `kube_daemonset_status_current_number_scheduled`, `kube_daemonset_status_number_ready`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nkube-state-metrics exposes DaemonSet status. Splunk Connect for Kubernetes collects this. Alert when `currentNumberScheduled < desiredNumberScheduled` (pods not scheduled) or `numberReady < desiredNumberScheduled` (pods scheduled but not ready). Critical for CNI, kube-proxy, and monitoring DaemonSets. Investigate node taints, resource constraints, or image pull issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:daemonset:meta\"\n| eval missing_scheduled = desiredNumberScheduled - currentNumberScheduled\n| eval missing_ready = desiredNumberScheduled - numberReady\n| where missing_scheduled > 0 OR missing_ready > 0\n| table namespace daemonset_name desiredNumberScheduled currentNumberScheduled numberReady missing_scheduled missing_ready\n| sort -missing_ready\n```\n\nUnderstanding this SPL\n\n**Kubernetes DaemonSet Missing Pods** — DaemonSet pods not running on all expected nodes.\n\nDocumented **Data sources**: kube-state-metrics (`kube_daemonset_status_desired_number_scheduled`, `kube_daemonset_status_current_number_scheduled`, `kube_daemonset_status_number_ready`). **App/TA** (typical add-on context): Splunk Connect for Kubernetes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:daemonset:meta. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:daemonset:meta\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **missing_scheduled** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **missing_ready** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where missing_scheduled > 0 OR missing_ready > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Kubernetes DaemonSet Missing Pods**): table namespace daemonset_name desiredNumberScheduled currentNumberScheduled numberReady missing_scheduled missing_ready\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (DaemonSet, desired, scheduled, ready, missing), Status grid by DaemonSet, Single value (DaemonSets with gaps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch how quickly the cluster control point answers everyday requests, and who is asking the loudest. When that path slows down or starts turning people away, we show it early so teams fix the hot client before work piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.20",
              "n": "Kubernetes Job and CronJob Failure Rate",
              "c": "high",
              "f": "intermediate",
              "v": "Failed jobs and missed cron schedules.",
              "t": "Splunk Connect for Kubernetes",
              "d": "kube-state-metrics (`kube_job_status_failed`, `kube_cronjob_status_last_schedule_time`)",
              "q": "index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_cronjob_status_last_schedule_time\"\n| eval hours_since_schedule = (now() - _value) / 3600\n| where hours_since_schedule > 2\n| table namespace cronjob hours_since_schedule _value",
              "m": "Forward Kubernetes events for Job/CronJob failures. Collect `kube_job_status_failed` and `kube_cronjob_status_last_schedule_time` from kube-state-metrics. Alert on any Job with `failed > 0` or BackoffLimitExceeded. For CronJobs, alert when `last_schedule_time` is older than expected (e.g. 2x the cron interval) indicating missed runs.",
              "z": "Table (job/cronjob, namespace, failures, message), Line chart (failure rate over time), Single value (failed jobs last 24h).",
              "kfp": "Deliberately suspended CronJobs during approved winter holiday freezes or finance close windows should be annotated with should_be_running=0 in critical_cronjobs.csv so suspend bits do not page. One-shot data-migration Jobs that intentionally exit non-zero once and leave kube_job_status_failed elevated until TTL cleanup may look alarming; mark those namespaces with migration_job=1 in the lookup or exclude their job_name prefixes. Jobs that implement retry-then-quit semantics with low backoffLimit can emit BackoffLimitExceeded during normal boundary testing; require sustained kube_job_status_failed with no succeeded counter movement across two alert intervals before paging production. Developer lab clusters run ad-hoc Jobs that fail constantly; keep tier=dev in the lookup and route only informational digests. CronJobs with concurrencyPolicy=Forbid legitimately skip overlapping windows when the previous Job still runs; pair alerts with kube_cronjob_status_active and last_schedule_age so benign skips during long ETL do not look like missed cadence. Manual batch backfills executed under a different namespace or disposable Job name will not match critical_cronjobs expectations; document alternate workload_name rows or suppress known backfill prefixes. Expired snapshot pruning Jobs that are intentionally not re-run after a storage freeze should set should_be_running=0 until operations re-enables them. Brief kube-state-metrics scrape gaps after upgrades can make last_schedule_time look stale for a single interval; demand two consecutive evaluations or corroborating MissedSchedule events before executive escalation. GitOps controllers that temporarily suspend CronJobs during cluster migration may flip kube_cronjob_spec_suspend without incident; align suspension windows to change tickets. Forbid policy skips that coincide with legitimate maintenance where CronJobs were paused cluster-wide should inherit a global suppression macro keyed off maintenance annotations mirrored into Splunk.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "N/A (operational reliability monitoring)"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Kubernetes.\n• Ensure the following data sources are available: kube-state-metrics (`kube_job_status_failed`, `kube_cronjob_status_last_schedule_time`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Kubernetes events for Job/CronJob failures. Collect `kube_job_status_failed` and `kube_cronjob_status_last_schedule_time` from kube-state-metrics. Alert on any Job with `failed > 0` or BackoffLimitExceeded. For CronJobs, alert when `last_schedule_time` is older than expected (e.g. 2x the cron interval) indicating missed runs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_cronjob_status_last_schedule_time\"\n| eval hours_since_schedule = (now() - _value) / 3600\n| where hours_since_schedule > 2\n| table namespace cronjob hours_since_schedule _value\n```\n\nUnderstanding this SPL\n\n**Kubernetes Job and CronJob Failure Rate** — Failed jobs and missed cron schedules.\n\nDocumented **Data sources**: kube-state-metrics (`kube_job_status_failed`, `kube_cronjob_status_last_schedule_time`). **App/TA** (typical add-on context): Splunk Connect for Kubernetes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hours_since_schedule** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours_since_schedule > 2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Kubernetes Job and CronJob Failure Rate**): table namespace cronjob hours_since_schedule _value\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (job/cronjob, namespace, failures, message), Line chart (failure rate over time), Single value (failed jobs last 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the timed chores and one-off tasks that keep our computer cluster healthy—things like nightly cleanup or certificate checks. When those tasks fail or stop running on schedule, important housekeeping quietly stops, so we raise a clear signal before the backlog hurts anyone.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.21",
              "n": "Kubernetes Admission Webhook Latency",
              "c": "medium",
              "f": "advanced",
              "v": "Slow webhooks causing API server delays and impacting cluster operations.",
              "t": "Splunk Connect for Kubernetes",
              "d": "apiserver metrics (`apiserver_admission_webhook_admission_duration_seconds`)",
              "q": "index=k8s sourcetype=\"kube:apiserver\" metric_name=\"apiserver_admission_webhook_admission_duration_seconds\"\n| bin _time span=5m\n| stats avg(_value) as avg_sec, max(_value) as max_sec, count by webhook, operation, _time\n| where avg_sec > 0.5 OR max_sec > 2\n| table _time webhook operation avg_sec max_sec count\n| sort -avg_sec",
              "m": "Scrape API server metrics (typically via kube-apiserver /metrics or OTel Collector). The `apiserver_admission_webhook_admission_duration_seconds` histogram has labels `name` (webhook) and `operation`. Alert when P99 or average exceeds 500ms. Slow webhooks (e.g. OPA, Kyverno, cert-manager) block all API requests. Identify and optimize or remove slow webhooks.",
              "z": "Table (webhook, operation, avg, max latency), Line chart (latency over time by webhook), Heatmap.",
              "kfp": "Legitimately slow hooks exist by design: Kyverno may compile policies on first invocation; OPA Gatekeeper may reload constraint templates after Git sync; cert-manager or corporate CA webhooks may wait on external signing round trips; license or entitlement checks may call outside the cluster; cold caches after upgrade or image rollouts routinely elevate P99 for one or two scrape windows; webhooks scaled to zero briefly during voluntary disruption budgets mirror latency spikes without sustained budget burn; path MTU or asymmetric routing can inflate round-trip time so the symptom looks webhook-bound while the root cause is network; teams sometimes set timeoutSeconds deliberately high for batch jobs, which masks tail risk until concurrent admission storms arrive; rare huge ConfigMap or Secret admissions stress JSON patch paths and look like chronic slowness; in-cluster API calls during certificate rotation may stall hooks; operator-driven mass updates create admission bursts that resemble chronic regression; compliance gates may intentionally block with long-running validations; brief startup probe failures on webhook pods can look like latency cliffs until readiness stabilizes. Require sustained elevation across multiple five-minute buckets, corroborate with shed_rpm or internal_rej, and cross-check change records before paging production bridges.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Kubernetes.\n• Ensure the following data sources are available: apiserver metrics (`apiserver_admission_webhook_admission_duration_seconds`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScrape API server metrics (typically via kube-apiserver /metrics or OTel Collector). The `apiserver_admission_webhook_admission_duration_seconds` histogram has labels `name` (webhook) and `operation`. Alert when P99 or average exceeds 500ms. Slow webhooks (e.g. OPA, Kyverno, cert-manager) block all API requests. Identify and optimize or remove slow webhooks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:apiserver\" metric_name=\"apiserver_admission_webhook_admission_duration_seconds\"\n| bin _time span=5m\n| stats avg(_value) as avg_sec, max(_value) as max_sec, count by webhook, operation, _time\n| where avg_sec > 0.5 OR max_sec > 2\n| table _time webhook operation avg_sec max_sec count\n| sort -avg_sec\n```\n\nUnderstanding this SPL\n\n**Kubernetes Admission Webhook Latency** — Slow webhooks causing API server delays and impacting cluster operations.\n\nDocumented **Data sources**: apiserver metrics (`apiserver_admission_webhook_admission_duration_seconds`). **App/TA** (typical add-on context): Splunk Connect for Kubernetes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:apiserver. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:apiserver\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by webhook, operation, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_sec > 0.5 OR max_sec > 2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Kubernetes Admission Webhook Latency**): table _time webhook operation avg_sec max_sec count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (webhook, operation, avg, max latency), Line chart (latency over time by webhook), Heatmap.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the cluster gatekeepers that double-check every change. When those checks take too long, the front door starts turning people away and everyday tools time out—we show which gatekeeper is eating the clock before everything grinds to a halt.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.22",
              "n": "Pod Security Admission Violations",
              "c": "high",
              "f": "intermediate",
              "v": "PSA denials block risky pods at admission; tracking them exposes misconfigured workloads and policy gaps before they reach production namespaces.",
              "t": "Kubernetes audit log forwarding",
              "d": "`sourcetype=kube:audit`",
              "q": "index=k8s sourcetype=\"kube:audit\" objectRef.resource=\"pods\"\n| search \"PodSecurity\" OR \"pod-security.kubernetes.io\" OR \"denied the request\"\n| stats count by user.username, objectRef.namespace, objectRef.name, responseStatus.reason\n| sort -count",
              "m": "Enable audit policy capturing Pod create/update denials. Parse PSA-specific messages. Alert on spikes in a namespace or repeated denials for the same workload pattern.",
              "z": "Table (namespace, user, count), Bar chart by namespace, Timeline.",
              "kfp": "Developer and chaos namespaces that intentionally run baseline or privileged enforcement for game-day drills will generate steady denial or privileged-label rows until namespace_psa_inventory.csv carries time-bounded exception_reason and exception_expiry fields that your alert macro reads before paging. kube-system and other platform namespaces often remain privileged by vendor design; exclude them with kube_system_exempt style macros or explicit inventory rows so Cilium, CoreDNS, and metrics-server lifecycles do not open Sev-1 bridges. During cluster bootstrap or immediately after GitOps controller outages, namespaces may temporarily lack enforce labels while controllers replay; treat short windows as operational debt tracked in platform tickets rather than hostile drift when change records exist. Helm upgrades that introduce a non-compliant pod spec often produce a burst of denials before chart maintainers merge fixes; correlate with Helm release revision audit and suppress only when the owning team documents rollback or forward fix ETA. Organizations migrating from PodSecurityPolicy to PSA frequently run audit or warn modes first; historical audit-only periods can look like silent failures if you only chart Forbidden counts without reading the warn and audit labels. Pen testers and red teams that deliberately craft violating pods will trip this control by design; tag those namespaces in the lookup so purple exercises do not exhaust on-call goodwill. Regional replicas of the same logical namespace name on different clusters need distinct cluster keys in namespace_psa_inventory.csv or joins collapse expectations. Finally, some managed control planes redact portions of audit messages; violation_field may fall back to psa_other_field even though operators can still read fuller text in cloud vendor policy logs.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1611"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kubernetes audit log forwarding.\n• Ensure the following data sources are available: `sourcetype=kube:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable audit policy capturing Pod create/update denials. Parse PSA-specific messages. Alert on spikes in a namespace or repeated denials for the same workload pattern.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:audit\" objectRef.resource=\"pods\"\n| search \"PodSecurity\" OR \"pod-security.kubernetes.io\" OR \"denied the request\"\n| stats count by user.username, objectRef.namespace, objectRef.name, responseStatus.reason\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Pod Security Admission Violations** — PSA denials block risky pods at admission; tracking them exposes misconfigured workloads and policy gaps before they reach production namespaces.\n\nDocumented **Data sources**: `sourcetype=kube:audit`. **App/TA** (typical add-on context): Kubernetes audit log forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user.username, objectRef.namespace, objectRef.name, responseStatus.reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (namespace, user, count), Bar chart by namespace, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch every time the cluster blocks a pod that asks for unsafe powers, and we watch namespaces that lose or weaken their safety labels, so risky apps cannot sneak through quietly while everyone thinks the rules still apply.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.23",
              "n": "RBAC Audit Log Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "ClusterRoleBinding changes and cluster-admin paths are high-risk; structured audit analysis supports least-privilege reviews and incident response.",
              "t": "Kubernetes audit to Splunk (HEC)",
              "d": "`sourcetype=kube:audit`",
              "q": "index=k8s sourcetype=\"kube:audit\" verb=\"create\" OR verb=\"patch\" OR verb=\"update\"\n| where objectRef.resource=\"clusterroles\" OR objectRef.resource=\"clusterrolebindings\" OR objectRef.resource=\"roles\" OR objectRef.resource=\"rolebindings\"\n| stats count by user.username, objectRef.resource, objectRef.name, verb\n| sort -count",
              "m": "Retain audit JSON with `user`, `verb`, `objectRef`. Scheduled reports for RBAC mutations; alert on non-break-glass users attaching `cluster-admin` bindings.",
              "z": "Table (user, resource, object), Timeline of changes, Bar chart by user.",
              "kfp": "GitOps controllers such as FluxCD and ArgoCD continuously reconcile ClusterRoleBindings and ClusterRoles from git; bursts that align with merge commits and carry the same subject_key as prior hours are often operational noise rather than intrusion—correlate with pipeline identity and commit SHA before paging. Fresh cluster bootstrap with kubeadm, kops, eksctl, or managed control-plane installers creates initial cluster-admin bindings for administrators; treat the first day after create-cluster as a known noisy window when change records exist, and tune wrapper searches with cluster-age metadata rather than muting the underlying audit. Break-glass incidents where on-call engineers temporarily bind cluster-admin with a documented ticket should appear as medium_breakglass_documented rows; expired tickets should flip to high_expired_allowlist until renewed. Operators such as kured, cluster-autoscaler, and Karpenter may install or upgrade RBAC that references powerful ClusterRoles during rollouts; verify chart versions and vendor documentation before declaring compromise. Deliberately permissive developer sandboxes that maintain a standing allowlist for sandbox cluster-admin service accounts will page continuously unless namespaces and clusters are labeled and wrapper searches filter dev-tier noise. Automated regression suites that create and delete bindings within minutes can look like privilege escalation; tag those service accounts in the allowlist or exclude ephemeral cluster names. Cloud audit exports occasionally flatten or truncate requestObject; rex arms may miss service account namespace extraction even when the event is genuine—fall back to raw JSON review. Penetration tests that exercise CSR approval or wildcard roles will trigger this control by design; coordinate purple-team windows in the lookup. Finally, UC-3.2.12 may already surface overlapping cluster-admin signals with different severity semantics—use this UC for immutable evidence rows and UC-3.2.12 for broader hunting context without disabling either.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kubernetes audit to Splunk (HEC).\n• Ensure the following data sources are available: `sourcetype=kube:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRetain audit JSON with `user`, `verb`, `objectRef`. Scheduled reports for RBAC mutations; alert on non-break-glass users attaching `cluster-admin` bindings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:audit\" verb=\"create\" OR verb=\"patch\" OR verb=\"update\"\n| where objectRef.resource=\"clusterroles\" OR objectRef.resource=\"clusterrolebindings\" OR objectRef.resource=\"roles\" OR objectRef.resource=\"rolebindings\"\n| stats count by user.username, objectRef.resource, objectRef.name, verb\n| sort -count\n```\n\nUnderstanding this SPL\n\n**RBAC Audit Log Analysis** — ClusterRoleBinding changes and cluster-admin paths are high-risk; structured audit analysis supports least-privilege reviews and incident response.\n\nDocumented **Data sources**: `sourcetype=kube:audit`. **App/TA** (typical add-on context): Kubernetes audit to Splunk (HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where objectRef.resource=\"clusterroles\" OR objectRef.resource=\"clusterrolebindings\" OR objectRef.resource=\"roles\" OR object…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user.username, objectRef.resource, objectRef.name, verb** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, resource, object), Timeline of changes, Bar chart by user.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch who receives full control of your container fleet and who wires in the built-in super-user role or other extremely powerful access rules. When that happens without an approved ticket on our list, we sound the alarm and keep a clear record for your safety team.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.24",
              "n": "HPA Scale-Out Event Correlation",
              "c": "medium",
              "f": "beginner",
              "v": "Correlating HPA decisions with replica metrics explains surprise scale-outs and validates max replica settings under load.",
              "t": "Splunk OTel Collector, kube-state-metrics",
              "d": "`sourcetype=kube:objects:events`, `sourcetype=kube:metrics`",
              "q": "index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_horizontalpodautoscaler_status_current_replicas\"\n| stats latest(_value) as current by namespace, horizontalpodautoscaler\n| join type=left max=1 namespace horizontalpodautoscaler [\n    search index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_horizontalpodautoscaler_spec_max_replicas\"\n    | stats latest(_value) as max_rep by namespace, horizontalpodautoscaler\n]\n| where current>=max_rep AND max_rep>0\n| table namespace horizontalpodautoscaler current max_rep",
              "m": "Ingest HPA events and kube-state-metrics HPA series. Join current replicas with event stream for postmortems. Alert when scaling messages repeat while replicas stay at max.",
              "z": "Line chart (replicas over time), Table (HPA, events), Single value (scale events/hour).",
              "kfp": "Legitimate spec.behavior scaleDown policies with long stabilizationWindowSeconds or Disabled selectPolicy keep kube_horizontalpodautoscaler_status_desired_replicas above naive down-math even when kube_horizontalpodautoscaler_status_current_metrics_average_utilization looks low; pair kubectl describe with Splunk before paging. The built-in tolerance band near ten percent means small replica deltas versus predicted_clamped are often intentional no-ops, not controller bugs. Single-metric-update jitter from metrics-server cadence or kubelet scrape skew can move kube_horizontalpodautoscaler_status_current_metrics_average_value for one interval while stabilization suppresses motion; require sustained math_surprise across buckets before executive escalation. Custom metrics adapters feeding stale external series inflate kube_horizontalpodautoscaler_status_current_metrics_average_value without matching application truth; pivot to adapter health when util_math_ok stays zero but spec_target_raw references external kinds. KEDA external scalers can replace classic HorizontalPodAutoscaler ratio intuition for the same object; validate ScaledObject ownership before blaming kube-controller-manager math. Vertical Pod Autoscaler mutating requests underneath a CPU-targeted HorizontalPodAutoscaler creates legitimately complex denominator drift; cross-check UC-3.2.38 timelines. Recent rolling restarts fluctuate ready pod sets so average utilization denominators move while desired counts look surprising. ScaleDown floors at spec.minReplicas block predicted down-math even when metrics imply fewer pods. Stabilization windows that prefer the maximum recommended replicas during scale-up hold kube_horizontalpodautoscaler_status_desired_replicas higher than instantaneous naive math. Just-restarted controller-manager processes can re-apply recent decisions visible as short math_surprise spikes that self-heal. Brief Metrics API gaps return empty samples while the controller holds prior kube_horizontalpodautoscaler_status_desired_replicas; correlate empty windows with surprise flags. Test namespaces with intentionally aggressive averageUtilization targets generate noisy comparisons; exclude via inventory tier. Brief metric outliers that stabilization absorbs never become customer incidents; dampen alerts when sudden_desired_jump stays zero across adjacent evaluations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, kube-state-metrics.\n• Ensure the following data sources are available: `sourcetype=kube:objects:events`, `sourcetype=kube:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest HPA events and kube-state-metrics HPA series. Join current replicas with event stream for postmortems. Alert when scaling messages repeat while replicas stay at max.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_horizontalpodautoscaler_status_current_replicas\"\n| stats latest(_value) as current by namespace, horizontalpodautoscaler\n| join type=left max=1 namespace horizontalpodautoscaler [\n    search index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_horizontalpodautoscaler_spec_max_replicas\"\n    | stats latest(_value) as max_rep by namespace, horizontalpodautoscaler\n]\n| where current>=max_rep AND max_rep>0\n| table namespace horizontalpodautoscaler current max_rep\n```\n\nUnderstanding this SPL\n\n**HPA Scale-Out Event Correlation** — Correlating HPA decisions with replica metrics explains surprise scale-outs and validates max replica settings under load.\n\nDocumented **Data sources**: `sourcetype=kube:objects:events`, `sourcetype=kube:metrics`. **App/TA** (typical add-on context): Splunk OTel Collector, kube-state-metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, horizontalpodautoscaler** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where current>=max_rep AND max_rep>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HPA Scale-Out Event Correlation**): table namespace horizontalpodautoscaler current max_rep\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (replicas over time), Table (HPA, events), Single value (scale events/hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch how the automatic pod scaler picks instance counts from its measurements so we can explain surprise jumps before teams blame the wrong layer. We line those numbers up with what each app reserved for computing power so average load stories stay honest when incidents strike.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.25",
              "n": "PV/PVC Capacity Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Proactive free-space visibility on bound PVs avoids read-only filesystems and database corruption across the cluster.",
              "t": "Splunk OTel Collector (kubelet metrics)",
              "d": "`sourcetype=kube:metrics`",
              "q": "index=k8s sourcetype=\"kube:metrics\" metric_name=\"kubelet_volume_stats_used_bytes\"\n| stats latest(_value) as used by namespace, persistentvolumeclaim, node\n| join type=left max=1 namespace persistentvolumeclaim node [\n    search index=k8s sourcetype=\"kube:metrics\" metric_name=\"kubelet_volume_stats_capacity_bytes\"\n    | stats latest(_value) as cap by namespace, persistentvolumeclaim, node\n]\n| eval used_pct=if(cap>0, round(used/cap*100,1), null())\n| where used_pct>85\n| table namespace persistentvolumeclaim node used_pct used cap\n| sort -used_pct",
              "m": "Scrape kubelet volume stats with PVC labels. Dashboard all namespaces; alert at 85%/95% tiers. Include storage class in lookup tables for business priority.",
              "z": "Gauge per PVC, Table (namespace, PVC, used %), Heatmap (node × PVC).",
              "kfp": "Log retention or indexer window rotation that drops samples older than the streamstats horizon produces a legitimate slope reset where gb_per_day snaps toward zero until the window refills; do not page on slope resets alone without corroborating used_pct movement. Batch ETL jobs that write tens of gigabytes then delete overnight create sawtooth growth that violates linear burn assumptions; require business calendars in inventory or widen bins for those namespaces. emptyDir or scratch PVCs mounted under build pods can burst during compile then shrink when pods terminate; route those namespaces to bronze criticality. Planned datacopy migrations inflate burn for days then collapse when cutover finishes; suppress using CAB macros. Tiered storage with manual archive lag can leave logical usage high while physical arrays freed space; pair with vendor array dashboards before accusing applications. Transient backup catalog files or snapshot differencing stores can spike inode usage briefly; corroborate with backup job schedules. Snapshot space reservation incorrectly counted inside guest filesystem views can make used_pct look worse than operator consoles; validate CSI and array accounting. Kustomize-driven load generator tests in shared clusters can mimic production burn; require perf_test labels in inventory. Vendor expansion executed only through a cloud console while Kubernetes still shows the old PVC request can make pvc_pv_oversub_ratio look alarming until the apiserver catches up; reconcile kubectl describe. Legitimate growth spikes during disaster recovery drills or failover exercises should carry drill annotations so sla_rpo8h_critical does not wake the wrong bridge. Duplicate scrapes from overlapping agents without dedup keys can jitter hour-over-hour growth; validate single writer per target.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector (kubelet metrics).\n• Ensure the following data sources are available: `sourcetype=kube:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScrape kubelet volume stats with PVC labels. Dashboard all namespaces; alert at 85%/95% tiers. Include storage class in lookup tables for business priority.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:metrics\" metric_name=\"kubelet_volume_stats_used_bytes\"\n| stats latest(_value) as used by namespace, persistentvolumeclaim, node\n| join type=left max=1 namespace persistentvolumeclaim node [\n    search index=k8s sourcetype=\"kube:metrics\" metric_name=\"kubelet_volume_stats_capacity_bytes\"\n    | stats latest(_value) as cap by namespace, persistentvolumeclaim, node\n]\n| eval used_pct=if(cap>0, round(used/cap*100,1), null())\n| where used_pct>85\n| table namespace persistentvolumeclaim node used_pct used cap\n| sort -used_pct\n```\n\nUnderstanding this SPL\n\n**PV/PVC Capacity Monitoring** — Proactive free-space visibility on bound PVs avoids read-only filesystems and database corruption across the cluster.\n\nDocumented **Data sources**: `sourcetype=kube:metrics`. **App/TA** (typical add-on context): Splunk OTel Collector (kubelet metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, persistentvolumeclaim, node** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct>85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PV/PVC Capacity Monitoring**): table namespace persistentvolumeclaim node used_pct used cap\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge per PVC, Table (namespace, PVC, used %), Heatmap (node × PVC).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We estimate how fast each permanent disk is filling, when it will hit almost full, and whether your storage class as a whole is running out of headroom. We highlight tight spots early so teams can grow volumes or clean data before apps freeze.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.26",
              "n": "etcd Health and Latency",
              "c": "critical",
              "f": "beginner",
              "v": "etcd request latency and raft health predict API slowness and split-brain risk; early warning preserves control plane stability.",
              "t": "Splunk OTel Collector",
              "d": "`sourcetype=kube:etcd`",
              "q": "index=k8s sourcetype=\"kube:etcd\"\n| timechart span=5m avg(etcd_network_peer_round_trip_time_seconds) as rtt, avg(etcd_disk_backend_commit_duration_seconds) as commit\n| where rtt>0.05 OR commit>0.1",
              "m": "Scrape etcd `/metrics` from members (managed clusters: use cloud metrics export if direct scrape is blocked). Alert on rising commit duration, peer RTT, or leader election counters.",
              "z": "Line chart (latency, DB size), Single value (leader changes), Table (member ID, health).",
              "kfp": "Rolling restart of a single etcd member during planned control-plane certificate renewal produces one expected leader handoff and a short burst of etcd_server_leader_changes_seen_total; pair alerts with change tickets or maintenance=true on cluster_platform_routing.csv and require two consecutive windows above churn thresholds before paging. An intentional etcdctl move-leader during a rehearsed failover exercise increments leader change counters once and should not be treated as thrash when annotated. Learner promotion in progress can temporarily show four members while etcd_server_is_learner toggles; widen evaluation to five or ten minutes or correlate with endpoint status JSON showing isLearner transitioning to false before declaring stuck promotion. Maintenance defragmentation on one peer can raise etcd_disk_backend_commit_duration_seconds_bucket skew and etcd_server_proposals_pending briefly without quorum risk; suppress when defrag_started labels or kube-system logs show defrag. Scheduled disaster-recovery tests that snapshot and replicate large states may spike etcd_network_peer_round_trip_time_seconds_bucket for one interval; compare with cross-AZ routing and test windows. A single peer paused for vendor backup can look unhealthy to metrics while quorum remains; verify backup runbooks and heartbeat suppression labels. Graceful member replacement using member remove and add produces short membership drift versus static lookups until expected_etcd_voting_members is updated; treat lookup mismatch as informational until two consecutive scrapes disagree. Brief control-plane network jitter under two hundred milliseconds can elevate histogram quantiles without sustained packet loss; require sustained peer_rtt_p95_ms above warn or packet-drop correlates from network teams before declaring partition. Kind, Minikube, and single-node lab clusters lack meaningful quorum semantics; honor suppress_single_node_dev=1 in the routing lookup.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector.\n• Ensure the following data sources are available: `sourcetype=kube:etcd`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScrape etcd `/metrics` from members (managed clusters: use cloud metrics export if direct scrape is blocked). Alert on rising commit duration, peer RTT, or leader election counters.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:etcd\"\n| timechart span=5m avg(etcd_network_peer_round_trip_time_seconds) as rtt, avg(etcd_disk_backend_commit_duration_seconds) as commit\n| where rtt>0.05 OR commit>0.1\n```\n\nUnderstanding this SPL\n\n**etcd Health and Latency** — etcd request latency and raft health predict API slowness and split-brain risk; early warning preserves control plane stability.\n\nDocumented **Data sources**: `sourcetype=kube:etcd`. **App/TA** (typical add-on context): Splunk OTel Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:etcd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:etcd\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where rtt>0.05 OR commit>0.1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency, DB size), Single value (leader changes), Table (member ID, health).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the small voter group that keeps the control plane's shared records in agreement. If that group is one loss away from deadlock, if a trainee member never graduates, or if leaders keep re-electing, we warn you before everything that depends on that memory freezes.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.27",
              "n": "Ingress Controller Error Rates",
              "c": "high",
              "f": "intermediate",
              "v": "Controller-level 5xx and upstream errors isolate bad ingress classes, TLS backends, and canary routes before user-facing SLO breach.",
              "t": "Ingress controller log pipeline (NGINX, Traefik, HAProxy)",
              "d": "`sourcetype=kube:ingress:nginx`",
              "q": "index=k8s sourcetype=\"kube:ingress:nginx\"\n| eval err=if(status>=500,1,0)\n| bin _time span=5m\n| stats sum(err) as e, count as n by ingress_class, upstream, _time\n| eval err_rate=if(n>0, round(100*e/n,2), 0)\n| where err_rate>2\n| sort -err_rate",
              "m": "Standardize access log JSON with `ingress_class`, `upstream`, `status`. Baseline per ingress. Alert on error rate versus 7-day same-hour baseline.",
              "z": "Line chart (5xx rate by class), Table (upstream, err %), Single value (global ingress 5xx/min).",
              "kfp": "Cluster-managed namespaces such as kube-system, kube-public, and kube-node-lease are excluded by design in severity filters because vendor components and control-plane traffic patterns rarely follow application default-deny conventions; keep explicit inventory rows with requires flags set to zero rather than muting alerts globally. Namespaces in active deprecation or teardown drains may intentionally delete all NetworkPolicy objects while workloads drain; pair timestamps with decommission tickets before escalating. Estates that enforce east-west segmentation exclusively with Cilium ClusterwideNetworkPolicy, Calico GlobalNetworkPolicy, or other non-NetworkPolicy CRDs will show low Kubernetes NetworkPolicy cardinality even when segmentation is strong; document mesh_authz_alternate and alternate CRD ownership in the lookup. Service-mesh-protected namespaces that rely on Istio AuthorizationPolicy instead of Kubernetes NetworkPolicy should set mesh_authz_alternate=1 with mesh SME approval so the control does not page on absent deny objects. Lab and sandbox namespaces that explicitly opt out of default deny during experiments belong in workload_tier=lab with requires flags zero or non-zero grace_until_epoch windows. Namespaces in multi-phase compliance rollouts may lack deny objects until a scheduled phase completes; use grace_until_epoch and weekly governance reviews rather than nightly pages. Freshly-created namespaces inside an approved policy-author grace window should carry grace_until_epoch from the platform ticket until GitOps applies baseline deny scaffolding. GitOps controllers that delete and recreate policies with identical semantics can emit delete bursts that look like drift; require diff of object UID timelines before sev-one escalation. Managed Kubernetes distributions that rename default policies or use generated suffixes may not match heuristic name tokens; extend the regex ladder in the SPL macro after architecture review. Pen testers clearing policies will trigger this control by design; tag lab clusters in inventory to downgrade non-production noise while retaining evidence rows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Ingress controller log pipeline (NGINX, Traefik, HAProxy).\n• Ensure the following data sources are available: `sourcetype=kube:ingress:nginx`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStandardize access log JSON with `ingress_class`, `upstream`, `status`. Baseline per ingress. Alert on error rate versus 7-day same-hour baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:ingress:nginx\"\n| eval err=if(status>=500,1,0)\n| bin _time span=5m\n| stats sum(err) as e, count as n by ingress_class, upstream, _time\n| eval err_rate=if(n>0, round(100*e/n,2), 0)\n| where err_rate>2\n| sort -err_rate\n```\n\nUnderstanding this SPL\n\n**Ingress Controller Error Rates** — Controller-level 5xx and upstream errors isolate bad ingress classes, TLS backends, and canary routes before user-facing SLO breach.\n\nDocumented **Data sources**: `sourcetype=kube:ingress:nginx`. **App/TA** (typical add-on context): Ingress controller log pipeline (NGINX, Traefik, HAProxy). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:ingress:nginx. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:ingress:nginx\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **err** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by ingress_class, upstream, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_rate>2` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (5xx rate by class), Table (upstream, err %), Single value (global ingress 5xx/min).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the rules that control which programs may talk to each other inside your container estate. When someone removes the lock-down rules, leaves busy application areas without rules, or adds rules that sound like they allow the whole world in or out, we raise a clear signal for the platform team.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "haproxy",
                "nginx",
                "traefik"
              ],
              "em": [
                "nginx_open"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.28",
              "n": "Node Pressure Conditions (Disk/Memory/PID)",
              "c": "critical",
              "f": "intermediate",
              "v": "Kubelet pressure conditions drive evictions; monitoring them reduces surprise pod kills and scheduling failures.",
              "t": "Splunk OTel Collector, node exporter",
              "d": "`sourcetype=kube:node:meta`, `sourcetype=kube:events`",
              "q": "index=k8s sourcetype=\"kube:events\" (reason=\"EvictionThresholdMet\" OR reason=\"FreeDiskSpaceFailed\")\n| stats count by involvedObject.kind, involvedObject.name, message",
              "m": "Ingest node conditions from kube-state-metrics or OTel node receiver. Correlate with eviction events and node `Allocatable` vs usage. Page on any pressure True >2 minutes.",
              "z": "Node heatmap (pressure flags), Timeline (evictions), Table (node, condition).",
              "kfp": "Cron-driven log rotation and brief image pull bursts can flip DiskPressure true for under a minute while kubelet reclaims logs or layers; require sustained dwell and maintenance correlation before paging production bridges, and document PassDoNotAlert patterns for approved compression jobs. Dedicated build and continuous integration pools often run intentionally hot memory and scratch disks; tie those node_inventory rows to dev or build criticality so alerts route to platform dashboards instead of customer bridges unless SLO namespaces colocate there. DaemonSet startup can spike process counts transiently; pair PIDPressure with minute-level dwell and with process leader identity from workload metadata before blaming applications. kube-image-puller or bootstrap bursts on fresh nodes can distort disk and pid telemetry for a few minutes after join; dampen alerts for a short post-bootstrap window keyed from cloud launch timestamps when available. Scratch-disk-only ephemeral storage configurations on development nodes routinely flirt with DiskPressure without customer impact; separate pools in inventory and suppress except when gold namespaces appear on those nodes by mistake. NetworkUnavailable often appears during cluster-managed restarts of CNI agents or during rolling upgrades of the data plane; align to provider maintenance feeds and suppress when Ready remains true and east-west probes stay healthy. Spot interruption notices may correlate with pressure flaps; downgrade when termination is expected. Duplicate prometheus scrapes without dedup keys can double-count pods in qos_at_risk_pods; validate one authoritative kube-state-metrics scrape. Test namespaces that intentionally run unlimited BestEffort noise will page; exclude via lookup flags. After kube-state-metrics upgrades, metric label renames may null coalesce paths briefly; treat missing qos_at_risk_pods as telemetry debt not silence. When swap is enabled against best practice, MemoryPressure semantics diverge from linux free memory intuition; annotate fleet policy in inventory notes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, node exporter.\n• Ensure the following data sources are available: `sourcetype=kube:node:meta`, `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest node conditions from kube-state-metrics or OTel node receiver. Correlate with eviction events and node `Allocatable` vs usage. Page on any pressure True >2 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:events\" (reason=\"EvictionThresholdMet\" OR reason=\"FreeDiskSpaceFailed\")\n| stats count by involvedObject.kind, involvedObject.name, message\n```\n\nUnderstanding this SPL\n\n**Node Pressure Conditions (Disk/Memory/PID)** — Kubelet pressure conditions drive evictions; monitoring them reduces surprise pod kills and scheduling failures.\n\nDocumented **Data sources**: `sourcetype=kube:node:meta`, `sourcetype=kube:events`. **App/TA** (typical add-on context): Splunk OTel Collector, node exporter. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by involvedObject.kind, involvedObject.name, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Node heatmap (pressure flags), Timeline (evictions), Table (node, condition).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "Each computer in the cluster can quietly run out of memory, disk space, process slots, or network wiring before it is marked broken. We watch those early warning lights, count how long they stay on, and list which programs would be moved first if space is not freed.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.29",
              "n": "CronJob Failure Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Missed batch jobs break SLAs for reporting and cleanup; tracking last successful run and Job failures closes blind spots.",
              "t": "Splunk OTel Collector",
              "d": "`sourcetype=kube:events`, `sourcetype=kube:metrics`",
              "q": "index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_cronjob_status_last_schedule_time\"\n| eval hours_since=(now()-_value)/3600\n| where hours_since>24\n| table namespace cronjob hours_since",
              "m": "Combine event-based failures with `kube_cronjob_status_last_schedule_time` staleness versus expected schedule. Alert when no successful Job completion in expected window.",
              "z": "Table (cronjob, last schedule, failures), Line chart (failure count), Single value (failed jobs 24h).",
              "kfp": "spec.suspend=true during approved maintenance, finance close, or disaster-recovery rehearsal should set should_be_running=0 in critical_cronjobs.csv so wrongful-suspend severity does not page. Scheduled DR-test runs that intentionally pause automation via GitOps sync should carry the same lookup flag and a change ticket id in weekly evidence exports. Holiday cron expressions that assume local time while observers watch UTC dashboards can look fourteen hours late until spec.timeZone and business calendars align; document zone intent beside each row. concurrencyPolicy=Forbid legitimately blocks the next window when a long-running prior Job still owns capacity; pair missed events with kube_cronjob_status_active before treating the skip as an incident. Recent CronJob create or spec.edit with no successful completion yet can show success_proof_stale until the first healthy Job finishes; require two intervals or a non-zero lastScheduleTime before paging brand-new objects. completed-job grace-period reaping and aggressive ttlSecondsAfterFinished can delete the only succeeded Job while kube_cronjob_status_last_successful_time still lags; verify apiserver status before blaming application code. Misconfigured @yearly or non-standard cron tokens that the apiserver rejects produce never_scheduled_yet without Pod failures; fix validation errors in kubectl describe before tuning Splunk. Daylight-saving spring-forward creates a single skipped local instant; demand corroborating MissedSchedule counts or fleet skew, not only one bucket spike. Deliberate startingDeadlineSeconds shorter than a known maintenance blackout yields FailedNeedsStart warnings by design; annotate blackout windows in the lookup rather than muting the control globally. CronJob suspended via Argo CD Application sync options should mirror should_be_running=0 for the suspension window and attach the Argo change record to the evidence bundle. Brief kube-state-metrics scrape outages after upgrades can inflate last_schedule_age_min once; require consecutive breaches or event-backed MissedSchedule before executive escalation. Batch namespaces that intentionally run only on business days need cadence columns that reflect five-day schedules, not naive 1440-minute multiples, or stale math will false alarm every weekend.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "N/A (operational reliability monitoring)"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector.\n• Ensure the following data sources are available: `sourcetype=kube:events`, `sourcetype=kube:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCombine event-based failures with `kube_cronjob_status_last_schedule_time` staleness versus expected schedule. Alert when no successful Job completion in expected window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_cronjob_status_last_schedule_time\"\n| eval hours_since=(now()-_value)/3600\n| where hours_since>24\n| table namespace cronjob hours_since\n```\n\nUnderstanding this SPL\n\n**CronJob Failure Tracking** — Missed batch jobs break SLAs for reporting and cleanup; tracking last successful run and Job failures closes blind spots.\n\nDocumented **Data sources**: `sourcetype=kube:events`, `sourcetype=kube:metrics`. **App/TA** (typical add-on context): Splunk OTel Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hours_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours_since>24` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CronJob Failure Tracking**): table namespace cronjob hours_since\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cronjob, last schedule, failures), Line chart (failure count), Single value (failed jobs 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the cluster's timed jobs so they start when promised and finish on time. When a schedule quietly slips, time zones disagree, or policy blocks the next run, we raise a clear signal before nightly work like backups or reports stops happening.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.30",
              "n": "Init Container Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Failed inits block app containers entirely; fast detection shortens MTTR for migrations and secret-fetch steps.",
              "t": "Kubernetes events, container status JSON",
              "d": "`sourcetype=kube:objects:events`, `sourcetype=kube:container:meta`",
              "q": "index=k8s sourcetype=\"kube:container:meta\" init=\"true\"\n| where exit_code!=0 OR state=\"waiting\"\n| table namespace pod_name container_name state exit_code",
              "m": "Forward events mentioning init containers; optionally ingest pod status subresource via exporter. Alert on non-zero init exit or ImagePull errors on init images.",
              "z": "Table (pod, init container, reason), Timeline, Single value (failed inits/hour).",
              "kfp": "Long-running schema migrations on first install can legitimately exceed five wall-clock minutes while backfilling tables; widen init_stuck thresholds per workload using lookup notes or macro overrides before paging. Vault Agent Init can wait on Vault unseal operations during disaster recovery exercises; pair Splunk rows with Vault cluster health to avoid blaming application teams. wait-for-service style inits that intentionally retry until DNS converges often spike during Amazon EKS CoreDNS rolling restarts; require sustained dwell beyond your CoreDNS maintenance annotation before production pages. Job pods that run test fixture inits in continuous integration namespaces resemble production failures; mark sandbox tiers and exclude ephemeral CI namespaces from paging macros. migrate-only-once init containers may fail fast on the second Pod when idempotency is missing; route those patterns to data platform owners rather than muting the detector. Developer clusters used for ad-hoc init command experiments generate noisy init_term_reason rows; keep workload_tier=dev defaults in k8s_namespace_tier.csv so severity stays low unless override tickets promote the namespace.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "N/A (operational reliability monitoring)"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kubernetes events, container status JSON.\n• Ensure the following data sources are available: `sourcetype=kube:objects:events`, `sourcetype=kube:container:meta`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward events mentioning init containers; optionally ingest pod status subresource via exporter. Alert on non-zero init exit or ImagePull errors on init images.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:container:meta\" init=\"true\"\n| where exit_code!=0 OR state=\"waiting\"\n| table namespace pod_name container_name state exit_code\n```\n\nUnderstanding this SPL\n\n**Init Container Failures** — Failed inits block app containers entirely; fast detection shortens MTTR for migrations and secret-fetch steps.\n\nDocumented **Data sources**: `sourcetype=kube:objects:events`, `sourcetype=kube:container:meta`. **App/TA** (typical add-on context): Kubernetes events, container status JSON. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:container:meta. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:container:meta\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where exit_code!=0 OR state=\"waiting\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Init Container Failures**): table namespace pod_name container_name state exit_code\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (pod, init container, reason), Timeline, Single value (failed inits/hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the small setup steps that must finish before the real application is allowed to start on our clusters. When those steps keep retrying, run out of memory, or cannot fetch secrets, the app never really launches, so we raise a clear signal early.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.31",
              "n": "Sidecar Injection Validation",
              "c": "medium",
              "f": "intermediate",
              "v": "Ensures service mesh or security sidecars are present where policy requires—avoiding accidental unencrypted east-west traffic.",
              "t": "kube-state-metrics, policy controller (optional)",
              "d": "`sourcetype=kube:pod:meta`",
              "q": "index=k8s sourcetype=\"kube:pod:meta\"\n| eval has_proxy=if(match(container_names, \"(istio-proxy|linkerd-proxy|envoy)\"),1,0)\n| where namespace_injection_enabled=1 AND has_proxy=0\n| table namespace pod_name container_names",
              "m": "Periodically snapshot pod container lists and namespace labels (`istio-injection`, etc.). Flag mismatches. Integrate with CI to fail deploys that skip injection in labeled namespaces.",
              "z": "Table (namespace, pod, missing sidecar), Compliance %, Bar chart by team.",
              "kfp": "Legitimately mesh-exempt workloads include cluster logging agents, eBPF node security agents, kube-system control-plane components, and DaemonSets that intentionally use hostNetwork for CNI or observability reasons—mark them in mesh_sidecar_governance.csv with sidecar_required=0 instead of muting alerts globally. Deliberate bypasses via sidecar.istio.io/inject=false on batch jobs, gateway pods that run as ingress-gateway Deployments rather than classic workload sidecars, and Helm chart workload-level overrides produce missing proxies by design; require annotation arms and inventory notes before escalating. CRD operators that need raw cluster API access sometimes opt out of injection; document those service accounts. Pods created before a namespace gained istio-injection=enabled need rolls to pick up mutating webhook changes; transient gaps after label flips are common until rollouts finish. Paused Deployments during freezes can hold stale pod specs indefinitely. Cilium ambient and similar sidecarless mesh modes intentionally omit istio-proxy containers; set ambient_mesh_ns=1 to avoid false positives. Services explicitly excluded from mesh policy or PeerAuthentication peer lists may not need sidecars even when a namespace is labeled—confirm architecture before paging. Recently restarted mesh control planes can create brief pilot_proxy_convergence_time spikes that resemble incidents; correlate with change tickets. kube-state-metrics version skew may drop kube_pod_annotations series temporarily, inflating apparent silence on inject=false counts—verify telemetry health first.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: kube-state-metrics, policy controller (optional).\n• Ensure the following data sources are available: `sourcetype=kube:pod:meta`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPeriodically snapshot pod container lists and namespace labels (`istio-injection`, etc.). Flag mismatches. Integrate with CI to fail deploys that skip injection in labeled namespaces.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:pod:meta\"\n| eval has_proxy=if(match(container_names, \"(istio-proxy|linkerd-proxy|envoy)\"),1,0)\n| where namespace_injection_enabled=1 AND has_proxy=0\n| table namespace pod_name container_names\n```\n\nUnderstanding this SPL\n\n**Sidecar Injection Validation** — Ensures service mesh or security sidecars are present where policy requires—avoiding accidental unencrypted east-west traffic.\n\nDocumented **Data sources**: `sourcetype=kube:pod:meta`. **App/TA** (typical add-on context): kube-state-metrics, policy controller (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:pod:meta. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:pod:meta\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **has_proxy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where namespace_injection_enabled=1 AND has_proxy=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Sidecar Injection Validation**): table namespace pod_name container_names\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (namespace, pod, missing sidecar), Compliance %, Bar chart by team.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We check that the small helpers that encrypt traffic between your applications are actually present wherever your rules say every workload should carry them, and we flag places where the rule says inject but running programs are missing those helpers so trust stays consistent inside the cluster.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.32",
              "n": "Namespace Quota Utilization Trending",
              "c": "high",
              "f": "beginner",
              "v": "Namespaces hitting CPU/memory/object quotas block rollouts; trending utilization prevents deployment freezes during releases.",
              "t": "Splunk OTel Collector",
              "d": "`sourcetype=kube:resourcequota:meta`",
              "q": "index=k8s sourcetype=\"kube:resourcequota:meta\"\n| eval used_pct = round(used / hard * 100, 1)\n| where used_pct > 90\n| table namespace resource used hard used_pct\n| sort -used_pct",
              "m": "Same quota feed as UC-3.2.4; use a stricter 90% threshold for release windows. Split alerts by resource type (cpu, memory, pods).",
              "z": "Stacked bar (used vs hard), Gauge per quota, Table.",
              "kfp": "Short-lived load-test namespaces created by k6, Locust, or vendor performance harnesses often spike cpu, memory, and pods quotas for minutes then vanish, producing steep synthetic slopes and phantom seventy-two hour forecasts; suppress namespaces tagged perf_test=true in namespace_ownership_inventory.csv or exclude them with a macro. Development clusters with deliberately tight quotas for cost control will perpetually ride elevated util_ratio_pct without customer impact; downgrade criticality using inventory tiers and require wow_noisy persistence across three windows before paging executives. GitOps reconciliation waves that pre-create ConfigMaps, Services, or PVC objects before workloads attach can temporarily inflate object-count ratios without steady growth; corroborate with deployment timestamps and Argo CD sync phases before treating as capacity emergencies. Namespace-quota administrators who tighten hard limits without immediate usage drops can flip util_ratio_pct upward without true burn; diff ResourceQuota objects against git and pause forecasts until used gauges stabilize. Namespaces hosting batch CronJobs may burn pod quota only on schedule ticks; linear slopes across sparse cron cadences mis-estimate hours_to_full unless you widen bins or require sustained growth flags. Namespaces in deprecation drain windows may show falling util_ratio while still noisy; exclude when kube_namespace_status_phase stops reporting Active consistently. Cluster-autoscaler buffer or over-provision namespaces maintained by platform teams can look like noisy consumers; mark them in quota_policy_notes and route to platform-only dashboards. GPU extended-resource metrics can reset when device plugins restart, creating false negative slopes; require two consecutive windows of coherent nvidia.com/gpu samples. Duplicate prometheus scrapes without dedup external labels can double-count kube_resourcequota values; validate one authoritative kube-state-metrics endpoint per cluster. Managed control planes that throttle metric cardinality may omit rare resource labels for one interval; treat missing join arms as telemetry gaps not tenant malice.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector.\n• Ensure the following data sources are available: `sourcetype=kube:resourcequota:meta`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSame quota feed as UC-3.2.4; use a stricter 90% threshold for release windows. Split alerts by resource type (cpu, memory, pods).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:resourcequota:meta\"\n| eval used_pct = round(used / hard * 100, 1)\n| where used_pct > 90\n| table namespace resource used hard used_pct\n| sort -used_pct\n```\n\nUnderstanding this SPL\n\n**Namespace Quota Utilization Trending** — Namespaces hitting CPU/memory/object quotas block rollouts; trending utilization prevents deployment freezes during releases.\n\nDocumented **Data sources**: `sourcetype=kube:resourcequota:meta`. **App/TA** (typical add-on context): Splunk OTel Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:resourcequota:meta. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:resourcequota:meta\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct > 90` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Namespace Quota Utilization Trending**): table namespace resource used hard used_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (used vs hard), Gauge per quota, Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch how full each team’s namespace budget is over time and estimate when that budget will be completely used. We tell platform leaders early so they can raise limits or trim usage before new work gets blocked.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.33",
              "n": "Node Drain Events",
              "c": "high",
              "f": "intermediate",
              "v": "Draining nodes for maintenance evicts workloads; correlating drains with pod disruption helps explain transient unavailability.",
              "t": "Kubernetes audit, controller logs",
              "d": "`sourcetype=kube:audit`, `sourcetype=kube:objects:events`",
              "q": "index=k8s sourcetype=\"kube:objects:events\" \"*drain*\" OR reason=\"NodeSchedulable\"\n| table _time involvedObject.name message",
              "m": "Capture cordon/drain API calls via audit. Dashboard maintenance windows. Alert on unexpected uncordon outside change windows.",
              "z": "Timeline (drain/cordon), Table (user, node), Map of affected nodes.",
              "kfp": "Scheduled maintenance windows that are tracked in k8s_approved_drain_windows.csv or a wider change-management lookup will legitimately produce dense audit and eviction traffic without implying an incident; require absent tickets or closed windows before paging executives. Automated kured drains for kernel patching repeat cordon and eviction patterns every reboot cycle across the fleet; tag those clusters in the lookup or exclude the kured service account from high severity when tickets are standing. Cluster-autoscaler scale-down during off-peak hours removes underutilized nodes and emits voluntary disruption signals that resemble human kubectl drain; correlate kube-system cluster-autoscaler identity and node lifecycle timestamps before blaming application teams. Surge-upgrade rolling node replacement on managed control planes—GKE node pool surge parameters, EKS managed node group rolling update behavior, and AKS surge settings—creates overlapping patch and eviction rows that look noisy yet are expected when cloud consoles show an active upgrade. Karpenter consolidation drains for intentional bin-packing issue eviction creates with controller service accounts; severity should reference consolidation logs and documented FinOps policy. Cluster API rolling upgrades driven by MachineDeployments produce repeated node replacements; map cluster-api controller identities in the lookup. Off-hours batch infrastructure rotation and blue-green node pool migrations may spike pending counts briefly while new pools warm; widen suppression when dual write paths stay healthy and tickets reference pool cutovers. Finally, audit latency or missing RequestResponse bodies can hide true cordon semantics and make rex arms match too broadly—validate audit policy depth after upgrades before muting.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kubernetes audit, controller logs.\n• Ensure the following data sources are available: `sourcetype=kube:audit`, `sourcetype=kube:objects:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture cordon/drain API calls via audit. Dashboard maintenance windows. Alert on unexpected uncordon outside change windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:objects:events\" \"*drain*\" OR reason=\"NodeSchedulable\"\n| table _time involvedObject.name message\n```\n\nUnderstanding this SPL\n\n**Node Drain Events** — Draining nodes for maintenance evicts workloads; correlating drains with pod disruption helps explain transient unavailability.\n\nDocumented **Data sources**: `sourcetype=kube:audit`, `sourcetype=kube:objects:events`. **App/TA** (typical add-on context): Kubernetes audit, controller logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:objects:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:objects:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Node Drain Events**): table _time involvedObject.name message\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (drain/cordon), Table (user, node), Map of affected nodes.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We keep a careful diary when computers in your container fleet are emptied on purpose for upgrades or resizing. We record who gave the order, how long the work took, and whether your applications bounced back quickly. That way surprises do not hide behind normal charts.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.34",
              "n": "Cluster DNS Resolution Failures",
              "c": "critical",
              "f": "advanced",
              "v": "CoreDNS failures cause widespread `SERVFAIL` and intermittent app errors; monitoring query errors and upstream timeouts is essential.",
              "t": "CoreDNS log forwarding, Prometheus metrics",
              "d": "`sourcetype=kube:coredns`, `sourcetype=kube:metrics`",
              "q": "index=k8s sourcetype=\"kube:metrics\" metric_name=\"coredns_dns_responses_total\"\n| stats sum(_value) as responses by rcode\n| where rcode!=\"NOERROR\" AND rcode!=\"NXDOMAIN\"",
              "m": "Forward CoreDNS logs with response code. Scrape `coredns_dns_responses_total` by rcode. Alert on SERVFAIL spike or upstream forward errors.",
              "z": "Line chart (errors by rcode), Table (qname, count), Single value (SERVFAIL/min).",
              "kfp": "Rolling restarts of the CoreDNS Deployment emit short SERVFAIL or timeout bursts while endpoints drain and new pods pass readiness; dampen alerts to two consecutive evaluation windows or correlate with Kubernetes ReplicaSet revision timestamps before paging. NXDOMAIN spikes often trace to mis-typed Service names, canary namespaces, or security scanners hammering random labels; exclude known synthetic namespaces via coredns_oncall_routing.csv so caller mistakes do not wake platform on-call. Cache hit ratio drops sharply after cold cluster start or mass node replacement because in-process caches empty; require sustained low cache_hit_ratio_pct beyond fifteen minutes or pair with elevated forward query volume before treating as misconfiguration. Upstream public resolver or cloud DNS hiccups forward as transient SERVFAIL on the forward plugin counters; many clear within seconds and need ticket-only follow-up unless customer impact is already visible. Horizontal Pod Autoscaler scale-out from two to four CoreDNS replicas increases per-pod query share and can look like a load anomaly until you normalize rates by ready_pod_count; suppress alerts when spec replica changes align with HPA events. EKS managed CoreDNS add-on upgrades and GKE master upgrades can reorder pods without functional loss; annotate maintenance tickets beside the timeline. NodeLocal DNSCache rollouts temporarily shift histograms while traffic paths change; compare CoreDNS-facing rates against node-local metrics before blaming CoreDNS CPU. Broken negative testing in CI that points at non-existent Services can flood NXDOMAIN; route those clusters to lower severity tiers using lookup columns.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CoreDNS log forwarding, Prometheus metrics.\n• Ensure the following data sources are available: `sourcetype=kube:coredns`, `sourcetype=kube:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward CoreDNS logs with response code. Scrape `coredns_dns_responses_total` by rcode. Alert on SERVFAIL spike or upstream forward errors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:metrics\" metric_name=\"coredns_dns_responses_total\"\n| stats sum(_value) as responses by rcode\n| where rcode!=\"NOERROR\" AND rcode!=\"NXDOMAIN\"\n```\n\nUnderstanding this SPL\n\n**Cluster DNS Resolution Failures** — CoreDNS failures cause widespread `SERVFAIL` and intermittent app errors; monitoring query errors and upstream timeouts is essential.\n\nDocumented **Data sources**: `sourcetype=kube:coredns`, `sourcetype=kube:metrics`. **App/TA** (typical add-on context): CoreDNS log forwarding, Prometheus metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rcode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where rcode!=\"NOERROR\" AND rcode!=\"NXDOMAIN\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (errors by rcode), Table (qname, count), Single value (SERVFAIL/min).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "Every app inside a Kubernetes cluster looks up other apps by name through the cluster DNS service. When that DNS service slows or fails, even healthy apps appear broken to each other. We watch that cluster address book so the shared resolver is never invisible when things go wrong.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "prometheus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.35",
              "n": "Pod Anti-Affinity Violations",
              "c": "medium",
              "f": "advanced",
              "v": "Scheduling cannot always satisfy anti-affinity; detecting pending pods or topology spread skew avoids accidental single-AZ concentration.",
              "t": "kube-scheduler logs, Kubernetes events",
              "d": "`sourcetype=kube:scheduler`, `sourcetype=kube:events`",
              "q": "index=k8s sourcetype=\"kube:events\" reason=\"FailedScheduling\"\n| search \"affinity\" OR \"anti-affinity\" OR \"topology spread\"\n| stats count by namespace, involvedObject.name, message\n| sort -count",
              "m": "Capture scheduler `FailedScheduling` messages with affinity terms. Optional: compare replica distribution by zone label versus `topologySpreadConstraints`. Alert when scheduling failures mention anti-affinity for >10 minutes.",
              "z": "Table (pod, message), Bar chart by zone (replica counts), Timeline.",
              "kfp": "Legitimately single-zone workloads are expected in single-AZ test clusters, many development namespaces, and specialized low-latency services where architects intentionally disable hard spread. Recently launched Deployments still scaling up can show temporary single-zone histograms until the rolling update finishes across zones. Clusters that temporarily have only one functional AZ during provider incidents will concentrate replicas without malicious intent; pair provider status pages before paging application teams. Capacity-constrained zones with cordoned nodes can force skew until capacity returns; corroborate node cordon and taint narratives. Intentional same-host placement for DaemonSet-like patterns or hostPort constraints can look like skew when grouped by Deployment name; filter known DaemonSet namespaces via lookup flags. Workloads with whenUnsatisfiable ScheduleAnyway are advisory at scheduling time; this UC deliberately surfaces silent skew as lower certainty findings rather than hard blocks. Workloads with requiredDuringScheduling node or pod affinity scoped to one zone for latency may always show zone_distinct equals one by design; inventory those rows. Transient evictions and rescheduling storms can temporarily distort histograms for one or two scrape intervals; require sustained skew or explicit FailedScheduling text before executive escalation. Scale-down-in-progress clusters may drain alternate zones first and look imbalanced until the operation completes. Autoscaler or Karpenter activity still adding nodes in alternate zones can overlap with skew signals; correlate UC-3.2.46 logs and metrics to avoid blaming spread policy when nodes are simply not Ready yet.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: kube-scheduler logs, Kubernetes events.\n• Ensure the following data sources are available: `sourcetype=kube:scheduler`, `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture scheduler `FailedScheduling` messages with affinity terms. Optional: compare replica distribution by zone label versus `topologySpreadConstraints`. Alert when scheduling failures mention anti-affinity for >10 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:events\" reason=\"FailedScheduling\"\n| search \"affinity\" OR \"anti-affinity\" OR \"topology spread\"\n| stats count by namespace, involvedObject.name, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Pod Anti-Affinity Violations** — Scheduling cannot always satisfy anti-affinity; detecting pending pods or topology spread skew avoids accidental single-AZ concentration.\n\nDocumented **Data sources**: `sourcetype=kube:scheduler`, `sourcetype=kube:events`. **App/TA** (typical add-on context): kube-scheduler logs, Kubernetes events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by namespace, involvedObject.name, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (pod, message), Bar chart by zone (replica counts), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch when copies of an application pile into one geographic pocket or refuse to spread because placement rules cannot be met. We catch that early so teams fix spread settings before one pocket going dark takes the whole service.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.36",
              "n": "Namespace Resource Limit Enforcement",
              "c": "high",
              "f": "intermediate",
              "v": "`LimitRange` defaults and max per-container caps prevent one pod from consuming a whole namespace budget; violations indicate chart misconfigurations.",
              "t": "Kubernetes events, admission audit",
              "d": "`sourcetype=kube:objects:events`, `sourcetype=kube:audit`",
              "q": "index=k8s sourcetype=\"kube:audit\" objectRef.resource=\"pods\" responseStatus.code=422\n| stats count by objectRef.namespace, user.username",
              "m": "Track events when requests exceed LimitRange. Use audit 422 responses. Dashboard per namespace against documented standards.",
              "z": "Table (namespace, workload, reason), Timeline, Single value (limit violations/day).",
              "kfp": "Pre-existing pods created before a LimitRange introduced defaultRequest or default limits will not be re-validated by admission and can look like missing default injection until they roll. DaemonSet pods that use priorityClass system-cluster-critical or similar platform classes may follow exemption patterns your governance documents already allow; mark those namespaces in limitrange_bypass_allowlist.csv instead of paging application teams. Stateful workloads under vertical pod autoscaler Auto mode can temporarily diverge from LimitRange min while recommendations converge; widen time windows or require sustained violation before executive pages. kube-system and istio-system namespaces often intentionally omit LimitRange objects; keep explicit bypass rows with rationale. Build and CI namespaces may show intentional ratio drift during load generation; route those namespaces to low severity or exclude them. Helm chart upgrades in flight can leave transient pods on old specs that clear within one reconciliation loop; demand two consecutive intervals before paging. Argo CD wave ordering can drop a replica before injecting a new spec, producing short-lived odd ratios that self-heal. Cluster autoscaler eviction events can relocate LimitRange-aware pods without a policy change; correlate node events before blaming LimitRange edits. LimitRange recently edited by operators can churn metrics while controllers catch up; treat recent_spec_churn as informational unless customer-visible saturation coincides.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kubernetes events, admission audit.\n• Ensure the following data sources are available: `sourcetype=kube:objects:events`, `sourcetype=kube:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack events when requests exceed LimitRange. Use audit 422 responses. Dashboard per namespace against documented standards.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:audit\" objectRef.resource=\"pods\" responseStatus.code=422\n| stats count by objectRef.namespace, user.username\n```\n\nUnderstanding this SPL\n\n**Namespace Resource Limit Enforcement** — `LimitRange` defaults and max per-container caps prevent one pod from consuming a whole namespace budget; violations indicate chart misconfigurations.\n\nDocumented **Data sources**: `sourcetype=kube:objects:events`, `sourcetype=kube:audit`. **App/TA** (typical add-on context): Kubernetes events, admission audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by objectRef.namespace, user.username** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (namespace, workload, reason), Timeline, Single value (limit violations/day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the per-container resource guardrails in each namespace so teams notice when old apps never picked up new defaults, when someone pushes limits past the published ceiling, or when policy changes faster than the cluster can settle.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.37",
              "n": "Pod Disruption Budget Violations",
              "c": "high",
              "f": "intermediate",
              "v": "PDBs protect availability during voluntary disruptions; monitoring expected vs healthy pods avoids accidental full service outages during drains.",
              "t": "kube-state-metrics",
              "d": "`sourcetype=kube:metrics`, `sourcetype=kube:events`",
              "q": "index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_poddisruptionbudget_status_expected_pods\"\n| stats latest(_value) as expected by namespace, poddisruptionbudget\n| join type=left max=1 namespace poddisruptionbudget [\n    search index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_poddisruptionbudget_status_current_healthy\"\n    | stats latest(_value) as healthy by namespace, poddisruptionbudget\n]\n| where isnotnull(healthy) AND healthy<expected\n| table namespace poddisruptionbudget expected healthy",
              "m": "Scrape PDB status metrics; correlate with `Cannot evict pod` events during drains. Alert when healthy < expected minimum for PDB.",
              "z": "Table (PDB, healthy vs expected), Timeline (blocked evictions), Status panel.",
              "kfp": "PodDisruptionBudget objects are sometimes pinned intentionally with minAvailable equal to replica count during a narrow release ceremony so every replica stays protected while traffic shifts at the edge; those windows look like zero headroom or zero disruptionsAllowed but are deliberate if the change record and workload_pdb_policy.csv notes column document the blackout. Single-replica StatefulSets and databases that delegate high availability to the application layer may keep disruptionsAllowed at zero on purpose; downgrade using inventory notes rather than treating as sev-one. Active rolling deployments can create transient deficits where current_healthy lags desired_healthy by design until new pods pass readiness; require sustained chronic_below_desired across two evaluation intervals or corroborate with UC-3.2.6 rollout signals before executive escalation. Namespaces in deprecation drain that explicitly removed budgets to force eviction may emit governance_gap rows; verify namespace lifecycle state before paging application owners. Spot instance node groups with high churn can temporarily violate budget headroom while the cluster elastically backfills; pair node termination timelines and UC-3.2.46 context before blaming application health. Test clusters, chaos namespaces, and progressive delivery canary windows may generate EvictionDenied noise during approved experiments; suppress using workload_tier and annotations mirrored into the lookup. Brief kube-state-metrics scrape gaps after upgrades can look like missing budgets cluster-wide; demand corroborating collector health before reopening governance tickets. Helm or GitOps rename storms can orphan budgets until the next sync; treat short spikes as operational debt when change records exist. Pen testers generating eviction traffic will trip eviction lanes by design; tag those namespaces in the lookup. Regional replicas sharing logical names need distinct cluster keys in CSV or joins collapse expectations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: kube-state-metrics.\n• Ensure the following data sources are available: `sourcetype=kube:metrics`, `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScrape PDB status metrics; correlate with `Cannot evict pod` events during drains. Alert when healthy < expected minimum for PDB.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_poddisruptionbudget_status_expected_pods\"\n| stats latest(_value) as expected by namespace, poddisruptionbudget\n| join type=left max=1 namespace poddisruptionbudget [\n    search index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_poddisruptionbudget_status_current_healthy\"\n    | stats latest(_value) as healthy by namespace, poddisruptionbudget\n]\n| where isnotnull(healthy) AND healthy<expected\n| table namespace poddisruptionbudget expected healthy\n```\n\nUnderstanding this SPL\n\n**Pod Disruption Budget Violations** — PDBs protect availability during voluntary disruptions; monitoring expected vs healthy pods avoids accidental full service outages during drains.\n\nDocumented **Data sources**: `sourcetype=kube:metrics`, `sourcetype=kube:events`. **App/TA** (typical add-on context): kube-state-metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, poddisruptionbudget** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnotnull(healthy) AND healthy<expected` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Pod Disruption Budget Violations**): table namespace poddisruptionbudget expected healthy\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (PDB, healthy vs expected), Timeline (blocked evictions), Status panel.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the cluster rules that keep enough copies of important programs running when machines go offline for repairs. We notice when those rules are missing, too loose to help, tight enough to block safe maintenance, or fighting the system when an eviction is refused.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.38",
              "n": "Vertical Pod Autoscaler Recommendations",
              "c": "medium",
              "f": "advanced",
              "v": "VPA recommendation divergence from actual requests drives right-sizing and prevents CPU starvation when recommendations are not applied.",
              "t": "VPA metrics export, `kubectl describe vpa` JSON job",
              "d": "`sourcetype=kube:metrics`, `sourcetype=kube:vpa:status`",
              "q": "index=k8s sourcetype=\"kube:metrics\" metric_name=\"vpa_recommendation_target_cpu\"\n| stats latest(_value) as target_millicores by namespace, verticalpodautoscaler\n| join type=left max=1 namespace verticalpodautoscaler [\n    search index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_pod_container_resource_requests\" resource=\"cpu\"\n    | stats latest(_value) as request_millicores by namespace, pod\n]\n| eval gap_m=abs(target_millicores-request_millicores)\n| where gap_m>500\n| table namespace verticalpodautoscaler target_millicores request_millicores gap_m",
              "m": "Ingest VPA recommendation metrics (or periodic JSON status). Compare recommendation to live requests. Alert on large sustained gaps for tier-1 workloads.",
              "z": "Table (workload, target vs request), Line chart (recommendation drift), Bar chart (gap).",
              "kfp": "Workloads launched within the last day can look severely wrong-sized because the recommender has not accumulated twenty-four hours of history yet; delay executive pages until the recommender checkpoint matures or exclude brand-new deployments via inventory birth_epoch fields. Off-mode VerticalPodAutoscaler objects are intentionally recommend-only; sustained drift there is a planning backlog, not an eviction emergency, unless finance mandates faster right-sizing SLAs. Batch and CronJob profiles with bursty CPU spikes can push targets far above steady-state requests; pair this detector with business calendars or require longer rolling windows before paging SRE leadership. FinOps guardrails that set maxAllowed CPU below the honest recommendation will raise cap_breach even when behavior is deliberate cost control; annotate those rows in vpa_right_sizing_inventory.csv instead of treating them as unknown misconfiguration. HPA-controlled CPU targets on the same Deployment as VPA admission create known interaction hazards; expect hpa_conflict positives during legitimate architecture transitions and route to platform architecture review rather than automatic rollback. Planned blue-green cutovers temporarily multiply EvictedByVPA noise while both colors exist; tie suppressions to change tickets. VPA admission controller restarts or webhook TLS rotations can stall mutations and look like drift when pods cannot refresh; correlate apiserver webhook latency before blaming recommender math. Recent VerticalPodAutoscaler spec edits from Argo CD sync waves can re-trigger updater loops without customer regressions; demand sustained drift across two intervals before paging. DaemonSets pinned to node-local agents are often ineligible for meaningful VPA; mark them ineligible rather than chasing coverage gaps. Scheduled load tests and game-day injectors skew recommender distributions for hours; use macro exclusions. Cluster autoscaler or HPA-driven pod churn can inflate eviction counts without a single bad VPA object; corroborate with UC-3.2.17 and UC-3.2.46 before declaring an admission storm. CronJob ephemeral pods may never receive VPA objects; exclude those namespaces or require sustained signals on long-lived controllers only.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: VPA metrics export, `kubectl describe vpa` JSON job.\n• Ensure the following data sources are available: `sourcetype=kube:metrics`, `sourcetype=kube:vpa:status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest VPA recommendation metrics (or periodic JSON status). Compare recommendation to live requests. Alert on large sustained gaps for tier-1 workloads.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:metrics\" metric_name=\"vpa_recommendation_target_cpu\"\n| stats latest(_value) as target_millicores by namespace, verticalpodautoscaler\n| join type=left max=1 namespace verticalpodautoscaler [\n    search index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_pod_container_resource_requests\" resource=\"cpu\"\n    | stats latest(_value) as request_millicores by namespace, pod\n]\n| eval gap_m=abs(target_millicores-request_millicores)\n| where gap_m>500\n| table namespace verticalpodautoscaler target_millicores request_millicores gap_m\n```\n\nUnderstanding this SPL\n\n**Vertical Pod Autoscaler Recommendations** — VPA recommendation divergence from actual requests drives right-sizing and prevents CPU starvation when recommendations are not applied.\n\nDocumented **Data sources**: `sourcetype=kube:metrics`, `sourcetype=kube:vpa:status`. **App/TA** (typical add-on context): VPA metrics export, `kubectl describe vpa` JSON job. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, verticalpodautoscaler** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **gap_m** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap_m>500` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Vertical Pod Autoscaler Recommendations**): table namespace verticalpodautoscaler target_millicores request_millicores gap_m\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (workload, target vs request), Line chart (recommendation drift), Bar chart (gap).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch whether the cluster's vertical sizing hints for computer and memory line up with what applications actually reserve, and we raise the alarm when sizing advice goes stale, when automatic mode evicts too aggressively, or when important apps never got a sizing hook.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.39",
              "n": "Kubernetes Events Anomaly Detection",
              "c": "high",
              "f": "advanced",
              "v": "Sudden `Warning` event storms often precede control plane or network incidents; statistical baselines catch abnormal rates per namespace.",
              "t": "Splunk ML Toolkit (optional) or scheduled analytics",
              "d": "`sourcetype=kube:objects:events`",
              "q": "index=k8s sourcetype=\"kube:objects:events\" type=\"Warning\"\n| bin _time span=15m\n| stats count as warn_count by _time, namespace\n| eventstats avg(warn_count) as avg_w stdev(warn_count) as sd by namespace\n| eval z=if(sd>0 AND sd!=null, (warn_count-avg_w)/sd, 0)\n| where abs(z)>3 AND warn_count>10\n| table _time namespace warn_count avg_w z\n| sort -warn_count",
              "m": "Baseline Warning rate per namespace with rolling stdev. Tune thresholds for chatty namespaces. Optional: replace with `anomalydetection` or MLTK for seasonality.",
              "z": "Timechart with overlay, Table (namespace, spike z-score), Single value (anomaly intervals).",
              "kfp": "Planned control-plane upgrades and kubelet rollouts legitimately raise NodeNotReady and NodeReady churn pairs; annotate maintenance windows in cluster_platform_routing.csv or require two consecutive anomalous buckets after the declared window before paging executives. Controller-manager and scheduler leader-election churn during upgrades emits bursts of Warning events that resemble incidents; pair timestamps with vendor maintenance tickets and suppress fleet_anomaly_flag-only medium rows when apiserver audit mutate traffic shows only expected lease rotations. Batch cron waves at clock boundaries (for example 00:00 hourly) can spike FailedSync or mount warnings when thousands of Jobs start together; add a cron_guard macro that dampens severity for namespaces listed as batch_senders in the lookup. GitOps engines that reapply large manifests after a stash or rollback create short Warning storms from reconciliation controllers; correlate with Git commit rate or Argo CD sync counters before opening sev-one bridges. Scheduled CSI snapshot controllers sometimes emit many volume attach or mount warnings during snapshot cut-overs; exclude snapshotter namespaces when snapshot_policy=managed appears in your routing table. Controlled chaos experiments (Chaos Mesh, LitmusChaos, Gremlin) intentionally stress nodes and network paths; set suppress_eventstorm_dev or a chaos_active flag on participating clusters for the drill duration. Regional failover drills and active-active traffic shifts can duplicate events across paired clusters; compare warn_eps to fleet_eps_p90 only inside the same region label to avoid false cross-region anomalies. Development and lab clusters with artificial load generators routinely exceed production z-scores; keep suppress_eventstorm_dev=1 on those rows unless critical severity fires. Finally, first-time ingestion of a second event feed (adding eventrouter beside kube:events) can double counts until deduplication keys are applied; treat baseline_avg shifts after feed changes as a configuration incident, not a platform outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ML Toolkit (optional) or scheduled analytics.\n• Ensure the following data sources are available: `sourcetype=kube:objects:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline Warning rate per namespace with rolling stdev. Tune thresholds for chatty namespaces. Optional: replace with `anomalydetection` or MLTK for seasonality.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:objects:events\" type=\"Warning\"\n| bin _time span=15m\n| stats count as warn_count by _time, namespace\n| eventstats avg(warn_count) as avg_w stdev(warn_count) as sd by namespace\n| eval z=if(sd>0 AND sd!=null, (warn_count-avg_w)/sd, 0)\n| where abs(z)>3 AND warn_count>10\n| table _time namespace warn_count avg_w z\n| sort -warn_count\n```\n\nUnderstanding this SPL\n\n**Kubernetes Events Anomaly Detection** — Sudden `Warning` event storms often precede control plane or network incidents; statistical baselines catch abnormal rates per namespace.\n\nDocumented **Data sources**: `sourcetype=kube:objects:events`. **App/TA** (typical add-on context): Splunk ML Toolkit (optional) or scheduled analytics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:objects:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:objects:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, namespace** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by namespace** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where abs(z)>3 AND warn_count>10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Kubernetes Events Anomaly Detection**): table _time namespace warn_count avg_w z\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Timechart with overlay, Table (namespace, spike z-score), Single value (anomaly intervals).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch for unusual bursts of cluster warning messages before small problems turn into big outages. When the warning traffic pattern jumps far above its own normal rhythm, we raise a clear signal so people can investigate early.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.40",
              "n": "Persistent Volume Snapshot Status",
              "c": "medium",
              "f": "intermediate",
              "v": "Failed or stale snapshots break restore RPO; monitoring `VolumeSnapshot` and CSI driver status supports backup verification.",
              "t": "Kubernetes events, CSI driver metrics",
              "d": "`sourcetype=kube:objects:events`, `sourcetype=kube:metrics`",
              "q": "index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_volume_snapshot_ready\"\n| where _value=0\n| table namespace volumesnapshot _time",
              "m": "Forward VolumeSnapshot events and optional readiness gauge from CSI. Alert on failed snapshot jobs or readiness stuck false >1 hour.",
              "z": "Table (snapshot, status), Timeline, Single value (failed snapshots 24h).",
              "kfp": "Planned webhook operator upgrades restart pods and can produce short five-xx bursts and missing Endpoint events that look like outages; require two consecutive windows or correlate with change tickets before paging production bridges. Transient TLS handshake failures during cert-manager or internal PKI rotation are common for tens of seconds when caBundle updates race controller restarts; pair with certificate notAfter telemetry from UC-3.2.13 before declaring hook death. DNS-01 ACME challenges for cert-manager webhook certificates may legitimately take more than thirty seconds while public DNS propagates; suppress latency alerts on cert-manager hooks during documented issuance windows. Gatekeeper constraint template library reloads and Kyverno policy library syncs temporarily increase admission latency while caches warm; treat single-bucket spikes as benign when GitOps commits show only library bumps. Network blips between apiserver and webhook Services on large clusters create correlated timeouts during AZ maintenance; cross-check cloud provider status and node-to-control-plane routing before fail-closed policy changes. Namespace-scoped validating hooks that intentionally deny misconfigured pods in test namespaces will raise rejection counts by design; mark those namespaces or hooks with exception_expiry in webhook_policy_governance.csv or lower severity. Webhooks configured with failurePolicy=Ignore may still increment rejection metrics when policies deny requests while allowing the request to continue on hook transport errors—read expected_failure_policy before treating rejections as blocking incidents. Managed Kubernetes control planes sometimes restart apiserver instances during transparent maintenance, which can look like webhook instability when metrics scrape aligns with restart; verify with UC-3.2.7 before blaming Kyverno. High-cardinality audit sampling or Metadata-only audit policies can hide webhook messages while metrics look healthy; never silence this UC on metrics alone when audit volume drops unexpectedly.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kubernetes events, CSI driver metrics.\n• Ensure the following data sources are available: `sourcetype=kube:objects:events`, `sourcetype=kube:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward VolumeSnapshot events and optional readiness gauge from CSI. Alert on failed snapshot jobs or readiness stuck false >1 hour.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_volume_snapshot_ready\"\n| where _value=0\n| table namespace volumesnapshot _time\n```\n\nUnderstanding this SPL\n\n**Persistent Volume Snapshot Status** — Failed or stale snapshots break restore RPO; monitoring `VolumeSnapshot` and CSI driver status supports backup verification.\n\nDocumented **Data sources**: `sourcetype=kube:objects:events`, `sourcetype=kube:metrics`. **App/TA** (typical add-on context): Kubernetes events, CSI driver metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where _value=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Persistent Volume Snapshot Status**): table namespace volumesnapshot _time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (snapshot, status), Timeline, Single value (failed snapshots 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the extra safety gates that outside tools attach to every cluster change. When those gates get slow, sick, or too strict, ordinary work stops—we surface that early so teams fix the gate before everything piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.41",
              "n": "Service Endpoint Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Services with zero ready endpoints drop traffic for ClusterIP clients; fast detection isolates label selector and readiness probe issues.",
              "t": "kube-state-metrics",
              "d": "`sourcetype=kube:metrics`",
              "q": "index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_endpoint_address_available\"\n| stats latest(_value) as avail by namespace, service\n| join type=left max=1 namespace service [\n    search index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_endpoint_address_not_ready\"\n    | stats latest(_value) as not_ready by namespace, service\n]\n| where avail=0 OR not_ready>0\n| table namespace service avail not_ready",
              "m": "Scrape EndpointSlice metrics (`kube_endpoint_*`). Exclude headless where appropriate. Alert when `available==0` for production Services.",
              "z": "Table (service, endpoints), Status grid, Line chart (ready endpoints).",
              "kfp": "Deliberate scale-to-zero for batch workers, cost-saving off-hours namespaces, or Knative scale-down idle targets will legitimately show zero ready endpoints; suppress with critical_services.csv tier1_sli=0 or a workload annotation mirrored into the lookup. Headless Services backing StatefulSets often oscillate address lists as pods reschedule; pair this alert with a minimum dwell timer or require both legacy kube_endpoint_address_available and kube_endpointslice_endpoints_ready_sum at zero for several scrapes before paging. Services without selectors that rely on manually maintained Endpoints objects for off-cluster backends can appear empty when those manual Endpoint subsets are intentionally cleared during maintenance; exclude namespaces that only front external databases. ExternalName Services and some mesh ingress bypass patterns do not use local Endpoints the same way; the SPL drops ExternalName when kube_service_spec_type is populated, but mis-extracted types may still leak through on noisy scrapes—validate kube_service_spec_type joins after kube-state-metrics upgrades. Node cordons and drains temporarily remove endpoints while pods reschedule; correlate with node readiness and cluster-autoscaler events before treating as an application defect. Brief gaps during rolling updates can appear when readinessProbe initialDelaySeconds is longer than the time pods spend Ready=false during container restarts; widen alert suppression windows for stateless fleets or require endpoint event reasons not just metric zeros. Namespace ResourceQuota objects that block new pods leave selectors matching zero pods; the metric looks like an application outage but is quota—route using kube events FailedCreate and quota metrics. NetworkPolicy misconfiguration that drops traffic after endpoints exist may not reduce ready counts; this UC will not fire and UC siblings covering policy drops should be used instead. CI namespaces that constantly delete Services during integration tests can spam the search; exclude ci-* namespaces at the saved-search layer.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: kube-state-metrics.\n• Ensure the following data sources are available: `sourcetype=kube:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScrape EndpointSlice metrics (`kube_endpoint_*`). Exclude headless where appropriate. Alert when `available==0` for production Services.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_endpoint_address_available\"\n| stats latest(_value) as avail by namespace, service\n| join type=left max=1 namespace service [\n    search index=k8s sourcetype=\"kube:metrics\" metric_name=\"kube_endpoint_address_not_ready\"\n    | stats latest(_value) as not_ready by namespace, service\n]\n| where avail=0 OR not_ready>0\n| table namespace service avail not_ready\n```\n\nUnderstanding this SPL\n\n**Service Endpoint Health** — Services with zero ready endpoints drop traffic for ClusterIP clients; fast detection isolates label selector and readiness probe issues.\n\nDocumented **Data sources**: `sourcetype=kube:metrics`. **App/TA** (typical add-on context): kube-state-metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where avail=0 OR not_ready>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Service Endpoint Health**): table namespace service avail not_ready\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (service, endpoints), Status grid, Line chart (ready endpoints).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the cluster’s internal address book for each service. When that book says nobody is ready to receive traffic, calls can fail even if the public website still looks fine, so we raise a clear signal before customers get stuck.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.42",
              "n": "Kubelet Certificate Rotation",
              "c": "high",
              "f": "advanced",
              "v": "Kubelet client/server cert expiry breaks node registration and pod lifecycle; tracking rotation events prevents surprise NotReady storms.",
              "t": "kubelet logs, node cert exporter",
              "d": "`sourcetype=kube:kubelet`, `sourcetype=kube:node:cert`",
              "q": "index=k8s sourcetype=\"kube:node:cert\" role=\"kubelet\"\n| eval days_left=round((not_after-now())/86400,0)\n| where days_left<30\n| table host role days_left",
              "m": "Forward kubelet logs and optional script exporting kubelet cert `NotAfter`. Alert at 30/14 days for self-managed rotation.",
              "z": "Table (node, days left), Timeline (rotation success), Single value (nodes expiring <30d).",
              "kfp": "Clusters that mandate manual CSR review introduce hours of deliberate latency while security tickets accumulate; treat Approved=false as expected until SLA breach, using kubelet_node_posture.csv notes to lengthen suppression windows rather than disabling the control. Freshly bootstrapped nodes may sit pending first signature overnight during change freezes; require sustained backlog or combine with days_to_expiry before paging. Amazon EKS, Google GKE, and Microsoft AKS managed node pools sometimes obscure host-level PEM inspection while provider automation still rotates; rows with only API signals should downgrade when lookup marks managed_signer and host openssl is unavailable by design. Bare-metal estates using cert-manager, AWS Private CA, HashiCorp Vault PKI, or CMP gateways can see multi-hour signing windows that look like stuck CSRs during CA maintenance; correlate external signer health before blaming kubelet. Lab clusters that intentionally set one-day certificate lifetimes for rotation drills will page frequently unless you set expect_auto_rotation=0 or maintenance suppress metadata for those clusters. Kubelet behavior under disk-pressure eviction can fail writes to /var/lib/kubelet/pki even when apiserver and signer are healthy; pair alerts with node pressure metrics and inode charts before escalating to PKI teams. Short-lived spot interruptions may kill nodes before renewal completes; distinguish provider churn from pipeline defects using cloud termination metadata. Audit sampling or Metadata-only policies may omit CSR correlates; never silence the UC solely because audit is thin—investigate collector policy instead.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: kubelet logs, node cert exporter.\n• Ensure the following data sources are available: `sourcetype=kube:kubelet`, `sourcetype=kube:node:cert`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward kubelet logs and optional script exporting kubelet cert `NotAfter`. Alert at 30/14 days for self-managed rotation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:node:cert\" role=\"kubelet\"\n| eval days_left=round((not_after-now())/86400,0)\n| where days_left<30\n| table host role days_left\n```\n\nUnderstanding this SPL\n\n**Kubelet Certificate Rotation** — Kubelet client/server cert expiry breaks node registration and pod lifecycle; tracking rotation events prevents surprise NotReady storms.\n\nDocumented **Data sources**: `sourcetype=kube:kubelet`, `sourcetype=kube:node:cert`. **App/TA** (typical add-on context): kubelet logs, node cert exporter. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:node:cert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:node:cert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left<30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Kubelet Certificate Rotation**): table host role days_left\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (node, days left), Timeline (rotation success), Single value (nodes expiring <30d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch each worker machine refresh its security badges automatically before they expire. If the refresh gets stuck, we warn you early so the machine does not go quiet and programs are not forced to move unexpectedly.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.43",
              "n": "Container Probe Failure Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Repeated readiness/liveness probe failures indicate dependency outages or mis-tuned thresholds before user-visible errors dominate.",
              "t": "kubelet logs, Kubernetes events",
              "d": "`sourcetype=kube:kubelet`, `sourcetype=kube:objects:events`",
              "q": "index=k8s sourcetype=\"kube:objects:events\" reason=\"Unhealthy\"\n| stats count by namespace, involvedObject.name, message",
              "m": "Collect kubelet probe log lines. Bucket by container. Alert on sustained probe failure rate after deployments.",
              "z": "Table (pod, container, count), Timeline, Bar chart by workload.",
              "kfp": "Brief readiness failures during heavy JVM garbage-collection pauses are common when stop-the-world events exceed probe timeouts; widen thresholds or tune garbage-collection ergonomics before paging production. Intentional graceful-shutdown paths may return failed for readiness or liveness while SIGTERM handlers drain connections; pair Splunk rows with maintenance annotations rather than treating every row as a defect. Slow-start applications during Vault post-unseal or secret hydration windows can fail startup probes until identity platforms stabilize; route identity on-call when messages reference unseal or token semantics. Database backup jobs that pause HTTP listeners on purpose resemble outages; require maintenance calendar correlation. .NET just-in-time compilation and first-request warm-up can fail HTTP probes on cold pods; use startupProbe budgets or warm-up sidecars. Feature-flag flips that change dependency health checks may flap readiness without infrastructure failure; compare flag change logs to alert timestamps. Scheduled maintenance that deliberately returns 503 on health endpoints should carry suppression metadata in routing lookups. Blue-green or canary cuts can generate short probe failure bursts; require sustained dwell or combine event and kubelet counter arms before sev-one pages. Test clusters running chaos experiments on probes will flood the search; keep workload_tier dev defaults in k8s_namespace_tier.csv so severity stays low unless promoted namespaces override.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "N/A (operational reliability monitoring)"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: kubelet logs, Kubernetes events.\n• Ensure the following data sources are available: `sourcetype=kube:kubelet`, `sourcetype=kube:objects:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect kubelet probe log lines. Bucket by container. Alert on sustained probe failure rate after deployments.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:objects:events\" reason=\"Unhealthy\"\n| stats count by namespace, involvedObject.name, message\n```\n\nUnderstanding this SPL\n\n**Container Probe Failure Analysis** — Repeated readiness/liveness probe failures indicate dependency outages or mis-tuned thresholds before user-visible errors dominate.\n\nDocumented **Data sources**: `sourcetype=kube:kubelet`, `sourcetype=kube:objects:events`. **App/TA** (typical add-on context): kubelet logs, Kubernetes events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:objects:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:objects:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, involvedObject.name, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (pod, container, count), Timeline, Bar chart by workload.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the health checks that our cloud uses to decide whether a program is truly ready to serve traffic, before the system gives up and starts looping restarts. When those checks fail in waves, we raise a clear signal early so teams fix timing or dependencies sooner.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.44",
              "n": "Node Pool Auto-Repair Events",
              "c": "medium",
              "f": "intermediate",
              "v": "Managed Kubernetes auto-replace unhealthy nodes; tracking repairs explains brief capacity dips and correlates with hardware or image issues.",
              "t": "Cloud cluster logs (EKS, GKE, AKS) to Splunk",
              "d": "`sourcetype=aws:eks:node`, `sourcetype=gcp:gke:operation`, `sourcetype=azure:aks:node`",
              "q": "index=cloud (sourcetype=\"aws:eks:node\" OR sourcetype=\"gcp:gke:operation\" OR sourcetype=\"azure:aks:node\")\n| search repair OR replace OR recreate OR \"auto repair\"\n| stats count by cluster_name, node_pool, _raw\n| sort -_time",
              "m": "Export node pool operations via cloud add-ons or EventBridge/Activity Log. Tag auto-repair operations. Alert on repair rate spike versus baseline.",
              "z": "Timeline (repairs), Table (pool, message), Single value (repairs/day).",
              "kfp": "Legitimate auto-repair after genuine hardware faults or host hypervisor defects produces the same cloud audit signatures as software regressions; require velocity, storm clustering, or premature-age context before treating a single replacement as incident-grade. Planned maintenance windows and provider-published infrastructure events often include preemptive node recycling; cross-check maintenance calendars and Azure Activity informational categories before paging application teams. Intentional rolling node-pool upgrades that surge new instances then retire old ones are change-driven replacements, not reactive repair; align timestamps with recorded upgrades and UC-3.2.33 drain tickets so upgrades do not masquerade as storms. Karpenter Drift following an AMI or image bump is optimization-class disruption and should be labeled FinOps or image hygiene rather than cloud repair unless Health reasons dominate the same interval. Spot or preemptible reclaim termination events resemble repair-storm bursts but are cloud capacity reclaim, not Kubernetes health repair; tag instance_market_option or preemptible labels when forwarding CloudTrail. Tier-aware autoscaling that cordons and replaces nodes deliberately can emit dense recycle sequences; map those controllers in lookups before declaring subnet faults. Transient ENI or CNI attach failures sometimes trigger one-off managed repairs that self-heal; dampen severity when follow-up NotReady risk stays empty for thirty minutes. Recently rolled-back AMI bugs can cause short repair spikes that stop after revert; correlate with image change records. Silent OS kernel auto-updates that reboot workers may surface as termination and recreate pairs unrelated to application defects; pair with patch baselines. Regional cloud degradation that concentrates replacements in one availability zone can look like a pool storm even when the root cause is upstream networking; compare AZ distribution in raw events. Deliberate kubectl drain followed by managed auto-repair overlaps voluntary and involuntary planes; require UC-3.2.33 absence of matching tickets before blaming automation alone. Image pull failures that drive kubelet NotReady cycles can precede managed recycle loops; escalate only when cloud repair signals lead the loop rather than lagging image errors.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud cluster logs (EKS, GKE, AKS) to Splunk.\n• Ensure the following data sources are available: `sourcetype=aws:eks:node`, `sourcetype=gcp:gke:operation`, `sourcetype=azure:aks:node`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport node pool operations via cloud add-ons or EventBridge/Activity Log. Tag auto-repair operations. Alert on repair rate spike versus baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud (sourcetype=\"aws:eks:node\" OR sourcetype=\"gcp:gke:operation\" OR sourcetype=\"azure:aks:node\")\n| search repair OR replace OR recreate OR \"auto repair\"\n| stats count by cluster_name, node_pool, _raw\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Node Pool Auto-Repair Events** — Managed Kubernetes auto-replace unhealthy nodes; tracking repairs explains brief capacity dips and correlates with hardware or image issues.\n\nDocumented **Data sources**: `sourcetype=aws:eks:node`, `sourcetype=gcp:gke:operation`, `sourcetype=azure:aks:node`. **App/TA** (typical add-on context): Cloud cluster logs (EKS, GKE, AKS) to Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:eks:node, gcp:gke:operation, azure:aks:node. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"aws:eks:node\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by cluster_name, node_pool, _raw** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (repairs), Table (pool, message), Single value (repairs/day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch when the cloud automatically swaps out unhealthy worker machines in your container fleet, how often it happens in each group, and whether those swaps bunch up in a scary burst. We also notice when a brand-new machine gets replaced almost immediately, which often means something deeper is wrong.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.45",
              "n": "Admission Webhook Latency",
              "c": "medium",
              "f": "advanced",
              "v": "P95/P99 webhook latency drives API server tail latency; isolating slow validating/mutating hooks prevents global API degradation.",
              "t": "Splunk OTel Collector (apiserver metrics)",
              "d": "`sourcetype=kube:apiserver`",
              "q": "index=k8s sourcetype=\"kube:apiserver\" metric_name=\"apiserver_admission_webhook_admission_duration_seconds\"\n| bin _time span=5m\n| stats perc95(_value) as p95, perc99(_value) as p99 by name, operation, _time\n| where p95>0.25 OR p99>1\n| sort -p99",
              "m": "Same histogram as UC-3.2.21; emphasize percentile SLOs per webhook `name` and `operation`. Page on P99 >1s for production webhooks.",
              "z": "Line chart (p95/p99 by webhook), Table (webhook, p99), Heatmap.",
              "kfp": "Legitimately slow webhooks sometimes do expensive work by design: Kyverno first-call policy compilation, OPA Gatekeeper bundle reloads after Git pushes, and just-in-time certificate signing paths can stretch one bucket while remaining correct. Webhooks that call an internal certificate authority or enterprise policy service may show benign tails when that dependency is briefly slow without implying cluster misconfiguration. Batch creation storms, bursty test harness traffic, or GitOps apply waves can temporarily inflate P99 and P999 while medians stay acceptable. Very large ConfigMaps and Secret-heavy objects increase serialization cost and widen tails for mutating hooks that rewrite images or labels. Rare admin-only operations naturally traverse slower code paths and should be documented in governance CSV notes. Webhooks scaled out across many replicas can exhibit per-pod cold caches that look like incidents for a single five-minute bucket. Brief webhook restarts, dual-stack networking initialization delays, freshly pulled images with language runtime warm-up, and deliberately heavy compliance gates all produce legitimate tail inflation. When clusters run intentionally strict admission policies for regulatory reasons, sustained tail cost may be an accepted trade-off; pair alerts with owner_team budgets rather than muting silently.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector (apiserver metrics).\n• Ensure the following data sources are available: `sourcetype=kube:apiserver`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSame histogram as UC-3.2.21; emphasize percentile SLOs per webhook `name` and `operation`. Page on P99 >1s for production webhooks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:apiserver\" metric_name=\"apiserver_admission_webhook_admission_duration_seconds\"\n| bin _time span=5m\n| stats perc95(_value) as p95, perc99(_value) as p99 by name, operation, _time\n| where p95>0.25 OR p99>1\n| sort -p99\n```\n\nUnderstanding this SPL\n\n**Admission Webhook Latency** — P95/P99 webhook latency drives API server tail latency; isolating slow validating/mutating hooks prevents global API degradation.\n\nDocumented **Data sources**: `sourcetype=kube:apiserver`. **App/TA** (typical add-on context): Splunk OTel Collector (apiserver metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:apiserver. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:apiserver\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by name, operation, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95>0.25 OR p99>1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95/p99 by webhook), Table (webhook, p99), Heatmap.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We measure how long each automatic safety check takes when someone changes the cluster, including the steps that rewrite custom resource shapes. When one check develops a long tail or gets dangerously close to its time limit, we show it early so teams fix that check before normal work starts failing.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.2.46",
              "n": "Cluster Autoscaler Pending Pods",
              "c": "high",
              "f": "intermediate",
              "v": "Cluster autoscaler unable to scale out leaves pods pending during traffic spikes; monitoring pending duration and CA logs protects scale-out SLAs.",
              "t": "cluster-autoscaler logs, Kubernetes events",
              "d": "`sourcetype=kube:cluster-autoscaler`, `sourcetype=kube:events`",
              "q": "index=k8s sourcetype=\"kube:events\" reason=\"FailedScheduling\"\n| eval age_sec=now()-_time\n| where age_sec>300\n| stats max(age_sec) as max_pending by namespace, involvedObject.name, message\n| sort -max_pending",
              "m": "Forward cluster-autoscaler Deployment logs. Correlate `NotTriggeredScaleUp` with max node pool size and quotas. Alert when scheduling failures persist >5 minutes while CA reports scale blocked.",
              "z": "Table (reason, count), Timeline (scale-up), Single value (pending pods).",
              "kfp": "Brief Pending pod spikes lasting five to ten seconds during rolling updates or traffic bursts are normal while Cluster Autoscaler evaluates unschedulable pods; suppress alerts below the five minute streamstats soak and require fused autoscaler reason text or counter increments before paging executives. Karpenter consolidation or aggressive scale-down can reschedule pods and inflate Pending counts for tens of seconds while old nodes drain; correlate controller logs for consolidation before treating as cloud denial. Cloud quota error strings during an approved budget-resize maintenance window should be suppressed via a time-based macro tied to change records. Cluster-autoscaler-status leader election churn can create short gaps in log lines without true capacity failure; corroborate with kube events and metrics. AWS Spot pool churn may flip Insufficient capacity messages intermittently; dampen with multi-interval logic or compare on-demand fallback success. Duplicate prometheus scrapes without dedup keys can inflate pending_pod_count math; validate one authoritative kube-state-metrics path per cluster. Test namespaces that intentionally pin impossible selectors will always page; exclude them via lookup flags. After large cluster upgrades, metric label renames can null coalesce paths for one interval; treat missing metrics plus steady kubectl Pending as telemetry debt not silence.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: cluster-autoscaler logs, Kubernetes events.\n• Ensure the following data sources are available: `sourcetype=kube:cluster-autoscaler`, `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward cluster-autoscaler Deployment logs. Correlate `NotTriggeredScaleUp` with max node pool size and quotas. Alert when scheduling failures persist >5 minutes while CA reports scale blocked.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:events\" reason=\"FailedScheduling\"\n| eval age_sec=now()-_time\n| where age_sec>300\n| stats max(age_sec) as max_pending by namespace, involvedObject.name, message\n| sort -max_pending\n```\n\nUnderstanding this SPL\n\n**Cluster Autoscaler Pending Pods** — Cluster autoscaler unable to scale out leaves pods pending during traffic spikes; monitoring pending duration and CA logs protects scale-out SLAs.\n\nDocumented **Data sources**: `sourcetype=kube:cluster-autoscaler`, `sourcetype=kube:events`. **App/TA** (typical add-on context): cluster-autoscaler logs, Kubernetes events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_sec>300` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by namespace, involvedObject.name, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (reason, count), Timeline (scale-up), Single value (pending pods).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch for moments when the cloud refuses to add machines even though new application instances are waiting, and we show why it said no. We catch those refusals early so teams fix quotas, instance choices, or pool limits before customers feel the slowdown.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.0,
          "qd": {
            "gold": 0,
            "silver": 3,
            "bronze": 43,
            "none": 0
          }
        },
        {
          "i": "3.3",
          "n": "OpenShift",
          "u": [
            {
              "i": "3.3.1",
              "n": "Cluster Version & Upgrade Status",
              "c": "medium",
              "f": "intermediate",
              "v": "OpenShift upgrades can stall. Tracking upgrade progress and version across clusters ensures consistency and support compliance.",
              "t": "Custom API input (ClusterVersion API)",
              "d": "ClusterVersion resource, OpenShift events",
              "q": "index=openshift sourcetype=\"openshift:clusterversion\"\n| stats latest(version) as version, latest(progressing) as upgrading, latest(available) as available by cluster\n| table cluster version upgrading available",
              "m": "Create scripted input querying `oc get clusterversion -o json`. Run hourly. Alert when upgrade is progressing but stalled (>2 hours without progress).",
              "z": "Table (cluster, version, status), Status indicator.",
              "kfp": "Operator teams routinely invoke oc adm upgrade with explicit --to targets during vendor-guided recovery; audit rows and short-lived history Failed or Partial entries can precede a healthy Completed row without indicating a stuck cluster. Z-stream patches often hold Progressing=True for several hours while machine-config reboots serialize across control-plane nodes; tune prog_page_h to your internal service-level expectation before paging leadership. RetrievedUpdates=False may appear briefly during scheduled proxy maintenance or transitive certificate rotations on outbound inspection appliances; require cvo_log_hint corroboration or sustained duration across multiple snapshot intervals before treating reachability as broken. Paused upgrades during enterprise change-freeze windows, including financial institution cutoff periods, are deliberate; join alerts to change calendars or HEC metadata so governance freezes do not present as incidents. CVO pods may restart during master pool rollouts and emit noisy log lines without changing ClusterVersion conditions; compare message timestamps to history startedTime. Image mirror synchronization lag can widen spec versus status desired version gaps while pulls still succeed; verify mirror freshness before blaming signature faults. Duplicate telemetry from redundant collectors can inflate aud_mut_cnt; dedupe on audit auditID when present. Lab clusters that constantly churn channels generate noisy warn tiers unless routed to non-production indexes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input (ClusterVersion API).\n• Ensure the following data sources are available: ClusterVersion resource, OpenShift events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input querying `oc get clusterversion -o json`. Run hourly. Alert when upgrade is progressing but stalled (>2 hours without progress).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:clusterversion\"\n| stats latest(version) as version, latest(progressing) as upgrading, latest(available) as available by cluster\n| table cluster version upgrading available\n```\n\nUnderstanding this SPL\n\n**Cluster Version & Upgrade Status** — OpenShift upgrades can stall. Tracking upgrade progress and version across clusters ensures consistency and support compliance.\n\nDocumented **Data sources**: ClusterVersion resource, OpenShift events. **App/TA** (typical add-on context): Custom API input (ClusterVersion API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:clusterversion. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:clusterversion\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Cluster Version & Upgrade Status**): table cluster version upgrading available\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cluster, version, status), Status indicator.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the one master switch that tells an OpenShift cluster which software version it is moving toward and whether that move finished cleanly. If it stays stuck too long or cannot reach the vendor’s update catalog, we raise a clear signal so engineers fix the upgrade path before applications ride on a half-finished platform change.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.2",
              "n": "Operator Degraded Detection",
              "c": "high",
              "f": "advanced",
              "v": "Cluster operators manage core OpenShift components (networking, ingress, monitoring, authentication). Degraded operators mean partial cluster functionality loss.",
              "t": "Custom API input",
              "d": "ClusterOperator resources",
              "q": "index=openshift sourcetype=\"openshift:clusteroperator\"\n| where degraded=\"True\" OR available=\"False\"\n| table _time cluster operator degraded available message\n| sort -_time",
              "m": "Scripted input: `oc get clusteroperators -o json`. Run every 300 seconds. Alert when any operator reports `Degraded=True` or `Available=False`.",
              "z": "Operator status grid (green/yellow/red), Table with details, Timeline.",
              "kfp": "Short Progressing=True windows are normal during monitoring stack upgrades, user workload monitoring enablement, and PVC resizes; require prog_ticks and co_deg corroboration before paging application teams. Intentional remote_write pauses during downstream observability backend maintenance can spike remote_storage failure counters; pair Splunk rows with change_ticket_id metadata before treating as cluster fault. Disconnected or air-gapped clusters may disable Telemeter and Insights uploads under policy; use lookup flags to downgrade tel_rw arm severities rather than assuming broken agents. Watchdog is sometimes silenced globally in mature operations models; false warn severity on watchdog_seen requires tuning through a maintenance lookup to avoid noise. Duplicate HTTP Event Collector submissions from redundant federation collectors can inflate remote_fail_sum until dedupe lands in summary indexes. Prometheus scrape gaps from network partitions can drop ALERTS samples while the alert pipeline remains healthy; combine prometheus lanes with console checks before muting. High cardinality bursts from a single namespace can raise rule_eval_fail without operator Degraded; involve service owners before scaling cluster resources. Lab clusters that continuously recycle monitoring test fixtures will page unless routed to non-production indexes. Field renames after OpenShift minor upgrades can empty mn filters until FIELDALIAS maps refresh; validate extractor tests quarterly.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input.\n• Ensure the following data sources are available: ClusterOperator resources.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted input: `oc get clusteroperators -o json`. Run every 300 seconds. Alert when any operator reports `Degraded=True` or `Available=False`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:clusteroperator\"\n| where degraded=\"True\" OR available=\"False\"\n| table _time cluster operator degraded available message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Operator Degraded Detection** — Cluster operators manage core OpenShift components (networking, ingress, monitoring, authentication). Degraded operators mean partial cluster functionality loss.\n\nDocumented **Data sources**: ClusterOperator resources. **App/TA** (typical add-on context): Custom API input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:clusteroperator. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:clusteroperator\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where degraded=\"True\" OR available=\"False\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Operator Degraded Detection**): table _time cluster operator degraded available message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Operator status grid (green/yellow/red), Table with details, Timeline.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the cluster's own metrics and alerting machinery—the part that feeds dashboards and warnings. When that inner stack stumbles, ordinary health screens lie, so we raise a clear signal for the platform team to fix it fast.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.3",
              "n": "Build Failure Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "OpenShift Source-to-Image (S2I) build failures block application deployments. Trend analysis reveals systemic build infrastructure issues.",
              "t": "OpenShift event forwarding",
              "d": "`sourcetype=kube:events` (Build events)",
              "q": "index=openshift sourcetype=\"kube:events\" involvedObject.kind=\"Build\" reason=\"BuildFailed\"\n| stats count by namespace, involvedObject.name, message\n| sort -count",
              "m": "Forward OpenShift events. Alert on BuildFailed events. Track build success/failure rate per namespace over time. Investigate common failure reasons (image pull, compile errors, push failures).",
              "z": "Table (build, namespace, reason), Line chart (success rate %), Bar chart by failure type.",
              "kfp": "Canary PipelineRun cancellations during controlled rollouts resemble incidents until reason equals PipelineRunCancelled and change metadata confirms intent. Vendor maintenance on Git providers can spike GitResolverFailed rows that self-resolve when mirrors catch up; require consecutive windows before paging executives. Security scanners that hammer public EventListener routes inflate webhook_err_pct without blocking real Git deliveries; segment traffic using authenticated routes or allow lists when policy permits. Long-running integration pipelines legitimately approach timeout thresholds during quarter-end batch windows; pair slo_target_sec with calendar metadata. Disconnected clusters may pause Chains uploads while still building images; distinguish air-gap policy from misconfiguration using identity and Rekor reachability checks. Lab namespaces that continuously break builds for testing will page unless routed to non-production indexes. Duplicate HTTP Event Collector submissions from redundant exporters can double pr_obs counts until dedupe logic lands in summary indexes. Prometheus scrape gaps from user-workload monitoring outages can hide controller signals even while API snapshots remain healthy; repair federation before muting the analytic entirely. Resolver metric renames between OpenShift Pipelines patch releases can alter field extraction until props updates ship; validate extractor tests quarterly.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenShift event forwarding.\n• Ensure the following data sources are available: `sourcetype=kube:events` (Build events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward OpenShift events. Alert on BuildFailed events. Track build success/failure rate per namespace over time. Investigate common failure reasons (image pull, compile errors, push failures).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"kube:events\" involvedObject.kind=\"Build\" reason=\"BuildFailed\"\n| stats count by namespace, involvedObject.name, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Build Failure Monitoring** — OpenShift Source-to-Image (S2I) build failures block application deployments. Trend analysis reveals systemic build infrastructure issues.\n\nDocumented **Data sources**: `sourcetype=kube:events` (Build events). **App/TA** (typical add-on context): OpenShift event forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: kube:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, involvedObject.name, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (build, namespace, reason), Line chart (success rate %), Bar chart by failure type.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the automated release assembly line inside OpenShift that runs each step of building and testing software. When webhooks, signing, or individual steps start failing often, we raise a clear signal so teams fix the pipeline before customer releases slow down.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "kubernetes_openshift"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.4",
              "n": "SCC Violation Detection",
              "c": "high",
              "f": "beginner",
              "v": "Security Context Constraint violations mean pods are attempting to run with permissions beyond their allowed scope. Could indicate misconfiguration or an attack.",
              "t": "OpenShift audit log forwarding",
              "d": "`sourcetype=openshift:audit`",
              "q": "index=openshift sourcetype=\"openshift:audit\" responseStatus.code=403 objectRef.resource=\"pods\"\n| search \"unable to validate against any security context constraint\"\n| stats count by user.username, objectRef.namespace, objectRef.name\n| sort -count",
              "m": "Enable and forward OpenShift audit logs. Alert on SCC-related 403 errors. Track which SCCs are most commonly requested and denied.",
              "z": "Table (user, namespace, pod, SCC requested), Bar chart by SCC, Timeline.",
              "kfp": "Red Hat core operators for cluster monitoring, cluster logging, and ingress routinely require SCC classes such as node-exporter, privileged, or hostnetwork variants documented for your OpenShift minor; join vendor documentation and openshift_scc_namespace_policy.csv exception_expiry_epoch before paging application teams. Operator Lifecycle Manager and Red Hat-certified partner operators including Prisma Cloud, Falco, or Velero often ship with deliberately broader SCC bindings approved in change records; suppress when change_ticket_id metadata matches. Pod creation races during Machine Config Operator node reboots or eviction storms can emit transient FailedCreate events that mirror SCC denials even when the next reconcile succeeds; require sustained denial_count_24h or pairing with audit 403 rows. Legitimate oc adm policy add-scc-to-user executions during documented migrations look identical to abuse until you join ITSM tickets and approved maintenance windows. Audit RequestResponse volume spikes during upgrades can drop openshift.io/scc extraction if collectors sample; verify shipper health before declaring skew. Custom SCC names differ per cluster; tune privileged_admit match lists so internal aliases do not false-negative. Namespaces without lookup rows leave expected_scc_tier empty and suppress selection_skew; default behavior should remain informational rather than critical. Duplicate HEC submissions from redundant forwarders can inflate deny_burst; dedupe on audit auditID when present. Some managed OpenShift offerings redact portions of audit bodies; rex may miss openshift.io/scc while oc get pod still shows the annotation—investigate parser gaps before muting admissions. Developer sandboxes that intentionally test anyuid will page unless dev namespaces carry lookup tiers and lowered severities.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1548.001",
                "T1611"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenShift audit log forwarding.\n• Ensure the following data sources are available: `sourcetype=openshift:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable and forward OpenShift audit logs. Alert on SCC-related 403 errors. Track which SCCs are most commonly requested and denied.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:audit\" responseStatus.code=403 objectRef.resource=\"pods\"\n| search \"unable to validate against any security context constraint\"\n| stats count by user.username, objectRef.namespace, objectRef.name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**SCC Violation Detection** — Security Context Constraint violations mean pods are attempting to run with permissions beyond their allowed scope. Could indicate misconfiguration or an attack.\n\nDocumented **Data sources**: `sourcetype=openshift:audit`. **App/TA** (typical add-on context): OpenShift audit log forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user.username, objectRef.namespace, objectRef.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, namespace, pod, SCC requested), Bar chart by SCC, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "OpenShift applies extra safety rules that decide how much power each workload may request before it ever runs. We catch when those rules block pods unexpectedly, when someone widens the rules without a paper trail, or when a workload lands in a broader rule than your team expects.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "kubernetes_openshift"
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.5",
              "n": "Helm Release Drift Detection",
              "c": "medium",
              "f": "advanced",
              "v": "Deployed state differs from declared chart version.",
              "t": "Custom scripted input (helm list --output json)",
              "d": "helm list output, GitOps desired state",
              "q": "index=k8s (sourcetype=\"helm:list\" OR sourcetype=\"gitops:desired\")\n| eval chart_version = mvindex(split(chart, \"-\"), -1)\n| stats values(chart_version) as versions by namespace, name, source\n| eval version_count = mvcount(versions)\n| where version_count > 1\n| table namespace name versions source",
              "m": "Scripted input: `helm list -A -o json` (all namespaces). Parse name, namespace, chart (includes version), app_version, status, updated. Run every 600 seconds. Optionally ingest GitOps desired state (Argo CD, Flux) from API or Git. Compare deployed chart version to desired. Alert when drift detected (deployed != desired). Useful for detecting manual changes or failed syncs.",
              "z": "Table (namespace, release, chart, version, status), Drift indicator (deployed vs desired), Timeline of updates.",
              "kfp": "Planned maintenance with sync disabled legitimately shows OutOfSync until operators click sync; suppress with change_ticket_id on HEC metadata or lookup suppress_until. Canary Applications that intentionally lag production Git branches resemble drift; annotate app_key rows or split indexes. Helm superseded rows for old revision Secrets are normal; always evaluate the latest revision per release or decode version ordering before paging. SealedSecrets, reencrypt controllers, and cert-manager rewrite Secrets without malice; pair Degraded health with message text. Image updater sidecars and OCI Helm mirrors can cause short-lived revision churn during cache refresh. Audit Metadata-only streams omit requestObject bodies; patch detection may miss fields unless you enable RequestResponse for governed resource types. Prometheus label churn hides argocd_app_info joins; repair federation before muting. ApplicationSet template targetRevision comparisons to child Applications require consistent exporter flattening; false mismatches occur when child overrides are intentional.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (helm list --output json).\n• Ensure the following data sources are available: helm list output, GitOps desired state.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted input: `helm list -A -o json` (all namespaces). Parse name, namespace, chart (includes version), app_version, status, updated. Run every 600 seconds. Optionally ingest GitOps desired state (Argo CD, Flux) from API or Git. Compare deployed chart version to desired. Alert when drift detected (deployed != desired). Useful for detecting manual changes or failed syncs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s (sourcetype=\"helm:list\" OR sourcetype=\"gitops:desired\")\n| eval chart_version = mvindex(split(chart, \"-\"), -1)\n| stats values(chart_version) as versions by namespace, name, source\n| eval version_count = mvcount(versions)\n| where version_count > 1\n| table namespace name versions source\n```\n\nUnderstanding this SPL\n\n**Helm Release Drift Detection** — Deployed state differs from declared chart version.\n\nDocumented **Data sources**: helm list output, GitOps desired state. **App/TA** (typical add-on context): Custom scripted input (helm list --output json). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: helm:list, gitops:desired. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"helm:list\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **chart_version** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by namespace, name, source** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **version_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where version_count > 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Helm Release Drift Detection**): table namespace name versions source\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (namespace, release, chart, version, status), Drift indicator (deployed vs desired), Timeline of updates.\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We compare the version of software your automation promised to the version the cluster is actually running. When those fall out of line, or someone changes things by hand, we raise a clear signal so teams put the truth back in the trusted record before small differences become big outages.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "kubernetes_helm"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.6",
              "n": "Operator Health Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "OpenShift operators reconcile cluster components; tracking Available/Progressing/Degraded across the full operator set surfaces partial failures before user-facing symptoms.",
              "t": "Custom API input (`oc get clusteroperator -o json`)",
              "d": "`sourcetype=openshift:clusteroperator`",
              "q": "index=openshift sourcetype=\"openshift:clusteroperator\"\n| where progressing=\"True\" OR degraded=\"True\" OR available=\"False\"\n| stats values(available) as avail, values(degraded) as deg, values(progressing) as prog by cluster, operator\n| sort cluster, operator",
              "m": "Ingest ClusterOperator status on a 5-minute cadence. Build a health matrix per cluster. Alert when any operator is `Degraded=True` or `Available=False` beyond the remediation SLA.",
              "z": "Operator matrix (green/yellow/red), Table (operator, conditions), Timeline of flapping.",
              "kfp": "Transient Progressing=True windows are expected during minor-version upgrades, certificate rotations, and machine-config rollouts; require dwell thresholds in the saved search before paging platform leadership. The cluster-version operator routinely reports Progressing=True while patches reconcile release image pulls and operator payloads; treat short windows as normal unless duration exceeds vendor guidance or pairs with Degraded=True on peer operators. Brief network flaps to the in-cluster registry can flip image-registry or samples operators without sustained user impact; corroborate with route and storage health before executive escalation. Admission webhook timeouts during heavy apply storms can surface as Degraded messages on openshift-apiserver or kube-apiserver operators; pair with admission webhook latency analytics rather than assuming etcd failure. Scheduled maintenance that sets Upgradeable=False via cluster version gates or documented upgradeableTo constraints is intentional; join alerts to change calendars or suppress when maintenance_authorized metadata is present on HEC events. Prometheus scrape gaps from Thanos receive outages can drop cluster_operator_conditions samples while API snapshots remain healthy; combine lanes before muting metrics entirely. Duplicate HEC submissions from redundant collectors can double operator rows; dedupe on cluster, operator_name, and snapshot_generation in summary indexes when cost matters. Lab clusters that constantly churn test operators will page unless routed to non-production indexes; mark them in a lookup to downgrade severity. Stale Splunk parsers after an OpenShift minor upgrade can mis-map condition fields until FIELDALIAS updates ship; validate extractor tests quarterly.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input (`oc get clusteroperator -o json`).\n• Ensure the following data sources are available: `sourcetype=openshift:clusteroperator`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest ClusterOperator status on a 5-minute cadence. Build a health matrix per cluster. Alert when any operator is `Degraded=True` or `Available=False` beyond the remediation SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:clusteroperator\"\n| where progressing=\"True\" OR degraded=\"True\" OR available=\"False\"\n| stats values(available) as avail, values(degraded) as deg, values(progressing) as prog by cluster, operator\n| sort cluster, operator\n```\n\nUnderstanding this SPL\n\n**Operator Health Monitoring** — OpenShift operators reconcile cluster components; tracking Available/Progressing/Degraded across the full operator set surfaces partial failures before user-facing symptoms.\n\nDocumented **Data sources**: `sourcetype=openshift:clusteroperator`. **App/TA** (typical add-on context): Custom API input (`oc get clusteroperator -o json`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:clusteroperator. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:clusteroperator\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where progressing=\"True\" OR degraded=\"True\" OR available=\"False\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster, operator** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Operator matrix (green/yellow/red), Table (operator, conditions), Timeline of flapping.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch the small set of OpenShift system controllers that keep the cluster version, networking, sign-in, and monitoring working. When one reports it is stuck or broken for too long, we raise a clear signal so teams fix the platform before customer applications notice.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.7",
              "n": "Build Config Failures",
              "c": "medium",
              "f": "beginner",
              "v": "`BuildConfig` runs power S2I and Docker builds; failed builds block image promotion and rollouts tied to CI/CD.",
              "t": "OpenShift event forwarding",
              "d": "`sourcetype=kube:objects:events`",
              "q": "index=openshift sourcetype=\"kube:objects:events\" involvedObject.kind=\"Build\" (reason=\"BuildFailed\" OR reason=\"Error\" OR reason=\"Failed\")\n| stats count by namespace, involvedObject.name, reason, message\n| sort -count",
              "m": "Ensure build events include `BuildConfig` correlation. Group by namespace and builder image. Alert on repeated failures for the same `BuildConfig` within 1 hour.",
              "z": "Table (build, namespace, message), Line chart (failure rate), Bar chart by builder image.",
              "kfp": "Developers routinely push intentionally broken commits to feature branches that drive short-lived Failed builds; scope production paging to namespaces tied to protected branches or add lookup-driven branch tiering before interpreting fail_rate. S2I builder ImageStream tags can churn during vendor patch weeks when maintainers merge refreshed builder images; expect clustered FailedRetrieveBuilder rows that clear after mirrors sync rather than signaling misconfiguration. Corporate registry rate limits during cold CI mornings can manifest as PushImageToRegistryFailed bursts that self-resolve when traffic spreads; corroborate with registry dashboards before executive escalation. JenkinsPipeline strategy builds may fail entirely inside external Jenkins while OpenShift Build phases remain Pending; pairing this control with Jenkins telemetry avoids false reassurance from quiet cluster rows. Transient FetchSourceFailed events during planned Git server reboots resemble incidents until maintenance calendars annotate the window; suppress or downgrade severity using scheduled maintenance metadata on HEC events. User-initiated oc cancel-build and duplicate webhook deliveries inflate Cancelled counts that should not be treated like fetch failures unless your governance treats cancels as SLA breaches. Admission controllers that temporarily deny creates during etcd compaction can emit FailedCreate hints without sustained build faults; require repeated admission_hint rows across intervals. Prometheus openshift_build_total series rename between minor OpenShift releases can null prom_counter joins for one scrape rotation; rely on ocp_build snapshots when metrics cardinality shifts. Pruned Build history from aggressive failedBuildsHistoryLimit values reduces visible Kubernetes rows while Splunk still shows historical failure spikes; teach reviewers that API cleanliness differs from log retention. Penetration-test namespaces that continuously break compiles will page unless marked non-production in a lookup table. Webhook secret rotation without Git-side updates causes authentication failures that look like source fetch faults until credentials align on both sides.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenShift event forwarding.\n• Ensure the following data sources are available: `sourcetype=kube:objects:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure build events include `BuildConfig` correlation. Group by namespace and builder image. Alert on repeated failures for the same `BuildConfig` within 1 hour.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"kube:objects:events\" involvedObject.kind=\"Build\" (reason=\"BuildFailed\" OR reason=\"Error\" OR reason=\"Failed\")\n| stats count by namespace, involvedObject.name, reason, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Build Config Failures** — `BuildConfig` runs power S2I and Docker builds; failed builds block image promotion and rollouts tied to CI/CD.\n\nDocumented **Data sources**: `sourcetype=kube:objects:events`. **App/TA** (typical add-on context): OpenShift event forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: kube:objects:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"kube:objects:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, involvedObject.name, reason, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (build, namespace, message), Line chart (failure rate), Bar chart by builder image.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch the automated assembly line that turns source code into runnable images inside OpenShift. When that line keeps stopping—fetch errors, push errors, or broken test hooks—we notice early so teams fix the pipeline before releases stall.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "kubernetes_openshift"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.8",
              "n": "Route TLS Expiry Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "OpenShift Routes terminate TLS for apps; expiring certs on edge or re-encrypt routes cause sudden browser and API client failures.",
              "t": "cert-manager, `oc get route -o json` scripted input",
              "d": "`sourcetype=openshift:route`, `sourcetype=certmanager:metrics`",
              "q": "index=openshift sourcetype=\"certmanager:metrics\" metric_name=\"certmanager_certificate_expiration_timestamp_seconds\"\n| eval days_left=round((_value-now())/86400,0)\n| where days_left < 30\n| table namespace name days_left",
              "m": "Periodically export Route TLS `notAfter` from `oc` or ingress controller. If using cert-manager, scrape expiration metrics. Alert at 30/14/7 days; page inside 7 days for customer-facing hostnames.",
              "z": "Table (route, hostname, days left), Single value (soonest expiry), Gauge.",
              "kfp": "cert-manager can renew a Secret minutes before Route admission reflects the new resourceVersion; Splunk may still read the previous notAfter until the next oc export wave—correlate Certificate Ready=True and router reload timestamps before paging. Manually pasted inline PEM on Routes without Secret rotation can desynchronize from GitOps desired state; require repository commit hashes in tickets when severity fires on edge termination. Passthrough Routes legitimately lack HAProxy leaf metadata; do not interpret null days_until_expiry as collector failure without checking termination class. Wildcard hostnames and shared Secrets inflate blast_radius_score for a single renewal action; dedupe executive summaries by Secret fingerprint to avoid forty duplicate tickets. Sharded routers may cache TLS material briefly after Secret updates; short openssl mismatches during reload windows are not chain omissions. Self-signed lab clusters flagged by issuer heuristics should be suppressed via lookup criticality dev—avoid compliance language on known sandboxes. Partial intermediate chains sometimes validate on RHEL trust stores but fail on mobile clients; pair Splunk issuer fields with s_client before declaring false positives. Stale openshift_route_blastradius.csv rows can mis-route pages after reorganizations; audit owner_team quarterly. Prometheus scrape gaps drop certmanager_certificate_expiration_timestamp_seconds while Routes remain healthy; require Route snapshot notAfter or cmctl before muting metrics entirely. External Secrets lag can leave a renewed vault record invisible to the cluster; distinguish vault truth from kube Secret truth when incidents overlap. DNS gray traffic to retired clusters can inflate weekly_client_hits without active Routes; validate cluster labels on router logs before blaming current namespaces.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: cert-manager, `oc get route -o json` scripted input.\n• Ensure the following data sources are available: `sourcetype=openshift:route`, `sourcetype=certmanager:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPeriodically export Route TLS `notAfter` from `oc` or ingress controller. If using cert-manager, scrape expiration metrics. Alert at 30/14/7 days; page inside 7 days for customer-facing hostnames.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"certmanager:metrics\" metric_name=\"certmanager_certificate_expiration_timestamp_seconds\"\n| eval days_left=round((_value-now())/86400,0)\n| where days_left < 30\n| table namespace name days_left\n```\n\nUnderstanding this SPL\n\n**Route TLS Expiry Detection** — OpenShift Routes terminate TLS for apps; expiring certs on edge or re-encrypt routes cause sudden browser and API client failures.\n\nDocumented **Data sources**: `sourcetype=openshift:route`, `sourcetype=certmanager:metrics`. **App/TA** (typical add-on context): cert-manager, `oc get route -o json` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: certmanager:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"certmanager:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left < 30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Route TLS Expiry Detection**): table namespace name days_left\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (route, hostname, days left), Single value (soonest expiry), Gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "Every public-facing OpenShift hostname has a TLS certificate that the HAProxy router presents at the edge. If that certificate lapses, browsers warn or refuse to connect, and customers cannot reach the application. We watch each route expiry, weight findings by traffic, and warn the correct team early.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.9",
              "n": "Cluster Version Upgrade Status",
              "c": "medium",
              "f": "intermediate",
              "v": "Long-running or failing upgrades leave clusters on unsupported versions; monitoring `ClusterVersion` conditions and history pins down stuck machine-config or operator prerequisites.",
              "t": "Custom API input (`oc get clusterversion version -o json`)",
              "d": "`sourcetype=openshift:clusterversion`",
              "q": "index=openshift sourcetype=\"openshift:clusterversion\"\n| stats latest(version) as version, latest(progressing) as upgrading, latest(available) as available, latest(failing) as failing by cluster\n| where upgrading=\"True\" OR failing=\"True\" OR available=\"False\"\n| table cluster version upgrading failing available",
              "m": "Parse `status.conditions` (Failing, Progressing, Available) and `status.history[]` from JSON into indexed fields. Alert when `progressing` remains true >2 hours or `Failing=True`. Complements UC-3.3.1 with failure messages from `status.history[].message`.",
              "z": "Upgrade timeline per cluster, Table (version, phase, message), Single value (clusters not on target channel).",
              "kfp": "Legitimate canary or blue-green rollouts often emit short-lived bursts of 5xx responses while new pods warm up or while load shifts between ReplicationControllers; require sustained xx5_window minutes or customer-impact corroboration before paging platform executives. Teams sometimes tune maxConnections and router Pod resource limits deliberately low on shared development clusters to contain noisy-neighbor cost; join alerts to environment lookups so non-production shards downgrade automatically. Planned horizontal pod autoscaler scale-down events during low-traffic windows can compress headroom and make transient saturation look severe until traffic returns; annotate maintenance metadata on HTTP Event Collector events when autoscaler policies change. Peering partners or WAN acceleration appliances occasionally run scripted connection tests that resemble storms; correlate source addresses and change tickets before blaming internal microservices. Sticky-session routes after large reload windows can skew server-level session counts until the affinity table stabilizes; treat brief haproxy_server_current_sessions imbalance as expected when reload_burst_cnt was high minutes earlier. Scheduled blackbox probes from external monitoring platforms sometimes target canary hostnames that intentionally fail authentication; whitelist probe identities or exclude those hostnames from 5xx rate math. Certificate reissue or IngressController publishing strategy edits can spike reload counts without user-facing errors; pair reload_burst_cnt with fe_sat_ratio before declaring incidents. Prometheus cardinality or scrape delays can null individual mstats arms while HAProxy remains healthy; avoid auto-closing incidents solely because one arm is silent without cross-checking router Pod readiness. Multi-shard estates may show saturation on non-default shards during DNS migration projects; confirm which IngressController owns the hostname before remediating router-default alone.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input (`oc get clusterversion version -o json`).\n• Ensure the following data sources are available: `sourcetype=openshift:clusterversion`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse `status.conditions` (Failing, Progressing, Available) and `status.history[]` from JSON into indexed fields. Alert when `progressing` remains true >2 hours or `Failing=True`. Complements UC-3.3.1 with failure messages from `status.history[].message`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:clusterversion\"\n| stats latest(version) as version, latest(progressing) as upgrading, latest(available) as available, latest(failing) as failing by cluster\n| where upgrading=\"True\" OR failing=\"True\" OR available=\"False\"\n| table cluster version upgrading failing available\n```\n\nUnderstanding this SPL\n\n**Cluster Version Upgrade Status** — Long-running or failing upgrades leave clusters on unsupported versions; monitoring `ClusterVersion` conditions and history pins down stuck machine-config or operator prerequisites.\n\nDocumented **Data sources**: `sourcetype=openshift:clusterversion`. **App/TA** (typical add-on context): Custom API input (`oc get clusterversion version -o json`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:clusterversion. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:clusterversion\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where upgrading=\"True\" OR failing=\"True\" OR available=\"False\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cluster Version Upgrade Status**): table cluster version upgrading failing available\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Upgrade timeline per cluster, Table (version, phase, message), Single value (clusters not on target channel).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the front door load balancer that ships with OpenShift—the one that carries customer web traffic into your apps. When that door is overwhelmed, connections pile up or errors spike, and we raise a clear signal so teams fix the shared edge before many apps suffer at once.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.10",
              "n": "Image Stream Tag Drift",
              "c": "medium",
              "f": "advanced",
              "v": "ImageStreams can point to unexpected digests after imports or mirroring; drift from expected tags breaks reproducible builds and compliance baselines.",
              "t": "`oc get imagestream -o json` scripted input",
              "d": "`sourcetype=openshift:imagestream`",
              "q": "index=openshift sourcetype=\"openshift:imagestream\"\n| where isnotnull(expected_digest) AND isnotnull(digest) AND digest!=expected_digest\n| table namespace name tag digest expected_digest source\n| sort namespace, name",
              "m": "Scripted input emits `digest` per tag plus `expected_digest` from GitOps/CMDB (or use `| lookup` against a KV store). Alert on mismatch for `latest` and release tags used in production pipelines.",
              "z": "Table (imagestream, tag, digests), Drift count single value, Timeline of tag updates.",
              "kfp": "Vendor-managed CSI operator upgrades during approved change windows routinely flip OperatorProgressing=True on ClusterCSIDriver rows and restart node DaemonSets; require dwell thresholds and change_ticket_id correlation before paging application teams. Intentional default StorageClass annotation moves during platform refresh look like governance drift in naive comparisons until GitOps baselines and audit verbs confirm the actor and ticket. Scheduled snapshot garbage collection jobs can emit VolumeSnapshotContentDeletionFailed bursts that are benign when namespaces are purged under automation; pair with VolumeAttachment age and cloud disk inventory before treating as corruption. Rolling node drains produce intentional VolumeAttachment churn and AttachVolumeFailed events that clear after kubelet finishes detaching; suppress when maintenance metadata is present. In-tree to CSI migration windows for vSphere, Azure, and AWS inflate Progressing time on both storage ClusterOperator and driver-specific ClusterCSIDriver resources; consult OpenShift release notes for expected duration. Lab clusters that constantly recycle StorageClasses for pipeline tests will trigger sc_gov hints unless non-production indexes or lookups downgrade severity. Prometheus scrape gaps or Splunk Metrics index misconfiguration can empty the mstats_supp arm while API snapshots remain healthy; repair federation before muting metrics entirely. CSINode exporter parsers that flatten driver lists differently may appear as empty driver strings; validate FIELDALIAS rules quarterly to avoid false registration gaps. Duplicate HTTP Event Collector submissions from redundant forwarders can double entity rows until dedupe logic lands in summary indexes. EBS gp3 classes without explicit iops in parameters may still be valid when defaults satisfy finance policy; tune gp3_iops_gap logic against openshift_storageclass_catalog.csv rather than raw heuristics alone.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `oc get imagestream -o json` scripted input.\n• Ensure the following data sources are available: `sourcetype=openshift:imagestream`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted input emits `digest` per tag plus `expected_digest` from GitOps/CMDB (or use `| lookup` against a KV store). Alert on mismatch for `latest` and release tags used in production pipelines.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:imagestream\"\n| where isnotnull(expected_digest) AND isnotnull(digest) AND digest!=expected_digest\n| table namespace name tag digest expected_digest source\n| sort namespace, name\n```\n\nUnderstanding this SPL\n\n**Image Stream Tag Drift** — ImageStreams can point to unexpected digests after imports or mirroring; drift from expected tags breaks reproducible builds and compliance baselines.\n\nDocumented **Data sources**: `sourcetype=openshift:imagestream`. **App/TA** (typical add-on context): `oc get imagestream -o json` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:imagestream. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:imagestream\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(expected_digest) AND isnotnull(digest) AND digest!=expected_digest` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Image Stream Tag Drift**): table namespace name tag digest expected_digest source\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (imagestream, tag, digests), Drift count single value, Timeline of tag updates.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the cluster machinery that hands out disks and keeps disk drivers healthy. When that machinery breaks, drifts from your catalog, or cannot attach snapshots cleanly, we raise a clear signal so teams fix the foundation before applications lose reliable storage.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.11",
              "n": "Operator Subscription Health",
              "c": "high",
              "f": "intermediate",
              "v": "OLM subscriptions deliver operator upgrades; unhealthy subscriptions block security patches and CRD updates for platform add-ons.",
              "t": "`oc get subscription -A -o json` scripted input",
              "d": "`sourcetype=openshift:subscription`",
              "q": "index=openshift sourcetype=\"openshift:subscription\"\n| where state!=\"AtLatestKnown\" OR match(_raw,\"InstallPlanPending|CatalogSourcesUnhealthy\")\n| stats latest(state) as state, latest(message) as msg by namespace, name, channel\n| sort namespace, name",
              "m": "Parse Subscription `status.state` and conditions. Alert on `CatalogSourcesUnhealthy`, `InstallPlanPending` beyond SLA, or repeated upgrade failures. Correlate with CatalogSource pod health.",
              "z": "Table (subscription, state, message), Status grid by namespace, Timeline.",
              "kfp": "Subscription status UpgradePending often reflects an intentional wait for manual InstallPlan approval or a vendor channel that lags behind the latest catalog graph; pair alerts with governance metadata before paging. CatalogSourcesUnhealthy or CatalogSource connection states of CONNECTING and TRANSIENT_FAILURE appear during scheduled disconnected-mirror syncs, temporary registry rate limits, or DNS blips while grpc catalog pods restart; require sustained dwell and corroborating CSV failures. Vendor catalog rebuilds that bump bundle digests can briefly surface ResolutionFailed until the cluster re-resolves; treat single-interval spikes as noise when oc describe shows recovery. Deliberately paused operator upgrades during enterprise change-freeze windows keep Subscriptions off AtLatestKnown without indicating breakage; suppress when change tickets authorize the freeze. CSV phase Replacing is a normal transition between versions during upgrades; escalate only when Replacing persists beyond vendor guidance or coexists with Failed steps on the InstallPlan. The default OperatorHub catalog can bounce after control-plane reboots or etcd compaction windows; combine CatalogSource errors with cluster_csv_fail_cnt before executive escalation. Manual approval policies may leave InstallPlans in RequiresApproval longer than engineering comfort; tune ip_approval_age_h thresholds to match risk policy, not only developer impatience. Lab clusters that constantly churn test operators will generate warn noise unless routed to non-production indexes. Prometheus scrape gaps can flatline subscription_sync_total while API snapshots remain healthy; repair federation before muting the analytic. Duplicate HTTP Event Collector submissions from redundant exporters can inflate olm_restarts or duplicate rows until dedupe logic lands in summary indexes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `oc get subscription -A -o json` scripted input.\n• Ensure the following data sources are available: `sourcetype=openshift:subscription`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse Subscription `status.state` and conditions. Alert on `CatalogSourcesUnhealthy`, `InstallPlanPending` beyond SLA, or repeated upgrade failures. Correlate with CatalogSource pod health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:subscription\"\n| where state!=\"AtLatestKnown\" OR match(_raw,\"InstallPlanPending|CatalogSourcesUnhealthy\")\n| stats latest(state) as state, latest(message) as msg by namespace, name, channel\n| sort namespace, name\n```\n\nUnderstanding this SPL\n\n**Operator Subscription Health** — OLM subscriptions deliver operator upgrades; unhealthy subscriptions block security patches and CRD updates for platform add-ons.\n\nDocumented **Data sources**: `sourcetype=openshift:subscription`. **App/TA** (typical add-on context): `oc get subscription -A -o json` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:subscription. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:subscription\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where state!=\"AtLatestKnown\" OR match(_raw,\"InstallPlanPending|CatalogSourcesUnhealthy\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by namespace, name, channel** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (subscription, state, message), Status grid by namespace, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the backstage process that installs optional cluster add-ons from software catalogs. When that process gets stuck, refuses to approve the next step, or cannot reach its catalog, we raise a clear signal so teams fix it before important tools stop updating.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.12",
              "n": "Project Resource Quota Exhaustion",
              "c": "high",
              "f": "beginner",
              "v": "OpenShift projects enforce ResourceQuotas for CPU, memory, and pod count. When quotas approach exhaustion, new deployments and scaling operations fail silently.",
              "t": "`oc get resourcequota -A -o json` scripted input",
              "d": "`sourcetype=openshift:resourcequota`",
              "q": "index=openshift sourcetype=\"openshift:resourcequota\"\n| eval cpu_pct=round(used_cpu/hard_cpu*100,1), mem_pct=round(used_memory/hard_memory*100,1), pods_pct=round(used_pods/hard_pods*100,1)\n| where cpu_pct>85 OR mem_pct>85 OR pods_pct>85\n| table namespace quota_name cpu_pct mem_pct pods_pct\n| sort -cpu_pct",
              "m": "Scripted input: `oc get resourcequota -A -o json`. Parse `status.hard` and `status.used` for CPU, memory, and pods. Run every 300 seconds. Alert when any resource exceeds 85% of quota.",
              "z": "Table (namespace, quota, resource, used, hard, pct), Gauge per resource, Heatmap by namespace.",
              "kfp": "Planned OpenShiftSDN to OVN-K migration windows produce sustained Progressing=True on the network ClusterOperator, elevated NetworkNotReady events on nodes while daemons roll, and temporary EgressIP churn; annotate change_ticket_id on HTTP Event Collector payloads and widen suppression minutes per migration runbook guidance before paging executives. Scheduled z-stream upgrades that restart ovnkube-master and ovnkube-node DaemonSets routinely elevate container restart counters and short Geneve tunnel metric noise without customer impact; require multisearch corroboration with Degraded=True or sustained sandbox failures before declaring incidents. Transient EgressIPNotAssigned events during node drain and replacement often clear after assignment reconciles across cloud subnets; pair with oc describe egressip before automated ticketing. Cluster certificate rotations that recycle control-plane trust bundles can spike ovsdb reconnect counters for minutes; treat as benign when operator Available stays True and diagnostics Pods pass. network-check-source or network-check-target Pods scheduled to tainted infrastructure nodes may remain Pending; exclude known taint keys in lookup tables so synthetic probe silence does not masquerade as overlay failure. Large batch pod creation storms from CI namespaces can inflate FailedCreatePodSandBox counts when temporary IP pools are tight; scope alerts with namespace allow and deny lists. Duplicate log forwarders can double event counts; dedupe on involvedObject.uid when present. Prometheus scrape outages drop mstats arms while the data plane remains healthy; combine metric silence with ocp_clusteroperator snapshots before muting ovn_controller math entirely. Lab clusters that constantly churn network test namespaces will page unless routed to non-production indexes. AdminNetworkPolicy exporter drift from API version skew can raise anp_drift_hint during harmless CRD bumps; validate CRD generation against OpenShift release notes before runbook execution.",
              "refs": "[OpenShift ResourceQuotas documentation](https://docs.openshift.com/container-platform/latest/applications/quotas/quotas-setting-per-project.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `oc get resourcequota -A -o json` scripted input.\n• Ensure the following data sources are available: `sourcetype=openshift:resourcequota`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted input: `oc get resourcequota -A -o json`. Parse `status.hard` and `status.used` for CPU, memory, and pods. Run every 300 seconds. Alert when any resource exceeds 85% of quota.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:resourcequota\"\n| eval cpu_pct=round(used_cpu/hard_cpu*100,1), mem_pct=round(used_memory/hard_memory*100,1), pods_pct=round(used_pods/hard_pods*100,1)\n| where cpu_pct>85 OR mem_pct>85 OR pods_pct>85\n| table namespace quota_name cpu_pct mem_pct pods_pct\n| sort -cpu_pct\n```\n\nUnderstanding this SPL\n\n**Project Resource Quota Exhaustion** — OpenShift projects enforce ResourceQuotas for CPU, memory, and pod count. When quotas approach exhaustion, new deployments and scaling operations fail silently.\n\nDocumented **Data sources**: `sourcetype=openshift:resourcequota`. **App/TA** (typical add-on context): `oc get resourcequota -A -o json` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:resourcequota. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:resourcequota\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cpu_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cpu_pct>85 OR mem_pct>85 OR pods_pct>85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Project Resource Quota Exhaustion**): table namespace quota_name cpu_pct mem_pct pods_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (namespace, quota, resource, used, hard, pct), Gauge per resource, Heatmap by namespace.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
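              "_splSketch": "Illustrative sustained-breach variant (not part of the documented search). Single readings above 85% are noisy at a 300-second cadence; this sketch pages only when every snapshot in the window breaches. Assumes the same extracted fields:\n\n```spl\nindex=openshift sourcetype=\"openshift:resourcequota\" earliest=-30m\n| eval cpu_pct=round(used_cpu/hard_cpu*100,1), mem_pct=round(used_memory/hard_memory*100,1), pods_pct=round(used_pods/hard_pods*100,1)\n| eval breach=if(cpu_pct>85 OR mem_pct>85 OR pods_pct>85,1,0)\n| stats min(breach) as sustained, count as snapshots by namespace, quota_name\n| where sustained=1 AND snapshots>=3\n```\n\nAt interval=300, three snapshots approximate 15 minutes of sustained pressure; tune the window and counts to taste. Note that CPU and memory quantities such as 500m or 2Gi must be normalized to plain numbers in the scripted input before this math works.",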
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the internal networking fabric that lets pods talk to each other across your OpenShift cluster. When that fabric’s control software struggles or IP assignment breaks, apps fail in confusing ways even when the front door still works, so we raise a clear signal for the networking team to fix it.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.13",
              "n": "MachineSet Scaling Failures",
              "c": "high",
              "f": "intermediate",
              "v": "MachineSets control node scaling in IPI clusters. When desired replicas diverge from ready/available counts, new machines may be stuck provisioning, failing cloud API calls, or hitting infrastructure limits.",
              "t": "`oc get machineset -n openshift-machine-api -o json` scripted input",
              "d": "`sourcetype=openshift:machineset`, `sourcetype=kube:events`",
              "q": "index=openshift sourcetype=\"openshift:machineset\"\n| where replicas!=readyReplicas OR replicas!=availableReplicas\n| eval gap=replicas-readyReplicas\n| table _time cluster machineset namespace replicas readyReplicas gap\n| sort -gap",
              "m": "Scripted input: `oc get machineset -n openshift-machine-api -o json`. Parse `spec.replicas`, `status.readyReplicas`, `status.availableReplicas`. Run every 300 seconds. Alert when replicas != readyReplicas for more than 15 minutes.",
              "z": "Table (machineset, desired, ready, gap), Single value (total gap), Timeline of scaling events.",
              "kfp": "Public registries such as docker.io and quay.io apply rate limits to anonymous pulls; bursts of Unauthorized or throttling messages during cold-start mornings can mirror incidents until authenticated pull secrets or mirror caches absorb traffic. Disconnected clusters run intentional mirror sync windows where ImageStream tags age until the next oc mirror job completes; require drift multiples beyond documented sync cadence before paging platform leadership. Teams that set spec.tag.importPolicy.scheduled false for one-shot promotion tags should never expect periodic refresh; staleness detectors must read scheduled_flag from exporters or suppress namespaces in a lookup table. Change-freeze periods sometimes suspend image-pruner CronJobs to avoid destructive pruning during holiday moratoriums; pair absent pruner rows with disk capacity dashboards and explicit freeze metadata before treating silence as failure. Vendor registry maintenance windows published on status pages can align with BadGateway spikes that self-resolve; annotate maintenance_authorized fields on HTTP Event Collector events when network teams pre-approve the window. S2I builder ImageStream deprecation announcements may cause intentional ManifestNotFound rows until application teams retag; route those pages to developer relations workflows rather than overnight platform bridges. Clusters that expose the integrated registry through alternative routes can leave status.dockerImageRepository empty in some snapshots while remains healthy; corroborate with oc get route and configs.imageregistry.operator.openshift.io before assuming import misconfiguration. Lab namespaces that continuously import intentionally broken tags for pipeline tests will generate warn noise unless routed to non-production indexes. Duplicate HTTP Event Collector submissions from redundant exporters can double met_sum restart counters until dedupe logic lands in summary indexes. Prometheus metric renames between OpenShift minors can hide image_registry_operator series for one rotation; fall back to ocp_imageregistry snapshots when metrics cardinality shifts.",
              "refs": "[OpenShift Machine management documentation](https://docs.openshift.com/container-platform/latest/machine_management/index.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `oc get machineset -n openshift-machine-api -o json` scripted input.\n• Ensure the following data sources are available: `sourcetype=openshift:machineset`, `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted input: `oc get machineset -n openshift-machine-api -o json`. Parse `spec.replicas`, `status.readyReplicas`, `status.availableReplicas`. Run every 300 seconds. Alert when replicas != readyReplicas for more than 15 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:machineset\"\n| where replicas!=readyReplicas OR replicas!=availableReplicas\n| eval gap=replicas-readyReplicas\n| table _time cluster machineset namespace replicas readyReplicas gap\n| sort -gap\n```\n\nUnderstanding this SPL\n\n**MachineSet Scaling Failures** — MachineSets control node scaling in IPI clusters. When desired replicas diverge from ready/available counts, new machines may be stuck provisioning, failing cloud API calls, or hitting infrastructure limits.\n\nDocumented **Data sources**: `sourcetype=openshift:machineset`, `sourcetype=kube:events`. **App/TA** (typical add-on context): `oc get machineset -n openshift-machine-api -o json` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:machineset. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:machineset\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where replicas!=readyReplicas OR replicas!=availableReplicas` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **MachineSet Scaling Failures**): table _time cluster machineset namespace replicas readyReplicas gap\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (machineset, desired, ready, gap), Single value (total gap), Timeline of scaling events.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
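              "_splSketch": "Illustrative dwell variant (not part of the documented search). Step 1 requires the replica gap to persist for more than 15 minutes, which the documented SPL does not enforce. A minimal sketch, assuming the 300-second polling interval:\n\n```spl\nindex=openshift sourcetype=\"openshift:machineset\" earliest=-20m\n| eval mismatch=if(replicas!=readyReplicas OR replicas!=availableReplicas,1,0)\n| stats min(mismatch) as always_mismatched, count as snapshots, latest(replicas) as replicas, latest(readyReplicas) as readyReplicas by cluster, machineset\n| where always_mismatched=1 AND snapshots>=3\n```\n\nThree consecutive 300-second snapshots approximate the 15-minute dwell; raise the counts if autoscaler provisioning on your provider routinely takes longer.",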
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the cluster service that copies container images from outside vendors into our own library and keeps them fresh on a schedule. When that copying breaks or our internal image warehouse gets sick, we raise a clear signal so teams fix it before releases stall.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "kubernetes_openshift"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.14",
              "n": "Node NotReady Detection",
              "c": "critical",
              "f": "beginner",
              "v": "Nodes transitioning to NotReady, SchedulingDisabled, or reporting disk/memory/PID pressure cause pod evictions and workload disruption across the cluster.",
              "t": "`oc get nodes -o json` scripted input",
              "d": "`sourcetype=openshift:node`, `sourcetype=kube:events`",
              "q": "index=openshift sourcetype=\"openshift:node\"\n| where status!=\"Ready\"\n| stats latest(status) as status, latest(reason) as reason, count by cluster, node, role\n| sort cluster, node",
              "m": "Scripted input: `oc get nodes -o json`. Parse `status.conditions` for Ready, MemoryPressure, DiskPressure, PIDPressure. Run every 120 seconds. Alert immediately when any node is NotReady or reporting pressure conditions.",
              "z": "Node status grid (green/yellow/red), Table (node, role, status, reason), Timeline.",
              "kfp": "Planned identity provider certificate rotations, corporate IdP maintenance windows, and OAuth serving certificate reloads on openshift-authentication often produce short AuthenticationFailed bursts that clear within minutes; require multi-lane corroboration or dwell thresholds before executive escalation. Scheduled GitHub Enterprise outages or rate-limit windows can spike oauth-openshift errors without cluster defects; join provider status pages and change_ticket_id metadata on HTTP Event Collector payloads. htpasswd Secret batch rotations during enterprise password campaigns generate many failed attempts from cached browser sessions until users pick up new credentials; communicate campaigns before muting alerts entirely. Cluster administrators running oc adm groups sync or identity mapping tests in lab clusters can create Identity churn spikes that resemble misconfiguration; segregate lab indexes or lookup suppressions for non-production clusters. End-of-quarter hiring waves and training-class login surges elevate token issuance velocity benignly; compare streamstats baselines to HR-approved calendars. Intentional MFA enforcement rollouts increase AuthenticationFailed counts on legacy oc clients until upgrades finish; document client baselines and extend thresholds during the rollout window. Transient LDAP referral chase loops on consolidated directory hubs can raise latency and failure counts without sustained oauth-openshift 5xx; pair with directory engineer tcpdump or IdP trace only under policy. Penetration tests that deliberately hammer OAuth endpoints may trip flow_burst logic; ingest pentest_authorization lookup rows with start and end epochs. Duplicate log forwarders can double lane_events counts; dedupe on audit auditID or equivalent correlation identifiers when present. Some managed OpenShift offerings redact oauth-server audit paths from centralized logging; expect metric-heavy firing and validate ground truth with oc adm node-logs during incidents. ServiceAccount token clues can reflect legitimate automation switched on during OAuth outages; never treat automation alone as hostile without UC-3.2.23 binding and UC-3.2.12 RBAC context.",
              "refs": "[OpenShift Node health documentation](https://docs.openshift.com/container-platform/latest/nodes/nodes/nodes-nodes-viewing.html)",
              "mitre": [
                "T1078",
                "T1556"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `oc get nodes -o json` scripted input.\n• Ensure the following data sources are available: `sourcetype=openshift:node`, `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted input: `oc get nodes -o json`. Parse `status.conditions` for Ready, MemoryPressure, DiskPressure, PIDPressure. Run every 120 seconds. Alert immediately when any node is NotReady or reporting pressure conditions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:node\"\n| where status!=\"Ready\"\n| stats latest(status) as status, latest(reason) as reason, count by cluster, node, role\n| sort cluster, node\n```\n\nUnderstanding this SPL\n\n**Node NotReady Detection** — Nodes transitioning to NotReady, SchedulingDisabled, or reporting disk/memory/PID pressure cause pod evictions and workload disruption across the cluster.\n\nDocumented **Data sources**: `sourcetype=openshift:node`, `sourcetype=kube:events`. **App/TA** (typical add-on context): `oc get nodes -o json` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:node. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:node\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status!=\"Ready\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster, node, role** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Node status grid (green/yellow/red), Table (node, role, status, reason), Timeline.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the cluster sign-in doorway and its connections to your corporate login systems. When sign-in starts failing in waves, when the cluster cannot talk to your login provider, or when account records change too fast, we raise a clear signal so teams fix it before people are locked out.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.15",
              "n": "OAuth Access Token Audit",
              "c": "high",
              "f": "intermediate",
              "v": "OpenShift OAuth tokens grant API access. Tracking token creation and deletion reveals unauthorized access attempts, service account abuse, or compromised credentials.",
              "t": "OpenShift audit log forwarding",
              "d": "`sourcetype=openshift:audit`",
              "q": "index=openshift sourcetype=\"openshift:audit\" (objectRef.resource=\"oauthaccesstokens\" OR objectRef.resource=\"oauthauthorizetokens\")\n| stats count by verb, user.username, sourceIPs{}, responseStatus.code\n| where verb=\"create\" OR verb=\"delete\"\n| sort -count",
              "m": "Enable and forward OpenShift audit logs (API server audit policy). Filter for `oauthaccesstokens` and `oauthauthorizetokens` resource operations. Alert on unusual token creation volume, token creation from unexpected IPs, or bulk token deletions.",
              "z": "Table (user, verb, source IP, status), Timechart (token creates/deletes), Bar chart by user.",
              "kfp": "Deliberate change-freeze policies sometimes pause scheduled etcd backups when storage maintenance forbids snapshot IO; stamp change_ticket_id onto HTTP Event Collector payloads and document compensating manual backup runs before muting pages. Rolling reboots of control-plane nodes during kernel upgrades can elevate cluster-etcd-operator and etcd pod restart counters without backup posture regression; require dwell thresholds and ticket correlation. Planned oc adm cluster restore rehearsals and disaster-recovery game days intentionally emit EtcdRecovery or BootstrapTeardown adjacent signals; route those windows through lookup suppressions tied to calendar artifacts. ConfigMap rotations for cluster-etcd-operator-config can briefly surface Progressing=True on the etcd ClusterOperator while revisions settle; pair with short dwell before treating as incident. Transient EtcdMembersDegraded during approved control-plane scaling from three to five nodes may track single-master conversion work; validate against migration runbooks rather than assuming datastore corruption. BackupSucceeded bursts during CronJob overlap windows can look noisy in low-volume indexes; dedupe on involvedObject.uid when present. Duplicate forwarders can double event counts; dedupe on audit auditID or equivalent correlation identifiers. Some managed offerings restrict etcd metric cardinality or redact certain event reasons; expect metric-heavy firing and validate ground truth with oc commands during incidents. Vendor extensions that push snapshots to object storage may lag Splunk visibility until the transfer job completes; corroborate with object store lifecycle metrics outside this UC before declaring backup success. Penetration tests that simulate API denial against etcd backup namespaces may trip backup failure logic; ingest pentest authorization lookups with start and end epochs.",
              "refs": "[OpenShift OAuth configuration](https://docs.openshift.com/container-platform/latest/authentication/understanding-authentication.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenShift audit log forwarding.\n• Ensure the following data sources are available: `sourcetype=openshift:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable and forward OpenShift audit logs (API server audit policy). Filter for `oauthaccesstokens` and `oauthauthorizetokens` resource operations. Alert on unusual token creation volume, token creation from unexpected IPs, or bulk token deletions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:audit\" (objectRef.resource=\"oauthaccesstokens\" OR objectRef.resource=\"oauthauthorizetokens\")\n| stats count by verb, user.username, sourceIPs{}, responseStatus.code\n| where verb=\"create\" OR verb=\"delete\"\n| sort -count\n```\n\nUnderstanding this SPL\n\n**OAuth Access Token Audit** — OpenShift OAuth tokens grant API access. Tracking token creation and deletion reveals unauthorized access attempts, service account abuse, or compromised credentials.\n\nDocumented **Data sources**: `sourcetype=openshift:audit`. **App/TA** (typical add-on context): OpenShift audit log forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by verb, user.username, sourceIPs{}, responseStatus.code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where verb=\"create\" OR verb=\"delete\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, verb, source IP, status), Timechart (token creates/deletes), Bar chart by user.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the automated caretakers that keep the cluster control database healthy, backed up on schedule, and ready for careful member repairs. When backups fall behind, a member starts failing, or the repair workflow shows warning signs, we raise a clear signal so engineers act before a small fault becomes a full outage.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "kubernetes_openshift"
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.16",
              "n": "DeploymentConfig Rollout Failures",
              "c": "high",
              "f": "beginner",
              "v": "DeploymentConfig rollouts can fail due to image pull errors, readiness probe timeouts, or resource limits. Stalled rollouts leave applications on old versions and consume resources.",
              "t": "OpenShift event forwarding",
              "d": "`sourcetype=kube:events`, `sourcetype=openshift:deploymentconfig`",
              "q": "index=openshift sourcetype=\"kube:events\" involvedObject.kind=\"DeploymentConfig\" (reason=\"DeploymentFailed\" OR reason=\"RollbackDone\" OR reason=\"CancelledRollout\")\n| stats count by namespace, involvedObject.name, reason, message\n| sort -count",
              "m": "Forward OpenShift events filtered for DeploymentConfig-related reasons. Alert on DeploymentFailed, CancelledRollout, or rollouts that remain in progress beyond the configured deadline.",
              "z": "Table (DC, namespace, reason, message), Timeline of rollouts, Success rate bar chart.",
              "kfp": "Intentional oc rollout pause during canary or traffic-shadow experiments leaves Progressing true with deliberate partial replica states until operators oc rollout resume; join alerts to change calendars or suppress when pause_authorized metadata appears on HTTP Event Collector events. Lifecycle hook pods that run long database migrations may exceed default timeoutSeconds while still healthy; service owners should document extended windows and raise thresholds rather than treating every long hook as an outage. Rolling strategies with maxSurge greater than zero routinely create ReplicationController overlap windows where more than one RC shows ready replicas briefly; require sustained rc_overlap_cnt beyond vendor-guided durations before paging leadership. Recreate strategies may show transient zero ready replicas during tear-down phases that are expected though customer-facing monitors should still fire separately when routes serve errors. ImageStreamTag bumps in shared openshift namespace base layers can fan out many DeploymentConfigs simultaneously; fleet-level ist_bump spikes should downgrade per-application pages until cluster-wide correlation confirms a single root cause. Blue-green style cutovers that intentionally keep two RC generations warm for route testing resemble failure in naive overlap detectors; annotate dc_name rows with blue_green_expected in lookups when your pipeline relies on that pattern. Admission webhook denials on ReplicationController create can stall latestVersion without ReplicaFailure flipping immediately; corroborate ocp_audit before blaming image content. kube-state-metrics label schemes differ across chart versions; a single scrape gap can null ksm_ready_rep while API snapshots remain authoritative. Lab namespaces that continuously churn DeploymentConfigs for pipeline tests will page unless routed to non-production indexes. oc rollback during incident response produces DeploymentRolledBack events that look severe until audit_actor and bridge notes confirm intent. ConfigChange triggers firing on frequent ConfigMap edits during autoscaling sidecars can increment latestVersion without meaningful image changes; pair with diff tools before opening release defects.",
              "refs": "[OpenShift DeploymentConfig strategies](https://docs.openshift.com/container-platform/latest/applications/deployments/what-deployments-are.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenShift event forwarding.\n• Ensure the following data sources are available: `sourcetype=kube:events`, `sourcetype=openshift:deploymentconfig`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward OpenShift events filtered for DeploymentConfig-related reasons. Alert on DeploymentFailed, CancelledRollout, or rollouts that remain in progress beyond the configured deadline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"kube:events\" involvedObject.kind=\"DeploymentConfig\" (reason=\"DeploymentFailed\" OR reason=\"RollbackDone\" OR reason=\"CancelledRollout\")\n| stats count by namespace, involvedObject.name, reason, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**DeploymentConfig Rollout Failures** — DeploymentConfig rollouts can fail due to image pull errors, readiness probe timeouts, or resource limits. Stalled rollouts leave applications on old versions and consume resources.\n\nDocumented **Data sources**: `sourcetype=kube:events`, `sourcetype=openshift:deploymentconfig`. **App/TA** (typical add-on context): OpenShift event forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: kube:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, involvedObject.name, reason, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (DC, namespace, reason, message), Timeline of rollouts, Success rate bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "Some of our applications still update the old OpenShift way, where each release spins a new controller generation and waits for careful steps like image swaps and small helper jobs. When that sequence gets stuck, we raise a clear signal so engineers fix the release before customers ride on a half-finished swap.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "kubernetes_openshift"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.17",
              "n": "MachineConfigPool Degradation",
              "c": "critical",
              "f": "advanced",
              "v": "MachineConfigPools apply OS-level configuration (kernel args, kubelet config, chrony, SSH keys) to node groups. A degraded MCP means nodes failed to apply the desired config, blocking upgrades and leaving nodes in an inconsistent state.",
              "t": "`oc get mcp -o json` scripted input",
              "d": "`sourcetype=openshift:machineconfigpool`",
              "q": "index=openshift sourcetype=\"openshift:machineconfigpool\"\n| where degraded=\"True\" OR updating=\"True\"\n| eval degraded_pct=round(degradedMachineCount/machineCount*100,1)\n| table _time cluster pool machineCount readyMachineCount degradedMachineCount degraded_pct updating\n| sort -degraded_pct",
              "m": "Scripted input: `oc get mcp -o json`. Parse `status.conditions` for Degraded, Updated, Updating. Extract `machineCount`, `readyMachineCount`, `degradedMachineCount`, `updatedMachineCount`. Run every 300 seconds. Alert when Degraded=True or when Updating=True for more than 60 minutes.",
              "z": "MCP status grid (green/yellow/red), Table (pool, machines, ready, degraded), Progress bar for updates.",
              "kfp": "Short Updating=True windows during legitimate serial rollouts on maxUnavailable=1 pools routinely exceed ten minutes without indicating failure; require the sixty minute stuck threshold or a non-zero degradedMachineCount before executive paging. spec.paused=true during approved maintenance freezes Degraded signals that are intentional; join alerts to change calendars or suppress when pause_authorized=true metadata is present on HEC events. Lab clusters that constantly churn test MachineConfigs can emit noisy degraded counts; route those clusters to non-prod indexes or lower severity tiers. Duplicate HEC submissions from redundant collectors double machine counts in rare misconfigurations; dedupe on cluster, mcp_pool, and snapshot_generation before paging. Prometheus label renames after OpenShift minor upgrades can null mco_machine_count_drift joins for a single scrape interval; corroborate with API snapshots before opening operator defects. Single-node development clusters often pin MCPs that would be unhealthy at enterprise scale; mark them in a lookup to downgrade severity. Image content mirroring or registry outages can stall MCD pulls and mimic MCP degradation; cross-link registry health before blaming kubelet configs. Clock skew between management hosts and Splunk indexers distorts updating_minutes; enforce chrony on forwarders when skew exceeds two minutes.",
              "refs": "[OpenShift MachineConfig Operator documentation](https://docs.openshift.com/container-platform/latest/post_installation_configuration/machine-configuration-tasks.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `oc get mcp -o json` scripted input.\n• Ensure the following data sources are available: `sourcetype=openshift:machineconfigpool`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted input: `oc get mcp -o json`. Parse `status.conditions` for Degraded, Updated, Updating. Extract `machineCount`, `readyMachineCount`, `degradedMachineCount`, `updatedMachineCount`. Run every 300 seconds. Alert when Degraded=True or when Updating=True for more than 60 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:machineconfigpool\"\n| where degraded=\"True\" OR updating=\"True\"\n| eval degraded_pct=round(degradedMachineCount/machineCount*100,1)\n| table _time cluster pool machineCount readyMachineCount degradedMachineCount degraded_pct updating\n| sort -degraded_pct\n```\n\nUnderstanding this SPL\n\n**MachineConfigPool Degradation** — MachineConfigPools apply OS-level configuration (kernel args, kubelet config, chrony, SSH keys) to node groups. A degraded MCP means nodes failed to apply the desired config, blocking upgrades and leaving nodes in an inconsistent state.\n\nDocumented **Data sources**: `sourcetype=openshift:machineconfigpool`. **App/TA** (typical add-on context): `oc get mcp -o json` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:machineconfigpool. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:machineconfigpool\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where degraded=\"True\" OR updating=\"True\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **degraded_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **MachineConfigPool Degradation**): table _time cluster pool machineCount readyMachineCount degradedMachineCount degraded_pct updating\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: MCP status grid (green/yellow/red), Table (pool, machines, ready, degraded), Progress bar for updates.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "OpenShift keeps a master checklist called a MachineConfigPool that says every server in a group should run the same low-level operating recipe. When that checklist shows Degraded, some machines refused the recipe, and we need to find out why before the next big upgrade leaves them behind.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.18",
              "n": "etcd Leader Changes",
              "c": "high",
              "f": "advanced",
              "v": "Frequent etcd leader elections indicate network partitions, disk I/O bottlenecks, or resource contention on control plane nodes. Excessive elections degrade API server responsiveness and can cause cluster instability.",
              "t": "OpenTelemetry Collector (Prometheus scrape), etcd log forwarding",
              "d": "`sourcetype=openshift:etcd:metrics`, `sourcetype=openshift:etcd`",
              "q": "index=openshift sourcetype=\"openshift:etcd:metrics\" metric_name=\"etcd_server_leader_changes_seen_total\"\n| bin _time span=5m\n| stats max(_value) as leader_changes by cluster, instance, _time\n| streamstats current=f window=1 last(leader_changes) as prev by cluster, instance\n| eval delta=leader_changes-prev\n| where delta>0\n| table _time cluster instance delta",
              "m": "Scrape etcd Prometheus metrics via OpenTelemetry Collector or forward etcd logs. Track `etcd_server_leader_changes_seen_total` as a counter. Alert when more than 3 leader changes occur within 10 minutes. Correlate with disk latency metrics (`etcd_disk_wal_fsync_duration_seconds`).",
              "z": "Timechart (leader changes over time), Table (instance, changes), Correlation panel with disk latency.",
              "kfp": "Planned control-plane upgrades and certificate rotations can legitimately increment etcd_server_leader_changes_seen_total for short intervals while quantiles remain well below paging thresholds; rely on the rolling fifteen-minute sum and the conjunction with disk tails before executive escalation. Approved etcd member replacement ceremonies supervised by support may show bursts of leader churn without storage faults; correlate change tickets and NodeMaintenance or machine-config timelines before treating as defect. Scheduled etcd defragmentation or compaction windows sometimes widen histograms briefly; compare duration against documented maintenance lookups and vendor guidance rather than muting the analytic globally. Backup or snapshot IO spikes from adjacent storage workloads can elevate fsync tails on shared arrays even when etcd logical health is fine; storage teams should map noisy neighbor LUNs and use the maintenance lookup when IO qualification runs are authorized. Prometheus federation delays or Thanos receiver gaps can make leader_roll15 appear artificially low while events still show warnings; pair with scrape health monitors before declaring all-clear. Lab clusters with synthetic chaos experiments will page by design; route them to non-production alert channels. Some cloud instance types exhibit higher baseline WAL quantiles; baseline fleet_fsync_p90 quarterly and adjust warn thresholds where hardware generations justify it. Duplicate scrapes from redundant collectors can inflate counters if deduplication is misconfigured upstream; reconcile Prometheus HA pairs before blaming etcd. Network microbursts that do not persist fifteen minutes may trip warn on peer round-trip tails without sustained storms; capture packet loss evidence on control-plane interfaces. When Etcd CR exporters lag behind metrics, etcd_cr_memdeg may be zero during genuine degradation; treat ClusterOperator and metrics lanes as mandatory peers, not optional ornaments.",
              "refs": "[OpenShift etcd monitoring documentation](https://docs.openshift.com/container-platform/latest/scalability_and_performance/recommended-performance-scale-practices/recommended-etcd-practices.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenTelemetry Collector (Prometheus scrape), etcd log forwarding.\n• Ensure the following data sources are available: `sourcetype=openshift:etcd:metrics`, `sourcetype=openshift:etcd`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScrape etcd Prometheus metrics via OpenTelemetry Collector or forward etcd logs. Track `etcd_server_leader_changes_seen_total` as a counter. Alert when more than 3 leader changes occur within 10 minutes. Correlate with disk latency metrics (`etcd_disk_wal_fsync_duration_seconds`).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:etcd:metrics\" metric_name=\"etcd_server_leader_changes_seen_total\"\n| bin _time span=5m\n| stats max(_value) as leader_changes by cluster, instance, _time\n| streamstats current=f window=1 last(leader_changes) as prev by cluster, instance\n| eval delta=leader_changes-prev\n| where delta>0\n| table _time cluster instance delta\n```\n\nUnderstanding this SPL\n\n**etcd Leader Changes** — Frequent etcd leader elections indicate network partitions, disk I/O bottlenecks, or resource contention on control plane nodes. Excessive elections degrade API server responsiveness and can cause cluster instability.\n\nDocumented **Data sources**: `sourcetype=openshift:etcd:metrics`, `sourcetype=openshift:etcd`. **App/TA** (typical add-on context): OpenTelemetry Collector (Prometheus scrape), etcd log forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:etcd:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:etcd:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by cluster, instance, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `streamstats` rolls up events into metrics; results are split **by cluster, instance** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delta>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **etcd Leader Changes**): table _time cluster instance delta\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (leader changes over time), Table (instance, changes), Correlation panel with disk latency.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the tiny database that runs the whole cluster. When it keeps picking new leaders while disks respond too slowly, we raise a clear signal so engineers fix the hardware or network before things freeze up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry",
                "prometheus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.19",
              "n": "Ingress Controller Errors",
              "c": "high",
              "f": "intermediate",
              "v": "The OpenShift Ingress Controller (HAProxy-based router) handles all external traffic into the cluster. 5xx errors, backend connection failures, and route misconfigurations directly impact application availability.",
              "t": "OpenShift HAProxy log forwarding, `oc get ingresscontroller -n openshift-ingress-operator -o json` scripted input",
              "d": "`sourcetype=openshift:haproxy`, `sourcetype=openshift:ingresscontroller`",
              "q": "index=openshift sourcetype=\"openshift:haproxy\"\n| where status_code>=500\n| bin _time span=5m\n| stats count as errors by backend_name, status_code, _time\n| where errors>10\n| sort -errors",
              "m": "Forward HAProxy access logs from the router pods or expose HAProxy stats. Parse status codes, backend names, and response times. Alert when 5xx error rate exceeds threshold per backend. Also monitor IngressController object for Degraded condition.",
              "z": "Timechart (5xx rate by backend), Table (backend, status, count), Error rate single value.",
              "kfp": "Short Progressing=True windows are normal during shard scaling, certificate rotations, and edits to endpointPublishingStrategy; require pairing with depmin_bad, canary_bad, or sustained dns_bad before paging leadership. Cloud DNS automation that rotates credentials can flip DNSManaged=False for minutes while controllers reconcile; corroborate with provider audit logs and route53 or azure-dns activity before treating as incident. CanaryChecksSucceeding=False after deliberate NetworkPolicy tightening may be a one-time expected blip when change tickets document the hardening; distinguish from silent data-plane outages using external synthetic checks and openshift-ingress-canary Pod logs. Wildcard admission collisions often reflect intentional migration projects where two shards temporarily overlap; use architecture runbooks rather than automatic rollback unless customer Routes fail admission. Replica gaps can appear during voluntary cluster autoscaler scale-in of router nodes when budgets cap available replicas; compare to machine health, PodDisruptionBudgets, and upgrade notes before opening defects against the operator. Prometheus scrape gaps from monitoring outages can null prom_peak while API snapshots remain authoritative; combine lanes before muting metrics-only arms. Duplicate HTTP Event Collector submissions from redundant exporters can inflate op_fail_cnt until dedupe lands in summary indexes. Lab clusters that continuously churn IngressController tests will generate warn noise unless routed to non-production indexes. tlsSecurityProfile fields vary by exporter flattening; false tls_drift_hint rows appear until FIELDALIAS maps stabilize after OpenShift minor upgrades. Maintenance suppression misuse can hide genuine outages if tickets are left open; expire lookup rows on schedule and require human approval for long suppress windows.",
              "refs": "[OpenShift Ingress Operator documentation](https://docs.openshift.com/container-platform/latest/networking/ingress-operator.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenShift HAProxy log forwarding, `oc get ingresscontroller -n openshift-ingress-operator -o json` scripted input.\n• Ensure the following data sources are available: `sourcetype=openshift:haproxy`, `sourcetype=openshift:ingresscontroller`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward HAProxy access logs from the router pods or expose HAProxy stats. Parse status codes, backend names, and response times. Alert when 5xx error rate exceeds threshold per backend. Also monitor IngressController object for Degraded condition.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:haproxy\"\n| where status_code>=500\n| bin _time span=5m\n| stats count as errors by backend_name, status_code, _time\n| where errors>10\n| sort -errors\n```\n\nUnderstanding this SPL\n\n**Ingress Controller Errors** — The OpenShift Ingress Controller (HAProxy-based router) handles all external traffic into the cluster. 5xx errors, backend connection failures, and route misconfigurations directly impact application availability.\n\nDocumented **Data sources**: `sourcetype=openshift:haproxy`, `sourcetype=openshift:ingresscontroller`. **App/TA** (typical add-on context): OpenShift HAProxy log forwarding, `oc get ingresscontroller -n openshift-ingress-operator -o json` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:haproxy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:haproxy\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status_code>=500` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by backend_name, status_code, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where errors>10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (5xx rate by backend), Table (backend, status, count), Error rate single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the OpenShift service that turns your desired edge routing design into live routers and DNS integration. When that service cannot reconcile shards, certificates, or health checks, we raise a clear signal so teams fix the platform configuration before customer routes fail.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "haproxy"
              ],
              "em": [
                "kubernetes_openshift"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.20",
              "n": "Cluster Certificate Expiry",
              "c": "critical",
              "f": "intermediate",
              "v": "OpenShift uses dozens of internal certificates for API server, kubelet, etcd, and service-serving. Expired internal certificates cause sudden cluster-wide outages that are difficult to recover from without backup.",
              "t": "`oc get secret -A -o json` scripted input (TLS secrets), OpenShift Monitoring Prometheus metrics",
              "d": "`sourcetype=openshift:certificates`",
              "q": "index=openshift sourcetype=\"openshift:certificates\"\n| eval days_left=round((expiry_epoch-now())/86400,0)\n| where days_left<30\n| table namespace secret_name cn days_left issuer\n| sort days_left",
              "m": "Scripted input parsing TLS secrets or extracting from `openssl x509 -noout -enddate`. Also scrape `apiserver_client_certificate_expiration_seconds` Prometheus metric. Alert at 30/14/7 days before expiry. Page at 3 days for cluster-critical certificates.",
              "z": "Table (certificate, namespace, days left, issuer), Single value (soonest expiry), Gauge.",
              "kfp": "Chained certificate renewals routinely leave multiple PEM leaves in a single tls.crt while operators finish rollout; Splunk may still read an older notAfter until parsers select the final leaf—correlate resourceVersion increments and openshift:operator Progressing messages before paging. Trust bundle propagation lag after service-ca or apiserver signer rotation can produce short TLS error bursts that resemble imminent expiry even when new CABundle ConfigMaps already exist on disk; require sustained Degraded conditions or consistent short notAfter across collectors before executive escalation. Operator reconciliation timestamps and metadata.generation can advance without changing notAfter when secrets are relabeled; avoid interpreting generation bumps alone as rotation completion. etcd quorum maintenance can pause secret writes briefly, producing null exporter rows that look like missing certificates; verify openshift:certificates lane freshness against oc get secret before assuming data loss. Prometheus federation scrape gaps or Thanos receive outages can drop apiserver_client_certificate_expiration_seconds samples while clusters remain healthy; combine with openshift:certificates horizons before muting metrics entirely. Lab clusters with intentionally short lifetimes for drill secrets will page unless openshift_cert_policy.csv marks them non-production; tune rotation_policy and suppress windows rather than disabling the control. Duplicate HEC submissions from redundant inventory jobs can double-count secrets; dedupe on cluster, namespace, secret_name, and fingerprint fields in summary indexes when cost matters. ClusterOperator messages that mention cert in unrelated contexts, such as registry mirror TLS to external sites, can false-positive cert_degraded; read message text and compare to etcd, authentication, or ingress operator ownership before kubelet signer escalation. Splunk lookup staleness after a successful rotation can show expired rows while OpenShift already serves new PEM material; re-ingest openshift_cert_policy.csv signer columns after offline openssl parent comparisons. This control focuses on internal operator PKI, not customer edge CDN pinning alone—that narrative lives in UC-3.3.8.",
              "refs": "[OpenShift certificate management](https://docs.openshift.com/container-platform/latest/security/certificates/api-server.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `oc get secret -A -o json` scripted input (TLS secrets), OpenShift Monitoring Prometheus metrics.\n• Ensure the following data sources are available: `sourcetype=openshift:certificates`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted input parsing TLS secrets or extracting from `openssl x509 -noout -enddate`. Also scrape `apiserver_client_certificate_expiration_seconds` Prometheus metric. Alert at 30/14/7 days before expiry. Page at 3 days for cluster-critical certificates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:certificates\"\n| eval days_left=round((expiry_epoch-now())/86400,0)\n| where days_left<30\n| table namespace secret_name cn days_left issuer\n| sort days_left\n```\n\nUnderstanding this SPL\n\n**Cluster Certificate Expiry** — OpenShift uses dozens of internal certificates for API server, kubelet, etcd, and service-serving. Expired internal certificates cause sudden cluster-wide outages that are difficult to recover from without backup.\n\nDocumented **Data sources**: `sourcetype=openshift:certificates`. **App/TA** (typical add-on context): `oc get secret -A -o json` scripted input (TLS secrets), OpenShift Monitoring Prometheus metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:certificates. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:certificates\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left<30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cluster Certificate Expiry**): table namespace secret_name cn days_left issuer\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (certificate, namespace, days left, issuer), Single value (soonest expiry), Gauge.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "The platform issues many internal trust credentials so its own parts can talk safely. When an important one expires, administration can fail—sometimes without an obvious alarm. We keep an inventory, warn well ahead, and escalate faster when a renewal still needs a human approval step.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "prometheus"
              ],
              "em": [
                "kubernetes_openshift"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.21",
              "n": "ClusterRole and ClusterRoleBinding Changes",
              "c": "high",
              "f": "intermediate",
              "v": "Changes to ClusterRoles and ClusterRoleBindings affect cluster-wide RBAC permissions. Unauthorized modifications can grant excessive privileges or create backdoor access.",
              "t": "OpenShift audit log forwarding",
              "d": "`sourcetype=openshift:audit`",
              "q": "index=openshift sourcetype=\"openshift:audit\" (objectRef.resource=\"clusterroles\" OR objectRef.resource=\"clusterrolebindings\") verb!=\"get\" verb!=\"list\" verb!=\"watch\"\n| stats count by verb, user.username, objectRef.name, objectRef.resource, responseStatus.code\n| sort -count",
              "m": "Enable and forward OpenShift audit logs. Filter for create/update/patch/delete on `clusterroles` and `clusterrolebindings`. Alert on any modification outside of approved change windows or by non-admin users. Cross-reference with change management tickets.",
              "z": "Table (user, verb, resource, name, status), Timeline of changes, Bar chart by user.",
              "kfp": "Operator Lifecycle Manager installs and certified partner operators routinely create ClusterRoles with aggregation labels during upgrades; require approved_cluster_roles.csv rows and vendor documentation references before paging teams. OpenShift platform controllers reconcile bundled roles after upgrades; short bursts of update verbs on system roles may be healthy when ClusterVersion history shows an active rollout—join to change calendars. GitOps controllers applying large binding batches can spike binding_velocity_win without malice; suppress when automation_actor metadata matches known Flux or Argo CD service accounts. oc adm policy synchronization by break-glass administrators resembles abuse until tickets arrive; use allowlist exception_expiry_epoch thoughtfully. Hourly identity snapshots lag behind immediate user deletions; tune orphan_subject logic to avoid noise during known directory replication delays. Audit RequestResponse volume or field extraction gaps can hide subjects; validate parsers when orphan_subject fires without oc confirmation. Penetration tests mutate RBAC objects under authorization; ingest pentest windows into lookup suppressions. Duplicate HEC submissions double velocity counters; dedupe on auditID when present. Lab clusters that intentionally mutate bundled roles for training will page unless routed to non-production indexes.",
              "refs": "[OpenShift RBAC documentation](https://docs.openshift.com/container-platform/latest/authentication/using-rbac.html)",
              "mitre": [
                "T1098",
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenShift audit log forwarding.\n• Ensure the following data sources are available: `sourcetype=openshift:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable and forward OpenShift audit logs. Filter for create/update/patch/delete on `clusterroles` and `clusterrolebindings`. Alert on any modification outside of approved change windows or by non-admin users. Cross-reference with change management tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:audit\" (objectRef.resource=\"clusterroles\" OR objectRef.resource=\"clusterrolebindings\") verb!=\"get\" verb!=\"list\" verb!=\"watch\"\n| stats count by verb, user.username, objectRef.name, objectRef.resource, responseStatus.code\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ClusterRole and ClusterRoleBinding Changes** — Changes to ClusterRoles and ClusterRoleBindings affect cluster-wide RBAC permissions. Unauthorized modifications can grant excessive privileges or create backdoor access.\n\nDocumented **Data sources**: `sourcetype=openshift:audit`. **App/TA** (typical add-on context): OpenShift audit log forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by verb, user.username, objectRef.name, objectRef.resource, responseStatus.code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, verb, resource, name, status), Timeline of changes, Bar chart by user.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the master permission lists for the whole cluster like building master keys. We catch when the keys the platform ships get altered, when name tags point to people who no longer exist, and when someone hands out many new cluster-wide keys all at once.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "kubernetes_openshift"
              ],
              "pillar": "security",
              "_qs": 45,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.22",
              "n": "Pod Security Admission Violations",
              "c": "high",
              "f": "beginner",
              "v": "Pod Security Admission (PSA) replaced PodSecurityPolicies in OpenShift 4.12+. Violations of baseline or restricted profiles indicate workloads requesting disallowed capabilities such as privileged containers, host networking, or writable root filesystems.",
              "t": "OpenShift audit log forwarding",
              "d": "`sourcetype=openshift:audit`",
              "q": "index=openshift sourcetype=\"openshift:audit\" objectRef.resource=\"pods\" verb=\"create\"\n| search \"pod-security.kubernetes.io\" (\"violat*\" OR \"would have been\")\n| stats count by user.username, objectRef.namespace, 'annotations.pod-security.kubernetes.io/audit-violations'\n| sort -count",
              "m": "Forward OpenShift audit logs. Filter for pod create events containing PSA violation annotations. Track violations by namespace and user. Alert on enforce-mode rejections and audit/warn-mode violations for trend analysis.",
              "z": "Table (namespace, user, violation), Bar chart by violation type, Timechart trend.",
              "kfp": "Red Hat core operators and catalog partners sometimes require short privileged PSA windows during install or upgrade; join vendor documentation and approved_psa_baseline.csv exception_expiry_epoch before paging application teams for openshift-marketplace, openshift-logging, or openshift-monitoring style namespaces. Legitimate oc adm policy workflows and GitOps reconcilers can emit namespace mutation audit rows that look like drift until you compare actor service accounts to known controllers. Developer sandboxes that intentionally remain privileged while audit is restricted will trip soft_launch_stuck logic unless inventory marks them as approved long-running experiments with owners. EUS-track clusters can linger on older Kubernetes minors while teams pin enforce-version strings that match the prior minor; treat as planning debt rather than incident until production namespaces show mismatch with expired exceptions. OLM subscription churn during catalog mirrors or air-gap transfers may temporarily widen labels; require sustained drift or pairing with failed installs before executive escalation. Cluster policy controller reconciliation can flap labels during upgrades; compare Splunk windows to ClusterVersion progressing flags and UC-3.3.6 context before blaming tenants. Penetration tests that craft violating pods will inflate deny bursts by design; ingest pentest authorization metadata on HEC events. Duplicate HEC submissions from redundant forwarders can inflate psa_deny_burst; dedupe on audit auditID when present. Some managed offerings redact portions of audit bodies; SCC markers may be absent while oc get pod still shows openshift.io/scc annotations—investigate parser gaps before muting SCC↔PSA correlation. Break-glass oc adm policy add-scc-to-user activity during incidents can change admission outcomes without PSA label edits; pair timelines with UC-3.3.4 narratives and tickets so SCC relaxation is not mistaken for pure PSA failure.",
              "refs": "[OpenShift Pod Security Admission](https://docs.openshift.com/container-platform/latest/authentication/understanding-and-managing-pod-security-admission.html)",
              "mitre": [
                "T1611",
                "T1548.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenShift audit log forwarding.\n• Ensure the following data sources are available: `sourcetype=openshift:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward OpenShift audit logs. Filter for pod create events containing PSA violation annotations. Track violations by namespace and user. Alert on enforce-mode rejections and audit/warn-mode violations for trend analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:audit\" objectRef.resource=\"pods\" verb=\"create\"\n| search \"pod-security.kubernetes.io\" (\"violat*\" OR \"would have been\")\n| stats count by user.username, objectRef.namespace, 'annotations.pod-security.kubernetes.io/audit-violations'\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Pod Security Admission Violations** — Pod Security Admission (PSA) replaced PodSecurityPolicies in OpenShift 4.12+. Violations of baseline or restricted profiles indicate workloads requesting disallowed capabilities such as privileged containers, host networking, or writable root filesystems.\n\nDocumented **Data sources**: `sourcetype=openshift:audit`. **App/TA** (typical add-on context): OpenShift audit log forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user.username, objectRef.namespace, 'annotations.pod-security.kubernetes.io/audit-violations'** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (namespace, user, violation), Bar chart by violation type, Timechart trend.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the layered safety rules that decide whether an app may launch, and we catch when namespace labels drift, when installs leave spaces stuck permissive, or when a cluster upgrade makes pinned rule versions disagree with the live platform version.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "kubernetes_openshift"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.23",
              "n": "Console and API Access Audit",
              "c": "medium",
              "f": "beginner",
              "v": "Tracking who accesses the OpenShift console and API, from where, and what actions they perform provides a complete audit trail for compliance and security investigations.",
              "t": "OpenShift audit log forwarding",
              "d": "`sourcetype=openshift:audit`",
              "q": "index=openshift sourcetype=\"openshift:audit\"\n| where NOT match('user.username', \"^system:\")\n| stats count, dc(sourceIPs{}) as unique_ips, values(verb) as verbs by user.username, userAgent\n| eval is_console=if(match(userAgent,\"Mozilla\"),\"console\",\"cli/api\")\n| sort -count",
              "m": "Forward OpenShift audit logs. Filter out system service accounts. Distinguish console users (browser user-agent) from CLI/API users. Track source IPs per user. Alert on access from unexpected locations or outside business hours.",
              "z": "Table (user, access type, IPs, action count), Timechart (access patterns), Sankey (user to action).",
              "kfp": "OpenShift upgrades and operator reconciliations emit bursts of cluster-scoped writes that resemble attacks until joined to ClusterVersion history and change windows. GitOps controllers and backup operators mutate secrets and configmaps rapidly with benign intent; tune priv_ct thresholds using automation_actor metadata. Corporate reverse proxies and split-horizon DNS can make token_ip_reuse fire when a single user crosses two edge addresses; confirm proxy topology before paging identity teams. Browser user-agent matches are heuristic; headless Chromium in CI may resemble consoles unless you join pipeline identities. cidrmatch against a single management supernet per cluster will flag legitimate travel through new bastions after network moves until approved_admin_cidrs.csv refreshes. audit_baseline.csv surprises may reflect intentional policy changes not yet committed to git; avoid critical routing until GRC confirms drift. Seven-day z-scores inside multisearch still inherit maintenance spikes; pair with upgrade freeze lookups. Duplicate HEC shipments double counts; dedupe on auditID when present. Penetration tests and red-team impersonation exercises should carry pentest lookup windows. Some managed offerings redact portions of audit bodies; expect partial nulls in impersonation extracts and validate with vendor guidance.",
              "refs": "[OpenShift audit log policy](https://docs.openshift.com/container-platform/latest/security/audit-log-policy-config.html)",
              "mitre": [
                "T1078",
                "T1550.001",
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenShift audit log forwarding.\n• Ensure the following data sources are available: `sourcetype=openshift:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward OpenShift audit logs. Filter out system service accounts. Distinguish console users (browser user-agent) from CLI/API users. Track source IPs per user. Alert on access from unexpected locations or outside business hours.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"openshift:audit\"\n| where NOT match('user.username', \"^system:\")\n| stats count, dc(sourceIPs{}) as unique_ips, values(verb) as verbs by user.username, userAgent\n| eval is_console=if(match(userAgent,\"Mozilla\"),\"console\",\"cli/api\")\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Console and API Access Audit** — Tracking who accesses the OpenShift console and API, from where, and what actions they perform provides a complete audit trail for compliance and security investigations.\n\nDocumented **Data sources**: `sourcetype=openshift:audit`. **App/TA** (typical add-on context): OpenShift audit log forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: openshift:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"openshift:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where NOT match('user.username', \"^system:\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user.username, userAgent** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **is_console** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, access type, IPs, action count), Timechart (access patterns), Sankey (user to action).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We follow who touched the cluster control panel and the behind-the-scenes application programming surface, how fast they changed sensitive settings, whether someone is pretending to be someone else, and whether the same login token appears from two places at once.",
              "mtype": [
                "Audit"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "kubernetes_openshift"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.24",
              "n": "MachineHealthCheck Remediations",
              "c": "high",
              "f": "advanced",
              "v": "MachineHealthChecks automatically remediate unhealthy nodes by deleting and replacing them. Frequent remediations indicate persistent infrastructure problems such as failing hardware, cloud provider issues, or misconfigurations.",
              "t": "`oc get machinehealthcheck -n openshift-machine-api -o json` scripted input, event forwarding",
              "d": "`sourcetype=openshift:machinehealthcheck`, `sourcetype=kube:events`",
              "q": "index=openshift (sourcetype=\"kube:events\" involvedObject.kind=\"Machine\" reason=\"Remediated\") OR (sourcetype=\"openshift:machinehealthcheck\" currentHealthy<expectedMachines)\n| stats count as remediations, values(involvedObject.name) as machines by namespace, cluster\n| where remediations>0\n| sort -remediations",
              "m": "Scripted input: `oc get machinehealthcheck -n openshift-machine-api -o json`. Parse `status.currentHealthy` vs `status.expectedMachines`. Also capture remediation events. Run every 300 seconds. Alert when remediations exceed 2 per hour or currentHealthy < expectedMachines.",
              "z": "Table (MHC, healthy, expected, remediations), Timeline of remediation events, Single value (total remediations/24h).",
              "kfp": "Single-node remediation after hardware faults produces short bursts of Machine phase churn that self-resolve once replacement joins; require storm thresholds and concurrent event counts before paging leadership. Approved cluster upgrades or machine-os image bumps can temporarily elevate phase events across MachineSets; correlate with ClusterVersion history and change tickets before treating as incident. maxUnhealthy gates that pause remediation during widespread provider outages are intentional protective behavior; pair with cloud status pages rather than assuming Splunk silence means health. Lab clusters that constantly recycle Machines for pipeline tests will trigger velocity hints unless non-production clusters are marked down in escalation_tier lookups. Duplicate HTTP Event Collector submissions from redundant collectors can double phase event counts until dedupe logic lands in summary indexes. Stale ocp_mhc_cluster_capacity_hints.csv rows can mislabel watermark risk; refresh lookups when autoscaler max nodes or MachineSet maxSize changes. Brief ControlPlaneMachineSet rolling updates may show transient unavailable counters; compare dwell time to vendor guidance before executive escalation. Prometheus federation gaps can hide autoscaler corroboration while API snapshots remain authoritative; repair metrics pipelines before muting replica drift alerts. Nodes blocked by MachineConfigPool apply storms mirror UC-3.3.17; cross-reference before blaming machine-api alone.",
              "refs": "[OpenShift MachineHealthCheck documentation](https://docs.openshift.com/container-platform/latest/machine_management/deploying-machine-health-checks.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `oc get machinehealthcheck -n openshift-machine-api -o json` scripted input, event forwarding.\n• Ensure the following data sources are available: `sourcetype=openshift:machinehealthcheck`, `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted input: `oc get machinehealthcheck -n openshift-machine-api -o json`. Parse `status.currentHealthy` vs `status.expectedMachines`. Also capture remediation events. Run every 300 seconds. Alert when remediations exceed 2 per hour or currentHealthy < expectedMachines.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift (sourcetype=\"kube:events\" involvedObject.kind=\"Machine\" reason=\"Remediated\") OR (sourcetype=\"openshift:machinehealthcheck\" currentHealthy<expectedMachines)\n| stats count as remediations, values(involvedObject.name) as machines by namespace, cluster\n| where remediations>0\n| sort -remediations\n```\n\nUnderstanding this SPL\n\n**MachineHealthCheck Remediations** — MachineHealthChecks automatically remediate unhealthy nodes by deleting and replacing them. Frequent remediations indicate persistent infrastructure problems such as failing hardware, cloud provider issues, or misconfigurations.\n\nDocumented **Data sources**: `sourcetype=openshift:machinehealthcheck`, `sourcetype=kube:events`. **App/TA** (typical add-on context): `oc get machinehealthcheck -n openshift-machine-api -o json` scripted input, event forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: kube:events, openshift:machinehealthcheck. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where remediations>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (MHC, healthy, expected, remediations), Timeline of remediation events, Single value (total remediations/24h).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch when the platform automatically replaces unhealthy machines, when too many replacements bunch together, and when replacement runs into account size limits. We treat control-plane rotation as more serious than worker churn because those machines anchor the whole cluster.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "kubernetes_openshift"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.3.25",
              "n": "LimitRange Enforcement Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "LimitRanges set default and maximum resource requests/limits per container or pod in a namespace. When workloads are rejected for exceeding these limits, deployments fail. Tracking enforcement events helps teams right-size their resource requests.",
              "t": "`oc get limitrange -A -o json` scripted input, event forwarding",
              "d": "`sourcetype=openshift:limitrange`, `sourcetype=kube:events`",
              "q": "index=openshift sourcetype=\"kube:events\" reason=\"FailedCreate\"\n| search \"forbidden: exceeded quota\" OR \"must be less than or equal to\" OR \"LimitRange\"\n| stats count by namespace, involvedObject.kind, involvedObject.name, message\n| sort -count",
              "m": "Forward OpenShift events. Filter for FailedCreate events with LimitRange or quota-related messages. Also periodically export LimitRange definitions to compare against actual workload requests. Alert on repeated failures per namespace.",
              "z": "Table (namespace, workload, message), Bar chart by namespace, Timeline of rejections.",
              "kfp": "Short reconciliation lag after ClusterResourceQuota edits can spike acrq_delta_hint until controllers catch up; require persistence across multiple exporter intervals before paging application teams. Legitimate finance-approved raises emit audit mutations that look alarming until approved_quota_changes.csv contains matching ticket identifiers with correct effective times. Developer sandboxes that intentionally sit outside CRQ selectors will trigger selector_gap logic unless CMDB expect_crq_membership flags are accurate. EUS or maintenance windows that freeze snapshots may skew streamstats burn projections; pair sec_to_sat with live oc describe during incidents. OLM or catalog installs that burst pod creates can temporarily inflate denial_rate without sustained tenant impact; compare to deployment success ratios before executive escalation. Duplicate kube_events forwarders double denial_rate; dedupe on involvedObject uid when present. Penetration tests or chaos exercises that exhaust quotas should carry lookup suppressions with start and end epochs. Namespace label changes during live migrations may produce brief overlap_ns positives; confirm intentional dual-parent windows in governance lookups before forcing selector edits. Storage-heavy quotas may saturate requests.storage or persistentvolumeclaims before CPU burn math matters; extend dashboards beyond millicores when finance cares about those dimensions. Break-glass oc edit clusterresourcequota during incidents may lack preemptive CSV rows; use exception_expiry style columns in approved_quota_changes.csv to avoid noisy pages.",
              "refs": "[OpenShift LimitRange documentation](https://docs.openshift.com/container-platform/latest/nodes/clusters/nodes-cluster-limit-ranges.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `oc get limitrange -A -o json` scripted input, event forwarding.\n• Ensure the following data sources are available: `sourcetype=openshift:limitrange`, `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward OpenShift events. Filter for FailedCreate events with LimitRange or quota-related messages. Also periodically export LimitRange definitions to compare against actual workload requests. Alert on repeated failures per namespace.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openshift sourcetype=\"kube:events\" reason=\"FailedCreate\"\n| search \"forbidden: exceeded quota\" OR \"must be less than or equal to\" OR \"LimitRange\"\n| stats count by namespace, involvedObject.kind, involvedObject.name, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**LimitRange Enforcement Tracking** — LimitRanges set default and maximum resource requests/limits per container or pod in a namespace. When workloads are rejected for exceeding these limits, deployments fail. Tracking enforcement events helps teams right-size their resource requests.\n\nDocumented **Data sources**: `sourcetype=openshift:limitrange`, `sourcetype=kube:events`. **App/TA** (typical add-on context): `oc get limitrange -A -o json` scripted input, event forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openshift; **sourcetype**: kube:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openshift, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by namespace, involvedObject.kind, involvedObject.name, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (namespace, workload, message), Bar chart by namespace, Timeline of rejections.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch shared budget caps that span many team spaces on OpenShift, and we catch when labels stop matching so new work lands outside the cap, when one team hogs the shared pool, or when changes to limits skip the approval paper trail you rely on.",
              "mtype": [
                "Capacity",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 25,
            "none": 0
          }
        },
        {
          "i": "3.4",
          "n": "Container Registries",
          "u": [
            {
              "i": "3.4.1",
              "n": "Image Push/Pull Audit",
              "c": "medium",
              "f": "beginner",
              "v": "Audit trail for who pushed or pulled what images. Detects unauthorized access, supply chain concerns, and usage patterns.",
              "t": "Registry webhook to Splunk HEC, API polling",
              "d": "Registry audit/webhook events",
              "q": "index=containers sourcetype=\"registry:audit\"\n| stats count by action, repository, tag, user\n| sort -count",
              "m": "Configure registry webhooks (Harbor, ACR, ECR) to send events to Splunk HEC. Alternatively, poll registry API for audit logs. Track push events (new deployments) and pull events (consumption).",
              "z": "Table (user, image, action, time), Bar chart by repository, Timeline.",
              "kfp": "**webhook_replay_storm** — Harbor retries failed webhook deliveries with exponential backoff up to 10 attempts; a brief Splunk HEC outage followed by recovery replays hours of queued events, creating artificial volume spikes that look like a pull storm. Compare **`_time`** versus **`_indextime`** skew to identify replayed batches and deduplicate using the **`event_data.repository.digest`** plus **`op_time`** pair.\n\n**service_account_noise** — CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, ArgoCD, Flux) generate high-volume pull events during build and deployment cycles that dominate audit dashboards; unless filtered by the **`is_ci_account`** flag or suppressed via the **`authorized_registry_users`** lookup, these legitimate pulls trigger anomalous-volume alerts every deployment window. Baseline normal CI pull rates per project and set thresholds at 2x the 95th percentile.\n\n**mirror_replication_echo** — Harbor project-level replication jobs between a primary and disaster-recovery registry generate paired push events on both sides that appear as distinct human pushes; tag these with a **`replication`** action filter and exclude from supply-chain integrity alerts unless the replication target is an unexpected endpoint.\n\n**scanner_pull_amplification** — Integrated vulnerability scanners (Trivy, Grype, Clair) pull every layer of every image on scan, generating pull counts 5–20x higher than actual developer or deployment activity; distinguish scanner actors by username pattern (`scanner-*`, `trivy-*`) or correlate with **`SCANNING_COMPLETED`** webhook events.\n\n**gc_delete_burst** — Harbor scheduled garbage collection removes unreferenced blobs and manifests, producing hundreds of delete events in a narrow window that looks like mass artifact destruction; correlate timestamps with the Harbor GC schedule in your **`maintenance_windows`** lookup and suppress alerts during those windows.\n\n**anonymous_pull_allowance** — Public-facing projects with anonymous pull enabled generate pull events without an actor identity, which the unauthorized-pull search flags as suspicious; whitelist public project names in the lookup or add a **`project_visibility`** field from the Harbor API to exclude public-project pulls.\n\n**robot_credential_rotation** — When Harbor robot accounts are rotated, the old credential continues generating events during the overlap window while the new credential ramps up, creating dual-actor pulls on the same images; correlate with robot creation timestamps from **`harbor:audit`** `operation=create resource_type=robot` to distinguish rotation from credential sharing.\n\n**tag_retention_cleanup** — Harbor tag retention policies automatically delete tags exceeding retention rules, generating delete events that are intentional lifecycle management rather than unauthorized destruction; filter by checking whether **`event_type`** is **`TAG_RETENTION`** rather than **`DELETE_ARTIFACT`**.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Registry webhook to Splunk HEC, API polling.\n• Ensure the following data sources are available: Registry audit/webhook events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure registry webhooks (Harbor, ACR, ECR) to send events to Splunk HEC. Alternatively, poll registry API for audit logs. Track push events (new deployments) and pull events (consumption).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"registry:audit\"\n| stats count by action, repository, tag, user\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Image Push/Pull Audit** — Audit trail for who pushed or pulled what images. Detects unauthorized access, supply chain concerns, and usage patterns.\n\nDocumented **Data sources**: Registry audit/webhook events. **App/TA** (typical add-on context): Registry webhook to Splunk HEC, API polling. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: registry:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"registry:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by action, repository, tag, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, image, action, time), Bar chart by repository, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We keep a record of every time someone uploads, downloads, or removes a software package from the team's shared storage, so we can spot unauthorized access or tampering before anything harmful reaches customers.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.4.2",
              "n": "Vulnerability Scan Results",
              "c": "high",
              "f": "intermediate",
              "v": "Registry-level scanning catches vulnerabilities before images are deployed. Trending shows whether security posture is improving or degrading.",
              "t": "Custom input (Harbor, ACR, ECR scan APIs)",
              "d": "Scan result JSON from registry API",
              "q": "index=containers sourcetype=\"registry:scan\"\n| stats sum(critical) as critical, sum(high) as high, sum(medium) as medium by repository, tag\n| where critical > 0\n| sort -critical",
              "m": "Poll registry scan APIs for results or configure webhook notifications on scan completion. Forward to Splunk via HEC. Alert on critical vulnerabilities in images tagged for production.",
              "z": "Stacked bar chart (vulns by severity per image), Table, Trend line (vulns over time).",
              "kfp": "**base_image_inheritance** — CVEs reported against the base OS layer (alpine, debian, ubuntu) appear in every image built on that base, inflating the total count even though a single base-image update would remediate all of them. Group CVEs by `pkg_name` and base image to identify this pattern and prioritize the base-image rebuild over individual image patches.\n\n**scanner_database_lag** — Trivy and Grype update their vulnerability databases on different schedules; a scan run immediately after a new CVE is published may not detect it until the database refreshes (typically within 6–12 hours). Re-scan images after database updates to ensure coverage.\n\n**disputed_cve_noise** — Some CVEs are disputed or marked as \"not a vulnerability\" by the upstream maintainer but remain in the NVD database. These inflate severity counts without representing real risk. Cross-reference flagged CVEs with upstream advisories and add confirmed false positives to `cve_exceptions.csv`.\n\n**dev_image_pollution** — Development and test images (tagged `dev-*`, `test-*`, `snapshot-*`) often contain debug tools, extra packages, and unpatched dependencies that inflate vulnerability counts. Filter by image tag pattern or namespace to separate production image risk from development noise.\n\n**go_binary_phantom** — Go binaries compiled with older Go toolchains report CVEs against the Go standard library even when the vulnerable code path is not reachable from the application. Trivy's Go module analysis may overreport; verify reachability before prioritizing remediation.\n\n**kernel_cve_in_container** — Scanners sometimes report kernel CVEs against container images even though containers share the host kernel and cannot independently patch it. These CVEs should be tracked at the node/host level, not the container image level; suppress with an exceptions lookup keyed to kernel-related package names.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom input (Harbor, ACR, ECR scan APIs).\n• Ensure the following data sources are available: Scan result JSON from registry API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll registry scan APIs for results or configure webhook notifications on scan completion. Forward to Splunk via HEC. Alert on critical vulnerabilities in images tagged for production.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"registry:scan\"\n| stats sum(critical) as critical, sum(high) as high, sum(medium) as medium by repository, tag\n| where critical > 0\n| sort -critical\n```\n\nUnderstanding this SPL\n\n**Vulnerability Scan Results** — Registry-level scanning catches vulnerabilities before images are deployed. Trending shows whether security posture is improving or degrading.\n\nDocumented **Data sources**: Scan result JSON from registry API. **App/TA** (typical add-on context): Custom input (Harbor, ACR, ECR scan APIs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: registry:scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"registry:scan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by repository, tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where critical > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar chart (vulns by severity per image), Table, Trend line (vulns over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "After each software package is built, we automatically check it for known security weaknesses and send the results to a central dashboard so the team can fix the most dangerous ones first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.4.3",
              "n": "Storage Quota Monitoring",
              "c": "low",
              "f": "intermediate",
              "v": "Registry storage exhaustion prevents image pushes, blocking CI/CD pipelines. Monitoring enables proactive cleanup policy tuning.",
              "t": "Custom API input",
              "d": "Registry storage API metrics",
              "q": "index=containers sourcetype=\"registry:metrics\"\n| stats latest(storage_used_bytes) as used, latest(storage_quota_bytes) as quota by registry\n| eval used_pct = round(used / quota * 100, 1)\n| where used_pct > 80",
              "m": "Poll registry API for storage metrics. Alert when usage exceeds 80%. Review and tune image retention/garbage collection policies.",
              "z": "Gauge (storage usage), Line chart (growth trend), Table.",
              "kfp": "**retention_reclaim_overshoot** — When Harbor runs a tag retention policy followed by garbage collection, storage usage drops sharply, then the next batch of CI/CD pushes refills the reclaimed space. The growth-rate search may flag this project as GROWING_FAST even though the net change over a week is near zero. Use a 7-day moving average rather than daily delta for growth classification.\n\n**replication_double_count** — Projects using Harbor replication to a secondary registry consume quota on both the source and destination projects. The source project appears to grow faster than expected because replicated artifacts count against the source quota until the next garbage collection cycle clears unreferenced blobs.\n\n**trivy_database_cache** — The Trivy vulnerability database cache stored in the Harbor data volume contributes to PV-level usage metrics but is not counted against project quotas. PV usage may appear higher than the sum of project quotas due to this shared cache. Distinguish by comparing PV usage with the sum of `harbor_project_quota_usage_byte` values.\n\n**untagged_artifact_accumulation** — Continuous image rebuilds in CI/CD pipelines replace tagged artifacts but leave behind untagged manifests and layers that consume quota until garbage collection runs. A project may appear to grow steadily despite having the same number of tagged images. Verify by checking untagged artifact count in Harbor UI.\n\n**quota_limit_change_step** — When an administrator increases a project's quota limit, the usage_pct drops instantly without any actual storage reclamation. The growth-rate search may misinterpret this as a SHRINKING trend. Correlate with `sourcetype=harbor:audit operation=update resource_type=quota` to identify administrative quota changes.\n\n**blob_dedup_variance** — Harbor uses content-addressable storage where identical layers across images are stored once. Project quota accounting may show different effective usage than the physical storage consumed because deduplication savings are applied at the storage level, not the quota level.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input.\n• Ensure the following data sources are available: Registry storage API metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll registry API for storage metrics. Alert when usage exceeds 80%. Review and tune image retention/garbage collection policies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"registry:metrics\"\n| stats latest(storage_used_bytes) as used, latest(storage_quota_bytes) as quota by registry\n| eval used_pct = round(used / quota * 100, 1)\n| where used_pct > 80\n```\n\nUnderstanding this SPL\n\n**Storage Quota Monitoring** — Registry storage exhaustion prevents image pushes, blocking CI/CD pipelines. Monitoring enables proactive cleanup policy tuning.\n\nDocumented **Data sources**: Registry storage API metrics. **App/TA** (typical add-on context): Custom API input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: registry:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"registry:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by registry** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (storage usage), Line chart (growth trend), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We measure how full the team's shared storage space is getting, alerting them when it is nearly full so they can clean up old files before new work gets blocked.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.4.4",
              "n": "Registry Image Vulnerability Scan Results",
              "c": "high",
              "f": "intermediate",
              "v": "Images with known CVEs in the registry pose risk when deployed. Tracking scan results ensures only approved images are used and triggers remediation.",
              "t": "Custom API input (Trivy, Clair, registry scanner)",
              "d": "Registry vulnerability scan output (JSON/CSV)",
              "q": "index=containers sourcetype=\"registry:vuln_scan\"\n| search severity=\"Critical\" OR severity=\"High\"\n| stats count as vuln_count, values(cve_id) as cves by image_tag, registry\n| where vuln_count > 0\n| sort -vuln_count",
              "m": "Run vulnerability scanner against registry images (e.g. Trivy, Clair) and ingest results. Alert when Critical/High CVEs appear. Enforce policy to block deployment of failing images.",
              "z": "Table (image, CVE count, severity), Bar chart by image, Single value (images with critical vulns).",
              "kfp": "**audit_mode_inflation** — When the Kubernetes admission controller runs in audit-only mode rather than enforce mode, it logs policy violations as informational events rather than actual denials. The admission denial search counts these as blocked deployments even though the image was allowed to run. Filter by the admission controller's mode field or cross-reference with running containers.\n\n**rescan_verdict_flip** — Harbor's nightly scheduled rescan may change a previously PASSED image to BLOCKED when new CVEs are published against its packages. The compliance search shows a sudden compliance drop that reflects new vulnerability discoveries, not a regression in the build pipeline. Compare with the CVE publication date to distinguish.\n\n**multi_scanner_disagreement** — If both Trivy and Clair are configured as Harbor scanners, they may produce different severity classifications for the same CVE due to different scoring methodologies. An image may show as PASSED by one scanner and BLOCKED by another. Standardize on a single scanner for policy enforcement.\n\n**tag_immutability_bypass** — Images pushed with a mutable tag (e.g., `:latest`) can be replaced after scanning, resulting in a PASSED verdict for a different image than what is currently tagged. Enable tag immutability in Harbor project settings to prevent this bypass.\n\n**namespace_exclusion_gap** — Admission controllers typically exclude system namespaces (kube-system, istio-system) from image scan policies. Images deployed to these namespaces bypass the enforcement gate legitimately. Document excluded namespaces in the policy exception lookup.\n\n**init_container_scan_skip** — Some admission policies only validate the main container images but skip init containers and ephemeral containers. A vulnerable init container image can be deployed without triggering an admission denial. Extend the admission policy to cover all container types.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input (Trivy, Clair, registry scanner).\n• Ensure the following data sources are available: Registry vulnerability scan output (JSON/CSV).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun vulnerability scanner against registry images (e.g. Trivy, Clair) and ingest results. Alert when Critical/High CVEs appear. Enforce policy to block deployment of failing images.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"registry:vuln_scan\"\n| search severity=\"Critical\" OR severity=\"High\"\n| stats count as vuln_count, values(cve_id) as cves by image_tag, registry\n| where vuln_count > 0\n| sort -vuln_count\n```\n\nUnderstanding this SPL\n\n**Registry Image Vulnerability Scan Results** — Images with known CVEs in the registry pose risk when deployed. Tracking scan results ensures only approved images are used and triggers remediation.\n\nDocumented **Data sources**: Registry vulnerability scan output (JSON/CSV). **App/TA** (typical add-on context): Custom API input (Trivy, Clair, registry scanner). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: registry:vuln_scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"registry:vuln_scan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by image_tag, registry** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where vuln_count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (image, CVE count, severity), Bar chart by image, Single value (images with critical vulns).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "Before any software package is allowed to run in our systems, it must pass a security inspection — we track which packages passed, which were blocked, and whether any slipped through without inspection.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.4.5",
              "n": "Registry Authentication and Authorization Failures",
              "c": "high",
              "f": "beginner",
              "v": "Failed logins and denied pushes/pulls may indicate credential abuse or misconfiguration. Detecting anomalies supports security and access troubleshooting.",
              "t": "Registry audit logs (Harbor, Docker Registry, ECR)",
              "d": "Registry audit log API or log files",
              "q": "index=containers sourcetype=\"registry:audit\" (action=\"login_failed\" OR action=\"pull_denied\" OR action=\"push_denied\")\n| bin _time span=1h\n| stats count by user, action, repository, _time\n| where count > 10\n| sort -count",
              "m": "Forward registry audit logs to Splunk. Extract user, action, repository. Alert on high failure rates or denied actions for critical repos.",
              "z": "Table (user, action, count), Timechart of failures, Events list.",
              "kfp": "**robot_credential_rotation_burst** — When Harbor robot account credentials are rotated, the old secret continues to be used by CI/CD pipelines until the new secret is propagated, generating a burst of 401 failures that resolves within minutes. Correlate with robot account creation events and suppress during planned rotation windows.\n\n**docker_client_auth_flow** — The Docker client's authentication handshake first sends an unauthenticated request to the registry, receives a 401 with a `WWW-Authenticate` challenge header, then retries with credentials. These initial 401s are part of the normal auth protocol, not failures. Filter by checking if the same actor has a subsequent 200 within the same second.\n\n**imagepullsecret_expiry_cascade** — When a Kubernetes imagePullSecret expires, every pod in the namespace that references it enters ImagePullBackOff simultaneously. This produces a burst of auth failures proportional to the number of pods, not the number of attack attempts. Check whether all affected pods reference the same Secret name.\n\n**ldap_backend_timeout** — Harbor configured with LDAP authentication may return 401 during LDAP server unavailability even for valid credentials. The access log shows auth failures, but the root cause is infrastructure, not credential abuse. Correlate with LDAP server health monitoring.\n\n**catalog_endpoint_enumeration** — Security scanners and compliance tools routinely call `/v2/_catalog` to inventory images, which triggers 403 responses if the service account lacks catalog scope. These are legitimate compliance activities, not reconnaissance. Add scanner service accounts to the known_service_accounts lookup.\n\n**proxy_cache_stale_token** — When Harbor is fronted by a CDN or reverse proxy that caches authentication tokens, clients may present stale tokens after rotation, generating a wave of 401s that resolves when the cache refreshes. Check for uniform user agent strings indicating proxy-mediated requests.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Registry audit logs (Harbor, Docker Registry, ECR).\n• Ensure the following data sources are available: Registry audit log API or log files.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward registry audit logs to Splunk. Extract user, action, repository. Alert on high failure rates or denied actions for critical repos.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"registry:audit\" (action=\"login_failed\" OR action=\"pull_denied\" OR action=\"push_denied\")\n| bin _time span=1h\n| stats count by user, action, repository, _time\n| where count > 10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Registry Authentication and Authorization Failures** — Failed logins and denied pushes/pulls may indicate credential abuse or misconfiguration. Detecting anomalies supports security and access troubleshooting.\n\nDocumented **Data sources**: Registry audit log API or log files. **App/TA** (typical add-on context): Registry audit logs (Harbor, Docker Registry, ECR). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: registry:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"registry:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by user, action, repository, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.action span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Registry Authentication and Authorization Failures** — Failed logins and denied pushes/pulls may indicate credential abuse or misconfiguration. Detecting anomalies supports security and access troubleshooting.\n\nDocumented **Data sources**: Registry audit log API or log files. **App/TA** (typical add-on context): Registry audit logs (Harbor, Docker Registry, ECR). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, action, count), Timechart of failures, Events list.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We watch for anyone trying to access our software storage with the wrong password or without permission, catching repeated break-in attempts and alerting when legitimate workers get locked out because their access expired.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.action span=1h | sort - count",
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.4.6",
              "n": "Registry Replication Lag and Consistency",
              "c": "medium",
              "f": "advanced",
              "v": "Replication lag between registry replicas can cause inconsistent image availability and failed pulls. Monitoring supports HA and DR assurance.",
              "t": "Custom API input (registry replication status)",
              "d": "Registry replication API or admin metrics",
              "q": "index=containers sourcetype=\"registry:replication\"\n| stats latest(lag_seconds) as lag, latest(status) as status by source_registry, target_registry\n| where lag > 300 OR status != \"success\"\n| table source_registry target_registry lag status _time",
              "m": "Poll replication status from registry (e.g. Harbor replication jobs). Ingest lag and status. Alert when lag exceeds 5 minutes or status is failed.",
              "z": "Line chart (lag over time), Table (source, target, lag, status), Single value (max lag).",
              "kfp": "**scheduled_overlap_contention** — When multiple replication policies execute simultaneously on the same Harbor jobservice, worker pool contention increases execution duration for all policies. This creates artificial lag spikes that do not indicate network or registry health problems. Stagger replication schedules and correlate lag spikes with concurrent execution counts.\n\n**large_artifact_skew** — A single multi-gigabyte container image (such as ML training images or monolithic legacy applications) can dominate the execution duration, making the average lag appear high even though all other artifacts replicate quickly. Examine per-task detail to identify outlier artifacts and consider separate policies for large images.\n\n**garbage_collection_windows** — Harbor garbage collection temporarily locks blob storage, causing concurrent replication tasks to fail with transient errors or timeout. These failures are expected during scheduled GC windows. Correlate replication failures with GC execution times from harbor-core logs.\n\n**network_maintenance_windows** — Planned WAN maintenance between registry sites causes temporary replication failures that self-resolve after the maintenance window. Cross-reference with change management records and suppress alerts during documented maintenance periods.\n\n**event_based_burst** — Event-based replication policies trigger on every image push, creating bursts of small replication executions during active CI/CD hours. The sheer volume of executions may appear as increased failure rate when individual transient network errors affect a small percentage. Aggregate by policy and time window rather than individual execution.\n\n**target_registry_capacity** — When the target registry's storage approaches capacity, replication tasks fail with disk-full errors that look like replication failures but are actually a storage provisioning issue. Monitor target registry storage quotas (UC-3.4.3) alongside replication health.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input (registry replication status).\n• Ensure the following data sources are available: Registry replication API or admin metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll replication status from registry (e.g. Harbor replication jobs). Ingest lag and status. Alert when lag exceeds 5 minutes or status is failed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"registry:replication\"\n| stats latest(lag_seconds) as lag, latest(status) as status by source_registry, target_registry\n| where lag > 300 OR status != \"success\"\n| table source_registry target_registry lag status _time\n```\n\nUnderstanding this SPL\n\n**Registry Replication Lag and Consistency** — Replication lag between registry replicas can cause inconsistent image availability and failed pulls. Monitoring supports HA and DR assurance.\n\nDocumented **Data sources**: Registry replication API or admin metrics. **App/TA** (typical add-on context): Custom API input (registry replication status). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: registry:replication. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"registry:replication\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by source_registry, target_registry** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where lag > 300 OR status != \"success\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Registry Replication Lag and Consistency**): table source_registry target_registry lag status _time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (lag over time), Table (source, target, lag, status), Single value (max lag).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "When our software library has copies in different buildings, we track how long it takes for new items to arrive at each copy and flag when any building falls behind or misses a delivery.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.4.7",
              "n": "Registry Image Tag Retention and Orphan Cleanup",
              "c": "low",
              "f": "intermediate",
              "v": "Untagged and old tags consume storage and complicate governance. Tracking supports retention policy tuning and cleanup automation.",
              "t": "Custom API input (registry catalog API)",
              "d": "Registry catalog, image manifest API",
              "q": "index=containers sourcetype=\"registry:tags\"\n| eval age_days=round((now()-tag_time)/86400, 0)\n| stats count as tag_count, values(tag) as tags by repository\n| where tag_count > 100 OR age_days > 90\n| table repository tag_count age_days",
              "m": "List repositories and tags via registry API. Compute tag count and oldest tag age per repo. Report repos with excessive tags or very old tags for retention policy review.",
              "z": "Table (repository, tag count, oldest tag), Bar chart (tags per repo).",
              "kfp": "**pinned_production_tags** — Long-lived production tags (e.g., specific release versions pinned in deployment manifests) may appear as stale because they were pushed months ago, but they are actively referenced by running workloads. Cross-reference with Kubernetes pod image references to distinguish actively deployed tags from truly orphaned ones.\n\n**shared_base_image_layers** — Multiple tags sharing the same base image layers inflate per-tag size calculations because Harbor reports the compressed layer size per artifact. The total reclaimable storage from deleting orphan tags is often significantly less than the sum of their reported sizes due to layer deduplication.\n\n**ci_cd_build_cache_tags** — CI/CD systems that use build cache tags (e.g., `buildcache-<hash>`) create many tags that are only pulled by the CI system itself. These appear as orphans because no production workload pulls them, but they serve an important build performance function. Exclude known CI cache tag patterns from orphan classification.\n\n**immutable_tag_policies** — Some organizations enforce immutable tag policies where tags are never overwritten or deleted for audit and compliance reasons. Repositories under such policies will always appear in the retention report with high tag counts. Tag these repositories in the project-to-team lookup as `retention_exempt=true`.\n\n**gc_scheduling_gap** — Harbor GC must be explicitly configured and scheduled. A new Harbor deployment may have no GC schedule configured, causing the GC effectiveness search to show zero freed storage indefinitely. This is a configuration gap, not a collection error. The first remediation step is to configure a GC schedule.\n\n**multi_architecture_manifests** — A single logical tag (e.g., `v1.0`) with multi-architecture support (amd64, arm64) creates multiple artifact records in Harbor. The tag inventory may count these as separate artifacts, inflating the tag count above the actual number of logical versions.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input (registry catalog API).\n• Ensure the following data sources are available: Registry catalog, image manifest API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nList repositories and tags via registry API. Compute tag count and oldest tag age per repo. Report repos with excessive tags or very old tags for retention policy review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"registry:tags\"\n| eval age_days=round((now()-tag_time)/86400, 0)\n| stats count as tag_count, values(tag) as tags by repository\n| where tag_count > 100 OR age_days > 90\n| table repository tag_count age_days\n```\n\nUnderstanding this SPL\n\n**Registry Image Tag Retention and Orphan Cleanup** — Untagged and old tags consume storage and complicate governance. Tracking supports retention policy tuning and cleanup automation.\n\nDocumented **Data sources**: Registry catalog, image manifest API. **App/TA** (typical add-on context): Custom API input (registry catalog API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: registry:tags. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"registry:tags\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by repository** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where tag_count > 100 OR age_days > 90` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Registry Image Tag Retention and Orphan Cleanup**): table repository tag_count age_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (repository, tag count, oldest tag), Bar chart (tags per repo).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "Just as a library periodically removes books that nobody has borrowed in years to make room for new ones, we track which software packages in our storage have not been used and flag them for cleanup.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.4.8",
              "n": "Registry TLS and Certificate Expiration",
              "c": "critical",
              "f": "beginner",
              "v": "Expired or expiring registry certificates break all pulls and pushes. Proactive monitoring prevents pipeline and runtime failures.",
              "t": "Custom scripted input (openssl s_client, registry health API)",
              "d": "TLS certificate from registry endpoint",
              "q": "index=containers sourcetype=\"registry:tls\"\n| eval days_left=round((expiry_time-now())/86400, 0)\n| where days_left < 30\n| table registry_host expiry_time days_left subject\n| sort days_left",
              "m": "Script that connects to registry HTTPS and extracts cert expiry (e.g. `openssl s_client -connect registry:443 -servername registry`). Ingest daily. Alert when expiry is within 30 days.",
              "z": "Table (registry, expiry, days left), Single value (soonest expiry), Gauge (days remaining).",
              "kfp": "**load_balancer_tls_termination** — If a load balancer or CDN terminates TLS in front of the registry, the probe sees the load balancer's certificate, not the registry's. The load balancer certificate may have a different expiry than the backend registry certificate. Probe both endpoints if the architecture includes TLS termination layers.\n\n**acme_auto_renewal** — Registries using Let's Encrypt or ACME-based certificate automation renew certificates automatically 30 days before expiry. The WARNING alert triggers but the certificate is renewed without manual intervention. Suppress WARNING alerts for ACME-managed registries and only alert on CRITICAL (indicating the automation failed).\n\n**internal_ca_short_lived** — Organizations using internal PKI often issue short-lived certificates (30–90 days) with automated renewal. These certificates frequently trigger WARNING and NOTICE alerts even though the renewal process is automated. Configure per-registry urgency thresholds in the lookup to match the expected certificate lifecycle.\n\n**certificate_pinning_rotation** — Some container runtimes or Kubernetes distributions pin certificates and require explicit trust store updates when certificates are rotated. A new certificate that passes the probe may still cause pull failures on nodes with outdated trust stores. Cross-reference certificate serial numbers between the probe and kubelet logs.\n\n**wildcard_certificate_scope** — A wildcard certificate covering `*.registry.example.com` is probed once but covers multiple registry subdomains. If the wildcard certificate expires, all subdomains fail simultaneously. Track wildcard certificates separately and alert at a higher urgency level due to the larger blast radius.\n\n**self_signed_development** — Development and testing environments often use self-signed certificates that are always flagged as untrusted by the chain validation. Exclude known development registries from the untrusted CA alert or tag them in the registry inventory lookup.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (openssl s_client, registry health API).\n• Ensure the following data sources are available: TLS certificate from registry endpoint.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScript that connects to registry HTTPS and extracts cert expiry (e.g. `openssl s_client -connect registry:443 -servername registry`). Ingest daily. Alert when expiry is within 30 days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"registry:tls\"\n| eval days_left=round((expiry_time-now())/86400, 0)\n| where days_left < 30\n| table registry_host expiry_time days_left subject\n| sort days_left\n```\n\nUnderstanding this SPL\n\n**Registry TLS and Certificate Expiration** — Expired or expiring registry certificates break all pulls and pushes. Proactive monitoring prevents pipeline and runtime failures.\n\nDocumented **Data sources**: TLS certificate from registry endpoint. **App/TA** (typical add-on context): Custom scripted input (openssl s_client, registry health API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: registry:tls. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"registry:tls\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left < 30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Registry TLS and Certificate Expiration**): table registry_host expiry_time days_left subject\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (registry, expiry, days left), Single value (soonest expiry), Gauge (days remaining).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "Every few hours we check the digital identity cards of our software warehouses to see if any are about to expire, because an expired card means nobody can pick up or deliver packages until a new one is issued.",
              "mtype": [
                "Availability",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.4.9",
              "n": "Container Image Vulnerability Age",
              "c": "critical",
              "f": "advanced",
              "v": "Images running with known CVEs older than N days.",
              "t": "Custom (Trivy, Grype, or registry scanner output)",
              "d": "vulnerability scanner JSON output",
              "q": "index=containers (sourcetype=\"trivy:scan\" OR sourcetype=\"grype:scan\" OR sourcetype=\"registry:vuln_scan\")\n| eval vuln_date = coalesce(discovered_at, PublishedDate, published_date)\n| eval vuln_age_days = round((now() - strptime(vuln_date, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| where (Severity=\"Critical\" OR Severity=\"High\") AND vuln_age_days > 7\n| stats count as vuln_count, min(vuln_age_days) as oldest_vuln_days by image, tag, Severity\n| sort -oldest_vuln_days -vuln_count",
              "m": "Run Trivy, Grype, or registry-native scanner (Harbor, ACR) against running images or registry catalog. Output JSON with image, CVE ID, severity, and discovered_at (or published date). Forward to Splunk via HEC. Alert when Critical/High CVEs have been known for >7 days (configurable). Integrate with CI/CD to block deployment of images with aged critical vulns. Track remediation SLA.",
              "z": "Table (image, tag, severity, vuln count, oldest days), Bar chart (images by vuln age), Single value (images with aged critical vulns).",
              "kfp": "**disputed_cve_classification** — Some CVEs are disputed by the software vendor or classified at a higher severity by the NVD than the vendor considers appropriate. A Critical CVE that the vendor has assessed as Low risk inflates the SLA breach count. Cross-reference with vendor security advisories and maintain a dispute lookup to reclassify known disputed CVEs.\n\n**base_image_inherited_cves** — Many CVEs come from the base image (debian, ubuntu, alpine) packages that are not used by the application. These packages exist in the image layer but are never executed. The vulnerability is technically present but not exploitable in context. Use Trivy's VEX (Vulnerability Exploitability eXchange) support to mark non-exploitable CVEs.\n\n**scanner_database_lag** — Vulnerability scanners depend on their vulnerability database being current. If the database is outdated, recently published CVEs will not appear in scan results, creating a false sense of compliance. Monitor the scanner's database update timestamp and alert when it is more than 24 hours stale.\n\n**image_tag_reuse** — If image tags are reused (e.g., the same `latest` tag points to different digests over time), scan results may reference an image that has been replaced by a newer build. The aged CVE may have been fixed in the current image but the scan result refers to the old digest. Use image digests rather than tags for precise cross-referencing.\n\n**fix_version_not_yet_released** — A CVE may have a known fix in upstream code but no released package version yet. The scan shows the CVE as fixable but no update is available to install. Track the `FixedVersion` field — if it contains a version not yet released in the package repository, the CVE is not yet actionable.\n\n**dev_namespace_noise** — Development and staging namespaces intentionally run older image versions for testing purposes. These images may have many aged CVEs that do not represent production risk. Use namespace classification to separate production SLA tracking from non-production environments.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Trivy, Grype, or registry scanner output).\n• Ensure the following data sources are available: vulnerability scanner JSON output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun Trivy, Grype, or registry-native scanner (Harbor, ACR) against running images or registry catalog. Output JSON with image, CVE ID, severity, and discovered_at (or published date). Forward to Splunk via HEC. Alert when Critical/High CVEs have been known for >7 days (configurable). Integrate with CI/CD to block deployment of images with aged critical vulns. Track remediation SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers (sourcetype=\"trivy:scan\" OR sourcetype=\"grype:scan\" OR sourcetype=\"registry:vuln_scan\")\n| eval vuln_date = coalesce(discovered_at, PublishedDate, published_date)\n| eval vuln_age_days = round((now() - strptime(vuln_date, \"%Y-%m-%dT%H:%M:%S\")) / 86400, 0)\n| where (Severity=\"Critical\" OR Severity=\"High\") AND vuln_age_days > 7\n| stats count as vuln_count, min(vuln_age_days) as oldest_vuln_days by image, tag, Severity\n| sort -oldest_vuln_days -vuln_count\n```\n\nUnderstanding this SPL\n\n**Container Image Vulnerability Age** — Images running with known CVEs older than N days.\n\nDocumented **Data sources**: vulnerability scanner JSON output. **App/TA** (typical add-on context): Custom (Trivy, Grype, or registry scanner output). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: trivy:scan, grype:scan, registry:vuln_scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"trivy:scan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **vuln_date** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **vuln_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (Severity=\"Critical\" OR Severity=\"High\") AND vuln_age_days > 7` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by image, tag, Severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (image, tag, severity, vuln count, oldest days), Bar chart (images by vuln age), Single value (images with aged critical vulns).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We track how many days each known security problem has been sitting unpatched in our software packages and flag the ones that have been ignored past the deadline, so the team fixes the most overdue ones first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.6,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 9,
            "none": 0
          }
        },
        {
          "i": "3.5",
          "n": "Service Mesh & Serverless Containers",
          "u": [
            {
              "i": "3.5.1",
              "n": "Istio Mesh Traffic Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Baseline and anomaly detection on east-west traffic prevents silent degradation and helps isolate failing workloads before user impact spreads.",
              "t": "Splunk OTel Collector (Prometheus receiver scraping Istio sidecar `15090`), `istio-mixer`/`istio` telemetry",
              "d": "`sourcetype=otel:metrics` or `sourcetype=prometheus:istio`",
              "q": "index=containers (sourcetype=\"otel:metrics\" OR sourcetype=\"prometheus:istio\")\n| where like(metric_name, \"istio_requests_total%\") OR like(name, \"istio_requests_total%\")\n| eval rc=tonumber(response_code)\n| stats sum(value) as requests by destination_service_name, reporter, rc\n| eval is_error=if(rc>=500 OR rc=0 OR isnull(rc), 1, 0)\n| stats sum(requests) as total, sum(eval(if(is_error=1, requests, 0))) as err by destination_service_name\n| eval err_rate=round(100*err/total, 2)\n| where err_rate > 1\n| sort -err_rate",
              "m": "Deploy the OTel Collector with a Prometheus receiver targeting Istio workload and ingress scrape configs (per Istio observability docs). Forward metrics to Splunk via OTLP or Splunk HEC. Normalize `destination_service_name` and `response_code` labels into dimensions. Build baselines per service pair and alert on sustained error-rate spikes versus historical traffic.",
              "z": "Time chart (requests and 5xx by destination), Table (top error rates by service), Single value (mesh-wide error %).",
              "kfp": "**canary_release_noise** — During canary or blue-green deployments, the new version may exhibit elevated error rates for the first few minutes as the Envoy sidecar warms up connection pools and the service initializes caches; correlate with `sourcetype=kube:events reason=ScalingReplicaSet` timestamps and suppress alerts for 10 minutes after a deployment event in the same namespace.\n\n**health_check_amplification** — Kubernetes liveness and readiness probes generate HTTP requests that flow through the Istio sidecar and contribute to `istio_requests_total` counters; if a probe endpoint returns intermittent failures, it inflates the error rate for the entire service despite having zero user impact. Exclude probe paths by filtering `path!=\"/healthz\" AND path!=\"/ready\"` in the access-log variant.\n\n**retry_storm_double_count** — Istio automatic retries (configured via DestinationRule `retryPolicy`) cause the source reporter to record multiple attempts for a single logical request; when calculating error rates from `reporter=source` metrics, the denominator is inflated by retry attempts. Always prefer `reporter=destination` for accurate final-response-code percentages.\n\n**circuit_breaker_trip** — When Envoy trips an outlier detection circuit breaker on a destination, subsequent requests to that destination return 503 from the source sidecar without reaching the destination at all; these synthetic 503s appear as destination errors but originate from the mesh, not the application. Check `envoy_cluster_outlier_detection_ejections_active` to distinguish mesh-generated errors from application errors.\n\n**mtls_handshake_spike** — Periodic mTLS certificate rotation by Istio's citadel/istiod causes brief TLS handshake failures across the mesh, producing a coordinated burst of connection errors that resolves within 30–60 seconds. Correlate with `istiod` certificate-rotation logs and suppress alerts shorter than 90 seconds.\n\n**low_volume_service_noise** — Internal services handling < 50 requests per scrape interval (batch jobs, admin endpoints, internal tooling) have inherently noisy error rates where a single failed request creates a 10–50% error rate. The `traffic_class` filter in the SPL excludes these, but adjust the threshold if your mesh has many legitimate low-volume services.\n\n**grpc_status_mismatch** — gRPC services report errors via `grpc_response_status` rather than HTTP `response_code`; a gRPC `UNAVAILABLE` (status 14) maps to HTTP 503 but appears as `response_code=200` with a non-zero `grpc_response_status` in the Istio metrics. The SPL `coalesce` chain handles this, but verify gRPC-heavy services have correct error classification by checking `grpc_response_status` field presence.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector (Prometheus receiver scraping Istio sidecar `15090`), `istio-mixer`/`istio` telemetry.\n• Ensure the following data sources are available: `sourcetype=otel:metrics` or `sourcetype=prometheus:istio`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy the OTel Collector with a Prometheus receiver targeting Istio workload and ingress scrape configs (per Istio observability docs). Forward metrics to Splunk via OTLP or Splunk HEC. Normalize `destination_service_name` and `response_code` labels into dimensions. Build baselines per service pair and alert on sustained error-rate spikes versus historical traffic.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers (sourcetype=\"otel:metrics\" OR sourcetype=\"prometheus:istio\")\n| where like(metric_name, \"istio_requests_total%\") OR like(name, \"istio_requests_total%\")\n| eval rc=tonumber(response_code)\n| stats sum(value) as requests by destination_service_name, reporter, rc\n| eval is_error=if(rc>=500 OR rc=0 OR isnull(rc), 1, 0)\n| stats sum(requests) as total, sum(eval(if(is_error=1, requests, 0))) as err by destination_service_name\n| eval err_rate=round(100*err/total, 2)\n| where err_rate > 1\n| sort -err_rate\n```\n\nUnderstanding this SPL\n\n**Istio Mesh Traffic Monitoring** — Baseline and anomaly detection on east-west traffic prevents silent degradation and helps isolate failing workloads before user impact spreads.\n\nDocumented **Data sources**: `sourcetype=otel:metrics` or `sourcetype=prometheus:istio`. **App/TA** (typical add-on context): Splunk OTel Collector (Prometheus receiver scraping Istio sidecar `15090`), `istio-mixer`/`istio` telemetry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: otel:metrics, prometheus:istio. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"otel:metrics\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where like(metric_name, \"istio_requests_total%\") OR like(name, \"istio_requests_total%\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **rc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by destination_service_name, reporter, rc** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **is_error** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by destination_service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_rate > 1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (requests and 5xx by destination), Table (top error rates by service), Single value (mesh-wide error %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We watch the traffic flowing between all the small programs that make up an application, counting how often requests fail or run slow, so we can spot a struggling piece before the whole system slows down for everyone.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry",
                "prometheus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.5.2",
              "n": "Sidecar Proxy Health",
              "c": "high",
              "f": "intermediate",
              "v": "Unhealthy Envoy sidecars drop or misroute traffic; catching not-ready or crash-looping proxies avoids cascading failures across the mesh.",
              "t": "Splunk OTel Collector (kubelet/cAdvisor or Prometheus kube-state-metrics), Kubernetes metadata",
              "d": "`sourcetype=kube:metrics` or `sourcetype=otel:metrics`",
              "q": "index=containers (sourcetype=\"kube:metrics\" OR sourcetype=\"otel:metrics\")\n| where match(pod, \".*-istio-proxy$\") OR container_name=\"istio-proxy\"\n| stats latest(ready) as ready, latest(restarts) as restarts, latest(phase) as phase by pod, namespace, node\n| where ready=0 OR restarts>3 OR phase!=\"Running\"\n| sort namespace, pod",
              "m": "Ingest pod/container metrics from Prometheus or OTel Kubernetes receiver so `istio-proxy` containers expose readiness and restart counts. Correlate with kube-state-metrics `kube_pod_container_status_restarts_total` where available. Alert when sidecars are not ready or restart churn exceeds threshold after mesh upgrades.",
              "z": "Table (namespace, pod, ready, restarts), Timeline (restarts), Single value (unhealthy sidecar count).",
              "kfp": "**rolling_upgrade_drain** — During Istio or application rolling upgrades, sidecars transition through DRAINING and PRE_INITIALIZING states as old pods terminate and new pods start. This is expected lifecycle behavior, not a failure. Suppress DRAINING/INITIALIZING alerts for 10 minutes after detecting a corresponding deployment ScalingReplicaSet event in the same namespace.\n\n**node_scale_down_eviction** — When a Kubernetes node is drained for maintenance or autoscaler scale-down, all sidecars on that node briefly show as STALE before pods are rescheduled. Correlate staleness with `sourcetype=kube:events reason=Evicted` or `reason=NodeNotReady` and suppress during planned maintenance windows.\n\n**scrape_target_lag** — The Prometheus receiver discovers new sidecar scrape targets via Kubernetes service discovery, which has a propagation delay of 5–30 seconds. Newly created pods may appear as STALE until the first successful scrape. Ignore staleness alerts for pods less than 2 minutes old based on `kube_pod_created` timestamp.\n\n**response_flag_health_check** — Kubernetes liveness and readiness probes routed through the Envoy sidecar can generate UF or DC response flags when the application takes longer than the probe timeout to respond. These probe-induced flags inflate the error count without representing real traffic failures. Filter by excluding requests to known probe paths.\n\n**concurrency_mismatch_noise** — The `envoy_server_concurrency` metric may differ from the pod's CPU limit when Istio's `proxy.concurrency` is explicitly set in the mesh config or pod annotation. A value of 0 during initialization is expected and resolves within seconds. Only alert on sustained concurrency=0 lasting more than 60 seconds.\n\n**istio_cni_injection_delay** — When using Istio CNI instead of init containers for sidecar injection, there is a brief window during pod startup where the sidecar is not yet configured. Metrics show PRE_INITIALIZING state during this window. Allow 30 seconds of PRE_INITIALIZING before alerting.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector (kubelet/cAdvisor or Prometheus kube-state-metrics), Kubernetes metadata.\n• Ensure the following data sources are available: `sourcetype=kube:metrics` or `sourcetype=otel:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest pod/container metrics from Prometheus or OTel Kubernetes receiver so `istio-proxy` containers expose readiness and restart counts. Correlate with kube-state-metrics `kube_pod_container_status_restarts_total` where available. Alert when sidecars are not ready or restart churn exceeds threshold after mesh upgrades.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers (sourcetype=\"kube:metrics\" OR sourcetype=\"otel:metrics\")\n| where match(pod, \".*-istio-proxy$\") OR container_name=\"istio-proxy\"\n| stats latest(ready) as ready, latest(restarts) as restarts, latest(phase) as phase by pod, namespace, node\n| where ready=0 OR restarts>3 OR phase!=\"Running\"\n| sort namespace, pod\n```\n\nUnderstanding this SPL\n\n**Sidecar Proxy Health** — Unhealthy Envoy sidecars drop or misroute traffic; catching not-ready or crash-looping proxies avoids cascading failures across the mesh.\n\nDocumented **Data sources**: `sourcetype=kube:metrics` or `sourcetype=otel:metrics`. **App/TA** (typical add-on context): Splunk OTel Collector (kubelet/cAdvisor or Prometheus kube-state-metrics), Kubernetes metadata. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: kube:metrics, otel:metrics. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"kube:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(pod, \".*-istio-proxy$\") OR container_name=\"istio-proxy\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by pod, namespace, node** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ready=0 OR restarts>3 OR phase!=\"Running\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (namespace, pod, ready, restarts), Timeline (restarts), Single value (unhealthy sidecar count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "Every small program in our system has a helper that manages its network traffic — we monitor whether each helper is alive and working, so a broken one does not silently disrupt the whole system.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry",
                "prometheus"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.5.3",
              "n": "mTLS Certificate Expiry",
              "c": "critical",
              "f": "advanced",
              "v": "Expired Istio workload or gateway certs break mTLS between services; proactive expiry tracking avoids sudden mesh-wide authentication failures.",
              "t": "Custom script or `istioctl proxy-config secret` output to HEC, optional cert-manager logs",
              "d": "`sourcetype=istio:cert_status` or `sourcetype=kubernetes:audit`",
              "q": "index=containers sourcetype=\"istio:cert_status\"\n| eval days_left=round((strptime(not_after, \"%Y-%m-%dT%H:%M:%SZ\")-now())/86400, 0)\n| where days_left < 30 OR isnull(days_left)\n| stats min(days_left) as soonest_expiry by workload_name, namespace, serial\n| sort soonest_expiry",
              "m": "Schedule `istioctl proxy-config secret` or Citadel/istiod cert status exports and send JSON to Splunk (HEC). Include `not_after` for each SPIFFE identity. Alternatively parse cert-manager Certificate resources’ status. Alert at 30/14/7 days and page on any cert already expired.",
              "z": "Table (workload, namespace, soonest expiry days), Single value (minimum days to expiry), Gauge per cluster.",
              "kfp": "**workload_cert_rotation_window** — Istio's default workload certificate TTL is 24 hours, meaning the `istiod_cert_expiry_seconds` metric naturally drops to near-zero every day before automatic rotation replenishes it. The CRITICAL alert fires on the expected rotation cycle rather than an actual problem. Set the workload cert alert threshold to 2× the TTL (e.g., alert only if hours_remaining > 36h for a 24h TTL, indicating rotation has stalled).\n\n**root_ca_self_signed_decade** — Istio's default self-signed root CA has a 10-year validity, so the root CA alert_level stays at HEALTHY for years. This is correct behavior, not a false positive — but teams should still plan rotation well before the 10-year mark because root CA rotation requires a coordinated rollout across all clusters.\n\n**gateway_cert_renewal_overlap** — When cert-manager renews a gateway TLS Secret, there is a brief overlap period where both the old and new certificates exist. The monitoring may briefly show two entries for the same gateway, one approaching expiry and one freshly issued. Deduplicate by taking the latest cert for each secret_name.\n\n**sds_push_transient_failure** — istiod occasionally logs transient SDS push errors during high churn (many pods starting simultaneously) that resolve within seconds. These are not certificate problems but control-plane load spikes. Alert only on sustained SDS failures (> 5 errors in 5 minutes).\n\n**permissive_mode_handshake** — Namespaces with PeerAuthentication set to PERMISSIVE mode accept both plaintext and mTLS traffic. TLS handshake failures from non-mesh clients are expected and do not indicate certificate problems. Filter handshake alerts to STRICT-mode namespaces only.\n\n**external_ca_metric_absence** — When using an external CA (cert-manager, Vault PKI), the `citadel_server_root_cert_expiry_timestamp` metric may not be populated because istiod is not acting as the CA. Use the external CA's own metrics instead and suppress alerts for missing Istio CA metrics in external-CA deployments.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom script or `istioctl proxy-config secret` output to HEC, optional cert-manager logs.\n• Ensure the following data sources are available: `sourcetype=istio:cert_status` or `sourcetype=kubernetes:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule `istioctl proxy-config secret` or Citadel/istiod cert status exports and send JSON to Splunk (HEC). Include `not_after` for each SPIFFE identity. Alternatively parse cert-manager Certificate resources’ status. Alert at 30/14/7 days and page on any cert already expired.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"istio:cert_status\"\n| eval days_left=round((strptime(not_after, \"%Y-%m-%dT%H:%M:%SZ\")-now())/86400, 0)\n| where days_left < 30 OR isnull(days_left)\n| stats min(days_left) as soonest_expiry by workload_name, namespace, serial\n| sort soonest_expiry\n```\n\nUnderstanding this SPL\n\n**mTLS Certificate Expiry** — Expired Istio workload or gateway certs break mTLS between services; proactive expiry tracking avoids sudden mesh-wide authentication failures.\n\nDocumented **Data sources**: `sourcetype=istio:cert_status` or `sourcetype=kubernetes:audit`. **App/TA** (typical add-on context): Custom script or `istioctl proxy-config secret` output to HEC, optional cert-manager logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: istio:cert_status. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"istio:cert_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left < 30 OR isnull(days_left)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by workload_name, namespace, serial** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (workload, namespace, soonest expiry days), Single value (minimum days to expiry), Gauge per cluster.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "The digital identity cards that let programs prove who they are to each other expire on a schedule — we track every card's expiry date and sound an alarm weeks in advance so the team can renew them before services lose the ability to communicate.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.5.7",
              "n": "Envoy Proxy Error Rates",
              "c": "high",
              "f": "intermediate",
              "v": "Envoy aggregates L7 failures; trending 4xx/5xx and upstream errors isolates bad clusters and config rollouts quickly.",
              "t": "Splunk OTel Collector (Envoy admin `/stats` or access log pipeline), `envoy.access_log`",
              "d": "`sourcetype=envoy:access` or `sourcetype=otel:metrics`",
              "q": "index=containers sourcetype=\"envoy:access\"\n| eval status=tonumber(response_code)\n| eval is_err=if(status>=400 OR upstream_cluster=\"-\" , 1, 0)\n| stats count as total, sum(is_err) as err by route_name, upstream_cluster, cluster_name\n| eval err_pct=round(100*err/total, 2)\n| where err_pct>1 AND total>100\n| sort -err_pct",
              "m": "Configure Envoy access logs (JSON) to stdout and collect via OTel filelog receiver or Fluent Bit to Splunk. Include `response_code`, `route_name`, `upstream_cluster`, `duration`. Optionally scrape `envoy_cluster_upstream_rq_xx` from Prometheus. Baseline error percentages per route and alert on spikes after deployments.",
              "z": "Time chart (4xx/5xx rate by route), Table (top routes by error %), Heatmap (cluster vs status).",
              "kfp": "**retry_amplification_noise** — Envoy's automatic retry policy generates additional upstream requests that are counted in `envoy_cluster_upstream_rq` counters, inflating both the total request count and the error count. A service with 3 retries configured may show 4× the actual error count. Subtract `envoy_cluster_upstream_rq_retry` from both numerator and denominator for accurate error rates.\n\n**circuit_breaker_overflow** — When Envoy's circuit breaker trips due to `max_pending_requests` or `max_connections` limits, it rejects requests locally with a 503 and increments `envoy_cluster_upstream_rq_pending_overflow`. These are proxy-generated errors, not application errors. Distinguish by checking whether `envoy_cluster_upstream_rq_pending_overflow` is non-zero for the cluster.\n\n**canary_baseline_skew** — During canary deployments, the canary version may have a higher error rate than the stable version. The aggregate cluster error rate reflects the weighted average, masking the canary's true error rate. Filter by pod label (e.g., `version=canary`) to isolate the canary's contribution.\n\n**health_check_endpoint_noise** — Kubernetes liveness and readiness probes generate HTTP requests through the Envoy sidecar that count toward upstream request totals. A probe endpoint returning intermittent failures inflates the cluster's error rate. Exclude probe paths by filtering access logs or by using separate Envoy clusters for probe traffic.\n\n**passthrough_cluster_errors** — Envoy's `PassthroughCluster` handles traffic to destinations outside the mesh (external APIs, databases). Errors on this cluster reflect external service issues, not mesh health. Filter or separate PassthroughCluster metrics from mesh-internal cluster metrics.\n\n**outlier_detection_ejection** — When Envoy ejects an unhealthy endpoint via outlier detection, subsequent requests to the remaining endpoints may succeed, but the ejection event itself was preceded by errors that triggered the detection. The error counter includes the triggering errors even though the outlier detection resolved the problem. Check `envoy_cluster_outlier_detection_ejections_active` to understand whether errors are being actively mitigated.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector (Envoy admin `/stats` or access log pipeline), `envoy.access_log`.\n• Ensure the following data sources are available: `sourcetype=envoy:access` or `sourcetype=otel:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Envoy access logs (JSON) to stdout and collect via OTel filelog receiver or Fluent Bit to Splunk. Include `response_code`, `route_name`, `upstream_cluster`, `duration`. Optionally scrape `envoy_cluster_upstream_rq_xx` from Prometheus. Baseline error percentages per route and alert on spikes after deployments.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"envoy:access\"\n| eval status=tonumber(response_code)\n| eval is_err=if(status>=400 OR upstream_cluster=\"-\" , 1, 0)\n| stats count as total, sum(is_err) as err by route_name, upstream_cluster, cluster_name\n| eval err_pct=round(100*err/total, 2)\n| where err_pct>1 AND total>100\n| sort -err_pct\n```\n\nUnderstanding this SPL\n\n**Envoy Proxy Error Rates** — Envoy aggregates L7 failures; trending 4xx/5xx and upstream errors isolates bad clusters and config rollouts quickly.\n\nDocumented **Data sources**: `sourcetype=envoy:access` or `sourcetype=otel:metrics`. **App/TA** (typical add-on context): Splunk OTel Collector (Envoy admin `/stats` or access log pipeline), `envoy.access_log`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: envoy:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"envoy:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_err** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by route_name, upstream_cluster, cluster_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **err_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_pct>1 AND total>100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (4xx/5xx rate by route), Table (top routes by error %), Heatmap (cluster vs status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We monitor the network helpers sitting between all our programs, counting how often they report errors or failed connections, so when something goes wrong the team knows exactly which helper and which connection is the problem.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "envoy",
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.5.8",
              "n": "Circuit Breaker Trips",
              "c": "high",
              "f": "advanced",
              "v": "Outlier detection and open circuits protect the mesh; frequent trips signal upstream saturation or bad health checks that need capacity or code fixes.",
              "t": "Splunk OTel Collector (Envoy/Istio Prometheus metrics)",
              "d": "`sourcetype=prometheus:istio` or `sourcetype=otel:metrics`",
              "q": "index=containers (sourcetype=\"prometheus:istio\" OR sourcetype=\"otel:metrics\")\n| where match(metric_name, \"envoy_cluster_upstream_rq_pending_overflow\") OR match(metric_name, \"circuit_breakers.*overflow\")\n| stats sum(value) as trips by cluster_name, destination_service_name\n| where trips>0\n| sort -trips",
              "m": "Scrape Istio/Envoy Prometheus endpoints (port 15090) with OTel Prometheus receiver. Map overflow and ejection counters to Splunk metrics. Correlate trips with deploy times and upstream latency. Alert when overflow rate accelerates versus steady-state for a cluster.",
              "z": "Time chart (overflow counters by cluster), Table (cluster, trips, destination), Bar chart (trips per namespace).",
              "kfp": "**canary_endpoint_ejection** — During canary deployments, the canary endpoint may exhibit elevated error rates that trigger outlier detection ejection. This is the expected behavior of outlier detection protecting the service during progressive rollout. Correlate ejection timestamps with deployment events and suppress ejection alerts for 15 minutes after a canary deployment starts.\n\n**health_check_driven_ejection** — Outlier detection uses passive health checking (counting errors on actual requests). If the upstream has aggressive liveness probes that fail during brief GC pauses, those failures count toward the consecutive error threshold and trigger ejection even though application traffic is healthy. Adjust `consecutiveErrors` to be higher than the expected probe failure bursts.\n\n**connection_pool_sizing** — Default Envoy circuit breaker limits (1024 max connections, 1024 max pending requests) may be insufficient for high-traffic services. Hitting these limits generates `pending_overflow` events that indicate undersized pool configuration rather than application failure. Compare current traffic with DestinationRule limits to determine if scaling is needed.\n\n**panic_threshold_activation** — When Envoy ejects more than `maxEjectionPercent` of endpoints, it enters panic mode and routes to all endpoints (including ejected ones) to avoid complete service unavailability. The `ejections_active` counter stays high but traffic continues to flow — the circuit breaker metrics look alarming but the system is self-protecting.\n\n**split_brain_ejection** — In multi-zone deployments, each zone's Envoy may independently eject the same endpoint based on its local view. Cross-zone traffic patterns can cause the same healthy endpoint to be ejected by sidecars in a different zone while remaining healthy from the local zone's perspective. Check ejection counts per zone.\n\n**warm_up_connection_burst** — When a new pod starts, the first few requests may timeout while connection pools warm up, triggering the consecutive error threshold. Set `outlierDetection.interval` longer than the pod's startup time to avoid ejecting endpoints that are still initializing.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector (Envoy/Istio Prometheus metrics).\n• Ensure the following data sources are available: `sourcetype=prometheus:istio` or `sourcetype=otel:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScrape Istio/Envoy Prometheus endpoints (port 15090) with OTel Prometheus receiver. Map overflow and ejection counters to Splunk metrics. Correlate trips with deploy times and upstream latency. Alert when overflow rate accelerates versus steady-state for a cluster.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers (sourcetype=\"prometheus:istio\" OR sourcetype=\"otel:metrics\")\n| where match(metric_name, \"envoy_cluster_upstream_rq_pending_overflow\") OR match(metric_name, \"circuit_breakers.*overflow\")\n| stats sum(value) as trips by cluster_name, destination_service_name\n| where trips>0\n| sort -trips\n```\n\nUnderstanding this SPL\n\n**Circuit Breaker Trips** — Outlier detection and open circuits protect the mesh; frequent trips signal upstream saturation or bad health checks that need capacity or code fixes.\n\nDocumented **Data sources**: `sourcetype=prometheus:istio` or `sourcetype=otel:metrics`. **App/TA** (typical add-on context): Splunk OTel Collector (Envoy/Istio Prometheus metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: prometheus:istio, otel:metrics. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"prometheus:istio\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(metric_name, \"envoy_cluster_upstream_rq_pending_overflow\") OR match(metric_name, \"circuit_breakers.*overflow\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster_name, destination_service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where trips>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (overflow counters by cluster), Table (cluster, trips, destination), Bar chart (trips per namespace).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "When one of the small programs in the system keeps failing, a safety switch automatically stops sending it work to prevent the problem from spreading — we monitor when those safety switches activate so the team can fix the broken program.",
              "mtype": [
                "Reliability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "envoy",
                "opentelemetry",
                "prometheus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.5.9",
              "n": "Service Mesh Control Plane Health",
              "c": "critical",
              "f": "advanced",
              "v": "istiod and validating webhook outages block config pushes and sidecar updates; early detection prevents a widening blast radius across namespaces.",
              "t": "Splunk OTel Collector, Kubernetes metrics/logs",
              "d": "`sourcetype=kube:logs` or `sourcetype=otel:logs`",
              "q": "index=containers (sourcetype=\"kube:logs\" OR sourcetype=\"otel:logs\") (istiod OR \"istiod-\")\n| stats count(eval(match(_raw, \"(?i)error|panic|failed\"))) as err_count, count as line_count by pod, namespace\n| eval err_rate=round(100*err_count/line_count, 2)\n| where err_rate>5 OR err_count>50\n| sort -err_count",
              "m": "Collect istiod container logs and kube-state-metrics for Deployment replicas (`istiod` available vs desired). Ingest admission webhook failure events from API server audit if enabled. Alert on istiod pod restarts, gRPC push errors in logs, and replica mismatch.",
              "z": "Single value (istiod ready replicas), Timeline (pod restarts), Table (error snippets by pod).",
              "kfp": "**upgrade_window_churn** — During Istio control plane upgrades, istiod pods restart, leader elections occur, and XDS push error rates temporarily spike as the new version takes over. These are expected transient events during the upgrade window. Correlate with known maintenance schedules and suppress alerts during planned upgrade windows.\n\n**proxy_version_skew** — When Envoy sidecar proxies run a different Istio version than the control plane (common during canary upgrades), istiod may report increased push errors for specific proxy versions. These errors resolve as proxies are gradually updated. Check the `proxy_version` label on push metrics to identify version-related errors.\n\n**config_validation_learning** — Platform teams authoring new Istio resources (VirtualService, DestinationRule, AuthorizationPolicy) during development produce validation failures that appear as control plane issues. These are expected in development namespaces. Use namespace-aware alerting that excludes development namespaces from validation failure alerts.\n\n**resource_quota_scaling** — In autoscaled clusters, istiod memory consumption grows proportionally with the number of services and endpoints. During scale-up events, istiod may briefly approach its memory limit, causing garbage collection pauses that appear as degraded push performance. This resolves after istiod's HPA scales the deployment.\n\n**api_server_latency** — High API server latency (from cluster-wide load, etcd compaction, or node pressure) slows istiod's ability to watch configuration changes and distribute updates. The resulting push delays appear as control plane degradation but are actually infrastructure-level issues. Correlate with API server latency metrics.\n\n**certificate_rotation_burst** — Periodic mTLS certificate rotation across the mesh causes a burst of CSR requests to istiod's certificate authority component. This temporarily increases CPU usage and may cause push delays. The burst pattern is predictable and correlates with the certificate TTL setting.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, Kubernetes metrics/logs.\n• Ensure the following data sources are available: `sourcetype=kube:logs` or `sourcetype=otel:logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect istiod container logs and kube-state-metrics for Deployment replicas (`istiod` available vs desired). Ingest admission webhook failure events from API server audit if enabled. Alert on istiod pod restarts, gRPC push errors in logs, and replica mismatch.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers (sourcetype=\"kube:logs\" OR sourcetype=\"otel:logs\") (istiod OR \"istiod-\")\n| stats count(eval(match(_raw, \"(?i)error|panic|failed\"))) as err_count, count as line_count by pod, namespace\n| eval err_rate=round(100*err_count/line_count, 2)\n| where err_rate>5 OR err_count>50\n| sort -err_count\n```\n\nUnderstanding this SPL\n\n**Service Mesh Control Plane Health** — istiod and validating webhook outages block config pushes and sidecar updates; early detection prevents a widening blast radius across namespaces.\n\nDocumented **Data sources**: `sourcetype=kube:logs` or `sourcetype=otel:logs`. **App/TA** (typical add-on context): Splunk OTel Collector, Kubernetes metrics/logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: kube:logs, otel:logs. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"kube:logs\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by pod, namespace** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_rate>5 OR err_count>50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (istiod ready replicas), Timeline (pod restarts), Table (error snippets by pod).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "The service mesh has a central brain that tells all the little network helpers how to route traffic — we watch that brain's vital signs to catch problems before the helpers stop receiving instructions.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.5.10",
              "n": "Ingress Gateway Latency",
              "c": "high",
              "f": "intermediate",
              "v": "North-south latency reflects TLS, auth, and routing at the edge; regressions here affect every external client before internal mesh metrics move.",
              "t": "Splunk OTel Collector (Istio ingress gateway metrics), Envoy access logs",
              "d": "`sourcetype=otel:metrics` or `sourcetype=envoy:access`",
              "q": "index=containers sourcetype=\"envoy:access\"\n| where like(gateway_workload, \"istio-ingress%\") OR like(kubernetes_pod_name, \"istio-ingress%\")\n| eval dur_ms=tonumber(duration_ms)\n| timechart span=5m perc95(dur_ms) as p95_ms, perc99(dur_ms) as p99_ms by route_name",
              "m": "Label ingress gateway access logs with `gateway_workload` or filter Kubernetes workload name. Export histogram or timer metrics (`istio_request_duration_milliseconds`) via OTel. Set SLO windows on p95/p99 per host and route. Compare canary vs stable gateway revisions during rollouts.",
              "z": "Time chart (p95/p99 latency by route), Geographic or by-AZ breakdown if multi-region, Single value (SLO burn).",
              "kfp": "**tls_handshake_cold_start** — The first request to a new TLS session incurs a full TLS handshake that adds 50–150ms of latency compared to subsequent requests using session resumption. If the ingress gateway has many short-lived connections (mobile clients, IoT devices), the p99 is dominated by handshake overhead rather than application latency. Separate first-request latency from steady-state latency using the Envoy connection reuse metrics.\n\n**health_check_latency_skew** — External health check systems (load balancers, CDNs, uptime monitors) typically hit lightweight endpoints (/healthz) that respond in single-digit milliseconds, pulling the p50 and average latency artificially low. Filter health check requests by path or user-agent for an accurate application latency measurement.\n\n**backend_retry_inflation** — Istio retry policies cause the gateway to retry failed upstream requests, which inflates the observed request duration (the duration includes all retry attempts). A single request with 3 retries at 500ms each shows 1500ms duration. Check the response flags for UF or 5xx to identify retry-inflated durations.\n\n**gateway_scaling_event** — When the HPA scales the ingress gateway deployment, new pods go through a warm-up period where the Envoy configuration is loaded and connections are established. Requests hitting newly added pods may show higher latency for the first 30–60 seconds. Correlate latency spikes with HPA scaling events from kube:events.\n\n**geographic_client_variance** — Latency measured at the gateway includes network round-trip time from the client to the cluster ingress. Clients in geographically distant regions contribute higher latency that is not caused by the gateway or backend services. Use client IP geolocation to segment latency by region.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector (Istio ingress gateway metrics), Envoy access logs.\n• Ensure the following data sources are available: `sourcetype=otel:metrics` or `sourcetype=envoy:access`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLabel ingress gateway access logs with `gateway_workload` or filter Kubernetes workload name. Export histogram or timer metrics (`istio_request_duration_milliseconds`) via OTel. Set SLO windows on p95/p99 per host and route. Compare canary vs stable gateway revisions during rollouts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"envoy:access\"\n| where like(gateway_workload, \"istio-ingress%\") OR like(kubernetes_pod_name, \"istio-ingress%\")\n| eval dur_ms=tonumber(duration_ms)\n| timechart span=5m perc95(dur_ms) as p95_ms, perc99(dur_ms) as p99_ms by route_name\n```\n\nUnderstanding this SPL\n\n**Ingress Gateway Latency** — North-south latency reflects TLS, auth, and routing at the edge; regressions here affect every external client before internal mesh metrics move.\n\nDocumented **Data sources**: `sourcetype=otel:metrics` or `sourcetype=envoy:access`. **App/TA** (typical add-on context): Splunk OTel Collector (Istio ingress gateway metrics), Envoy access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: envoy:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"envoy:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where like(gateway_workload, \"istio-ingress%\") OR like(kubernetes_pod_name, \"istio-ingress%\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **dur_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by route_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (p95/p99 latency by route), Geographic or by-AZ breakdown if multi-region, Single value (SLO burn).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We measure how quickly the front gate of our computer network responds to each visitor and track whether it is getting slower over time, so the team knows when to widen the gate before visitors start waiting too long.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "envoy",
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.5.11",
              "n": "Sidecar Injection Validation",
              "c": "medium",
              "f": "intermediate",
              "v": "Pods without injection bypass mesh policy and mTLS; continuous validation enforces namespace labels and mutating webhook coverage.",
              "t": "Kubernetes API audit or controller logs, `kube:objects` snapshot",
              "d": "`sourcetype=kubernetes:audit` or `sourcetype=kube:objects`",
              "q": "index=containers sourcetype=\"kubernetes:audit\" objectRef.resource=\"pods\"\n| eval has_sidecar=if(match(_raw, \"istio-proxy\"), 1, 0)\n| join type=left max=1 objectRef.namespace [\n    search index=containers sourcetype=\"kube:objects\" kind=\"Namespace\"\n    | eval inject=if(match(_raw, \"istio-injection=enabled\"), 1, 0)\n    | stats max(inject) as should_inject by metadata.name\n    | rename metadata.name as objectRef.namespace\n]\n| where should_inject=1 AND has_sidecar=0\n| stats count by objectRef.namespace, objectRef.name",
              "m": "Periodically inventory pods in `istio-injection=enabled` namespaces (CI job or Splunk scheduled search against cached object JSON). Flag workloads missing `istio-proxy`. Optionally parse audit logs for pod create with webhook bypass. Integrate with policy-as-code to fail builds that skip mesh membership.",
              "z": "Table (namespace, pod, missing sidecar), Single value (non-compliant pod count), Trend (compliance %).",
              "kfp": "**explicit_opt_out_annotation** — Development teams may intentionally disable sidecar injection for specific pods using the `sidecar.istio.io/inject: false` annotation. This is a deliberate decision, not a compliance gap. The compliance search should cross-reference pod annotations and maintain an exemption registry to distinguish intentional opt-outs from accidental omissions.\n\n**daemonset_host_networking** — DaemonSets that require host networking (monitoring agents, log collectors, CNI plugins) cannot run with Istio sidecars because the sidecar's iptables rules conflict with host network access. These pods will always appear as non-compliant in injection-enabled namespaces. Add DaemonSet owner_kind to the exemption list.\n\n**job_and_cronjob_completion** — Kubernetes Jobs and CronJobs with sidecars have a known issue where the job container completes but the sidecar keeps running, preventing the pod from terminating. Some teams disable injection for jobs to avoid this. These appear as non-compliant but are a pragmatic workaround for a known Istio limitation.\n\n**webhook_timeout_transient** — During cluster-wide pressure (node scaling, API server load, etcd compaction), the istio-sidecar-injector webhook may timeout for individual pod CREATE requests. The pod is created without a sidecar. These are transient failures that resolve when cluster pressure subsides. The injection success rate search surfaces the frequency.\n\n**revision_label_mismatch** — When using Istio canary upgrades with revision labels, a namespace labeled with a revision that has been removed creates pods without injection because no webhook matches the revision. This appears as non-compliant but is actually a stale revision reference.\n\n**init_container_network_policy** — In clusters with strict NetworkPolicy enforcement, the istio-init container may fail to set up iptables rules if the CNI plugin blocks the required operations. The pod starts without proper traffic interception even though istio-proxy is present. This is a partial compliance failure not caught by container-name-only checking.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kubernetes API audit or controller logs, `kube:objects` snapshot.\n• Ensure the following data sources are available: `sourcetype=kubernetes:audit` or `sourcetype=kube:objects`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPeriodically inventory pods in `istio-injection=enabled` namespaces (CI job or Splunk scheduled search against cached object JSON). Flag workloads missing `istio-proxy`. Optionally parse audit logs for pod create with webhook bypass. Integrate with policy-as-code to fail builds that skip mesh membership.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"kubernetes:audit\" objectRef.resource=\"pods\"\n| eval has_sidecar=if(match(_raw, \"istio-proxy\"), 1, 0)\n| join type=left max=1 objectRef.namespace [\n    search index=containers sourcetype=\"kube:objects\" kind=\"Namespace\"\n    | eval inject=if(match(_raw, \"istio-injection=enabled\"), 1, 0)\n    | stats max(inject) as should_inject by metadata.name\n    | rename metadata.name as objectRef.namespace\n]\n| where should_inject=1 AND has_sidecar=0\n| stats count by objectRef.namespace, objectRef.name\n```\n\nUnderstanding this SPL\n\n**Sidecar Injection Validation** — Pods without injection bypass mesh policy and mTLS; continuous validation enforces namespace labels and mutating webhook coverage.\n\nDocumented **Data sources**: `sourcetype=kubernetes:audit` or `sourcetype=kube:objects`. **App/TA** (typical add-on context): Kubernetes API audit or controller logs, `kube:objects` snapshot. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: kubernetes:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"kubernetes:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **has_sidecar** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where should_inject=1 AND has_sidecar=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by objectRef.namespace, objectRef.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (namespace, pod, missing sidecar), Single value (non-compliant pod count), Trend (compliance %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "Every program in our network is supposed to have a security guard assigned to it — we regularly check which programs are missing their guard and report them so they can be fixed before sensitive information travels unprotected.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.5.12",
              "n": "Rate Limiting and Traffic Policy Compliance",
              "c": "medium",
              "f": "advanced",
              "v": "Confirms quotas and Istio `RateLimitService`/Local rate limit configs actually throttle abuse; drift between policy and observed denials indicates misconfiguration or bypass attempts.",
              "t": "Splunk OTel Collector (Envoy local rate limit / RLS metrics), Envoy access logs",
              "d": "`sourcetype=envoy:access` or `sourcetype=otel:metrics`",
              "q": "index=containers sourcetype=\"envoy:access\"\n| eval denied=if(response_code=429 OR match(response_flags, \"RL\"), 1, 0)\n| stats count as total, sum(denied) as rate_limited by route_name, cluster_name\n| eval rl_pct=round(100*rate_limited/total, 3)\n| where rate_limited>0\n| sort -rate_limited",
              "m": "Ensure access logs include `response_code` 429 and Envoy `response_flags` (e.g. `RL` for rate limited). For global RLS, scrape `ratelimit_service_*` or service metrics. Dashboard expected 429 share per route against policy (e.g. per-API key). Alert on unexpected absence of throttling during attacks or sudden spikes in 429s indicating config errors.",
              "z": "Time chart (429 rate by route), Table (routes with throttle events), Stacked bar (allowed vs rate-limited volume).",
              "kfp": "**legitimate_burst_traffic** — Legitimate traffic bursts (e.g., batch API calls, webhook retries, cron-triggered bulk operations) trigger rate limits that are correctly enforced but do not represent abuse. These bursts appear as ACTIVE_THROTTLE or HEAVY_THROTTLE without any malicious intent. Correlate with known batch processing schedules and application behavior patterns.\n\n**rate_limit_service_cold_start** — After a global rate limit service restart, the in-memory counters reset to zero. Traffic that was previously at the limit is temporarily allowed through until counters rebuild. This appears as a brief NO_THROTTLE period followed by normal enforcement. Correlate with RLS pod restart events.\n\n**local_rate_limit_per_pod_variance** — Local rate limits enforce per-pod, so the effective cluster-wide limit scales with replica count. During HPA scaling events, the effective limit changes. A scale-down concentrates traffic onto fewer pods, potentially triggering more throttling. Correlate enforcement changes with HPA scaling events.\n\n**shadow_mode_enforcement** — Envoy's rate limit filter can operate in shadow mode where it logs rate limit decisions without actually enforcing them. Shadow mode generates metrics and log entries but never returns 429 to clients. The enforcement search may show NO_THROTTLE even though rate limits are configured. Check the `envoy_http_local_rate_limit_enforced` vs `envoy_http_local_rate_limit_rate_limited` ratio.\n\n**client_retry_amplification** — When a client receives a 429 response, many HTTP client libraries automatically retry the request. Each retry generates another 429, inflating the rate-limited count. A single rate-limited client may appear to generate dozens of 429s from retry loops. Analyze by client IP to identify retry-driven inflation.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector (Envoy local rate limit / RLS metrics), Envoy access logs.\n• Ensure the following data sources are available: `sourcetype=envoy:access` or `sourcetype=otel:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure access logs include `response_code` 429 and Envoy `response_flags` (e.g. `RL` for rate limited). For global RLS, scrape `ratelimit_service_*` or service metrics. Dashboard expected 429 share per route against policy (e.g. per-API key). Alert on unexpected absence of throttling during attacks or sudden spikes in 429s indicating config errors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"envoy:access\"\n| eval denied=if(response_code=429 OR match(response_flags, \"RL\"), 1, 0)\n| stats count as total, sum(denied) as rate_limited by route_name, cluster_name\n| eval rl_pct=round(100*rate_limited/total, 3)\n| where rate_limited>0\n| sort -rate_limited\n```\n\nUnderstanding this SPL\n\n**Rate Limiting and Traffic Policy Compliance** — Confirms quotas and Istio `RateLimitService`/Local rate limit configs actually throttle abuse; drift between policy and observed denials indicates misconfiguration or bypass attempts.\n\nDocumented **Data sources**: `sourcetype=envoy:access` or `sourcetype=otel:metrics`. **App/TA** (typical add-on context): Splunk OTel Collector (Envoy local rate limit / RLS metrics), Envoy access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: envoy:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"envoy:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **denied** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by route_name, cluster_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **rl_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where rate_limited>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (429 rate by route), Table (routes with throttle events), Stacked bar (allowed vs rate-limited volume).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We set speed limits on how fast visitors can access each service and then verify those limits are actually being enforced — like checking that a speed camera is working and not letting every car pass unchecked.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "envoy",
                "opentelemetry"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.5.13",
              "n": "eBPF Network Observability (Cilium Hubble)",
              "c": "high",
              "f": "advanced",
              "v": "Traditional network monitoring in Kubernetes relies on service mesh sidecars or packet capture — both adding overhead and complexity. Cilium Hubble provides kernel-level L3/L4/L7 network visibility via eBPF without sidecar injection, capturing every network flow between pods, services, and external endpoints with near-zero performance impact. Ingesting Hubble flow logs into Splunk reveals unexpected service communication (security), DNS failures (availability), and packet drops (performance) that are invisible to application-level monitoring.",
              "t": "Splunk Distribution of OpenTelemetry Collector (Hubble receiver), Cilium Hubble",
              "d": "`sourcetype=cilium:hubble:flows`, Hubble flow logs via OTLP or gRPC relay",
              "q": "index=containers sourcetype=\"cilium:hubble:flows\"\n| eval flow_direction=case(\n    traffic_direction==\"INGRESS\", \"Inbound\",\n    traffic_direction==\"EGRESS\", \"Outbound\",\n    1==1, \"Unknown\")\n| eval flow_status=case(\n    verdict==\"FORWARDED\", \"Allowed\",\n    verdict==\"DROPPED\", \"Dropped\",\n    verdict==\"AUDIT\", \"Audited\",\n    1==1, verdict)\n| bin _time span=5m\n| stats count as flows, sum(eval(if(verdict==\"DROPPED\",1,0))) as dropped, dc(destination_identity) as unique_destinations by _time, source_namespace, source_pod, destination_namespace, flow_direction\n| eval drop_pct=round(dropped*100/flows, 2)\n| where dropped > 0 OR unique_destinations > 50\n| table _time, source_namespace, source_pod, destination_namespace, flow_direction, flows, dropped, drop_pct, unique_destinations\n| sort -dropped",
              "m": "Deploy Cilium as the Kubernetes CNI with Hubble enabled. Hubble captures eBPF-level network flows including source/destination pod, namespace, identity, IP, port, protocol, L7 protocol details (HTTP method/path, DNS query/response, Kafka topic), verdict (forwarded/dropped), and drop reason. Export Hubble flows to Splunk via the OTel Collector's Hubble receiver or by relaying Hubble's gRPC stream to a log pipeline. Key detections: dropped flows indicate network policy violations or misconfigurations; unexpected destination identities signal lateral movement or misconfigured services; DNS failures (NXDOMAIN, timeout) from Hubble's DNS-aware L7 parsing reveal resolution issues before they cascade. Correlate dropped flows with Cilium network policies to identify which policy blocked the traffic. Track flow volume per namespace to detect traffic anomalies.",
              "z": "Sankey diagram (namespace-to-namespace traffic flow), Table (dropped flows with source/destination), Line chart (flow volume and drop rate over 24 hours), Bar chart (top drop reasons), Network graph (pod communication map).",
              "kfp": "**policy_rollout_transition** — When deploying new CiliumNetworkPolicies, there is a brief transition period where the eBPF datapath is updated across nodes. During this window, some flows may be dropped by the new policy before applications have been updated to comply. These transient drops resolve once the rollout completes. Correlate drop spikes with policy change timestamps.\n\n**identity_allocation_delay** — Cilium assigns security identities to pods based on labels. When a new pod starts, there is a brief delay before the identity is allocated and propagated to all nodes. Flows from the pod during this window may be dropped with POLICY_DENIED even though the pod should be allowed. These resolve within seconds of identity allocation.\n\n**node_to_node_tunnel_flap** — In overlay networking mode (VXLAN or Geneve), Cilium establishes tunnels between nodes. Tunnel flaps cause temporary packet drops that appear as DROPPED flows with reason CT_TRUNCATED or STALE_OR_UNROUTABLE. These are infrastructure-level events, not application policy violations.\n\n**dns_negative_cache** — Applications that query non-existent services generate NXDOMAIN responses that are cached by CoreDNS. The initial query produces a flow with dns_rcode=3, but subsequent queries may be answered from cache without generating a Hubble flow. The DNS failure rate may undercount persistent resolution issues that are being served from negative cache.\n\n**hubble_ring_buffer_overflow** — Under extreme network load, the Hubble ring buffer on individual Cilium agents may overflow, causing flow log drops. The dropped flows are not the same as policy-dropped packets — they are observability gaps. Monitor `cilium_hubble_events_lost_total` to detect ring buffer overflows.\n\n**external_traffic_identity** — Traffic from external sources (outside the cluster) appears with `source_identity=world` and no namespace/pod information. Dropped flows from external sources may represent legitimate firewall behavior (blocking inbound traffic to internal services) rather than policy violations.",
              "refs": "[CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector (Hubble receiver), Cilium Hubble.\n• Ensure the following data sources are available: `sourcetype=cilium:hubble:flows`, Hubble flow logs via OTLP or gRPC relay.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Cilium as the Kubernetes CNI with Hubble enabled. Hubble captures eBPF-level network flows including source/destination pod, namespace, identity, IP, port, protocol, L7 protocol details (HTTP method/path, DNS query/response, Kafka topic), verdict (forwarded/dropped), and drop reason. Export Hubble flows to Splunk via the OTel Collector's Hubble receiver or by relaying Hubble's gRPC stream to a log pipeline. Key detections: dropped flows indicate network policy violations or misconfigurati…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"cilium:hubble:flows\"\n| eval flow_direction=case(\n    traffic_direction==\"INGRESS\", \"Inbound\",\n    traffic_direction==\"EGRESS\", \"Outbound\",\n    1==1, \"Unknown\")\n| eval flow_status=case(\n    verdict==\"FORWARDED\", \"Allowed\",\n    verdict==\"DROPPED\", \"Dropped\",\n    verdict==\"AUDIT\", \"Audited\",\n    1==1, verdict)\n| bin _time span=5m\n| stats count as flows, sum(eval(if(verdict==\"DROPPED\",1,0))) as dropped, dc(destination_identity) as unique_destinations by _time, source_namespace, source_pod, destination_namespace, flow_direction\n| eval drop_pct=round(dropped*100/flows, 2)\n| where dropped > 0 OR unique_destinations > 50\n| table _time, source_namespace, source_pod, destination_namespace, flow_direction, flows, dropped, drop_pct, unique_destinations\n| sort -dropped\n```\n\nUnderstanding this SPL\n\n**eBPF Network Observability (Cilium Hubble)** — Traditional network monitoring in Kubernetes relies on service mesh sidecars or packet capture — both adding overhead and complexity. Cilium Hubble provides kernel-level L3/L4/L7 network visibility via eBPF without sidecar injection, capturing every network flow between pods, services, and external endpoints with near-zero performance impact. Ingesting Hubble flow logs into Splunk reveals unexpected service communication (security), DNS failures (availability), and packet…\n\nDocumented **Data sources**: `sourcetype=cilium:hubble:flows`, Hubble flow logs via OTLP or gRPC relay. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector (Hubble receiver), Cilium Hubble. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: cilium:hubble:flows. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"cilium:hubble:flows\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **flow_direction** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **flow_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, source_namespace, source_pod, destination_namespace, flow_direction** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **drop_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where dropped > 0 OR unique_destinations > 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **eBPF Network Observability (Cilium Hubble)**): table _time, source_namespace, source_pod, destination_namespace, flow_direction, flows, dropped, drop_pct, unique_destinations\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**eBPF Network Observability (Cilium Hubble)** — Traditional network monitoring in Kubernetes relies on service mesh sidecars or packet capture — both adding overhead and complexity. Cilium Hubble provides kernel-level L3/L4/L7 network visibility via eBPF without sidecar injection, capturing every network flow between pods, services, and external endpoints with near-zero performance impact. Ingesting Hubble flow logs into Splunk reveals unexpected service communication (security), DNS failures (availability), and packet…\n\nDocumented **Data sources**: `sourcetype=cilium:hubble:flows`, Hubble flow logs via OTLP or gRPC relay. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector (Hubble receiver), Cilium Hubble. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Sankey diagram (namespace-to-namespace traffic flow), Table (dropped flows with source/destination), Line chart (flow volume and drop rate over 24 hours), Bar chart (top drop reasons), Network graph (pod communication map).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "Like an invisible camera system in every hallway that records who walks where and which doors they try, we monitor every network connection inside the cluster to catch blocked or suspicious traffic.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=5m | sort - count",
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.5.14",
              "n": "eBPF Process-Level Security Observability (Tetragon)",
              "c": "critical",
              "f": "advanced",
              "v": "Container runtime security traditionally relies on syscall interception (Falco/seccomp) or agent-based file integrity monitoring — both with performance overhead and blind spots. Tetragon provides kernel-level visibility into process execution, file access, network connections, and privilege escalation via eBPF tracing policies, with minimal overhead. Ingesting Tetragon events into Splunk enables correlation with application traces and infrastructure metrics — connecting \"a process opened /etc/shadow\" with \"which user request triggered this\" via trace context.",
              "t": "Splunk Distribution of OpenTelemetry Collector, Tetragon (Isovalent/Cilium)",
              "d": "`sourcetype=tetragon:events`, Tetragon JSON event stream via FluentBit or OTel filelog receiver",
              "q": "index=containers sourcetype=\"tetragon:events\"\n| eval event_type=case(\n    process_exec!=\"\", \"process_exec\",\n    process_file!=\"\", \"file_access\",\n    process_connect!=\"\", \"network_connect\",\n    process_kprobe!=\"\", \"kprobe\",\n    1==1, \"other\")\n| eval severity=case(\n    match(binary, \"/(nc|ncat|curl|wget|python|perl|ruby|bash|sh)$\"), \"High\",\n    match(filepath, \"/(etc/shadow|etc/passwd|.ssh/|.kube/)\"), \"Critical\",\n    match(event_type, \"kprobe\") AND match(function_name, \"sys_ptrace|sys_mount\"), \"Critical\",\n    1==1, \"Info\")\n| where severity IN (\"High\", \"Critical\")\n| stats count as events, values(binary) as binaries, values(filepath) as files, values(k8s_pod) as pods by _time, k8s_namespace, event_type, severity\n| sort -severity, -events\n| table _time, k8s_namespace, pods, event_type, severity, binaries, files, events",
              "m": "Deploy Tetragon as a DaemonSet in Kubernetes. Define TracingPolicies to capture security-relevant events: process execution (detect shells, interpreters, network tools in production containers), file access (sensitive paths like /etc/shadow, SSH keys, Kubernetes secrets), network connections (unexpected outbound connections from application pods), and kprobe events (privilege escalation syscalls like ptrace, mount). Export Tetragon events via JSON log file or gRPC stream to the OTel Collector's filelog receiver, then forward to Splunk HEC. Classify events by severity based on the binary executed (shells and network tools in production = high risk) and file paths accessed (credentials and keys = critical). Correlate with Kubernetes context (pod, namespace, node, container image) for investigation. Integrate with Splunk ES as risk events on container/pod entities.",
              "z": "Timeline (security events per namespace), Table (critical events with pod and binary details), Bar chart (events by type and severity), Single value (critical events in last hour).",
              "kfp": "**legitimate_debugging_sessions** — Developers or SREs using kubectl exec to debug production pods generate process execution events (bash, sh, cat, curl) that match suspicious binary patterns. These are legitimate troubleshooting activities, not attacks. Correlate with kubectl audit logs and maintenance windows to classify debugging sessions.\n\n**container_init_processes** — Container entrypoint scripts often execute shell processes and file access operations during startup that trigger Tetragon events. These are transient startup activities, not runtime anomalies. Filter events from processes with PID 1 or within the first 30 seconds of container start time.\n\n**health_check_execution** — Kubernetes liveness and readiness probes that execute commands (exec probes) generate repeated process execution events for the probe command. These are expected periodic executions, not suspicious activity. Identify and exclude known probe command patterns.\n\n**ci_cd_build_containers** — CI/CD pipeline containers (build, test, deploy stages) legitimately execute package managers, compilers, and shell scripts as part of their function. These containers run in CI namespaces and should be excluded from production security alerts via namespace-scoped TracingPolicies.\n\n**operator_reconciliation_loops** — Kubernetes operators and controllers running inside the cluster execute kubectl, curl, or custom binaries as part of their reconciliation loops. These process executions are expected and continuous. Identify operator pods by label and exclude from anomaly detection.\n\n**sidecar_proxy_binary** — Service mesh sidecar proxies (Envoy, istio-proxy) execute specific binaries during startup and configuration reload that may trigger process execution events. These are expected mesh infrastructure activities.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector, Tetragon (Isovalent/Cilium).\n• Ensure the following data sources are available: `sourcetype=tetragon:events`, Tetragon JSON event stream via FluentBit or OTel filelog receiver.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Tetragon as a DaemonSet in Kubernetes. Define TracingPolicies to capture security-relevant events: process execution (detect shells, interpreters, network tools in production containers), file access (sensitive paths like /etc/shadow, SSH keys, Kubernetes secrets), network connections (unexpected outbound connections from application pods), and kprobe events (privilege escalation syscalls like ptrace, mount). Export Tetragon events via JSON log file or gRPC stream to the OTel Collector's …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"tetragon:events\"\n| eval event_type=case(\n    process_exec!=\"\", \"process_exec\",\n    process_file!=\"\", \"file_access\",\n    process_connect!=\"\", \"network_connect\",\n    process_kprobe!=\"\", \"kprobe\",\n    1==1, \"other\")\n| eval severity=case(\n    match(binary, \"/(nc|ncat|curl|wget|python|perl|ruby|bash|sh)$\"), \"High\",\n    match(filepath, \"/(etc/shadow|etc/passwd|.ssh/|.kube/)\"), \"Critical\",\n    match(event_type, \"kprobe\") AND match(function_name, \"sys_ptrace|sys_mount\"), \"Critical\",\n    1==1, \"Info\")\n| where severity IN (\"High\", \"Critical\")\n| stats count as events, values(binary) as binaries, values(filepath) as files, values(k8s_pod) as pods by _time, k8s_namespace, event_type, severity\n| sort -severity, -events\n| table _time, k8s_namespace, pods, event_type, severity, binaries, files, events\n```\n\nUnderstanding this SPL\n\n**eBPF Process-Level Security Observability (Tetragon)** — Container runtime security traditionally relies on syscall interception (Falco/seccomp) or agent-based file integrity monitoring — both with performance overhead and blind spots. Tetragon provides kernel-level visibility into process execution, file access, network connections, and privilege escalation via eBPF tracing policies, with minimal overhead. Ingesting Tetragon events into Splunk enables correlation with application traces and infrastructure metrics — connecting \"a…\n\nDocumented **Data sources**: `sourcetype=tetragon:events`, Tetragon JSON event stream via FluentBit or OTel filelog receiver. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector, Tetragon (Isovalent/Cilium). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: tetragon:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"tetragon:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **event_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where severity IN (\"High\", \"Critical\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by _time, k8s_namespace, event_type, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **eBPF Process-Level Security Observability (Tetragon)**): table _time, k8s_namespace, pods, event_type, severity, binaries, files, events\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**eBPF Process-Level Security Observability (Tetragon)** — Container runtime security traditionally relies on syscall interception (Falco/seccomp) or agent-based file integrity monitoring — both with performance overhead and blind spots. Tetragon provides kernel-level visibility into process execution, file access, network connections, and privilege escalation via eBPF tracing policies, with minimal overhead. Ingesting Tetragon events into Splunk enables correlation with application traces and infrastructure metrics — connecting \"a…\n\nDocumented **Data sources**: `sourcetype=tetragon:events`, Tetragon JSON event stream via FluentBit or OTel filelog receiver. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector, Tetragon (Isovalent/Cilium). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (security events per namespace), Table (critical events with pod and binary details), Bar chart (events by type and severity), Single value (critical events in last hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We place invisible sensors inside each program container that detect when something unusual happens — like an unauthorized person trying to open a locked filing cabinet or running tools they should not have — and immediately alert the security team.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.5.15",
              "n": "eBPF Auto-Instrumented Service Metrics (Beyla)",
              "c": "high",
              "f": "intermediate",
              "v": "Traditional application instrumentation requires code changes (OTel SDK integration) or agent injection (Java agent, .NET profiler). eBPF auto-instrumentation tools like Grafana Beyla generate RED metrics (Request rate, Error rate, Duration) for HTTP and gRPC services by observing kernel-level syscalls — zero code changes, zero sidecar overhead, zero application awareness required. This provides instant observability for legacy services, third-party applications, and polyglot environments where manual instrumentation is impractical or too slow to roll out.",
              "t": "Grafana Beyla (eBPF auto-instrumentation), Splunk Distribution of OpenTelemetry Collector",
              "d": "Beyla-generated OTel metrics and traces, `sourcetype=otel:metrics`, `sourcetype=otel:traces`",
              "q": "| mstats avg(_value) as val WHERE index=otel_metrics metric_name IN (\n    \"http.server.request.duration\",\n    \"http.server.request.body.size\",\n    \"rpc.server.duration\"\n  ) AND instrumentation_source=\"beyla\" BY metric_name, service_name, http_request_method, http_response_status_code span=5m\n| eval signal=case(\n    match(metric_name, \"duration\"), \"duration_ms\",\n    match(metric_name, \"body.size\"), \"request_size\",\n    match(metric_name, \"rpc\"), \"rpc_duration_ms\")\n| eval is_error=if(http_response_status_code>=500, 1, 0)\n| stats count as requests, sum(is_error) as errors, avg(val) as avg_duration by _time, service_name\n| eval error_rate_pct=round(errors*100/requests, 2)\n| eval req_per_sec=round(requests/300, 1)\n| table _time, service_name, req_per_sec, error_rate_pct, avg_duration\n| sort -error_rate_pct",
              "m": "Deploy Beyla as a DaemonSet or sidecar. Beyla uses eBPF to intercept HTTP/gRPC syscalls and generate OpenTelemetry-compatible metrics and traces without any application code changes. Configure Beyla to export via OTLP to the OTel Collector, which forwards to Splunk. Beyla generates standard OTel HTTP semantic conventions: `http.server.request.duration`, `http.server.request.body.size`, `http.request.method`, `http.response.status_code`. Tag Beyla-generated telemetry with `instrumentation_source=beyla` to distinguish from SDK-instrumented data. Use Beyla for immediate coverage of uninstrumented services while teams work on proper OTel SDK integration. Compare Beyla RED metrics with SDK-generated metrics for instrumented services to validate accuracy. Track which services rely on Beyla vs SDK instrumentation to measure manual instrumentation progress.",
              "z": "Table (service RED metrics from Beyla), Line chart (request rate and error rate per service), Bar chart (services by instrumentation source — Beyla vs SDK), Gauge (error rate per service).",
              "kfp": "**cold_start_latency_spike** — When a service pod starts or scales up, the first few requests experience higher latency due to JIT compilation, connection pool initialization, and cache warming. Beyla captures these requests, inflating the initial p95/p99 values. Exclude the first 60 seconds after pod start from latency alerting or use a minimum request count threshold.\n\n**health_check_skew** — Kubernetes health check endpoints (/health, /readyz, /livez) respond in sub-millisecond times, pulling down average latency numbers and inflating request counts. These endpoints should be filtered from RED metric calculations or tracked separately.\n\n**batch_job_endpoints** — Some services have endpoints that process batch operations with legitimately long response times (e.g., report generation, data export). These endpoints consistently trigger latency alerts even when functioning correctly. Create endpoint-specific thresholds in a lookup and exclude known long-running endpoints from standard latency alerting.\n\n**metric_cardinality_artifacts** — When url.path is not normalized, each unique path creates a separate metric series. The z-score calculation may flag low-traffic paths as anomalies simply because they have insufficient data points for a stable baseline. Require a minimum request count (e.g., 10 requests per 15-minute window) before computing z-scores.\n\n**beyla_binary_mismatch** — Beyla uses heuristics to identify service processes. If multiple processes listen on the same port (e.g., a sidecar proxy and the main application), Beyla may attribute all traffic to one process. This creates phantom services or missing services in the metrics. Use BEYLA_EXECUTABLE_NAME to disambiguate.\n\n**grpc_status_codes** — gRPC uses numeric status codes where some non-zero codes (like CANCELLED or DEADLINE_EXCEEDED) are expected in normal operation. Beyla maps these to error counts, inflating the error rate. Configure gRPC-aware error classification to treat expected status codes as non-errors.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Grafana Beyla (eBPF auto-instrumentation), Splunk Distribution of OpenTelemetry Collector.\n• Ensure the following data sources are available: Beyla-generated OTel metrics and traces, `sourcetype=otel:metrics`, `sourcetype=otel:traces`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Beyla as a DaemonSet or sidecar. Beyla uses eBPF to intercept HTTP/gRPC syscalls and generate OpenTelemetry-compatible metrics and traces without any application code changes. Configure Beyla to export via OTLP to the OTel Collector, which forwards to Splunk. Beyla generates standard OTel HTTP semantic conventions: `http.server.request.duration`, `http.server.request.body.size`, `http.request.method`, `http.response.status_code`. Tag Beyla-generated telemetry with `instrumentation_source=…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(_value) as val WHERE index=otel_metrics metric_name IN (\n    \"http.server.request.duration\",\n    \"http.server.request.body.size\",\n    \"rpc.server.duration\"\n  ) AND instrumentation_source=\"beyla\" BY metric_name, service_name, http_request_method, http_response_status_code span=5m\n| eval signal=case(\n    match(metric_name, \"duration\"), \"duration_ms\",\n    match(metric_name, \"body.size\"), \"request_size\",\n    match(metric_name, \"rpc\"), \"rpc_duration_ms\")\n| eval is_error=if(http_response_status_code>=500, 1, 0)\n| stats count as requests, sum(is_error) as errors, avg(val) as avg_duration by _time, service_name\n| eval error_rate_pct=round(errors*100/requests, 2)\n| eval req_per_sec=round(requests/300, 1)\n| table _time, service_name, req_per_sec, error_rate_pct, avg_duration\n| sort -error_rate_pct\n```\n\nUnderstanding this SPL\n\n**eBPF Auto-Instrumented Service Metrics (Beyla)** — Traditional application instrumentation requires code changes (OTel SDK integration) or agent injection (Java agent, .NET profiler). eBPF auto-instrumentation tools like Grafana Beyla generate RED metrics (Request rate, Error rate, Duration) for HTTP and gRPC services by observing kernel-level syscalls — zero code changes, zero sidecar overhead, zero application awareness required. This provides instant observability for legacy services, third-party applications, and…\n\nDocumented **Data sources**: Beyla-generated OTel metrics and traces, `sourcetype=otel:metrics`, `sourcetype=otel:traces`. **App/TA** (typical add-on context): Grafana Beyla (eBPF auto-instrumentation), Splunk Distribution of OpenTelemetry Collector. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: otel_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **signal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_error** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by _time, service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_rate_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **req_per_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **eBPF Auto-Instrumented Service Metrics (Beyla)**): table _time, service_name, req_per_sec, error_rate_pct, avg_duration\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (service RED metrics from Beyla), Line chart (request rate and error rate per service), Bar chart (services by instrumentation source — Beyla vs SDK), Gauge (error rate per service).",
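\n\nInstrumentation-coverage sketch for Step 4 (an example built on assumptions: SDK-instrumented services must emit the same metric with a different `instrumentation_source` value, and services lacking that dimension entirely will be dropped by the BY clause):\n\n```spl\n| mstats count(_value) as datapoints WHERE index=otel_metrics metric_name=\"http.server.request.duration\" BY service_name, instrumentation_source span=1d\n| stats values(instrumentation_source) as sources by service_name\n| eval coverage=if(mvcount(sources)=1 AND sources=\"beyla\", \"Beyla only\", \"SDK instrumented\")\n| stats count as services by coverage\n```\n\nThe resulting two-row table tracks manual-instrumentation progress over time.",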
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We attach an invisible speed camera to every service in our software city that automatically measures how fast each one responds and how often it makes mistakes, without the service ever knowing the camera is there.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "grafana",
                "opentelemetry"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.5.16",
              "n": "Kubernetes Event Correlation with Application Traces",
              "c": "high",
              "f": "advanced",
              "v": "When a Kubernetes OOMKill, pod eviction, or node pressure event coincides with an application error spike, the root cause is infrastructure — not application code. Without correlating K8s events with application traces, teams waste hours debugging application logic for failures caused by resource constraints. This correlation automatically links infrastructure events to their application impact, routing the incident to the right team (platform vs application) and reducing MTTR by eliminating misdiagnosis.",
              "t": "Splunk Distribution of OpenTelemetry Collector (k8s_events receiver), Splunk Observability Cloud",
              "d": "`sourcetype=kube:events`, `sourcetype=otel:traces`, `index=containers`",
              "q": "index=containers sourcetype=\"kube:events\" (reason=\"OOMKilling\" OR reason=\"Evicted\" OR reason=\"FailedScheduling\" OR reason=\"NodeNotReady\" OR reason=\"BackOff\")\n| eval k8s_event_severity=case(\n    reason==\"OOMKilling\", \"Critical\",\n    reason==\"Evicted\", \"High\",\n    reason==\"NodeNotReady\", \"Critical\",\n    1==1, \"Medium\")\n| rename involvedObject.name as pod_name, involvedObject.namespace as namespace\n| join type=left namespace [search index=traces sourcetype=\"otel:traces\" earliest=-15m latest=+15m\n    | eval is_error=if(status_code==\"ERROR\", 1, 0)\n    | stats count as span_count, sum(is_error) as error_count, avg(eval(duration_nano/1000000)) as avg_duration_ms by k8s_namespace\n    | rename k8s_namespace as namespace]\n| eval app_impact=case(\n    error_count > 10, \"Application Error Spike Detected\",\n    avg_duration_ms > 2000, \"Application Latency Spike Detected\",\n    isnotnull(span_count), \"Application Running — No Visible Impact\",\n    1==1, \"No Application Traces Available\")\n| table _time, namespace, pod_name, reason, k8s_event_severity, app_impact, error_count, avg_duration_ms, message\n| sort -k8s_event_severity",
              "m": "Ingest Kubernetes events via the OTel Collector's `k8s_events` receiver or via Splunk Connect for Kubernetes. Focus on resource-related events: OOMKilling (memory limit exceeded), Evicted (node under pressure), FailedScheduling (no capacity), NodeNotReady (node failure), BackOff (crash loops). For each infrastructure event, query application traces in a ±15 minute window around the event timestamp, filtered by the affected namespace. Look for concurrent error spikes or latency increases. Classify the correlation: if app errors spike within 5 minutes of an OOMKill in the same namespace, the infrastructure event likely caused the app errors. Generate a correlated incident report that links the K8s event with the affected traces, enabling platform teams to see the application impact and application teams to see the infrastructure root cause. Feed into ITSI as correlated notable events.",
              "z": "Timeline (K8s events overlaid with trace error rate), Table (correlated events with app impact), Bar chart (K8s events by reason), Line chart (trace error rate with event markers).",
              "kfp": "**coincidental_timing** — An application error spike may occur within the same 15-minute window as an infrastructure event without any causal relationship. Two independent issues — a code deployment causing errors and a separate OOMKill on an unrelated pod — appear correlated because they share a namespace and time window. Cross-reference with deployment records and change management logs.\n\n**graceful_pod_termination** — When Kubernetes evicts a pod gracefully, the pod receives SIGTERM and has a shutdown grace period to complete in-flight requests. If the application handles shutdown correctly, no errors are visible in traces even though an infrastructure event occurred. The correlation correctly shows NO_VISIBLE_IMPACT.\n\n**autoscaler_induced_events** — Horizontal Pod Autoscaler scale-down generates pod termination events that are normal operational behavior. These events may briefly correlate with trace latency increases as connections are rebalanced across remaining pods. Filter HPA-initiated events via the event message.\n\n**preemptible_node_rotation** — Cloud providers may preempt spot/preemptible nodes, generating NodeNotReady events followed by pod rescheduling. If the application is designed for this (with pod disruption budgets and graceful migration), the transient events do not indicate real impact. Check PDB compliance before escalating.\n\n**batch_job_memory_patterns** — Batch processing jobs intentionally consume large amounts of memory and may trigger OOMKill events at completion. These OOMKills are expected behavior for memory-intensive workloads and do not indicate a problem requiring remediation. Classify batch job namespaces separately.\n\n**probe_configuration_drift** — Readiness and liveness probe thresholds may become too aggressive after an application update changes startup time. The resulting Unhealthy events are not infrastructure problems but probe misconfiguration. Correlate with recent deployment events to identify configuration drift.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector (k8s_events receiver), Splunk Observability Cloud.\n• Ensure the following data sources are available: `sourcetype=kube:events`, `sourcetype=otel:traces`, `index=containers`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Kubernetes events via the OTel Collector's `k8s_events` receiver or via Splunk Connect for Kubernetes. Focus on resource-related events: OOMKilling (memory limit exceeded), Evicted (node under pressure), FailedScheduling (no capacity), NodeNotReady (node failure), BackOff (crash loops). For each infrastructure event, query application traces in a ±15 minute window around the event timestamp, filtered by the affected namespace. Look for concurrent error spikes or latency increases. Classif…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"kube:events\" (reason=\"OOMKilling\" OR reason=\"Evicted\" OR reason=\"FailedScheduling\" OR reason=\"NodeNotReady\" OR reason=\"BackOff\")\n| eval k8s_event_severity=case(\n    reason==\"OOMKilling\", \"Critical\",\n    reason==\"Evicted\", \"High\",\n    reason==\"NodeNotReady\", \"Critical\",\n    1==1, \"Medium\")\n| rename involvedObject.name as pod_name, involvedObject.namespace as namespace\n| join type=left namespace [search index=traces sourcetype=\"otel:traces\" earliest=-15m latest=+15m\n    | eval is_error=if(status_code==\"ERROR\", 1, 0)\n    | stats count as span_count, sum(is_error) as error_count, avg(eval(duration_nano/1000000)) as avg_duration_ms by k8s_namespace\n    | rename k8s_namespace as namespace]\n| eval app_impact=case(\n    error_count > 10, \"Application Error Spike Detected\",\n    avg_duration_ms > 2000, \"Application Latency Spike Detected\",\n    isnotnull(span_count), \"Application Running — No Visible Impact\",\n    1==1, \"No Application Traces Available\")\n| table _time, namespace, pod_name, reason, k8s_event_severity, app_impact, error_count, avg_duration_ms, message\n| sort -k8s_event_severity\n```\n\nUnderstanding this SPL\n\n**Kubernetes Event Correlation with Application Traces** — When a Kubernetes OOMKill, pod eviction, or node pressure event coincides with an application error spike, the root cause is infrastructure — not application code. Without correlating K8s events with application traces, teams waste hours debugging application logic for failures caused by resource constraints. This correlation automatically links infrastructure events to their application impact, routing the incident to the right team (platform vs application) and reducing…\n\nDocumented **Data sources**: `sourcetype=kube:events`, `sourcetype=otel:traces`, `index=containers`. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector (k8s_events receiver), Splunk Observability Cloud. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: kube:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"kube:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **k8s_event_severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **app_impact** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Kubernetes Event Correlation with Application Traces**): table _time, namespace, pod_name, reason, k8s_event_severity, app_impact, error_count, avg_duration_ms, message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (K8s events overlaid with trace error rate), Table (correlated events with app impact), Bar chart (K8s events by reason), Line chart (trace error rate with event markers).",
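\n\nA join-free variant (a sketch that correlates by 5-minute bucket and namespace instead of per-event windows; the error threshold is an assumption to tune):\n\n```spl\n(index=containers sourcetype=\"kube:events\" reason IN (\"OOMKilling\", \"Evicted\", \"NodeNotReady\"))\nOR (index=traces sourcetype=\"otel:traces\")\n| eval namespace=coalesce('involvedObject.namespace', k8s_namespace)\n| bin _time span=5m\n| stats sum(eval(if(sourcetype==\"kube:events\", 1, 0))) as infra_events, sum(eval(if(status_code==\"ERROR\", 1, 0))) as app_errors by _time, namespace\n| where infra_events > 0 AND app_errors > 10\n```\n\nBecause both feeds share one time bucket, the correlation is symmetric and avoids `join` subsearch limits.",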
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "When something breaks in the building's plumbing and the restaurant upstairs starts getting complaints, we connect those two events together so the maintenance crew knows the real problem is pipes, not the chef.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.5.17",
              "n": "Kubernetes Resource Quota and LimitRange Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Kubernetes resource quotas and LimitRanges prevent any single team from monopolizing cluster resources. When a namespace approaches its quota, new pod deployments fail silently — the deployment controller keeps retrying but pods never schedule. Monitoring quota utilization trending per namespace detects teams approaching limits before their deployments start failing, enabling proactive quota adjustment rather than reactive incident response at 2 AM when the next deployment fails.",
              "t": "Splunk Distribution of OpenTelemetry Collector (k8sobjects receiver), Splunk Connect for Kubernetes",
              "d": "`sourcetype=kube:objects:resourcequotas`, `sourcetype=kube:events`",
              "q": "index=containers sourcetype=\"kube:objects:resourcequotas\"\n| spath\n| eval cpu_used_pct=round(status.used.cpu*100/status.hard.cpu, 1)\n| eval mem_used_pct=round(status.used.memory*100/status.hard.memory, 1)\n| eval pods_used_pct=round(status.used.pods*100/status.hard.pods, 1)\n| stats latest(cpu_used_pct) as cpu_pct, latest(mem_used_pct) as mem_pct, latest(pods_used_pct) as pods_pct by metadata.namespace, metadata.name\n| eval max_util=max(cpu_pct, mem_pct, pods_pct)\n| eval risk=case(\n    max_util >= 90, \"Critical - Near Limit\",\n    max_util >= 75, \"Warning - Approaching Limit\",\n    max_util >= 50, \"Info - Moderate Usage\",\n    1==1, \"OK\")\n| where risk!=\"OK\"\n| table metadata.namespace, metadata.name, cpu_pct, mem_pct, pods_pct, max_util, risk\n| sort -max_util",
              "m": "Use the OTel Collector's `k8sobjects` receiver to collect ResourceQuota objects from the Kubernetes API. Each ResourceQuota contains `status.used` and `status.hard` for CPU, memory, pods, services, and other resources. Calculate utilization percentage for each resource type per namespace. Alert when any namespace exceeds 75% (warning) or 90% (critical) of any quota dimension. Correlate with FailedScheduling events (from UC-3.5.16) to confirm that quota exhaustion is causing pod scheduling failures. Track quota utilization trends over 30 days to forecast when namespaces will hit limits based on growth rate. Provide monthly capacity reports to platform teams with recommendations for quota adjustments. Also monitor LimitRange violations — pods that request resources outside the defined min/max range fail admission and generate events.",
              "z": "Bar chart (quota utilization % by namespace), Table (namespaces approaching limits), Heatmap (namespace × resource type utilization), Line chart (quota utilization trend per namespace over 30 days).",
              "kfp": "**transient_ci_cd_spikes** — CI/CD pipelines running in dedicated namespaces temporarily consume large amounts of CPU and memory quota during build and test phases. These spikes push quota utilization to CRITICAL levels but self-resolve within minutes as pipeline jobs complete. Correlate with pipeline execution schedules and use time-averaged utilization rather than peak for alerting.\n\n**horizontal_autoscaler_bursts** — The Horizontal Pod Autoscaler may scale a deployment to its maximum replica count during traffic peaks, temporarily consuming most of the namespace quota. This is expected behavior that the quota is designed to cap. Alert on sustained high utilization rather than momentary peaks.\n\n**job_completion_lag** — Completed Kubernetes Jobs and their pods continue to count against the quota until garbage collected (controlled by ttlSecondsAfterFinished). A namespace may appear near quota exhaustion even though the actual running workload is small. Check for completed-but-uncleaned jobs contributing to quota usage.\n\n**resource_unit_mismatch** — ResourceQuotas can track both requests and limits separately. A namespace at 90% of its CPU requests quota may have only 50% of its CPU limits quota consumed. The risk assessment should evaluate each dimension independently rather than combining them.\n\n**namespace_lifecycle_events** — When namespaces are created or deleted, quota objects appear or disappear, causing utilization jumps or drops. New namespaces start at 0% utilization (benign) while deleted namespaces cause sudden data gaps. Filter quota changes within the first hour of namespace creation.\n\n**quota_scope_confusion** — ResourceQuotas can have scopes (BestEffort, NotBestEffort, Terminating, NotTerminating) that limit which pods count against the quota. A quota with BestEffort scope only tracks pods without resource requests, making the utilization percentage misleading if interpreted as total namespace consumption.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector (k8sobjects receiver), Splunk Connect for Kubernetes.\n• Ensure the following data sources are available: `sourcetype=kube:objects:resourcequotas`, `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse the OTel Collector's `k8sobjects` receiver to collect ResourceQuota objects from the Kubernetes API. Each ResourceQuota contains `status.used` and `status.hard` for CPU, memory, pods, services, and other resources. Calculate utilization percentage for each resource type per namespace. Alert when any namespace exceeds 75% (warning) or 90% (critical) of any quota dimension. Correlate with FailedScheduling events (from UC-3.5.16) to confirm that quota exhaustion is causing pod scheduling failur…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"kube:objects:resourcequotas\"\n| spath\n| eval cpu_used_pct=round(status.used.cpu*100/status.hard.cpu, 1)\n| eval mem_used_pct=round(status.used.memory*100/status.hard.memory, 1)\n| eval pods_used_pct=round(status.used.pods*100/status.hard.pods, 1)\n| stats latest(cpu_used_pct) as cpu_pct, latest(mem_used_pct) as mem_pct, latest(pods_used_pct) as pods_pct by metadata.namespace, metadata.name\n| eval max_util=max(cpu_pct, mem_pct, pods_pct)\n| eval risk=case(\n    max_util >= 90, \"Critical - Near Limit\",\n    max_util >= 75, \"Warning - Approaching Limit\",\n    max_util >= 50, \"Info - Moderate Usage\",\n    1==1, \"OK\")\n| where risk!=\"OK\"\n| table metadata.namespace, metadata.name, cpu_pct, mem_pct, pods_pct, max_util, risk\n| sort -max_util\n```\n\nUnderstanding this SPL\n\n**Kubernetes Resource Quota and LimitRange Compliance** — Kubernetes resource quotas and LimitRanges prevent any single team from monopolizing cluster resources. When a namespace approaches its quota, new pod deployments fail silently — the deployment controller keeps retrying but pods never schedule. Monitoring quota utilization trending per namespace detects teams approaching limits before their deployments start failing, enabling proactive quota adjustment rather than reactive incident response at 2 AM when the next deployment…\n\nDocumented **Data sources**: `sourcetype=kube:objects:resourcequotas`, `sourcetype=kube:events`. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector (k8sobjects receiver), Splunk Connect for Kubernetes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: kube:objects:resourcequotas. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"kube:objects:resourcequotas\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• `eval` defines or adjusts **cpu_used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mem_used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **pods_used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by metadata.namespace, metadata.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **max_util** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where risk!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Kubernetes Resource Quota and LimitRange Compliance**): table metadata.namespace, metadata.name, cpu_pct, mem_pct, pods_pct, max_util, risk\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (quota utilization % by namespace), Table (namespaces approaching limits), Heatmap (namespace × resource type utilization), Line chart (quota utilization trend per namespace over 30 days).",
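\n\nNormalization sketch for quantity strings (CPU only; the millicore handling is an assumption, and memory units such as `Mi`/`Gi` need the same pattern with their own multipliers):\n\n```spl\nindex=containers sourcetype=\"kube:objects:resourcequotas\"\n| spath\n| eval used_cpu_m=if(match('status.used.cpu', \"m$\"), tonumber(replace('status.used.cpu', \"m$\", \"\")), tonumber('status.used.cpu')*1000)\n| eval hard_cpu_m=if(match('status.hard.cpu', \"m$\"), tonumber(replace('status.hard.cpu', \"m$\", \"\")), tonumber('status.hard.cpu')*1000)\n| eval cpu_used_pct=round(used_cpu_m*100/hard_cpu_m, 1)\n| stats latest(cpu_used_pct) as cpu_pct by metadata.namespace, metadata.name\n```",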
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "Each department in our company has a spending budget for computer resources, and we monitor how close each department is to its limit so we can raise the budget before their next project gets rejected for insufficient funds.",
              "mtype": [
                "Capacity",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 14,
            "none": 0
          }
        },
        {
          "i": "3.6",
          "n": "Container & Kubernetes Trending",
          "u": [
            {
              "i": "3.6.1",
              "n": "Pod Restart Rate Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "A rising cluster-wide pod restart rate points to unstable workloads, resource pressure, or bad rollouts before a single namespace triggers a critical alert. Trending over 30 days reveals whether reliability is improving or degrading after platform changes.",
              "t": "Splunk Connect for Kubernetes, OpenTelemetry Collector for Kubernetes",
              "d": "`index=containers` metrics via `mstats` (`kube_pod_container_status_restarts_total`), or `sourcetype=kube:events`",
              "q": "| mstats latest(kube_pod_container_status_restarts_total) as restarts WHERE index=containers by namespace span=1d\n| timechart span=1d sum(restarts) as total_restarts\n| trendline sma7(total_restarts) as restart_trend",
              "m": "Ensure kube-state-metrics or the OpenTelemetry Kubernetes receiver emits container restart counters into Splunk metrics. Run the panel on a 30-day window and baseline normal daily restarts per cluster. Alert when the 7-day moving average exceeds the prior 30-day baseline by more than 50%. Exclude system namespaces (kube-system, monitoring) if they skew the signal.",
              "z": "Line chart (daily restarts with 7-day SMA, 30 days), area chart by namespace.",
              "kfp": "**rolling_deployment_restart** — Every Kubernetes rolling deployment replaces pods by terminating old replicas and starting new ones, which increments the restart counter for the old containers. During a busy deployment window affecting 10+ services, the cluster-wide restart count spikes without indicating instability. Correlate with `sourcetype=kube:events reason=ScalingReplicaSet` and suppress alerts for 15 minutes after deployment events in the same namespace.\n\n**liveness_probe_tuning** — Overly aggressive liveness probes (short `initialDelaySeconds`, tight `timeoutSeconds`) cause healthy but slow-starting containers to be killed and restarted repeatedly. The restart counter climbs but the application is functioning correctly once warm. Check liveness probe configuration: `kubectl describe pod <pod> | grep Liveness` and adjust thresholds before treating the restarts as failures.\n\n**node_drain_churn** — When nodes are cordoned and drained for maintenance or autoscaler scale-down, all pods on those nodes are evicted and rescheduled, producing a burst of restarts concentrated in the drain window. Compare restart spikes with node lifecycle events: `sourcetype=kube:events reason=Evicted` or `reason=NodeNotReady` and suppress during planned maintenance.\n\n**oom_kill_resource_sizing** — Containers running near their memory limits may be OOMKilled by the kernel during transient load spikes. The restart counter increments but the application recovers immediately after restart. Distinguish genuine memory leaks (steadily rising restarts over days) from transient OOMs (isolated spikes) by checking whether the same pod appears in consecutive alert windows.\n\n**hpa_scale_down_restart** — When a HorizontalPodAutoscaler scales down a deployment, surplus pods are terminated, incrementing the restart counter for those containers. This is normal autoscaler behavior, not instability. Filter by checking whether `kube_pod_status_phase` transitions to `Succeeded` (graceful termination) rather than `Failed` (crash).\n\n**preemption_spot_instance** — Pods running on preemptible or spot instance nodes are terminated when the cloud provider reclaims the node. Restarts from preemption are expected cost-optimization behavior, not application instability. Identify preemption restarts by correlating with node labels (`node.kubernetes.io/instance-type` containing `spot` or `preemptible`) or event reason `Preempted`.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Kubernetes, OpenTelemetry Collector for Kubernetes.\n• Ensure the following data sources are available: `index=containers` metrics via `mstats` (`kube_pod_container_status_restarts_total`), or `sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure kube-state-metrics or the OpenTelemetry Kubernetes receiver emits container restart counters into Splunk metrics. Run the panel on a 30-day window and baseline normal daily restarts per cluster. Alert when the 7-day moving average exceeds the prior 30-day baseline by more than 50%. Exclude system namespaces (kube-system, monitoring) if they skew the signal.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats latest(kube_pod_container_status_restarts_total) as restarts WHERE index=containers by namespace span=1d\n| timechart span=1d sum(restarts) as total_restarts\n| trendline sma7(total_restarts) as restart_trend\n```\n\nUnderstanding this SPL\n\n**Pod Restart Rate Trending** — A rising cluster-wide pod restart rate points to unstable workloads, resource pressure, or bad rollouts before a single namespace triggers a critical alert. Trending over 30 days reveals whether reliability is improving or degrading after platform changes.\n\nDocumented **Data sources**: `index=containers` metrics via `mstats` (`kube_pod_container_status_restarts_total`), or `sourcetype=kube:events`. **App/TA** (typical add-on context): Splunk Connect for Kubernetes, OpenTelemetry Collector for Kubernetes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Pod Restart Rate Trending**): trendline sma7(total_restarts) as restart_trend\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily restarts with 7-day SMA, 30 days), area chart by namespace.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We count how often the small programs inside our systems crash and restart each day, charting the trend over a month so the team can tell whether things are getting more or less stable.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.6.2",
              "n": "Container Image Vulnerability Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Tracking critical and high CVE counts per image over time shows whether your build pipeline and patching cadence are reducing risk or if new vulnerabilities are outpacing remediation. Supports prioritization of image rebuilds and exception reviews.",
              "t": "Trivy/Grype/Snyk CI integration, Splunk HEC",
              "d": "`index=containers sourcetype=trivy:scan` OR `sourcetype=grype:scan`",
              "q": "index=containers sourcetype IN (\"trivy:scan\", \"grype:scan\")\n| where Severity IN (\"CRITICAL\", \"HIGH\")\n| bin _time span=1d\n| stats dc(VulnerabilityID) as cve_count by _time, image\n| timechart span=1d sum(cve_count) as total_cves\n| trendline sma7(total_cves) as cve_trend",
              "m": "Ingest scanner JSON on every build or scheduled registry scan with stable image and severity fields. Normalize severity to CRITICAL/HIGH. Schedule a daily saved search to populate a summary index. Compare jumps after base-image updates as expected versus organic growth. Pair with a lookup of accepted CVEs to subtract noise for trending net-new exposure.",
              "z": "Stacked area chart (critical/high CVEs over time), line chart (7-day SMA), table of top images by CVE count.",
              "kfp": "**cve_database_refresh_spike** — When Trivy or Grype update their vulnerability databases (typically daily), a batch of newly published CVEs appears across all images simultaneously, creating an artificial spike in the trending chart. The spike reflects new discoveries in the NVD, not changes in the images themselves. Compare the spike date with the scanner's database update timestamp and annotate the trend chart accordingly.\n\n**scheduled_scan_gap** — If the nightly Harbor scheduled scan fails or is delayed (due to maintenance windows, resource contention, or job queue backlog), the trending chart shows a gap or zero-count day that artificially lowers the moving average. When the next scan runs, the accumulated counts create a false spike. Verify scan completion via `sourcetype=harbor:webhook event_type=SCANNING_COMPLETED` and interpolate missing days.\n\n**base_image_mass_update** — When a team upgrades the base image across multiple images simultaneously (e.g., alpine 3.14 → 3.19), the per-image trajectory labels all affected images as IMPROVING in the same week, creating a misleading impression of broad remediation when only one base-image change drove the improvement. Group trending by base image tag to separate base-image effects from application-level fixes.\n\n**scanner_version_drift** — Upgrading the scanner version (e.g., Trivy 0.45 → 0.50) can change CVE detection logic, adding or removing CVE matches for the same image. This creates a step change in the trend that does not reflect actual vulnerability changes. Note scanner version in scan metadata and annotate trend charts at upgrade boundaries.\n\n**end_of_life_image_noise** — Images built on end-of-life OS distributions (e.g., Debian Stretch, Ubuntu 18.04) accumulate CVEs indefinitely because no patches will ever be released. These images inflate the DEGRADING trajectory count and should be flagged for retirement rather than patched. Filter by base OS EOL status using a lookup.\n\n**test_image_churn** — Development and test pipelines that rebuild images frequently with experimental dependencies create high-frequency scan results that dominate the trending chart. Filter by image tag patterns (exclude `dev-*`, `test-*`, `snapshot-*`) or namespace to isolate production image trends.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Trivy/Grype/Snyk CI integration, Splunk HEC.\n• Ensure the following data sources are available: `index=containers sourcetype=trivy:scan` OR `sourcetype=grype:scan`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest scanner JSON on every build or scheduled registry scan with stable image and severity fields. Normalize severity to CRITICAL/HIGH. Schedule a daily saved search to populate a summary index. Compare jumps after base-image updates as expected versus organic growth. Pair with a lookup of accepted CVEs to subtract noise for trending net-new exposure.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype IN (\"trivy:scan\", \"grype:scan\")\n| where Severity IN (\"CRITICAL\", \"HIGH\")\n| bin _time span=1d\n| stats dc(VulnerabilityID) as cve_count by _time, image\n| timechart span=1d sum(cve_count) as total_cves\n| trendline sma7(total_cves) as cve_trend\n```\n\nUnderstanding this SPL\n\n**Container Image Vulnerability Trending** — Tracking critical and high CVE counts per image over time shows whether your build pipeline and patching cadence are reducing risk or if new vulnerabilities are outpacing remediation. Supports prioritization of image rebuilds and exception reviews.\n\nDocumented **Data sources**: `index=containers sourcetype=trivy:scan` OR `sourcetype=grype:scan`. **App/TA** (typical add-on context): Trivy/Grype/Snyk CI integration, Splunk HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Severity IN (\"CRITICAL\", \"HIGH\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, image** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Container Image Vulnerability Trending**): trendline sma7(total_cves) as cve_trend\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area chart (critical/high CVEs over time), line chart (7-day SMA), table of top images by CVE count.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "Each day we chart how many known security flaws exist across all our software packages, tracking whether the number is going up or down over weeks, so management can tell if the team is fixing problems faster than new ones appear.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.6.3",
              "n": "Deployment Velocity Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Deployments per day or week by namespace show delivery cadence and whether release practices are steady or chaotic. Sudden drops or spikes often correlate with code freezes, incidents, or CI/CD automation changes.",
              "t": "Splunk Connect for Kubernetes, Argo CD / Flux logs",
              "d": "`index=containers sourcetype=kube:audit` or `sourcetype=kube:controller`",
              "q": "index=containers sourcetype=\"kube:audit\"\n| search objectRef.resource=\"deployments\" verb IN (\"create\", \"patch\", \"update\")\n| eval namespace=coalesce('objectRef.namespace', namespace)\n| timechart span=1d dc(objectRef.name) as deployments by namespace",
              "m": "Ingest Kubernetes audit logs that record Deployment changes. Map objectRef.resource, verb, and namespace fields. If audit logs are unavailable, use Argo CD or Flux application sync events with the same grouping. Use span=1d or span=1w depending on whether you want daily or weekly velocity. Exclude system namespaces that skew the chart.",
              "z": "Column chart (deployments per period by namespace), line chart (weekly total velocity trend).",
              "kfp": "**hpa_scaling_noise** — HorizontalPodAutoscaler-driven scaling generates ScalingReplicaSet events that are not code deployments but resource adjustments. These inflate the velocity count during load spikes. Filter by excluding events where the deployment's `observedGeneration` did not change between consecutive events — HPA scaling does not increment the generation counter.\n\n**cronjob_deployment_churn** — CronJob-managed workloads create and destroy pods on schedule, generating deployment-like events that inflate the velocity count. Exclude namespaces or deployments known to contain only CronJobs via the namespace_owners lookup.\n\n**ci_environment_rebuild** — Development and staging namespaces may have automated pipelines that rebuild and redeploy every commit, producing artificially high deployment counts that skew the cluster-wide SMA. Filter the production-velocity alert to production-tier namespaces only using the service_tier field from the namespace_owners lookup.\n\n**rollback_double_count** — A failed deployment followed by a rollback generates two sets of ScalingReplicaSet events (the failed rollout and the rollback), counting as two deployments when only one intentional change occurred. Correlate with ProgressDeadlineExceeded events and count the rollback as part of the same deployment lifecycle.\n\n**weekend_freeze_pattern** — Organizations that do not deploy on weekends will consistently trigger the FREEZE flag on Saturdays and Sundays. Adjust the FREEZE threshold to exclude weekends or use a weekday-only SMA by filtering `| where strftime(_time, \"%u\") < 6`.\n\n**namespace_migration_surge** — When workloads are migrated between namespaces (e.g., during cluster consolidation or namespace reorganization), both the source and destination namespaces show deployment activity that inflates the velocity count. Correlate with namespace lifecycle events and suppress during planned migration windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Kubernetes, Argo CD / Flux logs.\n• Ensure the following data sources are available: `index=containers sourcetype=kube:audit` or `sourcetype=kube:controller`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Kubernetes audit logs that record Deployment changes. Map objectRef.resource, verb, and namespace fields. If audit logs are unavailable, use Argo CD or Flux application sync events with the same grouping. Use span=1d or span=1w depending on whether you want daily or weekly velocity. Exclude system namespaces that skew the chart.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"kube:audit\"\n| search objectRef.resource=\"deployments\" verb IN (\"create\", \"patch\", \"update\")\n| eval namespace=coalesce('objectRef.namespace', namespace)\n| timechart span=1d dc(objectRef.name) as deployments by namespace\n```\n\nUnderstanding this SPL\n\n**Deployment Velocity Trending** — Deployments per day or week by namespace show delivery cadence and whether release practices are steady or chaotic. Sudden drops or spikes often correlate with code freezes, incidents, or CI/CD automation changes.\n\nDocumented **Data sources**: `index=containers sourcetype=kube:audit` or `sourcetype=kube:controller`. **App/TA** (typical add-on context): Splunk Connect for Kubernetes, Argo CD / Flux logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: kube:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"kube:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **namespace** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by namespace** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (deployments per period by namespace), line chart (weekly total velocity trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We track how many software updates the team pushes out each day, charting whether they are releasing more or fewer changes over time, so leadership can spot when delivery is speeding up dangerously or stalling unexpectedly.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "argocd",
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.6.4",
              "n": "Resource Request vs Limit Utilization Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Comparing actual CPU and memory usage to requested and limit values shows whether workloads are over-provisioned, at risk of throttling or OOM, or drifting after code changes. Trending utilization percentages highlights capacity pressure before quotas cause scheduling failures.",
              "t": "OpenTelemetry Collector, Prometheus-compatible scrape to Splunk",
              "d": "`index=containers` via `mstats` — `k8s.pod.cpu.utilization`, `k8s.pod.memory.usage`",
              "q": "| mstats avg(k8s.pod.cpu.utilization) as cpu_util WHERE index=containers by namespace span=1d\n| sort namespace, _time\n| streamstats window=7 avg(cpu_util) as cpu_trend by namespace\n| table _time, namespace, cpu_util, cpu_trend",
              "m": "Align metric names with your Prometheus/OpenTelemetry pipeline. Ensure pod labels match between usage and request series. Cap percentages at 100% for display where usage can briefly exceed requests. Review namespaces trending above 85% of request or near limit consistently for right-sizing or HPA tuning. Duplicate the panel for memory utilization.",
              "z": "Line chart (avg CPU % of request by namespace, 30 days), dual panel for memory, heatmap (namespace x day).",
              "kfp": "**batch_workload_idle** — Batch processing jobs (CronJobs, Spark executors, data pipelines) alternate between full utilization during processing and near-zero usage between runs. Average utilization metrics classify these as OVER_PROVISIONED even though their peak utilization justifies the resource request. Use max or P95 utilization over a 24-hour window for batch workloads instead of average.\n\n**hpa_scaling_headroom** — HorizontalPodAutoscaler targets a specific CPU utilization percentage (e.g., 50%) by scaling the replica count. Individual pods appear under-utilized relative to their requests because the HPA deliberately maintains headroom. The waste score reflects intentional HPA design, not misconfiguration.\n\n**jvm_heap_reservation** — Java applications configured with -Xmx reserve the maximum heap size at startup, causing container_memory_working_set_bytes to stabilize at the JVM heap max even when actual object occupancy is low. Memory limit utilization appears high even though the JVM can handle more work within its allocated heap.\n\n**init_container_spike** — Init containers run briefly during pod startup and may consume significant CPU or memory during their execution window. If the scrape coincides with init container activity, it captures a transient spike that does not represent steady-state utilization. Filter by container name to exclude init containers from the analysis.\n\n**guaranteed_qos_design** — Containers in the Kubernetes Guaranteed QoS class (request equals limit for all resources) always show 100% request-to-limit ratio regardless of actual usage. This is intentional design for latency-sensitive workloads that need guaranteed scheduling. Do not flag these as AT_RISK based on the request/limit ratio alone.\n\n**vertical_pod_autoscaler_transition** — When VPA updates resource requests, there is a transition period where the new request takes effect after pod restart but the old utilization baseline persists in the trend data. This creates a temporary misclassification until enough data accumulates under the new request values.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenTelemetry Collector, Prometheus-compatible scrape to Splunk.\n• Ensure the following data sources are available: `index=containers` via `mstats` — `k8s.pod.cpu.utilization`, `k8s.pod.memory.usage`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign metric names with your Prometheus/OpenTelemetry pipeline. Ensure pod labels match between usage and request series. Cap percentages at 100% for display where usage can briefly exceed requests. Review namespaces trending above 85% of request or near limit consistently for right-sizing or HPA tuning. Duplicate the panel for memory utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(k8s.pod.cpu.utilization) as cpu_util WHERE index=containers by namespace span=1d\n| sort namespace, _time\n| streamstats window=7 avg(cpu_util) as cpu_trend by namespace\n| table _time, namespace, cpu_util, cpu_trend\n```\n\nUnderstanding this SPL\n\n**Resource Request vs Limit Utilization Trending** — Comparing actual CPU and memory usage to requested and limit values shows whether workloads are over-provisioned, at risk of throttling or OOM, or drifting after code changes. Trending utilization percentages highlights capacity pressure before quotas cause scheduling failures.\n\nDocumented **Data sources**: `index=containers` via `mstats` — `k8s.pod.cpu.utilization`, `k8s.pod.memory.usage`. **App/TA** (typical add-on context): OpenTelemetry Collector, Prometheus-compatible scrape to Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by namespace** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Resource Request vs Limit Utilization Trending**): table _time, namespace, cpu_util, cpu_trend\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (avg CPU % of request by namespace, 30 days), dual panel for memory, heatmap (namespace x day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We compare how much computer power each program was promised versus how much it actually uses, spotting those that waste resources by hoarding too much or risk crashing by using more than their share.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry",
                "prometheus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.6.5",
              "n": "Kubernetes Event Error Rate Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Warning and error Kubernetes events aggregate noise from image pulls, scheduling failures, and control-plane issues. A rising daily rate of Warning/Error events signals systemic problems even when individual alerts are not firing.",
              "t": "Splunk Connect for Kubernetes",
              "d": "`index=containers sourcetype=kube:events`",
              "q": "index=containers sourcetype=\"kube:events\" type IN (\"Warning\", \"Error\")\n| timechart span=1d count by type\n| trendline sma7(Warning) as warning_trend sma7(Error) as error_trend",
              "m": "Forward Kubernetes events via the Splunk connector with the type field preserved. Filter out known noisy reasons with a lookup. Baseline typical Warning vs Error counts per day and alert when Error count exceeds threshold or Warning grows week-over-week. Optionally split by involvedObject.namespace for drilldown.",
              "z": "Column chart (Warning vs Error per day), line chart overlay with 7-day SMA.",
              "kfp": "**event_ttl_gap** — Kubernetes events have a default TTL of 1 hour, meaning events generated during collector downtime are permanently lost. A gap in event collection creates an artificial dip in the trend chart followed by a return to normal that may look like a spike. Cross-reference collector pod uptime with trend anomalies to identify collection gaps.\n\n**node_lifecycle_churn** — Autoscaler node additions and removals generate bursts of Warning events (NodeNotReady, FailedAttachVolume, Taint-related scheduling warnings) that reflect normal cluster scaling behavior, not application problems. Correlate with autoscaler events and suppress during scale-up/down windows.\n\n**webhook_timeout_noise** — Admission webhooks that occasionally timeout generate Warning events from the API server that affect the cluster-wide trend without indicating application issues. These events originate from `source.component=apiserver` rather than `kubelet` or `scheduler`. Filter by component to separate infrastructure warnings from application warnings.\n\n**count_field_inflation** — Kubernetes deduplicates identical events by incrementing the count field. If the event collector treats each count update as a new event, the daily warning count is inflated by repetition rather than distinct incidents. Use `| dedup involvedObject.uid reason message` or sum the `count` field for accurate trending.\n\n**development_namespace_noise** — Development and testing namespaces with active experimentation generate high Warning event volumes (failed pulls, OOMKills from undersized limits, scheduling conflicts) that dominate the cluster-wide trend. Use the namespace exclusion or ownership lookup to separate production trend from development noise.\n\n**controller_retry_amplification** — Kubernetes controllers retry failed operations with exponential backoff, generating a new Warning event on each retry attempt. A single underlying failure can produce dozens of Warning events over minutes, inflating the daily count disproportionately to the actual number of distinct issues.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Kubernetes.\n• Ensure the following data sources are available: `index=containers sourcetype=kube:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Kubernetes events via the Splunk connector with the type field preserved. Filter out known noisy reasons with a lookup. Baseline typical Warning vs Error counts per day and alert when Error count exceeds threshold or Warning grows week-over-week. Optionally split by involvedObject.namespace for drilldown.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=\"kube:events\" type IN (\"Warning\", \"Error\")\n| timechart span=1d count by type\n| trendline sma7(Warning) as warning_trend sma7(Error) as error_trend\n```\n\nUnderstanding this SPL\n\n**Kubernetes Event Error Rate Trending** — Warning and error Kubernetes events aggregate noise from image pulls, scheduling failures, and control-plane issues. A rising daily rate of Warning/Error events signals systemic problems even when individual alerts are not firing.\n\nDocumented **Data sources**: `index=containers sourcetype=kube:events`. **App/TA** (typical add-on context): Splunk Connect for Kubernetes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: kube:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by type** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Kubernetes Event Error Rate Trending**): trendline sma7(Warning) as warning_trend sma7(Error) as error_trend\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (Warning vs Error per day), line chart overlay with 7-day SMA.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "We count the daily warning messages that our computer systems produce and chart the trend over weeks, so the team can tell whether problems are increasing, decreasing, or staying the same.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "3.6.6",
              "n": "Ingress Traffic Volume Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Ingress requests per second trended weekly shows growth in user load, campaign effects, or misconfigured clients hammering APIs. Supports capacity planning for ingress controllers and upstream services.",
              "t": "NGINX Ingress Controller / Istio Ingress Gateway logs forwarded to Splunk",
              "d": "`index=containers sourcetype=nginx:ingress` or `sourcetype=istio:ingress`",
              "q": "index=containers sourcetype IN (\"nginx:ingress\", \"istio:ingress\")\n| timechart span=1w count as weekly_requests\n| eval avg_rps=round(weekly_requests/(7*86400), 2)\n| trendline sma4(avg_rps) as rps_trend",
              "m": "Prefer RED metrics from the service mesh or ingress controller scraped into Splunk metrics. If only access logs exist, each log line represents one request. Use span=1w for medium-term trending. Tag by ingress or virtual host for breakdowns. Correlate traffic spikes with marketing campaigns or releases.",
              "z": "Line chart (weekly average RPS with 4-week SMA), optional breakdown by ingress class or hostname.",
              "kfp": "**health_check_inflation** — Kubernetes liveness and readiness probes, plus external health check systems (load balancers, CDNs, uptime monitors), generate a steady stream of requests that inflate the daily request count without representing real user traffic. These typically hit a fixed path like /healthz. Filter by URI or user-agent to separate synthetic health checks from user traffic.\n\n**bot_and_crawler_traffic** — Search engine crawlers, vulnerability scanners, and automated bots can represent 10–40% of total ingress traffic depending on the application. A spike in bot traffic appears as a legitimate traffic surge. Use user-agent classification and rate-per-client-IP analysis to separate bot traffic from human traffic.\n\n**cdn_cache_bypass** — When a CDN cache is invalidated or bypassed (e.g., cache-busting query parameters, Vary header changes), all requests that were previously served from cache suddenly hit the origin ingress controller. This appears as a traffic surge but represents the same user volume with changed caching behavior. Correlate with CDN cache hit ratio metrics.\n\n**timezone_aggregation_artifact** — Daily aggregation bins are aligned to UTC by default. For applications with traffic concentrated in a specific timezone, a UTC day boundary splits the peak traffic period across two daily buckets, making both days appear to have moderate traffic rather than one day with a clear peak. Use `| eval _time=relative_time(_time, \"@d\")` with timezone adjustment if needed.\n\n**ingress_controller_restart** — When an ingress controller pod restarts, in-flight connections are dropped and clients retry, creating a brief spike in request count immediately after the restart. This appears as a traffic surge but is actually connection recovery. Correlate with ingress controller pod restart events from UC-3.6.1.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NGINX Ingress Controller / Istio Ingress Gateway logs forwarded to Splunk.\n• Ensure the following data sources are available: `index=containers sourcetype=nginx:ingress` or `sourcetype=istio:ingress`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPrefer RED metrics from the service mesh or ingress controller scraped into Splunk metrics. If only access logs exist, each log line represents one request. Use span=1w for medium-term trending. Tag by ingress or virtual host for breakdowns. Correlate traffic spikes with marketing campaigns or releases.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype IN (\"nginx:ingress\", \"istio:ingress\")\n| timechart span=1w count as weekly_requests\n| eval avg_rps=round(weekly_requests/(7*86400), 2)\n| trendline sma4(avg_rps) as rps_trend\n```\n\nUnderstanding this SPL\n\n**Ingress Traffic Volume Trending** — Ingress requests per second trended weekly shows growth in user load, campaign effects, or misconfigured clients hammering APIs. Supports capacity planning for ingress controllers and upstream services.\n\nDocumented **Data sources**: `index=containers sourcetype=nginx:ingress` or `sourcetype=istio:ingress`. **App/TA** (typical add-on context): NGINX Ingress Controller / Istio Ingress Gateway logs forwarded to Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1w** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **avg_rps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Ingress Traffic Volume Trending**): trendline sma4(avg_rps) as rps_trend\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (weekly average RPS with 4-week SMA), optional breakdown by ingress class or hostname.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-30",
              "sver": "",
              "rby": "",
              "ge": "Like counting how many visitors walk through the front door each day and charting the numbers over weeks, we track how many requests arrive at the cluster entrance so the team knows whether traffic is growing, shrinking, or spiking unexpectedly.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nginx"
              ],
              "em": [
                "nginx_open"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 6,
            "none": 0
          }
        }
      ],
      "i": 3,
      "n": "Containers & Orchestration",
      "src": "cat-03-containers-orchestration.md"
    },
    {
      "s": [
        {
          "i": "4.1",
          "n": "Amazon Web Services (AWS)",
          "u": [
            {
              "i": "4.1.1",
              "n": "Unauthorized API Calls",
              "c": "high",
              "f": "intermediate",
              "v": "AccessDenied errors reveal reconnaissance activity, compromised credentials with insufficient permissions, or misconfigurations. Early indicator of attack or drift.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail`, CloudTrail logs",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" errorCode=\"AccessDenied\" OR errorCode=\"UnauthorizedAccess\" OR errorCode=\"Client.UnauthorizedAccess\"\n| stats count by userIdentity.arn, eventName, sourceIPAddress, errorCode\n| where count > 5\n| sort -count",
              "m": "Configure CloudTrail to send logs to an S3 bucket. Set up the Splunk_TA_aws with an SQS-based S3 input for CloudTrail. Alert when a single principal gets >5 access denied errors in 10 minutes.",
              "z": "Table (principal, API call, source IP, count), Bar chart by principal, Map (source IP GeoIP).",
              "kfp": "Legitimate access denied for least-privilege testing or new IAM policies; verify with change management.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/)",
              "mitre": [
                "T1078.004",
                "T1526"
              ],
              "dtype": "TTP",
              "sdomain": "cloud",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail`, CloudTrail logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure CloudTrail to send logs to an S3 bucket. Set up the Splunk_TA_aws with an SQS-based S3 input for CloudTrail. Alert when a single principal gets >5 access denied errors in 10 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" errorCode=\"AccessDenied\" OR errorCode=\"UnauthorizedAccess\" OR errorCode=\"Client.UnauthorizedAccess\"\n| stats count by userIdentity.arn, eventName, sourceIPAddress, errorCode\n| where count > 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Unauthorized API Calls** — AccessDenied errors reveal reconnaissance activity, compromised credentials with insufficient permissions, or misconfigurations. Early indicator of attack or drift.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`, CloudTrail logs. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn, eventName, sourceIPAddress, errorCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"failure\"\n  by Authentication.user Authentication.src Authentication.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Unauthorized API Calls** — AccessDenied errors reveal reconnaissance activity, compromised credentials with insufficient permissions, or misconfigurations. Early indicator of attack or drift.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`, CloudTrail logs. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (principal, API call, source IP, count), Bar chart by principal, Map (source IP GeoIP).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on unauthorized api calls. Repeated access-denied replies from the cloud control plane can mean someone probing permissions, broken automation, or a stolen key that no longer fits. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"failure\"\n  by Authentication.user Authentication.src Authentication.app span=1h\n| sort -count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "AU-6",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of FedRAMP AU-6 (Audit review, analysis, reporting) — Splunk UC-4.1.1: Unauthorized API Calls.",
                  "ea": "Saved search 'UC-4.1.1' running on sourcetype=aws:cloudtrail, CloudTrail logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "SI-4",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of FedRAMP SI-4 (System monitoring) — Splunk UC-4.1.1: Unauthorized API Calls.",
                  "ea": "Saved search 'UC-4.1.1' running on sourcetype=aws:cloudtrail, CloudTrail logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "SAMA CSF",
                  "v": "v1.0 (2017)",
                  "cl": "3.3.5",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of SAMA CSF 3.3.5 (Security monitoring) — Splunk UC-4.1.1: Unauthorized API Calls.",
                  "ea": "Saved search 'UC-4.1.1' running on sourcetype=aws:cloudtrail, CloudTrail logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.sama.gov.sa/en-US/Laws/BankingRules/SAMA%20Cyber%20Security%20Framework.pdf"
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.2",
              "n": "Root Account Usage",
              "c": "critical",
              "f": "beginner",
              "v": "The AWS root account has unrestricted access and should never be used for daily operations. Any root activity is a critical security event.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail`",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" userIdentity.type=\"Root\"\n| table _time eventName sourceIPAddress userAgent errorCode\n| sort -_time",
              "m": "CloudTrail must be enabled in all regions. Create a critical real-time alert on any event where `userIdentity.type=Root`. Exclude expected events (e.g., automated billing).",
              "z": "Events list (critical alert), Single value (root events last 30d), Timeline.",
              "kfp": "Initial account setup, rare billing or support cases your policy still routes through root; documented break-glass; MFA and payment instrument changes in small businesses.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCloudTrail must be enabled in all regions. Create a critical real-time alert on any event where `userIdentity.type=Root`. Exclude expected events (e.g., automated billing).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" userIdentity.type=\"Root\"\n| table _time eventName sourceIPAddress userAgent errorCode\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Root Account Usage** — The AWS root account has unrestricted access and should never be used for daily operations. Any root activity is a critical security event.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Root Account Usage**): table _time eventName sourceIPAddress userAgent errorCode\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.user=\"root\" OR Authentication.user=\"Root\"\n  by Authentication.src Authentication.action Authentication.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Root Account Usage** — The AWS root account has unrestricted access and should never be used for daily operations. Any root activity is a critical security event.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events list (critical alert), Single value (root events last 30d), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on root account usage. Any sign-in using the special root identity, which should be almost never because normal admins should use named roles with limits. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.user=\"root\" OR Authentication.user=\"Root\"\n  by Authentication.src Authentication.action Authentication.app span=1h\n| sort -count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "4.1.3",
              "n": "Security Group Changes",
              "c": "high",
              "f": "beginner",
              "v": "Security group changes can expose services to the internet. Unauthorized modifications are a primary attack vector and compliance violation.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail`",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventName=\"AuthorizeSecurityGroupIngress\" OR eventName=\"AuthorizeSecurityGroupEgress\" OR eventName=\"RevokeSecurityGroup*\"\n| spath output=rules path=requestParameters.ipPermissions.items{}\n| table _time userIdentity.arn eventName requestParameters.groupId rules sourceIPAddress\n| sort -_time",
              "m": "Alert on any security group modification. Extra-critical alert when `0.0.0.0/0` is added as a source (exposes to internet). Correlate with change tickets.",
              "z": "Table (who, what, when), Timeline, Single value (changes last 24h).",
              "kfp": "Auto scaling, Elastic Beanstalk, or verified Terraform/CloudFormation that updates the same CIDRs or rules during a ticketed change.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1562.007",
                "T1578"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on any security group modification. Extra-critical alert when `0.0.0.0/0` is added as a source (exposes to internet). Correlate with change tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventName=\"AuthorizeSecurityGroupIngress\" OR eventName=\"AuthorizeSecurityGroupEgress\" OR eventName=\"RevokeSecurityGroup*\"\n| spath output=rules path=requestParameters.ipPermissions.items{}\n| table _time userIdentity.arn eventName requestParameters.groupId rules sourceIPAddress\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Security Group Changes** — Security group changes can expose services to the internet. Unauthorized modifications are a primary attack vector and compliance violation.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Pipeline stage (see **Security Group Changes**): table _time userIdentity.arn eventName requestParameters.groupId rules sourceIPAddress\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where All_Changes.object_category=\"security_group\" OR match(All_Changes.object, \"(?i)SecurityGroup\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security Group Changes** — Security group changes can expose services to the internet. Unauthorized modifications are a primary attack vector and compliance violation.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (who, what, when), Timeline, Single value (changes last 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on security group changes. Security group changes can expose services to the internet. Unauthorized modifications are a primary attack vector and c. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where All_Changes.object_category=\"security_group\" OR match(All_Changes.object, \"(?i)SecurityGroup\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.4",
              "n": "IAM Policy Changes",
              "c": "high",
              "f": "beginner",
              "v": "IAM policy changes affect who can do what across the entire AWS account. Unauthorized policy attachments can grant admin access.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail`",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" (eventName=\"CreatePolicy\" OR eventName=\"AttachUserPolicy\" OR eventName=\"AttachRolePolicy\" OR eventName=\"PutUserPolicy\" OR eventName=\"PutRolePolicy\" OR eventName=\"CreateRole\")\n| table _time userIdentity.arn eventName requestParameters.policyArn requestParameters.roleName\n| sort -_time",
              "m": "Alert on all IAM policy modifications. Critical alert when AdministratorAccess or PowerUserAccess policies are attached. Track with change management.",
              "z": "Table, Timeline, Bar chart by event type.",
              "kfp": "CI/CD or IaC that attaches known managed policies; emergency access policies already on the change record.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098.001",
                "T1098.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on all IAM policy modifications. Critical alert when AdministratorAccess or PowerUserAccess policies are attached. Track with change management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" (eventName=\"CreatePolicy\" OR eventName=\"AttachUserPolicy\" OR eventName=\"AttachRolePolicy\" OR eventName=\"PutUserPolicy\" OR eventName=\"PutRolePolicy\" OR eventName=\"CreateRole\")\n| table _time userIdentity.arn eventName requestParameters.policyArn requestParameters.roleName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**IAM Policy Changes** — IAM policy changes affect who can do what across the entire AWS account. Unauthorized policy attachments can grant admin access.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **IAM Policy Changes**): table _time userIdentity.arn eventName requestParameters.policyArn requestParameters.roleName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where All_Changes.object_category=\"policy\" OR match(All_Changes.object, \"(?i)IAMPolicy|iam:policy\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IAM Policy Changes** — IAM policy changes affect who can do what across the entire AWS account. Unauthorized policy attachments can grant admin access.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline, Bar chart by event type.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iam policy changes. IAM policy changes affect who can do what across the entire AWS account. Unauthorized policy attachments can grant admin. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where All_Changes.object_category=\"policy\" OR match(All_Changes.object, \"(?i)IAMPolicy|iam:policy\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "BAIT/KAIT",
                  "v": "Aug 2021",
                  "cl": "§5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that BAIT/KAIT §5 (Identity & access management) is enforced — Splunk UC-4.1.4: IAM Policy Changes.",
                  "ea": "Saved search 'UC-4.1.4' running on sourcetype=aws:cloudtrail, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bafin.de/SharedDocs/Veroeffentlichungen/EN/Rundschreiben/2021/rs_1021_BAIT_en.html"
                },
                {
                  "r": "CJIS",
                  "v": "v5.9.4",
                  "cl": "5.5.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CJIS 5.5.1 (Access control - identification) is enforced — Splunk UC-4.1.4: IAM Policy Changes.",
                  "ea": "Saved search 'UC-4.1.4' running on sourcetype=aws:cloudtrail, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://le.fbi.gov/cjis-division/cjis-security-policy-resource-center"
                },
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "AC-2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FedRAMP AC-2 (Account management) is enforced — Splunk UC-4.1.4: IAM Policy Changes.",
                  "ea": "Saved search 'UC-4.1.4' running on sourcetype=aws:cloudtrail, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "01.b",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HITRUST 01.b (User access management) is enforced — Splunk UC-4.1.4: IAM Policy Changes.",
                  "ea": "Saved search 'UC-4.1.4' running on sourcetype=aws:cloudtrail, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.5",
              "n": "Console Login Without MFA",
              "c": "high",
              "f": "beginner",
              "v": "Console access without MFA is a security risk — compromised passwords alone can grant full account access. Most compliance frameworks require MFA.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail`",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventName=\"ConsoleLogin\" responseElements.ConsoleLogin=\"Success\"\n| eval mfa_used = if(additionalEventData.MFAUsed=\"Yes\", \"Yes\", \"No\")\n| where mfa_used=\"No\"\n| table _time userIdentity.arn sourceIPAddress mfa_used\n| sort -_time",
              "m": "Monitor ConsoleLogin events. Alert on successful console logins where MFA is not used. Exclude service accounts that authenticate via SSO (which has its own MFA).",
              "z": "Table (user, source IP, MFA status), Pie chart (MFA vs. no-MFA), Single value.",
              "kfp": "Federated SSO through SAML or OIDC that enforces MFA at the IdP, so the AWS record may not show the same second factor detail.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor ConsoleLogin events. Alert on successful console logins where MFA is not used. Exclude service accounts that authenticate via SSO (which has its own MFA).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventName=\"ConsoleLogin\" responseElements.ConsoleLogin=\"Success\"\n| eval mfa_used = if(additionalEventData.MFAUsed=\"Yes\", \"Yes\", \"No\")\n| where mfa_used=\"No\"\n| table _time userIdentity.arn sourceIPAddress mfa_used\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Console Login Without MFA** — Console access without MFA is a security risk — compromised passwords alone can grant full account access. Most compliance frameworks require MFA.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mfa_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mfa_used=\"No\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Console Login Without MFA**): table _time userIdentity.arn sourceIPAddress mfa_used\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\" AND (match(Authentication.signature, \"(?i)ConsoleLogin|AwsConsoleSignIn\") OR match(Authentication.app, \"(?i)signin\\\\.amazonaws\"))\n  AND NOT (Authentication.mfa=\"true\" OR lower(Authentication.authentication_method)=\"mfa\")\n  by Authentication.user Authentication.src Authentication.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Console Login Without MFA** — Console access without MFA is a security risk — compromised passwords alone can grant full account access. Most compliance frameworks require MFA.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, source IP, MFA status), Pie chart (MFA vs. no-MFA), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on console login without mfa. Successful sign-ins to the AWS web console that did not use a second step, so a stolen password is still dangerous for admin work. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\" AND (match(Authentication.signature, \"(?i)ConsoleLogin|AwsConsoleSignIn\") OR match(Authentication.app, \"(?i)signin\\\\.amazonaws\"))\n  AND NOT (Authentication.mfa=\"true\" OR lower(Authentication.authentication_method)=\"mfa\")\n  by Authentication.user Authentication.src Authentication.app span=1h\n| sort -count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "Cyber Essentials",
                  "v": "Montpellier (2025)",
                  "cl": "CE.SAU.1",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of Cyber Essentials CE.SAU.1 (Secure authentication & access) — Splunk UC-4.1.5: Console Login Without MFA.",
                  "ea": "Saved search 'UC-4.1.5' running on sourcetype=aws:cloudtrail, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ncsc.gov.uk/cyberessentials/overview"
                },
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of PSD2 Art.97 (Strong customer authentication) — Splunk UC-4.1.5: Console Login Without MFA.",
                  "ea": "Saved search 'UC-4.1.5' running on sourcetype=aws:cloudtrail, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2015/2366/oj"
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.6",
              "n": "EC2 Instance State Changes",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks instance lifecycle for audit and change management. Unexpected terminations indicate accidents, auto-scaling issues, or attacks.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail`",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" (eventName=\"RunInstances\" OR eventName=\"TerminateInstances\" OR eventName=\"StopInstances\" OR eventName=\"StartInstances\")\n| table _time userIdentity.arn eventName requestParameters.instancesSet.items{}.instanceId responseElements.instancesSet.items{}.currentState.name\n| sort -_time",
              "m": "Ingest CloudTrail via the Splunk Add-on for AWS (`Splunk_TA_aws`) using the S3/SQS input from the organization trail. Alert on `TerminateInstances` where `requestParameters.instancesSet.items{}.instanceId` matches production-tagged instances from a `prod_instances` lookup. Suppress alerts during Auto Scaling scale-in events by checking `userIdentity.invokedBy=autoscaling.amazonaws.com`.",
              "z": "Table (timeline), Bar chart (events by type per day), Line chart (instance count trending).",
              "kfp": "Auto Scaling, patch cycles, and CI workers that start/stop or replace instances you expect in non-prod or blue/green.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1578.002",
                "T1578.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CloudTrail via the Splunk Add-on for AWS (`Splunk_TA_aws`) using the S3/SQS input from the organization trail. Alert on `TerminateInstances` where `requestParameters.instancesSet.items{}.instanceId` matches production-tagged instances from a `prod_instances` lookup. Suppress alerts during Auto Scaling scale-in events by checking `userIdentity.invokedBy=autoscaling.amazonaws.com`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" (eventName=\"RunInstances\" OR eventName=\"TerminateInstances\" OR eventName=\"StopInstances\" OR eventName=\"StartInstances\")\n| table _time userIdentity.arn eventName requestParameters.instancesSet.items{}.instanceId responseElements.instancesSet.items{}.currentState.name\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**EC2 Instance State Changes** — Tracks instance lifecycle for audit and change management. Unexpected terminations indicate accidents, auto-scaling issues, or attacks.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **EC2 Instance State Changes**): table _time userIdentity.arn eventName requestParameters.instancesSet.items{}.instanceId responseElements.instancesSet.items{}.currentSta…\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where All_Changes.object_category=\"instance\" OR match(All_Changes.object, \"(?i)ec2:|i-[0-9a-f]{8,17}\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**EC2 Instance State Changes** — Tracks instance lifecycle for audit and change management. Unexpected terminations indicate accidents, auto-scaling issues, or attacks.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (timeline), Bar chart (events by type per day), Line chart (instance count trending).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on ec2 instance state changes. Tracks instance lifecycle for audit and change management. Unexpected terminations indicate accidents, auto-scaling issu. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security",
                "Configuration"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where All_Changes.object_category=\"instance\" OR match(All_Changes.object, \"(?i)ec2:|i-[0-9a-f]{8,17}\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.7",
              "n": "S3 Bucket Policy Changes",
              "c": "critical",
              "f": "beginner",
              "v": "S3 bucket policy changes can expose sensitive data to the public internet. One of the most common cloud security incidents.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail`",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" (eventName=\"PutBucketPolicy\" OR eventName=\"PutBucketAcl\" OR eventName=\"PutBucketPublicAccessBlock\" OR eventName=\"DeleteBucketPolicy\")\n| table _time userIdentity.arn eventName requestParameters.bucketName\n| sort -_time",
              "m": "Critical alert on any bucket policy change. Extra-critical when `PutBucketPublicAccessBlock` is disabled or when ACLs grant public access. Integrate with AWS Config for continuous compliance.",
              "z": "Events list (critical), Table, Single value (policy changes last 7d).",
              "kfp": "Planned public static sites, migration cutovers, or content changes where security still reviewed the bucket policy.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1530",
                "T1619"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCritical alert on any bucket policy change. Extra-critical when `PutBucketPublicAccessBlock` is disabled or when ACLs grant public access. Integrate with AWS Config for continuous compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" (eventName=\"PutBucketPolicy\" OR eventName=\"PutBucketAcl\" OR eventName=\"PutBucketPublicAccessBlock\" OR eventName=\"DeleteBucketPolicy\")\n| table _time userIdentity.arn eventName requestParameters.bucketName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**S3 Bucket Policy Changes** — S3 bucket policy changes can expose sensitive data to the public internet. One of the most common cloud security incidents.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **S3 Bucket Policy Changes**): table _time userIdentity.arn eventName requestParameters.bucketName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where All_Changes.object_category=\"bucket\" OR match(All_Changes.object, \"(?i)s3:|arn:aws:s3:::\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**S3 Bucket Policy Changes** — S3 bucket policy changes can expose sensitive data to the public internet. One of the most common cloud security incidents.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events list (critical), Table, Single value (policy changes last 7d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on s3 bucket policy changes. S3 bucket policy changes can expose sensitive data to the public internet. One of the most common cloud security inciden. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where All_Changes.object_category=\"bucket\" OR match(All_Changes.object, \"(?i)s3:|arn:aws:s3:::\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "4.1.8",
              "n": "GuardDuty Finding Ingestion",
              "c": "critical",
              "f": "beginner",
              "v": "GuardDuty provides ML-powered threat detection for AWS accounts. Centralizing findings in Splunk enables correlation with other security data.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch:guardduty`",
              "q": "index=aws sourcetype=\"aws:cloudwatch:guardduty\"\n| spath output=severity path=detail.severity\n| spath output=finding_type path=detail.type\n| where severity >= 7\n| table _time finding_type severity detail.title detail.description\n| sort -severity",
              "m": "Enable GuardDuty in all regions. Configure CloudWatch Events rule to forward findings to an SNS topic or S3. Ingest via Splunk_TA_aws. Alert on High/Critical findings (severity ≥7).",
              "z": "Table by severity, Bar chart (finding types), Trend line (findings over time), Single value.",
              "kfp": "Benign port scans, red-team, authorized network scanners, and new accounts/regions before suppressions and baselines are tuned.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch:guardduty`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable GuardDuty in all regions. Configure CloudWatch Events rule to forward findings to an SNS topic or S3. Ingest via Splunk_TA_aws. Alert on High/Critical findings (severity ≥7).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch:guardduty\"\n| spath output=severity path=detail.severity\n| spath output=finding_type path=detail.type\n| where severity >= 7\n| table _time finding_type severity detail.title detail.description\n| sort -severity\n```\n\nUnderstanding this SPL\n\n**GuardDuty Finding Ingestion** — GuardDuty provides ML-powered threat detection for AWS accounts. Centralizing findings in Splunk enables correlation with other security data.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch:guardduty`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch:guardduty. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch:guardduty\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Filters the current rows with `where severity >= 7` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GuardDuty Finding Ingestion**): table _time finding_type severity detail.title detail.description\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table by severity, Bar chart (finding types), Trend line (findings over time), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on guardduty finding ingestion. The serious GuardDuty finding types the service raises, in one place, so a small team does not miss something loud in the console. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "4.1.9",
              "n": "VPC Flow Log Analysis",
              "c": "high",
              "f": "beginner",
              "v": "VPC Flow Logs provide network-level visibility into all traffic. Detects rejected traffic, data exfiltration, lateral movement, and network anomalies.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatchlogs:vpcflow`",
              "q": "index=aws sourcetype=\"aws:cloudwatchlogs:vpcflow\" action=\"REJECT\"\n| stats count by src, dest, dest_port, protocol\n| sort 20 -count",
              "m": "Enable VPC Flow Logs on all VPCs (send to S3 or CloudWatch Logs). Ingest via Splunk_TA_aws. Create dashboards for rejected traffic, top talkers, and unusual port activity.",
              "z": "Table (top rejected flows), Sankey diagram (source to destination), Timechart, Map.",
              "kfp": "Legitimate NACLs or security groups that block expected scanners; canary and migration traffic you already whitelisted in the runbook.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatchlogs:vpcflow`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable VPC Flow Logs on all VPCs (send to S3 or CloudWatch Logs). Ingest via Splunk_TA_aws. Create dashboards for rejected traffic, top talkers, and unusual port activity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatchlogs:vpcflow\" action=\"REJECT\"\n| stats count by src, dest, dest_port, protocol\n| sort 20 -count\n```\n\nUnderstanding this SPL\n\n**VPC Flow Log Analysis** — VPC Flow Logs provide network-level visibility into all traffic. Detects rejected traffic, data exfiltration, lateral movement, and network anomalies.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatchlogs:vpcflow`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatchlogs:vpcflow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatchlogs:vpcflow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port, protocol** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top rejected flows), Sankey diagram (source to destination), Timechart, Map.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on vpc flow log analysis. The traffic the network chose to block so we can see odd ports, regions, or quiet probes before they become a breach. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.10",
              "n": "EC2 Performance Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "CloudWatch metrics provide host-level performance data without agents. Baseline trending for capacity planning and anomaly detection.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (EC2 namespace)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" metric_name=\"CPUUtilization\" namespace=\"AWS/EC2\"\n| timechart span=1h avg(Average) as avg_cpu by metric_dimensions\n| where avg_cpu > 80",
              "m": "Configure CloudWatch metric collection in Splunk_TA_aws for EC2 namespace. Collect CPUUtilization, NetworkIn/Out, DiskReadOps, DiskWriteOps. Set polling interval (300s minimum).",
              "z": "Line chart per instance, Heatmap across fleet, Gauge.",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (EC2 namespace).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure CloudWatch metric collection in Splunk_TA_aws for EC2 namespace. Collect CPUUtilization, NetworkIn/Out, DiskReadOps, DiskWriteOps. Set polling interval (300s minimum).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" metric_name=\"CPUUtilization\" namespace=\"AWS/EC2\"\n| timechart span=1h avg(Average) as avg_cpu by metric_dimensions\n| where avg_cpu > 80\n```\n\nUnderstanding this SPL\n\n**EC2 Performance Monitoring** — CloudWatch metrics provide host-level performance data without agents. Baseline trending for capacity planning and anomaly detection.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (EC2 namespace). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by metric_dimensions** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_cpu > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart per instance, Heatmap across fleet, Gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on ec2 performance monitoring. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.11",
              "n": "RDS Performance Insights",
              "c": "high",
              "f": "beginner",
              "v": "Database performance issues directly impact application experience. Monitoring connections, CPU, IOPS, and replica lag catches problems before users notice.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (RDS namespace), RDS logs",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/RDS\" (metric_name=\"CPUUtilization\" OR metric_name=\"DatabaseConnections\" OR metric_name=\"ReadLatency\" OR metric_name=\"ReplicaLag\")\n| timechart span=5m avg(Average) by metric_name, DBInstanceIdentifier",
              "m": "Enable CloudWatch metric collection for RDS namespace. Also forward RDS logs (slow query, error, general) to Splunk via CloudWatch Logs. Alert on ReplicaLag >30s, CPU >80%, or connection count nearing max.",
              "z": "Multi-metric line chart, Gauge (connections vs. max), Table.",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (RDS namespace), RDS logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CloudWatch metric collection for RDS namespace. Also forward RDS logs (slow query, error, general) to Splunk via CloudWatch Logs. Alert on ReplicaLag >30s, CPU >80%, or connection count nearing max.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/RDS\" (metric_name=\"CPUUtilization\" OR metric_name=\"DatabaseConnections\" OR metric_name=\"ReadLatency\" OR metric_name=\"ReplicaLag\")\n| timechart span=5m avg(Average) by metric_name, DBInstanceIdentifier\n```\n\nUnderstanding this SPL\n\n**RDS Performance Insights** — Database performance issues directly impact application experience. Monitoring connections, CPU, IOPS, and replica lag catches problems before users notice.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (RDS namespace), RDS logs. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by metric_name, DBInstanceIdentifier** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Multi-metric line chart, Gauge (connections vs. max), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on rds performance insights. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.12",
              "n": "Lambda Error Rate Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "Lambda errors affect serverless application reliability. Timeouts indicate functions need more memory/time. Throttling means concurrency limits are hit.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (Lambda namespace), Lambda logs",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" (metric_name=\"Errors\" OR metric_name=\"Throttles\" OR metric_name=\"Duration\")\n| timechart span=5m sum(Sum) by metric_name, FunctionName",
              "m": "Ingest CloudWatch metrics (namespace `AWS/Lambda`, metrics `Errors`, `Invocations`, `Throttles`) via the Splunk Add-on for AWS. Compute error rate as `Errors/Invocations` over a 5-minute window; alert when rate exceeds 5% AND invocations exceed 50 (to avoid low-traffic false positives). For throttles, alert on any non-zero value. Forward Lambda CloudWatch Logs for stack trace correlation.",
              "z": "Line chart (errors/invocations over time), Bar chart (top error functions), Single value (error rate %).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (Lambda namespace), Lambda logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CloudWatch metrics (namespace `AWS/Lambda`, metrics `Errors`, `Invocations`, `Throttles`) via the Splunk Add-on for AWS. Compute error rate as `Errors/Invocations` over a 5-minute window; alert when rate exceeds 5% AND invocations exceed 50 (to avoid low-traffic false positives). For throttles, alert on any non-zero value. Forward Lambda CloudWatch Logs for stack trace correlation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" (metric_name=\"Errors\" OR metric_name=\"Throttles\" OR metric_name=\"Duration\")\n| timechart span=5m sum(Sum) by metric_name, FunctionName\n```\n\nUnderstanding this SPL\n\n**Lambda Error Rate Monitoring** — Lambda errors affect serverless application reliability. Timeouts indicate functions need more memory/time. Throttling means concurrency limits are hit.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (Lambda namespace), Lambda logs. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by metric_name, FunctionName** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (errors/invocations over time), Bar chart (top error functions), Single value (error rate %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on lambda error rate monitoring. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.13",
              "n": "EKS/ECS Cluster Health",
              "c": "high",
              "f": "beginner",
              "v": "Unhealthy ECS/EKS control planes strand deployments and skew desired-vs-running task counts, causing user-visible errors before infrastructure metrics breach thresholds. Route platform-level failures (API server, scheduler) to the platform team and workload-level failures (CrashLoopBackOff, OOM) to the application owner.",
              "t": "`Splunk_TA_aws`, Splunk OTel Collector",
              "d": "CloudWatch EKS/ECS metrics, container insights",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ECS\" metric_name=\"CPUUtilization\"\n| timechart span=5m avg(Average) by ClusterName, ServiceName",
              "m": "Enable Container Insights for EKS/ECS. Collect metrics via CloudWatch. For deeper Kubernetes visibility in EKS, deploy Splunk OTel Collector as described in Category 3.2.",
              "z": "Line chart per service, Cluster status panel, Table.",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, Splunk OTel Collector.\n• Ensure the following data sources are available: CloudWatch EKS/ECS metrics, container insights.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Container Insights for EKS/ECS. Collect metrics via CloudWatch. For deeper Kubernetes visibility in EKS, deploy Splunk OTel Collector as described in Category 3.2.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ECS\" metric_name=\"CPUUtilization\"\n| timechart span=5m avg(Average) by ClusterName, ServiceName\n```\n\nUnderstanding this SPL\n\n**EKS/ECS Cluster Health** — Unhealthy ECS/EKS control planes strand deployments and skew desired-vs-running task counts, causing user-visible errors before infrastructure metrics breach thresholds. Route platform-level failures (API server, scheduler) to the platform team and workload-level failures (CrashLoopBackOff, OOM) to the application owner.\n\nDocumented **Data sources**: CloudWatch EKS/ECS metrics, container insights. **App/TA** (typical add-on context): `Splunk_TA_aws`, Splunk OTel Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by ClusterName, ServiceName** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart per service, Cluster status panel, Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on eks/ecs cluster health. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "opentelemetry"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.14",
              "n": "Cost Anomaly Detection",
              "c": "medium",
              "f": "advanced",
              "v": "Unexpected spend spikes indicate runaway resources, cryptomining attacks, or misconfigured services. Catching anomalies early saves money.",
              "t": "`Splunk_TA_aws`, AWS Cost and Usage Report (CUR)",
              "d": "`sourcetype=aws:billing` or CUR data",
              "q": "index=aws sourcetype=\"aws:billing\"\n| timechart span=1d sum(BlendedCost) as daily_cost by ProductName\n| eventstats avg(daily_cost) as avg_cost, stdev(daily_cost) as stdev_cost by ProductName\n| eval threshold = avg_cost + (2 * stdev_cost)\n| where daily_cost > threshold",
              "m": "Enable CUR reports to S3. Ingest via Splunk_TA_aws (billing input). Calculate daily baselines per service. Alert when daily spend exceeds 2 standard deviations from the 30-day average.",
              "z": "Line chart (daily spend with threshold), Table (anomalous services), Stacked area (spend by service).",
              "kfp": "Month-end or quarter spikes, RIs, savings plans, and new services the statistical model has not seen yet.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, AWS Cost and Usage Report (CUR).\n• Ensure the following data sources are available: `sourcetype=aws:billing` or CUR data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CUR reports to S3. Ingest via Splunk_TA_aws (billing input). Calculate daily baselines per service. Alert when daily spend exceeds 2 standard deviations from the 30-day average.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:billing\"\n| timechart span=1d sum(BlendedCost) as daily_cost by ProductName\n| eventstats avg(daily_cost) as avg_cost, stdev(daily_cost) as stdev_cost by ProductName\n| eval threshold = avg_cost + (2 * stdev_cost)\n| where daily_cost > threshold\n```\n\nUnderstanding this SPL\n\n**Cost Anomaly Detection** — Unexpected spend spikes indicate runaway resources, cryptomining attacks, or misconfigured services. Catching anomalies early saves money.\n\nDocumented **Data sources**: `sourcetype=aws:billing` or CUR data. **App/TA** (typical add-on context): `Splunk_TA_aws`, AWS Cost and Usage Report (CUR). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:billing. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:billing\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by ProductName** — ideal for trending and alerting on this use case.\n• `eventstats` rolls up events into metrics; results are split **by ProductName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **threshold** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where daily_cost > threshold` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily spend with threshold), Table (anomalous services), Stacked area (spend by service).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on cost anomaly detection. Day-to-day spend that jumps away from a normal week so we can catch a mis-sized resource, surprise usage, or abuse early. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.15",
              "n": "Config Compliance Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "AWS Config rules continuously evaluate resource compliance against security best practices. Non-compliant resources are attack surface.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:config:notification`",
              "q": "index=aws sourcetype=\"aws:config:notification\" configRuleList{}.complianceType=\"NON_COMPLIANT\"\n| stats count by resourceType, resourceId, configRuleList{}.configRuleName\n| sort -count",
              "m": "Enable AWS Config with rules (e.g., CIS Benchmark). Forward Config notifications to SNS/S3 and ingest in Splunk. Dashboard showing compliance score per rule. Alert on newly non-compliant critical resources.",
              "z": "Table (resource, rule, status), Pie chart (compliant %), Bar chart by rule.",
              "kfp": "New resources that have not been evaluated yet; rules you intentionally suppress during rollout; one-time exceptions on the change record.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1578"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:config:notification`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable AWS Config with rules (e.g., CIS Benchmark). Forward Config notifications to SNS/S3 and ingest in Splunk. Dashboard showing compliance score per rule. Alert on newly non-compliant critical resources.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:config:notification\" configRuleList{}.complianceType=\"NON_COMPLIANT\"\n| stats count by resourceType, resourceId, configRuleList{}.configRuleName\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Config Compliance Monitoring** — AWS Config rules continuously evaluate resource compliance against security best practices. Non-compliant resources are attack surface.\n\nDocumented **Data sources**: `sourcetype=aws:config:notification`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:config:notification. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:config:notification\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resourceType, resourceId, configRuleList{}.configRuleName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (resource, rule, status), Pie chart (compliant %), Bar chart by rule.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on config compliance monitoring. When something in the account is out of line with a rule the business said it wanted enforced. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.16",
              "n": "KMS Key Usage Audit",
              "c": "medium",
              "f": "beginner",
              "v": "Encryption key usage audit ensures data protection compliance. Unusual key access patterns may indicate unauthorized data decryption.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail`",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" (eventName=\"Decrypt\" OR eventName=\"Encrypt\" OR eventName=\"GenerateDataKey\") eventSource=\"kms.amazonaws.com\"\n| stats count by userIdentity.arn, requestParameters.keyId, eventName\n| sort -count",
              "m": "CloudTrail captures all KMS API calls. Monitor for unusual Decrypt call volumes or access from unexpected principals. Track key rotation compliance.",
              "z": "Table (principal, key, action, count), Trend line, Bar chart.",
              "kfp": "Burst of KMS calls during app deploys, key rotation, or database encryption enablement in the same change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1078.004",
                "T1530"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCloudTrail captures all KMS API calls. Monitor for unusual Decrypt call volumes or access from unexpected principals. Track key rotation compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" (eventName=\"Decrypt\" OR eventName=\"Encrypt\" OR eventName=\"GenerateDataKey\") eventSource=\"kms.amazonaws.com\"\n| stats count by userIdentity.arn, requestParameters.keyId, eventName\n| sort -count\n```\n\nUnderstanding this SPL\n\n**KMS Key Usage Audit** — Encryption key usage audit ensures data protection compliance. Unusual key access patterns may indicate unauthorized data decryption.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn, requestParameters.keyId, eventName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.app, \"(?i)kms\\\\.amazonaws\") OR match(All_Changes.object, \"(?i)kms:|key/\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**KMS Key Usage Audit** — Encryption key usage audit ensures data protection compliance. Unusual key access patterns may indicate unauthorized data decryption.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (principal, key, action, count), Trend line, Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on kms key usage audit. Encryption key usage audit ensures data protection compliance. Unusual key access patterns may indicate unauthorized dat. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.app, \"(?i)kms\\\\.amazonaws\") OR match(All_Changes.object, \"(?i)kms:|key/\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.17",
              "n": "Elastic IP Association",
              "c": "low",
              "f": "beginner",
              "v": "Unassociated Elastic IPs cost money. Tracking associations supports inventory accuracy and cost management.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail`",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" (eventName=\"AllocateAddress\" OR eventName=\"AssociateAddress\" OR eventName=\"DisassociateAddress\" OR eventName=\"ReleaseAddress\")\n| table _time userIdentity.arn eventName requestParameters.publicIp\n| sort -_time",
              "m": "Forward CloudTrail. Create weekly report of EIP allocations vs. associations. Flag unassociated EIPs for cleanup.",
              "z": "Table, Single value (unassociated EIPs), Bar chart.",
              "kfp": "Elastic IP moves during failover drills, load-balancer rebinds, and documented network refactors.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward CloudTrail. Create weekly report of EIP allocations vs. associations. Flag unassociated EIPs for cleanup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" (eventName=\"AllocateAddress\" OR eventName=\"AssociateAddress\" OR eventName=\"DisassociateAddress\" OR eventName=\"ReleaseAddress\")\n| table _time userIdentity.arn eventName requestParameters.publicIp\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Elastic IP Association** — Unassociated Elastic IPs cost money. Tracking associations supports inventory accuracy and cost management.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Elastic IP Association**): table _time userIdentity.arn eventName requestParameters.publicIp\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.action, \"(?i)AllocateAddress|AssociateAddress|DisassociateAddress|ReleaseAddress\") OR match(All_Changes.object, \"(?i)eipalloc|elastic.?ip\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Elastic IP Association** — Unassociated Elastic IPs cost money. Tracking associations supports inventory accuracy and cost management.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value (unassociated EIPs), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on elastic ip association. Unassociated Elastic IPs cost money. Tracking associations supports inventory accuracy and cost management. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.action, \"(?i)AllocateAddress|AssociateAddress|DisassociateAddress|ReleaseAddress\") OR match(All_Changes.object, \"(?i)eipalloc|elastic.?ip\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.18",
              "n": "CloudFormation Stack Drift",
              "c": "medium",
              "f": "beginner",
              "v": "Drift means infrastructure no longer matches its declared template — manual changes have been made. This breaks IaC and causes inconsistencies.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail` (DetectStackDrift events)",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventName=\"DetectStackDrift\" OR eventName=\"DetectStackResourceDrift\"\n| spath output=drift_status path=responseElements.stackDriftStatus\n| where drift_status=\"DRIFTED\"\n| table _time requestParameters.stackName drift_status",
              "m": "Schedule periodic drift detection via CloudFormation API or AWS Config rule. Forward detection results to Splunk. Alert on stacks in DRIFTED state.",
              "z": "Table (stack, drift status), Pie chart (drifted vs. in-sync), Status indicator.",
              "kfp": "Scheduled template drift runs right after a pipeline deploy, before auto-remediation re-applies the template.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1578"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail` (DetectStackDrift events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule periodic drift detection via CloudFormation API or AWS Config rule. Forward detection results to Splunk. Alert on stacks in DRIFTED state.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventName=\"DetectStackDrift\" OR eventName=\"DetectStackResourceDrift\"\n| spath output=drift_status path=responseElements.stackDriftStatus\n| where drift_status=\"DRIFTED\"\n| table _time requestParameters.stackName drift_status\n```\n\nUnderstanding this SPL\n\n**CloudFormation Stack Drift** — Drift means infrastructure no longer matches its declared template — manual changes have been made. This breaks IaC and causes inconsistencies.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (DetectStackDrift events). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Filters the current rows with `where drift_status=\"DRIFTED\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CloudFormation Stack Drift**): table _time requestParameters.stackName drift_status\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.action, \"(?i)DetectStackDrift|DetectStackResourceDrift|CreateStack|UpdateStack|DeleteStack\") OR match(All_Changes.object, \"(?i)cloudformation|stack:\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CloudFormation Stack Drift** — Drift means infrastructure no longer matches its declared template — manual changes have been made. This breaks IaC and causes inconsistencies.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (DetectStackDrift events). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stack, drift status), Pie chart (drifted vs. in-sync), Status indicator.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on cloudformation stack drift. Drift means infrastructure no longer matches its declared template — manual changes have been made. This breaks IaC and. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security",
                "Configuration"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.action, \"(?i)DetectStackDrift|DetectStackResourceDrift|CreateStack|UpdateStack|DeleteStack\") OR match(All_Changes.object, \"(?i)cloudformation|stack:\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.19",
              "n": "WAF Blocked Request Analysis",
              "c": "medium",
              "f": "beginner",
              "v": "WAF blocks reveal attack patterns targeting your applications. Analysis helps tune rules and understand the threat landscape.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:waf` (WAF logs via S3 or Kinesis)",
              "q": "index=aws sourcetype=\"aws:waf\" action=\"BLOCK\"\n| stats count by terminatingRuleId, httpRequest.clientIp, httpRequest.uri\n| sort 20 -count",
              "m": "Enable WAF logging to S3 or Kinesis Firehose. Ingest via Splunk_TA_aws. Analyze blocked requests by rule, source IP, URI, and user agent to identify attack patterns and false positives.",
              "z": "Table (rule, source, URI, count), Bar chart by rule, Map (source IPs), Timeline.",
              "kfp": "Known pen-test, dark-traffic, or canary 5xx during deploy; WAF in count or shadow mode you expect to list blocks for.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1580"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:waf` (WAF logs via S3 or Kinesis).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable WAF logging to S3 or Kinesis Firehose. Ingest via Splunk_TA_aws. Analyze blocked requests by rule, source IP, URI, and user agent to identify attack patterns and false positives.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:waf\" action=\"BLOCK\"\n| stats count by terminatingRuleId, httpRequest.clientIp, httpRequest.uri\n| sort 20 -count\n```\n\nUnderstanding this SPL\n\n**WAF Blocked Request Analysis** — WAF blocks reveal attack patterns targeting your applications. Analysis helps tune rules and understand the threat landscape.\n\nDocumented **Data sources**: `sourcetype=aws:waf` (WAF logs via S3 or Kinesis). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:waf. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:waf\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by terminatingRuleId, httpRequest.clientIp, httpRequest.uri** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rule, source, URI, count), Bar chart by rule, Map (source IPs), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on waf blocked request analysis. Web requests that the edge blocked or that returned server error codes so the team can tell attacks and broken deploys apart. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.20",
              "n": "Reserved Instance Utilization",
              "c": "low",
              "f": "beginner",
              "v": "Underutilized RIs waste money. Tracking RI coverage and utilization helps optimize commit spending vs. on-demand costs.",
              "t": "`Splunk_TA_aws`, CUR data",
              "d": "`sourcetype=aws:billing` (CUR)",
              "q": "index=aws sourcetype=\"aws:billing\" lineItem_LineItemType=\"DiscountedUsage\" OR lineItem_LineItemType=\"RIFee\"\n| stats sum(lineItem_UsageAmount) as ri_hours, sum(lineItem_UnblendedCost) as ri_cost by reservation_ReservationARN, product_instanceType\n| eval utilization_pct = round(ri_hours / expected_hours * 100, 1)",
              "m": "Ingest CUR data. Calculate RI utilization by comparing reserved hours against actual usage. Dashboard showing RI coverage percentage and waste. Review monthly.",
              "z": "Table (RI, type, utilization %), Gauge (overall utilization), Bar chart by instance type.",
              "kfp": "True-up, RI purchase, and one-time credit lines that are not a recurring waste pattern next month.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, CUR data.\n• Ensure the following data sources are available: `sourcetype=aws:billing` (CUR).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CUR data. Calculate RI utilization by comparing reserved hours against actual usage. Dashboard showing RI coverage percentage and waste. Review monthly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:billing\" lineItem_LineItemType=\"DiscountedUsage\" OR lineItem_LineItemType=\"RIFee\"\n| stats sum(lineItem_UsageAmount) as ri_hours, sum(lineItem_UnblendedCost) as ri_cost by reservation_ReservationARN, product_instanceType\n| eval utilization_pct = round(ri_hours / expected_hours * 100, 1)\n```\n\nUnderstanding this SPL\n\n**Reserved Instance Utilization** — Underutilized RIs waste money. Tracking RI coverage and utilization helps optimize commit spending vs. on-demand costs.\n\nDocumented **Data sources**: `sourcetype=aws:billing` (CUR). **App/TA** (typical add-on context): `Splunk_TA_aws`, CUR data. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:billing. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:billing\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by reservation_ReservationARN, product_instanceType** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (RI, type, utilization %), Gauge (overall utilization), Bar chart by instance type.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on reserved instance utilization. Whether reserved and discounted usage is actually being used, so the money for commits is not left on the table. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.21",
              "n": "ALB/NLB Access Logs and 5xx Errors",
              "c": "high",
              "f": "beginner",
              "v": "Load balancer 5xx and target failures indicate backend or LB misconfiguration. Access logs enable traffic analysis and security forensics.",
              "t": "`Splunk_TA_aws`",
              "d": "S3 bucket with ALB/NLB access logs, CloudWatch LB metrics",
              "q": "index=aws sourcetype=\"aws:elb:accesslogs\" elb_status_code>=500\n| stats count by target_port, elb_status_code, request_url\n| sort -count",
              "m": "Enable access logging for ALB/NLB to S3. Ingest via Splunk_TA_aws S3 input. Collect CloudWatch metrics (RequestCount, TargetResponseTime, HTTPCode_Target_5XX_Count). Alert on 5xx rate >1%.",
              "z": "Table (status, target, count), Line chart (5xx over time), Bar chart by target.",
              "kfp": "Known pen-test, dark-traffic, or canary 5xx during deploy; WAF in count or shadow mode you expect to list blocks for.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: S3 bucket with ALB/NLB access logs, CloudWatch LB metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable access logging for ALB/NLB to S3. Ingest via Splunk_TA_aws S3 input. Collect CloudWatch metrics (RequestCount, TargetResponseTime, HTTPCode_Target_5XX_Count). Alert on 5xx rate >1%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:elb:accesslogs\" elb_status_code>=500\n| stats count by target_port, elb_status_code, request_url\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ALB/NLB Access Logs and 5xx Errors** — Load balancer 5xx and target failures indicate backend or LB misconfiguration. Access logs enable traffic analysis and security forensics.\n\nDocumented **Data sources**: S3 bucket with ALB/NLB access logs, CloudWatch LB metrics. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:elb:accesslogs. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:elb:accesslogs\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by target_port, elb_status_code, request_url** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (status, target, count), Line chart (5xx over time), Bar chart by target.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on alb/nlb access logs and 5xx errors. Web requests that the edge blocked or that returned server error codes so the team can tell attacks and broken deploys apart. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.22",
              "n": "ELB Target Health and Unhealthy Hosts",
              "c": "critical",
              "f": "beginner",
              "v": "Unhealthy targets cause traffic to fail or shift to remaining nodes. Early detection prevents user-facing outages.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (AWS/ApplicationELB, AWS/NetworkELB)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ApplicationELB\" metric_name=\"UnHealthyHostCount\"\n| where Average > 0\n| timechart span=5m max(Average) by LoadBalancer",
              "m": "Collect UnHealthyHostCount and HealthyHostCount from CloudWatch. Alert when UnHealthyHostCount > 0 for more than 2 minutes. Correlate with target group and instance health checks.",
              "z": "Single value (unhealthy count), Table (LB, target group, unhealthy), Timeline.",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (AWS/ApplicationELB, AWS/NetworkELB).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect UnHealthyHostCount and HealthyHostCount from CloudWatch. Alert when UnHealthyHostCount > 0 for more than 2 minutes. Correlate with target group and instance health checks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ApplicationELB\" metric_name=\"UnHealthyHostCount\"\n| where Average > 0\n| timechart span=5m max(Average) by LoadBalancer\n```\n\nUnderstanding this SPL\n\n**ELB Target Health and Unhealthy Hosts** — Unhealthy targets cause traffic to fail or shift to remaining nodes. Early detection prevents user-facing outages.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (AWS/ApplicationELB, AWS/NetworkELB). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Average > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by LoadBalancer** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (unhealthy count), Table (LB, target group, unhealthy), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on elb target health and unhealthy hosts. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.23",
              "n": "CloudFront Cache Hit Ratio and Origin Errors",
              "c": "high",
              "f": "beginner",
              "v": "Low cache hit ratio increases origin load and latency. Origin errors indicate backend or CDN misconfiguration.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch CloudFront metrics, CloudFront access logs (optional, to S3)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/CloudFront\" (metric_name=\"4xxErrorRate\" OR metric_name=\"5xxErrorRate\" OR metric_name=\"BytesDownloaded\")\n| timechart span=1h avg(Average) by metric_name, DistributionId",
              "m": "Enable CloudFront metrics in CloudWatch. Optionally enable standard logging to S3 for request-level analysis. Calculate cache hit ratio from requests (Hit vs Miss). Alert on 5xxErrorRate > 1%.",
              "z": "Line chart (4xx/5xx rate, bytes), Gauge (cache hit %), Table by distribution.",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch CloudFront metrics, CloudFront access logs (optional, to S3).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CloudFront metrics in CloudWatch. Optionally enable standard logging to S3 for request-level analysis. Calculate cache hit ratio from requests (Hit vs Miss). Alert on 5xxErrorRate > 1%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/CloudFront\" (metric_name=\"4xxErrorRate\" OR metric_name=\"5xxErrorRate\" OR metric_name=\"BytesDownloaded\")\n| timechart span=1h avg(Average) by metric_name, DistributionId\n```\n\nUnderstanding this SPL\n\n**CloudFront Cache Hit Ratio and Origin Errors** — Low cache hit ratio increases origin load and latency. Origin errors indicate backend or CDN misconfiguration.\n\nDocumented **Data sources**: CloudWatch CloudFront metrics, CloudFront access logs (optional, to S3). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by metric_name, DistributionId** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (4xx/5xx rate, bytes), Gauge (cache hit %), Table by distribution.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on cloudfront cache hit ratio and origin errors. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.24",
              "n": "SQS Queue Depth and Age of Oldest Message",
              "c": "high",
              "f": "beginner",
              "v": "Growing queue depth or old messages indicate consumers are falling behind or failing. Prevents backlog and SLA breaches.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch SQS metrics (ApproximateNumberOfMessagesVisible, ApproximateAgeOfOldestMessage)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/SQS\" (metric_name=\"ApproximateNumberOfMessagesVisible\" OR metric_name=\"ApproximateAgeOfOldestMessage\")\n| bin _time span=5m\n| eval depth=if(metric_name=\"ApproximateNumberOfMessagesVisible\", Average, null()),\n       age_s=if(metric_name=\"ApproximateAgeOfOldestMessage\", Average, null())\n| stats avg(depth) as depth, avg(age_s) as age_s by _time, QueueName\n| where depth > 1000 OR age_s > 300",
              "m": "Collect SQS metrics. Alert when queue depth exceeds threshold (e.g. 1000) or age of oldest message > 5 minutes. Monitor dead-letter queue (ApproximateNumberOfMessagesDelayed) separately.",
              "z": "Line chart (depth, age by queue), Single value (oldest message age), Table (queue, depth).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch SQS metrics (ApproximateNumberOfMessagesVisible, ApproximateAgeOfOldestMessage).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect SQS metrics. Alert when queue depth exceeds threshold (e.g. 1000) or age of oldest message > 5 minutes. Monitor dead-letter queue (ApproximateNumberOfMessagesDelayed) separately.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/SQS\" (metric_name=\"ApproximateNumberOfMessagesVisible\" OR metric_name=\"ApproximateAgeOfOldestMessage\")\n| bin _time span=5m\n| eval depth=if(metric_name=\"ApproximateNumberOfMessagesVisible\", Average, null()),\n       age_s=if(metric_name=\"ApproximateAgeOfOldestMessage\", Average, null())\n| stats avg(depth) as depth, avg(age_s) as age_s by _time, QueueName\n| where depth > 1000 OR age_s > 300\n```\n\nUnderstanding this SPL\n\n**SQS Queue Depth and Age of Oldest Message** — Growing queue depth or old messages indicate consumers are falling behind or failing. Prevents backlog and SLA breaches.\n\nDocumented **Data sources**: CloudWatch SQS metrics (ApproximateNumberOfMessagesVisible, ApproximateAgeOfOldestMessage). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `eval` defines or adjusts **depth** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by _time, QueueName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where depth > 1000 OR age_s > 300` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (depth, age by queue), Single value (oldest message age), Table (queue, depth).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on sqs queue depth and age of oldest message. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.25",
              "n": "SQS Dead-Letter Queue Message Count",
              "c": "critical",
              "f": "beginner",
              "v": "Messages in DLQ indicate processing failures. Immediate alerting ensures failed messages are investigated and reprocessed.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch SQS metrics for DLQ (ApproximateNumberOfMessagesVisible)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/SQS\" metric_name=\"ApproximateNumberOfMessagesVisible\"\n| search QueueName=\"*dlq*\" OR QueueName=\"*dead*\"\n| where Average > 0\n| table _time QueueName Average",
              "m": "Tag or identify DLQ queues (naming convention or tags). Alert when ApproximateNumberOfMessagesVisible > 0 for any DLQ. Create runbook for DLQ investigation and replay.",
              "z": "Single value (DLQ messages), Table (queue, count), Timeline.",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch SQS metrics for DLQ (ApproximateNumberOfMessagesVisible).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag or identify DLQ queues (naming convention or tags). Alert when ApproximateNumberOfMessagesVisible > 0 for any DLQ. Create runbook for DLQ investigation and replay.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/SQS\" metric_name=\"ApproximateNumberOfMessagesVisible\"\n| search QueueName=\"*dlq*\" OR QueueName=\"*dead*\"\n| where Average > 0\n| table _time QueueName Average\n```\n\nUnderstanding this SPL\n\n**SQS Dead-Letter Queue Message Count** — Messages in DLQ indicate processing failures. Immediate alerting ensures failed messages are investigated and reprocessed.\n\nDocumented **Data sources**: CloudWatch SQS metrics for DLQ (ApproximateNumberOfMessagesVisible). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Filters the current rows with `where Average > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SQS Dead-Letter Queue Message Count**): table _time QueueName Average\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (DLQ messages), Table (queue, count), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on sqs dead-letter queue message count. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.26",
              "n": "DynamoDB Throttled Requests and Consumed Capacity",
              "c": "high",
              "f": "beginner",
              "v": "Throttling causes request failures and degraded application performance. Capacity monitoring supports right-sizing and auto-scaling.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch DynamoDB metrics (ThrottledRequests, ConsumedReadCapacityUnits, ConsumedWriteCapacityUnits)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/DynamoDB\" metric_name=\"ThrottledRequests\"\n| where Sum > 0\n| timechart span=5m sum(Sum) by TableName, Operation",
              "m": "Collect DynamoDB metrics per table. Alert on any ThrottledRequests. Dashboard consumed vs. provisioned capacity to tune throughput. Consider on-demand capacity if spikes are unpredictable.",
              "z": "Line chart (throttled, consumed by table), Table (top throttled tables), Gauge (utilization %).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch DynamoDB metrics (ThrottledRequests, ConsumedReadCapacityUnits, ConsumedWriteCapacityUnits).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect DynamoDB metrics per table. Alert on any ThrottledRequests. Dashboard consumed vs. provisioned capacity to tune throughput. Consider on-demand capacity if spikes are unpredictable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/DynamoDB\" metric_name=\"ThrottledRequests\"\n| where Sum > 0\n| timechart span=5m sum(Sum) by TableName, Operation\n```\n\nUnderstanding this SPL\n\n**DynamoDB Throttled Requests and Consumed Capacity** — Throttling causes request failures and degraded application performance. Capacity monitoring supports right-sizing and auto-scaling.\n\nDocumented **Data sources**: CloudWatch DynamoDB metrics (ThrottledRequests, ConsumedReadCapacityUnits, ConsumedWriteCapacityUnits). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Sum > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by TableName, Operation** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (throttled, consumed by table), Table (top throttled tables), Gauge (utilization %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on dynamodb throttled requests and consumed capacity. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.27",
              "n": "API Gateway 4xx/5xx and Throttling",
              "c": "high",
              "f": "beginner",
              "v": "High 4xx/5xx or throttling indicates misconfigured APIs, backend failures, or abuse. Essential for API reliability and quota management.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch API Gateway metrics (Count, 4XXError, 5XXError, IntegrationLatency)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ApiGateway\" (metric_name=\"5XXError\" OR metric_name=\"Count\")\n| timechart span=5m sum(Sum) by metric_name, ApiName, Stage\n| eval error_rate = 5XXError / Count * 100\n| where error_rate > 1",
              "m": "Enable detailed metrics for API Gateway (per-stage). Ingest CloudWatch. Alert on 5XXError rate >1% or ThrottleCount > 0. Optionally enable access logging to S3 for request-level analysis.",
              "z": "Line chart (errors, count, latency), Table (API, stage, error rate), Single value.",
              "kfp": "Intentional 4xx from bad clients, pen-test, canary deploys, and throttling in lower environments; compare with the release window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch API Gateway metrics (Count, 4XXError, 5XXError, IntegrationLatency).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable detailed metrics for API Gateway (per-stage). Ingest CloudWatch. Alert on 5XXError rate >1% or ThrottleCount > 0. Optionally enable access logging to S3 for request-level analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ApiGateway\" (metric_name=\"5XXError\" OR metric_name=\"Count\")\n| timechart span=5m sum(Sum) by metric_name, ApiName, Stage\n| eval error_rate = 5XXError / Count * 100\n| where error_rate > 1\n```\n\nUnderstanding this SPL\n\n**API Gateway 4xx/5xx and Throttling** — High 4xx/5xx or throttling indicates misconfigured APIs, backend failures, or abuse. Essential for API reliability and quota management.\n\nDocumented **Data sources**: CloudWatch API Gateway metrics (Count, 4XXError, 5XXError, IntegrationLatency). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by metric_name, ApiName, Stage** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate > 1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (errors, count, latency), Table (API, stage, error rate), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on api gateway 4xx/5xx and throttling. API error and throttle patterns so a broken integration or a noisy client is visible before the help desk fills up. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.28",
              "n": "EBS Volume Status and Burst Balance",
              "c": "high",
              "f": "beginner",
              "v": "EBS status checks and burst balance (gp2/gp3) indicate volume health and risk of I/O throttling when credits are exhausted.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch EBS metrics (VolumeStatusCheckFailed, BurstBalancePercentage)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/EBS\" (metric_name=\"VolumeStatusCheckFailed\" OR metric_name=\"BurstBalancePercentage\")\n| where VolumeStatusCheckFailed > 0 OR BurstBalancePercentage < 20\n| table _time VolumeId metric_name Average",
              "m": "Collect EBS metrics. Alert on VolumeStatusCheckFailed. For gp2/gp3, alert when BurstBalancePercentage < 20%. Consider io1/io2 or gp3 with higher baseline IOPS for steady high I/O.",
              "z": "Table (volume, status, burst %), Single value (volumes with low burst), Timeline.",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch EBS metrics (VolumeStatusCheckFailed, BurstBalancePercentage).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect EBS metrics. Alert on VolumeStatusCheckFailed. For gp2/gp3, alert when BurstBalancePercentage < 20%. Consider io1/io2 or gp3 with higher baseline IOPS for steady high I/O.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/EBS\" (metric_name=\"VolumeStatusCheckFailed\" OR metric_name=\"BurstBalancePercentage\")\n| where VolumeStatusCheckFailed > 0 OR BurstBalancePercentage < 20\n| table _time VolumeId metric_name Average\n```\n\nUnderstanding this SPL\n\n**EBS Volume Status and Burst Balance** — EBS status checks and burst balance (gp2/gp3) indicate volume health and risk of I/O throttling when credits are exhausted.\n\nDocumented **Data sources**: CloudWatch EBS metrics (VolumeStatusCheckFailed, BurstBalancePercentage). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where VolumeStatusCheckFailed > 0 OR BurstBalancePercentage < 20` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **EBS Volume Status and Burst Balance**): table _time VolumeId metric_name Average\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (volume, status, burst %), Single value (volumes with low burst), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on ebs volume status and burst balance. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.29",
              "n": "EC2 Spot Instance Interruption Notices",
              "c": "high",
              "f": "beginner",
              "v": "Spot interruptions cause instance termination with short notice. Tracking enables graceful shutdown, workload migration, and capacity planning.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch Events (EC2 Spot Instance Interruption Warning), EventBridge",
              "q": "index=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"EC2 Spot Instance Interruption Warning\"\n| table _time detail.instance-id detail.instance-action detail.spot-instance-request-id\n| sort -_time",
              "m": "Create EventBridge rule for EC2 Spot Instance Interruption Warning. Forward to SNS or Lambda for Splunk ingestion. Alert on every interruption; use for fleet metrics and hybrid/on-demand fallback decisions.",
              "z": "Table (instance, action, time), Timeline (interruptions by AZ), Bar chart (interruptions by instance type).",
              "kfp": "Intentional spot reclaim during game-day, batch jobs, and capacity tests where interruption is the design.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch Events (EC2 Spot Instance Interruption Warning), EventBridge.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate EventBridge rule for EC2 Spot Instance Interruption Warning. Forward to SNS or Lambda for Splunk ingestion. Alert on every interruption; use for fleet metrics and hybrid/on-demand fallback decisions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"EC2 Spot Instance Interruption Warning\"\n| table _time detail.instance-id detail.instance-action detail.spot-instance-request-id\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**EC2 Spot Instance Interruption Notices** — Spot interruptions cause instance termination with short notice. Tracking enables graceful shutdown, workload migration, and capacity planning.\n\nDocumented **Data sources**: CloudWatch Events (EC2 Spot Instance Interruption Warning), EventBridge. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **EC2 Spot Instance Interruption Notices**): table _time detail.instance-id detail.instance-action detail.spot-instance-request-id\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (instance, action, time), Timeline (interruptions by AZ), Bar chart (interruptions by instance type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on ec2 spot instance interruption notices. The notice that a cheap spot node may be taken back so the app can drain work or request capacity elsewhere in time. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.30",
              "n": "CloudTrail Log File Delivery Failures",
              "c": "critical",
              "f": "beginner",
              "v": "Failed CloudTrail delivery means audit gaps. Attackers may target trail deletion or S3 permissions to hide activity.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudTrail insight events, S3 bucket event notifications, or CloudWatch Logs for trail validation",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventName=\"DeleteTrail\" OR eventName=\"PutBucketPolicy\" requestParameters.name=*\n| table _time userIdentity.arn eventName requestParameters.bucketName\n| sort -_time",
              "m": "Enable CloudTrail log file validation. Monitor for DeleteTrail, PutBucketPolicy on the trail bucket, or S3 access denied to trail bucket. Use AWS Config or custom Lambda to validate delivery and alert on gaps.",
              "z": "Events list (critical), Table (trail, bucket, event), Timeline.",
              "kfp": "Intentional trail or bucket hardening in the same change ticket; always confirm a matching ticket before paging.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudTrail insight events, S3 bucket event notifications, or CloudWatch Logs for trail validation.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CloudTrail log file validation. Monitor for DeleteTrail, PutBucketPolicy on the trail bucket, or S3 access denied to trail bucket. Use AWS Config or custom Lambda to validate delivery and alert on gaps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventName=\"DeleteTrail\" OR eventName=\"PutBucketPolicy\" requestParameters.name=*\n| table _time userIdentity.arn eventName requestParameters.bucketName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**CloudTrail Log File Delivery Failures** — Failed CloudTrail delivery means audit gaps. Attackers may target trail deletion or S3 permissions to hide activity.\n\nDocumented **Data sources**: CloudTrail insight events, S3 bucket event notifications, or CloudWatch Logs for trail validation. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **CloudTrail Log File Delivery Failures**): table _time userIdentity.arn eventName requestParameters.bucketName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.action, \"(?i)DeleteTrail|StopLogging|UpdateTrail|PutBucketPolicy|PutEventSelectors\") OR match(All_Changes.object, \"(?i)cloudtrail|trail\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CloudTrail Log File Delivery Failures** — Failed CloudTrail delivery means audit gaps. Attackers may target trail deletion or S3 permissions to hide activity.\n\nDocumented **Data sources**: CloudTrail insight events, S3 bucket event notifications, or CloudWatch Logs for trail validation. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events list (critical), Table (trail, bucket, event), Timeline.",
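              "_sketch": "A hedged delivery-gap sketch to complement the API-call detections above: it reports when each AWS account last delivered CloudTrail events. `recipientAccountId` is the raw CloudTrail field and should be auto-extracted from JSON, but verify the name in your events; note that an account with no events in the window disappears entirely, so pair this with a lookup of expected accounts for full coverage.\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" earliest=-24h\n| stats latest(_time) as last_seen by recipientAccountId\n| eval minutes_silent = round((now() - last_seen) / 60, 0)\n| where minutes_silent > 60\n| sort -minutes_silent\n```",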
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on cloudtrail log file delivery failures. Failed CloudTrail delivery means audit gaps. Attackers may target trail deletion or S3 permissions to hide activity. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.action, \"(?i)DeleteTrail|StopLogging|UpdateTrail|PutBucketPolicy|PutEventSelectors\") OR match(All_Changes.object, \"(?i)cloudtrail|trail\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.31",
              "n": "CloudWatch Alarm State Changes",
              "c": "medium",
              "f": "beginner",
              "v": "Alarm state transitions (OK → ALARM, INSUFFICIENT_DATA) provide a consolidated view of metric-based issues. Centralizing in Splunk enables correlation with other data.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch Events (Alarm state change), SNS subscription",
              "q": "index=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"CloudWatch Alarm State Change\" detail.state.value=\"ALARM\"\n| table _time detail.alarmName detail.state.value detail.newStateReason\n| sort -_time",
              "m": "Create EventBridge rule for CloudWatch Alarm State Change. Send to SNS topic; ingest via Splunk_TA_aws or HEC. Filter for state=ALARM. Correlate alarm name with resource tags for ownership.",
              "z": "Table (alarm, state, reason), Timeline (alarms over time), Single value (active alarms).",
              "kfp": "Test alarms, threshold tuning, and `INSUFFICIENT_DATA` that flaps during account setup or a new service bootstrap.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch Events (Alarm state change), SNS subscription.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate EventBridge rule for CloudWatch Alarm State Change. Send to SNS topic; ingest via Splunk_TA_aws or HEC. Filter for state=ALARM. Correlate alarm name with resource tags for ownership.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"CloudWatch Alarm State Change\" detail.state.value=\"ALARM\"\n| table _time detail.alarmName detail.state.value detail.newStateReason\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**CloudWatch Alarm State Changes** — Alarm state transitions (OK → ALARM, INSUFFICIENT_DATA) provide a consolidated view of metric-based issues. Centralizing in Splunk enables correlation with other data.\n\nDocumented **Data sources**: CloudWatch Events (Alarm state change), SNS subscription. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **CloudWatch Alarm State Changes**): table _time detail.alarmName detail.state.value detail.newStateReason\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (alarm, state, reason), Timeline (alarms over time), Single value (active alarms).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on cloudwatch alarm state changes. When a CloudWatch threshold moves into a bad state so the owning team is not the last to know. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.32",
              "n": "NAT Gateway Bytes Processed and Connection Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "NAT Gateway is a single point of egress for private subnets. Monitoring bytes and connection count supports capacity and cost (data processed) planning.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch NAT Gateway metrics (BytesOutToDestination, ActiveConnectionCount)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/NATGateway\"\n| timechart span=1h sum(Sum) as bytes, avg(Average) as connections by NatGatewayId",
              "m": "Collect NAT Gateway metrics. Alert on sudden drop in BytesOutToDestination (possible outage) or spike in ActiveConnectionCount (possible connection exhaustion). Track data processed for cost.",
              "z": "Line chart (bytes, connections by NAT GW), Table (NAT GW, bytes today), Single value.",
              "kfp": "Large data transfers, backups, and regional failovers you scheduled; one-off spikes during migration, not a silent exfil by default.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch NAT Gateway metrics (BytesOutToDestination, ActiveConnectionCount).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect NAT Gateway metrics. Alert on sudden drop in BytesOutToDestination (possible outage) or spike in ActiveConnectionCount (possible connection exhaustion). Track data processed for cost.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/NATGateway\"\n| timechart span=1h sum(Sum) as bytes, avg(Average) as connections by NatGatewayId\n```\n\nUnderstanding this SPL\n\n**NAT Gateway Bytes Processed and Connection Tracking** — NAT Gateway is a single point of egress for private subnets. Monitoring bytes and connection count supports capacity and cost (data processed) planning.\n\nDocumented **Data sources**: CloudWatch NAT Gateway metrics (BytesOutToDestination, ActiveConnectionCount). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by NatGatewayId** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (bytes, connections by NAT GW), Table (NAT GW, bytes today), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on nat gateway bytes processed and connection tracking. How much is leaving the private network through the NAT and how many connections it carries, so a quiet cutover does not show up only as a ticket. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.33",
              "n": "VPN Connection State and Tunnel Status",
              "c": "critical",
              "f": "beginner",
              "v": "VPN down breaks hybrid connectivity. Tunnel state monitoring ensures quick detection and failover to secondary tunnel or connection.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch VPN metrics (TunnelState, TunnelDataIn, TunnelDataOut)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/VPN\" metric_name=\"TunnelState\"\n| where Average != 1\n| table _time VpnId TunnelIpAddress Average",
              "m": "TunnelState 1 = UP, 0 = DOWN. Alert when either tunnel is down. Monitor TunnelDataIn/Out for traffic; zero traffic may indicate routing or peer issue even if state is UP.",
              "z": "Status panel (tunnel up/down), Table (VPN, tunnel, state), Timeline.",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch VPN metrics (TunnelState, TunnelDataIn, TunnelDataOut).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTunnelState 1 = UP, 0 = DOWN. Alert when either tunnel is down. Monitor TunnelDataIn/Out for traffic; zero traffic may indicate routing or peer issue even if state is UP.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/VPN\" metric_name=\"TunnelState\"\n| where Average != 1\n| table _time VpnId TunnelIpAddress Average\n```\n\nUnderstanding this SPL\n\n**VPN Connection State and Tunnel Status** — VPN down breaks hybrid connectivity. Tunnel state monitoring ensures quick detection and failover to secondary tunnel or connection.\n\nDocumented **Data sources**: CloudWatch VPN metrics (TunnelState, TunnelDataIn, TunnelDataOut). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Average != 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VPN Connection State and Tunnel Status**): table _time VpnId TunnelIpAddress Average\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status panel (tunnel up/down), Table (VPN, tunnel, state), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on vpn connection state and tunnel status. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.34",
              "n": "AWS Organizations SCP and OU Changes",
              "c": "high",
              "f": "intermediate",
              "v": "SCP (Service Control Policy) and OU structure changes affect permissions across many accounts. Unauthorized changes can weaken security boundaries.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail` (management account or delegated admin)",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" (eventName=\"AttachPolicy\" OR eventName=\"DetachPolicy\" OR eventName=\"CreateOrganizationalUnit\" OR eventName=\"MoveAccount\") requestParameters.targetId=*\n| table _time userIdentity.arn eventName requestParameters.policyId requestParameters.organizationalUnitId\n| sort -_time",
              "m": "Ensure CloudTrail in management account logs Organizations API calls. Alert on AttachPolicy/DetachPolicy (SCP) and MoveAccount. Restrict who can modify SCPs via IAM and MFA.",
              "z": "Table (who, what, when), Timeline, Bar chart by event type.",
              "kfp": "Organization-wide guardrails rolled out in waves; new-OUs or new accounts can briefly flag events your playbook allows.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail` (management account or delegated admin).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure CloudTrail in management account logs Organizations API calls. Alert on AttachPolicy/DetachPolicy (SCP) and MoveAccount. Restrict who can modify SCPs via IAM and MFA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" (eventName=\"AttachPolicy\" OR eventName=\"DetachPolicy\" OR eventName=\"CreateOrganizationalUnit\" OR eventName=\"MoveAccount\") requestParameters.targetId=*\n| table _time userIdentity.arn eventName requestParameters.policyId requestParameters.organizationalUnitId\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**AWS Organizations SCP and OU Changes** — SCP (Service Control Policy) and OU structure changes affect permissions across many accounts. Unauthorized changes can weaken security boundaries.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (management account or delegated admin). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **AWS Organizations SCP and OU Changes**): table _time userIdentity.arn eventName requestParameters.policyId requestParameters.organizationalUnitId\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.action, \"(?i)AttachPolicy|DetachPolicy|CreateOrganizationalUnit|MoveAccount|CreateAccount|CloseAccount\") OR match(All_Changes.object, \"(?i)organizations|scp|ou-\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**AWS Organizations SCP and OU Changes** — SCP (Service Control Policy) and OU structure changes affect permissions across many accounts. Unauthorized changes can weaken security boundaries.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (management account or delegated admin). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (who, what, when), Timeline, Bar chart by event type.",
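              "_sketch": "A hedged noise-reduction sketch: it pages only when the actor is not on an approved list. `org_change_admins.csv` is a hypothetical lookup with a single `actor` column of ARNs allowed to change SCPs; the subsearch turns it into a NOT filter.\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" (eventName=\"AttachPolicy\" OR eventName=\"DetachPolicy\" OR eventName=\"MoveAccount\")\n| rename userIdentity.arn as actor\n| search NOT [| inputlookup org_change_admins.csv | fields actor ]\n| table _time actor eventName\n| sort -_time\n```",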
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on aws organizations scp and ou changes. SCP (Service Control Policy) and OU structure changes affect permissions across many accounts. Unauthorized changes can. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.action, \"(?i)AttachPolicy|DetachPolicy|CreateOrganizationalUnit|MoveAccount|CreateAccount|CloseAccount\") OR match(All_Changes.object, \"(?i)organizations|scp|ou-\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.35",
              "n": "S3 Replication Lag and Failed Replication",
              "c": "high",
              "f": "intermediate",
              "v": "Replication lag or failures break DR and compliance. Detecting failures ensures data is replicated within RPO.",
              "t": "`Splunk_TA_aws`",
              "d": "S3 Replication metrics (ReplicationLatency, BytesPendingReplication), S3 event notifications for replication failures",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/S3\" metric_name=\"ReplicationLatency\"\n| where Average > 900\n| timechart span=15m avg(Average) by SourceBucket, DestinationBucket",
              "m": "Enable S3 Replication metrics in CloudWatch. Configure event notifications for replication failures (s3:Replication:OperationFailedReplication). Alert on ReplicationLatency > 15 min or any failure event.",
              "z": "Line chart (latency by bucket pair), Table (failed replications), Single value (bytes pending).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: S3 Replication metrics (ReplicationLatency, BytesPendingReplication), S3 event notifications for replication failures.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable S3 Replication metrics in CloudWatch. Configure event notifications for replication failures (s3:Replication:OperationFailedReplication). Alert on ReplicationLatency > 15 min or any failure event.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/S3\" metric_name=\"ReplicationLatency\"\n| where Average > 900\n| timechart span=15m avg(Average) by SourceBucket, DestinationBucket\n```\n\nUnderstanding this SPL\n\n**S3 Replication Lag and Failed Replication** — Replication lag or failures break DR and compliance. Detecting failures ensures data is replicated within RPO.\n\nDocumented **Data sources**: S3 Replication metrics (ReplicationLatency, BytesPendingReplication), S3 event notifications for replication failures. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Average > 900` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by SourceBucket, DestinationBucket** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency by bucket pair), Table (failed replications), Single value (bytes pending).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on s3 replication lag and failed replication. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.36",
              "n": "ElastiCache/Redis CPU and Evictions",
              "c": "high",
              "f": "beginner",
              "v": "High CPU or evictions indicate undersized cache or hot keys. Impacts application latency and cache hit ratio.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch ElastiCache metrics (CPUUtilization, CacheEvictions, CacheHitRate)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ElastiCache\" (metric_name=\"CPUUtilization\" OR metric_name=\"CacheEvictions\")\n| bin _time span=5m\n| eval cpu=if(metric_name=\"CPUUtilization\", Average, null()),\n       evictions=if(metric_name=\"CacheEvictions\", Average, null())\n| stats avg(cpu) as cpu, sum(evictions) as evictions by _time, CacheClusterId\n| where cpu > 80 OR evictions > 100",
              "m": "Collect ElastiCache metrics per node/cluster. Alert on CPUUtilization > 80% sustained. Monitor CacheHitRate; low hit rate and high evictions suggest need for more memory or key design review.",
              "z": "Line chart (CPU, evictions, hit rate), Table (cluster, metrics), Gauge (hit rate).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch ElastiCache metrics (CPUUtilization, CacheEvictions, CacheHitRate).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect ElastiCache metrics per node/cluster. Alert on CPUUtilization > 80% sustained. Monitor CacheHitRate; low hit rate and high evictions suggest need for more memory or key design review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ElastiCache\" (metric_name=\"CPUUtilization\" OR metric_name=\"CacheEvictions\")\n| bin _time span=5m\n| eval cpu=if(metric_name=\"CPUUtilization\", Average, null()),\n       evictions=if(metric_name=\"CacheEvictions\", Average, null())\n| stats avg(cpu) as cpu, sum(evictions) as evictions by _time, CacheClusterId\n| where cpu > 80 OR evictions > 100\n```\n\nUnderstanding this SPL\n\n**ElastiCache/Redis CPU and Evictions** — High CPU or evictions indicate undersized cache or hot keys. Impacts application latency and cache hit ratio.\n\nDocumented **Data sources**: CloudWatch ElastiCache metrics (CPUUtilization, CacheEvictions, CacheHitRate). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `eval` defines or adjusts **cpu** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by _time, CacheClusterId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cpu > 80 OR evictions > 100` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU, evictions, hit rate), Table (cluster, metrics), Gauge (hit rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on elasticache/redis cpu and evictions. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.37",
              "n": "SNS Delivery Failures and Bounce/Complaint",
              "c": "high",
              "f": "beginner",
              "v": "SNS delivery failures mean subscribers are not receiving notifications. Bounce/complaint (for email) affects sender reputation and deliverability.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch SNS metrics (NumberOfNotificationsFailed, NumberOfMessagesFailed)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/SNS\" metric_name=\"NumberOfNotificationsFailed\"\n| where Sum > 0\n| timechart span=5m sum(Sum) by TopicName",
              "m": "Collect SNS metrics. Alert when NumberOfNotificationsFailed > 0. For email subscriptions, enable bounce/complaint feedback and ingest via SNS or EventBridge. Track delivery success rate.",
              "z": "Line chart (failures by topic), Table (topic, failure count), Single value (failed notifications).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch SNS metrics (NumberOfNotificationsFailed, NumberOfMessagesFailed).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect SNS metrics. Alert when NumberOfNotificationsFailed > 0. For email subscriptions, enable bounce/complaint feedback and ingest via SNS or EventBridge. Track delivery success rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/SNS\" metric_name=\"NumberOfNotificationsFailed\"\n| where Sum > 0\n| timechart span=5m sum(Sum) by TopicName\n```\n\nUnderstanding this SPL\n\n**SNS Delivery Failures and Bounce/Complaint** — SNS delivery failures mean subscribers are not receiving notifications. Bounce/complaint (for email) affects sender reputation and deliverability.\n\nDocumented **Data sources**: CloudWatch SNS metrics (NumberOfNotificationsFailed, NumberOfMessagesFailed). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Sum > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by TopicName** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failures by topic), Table (topic, failure count), Single value (failed notifications).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on sns delivery failures and bounce/complaint. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.38",
              "n": "EventBridge Rule Invocation and Failed Invocations",
              "c": "medium",
              "f": "beginner",
              "v": "Failed invocations mean downstream targets (Lambda, SQS, etc.) are not receiving events. Critical for event-driven architecture reliability.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch EventBridge metrics (Invocations, FailedInvocations, TriggeredRules)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Events\" metric_name=\"FailedInvocations\"\n| where Sum > 0\n| timechart span=5m sum(Sum) by RuleName",
              "m": "Collect EventBridge metrics per rule. Alert on FailedInvocations > 0. Correlate with target service (e.g. Lambda errors, SQS rejections) for root cause.",
              "z": "Table (rule, failures), Line chart (invocations vs failures), Single value.",
              "kfp": "Rules pointed at throttled or sleeping targets, deploy windows, and canary invocations in non-production.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch EventBridge metrics (Invocations, FailedInvocations, TriggeredRules).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect EventBridge metrics per rule. Alert on FailedInvocations > 0. Correlate with target service (e.g. Lambda errors, SQS rejections) for root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Events\" metric_name=\"FailedInvocations\"\n| where Sum > 0\n| timechart span=5m sum(Sum) by RuleName\n```\n\nUnderstanding this SPL\n\n**EventBridge Rule Invocation and Failed Invocations** — Failed invocations mean downstream targets (Lambda, SQS, etc.) are not receiving events. Critical for event-driven architecture reliability.\n\nDocumented **Data sources**: CloudWatch EventBridge metrics (Invocations, FailedInvocations, TriggeredRules). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Sum > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by RuleName** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rule, failures), Line chart (invocations vs failures), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on eventbridge rule invocation and failed invocations. Event rules that could not do their work so a silent gap does not show up only as a downstream timeout. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.39",
              "n": "AWS Backup Restore Job Failures",
              "c": "critical",
              "f": "beginner",
              "v": "Restore job failures prevent recovery during DR. Monitoring ensures backup and restore pipeline is healthy.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch Events (Backup job state change), Backup job history via API",
              "q": "index=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"Backup Job State Change\" (detail.state=\"FAILED\" OR detail.state=\"ABORTED\")\n| table _time detail.backupJobId detail.resourceType detail.state detail.message\n| sort -_time",
              "m": "Create EventBridge rule for Backup Job State Change. Filter for FAILED/ABORTED. Optionally ingest backup job list from AWS Backup API for compliance dashboard. Run periodic restore tests and log results.",
              "z": "Table (job, resource, state, message), Timeline (failed restores), Single value (failed jobs last 24h).",
              "kfp": "Jobs that you cancelled on purpose, DR drills, and pre-prod runs that you expect to fail a gate.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch Events (Backup job state change), Backup job history via API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate EventBridge rule for Backup Job State Change. Filter for FAILED/ABORTED. Optionally ingest backup job list from AWS Backup API for compliance dashboard. Run periodic restore tests and log results.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"Backup Job State Change\" (detail.state=\"FAILED\" OR detail.state=\"ABORTED\")\n| table _time detail.backupJobId detail.resourceType detail.state detail.message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**AWS Backup Restore Job Failures** — Restore job failures prevent recovery during DR. Monitoring ensures backup and restore pipeline is healthy.\n\nDocumented **Data sources**: CloudWatch Events (Backup job state change), Backup job history via API. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **AWS Backup Restore Job Failures**): table _time detail.backupJobId detail.resourceType detail.state detail.message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (job, resource, state, message), Timeline (failed restores), Single value (failed jobs last 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on aws backup restore job failures. Backups and restores that do not complete correctly so we are not one restore away from a bad surprise. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.40",
              "n": "Route 53 Health Check Failures",
              "c": "critical",
              "f": "beginner",
              "v": "Health check failures indicate endpoint or path unreachable. Used for failover and monitoring of external/internal resources.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch Route 53 health check metrics (HealthCheckStatus)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Route53\" metric_name=\"HealthCheckStatus\"\n| where Average != 1\n| table _time HealthCheckId Average",
              "m": "HealthCheckStatus 1 = Healthy, 0 = Unhealthy. Alert when status = 0. Create dashboard of all health checks with status. Use for failover routing and status page.",
              "z": "Status panel (healthy/unhealthy), Table (health check, status), Map (endpoint locations).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch Route 53 health check metrics (HealthCheckStatus).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nHealthCheckStatus 1 = Healthy, 0 = Unhealthy. Alert when status = 0. Create dashboard of all health checks with status. Use for failover routing and status page.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Route53\" metric_name=\"HealthCheckStatus\"\n| where Average != 1\n| table _time HealthCheckId Average\n```\n\nUnderstanding this SPL\n\n**Route 53 Health Check Failures** — Health check failures indicate endpoint or path unreachable. Used for failover and monitoring of external/internal resources.\n\nDocumented **Data sources**: CloudWatch Route 53 health check metrics (HealthCheckStatus). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Average != 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Route 53 Health Check Failures**): table _time HealthCheckId Average\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status panel (healthy/unhealthy), Table (health check, status), Map (endpoint locations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on route 53 health check failures. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.41",
              "n": "Redshift Cluster Health and Connection Count",
              "c": "high",
              "f": "beginner",
              "v": "Redshift cluster health and connection exhaustion impact analytics workloads. Monitoring supports capacity and connection limit management.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch Redshift metrics (DatabaseConnections, CPUUtilization, PercentageDiskSpaceUsed)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Redshift\" metric_name=\"DatabaseConnections\"\n| bin _time span=5m\n| stats avg(Average) as connections by _time, ClusterIdentifier\n| where connections > 80",
              "m": "Collect Redshift metrics. Alert when DatabaseConnections approaches max (e.g. 90% of limit) or CPUUtilization/PercentageDiskSpaceUsed is high. Correlate with query queue length.",
              "z": "Line chart (connections, CPU, disk by cluster), Table (cluster, metrics), Gauge (connection %).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch Redshift metrics (DatabaseConnections, CPUUtilization, PercentageDiskSpaceUsed).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Redshift metrics. Alert when DatabaseConnections approaches max (e.g. 90% of limit) or CPUUtilization/PercentageDiskSpaceUsed is high. Correlate with query queue length.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Redshift\" metric_name=\"DatabaseConnections\"\n| bin _time span=5m\n| stats avg(Average) as connections by _time, ClusterIdentifier\n| where connections > 80\n```\n\nUnderstanding this SPL\n\n**Redshift Cluster Health and Connection Count** — Redshift cluster health and connection exhaustion impact analytics workloads. Monitoring supports capacity and connection limit management.\n\nDocumented **Data sources**: CloudWatch Redshift metrics (DatabaseConnections, CPUUtilization, PercentageDiskSpaceUsed). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, ClusterIdentifier** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where connections > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (connections, CPU, disk by cluster), Table (cluster, metrics), Gauge (connection %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on redshift cluster health and connection count. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.42",
              "n": "Step Functions Execution Failures",
              "c": "high",
              "f": "beginner",
              "v": "Failed or aborted executions break workflows. Tracking failure rate and failed execution IDs enables debugging and retry.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch Step Functions metrics (ExecutionsFailed, ExecutionsAborted), or EventBridge for state machine events",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/States\" (metric_name=\"ExecutionsFailed\" OR metric_name=\"ExecutionsAborted\")\n| where Sum > 0\n| timechart span=5m sum(Sum) by StateMachineArn",
              "m": "Collect Step Functions metrics. Alert when ExecutionsFailed or ExecutionsAborted > 0. Use X-Ray or CloudWatch Logs for failed execution details. Create runbook for common failure causes.",
              "z": "Line chart (failed, aborted by workflow), Table (state machine, count), Single value.",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch Step Functions metrics (ExecutionsFailed, ExecutionsAborted), or EventBridge for state machine events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Step Functions metrics. Alert when ExecutionsFailed or ExecutionsAborted > 0. Use X-Ray or CloudWatch Logs for failed execution details. Create runbook for common failure causes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/States\" (metric_name=\"ExecutionsFailed\" OR metric_name=\"ExecutionsAborted\")\n| where Sum > 0\n| timechart span=5m sum(Sum) by StateMachineArn\n```\n\nUnderstanding this SPL\n\n**Step Functions Execution Failures** — Failed or aborted executions break workflows. Tracking failure rate and failed execution IDs enables debugging and retry.\n\nDocumented **Data sources**: CloudWatch Step Functions metrics (ExecutionsFailed, ExecutionsAborted), or EventBridge for state machine events. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Sum > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by StateMachineArn** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failed, aborted by workflow), Table (state machine, count), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on step functions execution failures. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.43",
              "n": "EFS Burst Credit Balance and Throughput",
              "c": "medium",
              "f": "beginner",
              "v": "EFS burst credits deplete under sustained high throughput; performance then drops to baseline. Monitoring prevents unexpected slowdowns.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch EFS metrics (BurstCreditBalance, DataReadIOBytes, DataWriteIOBytes)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/EFS\" metric_name=\"BurstCreditBalance\"\n| where Average < 500000000\n| timechart span=1h avg(Average) by FileSystemId",
              "m": "Collect EFS metrics. Alert when BurstCreditBalance falls below threshold (e.g. 500M). Consider provisioned throughput for consistent high I/O. Dashboard read/write IOPS and throughput.",
              "z": "Line chart (burst balance, IOPS by filesystem), Table (filesystem, balance), Gauge (balance %).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch EFS metrics (BurstCreditBalance, DataReadIOBytes, DataWriteIOBytes).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect EFS metrics. Alert when BurstCreditBalance falls below threshold (e.g. 500M). Consider provisioned throughput for consistent high I/O. Dashboard read/write IOPS and throughput.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/EFS\" metric_name=\"BurstCreditBalance\"\n| where Average < 500000000\n| timechart span=1h avg(Average) by FileSystemId\n```\n\nUnderstanding this SPL\n\n**EFS Burst Credit Balance and Throughput** — EFS burst credits deplete under sustained high throughput; performance then drops to baseline. Monitoring prevents unexpected slowdowns.\n\nDocumented **Data sources**: CloudWatch EFS metrics (BurstCreditBalance, DataReadIOBytes, DataWriteIOBytes). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Average < 500000000` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by FileSystemId** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (burst balance, IOPS by filesystem), Table (filesystem, balance), Gauge (balance %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on efs burst credit balance and throughput. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.44",
              "n": "Inspector Vulnerability and Finding Trends",
              "c": "high",
              "f": "intermediate",
              "v": "Inspector findings (EC2, ECR, Lambda) identify vulnerabilities. Tracking trends and new critical findings supports patch and image hygiene.",
              "t": "`Splunk_TA_aws`",
              "d": "Inspector findings via EventBridge or SNS, or Security Hub (which aggregates Inspector)",
              "q": "index=aws sourcetype=\"aws:inspector\" severity=\"CRITICAL\" OR severity=\"HIGH\"\n| stats count by severity, findingType, resourceType\n| sort -count",
              "m": "Configure Inspector to send findings to EventBridge or SNS; ingest in Splunk. Alert on new CRITICAL findings. Dashboard open findings by severity and age. Correlate with patch compliance (SSM).",
              "z": "Table (severity, type, count), Bar chart by severity, Trend line (findings over time).",
              "kfp": "Accepted risks for legacy apps; images you already planned to replace; test registries in lower accounts.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1526",
                "T1613"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: Inspector findings via EventBridge or SNS, or Security Hub (which aggregates Inspector).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Inspector to send findings to EventBridge or SNS; ingest in Splunk. Alert on new CRITICAL findings. Dashboard open findings by severity and age. Correlate with patch compliance (SSM).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:inspector\" severity=\"CRITICAL\" OR severity=\"HIGH\"\n| stats count by severity, findingType, resourceType\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Inspector Vulnerability and Finding Trends** — Inspector findings (EC2, ECR, Lambda) identify vulnerabilities. Tracking trends and new critical findings supports patch and image hygiene.\n\nDocumented **Data sources**: Inspector findings via EventBridge or SNS, or Security Hub (which aggregates Inspector). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:inspector. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:inspector\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by severity, findingType, resourceType** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (severity, type, count), Bar chart by severity, Trend line (findings over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on inspector vulnerability and finding trends. Vulnerability and image-scan findings the pipeline raised so the backlog stays honest. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.45",
              "n": "Systems Manager (SSM) Patch Compliance",
              "c": "high",
              "f": "beginner",
              "v": "Low patch compliance increases vulnerability. SSM Patch Manager compliance status enables prioritization and remediation tracking.",
              "t": "`Splunk_TA_aws`",
              "d": "SSM compliance data via Config, or custom Lambda polling SSM DescribeInstancePatchStates",
              "q": "index=aws sourcetype=\"aws:ssm:compliance\" ComplianceType=\"Patch\" status!=\"Compliant\"\n| stats count by status, InstanceId\n| sort -count",
              "m": "Use AWS Config rule for patch-compliance or custom automation to export Patch Manager compliance to S3/CloudWatch. Ingest in Splunk. Dashboard compliance % by OU/account. Alert when compliance drops below threshold.",
              "z": "Table (instance, status), Pie chart (compliant vs non-compliant), Bar chart by patch group.",
              "kfp": "Patch waves not finished yet, instances powered off, and CAB exceptions you already published.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: SSM compliance data via Config, or custom Lambda polling SSM DescribeInstancePatchStates.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse AWS Config rule for patch-compliance or custom automation to export Patch Manager compliance to S3/CloudWatch. Ingest in Splunk. Dashboard compliance % by OU/account. Alert when compliance drops below threshold.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:ssm:compliance\" ComplianceType=\"Patch\" status!=\"Compliant\"\n| stats count by status, InstanceId\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Systems Manager (SSM) Patch Compliance** — Low patch compliance increases vulnerability. SSM Patch Manager compliance status enables prioritization and remediation tracking.\n\nDocumented **Data sources**: SSM compliance data via Config, or custom Lambda polling SSM DescribeInstancePatchStates. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:ssm:compliance. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:ssm:compliance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by status, InstanceId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (instance, status), Pie chart (compliant vs non-compliant), Bar chart by patch group.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on systems manager (ssm) patch compliance. Which machines are not in the patch state the business asked for, before auditors or ransomware find them first. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.46",
              "n": "Direct Connect Virtual Interface BGP State",
              "c": "critical",
              "f": "intermediate",
              "v": "BGP down on Direct Connect breaks hybrid connectivity. Monitoring BGP session state ensures quick detection and carrier escalation.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch Direct Connect metrics (ConnectionState, VirtualInterfaceState), or custom script polling DescribeVirtualInterfaces",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/DX\" metric_name=\"ConnectionState\"\n| where Average != 1\n| table _time VirtualInterfaceId ConnectionState",
              "m": "ConnectionState 1 = available. Alert when state changes to down or unknown. For BGP specifically, use Direct Connect LAG/connection health or partner/carrier APIs if AWS metrics are insufficient.",
              "z": "Status panel (connection state), Table (VIF, state), Timeline (state changes).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch Direct Connect metrics (ConnectionState, VirtualInterfaceState), or custom script polling DescribeVirtualInterfaces.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConnectionState 1 = available. Alert when state changes to down or unknown. For BGP specifically, use Direct Connect LAG/connection health or partner/carrier APIs if AWS metrics are insufficient.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/DX\" metric_name=\"ConnectionState\"\n| where Average != 1\n| table _time VirtualInterfaceId ConnectionState\n```\n\nUnderstanding this SPL\n\n**Direct Connect Virtual Interface BGP State** — BGP down on Direct Connect breaks hybrid connectivity. Monitoring BGP session state ensures quick detection and carrier escalation.\n\nDocumented **Data sources**: CloudWatch Direct Connect metrics (ConnectionState, VirtualInterfaceState), or custom script polling DescribeVirtualInterfaces. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Average != 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Direct Connect Virtual Interface BGP State**): table _time VirtualInterfaceId ConnectionState\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status panel (connection state), Table (VIF, state), Timeline (state changes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on direct connect virtual interface bgp state. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.47",
              "n": "Glue Job Run Failures and Duration",
              "c": "high",
              "f": "beginner",
              "v": "Glue job failures break ETL pipelines. Duration trends support capacity and cost optimization.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch Glue metrics (JobRunFailureCount, JobRunDuration), Glue job run history",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Glue\" metric_name=\"JobRunFailureCount\"\n| where Sum > 0\n| timechart span=1h sum(Sum) by JobName",
              "m": "Collect Glue metrics. Alert when JobRunFailureCount > 0. Track JobRunDuration for SLA and DPU tuning. Ingest job run events from EventBridge for run-level detail.",
              "z": "Line chart (failures, duration by job), Table (job, failure count), Single value.",
              "kfp": "Known slow cold starts, scheduled batch jobs, and canary invocations in lower envs you already noise-reduced.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch Glue metrics (JobRunFailureCount, JobRunDuration), Glue job run history.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Glue metrics. Alert when JobRunFailureCount > 0. Track JobRunDuration for SLA and DPU tuning. Ingest job run events from EventBridge for run-level detail.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Glue\" metric_name=\"JobRunFailureCount\"\n| where Sum > 0\n| timechart span=1h sum(Sum) by JobName\n```\n\nUnderstanding this SPL\n\n**Glue Job Run Failures and Duration** — Glue job failures break ETL pipelines. Duration trends support capacity and cost optimization.\n\nDocumented **Data sources**: CloudWatch Glue metrics (JobRunFailureCount, JobRunDuration), Glue job run history. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Sum > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by JobName** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failures, duration by job), Table (job, failure count), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on glue job run failures and duration. Glue job failures break ETL pipelines. Duration trends support capacity and cost optimization. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.48",
              "n": "Athena Query Execution Failures and Bytes Scanned",
              "c": "medium",
              "f": "beginner",
              "v": "Failed queries and high bytes scanned impact user experience and cost. Monitoring supports optimization and error triage.",
              "t": "`Splunk_TA_aws`",
              "d": "Athena query history via API or CloudWatch (DataScannedInBytes), CloudTrail (StartQueryExecution, GetQueryResults)",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventName=\"StartQueryExecution\" errorCode!=\"\"\n| table _time userIdentity.arn requestParameters.queryExecutionId errorCode\n| sort -_time",
              "m": "Use CloudTrail for Athena API calls (success/failure). Optionally export query execution IDs and join with GetQueryExecution for bytes scanned and state. Alert on high failure rate or queries scanning >1TB.",
              "z": "Table (query, user, bytes, state), Line chart (bytes scanned over time), Bar chart (top users by bytes).",
              "kfp": "Analysts stopping large queries, bad SQL in a lab, and role tests that you expect to fail in sandboxes.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: Athena query history via API or CloudWatch (DataScannedInBytes), CloudTrail (StartQueryExecution, GetQueryResults).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse CloudTrail for Athena API calls (success/failure). Optionally export query execution IDs and join with GetQueryExecution for bytes scanned and state. Alert on high failure rate or queries scanning >1TB.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventName=\"StartQueryExecution\" errorCode!=\"\"\n| table _time userIdentity.arn requestParameters.queryExecutionId errorCode\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Athena Query Execution Failures and Bytes Scanned** — Failed queries and high bytes scanned impact user experience and cost. Monitoring supports optimization and error triage.\n\nDocumented **Data sources**: Athena query history via API or CloudWatch (DataScannedInBytes), CloudTrail (StartQueryExecution, GetQueryResults). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Athena Query Execution Failures and Bytes Scanned**): table _time userIdentity.arn requestParameters.queryExecutionId errorCode\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (query, user, bytes, state), Line chart (bytes scanned over time), Bar chart (top users by bytes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on athena query execution failures and bytes scanned. Failed Athena work that might hide reporting gaps or a permissions mistake before it hits the board meeting. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.49",
              "n": "FSx for Lustre/Windows Capacity and Throughput",
              "c": "medium",
              "f": "intermediate",
              "v": "FSx capacity and throughput metrics support HPC and Windows file share capacity planning and performance troubleshooting.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch FSx metrics (DataReadBytes, DataWriteBytes, FreeDataStorageCapacity)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/FSx\" metric_name=\"FreeDataStorageCapacity\"\n| timechart span=1d avg(Average) by FileSystemId\n| eval free_gb = FreeDataStorageCapacity / 1024 / 1024 / 1024\n| where free_gb < 100",
              "m": "Collect FSx metrics. Alert when free capacity is low. Monitor read/write throughput for Lustre; for Windows, track client connections and IOPS. Correlate with backup completion.",
              "z": "Line chart (capacity, throughput by filesystem), Table (filesystem, free GB), Gauge (used %).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch FSx metrics (DataReadBytes, DataWriteBytes, FreeDataStorageCapacity).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect FSx metrics. Alert when free capacity is low. Monitor read/write throughput for Lustre; for Windows, track client connections and IOPS. Correlate with backup completion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/FSx\" metric_name=\"FreeDataStorageCapacity\"\n| timechart span=1d avg(Average) by FileSystemId\n| eval free_gb = FreeDataStorageCapacity / 1024 / 1024 / 1024\n| where free_gb < 100\n```\n\nUnderstanding this SPL\n\n**FSx for Lustre/Windows Capacity and Throughput** — FSx capacity and throughput metrics support HPC and Windows file share capacity planning and performance troubleshooting.\n\nDocumented **Data sources**: CloudWatch FSx metrics (DataReadBytes, DataWriteBytes, FreeDataStorageCapacity). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by FileSystemId** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **free_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where free_gb < 100` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (capacity, throughput by filesystem), Table (filesystem, free GB), Gauge (used %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on fsx for lustre/windows capacity and throughput. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.50",
              "n": "Trusted Advisor Check Results and Cost Optimization",
              "c": "low",
              "f": "intermediate",
              "v": "Trusted Advisor identifies cost optimization, performance, and security improvements. Tracking check results supports governance and savings.",
              "t": "`Splunk_TA_aws` (Trusted Advisor API or Support API)",
              "d": "Trusted Advisor API (describe-trusted-advisor-checks, describe-trusted-advisor-check-result)",
              "q": "index=aws sourcetype=\"aws:trustedadvisor\" status=\"warning\" OR status=\"error\"\n| stats count by category name status\n| sort -count",
              "m": "Schedule Lambda or script to call Trusted Advisor API (requires Business/Enterprise Support). Export check results to S3 or send to Splunk via HEC. Dashboard by category (cost, performance, security). Alert on new critical security checks failing.",
              "z": "Table (check, category, status), Pie chart (ok vs warning vs error), Bar chart by category.",
              "kfp": "Checks in warning for architecture choices you accept, or for temporary exceptions in migration; review category before treating as a new incident.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws` (Trusted Advisor API or Support API).\n• Ensure the following data sources are available: Trusted Advisor API (describe-trusted-advisor-checks, describe-trusted-advisor-check-result).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule Lambda or script to call Trusted Advisor API (requires Business/Enterprise Support). Export check results to S3 or send to Splunk via HEC. Dashboard by category (cost, performance, security). Alert on new critical security checks failing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:trustedadvisor\" status=\"warning\" OR status=\"error\"\n| stats count by category name status\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Trusted Advisor Check Results and Cost Optimization** — Trusted Advisor identifies cost optimization, performance, and security improvements. Tracking check results supports governance and savings.\n\nDocumented **Data sources**: Trusted Advisor API (describe-trusted-advisor-checks, describe-trusted-advisor-check-result). **App/TA** (typical add-on context): `Splunk_TA_aws` (Trusted Advisor API or Support API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:trustedadvisor. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:trustedadvisor\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by category name status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (check, category, status), Pie chart (ok vs warning vs error), Bar chart by category.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on trusted advisor check results and cost optimization. AWS’s own best-practice and cost flags in one list so the team can pick what to fix and what the business has decided to live with. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.51",
              "n": "Lambda Concurrent Executions and Throttling",
              "c": "high",
              "f": "beginner",
              "v": "Throttling occurs when concurrent executions hit account or function limits. Monitoring prevents dropped invocations and supports quota increase requests.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch Lambda metrics (ConcurrentExecutions, Throttles)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" metric_name=\"Throttles\"\n| where Sum > 0\n| timechart span=5m sum(Sum) by FunctionName",
              "m": "Collect Lambda metrics. Alert on Throttles > 0. Monitor ConcurrentExecutions vs account limit (1000 default). Consider reserved concurrency for critical functions. Dashboard invocations, duration, errors, throttles together.",
              "z": "Line chart (concurrent, throttles by function), Table (function, throttles), Single value (account concurrent %).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch Lambda metrics (ConcurrentExecutions, Throttles).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Lambda metrics. Alert on Throttles > 0. Monitor ConcurrentExecutions vs account limit (1000 default). Consider reserved concurrency for critical functions. Dashboard invocations, duration, errors, throttles together.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" metric_name=\"Throttles\"\n| where Sum > 0\n| timechart span=5m sum(Sum) by FunctionName\n```\n\nUnderstanding this SPL\n\n**Lambda Concurrent Executions and Throttling** — Throttling occurs when concurrent executions hit account or function limits. Monitoring prevents dropped invocations and supports quota increase requests.\n\nDocumented **Data sources**: CloudWatch Lambda metrics (ConcurrentExecutions, Throttles). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Sum > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by FunctionName** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (concurrent, throttles by function), Table (function, throttles), Single value (account concurrent %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on lambda concurrent executions and throttling. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.52",
              "n": "ECR Image Scan Findings",
              "c": "high",
              "f": "beginner",
              "v": "ECR image scan finds CVEs in container images. Critical/high findings in production images require immediate remediation or rollback.",
              "t": "`Splunk_TA_aws`",
              "d": "ECR scan findings via EventBridge (ECR Image Scan), or Security Hub",
              "q": "index=aws sourcetype=\"aws:ecr:scan\" severity=\"CRITICAL\" OR severity=\"HIGH\"\n| table _time repositoryName imageTag severity findingName\n| sort -_time",
              "m": "Enable ECR image scanning (enhanced or basic). Send scan completion events to EventBridge; forward to Splunk. Alert on CRITICAL/HIGH in repos tagged as production. Block deployment in pipeline when findings exceed threshold.",
              "z": "Table (repo, tag, severity, CVE), Bar chart (findings by repo), Trend line (findings over time).",
              "kfp": "Accepted risks for legacy apps; images you already planned to replace; test registries in lower accounts.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1610",
                "T1613"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: ECR scan findings via EventBridge (ECR Image Scan), or Security Hub.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable ECR image scanning (enhanced or basic). Send scan completion events to EventBridge; forward to Splunk. Alert on CRITICAL/HIGH in repos tagged as production. Block deployment in pipeline when findings exceed threshold.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:ecr:scan\" severity=\"CRITICAL\" OR severity=\"HIGH\"\n| table _time repositoryName imageTag severity findingName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**ECR Image Scan Findings** — ECR image scan finds CVEs in container images. Critical/high findings in production images require immediate remediation or rollback.\n\nDocumented **Data sources**: ECR scan findings via EventBridge (ECR Image Scan), or Security Hub. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:ecr:scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:ecr:scan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **ECR Image Scan Findings**): table _time repositoryName imageTag severity findingName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (repo, tag, severity, CVE), Bar chart (findings by repo), Trend line (findings over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on ecr image scan findings. Vulnerability and image-scan findings the pipeline raised so the backlog stays honest. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.53",
              "n": "CloudWatch Logs Subscription Filter Errors",
              "c": "medium",
              "f": "intermediate",
              "v": "Subscription filter delivery failures mean logs are not reaching Lambda, Kinesis, or Firehose. Indicates quota, permission, or downstream failures.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch Logs metric filters (IncomingLogEvents, DeliveryErrors), or destination-specific metrics",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Logs\" metric_name=\"DeliveryErrors\"\n| where Sum > 0\n| timechart span=5m sum(Sum) by LogGroupName, FilterName",
              "m": "Create CloudWatch metric filter for subscription delivery errors if available, or monitor Kinesis/Firehose delivery errors. Alert when delivery errors spike. Check Lambda/Kinesis throttling and IAM permissions.",
              "z": "Table (log group, filter, errors), Line chart (delivery errors over time), Single value.",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch Logs metric filters (IncomingLogEvents, DeliveryErrors), or destination-specific metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate CloudWatch metric filter for subscription delivery errors if available, or monitor Kinesis/Firehose delivery errors. Alert when delivery errors spike. Check Lambda/Kinesis throttling and IAM permissions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Logs\" metric_name=\"DeliveryErrors\"\n| where Sum > 0\n| timechart span=5m sum(Sum) by LogGroupName, FilterName\n```\n\nUnderstanding this SPL\n\n**CloudWatch Logs Subscription Filter Errors** — Subscription filter delivery failures mean logs are not reaching Lambda, Kinesis, or Firehose. Indicates quota, permission, or downstream failures.\n\nDocumented **Data sources**: CloudWatch Logs metric filters (IncomingLogEvents, DeliveryErrors), or destination-specific metrics. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Sum > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by LogGroupName, FilterName** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (log group, filter, errors), Line chart (delivery errors over time), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on cloudwatch logs subscription filter errors. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.54",
              "n": "Kinesis Data Stream Iterator Age and Throttling",
              "c": "high",
              "f": "beginner",
              "v": "High iterator age means consumers are falling behind. Throttling indicates producers exceed shard capacity. Both cause lag and potential data loss.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch Kinesis metrics (GetRecords.IteratorAgeMilliseconds, WriteProvisionedThroughputExceeded)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Kinesis\" metric_name=\"GetRecords.IteratorAgeMilliseconds\"\n| where Average > 60000\n| timechart span=1m avg(Average) by StreamName",
              "m": "Collect Kinesis metrics. Alert when iterator age > 60 seconds (consumer lag). Alert on WriteProvisionedThroughputExceeded (add shards or reduce write rate). Monitor IncomingRecords/OutgoingRecords for throughput.",
              "z": "Line chart (iterator age, throttles by stream), Table (stream, age ms), Single value (max lag).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch Kinesis metrics (GetRecords.IteratorAgeMilliseconds, WriteProvisionedThroughputExceeded).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Kinesis metrics. Alert when iterator age > 60 seconds (consumer lag). Alert on WriteProvisionedThroughputExceeded (add shards or reduce write rate). Monitor IncomingRecords/OutgoingRecords for throughput.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Kinesis\" metric_name=\"GetRecords.IteratorAgeMilliseconds\"\n| where Average > 60000\n| timechart span=1m avg(Average) by StreamName\n```\n\nUnderstanding this SPL\n\n**Kinesis Data Stream Iterator Age and Throttling** — High iterator age means consumers are falling behind. Throttling indicates producers exceed shard capacity. Both cause lag and potential data loss.\n\nDocumented **Data sources**: CloudWatch Kinesis metrics (GetRecords.IteratorAgeMilliseconds, WriteProvisionedThroughputExceeded). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Average > 60000` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1m** buckets with a separate series **by StreamName** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (iterator age, throttles by stream), Table (stream, age ms), Single value (max lag).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on kinesis data stream iterator age and throttling. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.55",
              "n": "Secrets Manager Secret Rotation and Access",
              "c": "high",
              "f": "beginner",
              "v": "Failed rotation leaves stale credentials. Unusual access patterns may indicate credential abuse. Audit supports compliance and incident response.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudTrail (RotateSecret, GetSecretValue, DescribeSecret)",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventSource=\"secretsmanager.amazonaws.com\" (eventName=\"RotateSecret\" OR eventName=\"GetSecretValue\")\n| stats count by userIdentity.arn eventName requestParameters.secretId\n| sort -count",
              "m": "CloudTrail logs Secrets Manager API. Alert on RotateSecret failures. Baseline GetSecretValue by principal and secret; alert on anomalous access (new principal, spike in access). Track rotation schedule compliance.",
              "z": "Table (principal, secret, action, count), Timeline (rotation events), Bar chart by secret.",
              "kfp": "Rotation jobs, microservice cold starts, and read-heavy canaries that call `GetSecretValue` in bursts.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098.001",
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudTrail (RotateSecret, GetSecretValue, DescribeSecret).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCloudTrail logs Secrets Manager API. Alert on RotateSecret failures. Baseline GetSecretValue by principal and secret; alert on anomalous access (new principal, spike in access). Track rotation schedule compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventSource=\"secretsmanager.amazonaws.com\" (eventName=\"RotateSecret\" OR eventName=\"GetSecretValue\")\n| stats count by userIdentity.arn eventName requestParameters.secretId\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Secrets Manager Secret Rotation and Access** — Failed rotation leaves stale credentials. Unusual access patterns may indicate credential abuse. Audit supports compliance and incident response.\n\nDocumented **Data sources**: CloudTrail (RotateSecret, GetSecretValue, DescribeSecret). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn eventName requestParameters.secretId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.app, \"(?i)secretsmanager\\\\.amazonaws\") OR match(All_Changes.object, \"(?i)secret:\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Secrets Manager Secret Rotation and Access** — Failed rotation leaves stale credentials. Unusual access patterns may indicate credential abuse. Audit supports compliance and incident response.\n\nDocumented **Data sources**: CloudTrail (RotateSecret, GetSecretValue, DescribeSecret). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (principal, secret, action, count), Timeline (rotation events), Bar chart by secret.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on secrets manager secret rotation and access. Failed rotation leaves stale credentials. Unusual access patterns may indicate credential abuse. Audit supports complian. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.app, \"(?i)secretsmanager\\\\.amazonaws\") OR match(All_Changes.object, \"(?i)secret:\")\n  by All_Changes.user All_Changes.object All_Changes.action span=1h\n| sort -count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.56",
              "n": "AWS Lambda Cold Start Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Cold start frequency and duration impact user experience. High cold start rates or long init times cause request latency spikes and timeouts.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudWatch Logs (Lambda platform logs: REPORT, INIT), X-Ray traces",
              "q": "index=aws sourcetype=\"aws:cloudwatchlogs\" (\"REPORT RequestId\" OR \"INIT_START\")\n| eval function_name=case(isnotnull(log_group), replace(log_group, \"/aws/lambda/\", \"\"), 1=1, \"unknown\")\n| rex \"Init Duration:\\s+(?<init_ms>\\d+\\.?\\d*)\\s*ms\"\n| rex \"Duration:\\s+(?<duration_ms>\\d+\\.?\\d*)\\s*ms\"\n| eval cold_start=if(match(_raw, \"INIT_START\"), 1, 0)\n| stats count as invocations, sum(cold_start) as cold_starts, avg(init_ms) as avg_init_ms, avg(duration_ms) as avg_duration_ms by function_name, bin(_time, 1h)\n| eval cold_start_pct=round(cold_starts/invocations*100, 1)\n| where cold_start_pct > 10 OR avg_init_ms > 1000\n| table _time function_name invocations cold_starts cold_start_pct avg_init_ms avg_duration_ms\n| sort -cold_start_pct",
              "m": "Enable CloudWatch Logs for Lambda (platform logs include REPORT and INIT). Optionally ingest X-Ray traces for end-to-end cold start visibility. Parse REPORT/INIT_START lines to extract init duration and invocation type. Alert when cold start rate exceeds 10% or init duration > 1s for critical functions. Consider provisioned concurrency for latency-sensitive workloads.",
              "z": "Line chart (cold start % and init duration by function over time), Table (function, cold starts, avg init ms), Single value (cold start rate).",
              "kfp": "Known slow cold starts, scheduled batch jobs, and canary invocations in lower envs you already noise-reduced.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudWatch Logs (Lambda platform logs: REPORT, INIT), X-Ray traces.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CloudWatch Logs for Lambda (platform logs include REPORT and INIT). Optionally ingest X-Ray traces for end-to-end cold start visibility. Parse REPORT/INIT_START lines to extract init duration and invocation type. Alert when cold start rate exceeds 10% or init duration > 1s for critical functions. Consider provisioned concurrency for latency-sensitive workloads.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatchlogs\" (\"REPORT RequestId\" OR \"INIT_START\")\n| eval function_name=case(isnotnull(log_group), replace(log_group, \"/aws/lambda/\", \"\"), 1=1, \"unknown\")\n| rex \"Init Duration:\\s+(?<init_ms>\\d+\\.?\\d*)\\s*ms\"\n| rex \"Duration:\\s+(?<duration_ms>\\d+\\.?\\d*)\\s*ms\"\n| eval cold_start=if(match(_raw, \"INIT_START\"), 1, 0)\n| stats count as invocations, sum(cold_start) as cold_starts, avg(init_ms) as avg_init_ms, avg(duration_ms) as avg_duration_ms by function_name, bin(_time, 1h)\n| eval cold_start_pct=round(cold_starts/invocations*100, 1)\n| where cold_start_pct > 10 OR avg_init_ms > 1000\n| table _time function_name invocations cold_starts cold_start_pct avg_init_ms avg_duration_ms\n| sort -cold_start_pct\n```\n\nUnderstanding this SPL\n\n**AWS Lambda Cold Start Monitoring** — Cold start frequency and duration impact user experience. High cold start rates or long init times cause request latency spikes and timeouts.\n\nDocumented **Data sources**: CloudWatch Logs (Lambda platform logs: REPORT, INIT), X-Ray traces. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatchlogs. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatchlogs\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **function_name** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Extracts **init_ms** with `rex` (regular expression).\n• Extracts **duration_ms** with `rex` (regular expression).\n• `eval` defines or adjusts **cold_start** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `bin` buckets **_time** into span=1h intervals so `stats` can group by hour.\n• `stats` rolls up events into metrics; results are split **by function_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cold_start_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cold_start_pct > 10 OR avg_init_ms > 1000` — typically the threshold or rule expression for this monitoring goal.\n• `table` keeps only the reporting columns: _time function_name invocations cold_starts cold_start_pct avg_init_ms avg_duration_ms\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (cold start % and init duration by function over time), Table (function, cold starts, avg init ms), Single value (cold start rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on aws lambda cold start monitoring. Cold start frequency and duration impact user experience. High cold start rates or long init times cause request latency spikes an. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.57",
              "n": "AWS ECS Task Placement Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Tasks failing to place due to resource constraints (CPU, memory, ports, attributes) cause service scaling failures and deployment blockages.",
              "t": "`Splunk_TA_aws`",
              "d": "CloudTrail (RunTask, CreateService with placement failures), ECS container instance state change events",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventSource=\"ecs.amazonaws.com\" (eventName=\"RunTask\" OR eventName=\"CreateService\")\n| spath path=responseElements.failures{}\n| mvexpand responseElements.failures{} limit=500\n| spath input=responseElements.failures{} path=reason\n| spath input=responseElements.failures{} path=arn\n| search reason=*\n| stats count by reason, requestParameters.cluster\n| sort -count",
              "m": "CloudTrail logs ECS API calls; RunTask and CreateService responses include a `failures` array when placement fails. Ingest ECS events from EventBridge for container instance state changes. Parse failure reasons (RESOURCE:MEMORY, RESOURCE:CPU, RESOURCE:PORT, attribute constraints). Alert on any placement failure. Dashboard by cluster, reason, and task definition. Remediate by adding capacity, relaxing constraints, or adjusting task definitions.",
              "z": "Table (reason, cluster, count), Bar chart (failures by reason), Timeline (placement failure events).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: CloudTrail (RunTask, CreateService with placement failures), ECS container instance state change events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCloudTrail logs ECS API calls; RunTask and CreateService responses include a `failures` array when placement fails. Ingest ECS events from EventBridge for container instance state changes. Parse failure reasons (RESOURCE:MEMORY, RESOURCE:CPU, RESOURCE:PORT, attribute constraints). Alert on any placement failure. Dashboard by cluster, reason, and task definition. Remediate by adding capacity, relaxing constraints, or adjusting task definitions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventSource=\"ecs.amazonaws.com\" (eventName=\"RunTask\" OR eventName=\"CreateService\")\n| spath path=responseElements.failures{}\n| mvexpand responseElements.failures{} limit=500\n| spath input=responseElements.failures{} path=reason\n| spath input=responseElements.failures{} path=arn\n| search reason=*\n| stats count by reason, requestParameters.cluster\n| sort -count\n```\n\nUnderstanding this SPL\n\n**AWS ECS Task Placement Failures** — Tasks failing to place due to resource constraints (CPU, memory, ports, attributes) cause service scaling failures and deployment blockages.\n\nDocumented **Data sources**: CloudTrail (RunTask, CreateService with placement failures), ECS container instance state change events. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by reason, requestParameters.cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (reason, cluster, count), Bar chart (failures by reason), Timeline (placement failure events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on aws ecs task placement failures. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.58",
              "n": "AWS Transit Gateway Attachment Health",
              "c": "high",
              "f": "advanced",
              "v": "TGW route propagation and attachment state affect cross-VPC and hybrid connectivity. Failed attachments or stale routes cause network outages.",
              "t": "`Splunk_TA_aws`",
              "d": "AWS Config (TGW attachment compliance), TGW flow logs, CloudWatch TGW metrics (BytesIn, BytesOut, PacketsIn, PacketsOut)",
              "q": "index=aws (sourcetype=\"aws:config:notification\" resourceType=\"AWS::EC2::TransitGatewayAttachment\" OR (sourcetype=\"aws:cloudwatch\" namespace=\"AWS/TransitGateway\" metric_name=\"BytesIn\"))\n| eval attachment_state=case(\n  configurationItemStatus=\"ResourceDeleted\", \"deleted\",\n  configurationItemStatus=\"ResourceNotRecorded\", \"unknown\",\n  configurationItemStatus=\"OK\", \"ok\",\n  isnotnull(configurationItemStatus), configurationItemStatus,\n  1=1, null())\n| eval resourceId=coalesce(resourceId, resource_id)\n| stats latest(attachment_state) as state, latest(Sum) as bytes_in by resourceId, bin(_time, 1h)\n| where (isnotnull(state) AND state!=\"ok\") OR (isnotnull(bytes_in) AND bytes_in=0)\n| table _time resourceId state bytes_in\n| sort -_time",
              "m": "Enable AWS Config for TGW attachments to track state changes. Ingest TGW flow logs to S3 and forward to Splunk for traffic analysis. Collect CloudWatch TGW metrics (BytesIn, BytesOut) per attachment. Alert when attachment state is not available or traffic drops to zero unexpectedly. Correlate with route table propagation events. Use for hybrid connectivity and SD-WAN monitoring.",
              "z": "Table (attachment, state, traffic), Status grid (attachment health), Line chart (bytes in/out by attachment).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: AWS Config (TGW attachment compliance), TGW flow logs, CloudWatch TGW metrics (BytesIn, BytesOut, PacketsIn, PacketsOut).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable AWS Config for TGW attachments to track state changes. Ingest TGW flow logs to S3 and forward to Splunk for traffic analysis. Collect CloudWatch TGW metrics (BytesIn, BytesOut) per attachment. Alert when attachment state is not available or traffic drops to zero unexpectedly. Correlate with route table propagation events. Use for hybrid connectivity and SD-WAN monitoring.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws (sourcetype=\"aws:config:notification\" resourceType=\"AWS::EC2::TransitGatewayAttachment\" OR (sourcetype=\"aws:cloudwatch\" namespace=\"AWS/TransitGateway\" metric_name=\"BytesIn\"))\n| eval attachment_state=case(\n  configurationItemStatus=\"ResourceDeleted\", \"deleted\",\n  configurationItemStatus=\"ResourceNotRecorded\", \"unknown\",\n  configurationItemStatus=\"OK\", \"ok\",\n  isnotnull(configurationItemStatus), configurationItemStatus,\n  1=1, null())\n| eval resourceId=coalesce(resourceId, resource_id)\n| stats latest(attachment_state) as state, latest(Sum) as bytes_in by resourceId, bin(_time, 1h)\n| where (isnotnull(state) AND state!=\"ok\") OR (isnotnull(bytes_in) AND bytes_in=0)\n| table _time resourceId state bytes_in\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**AWS Transit Gateway Attachment Health** — TGW route propagation and attachment state affect cross-VPC and hybrid connectivity. Failed attachments or stale routes cause network outages.\n\nDocumented **Data sources**: AWS Config (TGW attachment compliance), TGW flow logs, CloudWatch TGW metrics (BytesIn, BytesOut, PacketsIn, PacketsOut). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:config:notification, AWS::EC2::TransitGatewayAttachment, aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:config:notification\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines **attachment_state** — maps AWS Config status values onto a normalized state for thresholding.\n• `eval` defines **resourceId** — coalesces the Config and CloudWatch field names into one join key.\n• `bin` buckets **_time** into span=1h intervals so `stats` can group by hour.\n• `stats` rolls up events into metrics; results are split **by resourceId, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (isnotnull(state) AND state!=\"ok\") OR (isnotnull(bytes_in) AND bytes_in=0)` — typically the threshold or rule expression for this monitoring goal.\n• `table` keeps only the reporting columns: _time resourceId state bytes_in\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (attachment, state, traffic), Status grid (attachment health), Line chart (bytes in/out by attachment).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on aws transit gateway attachment health. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.59",
              "n": "S3 Suspicious Access Patterns",
              "c": "high",
              "f": "intermediate",
              "v": "Unusual ListBucket volume, access from new regions, or anonymous reads often precede data exfiltration; pattern detection reduces dwell time.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail` (s3.amazonaws.com), optional S3 server access logs `sourcetype=aws:s3:accesslogs`",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventSource=\"s3.amazonaws.com\" eventName=\"GetObject\"\n| eval geo=if(isnull(sourceIPAddress),\"unknown\",sourceIPAddress)\n| stats dc(eventName) as ops, dc(awsRegion) as regions, count by userIdentity.arn, requestParameters.bucketName\n| where regions > 3 OR count > 10000\n| sort -count",
              "m": "Baseline normal GetObject/ListBucket rates per bucket and principal. Enrich with GeoIP on `sourceIPAddress`. Alert on first-seen ASN, burst downloads, or ListBucket without matching application inventory. Correlate with GuardDuty S3 findings.",
              "z": "Table (bucket, principal, count), Map (source IP), Timeline (access spikes).",
              "kfp": "Approved changes, pre-production, and known vendor windows already named in the runbook for this s3 suspicious access patterns.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1530",
                "T1619"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail` (s3.amazonaws.com), optional S3 server access logs `sourcetype=aws:s3:accesslogs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline normal GetObject/ListBucket rates per bucket and principal. Enrich with GeoIP on `sourceIPAddress`. Alert on first-seen ASN, burst downloads, or ListBucket without matching application inventory. Correlate with GuardDuty S3 findings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventSource=\"s3.amazonaws.com\" eventName=\"GetObject\"\n| eval geo=if(isnull(sourceIPAddress),\"unknown\",sourceIPAddress)\n| stats dc(eventName) as ops, dc(awsRegion) as regions, count by userIdentity.arn, requestParameters.bucketName\n| where regions > 3 OR count > 10000\n| sort -count\n```\n\nUnderstanding this SPL\n\n**S3 Suspicious Access Patterns** — Unusual ListBucket volume, access from new regions, or anonymous reads often precede data exfiltration; pattern detection reduces dwell time.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (s3.amazonaws.com), optional S3 server access logs `sourcetype=aws:s3:accesslogs`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **geo** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn, requestParameters.bucketName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where regions > 3 OR count > 10000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**S3 Suspicious Access Patterns** — Unusual ListBucket volume, access from new regions, or anonymous reads often precede data exfiltration; pattern detection reduces dwell time.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (s3.amazonaws.com), optional S3 server access logs `sourcetype=aws:s3:accesslogs`. **App/TA** (typical add-on context): `Splunk_TA_aws`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (bucket, principal, count), Map (source IP), Timeline (access spikes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Splunk to watch s3 suspicious access patterns—unusual listbucket volume, access from new regions, or anonymous reads often precede data exfiltration; pattern detection reduces dwell time.—so the team is told early, not only after a customer or auditor notices a gap.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.60",
              "n": "Security Hub Alert Aggregation",
              "c": "high",
              "f": "intermediate",
              "v": "Security Hub rolls up Config, GuardDuty, Inspector, and partner findings; aggregating by account and severity prioritizes remediation queues.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:firehose` or `sourcetype=aws:cloudwatch:events` (Security Hub findings), EventBridge to Splunk",
              "q": "index=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"Security Hub Findings - Imported\"\n| spath path=detail.findings{}\n| mvexpand detail.findings{} limit=500\n| spath input=detail.findings{} output=sev path=Severity.Label\n| spath input=detail.findings{} output=title path=Title\n| stats count by sev, title, account\n| sort -count",
              "m": "Send Security Hub custom actions or EventBridge rules to Firehose/HEC. Normalize `Severity` and `ComplianceStatus`. Auto-ticket CRITICAL/HIGH. Deduplicate by finding ID across updates. Feed executive dashboards with counts by standard (CIS, PCI).",
              "z": "Bar chart (findings by severity), Table (title, account, count), Single value (open critical).",
              "kfp": "Suppressed or accepted findings, duplicate product integrations, and member-account lag that catches up a few minutes later.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:firehose` or `sourcetype=aws:cloudwatch:events` (Security Hub findings), EventBridge to Splunk.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSend Security Hub custom actions or EventBridge rules to Firehose/HEC. Normalize `Severity` and `ComplianceStatus`. Auto-ticket CRITICAL/HIGH. Deduplicate by finding ID across updates. Feed executive dashboards with counts by standard (CIS, PCI).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"Security Hub Findings - Imported\"\n| spath path=detail.findings{}\n| mvexpand detail.findings{} limit=500\n| spath input=detail.findings{} output=sev path=Severity.Label\n| spath input=detail.findings{} output=title path=Title\n| stats count by sev, title, account\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Security Hub Alert Aggregation** — Security Hub rolls up Config, GuardDuty, Inspector, and partner findings; aggregating by account and severity prioritizes remediation queues.\n\nDocumented **Data sources**: `sourcetype=aws:firehose` or `sourcetype=aws:cloudwatch:events` (Security Hub findings), EventBridge to Splunk. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Extracts structured paths (JSON/XML) with `spath`.\n• `stats` rolls up events into metrics; results are split **by sev, title, account** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security Hub Alert Aggregation** — Security Hub rolls up Config, GuardDuty, Inspector, and partner findings; aggregating by account and severity prioritizes remediation queues.\n\nDocumented **Data sources**: `sourcetype=aws:firehose` or `sourcetype=aws:cloudwatch:events` (Security Hub findings), EventBridge to Splunk. **App/TA** (typical add-on context): `Splunk_TA_aws`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (findings by severity), Table (title, account, count), Single value (open critical).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on security hub alert aggregation. Security Hub results pulled into one stream so a single place can own triage, not a dozen product consoles. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.61",
              "n": "Network ACL Changes",
              "c": "high",
              "f": "intermediate",
              "v": "NACL changes can open subnets to the internet or break least-privilege segmentation; they are less common than security groups and warrant explicit audit.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail` (ec2.amazonaws.com CreateNetworkAclEntry, ReplaceNetworkAclEntry, DeleteNetworkAclEntry)",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventSource=\"ec2.amazonaws.com\" (eventName=\"CreateNetworkAclEntry\" OR eventName=\"ReplaceNetworkAclEntry\" OR eventName=\"DeleteNetworkAclEntry\")\n| stats count by userIdentity.arn, requestParameters.networkAclId, eventName, awsRegion\n| sort -_time",
              "m": "Require change tickets in deployment pipeline metadata where possible. Alert on any prod NACL change. Visualize before/after rule numbers and CIDR blocks from `requestParameters`. Weekly review with network team.",
              "z": "Table (NACL, user, action), Timeline (changes), Single value (changes 24h).",
              "kfp": "Automation that refreshes the same NACL or SG rule during migration; change ticket should name the CIDRs.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1562.007"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail` (ec2.amazonaws.com CreateNetworkAclEntry, ReplaceNetworkAclEntry, DeleteNetworkAclEntry).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequire change tickets in deployment pipeline metadata where possible. Alert on any prod NACL change. Visualize before/after rule numbers and CIDR blocks from `requestParameters`. Weekly review with network team.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventSource=\"ec2.amazonaws.com\" (eventName=\"CreateNetworkAclEntry\" OR eventName=\"ReplaceNetworkAclEntry\" OR eventName=\"DeleteNetworkAclEntry\")\n| stats count by userIdentity.arn, requestParameters.networkAclId, eventName, awsRegion\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Network ACL Changes** — NACL changes can open subnets to the internet or break least-privilege segmentation; they are less common than security groups and warrant explicit audit.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (ec2.amazonaws.com CreateNetworkAclEntry, ReplaceNetworkAclEntry, DeleteNetworkAclEntry). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn, requestParameters.networkAclId, eventName, awsRegion** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network ACL Changes** — NACL changes can open subnets to the internet or break least-privilege segmentation; they are less common than security groups and warrant explicit audit.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (ec2.amazonaws.com CreateNetworkAclEntry, ReplaceNetworkAclEntry, DeleteNetworkAclEntry). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (NACL, user, action), Timeline (changes), Single value (changes 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on network acl changes. NACL changes can open subnets to the internet or break least-privilege segmentation; they are less common than security. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.62",
              "n": "RDS Performance Insights Trending",
              "c": "medium",
              "f": "advanced",
              "v": "Performance Insights exposes DB load by wait state; trending top SQL and waits guides index and instance right-sizing beyond raw CPU.",
              "t": "`Splunk_TA_aws`",
              "d": "Performance Insights API export, `sourcetype=aws:cloudwatch` (PI metrics), RDS log exports",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/RDS\" metric_name=\"DBLoad\" statistic=\"Average\"\n| timechart span=1h avg(Average) as dbload by DBInstanceIdentifier\n| streamstats window=168 global=f avg(dbload) as baseline by DBInstanceIdentifier\n| where dbload > baseline * 1.5",
              "m": "Enable Performance Insights (7–30 day retention). Export `DBLoad`, `DBLoadCPU`, `DBLoadNonCPU` via API or CloudWatch where available. Alert on sustained elevation vs weekly baseline. Join with application release times.",
              "z": "Line chart (DB load vs baseline), Table (instance, wait class if ingested), Area chart (CPU vs non-CPU load).",
              "kfp": "Large ad hoc queries, ETL, or an index build that you expect to spike DB load during the same maintenance window as the alert.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: Performance Insights API export, `sourcetype=aws:cloudwatch` (PI metrics), RDS log exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Performance Insights (7–30 day retention). Export `DBLoad`, `DBLoadCPU`, `DBLoadNonCPU` via API or CloudWatch where available. Alert on sustained elevation vs weekly baseline. Join with application release times.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/RDS\" metric_name=\"DBLoad\" statistic=\"Average\"\n| timechart span=1h avg(Average) as dbload by DBInstanceIdentifier\n| streamstats window=168 global=f avg(dbload) as baseline by DBInstanceIdentifier\n| where dbload > baseline * 1.5\n```\n\nUnderstanding this SPL\n\n**RDS Performance Insights Trending** — Performance Insights exposes DB load by wait state; trending top SQL and waits guides index and instance right-sizing beyond raw CPU.\n\nDocumented **Data sources**: Performance Insights API export, `sourcetype=aws:cloudwatch` (PI metrics), RDS log exports. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by DBInstanceIdentifier** — ideal for trending and alerting on this use case.\n• `streamstats` rolls up events into metrics; results are split **by DBInstanceIdentifier** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where dbload > baseline * 1.5` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=1h | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**RDS Performance Insights Trending** — Performance Insights exposes DB load by wait state; trending top SQL and waits guides index and instance right-sizing beyond raw CPU.\n\nDocumented **Data sources**: Performance Insights API export, `sourcetype=aws:cloudwatch` (PI metrics), RDS log exports. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (DB load vs baseline), Table (instance, wait class if ingested), Area chart (CPU vs non-CPU load).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on rds performance insights trending. How heavy the database is over time (Performance Insights) so tuning waits and right-sizing is based on data, not guesswork. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=1h | sort - agg_value",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.63",
              "n": "ECS Service Health",
              "c": "high",
              "f": "intermediate",
              "v": "Service-level running count versus desired indicates deployment failures, capacity shortfall, or health check flapping.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (AWS/ECS — CPUUtilization, MemoryUtilization), `sourcetype=aws:cloudwatch:events` (service events)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ECS\" (metric_name=\"CPUUtilization\" OR metric_name=\"MemoryUtilization\")\n| stats latest(Average) as util by ClusterName, ServiceName, metric_name\n| where util > 85",
              "m": "Ingest ECS service events from EventBridge for steady-state issues. Dashboard desired vs running from `DescribeServices` snapshots if scripted. Alert on failed deployments or service unable to reach steady state.",
              "z": "Status grid (service health), Line chart (CPU/memory by service), Table (cluster, service, failures).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (AWS/ECS — CPUUtilization, MemoryUtilization), `sourcetype=aws:cloudwatch:events` (service events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest ECS service events from EventBridge for steady-state issues. Dashboard desired vs running from `DescribeServices` snapshots if scripted. Alert on failed deployments or service unable to reach steady state.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ECS\" (metric_name=\"CPUUtilization\" OR metric_name=\"MemoryUtilization\")\n| stats latest(Average) as util by ClusterName, ServiceName, metric_name\n| where util > 85\n```\n\nUnderstanding this SPL\n\n**ECS Service Health** — Service-level running count versus desired indicates deployment failures, capacity shortfall, or health check flapping.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (AWS/ECS — CPUUtilization, MemoryUtilization), `sourcetype=aws:cloudwatch:events` (service events). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ClusterName, ServiceName, metric_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where util > 85` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (service health), Line chart (CPU/memory by service), Table (cluster, service, failures).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on ecs service health. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.64",
              "n": "EKS Control Plane Audit",
              "c": "high",
              "f": "advanced",
              "v": "Kubernetes audit logs capture who changed roles, secrets, and workloads; essential for forensics and SOC2 evidence on EKS.",
              "t": "`Splunk_TA_aws`",
              "d": "EKS control plane logs in `sourcetype=aws:cloudwatchlogs` (cluster audit), CloudTrail `eks.amazonaws.com` API",
              "q": "index=aws sourcetype=\"aws:cloudwatchlogs\" log_group=\"/aws/eks/*/cluster\"\n| search \"audit.k8s.io\" verb=\"create\" objectRef.resource=\"clusterroles\" OR objectRef.resource=\"secrets\"\n| stats count by user.username, objectRef.namespace, objectRef.name\n| sort -count",
              "m": "Enable EKS audit logging to CloudWatch Logs and subscribe to Splunk. Optionally include CloudTrail for `CreateCluster`, `AssociateIdentityProviderConfig`. Alert on cluster-admin bindings, anonymous access, or secret reads from unexpected service accounts.",
              "z": "Table (user, resource, count), Timeline (privileged API calls), Sankey (user→namespace).",
              "kfp": "Platform or GitOps that edits `clusterroles` or `secrets` during a cluster upgrade; compare with the release runbook.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1613",
                "T1609"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: EKS control plane logs in `sourcetype=aws:cloudwatchlogs` (cluster audit), CloudTrail `eks.amazonaws.com` API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable EKS audit logging to CloudWatch Logs and subscribe to Splunk. Optionally include CloudTrail for `CreateCluster`, `AssociateIdentityProviderConfig`. Alert on cluster-admin bindings, anonymous access, or secret reads from unexpected service accounts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatchlogs\" log_group=\"/aws/eks/*/cluster\"\n| search \"audit.k8s.io\" verb=\"create\" objectRef.resource=\"clusterroles\" OR objectRef.resource=\"secrets\"\n| stats count by user.username, objectRef.namespace, objectRef.name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**EKS Control Plane Audit** — Kubernetes audit logs capture who changed roles, secrets, and workloads; essential for forensics and SOC2 evidence on EKS.\n\nDocumented **Data sources**: EKS control plane logs in `sourcetype=aws:cloudwatchlogs` (cluster audit), CloudTrail `eks.amazonaws.com` API. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatchlogs. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatchlogs\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user.username, objectRef.namespace, objectRef.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**EKS Control Plane Audit** — Kubernetes audit logs capture who changed roles, secrets, and workloads; essential for forensics and SOC2 evidence on EKS.\n\nDocumented **Data sources**: EKS control plane logs in `sourcetype=aws:cloudwatchlogs` (cluster audit), CloudTrail `eks.amazonaws.com` API. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, resource, count), Timeline (privileged API calls), Sankey (user→namespace).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on eks control plane audit. Kubernetes audit logs capture who changed roles, secrets, and workloads; essential for forensics and SOC2 evidence on EK. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.65",
              "n": "GuardDuty Severity Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Prioritizing GuardDuty findings by severity and type reduces noise and speeds triage versus raw event volume.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch:guardduty`, GuardDuty S3 export",
              "q": "index=aws sourcetype=\"aws:cloudwatch:guardduty\"\n| stats count by severity, type, accountId\n| sort -count",
              "m": "Normalize severity (8–10 = high). Auto-suppress known pen-test ranges via lookup. Weekly trend of finding types. SOAR integration for HIGH and above with runbooks per `type`.",
              "z": "Bar chart (findings by type), Pie chart (severity), Table (account, type, count).",
              "kfp": "Benign port scans, red-team, authorized network scanners, and new accounts/regions before suppressions and baselines are tuned.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch:guardduty`, GuardDuty S3 export.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize severity (8–10 = high). Auto-suppress known pen-test ranges via lookup. Weekly trend of finding types. SOAR integration for HIGH and above with runbooks per `type`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch:guardduty\"\n| stats count by severity, type, accountId\n| sort -count\n```\n\nUnderstanding this SPL\n\n**GuardDuty Severity Analysis** — Prioritizing GuardDuty findings by severity and type reduces noise and speeds triage versus raw event volume.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch:guardduty`, GuardDuty S3 export. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch:guardduty. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch:guardduty\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by severity, type, accountId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GuardDuty Severity Analysis** — Prioritizing GuardDuty findings by severity and type reduces noise and speeds triage versus raw event volume.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch:guardduty`, GuardDuty S3 export. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (findings by type), Pie chart (severity), Table (account, type, count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on guardduty severity analysis. The serious GuardDuty finding types the service raises, in one place, so a small team does not miss something loud in the console. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.severity | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.66",
              "n": "AWS Config Rule Compliance Drift",
              "c": "medium",
              "f": "intermediate",
              "v": "Resources oscillating between COMPLIANT and NON_COMPLIANT indicate automation fights or manual changes—drift trends surface systemic issues.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:config:notification`, Config history snapshots",
              "q": "index=aws sourcetype=\"aws:config:notification\" configRuleList{}.complianceType=*\n| mvexpand configRuleList{} limit=500\n| spath input=configRuleList{} path=complianceType\n| spath input=configRuleList{} path=configRuleName\n| stats dc(complianceType) as state_changes by resourceId, configRuleName\n| where state_changes > 1\n| sort -state_changes",
              "m": "Ingest configuration item change streams. Track flapping rules weekly. Alert when critical rules (encryption, public access) change state more than N times per day. Root-cause with CloudTrail correlation.",
              "z": "Table (resource, rule, changes), Line chart (compliant % over time), Single value (flapping resources).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1578"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:config:notification`, Config history snapshots.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest configuration item change streams. Track flapping rules weekly. Alert when critical rules (encryption, public access) change state more than N times per day. Root-cause with CloudTrail correlation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:config:notification\" configRuleList{}.complianceType=*\n| mvexpand configRuleList{} limit=500\n| spath input=configRuleList{} path=complianceType\n| spath input=configRuleList{} path=configRuleName\n| stats dc(complianceType) as state_changes by resourceId, configRuleName\n| where state_changes > 1\n| sort -state_changes\n```\n\nUnderstanding this SPL\n\n**AWS Config Rule Compliance Drift** — Resources oscillating between COMPLIANT and NON_COMPLIANT indicate automation fights or manual changes—drift trends surface systemic issues.\n\nDocumented **Data sources**: `sourcetype=aws:config:notification`, Config history snapshots. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:config:notification. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:config:notification\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Extracts structured paths (JSON/XML) with `spath`.\n• `stats` rolls up events into metrics; results are split **by resourceId, configRuleName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where state_changes > 1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (resource, rule, changes), Line chart (compliant % over time), Single value (flapping resources).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on aws config rule compliance drift. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.67",
              "n": "SNS Delivery Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Failed SMS, email, or HTTP subscriptions break alerting and fan-out; monitoring delivery failures prevents silent notification loss.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (AWS/SNS — NumberOfNotificationsFailed), delivery status logs",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/SNS\" metric_name=\"NumberOfNotificationsFailed\"\n| timechart span=15m sum(Sum) as failed by TopicName\n| where failed > 0",
              "m": "Enable delivery status logging for HTTP/S endpoints. Ingest CloudWatch metrics per topic. Alert on any failed count sustained 15 minutes. Validate endpoint URLs and DLQ for failed deliveries if configured.",
              "z": "Line chart (failures by topic), Table (topic, failed count), Single value (total failures).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (AWS/SNS — NumberOfNotificationsFailed), delivery status logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable delivery status logging for HTTP/S endpoints. Ingest CloudWatch metrics per topic. Alert on any failed count sustained 15 minutes. Validate endpoint URLs and DLQ for failed deliveries if configured.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/SNS\" metric_name=\"NumberOfNotificationsFailed\"\n| timechart span=15m sum(Sum) as failed by TopicName\n| where failed > 0\n```\n\nUnderstanding this SPL\n\n**SNS Delivery Failures** — Failed SMS, email, or HTTP subscriptions break alerting and fan-out; monitoring delivery failures prevents silent notification loss.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (AWS/SNS — NumberOfNotificationsFailed), delivery status logs. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by TopicName** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where failed > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failures by topic), Table (topic, failed count), Single value (total failures).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on sns delivery failures. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.68",
              "n": "SQS Dead Letter Queue Growth",
              "c": "high",
              "f": "intermediate",
              "v": "DLQ depth growth means poison messages or downstream outages; rate-of-change highlights incidents faster than static thresholds.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (AWS/SQS — ApproximateNumberOfMessagesVisible on DLQ ARNs)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/SQS\" metric_name=\"ApproximateNumberOfMessagesVisible\" QueueName=\"*dlq*\"\n| sort 0 _time\n| streamstats window=12 global=f first(Average) as prev by QueueName\n| eval growth=Average-prev\n| where growth > 10\n| table _time QueueName Average growth",
              "m": "Tag DLQs consistently for `*dlq*` matching or use explicit dimension. Alert on positive growth over 1h or depth exceeding SLO. Replay with caution after root-cause.",
              "z": "Line chart (DLQ depth), Single value (growth rate), Table (queue, depth).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (AWS/SQS — ApproximateNumberOfMessagesVisible on DLQ ARNs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag DLQs consistently for `*dlq*` matching or use explicit dimension. Alert on positive growth over 1h or depth exceeding SLO. Replay with caution after root-cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/SQS\" metric_name=\"ApproximateNumberOfMessagesVisible\" QueueName=\"*dlq*\"\n| sort 0 _time\n| streamstats window=12 global=f first(Average) as prev by QueueName\n| eval growth=Average-prev\n| where growth > 10\n| table _time QueueName Average growth\n```\n\nUnderstanding this SPL\n\n**SQS Dead Letter Queue Growth** — DLQ depth growth means poison messages or downstream outages; rate-of-change highlights incidents faster than static thresholds.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (AWS/SQS — ApproximateNumberOfMessagesVisible on DLQ ARNs). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by QueueName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **growth** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where growth > 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SQS Dead Letter Queue Growth**): table _time QueueName Average growth\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (DLQ depth), Single value (growth rate), Table (queue, depth).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on sqs dead letter queue growth. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.69",
              "n": "CloudFront Error Rates by Distribution",
              "c": "high",
              "f": "intermediate",
              "v": "Origin or edge errors vary by distribution; breaking out 4xx/5xx by `DistributionId` isolates bad releases and misconfigured behaviors.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (AWS/CloudFront — 4xxErrorRate, 5xxErrorRate), real-time logs",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/CloudFront\" (metric_name=\"4xxErrorRate\" OR metric_name=\"5xxErrorRate\")\n| stats latest(Average) as err_rate by DistributionId, metric_name, bin(_time, 5m)\n| where err_rate > 1\n| sort - err_rate",
              "m": "Ingest metrics per distribution ID. Correlate spikes with deployments and origin health. Use real-time logs for URI-level detail. Alert when 5xx error rate exceeds SLO for 10 minutes.",
              "z": "Line chart (error rate by distribution), Table (distribution, metric), Map (viewer country if from logs).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (AWS/CloudFront — 4xxErrorRate, 5xxErrorRate), real-time logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest metrics per distribution ID. Correlate spikes with deployments and origin health. Use real-time logs for URI-level detail. Alert when 5xx error rate exceeds SLO for 10 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/CloudFront\" (metric_name=\"4xxErrorRate\" OR metric_name=\"5xxErrorRate\")\n| stats latest(Average) as err_rate by DistributionId, metric_name, bin(_time, 5m)\n| where err_rate > 1\n| sort - err_rate\n```\n\nUnderstanding this SPL\n\n**CloudFront Error Rates by Distribution** — Origin or edge errors vary by distribution; breaking out 4xx/5xx by `DistributionId` isolates bad releases and misconfigured behaviors.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (AWS/CloudFront — 4xxErrorRate, 5xxErrorRate), real-time logs. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by DistributionId, metric_name, bin(_time, 5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where err_rate > 1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (error rate by distribution), Table (distribution, metric), Map (viewer country if from logs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on cloudfront error rates by distribution. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.70",
              "n": "Route 53 Health Check Failover Validation",
              "c": "critical",
              "f": "intermediate",
              "v": "Failed health checks drive DNS failover; sustained failures mean user-facing outages or flapping routing policies.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (AWS/Route53 — HealthCheckStatus, ChildHealthCheckHealthyCount)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Route53\" metric_name=\"HealthCheckStatus\"\n| stats latest(Minimum) as healthy by HealthCheckId, bin(_time, 5m)\n| where healthy < 1\n| sort HealthCheckId -_time",
              "m": "Map `HealthCheckId` to application names via lookup. Alert on unhealthy state for two consecutive periods. Correlate with target (ALB, IP) metrics. Include calculator health checks for complex routing.",
              "z": "Status grid (health check × time), Table (check id, target), Timeline (failures).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (AWS/Route53 — HealthCheckStatus, ChildHealthCheckHealthyCount).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `HealthCheckId` to application names via lookup. Alert on unhealthy state for two consecutive periods. Correlate with target (ALB, IP) metrics. Include calculator health checks for complex routing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Route53\" metric_name=\"HealthCheckStatus\"\n| stats latest(Minimum) as healthy by HealthCheckId, bin(_time, 5m)\n| where healthy < 1\n| sort HealthCheckId -_time\n```\n\nUnderstanding this SPL\n\n**Route 53 Health Check Failover Validation** — Failed health checks drive DNS failover; sustained failures mean user-facing outages or flapping routing policies.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (AWS/Route53 — HealthCheckStatus, ChildHealthCheckHealthyCount). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by HealthCheckId, bin(_time, 5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where healthy < 1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (health check × time), Table (check id, target), Timeline (failures).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on route 53 health check failover validation. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.71",
              "n": "Systems Manager Patch Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Patch baselines reduce exploit exposure; instance-level compliance gaps show outdated AMIs or broken agents.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:ssm:compliance`, SSM Inventory association",
              "q": "index=aws sourcetype=\"aws:ssm:compliance\" ComplianceType=\"Patch\"\n| stats latest(status) as patch_status by resourceId, PatchSeverity\n| where patch_status!=\"Compliant\"\n| stats count by resourceId\n| sort -count",
              "m": "Schedule `AWS-RunPatchBaseline` and ingest compliance association results. Dashboard by OU and environment tag. Alert when CRITICAL severity patches are non-compliant past SLA window.",
              "z": "Table (instance, missing count), Pie chart (compliant %), Bar chart (severity).",
              "kfp": "Patch waves not finished yet, instances powered off, and CAB exceptions you already published.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Updates](https://docs.splunk.com/Documentation/CIM/latest/User/Updates)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:ssm:compliance`, SSM Inventory association.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule `AWS-RunPatchBaseline` and ingest compliance association results. Dashboard by OU and environment tag. Alert when CRITICAL severity patches are non-compliant past SLA window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:ssm:compliance\" ComplianceType=\"Patch\"\n| stats latest(status) as patch_status by resourceId, PatchSeverity\n| where patch_status!=\"Compliant\"\n| stats count by resourceId\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Systems Manager Patch Compliance** — Patch baselines reduce exploit exposure; instance-level compliance gaps show outdated AMIs or broken agents.\n\nDocumented **Data sources**: `sourcetype=aws:ssm:compliance`, SSM Inventory association. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:ssm:compliance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:ssm:compliance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resourceId, PatchSeverity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where patch_status!=\"Compliant\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by resourceId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Updates.Updates by Updates.status, Updates.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Systems Manager Patch Compliance** — Patch baselines reduce exploit exposure; instance-level compliance gaps show outdated AMIs or broken agents.\n\nDocumented **Data sources**: `sourcetype=aws:ssm:compliance`, SSM Inventory association. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Updates.Updates` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (instance, missing count), Pie chart (compliant %), Bar chart (severity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on systems manager patch compliance. Which machines are not in the patch state the business asked for, before auditors or ransomware find them first. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Updates"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Updates.Updates by Updates.status, Updates.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.72",
              "n": "Transit Gateway Route Table Attachment Health",
              "c": "high",
              "f": "advanced",
              "v": "Beyond attachment state, route propagation to TGW route tables determines reachability; blackholes show as dropped traffic or failed tests.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail` (ec2:ReplaceTransitGatewayRoute, CreateRoute), TGW route table notifications",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventSource=\"ec2.amazonaws.com\" (eventName=\"CreateTransitGatewayRoute\" OR eventName=\"DeleteTransitGatewayRoute\" OR eventName=\"ReplaceTransitGatewayRoute\")\n| stats count by userIdentity.arn, requestParameters.transitGatewayRouteTableId, eventName\n| sort -_time",
              "m": "Alert on route changes in production TGW tables. Correlate with change windows. Combine with UC-4.1.58 metrics for end-to-end path validation. Use Network Manager events if enabled.",
              "z": "Timeline (route changes), Table (route table, CIDR, action), Line chart (cross-VPC bytes with UC-4.1.58).",
              "kfp": "Verified SD-WAN, transit, or peering work that rewrites routes; align with the network change record.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail` (ec2:ReplaceTransitGatewayRoute, CreateRoute), TGW route table notifications.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on route changes in production TGW tables. Correlate with change windows. Combine with UC-4.1.58 metrics for end-to-end path validation. Use Network Manager events if enabled.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventSource=\"ec2.amazonaws.com\" (eventName=\"CreateTransitGatewayRoute\" OR eventName=\"DeleteTransitGatewayRoute\" OR eventName=\"ReplaceTransitGatewayRoute\")\n| stats count by userIdentity.arn, requestParameters.transitGatewayRouteTableId, eventName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Transit Gateway Route Table Attachment Health** — Beyond attachment state, route propagation to TGW route tables determines reachability; blackholes show as dropped traffic or failed tests.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (ec2:ReplaceTransitGatewayRoute, CreateRoute), TGW route table notifications. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn, requestParameters.transitGatewayRouteTableId, eventName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Transit Gateway Route Table Attachment Health** — Beyond attachment state, route propagation to TGW route tables determines reachability; blackholes show as dropped traffic or failed tests.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (ec2:ReplaceTransitGatewayRoute, CreateRoute), TGW route table notifications. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (route changes), Table (route table, CIDR, action), Line chart (cross-VPC bytes with UC-4.1.58).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on transit gateway route table attachment health. Beyond attachment state, route propagation to TGW route tables determines reachability; blackholes show as dropped traff. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.73",
              "n": "ELB Target Health Check Failures",
              "c": "critical",
              "f": "intermediate",
              "v": "Unhealthy targets are removed from rotation; rising unhealthy counts precede customer-facing errors.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (AWS/ApplicationELB, AWS/NetworkELB — UnHealthyHostCount, HealthyHostCount)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" (namespace=\"AWS/ApplicationELB\" OR namespace=\"AWS/NetworkELB\") metric_name=\"UnHealthyHostCount\"\n| stats latest(Maximum) as unhealthy by LoadBalancer, TargetGroup, bin(_time, 5m)\n| where unhealthy > 0\n| sort - unhealthy",
              "m": "Join with target group tags for app name. Alert when unhealthy > 0 for 5 minutes or half of targets unhealthy. Correlate with ASG events and backend application logs.",
              "z": "Line chart (unhealthy hosts), Table (TG, AZ, count), Status grid (target).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (AWS/ApplicationELB, AWS/NetworkELB — UnHealthyHostCount, HealthyHostCount).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nJoin with target group tags for app name. Alert when unhealthy > 0 for 5 minutes or half of targets unhealthy. Correlate with ASG events and backend application logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" (namespace=\"AWS/ApplicationELB\" OR namespace=\"AWS/NetworkELB\") metric_name=\"UnHealthyHostCount\"\n| stats latest(Maximum) as unhealthy by LoadBalancer, TargetGroup, bin(_time, 5m)\n| where unhealthy > 0\n| sort - unhealthy\n```\n\nUnderstanding this SPL\n\n**ELB Target Health Check Failures** — Unhealthy targets are removed from rotation; rising unhealthy counts precede customer-facing errors.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (AWS/ApplicationELB, AWS/NetworkELB — UnHealthyHostCount, HealthyHostCount). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by LoadBalancer, TargetGroup, bin(_time, 5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unhealthy > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (unhealthy hosts), Table (TG, AZ, count), Status grid (target).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on elb target health check failures. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.74",
              "n": "IAM Access Analyzer Findings",
              "c": "medium",
              "f": "intermediate",
              "v": "Access Analyzer identifies unintended external access to S3, IAM roles, KMS keys, and other resources—reducing public exposure risk.",
              "t": "`Splunk_TA_aws`",
              "d": "Access Analyzer findings export (EventBridge, Security Hub), `sourcetype=aws:cloudwatch:events`",
              "q": "index=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"Access Analyzer Finding\" detail.status=\"ACTIVE\"\n| stats count by detail.resourceType, detail.principal.awsAccountId, detail.isPublic\n| sort -count",
              "m": "Enable organization-wide analyzer. Send findings to EventBridge and Splunk. Auto-remediate public S3 where policy allows or ticket owners. Weekly review of new external access paths.",
              "z": "Table (resource type, account, public), Bar chart (findings by type), Single value (active findings).",
              "kfp": "Intentional public buckets or roles for a static site, partner data exchange, or test account that the policy still documents.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1087.004",
                "T1580"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: Access Analyzer findings export (EventBridge, Security Hub), `sourcetype=aws:cloudwatch:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable organization-wide analyzer. Send findings to EventBridge and Splunk. Auto-remediate public S3 where policy allows or ticket owners. Weekly review of new external access paths.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"Access Analyzer Finding\" detail.status=\"ACTIVE\"\n| stats count by detail.resourceType, detail.principal.awsAccountId, detail.isPublic\n| sort -count\n```\n\nUnderstanding this SPL\n\n**IAM Access Analyzer Findings** — Access Analyzer identifies unintended external access to S3, IAM roles, KMS keys, and other resources—reducing public exposure risk.\n\nDocumented **Data sources**: Access Analyzer findings export (EventBridge, Security Hub), `sourcetype=aws:cloudwatch:events`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by detail.resourceType, detail.principal.awsAccountId, detail.isPublic** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (resource type, account, public), Bar chart (findings by type), Single value (active findings).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iam access analyzer findings. Unexpected paths for data or access that the analyzer found so the team can close them before a stranger stumbles in. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.75",
              "n": "AWS Backup Job Status",
              "c": "critical",
              "f": "intermediate",
              "v": "Centralized backup vault jobs must complete on schedule; failed jobs leave RPO gaps across EC2, EFS, and databases.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch:events` (Backup Job State Change), Backup notifications",
              "q": "index=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"Backup Job State Change\"\n| eval ok=if(detail.state=\"COMPLETED\",1,0)\n| stats count(eval(ok=0)) as failed, count as total by detail.backupVaultArn, detail.resourceArn\n| where failed>0",
              "m": "Parse job states COMPLETED, FAILED, EXPIRED. Alert on FAILED. Track partial completion for large resources. Cross-check with UC-4.4.29 restore drills for end-to-end assurance.",
              "z": "Table (vault, resource, status), Timeline (job outcomes), Single value (failed jobs 24h).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch:events` (Backup Job State Change), Backup notifications.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse job states COMPLETED, FAILED, EXPIRED. Alert on FAILED. Track partial completion for large resources. Cross-check with UC-4.4.29 restore drills for end-to-end assurance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"Backup Job State Change\"\n| eval ok=if(detail.state=\"COMPLETED\",1,0)\n| stats count(eval(ok=0)) as failed, count as total by detail.backupVaultArn, detail.resourceArn\n| where failed>0\n```\n\nUnderstanding this SPL\n\n**AWS Backup Job Status** — Centralized backup vault jobs must complete on schedule; failed jobs leave RPO gaps across EC2, EFS, and databases.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch:events` (Backup Job State Change), Backup notifications. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by detail.backupVaultArn, detail.resourceArn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failed>0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (vault, resource, status), Timeline (job outcomes), Single value (failed jobs 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on aws backup job status. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.76",
              "n": "Lambda Layer Version Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Outdated layers carry vulnerable dependencies; enforcing approved layer ARNs avoids shadow IT libraries in functions.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudtrail` (lambda:GetFunction, PublishLayerVersion), Config custom rule output",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventSource=\"lambda.amazonaws.com\" eventName=\"UpdateFunctionConfiguration\"\n| spath path=requestParameters.layers{}\n| mvexpand requestParameters.layers{} limit=200\n| eval layer_arn=requestParameters.layers{}\n| lookup approved_lambda_layers layer_arn OUTPUT approved\n| where isnull(approved)\n| stats count by userIdentity.arn, requestParameters.functionName, layer_arn",
              "m": "Maintain CSV lookup of approved layer version ARNs. Alert on attach of unapproved layer or version drift weekly scan via `ListFunctions`. Integrate with CI/CD to block deploys pre-merge.",
              "z": "Table (function, layer, user), Bar chart (non-compliant functions), Timeline (changes).",
              "kfp": "CI/CD and hotfix deploys that swap Lambda layers; compare layer ARNs to the service owner before flagging as drift.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail` (lambda:GetFunction, PublishLayerVersion), Config custom rule output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain CSV lookup of approved layer version ARNs. Alert on attach of unapproved layer or version drift weekly scan via `ListFunctions`. Integrate with CI/CD to block deploys pre-merge.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventSource=\"lambda.amazonaws.com\" eventName=\"UpdateFunctionConfiguration\"\n| spath path=requestParameters.layers{}\n| mvexpand requestParameters.layers{} limit=200\n| eval layer_arn=requestParameters.layers{}\n| lookup approved_lambda_layers layer_arn OUTPUT approved\n| where isnull(approved)\n| stats count by userIdentity.arn, requestParameters.functionName, layer_arn\n```\n\nUnderstanding this SPL\n\n**Lambda Layer Version Compliance** — Outdated layers carry vulnerable dependencies; enforcing approved layer ARNs avoids shadow IT libraries in functions.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (lambda:GetFunction, PublishLayerVersion), Config custom rule output. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• `eval` defines or adjusts **layer_arn** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn, requestParameters.functionName, layer_arn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (function, layer, user), Bar chart (non-compliant functions), Timeline (changes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on lambda layer version compliance. Outdated layers carry vulnerable dependencies; enforcing approved layer ARNs avoids shadow IT libraries in functions. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.1.77",
              "n": "AWS Fargate Task Health",
              "c": "high",
              "f": "intermediate",
              "v": "Fargate tasks are the unit of scale; tracking stopped tasks and resource limits surfaces platform issues before services miss SLAs.",
              "t": "`Splunk_TA_aws` (CloudWatch Logs/Metrics)",
              "d": "`sourcetype=aws:cloudwatch:metric` or `sourcetype=aws:cloudwatchlogs`",
              "q": "index=cloud sourcetype=\"aws:cloudwatch:metric\" Namespace=\"AWS/ECS\" MetricName=\"CPUUtilization\"\n| stats avg(Average) as cpu_avg, max(Maximum) as cpu_max by ServiceName, ClusterName\n| where cpu_max>90\n| sort -cpu_max",
              "m": "Enable CloudWatch Container Insights for ECS on Fargate and pull metrics via `Splunk_TA_aws` CloudWatch metric input. Ship task and service logs to Splunk (FireLens, Lambda, or direct subscription) and run a companion search on `sourcetype=aws:cloudwatchlogs` for `Task stopped` / error patterns. Map dimensions `ClusterName`, `ServiceName`, `TaskId`. Alert on sustained high CPU/memory, task stop reasons, and log error bursts.",
              "z": "Time chart (CPU/memory by service), Table (stopped tasks with reason), Single value (running task count).",
              "kfp": "Planned deploys, game-day tests, and vendor maintenance that you expect to move CPU, error, or lag metrics; compare with the change window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws` (CloudWatch Logs/Metrics).\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch:metric` or `sourcetype=aws:cloudwatchlogs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CloudWatch Container Insights for ECS on Fargate and pull metrics via `Splunk_TA_aws` CloudWatch metric input. Ship task and service logs to Splunk (FireLens, Lambda, or direct subscription) and run a companion search on `sourcetype=aws:cloudwatchlogs` for `Task stopped` / error patterns. Map dimensions `ClusterName`, `ServiceName`, `TaskId`. Alert on sustained high CPU/memory, task stop reasons, and log error bursts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"aws:cloudwatch:metric\" Namespace=\"AWS/ECS\" MetricName=\"CPUUtilization\"\n| stats avg(Average) as cpu_avg, max(Maximum) as cpu_max by ServiceName, ClusterName\n| where cpu_max>90\n| sort -cpu_max\n```\n\nUnderstanding this SPL\n\n**AWS Fargate Task Health** — Fargate tasks are the unit of scale; tracking stopped tasks and resource limits surfaces platform issues before services miss SLAs.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch:metric` or `sourcetype=aws:cloudwatchlogs`. **App/TA** (typical add-on context): `Splunk_TA_aws` (CloudWatch Logs/Metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:cloudwatch:metric. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"aws:cloudwatch:metric\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ServiceName, ClusterName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cpu_max>90` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (CPU/memory by service), Table (stopped tasks with reason), Single value (running task count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on aws fargate task health. The key performance numbers the cloud already exposes so the team can plan capacity and notice problems before user-visible outages. We want the right people to notice in time, not only when a customer or an auditor stumbles on it first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.2,
          "qd": {
            "gold": 2,
            "silver": 1,
            "bronze": 74,
            "none": 0
          }
        },
        {
          "i": "4.2",
          "n": "Microsoft Azure",
          "u": [
            {
              "i": "4.2.1",
              "n": "Azure Activity Log Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Activity Log captures all control plane operations across Azure subscriptions. Essential audit trail for resource management and compliance.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:audit`, Azure Activity Log via Event Hub",
              "q": "index=azure sourcetype=\"mscs:azure:audit\" operationName.value=\"*delete*\" OR operationName.value=\"*write*\"\n| stats count by caller, operationName.value, resourceGroupName, status.value\n| sort -count",
              "m": "Configure Azure Event Hub to receive Activity Log events. Set up Splunk_TA_microsoft-cloudservices with Event Hub input (connection string, consumer group). Alert on critical operations (resource deletions, policy changes).",
              "z": "Table (caller, operation, resource, status), Timeline, Bar chart by operation.",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:audit`, Azure Activity Log via Event Hub.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Azure Event Hub to receive Activity Log events. Set up Splunk_TA_microsoft-cloudservices with Event Hub input (connection string, consumer group). Alert on critical operations (resource deletions, policy changes).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:audit\" operationName.value=\"*delete*\" OR operationName.value=\"*write*\"\n| stats count by caller, operationName.value, resourceGroupName, status.value\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Azure Activity Log Monitoring** — Activity Log captures all control plane operations across Azure subscriptions. Essential audit trail for resource management and compliance.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:audit`, Azure Activity Log via Event Hub. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by caller, operationName.value, resourceGroupName, status.value** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure Activity Log Monitoring** — Activity Log captures all control plane operations across Azure subscriptions. Essential audit trail for resource management and compliance.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:audit`, Azure Activity Log via Event Hub. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (caller, operation, resource, status), Timeline, Bar chart by operation.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Activity Log captures all control plane operations across Azure subscriptions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.2",
              "n": "Entra ID Sign-In Anomalies",
              "c": "critical",
              "f": "intermediate",
              "v": "Risky sign-ins include impossible travel, unfamiliar locations, and anonymous IP usage. Primary detection layer for account compromise.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:signinlog`, Entra ID sign-in logs",
              "q": "index=azure sourcetype=\"mscs:azure:signinlog\" riskLevelDuringSignIn!=\"none\"\n| table _time userPrincipalName riskLevelDuringSignIn riskState ipAddress location.city location.countryOrRegion\n| sort -_time",
              "m": "Forward Entra ID sign-in logs via Event Hub or direct API. Alert on riskLevelDuringSignIn = high or medium. Correlate with conditional access policy results.",
              "z": "Table (user, risk level, location, IP), Map (sign-in locations), Timeline, Bar chart by risk type.",
              "kfp": "Some risky scores come from new devices, working from home, or a partner logging in the same way we do. We do not want every medium flag to page us; we pair higher scores with new locations, odd hours, or multiple clouds before we act.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1078.004",
                "T1110"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:signinlog`, Entra ID sign-in logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Entra ID sign-in logs via Event Hub or direct API. Alert on riskLevelDuringSignIn = high or medium. Correlate with conditional access policy results.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:signinlog\" riskLevelDuringSignIn!=\"none\"\n| table _time userPrincipalName riskLevelDuringSignIn riskState ipAddress location.city location.countryOrRegion\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Entra ID Sign-In Anomalies** — Risky sign-ins include impossible travel, unfamiliar locations, and anonymous IP usage. Primary detection layer for account compromise.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:signinlog`, Entra ID sign-in logs. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:signinlog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:signinlog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Entra ID Sign-In Anomalies**): table _time userPrincipalName riskLevelDuringSignIn riskState ipAddress location.city location.countryOrRegion\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, risk level, location, IP), Map (sign-in locations), Timeline, Bar chart by risk type.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Risky sign-ins include impossible travel, unfamiliar locations, and anonymous IP usage.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.3",
              "n": "Entra ID Privilege Escalation",
              "c": "critical",
              "f": "intermediate",
              "v": "Privileged role assignments (Global Admin, Privileged Role Admin) grant extreme power. Unauthorized assignments mean full tenant compromise.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:auditlog`, Entra ID audit logs",
              "q": "index=azure sourcetype=\"mscs:azure:auditlog\" activityDisplayName=\"Add member to role\"\n| spath output=role path=targetResources{}.modifiedProperties{}.newValue\n| table _time initiatedBy.user.userPrincipalName targetResources{}.userPrincipalName role\n| sort -_time",
              "m": "Forward Entra ID audit logs. Create critical alerts on role assignments for Global Administrator, Privileged Role Administrator, and Exchange Administrator. Correlate with PIM activation events.",
              "z": "Events list (critical), Table (who assigned what to whom), Timeline.",
              "kfp": "Some risky scores come from new devices, working from home, or a partner logging in the same way we do. We do not want every medium flag to page us; we pair higher scores with new locations, odd hours, or multiple clouds before we act.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098.003",
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:auditlog`, Entra ID audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Entra ID audit logs. Create critical alerts on role assignments for Global Administrator, Privileged Role Administrator, and Exchange Administrator. Correlate with PIM activation events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:auditlog\" activityDisplayName=\"Add member to role\"\n| spath output=role path=targetResources{}.modifiedProperties{}.newValue\n| table _time initiatedBy.user.userPrincipalName targetResources{}.userPrincipalName role\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Entra ID Privilege Escalation** — Privileged role assignments (Global Admin, Privileged Role Admin) grant extreme power. Unauthorized assignments mean full tenant compromise.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:auditlog`, Entra ID audit logs. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:auditlog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:auditlog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Pipeline stage (see **Entra ID Privilege Escalation**): table _time initiatedBy.user.userPrincipalName targetResources{}.userPrincipalName role\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Entra ID Privilege Escalation** — Privileged role assignments (Global Admin, Privileged Role Admin) grant extreme power. Unauthorized assignments mean full tenant compromise.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:auditlog`, Entra ID audit logs. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events list (critical), Table (who assigned what to whom), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Privileged role assignments (Global Admin, Privileged Role Admin) grant extreme power.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.4",
              "n": "NSG Flow Log Analysis",
              "c": "high",
              "f": "beginner",
              "v": "NSG Flow Logs provide Azure network-level visibility. Detects blocked traffic, anomalous patterns, and lateral movement within VNets.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:nsgflowlog`",
              "q": "index=azure sourcetype=\"mscs:azure:nsgflowlog\" flowState=\"D\"\n| stats count by src, dest, dest_port, protocol\n| sort -count | head 20",
              "m": "Enable NSG Flow Logs (Version 2) on all NSGs. Send to a storage account. Ingest via Splunk_TA_microsoft-cloudservices. Create dashboards for denied traffic and top talkers.",
              "z": "Table (top denied flows), Sankey diagram, Timechart, Map.",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:nsgflowlog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable NSG Flow Logs (Version 2) on all NSGs. Send to a storage account. Ingest via Splunk_TA_microsoft-cloudservices. Create dashboards for denied traffic and top talkers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:nsgflowlog\" flowState=\"D\"\n| stats count by src, dest, dest_port, protocol\n| sort -count | head 20\n```\n\nUnderstanding this SPL\n\n**NSG Flow Log Analysis** — NSG Flow Logs provide Azure network-level visibility. Detects blocked traffic, anomalous patterns, and lateral movement within VNets.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:nsgflowlog`. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:nsgflowlog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:nsgflowlog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port, protocol** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top denied flows), Sankey diagram, Timechart, Map.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "NSG Flow Logs provide Azure network-level visibility.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.5",
              "n": "Azure VM Performance",
              "c": "medium",
              "f": "beginner",
              "v": "Azure Monitor metrics provide VM performance data without agents. Essential for capacity planning and correlating with application issues.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:metrics`",
              "q": "index=azure sourcetype=\"mscs:azure:metrics\" metricName=\"Percentage CPU\"\n| timechart span=1h avg(average) as avg_cpu by resourceId\n| where avg_cpu > 80",
              "m": "Configure Azure Monitor metrics collection in the Splunk TA. Collect CPU, memory, disk, and network metrics. Alert on sustained high utilization.",
              "z": "Line chart per VM, Heatmap, Gauge.",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Azure Monitor metrics collection in the Splunk TA. Collect CPU, memory, disk, and network metrics. Alert on sustained high utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:metrics\" metricName=\"Percentage CPU\"\n| timechart span=1h avg(average) as avg_cpu by resourceId\n| where avg_cpu > 80\n```\n\nUnderstanding this SPL\n\n**Azure VM Performance** — Azure Monitor metrics provide VM performance data without agents. Essential for capacity planning and correlating with application issues.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:metrics`. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by resourceId** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_cpu > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart per VM, Heatmap, Gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Azure Monitor metrics provide VM performance data without agents.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.6",
              "n": "Azure SQL Performance",
              "c": "high",
              "f": "beginner",
              "v": "DTU/vCore exhaustion causes query throttling. Deadlocks and long-running queries impact application performance directly.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:diagnostics` (SQL diagnostics)",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"SQLInsights\" OR Category=\"Deadlocks\"\n| stats count by database_name, Category\n| sort -count",
              "m": "Enable Azure SQL diagnostic logging to Event Hub. Collect SQL Insights, Deadlocks, and QueryStoreRuntimeStatistics categories. Alert on DTU >90%, deadlock events, and query duration outliers.",
              "z": "Line chart (DTU usage), Table (deadlocks), Bar chart (top slow queries).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:diagnostics` (SQL diagnostics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Azure SQL diagnostic logging to Event Hub. Collect SQL Insights, Deadlocks, and QueryStoreRuntimeStatistics categories. Alert on DTU >90%, deadlock events, and query duration outliers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"SQLInsights\" OR Category=\"Deadlocks\"\n| stats count by database_name, Category\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Azure SQL Performance** — DTU/vCore exhaustion causes query throttling. Deadlocks and long-running queries impact application performance directly.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:diagnostics` (SQL diagnostics). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by database_name, Category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (DTU usage), Table (deadlocks), Bar chart (top slow queries).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "DTU/vCore exhaustion causes query throttling.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.7",
              "n": "AKS Cluster Health",
              "c": "high",
              "f": "intermediate",
              "v": "AKS cluster health monitoring ensures Kubernetes workloads are running reliably on Azure's managed platform.",
              "t": "`Splunk_TA_microsoft-cloudservices`, Splunk OTel Collector",
              "d": "AKS diagnostics, kube-state-metrics",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"kube-apiserver\" level=\"Error\"\n| stats count by host, message\n| sort -count",
              "m": "Enable AKS diagnostic logging to Event Hub (kube-apiserver, kube-controller-manager, kube-scheduler, kube-audit). Deploy OTel Collector in the AKS cluster for deeper K8s-level monitoring (see Category 3.2).",
              "z": "Status panel, Error timeline, Table.",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`, Splunk OTel Collector.\n• Ensure the following data sources are available: AKS diagnostics, kube-state-metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable AKS diagnostic logging to Event Hub (kube-apiserver, kube-controller-manager, kube-scheduler, kube-audit). Deploy OTel Collector in the AKS cluster for deeper K8s-level monitoring (see Category 3.2).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"kube-apiserver\" level=\"Error\"\n| stats count by host, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**AKS Cluster Health** — AKS cluster health monitoring ensures Kubernetes workloads are running reliably on Azure's managed platform.\n\nDocumented **Data sources**: AKS diagnostics, kube-state-metrics. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`, Splunk OTel Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status panel, Error timeline, Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Clusters cluster health monitoring ensures Kubernetes workloads are running reliably on Azure's managed platform.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365",
                "opentelemetry"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.8",
              "n": "Azure Key Vault Access Audit",
              "c": "high",
              "f": "beginner",
              "v": "Key Vault stores secrets, keys, and certificates. Unauthorized or anomalous access could indicate credential theft or data breach preparation.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:diagnostics` (Key Vault diagnostics)",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"AuditEvent\" ResourceType=\"VAULTS\"\n| stats count by identity.claim.upn, operationName, ResultType\n| where ResultType!=\"Success\"\n| sort -count",
              "m": "Enable Key Vault diagnostic logging. Monitor all access operations. Alert on failed access attempts and unusual access patterns (new principals accessing secrets).",
              "z": "Table (user, operation, result), Timeline, Bar chart by operation.",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1078.004",
                "T1530"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:diagnostics` (Key Vault diagnostics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Key Vault diagnostic logging. Monitor all access operations. Alert on failed access attempts and unusual access patterns (new principals accessing secrets).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"AuditEvent\" ResourceType=\"VAULTS\"\n| stats count by identity.claim.upn, operationName, ResultType\n| where ResultType!=\"Success\"\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Azure Key Vault Access Audit** — Key Vault stores secrets, keys, and certificates. Unauthorized or anomalous access could indicate credential theft or data breach preparation.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:diagnostics` (Key Vault diagnostics). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics, VAULTS. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by identity.claim.upn, operationName, ResultType** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ResultType!=\"Success\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, operation, result), Timeline, Bar chart by operation.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Key Vault stores secrets, keys, and certificates.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.9",
              "n": "Defender for Cloud Alerts",
              "c": "critical",
              "f": "beginner",
              "v": "Microsoft Defender provides threat detection across Azure resources. Centralizing in Splunk enables cross-platform security correlation.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Defender alerts via Event Hub",
              "q": "index=azure sourcetype=\"mscs:azure:defender\" severity=\"High\" OR severity=\"Critical\"\n| table _time alertDisplayName severity resourceIdentifiers{} description\n| sort -_time",
              "m": "Configure Defender for Cloud to export alerts to Event Hub. Ingest via Splunk TA. Alert on High and Critical severity findings.",
              "z": "Table by severity, Bar chart (alert types), Timeline, Single value (critical count).",
              "kfp": "A fresh scanner run or a vendor re-rate can resurface a finding we already know about. We deduplicate on resource and time and we only escalate when the exposure is new or the severity stepped up.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1530",
                "T1619"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Defender alerts via Event Hub.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Defender for Cloud to export alerts to Event Hub. Ingest via Splunk TA. Alert on High and Critical severity findings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:defender\" severity=\"High\" OR severity=\"Critical\"\n| table _time alertDisplayName severity resourceIdentifiers{} description\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Defender for Cloud Alerts** — Microsoft Defender provides threat detection across Azure resources. Centralizing in Splunk enables cross-platform security correlation.\n\nDocumented **Data sources**: Defender alerts via Event Hub. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:defender. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:defender\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Defender for Cloud Alerts**): table _time alertDisplayName severity resourceIdentifiers{} description\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table by severity, Bar chart (alert types), Timeline, Single value (critical count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Microsoft Defender provides threat detection across Azure resources.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "4.2.10",
              "n": "Storage Account Access Anomalies",
              "c": "medium",
              "f": "intermediate",
              "v": "Unusual storage access patterns may indicate data exfiltration or compromised service principals accessing sensitive data.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Storage analytics logs via Event Hub",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"StorageRead\" OR Category=\"StorageWrite\"\n| stats count by callerIpAddress, accountName, operationName\n| eventstats avg(count) as avg_ops, stdev(count) as stdev_ops\n| where count > avg_ops + (2 * stdev_ops)",
              "m": "Enable storage diagnostic logging. Baseline normal access patterns. Alert on volumetric anomalies (unusual number of reads/writes) or new source IPs.",
              "z": "Table (IP, account, operations), Line chart (access over time), Map.",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1530",
                "T1619"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Storage analytics logs via Event Hub.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable storage diagnostic logging. Baseline normal access patterns. Alert on volumetric anomalies (unusual number of reads/writes) or new source IPs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"StorageRead\" OR Category=\"StorageWrite\"\n| stats count by callerIpAddress, accountName, operationName\n| eventstats avg(count) as avg_ops, stdev(count) as stdev_ops\n| where count > avg_ops + (2 * stdev_ops)\n```\n\nUnderstanding this SPL\n\n**Storage Account Access Anomalies** — Unusual storage access patterns may indicate data exfiltration or compromised service principals accessing sensitive data.\n\nDocumented **Data sources**: Storage analytics logs via Event Hub. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by callerIpAddress, accountName, operationName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where count > avg_ops + (2 * stdev_ops)` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (IP, account, operations), Line chart (access over time), Map.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Unusual storage access patterns may indicate data exfiltration or compromised service principals accessing sensitive data.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.11",
              "n": "Resource Health Events",
              "c": "high",
              "f": "intermediate",
              "v": "Azure service health impacts your resources directly. Knowing when Azure itself is having problems prevents wasted troubleshooting time.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Azure Resource Health via Activity Log",
              "q": "index=azure sourcetype=\"mscs:azure:audit\" category.value=\"ResourceHealth\"\n| table _time resourceGroupName resourceType status.value properties.cause properties.currentHealthStatus\n| sort -_time",
              "m": "Resource Health events flow through the Activity Log. Monitor for Unavailable and Degraded statuses. Correlate with your application health metrics to distinguish Azure platform issues from your own problems.",
              "z": "Status panel per resource type, Table, Timeline.",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Azure Resource Health via Activity Log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nResource Health events flow through the Activity Log. Monitor for Unavailable and Degraded statuses. Correlate with your application health metrics to distinguish Azure platform issues from your own problems.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:audit\" category.value=\"ResourceHealth\"\n| table _time resourceGroupName resourceType status.value properties.cause properties.currentHealthStatus\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Resource Health Events** — Azure service health impacts your resources directly. Knowing when Azure itself is having problems prevents wasted troubleshooting time.\n\nDocumented **Data sources**: Azure Resource Health via Activity Log. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Resource Health Events**): table _time resourceGroupName resourceType status.value properties.cause properties.currentHealthStatus\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status panel per resource type, Table, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Azure service health impacts your resources directly.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.12",
              "n": "Cost Management Alerts",
              "c": "low",
              "f": "intermediate",
              "v": "Azure cost monitoring prevents budget overruns. Tracking spend by resource group/team enables chargeback and anomaly detection.",
              "t": "`Splunk_TA_microsoft-cloudservices`, Azure Cost Management export",
              "d": "Azure Cost Management data (exported to storage)",
              "q": "index=azure sourcetype=\"azure:costmanagement\"\n| timechart span=1d sum(CostInBillingCurrency) as daily_cost by ResourceGroup\n| eventstats avg(daily_cost) as avg_cost by ResourceGroup\n| where daily_cost > avg_cost * 1.5",
              "m": "Configure Azure Cost Management to export daily usage data to a storage account. Ingest in Splunk. Create budget alerts when spending approaches thresholds.",
              "z": "Stacked area chart (spend by RG), Line chart with budget overlay, Table.",
              "kfp": "Cloud billing and cost files often arrive a day or more late, include credits and refunds, and use tags that do not line up with app names, so a spike is not always a real overrun. We narrow alerts to sustained trends and we exclude test or sandbox projects when we can.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`, Azure Cost Management export.\n• Ensure the following data sources are available: Azure Cost Management data (exported to storage).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Azure Cost Management to export daily usage data to a storage account. Ingest in Splunk. Create budget alerts when spending approaches thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:costmanagement\"\n| timechart span=1d sum(CostInBillingCurrency) as daily_cost by ResourceGroup\n| eventstats avg(daily_cost) as avg_cost by ResourceGroup\n| where daily_cost > avg_cost * 1.5\n```\n\nUnderstanding this SPL\n\n**Cost Management Alerts** — Azure cost monitoring prevents budget overruns. Tracking spend by resource group/team enables chargeback and anomaly detection.\n\nDocumented **Data sources**: Azure Cost Management data (exported to storage). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`, Azure Cost Management export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:costmanagement. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:costmanagement\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by ResourceGroup** — ideal for trending and alerting on this use case.\n• `eventstats` rolls up events into metrics; results are split **by ResourceGroup** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where daily_cost > avg_cost * 1.5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area chart (spend by RG), Line chart with budget overlay, Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Azure cost monitoring prevents budget overruns.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.13",
              "n": "App Service (Web App) HTTP 5xx and Slot Swap",
              "c": "high",
              "f": "beginner",
              "v": "App Service 5xx and failed slot swaps impact user experience and deployment safety. Monitoring supports reliability and blue-green deployment.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Azure Monitor metrics (Http5xx, ResponseTime), Activity Log (Slot swap)",
              "q": "index=azure sourcetype=\"mscs:azure:metrics\" metricName=\"Http5xx\" namespace=\"Microsoft.Web/sites\"\n| where average > 0\n| timechart span=5m sum(total) by resourceId",
              "m": "Collect App Service metrics. Alert on Http5xx rate >1%. Monitor slot swap operations in Activity Log; alert on swap failure. Track response time and memory usage for capacity.",
              "z": "Line chart (5xx, response time by app), Table (app, 5xx count), Timeline (slot swaps).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Azure Monitor metrics (Http5xx, ResponseTime), Activity Log (Slot swap).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect App Service metrics. Alert on Http5xx rate >1%. Monitor slot swap operations in Activity Log; alert on swap failure. Track response time and memory usage for capacity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:metrics\" metricName=\"Http5xx\" namespace=\"Microsoft.Web/sites\"\n| where average > 0\n| timechart span=5m sum(total) by resourceId\n```\n\nUnderstanding this SPL\n\n**App Service (Web App) HTTP 5xx and Slot Swap** — App Service 5xx and failed slot swaps impact user experience and deployment safety. Monitoring supports reliability and blue-green deployment.\n\nDocumented **Data sources**: Azure Monitor metrics (Http5xx, ResponseTime), Activity Log (Slot swap). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where average > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by resourceId** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (5xx, response time by app), Table (app, 5xx count), Timeline (slot swaps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "App Service 5xx and failed slot swaps impact user experience and deployment safety.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.14",
              "n": "Azure Load Balancer Health Probe Failures",
              "c": "critical",
              "f": "beginner",
              "v": "Probe failures mean backends are unhealthy; traffic stops flowing to those instances. Critical for load balancer and application availability.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Azure Monitor metrics (ProbeHealthStatus, SnatConnectionCount)",
              "q": "index=azure sourcetype=\"mscs:azure:metrics\" metricName=\"ProbeHealthStatus\" namespace=\"Microsoft.Network/loadBalancers\"\n| where average == 0\n| table _time resourceId backendPoolName average",
              "m": "ProbeHealthStatus 1 = healthy, 0 = unhealthy. Alert when any backend pool shows unhealthy. Correlate with VM availability and application logs. Monitor SNAT exhaustion (SnatConnectionCount) for outbound issues.",
              "z": "Status panel (probe health), Table (LB, backend, status), Timeline.",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Azure Monitor metrics (ProbeHealthStatus, SnatConnectionCount).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nProbeHealthStatus 1 = healthy, 0 = unhealthy. Alert when any backend pool shows unhealthy. Correlate with VM availability and application logs. Monitor SNAT exhaustion (SnatConnectionCount) for outbound issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:metrics\" metricName=\"ProbeHealthStatus\" namespace=\"Microsoft.Network/loadBalancers\"\n| where average == 0\n| table _time resourceId backendPoolName average\n```\n\nUnderstanding this SPL\n\n**Azure Load Balancer Health Probe Failures** — Probe failures mean backends are unhealthy; traffic stops flowing to those instances. Critical for load balancer and application availability.\n\nDocumented **Data sources**: Azure Monitor metrics (ProbeHealthStatus, SnatConnectionCount). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where average == 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Azure Load Balancer Health Probe Failures**): table _time resourceId backendPoolName average\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status panel (probe health), Table (LB, backend, status), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Probe failures mean backends are unhealthy; traffic stops flowing to those instances.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.15",
              "n": "Azure Backup Job Failures",
              "c": "critical",
              "f": "beginner",
              "v": "Backup job failures break recovery guarantees. Detecting failures ensures backups are fixed before they are needed for restore.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Azure Monitor Activity Log (Backup job events), or Backup vault diagnostic logs",
              "q": "index=azure sourcetype=\"mscs:azure:audit\" operationName.value=\"*Backup*\" status.value=\"Failed\"\n| table _time caller operationName.value resourceGroupName properties\n| sort -_time",
              "m": "Enable Activity Log or diagnostic settings for Recovery Services vault. Ingest backup job completion events. Alert on status=Failed. Dashboard job success rate by vault and policy.",
              "z": "Table (job, vault, status, time), Timeline (failed jobs), Single value (failed last 24h).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Azure Monitor Activity Log (Backup job events), or Backup vault diagnostic logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Activity Log or diagnostic settings for Recovery Services vault. Ingest backup job completion events. Alert on status=Failed. Dashboard job success rate by vault and policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:audit\" operationName.value=\"*Backup*\" status.value=\"Failed\"\n| table _time caller operationName.value resourceGroupName properties\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Azure Backup Job Failures** — Backup job failures break recovery guarantees. Detecting failures ensures backups are fixed before they are needed for restore.\n\nDocumented **Data sources**: Azure Monitor Activity Log (Backup job events), or Backup vault diagnostic logs. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Azure Backup Job Failures**): table _time caller operationName.value resourceGroupName properties\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (job, vault, status, time), Timeline (failed jobs), Single value (failed last 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Backup job failures break recovery guarantees.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.16",
              "n": "Logic Apps Run Failures",
              "c": "high",
              "f": "beginner",
              "v": "Logic App run failures break automation and integrations. Tracking failures and retries supports debugging and SLA.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Logic Apps workflow run history via diagnostic logs or Azure Monitor",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" ResourceType=\"MICROSOFT.LOGIC/WORKFLOWS\" status=\"Failed\"\n| stats count by resourceId runId\n| sort -count",
              "m": "Enable diagnostic logging for Logic Apps to Event Hub or Log Analytics. Ingest in Splunk. Alert when run status=Failed. Track retry patterns and correlate with connector/API errors.",
              "z": "Line chart (runs, failures by workflow), Table (workflow, run, status), Single value (failure rate).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Logic Apps workflow run history via diagnostic logs or Azure Monitor.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable diagnostic logging for Logic Apps to Event Hub or Log Analytics. Ingest in Splunk. Alert when run status=Failed. Track retry patterns and correlate with connector/API errors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" ResourceType=\"MICROSOFT.LOGIC/WORKFLOWS\" status=\"Failed\"\n| stats count by resourceId runId\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Logic Apps Run Failures** — Logic App run failures break automation and integrations. Tracking failures and retries supports debugging and SLA.\n\nDocumented **Data sources**: Logic Apps workflow run history via diagnostic logs or Azure Monitor. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics, MICROSOFT.LOGIC/WORKFLOWS. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resourceId runId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (runs, failures by workflow), Table (workflow, run, status), Single value (failure rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Logic App run failures break automation and integrations.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.17",
              "n": "Service Bus Queue Message Count and Dead Letter",
              "c": "high",
              "f": "beginner",
              "v": "Growing queue or dead-letter count indicates consumers falling behind or message processing failures. Prevents backlog and lost messages.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Azure Monitor metrics (ActiveMessageCount, DeadletterMessageCount)",
              "q": "index=azure sourcetype=\"mscs:azure:metrics\" namespace=\"Microsoft.ServiceBus/namespaces\" metricName=\"ActiveMessageCount\"\n| bin _time span=5m\n| stats avg(average) as active_messages by _time, EntityName\n| where active_messages > 1000",
              "m": "Collect Service Bus metrics per queue/topic. Alert when ActiveMessageCount exceeds threshold or DeadletterMessageCount > 0. Monitor message age via custom metric or run history if available.",
              "z": "Line chart (message count, dead letter by queue), Table (queue, active, dead letter), Single value.",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Azure Monitor metrics (ActiveMessageCount, DeadletterMessageCount).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Service Bus metrics per queue/topic. Alert when ActiveMessageCount exceeds threshold or DeadletterMessageCount > 0. Monitor message age via custom metric or run history if available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:metrics\" namespace=\"Microsoft.ServiceBus/namespaces\" metricName=\"ActiveMessageCount\"\n| bin _time span=5m\n| stats avg(average) as active_messages by _time, EntityName\n| where active_messages > 1000\n```\n\nUnderstanding this SPL\n\n**Service Bus Queue Message Count and Dead Letter** — Growing queue or dead-letter count indicates consumers falling behind or message processing failures. Prevents backlog and lost messages.\n\nDocumented **Data sources**: Azure Monitor metrics (ActiveMessageCount, DeadletterMessageCount). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, EntityName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where active_messages > 1000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (message count, dead letter by queue), Table (queue, active, dead letter), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Growing queue or dead-letter count indicates consumers falling behind or message processing failures.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.18",
              "n": "Cosmos DB RU Consumption and Throttling",
              "c": "high",
              "f": "beginner",
              "v": "Throttling (429) occurs when RU consumption exceeds provisioned throughput. Monitoring supports right-sizing and autoscale tuning.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Azure Monitor Cosmos DB metrics (TotalRequestUnits, TotalRequests, Http429)",
              "q": "index=azure sourcetype=\"mscs:azure:metrics\" namespace=\"Microsoft.DocumentDB/databaseAccounts\" metricName=\"TotalRequestUnits\"\n| timechart span=5m sum(total) by CollectionName\n| eval ru_utilization_pct = TotalRequestUnits / provisioned_ru * 100",
              "m": "Collect Cosmos DB metrics. Alert when Http429 > 0 or RU consumption consistently near provisioned. Dashboard RU by operation type and partition. Consider autoscale for variable workload.",
              "z": "Line chart (RU, 429 by collection), Table (collection, RU, 429), Gauge (RU utilization %).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Azure Monitor Cosmos DB metrics (TotalRequestUnits, TotalRequests, Http429).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Cosmos DB metrics. Alert when Http429 > 0 or RU consumption consistently near provisioned. Dashboard RU by operation type and partition. Consider autoscale for variable workload.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:metrics\" namespace=\"Microsoft.DocumentDB/databaseAccounts\" metricName=\"TotalRequestUnits\"\n| timechart span=5m sum(total) by CollectionName\n| eval ru_utilization_pct = TotalRequestUnits / provisioned_ru * 100\n```\n\nUnderstanding this SPL\n\n**Cosmos DB RU Consumption and Throttling** — Throttling (429) occurs when RU consumption exceeds provisioned throughput. Monitoring supports right-sizing and autoscale tuning.\n\nDocumented **Data sources**: Azure Monitor Cosmos DB metrics (TotalRequestUnits, TotalRequests, Http429). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by CollectionName** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **ru_utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (RU, 429 by collection), Table (collection, RU, 429), Gauge (RU utilization %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Throttling (429) occurs when RU consumption exceeds provisioned throughput.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.19",
              "n": "Azure Front Door / CDN Origin Errors and Cache Hit",
              "c": "high",
              "f": "beginner",
              "v": "Origin errors and low cache hit ratio impact latency and origin load. Essential for CDN and global app performance.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Azure Monitor Front Door metrics (BackendHealthPercentage, RequestCount, BackendRequestCount)",
              "q": "index=azure sourcetype=\"mscs:azure:metrics\" namespace=\"Microsoft.Cdn/profiles\" metricName=\"BackendHealthPercentage\"\n| where average < 100\n| table _time resourceId endpoint average",
              "m": "Collect Front Door/CDN metrics. Alert when BackendHealthPercentage < 100%. Track RequestCount vs BackendRequestCount for cache hit ratio. Enable diagnostic logs for request-level analysis.",
              "z": "Line chart (origin health, request count), Table (endpoint, health %, cache hit), Gauge (cache hit %).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Azure Monitor Front Door metrics (BackendHealthPercentage, RequestCount, BackendRequestCount).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Front Door/CDN metrics. Alert when BackendHealthPercentage < 100%. Track RequestCount vs BackendRequestCount for cache hit ratio. Enable diagnostic logs for request-level analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:metrics\" namespace=\"Microsoft.Cdn/profiles\" metricName=\"BackendHealthPercentage\"\n| where average < 100\n| table _time resourceId endpoint average\n```\n\nUnderstanding this SPL\n\n**Azure Front Door / CDN Origin Errors and Cache Hit** — Origin errors and low cache hit ratio impact latency and origin load. Essential for CDN and global app performance.\n\nDocumented **Data sources**: Azure Monitor Front Door metrics (BackendHealthPercentage, RequestCount, BackendRequestCount). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where average < 100` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Azure Front Door / CDN Origin Errors and Cache Hit**): table _time resourceId endpoint average\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (origin health, request count), Table (endpoint, health %, cache hit), Gauge (cache hit %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Origin errors and low cache hit ratio impact latency and origin load.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.20",
              "n": "Event Grid Delivery Failures",
              "c": "high",
              "f": "beginner",
              "v": "Delivery failures mean subscribers did not receive events. Critical for event-driven architecture and integration reliability.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Event Grid diagnostic logs (DeliveryFailure, DeliverySuccess)",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"DeliveryFailure\"\n| stats count by topic eventSubscriptionName errorCode\n| sort -count",
              "m": "Enable Event Grid diagnostic logging to Event Hub or storage. Ingest in Splunk. Alert when DeliveryFailure count > 0. Correlate with dead-letter and subscriber endpoint health.",
              "z": "Table (topic, subscription, failures), Line chart (deliveries vs failures), Single value (failed deliveries).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Event Grid diagnostic logs (DeliveryFailure, DeliverySuccess).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Event Grid diagnostic logging to Event Hub or storage. Ingest in Splunk. Alert when DeliveryFailure count > 0. Correlate with dead-letter and subscriber endpoint health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"DeliveryFailure\"\n| stats count by topic eventSubscriptionName errorCode\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Event Grid Delivery Failures** — Delivery failures mean subscribers did not receive events. Critical for event-driven architecture and integration reliability.\n\nDocumented **Data sources**: Event Grid diagnostic logs (DeliveryFailure, DeliverySuccess). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by topic eventSubscriptionName errorCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (topic, subscription, failures), Line chart (deliveries vs failures), Single value (failed deliveries).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Delivery failures mean subscribers did not receive events.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.21",
              "n": "Azure Container Registry Pull/Push and Vulnerability Scan",
              "c": "medium",
              "f": "intermediate",
              "v": "ACR stores container images. Unusual pull/push or image scan findings indicate abuse or vulnerable images in use.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "ACR diagnostic logs (Pull, Push), Defender for Containers / ACR vulnerability scan",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" ResourceType=\"MICROSOFT.CONTAINERREGISTRY/REGISTRIES\" OperationName=\"Pull\"\n| stats count by identity_claim_upn repository\n| eventstats avg(count) as avg_pull, stdev(count) as stdev_pull\n| where count > avg_pull + 2*stdev_pull",
              "m": "Enable ACR diagnostic logs. Baseline pull/push by identity and repo; alert on anomalies. Ingest vulnerability scan results from Defender or ACR task; alert on critical/high in production repos.",
              "z": "Table (user, repo, pulls), Bar chart (top pullers), Table (image, CVE, severity).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1610",
                "T1613"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: ACR diagnostic logs (Pull, Push), Defender for Containers / ACR vulnerability scan.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable ACR diagnostic logs. Baseline pull/push by identity and repo; alert on anomalies. Ingest vulnerability scan results from Defender or ACR task; alert on critical/high in production repos.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" ResourceType=\"MICROSOFT.CONTAINERREGISTRY/REGISTRIES\" OperationName=\"Pull\"\n| stats count by identity_claim_upn repository\n| eventstats avg(count) as avg_pull, stdev(count) as stdev_pull\n| where count > avg_pull + 2*stdev_pull\n```\n\nUnderstanding this SPL\n\n**Azure Container Registry Pull/Push and Vulnerability Scan** — ACR stores container images. Unusual pull/push or image scan findings indicate abuse or vulnerable images in use.\n\nDocumented **Data sources**: ACR diagnostic logs (Pull, Push), Defender for Containers / ACR vulnerability scan. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics, MICROSOFT.CONTAINERREGISTRY/REGISTRIES. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by identity_claim_upn repository** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where count > avg_pull + 2*stdev_pull` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, repo, pulls), Bar chart (top pullers), Table (image, CVE, severity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "ACR stores container images.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.22",
              "n": "Azure Firewall Rule Hit and Threat Intel",
              "c": "high",
              "f": "intermediate",
              "v": "Firewall logs show allowed/denied traffic and threat intelligence hits. Essential for network security and incident response.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Azure Firewall diagnostic logs (AzureFirewallApplicationRule, AzureFirewallNetworkRule, AzureFirewallThreatIntelLog)",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"AzureFirewallThreatIntelLog\"\n| rename msg_src_ip as src, msg_dest_ip as dest\n| table _time src dest action threat_id\n| sort -_time",
              "m": "Enable Azure Firewall diagnostic logs to Event Hub or storage. Ingest in Splunk. Alert on any threat intel hit. Dashboard rule hits, denied flows, and top sources/destinations.",
              "z": "Table (source, dest, action, rule), Map (threat IPs), Timeline (threat intel hits).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1580",
                "T1562.007"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Azure Firewall diagnostic logs (AzureFirewallApplicationRule, AzureFirewallNetworkRule, AzureFirewallThreatIntelLog).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Azure Firewall diagnostic logs to Event Hub or storage. Ingest in Splunk. Alert on any threat intel hit. Dashboard rule hits, denied flows, and top sources/destinations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"AzureFirewallThreatIntelLog\"\n| rename msg_src_ip as src, msg_dest_ip as dest\n| table _time src dest action threat_id\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Azure Firewall Rule Hit and Threat Intel** — Firewall logs show allowed/denied traffic and threat intelligence hits. Essential for network security and incident response.\n\nDocumented **Data sources**: Azure Firewall diagnostic logs (AzureFirewallApplicationRule, AzureFirewallNetworkRule, AzureFirewallThreatIntelLog). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Renames fields with `rename` for clarity or joins.\n• Pipeline stage (see **Azure Firewall Rule Hit and Threat Intel**): table _time src dest action threat_id\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (source, dest, action, rule), Map (threat IPs), Timeline (threat intel hits).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Firewall logs show allowed/denied traffic and threat intelligence hits.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.23",
              "n": "Azure Database for MySQL/PostgreSQL Metrics",
              "c": "high",
              "f": "beginner",
              "v": "Managed MySQL/PostgreSQL CPU, storage, and connection metrics support capacity and performance management beyond Azure SQL.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Azure Monitor metrics (percentage_cpu, storage_percent, active_connections)",
              "q": "index=azure sourcetype=\"mscs:azure:metrics\" namespace=\"Microsoft.DBforMySQL/servers\" metricName=\"percentage_cpu\"\n| bin _time span=5m\n| stats avg(average) as cpu_pct by _time, resourceId\n| where cpu_pct > 80",
              "m": "Collect Azure DB for MySQL/PostgreSQL metrics. Alert on CPU >80%, storage_percent >85%, or active_connections nearing max. Enable slow query log and ingest for query-level analysis.",
              "z": "Line chart (CPU, storage, connections by server), Table (server, metrics), Gauge (storage %).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Azure Monitor metrics (percentage_cpu, storage_percent, active_connections).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Azure DB for MySQL/PostgreSQL metrics. Alert on CPU >80%, storage_percent >85%, or active_connections nearing max. Enable slow query log and ingest for query-level analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:metrics\" namespace=\"Microsoft.DBforMySQL/servers\" metricName=\"percentage_cpu\"\n| bin _time span=5m\n| stats avg(average) as cpu_pct by _time, resourceId\n| where cpu_pct > 80\n```\n\nUnderstanding this SPL\n\n**Azure Database for MySQL/PostgreSQL Metrics** — Managed MySQL/PostgreSQL CPU, storage, and connection metrics support capacity and performance management beyond Azure SQL.\n\nDocumented **Data sources**: Azure Monitor metrics (percentage_cpu, storage_percent, active_connections). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, resourceId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cpu_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU, storage, connections by server), Table (server, metrics), Gauge (storage %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Managed MySQL/PostgreSQL CPU, storage, and connection metrics support capacity and performance management beyond Azure SQL.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.24",
              "n": "Azure Monitor Alert State Changes",
              "c": "medium",
              "f": "beginner",
              "v": "Alert state changes (fired, resolved) provide consolidated view of metric/log conditions. Centralizing in Splunk enables correlation.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Activity Log (Microsoft.Insights/activityLogAlerts), or Action Group webhook to Splunk",
              "q": "index=azure sourcetype=\"mscs:azure:audit\" operationName.value=\"Microsoft.Insights/activityLogAlerts/Activated/Action\"\n| table _time caller properties.condition properties.alertRule\n| sort -_time",
              "m": "Configure Action Group to send alert payload to Splunk (Logic App or webhook). Ingest fired and resolved events. Dashboard active alerts by severity and resource group.",
              "z": "Table (alert, state, time), Timeline (alert history), Single value (active alerts).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Activity Log (Microsoft.Insights/activityLogAlerts), or Action Group webhook to Splunk.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Action Group to send alert payload to Splunk (Logic App or webhook). Ingest fired and resolved events. Dashboard active alerts by severity and resource group.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:audit\" operationName.value=\"Microsoft.Insights/activityLogAlerts/Activated/Action\"\n| table _time caller properties.condition properties.alertRule\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Azure Monitor Alert State Changes** — Alert state changes (fired, resolved) provide consolidated view of metric/log conditions. Centralizing in Splunk enables correlation.\n\nDocumented **Data sources**: Activity Log (Microsoft.Insights/activityLogAlerts), or Action Group webhook to Splunk. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Azure Monitor Alert State Changes**): table _time caller properties.condition properties.alertRule\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (alert, state, time), Timeline (alert history), Single value (active alerts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Alert state changes (fired, resolved) provide consolidated view of metric/log conditions.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.25",
              "n": "Entra ID Conditional Access Blocked Sign-Ins",
              "c": "high",
              "f": "intermediate",
              "v": "Blocked sign-ins by Conditional Access indicate policy enforcement. Tracking blocks helps tune policies and detect bypass attempts.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Entra ID sign-in logs (resultType=0 for success; filter for blocks)",
              "q": "index=azure sourcetype=\"mscs:azure:signinlog\" status.errorCode!=\"0\"\n| stats count by userPrincipalName appDisplayName status.errorCode location\n| sort -count",
              "m": "Forward sign-in logs. Filter for resultType or status indicating block (e.g. conditional access block). Alert on spike in blocks or blocks for sensitive apps. Correlate with risk and device compliance.",
              "z": "Table (user, app, error, location), Bar chart (blocks by reason), Timeline.",
              "kfp": "Some risky scores come from new devices, working from home, or a partner logging in the same way we do. We do not want every medium flag to page us; we pair higher scores with new locations, odd hours, or multiple clouds before we act.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Entra ID sign-in logs (resultType=0 for success; filter for blocks).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward sign-in logs. Filter for resultType or status indicating block (e.g. conditional access block). Alert on spike in blocks or blocks for sensitive apps. Correlate with risk and device compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:signinlog\" status.errorCode!=\"0\"\n| stats count by userPrincipalName appDisplayName status.errorCode location\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Entra ID Conditional Access Blocked Sign-Ins** — Blocked sign-ins by Conditional Access indicate policy enforcement. Tracking blocks helps tune policies and detect bypass attempts.\n\nDocumented **Data sources**: Entra ID sign-in logs (resultType=0 for success; filter for blocks). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:signinlog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:signinlog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userPrincipalName appDisplayName status.errorCode location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, app, error, location), Bar chart (blocks by reason), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Blocked sign-ins by Conditional Access indicate policy enforcement.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.26",
              "n": "Azure Service Health and Planned Maintenance",
              "c": "high",
              "f": "beginner",
              "v": "Service Health and planned maintenance notifications prevent wasted troubleshooting and enable change planning.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Service Health alerts via Activity Log (ServiceHealth)",
              "q": "index=azure sourcetype=\"mscs:azure:audit\" category.value=\"ServiceHealth\"\n| table _time properties.incidentType properties.title properties.description properties.status\n| sort -_time",
              "m": "Service Health events flow to Activity Log. Ingest and filter for category=ServiceHealth. Alert on incidentType=Incident or Security. Dashboard active incidents and upcoming maintenance.",
              "z": "Table (incident, service, status), Timeline (incidents), Single value (active incidents).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Service Health alerts via Activity Log (ServiceHealth).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nService Health events flow to Activity Log. Ingest and filter for category=ServiceHealth. Alert on incidentType=Incident or Security. Dashboard active incidents and upcoming maintenance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:audit\" category.value=\"ServiceHealth\"\n| table _time properties.incidentType properties.title properties.description properties.status\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Azure Service Health and Planned Maintenance** — Service Health and planned maintenance notifications prevent wasted troubleshooting and enable change planning.\n\nDocumented **Data sources**: Service Health alerts via Activity Log (ServiceHealth). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Azure Service Health and Planned Maintenance**): table _time properties.incidentType properties.title properties.description properties.status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (incident, service, status), Timeline (incidents), Single value (active incidents).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Service Health and planned maintenance notifications prevent wasted troubleshooting and enable change planning.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.27",
              "n": "Azure Policy Compliance and Non-Compliant Resources",
              "c": "medium",
              "f": "intermediate",
              "v": "Azure Policy enforces governance. Non-compliant resources increase risk and compliance gaps. Tracking compliance supports remediation.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Policy state change events, Azure Monitor (policy compliance API or diagnostic)",
              "q": "index=azure sourcetype=\"mscs:azure:audit\" resourceId=*Microsoft.Authorization/policyAssignments*\n| search complianceState=\"NonCompliant\"\n| stats count by policyDefinitionId resourceType\n| sort -count",
              "m": "Use Azure Policy compliance API or export policy states to storage/Event Hub. Ingest in Splunk. Dashboard compliance % by policy and resource group. Alert when critical policy becomes non-compliant.",
              "z": "Table (policy, resource, state), Pie chart (compliant %), Bar chart (non-compliant by type).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1578"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Policy state change events, Azure Monitor (policy compliance API or diagnostic).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Azure Policy compliance API or export policy states to storage/Event Hub. Ingest in Splunk. Dashboard compliance % by policy and resource group. Alert when critical policy becomes non-compliant.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:audit\" resourceId=*Microsoft.Authorization/policyAssignments*\n| search complianceState=\"NonCompliant\"\n| stats count by policyDefinitionId resourceType\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Azure Policy Compliance and Non-Compliant Resources** — Azure Policy enforces governance. Non-compliant resources increase risk and compliance gaps. Tracking compliance supports remediation.\n\nDocumented **Data sources**: Policy state change events, Azure Monitor (policy compliance API or diagnostic). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by policyDefinitionId resourceType** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (policy, resource, state), Pie chart (compliant %), Bar chart (non-compliant by type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Azure Policy enforces governance.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.28",
              "n": "Azure App Service Plan CPU and Memory",
              "c": "high",
              "f": "intermediate",
              "v": "Platform-level resource pressure on App Service plan (not app-level) causes throttling, slow responses, and out-of-memory errors across all apps in the plan.",
              "t": "Splunk Add-on for Microsoft Cloud Services",
              "d": "Azure Monitor metrics (CpuPercentage, MemoryPercentage for App Service Plan)",
              "q": "index=azure sourcetype=\"mscs:azure:metrics\" resourceType=\"Microsoft.Web/serverfarms\" (metric_name=\"CpuPercentage\" OR metric_name=\"MemoryPercentage\")\n| stats avg(average) as avg_pct by resourceId, metric_name, bin(_time, 5m)\n| where avg_pct > 80\n| eval avg_pct=round(avg_pct, 1)\n| table _time resourceId metric_name avg_pct\n| sort -avg_pct",
              "m": "Configure Azure Monitor diagnostic settings or metrics API to export App Service Plan metrics (CpuPercentage, MemoryPercentage) to Event Hub or storage. Ingest via Splunk_TA_microsoft-cloudservices. Alert when CPU or memory exceeds 80% for 5+ minutes. Scale up plan or optimize app code. Distinguish plan-level metrics from app-level (requests, response time).",
              "z": "Line chart (CPU and memory % by plan over time), Table (plan, metric, avg %), Gauge (current utilization).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services.\n• Ensure the following data sources are available: Azure Monitor metrics (CpuPercentage, MemoryPercentage for App Service Plan).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Azure Monitor diagnostic settings or metrics API to export App Service Plan metrics (CpuPercentage, MemoryPercentage) to Event Hub or storage. Ingest via Splunk_TA_microsoft-cloudservices. Alert when CPU or memory exceeds 80% for 5+ minutes. Scale up plan or optimize app code. Distinguish plan-level metrics from app-level (requests, response time).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:metrics\" resourceType=\"Microsoft.Web/serverfarms\" (metric_name=\"CpuPercentage\" OR metric_name=\"MemoryPercentage\")\n| stats avg(average) as avg_pct by resourceId, metric_name, bin(_time, 5m)\n| where avg_pct > 80\n| eval avg_pct=round(avg_pct, 1)\n| table _time resourceId metric_name avg_pct\n| sort -avg_pct\n```\n\nUnderstanding this SPL\n\n**Azure App Service Plan CPU and Memory** — Platform-level resource pressure on App Service plan (not app-level) causes throttling, slow responses, and out-of-memory errors across all apps in the plan.\n\nDocumented **Data sources**: Azure Monitor metrics (CpuPercentage, MemoryPercentage for App Service Plan). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:metrics, Microsoft.Web/serverfarms. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resourceId, metric_name, bin(_time, 5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **avg_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Azure App Service Plan CPU and Memory**): table _time resourceId metric_name avg_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU and memory % by plan over time), Table (plan, metric, avg %), Gauge (current utilization).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Platform-level resource pressure on App Service plan (not app-level) causes throttling, slow responses, and out-of-memory errors across all apps in the plan.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.29",
              "n": "Azure Front Door Origin Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Origin probe failures cause automatic failover. Repeated failures indicate backend issues or misconfigured health probes; critical for global load balancing and CDN reliability.",
              "t": "Splunk Add-on for Microsoft Cloud Services",
              "d": "Azure Front Door health probe logs, FrontDoorHealthProbeLog",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" resourceType=\"Microsoft.Cdn/profiles\" log_s=\"FrontDoorHealthProbeLog\"\n| spath path=properties\n| search properties.httpStatusCode!=200 OR properties.healthProbeSentResult=\"Unhealthy\"\n| stats count by resourceId, properties.backendPoolName, properties.healthProbeSentResult\n| sort -count",
              "m": "Enable Front Door diagnostic logs (FrontDoorHealthProbeLog) and route to Log Analytics or Event Hub. Ingest in Splunk. Alert on any Unhealthy probe result or non-200 status. Correlate with origin availability and probe configuration (path, interval). Dashboard by backend pool and origin.",
              "z": "Table (backend pool, result, count), Status grid (origin health), Timeline (probe failures).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services.\n• Ensure the following data sources are available: Azure Front Door health probe logs, FrontDoorHealthProbeLog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Front Door diagnostic logs (FrontDoorHealthProbeLog) and route to Log Analytics or Event Hub. Ingest in Splunk. Alert on any Unhealthy probe result or non-200 status. Correlate with origin availability and probe configuration (path, interval). Dashboard by backend pool and origin.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" resourceType=\"Microsoft.Cdn/profiles\" log_s=\"FrontDoorHealthProbeLog\"\n| spath path=properties\n| search properties.httpStatusCode!=200 OR properties.healthProbeSentResult=\"Unhealthy\"\n| stats count by resourceId, properties.backendPoolName, properties.healthProbeSentResult\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Azure Front Door Origin Health** — Origin probe failures cause automatic failover. Repeated failures indicate backend issues or misconfigured health probes; critical for global load balancing and CDN reliability.\n\nDocumented **Data sources**: Azure Front Door health probe logs, FrontDoorHealthProbeLog. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics, Microsoft.Cdn/profiles. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by resourceId, properties.backendPoolName, properties.healthProbeSentResult** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (backend pool, result, count), Status grid (origin health), Timeline (probe failures).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Origin probe failures cause automatic failover.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.30",
              "n": "NSG Flow Log Threat Hunting",
              "c": "high",
              "f": "intermediate",
              "v": "NSG flow logs reveal lateral movement, denied probes, and unexpected east-west volume; baselining flows speeds incident triage beyond simple allow/deny counts.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:nsgflow` or Event Hub JSON (flow records)",
              "q": "index=azure sourcetype=\"mscs:azure:nsgflow\" flowDirection=\"In\" macAddress=*\n| stats sum(bytes) as total_bytes, dc(src) as unique_sources by dest, dest_port_s, rule\n| where unique_sources > 50 OR total_bytes > 1000000000\n| sort -total_bytes",
              "m": "Ingest NSG Flow Logs to Event Hub and Splunk. Enrich IPs with threat intel and CMDB. Alert on denied burst to sensitive subnets or new rare port pairs. Retention per compliance.",
              "z": "Sankey or chord (src→dest), Table (top talkers), Map (geo of external IPs).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:nsgflow` or Event Hub JSON (flow records).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest NSG Flow Logs to Event Hub and Splunk. Enrich IPs with threat intel and CMDB. Alert on denied burst to sensitive subnets or new rare port pairs. Retention per compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:nsgflow\" flowDirection=\"In\" macAddress=*\n| stats sum(bytes) as total_bytes, dc(src) as unique_sources by dest, dest_port_s, rule\n| where unique_sources > 50 OR total_bytes > 1000000000\n| sort -total_bytes\n```\n\nUnderstanding this SPL\n\n**NSG Flow Log Threat Hunting** — NSG flow logs reveal lateral movement, denied probes, and unexpected east-west volume; baselining flows speeds incident triage beyond simple allow/deny counts.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:nsgflow` or Event Hub JSON (flow records). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:nsgflow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:nsgflow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest, dest_port_s, rule** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_sources > 50 OR total_bytes > 1000000000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(All_Traffic.src) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NSG Flow Log Threat Hunting** — NSG flow logs reveal lateral movement, denied probes, and unexpected east-west volume; baselining flows speeds incident triage beyond simple allow/deny counts.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:nsgflow` or Event Hub JSON (flow records). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey or chord (src→dest), Table (top talkers), Map (geo of external IPs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "NSG flow logs reveal lateral movement, denied probes, and unexpected east-west volume; baselining flows speeds incident triage beyond simple allow/deny counts.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t dc(All_Traffic.src) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - agg_value",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.31",
              "n": "Azure Policy Compliance Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Point-in-time compliance misses drift; trending non-compliant resource counts shows whether governance keeps pace with deployments.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Policy compliance export, `sourcetype=mscs:azure:audit` (policy events)",
              "q": "index=azure sourcetype=\"mscs:azure:audit\" complianceState=\"NonCompliant\"\n| timechart span=1d count by policyDefinitionId",
              "m": "Schedule daily export of compliance snapshot per subscription. Ingest as JSON with timestamp. Alert when rolling 7-day average of non-compliant % increases week over week. Tie to deployment pipelines.",
              "z": "Line chart (non-compliant % over time), Table (policy, delta), Bar chart (top resource types).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Policy compliance export, `sourcetype=mscs:azure:audit` (policy events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule daily export of compliance snapshot per subscription. Ingest as JSON with timestamp. Alert when rolling 7-day average of non-compliant % increases week over week. Tie to deployment pipelines.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:audit\" complianceState=\"NonCompliant\"\n| timechart span=1d count by policyDefinitionId\n```\n\nUnderstanding this SPL\n\n**Azure Policy Compliance Trending** — Point-in-time compliance misses drift; trending non-compliant resource counts shows whether governance keeps pace with deployments.\n\nDocumented **Data sources**: Policy compliance export, `sourcetype=mscs:azure:audit` (policy events). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by policyDefinitionId** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (non-compliant % over time), Table (policy, delta), Bar chart (top resource types).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how Azure policy compliance trends over time — a single report can look fine while drift builds, so a rising line of non-compliant resources tells you governance is not keeping up with what teams deploy.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.32",
              "n": "Key Vault Access Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Secret and key unwrap operations must be traceable for insider and breach investigations; unusual callers warrant immediate review.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:audit` (Microsoft.KeyVault vaults), diagnostic logs",
              "q": "index=azure sourcetype=\"mscs:azure:audit\" resourceId=\"*vaults*\" (operationName.value=\"SecretGet\" OR operationName.value=\"Decrypt\" OR operationName.value=\"UnwrapKey\")\n| stats count by identity.claims.name, callerIpAddress, resourceId\n| sort -count",
              "m": "Enable Key Vault diagnostic logs to Log Analytics or Event Hub. Alert on first-time principal, after-hours bulk access, or access from non-corporate IP ranges using lookups.",
              "z": "Table (identity, vault, count), Timeline (access spikes), Map (caller IP).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.004",
                "T1530"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:audit` (Microsoft.KeyVault vaults), diagnostic logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Key Vault diagnostic logs to Log Analytics or Event Hub. Alert on first-time principal, after-hours bulk access, or access from non-corporate IP ranges using lookups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:audit\" resourceId=\"*vaults*\" (operationName.value=\"SecretGet\" OR operationName.value=\"Decrypt\" OR operationName.value=\"UnwrapKey\")\n| stats count by identity.claims.name, callerIpAddress, resourceId\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Key Vault Access Audit** — Secret and key unwrap operations must be traceable for insider and breach investigations; unusual callers warrant immediate review.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:audit` (Microsoft.KeyVault vaults), diagnostic logs. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by identity.claims.name, callerIpAddress, resourceId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Key Vault Access Audit** — Secret and key unwrap operations must be traceable for insider and breach investigations; unusual callers warrant immediate review.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:audit` (Microsoft.KeyVault vaults), diagnostic logs. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (identity, vault, count), Timeline (access spikes), Map (caller IP).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Secret and key unwrap operations must be traceable for insider and breach investigations; unusual callers warrant immediate review.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.33",
              "n": "App Service Health Metrics",
              "c": "high",
              "f": "beginner",
              "v": "HTTP queue length, response time, and instance health explain user-visible slowness before 5xx rates spike.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:metrics` (Microsoft.Web/sites — HttpQueueLength, AverageResponseTime, HealthCheckStatus)",
              "q": "index=azure sourcetype=\"mscs:azure:metrics\" resourceType=\"Microsoft.Web/sites\" (metric_name=\"HttpQueueLength\" OR metric_name=\"AverageResponseTime\" OR metric_name=\"HealthCheckStatus\")\n| stats avg(average) as v by resourceId, metric_name, bin(_time, 5m)\n| where (metric_name=\"HttpQueueLength\" AND v>100) OR (metric_name=\"AverageResponseTime\" AND v>2000) OR (metric_name=\"HealthCheckStatus\" AND v>0)",
              "m": "Stream App Service metrics via diagnostic settings. Correlate with App Service Plan saturation (UC-4.2.28). Alert on sustained queue depth or failed health probes. Scale out or warm up instances.",
              "z": "Line chart (queue, response time, health), Table (app, metric, value), Status grid (probe per slot).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:metrics` (Microsoft.Web/sites — HttpQueueLength, AverageResponseTime, HealthCheckStatus).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStream App Service metrics via diagnostic settings. Correlate with App Service Plan saturation (UC-4.2.28). Alert on sustained queue depth or failed health probes. Scale out or warm up instances.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:metrics\" resourceType=\"Microsoft.Web/sites\" (metric_name=\"HttpQueueLength\" OR metric_name=\"AverageResponseTime\" OR metric_name=\"HealthCheckStatus\")\n| stats avg(average) as v by resourceId, metric_name, bin(_time, 5m)\n| where (metric_name=\"HttpQueueLength\" AND v>100) OR (metric_name=\"AverageResponseTime\" AND v>2000) OR (metric_name=\"HealthCheckStatus\" AND v>0)\n```\n\nUnderstanding this SPL\n\n**App Service Health Metrics** — HTTP queue length, response time, and instance health explain user-visible slowness before 5xx rates spike.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:metrics` (Microsoft.Web/sites — HttpQueueLength, AverageResponseTime, HealthCheckStatus). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:metrics, Microsoft.Web/sites. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resourceId, metric_name, bin(_time, 5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (metric_name=\"HttpQueueLength\" AND v>100) OR (metric_name=\"AverageResponseTime\" AND v>2000) OR (metric_name=\"HealthCh…` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**App Service Health Metrics** — HTTP queue length, response time, and instance health explain user-visible slowness before 5xx rates spike.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:metrics` (Microsoft.Web/sites — HttpQueueLength, AverageResponseTime, HealthCheckStatus). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (queue, response time, health), Table (app, metric, value), Status grid (probe per slot).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "HTTP queue length, response time, and instance health explain user-visible slowness before 5xx rates spike.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host | sort - agg_value",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.34",
              "n": "AKS Diagnostics and Errors",
              "c": "high",
              "f": "advanced",
              "v": "Control plane and node problems surface as API errors, failed mounts, and ImagePullBackOff; centralized errors shorten MTTR.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:diagnostics` (kube-audit, container logs), Azure Monitor for containers",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" category=\"kube-audit\" \"responseStatus.code\">=400\n| stats count by objectRef.resource, verb, responseStatus.code\n| sort -count",
              "m": "Enable AKS diagnostic categories for audit and container logs. Ingest to Splunk. Alert on elevated 5xx from API server or repeated ImagePullBackOff patterns. Dashboard by namespace and deployment.",
              "z": "Table (resource, code, count), Timeline (audit errors), Bar chart (namespace).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1613",
                "T1609"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:diagnostics` (kube-audit, container logs), Azure Monitor for containers.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable AKS diagnostic categories for audit and container logs. Ingest to Splunk. Alert on elevated 5xx from API server or repeated ImagePullBackOff patterns. Dashboard by namespace and deployment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" category=\"kube-audit\" \"responseStatus.code\">=400\n| stats count by objectRef.resource, verb, responseStatus.code\n| sort -count\n```\n\nUnderstanding this SPL\n\n**AKS Diagnostics and Errors** — Control plane and node problems surface as API errors, failed mounts, and ImagePullBackOff; centralized errors shorten MTTR.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:diagnostics` (kube-audit, container logs), Azure Monitor for containers. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by objectRef.resource, verb, responseStatus.code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (resource, code, count), Timeline (audit errors), Bar chart (namespace).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Control plane and node problems surface as errors, failed mounts, and ImagePullBackOff; centralized errors shorten MTTR.",
              "mtype": [
                "Security",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.35",
              "n": "Cost Management Anomaly Detection",
              "c": "medium",
              "f": "advanced",
              "v": "Built-in Cost Management alerts help, but statistical baselines on daily spend catch unusual service charges early.",
              "t": "`Splunk_TA_microsoft-cloudservices`, Cost Management export",
              "d": "`sourcetype=mscs:azure:cost` or amortized cost CSV to blob/Event Hub",
              "q": "index=azure sourcetype=\"mscs:azure:cost\"\n| timechart span=1d sum(pretax_cost) as daily by ServiceName\n| eventstats avg(daily) as mu, stdev(daily) as sigma by ServiceName\n| eval z=if(sigma>0, (daily-mu)/sigma, 0)\n| where z > 3",
              "m": "Ingest daily actual cost by service and resource group. Use `predict` or manual z-score as shown. Alert finance and owners on anomalies. Exclude known one-time purchases via lookup.",
              "z": "Line chart (daily cost by service), Table (service, z-score), Single value (anomaly count).",
              "kfp": "Cloud billing and cost files often arrive a day or more late, include credits and refunds, and use tags that do not line up with app names, so a spike is not always a real overrun. We narrow alerts to sustained trends and we exclude test or sandbox projects when we can.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`, Cost Management export.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:cost` or amortized cost CSV to blob/Event Hub.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest daily actual cost by service and resource group. Use `predict` or manual z-score as shown. Alert finance and owners on anomalies. Exclude known one-time purchases via lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:cost\"\n| timechart span=1d sum(pretax_cost) as daily by ServiceName\n| eventstats avg(daily) as mu, stdev(daily) as sigma by ServiceName\n| eval z=if(sigma>0, (daily-mu)/sigma, 0)\n| where z > 3\n```\n\nUnderstanding this SPL\n\n**Cost Management Anomaly Detection** — Built-in Cost Management alerts help, but statistical baselines on daily spend catch unusual service charges early.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:cost` or amortized cost CSV to blob/Event Hub. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`, Cost Management export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:cost. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:cost\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by ServiceName** — ideal for trending and alerting on this use case.\n• `eventstats` rolls up events into metrics; results are split **by ServiceName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where z > 3` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily cost by service), Table (service, z-score), Single value (anomaly count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Built-in Cost Management alerts help, but statistical baselines on daily spend catch unusual service charges early.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.36",
              "n": "Azure Firewall Threat Intelligence Hits",
              "c": "high",
              "f": "intermediate",
              "v": "Threat intel–based denies block known-bad IPs and domains at the edge; volume and target trends indicate active campaigns or misclassified traffic.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Azure Firewall diagnostic logs (`AzureFirewallApplicationRule`, `AzureFirewallNetworkRule`), threat intel action",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"AzureFirewallApplicationRule\" msg_s=\"*ThreatIntel*\"\n| stats count by msg_s, FQDN, SourceAddress\n| sort -count",
              "m": "Enable Threat Intel mode on Firewall and full diagnostic logging. Parse rule collection and threat category. Alert on new destination countries or sudden hit rate increase. Tune false positives with application owners.",
              "z": "Map (source IP), Table (FQDN, count), Timeline (hits).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [
                "T1580",
                "T1562.007"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Azure Firewall diagnostic logs (`AzureFirewallApplicationRule`, `AzureFirewallNetworkRule`), threat intel action.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Threat Intel mode on Firewall and full diagnostic logging. Parse rule collection and threat category. Alert on new destination countries or sudden hit rate increase. Tune false positives with application owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"AzureFirewallApplicationRule\" msg_s=\"*ThreatIntel*\"\n| stats count by msg_s, FQDN, SourceAddress\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Azure Firewall Threat Intelligence Hits** — Threat intel–based denies block known-bad IPs and domains at the edge; volume and target trends indicate active campaigns or misclassified traffic.\n\nDocumented **Data sources**: Azure Firewall diagnostic logs (`AzureFirewallApplicationRule`, `AzureFirewallNetworkRule`), threat intel action. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by msg_s, FQDN, SourceAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure Firewall Threat Intelligence Hits** — Threat intel–based denies block known-bad IPs and domains at the edge; volume and target trends indicate active campaigns or misclassified traffic.\n\nDocumented **Data sources**: Azure Firewall diagnostic logs (`AzureFirewallApplicationRule`, `AzureFirewallNetworkRule`), threat intel action. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (source IP), Table (FQDN, count), Timeline (hits).",
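Editor's note — Step 1 calls for alerting on a sudden hit-rate increase; the documented searches only rank totals. A minimal sketch of that alert over the same feed, using a simple z-score on hourly hit counts (threshold and span are starting points, not tuned values):

```spl
index=azure sourcetype="mscs:azure:diagnostics" Category="AzureFirewallApplicationRule" msg_s="*ThreatIntel*"
| bin _time span=1h
| stats count as hits by _time
| eventstats avg(hits) as mu, stdev(hits) as sigma
| eval z=if(sigma>0, (hits-mu)/sigma, 0)
| where z > 3
```

Run it over a multi-day window so the baseline reflects normal diurnal variation rather than a single quiet night.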
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Threat intel–based denies block known-bad and domains at the edge; volume and target trends indicate active campaigns or misclassified traffic.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.37",
              "n": "Front Door WAF Blocks",
              "c": "high",
              "f": "intermediate",
              "v": "Managed rule blocks protect origins; tracking rule IDs separates scanning noise from targeted application abuse.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Front Door diagnostic logs (WebApplicationFirewallLog)",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" log_s=\"WebApplicationFirewallLog\" action_s=\"Block\"\n| stats count by ruleName_s, clientIP_s, hostName_s\n| sort -count",
              "m": "Enable WAF logs on Front Door profile. Ingest to Splunk. Dashboard OWASP rule groups. Create exceptions carefully with SecOps. Correlate with origin 5xx to avoid blocking good clients.",
              "z": "Bar chart (ruleName), Table (client IP, URI), Timeline (block rate).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [
                "T1580"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Front Door diagnostic logs (WebApplicationFirewallLog).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable WAF logs on Front Door profile. Ingest to Splunk. Dashboard OWASP rule groups. Create exceptions carefully with SecOps. Correlate with origin 5xx to avoid blocking good clients.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" log_s=\"WebApplicationFirewallLog\" action_s=\"Block\"\n| stats count by ruleName_s, clientIP_s, hostName_s\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Front Door WAF Blocks** — Managed rule blocks protect origins; tracking rule IDs separates scanning noise from targeted application abuse.\n\nDocumented **Data sources**: Front Door diagnostic logs (WebApplicationFirewallLog). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ruleName_s, clientIP_s, hostName_s** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Front Door WAF Blocks** — Managed rule blocks protect origins; tracking rule IDs separates scanning noise from targeted application abuse.\n\nDocumented **Data sources**: Front Door diagnostic logs (WebApplicationFirewallLog). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (ruleName), Table (client IP, URI), Timeline (block rate).",
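Editor's note — Step 1 advises correlating blocks with origin 5xx before granting exceptions. A rough sketch that overlays the two rates on one timechart, assuming the access log also arrives through the diagnostics sourcetype with a `log_s="FrontDoorAccessLog"` category and an `httpStatusCode_s` field (both are placeholders; substitute the category and status field your export actually produces):

```spl
index=azure sourcetype="mscs:azure:diagnostics" (log_s="WebApplicationFirewallLog" action_s="Block") OR (log_s="FrontDoorAccessLog" httpStatusCode_s>=500)
| eval series=if(log_s="WebApplicationFirewallLog", "waf_blocks", "origin_5xx")
| timechart span=5m count by series
```

If block spikes coincide with origin 5xx spikes, the WAF may be catching legitimate clients retrying against a failing origin rather than real abuse.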
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Managed rule blocks protect origins; tracking rule separates scanning noise from targeted application abuse.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.38",
              "n": "Logic App Run Failures",
              "c": "medium",
              "f": "intermediate",
              "v": "Integration workflows power automation; failed runs leave tickets, data, and approvals stuck.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:diagnostics` (WorkflowRuntime), Logic App run history export",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" resourceType=\"Microsoft.Logic/workflows\" status_s=\"Failed\"\n| stats count by resource_name_s, code_s, error_s\n| sort -count",
              "m": "Enable Logic App workflow diagnostics. Ingest run status and error codes. Alert on any failed production workflow or retry exhaustion. Replay failed runs from operations team process.",
              "z": "Table (workflow, error), Timeline (failures), Single value (failed runs / hour).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:diagnostics` (WorkflowRuntime), Logic App run history export.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Logic App workflow diagnostics. Ingest run status and error codes. Alert on any failed production workflow or retry exhaustion. Replay failed runs from operations team process.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" resourceType=\"Microsoft.Logic/workflows\" status_s=\"Failed\"\n| stats count by resource_name_s, code_s, error_s\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Logic App Run Failures** — Integration workflows power automation; failed runs leave tickets, data, and approvals stuck.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:diagnostics` (WorkflowRuntime), Logic App run history export. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics, Microsoft.Logic/workflows. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resource_name_s, code_s, error_s** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (workflow, error), Timeline (failures), Single value (failed runs / hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Integration workflows power automation; failed runs leave tickets, data, and approvals stuck.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.39",
              "n": "Event Hub Capture Lag",
              "c": "high",
              "f": "advanced",
              "v": "Capture to ADLS/Blob enables batch analytics; lag between enqueue and file availability delays downstream pipelines.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Event Hub metrics (`CaptureLag`, incoming messages), storage write diagnostics",
              "q": "index=azure sourcetype=\"mscs:azure:metrics\" resourceType=\"Microsoft.EventHub/namespaces\" metric_name=\"CaptureLag\"\n| stats latest(average) as lag_ms by resourceId, bin(_time, 5m)\n| where lag_ms > 600000\n| eval lag_min=round(lag_ms/60000,1)",
              "m": "Ingest CaptureLag from Azure Monitor. Alert when lag exceeds SLA (for example 10 minutes). Check storage throttling and capture file naming collisions. Scale throughput units if needed.",
              "z": "Line chart (capture lag), Table (namespace, lag minutes), Single value (worst lag).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Event Hub metrics (`CaptureLag`, incoming messages), storage write diagnostics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CaptureLag from Azure Monitor. Alert when lag exceeds SLA (for example 10 minutes). Check storage throttling and capture file naming collisions. Scale throughput units if needed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:metrics\" resourceType=\"Microsoft.EventHub/namespaces\" metric_name=\"CaptureLag\"\n| stats latest(average) as lag_ms by resourceId, bin(_time, 5m)\n| where lag_ms > 600000\n| eval lag_min=round(lag_ms/60000,1)\n```\n\nUnderstanding this SPL\n\n**Event Hub Capture Lag** — Capture to ADLS/Blob enables batch analytics; lag between enqueue and file availability delays downstream pipelines.\n\nDocumented **Data sources**: Event Hub metrics (`CaptureLag`, incoming messages), storage write diagnostics. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:metrics, Microsoft.EventHub/namespaces. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resourceId, bin(_time, 5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where lag_ms > 600000` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **lag_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (capture lag), Table (namespace, lag minutes), Single value (worst lag).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Capture to ADLS/Blob enables batch analytics; lag between enqueue and file availability delays downstream pipelines.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.40",
              "n": "Azure Backup Job Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Failed or missed backup jobs leave restore gaps; operational health must be tracked per protected item.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:diagnostics` Category=\"AzureBackupReport\" or Recovery Services job events",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"AzureBackupReport\" OperationName=\"Backup\"\n| eval ok=if(match(JobStatus,\"(?i)completed\"),1,0)\n| stats count(eval(ok=0)) as failed, count as total by BackupItemName\n| where failed>0\n| sort -failed",
              "m": "Parse backup job JSON from diagnostic stream. Alert on Failed, CompletedWithWarnings patterns, or missing job in expected window (lookup per item). Test restores quarterly (see UC-4.4.29).",
              "z": "Table (item, status), Timeline (job outcomes), Single value (failed jobs 24h).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:diagnostics` Category=\"AzureBackupReport\" or Recovery Services job events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse backup job JSON from diagnostic stream. Alert on Failed, CompletedWithWarnings patterns, or missing job in expected window (lookup per item). Test restores quarterly (see UC-4.4.29).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"AzureBackupReport\" OperationName=\"Backup\"\n| eval ok=if(match(JobStatus,\"(?i)completed\"),1,0)\n| stats count(eval(ok=0)) as failed, count as total by BackupItemName\n| where failed>0\n| sort -failed\n```\n\nUnderstanding this SPL\n\n**Azure Backup Job Health** — Failed or missed backup jobs leave restore gaps; operational health must be tracked per protected item.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:diagnostics` Category=\"AzureBackupReport\" or Recovery Services job events. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by BackupItemName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failed>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (item, status), Timeline (job outcomes), Single value (failed jobs 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Failed or missed backup jobs leave restore gaps; operational health must be tracked per protected item.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.41",
              "n": "Private Link DNS Resolution",
              "c": "high",
              "f": "advanced",
              "v": "Private Endpoint FQDNs resolve via private DNS zones; NXDOMAIN or public resolution leaks traffic or breaks apps.",
              "t": "Custom (DNS query logs), `Splunk_TA_microsoft-cloudservices`",
              "d": "Azure DNS private zone query logs (if enabled), VM DNS client logs, `sourcetype=dns:query`",
              "q": "index=network sourcetype=\"dns:query\" zone_type=\"private\"\n| stats count(eval(rcode!=\"NOERROR\")) as failures, count as total by fqdn, src\n| eval fail_pct=round(100*failures/total,2)\n| where fail_pct > 5 AND total > 20",
              "m": "Forward DNS resolver logs from VNet-linked zones or Azure Firewall DNS proxy. Alert on high NXDOMAIN for PE FQDNs. Validate zone links and auto-registration on new NICs.",
              "z": "Table (fqdn, fail %), Timeline (DNS errors), Map (source subnet).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (DNS query logs), `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Azure DNS private zone query logs (if enabled), VM DNS client logs, `sourcetype=dns:query`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward DNS resolver logs from VNet-linked zones or Azure Firewall DNS proxy. Alert on high NXDOMAIN for PE FQDNs. Validate zone links and auto-registration on new NICs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"dns:query\" zone_type=\"private\"\n| stats count(eval(rcode!=\"NOERROR\")) as failures, count as total by fqdn, src\n| eval fail_pct=round(100*failures/total,2)\n| where fail_pct > 5 AND total > 20\n```\n\nUnderstanding this SPL\n\n**Private Link DNS Resolution** — Private Endpoint FQDNs resolve via private DNS zones; NXDOMAIN or public resolution leaks traffic or breaks apps.\n\nDocumented **Data sources**: Azure DNS private zone query logs (if enabled), VM DNS client logs, `sourcetype=dns:query`. **App/TA** (typical add-on context): Custom (DNS query logs), `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: dns:query. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"dns:query\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by fqdn, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_pct > 5 AND total > 20` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Resolution.DNS by DNS.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Private Link DNS Resolution** — Private Endpoint FQDNs resolve via private DNS zones; NXDOMAIN or public resolution leaks traffic or breaks apps.\n\nDocumented **Data sources**: Azure DNS private zone query logs (if enabled), VM DNS client logs, `sourcetype=dns:query`. **App/TA** (typical add-on context): Custom (DNS query logs), `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (fqdn, fail %), Timeline (DNS errors), Map (source subnet).",
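Editor's note — the value statement warns about public resolution leaking traffic, but the documented search only measures failure rate. A minimal sketch for the leak case, assuming the resolver log carries the response IP in an `answer` field (the field CIM Network_Resolution uses; rename if your extraction differs) and that Private Endpoints live in RFC 1918 space:

```spl
index=network sourcetype="dns:query" fqdn="*.privatelink.*"
| where isnotnull(answer)
  AND NOT cidrmatch("10.0.0.0/8", answer)
  AND NOT cidrmatch("172.16.0.0/12", answer)
  AND NOT cidrmatch("192.168.0.0/16", answer)
| stats count by fqdn, answer, src
```

A privatelink FQDN resolving to a public address usually means the client's VNet is not linked to the private zone, so traffic is routing over the public endpoint instead of the Private Endpoint.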
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Private Endpoint FQDNs resolve using private zones; NXDOMAIN or public resolution leaks traffic or breaks apps.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Resolution.DNS by DNS.src | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.42",
              "n": "Azure Monitor Alert Rule Health",
              "c": "medium",
              "f": "intermediate",
              "v": "Disabled or misconfigured alert rules create silent monitoring gaps; tracking rule state protects on-call coverage.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:audit` (scheduledQueryRules, metricAlerts), Activity Log for alert changes",
              "q": "index=azure sourcetype=\"mscs:azure:audit\" (operationName.value=\"*scheduledQueryRules*\" OR operationName.value=\"*metricAlerts*\")\n| search disable OR Disabled OR delete OR Delete\n| stats count by caller, operationName.value, resourceId\n| sort -_time",
              "m": "Ingest Activity Log for alert create/update/delete. Nightly compare inventory of enabled rules vs golden baseline lookup. Alert when production-critical rules are disabled > 15 minutes.",
              "z": "Table (rule, action, caller), Timeline (changes), Single value (disabled rules count).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:audit` (scheduledQueryRules, metricAlerts), Activity Log for alert changes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Activity Log for alert create/update/delete. Nightly compare inventory of enabled rules vs golden baseline lookup. Alert when production-critical rules are disabled > 15 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:audit\" (operationName.value=\"*scheduledQueryRules*\" OR operationName.value=\"*metricAlerts*\")\n| search disable OR Disabled OR delete OR Delete\n| stats count by caller, operationName.value, resourceId\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Azure Monitor Alert Rule Health** — Disabled or misconfigured alert rules create silent monitoring gaps; tracking rule state protects on-call coverage.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:audit` (scheduledQueryRules, metricAlerts), Activity Log for alert changes. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by caller, operationName.value, resourceId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rule, action, caller), Timeline (changes), Single value (disabled rules count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Disabled or misconfigured alert rules create silent monitoring gaps; tracking rule state protects on-call coverage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.43",
              "n": "Defender for Cloud Recommendations",
              "c": "medium",
              "f": "intermediate",
              "v": "Secure score and recommendations drive hardening backlog; trending open recommendations shows risk posture over time.",
              "t": "`Splunk_TA_microsoft-cloudservices`, Defender export API",
              "d": "Defender for Cloud recommendations JSON, continuous export to Log Analytics/Event Hub",
              "q": "index=azure sourcetype=\"mscs:azure:defender\" recommendationState=\"Active\"\n| stats count by recommendationName, severity\n| sort -count",
              "m": "Export recommendations on schedule via Logic App or Microsoft Graph security API to Splunk. Track mean time to remediate by severity. Executive dashboard of secure score trend if ingested.",
              "z": "Bar chart (recommendations by type), Table (severity, count), Line chart (open recommendations over time).",
              "kfp": "A fresh scanner run or a vendor re-rate can resurface a finding we already know about. We deduplicate on resource and time and we only escalate when the exposure is new or the severity stepped up.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [
                "T1538"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`, Defender export API.\n• Ensure the following data sources are available: Defender for Cloud recommendations JSON, continuous export to Log Analytics/Event Hub.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport recommendations on schedule via Logic App or Microsoft Graph security API to Splunk. Track mean time to remediate by severity. Executive dashboard of secure score trend if ingested.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:defender\" recommendationState=\"Active\"\n| stats count by recommendationName, severity\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Defender for Cloud Recommendations** — Secure score and recommendations drive hardening backlog; trending open recommendations shows risk posture over time.\n\nDocumented **Data sources**: Defender for Cloud recommendations JSON, continuous export to Log Analytics/Event Hub. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`, Defender export API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:defender. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:defender\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by recommendationName, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Defender for Cloud Recommendations** — Secure score and recommendations drive hardening backlog; trending open recommendations shows risk posture over time.\n\nDocumented **Data sources**: Defender for Cloud recommendations JSON, continuous export to Log Analytics/Event Hub. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`, Defender export API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (recommendations by type), Table (severity, count), Line chart (open recommendations over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Secure score and recommendations drive hardening backlog; trending open recommendations shows risk posture over time.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.44",
              "n": "Azure Resource Lock Changes",
              "c": "high",
              "f": "intermediate",
              "v": "Locks prevent accidental deletes; removing a lock before maintenance is high risk and must be audited.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:audit` (Microsoft.Authorization/locks)",
              "q": "index=azure sourcetype=\"mscs:azure:audit\" resourceId=\"*providers/Microsoft.Authorization/locks*\"\n| stats count by operationName.value, identity.claims.name, resourceGroupName\n| sort -count",
              "m": "Alert on Delete or write operations against lock resources. Require change ticket in comments where possible. Correlate with subsequent delete operations on parent resources.",
              "z": "Table (operation, user, resource group), Timeline (lock changes).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:audit` (Microsoft.Authorization/locks).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on Delete or write operations against lock resources. Require change ticket in comments where possible. Correlate with subsequent delete operations on parent resources.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:audit\" resourceId=\"*providers/Microsoft.Authorization/locks*\"\n| stats count by operationName.value, identity.claims.name, resourceGroupName\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Azure Resource Lock Changes** — Locks prevent accidental deletes; removing a lock before maintenance is high risk and must be audited.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:audit` (Microsoft.Authorization/locks). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by operationName.value, identity.claims.name, resourceGroupName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure Resource Lock Changes** — Locks prevent accidental deletes; removing a lock before maintenance is high risk and must be audited.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:audit` (Microsoft.Authorization/locks). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operation, user, resource group), Timeline (lock changes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Locks prevent accidental deletes; removing a lock before maintenance is high risk and must be audited.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.45",
              "n": "Azure Container Instances Health",
              "c": "high",
              "f": "intermediate",
              "v": "ACI containers are short-lived and opaque without platform metrics; monitoring restarts and resource exhaustion preserves burst workloads and integrations.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics)",
              "d": "`sourcetype=azure:monitor:metric` or `sourcetype=azure:diagnostics`",
              "q": "index=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.containerinstance/containergroups\"\n| stats avg(average) as cpu_avg, max(maximum) as cpu_peak by resource_name, resource_group\n| join type=left max=1 resource_name [\n    search index=cloud sourcetype=\"azure:diagnostics\" Category=\"ContainerInstanceLog\"\n    | where match(_raw, \"(?i)error|fail|OOM\")\n    | stats count as log_errors by resource_name\n]\n| where cpu_peak>85 OR log_errors>0\n| sort -cpu_peak",
              "m": "Route Azure Monitor metrics for Container Instances to Splunk using the Azure Add-on (Event Hub or metrics export). Enable diagnostic logs for container groups. Normalize `resource_name` to container group. Alert on CPU/memory threshold breaches, exit code non-zero patterns in logs, and restart counts from platform events.",
              "z": "Line chart (CPU/memory over time), Table (container group, region, state), Bar chart (events by group).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics).\n• Ensure the following data sources are available: `sourcetype=azure:monitor:metric` or `sourcetype=azure:diagnostics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRoute Azure Monitor metrics for Container Instances to Splunk using the Azure Add-on (Event Hub or metrics export). Enable diagnostic logs for container groups. Normalize `resource_name` to container group. Alert on CPU/memory threshold breaches, exit code non-zero patterns in logs, and restart counts from platform events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.containerinstance/containergroups\"\n| stats avg(average) as cpu_avg, max(maximum) as cpu_peak by resource_name, resource_group\n| join type=left max=1 resource_name [\n    search index=cloud sourcetype=\"azure:diagnostics\" Category=\"ContainerInstanceLog\"\n    | where match(_raw, \"(?i)error|fail|OOM\")\n    | stats count as log_errors by resource_name\n]\n| where cpu_peak>85 OR log_errors>0\n| sort -cpu_peak\n```\n\nUnderstanding this SPL\n\n**Azure Container Instances Health** — ACI containers are short-lived and opaque without platform metrics; monitoring restarts and resource exhaustion preserves burst workloads and integrations.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` or `sourcetype=azure:diagnostics`. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:monitor:metric. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:monitor:metric\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resource_name, resource_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where cpu_peak>85 OR log_errors>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU/memory over time), Table (container group, region, state), Bar chart (events by group).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "ACI containers are short-lived and opaque without platform metrics; monitoring restarts and resource exhaustion preserves burst workloads and integrations.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.46",
              "n": "Azure Application Gateway and WAF Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Application Gateway is the primary L7 load balancer for most Azure web workloads. Backend health probe failures cause 502 errors for users; WAF blocks need tuning to avoid false positives.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics)",
              "d": "`sourcetype=azure:monitor:metric`, `sourcetype=azure:diagnostics` (ApplicationGatewayAccessLog, ApplicationGatewayFirewallLog)",
              "q": "index=cloud sourcetype=\"azure:diagnostics\" Category=\"ApplicationGatewayAccessLog\"\n| eval is_error=if(httpStatusCode>=500,1,0)\n| timechart span=5m count as total_requests, sum(is_error) as server_errors by host\n| eval error_pct=round(100*server_errors/total_requests,2)\n| where error_pct > 5",
              "m": "Enable diagnostics on Application Gateway to send access logs and WAF logs via Event Hub or Storage Account to Splunk. Monitor backend pool health probe status from metrics (`UnhealthyHostCount`). Alert on rising 502/504 rates, unhealthy backends, and WAF blocks that correlate with user-reported issues. Track WAF rule hit distribution to tune rule exclusions.",
              "z": "Line chart (request rate and error rate), Table (unhealthy backends), Bar chart (WAF blocks by rule ID).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1580",
                "T1562.007"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics).\n• Ensure the following data sources are available: `sourcetype=azure:monitor:metric`, `sourcetype=azure:diagnostics` (ApplicationGatewayAccessLog, ApplicationGatewayFirewallLog).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable diagnostics on Application Gateway to send access logs and WAF logs via Event Hub or Storage Account to Splunk. Monitor backend pool health probe status from metrics (`UnhealthyHostCount`). Alert on rising 502/504 rates, unhealthy backends, and WAF blocks that correlate with user-reported issues. Track WAF rule hit distribution to tune rule exclusions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:diagnostics\" Category=\"ApplicationGatewayAccessLog\"\n| eval is_error=if(httpStatusCode>=500,1,0)\n| timechart span=5m count as total_requests, sum(is_error) as server_errors by host\n| eval error_pct=round(100*server_errors/total_requests,2)\n| where error_pct > 5\n```\n\nUnderstanding this SPL\n\n**Azure Application Gateway and WAF Health** — Application Gateway is the primary L7 load balancer for most Azure web workloads. Backend health probe failures cause 502 errors for users; WAF blocks need tuning to avoid false positives.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric`, `sourcetype=azure:diagnostics` (ApplicationGatewayAccessLog, ApplicationGatewayFirewallLog). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:diagnostics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_error** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **error_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_pct > 5` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure Application Gateway and WAF Health** — Application Gateway is the primary L7 load balancer for most Azure web workloads. 
Backend health probe failures cause 502 errors for users; WAF blocks need tuning to avoid false positives.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric`, `sourcetype=azure:diagnostics` (ApplicationGatewayAccessLog, ApplicationGatewayFirewallLog). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (request rate and error rate), Table (unhealthy backends), Bar chart (WAF blocks by rule ID).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Application Gateway is the primary L7 load balancer for most Azure web workloads.",
              "mtype": [
                "Security",
                "Availability",
                "Fault"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest span=5m | sort - agg_value",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.47",
              "n": "Azure VPN Gateway Tunnel Status",
              "c": "critical",
              "f": "beginner",
              "v": "VPN gateway tunnel drops break hybrid connectivity between Azure and on-premises networks. Nearly every enterprise Azure customer relies on site-to-site VPN; tunnel status is a fundamental availability signal.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics)",
              "d": "`sourcetype=azure:monitor:metric` (Microsoft.Network/vpnGateways)",
              "q": "index=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.network/vpngateways\" metric_name=\"TunnelAverageBandwidth\" OR metric_name=\"TunnelEgressBytes\"\n| timechart span=5m avg(average) as avg_bandwidth by resource_name\n| where avg_bandwidth < 1",
              "m": "Collect Azure Monitor metrics for VPN Gateway resources. Monitor `TunnelAverageBandwidth` (drops to zero when tunnel is down), `TunnelEgressBytes`, `TunnelIngressBytes`, and `BGPPeerStatus`. Alert when tunnel bandwidth drops to zero or BGP peer status changes. Correlate with Azure Service Health events for planned maintenance.",
              "z": "Line chart (tunnel bandwidth over time), Single value (tunnel status up/down), Table (tunnels with status).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics).\n• Ensure the following data sources are available: `sourcetype=azure:monitor:metric` (Microsoft.Network/vpnGateways).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Azure Monitor metrics for VPN Gateway resources. Monitor `TunnelAverageBandwidth` (drops to zero when tunnel is down), `TunnelEgressBytes`, `TunnelIngressBytes`, and `BGPPeerStatus`. Alert when tunnel bandwidth drops to zero or BGP peer status changes. Correlate with Azure Service Health events for planned maintenance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.network/vpngateways\" metric_name=\"TunnelAverageBandwidth\" OR metric_name=\"TunnelEgressBytes\"\n| timechart span=5m avg(average) as avg_bandwidth by resource_name\n| where avg_bandwidth < 1\n```\n\nUnderstanding this SPL\n\n**Azure VPN Gateway Tunnel Status** — VPN gateway tunnel drops break hybrid connectivity between Azure and on-premises networks. Nearly every enterprise Azure customer relies on site-to-site VPN; tunnel status is a fundamental availability signal.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Network/vpnGateways). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:monitor:metric. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:monitor:metric\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by resource_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_bandwidth < 1` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure VPN Gateway Tunnel Status** — VPN gateway tunnel drops break hybrid connectivity between Azure and on-premises networks. Nearly every enterprise Azure customer relies on site-to-site VPN; tunnel status is a fundamental availability signal.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Network/vpnGateways). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (tunnel bandwidth over time), Single value (tunnel status up/down), Table (tunnels with status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Remote access gateway tunnel drops break hybrid connectivity between Azure and on-premises networks.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=5m | sort - agg_value",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.48",
              "n": "Azure ExpressRoute Circuit Health",
              "c": "critical",
              "f": "intermediate",
              "v": "ExpressRoute provides dedicated private connectivity to Azure for large enterprises. Circuit degradation or BGP peer loss causes failover to backup paths or complete connectivity loss to Azure services.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics)",
              "d": "`sourcetype=azure:monitor:metric` (Microsoft.Network/expressRouteCircuits)",
              "q": "index=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.network/expressroutecircuits\"\n| eval metric=metric_name\n| where metric IN (\"BgpAvailability\",\"ArpAvailability\",\"BitsInPerSecond\",\"BitsOutPerSecond\")\n| timechart span=5m avg(average) as value by metric, resource_name",
              "m": "Collect metrics for ExpressRoute circuits: `BgpAvailability` and `ArpAvailability` (should be 100%), `BitsInPerSecond`/`BitsOutPerSecond` for throughput trending. Alert when BGP availability drops below 100% or throughput drops to zero. Track circuit utilization against provisioned bandwidth to plan capacity upgrades.",
              "z": "Line chart (BGP/ARP availability %), Line chart (throughput), Single value (circuit status).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics).\n• Ensure the following data sources are available: `sourcetype=azure:monitor:metric` (Microsoft.Network/expressRouteCircuits).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect metrics for ExpressRoute circuits: `BgpAvailability` and `ArpAvailability` (should be 100%), `BitsInPerSecond`/`BitsOutPerSecond` for throughput trending. Alert when BGP availability drops below 100% or throughput drops to zero. Track circuit utilization against provisioned bandwidth to plan capacity upgrades.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.network/expressroutecircuits\"\n| eval metric=metric_name\n| where metric IN (\"BgpAvailability\",\"ArpAvailability\",\"BitsInPerSecond\",\"BitsOutPerSecond\")\n| timechart span=5m avg(average) as value by metric, resource_name\n```\n\nUnderstanding this SPL\n\n**Azure ExpressRoute Circuit Health** — ExpressRoute provides dedicated private connectivity to Azure for large enterprises. Circuit degradation or BGP peer loss causes failover to backup paths or complete connectivity loss to Azure services.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Network/expressRouteCircuits). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:monitor:metric. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:monitor:metric\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **metric** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where metric IN (\"BgpAvailability\",\"ArpAvailability\",\"BitsInPerSecond\",\"BitsOutPerSecond\")` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by metric, resource_name** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure ExpressRoute Circuit Health** — ExpressRoute provides dedicated private connectivity to Azure for large enterprises. Circuit degradation or BGP peer loss causes failover to backup paths or complete connectivity loss to Azure services.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Network/expressRouteCircuits). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (BGP/ARP availability %), Line chart (throughput), Single value (circuit status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "ExpressRoute provides dedicated private connectivity to Azure for large enterprises.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=5m | sort - agg_value",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.49",
              "n": "Azure Redis Cache Performance",
              "c": "high",
              "f": "beginner",
              "v": "Redis Cache is a common caching and session store layer in Azure architectures. High server load, memory pressure, or cache misses directly impact application response times.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics)",
              "d": "`sourcetype=azure:monitor:metric` (Microsoft.Cache/Redis)",
              "q": "index=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.cache/redis\"\n| where metric_name IN (\"serverLoad\",\"usedmemorypercentage\",\"cacheHits\",\"cacheMisses\",\"connectedclients\",\"evictedkeys\")\n| timechart span=5m avg(average) as value by metric_name, resource_name",
              "m": "Collect Azure Monitor metrics for Redis Cache resources. Key metrics: `serverLoad` (alert >80%), `usedmemorypercentage` (alert >90%), `evictedkeys` (any eviction signals memory pressure), and cache hit ratio (`cacheHits/(cacheHits+cacheMisses)`). Track `connectedclients` against tier limits. For Premium tier, monitor replication lag between primary and replica.",
              "z": "Gauge (server load), Line chart (memory % and hit ratio), Single value (evicted keys).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics).\n• Ensure the following data sources are available: `sourcetype=azure:monitor:metric` (Microsoft.Cache/Redis).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Azure Monitor metrics for Redis Cache resources. Key metrics: `serverLoad` (alert >80%), `usedmemorypercentage` (alert >90%), `evictedkeys` (any eviction signals memory pressure), and cache hit ratio (`cacheHits/(cacheHits+cacheMisses)`). Track `connectedclients` against tier limits. For Premium tier, monitor replication lag between primary and replica.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.cache/redis\"\n| where metric_name IN (\"serverLoad\",\"usedmemorypercentage\",\"cacheHits\",\"cacheMisses\",\"connectedclients\",\"evictedkeys\")\n| timechart span=5m avg(average) as value by metric_name, resource_name\n```\n\nUnderstanding this SPL\n\n**Azure Redis Cache Performance** — Redis Cache is a common caching and session store layer in Azure architectures. High server load, memory pressure, or cache misses directly impact application response times.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Cache/Redis). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:monitor:metric. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:monitor:metric\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where metric_name IN (\"serverLoad\",\"usedmemorypercentage\",\"cacheHits\",\"cacheMisses\",\"connectedclients\",\"evictedkeys\")` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by metric_name, resource_name** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Memory by Performance.host span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure Redis Cache Performance** — Redis Cache is a common caching and session store layer in Azure architectures. High server load, memory pressure, or cache misses directly impact application response times.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Cache/Redis). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` (its **Memory** dataset) — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (server load), Line chart (memory % and hit ratio), Single value (evicted keys).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Redis Cache is a common caching and session store layer in Azure architectures.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Memory by Performance.host span=5m | sort - agg_value",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.50",
              "n": "Azure Data Factory Pipeline Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Data Factory orchestrates ETL/ELT pipelines that feed data warehouses, analytics, and operational systems. Pipeline failures cause stale data, broken dashboards, and missed SLAs.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics)",
              "d": "`sourcetype=azure:diagnostics` (PipelineRuns, ActivityRuns, TriggerRuns)",
              "q": "index=cloud sourcetype=\"azure:diagnostics\" Category=\"PipelineRuns\"\n| where status=\"Failed\"\n| stats count as failures, latest(start) as last_failure by pipelineName, resource_name\n| sort -failures",
              "m": "Enable diagnostics on Data Factory to route `PipelineRuns`, `ActivityRuns`, and `TriggerRuns` to Splunk via Event Hub. Alert on failed pipeline runs. Track activity-level errors for root cause (copy failures, data flow errors, linked service timeouts). Monitor pipeline duration trending to detect degradation before SLA breach.",
              "z": "Table (failed pipelines with error), Bar chart (failures by pipeline), Line chart (pipeline duration trend).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics).\n• Ensure the following data sources are available: `sourcetype=azure:diagnostics` (PipelineRuns, ActivityRuns, TriggerRuns).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable diagnostics on Data Factory to route `PipelineRuns`, `ActivityRuns`, and `TriggerRuns` to Splunk via Event Hub. Alert on failed pipeline runs. Track activity-level errors for root cause (copy failures, data flow errors, linked service timeouts). Monitor pipeline duration trending to detect degradation before SLA breach.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:diagnostics\" Category=\"PipelineRuns\"\n| where status=\"Failed\"\n| stats count as failures, latest(start) as last_failure by pipelineName, resource_name\n| sort -failures\n```\n\nUnderstanding this SPL\n\n**Azure Data Factory Pipeline Failures** — Data Factory orchestrates ETL/ELT pipelines that feed data warehouses, analytics, and operational systems. Pipeline failures cause stale data, broken dashboards, and missed SLAs.\n\nDocumented **Data sources**: `sourcetype=azure:diagnostics` (PipelineRuns, ActivityRuns, TriggerRuns). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:diagnostics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status=\"Failed\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by pipelineName, resource_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed pipelines with error), Bar chart (failures by pipeline), Line chart (pipeline duration trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Data Factory orchestrates ETL/ELT pipelines that feed data warehouses, analytics, and operational systems.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.51",
              "n": "Azure API Management (APIM) Health",
              "c": "high",
              "f": "intermediate",
              "v": "APIM is the gateway for API-first architectures. Backend errors, high latency, and rate limit breaches directly impact API consumers and downstream applications.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics)",
              "d": "`sourcetype=azure:monitor:metric` (Microsoft.ApiManagement/service), `sourcetype=azure:diagnostics` (GatewayLogs)",
              "q": "index=cloud sourcetype=\"azure:diagnostics\" Category=\"GatewayLogs\"\n| eval is_error=if(responseCode>=500,1,0)\n| timechart span=5m count as requests, sum(is_error) as errors, avg(totalTime) as avg_latency_ms by apiId\n| eval error_pct=round(100*errors/requests,2)\n| where error_pct > 5 OR avg_latency_ms > 2000",
              "m": "Enable diagnostics on APIM to send GatewayLogs via Event Hub to Splunk. Collect metrics for `Requests`, `BackendDuration`, `OverallDuration`, `FailedRequests`, and `UnauthorizedRequests`. Alert on backend error rate spikes, latency exceeding SLA thresholds, and capacity exhaustion (approaching unit limits). Track API-level usage patterns for capacity planning.",
              "z": "Line chart (request rate and error rate by API), Gauge (latency vs. SLA), Table (top errors by API and operation).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics).\n• Ensure the following data sources are available: `sourcetype=azure:monitor:metric` (Microsoft.ApiManagement/service), `sourcetype=azure:diagnostics` (GatewayLogs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable diagnostics on APIM to send GatewayLogs via Event Hub to Splunk. Collect metrics for `Requests`, `BackendDuration`, `OverallDuration`, `FailedRequests`, and `UnauthorizedRequests`. Alert on backend error rate spikes, latency exceeding SLA thresholds, and capacity exhaustion (approaching unit limits). Track API-level usage patterns for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:diagnostics\" Category=\"GatewayLogs\"\n| eval is_error=if(responseCode>=500,1,0)\n| timechart span=5m count as requests, sum(is_error) as errors, avg(totalTime) as avg_latency_ms by apiId\n| eval error_pct=round(100*errors/requests,2)\n| where error_pct > 5 OR avg_latency_ms > 2000\n```\n\nUnderstanding this SPL\n\n**Azure API Management (APIM) Health** — APIM is the gateway for API-first architectures. Backend errors, high latency, and rate limit breaches directly impact API consumers and downstream applications.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.ApiManagement/service), `sourcetype=azure:diagnostics` (GatewayLogs). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:diagnostics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_error** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by apiId** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **error_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_pct > 5 OR avg_latency_ms > 2000` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure API Management (APIM) Health** — APIM is the gateway for API-first architectures. Backend errors, high latency, and rate limit breaches directly impact API consumers and downstream applications.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.ApiManagement/service), `sourcetype=azure:diagnostics` (GatewayLogs). 
**App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (request rate and error rate by API), Gauge (latency vs. SLA), Table (top errors by API and operation).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "APIM is the gateway -first architectures.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=5m | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.52",
              "n": "Azure Virtual Desktop Session Health",
              "c": "high",
              "f": "intermediate",
              "v": "Azure Virtual Desktop provides remote desktop infrastructure. Connection failures, high round-trip latency, and session drops directly impact end-user productivity.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics)",
              "d": "`sourcetype=azure:diagnostics` (WVDConnections, WVDErrors, WVDCheckpoints)",
              "q": "index=cloud sourcetype=\"azure:diagnostics\" Category=\"WVDConnections\"\n| eval duration_min=round(SessionDuration/60000,1)\n| stats count as connections, avg(duration_min) as avg_session_min, dc(UserName) as unique_users by HostPoolName, SessionHostName\n| join type=left max=1 SessionHostName [\n    search index=cloud sourcetype=\"azure:diagnostics\" Category=\"WVDErrors\"\n    | stats count as errors by SessionHostName\n]\n| where errors > 0\n| sort -errors",
              "m": "Enable diagnostics on AVD host pools to route `WVDConnections`, `WVDErrors`, and `WVDCheckpoints` to Splunk. Monitor connection success rate, average session duration, and round-trip time. Alert on connection failure spikes, session host unavailability, and high input delay (>200ms). Track session host resource utilization (CPU, memory, disk) from Azure Monitor metrics.",
              "z": "Table (session hosts with errors), Line chart (connections and failures over time), Single value (active sessions).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics).\n• Ensure the following data sources are available: `sourcetype=azure:diagnostics` (WVDConnections, WVDErrors, WVDCheckpoints).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable diagnostics on AVD host pools to route `WVDConnections`, `WVDErrors`, and `WVDCheckpoints` to Splunk. Monitor connection success rate, average session duration, and round-trip time. Alert on connection failure spikes, session host unavailability, and high input delay (>200ms). Track session host resource utilization (CPU, memory, disk) from Azure Monitor metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:diagnostics\" Category=\"WVDConnections\"\n| eval duration_min=round(SessionDuration/60000,1)\n| stats count as connections, avg(duration_min) as avg_session_min, dc(UserName) as unique_users by HostPoolName, SessionHostName\n| join type=left max=1 SessionHostName [\n    search index=cloud sourcetype=\"azure:diagnostics\" Category=\"WVDErrors\"\n    | stats count as errors by SessionHostName\n]\n| where errors > 0\n| sort -errors\n```\n\nUnderstanding this SPL\n\n**Azure Virtual Desktop Session Health** — Azure Virtual Desktop provides remote desktop infrastructure. Connection failures, high round-trip latency, and session drops directly impact end-user productivity.\n\nDocumented **Data sources**: `sourcetype=azure:diagnostics` (WVDConnections, WVDErrors, WVDCheckpoints). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:diagnostics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by HostPoolName, SessionHostName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where errors > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (session hosts with errors), Line chart (connections and failures over time), Single value (active sessions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Azure Virtual Desktop provides remote desktop infrastructure.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.53",
              "n": "Azure Traffic Manager Endpoint Health",
              "c": "high",
              "f": "beginner",
              "v": "Traffic Manager provides DNS-based global load balancing. Degraded endpoints cause traffic to shift, but undetected health changes can leave users routed to unhealthy regions.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics)",
              "d": "`sourcetype=azure:monitor:metric` (Microsoft.Network/trafficManagerProfiles)",
              "q": "index=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.network/trafficmanagerprofiles\" metric_name=\"ProbeAgentCurrentEndpointStateByProfileResourceId\"\n| timechart span=5m latest(average) as health_pct by resource_name\n| where health_pct < 100",
              "m": "Collect Azure Monitor metrics for Traffic Manager profiles. Monitor `ProbeAgentCurrentEndpointStateByProfileResourceId` for endpoint health percentage and `QpsByEndpoint` for query distribution. Alert when any endpoint degrades or goes offline. Track DNS query patterns to verify failover behavior is correct after endpoint changes.",
              "z": "Status grid (endpoint × health), Line chart (health % per endpoint), Single value (degraded endpoint count).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics).\n• Ensure the following data sources are available: `sourcetype=azure:monitor:metric` (Microsoft.Network/trafficManagerProfiles).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Azure Monitor metrics for Traffic Manager profiles. Monitor `ProbeAgentCurrentEndpointStateByProfileResourceId` for endpoint health percentage and `QpsByEndpoint` for query distribution. Alert when any endpoint degrades or goes offline. Track DNS query patterns to verify failover behavior is correct after endpoint changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.network/trafficmanagerprofiles\" metric_name=\"ProbeAgentCurrentEndpointStateByProfileResourceId\"\n| timechart span=5m latest(average) as health_pct by resource_name\n| where health_pct < 100\n```\n\nUnderstanding this SPL\n\n**Azure Traffic Manager Endpoint Health** — Traffic Manager provides DNS-based global load balancing. Degraded endpoints cause traffic to shift, but undetected health changes can leave users routed to unhealthy regions.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Network/trafficManagerProfiles). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:monitor:metric. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:monitor:metric\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by resource_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where health_pct < 100` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (endpoint × health), Line chart (health % per endpoint), Single value (degraded endpoint count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Traffic Manager provides -based global load balancing.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.54",
              "n": "Azure Bastion Session Audit",
              "c": "medium",
              "f": "beginner",
              "v": "Bastion provides secure, auditable VM access without public IPs. Monitoring session activity ensures compliance with access policies and detects unauthorized connection attempts.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics)",
              "d": "`sourcetype=azure:diagnostics` (BastionAuditLogs)",
              "q": "index=cloud sourcetype=\"azure:diagnostics\" Category=\"BastionAuditLogs\"\n| stats count as sessions, dc(targetVMIPAddress) as unique_targets by userName, clientIpAddress\n| sort -sessions",
              "m": "Enable diagnostic logging on Azure Bastion to send audit logs via Event Hub. Track user sessions by `userName`, `targetVMIPAddress`, `protocol` (SSH/RDP), and `duration`. Alert on connections to unexpected VMs, connections from unusual IP addresses, and failed authentication attempts. Correlate with Entra ID sign-in logs for identity context.",
              "z": "Table (sessions by user and target), Bar chart (sessions by protocol), Line chart (session count over time).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.004",
                "T1580"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics).\n• Ensure the following data sources are available: `sourcetype=azure:diagnostics` (BastionAuditLogs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable diagnostic logging on Azure Bastion to send audit logs via Event Hub. Track user sessions by `userName`, `targetVMIPAddress`, `protocol` (SSH/RDP), and `duration`. Alert on connections to unexpected VMs, connections from unusual IP addresses, and failed authentication attempts. Correlate with Entra ID sign-in logs for identity context.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:diagnostics\" Category=\"BastionAuditLogs\"\n| stats count as sessions, dc(targetVMIPAddress) as unique_targets by userName, clientIpAddress\n| sort -sessions\n```\n\nUnderstanding this SPL\n\n**Azure Bastion Session Audit** — Bastion provides secure, auditable VM access without public IPs. Monitoring session activity ensures compliance with access policies and detects unauthorized connection attempts.\n\nDocumented **Data sources**: `sourcetype=azure:diagnostics` (BastionAuditLogs). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:diagnostics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userName, clientIpAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure Bastion Session Audit** — Bastion provides secure, auditable VM access without public IPs. Monitoring session activity ensures compliance with access policies and detects unauthorized connection attempts.\n\nDocumented **Data sources**: `sourcetype=azure:diagnostics` (BastionAuditLogs). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sessions by user and target), Bar chart (sessions by protocol), Line chart (session count over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Bastion provides secure, auditable VM access without public.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.55",
              "n": "Azure Network Watcher Connection Troubleshooting",
              "c": "medium",
              "f": "intermediate",
              "v": "Network Watcher captures flow logs, connection monitors, and packet captures for Azure networks. Proactive monitoring of connectivity test results detects network issues before they impact applications.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics)",
              "d": "`sourcetype=azure:diagnostics` (NetworkSecurityGroupFlowEvent), `sourcetype=azure:monitor:metric` (Connection Monitor)",
              "q": "index=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.network/networkwatchers/connectionmonitors\"\n| where metric_name=\"ChecksFailedPercent\"\n| timechart span=5m avg(average) as failed_pct by resource_name\n| where failed_pct > 10",
              "m": "Configure Connection Monitor tests for critical network paths (VM-to-VM, VM-to-PaaS, on-prem-to-Azure). Collect `ChecksFailedPercent`, `RoundTripTimeMs`, and `TestResult` metrics. Alert when failed check percentage exceeds threshold or round-trip time degrades significantly. Use NSG flow logs enriched with Traffic Analytics for deeper investigation.",
              "z": "Line chart (check failure % by monitor), Table (failing paths), Single value (overall connectivity health).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics).\n• Ensure the following data sources are available: `sourcetype=azure:diagnostics` (NetworkSecurityGroupFlowEvent), `sourcetype=azure:monitor:metric` (Connection Monitor).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Connection Monitor tests for critical network paths (VM-to-VM, VM-to-PaaS, on-prem-to-Azure). Collect `ChecksFailedPercent`, `RoundTripTimeMs`, and `TestResult` metrics. Alert when failed check percentage exceeds threshold or round-trip time degrades significantly. Use NSG flow logs enriched with Traffic Analytics for deeper investigation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.network/networkwatchers/connectionmonitors\"\n| where metric_name=\"ChecksFailedPercent\"\n| timechart span=5m avg(average) as failed_pct by resource_name\n| where failed_pct > 10\n```\n\nUnderstanding this SPL\n\n**Azure Network Watcher Connection Troubleshooting** — Network Watcher captures flow logs, connection monitors, and packet captures for Azure networks. Proactive monitoring of connectivity test results detects network issues before they impact applications.\n\nDocumented **Data sources**: `sourcetype=azure:diagnostics` (NetworkSecurityGroupFlowEvent), `sourcetype=azure:monitor:metric` (Connection Monitor). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:monitor:metric. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:monitor:metric\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where metric_name=\"ChecksFailedPercent\"` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by resource_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where failed_pct > 10` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure Network Watcher Connection Troubleshooting** — Network Watcher captures flow logs, connection monitors, and packet captures for Azure networks. Proactive monitoring of connectivity test results detects network issues before they impact applications.\n\nDocumented **Data sources**: `sourcetype=azure:diagnostics` (NetworkSecurityGroupFlowEvent), `sourcetype=azure:monitor:metric` (Connection Monitor). 
**App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (check failure % by monitor), Table (failing paths), Single value (overall connectivity health).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Network Watcher captures flow logs, connection monitors, and packet captures for Azure networks.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=5m | sort - agg_value",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.56",
              "n": "Azure Storage Queue Depth and Poison Messages",
              "c": "high",
              "f": "beginner",
              "v": "Storage Queues decouple application components. Growing queue depth indicates consumers cannot keep up; poison messages in the poison queue represent permanently failed processing that needs attention.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics)",
              "d": "`sourcetype=azure:monitor:metric` (Microsoft.Storage/storageAccounts/queueServices)",
              "q": "index=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.storage/storageaccounts\" metric_name=\"QueueMessageCount\"\n| timechart span=5m avg(average) as queue_depth by resource_name\n| where queue_depth > 1000",
              "m": "Collect Azure Monitor metrics for Storage Account queue services. Monitor `QueueMessageCount` for growing backlogs and `QueueCapacity` for storage limits. Set up a separate alert for poison queues (queues ending in `-poison`) with any messages. Alert when main queue depth exceeds baseline by 3x or poison queue is non-empty.",
              "z": "Line chart (queue depth over time), Single value (current depth), Table (queues with poison messages).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics).\n• Ensure the following data sources are available: `sourcetype=azure:monitor:metric` (Microsoft.Storage/storageAccounts/queueServices).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Azure Monitor metrics for Storage Account queue services. Monitor `QueueMessageCount` for growing backlogs and `QueueCapacity` for storage limits. Set up a separate alert for poison queues (queues ending in `-poison`) with any messages. Alert when main queue depth exceeds baseline by 3x or poison queue is non-empty.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.storage/storageaccounts\" metric_name=\"QueueMessageCount\"\n| timechart span=5m avg(average) as queue_depth by resource_name\n| where queue_depth > 1000\n```\n\nUnderstanding this SPL\n\n**Azure Storage Queue Depth and Poison Messages** — Storage Queues decouple application components. Growing queue depth indicates consumers cannot keep up; poison messages in the poison queue represent permanently failed processing that needs attention.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Storage/storageAccounts/queueServices). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:monitor:metric. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:monitor:metric\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by resource_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where queue_depth > 1000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (queue depth over time), Single value (current depth), Table (queues with poison messages).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Storage Queues decouple application components.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.2.57",
              "n": "Azure Managed Disk Performance Throttling",
              "c": "high",
              "f": "intermediate",
              "v": "Azure managed disks have IOPS and throughput caps based on tier and size. When VMs hit these limits, disk I/O is throttled, causing application slowdowns that are hard to diagnose without platform metrics.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics)",
              "d": "`sourcetype=azure:monitor:metric` (Microsoft.Compute/disks)",
              "q": "index=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.compute/disks\"\n| where metric_name IN (\"DiskIOPSReadWrite\",\"DiskMBpsReadWrite\",\"BurstIOCreditsConsumedPercentage\")\n| timechart span=5m avg(average) as value by metric_name, resource_name",
              "m": "Collect Azure Monitor metrics for managed disks. Monitor `Composite Disk Read/Write IOPS` against the disk SKU IOPS limit and `Composite Disk Read/Write Bytes/sec` against the throughput limit. For burstable disks, track `BurstIOCreditsConsumedPercentage` — when credits exhaust, performance drops to baseline. Alert when consumption exceeds 90% of provisioned capacity sustained over 15 minutes.",
              "z": "Gauge (IOPS vs. limit), Line chart (throughput and burst credits), Table (disks hitting limits).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [Splunk Add-on for Google Cloud Platform](https://splunkbase.splunk.com/app/3088), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics).\n• Ensure the following data sources are available: `sourcetype=azure:monitor:metric` (Microsoft.Compute/disks).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Azure Monitor metrics for managed disks. Monitor `Composite Disk Read/Write IOPS` against the disk SKU IOPS limit and `Composite Disk Read/Write Bytes/sec` against the throughput limit. For burstable disks, track `BurstIOCreditsConsumedPercentage` — when credits exhaust, performance drops to baseline. Alert when consumption exceeds 90% of provisioned capacity sustained over 15 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.compute/disks\"\n| where metric_name IN (\"DiskIOPSReadWrite\",\"DiskMBpsReadWrite\",\"BurstIOCreditsConsumedPercentage\")\n| timechart span=5m avg(average) as value by metric_name, resource_name\n```\n\nUnderstanding this SPL\n\n**Azure Managed Disk Performance Throttling** — Azure managed disks have IOPS and throughput caps based on tier and size. When VMs hit these limits, disk I/O is throttled, causing application slowdowns that are hard to diagnose without platform metrics.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Compute/disks). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:monitor:metric. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:monitor:metric\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where metric_name IN (\"DiskIOPSReadWrite\",\"DiskMBpsReadWrite\",\"BurstIOCreditsConsumedPercentage\")` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by metric_name, resource_name** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Storage by Performance.host span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure Managed Disk Performance Throttling** — Azure managed disks have IOPS and throughput caps based on tier and size. When VMs hit these limits, disk I/O is throttled, causing application slowdowns that are hard to diagnose without platform metrics.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Compute/disks). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.Storage` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (IOPS vs. limit), Line chart (throughput and burst credits), Table (disks hitting limits).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Azure managed disks have IOPS and throughput caps based on tier and size.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Storage by Performance.host span=5m | sort - agg_value",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.0,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 56,
            "none": 0
          }
        },
        {
          "i": "4.3",
          "n": "Google Cloud Platform (GCP)",
          "u": [
            {
              "i": "4.3.1",
              "n": "Audit Log Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "GCP audit logs capture all admin activity and data access. Foundational for security monitoring and compliance in GCP environments.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=google:gcp:pubsub:message` (via Pub/Sub)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" logName=\"*activity\"\n| spath output=method path=protoPayload.methodName\n| spath output=principal path=protoPayload.authenticationInfo.principalEmail\n| stats count by principal, method\n| sort -count",
              "m": "Create a Pub/Sub topic and subscription. Configure a log sink to route audit logs to Pub/Sub. Set up Splunk_TA_google-cloudplatform with a Pub/Sub input. Alert on destructive operations (delete, setIamPolicy).",
              "z": "Table (principal, method, count), Bar chart, Timeline.",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=google:gcp:pubsub:message` (via Pub/Sub).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a Pub/Sub topic and subscription. Configure a log sink to route audit logs to Pub/Sub. Set up Splunk_TA_google-cloudplatform with a Pub/Sub input. Alert on destructive operations (delete, setIamPolicy).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" logName=\"*activity\"\n| spath output=method path=protoPayload.methodName\n| spath output=principal path=protoPayload.authenticationInfo.principalEmail\n| stats count by principal, method\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Audit Log Monitoring** — GCP audit logs capture all admin activity and data access. Foundational for security monitoring and compliance in GCP environments.\n\nDocumented **Data sources**: `sourcetype=google:gcp:pubsub:message` (via Pub/Sub). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Extracts structured paths (JSON/XML) with `spath`.\n• `stats` rolls up events into metrics; results are split **by principal, method** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Audit Log Monitoring** — GCP audit logs capture all admin activity and data access. Foundational for security monitoring and compliance in GCP environments.\n\nDocumented **Data sources**: `sourcetype=google:gcp:pubsub:message` (via Pub/Sub). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (principal, method, count), Bar chart, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "GCP audit logs capture all admin activity and data access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.2",
              "n": "IAM Policy Changes",
              "c": "critical",
              "f": "beginner",
              "v": "IAM binding changes control who can access what in GCP. Unauthorized changes to bindings on projects, folders, or organizations are critical security events.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=google:gcp:pubsub:message`",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.methodName=\"SetIamPolicy\"\n| spath output=resource path=resource.labels\n| spath output=principal path=protoPayload.authenticationInfo.principalEmail\n| table _time principal resource protoPayload.serviceData.policyDelta.bindingDeltas{}\n| sort -_time",
              "m": "Forward admin activity logs via Pub/Sub. Alert on `SetIamPolicy` events, especially those granting `roles/owner` or `roles/editor`. Track with change management.",
              "z": "Events list (critical), Table (who changed what), Timeline.",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098.001",
                "T1098.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=google:gcp:pubsub:message`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward admin activity logs via Pub/Sub. Alert on `SetIamPolicy` events, especially those granting `roles/owner` or `roles/editor`. Track with change management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.methodName=\"SetIamPolicy\"\n| spath output=resource path=resource.labels\n| spath output=principal path=protoPayload.authenticationInfo.principalEmail\n| table _time principal resource protoPayload.serviceData.policyDelta.bindingDeltas{}\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**IAM Policy Changes** — IAM binding changes control who can access what in GCP. Unauthorized changes to bindings on projects, folders, or organizations are critical security events.\n\nDocumented **Data sources**: `sourcetype=google:gcp:pubsub:message`. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Pipeline stage (see **IAM Policy Changes**): table _time principal resource protoPayload.serviceData.policyDelta.bindingDeltas{}\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IAM Policy Changes** — IAM binding changes control who can access what in GCP. Unauthorized changes to bindings on projects, folders, or organizations are critical security events.\n\nDocumented **Data sources**: `sourcetype=google:gcp:pubsub:message`. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events list (critical), Table (who changed what), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Access control binding changes control who can access what in GCP.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "security",
              "_qs": 90,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison"
              ]
            },
            {
              "i": "4.3.3",
              "n": "VPC Flow Log Analysis",
              "c": "high",
              "f": "beginner",
              "v": "GCP VPC Flow Logs provide network traffic visibility. Same use case as AWS/Azure — detect rejected traffic, anomalies, exfiltration.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "VPC Flow Logs via Pub/Sub",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" logName=\"*vpc_flows\"\n| spath\n| eval src=coalesce(src,'connection.src_ip'), dest=coalesce(dest,'connection.dest_ip'), dest_port=coalesce(dest_port,'connection.dest_port')\n| stats sum(bytes_sent) as total_bytes by src, dest, dest_port\n| sort -total_bytes | head 20",
              "m": "Enable VPC Flow Logs on subnets. Sink to Pub/Sub and ingest in Splunk. Analyze for top talkers, rejected flows, and anomalous destinations.",
              "z": "Table, Sankey diagram, Timechart, Map.",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: VPC Flow Logs via Pub/Sub.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable VPC Flow Logs on subnets. Sink to Pub/Sub and ingest in Splunk. Analyze for top talkers, rejected flows, and anomalous destinations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" logName=\"*vpc_flows\"\n| spath\n| eval src=coalesce(src,'connection.src_ip'), dest=coalesce(dest,'connection.dest_ip'), dest_port=coalesce(dest_port,'connection.dest_port')\n| stats sum(bytes_sent) as total_bytes by src, dest, dest_port\n| sort -total_bytes | head 20\n```\n\nUnderstanding this SPL\n\n**VPC Flow Log Analysis** — GCP VPC Flow Logs provide network traffic visibility. Same use case as AWS/Azure — detect rejected traffic, anomalies, exfiltration.\n\nDocumented **Data sources**: VPC Flow Logs via Pub/Sub. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• `eval` defines or adjusts **src** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Sankey diagram, Timechart, Map.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "GCP VPC Flow Logs provide network traffic visibility.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.4",
              "n": "GKE Cluster Health",
              "c": "high",
              "f": "beginner",
              "v": "GKE cluster health monitoring for managed Kubernetes in GCP. Node pools, upgrade status, and workload health.",
              "t": "`Splunk_TA_google-cloudplatform`, Splunk OTel Collector",
              "d": "GKE logs via Pub/Sub, Cloud Monitoring metrics",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"k8s_cluster\"\n| spath output=severity path=severity\n| where severity=\"ERROR\"\n| stats count by resource.labels.cluster_name, textPayload\n| sort -count",
              "m": "GKE logs flow through Cloud Logging. Sink to Pub/Sub for Splunk ingestion. Deploy OTel Collector in GKE for K8s-native monitoring (see Category 3.2).",
              "z": "Status panel, Error table, Timeline.",
              "kfp": "During a roll-out, nodes come and go and metrics briefly look off. We ignore short windows that line up with a deployment and we focus on steady-state failures after the cluster is calm again.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`, Splunk OTel Collector.\n• Ensure the following data sources are available: GKE logs via Pub/Sub, Cloud Monitoring metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nGKE logs flow through Cloud Logging. Sink to Pub/Sub for Splunk ingestion. Deploy OTel Collector in GKE for K8s-native monitoring (see Category 3.2).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"k8s_cluster\"\n| spath output=severity path=severity\n| where severity=\"ERROR\"\n| stats count by resource.labels.cluster_name, textPayload\n| sort -count\n```\n\nUnderstanding this SPL\n\n**GKE Cluster Health** — GKE cluster health monitoring for managed Kubernetes in GCP. Node pools, upgrade status, and workload health.\n\nDocumented **Data sources**: GKE logs via Pub/Sub, Cloud Monitoring metrics. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`, Splunk OTel Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Filters the current rows with `where severity=\"ERROR\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by resource.labels.cluster_name, textPayload** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status panel, Error table, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Clusters cluster health monitoring for managed Kubernetes in GCP.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp",
                "opentelemetry"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.5",
              "n": "Security Command Center",
              "c": "critical",
              "f": "beginner",
              "v": "SCC provides vulnerability findings and threat detections across GCP. Centralizing in Splunk enables multi-cloud security correlation.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "SCC findings via Pub/Sub notification",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"scc_finding\"\n| spath output=severity path=finding.severity\n| spath output=category path=finding.category\n| where severity=\"CRITICAL\" OR severity=\"HIGH\"\n| table _time category severity finding.resourceName finding.description\n| sort -_time",
              "m": "Configure SCC to publish findings to Pub/Sub. Ingest via Splunk TA. Alert on CRITICAL and HIGH severity findings.",
              "z": "Table by severity, Bar chart (finding categories), Trend line.",
              "kfp": "A fresh scanner run or a vendor re-rate can resurface a finding we already know about. We deduplicate on resource and time and we only escalate when the exposure is new or the severity stepped up.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: SCC findings via Pub/Sub notification.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure SCC to publish findings to Pub/Sub. Ingest via Splunk TA. Alert on CRITICAL and HIGH severity findings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"scc_finding\"\n| spath output=severity path=finding.severity\n| spath output=category path=finding.category\n| where severity=\"CRITICAL\" OR severity=\"HIGH\"\n| table _time category severity finding.resourceName finding.description\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Security Command Center** — SCC provides vulnerability findings and threat detections across GCP. Centralizing in Splunk enables multi-cloud security correlation.\n\nDocumented **Data sources**: SCC findings via Pub/Sub notification. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Filters the current rows with `where severity=\"CRITICAL\" OR severity=\"HIGH\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Security Command Center**): table _time category severity finding.resourceName finding.description\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table by severity, Bar chart (finding categories), Trend line.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "SCC provides vulnerability findings and threat detections across GCP.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.6",
              "n": "GCE Instance Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "Compute Engine VM performance monitoring for capacity planning and baseline trending without guest-level agents.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Cloud Monitoring metrics via API",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"compute.googleapis.com/instance/cpu/utilization\"\n| timechart span=1h avg(value) by resource.labels.instance_id",
              "m": "Configure Cloud Monitoring metric collection in the Splunk TA. Collect CPU utilization, disk I/O, and network metrics. Alert on sustained high utilization.",
              "z": "Line chart, Heatmap, Gauge.",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [
                "T1578.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Cloud Monitoring metrics via API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Cloud Monitoring metric collection in the Splunk TA. Collect CPU utilization, disk I/O, and network metrics. Alert on sustained high utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"compute.googleapis.com/instance/cpu/utilization\"\n| timechart span=1h avg(value) by resource.labels.instance_id\n```\n\nUnderstanding this SPL\n\n**GCE Instance Monitoring** — Compute Engine VM performance monitoring for capacity planning and baseline trending without guest-level agents.\n\nDocumented **Data sources**: Cloud Monitoring metrics via API. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by resource.labels.instance_id** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart, Heatmap, Gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Compute Engine VM performance monitoring for capacity planning and baseline trending without guest-level agents.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.7",
              "n": "BigQuery Audit and Cost",
              "c": "medium",
              "f": "intermediate",
              "v": "BigQuery can generate massive costs from poorly optimized queries. Audit and cost tracking prevents bill shock and identifies optimization opportunities.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "BigQuery audit logs via Pub/Sub",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"bigquery.googleapis.com\" protoPayload.methodName=\"jobservice.jobcompleted\"\n| spath output=bytes_billed path=protoPayload.serviceData.jobCompletedEvent.job.jobStatistics.totalBilledBytes\n| spath output=user path=protoPayload.authenticationInfo.principalEmail\n| eval cost_usd = round(bytes_billed / 1099511627776 * 5, 4)\n| stats sum(cost_usd) as total_cost, count as queries by user\n| sort -total_cost",
              "m": "Forward BigQuery audit logs via Pub/Sub. Calculate cost from billed bytes ($5/TB). Create dashboard showing cost per user, top expensive queries, and slot utilization.",
              "z": "Table (user, queries, cost), Bar chart (top costly queries), Trend line (daily cost).",
              "kfp": "Cloud billing and cost files often arrive a day or more late, include credits and refunds, and use tags that do not line up with app names, so a spike is not always a real overrun. We narrow alerts to sustained trends and we exclude test or sandbox projects when we can.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [
                "T1530",
                "T1619"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: BigQuery audit logs via Pub/Sub.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward BigQuery audit logs via Pub/Sub. Calculate cost from billed bytes ($5/TB). Create dashboard showing cost per user, top expensive queries, and slot utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"bigquery.googleapis.com\" protoPayload.methodName=\"jobservice.jobcompleted\"\n| spath output=bytes_billed path=protoPayload.serviceData.jobCompletedEvent.job.jobStatistics.totalBilledBytes\n| spath output=user path=protoPayload.authenticationInfo.principalEmail\n| eval cost_usd = round(bytes_billed / 1099511627776 * 5, 4)\n| stats sum(cost_usd) as total_cost, count as queries by user\n| sort -total_cost\n```\n\nUnderstanding this SPL\n\n**BigQuery Audit and Cost** — BigQuery can generate massive costs from poorly optimized queries. Audit and cost tracking prevents bill shock and identifies optimization opportunities.\n\nDocumented **Data sources**: BigQuery audit logs via Pub/Sub. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Extracts structured paths (JSON/XML) with `spath`.\n• `eval` defines or adjusts **cost_usd** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, queries, cost), Bar chart (top costly queries), Trend line (daily cost).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "BigQuery can generate massive costs from poorly optimized queries.",
              "mtype": [
                "Security",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.8",
              "n": "Cloud Run/Functions Errors",
              "c": "medium",
              "f": "beginner",
              "v": "Serverless function errors and cold starts impact application reliability and user experience.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Cloud Run/Functions logs via Cloud Logging",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"cloud_function\" severity=\"ERROR\"\n| spath output=function path=resource.labels.function_name\n| stats count by function, textPayload\n| sort -count",
              "m": "Forward Cloud Run/Functions logs via Pub/Sub. Monitor error rates, execution duration, and cold start frequency. Alert on error rate >5%.",
              "z": "Line chart (errors over time), Bar chart (top error functions), Single value.",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Cloud Run/Functions logs via Cloud Logging.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Cloud Run/Functions logs via Pub/Sub. Monitor error rates, execution duration, and cold start frequency. Alert on error rate >5%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"cloud_function\" severity=\"ERROR\"\n| spath output=function path=resource.labels.function_name\n| stats count by function, textPayload\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cloud Run/Functions Errors** — Serverless function errors and cold starts impact application reliability and user experience.\n\nDocumented **Data sources**: Cloud Run/Functions logs via Cloud Logging. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• `stats` rolls up events into metrics; results are split **by function, textPayload** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (errors over time), Bar chart (top error functions), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Serverless function errors and cold starts impact application reliability and user experience.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.9",
              "n": "Cloud Load Balancing Backend Health and Request Count",
              "c": "critical",
              "f": "beginner",
              "v": "Unhealthy backends receive no traffic; request count and latency indicate load and performance. Essential for global and regional LB reliability.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Cloud Monitoring (loadbalancing.googleapis.com/https/request_count, backend_utilization)",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"loadbalancing.googleapis.com/https/backend_utilization\"\n| where value > 0.9\n| timechart span=5m avg(value) by resource.labels.backend_name",
              "m": "Collect Load Balancing metrics. Alert when backend health is unhealthy or backend_utilization >90%. Monitor request_count and latency by backend and URL map.",
              "z": "Status panel (backend health), Line chart (requests, latency by backend), Table (backend, utilization).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Cloud Monitoring (loadbalancing.googleapis.com/https/request_count, backend_utilization).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Load Balancing metrics. Alert when backend health is unhealthy or backend_utilization >90%. Monitor request_count and latency by backend and URL map.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"loadbalancing.googleapis.com/https/backend_utilization\"\n| where value > 0.9\n| timechart span=5m avg(value) by resource.labels.backend_name\n```\n\nUnderstanding this SPL\n\n**Cloud Load Balancing Backend Health and Request Count** — Unhealthy backends receive no traffic; request count and latency indicate load and performance. Essential for global and regional LB reliability.\n\nDocumented **Data sources**: Cloud Monitoring (loadbalancing.googleapis.com/https/request_count, backend_utilization). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where value > 0.9` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by resource.labels.backend_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status panel (backend health), Line chart (requests, latency by backend), Table (backend, utilization).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Unhealthy backends receive no traffic; request count and latency indicate load and performance.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.10",
              "n": "Cloud Pub/Sub Subscription Backlog and Dead Letter",
              "c": "high",
              "f": "beginner",
              "v": "Backlog (unacked messages) and dead-letter count indicate consumers falling behind or failing. Prevents message loss and SLA breach.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Cloud Monitoring (pubsub.googleapis.com/subscription/num_undelivered_messages, dead_letter_message_count)",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"pubsub.googleapis.com/subscription/num_undelivered_messages\"\n| where value > 1000\n| timechart span=5m avg(value) by resource.labels.subscription_id",
              "m": "Collect Pub/Sub subscription metrics. Alert when num_undelivered_messages exceeds threshold or dead_letter_message_count > 0. Monitor old_unacked_message_age for consumer lag.",
              "z": "Line chart (backlog, dead letter by subscription), Table (subscription, backlog), Single value (max backlog).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Cloud Monitoring (pubsub.googleapis.com/subscription/num_undelivered_messages, dead_letter_message_count).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Pub/Sub subscription metrics. Alert when num_undelivered_messages exceeds threshold or dead_letter_message_count > 0. Monitor old_unacked_message_age for consumer lag.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"pubsub.googleapis.com/subscription/num_undelivered_messages\"\n| where value > 1000\n| timechart span=5m avg(value) by resource.labels.subscription_id\n```\n\nUnderstanding this SPL\n\n**Cloud Pub/Sub Subscription Backlog and Dead Letter** — Backlog (unacked messages) and dead-letter count indicate consumers falling behind or failing. Prevents message loss and SLA breach.\n\nDocumented **Data sources**: Cloud Monitoring (pubsub.googleapis.com/subscription/num_undelivered_messages, dead_letter_message_count). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where value > 1000` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by resource.labels.subscription_id** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (backlog, dead letter by subscription), Table (subscription, backlog), Single value (max backlog).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Backlog (unacked messages) and dead-letter count indicate consumers falling behind or failing.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.11",
              "n": "Cloud Storage (GCS) Request Metrics and Cost",
              "c": "medium",
              "f": "beginner",
              "v": "GCS request count and latency support performance tuning. Cost tracking by bucket/class prevents bill shock.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Cloud Monitoring (storage.googleapis.com/request_count), Billing export",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"storage.googleapis.com\"\n| spath output=method path=protoPayload.methodName\n| stats count by method resource.labels.bucket_name\n| sort -count",
              "m": "Enable GCS request logging to Cloud Logging; sink to Pub/Sub for Splunk. Collect storage metrics. Ingest billing export for cost by bucket. Alert on anomalous request volume or cost spike.",
              "z": "Line chart (requests, cost by bucket), Table (bucket, method, count), Bar chart (cost by bucket).",
              "kfp": "Cloud billing and cost files often arrive a day or more late, include credits and refunds, and use tags that do not line up with app names, so a spike is not always a real overrun. We narrow alerts to sustained trends and we exclude test or sandbox projects when we can.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [
                "T1619",
                "T1530"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Cloud Monitoring (storage.googleapis.com/request_count), Billing export.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable GCS request logging to Cloud Logging; sink to Pub/Sub for Splunk. Collect storage metrics. Ingest billing export for cost by bucket. Alert on anomalous request volume or cost spike.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"storage.googleapis.com\"\n| spath output=method path=protoPayload.methodName\n| stats count by method resource.labels.bucket_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cloud Storage (GCS) Request Metrics and Cost** — GCS request count and latency support performance tuning. Cost tracking by bucket/class prevents bill shock.\n\nDocumented **Data sources**: Cloud Monitoring (storage.googleapis.com/request_count), Billing export. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• `stats` rolls up events into metrics; results are split **by method resource.labels.bucket_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (requests, cost by bucket), Table (bucket, method, count), Bar chart (cost by bucket).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "GCS request count and latency support performance tuning.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.12",
              "n": "Cloud SQL Instance Metrics and Replication Lag",
              "c": "high",
              "f": "beginner",
              "v": "Cloud SQL CPU, storage, and replication lag impact application performance and DR. Monitoring supports capacity and replica health.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Cloud Monitoring (cloudsql.googleapis.com/database/cpu/utilization, replication/replica_lag)",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"cloudsql.googleapis.com/database/replication/replica_lag\"\n| where value > 10\n| timechart span=5m avg(value) by resource.labels.database_id",
              "m": "Collect Cloud SQL metrics. Alert when replica_lag > 10 seconds or CPU utilization > 80%. Monitor disk utilization and connection count. Enable slow query log for query-level analysis.",
              "z": "Line chart (CPU, lag, connections by instance), Table (instance, lag), Gauge (replica lag).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Cloud Monitoring (cloudsql.googleapis.com/database/cpu/utilization, replication/replica_lag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Cloud SQL metrics. Alert when replica_lag > 10 seconds or CPU utilization > 80%. Monitor disk utilization and connection count. Enable slow query log for query-level analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"cloudsql.googleapis.com/database/replication/replica_lag\"\n| where value > 10\n| timechart span=5m avg(value) by resource.labels.database_id\n```\n\nUnderstanding this SPL\n\n**Cloud SQL Instance Metrics and Replication Lag** — Cloud SQL CPU, storage, and replication lag impact application performance and DR. Monitoring supports capacity and replica health.\n\nDocumented **Data sources**: Cloud Monitoring (cloudsql.googleapis.com/database/cpu/utilization, replication/replica_lag). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where value > 10` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by resource.labels.database_id** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU, lag, connections by instance), Table (instance, lag), Gauge (replica lag).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cloud SQL CPU, storage, and replication lag impact application performance and DR.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.13",
              "n": "Cloud Build Build Failures and Duration",
              "c": "high",
              "f": "beginner",
              "v": "Build failures block deployments. Duration trends support pipeline optimization and quota management.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Cloud Build logs via Pub/Sub (build completion events)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"build\" status=\"FAILURE\"\n| table _time buildId triggerId status message\n| sort -_time",
              "m": "Sink Cloud Build logs to Pub/Sub; ingest in Splunk. Alert when status=FAILURE or TIMEOUT. Track build duration and success rate by trigger. Correlate with source repo and branch.",
              "z": "Line chart (builds, failures by trigger), Table (build, trigger, status, duration), Single value (failure rate).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Cloud Build logs via Pub/Sub (build completion events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSink Cloud Build logs to Pub/Sub; ingest in Splunk. Alert when status=FAILURE or TIMEOUT. Track build duration and success rate by trigger. Correlate with source repo and branch.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"build\" status=\"FAILURE\"\n| table _time buildId triggerId status message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Cloud Build Build Failures and Duration** — Build failures block deployments. Duration trends support pipeline optimization and quota management.\n\nDocumented **Data sources**: Cloud Build logs via Pub/Sub (build completion events). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Cloud Build Build Failures and Duration**): table _time buildId triggerId status message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (builds, failures by trigger), Table (build, trigger, status, duration), Single value (failure rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Build failures block deployments.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.14",
              "n": "GKE Node Pool Autoscaling and Upgrade Events",
              "c": "high",
              "f": "intermediate",
              "v": "Node pool scale-up/down and upgrade events affect workload placement and availability. Monitoring supports capacity and upgrade windows.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "GKE cluster logs, Cloud Monitoring (container metrics)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"k8s_cluster\" textPayload=*\"upgrade\"*\n| table _time resource.labels.cluster_name textPayload\n| sort -_time",
              "m": "Ingest GKE logs (cluster operations, node pool events). Monitor node count and autoscaler events. Track upgrade and maintenance window events. Alert on node pool scaling failures.",
              "z": "Timeline (node pool events), Table (cluster, pool, node count), Line chart (node count over time).",
              "kfp": "During a roll-out, nodes come and go and metrics briefly look off. We ignore short windows that line up with a deployment and we focus on steady-state failures after the cluster is calm again.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: GKE cluster logs, Cloud Monitoring (container metrics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest GKE logs (cluster operations, node pool events). Monitor node count and autoscaler events. Track upgrade and maintenance window events. Alert on node pool scaling failures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"k8s_cluster\" textPayload=*\"upgrade\"*\n| table _time resource.labels.cluster_name textPayload\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**GKE Node Pool Autoscaling and Upgrade Events** — Node pool scale-up/down and upgrade events affect workload placement and availability. Monitoring supports capacity and upgrade windows.\n\nDocumented **Data sources**: GKE cluster logs, Cloud Monitoring (container metrics). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **GKE Node Pool Autoscaling and Upgrade Events**): table _time resource.labels.cluster_name textPayload\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (node pool events), Table (cluster, pool, node count), Line chart (node count over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Node pool scale-up/down and upgrade events affect workload placement and availability.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.15",
              "n": "Cloud CDN Cache Hit Ratio and Egress",
              "c": "medium",
              "f": "beginner",
              "v": "Cache hit ratio and egress volume impact latency and cost. Low hit ratio increases origin load and egress charges.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Cloud Monitoring (cdn.googleapis.com/cache/hit_ratio, egress)",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"cdn.googleapis.com/cache/hit_ratio\"\n| bin _time span=1h\n| stats avg(value) as hit_ratio by _time, resource.labels.origin_name\n| where hit_ratio < 0.7",
              "m": "Collect CDN metrics. Calculate hit ratio from cache hits and misses. Alert when hit ratio < 70% or egress spike. Optimize cache TTL and key design based on metrics.",
              "z": "Line chart (hit ratio, egress by origin), Table (origin, hit ratio), Gauge (overall hit ratio).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Cloud Monitoring (cdn.googleapis.com/cache/hit_ratio, egress).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect CDN metrics. Calculate hit ratio from cache hits and misses. Alert when hit ratio < 70% or egress spike. Optimize cache TTL and key design based on metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"cdn.googleapis.com/cache/hit_ratio\"\n| bin _time span=1h\n| stats avg(value) as hit_ratio by _time, resource.labels.origin_name\n| where hit_ratio < 0.7\n```\n\nUnderstanding this SPL\n\n**Cloud CDN Cache Hit Ratio and Egress** — Cache hit ratio and egress volume impact latency and cost. Low hit ratio increases origin load and egress charges.\n\nDocumented **Data sources**: Cloud Monitoring (cdn.googleapis.com/cache/hit_ratio, egress). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, resource.labels.origin_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where hit_ratio < 0.7` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (hit ratio, egress by origin), Table (origin, hit ratio), Gauge (overall hit ratio).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cache hit ratio and egress volume impact latency and cost.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.16",
              "n": "Artifact Registry Push/Pull and Vulnerability Scan",
              "c": "medium",
              "f": "intermediate",
              "v": "Unusual push/pull may indicate abuse. Vulnerability scan findings in images require remediation before deployment.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Audit logs (Artifact Registry API), Container Analysis (vulnerability occurrences)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"artifactregistry.googleapis.com\"\n| spath output=method path=protoPayload.methodName\n| stats count by method resource.labels.repository\n| sort -count",
              "m": "Forward Artifact Registry audit logs via Pub/Sub. Ingest Container Analysis for CVE findings. Alert on critical/high in production repos. Baseline push/pull by principal; alert on anomalies.",
              "z": "Table (repo, method, count), Bar chart (top push/pull), Table (image, CVE, severity).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [
                "T1610",
                "T1613"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Audit logs (Artifact Registry API), Container Analysis (vulnerability occurrences).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Artifact Registry audit logs via Pub/Sub. Ingest Container Analysis for CVE findings. Alert on critical/high in production repos. Baseline push/pull by principal; alert on anomalies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"artifactregistry.googleapis.com\"\n| spath output=method path=protoPayload.methodName\n| stats count by method resource.labels.repository\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Artifact Registry Push/Pull and Vulnerability Scan** — Unusual push/pull may indicate abuse. Vulnerability scan findings in images require remediation before deployment.\n\nDocumented **Data sources**: Audit logs (Artifact Registry API), Container Analysis (vulnerability occurrences). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• `stats` rolls up events into metrics; results are split **by method resource.labels.repository** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (repo, method, count), Bar chart (top push/pull), Table (image, CVE, severity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Unusual push/pull may indicate abuse.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.17",
              "n": "Cloud Logging Export Sink and Exclusion Filter",
              "c": "medium",
              "f": "intermediate",
              "v": "Log sink and exclusion changes affect what is exported to Splunk or other destinations. Unauthorized changes create visibility gaps.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Audit logs (logging.googleapis.com)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"logging.googleapis.com\" (protoPayload.methodName=\"CreateSink\" OR protoPayload.methodName=\"UpdateSink\" OR protoPayload.methodName=\"DeleteSink\")\n| table _time protoPayload.authenticationInfo.principalEmail protoPayload.methodName resource.labels.sink_id\n| sort -_time",
              "m": "Forward audit logs. Alert on CreateSink, UpdateSink, DeleteSink. Track sink destinations and filters. Ensure critical sinks (e.g. to Pub/Sub for Splunk) are not modified without change control.",
              "z": "Table (who, what, sink), Timeline (sink changes), Single value (sink count).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Audit logs (logging.googleapis.com).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward audit logs. Alert on CreateSink, UpdateSink, DeleteSink. Track sink destinations and filters. Ensure critical sinks (e.g. to Pub/Sub for Splunk) are not modified without change control.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"logging.googleapis.com\" (protoPayload.methodName=\"CreateSink\" OR protoPayload.methodName=\"UpdateSink\" OR protoPayload.methodName=\"DeleteSink\")\n| table _time protoPayload.authenticationInfo.principalEmail protoPayload.methodName resource.labels.sink_id\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Cloud Logging Export Sink and Exclusion Filter** — Log sink and exclusion changes affect what is exported to Splunk or other destinations. Unauthorized changes create visibility gaps.\n\nDocumented **Data sources**: Audit logs (logging.googleapis.com). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Cloud Logging Export Sink and Exclusion Filter**): table _time protoPayload.authenticationInfo.principalEmail protoPayload.methodName resource.labels.sink_id\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cloud Logging Export Sink and Exclusion Filter** — Log sink and exclusion changes affect what is exported to Splunk or other destinations. Unauthorized changes create visibility gaps.\n\nDocumented **Data sources**: Audit logs (logging.googleapis.com). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (who, what, sink), Timeline (sink changes), Single value (sink count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Log sink and exclusion changes affect what is exported to our monitoring platform or other destinations.",
              "mtype": [
                "Security",
                "Configuration"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.18",
              "n": "Cloud IAM Policy and Binding Changes (Beyond SetIamPolicy)",
              "c": "high",
              "f": "beginner",
              "v": "IAM policy and custom role changes affect who can access resources. Broader than SetIamPolicy — includes role create/delete and org policy.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Audit logs (iam.googleapis.com, admin.googleapis.com)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"iam.googleapis.com\" (protoPayload.methodName=*Create* OR protoPayload.methodName=*Delete* OR protoPayload.methodName=*Update*)\n| table _time protoPayload.authenticationInfo.principalEmail protoPayload.methodName resource.labels\n| sort -_time",
              "m": "Forward IAM and Admin API audit logs. Alert on CreateRole, DeleteRole, SetIamPolicy on project/folder/org. Track custom role changes. Correlate with security review process.",
              "z": "Table (principal, method, resource), Timeline (IAM changes), Bar chart by method.",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098.001",
                "T1098.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Audit logs (iam.googleapis.com, admin.googleapis.com).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward IAM and Admin API audit logs. Alert on CreateRole, DeleteRole, SetIamPolicy on project/folder/org. Track custom role changes. Correlate with security review process.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"iam.googleapis.com\" (protoPayload.methodName=*Create* OR protoPayload.methodName=*Delete* OR protoPayload.methodName=*Update*)\n| table _time protoPayload.authenticationInfo.principalEmail protoPayload.methodName resource.labels\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Cloud IAM Policy and Binding Changes (Beyond SetIamPolicy)** — IAM policy and custom role changes affect who can access resources. Broader than SetIamPolicy — includes role create/delete and org policy.\n\nDocumented **Data sources**: Audit logs (iam.googleapis.com, admin.googleapis.com). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Cloud IAM Policy and Binding Changes (Beyond SetIamPolicy)**): table _time protoPayload.authenticationInfo.principalEmail protoPayload.methodName resource.labels\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cloud IAM Policy and Binding Changes (Beyond SetIamPolicy)** — IAM policy and custom role changes affect who can access resources. Broader than SetIamPolicy — includes role create/delete and org policy.\n\nDocumented **Data sources**: Audit logs (iam.googleapis.com, admin.googleapis.com). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (principal, method, resource), Timeline (IAM changes), Bar chart by method.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Access control policy and custom role changes affect who can access resources.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object_category All_Changes.action span=1h\n| sort -count",
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.19",
              "n": "Cloud Billing Budget Alerts and Anomaly",
              "c": "medium",
              "f": "intermediate",
              "v": "Budget alerts and spend anomalies prevent cost overruns. Early detection enables corrective action before invoice.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Billing export to BigQuery or Pub/Sub, Budget alert notifications",
              "q": "index=gcp sourcetype=\"gcp:billing\"\n| bin _time span=1d\n| stats sum(cost) as daily_cost by _time, service\n| eventstats avg(daily_cost) as avg_cost, stdev(daily_cost) as stdev_cost by service\n| where daily_cost > avg_cost + 2*stdev_cost",
              "m": "Enable billing export. Ingest daily/monthly cost data. Create budget alerts and forward to Splunk. Calculate baseline and alert on 2-sigma anomaly by service or project.",
              "z": "Line chart (cost with threshold), Table (service, cost, anomaly), Stacked area (cost by service).",
              "kfp": "Cloud billing and cost files often arrive a day or more late, include credits and refunds, and use tags that do not line up with app names, so a spike is not always a real overrun. We narrow alerts to sustained trends and we exclude test or sandbox projects when we can.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Billing export to BigQuery or Pub/Sub, Budget alert notifications.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable billing export. Ingest daily/monthly cost data. Create budget alerts and forward to Splunk. Calculate baseline and alert on 2-sigma anomaly by service or project.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"gcp:billing\"\n| bin _time span=1d\n| stats sum(cost) as daily_cost by _time, service\n| eventstats avg(daily_cost) as avg_cost, stdev(daily_cost) as stdev_cost by service\n| where daily_cost > avg_cost + 2*stdev_cost\n```\n\nUnderstanding this SPL\n\n**Cloud Billing Budget Alerts and Anomaly** — Budget alerts and spend anomalies prevent cost overruns. Early detection enables corrective action before invoice.\n\nDocumented **Data sources**: Billing export to BigQuery or Pub/Sub, Budget alert notifications. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: gcp:billing. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"gcp:billing\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where daily_cost > avg_cost + 2*stdev_cost` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (cost with threshold), Table (service, cost, anomaly), Stacked area (cost by service).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Budget alerts and spend anomalies prevent cost overruns.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.20",
              "n": "Cloud Armor Security Policy and DDoS Metrics",
              "c": "high",
              "f": "intermediate",
              "v": "Cloud Armor blocks and DDoS metrics indicate attack traffic and policy effectiveness. Essential for WAF and DDoS visibility.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Cloud Logging (loadbalancing.googleapis.com/http_requests with security policy), Cloud Monitoring (DDoS metrics)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" jsonPayload.enforcedSecurityPolicy.name=*\n| stats count by jsonPayload.enforcedSecurityPolicy.outcome jsonPayload.enforcedSecurityPolicy.name\n| sort -count",
              "m": "Enable HTTP(S) LB logging with security policy info. Ingest in Splunk. Alert on high block rate or DDoS mitigation events. Dashboard allowed vs denied by rule and source.",
              "z": "Table (policy, outcome, count), Bar chart (blocks by rule), Timeline (block rate).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [
                "T1562.007",
                "T1580"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Cloud Logging (loadbalancing.googleapis.com/http_requests with security policy), Cloud Monitoring (DDoS metrics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable HTTP(S) LB logging with security policy info. Ingest in Splunk. Alert on high block rate or DDoS mitigation events. Dashboard allowed vs denied by rule and source.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" jsonPayload.enforcedSecurityPolicy.name=*\n| stats count by jsonPayload.enforcedSecurityPolicy.outcome jsonPayload.enforcedSecurityPolicy.name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cloud Armor Security Policy and DDoS Metrics** — Cloud Armor blocks and DDoS metrics indicate attack traffic and policy effectiveness. Essential for WAF and DDoS visibility.\n\nDocumented **Data sources**: Cloud Logging (loadbalancing.googleapis.com/http_requests with security policy), Cloud Monitoring (DDoS metrics). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by jsonPayload.enforcedSecurityPolicy.outcome jsonPayload.enforcedSecurityPolicy.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (policy, outcome, count), Bar chart (blocks by rule), Timeline (block rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cloud Armor blocks and overload attacks metrics indicate attack traffic and policy effectiveness.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.21",
              "n": "Cloud Run Revision Traffic and Error Rate",
              "c": "high",
              "f": "beginner",
              "v": "Cloud Run revision traffic and errors indicate service health. Supports canary and blue-green deployment monitoring.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Cloud Run metrics (request_count, container_instance_count, container_cpu_utilizations)",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"run.googleapis.com/request_count\"\n| timechart span=5m sum(value) by resource.labels.revision_name\n| eval error_rate = request_count_5xx / request_count * 100\n| where error_rate > 1",
              "m": "Collect Cloud Run metrics. Alert on 5xx rate >1% or container instance count spike. Monitor cold start and latency. Track traffic split across revisions for canary analysis.",
              "z": "Line chart (requests, errors by revision), Table (revision, error rate), Gauge (traffic % by revision).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Cloud Run metrics (request_count, container_instance_count, container_cpu_utilizations).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Cloud Run metrics. Alert on 5xx rate >1% or container instance count spike. Monitor cold start and latency. Track traffic split across revisions for canary analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"run.googleapis.com/request_count\"\n| timechart span=5m sum(value) by resource.labels.revision_name\n| eval error_rate = request_count_5xx / request_count * 100\n| where error_rate > 1\n```\n\nUnderstanding this SPL\n\n**Cloud Run Revision Traffic and Error Rate** — Cloud Run revision traffic and errors indicate service health. Supports canary and blue-green deployment monitoring.\n\nDocumented **Data sources**: Cloud Run metrics (request_count, container_instance_count, container_cpu_utilizations). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by resource.labels.revision_name** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate > 1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (requests, errors by revision), Table (revision, error rate), Gauge (traffic % by revision).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cloud Run revision traffic and errors indicate service health.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.22",
              "n": "Dataproc Cluster and Job Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Dataproc cluster and job failures break data pipelines. Monitoring supports reliability and cost (preemptible) optimization.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Dataproc logs (cluster and job state), Cloud Monitoring (dataproc cluster metrics)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"dataproc_cluster\" severity=\"ERROR\"\n| table _time resource.labels.cluster_name textPayload\n| sort -_time",
              "m": "Sink Dataproc logs to Pub/Sub. Ingest cluster state and job completion. Alert on cluster ERROR or job FAILED. Monitor preemptible node loss for cost vs. reliability trade-off.",
              "z": "Table (cluster, job, state), Timeline (job failures), Bar chart (failures by job type).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Dataproc logs (cluster and job state), Cloud Monitoring (dataproc cluster metrics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSink Dataproc logs to Pub/Sub. Ingest cluster state and job completion. Alert on cluster ERROR or job FAILED. Monitor preemptible node loss for cost vs. reliability trade-off.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"dataproc_cluster\" severity=\"ERROR\"\n| table _time resource.labels.cluster_name textPayload\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Dataproc Cluster and Job Failures** — Dataproc cluster and job failures break data pipelines. Monitoring supports reliability and cost (preemptible) optimization.\n\nDocumented **Data sources**: Dataproc logs (cluster and job state), Cloud Monitoring (dataproc cluster metrics). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Dataproc Cluster and Job Failures**): table _time resource.labels.cluster_name textPayload\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cluster, job, state), Timeline (job failures), Bar chart (failures by job type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Dataproc cluster and job failures break data pipelines.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.23",
              "n": "VPC Service Controls Perimeter Violations",
              "c": "critical",
              "f": "intermediate",
              "v": "VPC Service Controls enforce network perimeter. Violations indicate data exfiltration attempts or misconfigured access. Critical for data perimeter security.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Access Context Manager / VPC SC audit logs (perimeter violation events)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"accesscontextmanager.googleapis.com\"\n| search \"violation\" OR \"perimeter\"\n| table _time protoPayload.authenticationInfo.principalEmail protoPayload.requestMetadata.callerIp resource\n| sort -_time",
              "m": "Enable VPC SC violation logging. Forward to Pub/Sub and Splunk. Alert on every violation. Correlate with principal, source IP, and resource. Use for perimeter tuning and incident response.",
              "z": "Table (principal, resource, violation), Timeline (violations), Map (source IPs).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [
                "T1562.007"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Access Context Manager / VPC SC audit logs (perimeter violation events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable VPC SC violation logging. Forward to Pub/Sub and Splunk. Alert on every violation. Correlate with principal, source IP, and resource. Use for perimeter tuning and incident response.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"accesscontextmanager.googleapis.com\"\n| search \"violation\" OR \"perimeter\"\n| table _time protoPayload.authenticationInfo.principalEmail protoPayload.requestMetadata.callerIp resource\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**VPC Service Controls Perimeter Violations** — VPC Service Controls enforce network perimeter. Violations indicate data exfiltration attempts or misconfigured access. Critical for data perimeter security.\n\nDocumented **Data sources**: Access Context Manager / VPC SC audit logs (perimeter violation events). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **VPC Service Controls Perimeter Violations**): table _time protoPayload.authenticationInfo.principalEmail protoPayload.requestMetadata.callerIp resource\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (principal, resource, violation), Timeline (violations), Map (source IPs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "VPC Service Controls enforce network perimeter.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.24",
              "n": "GCP Cloud Run Cold Start Rate",
              "c": "medium",
              "f": "intermediate",
              "v": "Serverless cold start impact on request latency. High cold start rates cause P99 latency spikes and timeouts for scale-to-zero services.",
              "t": "Custom (GCP Monitoring API)",
              "d": "Cloud Run metrics (request_latencies, instance_count, container/startup_latencies)",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"run.googleapis.com/request_count\" OR metric.type=\"run.googleapis.com/container/instance_count\"\n| eval metric_type=coalesce(metric.type, 'run.googleapis.com/request_count')\n| stats sum(value) as requests, latest(value) as instances by resource.labels.service_name, metric_type, bin(_time, 5m)\n| eval cold_start_indicator=if(instances=0 AND requests>0, 1, 0)\n| stats sum(requests) as total_requests, sum(cold_start_indicator) as cold_start_events by resource.labels.service_name\n| eval cold_start_pct=round(cold_start_events/total_requests*100, 1)\n| where cold_start_pct > 5\n| table resource.labels.service_name total_requests cold_start_events cold_start_pct\n| sort -cold_start_pct",
              "m": "Use GCP Monitoring API (or Cloud Monitoring export) to ingest Cloud Run metrics. Request count and instance count indicate scale-to-zero; zero instances with requests implies cold starts. For detailed latency, ingest `run.googleapis.com/request_latencies` and `run.googleapis.com/container/startup_latencies`. Alert when cold start rate exceeds 5% or startup latency > 3s. Consider min instances for latency-critical services.",
              "z": "Line chart (cold start % and startup latency by service over time), Table (service, cold starts, %), Single value (cold start rate).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (GCP Monitoring API).\n• Ensure the following data sources are available: Cloud Run metrics (request_latencies, instance_count, container/startup_latencies).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse GCP Monitoring API (or Cloud Monitoring export) to ingest Cloud Run metrics. Request count and instance count indicate scale-to-zero; zero instances with requests implies cold starts. For detailed latency, ingest `run.googleapis.com/request_latencies` and `run.googleapis.com/container/startup_latencies`. Alert when cold start rate exceeds 5% or startup latency > 3s. Consider min instances for latency-critical services.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"run.googleapis.com/request_count\" OR metric.type=\"run.googleapis.com/container/instance_count\"\n| eval metric_type=coalesce(metric.type, 'run.googleapis.com/request_count')\n| stats sum(value) as requests, latest(value) as instances by resource.labels.service_name, metric_type, bin(_time, 5m)\n| eval cold_start_indicator=if(instances=0 AND requests>0, 1, 0)\n| stats sum(requests) as total_requests, sum(cold_start_indicator) as cold_start_events by resource.labels.service_name\n| eval cold_start_pct=round(cold_start_events/total_requests*100, 1)\n| where cold_start_pct > 5\n| table resource.labels.service_name total_requests cold_start_events cold_start_pct\n| sort -cold_start_pct\n```\n\nUnderstanding this SPL\n\n**GCP Cloud Run Cold Start Rate** — Serverless cold start impact on request latency. High cold start rates cause P99 latency spikes and timeouts for scale-to-zero services.\n\nDocumented **Data sources**: Cloud Run metrics (request_latencies, instance_count, container/startup_latencies). **App/TA** (typical add-on context): Custom (GCP Monitoring API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **metric_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by resource.labels.service_name, metric_type, bin(_time, 5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cold_start_indicator** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by resource.labels.service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cold_start_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cold_start_pct > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GCP Cloud Run Cold Start Rate**): table resource.labels.service_name total_requests cold_start_events cold_start_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (cold start % and startup latency by service over time), Table (service, cold starts, %), Single value (cold start rate).",
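              "script": "#!/usr/bin/env python3\n# Hypothetical collection sketch (an assumption, not shipped with any add-on): pulls the\n# two Cloud Run metrics this use case compares from the GCP Monitoring API and prints\n# one JSON event per data point for a scripted input or HEC forwarder. Assumes the\n# google-cloud-monitoring package and Application Default Credentials; PROJECT_ID is yours to set.\nimport json\nimport time\nfrom google.cloud import monitoring_v3\n\nPROJECT_ID = 'my-project'  # assumption: replace with your project ID\nMETRICS = ['run.googleapis.com/request_count',\n           'run.googleapis.com/container/instance_count']\n\nclient = monitoring_v3.MetricServiceClient()\nnow = time.time()\ninterval = monitoring_v3.TimeInterval({\n    'end_time': {'seconds': int(now)},\n    'start_time': {'seconds': int(now - 300)},  # last 5 minutes\n})\nfor metric_type in METRICS:\n    series = client.list_time_series(request={\n        'name': f'projects/{PROJECT_ID}',\n        'filter': f'metric.type = \"{metric_type}\"',\n        'interval': interval,\n        'view': monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,\n    })\n    for ts in series:\n        for point in ts.points:\n            print(json.dumps({\n                'metric.type': metric_type,\n                'resource.labels.service_name': ts.resource.labels.get('service_name', ''),\n                'value': point.value.int64_value,\n                'time': point.interval.end_time.isoformat(),\n            }))",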
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Serverless cold start impact on request latency.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.25",
              "n": "BigQuery Slot Usage Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Slot contention slows queries and raises cost; tracking slot usage versus reservation prevents interactive BI outages and runaway batch jobs.",
              "t": "`Splunk_TA_google-cloudplatform`, BigQuery INFORMATION_SCHEMA export",
              "d": "`sourcetype=google:gcp:monitoring` (`bigquery.googleapis.com/slot/usage`), audit exports to Pub/Sub",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"bigquery.googleapis.com/slot/usage\"\n| stats latest(value) as slot_seconds by resource.labels.project_id, bin(_time, 5m)\n| eventstats avg(slot_seconds) as baseline by resource.labels.project_id\n| where slot_seconds > baseline * 1.5\n| table _time resource.labels.project_id slot_seconds baseline",
              "m": "Ingest Cloud Monitoring metrics for slot usage and optional `JOBS_BY_PROJECT` exports. Alert when usage exceeds reservation plus burst buffer or sustained elevation vs 7-day baseline. Dashboard by reservation assignment and job type.",
              "z": "Area chart (slot usage vs cap), Table (project, peak slots), Single value (utilization %).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`, BigQuery INFORMATION_SCHEMA export.\n• Ensure the following data sources are available: `sourcetype=google:gcp:monitoring` (`bigquery.googleapis.com/slot/usage`), audit exports to Pub/Sub.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Cloud Monitoring metrics for slot usage and optional `JOBS_BY_PROJECT` exports. Alert when usage exceeds reservation plus burst buffer or sustained elevation vs 7-day baseline. Dashboard by reservation assignment and job type.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"bigquery.googleapis.com/slot/usage\"\n| stats latest(value) as slot_seconds by resource.labels.project_id, bin(_time, 5m)\n| eventstats avg(slot_seconds) as baseline by resource.labels.project_id\n| where slot_seconds > baseline * 1.5\n| table _time resource.labels.project_id slot_seconds baseline\n```\n\nUnderstanding this SPL\n\n**BigQuery Slot Usage Monitoring** — Slot contention slows queries and raises cost; tracking slot usage versus reservation prevents interactive BI outages and runaway batch jobs.\n\nDocumented **Data sources**: `sourcetype=google:gcp:monitoring` (`bigquery.googleapis.com/slot/usage`), audit exports to Pub/Sub. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`, BigQuery INFORMATION_SCHEMA export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resource.labels.project_id, bin(_time, 5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by resource.labels.project_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where slot_seconds > baseline * 1.5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **BigQuery Slot Usage Monitoring**): table _time resource.labels.project_id slot_seconds baseline\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Area chart (slot usage vs cap), Table (project, peak slots), Single value (utilization %).",
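              "script": "#!/usr/bin/env python3\n# Hypothetical export sketch (an assumption, not part of the add-on): samples recent slot\n# consumption from the optional INFORMATION_SCHEMA JOBS_BY_PROJECT export mentioned in\n# Step 1 and prints JSON rows for a scripted input. Assumes the google-cloud-bigquery\n# package and Application Default Credentials; the region qualifier is an assumption.\nimport json\nfrom google.cloud import bigquery\n\nclient = bigquery.Client()\nSQL = '''\nSELECT project_id, job_type, SUM(total_slot_ms) / 1000 AS slot_seconds\nFROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT\nWHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)\nGROUP BY project_id, job_type\n'''\nfor row in client.query(SQL).result():\n    print(json.dumps({'project_id': row.project_id,\n                      'job_type': row.job_type,\n                      'slot_seconds': row.slot_seconds}))",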
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Slot contention slows queries and raises cost; tracking slot usage versus reservation prevents interactive BI outages and runaway batch jobs.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.26",
              "n": "GKE Autopilot Pod Scaling",
              "c": "medium",
              "f": "intermediate",
              "v": "Autopilot scales node pools automatically; failed scale-outs leave pods pending and degrade SLOs.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=google:gcp:pubsub:message` (GKE cluster logs), `sourcetype=google:gcp:monitoring` (scheduler, pending pods)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"k8s_cluster\" (\"FailedScheduling\" OR \"Insufficient cpu\" OR \"Insufficient memory\")\n| stats count by resource.labels.cluster_name, jsonPayload.reason\n| sort -count",
              "m": "Enable GKE logging and filter for scheduling events. Correlate with pending pod metrics if exported. Alert on rising pending pod count or repeated scale failures.",
              "z": "Timeline (scheduling failures), Table (cluster, reason, count), Line chart (pending pods).",
              "kfp": "During a roll-out, nodes come and go and metrics briefly look off. We ignore short windows that line up with a deployment and we focus on steady-state failures after the cluster is calm again.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=google:gcp:pubsub:message` (GKE cluster logs), `sourcetype=google:gcp:monitoring` (scheduler, pending pods).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable GKE logging and filter for scheduling events. Correlate with pending pod metrics if exported. Alert on rising pending pod count or repeated scale failures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"k8s_cluster\" (\"FailedScheduling\" OR \"Insufficient cpu\" OR \"Insufficient memory\")\n| stats count by resource.labels.cluster_name, jsonPayload.reason\n| sort -count\n```\n\nUnderstanding this SPL\n\n**GKE Autopilot Pod Scaling** — Autopilot scales node pools automatically; failed scale-outs leave pods pending and degrade SLOs.\n\nDocumented **Data sources**: `sourcetype=google:gcp:pubsub:message` (GKE cluster logs), `sourcetype=google:gcp:monitoring` (scheduler, pending pods). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resource.labels.cluster_name, jsonPayload.reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (scheduling failures), Table (cluster, reason, count), Line chart (pending pods).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Autopilot scales node pools automatically; failed scale-outs leave pods pending and degrade service goals.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.27",
              "n": "Cloud Armor WAF Events",
              "c": "high",
              "f": "intermediate",
              "v": "WAF blocks indicate attack traffic or misrules; separating noise from targeted campaigns protects edge apps behind HTTPS load balancers.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "HTTP(S) LB request logs with Cloud Armor, `sourcetype=google:gcp:pubsub:message` (loadbalancing.googleapis.com/requests)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" logName=\"*requests\" httpRequest.status=403\n| search enforcedSecurityPolicy OR CLOUD_ARMOR\n| stats count by jsonPayload.enforcedSecurityPolicy.name, httpRequest.remoteIp, httpRequest.requestUrl\n| sort -count",
              "m": "Enable logging on security policies and sink to Pub/Sub. Parse rule ID and preview vs enforce. Alert on spike vs baseline or new country/ASN concentration. Tune rules to reduce false positives.",
              "z": "Bar chart (rule hits), Map (client IP geo), Timeline (block rate).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [
                "T1580",
                "T1562.007"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: HTTP(S) LB request logs with Cloud Armor, `sourcetype=google:gcp:pubsub:message` (loadbalancing.googleapis.com/requests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable logging on security policies and sink to Pub/Sub. Parse rule ID and preview vs enforce. Alert on spike vs baseline or new country/ASN concentration. Tune rules to reduce false positives.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" logName=\"*requests\" httpRequest.status=403\n| search enforcedSecurityPolicy OR CLOUD_ARMOR\n| stats count by jsonPayload.enforcedSecurityPolicy.name, httpRequest.remoteIp, httpRequest.requestUrl\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cloud Armor WAF Events** — WAF blocks indicate attack traffic or misrules; separating noise from targeted campaigns protects edge apps behind HTTPS load balancers.\n\nDocumented **Data sources**: HTTP(S) LB request logs with Cloud Armor, `sourcetype=google:gcp:pubsub:message` (loadbalancing.googleapis.com/requests). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by jsonPayload.enforcedSecurityPolicy.name, httpRequest.remoteIp, httpRequest.requestUrl** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cloud Armor WAF Events** — WAF blocks indicate attack traffic or misrules; separating noise from targeted campaigns protects edge apps behind HTTPS load balancers.\n\nDocumented **Data sources**: HTTP(S) LB request logs with Cloud Armor, `sourcetype=google:gcp:pubsub:message` (loadbalancing.googleapis.com/requests). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (rule hits), Map (client IP geo), Timeline (block rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "WAF blocks indicate attack traffic or misrules; separating noise from targeted campaigns protects edge apps behind HTTPS load balancers.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count",
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.28",
              "n": "VPC Service Controls Violations",
              "c": "critical",
              "f": "intermediate",
              "v": "Real-time violation tracking complements perimeter design reviews and catches data exfiltration paths early.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Access Context Manager audit via `sourcetype=google:gcp:pubsub:message`",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"accesscontextmanager.googleapis.com\"\n| search violation OR denied OR blocked\n| stats count by protoPayload.authenticationInfo.principalEmail, resource.labels.project_id\n| sort -count",
              "m": "Ensure VPC SC dry-run and enforce modes both log. Route to SIEM with severity by service (BigQuery, GCS). Weekly review of top principals for false positives.",
              "z": "Table (principal, project, count), Timeline (violations), Pie chart (service).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [
                "T1562.007"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Access Context Manager audit via `sourcetype=google:gcp:pubsub:message`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure VPC SC dry-run and enforce modes both log. Route to SIEM with severity by service (BigQuery, GCS). Weekly review of top principals for false positives.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"accesscontextmanager.googleapis.com\"\n| search violation OR denied OR blocked\n| stats count by protoPayload.authenticationInfo.principalEmail, resource.labels.project_id\n| sort -count\n```\n\nUnderstanding this SPL\n\n**VPC Service Controls Violations** — Real-time violation tracking complements perimeter design reviews and catches data exfiltration paths early.\n\nDocumented **Data sources**: Access Context Manager audit via `sourcetype=google:gcp:pubsub:message`. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by protoPayload.authenticationInfo.principalEmail, resource.labels.project_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (principal, project, count), Timeline (violations), Pie chart (service).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Real-time violation tracking complements perimeter design reviews and catches data exfiltration paths early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.29",
              "n": "Pub/Sub Subscription Backlog",
              "c": "high",
              "f": "beginner",
              "v": "Growing backlog signals consumer lag or under-provisioned workers; oldest-unacked age breaches processing SLAs.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=google:gcp:monitoring` (`pubsub.googleapis.com/subscription/num_undelivered_messages`, `oldest_unacked_message_age`)",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"pubsub.googleapis.com/subscription/num_undelivered_messages\"\n| stats latest(value) as backlog by resource.labels.subscription_id, bin(_time, 5m)\n| where backlog > 10000\n| sort - backlog",
              "m": "Set per-subscription SLOs for max backlog and oldest age. Scale push subscribers or fix poison messages. Use dead-letter topics for bad payloads.",
              "z": "Line chart (backlog over time), Single value (oldest message age), Table (subscription, backlog).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=google:gcp:monitoring` (`pubsub.googleapis.com/subscription/num_undelivered_messages`, `oldest_unacked_message_age`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSet per-subscription SLOs for max backlog and oldest age. Scale push subscribers or fix poison messages. Use dead-letter topics for bad payloads.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"pubsub.googleapis.com/subscription/num_undelivered_messages\"\n| stats latest(value) as backlog by resource.labels.subscription_id, bin(_time, 5m)\n| where backlog > 10000\n| sort - backlog\n```\n\nUnderstanding this SPL\n\n**Pub/Sub Subscription Backlog** — Growing backlog signals consumer lag or under-provisioned workers; oldest-unacked age breaches processing SLAs.\n\nDocumented **Data sources**: `sourcetype=google:gcp:monitoring` (`pubsub.googleapis.com/subscription/num_undelivered_messages`, `oldest_unacked_message_age`). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resource.labels.subscription_id, bin(_time, 5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where backlog > 10000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (backlog over time), Single value (oldest message age), Table (subscription, backlog).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Growing backlog signals consumer lag or -provisioned workers; oldest-unacked age breaches processing service promises.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.30",
              "n": "Security Command Center Findings",
              "c": "high",
              "f": "intermediate",
              "v": "SCC aggregates misconfigurations and threats; operationalizing findings closes gaps faster than periodic console reviews.",
              "t": "`Splunk_TA_google-cloudplatform` (Pub/Sub export)",
              "d": "`sourcetype=google:gcp:pubsub:message` (SCC findings JSON), SCC Pub/Sub notifications",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" sourceProperties.ResourceName=*\n| spath path=finding\n| search finding.state=\"ACTIVE\" (finding.severity=\"HIGH\" OR finding.severity=\"CRITICAL\")\n| stats latest(finding.createTime) as seen by finding.category, resource\n| sort -seen",
              "m": "Enable continuous export or finding notifications to Pub/Sub. Map categories to owners. Auto-ticket CRITICAL; weekly review HIGH. Deduplicate by finding ID across updates.",
              "z": "Table (category, resource, severity), Bar chart (findings by category), Timeline (new findings).",
              "kfp": "A fresh scanner run or a vendor re-rate can resurface a finding we already know about. We deduplicate on resource and time and we only escalate when the exposure is new or the severity stepped up.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform` (Pub/Sub export).\n• Ensure the following data sources are available: `sourcetype=google:gcp:pubsub:message` (SCC findings JSON), SCC Pub/Sub notifications.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable continuous export or finding notifications to Pub/Sub. Map categories to owners. Auto-ticket CRITICAL; weekly review HIGH. Deduplicate by finding ID across updates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" sourceProperties.ResourceName=*\n| spath path=finding\n| search finding.state=\"ACTIVE\" (finding.severity=\"HIGH\" OR finding.severity=\"CRITICAL\")\n| stats latest(finding.createTime) as seen by finding.category, resource\n| sort -seen\n```\n\nUnderstanding this SPL\n\n**Security Command Center Findings** — SCC aggregates misconfigurations and threats; operationalizing findings closes gaps faster than periodic console reviews.\n\nDocumented **Data sources**: `sourcetype=google:gcp:pubsub:message` (SCC findings JSON), SCC Pub/Sub notifications. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform` (Pub/Sub export). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by finding.category, resource** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.category | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security Command Center Findings** — SCC aggregates misconfigurations and threats; operationalizing findings closes gaps faster than periodic console reviews.\n\nDocumented **Data sources**: `sourcetype=google:gcp:pubsub:message` (SCC findings JSON), SCC Pub/Sub notifications. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform` (Pub/Sub export). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (category, resource, severity), Bar chart (findings by category), Timeline (new findings).",
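              "script": "#!/usr/bin/env python3\n# Hypothetical validation sketch (an assumption): lists ACTIVE HIGH/CRITICAL findings via\n# the Security Command Center API so Step 3 can compare counts against what reached\n# Splunk. Assumes the google-cloud-securitycenter package, Application Default\n# Credentials, and your organization ID in ORG_ID.\nfrom google.cloud import securitycenter\n\nORG_ID = '123456789'  # assumption: replace with your organization ID\nclient = securitycenter.SecurityCenterClient()\nparent = f'organizations/{ORG_ID}/sources/-'  # '-' queries findings across all sources\nflt = 'state=\"ACTIVE\" AND (severity=\"HIGH\" OR severity=\"CRITICAL\")'\nfor result in client.list_findings(request={'parent': parent, 'filter': flt}):\n    finding = result.finding\n    print(finding.category, finding.resource_name, finding.severity.name)",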
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "SCC aggregates misconfigurations and threats; operationalizing findings closes gaps faster than periodic console reviews.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.category | sort - count",
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.31",
              "n": "Cloud KMS Key Rotation Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Crypto policy often mandates rotation; tracking next rotation time avoids audit findings and forced emergency rotations.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=google:gcp:pubsub:message` (cloudkms.googleapis.com audit), Asset Inventory key metadata",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"cloudkms.googleapis.com\" protoPayload.methodName=\"*CryptoKey*\"\n| stats latest(protoPayload.request.nextRotationTime) as next_rot by resource.labels.key_ring_id, resource.labels.crypto_key_id\n| eval days=round((strptime(next_rot,\"%Y-%m-%dT%H:%M:%SZ\")-now())/86400,0)\n| where days < 30 OR isnull(days)",
              "m": "Nightly sync key metadata including rotation period and next rotation. Alert when rotation overdue or manual rotation gaps detected. Include CMEK keys for BigQuery and GCS.",
              "z": "Table (key, days to rotation), Timeline (rotation events), Single value (keys out of compliance).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [
                "T1098.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=google:gcp:pubsub:message` (cloudkms.googleapis.com audit), Asset Inventory key metadata.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNightly sync key metadata including rotation period and next rotation. Alert when rotation overdue or manual rotation gaps detected. Include CMEK keys for BigQuery and GCS.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"cloudkms.googleapis.com\" protoPayload.methodName=\"*CryptoKey*\"\n| stats latest(protoPayload.request.nextRotationTime) as next_rot by resource.labels.key_ring_id, resource.labels.crypto_key_id\n| eval days=round((strptime(next_rot,\"%Y-%m-%dT%H:%M:%SZ\")-now())/86400,0)\n| where days < 30 OR isnull(days)\n```\n\nUnderstanding this SPL\n\n**Cloud KMS Key Rotation Compliance** — Crypto policy often mandates rotation; tracking next rotation time avoids audit findings and forced emergency rotations.\n\nDocumented **Data sources**: `sourcetype=google:gcp:pubsub:message` (cloudkms.googleapis.com audit), Asset Inventory key metadata. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resource.labels.key_ring_id, resource.labels.crypto_key_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days < 30 OR isnull(days)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (key, days to rotation), Timeline (rotation events), Single value (keys out of compliance).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Crypto policy often mandates rotation; tracking next rotation time avoids audit findings and forced emergency rotations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.32",
              "n": "Cloud Logging Sink Health",
              "c": "high",
              "f": "intermediate",
              "v": "Broken sinks drop audit and security logs silently; monitoring export errors preserves compliance and detection coverage.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=google:gcp:pubsub:message` (logging sink errors), Admin Activity for sink changes",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" logName=\"*logging*\" (severity=\"ERROR\" OR \"SinkDisappeared\" OR \"WriteError\")\n| stats count by resource.labels.project_id, textPayload\n| sort -count",
              "m": "Enable log metrics on sink write errors to Pub/Sub destinations. Alert on any error count > 0 in 15 minutes. Verify Pub/Sub IAM and destination bucket permissions after changes.",
              "z": "Single value (sink errors), Table (project, error text), Timeline.",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=google:gcp:pubsub:message` (logging sink errors), Admin Activity for sink changes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable log metrics on sink write errors to Pub/Sub destinations. Alert on any error count > 0 in 15 minutes. Verify Pub/Sub IAM and destination bucket permissions after changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" logName=\"*logging*\" (severity=\"ERROR\" OR \"SinkDisappeared\" OR \"WriteError\")\n| stats count by resource.labels.project_id, textPayload\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cloud Logging Sink Health** — Broken sinks drop audit and security logs silently; monitoring export errors preserves compliance and detection coverage.\n\nDocumented **Data sources**: `sourcetype=google:gcp:pubsub:message` (logging sink errors), Admin Activity for sink changes. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resource.labels.project_id, textPayload** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (sink errors), Table (project, error text), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Broken sinks drop audit and security logs silently; monitoring export errors preserves compliance and detection coverage.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.33",
              "n": "GKE Node Auto-Repair Events",
              "c": "medium",
              "f": "intermediate",
              "v": "Auto-repair replaces unhealthy nodes; frequent repairs indicate image, disk, or hardware issues affecting workload stability.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "GKE node pool operations in `sourcetype=google:gcp:pubsub:message`, cluster operations log",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.methodName=\"repairNodePool\" OR textPayload=\"*auto-repair*\"\n| stats count by resource.labels.cluster_name, resource.labels.node_pool\n| sort -count",
              "m": "Correlate repairs with container restarts and kernel OOM. Alert when repairs per day exceed baseline for a pool. Review node image version skew.",
              "z": "Bar chart (repairs by pool), Timeline (repair events), Table (cluster, pool, count).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: GKE node pool operations in `sourcetype=google:gcp:pubsub:message`, cluster operations log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate repairs with container restarts and kernel OOM. Alert when repairs per day exceed baseline for a pool. Review node image version skew.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.methodName=\"repairNodePool\" OR textPayload=\"*auto-repair*\"\n| stats count by resource.labels.cluster_name, resource.labels.node_pool\n| sort -count\n```\n\nUnderstanding this SPL\n\n**GKE Node Auto-Repair Events** — Auto-repair replaces unhealthy nodes; frequent repairs indicate image, disk, or hardware issues affecting workload stability.\n\nDocumented **Data sources**: GKE node pool operations in `sourcetype=google:gcp:pubsub:message`, cluster operations log. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resource.labels.cluster_name, resource.labels.node_pool** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (repairs by pool), Timeline (repair events), Table (cluster, pool, count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Auto-repair replaces unhealthy nodes; frequent repairs indicate image, disk, or hardware issues affecting workload stability.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.34",
              "n": "Dataflow Pipeline Health",
              "c": "high",
              "f": "intermediate",
              "v": "Batch and streaming pipelines power analytics; failed jobs or high system lag delay downstream consumers.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=google:gcp:monitoring` (`dataflow.googleapis.com/job/*`), Dataflow worker logs",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"dataflow.googleapis.com/job/system_lag\"\n| stats latest(value) as lag_sec by resource.labels.job_name, bin(_time, 5m)\n| where lag_sec > 300\n| sort - lag_sec",
              "m": "Ingest job state changes (FAILED, UPDATED) from logging. Alert on sustained system lag for streaming jobs or failed batch completion. Dashboard worker CPU and shuffle errors.",
              "z": "Line chart (system lag), Table (job, state), Timeline (job failures).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=google:gcp:monitoring` (`dataflow.googleapis.com/job/*`), Dataflow worker logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest job state changes (FAILED, UPDATED) from logging. Alert on sustained system lag for streaming jobs or failed batch completion. Dashboard worker CPU and shuffle errors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"dataflow.googleapis.com/job/system_lag\"\n| stats latest(value) as lag_sec by resource.labels.job_name, bin(_time, 5m)\n| where lag_sec > 300\n| sort - lag_sec\n```\n\nUnderstanding this SPL\n\n**Dataflow Pipeline Health** — Batch and streaming pipelines power analytics; failed jobs or high system lag delay downstream consumers.\n\nDocumented **Data sources**: `sourcetype=google:gcp:monitoring` (`dataflow.googleapis.com/job/*`), Dataflow worker logs. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resource.labels.job_name, bin(_time, 5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where lag_sec > 300` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (system lag), Table (job, state), Timeline (job failures).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Batch and streaming pipelines power analytics; failed jobs or high system lag delay downstream consumers.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.35",
              "n": "Cloud SQL Connection Limits",
              "c": "high",
              "f": "intermediate",
              "v": "Hitting max connections causes application errors; trending connections versus tier limits guides pool sizing and read replicas.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=google:gcp:monitoring` (`cloudsql.googleapis.com/database/network/connections`, `postgresql.googleapis.com/connection_count`)",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"cloudsql.googleapis.com/database/network/connections\"\n| stats latest(value) as conns by resource.labels.database_id, bin(_time, 5m)\n| lookup cloudsql_tier_limits database_id OUTPUT max_connections\n| where conns > max_connections * 0.85",
              "m": "Maintain lookup of instance tier to max connections. Alert at 85% sustained. Correlate with connection pool metrics from apps. Plan vertical scale or read replicas before hard failures.",
              "z": "Line chart (connections vs limit), Gauge (utilization %), Table (instance, conns).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=google:gcp:monitoring` (`cloudsql.googleapis.com/database/network/connections`, `postgresql.googleapis.com/connection_count`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain lookup of instance tier to max connections. Alert at 85% sustained. Correlate with connection pool metrics from apps. Plan vertical scale or read replicas before hard failures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"cloudsql.googleapis.com/database/network/connections\"\n| stats latest(value) as conns by resource.labels.database_id, bin(_time, 5m)\n| lookup cloudsql_tier_limits database_id OUTPUT max_connections\n| where conns > max_connections * 0.85\n```\n\nUnderstanding this SPL\n\n**Cloud SQL Connection Limits** — Hitting max connections causes application errors; trending connections versus tier limits guides pool sizing and read replicas.\n\nDocumented **Data sources**: `sourcetype=google:gcp:monitoring` (`cloudsql.googleapis.com/database/network/connections`, `postgresql.googleapis.com/connection_count`). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resource.labels.database_id, bin(_time, 5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where conns > max_connections * 0.85` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (connections vs limit), Gauge (utilization %), Table (instance, conns).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Hitting max connections causes application errors; trending connections versus tier limits guides pool sizing and read replicas.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.36",
              "n": "Memorystore (Redis) Health",
              "c": "high",
              "f": "intermediate",
              "v": "Redis backs sessions and caches; memory pressure and replication lag cause timeouts and stale reads.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=google:gcp:monitoring` (`redis.googleapis.com/stats/memory/usage_ratio`, `replication/role`, `cpu/utilization`)",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"redis.googleapis.com/stats/memory/usage_ratio\"\n| stats latest(value) as mem_ratio by resource.labels.instance_id, bin(_time, 5m)\n| where mem_ratio > 0.9\n| sort - mem_ratio",
              "m": "Alert on memory usage above 90%, high CPU, or replica lag metrics. Plan tier upgrades or key eviction policies. Monitor persistence (RDB/AOF) failures if enabled.",
              "z": "Line chart (memory ratio, CPU), Table (instance, tier), Single value (evictions if exported).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=google:gcp:monitoring` (`redis.googleapis.com/stats/memory/usage_ratio`, `replication/role`, `cpu/utilization`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on memory usage above 90%, high CPU, or replica lag metrics. Plan tier upgrades or key eviction policies. Monitor persistence (RDB/AOF) failures if enabled.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"redis.googleapis.com/stats/memory/usage_ratio\"\n| stats latest(value) as mem_ratio by resource.labels.instance_id, bin(_time, 5m)\n| where mem_ratio > 0.9\n| sort - mem_ratio\n```\n\nUnderstanding this SPL\n\n**Memorystore (Redis) Health** — Redis backs sessions and caches; memory pressure and replication lag cause timeouts and stale reads.\n\nDocumented **Data sources**: `sourcetype=google:gcp:monitoring` (`redis.googleapis.com/stats/memory/usage_ratio`, `replication/role`, `cpu/utilization`). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resource.labels.instance_id, bin(_time, 5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mem_ratio > 0.9` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (memory ratio, CPU), Table (instance, tier), Single value (evictions if exported).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Redis backs sessions and caches; memory pressure and replication lag cause timeouts and stale reads.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.37",
              "n": "Cloud CDN Cache Performance",
              "c": "medium",
              "f": "intermediate",
              "v": "Low hit ratio raises origin load and latency; optimizing cache keys and TTL improves cost and user experience.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "HTTP(S) LB logs with cache fill/lookup fields, `sourcetype=google:gcp:monitoring` (cdn metrics)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" httpRequest.latency!=\"\"\n| eval cache_hit=if(match(_raw,\"cacheHit|CACHE_HIT\"),1,0)\n| stats sum(cache_hit) as hits, count as total by resource.labels.url_map_name\n| eval hit_ratio=round(100*hits/total,2)\n| where hit_ratio < 60 AND total > 1000\n| sort hit_ratio",
              "m": "Parse cache hit/miss from load balancer logs. Segment by content type and geography. Alert when hit ratio drops vs 14-day baseline. Review cache mode and Vary headers.",
              "z": "Line chart (hit ratio by URL map), Bar chart (origin egress), Table (backend, hit %).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: HTTP(S) LB logs with cache fill/lookup fields, `sourcetype=google:gcp:monitoring` (cdn metrics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse cache hit/miss from load balancer logs. Segment by content type and geography. Alert when hit ratio drops vs 14-day baseline. Review cache mode and Vary headers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" httpRequest.latency!=\"\"\n| eval cache_hit=if(match(_raw,\"cacheHit|CACHE_HIT\"),1,0)\n| stats sum(cache_hit) as hits, count as total by resource.labels.url_map_name\n| eval hit_ratio=round(100*hits/total,2)\n| where hit_ratio < 60 AND total > 1000\n| sort hit_ratio\n```\n\nUnderstanding this SPL\n\n**Cloud CDN Cache Performance** — Low hit ratio raises origin load and latency; optimizing cache keys and TTL improves cost and user experience.\n\nDocumented **Data sources**: HTTP(S) LB logs with cache fill/lookup fields, `sourcetype=google:gcp:monitoring` (cdn metrics). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cache_hit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by resource.labels.url_map_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hit_ratio < 60 AND total > 1000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (hit ratio by URL map), Bar chart (origin egress), Table (backend, hit %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Low hit ratio raises origin load and latency; optimizing cache keys and TTL improves cost and user experience.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.38",
              "n": "GCS Bucket Policy Changes",
              "c": "critical",
              "f": "intermediate",
              "v": "Public buckets and IAM relaxations are common breach paths; real-time detection limits exposure window.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "Admin Activity `sourcetype=google:gcp:pubsub:message` (storage.setIamPermissions, bucket updates)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"storage.googleapis.com\" protoPayload.methodName=\"storage.buckets.setIamPermissions\"\n| spath path=protoPayload.serviceData.policy.bindings{}\n| mvexpand protoPayload.serviceData.policy.bindings{} limit=500\n| search bindings.members=\"allUsers\" OR bindings.members=\"allAuthenticatedUsers\"\n| table _time protoPayload.authenticationInfo.principalEmail resource.labels.bucket_name bindings.role",
              "m": "Alert on any allUsers/allAuthenticatedUsers binding or removal of org constraints. Weekly review of bucket-level IAM diffs. Integrate with SCC public bucket findings.",
              "z": "Table (bucket, principal, role), Timeline (IAM changes), Single value (public buckets).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1530",
                "T1619"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: Admin Activity `sourcetype=google:gcp:pubsub:message` (storage.setIamPermissions, bucket updates).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on any allUsers/allAuthenticatedUsers binding or removal of org constraints. Weekly review of bucket-level IAM diffs. Integrate with SCC public bucket findings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"storage.googleapis.com\" protoPayload.methodName=\"storage.buckets.setIamPermissions\"\n| spath path=protoPayload.serviceData.policy.bindings{}\n| mvexpand protoPayload.serviceData.policy.bindings{} limit=500\n| search bindings.members=\"allUsers\" OR bindings.members=\"allAuthenticatedUsers\"\n| table _time protoPayload.authenticationInfo.principalEmail resource.labels.bucket_name bindings.role\n```\n\nUnderstanding this SPL\n\n**GCS Bucket Policy Changes** — Public buckets and IAM relaxations are common breach paths; real-time detection limits exposure window.\n\nDocumented **Data sources**: Admin Activity `sourcetype=google:gcp:pubsub:message` (storage.setIamPermissions, bucket updates). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **GCS Bucket Policy Changes**): table _time protoPayload.authenticationInfo.principalEmail resource.labels.bucket_name bindings.role\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GCS Bucket Policy Changes** — Public buckets and IAM relaxations are common breach paths; real-time detection limits exposure window.\n\nDocumented **Data sources**: Admin Activity `sourcetype=google:gcp:pubsub:message` (storage.setIamPermissions, bucket updates). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (bucket, principal, role), Timeline (IAM changes), Single value (public buckets).",
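              "_sketch": "Illustrative sketch, not from the validated catalog: a simple table for the 'weekly review of bucket-level IAM diffs' step; it counts setIamPermissions calls per bucket over the last 7 days and lists the acting principals.\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"storage.googleapis.com\" protoPayload.methodName=\"storage.buckets.setIamPermissions\" earliest=-7d\n| stats count as changes, values(protoPayload.authenticationInfo.principalEmail) as who by resource.labels.bucket_name\n| sort -changes\n```",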
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Public buckets and access control relaxations are common breach paths; real-time detection limits exposure window.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.39",
              "n": "Anthos Service Mesh Health",
              "c": "high",
              "f": "advanced",
              "v": "Istio-based meshes add control-plane and sidecar failure modes; monitoring error budgets and latency protects microservices SLOs.",
              "t": "`Splunk_TA_google-cloudplatform`, Anthos Service Mesh telemetry",
              "d": "`sourcetype=google:gcp:monitoring` (Istio canonical metrics)",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\"\n| where match(metric.type, \"(istio|kubernetes\\.io/istio)\")\n| stats avg(value) as err_rate by metric.labels.destination_service_name\n| where err_rate > 0.01\n| sort - err_rate",
              "m": "Export Istio canonical metrics (4xx/5xx, request duration) to Cloud Monitoring and Splunk. Dashboard golden signals per service. Alert on error rate > 1% or p99 latency vs SLO. Include control plane (istiod) pod health.",
              "z": "Service mesh graph (external tool) plus Table (service, error rate), Line chart (p50/p99 latency).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`, Anthos Service Mesh telemetry.\n• Ensure the following data sources are available: `sourcetype=google:gcp:monitoring` (Istio canonical metrics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport Istio canonical metrics (4xx/5xx, request duration) to Cloud Monitoring and Splunk. Dashboard golden signals per service. Alert on error rate > 1% or p99 latency vs SLO. Include control plane (istiod) pod health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\"\n| where match(metric.type, \"(istio|kubernetes\\.io/istio)\")\n| stats avg(value) as err_rate by metric.labels.destination_service_name\n| where err_rate > 0.01\n| sort - err_rate\n```\n\nUnderstanding this SPL\n\n**Anthos Service Mesh Health** — Istio-based meshes add control-plane and sidecar failure modes; monitoring error budgets and latency protects microservices SLOs.\n\nDocumented **Data sources**: `sourcetype=google:gcp:monitoring` (Istio canonical metrics). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`, Anthos Service Mesh telemetry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(metric.type, \"(istio|kubernetes\\.io/istio)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by metric.labels.destination_service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where err_rate > 0.01` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Service mesh graph (external tool) plus Table (service, error rate), Line chart (p50/p99 latency).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Istio-based meshes add control-plane and sidecar failure modes; monitoring error budgets and latency protects microservices service goals.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.3.40",
              "n": "GCP Cloud Run Task Health",
              "c": "high",
              "f": "intermediate",
              "v": "Cloud Run scales to zero and on demand; tracking request latency, instance count, and error ratio catches cold-start and quota issues before customers notice.",
              "t": "`Splunk_TA_google-cloudplatform` (Pub/Sub logging/metrics) or OTel export from Cloud Ops",
              "d": "`sourcetype=google:gcp:pubsub:message` or `sourcetype=gcp:monitoring:timeseries`",
              "q": "index=cloud sourcetype=\"gcp:monitoring:timeseries\"\n| where like(metric.type, \"run.googleapis.com%\")\n| stats avg(value) as val_avg, max(value) as val_max by metric.type\n| where match(metric.type, \"(?i)request|latency|instance|container\")\n| sort -val_max",
              "m": "Export Cloud Run request, latency, and instance metrics via GCP monitoring sink to Pub/Sub and ingest with `Splunk_TA_google-cloudplatform`, or forward OpenTelemetry from a sidecar/collector if you run hybrid instrumentation. Ensure `service_name` and `revision_name` are extracted. Alert on elevated `server_request_latencies` and `5xx` ratio versus SLO.",
              "z": "Time chart (p95 latency, request rate), Table (service, revision, error rate), Single value (active instances).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform` (Pub/Sub logging/metrics) or OTel export from Cloud Ops.\n• Ensure the following data sources are available: `sourcetype=google:gcp:pubsub:message` or `sourcetype=gcp:monitoring:timeseries`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport Cloud Run request, latency, and instance metrics via GCP monitoring sink to Pub/Sub and ingest with `Splunk_TA_google-cloudplatform`, or forward OpenTelemetry from a sidecar/collector if you run hybrid instrumentation. Ensure `service_name` and `revision_name` are extracted. Alert on elevated `server_request_latencies` and `5xx` ratio versus SLO.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"gcp:monitoring:timeseries\"\n| where like(metric.type, \"run.googleapis.com%\")\n| stats avg(value) as val_avg, max(value) as val_max by metric.type\n| where match(metric.type, \"(?i)request|latency|instance|container\")\n| sort -val_max\n```\n\nUnderstanding this SPL\n\n**GCP Cloud Run Task Health** — Cloud Run scales to zero and on demand; tracking request latency, instance count, and error ratio catches cold-start and quota issues before customers notice.\n\nDocumented **Data sources**: `sourcetype=google:gcp:pubsub:message` or `sourcetype=gcp:monitoring:timeseries`. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform` (Pub/Sub logging/metrics) or OTel export from Cloud Ops. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: gcp:monitoring:timeseries. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"gcp:monitoring:timeseries\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where like(metric.type, \"run.googleapis.com%\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by metric.type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where match(metric.type, \"(?i)request|latency|instance|container\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (p95 latency, request rate), Table (service, revision, error rate), Single value (active instances).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cloud Run scales to zero and on demand; tracking request latency, instance count, and error ratio catches cold-start and quota issues before customers notice.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.0,
          "qd": {
            "gold": 1,
            "silver": 0,
            "bronze": 39,
            "none": 0
          }
        },
        {
          "i": "4.4",
          "n": "Multi-Cloud & Cloud Management",
          "u": [
            {
              "i": "4.4.1",
              "n": "Terraform Drift Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Infrastructure drift from declared IaC state means manual changes broke the single source of truth. Causes unpredictable behavior and deployment failures.",
              "t": "Custom input (Terraform CLI output, CI/CD integration)",
              "d": "`terraform plan` output, CI/CD pipeline logs",
              "q": "index=devops sourcetype=\"terraform:plan\"\n| where changes_detected=\"true\"\n| stats count as drifted_resources by workspace, resource_type\n| sort -drifted_resources",
              "m": "Run `terraform plan -detailed-exitcode` on schedule in CI/CD. Forward plan output to Splunk via HEC. Exit code 2 = changes detected (drift). Alert on any drift in production workspaces.",
              "z": "Table (workspace, resource, drift), Single value (drifted resources), Bar chart.",
              "kfp": "A plan can show a diff right after a release while state catches up, or when we intentionally tuned something in the console during an emergency. We confirm the workspace and the window before we treat it as unplanned drift.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1578"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom input (Terraform CLI output, CI/CD integration).\n• Ensure the following data sources are available: `terraform plan` output, CI/CD pipeline logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun `terraform plan -detailed-exitcode` on schedule in CI/CD. Forward plan output to Splunk via HEC. Exit code 2 = changes detected (drift). Alert on any drift in production workspaces.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"terraform:plan\"\n| where changes_detected=\"true\"\n| stats count as drifted_resources by workspace, resource_type\n| sort -drifted_resources\n```\n\nUnderstanding this SPL\n\n**Terraform Drift Detection** — Infrastructure drift from declared IaC state means manual changes broke the single source of truth. Causes unpredictable behavior and deployment failures.\n\nDocumented **Data sources**: `terraform plan` output, CI/CD pipeline logs. **App/TA** (typical add-on context): Custom input (Terraform CLI output, CI/CD integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: terraform:plan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"terraform:plan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where changes_detected=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by workspace, resource_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (workspace, resource, drift), Single value (drifted resources), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Infrastructure drift from declared IaC state means manual changes broke the single source of truth.",
              "mtype": [
                "Security",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hashicorp"
              ],
              "em": [
                "hashicorp_terraform"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.2",
              "n": "Cross-Cloud Identity Correlation",
              "c": "medium",
              "f": "intermediate",
              "v": "Users often have identities across AWS/Azure/GCP. Correlating activity provides unified view for security investigation and insider threat detection.",
              "t": "Combined cloud TAs + lookup tables",
              "d": "All cloud audit logs",
              "q": "index=aws OR index=azure OR index=gcp\n| eval cloud=case(index=\"aws\",\"AWS\", index=\"azure\",\"Azure\", index=\"gcp\",\"GCP\")\n| eval user=coalesce(userIdentity.arn, userPrincipalName, protoPayload.authenticationInfo.principalEmail)\n| lookup cloud_identity_map user OUTPUT normalized_user\n| stats count, dc(cloud) as clouds_active, values(cloud) as clouds by normalized_user\n| where clouds_active > 1\n| sort -count",
              "m": "Create a lookup table mapping cloud identities to a normalized user (e.g., email). Combine audit logs from all three providers. Dashboard showing cross-cloud activity per user.",
              "z": "Table (user, clouds, activity count), Sankey diagram (user to cloud to action).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1078.004",
                "T1580"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Combined cloud TAs + lookup tables.\n• Ensure the following data sources are available: All cloud audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a lookup table mapping cloud identities to a normalized user (e.g., email). Combine audit logs from all three providers. Dashboard showing cross-cloud activity per user.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws OR index=azure OR index=gcp\n| eval cloud=case(index=\"aws\",\"AWS\", index=\"azure\",\"Azure\", index=\"gcp\",\"GCP\")\n| eval user=coalesce(userIdentity.arn, userPrincipalName, protoPayload.authenticationInfo.principalEmail)\n| lookup cloud_identity_map user OUTPUT normalized_user\n| stats count, dc(cloud) as clouds_active, values(cloud) as clouds by normalized_user\n| where clouds_active > 1\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cross-Cloud Identity Correlation** — Users often have identities across AWS/Azure/GCP. Correlating activity provides unified view for security investigation and insider threat detection.\n\nDocumented **Data sources**: All cloud audit logs. **App/TA** (typical add-on context): Combined cloud TAs + lookup tables. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, azure, gcp.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=azure, index=gcp. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cloud** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by normalized_user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where clouds_active > 1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, clouds, activity count), Sankey diagram (user to cloud to action).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Users often have identities across AWS/Azure/GCP.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.3",
              "n": "Multi-Cloud Cost Dashboard",
              "c": "medium",
              "f": "beginner",
              "v": "Unified cost visibility across cloud providers enables budgeting, chargeback, and optimization decisions from a single pane of glass.",
              "t": "Combined cloud TAs, billing data",
              "d": "AWS CUR, Azure Cost Management, GCP Billing export",
              "q": "index=aws sourcetype=\"aws:billing\" OR index=azure sourcetype=\"azure:costmanagement\" OR index=gcp sourcetype=\"gcp:billing\"\n| eval cloud=case(index=\"aws\",\"AWS\", index=\"azure\",\"Azure\", index=\"gcp\",\"GCP\")\n| eval cost=coalesce(BlendedCost, CostInBillingCurrency, cost)\n| timechart span=1d sum(cost) by cloud",
              "m": "Ingest billing data from each provider. Normalize cost fields. Create a unified dashboard with consistent time-grain (daily). Break down by team using tagging from each provider.",
              "z": "Stacked area chart (daily cost by cloud), Table (cost by service), Pie chart (cost distribution).",
              "kfp": "Cloud billing and cost files often arrive a day or more late, include credits and refunds, and use tags that do not line up with app names, so a spike is not always a real overrun. We narrow alerts to sustained trends and we exclude test or sandbox projects when we can.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Combined cloud TAs, billing data.\n• Ensure the following data sources are available: AWS CUR, Azure Cost Management, GCP Billing export.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest billing data from each provider. Normalize cost fields. Create a unified dashboard with consistent time-grain (daily). Break down by team using tagging from each provider.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:billing\" OR index=azure sourcetype=\"azure:costmanagement\" OR index=gcp sourcetype=\"gcp:billing\"\n| eval cloud=case(index=\"aws\",\"AWS\", index=\"azure\",\"Azure\", index=\"gcp\",\"GCP\")\n| eval cost=coalesce(BlendedCost, CostInBillingCurrency, cost)\n| timechart span=1d sum(cost) by cloud\n```\n\nUnderstanding this SPL\n\n**Multi-Cloud Cost Dashboard** — Unified cost visibility across cloud providers enables budgeting, chargeback, and optimization decisions from a single pane of glass.\n\nDocumented **Data sources**: AWS CUR, Azure Cost Management, GCP Billing export. **App/TA** (typical add-on context): Combined cloud TAs, billing data. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, azure, gcp; **sourcetype**: aws:billing, azure:costmanagement, gcp:billing. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=azure, index=gcp, sourcetype=\"aws:billing\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cloud** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by cloud** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area chart (daily cost by cloud), Table (cost by service), Pie chart (cost distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Unified cost visibility across cloud providers enables budgeting, chargeback, and optimization decisions from a single pane of glass.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.4",
              "n": "Cloud Resource Tagging Compliance",
              "c": "low",
              "f": "intermediate",
              "v": "Untagged resources can't be tracked for cost allocation, compliance, or ownership. Tagging compliance is foundational for cloud governance.",
              "t": "Cloud provider TAs, Config rules",
              "d": "Cloud resource inventories, Config/Policy compliance",
              "q": "index=aws sourcetype=\"aws:config:notification\" resourceType=\"AWS::EC2::Instance\"\n| spath output=tags path=configuration.tags{}\n| eval has_owner = if(match(tags, \"Owner\"), \"Yes\", \"No\")\n| eval has_env = if(match(tags, \"Environment\"), \"Yes\", \"No\")\n| where has_owner=\"No\" OR has_env=\"No\"\n| table resourceId has_owner has_env",
              "m": "Use AWS Config rules (required-tags), Azure Policy, or GCP org policies to evaluate tagging. Ingest compliance results. Dashboard showing tagging compliance by tag and resource type.",
              "z": "Table (resource, missing tags), Pie chart (compliant %), Bar chart by tag.",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1578"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud provider TAs, Config rules.\n• Ensure the following data sources are available: Cloud resource inventories, Config/Policy compliance.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse AWS Config rules (required-tags), Azure Policy, or GCP org policies to evaluate tagging. Ingest compliance results. Dashboard showing tagging compliance by tag and resource type.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:config:notification\" resourceType=\"AWS::EC2::Instance\"\n| spath output=tags path=configuration.tags{}\n| eval has_owner = if(match(tags, \"Owner\"), \"Yes\", \"No\")\n| eval has_env = if(match(tags, \"Environment\"), \"Yes\", \"No\")\n| where has_owner=\"No\" OR has_env=\"No\"\n| table resourceId has_owner has_env\n```\n\nUnderstanding this SPL\n\n**Cloud Resource Tagging Compliance** — Untagged resources can't be tracked for cost allocation, compliance, or ownership. Tagging compliance is foundational for cloud governance.\n\nDocumented **Data sources**: Cloud resource inventories, Config/Policy compliance. **App/TA** (typical add-on context): Cloud provider TAs, Config rules. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:config:notification, AWS::EC2::Instance. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:config:notification\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• `eval` defines or adjusts **has_owner** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **has_env** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where has_owner=\"No\" OR has_env=\"No\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cloud Resource Tagging Compliance**): table resourceId has_owner has_env\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (resource, missing tags), Pie chart (compliant %), Bar chart by tag.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Untagged resources can't be tracked for cost allocation, compliance, or ownership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.5",
              "n": "Cloud Resource Inventory and Drift Summary",
              "c": "medium",
              "f": "intermediate",
              "v": "Unified inventory of resources across AWS/Azure/GCP supports compliance, cost, and drift detection. Drift summary highlights resources changed outside IaC.",
              "t": "Combined cloud TAs, Config/Policy exports, or third-party CSPM",
              "d": "AWS Config, Azure Resource Graph, GCP Asset Inventory (or provider APIs)",
              "q": "index=aws OR index=azure OR index=gcp\n| eval cloud=case(index=\"aws\",\"AWS\", index=\"azure\",\"Azure\", index=\"gcp\",\"GCP\")\n| eval resource_type=coalesce(resourceType, type, resource.type)\n| stats dc(resourceId) as resource_count values(cloud) as clouds by resource_type\n| sort -resource_count",
              "m": "Export resource inventory from each provider (Config snapshot, Resource Graph query, Asset Inventory API) to S3/storage or stream to Splunk. Normalize resource type and tags. Dashboard resource count by type and cloud. Compare with IaC state for drift.",
              "z": "Table (type, cloud, count), Stacked bar (resources by cloud), Pie chart (resource distribution).",
              "kfp": "A plan can show a diff right after a release while state catches up, or when we intentionally tuned something in the console during an emergency. We confirm the workspace and the window before we treat it as unplanned drift.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Combined cloud TAs, Config/Policy exports, or third-party CSPM.\n• Ensure the following data sources are available: AWS Config, Azure Resource Graph, GCP Asset Inventory (or provider APIs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport resource inventory from each provider (Config snapshot, Resource Graph query, Asset Inventory API) to S3/storage or stream to Splunk. Normalize resource type and tags. Dashboard resource count by type and cloud. Compare with IaC state for drift.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws OR index=azure OR index=gcp\n| eval cloud=case(index=\"aws\",\"AWS\", index=\"azure\",\"Azure\", index=\"gcp\",\"GCP\")\n| eval resource_type=coalesce(resourceType, type, resource.type)\n| stats dc(resourceId) as resource_count values(cloud) as clouds by resource_type\n| sort -resource_count\n```\n\nUnderstanding this SPL\n\n**Cloud Resource Inventory and Drift Summary** — Unified inventory of resources across AWS/Azure/GCP supports compliance, cost, and drift detection. Drift summary highlights resources changed outside IaC.\n\nDocumented **Data sources**: AWS Config, Azure Resource Graph, GCP Asset Inventory (or provider APIs). **App/TA** (typical add-on context): Combined cloud TAs, Config/Policy exports, or third-party CSPM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, azure, gcp.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=azure, index=gcp. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cloud** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **resource_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by resource_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (type, cloud, count), Stacked bar (resources by cloud), Pie chart (resource distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Unified inventory of resources across AWS/Azure/GCP supports compliance, cost, and drift detection.",
              "mtype": [
                "Security",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.6",
              "n": "Multi-Cloud Security Posture (CSPM) Findings",
              "c": "high",
              "f": "intermediate",
              "v": "CSPM tools (or native Security Hub, Defender, SCC) produce findings across clouds. Centralizing in Splunk enables unified prioritization and remediation tracking.",
              "t": "Splunk TAs for each cloud, or CSPM product integration (e.g. Prisma Cloud, Wiz)",
              "d": "AWS Security Hub, Azure Defender/Security Center, GCP Security Command Center, or third-party CSPM API",
              "q": "index=security (sourcetype=aws:securityhub OR sourcetype=azure:defender OR sourcetype=gcp:scc)\n| eval cloud=case(like(sourcetype, \"aws%\"), \"AWS\", like(sourcetype, \"azure%\"), \"Azure\", like(sourcetype, \"gcp%\"), \"GCP\", true(), \"Other\")\n| eval severity=coalesce(severity, Severity, 'finding.severity')\n| where severity=\"CRITICAL\" OR severity=\"HIGH\"\n| stats count by cloud, severity, finding_type\n| sort -count",
              "m": "Ingest Security Hub, Defender for Cloud, and SCC findings into a common index. Normalize severity and finding type. Alert on new critical/high. Dashboard open findings by cloud, severity, and category (e.g. encryption, networking).",
              "z": "Table (cloud, severity, type, count), Bar chart (findings by cloud), Trend line (findings over time).",
              "kfp": "A fresh scanner run or a vendor re-rate can resurface a finding we already know about. We deduplicate on resource and time and we only escalate when the exposure is new or the severity stepped up.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk TAs for each cloud, or CSPM product integration (e.g. Prisma Cloud, Wiz).\n• Ensure the following data sources are available: AWS Security Hub, Azure Defender/Security Center, GCP Security Command Center, or third-party CSPM API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Security Hub, Defender for Cloud, and SCC findings into a common index. Normalize severity and finding type. Alert on new critical/high. Dashboard open findings by cloud, severity, and category (e.g. encryption, networking).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security (sourcetype=aws:securityhub OR sourcetype=azure:defender OR sourcetype=gcp:scc)\n| eval cloud=case(like(sourcetype, \"aws%\"), \"AWS\", like(sourcetype, \"azure%\"), \"Azure\", like(sourcetype, \"gcp%\"), \"GCP\", true(), \"Other\")\n| eval severity=coalesce(severity, Severity, 'finding.severity')\n| where severity=\"CRITICAL\" OR severity=\"HIGH\"\n| stats count by cloud, severity, finding_type\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Multi-Cloud Security Posture (CSPM) Findings** — CSPM tools (or native Security Hub, Defender, SCC) produce findings across clouds. Centralizing in Splunk enables unified prioritization and remediation tracking.\n\nDocumented **Data sources**: AWS Security Hub, Azure Defender/Security Center, GCP Security Command Center, or third-party CSPM API. **App/TA** (typical add-on context): Splunk TAs for each cloud, or CSPM product integration (e.g. Prisma Cloud, Wiz). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: aws:securityhub, azure:defender, gcp:scc. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=aws:securityhub. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cloud** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where severity=\"CRITICAL\" OR severity=\"HIGH\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cloud, severity, finding_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (cloud, severity, type, count), Bar chart (findings by cloud), Trend line (findings over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "CSPM tools (or native Security Hub, Defender, SCC) produce findings across clouds.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix",
                "paloalto"
              ],
              "em": [
                "paloalto_prisma"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.7",
              "n": "Cross-Cloud Log Ingestion Pipeline Health",
              "c": "high",
              "f": "intermediate",
              "v": "Log pipelines (CloudTrail → S3 → Splunk, Event Hub → Splunk, Pub/Sub → Splunk) can break. Monitoring pipeline health ensures no audit or observability gaps.",
              "t": "Splunk _internal, ingest metrics, or custom heartbeat",
              "d": "Splunk ingest metrics (by source/sourcetype), heartbeat searches, or pipeline-specific metrics (e.g. S3 object count, Event Hub lag)",
              "q": "index=_internal source=*metrics* group=per_sourcetype_thruput\n| eval delay_minutes = (now() - _time) / 60\n| where delay_minutes > 15 AND (sourcetype=*aws* OR sourcetype=*azure* OR sourcetype=*gcp*)\n| table sourcetype last_time delay_minutes",
              "m": "Track last event time per cloud sourcetype (e.g. aws:cloudtrail, mscs:azure:audit, google:gcp:pubsub). Alert when no events received for >15–30 minutes. Monitor Event Hub consumer lag and Pub/Sub subscription backlog as pipeline indicators.",
              "z": "Table (sourcetype, last event, delay), Single value (stale pipelines), Timeline (ingest volume by source).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk _internal, ingest metrics, or custom heartbeat.\n• Ensure the following data sources are available: Splunk ingest metrics (by source/sourcetype), heartbeat searches, or pipeline-specific metrics (e.g. S3 object count, Event Hub lag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack last event time per cloud sourcetype (e.g. aws:cloudtrail, mscs:azure:audit, google:gcp:pubsub). Alert when no events received for >15–30 minutes. Monitor Event Hub consumer lag and Pub/Sub subscription backlog as pipeline indicators.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal source=*metrics* group=per_sourcetype_thruput\n| eval delay_minutes = (now() - _time) / 60\n| where delay_minutes > 15 AND (sourcetype=*aws* OR sourcetype=*azure* OR sourcetype=*gcp*)\n| table sourcetype last_time delay_minutes\n```\n\nUnderstanding this SPL\n\n**Cross-Cloud Log Ingestion Pipeline Health** — Log pipelines (CloudTrail → S3 → Splunk, Event Hub → Splunk, Pub/Sub → Splunk) can break. Monitoring pipeline health ensures no audit or observability gaps.\n\nDocumented **Data sources**: Splunk ingest metrics (by source/sourcetype), heartbeat searches, or pipeline-specific metrics (e.g. S3 object count, Event Hub lag). **App/TA** (typical add-on context): Splunk _internal, ingest metrics, or custom heartbeat. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **delay_minutes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delay_minutes > 15 AND (sourcetype=*aws* OR sourcetype=*azure* OR sourcetype=*gcp*)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cross-Cloud Log Ingestion Pipeline Health**): table sourcetype last_time delay_minutes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sourcetype, last event, delay), Single value (stale pipelines), Timeline (ingest volume by source).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Log pipelines (CloudTrail → cloud storage → our monitoring platform, Event Hub → our monitoring platform, Pub/Sub → our monitoring platform) can break.",
              "mtype": [
                "Security",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.8",
              "n": "Cloud Spend by Tag or Project (Chargeback)",
              "c": "medium",
              "f": "intermediate",
              "v": "Allocating cost by tag (AWS/Azure) or project/label (GCP) enables chargeback and showback. Supports budget accountability and optimization by team.",
              "t": "Combined cloud TAs, CUR, Azure Cost Management export, GCP Billing export",
              "d": "AWS CUR (with tag allocation), Azure Cost Management (by tag/resource group), GCP Billing (by project/labels)",
              "q": "index=aws sourcetype=\"aws:billing\"\n| spath path=resourceTags output=tags\n| mvexpand tags limit=500\n| rex field=tags \"^(?<tag_key>[^:]+):(?<tag_value>.+)$\"\n| stats sum(BlendedCost) as cost by tag_key tag_value\n| where tag_key=\"Owner\" OR tag_key=\"Team\"\n| sort -cost",
              "m": "Ingest billing data with tag/project dimensions. Normalize tag keys (e.g. Owner, Team, Environment). Dashboard cost by tag/project and trend. Set budget alerts per tag/project. Reconcile with actual invoices.",
              "z": "Stacked bar (cost by tag value), Table (tag, cost, % of total), Line chart (cost trend by team).",
              "kfp": "Cloud billing and cost files often arrive a day or more late, include credits and refunds, and use tags that do not line up with app names, so a spike is not always a real overrun. We narrow alerts to sustained trends and we exclude test or sandbox projects when we can.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Combined cloud TAs, CUR, Azure Cost Management export, GCP Billing export.\n• Ensure the following data sources are available: AWS CUR (with tag allocation), Azure Cost Management (by tag/resource group), GCP Billing (by project/labels).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest billing data with tag/project dimensions. Normalize tag keys (e.g. Owner, Team, Environment). Dashboard cost by tag/project and trend. Set budget alerts per tag/project. Reconcile with actual invoices.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:billing\"\n| spath path=resourceTags output=tags\n| mvexpand tags limit=500\n| rex field=tags \"^(?<tag_key>[^:]+):(?<tag_value>.+)$\"\n| stats sum(BlendedCost) as cost by tag_key tag_value\n| where tag_key=\"Owner\" OR tag_key=\"Team\"\n| sort -cost\n```\n\nUnderstanding this SPL\n\n**Cloud Spend by Tag or Project (Chargeback)** — Allocating cost by tag (AWS/Azure) or project/label (GCP) enables chargeback and showback. Supports budget accountability and optimization by team.\n\nDocumented **Data sources**: AWS CUR (with tag allocation), Azure Cost Management (by tag/resource group), GCP Billing (by project/labels). **App/TA** (typical add-on context): Combined cloud TAs, CUR, Azure Cost Management export, GCP Billing export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:billing. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:billing\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by tag_key tag_value** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where tag_key=\"Owner\" OR tag_key=\"Team\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (cost by tag value), Table (tag, cost, % of total), Line chart (cost trend by team).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Allocating cost by tag (AWS/Azure) or project/label (GCP) enables chargeback and showback.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "gcp"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.9",
              "n": "Reserved Capacity and Savings Plan Utilization (Multi-Cloud)",
              "c": "low",
              "f": "intermediate",
              "v": "AWS RIs/SPs, Azure Reservations, and GCP Committed Use discounts reduce cost when utilized. Low utilization wastes commitment spend.",
              "t": "Combined cloud TAs, billing and usage exports",
              "d": "AWS CUR (RI/SP usage), Azure Cost Management (reservation utilization), GCP Committed Use reports",
              "q": "index=aws sourcetype=\"aws:billing\" lineItem_LineItemType=*Reserved* OR lineItem_LineItemType=*Savings*\n| stats sum(lineItem_UnblendedCost) as cost sum(lineItem_UsageAmount) as usage by product_instanceType reservation_ReservationARN\n| eval utilization_pct = usage / reserved_units * 100\n| where utilization_pct < 70",
              "m": "Ingest reservation and usage data from each provider. Calculate utilization (used vs. committed). Dashboard utilization by type and account/project. Alert when utilization < 70% to trigger right-sizing or exchange.",
              "z": "Table (reservation, type, utilization %), Gauge (overall utilization), Bar chart (waste by type).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Combined cloud TAs, billing and usage exports.\n• Ensure the following data sources are available: AWS CUR (RI/SP usage), Azure Cost Management (reservation utilization), GCP Committed Use reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest reservation and usage data from each provider. Calculate utilization (used vs. committed). Dashboard utilization by type and account/project. Alert when utilization < 70% to trigger right-sizing or exchange.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:billing\" lineItem_LineItemType=*Reserved* OR lineItem_LineItemType=*Savings*\n| stats sum(lineItem_UnblendedCost) as cost sum(lineItem_UsageAmount) as usage by product_instanceType reservation_ReservationARN\n| eval utilization_pct = usage / reserved_units * 100\n| where utilization_pct < 70\n```\n\nUnderstanding this SPL\n\n**Reserved Capacity and Savings Plan Utilization (Multi-Cloud)** — AWS RIs/SPs, Azure Reservations, and GCP Committed Use discounts reduce cost when utilized. Low utilization wastes commitment spend.\n\nDocumented **Data sources**: AWS CUR (RI/SP usage), Azure Cost Management (reservation utilization), GCP Committed Use reports. **App/TA** (typical add-on context): Combined cloud TAs, billing and usage exports. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:billing. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:billing\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by product_instanceType reservation_ReservationARN** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where utilization_pct < 70` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (reservation, type, utilization %), Gauge (overall utilization), Bar chart (waste by type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "AWS RIs/SPs, Azure Reservations, and GCP Committed Use discounts reduce cost when utilized.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.10",
              "n": "Cloud API Rate Limit and Throttling (429) Trends",
              "c": "medium",
              "f": "intermediate",
              "v": "429 (Too Many Requests) from cloud APIs indicate client or provider throttling. Tracking trends supports quota increase and architecture changes.",
              "t": "Splunk TAs for each cloud (CloudTrail, Activity Log, GCP audit)",
              "d": "CloudTrail (errorCode=ThrottlingException), Azure Activity Log (status=Throttled), GCP audit (status 429)",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" errorCode=\"ThrottlingException\"\n| stats count by eventName userIdentity.principalId\n| sort -count",
              "m": "Search audit logs for throttling errors (AWS ThrottlingException, Azure 429, GCP RESOURCE_EXHAUSTED). Dashboard by API and principal. Request quota increase when sustained. Consider exponential backoff and request batching in applications.",
              "z": "Table (API, principal, 429 count), Line chart (429 over time), Bar chart (top throttled APIs).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1526",
                "T1580"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk TAs for each cloud (CloudTrail, Activity Log, GCP audit).\n• Ensure the following data sources are available: CloudTrail (errorCode=ThrottlingException), Azure Activity Log (status=Throttled), GCP audit (status 429).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSearch audit logs for throttling errors (AWS ThrottlingException, Azure 429, GCP RESOURCE_EXHAUSTED). Dashboard by API and principal. Request quota increase when sustained. Consider exponential backoff and request batching in applications.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" errorCode=\"ThrottlingException\"\n| stats count by eventName userIdentity.principalId\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cloud API Rate Limit and Throttling (429) Trends** — 429 (Too Many Requests) from cloud APIs indicate client or provider throttling. Tracking trends supports quota increase and architecture changes.\n\nDocumented **Data sources**: CloudTrail (errorCode=ThrottlingException), Azure Activity Log (status=Throttled), GCP audit (status 429). **App/TA** (typical add-on context): Splunk TAs for each cloud (CloudTrail, Activity Log, GCP audit). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by eventName userIdentity.principalId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (API, principal, 429 count), Line chart (429 over time), Bar chart (top throttled APIs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "429 (Too Many Requests) from cloud indicate client or provider throttling.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "gcp"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.11",
              "n": "Cloud Encryption and Key Rotation Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Unencrypted resources or keys past rotation date violate compliance. Central view across clouds supports audit and remediation.",
              "t": "Config/Security Hub, Defender, SCC, or CSPM",
              "d": "AWS Config (encryption rules), Azure Policy (encryption compliance), GCP SCC (crypto key rotation)",
              "q": "index=aws sourcetype=\"aws:config:notification\" configRuleList{}.configRuleName=*encryption*\n| search complianceType=\"NON_COMPLIANT\"\n| table _time resourceType resourceId configRuleList{}.configRuleName",
              "m": "Use native compliance (Config rules, Azure Policy, SCC) or CSPM to evaluate encryption and key rotation. Ingest findings. Dashboard non-compliant resources by rule and cloud. Alert on new non-compliant critical resources.",
              "z": "Table (resource, rule, cloud, status), Pie chart (compliant %), Bar chart (non-compliant by rule).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1098.001",
                "T1530"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Config/Security Hub, Defender, SCC, or CSPM.\n• Ensure the following data sources are available: AWS Config (encryption rules), Azure Policy (encryption compliance), GCP SCC (crypto key rotation).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse native compliance (Config rules, Azure Policy, SCC) or CSPM to evaluate encryption and key rotation. Ingest findings. Dashboard non-compliant resources by rule and cloud. Alert on new non-compliant critical resources.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:config:notification\" configRuleList{}.configRuleName=*encryption*\n| search complianceType=\"NON_COMPLIANT\"\n| table _time resourceType resourceId configRuleList{}.configRuleName\n```\n\nUnderstanding this SPL\n\n**Cloud Encryption and Key Rotation Compliance** — Unencrypted resources or keys past rotation date violate compliance. Central view across clouds supports audit and remediation.\n\nDocumented **Data sources**: AWS Config (encryption rules), Azure Policy (encryption compliance), GCP SCC (crypto key rotation). **App/TA** (typical add-on context): Config/Security Hub, Defender, SCC, or CSPM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:config:notification. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:config:notification\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Cloud Encryption and Key Rotation Compliance**): table _time resourceType resourceId configRuleList{}.configRuleName\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (resource, rule, cloud, status), Pie chart (compliant %), Bar chart (non-compliant by rule).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Unencrypted resources or keys past rotation date violate compliance.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.12",
              "n": "Multi-Cloud Identity and Access Anomalies",
              "c": "high",
              "f": "advanced",
              "v": "Correlating identity activity across AWS IAM, Entra ID, and GCP IAM detects cross-cloud abuse and compromised identities.",
              "t": "Combined cloud TAs, identity lookup tables",
              "d": "CloudTrail (IAM), Entra ID sign-in/audit, GCP audit (IAM)",
              "q": "index=aws OR index=azure OR index=gcp\n| eval user=coalesce(userIdentity.principalId, userPrincipalName, protoPayload.authenticationInfo.principalEmail)\n| lookup identity_normalized user OUTPUT normalized_id\n| stats count dc(index) as clouds values(index) as indices by normalized_id\n| where clouds >= 2\n| sort -count",
              "m": "Normalize principal IDs to a common identity (e.g. email). Ingest IAM and sign-in events from all clouds. Baseline activity per identity; alert on first-time cross-cloud activity or impossible travel across cloud regions. Use for insider threat and compromise detection.",
              "z": "Table (identity, clouds, activity count), Sankey (identity to cloud to action), Timeline (cross-cloud events).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1078.004",
                "T1087.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Combined cloud TAs, identity lookup tables.\n• Ensure the following data sources are available: CloudTrail (IAM), Entra ID sign-in/audit, GCP audit (IAM).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize principal IDs to a common identity (e.g. email). Ingest IAM and sign-in events from all clouds. Baseline activity per identity; alert on first-time cross-cloud activity or impossible travel across cloud regions. Use for insider threat and compromise detection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws OR index=azure OR index=gcp\n| eval user=coalesce(userIdentity.principalId, userPrincipalName, protoPayload.authenticationInfo.principalEmail)\n| lookup identity_normalized user OUTPUT normalized_id\n| stats count dc(index) as clouds values(index) as indices by normalized_id\n| where clouds >= 2\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Multi-Cloud Identity and Access Anomalies** — Correlating identity activity across AWS IAM, Entra ID, and GCP IAM detects cross-cloud abuse and compromised identities.\n\nDocumented **Data sources**: CloudTrail (IAM), Entra ID sign-in/audit, GCP audit (IAM). **App/TA** (typical add-on context): Combined cloud TAs, identity lookup tables. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, azure, gcp.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=azure, index=gcp. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by normalized_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where clouds >= 2` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (identity, clouds, activity count), Sankey (identity to cloud to action), Timeline (cross-cloud events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Correlating identity activity across AWS access control, Entra ID, and GCP access control detects cross-cloud abuse and compromised identities.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.13",
              "n": "Cloud Provider Status and Incident Correlation",
              "c": "medium",
              "f": "intermediate",
              "v": "When AWS/Azure/GCP have outages, correlating provider status with your alerts prevents wasted troubleshooting and supports customer communication.",
              "t": "Custom input (status page API or RSS), or status page integration",
              "d": "AWS Service Health Dashboard, Azure Status, GCP Status (APIs or scraped)",
              "q": "index=cloud_status provider=* status=*impact*\n| table _time provider service status description\n| sort -_time",
              "m": "Poll provider status APIs (e.g. status.aws.amazon.com, status.azure.com) or ingest RSS. Normalize to common schema. When your alerts spike, search status index for same time window and provider. Dashboard active incidents by provider.",
              "z": "Table (provider, service, status), Timeline (incidents), Single value (active incidents).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom input (status page API or RSS), or status page integration.\n• Ensure the following data sources are available: AWS Service Health Dashboard, Azure Status, GCP Status (APIs or scraped).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll provider status APIs (e.g. status.aws.amazon.com, status.azure.com) or ingest RSS. Normalize to common schema. When your alerts spike, search status index for same time window and provider. Dashboard active incidents by provider.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_status provider=* status=*impact*\n| table _time provider service status description\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Cloud Provider Status and Incident Correlation** — When AWS/Azure/GCP have outages, correlating provider status with your alerts prevents wasted troubleshooting and supports customer communication.\n\nDocumented **Data sources**: AWS Service Health Dashboard, Azure Status, GCP Status (APIs or scraped). **App/TA** (typical add-on context): Custom input (status page API or RSS), or status page integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_status.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_status. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Cloud Provider Status and Incident Correlation**): table _time provider service status description\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (provider, service, status), Timeline (incidents), Single value (active incidents).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "When AWS/Azure/GCP have outages, correlating provider status with your alerts prevents wasted troubleshooting and supports customer communication.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.14",
              "n": "Cloud Trail and Diagnostic Logging Gaps",
              "c": "critical",
              "f": "intermediate",
              "v": "Missing or disabled CloudTrail, Activity Log export, or GCP audit log sink creates blind spots. Detecting gaps ensures full audit coverage.",
              "t": "Config, Azure Policy, GCP Asset Inventory, or custom API checks",
              "d": "AWS Config (cloudtrail-enabled), Azure Policy (diagnostic setting compliance), GCP log sink audit",
              "q": "index=aws sourcetype=\"aws:config:notification\" resourceType=\"AWS::CloudTrail::Trail\"\n| search configuration.isMultiRegionTrail=false OR configuration.logFileValidationEnabled=false\n| table resourceId configuration.isMultiRegionTrail configuration.logFileValidationEnabled",
              "m": "Use Config rules (e.g. cloudtrail-enabled, multi-region), Azure Policy (diagnostic logs to Event Hub), or GCP org policy for log sinks. Ingest compliance state. Alert when any account/region has trail disabled or logging gap. Dashboard coverage by account and log type.",
              "z": "Table (account, region, trail, multi-region, validation), Status (coverage %), Bar chart (gaps by account).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Config, Azure Policy, GCP Asset Inventory, or custom API checks.\n• Ensure the following data sources are available: AWS Config (cloudtrail-enabled), Azure Policy (diagnostic setting compliance), GCP log sink audit.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Config rules (e.g. cloudtrail-enabled, multi-region), Azure Policy (diagnostic logs to Event Hub), or GCP org policy for log sinks. Ingest compliance state. Alert when any account/region has trail disabled or logging gap. Dashboard coverage by account and log type.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:config:notification\" resourceType=\"AWS::CloudTrail::Trail\"\n| search configuration.isMultiRegionTrail=false OR configuration.logFileValidationEnabled=false\n| table resourceId configuration.isMultiRegionTrail configuration.logFileValidationEnabled\n```\n\nUnderstanding this SPL\n\n**Cloud Trail and Diagnostic Logging Gaps** — Missing or disabled CloudTrail, Activity Log export, or GCP audit log sink creates blind spots. Detecting gaps ensures full audit coverage.\n\nDocumented **Data sources**: AWS Config (cloudtrail-enabled), Azure Policy (diagnostic setting compliance), GCP log sink audit. **App/TA** (typical add-on context): Config, Azure Policy, GCP Asset Inventory, or custom API checks. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:config:notification, AWS::CloudTrail::Trail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:config:notification\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Cloud Trail and Diagnostic Logging Gaps**): table resourceId configuration.isMultiRegionTrail configuration.logFileValidationEnabled\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (account, region, trail, multi-region, validation), Status (coverage %), Bar chart (gaps by account).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Missing or disabled CloudTrail, Activity Log export, or GCP audit log sink creates blind spots.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "gcp"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.15",
              "n": "Cloud Resource Tag Compliance and Drift",
              "c": "medium",
              "f": "intermediate",
              "v": "Missing or inconsistent tags (Owner, CostCenter, Environment) block cost allocation, automation, and governance. Detecting untagged or non-compliant resources supports tag policy enforcement.",
              "t": "`Splunk_TA_aws`, Azure Resource Graph, GCP Asset Inventory",
              "d": "AWS Config (resource compliance), Azure Policy compliance, GCP labels API",
              "q": "index=aws sourcetype=\"aws:config:resource\" tag_compliance=\"non_compliant\"\n| stats count by resourceType, account_id, region\n| where count > 0\n| sort -count",
              "m": "Use AWS Config rules (required-tags), Azure Policy (e.g. RequireTagAndValue), or GCP org policy for label requirements. Ingest compliance results. Alert when net new untagged resources appear or compliance score drops below threshold. Dashboard by OU/account and resource type.",
              "z": "Table (account, resource type, non-compliant count), Gauge (tag compliance %), Bar chart by tag key missing.",
              "kfp": "A plan can show a diff right after a release while state catches up, or when we intentionally tuned something in the console during an emergency. We confirm the workspace and the window before we treat it as unplanned drift.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1578"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, Azure Resource Graph, GCP Asset Inventory.\n• Ensure the following data sources are available: AWS Config (resource compliance), Azure Policy compliance, GCP labels API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse AWS Config rules (required-tags), Azure Policy (e.g. RequireTagAndValue), or GCP org policy for label requirements. Ingest compliance results. Alert when net new untagged resources appear or compliance score drops below threshold. Dashboard by OU/account and resource type.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:config:resource\" tag_compliance=\"non_compliant\"\n| stats count by resourceType, account_id, region\n| where count > 0\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cloud Resource Tag Compliance and Drift** — Missing or inconsistent tags (Owner, CostCenter, Environment) block cost allocation, automation, and governance. Detecting untagged or non-compliant resources supports tag policy enforcement.\n\nDocumented **Data sources**: AWS Config (resource compliance), Azure Policy compliance, GCP labels API. **App/TA** (typical add-on context): `Splunk_TA_aws`, Azure Resource Graph, GCP Asset Inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:config:resource. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:config:resource\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resourceType, account_id, region** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (account, resource type, non-compliant count), Gauge (tag compliance %), Bar chart by tag key missing.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Missing or inconsistent tags (Owner, CostCenter, Environment) block cost allocation, automation, and governance.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.16",
              "n": "Cross-Region Replication and Backup Verification",
              "c": "high",
              "f": "intermediate",
              "v": "Replication lag or failed backup copies leave RPO/RTO at risk. Monitoring ensures DR readiness and supports audit of backup and replication jobs.",
              "t": "`Splunk_TA_aws`, Azure Monitor, GCP operations",
              "d": "S3 replication metrics, RDS cross-region replica lag, Azure Backup job status, GCP snapshot schedule",
              "q": "index=aws sourcetype=\"aws:cloudwatch:metric\" metric_name=ReplicationLatency namespace=AWS/S3\n| stats latest(value) as lag_seconds by dimension.BucketName, dimension.DestinationBucket\n| where lag_seconds > 900\n| table dimension.BucketName dimension.DestinationBucket lag_seconds",
              "m": "Collect S3 ReplicationTime and ReplicationLatency from CloudWatch. For RDS, use ReplicaLag. For Azure, ingest Backup job state from Monitor or automation runbooks. Alert when replication lag exceeds RPO (e.g. 15 min) or backup job fails.",
              "z": "Line chart (replication lag by bucket/replica), Table (failed backup jobs), Single value (max lag).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1578.001",
                "T1537"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, Azure Monitor, GCP operations.\n• Ensure the following data sources are available: S3 replication metrics, RDS cross-region replica lag, Azure Backup job status, GCP snapshot schedule.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect S3 ReplicationTime and ReplicationLatency from CloudWatch. For RDS, use ReplicaLag. For Azure, ingest Backup job state from Monitor or automation runbooks. Alert when replication lag exceeds RPO (e.g. 15 min) or backup job fails.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch:metric\" metric_name=ReplicationLatency namespace=AWS/S3\n| stats latest(value) as lag_seconds by dimension.BucketName, dimension.DestinationBucket\n| where lag_seconds > 900\n| table dimension.BucketName dimension.DestinationBucket lag_seconds\n```\n\nUnderstanding this SPL\n\n**Cross-Region Replication and Backup Verification** — Replication lag or failed backup copies leave RPO/RTO at risk. Monitoring ensures DR readiness and supports audit of backup and replication jobs.\n\nDocumented **Data sources**: S3 replication metrics, RDS cross-region replica lag, Azure Backup job status, GCP snapshot schedule. **App/TA** (typical add-on context): `Splunk_TA_aws`, Azure Monitor, GCP operations. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch:metric. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch:metric\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dimension.BucketName, dimension.DestinationBucket** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where lag_seconds > 900` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cross-Region Replication and Backup Verification**): table dimension.BucketName dimension.DestinationBucket lag_seconds\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (replication lag by bucket/replica), Table (failed backup jobs), Single value (max lag).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Replication lag or failed backup copies leave recovery point/recovery time at risk.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.17",
              "n": "Cloud Quota and Service Limit Utilization",
              "c": "high",
              "f": "intermediate",
              "v": "Hitting account or region quotas (e.g. EC2 instance limit, VPCs, EBS volumes) blocks provisioning and causes runtime failures. Proactive tracking supports limit increase requests.",
              "t": "`Splunk_TA_aws`, Service Quotas API, Azure quotas, GCP quotas",
              "d": "AWS Service Quotas API, Trusted Advisor (limits), Azure usage and quotas, GCP quota API",
              "q": "index=aws sourcetype=\"aws:service_quotas\"\n| eval usage_pct=round(usage/value*100, 1)\n| where usage_pct > 80\n| table quota_name region usage value usage_pct\n| sort -usage_pct",
              "m": "Poll Service Quotas (or equivalent) for key limits (EC2, EBS, VPC, Lambda concurrency). Ingest current usage and quota value. Alert when utilization exceeds 80%. Dashboard all quotas with trend.",
              "z": "Table (quota, usage %, limit), Gauge per critical quota, Bar chart (top near-limit quotas).",
              "kfp": "Planned load tests, new regions, and one-off batch jobs can legitimately run near limits. We treat a breach as normal when it is tied to a change ticket and a known window, and we reset after the window ends.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, Service Quotas API, Azure quotas, GCP quotas.\n• Ensure the following data sources are available: AWS Service Quotas API, Trusted Advisor (limits), Azure usage and quotas, GCP quota API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Service Quotas (or equivalent) for key limits (EC2, EBS, VPC, Lambda concurrency). Ingest current usage and quota value. Alert when utilization exceeds 80%. Dashboard all quotas with trend.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:service_quotas\"\n| eval usage_pct=round(usage/value*100, 1)\n| where usage_pct > 80\n| table quota_name region usage value usage_pct\n| sort -usage_pct\n```\n\nUnderstanding this SPL\n\n**Cloud Quota and Service Limit Utilization** — Hitting account or region quotas (e.g. EC2 instance limit, VPCs, EBS volumes) blocks provisioning and causes runtime failures. Proactive tracking supports limit increase requests.\n\nDocumented **Data sources**: AWS Service Quotas API, Trusted Advisor (limits), Azure usage and quotas, GCP quota API. **App/TA** (typical add-on context): `Splunk_TA_aws`, Service Quotas API, Azure quotas, GCP quotas. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:service_quotas. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:service_quotas\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **usage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where usage_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cloud Quota and Service Limit Utilization**): table quota_name region usage value usage_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (quota, usage %, limit), Gauge per critical quota, Bar chart (top near-limit quotas).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Hitting account or region quotas (cloud servers instance limit, VPCs, EBS volumes) blocks provisioning and causes runtime failures.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.18",
              "n": "Cloud Endpoint and DNS Resolution Health",
              "c": "medium",
              "f": "intermediate",
              "v": "PrivateLink, VPC endpoints, and private DNS zones enable secure access to AWS/Azure/GCP services. Endpoint or DNS failures cause application outages that are hard to diagnose.",
              "t": "Custom scripted input (nslookup, curl to endpoint), CloudWatch Route53 health",
              "d": "Route53 Resolver query logs, VPC endpoint connection acceptance, Azure Private Endpoint status",
              "q": "index=cloud sourcetype=\"endpoint:health\"\n| stats latest(connect_ok) as ok, latest(rtt_ms) as rtt by endpoint_id, vpc_id\n| where ok != 1 OR rtt > 500\n| table endpoint_id vpc_id ok rtt _time",
              "m": "Run periodic probes (DNS lookup for private hosted zone, HTTPS to VPC endpoint) from a central host or Lambda. Ingest success/failure and latency. Alert when endpoint is unreachable or RTT exceeds threshold.",
              "z": "Status grid (endpoint, OK/fail), Table (endpoint, RTT), Line chart (RTT over time).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1580"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (nslookup, curl to endpoint), CloudWatch Route53 health.\n• Ensure the following data sources are available: Route53 Resolver query logs, VPC endpoint connection acceptance, Azure Private Endpoint status.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun periodic probes (DNS lookup for private hosted zone, HTTPS to VPC endpoint) from a central host or Lambda. Ingest success/failure and latency. Alert when endpoint is unreachable or RTT exceeds threshold.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"endpoint:health\"\n| stats latest(connect_ok) as ok, latest(rtt_ms) as rtt by endpoint_id, vpc_id\n| where ok != 1 OR rtt > 500\n| table endpoint_id vpc_id ok rtt _time\n```\n\nUnderstanding this SPL\n\n**Cloud Endpoint and DNS Resolution Health** — PrivateLink, VPC endpoints, and private DNS zones enable secure access to AWS/Azure/GCP services. Endpoint or DNS failures cause application outages that are hard to diagnose.\n\nDocumented **Data sources**: Route53 Resolver query logs, VPC endpoint connection acceptance, Azure Private Endpoint status. **App/TA** (typical add-on context): Custom scripted input (nslookup, curl to endpoint), CloudWatch Route53 health. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: endpoint:health. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"endpoint:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by endpoint_id, vpc_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ok != 1 OR rtt > 500` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cloud Endpoint and DNS Resolution Health**): table endpoint_id vpc_id ok rtt _time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (endpoint, OK/fail), Table (endpoint, RTT), Line chart (RTT over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "PrivateLink, VPC endpoints, and private zones enable secure access to AWS/Azure/GCP services.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.19",
              "n": "Multi-Cloud Cost Anomaly and Spike Detection",
              "c": "high",
              "f": "advanced",
              "v": "Sudden cost spikes across AWS, Azure, or GCP indicate misconfiguration, abuse, or runaway resources. Early detection limits financial impact and supports FinOps review.",
              "t": "`Splunk_TA_aws`, Azure Cost Management export, GCP Billing export",
              "d": "AWS CUR, Azure Cost Management, GCP Billing export (BigQuery or file)",
              "q": "index=cloud sourcetype=\"billing:daily\" (provider=aws OR provider=azure OR provider=gcp)\n| timechart span=1d sum(unblended_cost) as cost by provider\n| eventstats avg(cost) as avg_cost, stdev(cost) as std_cost by provider\n| eval z_score=if(std_cost>0, (cost-avg_cost)/std_cost, 0)\n| where z_score > 2\n| table _time provider cost avg_cost z_score",
              "m": "Ingest daily (or hourly) cost by provider and service. Compute rolling mean and standard deviation per provider. Alert when daily cost exceeds 2 standard deviations. Correlate with resource inventory for top contributors.",
              "z": "Line chart (cost by provider over time), Table (anomalous days), Single value (current day vs baseline).",
              "kfp": "Cloud billing and cost files often arrive a day or more late, include credits and refunds, and use tags that do not line up with app names, so a spike is not always a real overrun. We narrow alerts to sustained trends and we exclude test or sandbox projects when we can.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1580"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, Azure Cost Management export, GCP Billing export.\n• Ensure the following data sources are available: AWS CUR, Azure Cost Management, GCP Billing export (BigQuery or file).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest daily (or hourly) cost by provider and service. Compute rolling mean and standard deviation per provider. Alert when daily cost exceeds 2 standard deviations. Correlate with resource inventory for top contributors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"billing:daily\" (provider=aws OR provider=azure OR provider=gcp)\n| timechart span=1d sum(unblended_cost) as cost by provider\n| eventstats avg(cost) as avg_cost, stdev(cost) as std_cost by provider\n| eval z_score=if(std_cost>0, (cost-avg_cost)/std_cost, 0)\n| where z_score > 2\n| table _time provider cost avg_cost z_score\n```\n\nUnderstanding this SPL\n\n**Multi-Cloud Cost Anomaly and Spike Detection** — Sudden cost spikes across AWS, Azure, or GCP indicate misconfiguration, abuse, or runaway resources. Early detection limits financial impact and supports FinOps review.\n\nDocumented **Data sources**: AWS CUR, Azure Cost Management, GCP Billing export (BigQuery or file). **App/TA** (typical add-on context): `Splunk_TA_aws`, Azure Cost Management export, GCP Billing export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: billing:daily. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"billing:daily\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by provider** — ideal for trending and alerting on this use case.\n• `eventstats` rolls up events into metrics; results are split **by provider** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where z_score > 2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Multi-Cloud Cost Anomaly and Spike Detection**): table _time provider cost avg_cost z_score\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Line chart (cost by provider over time), Table (anomalous days), Single value (current day vs baseline).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Sudden cost spikes across AWS, Azure, or GCP indicate misconfiguration, abuse, or runaway resources.",
              "mtype": [
                "Security",
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.20",
              "n": "Multi-Cloud DNS Resolution Latency",
              "c": "medium",
              "f": "intermediate",
              "v": "Cross-provider DNS query performance comparison. Slow or failed resolution causes application timeouts and user experience degradation.",
              "t": "Custom scripted input (dig, nslookup)",
              "d": "DNS query timing from multiple vantage points (AWS, Azure, GCP, on-prem)",
              "q": "index=cloud sourcetype=\"dns:resolution\" \n| stats avg(resolution_ms) as avg_ms, max(resolution_ms) as max_ms, count as queries by provider, vantage_point, domain\n| where avg_ms > 200 OR max_ms > 1000\n| eval avg_ms=round(avg_ms, 1), max_ms=round(max_ms, 1)\n| table provider vantage_point domain queries avg_ms max_ms\n| sort -avg_ms",
              "m": "Run periodic DNS probes (dig, nslookup, or custom script) from Lambda, Azure Functions, Cloud Functions, or on-prem agents. Measure resolution time per domain. Ingest results via HEC with fields: provider, vantage_point, domain, resolution_ms, success. Alert when avg latency exceeds 200ms or failure rate > 5%. Compare providers for DNS migration decisions.",
              "z": "Line chart (resolution latency by provider and domain over time), Table (provider, domain, avg ms), Heat map (provider vs domain).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (dig, nslookup).\n• Ensure the following data sources are available: DNS query timing from multiple vantage points (AWS, Azure, GCP, on-prem).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun periodic DNS probes (dig, nslookup, or custom script) from Lambda, Azure Functions, Cloud Functions, or on-prem agents. Measure resolution time per domain. Ingest results via HEC with fields: provider, vantage_point, domain, resolution_ms, success. Alert when avg latency exceeds 200ms or failure rate > 5%. Compare providers for DNS migration decisions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"dns:resolution\" \n| stats avg(resolution_ms) as avg_ms, max(resolution_ms) as max_ms, count as queries by provider, vantage_point, domain\n| where avg_ms > 200 OR max_ms > 1000\n| eval avg_ms=round(avg_ms, 1), max_ms=round(max_ms, 1)\n| table provider vantage_point domain queries avg_ms max_ms\n| sort -avg_ms\n```\n\nUnderstanding this SPL\n\n**Multi-Cloud DNS Resolution Latency** — Cross-provider DNS query performance comparison. Slow or failed resolution causes application timeouts and user experience degradation.\n\nDocumented **Data sources**: DNS query timing from multiple vantage points (AWS, Azure, GCP, on-prem). **App/TA** (typical add-on context): Custom scripted input (dig, nslookup). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: dns:resolution. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"dns:resolution\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by provider, vantage_point, domain** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_ms > 200 OR max_ms > 1000` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **avg_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Multi-Cloud DNS Resolution Latency**): table provider vantage_point domain queries avg_ms max_ms\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (resolution latency by provider and domain over time), Table (provider, domain, avg ms), Heat map (provider vs domain).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cross-provider query performance comparison.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.21",
              "n": "Cloud Resource Tag Coverage Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Untagged or improperly tagged resources impact cost allocation, governance, and security. Compliance gaps block chargeback and policy enforcement.",
              "t": "`Splunk_TA_aws`, Azure inputs, GCP inputs",
              "d": "AWS Config rules (required-tags), Azure Policy (tag compliance), GCP Asset Inventory (resource metadata)",
              "q": "index=aws sourcetype=\"aws:config:notification\" configRuleName=\"*required-tags*\" complianceType=\"NON_COMPLIANT\"\n| eval provider=\"aws\", resource_type=coalesce(resourceType, configuration.resourceType)\n| stats count by provider, resource_type\n| sort -count",
              "m": "Enable AWS Config rule `required-tags` (or custom rule). Use Azure Policy for tag compliance. Export GCP Asset Inventory to BigQuery or Pub/Sub. Ingest compliance results in Splunk with normalized fields (provider, resource_type, compliance_status). For multi-cloud, use `index=cloud` and union searches per provider. Dashboard untagged resources by provider, resource type, and owner. Alert when critical resources (e.g. production EC2, storage) lack required tags (Environment, Owner, CostCenter).",
              "z": "Table (provider, resource type, compliance count), Pie chart (compliant vs non-compliant), Bar chart (non-compliant by tag key).",
              "kfp": "Inventories can skip a run when an API is throttled or a connector is in maintenance, which looks like a big drop. We check the ingest time and the last good snapshot before we call it shrinkage.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1578"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, Azure inputs, GCP inputs.\n• Ensure the following data sources are available: AWS Config rules (required-tags), Azure Policy (tag compliance), GCP Asset Inventory (resource metadata).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable AWS Config rule `required-tags` (or custom rule). Use Azure Policy for tag compliance. Export GCP Asset Inventory to BigQuery or Pub/Sub. Ingest compliance results in Splunk with normalized fields (provider, resource_type, compliance_status). For multi-cloud, use `index=cloud` and union searches per provider. Dashboard untagged resources by provider, resource type, and owner. Alert when critical resources (e.g. production EC2, storage) lack required tags (Environment, Owner, CostCenter).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:config:notification\" configRuleName=\"*required-tags*\" complianceType=\"NON_COMPLIANT\"\n| eval provider=\"aws\", resource_type=coalesce(resourceType, configuration.resourceType)\n| stats count by provider, resource_type\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cloud Resource Tag Coverage Trending** — Untagged or improperly tagged resources impact cost allocation, governance, and security. Compliance gaps block chargeback and policy enforcement.\n\nDocumented **Data sources**: AWS Config rules (required-tags), Azure Policy (tag compliance), GCP Asset Inventory (resource metadata). **App/TA** (typical add-on context): `Splunk_TA_aws`, Azure inputs, GCP inputs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:config:notification. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:config:notification\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **provider** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by provider, resource_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (provider, resource type, compliance count), Pie chart (compliant vs non-compliant), Bar chart (non-compliant by tag key).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Untagged or improperly tagged resources impact cost allocation, governance, and security.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.22",
              "n": "Cross-Cloud Identity Federation Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Federation misconfiguration or token abuse spans IdPs and cloud consoles; unified visibility reduces blind spots for lateral movement across AWS, Azure, and GCP.",
              "t": "`Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=aws:cloudtrail` (AssumeRoleWithSAML, federation), `sourcetype=mscs:azure:audit` (federated sign-ins), `sourcetype=google:gcp:pubsub:message` (SAML/OIDC audit)",
              "q": "(index=aws sourcetype=\"aws:cloudtrail\" eventName=\"AssumeRoleWithSAML\")\n OR (index=azure sourcetype=\"mscs:azure:audit\" identity.claims.http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier=)\n OR (index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.methodName=\"google.iam.admin.v1.IAM.SignBlob\")\n| eval cloud=case(isnotnull(index) AND index=\"aws\",\"aws\", index=\"azure\",\"azure\", index=\"gcp\",\"gcp\",1=1,\"unknown\")\n| bin _time span=1h\n| stats count by cloud, user, _time\n| sort -count",
              "m": "Normalize federated principal fields into a common `user` or `subject` via `eval`/`lookup`. Ingest IdP logs (Okta, Entra ID) via HEC if available and join on session ID. Alert on unusual federation volume, new IdP thumbprint, or cross-cloud sessions within minutes for the same user.",
              "z": "Table (cloud, user, count), Timeline (federation events), Sankey or chord (IdP to cloud role).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.004",
                "T1580"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail` (AssumeRoleWithSAML, federation), `sourcetype=mscs:azure:audit` (federated sign-ins), `sourcetype=google:gcp:pubsub:message` (SAML/OIDC audit).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize federated principal fields into a common `user` or `subject` via `eval`/`lookup`. Ingest IdP logs (Okta, Entra ID) via HEC if available and join on session ID. Alert on unusual federation volume, new IdP thumbprint, or cross-cloud sessions within minutes for the same user.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=aws sourcetype=\"aws:cloudtrail\" eventName=\"AssumeRoleWithSAML\")\n OR (index=azure sourcetype=\"mscs:azure:audit\" identity.claims.http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier=)\n OR (index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.methodName=\"google.iam.admin.v1.IAM.SignBlob\")\n| eval cloud=case(isnotnull(index) AND index=\"aws\",\"aws\", index=\"azure\",\"azure\", index=\"gcp\",\"gcp\",1=1,\"unknown\")\n| bin _time span=1h\n| stats count by cloud, user, _time\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cross-Cloud Identity Federation Monitoring** — Federation misconfiguration or token abuse spans IdPs and cloud consoles; unified visibility reduces blind spots for lateral movement across AWS, Azure, and GCP.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (AssumeRoleWithSAML, federation), `sourcetype=mscs:azure:audit` (federated sign-ins), `sourcetype=google:gcp:pubsub:message` (SAML/OIDC audit). **App/TA** (typical add-on context): `Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, azure, gcp; **sourcetype**: aws:cloudtrail, mscs:azure:audit, google:gcp:pubsub:message. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=azure, index=gcp, sourcetype=\"aws:cloudtrail\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cloud** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by cloud, user, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cross-Cloud Identity Federation Monitoring** — Federation misconfiguration or token abuse spans IdPs and cloud consoles; unified visibility reduces blind spots for lateral movement across AWS, Azure, and GCP.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (AssumeRoleWithSAML, federation), `sourcetype=mscs:azure:audit` (federated sign-ins), `sourcetype=google:gcp:pubsub:message` (SAML/OIDC audit). **App/TA** (typical add-on context): `Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cloud, user, count), Timeline (federation events), Sankey or chord (IdP to cloud role).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Federation misconfiguration or token abuse spans IdPs and cloud consoles; unified visibility reduces blind spots for lateral movement across AWS, Azure, and GCP.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "aws",
                "azure",
                "gcp",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.23",
              "n": "Multi-Cloud DNS Resolution Health",
              "c": "high",
              "f": "intermediate",
              "v": "DNS failures in one cloud or resolver path strand hybrid apps; proactive health checks catch resolver outages and split-horizon misconfiguration before user impact.",
              "t": "Custom (synthetic probes, Route 53 Resolver / Azure DNS / Cloud DNS logs)",
              "d": "`sourcetype=dns:health`, `sourcetype=aws:route53resolverquerylog`, `sourcetype=mscs:azure:diagnostics` (DNS if enabled)",
              "q": "index=cloud (sourcetype=\"dns:health\" OR sourcetype=\"synthetic:dns\")\n| stats latest(success) as ok, avg(latency_ms) as avg_ms by provider, resolver_vantage, tested_fqdn\n| where ok=0 OR avg_ms>500\n| eval avg_ms=round(avg_ms,1)\n| sort provider tested_fqdn",
              "m": "Emit probe results from each cloud (success, latency_ms, NXDOMAIN rate) via HEC. Optionally join Route 53 Resolver query logs for SERVFAIL spikes. Page when any critical FQDN fails from two vantage points or latency doubles vs baseline.",
              "z": "Status grid (FQDN × provider), Line chart (success rate over time), Single value (failed probes).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1580"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (synthetic probes, Route 53 Resolver / Azure DNS / Cloud DNS logs).\n• Ensure the following data sources are available: `sourcetype=dns:health`, `sourcetype=aws:route53resolverquerylog`, `sourcetype=mscs:azure:diagnostics` (DNS if enabled).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEmit probe results from each cloud (success, latency_ms, NXDOMAIN rate) via HEC. Optionally join Route 53 Resolver query logs for SERVFAIL spikes. Page when any critical FQDN fails from two vantage points or latency doubles vs baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud (sourcetype=\"dns:health\" OR sourcetype=\"synthetic:dns\")\n| stats latest(success) as ok, avg(latency_ms) as avg_ms by provider, resolver_vantage, tested_fqdn\n| where ok=0 OR avg_ms>500\n| eval avg_ms=round(avg_ms,1)\n| sort provider tested_fqdn\n```\n\nUnderstanding this SPL\n\n**Multi-Cloud DNS Resolution Health** — DNS failures in one cloud or resolver path strand hybrid apps; proactive health checks catch resolver outages and split-horizon misconfiguration before user impact.\n\nDocumented **Data sources**: `sourcetype=dns:health`, `sourcetype=aws:route53resolverquerylog`, `sourcetype=mscs:azure:diagnostics` (DNS if enabled). **App/TA** (typical add-on context): Custom (synthetic probes, Route 53 Resolver / Azure DNS / Cloud DNS logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: dns:health, synthetic:dns. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"dns:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by provider, resolver_vantage, tested_fqdn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ok=0 OR avg_ms>500` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **avg_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (FQDN × provider), Line chart (success rate over time), Single value (failed probes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Failures in one cloud or resolver path strand hybrid apps; proactive health checks catch resolver outages and split-horizon misconfiguration before user impact.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.24",
              "n": "Hybrid Connectivity Status",
              "c": "critical",
              "f": "advanced",
              "v": "ExpressRoute, Direct Connect, VPN, and Interconnect carry production traffic; tunnel or BGP drops partition workloads between on-prem and cloud.",
              "t": "`Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=aws:cloudwatch` (DX, VPN, TGW), Azure Monitor VPN/ExpressRoute metrics, `sourcetype=google:gcp:monitoring` (VPN/interconnect)",
              "q": "(index=aws sourcetype=\"aws:cloudwatch\" (namespace=\"AWS/DX\" OR namespace=\"AWS/VPN\") (metric_name=\"ConnectionState\" OR metric_name=\"TunnelState\"))\n OR (index=azure sourcetype=\"mscs:azure:metrics\" resourceType=\"Microsoft.Network/expressRouteCircuits\" metric_name=\"BgpPeerStatus\")\n OR (index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"vpn.googleapis.com/tunnel_established\")\n| eval link_up=case(metric_name=\"ConnectionState\" AND maximum=1,1, metric_name=\"TunnelState\" AND maximum=1,1, metric_name=\"BgpPeerStatus\" AND average>0,1, metric.type=\"vpn.googleapis.com/tunnel_established\" AND value>0,1,1=1,0)\n| stats min(link_up) as healthy by resourceId, resource.labels.*, bin(_time, 5m)\n| where healthy=0",
              "m": "Align metric semantics per provider in lookups; alert on sustained unhealthy state. Correlate with provider status pages ingested as `sourcetype=cloud:status`. Dashboard RTO/RPO targets for hybrid links.",
              "z": "Timeline (link state), Table (circuit, tunnel, status), Map (peering location if geo fields exist).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (DX, VPN, TGW), Azure Monitor VPN/ExpressRoute metrics, `sourcetype=google:gcp:monitoring` (VPN/interconnect).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign metric semantics per provider in lookups; alert on sustained unhealthy state. Correlate with provider status pages ingested as `sourcetype=cloud:status`. Dashboard RTO/RPO targets for hybrid links.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=aws sourcetype=\"aws:cloudwatch\" (namespace=\"AWS/DX\" OR namespace=\"AWS/VPN\") (metric_name=\"ConnectionState\" OR metric_name=\"TunnelState\"))\n OR (index=azure sourcetype=\"mscs:azure:metrics\" resourceType=\"Microsoft.Network/expressRouteCircuits\" metric_name=\"BgpPeerStatus\")\n OR (index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"vpn.googleapis.com/tunnel_established\")\n| eval link_up=case(metric_name=\"ConnectionState\" AND maximum=1,1, metric_name=\"TunnelState\" AND maximum=1,1, metric_name=\"BgpPeerStatus\" AND average>0,1, metric.type=\"vpn.googleapis.com/tunnel_established\" AND value>0,1,1=1,0)\n| stats min(link_up) as healthy by resourceId, resource.labels.*, bin(_time, 5m)\n| where healthy=0\n```\n\nUnderstanding this SPL\n\n**Hybrid Connectivity Status** — ExpressRoute, Direct Connect, VPN, and Interconnect carry production traffic; tunnel or BGP drops partition workloads between on-prem and cloud.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (DX, VPN, TGW), Azure Monitor VPN/ExpressRoute metrics, `sourcetype=google:gcp:monitoring` (VPN/interconnect). **App/TA** (typical add-on context): `Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, azure, gcp; **sourcetype**: aws:cloudwatch, mscs:azure:metrics, Microsoft.Network/expressRouteCircuits, google:gcp:monitoring. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=azure, index=gcp, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **link_up** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by resourceId, resource.labels.*, bin(_time, 5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where healthy=0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (link state), Table (circuit, tunnel, status), Map (peering location if geo fields exist).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "ExpressRoute, Direct Connect, remote access, and Interconnect carry production traffic; tunnel or drops partition workloads between on-prem and cloud.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.25",
              "n": "Multi-Cloud Secret Management Audit",
              "c": "critical",
              "f": "intermediate",
              "v": "Secrets touched across AWS Secrets Manager, Azure Key Vault, and GCP Secret Manager must be auditable for least-privilege reviews and breach investigations.",
              "t": "`Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=aws:cloudtrail` (secretsmanager, kms), `sourcetype=mscs:azure:audit` (Key Vault), `sourcetype=google:gcp:pubsub:message` (secretmanager.googleapis.com)",
              "q": "(index=aws sourcetype=\"aws:cloudtrail\" eventSource=\"secretsmanager.amazonaws.com\" eventName=\"GetSecretValue\")\n OR (index=azure sourcetype=\"mscs:azure:audit\" resourceId=\"*vaults*\")\n OR (index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"secretmanager.googleapis.com\")\n| eval principal=coalesce(userIdentity.arn, identity.claims.appid, protoPayload.authenticationInfo.principalEmail)\n| stats count by principal, index\n| sort -count",
              "m": "Enrich with HR or CMDB owner for service principals. Alert on first-time accessor, after-hours bulk reads, or secrets read from unexpected regions. Retention aligned to compliance policy.",
              "z": "Table (principal, cloud, access count), Bar chart (top accessors), Timeline (spikes).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1098.001",
                "T1552.005"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail` (secretsmanager, kms), `sourcetype=mscs:azure:audit` (Key Vault), `sourcetype=google:gcp:pubsub:message` (secretmanager.googleapis.com).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnrich with HR or CMDB owner for service principals. Alert on first-time accessor, after-hours bulk reads, or secrets read from unexpected regions. Retention aligned to compliance policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=aws sourcetype=\"aws:cloudtrail\" eventSource=\"secretsmanager.amazonaws.com\" eventName=\"GetSecretValue\")\n OR (index=azure sourcetype=\"mscs:azure:audit\" resourceId=\"*vaults*\")\n OR (index=gcp sourcetype=\"google:gcp:pubsub:message\" protoPayload.serviceName=\"secretmanager.googleapis.com\")\n| eval principal=coalesce(userIdentity.arn, identity.claims.appid, protoPayload.authenticationInfo.principalEmail)\n| stats count by principal, index\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Multi-Cloud Secret Management Audit** — Secrets touched across AWS Secrets Manager, Azure Key Vault, and GCP Secret Manager must be auditable for least-privilege reviews and breach investigations.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (secretsmanager, kms), `sourcetype=mscs:azure:audit` (Key Vault), `sourcetype=google:gcp:pubsub:message` (secretmanager.googleapis.com). **App/TA** (typical add-on context): `Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, azure, gcp; **sourcetype**: aws:cloudtrail, mscs:azure:audit, google:gcp:pubsub:message. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=azure, index=gcp, sourcetype=\"aws:cloudtrail\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **principal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by principal, index** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Multi-Cloud Secret Management Audit** — Secrets touched across AWS Secrets Manager, Azure Key Vault, and GCP Secret Manager must be auditable for least-privilege reviews and breach investigations.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (secretsmanager, kms), `sourcetype=mscs:azure:audit` (Key Vault), `sourcetype=google:gcp:pubsub:message` (secretmanager.googleapis.com). **App/TA** (typical add-on context): `Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (principal, cloud, access count), Bar chart (top accessors), Timeline (spikes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Secrets touched across AWS Secrets Manager, Azure Key Vault, and GCP Secret Manager must be auditable for least-privilege reviews and breach investigations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "azure",
                "gcp",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.26",
              "n": "Cross-Cloud Resource Tagging Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "A single tagging schema across clouds enables chargeback and policy automation; drift in one provider breaks consolidated FinOps views.",
              "t": "`Splunk_TA_aws`, Azure Policy exports, GCP Asset Inventory",
              "d": "`sourcetype=aws:config:notification`, Azure Policy compliance events, `sourcetype=google:gcp:pubsub:message` (asset exports)",
              "q": "(index=aws sourcetype=\"aws:config:notification\" configRuleName=\"*tag*\" complianceType=\"NON_COMPLIANT\")\n OR (index=azure sourcetype=\"mscs:azure:audit\" complianceState=\"NonCompliant\" operationName.value=\"*policy*\")\n OR (index=gcp sourcetype=\"google:gcp:pubsub:message\" \"missingLabelKeys\")\n| eval provider=case(index=\"aws\",\"aws\", index=\"azure\",\"azure\", index=\"gcp\",\"gcp\",1=1,\"other\")\n| stats count by provider, resourceType, resourceGroup\n| sort -count",
              "m": "Normalize required tag keys (for example Environment, Owner, CostCenter) in a lookup. Weekly trend of non-compliant count per provider. Alert when any provider’s gap exceeds SLA threshold.",
              "z": "Stacked bar (non-compliant by provider over time), Table (resource, missing tags), Single value (compliance %).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1578"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, Azure Policy exports, GCP Asset Inventory.\n• Ensure the following data sources are available: `sourcetype=aws:config:notification`, Azure Policy compliance events, `sourcetype=google:gcp:pubsub:message` (asset exports).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize required tag keys (for example Environment, Owner, CostCenter) in a lookup. Weekly trend of non-compliant count per provider. Alert when any provider’s gap exceeds SLA threshold.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=aws sourcetype=\"aws:config:notification\" configRuleName=\"*tag*\" complianceType=\"NON_COMPLIANT\")\n OR (index=azure sourcetype=\"mscs:azure:audit\" complianceState=\"NonCompliant\" operationName.value=\"*policy*\")\n OR (index=gcp sourcetype=\"google:gcp:pubsub:message\" \"missingLabelKeys\")\n| eval provider=case(index=\"aws\",\"aws\", index=\"azure\",\"azure\", index=\"gcp\",\"gcp\",1=1,\"other\")\n| stats count by provider, resourceType, resourceGroup\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cross-Cloud Resource Tagging Compliance** — A single tagging schema across clouds enables chargeback and policy automation; drift in one provider breaks consolidated FinOps views.\n\nDocumented **Data sources**: `sourcetype=aws:config:notification`, Azure Policy compliance events, `sourcetype=google:gcp:pubsub:message` (asset exports). **App/TA** (typical add-on context): `Splunk_TA_aws`, Azure Policy exports, GCP Asset Inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, azure, gcp; **sourcetype**: aws:config:notification, mscs:azure:audit, google:gcp:pubsub:message. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=azure, index=gcp, sourcetype=\"aws:config:notification\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **provider** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by provider, resourceType, resourceGroup** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (non-compliant by provider over time), Table (resource, missing tags), Single value (compliance %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "A single tagging schema across clouds enables chargeback and policy automation; drift in one provider breaks consolidated FinOps views.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.27",
              "n": "Multi-Cloud Egress Cost Comparison",
              "c": "medium",
              "f": "advanced",
              "v": "Egress drives surprise bills; comparing outbound spend by provider and region guides data locality and CDN decisions.",
              "t": "Billing exports (CUR, Cost Management export, BigQuery billing)",
              "d": "`sourcetype=aws:billing`, `sourcetype=azure:cost`, `sourcetype=gcp:billing`",
              "q": "index=cloud (sourcetype=\"aws:billing\" OR sourcetype=\"azure:cost\" OR sourcetype=\"gcp:billing\")\n| eval ut=lower(coalesce(lineItem_UsageType, usage_type, usageType, productSku))\n| eval is_egress=if(match(ut,\"egress|transfer|internet|download|outbound|data_transfer\"),1,0)\n| where is_egress=1\n| eval provider=case(sourcetype=\"aws:billing\",\"aws\", sourcetype=\"azure:cost\",\"azure\", sourcetype=\"gcp:billing\",\"gcp\",1=1,\"unknown\")\n| eval region=coalesce(lineItem_AvailabilityZone, resourceLocation, region)\n| stats sum(cost) as egress_usd by provider, region, bin(_time, 1d)\n| sort -egress_usd",
              "m": "Map each provider’s line items to normalized `usage_type` and `cost` fields during ingestion. Join with application tags where available. Alert on week-over-week egress growth above threshold per provider.",
              "z": "Line chart (egress USD by provider), Bar chart (region), Table (top services driving egress).",
              "kfp": "Cloud billing and cost files often arrive a day or more late, include credits and refunds, and use tags that do not line up with app names, so a spike is not always a real overrun. We narrow alerts to sustained trends and we exclude test or sandbox projects when we can.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Billing exports (CUR, Cost Management export, BigQuery billing).\n• Ensure the following data sources are available: `sourcetype=aws:billing`, `sourcetype=azure:cost`, `sourcetype=gcp:billing`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap each provider’s line items to normalized `usage_type` and `cost` fields during ingestion. Join with application tags where available. Alert on week-over-week egress growth above threshold per provider.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud (sourcetype=\"aws:billing\" OR sourcetype=\"azure:cost\" OR sourcetype=\"gcp:billing\")\n| eval ut=lower(coalesce(lineItem_UsageType, usage_type, usageType, productSku))\n| eval is_egress=if(match(ut,\"egress|transfer|internet|download|outbound|data_transfer\"),1,0)\n| where is_egress=1\n| eval provider=case(sourcetype=\"aws:billing\",\"aws\", sourcetype=\"azure:cost\",\"azure\", sourcetype=\"gcp:billing\",\"gcp\",1=1,\"unknown\")\n| eval region=coalesce(lineItem_AvailabilityZone, resourceLocation, region)\n| stats sum(cost) as egress_usd by provider, region, bin(_time, 1d)\n| sort -egress_usd\n```\n\nUnderstanding this SPL\n\n**Multi-Cloud Egress Cost Comparison** — Egress drives surprise bills; comparing outbound spend by provider and region guides data locality and CDN decisions.\n\nDocumented **Data sources**: `sourcetype=aws:billing`, `sourcetype=azure:cost`, `sourcetype=gcp:billing`. **App/TA** (typical add-on context): Billing exports (CUR, Cost Management export, BigQuery billing). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:billing, azure:cost, gcp:billing. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"aws:billing\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ut** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_egress** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_egress=1` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **provider** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **region** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by provider, region, bin(_time, 1d)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (egress USD by provider), Bar chart (region), Table (top services driving egress).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Egress drives surprise bills; comparing outbound spend by provider and region guides data locality and CDN decisions.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.28",
              "n": "Hybrid Identity Synchronization Health",
              "c": "high",
              "f": "intermediate",
              "v": "AD Connect, Cloud Identity, and similar sync failures leave cloud groups stale, breaking access and compliance.",
              "t": "`Splunk_TA_microsoft-cloudservices`, Windows/Entra diagnostics, GCP Identity logs",
              "d": "`sourcetype=mscs:azure:audit` / AD Connect health, `sourcetype=WinEventLog:Security` (on-prem), `sourcetype=google:gcp:pubsub:message` (identity sync)",
              "q": "(index=azure sourcetype=\"mscs:azure:audit\" (operationName.value=\"*AADConnect*\" OR activityDisplayName=\"*sync*\") activityStatus!=\"Success\")\n OR (index=identity sourcetype=\"adconnect:health\" status!=\"success\")\n| eval connector_name=coalesce(connector_name, resourceGroupName, \"aadconnect\")\n| stats count by sourcetype, connector_name, bin(_time, 1h)",
              "m": "Ingest connector health JSON or Event Hub stream. Correlate with password hash sync errors and object export failures. Alert on any failed sync window or rising error count.",
              "z": "Timeline (sync status), Table (connector, error), Single value (last successful sync age in minutes).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1078.004",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`, Windows/Entra diagnostics, GCP Identity logs.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:audit` / AD Connect health, `sourcetype=WinEventLog:Security` (on-prem), `sourcetype=google:gcp:pubsub:message` (identity sync).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest connector health JSON or Event Hub stream. Correlate with password hash sync errors and object export failures. Alert on any failed sync window or rising error count.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=azure sourcetype=\"mscs:azure:audit\" (operationName.value=\"*AADConnect*\" OR activityDisplayName=\"*sync*\") activityStatus!=\"Success\")\n OR (index=identity sourcetype=\"adconnect:health\" status!=\"success\")\n| eval connector_name=coalesce(connector_name, resourceGroupName, \"aadconnect\")\n| stats count by sourcetype, connector_name, bin(_time, 1h)\n```\n\nUnderstanding this SPL\n\n**Hybrid Identity Synchronization Health** — AD Connect, Cloud Identity, and similar sync failures leave cloud groups stale, breaking access and compliance.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:audit` / AD Connect health, `sourcetype=WinEventLog:Security` (on-prem), `sourcetype=google:gcp:pubsub:message` (identity sync). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`, Windows/Entra diagnostics, GCP Identity logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure, identity; **sourcetype**: mscs:azure:audit, adconnect:health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, index=identity, sourcetype=\"mscs:azure:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **connector_name** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by sourcetype, connector_name, bin(_time, 1h)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (sync status), Table (connector, error), Single value (last successful sync age in minutes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "AD Connect, Cloud Identity, and similar sync failures leave cloud groups stale, breaking access and compliance.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "gcp",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.29",
              "n": "Multi-Cloud Backup Recovery Testing",
              "c": "high",
              "f": "intermediate",
              "v": "Untested restores fail when needed; tracking drill outcomes across AWS Backup, Azure Backup, and GCP proves RPO/RTO.",
              "t": "`Splunk_TA_aws`, Azure Backup logs, GCP Backup for GKE / Database exports",
              "d": "`sourcetype=aws:cloudwatch:events` (Backup), `sourcetype=mscs:azure:diagnostics` (Backup), `sourcetype=google:gcp:pubsub:message` (gkebackup, sqladmin)",
              "q": "(index=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"Restore Job State Change\")\n OR (index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"AzureBackupReport\" OperationName=\"Restore\")\n OR (index=gcp sourcetype=\"google:gcp:pubsub:message\" \"Restore\" OR \"restoreBackup\")\n| eval ok=if(match(_raw,\"(?i)(FAILED|ERROR|PARTIAL)\"),0,1)\n| eval provider=case(index=\"aws\",\"aws\", index=\"azure\",\"azure\", index=\"gcp\",\"gcp\",1=1,\"unknown\")\n| eval app_name=coalesce(detail.resourceArn, BackupItemName, resource.labels.project_id, \"unknown\")\n| stats count(eval(ok=1)) as success, count(eval(ok=0)) as failed by app_name, provider\n| where failed>0",
              "m": "Tag restore jobs with `drill=true` in application metadata. Quarterly dashboard of success rate and restore duration percentiles. Alert on any failed table-top restore.",
              "z": "Table (app, provider, success/fail), Bar chart (drill outcomes by quarter), Line chart (restore duration trend).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1578.001",
                "T1578.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, Azure Backup logs, GCP Backup for GKE / Database exports.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch:events` (Backup), `sourcetype=mscs:azure:diagnostics` (Backup), `sourcetype=google:gcp:pubsub:message` (gkebackup, sqladmin).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag restore jobs with `drill=true` in application metadata. Quarterly dashboard of success rate and restore duration percentiles. Alert on any failed table-top restore.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"Restore Job State Change\")\n OR (index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"AzureBackupReport\" OperationName=\"Restore\")\n OR (index=gcp sourcetype=\"google:gcp:pubsub:message\" \"Restore\" OR \"restoreBackup\")\n| eval ok=if(match(_raw,\"(?i)(FAILED|ERROR|PARTIAL)\"),0,1)\n| eval provider=case(index=\"aws\",\"aws\", index=\"azure\",\"azure\", index=\"gcp\",\"gcp\",1=1,\"unknown\")\n| eval app_name=coalesce(detail.resourceArn, BackupItemName, resource.labels.project_id, \"unknown\")\n| stats count(eval(ok=1)) as success, count(eval(ok=0)) as failed by app_name, provider\n| where failed>0\n```\n\nUnderstanding this SPL\n\n**Multi-Cloud Backup Recovery Testing** — Untested restores fail when needed; tracking drill outcomes across AWS Backup, Azure Backup, and GCP proves RPO/RTO.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch:events` (Backup), `sourcetype=mscs:azure:diagnostics` (Backup), `sourcetype=google:gcp:pubsub:message` (gkebackup, sqladmin). **App/TA** (typical add-on context): `Splunk_TA_aws`, Azure Backup logs, GCP Backup for GKE / Database exports. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, azure, gcp; **sourcetype**: aws:cloudwatch:events, mscs:azure:diagnostics, google:gcp:pubsub:message. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=azure, index=gcp, sourcetype=\"aws:cloudwatch:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **provider** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **app_name** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by app_name, provider** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failed>0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (app, provider, success/fail), Bar chart (drill outcomes by quarter), Line chart (restore duration trend).",
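              "_spl_sketch": "A minimal sketch of the Step 1 quarterly view: success rate and restore-duration percentiles per drill window. The `drill` tag and `duration_sec` field are illustrative assumptions (emit whatever your restore jobs actually log), and `span=90d` only approximates a quarter:\n\n```spl\n(index=aws sourcetype=\"aws:cloudwatch:events\" detail-type=\"Restore Job State Change\")\n OR (index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"AzureBackupReport\" OperationName=\"Restore\")\n| search drill=true\n| eval ok=if(match(_raw,\"(?i)(FAILED|ERROR|PARTIAL)\"),0,1)\n| bin span=90d _time\n| stats count as drills, sum(ok) as ok_drills, perc95(duration_sec) as p95_restore_sec by _time\n| eval success_rate=round(100*ok_drills/drills,1)\n```",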
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Untested restores fail when needed; tracking drill outcomes across AWS Backup, Azure Backup, and GCP proves recovery point/recovery time.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.30",
              "n": "Cloud Provider API Rate Limit Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Automation hitting AWS throttling, Azure 429s, or GCP RESOURCE_EXHAUSTED breaks pipelines; trending limits prevents silent job loss.",
              "t": "Cloud TAs + application logs with HTTP status",
              "d": "`sourcetype=aws:cloudtrail` (ThrottlingException), `sourcetype=mscs:azure:audit` / app logs (429), `sourcetype=google:gcp:pubsub:message` (status 429, RESOURCE_EXHAUSTED)",
              "q": "(index=aws sourcetype=\"aws:cloudtrail\" errorCode=\"Throttling\")\n OR (index=azure sourcetype=\"mscs:azure:audit\" status.value=\"429\")\n OR (index=gcp sourcetype=\"google:gcp:pubsub:message\" \"RESOURCE_EXHAUSTED\" OR status=\"429\")\n| eval provider=case(index=\"aws\",\"aws\", index=\"azure\",\"azure\", index=\"gcp\",\"gcp\",1=1,\"unknown\")\n| timechart span=15m count by provider",
              "m": "Back off and jitter in automation based on Splunk alerts. Separate control-plane vs data-plane APIs. Dashboard top callers (principal or workload) causing throttles.",
              "z": "Line chart (throttle count by provider), Table (API operation, caller, count), Single value (15m throttle burst).",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1526",
                "T1580"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud TAs + application logs with HTTP status.\n• Ensure the following data sources are available: `sourcetype=aws:cloudtrail` (ThrottlingException), `sourcetype=mscs:azure:audit` / app logs (429), `sourcetype=google:gcp:pubsub:message` (status 429, RESOURCE_EXHAUSTED).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBack off and jitter in automation based on Splunk alerts. Separate control-plane vs data-plane APIs. Dashboard top callers (principal or workload) causing throttles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=aws sourcetype=\"aws:cloudtrail\" errorCode=\"Throttling\")\n OR (index=azure sourcetype=\"mscs:azure:audit\" status.value=\"429\")\n OR (index=gcp sourcetype=\"google:gcp:pubsub:message\" \"RESOURCE_EXHAUSTED\" OR status=\"429\")\n| eval provider=case(index=\"aws\",\"aws\", index=\"azure\",\"azure\", index=\"gcp\",\"gcp\",1=1,\"unknown\")\n| timechart span=15m count by provider\n```\n\nUnderstanding this SPL\n\n**Cloud Provider API Rate Limit Monitoring** — Automation hitting AWS throttling, Azure 429s, or GCP RESOURCE_EXHAUSTED breaks pipelines; trending limits prevents silent job loss.\n\nDocumented **Data sources**: `sourcetype=aws:cloudtrail` (ThrottlingException), `sourcetype=mscs:azure:audit` / app logs (429), `sourcetype=google:gcp:pubsub:message` (status 429, RESOURCE_EXHAUSTED). **App/TA** (typical add-on context): Cloud TAs + application logs with HTTP status. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, azure, gcp; **sourcetype**: aws:cloudtrail, mscs:azure:audit, google:gcp:pubsub:message. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=azure, index=gcp, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **provider** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by provider** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (throttle count by provider), Table (API operation, caller, count), Single value (15m throttle burst).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Automation hitting AWS throttling, Azure 429s, or GCP RESOURCE_EXHAUSTED breaks pipelines; trending limits prevents silent job loss.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.31",
              "n": "Multi-Cloud Certificate Expiry Tracking",
              "c": "critical",
              "f": "beginner",
              "v": "Expired certs break TLS for APIs and VPNs across regions; a unified expiry calendar prevents outages.",
              "t": "ACM/Key Vault/Certificate Manager exports, optional certstream",
              "d": "`sourcetype=aws:acm:inventory`, `sourcetype=mscs:azure:metrics` / cert inventory, `sourcetype=google:gcp:pubsub:message` (certificatemanager)",
              "q": "index=cloud (sourcetype=\"aws:acm:inventory\" OR sourcetype=\"azure:keyvault:certs\" OR sourcetype=\"google:gcp:pubsub:message\")\n| eval not_after_epoch=coalesce(strptime(expiry, \"%Y-%m-%dT%H:%M:%SZ\"), strptime(expiry, \"%Y-%m-%dT%H:%M:%S%z\"), strptime(notAfter, \"%Y-%m-%dT%H:%M:%S\"))\n| eval days_left=round((not_after_epoch-now())/86400,0)\n| eval provider=case(sourcetype=\"aws:acm:inventory\",\"aws\", sourcetype=\"azure:keyvault:certs\",\"azure\", sourcetype=\"google:gcp:pubsub:message\",\"gcp\",1=1,\"unknown\")\n| where days_left < 30 AND days_left >= 0\n| table cert_name, provider, expiry, days_left\n| sort days_left",
              "m": "Nightly inventory jobs push cert metadata via HEC. Escalate at 30, 14, and 7 days. Include private CAs and cloud-managed certs for load balancers and API gateways.",
              "z": "Table (cert, provider, days left), Timeline (expiry dates), Single value (next expiry).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1580"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ACM/Key Vault/Certificate Manager exports, optional certstream.\n• Ensure the following data sources are available: `sourcetype=aws:acm:inventory`, `sourcetype=mscs:azure:metrics` / cert inventory, `sourcetype=google:gcp:pubsub:message` (certificatemanager).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNightly inventory jobs push cert metadata via HEC. Escalate at 30, 14, and 7 days. Include private CAs and cloud-managed certs for load balancers and API gateways.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud (sourcetype=\"aws:acm:inventory\" OR sourcetype=\"azure:keyvault:certs\" OR sourcetype=\"google:gcp:pubsub:message\")\n| eval not_after_epoch=coalesce(strptime(expiry, \"%Y-%m-%dT%H:%M:%SZ\"), strptime(expiry, \"%Y-%m-%dT%H:%M:%S%z\"), strptime(notAfter, \"%Y-%m-%dT%H:%M:%S\"))\n| eval days_left=round((not_after_epoch-now())/86400,0)\n| eval provider=case(sourcetype=\"aws:acm:inventory\",\"aws\", sourcetype=\"azure:keyvault:certs\",\"azure\", sourcetype=\"google:gcp:pubsub:message\",\"gcp\",1=1,\"unknown\")\n| where days_left < 30 AND days_left >= 0\n| table cert_name, provider, expiry, days_left\n| sort days_left\n```\n\nUnderstanding this SPL\n\n**Multi-Cloud Certificate Expiry Tracking** — Expired certs break TLS for APIs and VPNs across regions; a unified expiry calendar prevents outages.\n\nDocumented **Data sources**: `sourcetype=aws:acm:inventory`, `sourcetype=mscs:azure:metrics` / cert inventory, `sourcetype=google:gcp:pubsub:message` (certificatemanager). **App/TA** (typical add-on context): ACM/Key Vault/Certificate Manager exports, optional certstream. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:acm:inventory, azure:keyvault:certs, google:gcp:pubsub:message. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"aws:acm:inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **not_after_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **provider** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left < 30 AND days_left >= 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Multi-Cloud Certificate Expiry Tracking**): table cert_name, provider, expiry, days_left\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (cert, provider, days left), Timeline (expiry dates), Single value (next expiry).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Expired certs break secure connections for and VPNs across regions; a unified expiry calendar prevents outages.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hashicorp"
              ],
              "em": [
                "hashicorp_vault"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 31,
            "none": 0
          }
        },
        {
          "i": "4.5",
          "n": "Serverless & FaaS",
          "u": [
            {
              "i": "4.5.1",
              "n": "Lambda Invocation Errors and Failed Invocations",
              "c": "high",
              "f": "beginner",
              "v": "Failed Lambda invocations surface runtime bugs, dependency outages, and misconfiguration before they silently drop user traffic or break downstream workflows.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (namespace `AWS/Lambda`)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" metric_name=\"Errors\"\n| timechart span=5m sum(Sum) as errors by FunctionName\n| join max=1 FunctionName type=left\n    [ search index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" metric_name=\"Invocations\"\n    | timechart span=5m sum(Sum) as invocations by FunctionName ]\n| eval error_rate=if(invocations>0, round(100*errors/invocations, 2), 0)\n| where error_rate > 1",
              "m": "Enable CloudWatch metric collection for the Lambda namespace (Errors, Invocations). Ingest via Splunk_TA_aws. Optionally correlate with Lambda application logs from CloudWatch Logs subscription. Alert when error rate exceeds policy (for example 1–5% sustained over 15 minutes).",
              "z": "Line chart (errors and invocations over time by function), Single value (error rate %), Table (FunctionName, errors, invocations, error_rate).",
              "kfp": "Cold starts, retries, and a bad client sending junk traffic can all raise error counts without a code bug. We look at the share of invocations and we compare the same time last week before we assume the function is broken.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (namespace `AWS/Lambda`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CloudWatch metric collection for the Lambda namespace (Errors, Invocations). Ingest via Splunk_TA_aws. Optionally correlate with Lambda application logs from CloudWatch Logs subscription. Alert when error rate exceeds policy (for example 1–5% sustained over 15 minutes).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" metric_name=\"Errors\"\n| timechart span=5m sum(Sum) as errors by FunctionName\n| join max=1 FunctionName type=left\n    [ search index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" metric_name=\"Invocations\"\n    | timechart span=5m sum(Sum) as invocations by FunctionName ]\n| eval error_rate=if(invocations>0, round(100*errors/invocations, 2), 0)\n| where error_rate > 1\n```\n\nUnderstanding this SPL\n\n**Lambda Invocation Errors and Failed Invocations** — Failed Lambda invocations surface runtime bugs, dependency outages, and misconfiguration before they silently drop user traffic or break downstream workflows.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (namespace `AWS/Lambda`). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by FunctionName** — ideal for trending and alerting on this use case.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate > 1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (errors and invocations over time by function), Single value (error rate %), Table (FunctionName, errors, invocations, error_rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Failed Lambda invocations surface runtime bugs, dependency outages, and misconfiguration before they silently drop user traffic or break downstream workflows.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.2",
              "n": "Lambda Cold Start and Init Duration Latency",
              "c": "medium",
              "f": "intermediate",
              "v": "Cold starts add tail latency to user-facing APIs and batch jobs; tracking Init Duration guides memory tuning, provisioned concurrency, and VPC design.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (`InitDuration`), optional `sourcetype=aws:cloudwatchlogs` (Lambda REPORT lines)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" metric_name=\"InitDuration\"\n| timechart span=5m avg(Average) as avg_init_ms, max(Maximum) as max_init_ms by FunctionName\n| where avg_init_ms > 500",
              "m": "Collect the `InitDuration` CloudWatch metric for each function. For log-based validation, subscribe Lambda log groups to Splunk and parse `REPORT` lines for `Init Duration`. Baseline p95/p99 init time per function and alert when cold-start latency breaches SLO after deploys or scaling events.",
              "z": "Line chart (avg/max Init Duration by function), Box plot or percentile overlay (if precomputed), Table (FunctionName, p95 init ms).",
              "kfp": "Cold starts, retries, and a bad client sending junk traffic can all raise error counts without a code bug. We look at the share of invocations and we compare the same time last week before we assume the function is broken.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (`InitDuration`), optional `sourcetype=aws:cloudwatchlogs` (Lambda REPORT lines).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect the `InitDuration` CloudWatch metric for each function. For log-based validation, subscribe Lambda log groups to Splunk and parse `REPORT` lines for `Init Duration`. Baseline p95/p99 init time per function and alert when cold-start latency breaches SLO after deploys or scaling events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" metric_name=\"InitDuration\"\n| timechart span=5m avg(Average) as avg_init_ms, max(Maximum) as max_init_ms by FunctionName\n| where avg_init_ms > 500\n```\n\nUnderstanding this SPL\n\n**Lambda Cold Start and Init Duration Latency** — Cold starts add tail latency to user-facing APIs and batch jobs; tracking Init Duration guides memory tuning, provisioned concurrency, and VPC design.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (`InitDuration`), optional `sourcetype=aws:cloudwatchlogs` (Lambda REPORT lines). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by FunctionName** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_init_ms > 500` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (avg/max Init Duration by function), Box plot or percentile overlay (if precomputed), Table (FunctionName, p95 init ms).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cold starts add tail latency to user-facing and batch jobs; tracking Init Duration guides memory tuning, provisioned concurrency, and VPC design.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.3",
              "n": "Lambda Concurrent Execution Limits and Throttling",
              "c": "high",
              "f": "intermediate",
              "v": "Account- and function-level concurrency caps cause synchronous throttles and async retries; monitoring utilization prevents dropped work during traffic spikes.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (namespace `AWS/Lambda`)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" (metric_name=\"ConcurrentExecutions\" OR metric_name=\"Throttles\")\n| stats sum(Sum) as volume by FunctionName, metric_name\n| xyseries FunctionName metric_name volume\n| fillnull value=0\n| where Throttles>0",
              "m": "Ingest `ConcurrentExecutions`, `Throttles`, and reserved concurrency settings (from tags or a nightly inventory lookup). Compare concurrent usage to reserved and account limits. Alert on any non-zero throttles or when concurrent executions approach the configured cap for bursty functions.",
              "z": "Line chart (ConcurrentExecutions vs limit by function), Single value (throttle count), Area chart (stacked concurrency by function).",
              "kfp": "Planned load tests, new regions, and one-off batch jobs can legitimately run near limits. We treat a breach as normal when it is tied to a change ticket and a known window, and we reset after the window ends.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (namespace `AWS/Lambda`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest `ConcurrentExecutions`, `Throttles`, and reserved concurrency settings (from tags or a nightly inventory lookup). Compare concurrent usage to reserved and account limits. Alert on any non-zero throttles or when concurrent executions approach the configured cap for bursty functions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" (metric_name=\"ConcurrentExecutions\" OR metric_name=\"Throttles\")\n| stats sum(Sum) as volume by FunctionName, metric_name\n| xyseries FunctionName metric_name volume\n| fillnull value=0\n| where Throttles>0\n```\n\nUnderstanding this SPL\n\n**Lambda Concurrent Execution Limits and Throttling** — Account- and function-level concurrency caps cause synchronous throttles and async retries; monitoring utilization prevents dropped work during traffic spikes.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (namespace `AWS/Lambda`). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by FunctionName, metric_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pivots fields for charting with `xyseries`.\n• Fills null values with `fillnull`.\n• Filters the current rows with `where Throttles>0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (ConcurrentExecutions vs limit by function), Single value (throttle count), Area chart (stacked concurrency by function).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Account- and function-level concurrency caps cause synchronous throttles and async retries; monitoring utilization prevents dropped work during traffic spikes.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.4",
              "n": "Azure Functions Host and Worker Health",
              "c": "high",
              "f": "beginner",
              "v": "Host startup failures, platform updates, and worker crashes take entire function apps offline; early detection reduces MTTR for serverless workloads on Azure.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:diagnostics` (Function App logs), `sourcetype=mscs:azure:metrics`",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"FunctionAppLogs\" (level=\"Error\" OR level=\"Critical\")\n| bin span=5m _time\n| stats count as errors by resourceName, operationName, _time\n| where errors > 0",
              "m": "Stream Function App diagnostics (FunctionAppLogs) to Event Hub and ingest with the Microsoft Cloud Services add-on. Normalize `resourceName` (app name) and severity. Optionally join with `mscs:azure:metrics` for `Http5xx` or `FunctionExecutionCount` drops. Alert on sustained host-level errors or absence of successful executions.",
              "z": "Timeline (errors by app), Table (resourceName, message pattern, count), Status indicator (healthy/degraded per Function App).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:diagnostics` (Function App logs), `sourcetype=mscs:azure:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStream Function App diagnostics (FunctionAppLogs) to Event Hub and ingest with the Microsoft Cloud Services add-on. Normalize `resourceName` (app name) and severity. Optionally join with `mscs:azure:metrics` for `Http5xx` or `FunctionExecutionCount` drops. Alert on sustained host-level errors or absence of successful executions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"FunctionAppLogs\" (level=\"Error\" OR level=\"Critical\")\n| bin span=5m _time\n| stats count as errors by resourceName, operationName, _time\n| where errors > 0\n```\n\nUnderstanding this SPL\n\n**Azure Functions Host and Worker Health** — Host startup failures, platform updates, and worker crashes take entire function apps offline; early detection reduces MTTR for serverless workloads on Azure.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:diagnostics` (Function App logs), `sourcetype=mscs:azure:metrics`. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by resourceName, operationName, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where errors > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (errors by app), Table (resourceName, message pattern, count), Status indicator (healthy/degraded per Function App).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Host startup failures, platform updates, and worker crashes take entire function apps offline; early detection reduces MTTR for serverless workloads on Azure.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.5",
              "n": "Azure Functions Execution Duration",
              "c": "medium",
              "f": "beginner",
              "v": "Long-running functions tie up scale-out units and can hit timeout limits; duration trending guides right-sizing, connection pooling, and async patterns.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:metrics` (Function metrics)",
              "q": "index=azure sourcetype=\"mscs:azure:metrics\" metricName=\"FunctionExecutionDuration\"\n| timechart span=5m avg(average) as avg_ms, max(maximum) as max_ms by resourceName\n| where max_ms > 10000",
              "m": "Enable Azure Monitor metrics for Function Apps and ingest via the TA (dimensions: function name where available). Establish baselines per function. Alert when p95 duration approaches the function timeout or degrades after releases.",
              "z": "Line chart (avg/max duration by app or function), Heatmap (duration by hour), Table (resourceName, avg_ms, max_ms).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:metrics` (Function metrics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Azure Monitor metrics for Function Apps and ingest via the TA (dimensions: function name where available). Establish baselines per function. Alert when p95 duration approaches the function timeout or degrades after releases.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:metrics\" metricName=\"FunctionExecutionDuration\"\n| timechart span=5m avg(average) as avg_ms, max(maximum) as max_ms by resourceName\n| where max_ms > 10000\n```\n\nUnderstanding this SPL\n\n**Azure Functions Execution Duration** — Long-running functions tie up scale-out units and can hit timeout limits; duration trending guides right-sizing, connection pooling, and async patterns.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:metrics` (Function metrics). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by resourceName** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where max_ms > 10000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (avg/max duration by app or function), Heatmap (duration by hour), Table (resourceName, avg_ms, max_ms).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Long-running functions tie up scale-out units and can hit timeout limits; duration trending guides right-sizing, connection pooling, and async patterns.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.6",
              "n": "Azure Functions Queue Trigger Backlog and Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Queue-triggered functions depend on storage or Service Bus depth; growing backlogs mean consumers cannot keep pace or messages are poisoned.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:metrics` (Storage Queue / Service Bus), `sourcetype=mscs:azure:diagnostics`",
              "q": "index=azure sourcetype=\"mscs:azure:metrics\" (metricName=\"QueueMessageCount\" OR metricName=\"ActiveMessages\")\n| timechart span=5m avg(average) as depth by resourceName, metricName\n| join type=left max=1 resourceName\n    [ search index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"FunctionAppLogs\" \"QueueTrigger\"\n    | stats count as trigger_errors by resourceName ]\n| where depth > 1000 OR trigger_errors > 0",
              "m": "Ingest queue depth metrics for the storage account or Service Bus namespace backing the trigger. Correlate with FunctionAppLogs for dequeue/processing errors. Alert when depth exceeds threshold or poison-message handling spikes. Map queue resource to Function App via tags or a lookup.",
              "z": "Dual-axis line chart (queue depth vs successful executions), Table (queue, depth, errors), Single value (oldest message age if exported).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:metrics` (Storage Queue / Service Bus), `sourcetype=mscs:azure:diagnostics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest queue depth metrics for the storage account or Service Bus namespace backing the trigger. Correlate with FunctionAppLogs for dequeue/processing errors. Alert when depth exceeds threshold or poison-message handling spikes. Map queue resource to Function App via tags or a lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:metrics\" (metricName=\"QueueMessageCount\" OR metricName=\"ActiveMessages\")\n| timechart span=5m avg(average) as depth by resourceName, metricName\n| join type=left max=1 resourceName\n    [ search index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"FunctionAppLogs\" \"QueueTrigger\"\n    | stats count as trigger_errors by resourceName ]\n| where depth > 1000 OR trigger_errors > 0\n```\n\nUnderstanding this SPL\n\n**Azure Functions Queue Trigger Backlog and Failures** — Queue-triggered functions depend on storage or Service Bus depth; growing backlogs mean consumers cannot keep pace or messages are poisoned.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:metrics` (Storage Queue / Service Bus), `sourcetype=mscs:azure:diagnostics`. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by resourceName, metricName** — ideal for trending and alerting on this use case.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where depth > 1000 OR trigger_errors > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Dual-axis line chart (queue depth vs successful executions), Table (queue, depth, errors), Single value (oldest message age if exported).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Queue-triggered functions depend on storage or Service Bus depth; growing backlogs mean consumers cannot keep pace or messages are poisoned.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.7",
              "n": "GCP Cloud Functions Memory Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "Memory pressure causes OOM terminations and retries; tracking user memory against allocation prevents instability and guides memory settings per function.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=google:gcp:monitoring` (Cloud Functions metrics)",
              "q": "index=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"cloudfunctions.googleapis.com/function/user_memory_bytes\"\n| timechart span=5m avg(value) as avg_bytes, max(value) as max_bytes by metric.labels.function_name\n| eval max_mb=round(max_bytes/1048576, 2)\n| where max_mb > 0",
              "m": "Export Cloud Monitoring metrics for Cloud Functions to Splunk via the GCP add-on. Join max memory usage with deployed memory configuration from labels or an asset lookup. Alert when utilization consistently approaches the configured limit (for example >85% of allocated memory).",
              "z": "Line chart (avg/max memory by function), Gauge (peak vs allocation), Table (function_name, max_mb, allocation_mb).",
              "kfp": "Short spikes at deploy time, autoscale thrash, or a noisy neighbor on shared hosts can look bad for a few minutes. We require the condition to last across several intervals or clear on its own before we wake someone.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=google:gcp:monitoring` (Cloud Functions metrics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport Cloud Monitoring metrics for Cloud Functions to Splunk via the GCP add-on. Join max memory usage with deployed memory configuration from labels or an asset lookup. Alert when utilization consistently approaches the configured limit (for example >85% of allocated memory).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:monitoring\" metric.type=\"cloudfunctions.googleapis.com/function/user_memory_bytes\"\n| timechart span=5m avg(value) as avg_bytes, max(value) as max_bytes by metric.labels.function_name\n| eval max_mb=round(max_bytes/1048576, 2)\n| where max_mb > 0\n```\n\nUnderstanding this SPL\n\n**GCP Cloud Functions Memory Utilization** — Memory pressure causes OOM terminations and retries; tracking user memory against allocation prevents instability and guides memory settings per function.\n\nDocumented **Data sources**: `sourcetype=google:gcp:monitoring` (Cloud Functions metrics). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:monitoring. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by metric.labels.function_name** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **max_mb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where max_mb > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (avg/max memory by function), Gauge (peak vs allocation), Table (function_name, max_mb, allocation_mb).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Memory pressure causes OOM terminations and retries; tracking user memory against allocation prevents instability and guides memory settings per function.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.8",
              "n": "GCP Cloud Functions Timeout Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Timeouts indicate hung dependencies or insufficient deadline; they drive retries, duplicate side effects, and user-visible failures in synchronous invocations.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=google:gcp:pubsub:message` (Cloud Logging for `cloud_function`), `sourcetype=google:gcp:monitoring`",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"cloud_function\"\n| where match(_raw, \"(?i)timeout|deadline exceeded|function execution took too long\")\n| stats count as timeout_events by resource.labels.function_name, resource.labels.region\n| sort -timeout_events",
              "m": "Forward Cloud Functions logs to Pub/Sub and ingest with `resource.type=\"cloud_function\"`. Optionally add monitoring metrics for execution times and error result codes. Alert on timeout string patterns or rising timeout counts after dependency or region incidents.",
              "z": "Column chart (timeouts by function), Line chart (timeouts over time), Table (function_name, region, count).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=google:gcp:pubsub:message` (Cloud Logging for `cloud_function`), `sourcetype=google:gcp:monitoring`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Cloud Functions logs to Pub/Sub and ingest with `resource.type=\"cloud_function\"`. Optionally add monitoring metrics for execution times and error result codes. Alert on timeout string patterns or rising timeout counts after dependency or region incidents.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"cloud_function\"\n| where match(_raw, \"(?i)timeout|deadline exceeded|function execution took too long\")\n| stats count as timeout_events by resource.labels.function_name, resource.labels.region\n| sort -timeout_events\n```\n\nUnderstanding this SPL\n\n**GCP Cloud Functions Timeout Monitoring** — Timeouts indicate hung dependencies or insufficient deadline; they drive retries, duplicate side effects, and user-visible failures in synchronous invocations.\n\nDocumented **Data sources**: `sourcetype=google:gcp:pubsub:message` (Cloud Logging for `cloud_function`), `sourcetype=google:gcp:monitoring`. **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(_raw, \"(?i)timeout|deadline exceeded|function execution took too long\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by resource.labels.function_name, resource.labels.region** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (timeouts by function), Line chart (timeouts over time), Table (function_name, region, count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Timeouts indicate hung dependencies or insufficient deadline; they drive retries, duplicate side effects, and user-visible failures in synchronous invocations.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.9",
              "n": "Serverless Cost Tracking by Function",
              "c": "medium",
              "f": "intermediate",
              "v": "Function-level spend exposes expensive handlers, mis-scaled concurrency, and test sandboxes left running—essential for FinOps and chargeback.",
              "t": "`Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=aws:billing`, `sourcetype=azure:costmanagement`, `sourcetype=gcp:billing`",
              "q": "index=aws sourcetype=\"aws:billing\" OR index=azure sourcetype=\"azure:costmanagement\" OR index=gcp sourcetype=\"gcp:billing\"\n| eval cloud=case(index==\"aws\",\"AWS\", index==\"azure\",\"Azure\", index==\"gcp\",\"GCP\")\n| eval line_cost=coalesce(BlendedCost, UnblendedCost, cost, CostInBillingCurrency)\n| eval fn=coalesce(resourceId, ResourceId, labels.value)\n| where match(lower(ProductName).lower(service).lower(resource_type), \"(lambda|function|cloudfunctions|functions)\")\n| stats sum(line_cost) as spend by cloud, fn\n| sort -spend",
              "m": "Ingest CUR or cost exports with resource-level granularity and tags (`aws:createdBy`, Azure resource name, GCP labels). Normalize into a common schema. Filter to serverless SKUs (Lambda, Azure Functions, Cloud Functions). Schedule weekly reports and alerts for top-N spenders or day-over-day spikes per function.",
              "z": "Bar chart (spend by function), Treemap (cost by cloud and service), Table (cloud, function, spend, % of total).",
              "kfp": "Cloud billing and cost files often arrive a day or more late, include credits and refunds, and use tags that do not line up with app names, so a spike is not always a real overrun. We narrow alerts to sustained trends and we exclude test or sandbox projects when we can.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=aws:billing`, `sourcetype=azure:costmanagement`, `sourcetype=gcp:billing`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CUR or cost exports with resource-level granularity and tags (`aws:createdBy`, Azure resource name, GCP labels). Normalize into a common schema. Filter to serverless SKUs (Lambda, Azure Functions, Cloud Functions). Schedule weekly reports and alerts for top-N spenders or day-over-day spikes per function.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:billing\" OR index=azure sourcetype=\"azure:costmanagement\" OR index=gcp sourcetype=\"gcp:billing\"\n| eval cloud=case(index==\"aws\",\"AWS\", index==\"azure\",\"Azure\", index==\"gcp\",\"GCP\")\n| eval line_cost=coalesce(BlendedCost, UnblendedCost, cost, CostInBillingCurrency)\n| eval fn=coalesce(resourceId, ResourceId, labels.value)\n| where match(lower(ProductName).lower(service).lower(resource_type), \"(lambda|function|cloudfunctions|functions)\")\n| stats sum(line_cost) as spend by cloud, fn\n| sort -spend\n```\n\nUnderstanding this SPL\n\n**Serverless Cost Tracking by Function** — Function-level spend exposes expensive handlers, mis-scaled concurrency, and test sandboxes left running—essential for FinOps and chargeback.\n\nDocumented **Data sources**: `sourcetype=aws:billing`, `sourcetype=azure:costmanagement`, `sourcetype=gcp:billing`. **App/TA** (typical add-on context): `Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, azure, gcp; **sourcetype**: aws:billing, azure:costmanagement, gcp:billing. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=azure, index=gcp, sourcetype=\"aws:billing\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cloud** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **line_cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **fn** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(lower(ProductName).lower(service).lower(resource_type), \"(lambda|function|cloudfunctions|functions)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cloud, fn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (spend by function), Treemap (cost by cloud and service), Table (cloud, function, spend, % of total).",
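Step 1 also calls for day-over-day spike alerts per function. A sketch of that comparison using the same normalized cost fields as the main search; the 50% change and the $10 daily floor are placeholder thresholds to tune against your spend profile:

```spl
(index=aws sourcetype="aws:billing") OR (index=azure sourcetype="azure:costmanagement") OR (index=gcp sourcetype="gcp:billing")
| eval line_cost=coalesce(BlendedCost, UnblendedCost, cost, CostInBillingCurrency)
| eval fn=coalesce(resourceId, ResourceId, 'labels.value')
| bin _time span=1d
| stats sum(line_cost) as daily_spend by _time, fn
| streamstats current=f window=1 last(daily_spend) as prev_spend by fn
| eval dod_change_pct=round(100 * (daily_spend - prev_spend) / prev_spend, 1)
| where dod_change_pct > 50 AND daily_spend > 10
```

Because billing exports often land a day or more late (see known false positives above), running this over completed days, for example `earliest=-3d@d latest=-1d@d`, avoids flagging partially delivered data.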
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Function-level spend exposes expensive handlers, mis-scaled concurrency, and test sandboxes left running — essential for FinOps and chargeback.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.10",
              "n": "Lambda Dead Letter Queue Depth and Message Rate",
              "c": "high",
              "f": "intermediate",
              "v": "Messages landing in DLQs mean unprocessed events—often billing, inventory, or security actions—until replayed or dropped.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (namespace `AWS/SQS`), optional `sourcetype=aws:cloudwatch:events`",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/SQS\" metric_name=\"ApproximateNumberOfMessagesVisible\"\n| where match(QueueName, \"(?i)dlq|dead\")\n| timechart span=5m max(Maximum) as visible by QueueName\n| where visible > 0",
              "m": "Tag or name DLQ queues consistently (`*dlq*`). Ingest SQS CloudWatch metrics per queue. Correlate queue to owning Lambda via Event Source Mapping inventory (lookup table). Alert on any sustained visible message count or sudden spikes after bad deployments.",
              "z": "Single value (DLQ depth), Line chart (visible messages by queue), Table (QueueName, linked function, visible).",
              "kfp": "Cold starts, retries, and a bad client sending junk traffic can all raise error counts without a code bug. We look at the share of invocations and we compare the same time last week before we assume the function is broken.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1610"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (namespace `AWS/SQS`), optional `sourcetype=aws:cloudwatch:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag or name DLQ queues consistently (`*dlq*`). Ingest SQS CloudWatch metrics per queue. Correlate queue to owning Lambda via Event Source Mapping inventory (lookup table). Alert on any sustained visible message count or sudden spikes after bad deployments.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/SQS\" metric_name=\"ApproximateNumberOfMessagesVisible\"\n| where match(QueueName, \"(?i)dlq|dead\")\n| timechart span=5m max(Maximum) as visible by QueueName\n| where visible > 0\n```\n\nUnderstanding this SPL\n\n**Lambda Dead Letter Queue Depth and Message Rate** — Messages landing in DLQs mean unprocessed events—often billing, inventory, or security actions—until replayed or dropped.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (namespace `AWS/SQS`), optional `sourcetype=aws:cloudwatch:events`. **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(QueueName, \"(?i)dlq|dead\")` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by QueueName** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where visible > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (DLQ depth), Line chart (visible messages by queue), Table (QueueName, linked function, visible).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Messages landing in DLQs mean unprocessed events — often billing, inventory, or security actions — until replayed or dropped.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.11",
              "n": "AWS Step Functions Execution Failures",
              "c": "high",
              "f": "beginner",
              "v": "Failed state machine runs break orchestrated business processes; tracking failed executions enables rapid rollback and pinpointing of failing states or Lambda tasks.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (namespace `AWS/States`)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/States\" metric_name=\"ExecutionsFailed\"\n| timechart span=5m sum(Sum) as failed by StateMachineArn\n| where failed > 0",
              "m": "Enable CloudWatch metrics for Step Functions (`ExecutionsFailed`, `ExecutionsTimedOut`, `ExecutionsAborted`). Ingest via Splunk_TA_aws. Optionally join with execution history forwarded to S3 or CloudWatch Logs for failure context. Alert on any failed executions in production state machines or rate-based thresholds.",
              "z": "Line chart (failed executions by state machine), Single value (failures in last hour), Table (StateMachineArn, failed, timed out).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (namespace `AWS/States`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CloudWatch metrics for Step Functions (`ExecutionsFailed`, `ExecutionsTimedOut`, `ExecutionsAborted`). Ingest via Splunk_TA_aws. Optionally join with execution history forwarded to S3 or CloudWatch Logs for failure context. Alert on any failed executions in production state machines or rate-based thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/States\" metric_name=\"ExecutionsFailed\"\n| timechart span=5m sum(Sum) as failed by StateMachineArn\n| where failed > 0\n```\n\nUnderstanding this SPL\n\n**AWS Step Functions Execution Failures** — Failed state machine runs break orchestrated business processes; tracking failed executions enables rapid rollback and pinpointing of failing states or Lambda tasks.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (namespace `AWS/States`). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by StateMachineArn** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where failed > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failed executions by state machine), Single value (failures in last hour), Table (StateMachineArn, failed, timed out).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Failed state machine runs break orchestrated business processes; tracking failed executions enables rapid rollback and pinpointing of failing states or Lambda tasks.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.12",
              "n": "Azure Durable Functions Orchestration Health",
              "c": "high",
              "f": "advanced",
              "v": "Durable orchestrations span many activities; failed or stuck instances block business workflows until replayed or purged from storage.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`sourcetype=mscs:azure:diagnostics` (FunctionAppLogs, traces)",
              "q": "index=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"FunctionAppLogs\"\n| where match(_raw, \"(?i)orchestration.*(failed|faulted)|TaskFailed|SubOrchestrationFailed\")\n| stats count as orch_failures by resourceName, coalesce(functionName, name)\n| sort -orch_failures",
              "m": "Enable verbose logging for Durable Functions and ingest FunctionAppLogs. Extract orchestration instance IDs where present. Correlate with Storage Account metrics (queue/table used by the task hub) for backlog. Alert on failure patterns or rising pending instances versus completions.",
              "z": "Table (app, orchestration name, failures), Line chart (failures over time), Link to Application Insights-style trace IDs if forwarded.",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `sourcetype=mscs:azure:diagnostics` (FunctionAppLogs, traces).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable verbose logging for Durable Functions and ingest FunctionAppLogs. Extract orchestration instance IDs where present. Correlate with Storage Account metrics (queue/table used by the task hub) for backlog. Alert on failure patterns or rising pending instances versus completions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:diagnostics\" Category=\"FunctionAppLogs\"\n| where match(_raw, \"(?i)orchestration.*(failed|faulted)|TaskFailed|SubOrchestrationFailed\")\n| stats count as orch_failures by resourceName, coalesce(functionName, name)\n| sort -orch_failures\n```\n\nUnderstanding this SPL\n\n**Azure Durable Functions Orchestration Health** — Durable orchestrations span many activities; failed or stuck instances block business workflows until replayed or purged from storage.\n\nDocumented **Data sources**: `sourcetype=mscs:azure:diagnostics` (FunctionAppLogs, traces). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:diagnostics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(_raw, \"(?i)orchestration.*(failed|faulted)|TaskFailed|SubOrchestrationFailed\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by resourceName, coalesce(functionName, name)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (app, orchestration name, failures), Line chart (failures over time), Link to Application Insights-style trace IDs if forwarded.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Durable orchestrations span many activities; failed or stuck instances block business workflows until replayed or purged from storage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.13",
              "n": "Lambda Provisioned Concurrency Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "Provisioned concurrency is a fixed cost; low utilization wastes spend while high utilization risks cold starts on overflow—balance requires continuous measurement.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (namespace `AWS/Lambda`)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" metric_name=\"ProvisionedConcurrencyUtilization\"\n| timechart span=5m avg(Average) as util by FunctionName, Resource\n| where util < 0.2 OR util > 0.9",
              "m": "Collect `ProvisionedConcurrencyUtilization` for each alias or version with provisioned settings. Compare against provisioned units from tags or CloudFormation export. Alert when utilization is chronically low (cost optimization) or high (risk of throttling on burst beyond provisioned pool).",
              "z": "Line chart (utilization by function/alias), Area chart (consumed vs provisioned concurrency), Table (FunctionName, util %, recommended units).",
              "kfp": "Cold starts, retries, and a bad client sending junk traffic can all raise error counts without a code bug. We look at the share of invocations and we compare the same time last week before we assume the function is broken.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (namespace `AWS/Lambda`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect `ProvisionedConcurrencyUtilization` for each alias or version with provisioned settings. Compare against provisioned units from tags or CloudFormation export. Alert when utilization is chronically low (cost optimization) or high (risk of throttling on burst beyond provisioned pool).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Lambda\" metric_name=\"ProvisionedConcurrencyUtilization\"\n| timechart span=5m avg(Average) as util by FunctionName, Resource\n| where util < 0.2 OR util > 0.9\n```\n\nUnderstanding this SPL\n\n**Lambda Provisioned Concurrency Utilization** — Provisioned concurrency is a fixed cost; low utilization wastes spend while high utilization risks cold starts on overflow—balance requires continuous measurement.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (namespace `AWS/Lambda`). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by FunctionName, Resource** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where util < 0.2 OR util > 0.9` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (utilization by function/alias), Area chart (consumed vs provisioned concurrency), Table (FunctionName, util %, recommended units).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Provisioned concurrency is a fixed cost; low utilization wastes spend while high utilization risks cold starts on overflow — balance requires continuous measurement.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.14",
              "n": "API Gateway Integration Latency for Serverless Backends",
              "c": "medium",
              "f": "beginner",
              "v": "Integration latency isolates backend (Lambda, HTTP proxy) time from client-facing latency; spikes often precede Lambda timeouts or VPC connectivity issues.",
              "t": "`Splunk_TA_aws`",
              "d": "`sourcetype=aws:cloudwatch` (namespace `AWS/ApiGateway`)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ApiGateway\" metric_name=\"IntegrationLatency\"\n| timechart span=5m avg(Average) as integ_ms by ApiName, Stage\n| where integ_ms > 2000",
              "m": "Enable detailed CloudWatch metrics for REST or HTTP APIs. Ingest `IntegrationLatency` alongside `Latency` and `4XXError`/`5XXError`. Split dashboards by stage (prod vs dev). Alert when integration latency exceeds backend SLA or diverges from total API latency (pointing to edge vs origin issues).",
              "z": "Line chart (IntegrationLatency vs Latency by API), Heatmap (route/method if dimensions available), Table (ApiName, Stage, p95 integ_ms).",
              "kfp": "This pattern can surge during change windows, in sandboxes, and when a platform is in maintenance, so we line alerts up with the calendar and take a second look before we call it an incident.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`.\n• Ensure the following data sources are available: `sourcetype=aws:cloudwatch` (namespace `AWS/ApiGateway`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable detailed CloudWatch metrics for REST or HTTP APIs. Ingest `IntegrationLatency` alongside `Latency` and `4XXError`/`5XXError`. Split dashboards by stage (prod vs dev). Alert when integration latency exceeds backend SLA or diverges from total API latency (pointing to edge vs origin issues).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ApiGateway\" metric_name=\"IntegrationLatency\"\n| timechart span=5m avg(Average) as integ_ms by ApiName, Stage\n| where integ_ms > 2000\n```\n\nUnderstanding this SPL\n\n**API Gateway Integration Latency for Serverless Backends** — Integration latency isolates backend (Lambda, HTTP proxy) time from client-facing latency; spikes often precede Lambda timeouts or VPC connectivity issues.\n\nDocumented **Data sources**: `sourcetype=aws:cloudwatch` (namespace `AWS/ApiGateway`). **App/TA** (typical add-on context): `Splunk_TA_aws`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by ApiName, Stage** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where integ_ms > 2000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (IntegrationLatency vs Latency by API), Heatmap (route/method if dimensions available), Table (ApiName, Stage, p95 integ_ms).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Integration latency isolates backend (Lambda, HTTP proxy) time from client-facing latency; spikes often precede Lambda timeouts or VPC connectivity issues.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.5.15",
              "n": "GCP Cloud Functions Retry and Error Rate Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Rising retries and error rates signal unstable dependencies or quota issues before quotas hard-stop traffic; trending supports SLO review and incident prevention.",
              "t": "`Splunk_TA_google-cloudplatform`",
              "d": "`sourcetype=google:gcp:pubsub:message` (Cloud Logging), optional `sourcetype=google:gcp:monitoring` (execution metrics)",
              "q": "index=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"cloud_function\"\n| eval fn=resource.labels.function_name\n| eval is_err=if(severity=\"ERROR\", 1, 0)\n| timechart span=1h sum(is_err) as errors, count as invocations by fn\n| eval err_rate=if(invocations>0, round(100*errors/invocations, 2), 0)\n| where err_rate > 5",
              "m": "Ingest execution count metrics with result/status labels from Cloud Monitoring. Supplement with log-based counts from Cloud Logging for detailed error classes. Baseline hourly error and retry rates per function. Alert when error share exceeds threshold or retries spike versus invocations.",
              "z": "Stacked area chart (executions by outcome), Line chart (error rate % over time), Table (function_name, invocations, errors, retry estimate).",
              "kfp": "Inventories can skip a run when an API is throttled or a connector is in maintenance, which looks like a big drop. We check the ingest time and the last good snapshot before we call it shrinkage.",
              "refs": "[Splunk_TA_google-cloudplatform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_google-cloudplatform`.\n• Ensure the following data sources are available: `sourcetype=google:gcp:pubsub:message` (Cloud Logging), optional `sourcetype=google:gcp:monitoring` (execution metrics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest execution count metrics with result/status labels from Cloud Monitoring. Supplement with log-based counts from Cloud Logging for detailed error classes. Baseline hourly error and retry rates per function. Alert when error share exceeds threshold or retries spike versus invocations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"google:gcp:pubsub:message\" resource.type=\"cloud_function\"\n| eval fn=resource.labels.function_name\n| eval is_err=if(severity=\"ERROR\", 1, 0)\n| timechart span=1h sum(is_err) as errors, count as invocations by fn\n| eval err_rate=if(invocations>0, round(100*errors/invocations, 2), 0)\n| where err_rate > 5\n```\n\nUnderstanding this SPL\n\n**GCP Cloud Functions Retry and Error Rate Trending** — Rising retries and error rates signal unstable dependencies or quota issues before quotas hard-stop traffic; trending supports SLO review and incident prevention.\n\nDocumented **Data sources**: `sourcetype=google:gcp:pubsub:message` (Cloud Logging), optional `sourcetype=google:gcp:monitoring` (execution metrics). **App/TA** (typical add-on context): `Splunk_TA_google-cloudplatform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: google:gcp:pubsub:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"google:gcp:pubsub:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fn** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_err** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by fn** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_rate > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area chart (executions by outcome), Line chart (error rate % over time), Table (function_name, invocations, errors, retry estimate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Rising retries and error rates signal unstable dependencies or quota issues before quotas hard-stop traffic; trending supports service goals review and incident prevention.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Cloud Platform",
                "id": 3088,
                "url": "https://splunkbase.splunk.com/app/3088"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.4.32",
              "n": "Cloud Control Plane API Call Volume Anomaly (MLTK)",
              "c": "critical",
              "f": "advanced",
              "v": "Cloud control plane API calls (EC2 RunInstances, IAM CreateUser, S3 PutBucketPolicy) follow predictable patterns tied to deployment schedules and automation cadence. Anomalous spikes in API call volume may indicate compromised credentials, runaway automation, or an attacker enumerating resources — all of which are invisible to static rate limits but detectable through ML-based baselining.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk Add-on for AWS / Azure / GCP",
              "d": "`index=cloud sourcetype=aws:cloudtrail` or `sourcetype=azure:monitor:activity` or `sourcetype=google:gcp:pubsub:message`",
              "q": "index=cloud sourcetype IN (\"aws:cloudtrail\",\"azure:monitor:activity\",\"google:gcp:pubsub:message\")\n| bin _time span=1h\n| stats count by _time, eventName, userIdentity.arn, sourceIPAddress\n| eventstats avg(count) as baseline_avg, stdev(count) as baseline_std by eventName\n| eval z_score=round((count - baseline_avg) / nullif(baseline_std, 0), 2)\n| where z_score > 3 AND count > 50\n| fit DensityFunction count by eventName into cloud_api_anomaly_model\n| rename \"IsOutlier(count)\" as isOutlier\n| where isOutlier > 0\n| table _time, eventName, userIdentity.arn, sourceIPAddress, count, baseline_avg, z_score\n| sort -z_score",
              "m": "Aggregate CloudTrail / Activity Log / Admin Activity events hourly by API action and principal. Train DensityFunction models per API action on 30 days of data to capture automation schedules and deployment patterns. Flag calls that exceed 3 standard deviations from the learned baseline. Prioritize high-risk APIs: IAM mutations, security group changes, KMS key operations, and resource creation. Enrich with source IP geolocation and threat intelligence. Correlate with CI/CD deployment events (cat-12) to suppress planned automation bursts. Generate risk events for Splunk ES with MITRE T1078/T1580 annotations. Retrain models weekly.",
              "z": "Line chart (API call volume vs baseline), Table (anomalous API calls with z-scores), Bar chart (top anomalous APIs by principal).",
              "kfp": "Infrastructure-as-code deployments (Terraform apply), DR drills, and cloud migration events. Maintain a deployment calendar lookup to suppress known automation windows.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1078.004",
                "T1526",
                "T1580",
                "T1098.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk Add-on for AWS / Azure / GCP.\n• Ensure the following data sources are available: `index=cloud sourcetype=aws:cloudtrail` or `sourcetype=azure:monitor:activity` or `sourcetype=google:gcp:pubsub:message`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate CloudTrail / Activity Log / Admin Activity events hourly by API action and principal. Train DensityFunction models per API action on 30 days of data to capture automation schedules and deployment patterns. Flag calls that exceed 3 standard deviations from the learned baseline. Prioritize high-risk APIs: IAM mutations, security group changes, KMS key operations, and resource creation. Enrich with source IP geolocation and threat intelligence. Correlate with CI/CD deployment events (cat-…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype IN (\"aws:cloudtrail\",\"azure:monitor:activity\",\"google:gcp:pubsub:message\")\n| bin _time span=1h\n| stats count by _time, eventName, userIdentity.arn, sourceIPAddress\n| eventstats avg(count) as baseline_avg, stdev(count) as baseline_std by eventName\n| eval z_score=round((count - baseline_avg) / nullif(baseline_std, 0), 2)\n| where z_score > 3 AND count > 50\n| fit DensityFunction count by eventName into cloud_api_anomaly_model\n| rename \"IsOutlier(count)\" as isOutlier\n| where isOutlier > 0\n| table _time, eventName, userIdentity.arn, sourceIPAddress, count, baseline_avg, z_score\n| sort -z_score\n```\n\nUnderstanding this SPL\n\n**Cloud Control Plane API Call Volume Anomaly (MLTK)** — Cloud control plane API calls (EC2 RunInstances, IAM CreateUser, S3 PutBucketPolicy) follow predictable patterns tied to deployment schedules and automation cadence. Anomalous spikes in API call volume may indicate compromised credentials, runaway automation, or an attacker enumerating resources — all of which are invisible to static rate limits but detectable through ML-based baselining.\n\nDocumented **Data sources**: `index=cloud sourcetype=aws:cloudtrail` or `sourcetype=azure:monitor:activity` or `sourcetype=google:gcp:pubsub:message`. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Add-on for AWS / Azure / GCP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, eventName, userIdentity.arn, sourceIPAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by eventName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where z_score > 3 AND count > 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cloud Control Plane API Call Volume Anomaly (MLTK)**): fit DensityFunction count by eventName into cloud_api_anomaly_model\n• Renames fields with `rename` for clarity or joins.\n• Filters the current rows with `where isOutlier > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cloud Control Plane API Call Volume Anomaly (MLTK)**): table _time, eventName, userIdentity.arn, sourceIPAddress, count, baseline_avg, z_score\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cloud Control Plane API Call Volume Anomaly (MLTK)** — Cloud control plane API calls (EC2 RunInstances, IAM CreateUser, S3 PutBucketPolicy) follow predictable patterns tied to deployment schedules and automation cadence. Anomalous spikes in API call volume may indicate compromised credentials, runaway automation, or an attacker enumerating resources — all of which are invisible to static rate limits but detectable through ML-based baselining.\n\nDocumented **Data sources**: `index=cloud sourcetype=aws:cloudtrail` or `sourcetype=azure:monitor:activity` or `sourcetype=google:gcp:pubsub:message`. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Add-on for AWS / Azure / GCP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (API call volume vs baseline), Table (anomalous API calls with z-scores), Bar chart (top anomalous APIs by principal).",
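The `fit` in Step 2 re-trains on whatever window the alert runs over, which is not quite the 30-day train / score-new-traffic split Step 1 describes. In practice you would schedule the `fit ... into cloud_api_anomaly_model` search separately over 30 days and have the alert score only fresh data with MLTK's `apply`. A minimal sketch of the scoring side, reusing the model name from above:

```spl
index=cloud sourcetype IN ("aws:cloudtrail","azure:monitor:activity","google:gcp:pubsub:message") earliest=-1h@h latest=@h
| bin _time span=1h
| stats count by _time, eventName, userIdentity.arn, sourceIPAddress
| apply cloud_api_anomaly_model
| rename "IsOutlier(count)" as isOutlier
| where isOutlier > 0
| table _time, eventName, userIdentity.arn, sourceIPAddress, count
```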
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cloud control plane calls (cloud servers RunInstances, access control CreateUser, cloud storage PutBucketPolicy) follow predictable patterns tied to deployment schedules and automation cadence.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user span=1h | sort - count",
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.9,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 16,
            "none": 0
          }
        },
        {
          "i": "4.6",
          "n": "Cloud Infrastructure Trending",
          "u": [
            {
              "i": "4.6.1",
              "n": "Cloud Resource Count Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "EC2/VM instance count over 90 days reveals organic growth, failed automation leaving orphan instances, or shrinkage after optimization campaigns. Supports FinOps conversations and capacity forecasts.",
              "t": "Splunk Add-on for AWS, Splunk Add-on for Microsoft Cloud Services, Google Cloud add-ons",
              "d": "`index=cloud sourcetype=aws:config:notification` or `sourcetype=aws:description` (inventory); Azure Resource Graph exports; GCP Asset Inventory",
              "q": "index=cloud sourcetype=\"aws:config:notification\" resourceType=\"AWS::EC2::Instance\"\n| bin _time span=1d\n| stats dc(resourceId) as instance_count by _time, awsAccountId\n| timechart span=1d sum(instance_count) as total_instances\n| trendline sma7(total_instances) as instance_trend\n| predict total_instances as predicted algorithm=LLP future_timespan=30",
              "m": "Ingest periodic inventory snapshots (AWS Config, DescribeInstances exports, or Azure Resource Graph) into index=cloud with one event per instance per snapshot. If only change streams exist, maintain state with a nightly summary search. Chart instance_count over 90 days; optionally split by accountId or region. For multi-cloud, normalize resourceType across providers.",
              "z": "Line chart (instance count over 90 days with trend and 30-day forecast), area chart stacked by account.",
              "kfp": "Inventories can skip a run when an API is throttled or a connector is in maintenance, which looks like a big drop. We check the ingest time and the last good snapshot before we call it shrinkage.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS, Splunk Add-on for Microsoft Cloud Services, Google Cloud add-ons.\n• Ensure the following data sources are available: `index=cloud sourcetype=aws:config:notification` or `sourcetype=aws:description` (inventory); Azure Resource Graph exports; GCP Asset Inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest periodic inventory snapshots (AWS Config, DescribeInstances exports, or Azure Resource Graph) into index=cloud with one event per instance per snapshot. If only change streams exist, maintain state with a nightly summary search. Chart instance_count over 90 days; optionally split by accountId or region. For multi-cloud, normalize resourceType across providers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"aws:config:notification\" resourceType=\"AWS::EC2::Instance\"\n| bin _time span=1d\n| stats dc(resourceId) as instance_count by _time, awsAccountId\n| timechart span=1d sum(instance_count) as total_instances\n| trendline sma7(total_instances) as instance_trend\n| predict total_instances as predicted algorithm=LLP future_timespan=30\n```\n\nUnderstanding this SPL\n\n**Cloud Resource Count Trending** — EC2/VM instance count over 90 days reveals organic growth, failed automation leaving orphan instances, or shrinkage after optimization campaigns. Supports FinOps conversations and capacity forecasts.\n\nDocumented **Data sources**: `index=cloud sourcetype=aws:config:notification` or `sourcetype=aws:description` (inventory); Azure Resource Graph exports; GCP Asset Inventory. **App/TA** (typical add-on context): Splunk Add-on for AWS, Splunk Add-on for Microsoft Cloud Services, Google Cloud add-ons. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:config:notification, AWS::EC2::Instance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"aws:config:notification\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, awsAccountId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Cloud Resource Count Trending**): trendline sma7(total_instances) as instance_trend\n• Pipeline stage (see **Cloud Resource Count Trending**): predict total_instances as predicted algorithm=LLP future_timespan=30\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (instance count over 90 days with trend and 30-day forecast), area chart stacked by account.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cloud servers/VM instance count over 90 days reveals organic growth, failed automation leaving orphan instances, or shrinkage after optimization campaigns.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "gcp"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.6.2",
              "n": "Lambda/Function Invocation Volume Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Daily invocation counts show traffic growth, seasonal patterns, and the impact of new features or batch jobs. Sharp changes often precede cost spikes or throttling if concurrency limits are fixed.",
              "t": "Splunk Add-on for AWS (CloudWatch metrics), Azure Monitor",
              "d": "`index=cloud sourcetype=aws:cloudwatch` (Lambda Invocations metric); Azure Functions metrics",
              "q": "index=cloud sourcetype=\"aws:cloudwatch\" Namespace=\"AWS/Lambda\" MetricName=\"Invocations\"\n| bin _time span=1d\n| stats sum(Sum) as invocations by _time, FunctionName\n| timechart span=1d sum(invocations) as total_invocations\n| trendline sma7(total_invocations) as invocation_trend",
              "m": "Enable CloudWatch metric ingestion for AWS/Lambda Invocations with FunctionName dimension. For Azure, use Microsoft.Web/sites/functions equivalent metrics. Normalize time to UTC for daily buckets. Use top-N functions by volume to keep the chart readable. Correlate step changes with deployments from CI/CD timestamps.",
              "z": "Line chart (daily invocations with 7-day SMA, 30 days), column chart (top functions by volume).",
              "kfp": "Cold starts, retries, and a bad client sending junk traffic can all raise error counts without a code bug. We look at the share of invocations and we compare the same time last week before we assume the function is broken.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (CloudWatch metrics), Azure Monitor.\n• Ensure the following data sources are available: `index=cloud sourcetype=aws:cloudwatch` (Lambda Invocations metric); Azure Functions metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CloudWatch metric ingestion for AWS/Lambda Invocations with FunctionName dimension. For Azure, use Microsoft.Web/sites/functions equivalent metrics. Normalize time to UTC for daily buckets. Use top-N functions by volume to keep the chart readable. Correlate step changes with deployments from CI/CD timestamps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"aws:cloudwatch\" Namespace=\"AWS/Lambda\" MetricName=\"Invocations\"\n| bin _time span=1d\n| stats sum(Sum) as invocations by _time, FunctionName\n| timechart span=1d sum(invocations) as total_invocations\n| trendline sma7(total_invocations) as invocation_trend\n```\n\nUnderstanding this SPL\n\n**Lambda/Function Invocation Volume Trending** — Daily invocation counts show traffic growth, seasonal patterns, and the impact of new features or batch jobs. Sharp changes often precede cost spikes or throttling if concurrency limits are fixed.\n\nDocumented **Data sources**: `index=cloud sourcetype=aws:cloudwatch` (Lambda Invocations metric); Azure Functions metrics. **App/TA** (typical add-on context): Splunk Add-on for AWS (CloudWatch metrics), Azure Monitor. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, FunctionName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Lambda/Function Invocation Volume Trending**): trendline sma7(total_invocations) as invocation_trend\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily invocations with 7-day SMA, 30 days), column chart (top functions by volume).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Daily invocation counts show traffic growth, seasonal patterns, and the impact of new features or batch jobs.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.6.3",
              "n": "Cloud Security Finding Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Tracking new versus resolved security findings over time shows whether your cloud security posture is improving and whether scanners or policies are flagging more issues than teams can remediate. Supports executive reporting and backlog triage.",
              "t": "AWS Security Hub, Azure Defender, GCP Security Command Center — forwarded via HEC or add-on",
              "d": "`index=cloud sourcetype=aws:securityhub:finding` OR `sourcetype=azure:defender:alert` OR `sourcetype=gcp:scc:finding`",
              "q": "index=cloud sourcetype IN (\"aws:securityhub:finding\", \"azure:defender:alert\", \"gcp:scc:finding\")\n| eval status=case(match(WorkflowStatus,\"(?i)resolved|archived|suppressed\"),\"resolved\",1=1,\"new\")\n| timechart span=1d count by status\n| trendline sma7(new) as new_trend sma7(resolved) as resolved_trend",
              "m": "Map your findings feed so each event represents a finding state change or daily snapshot with Severity and status. For snapshot models, compare consecutive days to derive new and resolved counts via summary search. Align severities (Critical/High/Medium) across clouds for a combined view or use separate panels per provider. Refresh suppression lookups so trends reflect true risk.",
              "z": "Stacked column chart (new vs resolved per day), line chart (open critical count trend), area chart (cumulative open findings).",
              "kfp": "A fresh scanner run or a vendor re-rate can resurface a finding we already know about. We deduplicate on resource and time and we only escalate when the exposure is new or the severity stepped up.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1580",
                "T1526"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AWS Security Hub, Azure Defender, GCP Security Command Center — forwarded via HEC or add-on.\n• Ensure the following data sources are available: `index=cloud sourcetype=aws:securityhub:finding` OR `sourcetype=azure:defender:alert` OR `sourcetype=gcp:scc:finding`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap your findings feed so each event represents a finding state change or daily snapshot with Severity and status. For snapshot models, compare consecutive days to derive new and resolved counts via summary search. Align severities (Critical/High/Medium) across clouds for a combined view or use separate panels per provider. Refresh suppression lookups so trends reflect true risk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype IN (\"aws:securityhub:finding\", \"azure:defender:alert\", \"gcp:scc:finding\")\n| eval status=case(match(WorkflowStatus,\"(?i)resolved|archived|suppressed\"),\"resolved\",1=1,\"new\")\n| timechart span=1d count by status\n| trendline sma7(new) as new_trend sma7(resolved) as resolved_trend\n```\n\nUnderstanding this SPL\n\n**Cloud Security Finding Trending** — Tracking new versus resolved security findings over time shows whether your cloud security posture is improving and whether scanners or policies are flagging more issues than teams can remediate. Supports executive reporting and backlog triage.\n\nDocumented **Data sources**: `index=cloud sourcetype=aws:securityhub:finding` OR `sourcetype=azure:defender:alert` OR `sourcetype=gcp:scc:finding`. **App/TA** (typical add-on context): AWS Security Hub, Azure Defender, GCP Security Command Center — forwarded via HEC or add-on. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by status** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Cloud Security Finding Trending**): trendline sma7(new) as new_trend sma7(resolved) as resolved_trend\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked column chart (new vs resolved per day), line chart (open critical count trend), area chart (cumulative open findings).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Tracking new versus resolved security findings over time shows whether your cloud security posture is improving and whether scanners or policies are flagging more issues than teams can remediate.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.6.4",
              "n": "S3/Blob Storage Growth Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Total object storage bytes month over month highlights data hoarding, log retention growth, or unexpected replication. Supports budgeting and lifecycle policy decisions before bills spike.",
              "t": "Splunk Add-on for AWS (S3 storage metrics), Azure Monitor metrics",
              "d": "`index=cloud sourcetype=aws:cloudwatch` (BucketSizeBytes metric); `sourcetype=azure:monitor:metrics` for storage accounts",
              "q": "index=cloud sourcetype=\"aws:cloudwatch\" Namespace=\"AWS/S3\" MetricName=\"BucketSizeBytes\"\n| bin _time span=1mon\n| stats latest(Average) as bytes by _time, BucketName\n| eval tb=round(bytes/1099511627776, 2)\n| timechart span=1mon sum(tb) as total_tb\n| predict total_tb as predicted algorithm=LLP future_timespan=3",
              "m": "Ingest daily CloudWatch BucketSizeBytes per bucket or storage account metrics for Azure/Blob. Use span=1mon aligned to calendar months for FinOps reporting. Convert bytes to TB for readability. Optionally exclude archive buckets matched to a lookup. Alert on month-over-month growth above a percentage threshold. Use predict to forecast 3 months ahead for capacity planning.",
              "z": "Line chart (total TB monthly with 3-month forecast), bar chart (top buckets by size), table (month-over-month growth %).",
              "kfp": "Inventories can skip a run when an API is throttled or a connector is in maintenance, which looks like a big drop. We check the ingest time and the last good snapshot before we call it shrinkage.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (S3 storage metrics), Azure Monitor metrics.\n• Ensure the following data sources are available: `index=cloud sourcetype=aws:cloudwatch` (BucketSizeBytes metric); `sourcetype=azure:monitor:metrics` for storage accounts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest daily CloudWatch BucketSizeBytes per bucket or storage account metrics for Azure/Blob. Use span=1mon aligned to calendar months for FinOps reporting. Convert bytes to TB for readability. Optionally exclude archive buckets matched to a lookup. Alert on month-over-month growth above a percentage threshold. Use predict to forecast 3 months ahead for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"aws:cloudwatch\" Namespace=\"AWS/S3\" MetricName=\"BucketSizeBytes\"\n| bin _time span=1mon\n| stats latest(Average) as bytes by _time, BucketName\n| eval tb=round(bytes/1099511627776, 2)\n| timechart span=1mon sum(tb) as total_tb\n| predict total_tb as predicted algorithm=LLP future_timespan=3\n```\n\nUnderstanding this SPL\n\n**S3/Blob Storage Growth Trending** — Total object storage bytes month over month highlights data hoarding, log retention growth, or unexpected replication. Supports budgeting and lifecycle policy decisions before bills spike.\n\nDocumented **Data sources**: `index=cloud sourcetype=aws:cloudwatch` (BucketSizeBytes metric); `sourcetype=azure:monitor:metrics` for storage accounts. **App/TA** (typical add-on context): Splunk Add-on for AWS (S3 storage metrics), Azure Monitor metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:cloudwatch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, BucketName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **tb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1mon** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **S3/Blob Storage Growth Trending**): predict total_tb as predicted algorithm=LLP future_timespan=3\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (total TB monthly with 3-month forecast), bar chart (top buckets by size), table (month-over-month growth %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Total object storage bytes month over month highlights data hoarding, log retention growth, or unexpected replication.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.6.5",
              "n": "Cloud Network Traffic Volume Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Weekly VPC flow log volume indicates shifting traffic patterns, DDoS aftermath, or misconfigured mirroring. Complements per-flow analysis with a coarse health signal and correlates with network-related cost changes.",
              "t": "Splunk Add-on for AWS (VPC Flow Logs), Azure NSG flow logs",
              "d": "`index=cloud sourcetype=aws:cloudwatch:vpcflow` OR `sourcetype=azure:nsg:flow`",
              "q": "index=cloud sourcetype=\"aws:cloudwatch:vpcflow\"\n| eval bytes=tonumber(bytes)\n| timechart span=1w sum(bytes) as total_bytes\n| eval total_gb=round(total_bytes/1073741824, 2)\n| trendline sma4(total_gb) as traffic_trend",
              "m": "Parse VPC Flow or NSG flow fields so bytes is numeric. Filter internal-only noise if needed via RFC1918 CIDR lists. Use weekly buckets for medium-term trending; index volume growth also correlates with ingest cost. For Azure, map to the appropriate custom sourcetype for raw flows. Alert on sudden jumps exceeding 2x the 4-week moving average.",
              "z": "Column chart (weekly total GB), line overlay (4-week SMA), dual axis with flow record count.",
              "kfp": "Inventories can skip a run when an API is throttled or a connector is in maintenance, which looks like a big drop. We check the ingest time and the last good snapshot before we call it shrinkage.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (VPC Flow Logs), Azure NSG flow logs.\n• Ensure the following data sources are available: `index=cloud sourcetype=aws:cloudwatch:vpcflow` OR `sourcetype=azure:nsg:flow`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse VPC Flow or NSG flow fields so bytes is numeric. Filter internal-only noise if needed via RFC1918 CIDR lists. Use weekly buckets for medium-term trending; index volume growth also correlates with ingest cost. For Azure, map to the appropriate custom sourcetype for raw flows. Alert on sudden jumps exceeding 2x the 4-week moving average.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"aws:cloudwatch:vpcflow\"\n| eval bytes=tonumber(bytes)\n| timechart span=1w sum(bytes) as total_bytes\n| eval total_gb=round(total_bytes/1073741824, 2)\n| trendline sma4(total_gb) as traffic_trend\n```\n\nUnderstanding this SPL\n\n**Cloud Network Traffic Volume Trending** — Weekly VPC flow log volume indicates shifting traffic patterns, DDoS aftermath, or misconfigured mirroring. Complements per-flow analysis with a coarse health signal and correlates with network-related cost changes.\n\nDocumented **Data sources**: `index=cloud sourcetype=aws:cloudwatch:vpcflow` OR `sourcetype=azure:nsg:flow`. **App/TA** (typical add-on context): Splunk Add-on for AWS (VPC Flow Logs), Azure NSG flow logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:cloudwatch:vpcflow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"aws:cloudwatch:vpcflow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1w** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **total_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Cloud Network Traffic Volume Trending**): trendline sma4(total_gb) as traffic_trend\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1w | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cloud Network Traffic Volume Trending** — Weekly VPC flow log volume indicates shifting traffic patterns, DDoS aftermath, or misconfigured mirroring. Complements per-flow analysis with a coarse health signal and correlates with network-related cost changes.\n\nDocumented **Data sources**: `index=cloud sourcetype=aws:cloudwatch:vpcflow` OR `sourcetype=azure:nsg:flow`. **App/TA** (typical add-on context): Splunk Add-on for AWS (VPC Flow Logs), Azure NSG flow logs. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (weekly total GB), line overlay (4-week SMA), dual axis with flow record count.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Weekly VPC flow log volume indicates shifting traffic patterns, overload attacks aftermath, or misconfigured mirroring.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1w | sort - agg_value",
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "4.6.6",
              "n": "CloudTrail/Activity Log Event Volume Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Management event volume over 90 days highlights automation changes, new integrations, or possible abuse such as enumeration or bulk API use. Baselines help spot anomalies without reading every event.",
              "t": "Splunk Add-on for AWS, Azure Activity Log add-on",
              "d": "`index=cloud sourcetype=aws:cloudtrail`; `sourcetype=azure:monitor:activity`",
              "q": "index=cloud sourcetype=\"aws:cloudtrail\" readOnly=false\n| timechart span=1d count as mgmt_events\n| trendline sma7(mgmt_events) as event_trend\n| predict mgmt_events as predicted algorithm=LLP future_timespan=14",
              "m": "Filter to non-read-only events for management actions. For multi-cloud, use union or a combined index with sourcetype in the by clause. Chart 90 days with daily span. Alert on statistical outliers exceeding 3x baseline. Ensure CloudTrail is multi-region and organization trails where applicable so the trend is complete. For Azure, include Activity Log management category events.",
              "z": "Line chart (daily management events with 7-day SMA, 90 days), anomaly overlay, 14-day forecast.",
              "kfp": "Our own break-glass and automation accounts show up the same as anyone else, and a delete or write during a change window is expected. We allowlist the roles that do routine maintenance and we link alerts to approved changes before we call an incident.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS, Azure Activity Log add-on.\n• Ensure the following data sources are available: `index=cloud sourcetype=aws:cloudtrail`; `sourcetype=azure:monitor:activity`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFilter to non-read-only events for management actions. For multi-cloud, use union or a combined index with sourcetype in the by clause. Chart 90 days with daily span. Alert on statistical outliers exceeding 3x baseline. Ensure CloudTrail is multi-region and organization trails where applicable so the trend is complete. For Azure, include Activity Log management category events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"aws:cloudtrail\" readOnly=false\n| timechart span=1d count as mgmt_events\n| trendline sma7(mgmt_events) as event_trend\n| predict mgmt_events as predicted algorithm=LLP future_timespan=14\n```\n\nUnderstanding this SPL\n\n**CloudTrail/Activity Log Event Volume Trending** — Management event volume over 90 days highlights automation changes, new integrations, or possible abuse such as enumeration or bulk API use. Baselines help spot anomalies without reading every event.\n\nDocumented **Data sources**: `index=cloud sourcetype=aws:cloudtrail`; `sourcetype=azure:monitor:activity`. **App/TA** (typical add-on context): Splunk Add-on for AWS, Azure Activity Log add-on. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **CloudTrail/Activity Log Event Volume Trending**): trendline sma7(mgmt_events) as event_trend\n• Pipeline stage (see **CloudTrail/Activity Log Event Volume Trending**): predict mgmt_events as predicted algorithm=LLP future_timespan=14\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CloudTrail/Activity Log Event Volume Trending** — Management event volume over 90 days highlights automation changes, new integrations, or possible abuse such as enumeration or bulk API use. Baselines help spot anomalies without reading every event.\n\nDocumented **Data sources**: `index=cloud sourcetype=aws:cloudtrail`; `sourcetype=azure:monitor:activity`. **App/TA** (typical add-on context): Splunk Add-on for AWS, Azure Activity Log add-on. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily management events with 7-day SMA, 90 days), anomaly overlay, 14-day forecast.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Management event volume over 90 days highlights automation changes, new integrations, or possible abuse such as enumeration or bulk use.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user span=1d | sort - count",
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 6,
            "none": 0
          }
        }
      ],
      "i": 4,
      "n": "Cloud Infrastructure",
      "src": "cat-04-cloud-infrastructure.md"
    },
    {
      "s": [
        {
          "i": "5.1",
          "n": "Routers & Switches",
          "u": [
            {
              "i": "5.1.1",
              "n": "Interface Up/Down Events",
              "c": "critical",
              "f": "beginner",
              "v": "A hard-down uplink or WAN port can isolate an entire site or VLAN; flapping often manifests as application timeouts and VoIP drops before a ticket names 'the network.' Treat each DOWN on a trunk or uplink as a potential SEV-1 for that site; treat more than 3 transitions in 10 minutes as a stability risk requiring immediate investigation of optics, cabling, or port configuration.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%LINEPROTO-5-UPDOWN\" OR \"%LINK-3-UPDOWN\"\n| rex \"Interface (?<interface>\\S+), changed state to (?<state>\\w+)\"\n| stats count by host, interface, state | where count > 3 | sort -count",
              "m": "Configure syslog forwarding on all network devices (UDP/TCP 514). Install TA for field extraction. Alert on down events for uplinks/trunks. Track flapping (>3 transitions in 10 min).",
              "z": "Status grid (green/red per interface), Table, Timeline.",
              "kfp": "Interfaces flap during scheduled cable replacements, port channel rebalancing, or PoE device power cycling. Some test and lab ports flap routinely.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure syslog forwarding on all network devices (UDP/TCP 514). Install TA for field extraction. Alert on down events for uplinks/trunks. Track flapping (>3 transitions in 10 min).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%LINEPROTO-5-UPDOWN\" OR \"%LINK-3-UPDOWN\"\n| rex \"Interface (?<interface>\\S+), changed state to (?<state>\\w+)\"\n| stats count by host, interface, state | where count > 3 | sort -count\n```\n\nUnderstanding this SPL\n\n**Interface Up/Down Events** — A hard-down uplink or WAN port can isolate an entire site or VLAN; flapping often manifests as application timeouts and VoIP drops before a ticket names 'the network.' Treat each DOWN on a trunk or uplink as a potential SEV-1 for that site; treat more than 3 transitions in 10 minutes as a stability risk requiring immediate investigation of optics, cabling, or port configuration.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, interface, state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (green/red per interface), Table, Timeline.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, EX4650, QFX5100, QFX5110, QFX5120, QFX5200, QFX5210, QFX5220, MX204, MX304, MX480, MX960, SRX300, SRX320, SRX340, SRX345, SRX1500, SRX4100, SRX4200, SRX4600; Arista 7010T, 7020R, 7050X3, 7060X, 7260X3, 7280R3, 7300X3, 7500R3, 7800R3; HPE Aruba CX 6000, CX 6100, CX 6200, CX 6300, CX 6400, CX 8100, CX 8320, CX 8325, CX 8360, CX 8400, CX 10000",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch when network ports go up and down so we can tell if a link is flapping or a whole site is at risk before users flood the help desk.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 65,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier"
              ]
            },
            {
              "i": "5.1.2",
              "n": "Interface Error Rates",
              "c": "high",
              "f": "intermediate",
              "v": "CRC errors, drops indicate cabling, transceiver, or duplex issues.",
              "t": "SNMP Modular Input, IF-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=snmp:interface`",
              "q": "index=network sourcetype=\"snmp:interface\"\n| streamstats current=f last(ifInErrors) as prev by host, ifDescr\n| eval delta = ifInErrors - prev | where delta > 0\n| table _time host ifDescr delta",
              "m": "Poll IF-MIB (ifInErrors, ifOutErrors, ifInDiscards) at 300s. Use `streamstats` for delta. Alert on increasing counts.",
              "z": "Line chart (error rate), Table, Heatmap across devices.",
              "kfp": "Brief error increments during transceiver replacement, software upgrades, or known-noisy access segments can look like a fault. Baseline by interface role before paging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP Modular Input, IF-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=snmp:interface`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll IF-MIB (ifInErrors, ifOutErrors, ifInDiscards) at 300s. Use `streamstats` for delta. Alert on increasing counts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:interface\"\n| streamstats current=f last(ifInErrors) as prev by host, ifDescr\n| eval delta = ifInErrors - prev | where delta > 0\n| table _time host ifDescr delta\n```\n\nUnderstanding this SPL\n\n**Interface Error Rates** — CRC errors, drops indicate cabling, transceiver, or duplex issues.\n\nDocumented **Data sources**: `sourcetype=snmp:interface`. **App/TA** (typical add-on context): SNMP Modular Input, IF-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:interface. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:interface\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `streamstats` rolls up events into metrics; results are split **by host, ifDescr** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delta > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Interface Error Rates**): table _time host ifDescr delta\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (error rate), Table, Heatmap across devices.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count interface errors on switches and routers so we can spot bad cables or optics before they become real outages for everyone on that link.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.3",
              "n": "Interface Utilization",
              "c": "high",
              "f": "beginner",
              "v": "Saturated links cause drops and congestion. Trending enables proactive upgrades.",
              "t": "SNMP Modular Input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "SNMP IF-MIB (ifHCInOctets, ifHCOutOctets, ifSpeed)",
              "q": "index=network sourcetype=\"snmp:interface\"\n| streamstats current=f last(ifHCInOctets) as prev_in, last(_time) as prev_time by host, ifDescr\n| eval in_bps=((ifHCInOctets-prev_in)*8)/(_time-prev_time)\n| eval util_pct=round(in_bps/ifSpeed*100,1) | where util_pct>80",
              "m": "Poll 64-bit counters every 300s. Alert at 80% sustained. Use `predict` for capacity planning.",
              "z": "Line chart, Gauge per critical link, Table sorted by utilization.",
              "kfp": "Short bursts during backups, patch pushes, or video calls can approach thresholds without an outage. Match alerts to business hours and known batch jobs.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP Modular Input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: SNMP IF-MIB (ifHCInOctets, ifHCOutOctets, ifSpeed).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll 64-bit counters every 300s. Alert at 80% sustained. Use `predict` for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:interface\"\n| streamstats current=f last(ifHCInOctets) as prev_in, last(_time) as prev_time by host, ifDescr\n| eval in_bps=((ifHCInOctets-prev_in)*8)/(_time-prev_time)\n| eval util_pct=round(in_bps/ifSpeed*100,1) | where util_pct>80\n```\n\nUnderstanding this SPL\n\n**Interface Utilization** — Saturated links cause drops and congestion. Trending enables proactive upgrades.\n\nDocumented **Data sources**: SNMP IF-MIB (ifHCInOctets, ifHCOutOctets, ifSpeed). **App/TA** (typical add-on context): SNMP Modular Input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:interface. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:interface\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `streamstats` rolls up events into metrics; results are split **by host, ifDescr** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **in_bps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where util_pct>80` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.thruput) as thruput_bps\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m\n| where thruput_bps > 0\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Interface Utilization** — Saturated links cause drops and congestion. Trending enables proactive upgrades.\n\nDocumented **Data sources**: SNMP IF-MIB (ifHCInOctets, ifHCOutOctets, ifSpeed). **App/TA** (typical add-on context): SNMP Modular Input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where thruput_bps > 0` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart, Gauge per critical link, Table sorted by utilization.",
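              "_example": "A hedged capacity-planning sketch using the built-in `predict` command, as the implementation step suggests (the interface name and 24-bucket horizon are placeholder assumptions): trend one link hourly and project it forward to spot saturation early.\n\n```spl\nindex=network sourcetype=\"snmp:interface\" ifDescr=\"TenGigE0/0/0\"\n| streamstats current=f last(ifHCInOctets) as prev_in, last(_time) as prev_time by host\n| eval util_pct = round(((ifHCInOctets - prev_in) * 8) / (_time - prev_time) / ifSpeed * 100, 1)\n| timechart span=1h avg(util_pct) as util\n| predict util future_timespan=24\n```\n\n`future_timespan=24` extends the forecast by 24 one-hour buckets; swap in the links you actually care about.",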
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with interface utilization so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.thruput) as thruput_bps\n  from datamodel=Performance where nodename=Performance.Network\n  by Performance.host Performance.interface span=5m\n| where thruput_bps > 0",
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.4",
              "n": "BGP Peer State Changes",
              "c": "critical",
              "f": "beginner",
              "v": "BGP session drops cause routing convergence, potentially making networks unreachable.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%BGP-5-ADJCHANGE\" OR \"%BGP-3-NOTIFICATION\"\n| rex \"neighbor (?<neighbor_ip>\\S+)\" | table _time host neighbor_ip _raw | sort -_time",
              "m": "Forward syslog from all BGP speakers. Critical alert on adjacency down. Include neighbor IP and AS number.",
              "z": "Events timeline (critical), Status panel per BGP session, Table.",
              "kfp": "BGP sessions reset during planned maintenance, route policy changes, or upstream provider maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward syslog from all BGP speakers. Critical alert on adjacency down. Include neighbor IP and AS number.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%BGP-5-ADJCHANGE\" OR \"%BGP-3-NOTIFICATION\"\n| rex \"neighbor (?<neighbor_ip>\\S+)\" | table _time host neighbor_ip _raw | sort -_time\n```\n\nUnderstanding this SPL\n\n**BGP Peer State Changes** — BGP session drops cause routing convergence, potentially making networks unreachable.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **BGP Peer State Changes**): table _time host neighbor_ip _raw\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline (critical), Status panel per BGP session, Table.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We tell you when a BGP session to another network drops or bounces, because that can make whole paths unreachable for many people at once.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 65,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier"
              ]
            },
            {
              "i": "5.1.5",
              "n": "OSPF Neighbor Adjacency",
              "c": "critical",
              "f": "beginner",
              "v": "OSPF neighbor loss triggers SPF recalculation, disrupting traffic.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%OSPF-5-ADJCHG\"\n| rex \"Nbr (?<neighbor_ip>\\S+) on (?<interface>\\S+) from (?<from_state>\\S+) to (?<to_state>\\S+)\"\n| table _time host neighbor_ip interface from_state to_state",
              "m": "Forward syslog from all OSPF routers. Alert on adjacency changes to/from FULL. Track frequency for instability.",
              "z": "Events timeline, Table (router, neighbor, states).",
              "kfp": "OSPF neighbors may transition during interface MTU changes, DR and BDR elections, or area type changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward syslog from all OSPF routers. Alert on adjacency changes to/from FULL. Track frequency for instability.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%OSPF-5-ADJCHG\"\n| rex \"Nbr (?<neighbor_ip>\\S+) on (?<interface>\\S+) from (?<from_state>\\S+) to (?<to_state>\\S+)\"\n| table _time host neighbor_ip interface from_state to_state\n```\n\nUnderstanding this SPL\n\n**OSPF Neighbor Adjacency** — OSPF neighbor loss triggers SPF recalculation, disrupting traffic.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **OSPF Neighbor Adjacency**): table _time host neighbor_ip interface from_state to_state\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Table (router, neighbor, states).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow OSPF neighbor health so you know if routing in an area is unstable before it shows up as slow or broken application traffic.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "5.1.6",
              "n": "Spanning Tree Topology Change",
              "c": "high",
              "f": "beginner",
              "v": "STP topology changes cause brief disruption and MAC flushing. Root bridge changes are critical.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%SPANTREE-5-TOPOTCHANGE\" OR \"%SPANTREE-2-ROOTCHANGE\"\n| stats count by host | where count > 5 | sort -count",
              "m": "Forward syslog. Alert on root bridge changes (critical). Track topology change frequency per VLAN.",
              "z": "Table, Timeline, Bar chart by VLAN.",
              "kfp": "STP TCNs happen during access switch adds, link moves, and voice VLAN changes. Storm-control tuning can also shift TC rates.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward syslog. Alert on root bridge changes (critical). Track topology change frequency per VLAN.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%SPANTREE-5-TOPOTCHANGE\" OR \"%SPANTREE-2-ROOTCHANGE\"\n| stats count by host | where count > 5 | sort -count\n```\n\nUnderstanding this SPL\n\n**Spanning Tree Topology Change** — STP topology changes cause brief disruption and MAC flushing. Root bridge changes are critical.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline, Bar chart by VLAN.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with spanning tree topology change so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability",
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.7",
              "n": "Configuration Change Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Unauthorized config changes are a top cause of outages. Essential for compliance.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%SYS-5-CONFIG_I\"\n| rex \"Configured from (?<config_source>\\S+) by (?<user>\\S+)\"\n| table _time host user config_source",
              "m": "Forward syslog. Enable archive logging. Alert on any config change. Correlate with change tickets.",
              "z": "Table (device, user, time), Timeline, Single value (changes last 24h).",
              "kfp": "Authorized changes during change windows, scheduled compliance pushes, or device decommissioning will trigger this. Correlate to tickets before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward syslog. Enable archive logging. Alert on any config change. Correlate with change tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%SYS-5-CONFIG_I\"\n| rex \"Configured from (?<config_source>\\S+) by (?<user>\\S+)\"\n| table _time host user config_source\n```\n\nUnderstanding this SPL\n\n**Configuration Change Detection** — Unauthorized config changes are a top cause of outages. Essential for compliance.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **Configuration Change Detection**): table _time host user config_source\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, user, time), Timeline, Single value (changes last 24h).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log who changed a switch or router config and when, so you can line up what happened in the network with your approved change tickets.",
              "mtype": [
                "Configuration",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IT-Grundschutz",
                  "v": "2023 Edition",
                  "cl": "OPS.1.1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IT-Grundschutz OPS.1.1.2 (Ordered ICT operation) is enforced — Splunk UC-5.1.7: Configuration Change Detection.",
                  "ea": "Saved search 'UC-5.1.7' running on sourcetype=cisco:ios, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Standards-und-Zertifizierung/IT-Grundschutz/it-grundschutz_node.html"
                },
                {
                  "r": "MAS TRM",
                  "v": "2021",
                  "cl": "§4.1.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that MAS TRM §4.1.1 (Technology risk governance) is enforced — Splunk UC-5.1.7: Configuration Change Detection.",
                  "ea": "Saved search 'UC-5.1.7' running on sourcetype=cisco:ios, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.mas.gov.sg/-/media/mas/regulations-and-financial-stability/regulatory-and-supervisory-framework/risk-management/trm-guidelines-18-january-2021.pdf"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NERC CIP CIP-010-4 R1 (Configuration change management) is enforced — Splunk UC-5.1.7: Configuration Change Detection.",
                  "ea": "Saved search 'UC-5.1.7' running on sourcetype=cisco:ios, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                },
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "1.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SWIFT CSP 1.1 (SWIFT environment protection) is enforced — Splunk UC-5.1.7: Configuration Change Detection.",
                  "ea": "Saved search 'UC-5.1.7' running on sourcetype=cisco:ios, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.swift.com/myswift/customer-security-programme-csp/security-controls"
                }
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.8",
              "n": "Device CPU/Memory Utilization",
              "c": "high",
              "f": "beginner",
              "v": "CPU exhaustion causes packet drops, routing failures, management unresponsiveness.",
              "t": "SNMP, CISCO-PROCESS-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=snmp:cpu`",
              "q": "index=network sourcetype=\"snmp:cpu\"\n| timechart span=5m avg(cpmCPUTotal5minRev) as cpu_pct by host | where cpu_pct > 80",
              "m": "Poll CISCO-PROCESS-MIB and CISCO-MEMORY-POOL-MIB every 300s. Alert CPU >80% or memory >85%.",
              "z": "Line chart, Gauge, Table of high-utilization devices.",
              "kfp": "Short CPU or memory spikes during routing convergence, code upgrade, or SNMP walks are common. Baseline by platform role and compare to a maintenance calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP, CISCO-PROCESS-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=snmp:cpu`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll CISCO-PROCESS-MIB and CISCO-MEMORY-POOL-MIB every 300s. Alert CPU >80% or memory >85%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:cpu\"\n| timechart span=5m avg(cpmCPUTotal5minRev) as cpu_pct by host | where cpu_pct > 80\n```\n\nUnderstanding this SPL\n\n**Device CPU/Memory Utilization** — CPU exhaustion causes packet drops, routing failures, management unresponsiveness.\n\nDocumented **Data sources**: `sourcetype=snmp:cpu`. **App/TA** (typical add-on context): SNMP, CISCO-PROCESS-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:cpu. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:cpu\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where cpu_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart, Gauge, Table of high-utilization devices.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with device cpu/memory utilization so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.9",
              "n": "Device Uptime / Reload Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Unexpected reboots indicate hardware failure or unauthorized reload.",
              "t": "SNMP, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "SNMP sysUpTime, `sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%SYS-5-RESTART\" OR \"%SYS-5-RELOAD\"\n| table _time host _raw | sort -_time",
              "m": "Poll SNMP sysUpTime. Forward syslog reload messages. Alert when uptime drops. Cross-reference with maintenance windows.",
              "z": "Table (device, uptime), Timeline, Single value (unexpected reboots).",
              "kfp": "Planned power cycles, hitless upgrades, and RMA burn-in reset counters—treat as noise when the change record matches.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: SNMP sysUpTime, `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll SNMP sysUpTime. Forward syslog reload messages. Alert when uptime drops. Cross-reference with maintenance windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%SYS-5-RESTART\" OR \"%SYS-5-RELOAD\"\n| table _time host _raw | sort -_time\n```\n\nUnderstanding this SPL\n\n**Device Uptime / Reload Tracking** — Unexpected reboots indicate hardware failure or unauthorized reload.\n\nDocumented **Data sources**: SNMP sysUpTime, `sourcetype=cisco:ios`. **App/TA** (typical add-on context): SNMP, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Device Uptime / Reload Tracking**): table _time host _raw\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, uptime), Timeline, Single value (unexpected reboots).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with device uptime / reload tracking so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "both",
              "_qs": 45,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.10",
              "n": "VLAN Configuration Changes",
              "c": "medium",
              "f": "beginner",
              "v": "VLAN changes affect segmentation. Unauthorized changes can bypass security controls.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%VLAN_MANAGER-6-VLAN_CREATE\" OR \"%VLAN_MANAGER-6-VLAN_DELETE\"\n| table _time host _raw | sort -_time",
              "m": "Forward syslog. Alert on VLAN creation/deletion. Correlate with change tickets.",
              "z": "Table, Timeline.",
              "kfp": "Authorized changes during change windows, scheduled compliance pushes, or device decommissioning will trigger this. Correlate to tickets before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward syslog. Alert on VLAN creation/deletion. Correlate with change tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%VLAN_MANAGER-6-VLAN_CREATE\" OR \"%VLAN_MANAGER-6-VLAN_DELETE\"\n| table _time host _raw | sort -_time\n```\n\nUnderstanding this SPL\n\n**VLAN Configuration Changes** — VLAN changes affect segmentation. Unauthorized changes can bypass security controls.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **VLAN Configuration Changes**): table _time host _raw\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We flag when someone creates or removes VLANs on a device, because that can open holes in your segmentation if it was not planned.",
              "mtype": [
                "Configuration",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.11",
              "n": "Power Supply / Fan Failures",
              "c": "critical",
              "f": "beginner",
              "v": "Hardware failures reduce redundancy. A second failure causes outage.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog, SNMP CISCO-ENVMON-MIB",
              "d": "`sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%FAN-3-FAN_FAILED\" OR \"%PLATFORM_ENV-1-PSU\" OR \"%ENVIRONMENTAL-1-ALERT\"\n| table _time host _raw | sort -_time",
              "m": "Forward syslog. Poll ENVMON-MIB. Alert immediately on hardware failure. Include device location for dispatch.",
              "z": "Status indicator per device, Events list (critical).",
              "kfp": "Hardware sensor warnings during power redundancy testing, scheduled maintenance, or environmental swings. Lab gear often logs benign transitions.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog, SNMP CISCO-ENVMON-MIB.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward syslog. Poll ENVMON-MIB. Alert immediately on hardware failure. Include device location for dispatch.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%FAN-3-FAN_FAILED\" OR \"%PLATFORM_ENV-1-PSU\" OR \"%ENVIRONMENTAL-1-ALERT\"\n| table _time host _raw | sort -_time\n```\n\nUnderstanding this SPL\n\n**Power Supply / Fan Failures** — Hardware failures reduce redundancy. A second failure causes outage.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog, SNMP CISCO-ENVMON-MIB. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Power Supply / Fan Failures**): table _time host _raw\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status indicator per device, Events list (critical).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We raise the alarm if power supplies or fans report trouble, so a closet does not overheat or lose redundancy without you knowing.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "5.1.12",
              "n": "ARP/MAC Table Anomalies",
              "c": "medium",
              "f": "beginner",
              "v": "MAC flapping indicates loops, misconfigurations, or layer-2 attacks.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%SW_MATM-4-MACFLAP_NOTIF\"\n| rex \"(?<mac>[0-9a-fA-F]{4}\\.[0-9a-fA-F]{4}\\.[0-9a-fA-F]{4})\"\n| stats count by host, mac | sort -count",
              "m": "Forward syslog. Alert on MACFLAP events. Investigate the MAC to find the device.",
              "z": "Table, Timeline, Bar chart.",
              "kfp": "VMware vMotion, imaging carts, and conference room churn move MACs often. Baseline by VLAN before calling an attack.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward syslog. Alert on MACFLAP events. Investigate the MAC to find the device.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%SW_MATM-4-MACFLAP_NOTIF\"\n| rex \"(?<mac>[0-9a-fA-F]{4}\\.[0-9a-fA-F]{4}\\.[0-9a-fA-F]{4})\"\n| stats count by host, mac | sort -count\n```\n\nUnderstanding this SPL\n\n**ARP/MAC Table Anomalies** — MAC flapping indicates loops, misconfigurations, or layer-2 attacks.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, mac** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline, Bar chart.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with arp/mac table anomalies so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Anomaly",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.13",
              "n": "ACL Deny Logging",
              "c": "medium",
              "f": "beginner",
              "v": "ACL deny hits show blocked traffic. High volumes may indicate attacks or misconfigured apps.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%SEC-6-IPACCESSLOGP\"\n| rex \"list (?<acl>\\S+) denied (?<proto>\\w+) (?<src>\\d+\\.\\d+\\.\\d+\\.\\d+)\"\n| stats count by host, acl, src, proto | sort -count",
              "m": "Enable ACL logging (`log` keyword). Forward syslog. Dashboard showing top denied sources and trends.",
              "z": "Table, Bar chart by source IP, Timechart.",
              "kfp": "New security baselines, pen tests, and mis-pointed app VIPs can spike denies. Weed out scanners and approved tests via subnet lookup.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable ACL logging (`log` keyword). Forward syslog. Dashboard showing top denied sources and trends.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%SEC-6-IPACCESSLOGP\"\n| rex \"list (?<acl>\\S+) denied (?<proto>\\w+) (?<src>\\d+\\.\\d+\\.\\d+\\.\\d+)\"\n| stats count by host, acl, src, proto | sort -count\n```\n\nUnderstanding this SPL\n\n**ACL Deny Logging** — ACL deny hits show blocked traffic. High volumes may indicate attacks or misconfigured apps.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, acl, src, proto** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src All_Traffic.dest All_Traffic.dest_port span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ACL Deny Logging** — ACL deny hits show blocked traffic. High volumes may indicate attacks or misconfigured apps.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar chart by source IP, Timechart.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We list traffic your access lists block so you can see unexpected scans or a mis-aimed app before it harms the business.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src All_Traffic.dest All_Traffic.dest_port span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -count",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.14",
              "n": "SNMP Authentication Failures",
              "c": "medium",
              "f": "beginner",
              "v": "Failed SNMP auth indicates unauthorized polling or reconnaissance.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%SNMP-3-AUTHFAIL\"\n| rex \"from (?<src>\\S+)\" | stats count by host, src | sort -count",
              "m": "Forward syslog. Alert on repeated failures from unknown sources.",
              "z": "Table, Map, Timeline.",
              "kfp": "Legitimate NMS IP moves, new pollers, or SNMPv3 key rotations look like failures until the device ACL and views are updated.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward syslog. Alert on repeated failures from unknown sources.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%SNMP-3-AUTHFAIL\"\n| rex \"from (?<src>\\S+)\" | stats count by host, src | sort -count\n```\n\nUnderstanding this SPL\n\n**SNMP Authentication Failures** — Failed SNMP auth indicates unauthorized polling or reconnaissance.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Map, Timeline.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with snmp authentication failures so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of API RP 1164 5.3 (Access control) — Splunk UC-5.1.14: SNMP Authentication Failures.",
                  "ea": "Saved search 'UC-5.1.14' running on sourcetype=cisco:ios, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.api.org/products-and-services/standards/important-standards-announcements/standard-1164"
                },
                {
                  "r": "CJIS",
                  "v": "v5.9.4",
                  "cl": "5.5.1",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of CJIS 5.5.1 (Access control - identification) — Splunk UC-5.1.14: SNMP Authentication Failures.",
                  "ea": "Saved search 'UC-5.1.14' running on sourcetype=cisco:ios, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://le.fbi.gov/cjis-division/cjis-security-policy-resource-center"
                }
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.15",
              "n": "Environmental Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Temperature alerts catch cooling failures before they cause device outages.",
              "t": "SNMP, CISCO-ENVMON-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=snmp:environment`",
              "q": "index=network sourcetype=\"snmp:environment\"\n| stats latest(ciscoEnvMonTemperatureValue) as temp_c by host | where temp_c > 45",
              "m": "Poll ENVMON-MIB temperature sensors every 300s. Alert when >45°C.",
              "z": "Gauge per device, Line chart (trending), Table.",
              "kfp": "Datacenter temperature and humidity often swing during CRAC work, door propping, or seasonal load—pair alerts with BMS or site tickets.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP, CISCO-ENVMON-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=snmp:environment`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll ENVMON-MIB temperature sensors every 300s. Alert when >45°C.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:environment\"\n| stats latest(ciscoEnvMonTemperatureValue) as temp_c by host | where temp_c > 45\n```\n\nUnderstanding this SPL\n\n**Environmental Monitoring** — Temperature alerts catch cooling failures before they cause device outages.\n\nDocumented **Data sources**: `sourcetype=snmp:environment`. **App/TA** (typical add-on context): SNMP, CISCO-ENVMON-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:environment. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:environment\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where temp_c > 45` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge per device, Line chart (trending), Table.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with environmental monitoring so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.16",
              "n": "Route Table Flapping",
              "c": "critical",
              "f": "intermediate",
              "v": "Unstable routes cause packet loss and reachability failures. Detecting flapping routes prevents cascading network outages across your infrastructure.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"ROUTING\" OR \"RT_ENTRY\" OR \"%DUAL-5-NBRCHANGE\" OR \"%BGP-5-ADJCHANGE\" OR \"%OSPF-5-ADJCHG\"\n| rex \"(?<protocol>BGP|OSPF|EIGRP).*?(?<prefix>\\d+\\.\\d+\\.\\d+\\.\\d+/?\\d*)\"\n| bin _time span=10m | stats count as changes by _time, host, protocol, prefix\n| where changes > 5 | sort -changes",
              "m": "Collect syslog from all routers. Alert on >5 route changes for the same prefix in 10 minutes. Correlate with interface flaps. Use `streamstats` to detect patterns.",
              "z": "Timeline (flapping events), Table (prefix, host, count), Line chart (change frequency).",
              "kfp": "Policy or static edits, redistribution experiments, and upstream path changes can move routes. Verify against maintenance windows and lab VRFs.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect syslog from all routers. Alert on >5 route changes for the same prefix in 10 minutes. Correlate with interface flaps. Use `streamstats` to detect patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"ROUTING\" OR \"RT_ENTRY\" OR \"%DUAL-5-NBRCHANGE\" OR \"%BGP-5-ADJCHANGE\" OR \"%OSPF-5-ADJCHG\"\n| rex \"(?<protocol>BGP|OSPF|EIGRP).*?(?<prefix>\\d+\\.\\d+\\.\\d+\\.\\d+/?\\d*)\"\n| bin _time span=10m | stats count as changes by _time, host, protocol, prefix\n| where changes > 5 | sort -changes\n```\n\nUnderstanding this SPL\n\n**Route Table Flapping** — Unstable routes cause packet loss and reachability failures. Detecting flapping routes prevents cascading network outages across your infrastructure.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, protocol, prefix** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where changes > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (flapping events), Table (prefix, host, count), Line chart (change frequency).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with route table flapping so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.17",
              "n": "Duplex Mismatch Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Duplex mismatches degrade link performance silently. They cause late collisions, CRC errors, and reduced throughput that are hard to diagnose.",
              "t": "SNMP Modular Input, IF-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`, `sourcetype=snmp:interface`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%CDP-4-DUPLEX_MISMATCH\"\n| rex \"duplex mismatch discovered on (?<local_intf>\\S+).*with (?<remote_device>\\S+) (?<remote_intf>\\S+)\"\n| stats count latest(_time) as last_seen by host, local_intf, remote_device, remote_intf\n| sort -last_seen",
              "m": "Enable CDP/LLDP on all interfaces. Monitor syslog for duplex mismatch messages. Cross-reference with SNMP interface counters showing late collisions.",
              "z": "Table (local device/interface → remote device/interface), Alert list.",
              "kfp": "Intermittent autonegotiation blips and cable wiggles during moves can look like a mismatch—verify with a sustained port test.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP Modular Input, IF-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`, `sourcetype=snmp:interface`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CDP/LLDP on all interfaces. Monitor syslog for duplex mismatch messages. Cross-reference with SNMP interface counters showing late collisions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%CDP-4-DUPLEX_MISMATCH\"\n| rex \"duplex mismatch discovered on (?<local_intf>\\S+).*with (?<remote_device>\\S+) (?<remote_intf>\\S+)\"\n| stats count latest(_time) as last_seen by host, local_intf, remote_device, remote_intf\n| sort -last_seen\n```\n\nUnderstanding this SPL\n\n**Duplex Mismatch Detection** — Duplex mismatches degrade link performance silently. They cause late collisions, CRC errors, and reduced throughput that are hard to diagnose.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`, `sourcetype=snmp:interface`. **App/TA** (typical add-on context): SNMP Modular Input, IF-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, local_intf, remote_device, remote_intf** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (local device/interface → remote device/interface), Alert list.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with duplex mismatch detection so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.18",
              "n": "CDP/LLDP Neighbor Changes",
              "c": "medium",
              "f": "advanced",
              "v": "Unexpected neighbor changes indicate cabling modifications, device replacements, or unauthorized devices connecting to the network.",
              "t": "SNMP Modular Input, CISCO-CDP-MIB, LLDP-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=snmp:cdp`, `sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"snmp:cdp\"\n| stats latest(cdpCacheDeviceId) as neighbor, latest(cdpCachePlatform) as platform by host, cdpCacheIfIndex\n| appendpipe [| inputlookup cdp_baseline.csv]\n| eventstats latest(neighbor) as current, first(neighbor) as baseline by host, cdpCacheIfIndex\n| where current!=baseline | table host, cdpCacheIfIndex, baseline, current, platform",
              "m": "Poll CDP-MIB/LLDP-MIB at 600s intervals. Create a baseline lookup via `outputlookup`. Compare current neighbors against baseline. Alert on new/removed neighbors.",
              "z": "Table (host, interface, old neighbor, new neighbor), Change log timeline.",
              "kfp": "New cables, SFP swaps, and VoIP phone reboots change discovery neighbors. Ignore known moves in office refresh projects.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP Modular Input, CISCO-CDP-MIB, LLDP-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=snmp:cdp`, `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll CDP-MIB/LLDP-MIB at 600s intervals. Create a baseline lookup via `outputlookup`. Compare current neighbors against baseline. Alert on new/removed neighbors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:cdp\"\n| stats latest(cdpCacheDeviceId) as neighbor, latest(cdpCachePlatform) as platform by host, cdpCacheIfIndex\n| appendpipe [| inputlookup cdp_baseline.csv]\n| eventstats latest(neighbor) as current, first(neighbor) as baseline by host, cdpCacheIfIndex\n| where current!=baseline | table host, cdpCacheIfIndex, baseline, current, platform\n```\n\nUnderstanding this SPL\n\n**CDP/LLDP Neighbor Changes** — Unexpected neighbor changes indicate cabling modifications, device replacements, or unauthorized devices connecting to the network.\n\nDocumented **Data sources**: `sourcetype=snmp:cdp`, `sourcetype=cisco:ios`. **App/TA** (typical add-on context): SNMP Modular Input, CISCO-CDP-MIB, LLDP-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:cdp. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:cdp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, cdpCacheIfIndex** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Appends rows from a subsearch with `append`.\n• `eventstats` rolls up events into metrics; results are split **by host, cdpCacheIfIndex** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where current!=baseline` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CDP/LLDP Neighbor Changes**): table host, cdpCacheIfIndex, baseline, current, platform\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, interface, old neighbor, new neighbor), Change log timeline.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with cdp/lldp neighbor changes so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.19",
              "n": "PoE Power Budget Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "PoE budget exhaustion causes powered devices (IP phones, APs, cameras) to lose power. Proactive monitoring prevents unplanned device outages.",
              "t": "SNMP Modular Input, POWER-ETHERNET-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=snmp:poe`",
              "q": "index=network sourcetype=\"snmp:poe\"\n| stats latest(pethMainPseOperStatus) as status, latest(pethMainPsePower) as total_watts, latest(pethMainPseConsumptionPower) as used_watts by host\n| eval utilization_pct=round(used_watts/total_watts*100,1)\n| where utilization_pct > 80 | sort -utilization_pct",
              "m": "Poll POWER-ETHERNET-MIB every 300s. Track per-switch PoE budget utilization. Alert at 80% utilization. Trend over time to plan for additional PoE capacity.",
              "z": "Gauge (per switch), Line chart (utilization trending), Table (switch, budget, used, remaining).",
              "kfp": "AP reboots, phone bulk restarts, and new cameras shift PoE load. Scheduled refresh windows can look like a budget breach.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP Modular Input, POWER-ETHERNET-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=snmp:poe`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll POWER-ETHERNET-MIB every 300s. Track per-switch PoE budget utilization. Alert at 80% utilization. Trend over time to plan for additional PoE capacity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:poe\"\n| stats latest(pethMainPseOperStatus) as status, latest(pethMainPsePower) as total_watts, latest(pethMainPseConsumptionPower) as used_watts by host\n| eval utilization_pct=round(used_watts/total_watts*100,1)\n| where utilization_pct > 80 | sort -utilization_pct\n```\n\nUnderstanding this SPL\n\n**PoE Power Budget Monitoring** — PoE budget exhaustion causes powered devices (IP phones, APs, cameras) to lose power. Proactive monitoring prevents unplanned device outages.\n\nDocumented **Data sources**: `sourcetype=snmp:poe`. **App/TA** (typical add-on context): SNMP Modular Input, POWER-ETHERNET-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:poe. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:poe\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where utilization_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (per switch), Line chart (utilization trending), Table (switch, budget, used, remaining).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with poe power budget monitoring so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Capacity",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.20",
              "n": "EIGRP Neighbor Flapping",
              "c": "critical",
              "f": "intermediate",
              "v": "EIGRP neighbor instability causes route recalculation, increased CPU load, and traffic blackholing during convergence.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%DUAL-5-NBRCHANGE\"\n| rex \"EIGRP-(?<protocol>IPv4|IPv6) (?<as_number>\\d+).*Neighbor (?<neighbor_ip>\\S+) \\((?<interface>\\S+)\\) is (?<state>up|down)\"\n| bin _time span=15m | stats count(eval(state=\"down\")) as downs, count(eval(state=\"up\")) as ups by _time, host, neighbor_ip, interface\n| where downs > 2",
              "m": "Collect syslog from Cisco routers. Alert on >2 EIGRP neighbor down events in 15 minutes. Correlate with interface flaps and CPU utilization.",
              "z": "Timeline (up/down events), Table (neighbor, interface, flap count), Status grid.",
              "kfp": "EIGRP neighbor churn can follow redistribution changes, serial link clocking work, or lab reruns—compare to change records before treating as a fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect syslog from Cisco routers. Alert on >2 EIGRP neighbor down events in 15 minutes. Correlate with interface flaps and CPU utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%DUAL-5-NBRCHANGE\"\n| rex \"EIGRP-(?<protocol>IPv4|IPv6) (?<as_number>\\d+).*Neighbor (?<neighbor_ip>\\S+) \\((?<interface>\\S+)\\) is (?<state>up|down)\"\n| bin _time span=15m | stats count(eval(state=\"down\")) as downs, count(eval(state=\"up\")) as ups by _time, host, neighbor_ip, interface\n| where downs > 2\n```\n\nUnderstanding this SPL\n\n**EIGRP Neighbor Flapping** — EIGRP neighbor instability causes route recalculation, increased CPU load, and traffic blackholing during convergence.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, neighbor_ip, interface** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where downs > 2` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (up/down events), Table (neighbor, interface, flap count), Status grid.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with eigrp neighbor flapping so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Anomaly",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.21",
              "n": "CRC Error Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Increasing CRC errors indicate failing cables, SFPs, or electromagnetic interference. Early detection prevents link failures.",
              "t": "SNMP Modular Input, IF-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=snmp:interface`",
              "q": "index=network sourcetype=\"snmp:interface\"\n| streamstats current=f last(ifInErrors) as prev_errors, last(_time) as prev_time by host, ifDescr\n| eval error_rate=(ifInErrors-prev_errors)/(_time-prev_time)\n| where error_rate > 0\n| timechart span=1h avg(error_rate) by host limit=20",
              "m": "Poll IF-MIB counters every 300s. Use `streamstats` to compute deltas. Trend over days to detect worsening interfaces. Cross-reference with interface utilization.",
              "z": "Line chart (error rate over time per interface), Heatmap (device × interface), Table.",
              "kfp": "Planned work, test traffic, and known moves can look like a fault for «CRC Error Trending». Filter by change windows, lab sites, and maintenance notices you already trust.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP Modular Input, IF-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=snmp:interface`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll IF-MIB counters every 300s. Use `streamstats` to compute deltas. Trend over days to detect worsening interfaces. Cross-reference with interface utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:interface\"\n| streamstats current=f last(ifInErrors) as prev_errors, last(_time) as prev_time by host, ifDescr\n| eval error_rate=(ifInErrors-prev_errors)/(_time-prev_time)\n| where error_rate > 0\n| timechart span=1h avg(error_rate) by host limit=20\n```\n\nUnderstanding this SPL\n\n**CRC Error Trending** — Increasing CRC errors indicate failing cables, SFPs, or electromagnetic interference. Early detection prevents link failures.\n\nDocumented **Data sources**: `sourcetype=snmp:interface`. **App/TA** (typical add-on context): SNMP Modular Input, IF-MIB, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:interface. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:interface\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `streamstats` rolls up events into metrics; results are split **by host, ifDescr** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by host limit=20** — ideal for trending and alerting on this use case.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (error rate over time per interface), Heatmap (device × interface), Table.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with crc error trending so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.22",
              "n": "Syslog Source Health",
              "c": "high",
              "f": "expert",
              "v": "Silence from a device means either it's healthy or its syslog forwarding broke. Detecting missing syslog sources ensures continuous visibility.",
              "t": "Splunk core (metadata search), `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`, `sourcetype=syslog`",
              "q": "| tstats count where index=network sourcetype=\"cisco:ios\" by host\n| append [| inputlookup network_device_inventory.csv | rename device as host | fields host]\n| stats sum(count) as event_count by host | where event_count=0 OR isnull(event_count)\n| table host | rename host as \"Silent Devices\"",
              "m": "Maintain a device inventory lookup. Schedule a search comparing active syslog sources against inventory. Alert on devices missing for >1 hour.",
              "z": "Table (silent devices), Single value (count of silent devices), Status grid (all devices).",
              "kfp": "Index maintenance, HF restarts, and UDP drops during firewall rule pushes can look like a silent device. Check forwarder and firewall paths first.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk core (metadata search), `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`, `sourcetype=syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain a device inventory lookup. Schedule a search comparing active syslog sources against inventory. Alert on devices missing for >1 hour.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats count where index=network sourcetype=\"cisco:ios\" by host\n| append [| inputlookup network_device_inventory.csv | rename device as host | fields host]\n| stats sum(count) as event_count by host | where event_count=0 OR isnull(event_count)\n| table host | rename host as \"Silent Devices\"\n```\n\nUnderstanding this SPL\n\n**Syslog Source Health** — Silence from a device means either it's healthy or its syslog forwarding broke. Detecting missing syslog sources ensures continuous visibility.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`, `sourcetype=syslog`. **App/TA** (typical add-on context): Splunk core (metadata search), `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against precomputed summaries; ensure the referenced data model is accelerated.\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where event_count=0 OR isnull(event_count)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Syslog Source Health**): table host\n• Renames fields with `rename` for clarity or joins.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (silent devices), Single value (count of silent devices), Status grid (all devices).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with syslog source health so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.23",
              "n": "HSRP/VRRP State Changes",
              "c": "critical",
              "f": "intermediate",
              "v": "Gateway redundancy state changes impact all hosts on a subnet. Detecting unexpected failovers prevents prolonged outages.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "`sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"cisco:ios\" \"%HSRP-5-STATECHANGE\" OR \"%VRRP-6-STATECHANGE\"\n| rex \"Grp (?<group>\\d+) state (?<old_state>\\w+) -> (?<new_state>\\w+)\"\n| where new_state=\"Active\" OR new_state=\"Master\"\n| stats count by host, group, old_state, new_state | sort -_time",
              "m": "Enable HSRP/VRRP syslog notifications. Alert on Active/Master transitions. Correlate with interface or device failures to validate failover cause.",
              "z": "Timeline (state changes), Table (group, host, transition), Alert panel.",
              "kfp": "Hardware sensor warnings during power redundancy testing, scheduled maintenance, or environmental swings. Lab gear often logs benign transitions.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable HSRP/VRRP syslog notifications. Alert on Active/Master transitions. Correlate with interface or device failures to validate failover cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%HSRP-5-STATECHANGE\" OR \"%VRRP-6-STATECHANGE\"\n| rex \"Grp (?<group>\\d+) state (?<old_state>\\w+) -> (?<new_state>\\w+)\"\n| where new_state=\"Active\" OR new_state=\"Master\"\n| stats count by host, group, old_state, new_state | sort -_time\n```\n\nUnderstanding this SPL\n\n**HSRP/VRRP State Changes** — Gateway redundancy state changes impact all hosts on a subnet. Detecting unexpected failovers prevents prolonged outages.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where new_state=\"Active\" OR new_state=\"Master\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, group, old_state, new_state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (state changes), Table (group, host, transition), Alert panel.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with hsrp/vrrp state changes so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.24",
              "n": "Network Device Configuration Backup Freshness",
              "c": "high",
              "f": "intermediate",
              "v": "Last backup age tracking; stale backups risk config loss during failures.",
              "t": "Custom (Oxidized/RANCID output, SolarWinds NCM equivalent), `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "Backup system logs (timestamps of last successful backup per device)",
              "q": "index=network sourcetype=config_backup OR sourcetype=oxidized OR sourcetype=rancid\n| stats latest(_time) as last_backup by host, device_hostname\n| eval age_hours=round((now()-last_backup)/3600,1)\n| where age_hours > 24 OR isnull(last_backup)\n| table device_hostname host last_backup age_hours",
              "m": "Ingest backup job output from Oxidized, RANCID, or NCM. Parse success/failure and timestamp. Create lookup or index with device→last_backup mapping. Alert when last successful backup exceeds 24 hours. Schedule backup jobs daily; verify Splunk receives logs via scripted input or syslog.",
              "z": "Table (device, last backup, age), Single value (devices with stale backup), Gauge (hours since last backup).",
              "kfp": "Backup jobs can slip during holidays, RADIUS lockouts to the repo, or when a device is in RMA—compare to the backup product job history.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Oxidized/RANCID output, SolarWinds NCM equivalent), `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: Backup system logs (timestamps of last successful backup per device).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest backup job output from Oxidized, RANCID, or NCM. Parse success/failure and timestamp. Create lookup or index with device→last_backup mapping. Alert when last successful backup exceeds 24 hours. Schedule backup jobs daily; verify Splunk receives logs via scripted input or syslog.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=config_backup OR sourcetype=oxidized OR sourcetype=rancid\n| stats latest(_time) as last_backup by host, device_hostname\n| eval age_hours=round((now()-last_backup)/3600,1)\n| where age_hours > 24 OR isnull(last_backup)\n| table device_hostname host last_backup age_hours\n```\n\nUnderstanding this SPL\n\n**Network Device Configuration Backup Freshness** — Last backup age tracking; stale backups risk config loss during failures.\n\nDocumented **Data sources**: Backup system logs (timestamps of last successful backup per device). **App/TA** (typical add-on context): Custom (Oxidized/RANCID output, SolarWinds NCM equivalent), `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: config_backup, oxidized, rancid. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=config_backup. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, device_hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **age_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_hours > 24 OR isnull(last_backup)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Network Device Configuration Backup Freshness**): table device_hostname host last_backup age_hours\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, last backup, age), Single value (devices with stale backup), Gauge (hours since last backup).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
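Note that `stats latest(_time)` only yields rows for devices that actually logged a backup, so a device whose backups stopped weeks ago (or was never onboarded) drops out of the search entirely; this is why Step 1 calls for a device→last_backup lookup. A minimal sketch of the feeder search, assuming a `status` field on the backup events (hypothetical; adjust to whatever your Oxidized/RANCID output actually emits) and a `device_last_backup.csv` lookup of your choosing:

```spl
index=network sourcetype=config_backup status=success
| stats latest(_time) as last_backup by device_hostname
| outputlookup device_last_backup.csv
```

Schedule it shortly after the backup jobs complete; the freshness alert can then compare the inventory lookup against this one instead of against raw events.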
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with network device configuration backup freshness so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Configuration",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.08",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.08 (Regular backups) is enforced — Splunk UC-5.1.24: Network Device Configuration Backup Freshness.",
                  "ea": "Saved search 'UC-5.1.24' running on Backup system logs (timestamps of last successful backup per device), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/essential-eight/essential-eight-maturity-model"
                },
                {
                  "r": "BAIT/KAIT",
                  "v": "Aug 2021",
                  "cl": "§9",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that BAIT/KAIT §9 (ICT operations management) is enforced — Splunk UC-5.1.24: Network Device Configuration Backup Freshness.",
                  "ea": "Saved search 'UC-5.1.24' running on Backup system logs (timestamps of last successful backup per device), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bafin.de/SharedDocs/Veroeffentlichungen/EN/Rundschreiben/2021/rs_1021_BAIT_en.html"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NERC CIP CIP-010-4 R1 (Configuration change management) is enforced — Splunk UC-5.1.24: Network Device Configuration Backup Freshness.",
                  "ea": "Saved search 'UC-5.1.24' running on Backup system logs (timestamps of last successful backup per device), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                }
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.25",
              "n": "Network Configuration Drift Detection",
              "c": "high",
              "f": "advanced",
              "v": "Running config differs from baseline/golden config.",
              "t": "Custom scripted input (diff output from RANCID/Oxidized vs golden), `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "Config diff output, Git commit logs from network config repo",
              "q": "index=network sourcetype=config_drift OR sourcetype=git:commit\n| search \"diff\" OR \"drift\" OR \"changed\" OR \"modified\"\n| rex \"device[=:]\\s*(?<device>\\S+)\" | rex \"lines?\\s*(?<lines_changed>\\d+)\"\n| stats count as drift_events, values(diff_summary) as changes by device, host\n| where drift_events > 0\n| table device host drift_events changes",
              "m": "Run diff (e.g., `diff running golden`) via Oxidized hooks or custom script. Ingest diff output or Git commit metadata. Store golden configs in Git; compare after each backup. Alert on any non-whitelisted drift. Use `git diff` or `rancid -d` output as sourcetype.",
              "z": "Table (device, drift count, summary), Timeline (drift events), Single value (devices with drift).",
              "kfp": "Intentional hotfixes, emergency ACL inserts, and lab merges create drift you want—use allowlists and ticket IDs in comments.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (diff output from RANCID/Oxidized vs golden), `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: Config diff output, Git commit logs from network config repo.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun diff (e.g., `diff running golden`) via Oxidized hooks or custom script. Ingest diff output or Git commit metadata. Store golden configs in Git; compare after each backup. Alert on any non-whitelisted drift. Use `git diff` or `rancid -d` output as sourcetype.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=config_drift OR sourcetype=git:commit\n| search \"diff\" OR \"drift\" OR \"changed\" OR \"modified\"\n| rex \"device[=:]\\s*(?<device>\\S+)\" | rex \"lines?\\s*(?<lines_changed>\\d+)\"\n| stats count as drift_events, values(diff_summary) as changes by device, host\n| where drift_events > 0\n| table device host drift_events changes\n```\n\nUnderstanding this SPL\n\n**Network Configuration Drift Detection** — Running config differs from baseline/golden config.\n\nDocumented **Data sources**: Config diff output, Git commit logs from network config repo. **App/TA** (typical add-on context): Custom scripted input (diff output from RANCID/Oxidized vs golden), `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: config_drift, git:commit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=config_drift. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by device, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where drift_events > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Network Configuration Drift Detection**): table device host drift_events changes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, drift count, summary), Timeline (drift events), Single value (devices with drift).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with network configuration drift detection so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Configuration",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.26",
              "n": "Network Device Firmware Version Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Devices running unapproved or EOL firmware versions.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog, SNMP TA (sysDescr)",
              "d": "SNMP sysDescr, show version output",
              "q": "index=network sourcetype=snmp:sysinfo OR sourcetype=cisco:ios:version\n| rex field=_raw \"Version (?<ios_version>\\S+)\" | rex field=sysDescr \"Version (?<ios_version>\\S+)\"\n| lookup firmware_compliance ios_version OUTPUT approved eol_date\n| where approved!=\"yes\" OR (eol_date!=\"\" AND strptime(eol_date,\"%Y-%m-%d\")<now())\n| table host ios_version approved eol_date",
              "m": "Poll SNMP sysDescr or ingest `show version` via scripted input. Create lookup table (ios_version, approved, eol_date) from vendor EOL/EOS bulletins. Alert on non-approved or past-EOL versions. Update lookup quarterly.",
              "z": "Table (device, version, status), Bar chart (version distribution), Single value (non-compliant count).",
              "kfp": "Version drift can reflect staged rollouts and golden-image lag between regions—match to your release calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog, SNMP TA (sysDescr).\n• Ensure the following data sources are available: SNMP sysDescr, show version output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll SNMP sysDescr or ingest `show version` via scripted input. Create lookup table (ios_version, approved, eol_date) from vendor EOL/EOS bulletins. Alert on non-approved or past-EOL versions. Update lookup quarterly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=snmp:sysinfo OR sourcetype=cisco:ios:version\n| rex field=_raw \"Version (?<ios_version>\\S+)\" | rex field=sysDescr \"Version (?<ios_version>\\S+)\"\n| lookup firmware_compliance ios_version OUTPUT approved eol_date\n| where approved!=\"yes\" OR (eol_date!=\"\" AND strptime(eol_date,\"%Y-%m-%d\")<now())\n| table host ios_version approved eol_date\n```\n\nUnderstanding this SPL\n\n**Network Device Firmware Version Compliance** — Devices running unapproved or EOL firmware versions.\n\nDocumented **Data sources**: SNMP sysDescr, show version output. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog, SNMP TA (sysDescr). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:sysinfo, cisco:ios:version. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=snmp:sysinfo. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where approved!=\"yes\" OR (eol_date!=\"\" AND strptime(eol_date,\"%Y-%m-%d\")<now())` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Network Device Firmware Version Compliance**): table host ios_version approved eol_date\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, version, status), Bar chart (version distribution), Single value (non-compliant count).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
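For the version-distribution bar chart listed under visualizations, a minimal sketch reusing the same extractions as the compliance search; counting distinct hosts per version makes the long tail of one-off images easy to spot:

```spl
index=network sourcetype=snmp:sysinfo OR sourcetype=cisco:ios:version
| rex field=_raw "Version (?<ios_version>\S+)"
| rex field=sysDescr "Version (?<ios_version>\S+)"
| stats dc(host) as devices by ios_version
| sort - devices
```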
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with network device firmware version compliance so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.27",
              "n": "Interface Error Rate Trending",
              "c": "high",
              "f": "intermediate",
              "v": "CRC, runts, giants, input/output errors as rate over time.",
              "t": "SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "IF-MIB (ifInErrors, ifOutErrors), EtherLike-MIB",
              "q": "index=network sourcetype=snmp:interface\n| streamstats current=f last(ifInErrors) as prev_in, last(ifOutErrors) as prev_out, last(_time) as prev_time by host, ifDescr\n| eval delta_in=ifInErrors-coalesce(prev_in,0), delta_out=ifOutErrors-coalesce(prev_out,0)\n| eval interval_sec=_time-prev_time | where interval_sec>0 AND interval_sec<900\n| eval in_err_rate=round(delta_in/interval_sec*60,2), out_err_rate=round(delta_out/interval_sec*60,2)\n| where in_err_rate>0 OR out_err_rate>0\n| timechart span=5m avg(in_err_rate) as in_errors_per_min, avg(out_err_rate) as out_errors_per_min by host",
              "m": "Poll IF-MIB (ifInErrors, ifOutErrors) and EtherLike-MIB (dot3StatsFCSErrors) every 300s. Use streamstats for delta calculation. Alert when error rate exceeds threshold (e.g., >1/min on uplinks). Exclude admin-down interfaces.",
              "z": "Line chart (error rate over time), Table (host, interface, rate), Heatmap.",
              "kfp": "Brief error increments during transceiver replacement, software upgrades, or known-noisy access segments can look like a fault. Baseline by interface role before paging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: IF-MIB (ifInErrors, ifOutErrors), EtherLike-MIB.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll IF-MIB (ifInErrors, ifOutErrors) and EtherLike-MIB (dot3StatsFCSErrors) every 300s. Use streamstats for delta calculation. Alert when error rate exceeds threshold (e.g., >1/min on uplinks). Exclude admin-down interfaces.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=snmp:interface\n| streamstats current=f last(ifInErrors) as prev_in, last(ifOutErrors) as prev_out, last(_time) as prev_time by host, ifDescr\n| eval delta_in=ifInErrors-coalesce(prev_in,0), delta_out=ifOutErrors-coalesce(prev_out,0)\n| eval interval_sec=_time-prev_time | where interval_sec>0 AND interval_sec<900\n| eval in_err_rate=round(delta_in/interval_sec*60,2), out_err_rate=round(delta_out/interval_sec*60,2)\n| where in_err_rate>0 OR out_err_rate>0\n| timechart span=5m avg(in_err_rate) as in_errors_per_min, avg(out_err_rate) as out_errors_per_min by host\n```\n\nUnderstanding this SPL\n\n**Interface Error Rate Trending** — CRC, runts, giants, input/output errors as rate over time.\n\nDocumented **Data sources**: IF-MIB (ifInErrors, ifOutErrors), EtherLike-MIB. **App/TA** (typical add-on context): SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:interface. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=snmp:interface. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `streamstats` rolls up events into metrics; results are split **by host, ifDescr** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **delta_in** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **interval_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where interval_sec>0 AND interval_sec<900` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **in_err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where in_err_rate>0 OR out_err_rate>0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (error rate over time), Table (host, interface, rate), Heatmap.",
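Step 1 says to exclude admin-down interfaces. If your SNMP input emits `ifAdminStatus` and `ifOperStatus` (standard IF-MIB columns, but confirm the field names your modular input actually produces), a minimal pre-filtered variant looks like this; the `delta_in>=0` guard also drops counter-reset samples:

```spl
index=network sourcetype=snmp:interface ifAdminStatus=1 ifOperStatus=1
| streamstats current=f last(ifInErrors) as prev_in, last(_time) as prev_time by host, ifDescr
| eval delta_in=ifInErrors-prev_in, interval_sec=_time-prev_time
| where interval_sec>0 AND interval_sec<900 AND delta_in>=0
| eval in_err_rate=round(delta_in/interval_sec*60,2)
| timechart span=5m avg(in_err_rate) as in_errors_per_min by host
```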
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with interface error rate trending so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.28",
              "n": "STP Topology Change Rate",
              "c": "high",
              "f": "intermediate",
              "v": "Frequent topology changes indicating Layer 2 instability.",
              "t": "SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "BRIDGE-MIB (dot1dStpTopChanges), syslog STP events",
              "q": "index=network (sourcetype=snmp:stp OR sourcetype=\"cisco:ios\") (\"dot1dStpTopChanges\" OR \"%SPANTREE-5-TOPOTCHANGE\" OR \"%SPANTREE-2-ROOTCHANGE\")\n| eval stp_event=if(match(_raw,\"TOPOTCHANGE|ROOTCHANGE|dot1dStpTopChanges\"),1,0)\n| bin _time span=10m\n| stats sum(stp_event) as topo_changes by host, _time\n| where topo_changes > 3\n| sort -topo_changes",
              "m": "Poll BRIDGE-MIB dot1dStpTopChanges every 300s; ingest syslog for SPANTREE events. Alert when topology changes exceed 3 in 10 minutes. Correlate with root bridge changes for critical alerts.",
              "z": "Line chart (topology changes per host), Table (host, count), Timeline.",
              "kfp": "STP TCNs happen during access switch adds, link moves, and voice VLAN changes. Storm-control tuning can also shift TC rates.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: BRIDGE-MIB (dot1dStpTopChanges), syslog STP events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll BRIDGE-MIB dot1dStpTopChanges every 300s; ingest syslog for SPANTREE events. Alert when topology changes exceed 3 in 10 minutes. Correlate with root bridge changes for critical alerts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=snmp:stp OR sourcetype=\"cisco:ios\") (\"dot1dStpTopChanges\" OR \"%SPANTREE-5-TOPOTCHANGE\" OR \"%SPANTREE-2-ROOTCHANGE\")\n| eval stp_event=if(match(_raw,\"TOPOTCHANGE|ROOTCHANGE|dot1dStpTopChanges\"),1,0)\n| bin _time span=10m\n| stats sum(stp_event) as topo_changes by host, _time\n| where topo_changes > 3\n| sort -topo_changes\n```\n\nUnderstanding this SPL\n\n**STP Topology Change Rate** — Frequent topology changes indicating Layer 2 instability.\n\nDocumented **Data sources**: BRIDGE-MIB (dot1dStpTopChanges), syslog STP events. **App/TA** (typical add-on context): SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:stp, cisco:ios. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=snmp:stp. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **stp_event** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where topo_changes > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (topology changes per host), Table (host, count), Timeline.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with stp topology change rate so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.29",
              "n": "ARP Table Size Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "ARP table approaching hardware limits; can cause connectivity failures.",
              "t": "SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "ipNetToMediaTable entries count, show arp count",
              "q": "index=network sourcetype=snmp:arp OR sourcetype=cisco:ios:arp\n| eval arp_count=coalesce(arp_entries, arp_count, 0)\n| stats latest(arp_count) as current_arp by host\n| lookup arp_limit host OUTPUT max_arp\n| eval util_pct=round(current_arp/max_arp*100,1)\n| where util_pct > 70\n| table host current_arp max_arp util_pct",
              "m": "Poll ipNetToMediaTable (count rows) or parse `show ip arp` / `show arp` output via scripted input. Create lookup with device→max_arp (from vendor specs). Alert when utilization exceeds 70%.",
              "z": "Line chart (ARP count over time), Gauge (utilization), Table.",
              "kfp": "VMware vMotion, imaging carts, and conference room churn move MACs often. Baseline by VLAN before calling an attack.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: ipNetToMediaTable entries count, show arp count.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll ipNetToMediaTable (count rows) or parse `show ip arp` / `show arp` output via scripted input. Create lookup with device→max_arp (from vendor specs). Alert when utilization exceeds 70%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=snmp:arp OR sourcetype=cisco:ios:arp\n| eval arp_count=coalesce(arp_entries, arp_count, 0)\n| stats latest(arp_count) as current_arp by host\n| lookup arp_limit host OUTPUT max_arp\n| eval util_pct=round(current_arp/max_arp*100,1)\n| where util_pct > 70\n| table host current_arp max_arp util_pct\n```\n\nUnderstanding this SPL\n\n**ARP Table Size Trending** — ARP table approaching hardware limits; can cause connectivity failures.\n\nDocumented **Data sources**: ipNetToMediaTable entries count, show arp count. **App/TA** (typical add-on context): SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:arp, cisco:ios:arp. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=snmp:arp. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **arp_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where util_pct > 70` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ARP Table Size Trending**): table host current_arp max_arp util_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (ARP count over time), Gauge (utilization), Table.\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with arp table size trending so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.30",
              "n": "MAC Address Table Capacity",
              "c": "medium",
              "f": "intermediate",
              "v": "CAM table utilization on switches approaching hardware limits.",
              "t": "SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "dot1qTpFdbTable count, show mac address-table count",
              "q": "index=network sourcetype=snmp:bridge OR sourcetype=cisco:ios:mac\n| eval mac_count=coalesce(fdb_entries, mac_count, 0)\n| stats latest(mac_count) as current_mac by host\n| lookup mac_limit host OUTPUT max_mac\n| eval util_pct=round(current_mac/max_mac*100,1)\n| where util_pct > 75\n| table host current_mac max_mac util_pct",
              "m": "Poll dot1qTpFdbTable (count) or parse `show mac address-table count`. Create lookup with switch model→max_mac. Alert when CAM utilization exceeds 75%.",
              "z": "Line chart (MAC count over time), Gauge, Table.",
              "kfp": "VMware vMotion, imaging carts, and conference room churn move MACs often. Baseline by VLAN before calling an attack.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: dot1qTpFdbTable count, show mac address-table count.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll dot1qTpFdbTable (count) or parse `show mac address-table count`. Create lookup with switch model→max_mac. Alert when CAM utilization exceeds 75%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=snmp:bridge OR sourcetype=cisco:ios:mac\n| eval mac_count=coalesce(fdb_entries, mac_count, 0)\n| stats latest(mac_count) as current_mac by host\n| lookup mac_limit host OUTPUT max_mac\n| eval util_pct=round(current_mac/max_mac*100,1)\n| where util_pct > 75\n| table host current_mac max_mac util_pct\n```\n\nUnderstanding this SPL\n\n**MAC Address Table Capacity** — CAM table utilization on switches approaching hardware limits.\n\nDocumented **Data sources**: dot1qTpFdbTable count, show mac address-table count. **App/TA** (typical add-on context): SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:bridge, cisco:ios:mac. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=snmp:bridge. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mac_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where util_pct > 75` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MAC Address Table Capacity**): table host current_mac max_mac util_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (MAC count over time), Gauge, Table.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with mac address table capacity so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.31",
              "n": "QoS Policy Drops per Class",
              "c": "medium",
              "f": "advanced",
              "v": "Traffic dropped per QoS class/queue on routers/switches.",
              "t": "SNMP modular input (CISCO-CLASS-BASED-QOS-MIB), `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "cbQosCMDropPkt, cbQosCMPrePolicyPkt",
              "q": "index=network sourcetype=snmp:qos\n| streamstats current=f last(cbQosCMDropPkt) as prev_drop, last(cbQosCMPrePolicyPkt) as prev_pre by host, cbQosConfigIndex, cbQosObjectsIndex\n| eval drop_delta=cbQosCMDropPkt-coalesce(prev_drop,0), pre_delta=cbQosCMPrePolicyPkt-coalesce(prev_pre,0)\n| eval drop_rate=round(drop_delta/(pre_delta+0.001)*100,2)\n| where drop_delta > 0\n| stats sum(drop_delta) as total_drops, sum(pre_delta) as total_pre by host, policy_class\n| eval drop_pct=round(total_drops/(total_pre+0.001)*100,2)\n| sort -total_drops",
              "m": "Poll CISCO-CLASS-BASED-QOS-MIB (cbQosCMDropPkt, cbQosCMPrePolicyPkt) per policy/class. Map OID to policy name via lookup. Alert when drop rate exceeds 5% for critical classes.",
              "z": "Table (host, class, drops, rate), Bar chart, Line chart (drops over time).",
              "kfp": "Large file transfers and video meetings fill priority queues in ways that are normal for the business—compare to historical drops per class.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input (CISCO-CLASS-BASED-QOS-MIB), `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: cbQosCMDropPkt, cbQosCMPrePolicyPkt.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll CISCO-CLASS-BASED-QOS-MIB (cbQosCMDropPkt, cbQosCMPrePolicyPkt) per policy/class. Map OID to policy name via lookup. Alert when drop rate exceeds 5% for critical classes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=snmp:qos\n| streamstats current=f last(cbQosCMDropPkt) as prev_drop, last(cbQosCMPrePolicyPkt) as prev_pre by host, cbQosConfigIndex, cbQosObjectsIndex\n| eval drop_delta=cbQosCMDropPkt-coalesce(prev_drop,0), pre_delta=cbQosCMPrePolicyPkt-coalesce(prev_pre,0)\n| eval drop_rate=round(drop_delta/(pre_delta+0.001)*100,2)\n| where drop_delta > 0\n| stats sum(drop_delta) as total_drops, sum(pre_delta) as total_pre by host, policy_class\n| eval drop_pct=round(total_drops/(total_pre+0.001)*100,2)\n| sort -total_drops\n```\n\nUnderstanding this SPL\n\n**QoS Policy Drops per Class** — Traffic dropped per QoS class/queue on routers/switches.\n\nDocumented **Data sources**: cbQosCMDropPkt, cbQosCMPrePolicyPkt. **App/TA** (typical add-on context): SNMP modular input (CISCO-CLASS-BASED-QOS-MIB), `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:qos. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=snmp:qos. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `streamstats` rolls up events into metrics; results are split **by host, cbQosConfigIndex, cbQosObjectsIndex** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **drop_delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **drop_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drop_delta > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, policy_class** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **drop_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, class, drops, rate), Bar chart, Line chart (drops over time).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with qos policy drops per class so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.32",
              "n": "Network Device End-of-Life Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Devices approaching EOL/EOS dates.",
              "t": "Lookup table with vendor EOL dates, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "Device inventory + EOL lookup",
              "q": "| inputlookup device_inventory\n| lookup eol_lookup model OUTPUT eol_date eos_date\n| eval days_to_eol=round((strptime(eol_date,\"%Y-%m-%d\")-now())/86400,0)\n| where days_to_eol < 365 OR days_to_eol < 0\n| table host model eol_date days_to_eol\n| sort days_to_eol",
              "m": "Maintain device_inventory lookup (host, model) and eol_lookup (model, eol_date) from Cisco EOL/EOS bulletins. Run scheduled search or dashboard. Alert when days_to_eol < 180. Update lookups annually.",
              "z": "Table (device, model, days to EOL), Single value (devices within 6 months), Gauge.",
              "kfp": "Version drift can reflect staged rollouts and golden-image lag between regions—match to your release calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Lookup table with vendor EOL dates, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: Device inventory + EOL lookup.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain device_inventory lookup (host, model) and eol_lookup (model, eol_date) from Cisco EOL/EOS bulletins. Run scheduled search or dashboard. Alert when days_to_eol < 180. Update lookups annually.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup device_inventory\n| lookup eol_lookup model OUTPUT eol_date eos_date\n| eval days_to_eol=round((strptime(eol_date,\"%Y-%m-%d\")-now())/86400,0)\n| where days_to_eol < 365 OR days_to_eol < 0\n| table host model eol_date days_to_eol\n| sort days_to_eol\n```\n\nUnderstanding this SPL\n\n**Network Device End-of-Life Tracking** — Devices approaching EOL/EOS dates.\n\nDocumented **Data sources**: Device inventory + EOL lookup. **App/TA** (typical add-on context): Lookup table with vendor EOL dates, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **days_to_eol** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_eol < 365 OR days_to_eol < 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Network Device End-of-Life Tracking**): table host model eol_date days_to_eol\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, model, days to EOL), Single value (devices within 6 months), Gauge.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with network device end-of-life tracking so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.33",
              "n": "Half-Duplex Negotiation Anomaly",
              "c": "medium",
              "f": "intermediate",
              "v": "Half/full duplex mismatches causing performance degradation.",
              "t": "SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "IF-MIB (ifSpeed), EtherLike-MIB (dot3StatsDuplexStatus), syslog",
              "q": "index=network (sourcetype=snmp:interface OR sourcetype=\"cisco:ios\") (\"duplex\" OR \"Duplex\" OR \"dot3StatsDuplexStatus\" OR \"halfDuplex\" OR \"fullDuplex\")\n| rex \"duplex mismatch|(?<duplex_status>halfDuplex|fullDuplex|unknown)\"\n| where match(_raw,\"mismatch|halfDuplex\") OR duplex_status=\"halfDuplex\"\n| stats count by host, ifDescr, duplex_status\n| table host ifDescr duplex_status count",
              "m": "Poll EtherLike-MIB dot3StatsDuplexStatus; ingest syslog for duplex mismatch messages. Alert on half-duplex on gigabit uplinks or explicit mismatch events.",
              "z": "Table (host, interface, duplex), Status grid, Single value.",
              "kfp": "Intermittent autonegotiation blips and cable wiggles during moves can look like a mismatch—verify with a sustained port test.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: IF-MIB (ifSpeed), EtherLike-MIB (dot3StatsDuplexStatus), syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll EtherLike-MIB dot3StatsDuplexStatus; ingest syslog for duplex mismatch messages. Alert on half-duplex on gigabit uplinks or explicit mismatch events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=snmp:interface OR sourcetype=\"cisco:ios\") (\"duplex\" OR \"Duplex\" OR \"dot3StatsDuplexStatus\" OR \"halfDuplex\" OR \"fullDuplex\")\n| rex \"duplex mismatch|(?<duplex_status>halfDuplex|fullDuplex|unknown)\"\n| where match(_raw,\"mismatch|halfDuplex\") OR duplex_status=\"halfDuplex\"\n| stats count by host, ifDescr, duplex_status\n| table host ifDescr duplex_status count\n```\n\nUnderstanding this SPL\n\n**Half-Duplex Negotiation Anomaly** — Half/full duplex mismatches causing performance degradation.\n\nDocumented **Data sources**: IF-MIB (ifSpeed), EtherLike-MIB (dot3StatsDuplexStatus), syslog. **App/TA** (typical add-on context): SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:interface, cisco:ios. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=snmp:interface. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where match(_raw,\"mismatch|halfDuplex\") OR duplex_status=\"halfDuplex\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, ifDescr, duplex_status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Half-Duplex Negotiation Anomaly**): table host ifDescr duplex_status count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, interface, duplex), Status grid, Single value.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with half-duplex negotiation anomaly so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.34",
              "n": "PoE Power Budget Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "Power over Ethernet budget approaching capacity per switch.",
              "t": "SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "POWER-ETHERNET-MIB (pethMainPseOperStatus, pethMainPseConsumptionPower, pethMainPsePower)",
              "q": "index=network sourcetype=snmp:poe\n| eval util_pct=round(pethMainPseConsumptionPower/pethMainPsePower*100,1)\n| where pethMainPseOperStatus=\"on\" AND util_pct > 80\n| stats latest(util_pct) as poe_util, latest(pethMainPseConsumptionPower) as used_w, latest(pethMainPsePower) as total_w by host\n| table host poe_util used_w total_w",
              "m": "Poll POWER-ETHERNET-MIB (pethMainPsePower, pethMainPseConsumptionPower) every 300s. Alert when utilization exceeds 80%. Track per PSE unit on modular switches.",
              "z": "Gauge (utilization), Table (host, used, total), Line chart.",
              "kfp": "AP reboots, phone bulk restarts, and new cameras shift PoE load. Scheduled refresh windows can look like a budget breach.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: POWER-ETHERNET-MIB (pethMainPseOperStatus, pethMainPseConsumptionPower, pethMainPsePower).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll POWER-ETHERNET-MIB (pethMainPsePower, pethMainPseConsumptionPower) every 300s. Alert when utilization exceeds 80%. Track per PSE unit on modular switches.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=snmp:poe\n| eval util_pct=round(pethMainPseConsumptionPower/pethMainPsePower*100,1)\n| where pethMainPseOperStatus=\"on\" AND util_pct > 80\n| stats latest(util_pct) as poe_util, latest(pethMainPseConsumptionPower) as used_w, latest(pethMainPsePower) as total_w by host\n| table host poe_util used_w total_w\n```\n\nUnderstanding this SPL\n\n**PoE Power Budget Utilization** — Power over Ethernet budget approaching capacity per switch.\n\nDocumented **Data sources**: POWER-ETHERNET-MIB (pethMainPseOperStatus, pethMainPseConsumptionPower, pethMainPsePower). **App/TA** (typical add-on context): SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:poe. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=snmp:poe. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pethMainPseOperStatus=\"on\" AND util_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **PoE Power Budget Utilization**): table host poe_util used_w total_w\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (utilization), Table (host, used, total), Line chart.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with poe power budget utilization so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.35",
              "n": "LLDP / CDP Neighbor Change Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Unexpected topology changes in cabling/connections.",
              "t": "SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog",
              "d": "LLDP-MIB (lldpRemTable), CISCO-CDP-MIB, syslog CDP/LLDP events",
              "q": "index=network (sourcetype=snmp:lldp OR sourcetype=snmp:cdp OR sourcetype=\"cisco:ios\") (\"lldpRem\" OR \"CDP-4-NATIVE\" OR \"LLDP\" OR \"neighbor\")\n| rex \"neighbor (?<neighbor>\\S+)|lldpRemSysName[=:]\\s*(?<neighbor>\\S+)|port (?<port>\\S+)\"\n| bin _time span=1h\n| stats dc(neighbor) as neighbor_changes, values(neighbor) as neighbors by host, port, _time\n| where neighbor_changes > 1\n| table host port _time neighbor_changes neighbors",
              "m": "Poll LLDP-MIB lldpRemTable and CISCO-CDP-MIB; ingest syslog for CDP/LLDP neighbor change events. Baseline neighbor table; alert on unexpected changes (new/removed neighbors). Useful for change validation and cable swap detection.",
              "z": "Table (host, port, changes), Timeline, Single value (unexpected changes).",
              "kfp": "New cables, SFP swaps, and VoIP phone reboots change discovery neighbors. Ignore known moves in office refresh projects.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog.\n• Ensure the following data sources are available: LLDP-MIB (lldpRemTable), CISCO-CDP-MIB, syslog CDP/LLDP events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll LLDP-MIB lldpRemTable and CISCO-CDP-MIB; ingest syslog for CDP/LLDP neighbor change events. Baseline neighbor table; alert on unexpected changes (new/removed neighbors). Useful for change validation and cable swap detection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=snmp:lldp OR sourcetype=snmp:cdp OR sourcetype=\"cisco:ios\") (\"lldpRem\" OR \"CDP-4-NATIVE\" OR \"LLDP\" OR \"neighbor\")\n| rex \"neighbor (?<neighbor>\\S+)|lldpRemSysName[=:]\\s*(?<neighbor>\\S+)|port (?<port>\\S+)\"\n| bin _time span=1h\n| stats dc(neighbor) as neighbor_changes, values(neighbor) as neighbors by host, port, _time\n| where neighbor_changes > 1\n| table host port _time neighbor_changes neighbors\n```\n\nUnderstanding this SPL\n\n**LLDP / CDP Neighbor Change Detection** — Unexpected topology changes in cabling/connections.\n\nDocumented **Data sources**: LLDP-MIB (lldpRemTable), CISCO-CDP-MIB, syslog CDP/LLDP events. **App/TA** (typical add-on context): SNMP modular input, `TA-cisco_ios`, `Splunk_TA_juniper`, `arista:eos` via SC4S, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:lldp, snmp:cdp, cisco:ios. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=snmp:lldp. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, port, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where neighbor_changes > 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **LLDP / CDP Neighbor Change Detection**): table host port _time neighbor_changes neighbors\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, port, changes), Timeline, Single value (unexpected changes).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco Catalyst 9200/9300/9400/9500/9600, ISR 1100/4000, ASR 1000; Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480, SRX300/SRX1500/SRX4100/SRX4200; Arista 7050X3, 7060X, 7260X3, 7280R3, 7500R3; HPE Aruba CX 6100/6200/6300/6400, CX 8320/8325/8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with lldp / cdp neighbor change detection so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Fault",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.36",
              "n": "Port Utilization and Congestion Alerts (Meraki MS)",
              "c": "medium",
              "f": "intermediate",
              "v": "Identifies port saturation and congestion events that require capacity upgrades or load balancing adjustments.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api device_type=MS`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MS\n| stats avg(port_utilization) as avg_util, max(port_utilization) as max_util by switch_name, port_id\n| where max_util > 80\n| sort - max_util",
              "m": "Query MS switch device API for port utilization metrics. Alert on sustained >80% utilization.",
              "z": "Table of congested ports; timeline showing peak congestion; port utilization heatmap.",
              "kfp": "Short bursts during backups, patch pushes, or video calls can approach thresholds without an outage. Match alerts to business hours and known batch jobs.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api device_type=MS`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery MS switch device API for port utilization metrics. Alert on sustained >80% utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MS\n| stats avg(port_utilization) as avg_util, max(port_utilization) as max_util by switch_name, port_id\n| where max_util > 80\n| sort - max_util\n```\n\nUnderstanding this SPL\n\n**Port Utilization and Congestion Alerts (Meraki MS)** — Identifies port saturation and congestion events that require capacity upgrades or load balancing adjustments.\n\nDocumented **Data sources**: `sourcetype=meraki:api device_type=MS`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, port_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where max_util > 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of congested ports; timeline showing peak congestion; port utilization heatmap.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with port utilization and congestion alerts so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.37",
              "n": "Power over Ethernet (PoE) Consumption Tracking (Meraki MS)",
              "c": "medium",
              "f": "intermediate",
              "v": "Monitors PoE power allocation to prevent over-subscription and ensure sufficient power for all devices.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api device_type=MS`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MS poe_consumption=*\n| stats sum(poe_consumption) as total_power_watts, avg(poe_consumption) as avg_power by switch_name\n| eval power_capacity_pct=round(total_power_watts*100/1000, 2)\n| where power_capacity_pct > 80",
              "m": "Pull poe_consumption metrics from MS device API. Aggregate by switch.",
              "z": "Gauge showing power utilization percentage; stacked bar of PoE by port; capacity dashboard.",
              "kfp": "AP reboots, phone bulk restarts, and new cameras shift PoE load. Scheduled refresh windows can look like a budget breach.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api device_type=MS`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPull poe_consumption metrics from MS device API. Aggregate by switch.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MS poe_consumption=*\n| stats sum(poe_consumption) as total_power_watts, avg(poe_consumption) as avg_power by switch_name\n| eval power_capacity_pct=round(total_power_watts*100/1000, 2)\n| where power_capacity_pct > 80\n```\n\nUnderstanding this SPL\n\n**Power over Ethernet (PoE) Consumption Tracking (Meraki MS)** — Monitors PoE power allocation to prevent over-subscription and ensure sufficient power for all devices.\n\nDocumented **Data sources**: `sourcetype=meraki:api device_type=MS`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **power_capacity_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where power_capacity_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge showing power utilization percentage; stacked bar of PoE by port; capacity dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with power over ethernet so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.38",
              "n": "Spanning Tree Protocol (STP) Topology Changes (Meraki MS)",
              "c": "high",
              "f": "beginner",
              "v": "Alerts on unexpected STP topology changes that indicate link failures or network configuration issues.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*STP*\" OR signature=\"*topology*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*STP*\" OR signature=\"*topology*\")\n| stats count as change_count by switch_name, change_type\n| where change_count > 3",
              "m": "Monitor STP-related syslog events. Alert on excessive topology changes.",
              "z": "Timeline of topology changes; table of affected switches; alert dashboard.",
              "kfp": "STP TCNs happen during access switch adds, link moves, and voice VLAN changes. Storm-control tuning can also shift TC rates.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*STP*\" OR signature=\"*topology*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor STP-related syslog events. Alert on excessive topology changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*STP*\" OR signature=\"*topology*\")\n| stats count as change_count by switch_name, change_type\n| where change_count > 3\n```\n\nUnderstanding this SPL\n\n**Spanning Tree Protocol (STP) Topology Changes (Meraki MS)** — Alerts on unexpected STP topology changes that indicate link failures or network configuration issues.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*STP*\" OR signature=\"*topology*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, change_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where change_count > 3` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline of topology changes; table of affected switches; alert dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with spanning tree protocol so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.39",
              "n": "Port Security Violations and Rogue Device Detection (Meraki MS)",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects unauthorized MAC addresses and port security breaches that indicate potential network intrusion.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*Port Security*\" OR signature=\"*Unauthorized MAC*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*Port Security*\" OR signature=\"*Unauthorized*\")\n| stats count as violation_count by switch_name, port_id, mac_address\n| where violation_count > 0\n| sort - violation_count",
              "m": "Monitor port security violation events from syslog. Create alert for each unique violation.",
              "z": "Table of violations; timeline of events; network detail with affected ports.",
              "kfp": "Meraki cloud delays, dashboard API limits, and large site templates can look like a gap. Confirm in dashboard before opening a P1 on Splunk only.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*Port Security*\" OR signature=\"*Unauthorized MAC*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor port security violation events from syslog. Create alert for each unique violation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*Port Security*\" OR signature=\"*Unauthorized*\")\n| stats count as violation_count by switch_name, port_id, mac_address\n| where violation_count > 0\n| sort - violation_count\n```\n\nUnderstanding this SPL\n\n**Port Security Violations and Rogue Device Detection (Meraki MS)** — Detects unauthorized MAC addresses and port security breaches that indicate potential network intrusion.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*Port Security*\" OR signature=\"*Unauthorized MAC*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, port_id, mac_address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where violation_count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of violations; timeline of events; network detail with affected ports.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with port security violations and rogue device detection so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.40",
              "n": "Switch Interface Up/Down Events and Link Flapping (Meraki MS)",
              "c": "high",
              "f": "beginner",
              "v": "Identifies port flapping, cable issues, and unstable link states that cause intermittent connectivity.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*link*\" OR signature=\"*Interface*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*link*\" OR signature=\"*Interface*\" OR signature=\"*up*\" OR signature=\"*down*\")\n| stats count as event_count by switch_name, port_id\n| eval flap_rate=round(event_count/24, 2)\n| where flap_rate > 2",
              "m": "Track interface up/down state changes over 24 hours. Alert on flapping (>2 changes/hour).",
              "z": "Time-series showing flap events; table of affected ports; link state history.",
              "kfp": "Meraki cloud delays, dashboard API limits, and large site templates can look like a gap. Confirm in dashboard before opening a P1 on Splunk only.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*link*\" OR signature=\"*Interface*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack interface up/down state changes over 24 hours. Alert on flapping (>2 changes/hour).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*link*\" OR signature=\"*Interface*\" OR signature=\"*up*\" OR signature=\"*down*\")\n| stats count as event_count by switch_name, port_id\n| eval flap_rate=round(event_count/24, 2)\n| where flap_rate > 2\n```\n\nUnderstanding this SPL\n\n**Switch Interface Up/Down Events and Link Flapping (Meraki MS)** — Identifies port flapping, cable issues, and unstable link states that cause intermittent connectivity.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*link*\" OR signature=\"*Interface*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, port_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **flap_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where flap_rate > 2` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time-series showing flap events; table of affected ports; link state history.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with switch interface up/down events and link flapping so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.41",
              "n": "VLAN Configuration Mismatches and Tagging Violations (Meraki MS)",
              "c": "medium",
              "f": "intermediate",
              "v": "Detects VLAN configuration errors and tagging violations that disrupt network segmentation.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api` (MS), `sourcetype=meraki` (security/syslog)",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*VLAN*\"\n| stats count as vlan_error_count by switch_name, vlan_id\n| where vlan_error_count > 5",
              "m": "Monitor VLAN-related error events. Cross-reference with API device VLAN config.",
              "z": "Table of VLAN issues; timeline of configuration changes; network diagram with VLAN details.",
              "kfp": "VLAN work during moves, adds, and wireless SSID changes is expected. Exclude staging fabrics and change windows you already know about.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api` (MS), `sourcetype=meraki` (security/syslog).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor VLAN-related error events. Cross-reference with API device VLAN config.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*VLAN*\"\n| stats count as vlan_error_count by switch_name, vlan_id\n| where vlan_error_count > 5\n```\n\nUnderstanding this SPL\n\n**VLAN Configuration Mismatches and Tagging Violations (Meraki MS)** — Detects VLAN configuration errors and tagging violations that disrupt network segmentation.\n\nDocumented **Data sources**: `sourcetype=meraki:api` (MS), `sourcetype=meraki` (security/syslog). **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, vlan_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where vlan_error_count > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of VLAN issues; timeline of configuration changes; network diagram with VLAN details.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with vlan configuration mismatches and tagging violations so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.42",
              "n": "MAC Flooding and Bridge Table Exhaustion (Meraki MS)",
              "c": "high",
              "f": "beginner",
              "v": "Detects MAC address table exhaustion and flooding attacks that could overwhelm switch resources.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*MAC*\" OR signature=\"*bridge*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*MAC*\" OR signature=\"*flood*\")\n| stats count as flood_count by switch_name, port_id\n| where flood_count > 50",
              "m": "Monitor MAC-related syslog events. Alert on suspicious patterns.",
              "z": "Table of affected switches/ports; time-series of flood events; alert dashboard.",
              "kfp": "VMware vMotion, imaging carts, and conference room churn move MACs often. Baseline by VLAN before calling an attack.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*MAC*\" OR signature=\"*bridge*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor MAC-related syslog events. Alert on suspicious patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*MAC*\" OR signature=\"*flood*\")\n| stats count as flood_count by switch_name, port_id\n| where flood_count > 50\n```\n\nUnderstanding this SPL\n\n**MAC Flooding and Bridge Table Exhaustion (Meraki MS)** — Detects MAC address table exhaustion and flooding attacks that could overwhelm switch resources.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*MAC*\" OR signature=\"*bridge*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, port_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where flood_count > 50` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of affected switches/ports; time-series of flood events; alert dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with mac flooding and bridge table exhaustion so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.43",
              "n": "DHCP Snooping Violations (Meraki MS)",
              "c": "high",
              "f": "beginner",
              "v": "Detects unauthorized DHCP servers and spoofing attempts that disrupt network address allocation.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*DHCP Snooping*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*DHCP*Snooping*\"\n| stats count as violation_count by switch_name, port_id, server_ip\n| where violation_count > 0",
              "m": "Enable DHCP snooping on MS switches. Monitor syslog for violations.",
              "z": "Table of violations; timeline of events; affected port details.",
              "kfp": "New VoIP, cameras, and docked laptops may appear on the wrong access VLAN until you update trusted ports and DHCP helpers.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*DHCP Snooping*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable DHCP snooping on MS switches. Monitor syslog for violations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*DHCP*Snooping*\"\n| stats count as violation_count by switch_name, port_id, server_ip\n| where violation_count > 0\n```\n\nUnderstanding this SPL\n\n**DHCP Snooping Violations (Meraki MS)** — Detects unauthorized DHCP servers and spoofing attempts that disrupt network address allocation.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*DHCP Snooping*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, port_id, server_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where violation_count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of violations; timeline of events; affected port details.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with dhcp snooping violations so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.44",
              "n": "Broadcast Storm Detection and Mitigation (Meraki MS)",
              "c": "critical",
              "f": "intermediate",
              "v": "Identifies and alerts on broadcast storms that can freeze network performance across all switches.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*broadcast*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*broadcast*\"\n| stats sum(packet_count) as broadcast_packets by switch_name, port_id\n| where broadcast_packets > 10000",
              "m": "Monitor broadcast traffic thresholds. Alert on sustained high broadcast rates.",
              "z": "Real-time alert dashboard; time-series of broadcast packets; affected port list.",
              "kfp": "Imaging, Wake-on-LAN, and some IoT devices can create broadcast spikes. Confirm port security and STP before blaming a DDoS.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*broadcast*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor broadcast traffic thresholds. Alert on sustained high broadcast rates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*broadcast*\"\n| stats sum(packet_count) as broadcast_packets by switch_name, port_id\n| where broadcast_packets > 10000\n```\n\nUnderstanding this SPL\n\n**Broadcast Storm Detection and Mitigation (Meraki MS)** — Identifies and alerts on broadcast storms that can freeze network performance across all switches.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*broadcast*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, port_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where broadcast_packets > 10000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Real-time alert dashboard; time-series of broadcast packets; affected port list.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with broadcast storm detection and mitigation so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.45",
              "n": "Switch CPU and Memory Utilization (Meraki MS)",
              "c": "medium",
              "f": "beginner",
              "v": "Monitors switch hardware resources to prevent performance degradation or device failure.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api device_type=MS`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MS\n| stats avg(cpu_usage) as avg_cpu, max(cpu_usage) as peak_cpu, avg(memory_usage) as avg_mem by switch_name\n| where avg_cpu > 75 OR avg_mem > 80",
              "m": "Query MS device API for CPU and memory metrics. Alert on threshold breaches.",
              "z": "Gauge charts for CPU/memory; time-series trends; capacity planning dashboard.",
              "kfp": "Short CPU or memory spikes during routing convergence, code upgrade, or SNMP walks are common. Baseline by platform role and compare to a maintenance calendar.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api device_type=MS`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery MS device API for CPU and memory metrics. Alert on threshold breaches.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MS\n| stats avg(cpu_usage) as avg_cpu, max(cpu_usage) as peak_cpu, avg(memory_usage) as avg_mem by switch_name\n| where avg_cpu > 75 OR avg_mem > 80\n```\n\nUnderstanding this SPL\n\n**Switch CPU and Memory Utilization (Meraki MS)** — Monitors switch hardware resources to prevent performance degradation or device failure.\n\nDocumented **Data sources**: `sourcetype=meraki:api device_type=MS`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_cpu > 75 OR avg_mem > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge charts for CPU/memory; time-series trends; capacity planning dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with switch cpu and memory utilization so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.46",
              "n": "Stack Unit and Redundancy Health (Meraki MS)",
              "c": "high",
              "f": "beginner",
              "v": "Ensures switch stacking configuration remains healthy and redundancy is not compromised.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api device_type=MS stack_id=*`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MS stack_id=*\n| stats count as stack_members, count(eval(status=\"offline\")) as offline_members by stack_id\n| where offline_members > 0",
              "m": "Monitor stack member status via device API. Alert on member removal or failure.",
              "z": "Table of stack members and status; redundancy gauge; alert dashboard.",
              "kfp": "Meraki cloud delays, dashboard API limits, and large site templates can look like a gap. Confirm in dashboard before opening a P1 on Splunk only.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api device_type=MS stack_id=*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor stack member status via device API. Alert on member removal or failure.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MS stack_id=*\n| stats count as stack_members, count(eval(status=\"offline\")) as offline_members by stack_id\n| where offline_members > 0\n```\n\nUnderstanding this SPL\n\n**Stack Unit and Redundancy Health (Meraki MS)** — Ensures switch stacking configuration remains healthy and redundancy is not compromised.\n\nDocumented **Data sources**: `sourcetype=meraki:api device_type=MS stack_id=*`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by stack_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where offline_members > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of stack members and status; redundancy gauge; alert dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with stack unit and redundancy health so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.47",
              "n": "Trunk Link Utilization and Performance (Meraki MS)",
              "c": "medium",
              "f": "intermediate",
              "v": "Monitors inter-switch and uplink trunk utilization to identify bandwidth constraints.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api device_type=MS`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MS port_type=\"trunk\"\n| stats avg(port_utilization) as avg_trunk_util, max(port_utilization) as peak_util by switch_name, port_id\n| where peak_util > 70\n| sort - peak_util",
              "m": "Query MS API for trunk port utilization. Alert on sustained high utilization.",
              "z": "Trunk link utilization heatmap; timeline showing peak demand; capacity planning chart.",
              "kfp": "Short bursts during backups, patch pushes, or video calls can approach thresholds without an outage. Match alerts to business hours and known batch jobs.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api device_type=MS`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery MS API for trunk port utilization. Alert on sustained high utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MS port_type=\"trunk\"\n| stats avg(port_utilization) as avg_trunk_util, max(port_utilization) as peak_util by switch_name, port_id\n| where peak_util > 70\n| sort - peak_util\n```\n\nUnderstanding this SPL\n\n**Trunk Link Utilization and Performance (Meraki MS)** — Monitors inter-switch and uplink trunk utilization to identify bandwidth constraints.\n\nDocumented **Data sources**: `sourcetype=meraki:api device_type=MS`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, port_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where peak_util > 70` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Trunk link utilization heatmap; timeline showing peak demand; capacity planning chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with trunk link utilization and performance so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.48",
              "n": "QoS Queue Drops and Priority Violations (Meraki MS)",
              "c": "medium",
              "f": "beginner",
              "v": "Detects QoS queue overflow and drops that indicate traffic priority issues.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*QoS*\" OR signature=\"*queue*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*QoS*\" OR signature=\"*queue*\" OR signature=\"*drop*\")\n| stats sum(packets_dropped) as total_drops by switch_name, queue_id\n| where total_drops > 1000",
              "m": "Monitor QoS-related syslog events and drops. Alert on significant drop rates.",
              "z": "Table of drops by queue; time-series of drop events; traffic distribution pie chart.",
              "kfp": "Large file transfers and video meetings fill priority queues in ways that are normal for the business—compare to historical drops per class.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*QoS*\" OR signature=\"*queue*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor QoS-related syslog events and drops. Alert on significant drop rates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*QoS*\" OR signature=\"*queue*\" OR signature=\"*drop*\")\n| stats sum(packets_dropped) as total_drops by switch_name, queue_id\n| where total_drops > 1000\n```\n\nUnderstanding this SPL\n\n**QoS Queue Drops and Priority Violations (Meraki MS)** — Detects QoS queue overflow and drops that indicate traffic priority issues.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*QoS*\" OR signature=\"*queue*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, queue_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where total_drops > 1000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of drops by queue; time-series of drop events; traffic distribution pie chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with qos queue drops and priority violations so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.49",
              "n": "Port Access Control List (ACL) Hits and Block Events (Meraki MS)",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks ACL rule hits to monitor policy enforcement and identify anomalous traffic.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*ACL*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*ACL*\" action=\"block\"\n| stats count as block_count by switch_name, src_mac, dest_mac\n| sort - block_count",
              "m": "Monitor ACL deny/block events from syslog. Track frequently blocked source/destinations.",
              "z": "Table of blocked traffic; timeline of ACL hits; top blocked addresses chart.",
              "kfp": "New security baselines, pen tests, and mis-pointed app VIPs can spike denies. Weed out scanners and approved tests via subnet lookup.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*ACL*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor ACL deny/block events from syslog. Track frequently blocked source/destinations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*ACL*\" action=\"block\"\n| stats count as block_count by switch_name, src_mac, dest_mac\n| sort - block_count\n```\n\nUnderstanding this SPL\n\n**Port Access Control List (ACL) Hits and Block Events (Meraki MS)** — Tracks ACL rule hits to monitor policy enforcement and identify anomalous traffic.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*ACL*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, src_mac, dest_mac** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of blocked traffic; timeline of ACL hits; top blocked addresses chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with port access control list so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.50",
              "n": "Cable Test Results and Port Diagnostics (Meraki MS)",
              "c": "medium",
              "f": "beginner",
              "v": "Analyzes cable integrity test results to identify wiring faults before they cause outages.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*cable*\" OR signature=\"*diagnostic*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*cable*\" OR signature=\"*diagnostic*\")\n| stats count as test_count by switch_name, port_id, test_result\n| where test_result=\"FAIL\"",
              "m": "Periodically run cable tests on switch ports. Ingest results into syslog.",
              "z": "Table of failed cable tests; port detail with diagnostic results; failure timeline.",
              "kfp": "Cabling work and flaky patch panels can fail a test one run and pass the next. Re-run and compare with a cable certifier for chronic ports.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*cable*\" OR signature=\"*diagnostic*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPeriodically run cable tests on switch ports. Ingest results into syslog.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*cable*\" OR signature=\"*diagnostic*\")\n| stats count as test_count by switch_name, port_id, test_result\n| where test_result=\"FAIL\"\n```\n\nUnderstanding this SPL\n\n**Cable Test Results and Port Diagnostics (Meraki MS)** — Analyzes cable integrity test results to identify wiring faults before they cause outages.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*cable*\" OR signature=\"*diagnostic*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, port_id, test_result** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where test_result=\"FAIL\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of failed cable tests; port detail with diagnostic results; failure timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with cable test results and port diagnostics so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.51",
              "n": "Uplink Health and Failover Events (Meraki MS)",
              "c": "critical",
              "f": "beginner",
              "v": "Monitors primary/secondary uplink status to detect failover events and connection issues.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*Uplink*\" OR signature=\"*failover*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*Uplink*\" OR signature=\"*failover*\")\n| stats count as failover_count by uplink_name, event_type\n| where failover_count > 0",
              "m": "Monitor uplink status change events in syslog. Alert on failover.",
              "z": "Uplink status dashboard; failover event timeline; connection health gauge.",
              "kfp": "Meraki cloud delays, dashboard API limits, and large site templates can look like a gap. Confirm in dashboard before opening a P1 on Splunk only.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*Uplink*\" OR signature=\"*failover*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor uplink status change events in syslog. Alert on failover.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*Uplink*\" OR signature=\"*failover*\")\n| stats count as failover_count by uplink_name, event_type\n| where failover_count > 0\n```\n\nUnderstanding this SPL\n\n**Uplink Health and Failover Events (Meraki MS)** — Monitors primary/secondary uplink status to detect failover events and connection issues.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*Uplink*\" OR signature=\"*failover*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by uplink_name, event_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failover_count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Uplink status dashboard; failover event timeline; connection health gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with uplink health and failover events so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.52",
              "n": "Cellular Gateway Signal Strength Trending (Meraki MG)",
              "c": "medium",
              "f": "intermediate",
              "v": "Monitors cellular signal strength to ensure reliable backup connectivity.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api device_type=MG`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MG\n| stats avg(signal_strength) as avg_signal, min(signal_strength) as min_signal by cellular_gateway_id\n| eval signal_quality=case(avg_signal > -90, \"Excellent\", avg_signal > -110, \"Good\", 1=1, \"Poor\")",
              "m": "Query MG device API for signal metrics. Alert on degraded signal.",
              "z": "Signal strength gauge; trend timeline; cellular quality status.",
              "kfp": "Carrier testing, local SIM swaps, and planned tower work can look like a connectivity fault. Compare the Meraki event log to the same window in Splunk.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api device_type=MG`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery MG device API for signal metrics. Alert on degraded signal.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MG\n| stats avg(signal_strength) as avg_signal, min(signal_strength) as min_signal by cellular_gateway_id\n| eval signal_quality=case(avg_signal > -90, \"Excellent\", avg_signal > -110, \"Good\", 1=1, \"Poor\")\n```\n\nUnderstanding this SPL\n\n**Cellular Gateway Signal Strength Trending (Meraki MG)** — Monitors cellular signal strength to ensure reliable backup connectivity.\n\nDocumented **Data sources**: `sourcetype=meraki:api device_type=MG`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cellular_gateway_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **signal_quality** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Signal strength gauge; trend timeline; cellular quality status.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with cellular gateway signal strength trending so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.53",
              "n": "Cellular Data Usage and Overage Monitoring (Meraki MG)",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks cellular data consumption to manage carrier costs and prevent overages.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api device_type=MG data_usage=*`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MG data_usage=*\n| stats sum(data_usage) as total_data_usage_mb by cellular_gateway_id\n| eval overage_alert=if(total_data_usage_mb > 100000, \"Yes\", \"No\")",
              "m": "Query MG API for data usage metrics. Track monthly consumption.",
              "z": "Data usage gauge per gateway; consumption timeline; overage alert table.",
              "kfp": "Carrier testing, local SIM swaps, and planned tower work can look like a connectivity fault. Compare the Meraki event log to the same window in Splunk.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api device_type=MG data_usage=*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery MG API for data usage metrics. Track monthly consumption.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MG data_usage=*\n| stats sum(data_usage) as total_data_usage_mb by cellular_gateway_id\n| eval overage_alert=if(total_data_usage_mb > 100000, \"Yes\", \"No\")\n```\n\nUnderstanding this SPL\n\n**Cellular Data Usage and Overage Monitoring (Meraki MG)** — Tracks cellular data consumption to manage carrier costs and prevent overages.\n\nDocumented **Data sources**: `sourcetype=meraki:api device_type=MG data_usage=*`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cellular_gateway_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **overage_alert** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Data usage gauge per gateway; consumption timeline; overage alert table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with cellular data usage and overage monitoring so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.54",
              "n": "Carrier Connection Health and Network Performance (Meraki MG)",
              "c": "high",
              "f": "beginner",
              "v": "Monitors carrier connectivity and network performance metrics for backup internet links.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*cellular*\" OR signature=\"*carrier*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*cellular*\" OR signature=\"*carrier*\")\n| stats count as event_count by event_type, carrier_name\n| where event_type=\"connection_error\" OR event_type=\"network_error\"",
              "m": "Monitor carrier connection and network events. Alert on issues.",
              "z": "Carrier health timeline; connection error table; network performance gauge.",
              "kfp": "Carrier testing, local SIM swaps, and planned tower work can look like a connectivity fault. Compare the Meraki event log to the same window in Splunk.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*cellular*\" OR signature=\"*carrier*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor carrier connection and network events. Alert on issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*cellular*\" OR signature=\"*carrier*\")\n| stats count as event_count by event_type, carrier_name\n| where event_type=\"connection_error\" OR event_type=\"network_error\"\n```\n\nUnderstanding this SPL\n\n**Carrier Connection Health and Network Performance (Meraki MG)** — Monitors carrier connectivity and network performance metrics for backup internet links.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*cellular*\" OR signature=\"*carrier*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by event_type, carrier_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where event_type=\"connection_error\" OR event_type=\"network_error\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Carrier health timeline; connection error table; network performance gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the cellular or backup-WAN link on your Meraki gateway and tell you when carrier or network errors pile up so you can fix the path before sites go dark.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.55",
              "n": "SIM Status and Plan Monitoring (Meraki MG)",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks SIM card status and plan expiration to ensure continuous cellular connectivity.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api device_type=MG sim_status=*`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MG\n| stats latest(sim_status) as sim_status, latest(plan_expiry) as expiry_date by gateway_id, sim_id\n| eval days_until_expire=round((strptime(plan_expiry, \"%Y-%m-%d\")-now())/86400, 0)\n| where sim_status != \"active\" OR days_until_expire < 30",
              "m": "Query MG API for SIM status and plan expiry. Alert before expiration.",
              "z": "SIM status table; plan expiry countdown; renewal alert dashboard.",
              "kfp": "Carrier testing, local SIM swaps, and planned tower work can look like a connectivity fault. Compare the Meraki event log to the same window in Splunk.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api device_type=MG sim_status=*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery MG API for SIM status and plan expiry. Alert before expiration.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MG\n| stats latest(sim_status) as sim_status, latest(plan_expiry) as expiry_date by gateway_id, sim_id\n| eval days_until_expire=round((strptime(plan_expiry, \"%Y-%m-%d\")-now())/86400, 0)\n| where sim_status != \"active\" OR days_until_expire < 30\n```\n\nUnderstanding this SPL\n\n**SIM Status and Plan Monitoring (Meraki MG)** — Tracks SIM card status and plan expiration to ensure continuous cellular connectivity.\n\nDocumented **Data sources**: `sourcetype=meraki:api device_type=MG sim_status=*`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by gateway_id, sim_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_until_expire** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where sim_status != \"active\" OR days_until_expire < 30` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: SIM status table; plan expiry countdown; renewal alert dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with sim status and plan monitoring so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.56",
              "n": "Junos Chassis Alarm Monitoring (Juniper)",
              "c": "critical",
              "f": "beginner",
              "v": "Junos raises chassis alarms for power supply loss, fan failure, FPC or PIC offline, and temperature exceedances—conditions that often need on-site hardware work before service is fully restored. Ignoring these events lets a single failed component escalate into switch-wide thermal shutdown or loss of redundancy. A clear Splunk view of major and minor chassis alarms speeds dispatch to facilities and vendor support and shortens mean time to repair for edge and campus fabrics.",
              "t": "`Splunk_TA_juniper`, syslog",
              "d": "`sourcetype=juniper:junos:structured`",
              "q": "index=network sourcetype=\"juniper:junos:structured\"\n| search CHASSISD OR \"*chassis*\" OR ALARM OR \"*alarm*\"\n| search \"*Major*\" OR \"*Minor*\" OR severity=major OR severity=minor OR \"class major\" OR \"class minor\"\n| rex field=_raw max_match=0 \"(?i)fru\\s*type:\\s*(?<fru_type>[^,\\n]+)\"\n| stats count as alarm_events, values(_raw) as sample_messages by host, fru_type\n| where alarm_events > 0\n| sort -alarm_events",
              "m": "Forward Junos structured syslog to Splunk; install `Splunk_TA_juniper` for field normalization. Tune `search` terms to your facility naming (CHASSISD, craftd). Alert on first major alarm and on minor alarms that repeat on the same FRU within 24h. Enrich with CMDB site and rack for dispatch.",
              "z": "Chassis alarm table by host and FRU; timeline of major vs minor; single-value panel for open major alarms.",
              "kfp": "Hardware sensor warnings during power redundancy testing, scheduled maintenance, or environmental swings. Lab gear often logs benign transitions.",
              "refs": "[CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_juniper`, syslog.\n• Ensure the following data sources are available: `sourcetype=juniper:junos:structured`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Junos structured syslog to Splunk; install `Splunk_TA_juniper` for field normalization. Tune `search` terms to your facility naming (CHASSISD, craftd). Alert on first major alarm and on minor alarms that repeat on the same FRU within 24h. Enrich with CMDB site and rack for dispatch.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"juniper:junos:structured\"\n| search CHASSISD OR \"*chassis*\" OR ALARM OR \"*alarm*\"\n| search \"*Major*\" OR \"*Minor*\" OR severity=major OR severity=minor OR \"class major\" OR \"class minor\"\n| rex field=_raw max_match=0 \"(?i)fru\\s*type:\\s*(?<fru_type>[^,\\n]+)\"\n| stats count as alarm_events, values(_raw) as sample_messages by host, fru_type\n| where alarm_events > 0\n| sort -alarm_events\n```\n\nUnderstanding this SPL\n\n**Junos Chassis Alarm Monitoring (Juniper)** — Junos raises chassis alarms for power supply loss, fan failure, FPC or PIC offline, and temperature exceedances—conditions that often need on-site hardware work before service is fully restored. Ignoring these events lets a single failed component escalate into switch-wide thermal shutdown or loss of redundancy. A clear Splunk view of major and minor chassis alarms speeds dispatch to facilities and vendor support and shortens mean time to repair for edge and campus fabrics.\n\nDocumented **Data sources**: `sourcetype=juniper:junos:structured`. **App/TA** (typical add-on context): `Splunk_TA_juniper`, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: juniper:junos:structured. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"juniper:junos:structured\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, fru_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where alarm_events > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Junos Chassis Alarm Monitoring (Juniper)** — Junos raises chassis alarms for power supply loss, fan failure, FPC or PIC offline, and temperature exceedances—conditions that often need on-site hardware work before service is fully restored. Ignoring these events lets a single failed component escalate into switch-wide thermal shutdown or loss of redundancy. A clear Splunk view of major and minor chassis alarms speeds dispatch to facilities and vendor support and shortens mean time to repair for edge and campus fabrics.\n\nDocumented **Data sources**: `sourcetype=juniper:junos:structured`. **App/TA** (typical add-on context): `Splunk_TA_juniper`, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Chassis alarm table by host and FRU; timeline of major vs minor; single-value panel for open major alarms.",
              "script": "",
              "premium": "",
              "hw": "Juniper EX4300, EX4400, EX4600, QFX5100/5120/5200/5220, MX204/304/480/960, SRX4100/SRX4200/SRX4600",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with junos chassis alarm monitoring so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.dest | sort - count",
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.57",
              "n": "Junos Commit History and Configuration Rollback Audit (Juniper)",
              "c": "high",
              "f": "beginner",
              "v": "Junos treats configuration as a sequence of commits, so every change is tied to a user, time, and optional comment—ideal for audit and rollback to any of the last stored revisions. Without central logging, you lose the evidence needed to prove who changed routing, security zones, or interfaces during an incident. Correlating commits with change tickets catches unapproved changes and commits outside maintenance windows before they propagate through routing or firewall policy.",
              "t": "`Splunk_TA_juniper`, syslog",
              "d": "`sourcetype=juniper:junos:structured`",
              "q": "index=network sourcetype=\"juniper:junos:structured\"\n| search UI_COMMIT OR UI_COMMIT_COMPLETED OR \"UI_COMMIT_EVENT\"\n| rex field=_raw \"(?i)user\\s+['\\\"]?(?<commit_user>[^\\s'\\\"]+)\"\n| rex field=_raw \"(?i)comment\\s*[:=]\\s*['\\\"]?(?<commit_comment>[^'\\\"\\n]+)\"\n| rex field=_raw \"configuration committed by (?<commit_user2>\\S+)\"\n| eval operator=coalesce(commit_user, commit_user2, user)\n| stats earliest(_time) as first_seen, latest(_time) as last_seen, count as commits, latest(commit_comment) as last_comment by host, operator\n| sort -last_seen",
              "m": "Ensure `interactive-commands` (or equivalent) is logged to the host that forwards to Splunk. Parse `UI_COMMIT` / `UI_COMMIT_COMPLETED` lines; if the TA already extracts `user`, prefer that field over `rex`. Alert on commits from break-glass accounts or when `_time` is outside approved windows (lookup). Join to change-management lookup by ticket ID when comments include ticket numbers.",
              "z": "Commit timeline by device; table of last commit per host with user and comment; compliance panel for commits without matching change record.",
              "kfp": "Automated commit scripts, hitless GRES sync, and off-hours break-glass logins are normal when documented—tie alerts to the change record.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_juniper`, syslog.\n• Ensure the following data sources are available: `sourcetype=juniper:junos:structured`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure `interactive-commands` (or equivalent) is logged to the host that forwards to Splunk. Parse `UI_COMMIT` / `UI_COMMIT_COMPLETED` lines; if the TA already extracts `user`, prefer that field over `rex`. Alert on commits from break-glass accounts or when `_time` is outside approved windows (lookup). Join to change-management lookup by ticket ID when comments include ticket numbers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"juniper:junos:structured\"\n| search UI_COMMIT OR UI_COMMIT_COMPLETED OR \"UI_COMMIT_EVENT\"\n| rex field=_raw \"(?i)user\\s+['\\\"]?(?<commit_user>[^\\s'\\\"]+)\"\n| rex field=_raw \"(?i)comment\\s*[:=]\\s*['\\\"]?(?<commit_comment>[^'\\\"\\n]+)\"\n| rex field=_raw \"configuration committed by (?<commit_user2>\\S+)\"\n| eval operator=coalesce(commit_user, commit_user2, user)\n| stats earliest(_time) as first_seen, latest(_time) as last_seen, count as commits, latest(commit_comment) as last_comment by host, operator\n| sort -last_seen\n```\n\nUnderstanding this SPL\n\n**Junos Commit History and Configuration Rollback Audit (Juniper)** — Junos treats configuration as a sequence of commits, so every change is tied to a user, time, and optional comment—ideal for audit and rollback to any of the last stored revisions. Without central logging, you lose the evidence needed to prove who changed routing, security zones, or interfaces during an incident. Correlating commits with change tickets catches unapproved changes and commits outside maintenance windows before they propagate through routing or firewall policy.\n\nDocumented **Data sources**: `sourcetype=juniper:junos:structured`. **App/TA** (typical add-on context): `Splunk_TA_juniper`, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: juniper:junos:structured. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"juniper:junos:structured\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **operator** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, operator** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Junos Commit History and Configuration Rollback Audit (Juniper)** — Junos treats configuration as a sequence of commits, so every change is tied to a user, time, and optional comment—ideal for audit and rollback to any of the last stored revisions. Without central logging, you lose the evidence needed to prove who changed routing, security zones, or interfaces during an incident. Correlating commits with change tickets catches unapproved changes and commits outside maintenance windows before they propagate through routing or firewall policy.\n\nDocumented **Data sources**: `sourcetype=juniper:junos:structured`. **App/TA** (typical add-on context): `Splunk_TA_juniper`, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Commit timeline by device; table of last commit per host with user and comment; compliance panel for commits without matching change record.",
              "script": "",
              "premium": "",
              "hw": "All Juniper Junos devices",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep a simple story of who committed Junos config changes and when, so you can prove what changed during an incident or an audit.",
              "mtype": [
                "Configuration",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.dest | sort - agg_value",
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.58",
              "n": "Junos Routing Engine Failover Monitoring (Juniper)",
              "c": "critical",
              "f": "intermediate",
              "v": "Platforms with dual routing engines rely on GRES and related state transfer; an unplanned mastership change usually means primary RE failure, kernel panic, or loss of control-plane stability. Repeated failovers on the same chassis point to degrading hardware or software defects before a hard outage. Tracking these events in Splunk gives operations a single place to justify RMA, software upgrade, or emergency maintenance.",
              "t": "`Splunk_TA_juniper`, syslog",
              "d": "`sourcetype=juniper:junos:structured`",
              "q": "index=network sourcetype=\"juniper:junos:structured\"\n| search SERD_MASTERSHIP OR RE_SWITCHOVER OR \"mastership\" OR \"Routing Engine.*switch\" OR \"Become master\"\n| rex field=_raw \"(?i)from\\s+(?<old_role>\\w+)\\s+to\\s+(?<new_role>\\w+)\"\n| bin span=24h _time\n| stats count as failover_events, values(_raw) as samples by host, _time\n| where failover_events > 0\n| sort -failover_events",
              "m": "Classify planned vs unplanned using maintenance windows or SNMP/CLI context if ingested. Critical alert on any mastership change outside a change window; warning if more than one event per chassis per 7 days. Attach device role (PE, core, aggregation) for prioritization.",
              "z": "Failover timeline per chassis; count of failovers per device last 30 days; list of recent raw messages for triage.",
              "kfp": "RE switchovers during hitless upgrades, GRES tests, and power work are normal when planned. Require sustained impact before escalation.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_juniper`, syslog.\n• Ensure the following data sources are available: `sourcetype=juniper:junos:structured`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nClassify planned vs unplanned using maintenance windows or SNMP/CLI context if ingested. Critical alert on any mastership change outside a change window; warning if more than one event per chassis per 7 days. Attach device role (PE, core, aggregation) for prioritization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"juniper:junos:structured\"\n| search SERD_MASTERSHIP OR RE_SWITCHOVER OR \"mastership\" OR \"Routing Engine.*switch\" OR \"Become master\"\n| rex field=_raw \"(?i)from\\s+(?<old_role>\\w+)\\s+to\\s+(?<new_role>\\w+)\"\n| bin span=24h _time\n| stats count as failover_events, values(_raw) as samples by host, _time\n| where failover_events > 0\n| sort -failover_events\n```\n\nUnderstanding this SPL\n\n**Junos Routing Engine Failover Monitoring (Juniper)** — Platforms with dual routing engines rely on GRES and related state transfer; an unplanned mastership change usually means primary RE failure, kernel panic, or loss of control-plane stability. Repeated failovers on the same chassis point to degrading hardware or software defects before a hard outage. Tracking these events in Splunk gives operations a single place to justify RMA, software upgrade, or emergency maintenance.\n\nDocumented **Data sources**: `sourcetype=juniper:junos:structured`. **App/TA** (typical add-on context): `Splunk_TA_juniper`, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: juniper:junos:structured. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"juniper:junos:structured\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failover_events > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Failover timeline per chassis; count of failovers per device last 30 days; list of recent raw messages for triage.",
              "script": "",
              "premium": "",
              "hw": "Juniper MX240/MX480/MX960/MX2010/MX10008, EX9200, SRX5400/SRX5600/SRX5800",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with junos routing engine failover monitoring so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.59",
              "n": "Junos Virtual Chassis Health (Juniper)",
              "c": "high",
              "f": "intermediate",
              "v": "Virtual Chassis merges multiple switches into one control plane; a member disconnect or role churn can blackhole VLANs or split forwarding across members. VCCP and member state messages are the earliest signal of stack cable, power, or software issues. Centralized monitoring reduces time to detect partial stack failures that users report as intermittent “random” connectivity loss.",
              "t": "`Splunk_TA_juniper`, syslog",
              "d": "`sourcetype=juniper:junos:structured`",
              "q": "index=network sourcetype=\"juniper:junos:structured\"\n| search VCCPD OR \"Virtual Chassis\" OR \"vcp-\" OR \"member.*state\" OR \"VC member\"\n| rex field=_raw \"(?i)member\\s+(?<member_id>\\d+)\"\n| stats count as vc_events, dc(member_id) as members_seen, latest(_raw) as last_event by host\n| sort -vc_events",
              "m": "Baseline normal VCCP chatter; alert on member disconnect, not-primary transitions, or split-brain indicators per Juniper KB wording in your release. Correlate with interface errors on VCP ports. Map `host` to stack ID in a lookup for faster operator response.",
              "z": "VC member status matrix; event timeline for stack role changes; table of stacks with elevated event rate.",
              "kfp": "Chassis messages during FRU insertion, online diagnostics, and virtual chassis work are often expected. Follow vendor guidelines for one-shot vs sustained alarms.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_juniper`, syslog.\n• Ensure the following data sources are available: `sourcetype=juniper:junos:structured`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline normal VCCP chatter; alert on member disconnect, not-primary transitions, or split-brain indicators per Juniper KB wording in your release. Correlate with interface errors on VCP ports. Map `host` to stack ID in a lookup for faster operator response.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"juniper:junos:structured\"\n| search VCCPD OR \"Virtual Chassis\" OR \"vcp-\" OR \"member.*state\" OR \"VC member\"\n| rex field=_raw \"(?i)member\\s+(?<member_id>\\d+)\"\n| stats count as vc_events, dc(member_id) as members_seen, latest(_raw) as last_event by host\n| sort -vc_events\n```\n\nUnderstanding this SPL\n\n**Junos Virtual Chassis Health (Juniper)** — Virtual Chassis merges multiple switches into one control plane; a member disconnect or role churn can blackhole VLANs or split forwarding across members. VCCP and member state messages are the earliest signal of stack cable, power, or software issues. Centralized monitoring reduces time to detect partial stack failures that users report as intermittent “random” connectivity loss.\n\nDocumented **Data sources**: `sourcetype=juniper:junos:structured`. **App/TA** (typical add-on context): `Splunk_TA_juniper`, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: juniper:junos:structured. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"juniper:junos:structured\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: VC member status matrix; event timeline for stack role changes; table of stacks with elevated event rate.",
              "script": "",
              "premium": "",
              "hw": "Juniper EX2300, EX3400, EX4300, EX4400, EX4600, QFX5100/5110/5120",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with junos virtual chassis health so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.60",
              "n": "Arista MLAG Health and Consistency (Arista)",
              "c": "critical",
              "f": "intermediate",
              "v": "MLAG pairs depend on matching configuration and a healthy peer link; inconsistency or peer loss can lead to blackholed VLANs or asymmetric forwarding while both switches appear “up.” Catching `config-sanity` failures and peer state changes early prevents subtle application outages that load balancers and servers cannot retry away from. Splunk correlation across both peers speeds root cause when only one side logs the fault.",
              "t": "`arista:eos` via SC4S, syslog",
              "d": "`sourcetype=arista:eos`",
              "q": "index=network sourcetype=\"arista:eos\"\n| search Mlag OR MLAG OR mlag OR \"Mlag:\" OR \"Dual attached\" OR \"peer-link\" OR \"inactive\"\n| rex field=_raw \"(?i)Mlag:\\s*(?<mlag_msg>[^\\n]+)\"\n| stats count as mlag_events, latest(mlag_msg) as last_summary, values(_raw) as samples by host\n| sort -mlag_events",
              "m": "Ingest syslog from both MLAG peers with synchronized clocks. Alert on peer-link down, partial connectivity, or config-sanity failure strings present in your EOS version. Use a lookup pairing `mlag_domain` or neighbor hostname to open one incident for the pair. Validate against `show mlag` snapshots if you periodically scrape CLI into Splunk.",
              "z": "MLAG peer pair dashboard; red/amber status per domain; timeline of state transitions.",
              "kfp": "Peer-link upgrades, ISSU, and cable moves trigger MLAG checks. Confirm whether both switches saw the same event.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `arista:eos` via SC4S, syslog.\n• Ensure the following data sources are available: `sourcetype=arista:eos`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest syslog from both MLAG peers with synchronized clocks. Alert on peer-link down, partial connectivity, or config-sanity failure strings present in your EOS version. Use a lookup pairing `mlag_domain` or neighbor hostname to open one incident for the pair. Validate against `show mlag` snapshots if you periodically scrape CLI into Splunk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"arista:eos\"\n| search Mlag OR MLAG OR mlag OR \"Mlag:\" OR \"Dual attached\" OR \"peer-link\" OR \"inactive\"\n| rex field=_raw \"(?i)Mlag:\\s*(?<mlag_msg>[^\\n]+)\"\n| stats count as mlag_events, latest(mlag_msg) as last_summary, values(_raw) as samples by host\n| sort -mlag_events\n```\n\nUnderstanding this SPL\n\n**Arista MLAG Health and Consistency (Arista)** — MLAG pairs depend on matching configuration and a healthy peer link; inconsistency or peer loss can lead to blackholed VLANs or asymmetric forwarding while both switches appear “up.” Catching `config-sanity` failures and peer state changes early prevents subtle application outages that load balancers and servers cannot retry away from. Splunk correlation across both peers speeds root cause when only one side logs the fault.\n\nDocumented **Data sources**: `sourcetype=arista:eos`. **App/TA** (typical add-on context): `arista:eos` via SC4S, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: arista:eos. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"arista:eos\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: MLAG peer pair dashboard; red/amber status per domain; timeline of state transitions.",
              "script": "",
              "premium": "",
              "hw": "Arista 7050X3, 7060X, 7260X3, 7280R3, 7300X3, 7500R3, 7800R3",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with arista mlag health and consistency so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.61",
              "n": "Arista EOS Agent Health Monitoring (Arista)",
              "c": "high",
              "f": "beginner",
              "v": "EOS features run as processes under ProcMgr; unexpected agent restarts often precede control-plane instability, STP or routing anomalies, or memory pressure. Tracking restart frequency by agent name ties platform symptoms to a specific subsystem instead of generic “switch is slow” tickets. Trending restarts after code upgrades also validates stability before promoting images fleet-wide.",
              "t": "`arista:eos` via SC4S, syslog",
              "d": "`sourcetype=arista:eos`",
              "q": "index=network sourcetype=\"arista:eos\"\n| search ProcMgr OR \"ProcMgr-worker\" OR \"Agent.*restart\" OR \"restarted\" OR \"%AGENT-\"\n| rex field=_raw \"(?i)(?<agent_name>[A-Za-z0-9_\\-]+)\\s+agent.*restart\"\n| stats count as restarts, values(agent_name) as agents by host\n| where restarts > 0\n| sort -restarts",
              "m": "Create a baseline of allowed occasional restarts per major version; alert when any host exceeds threshold per day or when critical agents (e.g., Stp, Bgp, Route) restart. Attach EOS version from inventory. Open problem ticket when restarts cluster after a specific feature toggle.",
              "z": "Table of hosts with agent restart counts; bar chart by agent name; sparkline of restarts over time.",
              "kfp": "API rate limits, CVaaS maintenance, and collector restarts can look like an agent problem—check CloudVision and device reachability first.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `arista:eos` via SC4S, syslog.\n• Ensure the following data sources are available: `sourcetype=arista:eos`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a baseline of allowed occasional restarts per major version; alert when any host exceeds threshold per day or when critical agents (e.g., Stp, Bgp, Route) restart. Attach EOS version from inventory. Open problem ticket when restarts cluster after a specific feature toggle.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"arista:eos\"\n| search ProcMgr OR \"ProcMgr-worker\" OR \"Agent.*restart\" OR \"restarted\" OR \"%AGENT-\"\n| rex field=_raw \"(?i)(?<agent_name>[A-Za-z0-9_\\-]+)\\s+agent.*restart\"\n| stats count as restarts, values(agent_name) as agents by host\n| where restarts > 0\n| sort -restarts\n```\n\nUnderstanding this SPL\n\n**Arista EOS Agent Health Monitoring (Arista)** — EOS features run as processes under ProcMgr; unexpected agent restarts often precede control-plane instability, STP or routing anomalies, or memory pressure. Tracking restart frequency by agent name ties platform symptoms to a specific subsystem instead of generic “switch is slow” tickets. Trending restarts after code upgrades also validates stability before promoting images fleet-wide.\n\nDocumented **Data sources**: `sourcetype=arista:eos`. **App/TA** (typical add-on context): `arista:eos` via SC4S, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: arista:eos. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"arista:eos\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where restarts > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of hosts with agent restart counts; bar chart by agent name; sparkline of restarts over time.",
              "script": "",
              "premium": "",
              "hw": "All Arista 7000/7100/7200/7300/7500/7800 series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with arista eos agent health monitoring so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.62",
              "n": "Arista CloudVision Telemetry Alerts (Arista)",
              "c": "high",
              "f": "intermediate",
              "v": "CloudVision aggregates streaming telemetry and policy state across the fabric; forwarding those alerts to Splunk gives the NOC the same fabric-wide lens as the network team without logging into CVP for every spike. You can align telemetry-driven anomalies with application incidents and compliance audits. Historical alert volume also shows whether automation or drift is increasing operational noise.",
              "t": "CloudVision webhook or syslog to Splunk HEC",
              "d": "CloudVision webhook JSON (HEC) or forwarded syslog from CVP",
              "q": "index=network (source=*cvp* OR sourcetype=\"http:event_collector\" OR sourcetype=\"_json\")\n| search CloudVision OR cvp OR \"CVP\" OR deviceId OR device_id\n| eval sev=coalesce(severity, alert_severity, severity_level)\n| eval cat=coalesce(category, alert_type, type, alertType)\n| eval dev=coalesce(deviceId, device_id, dvc, host)\n| stats count as alert_count, latest(_time) as last_alert by sev, cat, dev\n| sort -alert_count",
              "m": "Configure CVP notification to HEC with a dedicated token and index; normalize JSON keys in `props.conf` if needed. For syslog bridge, set `LINE_BREAKER` for multiline events. Map CVP severities to Splunk notable severity. Deduplicate repeated device-level alerts with throttle. Optionally lookup `deviceId` to site and customer.",
              "z": "Alert volume by severity and category; top devices by alert count; timeline for compliance or config-drift categories.",
              "kfp": "API rate limits, CVaaS maintenance, and collector restarts can look like an agent problem—check CloudVision and device reachability first.",
              "refs": "[CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CloudVision webhook or syslog to Splunk HEC.\n• Ensure the following data sources are available: CloudVision webhook JSON (HEC) or forwarded syslog from CVP.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure CVP notification to HEC with a dedicated token and index; normalize JSON keys in `props.conf` if needed. For syslog bridge, set `LINE_BREAKER` for multiline events. Map CVP severities to Splunk notable severity. Deduplicate repeated device-level alerts with throttle. Optionally lookup `deviceId` to site and customer.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (source=*cvp* OR sourcetype=\"http:event_collector\" OR sourcetype=\"_json\")\n| search CloudVision OR cvp OR \"CVP\" OR deviceId OR device_id\n| eval sev=coalesce(severity, alert_severity, severity_level)\n| eval cat=coalesce(category, alert_type, type, alertType)\n| eval dev=coalesce(deviceId, device_id, dvc, host)\n| stats count as alert_count, latest(_time) as last_alert by sev, cat, dev\n| sort -alert_count\n```\n\nUnderstanding this SPL\n\n**Arista CloudVision Telemetry Alerts (Arista)** — CloudVision aggregates streaming telemetry and policy state across the fabric; forwarding those alerts to Splunk gives the NOC the same fabric-wide lens as the network team without logging into CVP for every spike. You can align telemetry-driven anomalies with application incidents and compliance audits. Historical alert volume also shows whether automation or drift is increasing operational noise.\n\nDocumented **Data sources**: CloudVision webhook JSON (HEC) or forwarded syslog from CVP. **App/TA** (typical add-on context): CloudVision webhook or syslog to Splunk HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: http:event_collector, _json. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"http:event_collector\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **sev** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cat** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dev** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by sev, cat, dev** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Arista CloudVision Telemetry Alerts (Arista)** — CloudVision aggregates streaming telemetry and policy state across the fabric; forwarding those alerts to Splunk gives the NOC the same fabric-wide lens as the network team without logging into CVP for every spike. You can align telemetry-driven anomalies with application incidents and compliance audits. Historical alert volume also shows whether automation or drift is increasing operational noise.\n\nDocumented **Data sources**: CloudVision webhook JSON (HEC) or forwarded syslog from CVP. **App/TA** (typical add-on context): CloudVision webhook or syslog to Splunk HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert volume by severity and category; top devices by alert count; timeline for compliance or config-drift categories.",
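              "_splSketch": "A hedged sketch of the severity mapping from Step 1; the CVP severity prefixes matched below are assumptions, so align them with the payloads your CVP version actually emits:\n\n```spl\nindex=network (source=*cvp* OR sourcetype=\"http:event_collector\")\n| eval sev=coalesce(severity, alert_severity, severity_level)\n| eval splunk_severity=case(like(upper(sev), \"CRIT%\"), \"critical\", like(upper(sev), \"WARN%\"), \"medium\", true(), \"low\")\n| dedup deviceId splunk_severity\n| table _time, deviceId, splunk_severity, sev\n```",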
              "script": "",
              "premium": "",
              "hw": "Arista CloudVision (on-prem or as-a-Service), all managed EOS switches",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with arista cloudvision telemetry alerts so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Performance",
                "Anomaly"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count",
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.63",
              "n": "Aruba CX VSF Stack Health (HPE Aruba)",
              "c": "high",
              "f": "intermediate",
              "v": "VSF stacks present one logical switch; member loss or conductor changes can isolate access VLANs or reduce east-west capacity without a full box failure. Early detection of member state changes and inter-switch link issues prevents prolonged segments of a floor or IDF running on a single surviving member. Splunk gives stack-level visibility where SNMP polling alone may lag during control-plane events.",
              "t": "HPE Aruba CX syslog",
              "d": "Aruba CX syslog (`sourcetype=syslog` or site-specific parser such as `hpe:aruba` if configured)",
              "q": "index=network sourcetype=syslog\n| search \"VSF\" OR \"Virtual Switching Framework\" OR \"stack\" OR \"conductor\" OR \"standby\" OR \"Member\"\n| search \"Aruba\" OR \"6300\" OR \"6400\" OR \"8320\" OR \"8360\" OR host=\"*cx*\"\n| rex field=_raw \"(?i)member\\s*(?<member_slot>\\d+)\"\n| stats count as vsf_events, latest(_raw) as last_event by host, member_slot\n| sort -vsf_events",
              "m": "Send CX switch syslog to a dedicated VIP or SC4S; tag `host` or `orig_host` so searches can narrow to CX models. Filter false positives from non-CX syslog sharing the index. Alert on member down, split stack indicators, or repeated conductor re-election. Cross-check with `show vsf` if you ingest periodic CLI or API snapshots.",
              "z": "Stack topology-style table (member ID, role, last event); timeline of conductor changes; heatmap of stacks with events.",
              "kfp": "Stack member joins, firmware alignment, and split scenarios during transport work look scary but are often controlled—use Aruba Central or CLI status as ground truth.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HPE Aruba CX syslog.\n• Ensure the following data sources are available: Aruba CX syslog (`sourcetype=syslog` or site-specific parser such as `hpe:aruba` if configured).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSend CX switch syslog to a dedicated VIP or SC4S; tag `host` or `orig_host` so searches can narrow to CX models. Filter false positives from non-CX syslog sharing the index. Alert on member down, split stack indicators, or repeated conductor re-election. Cross-check with `show vsf` if you ingest periodic CLI or API snapshots.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=syslog\n| search \"VSF\" OR \"Virtual Switching Framework\" OR \"stack\" OR \"conductor\" OR \"standby\" OR \"Member\"\n| search \"Aruba\" OR \"6300\" OR \"6400\" OR \"8320\" OR \"8360\" OR host=\"*cx*\"\n| rex field=_raw \"(?i)member\\s*(?<member_slot>\\d+)\"\n| stats count as vsf_events, latest(_raw) as last_event by host, member_slot\n| sort -vsf_events\n```\n\nUnderstanding this SPL\n\n**Aruba CX VSF Stack Health (HPE Aruba)** — VSF stacks present one logical switch; member loss or conductor changes can isolate access VLANs or reduce east-west capacity without a full box failure. Early detection of member state changes and inter-switch link issues prevents prolonged segments of a floor or IDF running on a single surviving member. Splunk gives stack-level visibility where SNMP polling alone may lag during control-plane events.\n\nDocumented **Data sources**: Aruba CX syslog (`sourcetype=syslog` or site-specific parser such as `hpe:aruba` if configured). **App/TA** (typical add-on context): HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, member_slot** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stack topology-style table (member ID, role, last event); timeline of conductor changes; heatmap of stacks with events.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "HPE Aruba CX 6200, CX 6300, CX 6400, CX 8320, CX 8325, CX 8360",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with aruba cx vsf stack health so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.64",
              "n": "Aruba CX VSX Redundancy Monitoring (HPE Aruba)",
              "c": "critical",
              "f": "intermediate",
              "v": "VSX pairs use an inter-switch link and keepalive; if both fail, split-brain can leave two active primaries forwarding independently, risking loops, duplicate MACs, and hard-to-diagnose application errors. Monitoring ISL, keepalive, and synchronization state is essential for data center and campus cores where VSX fronts servers or downstream stacks. Splunk lets you alert before both control and data paths degrade past recovery.",
              "t": "HPE Aruba CX syslog, SNMP",
              "d": "Aruba CX syslog; SNMP traps (VSX / link state) if forwarded to Splunk",
              "q": "index=network (sourcetype=syslog OR sourcetype=snmptrapd OR sourcetype=\"snmp:trap\")\n| search \"VSX\" OR \"Inter-Switch\" OR \"ISL\" OR \"keepalive\" OR \"split\" OR \"dual-primary\" OR \"InSync\" OR \"OutOfSync\"\n| rex field=_raw \"(?i)VSX\\s*[:,-]\\s*(?<vsx_detail>[^\\n]+)\"\n| stats count as vsx_events, latest(vsx_detail) as last_detail, latest(_raw) as sample by host\n| sort -vsx_events",
              "m": "Prefer synchronized clocks on VSX peers. Critical alert on keepalive loss, ISL down, or explicit split-brain / dual-primary messages. For SNMP, forward traps to Splunk and map OID to human-readable VSX state in `transforms.conf`. Correlate both peers’ logs into one notable event using a lookup of VSX pairs.",
              "z": "VSX pair health dashboard; ISL and keepalive status indicators; timeline of sync state changes.",
              "kfp": "Inter-switch link work and MCLAG role changes can raise VSX warnings until the fabric reconverges. Keep maintenance windows in the alert path.",
              "refs": "[Splunkbase app 7523](https://splunkbase.splunk.com/app/7523), [Splunkbase app 2846](https://splunkbase.splunk.com/app/2846), [Splunkbase app 2847](https://splunkbase.splunk.com/app/2847)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HPE Aruba CX syslog, SNMP.\n• Ensure the following data sources are available: Aruba CX syslog; SNMP traps (VSX / link state) if forwarded to Splunk.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPrefer synchronized clocks on VSX peers. Critical alert on keepalive loss, ISL down, or explicit split-brain / dual-primary messages. For SNMP, forward traps to Splunk and map OID to human-readable VSX state in `transforms.conf`. Correlate both peers’ logs into one notable event using a lookup of VSX pairs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=syslog OR sourcetype=snmptrapd OR sourcetype=\"snmp:trap\")\n| search \"VSX\" OR \"Inter-Switch\" OR \"ISL\" OR \"keepalive\" OR \"split\" OR \"dual-primary\" OR \"InSync\" OR \"OutOfSync\"\n| rex field=_raw \"(?i)VSX\\s*[:,-]\\s*(?<vsx_detail>[^\\n]+)\"\n| stats count as vsx_events, latest(vsx_detail) as last_detail, latest(_raw) as sample by host\n| sort -vsx_events\n```\n\nUnderstanding this SPL\n\n**Aruba CX VSX Redundancy Monitoring (HPE Aruba)** — VSX pairs use an inter-switch link and keepalive; if both fail, split-brain can leave two active primaries forwarding independently, risking loops, duplicate MACs, and hard-to-diagnose application errors. Monitoring ISL, keepalive, and synchronization state is essential for data center and campus cores where VSX fronts servers or downstream stacks. Splunk lets you alert before both control and data paths degrade past recovery.\n\nDocumented **Data sources**: Aruba CX syslog; SNMP traps (VSX / link state) if forwarded to Splunk. **App/TA** (typical add-on context): HPE Aruba CX syslog, SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: syslog, snmptrapd, snmp:trap. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: VSX pair health dashboard; ISL and keepalive status indicators; timeline of sync state changes.",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "HPE Aruba CX 8320, CX 8325, CX 8360, CX 8400, CX 10000",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know early when something looks wrong with aruba cx vsx redundancy monitoring so the team can act before it grows into a bigger outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp",
                "syslog"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.65",
              "n": "MPLS LDP Session and Label Distribution Health",
              "c": "critical",
              "f": "advanced",
              "v": "MPLS Label Distribution Protocol (LDP) sessions underpin label-switched paths across service provider and enterprise WAN cores. Session loss silently disrupts traffic forwarding. Monitoring LDP neighbor states and label binding changes ensures MPLS data plane continuity.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, SNMP Modular Input (MPLS-LDP-MIB)",
              "d": "`sourcetype=cisco:ios`, `sourcetype=junos:syslog`, SNMP MPLS-LDP-MIB",
              "q": "index=network (sourcetype=\"cisco:ios\" \"%LDP-5-NBRCHG\") OR (sourcetype=\"junos:syslog\" \"LDP_NEIGHBOR\")\n| rex \"Neighbor (?<ldp_neighbor>\\S+).*(?<state>UP|DOWN)\"\n| stats count latest(state) as current_state by host, ldp_neighbor\n| where current_state=\"DOWN\"\n| sort - count",
              "m": "Enable MPLS LDP syslog on all P/PE routers. Forward at severity 5+. Optionally poll MPLS-LDP-MIB (mplsLdpSessionState) via SNMP. Alert on any LDP neighbor DOWN transition.",
              "z": "Status grid (LDP neighbor per router), Table (down neighbors), Timeline (state changes).",
              "kfp": "",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, SNMP Modular Input (MPLS-LDP-MIB).\n• Ensure the following data sources are available: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`, SNMP MPLS-LDP-MIB.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable MPLS LDP syslog on all P/PE routers. Forward at severity 5+. Optionally poll MPLS-LDP-MIB (mplsLdpSessionState) via SNMP. Alert on any LDP neighbor DOWN transition.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"cisco:ios\" \"%LDP-5-NBRCHG\") OR (sourcetype=\"junos:syslog\" \"LDP_NEIGHBOR\")\n| rex \"Neighbor (?<ldp_neighbor>\\S+).*(?<state>UP|DOWN)\"\n| stats count latest(state) as current_state by host, ldp_neighbor\n| where current_state=\"DOWN\"\n| sort - count\n```\n\nUnderstanding this SPL\n\n**MPLS LDP Session and Label Distribution Health** — MPLS Label Distribution Protocol (LDP) sessions underpin label-switched paths across service provider and enterprise WAN cores. Session loss silently disrupts traffic forwarding. Monitoring LDP neighbor states and label binding changes ensures MPLS data plane continuity.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`, SNMP MPLS-LDP-MIB. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, SNMP Modular Input (MPLS-LDP-MIB). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios, junos:syslog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, ldp_neighbor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where current_state=\"DOWN\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (LDP neighbor per router), Table (down neighbors), Timeline (state changes).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASR 1000, ASR 9000, ISR 4000, NCS 540/5500; Juniper MX204, MX304, MX480, MX960; Nokia 7750 SR",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.66",
              "n": "RSVP-TE Tunnel State and Path Errors",
              "c": "critical",
              "f": "advanced",
              "v": "RSVP-TE tunnels provide traffic-engineered paths through MPLS cores. Tunnel state changes (UP/DOWN/reroute) and path errors (RESV failures, admission control rejections) indicate bandwidth contention or link failures that affect traffic-engineered services.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, SNMP Modular Input (MPLS-TE-STD-MIB)",
              "d": "`sourcetype=cisco:ios`, `sourcetype=junos:syslog`",
              "q": "index=network (sourcetype=\"cisco:ios\" \"%MPLS_TE-5-TUNNEL\") OR (sourcetype=\"junos:syslog\" \"RSVP_NEIGHBOR\" OR \"RSVP_PATH_ERR\")\n| rex \"Tunnel(?<tunnel_id>\\d+).*(?<tunnel_state>UP|DOWN|REROUTED)\"\n| stats count by host, tunnel_id, tunnel_state\n| where tunnel_state!=\"UP\"\n| sort - count",
              "m": "Enable MPLS TE syslog on all P/PE routers (severity 5+). Forward to Splunk. Alert on tunnel DOWN or repeated reroutes (indicator of flapping or bandwidth contention). Correlate with interface utilization for capacity planning.",
              "z": "Table (tunnel states), Status grid (tunnel health per headend), Timeline (state transitions).",
              "kfp": "",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, SNMP Modular Input (MPLS-TE-STD-MIB).\n• Ensure the following data sources are available: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable MPLS TE syslog on all P/PE routers (severity 5+). Forward to Splunk. Alert on tunnel DOWN or repeated reroutes (indicator of flapping or bandwidth contention). Correlate with interface utilization for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"cisco:ios\" \"%MPLS_TE-5-TUNNEL\") OR (sourcetype=\"junos:syslog\" \"RSVP_NEIGHBOR\" OR \"RSVP_PATH_ERR\")\n| rex \"Tunnel(?<tunnel_id>\\d+).*(?<tunnel_state>UP|DOWN|REROUTED)\"\n| stats count by host, tunnel_id, tunnel_state\n| where tunnel_state!=\"UP\"\n| sort - count\n```\n\nUnderstanding this SPL\n\n**RSVP-TE Tunnel State and Path Errors** — RSVP-TE tunnels provide traffic-engineered paths through MPLS cores. Tunnel state changes (UP/DOWN/reroute) and path errors (RESV failures, admission control rejections) indicate bandwidth contention or link failures that affect traffic-engineered services.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, SNMP Modular Input (MPLS-TE-STD-MIB). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios, junos:syslog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, tunnel_id, tunnel_state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where tunnel_state!=\"UP\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (tunnel states), Status grid (tunnel health per headend), Timeline (state transitions).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASR 1000, ASR 9000, NCS 540/5500; Juniper MX204, MX304, MX480, MX960",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.67",
              "n": "IS-IS Adjacency and SPF Calculation Monitoring",
              "c": "critical",
              "f": "advanced",
              "v": "IS-IS is the dominant IGP in large-scale service provider and data center networks. Adjacency loss triggers full SPF recalculation and route convergence. Monitoring IS-IS adjacency changes and SPF run frequency provides early warning of routing instability.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`",
              "d": "`sourcetype=cisco:ios`, `sourcetype=junos:syslog`",
              "q": "index=network (sourcetype=\"cisco:ios\" \"%ISIS-5-ADJCHANGE\") OR (sourcetype=\"junos:syslog\" \"RPD_ISIS_ADJCHANGE\")\n| rex \"(?<isis_adj_state>UP|DOWN|INIT).*(?<neighbor>\\S+)\"\n| stats count by host, neighbor, isis_adj_state\n| where isis_adj_state=\"DOWN\" OR count > 5\n| sort - count",
              "m": "Enable IS-IS syslog on all IS-IS routers (severity 5+). Forward to Splunk. Alert on adjacency DOWN events. Track SPF run frequency (`%ISIS-6-SPF_TRIG_NOT_BKOFF` on IOS-XR) — elevated SPF rates indicate network instability.",
              "z": "Table (adjacency changes), Timeline (SPF triggers), Status grid (IS-IS adjacency per router).",
              "kfp": "",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable IS-IS syslog on all IS-IS routers (severity 5+). Forward to Splunk. Alert on adjacency DOWN events. Track SPF run frequency (`%ISIS-6-SPF_TRIG_NOT_BKOFF` on IOS-XR) — elevated SPF rates indicate network instability.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"cisco:ios\" \"%ISIS-5-ADJCHANGE\") OR (sourcetype=\"junos:syslog\" \"RPD_ISIS_ADJCHANGE\")\n| rex \"(?<isis_adj_state>UP|DOWN|INIT).*(?<neighbor>\\S+)\"\n| stats count by host, neighbor, isis_adj_state\n| where isis_adj_state=\"DOWN\" OR count > 5\n| sort - count\n```\n\nUnderstanding this SPL\n\n**IS-IS Adjacency and SPF Calculation Monitoring** — IS-IS is the dominant IGP in large-scale service provider and data center networks. Adjacency loss triggers full SPF recalculation and route convergence. Monitoring IS-IS adjacency changes and SPF run frequency provides early warning of routing instability.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios, junos:syslog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, neighbor, isis_adj_state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where isis_adj_state=\"DOWN\" OR count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (adjacency changes), Timeline (SPF triggers), Status grid (IS-IS adjacency per router).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASR 9000, NCS 540/5500/5700, Catalyst 9500; Juniper MX204, MX304, MX480, MX960, QFX5120, QFX5220",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.68",
              "n": "BFD Session State for IGP Fast Failure Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Bidirectional Forwarding Detection (BFD) provides sub-second failure detection for IGP (OSPF, IS-IS, EIGRP) and BGP peers. BFD session drops are the earliest signal of link or forwarding path failure — faster than routing protocol hello timers. Centralized BFD monitoring catches silent path failures.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`",
              "d": "`sourcetype=cisco:ios`, `sourcetype=junos:syslog`",
              "q": "index=network (sourcetype=\"cisco:ios\" \"%BFD-6-BFD_SESS_DOWN\") OR (sourcetype=\"junos:syslog\" \"BFD_STATE_CHANGE\")\n| rex \"neighbor (?<bfd_peer>\\S+).*(?<bfd_state>UP|DOWN|ADMINDOWN)\"\n| stats count by host, bfd_peer, bfd_state\n| where bfd_state=\"DOWN\"\n| sort - count",
              "m": "Enable BFD for all IGP/BGP peers on core and distribution routers. Forward BFD syslog at severity 6+ to Splunk. Alert on BFD session DOWN events — correlate with IGP adjacency changes and interface status. BFD flapping (>3 transitions in 5 minutes) warrants immediate investigation of optics or cabling.",
              "z": "Status grid (BFD session per peer pair), Table (down sessions), Timeline (state changes).",
              "kfp": "",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable BFD for all IGP/BGP peers on core and distribution routers. Forward BFD syslog at severity 6+ to Splunk. Alert on BFD session DOWN events — correlate with IGP adjacency changes and interface status. BFD flapping (>3 transitions in 5 minutes) warrants immediate investigation of optics or cabling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"cisco:ios\" \"%BFD-6-BFD_SESS_DOWN\") OR (sourcetype=\"junos:syslog\" \"BFD_STATE_CHANGE\")\n| rex \"neighbor (?<bfd_peer>\\S+).*(?<bfd_state>UP|DOWN|ADMINDOWN)\"\n| stats count by host, bfd_peer, bfd_state\n| where bfd_state=\"DOWN\"\n| sort - count\n```\n\nUnderstanding this SPL\n\n**BFD Session State for IGP Fast Failure Detection** — Bidirectional Forwarding Detection (BFD) provides sub-second failure detection for IGP (OSPF, IS-IS, EIGRP) and BGP peers. BFD session drops are the earliest signal of link or forwarding path failure — faster than routing protocol hello timers. Centralized BFD monitoring catches silent path failures.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios, junos:syslog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, bfd_peer, bfd_state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where bfd_state=\"DOWN\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (BFD session per peer pair), Table (down sessions), Timeline (state changes).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 9300/9500, ASR 1000, ASR 9000, ISR 4000, NCS 540/5500; Juniper MX204, MX304, MX480, EX4300, EX4600, QFX5120",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.69",
              "n": "IPv6 Interface and Neighbor Discovery Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "As organizations deploy dual-stack or IPv6-only networks, monitoring IPv6 interface state, Neighbor Discovery Protocol (NDP) events, and Router Advertisement (RA) consistency ensures the v6 data plane functions alongside or instead of IPv4. Rogue RA detection prevents traffic hijacking.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`",
              "d": "`sourcetype=cisco:ios`, `sourcetype=junos:syslog`",
              "q": "index=network (sourcetype=\"cisco:ios\" \"%IPV6-4-DUPLICATE\" OR \"%IPV6_ND-6\" OR \"%RA_GUARD-6\")\n  OR (sourcetype=\"junos:syslog\" \"NDP\" OR \"ROUTER_ADVERTISEMENT\")\n| stats count by host, _raw\n| sort - count",
              "m": "Enable IPv6 ND syslog on all dual-stack devices. Enable RA Guard on access ports to detect rogue RAs. Forward to Splunk at severity 4+. Alert on duplicate address detection (DAD) failures and unauthorized RA events.",
              "z": "Table (IPv6 events by device), Timeline (RA events), Status grid (v6 interface state).",
              "kfp": "",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable IPv6 ND syslog on all dual-stack devices. Enable RA Guard on access ports to detect rogue RAs. Forward to Splunk at severity 4+. Alert on duplicate address detection (DAD) failures and unauthorized RA events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"cisco:ios\" \"%IPV6-4-DUPLICATE\" OR \"%IPV6_ND-6\" OR \"%RA_GUARD-6\")\n  OR (sourcetype=\"junos:syslog\" \"NDP\" OR \"ROUTER_ADVERTISEMENT\")\n| stats count by host, _raw\n| sort - count\n```\n\nUnderstanding this SPL\n\n**IPv6 Interface and Neighbor Discovery Monitoring** — As organizations deploy dual-stack or IPv6-only networks, monitoring IPv6 interface state, Neighbor Discovery Protocol (NDP) events, and Router Advertisement (RA) consistency ensures the v6 data plane functions alongside or instead of IPv4. Rogue RA detection prevents traffic hijacking.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios, junos:syslog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, _raw** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (IPv6 events by device), Timeline (RA events), Status grid (v6 interface state).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 9300/9500, ASR 1000, ISR 4000; Juniper EX4300, EX4600, MX204, QFX5120",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Availability",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.70",
              "n": "NTP Stratum and Peer Health on Network Devices",
              "c": "high",
              "f": "beginner",
              "v": "Accurate time synchronization on network devices is critical for log correlation, certificate validation, and regulatory compliance. NTP stratum drift or peer loss on routers and switches causes log timestamp skew that undermines SIEM correlation and forensic accuracy.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, SNMP Modular Input (NTP-MIB)",
              "d": "`sourcetype=cisco:ios`, SNMP NTP-MIB",
              "q": "index=network (sourcetype=\"cisco:ios\" \"%NTP-4-PEER_NO_ASSOC\" OR \"%NTP-4-CLOCK_UNSYNC\")\n  OR (sourcetype=\"junos:syslog\" \"NTPD_PEER_NO_RESPONSE\")\n| stats count by host\n| where count > 0\n| sort - count",
              "m": "Configure NTP on all network devices pointing to internal NTP servers. Enable NTP syslog messages (severity 4+). Optionally poll NTP-MIB (ntpSysPeerOffset, ntpSysStratum) via SNMP. Alert when stratum exceeds 4 or peer associations drop.",
              "z": "Single value (devices with NTP issues), Table (NTP events by device), Gauge (stratum distribution).",
              "kfp": "",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, SNMP Modular Input (NTP-MIB).\n• Ensure the following data sources are available: `sourcetype=cisco:ios`, SNMP NTP-MIB.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure NTP on all network devices pointing to internal NTP servers. Enable NTP syslog messages (severity 4+). Optionally poll NTP-MIB (ntpSysPeerOffset, ntpSysStratum) via SNMP. Alert when stratum exceeds 4 or peer associations drop.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"cisco:ios\" \"%NTP-4-PEER_NO_ASSOC\" OR \"%NTP-4-CLOCK_UNSYNC\")\n  OR (sourcetype=\"junos:syslog\" \"NTPD_PEER_NO_RESPONSE\")\n| stats count by host\n| where count > 0\n| sort - count\n```\n\nUnderstanding this SPL\n\n**NTP Stratum and Peer Health on Network Devices** — Accurate time synchronization on network devices is critical for log correlation, certificate validation, and regulatory compliance. NTP stratum drift or peer loss on routers and switches causes log timestamp skew that undermines SIEM correlation and forensic accuracy.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`, SNMP NTP-MIB. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, SNMP Modular Input (NTP-MIB). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios, junos:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (devices with NTP issues), Table (NTP events by device), Gauge (stratum distribution).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 9200/9300/9500, ISR 1100/4000, ASR 1000; Juniper EX2300, EX4300, MX204; Arista 7050X3, 7260X3",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Compliance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.71",
              "n": "QoS DSCP Marking and Classification Visibility",
              "c": "medium",
              "f": "advanced",
              "v": "QoS marking ensures voice, video, and critical apps receive priority treatment across WAN and campus networks. Monitoring DSCP/CoS marking at trust boundaries and reclassification points verifies that QoS policy is applied correctly end-to-end — mismarking degrades voice quality and violates SLAs.",
              "t": "`TA-cisco_ios`, SNMP Modular Input (CISCO-CLASS-BASED-QOS-MIB), NetFlow/IPFIX",
              "d": "SNMP CISCO-CLASS-BASED-QOS-MIB, `sourcetype=cisco:ios`, NetFlow with ToS field",
              "q": "index=network sourcetype=\"netflow\"\n| eval dscp=floor(tos/4)\n| stats count bytes as total_bytes by dscp src_ip dest_ip\n| lookup dscp_names dscp OUTPUT dscp_name\n| chart sum(total_bytes) over dscp_name by src_ip",
              "m": "Export NetFlow/IPFIX with ToS/DSCP field from WAN edge and campus distribution routers. Optionally poll CISCO-CLASS-BASED-QOS-MIB for per-class-map match/drop counters. Create a DSCP-to-name lookup table. Alert when unexpected DSCP values appear at trust boundaries or when priority queue drop rates exceed threshold.",
              "z": "Pie chart (traffic by DSCP class), Table (DSCP distribution per interface), Line chart (priority queue drops over time).",
              "kfp": "",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, SNMP Modular Input (CISCO-CLASS-BASED-QOS-MIB), NetFlow/IPFIX.\n• Ensure the following data sources are available: SNMP CISCO-CLASS-BASED-QOS-MIB, `sourcetype=cisco:ios`, NetFlow with ToS field.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport NetFlow/IPFIX with ToS/DSCP field from WAN edge and campus distribution routers. Optionally poll CISCO-CLASS-BASED-QOS-MIB for per-class-map match/drop counters. Create a DSCP-to-name lookup table. Alert when unexpected DSCP values appear at trust boundaries or when priority queue drop rates exceed threshold.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"netflow\"\n| eval dscp=floor(tos/4)\n| stats count bytes as total_bytes by dscp src_ip dest_ip\n| lookup dscp_names dscp OUTPUT dscp_name\n| chart sum(total_bytes) over dscp_name by src_ip\n```\n\nUnderstanding this SPL\n\n**QoS DSCP Marking and Classification Visibility** — QoS marking ensures voice, video, and critical apps receive priority treatment across WAN and campus networks. Monitoring DSCP/CoS marking at trust boundaries and reclassification points verifies that QoS policy is applied correctly end-to-end — mismarking degrades voice quality and violates SLAs.\n\nDocumented **Data sources**: SNMP CISCO-CLASS-BASED-QOS-MIB, `sourcetype=cisco:ios`, NetFlow with ToS field. **App/TA** (typical add-on context): `TA-cisco_ios`, SNMP Modular Input (CISCO-CLASS-BASED-QOS-MIB), NetFlow/IPFIX. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: netflow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"netflow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **dscp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by dscp src_ip dest_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `chart` builds a categorical visualization, grouping **by src_ip**.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (traffic by DSCP class), Table (DSCP distribution per interface), Line chart (priority queue drops over time).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 9300/9500, ISR 4000, ASR 1000; Juniper MX204, MX304, EX4300",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "cisco",
                "netflow",
                "snmp"
              ],
              "em": [
                "cisco_ios",
                "netflow_netflow",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.72",
              "n": "PIM Neighbor and Multicast Group State Monitoring",
              "c": "medium",
              "f": "advanced",
              "v": "PIM (Protocol Independent Multicast) neighbors and (S,G)/(*, G) state underpin multicast video distribution, financial market data feeds, and IPTV. PIM neighbor loss or RP mapping failures silently break multicast flows. Centralized monitoring catches multicast blackholes before users report missing video streams.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`, SNMP Modular Input (IGMP-STD-MIB, PIM-STD-MIB)",
              "d": "`sourcetype=cisco:ios`, `sourcetype=junos:syslog`",
              "q": "index=network (sourcetype=\"cisco:ios\" \"%PIM-5-NBRCHG\" OR \"%PIM-3-INVALID_RP_JOIN\")\n  OR (sourcetype=\"junos:syslog\" \"PIM_NEIGHBOR\" OR \"PIM_RP_MAPPING\")\n| rex \"neighbor (?<pim_neighbor>\\S+).*(?<state>UP|DOWN)\"\n| stats count by host, pim_neighbor, state\n| where state=\"DOWN\"\n| sort - count",
              "m": "Enable PIM syslog on multicast-enabled routers (severity 3+). Monitor PIM neighbor changes and RP mapping failures. Alert on PIM neighbor DOWN events and unexpected RP changes. Optionally poll PIM-STD-MIB for (S,G) state counts.",
              "z": "Status grid (PIM neighbor per router), Table (down neighbors), Timeline (multicast events).",
              "kfp": "",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`, SNMP Modular Input (IGMP-STD-MIB, PIM-STD-MIB).\n• Ensure the following data sources are available: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable PIM syslog on multicast-enabled routers (severity 3+). Monitor PIM neighbor changes and RP mapping failures. Alert on PIM neighbor DOWN events and unexpected RP changes. Optionally poll PIM-STD-MIB for (S,G) state counts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"cisco:ios\" \"%PIM-5-NBRCHG\" OR \"%PIM-3-INVALID_RP_JOIN\")\n  OR (sourcetype=\"junos:syslog\" \"PIM_NEIGHBOR\" OR \"PIM_RP_MAPPING\")\n| rex \"neighbor (?<pim_neighbor>\\S+).*(?<state>UP|DOWN)\"\n| stats count by host, pim_neighbor, state\n| where state=\"DOWN\"\n| sort - count\n```\n\nUnderstanding this SPL\n\n**PIM Neighbor and Multicast Group State Monitoring** — PIM (Protocol Independent Multicast) neighbors and (S,G)/(*, G) state underpin multicast video distribution, financial market data feeds, and IPTV. PIM neighbor loss or RP mapping failures silently break multicast flows. Centralized monitoring catches multicast blackholes before users report missing video streams.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`, SNMP Modular Input (IGMP-STD-MIB, PIM-STD-MIB). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios, junos:syslog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, pim_neighbor, state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where state=\"DOWN\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (PIM neighbor per router), Table (down neighbors), Timeline (multicast events).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 9500, ASR 1000, Nexus 9000; Juniper MX204, MX304, MX480, EX4600",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.73",
              "n": "IGMP Snooping and Multicast Group Membership",
              "c": "medium",
              "f": "intermediate",
              "v": "IGMP snooping controls multicast traffic at Layer 2, preventing broadcast storms from uncontrolled multicast flooding. Monitoring IGMP membership reports and leave events provides visibility into which VLANs have active multicast receivers — essential for IPTV, surveillance video, and financial ticker plant networks.",
              "t": "`TA-cisco_ios`, SNMP Modular Input (IGMP-STD-MIB)",
              "d": "`sourcetype=cisco:ios`, SNMP IGMP-STD-MIB",
              "q": "index=network sourcetype=\"cisco:ios\" \"%IGMP-5-GROUPCHANGE\"\n| rex \"Group (?<mcast_group>\\S+).*VLAN (?<vlan>\\d+).*(?<action>JOIN|LEAVE)\"\n| stats count by host, vlan, mcast_group, action\n| sort - count",
              "m": "Enable IGMP snooping on all access and distribution switches. Forward IGMP syslog to Splunk. Monitor group membership counts per VLAN. Alert on unexpected group joins (potential multicast amplification) or complete group leave on critical VLANs (service interruption).",
              "z": "Table (active groups per VLAN), Bar chart (join/leave ratio), Timeline (group changes).",
              "kfp": "",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, SNMP Modular Input (IGMP-STD-MIB).\n• Ensure the following data sources are available: `sourcetype=cisco:ios`, SNMP IGMP-STD-MIB.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable IGMP snooping on all access and distribution switches. Forward IGMP syslog to Splunk. Monitor group membership counts per VLAN. Alert on unexpected group joins (potential multicast amplification) or complete group leave on critical VLANs (service interruption).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%IGMP-5-GROUPCHANGE\"\n| rex \"Group (?<mcast_group>\\S+).*VLAN (?<vlan>\\d+).*(?<action>JOIN|LEAVE)\"\n| stats count by host, vlan, mcast_group, action\n| sort - count\n```\n\nUnderstanding this SPL\n\n**IGMP Snooping and Multicast Group Membership** — IGMP snooping controls multicast traffic at Layer 2, preventing broadcast storms from uncontrolled multicast flooding. Monitoring IGMP membership reports and leave events provides visibility into which VLANs have active multicast receivers — essential for IPTV, surveillance video, and financial ticker plant networks.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`, SNMP IGMP-STD-MIB. **App/TA** (typical add-on context): `TA-cisco_ios`, SNMP Modular Input (IGMP-STD-MIB). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, vlan, mcast_group, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (active groups per VLAN), Bar chart (join/leave ratio), Timeline (group changes).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 9300/9500, Nexus 9000; Juniper EX4300, EX4600, QFX5120",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.74",
              "n": "VLAN Configuration Change and VTP Audit",
              "c": "high",
              "f": "beginner",
              "v": "VLAN additions, deletions, or VTP revision number changes propagate across trunk links and can cause network-wide outages if misconfigured. Monitoring VLAN configuration changes provides an audit trail for change compliance and early detection of unauthorized VLAN modifications.",
              "t": "`TA-cisco_ios`, `Splunk_TA_juniper`",
              "d": "`sourcetype=cisco:ios`, `sourcetype=junos:syslog`",
              "q": "index=network sourcetype=\"cisco:ios\"\n  \"%VLAN_MGR-6-VLAN_CREATE\" OR \"%VLAN_MGR-6-VLAN_DELETE\" OR \"%VTP-6-VTP_REV_CHANGE\"\n| stats count by host, _raw\n| sort - count",
              "m": "Forward switch syslog to Splunk (severity 6+). Monitor VLAN create/delete events and VTP revision number changes. Alert on any VLAN modification outside approved change windows. For VTP-mode transparent environments, monitor configuration file changes instead.",
              "z": "Table (VLAN changes by device), Timeline (change history), Single value (changes in last 24h).",
              "kfp": "",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `Splunk_TA_juniper`.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward switch syslog to Splunk (severity 6+). Monitor VLAN create/delete events and VTP revision number changes. Alert on any VLAN modification outside approved change windows. For VTP-mode transparent environments, monitor configuration file changes instead.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\"\n  \"%VLAN_MGR-6-VLAN_CREATE\" OR \"%VLAN_MGR-6-VLAN_DELETE\" OR \"%VTP-6-VTP_REV_CHANGE\"\n| stats count by host, _raw\n| sort - count\n```\n\nUnderstanding this SPL\n\n**VLAN Configuration Change and VTP Audit** — VLAN additions, deletions, or VTP revision number changes propagate across trunk links and can cause network-wide outages if misconfigured. Monitoring VLAN configuration changes provides an audit trail for change compliance and early detection of unauthorized VLAN modifications.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`, `sourcetype=junos:syslog`. **App/TA** (typical add-on context): `TA-cisco_ios`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, _raw** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VLAN changes by device), Timeline (change history), Single value (changes in last 24h).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 9200/9300/9500, Nexus 9000; Juniper EX2300, EX4300, EX4600",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Configuration",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.1.75",
              "n": "Network Topology Discovery and Source-of-Truth Reconciliation",
              "c": "medium",
              "f": "advanced",
              "v": "Network topology discovery tools (NetBox, IP Fabric, Nautobot) provide a source of truth for what should be on the network. Reconciling live Splunk data against the source of truth detects rogue devices, missing monitoring coverage, and inventory drift — ensuring that every device in the network is accounted for and monitored.",
              "t": "NetBox REST API (custom scripted input), IP Fabric REST API, `TA-cisco_ios`",
              "d": "NetBox API, `sourcetype=cisco:ios`, SNMP sysObjectID",
              "q": "| inputlookup netbox_devices.csv\n| join type=left hostname [search index=network sourcetype=\"cisco:ios\" earliest=-24h | stats latest(_time) as last_seen by host | rename host as hostname]\n| eval status=if(isnull(last_seen),\"NOT_REPORTING\",\"OK\")\n| where status=\"NOT_REPORTING\"\n| table hostname, site, role, status",
              "m": "Export device inventory from NetBox/IP Fabric via REST API to a Splunk lookup (CSV or KV Store). Schedule a daily reconciliation search comparing the source-of-truth inventory against devices actively sending syslog/SNMP to Splunk. Alert on devices present in the source of truth but not reporting to Splunk.",
              "z": "Table (missing devices), Single value (coverage percentage), Bar chart (reporting status by site).",
              "kfp": "",
              "refs": "[NetBox Documentation](https://docs.netbox.dev/), [IP Fabric Documentation](https://ipfabric.io/docs/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NetBox REST API (custom scripted input), IP Fabric REST API, `TA-cisco_ios`.\n• Ensure the following data sources are available: NetBox API, `sourcetype=cisco:ios`, SNMP sysObjectID.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport device inventory from NetBox/IP Fabric via REST API to a Splunk lookup (CSV or KV Store). Schedule a daily reconciliation search comparing the source-of-truth inventory against devices actively sending syslog/SNMP to Splunk. Alert on devices present in the source of truth but not reporting to Splunk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup netbox_devices.csv\n| join type=left hostname [search index=network sourcetype=\"cisco:ios\" earliest=-24h | stats latest(_time) as last_seen by host | rename host as hostname]\n| eval status=if(isnull(last_seen),\"NOT_REPORTING\",\"OK\")\n| where status=\"NOT_REPORTING\"\n| table hostname, site, role, status\n```\n\nUnderstanding this SPL\n\n**Network Topology Discovery and Source-of-Truth Reconciliation** — Network topology discovery tools (NetBox, IP Fabric, Nautobot) provide a source of truth for what should be on the network. Reconciling live Splunk data against the source of truth detects rogue devices, missing monitoring coverage, and inventory drift — ensuring that every device in the network is accounted for and monitored.\n\nDocumented **Data sources**: NetBox API, `sourcetype=cisco:ios`, SNMP sysObjectID. **App/TA** (typical add-on context): NetBox REST API (custom scripted input), IP Fabric REST API, `TA-cisco_ios`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where status=\"NOT_REPORTING\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Network Topology Discovery and Source-of-Truth Reconciliation**): table hostname, site, role, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing devices), Single value (coverage percentage), Bar chart (reporting status by site).",
              "script": "",
              "premium": "",
              "hw": "All network devices (source-of-truth agnostic)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Inventory",
                "Configuration"
              ],
              "a": [
                "Compute_Inventory"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.3,
          "qd": {
            "gold": 0,
            "silver": 4,
            "bronze": 71,
            "none": 0
          }
        },
        {
          "i": "5.2",
          "n": "Firewalls",
          "u": [
            {
              "i": "5.2.1",
              "n": "Top Denied Traffic Sources",
              "c": "medium",
              "f": "intermediate",
              "v": "Identifies top blocked traffic sources — useful for rule tuning, detecting scanning, and misconfigured apps.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX)",
              "d": "`sourcetype=pan:traffic`, `sourcetype=fgt_traffic`, `sourcetype=cisco:firepower:syslog`",
              "q": "index=firewall action=\"denied\" OR action=\"drop\"\n| stats count as denials, dc(dest) as unique_dests by src\n| sort -denials | head 20 | lookup geoip ip as src OUTPUT Country",
              "m": "Forward firewall traffic logs via syslog. Install vendor TA for CIM-compliant fields. Create top-N dashboard.",
              "z": "Table (source, denials, dests), Map (GeoIP), Bar chart.",
              "kfp": "Traffic spikes during backup windows, software distribution, or scheduled data syncs can add denied flows without an attack.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX).\n• Ensure the following data sources are available: `sourcetype=pan:traffic`, `sourcetype=fgt_traffic`, `sourcetype=cisco:firepower:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward firewall traffic logs via syslog. Install vendor TA for CIM-compliant fields. Create top-N dashboard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall action=\"denied\" OR action=\"drop\"\n| stats count as denials, dc(dest) as unique_dests by src\n| sort -denials | head 20 | lookup geoip ip as src OUTPUT Country\n```\n\nUnderstanding this SPL\n\n**Top Denied Traffic Sources** — Identifies top blocked traffic sources — useful for rule tuning, detecting scanning, and misconfigured apps.\n\nDocumented **Data sources**: `sourcetype=pan:traffic`, `sourcetype=fgt_traffic`, `sourcetype=cisco:firepower:syslog`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Top Denied Traffic Sources** — Identifies top blocked traffic sources — useful for rule tuning, detecting scanning, and misconfigured apps.\n\nDocumented **Data sources**: `sourcetype=pan:traffic`, `sourcetype=fgt_traffic`, `sourcetype=cisco:firepower:syslog`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (source, denials, dests), Map (GeoIP), Bar chart.",
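              "_ex": "A minimal self-baselining sketch for separating sudden scan bursts from chronically misrouted traffic (assumes `index=firewall` and CIM-style `action` values; the 2x multiplier is a starting threshold to tune):\n\n```spl\nindex=firewall (action=\"denied\" OR action=\"drop\")\n| bin _time span=1h\n| stats count as denials by src, _time\n| eventstats avg(denials) as avg_d by src\n| where denials > 2*avg_d\n| sort -denials\n```\n\nRows returned are hours in which a source generated more than twice its own average denial volume, which usually flags scanning or a newly broken app rather than steady rule noise.",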
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch which sources are blocked most often on the firewall so we can fix misrouted traffic, bad rules, and scanning before a small problem spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.2",
              "n": "Policy Change Audit",
              "c": "critical",
              "f": "beginner",
              "v": "Firewall rule changes can expose the network. Compliance must-have (PCI, SOX, HIPAA).",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX)",
              "d": "`sourcetype=pan:config`, firewall system/config logs",
              "q": "index=firewall sourcetype=\"pan:config\" cmd=\"set\" OR cmd=\"edit\" OR cmd=\"delete\"\n| table _time host admin cmd path | sort -_time",
              "m": "Forward configuration change logs. Alert on any rule modification. Require change ticket correlation. Keep 1-year retention.",
              "z": "Table (who, what, when), Timeline, Single value (changes last 24h).",
              "kfp": "Scheduled policy pushes during change windows, automated deployments, or template updates can spike configuration events.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX).\n• Ensure the following data sources are available: `sourcetype=pan:config`, firewall system/config logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward configuration change logs. Alert on any rule modification. Require change ticket correlation. Keep 1-year retention.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"pan:config\" cmd=\"set\" OR cmd=\"edit\" OR cmd=\"delete\"\n| table _time host admin cmd path | sort -_time\n```\n\nUnderstanding this SPL\n\n**Policy Change Audit** — Firewall rule changes can expose the network. Compliance must-have (PCI, SOX, HIPAA).\n\nDocumented **Data sources**: `sourcetype=pan:config`, firewall system/config logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: pan:config. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"pan:config\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Policy Change Audit**): table _time host admin cmd path\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Policy Change Audit** — Firewall rule changes can expose the network. Compliance must-have (PCI, SOX, HIPAA).\n\nDocumented **Data sources**: `sourcetype=pan:config`, firewall system/config logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (who, what, when), Timeline, Single value (changes last 24h).",
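              "_ex": "The optional CIM variant above summarizes traffic volume; for configuration changes the CIM Change data model is the closer fit. A minimal sketch, assuming your TA maps config events into Change and that the model is accelerated (verify field coverage for your vendor before relying on it):\n\n```spl\n| tstats `summariesonly` count from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.action All_Changes.object _time span=1h\n| sort -_time\n```\n\nIf this returns nothing, fall back to the raw `pan:config` search above and confirm the add-on's CIM mappings and acceleration settings.",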
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on when firewall rules and settings change so we can show who did what, and when, for audits and clean rollbacks.",
              "wv": "crawl",
              "mtype": [
                "Configuration",
                "Compliance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "Cyber Essentials",
                  "v": "Montpellier (2025)",
                  "cl": "CE.BF.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that Cyber Essentials CE.BF.1 (Boundary firewalls) is enforced — Splunk UC-5.2.2: Policy Change Audit.",
                  "ea": "Saved search 'UC-5.2.2' running on sourcetype=pan:config, firewall system/config logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ncsc.gov.uk/cyberessentials/overview"
                },
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "AC-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that FedRAMP AC-2 (Account management) is enforced — Splunk UC-5.2.2: Policy Change Audit.",
                  "ea": "Saved search 'UC-5.2.2' running on sourcetype=pan:config, firewall system/config logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 1.1 (SWIFT environment protection) is enforced — Splunk UC-5.2.2: Policy Change Audit.",
                  "ea": "Saved search 'UC-5.2.2' running on sourcetype=pan:config, firewall system/config logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.swift.com/myswift/customer-security-programme-csp/security-controls"
                }
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "observability",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "5.2.3",
              "n": "Threat Detection Events",
              "c": "critical",
              "f": "intermediate",
              "v": "IPS/IDS events indicate active attacks. Correlation with traffic context enables rapid response.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX)",
              "d": "`sourcetype=pan:threat`, `sourcetype=cisco:firepower:alert`",
              "q": "index=firewall sourcetype=\"pan:threat\" severity=\"critical\" OR severity=\"high\"\n| stats count by src, dest, threat_name, severity, action | sort -count",
              "m": "Forward threat logs. Alert immediately on critical severity. Correlate source IPs with auth logs.",
              "z": "Table (source, dest, threat, action), Bar chart by threat type, Map.",
              "kfp": "Authorized vulnerability scanners (Tenable, Qualys, Rapid7) and security tests trigger many threat signatures by design.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX).\n• Ensure the following data sources are available: `sourcetype=pan:threat`, `sourcetype=cisco:firepower:alert`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward threat logs. Alert immediately on critical severity. Correlate source IPs with auth logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"pan:threat\" severity=\"critical\" OR severity=\"high\"\n| stats count by src, dest, threat_name, severity, action | sort -count\n```\n\nUnderstanding this SPL\n\n**Threat Detection Events** — IPS/IDS events indicate active attacks. Correlation with traffic context enables rapid response.\n\nDocumented **Data sources**: `sourcetype=pan:threat`, `sourcetype=cisco:firepower:alert`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: pan:threat. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"pan:threat\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, dest, threat_name, severity, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Threat Detection Events** — IPS/IDS events indicate active attacks. Correlation with traffic context enables rapid response.\n\nDocumented **Data sources**: `sourcetype=pan:threat`, `sourcetype=cisco:firepower:alert`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (source, dest, threat, action), Bar chart by threat type, Map.",
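              "_ex": "A correlation sketch for the 'correlate source IPs with auth logs' step; `index=auth` and the CIM-style `action=failure` field are assumptions, so substitute your authentication feed:\n\n```spl\nindex=firewall sourcetype=\"pan:threat\" (severity=\"critical\" OR severity=\"high\")\n| stats count as threat_events, values(threat_name) as threats by src\n| join type=left src\n    [ search index=auth action=failure | stats count as failed_logins by src ]\n| sort -threat_events\n```\n\nSources that pair high-severity threat signatures with failed logins deserve the fastest triage.",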
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch serious threat alerts on the firewall early so the team can stop malicious traffic before it reaches important systems.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.4",
              "n": "VPN Tunnel Status",
              "c": "high",
              "f": "intermediate",
              "v": "VPN failures isolate remote sites or users. Proactive monitoring prevents \"the VPN is down\" calls.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX)",
              "d": "Firewall VPN/system logs",
              "q": "index=firewall (\"tunnel\" OR \"IPSec\" OR \"IKE\") (\"down\" OR \"failed\" OR \"established\")\n| rex \"(?<tunnel_peer>\\d+\\.\\d+\\.\\d+\\.\\d+)\"\n| eval status=if(match(_raw,\"established|up\"),\"Up\",\"Down\")\n| stats latest(status) as state by host, tunnel_peer | where state=\"Down\"",
              "m": "Forward VPN logs. Alert on tunnel down events. Track flapping. Dashboard showing all tunnels.",
              "z": "Status grid (green/red per tunnel), Table, Timeline.",
              "kfp": "Brief tunnel blips during ISP issues, rekeys, or remote endpoint sleep and wake are common and not always incidents.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX).\n• Ensure the following data sources are available: Firewall VPN/system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward VPN logs. Alert on tunnel down events. Track flapping. Dashboard showing all tunnels.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall (\"tunnel\" OR \"IPSec\" OR \"IKE\") (\"down\" OR \"failed\" OR \"established\")\n| rex \"(?<tunnel_peer>\\d+\\.\\d+\\.\\d+\\.\\d+)\"\n| eval status=if(match(_raw,\"established|up\"),\"Up\",\"Down\")\n| stats latest(status) as state by host, tunnel_peer | where state=\"Down\"\n```\n\nUnderstanding this SPL\n\n**VPN Tunnel Status** — VPN failures isolate remote sites or users. Proactive monitoring prevents \"the VPN is down\" calls.\n\nDocumented **Data sources**: Firewall VPN/system logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, tunnel_peer** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where state=\"Down\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VPN Tunnel Status** — VPN failures isolate remote sites or users. Proactive monitoring prevents \"the VPN is down\" calls.\n\nDocumented **Data sources**: Firewall VPN/system logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (green/red per tunnel), Table, Timeline.",
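              "_ex": "A flap-tracking sketch for the 'track flapping' step (the threshold of 4 transitions per search window is a starting point to tune):\n\n```spl\nindex=firewall (\"tunnel\" OR \"IPSec\" OR \"IKE\") (\"down\" OR \"failed\" OR \"established\")\n| rex \"(?<tunnel_peer>\\\\d+\\\\.\\\\d+\\\\.\\\\d+\\\\.\\\\d+)\"\n| eval status=if(match(_raw,\"established|up\"),\"Up\",\"Down\")\n| sort 0 _time\n| streamstats current=f window=1 last(status) as prev_status by host, tunnel_peer\n| where status!=prev_status\n| stats count as transitions by host, tunnel_peer\n| where transitions > 4\n```\n\nCounting Up/Down transitions per peer catches unstable tunnels that a simple latest-state check would report as healthy.",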
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch whether secure tunnels to partners and sites stay up so people working remotely are not left guessing why they cannot connect.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.5",
              "n": "High-Risk Port Exposure",
              "c": "high",
              "f": "intermediate",
              "v": "Allowed traffic to RDP/SMB/Telnet from untrusted zones indicates policy gaps.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX)",
              "d": "Firewall traffic logs",
              "q": "index=firewall action=\"allowed\" (dest_port=3389 OR dest_port=445 OR dest_port=23)\n| where NOT cidrmatch(\"10.0.0.0/8\", src)\n| stats count by src, dest, dest_port | sort -count",
              "m": "Monitor allow rules for external traffic to high-risk ports. Alert on any matches. Review and tighten rules.",
              "z": "Table (source, dest, port), Bar chart by port, Map.",
              "kfp": "Jump boxes, admin jump hosts, and legacy apps may legitimately use high-risk ports; match to asset inventory before reacting.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX).\n• Ensure the following data sources are available: Firewall traffic logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor allow rules for external traffic to high-risk ports. Alert on any matches. Review and tighten rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall action=\"allowed\" (dest_port=3389 OR dest_port=445 OR dest_port=23)\n| where NOT cidrmatch(\"10.0.0.0/8\", src)\n| stats count by src, dest, dest_port | sort -count\n```\n\nUnderstanding this SPL\n\n**High-Risk Port Exposure** — Allowed traffic to RDP/SMB/Telnet from untrusted zones indicates policy gaps.\n\nDocumented **Data sources**: Firewall traffic logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where NOT cidrmatch(\"10.0.0.0/8\", src)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**High-Risk Port Exposure** — Allowed traffic to RDP/SMB/Telnet from untrusted zones indicates policy gaps.\n\nDocumented **Data sources**: Firewall traffic logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (source, dest, port), Bar chart by port, Map.",
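              "_ex": "A broader exclusion sketch: the main search treats only 10.0.0.0/8 as internal, but RFC 1918 also reserves 172.16.0.0/12 and 192.168.0.0/16; the `asset_inventory` lookup and its `category` values are hypothetical placeholders for your CMDB:\n\n```spl\nindex=firewall action=\"allowed\" (dest_port=3389 OR dest_port=445 OR dest_port=23)\n| where NOT (cidrmatch(\"10.0.0.0/8\", src) OR cidrmatch(\"172.16.0.0/12\", src) OR cidrmatch(\"192.168.0.0/16\", src))\n| lookup asset_inventory ip as dest OUTPUT category\n| where isnull(category) OR category!=\"jump_host\"\n| stats count by src, dest, dest_port\n| sort -count\n```\n\nSuppressing sanctioned jump hosts up front keeps the alert focused on genuinely unexpected exposure.",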
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for allowed traffic to risky services so we can find exposed remote desktop, sharing, and old protocols before attackers do.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.6",
              "n": "Geo-IP Anomaly Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Traffic to/from sanctioned or unexpected countries flags exfiltration, C2, or compromised hosts.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), GeoIP lookup",
              "d": "Firewall traffic logs",
              "q": "index=firewall action=\"allowed\" direction=\"outbound\"\n| lookup geoip ip as dest OUTPUT Country\n| search Country IN (\"Russia\",\"China\",\"North Korea\",\"Iran\")\n| stats count, sum(bytes_out) as data_sent by src, Country | sort -data_sent",
              "m": "Install GeoIP lookup (MaxMind). Enrich traffic logs. Alert on sanctioned country traffic and volume anomalies.",
              "z": "Choropleth map, Table, Bar chart by country.",
              "kfp": "Cloud egress, anycast, or carrier address pools can look \"wrong\" for geography until you enrich with your own allowlists.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), GeoIP lookup.\n• Ensure the following data sources are available: Firewall traffic logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall GeoIP lookup (MaxMind). Enrich traffic logs. Alert on sanctioned country traffic and volume anomalies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall action=\"allowed\" direction=\"outbound\"\n| lookup geoip ip as dest OUTPUT Country\n| search Country IN (\"Russia\",\"China\",\"North Korea\",\"Iran\")\n| stats count, sum(bytes_out) as data_sent by src, Country | sort -data_sent\n```\n\nUnderstanding this SPL\n\n**Geo-IP Anomaly Detection** — Traffic to/from sanctioned or unexpected countries flags exfiltration, C2, or compromised hosts.\n\nDocumented **Data sources**: Firewall traffic logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), GeoIP lookup. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by src, Country** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Geo-IP Anomaly Detection** — Traffic to/from sanctioned or unexpected countries flags exfiltration, C2, or compromised hosts.\n\nDocumented **Data sources**: Firewall traffic logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), GeoIP lookup. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Choropleth map, Table, Bar chart by country.",
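              "_ex": "If a MaxMind lookup is not yet installed, Splunk's built-in `iplocation` command provides a `Country` field from a bundled database (typically less current than a maintained MaxMind feed). A sketch of the same search without the custom lookup:\n\n```spl\nindex=firewall action=\"allowed\" direction=\"outbound\"\n| iplocation dest\n| search Country IN (\"Russia\",\"China\",\"North Korea\",\"Iran\")\n| stats count, sum(bytes_out) as data_sent by src, Country\n| sort -data_sent\n```\n\nSwap back to the MaxMind `geoip` lookup once it is deployed for fresher attribution.",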
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you notice odd geography on outbound traffic so mistakes, theft, and misrouted data are easier to spot.",
              "mtype": [
                "Security",
                "Anomaly"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.7",
              "n": "Connection Rate Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "Sudden connection spikes indicate DDoS, scanning, or worm propagation.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX)",
              "d": "Firewall traffic logs",
              "q": "index=firewall\n| bin _time span=5m\n| stats count as connections by src, _time\n| eventstats avg(connections) as avg_c, stdev(connections) as std_c by src\n| where connections > (avg_c + 3*std_c)\n| sort -connections",
              "m": "Baseline connection rates over 7 days. Alert when rate exceeds 3 standard deviations.",
              "z": "Line chart with threshold overlay, Table, Timechart.",
              "kfp": "Backups, patches, and file shares can open many connections in a short window and look like a burst.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX).\n• Ensure the following data sources are available: Firewall traffic logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline connection rates over 7 days. Alert when rate exceeds 3 standard deviations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall\n| bin _time span=5m\n| stats count as connections by src, _time\n| eventstats avg(connections) as avg_c, stdev(connections) as std_c by src\n| where connections > (avg_c + 3*std_c)\n| sort -connections\n```\n\nUnderstanding this SPL\n\n**Connection Rate Anomalies** — Sudden connection spikes indicate DDoS, scanning, or worm propagation.\n\nDocumented **Data sources**: Firewall traffic logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by src, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where connections > (avg_c + 3*std_c)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Connection Rate Anomalies** — Sudden connection spikes indicate DDoS, scanning, or worm propagation.\n\nDocumented **Data sources**: Firewall traffic logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart with threshold overlay, Table, Timechart.",
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We flag sudden surges in how many connections one host opens so we can see scanning, broken apps, and overload before the network stumbles.",
              "mtype": [
                "Anomaly",
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.8",
              "n": "Certificate Inspection Failures",
              "c": "medium",
              "f": "beginner",
              "v": "SSL decryption failures mean traffic passes uninspected — could be legitimate cert pinning or SSL evasion.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX)",
              "d": "Firewall decryption logs",
              "q": "index=firewall sourcetype=\"pan:decryption\" action=\"ssl-error\"\n| stats count by dest, dest_port, reason | sort -count",
              "m": "Enable decryption logging. Track failure rates by destination. Tune exclusion lists.",
              "z": "Table, Pie chart (reasons), Trend line.",
              "kfp": "Legacy clients, certificate rotations, HSM or cipher changes, and broken sites can all raise certificate inspection errors.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX).\n• Ensure the following data sources are available: Firewall decryption logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable decryption logging. Track failure rates by destination. Tune exclusion lists.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"pan:decryption\" action=\"ssl-error\"\n| stats count by dest, dest_port, reason | sort -count\n```\n\nUnderstanding this SPL\n\n**Certificate Inspection Failures** — SSL decryption failures mean traffic passes uninspected — could be legitimate cert pinning or SSL evasion.\n\nDocumented **Data sources**: Firewall decryption logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: pan:decryption. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"pan:decryption\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest, dest_port, reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Certificate Inspection Failures** — SSL decryption failures mean traffic passes uninspected — could be legitimate cert pinning or SSL evasion.\n\nDocumented **Data sources**: Firewall decryption logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Pie chart (reasons), Trend line.",
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track certificate and decryption issues so we can sort broken sites and policy gaps from real interception attacks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.9",
              "n": "URL Filtering Blocks",
              "c": "medium",
              "f": "beginner",
              "v": "Shows what categories users are trying to access. Reveals policy effectiveness and shadow IT.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX)",
              "d": "`sourcetype=pan:url`",
              "q": "index=firewall sourcetype=\"pan:url\" action=\"block-url\"\n| stats count by url_category, src | sort -count",
              "m": "Forward URL filtering logs. Dashboard showing blocks by category and user.",
              "z": "Bar chart (by category), Table, Pie chart.",
              "kfp": "New sites, rewrites, and overly broad category blocks can create noisy URL blocks for benign traffic.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX).\n• Ensure the following data sources are available: `sourcetype=pan:url`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward URL filtering logs. Dashboard showing blocks by category and user.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"pan:url\" action=\"block-url\"\n| stats count by url_category, src | sort -count\n```\n\nUnderstanding this SPL\n\n**URL Filtering Blocks** — Shows what categories users are trying to access. Reveals policy effectiveness and shadow IT.\n\nDocumented **Data sources**: `sourcetype=pan:url`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: pan:url. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"pan:url\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by url_category, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src All_Traffic.dest All_Traffic.dest_port span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**URL Filtering Blocks** — Shows what categories users are trying to access. Reveals policy effectiveness and shadow IT.\n\nDocumented **Data sources**: `sourcetype=pan:url`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (by category), Table, Pie chart.",
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see which web categories and pages get stopped most so policy stays fair, current, and aligned with what the business really needs.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src All_Traffic.dest All_Traffic.dest_port span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -count",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.10",
              "n": "Admin Access Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Firewall admin access is highly privileged. Audit trail is a compliance must-have.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX)",
              "d": "Firewall system/auth logs",
              "q": "index=firewall sourcetype=\"pan:system\" (\"login\" OR \"logout\" OR \"auth\")\n| eval status=case(match(_raw,\"success\"),\"Success\", match(_raw,\"fail\"),\"Failed\", 1=1,\"Other\")\n| stats count by admin_user, src, status | sort -count",
              "m": "Forward system/auth logs. Alert on failed admin logins. Track all successful logins. Alert on unexpected source IPs.",
              "z": "Table (admin, source, status), Timeline, Bar chart.",
              "kfp": "Scheduled automation, help desk remotes, and break-glass access from new locations can look unusual without being malicious.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX).\n• Ensure the following data sources are available: Firewall system/auth logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward system/auth logs. Alert on failed admin logins. Track all successful logins. Alert on unexpected source IPs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"pan:system\" (\"login\" OR \"logout\" OR \"auth\")\n| eval status=case(match(_raw,\"success\"),\"Success\", match(_raw,\"fail\"),\"Failed\", 1=1,\"Other\")\n| stats count by admin_user, src, status | sort -count\n```\n\nUnderstanding this SPL\n\n**Admin Access Audit** — Firewall admin access is highly privileged. Audit trail is a compliance must-have.\n\nDocumented **Data sources**: Firewall system/auth logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: pan:system. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"pan:system\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by admin_user, src, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Admin Access Audit** — Firewall admin access is highly privileged. Audit trail is a compliance must-have.\n\nDocumented **Data sources**: Firewall system/auth logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (admin, source, status), Timeline, Bar chart.",
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow who signs into the firewall itself so we can catch stolen sessions, after-hours access, and missing change records.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.11",
              "n": "Firewall Resource Utilization",
              "c": "high",
              "f": "beginner",
              "v": "Session table exhaustion blocks new connections. CPU saturation degrades throughput.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), SNMP",
              "d": "Firewall system resource logs",
              "q": "index=firewall (\"session\" AND \"utilization\") OR (\"cpu\" AND \"dataplane\")\n| timechart span=5m avg(session_utilization) as session_pct by host | where session_pct > 80",
              "m": "Monitor via SNMP (vendor-specific MIB) or system logs. Alert on session table >80%, dataplane CPU >80%.",
              "z": "Gauge (session/CPU/memory), Line chart, Table.",
              "kfp": "Scheduled high traffic, content updates, and backup windows raise CPU and session use without a failure.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), SNMP.\n• Ensure the following data sources are available: Firewall system resource logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor via SNMP (vendor-specific MIB) or system logs. Alert on session table >80%, dataplane CPU >80%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall (\"session\" AND \"utilization\") OR (\"cpu\" AND \"dataplane\")\n| timechart span=5m avg(session_utilization) as session_pct by host | where session_pct > 80\n```\n\nUnderstanding this SPL\n\n**Firewall Resource Utilization** — Session table exhaustion blocks new connections. CPU saturation degrades throughput.\n\nDocumented **Data sources**: Firewall system resource logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where session_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Firewall Resource Utilization** — Session table exhaustion blocks new connections. CPU saturation degrades throughput.\n\nDocumented **Data sources**: Firewall system resource logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (session/CPU/memory), Line chart, Table.",
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow CPU, memory, and session load on the firewall so the team can add capacity or fix a hot feature before users feel slowness or drops.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto",
                "snmp"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.12",
              "n": "NAT Pool Exhaustion",
              "c": "high",
              "f": "beginner",
              "v": "NAT exhaustion prevents outbound connections. Users lose internet access.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), syslog",
              "d": "Firewall NAT/system logs",
              "q": "index=firewall (\"NAT\" OR \"nat\") (\"exhausted\" OR \"allocation failed\" OR \"out of\")\n| stats count by host, nat_pool | sort -count",
              "m": "Forward firewall logs. Monitor NAT table usage. Alert on exhaustion messages or >80% utilization.",
              "z": "Gauge per pool, Table, Events timeline.",
              "kfp": "Traffic bursts, many new users, and VoIP or gaming patterns can use more NAT resources than a steady baseline.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), syslog.\n• Ensure the following data sources are available: Firewall NAT/system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward firewall logs. Monitor NAT table usage. Alert on exhaustion messages or >80% utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall (\"NAT\" OR \"nat\") (\"exhausted\" OR \"allocation failed\" OR \"out of\")\n| stats count by host, nat_pool | sort -count\n```\n\nUnderstanding this SPL\n\n**NAT Pool Exhaustion** — NAT exhaustion prevents outbound connections. Users lose internet access.\n\nDocumented **Data sources**: Firewall NAT/system logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, nat_pool** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NAT Pool Exhaustion** — NAT exhaustion prevents outbound connections. Users lose internet access.\n\nDocumented **Data sources**: Firewall NAT/system logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge per pool, Table, Events timeline.",
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for public address pools running out so new sessions still get out to the internet during busy days and big projects.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto",
                "syslog"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.13",
              "n": "Session Table Exhaustion",
              "c": "critical",
              "f": "advanced",
              "v": "When session tables fill, new connections are dropped. This causes service outages that are difficult to diagnose without firewall telemetry.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), SNMP",
              "d": "`sourcetype=pan:system`, `sourcetype=fgt_event`, SNMP",
              "q": "index=network sourcetype=\"pan:system\" \"session table\"\n| append [search index=network sourcetype=\"pan:traffic\" | stats dc(session_id) as active_sessions by dvc | eval max_sessions=coalesce(max_sessions,500000)]\n| stats latest(active_sessions) as sessions, latest(max_sessions) as max by dvc\n| eval utilization=round(sessions/max*100,1) | where utilization > 80",
              "m": "Monitor session counts via SNMP or firewall system logs. Know your platform's session limit. Alert at 80% utilization. Investigate top session consumers by source/destination.",
              "z": "Gauge (per firewall), Line chart (session count trending), Table (top consumers).",
              "kfp": "Backups, file transfers, and internet-heavy days fill session tables; compare to capacity, not a zero baseline.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), SNMP.\n• Ensure the following data sources are available: `sourcetype=pan:system`, `sourcetype=fgt_event`, SNMP.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor session counts via SNMP or firewall system logs. Know your platform's session limit. Alert at 80% utilization. Investigate top session consumers by source/destination.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:system\" \"session table\"\n| append [search index=network sourcetype=\"pan:traffic\" | stats dc(session_id) as active_sessions by dvc | eval max_sessions=coalesce(max_sessions,500000)]\n| stats latest(active_sessions) as sessions, latest(max_sessions) as max by dvc\n| eval utilization=round(sessions/max*100,1) | where utilization > 80\n```\n\nUnderstanding this SPL\n\n**Session Table Exhaustion** — When session tables fill, new connections are dropped. This causes service outages that are difficult to diagnose without firewall telemetry.\n\nDocumented **Data sources**: `sourcetype=pan:system`, `sourcetype=fgt_event`, SNMP. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:system. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:system\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by dvc** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **utilization** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where utilization > 80` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Session Table Exhaustion** — When session tables fill, new connections are dropped. This causes service outages that are difficult to diagnose without firewall telemetry.\n\nDocumented **Data sources**: `sourcetype=pan:system`, `sourcetype=fgt_event`, SNMP. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), SNMP. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (per firewall), Line chart (session count trending), Table (top consumers).",
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow how full the session table is so we are not caught off guard by sudden growth, leaks, or attacks that eat connection slots.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto",
                "snmp"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.14",
              "n": "Firewall HA Failover Events",
              "c": "critical",
              "f": "intermediate",
              "v": "HA failovers cause brief traffic disruption and can indicate underlying hardware or link failures. Tracking failover frequency detects instability.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX)",
              "d": "`sourcetype=pan:system`, `sourcetype=fgt_event`",
              "q": "index=firewall (sourcetype=\"pan:system\" \"HA state change\") OR (sourcetype=\"fgt_event\" subtype=\"ha\")\n| rex \"state change.*from (?<old_state>\\w+) to (?<new_state>\\w+)\"\n| table _time, dvc, old_state, new_state | sort -_time",
              "m": "Forward firewall system logs to Splunk. Alert on any active/passive transition. Correlate with link down events. Track failover frequency — more than 1 per week indicates instability.",
              "z": "Timeline (failover events), Single value (failovers this month), Table (history).",
              "kfp": "Test failovers, maintenance failovers, and power events trigger HA messages even when the network is under control.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX).\n• Ensure the following data sources are available: `sourcetype=pan:system`, `sourcetype=fgt_event`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward firewall system logs to Splunk. Alert on any active/passive transition. Correlate with link down events. Track failover frequency — more than 1 per week indicates instability.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall (sourcetype=\"pan:system\" \"HA state change\") OR (sourcetype=\"fgt_event\" subtype=\"ha\")\n| rex \"state change.*from (?<old_state>\\w+) to (?<new_state>\\w+)\"\n| table _time, dvc, old_state, new_state | sort -_time\n```\n\nUnderstanding this SPL\n\n**Firewall HA Failover Events** — HA failovers cause brief traffic disruption and can indicate underlying hardware or link failures. Tracking failover frequency detects instability.\n\nDocumented **Data sources**: `sourcetype=pan:system`, `sourcetype=fgt_event`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: pan:system, fgt_event. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"pan:system\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **Firewall HA Failover Events**): table _time, dvc, old_state, new_state\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Firewall HA Failover Events** — HA failovers cause brief traffic disruption and can indicate underlying hardware or link failures. Tracking failover frequency detects instability.\n\nDocumented **Data sources**: `sourcetype=pan:system`, `sourcetype=fgt_event`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (failover events), Single value (failovers this month), Table (history).",
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch failover and high-availability events so the team knows which box was live, and whether a handover was clean or a surprise.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.15",
              "n": "Botnet/C2 Traffic Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Detecting outbound connections to known C2 infrastructure identifies compromised internal hosts before data exfiltration occurs.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), Threat intelligence feeds",
              "d": "`sourcetype=pan:threat`, `sourcetype=pan:traffic`",
              "q": "index=network sourcetype=\"pan:threat\" category=\"command-and-control\" OR category=\"spyware\"\n| stats count values(dest) as c2_targets dc(dest) as unique_c2 by src\n| sort -count\n| lookup dnslookup clientip as src OUTPUT clienthost as src_hostname",
              "m": "Enable threat prevention and URL filtering on the firewall. Ingest threat logs. Cross-reference with external threat intelligence (STIX/TAXII feeds). Alert immediately on any C2 match.",
              "z": "Table (compromised hosts, C2 targets), Sankey diagram (source → C2), Single value (count).",
              "kfp": "Misclassified benign apps, software updates, and cloud service overlap can look like command-and-control until triaged.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), Threat intelligence feeds.\n• Ensure the following data sources are available: `sourcetype=pan:threat`, `sourcetype=pan:traffic`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable threat prevention and URL filtering on the firewall. Ingest threat logs. Cross-reference with external threat intelligence (STIX/TAXII feeds). Alert immediately on any C2 match.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:threat\" category=\"command-and-control\" OR category=\"spyware\"\n| stats count values(dest) as c2_targets dc(dest) as unique_c2 by src\n| sort -count\n| lookup dnslookup clientip as src OUTPUT clienthost as src_hostname\n```\n\nUnderstanding this SPL\n\n**Botnet/C2 Traffic Detection** — Detecting outbound connections to known C2 infrastructure identifies compromised internal hosts before data exfiltration occurs.\n\nDocumented **Data sources**: `sourcetype=pan:threat`, `sourcetype=pan:traffic`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), Threat intelligence feeds. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:threat. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:threat\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Botnet/C2 Traffic Detection** — Detecting outbound connections to known C2 infrastructure identifies compromised internal hosts before data exfiltration occurs.\n\nDocumented **Data sources**: `sourcetype=pan:threat`, `sourcetype=pan:traffic`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), Threat intelligence feeds. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (compromised hosts, C2 targets), Sankey diagram (source → C2), Single value (count).",
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for command-and-control style matches so we can stop callbacks and bot traffic that slip past simple allow lists.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.16",
              "n": "SSL/TLS Decryption Failures",
              "c": "high",
              "f": "beginner",
              "v": "Decryption failures create blind spots in security inspection. Tracking failures by destination reveals certificate pinning, protocol mismatches, or policy gaps.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX)",
              "d": "`sourcetype=pan:decryption`",
              "q": "index=network sourcetype=\"pan:decryption\" action=\"decrypt-error\" OR action=\"no-decrypt\"\n| stats count by reason, dest, dest_port\n| sort 50 -count",
              "m": "Enable decryption logging. Group failures by reason (unsupported cipher, certificate pinning, policy exclude). Review and update decryption policy based on findings.",
              "z": "Bar chart (failure reasons), Table (top undecrypted destinations), Pie chart (by reason).",
              "kfp": "Legacy clients, pinned certificates, and pinned apps that resist inspection raise decryption errors without an attack.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX).\n• Ensure the following data sources are available: `sourcetype=pan:decryption`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable decryption logging. Group failures by reason (unsupported cipher, certificate pinning, policy exclude). Review and update decryption policy based on findings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:decryption\" action=\"decrypt-error\" OR action=\"no-decrypt\"\n| stats count by reason, dest, dest_port\n| sort 50 -count\n```\n\nUnderstanding this SPL\n\n**SSL/TLS Decryption Failures** — Decryption failures create blind spots in security inspection. Tracking failures by destination reveals certificate pinning, protocol mismatches, or policy gaps.\n\nDocumented **Data sources**: `sourcetype=pan:decryption`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:decryption. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:decryption\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by reason, dest, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SSL/TLS Decryption Failures** — Decryption failures create blind spots in security inspection. Tracking failures by destination reveals certificate pinning, protocol mismatches, or policy gaps.\n\nDocumented **Data sources**: `sourcetype=pan:decryption`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failure reasons), Table (top undecrypted destinations), Pie chart (by reason).",
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count decryption and handshake failures so you can tell policy missteps and old clients from someone tampering in the middle.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.17",
              "n": "Firewall Rule Hit Count Analysis",
              "c": "medium",
              "f": "intermediate",
              "v": "Unused firewall rules increase attack surface and complexity. Identifying zero-hit rules enables rule base cleanup and reduces risk.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX)",
              "d": "`sourcetype=pan:traffic`, `sourcetype=fgt_traffic`",
              "q": "index=network sourcetype=\"pan:traffic\"\n| stats count as hit_count dc(src) as unique_sources dc(dest) as unique_dests by rule\n| sort hit_count\n| eval status=if(hit_count=0,\"UNUSED\",if(hit_count<10,\"RARELY_USED\",\"ACTIVE\"))",
              "m": "Collect traffic logs with rule names. Run weekly reports to identify unused rules. Review rules with zero hits over 90 days for removal. Document cleanup actions.",
              "z": "Table (rule, hit count, status), Bar chart (hit count distribution), Single value (unused rule count).",
              "kfp": "Backup jobs, software updates, and seasonality change which rules see the most hits; expect drift over time.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX).\n• Ensure the following data sources are available: `sourcetype=pan:traffic`, `sourcetype=fgt_traffic`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect traffic logs with rule names. Run weekly reports to identify unused rules. Review rules with zero hits over 90 days for removal. Document cleanup actions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:traffic\"\n| stats count as hit_count dc(src) as unique_sources dc(dest) as unique_dests by rule\n| sort hit_count\n| eval status=if(hit_count=0,\"UNUSED\",if(hit_count<10,\"RARELY_USED\",\"ACTIVE\"))\n```\n\nUnderstanding this SPL\n\n**Firewall Rule Hit Count Analysis** — Unused firewall rules increase attack surface and complexity. Identifying zero-hit rules enables rule base cleanup and reduces risk.\n\nDocumented **Data sources**: `sourcetype=pan:traffic`, `sourcetype=fgt_traffic`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:traffic\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rule** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Firewall Rule Hit Count Analysis** — Unused firewall rules increase attack surface and complexity. Identifying zero-hit rules enables rule base cleanup and reduces risk.\n\nDocumented **Data sources**: `sourcetype=pan:traffic`, `sourcetype=fgt_traffic`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rule, hit count, status), Bar chart (hit count distribution), Single value (unused rule count).",
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We show which rules see the most use so you can clean dead access, find shadow IT, and tune noisy policies with facts.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.18",
              "n": "Threat Prevention Signature Coverage",
              "c": "high",
              "f": "intermediate",
              "v": "Outdated threat signatures leave the firewall blind to new attacks. Monitoring signature versions ensures security posture is current.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX)",
              "d": "`sourcetype=pan:system`, `sourcetype=fgt_event`",
              "q": "index=network sourcetype=\"pan:system\" \"threat version\" OR \"content update\"\n| rex \"installed (?<content_type>threats|antivirus|wildfire) version (?<version>\\S+)\"\n| stats latest(version) as current_version, latest(_time) as last_update by dvc, content_type\n| eval days_since_update=round((now()-last_update)/86400,0)\n| where days_since_update > 7",
              "m": "Forward system logs. Alert when signature updates are >7 days old. Compare across firewalls to detect update failures. Schedule weekly compliance reports.",
              "z": "Table (firewall, content type, version, days since update), Single value (outdated count).",
              "kfp": "Regular vendor content and signature updates are expected; they should not be confused with on-box edits.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX).\n• Ensure the following data sources are available: `sourcetype=pan:system`, `sourcetype=fgt_event`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward system logs. Alert when signature updates are >7 days old. Compare across firewalls to detect update failures. Schedule weekly compliance reports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:system\" \"threat version\" OR \"content update\"\n| rex \"installed (?<content_type>threats|antivirus|wildfire) version (?<version>\\S+)\"\n| stats latest(version) as current_version, latest(_time) as last_update by dvc, content_type\n| eval days_since_update=round((now()-last_update)/86400,0)\n| where days_since_update > 7\n```\n\nUnderstanding this SPL\n\n**Threat Prevention Signature Coverage** — Outdated threat signatures leave the firewall blind to new attacks. Monitoring signature versions ensures security posture is current.\n\nDocumented **Data sources**: `sourcetype=pan:system`, `sourcetype=fgt_event`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:system. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:system\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by dvc, content_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since_update** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since_update > 7` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Threat Prevention Signature Coverage** — Outdated threat signatures leave the firewall blind to new attacks. Monitoring signature versions ensures security posture is current.\n\nDocumented **Data sources**: `sourcetype=pan:system`, `sourcetype=fgt_event`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (firewall, content type, version, days since update), Single value (outdated count).",
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110/3120/3130/3140, Firepower 1010/1120/1140/1150, Firepower 2110/2120/2130/2140, FMC; Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager; Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800 (FMC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track when threat and content packs update on the firewall so you know the box is on a current brain, not a stale one.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.19",
              "n": "VPN Tunnel Status and Path Monitoring (Meraki MX)",
              "c": "critical",
              "f": "intermediate",
              "v": "Ensures all site-to-site and client VPN tunnels remain active and operative.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=vpn sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=vpn\n| stats latest(status) as tunnel_status, latest(last_changed) as status_change_time by tunnel_id, remote_site\n| where tunnel_status=\"down\" OR tunnel_status=\"unstable\"",
              "m": "Monitor VPN tunnel state from syslog and API. Alert on status != \"up\".",
              "z": "VPN tunnel status matrix; site connectivity map; tunnel health sparklines.",
              "kfp": "ISPs, DPD, and path changes can flap tunnels briefly; compare duration and business impact before paging.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=vpn sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor VPN tunnel state from syslog and API. Alert on status != \"up\".\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=vpn\n| stats latest(status) as tunnel_status, latest(last_changed) as status_change_time by tunnel_id, remote_site\n| where tunnel_status=\"down\" OR tunnel_status=\"unstable\"\n```\n\nUnderstanding this SPL\n\n**VPN Tunnel Status and Path Monitoring (Meraki MX)** — Ensures all site-to-site and client VPN tunnels remain active and operative.\n\nDocumented **Data sources**: `sourcetype=meraki type=vpn sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by tunnel_id, remote_site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where tunnel_status=\"down\" OR tunnel_status=\"unstable\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: VPN tunnel status matrix; site connectivity map; tunnel health sparklines.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see whether site-to-site tunnels stay in a good state so branch offices and partners stay connected when paths get noisy.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.20",
              "n": "Content Filtering and URL Category Blocks (Meraki MX)",
              "c": "high",
              "f": "beginner",
              "v": "Tracks blocked URLs and categories to monitor policy compliance and identify misclassified content.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=urls action=\"blocked\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=urls action=\"blocked\"\n| stats count as block_count by url_category, src\n| sort - block_count\n| head 20",
              "m": "Ingest URL filtering events from MX syslog. Categorize by policy.",
              "z": "Table of top blocked categories; bar chart by category; user detail table.",
              "kfp": "Overly strict categories, new SaaS, and one-off page visits can make URL blocks look worse than a policy problem.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=urls action=\"blocked\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest URL filtering events from MX syslog. Categorize by policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=urls action=\"blocked\"\n| stats count as block_count by url_category, src\n| sort - block_count\n| head 20\n```\n\nUnderstanding this SPL\n\n**Content Filtering and URL Category Blocks (Meraki MX)** — Tracks blocked URLs and categories to monitor policy compliance and identify misclassified content.\n\nDocumented **Data sources**: `sourcetype=meraki type=urls action=\"blocked\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by url_category, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of top blocked categories; bar chart by category; user detail table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We show which web categories and pages get stopped at the network edge so policy stays in step with what people really need to do their jobs.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
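              "qs": "```Hedged sketch: assumes these URL-filtering events are also mapped to the CIM Web data model and that the model is accelerated; this use case currently lists no CIM model, so validate before relying on it.```\n| tstats `summariesonly` count as block_count\n  from datamodel=Web.Web where Web.action=\"blocked\"\n  by Web.category Web.src\n| rename Web.* as *\n| sort - block_count\n| head 20",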
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.21",
              "n": "IDS/IPS Alert Analysis and Threat Scoring (Meraki MX)",
              "c": "critical",
              "f": "intermediate",
              "v": "Identifies and prioritizes intrusion detection alerts for investigation and threat response.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=ids_alert`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=ids_alert\n| stats count as alert_count by signature, priority, src, dest\n| eval severity=case(priority=1, \"Critical\", priority=2, \"High\", priority=3, \"Medium\", 1=1, \"Low\")\n| where priority <= 2\n| sort - alert_count",
              "m": "Ingest IDS/IPS alert events from MX appliance. Enrich with threat intelligence.",
              "z": "Alert timeline; severity breakdown pie chart; alert detail table; threat map.",
              "kfp": "Vulnerability scans, security tools, and mis-segmented lab traffic can raise IDS rates without a live intrusion.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=ids_alert`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest IDS/IPS alert events from MX appliance. Enrich with threat intelligence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=ids_alert\n| stats count as alert_count by signature, priority, src, dest\n| eval severity=case(priority=1, \"Critical\", priority=2, \"High\", priority=3, \"Medium\", 1=1, \"Low\")\n| where priority <= 2\n| sort - alert_count\n```\n\nUnderstanding this SPL\n\n**IDS/IPS Alert Analysis and Threat Scoring (Meraki MX)** — Identifies and prioritizes intrusion detection alerts for investigation and threat response.\n\nDocumented **Data sources**: `sourcetype=meraki type=ids_alert`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by signature, priority, src, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where priority <= 2` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IDS/IPS Alert Analysis and Threat Scoring (Meraki MX)** — Identifies and prioritizes intrusion detection alerts for investigation and threat response.\n\nDocumented **Data sources**: `sourcetype=meraki type=ids_alert`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert timeline; severity breakdown pie chart; alert detail table; threat map.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add up intrusion system alerts on small office gear so the team can tell real break-in attempts from normal internet noise faster.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.22",
              "n": "Malware Detection and AMP File Reputation Events (Meraki MX)",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects and tracks file-based threats to respond quickly to potential malware infections.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*malware*\" OR signature=\"*AMP*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*malware*\" OR signature=\"*AMP*\")\n| stats count as malware_count by src, threat_name, file_name\n| where malware_count > 0\n| sort - malware_count",
              "m": "Enable AMP on MX appliance. Ingest malware detection events.",
              "z": "Threat timeline; infected hosts table; file reputation detail; incident dashboard.",
              "kfp": "Quarantine, cleanup tools, and rescanning the same file can repeat malware events without a new infection.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*malware*\" OR signature=\"*AMP*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable AMP on MX appliance. Ingest malware detection events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*malware*\" OR signature=\"*AMP*\")\n| stats count as malware_count by src, threat_name, file_name\n| where malware_count > 0\n| sort - malware_count\n```\n\nUnderstanding this SPL\n\n**Malware Detection and AMP File Reputation Events (Meraki MX)** — Detects and tracks file-based threats to respond quickly to potential malware infections.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*malware*\" OR signature=\"*AMP*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, threat_name, file_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where malware_count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Threat timeline; infected hosts table; file reputation detail; incident dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow malware and reputation flags from the same edge so the team can quarantine a bad file before it moves deeper inside.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
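              "qs": "```Hedged sketch: assumes MX AMP/malware events are mapped to the CIM Malware data model and that the model is accelerated; this use case currently lists no CIM model, so validate before relying on it.```\n| tstats `summariesonly` count as malware_count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.src Malware_Attacks.signature Malware_Attacks.file_name\n| rename Malware_Attacks.* as *\n| sort - malware_count",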
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.23",
              "n": "Firewall Rule Hit Analysis and Top Denied Flows (Meraki MX)",
              "c": "medium",
              "f": "intermediate",
              "v": "Identifies top denied flows to optimize firewall rules and detect policy violations.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=flow action=\"deny\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=flow action=\"deny\"\n| stats count as deny_count by firewall_rule, src, dest, dest_port\n| sort - deny_count\n| head 20",
              "m": "Analyze firewall deny events from flow logs. Correlate with rules.",
              "z": "Top denied flows table; denial timeline; source/dest distribution heatmap.",
              "kfp": "Port scans, misconfigured clients, and noisy default-deny rules can flood deny counts without a targeted attack.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=flow action=\"deny\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAnalyze firewall deny events from flow logs. Correlate with rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=flow action=\"deny\"\n| stats count as deny_count by firewall_rule, src, dest, dest_port\n| sort - deny_count\n| head 20\n```\n\nUnderstanding this SPL\n\n**Firewall Rule Hit Analysis and Top Denied Flows (Meraki MX)** — Identifies top denied flows to optimize firewall rules and detect policy violations.\n\nDocumented **Data sources**: `sourcetype=meraki type=flow action=\"deny\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by firewall_rule, src, dest, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Firewall Rule Hit Analysis and Top Denied Flows (Meraki MX)** — Identifies top denied flows to optimize firewall rules and detect policy violations.\n\nDocumented **Data sources**: `sourcetype=meraki type=flow action=\"deny\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Top denied flows table; denial timeline; source/dest distribution heatmap.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We list top denied flows on the small office device so you can see scanning, bad apps, and policy gaps without digging through raw logs by hand.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.24",
              "n": "Traffic Shaping Effectiveness and QoS Policy Analysis (Meraki MX)",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures the impact of traffic shaping policies on bandwidth distribution and priority.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=flow sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=flow priority_queue=*\n| stats sum(bytes) as total_bytes, avg(latency) as avg_latency by priority_queue\n| eval efficiency=round(total_bytes/sum(total_bytes)*100, 2)",
              "m": "Extract priority_queue field from flow logs. Measure bandwidth by priority.",
              "z": "Stacked bar chart of bandwidth by priority; latency by QoS class; efficiency gauge.",
              "kfp": "Overnight jobs, large downloads, and guest traffic can shift which queues look busy; compare to known workloads.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=flow sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExtract priority_queue field from flow logs. Measure bandwidth by priority.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=flow priority_queue=*\n| stats sum(bytes) as total_bytes, avg(latency) as avg_latency by priority_queue\n| eval efficiency=round(total_bytes/sum(total_bytes)*100, 2)\n```\n\nUnderstanding this SPL\n\n**Traffic Shaping Effectiveness and QoS Policy Analysis (Meraki MX)** — Measures the impact of traffic shaping policies on bandwidth distribution and priority.\n\nDocumented **Data sources**: `sourcetype=meraki type=flow sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by priority_queue** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **efficiency** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Traffic Shaping Effectiveness and QoS Policy Analysis (Meraki MX)** — Measures the impact of traffic shaping policies on bandwidth distribution and priority.\n\nDocumented **Data sources**: `sourcetype=meraki type=flow sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar chart of bandwidth by priority; latency by QoS class; efficiency gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at how traffic is shaped and marked so the team can see whether important apps still get a fair share when the link is full.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.25",
              "n": "Site-to-Site VPN Latency and Performance (Meraki MX)",
              "c": "medium",
              "f": "beginner",
              "v": "Monitors latency and jitter on VPN tunnels to ensure quality of critical business traffic.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=vpn sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=vpn latency=*\n| stats avg(latency) as avg_vpn_latency, max(jitter) as max_jitter by tunnel_id, remote_site\n| where avg_vpn_latency > 50",
              "m": "Extract VPN latency and jitter metrics. Monitor tunnel performance.",
              "z": "Gauge of VPN latency; latency trend line; jitter comparison chart.",
              "kfp": "ISPs, weather, and remote Wi-Fi often dominate latency; rule out the path before blaming the head-end device.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=vpn sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExtract VPN latency and jitter metrics. Monitor tunnel performance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=vpn latency=*\n| stats avg(latency) as avg_vpn_latency, max(jitter) as max_jitter by tunnel_id, remote_site\n| where avg_vpn_latency > 50\n```\n\nUnderstanding this SPL\n\n**Site-to-Site VPN Latency and Performance (Meraki MX)** — Monitors latency and jitter on VPN tunnels to ensure quality of critical business traffic.\n\nDocumented **Data sources**: `sourcetype=meraki type=vpn sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by tunnel_id, remote_site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_vpn_latency > 50` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge of VPN latency; latency trend line; jitter comparison chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow tunnel delay on those paths so a slow provider or far peer is visible before people open tickets about \"the VPN feels off.\"",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.26",
              "n": "Client VPN Connections and Remote Access Patterns (Meraki MX)",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks client VPN usage patterns for remote workers and identifies problematic connections.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=vpn client_vpn=true`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=vpn client_vpn=true\n| stats count as connection_count, avg(duration) as avg_session_length by user_id, src\n| where connection_count > 10",
              "m": "Filter VPN logs for client connections. Track by user and source IP.",
              "z": "Connected users timeline; session duration histogram; geography map of remote users.",
              "kfp": "Travel peaks, on-call surges, and class schedules can make remote access login counts swing widely.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=vpn client_vpn=true`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFilter VPN logs for client connections. Track by user and source IP.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=vpn client_vpn=true\n| stats count as connection_count, avg(duration) as avg_session_length by user_id, src\n| where connection_count > 10\n```\n\nUnderstanding this SPL\n\n**Client VPN Connections and Remote Access Patterns (Meraki MX)** — Tracks client VPN usage patterns for remote workers and identifies problematic connections.\n\nDocumented **Data sources**: `sourcetype=meraki type=vpn client_vpn=true`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user_id, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where connection_count > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Connected users timeline; session duration histogram; geography map of remote users.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count remote access use over time so you can plan capacity, spot odd login surges, and help people who are stuck at home or on the road.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
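              "qs": "```Hedged sketch: assumes client VPN logins are mapped to the CIM Authentication data model and that the model is accelerated; this use case currently lists no CIM model, so validate before relying on it.```\n| tstats `summariesonly` count as connection_count\n  from datamodel=Authentication.Authentication where Authentication.action=\"success\"\n  by Authentication.user Authentication.src\n| rename Authentication.* as *\n| where connection_count > 10",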
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.27",
              "n": "NAT Pool Usage and Exhaustion Alerts (Meraki MX)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors NAT pool utilization to prevent address exhaustion that could block outbound traffic.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" nat_pool_usage=*\n| stats max(nat_pool_usage) as peak_nat_usage, count by nat_pool_id\n| eval nat_capacity_pct=round(peak_nat_usage*100/254, 2)\n| where nat_capacity_pct > 80",
              "m": "Query appliance API for NAT pool metrics. Alert on >80% utilization.",
              "z": "Gauge of NAT pool usage; capacity timeline; pool exhaustion alert dashboard.",
              "kfp": "New sites, guest Wi-Fi, and more endpoints can use more public NAT than last month without an exhaustion emergency.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery appliance API for NAT pool metrics. Alert on >80% utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" nat_pool_usage=*\n| stats max(nat_pool_usage) as peak_nat_usage, count by nat_pool_id\n| eval nat_capacity_pct=round(peak_nat_usage*100/254, 2)\n| where nat_capacity_pct > 80\n```\n\nUnderstanding this SPL\n\n**NAT Pool Usage and Exhaustion Alerts (Meraki MX)** — Monitors NAT pool utilization to prevent address exhaustion that could block outbound traffic.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by nat_pool_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **nat_capacity_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where nat_capacity_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge of NAT pool usage; capacity timeline; pool exhaustion alert dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check how full shared public address pools are on the small office so new guests and new sites do not run out of outbound space.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.28",
              "n": "BGP Peering Status and Route Stability (Meraki MX)",
              "c": "high",
              "f": "beginner",
              "v": "Ensures BGP peers remain established and routing remains stable for multi-ISP designs.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*BGP*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*BGP*\" (signature=\"*neighbor*\" OR signature=\"*route*\")\n| stats count as bgp_event_count by bgp_neighbor, event_type\n| where bgp_event_count > 5",
              "m": "Monitor BGP event syslog. Alert on neighbor state changes.",
              "z": "BGP peer status table; route change timeline; peering stability gauge.",
              "kfp": "Reconvergence, ISPs, and lab peers can jolt route tables; confirm whether the next hop is still intended.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*BGP*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor BGP event syslog. Alert on neighbor state changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*BGP*\" (signature=\"*neighbor*\" OR signature=\"*route*\")\n| stats count as bgp_event_count by bgp_neighbor, event_type\n| where bgp_event_count > 5\n```\n\nUnderstanding this SPL\n\n**BGP Peering Status and Route Stability (Meraki MX)** — Ensures BGP peers remain established and routing remains stable for multi-ISP designs.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*BGP*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by bgp_neighbor, event_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where bgp_event_count > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: BGP peer status table; route change timeline; peering stability gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch border gateway and route health messages on the same gear so a bad neighbor or wobbly path is easier to spot early.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.29",
              "n": "Threat Intelligence Correlation and IoC Matching (Meraki MX)",
              "c": "critical",
              "f": "intermediate",
              "v": "Correlates network traffic with threat intelligence databases to detect known malicious IPs and domains.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event OR type=urls OR type=flow`",
              "q": "index=cisco_network sourcetype=\"meraki\" (type=security_event OR type=urls OR type=flow)\n| lookup threat_intelligence_list src as src OUTPUTNEW threat_name, threat_severity\n| where threat_severity=\"high\" OR threat_severity=\"critical\"\n| stats count as hit_count by src, dest, threat_name\n| sort - hit_count",
              "m": "Create threat intelligence lookup table. Correlate with network events.",
              "z": "IoC match timeline; threat severity breakdown; affected hosts table.",
              "kfp": "New cloud ranges, fast-flux, and short-lived goodware can overlap threat feeds; tune age and scope of feeds you trust.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event OR type=urls OR type=flow`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate threat intelligence lookup table. Correlate with network events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" (type=security_event OR type=urls OR type=flow)\n| lookup threat_intelligence_list src as src OUTPUTNEW threat_name, threat_severity\n| where threat_severity=\"high\" OR threat_severity=\"critical\"\n| stats count as hit_count by src, dest, threat_name\n| sort - hit_count\n```\n\nUnderstanding this SPL\n\n**Threat Intelligence Correlation and IoC Matching (Meraki MX)** — Correlates network traffic with threat intelligence databases to detect known malicious IPs and domains.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event OR type=urls OR type=flow`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where threat_severity=\"high\" OR threat_severity=\"critical\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, threat_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: IoC match timeline; threat severity breakdown; affected hosts table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We line up your threat indicators with what the small office already saw so you can see known bad addresses without waiting on a manual list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
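              "qs": "```Hedged sketch: reuses this catalog's Network_Traffic tstats pattern together with the same threat_intelligence_list lookup as the base search; assumes the data model is accelerated and the lookup exists.```\n| tstats `summariesonly` count as hit_count\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest\n| rename All_Traffic.* as *\n| lookup threat_intelligence_list src OUTPUTNEW threat_name, threat_severity\n| where threat_severity=\"high\" OR threat_severity=\"critical\"\n| sort - hit_count",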
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.30",
              "n": "Geo-Blocking Event Tracking and Geographic Policy Enforcement (Meraki MX)",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks geo-blocking policy enforcement to verify compliance with data residency and export controls.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=urls action=\"blocked\" country=*`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=urls action=\"blocked\"\n| lookup geo_ip.csv dest OUTPUTNEW country, city\n| stats count as block_count by country\n| sort - block_count",
              "m": "Ingest URL logs with GeoIP enrichment. Track blocks by geography.",
              "z": "Geo-block map; country block count chart; policy compliance dashboard.",
              "kfp": "CDNs, VPN exit points, and roaming users can make geo policy blocks spike without a data breach.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=urls action=\"blocked\" country=*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest URL logs with GeoIP enrichment. Track blocks by geography.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=urls action=\"blocked\"\n| lookup geo_ip.csv dest OUTPUTNEW country, city\n| stats count as block_count by country\n| sort - block_count\n```\n\nUnderstanding this SPL\n\n**Geo-Blocking Event Tracking and Geographic Policy Enforcement (Meraki MX)** — Tracks geo-blocking policy enforcement to verify compliance with data residency and export controls.\n\nDocumented **Data sources**: `sourcetype=meraki type=urls action=\"blocked\" country=*`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by country** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Geo-block map; country block count chart; policy compliance dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We group blocked requests by country so data-location rules and blocked regions are easy to show to auditors in plain language.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.31",
              "n": "Application Visibility and Network Application Trending (Meraki MX)",
              "c": "medium",
              "f": "intermediate",
              "v": "Identifies top applications and protocols on network to understand usage patterns and detect anomalies.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=flow application=*`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=flow application=*\n| stats sum(bytes) as app_bytes, count as flow_count by application, application_category\n| eval app_bandwidth_pct=round(app_bytes*100/sum(app_bytes), 2)\n| sort - app_bytes\n| head 20",
              "m": "Extract application field from flow logs. Aggregate by app and category.",
              "z": "App bandwidth pie chart; top apps bar chart; bandwidth timeline by app.",
              "kfp": "Releases, batch jobs, and video calls can make one application or department dominate bandwidth in a good week.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=flow application=*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExtract application field from flow logs. Aggregate by app and category.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=flow application=*\n| stats sum(bytes) as app_bytes, count as flow_count by application, application_category\n| eval app_bandwidth_pct=round(app_bytes*100/sum(app_bytes), 2)\n| sort - app_bytes\n| head 20\n```\n\nUnderstanding this SPL\n\n**Application Visibility and Network Application Trending (Meraki MX)** — Identifies top applications and protocols on network to understand usage patterns and detect anomalies.\n\nDocumented **Data sources**: `sourcetype=meraki type=flow application=*`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by application, application_category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **app_bandwidth_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: App bandwidth pie chart; top apps bar chart; bandwidth timeline by app.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend which applications and traffic types dominate so heavy cloud use, file shares, and video do not take you by surprise.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.32",
              "n": "Bandwidth by Application and Department (Meraki MX)",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks bandwidth consumption by application and business unit for chargeback and optimization.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=flow`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=flow\n| lookup department_by_ip.csv src OUTPUTNEW department\n| stats sum(sent_bytes) as upload_mb, sum(received_bytes) as download_mb by application, department\n| eval total_mb=upload_mb+download_mb\n| sort -total_mb",
              "m": "Correlate flows with IP-to-department mapping. Aggregate by app and dept.",
              "z": "Stacked bar of bandwidth by dept/app; heatmap of app usage per dept.",
              "kfp": "Backups, cloud sync, and software updates can shift \"who used the most\" without any misuse.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=flow`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate flows with IP-to-department mapping. Aggregate by app and dept.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=flow\n| lookup department_by_ip.csv src OUTPUTNEW department\n| stats sum(sent_bytes) as upload_mb, sum(received_bytes) as download_mb by application, department\n| eval total_mb=upload_mb+download_mb\n| sort -total_mb\n```\n\nUnderstanding this SPL\n\n**Bandwidth by Application and Department (Meraki MX)** — Tracks bandwidth consumption by application and business unit for chargeback and optimization.\n\nDocumented **Data sources**: `sourcetype=meraki type=flow`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by application, department** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total_mb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar of bandwidth by dept/app; heatmap of app usage per dept.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We slice bandwidth by team or site when you bring your own table so you can see who is driving cost and help before bills spike.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.33",
              "n": "WAN Link Quality Monitoring — Jitter, Latency, Packet Loss (Meraki MX)",
              "c": "high",
              "f": "intermediate",
              "v": "Continuously monitors WAN quality metrics to detect link degradation before impacting users.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api wan_metrics=*`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" uplink=*\n| stats avg(latency) as avg_latency, avg(jitter) as avg_jitter, avg(packet_loss) as avg_loss by uplink_id\n| eval link_quality=case(avg_loss > 5, \"Critical\", avg_latency > 100, \"Poor\", avg_jitter > 50, \"Fair\", 1=1, \"Good\")",
              "m": "Query appliance API for uplink WAN metrics. Monitor quality KPIs.",
              "z": "Uplink quality scorecard; latency/jitter/loss timeline; quality gauge per uplink.",
              "kfp": "Carrier work, DDNS, and weather-related outages can trigger jitter and loss alerts on a clean policy.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api wan_metrics=*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery appliance API for uplink WAN metrics. Monitor quality KPIs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" uplink=*\n| stats avg(latency) as avg_latency, avg(jitter) as avg_jitter, avg(packet_loss) as avg_loss by uplink_id\n| eval link_quality=case(avg_loss > 5, \"Critical\", avg_latency > 100, \"Poor\", avg_jitter > 50, \"Fair\", 1=1, \"Good\")\n```\n\nUnderstanding this SPL\n\n**WAN Link Quality Monitoring — Jitter, Latency, Packet Loss (Meraki MX)** — Continuously monitors WAN quality metrics to detect link degradation before impacting users.\n\nDocumented **Data sources**: `sourcetype=meraki:api wan_metrics=*`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by uplink_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **link_quality** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Uplink quality scorecard; latency/jitter/loss timeline; quality gauge per uplink.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at loss, delay, and jitter on internet links from the same boxes so a flaky provider is visible on a dashboard, not in angry tickets.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.34",
              "n": "Internet Uplink Failover Events and Recovery Time (Meraki MX)",
              "c": "critical",
              "f": "beginner",
              "v": "Tracks failover events, recovery time, and uplink behavior to ensure high availability.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*failover*\" OR signature=\"*recovery*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*failover*\" OR signature=\"*recovery*\")\n| stats count as failover_count, latest(recovery_time) as recovery_duration by uplink_id, failure_reason\n| where failover_count > 0",
              "m": "Monitor failover and recovery events from syslog. Calculate recovery MTTR.",
              "z": "Failover timeline; recovery time gauge; uplink failure cause pie chart.",
              "kfp": "Test failovers, cable swaps, and ISP work can make uplink change messages noisy during business hours.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*failover*\" OR signature=\"*recovery*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor failover and recovery events from syslog. Calculate recovery MTTR.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*failover*\" OR signature=\"*recovery*\")\n| stats count as failover_count, latest(recovery_time) as recovery_duration by uplink_id, failure_reason\n| where failover_count > 0\n```\n\nUnderstanding this SPL\n\n**Internet Uplink Failover Events and Recovery Time (Meraki MX)** — Tracks failover events, recovery time, and uplink behavior to ensure high availability.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*failover*\" OR signature=\"*recovery*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by uplink_id, failure_reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failover_count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Failover timeline; recovery time gauge; uplink failure cause pie chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see when a site moves between primary and backup internet so the team can confirm a cutover is real, fast, and expected.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.35",
              "n": "Cellular Modem Failover Activation and Usage (Meraki MX)",
              "c": "high",
              "f": "beginner",
              "v": "Tracks cellular backup activation to monitor failover effectiveness and cellular data usage.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*cellular*\" OR signature=\"*4G*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*cellular*\" OR signature=\"*4G*\" OR signature=\"*LTE*\")\n| stats count as cellular_events, sum(data_usage_mb) as total_cellular_data by event_type\n| where total_cellular_data > 0",
              "m": "Ingest cellular failover events. Track data consumption.",
              "z": "Cellular usage timeline; failover event table; data usage gauge.",
              "kfp": "Carriers, signal checks, and planned tests can make cellular backup logs busy without a site-down situation.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*cellular*\" OR signature=\"*4G*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest cellular failover events. Track data consumption.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*cellular*\" OR signature=\"*4G*\" OR signature=\"*LTE*\")\n| stats count as cellular_events, sum(data_usage_mb) as total_cellular_data by event_type\n| where total_cellular_data > 0\n```\n\nUnderstanding this SPL\n\n**Cellular Modem Failover Activation and Usage (Meraki MX)** — Tracks cellular backup activation to monitor failover effectiveness and cellular data usage.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*cellular*\" OR signature=\"*4G*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by event_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where total_cellular_data > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Cellular usage timeline; failover event table; data usage gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We mark when a site leans on cellular backup so you know who is on expensive paths and can fix the main line with less guesswork.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.36",
              "n": "Warm Spare Failover and Appliance Redundancy (Meraki MX)",
              "c": "critical",
              "f": "beginner",
              "v": "Ensures warm spare failover mechanism is operational and redundancy is maintained.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*warm spare*\" OR signature=\"*HA*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*warm spare*\" OR signature=\"*HA*\" OR signature=\"*redundancy*\")\n| stats latest(ha_status) as redundancy_status, count as status_change_count by appliance_pair\n| where ha_status!=\"active/standby\"",
              "m": "Monitor HA/warm spare events. Alert on status != \"active/standby\".",
              "z": "HA status dashboard; failover timeline; redundancy health gauge.",
              "kfp": "Rehearsed failovers, firmware rollouts, and power tests create warm-standby messages you already expect.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*warm spare*\" OR signature=\"*HA*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor HA/warm spare events. Alert on status != \"active/standby\".\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*warm spare*\" OR signature=\"*HA*\" OR signature=\"*redundancy*\")\n| stats latest(ha_status) as redundancy_status, count as status_change_count by appliance_pair\n| where ha_status!=\"active/standby\"\n```\n\nUnderstanding this SPL\n\n**Warm Spare Failover and Appliance Redundancy (Meraki MX)** — Ensures warm spare failover mechanism is operational and redundancy is maintained.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*warm spare*\" OR signature=\"*HA*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by appliance_pair** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ha_status!=\"active/standby\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: HA status dashboard; failover timeline; redundancy health gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch warm-spare handovers so a spare box taking over is something you know about, not a mystery outage after the fact.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.37",
              "n": "Auto VPN Path Changes and Tunnel Switching (Meraki MX)",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks automatic VPN path optimization to understand tunnel usage and convergence behavior.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=vpn signature=\"*Auto VPN*\" OR signature=\"*path change*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=vpn (signature=\"*Auto VPN*\" OR signature=\"*path change*\")\n| stats count as path_change_count by tunnel_id, new_path, old_path\n| where path_change_count > 3",
              "m": "Monitor Auto VPN path optimization events. Alert on excessive changes.",
              "z": "Path change timeline; tunnel path change distribution; convergence analysis.",
              "kfp": "Route optimization and ISP issues can re-path tunnels; verify impact before calling it a security problem.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=vpn signature=\"*Auto VPN*\" OR signature=\"*path change*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Auto VPN path optimization events. Alert on excessive changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=vpn (signature=\"*Auto VPN*\" OR signature=\"*path change*\")\n| stats count as path_change_count by tunnel_id, new_path, old_path\n| where path_change_count > 3\n```\n\nUnderstanding this SPL\n\n**Auto VPN Path Changes and Tunnel Switching (Meraki MX)** — Tracks automatic VPN path optimization to understand tunnel usage and convergence behavior.\n\nDocumented **Data sources**: `sourcetype=meraki type=vpn signature=\"*Auto VPN*\" OR signature=\"*path change*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by tunnel_id, new_path, old_path** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where path_change_count > 3` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Path change timeline; tunnel path change distribution; convergence analysis.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log automatic tunnel and path changes so the team can tell normal reroutes from a misconfiguration that strands users.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.38",
              "n": "Connection Rate Analysis and DOS Detection (Meraki MX)",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects denial of service attacks by analyzing abnormal connection establishment rates.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=flow protocol=\"tcp\" tcp_flags=\"SYN\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=flow protocol=\"tcp\" tcp_flags=\"SYN\"\n| timechart count as new_connections by src\n| where new_connections > 1000",
              "m": "Monitor TCP SYN rate by source IP. Alert on anomalous connection rates.",
              "z": "Connection rate timeline; source IP detail table; DOS alert dashboard.",
              "kfp": "Scanners, new internet-facing apps, and broken clients can look like a SYN flood in raw statistics.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=flow protocol=\"tcp\" tcp_flags=\"SYN\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor TCP SYN rate by source IP. Alert on anomalous connection rates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=flow protocol=\"tcp\" tcp_flags=\"SYN\"\n| timechart count as new_connections by src\n| where new_connections > 1000\n```\n\nUnderstanding this SPL\n\n**Connection Rate Analysis and DOS Detection (Meraki MX)** — Detects denial of service attacks by analyzing abnormal connection establishment rates.\n\nDocumented **Data sources**: `sourcetype=meraki type=flow protocol=\"tcp\" tcp_flags=\"SYN\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time with a separate series **by src** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where new_connections > 1000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Connection rate timeline; source IP detail table; DOS alert dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at new connection rates on small office traffic so floods, scans, and misbehaving clients stand out from everyday browsing.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.39",
              "n": "Data Loss Prevention (DLP) Event Analysis (Meraki MX)",
              "c": "critical",
              "f": "beginner",
              "v": "Detects and alerts on sensitive data transmission to prevent data exfiltration.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*DLP*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*DLP*\"\n| stats count as dlp_match_count by src, dest, dlp_policy, data_type\n| where dlp_match_count > 0\n| sort - dlp_match_count",
              "m": "Enable DLP on MX appliance. Ingest DLP match events.",
              "z": "DLP incident timeline; data type breakdown; source/destination detail.",
              "kfp": "False positives, large legitimate uploads, and user mistakes can all trip data-loss rules you still need to review.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*DLP*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable DLP on MX appliance. Ingest DLP match events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*DLP*\"\n| stats count as dlp_match_count by src, dest, dlp_policy, data_type\n| where dlp_match_count > 0\n| sort - dlp_match_count\n```\n\nUnderstanding this SPL\n\n**Data Loss Prevention (DLP) Event Analysis (Meraki MX)** — Detects and alerts on sensitive data transmission to prevent data exfiltration.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*DLP*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, dest, dlp_policy, data_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where dlp_match_count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: DLP incident timeline; data type breakdown; source/destination detail.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count data loss style events on the same edge so risky uploads to personal storage and odd data paths get a second look in time.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.40",
              "n": "Meraki VPN Tunnel and Failover Health",
              "c": "high",
              "f": "intermediate",
              "v": "Site-to-site and client VPN tunnel state directly impacts remote site and user connectivity. Detecting tunnel down or failover events supports quick remediation.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580), Meraki dashboard API",
              "d": "`sourcetype=meraki:api` (VPN status), syslog from MX",
              "q": "index=cisco_network sourcetype=\"meraki:api\" vpn_tunnel=*\n| stats latest(tunnel_state) as state, latest(peer_ip) as peer by device_serial, tunnel_id\n| where state != \"up\"\n| table device_serial tunnel_id peer state _time",
              "m": "Poll Meraki API for VPN tunnel status or ingest MX syslog for tunnel events. Alert when any tunnel is down. Track failover events for active/standby links.",
              "z": "Status grid (tunnel, state), Table (down tunnels), Timeline (failover events).",
              "kfp": "Tunnels, peers, and monitored paths can flap during routine network work; use duration to separate noise from outage.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), Meraki dashboard API.\n• Ensure the following data sources are available: `sourcetype=meraki:api` (VPN status), syslog from MX.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Meraki API for VPN tunnel status or ingest MX syslog for tunnel events. Alert when any tunnel is down. Track failover events for active/standby links.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" vpn_tunnel=*\n| stats latest(tunnel_state) as state, latest(peer_ip) as peer by device_serial, tunnel_id\n| where state != \"up\"\n| table device_serial tunnel_id peer state _time\n```\n\nUnderstanding this SPL\n\n**Meraki VPN Tunnel and Failover Health** — Site-to-site and client VPN tunnel state directly impacts remote site and user connectivity. Detecting tunnel down or failover events supports quick remediation.\n\nDocumented **Data sources**: `sourcetype=meraki:api` (VPN status), syslog from MX. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), Meraki dashboard API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_serial, tunnel_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where state != \"up\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Meraki VPN Tunnel and Failover Health**): table device_serial tunnel_id peer state _time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (tunnel, state), Table (down tunnels), Timeline (failover events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep watch on tunnel and failover state from the cloud dashboard data so a down path is not something you only hear about in a meeting.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.41",
              "n": "Juniper SRX IDP/IPS Event Monitoring (Juniper SRX)",
              "c": "high",
              "f": "beginner",
              "v": "Juniper SRX runs an integrated IDP/IPS engine with signature-based and protocol-anomaly detection alongside firewall state. Because events are generated in the same flow path as security policy, logs carry application context, zones, and session identifiers that standalone IPS appliances often lack. Monitoring attack name, severity, destination service, and enforcement action (drop, close, ignore) lets you prioritize true positives, spot targeted attacks, and prove that prevention is working without waiting for incident tickets.",
              "t": "`Splunk_TA_juniper` (Splunkbase 2847)",
              "d": "`sourcetype=juniper:junos:idp`, `sourcetype=juniper:junos:idp:structured`",
              "q": "index=network (sourcetype=\"juniper:junos:idp\" OR sourcetype=\"juniper:junos:idp:structured\")\n| eval attack=coalesce(attack_name, signature, threat_name, idp_attack_name)\n| eval sev=lower(coalesce(severity, threat_severity, idp_severity))\n| eval act=coalesce(action, idp_action, policy_action)\n| eval src_ip=coalesce(src, src_ip, srcaddr)\n| eval dest_ip=coalesce(dest, dest_ip, dstaddr)\n| stats count as hits by host attack sev act src_ip dest_ip dest_port service\n| sort -hits",
              "m": "Enable IDP on applicable SRX policies and send IDP logs to Splunk (structured syslog preferred). Install and enable the Juniper TA for field extraction. Build alerts for `sev` in (critical, high) or for rapid growth in `hits` against the same `dest_ip`/service. Correlate with allow/deny traffic logs on the same five-tuple. Add suppressions for known vulnerability scanners after a baseline window. Validate CIM `Intrusion_Detection` tags if you accelerate the data model.",
              "z": "Table (attack, severity, action, endpoints), Bar chart (top signatures), Timeline (bursts by host).",
              "kfp": "IDP false positives, scanner traffic, and new apps can raise signatures until you tune and whitelist known noise.",
              "refs": "[Splunkbase app 2847](https://splunkbase.splunk.com/app/2847), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_juniper` (Splunkbase 2847).\n• Ensure the following data sources are available: `sourcetype=juniper:junos:idp`, `sourcetype=juniper:junos:idp:structured`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable IDP on applicable SRX policies and send IDP logs to Splunk (structured syslog preferred). Install and enable the Juniper TA for field extraction. Build alerts for `sev` in (critical, high) or for rapid growth in `hits` against the same `dest_ip`/service. Correlate with allow/deny traffic logs on the same five-tuple. Add suppressions for known vulnerability scanners after a baseline window. Validate CIM `Intrusion_Detection` tags if you accelerate the data model.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"juniper:junos:idp\" OR sourcetype=\"juniper:junos:idp:structured\")\n| eval attack=coalesce(attack_name, signature, threat_name, idp_attack_name)\n| eval sev=lower(coalesce(severity, threat_severity, idp_severity))\n| eval act=coalesce(action, idp_action, policy_action)\n| eval src_ip=coalesce(src, src_ip, srcaddr)\n| eval dest_ip=coalesce(dest, dest_ip, dstaddr)\n| stats count as hits by host attack sev act src_ip dest_ip dest_port service\n| sort -hits\n```\n\nUnderstanding this SPL\n\n**Juniper SRX IDP/IPS Event Monitoring (Juniper SRX)** — Juniper SRX runs an integrated IDP/IPS engine with signature-based and protocol-anomaly detection alongside firewall state. Because events are generated in the same flow path as security policy, logs carry application context, zones, and session identifiers that standalone IPS appliances often lack. Monitoring attack name, severity, destination service, and enforcement action (drop, close, ignore) lets you prioritize true positives, spot targeted attacks, and prove that…\n\nDocumented **Data sources**: `sourcetype=juniper:junos:idp`, `sourcetype=juniper:junos:idp:structured`. **App/TA** (typical add-on context): `Splunk_TA_juniper` (Splunkbase 2847). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: juniper:junos:idp, juniper:junos:idp:structured. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"juniper:junos:idp\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **attack** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sev** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **act** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **src_ip** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dest_ip** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host attack sev act src_ip dest_ip dest_port service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection\n  by IDS_Attacks.signature, IDS_Attacks.severity, IDS_Attacks.src, IDS_Attacks.dest span=1h\n| where count > 0\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Juniper SRX IDP/IPS Event Monitoring (Juniper SRX)** — Juniper SRX runs an integrated IDP/IPS engine with signature-based and protocol-anomaly detection alongside firewall state. Because events are generated in the same flow path as security policy, logs carry application context, zones, and session identifiers that standalone IPS appliances often lack. Monitoring attack name, severity, destination service, and enforcement action (drop, close, ignore) lets you prioritize true positives, spot targeted attacks, and prove that…\n\nDocumented **Data sources**: `sourcetype=juniper:junos:idp`, `sourcetype=juniper:junos:idp:structured`. **App/TA** (typical add-on context): `Splunk_TA_juniper` (Splunkbase 2847). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection` — enable acceleration for that model.\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (attack, severity, action, endpoints), Bar chart (top signatures), Timeline (bursts by host).",
              "script": "",
              "premium": "",
              "hw": "Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count intrusion and prevention hits on the SRX so serious signatures rise above everyday noise and get staff attention in order.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection\n  by IDS_Attacks.signature, IDS_Attacks.severity, IDS_Attacks.src, IDS_Attacks.dest span=1h\n| where count > 0\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
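              ,"_exampleAlertSpl": "Hedged alert sketch for the severity-based alert described in implementation; the 15-minute window and 20-hit threshold are assumptions to baseline and tune, not vendor guidance:\n\nindex=network (sourcetype=\"juniper:junos:idp\" OR sourcetype=\"juniper:junos:idp:structured\") earliest=-15m\n| eval sev=lower(coalesce(severity, threat_severity, idp_severity))\n| where sev=\"critical\" OR sev=\"high\"\n| eval dest_ip=coalesce(dest, dest_ip, dstaddr)\n| stats count as hits values(attack_name) as attacks by dest_ip service\n| where hits > 20\n\nSave as an alert on a 15-minute schedule; grouping by destination and service mirrors the rapid-growth-in-hits condition from the implementation notes."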
            },
            {
              "i": "5.2.42",
              "n": "Juniper SRX Screen Counter Monitoring (Juniper SRX)",
              "c": "high",
              "f": "intermediate",
              "v": "Junos “Screen” features apply stateless, early-drop protections against floods, sweeps, malformed packets, and classic DoS patterns before sessions are fully created. Those drops often never appear in session or traffic logs, so screen telemetry is the only way to see perimeter volumetric or reconnaissance attacks. Sustained spikes in specific screen categories usually mean an active attack, a misconfigured peer, or a need to tune thresholds—not “normal” firewall noise.",
              "t": "`Splunk_TA_juniper` (Splunkbase 2847), SNMP Modular Input",
              "d": "`sourcetype=juniper:junos:firewall:structured` (syslog `RT_SCREEN_*`), SNMP screen or attack-related counters where published for your platform",
              "q": "index=network (sourcetype=\"juniper:junos:firewall:structured\" OR sourcetype=\"juniper:junos:firewall\")\n  \"RT_SCREEN\"\n| rex field=_raw \"RT_SCREEN_(?<screen_type>[A-Z0-9_]+)\"\n| rex field=_raw \"source:\\s*(?<src>\\S+)\"\n| rex field=_raw \"destination:\\s*(?<dest>\\S+)\"\n| bin _time span=5m\n| stats count as screen_hits by _time host screen_type src dest\n| eventstats median(screen_hits) as med by screen_type, host\n| eval threshold=max(100, 5 * med)\n| where screen_hits > threshold\n| sort -screen_hits",
              "m": "Confirm screen options are enabled on untrust-facing interfaces and that `RT_SCREEN` syslog messages (or structured equivalents) reach Splunk. For SNMP, poll platform-specific screen/attack counters if your SRX model exposes them, and chart deltas alongside syslog. Baseline each `screen_type` per site; alert on order-of-magnitude jumps or sustained elevation. Investigate source `src` clusters and coordinate with upstream ISP scrubbing if attacks are large. Map to CIM `Intrusion_Detection` where fields align.",
              "z": "Timechart (hits by screen type), Table (top sources), Single value (total screen drops vs prior day).",
              "kfp": "Port scans, routing churn, and loud internet background radiation can set off screen counters without a real breach.",
              "refs": "[Splunkbase app 2847](https://splunkbase.splunk.com/app/2847), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_juniper` (Splunkbase 2847), SNMP Modular Input.\n• Ensure the following data sources are available: `sourcetype=juniper:junos:firewall:structured` (syslog `RT_SCREEN_*`), SNMP screen or attack-related counters where published for your platform.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfirm screen options are enabled on untrust-facing interfaces and that `RT_SCREEN` syslog messages (or structured equivalents) reach Splunk. For SNMP, poll platform-specific screen/attack counters if your SRX model exposes them, and chart deltas alongside syslog. Baseline each `screen_type` per site; alert on order-of-magnitude jumps or sustained elevation. Investigate source `src` clusters and coordinate with upstream ISP scrubbing if attacks are large. Map to CIM `Intrusion_Detection` where …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"juniper:junos:firewall:structured\" OR sourcetype=\"juniper:junos:firewall\")\n  \"RT_SCREEN\"\n| rex field=_raw \"RT_SCREEN_(?<screen_type>[A-Z0-9_]+)\"\n| rex field=_raw \"source:\\s*(?<src>\\S+)\"\n| rex field=_raw \"destination:\\s*(?<dest>\\S+)\"\n| bin _time span=5m\n| stats count as screen_hits by _time host screen_type src dest\n| eventstats median(screen_hits) as med by screen_type, host\n| eval threshold=max(100, 5 * med)\n| where screen_hits > threshold\n| sort -screen_hits\n```\n\nUnderstanding this SPL\n\n**Juniper SRX Screen Counter Monitoring (Juniper SRX)** — Junos “Screen” features apply stateless, early-drop protections against floods, sweeps, malformed packets, and classic DoS patterns before sessions are fully created. Those drops often never appear in session or traffic logs, so screen telemetry is the only way to see perimeter volumetric or reconnaissance attacks. Sustained spikes in specific screen categories usually mean an active attack, a misconfigured peer, or a need to tune thresholds—not “normal” firewall noise.\n\nDocumented **Data sources**: `sourcetype=juniper:junos:firewall:structured` (syslog `RT_SCREEN_*`), SNMP screen or attack-related counters where published for your platform. **App/TA** (typical add-on context): `Splunk_TA_juniper` (Splunkbase 2847), SNMP Modular Input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: juniper:junos:firewall:structured, juniper:junos:firewall. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"juniper:junos:firewall:structured\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time host screen_type src dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by screen_type, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **threshold** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where screen_hits > threshold` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection\n  by IDS_Attacks.signature, IDS_Attacks.src span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Juniper SRX Screen Counter Monitoring (Juniper SRX)** — Junos “Screen” features apply stateless, early-drop protections against floods, sweeps, malformed packets, and classic DoS patterns before sessions are fully created. Those drops often never appear in session or traffic logs, so screen telemetry is the only way to see perimeter volumetric or reconnaissance attacks. Sustained spikes in specific screen categories usually mean an active attack, a misconfigured peer, or a need to tune thresholds—not “normal” firewall noise.\n\nDocumented **Data sources**: `sourcetype=juniper:junos:firewall:structured` (syslog `RT_SCREEN_*`), SNMP screen or attack-related counters where published for your platform. **App/TA** (typical add-on context): `Splunk_TA_juniper` (Splunkbase 2847), SNMP Modular Input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (hits by screen type), Table (top sources), Single value (total screen drops vs prior day).",
              "script": "",
              "premium": "",
              "hw": "Juniper SRX300/SRX320/SRX340/SRX345/SRX550/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow screen and flood counters on the SRX so unexpected spikes in scans and probes are easy to see next to the rest of the security story.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection\n  by IDS_Attacks.signature, IDS_Attacks.src span=1h\n| sort -count",
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
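              ,"_exampleAlertSpl": "Hedged sketch of the day-over-day comparison suggested under visualization (total screen drops vs prior day); the 1-hour span is an assumption, so widen it on quiet links:\n\nindex=network (sourcetype=\"juniper:junos:firewall:structured\" OR sourcetype=\"juniper:junos:firewall\") \"RT_SCREEN\" earliest=-2d@d\n| rex field=_raw \"RT_SCREEN_(?<screen_type>[A-Z0-9_]+)\"\n| timechart span=1h count as screen_hits\n| timewrap 1d\n\ntimewrap overlays today against the prior day, so sustained elevation in screen drops stands out without a fixed threshold."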
            },
            {
              "i": "5.2.43",
              "n": "Juniper SRX Cluster Failover Events (Juniper SRX)",
              "c": "critical",
              "f": "intermediate",
              "v": "Chassis-clustered SRX devices use redundancy groups (RGs) so services fail over when a node, link, or priority changes. JSRPD and cluster-related messages record RG ownership changes, interface monitoring triggers, and manual switchovers. Frequent or flapping failovers point to unstable fabric links, NIC or RE problems, or split-brain risk. Tracking RG state, reason strings, and duration helps you distinguish planned maintenance from emerging hardware or path faults.",
              "t": "`Splunk_TA_juniper` (Splunkbase 2847), syslog",
              "d": "`sourcetype=juniper:junos:structured`",
              "q": "index=network sourcetype=\"juniper:junos:structured\"\n  (lower(process)=\"jsrpd\" OR match(_raw, \"(?i)chassis cluster|redundancy group|RG-\\d+|failover|switchover\"))\n| rex \"(?i)redundancy group (?<rg_id>\\d+)\"\n| rex \"(?i)Reason:\\s*(?<failover_reason>[^\\|]+)\"\n| rex \"(?i)interface (?<ifname>\\S+) (?<if_state>up|down)\"\n| table _time host rg_id failover_reason ifname if_state process _raw\n| sort -_time",
              "m": "Forward cluster member syslogs with millisecond timestamps and synchronized NTP. Alert on any RG primary change, interface monitoring-driven failover, or unexpected preempt. Dashboard current RG primary per cluster ID and correlate with interface `up`/`down` events on fabric/control links. For active/active designs, track both RGs independently. Keep runbooks for manual `request chassis cluster failover` versus automatic events.",
              "z": "Timeline (failover markers), Table (RG, reason, node), Status panel (current primary per cluster).",
              "kfp": "Rehearsed failover, firmware upgrades, and fabric events can make cluster state messages spike during maintenance.",
              "refs": "[Splunkbase app 2847](https://splunkbase.splunk.com/app/2847)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_juniper` (Splunkbase 2847), syslog.\n• Ensure the following data sources are available: `sourcetype=juniper:junos:structured`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward cluster member syslogs with millisecond timestamps and synchronized NTP. Alert on any RG primary change, interface monitoring-driven failover, or unexpected preempt. Dashboard current RG primary per cluster ID and correlate with interface `up`/`down` events on fabric/control links. For active/active designs, track both RGs independently. Keep runbooks for manual `request chassis cluster failover` versus automatic events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"juniper:junos:structured\"\n  (lower(process)=\"jsrpd\" OR match(_raw, \"(?i)chassis cluster|redundancy group|RG-\\d+|failover|switchover\"))\n| rex \"(?i)redundancy group (?<rg_id>\\d+)\"\n| rex \"(?i)Reason:\\s*(?<failover_reason>[^\\|]+)\"\n| rex \"(?i)interface (?<ifname>\\S+) (?<if_state>up|down)\"\n| table _time host rg_id failover_reason ifname if_state process _raw\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Juniper SRX Cluster Failover Events (Juniper SRX)** — Chassis-clustered SRX devices use redundancy groups (RGs) so services fail over when a node, link, or priority changes. JSRPD and cluster-related messages record RG ownership changes, interface monitoring triggers, and manual switchovers. Frequent or flapping failovers point to unstable fabric links, NIC or RE problems, or split-brain risk. Tracking RG state, reason strings, and duration helps you distinguish planned maintenance from emerging hardware or path faults.\n\nDocumented **Data sources**: `sourcetype=juniper:junos:structured`. **App/TA** (typical add-on context): `Splunk_TA_juniper` (Splunkbase 2847), syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: juniper:junos:structured. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"juniper:junos:structured\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **Juniper SRX Cluster Failover Events (Juniper SRX)**): table _time host rg_id failover_reason ifname if_state process _raw\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (failover markers), Table (RG, reason, node), Status panel (current primary per cluster).",
              "script": "",
              "premium": "",
              "hw": "Juniper SRX300/SRX1500/SRX4100/SRX4200/SRX4600/SRX5400/SRX5600/SRX5800",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see cluster and routing daemon messages on the SRX so a planned or surprise failover is something you can explain with timestamps.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
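              ,"_exampleAlertSpl": "Hedged flap-detection sketch for the RG-change alert described in implementation; the keyword scoping and the more-than-one-transition-per-hour threshold are assumptions to tune:\n\nindex=network sourcetype=\"juniper:junos:structured\" (failover OR switchover OR \"redundancy group\")\n| rex \"(?i)redundancy group (?<rg_id>\\d+)\"\n| bin _time span=1h\n| stats count as transitions by _time host rg_id\n| where transitions > 1\n\nA single failover may be planned maintenance; repeated transitions within one hour usually mean flapping fabric links or interface-monitor triggers."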
            },
            {
              "i": "5.2.44",
              "n": "FortiGate Security Fabric Health Monitoring (Fortinet)",
              "c": "high",
              "f": "intermediate",
              "v": "Security Fabric ties FortiGate to FortiManager, FortiAnalyzer, FortiSandbox, EMS, and downstream FortiGates for synchronized policy, logging, and threat intelligence. When fabric connectivity or authorization breaks, you lose centralized management, shared object updates, and automated sandbox verdict workflows—often silently until someone notices missing logs or stale objects. Monitoring root and downstream fabric membership, heartbeat, and authorization errors gives early warning before operations and compliance gaps widen.",
              "t": "`TA-fortinet_fortigate` (Splunkbase 2846)",
              "d": "`sourcetype=fgt_event`, `sourcetype=fortinet_fortios_event`",
              "q": "index=firewall sourcetype IN (\"fgt_event\",\"fortinet_fortios_event\")\n  (lower(_raw) LIKE \"%fabric%\" OR lower(logdesc) LIKE \"%fabric%\" OR lower(msg) LIKE \"%fabric%\"\n   OR match(_raw, \"(?i)FortiManager|FortiAnalyzer|authorization failed|certificate.*fabric\"))\n| eval device=coalesce(devname, dvc, host)\n| stats count by device type subtype logdesc msg level\n| sort -count",
              "m": "Ensure FortiOS event logging includes system and fabric-related categories (varies by version). Install `TA-fortinet_fortigate` and send logs via syslog or reliable forwarding. Create alerts for authorization failures, certificate issues, or loss of FortiManager reachability strings in `logdesc`/`msg`. Validate FortiManager/Analyzer versions and time sync. Test by temporarily blocking management paths in a lab to confirm detection.",
              "z": "Table (device, subtype, message), Timeline (fabric errors), Status grid (root vs leaf FortiGate health).",
              "kfp": "Broker reconnects, backup links, and FortiManager pushes can all register as fabric or event noise.",
              "refs": "[Splunkbase app 2846](https://splunkbase.splunk.com/app/2846)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-fortinet_fortigate` (Splunkbase 2846).\n• Ensure the following data sources are available: `sourcetype=fgt_event`, `sourcetype=fortinet_fortios_event`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure FortiOS event logging includes system and fabric-related categories (varies by version). Install `TA-fortinet_fortigate` and send logs via syslog or reliable forwarding. Create alerts for authorization failures, certificate issues, or loss of FortiManager reachability strings in `logdesc`/`msg`. Validate FortiManager/Analyzer versions and time sync. Test by temporarily blocking management paths in a lab to confirm detection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype IN (\"fgt_event\",\"fortinet_fortios_event\")\n  (lower(_raw) LIKE \"%fabric%\" OR lower(logdesc) LIKE \"%fabric%\" OR lower(msg) LIKE \"%fabric%\"\n   OR match(_raw, \"(?i)FortiManager|FortiAnalyzer|authorization failed|certificate.*fabric\"))\n| eval device=coalesce(devname, dvc, host)\n| stats count by device type subtype logdesc msg level\n| sort -count\n```\n\nUnderstanding this SPL\n\n**FortiGate Security Fabric Health Monitoring (Fortinet)** — Security Fabric ties FortiGate to FortiManager, FortiAnalyzer, FortiSandbox, EMS, and downstream FortiGates for synchronized policy, logging, and threat intelligence. When fabric connectivity or authorization breaks, you lose centralized management, shared object updates, and automated sandbox verdict workflows—often silently until someone notices missing logs or stale objects. Monitoring root and downstream fabric membership, heartbeat, and authorization errors gives early…\n\nDocumented **Data sources**: `sourcetype=fgt_event`, `sourcetype=fortinet_fortios_event`. **App/TA** (typical add-on context): `TA-fortinet_fortigate` (Splunkbase 2846). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **device** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by device type subtype logdesc msg level** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, subtype, message), Timeline (fabric errors), Status grid (root vs leaf FortiGate health).",
              "script": "",
              "premium": "",
              "hw": "Fortinet FortiGate 60F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager, FortiAnalyzer",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track fabric and platform health messages on FortiGate so split links, sync issues, and serial oddities are visible before they become wide outages.",
              "mtype": [
                "Availability",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
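              ,"_exampleAlertSpl": "Hedged alert sketch for the authorization-failure and manager-reachability alerting described in implementation; the 15-minute bucket and 5-event threshold are assumptions to baseline per site:\n\nindex=firewall sourcetype IN (\"fgt_event\",\"fortinet_fortios_event\") (\"authorization failed\" OR FortiManager OR FortiAnalyzer)\n| eval device=coalesce(devname, dvc, host)\n| bin _time span=15m\n| stats count as fabric_errors by _time device logdesc\n| where fabric_errors > 5\n\nKeyword scoping keeps the base search fast; tighten to specific logdesc values once you know what your FortiOS release emits."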
            },
            {
              "i": "5.2.45",
              "n": "FortiGate SD-WAN Health Check and SLA Monitoring (Fortinet)",
              "c": "high",
              "f": "intermediate",
              "v": "SD-WAN health checks (ICMP, HTTP, DNS, TCP/UDP echo) continuously score each member link against SLA targets for latency, jitter, and loss. When an SLA fails, FortiOS steers traffic to better paths—so log and metric visibility is how you catch ISP brownouts before users open tickets. Trending per-interface loss and delay also validates whether performance problems are underlay-related or application-side.",
              "t": "`TA-fortinet_fortigate` (Splunkbase 2846)",
              "d": "`sourcetype=fgt_event` (FortiOS system events, SD-WAN subtype varies by release, e.g. `subtype=sdwan`), `sourcetype=fortinet_fortios_event`",
              "q": "index=firewall sourcetype IN (\"fgt_event\",\"fortinet_fortios_event\") type=\"system\" (subtype=\"sdwan\" OR subtype=\"sd-wan\" OR match(_raw, \"(?i)sd-wan|sdwan\"))\n| eval iface=coalesce(interface, intf, sdwan_zone, link)\n| eval loss_pct=coalesce(pktloss, packet_loss, loss, sdwan_loss)\n| eval lat_ms=coalesce(latency, rtt, sla_latency)\n| eval jitter_ms=coalesce(jitter, sdwan_jitter)\n| where loss_pct > 0 OR lat_ms > 200 OR match(lower(_raw), \"violated|fail|unreachable|timeout\")\n| timechart span=15m avg(loss_pct) as avg_loss avg(lat_ms) as avg_latency by iface",
              "m": "Define SD-WAN SLAs and health-check servers that reflect real user paths (not only the nearest DNS). Forward `system` SD-WAN events to Splunk and confirm extracted fields with your FortiOS version—field names differ slightly across releases. Alert when SLA violations repeat for the same interface or when loss/latency step-changes correlate with carrier incidents. Cross-check with `fgt_traffic` volume shifts on the same SD-WAN zones.",
              "z": "Timechart (loss/latency per member), Table (SLA violations by interface), Single value (active violated SLAs).",
              "kfp": "Wan flaps, local loop tests, and carrier work can make SLA and health log messages loud without a bad policy.",
              "refs": "[Splunkbase app 2846](https://splunkbase.splunk.com/app/2846), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-fortinet_fortigate` (Splunkbase 2846).\n• Ensure the following data sources are available: `sourcetype=fgt_event` (FortiOS system events, SD-WAN subtype varies by release, e.g. `subtype=sdwan`), `sourcetype=fortinet_fortios_event`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine SD-WAN SLAs and health-check servers that reflect real user paths (not only the nearest DNS). Forward `system` SD-WAN events to Splunk and confirm extracted fields with your FortiOS version—field names differ slightly across releases. Alert when SLA violations repeat for the same interface or when loss/latency step-changes correlate with carrier incidents. Cross-check with `fgt_traffic` volume shifts on the same SD-WAN zones.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype IN (\"fgt_event\",\"fortinet_fortios_event\") type=\"system\" (subtype=\"sdwan\" OR subtype=\"sd-wan\" OR match(_raw, \"(?i)sd-wan|sdwan\"))\n| eval iface=coalesce(interface, intf, sdwan_zone, link)\n| eval loss_pct=coalesce(pktloss, packet_loss, loss, sdwan_loss)\n| eval lat_ms=coalesce(latency, rtt, sla_latency)\n| eval jitter_ms=coalesce(jitter, sdwan_jitter)\n| where loss_pct > 0 OR lat_ms > 200 OR match(lower(_raw), \"violated|fail|unreachable|timeout\")\n| timechart span=15m avg(loss_pct) as avg_loss avg(lat_ms) as avg_latency by iface\n```\n\nUnderstanding this SPL\n\n**FortiGate SD-WAN Health Check and SLA Monitoring (Fortinet)** — SD-WAN health checks (ICMP, HTTP, DNS, TCP/UDP echo) continuously score each member link against SLA targets for latency, jitter, and loss. When an SLA fails, FortiOS steers traffic to better paths—so log and metric visibility is how you catch ISP brownouts before users open tickets. Trending per-interface loss and delay also validates whether performance problems are underlay-related or application-side.\n\nDocumented **Data sources**: `sourcetype=fgt_event` (FortiOS system events, SD-WAN subtype varies by release, e.g. `subtype=sdwan`), `sourcetype=fortinet_fortios_event`. **App/TA** (typical add-on context): `TA-fortinet_fortigate` (Splunkbase 2846). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **iface** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **loss_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **lat_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **jitter_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where loss_pct > 0 OR lat_ms > 200 OR match(lower(_raw), \"violated|fail|unreachable|timeout\")` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by iface** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate SD-WAN Health Check and SLA Monitoring (Fortinet)** — SD-WAN health checks (ICMP, HTTP, DNS, TCP/UDP echo) continuously score each member link against SLA targets for latency, jitter, and loss. When an SLA fails, FortiOS steers traffic to better paths—so log and metric visibility is how you catch ISP brownouts before users open tickets. Trending per-interface loss and delay also validates whether performance problems are underlay-related or application-side.\n\nDocumented **Data sources**: `sourcetype=fgt_event` (FortiOS system events, SD-WAN subtype varies by release, e.g. `subtype=sdwan`), `sourcetype=fortinet_fortios_event`. **App/TA** (typical add-on context): `TA-fortinet_fortigate` (Splunkbase 2846). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (loss/latency per member), Table (SLA violations by interface), Single value (active violated SLAs).",
              "script": "",
              "premium": "",
              "hw": "Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow software-defined wide-area health checks so a bad underlay, dead probe, or weak policy is obvious next to the rest of the site story.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
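              ,"_exampleAlertSpl": "Hedged sketch of the repeated-SLA-violation alert described in implementation; the three-violations-per-hour threshold and the keyword list are assumptions, so confirm the exact strings your FortiOS release logs:\n\nindex=firewall sourcetype IN (\"fgt_event\",\"fortinet_fortios_event\") type=\"system\" (sdwan OR \"sd-wan\") (violated OR fail OR unreachable OR timeout)\n| eval iface=coalesce(interface, intf, sdwan_zone, link)\n| bin _time span=1h\n| stats count as violations by _time iface\n| where violations >= 3\n\nRepeats on the same member interface separate a degrading underlay from a one-off probe timeout."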
            },
            {
              "i": "5.2.46",
              "n": "FortiGate Web Filter and Application Control Events (Fortinet)",
              "c": "medium",
              "f": "beginner",
              "v": "FortiGate UTM combines web filtering (FortiGuard URL categories), DNS filtering, and application control in one policy pass. Reviewing blocked categories, high-risk apps, and allow/deny ratios shows policy drift, shadow IT, and risky user behavior without full packet capture. It also helps justify license spend and tune noisy categories that generate help-desk load.",
              "t": "`TA-fortinet_fortigate` (Splunkbase 2846)",
              "d": "`sourcetype=fgt_utm`, `sourcetype=fortinet_fortios_utm`",
              "q": "index=firewall sourcetype IN (\"fgt_utm\",\"fortinet_fortios_utm\")\n| eval cat=coalesce(catdesc, category, urlfilter_cat, web_cat)\n| eval app_name=coalesce(app, appname, applist, app_cat)\n| eval act=lower(coalesce(action, utm_action))\n| eval device=coalesce(devname, dvc, host)\n| stats count by device act cat app_name hostname src\n| sort -count",
              "m": "Enable UTM logging on policies using web filter and application control; send UTM logs to a dedicated index if volume is high. Use the Fortinet TA for parsing. Build dashboards for top blocked categories and applications; alert on blocks for sensitive groups (executives, servers) or sudden spikes in `proxy`/`vpn` application blocks. Periodically review `act=blocked` outliers to refine explicit allow rules and DNS filter lists.",
              "z": "Bar chart (top categories), Table (user/src, app, action), Pie chart (block vs allow ratio).",
              "kfp": "Overblocking categories, new SaaS, and end-user workarounds can make web or app control logs noisy by design.",
              "refs": "[Splunkbase app 2846](https://splunkbase.splunk.com/app/2846), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-fortinet_fortigate` (Splunkbase 2846).\n• Ensure the following data sources are available: `sourcetype=fgt_utm`, `sourcetype=fortinet_fortios_utm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable UTM logging on policies using web filter and application control; send UTM logs to a dedicated index if volume is high. Use the Fortinet TA for parsing. Build dashboards for top blocked categories and applications; alert on blocks for sensitive groups (executives, servers) or sudden spikes in `proxy`/`vpn` application blocks. Periodically review `act=blocked` outliers to refine explicit allow rules and DNS filter lists.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype IN (\"fgt_utm\",\"fortinet_fortios_utm\")\n| eval cat=coalesce(catdesc, category, urlfilter_cat, web_cat)\n| eval app_name=coalesce(app, appname, applist, app_cat)\n| eval act=lower(coalesce(action, utm_action))\n| eval device=coalesce(devname, dvc, host)\n| stats count by device act cat app_name hostname src\n| sort -count\n```\n\nUnderstanding this SPL\n\n**FortiGate Web Filter and Application Control Events (Fortinet)** — FortiGate UTM combines web filtering (FortiGuard URL categories), DNS filtering, and application control in one policy pass. Reviewing blocked categories, high-risk apps, and allow/deny ratios shows policy drift, shadow IT, and risky user behavior without full packet capture. It also helps justify license spend and tune noisy categories that generate help-desk load.\n\nDocumented **Data sources**: `sourcetype=fgt_utm`, `sourcetype=fortinet_fortios_utm`. **App/TA** (typical add-on context): `TA-fortinet_fortigate` (Splunkbase 2846). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cat** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **app_name** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **act** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **device** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by device act cat app_name hostname src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(Web.bytes) as total_bytes\n  from datamodel=Web.Web\n  by Web.src Web.dest Web.uri_path Web.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate Web Filter and Application Control Events (Fortinet)** — FortiGate UTM combines web filtering (FortiGuard URL categories), DNS filtering, and application control in one policy pass. Reviewing blocked categories, high-risk apps, and allow/deny ratios shows policy drift, shadow IT, and risky user behavior without full packet capture. It also helps justify license spend and tune noisy categories that generate help-desk load.\n\nDocumented **Data sources**: `sourcetype=fgt_utm`, `sourcetype=fortinet_fortios_utm`. **App/TA** (typical add-on context): `TA-fortinet_fortigate` (Splunkbase 2846). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top categories), Table (user/src, app, action), Pie chart (block vs allow ratio).",
              "script": "",
              "premium": "",
              "hw": "Fortinet FortiGate models with FortiGuard web filtering and application control licensing",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add up web filter and app control events on FortiGate so overblocking, shadow apps, and policy drift are easier to see in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` count sum(Web.bytes) as total_bytes\n  from datamodel=Web.Web\n  by Web.src Web.dest Web.uri_path Web.action span=1h\n| sort -count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
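              ,"_exampleAlertSpl": "Hedged spike-detection sketch for proxy/VPN application blocks, as suggested in implementation; matching app names on \"proxy|vpn\" is an assumption because FortiGuard category and app names vary by release:\n\nindex=firewall sourcetype IN (\"fgt_utm\",\"fortinet_fortios_utm\")\n| eval act=lower(coalesce(action, utm_action))\n| eval app_name=lower(coalesce(app, appname, applist, app_cat))\n| where act=\"blocked\" AND match(app_name, \"proxy|vpn\")\n| timechart span=1h count as blocks\n\nAlert when blocks exceed a multiple of the trailing baseline rather than a fixed count, since UTM volume differs widely between sites."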
            },
            {
              "i": "5.2.47",
              "n": "Check Point ClusterXL Failover Events (Check Point)",
              "c": "critical",
              "f": "intermediate",
              "v": "ClusterXL provides gateway high availability via active-standby or active-active clusters. Failover events — whether planned (manual switchover) or unplanned (process crash, NIC failure, sync timeout) — cause brief traffic interruption and may indicate underlying hardware or software instability. Monitoring failover frequency, duration, and trigger reason supports SLA reporting and proactive hardware replacement before repeated failovers degrade user experience.",
              "t": "`Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259)",
              "d": "`sourcetype=cp_log` (cluster/system logs), SNMP traps",
              "q": "index=firewall sourcetype=\"cp_log\" earliest=-30d\n| where match(lower(product),\"(?i)cluster|clusterxl|ha\") OR match(lower(logdesc),\"(?i)failover|switchover|member.*down|sync.*fail\")\n| eval gw=coalesce(orig, src, hostname)\n| stats count earliest(_time) as first latest(_time) as last values(logdesc) as events by gw\n| sort -count",
              "m": "Forward Check Point system/cluster logs via Log Exporter or Smart-1 Cloud. Extract ClusterXL state change messages (member down, sync lost, failover). Alert on any unplanned failover immediately. Track failover frequency per cluster — more than 2 in 7 days warrants investigation. Correlate with gateway CPU/memory UC-10.11.35 to find resource-triggered failovers. Page on-call for active-active cluster degradation to single member.",
              "z": "Timeline (failover events), Table (clusters with recent failovers), Single value (failovers this week), Bar chart (failovers by reason).",
              "kfp": "Rehearsals, code upgrades, and link work can make cluster state logs busy without customer impact.",
              "refs": "[Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [CCX Add-on for Checkpoint Smart-1 Cloud](https://splunkbase.splunk.com/app/7259), [Splunkbase app 5402](https://splunkbase.splunk.com/app/5402)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259).\n• Ensure the following data sources are available: `sourcetype=cp_log` (cluster/system logs), SNMP traps.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Check Point system/cluster logs via Log Exporter or Smart-1 Cloud. Extract ClusterXL state change messages (member down, sync lost, failover). Alert on any unplanned failover immediately. Track failover frequency per cluster — more than 2 in 7 days warrants investigation. Correlate with gateway CPU/memory UC-10.11.35 to find resource-triggered failovers. Page on-call for active-active cluster degradation to single member.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"cp_log\" earliest=-30d\n| where match(lower(product),\"(?i)cluster|clusterxl|ha\") OR match(lower(logdesc),\"(?i)failover|switchover|member.*down|sync.*fail\")\n| eval gw=coalesce(orig, src, hostname)\n| stats count earliest(_time) as first latest(_time) as last values(logdesc) as events by gw\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Check Point ClusterXL Failover Events (Check Point)** — ClusterXL provides gateway high availability via active-standby or active-active clusters. Failover events — whether planned (manual switchover) or unplanned (process crash, NIC failure, sync timeout) — cause brief traffic interruption and may indicate underlying hardware or software instability. Monitoring failover frequency, duration, and trigger reason supports SLA reporting and proactive hardware replacement before repeated failovers degrade user experience.\n\nDocumented **Data sources**: `sourcetype=cp_log` (cluster/system logs), SNMP traps. **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"cp_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)cluster|clusterxl|ha\") OR match(lower(logdesc),\"(?i)failover|switchover|member.*down|sync.*…` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **gw** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by gw** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (failover events), Table (clusters with recent failovers), Single value (failovers this week), Bar chart (failovers by reason).",
              "script": "",
              "premium": "",
              "hw": "Check Point Quantum 6200/6400/6600/6800/7000/16200/16600/26000/28000, Check Point Quantum Maestro, Check Point CloudGuard Network, Smart-1 Cloud (management)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch high-availability handovers on Check Point so you can tell a drill or link issue from a silent split that could leave you exposed.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.48",
              "n": "Check Point Policy Install and Publish Tracking (Check Point)",
              "c": "high",
              "f": "beginner",
              "v": "Policy install pushes new rulebase and object changes from the management server (SmartConsole/Smart-1 Cloud) to enforcement gateways. A failed install leaves old policy active; a successful install with errors may silently break specific rules. Tracking install timestamps, success/failure, and who published enables change management correlation and root-cause analysis when traffic patterns shift unexpectedly after a policy push.",
              "t": "`Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259)",
              "d": "`sourcetype=cp_log` (audit/admin logs), SmartConsole audit trail",
              "q": "index=checkpoint sourcetype=\"cp_log\" earliest=-30d\n| where match(lower(product),\"(?i)smartconsole|smartcenter|management\") AND match(lower(operation),\"(?i)install|publish|verify\")\n| stats count earliest(_time) as first latest(_time) as last values(operation) as ops by administrator, target_gateway\n| sort -last",
              "m": "Forward management audit logs via Log Exporter. Track policy install duration (publish → install complete). Alert on install failures or partial installs (some gateways succeeded, others failed). Require ITSM ticket IDs in SmartConsole session descriptions for audit correlation. Report on policy change frequency by admin and gateway.",
              "z": "Table (recent policy installs), Timeline (publish/install events), Bar chart (installs by admin), Single value (failed installs this week).",
              "kfp": "Planned install windows, many admins, and repeated templates can make install and publish events look excessive.",
              "refs": "[Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [CCX Add-on for Checkpoint Smart-1 Cloud](https://splunkbase.splunk.com/app/7259), [Splunkbase app 5402](https://splunkbase.splunk.com/app/5402), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259).\n• Ensure the following data sources are available: `sourcetype=cp_log` (audit/admin logs), SmartConsole audit trail.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward management audit logs via Log Exporter. Track policy install duration (publish → install complete). Alert on install failures or partial installs (some gateways succeeded, others failed). Require ITSM ticket IDs in SmartConsole session descriptions for audit correlation. Report on policy change frequency by admin and gateway.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=checkpoint sourcetype=\"cp_log\" earliest=-30d\n| where match(lower(product),\"(?i)smartconsole|smartcenter|management\") AND match(lower(operation),\"(?i)install|publish|verify\")\n| stats count earliest(_time) as first latest(_time) as last values(operation) as ops by administrator, target_gateway\n| sort -last\n```\n\nUnderstanding this SPL\n\n**Check Point Policy Install and Publish Tracking (Check Point)** — Policy install pushes new rulebase and object changes from the management server (SmartConsole/Smart-1 Cloud) to enforcement gateways. A failed install leaves old policy active; a successful install with errors may silently break specific rules. Tracking install timestamps, success/failure, and who published enables change management correlation and root-cause analysis when traffic patterns shift unexpectedly after a policy push.\n\nDocumented **Data sources**: `sourcetype=cp_log` (audit/admin logs), SmartConsole audit trail. **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: checkpoint; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=checkpoint, sourcetype=\"cp_log\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)smartconsole|smartcenter|management\") AND match(lower(operation),\"(?i)install|publish|verify\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by administrator, target_gateway** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object _time span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Policy Install and Publish Tracking (Check Point)** — Policy install pushes new rulebase and object changes from the management server (SmartConsole/Smart-1 Cloud) to enforcement gateways. A failed install leaves old policy active; a successful install with errors may silently break specific rules. Tracking install timestamps, success/failure, and who published enables change management correlation and root-cause analysis when traffic patterns shift unexpectedly after a policy push.\n\nDocumented **Data sources**: `sourcetype=cp_log` (audit/admin logs), SmartConsole audit trail. **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recent policy installs), Timeline (publish/install events), Bar chart (installs by admin), Single value (failed installs this week).",
              "script": "",
              "premium": "",
              "hw": "Check Point Quantum 6200/6400/6600/6800/7000/16200/16600/26000/28000, Check Point Quantum Maestro, Check Point CloudGuard Network, Smart-1 Cloud (management)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow policy install and publish steps so a bad push, a late night edit, or a missing approval is not invisible when auditors ask why.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object span=1d",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.49",
              "n": "Check Point SecureXL Acceleration Status (Check Point)",
              "c": "high",
              "f": "intermediate",
              "v": "SecureXL offloads connection handling from the firewall kernel to an acceleration layer, increasing throughput by 2–10×. When SecureXL cannot accelerate a connection (due to complex NAT, certain blade inspections, or resource limits), traffic falls back to the slow path (Firewall kernel or even Medium path). A rising percentage of non-accelerated connections signals policy complexity growth, blade misconfiguration, or capacity limits — reducing effective throughput well before CPU saturation appears.",
              "t": "`Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259)",
              "d": "`sourcetype=cp_log` (performance/system logs), `fwaccel` CLI output via scripted input",
              "q": "index=firewall sourcetype=\"cp_log\" earliest=-24h\n| where match(lower(product),\"(?i)securexl|fwaccel\") OR match(lower(logdesc),\"(?i)accel|template|f2f|medium.path|pxl\")\n| eval gw=coalesce(orig, src, hostname)\n| eval path=case(\n    match(lower(_raw),\"(?i)accel|template\"),\"accelerated\",\n    match(lower(_raw),\"(?i)medium.path|pxl\"),\"medium_path\",\n    match(lower(_raw),\"(?i)f2f|slow|firewall.path\"),\"slow_path\",\n    1=1,\"unknown\")\n| stats count by gw, path\n| eventstats sum(count) as total by gw\n| eval pct=round(100*count/total,1)\n| where path!=\"accelerated\" AND pct>20",
              "m": "Use `fwaccel stat` and `fwaccel conns` via scripted input on the gateway (every 5 min) or parse SecureXL log messages from system events. Baseline accelerated vs slow-path ratio per gateway. Alert when slow-path percentage exceeds 30% sustained for 1 hour. Correlate with policy install events (UC-5.2.48) — new rules with unsupported features often shift traffic to slow path. Report on acceleration trends after blade enablement changes.",
              "z": "Pie chart (accelerated vs medium vs slow path), Line chart (acceleration ratio over time), Table (gateways with low acceleration), Bar chart (slow-path reasons).",
              "kfp": "Policy changes, debug toggles, and version shifts can make acceleration messages noisy until you set a baseline.",
              "refs": "[Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [CCX Add-on for Checkpoint Smart-1 Cloud](https://splunkbase.splunk.com/app/7259), [Splunkbase app 5402](https://splunkbase.splunk.com/app/5402), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259).\n• Ensure the following data sources are available: `sourcetype=cp_log` (performance/system logs), `fwaccel` CLI output via scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse `fwaccel stat` and `fwaccel conns` via scripted input on the gateway (every 5 min) or parse SecureXL log messages from system events. Baseline accelerated vs slow-path ratio per gateway. Alert when slow-path percentage exceeds 30% sustained for 1 hour. Correlate with policy install events (UC-5.2.48) — new rules with unsupported features often shift traffic to slow path. Report on acceleration trends after blade enablement changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"cp_log\" earliest=-24h\n| where match(lower(product),\"(?i)securexl|fwaccel\") OR match(lower(logdesc),\"(?i)accel|template|f2f|medium.path|pxl\")\n| eval gw=coalesce(orig, src, hostname)\n| eval path=case(\n    match(lower(_raw),\"(?i)accel|template\"),\"accelerated\",\n    match(lower(_raw),\"(?i)medium.path|pxl\"),\"medium_path\",\n    match(lower(_raw),\"(?i)f2f|slow|firewall.path\"),\"slow_path\",\n    1=1,\"unknown\")\n| stats count by gw, path\n| eventstats sum(count) as total by gw\n| eval pct=round(100*count/total,1)\n| where path!=\"accelerated\" AND pct>20\n```\n\nUnderstanding this SPL\n\n**Check Point SecureXL Acceleration Status (Check Point)** — SecureXL offloads connection handling from the firewall kernel to an acceleration layer, increasing throughput by 2–10×. When SecureXL cannot accelerate a connection (due to complex NAT, certain blade inspections, or resource limits), traffic falls back to the slow path (Firewall kernel or even Medium path). A rising percentage of non-accelerated connections signals policy complexity growth, blade misconfiguration, or capacity limits — reducing effective throughput well…\n\nDocumented **Data sources**: `sourcetype=cp_log` (performance/system logs), `fwaccel` CLI output via scripted input. **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"cp_log\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)securexl|fwaccel\") OR match(lower(logdesc),\"(?i)accel|template|f2f|medium.path|pxl\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **gw** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **path** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by gw, path** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` computes the gateway-wide total (`sum(count) as total`) **by gw** and appends it to every row without collapsing them, so each per-path count can be turned into a percentage of the gateway total.\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where path!=\"accelerated\" AND pct>20` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Performance.All_Performance\n  by All_Performance.dest _time span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point SecureXL Acceleration Status (Check Point)** — SecureXL offloads connection handling from the firewall kernel to an acceleration layer, increasing throughput by 2–10×. When SecureXL cannot accelerate a connection (due to complex NAT, certain blade inspections, or resource limits), traffic falls back to the medium path (PXL) or the slow path (F2F, full firewall kernel inspection). A rising percentage of non-accelerated connections signals policy complexity growth, blade misconfiguration, or capacity limits — reducing effective throughput well…\n\nDocumented **Data sources**: `sourcetype=cp_log` (performance/system logs), `fwaccel` CLI output via scripted input. **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.All_Performance` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
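To schedule it as an alert, a minimal savedsearches.conf sketch could look like the following (the stanza name, schedule, and recipient are placeholders, and the Step 2 SPL must be collapsed onto the single search line where the `...` stands):\n\n```ini\n[Check Point - SecureXL slow path ratio]\nsearch = index=firewall sourcetype=\"cp_log\" ... | where path!=\"accelerated\" AND pct>20\nenableSched = 1\ncron_schedule = */15 * * * *\ndispatch.earliest_time = -60m\ndispatch.latest_time = now\ncounttype = number of events\nrelation = greater than\nquantity = 0\nactions = email\naction.email.to = noc@example.com\ndisabled = 0\n```\n\n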
Consider visualizations: Pie chart (accelerated vs medium vs slow path), Line chart (acceleration ratio over time), Table (gateways with low acceleration), Bar chart (slow-path reasons).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Check Point Quantum 6200/6400/6600/6800/7000/16200/16600/26000/28000, Check Point Quantum Maestro, Check Point CloudGuard Network, Smart-1 Cloud (management)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at fast-path and acceleration state so a sudden return to slow inspection is a visible clue before users call about slowness.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Performance.All_Performance\n  by All_Performance.dest span=1h",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.50",
              "n": "Check Point CoreXL CPU Distribution (Check Point)",
              "c": "high",
              "f": "intermediate",
              "v": "CoreXL distributes firewall inspection across multiple CPU cores (Firewall Worker instances). Uneven load distribution — where one core saturates while others idle — reduces effective throughput and causes packet drops on that core. This often happens when large flows or specific protocols always hash to the same core. Detecting core imbalance before it causes visible packet loss prevents elusive intermittent connectivity issues.",
              "t": "`Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259)",
              "d": "`sourcetype=cp_log` (performance logs), `fw ctl multik stat` via scripted input",
              "q": "index=firewall sourcetype=\"cp_log\" earliest=-4h\n| where match(lower(product),\"(?i)corexl|multik|fw_worker\")\n| eval gw=coalesce(orig, src, hostname)\n| eval core_id=coalesce(core_id, fw_instance, worker_id)\n| eval cpu_pct=coalesce(cpu_usage, cpu_pct, cpu_util)\n| stats avg(cpu_pct) as avg_cpu max(cpu_pct) as max_cpu by gw, core_id\n| eventstats avg(avg_cpu) as gw_avg by gw\n| eval imbalance=round(max_cpu - gw_avg, 1)\n| where imbalance > 30 OR max_cpu > 85\n| sort -imbalance",
              "m": "Use `fw ctl multik stat` via scripted input (interval 300s) to capture per-core connection counts and CPU. Parse core ID and utilization. Alert when any single core exceeds 85% while the gateway average is below 50% — classic imbalance. Correlate with `fwaccel` to identify non-accelerated heavy flows. Tune CoreXL instance count and affinity after analysis.",
              "z": "Bar chart (CPU per core), Heatmap (core × time), Table (imbalanced gateways), Line chart (max core CPU trend).",
              "kfp": "Bursts, elephant flows, and short spikes can unbalance one core for minutes without a lasting performance problem.",
              "refs": "[Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [CCX Add-on for Checkpoint Smart-1 Cloud](https://splunkbase.splunk.com/app/7259), [Splunkbase app 5402](https://splunkbase.splunk.com/app/5402), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259).\n• Ensure the following data sources are available: `sourcetype=cp_log` (performance logs), `fw ctl multik stat` via scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse `fw ctl multik stat` via scripted input (interval 300s) to capture per-core connection counts and CPU. Parse core ID and utilization. Alert when any single core exceeds 85% while the gateway average is below 50% — classic imbalance. Correlate with `fwaccel` to identify non-accelerated heavy flows. Tune CoreXL instance count and affinity after analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"cp_log\" earliest=-4h\n| where match(lower(product),\"(?i)corexl|multik|fw_worker\")\n| eval gw=coalesce(orig, src, hostname)\n| eval core_id=coalesce(core_id, fw_instance, worker_id)\n| eval cpu_pct=coalesce(cpu_usage, cpu_pct, cpu_util)\n| stats avg(cpu_pct) as avg_cpu max(cpu_pct) as max_cpu by gw, core_id\n| eventstats avg(avg_cpu) as gw_avg by gw\n| eval imbalance=round(max_cpu - gw_avg, 1)\n| where imbalance > 30 OR max_cpu > 85\n| sort -imbalance\n```\n\nUnderstanding this SPL\n\n**Check Point CoreXL CPU Distribution (Check Point)** — CoreXL distributes firewall inspection across multiple CPU cores (Firewall Worker instances). Uneven load distribution — where one core saturates while others idle — reduces effective throughput and causes packet drops on that core. This often happens when large flows or specific protocols always hash to the same core. Detecting core imbalance before it causes visible packet loss prevents elusive intermittent connectivity issues.\n\nDocumented **Data sources**: `sourcetype=cp_log` (performance logs), `fw ctl multik stat` via scripted input. **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"cp_log\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)corexl|multik|fw_worker\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **gw** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **core_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cpu_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by gw, core_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` computes the gateway-wide average (`avg(avg_cpu) as gw_avg`) **by gw** and appends it to every per-core row without collapsing them, setting up the imbalance comparison.\n• `eval` defines or adjusts **imbalance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where imbalance > 30 OR max_cpu > 85` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Performance.All_Performance\n  by All_Performance.dest _time span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point CoreXL CPU Distribution (Check Point)** — CoreXL distributes firewall inspection across multiple CPU cores (Firewall Worker instances). Uneven load distribution — where one core saturates while others idle — reduces effective throughput and causes packet drops on that core. This often happens when large flows or specific protocols always hash to the same core. Detecting core imbalance before it causes visible packet loss prevents elusive intermittent connectivity issues.\n\nDocumented **Data sources**: `sourcetype=cp_log` (performance logs), `fw ctl multik stat` via scripted input. **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.All_Performance` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
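For the alerting rule described in Step 1 (one core hot while the gateway average stays low), a tighter sketch can be saved directly as the alert search (a minimal example; field names reuse the coalesce assumptions from Step 2):\n\n```spl\nindex=firewall sourcetype=\"cp_log\" earliest=-15m\n| where match(lower(product),\"(?i)corexl|multik|fw_worker\")\n| eval gw=coalesce(orig, src, hostname)\n| eval cpu_pct=coalesce(cpu_usage, cpu_pct, cpu_util)\n| stats max(cpu_pct) as max_core avg(cpu_pct) as gw_avg by gw\n| where max_core > 85 AND gw_avg < 50\n```\n\n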
Consider visualizations: Bar chart (CPU per core), Heatmap (core × time), Table (imbalanced gateways), Line chart (max core CPU trend).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Check Point Quantum 6200/6400/6600/6800/7000/16200/16600/26000/28000, Check Point Quantum Maestro, Check Point CloudGuard Network, Smart-1 Cloud (management)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at per-core load on the gateway so a hot engine on one worker does not become packet loss and mystery slowness for everyone else.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Performance.All_Performance\n  by All_Performance.dest span=1h",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.51",
              "n": "Check Point Log Rate and Capacity (Check Point)",
              "c": "high",
              "f": "beginner",
              "v": "Check Point gateways forward logs to the management server or Log Server. When log rate exceeds the management capacity or network bandwidth, logs are queued, delayed, or dropped — creating blind spots in security monitoring. Tracking log rate per gateway and comparing to Log Server capacity prevents log loss before it impacts compliance and incident detection.",
              "t": "`Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259)",
              "d": "`sourcetype=cp_log` (system/management logs), log server statistics",
              "q": "index=checkpoint sourcetype=\"cp_log\" earliest=-24h\n| bin _time span=5m\n| stats count as events_5m by _time, orig\n| eventstats avg(events_5m) as baseline by orig\n| where events_5m > baseline*3 OR events_5m < baseline*0.2\n| eval anomaly=if(events_5m > baseline*3, \"spike\", \"drop\")\n| table _time, orig, events_5m, baseline, anomaly",
              "m": "Baseline event rate per gateway. Alert on sudden spikes (possible attack or debug logging left enabled) and drops (log forwarding failure or connectivity issue). Monitor Log Server disk and queue depth. Correlate log drops with gateway CPU and network congestion.",
              "z": "Line chart (log rate per gateway), Single value (current aggregate rate), Table (anomalies), Bar chart (rate by gateway).",
              "kfp": "Backup windows, debug sessions, and compliance exports can all raise log rate without a logging-system failure.",
              "refs": "[Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [CCX Add-on for Checkpoint Smart-1 Cloud](https://splunkbase.splunk.com/app/7259), [Splunkbase app 5402](https://splunkbase.splunk.com/app/5402)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259).\n• Ensure the following data sources are available: `sourcetype=cp_log` (system/management logs), log server statistics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline event rate per gateway. Alert on sudden spikes (possible attack or debug logging left enabled) and drops (log forwarding failure or connectivity issue). Monitor Log Server disk and queue depth. Correlate log drops with gateway CPU and network congestion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=checkpoint sourcetype=\"cp_log\" earliest=-24h\n| bin _time span=5m\n| stats count as events_5m by _time, orig\n| eventstats avg(events_5m) as baseline by orig\n| where events_5m > baseline*3 OR events_5m < baseline*0.2\n| eval anomaly=if(events_5m > baseline*3, \"spike\", \"drop\")\n| table _time, orig, events_5m, baseline, anomaly\n```\n\nUnderstanding this SPL\n\n**Check Point Log Rate and Capacity (Check Point)** — Check Point gateways forward logs to the management server or Log Server. When log rate exceeds the management capacity or network bandwidth, logs are queued, delayed, or dropped — creating blind spots in security monitoring. Tracking log rate per gateway and comparing to Log Server capacity prevents log loss before it impacts compliance and incident detection.\n\nDocumented **Data sources**: `sourcetype=cp_log` (system/management logs), log server statistics. **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: checkpoint; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=checkpoint, sourcetype=\"cp_log\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, orig** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` computes the per-gateway baseline (`avg(events_5m) as baseline`) **by orig** and appends it to every 5-minute row without collapsing them.\n• Filters the current rows with `where events_5m > baseline*3 OR events_5m < baseline*0.2` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **anomaly** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `table` keeps only the reporting columns: _time, orig, events_5m, baseline, anomaly.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (log rate per gateway), Single value (current aggregate rate), Table (anomalies), Bar chart (rate by gateway).",
              "script": "",
              "premium": "",
              "hw": "Check Point Quantum 6200/6400/6600/6800/7000/16200/16600/26000/28000, Check Point Quantum Maestro, Check Point CloudGuard Network, Smart-1 Cloud (management)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch log rate and volume from the gateway so a flood of events or a near-full buffer is a warning before you lose the trail entirely.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.52",
              "n": "Check Point Anti-Spoofing Violations (Check Point)",
              "c": "critical",
              "f": "beginner",
              "v": "Anti-spoofing validates that packets arriving on an interface have source IPs consistent with the interface's defined topology. Violations indicate either network misconfiguration (asymmetric routing, missing routes) or actual IP spoofing attacks. High violation rates from specific sources warrant immediate investigation as they may mask data exfiltration or DDoS reflection.",
              "t": "`Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259)",
              "d": "`sourcetype=cp_log` (firewall logs with anti-spoofing drops)",
              "q": "index=firewall sourcetype=\"cp_log\" earliest=-24h\n| where match(lower(action),\"(?i)drop\") AND match(lower(logdesc),\"(?i)anti.?spoof|spoofing\")\n| stats count by src, inzone, outzone, rule_name, orig\n| sort -count",
              "m": "Forward firewall drop logs including anti-spoofing events. Map `inzone` and `outzone` to topology to distinguish misconfiguration from attacks. Alert on new source IPs triggering anti-spoofing. Correlate with routing changes. Tune anti-spoofing topology definitions after legitimate asymmetric routing is identified.",
              "z": "Table (spoofing violations by source), Bar chart (violations by interface/zone), Line chart (violation trend), Map (source geo if available).",
              "kfp": "Asymmetric routing, late updates, and lab VLANs can trigger spoofing detections you still need to verify.",
              "refs": "[Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [CCX Add-on for Checkpoint Smart-1 Cloud](https://splunkbase.splunk.com/app/7259), [Splunkbase app 5402](https://splunkbase.splunk.com/app/5402), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259).\n• Ensure the following data sources are available: `sourcetype=cp_log` (firewall logs with anti-spoofing drops).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward firewall drop logs including anti-spoofing events. Map `inzone` and `outzone` to topology to distinguish misconfiguration from attacks. Alert on new source IPs triggering anti-spoofing. Correlate with routing changes. Tune anti-spoofing topology definitions after legitimate asymmetric routing is identified.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"cp_log\" earliest=-24h\n| where match(lower(action),\"(?i)drop\") AND match(lower(logdesc),\"(?i)anti.?spoof|spoofing\")\n| stats count by src, inzone, outzone, rule_name, orig\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Check Point Anti-Spoofing Violations (Check Point)** — Anti-spoofing validates that packets arriving on an interface have source IPs consistent with the interface's defined topology. Violations indicate either network misconfiguration (asymmetric routing, missing routes) or actual IP spoofing attacks. High violation rates from specific sources warrant immediate investigation as they may mask data exfiltration or DDoS reflection.\n\nDocumented **Data sources**: `sourcetype=cp_log` (firewall logs with anti-spoofing drops). **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"cp_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(action),\"(?i)drop\") AND match(lower(logdesc),\"(?i)anti.?spoof|spoofing\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, inzone, outzone, rule_name, orig** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src All_Traffic.dest span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Anti-Spoofing Violations (Check Point)** — Anti-spoofing validates that packets arriving on an interface have source IPs consistent with the interface's defined topology. 
Violations indicate either network misconfiguration (asymmetric routing, missing routes) or actual IP spoofing attacks. High violation rates from specific sources warrant immediate investigation as they may mask data exfiltration or DDoS reflection.\n\nDocumented **Data sources**: `sourcetype=cp_log` (firewall logs with anti-spoofing drops). **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (spoofing violations by source), Bar chart (violations by interface/zone), Line chart (violation trend), Map (source geo if available).",
              "script": "",
              "premium": "",
              "hw": "Check Point Quantum 6200/6400/6600/6800/7000/16200/16600/26000/28000, Check Point Quantum Maestro, Check Point CloudGuard Network, Smart-1 Cloud (management)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count anti-spoofing hits so bad addressing, late routing change, and miswired segments surface before they turn into silent drops.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src All_Traffic.dest span=1h",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.53",
              "n": "Check Point HTTPS Inspection Status and Bypass (Check Point)",
              "c": "high",
              "f": "intermediate",
              "v": "HTTPS inspection (SSL/TLS decryption) enables deep packet inspection of encrypted traffic. Connections that bypass inspection — due to certificate pinning, bypass rules, or resource limits — create visibility gaps. Monitoring bypass rates ensures that security coverage remains effective and identifies applications or categories that need policy updates.",
              "t": "`Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259)",
              "d": "`sourcetype=cp_log` (firewall/HTTPS inspection logs)",
              "q": "index=firewall sourcetype=\"cp_log\" earliest=-24h\n| where match(lower(product),\"(?i)https.?inspection|ssl.?inspection\") OR match(lower(logdesc),\"(?i)bypass|inspect|decrypt\")\n| eval inspected=if(match(lower(logdesc),\"(?i)inspect|decrypt\") AND NOT match(lower(logdesc),\"(?i)bypass|skip|fail\"),1,0)\n| stats count sum(inspected) as inspected_count by rule_name, category\n| eval bypass_pct=round(100*(count-inspected_count)/count,1)\n| where bypass_pct > 20\n| sort -bypass_pct",
              "m": "Enable HTTPS inspection logging (log bypassed and inspected connections). Baseline bypass rate per category. Alert when bypass percentage increases (new cert-pinned apps, resource limits). Report on inspection coverage for compliance (PCI DSS, SOX). Correlate with gateway CPU — high CPU can trigger automatic inspection bypass.",
              "z": "Pie chart (inspected vs bypassed), Bar chart (bypass by category), Line chart (bypass rate trend), Table (top bypass rules).",
              "kfp": "Legacy clients, pinned apps, and certificate work can make inspection status messages look worse than the risk.",
              "refs": "[Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [CCX Add-on for Checkpoint Smart-1 Cloud](https://splunkbase.splunk.com/app/7259), [Splunkbase app 5402](https://splunkbase.splunk.com/app/5402), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259).\n• Ensure the following data sources are available: `sourcetype=cp_log` (firewall/HTTPS inspection logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable HTTPS inspection logging (log bypassed and inspected connections). Baseline bypass rate per category. Alert when bypass percentage increases (new cert-pinned apps, resource limits). Report on inspection coverage for compliance (PCI DSS, SOX). Correlate with gateway CPU — high CPU can trigger automatic inspection bypass.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"cp_log\" earliest=-24h\n| where match(lower(product),\"(?i)https.?inspection|ssl.?inspection\") OR match(lower(logdesc),\"(?i)bypass|inspect|decrypt\")\n| eval inspected=if(match(lower(logdesc),\"(?i)inspect|decrypt\") AND NOT match(lower(logdesc),\"(?i)bypass|skip|fail\"),1,0)\n| stats count sum(inspected) as inspected_count by rule_name, category\n| eval bypass_pct=round(100*(count-inspected_count)/count,1)\n| where bypass_pct > 20\n| sort -bypass_pct\n```\n\nUnderstanding this SPL\n\n**Check Point HTTPS Inspection Status and Bypass (Check Point)** — HTTPS inspection (SSL/TLS decryption) enables deep packet inspection of encrypted traffic. Connections that bypass inspection — due to certificate pinning, bypass rules, or resource limits — create visibility gaps. Monitoring bypass rates ensures that security coverage remains effective and identifies applications or categories that need policy updates.\n\nDocumented **Data sources**: `sourcetype=cp_log` (firewall/HTTPS inspection logs). **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"cp_log\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)https.?inspection|ssl.?inspection\") OR match(lower(logdesc),\"(?i)bypass|inspect|decrypt\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **inspected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by rule_name, category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **bypass_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where bypass_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.action _time span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point HTTPS Inspection Status and Bypass (Check Point)** — HTTPS inspection (SSL/TLS decryption) enables deep packet inspection of encrypted traffic. Connections that bypass inspection — due to certificate pinning, bypass rules, or resource limits — create visibility gaps. Monitoring bypass rates ensures that security coverage remains effective and identifies applications or categories that need policy updates.\n\nDocumented **Data sources**: `sourcetype=cp_log` (firewall/HTTPS inspection logs). **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (inspected vs bypassed), Bar chart (bypass by category), Line chart (bypass rate trend), Table (top bypass rules).",
              "script": "",
              "premium": "",
              "hw": "Check Point Quantum 6200/6400/6600/6800/7000/16200/16600/26000/28000, Check Point Quantum Maestro, Check Point CloudGuard Network, Smart-1 Cloud (management)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at who is inspected, bypassed, or failing HTTPS checks so you can keep encryption policy honest and still serve legacy apps fairly.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.action span=1h",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.2.54",
              "n": "Check Point Gateway Connection Table Utilization (Check Point)",
              "c": "critical",
              "f": "intermediate",
              "v": "Each Check Point gateway has a finite concurrent connection table (configurable, typically 500K–25M depending on appliance). When utilization approaches the limit, new connections are dropped — causing application failures and user complaints. Unlike CPU, connection table exhaustion can happen suddenly during attacks or application bursts with little warning.",
              "t": "`Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259)",
              "d": "`sourcetype=cp_log` (system/performance logs), `fw tab -t connections -s` via scripted input",
              "q": "index=firewall sourcetype=\"cp_log\" earliest=-4h\n| where match(lower(product),\"(?i)firewall\") AND (match(lower(logdesc),\"(?i)connection.*table|conn.*limit|aggressive.aging\") OR isnotnull(connections_count))\n| eval gw=coalesce(orig, src, hostname)\n| eval conn_count=coalesce(connections_count, concurrent_connections)\n| eval conn_limit=coalesce(connections_limit, table_limit)\n| eval util_pct=if(isnotnull(conn_limit) AND conn_limit>0, round(100*conn_count/conn_limit,1), null())\n| stats latest(conn_count) as conns latest(util_pct) as util_pct by gw\n| where util_pct > 70 OR match(lower(logdesc),\"(?i)aggressive.aging\")\n| sort -util_pct",
              "m": "Use `fw tab -t connections -s` via scripted input (every 60s) to capture current and maximum connection counts. Alternatively parse system log messages about connection limits and aggressive aging (automatic cleanup when table is near capacity). Alert at 75% utilization. Page at 90%. Correlate with NAT pool usage and DDoS indicators. Enable aggressive aging thresholds as a safety net but alert when triggered.",
              "z": "Gauge (connection table utilization %), Line chart (connections over time), Single value (peak utilization today), Table (gateways approaching limit).",
              "kfp": "Large downloads, more remote users, and new sites can use more connections than a quiet baseline from last month.",
              "refs": "[Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [CCX Add-on for Checkpoint Smart-1 Cloud](https://splunkbase.splunk.com/app/7259), [Splunkbase app 5402](https://splunkbase.splunk.com/app/5402), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259).\n• Ensure the following data sources are available: `sourcetype=cp_log` (system/performance logs), `fw tab -t connections -s` via scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse `fw tab -t connections -s` via scripted input (every 60s) to capture current and maximum connection counts. Alternatively parse system log messages about connection limits and aggressive aging (automatic cleanup when table is near capacity). Alert at 75% utilization. Page at 90%. Correlate with NAT pool usage and DDoS indicators. Enable aggressive aging thresholds as a safety net but alert when triggered.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"cp_log\" earliest=-4h\n| where match(lower(product),\"(?i)firewall\") AND (match(lower(logdesc),\"(?i)connection.*table|conn.*limit|aggressive.aging\") OR isnotnull(connections_count))\n| eval gw=coalesce(orig, src, hostname)\n| eval conn_count=coalesce(connections_count, concurrent_connections)\n| eval conn_limit=coalesce(connections_limit, table_limit)\n| eval util_pct=if(isnotnull(conn_limit) AND conn_limit>0, round(100*conn_count/conn_limit,1), null())\n| stats latest(conn_count) as conns latest(util_pct) as util_pct by gw\n| where util_pct > 70 OR match(lower(logdesc),\"(?i)aggressive.aging\")\n| sort -util_pct\n```\n\nUnderstanding this SPL\n\n**Check Point Gateway Connection Table Utilization (Check Point)** — Each Check Point gateway has a finite concurrent connection table (configurable, typically 500K–25M depending on appliance). When utilization approaches the limit, new connections are dropped — causing application failures and user complaints. Unlike CPU, connection table exhaustion can happen suddenly during attacks or application bursts with little warning.\n\nDocumented **Data sources**: `sourcetype=cp_log` (system/performance logs), `fw tab -t connections -s` via scripted input. **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"cp_log\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)firewall\") AND (match(lower(logdesc),\"(?i)connection.*table|conn.*limit|aggressive.aging\") …` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **gw** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **conn_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **conn_limit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by gw** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where util_pct > 70 OR match(lower(logdesc),\"(?i)aggressive.aging\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Performance.All_Performance\n  by All_Performance.dest span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Gateway Connection Table Utilization (Check Point)** — Each Check Point gateway has a finite concurrent connection table (configurable, typically 500K–25M depending on appliance). When utilization approaches the limit, new connections are dropped — causing application failures and user complaints. Unlike CPU, connection table exhaustion can happen suddenly during attacks or application bursts with little warning.\n\nDocumented **Data sources**: `sourcetype=cp_log` (system/performance logs), `fw tab -t connections -s` via scripted input. **App/TA** (typical add-on context): `Splunk_TA_checkpoint` (Splunkbase 5402), Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.All_Performance` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
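\n\nTo operationalize the 75%/90% thresholds from Step 1, one minimal sketch tiers the alert severity in SPL. It reuses the coalesced field names from the search above, so rename them if your add-on extracts them differently:\n\n```spl\nindex=firewall sourcetype=\"cp_log\" earliest=-15m\n| eval gw=coalesce(orig, src, hostname)\n| eval conn_count=coalesce(connections_count, concurrent_connections)\n| eval conn_limit=coalesce(connections_limit, table_limit)\n| eval util_pct=if(isnotnull(conn_limit) AND conn_limit>0, round(100*conn_count/conn_limit,1), null())\n| stats latest(util_pct) as util_pct by gw\n| eval severity=case(util_pct>=90,\"page\", util_pct>=75,\"warn\", true(),\"ok\")\n| where severity!=\"ok\"\n```\n\nSaved as a 15-minute alert, the `severity` column can drive different actions (an email action for \"warn\", a paging webhook for \"page\").\n\n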
Consider visualizations: Gauge (connection table utilization %), Line chart (connections over time), Single value (peak utilization today), Table (gateways approaching limit).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Check Point Quantum 6200/6400/6600/6800/7000/16200/16600/26000/28000, Check Point Quantum Maestro, Check Point CloudGuard Network, Smart-1 Cloud (management)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow how full the connection table is so sudden growth, leaks, and attacks that eat state are visible with room to act.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Performance.All_Performance\n  by All_Performance.dest span=1h",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.2,
          "qd": {
            "gold": 1,
            "silver": 0,
            "bronze": 53,
            "none": 0
          }
        },
        {
          "i": "5.3",
          "n": "Load Balancers & ADCs",
          "u": [
            {
              "i": "5.3.1",
              "n": "Pool Member Health Status (F5 BIG-IP)",
              "c": "critical",
              "f": "beginner",
              "v": "Offline pool members reduce capacity. All members down = complete service outage.",
              "t": "`Splunk_TA_f5-bigip`, syslog",
              "d": "`sourcetype=f5:bigip:syslog`",
              "q": "index=network sourcetype=\"f5:bigip:syslog\" (\"pool member\" AND (\"down\" OR \"up\" OR \"offline\"))\n| rex \"Pool (?<pool>\\S+) member (?<member>\\S+) monitor status (?<status>\\w+)\"\n| table _time host pool member status | sort -_time",
              "m": "Forward F5 syslog (LTM log level). Install TA. Alert when pool members go down. Critical alert when all members in a pool offline.",
              "z": "Status grid (green/red per member), Table, Timeline.",
              "kfp": "Backend servers are often drained on purpose for deploys, capacity tests, or standby; down members are not always broken.",
              "refs": "[Splunk_TA_f5-bigip](https://splunkbase.splunk.com/app/2680), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_f5-bigip`, syslog.\n• Ensure the following data sources are available: `sourcetype=f5:bigip:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward F5 syslog (LTM log level). Install TA. Alert when pool members go down. Critical alert when all members in a pool offline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"f5:bigip:syslog\" (\"pool member\" AND (\"down\" OR \"up\" OR \"offline\"))\n| rex \"Pool (?<pool>\\S+) member (?<member>\\S+) monitor status (?<status>\\w+)\"\n| table _time host pool member status | sort -_time\n```\n\nUnderstanding this SPL\n\n**Pool Member Health Status (F5 BIG-IP)** — Offline pool members reduce capacity. All members down = complete service outage.\n\nDocumented **Data sources**: `sourcetype=f5:bigip:syslog`. **App/TA** (typical add-on context): `Splunk_TA_f5-bigip`, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: f5:bigip:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"f5:bigip:syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **Pool Member Health Status (F5 BIG-IP)**): table _time host pool member status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Pool Member Health Status (F5 BIG-IP)** — Offline pool members reduce capacity. All members down = complete service outage.\n\nDocumented **Data sources**: `sourcetype=f5:bigip:syslog`. **App/TA** (typical add-on context): `Splunk_TA_f5-bigip`, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (green/red per member), Table, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We read pool member up and down messages from the load balancer so a drained server or a real failure is obvious before customers complain.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "f5",
                "syslog"
              ],
              "em": [
                "f5_bigip"
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.2",
              "n": "Virtual Server Availability (F5 BIG-IP)",
              "c": "critical",
              "f": "beginner",
              "v": "VIP down = application unreachable. Direct service impact.",
              "t": "`Splunk_TA_f5-bigip`, SNMP",
              "d": "`sourcetype=f5:bigip:syslog`, iControl REST",
              "q": "index=network sourcetype=\"f5:bigip:syslog\" \"virtual\" (\"disabled\" OR \"offline\" OR \"unavailable\")\n| table _time host virtual_server status | sort -_time",
              "m": "Forward syslog. Monitor VIP status via SNMP or iControl REST. Alert on any state change away from \"available\".",
              "z": "Status indicator per VIP, Events timeline (critical).",
              "kfp": "Planned changes, test VIPs, and short maintenance windows can disable a listener without user-visible failure.",
              "refs": "[Splunk_TA_f5-bigip](https://splunkbase.splunk.com/app/2680), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_f5-bigip`, SNMP.\n• Ensure the following data sources are available: `sourcetype=f5:bigip:syslog`, iControl REST.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward syslog. Monitor VIP status via SNMP or iControl REST. Alert on any state change away from \"available\".\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"f5:bigip:syslog\" \"virtual\" (\"disabled\" OR \"offline\" OR \"unavailable\")\n| table _time host virtual_server status | sort -_time\n```\n\nUnderstanding this SPL\n\n**Virtual Server Availability (F5 BIG-IP)** — VIP down = application unreachable. Direct service impact.\n\nDocumented **Data sources**: `sourcetype=f5:bigip:syslog`, iControl REST. **App/TA** (typical add-on context): `Splunk_TA_f5-bigip`, SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: f5:bigip:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"f5:bigip:syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Virtual Server Availability (F5 BIG-IP)**): table _time host virtual_server status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(Web.bytes) as total_bytes\n  from datamodel=Web.Web\n  by Web.src Web.dest Web.uri_path Web.status span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Virtual Server Availability (F5 BIG-IP)** — VIP down = application unreachable. Direct service impact.\n\nDocumented **Data sources**: `sourcetype=f5:bigip:syslog`, iControl REST. **App/TA** (typical add-on context): `Splunk_TA_f5-bigip`, SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Status indicator per VIP, Events timeline (critical).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch whether virtual services are up or disabled on the same box so a quiet listener or a bad profile does not go unnoticed in change week.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` count sum(Web.bytes) as total_bytes\n  from datamodel=Web.Web\n  by Web.src Web.dest Web.uri_path Web.status span=1h\n| sort -count",
              "e": [
                "f5",
                "snmp"
              ],
              "em": [
                "f5_bigip",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.3",
              "n": "Connection and Throughput Trending (F5 BIG-IP)",
              "c": "medium",
              "f": "beginner",
              "v": "Reveals application demand patterns. Useful for capacity planning and DDoS detection.",
              "t": "`Splunk_TA_f5-bigip`, SNMP",
              "d": "SNMP F5-BIGIP-LTM-MIB",
              "q": "index=network sourcetype=\"snmp:f5\"\n| timechart span=5m sum(clientside_curConns) as connections by virtual_server",
              "m": "Poll F5 via SNMP or iControl REST for VIP statistics. Baseline patterns and alert on anomalies.",
              "z": "Line chart per VIP, Area chart (throughput), Table.",
              "kfp": "Stats-only telemetry can spike during tests or polling changes; match to the device graphs before calling an incident.",
              "refs": "[Splunk_TA_f5-bigip](https://splunkbase.splunk.com/app/2680), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_f5-bigip`, SNMP.\n• Ensure the following data sources are available: SNMP F5-BIGIP-LTM-MIB.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll F5 via SNMP or iControl REST for VIP statistics. Baseline patterns and alert on anomalies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:f5\"\n| timechart span=5m sum(clientside_curConns) as connections by virtual_server\n```\n\nUnderstanding this SPL\n\n**Connection and Throughput Trending (F5 BIG-IP)** — Reveals application demand patterns. Useful for capacity planning and DDoS detection.\n\nDocumented **Data sources**: SNMP F5-BIGIP-LTM-MIB. **App/TA** (typical add-on context): `Splunk_TA_f5-bigip`, SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:f5. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:f5\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by virtual_server** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Connection and Throughput Trending (F5 BIG-IP)** — Reveals application demand patterns. Useful for capacity planning and DDoS detection.\n\nDocumented **Data sources**: SNMP F5-BIGIP-LTM-MIB. **App/TA** (typical add-on context): `Splunk_TA_f5-bigip`, SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart per VIP, Area chart (throughput), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend client counts and traffic from the appliance so a surprise spike, a quiet VIP, or a long drift is easy to see next to the same graphs in the device UI.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "f5",
                "snmp"
              ],
              "em": [
                "f5_bigip",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.4",
              "n": "SSL Certificate Expiry (F5 BIG-IP)",
              "c": "high",
              "f": "intermediate",
              "v": "Expired certificates on load balancers cause browser warnings or connection failures. Most preventable outage.",
              "t": "`Splunk_TA_f5-bigip`, custom scripted input",
              "d": "iControl REST API (`/mgmt/tm/sys/crypto/cert`)",
              "q": "index=network sourcetype=\"f5:certificate_inventory\"\n| eval days_left=round((expiry_epoch-now())/86400,0) | where days_left<90\n| sort days_left | table host cert_name days_left expiry_date",
              "m": "Scripted input querying iControl REST for certs. Run daily. Alert at 90/60/30/7 day thresholds.",
              "z": "Table sorted by days to expiry, Single value (expiring <30d), Status indicator.",
              "kfp": "Automation renewals, name changes, and short-lived test certs can look urgent until you read who owns the name.",
              "refs": "[Splunk_TA_f5-bigip](https://splunkbase.splunk.com/app/2680)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_f5-bigip`, custom scripted input.\n• Ensure the following data sources are available: iControl REST API (`/mgmt/tm/sys/crypto/cert`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted input querying iControl REST for certs. Run daily. Alert at 90/60/30/7 day thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"f5:certificate_inventory\"\n| eval days_left=round((expiry_epoch-now())/86400,0) | where days_left<90\n| sort days_left | table host cert_name days_left expiry_date\n```\n\nUnderstanding this SPL\n\n**SSL Certificate Expiry (F5 BIG-IP)** — Expired certificates on load balancers cause browser warnings or connection failures. Most preventable outage.\n\nDocumented **Data sources**: iControl REST API (`/mgmt/tm/sys/crypto/cert`). **App/TA** (typical add-on context): `Splunk_TA_f5-bigip`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: f5:certificate_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"f5:certificate_inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left<90` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **SSL Certificate Expiry (F5 BIG-IP)**): table host cert_name days_left expiry_date\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table sorted by days to expiry, Single value (expiring <30d), Status indicator.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at how many days are left on listener certificates on the big-ip so you can renew or swap in time, not the night a site breaks.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "f5"
              ],
              "em": [
                "f5_bigip"
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.5",
              "n": "HTTP Error Rate by VIP (F5 BIG-IP)",
              "c": "high",
              "f": "intermediate",
              "v": "Backend 5xx errors indicate application issues. Per-VIP tracking isolates degraded services.",
              "t": "`Splunk_TA_f5-bigip`, request logging",
              "d": "F5 request logging profile",
              "q": "index=network sourcetype=\"f5:bigip:ltm:http\"\n| eval is_error=if(response_code>=500,1,0)\n| timechart span=5m sum(is_error) as errors, count as total by virtual_server\n| eval error_rate=round(errors/total*100,2) | where error_rate>5",
              "m": "Enable F5 request logging profile on VIPs. Alert when 5xx rate >5% over 5 minutes.",
              "z": "Line chart (error rate), Table (VIP, error rate), Single value.",
              "kfp": "Bad releases, client retries, and upstream blips can raise error rates; compare to the app team before the load-balancer runbook.",
              "refs": "[Splunk_TA_f5-bigip](https://splunkbase.splunk.com/app/2680), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_f5-bigip`, request logging.\n• Ensure the following data sources are available: F5 request logging profile.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable F5 request logging profile on VIPs. Alert when 5xx rate >5% over 5 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"f5:bigip:ltm:http\"\n| eval is_error=if(response_code>=500,1,0)\n| timechart span=5m sum(is_error) as errors, count as total by virtual_server\n| eval error_rate=round(errors/total*100,2) | where error_rate>5\n```\n\nUnderstanding this SPL\n\n**HTTP Error Rate by VIP (F5 BIG-IP)** — Backend 5xx errors indicate application issues. Per-VIP tracking isolates degraded services.\n\nDocumented **Data sources**: F5 request logging profile. **App/TA** (typical add-on context): `Splunk_TA_f5-bigip`, request logging. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: f5:bigip:ltm:http. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"f5:bigip:ltm:http\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_error** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by virtual_server** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate>5` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.status>=400\n  by Web.src Web.uri_path Web.status span=5m\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**HTTP Error Rate by VIP (F5 BIG-IP)** — Backend 5xx errors indicate application issues. Per-VIP tracking isolates degraded services.\n\nDocumented **Data sources**: F5 request logging profile. **App/TA** (typical add-on context): `Splunk_TA_f5-bigip`, request logging. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (error rate), Table (VIP, error rate), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We group server-side error rates by service so a bad release or a sick back end shows up on the same chart your users already feel.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.status>=400\n  by Web.src Web.uri_path Web.status span=5m\n| sort -count",
              "e": [
                "f5"
              ],
              "em": [
                "f5_bigip"
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.6",
              "n": "Response Time Degradation (F5 BIG-IP)",
              "c": "high",
              "f": "beginner",
              "v": "Increasing response times indicate backend bottlenecks before they become outages.",
              "t": "`Splunk_TA_f5-bigip`",
              "d": "F5 request logging (server_latency)",
              "q": "index=network sourcetype=\"f5:bigip:ltm:http\"\n| timechart span=5m perc95(server_latency) as p95 by virtual_server | where p95>2000",
              "m": "Enable request logging with server-side timing. Track P95 latency per VIP. Alert when exceeding SLA threshold.",
              "z": "Line chart (P50/P95/P99), Table, Single value.",
              "kfp": "Traffic bursts, long downloads, and cold caches can raise latency for short spans without a chronic problem.",
              "refs": "[Splunk_TA_f5-bigip](https://splunkbase.splunk.com/app/2680), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_f5-bigip`.\n• Ensure the following data sources are available: F5 request logging (server_latency).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable request logging with server-side timing. Track P95 latency per VIP. Alert when exceeding SLA threshold.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"f5:bigip:ltm:http\"\n| timechart span=5m perc95(server_latency) as p95 by virtual_server | where p95>2000\n```\n\nUnderstanding this SPL\n\n**Response Time Degradation (F5 BIG-IP)** — Increasing response times indicate backend bottlenecks before they become outages.\n\nDocumented **Data sources**: F5 request logging (server_latency). **App/TA** (typical add-on context): `Splunk_TA_f5-bigip`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: f5:bigip:ltm:http. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"f5:bigip:ltm:http\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by virtual_server** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where p95>2000` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Web.bytes) as avg_bytes count\n  from datamodel=Web.Web\n  by Web.uri_path Web.status span=5m\n| sort -avg_bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Response Time Degradation (F5 BIG-IP)** — Increasing response times indicate backend bottlenecks before they become outages.\n\nDocumented **Data sources**: F5 request logging (server_latency). **App/TA** (typical add-on context): `Splunk_TA_f5-bigip`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (P50/P95/P99), Table, Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how slow answers are from the load balancer to important URLs so a creeping delay is a signal before a full timeout storm.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` avg(Web.bytes) as avg_bytes count\n  from datamodel=Web.Web\n  by Web.uri_path Web.status span=5m\n| sort -avg_bytes",
              "e": [
                "f5"
              ],
              "em": [
                "f5_bigip"
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.7",
              "n": "Session Persistence Issues (F5 BIG-IP)",
              "c": "medium",
              "f": "intermediate",
              "v": "Broken persistence causes lost sessions, shopping carts, or random logouts.",
              "t": "`Splunk_TA_f5-bigip`",
              "d": "F5 LTM logs, request logs",
              "q": "index=network sourcetype=\"f5:bigip:syslog\" \"persistence\" (\"failed\" OR \"expired\")\n| stats count by virtual_server, persistence_type | sort -count",
              "m": "Monitor persistence failures. Track same client hitting different backends from request logs.",
              "z": "Table, Line chart, Bar chart.",
              "kfp": "Sessions ending, new builds, and cookie changes can make persistence look flaky even when the app is fine.",
              "refs": "[Splunk_TA_f5-bigip](https://splunkbase.splunk.com/app/2680)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_f5-bigip`.\n• Ensure the following data sources are available: F5 LTM logs, request logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor persistence failures. Track same client hitting different backends from request logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"f5:bigip:syslog\" \"persistence\" (\"failed\" OR \"expired\")\n| stats count by virtual_server, persistence_type | sort -count\n```\n\nUnderstanding this SPL\n\n**Session Persistence Issues (F5 BIG-IP)** — Broken persistence causes lost sessions, shopping carts, or random logouts.\n\nDocumented **Data sources**: F5 LTM logs, request logs. **App/TA** (typical add-on context): `Splunk_TA_f5-bigip`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: f5:bigip:syslog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"f5:bigip:syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by virtual_server, persistence_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Line chart, Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for failed or odd persistence so sticky sessions, shopping carts, and long logins do not break quietly after a deploy.",
              "mtype": [
                "Performance",
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "f5"
              ],
              "em": [
                "f5_bigip"
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.8",
              "n": "WAF Policy Violations (F5 BIG-IP ASM)",
              "c": "high",
              "f": "beginner",
              "v": "WAF violations indicate attacks — SQL injection, XSS, command injection. Trending reveals campaigns.",
              "t": "`Splunk_TA_f5-bigip` (ASM)",
              "d": "`sourcetype=f5:bigip:asm:syslog`",
              "q": "index=network sourcetype=\"f5:bigip:asm:syslog\"\n| stats count by violation_name, src, request_uri, severity | sort -count",
              "m": "Enable F5 ASM logging. Dashboard showing top violations, attack sources, and targeted URIs.",
              "z": "Table, Bar chart by violation, Map (source IPs), Timeline.",
              "kfp": "Scanners, pen tests, and legacy browser quirks can make a web application firewall look busy; tune rules and test traffic.",
              "refs": "[Splunk_TA_f5-bigip](https://splunkbase.splunk.com/app/2680)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_f5-bigip` (ASM).\n• Ensure the following data sources are available: `sourcetype=f5:bigip:asm:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable F5 ASM logging. Dashboard showing top violations, attack sources, and targeted URIs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"f5:bigip:asm:syslog\"\n| stats count by violation_name, src, request_uri, severity | sort -count\n```\n\nUnderstanding this SPL\n\n**WAF Policy Violations (F5 BIG-IP ASM)** — WAF violations indicate attacks — SQL injection, XSS, command injection. Trending reveals campaigns.\n\nDocumented **Data sources**: `sourcetype=f5:bigip:asm:syslog`. **App/TA** (typical add-on context): `Splunk_TA_f5-bigip` (ASM). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: f5:bigip:asm:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"f5:bigip:asm:syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by violation_name, src, request_uri, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar chart by violation, Map (source IPs), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add up web application blocks on the same box so real attacks and risky paths stand out from everyday browsing noise with less guesswork.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "f5"
              ],
              "em": [
                "f5_asm",
                "f5_bigip"
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.9",
              "n": "Connection Queue Depth (F5 BIG-IP)",
              "c": "critical",
              "f": "intermediate",
              "v": "Growing connection queues indicate backend saturation. Users experience timeouts before the server actually fails.",
              "t": "F5 TA (`Splunk_TA_f5-bigip`), Splunk_TA_citrix-netscaler",
              "d": "`sourcetype=f5:bigip:ltm`, SNMP",
              "q": "index=network sourcetype=\"f5:bigip:ltm\"\n| stats latest(curConns) as connections, latest(connqDepth) as queue_depth by virtual_server\n| where queue_depth > 0 | sort -queue_depth",
              "m": "Monitor LTM connection queue statistics via iControl REST or SNMP. Alert when queue depth exceeds 0 persistently (>5 min). Correlate with backend pool member health.",
              "z": "Line chart (queue depth over time), Table (virtual server, connections, queue), Gauge.",
              "kfp": "Flash crowds, new campaigns, and slow backends can fill queues during honest peaks.",
              "refs": "[Splunk_TA_f5-bigip](https://splunkbase.splunk.com/app/2680), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: F5 TA (`Splunk_TA_f5-bigip`), Splunk_TA_citrix-netscaler.\n• Ensure the following data sources are available: `sourcetype=f5:bigip:ltm`, SNMP.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor LTM connection queue statistics via iControl REST or SNMP. Alert when queue depth exceeds 0 persistently (>5 min). Correlate with backend pool member health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"f5:bigip:ltm\"\n| stats latest(curConns) as connections, latest(connqDepth) as queue_depth by virtual_server\n| where queue_depth > 0 | sort -queue_depth\n```\n\nUnderstanding this SPL\n\n**Connection Queue Depth (F5 BIG-IP)** — Growing connection queues indicate backend saturation. Users experience timeouts before the server actually fails.\n\nDocumented **Data sources**: `sourcetype=f5:bigip:ltm`, SNMP. **App/TA** (typical add-on context): F5 TA (`Splunk_TA_f5-bigip`), Splunk_TA_citrix-netscaler. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: f5:bigip:ltm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"f5:bigip:ltm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by virtual_server** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where queue_depth > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Connection Queue Depth (F5 BIG-IP)** — Growing connection queues indicate backend saturation. Users experience timeouts before the server actually fails.\n\nDocumented **Data sources**: `sourcetype=f5:bigip:ltm`, SNMP. **App/TA** (typical add-on context): F5 TA (`Splunk_TA_f5-bigip`), Splunk_TA_citrix-netscaler. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (queue depth over time), Table (virtual server, connections, queue), Gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at how deep connection queues get so a slow back end or a traffic spike is not hiding behind a few lucky fast replies.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "citrix",
                "f5"
              ],
              "em": [
                "citrix_netscaler",
                "f5_bigip"
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.10",
              "n": "Backend Server Error Code Distribution (F5 BIG-IP)",
              "c": "high",
              "f": "beginner",
              "v": "Understanding which backends return 5xx errors helps isolate faulty application instances vs. systemic issues.",
              "t": "F5 TA (`Splunk_TA_f5-bigip`), NGINX TA",
              "d": "`sourcetype=f5:bigip:ltm:http`, `sourcetype=nginx:plus:api`",
              "q": "index=network sourcetype=\"f5:bigip:ltm:http\"\n| where response_code >= 500\n| stats count by pool_member, response_code, virtual_server\n| sort -count",
              "m": "Enable HTTP response logging on the LB. Track 5xx rates per backend member. Alert when a single member's error rate exceeds the pool average by 3x. Auto-disable unhealthy members.",
              "z": "Bar chart (errors by backend), Table (member, error code, count), Timechart.",
              "kfp": "Deploys, dependency outages, and bad releases can return five-hundred class codes from the app servers while the load balancer is healthy.",
              "refs": "[Splunk_TA_f5-bigip](https://splunkbase.splunk.com/app/2680)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: F5 TA (`Splunk_TA_f5-bigip`), NGINX TA.\n• Ensure the following data sources are available: `sourcetype=f5:bigip:ltm:http`, `sourcetype=nginx:plus:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable HTTP response logging on the LB. Track 5xx rates per backend member. Alert when a single member's error rate exceeds the pool average by 3x. Auto-disable unhealthy members.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"f5:bigip:ltm:http\"\n| where response_code >= 500\n| stats count by pool_member, response_code, virtual_server\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Backend Server Error Code Distribution (F5 BIG-IP)** — Understanding which backends return 5xx errors helps isolate faulty application instances vs. systemic issues.\n\nDocumented **Data sources**: `sourcetype=f5:bigip:ltm:http`, `sourcetype=nginx:plus:api`. **App/TA** (typical add-on context): F5 TA (`Splunk_TA_f5-bigip`), NGINX TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: f5:bigip:ltm:http. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"f5:bigip:ltm:http\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where response_code >= 500` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by pool_member, response_code, virtual_server** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (errors by backend), Table (member, error code, count), Timechart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We list five-hundred class errors from behind the service so a sick app, not just the load balancer, gets help before users pile on in chat.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "f5",
                "nginx"
              ],
              "em": [
                "f5_bigip",
                "nginx_open"
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.11",
              "n": "Rate Limiting and DDoS Mitigation Events (F5 BIG-IP)",
              "c": "critical",
              "f": "beginner",
              "v": "Tracking rate limiting events reveals ongoing attacks and validates that DDoS protections are actively working.",
              "t": "F5 TA (`Splunk_TA_f5-bigip`), Splunk_TA_citrix-netscaler",
              "d": "`sourcetype=f5:bigip:asm`, `sourcetype=f5:bigip:ltm`",
              "q": "index=network sourcetype=\"f5:bigip:asm\" attack_type=\"*dos*\" OR violation=\"Rate Limiting\"\n| stats count values(src) as src_values dc(src) as unique_sources by virtual_server, attack_type\n| sort -count",
              "m": "Enable ASM/WAF logging. Configure rate limiting policies per virtual server. Alert on sustained rate limiting events. Track source IP patterns for blocklisting.",
              "z": "Timechart (events over time), Table (source IPs, attack types), Single value (blocked requests).",
              "kfp": "Rate-based blocks and shapers can add events during real peaks and during pen tests; confirm intent before relaxing controls.",
              "refs": "[Splunk_TA_f5-bigip](https://splunkbase.splunk.com/app/2680)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: F5 TA (`Splunk_TA_f5-bigip`), Splunk_TA_citrix-netscaler.\n• Ensure the following data sources are available: `sourcetype=f5:bigip:asm`, `sourcetype=f5:bigip:ltm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable ASM/WAF logging. Configure rate limiting policies per virtual server. Alert on sustained rate limiting events. Track source IP patterns for blocklisting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"f5:bigip:asm\" attack_type=\"*dos*\" OR violation=\"Rate Limiting\"\n| stats count values(src) as src_values dc(src) as unique_sources by virtual_server, attack_type\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Rate Limiting and DDoS Mitigation Events (F5 BIG-IP)** — Tracking rate limiting events reveals ongoing attacks and validates that DDoS protections are actively working.\n\nDocumented **Data sources**: `sourcetype=f5:bigip:asm`, `sourcetype=f5:bigip:ltm`. **App/TA** (typical add-on context): F5 TA (`Splunk_TA_f5-bigip`), Splunk_TA_citrix-netscaler. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: f5:bigip:asm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"f5:bigip:asm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by virtual_server, attack_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (events over time), Table (source IPs, attack types), Single value (blocked requests).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch rate limit and attack style events on the same appliance so a honest flash crowd and a real flood are both visible for the right follow-up.",
              "mtype": [
                "Security",
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix",
                "f5"
              ],
              "em": [
                "citrix_netscaler",
                "f5_bigip"
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.12",
              "n": "iRule/Policy Errors (F5 BIG-IP)",
              "c": "high",
              "f": "beginner",
              "v": "iRule failures cause unexpected traffic handling — potentially bypassing security or routing traffic incorrectly.",
              "t": "F5 TA (`Splunk_TA_f5-bigip`)",
              "d": "`sourcetype=f5:bigip:ltm`",
              "q": "index=network sourcetype=\"f5:bigip:ltm\" \"TCL error\" OR \"rule error\" OR \"aborted\"\n| rex \"Rule (?<rule_name>/\\S+)\"\n| stats count by rule_name, host | sort -count",
              "m": "Enable iRule logging (sparingly — high volume). Monitor for TCL runtime errors. Alert on any iRule abort events. Review and test iRules in staging before production.",
              "z": "Table (rule name, error count, host), Timechart (errors over time).",
              "kfp": "Bad inputs, new headers, and rare corner cases can fire rule errors; treat as code bugs when volume jumps.",
              "refs": "[Splunk_TA_f5-bigip](https://splunkbase.splunk.com/app/2680)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: F5 TA (`Splunk_TA_f5-bigip`).\n• Ensure the following data sources are available: `sourcetype=f5:bigip:ltm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable iRule logging (sparingly — high volume). Monitor for TCL runtime errors. Alert on any iRule abort events. Review and test iRules in staging before production.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"f5:bigip:ltm\" \"TCL error\" OR \"rule error\" OR \"aborted\"\n| rex \"Rule (?<rule_name>/\\S+)\"\n| stats count by rule_name, host | sort -count\n```\n\nUnderstanding this SPL\n\n**iRule/Policy Errors (F5 BIG-IP)** — iRule failures cause unexpected traffic handling — potentially bypassing security or routing traffic incorrectly.\n\nDocumented **Data sources**: `sourcetype=f5:bigip:ltm`. **App/TA** (typical add-on context): F5 TA (`Splunk_TA_f5-bigip`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: f5:bigip:ltm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"f5:bigip:ltm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by rule_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rule name, error count, host), Timechart (errors over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for script errors from custom rules on the same box so a bad change to logic does not stay silent while clients see odd failures.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "f5"
              ],
              "em": [
                "f5_bigip"
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.13",
              "n": "Citrix ADC Virtual Server Health and State (NetScaler)",
              "c": "critical",
              "f": "beginner",
              "v": "Citrix ADC (NetScaler) virtual servers (vServers) are the front-end load-balancing endpoints that distribute traffic to back-end service groups. A vServer transitions from UP to DOWN when all bound services fail health checks, causing a complete outage for the application it serves. Monitoring vServer state changes provides immediate alerting when applications lose load-balanced availability.",
              "t": "Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`), Splunk Connect for Syslog",
              "d": "`index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `vserver_name`, `vserver_state`, `vserver_type`, `service_name`, `service_state`",
              "q": "index=network sourcetype=\"citrix:netscaler:syslog\" \"Vserver\" (\"DOWN\" OR \"UP\" OR \"OUT OF SERVICE\")\n| rex \"Vserver (?<vserver_name>\\S+) - State (?<state>\\w+)\"\n| where state=\"DOWN\" OR state=\"OUTOFSERVICE\"\n| bin _time span=5m\n| stats count as state_changes, latest(state) as current_state, values(host) as adc_node by vserver_name, _time\n| table _time, vserver_name, current_state, state_changes, adc_node",
              "m": "Configure Citrix ADC to send syslog to Splunk via Splunk Connect for Syslog (SC4S). The ADC generates syslog messages for vServer state transitions (SNMP trap equivalent). Alternatively, use the NITRO API via scripted input to poll `lbvserver` statistics including `state`, `curclntconnections`, `tothits`, and `health` (percentage of UP services). Alert immediately on any vServer transitioning to DOWN. Track vServer health percentage — a vServer at 50% health means half its services are down and may be approaching failure. Correlate with service group member health checks for root cause.",
              "z": "Status grid (vServer name x state), Timeline (state transitions), Table (DOWN vServers with service count).",
              "kfp": "Admin disable, GSLB moves, and maintenance can mark a service down on purpose; compare to the runbook and change record.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`), Splunk Connect for Syslog.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `vserver_name`, `vserver_state`, `vserver_type`, `service_name`, `service_state`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Citrix ADC to send syslog to Splunk via Splunk Connect for Syslog (SC4S). The ADC generates syslog messages for vServer state transitions (SNMP trap equivalent). Alternatively, use the NITRO API via scripted input to poll `lbvserver` statistics including `state`, `curclntconnections`, `tothits`, and `health` (percentage of UP services). Alert immediately on any vServer transitioning to DOWN. Track vServer health percentage — a vServer at 50% health means half its services are down and …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:syslog\" \"Vserver\" (\"DOWN\" OR \"UP\" OR \"OUT OF SERVICE\")\n| rex \"Vserver (?<vserver_name>\\S+) - State (?<state>\\w+)\"\n| where state=\"DOWN\" OR state=\"OUTOFSERVICE\"\n| bin _time span=5m\n| stats count as state_changes, latest(state) as current_state, values(host) as adc_node by vserver_name, _time\n| table _time, vserver_name, current_state, state_changes, adc_node\n```\n\nUnderstanding this SPL\n\n**Citrix ADC Virtual Server Health and State (NetScaler)** — Citrix ADC (NetScaler) virtual servers (vServers) are the front-end load-balancing endpoints that distribute traffic to back-end service groups. A vServer transitions from UP to DOWN when all bound services fail health checks, causing a complete outage for the application it serves. Monitoring vServer state changes provides immediate alerting when applications lose load-balanced availability.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `vserver_name`, `vserver_state`, `vserver_type`, `service_name`, `service_state`. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`), Splunk Connect for Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: citrix:netscaler:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"citrix:netscaler:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where state=\"DOWN\" OR state=\"OUTOFSERVICE\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by vserver_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Citrix ADC Virtual Server Health and State (NetScaler)**): table _time, vserver_name, current_state, state_changes, adc_node\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (vServer name x state), Timeline (state transitions), Table (DOWN vServers with service count).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow virtual service state in the same logs so a down or disabled front door is something you can match to a change, not a rumor.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix",
                "syslog"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.14",
              "n": "Citrix ADC Service Group Member Health (NetScaler)",
              "c": "high",
              "f": "beginner",
              "v": "Behind each Citrix ADC vServer, service group members represent individual back-end servers. When health monitors detect a service group member as DOWN, the ADC stops sending traffic to that server. A single member going down may be routine (maintenance), but multiple simultaneous failures indicate a systemic issue — network partition, shared dependency failure, or deployment problem. Monitoring service group member health identifies back-end server failures faster than application-level monitoring.",
              "t": "Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`)",
              "d": "`index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `service_name`, `service_ip`, `service_port`, `service_state`, `monitor_name`",
              "q": "index=network sourcetype=\"citrix:netscaler:syslog\" \"monitor\" (\"DOWN\" OR \"UP\") \"servicegroup\"\n| rex \"servicegroup member (?<sg_name>\\S+)\\((?<member_ip>[^)]+)\\) - State (?<state>\\w+)\"\n| where state=\"DOWN\"\n| stats count as transitions, latest(_time) as last_seen, latest(state) as current_state by sg_name, member_ip, host\n| eval last_seen_fmt=strftime(last_seen, \"%Y-%m-%d %H:%M:%S\")\n| sort -last_seen\n| table sg_name, member_ip, current_state, transitions, last_seen_fmt, host",
              "m": "The ADC logs service state transitions via syslog. For richer data, poll the NITRO API `servicegroup_servicegroupmember_binding` to enumerate all members and their states. Track `svrstate` (UP, DOWN, OUT OF SERVICE) and monitor response times. Alert when: more than 2 service group members go DOWN simultaneously (systemic issue), a critical service group drops below minimum capacity threshold, or a member remains DOWN for more than 15 minutes (stale failure). Correlate member health with application error rates for impact assessment.",
              "z": "Table (service groups with DOWN members), Bar chart (DOWN members by service group), Timeline (member state changes).",
              "kfp": "Members can be in slow-start or out of rotation by design; compare with the application owner before you panic.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`).\n• Ensure the following data sources are available: `index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `service_name`, `service_ip`, `service_port`, `service_state`, `monitor_name`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe ADC logs service state transitions via syslog. For richer data, poll the NITRO API `servicegroup_servicegroupmember_binding` to enumerate all members and their states. Track `svrstate` (UP, DOWN, OUT OF SERVICE) and monitor response times. Alert when: more than 2 service group members go DOWN simultaneously (systemic issue), a critical service group drops below minimum capacity threshold, or a member remains DOWN for more than 15 minutes (stale failure). Correlate member health with applicat…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:syslog\" \"monitor\" (\"DOWN\" OR \"UP\") \"servicegroup\"\n| rex \"servicegroup member (?<sg_name>\\S+)\\((?<member_ip>[^)]+)\\) - State (?<state>\\w+)\"\n| where state=\"DOWN\"\n| stats count as transitions, latest(_time) as last_seen, latest(state) as current_state by sg_name, member_ip, host\n| eval last_seen_fmt=strftime(last_seen, \"%Y-%m-%d %H:%M:%S\")\n| sort -last_seen\n| table sg_name, member_ip, current_state, transitions, last_seen_fmt, host\n```\n\nUnderstanding this SPL\n\n**Citrix ADC Service Group Member Health (NetScaler)** — Behind each Citrix ADC vServer, service group members represent individual back-end servers. When health monitors detect a service group member as DOWN, the ADC stops sending traffic to that server. A single member going down may be routine (maintenance), but multiple simultaneous failures indicate a systemic issue — network partition, shared dependency failure, or deployment problem. Monitoring service group member health identifies back-end server failures faster than…\n\nDocumented **Data sources**: `index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `service_name`, `service_ip`, `service_port`, `service_state`, `monitor_name`. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: citrix:netscaler:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"citrix:netscaler:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where state=\"DOWN\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by sg_name, member_ip, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **last_seen_fmt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix ADC Service Group Member Health (NetScaler)**): table sg_name, member_ip, current_state, transitions, last_seen_fmt, host\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (service groups with DOWN members), Bar chart (DOWN members by service group), Timeline (member state changes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We read service group health on the same platform so a cold member or a flapping monitor is visible before a whole site looks \"randomly\" slow.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.15",
              "n": "Citrix ADC SSL Certificate Expiration Monitoring (NetScaler)",
              "c": "critical",
              "f": "beginner",
              "v": "SSL certificates on Citrix ADC terminate HTTPS connections for all web applications behind the load balancer. An expired certificate causes browser warnings or complete connection failures for all users. The NITRO API exposes `daystoexpiration` for every bound SSL certificate, enabling automated alerting well before expiry. Certificate expiry outages are among the most preventable yet impactful failures in production environments.",
              "t": "Custom scripted input polling Citrix ADC NITRO API",
              "d": "`index=network` `sourcetype=\"citrix:netscaler:ssl\"` fields `certkey_name`, `days_to_expiry`, `subject`, `issuer`, `serial`, `bound_vserver`",
              "q": "index=network sourcetype=\"citrix:netscaler:ssl\"\n| stats latest(days_to_expiry) as days_left, latest(subject) as subject, latest(issuer) as issuer, values(bound_vserver) as bound_to by certkey_name, host\n| where days_left < 90\n| eval urgency=case(days_left<=7, \"CRITICAL\", days_left<=30, \"HIGH\", days_left<=90, \"MEDIUM\", 1=1, \"LOW\")\n| sort days_left\n| table certkey_name, days_left, urgency, subject, issuer, bound_to, host",
              "m": "Create a scripted input that polls the NITRO API `sslcertkey` resource on each ADC. The API returns `certkey` name, `subject`, `issuer`, `serial`, `clientcertnotbefore`, `clientcertnotafter`, `daystoexpiration`, and `expirymonitor` status. Also enable the built-in `expirymonitor` on the ADC with a `notificationperiod` (10–100 days). Run the scripted input daily. Alert at 90 days (plan renewal), 30 days (action required), 7 days (critical), and immediately when `daystoexpiration` reaches 0. Track all certificates bound to vServers — unbound certificates can be ignored or flagged for cleanup.",
              "z": "Table (certificates sorted by expiry), Single value (certificates expiring within 30 days), Gauge (soonest expiry).",
              "kfp": "New listeners, reissues, and short-lived test certs can look \"almost expired\" on paper while automation is in flight.",
              "refs": "[CIM: Certificates](https://docs.splunk.com/Documentation/CIM/latest/User/Certificates)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input polling Citrix ADC NITRO API.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"citrix:netscaler:ssl\"` fields `certkey_name`, `days_to_expiry`, `subject`, `issuer`, `serial`, `bound_vserver`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that polls the NITRO API `sslcertkey` resource on each ADC. The API returns `certkey` name, `subject`, `issuer`, `serial`, `clientcertnotbefore`, `clientcertnotafter`, `daystoexpiration`, and `expirymonitor` status. Also enable the built-in `expirymonitor` on the ADC with a `notificationperiod` (10–100 days). Run the scripted input daily. Alert at 90 days (plan renewal), 30 days (action required), 7 days (critical), and immediately when `daystoexpiration` reaches 0. Track…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:ssl\"\n| stats latest(days_to_expiry) as days_left, latest(subject) as subject, latest(issuer) as issuer, values(bound_vserver) as bound_to by certkey_name, host\n| where days_left < 90\n| eval urgency=case(days_left<=7, \"CRITICAL\", days_left<=30, \"HIGH\", days_left<=90, \"MEDIUM\", 1=1, \"LOW\")\n| sort days_left\n| table certkey_name, days_left, urgency, subject, issuer, bound_to, host\n```\n\nUnderstanding this SPL\n\n**Citrix ADC SSL Certificate Expiration Monitoring (NetScaler)** — SSL certificates on Citrix ADC terminate HTTPS connections for all web applications behind the load balancer. An expired certificate causes browser warnings or complete connection failures for all users. The NITRO API exposes `daystoexpiration` for every bound SSL certificate, enabling automated alerting well before expiry. Certificate expiry outages are among the most preventable yet impactful failures in production environments.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"citrix:netscaler:ssl\"` fields `certkey_name`, `days_to_expiry`, `subject`, `issuer`, `serial`, `bound_vserver`. **App/TA** (typical add-on context): Custom scripted input polling Citrix ADC NITRO API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: citrix:netscaler:ssl. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"citrix:netscaler:ssl\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by certkey_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where days_left < 90` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **urgency** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix ADC SSL Certificate Expiration Monitoring (NetScaler)**): table certkey_name, days_left, urgency, subject, issuer, bound_to, host\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Citrix ADC SSL Certificate Expiration Monitoring (NetScaler)** — SSL certificates on Citrix ADC terminate HTTPS connections for all web applications behind the load balancer. An expired certificate causes browser warnings or complete connection failures for all users. The NITRO API exposes `daystoexpiration` for every bound SSL certificate, enabling automated alerting well before expiry. Certificate expiry outages are among the most preventable yet impactful failures in production environments.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"citrix:netscaler:ssl\"` fields `certkey_name`, `days_to_expiry`, `subject`, `issuer`, `serial`, `bound_vserver`. **App/TA** (typical add-on context): Custom scripted input polling Citrix ADC NITRO API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Certificates.All_Certificates` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (certificates sorted by expiry), Single value (certificates expiring within 30 days), Gauge (soonest expiry).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at time left on listener certificates in one place on the same gear so renewals are planned, not an outage story.",
              "mtype": [
                "Compliance",
                "Availability"
              ],
              "a": [
                "Certificates"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.dest | sort - count",
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.16",
              "n": "Citrix ADC High Availability Failover Monitoring (NetScaler)",
              "c": "critical",
              "f": "intermediate",
              "v": "Citrix ADC deployments typically use HA pairs where a secondary appliance takes over if the primary fails. Failover events (PRIMARY → SECONDARY swap) are disruptive — active connections may be dropped, and if configuration sync was incomplete, the new primary may have a stale configuration. Monitoring failover events, sync status, and node health ensures HA is functioning correctly and that failovers are investigated promptly.",
              "t": "Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`)",
              "d": "`index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `ha_state`, `ha_node`, `sync_status`, `failover_reason`",
              "q": "index=network sourcetype=\"citrix:netscaler:syslog\" (\"HA state\" OR \"failover\" OR \"STAYSECONDARY\" OR \"CLAIMING\" OR \"FORCE CHANGE\")\n| rex \"HA state of node (?<node_id>\\d+) changed from (?<from_state>\\w+) to (?<to_state>\\w+)\"\n| where isnotnull(from_state)\n| eval is_failover=if(to_state=\"PRIMARY\" AND from_state=\"SECONDARY\", \"Yes\", \"No\")\n| sort -_time\n| table _time, host, node_id, from_state, to_state, is_failover",
              "m": "The ADC logs HA state transitions via syslog when nodes change between PRIMARY, SECONDARY, CLAIMING, and FORCE CHANGE states. Also poll the NITRO API `hanode` resource for `hacurstatus`, `hacurstate`, `hasync`, `haprop`, and `hatotpktrx`. Monitor for: any failover event (state change to PRIMARY on a formerly SECONDARY node), sync failures (`hasync` not SUCCESS — configuration mismatch between nodes), system health states (COMPLETEFAIL, PARTIALFAIL, ROUTEMONITORFAIL), and STAYSECONDARY status (forced secondary, no automatic failover possible). Alert immediately on failover events. Regularly validate sync status — a desynchronized HA pair means the secondary will come up with stale configuration after failover.",
              "z": "Timeline (failover events), Status grid (node x state), Table (sync status per HA pair).",
              "kfp": "Rehearsals, power work, and card swaps can make high-availability logs chatty without user-facing loss.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`).\n• Ensure the following data sources are available: `index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `ha_state`, `ha_node`, `sync_status`, `failover_reason`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe ADC logs HA state transitions via syslog when nodes change between PRIMARY, SECONDARY, CLAIMING, and FORCE CHANGE states. Also poll the NITRO API `hanode` resource for `hacurstatus`, `hacurstate`, `hasync`, `haprop`, and `hatotpktrx`. Monitor for: any failover event (state change to PRIMARY on a formerly SECONDARY node), sync failures (`hasync` not SUCCESS — configuration mismatch between nodes), system health states (COMPLETEFAIL, PARTIALFAIL, ROUTEMONITORFAIL), and STAYSECONDARY status (fo…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:syslog\" (\"HA state\" OR \"failover\" OR \"STAYSECONDARY\" OR \"CLAIMING\" OR \"FORCE CHANGE\")\n| rex \"HA state of node (?<node_id>\\d+) changed from (?<from_state>\\w+) to (?<to_state>\\w+)\"\n| where isnotnull(from_state)\n| eval is_failover=if(to_state=\"PRIMARY\" AND from_state=\"SECONDARY\", \"Yes\", \"No\")\n| sort -_time\n| table _time, host, node_id, from_state, to_state, is_failover\n```\n\nUnderstanding this SPL\n\n**Citrix ADC High Availability Failover Monitoring (NetScaler)** — Citrix ADC deployments typically use HA pairs where a secondary appliance takes over if the primary fails. Failover events (PRIMARY → SECONDARY swap) are disruptive — active connections may be dropped, and if configuration sync was incomplete, the new primary may have a stale configuration. Monitoring failover events, sync status, and node health ensures HA is functioning correctly and that failovers are investigated promptly.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `ha_state`, `ha_node`, `sync_status`, `failover_reason`. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: citrix:netscaler:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"citrix:netscaler:syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where isnotnull(from_state)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **is_failover** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix ADC High Availability Failover Monitoring (NetScaler)**): table _time, host, node_id, from_state, to_state, is_failover\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (failover events), Status grid (node x state), Table (sync status per HA pair).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We list high-availability and failover style messages so a switch of roles during maintenance does not look like a mystery in post-review.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.17",
              "n": "Citrix ADC GSLB Site and Service Health (NetScaler)",
              "c": "high",
              "f": "advanced",
              "v": "Global Server Load Balancing (GSLB) distributes traffic across multiple data centers based on proximity, health, and load. GSLB relies on the Metric Exchange Protocol (MEP) between sites to share health and load metrics. If MEP connectivity fails between sites, the GSLB method falls back to Round Robin — potentially sending users to degraded or distant sites. Monitoring GSLB site health and MEP status ensures intelligent multi-site traffic distribution.",
              "t": "Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`), NITRO API scripted input",
              "d": "`index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `gslb_site`, `gslb_service`, `mep_status`, `site_ip`, `service_state`",
              "q": "index=network sourcetype=\"citrix:netscaler:syslog\" (\"GSLB\" OR \"MEP\") (\"DOWN\" OR \"UP\" OR \"disabled\")\n| rex \"GSLB (?:site|service) (?<gslb_entity>\\S+).*State (?<state>\\w+)\"\n| where state=\"DOWN\" OR match(_raw, \"MEP.*DOWN\")\n| bin _time span=5m\n| stats count as events, latest(state) as current_state by gslb_entity, host, _time\n| table _time, gslb_entity, current_state, events, host",
              "m": "The ADC logs GSLB service state changes and MEP connectivity events via syslog. MEP runs on TCP ports 3011 (standard) or 3009 (secure) between GSLB sites. Additionally, poll the NITRO API `gslbsite` and `gslbservice` resources for site status, MEP status, and GSLB service health. Alert on: any GSLB service going DOWN, MEP status changing to DOWN between any pair of sites (fallback to Round Robin), and GSLB site becoming unreachable. When MEP fails, all GSLB decisions for that site pair become unaware of the remote site's health — traffic may be sent to a degraded or offline site.",
              "z": "Status grid (GSLB site x MEP status), Table (DOWN GSLB services), Timeline (GSLB state changes).",
              "kfp": "GSLB shifts during drills, path changes, and ISP events can be normal if clients still land on a healthy data center.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`), NITRO API scripted input.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `gslb_site`, `gslb_service`, `mep_status`, `site_ip`, `service_state`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe ADC logs GSLB service state changes and MEP connectivity events via syslog. MEP runs on TCP ports 3011 (standard) or 3009 (secure) between GSLB sites. Additionally, poll the NITRO API `gslbsite` and `gslbservice` resources for site status, MEP status, and GSLB service health. Alert on: any GSLB service going DOWN, MEP status changing to DOWN between any pair of sites (fallback to Round Robin), and GSLB site becoming unreachable. When MEP fails, all GSLB decisions for that site pair become un…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:syslog\" (\"GSLB\" OR \"MEP\") (\"DOWN\" OR \"UP\" OR \"disabled\")\n| rex \"GSLB (?:site|service) (?<gslb_entity>\\S+).*State (?<state>\\w+)\"\n| where state=\"DOWN\" OR match(_raw, \"MEP.*DOWN\")\n| bin _time span=5m\n| stats count as events, latest(state) as current_state by gslb_entity, host, _time\n| table _time, gslb_entity, current_state, events, host\n```\n\nUnderstanding this SPL\n\n**Citrix ADC GSLB Site and Service Health (NetScaler)** — Global Server Load Balancing (GSLB) distributes traffic across multiple data centers based on proximity, health, and load. GSLB relies on the Metric Exchange Protocol (MEP) between sites to share health and load metrics. If MEP connectivity fails between sites, the GSLB method falls back to Round Robin — potentially sending users to degraded or distant sites. Monitoring GSLB site health and MEP status ensures intelligent multi-site traffic distribution.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `gslb_site`, `gslb_service`, `mep_status`, `site_ip`, `service_state`. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`), NITRO API scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: citrix:netscaler:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"citrix:netscaler:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where state=\"DOWN\" OR match(_raw, \"MEP.*DOWN\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by gslb_entity, host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `table` selects the final display columns: _time, gslb_entity, current_state, events, host.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (GSLB site x MEP status), Table (DOWN GSLB services), Timeline (GSLB state changes).
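\n\nOptional flap-detection variant (a hedged sketch; it reuses the same `rex` pattern as above, so adapt it to your actual event format, and run it over roughly a one-hour window):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:syslog\" \"GSLB\" (\"DOWN\" OR \"UP\")\n| rex \"GSLB (?:site|service) (?<gslb_entity>\\S+).*State (?<state>\\w+)\"\n| where isnotnull(state)\n| stats count as state_changes, dc(state) as distinct_states, latest(state) as current_state by gslb_entity\n| where state_changes > 4\n```\n\nAn entity that changes state several times an hour is flapping and usually deserves investigation even if its latest state is UP.",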
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at site and global load state so a shifted region or a quiet peer is on the same screen as the rest of the delivery story.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.18",
              "n": "Citrix Gateway / VPN Session Monitoring (NetScaler)",
              "c": "high",
              "f": "intermediate",
              "v": "Citrix Gateway (NetScaler Gateway) provides SSL VPN access and ICA Proxy functionality for remote Citrix session launches. Monitoring active Gateway sessions provides visibility into remote user activity, concurrent connection counts (license-relevant), authentication failures (brute force detection), and session anomalies (impossible travel, excessive bandwidth). Gateway is the perimeter entry point for all remote Citrix access, making it security-critical.",
              "t": "Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`)",
              "d": "`index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `user`, `client_ip`, `session_type`, `auth_result`, `gateway_vserver`",
              "q": "index=network sourcetype=\"citrix:netscaler:syslog\" (\"SSLVPN\" OR \"ICA\" OR \"AAA\") (\"LOGIN\" OR \"LOGOUT\" OR \"FAILURE\")\n| rex \"User (?<user>\\S+) - Client_ip (?<client_ip>\\S+)\"\n| eval auth_result=case(match(_raw, \"LOGIN\"), \"Success\", match(_raw, \"FAILURE\"), \"Failure\", match(_raw, \"LOGOUT\"), \"Logout\", 1=1, \"Other\")\n| bin _time span=15m\n| stats sum(eval(if(auth_result=\"Success\", 1, 0))) as logins,\n  sum(eval(if(auth_result=\"Failure\", 1, 0))) as failures,\n  dc(user) as unique_users, dc(client_ip) as unique_ips by gateway_vserver, _time\n| eval fail_pct=if((logins+failures)>0, round(failures/(logins+failures)*100,1), 0)\n| where failures > 10 OR fail_pct > 30\n| table _time, gateway_vserver, logins, failures, fail_pct, unique_users, unique_ips",
              "m": "The ADC logs all AAA (Authentication, Authorization, Accounting) events via syslog, including Gateway login successes, failures, and logouts with client IP and username. Configure syslog with appflow and audit logging enabled. Alert on: authentication failure rate exceeding 30% (possible brute force), concurrent sessions exceeding licensed capacity, a single source IP attempting more than 20 failed logins in 15 minutes, or unusual login times/locations for known users. Track peak concurrent Gateway sessions for capacity planning.",
              "z": "Timechart (logins vs failures), Bar chart (failures by source IP), Single value (concurrent sessions).",
              "kfp": "Travel peaks, class schedules, and remote-work days swing login volume without a breach.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`).\n• Ensure the following data sources are available: `index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `user`, `client_ip`, `session_type`, `auth_result`, `gateway_vserver`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe ADC logs all AAA (Authentication, Authorization, Accounting) events via syslog, including Gateway login successes, failures, and logouts with client IP and username. Configure syslog with appflow and audit logging enabled. Alert on: authentication failure rate exceeding 30% (possible brute force), concurrent sessions exceeding licensed capacity, a single source IP attempting more than 20 failed logins in 15 minutes, or unusual login times/locations for known users. Track peak concurrent Gate…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:syslog\" (\"SSLVPN\" OR \"ICA\" OR \"AAA\") (\"LOGIN\" OR \"LOGOUT\" OR \"FAILURE\")\n| rex \"User (?<user>\\S+) - Client_ip (?<client_ip>\\S+)\"\n| eval auth_result=case(match(_raw, \"LOGIN\"), \"Success\", match(_raw, \"FAILURE\"), \"Failure\", match(_raw, \"LOGOUT\"), \"Logout\", 1=1, \"Other\")\n| bin _time span=15m\n| stats sum(eval(if(auth_result=\"Success\", 1, 0))) as logins,\n  sum(eval(if(auth_result=\"Failure\", 1, 0))) as failures,\n  dc(user) as unique_users, dc(client_ip) as unique_ips by gateway_vserver, _time\n| eval fail_pct=if((logins+failures)>0, round(failures/(logins+failures)*100,1), 0)\n| where failures > 10 OR fail_pct > 30\n| table _time, gateway_vserver, logins, failures, fail_pct, unique_users, unique_ips\n```\n\nUnderstanding this SPL\n\n**Citrix Gateway / VPN Session Monitoring (NetScaler)** — Citrix Gateway (NetScaler Gateway) provides SSL VPN access and ICA Proxy functionality for remote Citrix session launches. Monitoring active Gateway sessions provides visibility into remote user activity, concurrent connection counts (license-relevant), authentication failures (brute force detection), and session anomalies (impossible travel, excessive bandwidth). Gateway is the perimeter entry point for all remote Citrix access, making it security-critical.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `user`, `client_ip`, `session_type`, `auth_result`, `gateway_vserver`. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: citrix:netscaler:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"citrix:netscaler:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **auth_result** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by gateway_vserver, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failures > 10 OR fail_pct > 30` — typically the threshold or rule expression for this monitoring goal.\n• `table` selects the final display columns: _time, gateway_vserver, logins, failures, fail_pct, unique_users, unique_ips.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(Authentication.src) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=15m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Citrix Gateway / VPN Session Monitoring (NetScaler)** — Citrix Gateway (NetScaler Gateway) provides SSL VPN access and ICA Proxy functionality for remote Citrix session launches. Monitoring active Gateway sessions provides visibility into remote user activity, concurrent connection counts (license-relevant), authentication failures (brute force detection), and session anomalies (impossible travel, excessive bandwidth). Gateway is the perimeter entry point for all remote Citrix access, making it security-critical.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"citrix:netscaler:syslog\"` fields `user`, `client_ip`, `session_type`, `auth_result`, `gateway_vserver`. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (logins vs failures), Bar chart (failures by source IP), Single value (concurrent sessions).
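\n\nOptional per-source brute-force variant (a hedged sketch of the \"more than 20 failed logins from one IP in 15 minutes\" threshold from Step 1; it reuses the same `rex` pattern, so adapt it to your event format):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:syslog\" (\"SSLVPN\" OR \"ICA\" OR \"AAA\") \"FAILURE\"\n| rex \"User (?<user>\\S+) - Client_ip (?<client_ip>\\S+)\"\n| bin _time span=15m\n| stats count as failures, dc(user) as users_tried by client_ip, _time\n| where failures > 20\n| sort -failures\n```\n\nA high `users_tried` count alongside high `failures` from a single IP suggests password spraying rather than one user mistyping a password.",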
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow remote access and session style events so a spike in failed logins or a quiet gateway is something you can explain with data.",
              "mtype": [
                "Security",
                "Capacity"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t dc(Authentication.src) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=15m | sort - agg_value",
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.19",
              "n": "Citrix ADC Content Switching Policy Hit Rate (NetScaler)",
              "c": "medium",
              "f": "intermediate",
              "v": "Content switching vServers route HTTP/HTTPS requests to different load-balancing vServers based on URL patterns, headers, cookies, or other request attributes. Misconfigured content switching policies result in traffic hitting the default (catch-all) policy or being routed to the wrong back-end. Monitoring policy hit rates validates that routing rules are working as intended and identifies policies that are never triggered (candidate for cleanup or misconfiguration).",
              "t": "Custom scripted input polling Citrix ADC NITRO API",
              "d": "`index=network` `sourcetype=\"citrix:netscaler:cs\"` fields `cs_vserver`, `policy_name`, `hits`, `target_lbvserver`, `priority`",
              "q": "index=network sourcetype=\"citrix:netscaler:cs\"\n| stats latest(hits) as total_hits, latest(target_lbvserver) as target, latest(priority) as priority by cs_vserver, policy_name, host\n| eventstats sum(total_hits) as vserver_total_hits by cs_vserver\n| eval hit_pct=if(vserver_total_hits>0, round(total_hits/vserver_total_hits*100,1), 0)\n| sort cs_vserver, priority\n| table cs_vserver, policy_name, priority, target, total_hits, hit_pct",
              "m": "Poll the NITRO API `csvserver_cspolicy_binding` to get bound policies with hit counts. Alternatively, enable AppFlow on content switching vServers to capture per-request routing decisions. Run the scripted input every 15 minutes. Flag: policies with zero hits over 7 days (never triggered — misconfigured or obsolete), the default policy receiving more than 20% of traffic (indicates missing specific rules), and sudden shifts in policy hit distribution (routing change after configuration update). Content switching is critical for multi-tenant environments where different applications share a single VIP.",
              "z": "Bar chart (hit rate by policy), Table (policies with hit counts), Timechart (default policy hit rate trending).",
              "kfp": "Rule reorder, content-switch tests, and new paths can change hit mix without a misconfiguration.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input polling Citrix ADC NITRO API.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"citrix:netscaler:cs\"` fields `cs_vserver`, `policy_name`, `hits`, `target_lbvserver`, `priority`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll the NITRO API `csvserver_cspolicy_binding` to get bound policies with hit counts. Alternatively, enable AppFlow on content switching vServers to capture per-request routing decisions. Run the scripted input every 15 minutes. Flag: policies with zero hits over 7 days (never triggered — misconfigured or obsolete), the default policy receiving more than 20% of traffic (indicates missing specific rules), and sudden shifts in policy hit distribution (routing change after configuration update). C…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:cs\"\n| stats latest(hits) as total_hits, latest(target_lbvserver) as target, latest(priority) as priority by cs_vserver, policy_name, host\n| eventstats sum(total_hits) as vserver_total_hits by cs_vserver\n| eval hit_pct=if(vserver_total_hits>0, round(total_hits/vserver_total_hits*100,1), 0)\n| sort cs_vserver, priority\n| table cs_vserver, policy_name, priority, target, total_hits, hit_pct\n```\n\nUnderstanding this SPL\n\n**Citrix ADC Content Switching Policy Hit Rate (NetScaler)** — Content switching vServers route HTTP/HTTPS requests to different load-balancing vServers based on URL patterns, headers, cookies, or other request attributes. Misconfigured content switching policies result in traffic hitting the default (catch-all) policy or being routed to the wrong back-end. Monitoring policy hit rates validates that routing rules are working as intended and identifies policies that are never triggered (candidate for cleanup or misconfiguration).\n\nDocumented **Data sources**: `index=network` `sourcetype=\"citrix:netscaler:cs\"` fields `cs_vserver`, `policy_name`, `hits`, `target_lbvserver`, `priority`. **App/TA** (typical add-on context): Custom scripted input polling Citrix ADC NITRO API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: citrix:netscaler:cs. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"citrix:netscaler:cs\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cs_vserver, policy_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` computes **vserver_total_hits** per **cs_vserver** and attaches it to every row without collapsing results, supplying the denominator for the percentage calculation.\n• `eval` defines or adjusts **hit_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `table` selects the final display columns: cs_vserver, policy_name, priority, target, total_hits, hit_pct.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (hit rate by policy), Table (policies with hit counts), Timechart (default policy hit rate trending).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md
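\n\nOptional zero-hit policy check (a hedged sketch; it assumes `hits` is the cumulative counter returned by the NITRO binding, so a flat value across the window means the policy never fired):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:cs\" earliest=-7d\n| stats earliest(hits) as first_hits, latest(hits) as last_hits by cs_vserver, policy_name\n| eval new_hits = last_hits - first_hits\n| where new_hits = 0\n| table cs_vserver, policy_name, last_hits\n```\n\nA negative delta means the counter reset (for example, after an appliance reboot); treat those windows separately rather than as zero-hit policies.",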
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see how content switching rules are used so rewrites, new paths, and test rules do not sit unused while people blame the network.",
              "mtype": [
                "Performance",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.20",
              "n": "Citrix ADC System Resource Utilization (NetScaler)",
              "c": "high",
              "f": "beginner",
              "v": "Citrix ADC appliances (physical or VPX) have finite CPU, memory, and throughput capacity. Unlike general-purpose servers, ADC resource exhaustion directly impacts all applications it fronts — causing connection drops, increased latency, and SSL handshake failures. Monitoring ADC system resources enables capacity planning and prevents appliance-level bottlenecks that affect the entire application delivery infrastructure.",
              "t": "Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`), NITRO API scripted input",
              "d": "`index=network` `sourcetype=\"citrix:netscaler:perf\"` fields `cpu_use_pct`, `mgmt_cpu_use_pct`, `mem_use_pct`, `disk_use_pct`, `active_connections`, `rx_mbps`, `tx_mbps`, `ssl_tps`",
              "q": "index=network sourcetype=\"citrix:netscaler:perf\"\n| bin _time span=5m\n| stats avg(cpu_use_pct) as avg_cpu, max(cpu_use_pct) as max_cpu,\n  avg(mem_use_pct) as avg_mem, avg(active_connections) as avg_conns,\n  avg(ssl_tps) as avg_ssl_tps, avg(rx_mbps) as avg_rx, avg(tx_mbps) as avg_tx by host, _time\n| where avg_cpu > 70 OR avg_mem > 80 OR max_cpu > 90\n| table _time, host, avg_cpu, max_cpu, avg_mem, avg_conns, avg_ssl_tps, avg_rx, avg_tx",
              "m": "Poll the NITRO API `ns` (system) resource for CPU utilization, memory usage, and packet engine stats. Also poll `ssl` stats for SSL transactions per second (TPS). Run every 5 minutes. Key thresholds: CPU above 70% average (capacity planning), CPU spike above 90% (performance impact imminent), memory above 80% (connection table pressure), SSL TPS approaching licensed limit (SSL offload bottleneck). Track packet engine CPU separately from management CPU — high management CPU with low packet CPU indicates control plane issues. Trend resource utilization to forecast when additional ADC capacity is needed.",
              "z": "Line chart (CPU and memory over time), Gauge (current utilization), Table (ADCs above threshold).",
              "kfp": "Snmp polling gaps, data drops, and short bursts can wobble averages; compare in the same minute on the node.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`), NITRO API scripted input.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"citrix:netscaler:perf\"` fields `cpu_use_pct`, `mgmt_cpu_use_pct`, `mem_use_pct`, `disk_use_pct`, `active_connections`, `rx_mbps`, `tx_mbps`, `ssl_tps`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll the NITRO API `ns` (system) resource for CPU utilization, memory usage, and packet engine stats. Also poll `ssl` stats for SSL transactions per second (TPS). Run every 5 minutes. Key thresholds: CPU above 70% average (capacity planning), CPU spike above 90% (performance impact imminent), memory above 80% (connection table pressure), SSL TPS approaching licensed limit (SSL offload bottleneck). Track packet engine CPU separately from management CPU — high management CPU with low packet CPU in…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:perf\"\n| bin _time span=5m\n| stats avg(cpu_use_pct) as avg_cpu, max(cpu_use_pct) as max_cpu,\n  avg(mem_use_pct) as avg_mem, avg(active_connections) as avg_conns,\n  avg(ssl_tps) as avg_ssl_tps, avg(rx_mbps) as avg_rx, avg(tx_mbps) as avg_tx by host, _time\n| where avg_cpu > 70 OR avg_mem > 80 OR max_cpu > 90\n| table _time, host, avg_cpu, max_cpu, avg_mem, avg_conns, avg_ssl_tps, avg_rx, avg_tx\n```\n\nUnderstanding this SPL\n\n**Citrix ADC System Resource Utilization (NetScaler)** — Citrix ADC appliances (physical or VPX) have finite CPU, memory, and throughput capacity. Unlike general-purpose servers, ADC resource exhaustion directly impacts all applications it fronts — causing connection drops, increased latency, and SSL handshake failures. Monitoring ADC system resources enables capacity planning and prevents appliance-level bottlenecks that affect the entire application delivery infrastructure.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"citrix:netscaler:perf\"` fields `cpu_use_pct`, `mgmt_cpu_use_pct`, `mem_use_pct`, `disk_use_pct`, `active_connections`, `rx_mbps`, `tx_mbps`, `ssl_tps`. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`), NITRO API scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: citrix:netscaler:perf. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"citrix:netscaler:perf\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_cpu > 70 OR avg_mem > 80 OR max_cpu > 90` — typically the threshold or rule expression for this monitoring goal.\n• `table` selects the final display columns: _time, host, avg_cpu, max_cpu, avg_mem, avg_conns, avg_ssl_tps, avg_rx, avg_tx.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance where nodename=Performance.CPU by Performance.host span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Citrix ADC System Resource Utilization (NetScaler)** — Citrix ADC appliances (physical or VPX) have finite CPU, memory, and throughput capacity. Unlike general-purpose servers, ADC resource exhaustion directly impacts all applications it fronts — causing connection drops, increased latency, and SSL handshake failures. Monitoring ADC system resources enables capacity planning and prevents appliance-level bottlenecks that affect the entire application delivery infrastructure.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"citrix:netscaler:perf\"` fields `cpu_use_pct`, `mgmt_cpu_use_pct`, `mem_use_pct`, `disk_use_pct`, `active_connections`, `rx_mbps`, `tx_mbps`, `ssl_tps`. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (`Splunk_TA_citrix-netscaler`), NITRO API scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` (dataset `Performance.CPU`, where `cpu_load_percent` lives) — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU and memory over time), Gauge (current utilization), Table (ADCs above threshold).
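\n\nOptional control-plane check (a hedged sketch of the \"management CPU vs. packet-engine CPU\" comparison from Step 1; it treats `cpu_use_pct` as the packet-engine figure and assumes the documented `mgmt_cpu_use_pct` field is populated):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:perf\"\n| bin _time span=5m\n| stats avg(cpu_use_pct) as pkt_cpu, avg(mgmt_cpu_use_pct) as mgmt_cpu by host, _time\n| where mgmt_cpu > 80 AND pkt_cpu < 40\n| table _time, host, pkt_cpu, mgmt_cpu\n```\n\nSustained high management CPU with a quiet packet engine points at control-plane work (NITRO polling, SNMP, configuration sync) rather than data-plane load.",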
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We average cpu and memory on the same nodes so a hot vserver or a full box is a visible warning before a hard stop.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Network by Performance.host span=5m | sort - agg_value",
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.21",
              "n": "Citrix ADC Responder and Rewrite Policy Errors (NetScaler)",
              "c": "medium",
              "f": "intermediate",
              "v": "Responder and rewrite policies on Citrix ADC implement URL redirects, HTTP header manipulation, security rules, and custom error responses. Policy evaluation errors or undef (undefined) hits indicate misconfiguration — the policy expression failed to evaluate, causing the request to fall through to default behavior. This can result in bypassed security headers, missing redirects, or unexpected error pages being served to users.",
              "t": "Custom scripted input polling Citrix ADC NITRO API",
              "d": "`index=network` `sourcetype=\"citrix:netscaler:policy\"` fields `policy_name`, `policy_type`, `hits`, `undef_hits`, `bound_to`",
              "q": "index=network sourcetype=\"citrix:netscaler:policy\"\n| where undef_hits > 0\n| eval error_ratio=if(hits>0, round(undef_hits/hits*100,2), 100)\n| sort -undef_hits\n| table policy_name, policy_type, bound_to, hits, undef_hits, error_ratio, host",
              "m": "Poll the NITRO API `responderpolicy` and `rewritepolicy` resources. Each policy exposes `hits` (successful evaluations) and `undefhits` (evaluation failures). Run every 15 minutes. Alert when any policy has `undefhits > 0` — this indicates the policy expression has a bug. Common causes: referencing a non-existent header, type mismatch in expression, or regex syntax errors. Policies with high `undefhits` relative to `hits` are effectively broken. Also monitor `responderglobal_responderpolicy_binding` and `rewriteglobal_rewritepolicy_binding` for globally bound policies that affect all traffic.",
              "z": "Table (policies with undef hits), Bar chart (error ratio by policy type), Timeline (undef hits trending).",
              "kfp": "Typos, new headers, and rare clients can make responder or rewrite actions miss; not every miss is an attack.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input polling Citrix ADC NITRO API.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"citrix:netscaler:policy\"` fields `policy_name`, `policy_type`, `hits`, `undef_hits`, `bound_to`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll the NITRO API `responderpolicy` and `rewritepolicy` resources. Each policy exposes `hits` (successful evaluations) and `undefhits` (evaluation failures). Run every 15 minutes. Alert when any policy has `undefhits > 0` — this indicates the policy expression has a bug. Common causes: referencing a non-existent header, type mismatch in expression, or regex syntax errors. Policies with high `undefhits` relative to `hits` are effectively broken. Also monitor `responderglobal_responderpolicy_bind…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:policy\"\n| where undef_hits > 0\n| eval error_ratio=if(hits>0, round(undef_hits/hits*100,2), 100)\n| sort -undef_hits\n| table policy_name, policy_type, bound_to, hits, undef_hits, error_ratio, host\n```\n\nUnderstanding this SPL\n\n**Citrix ADC Responder and Rewrite Policy Errors (NetScaler)** — Responder and rewrite policies on Citrix ADC implement URL redirects, HTTP header manipulation, security rules, and custom error responses. Policy evaluation errors or undef (undefined) hits indicate misconfiguration — the policy expression failed to evaluate, causing the request to fall through to default behavior. This can result in bypassed security headers, missing redirects, or unexpected error pages being served to users.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"citrix:netscaler:policy\"` fields `policy_name`, `policy_type`, `hits`, `undef_hits`, `bound_to`. **App/TA** (typical add-on context): Custom scripted input polling Citrix ADC NITRO API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: citrix:netscaler:policy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"citrix:netscaler:policy\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where undef_hits > 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **error_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix ADC Responder and Rewrite Policy Errors (NetScaler)**): table policy_name, policy_type, bound_to, hits, undef_hits, error_ratio, host\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (policies with undef hits), Bar chart (error ratio by policy type), Timeline (undef hits trending).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for policy misses on responder and rewrite so odd headers and rare clients do not fail silently in a long tail of one-off cases.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.22",
              "n": "Citrix ADC SSL Offload Performance (NetScaler)",
              "c": "high",
              "f": "intermediate",
              "v": "Citrix ADC offloads SSL/TLS processing from back-end servers, handling certificate exchange, cipher negotiation, and encryption/decryption. SSL transactions per second (TPS) is a capacity-bound metric — hardware ADC models have fixed SSL TPS limits, and VPX instances are licensed by throughput tier. Approaching the SSL TPS ceiling causes SSL handshake delays and new connection failures. Monitoring SSL performance ensures cryptographic operations do not become a bottleneck.",
              "t": "Custom scripted input polling Citrix ADC NITRO API",
              "d": "`index=network` `sourcetype=\"citrix:netscaler:ssl\"` fields `ssl_tps`, `ssl_sessions`, `ssl_new_sessions`, `ssl_session_reuse_pct`, `ssl_protocol_version`, `cipher_suite`",
              "q": "index=network sourcetype=\"citrix:netscaler:ssl\" metric_type=\"ssl_stats\"\n| bin _time span=5m\n| stats avg(ssl_tps) as avg_tps, max(ssl_tps) as peak_tps, avg(ssl_session_reuse_pct) as reuse_pct by host, _time\n| where peak_tps > 5000 OR reuse_pct < 50\n| table _time, host, avg_tps, peak_tps, reuse_pct",
              "m": "Poll the NITRO API `ssl` statistics endpoint for SSL transaction counters: `ssltotsessions`, `ssltotnewsessions`, `ssltottlsv12sessions`, `ssltottlsv13sessions`, and session reuse rates. Calculate TPS as delta of `ssltotsessions` over the poll interval. Key thresholds: SSL TPS approaching 80% of licensed/hardware capacity (plan upgrade), session reuse rate below 50% (misconfigured session caching — excessive full handshakes), and TLS 1.0/1.1 session count > 0 (deprecated protocols in use). Track cipher suite distribution to ensure compliance with security policies (disable weak ciphers like RC4, DES, 3DES).",
              "z": "Line chart (SSL TPS over time), Gauge (current TPS vs capacity), Pie chart (protocol version distribution).",
              "kfp": "Old ciphers, pinned apps, and hardware offload quirks can all move SSL front-end numbers without a single root cause.",
              "refs": "[Aruba Networks Add-on for Splunk](https://splunkbase.splunk.com/app/4668), [HPE Aruba ClearPass App for Splunk](https://splunkbase.splunk.com/app/7865), [Splunk Add-on for Cisco Meraki](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input polling Citrix ADC NITRO API.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"citrix:netscaler:ssl\"` fields `ssl_tps`, `ssl_sessions`, `ssl_new_sessions`, `ssl_session_reuse_pct`, `ssl_protocol_version`, `cipher_suite`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll the NITRO API `ssl` statistics endpoint for SSL transaction counters: `ssltotsessions`, `ssltotnewsessions`, `ssltottlsv12sessions`, `ssltottlsv13sessions`, and session reuse rates. Calculate TPS as delta of `ssltotsessions` over the poll interval. Key thresholds: SSL TPS approaching 80% of licensed/hardware capacity (plan upgrade), session reuse rate below 50% (misconfigured session caching — excessive full handshakes), and TLS 1.0/1.1 session count > 0 (deprecated protocols in use). Track…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"citrix:netscaler:ssl\" metric_type=\"ssl_stats\"\n| bin _time span=5m\n| stats avg(ssl_tps) as avg_tps, max(ssl_tps) as peak_tps, avg(ssl_session_reuse_pct) as reuse_pct by host, _time\n| where peak_tps > 5000 OR reuse_pct < 50\n| table _time, host, avg_tps, peak_tps, reuse_pct\n```\n\nUnderstanding this SPL\n\n**Citrix ADC SSL Offload Performance (NetScaler)** — Citrix ADC offloads SSL/TLS processing from back-end servers, handling certificate exchange, cipher negotiation, and encryption/decryption. SSL transactions per second (TPS) is a capacity-bound metric — hardware ADC models have fixed SSL TPS limits, and VPX instances are licensed by throughput tier. Approaching the SSL TPS ceiling causes SSL handshake delays and new connection failures. Monitoring SSL performance ensures cryptographic operations do not become a bottleneck.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"citrix:netscaler:ssl\"` fields `ssl_tps`, `ssl_sessions`, `ssl_new_sessions`, `ssl_session_reuse_pct`, `ssl_protocol_version`, `cipher_suite`. **App/TA** (typical add-on context): Custom scripted input polling Citrix ADC NITRO API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: citrix:netscaler:ssl. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"citrix:netscaler:ssl\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where peak_tps > 5000 OR reuse_pct < 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ADC SSL Offload Performance (NetScaler)**): table _time, host, avg_tps, peak_tps, reuse_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (SSL TPS over time), Gauge (current TPS vs capacity), Pie chart (protocol version distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at how hard encryption is working on the front side so a cipher change or a busy card is not invisible to the ops team.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.7,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 22,
            "none": 0
          }
        },
        {
          "i": "5.4",
          "n": "Wireless Infrastructure",
          "u": [
            {
              "i": "5.4.1",
              "n": "AP Offline Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Offline APs create coverage dead zones. Users lose connectivity in affected areas.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580), WLC syslog",
              "d": "`sourcetype=meraki, WLC events`",
              "q": "index=network sourcetype=\"meraki\" type=\"access point\" (\"went offline\" OR \"unreachable\")\n| table _time host ap_name network status | sort -_time",
              "m": "For Meraki: configure syslog in Dashboard, or use Meraki API TA. For WLC: forward syslog. Alert when APs go offline. Maintain AP inventory lookup for location context.",
              "z": "Map (AP locations with status), Table, Status grid, Single value (APs offline).",
              "kfp": "Access points may go offline during scheduled firmware updates, PoE switch reboots, cabling work, or RF site surveys, which can look like an outage without a real coverage problem.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), WLC syslog.\n• Ensure the following data sources are available: `sourcetype=meraki, WLC events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFor Meraki: configure syslog in Dashboard, or use Meraki API TA. For WLC: forward syslog. Alert when APs go offline. Maintain AP inventory lookup for location context.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"meraki\" type=\"access point\" (\"went offline\" OR \"unreachable\")\n| table _time host ap_name network status | sort -_time\n```\n\nUnderstanding this SPL\n\n**AP Offline Detection** — Offline APs create coverage dead zones. Users lose connectivity in affected areas.\n\nDocumented **Data sources**: `sourcetype=meraki, WLC events`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), WLC syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **AP Offline Detection**): table _time host ap_name network status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (AP locations with status), Table, Status grid, Single value (APs offline).",
              "script": "",
              "premium": "",
              "hw": "Cisco Meraki MR36, MR44, MR46, MR56, MR57, MR76, MR78, MR86, Cisco WLC 3504/5520/8540, Catalyst 9800-40/80/L/CL, Catalyst 9100 APs, Aironet 1815/2802/3802/4800; HPE Aruba AP-303/AP-505/AP-515/AP-535/AP-555/AP-635, Aruba 7000/7200 Mobility Controllers, Aruba 9004/9012 Gateways, Aruba Central",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch ap offline detection so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_meraki",
                "cisco_wlc"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.2",
              "n": "Client Association Failures",
              "c": "medium",
              "f": "beginner",
              "v": "Failed associations frustrate users and indicate RADIUS/auth issues, RF problems, or AP overload.",
              "t": "WLC syslog, Meraki TA",
              "d": "WLC/AP syslog, RADIUS logs",
              "q": "index=network sourcetype=\"cisco:wlc\" (\"association\" OR \"authentication\") AND (\"fail\" OR \"reject\" OR \"denied\")\n| stats count by ap_name, ssid, reason | sort -count",
              "m": "Forward WLC/AP syslog. Correlate with RADIUS logs (ISE). Alert on spike in failures per SSID or AP.",
              "z": "Table (AP, SSID, reason, count), Bar chart by reason, Timechart.",
              "kfp": "Failed logins often come from typos, expired passwords, guest self-service, or a single misconfigured device; treat sustained rises across many users as the real signal.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: WLC syslog, Meraki TA.\n• Ensure the following data sources are available: WLC/AP syslog, RADIUS logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward WLC/AP syslog. Correlate with RADIUS logs (ISE). Alert on spike in failures per SSID or AP.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:wlc\" (\"association\" OR \"authentication\") AND (\"fail\" OR \"reject\" OR \"denied\")\n| stats count by ap_name, ssid, reason | sort -count\n```\n\nUnderstanding this SPL\n\n**Client Association Failures** — Failed associations frustrate users and indicate RADIUS/auth issues, RF problems, or AP overload.\n\nDocumented **Data sources**: WLC/AP syslog, RADIUS logs. **App/TA** (typical add-on context): WLC syslog, Meraki TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:wlc. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:wlc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ap_name, ssid, reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Client Association Failures** — Failed associations frustrate users and indicate RADIUS/auth issues, RF problems, or AP overload.\n\nDocumented **Data sources**: WLC/AP syslog, RADIUS logs. **App/TA** (typical add-on context): WLC syslog, Meraki TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (AP, SSID, reason, count), Bar chart by reason, Timechart.",
              "script": "",
              "premium": "",
              "hw": "Cisco WLC 3504/5520/8540, Catalyst 9800-40/80/L/CL, Catalyst 9100 APs, Aironet 1815/2802/3802/4800; HPE Aruba AP-303/AP-505/AP-515/AP-535/AP-555/AP-635, Aruba 7000/7200 Mobility Controllers, Aruba 9004/9012 Gateways, Aruba Central",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch client association failures so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_meraki",
                "cisco_wlc"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.3",
              "n": "Channel Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "High channel utilization degrades wireless performance. Identifies congested APs needing channel changes or additional coverage.",
              "t": "Meraki API, WLC SNMP",
              "d": "Meraki API, SNMP (CISCO-DOT11-IF-MIB)",
              "q": "index=network sourcetype=\"meraki:api\"\n| stats avg(channel_utilization) as util_pct by ap_name, channel, band\n| where util_pct > 60 | sort -util_pct",
              "m": "Poll Meraki RF statistics API or WLC SNMP. Track per-AP channel utilization. Alert when >60% (2.4GHz) or >50% (5GHz).",
              "z": "Heatmap (APs by utilization), Table, Line chart (trending).",
              "kfp": "RF noise and channel changes can spike when neighbors deploy new gear, microwaves run, or the controller runs automatic channel updates; weather and outdoor clients can also move the numbers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Meraki API, WLC SNMP.\n• Ensure the following data sources are available: Meraki API, SNMP (CISCO-DOT11-IF-MIB).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Meraki RF statistics API or WLC SNMP. Track per-AP channel utilization. Alert when >60% (2.4GHz) or >50% (5GHz).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"meraki:api\"\n| stats avg(channel_utilization) as util_pct by ap_name, channel, band\n| where util_pct > 60 | sort -util_pct\n```\n\nUnderstanding this SPL\n\n**Channel Utilization** — High channel utilization degrades wireless performance. Identifies congested APs needing channel changes or additional coverage.\n\nDocumented **Data sources**: Meraki API, SNMP (CISCO-DOT11-IF-MIB). **App/TA** (typical add-on context): Meraki API, WLC SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: meraki:api. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ap_name, channel, band** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where util_pct > 60` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (APs by utilization), Table, Line chart (trending).",
              "script": "",
              "premium": "",
              "hw": "Cisco Meraki MR36, MR44, MR46, MR56, MR57, MR76, MR78, MR86, Cisco WLC 3504/5520/8540, Catalyst 9800-40/80/L/CL, Catalyst 9100 APs, Aironet 1815/2802/3802/4800; HPE Aruba AP-303/AP-505/AP-515/AP-535/AP-555/AP-635, Aruba 7000/7200 Mobility Controllers, Aruba 9004/9012 Gateways, Aruba Central",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch channel utilization so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp"
              ],
              "em": [
                "cisco_meraki",
                "cisco_wlc",
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.4",
              "n": "Rogue AP Detection",
              "c": "high",
              "f": "beginner",
              "v": "Rogue APs are unauthorized and can be used for man-in-the-middle attacks or network bridging.",
              "t": "WLC syslog, Meraki TA",
              "d": "WLC/Meraki security events",
              "q": "index=network sourcetype=\"cisco:wlc\" \"rogue\" (\"detected\" OR \"alert\" OR \"contained\")\n| stats count by rogue_mac, detecting_ap, channel | sort -count",
              "m": "Forward WLC rogue detection events. Enable rogue detection policies. Alert on rogue APs, especially those broadcasting your corporate SSID.",
              "z": "Table (rogue MAC, detecting AP, channel), Map, Single value.",
              "kfp": "Neighbor networks, personal hotspots, or test labs can look like rogues; confirm against known nearby SSIDs and change windows before escalation.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: WLC syslog, Meraki TA.\n• Ensure the following data sources are available: WLC/Meraki security events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward WLC rogue detection events. Enable rogue detection policies. Alert on rogue APs, especially those broadcasting your corporate SSID.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:wlc\" \"rogue\" (\"detected\" OR \"alert\" OR \"contained\")\n| stats count by rogue_mac, detecting_ap, channel | sort -count\n```\n\nUnderstanding this SPL\n\n**Rogue AP Detection** — Rogue APs are unauthorized and can be used for man-in-the-middle attacks or network bridging.\n\nDocumented **Data sources**: WLC/Meraki security events. **App/TA** (typical add-on context): WLC syslog, Meraki TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:wlc. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:wlc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rogue_mac, detecting_ap, channel** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rogue MAC, detecting AP, channel), Map, Single value.",
              "script": "",
              "premium": "",
              "hw": "Cisco WLC 3504/5520/8540, Catalyst 9800-40/80/L/CL, Catalyst 9100 APs, Aironet 1815/2802/3802/4800; HPE Aruba AP-303/AP-505/AP-515/AP-535/AP-555/AP-635, Aruba 7000/7200 Mobility Controllers, Aruba 9004/9012 Gateways, Aruba Central",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch rogue ap detection so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_meraki",
                "cisco_wlc"
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.5",
              "n": "Client Count Trending",
              "c": "low",
              "f": "beginner",
              "v": "Client count trending informs capacity planning and AP density decisions.",
              "t": "Meraki API, WLC SNMP",
              "d": "WLC/Meraki client data",
              "q": "index=network sourcetype=\"meraki:api\"\n| timechart span=1h dc(client_mac) as client_count by ap_name",
              "m": "Poll client counts via API or SNMP. Track per AP, per SSID, and per building over time.",
              "z": "Line chart (clients over time), Table (AP, count), Heatmap.",
              "kfp": "Wireless client counts spike during shift changes, big events, or back-to-school style rushes; compare against the calendar before calling it an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Meraki API, WLC SNMP.\n• Ensure the following data sources are available: WLC/Meraki client data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll client counts via API or SNMP. Track per AP, per SSID, and per building over time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"meraki:api\"\n| timechart span=1h dc(client_mac) as client_count by ap_name\n```\n\nUnderstanding this SPL\n\n**Client Count Trending** — Client count trending informs capacity planning and AP density decisions.\n\nDocumented **Data sources**: WLC/Meraki client data. **App/TA** (typical add-on context): Meraki API, WLC SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: meraki:api. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by ap_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (clients over time), Table (AP, count), Heatmap.",
              "script": "",
              "premium": "",
              "hw": "Cisco Meraki MR36, MR44, MR46, MR56, MR57, MR76, MR78, MR86, Cisco WLC 3504/5520/8540, Catalyst 9800-40/80/L/CL, Catalyst 9100 APs, Aironet 1815/2802/3802/4800; HPE Aruba AP-303/AP-505/AP-515/AP-535/AP-555/AP-635, Aruba 7000/7200 Mobility Controllers, Aruba 9004/9012 Gateways, Aruba Central",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch client count trending so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp"
              ],
              "em": [
                "cisco_meraki",
                "cisco_wlc",
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.6",
              "n": "RF Interference Events",
              "c": "medium",
              "f": "beginner",
              "v": "Radar (DFS), non-WiFi interference, and channel changes degrade wireless quality.",
              "t": "WLC syslog, Meraki TA",
              "d": "WLC/AP syslog",
              "q": "index=network sourcetype=\"cisco:wlc\" (\"radar\" OR \"DFS\" OR \"interference\" OR \"channel change\")\n| stats count by ap_name, channel | sort -count",
              "m": "Forward AP/WLC syslog. Alert on DFS radar events. Track channel change frequency per AP.",
              "z": "Table (AP, event type, count), Timeline, Bar chart.",
              "kfp": "RF noise and channel changes can spike when neighbors deploy new gear, microwaves run, or the controller runs automatic channel updates; weather and outdoor clients can also move the numbers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: WLC syslog, Meraki TA.\n• Ensure the following data sources are available: WLC/AP syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward AP/WLC syslog. Alert on DFS radar events. Track channel change frequency per AP.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:wlc\" (\"radar\" OR \"DFS\" OR \"interference\" OR \"channel change\")\n| stats count by ap_name, channel | sort -count\n```\n\nUnderstanding this SPL\n\n**RF Interference Events** — Radar (DFS), non-WiFi interference, and channel changes degrade wireless quality.\n\nDocumented **Data sources**: WLC/AP syslog. **App/TA** (typical add-on context): WLC syslog, Meraki TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:wlc. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:wlc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ap_name, channel** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (AP, event type, count), Timeline, Bar chart.",
              "script": "",
              "premium": "",
              "hw": "Cisco WLC 3504/5520/8540, Catalyst 9800-40/80/L/CL, Catalyst 9100 APs, Aironet 1815/2802/3802/4800; HPE Aruba AP-303/AP-505/AP-515/AP-535/AP-555/AP-635, Aruba 7000/7200 Mobility Controllers, Aruba 9004/9012 Gateways, Aruba Central",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch rf interference events so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_meraki",
                "cisco_wlc"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.7",
              "n": "Wireless Authentication Trends",
              "c": "medium",
              "f": "intermediate",
              "v": "802.1X success/failure rates indicate RADIUS health, certificate issues, or expired credentials.",
              "t": "WLC syslog, RADIUS/ISE logs",
              "d": "RADIUS logs, WLC auth events",
              "q": "index=network sourcetype=\"cisco:ise:syslog\" (\"Passed\" OR \"Failed\") AND \"Wireless\"\n| eval status=if(match(_raw,\"Passed\"),\"Success\",\"Failed\")\n| timechart span=1h count by status",
              "m": "Forward ISE/RADIUS authentication logs. Track success/failure ratio over time. Alert on sustained failure rate increase.",
              "z": "Stacked bar chart (success vs. failure), Line chart, Single value (failure rate %).",
              "kfp": "Failed logins often come from typos, expired passwords, guest self-service, or a single misconfigured device; treat sustained rises across many users as the real signal.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: WLC syslog, RADIUS/ISE logs.\n• Ensure the following data sources are available: RADIUS logs, WLC auth events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward ISE/RADIUS authentication logs. Track success/failure ratio over time. Alert on sustained failure rate increase.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ise:syslog\" (\"Passed\" OR \"Failed\") AND \"Wireless\"\n| eval status=if(match(_raw,\"Passed\"),\"Success\",\"Failed\")\n| timechart span=1h count by status\n```\n\nUnderstanding this SPL\n\n**Wireless Authentication Trends** — 802.1X success/failure rates indicate RADIUS health, certificate issues, or expired credentials.\n\nDocumented **Data sources**: RADIUS logs, WLC auth events. **App/TA** (typical add-on context): WLC syslog, RADIUS/ISE logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ise:syslog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ise:syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by status** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Wireless Authentication Trends** — 802.1X success/failure rates indicate RADIUS health, certificate issues, or expired credentials.\n\nDocumented **Data sources**: RADIUS logs, WLC auth events. **App/TA** (typical add-on context): WLC syslog, RADIUS/ISE logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar chart (success vs. failure), Line chart, Single value (failure rate %).",
              "script": "",
              "premium": "",
              "hw": "Cisco ISE 3515, ISE 3595, ISE 3615, ISE 3655, ISE 3695, ISE Virtual Appliance",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch wireless authentication trends so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "syslog"
              ],
              "em": [
                "cisco_wlc"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.8",
              "n": "RADIUS Authentication Failures",
              "c": "critical",
              "f": "intermediate",
              "v": "Mass RADIUS failures prevent wireless users from connecting. Distinguishing between user errors and server issues drives faster resolution.",
              "t": "Cisco WLC syslog, `Splunk_TA_cisco-ise`",
              "d": "`sourcetype=cisco:wlc`, `sourcetype=cisco:ise:syslog`",
              "q": "index=network sourcetype=\"cisco:ise:syslog\" \"Authentication failed\"\n| rex \"UserName=(?<username>\\S+).*?FailureReason=(?<reason>[^;]+)\"\n| stats count by reason, username | sort -count\n| head 20",
              "m": "Forward ISE/RADIUS logs to Splunk. Alert when failure rate exceeds 20% of attempts. Distinguish between bad credentials, expired certificates, and server timeouts.",
              "z": "Bar chart (failure reasons), Table (username, reason, count), Timechart (failure rate).",
              "kfp": "Failed logins often come from typos, expired passwords, guest self-service, or a single misconfigured device; treat sustained rises across many users as the real signal.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco WLC syslog, `Splunk_TA_cisco-ise`.\n• Ensure the following data sources are available: `sourcetype=cisco:wlc`, `sourcetype=cisco:ise:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward ISE/RADIUS logs to Splunk. Alert when failure rate exceeds 20% of attempts. Distinguish between bad credentials, expired certificates, and server timeouts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ise:syslog\" \"Authentication failed\"\n| rex \"UserName=(?<username>\\S+).*?FailureReason=(?<reason>[^;]+)\"\n| stats count by reason, username | sort -count\n| head 20\n```\n\nUnderstanding this SPL\n\n**RADIUS Authentication Failures** — Mass RADIUS failures prevent wireless users from connecting. Distinguishing between user errors and server issues drives faster resolution.\n\nDocumented **Data sources**: `sourcetype=cisco:wlc`, `sourcetype=cisco:ise:syslog`. **App/TA** (typical add-on context): Cisco WLC syslog, `Splunk_TA_cisco-ise`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ise:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ise:syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by reason, username** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**RADIUS Authentication Failures** — Mass RADIUS failures prevent wireless users from connecting. Distinguishing between user errors and server issues drives faster resolution.\n\nDocumented **Data sources**: `sourcetype=cisco:wlc`, `sourcetype=cisco:ise:syslog`. **App/TA** (typical add-on context): Cisco WLC syslog, `Splunk_TA_cisco-ise`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failure reasons), Table (username, reason, count), Timechart (failure rate).",
              "script": "",
              "premium": "",
              "hw": "Cisco WLC 3504, WLC 5520, WLC 8540, Catalyst 9800-40, Catalyst 9800-80, Catalyst 9800-L, Catalyst 9800-CL, Cisco ISE 3515, ISE 3595, ISE 3615, ISE 3655, ISE 3695, ISE Virtual Appliance",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch radius authentication failures so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ise",
                "cisco_wlc"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.9",
              "n": "Client Roaming Analysis",
              "c": "medium",
              "f": "intermediate",
              "v": "Poor roaming causes dropped calls, video freezes, and application timeouts. Analyzing roaming patterns identifies coverage gaps.",
              "t": "Cisco WLC syslog, Meraki API",
              "d": "`sourcetype=cisco:wlc`, `sourcetype=meraki:api`",
              "q": "index=network sourcetype=\"cisco:wlc\" \"roam\" OR \"reassociation\"\n| transaction client_mac maxspan=1h maxpause=5m\n| eval roam_count=eventcount-1\n| stats avg(roam_count) as avg_roams, max(roam_count) as max_roams by client_mac, ssid\n| where avg_roams > 10",
              "m": "Enable client roaming event logging on the WLC. Track roaming frequency per client. Investigate clients with >10 roams/hour — indicates poor RF design or sticky client behavior.",
              "z": "Table (client, SSID, roam count), Heatmap (AP-to-AP roaming), Choropleth (floor plan).",
              "kfp": "Clients may roam often when people move between floors, during large meetings, or when access points reboot; some clients also stay 'sticky' and look noisy without a real outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco WLC syslog, Meraki API.\n• Ensure the following data sources are available: `sourcetype=cisco:wlc`, `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable client roaming event logging on the WLC. Track roaming frequency per client. Investigate clients with >10 roams/hour — indicates poor RF design or sticky client behavior.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:wlc\" \"roam\" OR \"reassociation\"\n| transaction client_mac maxspan=1h maxpause=5m\n| eval roam_count=eventcount-1\n| stats avg(roam_count) as avg_roams, max(roam_count) as max_roams by client_mac, ssid\n| where avg_roams > 10\n```\n\nUnderstanding this SPL\n\n**Client Roaming Analysis** — Poor roaming causes dropped calls, video freezes, and application timeouts. Analyzing roaming patterns identifies coverage gaps.\n\nDocumented **Data sources**: `sourcetype=cisco:wlc`, `sourcetype=meraki:api`. **App/TA** (typical add-on context): Cisco WLC syslog, Meraki API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:wlc. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:wlc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• `eval` defines or adjusts **roam_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by client_mac, ssid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_roams > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (client, SSID, roam count), Heatmap (AP-to-AP roaming), Choropleth (floor plan).",
              "script": "",
              "premium": "",
              "hw": "Cisco WLC 3504/5520/8540, Catalyst 9800-40/80/L/CL, Catalyst 9100 APs, Aironet 1815/2802/3802/4800; HPE Aruba AP-303/AP-505/AP-515/AP-535/AP-555/AP-635, Aruba 7000/7200 Mobility Controllers, Aruba 9004/9012 Gateways, Aruba Central",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch client roaming analysis so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance",
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_meraki",
                "cisco_wlc"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.10",
              "n": "Wireless IDS/IPS Events",
              "c": "critical",
              "f": "intermediate",
              "v": "Wireless attacks (deauth floods, evil twin, KRACK) compromise network security. Early detection prevents credential theft and MitM attacks.",
              "t": "Cisco WLC syslog, Meraki API",
              "d": "`sourcetype=cisco:wlc`, `sourcetype=meraki:ids`",
              "q": "index=network sourcetype=\"cisco:wlc\" \"IDS Signature\" OR \"wIPS\"\n| rex \"Signature (?<sig_id>\\d+).*?(?<sig_name>[^,]+).*?MAC (?<attacker_mac>[0-9a-f:]+)\"\n| stats count by sig_name, attacker_mac | sort -count",
              "m": "Enable wireless IDS on the WLC/AP. Forward alerts to Splunk. Alert on deauth floods, rogue AP impersonation, and client spoofing events. Correlate with rogue AP detection.",
              "z": "Table (signature, attacker MAC, count), Timeline, Single value (alerts today).",
              "kfp": "Neighbor networks, personal hotspots, or test labs can look like rogues; confirm against known nearby SSIDs and change windows before escalation.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco WLC syslog, Meraki API.\n• Ensure the following data sources are available: `sourcetype=cisco:wlc`, `sourcetype=meraki:ids`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable wireless IDS on the WLC/AP. Forward alerts to Splunk. Alert on deauth floods, rogue AP impersonation, and client spoofing events. Correlate with rogue AP detection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:wlc\" \"IDS Signature\" OR \"wIPS\"\n| rex \"Signature (?<sig_id>\\d+).*?(?<sig_name>[^,]+).*?MAC (?<attacker_mac>[0-9a-f:]+)\"\n| stats count by sig_name, attacker_mac | sort -count\n```\n\nUnderstanding this SPL\n\n**Wireless IDS/IPS Events** — Wireless attacks (deauth floods, evil twin, KRACK) compromise network security. Early detection prevents credential theft and MitM attacks.\n\nDocumented **Data sources**: `sourcetype=cisco:wlc`, `sourcetype=meraki:ids`. **App/TA** (typical add-on context): Cisco WLC syslog, Meraki API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:wlc. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:wlc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by sig_name, attacker_mac** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (signature, attacker MAC, count), Timeline, Single value (alerts today).",
              "script": "",
              "premium": "",
              "hw": "Cisco WLC 3504/5520/8540, Catalyst 9800-40/80/L/CL, Catalyst 9100 APs, Aironet 1815/2802/3802/4800; HPE Aruba AP-303/AP-505/AP-515/AP-535/AP-555/AP-635, Aruba 7000/7200 Mobility Controllers, Aruba 9004/9012 Gateways, Aruba Central",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch wireless ids/ips events so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_meraki",
                "cisco_wlc"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.11",
              "n": "Band Steering Effectiveness",
              "c": "low",
              "f": "intermediate",
              "v": "Band steering moves capable clients to 5 GHz, reducing congestion on 2.4 GHz. Measuring effectiveness validates RF policy.",
              "t": "Cisco WLC syslog, Meraki API",
              "d": "`sourcetype=cisco:wlc`, `sourcetype=meraki:api`",
              "q": "index=network sourcetype=\"cisco:wlc\" \"associated\"\n| eval band=if(match(channel,\"^(1|6|11)$\"),\"2.4GHz\",\"5GHz\")\n| stats count by band, ssid\n| eventstats sum(count) as total by ssid\n| eval pct=round(count/total*100,1)",
              "m": "Collect client association events with channel info. Calculate the ratio of 5 GHz vs 2.4 GHz clients per SSID. Target >70% on 5 GHz for dual-band capable clients.",
              "z": "Pie chart (band distribution), Bar chart (by SSID), Timechart (trending).",
              "kfp": "RF noise and channel changes can spike when neighbors deploy new gear, microwaves run, or the controller runs automatic channel updates; weather and outdoor clients can also move the numbers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco WLC syslog, Meraki API.\n• Ensure the following data sources are available: `sourcetype=cisco:wlc`, `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect client association events with channel info. Calculate the ratio of 5 GHz vs 2.4 GHz clients per SSID. Target >70% on 5 GHz for dual-band capable clients.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:wlc\" \"associated\"\n| eval band=if(match(channel,\"^(1|6|11)$\"),\"2.4GHz\",\"5GHz\")\n| stats count by band, ssid\n| eventstats sum(count) as total by ssid\n| eval pct=round(count/total*100,1)\n```\n\nUnderstanding this SPL\n\n**Band Steering Effectiveness** — Band steering moves capable clients to 5 GHz, reducing congestion on 2.4 GHz. Measuring effectiveness validates RF policy.\n\nDocumented **Data sources**: `sourcetype=cisco:wlc`, `sourcetype=meraki:api`. **App/TA** (typical add-on context): Cisco WLC syslog, Meraki API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:wlc. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:wlc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **band** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by band, ssid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by ssid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (band distribution), Bar chart (by SSID), Timechart (trending).",
              "script": "",
              "premium": "",
              "hw": "Cisco WLC 3504/5520/8540, Catalyst 9800-40/80/L/CL, Catalyst 9100 APs, Aironet 1815/2802/3802/4800; HPE Aruba AP-303/AP-505/AP-515/AP-535/AP-555/AP-635, Aruba 7000/7200 Mobility Controllers, Aruba 9004/9012 Gateways, Aruba Central",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch band steering effectiveness so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_meraki",
                "cisco_wlc"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.12",
              "n": "Wireless Client Association Failures (Meraki MR)",
              "c": "high",
              "f": "beginner",
              "v": "Identifies recurring authentication failures and SSID configuration issues that prevent users from connecting to wireless networks.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*Association*\" OR signature=\"*authentication*\" status=\"failure\"\n| stats count by ap_name, client_mac, reason, signature\n| sort -count",
              "m": "Monitor syslog events from Meraki MR access points for failed association attempts. Correlate with SSID configuration and 802.1X radius responses.",
              "z": "Table with top APs/clients by failure count; time-series chart of failures over time by AP.",
              "kfp": "Failed logins often come from typos, expired passwords, guest self-service, or a single misconfigured device; treat sustained rises across many users as the real signal.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor syslog events from Meraki MR access points for failed association attempts. Correlate with SSID configuration and 802.1X radius responses.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*Association*\" OR signature=\"*authentication*\" status=\"failure\"\n| stats count by ap_name, client_mac, reason, signature\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Wireless Client Association Failures (Meraki MR)** — Identifies recurring authentication failures and SSID configuration issues that prevent users from connecting to wireless networks.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ap_name, client_mac, reason, signature** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table with top APs/clients by failure count; time-series chart of failures over time by AP.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch wireless client association failures (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.13",
              "n": "RSSI/Signal Strength Degradation Detection (Meraki MR)",
              "c": "medium",
              "f": "intermediate",
              "v": "Proactively identifies weak WiFi coverage areas and client placement issues before users experience connectivity problems.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\"\n| eval rssi_level=case(rssi>=-50, \"Excellent\", rssi>=-60, \"Good\", rssi>=-70, \"Fair\", rssi<-70, \"Poor\")\n| stats avg(rssi) as avg_rssi, min(rssi) as min_rssi, count by ap_name, ssid, rssi_level\n| where min_rssi < -70 or avg_rssi < -65",
              "m": "Ingest Meraki API client data periodically; analyze RSSI distribution by AP and SSID. Set thresholds for \"poor\" signal (< -70 dBm).",
              "z": "Heatmap of RSSI by AP location; histogram of signal strength distribution; gauge charts for coverage quality by SSID.",
              "kfp": "RF noise and channel changes can spike when neighbors deploy new gear, microwaves run, or the controller runs automatic channel updates; weather and outdoor clients can also move the numbers.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Meraki API client data periodically; analyze RSSI distribution by AP and SSID. Set thresholds for \"poor\" signal (< -70 dBm).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\"\n| eval rssi_level=case(rssi>=-50, \"Excellent\", rssi>=-60, \"Good\", rssi>=-70, \"Fair\", rssi<-70, \"Poor\")\n| stats avg(rssi) as avg_rssi, min(rssi) as min_rssi, count by ap_name, ssid, rssi_level\n| where min_rssi < -70 or avg_rssi < -65\n```\n\nUnderstanding this SPL\n\n**RSSI/Signal Strength Degradation Detection (Meraki MR)** — Proactively identifies weak WiFi coverage areas and client placement issues before users experience connectivity problems.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **rssi_level** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by ap_name, ssid, rssi_level** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where min_rssi < -70 or avg_rssi < -65` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of RSSI by AP location; histogram of signal strength distribution; gauge charts for coverage quality by SSID.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch rssi/signal strength degradation detection (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.14",
              "n": "Excessive Client Roaming Activity (Meraki MR)",
              "c": "medium",
              "f": "intermediate",
              "v": "Detects unstable roaming patterns and AP handoff issues that cause latency spikes and dropped connections.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*Roaming*\" OR signature=\"*handoff*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*Roaming*\" OR signature=\"*handoff*\")\n| stats count as roam_count by client_mac, ap_name\n| eventstats sum(roam_count) as total_roams by client_mac\n| where total_roams > 20\n| sort -total_roams",
              "m": "Track client handoff events between APs. Alert when a single client roams more than threshold in a 15-minute window.",
              "z": "Table of heavy roamers; line chart of roaming frequency by client; network diagram showing roam paths.",
              "kfp": "Clients may roam often when people move between floors, during large meetings, or when access points reboot; some clients also stay 'sticky' and look noisy without a real outage.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*Roaming*\" OR signature=\"*handoff*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack client handoff events between APs. Alert when a single client roams more than threshold in a 15-minute window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*Roaming*\" OR signature=\"*handoff*\")\n| stats count as roam_count by client_mac, ap_name\n| eventstats sum(roam_count) as total_roams by client_mac\n| where total_roams > 20\n| sort -total_roams\n```\n\nUnderstanding this SPL\n\n**Excessive Client Roaming Activity (Meraki MR)** — Detects unstable roaming patterns and AP handoff issues that cause latency spikes and dropped connections.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*Roaming*\" OR signature=\"*handoff*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by client_mac, ap_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by client_mac** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where total_roams > 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of heavy roamers; line chart of roaming frequency by client; network diagram showing roam paths.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch excessive client roaming activity (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.15",
              "n": "SSID Performance Ranking and Trend Analysis (Meraki MR)",
              "c": "medium",
              "f": "intermediate",
              "v": "Compares performance across multiple SSIDs to identify underperforming networks and optimize deployment strategy.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\"\n| stats avg(connection_duration) as avg_duration, count as client_count, avg(rssi) as avg_rssi by ssid\n| eval performance_score=round((avg_rssi+100)*client_count/100, 2)\n| sort - performance_score",
              "m": "Aggregate client connection metrics by SSID. Compare average connection duration, client count, and signal strength.",
              "z": "Bar chart comparing SSID performance; sparklines for trend; scorecard showing top/bottom performers.",
              "kfp": "RF noise and channel changes can spike when neighbors deploy new gear, microwaves run, or the controller runs automatic channel updates; weather and outdoor clients can also move the numbers.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate client connection metrics by SSID. Compare average connection duration, client count, and signal strength.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\"\n| stats avg(connection_duration) as avg_duration, count as client_count, avg(rssi) as avg_rssi by ssid\n| eval performance_score=round((avg_rssi+100)*client_count/100, 2)\n| sort - performance_score\n```\n\nUnderstanding this SPL\n\n**SSID Performance Ranking and Trend Analysis (Meraki MR)** — Compares performance across multiple SSIDs to identify underperforming networks and optimize deployment strategy.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ssid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **performance_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart comparing SSID performance; sparklines for trend; scorecard showing top/bottom performers.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch ssid performance ranking and trend analysis (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.16",
              "n": "WiFi Channel Utilization and Interference Detection (Meraki MR)",
              "c": "high",
              "f": "advanced",
              "v": "Identifies channel congestion and interference sources to optimize channel assignments and reduce co-channel interference.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MR\n| stats count by channel, band\n| eval utilization_pct=round(count*100/sum(count), 2)\n| where utilization_pct > 40\n| sort - utilization_pct",
              "m": "Query API device data for MR access points; track channel assignments. Correlate with interference signature logs.",
              "z": "Stacked bar chart of channel utilization by band; channel heatmap over time; interference event timeline.",
              "kfp": "RF noise and channel changes can spike when neighbors deploy new gear, microwaves run, or the controller runs automatic channel updates; weather and outdoor clients can also move the numbers.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery API device data for MR access points; track channel assignments. Correlate with interference signature logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MR\n| stats count by channel, band\n| eval utilization_pct=round(count*100/sum(count), 2)\n| where utilization_pct > 40\n| sort - utilization_pct\n```\n\nUnderstanding this SPL\n\n**WiFi Channel Utilization and Interference Detection (Meraki MR)** — Identifies channel congestion and interference sources to optimize channel assignments and reduce co-channel interference.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by channel, band** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where utilization_pct > 40` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar chart of channel utilization by band; channel heatmap over time; interference event timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch wifi channel utilization and interference detection (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.17",
              "n": "Rogue and Unauthorized AP Detection — Air Marshal (Meraki MR)",
              "c": "critical",
              "f": "intermediate",
              "v": "Identifies unauthorized wireless networks and malicious APs that may represent security threats or network intrusion attempts.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=air_marshal`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=air_marshal signature=\"*Rogue*\" OR signature=\"*Unauthorized*\"\n| stats count by ssid, bssid, first_detected, last_seen, threat_level\n| where threat_level=\"high\" OR threat_level=\"critical\"\n| sort - first_detected",
              "m": "Enable Air Marshal on MR APs and ingest syslog events. Create alert for new rogue AP detections with risk scoring.",
              "z": "Table of detected rogues with threat indicators; map showing rogue AP locations; timeline of detections.",
              "kfp": "Failed logins often come from typos, expired passwords, guest self-service, or a single misconfigured device; treat sustained rises across many users as the real signal.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=air_marshal`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Air Marshal on MR APs and ingest syslog events. Create alert for new rogue AP detections with risk scoring.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=air_marshal signature=\"*Rogue*\" OR signature=\"*Unauthorized*\"\n| stats count by ssid, bssid, first_detected, last_seen, threat_level\n| where threat_level=\"high\" OR threat_level=\"critical\"\n| sort - first_detected\n```\n\nUnderstanding this SPL\n\n**Rogue and Unauthorized AP Detection — Air Marshal (Meraki MR)** — Identifies unauthorized wireless networks and malicious APs that may represent security threats or network intrusion attempts.\n\nDocumented **Data sources**: `sourcetype=meraki type=air_marshal`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ssid, bssid, first_detected, last_seen, threat_level** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where threat_level=\"high\" OR threat_level=\"critical\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of detected rogues with threat indicators; map showing rogue AP locations; timeline of detections.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch rogue and unauthorized ap detection — air marshal (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.18",
              "n": "Client Device Type Distribution and Compliance (Meraki MR)",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks device types connecting to network for capacity planning, security policy enforcement, and support optimization.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\"\n| stats count as device_count by os_type, device_family\n| eval pct=round(device_count*100/sum(device_count), 2)\n| sort - device_count",
              "m": "Use API clients endpoint to retrieve device OS and type information. Aggregate across network.",
              "z": "Pie chart of device types; bar chart by OS; treemap of device distribution; trend sparklines.",
              "kfp": "Wireless metrics move with user behavior, maintenance, and nearby RF; we tune alerts around change windows and known busy hours so normal days do not page the team.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse API clients endpoint to retrieve device OS and type information. Aggregate across network.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\"\n| stats count as device_count by os_type, device_family\n| eval pct=round(device_count*100/sum(device_count), 2)\n| sort - device_count\n```\n\nUnderstanding this SPL\n\n**Client Device Type Distribution and Compliance (Meraki MR)** — Tracks device types connecting to network for capacity planning, security policy enforcement, and support optimization.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by os_type, device_family** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart of device types; bar chart by OS; treemap of device distribution; trend sparklines.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch client device type distribution and compliance (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.19",
              "n": "Band Steering Effectiveness Assessment (Meraki MR)",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures effectiveness of steering clients from 2.4GHz to 5GHz bands to reduce congestion and improve performance.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\"\n| stats count as client_count by band\n| eval band_ratio=round(client_count*100/sum(client_count), 2)\n| fields band, client_count, band_ratio",
              "m": "Query clients API to get current band distribution. Compare against expected ratio for band steering policy.",
              "z": "Gauge showing 5GHz percentage; pie chart of band distribution; trend line showing steering progress.",
              "kfp": "RF noise and channel changes can spike when neighbors deploy new gear, microwaves run, or the controller runs automatic channel updates; weather and outdoor clients can also move the numbers.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery clients API to get current band distribution. Compare against expected ratio for band steering policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\"\n| stats count as client_count by band\n| eval band_ratio=round(client_count*100/sum(client_count), 2)\n| fields band, client_count, band_ratio\n```\n\nUnderstanding this SPL\n\n**Band Steering Effectiveness Assessment (Meraki MR)** — Measures effectiveness of steering clients from 2.4GHz to 5GHz bands to reduce congestion and improve performance.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by band** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **band_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Keeps or drops fields with `fields` to shape columns and size.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge showing 5GHz percentage; pie chart of band distribution; trend line showing steering progress.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch band steering effectiveness assessment (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.20",
              "n": "802.1X Authentication Failures and RADIUS Issues (Meraki MR)",
              "c": "high",
              "f": "intermediate",
              "v": "Identifies authentication server problems, credential issues, and 802.1X configuration mismatches.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*802.1X*\" OR signature=\"*Radius*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*802.1X*\" OR signature=\"*Radius*\" OR signature=\"*authentication*\")\n| stats count as auth_failures by client_mac, ap_name, signature\n| eventstats sum(auth_failures) as total_failures by client_mac\n| where total_failures > 10\n| sort -total_failures",
              "m": "Ingest 802.1X and RADIUS-related syslog events. Correlate with RADIUS server logs.",
              "z": "Table of failing clients; time-series of auth failures; client-level detail dashboard.",
              "kfp": "Failed logins often come from typos, expired passwords, guest self-service, or a single misconfigured device; treat sustained rises across many users as the real signal.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*802.1X*\" OR signature=\"*Radius*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest 802.1X and RADIUS-related syslog events. Correlate with RADIUS server logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*802.1X*\" OR signature=\"*Radius*\" OR signature=\"*authentication*\")\n| stats count as auth_failures by client_mac, ap_name, signature\n| eventstats sum(auth_failures) as total_failures by client_mac\n| where total_failures > 10\n| sort -total_failures\n```\n\nUnderstanding this SPL\n\n**802.1X Authentication Failures and RADIUS Issues (Meraki MR)** — Identifies authentication server problems, credential issues, and 802.1X configuration mismatches.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*802.1X*\" OR signature=\"*Radius*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by client_mac, ap_name, signature** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by client_mac** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where total_failures > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of failing clients; time-series of auth failures; client-level detail dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch 802.1x authentication failures and radius issues (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.21",
              "n": "Wireless Latency Analysis by SSID and Location (Meraki MR)",
              "c": "medium",
              "f": "intermediate",
              "v": "Identifies latency patterns across network to optimize AP placement, channel allocation, and client routing.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" latency=*\n| stats avg(latency) as avg_latency, max(latency) as max_latency, count by ssid, ap_name\n| eval latency_sla=\"OK\"\n| eval latency_sla=if(avg_latency > 50, \"Warning\", latency_sla)\n| eval latency_sla=if(avg_latency > 100, \"Critical\", latency_sla)",
              "m": "Use API clients endpoint with latency metric. Aggregate by SSID and AP location.",
              "z": "Heatmap of latency by AP; line chart of latency trends; SLA compliance dashboard.",
              "kfp": "Wireless metrics move with user behavior, maintenance, and nearby RF; we tune alerts around change windows and known busy hours so normal days do not page the team.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse API clients endpoint with latency metric. Aggregate by SSID and AP location.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" latency=*\n| stats avg(latency) as avg_latency, max(latency) as max_latency, count by ssid, ap_name\n| eval latency_sla=\"OK\"\n| eval latency_sla=if(avg_latency > 50, \"Warning\", latency_sla)\n| eval latency_sla=if(avg_latency > 100, \"Critical\", latency_sla)\n```\n\nUnderstanding this SPL\n\n**Wireless Latency Analysis by SSID and Location (Meraki MR)** — Identifies latency patterns across network to optimize AP placement, channel allocation, and client routing.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ssid, ap_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **latency_sla** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **latency_sla** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **latency_sla** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of latency by AP; line chart of latency trends; SLA compliance dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch wireless latency analysis by ssid and location (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.22",
              "n": "Splash Page Engagement and Redirection Analytics (Meraki MR)",
              "c": "low",
              "f": "beginner",
              "v": "Tracks guest network splash page performance and user acceptance rates for marketing and network access purposes.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*Splash*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*Splash*\"\n| stats count as redirect_count by result, ap_name\n| eval acceptance_rate=round(count*100/sum(count), 2)",
              "m": "Capture splash page interaction events from syslog. Track accepts vs. denies.",
              "z": "Pie chart of acceptance rates; funnel chart of splash interactions; time-series trending.",
              "kfp": "Wireless metrics move with user behavior, maintenance, and nearby RF; we tune alerts around change windows and known busy hours so normal days do not page the team.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*Splash*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture splash page interaction events from syslog. Track accepts vs. denies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*Splash*\"\n| stats count as redirect_count by result, ap_name\n| eval acceptance_rate=round(count*100/sum(count), 2)\n```\n\nUnderstanding this SPL\n\n**Splash Page Engagement and Redirection Analytics (Meraki MR)** — Tracks guest network splash page performance and user acceptance rates for marketing and network access purposes.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*Splash*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by result, ap_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **acceptance_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart of acceptance rates; funnel chart of splash interactions; time-series trending.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch splash page engagement and redirection analytics (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.23",
              "n": "Multicast and Broadcast Storm Detection (Meraki MR)",
              "c": "high",
              "f": "intermediate",
              "v": "Identifies multicast/broadcast flooding that degrades wireless performance across multiple client devices.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=flow`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=flow dest=\"255.255.255.255\" OR dest_mac=\"ff:ff:ff:ff:ff:ff\"\n| stats sum(sent_bytes) as total_bytes, count as pkt_count by ap_name, src_mac\n| where pkt_count > 1000\n| sort - pkt_count",
              "m": "Monitor broadcast/multicast flows in syslog. Set thresholds for abnormal packet rates.",
              "z": "Table of broadcast sources; time-series of broadcast packets; alert threshold dashboard.",
              "kfp": "Backup jobs, imaging, and video can create heavy wireless flows; confirm with the app owner before assuming abuse or a misbehaving client.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=flow`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor broadcast/multicast flows in syslog. Set thresholds for abnormal packet rates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=flow dest=\"255.255.255.255\" OR dest_mac=\"ff:ff:ff:ff:ff:ff\"\n| stats sum(sent_bytes) as total_bytes, count as pkt_count by ap_name, src_mac\n| where pkt_count > 1000\n| sort - pkt_count\n```\n\nUnderstanding this SPL\n\n**Multicast and Broadcast Storm Detection (Meraki MR)** — Identifies multicast/broadcast flooding that degrades wireless performance across multiple client devices.\n\nDocumented **Data sources**: `sourcetype=meraki type=flow`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ap_name, src_mac** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where pkt_count > 1000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of broadcast sources; time-series of broadcast packets; alert threshold dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch multicast and broadcast storm detection (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.24",
              "n": "Wireless Health Score Trending (Meraki MR)",
              "c": "medium",
              "f": "intermediate",
              "v": "Provides a composite health metric across all APs to facilitate executive reporting and trend analysis.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api device_type=MR`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MR\n| stats avg(health_score) as network_health, min(health_score) as worst_ap, count(eval(health_score<80)) as unhealthy_aps by network_id\n| eval health_status=if(network_health >= 85, \"Healthy\", if(network_health >= 70, \"Degraded\", \"Critical\"))",
              "m": "Pull health_score metric from MR devices API. Aggregate across network.",
              "z": "Gauge of overall health; bar chart of individual AP health; trend sparkline; KPI dashboard.",
              "kfp": "Wireless metrics move with user behavior, maintenance, and nearby RF; we tune alerts around change windows and known busy hours so normal days do not page the team.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api device_type=MR`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPull health_score metric from MR devices API. Aggregate across network.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MR\n| stats avg(health_score) as network_health, min(health_score) as worst_ap, count(eval(health_score<80)) as unhealthy_aps by network_id\n| eval health_status=if(network_health >= 85, \"Healthy\", if(network_health >= 70, \"Degraded\", \"Critical\"))\n```\n\nUnderstanding this SPL\n\n**Wireless Health Score Trending (Meraki MR)** — Provides a composite health metric across all APs to facilitate executive reporting and trend analysis.\n\nDocumented **Data sources**: `sourcetype=meraki:api device_type=MR`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by network_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **health_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge of overall health; bar chart of individual AP health; trend sparkline; KPI dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch wireless health score trending (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.25",
              "n": "Connected Client Count Trending and Capacity Planning (Meraki MR)",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks client density by AP and SSID for capacity planning and performance optimization.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\"\n| stats count as client_count by ap_name, ssid\n| eval capacity_pct=round(client_count*100/30, 2)\n| where capacity_pct > 70\n| sort - client_count",
              "m": "Query clients API to count connected devices. Track over time.",
              "z": "Bubble chart of capacity by AP; stacked bar of clients by SSID; capacity gauge.",
              "kfp": "Wireless client counts spike during shift changes, big events, or back-to-school style rushes; compare against the calendar before calling it an incident.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery clients API to count connected devices. Track over time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\"\n| stats count as client_count by ap_name, ssid\n| eval capacity_pct=round(client_count*100/30, 2)\n| where capacity_pct > 70\n| sort - client_count\n```\n\nUnderstanding this SPL\n\n**Connected Client Count Trending and Capacity Planning (Meraki MR)** — Tracks client density by AP and SSID for capacity planning and performance optimization.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ap_name, ssid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **capacity_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where capacity_pct > 70` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bubble chart of capacity by AP; stacked bar of clients by SSID; capacity gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch connected client count trending and capacity planning (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.26",
              "n": "Top Talker Analysis and Bandwidth Hogs (Meraki MR)",
              "c": "medium",
              "f": "intermediate",
              "v": "Identifies bandwidth-intensive clients and applications to enforce QoS policies and prevent network congestion.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=flow`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=flow\n| stats sum(sent_bytes) as upload_bytes, sum(received_bytes) as download_bytes by client_mac, application\n| eval total_bytes=upload_bytes+download_bytes\n| sort -total_bytes\n| head 20",
              "m": "Analyze flow records from syslog; track data usage by client and application.",
              "z": "Table of top talkers; horizontal bar chart of data usage; Sankey diagram of flows.",
              "kfp": "Backup jobs, imaging, and video can create heavy wireless flows; confirm with the app owner before assuming abuse or a misbehaving client.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=flow`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAnalyze flow records from syslog; track data usage by client and application.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=flow\n| stats sum(sent_bytes) as upload_bytes, sum(received_bytes) as download_bytes by client_mac, application\n| eval total_bytes=upload_bytes+download_bytes\n| sort -total_bytes\n| head 20\n```\n\nUnderstanding this SPL\n\n**Top Talker Analysis and Bandwidth Hogs (Meraki MR)** — Identifies bandwidth-intensive clients and applications to enforce QoS policies and prevent network congestion.\n\nDocumented **Data sources**: `sourcetype=meraki type=flow`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by client_mac, application** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total_bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of top talkers; horizontal bar chart of data usage; Sankey diagram of flows.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch top talker analysis and bandwidth hogs (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.27",
              "n": "Connection Duration and Session Quality (Meraki MR)",
              "c": "low",
              "f": "intermediate",
              "v": "Analyzes typical session lengths and stability to identify problematic SSIDs or time-based issues.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" connection_duration=*\n| stats avg(connection_duration) as avg_session_time, min(connection_duration) as min_session, max(connection_duration) as max_session by ssid\n| eval session_quality=if(avg_session_time > 3600, \"Stable\", \"Short\")",
              "m": "Extract connection_duration from clients API. Aggregate by SSID and time of day.",
              "z": "Histogram of session durations; time-of-day heatmap; SSID comparison chart.",
              "kfp": "Wireless metrics move with user behavior, maintenance, and nearby RF; we tune alerts around change windows and known busy hours so normal days do not page the team.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExtract connection_duration from clients API. Aggregate by SSID and time of day.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" connection_duration=*\n| stats avg(connection_duration) as avg_session_time, min(connection_duration) as min_session, max(connection_duration) as max_session by ssid\n| eval session_quality=if(avg_session_time > 3600, \"Stable\", \"Short\")\n```\n\nUnderstanding this SPL\n\n**Connection Duration and Session Quality (Meraki MR)** — Analyzes typical session lengths and stability to identify problematic SSIDs or time-based issues.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ssid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **session_quality** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram of session durations; time-of-day heatmap; SSID comparison chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch connection duration and session quality (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.28",
              "n": "AP Uptime and Availability Monitoring (Meraki MR)",
              "c": "critical",
              "f": "beginner",
              "v": "Ensures all access points are online and operational; alerts on unexpected AP outages.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api device_type=MR`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MR\n| stats latest(status) as ap_status, latest(last_status_change) as last_change by ap_name, ap_mac\n| where ap_status=\"offline\"",
              "m": "Monitor device status API for all MR devices. Alert on status=\"offline\".",
              "z": "Status table with last seen time; uptime percentage gauge; event alert dashboard.",
              "kfp": "Access points may go offline during scheduled firmware updates, PoE switch reboots, cabling work, or RF site surveys, which can look like an outage without a real coverage problem.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api device_type=MR`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor device status API for all MR devices. Alert on status=\"offline\".\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MR\n| stats latest(status) as ap_status, latest(last_status_change) as last_change by ap_name, ap_mac\n| where ap_status=\"offline\"\n```\n\nUnderstanding this SPL\n\n**AP Uptime and Availability Monitoring (Meraki MR)** — Ensures all access points are online and operational; alerts on unexpected AP outages.\n\nDocumented **Data sources**: `sourcetype=meraki:api device_type=MR`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ap_name, ap_mac** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ap_status=\"offline\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status table with last seen time; uptime percentage gauge; event alert dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch ap uptime and availability monitoring (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.29",
              "n": "Mesh Network Link Quality and Backhaul Health (Meraki MR)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors wireless mesh backhaul links to ensure reliability of remote AP connections.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api` (MR), `sourcetype=meraki` (events, e.g. `type=security_event`)",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MR mesh_link_quality=*\n| stats avg(mesh_link_quality) as avg_link_quality by ap_name, upstream_ap\n| where avg_link_quality < 70\n| sort avg_link_quality",
              "m": "Query MR device API for mesh_link_quality metric. Alert on degraded quality (<70%).",
              "z": "Network topology showing link quality; color-coded links; detail table with metrics.",
              "kfp": "Wireless metrics move with user behavior, maintenance, and nearby RF; we tune alerts around change windows and known busy hours so normal days do not page the team.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api` (MR), `sourcetype=meraki` (events, e.g. `type=security_event`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery MR device API for mesh_link_quality metric. Alert on degraded quality (<70%).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MR mesh_link_quality=*\n| stats avg(mesh_link_quality) as avg_link_quality by ap_name, upstream_ap\n| where avg_link_quality < 70\n| sort avg_link_quality\n```\n\nUnderstanding this SPL\n\n**Mesh Network Link Quality and Backhaul Health (Meraki MR)** — Monitors wireless mesh backhaul links to ensure reliability of remote AP connections.\n\nDocumented **Data sources**: `sourcetype=meraki:api` (MR), `sourcetype=meraki` (events, e.g. `type=security_event`). **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ap_name, upstream_ap** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_link_quality < 70` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Network topology showing link quality; color-coded links; detail table with metrics.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mesh network link quality and backhaul health (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.30",
              "n": "Guest Network Access Patterns and Usage (Meraki MR)",
              "c": "low",
              "f": "intermediate",
              "v": "Tracks guest network adoption, usage patterns, and peak times for network provisioning.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api ssid=\"guest*\"`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" ssid=\"guest\"\n| stats count as guest_users by _time\n| timechart avg(guest_users) as avg_concurrent_guests",
              "m": "Filter clients API results for guest SSIDs. Track concurrent count over time.",
              "z": "Time-series of guest users; daily/weekly heatmap; trend dashboard.",
              "kfp": "Wireless metrics move with user behavior, maintenance, and nearby RF; we tune alerts around change windows and known busy hours so normal days do not page the team.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api ssid=\"guest*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFilter clients API results for guest SSIDs. Track concurrent count over time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" ssid=\"guest\"\n| stats count as guest_users by _time\n| timechart avg(guest_users) as avg_concurrent_guests\n```\n\nUnderstanding this SPL\n\n**Guest Network Access Patterns and Usage (Meraki MR)** — Tracks guest network adoption, usage patterns, and peak times for network provisioning.\n\nDocumented **Data sources**: `sourcetype=meraki:api ssid=\"guest*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time-series of guest users; daily/weekly heatmap; trend dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch guest network access patterns and usage (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.31",
              "n": "WiFi Geolocation and Location Analytics (Meraki MR)",
              "c": "low",
              "f": "beginner",
              "v": "Uses Cisco Meraki location services to track foot traffic patterns and heat maps in physical spaces.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api location_data=*`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" ap_name=*\n| stats count as foot_traffic by ap_name, floor\n| geom geo_from_metric lat, lon",
              "m": "Use Meraki location API to get AP-based location estimates. Map to floor/zone.",
              "z": "Heat map by physical location; AP heat map overlay; zone traffic comparison.",
              "kfp": "Wireless metrics move with user behavior, maintenance, and nearby RF; we tune alerts around change windows and known busy hours so normal days do not page the team.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api location_data=*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Meraki location API to get AP-based location estimates. Map to floor/zone.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" ap_name=*\n| stats count as foot_traffic by ap_name, floor\n| geom geo_from_metric lat, lon\n```\n\nUnderstanding this SPL\n\n**WiFi Geolocation and Location Analytics (Meraki MR)** — Uses Cisco Meraki location services to track foot traffic patterns and heat maps in physical spaces.\n\nDocumented **Data sources**: `sourcetype=meraki:api location_data=*`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ap_name, floor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **WiFi Geolocation and Location Analytics (Meraki MR)**): geom geo_from_metric lat, lon\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heat map by physical location; AP heat map overlay; zone traffic comparison.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch wifi geolocation and location analytics (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.32",
              "n": "Wireless Client Association and Roaming Failures (Meraki MR)",
              "c": "medium",
              "f": "intermediate",
              "v": "High association failure or roaming failure rates indicate coverage gaps, interference, or AP misconfiguration. Trending supports WLAN troubleshooting and capacity planning.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580), `TA-cisco_ios` (WLC), wireless controller logs",
              "d": "Meraki wireless events, Cisco WLC syslog",
              "q": "index=cisco_network sourcetype=meraki:wireless (event_type=\"association_failed\" OR event_type=\"roam_failed\")\n| bin _time span=15m\n| stats count by ap_serial, ssid, _time\n| where count > 20\n| sort -count",
              "m": "Ingest wireless client events from Meraki or WLC. Extract association and roam outcomes. Alert when failure rate exceeds threshold per AP or SSID. Dashboard by location and time.",
              "z": "Table (AP, SSID, failures), Line chart (failure rate over time), Heatmap (AP by location).",
              "kfp": "Clients may roam often when people move between floors, during large meetings, or when access points reboot; some clients also stay 'sticky' and look noisy without a real outage.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), `TA-cisco_ios` (WLC), wireless controller logs.\n• Ensure the following data sources are available: Meraki wireless events, Cisco WLC syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest wireless client events from Meraki or WLC. Extract association and roam outcomes. Alert when failure rate exceeds threshold per AP or SSID. Dashboard by location and time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=meraki:wireless (event_type=\"association_failed\" OR event_type=\"roam_failed\")\n| bin _time span=15m\n| stats count by ap_serial, ssid, _time\n| where count > 20\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Wireless Client Association and Roaming Failures (Meraki MR)** — High association failure or roaming failure rates indicate coverage gaps, interference, or AP misconfiguration. Trending supports WLAN troubleshooting and capacity planning.\n\nDocumented **Data sources**: Meraki wireless events, Cisco WLC syslog. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), `TA-cisco_ios` (WLC), wireless controller logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:wireless. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=meraki:wireless. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by ap_serial, ssid, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (AP, SSID, failures), Line chart (failure rate over time), Heatmap (AP by location).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch wireless client association and roaming failures (meraki mr) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ios",
                "cisco_meraki",
                "cisco_wlc"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.33",
              "n": "AP Health and Radio Status Monitoring (HPE Aruba)",
              "c": "high",
              "f": "beginner",
              "v": "Aruba APs report radio status (up/down per band), CPU/memory utilization, and uptime via controller syslog and Aruba Central API. A radio stuck in \"down\" state creates a coverage hole on that band. Monitor per-radio health across the fleet to proactively address failing hardware and capacity issues.",
              "t": "`Aruba Networks Add-on for Splunk` (Splunkbase 4668), Aruba Central API (scripted input or HEC)",
              "d": "`sourcetype=aruba:syslog`, Aruba Central API (AP/radio inventory and health)",
              "q": "index=network sourcetype=\"aruba:syslog\"\n| eval ap=coalesce(ap_name, caller_ap_name, device_name, ap_mac)\n| eval radio_st=coalesce(radio_oper_status, radio_status, oper_status)\n| eval band=coalesce(radio_band, freq_band, band)\n| bin _time span=5m\n| stats latest(radio_st) as radio_state, latest(cpu_utilization_pct) as cpu_pct, latest(memory_utilization_pct) as mem_pct, latest(uptime_seconds) as uptime_sec by ap, band, ap_group, controller_name\n| where like(lower(radio_state), \"%down%\") OR like(lower(radio_state), \"%off%\") OR cpu_pct>85 OR mem_pct>85\n| sort ap_group ap band",
              "m": "Forward Mobility Controller / gateway syslog to Splunk with the Aruba TA (field aliases for `ap_name`, per-radio operational state, CPU/memory, uptime). Optionally poll Aruba Central for AP inventory and merge with syslog for sites without local controllers. Alert on any radio not `up`, sustained high CPU/memory, or APs with abnormal uptime resets.",
              "z": "Table (AP, band, radio state, CPU, memory, uptime), Single value (APs with down radios), Timechart (unhealthy AP count), Map or site breakdown (by `ap_group` / zone).",
              "kfp": "Wireless metrics move with user behavior, maintenance, and nearby RF; we tune alerts around change windows and known busy hours so normal days do not page the team.",
              "refs": "[Splunkbase app 4668](https://splunkbase.splunk.com/app/4668)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Aruba Networks Add-on for Splunk` (Splunkbase 4668), Aruba Central API (scripted input or HEC).\n• Ensure the following data sources are available: `sourcetype=aruba:syslog`, Aruba Central API (AP/radio inventory and health).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Mobility Controller / gateway syslog to Splunk with the Aruba TA (field aliases for `ap_name`, per-radio operational state, CPU/memory, uptime). Optionally poll Aruba Central for AP inventory and merge with syslog for sites without local controllers. Alert on any radio not `up`, sustained high CPU/memory, or APs with abnormal uptime resets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"aruba:syslog\"\n| eval ap=coalesce(ap_name, caller_ap_name, device_name, ap_mac)\n| eval radio_st=coalesce(radio_oper_status, radio_status, oper_status)\n| eval band=coalesce(radio_band, freq_band, band)\n| bin _time span=5m\n| stats latest(radio_st) as radio_state, latest(cpu_utilization_pct) as cpu_pct, latest(memory_utilization_pct) as mem_pct, latest(uptime_seconds) as uptime_sec by ap, band, ap_group, controller_name\n| where like(lower(radio_state), \"%down%\") OR like(lower(radio_state), \"%off%\") OR cpu_pct>85 OR mem_pct>85\n| sort ap_group ap band\n```\n\nUnderstanding this SPL\n\n**AP Health and Radio Status Monitoring (HPE Aruba)** — Aruba APs report radio status (up/down per band), CPU/memory utilization, and uptime via controller syslog and Aruba Central API. A radio stuck in \"down\" state creates a coverage hole on that band. Monitor per-radio health across the fleet to proactively address failing hardware and capacity issues.\n\nDocumented **Data sources**: `sourcetype=aruba:syslog`, Aruba Central API (AP/radio inventory and health). **App/TA** (typical add-on context): `Aruba Networks Add-on for Splunk` (Splunkbase 4668), Aruba Central API (scripted input or HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: aruba:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"aruba:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **radio_st** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **band** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by ap, band, ap_group, controller_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where like(lower(radio_state), \"%down%\") OR like(lower(radio_state), \"%off%\") OR cpu_pct>85 OR mem_pct>85` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (AP, band, radio state, CPU, memory, uptime), Single value (APs with down radios), Timechart (unhealthy AP count), Map or site breakdown (by `ap_group` / zone).",
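The implementation notes above also call for alerting on abnormal uptime resets, which the main query does not cover. A minimal sketch, assuming the TA extracts a numeric `uptime_seconds` per AP (all field names here are placeholders; rename to match your extractions):

```spl
index=network sourcetype="aruba:syslog" uptime_seconds=*
| eval ap=coalesce(ap_name, caller_ap_name, device_name, ap_mac)
| sort 0 ap _time
| streamstats current=f last(uptime_seconds) as prev_uptime by ap
| where uptime_seconds < prev_uptime
| stats count as resets, latest(_time) as last_reset by ap
```

An AP whose reported uptime goes backwards between consecutive events has almost certainly rebooted; repeated resets on the same AP point at failing hardware or power problems.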
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "HPE Aruba AP-303/AP-505/AP-515/AP-535/AP-555/AP-635, Aruba Instant On AP11/AP15/AP22/AP25",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch ap health and radio status monitoring (hpe aruba) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.34",
              "n": "Aruba ClearPass RADIUS Authentication Health (HPE Aruba)",
              "c": "critical",
              "f": "intermediate",
              "v": "ClearPass Policy Manager is the authentication backbone for Aruba wireless networks, handling 802.1X, MAC auth, and captive portal. RADIUS authentication failures, timeouts, and server unreachability directly prevent users from connecting. Track auth success/failure ratios, latency, and server health.",
              "t": "`HPE Aruba ClearPass App for Splunk` (Splunkbase 7865)",
              "d": "`sourcetype=aruba:clearpass`",
              "q": "index=network sourcetype=\"aruba:clearpass\" (\"RADIUS\" OR TipsService=\"RADIUS\" OR module=\"RADIUS\")\n| eval result=coalesce(Enforcement_Result, Auth_Result, Status, if(match(_raw,\"Access-Accept\"),\"Accept\",if(match(_raw,\"Access-Reject\"),\"Reject\",null())))\n| rex field=_raw max_match=0 \"(?i)Access-(?<radius_reply>Accept|Reject|Challenge)\"\n| eval outcome=coalesce(result, radius_reply)\n| eval latency_ms=coalesce(request_latency_ms, Radius_Request_Time, elapsed_ms, duration_ms)\n| eval is_timeout=if(match(_raw,\"(?i)timeout|timed out|server.unreachable|no.response.from\"),1,0)\n| stats count as events, sum(is_timeout) as timeouts, avg(latency_ms) as avg_latency_ms by outcome, radius_server, nas_ip\n| where like(outcome,\"%Reject%\") OR like(lower(outcome),\"%fail%\") OR timeouts>0 OR avg_latency_ms>500\n| sort -events",
              "m": "Ingest ClearPass access tracker and RADIUS-related logs via the ClearPass app. Normalize `Accept`/`Reject`/`Challenge` and timeout/unreachable patterns. Alert when reject rate or timeouts spike versus baseline, or when average RADIUS latency exceeds policy (e.g. 500ms). Segment by `nas_ip` (controller/AP cluster) to isolate WLAN vs ClearPass issues.",
              "z": "Timechart (accept vs reject vs timeout), Bar chart (outcomes by NAS), Table (radius_server, NAS, latency, counts), Single value (auth availability %).",
              "kfp": "Failed logins often come from typos, expired passwords, guest self-service, or a single misconfigured device; treat sustained rises across many users as the real signal.",
              "refs": "[Splunkbase app 7865](https://splunkbase.splunk.com/app/7865), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `HPE Aruba ClearPass App for Splunk` (Splunkbase 7865).\n• Ensure the following data sources are available: `sourcetype=aruba:clearpass`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest ClearPass access tracker and RADIUS-related logs via the ClearPass app. Normalize `Accept`/`Reject`/`Challenge` and timeout/unreachable patterns. Alert when reject rate or timeouts spike versus baseline, or when average RADIUS latency exceeds policy (e.g. 500ms). Segment by `nas_ip` (controller/AP cluster) to isolate WLAN vs ClearPass issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"aruba:clearpass\" (\"RADIUS\" OR TipsService=\"RADIUS\" OR module=\"RADIUS\")\n| eval result=coalesce(Enforcement_Result, Auth_Result, Status, if(match(_raw,\"Access-Accept\"),\"Accept\",if(match(_raw,\"Access-Reject\"),\"Reject\",null())))\n| rex field=_raw max_match=0 \"(?i)Access-(?<radius_reply>Accept|Reject|Challenge)\"\n| eval outcome=coalesce(result, radius_reply)\n| eval latency_ms=coalesce(request_latency_ms, Radius_Request_Time, elapsed_ms, duration_ms)\n| eval is_timeout=if(match(_raw,\"(?i)timeout|timed out|server.unreachable|no.response.from\"),1,0)\n| stats count as events, sum(is_timeout) as timeouts, avg(latency_ms) as avg_latency_ms by outcome, radius_server, nas_ip\n| where like(outcome,\"%Reject%\") OR like(lower(outcome),\"%fail%\") OR timeouts>0 OR avg_latency_ms>500\n| sort -events\n```\n\nUnderstanding this SPL\n\n**Aruba ClearPass RADIUS Authentication Health (HPE Aruba)** — ClearPass Policy Manager is the authentication backbone for Aruba wireless networks, handling 802.1X, MAC auth, and captive portal. RADIUS authentication failures, timeouts, and server unreachability directly prevent users from connecting. Track auth success/failure ratios, latency, and server health.\n\nDocumented **Data sources**: `sourcetype=aruba:clearpass`. **App/TA** (typical add-on context): `HPE Aruba ClearPass App for Splunk` (Splunkbase 7865). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: aruba:clearpass. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"aruba:clearpass\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **result** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **outcome** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_timeout** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by outcome, radius_server, nas_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where like(outcome,\"%Reject%\") OR like(lower(outcome),\"%fail%\") OR timeouts>0 OR avg_latency_ms>500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Aruba ClearPass RADIUS Authentication Health (HPE Aruba)** — ClearPass Policy Manager is the authentication backbone for Aruba wireless networks, handling 802.1X, MAC auth, and captive portal. RADIUS authentication failures, timeouts, and server unreachability directly prevent users from connecting. Track auth success/failure ratios, latency, and server health.\n\nDocumented **Data sources**: `sourcetype=aruba:clearpass`. **App/TA** (typical add-on context): `HPE Aruba ClearPass App for Splunk` (Splunkbase 7865). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (accept vs reject vs timeout), Bar chart (outcomes by NAS), Table (radius_server, NAS, latency, counts), Single value (auth availability %).",
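The implementation above says to alert when the reject rate spikes versus baseline. One hedged way to express that in SPL, using a rolling 24-hour baseline over 15-minute buckets (the 3-sigma band and the 20-event floor are starting points, not policy):

```spl
index=network sourcetype="aruba:clearpass" ("Access-Accept" OR "Access-Reject")
| eval outcome=if(match(_raw, "Access-Reject"), "reject", "accept")
| timechart span=15m count(eval(outcome="reject")) as rejects, count as total
| eval reject_pct=if(total>0, round(100*rejects/total, 1), 0)
| streamstats current=f window=96 avg(reject_pct) as baseline, stdev(reject_pct) as sd
| where total>20 AND reject_pct > baseline + 3*sd
```

The `total>20` guard keeps quiet periods from alerting on a handful of events; 96 buckets of 15 minutes is the trailing 24 hours.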
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "HPE Aruba ClearPass C1000/C2000/C3000, ClearPass Virtual Appliance",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch aruba clearpass radius authentication health (hpe aruba) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 10",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.35",
              "n": "Aruba Air Monitor — WIDS/WIPS Events (HPE Aruba)",
              "c": "high",
              "f": "intermediate",
              "v": "Aruba's Wireless Intrusion Detection and Prevention System (WIDS/WIPS) detects rogue APs, evil twin attacks, ad-hoc networks, unauthorized bridges, and DoS attacks (deauthentication floods, association floods). Air Monitor (AM) mode APs or hybrid APs provide dedicated RF security scanning.",
              "t": "`Aruba Networks Add-on for Splunk` (Splunkbase 4668)",
              "d": "`sourcetype=aruba:syslog`",
              "q": "index=network sourcetype=\"aruba:syslog\" (category=\"SECURITY\" OR subsystem=\"wids\" OR subsystem=\"WIDS\" OR match(_raw, \"(?i)(rogue|evil.twin|ad-hoc|deauth|disassoc).*(flood|detected|attack|alert)\"))\n| eval threat_type=coalesce(wids_classification, threat_name, intrusion_type, ids_signature, alert_type)\n| eval sev=coalesce(severity, threat_severity, priority)\n| stats count by threat_type, sev, ap_name, channel, detecting_ap, bssid\n| sort -count",
              "m": "Enable WIDS/WIPS and AM-capable APs per Aruba design guide; ensure security-class syslog messages are forwarded with TA parsing for threat category and severity. Tune alerts for critical classes (rogue AP, evil twin, deauth flood). Correlate with physical site/AP layout for containment workflows.",
              "z": "Table (threat type, severity, channel, detecting AP), Bar chart (threats by type), Timeline (WIDS event rate), Map or floor plan overlay when location fields exist.",
              "kfp": "Neighbor networks, personal hotspots, or test labs can look like rogues; confirm against known nearby SSIDs and change windows before escalation.",
              "refs": "[Splunkbase app 4668](https://splunkbase.splunk.com/app/4668), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Aruba Networks Add-on for Splunk` (Splunkbase 4668).\n• Ensure the following data sources are available: `sourcetype=aruba:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable WIDS/WIPS and AM-capable APs per Aruba design guide; ensure security-class syslog messages are forwarded with TA parsing for threat category and severity. Tune alerts for critical classes (rogue AP, evil twin, deauth flood). Correlate with physical site/AP layout for containment workflows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"aruba:syslog\" (category=\"SECURITY\" OR subsystem=\"wids\" OR subsystem=\"WIDS\" OR match(_raw, \"(?i)(rogue|evil.twin|ad-hoc|deauth|disassoc).*(flood|detected|attack|alert)\"))\n| eval threat_type=coalesce(wids_classification, threat_name, intrusion_type, ids_signature, alert_type)\n| eval sev=coalesce(severity, threat_severity, priority)\n| stats count by threat_type, sev, ap_name, channel, detecting_ap, bssid\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Aruba Air Monitor — WIDS/WIPS Events (HPE Aruba)** — Aruba's Wireless Intrusion Detection and Prevention System (WIDS/WIPS) detects rogue APs, evil twin attacks, ad-hoc networks, unauthorized bridges, and DoS attacks (deauthentication floods, association floods). Air Monitor (AM) mode APs or hybrid APs provide dedicated RF security scanning.\n\nDocumented **Data sources**: `sourcetype=aruba:syslog`. **App/TA** (typical add-on context): `Aruba Networks Add-on for Splunk` (Splunkbase 4668). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: aruba:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"aruba:syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **threat_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sev** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by threat_type, sev, ap_name, channel, detecting_ap, bssid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection\n  by IDS_Attacks.signature, IDS_Attacks.severity, IDS_Attacks.src, IDS_Attacks.dest span=1h\n| where count > 0\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Aruba Air Monitor — WIDS/WIPS Events (HPE Aruba)** — Aruba's Wireless Intrusion Detection and Prevention System (WIDS/WIPS) detects rogue APs, evil twin attacks, ad-hoc networks, unauthorized bridges, and DoS attacks (deauthentication floods, association floods). 
Air Monitor (AM) mode APs or hybrid APs provide dedicated RF security scanning.\n\nDocumented **Data sources**: `sourcetype=aruba:syslog`. **App/TA** (typical add-on context): `Aruba Networks Add-on for Splunk` (Splunkbase 4668). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection` — enable acceleration for that model.\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (threat type, severity, channel, detecting AP), Bar chart (threats by type), Timeline (WIDS event rate), Map or floor plan overlay when location fields exist.",
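For the deauthentication-flood class specifically, a rate-based alert is often more actionable than a raw event listing. A minimal sketch, assuming the TA extracts `detecting_ap` and `bssid` (the keyword scoping and the threshold of 20 events per 5 minutes are placeholders to tune):

```spl
index=network sourcetype="aruba:syslog" deauth
| bin _time span=5m
| stats count as deauth_events, dc(bssid) as targeted_bssids by _time, detecting_ap
| where deauth_events > 20
| sort - deauth_events
```

A single misbehaving client yields a low `targeted_bssids` count; a flood spread across many BSSIDs is a stronger attack signal.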
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "HPE Aruba AP-505/AP-515/AP-535/AP-555/AP-635 (Air Monitor mode), Aruba 7000/7200 Mobility Controllers",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch aruba air monitor — wids/wips events (hpe aruba) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection\n  by IDS_Attacks.signature, IDS_Attacks.severity, IDS_Attacks.src, IDS_Attacks.dest span=1h\n| where count > 0\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.36",
              "n": "Aruba Dynamic Segmentation Policy Enforcement (HPE Aruba)",
              "c": "high",
              "f": "intermediate",
              "v": "Aruba Dynamic Segmentation assigns users and devices to virtual networks based on ClearPass role and policy, enforced at the Aruba gateway. Policy enforcement failures mean devices get wrong access levels — either too permissive (security risk) or too restrictive (business impact). Monitor role assignment, gateway tunnel status, and policy hits.",
              "t": "`Aruba Networks Add-on for Splunk` (Splunkbase 4668), `HPE Aruba ClearPass App for Splunk` (Splunkbase 7865)",
              "d": "`sourcetype=aruba:syslog`, `sourcetype=aruba:clearpass`",
              "q": "index=network (sourcetype=\"aruba:syslog\" OR sourcetype=\"aruba:clearpass\") (\"role\" OR \"User-Role\" OR \"user role\" OR \"tunnel\" OR \"UBT\" OR \"gateway\" OR \"enforce\")\n| eval assigned_role=coalesce(aruba_user_role, TipsRole, user_role, Role_Name, derived_role)\n| eval gw=coalesce(gateway_name, gateway_ip, cluster_name)\n| eval tunnel_st=coalesce(tunnel_status, tunnel_state, ubt_status)\n| stats dc(client_mac) as endpoints, dc(username) as users, count as events by assigned_role, gw, tunnel_st\n| where isnull(tunnel_st) OR like(lower(tunnel_st),\"%down%\") OR like(lower(tunnel_st),\"%fail%\") OR match(lower(assigned_role),\"(?i)deny|reject|quarantine|unknown\")\n| sort -endpoints",
              "m": "Ingest gateway and switch UBT/syslog role-assignment and tunnel events alongside ClearPass enforcement logs. Build dashboards for role distribution per gateway and alert on tunnel down, role `deny`, or default catch-all role spikes. Validate after policy changes that expected roles appear for test users.",
              "z": "Sankey or table (role → gateway → tunnel state), Bar chart (endpoints by role), Timechart (tunnel failures), Table (users or MACs with unexpected roles).",
              "kfp": "Wireless metrics move with user behavior, maintenance, and nearby RF; we tune alerts around change windows and known busy hours so normal days do not page the team.",
              "refs": "[Splunkbase app 4668](https://splunkbase.splunk.com/app/4668), [Splunkbase app 7865](https://splunkbase.splunk.com/app/7865), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Aruba Networks Add-on for Splunk` (Splunkbase 4668), `HPE Aruba ClearPass App for Splunk` (Splunkbase 7865).\n• Ensure the following data sources are available: `sourcetype=aruba:syslog`, `sourcetype=aruba:clearpass`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest gateway and switch UBT/syslog role-assignment and tunnel events alongside ClearPass enforcement logs. Build dashboards for role distribution per gateway and alert on tunnel down, role `deny`, or default catch-all role spikes. Validate after policy changes that expected roles appear for test users.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"aruba:syslog\" OR sourcetype=\"aruba:clearpass\") (\"role\" OR \"User-Role\" OR \"user role\" OR \"tunnel\" OR \"UBT\" OR \"gateway\" OR \"enforce\")\n| eval assigned_role=coalesce(aruba_user_role, TipsRole, user_role, Role_Name, derived_role)\n| eval gw=coalesce(gateway_name, gateway_ip, cluster_name)\n| eval tunnel_st=coalesce(tunnel_status, tunnel_state, ubt_status)\n| stats dc(client_mac) as endpoints, dc(username) as users, count as events by assigned_role, gw, tunnel_st\n| where isnull(tunnel_st) OR like(lower(tunnel_st),\"%down%\") OR like(lower(tunnel_st),\"%fail%\") OR match(lower(assigned_role),\"(?i)deny|reject|quarantine|unknown\")\n| sort -endpoints\n```\n\nUnderstanding this SPL\n\n**Aruba Dynamic Segmentation Policy Enforcement (HPE Aruba)** — Aruba Dynamic Segmentation assigns users and devices to virtual networks based on ClearPass role and policy, enforced at the Aruba gateway. Policy enforcement failures mean devices get wrong access levels — either too permissive (security risk) or too restrictive (business impact). Monitor role assignment, gateway tunnel status, and policy hits.\n\nDocumented **Data sources**: `sourcetype=aruba:syslog`, `sourcetype=aruba:clearpass`. **App/TA** (typical add-on context): `Aruba Networks Add-on for Splunk` (Splunkbase 4668), `HPE Aruba ClearPass App for Splunk` (Splunkbase 7865). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: aruba:syslog, aruba:clearpass. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"aruba:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **assigned_role** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **gw** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **tunnel_st** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by assigned_role, gw, tunnel_st** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where isnull(tunnel_st) OR like(lower(tunnel_st),\"%down%\") OR like(lower(tunnel_st),\"%fail%\") OR match(lower(assigned_role)…` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Aruba Dynamic Segmentation Policy Enforcement (HPE Aruba)** — Aruba Dynamic Segmentation assigns users and devices to virtual networks based on ClearPass role and policy, enforced at the Aruba gateway. Policy enforcement failures mean devices get wrong access levels — either too permissive (security risk) or too restrictive (business impact). Monitor role assignment, gateway tunnel status, and policy hits.\n\nDocumented **Data sources**: `sourcetype=aruba:syslog`, `sourcetype=aruba:clearpass`. **App/TA** (typical add-on context): `Aruba Networks Add-on for Splunk` (Splunkbase 4668), `HPE Aruba ClearPass App for Splunk` (Splunkbase 7865). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey or table (role → gateway → tunnel state), Bar chart (endpoints by role), Timechart (tunnel failures), Table (users or MACs with unexpected roles).",
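The validation step ("expected roles appear for test users") can be automated against a small lookup. A sketch assuming a hypothetical `expected_roles` CSV lookup with columns `username` and `expected_role` (create and share the lookup before using this):

```spl
index=network (sourcetype="aruba:syslog" OR sourcetype="aruba:clearpass") ("role" OR "User-Role")
| eval assigned_role=coalesce(aruba_user_role, TipsRole, user_role, Role_Name, derived_role)
| lookup expected_roles username OUTPUT expected_role
| where isnotnull(expected_role) AND assigned_role!=expected_role
| stats count, latest(assigned_role) as actual_role, values(expected_role) as expected_role by username
```

Run it after each policy push; any row is a test user whose live role has drifted from the documented intent.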
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "HPE Aruba 9004/9012 Gateways, Aruba CX switches (UBT), HPE Aruba ClearPass C1000/C2000/C3000, ClearPass Virtual Appliance",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch aruba dynamic segmentation policy enforcement (hpe aruba) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.37",
              "n": "Aruba Client Experience and Connectivity Score (HPE Aruba)",
              "c": "medium",
              "f": "intermediate",
              "v": "Aruba Central provides per-client connectivity scores based on association time, authentication time, DHCP time, DNS resolution time, and throughput. Low scores identify problematic clients, congested APs, or misconfigured SSIDs before users report issues. Trending scores over time validates infrastructure changes.",
              "t": "Custom HEC or scripted input (Aruba Central REST API — client health / experience metrics)",
              "d": "Aruba Central API (client health / experience), recommended `sourcetype=aruba:central` (JSON from HEC)",
              "q": "index=network sourcetype=\"aruba:central\" OR sourcetype=\"aruba:central:client\"\n| eval score=coalesce(connectivity_score, client_health_score, experience_score, health_score)\n| eval ap=coalesce(ap_name, ap_serial, device_name)\n| stats avg(score) as avg_score, min(score) as worst_score, perc95(score) as p95_score, dc(client_mac) as clients by ap, ssid, site_name\n| where avg_score < 75 OR worst_score < 50 OR p95_score < 70\n| sort avg_score",
              "m": "Use Aruba Central API credentials with least privilege; poll client health/experience endpoints on a schedule or stream via a forwarder, normalizing to JSON on HEC with indexed fields `client_mac`, `ap_name`, `ssid`, `connectivity_score`, and timing breakdowns when available. Baseline per site and SSID; alert on drops after code upgrades or RF changes.",
              "z": "Timechart (mean connectivity score by SSID), Table (worst APs and SSIDs), Histogram (score distribution), Scatter (clients vs score) for drill-down.",
              "kfp": "Wireless metrics move with user behavior, maintenance, and nearby RF; we tune alerts around change windows and known busy hours so normal days do not page the team.",
              "refs": "[Cato Networks Events App](https://splunkbase.splunk.com/app/8037)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC or scripted input (Aruba Central REST API — client health / experience metrics).\n• Ensure the following data sources are available: Aruba Central API (client health / experience), recommended `sourcetype=aruba:central` (JSON from HEC).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Aruba Central API credentials with least privilege; poll client health/experience endpoints on a schedule or stream via a forwarder, normalizing to JSON on HEC with indexed fields `client_mac`, `ap_name`, `ssid`, `connectivity_score`, and timing breakdowns when available. Baseline per site and SSID; alert on drops after code upgrades or RF changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"aruba:central\" OR sourcetype=\"aruba:central:client\"\n| eval score=coalesce(connectivity_score, client_health_score, experience_score, health_score)\n| eval ap=coalesce(ap_name, ap_serial, device_name)\n| stats avg(score) as avg_score, min(score) as worst_score, perc95(score) as p95_score, dc(client_mac) as clients by ap, ssid, site_name\n| where avg_score < 75 OR worst_score < 50 OR p95_score < 70\n| sort avg_score\n```\n\nUnderstanding this SPL\n\n**Aruba Client Experience and Connectivity Score (HPE Aruba)** — Aruba Central provides per-client connectivity scores based on association time, authentication time, DHCP time, DNS resolution time, and throughput. Low scores identify problematic clients, congested APs, or misconfigured SSIDs before users report issues. Trending scores over time validates infrastructure changes.\n\nDocumented **Data sources**: Aruba Central API (client health / experience), recommended `sourcetype=aruba:central` (JSON from HEC). **App/TA** (typical add-on context): Custom HEC or scripted input (Aruba Central REST API — client health / experience metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: aruba:central, aruba:central:client. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"aruba:central\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by ap, ssid, site_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_score < 75 OR worst_score < 50 OR p95_score < 70` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (mean connectivity score by SSID), Table (worst APs and SSIDs), Histogram (score distribution), Scatter (clients vs score) for drill-down.",
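To "alert on drops after code upgrades or RF changes", compare average scores before and after a change window. A minimal sketch that splits on a placeholder cutover of seven days ago (swap `-7d@d` for your actual change time; the 10-point drop threshold is illustrative):

```spl
index=network (sourcetype="aruba:central" OR sourcetype="aruba:central:client")
| eval score=coalesce(connectivity_score, client_health_score, experience_score, health_score)
| eval window=if(_time < relative_time(now(), "-7d@d"), "before", "after")
| chart avg(score) by site_name, window
| eval drop=round(before - after, 1)
| where drop > 10
| sort - drop
```

Sites missing either window fall out of the comparison automatically, which keeps newly onboarded sites from producing noise.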
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "HPE Aruba managed APs and gateways onboarded in Aruba Central",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch aruba client experience and connectivity score (hpe aruba) so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.38",
              "n": "Cisco C9800 WLC AP Join Failures",
              "c": "critical",
              "f": "intermediate",
              "v": "Cisco Catalyst 9800 wireless controllers manage campus Wi-Fi infrastructure. AP join failures indicate certificate issues, CAPWAP tunnel problems, or capacity limits — each preventing clients on that AP from connecting. Early detection prevents silent coverage gaps.",
              "t": "`Cisco Networks Add-on for Splunk` (`TA-cisco_ios`, Splunkbase 1352), Cisco C9800 syslog",
              "d": "`sourcetype=\"cisco:ios\"`, C9800 syslog (facility %CAPWAP, %DTLS)",
              "q": "index=network sourcetype=\"cisco:ios\" host=\"c9800*\"\n  (\"%CAPWAP-3-ERRORLOG\" OR \"%DTLS-3-HANDSHAKE_FAILURE\" OR \"%AP_EVENT-3-CRASH\")\n| rex \"AP (?<ap_name>\\S+)\"\n| stats count by host, ap_name, _raw\n| sort - count",
              "m": "Forward Cisco C9800 syslog to Splunk (severity 3+). Monitor CAPWAP, DTLS, and AP crash messages. Alert when any AP fails to join or repeatedly crashes. Correlate with certificate expiry calendar.",
              "z": "Table (failed APs), Status grid (AP join state per WLC), Timeline (join failures over time).",
              "kfp": "",
              "refs": "[Cisco C9800 Troubleshooting](https://www.cisco.com/c/en/us/support/wireless/catalyst-9800-series-wireless-controllers/series.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Networks Add-on for Splunk` (`TA-cisco_ios`, Splunkbase 1352), Cisco C9800 syslog.\n• Ensure the following data sources are available: `sourcetype=\"cisco:ios\"`, C9800 syslog (facility %CAPWAP, %DTLS).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Cisco C9800 syslog to Splunk (severity 3+). Monitor CAPWAP, DTLS, and AP crash messages. Alert when any AP fails to join or repeatedly crashes. Correlate with certificate expiry calendar.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" host=\"c9800*\"\n  (\"%CAPWAP-3-ERRORLOG\" OR \"%DTLS-3-HANDSHAKE_FAILURE\" OR \"%AP_EVENT-3-CRASH\")\n| rex \"AP (?<ap_name>\\S+)\"\n| stats count by host, ap_name, _raw\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Cisco C9800 WLC AP Join Failures** — Cisco Catalyst 9800 wireless controllers manage campus Wi-Fi infrastructure. AP join failures indicate certificate issues, CAPWAP tunnel problems, or capacity limits — each preventing clients on that AP from connecting. Early detection prevents silent coverage gaps.\n\nDocumented **Data sources**: `sourcetype=\"cisco:ios\"`, C9800 syslog (facility %CAPWAP, %DTLS). **App/TA** (typical add-on context): `Cisco Networks Add-on for Splunk` (`TA-cisco_ios`, Splunkbase 1352), Cisco C9800 syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios; **host** filter: c9800*. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, ap_name, _raw** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed APs), Status grid (AP join state per WLC), Timeline (join failures over time).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 9800-40, 9800-80, 9800-L, 9800-CL; Cisco Catalyst 9100/9120/9130/9136/9162/9164/9166 APs",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.39",
              "n": "Cisco C9800 Client Authentication and Session Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "The C9800 provides detailed client lifecycle events (association, authentication, roaming, deauthentication). Monitoring these events centrally identifies Wi-Fi access issues, AAA misconfigurations, and potential security concerns like deauth attacks before help desk tickets pile up.",
              "t": "`TA-cisco_ios`, Cisco C9800 syslog",
              "d": "`sourcetype=\"cisco:ios\"`, C9800 syslog (facility %DOT1X, %CLIENT_ORCH)",
              "q": "index=network sourcetype=\"cisco:ios\" host=\"c9800*\"\n  (\"%DOT1X-5-FAIL\" OR \"%CLIENT_ORCH-6-CLIENT_ADDED_TO_RUN_STATE\" OR \"%DOT1X-5-SUCCESS\")\n| rex \"Username (?<username>\\S+)\"\n| rex \"MAC (?<client_mac>[0-9a-fA-F.:]+)\"\n| stats count by username, client_mac, host\n| sort - count",
              "m": "Enable client event logging on C9800 (severity 5+). Forward syslog to Splunk. Create alerts for elevated DOT1X failure rates. Correlate with ISE/RADIUS logs for full authentication path visibility.",
              "z": "Table (auth failures by user/MAC), Single value (failure rate), Pie chart (success vs. failure ratio).",
              "kfp": "",
              "refs": "[Cisco C9800 Client Troubleshooting](https://www.cisco.com/c/en/us/support/wireless/catalyst-9800-series-wireless-controllers/series.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, Cisco C9800 syslog.\n• Ensure the following data sources are available: `sourcetype=\"cisco:ios\"`, C9800 syslog (facility %DOT1X, %CLIENT_ORCH).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable client event logging on C9800 (severity 5+). Forward syslog to Splunk. Create alerts for elevated DOT1X failure rates. Correlate with ISE/RADIUS logs for full authentication path visibility.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" host=\"c9800*\"\n  (\"%DOT1X-5-FAIL\" OR \"%CLIENT_ORCH-6-CLIENT_ADDED_TO_RUN_STATE\" OR \"%DOT1X-5-SUCCESS\")\n| rex \"Username (?<username>\\S+)\"\n| rex \"MAC (?<client_mac>[0-9a-fA-F.:]+)\"\n| stats count by username, client_mac, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Cisco C9800 Client Authentication and Session Monitoring** — The C9800 provides detailed client lifecycle events (association, authentication, roaming, deauthentication). Monitoring these events centrally identifies Wi-Fi access issues, AAA misconfigurations, and potential security concerns like deauth attacks before help desk tickets pile up.\n\nDocumented **Data sources**: `sourcetype=\"cisco:ios\"`, C9800 syslog (facility %DOT1X, %CLIENT_ORCH). **App/TA** (typical add-on context): `TA-cisco_ios`, Cisco C9800 syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios; **host** filter: c9800*. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by username, client_mac, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (auth failures by user/MAC), Single value (failure rate), Pie chart (success vs. failure ratio).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 9800-40, 9800-80, 9800-L, 9800-CL",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Availability",
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.4.40",
              "n": "Cisco C9800 RF Performance and Channel Assignment",
              "c": "medium",
              "f": "advanced",
              "v": "The C9800 RRM (Radio Resource Management) engine continuously adjusts channel and power assignments. Tracking these changes alongside RF metrics (noise, interference, utilization) helps network engineers validate that RRM decisions improve rather than degrade wireless performance.",
              "t": "`TA-cisco_ios`, Cisco C9800 syslog, Cisco Prime/Catalyst Center API (optional)",
              "d": "`sourcetype=\"cisco:ios\"`, C9800 syslog (facility %RRM, %DOT11)",
              "q": "index=network sourcetype=\"cisco:ios\" host=\"c9800*\" \"%RRM-6-CHANNEL_CHANGE\"\n| rex \"AP (?<ap_name>\\S+).*slot (?<slot>\\d+).*channel (?<old_channel>\\d+) to (?<new_channel>\\d+)\"\n| stats count by ap_name, slot, old_channel, new_channel\n| where count > 3\n| sort - count",
              "m": "Enable RRM and DOT11 syslog messages on C9800. Forward to Splunk. Monitor excessive channel changes (indicator of co-channel interference). Cross-reference with AP site surveys and client experience data.",
              "z": "Table (channel changes by AP), Bar chart (changes per hour), Heatmap (AP floor map with channel utilization).",
              "kfp": "",
              "refs": "[Cisco RRM Best Practices](https://www.cisco.com/c/en/us/td/docs/wireless/controller/technotes/8-6/b_RRM_White_Paper.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, Cisco C9800 syslog, Cisco Prime/Catalyst Center API (optional).\n• Ensure the following data sources are available: `sourcetype=\"cisco:ios\"`, C9800 syslog (facility %RRM, %DOT11).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable RRM and DOT11 syslog messages on C9800. Forward to Splunk. Monitor excessive channel changes (indicator of co-channel interference). Cross-reference with AP site surveys and client experience data.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" host=\"c9800*\" \"%RRM-6-CHANNEL_CHANGE\"\n| rex \"AP (?<ap_name>\\S+).*slot (?<slot>\\d+).*channel (?<old_channel>\\d+) to (?<new_channel>\\d+)\"\n| stats count by ap_name, slot, old_channel, new_channel\n| where count > 3\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Cisco C9800 RF Performance and Channel Assignment** — The C9800 RRM (Radio Resource Management) engine continuously adjusts channel and power assignments. Tracking these changes alongside RF metrics (noise, interference, utilization) helps network engineers validate that RRM decisions improve rather than degrade wireless performance.\n\nDocumented **Data sources**: `sourcetype=\"cisco:ios\"`, C9800 syslog (facility %RRM, %DOT11). **App/TA** (typical add-on context): `TA-cisco_ios`, Cisco C9800 syslog, Cisco Prime/Catalyst Center API (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios; **host** filter: c9800*. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by ap_name, slot, old_channel, new_channel** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (channel changes by AP), Bar chart (changes per hour), Heatmap (AP floor map with channel utilization).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 9800-40, 9800-80, 9800-L, 9800-CL; Catalyst 9100-series APs",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.1,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 40,
            "none": 0
          }
        },
        {
          "i": "5.5",
          "n": "SD-WAN",
          "u": [
            {
              "i": "5.5.1",
              "n": "Tunnel Health Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Tunnel loss/latency/jitter directly impacts application experience over WAN.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API",
              "d": "vManage BFD metrics",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:bfd\"\n| stats avg(loss_percentage) as loss, avg(latency) as latency, avg(jitter) as jitter by site, tunnel_name\n| where loss > 1 OR latency > 100 OR jitter > 30",
              "m": "Poll vManage API for BFD session statistics. Collect loss, latency, jitter per tunnel. Alert when SLA thresholds exceeded.",
              "z": "Line chart (loss/latency/jitter per tunnel), Table, Status grid per site.",
              "kfp": "Tunnels may renegotiate during ISP maintenance, BFD timer changes, planned controller upgrades, or policy pushes; short blips may look like failures when the business path is still acceptable.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API.\n• Ensure the following data sources are available: vManage BFD metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll vManage API for BFD session statistics. Collect loss, latency, jitter per tunnel. Alert when SLA thresholds exceeded.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:bfd\"\n| stats avg(loss_percentage) as loss, avg(latency) as latency, avg(jitter) as jitter by site, tunnel_name\n| where loss > 1 OR latency > 100 OR jitter > 30\n```\n\nUnderstanding this SPL\n\n**Tunnel Health Monitoring** — Tunnel loss/latency/jitter directly impacts application experience over WAN.\n\nDocumented **Data sources**: vManage BFD metrics. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:bfd. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:bfd\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by site, tunnel_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where loss > 1 OR latency > 100 OR jitter > 30` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tunnel Health Monitoring** — Tunnel loss/latency/jitter directly impacts application experience over WAN.\n\nDocumented **Data sources**: vManage BFD metrics. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (loss/latency/jitter per tunnel), Table, Status grid per site.",
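              "_spl_example": "A sustained-violation sketch (illustrative; same index, sourcetype, and fields as the main SPL above) that only alerts when a tunnel breaches SLA in at least three 5-minute buckets, which filters out the short blips noted under known false positives.\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:bfd\"\n| bin _time span=5m\n| stats avg(loss_percentage) as loss, avg(latency) as latency, avg(jitter) as jitter by _time, site, tunnel_name\n| eval violation=if(loss > 1 OR latency > 100 OR jitter > 30, 1, 0)\n| stats sum(violation) as bad_intervals by site, tunnel_name\n| where bad_intervals >= 3\n```",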
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart, vBond",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.2",
              "n": "Site Availability",
              "c": "critical",
              "f": "beginner",
              "v": "Edge device offline = remote site disconnected from the network.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API",
              "d": "vManage device status",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:device\"\n| where reachability!=\"reachable\"\n| table _time site_id hostname system_ip reachability | sort -_time",
              "m": "Poll vManage device inventory API. Alert when any edge device becomes unreachable. Include site name and location.",
              "z": "Map (site locations with status), Table, Status grid.",
              "kfp": "Tunnels may renegotiate during ISP maintenance, BFD timer changes, planned controller upgrades, or policy pushes; short blips may look like failures when the business path is still acceptable.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API.\n• Ensure the following data sources are available: vManage device status.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll vManage device inventory API. Alert when any edge device becomes unreachable. Include site name and location.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:device\"\n| where reachability!=\"reachable\"\n| table _time site_id hostname system_ip reachability | sort -_time\n```\n\nUnderstanding this SPL\n\n**Site Availability** — Edge device offline = remote site disconnected from the network.\n\nDocumented **Data sources**: vManage device status. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:device. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:device\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where reachability!=\"reachable\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Site Availability**): table _time site_id hostname system_ip reachability\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (site locations with status), Table, Status grid.",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart, vBond",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.3",
              "n": "Application SLA Violations",
              "c": "high",
              "f": "beginner",
              "v": "Detects when business-critical applications aren't meeting performance requirements over the WAN.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538)",
              "d": "vManage app-aware routing metrics",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:approute\"\n| where sla_violation=\"true\"\n| stats count by site, app_name, sla_class | sort -count",
              "m": "Collect app-aware routing statistics from vManage. Alert when critical applications violate their SLA class.",
              "z": "Table (site, app, violations), Bar chart by app, Timechart.",
              "kfp": "Tunnels may renegotiate during ISP maintenance, BFD timer changes, planned controller upgrades, or policy pushes; short blips may look like failures when the business path is still acceptable.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538).\n• Ensure the following data sources are available: vManage app-aware routing metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect app-aware routing statistics from vManage. Alert when critical applications violate their SLA class.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:approute\"\n| where sla_violation=\"true\"\n| stats count by site, app_name, sla_class | sort -count\n```\n\nUnderstanding this SPL\n\n**Application SLA Violations** — Detects when business-critical applications aren't meeting performance requirements over the WAN.\n\nDocumented **Data sources**: vManage app-aware routing metrics. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:approute. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:approute\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where sla_violation=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by site, app_name, sla_class** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (site, app, violations), Bar chart by app, Timechart.",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart, vBond",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.4",
              "n": "Path Failover Events",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks when traffic switches between WAN transports. Frequent failovers indicate unstable links.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538)",
              "d": "vManage events",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:events\" (\"failover\" OR \"path-change\" OR \"transport-switch\")\n| stats count by site, from_transport, to_transport | sort -count",
              "m": "Collect vManage alarm/event data. Track path changes and failover frequency. Alert on frequent failovers.",
              "z": "Table, Sankey diagram (from/to transport), Timeline.",
              "kfp": "Tunnels may renegotiate during ISP maintenance, BFD timer changes, planned controller upgrades, or policy pushes; short blips may look like failures when the business path is still acceptable.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538).\n• Ensure the following data sources are available: vManage events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect vManage alarm/event data. Track path changes and failover frequency. Alert on frequent failovers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:events\" (\"failover\" OR \"path-change\" OR \"transport-switch\")\n| stats count by site, from_transport, to_transport | sort -count\n```\n\nUnderstanding this SPL\n\n**Path Failover Events** — Tracks when traffic switches between WAN transports. Frequent failovers indicate unstable links.\n\nDocumented **Data sources**: vManage events. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by site, from_transport, to_transport** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Sankey diagram (from/to transport), Timeline.",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart, vBond",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.5",
              "n": "Control Plane Health",
              "c": "high",
              "f": "beginner",
              "v": "vSmart/vManage connectivity issues affect policy distribution and overlay routing.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538)",
              "d": "vManage control connection logs",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:control\"\n| where state!=\"up\"\n| table _time hostname peer_type peer_system_ip state | sort -_time",
              "m": "Monitor control connections to vSmart and vManage. Alert on any control connection down.",
              "z": "Status panel, Table, Timeline.",
              "kfp": "Tunnels may renegotiate during ISP maintenance, BFD timer changes, planned controller upgrades, or policy pushes; short blips may look like failures when the business path is still acceptable.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538).\n• Ensure the following data sources are available: vManage control connection logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor control connections to vSmart and vManage. Alert on any control connection down.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:control\"\n| where state!=\"up\"\n| table _time hostname peer_type peer_system_ip state | sort -_time\n```\n\nUnderstanding this SPL\n\n**Control Plane Health** — vSmart/vManage connectivity issues affect policy distribution and overlay routing.\n\nDocumented **Data sources**: vManage control connection logs. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:control. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:control\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where state!=\"up\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Control Plane Health**): table _time hostname peer_type peer_system_ip state\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status panel, Table, Timeline.",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart, vBond",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.6",
              "n": "Certificate Expiration",
              "c": "medium",
              "f": "beginner",
              "v": "SD-WAN device certificates must be valid for overlay connectivity.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API",
              "d": "vManage certificate inventory",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:certificate\"\n| eval days_left=round((expiry_epoch-now())/86400,0) | where days_left<60\n| table hostname system_ip days_left | sort days_left",
              "m": "Poll vManage for certificate status. Alert at 60/30/7 day thresholds.",
              "z": "Table, Single value, Status indicator.",
              "kfp": "Staging upgrades, RMA replacements, and deferred maintenance windows can leave devices on non-target versions briefly; align alerts with your change calendar.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API.\n• Ensure the following data sources are available: vManage certificate inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll vManage for certificate status. Alert at 60/30/7 day thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:certificate\"\n| eval days_left=round((expiry_epoch-now())/86400,0) | where days_left<60\n| table hostname system_ip days_left | sort days_left\n```\n\nUnderstanding this SPL\n\n**Certificate Expiration** — SD-WAN device certificates must be valid for overlay connectivity.\n\nDocumented **Data sources**: vManage certificate inventory. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:certificate. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:certificate\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left<60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Certificate Expiration**): table hostname system_ip days_left\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value, Status indicator.",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart, vBond",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.7",
              "n": "Bandwidth Utilization per Site",
              "c": "medium",
              "f": "beginner",
              "v": "WAN bandwidth consumption per site enables capacity planning and cost optimization.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538)",
              "d": "vManage interface metrics",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:interface\"\n| timechart span=1h sum(tx_octets) as bytes_out, sum(rx_octets) as bytes_in by site\n| eval out_mbps=round(bytes_out*8/3600/1000000,1)",
              "m": "Collect interface statistics from vManage. Track per-site, per-transport utilization. Use for upgrade decisions.",
              "z": "Line chart per site, Table, Stacked area.",
              "kfp": "Utilization and top-application charts jump during backups, patch windows, video calls, or large file transfers; compare to baselines and scheduled jobs before treating a spike as fault.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538).\n• Ensure the following data sources are available: vManage interface metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect interface statistics from vManage. Track per-site, per-transport utilization. Use for upgrade decisions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:interface\"\n| timechart span=1h sum(tx_octets) as bytes_out, sum(rx_octets) as bytes_in by site\n| eval out_mbps=round(bytes_out*8/3600/1000000,1)\n```\n\nUnderstanding this SPL\n\n**Bandwidth Utilization per Site** — WAN bandwidth consumption per site enables capacity planning and cost optimization.\n\nDocumented **Data sources**: vManage interface metrics. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:interface. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:interface\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by site** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **out_mbps** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Bandwidth Utilization per Site** — WAN bandwidth consumption per site enables capacity planning and cost optimization.\n\nDocumented **Data sources**: vManage interface metrics. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart per site, Table, Stacked area.",
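              "_spl_example": "A capacity-planning sketch (illustrative; same fields as the main SPL above) that reduces the hourly series to peak and 95th-percentile Mbps per site, which is the usual input for circuit upgrade decisions.\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:interface\"\n| bin _time span=1h\n| stats sum(tx_octets) as bytes_out by _time, site\n| eval out_mbps=round(bytes_out*8/3600/1000000,1)\n| stats max(out_mbps) as peak_mbps, perc95(out_mbps) as p95_mbps by site\n| sort -p95_mbps\n```",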
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart, vBond",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.8",
              "n": "Jitter and Latency per Tunnel",
              "c": "high",
              "f": "intermediate",
              "v": "Real-time jitter and latency metrics reveal WAN quality degradation before users complain. Critical for voice/video SLAs.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), Cisco vManage API",
              "d": "`sourcetype=cisco:sdwan:bfd`, `sourcetype=cisco:sdwan:approute`",
              "q": "index=network sourcetype=\"cisco:sdwan:approute\"\n| stats avg(latency) as avg_latency, avg(jitter) as avg_jitter, avg(loss_percentage) as avg_loss by local_system_ip, remote_system_ip, local_color\n| where avg_latency > 100 OR avg_jitter > 30 OR avg_loss > 1\n| sort -avg_latency",
              "m": "Ingest BFD and app-route statistics from vManage API. Monitor per-tunnel quality metrics. Alert when latency >100ms, jitter >30ms, or loss >1% for business-critical SLAs.",
              "z": "Line chart (latency/jitter over time), Table (tunnel, metrics), Gauge (SLA compliance).",
              "kfp": "Tunnels may renegotiate during ISP maintenance, BFD timer changes, planned controller upgrades, or policy pushes; short blips may look like failures when the business path is still acceptable.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), Cisco vManage API.\n• Ensure the following data sources are available: `sourcetype=cisco:sdwan:bfd`, `sourcetype=cisco:sdwan:approute`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest BFD and app-route statistics from vManage API. Monitor per-tunnel quality metrics. Alert when latency >100ms, jitter >30ms, or loss >1% for business-critical SLAs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:sdwan:approute\"\n| stats avg(latency) as avg_latency, avg(jitter) as avg_jitter, avg(loss_percentage) as avg_loss by local_system_ip, remote_system_ip, local_color\n| where avg_latency > 100 OR avg_jitter > 30 OR avg_loss > 1\n| sort -avg_latency\n```\n\nUnderstanding this SPL\n\n**Jitter and Latency per Tunnel** — Real-time jitter and latency metrics reveal WAN quality degradation before users complain. Critical for voice/video SLAs.\n\nDocumented **Data sources**: `sourcetype=cisco:sdwan:bfd`, `sourcetype=cisco:sdwan:approute`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), Cisco vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:sdwan:approute. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:sdwan:approute\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by local_system_ip, remote_system_ip, local_color** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_latency > 100 OR avg_jitter > 30 OR avg_loss > 1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Jitter and Latency per Tunnel** — Real-time jitter and latency metrics reveal WAN quality degradation before users complain. Critical for voice/video SLAs.\n\nDocumented **Data sources**: `sourcetype=cisco:sdwan:bfd`, `sourcetype=cisco:sdwan:approute`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), Cisco vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency/jitter over time), Table (tunnel, metrics), Gauge (SLA compliance).",
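              "_spl_example": "A percentile sketch (illustrative; same fields as the main SPL above): averages can hide bursts, so p95 latency and jitter per tunnel catch degradation that would be smoothed away by avg().\n\n```spl\nindex=network sourcetype=\"cisco:sdwan:approute\"\n| stats perc95(latency) as p95_latency, perc95(jitter) as p95_jitter, avg(loss_percentage) as avg_loss by local_system_ip, remote_system_ip, local_color\n| where p95_latency > 100 OR p95_jitter > 30 OR avg_loss > 1\n| sort -p95_latency\n```",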
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart, vBond",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.9",
              "n": "Application Routing Decisions",
              "c": "medium",
              "f": "intermediate",
              "v": "Validates that SD-WAN policies are steering traffic correctly. Detects policy misconfigurations that route real-time traffic over suboptimal paths.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), Cisco vManage API",
              "d": "`sourcetype=cisco:sdwan:approute`, `sourcetype=cisco:sdwan:flow`",
              "q": "index=network sourcetype=\"cisco:sdwan:flow\"\n| stats sum(octets) as bytes by app_name, local_color, remote_system_ip\n| eval MB=round(bytes/1048576,1)\n| sort -MB\n| head 50",
              "m": "Collect flow and app-route data from vManage. Verify voice/video uses MPLS, web traffic uses Internet. Alert when critical apps route over non-preferred transports.",
              "z": "Sankey diagram (app → transport), Table (app, path, volume), Pie chart.",
              "kfp": "Utilization and top-application charts jump during backups, patch windows, video calls, or large file transfers; compare to baselines and scheduled jobs before treating a spike as fault.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), Cisco vManage API.\n• Ensure the following data sources are available: `sourcetype=cisco:sdwan:approute`, `sourcetype=cisco:sdwan:flow`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect flow and app-route data from vManage. Verify voice/video uses MPLS, web traffic uses Internet. Alert when critical apps route over non-preferred transports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:sdwan:flow\"\n| stats sum(octets) as bytes by app_name, local_color, remote_system_ip\n| eval MB=round(bytes/1048576,1)\n| sort -MB\n| head 50\n```\n\nUnderstanding this SPL\n\n**Application Routing Decisions** — Validates that SD-WAN policies are steering traffic correctly. Detects policy misconfigurations that route real-time traffic over suboptimal paths.\n\nDocumented **Data sources**: `sourcetype=cisco:sdwan:approute`, `sourcetype=cisco:sdwan:flow`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), Cisco vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:sdwan:flow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:sdwan:flow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_name, local_color, remote_system_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **MB** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey diagram (app → transport), Table (app, path, volume), Pie chart.",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart, vBond",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.10",
              "n": "WAN Link Utilization per Transport",
              "c": "high",
              "f": "intermediate",
              "v": "Unbalanced link utilization wastes expensive MPLS bandwidth while underusing broadband circuits. Enables cost-effective traffic engineering.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), SNMP",
              "d": "`sourcetype=cisco:sdwan:interface`, SNMP IF-MIB",
              "q": "index=network sourcetype=\"cisco:sdwan:interface\"\n| eval util_pct=round(tx_octets*8/speed*100,1)\n| stats avg(util_pct) as avg_util, max(util_pct) as peak_util by system_ip, color, interface_name\n| where avg_util > 70 | sort -avg_util",
              "m": "Collect interface stats per WAN transport type (MPLS, Internet, LTE). Compare utilization across links. Alert on >70% sustained utilization. Use for capacity planning.",
              "z": "Line chart (utilization per transport), Stacked bar (site comparison), Table.",
              "kfp": "Utilization and top-application charts jump during backups, patch windows, video calls, or large file transfers; compare to baselines and scheduled jobs before treating a spike as fault.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), SNMP.\n• Ensure the following data sources are available: `sourcetype=cisco:sdwan:interface`, SNMP IF-MIB.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect interface stats per WAN transport type (MPLS, Internet, LTE). Compare utilization across links. Alert on >70% sustained utilization. Use for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:sdwan:interface\"\n| eval util_pct=round(tx_octets*8/speed*100,1)\n| stats avg(util_pct) as avg_util, max(util_pct) as peak_util by system_ip, color, interface_name\n| where avg_util > 70 | sort -avg_util\n```\n\nUnderstanding this SPL\n\n**WAN Link Utilization per Transport** — Unbalanced link utilization wastes expensive MPLS bandwidth while underusing broadband circuits. Enables cost-effective traffic engineering.\n\nDocumented **Data sources**: `sourcetype=cisco:sdwan:interface`, SNMP IF-MIB. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:sdwan:interface. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:sdwan:interface\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by system_ip, color, interface_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_util > 70` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (utilization per transport), Stacked bar (site comparison), Table.",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart, vBond",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp"
              ],
              "em": [
                "cisco_sdwan",
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.11",
              "n": "OMP Route Table Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "OMP (Overlay Management Protocol) distributes routes across the SD-WAN fabric. Route churn, missing prefixes, or unexpected withdrawals indicate overlay instability that degrades site-to-site reachability.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API",
              "d": "vManage OMP route table, `sourcetype=cisco:sdwan:omp`",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:omp\"\n| stats dc(prefix) as route_count, dc(peer) as peer_count by system_ip, site_id\n| appendpipe [| stats avg(route_count) as baseline_routes]\n| where route_count < baseline_routes * 0.8\n| table system_ip site_id route_count peer_count",
              "m": "Poll vManage OMP peers and routes API endpoints. Baseline route count per device. Alert when a site loses more than 20% of its expected routes or when OMP peer adjacencies drop. Track route churn rate over time to identify flapping prefixes.",
              "z": "Line chart (route count over time per site), Table (devices below baseline), Single value (total OMP peers).",
              "kfp": "Planned network changes that withdraw routes intentionally; correlate with change management windows.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API.\n• Ensure the following data sources are available: vManage OMP route table, `sourcetype=cisco:sdwan:omp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll vManage OMP peers and routes API endpoints. Baseline route count per device. Alert when a site loses more than 20% of its expected routes or when OMP peer adjacencies drop. Track route churn rate over time to identify flapping prefixes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:omp\"\n| stats dc(prefix) as route_count, dc(peer) as peer_count by system_ip, site_id\n| appendpipe [| stats avg(route_count) as baseline_routes]\n| where route_count < baseline_routes * 0.8\n| table system_ip site_id route_count peer_count\n```\n\nUnderstanding this SPL\n\n**OMP Route Table Monitoring** — OMP (Overlay Management Protocol) distributes routes across the SD-WAN fabric. Route churn, missing prefixes, or unexpected withdrawals indicate overlay instability that degrades site-to-site reachability.\n\nDocumented **Data sources**: vManage OMP route table, `sourcetype=cisco:sdwan:omp`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:omp. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:omp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by system_ip, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Appends rows from a subsearch with `append`.\n• Filters the current rows with `where route_count < baseline_routes * 0.8` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OMP Route Table Monitoring**): table system_ip site_id route_count peer_count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (route count over time per site), Table (devices below baseline), Single value (total OMP peers).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart, vBond",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.12",
              "n": "BFD Session Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "BFD (Bidirectional Forwarding Detection) provides sub-second failure detection between SD-WAN endpoints. A BFD session going down means the tunnel is unusable, and traffic must reroute. Tracking BFD flaps reveals transport instability before it cascades.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API",
              "d": "vManage BFD sessions, `sourcetype=cisco:sdwan:bfd`",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:bfd\"\n| where state!=\"up\"\n| stats count as flap_count, latest(_time) as last_flap, values(state) as states by local_system_ip, remote_system_ip, local_color, remote_color\n| where flap_count > 3\n| sort -flap_count\n| eval last_flap=strftime(last_flap,\"%Y-%m-%d %H:%M:%S\")",
              "m": "Collect BFD session data from vManage. Alert immediately when a BFD session transitions from up to down. Track flap frequency per tunnel; more than 3 flaps in an hour signals an unstable transport that needs carrier engagement. Cross-reference with ISP maintenance schedules.",
              "z": "Status grid (BFD sessions by color/site), Timeline (session state changes), Table (flapping tunnels).",
              "kfp": "Planned ISP maintenance windows; carrier circuit cutovers.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API.\n• Ensure the following data sources are available: vManage BFD sessions, `sourcetype=cisco:sdwan:bfd`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect BFD session data from vManage. Alert immediately when a BFD session transitions from up to down. Track flap frequency per tunnel; more than 3 flaps in an hour signals an unstable transport that needs carrier engagement. Cross-reference with ISP maintenance schedules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:bfd\"\n| where state!=\"up\"\n| stats count as flap_count, latest(_time) as last_flap, values(state) as states by local_system_ip, remote_system_ip, local_color, remote_color\n| where flap_count > 3\n| sort -flap_count\n| eval last_flap=strftime(last_flap,\"%Y-%m-%d %H:%M:%S\")\n```\n\nUnderstanding this SPL\n\n**BFD Session Monitoring** — BFD (Bidirectional Forwarding Detection) provides sub-second failure detection between SD-WAN endpoints. A BFD session going down means the tunnel is unusable, and traffic must reroute. Tracking BFD flaps reveals transport instability before it cascades.\n\nDocumented **Data sources**: vManage BFD sessions, `sourcetype=cisco:sdwan:bfd`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:bfd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:bfd\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where state!=\"up\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by local_system_ip, remote_system_ip, local_color, remote_color** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where flap_count > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `eval` defines or adjusts **last_flap** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (BFD sessions by color/site), Timeline (session state changes), Table (flapping tunnels).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart, vBond",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.13",
              "n": "Edge Device Resource Utilization",
              "c": "high",
              "f": "beginner",
              "v": "SD-WAN edge routers running high CPU or memory can drop packets, fail to establish tunnels, or crash. Monitoring device resources prevents silent performance degradation at remote sites where physical access is limited.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API",
              "d": "vManage device statistics, `sourcetype=cisco:sdwan:device`",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:device\"\n| stats latest(cpu_user) as cpu_user, latest(cpu_system) as cpu_system, latest(mem_used) as mem_used, latest(mem_total) as mem_total by hostname, system_ip, site_id\n| eval cpu_pct=cpu_user+cpu_system, mem_pct=round(mem_used/mem_total*100,1)\n| where cpu_pct > 80 OR mem_pct > 85\n| table hostname system_ip site_id cpu_pct mem_pct\n| sort -cpu_pct",
              "m": "Poll vManage device statistics API for CPU, memory, and disk usage. Alert when CPU exceeds 80% or memory exceeds 85% sustained over 15 minutes. Trend over time to identify sites that need hardware upgrades. Pay special attention to devices running UTD (Unified Threat Defense) or DPI, which consume significantly more resources.",
              "z": "Line chart (CPU/memory trending per device), Table (devices above threshold), Gauge (fleet-wide average).",
              "kfp": "Utilization and top-application charts jump during backups, patch windows, video calls, or large file transfers; compare to baselines and scheduled jobs before treating a spike as fault.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API.\n• Ensure the following data sources are available: vManage device statistics, `sourcetype=cisco:sdwan:device`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll vManage device statistics API for CPU, memory, and disk usage. Alert when CPU exceeds 80% or memory exceeds 85% sustained over 15 minutes. Trend over time to identify sites that need hardware upgrades. Pay special attention to devices running UTD (Unified Threat Defense) or DPI, which consume significantly more resources.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:device\"\n| stats latest(cpu_user) as cpu_user, latest(cpu_system) as cpu_system, latest(mem_used) as mem_used, latest(mem_total) as mem_total by hostname, system_ip, site_id\n| eval cpu_pct=cpu_user+cpu_system, mem_pct=round(mem_used/mem_total*100,1)\n| where cpu_pct > 80 OR mem_pct > 85\n| table hostname system_ip site_id cpu_pct mem_pct\n| sort -cpu_pct\n```\n\nUnderstanding this SPL\n\n**Edge Device Resource Utilization** — SD-WAN edge routers running high CPU or memory can drop packets, fail to establish tunnels, or crash. Monitoring device resources prevents silent performance degradation at remote sites where physical access is limited.\n\nDocumented **Data sources**: vManage device statistics, `sourcetype=cisco:sdwan:device`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:device. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:device\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname, system_ip, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cpu_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cpu_pct > 80 OR mem_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Edge Device Resource Utilization**): table hostname system_ip site_id cpu_pct mem_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(Performance.cpu_load_percent) as cpu_pct\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where cpu_pct > 80\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Edge Device Resource Utilization** — SD-WAN edge routers running high CPU or memory can drop packets, fail to establish tunnels, or crash. 
Monitoring device resources prevents silent performance degradation at remote sites where physical access is limited.\n\nDocumented **Data sources**: vManage device statistics, `sourcetype=cisco:sdwan:device`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` — enable acceleration for that model.\n• Filters the current rows with `where cpu_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU/memory trending per device), Table (devices above threshold), Gauge (fleet-wide average).",
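              "_example": "Hedged sketch for the 'sustained over 15 minutes' condition in Step 1. The documented search takes a point-in-time snapshot; this variant requires three consecutive 5-minute windows above threshold, assuming the vManage poll interval is 5 minutes or finer.\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:device\"\n| bin _time span=5m\n| stats avg(eval(cpu_user+cpu_system)) as cpu_pct by _time, hostname\n| sort 0 _time\n| streamstats window=3 min(cpu_pct) as min_cpu by hostname\n| where min_cpu > 80\n```\n\n`min(cpu_pct)` over a 3-bucket window only exceeds 80 when every bucket does, which is exactly the sustained-breach semantics.",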
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats `summariesonly` avg(Performance.cpu_load_percent) as cpu_pct\n  from datamodel=Performance where nodename=Performance.CPU\n  by Performance.host span=1h\n| where cpu_pct > 80",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.14",
              "n": "Firmware Version Compliance",
              "c": "medium",
              "f": "beginner",
              "v": "Running inconsistent or outdated software versions across the SD-WAN fabric creates security vulnerabilities and feature gaps. Compliance dashboards accelerate upgrade planning and audit readiness.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API",
              "d": "vManage device inventory, `sourcetype=cisco:sdwan:device`",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:device\"\n| stats latest(version) as sw_version, latest(model) as model by hostname, system_ip, site_id\n| eventstats count by sw_version\n| eval target_version=\"17.12.04\"\n| eval compliant=if(sw_version=target_version,\"yes\",\"no\")\n| stats count as total, count(eval(compliant=\"yes\")) as compliant_count by sw_version\n| eval pct=round(compliant_count/total*100,1)\n| sort -total",
              "m": "Poll vManage device inventory for software versions and model types. Define a target version per device family. Report on compliance percentage. Alert when devices fall more than two minor versions behind the target. Use to prioritize upgrade batches by site criticality.",
              "z": "Pie chart (version distribution), Table (non-compliant devices), Single value (compliance percentage).",
              "kfp": "Staging upgrades, RMA replacements, and deferred maintenance windows can leave devices on non-target versions briefly; align alerts with your change calendar.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API.\n• Ensure the following data sources are available: vManage device inventory, `sourcetype=cisco:sdwan:device`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll vManage device inventory for software versions and model types. Define a target version per device family. Report on compliance percentage. Alert when devices fall more than two minor versions behind the target. Use to prioritize upgrade batches by site criticality.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:device\"\n| stats latest(version) as sw_version, latest(model) as model by hostname, system_ip, site_id\n| eventstats count by sw_version\n| eval target_version=\"17.12.04\"\n| eval compliant=if(sw_version=target_version,\"yes\",\"no\")\n| stats count as total, count(eval(compliant=\"yes\")) as compliant_count by sw_version\n| eval pct=round(compliant_count/total*100,1)\n| sort -total\n```\n\nUnderstanding this SPL\n\n**Firmware Version Compliance** — Running inconsistent or outdated software versions across the SD-WAN fabric creates security vulnerabilities and feature gaps. Compliance dashboards accelerate upgrade planning and audit readiness.\n\nDocumented **Data sources**: vManage device inventory, `sourcetype=cisco:sdwan:device`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:device. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:device\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname, system_ip, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by sw_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **target_version** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by sw_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (version distribution), Table (non-compliant devices), Single value (compliance percentage).",
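              "_example": "Hedged sketch for the 'more than two minor versions behind' alert in Step 1. It assumes IOS XE-style `major.minor.patch` strings in `version`; adapt the parsing if your fleet mixes version formats.\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:device\"\n| stats latest(version) as sw_version by hostname, system_ip, site_id\n| eval target=\"17.12.04\"\n| eval sw_major=tonumber(mvindex(split(sw_version,\".\"),0)), sw_minor=tonumber(mvindex(split(sw_version,\".\"),1))\n| eval tgt_major=tonumber(mvindex(split(target,\".\"),0)), tgt_minor=tonumber(mvindex(split(target,\".\"),1))\n| where sw_major < tgt_major OR (sw_major=tgt_major AND tgt_minor-sw_minor > 2)\n| table hostname system_ip site_id sw_version target\n```\n\nDevices on an older major release always flag; within the same major release, only a gap of more than two minor versions does.",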
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart, vBond",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.15",
              "n": "DPI Application Visibility",
              "c": "medium",
              "f": "intermediate",
              "v": "Deep Packet Inspection on SD-WAN edges classifies traffic by application. Visibility into top applications per site drives policy tuning, bandwidth planning, and identification of shadow IT or unauthorized SaaS usage.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API",
              "d": "vManage DPI statistics, `sourcetype=cisco:sdwan:dpi`",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:dpi\"\n| stats sum(bytes) as total_bytes, sum(packets) as total_pkts by app_name, family, site_id\n| eval GB=round(total_bytes/1073741824,2)\n| sort -total_bytes\n| head 50\n| table app_name family site_id GB total_pkts",
              "m": "Enable DPI on SD-WAN edge routers (requires UTD container or native NBAR2). Collect application statistics via vManage. Identify top bandwidth consumers per site. Compare against policy expectations — flag when non-business applications (streaming, gaming, social media) consume more than 20% of WAN bandwidth.",
              "z": "Bar chart (top 20 apps by volume), Treemap (app families), Table (app, site, volume).",
              "kfp": "Utilization and top-application charts jump during backups, patch windows, video calls, or large file transfers; compare to baselines and scheduled jobs before treating a spike as fault.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API.\n• Ensure the following data sources are available: vManage DPI statistics, `sourcetype=cisco:sdwan:dpi`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable DPI on SD-WAN edge routers (requires UTD container or native NBAR2). Collect application statistics via vManage. Identify top bandwidth consumers per site. Compare against policy expectations — flag when non-business applications (streaming, gaming, social media) consume more than 20% of WAN bandwidth.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:dpi\"\n| stats sum(bytes) as total_bytes, sum(packets) as total_pkts by app_name, family, site_id\n| eval GB=round(total_bytes/1073741824,2)\n| sort -total_bytes\n| head 50\n| table app_name family site_id GB total_pkts\n```\n\nUnderstanding this SPL\n\n**DPI Application Visibility** — Deep Packet Inspection on SD-WAN edges classifies traffic by application. Visibility into top applications per site drives policy tuning, bandwidth planning, and identification of shadow IT or unauthorized SaaS usage.\n\nDocumented **Data sources**: vManage DPI statistics, `sourcetype=cisco:sdwan:dpi`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:dpi. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:dpi\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_name, family, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **GB** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Pipeline stage (see **DPI Application Visibility**): table app_name family site_id GB total_pkts\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes) as bytes\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.app span=1d\n| sort -bytes | head 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DPI Application Visibility** — Deep Packet Inspection on SD-WAN edges classifies traffic by application. Visibility into top applications per site drives policy tuning, bandwidth planning, and identification of shadow IT or unauthorized SaaS usage.\n\nDocumented **Data sources**: vManage DPI statistics, `sourcetype=cisco:sdwan:dpi`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top 20 apps by volume), Treemap (app families), Table (app, site, volume).",
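              "_example": "Hedged sketch for the '>20% non-business bandwidth' flag in Step 1. The `app_categories.csv` lookup mapping `app_name` to a `category` value is hypothetical and must be built for your environment; NBAR2 application families are a reasonable starting point.\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:dpi\"\n| lookup app_categories.csv app_name OUTPUT category\n| eval category=coalesce(category,\"uncategorized\")\n| stats sum(bytes) as bytes by site_id, category\n| eventstats sum(bytes) as site_total by site_id\n| eval pct=round(bytes/site_total*100,1)\n| where category=\"non-business\" AND pct > 20\n```\n\nReview the `uncategorized` bucket periodically; a growing share there usually means the lookup needs new entries.",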
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` sum(All_Traffic.bytes) as bytes\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.app span=1d\n| sort -bytes | head 20",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.16",
              "n": "Cloud OnRamp Performance",
              "c": "high",
              "f": "intermediate",
              "v": "Cloud OnRamp probes SaaS and IaaS endpoints from each site to select the best path. Monitoring probe results reveals when cloud application performance degrades before users open tickets, and validates that SD-WAN is actually improving cloud access.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API",
              "d": "vManage Cloud OnRamp metrics, `sourcetype=cisco:sdwan:cloudx`",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:cloudx\"\n| stats avg(vqoe_score) as avg_score, avg(latency) as avg_latency, avg(loss) as avg_loss by app_name, site_id, exit_type\n| where avg_score < 8 OR avg_latency > 150\n| sort avg_score\n| table app_name site_id exit_type avg_score avg_latency avg_loss",
              "m": "Enable Cloud OnRamp for SaaS (Microsoft 365, Webex, Salesforce, etc.) and/or IaaS (AWS, Azure, GCP) in vManage. Collect vQoE scores and probe metrics. Alert when a SaaS application's quality score drops below 8 (out of 10) or latency exceeds 150ms. Compare direct internet access (DIA) vs gateway exit paths to validate routing decisions.",
              "z": "Line chart (vQoE score trending per app), Table (underperforming apps), Bar chart (DIA vs gateway comparison).",
              "kfp": "SaaS provider outages will degrade scores regardless of WAN path; cross-reference with provider status pages.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API.\n• Ensure the following data sources are available: vManage Cloud OnRamp metrics, `sourcetype=cisco:sdwan:cloudx`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Cloud OnRamp for SaaS (Microsoft 365, Webex, Salesforce, etc.) and/or IaaS (AWS, Azure, GCP) in vManage. Collect vQoE scores and probe metrics. Alert when a SaaS application's quality score drops below 8 (out of 10) or latency exceeds 150ms. Compare direct internet access (DIA) vs gateway exit paths to validate routing decisions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:cloudx\"\n| stats avg(vqoe_score) as avg_score, avg(latency) as avg_latency, avg(loss) as avg_loss by app_name, site_id, exit_type\n| where avg_score < 8 OR avg_latency > 150\n| sort avg_score\n| table app_name site_id exit_type avg_score avg_latency avg_loss\n```\n\nUnderstanding this SPL\n\n**Cloud OnRamp Performance** — Cloud OnRamp probes SaaS and IaaS endpoints from each site to select the best path. Monitoring probe results reveals when cloud application performance degrades before users open tickets, and validates that SD-WAN is actually improving cloud access.\n\nDocumented **Data sources**: vManage Cloud OnRamp metrics, `sourcetype=cisco:sdwan:cloudx`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:cloudx. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:cloudx\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_name, site_id, exit_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_score < 8 OR avg_latency > 150` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Cloud OnRamp Performance**): table app_name site_id exit_type avg_score avg_latency avg_loss\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (vQoE score trending per app), Table (underperforming apps), Bar chart (DIA vs gateway comparison).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.17",
              "n": "Security Policy Violations (UTD)",
              "c": "critical",
              "f": "intermediate",
              "v": "SD-WAN edges running Unified Threat Defense (UTD) perform IPS, URL filtering, and AMP inline. Monitoring these events at the WAN edge catches threats that bypass centralized firewalls, especially for direct internet access (DIA) traffic that never traverses the data center.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API",
              "d": "vManage UTD events, `sourcetype=cisco:sdwan:utd`",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:utd\"\n| stats count by event_type, signature, severity, src_ip, dst_ip, site_id\n| where severity IN (\"critical\",\"high\")\n| sort -count\n| table event_type signature severity src_ip dst_ip site_id count",
              "m": "Enable UTD (IPS/URL filtering/AMP) on SD-WAN edges handling DIA traffic. Collect security events via vManage. Alert on critical/high severity IPS signatures and malware detections. Correlate with Umbrella/Secure Access if deployed for layered defense. Track blocked URL categories to refine acceptable-use policies.",
              "z": "Table (signature, severity, source, destination), Bar chart (events by category), Timeline (event frequency).",
              "kfp": "Tunnels may renegotiate during ISP maintenance, BFD timer changes, planned controller upgrades, or policy pushes; short blips may look like failures when the business path is still acceptable.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API.\n• Ensure the following data sources are available: vManage UTD events, `sourcetype=cisco:sdwan:utd`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable UTD (IPS/URL filtering/AMP) on SD-WAN edges handling DIA traffic. Collect security events via vManage. Alert on critical/high severity IPS signatures and malware detections. Correlate with Umbrella/Secure Access if deployed for layered defense. Track blocked URL categories to refine acceptable-use policies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:utd\"\n| stats count by event_type, signature, severity, src_ip, dst_ip, site_id\n| where severity IN (\"critical\",\"high\")\n| sort -count\n| table event_type signature severity src_ip dst_ip site_id count\n```\n\nUnderstanding this SPL\n\n**Security Policy Violations (UTD)** — SD-WAN edges running Unified Threat Defense (UTD) perform IPS, URL filtering, and AMP inline. Monitoring these events at the WAN edge catches threats that bypass centralized firewalls, especially for direct internet access (DIA) traffic that never traverses the data center.\n\nDocumented **Data sources**: vManage UTD events, `sourcetype=cisco:sdwan:utd`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:utd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:utd\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by event_type, signature, severity, src_ip, dst_ip, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where severity IN (\"critical\",\"high\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Security Policy Violations (UTD)**): table event_type signature severity src_ip dst_ip site_id count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection\n  by IDS_Attacks.signature, IDS_Attacks.severity, IDS_Attacks.src, IDS_Attacks.dest span=1h\n| where count > 0\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security Policy Violations (UTD)** — SD-WAN edges running Unified Threat Defense (UTD) perform IPS, URL filtering, and AMP inline. Monitoring these events at the WAN edge catches threats that bypass centralized firewalls, especially for direct internet access (DIA) traffic that never traverses the data center.\n\nDocumented **Data sources**: vManage UTD events, `sourcetype=cisco:sdwan:utd`. 
**App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection` — enable acceleration for that model.\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (signature, severity, source, destination), Bar chart (events by category), Timeline (event frequency).",
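              "_example": "Hedged sketch for the 'track blocked URL categories' item in Step 1. The `url-filtering` value for `event_type` and the `url_category`/`action` field names are assumptions; verify them against your actual extractions first.\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:utd\" event_type=\"url-filtering\"\n| where action=\"blocked\"\n| stats count by url_category, site_id\n| sort -count\n```\n\nA category that suddenly dominates the blocked list at one site is worth investigating even when no IPS signature fires.",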
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection\n  by IDS_Attacks.signature, IDS_Attacks.severity, IDS_Attacks.src, IDS_Attacks.dest span=1h\n| where count > 0\n| sort -count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.18",
              "n": "vManage Cluster Health",
              "c": "critical",
              "f": "beginner",
              "v": "vManage is the single management plane for the entire SD-WAN fabric. If the vManage cluster is unhealthy — high CPU, disk full, database replication lag, or services down — operators lose visibility and policy push capability across all sites.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API",
              "d": "vManage cluster status API, `sourcetype=cisco:sdwan:vmanage`",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:vmanage\"\n| stats latest(cpu_load) as cpu, latest(mem_util) as mem_pct, latest(disk_util) as disk_pct, latest(db_status) as db_status, latest(services_running) as services by vmanage_ip\n| where cpu > 70 OR mem_pct > 80 OR disk_pct > 75 OR db_status!=\"healthy\"\n| table vmanage_ip cpu mem_pct disk_pct db_status services",
              "m": "Poll vManage cluster health API. Monitor CPU, memory, disk usage, NMS database replication status, and running services. For clustered deployments, verify all nodes are in sync. Alert when any node exceeds 70% CPU, 80% memory, or 75% disk, or when database replication falls behind. Schedule regular config database backups independently.",
              "z": "Single value panels (CPU, memory, disk per node), Status indicator (cluster health), Table (services status).",
              "kfp": "Tunnels may renegotiate during ISP maintenance, BFD timer changes, planned controller upgrades, or policy pushes; short blips may look like failures when the business path is still acceptable.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API.\n• Ensure the following data sources are available: vManage cluster status API, `sourcetype=cisco:sdwan:vmanage`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll vManage cluster health API. Monitor CPU, memory, disk usage, NMS database replication status, and running services. For clustered deployments, verify all nodes are in sync. Alert when any node exceeds 70% CPU, 80% memory, or 75% disk, or when database replication falls behind. Schedule regular config database backups independently.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:vmanage\"\n| stats latest(cpu_load) as cpu, latest(mem_util) as mem_pct, latest(disk_util) as disk_pct, latest(db_status) as db_status, latest(services_running) as services by vmanage_ip\n| where cpu > 70 OR mem_pct > 80 OR disk_pct > 75 OR db_status!=\"healthy\"\n| table vmanage_ip cpu mem_pct disk_pct db_status services\n```\n\nUnderstanding this SPL\n\n**vManage Cluster Health** — vManage is the single management plane for the entire SD-WAN fabric. If the vManage cluster is unhealthy — high CPU, disk full, database replication lag, or services down — operators lose visibility and policy push capability across all sites.\n\nDocumented **Data sources**: vManage cluster status API, `sourcetype=cisco:sdwan:vmanage`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:vmanage. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:vmanage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vmanage_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cpu > 70 OR mem_pct > 80 OR disk_pct > 75 OR db_status!=\"healthy\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **vManage Cluster Health**): table vmanage_ip cpu mem_pct disk_pct db_status services\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value panels (CPU, memory, disk per node), Status indicator (cluster health), Table (services status).",
              "script": "",
              "premium": "",
              "hw": "vManage (physical or virtual)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.19",
              "n": "Transport Circuit SLA Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "ISPs commit to contractual SLAs for latency, jitter, loss, and uptime per circuit. SD-WAN BFD metrics provide continuous proof of whether carriers meet their commitments. SLA violation evidence supports service credits and carrier negotiations.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API",
              "d": "`sourcetype=cisco:sdwan:bfd`, `sourcetype=cisco:sdwan:interface`",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:bfd\"\n| stats avg(latency) as avg_latency, perc95(latency) as p95_latency, avg(jitter) as avg_jitter, avg(loss_percentage) as avg_loss, count as samples by local_color, site_id, remote_system_ip\n| eval sla_latency=50, sla_loss=0.1\n| eval latency_breach=if(avg_latency>sla_latency,\"YES\",\"NO\"), loss_breach=if(avg_loss>sla_loss,\"YES\",\"NO\")\n| where latency_breach=\"YES\" OR loss_breach=\"YES\"\n| table site_id local_color avg_latency p95_latency avg_jitter avg_loss latency_breach loss_breach",
              "m": "Define contractual SLA thresholds per transport type (MPLS: latency <50ms, loss <0.1%; Internet: latency <80ms, loss <0.5%). Aggregate BFD metrics daily. Generate monthly SLA compliance reports per carrier per circuit. Include uptime percentage from interface state changes. Use as evidence for carrier escalations and service credit claims.",
              "z": "Table (circuit SLA compliance), Line chart (latency trending per carrier), Single value (overall SLA compliance %).",
              "kfp": "Tunnels may renegotiate during ISP maintenance, BFD timer changes, planned controller upgrades, or policy pushes; short blips may look like failures when the business path is still acceptable.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API.\n• Ensure the following data sources are available: `sourcetype=cisco:sdwan:bfd`, `sourcetype=cisco:sdwan:interface`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine contractual SLA thresholds per transport type (MPLS: latency <50ms, loss <0.1%; Internet: latency <80ms, loss <0.5%). Aggregate BFD metrics daily. Generate monthly SLA compliance reports per carrier per circuit. Include uptime percentage from interface state changes. Use as evidence for carrier escalations and service credit claims.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:bfd\"\n| stats avg(latency) as avg_latency, perc95(latency) as p95_latency, avg(jitter) as avg_jitter, avg(loss_percentage) as avg_loss, count as samples by local_color, site_id, remote_system_ip\n| eval sla_latency=50, sla_loss=0.1\n| eval latency_breach=if(avg_latency>sla_latency,\"YES\",\"NO\"), loss_breach=if(avg_loss>sla_loss,\"YES\",\"NO\")\n| where latency_breach=\"YES\" OR loss_breach=\"YES\"\n| table site_id local_color avg_latency p95_latency avg_jitter avg_loss latency_breach loss_breach\n```\n\nUnderstanding this SPL\n\n**Transport Circuit SLA Tracking** — ISPs commit to contractual SLAs for latency, jitter, loss, and uptime per circuit. SD-WAN BFD metrics provide continuous proof of whether carriers meet their commitments. SLA violation evidence supports service credits and carrier negotiations.\n\nDocumented **Data sources**: `sourcetype=cisco:sdwan:bfd`, `sourcetype=cisco:sdwan:interface`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:bfd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:bfd\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by local_color, site_id, remote_system_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **sla_latency** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **latency_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where latency_breach=\"YES\" OR loss_breach=\"YES\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Transport Circuit SLA Tracking**): table site_id local_color avg_latency p95_latency avg_jitter avg_loss latency_breach loss_breach\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (circuit SLA compliance), Line chart (latency trending per carrier), Single value (overall SLA compliance %).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.20",
              "n": "Hub-and-Spoke vs Full-Mesh Topology Validation",
              "c": "medium",
              "f": "advanced",
              "v": "SD-WAN overlay topology determines traffic flow patterns. Validating that the actual tunnel mesh matches the intended design prevents asymmetric routing, hairpinning through hubs, and suboptimal site-to-site paths that add latency and waste hub bandwidth.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API",
              "d": "vManage BFD sessions, OMP routes, `sourcetype=cisco:sdwan:bfd`",
              "q": "index=sdwan sourcetype=\"cisco:sdwan:bfd\" state=\"up\"\n| stats dc(remote_system_ip) as peer_count, values(remote_system_ip) as peers by local_system_ip, site_id\n| eventstats avg(peer_count) as avg_peers\n| eval topology=case(peer_count>avg_peers*1.5,\"full-mesh candidate\",peer_count<=2,\"spoke\",1=1,\"partial-mesh\")\n| table site_id local_system_ip peer_count topology\n| sort -peer_count",
              "m": "Map the active tunnel mesh by enumerating BFD sessions per device. Compare against the intended topology (hub-and-spoke, regional hub, full-mesh). Identify sites with fewer tunnels than expected (potential reachability gaps) or more tunnels than intended (resource waste). Review when deploying new sites or changing control policies.",
              "z": "Network graph (nodes = sites, edges = tunnels), Table (site, peer count, topology type), Bar chart (topology distribution).",
              "kfp": "On-demand dynamic tunnels (TLOC extension) may create temporary additional peers that do not indicate misconfiguration.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API.\n• Ensure the following data sources are available: vManage BFD sessions, OMP routes, `sourcetype=cisco:sdwan:bfd`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap the active tunnel mesh by enumerating BFD sessions per device. Compare against the intended topology (hub-and-spoke, regional hub, full-mesh). Identify sites with fewer tunnels than expected (potential reachability gaps) or more tunnels than intended (resource waste). Review when deploying new sites or changing control policies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cisco:sdwan:bfd\" state=\"up\"\n| stats dc(remote_system_ip) as peer_count, values(remote_system_ip) as peers by local_system_ip, site_id\n| eventstats avg(peer_count) as avg_peers\n| eval topology=case(peer_count>avg_peers*1.5,\"full-mesh candidate\",peer_count<=2,\"spoke\",1=1,\"partial-mesh\")\n| table site_id local_system_ip peer_count topology\n| sort -peer_count\n```\n\nUnderstanding this SPL\n\n**Hub-and-Spoke vs Full-Mesh Topology Validation** — SD-WAN overlay topology determines traffic flow patterns. Validating that the actual tunnel mesh matches the intended design prevents asymmetric routing, hairpinning through hubs, and suboptimal site-to-site paths that add latency and waste hub bandwidth.\n\nDocumented **Data sources**: vManage BFD sessions, OMP routes, `sourcetype=cisco:sdwan:bfd`. **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538), vManage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cisco:sdwan:bfd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cisco:sdwan:bfd\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by local_system_ip, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **topology** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Hub-and-Spoke vs Full-Mesh Topology Validation**): table site_id local_system_ip peer_count topology\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Network graph (nodes = sites, edges = tunnels), Table (site, peer count, topology type), Bar chart (topology distribution).",
              "script": "",
              "premium": "",
              "hw": "Cisco Catalyst 8200, Catalyst 8300, Catalyst 8500, ISR 1100 (SD-WAN), ISR 4000 (SD-WAN), vEdge 100, vEdge 1000, vEdge 2000, vEdge 5000, vManage, vSmart",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how our wide-area links and SD-WAN paths are behaving so we spot a bad circuit or policy issue before branch users lose voice, video, or critical apps.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.21",
              "n": "VMware VeloCloud Orchestrator Tunnel Health",
              "c": "critical",
              "f": "intermediate",
              "v": "VMware SD-WAN (VeloCloud) is widely deployed for branch/WAN connectivity. Monitoring tunnel status, latency, and jitter from the VeloCloud Orchestrator API provides early warning of WAN degradation across non-Cisco SD-WAN estates.",
              "t": "VMware VeloCloud Orchestrator REST API (custom scripted input or HEC push), `Splunk_TA_vmware` (for vCenter; VeloCloud requires custom integration)",
              "d": "VeloCloud Orchestrator API (`/monitoring/aggregate/edge/link`)",
              "q": "index=sdwan sourcetype=\"velocloud:link\"\n| stats avg(latencyMsRx) as rx_latency, avg(latencyMsTx) as tx_latency, avg(jitterMsRx) as jitter, avg(lossPctRx) as loss_pct by edgeName, linkName\n| where rx_latency > 100 OR jitter > 30 OR loss_pct > 1\n| sort - rx_latency",
              "m": "Configure scripted input to poll VeloCloud Orchestrator REST API at 5-minute intervals. Store API key in `passwords.conf`. Normalize link status fields (UP/DOWN/STANDBY) for consistent alerting.",
              "z": "Line chart (latency/jitter per edge), Table (top-N degraded links), Status grid by site.",
              "kfp": "",
              "refs": "[VMware SD-WAN Documentation](https://docs.vmware.com/en/VMware-SD-WAN/index.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: VMware VeloCloud Orchestrator REST API (custom scripted input or HEC push), `Splunk_TA_vmware` (for vCenter; VeloCloud requires custom integration).\n• Ensure the following data sources are available: VeloCloud Orchestrator API (`/monitoring/aggregate/edge/link`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure scripted input to poll VeloCloud Orchestrator REST API at 5-minute intervals. Store API key in `passwords.conf`. Normalize link status fields (UP/DOWN/STANDBY) for consistent alerting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"velocloud:link\"\n| stats avg(latencyMsRx) as rx_latency, avg(latencyMsTx) as tx_latency, avg(jitterMsRx) as jitter, avg(lossPctRx) as loss_pct by edgeName, linkName\n| where rx_latency > 100 OR jitter > 30 OR loss_pct > 1\n| sort - rx_latency\n```\n\nUnderstanding this SPL\n\n**VMware VeloCloud Orchestrator Tunnel Health** — VMware SD-WAN (VeloCloud) is widely deployed for branch/WAN connectivity. Monitoring tunnel status, latency, and jitter from the VeloCloud Orchestrator API provides early warning of WAN degradation across non-Cisco SD-WAN estates.\n\nDocumented **Data sources**: VeloCloud Orchestrator API (`/monitoring/aggregate/edge/link`). **App/TA** (typical add-on context): VMware VeloCloud Orchestrator REST API (custom scripted input or HEC push), `Splunk_TA_vmware` (for vCenter; VeloCloud requires custom integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: velocloud:link. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"velocloud:link\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by edgeName, linkName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where rx_latency > 100 OR jitter > 30 OR loss_pct > 1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency/jitter per edge), Table (top-N degraded links), Status grid by site.\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "VMware SD-WAN Edge 510, 520, 540, 610, 620, 640, 680, 3400, 3800",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware",
                "vmware_vcenter"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.22",
              "n": "Aruba EdgeConnect SD-WAN Tunnel and Application Performance",
              "c": "critical",
              "f": "intermediate",
              "v": "HPE Aruba EdgeConnect (formerly Silver Peak) provides SD-WAN with WAN optimization. Monitoring tunnel health, boost metrics, and application path steering ensures WAN fabric reliability for organizations running Aruba SD-WAN alongside or instead of Cisco.",
              "t": "Aruba EdgeConnect syslog, Aruba Orchestrator REST API (custom scripted input or HEC push)",
              "d": "Aruba Orchestrator API, syslog (`sourcetype=\"aruba:edgeconnect\"`)",
              "q": "index=sdwan sourcetype=\"aruba:edgeconnect\"\n| search tunnel_state=\"DOWN\" OR tunnel_state=\"DEGRADED\"\n| stats count by appliance_name, tunnel_name, tunnel_state, peer_name\n| sort - count",
              "m": "Forward EdgeConnect appliance syslog to Splunk (UDP/TCP 514). Optionally poll Aruba Orchestrator API for structured tunnel and application metrics. Alert on tunnel DOWN/DEGRADED states and WAN optimization bypass events.",
              "z": "Status grid (tunnel per appliance), Table (degraded tunnels), Line chart (WAN optimization savings).",
              "kfp": "",
              "refs": "[HPE Aruba EdgeConnect SD-WAN](https://www.arubanetworks.com/products/sd-wan/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Aruba EdgeConnect syslog, Aruba Orchestrator REST API (custom scripted input or HEC push).\n• Ensure the following data sources are available: Aruba Orchestrator API, syslog (`sourcetype=\"aruba:edgeconnect\"`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward EdgeConnect appliance syslog to Splunk (UDP/TCP 514). Optionally poll Aruba Orchestrator API for structured tunnel and application metrics. Alert on tunnel DOWN/DEGRADED states and WAN optimization bypass events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"aruba:edgeconnect\"\n| search tunnel_state=\"DOWN\" OR tunnel_state=\"DEGRADED\"\n| stats count by appliance_name, tunnel_name, tunnel_state, peer_name\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Aruba EdgeConnect SD-WAN Tunnel and Application Performance** — HPE Aruba EdgeConnect (formerly Silver Peak) provides SD-WAN with WAN optimization. Monitoring tunnel health, boost metrics, and application path steering ensures WAN fabric reliability for organizations running Aruba SD-WAN alongside or instead of Cisco.\n\nDocumented **Data sources**: Aruba Orchestrator API, syslog (`sourcetype=\"aruba:edgeconnect\"`). **App/TA** (typical add-on context): Aruba EdgeConnect syslog, Aruba Orchestrator REST API (custom scripted input or HEC push). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: aruba:edgeconnect. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"aruba:edgeconnect\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by appliance_name, tunnel_name, tunnel_state, peer_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (tunnel per appliance), Table (degraded tunnels), Line chart (WAN optimization savings).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "HPE Aruba EdgeConnect EC-S, EC-M, EC-L, EC-XL; Aruba Orchestrator (on-prem or cloud)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.23",
              "n": "Versa Networks SD-WAN Path Quality and Routing Decisions",
              "c": "high",
              "f": "advanced",
              "v": "Versa Networks provides integrated SD-WAN, security, and routing. Monitoring path quality scores, SLA violations, and routing decisions across Versa FlexVNF devices enables proactive WAN management for Versa deployments.",
              "t": "Versa Director REST API (custom scripted input or HEC push), Versa FlexVNF syslog",
              "d": "Versa Director API, syslog (`sourcetype=\"versa:sdwan\"`)",
              "q": "index=sdwan sourcetype=\"versa:sdwan\"\n| search event_type=\"sla_violation\" OR event_type=\"path_switch\"\n| stats count by branch_name, circuit_name, event_type, sla_policy\n| sort - count",
              "m": "Forward Versa FlexVNF syslog to Splunk. Poll Versa Director API for structured SLA violation and path steering data. Alert when SLA violations exceed threshold per site/circuit.",
              "z": "Table (SLA violations by branch), Timeline (path switch events), Line chart (circuit quality scores).",
              "kfp": "",
              "refs": "[Versa Networks Documentation](https://docs.versa-networks.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Versa Director REST API (custom scripted input or HEC push), Versa FlexVNF syslog.\n• Ensure the following data sources are available: Versa Director API, syslog (`sourcetype=\"versa:sdwan\"`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Versa FlexVNF syslog to Splunk. Poll Versa Director API for structured SLA violation and path steering data. Alert when SLA violations exceed threshold per site/circuit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"versa:sdwan\"\n| search event_type=\"sla_violation\" OR event_type=\"path_switch\"\n| stats count by branch_name, circuit_name, event_type, sla_policy\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Versa Networks SD-WAN Path Quality and Routing Decisions** — Versa Networks provides integrated SD-WAN, security, and routing. Monitoring path quality scores, SLA violations, and routing decisions across Versa FlexVNF devices enables proactive WAN management for Versa deployments.\n\nDocumented **Data sources**: Versa Director API, syslog (`sourcetype=\"versa:sdwan\"`). **App/TA** (typical add-on context): Versa Director REST API (custom scripted input or HEC push), Versa FlexVNF syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: versa:sdwan. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"versa:sdwan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by branch_name, circuit_name, event_type, sla_policy** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (SLA violations by branch), Timeline (path switch events), Line chart (circuit quality scores).",
              "script": "",
              "premium": "",
              "hw": "Versa FlexVNF (CSG1000/2000/3000 series), Versa Director, Versa Analytics",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.24",
              "n": "Fortinet SD-WAN Health-Check and SLA Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Fortinet SD-WAN is embedded in FortiGate appliances and is one of the most widely deployed SD-WAN solutions. Monitoring SD-WAN health-check probe results and SLA rule violations across FortiGate devices ensures WAN path reliability and application performance.",
              "t": "`TA-fortinet_fortigate` (Splunkbase 2846)",
              "d": "`sourcetype=\"fgt_traffic\"`, `sourcetype=\"fgt_event\"`",
              "q": "index=network sourcetype=\"fgt_event\" subtype=\"sdwan\"\n| search event_type=\"health_check\" status!=\"alive\"\n| stats count by devname, health_check_name, interface, status\n| sort - count",
              "m": "Enable SD-WAN health-check logging on FortiGate (`config log setting`, `set fwpolicy-implicit-log enable`). Install TA-fortinet_fortigate for field extraction. Alert when health-check probes fail or SLA rule violations trigger path switches.",
              "z": "Status grid (health-check per device), Table (failed probes), Line chart (latency/jitter/packet_loss per link).",
              "kfp": "",
              "refs": "[Splunkbase app 2846](https://splunkbase.splunk.com/app/2846), [Fortinet SD-WAN docs](https://docs.fortinet.com/document/fortigate/latest/sd-wan)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-fortinet_fortigate` (Splunkbase 2846).\n• Ensure the following data sources are available: `sourcetype=\"fgt_traffic\"`, `sourcetype=\"fgt_event\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable SD-WAN health-check logging on FortiGate (`config log setting`, `set fwpolicy-implicit-log enable`). Install TA-fortinet_fortigate for field extraction. Alert when health-check probes fail or SLA rule violations trigger path switches.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"fgt_event\" subtype=\"sdwan\"\n| search event_type=\"health_check\" status!=\"alive\"\n| stats count by devname, health_check_name, interface, status\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Fortinet SD-WAN Health-Check and SLA Compliance** — Fortinet SD-WAN is embedded in FortiGate appliances and is one of the most widely deployed SD-WAN solutions. Monitoring SD-WAN health-check probe results and SLA rule violations across FortiGate devices ensures WAN path reliability and application performance.\n\nDocumented **Data sources**: `sourcetype=\"fgt_traffic\"`, `sourcetype=\"fgt_event\"`. **App/TA** (typical add-on context): `TA-fortinet_fortigate` (Splunkbase 2846). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: fgt_event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"fgt_event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by devname, health_check_name, interface, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (health-check per device), Table (failed probes), Line chart (latency/jitter/packet_loss per link).",
              "script": "",
              "premium": "",
              "hw": "FortiGate 40F, 60F, 80F, 100F, 200F, 400E, 600E, 1000D, 3000F; FortiManager",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.5.25",
              "n": "Cato Networks SASE Event Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Cato Networks provides a cloud-native SASE platform combining SD-WAN with network security. Monitoring socket (edge device) health, PoP connectivity, and security events from Cato provides unified visibility for organizations using SASE as their primary WAN architecture.",
              "t": "Cato Networks Events App (Splunkbase 8037)",
              "d": "`sourcetype=\"cato:events\"`, Cato Management Application API",
              "q": "index=sdwan sourcetype=\"cato:events\" event_type=\"connectivity\"\n| search socket_status=\"disconnected\" OR pop_status=\"degraded\"\n| stats count, latest(_time) as last_seen by site_name, socket_name, pop_name, socket_status\n| sort - count",
              "m": "Install Cato Networks Events App from Splunkbase. Configure Cato API integration in the app's setup page. Alert on socket disconnect and PoP degradation events.",
              "z": "Status grid (socket health per site), Table (disconnection events), Line chart (connectivity trend).",
              "kfp": "",
              "refs": "[Splunkbase app 8037](https://splunkbase.splunk.com/app/8037), [Cato Networks API docs](https://api.catonetworks.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cato Networks Events App (Splunkbase 8037).\n• Ensure the following data sources are available: `sourcetype=\"cato:events\"`, Cato Management Application API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall Cato Networks Events App from Splunkbase. Configure Cato API integration in the app's setup page. Alert on socket disconnect and PoP degradation events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"cato:events\" event_type=\"connectivity\"\n| search socket_status=\"disconnected\" OR pop_status=\"degraded\"\n| stats count, latest(_time) as last_seen by site_name, socket_name, pop_name, socket_status\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Cato Networks SASE Event Monitoring** — Cato Networks provides a cloud-native SASE platform combining SD-WAN with network security. Monitoring socket (edge device) health, PoP connectivity, and security events from Cato provides unified visibility for organizations using SASE as their primary WAN architecture.\n\nDocumented **Data sources**: `sourcetype=\"cato:events\"`, Cato Management Application API. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: cato:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"cato:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by site_name, socket_name, pop_name, socket_status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (socket health per site), Table (disconnection events), Line chart (connectivity trend).",
              "script": "",
              "premium": "",
              "hw": "Cato Socket X1500, X1600, X1700; Cato vSocket; Cato PoP network",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Availability",
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.6,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 25,
            "none": 0
          }
        },
        {
          "i": "5.6",
          "n": "DNS & DHCP",
          "u": [
            {
              "i": "5.6.1",
              "n": "DNS Query Volume Trending",
              "c": "medium",
              "f": "beginner",
              "v": "DNS query volume trending supports capacity planning and reveals traffic pattern changes.",
              "t": "Splunk_TA_infoblox, Splunk_TA_windows (DNS logs), Pi-hole syslog",
              "d": "`sourcetype=infoblox:dns`, `sourcetype=MSAD:NT6:DNS`",
              "q": "index=dns sourcetype=\"infoblox:dns\" OR sourcetype=\"MSAD:NT6:DNS\"\n| timechart span=5m count as qps",
              "m": "Forward DNS query logs. For Windows DNS: enable analytical logging. For Infoblox: configure syslog output. Track queries per second over time.",
              "z": "Line chart (QPS over time), Single value (current QPS), Table.",
              "kfp": "Spikes can come from DNS cache flushes, authorized security or performance monitoring, or very talky clients; compare against change windows and known scanning tools.",
              "refs": "[CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_infoblox, Splunk_TA_windows (DNS logs), Pi-hole syslog.\n• Ensure the following data sources are available: `sourcetype=infoblox:dns`, `sourcetype=MSAD:NT6:DNS`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward DNS query logs. For Windows DNS: enable analytical logging. For Infoblox: configure syslog output. Track queries per second over time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns sourcetype=\"infoblox:dns\" OR sourcetype=\"MSAD:NT6:DNS\"\n| timechart span=5m count as qps\n```\n\nUnderstanding this SPL\n\n**DNS Query Volume Trending** — DNS query volume trending supports capacity planning and reveals traffic pattern changes.\n\nDocumented **Data sources**: `sourcetype=infoblox:dns`, `sourcetype=MSAD:NT6:DNS`. **App/TA** (typical add-on context): Splunk_TA_infoblox, Splunk_TA_windows (DNS logs), Pi-hole syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns; **sourcetype**: infoblox:dns, MSAD:NT6:DNS. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns, sourcetype=\"infoblox:dns\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.src DNS.query DNS.record_type span=5m\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DNS Query Volume Trending** — DNS query volume trending supports capacity planning and reveals traffic pattern changes.\n\nDocumented **Data sources**: `sourcetype=infoblox:dns`, `sourcetype=MSAD:NT6:DNS`. **App/TA** (typical add-on context): Splunk_TA_infoblox, Splunk_TA_windows (DNS logs), Pi-hole syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (QPS over time), Single value (current QPS), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for unusual DNS patterns so we notice possible attacks, mistakes, or overloaded resolvers before people feel it as slow apps or failed lookups.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.src DNS.query DNS.record_type span=5m\n| sort -count",
              "e": [
                "infoblox",
                "syslog",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.2",
              "n": "NXDOMAIN Spike Detection",
              "c": "high",
              "f": "intermediate",
              "v": "NXDOMAIN spikes indicate DGA malware (generating random domain lookups), misconfiguration, or DNS infrastructure issues.",
              "t": "DNS TAs",
              "d": "DNS query logs",
              "q": "index=dns reply_code=\"NXDOMAIN\" OR rcode=\"3\"\n| timechart span=5m count as nxdomain_count\n| eventstats avg(nxdomain_count) as avg_nx, stdev(nxdomain_count) as std_nx\n| where nxdomain_count > (avg_nx + 3*std_nx)",
              "m": "Monitor DNS response codes. Baseline NXDOMAIN rates. Alert when exceeding 3 standard deviations. Investigate the querying clients and domain patterns.",
              "z": "Line chart with threshold, Table (top NXDOMAIN clients), Bar chart (top queried NX domains).",
              "kfp": "Legitimate NXDOMAIN or odd query bursts can come from cache flushes, new app rollouts, mis-typed domains, or chatty IoT devices; baseline your network before alerting on spikes.",
              "refs": "[CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DNS TAs.\n• Ensure the following data sources are available: DNS query logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor DNS response codes. Baseline NXDOMAIN rates. Alert when exceeding 3 standard deviations. Investigate the querying clients and domain patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns reply_code=\"NXDOMAIN\" OR rcode=\"3\"\n| timechart span=5m count as nxdomain_count\n| eventstats avg(nxdomain_count) as avg_nx, stdev(nxdomain_count) as std_nx\n| where nxdomain_count > (avg_nx + 3*std_nx)\n```\n\nUnderstanding this SPL\n\n**NXDOMAIN Spike Detection** — NXDOMAIN spikes indicate DGA malware (generating random domain lookups), misconfiguration, or DNS infrastructure issues.\n\nDocumented **Data sources**: DNS query logs. **App/TA** (typical add-on context): DNS TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where nxdomain_count > (avg_nx + 3*std_nx)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  where DNS.reply_code_id=3\n  by DNS.src DNS.query span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NXDOMAIN Spike Detection** — NXDOMAIN spikes indicate DGA malware (generating random domain lookups), misconfiguration, or DNS infrastructure issues.\n\nDocumented **Data sources**: DNS query logs. **App/TA** (typical add-on context): DNS TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) 
as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart with threshold, Table (top NXDOMAIN clients), Bar chart (top queried NX domains).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch nxdomain spike detection so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Security",
                "Anomaly"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  where DNS.reply_code_id=3\n  by DNS.src DNS.query span=1h\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.3",
              "n": "SERVFAIL Rate Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "SERVFAIL increases indicate upstream DNS failures, DNSSEC validation issues, or resolver problems.",
              "t": "DNS TAs",
              "d": "DNS query logs",
              "q": "index=dns reply_code=\"SERVFAIL\" OR rcode=\"2\"\n| timechart span=5m count as servfail | where servfail > 10",
              "m": "Track SERVFAIL response codes. Alert on increases. Investigate which domains are failing and which resolvers are affected.",
              "z": "Line chart, Table (failing domains), Single value.",
              "kfp": "Legitimate NXDOMAIN or odd query bursts can come from cache flushes, new app rollouts, mis-typed domains, or chatty IoT devices; baseline your network before alerting on spikes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DNS TAs.\n• Ensure the following data sources are available: DNS query logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack SERVFAIL response codes. Alert on increases. Investigate which domains are failing and which resolvers are affected.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns reply_code=\"SERVFAIL\" OR rcode=\"2\"\n| timechart span=5m count as servfail | where servfail > 10\n```\n\nUnderstanding this SPL\n\n**SERVFAIL Rate Monitoring** — SERVFAIL increases indicate upstream DNS failures, DNSSEC validation issues, or resolver problems.\n\nDocumented **Data sources**: DNS query logs. **App/TA** (typical add-on context): DNS TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where servfail > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart, Table (failing domains), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch servfail rate monitoring so we notice problems while they are still small, and we can fix them before Wi-Fi users get dropped calls or dead spots.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.4",
              "n": "DNS Tunneling Detection",
              "c": "high",
              "f": "intermediate",
              "v": "DNS tunneling uses DNS queries to exfiltrate data or establish C2 channels, bypassing traditional security controls.",
              "t": "DNS TAs",
              "d": "DNS query logs",
              "q": "index=dns\n| eval query_len=len(query)\n| stats avg(query_len) as avg_len, count as queries, dc(query) as unique_queries by src, domain\n| where avg_len > 50 OR queries > 1000\n| sort -avg_len",
              "m": "Monitor for anomalously long DNS queries (>50 chars), high query volumes to single domains, and TXT record queries. Baseline normal DNS patterns.",
              "z": "Table (client, domain, query length, volume), Scatter plot, Bar chart.",
              "kfp": "Legitimate NXDOMAIN or odd query bursts can come from cache flushes, new app rollouts, mis-typed domains, or chatty IoT devices; baseline your network before alerting on spikes.",
              "refs": "[CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DNS TAs.\n• Ensure the following data sources are available: DNS query logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor for anomalously long DNS queries (>50 chars), high query volumes to single domains, and TXT record queries. Baseline normal DNS patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns\n| eval query_len=len(query)\n| stats avg(query_len) as avg_len, count as queries, dc(query) as unique_queries by src, domain\n| where avg_len > 50 OR queries > 1000\n| sort -avg_len\n```\n\nUnderstanding this SPL\n\n**DNS Tunneling Detection** — DNS tunneling uses DNS queries to exfiltrate data or establish C2 channels, bypassing traditional security controls.\n\nDocumented **Data sources**: DNS query logs. **App/TA** (typical add-on context): DNS TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **query_len** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by src, domain** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_len > 50 OR queries > 1000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count dc(DNS.query) as unique_queries\n  from datamodel=Network_Resolution.DNS\n  by DNS.src span=1h\n| where unique_queries > 500\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DNS Tunneling Detection** — DNS tunneling uses DNS queries to exfiltrate data or establish C2 channels, bypassing traditional security controls.\n\nDocumented **Data sources**: DNS query logs. **App/TA** (typical add-on context): DNS TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Filters the current rows with `where unique_queries > 500` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (client, domain, query length, volume), Scatter plot, Bar chart.",
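\n\nOptional variant (hedged sketch): subdomain entropy\nHigh-entropy subdomains are a common tunneling tell. This sketch assumes the URL Toolbox app (Splunkbase 2734) and its `ut_shannon` macro are installed, and reuses the `query` and `src` fields documented above; the entropy cutoff and query floor are illustrative starting points, not validated thresholds:\n\n```spl\nindex=dns\n| eval sub=mvindex(split(query, \".\"), 0)\n| `ut_shannon(sub)`\n| stats avg(ut_shannon) as avg_entropy, count as queries by src\n| where avg_entropy > 3.5 AND queries > 100\n| sort -avg_entropy\n```\n\nBaseline before alerting: CDN and cloud endpoints also generate long, random-looking labels.",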
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for unusual DNS patterns so we notice possible attacks, mistakes, or overloaded resolvers before people feel it as slow apps or failed lookups.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats `summariesonly` count dc(DNS.query) as unique_queries\n  from datamodel=Network_Resolution.DNS\n  by DNS.src span=1h\n| where unique_queries > 500",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.5",
              "n": "DHCP Scope Exhaustion",
              "c": "high",
              "f": "advanced",
              "v": "Empty DHCP scopes prevent new devices from getting network access.",
              "t": "Splunk_TA_windows (DHCP logs), Splunk_TA_infoblox",
              "d": "DHCP server logs, API metrics",
              "q": "index=dhcp sourcetype=\"DhcpSrvLog\" OR sourcetype=\"infoblox:dhcp\"\n| stats dc(assigned_ip) as used by scope_name, scope_range\n| eval total = scope_end - scope_start\n| eval used_pct=round(used/total*100,1) | where used_pct > 90",
              "m": "For Windows: forward DHCP audit logs + scripted input for scope stats. For Infoblox: use API or syslog. Alert when >90% utilized.",
              "z": "Gauge per scope, Table, Bar chart.",
              "kfp": "DHCP pools may temporarily fill during BYOD events, conference Wi-Fi spikes, large office moves, or right after an IP scope change while devices renew in bulk.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_windows (DHCP logs), Splunk_TA_infoblox.\n• Ensure the following data sources are available: DHCP server logs, API metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFor Windows: forward DHCP audit logs + scripted input for scope stats. For Infoblox: use API or syslog. Alert when >90% utilized.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dhcp sourcetype=\"DhcpSrvLog\" OR sourcetype=\"infoblox:dhcp\"\n| stats dc(assigned_ip) as used by scope_name, scope_range\n| eval total = scope_end - scope_start\n| eval used_pct=round(used/total*100,1) | where used_pct > 90\n```\n\nUnderstanding this SPL\n\n**DHCP Scope Exhaustion** — Empty DHCP scopes prevent new devices from getting network access.\n\nDocumented **Data sources**: DHCP server logs, API metrics. **App/TA** (typical add-on context): Splunk_TA_windows (DHCP logs), Splunk_TA_infoblox. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dhcp; **sourcetype**: DhcpSrvLog, infoblox:dhcp. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dhcp, sourcetype=\"DhcpSrvLog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by scope_name, scope_range** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct > 90` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge per scope, Table, Bar chart.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when address pools are filling up or leases look wrong, so new phones and laptops can still get on the network when they need to.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "infoblox",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.6",
              "n": "DHCP Rogue Server Detection",
              "c": "high",
              "f": "beginner",
              "v": "Rogue DHCP servers assign wrong IPs/gateways, causing network disruption and potential MitM attacks.",
              "t": "Network syslog, DHCP snooping logs",
              "d": "DHCP conflict events, switch DHCP snooping",
              "q": "index=network \"DHCP\" AND (\"rogue\" OR \"conflict\" OR \"unauthorized\" OR \"snooping violation\")\n| table _time host src _raw | sort -_time",
              "m": "Enable DHCP snooping on switches. Forward syslog. Alert on any rogue DHCP server detection events.",
              "z": "Events list (critical), Table, Map.",
              "kfp": "Port-security or MAB churn, lab switches, and miswired uplinks can resemble rogue offers until you confirm the MAC and port in your switch and DHCP admin views.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Network syslog, DHCP snooping logs.\n• Ensure the following data sources are available: DHCP conflict events, switch DHCP snooping.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable DHCP snooping on switches. Forward syslog. Alert on any rogue DHCP server detection events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network \"DHCP\" AND (\"rogue\" OR \"conflict\" OR \"unauthorized\" OR \"snooping violation\")\n| table _time host src _raw | sort -_time\n```\n\nUnderstanding this SPL\n\n**DHCP Rogue Server Detection** — Rogue DHCP servers assign wrong IPs/gateways, causing network disruption and potential MitM attacks.\n\nDocumented **Data sources**: DHCP conflict events, switch DHCP snooping. **App/TA** (typical add-on context): Network syslog, DHCP snooping logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **DHCP Rogue Server Detection**): table _time host src _raw\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events list (critical), Table, Map.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when a strange device might be handing out addresses on the network, which can break connectivity or steer people the wrong way until we fix it.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.7",
              "n": "DNS Record Change Audit",
              "c": "medium",
              "f": "beginner",
              "v": "Unauthorized DNS changes can redirect traffic to attacker infrastructure (DNS hijacking).",
              "t": "Splunk_TA_infoblox, DNS update logs",
              "d": "Infoblox audit log, DNS dynamic update logs",
              "q": "index=dns sourcetype=\"infoblox:audit\" (\"Added\" OR \"Deleted\" OR \"Modified\") AND (\"record\" OR \"zone\")\n| table _time admin record_type record_name record_data action | sort -_time",
              "m": "Forward DNS server audit logs. Alert on changes to critical domains. Correlate with change tickets.",
              "z": "Table (record, action, who, when), Timeline, Single value.",
              "kfp": "Planned cutovers, DDI automation, and bulk import jobs can produce bursts of add/delete events that are authorized but look noisy; align to change records.",
              "refs": "[CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_infoblox, DNS update logs.\n• Ensure the following data sources are available: Infoblox audit log, DNS dynamic update logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward DNS server audit logs. Alert on changes to critical domains. Correlate with change tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns sourcetype=\"infoblox:audit\" (\"Added\" OR \"Deleted\" OR \"Modified\") AND (\"record\" OR \"zone\")\n| table _time admin record_type record_name record_data action | sort -_time\n```\n\nUnderstanding this SPL\n\n**DNS Record Change Audit** — Unauthorized DNS changes can redirect traffic to attacker infrastructure (DNS hijacking).\n\nDocumented **Data sources**: Infoblox audit log, DNS dynamic update logs. **App/TA** (typical add-on context): Splunk_TA_infoblox, DNS update logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns; **sourcetype**: infoblox:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns, sourcetype=\"infoblox:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **DNS Record Change Audit**): table _time admin record_type record_name record_data action\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.src DNS.query DNS.record_type span=5m\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DNS Record Change Audit** — Unauthorized DNS changes can redirect traffic to attacker infrastructure (DNS hijacking).\n\nDocumented **Data sources**: Infoblox audit log, DNS dynamic update logs. **App/TA** (typical add-on context): Splunk_TA_infoblox, DNS update logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (record, action, who, when), Timeline, Single value.",
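\n\nOptional variant (hedged sketch): bulk-change detector\nDDI automation and planned cutovers generate authorized bursts of add/delete events (see known false positives), so an hourly volume check per admin helps separate routine automation from an unusual flurry of edits. This sketch reuses the documented admin and action fields; the 20-changes-per-hour threshold is an illustrative starting point:\n\n```spl\nindex=dns sourcetype=\"infoblox:audit\" (\"Added\" OR \"Deleted\" OR \"Modified\") AND (\"record\" OR \"zone\")\n| bin _time span=1h\n| stats count as changes, values(action) as actions by _time, admin\n| where changes > 20\n```\n\nCorrelate hits with change tickets before escalating, as the implementation notes suggest.",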
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for unusual DNS patterns so we notice possible attacks, mistakes, or overloaded resolvers before people feel it as slow apps or failed lookups.",
              "mtype": [
                "Configuration",
                "Compliance"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.src DNS.query DNS.record_type span=5m\n| sort -count",
              "e": [
                "infoblox"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.8",
              "n": "DNS Latency Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "DNS latency directly adds to every network connection. Slow DNS = slow everything.",
              "t": "Custom scripted input, DNS diagnostic logs",
              "d": "DNS recursive query timing",
              "q": "index=dns sourcetype=\"dns:latency\"\n| timechart span=5m avg(response_time_ms) as avg_latency by dns_server\n| where avg_latency > 50",
              "m": "Use scripted input running `dig` queries against DNS servers measuring response time. Or enable DNS analytical logging with timing. Alert when average latency >50ms.",
              "z": "Line chart per server, Gauge, Table.",
              "kfp": "Spikes can come from DNS cache flushes, authorized security or performance monitoring, or very talky clients; compare against change windows and known scanning tools.",
              "refs": "[CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input, DNS diagnostic logs.\n• Ensure the following data sources are available: DNS recursive query timing.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse scripted input running `dig` queries against DNS servers measuring response time. Or enable DNS analytical logging with timing. Alert when average latency >50ms.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns sourcetype=\"dns:latency\"\n| timechart span=5m avg(response_time_ms) as avg_latency by dns_server\n| where avg_latency > 50\n```\n\nUnderstanding this SPL\n\n**DNS Latency Monitoring** — DNS latency directly adds to every network connection. Slow DNS = slow everything.\n\nDocumented **Data sources**: DNS recursive query timing. **App/TA** (typical add-on context): Custom scripted input, DNS diagnostic logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns; **sourcetype**: dns:latency. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns, sourcetype=\"dns:latency\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by dns_server** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_latency > 50` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.src DNS.query DNS.record_type span=5m\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DNS Latency Monitoring** — DNS latency directly adds to every network connection. Slow DNS = slow everything.\n\nDocumented **Data sources**: DNS recursive query timing. **App/TA** (typical add-on context): Custom scripted input, DNS diagnostic logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart per server, Gauge, Table.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
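\n\nOptional variant (hedged sketch): percentile thresholds\nAverages can hide tail latency that users still feel. This sketch swaps the mean for a 95th-percentile check; the 150ms cutoff is an illustrative value, not a validated SLA:\n\n```spl\nindex=dns sourcetype=\"dns:latency\"\n| bin _time span=5m\n| stats perc95(response_time_ms) as p95_latency by _time, dns_server\n| where p95_latency > 150\n```\n\nRun both forms side by side for a while and keep whichever threshold matches real user complaints.",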
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for unusual DNS patterns so we notice possible attacks, mistakes, or overloaded resolvers before people feel it as slow apps or failed lookups.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.src DNS.query DNS.record_type span=5m\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.9",
              "n": "DNS Cache Hit Ratio",
              "c": "medium",
              "f": "intermediate",
              "v": "Low cache hit ratios indicate either a surge of new queries, cache poisoning attempts, or misconfigured TTLs — all increasing latency and upstream load.",
              "t": "Splunk_TA_infoblox, BIND/Unbound logs",
              "d": "`sourcetype=infoblox:dns`, `sourcetype=named`",
              "q": "index=network sourcetype=\"infoblox:dns\"\n| eval cache_hit=if(match(message,\"cache hit\"),1,0), total=1\n| timechart span=1h sum(cache_hit) as hits, sum(total) as total\n| eval hit_ratio=round(hits/total*100,1) | where hit_ratio < 70",
              "m": "Enable query logging on DNS resolvers. Track cache hit vs. miss ratio. Alert when hit ratio drops below 70%. Investigate top domains causing misses.",
              "z": "Line chart (hit ratio over time), Single value (current ratio), Table (top miss domains).",
              "kfp": "Spikes can come from DNS cache flushes, authorized security or performance monitoring, or very talky clients; compare against change windows and known scanning tools.",
              "refs": "[CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_infoblox, BIND/Unbound logs.\n• Ensure the following data sources are available: `sourcetype=infoblox:dns`, `sourcetype=named`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable query logging on DNS resolvers. Track cache hit vs. miss ratio. Alert when hit ratio drops below 70%. Investigate top domains causing misses.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"infoblox:dns\"\n| eval cache_hit=if(match(message,\"cache hit\"),1,0), total=1\n| timechart span=1h sum(cache_hit) as hits, sum(total) as total\n| eval hit_ratio=round(hits/total*100,1) | where hit_ratio < 70\n```\n\nUnderstanding this SPL\n\n**DNS Cache Hit Ratio** — Low cache hit ratios indicate either a surge of new queries, cache poisoning attempts, or misconfigured TTLs — all increasing latency and upstream load.\n\nDocumented **Data sources**: `sourcetype=infoblox:dns`, `sourcetype=named`. **App/TA** (typical add-on context): Splunk_TA_infoblox, BIND/Unbound logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: infoblox:dns. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"infoblox:dns\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cache_hit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **hit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hit_ratio < 70` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.src DNS.query DNS.record_type span=5m\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DNS Cache Hit Ratio** — Low cache hit ratios indicate either a surge of new queries, cache poisoning attempts, or misconfigured TTLs — all increasing latency and upstream load.\n\nDocumented **Data sources**: `sourcetype=infoblox:dns`, `sourcetype=named`. **App/TA** (typical add-on context): Splunk_TA_infoblox, BIND/Unbound logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (hit ratio over time), Single value (current ratio), Table (top miss domains).",
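\n\nOptional variant (hedged sketch): top miss domains\nThe visualization list above includes a table of top miss domains, but the main search only trends the ratio. This sketch fills that gap; it assumes the resolver log also extracts a `query` field alongside `message` (verify the extraction for your sourcetype):\n\n```spl\nindex=network sourcetype=\"infoblox:dns\"\n| eval cache_hit=if(match(message,\"cache hit\"),1,0)\n| where cache_hit=0\n| stats count as misses by query\n| sort -misses\n| head 20\n```\n\nRecurring top-miss domains with short TTLs are usually the quickest wins for raising the hit ratio.",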
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for unusual DNS patterns so we notice possible attacks, mistakes, or overloaded resolvers before people feel it as slow apps or failed lookups.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.src DNS.query DNS.record_type span=5m\n| sort -count",
              "e": [
                "infoblox"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.10",
              "n": "DNSSEC Validation Failures",
              "c": "high",
              "f": "intermediate",
              "v": "DNSSEC failures can indicate DNS spoofing attempts or misconfigured zones. Monitoring prevents users from being directed to malicious sites.",
              "t": "Splunk_TA_infoblox, BIND logs",
              "d": "`sourcetype=infoblox:dns`, `sourcetype=named`",
              "q": "index=network sourcetype=\"named\" \"DNSSEC\" (\"validation failure\" OR \"SERVFAIL\" OR \"no valid signature\")\n| rex \"(?<query_domain>[a-zA-Z0-9.-]+\\.)/(?<query_type>\\w+)\"\n| stats count by query_domain, query_type | sort -count",
              "m": "Enable DNSSEC validation logging. Monitor for validation failures by domain. Cross-reference with known domain registrations. Alert on spikes in DNSSEC failures.",
              "z": "Table (domain, failure count), Timechart (failure rate), Bar chart.",
              "kfp": "Spikes can come from DNS cache flushes, authorized security or performance monitoring, or very talky clients; compare against change windows and known scanning tools.",
              "refs": "[CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_infoblox, BIND logs.\n• Ensure the following data sources are available: `sourcetype=infoblox:dns`, `sourcetype=named`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable DNSSEC validation logging. Monitor for validation failures by domain. Cross-reference with known domain registrations. Alert on spikes in DNSSEC failures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"named\" \"DNSSEC\" (\"validation failure\" OR \"SERVFAIL\" OR \"no valid signature\")\n| rex \"(?<query_domain>[a-zA-Z0-9.-]+\\.)/(?<query_type>\\w+)\"\n| stats count by query_domain, query_type | sort -count\n```\n\nUnderstanding this SPL\n\n**DNSSEC Validation Failures** — DNSSEC failures can indicate DNS spoofing attempts or misconfigured zones. Monitoring prevents users from being directed to malicious sites.\n\nDocumented **Data sources**: `sourcetype=infoblox:dns`, `sourcetype=named`. **App/TA** (typical add-on context): Splunk_TA_infoblox, BIND logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: named. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"named\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by query_domain, query_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  where DNS.reply_code_id=3\n  by DNS.src DNS.query span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DNSSEC Validation Failures** — DNSSEC failures can indicate DNS spoofing attempts or misconfigured zones. Monitoring prevents users from being directed to malicious sites.\n\nDocumented **Data sources**: `sourcetype=infoblox:dns`, `sourcetype=named`. **App/TA** (typical add-on context): Splunk_TA_infoblox, BIND logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (domain, failure count), Timechart (failure rate), Bar chart.",
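\n\nOptional variant (hedged sketch): failure spike detection\nThe implementation notes call for alerting on spikes rather than a fixed count. One simple expression of that is a mean-plus-three-sigma check over hourly failure counts in the search window; it assumes the window holds enough history (a day or more) for the baseline to mean anything:\n\n```spl\nindex=network sourcetype=\"named\" \"DNSSEC\" (\"validation failure\" OR \"SERVFAIL\" OR \"no valid signature\")\n| timechart span=1h count as failures\n| eventstats avg(failures) as avg_f, stdev(failures) as stdev_f\n| where failures > avg_f + 3*stdev_f\n```\n\nThree standard deviations is an illustrative cutoff; widen it if routine zone maintenance still trips the alert.",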
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for unusual DNS patterns so we notice possible attacks, mistakes, or overloaded resolvers before people feel it as slow apps or failed lookups.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  where DNS.reply_code_id=3\n  by DNS.src DNS.query span=1h\n| sort -count",
              "e": [
                "infoblox"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.11",
              "n": "DHCP Lease Duration Analysis",
              "c": "low",
              "f": "intermediate",
              "v": "Short lease durations increase DHCP traffic and scope churn. Long leases waste addresses. Optimizing lease times improves IP management.",
              "t": "Splunk_TA_infoblox, Windows DHCP logs",
              "d": "`sourcetype=infoblox:dhcp`, `sourcetype=DhcpSrvLog`",
              "q": "index=network sourcetype=\"infoblox:dhcp\" \"DHCPACK\"\n| rex \"lease (?<lease_ip>\\d+\\.\\d+\\.\\d+\\.\\d+).*?(?<lease_duration>\\d+)\"\n| stats avg(lease_duration) as avg_lease, count as renewals by subnet\n| eval avg_hours=round(avg_lease/3600,1) | sort -renewals",
              "m": "Collect DHCP server logs. Analyze lease durations per scope. Identify scopes with unusually short leases (frequent renewals) or extremely long leases. Adjust based on network type (guest vs. corporate).",
              "z": "Table (scope, avg lease, renewal count), Bar chart (renewals by scope).",
              "kfp": "DHCP pools may temporarily fill during BYOD events, conference Wi-Fi spikes, large office moves, or right after an IP scope change while devices renew in bulk.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_infoblox, Windows DHCP logs.\n• Ensure the following data sources are available: `sourcetype=infoblox:dhcp`, `sourcetype=DhcpSrvLog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect DHCP server logs. Analyze lease durations per scope. Identify scopes with unusually short leases (frequent renewals) or extremely long leases. Adjust based on network type (guest vs. corporate).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"infoblox:dhcp\" \"DHCPACK\"\n| rex \"lease (?<lease_ip>\\d+\\.\\d+\\.\\d+\\.\\d+).*?(?<lease_duration>\\d+)\"\n| stats avg(lease_duration) as avg_lease, count as renewals by subnet\n| eval avg_hours=round(avg_lease/3600,1) | sort -renewals\n```\n\nUnderstanding this SPL\n\n**DHCP Lease Duration Analysis** — Short lease durations increase DHCP traffic and scope churn. Long leases waste addresses. Optimizing lease times improves IP management.\n\nDocumented **Data sources**: `sourcetype=infoblox:dhcp`, `sourcetype=DhcpSrvLog`. **App/TA** (typical add-on context): Splunk_TA_infoblox, Windows DHCP logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: infoblox:dhcp. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"infoblox:dhcp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by subnet** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (scope, avg lease, renewal count), Bar chart (renewals by scope).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when address pools are filling up or leases look wrong, so new phones and laptops can still get on the network when they need to.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "infoblox"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.12",
              "n": "DNS Query Type Distribution",
              "c": "medium",
              "f": "intermediate",
              "v": "Unusual query type distribution (spikes in TXT, MX, or ANY) can indicate DNS tunneling, reconnaissance, or abuse.",
              "t": "Splunk_TA_infoblox, Splunk Stream",
              "d": "`sourcetype=infoblox:dns`, `sourcetype=stream:dns`",
              "q": "index=network sourcetype=\"stream:dns\"\n| stats count by query_type\n| eventstats sum(count) as total\n| eval pct=round(count/total*100,2) | sort -count\n| head 20",
              "m": "Capture DNS query types via Splunk Stream or DNS server logs. Baseline normal distribution (typically >80% A/AAAA). Alert on abnormal increases in TXT, NULL, or ANY queries.",
              "z": "Pie chart (query type distribution), Timechart (by type), Table.",
              "kfp": "Spikes can come from DNS cache flushes, authorized security or performance monitoring, or very talky clients; compare against change windows and known scanning tools.",
              "refs": "[CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_infoblox, Splunk Stream.\n• Ensure the following data sources are available: `sourcetype=infoblox:dns`, `sourcetype=stream:dns`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture DNS query types via Splunk Stream or DNS server logs. Baseline normal distribution (typically >80% A/AAAA). Alert on abnormal increases in TXT, NULL, or ANY queries.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"stream:dns\"\n| stats count by query_type\n| eventstats sum(count) as total\n| eval pct=round(count/total*100,2) | sort -count\n| head 20\n```\n\nUnderstanding this SPL\n\n**DNS Query Type Distribution** — Unusual query type distribution (spikes in TXT, MX, or ANY) can indicate DNS tunneling, reconnaissance, or abuse.\n\nDocumented **Data sources**: `sourcetype=infoblox:dns`, `sourcetype=stream:dns`. **App/TA** (typical add-on context): Splunk_TA_infoblox, Splunk Stream. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: stream:dns. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"stream:dns\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by query_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.src DNS.query DNS.record_type span=5m\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DNS Query Type Distribution** — Unusual query type distribution (spikes in TXT, MX, or ANY) can indicate DNS tunneling, reconnaissance, or abuse.\n\nDocumented **Data sources**: `sourcetype=infoblox:dns`, `sourcetype=stream:dns`. **App/TA** (typical add-on context): Splunk_TA_infoblox, Splunk Stream. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (query type distribution), Timechart (by type), Table.",
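\n\nOptional variant (hedged sketch): suspicious-type share alert\nThe distribution search above is a report; for the documented alert on TXT, NULL, or ANY increases, tracking those types as a share of all queries is more robust than raw counts. The 5% threshold and 15m span are illustrative; set them from your own baseline (typically >80% A/AAAA):\n\n```spl\nindex=network sourcetype=\"stream:dns\"\n| eval suspicious=if(query_type=\"TXT\" OR query_type=\"NULL\" OR query_type=\"ANY\", 1, 0)\n| timechart span=15m sum(suspicious) as suspicious_q, count as total\n| eval suspicious_pct=round(suspicious_q/total*100,2)\n| where suspicious_pct > 5\n```\n\nPair the alert with the pie chart so analysts can see which record type drove the spike.",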
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for unusual DNS patterns so we notice possible attacks, mistakes, or overloaded resolvers before people feel it as slow apps or failed lookups.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.src DNS.query DNS.record_type span=5m\n| sort -count",
              "e": [
                "infoblox",
                "stream"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.13",
              "n": "Failed DHCP Assignments and IP Pool Exhaustion (Meraki)",
              "c": "high",
              "f": "beginner",
              "v": "Detects DHCP server failures and IP pool exhaustion that prevent new clients from obtaining addresses.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*DHCP*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*DHCP*\" (signature=\"*failure*\" OR signature=\"*NACK*\")\n| stats count as failure_count by ap_name, signature\n| where failure_count > 5\n| sort - failure_count",
              "m": "Monitor syslog for DHCP NACK and failure events. Alert on sustained failure rate.",
              "z": "Table of DHCP failures by AP; time-series showing failure spike; alert dashboard.",
              "kfp": "DHCP pools may temporarily fill during BYOD events, conference Wi-Fi spikes, large office moves, or right after an IP scope change while devices renew in bulk.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*DHCP*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor syslog for DHCP NACK and failure events. Alert on sustained failure rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*DHCP*\" (signature=\"*failure*\" OR signature=\"*NACK*\")\n| stats count as failure_count by ap_name, signature\n| where failure_count > 5\n| sort - failure_count\n```\n\nUnderstanding this SPL\n\n**Failed DHCP Assignments and IP Pool Exhaustion (Meraki)** — Detects DHCP server failures and IP pool exhaustion that prevent new clients from obtaining addresses.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*DHCP*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ap_name, signature** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failure_count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of DHCP failures by AP; time-series showing failure spike; alert dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when address pools are filling up or leases look wrong, so new phones and laptops can still get on the network when they need to.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.14",
              "n": "DNS Resolution Performance and Failures (Meraki)",
              "c": "medium",
              "f": "beginner",
              "v": "Monitors DNS query resolution times and failures to identify misconfiguration or server issues affecting user experience.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*DNS*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*DNS*\" resolution_time=*\n| stats avg(resolution_time) as avg_dns_time, max(resolution_time) as max_dns_time, count by ap_name\n| where avg_dns_time > 100",
              "m": "Extract DNS query timing from syslog events. Set SLA thresholds (e.g., <100ms average).",
              "z": "Gauge showing average DNS time; histogram of query times; slow query detail table.",
              "kfp": "Spikes can come from DNS cache flushes, authorized security or performance monitoring, or very talky clients; compare against change windows and known scanning tools.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*DNS*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExtract DNS query timing from syslog events. Set SLA thresholds (e.g., <100ms average).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*DNS*\" resolution_time=*\n| stats avg(resolution_time) as avg_dns_time, max(resolution_time) as max_dns_time, count by ap_name\n| where avg_dns_time > 100\n```\n\nUnderstanding this SPL\n\n**DNS Resolution Performance and Failures (Meraki)** — Monitors DNS query resolution times and failures to identify misconfiguration or server issues affecting user experience.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*DNS*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ap_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_dns_time > 100` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge showing average DNS time; histogram of query times; slow query detail table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for unusual DNS patterns so we notice possible attacks, mistakes, or overloaded resolvers before people feel it as slow apps or failed lookups.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.15",
              "n": "DHCP Pool Exhaustion and Address Allocation Issues (Meraki)",
              "c": "high",
              "f": "intermediate",
              "v": "Alerts when DHCP pools approach depletion to prevent clients from obtaining IP addresses.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" dhcp_pool=*\n| stats latest(addresses_available) as available_ips, latest(pool_size) as total_pool by vlan_id\n| eval allocation_pct=round((total_pool-available_ips)*100/total_pool, 2)\n| where allocation_pct > 85",
              "m": "Query appliance API for DHCP metrics by VLAN. Alert on >85% allocation.",
              "z": "DHCP pool gauge per VLAN; timeline of pool usage; alert dashboard.",
              "kfp": "DHCP pools may temporarily fill during BYOD events, conference Wi-Fi spikes, large office moves, or right after an IP scope change while devices renew in bulk.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery appliance API for DHCP metrics by VLAN. Alert on >85% allocation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" dhcp_pool=*\n| stats latest(addresses_available) as available_ips, latest(pool_size) as total_pool by vlan_id\n| eval allocation_pct=round((total_pool-available_ips)*100/total_pool, 2)\n| where allocation_pct > 85\n```\n\nUnderstanding this SPL\n\n**DHCP Pool Exhaustion and Address Allocation Issues (Meraki)** — Alerts when DHCP pools approach depletion to prevent clients from obtaining IP addresses.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vlan_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **allocation_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where allocation_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: DHCP pool gauge per VLAN; timeline of pool usage; alert dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when address pools are filling up or leases look wrong, so new phones and laptops can still get on the network when they need to.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.16",
              "n": "DHCP Lease Exhaustion and Scope Utilization",
              "c": "high",
              "f": "beginner",
              "v": "Exhausted DHCP scopes prevent new devices from joining the network. Monitoring utilization and lease count supports proactive scope expansion or cleanup.",
              "t": "Infoblox, Microsoft DHCP, ISC DHCP — scripted input or API",
              "d": "DHCP server logs, lease table export, SNMP (DHCP pool MIB)",
              "q": "index=network sourcetype=dhcp_scope\n| eval used_pct=round(leases_in_use/scope_size*100, 1)\n| stats latest(used_pct) as pct, latest(leases_in_use) as used by scope_name, server\n| where pct > 85\n| table scope_name server used scope_size pct",
              "m": "Poll DHCP server (Infoblox API, Windows WMI, or lease file) for scope size and in-use count. Ingest daily or hourly. Alert when utilization exceeds 85%. Track lease duration and stale lease cleanup.",
              "z": "Gauge per scope, Table (scope, used, size, %), Line chart (utilization trend).",
              "kfp": "DHCP pools may temporarily fill during BYOD events, conference Wi-Fi spikes, large office moves, or right after an IP scope change while devices renew in bulk.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Infoblox, Microsoft DHCP, ISC DHCP — scripted input or API.\n• Ensure the following data sources are available: DHCP server logs, lease table export, SNMP (DHCP pool MIB).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll DHCP server (Infoblox API, Windows WMI, or lease file) for scope size and in-use count. Ingest daily or hourly. Alert when utilization exceeds 85%. Track lease duration and stale lease cleanup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=dhcp_scope\n| eval used_pct=round(leases_in_use/scope_size*100, 1)\n| stats latest(used_pct) as pct, latest(leases_in_use) as used by scope_name, server\n| where pct > 85\n| table scope_name server used scope_size pct\n```\n\nUnderstanding this SPL\n\n**DHCP Lease Exhaustion and Scope Utilization** — Exhausted DHCP scopes prevent new devices from joining the network. Monitoring utilization and lease count supports proactive scope expansion or cleanup.\n\nDocumented **Data sources**: DHCP server logs, lease table export, SNMP (DHCP pool MIB). **App/TA** (typical add-on context): Infoblox, Microsoft DHCP, ISC DHCP — scripted input or API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: dhcp_scope. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=dhcp_scope. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by scope_name, server** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where pct > 85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DHCP Lease Exhaustion and Scope Utilization**): table scope_name server used scope_size pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge per scope, Table (scope, used, size, %), Line chart (utilization trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when address pools are filling up or leases look wrong, so new phones and laptops can still get on the network when they need to.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "infoblox"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.17",
              "n": "DNS Query Latency and Resolution Failure by Resolver",
              "c": "high",
              "f": "intermediate",
              "v": "Slow or failing DNS resolution impacts all applications. Tracking latency and NXDOMAIN/timeout rates per resolver supports capacity and upstream provider decisions.",
              "t": "Custom scripted input (dig, DNS query log), Infoblox/BIND query logs",
              "d": "DNS resolver query logs, synthetic DNS probes",
              "q": "index=network sourcetype=dns_query\n| bin _time span=5m\n| stats avg(response_time_ms) as avg_ms, count(eval(response_code=\"NXDOMAIN\" OR response_code=\"SERVFAIL\")) as failures, count as total by resolver_ip, _time\n| eval fail_rate=round(failures/total*100, 2)\n| where avg_ms > 200 OR fail_rate > 5\n| table resolver_ip avg_ms fail_rate total",
              "m": "Run synthetic DNS probes (e.g. dig to critical domains) from multiple hosts; ingest response time and result. Optionally ingest resolver query logs. Alert when latency exceeds 200ms or failure rate exceeds 5%.",
              "z": "Line chart (latency by resolver), Table (resolver, avg ms, fail rate), Single value (p95 latency).",
              "kfp": "Spikes can come from DNS cache flushes, authorized security or performance monitoring, or very talky clients; compare against change windows and known scanning tools.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (dig, DNS query log), Infoblox/BIND query logs.\n• Ensure the following data sources are available: DNS resolver query logs, synthetic DNS probes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun synthetic DNS probes (e.g. dig to critical domains) from multiple hosts; ingest response time and result. Optionally ingest resolver query logs. Alert when latency exceeds 200ms or failure rate exceeds 5%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=dns_query\n| bin _time span=5m\n| stats avg(response_time_ms) as avg_ms, count(eval(response_code=\"NXDOMAIN\" OR response_code=\"SERVFAIL\")) as failures, count as total by resolver_ip, _time\n| eval fail_rate=round(failures/total*100, 2)\n| where avg_ms > 200 OR fail_rate > 5\n| table resolver_ip avg_ms fail_rate total\n```\n\nUnderstanding this SPL\n\n**DNS Query Latency and Resolution Failure by Resolver** — Slow or failing DNS resolution impacts all applications. Tracking latency and NXDOMAIN/timeout rates per resolver supports capacity and upstream provider decisions.\n\nDocumented **Data sources**: DNS resolver query logs, synthetic DNS probes. **App/TA** (typical add-on context): Custom scripted input (dig, DNS query log), Infoblox/BIND query logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: dns_query. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=dns_query. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by resolver_ip, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_ms > 200 OR fail_rate > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DNS Query Latency and Resolution Failure by Resolver**): table resolver_ip avg_ms fail_rate total\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency by resolver), Table (resolver, avg ms, fail rate), Single value (p95 latency).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for unusual DNS patterns so we notice possible attacks, mistakes, or overloaded resolvers before people feel it as slow apps or failed lookups.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "infoblox"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.18",
              "n": "BlueCat DNS Edge Query Analytics",
              "c": "high",
              "f": "intermediate",
              "v": "BlueCat DNS Edge provides DNS resolution at the edge with centralized policy management. Monitoring query volume, NXDOMAIN rates, and policy-blocked queries from BlueCat complements or replaces bind/Infoblox DNS monitoring for organizations using BlueCat as their DDI platform.",
              "t": "BlueCat DNS Edge syslog, BlueCat Address Manager REST API (custom scripted input or HEC)",
              "d": "`sourcetype=\"bluecat:dns\"`, BlueCat syslog",
              "q": "index=dns sourcetype=\"bluecat:dns\"\n| stats count by query_type, response_code\n| eventstats sum(count) as total\n| eval pct_nxdomain=round(count/total*100,2)\n| where response_code=\"NXDOMAIN\"\n| sort - count",
              "m": "Configure BlueCat DNS Edge service points to forward query logs to Splunk via syslog or HEC. Install props.conf/transforms.conf for BlueCat field extraction. Alert on NXDOMAIN spikes (potential DGA/C2 activity) and query volume anomalies.",
              "z": "Line chart (query volume over time), Pie chart (response code distribution), Table (top NXDOMAIN domains).",
              "kfp": "",
              "refs": "[BlueCat Documentation](https://docs.bluecatnetworks.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BlueCat DNS Edge syslog, BlueCat Address Manager REST API (custom scripted input or HEC).\n• Ensure the following data sources are available: `sourcetype=\"bluecat:dns\"`, BlueCat syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure BlueCat DNS Edge service points to forward query logs to Splunk via syslog or HEC. Install props.conf/transforms.conf for BlueCat field extraction. Alert on NXDOMAIN spikes (potential DGA/C2 activity) and query volume anomalies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns sourcetype=\"bluecat:dns\"\n| stats count by query_type, response_code\n| eventstats sum(count) as total\n| eval pct_nxdomain=round(count/total*100,2)\n| where response_code=\"NXDOMAIN\"\n| sort - count\n```\n\nUnderstanding this SPL\n\n**BlueCat DNS Edge Query Analytics** — BlueCat DNS Edge provides DNS resolution at the edge with centralized policy management. Monitoring query volume, NXDOMAIN rates, and policy-blocked queries from BlueCat complements or replaces bind/Infoblox DNS monitoring for organizations using BlueCat as their DDI platform.\n\nDocumented **Data sources**: `sourcetype=\"bluecat:dns\"`, BlueCat syslog. **App/TA** (typical add-on context): BlueCat DNS Edge syslog, BlueCat Address Manager REST API (custom scripted input or HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns; **sourcetype**: bluecat:dns. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns, sourcetype=\"bluecat:dns\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by query_type, response_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **pct_nxdomain** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where response_code=\"NXDOMAIN\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (query volume over time), Pie chart (response code distribution), Table (top NXDOMAIN domains).",
              "script": "",
              "premium": "",
              "hw": "BlueCat DNS Edge service points, BlueCat Address Manager (BAM), BlueCat DNS/DHCP Server (BDDS)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "Network_Resolution"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.6.19",
              "n": "BlueCat DHCP Lease Utilization and Scope Health",
              "c": "high",
              "f": "intermediate",
              "v": "BlueCat DHCP servers manage IP address allocation for enterprise networks. Monitoring scope utilization, lease churn, and assignment failures from BlueCat prevents address exhaustion and enables proactive capacity planning for organizations using BlueCat DDI.",
              "t": "BlueCat BDDS syslog, BlueCat Address Manager REST API",
              "d": "`sourcetype=\"bluecat:dhcp\"`, BlueCat syslog",
              "q": "index=dhcp sourcetype=\"bluecat:dhcp\"\n| stats count(eval(action=\"DHCPACK\")) as acks, count(eval(action=\"DHCPNAK\")) as naks, dc(client_mac) as unique_clients by scope\n| eval failure_rate=round(naks/(acks+naks)*100,2)\n| where failure_rate > 5 OR acks < 10\n| sort - failure_rate",
              "m": "Configure BlueCat BDDS to forward DHCP logs to Splunk via syslog. Poll BlueCat Address Manager API for scope utilization data. Alert when any scope exceeds 85% utilization or failure rates spike above 5%.",
              "z": "Gauge (scope utilization %), Table (scope health by network), Line chart (lease churn over time).",
              "kfp": "",
              "refs": "[BlueCat Documentation](https://docs.bluecatnetworks.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BlueCat BDDS syslog, BlueCat Address Manager REST API.\n• Ensure the following data sources are available: `sourcetype=\"bluecat:dhcp\"`, BlueCat syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure BlueCat BDDS to forward DHCP logs to Splunk via syslog. Poll BlueCat Address Manager API for scope utilization data. Alert when any scope exceeds 85% utilization or failure rates spike above 5%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dhcp sourcetype=\"bluecat:dhcp\"\n| stats count(eval(action=\"DHCPACK\")) as acks, count(eval(action=\"DHCPNAK\")) as naks, dc(client_mac) as unique_clients by scope\n| eval failure_rate=round(naks/(acks+naks)*100,2)\n| where failure_rate > 5 OR acks < 10\n| sort - failure_rate\n```\n\nUnderstanding this SPL\n\n**BlueCat DHCP Lease Utilization and Scope Health** — BlueCat DHCP servers manage IP address allocation for enterprise networks. Monitoring scope utilization, lease churn, and assignment failures from BlueCat prevents address exhaustion and enables proactive capacity planning for organizations using BlueCat DDI.\n\nDocumented **Data sources**: `sourcetype=\"bluecat:dhcp\"`, BlueCat syslog. **App/TA** (typical add-on context): BlueCat BDDS syslog, BlueCat Address Manager REST API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dhcp; **sourcetype**: bluecat:dhcp. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dhcp, sourcetype=\"bluecat:dhcp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by scope** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **failure_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failure_rate > 5 OR acks < 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (scope utilization %), Table (scope health by network), Line chart (lease churn over time).",
              "script": "",
              "premium": "",
              "hw": "BlueCat DNS/DHCP Server (BDDS), BlueCat Address Manager (BAM)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Capacity",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.6,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 19,
            "none": 0
          }
        },
        {
          "i": "5.7",
          "n": "Network Flow Data",
          "u": [
            {
              "i": "5.7.1",
              "n": "Top Talkers Analysis",
              "c": "medium",
              "f": "beginner",
              "v": "Identifies top bandwidth consumers. Essential for troubleshooting congestion and capacity planning.",
              "t": "Splunk Add-on for NetFlow",
              "d": "`sourcetype=netflow`, sFlow, IPFIX",
              "q": "index=netflow\n| stats sum(bytes) as total_bytes by src, dest\n| sort -total_bytes | head 20\n| eval total_GB=round(total_bytes/1073741824,2)",
              "m": "Export NetFlow from routers/switches to a NetFlow collector that forwards to Splunk. Install NetFlow TA for field parsing.",
              "z": "Table (source, dest, bytes), Sankey diagram, Bar chart.",
              "kfp": "Traffic spikes during backup jobs, large file transfers, or video streaming events can vault hosts to the top of the list with no security issue; tune with baselines and business-hour context.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for NetFlow.\n• Ensure the following data sources are available: `sourcetype=netflow`, sFlow, IPFIX.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport NetFlow from routers/switches to a NetFlow collector that forwards to Splunk. Install NetFlow TA for field parsing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netflow\n| stats sum(bytes) as total_bytes by src, dest\n| sort -total_bytes | head 20\n| eval total_GB=round(total_bytes/1073741824,2)\n```\n\nUnderstanding this SPL\n\n**Top Talkers Analysis** — Identifies top bandwidth consumers. Essential for troubleshooting congestion and capacity planning.\n\nDocumented **Data sources**: `sourcetype=netflow`, sFlow, IPFIX. **App/TA** (typical add-on context): Splunk Add-on for NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netflow.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netflow. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• `eval` defines or adjusts **total_GB** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Top Talkers Analysis** — Identifies top bandwidth consumers. Essential for troubleshooting congestion and capacity planning.\n\nDocumented **Data sources**: `sourcetype=netflow`, sFlow, IPFIX. **App/TA** (typical add-on context): Splunk Add-on for NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (source, dest, bytes), Sankey diagram, Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see who is using the most internet so you can find congestion and plan more capacity before video calls and apps start failing.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "netflow"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.7.2",
              "n": "Anomalous Traffic Patterns",
              "c": "high",
              "f": "intermediate",
              "v": "Unusual flows (new protocols, unexpected destinations) indicate compromise, misconfiguration, or shadow IT.",
              "t": "Splunk Add-on for NetFlow",
              "d": "`sourcetype=netflow`",
              "q": "index=netflow\n| stats dc(dest_port) as unique_ports, dc(dest) as unique_dests by src\n| where unique_ports > 100 OR unique_dests > 500\n| sort -unique_ports",
              "m": "Baseline normal flow patterns over 30 days. Alert on new protocol/port combinations, new external destinations, or unusual volume patterns.",
              "z": "Table, Scatter plot (ports vs. destinations), Timechart.",
              "kfp": "Vulnerability scans, content delivery, and legitimate many-destination services can look like reconnaissance. Traffic spikes during backup jobs, large file transfers, or video streaming also inflate diversity; baseline per role and site.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for NetFlow.\n• Ensure the following data sources are available: `sourcetype=netflow`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline normal flow patterns over 30 days. Alert on new protocol/port combinations, new external destinations, or unusual volume patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netflow\n| stats dc(dest_port) as unique_ports, dc(dest) as unique_dests by src\n| where unique_ports > 100 OR unique_dests > 500\n| sort -unique_ports\n```\n\nUnderstanding this SPL\n\n**Anomalous Traffic Patterns** — Unusual flows (new protocols, unexpected destinations) indicate compromise, misconfiguration, or shadow IT.\n\nDocumented **Data sources**: `sourcetype=netflow`. **App/TA** (typical add-on context): Splunk Add-on for NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netflow.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netflow. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_ports > 100 OR unique_dests > 500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Anomalous Traffic Patterns** — Unusual flows (new protocols, unexpected destinations) indicate compromise, misconfiguration, or shadow IT.\n\nDocumented **Data sources**: `sourcetype=netflow`. **App/TA** (typical add-on context): Splunk Add-on for NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Scatter plot (ports vs. destinations), Timechart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you notice when a computer suddenly talks to many new places or ports, which can be a break-in, a misconfiguration, or just an unusual but harmless app run.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "netflow"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.7.3",
              "n": "Bandwidth by Application",
              "c": "medium",
              "f": "beginner",
              "v": "Application-level bandwidth breakdown helps prioritize QoS policies and justify network upgrades.",
              "t": "Splunk Add-on for NetFlow (with NBAR)",
              "d": "NetFlow with application identification",
              "q": "index=netflow\n| stats sum(bytes) as total_bytes by application\n| sort -total_bytes | head 20 | eval GB=round(total_bytes/1073741824,2)",
              "m": "Enable NBAR (Network-Based Application Recognition) on Cisco routers to export application-tagged NetFlow. Ingest in Splunk.",
              "z": "Pie chart (bandwidth by app), Bar chart, Table.",
              "kfp": "Traffic spikes during backup jobs, large file transfers, or video streaming events can make one app dominate without a fault; NBAR and exporter labels can also shift after firmware or app updates.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for NetFlow (with NBAR).\n• Ensure the following data sources are available: NetFlow with application identification.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable NBAR (Network-Based Application Recognition) on Cisco routers to export application-tagged NetFlow. Ingest in Splunk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netflow\n| stats sum(bytes) as total_bytes by application\n| sort -total_bytes | head 20 | eval GB=round(total_bytes/1073741824,2)\n```\n\nUnderstanding this SPL\n\n**Bandwidth by Application** — Application-level bandwidth breakdown helps prioritize QoS policies and justify network upgrades.\n\nDocumented **Data sources**: NetFlow with application identification. **App/TA** (typical add-on context): Splunk Add-on for NetFlow (with NBAR). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netflow.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netflow. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by application** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• `eval` defines or adjusts **GB** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Bandwidth by Application** — Application-level bandwidth breakdown helps prioritize QoS policies and justify network upgrades.\n\nDocumented **Data sources**: NetFlow with application identification. **App/TA** (typical add-on context): Splunk Add-on for NetFlow (with NBAR). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (bandwidth by app), Bar chart, Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which apps are eating the most bandwidth so you can pick what to protect with quality-of-service and what to upgrade on the link.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "netflow"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.7.4",
              "n": "East-West Traffic Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Lateral traffic between internal segments reveals application dependencies and detects lateral movement.",
              "t": "Splunk Add-on for NetFlow",
              "d": "NetFlow from internal segments",
              "q": "index=netflow\n| where cidrmatch(\"10.0.0.0/8\",src) AND cidrmatch(\"10.0.0.0/8\",dest)\n| stats sum(bytes) as bytes, count as flows by src, dest, dest_port\n| sort -bytes | head 50",
              "m": "Export NetFlow from internal router/switch interfaces. Analyze internal traffic patterns. Establish baseline for anomaly detection.",
              "z": "Chord diagram, Table, Sankey diagram.",
              "kfp": "Backup, replication, and large VM migrations can dominate east-west without being threats; adjust the RFC1918 CIDRs to your real internal ranges. Traffic spikes during backup jobs, large file transfers, or video streaming can look like lateral bulk moves.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for NetFlow.\n• Ensure the following data sources are available: NetFlow from internal segments.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport NetFlow from internal router/switch interfaces. Analyze internal traffic patterns. Establish baseline for anomaly detection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netflow\n| where cidrmatch(\"10.0.0.0/8\",src) AND cidrmatch(\"10.0.0.0/8\",dest)\n| stats sum(bytes) as bytes, count as flows by src, dest, dest_port\n| sort -bytes | head 50\n```\n\nUnderstanding this SPL\n\n**East-West Traffic Monitoring** — Lateral traffic between internal segments reveals application dependencies and detects lateral movement.\n\nDocumented **Data sources**: NetFlow from internal segments. **App/TA** (typical add-on context): Splunk Add-on for NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netflow.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netflow. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where cidrmatch(\"10.0.0.0/8\",src) AND cidrmatch(\"10.0.0.0/8\",dest)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**East-West Traffic Monitoring** — Lateral traffic between internal segments reveals application dependencies and detects lateral movement.\n\nDocumented **Data sources**: NetFlow from internal segments. **App/TA** (typical add-on context): Splunk Add-on for NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Chord diagram, Table, Sankey diagram.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see traffic moving inside the company network, not just to the internet, so you can spot odd sideways movement or overloaded links between data centers and offices.",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "netflow"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.7.5",
              "n": "Data Exfiltration Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Unusually large outbound transfers to uncommon destinations may be data theft.",
              "t": "Splunk Add-on for NetFlow",
              "d": "NetFlow",
              "q": "index=netflow direction=\"outbound\"\n| stats sum(bytes) as total_bytes by src, dest\n| where total_bytes > 1073741824\n| lookup known_destinations dest OUTPUT known\n| where isnull(known)\n| sort -total_bytes",
              "m": "Baseline normal outbound transfer volumes per host. Alert when transfers exceed threshold to unknown destinations. Correlate with DNS and firewall logs.",
              "z": "Table, Bar chart, Map (destination GeoIP).",
              "kfp": "Off-site backup, video uploads, and cloud sync can be huge and legitimate; maintain a `known_destinations` lookup and tune thresholds by site. Traffic spikes during backup jobs, large file transfers, or video streaming are common false leads.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for NetFlow.\n• Ensure the following data sources are available: NetFlow.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline normal outbound transfer volumes per host. Alert when transfers exceed threshold to unknown destinations. Correlate with DNS and firewall logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netflow direction=\"outbound\"\n| stats sum(bytes) as total_bytes by src, dest\n| where total_bytes > 1073741824\n| lookup known_destinations dest OUTPUT known\n| where isnull(known)\n| sort -total_bytes\n```\n\nUnderstanding this SPL\n\n**Data Exfiltration Detection** — Unusually large outbound transfers to uncommon destinations may be data theft.\n\nDocumented **Data sources**: NetFlow. **App/TA** (typical add-on context): Splunk Add-on for NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netflow.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netflow. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where total_bytes > 1073741824` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(known)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Data Exfiltration Detection** — Unusually large outbound transfers to uncommon destinations may be data theft.\n\nDocumented **Data sources**: NetFlow. **App/TA** (typical add-on context): Splunk Add-on for NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar chart, Map (destination GeoIP).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you catch when someone is sending a huge amount of data to an odd place, which can mean a theft attempt or a lost laptop backing up the wrong way.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "netflow"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.7.6",
              "n": "Port Scan Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Hosts scanning many ports on targets indicate reconnaissance, worm propagation, or vulnerability scanning.",
              "t": "Splunk Add-on for NetFlow",
              "d": "NetFlow",
              "q": "index=netflow\n| stats dc(dest_port) as unique_ports by src, dest\n| where unique_ports > 50\n| sort -unique_ports",
              "m": "Detect hosts connecting to >50 unique ports on a single target in 5 minutes. Alert with source and target details.",
              "z": "Table, Scatter plot, Timeline.",
              "kfp": "Load balancers, health checks, and some SaaS clients hit many ports on one target legitimately. Traffic spikes during backup jobs, large file transfers, or video streaming can also add port variety on noisy hosts; baseline scanners and NMS ranges.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for NetFlow.\n• Ensure the following data sources are available: NetFlow.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDetect hosts connecting to >50 unique ports on a single target in 5 minutes. Alert with source and target details.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netflow\n| stats dc(dest_port) as unique_ports by src, dest\n| where unique_ports > 50\n| sort -unique_ports\n```\n\nUnderstanding this SPL\n\n**Port Scan Detection** — Hosts scanning many ports on targets indicate reconnaissance, worm propagation, or vulnerability scanning.\n\nDocumented **Data sources**: NetFlow. **App/TA** (typical add-on context): Splunk Add-on for NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netflow.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netflow. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_ports > 50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Port Scan Detection** — Hosts scanning many ports on targets indicate reconnaissance, worm propagation, or vulnerability scanning.\n\nDocumented **Data sources**: NetFlow. **App/TA** (typical add-on context): Splunk Add-on for NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Scatter plot, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you spot when one machine tries a huge list of door numbers on another, which is often someone probing the network for a way in—before they find one that works.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "netflow"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.7.7",
              "n": "Protocol Distribution Analysis",
              "c": "medium",
              "f": "intermediate",
              "v": "Understanding protocol mix helps validate network policies and detect unauthorized protocols (e.g., unexpected SSH, RDP, or P2P traffic).",
              "t": "Splunk Stream, NetFlow integrator",
              "d": "`sourcetype=netflow`, `sourcetype=stream:tcp`",
              "q": "index=network sourcetype=\"netflow\"\n| lookup service_lookup dest_port OUTPUT service_name\n| stats sum(bytes) as total_bytes dc(src) as unique_sources by protocol, service_name\n| eval GB=round(total_bytes/1073741824,2) | sort -total_bytes\n| head 20",
              "m": "Collect NetFlow/sFlow/IPFIX from routers and switches. Map port numbers to service names via lookup. Baseline protocol distribution. Alert on new protocols or significant shifts.",
              "z": "Pie chart (by protocol), Treemap (by service + volume), Timechart.",
              "kfp": "Traffic spikes during backup jobs, large file transfers, or video streaming events shift the protocol mix without an attack; updates to apps and NBAR signatures also change the pie chart in harmless ways.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Stream, NetFlow integrator.\n• Ensure the following data sources are available: `sourcetype=netflow`, `sourcetype=stream:tcp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect NetFlow/sFlow/IPFIX from routers and switches. Map port numbers to service names via lookup. Baseline protocol distribution. Alert on new protocols or significant shifts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"netflow\"\n| lookup service_lookup dest_port OUTPUT service_name\n| stats sum(bytes) as total_bytes dc(src) as unique_sources by protocol, service_name\n| eval GB=round(total_bytes/1073741824,2) | sort -total_bytes\n| head 20\n```\n\nUnderstanding this SPL\n\n**Protocol Distribution Analysis** — Understanding protocol mix helps validate network policies and detect unauthorized protocols (e.g., unexpected SSH, RDP, or P2P traffic).\n\nDocumented **Data sources**: `sourcetype=netflow`, `sourcetype=stream:tcp`. **App/TA** (typical add-on context): Splunk Stream, NetFlow integrator. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: netflow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"netflow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by protocol, service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **GB** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Protocol Distribution Analysis** — Understanding protocol mix helps validate network policies and detect unauthorized protocols (e.g., unexpected SSH, RDP, or P2P traffic).\n\nDocumented **Data sources**: `sourcetype=netflow`, `sourcetype=stream:tcp`. **App/TA** (typical add-on context): Splunk Stream, NetFlow integrator. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (by protocol), Treemap (by service + volume), Timechart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see what kinds of network traffic (web, video, file share, and so on) you are carrying so you can set fair rules and spot odd mixes that do not match policy.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "netflow",
                "stream"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.7.8",
              "n": "Multicast Traffic Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Uncontrolled multicast traffic floods switches and consumes bandwidth. Monitoring ensures multicast storms are detected before impacting unicast traffic.",
              "t": "Splunk Stream, NetFlow integrator",
              "d": "`sourcetype=netflow`, `sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"netflow\" dest=\"224.0.0.0/4\"\n| stats sum(bytes) as total_bytes, dc(src) as sources by dest\n| eval MB=round(total_bytes/1048576,1) | sort -total_bytes\n| head 20",
              "m": "Enable NetFlow on core/distribution switches. Filter for multicast destination range (224.0.0.0/4). Baseline expected multicast groups. Alert on new or high-volume groups.",
              "z": "Table (multicast group, volume, sources), Timechart (multicast volume), Bar chart.",
              "kfp": "IPTV, trading floors, and imaging can legitimately use heavy multicast. Traffic spikes during backup jobs, large file transfers, or video streaming (including multicast video) are often normal; baseline known groups and PIM changes.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Stream, NetFlow integrator.\n• Ensure the following data sources are available: `sourcetype=netflow`, `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable NetFlow on core/distribution switches. Filter for multicast destination range (224.0.0.0/4). Baseline expected multicast groups. Alert on new or high-volume groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"netflow\" dest=\"224.0.0.0/4\"\n| stats sum(bytes) as total_bytes, dc(src) as sources by dest\n| eval MB=round(total_bytes/1048576,1) | sort -total_bytes\n| head 20\n```\n\nUnderstanding this SPL\n\n**Multicast Traffic Monitoring** — Uncontrolled multicast traffic floods switches and consumes bandwidth. Monitoring ensures multicast storms are detected before impacting unicast traffic.\n\nDocumented **Data sources**: `sourcetype=netflow`, `sourcetype=cisco:ios`. **App/TA** (typical add-on context): Splunk Stream, NetFlow integrator. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: netflow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"netflow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **MB** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Multicast Traffic Monitoring** — Uncontrolled multicast traffic floods switches and consumes bandwidth. Monitoring ensures multicast storms are detected before impacting unicast traffic.\n\nDocumented **Data sources**: `sourcetype=netflow`, `sourcetype=cisco:ios`. **App/TA** (typical add-on context): Splunk Stream, NetFlow integrator. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (multicast group, volume, sources), Timechart (multicast volume), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when one-to-many network traffic (like video or some apps) is flooding links so you can fix it before normal calls and web traffic slow down.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "netflow",
                "stream"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.7.9",
              "n": "Unauthorized VLAN Traffic Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Traffic originating from or destined to unauthorized VLANs indicates misconfigured switch ports, VLAN hopping attacks, or rogue devices.",
              "t": "Splunk Stream, NetFlow integrator",
              "d": "`sourcetype=netflow`, `sourcetype=cisco:ios`",
              "q": "index=network sourcetype=\"netflow\"\n| lookup vlan_authorization_lookup src_vlan OUTPUT authorized\n| where authorized!=\"yes\" OR isnull(authorized)\n| stats sum(bytes) as bytes, dc(src) as unique_hosts by src_vlan, input_interface\n| sort -bytes",
              "m": "Map flow data to VLANs via input interface. Maintain a lookup of authorized VLANs per port. Alert on traffic from unauthorized VLANs. Correlate with 802.1X status.",
              "z": "Table (VLAN, interface, hosts, volume), Alert panel, Status grid.",
              "kfp": "Span changes, trunks, and temporary moves can put traffic on unexpected VLANs during maintenance. Traffic spikes during backup jobs, large file transfers, or video streaming on those VLANs are usually operational, not policy violations.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Stream, NetFlow integrator.\n• Ensure the following data sources are available: `sourcetype=netflow`, `sourcetype=cisco:ios`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap flow data to VLANs via input interface. Maintain a lookup of authorized VLANs per port. Alert on traffic from unauthorized VLANs. Correlate with 802.1X status.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"netflow\"\n| lookup vlan_authorization_lookup src_vlan OUTPUT authorized\n| where authorized!=\"yes\" OR isnull(authorized)\n| stats sum(bytes) as bytes, dc(src) as unique_hosts by src_vlan, input_interface\n| sort -bytes\n```\n\nUnderstanding this SPL\n\n**Unauthorized VLAN Traffic Detection** — Traffic originating from or destined to unauthorized VLANs indicates misconfigured switch ports, VLAN hopping attacks, or rogue devices.\n\nDocumented **Data sources**: `sourcetype=netflow`, `sourcetype=cisco:ios`. **App/TA** (typical add-on context): Splunk Stream, NetFlow integrator. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: netflow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"netflow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where authorized!=\"yes\" OR isnull(authorized)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_vlan, input_interface** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Unauthorized VLAN Traffic Detection** — Traffic originating from or destined to unauthorized VLANs indicates misconfigured switch ports, VLAN hopping attacks, or rogue devices.\n\nDocumented **Data sources**: `sourcetype=netflow`, `sourcetype=cisco:ios`. **App/TA** (typical add-on context): Splunk Stream, NetFlow integrator. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VLAN, interface, hosts, volume), Alert panel, Status grid.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when data is crossing virtual network lines it should not cross—like a cable plugged into the wrong room—which can be a simple mistake or someone trying to sneak in.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "netflow",
                "stream"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.7.10",
              "n": "Long-Duration Flow Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Extremely long-lived flows may indicate data exfiltration, persistent backdoors, or stuck sessions consuming resources.",
              "t": "Splunk Stream, NetFlow integrator",
              "d": "`sourcetype=netflow`",
              "q": "index=network sourcetype=\"netflow\"\n| eval duration_min=duration/60\n| where duration_min > 60\n| stats sum(bytes) as total_bytes, max(duration_min) as max_duration by src, dest, dest_port\n| eval GB=round(total_bytes/1073741824,2) | sort -max_duration\n| head 20",
              "m": "Analyze flow records for duration >60 minutes. Cross-reference with known long-lived services (VPN, database replication). Flag unknown long flows for investigation.",
              "z": "Table (source, destination, port, duration, bytes), Scatter plot (duration vs. bytes).",
              "kfp": "VPN, DB replication, and terminal sessions can run for hours. Traffic spikes during backup jobs, large file transfers, or video streaming are often long; allowlist those peers. Flow aggregation can also stretch or truncate duration depending on the exporter.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Stream, NetFlow integrator.\n• Ensure the following data sources are available: `sourcetype=netflow`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAnalyze flow records for duration >60 minutes. Cross-reference with known long-lived services (VPN, database replication). Flag unknown long flows for investigation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"netflow\"\n| eval duration_min=duration/60\n| where duration_min > 60\n| stats sum(bytes) as total_bytes, max(duration_min) as max_duration by src, dest, dest_port\n| eval GB=round(total_bytes/1073741824,2) | sort -max_duration\n| head 20\n```\n\nUnderstanding this SPL\n\n**Long-Duration Flow Detection** — Extremely long-lived flows may indicate data exfiltration, persistent backdoors, or stuck sessions consuming resources.\n\nDocumented **Data sources**: `sourcetype=netflow`. **App/TA** (typical add-on context): Splunk Stream, NetFlow integrator. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: netflow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"netflow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where duration_min > 60` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **GB** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Long-Duration Flow Detection** — Extremely long-lived flows may indicate data exfiltration, persistent backdoors, or stuck sessions consuming resources.\n\nDocumented **Data sources**: `sourcetype=netflow`. **App/TA** (typical add-on context): Splunk Stream, NetFlow integrator. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (source, destination, port, duration, bytes), Scatter plot (duration vs. bytes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see connections that stay open a very long time, which is normal for some apps but can also be a long leak of data or a backdoor to watch for.",
              "mtype": [
                "Anomaly",
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "netflow",
                "stream"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.7.11",
              "n": "Zeek (Bro) Connection Log Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Zeek (formerly Bro) generates structured connection, DNS, HTTP, SSL, and file logs from network packet captures. Zeek conn.log provides rich metadata (duration, bytes, connection state) for threat hunting, anomaly detection, and network forensics beyond what NetFlow alone provides.",
              "t": "Splunk Add-on for Zeek (`TA-bro`, Splunkbase 3882), `Splunk_TA_zeek`",
              "d": "`sourcetype=\"bro:conn:json\"` or `sourcetype=\"zeek:json\"`",
              "q": "index=zeek sourcetype=\"bro:conn:json\"\n| where duration > 3600 OR orig_bytes > 1073741824\n| stats count, sum(orig_bytes) as total_bytes, avg(duration) as avg_duration by id_orig_h, id_resp_h, id_resp_p\n| sort - total_bytes",
              "m": "Deploy Zeek sensors on SPAN/TAP ports at network boundaries. Forward Zeek JSON logs to Splunk via syslog or file monitor. Install TA-bro for field extraction and CIM mapping. Use conn.log for long-duration and high-volume connection detection; dns.log for DNS analytics; ssl.log for certificate monitoring.",
              "z": "Table (top connections by bytes), Timeline (long-duration connections), Bar chart (connection states).",
              "kfp": "Short maintenance windows, SNMP polling storms, or intermittent routing asymmetry can suppress samples without a faulty collector. Agents that export only counter samples may lack `sampling_rate` on every record. Burst traffic can legitimately change effective sampling when hardware buffers fill.",
              "refs": "[Splunkbase app 3882](https://splunkbase.splunk.com/app/3882), [Zeek documentation](https://docs.zeek.org/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Zeek (`TA-bro`, Splunkbase 3882), `Splunk_TA_zeek`.\n• Ensure the following data sources are available: `sourcetype=\"bro:conn:json\"` or `sourcetype=\"zeek:json\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Zeek sensors on SPAN/TAP ports at network boundaries. Forward Zeek JSON logs to Splunk via syslog or file monitor. Install TA-bro for field extraction and CIM mapping. Use conn.log for long-duration and high-volume connection detection; dns.log for DNS analytics; ssl.log for certificate monitoring.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zeek sourcetype=\"bro:conn:json\"\n| where duration > 3600 OR orig_bytes > 1073741824\n| stats count, sum(orig_bytes) as total_bytes, avg(duration) as avg_duration by id_orig_h, id_resp_h, id_resp_p\n| sort - total_bytes\n```\n\nUnderstanding this SPL\n\n**Zeek (Bro) Connection Log Analysis** — Zeek (formerly Bro) generates structured connection, DNS, HTTP, SSL, and file logs from network packet captures. Zeek conn.log provides rich metadata (duration, bytes, connection state) for threat hunting, anomaly detection, and network forensics beyond what NetFlow alone provides.\n\nDocumented **Data sources**: `sourcetype=\"bro:conn:json\"` or `sourcetype=\"zeek:json\"`. **App/TA** (typical add-on context): Splunk Add-on for Zeek (`TA-bro`, Splunkbase 3882), `Splunk_TA_zeek`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zeek; **sourcetype**: bro:conn:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zeek, sourcetype=\"bro:conn:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where duration > 3600 OR orig_bytes > 1073741824` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by id_orig_h, id_resp_h, id_resp_p** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top connections by bytes), Timeline (long-duration connections), Bar chart (connection states).",
              "script": "",
              "premium": "",
              "hw": "Zeek sensor appliances, Corelight sensors, any server running Zeek on SPAN/TAP",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-03",
              "sver": "",
              "rby": "",
              "ge": "We watch whether our traffic sampling stays steady over time. When the equipment quietly changes how much it samples or stops sending numbers, we notice quickly so our charts and alarms stay honest.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.7.12",
              "n": "SPAN/TAP Port and Packet Broker Health",
              "c": "high",
              "f": "intermediate",
              "v": "Network visibility infrastructure (SPAN sessions, TAP aggregators, packet brokers like Gigamon, Keysight, Ixia) silently fails when ports go down, filters misconfigure, or buffers overflow. Monitoring the health of the visibility layer ensures that downstream tools (Zeek, Suricata, DLP) receive complete traffic copies.",
              "t": "`TA-cisco_ios` (for SPAN sessions), Gigamon GigaVUE syslog, Keysight Vision Edge syslog",
              "d": "`sourcetype=cisco:ios`, `sourcetype=\"gigamon:syslog\"`, SNMP",
              "q": "index=network sourcetype=\"cisco:ios\" \"%SPAN-6-SESSION_MOD\" OR \"%SPAN-5-SESSION_DEL\"\n| stats count by host, _raw\n| sort - count",
              "m": "Monitor SPAN session status via syslog on Cisco switches. For Gigamon/Keysight packet brokers, forward appliance syslog to Splunk and monitor port utilization, filter match rates, and buffer overflow counters. Alert on SPAN session removal, packet broker port DOWN, or sustained buffer overflow.",
              "z": "Status grid (SPAN/TAP port status), Table (broker health events), Line chart (buffer utilization).",
              "kfp": "Midnight software updates rotate protocol packs and temporarily skew counts. Split tunnels and Secure Web Gateway hairpins can duplicate flows. Some records show aggregate \"web\" labels that hide fine-grained SaaS usage.",
              "refs": "[Gigamon documentation](https://docs.gigamon.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios` (for SPAN sessions), Gigamon GigaVUE syslog, Keysight Vision Edge syslog.\n• Ensure the following data sources are available: `sourcetype=cisco:ios`, `sourcetype=\"gigamon:syslog\"`, SNMP.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor SPAN session status via syslog on Cisco switches. For Gigamon/Keysight packet brokers, forward appliance syslog to Splunk and monitor port utilization, filter match rates, and buffer overflow counters. Alert on SPAN session removal, packet broker port DOWN, or sustained buffer overflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ios\" \"%SPAN-6-SESSION_MOD\" OR \"%SPAN-5-SESSION_DEL\"\n| stats count by host, _raw\n| sort - count\n```\n\nUnderstanding this SPL\n\n**SPAN/TAP Port and Packet Broker Health** — Network visibility infrastructure (SPAN sessions, TAP aggregators, packet brokers like Gigamon, Keysight, Ixia) silently fails when ports go down, filters misconfigure, or buffers overflow. Monitoring the health of the visibility layer ensures that downstream tools (Zeek, Suricata, DLP) receive complete traffic copies.\n\nDocumented **Data sources**: `sourcetype=cisco:ios`, `sourcetype=\"gigamon:syslog\"`, SNMP. **App/TA** (typical add-on context): `TA-cisco_ios` (for SPAN sessions), Gigamon GigaVUE syslog, Keysight Vision Edge syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, _raw** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (SPAN/TAP port status), Table (broker health events), Line chart (buffer utilization).",
              "script": "",
              "premium": "",
              "hw": "Gigamon GigaVUE-HC1, HC2, HC3, TA25, TA100, TA200, TA400; Keysight Vision Edge OS; Cisco Catalyst SPAN/RSPAN/ERSPAN",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-03",
              "sver": "",
              "rby": "",
              "ge": "We read the network equipment’s own labels for what each conversation is for. That helps us rank heavy uses fairly and spot mystery labels that need a human name before we trust the picture.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.2,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 12,
            "none": 0
          }
        },
        {
          "i": "5.8",
          "n": "Network Management Platforms",
          "u": [
            {
              "i": "5.8.1",
              "n": "DNA Center Assurance Alerts (Cisco Catalyst Center)",
              "c": "medium",
              "f": "beginner",
              "v": "DNA Center provides AI/ML-driven network issue detection. Centralizing in Splunk enables cross-domain correlation.",
              "t": "`Cisco Catalyst Add-on for Splunk` (Splunkbase 7538)",
              "d": "DNA Center API (issues, events)",
              "q": "index=network sourcetype=\"cisco:dnac:issues\"\n| stats count by priority, category, name | sort -priority -count",
              "m": "Configure DNA Center API integration in Splunk TA. Poll for issues and client health. Alert on P1/P2 issues.",
              "z": "Table (issue, priority, category), Bar chart, Single value.",
              "kfp": "Planned Assurance recalibrations, lab controllers, and polling delays after upgrades can look like new issues. Compare any spike to the Catalyst Center Assurance / Issues UI before you page someone.",
              "refs": "[Splunkbase app 7538](https://splunkbase.splunk.com/app/7538)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538).\n• Ensure the following data sources are available: DNA Center API (issues, events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure DNA Center API integration in Splunk TA. Poll for issues and client health. Alert on P1/P2 issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:dnac:issues\"\n| stats count by priority, category, name | sort -priority -count\n```\n\nUnderstanding this SPL\n\n**DNA Center Assurance Alerts (Cisco Catalyst Center)** — DNA Center provides AI/ML-driven network issue detection. Centralizing in Splunk enables cross-domain correlation.\n\nDocumented **Data sources**: DNA Center API (issues, events). **App/TA** (typical add-on context): `Cisco Catalyst Add-on for Splunk` (Splunkbase 7538). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:dnac:issues. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:dnac:issues\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by priority, category, name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (issue, priority, category), Bar chart, Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when Cisco's management system flags real problems and priorities on the network, so you can act before the trouble spreads to the apps and phones we all use.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_sdwan"
              ],
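              "ta_link": {
                "name": "Cisco Catalyst Add-on for Splunk",
                "id": 7538,
                "url": "https://splunkbase.splunk.com/app/7538"
              },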
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.2",
              "n": "Meraki Organization Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks Meraki device status across all networks and organizations from a single pane.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "Meraki Dashboard API, syslog",
              "q": "index=network sourcetype=\"meraki:api\"\n| stats count by network, status | eval is_offline=if(status=\"offline\",1,0)\n| where is_offline > 0",
              "m": "Configure Meraki API integration (API key + org ID). Poll device statuses. Forward syslog for events. Dashboard showing organization-wide health.",
              "z": "Map (device locations), Table, Status grid, Single value (offline count).",
              "kfp": "Meraki maintenance windows, cellular backup failovers, and brief cloud API hiccups can look like outages; match counts to the dashboard map before paging.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: Meraki Dashboard API, syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Meraki API integration (API key + org ID). Poll device statuses. Forward syslog for events. Dashboard showing organization-wide health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"meraki:api\"\n| stats count by network, status | eval is_offline=if(status=\"offline\",1,0)\n| where is_offline > 0\n```\n\nUnderstanding this SPL\n\n**Meraki Organization Monitoring** — Tracks Meraki device status across all networks and organizations from a single pane.\n\nDocumented **Data sources**: Meraki Dashboard API, syslog. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: meraki:api. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by network, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **is_offline** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_offline > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (device locations), Table, Status grid, Single value (offline count).",
              "script": "",
              "premium": "",
              "hw": "Cisco Meraki MX64, MX67, MX68, MX75, MX84, MX85, MX95, MX100, MX105, MX250, MX450, Cisco Meraki MR36, MR44, MR46, MR56, MR57, MR76, MR78, MR86, Cisco Meraki MS120, MS125, MS130, MS210, MS225, MS250, MS350, MS390",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when Meraki gear goes offline or unhealthy across sites so the team can fix it before everyone loses the network in that building.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.3",
              "n": "SNMP Trap Consolidation",
              "c": "medium",
              "f": "beginner",
              "v": "Centralizing SNMP traps from all sources enables cross-tool correlation and reduces monitoring tool sprawl.",
              "t": "Splunk Add-on for SNMP (trap receiver)",
              "d": "SNMP traps",
              "q": "index=network sourcetype=\"snmp:trap\"\n| stats count by trap_oid, host, severity | sort -count",
              "m": "Configure Splunk SNMP trap receiver (UDP 162). Map trap OIDs to human-readable names via lookup. Correlate with syslog events from the same device.",
              "z": "Table (device, trap, severity), Bar chart, Timeline.",
              "kfp": "Trap storms during device reboots, link flapping, or SNMP server overload can dwarf normal traps; use host and OID baselines and storm detection (UC-5.8.25) together.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for SNMP (trap receiver).\n• Ensure the following data sources are available: SNMP traps.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk SNMP trap receiver (UDP 162). Map trap OIDs to human-readable names via lookup. Correlate with syslog events from the same device.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:trap\"\n| stats count by trap_oid, host, severity | sort -count\n```\n\nUnderstanding this SPL\n\n**SNMP Trap Consolidation** — Centralizing SNMP traps from all sources enables cross-tool correlation and reduces monitoring tool sprawl.\n\nDocumented **Data sources**: SNMP traps. **App/TA** (typical add-on context): Splunk Add-on for SNMP (trap receiver). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:trap. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:trap\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by trap_oid, host, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, trap, severity), Bar chart, Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We pull all SNMP traps into one place so we can look up what broke without jumping between old tools.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.4",
              "n": "Network Device Inventory",
              "c": "low",
              "f": "intermediate",
              "v": "Up-to-date inventory supports change management, vulnerability tracking, and compliance auditing.",
              "t": "Combined sources (NMS APIs, SNMP sysDescr)",
              "d": "NMS discovery, SNMP polling",
              "q": "index=network sourcetype=\"snmp:system\"\n| stats latest(sysDescr) as description, latest(sysLocation) as location by host\n| table host description location",
              "m": "Poll SNMP sysDescr, sysName, sysLocation from all devices. Cross-reference with NMS discovery exports. Maintain inventory lookup for enrichment.",
              "z": "Table (device, model, location, version), Pie chart (by model/vendor).",
              "kfp": "SNMP sysDescr and location strings can change in harmless upgrades; only alert when identity or site metadata shifts outside change records.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Combined sources (NMS APIs, SNMP sysDescr).\n• Ensure the following data sources are available: NMS discovery, SNMP polling.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll SNMP sysDescr, sysName, sysLocation from all devices. Cross-reference with NMS discovery exports. Maintain inventory lookup for enrichment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:system\"\n| stats latest(sysDescr) as description, latest(sysLocation) as location by host\n| table host description location\n```\n\nUnderstanding this SPL\n\n**Network Device Inventory** — Up-to-date inventory supports change management, vulnerability tracking, and compliance auditing.\n\nDocumented **Data sources**: NMS discovery, SNMP polling. **App/TA** (typical add-on context): Combined sources (NMS APIs, SNMP sysDescr). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:system. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:system\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Network Device Inventory**): table host description location\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, model, location, version), Pie chart (by model/vendor).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep a clear list of what each network device is and where it lives, which helps with updates, security checks, and replacing boxes on time.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.5",
              "n": "Network Device Backup Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Missing backups mean a failed device requires manual rebuilding. Tracking backup success ensures rapid disaster recovery.",
              "t": "RANCID/Oxidized logs, SolarWinds NCM, custom scripts",
              "d": "`sourcetype=rancid`, `sourcetype=oxidized`",
              "q": "index=network sourcetype=\"oxidized\"\n| stats latest(status) as backup_status, latest(_time) as last_backup by device\n| eval days_since=round((now()-last_backup)/86400,0)\n| where backup_status!=\"success\" OR days_since > 7\n| sort -days_since",
              "m": "Integrate config backup tool (Oxidized/RANCID) logs into Splunk. Track success/failure per device. Alert when a device hasn't been backed up in >7 days.",
              "z": "Table (device, status, days since backup), Single value (compliance %), Status grid.",
              "kfp": "Backup job overlap, slow SSH to old gear, and locked devices during changes can look like missed backups; compare to the backup tool’s own run log.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: RANCID/Oxidized logs, SolarWinds NCM, custom scripts.\n• Ensure the following data sources are available: `sourcetype=rancid`, `sourcetype=oxidized`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate config backup tool (Oxidized/RANCID) logs into Splunk. Track success/failure per device. Alert when a device hasn't been backed up in >7 days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"oxidized\"\n| stats latest(status) as backup_status, latest(_time) as last_backup by device\n| eval days_since=round((now()-last_backup)/86400,0)\n| where backup_status!=\"success\" OR days_since > 7\n| sort -days_since\n```\n\nUnderstanding this SPL\n\n**Network Device Backup Compliance** — Missing backups mean a failed device requires manual rebuilding. Tracking backup success ensures rapid disaster recovery.\n\nDocumented **Data sources**: `sourcetype=rancid`, `sourcetype=oxidized`. **App/TA** (typical add-on context): RANCID/Oxidized logs, SolarWinds NCM, custom scripts. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: oxidized. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"oxidized\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where backup_status!=\"success\" OR days_since > 7` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, status, days since backup), Single value (compliance %), Status grid.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check that switch and router configs are saved on schedule so a broken box can be rebuilt quickly instead of by hand from memory.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.7",
              "n": "Network Configuration Drift Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Configuration drift from golden standards introduces vulnerabilities and operational inconsistencies. Detecting drift maintains compliance.",
              "t": "RANCID/Oxidized, custom diff scripts, DNA Center",
              "d": "`sourcetype=config:diff`, `sourcetype=cisco:dnac`",
              "q": "index=network sourcetype=\"config:diff\"\n| rex \"device=(?<device>\\S+).*?lines_changed=(?<changes>\\d+)\"\n| where changes > 0\n| stats sum(changes) as total_changes, count as change_events by device\n| sort -total_changes",
              "m": "Schedule config pulls via Oxidized/RANCID. Diff against golden templates. Ingest diff results into Splunk. Alert on unauthorized changes (outside change windows).",
              "z": "Table (device, changes, last modified), Timeline (change events), Single value (devices with drift).",
              "kfp": "Authorized template pushes, golden-config refreshes, and RANCID noise can all move diff counts; require change-ticket match before treating as incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: RANCID/Oxidized, custom diff scripts, DNA Center.\n• Ensure the following data sources are available: `sourcetype=config:diff`, `sourcetype=cisco:dnac`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule config pulls via Oxidized/RANCID. Diff against golden templates. Ingest diff results into Splunk. Alert on unauthorized changes (outside change windows).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"config:diff\"\n| rex \"device=(?<device>\\S+).*?lines_changed=(?<changes>\\d+)\"\n| where changes > 0\n| stats sum(changes) as total_changes, count as change_events by device\n| sort -total_changes\n```\n\nUnderstanding this SPL\n\n**Network Configuration Drift Detection** — Configuration drift from golden standards introduces vulnerabilities and operational inconsistencies. Detecting drift maintains compliance.\n\nDocumented **Data sources**: `sourcetype=config:diff`, `sourcetype=cisco:dnac`. **App/TA** (typical add-on context): RANCID/Oxidized, custom diff scripts, DNA Center. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: config:diff. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"config:diff\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where changes > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by device** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, changes, last modified), Timeline (change events), Single value (devices with drift).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot when a device's settings drift from what we expect, so surprise changes do not sit there quietly until they cause an outage.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.8",
              "n": "SNMP Polling Gap Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Missing SNMP polls create gaps in monitoring data. Detecting polling failures ensures metrics dashboards remain accurate.",
              "t": "Splunk core (metadata search)",
              "d": "Any SNMP sourcetype",
              "q": "| tstats count where index=network sourcetype=\"snmp:*\" by host, sourcetype, _time span=10m\n| stats range(_time) as time_range, count as poll_count by host, sourcetype\n| eval expected_polls=round(time_range/300,0)\n| eval gap_pct=round((1-poll_count/expected_polls)*100,1)\n| where gap_pct > 20 | sort -gap_pct",
              "m": "Track SNMP data arrival per device using `tstats`. Compare expected vs. actual poll count. Alert when gap exceeds 20%. Investigate SNMP community/credential issues.",
              "z": "Table (device, expected, actual, gap %), Single value (devices with gaps), Heatmap.",
              "kfp": "Maintenance silences, discovery pauses, and metric-store lag can under-count expected polls; verify the poller, not just Splunk, when gaps appear.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk core (metadata search).\n• Ensure the following data sources are available: Any SNMP sourcetype.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack SNMP data arrival per device using `tstats`. Compare expected vs. actual poll count. Alert when gap exceeds 20%. Investigate SNMP community/credential issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats count where index=network sourcetype=\"snmp:*\" by host, sourcetype, _time span=10m\n| stats range(_time) as time_range, count as poll_count by host, sourcetype\n| eval expected_polls=round(time_range/300,0)\n| eval gap_pct=round((1-poll_count/expected_polls)*100,1)\n| where gap_pct > 20 | sort -gap_pct\n```\n\nUnderstanding this SPL\n\n**SNMP Polling Gap Detection** — Missing SNMP polls create gaps in monitoring data. Detecting polling failures ensures metrics dashboards remain accurate.\n\nDocumented **Data sources**: Any SNMP sourcetype. **App/TA** (typical add-on context): Splunk core (metadata search). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:*. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against precomputed summaries; ensure the referenced data model is accelerated.\n• `stats` rolls up events into metrics; results are split **by host, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **expected_polls** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **gap_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, expected, actual, gap %), Single value (devices with gaps), Heatmap.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We notice when regular SNMP checks stop showing up, which usually means a password, path, or agent problem before the graphs go blank in a real incident.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.9",
              "n": "SSL/TLS Certificate Expiration Tracking (Meraki)",
              "c": "medium",
              "f": "intermediate",
              "v": "Monitors SSL certificate expiration dates on all network devices to prevent outages.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" certificate_expiry=*\n| eval days_until_expiry=round((strptime(certificate_expiry, \"%Y-%m-%d\")-now())/86400, 0)\n| where days_until_expiry < 30\n| stats latest(days_until_expiry) as days_left by device_name, device_type\n| sort days_left",
              "m": "Query device API for certificate expiry dates. Alert on <30 days.",
              "z": "Expiration countdown gauge; timeline of expiring certs; alert table.",
              "kfp": "Cloud-managed certificate rotations and name mismatches in lab orgs can trigger warnings without user impact; compare cert dates in the Meraki UI.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery device API for certificate expiry dates. Alert on <30 days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" certificate_expiry=*\n| eval days_until_expiry=round((strptime(certificate_expiry, \"%Y-%m-%d\")-now())/86400, 0)\n| where days_until_expiry < 30\n| stats latest(days_until_expiry) as days_left by device_name, device_type\n| sort days_left\n```\n\nUnderstanding this SPL\n\n**SSL/TLS Certificate Expiration Tracking (Meraki)** — Monitors SSL certificate expiration dates on all network devices to prevent outages.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_until_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_until_expiry < 30` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by device_name, device_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Expiration countdown gauge; timeline of expiring certs; alert table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We warn you before Meraki dashboard certificates age out, so the browser and API checks keep working and nobody gets stuck with scary warnings.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.10",
              "n": "Firmware Update Compliance and Version Tracking (Meraki)",
              "c": "medium",
              "f": "intermediate",
              "v": "Ensures all network devices run supported firmware versions and patches.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\"\n| stats latest(firmware_version) as current_fw, count as device_count by device_type\n| lookup recommended_firmware.csv device_type OUTPUTNEW recommended_fw\n| where current_fw != recommended_fw",
              "m": "Query device API for firmware versions. Compare to recommended baseline.",
              "z": "Firmware version table by device type; compliance percentage gauge; outdated device list.",
              "kfp": "Staged rollouts, deferred upgrades for stability, and lab networks often stay on older versions on purpose; policy by site tag, not a single version everywhere.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery device API for firmware versions. Compare to recommended baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\"\n| stats latest(firmware_version) as current_fw, count as device_count by device_type\n| lookup recommended_firmware.csv device_type OUTPUTNEW recommended_fw\n| where current_fw != recommended_fw\n```\n\nUnderstanding this SPL\n\n**Firmware Update Compliance and Version Tracking (Meraki)** — Ensures all network devices run supported firmware versions and patches.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where current_fw != recommended_fw` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Firmware version table by device type; compliance percentage gauge; outdated device list.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track whether Meraki devices run current safe firmware, so you are not running known bugs or missing fixes across offices.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.11",
              "n": "API Call Rate Monitoring and Rate Limit Alerts (Meraki)",
              "c": "medium",
              "f": "intermediate",
              "v": "Monitors API usage to prevent rate limit hits and optimize automation efficiency.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api:*\"\n| timechart count as api_calls by source, endpoint\n| eval call_rate=api_calls/60\n| where call_rate > 9",
              "m": "Log all API calls with timestamps. Monitor call rate by endpoint.",
              "z": "API call timeline; rate limit gauge; endpoint usage breakdown.",
              "kfp": "Dashboard refreshes, monitoring apps, and automation hitting the same org can add API calls; chart usage against Meraki’s published limits by product.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog all API calls with timestamps. Monitor call rate by endpoint.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api:*\"\n| timechart count as api_calls by source, endpoint\n| eval call_rate=api_calls/60\n| where call_rate > 9\n```\n\nUnderstanding this SPL\n\n**API Call Rate Monitoring and Rate Limit Alerts (Meraki)** — Monitors API usage to prevent rate limit hits and optimize automation efficiency.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api:*. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api:*\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time with a separate series **by source, endpoint** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **call_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where call_rate > 9` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: API call timeline; rate limit gauge; endpoint usage breakdown.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how hard we lean on the Meraki cloud API, so a burst of calls does not trip limits when many dashboards run at once.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.12",
              "n": "License Expiration Tracking and Renewal Alerts (Meraki)",
              "c": "critical",
              "f": "intermediate",
              "v": "Ensures licenses don't expire unexpectedly and features remain available.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" license_expiry=*\n| eval days_until_expire=round((strptime(license_expiry, \"%Y-%m-%d\")-now())/86400, 0)\n| stats latest(days_until_expire) as days_left, latest(license_expiry) as expiry_date by license_type, organization\n| where days_left < 90\n| sort days_left",
              "m": "Query organization API for license expiry. Alert on <90 days.",
              "z": "License expiration countdown; renewal timeline; license detail table.",
              "kfp": "Co-term vs per-device licensing and co-termination renewals can confuse expiring tiles; match Splunk to Organization > License in Dashboard.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery organization API for license expiry. Alert on <90 days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" license_expiry=*\n| eval days_until_expire=round((strptime(license_expiry, \"%Y-%m-%d\")-now())/86400, 0)\n| stats latest(days_until_expire) as days_left, latest(license_expiry) as expiry_date by license_type, organization\n| where days_left < 90\n| sort days_left\n```\n\nUnderstanding this SPL\n\n**License Expiration Tracking and Renewal Alerts (Meraki)** — Ensures licenses don't expire unexpectedly and features remain available.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_until_expire** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by license_type, organization** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where days_left < 90` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: License expiration countdown; renewal timeline; license detail table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know before Meraki licenses slip past their date, so renewals are planned instead of a surprise cut-off.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.13",
              "n": "Network Device Inventory and Change Audit (Meraki)",
              "c": "medium",
              "f": "advanced",
              "v": "Maintains accurate inventory of network devices and tracks hardware/software changes.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\"\n| stats count as device_count by device_type, network_id\n| append [search index=cisco_network sourcetype=\"meraki:api\" | stats count as org_count]\n| fillnull device_count value=0",
              "m": "Query devices API to build current inventory. Track additions/removals.",
              "z": "Inventory summary table; device count by type pie chart; change log timeline.",
              "kfp": "Inventory pulls during hardware refresh or RMAs may spike changes; use baselines and change records before treating as unknown gear.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery devices API to build current inventory. Track additions/removals.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\"\n| stats count as device_count by device_type, network_id\n| append [search index=cisco_network sourcetype=\"meraki:api\" | stats count as org_count]\n| fillnull device_count value=0\n```\n\nUnderstanding this SPL\n\n**Network Device Inventory and Change Audit (Meraki)** — Maintains accurate inventory of network devices and tracks hardware/software changes.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_type, network_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Appends rows from a subsearch with `append`.\n• Fills null values with `fillnull`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Inventory summary table; device count by type pie chart; change log timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an inventory and change feel for Meraki hardware and settings so large moves do not get lost in the day-to-day noise.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.14",
              "n": "Admin Activity Logging and Access Control Audit (Meraki)",
              "c": "critical",
              "f": "beginner",
              "v": "Tracks administrator actions and logins for compliance and security auditing.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*admin*\" OR signature=\"*login*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*admin*\" OR signature=\"*login*\")\n| stats count as admin_action_count by admin_user, action_type, timestamp\n| where admin_action_count > 0",
              "m": "Enable admin audit logging. Ingest login and action events.",
              "z": "Admin activity timeline; action type breakdown; user activity detail table.",
              "kfp": "Help-desk and automation accounts that log in often can look like noise; focus on new IPs, new admins, and after-hours use against policy.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*admin*\" OR signature=\"*login*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable admin audit logging. Ingest login and action events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*admin*\" OR signature=\"*login*\")\n| stats count as admin_action_count by admin_user, action_type, timestamp\n| where admin_action_count > 0\n```\n\nUnderstanding this SPL\n\n**Admin Activity Logging and Access Control Audit (Meraki)** — Tracks administrator actions and logins for compliance and security auditing.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*admin*\" OR signature=\"*login*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by admin_user, action_type, timestamp** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where admin_action_count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Admin activity timeline; action type breakdown; user activity detail table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see who does what in the Meraki admin console, which matters when you need to show who changed a critical setting.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.15",
              "n": "Admin Privilege Changes and Permission Escalation (Meraki)",
              "c": "critical",
              "f": "beginner",
              "v": "Detects unauthorized privilege changes and permission escalation attempts.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*privilege*\" OR signature=\"*permission*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*privilege*\" OR signature=\"*permission*\")\n| stats count as priv_change_count by admin_user, old_role, new_role\n| where priv_change_count > 0",
              "m": "Monitor privilege and role change events. Alert on escalations.",
              "z": "Privilege change timeline; role change audit table; escalation alert dashboard.",
              "kfp": "Role updates during onboarding or support escalations are often correct; require ticket correlation for privilege events.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*privilege*\" OR signature=\"*permission*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor privilege and role change events. Alert on escalations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*privilege*\" OR signature=\"*permission*\")\n| stats count as priv_change_count by admin_user, old_role, new_role\n| where priv_change_count > 0\n```\n\nUnderstanding this SPL\n\n**Admin Privilege Changes and Permission Escalation (Meraki)** — Detects unauthorized privilege changes and permission escalation attempts.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*privilege*\" OR signature=\"*permission*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by admin_user, old_role, new_role** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where priv_change_count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Privilege change timeline; role change audit table; escalation alert dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when someone’s Meraki rights jump up in a way that should not happen, so small mistakes or misuse do not go unnoticed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.16",
              "n": "Alert Volume Trending and Alert Fatigue Analysis (Meraki)",
              "c": "medium",
              "f": "beginner",
              "v": "Analyzes alert volume trends to optimize alerting rules and reduce false positives.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580, webhooks)",
              "d": "`sourcetype=meraki:webhook`",
              "q": "index=cisco_network sourcetype=\"meraki:webhook\"\n| timechart count as alert_count by alert_type\n| eval alert_ratio=alert_count/sum(alert_count)",
              "m": "Ingest webhook alerts. Track volume and types over time.",
              "z": "Alert volume timeline; alert type pie chart; trend sparklines.",
              "kfp": "Real incidents naturally raise alert volume; trend suppression and dedup before you tune thresholds so you do not hide true problems.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580, webhooks).\n• Ensure the following data sources are available: `sourcetype=meraki:webhook`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest webhook alerts. Track volume and types over time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:webhook\"\n| timechart count as alert_count by alert_type\n| eval alert_ratio=alert_count/sum(alert_count)\n```\n\nUnderstanding this SPL\n\n**Alert Volume Trending and Alert Fatigue Analysis (Meraki)** — Analyzes alert volume trends to optimize alerting rules and reduce false positives.\n\nDocumented **Data sources**: `sourcetype=meraki:webhook`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580, webhooks). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:webhook. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:webhook\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time with a separate series **by alert_type** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **alert_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert volume timeline; alert type pie chart; trend sparklines.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when Meraki is throwing too many of the same alerts, so the team can tune and avoid crying wolf.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.17",
              "n": "Network Health Score Aggregation and Executive Reporting (Meraki)",
              "c": "medium",
              "f": "intermediate",
              "v": "Provides high-level network health metric for executive dashboards and trend reporting.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\"\n| stats avg(health_score) as device_health, count(eval(status=\"offline\")) as offline_count by network_id\n| eval network_health=round(device_health - (offline_count*5), 2)\n| eval health_status=case(network_health >= 85, \"Healthy\", network_health >= 70, \"Degraded\", 1=1, \"Critical\")",
              "m": "Aggregate device health scores. Calculate composite network score.",
              "z": "Network health gauge; health trend sparkline; status KPI dashboard.",
              "kfp": "Aggregates hide a single bad site; always drill to site-level before exec reporting drives the wrong project priority.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate device health scores. Calculate composite network score.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\"\n| stats avg(health_score) as device_health, count(eval(status=\"offline\")) as offline_count by network_id\n| eval network_health=round(device_health - (offline_count*5), 2)\n| eval health_status=case(network_health >= 85, \"Healthy\", network_health >= 70, \"Degraded\", 1=1, \"Critical\")\n```\n\nUnderstanding this SPL\n\n**Network Health Score Aggregation and Executive Reporting (Meraki)** — Provides high-level network health metric for executive dashboards and trend reporting.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by network_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **network_health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **health_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Network health gauge; health trend sparkline; status KPI dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We roll Meraki health into a simple view leaders can read, not just a wall of device lists.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.18",
              "n": "Device Online/Offline Status Monitoring (Meraki)",
              "c": "critical",
              "f": "intermediate",
              "v": "Tracks device connectivity status to quickly identify and respond to device failures.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\"\n| stats latest(status) as device_status, latest(last_status_change) as status_change_time, count(eval(status=\"offline\")) as offline_count by network_id\n| eval offline_pct=round(offline_count*100/count, 2)\n| where offline_count > 0",
              "m": "Poll devices API for status. Alert on offline devices.",
              "z": "Device status table; offline count gauge; status change timeline.",
              "kfp": "Brief cellular or power blips to appliances can flip offline/online; use duration filters and local ping where possible before heavy paging.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll devices API for status. Alert on offline devices.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\"\n| stats latest(status) as device_status, latest(last_status_change) as status_change_time, count(eval(status=\"offline\")) as offline_count by network_id\n| eval offline_pct=round(offline_count*100/count, 2)\n| where offline_count > 0\n```\n\nUnderstanding this SPL\n\n**Device Online/Offline Status Monitoring (Meraki)** — Tracks device connectivity status to quickly identify and respond to device failures.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by network_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **offline_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where offline_count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Device status table; offline count gauge; status change timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know quickly when a Meraki box drops offline, before users open tickets about dead Wi‑Fi or VPN.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.19",
              "n": "Multi-Organization Comparison and Benchmarking (Meraki)",
              "c": "low",
              "f": "beginner",
              "v": "Compares metrics across organizations to identify best practices and outliers.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\"\n| stats avg(health_score) as avg_health, count as device_count by organization\n| sort - avg_health",
              "m": "Aggregate metrics across multiple organizations. Create comparison views.",
              "z": "Organization comparison bar chart; health rank table; benchmark line chart.",
              "kfp": "Different site sizes and use cases make raw scores unfair; compare like-sized orgs and segment retail vs head office.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate metrics across multiple organizations. Create comparison views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\"\n| stats avg(health_score) as avg_health, count as device_count by organization\n| sort - avg_health\n```\n\nUnderstanding this SPL\n\n**Multi-Organization Comparison and Benchmarking (Meraki)** — Compares metrics across organizations to identify best practices and outliers.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by organization** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Organization comparison bar chart; health rank table; benchmark line chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you compare sites and organizations on Meraki, so the slow or noisy places stand out next to the good ones.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.20",
              "n": "Configuration Change Window Compliance (Meraki)",
              "c": "medium",
              "f": "beginner",
              "v": "Ensures configuration changes only occur within approved maintenance windows.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*config*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*config*\"\n| eval hour=strftime(_time, \"%H\")\n| stats count as config_change_count by hour\n| eval window_compliant=if(hour>=22 OR hour<6, \"Yes\", \"No\")\n| where window_compliant=\"No\" AND config_change_count > 0",
              "m": "Monitor configuration change events. Check against maintenance windows.",
              "z": "Change compliance timeline; out-of-window change alert table.",
              "kfp": "Emergency fixes outside the window are sometimes correct; require change ticket and approver for exceptions, do not only silence the alert.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*config*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor configuration change events. Check against maintenance windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*config*\"\n| eval hour=strftime(_time, \"%H\")\n| stats count as config_change_count by hour\n| eval window_compliant=if(hour>=22 OR hour<6, \"Yes\", \"No\")\n| where window_compliant=\"No\" AND config_change_count > 0\n```\n\nUnderstanding this SPL\n\n**Configuration Change Window Compliance (Meraki)** — Ensures configuration changes only occur within approved maintenance windows.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*config*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by hour** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **window_compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where window_compliant=\"No\" AND config_change_count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Change compliance timeline; out-of-window change alert table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you show when changes were made in approved windows versus odd hours, so maintenance stays under control.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.21",
              "n": "Webhook Delivery Failure Tracking (Meraki)",
              "c": "medium",
              "f": "beginner",
              "v": "Ensures webhook notifications reach integrations and alerts don't get lost.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580, webhooks)",
              "d": "`sourcetype=meraki:webhook status=\"failure\" OR status=\"error\"`",
              "q": "index=cisco_network sourcetype=\"meraki:webhook\" (status=\"failure\" OR status=\"error\")\n| stats count as failure_count, latest(error_message) as last_error by webhook_id, organization\n| where failure_count > 5",
              "m": "Log webhook delivery attempts. Alert on sustained failures.",
              "z": "Webhook failure timeline; failure cause breakdown; affected org list.",
              "kfp": "Meraki and your receiver can retry webhooks; dedup by `id` and avoid paging on the first single failure without context.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580, webhooks).\n• Ensure the following data sources are available: `sourcetype=meraki:webhook status=\"failure\" OR status=\"error\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog webhook delivery attempts. Alert on sustained failures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:webhook\" (status=\"failure\" OR status=\"error\")\n| stats count as failure_count, latest(error_message) as last_error by webhook_id, organization\n| where failure_count > 5\n```\n\nUnderstanding this SPL\n\n**Webhook Delivery Failure Tracking (Meraki)** — Ensures webhook notifications reach integrations and alerts don't get lost.\n\nDocumented **Data sources**: `sourcetype=meraki:webhook status=\"failure\" OR status=\"error\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580, webhooks). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:webhook. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:webhook\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by webhook_id, organization** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failure_count > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Webhook failure timeline; failure cause breakdown; affected org list.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when webhooks from Meraki fail, so automations and tickets still fire when something important happens.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.22",
              "n": "API Error Rate and Endpoint Health (Meraki)",
              "c": "medium",
              "f": "intermediate",
              "v": "Monitors API endpoint health and error rates to ensure automation reliability.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api (http_status_code=4* OR http_status_code=5*)`",
              "q": "index=cisco_network sourcetype=\"meraki:api:*\" (http_status_code=4* OR http_status_code=5*)\n| stats count as error_count, values(http_status_code) as status_codes by endpoint, method\n| eval error_rate=round(error_count*100/total_requests, 2)\n| where error_rate > 5",
              "m": "Log API responses with status codes. Alert on error rate threshold.",
              "z": "API error timeline; endpoint error breakdown; error rate gauge.",
              "kfp": "Meraki 429 rate-limit responses and transient 5xx from the cloud are often environmental; back off and alert on error rate, not a single 500 line.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api (http_status_code=4* OR http_status_code=5*)`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog API responses with status codes. Alert on error rate threshold.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api:*\" (http_status_code=4* OR http_status_code=5*)\n| stats count as error_count, values(http_status_code) as status_codes by endpoint, method\n| eval error_rate=round(error_count*100/total_requests, 2)\n| where error_rate > 5\n```\n\nUnderstanding this SPL\n\n**API Error Rate and Endpoint Health (Meraki)** — Monitors API endpoint health and error rates to ensure automation reliability.\n\nDocumented **Data sources**: `sourcetype=meraki:api (http_status_code=4* OR http_status_code=5*)`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api:*. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api:*\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by endpoint, method** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: API error timeline; endpoint error breakdown; error rate gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when the Meraki cloud API is returning errors, before dashboards and scripts quietly break.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.23",
              "n": "Dashboard Configuration and Export Backup (Meraki)",
              "c": "low",
              "f": "beginner",
              "v": "Tracks dashboard configuration backups to enable disaster recovery and configuration review.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" backup_timestamp=*\n| stats latest(backup_timestamp) as last_backup, count as backup_count by organization\n| eval backup_age_days=round((now()-strptime(backup_timestamp, \"%Y-%m-%d\"))/86400, 0)\n| where backup_age_days > 7",
              "m": "Periodically backup organization configurations. Track backup history.",
              "z": "Last backup timestamp by org; backup recency gauge; backup history timeline.",
              "kfp": "Exports before big dashboard edits look like noise; only alert when backup is missing for longer than the scheduled export interval.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPeriodically backup organization configurations. Track backup history.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" backup_timestamp=*\n| stats latest(backup_timestamp) as last_backup, count as backup_count by organization\n| eval backup_age_days=round((now()-strptime(backup_timestamp, \"%Y-%m-%d\"))/86400, 0)\n| where backup_age_days > 7\n```\n\nUnderstanding this SPL\n\n**Dashboard Configuration and Export Backup (Meraki)** — Tracks dashboard configuration backups to enable disaster recovery and configuration review.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by organization** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **backup_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where backup_age_days > 7` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Last backup timestamp by org; backup recency gauge; backup history timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you back up and track Meraki dashboard exports, so a bad change does not erase a good layout when you need it most.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.24",
              "n": "Network Device Configuration Backup and Drift",
              "c": "high",
              "f": "intermediate",
              "v": "Missing or stale configuration backups complicate recovery after failure or bad change. Detecting backup failure or config drift supports change control and RTO.",
              "t": "RANCID, Oxidized, custom scripted input",
              "d": "Backup job output, config repository (Git), device config fetch",
              "q": "index=network sourcetype=config_backup\n| stats latest(backup_ok) as ok, latest(backup_time) as last_backup by device_hostname\n| where ok != 1 OR (now()-last_backup) > 86400\n| table device_hostname ok last_backup",
              "m": "Run config backup (RANCID, Oxidized, or vendor API) on schedule. Ingest success/failure and timestamp. Alert when backup fails or last successful backup is older than 24 hours. Optionally diff current vs. last backup for drift.",
              "z": "Table (device, last backup, status), Single value (devices without backup today), Timeline (backup runs).",
              "kfp": "Lab devices and out-of-support gear may be intentionally absent from NCM; scope compliance to production tags.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: RANCID, Oxidized, custom scripted input.\n• Ensure the following data sources are available: Backup job output, config repository (Git), device config fetch.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun config backup (RANCID, Oxidized, or vendor API) on schedule. Ingest success/failure and timestamp. Alert when backup fails or last successful backup is older than 24 hours. Optionally diff current vs. last backup for drift.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=config_backup\n| stats latest(backup_ok) as ok, latest(backup_time) as last_backup by device_hostname\n| where ok != 1 OR (now()-last_backup) > 86400\n| table device_hostname ok last_backup\n```\n\nUnderstanding this SPL\n\n**Network Device Configuration Backup and Drift** — Missing or stale configuration backups complicate recovery after failure or bad change. Detecting backup failure or config drift supports change control and RTO.\n\nDocumented **Data sources**: Backup job output, config repository (Git), device config fetch. **App/TA** (typical add-on context): RANCID, Oxidized, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: config_backup. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=config_backup. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ok != 1 OR (now()-last_backup) > 86400` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Network Device Configuration Backup and Drift**): table device_hostname ok last_backup\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, last backup, status), Single value (devices without backup today), Timeline (backup runs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you prove configs are stored and can be compared over time, which is a lifesaver when a device fails or someone mis-clicks in the middle of the night.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.25",
              "n": "SNMP Trap Storm Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Excessive SNMP traps from a device indicating failure cascade.",
              "t": "SNMP modular input (trap receiver)",
              "d": "snmptrapd, Splunk SNMP trap input",
              "q": "index=network sourcetype=snmptrap\n| bin _time span=1m\n| stats count as trap_count by host, _time\n| eventstats avg(trap_count) as avg_traps, stdev(trap_count) as std_traps by host\n| where trap_count > (avg_traps + 3*std_traps) OR trap_count > 100\n| sort -trap_count",
              "m": "Configure Splunk SNMP trap input or forward traps from snmptrapd. Parse trap OID and host. Alert when trap rate from a single device exceeds 100/min or 3 standard deviations above baseline. Trap storms often indicate device failure, link flapping, or misconfiguration.",
              "z": "Line chart (traps per host over time), Table (host, count, threshold), Single value (devices in storm).",
              "kfp": "Trap storms during device reboots, link flapping, or SNMP server overload; confirm with the NMS and interface counters before a hardware swap ticket.",
              "refs": "[Cisco ThousandEyes App for Splunk](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input (trap receiver).\n• Ensure the following data sources are available: snmptrapd, Splunk SNMP trap input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk SNMP trap input or forward traps from snmptrapd. Parse trap OID and host. Alert when trap rate from a single device exceeds 100/min or 3 standard deviations above baseline. Trap storms often indicate device failure, link flapping, or misconfiguration.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=snmptrap\n| bin _time span=1m\n| stats count as trap_count by host, _time\n| eventstats avg(trap_count) as avg_traps, stdev(trap_count) as std_traps by host\n| where trap_count > (avg_traps + 3*std_traps) OR trap_count > 100\n| sort -trap_count\n```\n\nUnderstanding this SPL\n\n**SNMP Trap Storm Detection** — Excessive SNMP traps from a device indicating failure cascade.\n\nDocumented **Data sources**: snmptrapd, Splunk SNMP trap input. **App/TA** (typical add-on context): SNMP modular input (trap receiver). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmptrap. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=snmptrap. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where trap_count > (avg_traps + 3*std_traps) OR trap_count > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (traps per host over time), Table (host, count, threshold), Single value (devices in storm).",
              "script": "",
              "premium": "",
              "hw": "Various (SNMP-enabled network devices)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when one device is screaming SNMP traps in a short burst, which often means a bad link, power problem, or bad SNMP setup—not just normal day traffic.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.26",
              "n": "CDN Origin Hit Rate and Cache Efficiency (CloudFront / Akamai / Fastly)",
              "c": "high",
              "f": "intermediate",
              "v": "CDN cache efficiency directly impacts origin server load and end-user latency. Monitoring cache hit ratio, origin pull rates, and edge error rates across CloudFront, Akamai, or Fastly identifies misconfigured cache policies, cache-busting clients, and origin capacity risks before they affect user experience.",
              "t": "Splunk Add-on for AWS (`Splunk_TA_aws`, Splunkbase 1876) for CloudFront; Akamai DataStream syslog; Fastly Real-Time Log Streaming via HEC",
              "d": "`sourcetype=\"aws:cloudfront:accesslogs\"`, `sourcetype=\"akamai:datastream\"`, `sourcetype=\"fastly:cdn\"`",
              "q": "index=cdn sourcetype=\"aws:cloudfront:accesslogs\"\n| eval cache_result=case(x_edge_result_type=\"Hit\",\"HIT\", x_edge_result_type=\"Miss\",\"MISS\", x_edge_result_type=\"Error\",\"ERROR\", 1=1,\"OTHER\")\n| stats count as total, count(eval(cache_result=\"HIT\")) as hits, count(eval(cache_result=\"MISS\")) as misses, count(eval(cache_result=\"ERROR\")) as errors by cs_uri_stem\n| eval hit_rate=round(hits/total*100,2)\n| where hit_rate < 80 OR errors > 10\n| sort - total",
              "m": "Enable CloudFront access logging to S3 and ingest via Splunk Add-on for AWS. For Akamai, configure DataStream to forward to Splunk via syslog. For Fastly, configure Real-Time Log Streaming to a Splunk HEC endpoint. Alert on cache hit rate drops below 80% or elevated error rates.",
              "z": "Line chart (cache hit rate over time), Table (low-efficiency URIs), Pie chart (HIT/MISS/ERROR distribution).",
              "kfp": "",
              "refs": "[Splunkbase app 1876](https://splunkbase.splunk.com/app/1876), [AWS CloudFront documentation](https://docs.aws.amazon.com/cloudfront/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (`Splunk_TA_aws`, Splunkbase 1876) for CloudFront; Akamai DataStream syslog; Fastly Real-Time Log Streaming via HEC.\n• Ensure the following data sources are available: `sourcetype=\"aws:cloudfront:accesslogs\"`, `sourcetype=\"akamai:datastream\"`, `sourcetype=\"fastly:cdn\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CloudFront access logging to S3 and ingest via Splunk Add-on for AWS. For Akamai, configure DataStream to forward to Splunk via syslog. For Fastly, configure Real-Time Log Streaming to a Splunk HEC endpoint. Alert on cache hit rate drops below 80% or elevated error rates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cdn sourcetype=\"aws:cloudfront:accesslogs\"\n| eval cache_result=case(x_edge_result_type=\"Hit\",\"HIT\", x_edge_result_type=\"Miss\",\"MISS\", x_edge_result_type=\"Error\",\"ERROR\", 1=1,\"OTHER\")\n| stats count as total, count(eval(cache_result=\"HIT\")) as hits, count(eval(cache_result=\"MISS\")) as misses, count(eval(cache_result=\"ERROR\")) as errors by cs_uri_stem\n| eval hit_rate=round(hits/total*100,2)\n| where hit_rate < 80 OR errors > 10\n| sort - total\n```\n\nUnderstanding this SPL\n\n**CDN Origin Hit Rate and Cache Efficiency (CloudFront / Akamai / Fastly)** — CDN cache efficiency directly impacts origin server load and end-user latency. Monitoring cache hit ratio, origin pull rates, and edge error rates across CloudFront, Akamai, or Fastly identifies misconfigured cache policies, cache-busting clients, and origin capacity risks before they affect user experience.\n\nDocumented **Data sources**: `sourcetype=\"aws:cloudfront:accesslogs\"`, `sourcetype=\"akamai:datastream\"`, `sourcetype=\"fastly:cdn\"`. **App/TA** (typical add-on context): Splunk Add-on for AWS (`Splunk_TA_aws`, Splunkbase 1876) for CloudFront; Akamai DataStream syslog; Fastly Real-Time Log Streaming via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cdn; **sourcetype**: aws:cloudfront:accesslogs. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cdn, sourcetype=\"aws:cloudfront:accesslogs\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cache_result** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by cs_uri_stem** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hit_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hit_rate < 80 OR errors > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (cache hit rate over time), Table (low-efficiency URIs), Pie chart (HIT/MISS/ERROR distribution).",
              "script": "",
              "premium": "",
              "hw": "AWS CloudFront, Akamai CDN, Fastly CDN",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your network gear — so outages, slowdowns, and risky changes get flagged in time to act.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Web"
              ],
              "e": [
                "asterisk",
                "aws",
                "syslog"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.8.27",
              "n": "CDN Edge Error Rate and 5xx Response Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Elevated 5xx error rates at the CDN edge indicate origin failures, capacity issues, or CDN configuration problems. Monitoring edge-delivered error rates across CDN providers ensures rapid detection of service degradation visible to end users.",
              "t": "`Splunk_TA_aws` (CloudFront), Akamai DataStream, Fastly Real-Time Log Streaming",
              "d": "`sourcetype=\"aws:cloudfront:accesslogs\"`, `sourcetype=\"akamai:datastream\"`, `sourcetype=\"fastly:cdn\"`",
              "q": "index=cdn (sourcetype=\"aws:cloudfront:accesslogs\" OR sourcetype=\"akamai:datastream\" OR sourcetype=\"fastly:cdn\")\n| eval http_status=coalesce(sc_status, status_code, response_status)\n| where http_status >= 500\n| timechart span=5m count by http_status",
              "m": "Ingest CDN access logs from all providers. Alert when 5xx error rate exceeds 1% of total requests in any 5-minute window. Differentiate origin 5xx from CDN-generated 5xx (e.g., CloudFront 502 vs 503).",
              "z": "Line chart (5xx rate over time), Single value (current error rate), Table (top error URIs).",
              "kfp": "RPZ and firewall hits spike during false-positive list updates; tune lists and see Infoblox threat analytics before major incident call.",
              "refs": "[AWS CloudFront error troubleshooting](https://docs.aws.amazon.com/cloudfront/latest/APIReference/errors.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws` (CloudFront), Akamai DataStream, Fastly Real-Time Log Streaming.\n• Ensure the following data sources are available: `sourcetype=\"aws:cloudfront:accesslogs\"`, `sourcetype=\"akamai:datastream\"`, `sourcetype=\"fastly:cdn\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CDN access logs from all providers. Alert when 5xx error rate exceeds 1% of total requests in any 5-minute window. Differentiate origin 5xx from CDN-generated 5xx (e.g., CloudFront 502 vs 503).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cdn (sourcetype=\"aws:cloudfront:accesslogs\" OR sourcetype=\"akamai:datastream\" OR sourcetype=\"fastly:cdn\")\n| eval http_status=coalesce(sc_status, status_code, response_status)\n| where http_status >= 500\n| timechart span=5m count by http_status\n```\n\nUnderstanding this SPL\n\n**CDN Edge Error Rate and 5xx Response Monitoring** — Elevated 5xx error rates at the CDN edge indicate origin failures, capacity issues, or CDN configuration problems. Monitoring edge-delivered error rates across CDN providers ensures rapid detection of service degradation visible to end users.\n\nDocumented **Data sources**: `sourcetype=\"aws:cloudfront:accesslogs\"`, `sourcetype=\"akamai:datastream\"`, `sourcetype=\"fastly:cdn\"`. **App/TA** (typical add-on context): `Splunk_TA_aws` (CloudFront), Akamai DataStream, Fastly Real-Time Log Streaming. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cdn; **sourcetype**: aws:cloudfront:accesslogs, akamai:datastream, fastly:cdn. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cdn, sourcetype=\"aws:cloudfront:accesslogs\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **http_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where http_status >= 500` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by http_status** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (5xx rate over time), Single value (current error rate), Table (top error URIs).",
              "script": "",
              "premium": "",
              "hw": "AWS CloudFront, Akamai CDN, Fastly CDN",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We help you see when Infoblox blocks bad DNS in real time, so you can act while the event is still fresh.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "Web"
              ],
              "e": [
                "asterisk",
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.6,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 26,
            "none": 0
          }
        },
        {
          "i": "5.9",
          "n": "Cisco ThousandEyes",
          "u": [
            {
              "i": "5.9.1",
              "n": "Network Latency Monitoring (Agent-to-Server)",
              "c": "critical",
              "f": "beginner",
              "v": "Tracks round-trip latency from ThousandEyes agents to target servers, revealing network path degradation before users report slowness.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics",
              "q": "`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| stats avg(network.latency) as avg_latency_s max(network.latency) as max_latency_s by thousandeyes.source.agent.name, server.address\n| eval avg_latency_ms=round(avg_latency_s*1000,1), max_latency_ms=round(max_latency_s*1000,1)\n| where avg_latency_ms > 100\n| sort -avg_latency_ms",
              "m": "Install the Cisco ThousandEyes App for Splunk and configure the Tests Stream — Metrics input with HEC. Select the Agent-to-Server tests to stream. Update the `stream_index` macro to point to the correct index. The OTel metric `network.latency` reports maximum round-trip time in seconds.",
              "z": "Line chart (latency per agent over time), Single value (avg latency), Table (agent, server, latency).",
              "kfp": "**ISP peak-hour congestion.** Latency regularly rises 20–50% during business hours on shared last-mile or transit links (typically 08:00–10:00 and 16:00–18:00 local time). Distinguish from a real fault by checking whether the elevation affects all agents in the same metro or ISP, and whether it follows a consistent daily pattern over 7+ days. Tune by raising the threshold to 150 ms for known congested paths, or by adding a `| where avg_latency_ms > perc95_baseline` comparison against a saved baseline lookup.\n\n**BGP route change transients.** During BGP convergence (path change, peer flap, or maintenance), latency can spike for 1–3 test rounds as traffic takes a suboptimal path. Distinguish by correlating with UC-5.9.9 (BGP Path Change Trending) — if a path change and latency spike overlap within a 5-minute window and latency recovers, it was convergence. Suppress by requiring two consecutive rounds above threshold before alerting.\n\n**Cloud provider maintenance windows.** AWS, Azure, and GCP periodically perform network maintenance that temporarily increases latency to targets hosted in those clouds. Distinguish by checking cloud provider status pages and correlating with UC-5.9.31 (Multi-Cloud Network Performance). Suppress by maintaining a `cloud_maintenance_windows` lookup and excluding matching time ranges.\n\n**Agent host resource contention.** Enterprise Agents running on VMs with insufficient CPU or memory can report inflated latency due to scheduling delays in the agent process itself, not actual network delay. Distinguish by comparing Cloud Agent results to the same target — if Cloud Agents show normal latency while the Enterprise Agent shows high latency, the agent host is the bottleneck. Fix by allocating dedicated vCPU and memory to the agent VM.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall the Cisco ThousandEyes App for Splunk and configure the Tests Stream — Metrics input with HEC. Select the Agent-to-Server tests to stream. Update the `stream_index` macro to point to the correct index. The OTel metric `network.latency` reports maximum round-trip time in seconds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| stats avg(network.latency) as avg_latency_s max(network.latency) as max_latency_s by thousandeyes.source.agent.name, server.address\n| eval avg_latency_ms=round(avg_latency_s*1000,1), max_latency_ms=round(max_latency_s*1000,1)\n| where avg_latency_ms > 100\n| sort -avg_latency_ms\n```\n\nUnderstanding this SPL\n\n**Network Latency Monitoring (Agent-to-Server)** — Tracks round-trip latency from ThousandEyes agents to target servers, revealing network path degradation before users report slowness.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_latency_ms > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency per agent over time), Single value (avg latency), Table (agent, server, latency).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how long it takes our tests to reach important servers, so we notice a slowing internet path before people start complaining about slow apps or choppy calls.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.2",
              "n": "Network Packet Loss Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Packet loss directly degrades application performance, voice quality, and video conferencing. Even 1% loss can cause noticeable user impact.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics",
              "q": "`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| stats avg(network.loss) as avg_loss max(network.loss) as max_loss by thousandeyes.source.agent.name, server.address\n| where avg_loss > 0.5\n| sort -avg_loss",
              "m": "Configure Agent-to-Server tests in ThousandEyes and stream metrics to Splunk via HEC. The OTel metric `network.loss` reports packet loss as a percentage. Alert when average loss exceeds 0.5% for critical paths.",
              "z": "Line chart (loss % over time per agent/server), Single value (current loss), Table sorted by loss.",
              "kfp": "**Last-mile ISP congestion during peak hours.** Shared cable/DSL/fibre-to-the-building last-mile links commonly drop 0.5–2% of packets during evening peaks (18:00–22:00 local) or business-hour peaks for office circuits. Distinguish from a circuit fault by checking whether the loss follows a daily pattern (`| timechart span=1h avg(network.loss) by date_hour` over 7 days) and whether it affects all agents on the same ISP. Tune by raising the threshold to 1% for known shared-media paths.\n\n**Cloud provider maintenance.** AWS, Azure, and GCP occasionally perform network maintenance that causes transient packet loss (typically < 5 minutes). Distinguish by correlating with cloud provider status pages and checking whether loss affects multiple agents targeting the same cloud region simultaneously. Suppress with a `cloud_maintenance_windows` lookup.\n\n**ICMP deprioritization.** Some ISPs and firewalls deprioritize or rate-limit ICMP packets, which Agent-to-Server tests use by default. This makes the measured loss higher than what TCP/UDP application traffic actually experiences on the same path. Distinguish by comparing with HTTP Server test results (UC-5.9.34) — if HTTP availability is 100% but network loss shows 2%, ICMP is being deprioritized. Fix by switching the ThousandEyes test to TCP mode on a specific port.\n\n**Enterprise Agent host packet drops.** If the Enterprise Agent VM's virtual NIC buffer is undersized or the host hypervisor is overcommitted, the agent itself drops packets before they reach the network. Distinguish by comparing Cloud Agent results to the same target — if Cloud Agents show 0% loss while the Enterprise Agent shows loss, the problem is local. Fix by increasing the VM's NIC ring buffer (`ethtool -G eth0 rx 4096`) or reducing hypervisor overcommit.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Agent-to-Server tests in ThousandEyes and stream metrics to Splunk via HEC. The OTel metric `network.loss` reports packet loss as a percentage. Alert when average loss exceeds 0.5% for critical paths.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| stats avg(network.loss) as avg_loss max(network.loss) as max_loss by thousandeyes.source.agent.name, server.address\n| where avg_loss > 0.5\n| sort -avg_loss\n```\n\nUnderstanding this SPL\n\n**Network Packet Loss Monitoring** — Packet loss directly degrades application performance, voice quality, and video conferencing. Even 1% loss can cause noticeable user impact.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_loss > 0.5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (loss % over time per agent/server), Single value (current loss), Table sorted by loss.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We track packet loss on the paths that matter, because even a tiny amount of lost data can make calls choppy, video freeze, and web pages load wrong.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.3",
              "n": "Network Jitter Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Jitter (variation in packet delay) directly affects real-time applications like VoIP and video. High jitter degrades voice quality even when latency is acceptable.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics",
              "q": "`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| stats avg(network.jitter) as avg_jitter_ms max(network.jitter) as max_jitter_ms by thousandeyes.source.agent.name, server.address\n| where avg_jitter_ms > 30\n| sort -avg_jitter_ms",
              "m": "The OTel metric `network.jitter` reports the standard deviation of round-trip times in milliseconds. Jitter above 30 ms typically degrades voice quality. Correlate with `network.latency` and `network.loss` for a complete path quality picture.",
              "z": "Line chart (jitter ms over time), Combined chart (latency + jitter + loss), Table.",
              "kfp": "**Wi-Fi last-mile variability.** Enterprise Agents or endpoint agents connected via Wi-Fi inherently show higher jitter (20–60 ms) than wired connections due to radio contention, retransmissions, and roaming. Distinguish from a real WAN issue by checking whether the same target shows low jitter from a wired Cloud Agent. Do not use Wi-Fi-connected agents as the authority for WAN jitter — use them for endpoint experience monitoring (UC-5.9.24) instead.\n\n**Bufferbloat on consumer-grade routers.** Consumer and SOHO routers with large, unmanaged buffers (common on cable/DSL modems) introduce variable queuing delay that shows up as jitter. The mean latency may look normal because the buffer absorbs bursts, but jitter spikes during any concurrent traffic. Distinguish by checking whether jitter correlates with time-of-day patterns and whether it improves when other traffic on the same circuit is reduced.\n\n**VPN encryption overhead.** Traffic traversing IPsec or SSL VPN tunnels can show elevated jitter due to variable encryption processing time on the VPN concentrator. Distinguish by comparing jitter to the same target with and without VPN (requires a test from outside the VPN). This is particularly noticeable on undersized firewall/VPN appliances handling high throughput.\n\n**QoS remarking at transit boundaries.** When traffic crosses from a DSCP-marked QoS domain into a best-effort transit network, the loss of priority scheduling causes jitter to increase abruptly at the boundary hop. Distinguish by correlating with Path Visualization (UC-5.9.33) — if jitter jumps at a specific hop that coincides with an ISP handoff, QoS policy is the cause.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe OTel metric `network.jitter` reports the standard deviation of round-trip times in milliseconds. Jitter above 30 ms typically degrades voice quality. Correlate with `network.latency` and `network.loss` for a complete path quality picture.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| stats avg(network.jitter) as avg_jitter_ms max(network.jitter) as max_jitter_ms by thousandeyes.source.agent.name, server.address\n| where avg_jitter_ms > 30\n| sort -avg_jitter_ms\n```\n\nUnderstanding this SPL\n\n**Network Jitter Monitoring** — Jitter (variation in packet delay) directly affects real-time applications like VoIP and video. High jitter degrades voice quality even when latency is acceptable.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_jitter_ms > 30` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (jitter ms over time), Combined chart (latency + jitter + loss), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We watch how much packet timing wiggles on our network paths, because uneven delay makes voice calls crackle and video freeze even when the connection speed looks fine.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.4",
              "n": "Agent-to-Agent Latency and Throughput",
              "c": "high",
              "f": "intermediate",
              "v": "Measures bidirectional network performance between two ThousandEyes agents, useful for assessing site-to-site WAN link quality and SD-WAN overlay performance.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics",
              "q": "`stream_index` thousandeyes.test.type=\"agent-to-agent\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter_ms by thousandeyes.source.agent.name, thousandeyes.target.agent.name, network.io.direction\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| sort thousandeyes.source.agent.name, network.io.direction",
              "m": "Create Agent-to-Agent tests in ThousandEyes between sites and stream metrics. The `network.io.direction` attribute distinguishes `transmit`, `receive`, and `round-trip` measurements. Compare forward and reverse paths to identify asymmetric routing issues.",
              "z": "Table (source agent, target agent, direction, latency, loss, jitter), Line chart per direction.",
              "kfp": "**Asymmetric routing causing directional disparity.** On many WAN links — especially those with multiple ISP uplinks or SD-WAN overlays — the forward and return paths take different routes through different ISPs. This makes TX and RX latency/loss differ significantly even when both paths are healthy. Distinguish from a real problem by baselining each direction independently over 7 days. If the asymmetry is stable and both directions meet SLA, it's by design.\n\n**Enterprise Agent maintenance or reboot.** When one agent in the pair is rebooted (OS update, agent software upgrade), the test produces loss=100% and high latency for the duration. Distinguish by checking whether the agent's status in ThousandEyes → Agent Settings shows a restart event in the same time window. Suppress by requiring > 3 consecutive rounds of degradation before alerting.\n\n**SD-WAN path selection oscillation.** SD-WAN controllers (Cisco vManage, Arista, Fortinet) continuously optimize path selection based on real-time quality. When the controller switches the active WAN link, Agent-to-Agent latency can spike for 1–2 rounds as the new path warms up. This is normal SD-WAN behavior. Distinguish by correlating with SD-WAN tunnel change events from UC-5.5.x.\n\n**One-sided congestion at branch offices.** Branch offices often have asymmetric bandwidth (e.g., 100 Mbps down / 20 Mbps up). The TX direction from the branch shows higher loss during peak hours because the upload circuit saturates. This is a capacity issue, not a fault — correlate with circuit utilization data.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate Agent-to-Agent tests in ThousandEyes between sites and stream metrics. The `network.io.direction` attribute distinguishes `transmit`, `receive`, and `round-trip` measurements. Compare forward and reverse paths to identify asymmetric routing issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"agent-to-agent\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter_ms by thousandeyes.source.agent.name, thousandeyes.target.agent.name, network.io.direction\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| sort thousandeyes.source.agent.name, network.io.direction\n```\n\nUnderstanding this SPL\n\n**Agent-to-Agent Latency and Throughput** — Measures bidirectional network performance between two ThousandEyes agents, useful for assessing site-to-site WAN link quality and SD-WAN overlay performance.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name, thousandeyes.target.agent.name, network.io.direction** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (source agent, target agent, direction, latency, loss, jitter), Line chart per direction.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We check how our office-to-office links perform in both directions, so we can tell exactly which side of the connection has a problem when things slow down.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.5",
              "n": "Path Hop Count Analysis",
              "c": "medium",
              "f": "intermediate",
              "v": "Sudden changes in the number of hops to a target can indicate routing changes, path instability, or sub-optimal traffic engineering. The Splunk App provides min-hop drilldowns on the Network dashboard.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes Path Visualization data",
              "q": "`path_viz_index` thousandeyes.test.type=\"agent-to-server\"\n| stats min(hop_count) as min_hops max(hop_count) as max_hops by thousandeyes.source.agent.name, server.address\n| where max_hops - min_hops > 2\n| sort -max_hops",
              "m": "Enable \"Include Network Path Data\" in the Tests Stream — Metrics input configuration. Update the `path_viz_index` macro to the correct index. Path Visualization data is collected at a configurable interval via the ThousandEyes API.",
              "z": "Single value (min hops per target), Table (agent, server, min hops, max hops), Line chart (hop count trending).",
              "kfp": "**ECMP load balancing.** When traffic crosses a network with Equal-Cost Multi-Path routing (common in data centers and large ISP backbones), consecutive test probes may take different paths with different hop counts. The hop count fluctuates by 1–2 hops continuously — this is normal ECMP behavior, not instability. Distinguish by checking whether the variation is consistent and small (±1–2 hops) over days without any correlated latency impact. Raise the threshold to `> 3` for paths known to traverse ECMP networks.\n\n**Carrier maintenance windows.** Carriers routinely reroute traffic during maintenance, temporarily increasing hop counts. This typically happens during off-peak hours (02:00–06:00 local) and lasts 2–4 hours. Distinguish by checking whether the hop count change is temporary and reverts to baseline after the window. Suppress with a `carrier_maintenance_windows` lookup.\n\n**Anycast destination changes.** If the target server uses anycast (common for CDNs, DNS resolvers like 8.8.8.8, and cloud load balancers), the test may reach different physical servers at different times, each with different hop counts. This is normal anycast behavior. Distinguish by checking whether `server.address` resolves to different IPs across rounds (anycast) or the same IP (routing change).",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes Path Visualization data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable \"Include Network Path Data\" in the Tests Stream — Metrics input configuration. Update the `path_viz_index` macro to the correct index. Path Visualization data is collected at a configurable interval via the ThousandEyes API.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`path_viz_index` thousandeyes.test.type=\"agent-to-server\"\n| stats min(hop_count) as min_hops max(hop_count) as max_hops by thousandeyes.source.agent.name, server.address\n| where max_hops - min_hops > 2\n| sort -max_hops\n```\n\nUnderstanding this SPL\n\n**Path Hop Count Analysis** — Sudden changes in the number of hops to a target can indicate routing changes, path instability, or sub-optimal traffic engineering. The Splunk App provides min-hop drilldowns on the Network dashboard.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes Path Visualization data. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `path_viz_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where max_hops - min_hops > 2` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min hops per target), Table (agent, server, min hops, max hops), Line chart (hop count trending).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We count how many stops our traffic makes on the way to its destination, because when the number suddenly changes it means the internet took a different road — and that new road might be slower.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.6",
              "n": "Network Path Change Detection",
              "c": "high",
              "f": "advanced",
              "v": "Detects when the network path between an agent and a target changes, which can indicate routing instability, ISP re-routing, or failover events. Correlating path changes with latency spikes helps isolate root cause.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes Path Visualization data",
              "q": "`path_viz_index` thousandeyes.test.type=\"agent-to-server\"\n| stats dc(path_hash) as unique_paths count by thousandeyes.source.agent.name, server.address\n| where unique_paths > 1\n| sort -unique_paths",
              "m": "Path Visualization data must be enabled in the Tests Stream input. This use case requires building a path fingerprint (hash of intermediate hops) over time windows to detect when routes shift. Correlate with `network.latency` from the metrics stream to identify performance-impacting path changes.",
              "z": "Timeline (path changes over time), Table (agent, server, unique paths), Drilldown to ThousandEyes via `thousandeyes.permalink`.",
              "kfp": "**ECMP path rotation.** On networks with Equal-Cost Multi-Path routing, consecutive test probes may take different paths through the same router's multiple egress interfaces. These are parallel paths through the same network, not routing changes. Distinguish by checking whether the paths diverge at the ECMP boundary and reconverge — this is different from a path that shifts to an entirely different ISP. Reduce noise by ignoring path changes where the first and last 3 hops remain the same (only the middle varies).\n\n**Cloud provider internal routing optimization.** AWS, Azure, and GCP continuously optimize internal routing between availability zones and PoPs. Path visualization to cloud targets may show frequent changes as the cloud provider's SDN fabric rebalances. Distinguish by checking whether path changes occur after a cloud provider PoP boundary (the hops inside the cloud provider's network change, but the hops up to the cloud ingress remain stable). Generally benign if latency is stable.\n\n**DNS-based load balancing to different backends.** If the target hostname resolves to different backend IPs via DNS load balancing, each resolution may route to a different physical server with a different path. Distinguish by checking whether `server.address` changes across rounds — if it does, the path change is expected. Pin the test to a specific IP if you want path stability monitoring for a specific backend.\n\n**ThousandEyes agent network interface change.** If the Enterprise Agent host has multiple network interfaces and the default route flips (e.g., DHCP renewal assigns a different gateway), the first hop changes and the entire path looks different. Distinguish by checking whether the first hop (`hop_1` IP) changed — this indicates a source-side routing change, not a WAN path change.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes Path Visualization data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPath Visualization data must be enabled in the Tests Stream input. This use case requires building a path fingerprint (hash of intermediate hops) over time windows to detect when routes shift. Correlate with `network.latency` from the metrics stream to identify performance-impacting path changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`path_viz_index` thousandeyes.test.type=\"agent-to-server\"\n| stats dc(path_hash) as unique_paths count by thousandeyes.source.agent.name, server.address\n| where unique_paths > 1\n| sort -unique_paths\n```\n\nUnderstanding this SPL\n\n**Network Path Change Detection** — Detects when the network path between an agent and a target changes, which can indicate routing instability, ISP re-routing, or failover events. Correlating path changes with latency spikes helps isolate root cause.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes Path Visualization data. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `path_viz_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_paths > 1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (path changes over time), Table (agent, server, unique paths), Drilldown to ThousandEyes via `thousandeyes.permalink`.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We watch the exact route our traffic takes across the internet, and when it suddenly switches to a different road we flag it — because a new route might go through a congested or unreliable area.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.7",
              "n": "WAN Link Quality Scoring",
              "c": "high",
              "f": "intermediate",
              "v": "Composite quality score derived from latency, loss, and jitter provides a single metric for WAN link health, simplifying executive reporting and SLA tracking.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics",
              "q": "`stream_index` thousandeyes.test.type=\"agent-to-server\" OR thousandeyes.test.type=\"agent-to-agent\"\n| stats avg(network.latency) as avg_lat avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter by thousandeyes.source.agent.name, server.address\n| eval latency_score=if(avg_lat<0.05,100,if(avg_lat<0.1,80,if(avg_lat<0.2,60,if(avg_lat<0.5,40,20))))\n| eval loss_score=if(avg_loss<0.1,100,if(avg_loss<0.5,80,if(avg_loss<1,60,if(avg_loss<3,40,20))))\n| eval jitter_score=if(avg_jitter<5,100,if(avg_jitter<15,80,if(avg_jitter<30,60,if(avg_jitter<50,40,20))))\n| eval quality_score=round((latency_score*0.4 + loss_score*0.35 + jitter_score*0.25),0)\n| sort quality_score",
              "m": "For Endpoint agents the OTel metric `network.score` provides a pre-computed composite. For Cloud and Enterprise Agent tests, calculate a weighted score from latency, loss, and jitter as shown. Adjust weights and thresholds for your SLA requirements.",
              "z": "Gauge (quality score per link), Table (all links ranked), Trend line chart.",
              "kfp": "**Score dominated by one bad dimension.** The weighted average can mask a single critical metric. A path with 10 ms latency (score 100), 0% loss (score 100), and 80 ms jitter (score 20) gets a composite score of 65 — \"yellow\" — which doesn't convey that the path is completely unusable for VoIP. Mitigate by also displaying the component scores alongside the composite, or adding a rule: `| eval quality_score=if(latency_score<40 OR loss_score<40 OR jitter_score<40, min(quality_score, 40), quality_score)` — this caps the composite at 40 if ANY dimension is critically bad.\n\n**Thresholds inappropriate for specific path types.** The default thresholds assume WAN paths between offices. Same-datacenter paths should score 100; satellite paths should use relaxed thresholds (latency 600+ ms is normal, not bad). Consider maintaining a `thousandeyes_path_profiles` lookup with per-path-type threshold overrides.\n\n**v1/v2 unit mismatch invalidates thresholds.** If the data is OTel v1 format, `network.latency` is in milliseconds (not seconds). The latency tier thresholds (`< 0.05` seconds) would incorrectly classify a 50 ms path as scoring 20 (bad) when it should be 100 (excellent). Always verify the unit — see UC-5.9.1 Step 5 for the v1/v2 check.\n\n**Equal weighting may not match business priority.** The default 40/35/25 weighting assumes a general-purpose network. If your WAN primarily carries VoIP, increase jitter weight to 35% and reduce latency weight to 30%. If it primarily carries bulk replication, increase latency weight to 50% and reduce jitter weight to 15%. There is no universal \"correct\" weighting.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFor Endpoint agents the OTel metric `network.score` provides a pre-computed composite. For Cloud and Enterprise Agent tests, calculate a weighted score from latency, loss, and jitter as shown. Adjust weights and thresholds for your SLA requirements.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"agent-to-server\" OR thousandeyes.test.type=\"agent-to-agent\"\n| stats avg(network.latency) as avg_lat avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter by thousandeyes.source.agent.name, server.address\n| eval latency_score=if(avg_lat<0.05,100,if(avg_lat<0.1,80,if(avg_lat<0.2,60,if(avg_lat<0.5,40,20))))\n| eval loss_score=if(avg_loss<0.1,100,if(avg_loss<0.5,80,if(avg_loss<1,60,if(avg_loss<3,40,20))))\n| eval jitter_score=if(avg_jitter<5,100,if(avg_jitter<15,80,if(avg_jitter<30,60,if(avg_jitter<50,40,20))))\n| eval quality_score=round((latency_score*0.4 + loss_score*0.35 + jitter_score*0.25),0)\n| sort quality_score\n```\n\nUnderstanding this SPL\n\n**WAN Link Quality Scoring** — Composite quality score derived from latency, loss, and jitter provides a single metric for WAN link health, simplifying executive reporting and SLA tracking.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **latency_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **loss_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **jitter_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **quality_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (quality score per link), Table (all links ranked), Trend line chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We combine delay, packet loss, and jitter into one easy-to-read score for each network link — like a school grade from A to F — so anyone in the company can tell at a glance whether the network is healthy or needs attention.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.8",
              "n": "BGP Reachability Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Monitors whether BGP-advertised prefixes are reachable from global vantage points. Loss of reachability means users in affected regions cannot reach your services.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests)",
              "q": "`stream_index` thousandeyes.test.type=\"bgp\"\n| stats avg(bgp.reachability) as avg_reachability by thousandeyes.monitor.name, network.prefix\n| where avg_reachability < 100\n| sort avg_reachability",
              "m": "Create BGP tests in ThousandEyes for your critical prefixes and stream to Splunk. The OTel metric `bgp.reachability` reports a percentage — 100% means the prefix is reachable from that monitor. The Splunk App Network dashboard includes a BGP Reachability map panel.",
              "z": "Map (BGP reachability by monitor location), Single value (overall reachability %), Table (monitor, prefix, reachability).",
              "kfp": "**Planned prefix withdrawal for maintenance.** When your network team intentionally withdraws a prefix for maintenance (e.g., renumbering, migrating to a new upstream), reachability drops to 0% as expected. Distinguish by cross-referencing with your change management system. Suppress with a `bgp_maintenance_windows` lookup keyed on `network.prefix`.\n\n**Monitor-specific reachability dips.** Individual BGP monitors may temporarily lose visibility of a prefix due to their own peering issues — an ISP's route server rebooting, a peering session flapping at the IXP. If only 1–2 monitors out of 300 lose reachability while the rest show 100%, the prefix is fine and the monitor is the problem. Distinguish by checking `min_reachability` vs `avg_reachability` — if avg is 99.5% and min is 95%, it's isolated monitor issues.\n\n**Sub-prefix vs aggregate prefix routing.** If you announce both a /24 and a covering /22, some monitors may prefer the aggregate and not track the sub-prefix. Reachability for the sub-prefix may fluctuate even though the aggregate is 100% reachable. Verify by monitoring both the sub-prefix and the aggregate.\n\n**RPKI ROA validation rejecting your announcement.** If your prefix's RPKI ROA (Route Origin Authorization) is expired, misconfigured, or doesn't match your origin ASN, ISPs that enforce RPKI validation will reject your announcement, causing reachability to drop from those ISPs. This is not a false positive per se — it's a real configuration issue — but it's not an attack or outage. Check your RPKI ROA status at rpki-validator.ripe.net.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate BGP tests in ThousandEyes for your critical prefixes and stream to Splunk. The OTel metric `bgp.reachability` reports a percentage — 100% means the prefix is reachable from that monitor. The Splunk App Network dashboard includes a BGP Reachability map panel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"bgp\"\n| stats avg(bgp.reachability) as avg_reachability by thousandeyes.monitor.name, network.prefix\n| where avg_reachability < 100\n| sort avg_reachability\n```\n\nUnderstanding this SPL\n\n**BGP Reachability Monitoring** — Monitors whether BGP-advertised prefixes are reachable from global vantage points. Loss of reachability means users in affected regions cannot reach your services.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.monitor.name, network.prefix** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_reachability < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (BGP reachability by monitor location), Single value (overall reachability %), Table (monitor, prefix, reachability).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We check whether the addresses that point to our company can be found by routers around the world. If our address disappears from the internet's directory in some region, we catch it immediately — because until it's back, people in that region simply cannot reach us at all.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.9",
              "n": "BGP Path Change Trending",
              "c": "high",
              "f": "beginner",
              "v": "BGP path changes indicate routing instability. Frequent path changes can cause traffic to take sub-optimal routes, increasing latency or traversing unexpected transit providers.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests)",
              "q": "`stream_index` thousandeyes.test.type=\"bgp\"\n| timechart span=1h sum(bgp.path_changes.count) as path_changes by thousandeyes.monitor.name",
              "m": "The OTel metric `bgp.path_changes.count` tracks the number of route changes per collection interval. The Splunk App Network dashboard includes a \"BGP Path Changes Count\" line chart. Correlate spikes with ISP maintenance windows or upstream provider issues.",
              "z": "Line chart (path changes over time per monitor), Bar chart (total changes per monitor), Table with drilldown.",
              "kfp": "**Carrier peering changes and traffic engineering.** Large ISPs routinely adjust BGP policies — prepending, community changes, peer preference shifts — during maintenance windows, typically at off-peak hours. These produce bursts of path changes that are planned and benign. Distinguish by checking whether the changes occur during known ISP maintenance windows and whether reachability remains at 100%.\n\n**Initial BGP test setup.** When a new BGP test is created, monitors need one or two collection cycles to establish a baseline path. The first few rounds may report path changes as the monitor's RIB converges. Ignore path changes in the first 30 minutes after test creation.\n\n**Internet backbone events (submarine cable cuts, IX outages).** Major internet infrastructure events cause global BGP path changes across all monitors and prefixes simultaneously. If you see a spike across ALL prefixes and ALL monitors at the same time, it's likely a backbone event, not specific to your prefix. Check BGP event feeds (bgpstream.com, ThousandEyes Outages page) to confirm.\n\n**AS path prepending changes by your own team.** If your network engineering team changes BGP prepending on your announcements, monitors will observe path changes as ISPs re-converge to the new best path. Correlate with your change management system.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe OTel metric `bgp.path_changes.count` tracks the number of route changes per collection interval. The Splunk App Network dashboard includes a \"BGP Path Changes Count\" line chart. Correlate spikes with ISP maintenance windows or upstream provider issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"bgp\"\n| timechart span=1h sum(bgp.path_changes.count) as path_changes by thousandeyes.monitor.name\n```\n\nUnderstanding this SPL\n\n**BGP Path Change Trending** — BGP path changes indicate routing instability. Frequent path changes can cause traffic to take sub-optimal routes, increasing latency or traversing unexpected transit providers.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by thousandeyes.monitor.name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (path changes over time per monitor), Bar chart (total changes per monitor), Table with drilldown.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We track how often the internet changes the route it uses to reach our addresses. A sudden burst of route changes is like seeing all the road signs on the highway flip at once — it usually means something just happened in the network, and we need to check whether it's causing problems.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.10",
              "n": "BGP Update Volume Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "High BGP update volumes can indicate route flapping, peer instability, or DDoS-related route manipulation. Trending helps establish baselines and detect anomalies.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests)",
              "q": "`stream_index` thousandeyes.test.type=\"bgp\"\n| timechart span=1h sum(bgp.updates.count) as bgp_updates by thousandeyes.monitor.name",
              "m": "The OTel metric `bgp.updates.count` tracks the number of BGP updates. The Splunk App Network dashboard includes a \"BGP Updates Count\" line chart. Set alerts when update volume exceeds 3 standard deviations from baseline.",
              "z": "Line chart (updates over time), Single value (current update rate), Table (monitor, prefix, update count).",
              "kfp": "**Initial BGP test setup or monitor reconnection.** When a monitor reconnects to its BGP peer (e.g., after maintenance or a software update), it receives a full routing table dump, which generates a burst of UPDATEs for all monitored prefixes. Distinguish by checking whether the spike affects all prefixes from a single monitor simultaneously and whether it resolves within 1–2 collection intervals.\n\n**Internet-wide BGP events.** Submarine cable cuts, large ISP peering changes, or major cloud provider routing updates can cause global UPDATE spikes. If the spike affects all monitored prefixes equally, it's a backbone event. Check bgpstream.com, ThousandEyes Outages, and ISP status pages.\n\n**Prefix aggregation/deaggregation changes.** When your network team changes prefix aggregation (e.g., withdrawing a /24 and replacing with a covering /22, or vice versa), monitors see withdraw+announce UPDATEs. Correlate with change management.\n\n**BGP graceful restart.** Some routers send route-refresh or soft-reconfiguration UPDATEs during planned maintenance. These are benign but show as UPDATE volume spikes.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe OTel metric `bgp.updates.count` tracks the number of BGP updates. The Splunk App Network dashboard includes a \"BGP Updates Count\" line chart. Set alerts when update volume exceeds 3 standard deviations from baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"bgp\"\n| timechart span=1h sum(bgp.updates.count) as bgp_updates by thousandeyes.monitor.name\n```\n\nUnderstanding this SPL\n\n**BGP Update Volume Tracking** — High BGP update volumes can indicate route flapping, peer instability, or DDoS-related route manipulation. Trending helps establish baselines and detect anomalies.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by thousandeyes.monitor.name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (updates over time), Single value (current update rate), Table (monitor, prefix, update count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We count how many routing messages the internet exchanges about our addresses. A sudden flood of messages means the routers are arguing about how to reach us, and that argument can slow down or break the connection for our users.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.11",
              "n": "BGP AS Path Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Tracking AS path changes reveals when traffic is routed through unexpected autonomous systems, which can indicate route leaks, hijacks, or ISP peering changes.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests)",
              "q": "`stream_index` thousandeyes.test.type=\"bgp\"\n| stats dc(network.as.path) as unique_paths values(network.as.path) as as_paths by network.prefix, thousandeyes.monitor.name\n| where unique_paths > 1\n| sort -unique_paths",
              "m": "The OTel attribute `network.as.path` provides the full AS path as a space-separated list of ASNs. By tracking distinct AS paths over time for each prefix and monitor, you can detect when routing changes introduce new transit providers. Combine with `bgp.path_changes.count` spikes to focus investigation.",
              "z": "Table (prefix, monitor, AS paths seen), Timeline of path changes, Alert on new AS path appearance.",
              "kfp": "**Normal upstream ISP path selection variation.** When your prefix is multihomed through multiple ISPs, different monitors will naturally see different AS paths (via ISP-A vs ISP-B). The number of unique paths for a multihomed prefix is typically 2–5 and stable over time. This is not a path change — it's multiple simultaneous paths. Distinguish by checking whether the `dc(network.as.path)` count is consistent with your known upstream count.\n\n**AS path prepending changes.** When your network team or an upstream ISP adds or removes AS path prepending (e.g., changing from `64496 64496 64496` to `64496`), the path changes in length but the ASNs in the path remain the same. This is benign traffic engineering. Distinguish by checking whether the same ASNs are present in both the old and new paths, just with different repetition.\n\n**Transit provider swap by upstream ISP.** Your ISP may change its upstream transit provider (e.g., from Cogent to Lumen), causing the transit ASN in the path to change. The origin AS remains yours, and reachability is maintained. Distinguish by verifying the origin AS is still your AS and that reachability (UC-5.9.8) is 100%.\n\n**IXP route server ASN visibility.** At some Internet Exchange Points, the route server's ASN appears briefly in the path during convergence, then is removed. This produces a transient path change that's an artifact of the exchange point's route server configuration.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe OTel attribute `network.as.path` provides the full AS path as a space-separated list of ASNs. By tracking distinct AS paths over time for each prefix and monitor, you can detect when routing changes introduce new transit providers. Combine with `bgp.path_changes.count` spikes to focus investigation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"bgp\"\n| stats dc(network.as.path) as unique_paths values(network.as.path) as as_paths by network.prefix, thousandeyes.monitor.name\n| where unique_paths > 1\n| sort -unique_paths\n```\n\nUnderstanding this SPL\n\n**BGP AS Path Monitoring** — Tracking AS path changes reveals when traffic is routed through unexpected autonomous systems, which can indicate route leaks, hijacks, or ISP peering changes.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by network.prefix, thousandeyes.monitor.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_paths > 1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (prefix, monitor, AS paths seen), Timeline of path changes, Alert on new AS path appearance.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We watch which internet companies carry our traffic to make sure only the ones we've approved are handling it. If a stranger suddenly starts claiming they can reach us, that's like someone putting up a fake road sign — they might be trying to redirect our traffic through their network to spy on it or steal data.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.12",
              "n": "Prefix Reachability by Region",
              "c": "high",
              "f": "intermediate",
              "v": "Comparing BGP prefix reachability across geographic regions identifies regional outages or ISP-specific routing issues that affect only certain user populations.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests)",
              "q": "`stream_index` thousandeyes.test.type=\"bgp\"\n| stats avg(bgp.reachability) as avg_reachability by thousandeyes.monitor.location, network.prefix\n| eval region=case(\n    match(thousandeyes.monitor.location,\"US\\|CA\\|MX\\|BR\"),\"Americas\",\n    match(thousandeyes.monitor.location,\"GB\\|DE\\|FR\\|NL\"),\"EMEA\",\n    match(thousandeyes.monitor.location,\"JP\\|SG\\|AU\\|IN\"),\"APAC\",\n    1=1,\"Other\")\n| stats avg(avg_reachability) as regional_reachability by region, network.prefix\n| sort region, network.prefix",
              "m": "BGP monitors are distributed globally. Group reachability results by `thousandeyes.monitor.location` and aggregate into regions. A prefix that is 100% reachable in Americas but <80% in APAC indicates a regional routing problem.",
              "z": "Map (reachability by monitor location), Table (region, prefix, reachability), Column chart comparing regions.",
              "kfp": "**Sparse monitor coverage in a region.** If only 2–3 monitors exist in a region and one has a local peering issue, regional reachability drops to 50–67% even though the region is fine. Distinguish by checking `monitor_count` — if it's < 5, the regional average is unreliable. Consider adding the `monitor_count` to your alert criteria: `where regional_reachability < 95 AND monitor_count >= 5`.\n\n**Anycast prefix routing to different regional instances.** If your prefix is anycasted to different regions, some monitors may reach a regional instance while others reach a global instance. Reachability should still be 100%, but if a regional instance goes down, that region's reachability drops while other regions remain fine — this is a real outage (the anycast instance is down), not a false positive, but the root cause is your infrastructure rather than a BGP routing issue.\n\n**Country-code mapping inaccuracy.** The `match()` function uses simplified country-code patterns. Some ThousandEyes monitor locations include city names, states, or country codes that may not match your regex patterns. If a monitor doesn't match any region, it falls into \"Other,\" reducing coverage of the intended region. Verify by running `| stats count by region` and checking whether any monitors land in \"Other\" that should be in a specific region.\n\n**Regional ISP maintenance during off-peak hours.** ISPs in some regions (especially APAC) perform maintenance during local off-peak hours, causing brief reachability dips. These typically last 1–4 hours and occur during known maintenance windows. Suppress with a regional maintenance calendar lookup.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBGP monitors are distributed globally. Group reachability results by `thousandeyes.monitor.location` and aggregate into regions. A prefix that is 100% reachable in Americas but <80% in APAC indicates a regional routing problem.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"bgp\"\n| stats avg(bgp.reachability) as avg_reachability by thousandeyes.monitor.location, network.prefix\n| eval region=case(\n    match(thousandeyes.monitor.location,\"US\\|CA\\|MX\\|BR\"),\"Americas\",\n    match(thousandeyes.monitor.location,\"GB\\|DE\\|FR\\|NL\"),\"EMEA\",\n    match(thousandeyes.monitor.location,\"JP\\|SG\\|AU\\|IN\"),\"APAC\",\n    1=1,\"Other\")\n| stats avg(avg_reachability) as regional_reachability by region, network.prefix\n| sort region, network.prefix\n```\n\nUnderstanding this SPL\n\n**Prefix Reachability by Region** — Comparing BGP prefix reachability across geographic regions identifies regional outages or ISP-specific routing issues that affect only certain user populations.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (BGP tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.monitor.location, network.prefix** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **region** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by region, network.prefix** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (reachability by monitor location), Table (region, prefix, reachability), Column chart comparing regions.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We check whether people in different parts of the world — the Americas, Europe, and Asia — can all reach our addresses on the internet. Sometimes a cable break or regional outage makes us unreachable from one part of the world while everything looks fine from another, and this catches that before our customers in the affected region start complaining.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.13",
              "n": "DNS Availability Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "DNS failures cascade into application outages — if users cannot resolve names, nothing works. ThousandEyes DNS tests monitor availability from multiple global vantage points.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNS tests)",
              "q": "`stream_index` thousandeyes.test.type=\"dns-server\"\n| stats avg(dns.lookup.availability) as avg_availability by dns.question.name, server.address\n| where avg_availability < 100\n| sort avg_availability",
              "m": "Create DNS Server tests in ThousandEyes targeting critical domain names and DNS servers. The OTel metric `dns.lookup.availability` reports 100% when resolution succeeds and 0% on error. The Splunk App Network dashboard includes a \"DNS Availability (%)\" line chart with drilldown to ThousandEyes.",
              "z": "Line chart (availability % over time), Single value (current availability), Table (question, server, availability).",
              "kfp": "**DNS server rolling restart/maintenance.** When a DNS server reboots (e.g., BIND reload, Infoblox Grid maintenance), queries during the restart window fail. If the test interval is 1 minute and the restart takes 30 seconds, you'll see 1–2 rounds of 0% availability followed by full recovery. Distinguish by checking whether the failure is brief (< 5 minutes) and affects only one server address while other servers for the same domain remain at 100%. Suppress planned restarts with a `dns_maintenance_windows` lookup.\n\n**Transient network issues between agent and DNS server.** If the network path between a specific agent and the DNS server experiences packet loss, the DNS query may time out even though the DNS server is healthy. Distinguish by checking whether other agents querying the same DNS server show 100% availability. If only one agent shows failures, the problem is the network path, not the DNS server — correlate with UC-5.9.1/2 for that agent.\n\n**Anycast DNS server pool rotation.** DNS providers using anycast (Cloudflare 1.1.1.1, Google 8.8.8.8) route queries to different physical servers. If one anycast PoP is down, some agents may experience failures while others don't. This is a real DNS availability issue from the user's perspective (users near that PoP are affected), but it may resolve quickly as BGP routes withdraw the failing PoP.\n\n**Negative caching / NXDOMAIN for intentional test domains.** If the ThousandEyes test queries a domain that legitimately doesn't exist (misconfiguration in test setup), availability will always read 0%. Verify the `dns.question.name` is a real, resolvable domain.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNS tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate DNS Server tests in ThousandEyes targeting critical domain names and DNS servers. The OTel metric `dns.lookup.availability` reports 100% when resolution succeeds and 0% on error. The Splunk App Network dashboard includes a \"DNS Availability (%)\" line chart with drilldown to ThousandEyes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"dns-server\"\n| stats avg(dns.lookup.availability) as avg_availability by dns.question.name, server.address\n| where avg_availability < 100\n| sort avg_availability\n```\n\nUnderstanding this SPL\n\n**DNS Availability Monitoring** — DNS failures cascade into application outages — if users cannot resolve names, nothing works. ThousandEyes DNS tests monitor availability from multiple global vantage points.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNS tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by dns.question.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_availability < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (availability % over time), Single value (current availability), Table (question, server, availability).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We check whether our domain names can still be found on the internet — because if people can't look up our address, they can't reach us at all, even if everything else is working perfectly.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.14",
              "n": "DNS Resolution Time Trending",
              "c": "high",
              "f": "beginner",
              "v": "Slow DNS resolution adds latency to every connection. Trending resolution time helps identify degrading DNS infrastructure or inefficient recursive resolution chains.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNS tests)",
              "q": "`stream_index` thousandeyes.test.type=\"dns-server\"\n| timechart span=5m avg(dns.lookup.duration) as avg_dns_duration_s by dns.question.name\n| eval avg_dns_duration_ms=round(avg_dns_duration_s*1000,1)",
              "m": "The OTel metric `dns.lookup.duration` reports DNS resolve time in seconds. The Splunk App Network dashboard includes a \"DNS Duration (s)\" line chart. Alert when resolution time exceeds 200 ms consistently — this adds noticeable delay to every new connection.",
              "z": "Line chart (resolution time over time by domain), Table with drilldown to ThousandEyes.",
              "kfp": "**Cold cache effect.** The first query after a DNS server restarts or after a TTL expires requires full recursive resolution (root → TLD → authoritative), which takes 50–200 ms even for healthy DNS. Subsequent queries hit the cache and return in 1–5 ms. Distinguish by checking whether the slow lookups are isolated spikes (cold cache) vs sustained (real degradation). Use `avg` over 1 hour to smooth out cold-cache spikes.\n\n**Geo-distance to DNS server.** If an agent in Asia is querying a DNS server in the US, the network round-trip alone adds 150–250 ms to the resolution time. This is not a DNS problem — it's a network latency problem. Distinguish by correlating `dns.lookup.duration` with the network latency to the same server IP (UC-5.9.1). If DNS duration ≈ network latency, the DNS server is fast but far away.\n\n**Large DNS responses.** Queries for domains with many records (dozens of A records for load balancing, or large TXT records for SPF/DKIM) take longer to transfer over the network, even though the DNS server resolved them instantly. Distinguish by checking whether the affected domain has unusually large record sets.\n\n**DNS server CPU under load.** During peak query hours, a DNS server's resolution time increases due to CPU contention. This is a real capacity issue (not a false positive per se), but it's a different problem than a DNS configuration issue. Check if the degradation follows daily traffic patterns.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNS tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe OTel metric `dns.lookup.duration` reports DNS resolve time in seconds. The Splunk App Network dashboard includes a \"DNS Duration (s)\" line chart. Alert when resolution time exceeds 200 ms consistently — this adds noticeable delay to every new connection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"dns-server\"\n| timechart span=5m avg(dns.lookup.duration) as avg_dns_duration_s by dns.question.name\n| eval avg_dns_duration_ms=round(avg_dns_duration_s*1000,1)\n```\n\nUnderstanding this SPL\n\n**DNS Resolution Time Trending** — Slow DNS resolution adds latency to every connection. Trending resolution time helps identify degrading DNS infrastructure or inefficient recursive resolution chains.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNS tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by dns.question.name** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **avg_dns_duration_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (resolution time over time by domain), Table with drilldown to ThousandEyes.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We measure how long it takes to look up our domain names on the internet, because even though you don't see it, a slow lookup adds delay to every single click and page load.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.15",
              "n": "DNSSEC Validity Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "DNSSEC validation failures cause hard resolution failures for DNSSEC-enforcing resolvers. Monitoring validity ensures the DNSSEC chain of trust remains intact.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNSSEC tests)",
              "q": "`stream_index` thousandeyes.test.type=\"dns-dnssec\"\n| stats avg(dns.lookup.validity) as avg_validity by dns.question.name\n| where avg_validity < 100\n| sort avg_validity",
              "m": "Create DNSSEC tests in ThousandEyes for domains where you manage DNSSEC signing. The OTel metric `dns.lookup.validity` reports 100% when the DNSSEC chain validates successfully and 0% on failure. The Splunk App Network dashboard includes a \"DNS Validity (%)\" line chart.",
              "z": "Line chart (validity % over time), Single value (current validity), Table.",
              "kfp": "**Key rollover in progress.** During a planned DNSSEC Key Signing Key (KSK) or Zone Signing Key (ZSK) rollover, there is a brief window where some resolvers have the old key and some have the new key. Depending on timing, ThousandEyes may report validity failures during the rollover. Distinguish by checking whether the failure is brief (< 2 hours) and corresponds to a planned key rollover in your DNS management system. Well-executed rollovers (RFC 7583 timings) should not cause validation failures.\n\n**Parent zone DS record propagation delay.** After updating your DNSKEY and requesting a DS record update at the parent zone (registrar), there's a propagation delay (24–48 hours for some TLDs) before all resolvers see the new DS record. During this window, some agents may report validity failures. This is the most common cause of real DNSSEC outages — failure to coordinate DS record updates with key rollovers.\n\n**Resolver-side DNSSEC configuration issue.** Some agents may use resolvers that have broken DNSSEC validation themselves (wrong trust anchor, clock skew affecting signature validation). If only one agent shows failures while all others show 100%, the problem is likely that agent's resolver, not your DNSSEC chain. Distinguish by checking whether the failure is isolated to one agent.\n\n**Expired RRSIG signatures.** RRSIG records have an expiration time. If your DNSSEC signing system stops signing (software crash, expired HSM certificate, key rotation failure), the RRSIG records expire and validation fails. This is a real DNSSEC failure, not a false positive — but the root cause is your signing infrastructure, not the DNS infrastructure.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNSSEC tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate DNSSEC tests in ThousandEyes for domains where you manage DNSSEC signing. The OTel metric `dns.lookup.validity` reports 100% when the DNSSEC chain validates successfully and 0% on failure. The Splunk App Network dashboard includes a \"DNS Validity (%)\" line chart.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"dns-dnssec\"\n| stats avg(dns.lookup.validity) as avg_validity by dns.question.name\n| where avg_validity < 100\n| sort avg_validity\n```\n\nUnderstanding this SPL\n\n**DNSSEC Validity Monitoring** — DNSSEC validation failures cause hard resolution failures for DNSSEC-enforcing resolvers. Monitoring validity ensures the DNSSEC chain of trust remains intact.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNSSEC tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by dns.question.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_validity < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (validity % over time), Single value (current validity), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We check that the digital signatures on our domain names are valid — because if the security seal is broken, some internet providers will refuse to deliver our address at all, even though the actual website is fine.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.16",
              "n": "DNS Provider Comparison",
              "c": "medium",
              "f": "intermediate",
              "v": "Comparing resolution times across DNS providers (internal recursive resolvers, external providers like Cloudflare, Google, ISP resolvers) helps optimize DNS configuration for lowest latency.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNS tests)",
              "q": "`stream_index` thousandeyes.test.type=\"dns-server\"\n| stats avg(dns.lookup.duration) as avg_duration_s avg(dns.lookup.availability) as avg_availability by server.address, dns.question.name\n| eval avg_duration_ms=round(avg_duration_s*1000,1)\n| sort dns.question.name, avg_duration_ms",
              "m": "Create DNS Server tests in ThousandEyes for the same domain against multiple DNS server addresses. Each test targets a different resolver. Compare `dns.lookup.duration` and `dns.lookup.availability` across server addresses.",
              "z": "Column chart (resolution time by provider), Table (provider, domain, duration, availability), Comparison dashboard.",
              "kfp": "**Anycast-induced variability.** Public DNS resolvers (1.1.1.1, 8.8.8.8) use anycast, so different agents reach different physical servers. An agent in Tokyo may get excellent performance from 1.1.1.1 (hitting the Tokyo PoP) while an agent in rural Africa gets poor performance (hitting a distant PoP). This isn't a resolver problem — it's a geographic proximity effect. Compare providers per-agent, not just fleet-wide averages.\n\n**Cache state differences.** Public resolvers like Google and Cloudflare serve billions of queries and have warm caches for popular domains. Your internal resolver may have a cold cache for rarely-queried domains, making it appear slower. For a fair comparison, test with domains that are frequently queried across all resolvers, or account for cold-cache effects by looking at p95 rather than average.\n\n**Internal resolver serving different records.** If your internal DNS resolver returns different records than public resolvers (e.g., split-horizon DNS for internal vs external IPs), the comparison isn't apples-to-apples. The internal resolver may be doing additional work (conditional forwarding, policy-based routing) that increases resolution time. This is the correct behavior, not a performance problem.\n\n**Rate limiting on public resolvers.** If you send excessive test queries to public resolvers (e.g., 10 agents querying 1.1.1.1 every 60 seconds for the same domain), the resolver may rate-limit your queries, artificially degrading measured performance. Use reasonable test intervals (5–15 minutes) for public resolver tests.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNS tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate DNS Server tests in ThousandEyes for the same domain against multiple DNS server addresses. Each test targets a different resolver. Compare `dns.lookup.duration` and `dns.lookup.availability` across server addresses.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"dns-server\"\n| stats avg(dns.lookup.duration) as avg_duration_s avg(dns.lookup.availability) as avg_availability by server.address, dns.question.name\n| eval avg_duration_ms=round(avg_duration_s*1000,1)\n| sort dns.question.name, avg_duration_ms\n```\n\nUnderstanding this SPL\n\n**DNS Provider Comparison** — Comparing resolution times across DNS providers (internal recursive resolvers, external providers like Cloudflare, Google, ISP resolvers) helps optimize DNS configuration for lowest latency.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNS tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by server.address, dns.question.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_duration_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (resolution time by provider), Table (provider, domain, duration, availability), Comparison dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We test the same domain name against several different DNS services side by side — like timing how fast different phone directories answer — so we always know which one is quickest and most reliable, and can switch if our usual one gets slow.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.17",
              "n": "DNS Trace Delegation Chain Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "DNS Trace tests follow the full delegation chain from root to authoritative server. Monitoring availability and duration across the chain identifies issues at specific levels of the DNS hierarchy.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNS Trace tests)",
              "q": "`stream_index` thousandeyes.test.type=\"dns-trace\"\n| stats avg(dns.lookup.availability) as avg_availability avg(dns.lookup.duration) as avg_duration_s by dns.question.name, thousandeyes.source.agent.name\n| eval avg_duration_ms=round(avg_duration_s*1000,1)\n| where avg_availability < 100 OR avg_duration_ms > 500\n| sort avg_availability, -avg_duration_ms",
              "m": "Create DNS Trace tests in ThousandEyes for critical domains. Unlike DNS Server tests that query a specific resolver, DNS Trace tests follow the entire delegation chain from root servers. The same `dns.lookup.availability` and `dns.lookup.duration` metrics are reported.",
              "z": "Line chart (duration over time), Table (domain, agent, availability, duration), Alert on failures.",
              "kfp": "**TLD server maintenance.** TLD operators (e.g., Verisign for .com, AFNIC for .fr) occasionally perform maintenance on individual TLD name servers, which may cause DNS Trace tests to fail when they hit the maintained server during the delegation walk. Distinguish by checking whether the failure is brief (< 1 hour) and whether it correlates with TLD operator maintenance notices. The domain should still resolve via other TLD servers — DNS Trace tests that hit a different TLD server in the next round will succeed.\n\n**Root server unreachability from specific agents.** If an Enterprise Agent can't reach certain root servers (e.g., firewall blocking outbound UDP 53 to root server IPs), the DNS Trace test may fail or take a very long time as it tries alternate root servers. Distinguish by checking whether the failure is consistent from one agent while other agents show 100%.\n\n**Registrar DNS record propagation.** After changing NS records at your registrar, TLD servers propagate the update on their own schedule (typically 15 minutes to 48 hours depending on the TLD). During propagation, some DNS Trace tests may follow the old delegation and fail if the old nameservers have been decommissioned. This is a real window of vulnerability, not a false positive, but it's expected during migrations.\n\n**Slow authoritative DNS response.** DNS Trace tests walk the full chain, so the total `dns.lookup.duration` includes the response time of every server in the chain (root + TLD + authoritative). If your authoritative server is slow (overloaded, geographically distant), the trace duration will be high even though the delegation is correct. Distinguish this from a delegation issue by checking whether availability is 100% (delegation is fine, just slow) vs < 100% (delegation is broken).",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNS Trace tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate DNS Trace tests in ThousandEyes for critical domains. Unlike DNS Server tests that query a specific resolver, DNS Trace tests follow the entire delegation chain from root servers. The same `dns.lookup.availability` and `dns.lookup.duration` metrics are reported.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"dns-trace\"\n| stats avg(dns.lookup.availability) as avg_availability avg(dns.lookup.duration) as avg_duration_s by dns.question.name, thousandeyes.source.agent.name\n| eval avg_duration_ms=round(avg_duration_s*1000,1)\n| where avg_availability < 100 OR avg_duration_ms > 500\n| sort avg_availability, -avg_duration_ms\n```\n\nUnderstanding this SPL\n\n**DNS Trace Delegation Chain Monitoring** — DNS Trace tests follow the full delegation chain from root to authoritative server. Monitoring availability and duration across the chain identifies issues at specific levels of the DNS hierarchy.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (DNS Trace tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by dns.question.name, thousandeyes.source.agent.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_duration_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_availability < 100 OR avg_duration_ms > 500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (duration over time), Table (domain, agent, availability, duration), Alert on failures.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We follow the full chain of internet address books — from the master list all the way down to the one that has our address — to make sure every link in that chain is correct, because if one link breaks while the old answer is still remembered, nobody notices until it's suddenly too late.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.18",
              "n": "Network Outage Event Detection",
              "c": "critical",
              "f": "beginner",
              "v": "ThousandEyes Internet Insights uses collective intelligence from billions of daily measurements to automatically detect network outages affecting your services, including outages in ISP and cloud provider networks you do not own.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes Event API",
              "q": "`event_index` type=\"Network Outage\" OR type=\"Network Path Issue\"\n| stats count by type, severity, state\n| sort -count",
              "m": "Configure the Event input in the Cisco ThousandEyes App with a ThousandEyes user and account group. Update the `event_index` macro to point to the correct index. Events are fetched at a configurable interval via the ThousandEyes API. Event types include \"Network Outage\", \"Network Path Issue\", \"DNS Issue\", \"Server Issue\", \"Proxy Issue\", and \"Local Agent Issue\".",
              "z": "Events timeline, Table (type, severity, state, count), Pie chart by severity.",
              "kfp": "**Events for ISPs/providers you don't directly use.** Internet Insights may detect outages on ISP networks that your traffic doesn't traverse — the event is real but doesn't affect your users. Distinguish by cross-referencing the affected ISP/provider in the event with your actual network paths (UC-5.9.5/6). Focus on events that name ISPs present in your test results' AS paths.\n\n**Brief flaps classified as outage events.** ThousandEyes may classify a brief network flap (< 5 minutes) as an outage event before it auto-resolves. Check the `state` field — if it transitions to `resolved` quickly, it was transient. For alerting, consider adding `state=\"active\"` and waiting one polling interval before paging.\n\n**Overlapping events for the same underlying cause.** A single backbone failure can generate multiple events (Network Outage, Network Path Issue, DNS Issue) as different symptoms surface. These are related events for the same root cause. Check whether multiple event types fire at the same time and correlate to the same ISP/provider.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes Event API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure the Event input in the Cisco ThousandEyes App with a ThousandEyes user and account group. Update the `event_index` macro to point to the correct index. Events are fetched at a configurable interval via the ThousandEyes API. Event types include \"Network Outage\", \"Network Path Issue\", \"DNS Issue\", \"Server Issue\", \"Proxy Issue\", and \"Local Agent Issue\".\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`event_index` type=\"Network Outage\" OR type=\"Network Path Issue\"\n| stats count by type, severity, state\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Network Outage Event Detection** — ThousandEyes Internet Insights uses collective intelligence from billions of daily measurements to automatically detect network outages affecting your services, including outages in ISP and cloud provider networks you do not own.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes Event API. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `event_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by type, severity, state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Table (type, severity, state, count), Pie chart by severity.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We get alerts when major internet outages are detected around the world, so when our service slows down we can immediately tell whether it's our fault or whether the internet itself has a problem.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.19",
              "n": "ISP Performance Degradation Alerts",
              "c": "critical",
              "f": "beginner",
              "v": "ThousandEyes alerts notify when ISP-level degradation is detected. Ingesting these alerts into Splunk provides a centralized view alongside other infrastructure alerts and enables correlation with internal incidents.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes Alerts Stream (webhook)",
              "q": "`stream_index` sourcetype=\"thousandeyes:alerts\" severity=\"critical\" OR severity=\"warning\"\n| stats count by alert.rule.name, alert.test.name, severity\n| sort -count",
              "m": "Configure the Alerts Stream input in the ThousandEyes App, selecting alert rules to receive via webhook. The app automatically creates a webhook connector in ThousandEyes and associates it with the selected alert rules. Alerts flow in real-time to Splunk via HEC.",
              "z": "Pie chart (alerts by severity), Bar chart (alert timeline), Table (rule, test, severity, count).",
              "kfp": "**Flapping alert rules with aggressive thresholds.** If a ThousandEyes alert rule has a tight threshold (e.g., latency > 50 ms) without sufficient rounds-of-violation or suppression, the alert fires and clears repeatedly as the metric oscillates around the threshold. This floods Splunk with alert noise. Fix by tuning the ThousandEyes alert rule: increase the \"number of rounds\" requirement (e.g., 3 consecutive violations) or add ThousandEyes-side suppression.\n\n**Alerts for test maintenance windows.** When agents are taken offline for maintenance or tests are temporarily disabled, threshold violations may fire during the transition. Distinguish by checking whether the alert corresponds to a known maintenance window.\n\n**Duplicate alerts from multiple agents.** A single network issue may trigger alerts from multiple agents testing the same path. This is by design (confirms the issue is real, not agent-specific) but can look noisy. Group alerts by `alert.test.name` and time window to de-duplicate.\n\n**Alerts for paths your users don't traverse.** If you have tests monitoring ISP paths \"just in case,\" alerts from those tests may not represent actual user impact. Tag tests as \"critical\" or \"informational\" in ThousandEyes and filter Splunk alerts accordingly.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes Alerts Stream (webhook).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure the Alerts Stream input in the ThousandEyes App, selecting alert rules to receive via webhook. The app automatically creates a webhook connector in ThousandEyes and associates it with the selected alert rules. Alerts flow in real-time to Splunk via HEC.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` sourcetype=\"thousandeyes:alerts\" severity=\"critical\" OR severity=\"warning\"\n| stats count by alert.rule.name, alert.test.name, severity\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ISP Performance Degradation Alerts** — ThousandEyes alerts notify when ISP-level degradation is detected. Ingesting these alerts into Splunk provides a centralized view alongside other infrastructure alerts and enables correlation with internal incidents.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes Alerts Stream (webhook). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: thousandeyes:alerts. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by alert.rule.name, alert.test.name, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (alerts by severity), Bar chart (alert timeline), Table (rule, test, severity, count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We funnel all the network alarms from ThousandEyes into the same place where we see all our other alerts, so the team on call never misses a warning about an internet provider going down.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.20",
              "n": "DNS Issue Event Tracking",
              "c": "high",
              "f": "beginner",
              "v": "ThousandEyes Internet Insights automatically detects DNS infrastructure issues that deviate from established baselines, surfacing problems in third-party DNS services before they cause widespread outages.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes Event API",
              "q": "`event_index` type=\"DNS Issue\"\n| stats count by severity, state, thousandeyes.test.name\n| sort -count",
              "m": "Events with type \"DNS Issue\" are fetched via the Event input at the configured interval. Filter by `severity` (high, medium, low) and `state` (active, resolved) to focus on current issues. Correlate with DNS availability metrics from UC-5.10.6.",
              "z": "Events timeline, Table (test, severity, state), Single value (active DNS issues).",
              "kfp": "**DNS Issues for domains/providers you don't depend on.** Internet Insights may detect DNS anomalies for DNS providers used by other ThousandEyes customers but not by your organization. Distinguish by cross-referencing the affected DNS provider with your DNS infrastructure. Only alert on DNS Issues that affect tests monitoring YOUR domains.\n\n**Transient DNS issues during provider maintenance.** DNS providers perform rolling maintenance that may trigger brief DNS Issue events. Check whether the event resolves within 15 minutes and whether your DNS availability metrics (UC-5.9.13) are unaffected.\n\n**DNS Issue events caused by DNSSEC key rollovers.** A widespread DNSSEC key rollover (e.g., root zone KSK rollover) may trigger DNS Issue events as some resolvers temporarily fail validation. This is typically planned and communicated in advance.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes Event API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEvents with type \"DNS Issue\" are fetched via the Event input at the configured interval. Filter by `severity` (high, medium, low) and `state` (active, resolved) to focus on current issues. Correlate with DNS availability metrics from UC-5.10.6.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`event_index` type=\"DNS Issue\"\n| stats count by severity, state, thousandeyes.test.name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**DNS Issue Event Tracking** — ThousandEyes Internet Insights automatically detects DNS infrastructure issues that deviate from established baselines, surfacing problems in third-party DNS services before they cause widespread outages.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes Event API. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `event_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by severity, state, thousandeyes.test.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Table (test, severity, state), Single value (active DNS issues).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We get an alert when the internet's phone book services are having trouble, so we can switch to a backup phone book before our customers can't find us anymore.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.21",
              "n": "Proxy Issue Detection",
              "c": "high",
              "f": "beginner",
              "v": "Detects when proxy infrastructure (forward proxies, web gateways, SASE secure edges) becomes the root cause of connectivity issues, helping teams quickly identify whether the problem is in the proxy layer or the destination.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes Event API",
              "q": "`event_index` type=\"Proxy Issue\"\n| stats count by severity, state\n| sort -count",
              "m": "Events with type \"Proxy Issue\" indicate problems in proxy/web gateway infrastructure. These are automatically detected when ThousandEyes agents traverse proxy paths. Correlate with SASE or web security gateway logs in Splunk for root cause analysis.",
              "z": "Events timeline, Table, Single value (active proxy issues).",
              "kfp": "**Proxy maintenance windows.** Cloud proxy providers (Zscaler, Cisco Umbrella, Netskope) perform scheduled maintenance that may trigger brief Proxy Issue events. Cross-reference with the provider's status page and scheduled maintenance announcements.\n\n**Regional proxy PoP issues that don't affect your users.** Internet Insights may detect proxy issues at PoPs your users don't traverse. Check the event details and ThousandEyes permalink to determine which PoP is affected and whether your user traffic routes through it.\n\n**Proxy bypass rules triggering.** If certain traffic bypasses the proxy (split-tunnel VPN, direct cloud breakout), a proxy issue may not affect the bypassed traffic. Check your proxy bypass rules before escalating.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes Event API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEvents with type \"Proxy Issue\" indicate problems in proxy/web gateway infrastructure. These are automatically detected when ThousandEyes agents traverse proxy paths. Correlate with SASE or web security gateway logs in Splunk for root cause analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`event_index` type=\"Proxy Issue\"\n| stats count by severity, state\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Proxy Issue Detection** — Detects when proxy infrastructure (forward proxies, web gateways, SASE secure edges) becomes the root cause of connectivity issues, helping teams quickly identify whether the problem is in the proxy layer or the destination.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes Event API. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `event_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by severity, state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Table, Single value (active proxy issues).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We get warned when the security gateway that filters everyone's web traffic is having problems, so we can let important work bypass the broken gate until it's fixed.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.22",
              "n": "Local Agent Issue Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "Detects when the source of a test failure is the agent itself (local network, DNS, or connectivity issue at the agent location), preventing false attribution of problems to the destination service.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes Event API",
              "q": "`event_index` type=\"Local Agent Issue\"\n| stats count by severity, state\n| sort -count",
              "m": "\"Local Agent Issue\" events indicate that the test failure originated at the agent's local environment, not the remote target. These help filter out false positives in outage detection. Correlate with agent health data to identify sites with recurring local problems.",
              "z": "Events timeline, Table by agent, Single value (active local issues).",
              "kfp": "**Planned agent maintenance or reboots.** Restarting a VM or container hosting an Enterprise Agent triggers a Local Agent Issue event. Correlate with your change management system to suppress alerts during planned maintenance.\n\n**Brief network blips at the agent site.** A 30-second connectivity interruption (e.g., a switch failover, Wi-Fi AP handoff) can trigger an agent issue event that resolves itself. Check if the event duration is < 5 minutes and the state transitions to `cleared`.\n\n**Agent software updates.** The ThousandEyes agent auto-updates can cause brief connectivity interruptions. These events typically last 2–5 minutes.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes Event API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n\"Local Agent Issue\" events indicate that the test failure originated at the agent's local environment, not the remote target. These help filter out false positives in outage detection. Correlate with agent health data to identify sites with recurring local problems.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`event_index` type=\"Local Agent Issue\"\n| stats count by severity, state\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Local Agent Issue Monitoring** — Detects when the source of a test failure is the agent itself (local network, DNS, or connectivity issue at the agent location), preventing false attribution of problems to the destination service.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes Event API. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `event_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by severity, state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Table by agent, Single value (active local issues).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We check that all our network monitoring sensors are actually working, because a broken sensor means a blind spot — we won't get any warnings when the network in that area has problems.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.23",
              "n": "Internet Outage Correlation with Internal Alerts",
              "c": "high",
              "f": "advanced",
              "v": "Correlating ThousandEyes outage events with internal monitoring alerts enables rapid determination of whether an issue is caused by an external internet problem or an internal infrastructure failure, significantly reducing MTTR.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes` (events), plus internal monitoring indexes",
              "q": "`event_index` type=\"Network Outage\" state=\"active\"\n| rename thousandeyes.test.name as test_name\n| join type=outer max=1 test_name [\n  search index=itsi_tracked_alerts severity=\"critical\"\n  | rename service_name as test_name\n]\n| table _time, type, severity, test_name, service_name, state\n| sort -_time",
              "m": "This correlation use case combines ThousandEyes outage events with internal alerting systems (ITSI episodes, Splunk alerts, or ServiceNow incidents). When a ThousandEyes \"Network Outage\" event is active and aligns with internal service degradation, the root cause is likely external. Adjust the join logic to match your naming conventions.",
              "z": "Combined timeline (TE events + internal alerts), Table, Dashboard with dual panels.",
              "kfp": "**Coincidental timing.** An internal alert fires during an Internet Insights outage window but is completely unrelated (e.g., a disk full alert during a network outage). Use domain knowledge to filter correlations — only match internal alerts related to network, application performance, or external connectivity.\n\n**Internal issues triggering downstream internet symptoms.** Your own misconfiguration (e.g., a firewall rule change) might cause ThousandEyes tests to fail, which could be interpreted as an internet issue. Check whether the Internet Insights event affects multiple ThousandEyes customers (true internet outage) or only your tests (your issue).\n\n**Time zone misalignment.** Ensure both ThousandEyes events and internal alerts use the same time reference. ThousandEyes uses UTC; internal alerts use whatever timezone the Splunk search head is configured for. The `| eval` time window calculation should work in epoch seconds to avoid TZ issues.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes` (events), plus internal monitoring indexes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThis correlation use case combines ThousandEyes outage events with internal alerting systems (ITSI episodes, Splunk alerts, or ServiceNow incidents). When a ThousandEyes \"Network Outage\" event is active and aligns with internal service degradation, the root cause is likely external. Adjust the join logic to match your naming conventions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`event_index` type=\"Network Outage\" state=\"active\"\n| rename thousandeyes.test.name as test_name\n| join type=outer max=1 test_name [\n  search index=itsi_tracked_alerts severity=\"critical\"\n  | rename service_name as test_name\n]\n| table _time, type, severity, test_name, service_name, state\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Internet Outage Correlation with Internal Alerts** — Correlating ThousandEyes outage events with internal monitoring alerts enables rapid determination of whether an issue is caused by an external internet problem or an internal infrastructure failure, significantly reducing MTTR.\n\nDocumented **Data sources**: `index=thousandeyes` (events), plus internal monitoring indexes. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `event_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Internet Outage Correlation with Internal Alerts**): table _time, type, severity, test_name, service_name, state\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Combined timeline (TE events + internal alerts), Table, Dashboard with dual panels.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "When something breaks at work, we immediately check whether the internet itself has a problem — because if the internet is broken, there's nothing wrong with our systems and the team can stop panicking and just wait for the internet to fix itself.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.24",
              "n": "Endpoint Experience Score Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "ThousandEyes Endpoint Agents provide a composite experience score aggregating CPU, memory, and network performance from the end-user device perspective, enabling proactive digital experience management for hybrid workforces.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint tests)",
              "q": "`stream_index` thousandeyes.test.domain=\"endpoint\"\n| stats avg(thousandeyes.endpoint.agent.score) as avg_score avg(system.cpu.utilization) as avg_cpu avg(system.memory.utilization) as avg_mem by thousandeyes.source.agent.name\n| where avg_score < 70\n| sort avg_score",
              "m": "Deploy ThousandEyes Endpoint Agents on user devices and configure Endpoint Agent tests in the Tests Stream input. The OTel metric `thousandeyes.endpoint.agent.score` is a composite of CPU and memory scores. `system.cpu.utilization` and `system.memory.utilization` are reported as percentages.",
              "z": "Gauge (experience score per user), Table (agent, score, CPU, memory), Trend line chart.",
              "kfp": "**Low scores during device startup.** When a laptop boots or wakes from sleep, the Endpoint Agent may report degraded scores for the first 1–2 measurement rounds as the OS initializes networking and loads startup applications. Filter by excluding the first 5 minutes after agent start.\n\n**Low agent score from legitimate heavy workloads.** A developer running a local build or a data analyst running a large query will spike CPU/memory utilization, lowering the `agent.score`. This is normal workload, not a problem. Focus on endpoints where low agent scores CORRELATE with low network scores (indicating systemic issues) rather than isolated high CPU.\n\n**Wi-Fi interference during specific hours.** Endpoints on Wi-Fi in dense office environments may show degraded `network.score` during peak hours (10am–12pm, 2pm–4pm) due to channel congestion. This is expected in some environments — compare with Ethernet-connected endpoints to distinguish infrastructure issues from wireless contention.\n\n**VPN split-tunnel effects.** Endpoints using split-tunnel VPN may show different scores for gateway vs VPN targets. This is expected — the gateway path bypasses VPN while the VPN path adds encryption overhead.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy ThousandEyes Endpoint Agents on user devices and configure Endpoint Agent tests in the Tests Stream input. The OTel metric `thousandeyes.endpoint.agent.score` is a composite of CPU and memory scores. `system.cpu.utilization` and `system.memory.utilization` are reported as percentages.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.domain=\"endpoint\"\n| stats avg(thousandeyes.endpoint.agent.score) as avg_score avg(system.cpu.utilization) as avg_cpu avg(system.memory.utilization) as avg_mem by thousandeyes.source.agent.name\n| where avg_score < 70\n| sort avg_score\n```\n\nUnderstanding this SPL\n\n**Endpoint Experience Score Monitoring** — ThousandEyes Endpoint Agents provide a composite experience score aggregating CPU, memory, and network performance from the end-user device perspective, enabling proactive digital experience management for hybrid workforces.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_score < 70` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (experience score per user), Table (agent, score, CPU, memory), Trend line chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We check how well each employee's computer and internet connection are working, so when someone says 'my computer is slow' we can immediately see if it's their Wi-Fi, their VPN, or their computer itself — without asking them to run a speed test.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.25",
              "n": "Remote Worker Connectivity Health",
              "c": "high",
              "f": "intermediate",
              "v": "Endpoint agents break connectivity into segments (gateway, VPN, proxy, DNS) with per-segment latency, loss, and score, enabling targeted troubleshooting of remote worker network issues without requiring on-site visits.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint local network)",
              "q": "`stream_index` thousandeyes.test.domain=\"endpoint\" target.type=*\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.score) as avg_score by thousandeyes.source.agent.name, target.type\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| sort thousandeyes.source.agent.name, target.type",
              "m": "Endpoint Experience Local Network data reports metrics per segment: `target.type` can be \"dns\", \"proxy\", \"gateway\", or \"vpn\". The `network.score` composite metric simplifies multi-segment health assessment. Identify whether connectivity problems are in the local network, VPN, proxy, or DNS layer.",
              "z": "Table (agent, segment type, latency, loss, score), Heatmap by segment, Drilldown per agent.",
              "kfp": "**Home router reboots.** A brief connectivity loss when a home router restarts (ISP firmware update, power blip) causes a spike in loss and latency. These are typically < 5 minutes and self-resolve. Filter by duration to exclude transient blips.\n\n**Shared home bandwidth.** A remote worker's family member streaming 4K video or downloading a game can saturate the home internet connection, degrading the worker's network metrics. This is a real experience issue but not an IT-fixable problem — advise the user on bandwidth management.\n\n**Coffee shop / mobile hotspot connections.** Remote workers on public Wi-Fi or mobile hotspots inherently have worse connectivity than home broadband. These are expected to show lower scores. Segment by `thousandeyes.source.agent.connection.type` and treat Wireless connections differently from Ethernet.\n\n**ISP maintenance windows.** Regional ISP maintenance (typically 1–5 AM local time) may degrade metrics for workers in that region. Check ISP status pages and maintenance calendars.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint local network).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEndpoint Experience Local Network data reports metrics per segment: `target.type` can be \"dns\", \"proxy\", \"gateway\", or \"vpn\". The `network.score` composite metric simplifies multi-segment health assessment. Identify whether connectivity problems are in the local network, VPN, proxy, or DNS layer.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.domain=\"endpoint\" target.type=*\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.score) as avg_score by thousandeyes.source.agent.name, target.type\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| sort thousandeyes.source.agent.name, target.type\n```\n\nUnderstanding this SPL\n\n**Remote Worker Connectivity Health** — Endpoint agents break connectivity into segments (gateway, VPN, proxy, DNS) with per-segment latency, loss, and score, enabling targeted troubleshooting of remote worker network issues without requiring on-site visits.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint local network). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name, target.type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (agent, segment type, latency, loss, score), Heatmap by segment, Drilldown per agent.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how good each remote worker's home internet connection is, so when they call IT saying 'everything is slow,' we can immediately tell them whether it's their internet provider, their Wi-Fi, or something on our end.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.26",
              "n": "VPN Path Performance",
              "c": "high",
              "f": "intermediate",
              "v": "Measures latency, loss, and quality through VPN tunnels from endpoint agents, identifying whether the VPN concentrator or provider is the bottleneck for remote workers.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint local network)",
              "q": "`stream_index` thousandeyes.test.domain=\"endpoint\" target.type=\"vpn\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.score) as avg_score by thousandeyes.source.agent.name, vpn.vendor, server.address\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| where avg_score < 70 OR avg_loss > 1\n| sort avg_score",
              "m": "Endpoint agents with VPN connections report metrics with `target.type=\"vpn\"`. The `vpn.vendor` attribute identifies the VPN client (e.g., \"Cisco AnyConnect\"). The `server.address` is the VPN gateway. Compare VPN segment scores with gateway and DNS segment scores to isolate whether the VPN is the bottleneck.",
              "z": "Table (agent, VPN vendor, gateway, latency, loss, score), Column chart by VPN vendor, Trend line chart.",
              "kfp": "**VPN not active.** If a user isn't connected to VPN, no `target.type=\"vpn\"` data is produced. The absence of VPN data doesn't indicate a problem — the user may be on a split-tunnel configuration or working from the office.\n\n**VPN reconnection bursts.** When a VPN session drops and reconnects, there may be a brief period of high latency / loss during the handshake. This is normal for VPN reconnections, especially when switching networks (e.g., Wi-Fi to cellular).\n\n**VPN concentrator geographic distance.** Users connecting to a VPN concentrator in a different continent will inherently have higher latency. This is expected physics, not a problem. Compare with other users connecting to the same concentrator from the same region.\n\n**Split-tunnel vs full-tunnel.** In split-tunnel VPN, only corporate traffic traverses the VPN tunnel. The VPN metrics reflect tunnel performance, but internet-bound traffic bypasses VPN entirely. This means a low VPN score doesn't affect internet browsing but does affect corporate app access.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint local network).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEndpoint agents with VPN connections report metrics with `target.type=\"vpn\"`. The `vpn.vendor` attribute identifies the VPN client (e.g., \"Cisco AnyConnect\"). The `server.address` is the VPN gateway. Compare VPN segment scores with gateway and DNS segment scores to isolate whether the VPN is the bottleneck.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.domain=\"endpoint\" target.type=\"vpn\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.score) as avg_score by thousandeyes.source.agent.name, vpn.vendor, server.address\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| where avg_score < 70 OR avg_loss > 1\n| sort avg_score\n```\n\nUnderstanding this SPL\n\n**VPN Path Performance** — Measures latency, loss, and quality through VPN tunnels from endpoint agents, identifying whether the VPN concentrator or provider is the bottleneck for remote workers.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint local network). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name, vpn.vendor, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_score < 70 OR avg_loss > 1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (agent, VPN vendor, gateway, latency, loss, score), Column chart by VPN vendor, Trend line chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We measure how well the secure tunnel (VPN) that connects each employee's computer to the company network is working, so we can tell whether slowness is coming from their home internet or from the tunnel itself.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.27",
              "n": "Endpoint Connection Type and Network Score",
              "c": "medium",
              "f": "intermediate",
              "v": "Comparing network scores across connection types (Wireless, Ethernet, Modem) identifies whether WiFi or wired connectivity is a systemic issue for the workforce, informing infrastructure investment decisions.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint local network)",
              "q": "`stream_index` thousandeyes.test.domain=\"endpoint\"\n| stats avg(network.score) as avg_score avg(network.latency) as avg_latency avg(network.loss) as avg_loss count by thousandeyes.source.agent.connection.type\n| eval avg_latency_ms=round(avg_latency*1000,1)\n| sort avg_score",
              "m": "The OTel attribute `thousandeyes.source.agent.connection.type` reports \"Wireless\", \"Ethernet\", or \"Modem\". Group endpoint network metrics by connection type to identify whether WiFi users have systematically worse performance than wired users.",
              "z": "Column chart (score by connection type), Table (connection type, avg score, latency, loss, count), Pie chart (user distribution by type).",
              "kfp": "**Enterprise Wi-Fi vs home Wi-Fi.** Office Wi-Fi (enterprise-grade APs, proper channel planning, dedicated SSID) performs significantly better than home Wi-Fi (consumer routers, interference, shared bandwidth). Segment by location or ISP to compare like-for-like.\n\n**Connection type changes during a measurement.** A user docking a laptop switches from Wireless to Ethernet mid-session. The transition period may show anomalous metrics. The Endpoint Agent reports the connection type at the time of measurement, so this typically self-corrects within one measurement cycle.\n\n**Modem connections.** The \"Modem\" connection type (cellular/mobile data) is inherently variable. High latency and jitter are expected — don't flag these as anomalies unless they're significantly worse than the cellular baseline.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint local network).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe OTel attribute `thousandeyes.source.agent.connection.type` reports \"Wireless\", \"Ethernet\", or \"Modem\". Group endpoint network metrics by connection type to identify whether WiFi users have systematically worse performance than wired users.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.domain=\"endpoint\"\n| stats avg(network.score) as avg_score avg(network.latency) as avg_latency avg(network.loss) as avg_loss count by thousandeyes.source.agent.connection.type\n| eval avg_latency_ms=round(avg_latency*1000,1)\n| sort avg_score\n```\n\nUnderstanding this SPL\n\n**Endpoint Connection Type and Network Score** — Comparing network scores across connection types (Wireless, Ethernet, Modem) identifies whether WiFi or wired connectivity is a systemic issue for the workforce, informing infrastructure investment decisions.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint local network). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.connection.type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (score by connection type), Table (connection type, avg score, latency, loss, count), Pie chart (user distribution by type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We compare how well internet works over Wi-Fi versus a cable, so we can prove with data whether the company should buy cable adapters for everyone or invest in better Wi-Fi.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.28",
              "n": "Geographic Workforce Performance Comparison",
              "c": "medium",
              "f": "intermediate",
              "v": "Comparing digital experience metrics across office locations and regions identifies sites with persistent network quality issues, enabling targeted infrastructure improvements.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint tests)",
              "q": "`stream_index` thousandeyes.test.domain=\"endpoint\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.score) as avg_score count as agent_count by thousandeyes.source.agent.geo.country.iso_code, thousandeyes.source.agent.geo.region.iso_code\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| sort avg_score",
              "m": "Endpoint agent metrics include geographic attributes: `thousandeyes.source.agent.geo.country.iso_code` and `thousandeyes.source.agent.geo.region.iso_code`. Aggregate network quality metrics by region to identify poorly performing locations. Combine with `thousandeyes.source.agent.location` for more specific site-level analysis.",
              "z": "Map (score by region), Table (region, score, latency, loss, agent count), Column chart comparing regions.",
              "kfp": "**Low endpoint count in a region.** If only 2 endpoints report from a country, a single bad connection skews the entire country average. Require a minimum endpoint count (e.g., ≥ 5) before treating the data as representative.\n\n**Time zone differences.** Comparing scores across time zones during the same clock time is misleading — 2 PM in San Francisco is midnight in Singapore. Compare during each region's business hours, or use a 24-hour average.\n\n**Infrastructure differences.** Office workers (enterprise-grade networks) vs remote workers (residential ISPs) may dominate a region's average. Segment by connection type and network.org (ISP) within each region for accurate comparison.\n\n**Endpoint Agent version differences.** Different agent versions across regions may produce slightly different metric quality. Ensure consistent agent versions.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEndpoint agent metrics include geographic attributes: `thousandeyes.source.agent.geo.country.iso_code` and `thousandeyes.source.agent.geo.region.iso_code`. Aggregate network quality metrics by region to identify poorly performing locations. Combine with `thousandeyes.source.agent.location` for more specific site-level analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.domain=\"endpoint\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.score) as avg_score count as agent_count by thousandeyes.source.agent.geo.country.iso_code, thousandeyes.source.agent.geo.region.iso_code\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| sort avg_score\n```\n\nUnderstanding this SPL\n\n**Geographic Workforce Performance Comparison** — Comparing digital experience metrics across office locations and regions identifies sites with persistent network quality issues, enabling targeted infrastructure improvements.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Endpoint tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.geo.country.iso_code, thousandeyes.source.agent.geo.region.iso_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (score by region), Table (region, score, latency, loss, agent count), Column chart comparing regions.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We compare how well the internet works for employees in different parts of the world, so we know which offices or countries need better network infrastructure.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.29",
              "n": "SD-WAN Overlay vs Underlay Performance",
              "c": "high",
              "f": "advanced",
              "v": "Compares performance metrics across SD-WAN overlay tunnels and their underlay transport paths, revealing when SD-WAN policy routing decisions are sub-optimal or when underlay degradation affects the overlay.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics, Path Visualization",
              "q": "`stream_index` thousandeyes.test.type=\"agent-to-agent\" OR thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*SD-WAN*\" OR thousandeyes.test.name=\"*overlay*\" OR thousandeyes.test.name=\"*underlay*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter by thousandeyes.test.name, thousandeyes.source.agent.name\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| sort thousandeyes.source.agent.name, thousandeyes.test.name",
              "m": "Deploy ThousandEyes Enterprise Agents on Cisco Catalyst SD-WAN or Meraki MX devices via the SD-WAN Manager integration. Create paired tests — one through the overlay tunnel and one via the underlay path — and name them consistently (e.g., \"Site-A Overlay\", \"Site-A Underlay\") to enable comparison. The same `network.latency`, `network.loss`, and `network.jitter` metrics apply.",
              "z": "Dual-panel comparison (overlay vs underlay), Table (test, latency, loss, jitter), Line chart side-by-side.",
              "kfp": "**Overlay path selection differences.** The SD-WAN fabric may route overlay traffic through a different underlay transport than the one you're testing. For example, your underlay test hits the MPLS PE, but the overlay currently routes via the broadband link. The comparison is only valid when you know which underlay transport the overlay is using.\n\n**Overlay encryption overhead.** IPsec encryption in the SD-WAN tunnel adds 1–3 ms of latency. This is expected and not a performance problem — it's the cost of encryption. Only flag overlay premium > 10 ms as potentially problematic.\n\n**SD-WAN path failover during measurement.** If the SD-WAN fabric switches from MPLS to broadband mid-test, the overlay measurement reflects a blended result. Use short measurement intervals and look at per-round data rather than averages.\n\n**Test naming inconsistency.** The SPL relies on test names containing \"overlay\" or \"underlay.\" If naming conventions aren't followed, the `path_type` classification will be wrong. Alternatively, use ThousandEyes tags instead of name matching.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics, Path Visualization.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy ThousandEyes Enterprise Agents on Cisco Catalyst SD-WAN or Meraki MX devices via the SD-WAN Manager integration. Create paired tests — one through the overlay tunnel and one via the underlay path — and name them consistently (e.g., \"Site-A Overlay\", \"Site-A Underlay\") to enable comparison. The same `network.latency`, `network.loss`, and `network.jitter` metrics apply.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"agent-to-agent\" OR thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*SD-WAN*\" OR thousandeyes.test.name=\"*overlay*\" OR thousandeyes.test.name=\"*underlay*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter by thousandeyes.test.name, thousandeyes.source.agent.name\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| sort thousandeyes.source.agent.name, thousandeyes.test.name\n```\n\nUnderstanding this SPL\n\n**SD-WAN Overlay vs Underlay Performance** — Compares performance metrics across SD-WAN overlay tunnels and their underlay transport paths, revealing when SD-WAN policy routing decisions are sub-optimal or when underlay degradation affects the overlay.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics, Path Visualization. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name, thousandeyes.source.agent.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Dual-panel comparison (overlay vs underlay), Table (test, latency, loss, jitter), Line chart side-by-side.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We compare how fast traffic moves through our smart network routing system (SD-WAN) versus the raw internet connection underneath it, so we can tell whether the smart system is helping or actually making things slower.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.30",
              "n": "SASE Secure Edge Performance",
              "c": "high",
              "f": "intermediate",
              "v": "SASE architectures route traffic through cloud-based security edges (Zscaler, Cisco Umbrella, etc.). Monitoring latency and loss through these edges ensures the security layer does not unacceptably degrade user experience.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics",
              "q": "`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*SASE*\" OR thousandeyes.test.name=\"*Zscaler*\" OR thousandeyes.test.name=\"*Umbrella*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss by thousandeyes.test.name, thousandeyes.source.agent.name\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| sort -avg_latency_ms",
              "m": "Create Agent-to-Server tests in ThousandEyes that route through your SASE secure edge. Name tests descriptively to include the SASE provider. Compare latency with and without the secure edge to quantify the security overhead. Correlate with Endpoint Agent `target.type=\"proxy\"` data for end-to-end visibility.",
              "z": "Line chart (latency through secure edge over time), Table (agent, SASE test, latency, loss), Comparison chart.",
              "kfp": "**Proxy bypass traffic.** Traffic that bypasses the SASE proxy (split-tunnel VPN direct access, proxy PAC exceptions) won't appear in `target.type=\"proxy\"` data. This is expected — only proxied traffic is measured.\n\n**SASE PoP selection changes.** Users may be routed to different SASE PoPs over time based on DNS resolution, Anycast routing, or SASE vendor load balancing. The `server.address` may change, making trend analysis per-PoP difficult. Group by SASE vendor name rather than individual PoP IPs.\n\n**TLS inspection overhead.** SASE proxies performing TLS inspection add processing latency that increases with page complexity (more TLS connections = more inspection). This is expected behavior, not a bug — but it can be significant (20–100+ ms per request).",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate Agent-to-Server tests in ThousandEyes that route through your SASE secure edge. Name tests descriptively to include the SASE provider. Compare latency with and without the secure edge to quantify the security overhead. Correlate with Endpoint Agent `target.type=\"proxy\"` data for end-to-end visibility.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*SASE*\" OR thousandeyes.test.name=\"*Zscaler*\" OR thousandeyes.test.name=\"*Umbrella*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss by thousandeyes.test.name, thousandeyes.source.agent.name\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| sort -avg_latency_ms\n```\n\nUnderstanding this SPL\n\n**SASE Secure Edge Performance** — SASE architectures route traffic through cloud-based security edges (Zscaler, Cisco Umbrella, etc.). Monitoring latency and loss through these edges ensures the security layer does not unacceptably degrade user experience.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name, thousandeyes.source.agent.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency through secure edge over time), Table (agent, SASE test, latency, loss), Comparison chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We measure how fast the company's security gateway in the cloud processes our internet traffic, because when everything online feels slow, we need to know if it's the security checkpoint causing the delay or something else.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.31",
              "n": "Multi-Cloud Network Performance",
              "c": "high",
              "f": "intermediate",
              "v": "Measures network path performance to workloads hosted across AWS, Azure, GCP, and other cloud providers, identifying which provider or region delivers the best connectivity from each user location.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics",
              "q": "`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*AWS*\" OR thousandeyes.test.name=\"*Azure*\" OR thousandeyes.test.name=\"*GCP*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss by thousandeyes.test.name, thousandeyes.source.agent.name\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| sort thousandeyes.source.agent.name, avg_latency_ms",
              "m": "Deploy ThousandEyes Cloud Agents in each cloud provider region and create Agent-to-Server tests targeting your workloads. ThousandEyes supports Cloud Agents in AWS, Azure, GCP, IBM Cloud, and Alibaba Cloud. Name tests with the provider and region for easy filtering.",
              "z": "Column chart (latency by cloud provider), Table (agent, cloud target, latency, loss), Map (agent-to-cloud paths).",
              "kfp": "**Geographic latency.** Cross-cloud paths between distant regions (US-East to Asia-Pacific) inherently have high latency (120–250 ms). This is physics, not a problem. Compare against baseline for each specific path.\n\n**Cloud provider maintenance.** Cloud providers perform network maintenance that can temporarily increase latency or cause brief packet loss. Check provider status pages.\n\n**Test agent resource contention.** Enterprise Agents on undersized VMs may produce inconsistent results during periods of co-tenant resource contention. Use dedicated VM sizes recommended by ThousandEyes.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy ThousandEyes Cloud Agents in each cloud provider region and create Agent-to-Server tests targeting your workloads. ThousandEyes supports Cloud Agents in AWS, Azure, GCP, IBM Cloud, and Alibaba Cloud. Name tests with the provider and region for easy filtering.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*AWS*\" OR thousandeyes.test.name=\"*Azure*\" OR thousandeyes.test.name=\"*GCP*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss by thousandeyes.test.name, thousandeyes.source.agent.name\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| sort thousandeyes.source.agent.name, avg_latency_ms\n```\n\nUnderstanding this SPL\n\n**Multi-Cloud Network Performance** — Measures network path performance to workloads hosted across AWS, Azure, GCP, and other cloud providers, identifying which provider or region delivers the best connectivity from each user location.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name, thousandeyes.source.agent.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (latency by cloud provider), Table (agent, cloud target, latency, loss), Map (agent-to-cloud paths).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We check how fast data travels between our different cloud services (like Amazon and Microsoft), because if the highway between them is congested, our applications that span both clouds will be slow.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.32",
              "n": "CDN Edge Network Performance",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures latency, loss, and path characteristics to CDN edge locations, revealing when CDN performance varies by region or when edge servers are not serving content as expected.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics",
              "q": "`stream_index` thousandeyes.test.type=\"http-server\"\n| search thousandeyes.test.name=\"*CDN*\"\n| stats avg(http.client.request.duration) as avg_ttfb_s avg(http.server.throughput) as avg_throughput by thousandeyes.test.name, thousandeyes.source.agent.name, server.address\n| eval avg_ttfb_ms=round(avg_ttfb_s*1000,1), throughput_mbps=round(avg_throughput/1048576,2)\n| sort thousandeyes.source.agent.name",
              "m": "Create HTTP Server tests targeting CDN-served URLs from multiple ThousandEyes Cloud Agents. The `server.address` will show which CDN edge server responded. Compare performance across regions by grouping by `thousandeyes.source.agent.location`. Correlate HTTP response headers (cache hit/miss) with performance differences.",
              "z": "Column chart (TTFB by CDN edge), Table (agent, CDN edge, TTFB, throughput), Map.",
              "kfp": "**Cache misses on first request.** If the CDN edge hasn't cached the tested resource, the first request causes an origin pull — dramatically increasing TTFB. Subsequent requests will be fast. Use recurring tests (every 2–5 minutes) so the cache stays warm.\n\n**CDN Anycast routing changes.** CDN providers dynamically route users to different PoPs. The `server.address` resolved for the test may change between rounds, making per-PoP trending difficult. Group by `thousandeyes.source.agent.location` (the agent's location is stable) rather than `server.address`.\n\n**Rate limiting by CDN.** Aggressive test frequencies may trigger CDN rate limiting or bot protection, causing artificial availability drops or slow responses. Use moderate test intervals (5–10 minutes per agent).",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate HTTP Server tests targeting CDN-served URLs from multiple ThousandEyes Cloud Agents. The `server.address` will show which CDN edge server responded. Compare performance across regions by grouping by `thousandeyes.source.agent.location`. Correlate HTTP response headers (cache hit/miss) with performance differences.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"http-server\"\n| search thousandeyes.test.name=\"*CDN*\"\n| stats avg(http.client.request.duration) as avg_ttfb_s avg(http.server.throughput) as avg_throughput by thousandeyes.test.name, thousandeyes.source.agent.name, server.address\n| eval avg_ttfb_ms=round(avg_ttfb_s*1000,1), throughput_mbps=round(avg_throughput/1048576,2)\n| sort thousandeyes.source.agent.name\n```\n\nUnderstanding this SPL\n\n**CDN Edge Network Performance** — Measures latency, loss, and path characteristics to CDN edge locations, revealing when CDN performance varies by region or when edge servers are not serving content as expected.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name, thousandeyes.source.agent.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_ttfb_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (TTFB by CDN edge), Table (agent, CDN edge, TTFB, throughput), Map.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We check how fast our website loads from different parts of the world by testing the content delivery network (CDN) that stores copies of our pages near each user, so we can catch it when one region's copy is slow or broken.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.33",
              "n": "Cloud Provider Path Visualization",
              "c": "medium",
              "f": "intermediate",
              "v": "Hop-by-hop path visualization through cloud provider backbones reveals routing decisions, peering points, and potential bottlenecks within AWS, Azure, or GCP networks that are otherwise invisible.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes Path Visualization data",
              "q": "`path_viz_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*AWS*\" OR thousandeyes.test.name=\"*Azure*\" OR thousandeyes.test.name=\"*GCP*\"\n| stats count values(hop_ip) as hops by thousandeyes.test.name, thousandeyes.source.agent.name\n| sort thousandeyes.test.name",
              "m": "Enable \"Include Network Path Data\" in the Tests Stream input for cloud-targeted tests. Path Visualization data shows every hop between the agent and target. The `path_viz_index` macro must be configured. For detailed path analysis, use the `thousandeyes.permalink` to drill into the ThousandEyes UI path visualization view.",
              "z": "Table (test, agent, hop list), Drilldown to ThousandEyes path viz, Network topology diagram.",
              "kfp": "**Cloud provider internal path changes.** Cloud providers (AWS, Azure, GCP) regularly optimize internal routing. Path changes within the cloud provider's AS are usually benign and don't affect latency. Only flag path changes that cross AS boundaries.\n\n**ECMP load balancing.** Equal-Cost Multi-Path routing at ISP and cloud edges may show multiple paths simultaneously. This is normal and doesn't indicate instability. Check whether hop count and latency are consistent despite path variation.\n\n**Traceroute-resistant hops.** Some cloud provider hops don't respond to ICMP/traceroute probes, showing as \"*\" or \"unknown.\" This doesn't mean a problem — it's a security policy of the cloud provider.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes Path Visualization data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable \"Include Network Path Data\" in the Tests Stream input for cloud-targeted tests. Path Visualization data shows every hop between the agent and target. The `path_viz_index` macro must be configured. For detailed path analysis, use the `thousandeyes.permalink` to drill into the ThousandEyes UI path visualization view.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`path_viz_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*AWS*\" OR thousandeyes.test.name=\"*Azure*\" OR thousandeyes.test.name=\"*GCP*\"\n| stats count values(hop_ip) as hops by thousandeyes.test.name, thousandeyes.source.agent.name\n| sort thousandeyes.test.name\n```\n\nUnderstanding this SPL\n\n**Cloud Provider Path Visualization** — Hop-by-hop path visualization through cloud provider backbones reveals routing decisions, peering points, and potential bottlenecks within AWS, Azure, or GCP networks that are otherwise invisible.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes Path Visualization data. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `path_viz_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name, thousandeyes.source.agent.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (test, agent, hop list), Drilldown to ThousandEyes path viz, Network topology diagram.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We track the exact route our internet traffic takes to reach our cloud services, because if the route suddenly changes and goes through a longer or congested path, everything in the cloud gets slower.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.34",
              "n": "HTTP Server Availability Monitoring (ThousandEyes)",
              "c": "critical",
              "f": "beginner",
              "v": "Monitors web server availability from multiple global vantage points using ThousandEyes Cloud and Enterprise Agents. Detects regional outages that internal monitoring misses because the problem is between the user and the server.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server tests)",
              "q": "`stream_index` thousandeyes.test.type=\"http-server\"\n| stats avg(http.server.request.availability) as avg_availability by thousandeyes.test.name, server.address, thousandeyes.source.agent.name\n| where avg_availability < 100\n| sort avg_availability",
              "m": "Create HTTP Server tests in ThousandEyes targeting critical web applications and stream metrics to Splunk via the Tests Stream input. The OTel metric `http.server.request.availability` reports 100% when the HTTP request succeeds and 0% when any error occurs. The Splunk App Application dashboard includes an \"HTTP Server Availability (%)\" panel with permalink drilldown.",
              "z": "Line chart (availability % over time), Single value (current availability), Table (test, server, agent, availability).",
              "kfp": "**Planned maintenance.** HTTP availability drops to 0% during maintenance windows. Correlate with change management schedules.\n\n**Rate limiting or WAF blocking.** If the target web server rate-limits or blocks ThousandEyes agent IPs, tests fail with HTTP 403/429 — appearing as availability failures. Whitelist ThousandEyes agent IP ranges in WAF/rate limiter.\n\n**SSL certificate issues.** Expired or misconfigured SSL certificates cause availability failures. Check `error.type` for TLS-related errors.\n\n**Redirect chains.** HTTP tests following multiple redirects may timeout if the redirect chain is too long or includes slow intermediate servers. Configure appropriate redirect follow limits.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate HTTP Server tests in ThousandEyes targeting critical web applications and stream metrics to Splunk via the Tests Stream input. The OTel metric `http.server.request.availability` reports 100% when the HTTP request succeeds and 0% when any error occurs. The Splunk App Application dashboard includes an \"HTTP Server Availability (%)\" panel with permalink drilldown.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"http-server\"\n| stats avg(http.server.request.availability) as avg_availability by thousandeyes.test.name, server.address, thousandeyes.source.agent.name\n| where avg_availability < 100\n| sort avg_availability\n```\n\nUnderstanding this SPL\n\n**HTTP Server Availability Monitoring (ThousandEyes)** — Monitors web server availability from multiple global vantage points using ThousandEyes Cloud and Enterprise Agents. Detects regional outages that internal monitoring misses because the problem is between the user and the server.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name, server.address, thousandeyes.source.agent.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_availability < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (availability % over time), Single value (current availability), Table (test, server, agent, availability).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We check that our website actually works by testing it from many places around the world every few minutes — because a website that works from our office might be broken for customers in other countries.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.35",
              "n": "HTTP Server Response Time Tracking (ThousandEyes)",
              "c": "high",
              "f": "beginner",
              "v": "Tracks Time to First Byte (TTFB) from ThousandEyes agents to web servers. Rising response times indicate backend degradation, infrastructure bottlenecks, or increased load — often visible from external vantage points before internal monitoring catches it.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server tests)",
              "q": "`stream_index` thousandeyes.test.type=\"http-server\"\n| timechart span=5m avg(http.client.request.duration) as avg_ttfb_s by thousandeyes.test.name\n| eval avg_ttfb_ms=round(avg_ttfb_s*1000,1)",
              "m": "The OTel metric `http.client.request.duration` reports TTFB in seconds. The Splunk App Application dashboard includes an \"HTTP Server Request Duration (s)\" line chart. Alert when TTFB exceeds your SLA threshold (e.g., 2 seconds). Correlate with `http.response.status_code` to distinguish slow responses from errors.",
              "z": "Line chart (TTFB over time by test), Single value (avg TTFB), Table with drilldown to ThousandEyes.",
              "kfp": "**Cold start / first request.** The first request to an application after idle may be slow due to connection pool initialization, JIT compilation, or cache warming. Subsequent requests will be faster.\n\n**Geographic latency.** An agent in Asia testing a server in Europe will inherently have higher TTFB due to network round-trip time. Compare agents in the same region for fair application performance assessment.\n\n**TLS handshake overhead.** HTTPS requests include TLS negotiation, adding 50–150 ms depending on cipher suite and protocol version. This is expected and not a server-side issue.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe OTel metric `http.client.request.duration` reports TTFB in seconds. The Splunk App Application dashboard includes an \"HTTP Server Request Duration (s)\" line chart. Alert when TTFB exceeds your SLA threshold (e.g., 2 seconds). Correlate with `http.response.status_code` to distinguish slow responses from errors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"http-server\"\n| timechart span=5m avg(http.client.request.duration) as avg_ttfb_s by thousandeyes.test.name\n| eval avg_ttfb_ms=round(avg_ttfb_s*1000,1)\n```\n\nUnderstanding this SPL\n\n**HTTP Server Response Time Tracking (ThousandEyes)** — Tracks Time to First Byte (TTFB) from ThousandEyes agents to web servers. Rising response times indicate backend degradation, infrastructure bottlenecks, or increased load — often visible from external vantage points before internal monitoring catches it.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by thousandeyes.test.name** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **avg_ttfb_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (TTFB over time by test), Single value (avg TTFB), Table with drilldown to ThousandEyes.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We measure how quickly our website starts responding to requests from different parts of the world, because if it takes too long to get the first reply, users see a blank page and think our site is broken.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.36",
              "n": "HTTP Server Throughput Analysis (ThousandEyes)",
              "c": "medium",
              "f": "beginner",
              "v": "Measures download throughput from ThousandEyes agents to web servers, revealing bandwidth constraints or content delivery issues from the user perspective.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server tests)",
              "q": "`stream_index` thousandeyes.test.type=\"http-server\"\n| stats avg(http.server.throughput) as avg_throughput by thousandeyes.test.name, thousandeyes.source.agent.name\n| eval throughput_mbps=round(avg_throughput/1048576,2)\n| sort -throughput_mbps",
              "m": "The OTel metric `http.server.throughput` reports bytes per second. The Splunk App Application dashboard includes an \"HTTP Server Throughput (MB/s)\" line chart. Low throughput combined with high latency typically indicates a network bottleneck; low throughput with low latency suggests a server-side rate limit.",
              "z": "Line chart (throughput MB/s over time), Table (test, agent, throughput), Column chart by agent.",
              "kfp": "**Small response bodies.** For tests targeting endpoints with tiny responses (health check endpoints, API status pages), throughput is meaningless because there's insufficient data to measure transfer rate. Focus on tests targeting pages with > 10 KB response bodies.\n\n**Connection reuse.** Throughput for the first request on a new connection includes TCP and TLS setup. Subsequent requests on the same connection (keep-alive) will show higher throughput.\n\n**Server-side rate limiting.** Application-level rate limiting may cap throughput by design. This is intentional, not a problem.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe OTel metric `http.server.throughput` reports bytes per second. The Splunk App Application dashboard includes an \"HTTP Server Throughput (MB/s)\" line chart. Low throughput combined with high latency typically indicates a network bottleneck; low throughput with low latency suggests a server-side rate limit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"http-server\"\n| stats avg(http.server.throughput) as avg_throughput by thousandeyes.test.name, thousandeyes.source.agent.name\n| eval throughput_mbps=round(avg_throughput/1048576,2)\n| sort -throughput_mbps\n```\n\nUnderstanding this SPL\n\n**HTTP Server Throughput Analysis (ThousandEyes)** — Measures download throughput from ThousandEyes agents to web servers, revealing bandwidth constraints or content delivery issues from the user perspective.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name, thousandeyes.source.agent.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **throughput_mbps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (throughput MB/s over time), Table (test, agent, throughput), Column chart by agent.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We measure how fast data flows from our web servers, like checking the water pressure from a pipe — if it's too slow, large files take forever to download and applications feel sluggish.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.37",
              "n": "Page Load Completion Rate (ThousandEyes)",
              "c": "critical",
              "f": "beginner",
              "v": "Measures whether web pages fully load from the user's perspective. Incomplete page loads indicate broken resources, blocked CDN content, or JavaScript errors that prevent users from completing tasks.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Page Load tests)",
              "q": "`stream_index` thousandeyes.test.type=\"page-load\"\n| stats avg(web.page_load.completion) as avg_completion by thousandeyes.test.name, server.address\n| where avg_completion < 100\n| sort avg_completion",
              "m": "Create Page Load tests in ThousandEyes targeting critical web applications. The OTel metric `web.page_load.completion` reports 100% when the page loads successfully and 0% on error. Page Load tests automatically include underlying Agent-to-Server network tests, providing correlated network and application data.",
              "z": "Single value (completion %), Line chart (completion over time), Table (test, server, completion).",
              "kfp": "**Third-party resource failures.** Analytics scripts, ad networks, or social media widgets failing cause page load completion to drop even though the application itself works. Check the ThousandEyes waterfall view to identify which resource failed.\n\n**Content gating (CAPTCHA, login).** If the page requires authentication or presents a CAPTCHA, the browser-based test may not be able to proceed, showing 0% completion. Configure the test with appropriate credentials or exclude gated pages.\n\n**Browser rendering timeouts.** Complex pages with heavy JavaScript may exceed the test timeout (default 30 seconds). Increase the timeout in the test configuration if needed.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Page Load tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate Page Load tests in ThousandEyes targeting critical web applications. The OTel metric `web.page_load.completion` reports 100% when the page loads successfully and 0% on error. Page Load tests automatically include underlying Agent-to-Server network tests, providing correlated network and application data.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"page-load\"\n| stats avg(web.page_load.completion) as avg_completion by thousandeyes.test.name, server.address\n| where avg_completion < 100\n| sort avg_completion\n```\n\nUnderstanding this SPL\n\n**Page Load Completion Rate (ThousandEyes)** — Measures whether web pages fully load from the user's perspective. Incomplete page loads indicate broken resources, blocked CDN content, or JavaScript errors that prevent users from completing tasks.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Page Load tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_completion < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (completion %), Line chart (completion over time), Table (test, server, completion).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We open our web pages in a real browser from around the world to check that everything actually loads — all the pictures, buttons, and scripts — because a page can technically 'respond' but still be broken if one of its many pieces is missing.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.38",
              "n": "Page Load Duration Trending (ThousandEyes)",
              "c": "high",
              "f": "beginner",
              "v": "Tracks total page load time including all resources (HTML, CSS, JS, images). Trending reveals gradual degradation from growing page weight, slow third-party resources, or backend issues.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Page Load tests)",
              "q": "`stream_index` thousandeyes.test.type=\"page-load\"\n| timechart span=5m avg(web.page_load.duration) as avg_load_s by thousandeyes.test.name",
              "m": "The OTel metric `web.page_load.duration` reports total page load time in seconds. The Splunk App Application dashboard includes a \"Page Load Duration (s)\" line chart with permalink drilldown to ThousandEyes waterfall views. Alert when load duration exceeds your performance budget.",
              "z": "Line chart (load time over time), Single value (avg load time), Table with permalink drilldown.",
              "kfp": "**Heavy pages by design.** Some pages (dashboards, data-heavy reports) are inherently slow to load. Establish page-specific baselines rather than using a universal threshold.\n\n**Page load duration not reported on failure.** If `web.page_load.completion` = 0%, `web.page_load.duration` may not be reported (the page didn't finish loading). Check completion first.\n\n**Browser caching disabled in tests.** ThousandEyes Page Load tests typically clear cache between rounds, meaning every test is a cold-cache load. Real users with browser cache enabled will see faster load times.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Page Load tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe OTel metric `web.page_load.duration` reports total page load time in seconds. The Splunk App Application dashboard includes a \"Page Load Duration (s)\" line chart with permalink drilldown to ThousandEyes waterfall views. Alert when load duration exceeds your performance budget.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"page-load\"\n| timechart span=5m avg(web.page_load.duration) as avg_load_s by thousandeyes.test.name\n```\n\nUnderstanding this SPL\n\n**Page Load Duration Trending (ThousandEyes)** — Tracks total page load time including all resources (HTML, CSS, JS, images). Trending reveals gradual degradation from growing page weight, slow third-party resources, or backend issues.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Page Load tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by thousandeyes.test.name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (load time over time), Single value (avg load time), Table with permalink drilldown.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We time how long it takes our web pages to fully appear in a browser, because people leave if a page takes more than a few seconds to load.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.39",
              "n": "API Endpoint Completion Rate (ThousandEyes)",
              "c": "critical",
              "f": "beginner",
              "v": "Monitors multi-step API test completion, ensuring that entire API workflows (authentication, data retrieval, processing) succeed end-to-end from external vantage points.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (API tests)",
              "q": "`stream_index` thousandeyes.test.type=\"api\"\n| stats avg(api.completion) as avg_completion by thousandeyes.test.name\n| where avg_completion < 100\n| sort avg_completion",
              "m": "Create API tests in ThousandEyes with multi-step sequences testing your critical API workflows. The OTel metric `api.completion` reports overall completion percentage. Per-step metrics (`api.step.completion`, `api.step.duration`) are also available with the `thousandeyes.test.step` attribute. The Splunk App Application dashboard includes an \"API Completion (%)\" panel.",
              "z": "Single value (completion %), Line chart (completion over time), Table (test, completion).",
              "kfp": "**Token expiration.** If the API test relies on a long-lived API key or token that expires, all subsequent steps fail until the token is renewed. This is a test configuration issue, not an API failure.\n\n**API rate limiting.** APIs may rate-limit the ThousandEyes agent IP. If test frequency × agent count exceeds the API rate limit, some tests fail with HTTP 429. Reduce test frequency or whitelist agent IPs.\n\n**API schema changes.** When the API is intentionally updated (new fields, changed response structure), assertion-based tests may fail. Update test assertions after planned API changes.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (API tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate API tests in ThousandEyes with multi-step sequences testing your critical API workflows. The OTel metric `api.completion` reports overall completion percentage. Per-step metrics (`api.step.completion`, `api.step.duration`) are also available with the `thousandeyes.test.step` attribute. The Splunk App Application dashboard includes an \"API Completion (%)\" panel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"api\"\n| stats avg(api.completion) as avg_completion by thousandeyes.test.name\n| where avg_completion < 100\n| sort avg_completion\n```\n\nUnderstanding this SPL\n\n**API Endpoint Completion Rate (ThousandEyes)** — Monitors multi-step API test completion, ensuring that entire API workflows (authentication, data retrieval, processing) succeed end-to-end from external vantage points.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (API tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_completion < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (completion %), Line chart (completion over time), Table (test, completion).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We test our APIs by running through a real sequence of steps — like logging in, requesting data, and checking the answer — to make sure every part of the process works, not just the front door.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.40",
              "n": "API Response Time Monitoring (ThousandEyes)",
              "c": "high",
              "f": "beginner",
              "v": "Tracks total API test execution duration including all steps, revealing when API performance degrades from the consumer's perspective.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (API tests)",
              "q": "`stream_index` thousandeyes.test.type=\"api\"\n| timechart span=5m avg(api.duration) as avg_api_duration_s by thousandeyes.test.name",
              "m": "The OTel metric `api.duration` reports total API test execution time in seconds. For per-step analysis, use `api.step.duration` filtered by `thousandeyes.test.step`. The Splunk App Application dashboard includes an \"API Request Duration (s)\" line chart with permalink drilldown.",
              "z": "Line chart (API duration over time), Table (test, duration), Column chart (duration by step).",
              "kfp": "**Duration not reported on failure.** `api.duration` may not be emitted for failed tests. Check completion first.\n\n**Variable server-side processing.** Some API calls have inherently variable response times (database-heavy queries, batch operations). Establish per-test baselines.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (API tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe OTel metric `api.duration` reports total API test execution time in seconds. For per-step analysis, use `api.step.duration` filtered by `thousandeyes.test.step`. The Splunk App Application dashboard includes an \"API Request Duration (s)\" line chart with permalink drilldown.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"api\"\n| timechart span=5m avg(api.duration) as avg_api_duration_s by thousandeyes.test.name\n```\n\nUnderstanding this SPL\n\n**API Response Time Monitoring (ThousandEyes)** — Tracks total API test execution duration including all steps, revealing when API performance degrades from the consumer's perspective.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (API tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by thousandeyes.test.name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (API duration over time), Table (test, duration), Column chart (duration by step).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We time each step of our automated API tests to find exactly which part is slow — like timing each stop on a bus route to find where the delay happens.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.41",
              "n": "Transaction Test Completion Rate (ThousandEyes)",
              "c": "critical",
              "f": "intermediate",
              "v": "Transaction tests execute scripted multi-step user workflows (login, navigate, submit form, verify result). Completion rate below 100% means users cannot complete critical business processes.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Transaction tests)",
              "q": "`stream_index` thousandeyes.test.type=\"web-transactions\"\n| stats avg(web.transaction.completion) as avg_completion sum(web.transaction.errors.count) as total_errors by thousandeyes.test.name\n| where avg_completion < 100 OR total_errors > 0\n| sort avg_completion",
              "m": "Create Transaction tests in ThousandEyes using Selenium-based scripted workflows that simulate real user journeys. The OTel metric `web.transaction.completion` reports 100% on success and 0% on error. `web.transaction.errors.count` returns 1 when an error occurs and 0 otherwise. The Splunk App Application dashboard includes a \"Transaction Completion (%)\" panel.",
              "z": "Single value (completion %), Line chart (completion over time), Table (test, completion, errors).",
              "kfp": "**Script maintenance.** UI changes (moved buttons, renamed fields, new page flow) break the Selenium script. This is a script maintenance issue, not an application failure. Update scripts after UI releases.\n\n**Test environment differences.** Transaction tests running against a staging environment may see different behavior than production.\n\n**Dynamic content.** Pages with dynamic content (randomized layouts, A/B tests, geographically personalized content) may cause assertion failures.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Transaction tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate Transaction tests in ThousandEyes using Selenium-based scripted workflows that simulate real user journeys. The OTel metric `web.transaction.completion` reports 100% on success and 0% on error. `web.transaction.errors.count` returns 1 when an error occurs and 0 otherwise. The Splunk App Application dashboard includes a \"Transaction Completion (%)\" panel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"web-transactions\"\n| stats avg(web.transaction.completion) as avg_completion sum(web.transaction.errors.count) as total_errors by thousandeyes.test.name\n| where avg_completion < 100 OR total_errors > 0\n| sort avg_completion\n```\n\nUnderstanding this SPL\n\n**Transaction Test Completion Rate (ThousandEyes)** — Transaction tests execute scripted multi-step user workflows (login, navigate, submit form, verify result). Completion rate below 100% means users cannot complete critical business processes.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Transaction tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_completion < 100 OR total_errors > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (completion %), Line chart (completion over time), Table (test, completion, errors).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
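              "_exSpl": "Supplemental alert sketch (hedged, not from the app's docs): to cut noise from single failed runs, require several consecutive failing intervals before alerting. Field names match the search above; the 5-minute bucket and 3-interval threshold are illustrative assumptions.\n\n```spl\n`stream_index` thousandeyes.test.type=\"web-transactions\" earliest=-15m\n| bin _time span=5m\n| stats avg(web.transaction.completion) as avg_completion by _time, thousandeyes.test.name\n| stats count(eval(avg_completion < 100)) as failing_intervals by thousandeyes.test.name\n| where failing_intervals >= 3\n```\n\nSchedule every 5 minutes; a test only appears when all three recent buckets failed, which filters out one-off Selenium hiccups.",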
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We have a robot that goes through our website clicking buttons and filling in forms exactly like a real person would, so we can tell immediately if anything in the workflow is broken — before a real customer runs into the problem.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.42",
              "n": "Transaction Duration Analysis (ThousandEyes)",
              "c": "high",
              "f": "intermediate",
              "v": "Measures end-to-end time for complex user workflows. Slow transactions directly impact user productivity and satisfaction. Trending reveals gradual degradation across the multi-step flow.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Transaction tests)",
              "q": "`stream_index` thousandeyes.test.type=\"web-transactions\"\n| timechart span=5m avg(web.transaction.duration) as avg_transaction_s by thousandeyes.test.name",
              "m": "The OTel metric `web.transaction.duration` reports total transaction execution time in seconds (only reported when the transaction completes without errors). The Splunk App Application dashboard includes a \"Transaction Duration (s)\" line chart with permalink drilldown to ThousandEyes. ThousandEyes also supports OpenTelemetry traces for transaction tests, providing detailed span-level timing.",
              "z": "Line chart (transaction duration over time), Table (test, agent, duration), Drilldown to ThousandEyes trace view.",
              "kfp": "**Duration not reported for failed transactions.** `web.transaction.duration` is only emitted when the transaction completes successfully. A sudden drop in data volume may indicate failures, not improvement.\n\n**Script execution time variability.** Selenium script execution speed varies with agent CPU load and browser rendering performance. Small variations (< 10%) are normal.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Transaction tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe OTel metric `web.transaction.duration` reports total transaction execution time in seconds (only reported when the transaction completes without errors). The Splunk App Application dashboard includes a \"Transaction Duration (s)\" line chart with permalink drilldown to ThousandEyes. ThousandEyes also supports OpenTelemetry traces for transaction tests, providing detailed span-level timing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"web-transactions\"\n| timechart span=5m avg(web.transaction.duration) as avg_transaction_s by thousandeyes.test.name\n```\n\nUnderstanding this SPL\n\n**Transaction Duration Analysis (ThousandEyes)** — Measures end-to-end time for complex user workflows. Slow transactions directly impact user productivity and satisfaction. Trending reveals gradual degradation across the multi-step flow.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Transaction tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by thousandeyes.test.name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (transaction duration over time), Table (test, agent, duration), Drilldown to ThousandEyes trace view.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We time how long the whole process takes — from clicking 'log in' to finishing a task — because each extra second of waiting means frustrated employees and customers.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.43",
              "n": "SaaS Application Response Time Comparison (ThousandEyes)",
              "c": "high",
              "f": "intermediate",
              "v": "Compares availability and response time across business-critical SaaS applications (Microsoft 365, Salesforce, ServiceNow, etc.) from multiple office locations, enabling data-driven SaaS vendor performance management.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server / Page Load tests)",
              "q": "`stream_index` thousandeyes.test.type=\"http-server\" OR thousandeyes.test.type=\"page-load\"\n| search thousandeyes.test.name=\"*M365*\" OR thousandeyes.test.name=\"*Salesforce*\" OR thousandeyes.test.name=\"*ServiceNow*\"\n| stats avg(http.server.request.availability) as avg_avail avg(http.client.request.duration) as avg_ttfb_s by thousandeyes.test.name, thousandeyes.source.agent.location\n| eval avg_ttfb_ms=round(avg_ttfb_s*1000,1)\n| sort thousandeyes.test.name, avg_ttfb_ms",
              "m": "Create HTTP Server or Page Load tests in ThousandEyes for each SaaS application, running from Enterprise Agents at each office and Cloud Agents in relevant regions. Name tests consistently (e.g., \"M365 - Exchange Online\", \"Salesforce - Login Page\"). ThousandEyes provides best-practice monitoring guides for Microsoft 365, Salesforce, and other major SaaS platforms.",
              "z": "Column chart (TTFB by SaaS app per location), Table (app, location, availability, TTFB), Comparison dashboard.",
              "kfp": "**SaaS vendor scheduled maintenance.** SaaS providers perform maintenance that may increase response times. Check vendor status pages.\n\n**Login/auth endpoints vs app endpoints.** SaaS login pages may respond differently than authenticated application pages. Ensure tests target representative endpoints.\n\n**Test URL changes.** SaaS vendors may change endpoint URLs. Update tests when SaaS providers announce URL changes.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server / Page Load tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate HTTP Server or Page Load tests in ThousandEyes for each SaaS application, running from Enterprise Agents at each office and Cloud Agents in relevant regions. Name tests consistently (e.g., \"M365 - Exchange Online\", \"Salesforce - Login Page\"). ThousandEyes provides best-practice monitoring guides for Microsoft 365, Salesforce, and other major SaaS platforms.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"http-server\" OR thousandeyes.test.type=\"page-load\"\n| search thousandeyes.test.name=\"*M365*\" OR thousandeyes.test.name=\"*Salesforce*\" OR thousandeyes.test.name=\"*ServiceNow*\"\n| stats avg(http.server.request.availability) as avg_avail avg(http.client.request.duration) as avg_ttfb_s by thousandeyes.test.name, thousandeyes.source.agent.location\n| eval avg_ttfb_ms=round(avg_ttfb_s*1000,1)\n| sort thousandeyes.test.name, avg_ttfb_ms\n```\n\nUnderstanding this SPL\n\n**SaaS Application Response Time Comparison (ThousandEyes)** — Compares availability and response time across business-critical SaaS applications (Microsoft 365, Salesforce, ServiceNow, etc.) from multiple office locations, enabling data-driven SaaS vendor performance management.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server / Page Load tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name, thousandeyes.source.agent.location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_ttfb_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (TTFB by SaaS app per location), Table (app, location, availability, TTFB), Comparison dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We test how fast each of our cloud applications (like email, CRM, HR systems) responds from different offices around the world, so we know which app is causing problems and can talk to the vendor with real data.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.44",
              "n": "Multi-Region SaaS Availability (ThousandEyes)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors SaaS application reachability from multiple geographic regions using ThousandEyes Cloud Agents, identifying regional availability issues that affect specific user populations.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server tests)",
              "q": "`stream_index` thousandeyes.test.type=\"http-server\"\n| stats avg(http.server.request.availability) as avg_availability by thousandeyes.test.name, thousandeyes.source.agent.geo.country.iso_code, thousandeyes.source.agent.location\n| where avg_availability < 100\n| sort avg_availability",
              "m": "Deploy the same HTTP Server tests across ThousandEyes Cloud Agents in Americas, EMEA, and APAC regions. Use `thousandeyes.source.agent.geo.country.iso_code` and `thousandeyes.source.agent.location` attributes to group results by region. A service that is available from US agents but not from EU agents indicates a regional issue.",
              "z": "Map (availability by agent location), Table (region, app, availability), Column chart (availability by region).",
              "kfp": "**Regional agent connectivity issues.** If the ThousandEyes Cloud Agent in a region has local connectivity issues, it may appear that the SaaS application is unavailable from that region when the issue is actually with the test agent itself. Cross-reference with UC-5.9.22 (Local Agent Issue Monitoring).\n\n**Geo-based access restrictions.** Some SaaS applications restrict access by geographic region (GDPR, export controls). Test failures from restricted regions are expected.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy the same HTTP Server tests across ThousandEyes Cloud Agents in Americas, EMEA, and APAC regions. Use `thousandeyes.source.agent.geo.country.iso_code` and `thousandeyes.source.agent.location` attributes to group results by region. A service that is available from US agents but not from EU agents indicates a regional issue.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"http-server\"\n| stats avg(http.server.request.availability) as avg_availability by thousandeyes.test.name, thousandeyes.source.agent.geo.country.iso_code, thousandeyes.source.agent.location\n| where avg_availability < 100\n| sort avg_availability\n```\n\nUnderstanding this SPL\n\n**Multi-Region SaaS Availability (ThousandEyes)** — Monitors SaaS application reachability from multiple geographic regions using ThousandEyes Cloud Agents, identifying regional availability issues that affect specific user populations.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (HTTP Server tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name, thousandeyes.source.agent.geo.country.iso_code, thousandeyes.source.agent.location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_availability < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (availability by agent location), Table (region, app, availability), Column chart (availability by region).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We test that our cloud applications work from every part of the world where we have employees, because an app that works fine in New York might be down in Tokyo — and the vendor's status page won't tell us that.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.45",
              "n": "FTP Server Availability and Throughput (ThousandEyes)",
              "c": "medium",
              "f": "beginner",
              "v": "Monitors FTP/SFTP server availability and file transfer throughput from ThousandEyes agents, ensuring file transfer services are accessible and performing adequately for automated data exchange workflows.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (FTP Server tests)",
              "q": "`stream_index` thousandeyes.test.type=\"ftp-server\"\n| stats avg(ftp.server.request.availability) as avg_availability avg(ftp.client.request.duration) as avg_response_s avg(ftp.server.throughput) as avg_throughput by thousandeyes.test.name, server.address\n| eval avg_response_ms=round(avg_response_s*1000,1), throughput_mbps=round(avg_throughput/1048576,2)\n| sort avg_availability, -throughput_mbps",
              "m": "Create FTP Server tests in ThousandEyes for critical file transfer endpoints. The OTel metric `ftp.server.request.availability` reports availability, `ftp.client.request.duration` reports TTFB, and `ftp.server.throughput` reports bytes per second. The `ftp.request.command` attribute indicates the FTP command tested (GET, PUT, LS). The Splunk App Voice dashboard includes FTP panels.",
              "z": "Line chart (availability and throughput over time), Table (server, availability, throughput, response time), Single value.",
              "kfp": "**FTP session limits.** FTP servers often limit concurrent connections. If multiple test agents connect simultaneously and exceed the limit, some connections fail — a test artifact, not a server problem. Stagger test schedules across agents.\n\n**Credential rotation.** FTP tests using username/password authentication fail when credentials are rotated. Update test credentials after rotation.\n\n**Firewall/NAT issues.** Active-mode FTP requires the server to initiate data connections back to the client, which is often blocked by firewalls. Use passive mode (PASV) in test configuration.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (FTP Server tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate FTP Server tests in ThousandEyes for critical file transfer endpoints. The OTel metric `ftp.server.request.availability` reports availability, `ftp.client.request.duration` reports TTFB, and `ftp.server.throughput` reports bytes per second. The `ftp.request.command` attribute indicates the FTP command tested (GET, PUT, LS). The Splunk App Voice dashboard includes FTP panels.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"ftp-server\"\n| stats avg(ftp.server.request.availability) as avg_availability avg(ftp.client.request.duration) as avg_response_s avg(ftp.server.throughput) as avg_throughput by thousandeyes.test.name, server.address\n| eval avg_response_ms=round(avg_response_s*1000,1), throughput_mbps=round(avg_throughput/1048576,2)\n| sort avg_availability, -throughput_mbps\n```\n\nUnderstanding this SPL\n\n**FTP Server Availability and Throughput (ThousandEyes)** — Monitors FTP/SFTP server availability and file transfer throughput from ThousandEyes agents, ensuring file transfer services are accessible and performing adequately for automated data exchange workflows.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (FTP Server tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_response_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (availability and throughput over time), Table (server, availability, throughput, response time), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We test our file transfer servers to make sure they're working and fast enough, because many important business processes depend on files being sent and received on time.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.46",
              "n": "ThousandEyes Alert Severity Distribution",
              "c": "high",
              "f": "beginner",
              "v": "Provides a centralized view of all ThousandEyes alerts in Splunk by severity, enabling SOC and NOC teams to prioritize response across network, application, and voice test alerts alongside other infrastructure alerts.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes Alerts Stream (webhook via HEC)",
              "q": "`stream_index` sourcetype=\"thousandeyes:alerts\"\n| stats count by severity, alert.rule.name, alert.test.name, alert.type\n| sort severity, -count",
              "m": "Configure the Alerts Stream input in the Cisco ThousandEyes App for Splunk. Select the ThousandEyes user, account group, and alert rules to receive. The app automatically creates a webhook connector in ThousandEyes and associates it with selected alert rules. Alerts flow in real-time to Splunk via HEC. The Splunk App Alerts dashboard provides pre-built panels for alert severity distribution, timeline, and drilldown.",
              "z": "Pie chart (alerts by severity), Bar chart (alerts by type), Table (rule, test, severity, count), Single value (active critical alerts).",
              "kfp": "This is a meta-analysis UC — it analyzes alerting patterns, not network issues. No false positives in the traditional sense. However, alert counts may be inflated by test configuration (more agents per test = more alert instances per incident).",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes Alerts Stream (webhook via HEC).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure the Alerts Stream input in the Cisco ThousandEyes App for Splunk. Select the ThousandEyes user, account group, and alert rules to receive. The app automatically creates a webhook connector in ThousandEyes and associates it with selected alert rules. Alerts flow in real-time to Splunk via HEC. The Splunk App Alerts dashboard provides pre-built panels for alert severity distribution, timeline, and drilldown.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` sourcetype=\"thousandeyes:alerts\"\n| stats count by severity, alert.rule.name, alert.test.name, alert.type\n| sort severity, -count\n```\n\nUnderstanding this SPL\n\n**ThousandEyes Alert Severity Distribution** — Provides a centralized view of all ThousandEyes alerts in Splunk by severity, enabling SOC and NOC teams to prioritize response across network, application, and voice test alerts alongside other infrastructure alerts.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes Alerts Stream (webhook via HEC). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: thousandeyes:alerts. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by severity, alert.rule.name, alert.test.name, alert.type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (alerts by severity), Bar chart (alerts by type), Table (rule, test, severity, count), Single value (active critical alerts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We count and sort all our network alerts to find the ones that cry wolf too often, so we can fix them and make sure the real emergencies get noticed.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.47",
              "n": "ThousandEyes Alert Timeline Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Trending alert volume over time reveals patterns — recurring issues at specific times, increasing alert frequency indicating degradation, or correlation with change windows. Helps teams move from reactive to proactive operations.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes Alerts Stream (webhook via HEC)",
              "q": "`stream_index` sourcetype=\"thousandeyes:alerts\"\n| timechart span=1h count by severity",
              "m": "The Splunk App Alerts dashboard includes a \"Alerts Timeline\" line chart and a \"Severity Distribution Trend\" chart. Use these pre-built panels or customize with the `stream_index` macro. Set adaptive alerts on alert volume increases — a sudden spike in ThousandEyes alerts often precedes user-reported incidents. Correlate alert timing with change management windows.",
              "z": "Line chart (alerts over time by severity), Stacked bar chart (alerts per hour), Table (trending alert rules).",
              "kfp": "**Alert rule changes.** Adding new alert rules or modifying thresholds causes step changes in alert volume. Annotate the timechart with alert rule change dates.\n\n**Test addition/removal.** Adding new tests increases alert volume; removing tests decreases it. These are configuration changes, not infrastructure changes.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes Alerts Stream (webhook via HEC).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe Splunk App Alerts dashboard includes a \"Alerts Timeline\" line chart and a \"Severity Distribution Trend\" chart. Use these pre-built panels or customize with the `stream_index` macro. Set adaptive alerts on alert volume increases — a sudden spike in ThousandEyes alerts often precedes user-reported incidents. Correlate alert timing with change management windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` sourcetype=\"thousandeyes:alerts\"\n| timechart span=1h count by severity\n```\n\nUnderstanding this SPL\n\n**ThousandEyes Alert Timeline Trending** — Trending alert volume over time reveals patterns — recurring issues at specific times, increasing alert frequency indicating degradation, or correlation with change windows. Helps teams move from reactive to proactive operations.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes Alerts Stream (webhook via HEC). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: thousandeyes:alerts. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by severity** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (alerts over time by severity), Stacked bar chart (alerts per hour), Table (trending alert rules).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We chart how many network alerts we get each day over a month, so we can see if problems are getting worse or if our fixes are actually helping.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.48",
              "n": "ThousandEyes Activity Log Audit Trail",
              "c": "medium",
              "f": "beginner",
              "v": "Ingests ThousandEyes platform activity logs into Splunk for audit, compliance, and change tracking. Tracks who created, modified, or deleted tests, users, and alert rules — essential for troubleshooting test behavior changes and meeting compliance requirements.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes Activity Log API",
              "q": "`activity_index`\n| stats count by event, accountGroupName, aid\n| sort -count",
              "m": "Configure the Activity Log input in the Cisco ThousandEyes App with a ThousandEyes user and account group. Activity logs are fetched at a configurable interval via the ThousandEyes API. Update the `activity_index` macro to point to the correct index. Events include test creation/modification/deletion, user management, alert rule changes, and account group configuration changes.",
              "z": "Table (event type, account group, count), Timeline (activity events), Pie chart (activity by event type).",
              "kfp": "**Automated configuration changes.** If ThousandEyes is managed via API (Terraform, Ansible, or scripts), automated changes generate activity log entries. Tag automated users/service accounts to distinguish from manual changes.\n\n**ThousandEyes support access.** Cisco/ThousandEyes support engineers may access the account during support tickets. Verify with open support tickets.\n\n**Bulk operations.** Renaming tests, updating alert rules across many tests, or deploying new agents generates many activity log entries. Correlate with planned change tickets.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes Activity Log API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure the Activity Log input in the Cisco ThousandEyes App with a ThousandEyes user and account group. Activity logs are fetched at a configurable interval via the ThousandEyes API. Update the `activity_index` macro to point to the correct index. Events include test creation/modification/deletion, user management, alert rule changes, and account group configuration changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`activity_index`\n| stats count by event, accountGroupName, aid\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ThousandEyes Activity Log Audit Trail** — Ingests ThousandEyes platform activity logs into Splunk for audit, compliance, and change tracking. Tracks who created, modified, or deleted tests, users, and alert rules — essential for troubleshooting test behavior changes and meeting compliance requirements.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes Activity Log API. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `activity_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by event, accountGroupName, aid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (event type, account group, count), Timeline (activity events), Pie chart (activity by event type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We keep a diary of who changed what in our network monitoring system, so if a test suddenly disappears or an alert stops working, we know who did it and when.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.49",
              "n": "ThousandEyes Data Collection Health Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors the health of the ThousandEyes-to-Splunk data pipeline itself. Detects gaps in data collection, API errors, or HEC delivery failures that would cause blind spots in network and application monitoring.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, Splunk internal logs",
              "q": "`stream_index`\n| timechart span=5m count as event_count\n| where event_count < 1",
              "m": "Monitor the data flow from ThousandEyes to Splunk by tracking event volume per collection interval. A drop to zero events indicates a pipeline failure — possible causes include expired ThousandEyes API tokens, HEC token issues, or ThousandEyes streaming configuration changes. Combine with `index=_internal sourcetype=splunkd component=HttpInputDataHandler` to monitor HEC health. The Splunk App Health dashboard provides data freshness panels.",
              "z": "Line chart (event volume over time), Single value (events in last 5 min), Alert on zero events for >15 min.",
              "kfp": "**Low test volume environments.** If you have very few ThousandEyes tests (< 5), event volume may naturally be low. Adjust the threshold to match your expected data volume.\n\n**Off-hours data reduction.** If tests are scheduled only during business hours, data volume drops at night. Account for this in your baseline.\n\n**Splunk maintenance.** Splunk restarts or indexer maintenance may cause brief data gaps.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, Splunk internal logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor the data flow from ThousandEyes to Splunk by tracking event volume per collection interval. A drop to zero events indicates a pipeline failure — possible causes include expired ThousandEyes API tokens, HEC token issues, or ThousandEyes streaming configuration changes. Combine with `index=_internal sourcetype=splunkd component=HttpInputDataHandler` to monitor HEC health. The Splunk App Health dashboard provides data freshness panels.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index`\n| timechart span=5m count as event_count\n| where event_count < 1\n```\n\nUnderstanding this SPL\n\n**ThousandEyes Data Collection Health Monitoring** — Monitors the health of the ThousandEyes-to-Splunk data pipeline itself. Detects gaps in data collection, API errors, or HEC delivery failures that would cause blind spots in network and application monitoring.\n\nDocumented **Data sources**: `index=thousandeyes`, Splunk internal logs. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where event_count < 1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (event volume over time), Single value (events in last 5 min), Alert on zero events for >15 min.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We watch the watcher — making sure the system that monitors our network is actually running, because if the monitoring system breaks, nobody notices until something goes very wrong and we didn't get a warning.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.50",
              "n": "ThousandEyes ITSI Service Health (Content Pack)",
              "c": "high",
              "f": "intermediate",
              "v": "The ITSI Content Pack for Cisco ThousandEyes provides pre-built service templates, KPI base searches, entity types, and Glass Tables for service-centric monitoring. It maps ThousandEyes test results to ITSI services for unified health scoring across all monitoring domains.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719), ITSI Content Pack for Cisco ThousandEyes",
              "d": "`index=thousandeyes`, ThousandEyes OTel data via ITSI KPI base searches",
              "q": "| from datamodel:\"ITSI_KPI_Summary\"\n| where service_name=\"*ThousandEyes*\"\n| stats latest(kpi_urgency) as urgency latest(alert_level) as alert_level by service_name, kpiid, itsi_kpi_id\n| sort -urgency",
              "m": "Install the ITSI Content Pack for Cisco ThousandEyes from the ITSI Content Library. The content pack provides: entity types (ThousandEyes Test, ThousandEyes Agent), KPI base searches (latency, loss, jitter, availability, MOS for each test type), service templates, and Glass Table templates. After installation, import the service templates and configure entity discovery to match your ThousandEyes tests. KPIs are automatically populated from the ThousandEyes data model.",
              "z": "ITSI Service Tree, Glass Table, KPI cards (latency, loss, availability, MOS), Service health score.",
              "kfp": "**ITSI threshold misconfiguration.** If ITSI KPI thresholds are set too aggressively, ThousandEyes-fed KPIs may show \"Critical\" for normal network fluctuations. Use ITSI's adaptive thresholding or calibrate thresholds based on UC-5.9.1 baseline data.\n\n**KPI data lag.** If the KPI base search schedule doesn't align with ThousandEyes data arrival, KPIs may show stale data. Ensure KPI search frequency matches or exceeds test frequency.\n\n**Entity mapping issues.** If ITSI entities don't correctly map to ThousandEyes agents or tests, KPI calculations may include or exclude wrong data.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719), ITSI Content Pack for Cisco ThousandEyes.\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel data via ITSI KPI base searches.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall the ITSI Content Pack for Cisco ThousandEyes from the ITSI Content Library. The content pack provides: entity types (ThousandEyes Test, ThousandEyes Agent), KPI base searches (latency, loss, jitter, availability, MOS for each test type), service templates, and Glass Table templates. After installation, import the service templates and configure entity discovery to match your ThousandEyes tests. KPIs are automatically populated from the ThousandEyes data model.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| from datamodel:\"ITSI_KPI_Summary\"\n| where service_name=\"*ThousandEyes*\"\n| stats latest(kpi_urgency) as urgency latest(alert_level) as alert_level by service_name, kpiid, itsi_kpi_id\n| sort -urgency\n```\n\nUnderstanding this SPL\n\n**ThousandEyes ITSI Service Health (Content Pack)** — The ITSI Content Pack for Cisco ThousandEyes provides pre-built service templates, KPI base searches, entity types, and Glass Tables for service-centric monitoring. It maps ThousandEyes test results to ITSI services for unified health scoring across all monitoring domains.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel data via ITSI KPI base searches. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719), ITSI Content Pack for Cisco ThousandEyes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `from` (dataset / Federated Search) — verify dataset availability and permissions.\n• Filters the current rows with `where service_name=\"*ThousandEyes*\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by service_name, kpiid, itsi_kpi_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: ITSI Service Tree, Glass Table, KPI cards (latency, loss, availability, MOS), Service health score.",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We plug our network monitoring data into a bigger system that shows the health of entire business services — so instead of seeing 50 separate network metrics, the team sees one health score for 'Customer Portal' that combines network, application, and server health into a single view.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "itsi"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.51",
              "n": "Splunk On-Call Incident Routing from ThousandEyes",
              "c": "medium",
              "f": "beginner",
              "v": "Routes ThousandEyes alerts directly to Splunk On-Call (formerly VictorOps) for incident management, on-call paging, and war room coordination. Ensures network and application issues detected by ThousandEyes reach the right team within seconds.",
              "t": "ThousandEyes webhook integration with Splunk On-Call",
              "d": "ThousandEyes alert webhooks",
              "q": "index=oncall sourcetype=\"oncall:incidents\" monitoring_tool=\"ThousandEyes\"\n| stats count by incident_state, routing_key, entity_id\n| sort -count",
              "m": "Configure ThousandEyes to send alert notifications to Splunk On-Call via the REST API endpoint webhook integration. In ThousandEyes, create a webhook notification pointing to the Splunk On-Call REST endpoint URL with your routing key. Map ThousandEyes alert severity to Splunk On-Call incident severity (critical→critical, warning→warning, info→info). The integration supports recovery messages to automatically resolve incidents when ThousandEyes alerts clear.",
              "z": "Table (incidents by state and routing key), Timeline (incident creation/resolution), Single value (active incidents from ThousandEyes).",
              "kfp": "**Alert noise → incident noise.** If ThousandEyes alert rules aren't well-tuned (UC-5.9.46), noisy alerts become noisy incidents, causing on-call fatigue. Tune ThousandEyes alerting BEFORE routing to On-Call.\n\n**Duplicate incidents.** Multiple ThousandEyes agents may fire the same alert for the same network issue, creating multiple On-Call incidents. Use dedup in the Splunk search or incident deduplication rules in On-Call.\n\n**Incidents for transient issues.** Brief network blips may trigger alerts that clear within minutes. Consider adding a delay or requiring consecutive alert rounds before creating an On-Call incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ThousandEyes webhook integration with Splunk On-Call.\n• Ensure the following data sources are available: ThousandEyes alert webhooks.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure ThousandEyes to send alert notifications to Splunk On-Call via the REST API endpoint webhook integration. In ThousandEyes, create a webhook notification pointing to the Splunk On-Call REST endpoint URL with your routing key. Map ThousandEyes alert severity to Splunk On-Call incident severity (critical→critical, warning→warning, info→info). The integration supports recovery messages to automatically resolve incidents when ThousandEyes alerts clear.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=oncall sourcetype=\"oncall:incidents\" monitoring_tool=\"ThousandEyes\"\n| stats count by incident_state, routing_key, entity_id\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Splunk On-Call Incident Routing from ThousandEyes** — Routes ThousandEyes alerts directly to Splunk On-Call (formerly VictorOps) for incident management, on-call paging, and war room coordination. Ensures network and application issues detected by ThousandEyes reach the right team within seconds.\n\nDocumented **Data sources**: ThousandEyes alert webhooks. **App/TA** (typical add-on context): ThousandEyes webhook integration with Splunk On-Call. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: oncall; **sourcetype**: oncall:incidents. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=oncall, sourcetype=\"oncall:incidents\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by incident_state, routing_key, entity_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (incidents by state and routing key), Timeline (incident creation/resolution), Single value (active incidents from ThousandEyes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We automatically wake up the on-call engineer when a critical network problem is detected, so important issues get fixed at 3 AM instead of waiting until someone checks their dashboard in the morning.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.52",
              "n": "ThousandEyes Trace Span Analysis and Drill-Down",
              "c": "medium",
              "f": "advanced",
              "v": "ThousandEyes Transaction tests can emit OpenTelemetry traces with span-level timing for each step of the scripted workflow. Ingesting these traces into Splunk enables correlation with application traces from Splunk APM for end-to-end distributed tracing.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Traces",
              "q": "`stream_index` sourcetype=\"thousandeyes:traces\"\n| stats count avg(duration_ms) as avg_span_duration_ms by service.name, span.name, span.kind\n| sort -avg_span_duration_ms",
              "m": "Enable the Tests Stream — Traces input in the Cisco ThousandEyes App. Traces are emitted for Transaction tests and provide span-level timing for each step of the scripted workflow. The trace data follows OpenTelemetry conventions with `trace_id`, `span_id`, `parent_span_id`, `service.name`, `span.name`, `duration`, and custom attributes. Traces can be correlated with Splunk APM traces using shared context propagation.",
              "z": "Table (spans by duration), Trace waterfall (via Splunk APM or custom visualization), Bar chart (avg span duration by step).",
              "kfp": "**Component timing approximation.** The breakdown is approximate because DNS, network, and HTTP tests may run at slightly different times within a test round. The sum of components may not exactly equal the total TTFB.\n\n**DNS caching effects.** If DNS is cached (resolved once and reused), DNS timing may appear near-zero, making it seem like DNS is not a factor. This is correct — but a cache miss during a TTL expiration event would show the true DNS cost.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Traces.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable the Tests Stream — Traces input in the Cisco ThousandEyes App. Traces are emitted for Transaction tests and provide span-level timing for each step of the scripted workflow. The trace data follows OpenTelemetry conventions with `trace_id`, `span_id`, `parent_span_id`, `service.name`, `span.name`, `duration`, and custom attributes. Traces can be correlated with Splunk APM traces using shared context propagation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` sourcetype=\"thousandeyes:traces\"\n| stats count avg(duration_ms) as avg_span_duration_ms by service.name, span.name, span.kind\n| sort -avg_span_duration_ms\n```\n\nUnderstanding this SPL\n\n**ThousandEyes Trace Span Analysis and Drill-Down** — ThousandEyes Transaction tests can emit OpenTelemetry traces with span-level timing for each step of the scripted workflow. Ingesting these traces into Splunk enables correlation with application traces from Splunk APM for end-to-end distributed tracing.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Traces. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: thousandeyes:traces. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by service.name, span.name, span.kind** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (spans by duration), Trace waterfall (via Splunk APM or custom visualization), Bar chart (avg span duration by step).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "We break down every web request into its component steps — looking up the address, connecting, encrypting, processing, and downloading — so when something is slow, we know exactly which step to fix instead of guessing.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.53",
              "n": "Cross-Platform Correlation (ThousandEyes Network + Splunk APM)",
              "c": "high",
              "f": "advanced",
              "v": "Correlates ThousandEyes network path quality data with Splunk APM application traces to determine whether performance issues are caused by the network or the application. This is the core value proposition of the Splunk + ThousandEyes integration — unified observability across network and application layers.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719), Splunk APM",
              "d": "`index=thousandeyes` (network metrics), Splunk APM traces",
              "q": "`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| bin _time span=5m\n| stats avg(network.latency) as avg_net_latency_s avg(network.loss) as avg_net_loss by server.address, _time\n| join type=outer max=1 server.address [\n  search index=apm_traces\n  | bin _time span=5m\n| stats avg(duration_ms) as avg_app_latency_ms p99(duration_ms) as p99_app_latency_ms by service.name, server.address, _time\n]\n| eval avg_net_latency_ms=round(avg_net_latency_s*1000,1)\n| eval root_cause=case(avg_net_latency_ms>200 AND avg_app_latency_ms<500, \"Network\", avg_net_latency_ms<50 AND avg_app_latency_ms>2000, \"Application\", avg_net_latency_ms>200 AND avg_app_latency_ms>2000, \"Both\", 1=1, \"Normal\")\n| where root_cause!=\"Normal\"\n| table _time, server.address, service.name, avg_net_latency_ms, avg_net_loss, avg_app_latency_ms, root_cause",
              "m": "This correlation requires both ThousandEyes network data and Splunk APM trace data indexed in Splunk. The key join field is the server address or service endpoint. When network latency is high but application processing is fast, the network is the bottleneck. When network latency is low but application response is slow, the issue is in the application. This \"network vs. app\" isolation significantly reduces MTTR by directing the right team to investigate.",
              "z": "Table (endpoint, network latency, app latency, root cause), Dual-axis chart (network vs app latency), Dashboard with network and app panels side-by-side.",
              "kfp": "**Correlation key mismatch.** ThousandEyes uses `server.address` (IP or hostname), APM may use service names, and infrastructure monitoring may use hostnames. If the naming doesn't match, the join produces empty results. Use a lookup table to map between naming conventions.\n\n**Temporal misalignment.** ThousandEyes tests run at fixed intervals (e.g., every 2 minutes), APM samples continuously, and infrastructure metrics may report every 10 seconds. Time windows must be broad enough to capture data from all sources.\n\n**Correlation ≠ causation.** High network latency and high application response time at the same time doesn't prove the network caused the application slowness. The application could be independently slow, causing network retransmissions. Use ThousandEyes path visualization (UC-5.9.5) to confirm network-layer issues.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719), Splunk APM.\n• Ensure the following data sources are available: `index=thousandeyes` (network metrics), Splunk APM traces.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThis correlation requires both ThousandEyes network data and Splunk APM trace data indexed in Splunk. The key join field is the server address or service endpoint. When network latency is high but application processing is fast, the network is the bottleneck. When network latency is low but application response is slow, the issue is in the application. This \"network vs. app\" isolation significantly reduces MTTR by directing the right team to investigate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| bin _time span=5m\n| stats avg(network.latency) as avg_net_latency_s avg(network.loss) as avg_net_loss by server.address, _time\n| join type=outer max=1 server.address [\n  search index=apm_traces\n  | bin _time span=5m\n| stats avg(duration_ms) as avg_app_latency_ms p99(duration_ms) as p99_app_latency_ms by service.name, server.address, _time\n]\n| eval avg_net_latency_ms=round(avg_net_latency_s*1000,1)\n| eval root_cause=case(avg_net_latency_ms>200 AND avg_app_latency_ms<500, \"Network\", avg_net_latency_ms<50 AND avg_app_latency_ms>2000, \"Application\", avg_net_latency_ms>200 AND avg_app_latency_ms>2000, \"Both\", 1=1, \"Normal\")\n| where root_cause!=\"Normal\"\n| table _time, server.address, service.name, avg_net_latency_ms, avg_net_loss, avg_app_latency_ms, root_cause\n```\n\nUnderstanding this SPL\n\n**Cross-Platform Correlation (ThousandEyes Network + Splunk APM)** — Correlates ThousandEyes network path quality data with Splunk APM application traces to determine whether performance issues are caused by the network or the application. This is the core value proposition of the Splunk + ThousandEyes integration — unified observability across network and application layers.\n\nDocumented **Data sources**: `index=thousandeyes` (network metrics), Splunk APM traces. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719), Splunk APM. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by server.address, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **avg_net_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **root_cause** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where root_cause!=\"Normal\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cross-Platform Correlation (ThousandEyes Network + Splunk APM)**): table _time, server.address, service.name, avg_net_latency_ms, avg_net_loss, avg_app_latency_ms, root_cause\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (endpoint, network latency, app latency, root cause), Dual-axis chart (network vs app latency), Dashboard with network and app panels side-by-side.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "When something is slow, we check the network, the application, and the servers all at the same time in one place, so we can immediately tell which part is broken instead of having three teams argue about whose fault it is.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.9.54",
              "n": "MTTR Reduction via Network vs Application Isolation",
              "c": "high",
              "f": "advanced",
              "v": "Quantifies the business value of ThousandEyes + Splunk integration by measuring how quickly teams can isolate whether a performance issue is network-caused or application-caused. Tracks Mean Time to Resolution and Mean Time to Isolate metrics for incidents where ThousandEyes data was available.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes` (alerts, events), incident management system data",
              "q": "`stream_index` sourcetype=\"thousandeyes:alerts\"\n| stats earliest(_time) as alert_start latest(_time) as alert_end by alert.rule.name, alert.test.name\n| eval mtti_minutes=round((alert_end-alert_start)/60,1)\n| join type=outer max=1 alert.test.name [\n  search `event_index`\n  | stats earliest(_time) as event_start latest(state) as final_state by thousandeyes.test.name\n  | rename thousandeyes.test.name as alert.test.name\n]\n| eval isolation_method=if(isnotnull(event_start), \"ThousandEyes Event + Alert\", \"ThousandEyes Alert Only\")\n| stats avg(mtti_minutes) as avg_mtti count by isolation_method",
              "m": "This meta-analysis use case measures how ThousandEyes data accelerates incident resolution. Track the time from ThousandEyes alert trigger to resolution (MTTR). Compare MTTR for incidents where ThousandEyes data was available vs. those without. Over time, this demonstrates the ROI of the ThousandEyes + Splunk integration. Combine with ITSM data (ServiceNow, Jira Service Management) for complete MTTR tracking.",
              "z": "Single value (avg MTTR with ThousandEyes), Comparison chart (MTTR with vs. without TE data), Table (incidents and isolation times), Trend line (MTTR improvement over time).",
              "kfp": "**Fault domain misclassification.** An internal network issue (e.g., a misconfigured firewall) may not generate an Internet Insights event but may also not show clearly in path visualization if the affected hop doesn't respond to ICMP. The absence of an external event doesn't definitively prove an internal fault.\n\n**Partial outages.** An ISP issue affecting only some agents may not trigger an Internet Insights event (which requires a threshold of affected vantage points). The fault domain may classify as Internal when it's actually a localized external issue.\n\n**Stale path visualization data.** Path visualization data is polled via API, not pushed in real-time. During the first minutes of an incident, path data may not yet reflect the current state. Wait at least 10 minutes before relying on path data for fault isolation.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes` (alerts, events), incident management system data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThis meta-analysis use case measures how ThousandEyes data accelerates incident resolution. Track the time from ThousandEyes alert trigger to resolution (MTTR). Compare MTTR for incidents where ThousandEyes data was available vs. those without. Over time, this demonstrates the ROI of the ThousandEyes + Splunk integration. Combine with ITSM data (ServiceNow, Jira Service Management) for complete MTTR tracking.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` sourcetype=\"thousandeyes:alerts\"\n| stats earliest(_time) as alert_start latest(_time) as alert_end by alert.rule.name, alert.test.name\n| eval mtti_minutes=round((alert_end-alert_start)/60,1)\n| join type=outer max=1 alert.test.name [\n  search `event_index`\n  | stats earliest(_time) as event_start latest(state) as final_state by thousandeyes.test.name\n  | rename thousandeyes.test.name as alert.test.name\n]\n| eval isolation_method=if(isnotnull(event_start), \"ThousandEyes Event + Alert\", \"ThousandEyes Alert Only\")\n| stats avg(mtti_minutes) as avg_mtti count by isolation_method\n```\n\nUnderstanding this SPL\n\n**MTTR Reduction via Network vs Application Isolation** — Quantifies the business value of ThousandEyes + Splunk integration by measuring how quickly teams can isolate whether a performance issue is network-caused or application-caused. Tracks Mean Time to Resolution and Mean Time to Isolate metrics for incidents where ThousandEyes data was available.\n\nDocumented **Data sources**: `index=thousandeyes` (alerts, events), incident management system data. **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: thousandeyes:alerts. 
If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by alert.rule.name, alert.test.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **mtti_minutes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **isolation_method** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by isolation_method** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (avg MTTR with ThousandEyes), Comparison chart (MTTR with vs. without TE data), Table (incidents and isolation times), Trend line (MTTR improvement over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-05-02",
              "sver": "",
              "rby": "",
              "ge": "When the network breaks, we immediately figure out whether it's our problem or the internet's problem, and exactly which piece of equipment is causing the trouble, so the team can fix things in minutes instead of spending hours trying to find the problem.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.1,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 54,
            "none": 0
          }
        },
        {
          "i": "5.10",
          "n": "Carrier and Service Provider Signaling",
          "u": [
            {
              "i": "5.10.1",
              "n": "Diameter Signaling Health Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Tracks the success and failure rates of Diameter signaling messages (authentication, authorization, accounting) in the mobile core, essential for maintaining service availability and subscriber experience.",
              "t": "`Splunk App for Stream` (Splunkbase #1809)",
              "d": "`sourcetype=stream:diameter`",
              "q": "sourcetype=\"stream:diameter\"\n| stats count by command_code, result_code, origin_host, application_id\n| eval status=if(result_code==2001, \"Success\", \"Failure\")\n| stats sum(eval(if(status==\"Success\", 1, 0))) as successful, sum(eval(if(status==\"Failure\", 1, 0))) as failed by command_code, application_id\n| eval success_rate=round(successful*100/(successful+failed), 2)\n| where failed>0 OR success_rate<99",
              "m": "Install Splunk App for Stream and configure it to capture Diameter protocol traffic on the core network. Enable the Diameter protocol for full field extraction. Monitor `command_code` and `result_code` to detect signaling issues. Create alerts for sustained drops in success rate or spikes in failure codes such as DIAMETER_AUTHENTICATION_REJECTED (5003) or DIAMETER_UNABLE_TO_DELIVER (3002).",
              "z": "Single value (overall Diameter success rate with color-coded threshold: green >99%, yellow 95-99%, red <95%), Pie chart (failure breakdown by command_code), Table (origin_host, command_code, result_code, count — sortable), Line chart (success rate trend over 24h with 15-min buckets).",
              "kfp": "Planned Diameter work, HSS profile pushes, and roaming test campaigns can depress success rates in narrow windows. Compare to the PCRF and STP maintenance calendar.",
              "refs": "[REJECTED](https://splunkbase.splunk.com/app/5003)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk App for Stream` (Splunkbase #1809).\n• Ensure the following data sources are available: `sourcetype=stream:diameter`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall Splunk App for Stream and configure it to capture Diameter protocol traffic on the core network. Enable the Diameter protocol for full field extraction. Monitor `command_code` and `result_code` to detect signaling issues. Create alerts for sustained drops in success rate or spikes in failure codes such as DIAMETER_AUTHENTICATION_REJECTED (5003) or DIAMETER_UNABLE_TO_DELIVER (3002).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsourcetype=\"stream:diameter\"\n| stats count by command_code, result_code, origin_host, application_id\n| eval status=if(result_code==2001, \"Success\", \"Failure\")\n| stats sum(eval(if(status==\"Success\", 1, 0))) as successful, sum(eval(if(status==\"Failure\", 1, 0))) as failed by command_code, application_id\n| eval success_rate=round(successful*100/(successful+failed), 2)\n| where failed>0 OR success_rate<99\n```\n\nUnderstanding this SPL\n\n**Diameter Signaling Health Monitoring** — Tracks the success and failure rates of Diameter signaling messages (authentication, authorization, accounting) in the mobile core, essential for maintaining service availability and subscriber experience.\n\nDocumented **Data sources**: `sourcetype=stream:diameter`. **App/TA** (typical add-on context): `Splunk App for Stream` (Splunkbase #1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: stream:diameter. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: sourcetype=\"stream:diameter\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by command_code, result_code, origin_host, application_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by command_code, application_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **success_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failed>0 OR success_rate<99` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Single value (overall Diameter success rate with color-coded threshold: green >99%, yellow 95-99%, red <95%), Pie chart (failure breakdown by command_code), Table (origin_host, command_code, result_code, count — sortable), Line chart (success rate trend over 24h with 15-min buckets).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track whether the mobile core’s sign-in and billing handshakes succeed, so a signaling problem is caught before millions of people lose service.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "ind": "Telecommunications",
              "tuc": "IMS Core and VoLTE Monitoring (50 Ways #16)",
              "a": [
                "N/A"
              ],
              "e": [
                "stream"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.10.2",
              "n": "Diameter Subscriber Data Accounting",
              "c": "high",
              "f": "intermediate",
              "v": "Aggregates Diameter accounting records to track data usage per subscriber and session, enabling detection of high-usage anomalies, billing reconciliation, and capacity planning.",
              "t": "`Splunk App for Stream` (Splunkbase #1809)",
              "d": "`sourcetype=stream:diameter`",
              "q": "sourcetype=\"stream:diameter\" command_code=271\n| eval total_bytes=acct_input_octets+acct_output_octets\n| eval total_MB=round(total_bytes/1048576, 2)\n| stats sum(total_MB) as total_data_MB, count as session_count by calling_station_id, origin_host\n| sort -total_data_MB\n| head 100",
              "m": "Configure Splunk App for Stream to capture Diameter Accounting-Request (ACR, command_code 271) and Accounting-Answer (ACA, command_code 271) messages. The fields `acct_input_octets` and `acct_output_octets` provide byte counts per session. Correlate with `calling_station_id` (subscriber MSISDN/IMSI) to build per-subscriber usage profiles. Set alerts for subscribers exceeding data thresholds.",
              "z": "Bar chart (top 20 subscribers by data usage in MB), Table (calling_station_id, origin_host, total_data_MB, session_count — sortable), Line chart (aggregate data volume trend over 7 days), Single value (total Diameter accounting sessions).",
              "kfp": "Planned Diameter work, HSS profile pushes, and roaming test campaigns can depress success rates in narrow windows. Compare to the PCRF and STP maintenance calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk App for Stream` (Splunkbase #1809).\n• Ensure the following data sources are available: `sourcetype=stream:diameter`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk App for Stream to capture Diameter Accounting-Request (ACR, command_code 271) and Accounting-Answer (ACA, command_code 271) messages. The fields `acct_input_octets` and `acct_output_octets` provide byte counts per session. Correlate with `calling_station_id` (subscriber MSISDN/IMSI) to build per-subscriber usage profiles. Set alerts for subscribers exceeding data thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsourcetype=\"stream:diameter\" command_code=271\n| eval total_bytes=acct_input_octets+acct_output_octets\n| eval total_MB=round(total_bytes/1048576, 2)\n| stats sum(total_MB) as total_data_MB, count as session_count by calling_station_id, origin_host\n| sort -total_data_MB\n| head 100\n```\n\nUnderstanding this SPL\n\n**Diameter Subscriber Data Accounting** — Aggregates Diameter accounting records to track data usage per subscriber and session, enabling detection of high-usage anomalies, billing reconciliation, and capacity planning.\n\nDocumented **Data sources**: `sourcetype=stream:diameter`. **App/TA** (typical add-on context): `Splunk App for Stream` (Splunkbase #1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: stream:diameter. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: sourcetype=\"stream:diameter\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **total_bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **total_MB** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by calling_station_id, origin_host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top 20 subscribers by data usage in MB), Table (calling_station_id, origin_host, total_data_MB, session_count — sortable), Line chart (aggregate data volume trend over 7 days), Single value (total Diameter accounting sessions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add up data volume from the mobile core’s per-subscriber accounting feed so unusual usage or missing billing shows up before finance or customers are surprised.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "ind": "Telecommunications",
              "tuc": "Broadband Service Optimization (50 Ways #17)",
              "a": [
                "N/A"
              ],
              "e": [
                "stream"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
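Hedged alerting sketch for the Diameter accounting use case above: rather than a static top-100 report, baseline each subscriber's hourly usage and flag statistical outliers. It reuses the documented `stream:diameter` fields; the 3-sigma rule and the 500 MB floor are illustrative values to tune, not vendor guidance.

```spl
sourcetype="stream:diameter" command_code=271
| eval total_MB=round((acct_input_octets+acct_output_octets)/1048576, 2)
| bin _time span=1h
| stats sum(total_MB) as hourly_MB by _time, calling_station_id
| eventstats avg(hourly_MB) as baseline_MB, stdev(hourly_MB) as stdev_MB by calling_station_id
| where hourly_MB > baseline_MB + (3*stdev_MB) AND hourly_MB > 500
| sort -hourly_MB
```

The absolute floor (`hourly_MB > 500`) keeps low-usage subscribers with tiny standard deviations from dominating the alert.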
            {
              "i": "5.10.3",
              "n": "Mobile Subscriber RADIUS Session Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks active mobile subscriber sessions via RADIUS accounting, providing visibility into session duration, data volume, and SGSN/MCC-MNC distribution — critical for mobile core capacity planning and roaming analytics.",
              "t": "`Splunk App for Stream` (Splunkbase #1809)",
              "d": "`sourcetype=stream:radius`",
              "q": "sourcetype=\"stream:radius\" code=\"Accounting-Request\"\n| eval session_secs=stop_time-start_time\n| eval session_min=round(session_secs/60, 1)\n| stats count as sessions, avg(session_min) as avg_duration_min, dc(login) as unique_subscribers by sgsn_address, sgsn_mcc_mnc\n| sort -sessions",
              "m": "Configure Splunk App for Stream to capture RADIUS accounting traffic from the mobile packet core (GGSN/PGW). Enable RADIUS protocol extraction including the telco-specific fields `sgsn_address` and `sgsn_mcc_mnc`. Use `code=\"Accounting-Request\"` to filter for accounting records. Correlate `start_time` and `stop_time` for session duration. The `sgsn_mcc_mnc` field identifies the serving network (home vs. roaming). Alert on sudden drops in active sessions per SGSN.",
              "z": "Column chart (active sessions by SGSN address), Table (sgsn_address, sgsn_mcc_mnc, sessions, unique_subscribers, avg_duration_min — sortable), Timechart (session count over 24h), Pie chart (session distribution by MCC-MNC for roaming analysis).",
              "kfp": "Roaming tests, SGSN failovers, and mass handset reboots can swing session counts. Use carrier maintenance notices as context.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk App for Stream` (Splunkbase #1809).\n• Ensure the following data sources are available: `sourcetype=stream:radius`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk App for Stream to capture RADIUS accounting traffic from the mobile packet core (GGSN/PGW). Enable RADIUS protocol extraction including the telco-specific fields `sgsn_address` and `sgsn_mcc_mnc`. Use `code=\"Accounting-Request\"` to filter for accounting records. Correlate `start_time` and `stop_time` for session duration. The `sgsn_mcc_mnc` field identifies the serving network (home vs. roaming). Alert on sudden drops in active sessions per SGSN.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsourcetype=\"stream:radius\" code=\"Accounting-Request\"\n| eval session_secs=stop_time-start_time\n| eval session_min=round(session_secs/60, 1)\n| stats count as sessions, avg(session_min) as avg_duration_min, dc(login) as unique_subscribers by sgsn_address, sgsn_mcc_mnc\n| sort -sessions\n```\n\nUnderstanding this SPL\n\n**Mobile Subscriber RADIUS Session Tracking** — Tracks active mobile subscriber sessions via RADIUS accounting, providing visibility into session duration, data volume, and SGSN/MCC-MNC distribution — critical for mobile core capacity planning and roaming analytics.\n\nDocumented **Data sources**: `sourcetype=stream:radius`. **App/TA** (typical add-on context): `Splunk App for Stream` (Splunkbase #1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: stream:radius. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: sourcetype=\"stream:radius\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **session_secs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **session_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by sgsn_address, sgsn_mcc_mnc** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (active sessions by SGSN address), Table (sgsn_address, sgsn_mcc_mnc, sessions, unique_subscribers, avg_duration_min — sortable), Timechart (session count over 24h), Pie chart (session distribution by MCC-MNC for roaming analysis).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the carrier’s session history to see how long people stay online and where traffic piles up, so the mobile team can add capacity before sites overload.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "ind": "Telecommunications",
              "tuc": "Radio Access Network Monitoring (50 Ways #15)",
              "a": [
                "N/A"
              ],
              "e": [
                "stream"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
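Hedged sketch of the "sudden drop in active sessions per SGSN" alert that the implementation notes call for. It reuses the documented `stream:radius` fields; the 24-bucket comparison window, the 100-session floor, and the 50% drop threshold are illustrative starting points.

```spl
sourcetype="stream:radius" code="Accounting-Request"
| timechart span=1h count as sessions by sgsn_address limit=20 useother=f
| untable _time sgsn_address sessions
| streamstats window=24 current=f avg(sessions) as baseline by sgsn_address
| eval drop_pct=round((baseline-sessions)*100/baseline, 1)
| where baseline > 100 AND drop_pct > 50
```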
            {
              "i": "5.10.4",
              "n": "Carrier SIP Trunk Failure Analysis",
              "c": "critical",
              "f": "intermediate",
              "v": "Monitors SIP response codes on carrier trunks to detect call routing failures, trunk congestion, and destination unreachable conditions — directly impacting voice service availability and revenue.",
              "t": "`Splunk App for Stream` (Splunkbase #1809)",
              "d": "`sourcetype=stream:sip`",
              "q": "sourcetype=\"stream:sip\" method=\"INVITE\"\n| stats count as total, sum(eval(if(reply_code>=400, 1, 0))) as failures by dest\n| eval failure_rate=round(failures*100/total, 2)\n| where failure_rate>5 OR failures>50\n| sort -failure_rate",
              "m": "Configure Splunk App for Stream to capture SIP signaling on trunk-facing interfaces. Enable SIP protocol extraction for fields `method`, `reply_code`, `caller`, `callee`, and `dest`. Focus on INVITE transactions as these represent call attempts. Group by `dest` to identify problematic trunks or destinations. SIP 4xx codes indicate client errors (e.g., 404 Not Found, 486 Busy Here), 5xx codes indicate server errors, and 6xx codes indicate global failures. Alert when failure rate exceeds 5% sustained over 15 minutes.",
              "z": "Single value (overall SIP trunk success rate with thresholds: green >95%, yellow 90-95%, red <90%), Column chart (failure count by dest), Table (dest, total attempts, failures, failure_rate — sortable), Timechart (SIP 4xx/5xx/6xx responses over 24h by response code class).",
              "kfp": "SBC certificate rolls, number portability batches, and customer premise equipment reboots can spike SIP failures. Match trunk names to the carrier work queue.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk App for Stream` (Splunkbase #1809).\n• Ensure the following data sources are available: `sourcetype=stream:sip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk App for Stream to capture SIP signaling on trunk-facing interfaces. Enable SIP protocol extraction for fields `method`, `reply_code`, `caller`, `callee`, and `dest`. Focus on INVITE transactions as these represent call attempts. Group by `dest` to identify problematic trunks or destinations. SIP 4xx codes indicate client errors (e.g., 404 Not Found, 486 Busy Here), 5xx codes indicate server errors, and 6xx codes indicate global failures. Alert when failure rate exceeds 5% sustai…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsourcetype=\"stream:sip\" method=\"INVITE\"\n| stats count as total, sum(eval(if(reply_code>=400, 1, 0))) as failures by dest\n| eval failure_rate=round(failures*100/total, 2)\n| where failure_rate>5 OR failures>50\n| sort -failure_rate\n```\n\nUnderstanding this SPL\n\n**Carrier SIP Trunk Failure Analysis** — Monitors SIP response codes on carrier trunks to detect call routing failures, trunk congestion, and destination unreachable conditions — directly impacting voice service availability and revenue.\n\nDocumented **Data sources**: `sourcetype=stream:sip`. **App/TA** (typical add-on context): `Splunk App for Stream` (Splunkbase #1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: stream:sip. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: sourcetype=\"stream:sip\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **failure_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failure_rate>5 OR failures>50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (overall SIP trunk success rate with thresholds: green >95%, yellow 90-95%, red <90%), Column chart (failure count by dest), Table (dest, total attempts, failures, failure_rate — sortable), Timechart (SIP 4xx/5xx/6xx responses over 24h by response code class).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch carrier phone calls that fail on SIP trunks so you can see a trunk or route problem before the business loses voice service.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "ind": "Telecommunications",
              "tuc": "Carrier Media Gateway PM (50 Ways #43), Enterprise Service Assurance (50 Ways #14)",
              "a": [
                "N/A"
              ],
              "e": [
                "stream"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
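Hedged companion search for the 4xx/5xx/6xx response-class timechart named in this use case's visualization list. Same `stream:sip` fields as documented; the 15-minute span is an arbitrary choice.

```spl
sourcetype="stream:sip" method="INVITE" reply_code>=400
| eval code_class=case(reply_code>=600, "6xx global failure", reply_code>=500, "5xx server error", 1=1, "4xx client error")
| timechart span=15m count by code_class
```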
            {
              "i": "5.10.5",
              "n": "SIP Registration Storm Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Detects sudden spikes in SIP REGISTER messages that can overwhelm IMS/SBC infrastructure — caused by mass device reboots, network flaps, or DDoS attacks. Early detection prevents cascading core failures.",
              "t": "`Splunk App for Stream` (Splunkbase #1809)",
              "d": "`sourcetype=stream:sip`",
              "q": "sourcetype=\"stream:sip\" method=\"REGISTER\"\n| bin _time span=5m\n| stats count as register_count, dc(src) as unique_sources by _time\n| eventstats avg(register_count) as baseline, stdev(register_count) as stdev_reg\n| eval threshold=baseline+(3*stdev_reg)\n| where register_count>threshold\n| eval spike_factor=round(register_count/baseline, 1)",
              "m": "Configure Splunk App for Stream to capture SIP REGISTER traffic on the IMS/SBC interfaces. Use a 5-minute time bucket for aggregation. Calculate a rolling baseline using `eventstats` and flag any bucket where REGISTER volume exceeds 3 standard deviations above the mean. The `dc(src)` field helps distinguish between a mass re-registration event (many unique sources) vs. a single device stuck in a registration loop (few unique sources, high count). Alert the NOC immediately as registration storms can cascade into full core outages within minutes.",
              "z": "Line chart (REGISTER count over time with dynamic baseline threshold line), Single value (current spike factor vs. baseline), Table (time bucket, register_count, unique_sources, baseline, threshold — highlighting rows above threshold), Area chart (unique sources over time to correlate with storms).",
              "kfp": "SBC certificate rolls, number portability batches, and customer premise equipment reboots can spike SIP failures. Match trunk names to the carrier work queue.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk App for Stream` (Splunkbase #1809).\n• Ensure the following data sources are available: `sourcetype=stream:sip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk App for Stream to capture SIP REGISTER traffic on the IMS/SBC interfaces. Use a 5-minute time bucket for aggregation. Calculate a rolling baseline using `eventstats` and flag any bucket where REGISTER volume exceeds 3 standard deviations above the mean. The `dc(src)` field helps distinguish between a mass re-registration event (many unique sources) vs. a single device stuck in a registration loop (few unique sources, high count). Alert the NOC immediately as registration storms …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsourcetype=\"stream:sip\" method=\"REGISTER\"\n| bin _time span=5m\n| stats count as register_count, dc(src) as unique_sources by _time\n| eventstats avg(register_count) as baseline, stdev(register_count) as stdev_reg\n| eval threshold=baseline+(3*stdev_reg)\n| where register_count>threshold\n| eval spike_factor=round(register_count/baseline, 1)\n```\n\nUnderstanding this SPL\n\n**SIP Registration Storm Detection** — Detects sudden spikes in SIP REGISTER messages that can overwhelm IMS/SBC infrastructure — caused by mass device reboots, network flaps, or DDoS attacks. Early detection prevents cascading core failures.\n\nDocumented **Data sources**: `sourcetype=stream:sip`. **App/TA** (typical add-on context): `Splunk App for Stream` (Splunkbase #1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: stream:sip. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: sourcetype=\"stream:sip\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **threshold** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where register_count>threshold` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **spike_factor** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (REGISTER count over time with dynamic baseline threshold line), Single value (current spike factor vs. baseline), Table (time bucket, register_count, unique_sources, baseline, threshold — highlighting rows above threshold), Area chart (unique sources over time to correlate with storms).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for floods of phone check-in messages to the call server so a bad app or handsets in a loop do not knock the platform over before you see a spike on a chart.",
              "mtype": [
                "Availability",
                "Security"
              ],
              "ind": "Telecommunications",
              "tuc": "IMS Core and VoLTE Monitoring (50 Ways #16)",
              "a": [
                "N/A"
              ],
              "e": [
                "stream"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
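Hedged triage sketch building on the storm detector's `dc(src)` hint: classify whether a spike looks like a mass re-registration (many sources) or a single endpoint stuck in a loop (few sources, high per-source rate). The per-source cutoff and source-count cutoff are illustrative and should be tuned to the platform's normal registration volume.

```spl
sourcetype="stream:sip" method="REGISTER"
| bin _time span=5m
| stats count as register_count, dc(src) as unique_sources by _time
| eval per_source=round(register_count/unique_sources, 1)
| eval pattern=case(per_source>20, "likely device loop (few sources, high rate)", unique_sources>1000, "likely mass re-registration", 1=1, "normal")
| where pattern!="normal"
| table _time, register_count, unique_sources, per_source, pattern
```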
            {
              "i": "5.10.6",
              "n": "SIP Post-Dial Delay Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Measures the time between a SIP INVITE and the first ringing or answer response, directly reflecting the user experience of waiting after dialing. High post-dial delay indicates trunk congestion, routing loops, or downstream SBC issues.",
              "t": "`Splunk App for Stream` (Splunkbase #1809)",
              "d": "`sourcetype=stream:sip`",
              "q": "sourcetype=\"stream:sip\" method=\"INVITE\" reply_code=200\n| where isnotnull(setup_delay)\n| stats avg(setup_delay) as avg_pdd, perc95(setup_delay) as p95_pdd, max(setup_delay) as max_pdd, count as calls by dest\n| eval avg_pdd_ms=round(avg_pdd*1000, 0), p95_pdd_ms=round(p95_pdd*1000, 0)\n| where p95_pdd_ms>3000\n| sort -p95_pdd_ms",
              "m": "Configure Splunk App for Stream to capture SIP INVITE and response transactions. The `setup_delay` field measures the time from INVITE to the first non-100 response (typically 180 Ringing or 200 OK). Monitor by `dest` to identify slow destinations or trunks. ITU-T E.721 recommends post-dial delay under 3 seconds for national calls and under 5 seconds for international calls. Create tiered alerts: warning at p95 >3s, critical at p95 >5s. Trend analysis reveals degradation patterns across time of day and destination.",
              "z": "Gauge (p95 post-dial delay with thresholds: green <2s, yellow 2-3s, red >3s), Line chart (average PDD trend by dest over 24h), Table (dest, calls, avg_pdd_ms, p95_pdd_ms, max_pdd_ms — sortable), Histogram (PDD distribution across all calls).",
              "kfp": "SBC certificate rolls, number portability batches, and customer premise equipment reboots can spike SIP failures. Match trunk names to the carrier work queue.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk App for Stream` (Splunkbase #1809).\n• Ensure the following data sources are available: `sourcetype=stream:sip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk App for Stream to capture SIP INVITE and response transactions. The `setup_delay` field measures the time from INVITE to the first non-100 response (typically 180 Ringing or 200 OK). Monitor by `dest` to identify slow destinations or trunks. ITU-T E.721 recommends post-dial delay under 3 seconds for national calls and under 5 seconds for international calls. Create tiered alerts: warning at p95 >3s, critical at p95 >5s. Trend analysis reveals degradation patterns across time of …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsourcetype=\"stream:sip\" method=\"INVITE\" reply_code=200\n| where isnotnull(setup_delay)\n| stats avg(setup_delay) as avg_pdd, perc95(setup_delay) as p95_pdd, max(setup_delay) as max_pdd, count as calls by dest\n| eval avg_pdd_ms=round(avg_pdd*1000, 0), p95_pdd_ms=round(p95_pdd*1000, 0)\n| where p95_pdd_ms>3000\n| sort -p95_pdd_ms\n```\n\nUnderstanding this SPL\n\n**SIP Post-Dial Delay Monitoring** — Measures the time between a SIP INVITE and the first ringing or answer response, directly reflecting the user experience of waiting after dialing. High post-dial delay indicates trunk congestion, routing loops, or downstream SBC issues.\n\nDocumented **Data sources**: `sourcetype=stream:sip`. **App/TA** (typical add-on context): `Splunk App for Stream` (Splunkbase #1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: stream:sip. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: sourcetype=\"stream:sip\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(setup_delay)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_pdd_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where p95_pdd_ms>3000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Gauge (p95 post-dial delay with thresholds: green <2s, yellow 2-3s, red >3s), Line chart (average PDD trend by dest over 24h), Table (dest, calls, avg_pdd_ms, p95_pdd_ms, max_pdd_ms — sortable), Histogram (PDD distribution across all calls).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We measure the gap between dialing and the call really starting so when voice feels slow or dead, the team has a number to chase instead of just complaints.",
              "mtype": [
                "Performance"
              ],
              "ind": "Telecommunications",
              "tuc": "Reducing SLA Violations (50 Ways #21)",
              "a": [
                "N/A"
              ],
              "e": [
                "stream"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
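Hedged trend variant of the post-dial delay search: compare each destination's hourly p95 against its own typical p95, so a normally slow international route does not mask a genuinely degrading domestic trunk. The 2x ratio is an illustrative trigger.

```spl
sourcetype="stream:sip" method="INVITE" reply_code=200
| where isnotnull(setup_delay)
| eval pdd_ms=setup_delay*1000
| bin _time span=1h
| stats perc95(pdd_ms) as p95_pdd_ms by _time, dest
| eventstats avg(p95_pdd_ms) as typical_p95 by dest
| where p95_pdd_ms > 2*typical_p95
| sort -p95_pdd_ms
```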
          ],
          "qa": 35.8,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 6,
            "none": 0
          }
        },
        {
          "i": "5.11",
          "n": "gNMI / gRPC Streaming Telemetry",
          "u": [
            {
              "i": "5.11.1",
              "n": "Interface Utilization via gNMI Streaming Counters",
              "c": "critical",
              "f": "intermediate",
              "v": "SNMP polls interface counters every 5 minutes at best — microbursts and sub-minute congestion are invisible. gNMI SAMPLE subscriptions stream `/interfaces/interface/state/counters` at 10-30 second intervals, giving you near-real-time ingress/egress byte and packet rates. This catches congestion events that SNMP misses and enables capacity planning based on true peak utilization rather than averaged-out polling data.",
              "t": "Telegraf (`inputs.gnmi` plugin) → Splunk HEC",
              "d": "gNMI path: `/interfaces/interface/state/counters` (OpenConfig), Telegraf metric: `openconfig_interfaces`",
              "q": "| mstats rate_avg(\"openconfig_interfaces.in_octets\") AS in_bps, rate_avg(\"openconfig_interfaces.out_octets\") AS out_bps WHERE index=gnmi_metrics BY host, name span=1m\n| eval in_mbps=round(in_bps*8/1000000, 1), out_mbps=round(out_bps*8/1000000, 1)\n| where in_mbps > 800 OR out_mbps > 800\n| table _time, host, name, in_mbps, out_mbps\n| sort -in_mbps",
              "m": "Deploy Telegraf on a dedicated collector. Configure `inputs.gnmi` with device addresses (port 57400 for IOS XR, 6030 for Arista EOS, 32767 for Junos). Subscribe to `/interfaces/interface/state/counters` at `sample_interval = \"30s\"`. Output to Splunk HEC using `splunkmetric` format into a metrics index. Use `mstats` with `rate_avg()` to compute per-second rates from cumulative counters.",
              "z": "Line chart (Mbps in/out per interface), Heatmap (utilization % across fabric), Single value (peak utilization).",
              "kfp": "Telemetry pauses during device reboots, cert renewals, or transport changes; subscription restarts and path renames can look like drops without a live fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telegraf (`inputs.gnmi` plugin) → Splunk HEC.\n• Ensure the following data sources are available: gNMI path: `/interfaces/interface/state/counters` (OpenConfig), Telegraf metric: `openconfig_interfaces`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Telegraf on a dedicated collector. Configure `inputs.gnmi` with device addresses (port 57400 for IOS XR, 6030 for Arista EOS, 32767 for Junos). Subscribe to `/interfaces/interface/state/counters` at `sample_interval = \"30s\"`. Output to Splunk HEC using `splunkmetric` format into a metrics index. Use `mstats` with `rate_avg()` to compute per-second rates from cumulative counters.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats rate_avg(\"openconfig_interfaces.in_octets\") AS in_bps, rate_avg(\"openconfig_interfaces.out_octets\") AS out_bps WHERE index=gnmi_metrics BY host, name span=1m\n| eval in_mbps=round(in_bps*8/1000000, 1), out_mbps=round(out_bps*8/1000000, 1)\n| where in_mbps > 800 OR out_mbps > 800\n| table _time, host, name, in_mbps, out_mbps\n| sort -in_mbps\n```\n\nUnderstanding this SPL\n\n**Interface Utilization via gNMI Streaming Counters** — SNMP polls interface counters every 5 minutes at best — microbursts and sub-minute congestion are invisible. gNMI SAMPLE subscriptions stream `/interfaces/interface/state/counters` at 10-30 second intervals, giving you near-real-time ingress/egress byte and packet rates. This catches congestion events that SNMP misses and enables capacity planning based on true peak utilization rather than averaged-out polling data.\n\nDocumented **Data sources**: gNMI path: `/interfaces/interface/state/counters` (OpenConfig), Telegraf metric: `openconfig_interfaces`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gnmi_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **in_mbps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where in_mbps > 800 OR out_mbps > 800` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Interface Utilization via gNMI Streaming Counters**): table _time, host, name, in_mbps, out_mbps\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Line chart (Mbps in/out per interface), Heatmap (utilization % across fabric), Single value (peak utilization).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300/9500, Catalyst 9300/9500; Arista 7050X3/7060X/7260X3/7280R3/7500R3; Juniper QFX5120/QFX5220, MX204/MX304/MX480; Nokia SR Linux",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see how full each network port is in near real time, so bursty problems show up long before a slow, old-style poll would notice.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
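Hedged extension converting raw Mbps to utilization percent. gNMI does not stream port speed in the counters path, so this sketch assumes a hypothetical `interface_speeds` lookup (columns: host, name, speed_bps) maintained from inventory; the 80% trigger is illustrative.

```spl
| mstats rate_avg("openconfig_interfaces.in_octets") AS in_bps, rate_avg("openconfig_interfaces.out_octets") AS out_bps WHERE index=gnmi_metrics BY host, name span=1m
| lookup interface_speeds host name OUTPUT speed_bps
| eval util_pct=round(max(in_bps, out_bps)*8*100/speed_bps, 1)
| where util_pct > 80
| sort -util_pct
```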
            {
              "i": "5.11.2",
              "n": "Interface Error and Discard Streaming",
              "c": "high",
              "f": "intermediate",
              "v": "CRC errors, input errors, and output discards often precede link failure or indicate a bad transceiver, duplex mismatch, or MTU issue. Streaming these counters at 30-second intervals via gNMI catches error bursts that 5-minute SNMP polls average away. A sudden spike in `in_fcs_errors` on a 100G spine link demands immediate investigation — it could be a failing optic about to take down a leaf.",
              "t": "Telegraf (`inputs.gnmi` plugin) → Splunk HEC",
              "d": "gNMI path: `/interfaces/interface/state/counters` (in-errors, out-errors, in-discards, out-discards, in-fcs-errors), Telegraf metric: `openconfig_interfaces`",
              "q": "| mstats rate_avg(\"openconfig_interfaces.in_errors\") AS err_rate, rate_avg(\"openconfig_interfaces.in_fcs_errors\") AS fcs_rate, rate_avg(\"openconfig_interfaces.out_discards\") AS discard_rate WHERE index=gnmi_metrics BY host, name span=1m\n| where err_rate > 0 OR fcs_rate > 0 OR discard_rate > 10\n| table _time, host, name, err_rate, fcs_rate, discard_rate\n| sort -fcs_rate",
              "m": "Subscribe to `/interfaces/interface/state/counters` at 30s sample intervals. Use `rate_avg()` to convert cumulative counters to per-second rates. Alert on any non-zero FCS error rate (indicates physical-layer problems). Alert on discard rates exceeding baseline (indicates congestion or QoS policy drops). Correlate with optic health (UC-5.11.5) for root cause.",
              "z": "Line chart (error rates over time), Table (interfaces with active errors), Heatmap (errors across fabric).",
              "kfp": "Telemetry pauses during device reboots, cert renewals, or transport changes; subscription restarts and path renames can look like drops without a live fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telegraf (`inputs.gnmi` plugin) → Splunk HEC.\n• Ensure the following data sources are available: gNMI path: `/interfaces/interface/state/counters` (in-errors, out-errors, in-discards, out-discards, in-fcs-errors), Telegraf metric: `openconfig_interfaces`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSubscribe to `/interfaces/interface/state/counters` at 30s sample intervals. Use `rate_avg()` to convert cumulative counters to per-second rates. Alert on any non-zero FCS error rate (indicates physical-layer problems). Alert on discard rates exceeding baseline (indicates congestion or QoS policy drops). Correlate with optic health (UC-5.11.5) for root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats rate_avg(\"openconfig_interfaces.in_errors\") AS err_rate, rate_avg(\"openconfig_interfaces.in_fcs_errors\") AS fcs_rate, rate_avg(\"openconfig_interfaces.out_discards\") AS discard_rate WHERE index=gnmi_metrics BY host, name span=1m\n| where err_rate > 0 OR fcs_rate > 0 OR discard_rate > 10\n| table _time, host, name, err_rate, fcs_rate, discard_rate\n| sort -fcs_rate\n```\n\nUnderstanding this SPL\n\n**Interface Error and Discard Streaming** — CRC errors, input errors, and output discards often precede link failure or indicate a bad transceiver, duplex mismatch, or MTU issue. Streaming these counters at 30-second intervals via gNMI catches error bursts that 5-minute SNMP polls average away. A sudden spike in `in_fcs_errors` on a 100G spine link demands immediate investigation — it could be a failing optic about to take down a leaf.\n\nDocumented **Data sources**: gNMI path: `/interfaces/interface/state/counters` (in-errors, out-errors, in-discards, out-discards, in-fcs-errors), Telegraf metric: `openconfig_interfaces`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gnmi_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Filters the current rows with `where err_rate > 0 OR fcs_rate > 0 OR discard_rate > 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Interface Error and Discard Streaming**): table _time, host, name, err_rate, fcs_rate, discard_rate\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Line chart (error rates over time), Table (interfaces with active errors), Heatmap (errors across fabric).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300/9500, Catalyst 9300/9500; Arista 7050X3/7060X/7260X3/7280R3; Juniper QFX5120/QFX5220, MX204/MX304; Nokia SR Linux",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you catch real cable and port problems from error counters before a link fully dies, instead of only seeing trouble after the fact.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
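Hedged baseline variant for the discard alert mentioned in the implementation notes: flag interfaces whose discard rate departs from their own recent history rather than a fixed number. The 288-bucket window (roughly 24h of 5-minute samples) and 3-sigma trigger are illustrative.

```spl
| mstats rate_avg("openconfig_interfaces.out_discards") AS discard_rate WHERE index=gnmi_metrics BY host, name span=5m
| sort 0 host, name, _time
| streamstats window=288 current=f avg(discard_rate) as baseline, stdev(discard_rate) as sd by host, name
| where discard_rate > baseline + (3*sd) AND discard_rate > 1
| table _time, host, name, discard_rate, baseline
```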
            {
              "i": "5.11.3",
              "n": "BGP Peer State Change Detection via ON_CHANGE",
              "c": "critical",
              "f": "intermediate",
              "v": "Syslog-based BGP monitoring depends on log forwarding latency and parsing reliability. gNMI ON_CHANGE subscriptions to BGP neighbor state deliver sub-second notification when a peer leaves Established — faster than syslog and with structured data. For VXLAN EVPN fabrics where BGP is both underlay and overlay, a single peer drop can black-hole tenant traffic within seconds.",
              "t": "Telegraf (`inputs.gnmi` plugin) → Splunk HEC",
              "d": "gNMI path: `/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/state` (ON_CHANGE), Telegraf metric: `openconfig_bgp`",
              "q": "| mstats latest(\"openconfig_bgp.session_state\") AS state WHERE index=gnmi_metrics BY host, neighbor_address span=1m\n| where state != 6\n| eval state_label=case(state=1, \"Idle\", state=2, \"Connect\", state=3, \"Active\", state=4, \"OpenSent\", state=5, \"OpenConfirm\", state=6, \"Established\", 1=1, \"Unknown\")\n| table _time, host, neighbor_address, state_label\n| sort -_time",
              "m": "Subscribe to `/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/state` using `subscription_mode = \"on_change\"`. BGP session state is represented as integer (1=Idle through 6=Established). Alert on any state != 6 (Established). For Cisco IOS XR, use native YANG path `Cisco-IOS-XR-ipv4-bgp-oper:bgp/instances/instance`. Correlate with interface flaps (UC-5.11.1) and optical health (UC-5.11.5) for root cause.",
              "z": "Status grid (BGP peer matrix — green=Established, red=down), Timeline (state change events), Table (non-established peers).",
              "kfp": "Telemetry pauses during device reboots, cert renewals, or transport changes; subscription restarts and path renames can look like drops without a live fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telegraf (`inputs.gnmi` plugin) → Splunk HEC.\n• Ensure the following data sources are available: gNMI path: `/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/state` (ON_CHANGE), Telegraf metric: `openconfig_bgp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSubscribe to `/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/state` using `subscription_mode = \"on_change\"`. BGP session state is represented as integer (1=Idle through 6=Established). Alert on any state != 6 (Established). For Cisco IOS XR, use native YANG path `Cisco-IOS-XR-ipv4-bgp-oper:bgp/instances/instance`. Correlate with interface flaps (UC-5.11.1) and optical health (UC-5.11.5) for root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats latest(\"openconfig_bgp.session_state\") AS state WHERE index=gnmi_metrics BY host, neighbor_address span=1m\n| where state != 6\n| eval state_label=case(state=1, \"Idle\", state=2, \"Connect\", state=3, \"Active\", state=4, \"OpenSent\", state=5, \"OpenConfirm\", state=6, \"Established\", 1=1, \"Unknown\")\n| table _time, host, neighbor_address, state_label\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**BGP Peer State Change Detection via ON_CHANGE** — Syslog-based BGP monitoring depends on log forwarding latency and parsing reliability. gNMI ON_CHANGE subscriptions to BGP neighbor state deliver sub-second notification when a peer leaves Established — faster than syslog and with structured data. For VXLAN EVPN fabrics where BGP is both underlay and overlay, a single peer drop can black-hole tenant traffic within seconds.\n\nDocumented **Data sources**: gNMI path: `/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/state` (ON_CHANGE), Telegraf metric: `openconfig_bgp`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gnmi_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Filters the current rows with `where state != 6` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **state_label** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **BGP Peer State Change Detection via ON_CHANGE**): table _time, host, neighbor_address, state_label\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Status grid (BGP peer matrix — green=Established, red=down), Timeline (state change events), Table (non-established peers).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300/9500 (NX-OS 9.3+), IOS XR (ASR/NCS); Arista 7050X3/7060X/7260X3/7280R3; Juniper MX204/MX304/MX480, QFX5120/QFX5220; Nokia SR Linux",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know the moment a BGP neighbor drops, which matters when one lost session can steer traffic the wrong way in a data center or WAN.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
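Hedged flap-detection sketch built on the same metric: detect transitions out of Established rather than only the current non-Established snapshot, so brief flaps that recover between scheduled runs still surface. It assumes rows arrive time-ordered per peer (the explicit `sort` enforces this).

```spl
| mstats latest("openconfig_bgp.session_state") AS state WHERE index=gnmi_metrics BY host, neighbor_address span=1m
| sort 0 host, neighbor_address, _time
| streamstats current=f last(state) as prev_state by host, neighbor_address
| where prev_state=6 AND state!=6
| stats count as flaps, max(_time) as last_flap by host, neighbor_address
| convert ctime(last_flap)
| sort -flaps
```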
            {
              "i": "5.11.4",
              "n": "System CPU and Memory Utilization Streaming",
              "c": "high",
              "f": "beginner",
              "v": "Network device control planes running hot indicate routing churn, excessive logging, or a control-plane DoS. gNMI streaming at 30-second intervals catches transient CPU spikes that 5-minute SNMP polls miss entirely. A Nexus spine hitting 90% CPU during a BGP convergence event could start dropping BFD keepalives, cascading into a fabric-wide outage.",
              "t": "Telegraf (`inputs.gnmi` plugin) → Splunk HEC",
              "d": "gNMI path: `/system/cpus/cpu/state` (OpenConfig), Cisco native: `Cisco-IOS-XR-wdsysmon-fd-oper:system-monitoring/cpu-utilization`; Telegraf metric: `openconfig_system`",
              "q": "| mstats avg(\"openconfig_system.cpu_total_instant\") AS cpu_pct WHERE index=gnmi_metrics BY host span=1m\n| where cpu_pct > 80\n| table _time, host, cpu_pct\n| sort -cpu_pct",
              "m": "Subscribe to `/system/cpus/cpu/state` at 30s intervals. For Cisco IOS XR, use native YANG `system-monitoring/cpu-utilization/total-cpu-one-minute`. Alert at 80% sustained for 5 minutes. Correlate with BGP update storms (UC-5.11.8) and interface flaps. Track per-process CPU if platform supports `/system/processes/process/state`.",
              "z": "Gauge (current CPU per device), Line chart (CPU trend), Table (devices above threshold).",
              "kfp": "Telemetry pauses during device reboots, cert renewals, or transport changes; subscription restarts and path renames can look like drops without a live fault.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telegraf (`inputs.gnmi` plugin) → Splunk HEC.\n• Ensure the following data sources are available: gNMI path: `/system/cpus/cpu/state` (OpenConfig), Cisco native: `Cisco-IOS-XR-wdsysmon-fd-oper:system-monitoring/cpu-utilization`; Telegraf metric: `openconfig_system`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSubscribe to `/system/cpus/cpu/state` at 30s intervals. For Cisco IOS XR, use native YANG `system-monitoring/cpu-utilization/total-cpu-one-minute`. Alert at 80% sustained for 5 minutes. Correlate with BGP update storms (UC-5.11.8) and interface flaps. Track per-process CPU if platform supports `/system/processes/process/state`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(\"openconfig_system.cpu_total_instant\") AS cpu_pct WHERE index=gnmi_metrics BY host span=1m\n| where cpu_pct > 80\n| table _time, host, cpu_pct\n| sort -cpu_pct\n```\n\nUnderstanding this SPL\n\n**System CPU and Memory Utilization Streaming** — Network device control planes running hot indicate routing churn, excessive logging, or a control-plane DoS. gNMI streaming at 30-second intervals catches transient CPU spikes that 5-minute SNMP polls miss entirely. A Nexus spine hitting 90% CPU during a BGP convergence event could start dropping BFD keepalives, cascading into a fabric-wide outage.\n\nDocumented **Data sources**: gNMI path: `/system/cpus/cpu/state` (OpenConfig), Cisco native: `Cisco-IOS-XR-wdsysmon-fd-oper:system-monitoring/cpu-utilization`; Telegraf metric: `openconfig_system`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gnmi_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Filters the current rows with `where cpu_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **System CPU and Memory Utilization Streaming**): table _time, host, cpu_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**System CPU and Memory Utilization Streaming** — Network device control planes running hot indicate routing churn, excessive logging, or a control-plane DoS. gNMI streaming at 30-second intervals catches transient CPU spikes that 5-minute SNMP polls miss entirely. A Nexus spine hitting 90% CPU during a BGP convergence event could start dropping BFD keepalives, cascading into a fabric-wide outage.\n\nDocumented **Data sources**: gNMI path: `/system/cpus/cpu/state` (OpenConfig), Cisco native: `Cisco-IOS-XR-wdsysmon-fd-oper:system-monitoring/cpu-utilization`; Telegraf metric: `openconfig_system`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (current CPU per device), Line chart (CPU trend), Table (devices above threshold).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300/9500, IOS XR (ASR/NCS), IOS XE (Catalyst 9000); Arista 7050X3/7060X/7260X3/7280R3; Juniper MX/QFX/EX; Nokia SR Linux",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when a router or switch brain is getting overloaded, so we can act before it starts dropping the signals that keep the network stable.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
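Hedged "sustained for 5 minutes" variant of the CPU alert above (the base search fires on any single 1-minute bucket): taking the minimum over a 5-bucket window means every one of the last five minutes exceeded 80%.

```spl
| mstats avg("openconfig_system.cpu_total_instant") AS cpu_pct WHERE index=gnmi_metrics BY host span=1m
| sort 0 host, _time
| streamstats window=5 min(cpu_pct) as sustained_floor by host
| where sustained_floor > 80
| table _time, host, cpu_pct, sustained_floor
```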
            {
              "i": "5.11.5",
              "n": "Optical Transceiver Health Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Optical transceivers fail gradually — Tx power drops, Rx power drifts, temperature climbs. By the time an interface goes down, the damage (packet loss, CRC errors, application impact) is already done. gNMI streaming of `/components/component` optic data at 60-second intervals enables predictive failure alerting: catch a dimming laser or overheating module hours before it causes an outage.",
              "t": "Telegraf (`inputs.gnmi` plugin) → Splunk HEC",
              "d": "gNMI path: `/components/component/transceiver/state` (output-power, input-power, laser-bias-current, temperature); Telegraf metric: `openconfig_platform`",
              "q": "| mstats latest(\"openconfig_platform.output_power_instant\") AS tx_dbm, latest(\"openconfig_platform.input_power_instant\") AS rx_dbm, latest(\"openconfig_platform.laser_bias_current_instant\") AS bias_ma, latest(\"openconfig_platform.temperature_instant\") AS temp_c WHERE index=gnmi_metrics BY host, name span=5m\n| where rx_dbm < -25 OR tx_dbm < -8 OR temp_c > 75\n| eval concern=case(rx_dbm < -28, \"CRITICAL: Rx near failure\", rx_dbm < -25, \"WARNING: Rx degrading\", tx_dbm < -8, \"WARNING: Tx low output\", temp_c > 85, \"CRITICAL: Overheating\", temp_c > 75, \"WARNING: High temp\", 1=1, \"Check\")\n| table _time, host, name, tx_dbm, rx_dbm, bias_ma, temp_c, concern\n| sort -temp_c",
              "m": "Subscribe to `/components/component/transceiver/state` at 60s intervals. Optic thresholds vary by type — SFP+ typically alarms at Rx < -14 dBm, QSFP28 at Rx < -21 dBm. Set warning at 3 dB above vendor alarm threshold. Track trends to predict failure: a steady decline of 0.5 dBm/week indicates a dying laser. Cross-reference with interface errors (UC-5.11.2) to correlate optic degradation with CRC/FCS errors.",
              "z": "Table (optics near threshold), Line chart (Rx/Tx power trend over weeks), Heatmap (temperature across all ports), Gauge (worst-case margin).",
              "kfp": "Telemetry pauses during device reboots, cert renewals, or transport changes; subscription restarts and path renames can look like drops without a live fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telegraf (`inputs.gnmi` plugin) → Splunk HEC.\n• Ensure the following data sources are available: gNMI path: `/components/component/transceiver/state` (output-power, input-power, laser-bias-current, temperature); Telegraf metric: `openconfig_platform`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSubscribe to `/components/component/transceiver/state` at 60s intervals. Optic thresholds vary by type — SFP+ typically alarms at Rx < -14 dBm, QSFP28 at Rx < -21 dBm. Set warning at 3 dB above vendor alarm threshold. Track trends to predict failure: a steady decline of 0.5 dBm/week indicates a dying laser. Cross-reference with interface errors (UC-5.11.2) to correlate optic degradation with CRC/FCS errors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats latest(\"openconfig_platform.output_power_instant\") AS tx_dbm, latest(\"openconfig_platform.input_power_instant\") AS rx_dbm, latest(\"openconfig_platform.laser_bias_current_instant\") AS bias_ma, latest(\"openconfig_platform.temperature_instant\") AS temp_c WHERE index=gnmi_metrics BY host, name span=5m\n| where rx_dbm < -25 OR tx_dbm < -8 OR temp_c > 75\n| eval concern=case(rx_dbm < -28, \"CRITICAL: Rx near failure\", rx_dbm < -25, \"WARNING: Rx degrading\", tx_dbm < -8, \"WARNING: Tx low output\", temp_c > 85, \"CRITICAL: Overheating\", temp_c > 75, \"WARNING: High temp\", 1=1, \"Check\")\n| table _time, host, name, tx_dbm, rx_dbm, bias_ma, temp_c, concern\n| sort -temp_c\n```\n\nUnderstanding this SPL\n\n**Optical Transceiver Health Monitoring** — Optical transceivers fail gradually — Tx power drops, Rx power drifts, temperature climbs. By the time an interface goes down, the damage (packet loss, CRC errors, application impact) is already done. gNMI streaming of `/components/component` optic data at 60-second intervals enables predictive failure alerting: catch a dimming laser or overheating module hours before it causes an outage.\n\nDocumented **Data sources**: gNMI path: `/components/component/transceiver/state` (output-power, input-power, laser-bias-current, temperature); Telegraf metric: `openconfig_platform`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. 
Both the main search and the sketch above should target the indexes and sourcetypes you configured for this feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gnmi_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Filters the current rows with `where rx_dbm < -25 OR tx_dbm < -8 OR temp_c > 75` — the warning thresholds for this monitoring goal.\n• `eval` derives **concern**, a severity label computed from the Rx/Tx power and temperature readings via `case`.\n• `table` keeps only the reporting columns: _time, host, name, tx_dbm, rx_dbm, bias_ma, temp_c, concern.\n• Orders rows with `sort -temp_c` so the hottest modules surface first.\n\nBecause this search uses `mstats`, the optic metrics must land in a metrics index (here `gnmi_metrics`); an event index will return no results.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (optics near threshold), Line chart (Rx/Tx power trend over weeks), Heatmap (temperature across all ports), Gauge (worst-case margin).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300/9500, Catalyst 9300/9500; Arista 7050X3/7060X/7260X3/7280R3; Juniper QFX5120/QFX5220, MX204/MX304; Nokia SR Linux",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you watch the little lasers and optics so a weak cable or part does not take down a whole high-speed path without warning.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.11.6",
              "n": "QoS Queue Depth and Drop Streaming",
              "c": "high",
              "f": "advanced",
              "v": "SNMP-polled QoS counters miss microbursts entirely — a 100ms queue overflow causes packet loss that a 5-minute poll never sees. gNMI SAMPLE subscriptions to `/qos/interfaces/interface/output/queues/queue/state` at 10-second intervals capture queue depth and transmit/drop counters at a granularity that reveals microburst patterns, misclassified traffic, and under-provisioned queues.",
              "t": "Telegraf (`inputs.gnmi` plugin) → Splunk HEC",
              "d": "gNMI path: `/qos/interfaces/interface/output/queues/queue/state` (transmit-pkts, dropped-pkts); Telegraf metric: `openconfig_qos`",
              "q": "| mstats rate_avg(\"openconfig_qos.dropped_pkts\") AS drops_per_sec, rate_avg(\"openconfig_qos.transmit_pkts\") AS tx_per_sec WHERE index=gnmi_metrics BY host, interface_id, queue_name span=1m\n| eval drop_pct=if(tx_per_sec>0, round(drops_per_sec*100/(drops_per_sec+tx_per_sec), 2), 0)\n| where drops_per_sec > 0\n| table _time, host, interface_id, queue_name, drops_per_sec, tx_per_sec, drop_pct\n| sort -drop_pct",
              "m": "Subscribe to QoS queue state at 10-30s intervals. Focus on high-priority queues (voice, video, control-plane) where any drops indicate a problem. For best-effort queues, baseline normal drop rates and alert on 2x deviation. Correlate drops with interface utilization (UC-5.11.1) to distinguish congestion drops from policy drops. Use `drop_pct` to identify queues with systematic under-provisioning.",
              "z": "Bar chart (drops by queue class), Line chart (drop rate over time per queue), Table (queues with active drops), Heatmap (drop severity across fabric).",
              "kfp": "Telemetry pauses during device reboots, cert renewals, or transport changes; subscription restarts and path renames can look like drops without a live fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telegraf (`inputs.gnmi` plugin) → Splunk HEC.\n• Ensure the following data sources are available: gNMI path: `/qos/interfaces/interface/output/queues/queue/state` (transmit-pkts, dropped-pkts); Telegraf metric: `openconfig_qos`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSubscribe to QoS queue state at 10-30s intervals. Focus on high-priority queues (voice, video, control-plane) where any drops indicate a problem. For best-effort queues, baseline normal drop rates and alert on 2x deviation. Correlate drops with interface utilization (UC-5.11.1) to distinguish congestion drops from policy drops. Use `drop_pct` to identify queues with systematic under-provisioning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats rate_avg(\"openconfig_qos.dropped_pkts\") AS drops_per_sec, rate_avg(\"openconfig_qos.transmit_pkts\") AS tx_per_sec WHERE index=gnmi_metrics BY host, interface_id, queue_name span=1m\n| eval drop_pct=if(tx_per_sec>0, round(drops_per_sec*100/(drops_per_sec+tx_per_sec), 2), 0)\n| where drops_per_sec > 0\n| table _time, host, interface_id, queue_name, drops_per_sec, tx_per_sec, drop_pct\n| sort -drop_pct\n```\n\nUnderstanding this SPL\n\n**QoS Queue Depth and Drop Streaming** — SNMP-polled QoS counters miss microbursts entirely — a 100ms queue overflow causes packet loss that a 5-minute poll never sees. gNMI SAMPLE subscriptions to `/qos/interfaces/interface/output/queues/queue/state` at 10-second intervals capture queue depth and transmit/drop counters at a granularity that reveals microburst patterns, misclassified traffic, and under-provisioned queues.\n\nDocumented **Data sources**: gNMI path: `/qos/interfaces/interface/output/queues/queue/state` (transmit-pkts, dropped-pkts); Telegraf metric: `openconfig_qos`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gnmi_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **drop_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drops_per_sec > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **QoS Queue Depth and Drop Streaming**): table _time, host, interface_id, queue_name, drops_per_sec, tx_per_sec, drop_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Bar chart (drops by queue class), Line chart (drop rate over time per queue), Table (queues with active drops), Heatmap (drop severity across fabric).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300/9500 (NX-OS 9.3+); Arista 7050X3/7060X/7260X3/7280R3; Juniper QFX5120/QFX5220, MX204/MX304; Nokia SR Linux",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when important traffic is piling up or getting dropped in queues, so you can fix congestion before it hits voice, video, or key apps.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.11.7",
              "n": "LLDP Topology Change Detection",
              "c": "medium",
              "f": "beginner",
              "v": "In a properly cabled data center, the LLDP neighbor table should not change unless someone moves a cable, adds a device, or swaps a switch. gNMI ON_CHANGE subscriptions to `/lldp/interfaces/interface/neighbors` provide instant notification of topology drift — a new neighbor appearing on a spine port, a missing neighbor on a leaf uplink, or an unauthorized device connected to a reserved port.",
              "t": "Telegraf (`inputs.gnmi` plugin) → Splunk HEC",
              "d": "gNMI path: `/lldp/interfaces/interface/neighbors/neighbor/state` (ON_CHANGE); Telegraf metric: `openconfig_lldp`",
              "q": "| mstats latest(\"openconfig_lldp.neighbor_system_name\") AS neighbor WHERE index=gnmi_metrics BY host, name span=5m\n| streamstats current=f last(neighbor) AS prev_neighbor by host, name\n| where neighbor != prev_neighbor AND isnotnull(prev_neighbor)\n| table _time, host, name, prev_neighbor, neighbor\n| eval change_type=if(isnotnull(neighbor) AND isnull(prev_neighbor), \"NEW\", if(isnull(neighbor) AND isnotnull(prev_neighbor), \"REMOVED\", \"CHANGED\"))",
              "m": "Subscribe to `/lldp/interfaces/interface/neighbors` using ON_CHANGE mode. Build a baseline LLDP topology table as a lookup (host, interface, expected_neighbor). Alert on any deviation from baseline. In data centers, unexpected LLDP changes often indicate cabling errors during maintenance. In campus networks, new neighbors on access ports may indicate unauthorized switches.",
              "z": "Network topology map (overlay LLDP changes), Table (recent topology changes), Status grid (ports with unexpected neighbors).",
              "kfp": "Telemetry pauses during device reboots, cert renewals, or transport changes; subscription restarts and path renames can look like drops without a live fault.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telegraf (`inputs.gnmi` plugin) → Splunk HEC.\n• Ensure the following data sources are available: gNMI path: `/lldp/interfaces/interface/neighbors/neighbor/state` (ON_CHANGE); Telegraf metric: `openconfig_lldp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSubscribe to `/lldp/interfaces/interface/neighbors` using ON_CHANGE mode. Build a baseline LLDP topology table as a lookup (host, interface, expected_neighbor). Alert on any deviation from baseline. In data centers, unexpected LLDP changes often indicate cabling errors during maintenance. In campus networks, new neighbors on access ports may indicate unauthorized switches.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats latest(\"openconfig_lldp.neighbor_system_name\") AS neighbor WHERE index=gnmi_metrics BY host, name span=5m\n| streamstats current=f last(neighbor) AS prev_neighbor by host, name\n| where neighbor != prev_neighbor AND isnotnull(prev_neighbor)\n| table _time, host, name, prev_neighbor, neighbor\n| eval change_type=if(isnotnull(neighbor) AND isnull(prev_neighbor), \"NEW\", if(isnull(neighbor) AND isnotnull(prev_neighbor), \"REMOVED\", \"CHANGED\"))\n```\n\nUnderstanding this SPL\n\n**LLDP Topology Change Detection** — In a properly cabled data center, the LLDP neighbor table should not change unless someone moves a cable, adds a device, or swaps a switch. gNMI ON_CHANGE subscriptions to `/lldp/interfaces/interface/neighbors` provide instant notification of topology drift — a new neighbor appearing on a spine port, a missing neighbor on a leaf uplink, or an unauthorized device connected to a reserved port.\n\nDocumented **Data sources**: gNMI path: `/lldp/interfaces/interface/neighbors/neighbor/state` (ON_CHANGE); Telegraf metric: `openconfig_lldp`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gnmi_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `streamstats` rolls up events into metrics; results are split **by host, name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where neighbor != prev_neighbor AND isnotnull(prev_neighbor)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **LLDP Topology Change Detection**): table _time, host, name, prev_neighbor, neighbor\n• `eval` defines or adjusts **change_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**LLDP Topology Change Detection** — In a properly cabled data center, the LLDP neighbor table should not change unless someone moves a cable, adds a device, or swaps a switch. 
Optional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**LLDP Topology Change Detection** — In a properly cabled data center, the LLDP neighbor table should not change unless someone moves a cable, adds a device, or swaps a switch. gNMI ON_CHANGE subscriptions to `/lldp/interfaces/interface/neighbors` provide instant notification of topology drift — a new neighbor appearing on a spine port, a missing neighbor on a leaf uplink, or an unauthorized device connected to a reserved port.\n\nDocumented **Data sources**: gNMI path: `/lldp/interfaces/interface/neighbors/neighbor/state` (ON_CHANGE); Telegraf metric: `openconfig_lldp`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. The SPL above should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration for the `Change` data model; otherwise `tstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Network topology map (overlay LLDP changes), Table (recent topology changes), Status grid (ports with unexpected neighbors).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300/9500, Catalyst 9300/9500; Arista 7050X3/7060X/7260X3/7280R3; Juniper QFX/EX/MX; Nokia SR Linux",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you notice when a neighbor on a port changes without a work order, which can be a simple unplug—or something connected that should not be there.",
              "mtype": [
                "Change",
                "Inventory"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.11.8",
              "n": "BGP Prefix Count and Route Churn Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "A sudden jump in received BGP prefixes could indicate a route leak, hijack, or misconfigured peer advertising a full table into a leaf switch. Conversely, a prefix count drop means routes are being withdrawn — potentially black-holing traffic. Streaming prefix counts via gNMI at 30-second intervals detects these events far faster than waiting for syslog or SNMP traps.",
              "t": "Telegraf (`inputs.gnmi` plugin) → Splunk HEC",
              "d": "gNMI path: `/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/afi-safis/afi-safi/state` (prefixes/received, prefixes/installed); Telegraf metric: `openconfig_bgp`",
              "q": "| mstats latest(\"openconfig_bgp.prefixes_received\") AS prefixes WHERE index=gnmi_metrics BY host, neighbor_address, afi_safi_name span=5m\n| streamstats current=f last(prefixes) AS prev_prefixes by host, neighbor_address, afi_safi_name\n| eval delta=prefixes - prev_prefixes, pct_change=if(prev_prefixes>0, round(delta*100/prev_prefixes, 1), 0)\n| where abs(pct_change) > 10 OR abs(delta) > 1000\n| table _time, host, neighbor_address, afi_safi_name, prev_prefixes, prefixes, delta, pct_change\n| sort -abs(delta)",
              "m": "Subscribe to BGP AFI-SAFI state at 30s intervals. Baseline normal prefix counts per peer. Alert on >10% change in a 5-minute window or absolute change >1000 prefixes. A full BGP table leak (800k+ IPv4 prefixes) into a leaf with 64k TCAM will crash forwarding — detect it before the FIB overflows. Correlate with CPU spikes (UC-5.11.4) during convergence events.",
              "z": "Line chart (prefix count per peer over time), Table (peers with recent churn), Single value (total fabric prefix count), Alert list (abnormal changes).",
              "kfp": "Telemetry pauses during device reboots, cert renewals, or transport changes; subscription restarts and path renames can look like drops without a live fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telegraf (`inputs.gnmi` plugin) → Splunk HEC.\n• Ensure the following data sources are available: gNMI path: `/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/afi-safis/afi-safi/state` (prefixes/received, prefixes/installed); Telegraf metric: `openconfig_bgp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSubscribe to BGP AFI-SAFI state at 30s intervals. Baseline normal prefix counts per peer. Alert on >10% change in a 5-minute window or absolute change >1000 prefixes. A full BGP table leak (800k+ IPv4 prefixes) into a leaf with 64k TCAM will crash forwarding — detect it before the FIB overflows. Correlate with CPU spikes (UC-5.11.4) during convergence events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats latest(\"openconfig_bgp.prefixes_received\") AS prefixes WHERE index=gnmi_metrics BY host, neighbor_address, afi_safi_name span=5m\n| streamstats current=f last(prefixes) AS prev_prefixes by host, neighbor_address, afi_safi_name\n| eval delta=prefixes - prev_prefixes, pct_change=if(prev_prefixes>0, round(delta*100/prev_prefixes, 1), 0)\n| where abs(pct_change) > 10 OR abs(delta) > 1000\n| table _time, host, neighbor_address, afi_safi_name, prev_prefixes, prefixes, delta, pct_change\n| sort -abs(delta)\n```\n\nUnderstanding this SPL\n\n**BGP Prefix Count and Route Churn Monitoring** — A sudden jump in received BGP prefixes could indicate a route leak, hijack, or misconfigured peer advertising a full table into a leaf switch. Conversely, a prefix count drop means routes are being withdrawn — potentially black-holing traffic. Streaming prefix counts via gNMI at 30-second intervals detects these events far faster than waiting for syslog or SNMP traps.\n\nDocumented **Data sources**: gNMI path: `/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/afi-safis/afi-safi/state` (prefixes/received, prefixes/installed); Telegraf metric: `openconfig_bgp`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. 
Both the main search and the sketch above should target the indexes and sourcetypes you configured for this feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gnmi_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `streamstats current=f last(prefixes)` carries the previous 5-minute prefix count into each row, split **by host, neighbor_address, afi_safi_name**, so each peer is compared only with itself.\n• `eval` derives **delta**, **pct_change**, and **abs_delta** from the current and previous counts.\n• Filters the current rows with `where abs(pct_change) > 10 OR abs_delta > 1000` — the alert thresholds for this monitoring goal.\n• `sort - abs_delta` puts the largest swings first, and `table` keeps only the reporting columns.\n\nBecause this search uses `mstats`, the BGP metrics must land in a metrics index (here `gnmi_metrics`); an event index will return no results.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (prefix count per peer over time), Table (peers with recent churn), Single value (total fabric prefix count), Alert list (abnormal changes).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300/9500, IOS XR (ASR/NCS); Arista 7050X3/7060X/7260X3/7280R3; Juniper MX204/MX304/MX480; Nokia SR Linux",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when route tables are shaking more than usual, which can be the first sign of a bad peering, filter mistake, or unstable link upstream.",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.11.9",
              "n": "Hardware Component Health (Fan, PSU, Temperature)",
              "c": "high",
              "f": "beginner",
              "v": "Environmental monitoring via SNMP Entity-MIB polling is slow and often unreliable. gNMI streaming of `/components/component/state` provides real-time temperature, fan speed, and power supply status. A failing fan in a top-of-rack switch triggers thermal throttling within minutes — early detection prevents performance degradation and emergency hardware swaps during business hours.",
              "t": "Telegraf (`inputs.gnmi` plugin) → Splunk HEC",
              "d": "gNMI path: `/components/component/state` (temperature, type=FAN/POWER_SUPPLY/SENSOR); Telegraf metric: `openconfig_platform`",
              "q": "| mstats latest(\"openconfig_platform.temperature_instant\") AS temp_c WHERE index=gnmi_metrics BY host, name span=5m\n| where temp_c > 65\n| eval severity=case(temp_c > 85, \"CRITICAL\", temp_c > 75, \"HIGH\", temp_c > 65, \"WARNING\")\n| table _time, host, name, temp_c, severity\n| sort -temp_c",
              "m": "Subscribe to `/components/component/state` at 60s intervals. Filter for component types FAN, POWER_SUPPLY, and SENSOR. Set thresholds per component type: chassis inlet >40°C warning, ASIC >85°C critical, fan speed <2000 RPM warning. Alert on PSU state changes (redundancy loss). Track temperature trends to detect environmental issues (HVAC failure, hot aisle containment breach).",
              "z": "Gauge (temperature per component), Status grid (fan/PSU status across fabric), Line chart (temperature trend), Table (components above threshold).",
              "kfp": "Telemetry pauses during device reboots, cert renewals, or transport changes; subscription restarts and path renames can look like drops without a live fault.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telegraf (`inputs.gnmi` plugin) → Splunk HEC.\n• Ensure the following data sources are available: gNMI path: `/components/component/state` (temperature, type=FAN/POWER_SUPPLY/SENSOR); Telegraf metric: `openconfig_platform`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSubscribe to `/components/component/state` at 60s intervals. Filter for component types FAN, POWER_SUPPLY, and SENSOR. Set thresholds per component type: chassis inlet >40°C warning, ASIC >85°C critical, fan speed <2000 RPM warning. Alert on PSU state changes (redundancy loss). Track temperature trends to detect environmental issues (HVAC failure, hot aisle containment breach).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats latest(\"openconfig_platform.temperature_instant\") AS temp_c WHERE index=gnmi_metrics BY host, name span=5m\n| where temp_c > 65\n| eval severity=case(temp_c > 85, \"CRITICAL\", temp_c > 75, \"HIGH\", temp_c > 65, \"WARNING\")\n| table _time, host, name, temp_c, severity\n| sort -temp_c\n```\n\nUnderstanding this SPL\n\n**Hardware Component Health (Fan, PSU, Temperature)** — Environmental monitoring via SNMP Entity-MIB polling is slow and often unreliable. gNMI streaming of `/components/component/state` provides real-time temperature, fan speed, and power supply status. A failing fan in a top-of-rack switch triggers thermal throttling within minutes — early detection prevents performance degradation and emergency hardware swaps during business hours.\n\nDocumented **Data sources**: gNMI path: `/components/component/state` (temperature, type=FAN/POWER_SUPPLY/SENSOR); Telegraf metric: `openconfig_platform`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gnmi_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Filters the current rows with `where temp_c > 65` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Hardware Component Health (Fan, PSU, Temperature)**): table _time, host, name, temp_c, severity\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Hardware Component Health (Fan, PSU, Temperature)** — Environmental monitoring via SNMP Entity-MIB polling is slow and often unreliable. gNMI streaming of `/components/component/state` provides real-time temperature, fan speed, and power supply status. 
Optional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Hardware Component Health (Fan, PSU, Temperature)** — Environmental monitoring via SNMP Entity-MIB polling is slow and often unreliable. gNMI streaming of `/components/component/state` provides real-time temperature, fan speed, and power supply status. A failing fan in a top-of-rack switch triggers thermal throttling within minutes — early detection prevents performance degradation and emergency hardware swaps during business hours.\n\nDocumented **Data sources**: gNMI path: `/components/component/state` (temperature, type=FAN/POWER_SUPPLY/SENSOR); Telegraf metric: `openconfig_platform`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. The SPL above should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration for the `Performance` data model; otherwise `tstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (temperature per component), Status grid (fan/PSU status across fabric), Line chart (temperature trend), Table (components above threshold).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300/9500, Catalyst 9300/9500; Arista 7050X3/7060X/7260X3/7280R3; Juniper QFX/EX/MX; Nokia SR Linux",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see hot gear or sick fans and power supplies early, so you can fix cooling or swap parts before the box throttles or fails.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.11.10",
              "n": "Telegraf gNMI Collector Pipeline Health",
              "c": "high",
              "f": "beginner",
              "v": "If your Telegraf collector goes down or loses connectivity to a target device, all gNMI telemetry for that device stops silently — dashboards freeze at last-known-good values and alerts never fire. Monitoring the Telegraf pipeline itself (gather time, buffer usage, write failures, active connections) is essential to trust the data flowing through it.",
              "t": "Telegraf internal metrics → Splunk HEC",
              "d": "Telegraf `internal` metrics (`internal_gather`, `internal_write`, `internal_memstats`); `sourcetype=telegraf:internal`",
              "q": "| mstats avg(\"internal_gather.gather_time_ns\") AS gather_ns, latest(\"internal_gather.metrics_gathered\") AS gathered WHERE index=gnmi_metrics BY host, input span=5m\n| eval gather_ms=round(gather_ns/1000000, 1)\n| where gather_ms > 5000 OR gathered=0\n| table _time, host, input, gather_ms, gathered\n| sort -gather_ms",
              "m": "Enable Telegraf `internal` input plugin to emit self-monitoring metrics every 60 seconds. Monitor `gather_time_ns` (should be <5s for healthy connections), `metrics_gathered` (should be >0), and `buffer_size` (should not grow unbounded). Alert when a specific device's gather count drops to zero (connection lost). Track `write_errors` to detect HEC ingestion issues. Deploy multiple Telegraf instances with overlapping targets for redundancy.",
              "z": "Table (collector health matrix), Line chart (gather time per device), Single value (total active subscriptions), Alert list (stale collectors).",
              "kfp": "Telemetry pauses during device reboots, cert renewals, or transport changes; subscription restarts and path renames can look like drops without a live fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telegraf internal metrics → Splunk HEC.\n• Ensure the following data sources are available: Telegraf `internal` metrics (`internal_gather`, `internal_write`, `internal_memstats`); `sourcetype=telegraf:internal`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Telegraf `internal` input plugin to emit self-monitoring metrics every 60 seconds. Monitor `gather_time_ns` (should be <5s for healthy connections), `metrics_gathered` (should be >0), and `buffer_size` (should not grow unbounded). Alert when a specific device's gather count drops to zero (connection lost). Track `write_errors` to detect HEC ingestion issues. Deploy multiple Telegraf instances with overlapping targets for redundancy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(\"internal_gather.gather_time_ns\") AS gather_ns, latest(\"internal_gather.metrics_gathered\") AS gathered WHERE index=gnmi_metrics BY host, input span=5m\n| eval gather_ms=round(gather_ns/1000000, 1)\n| where gather_ms > 5000 OR gathered=0\n| table _time, host, input, gather_ms, gathered\n| sort -gather_ms\n```\n\nUnderstanding this SPL\n\n**Telegraf gNMI Collector Pipeline Health** — If your Telegraf collector goes down or loses connectivity to a target device, all gNMI telemetry for that device stops silently — dashboards freeze at last-known-good values and alerts never fire. Monitoring the Telegraf pipeline itself (gather time, buffer usage, write failures, active connections) is essential to trust the data flowing through it.\n\nDocumented **Data sources**: Telegraf `internal` metrics (`internal_gather`, `internal_write`, `internal_memstats`); `sourcetype=telegraf:internal`. **App/TA** (typical add-on context): Telegraf internal metrics → Splunk HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gnmi_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **gather_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gather_ms > 5000 OR gathered=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Telegraf gNMI Collector Pipeline Health**): table _time, host, input, gather_ms, gathered\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (collector health matrix), Line chart (gather time per device), Single value (total active subscriptions), Alert list (stale collectors).",
              "script": "",
              "premium": "",
              "hw": "Telegraf collector instances (any platform)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the collector that ingests gNMI is falling behind, so a broken pipe does not look like a perfect network in Splunk.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.11.11",
              "n": "ACL Hit Counter Analysis via Streaming Telemetry",
              "c": "medium",
              "f": "intermediate",
              "v": "ACL rules that never match traffic are dead weight that slows TCAM lookups and obscures security intent. Conversely, deny rules with climbing hit counts reveal active attack patterns. Streaming ACL counters via gNMI at 30-second intervals provides the data needed for security policy effectiveness analysis and ACL cleanup — tasks that are nearly impossible with periodic SNMP polling.",
              "t": "Telegraf (`inputs.gnmi` plugin) → Splunk HEC",
              "d": "gNMI path: `/acl/acl-sets/acl-set/acl-entries/acl-entry/state` (matched-packets, matched-octets); Telegraf metric: `openconfig_acl`",
              "q": "| mstats rate_avg(\"openconfig_acl.matched_packets\") AS hits_per_sec WHERE index=gnmi_metrics BY host, acl_name, sequence_id, description span=5m\n| where hits_per_sec > 0\n| eval daily_hits=round(hits_per_sec * 86400, 0)\n| table host, acl_name, sequence_id, description, hits_per_sec, daily_hits\n| sort -hits_per_sec",
              "m": "Subscribe to `/acl/acl-sets/acl-set/acl-entries/acl-entry/state` at 30s intervals. Identify deny rules with increasing hit counts — these represent blocked attack traffic. Identify permit rules with zero hits over 30 days — candidates for cleanup. Cross-reference with firewall logs and IDS alerts for security correlation. Generate monthly ACL effectiveness reports for compliance.",
              "z": "Table (ACL rules sorted by hit rate), Bar chart (top 10 deny rules by hits), Stacked chart (permit vs deny hits over time), List (zero-hit rules for cleanup).",
              "kfp": "Telemetry pauses during device reboots, cert renewals, or transport changes; subscription restarts and path renames can look like drops without a live fault.",
              "refs": "[CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telegraf (`inputs.gnmi` plugin) → Splunk HEC.\n• Ensure the following data sources are available: gNMI path: `/acl/acl-sets/acl-set/acl-entries/acl-entry/state` (matched-packets, matched-octets); Telegraf metric: `openconfig_acl`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSubscribe to `/acl/acl-sets/acl-set/acl-entries/acl-entry/state` at 30s intervals. Identify deny rules with increasing hit counts — these represent blocked attack traffic. Identify permit rules with zero hits over 30 days — candidates for cleanup. Cross-reference with firewall logs and IDS alerts for security correlation. Generate monthly ACL effectiveness reports for compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats rate_avg(\"openconfig_acl.matched_packets\") AS hits_per_sec WHERE index=gnmi_metrics BY host, acl_name, sequence_id, description span=5m\n| where hits_per_sec > 0\n| eval daily_hits=round(hits_per_sec * 86400, 0)\n| table host, acl_name, sequence_id, description, hits_per_sec, daily_hits\n| sort -hits_per_sec\n```\n\nUnderstanding this SPL\n\n**ACL Hit Counter Analysis via Streaming Telemetry** — ACL rules that never match traffic are dead weight that slows TCAM lookups and obscures security intent. Conversely, deny rules with climbing hit counts reveal active attack patterns. Streaming ACL counters via gNMI at 30-second intervals provides the data needed for security policy effectiveness analysis and ACL cleanup — tasks that are nearly impossible with periodic SNMP polling.\n\nDocumented **Data sources**: gNMI path: `/acl/acl-sets/acl-set/acl-entries/acl-entry/state` (matched-packets, matched-octets); Telegraf metric: `openconfig_acl`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gnmi_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Filters the current rows with `where hits_per_sec > 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **daily_hits** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **ACL Hit Counter Analysis via Streaming Telemetry**): table host, acl_name, sequence_id, description, hits_per_sec, daily_hits\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ACL Hit Counter Analysis via Streaming Telemetry** — ACL rules that never match traffic are dead weight that slows TCAM lookups and obscures security intent. Conversely, deny rules with climbing hit counts reveal active attack patterns. 
Optional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ACL Hit Counter Analysis via Streaming Telemetry** — ACL rules that never match traffic are dead weight that slows TCAM lookups and obscures security intent. Conversely, deny rules with climbing hit counts reveal active attack patterns. Streaming ACL counters via gNMI at 30-second intervals provides the data needed for security policy effectiveness analysis and ACL cleanup — tasks that are nearly impossible with periodic SNMP polling.\n\nDocumented **Data sources**: gNMI path: `/acl/acl-sets/acl-set/acl-entries/acl-entry/state` (matched-packets, matched-octets); Telegraf metric: `openconfig_acl`. **App/TA** (typical add-on context): Telegraf (`inputs.gnmi` plugin) → Splunk HEC. The SPL above should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration for the `Network_Traffic` data model; otherwise `tstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (ACL rules sorted by hit rate), Bar chart (top 10 deny rules by hits), Stacked chart (permit vs deny hits over time), List (zero-hit rules for cleanup).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300/9500 (NX-OS 9.3+), IOS XR; Arista 7050X3/7060X/7260X3/7280R3; Juniper MX/QFX",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which firewall or switch rules are really being hit, so you can clean dead rules and see where attacks or mistakes show up in traffic.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.4,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 11,
            "none": 0
          }
        },
        {
          "i": "5.12",
          "n": "Telecommunications & CDR Analytics",
          "u": [
            {
              "i": "5.12.1",
              "n": "CDR Call Failure Statistics",
              "c": "high",
              "f": "beginner",
              "v": "Aggregates release causes, SIP response codes, and ISUP cause values from CDRs to spot trunk, routing, or peer outages early.",
              "t": "SBC CDR CSV/JSON ingestion, custom props",
              "d": "`sourcetype=\"cdr:voip\"`, `sourcetype=\"broadworks:cdr\"`",
              "q": "index=voip sourcetype=\"cdr:voip\"\n| eval is_fail=if(call_status!=\"answered\" OR match(lower(call_status),\"fail\"),1,0)\n| timechart span=15m sum(is_fail) as fails count as total\n| eval fail_pct=if(total>0, round(100*fails/total,2), 0)",
              "m": "Normalize vendor-specific cause codes to Q.850 / SIP mapping table; baseline by destination prefix (emergency, international).",
              "z": "Stacked area (causes over time), Pie chart (cause mix), Single value (fail %).",
              "kfp": "Brief drops during gateway failovers, codec renegotiation, or PSTN trunk maintenance can add release reasons that look bad in a chart; compare to the SBC active-alarm view for the same minute.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SBC CDR CSV/JSON ingestion, custom props.\n• Ensure the following data sources are available: `sourcetype=\"cdr:voip\"`, `sourcetype=\"broadworks:cdr\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize vendor-specific cause codes to Q.850 / SIP mapping table; baseline by destination prefix (emergency, international).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cdr:voip\"\n| eval is_fail=if(call_status!=\"answered\" OR match(lower(call_status),\"fail\"),1,0)\n| timechart span=15m sum(is_fail) as fails count as total\n| eval fail_pct=if(total>0, round(100*fails/total,2), 0)\n```\n\nUnderstanding this SPL\n\n**CDR Call Failure Statistics** — Aggregates release causes, SIP response codes, and ISUP cause values from CDRs to spot trunk, routing, or peer outages early.\n\nDocumented **Data sources**: `sourcetype=\"cdr:voip\"`, `sourcetype=\"broadworks:cdr\"`. **App/TA** (typical add-on context): SBC CDR CSV/JSON ingestion, custom props. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cdr:voip. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cdr:voip\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_fail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=15m** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area (causes over time), Pie chart (cause mix), Single value (fail %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when too many calls are failing so you can fix trunks, dial plans, or carrier issues before customers keep redialing.",
              "mtype": [
                "Availability"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.12.2",
              "n": "Call Volume Trending by Destination",
              "c": "medium",
              "f": "beginner",
              "v": "Traffic engineering for trunk groups and geographic hot spots — detects flash crowds or fraud-driven spikes to premium destinations.",
              "t": "CDR aggregation",
              "d": "`sourcetype=\"cdr:voip\"` with `called_number`, `route_label`",
              "q": "index=voip sourcetype=\"cdr:voip\"\n| eval dest_prefix=substr(called_number,1,6)\n| timechart span=1h sum(duration_sec) as minutes count as calls by dest_prefix\n| sort -calls",
              "m": "Mask PANI for privacy dashboards; use HMAC of full number for drilldown in secured role.",
              "z": "Line chart (calls by prefix), Map (if geo-lookup on prefix), Table (top routes).",
              "kfp": "Holidays, marketing bursts, and short code campaigns can change destination mix without a fault; baseline by day-of-week before alerting.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CDR aggregation.\n• Ensure the following data sources are available: `sourcetype=\"cdr:voip\"` with `called_number`, `route_label`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMask PANI for privacy dashboards; use HMAC of full number for drilldown in secured role.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cdr:voip\"\n| eval dest_prefix=substr(called_number,1,6)\n| timechart span=1h sum(duration_sec) as minutes count as calls by dest_prefix\n| sort -calls\n```\n\nUnderstanding this SPL\n\n**Call Volume Trending by Destination** — Traffic engineering for trunk groups and geographic hot spots — detects flash crowds or fraud-driven spikes to premium destinations.\n\nDocumented **Data sources**: `sourcetype=\"cdr:voip\"` with `called_number`, `route_label`. **App/TA** (typical add-on context): CDR aggregation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cdr:voip. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cdr:voip\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **dest_prefix** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by dest_prefix** — ideal for trending and alerting on this use case.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (calls by prefix), Map (if geo-lookup on prefix), Table (top routes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which called areas or routes get busy over time so you can plan trunks and spot odd spikes that are not part of a normal day.",
              "mtype": [
                "Capacity"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.12.3",
              "n": "Call Duration Distribution Analysis",
              "c": "medium",
              "f": "intermediate",
              "v": "Shifts toward very short or very long holds may indicate robocall, modem, or toll fraud vs. normal conversational distribution.",
              "t": "CDR",
              "d": "`sourcetype=\"cdr:voip\"` `duration_sec`",
              "q": "index=voip sourcetype=\"cdr:voip\" call_status=\"answered\"\n| bucket duration_sec span=30 as dur_bin\n| stats count by dur_bin\n| eventstats sum(count) as tot\n| eval pct=round(100*count/tot,2)\n| sort dur_bin",
              "m": "Compare to historical histogram; alert on >2× share in `<6s` buckets (wangiri / scanners).",
              "z": "Histogram (duration), Line chart (percentile trend via `eventstats perc*`).",
              "kfp": "Call-center wrap-up, hold music, and parked calls can stretch tails; very short successful calls are normal for callbacks and IVR self-service.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CDR.\n• Ensure the following data sources are available: `sourcetype=\"cdr:voip\"` `duration_sec`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare to historical histogram; alert on >2× share in `<6s` buckets (wangiri / scanners).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cdr:voip\" call_status=\"answered\"\n| bucket duration_sec span=30 as dur_bin\n| stats count by dur_bin\n| eventstats sum(count) as tot\n| eval pct=round(100*count/tot,2)\n| sort dur_bin\n```\n\nUnderstanding this SPL\n\n**Call Duration Distribution Analysis** — Shifts toward very short or very long holds may indicate robocall, modem, or toll fraud vs. normal conversational distribution.\n\nDocumented **Data sources**: `sourcetype=\"cdr:voip\"` `duration_sec`. **App/TA** (typical add-on context): CDR. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cdr:voip. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cdr:voip\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by dur_bin** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (duration), Line chart (percentile trend via `eventstats perc*`).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see if call lengths clump in odd ways—very short or very long—which can be fraud, scanners, or a broken app, not a normal talk pattern.",
              "mtype": [
                "Performance",
                "Fraud"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.12.4",
              "n": "SIP Trunk Utilization",
              "c": "high",
              "f": "intermediate",
              "v": "Concurrent session counts or peg counts vs. licensed trunk capacity — prevents preemptive blocking at peak.",
              "t": "SBC SNMP, CDR-derived concurrency, Stream SIP",
              "d": "`sourcetype=\"snmp:sbc\"`, `sourcetype=\"stream:sip\"`",
              "q": "index=voip sourcetype=\"stream:sip\" OR sourcetype=\"snmp:sbc\"\n| eval concurrent=if(isnotnull(active_calls), active_calls, curr_sess)\n| timechart span=1m max(concurrent) as peak_sess by trunk_group\n| lookup trunk_capacity trunk_group OUTPUT licensed_sess\n| eval util_pct=round(100*peak_sess/licensed_sess,1)\n| where util_pct>85",
              "m": "Separate inbound vs. outbound if asymmetric licensing; forecast with `predict` for capacity planning.",
              "z": "Area chart (concurrency), Gauge (utilization %), Table (trunk groups at risk).",
              "kfp": "Rehome events, SBC restarts, and carrier maintenance can dip utilization without a customer-visible outage; use duration filters before paging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SBC SNMP, CDR-derived concurrency, Stream SIP.\n• Ensure the following data sources are available: `sourcetype=\"snmp:sbc\"`, `sourcetype=\"stream:sip\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSeparate inbound vs. outbound if asymmetric licensing; forecast with `predict` for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"stream:sip\" OR sourcetype=\"snmp:sbc\"\n| eval concurrent=if(isnotnull(active_calls), active_calls, curr_sess)\n| timechart span=1m max(concurrent) as peak_sess by trunk_group\n| lookup trunk_capacity trunk_group OUTPUT licensed_sess\n| eval util_pct=round(100*peak_sess/licensed_sess,1)\n| where util_pct>85\n```\n\nUnderstanding this SPL\n\n**SIP Trunk Utilization** — Concurrent session counts or peg counts vs. licensed trunk capacity — prevents preemptive blocking at peak.\n\nDocumented **Data sources**: `sourcetype=\"snmp:sbc\"`, `sourcetype=\"stream:sip\"`. **App/TA** (typical add-on context): SBC SNMP, CDR-derived concurrency, Stream SIP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: stream:sip, snmp:sbc. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"stream:sip\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **concurrent** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1m** buckets with a separate series **by trunk_group** — ideal for trending and alerting on this use case.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where util_pct>85` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (concurrency), Gauge (utilization %), Table (trunk groups at risk).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know how full your SIP trunks to carriers or cloud are, so you can add capacity or move traffic before busy-hour busy signals.",
              "mtype": [
                "Capacity"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.12.5",
              "n": "VoIP MOS Score Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Mean Opinion Score (or derived R-factor) from RTCP XR or vendor QoE reports — user-perceived VoLTE/VoIP quality.",
              "t": "SBC QoE records, Poly/Vendor QoS feeds",
              "d": "`sourcetype=\"qos:rtcp\"`, `sourcetype=\"cdr:voip\"` with `mos` field",
              "q": "index=voip (sourcetype=\"qos:rtcp\" OR sourcetype=\"cdr:voip\")\n| where isnotnull(mos)\n| timechart span=5m avg(mos) as avg_mos perc5(mos) as worst_mos by codec\n| where avg_mos < 3.8 OR worst_mos < 3.0",
              "m": "ITU-T G.107 E-model targets; correlate with jitter/loss from same leg_id; segment by radio access (VoLTE) vs. Wi-Fi.",
              "z": "Line chart (MOS trend), Scatter (loss vs. MOS), Table (worst calls).",
              "kfp": "UDP jitter on Wi‑Fi, Bluetooth headsets, and mobile handovers can depress MOS without a core fault; compare site-to-site before blaming the core.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SBC QoE records, Poly/Vendor QoS feeds.\n• Ensure the following data sources are available: `sourcetype=\"qos:rtcp\"`, `sourcetype=\"cdr:voip\"` with `mos` field.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nITU-T G.107 E-model targets; correlate with jitter/loss from same leg_id; segment by radio access (VoLTE) vs. Wi-Fi.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip (sourcetype=\"qos:rtcp\" OR sourcetype=\"cdr:voip\")\n| where isnotnull(mos)\n| timechart span=5m avg(mos) as avg_mos perc5(mos) as worst_mos by codec\n| where avg_mos < 3.8 OR worst_mos < 3.0\n```\n\nUnderstanding this SPL\n\n**VoIP MOS Score Monitoring** — Mean Opinion Score (or derived R-factor) from RTCP XR or vendor QoE reports — user-perceived VoLTE/VoIP quality.\n\nDocumented **Data sources**: `sourcetype=\"qos:rtcp\"`, `sourcetype=\"cdr:voip\"` with `mos` field. **App/TA** (typical add-on context): SBC QoE records, Poly/Vendor QoS feeds. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: qos:rtcp, cdr:voip. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"qos:rtcp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(mos)` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by codec** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_mos < 3.8 OR worst_mos < 3.0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (MOS trend), Scatter (loss vs. MOS), Table (worst calls).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you watch call quality scores so people notice scratchy or one-way audio before the tickets pile up—especially on home networks and long paths.",
              "mtype": [
                "Performance"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.12.6",
              "n": "Signaling Storm Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Bursts of SIP OPTIONS, REGISTER, or diameter requests can indicate reflection DDoS or misconfigured endpoints — complements UC-5.10.5 with cross-layer view.",
              "t": "Splunk App for Stream, STP/Diameter capture",
              "d": "`sourcetype=\"stream:sip\"`, `sourcetype=\"diameter:cap\"`",
              "q": "index=signaling (sourcetype=\"stream:sip\" OR sourcetype=\"diameter:cap\")\n| bin _time span=1m\n| stats count by method, cmd_code, _time\n| eventstats avg(count) as mu, stdev(count) as s by method\n| where count > mu+5*s\n| sort -count",
              "m": "Whitelist health-check sources; coordinate with peer ops when storm targets upstream interconnect.",
              "z": "Timeline (spike detection), Table (method × source ASN), Single value (peak RPS).",
              "kfp": "Signaling errors from legacy phones with old firmware or NAT or SBC misalignment can spray retries; large conferences and registration refreshes can also spike counts briefly.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk App for Stream, STP/Diameter capture.\n• Ensure the following data sources are available: `sourcetype=\"stream:sip\"`, `sourcetype=\"diameter:cap\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nWhitelist health-check sources; coordinate with peer ops when storm targets upstream interconnect.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=signaling (sourcetype=\"stream:sip\" OR sourcetype=\"diameter:cap\")\n| bin _time span=1m\n| stats count by method, cmd_code, _time\n| eventstats avg(count) as mu, stdev(count) as s by method\n| where count > mu+5*s\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Signaling Storm Detection** — Bursts of SIP OPTIONS, REGISTER, or diameter requests can indicate reflection DDoS or misconfigured endpoints — complements UC-5.10.5 with cross-layer view.\n\nDocumented **Data sources**: `sourcetype=\"stream:sip\"`, `sourcetype=\"diameter:cap\"`. **App/TA** (typical add-on context): Splunk App for Stream, STP/Diameter capture. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: signaling; **sourcetype**: stream:sip, diameter:cap. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=signaling, sourcetype=\"stream:sip\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by method, cmd_code, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by method** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > mu+5*s` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (spike detection), Table (method × source ASN), Single value (peak RPS).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you catch a flood of SIP or Diameter messages early—often a misconfiguration, loop, or attack—before the control plane drowns in noise.",
              "mtype": [
                "Availability",
                "Security"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [
                "stream"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.12.7",
              "n": "IMS Registration Failure Rate",
              "c": "critical",
              "f": "intermediate",
              "v": "HSS/UDM or P-CSCF failures show up as elevated 401/403/timeout on REGISTER — impacts VoLTE attach and VoWiFi.",
              "t": "P-CSCF logs, IMS CDR",
              "d": "`sourcetype=\"ims:sip\"` `method=REGISTER`, `sourcetype=\"stream:sip\"`",
              "q": "index=ims sourcetype=\"ims:sip\" method=\"REGISTER\"\n| eval fail=if(match(reply_code,\"^(401|403|408|5..)$\"),1,0)\n| timechart span=5m sum(fail) as fails, count as attempts\n| eval fail_rate=round(100*fails/attempts,2)\n| where fail_rate > 5",
              "m": "Break out by `visited_network` for roaming; correlate with certificate expiry on IPSec for VoWiFi.",
              "z": "Line chart (fail rate), Bar chart (SIP reason by S-CSCF), Table (IMSI hash top failures).",
              "kfp": "Planned S-CSCF work, HSS cutovers, and certificate rotations create expected registration churn; use maintenance windows in your alert logic.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: P-CSCF logs, IMS CDR.\n• Ensure the following data sources are available: `sourcetype=\"ims:sip\"` `method=REGISTER`, `sourcetype=\"stream:sip\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBreak out by `visited_network` for roaming; correlate with certificate expiry on IPSec for VoWiFi.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ims sourcetype=\"ims:sip\" method=\"REGISTER\"\n| eval fail=if(match(reply_code,\"^(401|403|408|5..)$\"),1,0)\n| timechart span=5m sum(fail) as fails, count as attempts\n| eval fail_rate=round(100*fails/attempts,2)\n| where fail_rate > 5\n```\n\nUnderstanding this SPL\n\n**IMS Registration Failure Rate** — HSS/UDM or P-CSCF failures show up as elevated 401/403/timeout on REGISTER — impacts VoLTE attach and VoWiFi.\n\nDocumented **Data sources**: `sourcetype=\"ims:sip\"` `method=REGISTER`, `sourcetype=\"stream:sip\"`. **App/TA** (typical add-on context): P-CSCF logs, IMS CDR. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ims; **sourcetype**: ims:sip. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ims, sourcetype=\"ims:sip\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_rate > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (fail rate), Bar chart (SIP reason by S-CSCF), Table (IMSI hash top failures).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you know when handsets or apps cannot register to the core—before users only say “I have no bars” in the office.",
              "mtype": [
                "Availability"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.12.8",
              "n": "Number Portability Request Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "LNP order status, NPAC responses, and port-out churn — operations and regulatory reporting for porting SLAs.",
              "t": "NP/BSS extracts, SOA APIs",
              "d": "`sourcetype=\"lnp:order\"`, `sourcetype=\"npac:soa\"`",
              "q": "index=telco sourcetype=\"lnp:order\"\n| where order_status IN (\"PENDING\",\"REJECTED\",\"TIMEOUT\")\n| stats count, avg((now()-submitted_epoch)/86400) as age_days by tn_range, losing_carrier\n| sort -age_days",
              "m": "SLA alerts for orders >72h in PENDING; root-cause codes joined to carrier contact list.",
              "z": "Funnel (order states), Table (aging ports), Bar chart (reject reasons).",
              "kfp": "Batch jobs and re-drives in porting can create event bursts without customer impact; compare to the carrier’s actual FOC date, not the raw log count only.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NP/BSS extracts, SOA APIs.\n• Ensure the following data sources are available: `sourcetype=\"lnp:order\"`, `sourcetype=\"npac:soa\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSLA alerts for orders >72h in PENDING; root-cause codes joined to carrier contact list.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=telco sourcetype=\"lnp:order\"\n| where order_status IN (\"PENDING\",\"REJECTED\",\"TIMEOUT\")\n| stats count, avg((now()-submitted_epoch)/86400) as age_days by tn_range, losing_carrier\n| sort -age_days\n```\n\nUnderstanding this SPL\n\n**Number Portability Request Tracking** — LNP order status, NPAC responses, and port-out churn — operations and regulatory reporting for porting SLAs.\n\nDocumented **Data sources**: `sourcetype=\"lnp:order\"`, `sourcetype=\"npac:soa\"`. **App/TA** (typical add-on context): NP/BSS extracts, SOA APIs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: telco; **sourcetype**: lnp:order. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=telco, sourcetype=\"lnp:order\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where order_status IN (\"PENDING\",\"REJECTED\",\"TIMEOUT\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by tn_range, losing_carrier** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Funnel (order states), Table (aging ports), Bar chart (reject reasons).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you track number-port requests so you can see delays or backlogs with carriers, which is important when a business is counting on a port date.",
              "mtype": [
                "Operations"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.12.9",
              "n": "Roaming Usage Anomaly",
              "c": "high",
              "f": "advanced",
              "v": "Sudden data/voice roaming volume from HLR/VLR or TAP records may indicate SIM box, cloned IMSI, or billing leakage.",
              "t": "TAP files (TD.35), roaming analytics",
              "d": "`sourcetype=\"tap:cdr\"`, `sourcetype=\"roaming:usage\"`",
              "q": "index=telco sourcetype=\"roaming:usage\"\n| bin _time span=1d\n| stats sum(charge_units) as units, sum(charge_amount) as rev by imsi_hash, visited_country, _time\n| eventstats avg(units) as baseline by visited_country\n| where units > 10*baseline\n| sort -units",
              "m": "Privacy: only hashed IMSI in Splunk; correlate with HLR IMEI change for SIM swap fraud.",
              "z": "Map (visited countries), Table (suspicious subscribers), Line chart (roaming $ trend).",
              "kfp": "Border handovers, ferry routes, and mis-tagged MCC in partner feeds can look like false roaming; brief drops during gateway failovers and trunk maintenance can also add noise.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TAP files (TD.35), roaming analytics.\n• Ensure the following data sources are available: `sourcetype=\"tap:cdr\"`, `sourcetype=\"roaming:usage\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPrivacy: only hashed IMSI in Splunk; correlate with HLR IMEI change for SIM swap fraud.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=telco sourcetype=\"roaming:usage\"\n| bin _time span=1d\n| stats sum(charge_units) as units, sum(charge_amount) as rev by imsi_hash, visited_country, _time\n| eventstats avg(units) as baseline by visited_country\n| where units > 10*baseline\n| sort -units\n```\n\nUnderstanding this SPL\n\n**Roaming Usage Anomaly** — Sudden data/voice roaming volume from HLR/VLR or TAP records may indicate SIM box, cloned IMSI, or billing leakage.\n\nDocumented **Data sources**: `sourcetype=\"tap:cdr\"`, `sourcetype=\"roaming:usage\"`. **App/TA** (typical add-on context): TAP files (TD.35), roaming analytics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: telco; **sourcetype**: roaming:usage. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=telco, sourcetype=\"roaming:usage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by imsi_hash, visited_country, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by visited_country** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where units > 10*baseline` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (visited countries), Table (suspicious subscribers), Line chart (roaming $ trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you flag odd roaming or premium usage before an invoice or fraud team finds it first—especially when a SIM or plan should stay home.",
              "mtype": [
                "Fraud",
                "Revenue Assurance"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [
                "asterisk"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.12.10",
              "n": "Toll Fraud Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Premium-rate, international, or short-duration high-cost patterns from compromised PBX or SIP credentials — classic CDR analytics use case.",
              "t": "SBC CDR, fraud scoring apps",
              "d": "`sourcetype=\"cdr:voip\"` with `rate_class`, `destination`",
              "q": "index=voip sourcetype=\"cdr:voip\"\n| lookup premium_and_high_risk_prefixes called_number OUTPUT risk_tier\n| where risk_tier IN (\"premium\",\"satellite\",\"high_cost_geo\")\n| stats sum(toll_charge) as cost, count, dc(calling_party) as sources by src, hour\n| where cost>500 OR count>100\n| sort -cost",
              "m": "Hotline to NOC + auto-block high-risk destinations on SBC after threshold; require PIN for international on suspect trunks.",
              "z": "Table (top fraud legs), Map (destination countries), Timeline (attack window).",
              "kfp": "Time-zone and midnight-boundary rating can cluster calls; also brief drops during gateway failovers, codec renegotiation, or PSTN trunk maintenance can add retries that look like extra attempts.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SBC CDR, fraud scoring apps.\n• Ensure the following data sources are available: `sourcetype=\"cdr:voip\"` with `rate_class`, `destination`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nHotline to NOC + auto-block high-risk destinations on SBC after threshold; require PIN for international on suspect trunks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cdr:voip\"\n| lookup premium_and_high_risk_prefixes called_number OUTPUT risk_tier\n| where risk_tier IN (\"premium\",\"satellite\",\"high_cost_geo\")\n| stats sum(toll_charge) as cost, count, dc(calling_party) as sources by src, hour\n| where cost>500 OR count>100\n| sort -cost\n```\n\nUnderstanding this SPL\n\n**Toll Fraud Detection** — Premium-rate, international, or short-duration high-cost patterns from compromised PBX or SIP credentials — classic CDR analytics use case.\n\nDocumented **Data sources**: `sourcetype=\"cdr:voip\"` with `rate_class`, `destination`. **App/TA** (typical add-on context): SBC CDR, fraud scoring apps. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cdr:voip. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cdr:voip\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where risk_tier IN (\"premium\",\"satellite\",\"high_cost_geo\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, hour** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cost>500 OR count>100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top fraud legs), Map (destination countries), Timeline (attack window).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you spot when someone is routing expensive or odd international calls, which can be fraud, a misdial, or a broken private branch setup—before the bill is paid.",
              "mtype": [
                "Fraud",
                "Security"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.23",
              "n": "Citrix ADC AppFlow Export Health",
              "c": "high",
              "f": "intermediate",
              "v": "Citrix ADC AppFlow exports flow records to external IPFIX collectors for traffic analytics. If exports are dropped, ignored, or templates do not match the collector, you lose visibility into application traffic and may miss security or capacity signals. Monitoring AppFlow health ensures flow telemetry continuously reaches Splunk and your collectors, and surfaces misconfiguration before export backlogs or silent data loss.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` or `sourcetype=\"citrix:netscaler:appflow\"` — AppFlow export counters, IPFIX collector reachability, flows sent/ignored/dropped, template mismatch events",
              "q": "index=netscaler (sourcetype=\"citrix:netscaler:syslog\" OR sourcetype=\"citrix:netscaler:appflow\")\n((\"appflow\" AND (\"drop\" OR \"ignore\" OR \"template\" OR \"collector\" OR \"IPFIX\")) OR match(_raw, \"(?i)flow.*(export|discard)\"))\n| eval flow_action=coalesce(action, export_status, \"unknown\")\n| bin _time span=5m\n| stats count as events, dc(host) as adc_count by _time, host, flow_action\n| where match(flow_action, \"(?i)(drop|ignore|fail|mismatch)\") OR events > 100\n| sort - _time\n| table _time, host, flow_action, events, adc_count",
              "m": "Enable AppFlow on the ADC and point collectors at your IPFIX endpoints. Forward `citrix:netscaler:syslog` (export health and template messages) and `citrix:netscaler:appflow` (decapsulated or forwarded flow records) to `index=netscaler`. Alert on sustained drops/ignores, collector unreachable messages, or template mismatch events. Correlate with network path and collector capacity. Baseline normal flow export rates per ADC to detect drift.",
              "z": "Time chart of AppFlow export events by action, single value for drops per hour, table of hosts with template or collector errors.",
              "kfp": "Collector restarts, IPFIX template or vendor upgrades, and short collector maintenance can log drops, ignores, and template mismatch while things resync. A collector and template-version lookup plus a minimum sustained drop rate cut one-off resync noise.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — AppFlow overview](https://docs.citrix.com/en-us/citrix-adc/current-release/application-firewall-analytics/appflow-analytics.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` or `sourcetype=\"citrix:netscaler:appflow\"` — AppFlow export counters, IPFIX collector reachability, flows sent/ignored/dropped, template mismatch events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable AppFlow on the ADC and point collectors at your IPFIX endpoints. Forward `citrix:netscaler:syslog` (export health and template messages) and `citrix:netscaler:appflow` (decapsulated or forwarded flow records) to `index=netscaler`. Alert on sustained drops/ignores, collector unreachable messages, or template mismatch events. Correlate with network path and collector capacity. Baseline normal flow export rates per ADC to detect drift.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler (sourcetype=\"citrix:netscaler:syslog\" OR sourcetype=\"citrix:netscaler:appflow\")\n((\"appflow\" AND (\"drop\" OR \"ignore\" OR \"template\" OR \"collector\" OR \"IPFIX\")) OR match(_raw, \"(?i)flow.*(export|discard)\"))\n| eval flow_action=coalesce(action, export_status, \"unknown\")\n| bin _time span=5m\n| stats count as events, dc(host) as adc_count by _time, host, flow_action\n| where match(flow_action, \"(?i)(drop|ignore|fail|mismatch)\") OR events > 100\n| sort - _time\n| table _time, host, flow_action, events, adc_count\n```\n\nUnderstanding this SPL\n\n**Citrix ADC AppFlow Export Health** — Citrix ADC AppFlow exports flow records to external IPFIX collectors for traffic analytics. If exports are dropped, ignored, or templates do not match the collector, you lose visibility into application traffic and may miss security or capacity signals. Monitoring AppFlow health ensures flow telemetry continuously reaches Splunk and your collectors, and surfaces misconfiguration before export backlogs or silent data loss.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` or `sourcetype=\"citrix:netscaler:appflow\"` — AppFlow export counters, IPFIX collector reachability, flows sent/ignored/dropped, template mismatch events. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:syslog, citrix:netscaler:appflow. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **flow_action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, flow_action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where match(flow_action, \"(?i)(drop|ignore|fail|mismatch)\") OR events > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix ADC AppFlow Export Health**): table _time, host, flow_action, events, adc_count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart of AppFlow export events by action, single value for drops per hour, table of hosts with template or collector errors.",
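"_exampleSpl": "Illustrative sketch only — `_exampleSpl` is not a catalog schema field. A per-ADC export-rate drift check for the Step 1 baselining guidance, reusing the mean/stdev pattern this catalog applies elsewhere; the 3-sigma floor is a placeholder to tune.\n\n```spl\nindex=netscaler sourcetype=\"citrix:netscaler:appflow\"\n| bin _time span=15m\n| stats count as flows by _time, host\n| eventstats avg(flows) as mu, stdev(flows) as sigma by host\n| where flows < mu - 3*sigma\n```",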
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We read AppFlow and export health on the same path so a gap in app visibility is a data path issue you can fix, not a blind spot forever.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.24",
              "n": "Citrix ADC Web Application Firewall (WAF) Violations",
              "c": "critical",
              "f": "advanced",
              "v": "Citrix ADC Web Application Firewall inspects HTTP traffic for common attacks (SQL injection, cross-site scripting, JSON/XML threats) and policy violations. Spikes in violations, or critical signatures firing in enforcement mode, indicate active attacks or misconfigured applications. Distinguishing learning mode noise from enforcement blocks, and monitoring geographic blocks, keeps incident response focused and reduces false positives.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` WAF log profile output — `violation_type`, `severity`, `policy_name`, `profile_name`, `url`, `learning` vs `enforcement` mode, geo block actions",
              "q": "index=netscaler sourcetype=\"citrix:netscaler:syslog\" (WAF OR \"Application Firewall\" OR APPFW)\n| rex field=_raw \"(?i)(?<violation>SQL|XSS|JSON|XML|CSRF|OWASP)\"\n| eval is_learning=if(match(_raw, \"(?i)learning\"), 1, 0)\n| where NOT (is_learning=1 AND match(_raw, \"(?i)info\"))\n| bin _time span=15m\n| stats count as hits, values(violation) as violation_types, dc(url) as unique_urls, latest(host) as adc by policy_name, _time\n| where hits > 0\n| sort - hits\n| table _time, adc, policy_name, violation_types, unique_urls, hits",
              "m": "Send WAF log profile output to syslog and index as `citrix:netscaler:syslog`. Parse violation type, action (block, learn, log), and policy. Build correlation searches for attack categories (SQL, XSS, JSON injection) and for geo-IP block actions if logged. Tune thresholds by application: public APIs may legitimately spike; internal apps should not. Use lookups for known scanner IPs and pen-test windows.",
              "z": "Stacked column chart of violations by type over time, treemap of policies, drilldown to sample URLs (sanitized).",
              "kfp": "Large legitimate uploads, public marketing scans, and PCI or vendor ASV external scans can trip WAF blocks that look like attacks. A scanner and OWASP test calendar, with URL and VIP scope, keep compliance testing from driving SOC fatigue.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — Web App Firewall](https://docs.citrix.com/en-us/citrix-adc/current-release/application-firewall.html)",
              "mitre": [
                "T1190"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` WAF log profile output — `violation_type`, `severity`, `policy_name`, `profile_name`, `url`, `learning` vs `enforcement` mode, geo block actions.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSend WAF log profile output to syslog and index as `citrix:netscaler:syslog`. Parse violation type, action (block, learn, log), and policy. Build correlation searches for attack categories (SQL, XSS, JSON injection) and for geo-IP block actions if logged. Tune thresholds by application: public APIs may legitimately spike; internal apps should not. Use lookups for known scanner IPs and pen-test windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler sourcetype=\"citrix:netscaler:syslog\" (WAF OR \"Application Firewall\" OR APPFW)\n| rex field=_raw \"(?i)(?<violation>SQL|XSS|JSON|XML|CSRF|OWASP)\"\n| eval is_learning=if(match(_raw, \"(?i)learning\"), 1, 0)\n| where NOT (is_learning=1 AND match(_raw, \"(?i)info\"))\n| bin _time span=15m\n| stats count as hits, values(violation) as violation_types, dc(url) as unique_urls, latest(host) as adc by policy_name, _time\n| where hits > 0\n| sort - hits\n| table _time, adc, policy_name, violation_types, unique_urls, hits\n```\n\nUnderstanding this SPL\n\n**Citrix ADC Web Application Firewall (WAF) Violations** — Citrix ADC Web Application Firewall inspects HTTP traffic for common attacks (SQL injection, cross-site scripting, JSON/XML threats) and policy violations. Spikes in violations, or critical signatures firing in enforcement mode, indicate active attacks or misconfigured applications. Distinguishing learning mode noise from enforcement blocks, and monitoring geographic blocks, keeps incident response focused and reduces false positives.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` WAF log profile output — `violation_type`, `severity`, `policy_name`, `profile_name`, `url`, `learning` vs `enforcement` mode, geo block actions. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **is_learning** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where NOT (is_learning=1 AND match(_raw, \"(?i)info\"))` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by policy_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where hits > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix ADC Web Application Firewall (WAF) Violations**): table _time, adc, policy_name, violation_types, unique_urls, hits\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked column chart of violations by type over time, treemap of policies, drilldown to sample URLs (sanitized).",
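Step 1 above suggests lookups for known scanner IPs and pen-test windows. A minimal suppression sketch, assuming a hypothetical lookup `waf_scanner_allowlist` (fields `src_ip`, `note`) and that your TA extracts a `src_ip` field; verify both against your deployment:

```spl
index=netscaler sourcetype="citrix:netscaler:syslog" (WAF OR "Application Firewall" OR APPFW)
| rex field=_raw "(?i)(?<violation>SQL|XSS|JSON|XML|CSRF|OWASP)"
| lookup waf_scanner_allowlist src_ip OUTPUT note as scanner_note
| where isnull(scanner_note)
| bin _time span=15m
| stats count as hits, values(violation) as violation_types by policy_name, _time
| where hits > 10
```

Rows matching the allowlist drop out before thresholding, so ASV and pen-test traffic stops paging the SOC while unlisted sources still alert.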
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We count web application firewall style blocks on the same path so a noisy rule, a test, and a real attack are easier to separate.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection",
                "Web"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.25",
              "n": "Citrix ADC Bot Management Detection",
              "c": "high",
              "f": "advanced",
              "v": "Citrix ADC Bot Management classifies clients (good automation, bad bots, unknown) and can enforce CAPTCHA, allow, or deny. Tracking category mix and enforcement rates surfaces credential stuffing, scraping, and misclassified good bots. A rising bad or unknown share, or high CAPTCHA rates, can indicate attack campaigns or policy tuning needs before user experience or origin load suffers.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:appflow\"` or `sourcetype=\"citrix:netscaler:syslog\"` — bot classification (`captcha`, `allow`, `deny`, `good_bot`, `bad_bot`, `unknown`), request counts, client IP",
              "q": "index=netscaler (sourcetype=\"citrix:netscaler:appflow\" OR sourcetype=\"citrix:netscaler:syslog\") (\"bot\" OR \"BOT\" OR captcha OR \"reputation\")\n| eval bot_class=if(match(_raw, \"(?i)good\"),\"good\", if(match(_raw, \"(?i)bad|malici\"),\"bad\", if(match(_raw, \"(?i)unknown|unclass\"),\"unknown\",\"other\")))\n| eval action=if(match(_raw, \"(?i)deny|block\"),\"deny\", if(match(_raw, \"(?i)captcha\"),\"captcha\", if(match(_raw, \"(?i)allow|pass\"),\"allow\",\"other\")))\n| bin _time span=15m\n| stats count as reqs, sum(eval(action=\"captcha\")) as captcha_hits, sum(eval(action=\"deny\")) as deny_hits, sum(eval(action=\"allow\")) as allow_hits by _time, host, bot_class\n| eval bot_to_human_ratio=if(allow_hits+deny_hits>0, round((deny_hits+captcha_hits)/(allow_hits+deny_hits+captcha_hits+0.001)*100,1), 0)\n| where bot_class IN (\"bad\",\"unknown\") OR bot_to_human_ratio > 25\n| table _time, host, bot_class, reqs, allow_hits, captcha_hits, deny_hits, bot_to_human_ratio",
              "m": "Enable bot signatures and logging to appflow and/or syslog. Ensure HTTP headers or log fields that carry bot decision and action are extracted. Index to `index=netscaler`. Build baselines for allow versus challenge versus block per major application. Alert on bad-bot surges, spikes in unknown classification, or CAPTCHA rate jumps that exceed normal human traffic patterns.",
              "z": "Area chart of requests by bot class, pie chart of enforcement actions, table of ratio of challenged or denied to total session starts.",
              "kfp": "Search crawlers, uptime monitors, and partner health checks produce bot signatures on public apps. Map by URL, VIP, and user-agent; robots.txt-crawler traffic to marketing edges should not feed the same fraud threshold as protected apps.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — Bot management](https://docs.citrix.com/en-us/citrix-adc/current-release/bot-management.html)",
              "mitre": [
                "T1110",
                "T1190"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:appflow\"` or `sourcetype=\"citrix:netscaler:syslog\"` — bot classification (`captcha`, `allow`, `deny`, `good_bot`, `bad_bot`, `unknown`), request counts, client IP.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable bot signatures and logging to appflow and/or syslog. Ensure HTTP headers or log fields that carry bot decision and action are extracted. Index to `index=netscaler`. Build baselines for allow versus challenge versus block per major application. Alert on bad-bot surges, spikes in unknown classification, or CAPTCHA rate jumps that exceed normal human traffic patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler (sourcetype=\"citrix:netscaler:appflow\" OR sourcetype=\"citrix:netscaler:syslog\") (\"bot\" OR \"BOT\" OR captcha OR \"reputation\")\n| eval bot_class=if(match(_raw, \"(?i)good\"),\"good\", if(match(_raw, \"(?i)bad|malici\"),\"bad\", if(match(_raw, \"(?i)unknown|unclass\"),\"unknown\",\"other\")))\n| eval action=if(match(_raw, \"(?i)deny|block\"),\"deny\", if(match(_raw, \"(?i)captcha\"),\"captcha\", if(match(_raw, \"(?i)allow|pass\"),\"allow\",\"other\")))\n| bin _time span=15m\n| stats count as reqs, sum(eval(action=\"captcha\")) as captcha_hits, sum(eval(action=\"deny\")) as deny_hits, sum(eval(action=\"allow\")) as allow_hits by _time, host, bot_class\n| eval bot_to_human_ratio=if(allow_hits+deny_hits>0, round((deny_hits+captcha_hits)/(allow_hits+deny_hits+captcha_hits+0.001)*100,1), 0)\n| where bot_class IN (\"bad\",\"unknown\") OR bot_to_human_ratio > 25\n| table _time, host, bot_class, reqs, allow_hits, captcha_hits, deny_hits, bot_to_human_ratio\n```\n\nUnderstanding this SPL\n\n**Citrix ADC Bot Management Detection** — Citrix ADC Bot Management classifies clients (good automation, bad bots, unknown) and can enforce CAPTCHA, allow, or deny. Tracking category mix and enforcement rates surfaces credential stuffing, scraping, and misclassified good bots. A rising bad or unknown share, or high CAPTCHA rates, can indicate attack campaigns or policy tuning needs before user experience or origin load suffers.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:appflow\"` or `sourcetype=\"citrix:netscaler:syslog\"` — bot classification (`captcha`, `allow`, `deny`, `good_bot`, `bad_bot`, `unknown`), request counts, client IP. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:appflow, citrix:netscaler:syslog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:appflow\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **bot_class** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, bot_class** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **bot_to_human_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where bot_class IN (\"bad\",\"unknown\") OR bot_to_human_ratio > 25` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ADC Bot Management Detection**): table _time, host, bot_class, reqs, allow_hits, captcha_hits, deny_hits, bot_to_human_ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart of requests by bot class, pie chart of enforcement actions, table of ratio of challenged or denied to total session starts.",
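To baseline CAPTCHA pressure rather than hard-coding the 25% ratio, one sketch uses a 24-hour simple moving average; the window, multiplier, and keyword scoping are illustrative only:

```spl
index=netscaler (sourcetype="citrix:netscaler:appflow" OR sourcetype="citrix:netscaler:syslog") (captcha OR "bot")
| eval is_captcha=if(match(_raw, "(?i)captcha"), 1, 0)
| timechart span=1h avg(is_captcha) as captcha_rate
| trendline sma24(captcha_rate) as baseline
| where captcha_rate > 2 * baseline
```

A CAPTCHA rate more than double its trailing daily average flags a challenge surge worth triaging against the bad/unknown class mix in the main search.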
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We look at bot and automation detections in one place so good crawlers you forgot to list do not drown out new risky behavior.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection",
                "Web"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.26",
              "n": "Citrix ADC nFactor Authentication Pipeline Failures",
              "c": "critical",
              "f": "advanced",
              "v": "Citrix ADC nFactor ties multiple authentication steps (VPN, web apps, user directories, SAML) into one pipeline. Login schema parse errors, per-factor timeouts, SAML attribute or assertion mismatches, or IdP reachability issues strand users in partial login states. Tracking these failures protects both availability (users cannot sign in) and security (forced fallbacks, repeated attempts, or mis-issued factors).",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` — nFactor, AAA, login schema, `saml` assertion errors, `timeout`, IdP `unreachable`, `factor` sequence failures",
              "q": "index=netscaler sourcetype=\"citrix:netscaler:syslog\" (nFactor OR \"login schema\" OR AAA OR SAML OR \"factor\" OR IdP OR \"assertion\" OR \"epa\")\n( \"failed\" OR \"error\" OR \"timeout\" OR \"unreachable\" OR \"invalid\" OR \"reject\" )\n| rex field=_raw \"(?i)(?<fail_reason>schema|timeout|SAML|assertion|IdP|LDAP|radius)\"\n| bin _time span=5m\n| stats count as failures, values(fail_reason) as reasons, dc(client_ip) as src_ips, latest(host) as adc by auth_policy, _time\n| where failures >= 3\n| sort - failures\n| table _time, adc, auth_policy, reasons, src_ips, failures",
              "m": "Send AAA, audit, and authentication-related syslog to `index=netscaler` as `citrix:netscaler:syslog`. Classify by factor type (SAML, LDAP, RADIUS, EPA). Alert on growing failure rates for a given policy, IdP down messages, and schema errors after config pushes. Cross-reference with change records for nFactor flow edits.",
              "z": "Time chart of auth failures by reason, top policies table, map or table of source IPs (if allowed by privacy policy).",
              "kfp": "Password expirations, MFA re-enrollment, and IdP key rollouts can flood nFactor with benign auth failure bursts. Add directory and IdP change windows, and a per-identity cap, before an automated credential-stuffing assumption.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — nFactor authentication](https://docs.citrix.com/en-us/citrix-adc/current-release/aaatm-authentication.html)",
              "mitre": [
                "T1078",
                "T1110",
                "T1556"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` — nFactor, AAA, login schema, `saml` assertion errors, `timeout`, IdP `unreachable`, `factor` sequence failures.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSend AAA, audit, and authentication-related syslog to `index=netscaler` as `citrix:netscaler:syslog`. Classify by factor type (SAML, LDAP, RADIUS, EPA). Alert on growing failure rates for a given policy, IdP down messages, and schema errors after config pushes. Cross-reference with change records for nFactor flow edits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler sourcetype=\"citrix:netscaler:syslog\" (nFactor OR \"login schema\" OR AAA OR SAML OR \"factor\" OR IdP OR \"assertion\" OR \"epa\")\n( \"failed\" OR \"error\" OR \"timeout\" OR \"unreachable\" OR \"invalid\" OR \"reject\" )\n| rex field=_raw \"(?i)(?<fail_reason>schema|timeout|SAML|assertion|IdP|LDAP|radius)\"\n| bin _time span=5m\n| stats count as failures, values(fail_reason) as reasons, dc(client_ip) as src_ips, latest(host) as adc by auth_policy, _time\n| where failures >= 3\n| sort - failures\n| table _time, adc, auth_policy, reasons, src_ips, failures\n```\n\nUnderstanding this SPL\n\n**Citrix ADC nFactor Authentication Pipeline Failures** — Citrix ADC nFactor ties multiple authentication steps (VPN, web apps, user directories, SAML) into one pipeline. Login schema parse errors, per-factor timeouts, SAML attribute or assertion mismatches, or IdP reachability issues strand users in partial login states. Tracking these failures protects both availability (users cannot sign in) and security (forced fallbacks, repeated attempts, or mis-issued factors).\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` — nFactor, AAA, login schema, `saml` assertion errors, `timeout`, IdP `unreachable`, `factor` sequence failures. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by auth_policy, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failures >= 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix ADC nFactor Authentication Pipeline Failures**): table _time, adc, auth_policy, reasons, src_ips, failures\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart of auth failures by reason, top policies table, map or table of source IPs (if allowed by privacy policy).",
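A per-policy spike detector is one way to express the "growing failure rates for a given policy" alert from Step 1; a sketch that compares each 5-minute failure count to the trailing hour for the same policy (the 3x and minimum-10 thresholds are placeholders to tune):

```spl
index=netscaler sourcetype="citrix:netscaler:syslog" (nFactor OR AAA OR SAML) ("failed" OR "error" OR "timeout")
| bin _time span=5m
| stats count as failures by auth_policy, _time
| streamstats current=f window=12 avg(failures) as avg_prior_hour by auth_policy
| where failures > 3 * avg_prior_hour AND failures >= 10
```

This keeps steady background noise (password expiry, typos) quiet while a schema break after a config push stands out immediately.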
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We look at login and factor style failures in one chain so a broken factor, a bad schema, and a user typo are not all the same in the dark.",
              "mtype": [
                "Availability",
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "both",
              "_qs": 45,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.27",
              "n": "Citrix ADC Surge Queue and Spillover Events",
              "c": "high",
              "f": "intermediate",
              "v": "When demand exceeds a virtual server capacity settings, connections queue in the surge buffer or spill to backup vservers, affecting latency and success rates. Sustained surge depth or repeated spillover indicates undersized `maxclient` values, pool exhaustion, or slow backends. Early detection keeps user-visible failures and cascade overload off backup paths.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` or NITRO-derived `sourcetype=\"citrix:netscaler:perf\"` — `surge_queue` depth, `spillover`, `maxclient`, vserver, backup vserver",
              "q": "index=netscaler (sourcetype=\"citrix:netscaler:syslog\" OR sourcetype=\"citrix:netscaler:perf\") (\"surge\" OR \"spillover\" OR \"max client\" OR maxclient OR \"SURGEQ\")\n| rex field=_raw max_match=0 \"(?i)depth[\\\\s:=]+(?<surge_depth>\\\\d+)\"\n| eval has_spillover=if(match(_raw, \"(?i)spillover\"),1,0)\n| eval vserver=coalesce(vserver_name, if(match(_raw, \"(?i)vserver[\\\\s:]+(?<vs>\\\\S+)\"), vs, null()))\n| bin _time span=5m\n| stats count as events, max(surge_depth) as max_depth, max(has_spillover) as spillover_flag, values(vserver) as vserver_list, latest(host) as adc by _time, host\n| where spillover_flag=1 OR max_depth>0 OR events>0\n| eval vserver=mvindex(vserver_list,0)\n| sort - _time\n| table _time, adc, vserver, max_depth, spillover_flag, events",
              "m": "Source spillover and surge events from `citrix:netscaler:syslog` (state change messages) and, if available, counter polls into `citrix:netscaler:perf` for queue depth. Parse vserver and backup vserver names. Set alert conditions on non-zero queue depth for more than a few intervals, and any spillover to backup, unless during known tests.",
              "z": "Time chart of max surge depth by vserver, event list for spillover with backup target, link to app response-time panels.",
              "kfp": "Marketing flash events, all-hands streams, and go-live go intentionally into surge and spillover. Only escalate when a capacity group is saturated past its ceiling, spillover outlasts the business runbook event, or scale actions fail, not for every high-traffic hour.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — Load balancing and surge protection](https://docs.citrix.com/en-us/citrix-adc/current-release/load-balancing.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` or NITRO-derived `sourcetype=\"citrix:netscaler:perf\"` — `surge_queue` depth, `spillover`, `maxclient`, vserver, backup vserver.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSource spillover and surge events from `citrix:netscaler:syslog` (state change messages) and, if available, counter polls into `citrix:netscaler:perf` for queue depth. Parse vserver and backup vserver names. Set alert conditions on non-zero queue depth for more than a few intervals, and any spillover to backup, unless during known tests.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler (sourcetype=\"citrix:netscaler:syslog\" OR sourcetype=\"citrix:netscaler:perf\") (\"surge\" OR \"spillover\" OR \"max client\" OR maxclient OR \"SURGEQ\")\n| rex field=_raw max_match=0 \"(?i)depth[\\\\s:=]+(?<surge_depth>\\\\d+)\"\n| eval has_spillover=if(match(_raw, \"(?i)spillover\"),1,0)\n| eval vserver=coalesce(vserver_name, if(match(_raw, \"(?i)vserver[\\\\s:]+(?<vs>\\\\S+)\"), vs, null()))\n| bin _time span=5m\n| stats count as events, max(surge_depth) as max_depth, max(has_spillover) as spillover_flag, values(vserver) as vserver_list, latest(host) as adc by _time, host\n| where spillover_flag=1 OR max_depth>0 OR events>0\n| eval vserver=mvindex(vserver_list,0)\n| sort - _time\n| table _time, adc, vserver, max_depth, spillover_flag, events\n```\n\nUnderstanding this SPL\n\n**Citrix ADC Surge Queue and Spillover Events** — When demand exceeds a virtual server capacity settings, connections queue in the surge buffer or spill to backup vservers, affecting latency and success rates. Sustained surge depth or repeated spillover indicates undersized `maxclient` values, pool exhaustion, or slow backends. Early detection keeps user-visible failures and cascade overload off backup paths.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` or NITRO-derived `sourcetype=\"citrix:netscaler:perf\"` — `surge_queue` depth, `spillover`, `maxclient`, vserver, backup vserver. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:syslog, citrix:netscaler:perf. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **has_spillover** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **vserver** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where spillover_flag=1 OR max_depth>0 OR events>0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **vserver** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix ADC Surge Queue and Spillover Events**): table _time, adc, vserver, max_depth, spillover_flag, events\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart of max surge depth by vserver, event list for spillover with backup target, link to app response-time panels.",
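Step 1 above calls for alerting on non-zero queue depth across several consecutive intervals; one way to express that, as a sketch (the 3-interval window is an assumption to tune):

```spl
index=netscaler (sourcetype="citrix:netscaler:syslog" OR sourcetype="citrix:netscaler:perf") ("surge" OR "SURGEQ")
| rex field=_raw "(?i)depth[\s:=]+(?<surge_depth>\d+)"
| bin _time span=5m
| stats max(surge_depth) as depth by _time, host
| streamstats window=3 min(depth) as min_depth by host
| where min_depth > 0
```

Because the alert fires only when the minimum depth over three consecutive 5-minute bins stays above zero, a single transient queue blip during a flash event does not page anyone.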
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We catch surge and spillover lines in the same logs so a full farm shows up in events before a whole front page goes static.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "Alerts"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.28",
              "n": "Citrix ADC TCP Connection Multiplexing Analysis",
              "c": "medium",
              "f": "advanced",
              "v": "Citrix ADC multiplexes many client connections onto fewer server-side connections through reuse and keep-alive, improving efficiency on backends. A falling reuse rate with rising front-end TPS, paired with high tail latency, can signal pool saturation, keep-alive misconfiguration, or backend slowness. The goal is to connect traffic shape to latency before servers exhaust ephemeral ports or file descriptors.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:appflow\"` HTTP records with TPS/response time fields, or `sourcetype=\"citrix:netscaler:perf\"` for connection reuse, keep-alive, and throughput",
              "q": "index=netscaler (sourcetype=\"citrix:netscaler:perf\" OR sourcetype=\"citrix:netscaler:appflow\")\n| eval cps=coalesce(connections_per_sec, http_reqs_per_sec, 0), reuse_pct=coalesce(tcp_reuse_percent, connection_reuse_pct, 0), p95_latency_ms=coalesce(p95_resp_time_ms, app_resp_time_95, 0)\n| bin _time span=5m\n| stats avg(cps) as tps, avg(reuse_pct) as avg_reuse, avg(p95_latency_ms) as p95_ms by _time, host, lbvserver\n| where p95_ms > 500 AND avg_reuse < 30 AND tps > 0\n| table _time, host, lbvserver, tps, avg_reuse, p95_ms",
              "m": "Populate `citrix:netscaler:perf` from NITRO with TCP and HTTP vserver service metrics where available, and align AppFlow-derived TPS and response-time percentiles. Normalize field names in props if your TA uses custom aliases. Create baselines for reuse and p95 by application; alert when reuse drops and p95 increases together during steady load.",
              "z": "Dual-axis line chart: TPS and reuse percent; scatter of reuse versus p95 latency; table of offending vservers.",
              "kfp": "Backups, large file transfers, and diagnostic pcap or throughput tests can change multiplexing patterns compared to a typical web session. A four-week same-hour and per-vserver baseline separates traffic shape tests from a real service degradation pattern.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — Load balancing and connection management](https://docs.citrix.com/en-us/citrix-adc/current-release/load-balancing.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:appflow\"` HTTP records with TPS/response time fields, or `sourcetype=\"citrix:netscaler:perf\"` for connection reuse, keep-alive, and throughput.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPopulate `citrix:netscaler:perf` from NITRO with TCP and HTTP vserver service metrics where available, and align AppFlow-derived TPS and response-time percentiles. Normalize field names in props if your TA uses custom aliases. Create baselines for reuse and p95 by application; alert when reuse drops and p95 increases together during steady load.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler (sourcetype=\"citrix:netscaler:perf\" OR sourcetype=\"citrix:netscaler:appflow\")\n| eval cps=coalesce(connections_per_sec, http_reqs_per_sec, 0), reuse_pct=coalesce(tcp_reuse_percent, connection_reuse_pct, 0), p95_latency_ms=coalesce(p95_resp_time_ms, app_resp_time_95, 0)\n| bin _time span=5m\n| stats avg(cps) as tps, avg(reuse_pct) as avg_reuse, avg(p95_latency_ms) as p95_ms by _time, host, lbvserver\n| where p95_ms > 500 AND avg_reuse < 30 AND tps > 0\n| table _time, host, lbvserver, tps, avg_reuse, p95_ms\n```\n\nUnderstanding this SPL\n\n**Citrix ADC TCP Connection Multiplexing Analysis** — Citrix ADC multiplexes many client connections onto fewer server-side connections through reuse and keep-alive, improving efficiency on backends. A falling reuse rate with rising front-end TPS, paired with high tail latency, can signal pool saturation, keep-alive misconfiguration, or backend slowness. The goal is to connect traffic shape to latency before servers exhaust ephemeral ports or file descriptors.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:appflow\"` HTTP records with TPS/response time fields, or `sourcetype=\"citrix:netscaler:perf\"` for connection reuse, keep-alive, and throughput. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:perf, citrix:netscaler:appflow. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:perf\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, lbvserver** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_ms > 500 AND avg_reuse < 30 AND tps > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ADC TCP Connection Multiplexing Analysis**): table _time, host, lbvserver, tps, avg_reuse, p95_ms\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Dual-axis line chart: TPS and reuse percent; scatter of reuse versus p95 latency; table of offending vservers.",
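For the same-hour baseline the false-positive note recommends, `timewrap` lines up this week against prior weeks. A sketch (per-vserver splitting omitted for brevity; field names follow the aliases assumed in the main search):

```spl
index=netscaler sourcetype="citrix:netscaler:perf"
| eval reuse_pct=coalesce(tcp_reuse_percent, connection_reuse_pct)
| timechart span=1h avg(reuse_pct) as reuse
| timewrap 1week
```

Overlaying four weeks of the same hours makes a genuine reuse collapse obvious against the ordinary weekly rhythm.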
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We study connection reuse on the same box so a jump in multiplexing is something you can compare to a known database or long-lived flow.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.29",
              "n": "Citrix ADC Frontend vs Backend RTT Analysis",
              "c": "high",
              "f": "advanced",
              "v": "Separating client-side from server-side round-trip time on the Application Delivery Controller pinpoints where delay accumulates: last mile, middle mile, or data center. AppFlow or equivalent records expose both legs so you can tell user Wi-Fi issues from database latency without guessing. Sustained backend-side RTT growth drives pool tuning and app fixes; client-heavy RTT points to peering, DNS, or edge problems.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:appflow\"` with client-side and server-side RTT (or first-byte timing) — fields such as `client_rtt_ms`, `server_rtt_ms`, `app_resp_time`, vserver, URL host",
              "q": "index=netscaler sourcetype=\"citrix:netscaler:appflow\"\n| eval client_rtt=coalesce('client_rtt_ms', 'avg_client_rtt', client_rtt, 0), server_rtt=coalesce('server_rtt_ms', 'avg_server_rtt', server_rtt, 0)\n| where client_rtt>0 OR server_rtt>0\n| eval rtt_diff=abs(client_rtt - server_rtt), segment=if(client_rtt>1.2*server_rtt,\n  \"client_network\", if(server_rtt>1.2*client_rtt, \"server_network_or_backend\", \"balanced\"))\n| bin _time span=5m\n| stats median(client_rtt) as p50_c, median(server_rtt) as p50_s, p95(client_rtt) as p95_c, p95(server_rtt) as p95_s, count as flows by _time, host, vserver, segment\n| where p95_c>200 OR p95_s>200 OR p95_s>1.5*p95_c\n| table _time, host, vserver, segment, p50_c, p50_s, p95_c, p95_s, flows",
              "m": "Enable AppFlow with timing fields; ensure the Splunk TA extracts numeric RTT. Index to `index=netscaler`. If field names differ, create aliases in props. Use segment classification to tag tickets (network vs app). Add geo or ASN for client leg only if policy allows. Trend weekly for capacity reports.",
              "z": "Stacked time chart: median client vs server RTT, heatmap of vservers by segment, box plot for tail latency by region.",
              "kfp": "Carrier maintenance, path diversity tests, and SD-WAN 'what-if' exercises widen frontend-versus-backend RTT for many users at once. Carrier notification feeds and per-site overlays reduce global RTT blips to false positives when the core path is under test only.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — AppFlow and analytics](https://docs.citrix.com/en-us/citrix-adc/current-release/application-analytics.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:appflow\"` with client-side and server-side RTT (or first-byte timing) — fields such as `client_rtt_ms`, `server_rtt_ms`, `app_resp_time`, vserver, URL host.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable AppFlow with timing fields; ensure the Splunk TA extracts numeric RTT. Index to `index=netscaler`. If field names differ, create aliases in props. Use segment classification to tag tickets (network vs app). Add geo or ASN for client leg only if policy allows. Trend weekly for capacity reports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler sourcetype=\"citrix:netscaler:appflow\"\n| eval client_rtt=coalesce('client_rtt_ms', 'avg_client_rtt', client_rtt, 0), server_rtt=coalesce('server_rtt_ms', 'avg_server_rtt', server_rtt, 0)\n| where client_rtt>0 OR server_rtt>0\n| eval rtt_diff=abs(client_rtt - server_rtt), segment=if(client_rtt>1.2*server_rtt,\n  \"client_network\", if(server_rtt>1.2*client_rtt, \"server_network_or_backend\", \"balanced\"))\n| bin _time span=5m\n| stats median(client_rtt) as p50_c, median(server_rtt) as p50_s, p95(client_rtt) as p95_c, p95(server_rtt) as p95_s, count as flows by _time, host, vserver, segment\n| where p95_c>200 OR p95_s>200 OR p95_s>1.5*p95_c\n| table _time, host, vserver, segment, p50_c, p50_s, p95_c, p95_s, flows\n```\n\nUnderstanding this SPL\n\n**Citrix ADC Frontend vs Backend RTT Analysis** — Separating client-side from server-side round-trip time on the Application Delivery Controller pinpoints where delay accumulates: last mile, middle mile, or data center. AppFlow or equivalent records expose both legs so you can tell user Wi-Fi issues from database latency without guessing. Sustained backend-side RTT growth drives pool tuning and app fixes; client-heavy RTT points to peering, DNS, or edge problems.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:appflow\"` with client-side and server-side RTT (or first-byte timing) — fields such as `client_rtt_ms`, `server_rtt_ms`, `app_resp_time`, vserver, URL host. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:appflow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:appflow\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **client_rtt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where client_rtt>0 OR server_rtt>0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **rtt_diff** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, vserver, segment** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_c>200 OR p95_s>200 OR p95_s>1.5*p95_c` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ADC Frontend vs Backend RTT Analysis**): table _time, host, vserver, segment, p50_c, p50_s, p95_c, p95_s, flows\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked time chart: median client vs server RTT, heatmap of vservers by segment, box plot for tail latency by region.",
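For the weekly capacity trending mentioned in Step 1, a daily percentile rollup is usually enough; a sketch, reusing the coalesced field names from the main search:

```spl
index=netscaler sourcetype="citrix:netscaler:appflow"
| eval client_rtt=coalesce('client_rtt_ms', 'avg_client_rtt'), server_rtt=coalesce('server_rtt_ms', 'avg_server_rtt')
| bin _time span=1d
| stats perc95(client_rtt) as p95_client_ms, perc95(server_rtt) as p95_server_ms, count as flows by _time, vserver
```

Saved as a weekly report, a slow upward creep in `p95_server_ms` with flat `p95_client_ms` is the backend-growth signal this use case describes.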
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We compare front and back delay from flow data so a slow data center, a far user, and a hot middle are not one vague \"slowness\" in chat.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.30",
              "n": "Citrix ADC Integrated Cache Hit Ratio",
              "c": "medium",
              "f": "intermediate",
              "v": "The Citrix ADC integrated content cache can offload origin servers for static and cacheable content. A falling hit ratio increases origin load and latency; high cache memory pressure can evict hot objects. Monitoring hit ratio, miss volume, and cache memory guides TTL tuning, object sizing, and capacity for content-heavy services.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:perf\"` (NITRO): cache hits, misses, bytes served from cache, `cache` memory; optional `sourcetype=\"citrix:netscaler:syslog\"` for cache service events",
              "q": "index=netscaler sourcetype=\"citrix:netscaler:perf\" (\"ic_cache\" OR \"ico_\" OR cache_hits OR cache_misses)\n| eval hits=coalesce(cache_hits, ico_hits, 0), misses=coalesce(cache_misses, ico_misses, 0)\n| eval mem_use_pct=coalesce(cache_mem_use_pct, cache_mem_util, 0)\n| bin _time span=5m\n| stats sum(hits) as sum_hits, sum(misses) as sum_miss, avg(mem_use_pct) as cache_mem, latest(host) as adc by _time, host\n| eval hit_ratio=if((sum_hits+sum_misses)>0, round(sum_hits/(sum_hits+sum_misses)*100,2), 0)\n| where hit_ratio < 50 OR cache_mem > 90\n| table _time, adc, sum_hits, sum_miss, hit_ratio, cache_mem",
              "m": "Poll NITRO for integrated cache object statistics or use the TA’s scripted metrics into `citrix:netscaler:perf`. Align on per-content-group rollups. Alert on sustained hit ratio drop week over week, or memory utilization over 90% for multiple intervals. Log cache flush events in syslog and correlate to deployments.",
              "z": "Line chart: hit ratio; area chart: hits vs misses; gauge: cache memory usage.",
              "kfp": "Cold start after a big content push, a new app, or a deliberate cache clear tanks integrated cache hit ratio for hours. A post-publish grace period and a rolling median, not a hard static percentage, avoid nightly false positives after deploys.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — Integrated caching](https://docs.citrix.com/en-us/citrix-adc/current-release/optimization/integrated-caching.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:perf\"` (NITRO): cache hits, misses, bytes served from cache, `cache` memory; optional `sourcetype=\"citrix:netscaler:syslog\"` for cache service events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll NITRO for integrated cache object statistics or use the TA’s scripted metrics into `citrix:netscaler:perf`. Align on per-content-group rollups. Alert on sustained hit ratio drop week over week, or memory utilization over 90% for multiple intervals. Log cache flush events in syslog and correlate to deployments.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler sourcetype=\"citrix:netscaler:perf\" (\"ic_cache\" OR \"ico_\" OR cache_hits OR cache_misses)\n| eval hits=coalesce(cache_hits, ico_hits, 0), misses=coalesce(cache_misses, ico_misses, 0)\n| eval mem_use_pct=coalesce(cache_mem_use_pct, cache_mem_util, 0)\n| bin _time span=5m\n| stats sum(hits) as sum_hits, sum(misses) as sum_miss, avg(mem_use_pct) as cache_mem, latest(host) as adc by _time, host\n| eval hit_ratio=if((sum_hits+sum_misses)>0, round(sum_hits/(sum_hits+sum_misses)*100,2), 0)\n| where hit_ratio < 50 OR cache_mem > 90\n| table _time, adc, sum_hits, sum_miss, hit_ratio, cache_mem\n```\n\nUnderstanding this SPL\n\n**Citrix ADC Integrated Cache Hit Ratio** — The Citrix ADC integrated content cache can offload origin servers for static and cacheable content. A falling hit ratio increases origin load and latency; high cache memory pressure can evict hot objects. Monitoring hit ratio, miss volume, and cache memory guides TTL tuning, object sizing, and capacity for content-heavy services.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:perf\"` (NITRO): cache hits, misses, bytes served from cache, `cache` memory; optional `sourcetype=\"citrix:netscaler:syslog\"` for cache service events. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:perf. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:perf\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hits** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mem_use_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hit_ratio < 50 OR cache_mem > 90` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ADC Integrated Cache Hit Ratio**): table _time, adc, sum_hits, sum_miss, hit_ratio, cache_mem\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart: hit ratio; area chart: hits vs misses; gauge: cache memory usage.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
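The false-positive note argues for a rolling median rather than a fixed percentage. One sketch alerts only when the hourly hit ratio falls well below its own trailing 24-hour median (the 0.8 factor is an assumption to tune):

```spl
index=netscaler sourcetype="citrix:netscaler:perf" (cache_hits OR cache_misses)
| eval hits=coalesce(cache_hits, ico_hits, 0), misses=coalesce(cache_misses, ico_misses, 0)
| bin _time span=1h
| stats sum(hits) as h, sum(misses) as m by _time, host
| eval hit_ratio=if((h+m)>0, round(h/(h+m)*100,2), null())
| streamstats current=f window=24 median(hit_ratio) as med_24h by host
| where hit_ratio < 0.8 * med_24h
```

After a deliberate cache flush the median re-baselines within a day, so post-deploy cold starts stop alerting on their own.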
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We look at content cache hit ratio on the same platform so a cold start after a release does not get mistaken for a broken cache for weeks.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.31",
              "n": "Citrix ADC Compression Savings and CPU Impact",
              "c": "medium",
              "f": "intermediate",
              "v": "HTTP compression shrinks bytes on the wire but costs CPU on the Application Delivery Controller. Monitoring compression ratio, bandwidth saved, and CPU headroom together prevents turning compression on blindly when hardware is already near limits. A low savings percentage with high CPU can justify selective policies (only text types) or moving compression to origins.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:perf\"` — compression `bytes_in`, `bytes_out`, `cpu_use_pct` (for packet engine and overall), vserver; optional `citrix:netscaler:syslog` for comp errors",
              "q": "index=netscaler sourcetype=\"citrix:netscaler:perf\" (\"compress\" OR comp_ OR gzip OR deflate)\n| eval bytes_in=coalesce(compress_bytes_in, comp_bytes_in, 0), bytes_out=coalesce(compress_bytes_out, comp_bytes_out, 0), cpu=coalesce(cpu_use_pct, packet_cpu_use_pct, 0)\n| eval comp_ratio=if(bytes_out>0, round((bytes_in-bytes_out)/bytes_in*100,1), 0), savings_mb=if(bytes_in>0, (bytes_in-bytes_out)/1024/1024, 0)\n| bin _time span=5m\n| stats avg(comp_ratio) as avg_comp_pct, sum(savings_mb) as total_saved_mb, avg(cpu) as avg_cpu, max(cpu) as peak_cpu by _time, host, lbvserver\n| where avg_comp_pct>0\n| where peak_cpu>85 AND avg_comp_pct<5\n| table _time, host, lbvserver, avg_comp_pct, total_saved_mb, avg_cpu, peak_cpu",
              "m": "Ingest NITRO compression and CPU counters into `citrix:netscaler:perf`. Join per vserver or content group. Add alerts for CPU above policy threshold while compression impact is low (candidates to disable) and for high savings with headroom (good candidates to expand). Document SSL versus compress ordering if applicable.",
              "z": "Line chart: compression percent saved, overlay CPU; table of top vservers by saved megabytes; stacked bar: CPU time estimate by feature if available.",
              "kfp": "Turning on compression for new vservers or reclassifying mostly encrypted traffic lowers compression savings and can raise packet-engine CPU. A recent vserver or policy object change in NetScaler and a before/after diff explain a benign re-baseline quickly.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — Compression](https://docs.citrix.com/en-us/citrix-adc/current-release/optimization/compression.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:perf\"` — compression `bytes_in`, `bytes_out`, `cpu_use_pct` (for packet engine and overall), vserver; optional `citrix:netscaler:syslog` for comp errors.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest NITRO compression and CPU counters into `citrix:netscaler:perf`. Join per vserver or content group. Add alerts for CPU above policy threshold while compression impact is low (candidates to disable) and for high savings with headroom (good candidates to expand). Document SSL versus compress ordering if applicable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler sourcetype=\"citrix:netscaler:perf\" (\"compress\" OR comp_ OR gzip OR deflate)\n| eval bytes_in=coalesce(compress_bytes_in, comp_bytes_in, 0), bytes_out=coalesce(compress_bytes_out, comp_bytes_out, 0), cpu=coalesce(cpu_use_pct, packet_cpu_use_pct, 0)\n| eval comp_ratio=if(bytes_out>0, round((bytes_in-bytes_out)/bytes_in*100,1), 0), savings_mb=if(bytes_in>0, (bytes_in-bytes_out)/1024/1024, 0)\n| bin _time span=5m\n| stats avg(comp_ratio) as avg_comp_pct, sum(savings_mb) as total_saved_mb, avg(cpu) as avg_cpu, max(cpu) as peak_cpu by _time, host, lbvserver\n| where avg_comp_pct>0\n| where peak_cpu>85 AND avg_comp_pct<5\n| table _time, host, lbvserver, avg_comp_pct, total_saved_mb, avg_cpu, peak_cpu\n```\n\nUnderstanding this SPL\n\n**Citrix ADC Compression Savings and CPU Impact** — HTTP compression shrinks bytes on the wire but costs CPU on the Application Delivery Controller. Monitoring compression ratio, bandwidth saved, and CPU headroom together prevents turning compression on blindly when hardware is already near limits. A low savings percentage with high CPU can justify selective policies (only text types) or moving compression to origins.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:perf\"` — compression `bytes_in`, `bytes_out`, `cpu_use_pct` (for packet engine and overall), vserver; optional `citrix:netscaler:syslog` for comp errors. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:perf. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:perf\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **bytes_in** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **comp_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, lbvserver** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_comp_pct>0` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where peak_cpu>85 AND avg_comp_pct<5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ADC Compression Savings and CPU Impact**): table _time, host, lbvserver, avg_comp_pct, total_saved_mb, avg_cpu, peak_cpu\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart: compression percent saved, overlay CPU; table of top vservers by saved megabytes; stacked bar: CPU time estimate by feature if available.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We look at how much data compression is saving and what it does to the cpu so a policy change is not a surprise in cost and heat.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.32",
              "n": "Citrix ADC DNS/ADNS Service Health",
              "c": "high",
              "f": "intermediate",
              "v": "Authoritative and DNS proxy workloads on the Application Delivery Controller (ADNS) must resolve fast and with valid responses. Spikes in failure codes, DNSSEC validation errors, or response-time percentiles show overload, bad zones, or upstream issues before applications fail name resolution. Query-rate anomalies also reveal floods or cache misses.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` — ADNS, DNS, `Rcode`, response time, query rate; `sourcetype=\"citrix:netscaler:snmp\"` for DNS process load if polled",
              "q": "index=netscaler (sourcetype=\"citrix:netscaler:syslog\" OR sourcetype=\"citrix:netscaler:snmp\") (ADNS OR \"dns\" OR nameserver OR NXDOMAIN OR SERVFAIL OR DNSSEC OR Rcode)\n| eval is_fail=if(match(_raw, \"(?i)(SERVFAIL|Refused|timeout)\"),1,0)\n| rex field=_raw max_match=0 \"response[\\\\s:]+(?<rtt_parsed>\\\\d+)\\\\s*ms\"\n| eval rtt=coalesce(dns_rtt_ms, rtt_parsed)\n| bin _time span=5m\n| stats count as events, sum(is_fail) as fail_ct, p95(rtt) as p95_rtt, dc(dns_name) as zones, latest(host) as adc by _time, host\n| where fail_ct>0 OR p95_rtt>150 OR events/300 > 10000\n| table _time, adc, events, fail_ct, p95_rtt, zones",
              "m": "Forward high-severity and DNS service syslog to `index=netscaler`. Parse Rcode, query type, and latency if available. For SNMP, poll process CPU and request counters. Alert on any sustained SERVFAIL, DNSSEC `Bogus` or `Insecure` transition when policy says secure, and on p95 latency above SLO. Rate-limit log volume with filters if needed.",
              "z": "Time chart: queries per second and failures, single value: p95 response time, table: top failure strings.",
              "kfp": "Zone transfers, GSLB health-test flaps during data center failovers, and auth DNS provider cutovers are expected noisy periods for ADNS. DNS change records and a minimum-down duration stop single-probe wobble from looking like a hard outage.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — DNS](https://docs.citrix.com/en-us/citrix-adc/current-release/dns/dns-citrix.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` — ADNS, DNS, `Rcode`, response time, query rate; `sourcetype=\"citrix:netscaler:snmp\"` for DNS process load if polled.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward high-severity and DNS service syslog to `index=netscaler`. Parse Rcode, query type, and latency if available. For SNMP, poll process CPU and request counters. Alert on any sustained SERVFAIL, DNSSEC `Bogus` or `Insecure` transition when policy says secure, and on p95 latency above SLO. Rate-limit log volume with filters if needed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler (sourcetype=\"citrix:netscaler:syslog\" OR sourcetype=\"citrix:netscaler:snmp\") (ADNS OR \"dns\" OR nameserver OR NXDOMAIN OR SERVFAIL OR DNSSEC OR Rcode)\n| eval is_fail=if(match(_raw, \"(?i)(SERVFAIL|Refused|timeout)\"),1,0)\n| rex field=_raw max_match=0 \"response[\\\\s:]+(?<rtt_parsed>\\\\d+)\\\\s*ms\"\n| eval rtt=coalesce(dns_rtt_ms, rtt_parsed)\n| bin _time span=5m\n| stats count as events, sum(is_fail) as fail_ct, p95(rtt) as p95_rtt, dc(dns_name) as zones, latest(host) as adc by _time, host\n| where fail_ct>0 OR p95_rtt>150 OR events/300 > 10000\n| table _time, adc, events, fail_ct, p95_rtt, zones\n```\n\nUnderstanding this SPL\n\n**Citrix ADC DNS/ADNS Service Health** — Authoritative and DNS proxy workloads on the Application Delivery Controller (ADNS) must resolve fast and with valid responses. Spikes in failure codes, DNSSEC validation errors, or response-time percentiles show overload, bad zones, or upstream issues before applications fail name resolution. Query-rate anomalies also reveal floods or cache misses.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` — ADNS, DNS, `Rcode`, response time, query rate; `sourcetype=\"citrix:netscaler:snmp\"` for DNS process load if polled. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:syslog, citrix:netscaler:snmp. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_fail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **rtt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fail_ct>0 OR p95_rtt>150 OR events/300 > 10000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ADC DNS/ADNS Service Health**): table _time, adc, events, fail_ct, p95_rtt, zones\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart: queries per second and failures, single value: p95 response time, table: top failure strings.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We follow DNS and address service health in the same view so a dead listener or a loop in answers is a clear lead when apps fail in odd ways.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Network_Resolution_DNS"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.33",
              "n": "Citrix SDX Platform Health (Partition Resources)",
              "c": "high",
              "f": "advanced",
              "v": "Citrix NetScaler SDX hosts multiple isolated VPX instances. Platform health is about partition CPU and memory, hypervisor and VPX liveness, and out-of-band LOM, power, and cooling. A stressed partition throttles applications before obvious syslog noise; LOM, PSU, or fan alarms demand hardware attention before a blade fails.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:snmp\"` — per-partition or host CPU, memory, LOM, PSU, fan, platform sensors; `sourcetype=\"citrix:netscaler:syslog\"` for hypervisor, VPX state, and hardware alarms",
              "q": "index=netscaler (sourcetype=\"citrix:netscaler:snmp\" OR sourcetype=\"citrix:netscaler:syslog\") (\"SDX\" OR \"Xen\" OR \"partition\" OR \"VPX\" OR LOM OR PSU OR FAN OR \"throttle\")\n| eval part_cpu=coalesce(adc_partition_cpu_use_pct, sdx_cpu_use_pct, 0), part_mem=coalesce(adc_partition_mem_use_pct, sdx_mem_use_pct, 0)\n| eval alarm=if(match(_raw, \"(?i)(PSU|FAN|LOM|redundant|failed|critical)\"),1,0)\n| bin _time span=5m\n| stats max(part_cpu) as max_cpu, max(part_mem) as max_mem, sum(alarm) as alarm_events, values(host) as hosts by _time, sdx_name, partition_id\n| where max_cpu>85 OR max_mem>90 OR alarm_events>0\n| table _time, sdx_name, partition_id, max_cpu, max_mem, alarm_events, hosts",
              "m": "Poll SNMP with SDX/MPX and ADC-specific OIDs into `citrix:netscaler:snmp`. Forward `citrix:netscaler:syslog` for hypervisor, VPX, and platform events. Enrich with asset tags for slot, data hall, and power feed. Page on high partition utilization versus entitlement, and on any hardware or LOM down events.",
              "z": "Heatmap of partitions by CPU, horizontal bar: memory, timeline of hardware alarms, table of active VPX count per host.",
              "kfp": "Reallocating vCPU, memory, or network on an SDX partition during tenant maintenance or SVM work can show short resource stress on instances. SVM and SDX change windows and 'tenant in flight' tags isolate planned partition churn from hardware failure.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — SDX (multi-tenant platform)](https://docs.citrix.com/en-us/citrix-adc/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:snmp\"` — per-partition or host CPU, memory, LOM, PSU, fan, platform sensors; `sourcetype=\"citrix:netscaler:syslog\"` for hypervisor, VPX state, and hardware alarms.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll SNMP with SDX/MPX and ADC-specific OIDs into `citrix:netscaler:snmp`. Forward `citrix:netscaler:syslog` for hypervisor, VPX, and platform events. Enrich with asset tags for slot, data hall, and power feed. Page on high partition utilization versus entitlement, and on any hardware or LOM down events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler (sourcetype=\"citrix:netscaler:snmp\" OR sourcetype=\"citrix:netscaler:syslog\") (\"SDX\" OR \"Xen\" OR \"partition\" OR \"VPX\" OR LOM OR PSU OR FAN OR \"throttle\")\n| eval part_cpu=coalesce(adc_partition_cpu_use_pct, sdx_cpu_use_pct, 0), part_mem=coalesce(adc_partition_mem_use_pct, sdx_mem_use_pct, 0)\n| eval alarm=if(match(_raw, \"(?i)(PSU|FAN|LOM|redundant|failed|critical)\"),1,0)\n| bin _time span=5m\n| stats max(part_cpu) as max_cpu, max(part_mem) as max_mem, sum(alarm) as alarm_events, values(host) as hosts by _time, sdx_name, partition_id\n| where max_cpu>85 OR max_mem>90 OR alarm_events>0\n| table _time, sdx_name, partition_id, max_cpu, max_mem, alarm_events, hosts\n```\n\nUnderstanding this SPL\n\n**Citrix SDX Platform Health (Partition Resources)** — Citrix NetScaler SDX hosts multiple isolated VPX instances. Platform health is about partition CPU and memory, hypervisor and VPX liveness, and out-of-band LOM, power, and cooling. A stressed partition throttles applications before obvious syslog noise; LOM, PSU, or fan alarms demand hardware attention before a blade fails.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:snmp\"` — per-partition or host CPU, memory, LOM, PSU, fan, platform sensors; `sourcetype=\"citrix:netscaler:syslog\"` for hypervisor, VPX state, and hardware alarms. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:snmp, citrix:netscaler:syslog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:snmp\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **part_cpu** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **alarm** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, sdx_name, partition_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where max_cpu>85 OR max_mem>90 OR alarm_events>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix SDX Platform Health (Partition Resources)**): table _time, sdx_name, partition_id, max_cpu, max_mem, alarm_events, hosts\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of partitions by CPU, horizontal bar: memory, timeline of hardware alarms, table of active VPX count per host.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We read partition resource use on a shared platform so a noisy neighbor on the same blade is visible before your instance starves in silence.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Performance"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.34",
              "n": "Citrix ADC Cluster Configuration Replication",
              "c": "critical",
              "f": "advanced",
              "v": "A Citrix ADC cluster must keep a consistent configuration and routing view across members. If replication lags, quorum drifts, or a split is possible, different nodes can run divergent policy—bad for availability and for policy enforcement. The goal is to detect async failures and membership changes before a maintenance window or failure widens the gap.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` — `cluster`, `quorum`, `nsync`/`propagation`, `split` brain, `replication` lag, `node` up/down, config sync",
              "q": "index=netscaler sourcetype=\"citrix:netscaler:syslog\" (\"cluster\" OR CLIP OR CCO OR NSYNC OR \"quorum\" OR \"propagat\" OR \"split-brain\" OR \"split brain\" OR \"nsync\" OR \"replication\" OR configsync OR \"RHI\")\n| eval severity=if(match(_raw, \"(?i)(split|mismatch|fail|unreachable)\"),\"high\", if(match(_raw, \"(?i)warn|lag|delay\"),\"medium\",\"low\"))\n| rex field=_raw \"(?i)cluster[\\\\s:]+(?<cluster_id>\\\\S+)\"\n| bin _time span=5m\n| stats count as events, values(severity) as severities, latest(host) as node by _time, cluster_id, host\n| where like(mvjoin(severities, \" \"), \"%high%\") OR like(mvjoin(severities, \" \"), \"%medium%\")\n| table _time, cluster_id, host, node, severities, events",
              "m": "Send cluster and nsync service logs to `index=netscaler`. If your deployment exposes a numeric lag metric via NITRO, mirror it in `citrix:netscaler:perf` for more precise SLO. Alert on any split-brain, repeated sync failures, or member departures outside change windows. Automate a ticket with last known `show cluster instance` if full context is in logs (mask secrets).",
              "z": "State timeline: cluster size and roles, time chart: sync-failure count, list: members with last heartbeat string.",
              "kfp": "Rolling firmware, patch, or build pushes put ADC cluster nodes out of config sync for minutes on purpose. A rolling-maintenance tag and a lag-duration threshold, not a few seconds of diff during upgrade, are the right filters.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — High availability and clustering](https://docs.citrix.com/en-us/citrix-adc/current-release/getting-started-with-citrix-adc/high-availability-citrix.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` — `cluster`, `quorum`, `nsync`/`propagation`, `split` brain, `replication` lag, `node` up/down, config sync.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSend cluster and nsync service logs to `index=netscaler`. If your deployment exposes a numeric lag metric via NITRO, mirror it in `citrix:netscaler:perf` for more precise SLO. Alert on any split-brain, repeated sync failures, or member departures outside change windows. Automate a ticket with last known `show cluster instance` if full context is in logs (mask secrets).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler sourcetype=\"citrix:netscaler:syslog\" (\"cluster\" OR CLIP OR CCO OR NSYNC OR \"quorum\" OR \"propagat\" OR \"split-brain\" OR \"split brain\" OR \"nsync\" OR \"replication\" OR configsync OR \"RHI\")\n| eval severity=if(match(_raw, \"(?i)(split|mismatch|fail|unreachable)\"),\"high\", if(match(_raw, \"(?i)warn|lag|delay\"),\"medium\",\"low\"))\n| rex field=_raw \"(?i)cluster[\\\\s:]+(?<cluster_id>\\\\S+)\"\n| bin _time span=5m\n| stats count as events, values(severity) as severities, latest(host) as node by _time, cluster_id, host\n| where like(mvjoin(severities, \" \"), \"%high%\") OR like(mvjoin(severities, \" \"), \"%medium%\")\n| table _time, cluster_id, host, node, severities, events\n```\n\nUnderstanding this SPL\n\n**Citrix ADC Cluster Configuration Replication** — A Citrix ADC cluster must keep a consistent configuration and routing view across members. If replication lags, quorum drifts, or a split is possible, different nodes can run divergent policy—bad for availability and for policy enforcement. The goal is to detect async failures and membership changes before a maintenance window or failure widens the gap.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` — `cluster`, `quorum`, `nsync`/`propagation`, `split` brain, `replication` lag, `node` up/down, config sync. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Extracts fields with `rex` (regular expression).\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, cluster_id, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where like(mvjoin(severities, \" \"), \"%high%\") OR like(mvjoin(severities, \" \"), \"%medium%\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ADC Cluster Configuration Replication**): table _time, cluster_id, host, node, severities, events\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: State timeline: cluster size and roles, time chart: sync-failure count, list: members with last heartbeat string.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We see cluster and sync style messages in one place so a split in configuration or a late peer is a dated fact, not a guess after tickets.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.35",
              "n": "Citrix ADC AAA Audit Trail and Command Logging",
              "c": "high",
              "f": "beginner",
              "v": "Administrative changes on a Citrix ADC (CLI, GUI, and NITRO API) are security- and compliance-relevant. Retaining a tamper-resistant audit trail of who did what, when, from where—and whether configuration was saved—supports investigations, break-glass reviews, and control frameworks that expect full accountability for network edge devices.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` — AAA, audit, `NITRO` or `API` command, `nsconfig` save, `set`, `add`, `rm` actions, `CLI`, `GUI` login, admin usernames, source IP",
              "q": "index=netscaler sourcetype=\"citrix:netscaler:syslog\" (\"audit\" OR NITRO OR \"nsconfig\" OR \"cmd\" OR \"set \" OR \"add \" OR \"rm \" OR \"save ns config\" OR \"Command\" OR \"local\" OR API)\n| eval admin=coalesce(adc_user, admin_user, user, \"unknown\")\n| eval action=if(match(_raw, \"(?i)save[\\\\s_]+ns[\\\\s_]+config\"),\"config_save\",if(match(_raw, \"(?i)NITRO|Rest|API|HTTP\"),\"api\",\"cli_gui\"))\n| bin _time span=1h\n| stats count as cmds, values(action) as actions, dc(_raw) as unique_patterns by _time, host, admin\n| where cmds>0\n| sort - cmds\n| table _time, host, admin, actions, unique_patterns, cmds",
              "m": "Enable audit logging, command logging, and API access logging on the ADC; ensure administrators cannot disable logging without a separate control. Forward to `index=netscaler` with role-based read restrictions in Splunk. Retention per policy. Consider streaming critical commands to a write-once store. Alert on new admin accounts, off-hours `save ns config` bursts, or API keys used from new subnets.",
              "z": "Table: recent high-risk commands, user timeline, count of `save` events per day, map of source IP (if approved).",
              "kfp": "NetScaler MAS, ADM, Ansible, and Terraform can log thousands of CLI-style actions that resemble privilege abuse. Allowlist automation source IPs and runbook SIDs; require interactive shell or a change outside the automation role for escalation.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — Auditing and logging](https://docs.citrix.com/en-us/citrix-adc/current-release/system/audit-logging.html)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` — AAA, audit, `NITRO` or `API` command, `nsconfig` save, `set`, `add`, `rm` actions, `CLI`, `GUI` login, admin usernames, source IP.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable audit logging, command logging, and API access logging on the ADC; ensure administrators cannot disable logging without a separate control. Forward to `index=netscaler` with role-based read restrictions in Splunk. Retention per policy. Consider streaming critical commands to a write-once store. Alert on new admin accounts, off-hours `save ns config` bursts, or API keys used from new subnets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler sourcetype=\"citrix:netscaler:syslog\" (\"audit\" OR NITRO OR \"nsconfig\" OR \"cmd\" OR \"set \" OR \"add \" OR \"rm \" OR \"save ns config\" OR \"Command\" OR \"local\" OR API)\n| eval admin=coalesce(adc_user, admin_user, user, \"unknown\")\n| eval action=if(match(_raw, \"(?i)save[\\\\s_]+ns[\\\\s_]+config\"),\"config_save\",if(match(_raw, \"(?i)NITRO|Rest|API|HTTP\"),\"api\",\"cli_gui\"))\n| bin _time span=1h\n| stats count as cmds, values(action) as actions, dc(_raw) as unique_patterns by _time, host, admin\n| where cmds>0\n| sort - cmds\n| table _time, host, admin, actions, unique_patterns, cmds\n```\n\nUnderstanding this SPL\n\n**Citrix ADC AAA Audit Trail and Command Logging** — Administrative changes on a Citrix ADC (CLI, GUI, and NITRO API) are security- and compliance-relevant. Retaining a tamper-resistant audit trail of who did what, when, from where—and whether configuration was saved—supports investigations, break-glass reviews, and control frameworks that expect full accountability for network edge devices.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` — AAA, audit, `NITRO` or `API` command, `nsconfig` save, `set`, `add`, `rm` actions, `CLI`, `GUI` login, admin usernames, source IP. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **admin** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, admin** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cmds>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Citrix ADC AAA Audit Trail and Command Logging**): table _time, host, admin, actions, unique_patterns, cmds\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table: recent high-risk commands, user timeline, count of `save` events per day, map of source IP (if approved).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We keep an audit line for shell and high-privilege work on the same box so who touched what, and when, is plain in one trail.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.36",
              "n": "Citrix ADC API Gateway Policy Evaluation",
              "c": "high",
              "f": "advanced",
              "v": "API gateway policies on Citrix ADC enforce OpenAPI shapes, XML/JSON validation, authentication, and rate limits. Mis-specified definitions cause load failures; validation errors may reflect attack traffic; 429 storms show mis-tuned quotas or abuse. Monitoring policy evaluation alongside latency keeps both security and user experience within SLO.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` and `sourcetype=\"citrix:netscaler:appflow\"` — API definition load errors, policy `throttle`, `rate` limit, JSON/XML validation failures, `HTTP_4xx/5xx` on API vservers",
              "q": "index=netscaler (sourcetype=\"citrix:netscaler:syslog\" OR sourcetype=\"citrix:netscaler:appflow\") (\"API\" OR \"openapi\" OR swagger OR \"json\" OR \"xml\" OR throttle OR \"rate\" OR \"429\" OR validation OR xss OR \"schema\")\n| eval is_block=if(status=429 OR match(_raw,\"(?i)429|throttl|deny\"),1,0)\n| eval val_err=if(match(_raw,\"(?i)(invalid|schema|validation|malform)\"),1,0)\n| bin _time span=5m\n| stats count as hits, sum(is_block) as throttled, sum(val_err) as val_fail, p95(resp_time_ms) as p95_lat by _time, host, vserver\n| where throttled>10 OR val_fail>0 OR p95_lat>1000\n| table _time, host, vserver, hits, throttled, val_fail, p95_lat",
              "m": "Send AppFlow or security logs with status, vserver, and response time to `index=netscaler`. Map each API product to a vserver name. Extract policy name and error class from syslog for definition load issues. Alert on definition load failure, sustained 429 above baseline, or validation error spikes uncorrelated with version rollouts.",
              "z": "Time chart: throttles vs successes, bar: validation failure types, table: top API paths by error (hashed if needed).",
              "kfp": "A new API version or microservice can spike policy evaluation and 401/403 on synthetic tests that are supposed to fail. The API release calendar and a synthetic-monitor IP allowlist keep release-day noise from looking like a policy bypass wave.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix ADC — API gateway](https://docs.citrix.com/en-us/citrix-adc/current-release/api-gateway.html)",
              "mitre": [
                "T1190",
                "T1110"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` and `sourcetype=\"citrix:netscaler:appflow\"` — API definition load errors, policy `throttle`, `rate` limit, JSON/XML validation failures, `HTTP_4xx/5xx` on API vservers.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSend AppFlow or security logs with status, vserver, and response time to `index=netscaler`. Map each API product to a vserver name. Extract policy name and error class from syslog for definition load issues. Alert on definition load failure, sustained 429 above baseline, or validation error spikes uncorrelated with version rollouts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler (sourcetype=\"citrix:netscaler:syslog\" OR sourcetype=\"citrix:netscaler:appflow\") (\"API\" OR \"openapi\" OR swagger OR \"json\" OR \"xml\" OR throttle OR \"rate\" OR \"429\" OR validation OR xss OR \"schema\")\n| eval is_block=if(status=429 OR match(_raw,\"(?i)429|throttl|deny\"),1,0)\n| eval val_err=if(match(_raw,\"(?i)(invalid|schema|validation|malform)\"),1,0)\n| bin _time span=5m\n| stats count as hits, sum(is_block) as throttled, sum(val_err) as val_fail, p95(resp_time_ms) as p95_lat by _time, host, vserver\n| where throttled>10 OR val_fail>0 OR p95_lat>1000\n| table _time, host, vserver, hits, throttled, val_fail, p95_lat\n```\n\nUnderstanding this SPL\n\n**Citrix ADC API Gateway Policy Evaluation** — API gateway policies on Citrix ADC enforce OpenAPI shapes, XML/JSON validation, authentication, and rate limits. Mis-specified definitions cause load failures; validation errors may reflect attack traffic; 429 storms show mis-tuned quotas or abuse. Monitoring policy evaluation alongside latency keeps both security and user experience within SLO.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` and `sourcetype=\"citrix:netscaler:appflow\"` — API definition load errors, policy `throttle`, `rate` limit, JSON/XML validation failures, `HTTP_4xx/5xx` on API vservers. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:syslog, citrix:netscaler:appflow. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_block** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **val_err** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, vserver** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where throttled>10 OR val_fail>0 OR p95_lat>1000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ADC API Gateway Policy Evaluation**): table _time, host, vserver, hits, throttled, val_fail, p95_lat\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart: throttles vs successes, bar: validation failure types, table: top API paths by error (hashed if needed).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We see how often gateway policies are used so a new route, a rate cap, and a test client are all visible in the same counter story.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.37",
              "n": "Citrix ADC Pooled Licensing Utilization",
              "c": "high",
              "f": "beginner",
              "v": "Pooled licensing ties multiple instances to a shared consumption meter (vCPUs, throughput, or other entitlements on supported platforms). Approaching the pool cap or entering grace or violation states risks forced throttling, feature loss, or audit findings where license position must be provable. Monitoring utilization versus entitlement is both a capacity and a compliance control.",
              "t": "Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler)",
              "d": "`index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` or `sourcetype=\"citrix:netscaler:perf\"` from NITRO — `license` pool usage percent, vCPU, throughput entitlement, `pool` name, `grace` or `expir`",
              "q": "index=netscaler (sourcetype=\"citrix:netscaler:syslog\" OR sourcetype=\"citrix:netscaler:perf\") (\"pooled\" OR \"license pool\" OR vCPU OR throughput OR entitlement OR CBM OR \"ceiling\" OR grace OR expir)\n| eval pool_pct=coalesce(license_pool_use_pct, pooled_license_use_pct, 0), vcpu=coalesce(allocated_vcpus, 0), thr_mbps=coalesce(throughput_mbps, 0)\n| eval capacity_flag=if(pool_pct>90 OR match(_raw,\"(?i)grace|violation|denied\"),1,0)\n| bin _time span=1h\n| stats max(pool_pct) as max_pool, max(vcpu) as peak_vcpu, max(thr_mbps) as peak_thr, max(capacity_flag) as risk by _time, host, pool_name\n| where max_pool>80 OR risk=1\n| table _time, host, pool_name, max_pool, peak_vcpu, peak_thr, risk",
              "m": "Ingest NITRO license and pooled-capacity output via TA scripted input into `citrix:netscaler:perf` and capture syslog warnings from `ns` `license` `pool`. Set thresholds at 80% (planning) and 90% (urgent) for pool use. Log proof-of-use reports with Splunk for quarterly true-ups. Add alerts for grace-period entry or license server connectivity loss (where applicable).",
              "z": "Gauge: pool use percent, line chart: vCPU and throughput, table: top consumers by `host` and `pool_name`.",
              "kfp": "Month-end or true-up and moving pools between license servers or ADC pooled capacity events swing utilization. License calendar and a pool rebalancing maintenance window are better context than a one-day overage in isolation.",
              "refs": "[Splunk Documentation: Splunk Add-on for Citrix NetScaler](https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/CitrixNetScaler), [Citrix — Licensing and pooled capacity](https://www.citrix.com/support/licensing/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler).\n• Ensure the following data sources are available: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` or `sourcetype=\"citrix:netscaler:perf\"` from NITRO — `license` pool usage percent, vCPU, throughput entitlement, `pool` name, `grace` or `expir`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest NITRO license and pooled-capacity output via TA scripted input into `citrix:netscaler:perf` and capture syslog warnings from `ns` `license` `pool`. Set thresholds at 80% (planning) and 90% (urgent) for pool use. Log proof-of-use reports with Splunk for quarterly true-ups. Add alerts for grace-period entry or license server connectivity loss (where applicable).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netscaler (sourcetype=\"citrix:netscaler:syslog\" OR sourcetype=\"citrix:netscaler:perf\") (\"pooled\" OR \"license pool\" OR vCPU OR throughput OR entitlement OR CBM OR \"ceiling\" OR grace OR expir)\n| eval pool_pct=coalesce(license_pool_use_pct, pooled_license_use_pct, 0), vcpu=coalesce(allocated_vcpus, 0), thr_mbps=coalesce(throughput_mbps, 0)\n| eval capacity_flag=if(pool_pct>90 OR match(_raw,\"(?i)grace|violation|denied\"),1,0)\n| bin _time span=1h\n| stats max(pool_pct) as max_pool, max(vcpu) as peak_vcpu, max(thr_mbps) as peak_thr, max(capacity_flag) as risk by _time, host, pool_name\n| where max_pool>80 OR risk=1\n| table _time, host, pool_name, max_pool, peak_vcpu, peak_thr, risk\n```\n\nUnderstanding this SPL\n\n**Citrix ADC Pooled Licensing Utilization** — Pooled licensing ties multiple instances to a shared consumption meter (vCPUs, throughput, or other entitlements on supported platforms). Approaching the pool cap or entering grace or violation states risks forced throttling, feature loss, or audit findings where license position must be provable. Monitoring utilization versus entitlement is both a capacity and a compliance control.\n\nDocumented **Data sources**: `index=netscaler` `sourcetype=\"citrix:netscaler:syslog\"` or `sourcetype=\"citrix:netscaler:perf\"` from NITRO — `license` pool usage percent, vCPU, throughput entitlement, `pool` name, `grace` or `expir`. **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler (Splunk_TA_citrix-netscaler). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netscaler; **sourcetype**: citrix:netscaler:syslog, citrix:netscaler:perf. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netscaler, sourcetype=\"citrix:netscaler:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pool_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **capacity_flag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, pool_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where max_pool>80 OR risk=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix ADC Pooled Licensing Utilization**): table _time, host, pool_name, max_pool, peak_vcpu, peak_thr, risk\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge: pool use percent, line chart: vCPU and throughput, table: top consumers by `host` and `pool_name`.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We look at shared license use over time on the same platform so a quiet threshold day does not turn into a hard stop with no warning.",
              "mtype": [
                "Capacity",
                "Compliance"
              ],
              "a": [
                "Compute_Inventory"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.38",
              "n": "Citrix SD-WAN Virtual Path Loss, Jitter, and Latency",
              "c": "critical",
              "f": "intermediate",
              "v": "Citrix SD-WAN virtual paths carry business traffic between sites and cloud services. Packet loss, jitter, latency, and voice-style mean opinion score (MOS) metrics reveal path quality before users open tickets. Tracking path state changes pinpoints when the fabric moved traffic or when a path stopped meeting policy for quality of experience.",
              "t": "No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention.",
              "d": "`index=sdwan` `sourcetype=\"citrix:sdwan:virtual_path\"` or SD-WAN appliance syslog/JSON with fields `path_name`, `site_id`, `loss_pct`, `jitter_ms`, `latency_ms`, `mos` (where available), `path_state` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=sdwan sourcetype=\"citrix:sdwan:virtual_path\" earliest=-4h\n| eval loss=tonumber(loss_pct), jit=tonumber(jitter_ms), lat=tonumber(latency_ms), mos=tonumber(mos)\n| bin _time span=5m\n| stats avg(loss) as avg_loss, avg(jit) as avg_jit, avg(lat) as avg_lat, avg(mos) as avg_mos, latest(path_state) as path_state by _time, site_id, path_name\n| where avg_loss>2 OR avg_jit>30 OR avg_lat>150 OR (isnotnull(avg_mos) AND avg_mos<3.5) OR match(lower(path_state),\"(?i)down|bad|degraded\")\n| table _time, site_id, path_name, avg_loss, avg_jit, avg_lat, avg_mos, path_state",
              "m": "Ingest per-virtual-path telemetry on a 1–5 minute cadence. Align thresholds to site baselines and voice or video SLOs. Alert on sustained loss or latency above policy, MOS below the floor, or explicit degraded state. Correlate with carrier incidents and change windows. Document how path reselection affects latency (acceptable vs regression).",
              "z": "Timechart: loss, jitter, latency per path; line: MOS; overlay annotations for path state changes; table: worst paths in the last hour by site.",
              "kfp": "Failover tests on microwave or cellular backup links and carrier brownouts create real loss and jitter on the standby path. SD-WAN test flags and a 'virtual path in test' change record show expected pain versus unexplained production degradation on primary only.",
              "refs": "[Citrix — SD-WAN paths and quality metrics](https://docs.citrix.com/en-us/citrix-sd-wan/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention..\n• Ensure the following data sources are available: `index=sdwan` `sourcetype=\"citrix:sdwan:virtual_path\"` or SD-WAN appliance syslog/JSON with fields `path_name`, `site_id`, `loss_pct`, `jitter_ms`, `latency_ms`, `mos` (where available), `path_state` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest per-virtual-path telemetry on a 1–5 minute cadence. Align thresholds to site baselines and voice or video SLOs. Alert on sustained loss or latency above policy, MOS below the floor, or explicit degraded state. Correlate with carrier incidents and change windows. Document how path reselection affects latency (acceptable vs regression).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"citrix:sdwan:virtual_path\" earliest=-4h\n| eval loss=tonumber(loss_pct), jit=tonumber(jitter_ms), lat=tonumber(latency_ms), mos=tonumber(mos)\n| bin _time span=5m\n| stats avg(loss) as avg_loss, avg(jit) as avg_jit, avg(lat) as avg_lat, avg(mos) as avg_mos, latest(path_state) as path_state by _time, site_id, path_name\n| where avg_loss>2 OR avg_jit>30 OR avg_lat>150 OR (isnotnull(avg_mos) AND avg_mos<3.5) OR match(lower(path_state),\"(?i)down|bad|degraded\")\n| table _time, site_id, path_name, avg_loss, avg_jit, avg_lat, avg_mos, path_state\n```\n\nUnderstanding this SPL\n\n**Citrix SD-WAN Virtual Path Loss, Jitter, and Latency** — Citrix SD-WAN virtual paths carry business traffic between sites and cloud services. Packet loss, jitter, latency, and voice-style mean opinion score (MOS) metrics reveal path quality before users open tickets. Tracking path state changes pinpoints when the fabric moved traffic or when a path stopped meeting policy for quality of experience.\n\nDocumented **Data sources**: `index=sdwan` `sourcetype=\"citrix:sdwan:virtual_path\"` or SD-WAN appliance syslog/JSON with fields `path_name`, `site_id`, `loss_pct`, `jitter_ms`, `latency_ms`, `mos` (where available), `path_state` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: citrix:sdwan:virtual_path. 
That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"citrix:sdwan:virtual_path\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **loss** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, site_id, path_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_loss>2 OR avg_jit>30 OR avg_lat>150 OR (isnotnull(avg_mos) AND avg_mos<3.5) OR match(lower(path_state),\"(?i)down|…` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix SD-WAN Virtual Path Loss, Jitter, and Latency**): table _time, site_id, path_name, avg_loss, avg_jit, avg_lat, avg_mos, path_state\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart: loss, jitter, latency per path; line: MOS; overlay annotations for path state changes; table: worst paths in the last hour by site.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We read loss, jitter, and delay on the overlay path so a brown site or a noisy link is a fact on a chart, not a feeling on a call.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "citrix",
                "snmp",
                "syslog"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.39",
              "n": "Citrix SD-WAN Application Steering and QoS Enforcement",
              "c": "high",
              "f": "advanced",
              "v": "Application-aware routing and queuing are core to SD-WAN value. Monitoring which path each app uses, when drops occur in a class of service, and when steering decisions change frequently exposes misconfiguration, license limits, and congestion on steered traffic that affects voice, video, and business apps.",
              "t": "No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention.",
              "d": "`index=sdwan` `sourcetype=\"citrix:sdwan:app_route\"` (application name, `path_selected`, `qos_class`); or `sourcetype=\"citrix:sdwan:qos\"` with `drops`, `queue_depth`, `app_name` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=sdwan (sourcetype=\"citrix:sdwan:app_route\" OR sourcetype=\"citrix:sdwan:qos\") earliest=-4h\n| eval drops=tonumber(drops), app=coalesce(app_name, application, \"unknown\"), psel=coalesce(path_selected, selected_path, \"unknown\")\n| bin _time span=5m\n| stats sum(drops) as total_drops, count as dec_events, values(psel) as paths_used, values(qos_class) as qos by _time, app, site_id\n| where total_drops>0 OR dec_events>1000\n| table _time, site_id, app, total_drops, dec_events, paths_used, qos",
              "m": "Import application-to-QoS mapping from the orchestrator. Track drops and deep queue signs per class. For steering churn, use `path_selected` with `streamstats` to count changes per 5m for major apps. Involve the network and app teams when a critical app rides a backup path. Pair with underlay stats from the same time window to separate LAN vs WAN causes.",
              "z": "Stacked area: drop count by `qos_class`; Sankey or table: `app` to `path_selected` distribution; timechart: steering change rate for top apps.",
              "kfp": "QoS class and application steering re-profiles from VoIP or SaaS tuning shift steering counters in bulk. Compare before and after a policy commit ID; a single day of movement after a documented change is usually tuning, not random steering failure.",
              "refs": "[Citrix — SD-WAN application quality of service](https://docs.citrix.com/en-us/citrix-sd-wan/11-4/application-qos.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention..\n• Ensure the following data sources are available: `index=sdwan` `sourcetype=\"citrix:sdwan:app_route\"` (application name, `path_selected`, `qos_class`); or `sourcetype=\"citrix:sdwan:qos\"` with `drops`, `queue_depth`, `app_name` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nImport application-to-QoS mapping from the orchestrator. Track drops and deep queue signs per class. For steering churn, use `path_selected` with `streamstats` to count changes per 5m for major apps. Involve the network and app teams when a critical app rides a backup path. Pair with underlay stats from the same time window to separate LAN vs WAN causes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan (sourcetype=\"citrix:sdwan:app_route\" OR sourcetype=\"citrix:sdwan:qos\") earliest=-4h\n| eval drops=tonumber(drops), app=coalesce(app_name, application, \"unknown\"), psel=coalesce(path_selected, selected_path, \"unknown\")\n| bin _time span=5m\n| stats sum(drops) as total_drops, count as dec_events, values(psel) as paths_used, values(qos_class) as qos by _time, app, site_id\n| where total_drops>0 OR dec_events>1000\n| table _time, site_id, app, total_drops, dec_events, paths_used, qos\n```\n\nUnderstanding this SPL\n\n**Citrix SD-WAN Application Steering and QoS Enforcement** — Application-aware routing and queuing are core to SD-WAN value. Monitoring which path each app uses, when drops occur in a class of service, and when steering decisions change frequently exposes misconfiguration, license limits, and congestion on steered traffic that affects voice, video, and business apps.\n\nDocumented **Data sources**: `index=sdwan` `sourcetype=\"citrix:sdwan:app_route\"` (application name, `path_selected`, `qos_class`); or `sourcetype=\"citrix:sdwan:qos\"` with `drops`, `queue_depth`, `app_name` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: citrix:sdwan:app_route, citrix:sdwan:qos. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"citrix:sdwan:app_route\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **drops** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, app, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where total_drops>0 OR dec_events>1000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix SD-WAN Application Steering and QoS Enforcement**): table _time, site_id, app, total_drops, dec_events, paths_used, qos\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area: drop count by `qos_class`; Sankey or table: `app` to `path_selected` distribution; timechart: steering change rate for top apps.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We see which apps are steered and shaped on the same fabric so a change in class or a new workload does not go unnoticed in a bill or a review.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "citrix",
                "snmp",
                "syslog"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.40",
              "n": "Citrix SD-WAN WAN Link Health and Standby Failover",
              "c": "critical",
              "f": "intermediate",
              "v": "SD-WAN sites often bond multiple WAN links with a defined active and standby plan. Health checks, utilization, and failover events show when the primary is saturated, when a standby unexpectedly carries traffic, and when a metered link risks billing overages. Early visibility shortens repair time and controls cost.",
              "t": "No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention.",
              "d": "`index=sdwan` `sourcetype=\"citrix:sdwan:wan_link\"` with `link_id`, `role` (active|standby|backup), `util_in_pct`, `util_out_pct`, `state`, `failover_event`, `metered_bytes` (if billing meter applies) Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=sdwan sourcetype=\"citrix:sdwan:wan_link\" earliest=-24h\n| eval uin=tonumber(util_in_pct), uout=tonumber(util_out_pct), mbytes=tonumber(metered_bytes), fo=if(match(lower(failover_event),\"(?i)yes|true|1|fail\"),1,0)\n| bin _time span=5m\n| stats max(uin) as max_in, max(uout) as max_out, max(mbytes) as max_meter, sum(fo) as failover by _time, site_id, link_id, role, state\n| where max_in>90 OR max_out>90 OR failover>0 OR match(lower(state),\"(?i)down|failed\") OR (role=\"standby\" AND max_in>1 AND max_out>1)\n| table _time, site_id, link_id, role, state, max_in, max_out, max_meter, failover",
              "m": "Tag each link with `metered` in a lookup for finance alerts. Alert on failover, sustained high utilization, or standby link carrying production traffic (possible mis-balance). Roll up to site level for a single on-call view. For meter overages, daily sums against monthly caps from procurement.",
              "z": "Gauge: active link utilization; timeline: failovers; table: top sites for metered bytes; state timeline per link.",
              "kfp": "Primary WAN maintenance and fiber cuts that force use of a healthy standby for hours are often correct behavior. The false positive is treating standby use as 'down'; page when standby is unhealthy, asymmetric routing fails, or primary does not return after the maintenance close.",
              "refs": "[Citrix — SD-WAN high availability and links](https://docs.citrix.com/en-us/citrix-sd-wan/11-4/high-availability.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention..\n• Ensure the following data sources are available: `index=sdwan` `sourcetype=\"citrix:sdwan:wan_link\"` with `link_id`, `role` (active|standby|backup), `util_in_pct`, `util_out_pct`, `state`, `failover_event`, `metered_bytes` (if billing meter applies) Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag each link with `metered` in a lookup for finance alerts. Alert on failover, sustained high utilization, or standby link carrying production traffic (possible mis-balance). Roll up to site level for a single on-call view. For meter overages, daily sums against monthly caps from procurement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan sourcetype=\"citrix:sdwan:wan_link\" earliest=-24h\n| eval uin=tonumber(util_in_pct), uout=tonumber(util_out_pct), mbytes=tonumber(metered_bytes), fo=if(match(lower(failover_event),\"(?i)yes|true|1|fail\"),1,0)\n| bin _time span=5m\n| stats max(uin) as max_in, max(uout) as max_out, max(mbytes) as max_meter, sum(fo) as failover by _time, site_id, link_id, role, state\n| where max_in>90 OR max_out>90 OR failover>0 OR match(lower(state),\"(?i)down|failed\") OR (role=\"standby\" AND max_in>1 AND max_out>1)\n| table _time, site_id, link_id, role, state, max_in, max_out, max_meter, failover\n```\n\nUnderstanding this SPL\n\n**Citrix SD-WAN WAN Link Health and Standby Failover** — SD-WAN sites often bond multiple WAN links with a defined active and standby plan. Health checks, utilization, and failover events show when the primary is saturated, when a standby unexpectedly carries traffic, and when a metered link risks billing overages. Early visibility shortens repair time and controls cost.\n\nDocumented **Data sources**: `index=sdwan` `sourcetype=\"citrix:sdwan:wan_link\"` with `link_id`, `role` (active|standby|backup), `util_in_pct`, `util_out_pct`, `state`, `failover_event`, `metered_bytes` (if billing meter applies) Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: citrix:sdwan:wan_link. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"citrix:sdwan:wan_link\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **uin** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, site_id, link_id, role, state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where max_in>90 OR max_out>90 OR failover>0 OR match(lower(state),\"(?i)down|failed\") OR (role=\"standby\" AND max_in>1 AND ma…` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix SD-WAN WAN Link Health and Standby Failover**): table _time, site_id, link_id, role, state, max_in, max_out, max_meter, failover\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge: active link utilization; timeline: failovers; table: top sites for metered bytes; state timeline per link.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We follow physical link and standby use on the same edge so a quiet line on purpose and a real cable fault are not the same in hindsight.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "citrix",
                "snmp",
                "syslog"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.41",
              "n": "Citrix SD-WAN High Availability and VRRP Status",
              "c": "critical",
              "f": "intermediate",
              "v": "Appliance and control-plane high availability, including VRRP and overlay tunnels, must stay stable for uninterrupted forwarding. Spurious VRRP transitions, split roles between peers, or tunnel flaps create brownouts. Monitoring role and tunnel state over time localizes a failing unit or a broadcast domain issue on the LAN side of the edge.",
              "t": "No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention.",
              "d": "`index=sdwan` `sourcetype=\"citrix:sdwan:ha\"` (pair role: active|standby), `sourcetype=\"citrix:sdwan:vrrp\"` (VRRP group, state, priority), `sourcetype=\"citrix:sdwan:tunnel\"` (IPsec/overlay up/down) as needed Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=sdwan (sourcetype=\"citrix:sdwan:ha\" OR sourcetype=\"citrix:sdwan:vrrp\" OR sourcetype=\"citrix:sdwan:tunnel\") earliest=-4h\n| eval role=lower(coalesce(ha_role, vrrp_state, \"\")), tstate=lower(coalesce(tunnel_state, \"\")), vtrans=if(match(_raw, \"(?i)vrrp|transition|master|backup\"),1,0)\n| bin _time span=1m\n| stats latest(role) as cur_role, latest(tstate) as tun, sum(vtrans) as vrrp_events by _time, site_id, group_id\n| where match(cur_role,\"(?i)unknown|init|disabled\") OR match(tun,\"(?i)down|degraded|failed\") OR vrrp_events>0\n| table _time, site_id, group_id, cur_role, tun, vrrp_events",
              "m": "Correlate HA and VRRP with power and link events from the same site. Set thresholds: any tunnel down over 1 minute; more than N VRRP events per 15 minutes; active role unknown for either peer. For paired appliances, a dashboard row per site should show mirror roles; mismatch triggers immediate investigation. Document flapping that maps to known firmware bugs and upgrade paths.",
              "z": "Timeline: HA role per site; line: VRRP event rate; state matrix: two appliances per site with color; tunnel up/down count.",
              "kfp": "Lab VRRP and preemption tuning, or a forced failover on a test link, can flip SD-WAN HA state repeatedly. Non-production edge groups, or a minimum flap count in production, avoid paging on a deliberate HA exercise.",
              "refs": "[Citrix — SD-WAN high availability and redundancy](https://docs.citrix.com/en-us/citrix-sd-wan/11-4/high-availability.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention..\n• Ensure the following data sources are available: `index=sdwan` `sourcetype=\"citrix:sdwan:ha\"` (pair role: active|standby), `sourcetype=\"citrix:sdwan:vrrp\"` (VRRP group, state, priority), `sourcetype=\"citrix:sdwan:tunnel\"` (IPsec/overlay up/down) as needed Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate HA and VRRP with power and link events from the same site. Set thresholds: any tunnel down over 1 minute; more than N VRRP events per 15 minutes; active role unknown for either peer. For paired appliances, a dashboard row per site should show mirror roles; mismatch triggers immediate investigation. Document flapping that maps to known firmware bugs and upgrade paths.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan (sourcetype=\"citrix:sdwan:ha\" OR sourcetype=\"citrix:sdwan:vrrp\" OR sourcetype=\"citrix:sdwan:tunnel\") earliest=-4h\n| eval role=lower(coalesce(ha_role, vrrp_state, \"\")), tstate=lower(coalesce(tunnel_state, \"\")), vtrans=if(match(_raw, \"(?i)vrrp|transition|master|backup\"),1,0)\n| bin _time span=1m\n| stats latest(role) as cur_role, latest(tstate) as tun, sum(vtrans) as vrrp_events by _time, site_id, group_id\n| where match(cur_role,\"(?i)unknown|init|disabled\") OR match(tun,\"(?i)down|degraded|failed\") OR vrrp_events>0\n| table _time, site_id, group_id, cur_role, tun, vrrp_events\n```\n\nUnderstanding this SPL\n\n**Citrix SD-WAN High Availability and VRRP Status** — Appliance and control-plane high availability, including VRRP and overlay tunnels, must stay stable for uninterrupted forwarding. Spurious VRRP transitions, split roles between peers, or tunnel flaps create brownouts. Monitoring role and tunnel state over time localizes a failing unit or a broadcast domain issue on the LAN side of the edge.\n\nDocumented **Data sources**: `index=sdwan` `sourcetype=\"citrix:sdwan:ha\"` (pair role: active|standby), `sourcetype=\"citrix:sdwan:vrrp\"` (VRRP group, state, priority), `sourcetype=\"citrix:sdwan:tunnel\"` (IPsec/overlay up/down) as needed Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: citrix:sdwan:ha, citrix:sdwan:vrrp, citrix:sdwan:tunnel. 
Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"citrix:sdwan:ha\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **role** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, site_id, group_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where match(cur_role,\"(?i)unknown|init|disabled\") OR match(tun,\"(?i)down|degraded|failed\") OR vrrp_events>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix SD-WAN High Availability and VRRP Status**): table _time, site_id, group_id, cur_role, tun, vrrp_events\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline: HA role per site; line: VRRP event rate; state matrix: two appliances per site with color; tunnel up/down count.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We read high-availability and router redundancy style events on the same path so a rehearsed switch and a real fault are both time-stamped in data.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Alerts"
              ],
              "e": [
                "citrix",
                "snmp",
                "syslog"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "5.3.42",
              "n": "Citrix SD-WAN Orchestrator Config Push Failures",
              "c": "high",
              "f": "intermediate",
              "v": "The SD-WAN Orchestrator (or center) applies policies and feature templates across the fleet. When pushes fail, sites can drift or stay on stale rules. Primary reachability problems and job-level errors show risk at scale, not one box at a time. A single bad change set that fails in many sites needs rollback attention fast.",
              "t": "No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention.",
              "d": "`index=sdwan` `sourcetype=\"citrix:sdwan:orchestrator\"` (job id, `change_set`, `result`, `error_code`, `target_appliance`); optional reachability from `sourcetype=\"citrix:sdwan:mgmt\"` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=sdwan (sourcetype=\"citrix:sdwan:orchestrator\" OR sourcetype=\"citrix:sdwan:mgmt\") earliest=-24h\n| eval ok=if(match(lower(result),\"(?i)success|ok|applied\"),1,0), fail=if(match(lower(result),\"(?i)fail|error|timeout|denied|partial\"),1,0), nreach=if(match(lower(error_code),\"(?i)unreach|no route|refused|tls|auth|503\"),1,0)\n| bin _time span=15m\n| stats count as jobs, sum(ok) as okc, sum(fail) as failc, sum(nreach) as reach_fails, dc(target_appliance) as appliances by _time, change_set\n| where failc>0 OR reach_fails>0\n| table _time, change_set, jobs, okc, failc, reach_fails, appliances",
              "m": "Ingest job completion and management heartbeat logs. Tag each job with a change ticket. Alert on any push failure, growing count of `target_appliance` in error, or management-plane unreachability. For fleet-wide failure, start rollback of the `change_set` in the product and notify change owner. For chronic single appliance failure, look at time sync, cert expiry, and last-seen in inventory.",
              "z": "Timechart: failed jobs per 15m; table: `change_set` with failure rates; list: top appliances by failed attempts; health tile: orchestrator reachability.",
              "kfp": "Wide orchestrator template pushes for acquired branches or RMA replacements will fail for offline appliances in bulk. A site-reachability join and a batch job with retry and grace separate expected partial failure from a broken orchestrator core.",
              "refs": "[Citrix — SD-WAN Orchestrator administration](https://docs.citrix.com/en-us/citrix-sd-wan-orchestrator/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention..\n• Ensure the following data sources are available: `index=sdwan` `sourcetype=\"citrix:sdwan:orchestrator\"` (job id, `change_set`, `result`, `error_code`, `target_appliance`); optional reachability from `sourcetype=\"citrix:sdwan:mgmt\"` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest job completion and management heartbeat logs. Tag each job with a change ticket. Alert on any push failure, growing count of `target_appliance` in error, or management-plane unreachability. For fleet-wide failure, start rollback of the `change_set` in the product and notify change owner. For chronic single appliance failure, look at time sync, cert expiry, and last-seen in inventory.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdwan (sourcetype=\"citrix:sdwan:orchestrator\" OR sourcetype=\"citrix:sdwan:mgmt\") earliest=-24h\n| eval ok=if(match(lower(result),\"(?i)success|ok|applied\"),1,0), fail=if(match(lower(result),\"(?i)fail|error|timeout|denied|partial\"),1,0), nreach=if(match(lower(error_code),\"(?i)unreach|no route|refused|tls|auth|503\"),1,0)\n| bin _time span=15m\n| stats count as jobs, sum(ok) as okc, sum(fail) as failc, sum(nreach) as reach_fails, dc(target_appliance) as appliances by _time, change_set\n| where failc>0 OR reach_fails>0\n| table _time, change_set, jobs, okc, failc, reach_fails, appliances\n```\n\nUnderstanding this SPL\n\n**Citrix SD-WAN Orchestrator Config Push Failures** — The SD-WAN Orchestrator (or center) applies policies and feature templates across the fleet. When pushes fail, sites can drift or stay on stale rules. Primary reachability problems and job-level errors show risk at scale, not one box at a time. A single bad change set that fails in many sites needs rollback attention fast.\n\nDocumented **Data sources**: `index=sdwan` `sourcetype=\"citrix:sdwan:orchestrator\"` (job id, `change_set`, `result`, `error_code`, `target_appliance`); optional reachability from `sourcetype=\"citrix:sdwan:mgmt\"` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for Citrix SD-WAN. Ingest via syslog from SD-WAN appliances and SD-WAN Center, or poll NITRO/REST API from SD-WAN Orchestrator. SNMP polling available for basic metrics. Suggested custom sourcetypes follow the `citrix:sdwan:*` convention. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdwan; **sourcetype**: citrix:sdwan:orchestrator, citrix:sdwan:mgmt. 
Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdwan, sourcetype=\"citrix:sdwan:orchestrator\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, change_set** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failc>0 OR reach_fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix SD-WAN Orchestrator Config Push Failures**): table _time, change_set, jobs, okc, failc, reach_fails, appliances\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart: failed jobs per 15m; table: `change_set` with failure rates; list: top appliances by failed attempts; health tile: orchestrator reachability.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We look at orchestrator and remote push results so a failed template or a half-applied site is a clear lead before users only \"feel something off.\"",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix",
                "snmp",
                "syslog"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 30,
            "none": 0
          }
        }
      ],
      "i": 5,
      "n": "Network Infrastructure",
      "src": "cat-05-network-infrastructure.md"
    },
    {
      "s": [
        {
          "i": "6.1",
          "n": "SAN / NAS Storage",
          "u": [
            {
              "i": "6.1.1",
              "n": "Volume Capacity Trending",
              "c": "critical",
              "f": "intermediate",
              "v": "Prevents application outages caused by full volumes. Enables proactive capacity planning and procurement.",
              "t": "Vendor TA (e.g., `TA-netapp_ontap`) or scripted API input",
              "d": "Storage array REST API metrics, SNMP hrStorageTable",
              "q": "index=storage sourcetype=\"netapp:ontap:volume\"\n| stats latest(size_used_percent) as pct_used by volume_name\n| where pct_used > 85\n| sort -pct_used",
              "m": "Deploy vendor TA on a heavy forwarder. Configure REST API polling (every 15 min) for volume metrics. Create alert for >85% and >95% thresholds. Build capacity forecast using `predict` command.",
              "z": "Line chart (capacity trend per volume), Single value (current % used), Table (volumes above threshold).",
              "kfp": "Temporary spikes during snapshots or replication; use rolling average or exclude known maintenance windows.",
              "refs": "[Splunk Add-on for NetApp](https://splunkbase.splunk.com/app/1664), vendor REST/SNMP documentation",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA (e.g., `TA-netapp_ontap`) or scripted API input.\n• Ensure the following data sources are available: Storage array REST API metrics, SNMP hrStorageTable.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy vendor TA on a heavy forwarder. Configure REST API polling (every 15 min) for volume metrics. Create alert for >85% and >95% thresholds. Build capacity forecast using `predict` command.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"netapp:ontap:volume\"\n| stats latest(size_used_percent) as pct_used by volume_name\n| where pct_used > 85\n| sort -pct_used\n```\n\nUnderstanding this SPL\n\n**Volume Capacity Trending** — Prevents application outages caused by full volumes. Enables proactive capacity planning and procurement.\n\nDocumented **Data sources**: Storage array REST API metrics, SNMP hrStorageTable. **App/TA** (typical add-on context): Vendor TA (e.g., `TA-netapp_ontap`) or scripted API input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: netapp:ontap:volume. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"netapp:ontap:volume\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by volume_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where pct_used > 85` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (capacity trend per volume), Single value (current % used), Table (volumes above threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full your storage is getting and give you time to add space or clean up snapshots and old data before an application or job suddenly stops working.",
              "wv": "crawl",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "netapp"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "6.1.2",
              "n": "Storage Latency Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "High storage latency directly impacts application performance. Early detection prevents SLA breaches and user experience degradation.",
              "t": "Vendor TA or SNMP polling",
              "d": "Array performance metrics (avg_latency, read_latency, write_latency)",
              "q": "index=storage sourcetype=\"netapp:ontap:volume_perf\"\n| stats avg(avg_latency) as latency_ms by volume_name\n| where latency_ms > 20\n| sort -latency_ms",
              "m": "Poll latency metrics via REST or SNMP every 5 minutes. Set tiered alerts: warning >10ms, critical >20ms for production volumes. Correlate with IOPS spikes to distinguish overload from hardware issues.",
              "z": "Line chart (latency over time by volume), Heatmap (volume × time), Single value (current avg latency).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA or SNMP polling.\n• Ensure the following data sources are available: Array performance metrics (avg_latency, read_latency, write_latency).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll latency metrics via REST or SNMP every 5 minutes. Set tiered alerts: warning >10ms, critical >20ms for production volumes. Correlate with IOPS spikes to distinguish overload from hardware issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"netapp:ontap:volume_perf\"\n| stats avg(avg_latency) as latency_ms by volume_name\n| where latency_ms > 20\n| sort -latency_ms\n```\n\nUnderstanding this SPL\n\n**Storage Latency Monitoring** — High storage latency directly impacts application performance. Early detection prevents SLA breaches and user experience degradation.\n\nDocumented **Data sources**: Array performance metrics (avg_latency, read_latency, write_latency). **App/TA** (typical add-on context): Vendor TA or SNMP polling. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: netapp:ontap:volume_perf. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"netapp:ontap:volume_perf\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by volume_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where latency_ms > 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency over time by volume), Heatmap (volume × time), Single value (current avg latency).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch when disks or arrays slow down for your important workloads, so you can act before people notice a frozen app or missed deadlines.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "both",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "6.1.3",
              "n": "IOPS Trending per Volume",
              "c": "high",
              "f": "intermediate",
              "v": "Identifies workload hotspots and enables data placement optimization. Supports capacity planning for storage refreshes.",
              "t": "Vendor TA or SNMP",
              "d": "Array performance metrics (read_ops, write_ops, other_ops)",
              "q": "index=storage sourcetype=\"netapp:ontap:volume_perf\"\n| timechart span=15m sum(total_ops) as iops by volume_name\n| sort -iops",
              "m": "Collect IOPS metrics per volume/LUN at 5-15 min intervals. Baseline normal patterns and alert on deviations exceeding 2× baseline. Correlate with application deployment events.",
              "z": "Line chart (IOPS trend by volume), Stacked bar (read vs write IOPS), Table (top IOPS consumers).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA or SNMP.\n• Ensure the following data sources are available: Array performance metrics (read_ops, write_ops, other_ops).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect IOPS metrics per volume/LUN at 5-15 min intervals. Baseline normal patterns and alert on deviations exceeding 2× baseline. Correlate with application deployment events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"netapp:ontap:volume_perf\"\n| timechart span=15m sum(total_ops) as iops by volume_name\n| sort -iops\n```\n\nUnderstanding this SPL\n\n**IOPS Trending per Volume** — Identifies workload hotspots and enables data placement optimization. Supports capacity planning for storage refreshes.\n\nDocumented **Data sources**: Array performance metrics (read_ops, write_ops, other_ops). **App/TA** (typical add-on context): Vendor TA or SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: netapp:ontap:volume_perf. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"netapp:ontap:volume_perf\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by volume_name** — ideal for trending and alerting on this use case.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (IOPS trend by volume), Stacked bar (read vs write IOPS), Table (top IOPS consumers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch when disks or arrays slow down for your important workloads, so you can act before people notice a frozen app or missed deadlines.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.4",
              "n": "Disk Failure Alerts",
              "c": "critical",
              "f": "beginner",
              "v": "Immediate awareness of disk failures allows replacement before RAID degradation leads to data loss.",
              "t": "Vendor TA, SNMP traps",
              "d": "Array event/alert logs, SNMP traps",
              "q": "index=storage sourcetype=\"netapp:ontap:ems\" severity=\"EMERGENCY\" OR severity=\"ALERT\"\n| search disk_fail* OR disk_broken OR disk_error\n| table _time, node, disk, severity, message",
              "m": "Enable SNMP traps or syslog forwarding for disk failure events. Create high-priority alert with PagerDuty/ServiceNow integration. Track spare disk inventory to ensure replacements are available.",
              "z": "Single value (failed disk count), Table (failed disks with details), Timeline (failure events).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA, SNMP traps.\n• Ensure the following data sources are available: Array event/alert logs, SNMP traps.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable SNMP traps or syslog forwarding for disk failure events. Create high-priority alert with PagerDuty/ServiceNow integration. Track spare disk inventory to ensure replacements are available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"netapp:ontap:ems\" severity=\"EMERGENCY\" OR severity=\"ALERT\"\n| search disk_fail* OR disk_broken OR disk_error\n| table _time, node, disk, severity, message\n```\n\nUnderstanding this SPL\n\n**Disk Failure Alerts** — Immediate awareness of disk failures allows replacement before RAID degradation leads to data loss.\n\nDocumented **Data sources**: Array event/alert logs, SNMP traps. **App/TA** (typical add-on context): Vendor TA, SNMP traps. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: netapp:ontap:ems. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"netapp:ontap:ems\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Disk Failure Alerts**): table _time, node, disk, severity, message\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (failed disk count), Table (failed disks with details), Timeline (failure events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see how your arrays and related gear are doing before small issues turn into full outages or restore surprises.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "6.1.5",
              "n": "Replication Lag Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Replication lag directly impacts RPO. Monitoring ensures DR readiness and compliance with data protection SLAs.",
              "t": "Vendor TA, REST API polling",
              "d": "Array replication status (SnapMirror, RecoverPoint, etc.)",
              "q": "index=storage sourcetype=\"netapp:ontap:snapmirror\"\n| eval lag_minutes=lag_time/60\n| where lag_minutes > 60\n| table _time, source_volume, destination_volume, lag_minutes, state",
              "m": "Poll replication status every 15 minutes. Alert when lag exceeds RPO target (e.g., >60 min for hourly replication). Track replication state (idle, transferring, broken-off) and alert on non-healthy states.",
              "z": "Single value (max replication lag), Table (replication pairs with lag), Line chart (lag over time).",
              "kfp": "Lag may increase during initial baseline transfers, scheduled resyncs, large volume moves, or upstream throttling.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA, REST API polling.\n• Ensure the following data sources are available: Array replication status (SnapMirror, RecoverPoint, etc.).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll replication status every 15 minutes. Alert when lag exceeds RPO target (e.g., >60 min for hourly replication). Track replication state (idle, transferring, broken-off) and alert on non-healthy states.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"netapp:ontap:snapmirror\"\n| eval lag_minutes=lag_time/60\n| where lag_minutes > 60\n| table _time, source_volume, destination_volume, lag_minutes, state\n```\n\nUnderstanding this SPL\n\n**Replication Lag Monitoring** — Replication lag directly impacts RPO. Monitoring ensures DR readiness and compliance with data protection SLAs.\n\nDocumented **Data sources**: Array replication status (SnapMirror, RecoverPoint, etc.). **App/TA** (typical add-on context): Vendor TA, REST API polling. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: netapp:ontap:snapmirror. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"netapp:ontap:snapmirror\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **lag_minutes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lag_minutes > 60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Replication Lag Monitoring**): table _time, source_volume, destination_volume, lag_minutes, state\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (max replication lag), Table (replication pairs with lag), Line chart (lag over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track copy and mirror health so a planned outage or a bad link does not leave you with an old or broken remote copy when you need it most.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.6",
              "n": "Controller Failover Events",
              "c": "critical",
              "f": "beginner",
              "v": "Controller failovers indicate hardware problems and may cause transient performance impact. Quick detection ensures rapid root cause analysis.",
              "t": "Vendor TA, syslog",
              "d": "Array event logs, cluster status",
              "q": "index=storage sourcetype=\"netapp:ontap:ems\"\n| search \"cf.takeover*\" OR \"cf.giveback*\" OR failover\n| table _time, node, event, message",
              "m": "For NetApp ONTAP: ingest EMS events via syslog (UDP/TCP) or use `TA-netapp_ontap` for REST-based EMS polling. Key EMS message families: `cf.takeover`, `cf.giveback`, `ha.interconnect`. Alert on any takeover outside a scheduled change window, or any giveback failure. Include `cluster`, `node`, and `partner` fields in the alert for storage operations handoff.",
              "z": "Timeline (failover events), Single value (days since last failover), Table (event details).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA, syslog.\n• Ensure the following data sources are available: Array event logs, cluster status.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFor NetApp ONTAP: ingest EMS events via syslog (UDP/TCP) or use `TA-netapp_ontap` for REST-based EMS polling. Key EMS message families: `cf.takeover`, `cf.giveback`, `ha.interconnect`. Alert on any takeover outside a scheduled change window, or any giveback failure. Include `cluster`, `node`, and `partner` fields in the alert for storage operations handoff.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"netapp:ontap:ems\"\n| search \"cf.takeover*\" OR \"cf.giveback*\" OR failover\n| table _time, node, event, message\n```\n\nUnderstanding this SPL\n\n**Controller Failover Events** — Controller failovers indicate hardware problems and may cause transient performance impact. Quick detection ensures rapid root cause analysis.\n\nDocumented **Data sources**: Array event logs, cluster status. **App/TA** (typical add-on context): Vendor TA, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: netapp:ontap:ems. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"netapp:ontap:ems\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Controller Failover Events**): table _time, node, event, message\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (failover events), Single value (days since last failover), Table (event details).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see how your arrays and related gear are doing before small issues turn into full outages or restore surprises.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "6.1.7",
              "n": "Thin Provisioning Overcommit",
              "c": "high",
              "f": "beginner",
              "v": "Over-committed thin-provisioned storage can cause sudden outages when physical capacity is exhausted. Monitoring prevents surprise failures.",
              "t": "Vendor TA, API polling",
              "d": "Aggregate/pool capacity metrics (logical vs physical)",
              "q": "index=storage sourcetype=\"netapp:ontap:aggregate\"\n| eval overcommit_ratio=logical_used/physical_used\n| where overcommit_ratio > 1.5\n| table aggregate, physical_used_pct, logical_used, overcommit_ratio",
              "m": "Poll aggregate/pool metrics showing logical vs physical capacity. Calculate overcommit ratio. Alert when physical utilization exceeds safe thresholds relative to committed capacity.",
              "z": "Gauge (overcommit ratio per pool), Table (aggregates with overcommit stats), Bar chart (logical vs physical).",
              "kfp": "Thin-provisioned ratios move with new writes, snapshots, and deleted data reclamation; spikes can follow large builds or storage efficiency jobs.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA, API polling.\n• Ensure the following data sources are available: Aggregate/pool capacity metrics (logical vs physical).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll aggregate/pool metrics showing logical vs physical capacity. Calculate overcommit ratio. Alert when physical utilization exceeds safe thresholds relative to committed capacity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"netapp:ontap:aggregate\"\n| eval overcommit_ratio=logical_used/physical_used\n| where overcommit_ratio > 1.5\n| table aggregate, physical_used_pct, logical_used, overcommit_ratio\n```\n\nUnderstanding this SPL\n\n**Thin Provisioning Overcommit** — Over-committed thin-provisioned storage can cause sudden outages when physical capacity is exhausted. Monitoring prevents surprise failures.\n\nDocumented **Data sources**: Aggregate/pool capacity metrics (logical vs physical). **App/TA** (typical add-on context): Vendor TA, API polling. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: netapp:ontap:aggregate. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"netapp:ontap:aggregate\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **overcommit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overcommit_ratio > 1.5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Thin Provisioning Overcommit**): table aggregate, physical_used_pct, logical_used, overcommit_ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (overcommit ratio per pool), Table (aggregates with overcommit stats), Bar chart (logical vs physical).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see how your arrays and related gear are doing before small issues turn into full outages or restore surprises.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.8",
              "n": "Snapshot Space Consumption",
              "c": "high",
              "f": "intermediate",
              "v": "Runaway snapshot growth can consume all available space, causing volume and application outages.",
              "t": "Vendor TA, REST API",
              "d": "Snapshot usage metrics per volume",
              "q": "index=storage sourcetype=\"netapp:ontap:volume\"\n| eval snap_pct=snapshot_used_bytes/size_total*100\n| where snap_pct > 20\n| table volume_name, snap_pct, snapshot_used_bytes, snapshot_count\n| sort -snap_pct",
              "m": "Poll snapshot usage per volume. Alert when snapshot reserve exceeds threshold (e.g., >20% of volume). Track snapshot count and age. Create scheduled report for snapshot cleanup candidates.",
              "z": "Bar chart (snapshot usage by volume), Table (volumes with high snapshot usage), Line chart (snapshot growth trend).",
              "kfp": "Snapshot space grows with active data changes and retention; short jumps often follow backup or patch windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA, REST API.\n• Ensure the following data sources are available: Snapshot usage metrics per volume.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll snapshot usage per volume. Alert when snapshot reserve exceeds threshold (e.g., >20% of volume). Track snapshot count and age. Create scheduled report for snapshot cleanup candidates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"netapp:ontap:volume\"\n| eval snap_pct=snapshot_used_bytes/size_total*100\n| where snap_pct > 20\n| table volume_name, snap_pct, snapshot_used_bytes, snapshot_count\n| sort -snap_pct\n```\n\nUnderstanding this SPL\n\n**Snapshot Space Consumption** — Runaway snapshot growth can consume all available space, causing volume and application outages.\n\nDocumented **Data sources**: Snapshot usage metrics per volume. **App/TA** (typical add-on context): Vendor TA, REST API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: netapp:ontap:volume. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"netapp:ontap:volume\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **snap_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where snap_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Snapshot Space Consumption**): table volume_name, snap_pct, snapshot_used_bytes, snapshot_count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (snapshot usage by volume), Table (volumes with high snapshot usage), Line chart (snapshot growth trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full your storage is getting and give you time to add space or clean up snapshots and old data before an application or job suddenly stops working.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.9",
              "n": "Fibre Channel Port Errors",
              "c": "high",
              "f": "intermediate",
              "v": "FC port errors cause storage performance degradation and potential path failovers. Early detection prevents cascading failures.",
              "t": "SNMP TA, FC switch syslog",
              "d": "FC switch logs (Brocade, Cisco MDS), SNMP IF-MIB",
              "q": "index=network sourcetype=\"brocade:syslog\" OR sourcetype=\"cisco:mds\"\n| search CRC_error OR link_failure OR signal_loss OR sync_loss\n| stats count by switch, port, error_type\n| where count > 10",
              "m": "Forward FC switch syslog to Splunk. Poll SNMP counters for FC error rates. Alert on error rate exceeding baseline. Correlate with storage latency spikes to identify fabric issues.",
              "z": "Table (ports with errors), Bar chart (error counts by type), Timeline (error events).",
              "kfp": "FC port errors can spike during cable replacements, SFP swaps, zoning changes, or approved maintenance on the fabric.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA, FC switch syslog.\n• Ensure the following data sources are available: FC switch logs (Brocade, Cisco MDS), SNMP IF-MIB.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward FC switch syslog to Splunk. Poll SNMP counters for FC error rates. Alert on error rate exceeding baseline. Correlate with storage latency spikes to identify fabric issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"brocade:syslog\" OR sourcetype=\"cisco:mds\"\n| search CRC_error OR link_failure OR signal_loss OR sync_loss\n| stats count by switch, port, error_type\n| where count > 10\n```\n\nUnderstanding this SPL\n\n**Fibre Channel Port Errors** — FC port errors cause storage performance degradation and potential path failovers. Early detection prevents cascading failures.\n\nDocumented **Data sources**: FC switch logs (Brocade, Cisco MDS), SNMP IF-MIB. **App/TA** (typical add-on context): SNMP TA, FC switch syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: brocade:syslog, cisco:mds. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"brocade:syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by switch, port, error_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (ports with errors), Bar chart (error counts by type), Timeline (error events).",
              "script": "",
              "premium": "",
              "hw": "Cisco MDS 9132T, MDS 9148T, MDS 9396T, MDS 9700, MDS 9706, MDS 9710",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the storage network links for errors and dropouts so you can fix cables or switches before paths fail over or apps slow down.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp",
                "syslog"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.10",
              "n": "Storage Array Firmware Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Outdated firmware exposes arrays to known bugs and security vulnerabilities. Compliance tracking supports patching cadence.",
              "t": "Vendor TA, scripted inventory input",
              "d": "Array system info (firmware version, model), vendor advisory feeds",
              "q": "index=storage sourcetype=\"netapp:ontap:system\"\n| stats latest(version) as firmware by node, model\n| lookup approved_firmware_versions model OUTPUT approved_version\n| where firmware!=approved_version\n| table node, model, firmware, approved_version",
              "m": "Poll system version info periodically (daily). Maintain a lookup table of approved firmware versions per model. Alert when arrays are running non-approved versions. Report on fleet firmware distribution.",
              "z": "Table (arrays with firmware status), Pie chart (firmware version distribution), Single value (% compliant).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA, scripted inventory input.\n• Ensure the following data sources are available: Array system info (firmware version, model), vendor advisory feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll system version info periodically (daily). Maintain a lookup table of approved firmware versions per model. Alert when arrays are running non-approved versions. Report on fleet firmware distribution.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"netapp:ontap:system\"\n| stats latest(version) as firmware by node, model\n| lookup approved_firmware_versions model OUTPUT approved_version\n| where firmware!=approved_version\n| table node, model, firmware, approved_version\n```\n\nUnderstanding this SPL\n\n**Storage Array Firmware Compliance** — Outdated firmware exposes arrays to known bugs and security vulnerabilities. Compliance tracking supports patching cadence.\n\nDocumented **Data sources**: Array system info (firmware version, model), vendor advisory feeds. **App/TA** (typical add-on context): Vendor TA, scripted inventory input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: netapp:ontap:system. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"netapp:ontap:system\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by node, model** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where firmware!=approved_version` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Storage Array Firmware Compliance**): table node, model, firmware, approved_version\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (arrays with firmware status), Pie chart (firmware version distribution), Single value (% compliant).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see how your arrays and related gear are doing before small issues turn into full outages or restore surprises.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.11",
              "n": "Isilon Cluster and Node Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Dell EMC Isilon (OneFS) is a scale-out NAS platform. Monitoring node and cluster health ensures availability and early detection of hardware or software issues before data access is impacted.",
              "t": "Splunk Add-on for Dell EMC Isilon (if available), or REST/API polling of OneFS platform API, syslog from Isilon nodes",
              "d": "OneFS platform API (cluster/node status, events), Isilon syslog, SNMP (if enabled)",
              "q": "index=storage (sourcetype=isilon:syslog OR sourcetype=isilon:api) (node_down OR cluster_offline OR \"degraded\" OR \"readonly\")\n| table _time, node, cluster, severity, message",
              "m": "Configure syslog from Isilon cluster to Splunk; optionally use OneFS REST API or vendor TA for node state, drive status, and cluster events. Alert on node down, pool degradation, or OneFS readonly conditions.",
              "z": "Single value (nodes down), Table (node/cluster status), Timeline (health events). Aligns with use cases in Splunk IT Essentials Learn (Storage – Isilon).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Dell EMC Isilon (if available), or REST/API polling of OneFS platform API, syslog from Isilon nodes.\n• Ensure the following data sources are available: OneFS platform API (cluster/node status, events), Isilon syslog, SNMP (if enabled).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure syslog from Isilon cluster to Splunk; optionally use OneFS REST API or vendor TA for node state, drive status, and cluster events. Alert on node down, pool degradation, or OneFS readonly conditions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage (sourcetype=isilon:syslog OR sourcetype=isilon:api) (node_down OR cluster_offline OR \"degraded\" OR \"readonly\")\n| table _time, node, cluster, severity, message\n```\n\nUnderstanding this SPL\n\n**Isilon Cluster and Node Health** — Dell EMC Isilon (OneFS) is a scale-out NAS platform. Monitoring node and cluster health ensures availability and early detection of hardware or software issues before data access is impacted.\n\nDocumented **Data sources**: OneFS platform API (cluster/node status, events), Isilon syslog, SNMP (if enabled). **App/TA** (typical add-on context): Splunk Add-on for Dell EMC Isilon (if available), or REST/API polling of OneFS platform API, syslog from Isilon nodes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: isilon:syslog, isilon:api. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=isilon:syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Isilon Cluster and Node Health**): table _time, node, cluster, severity, message\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (nodes down), Table (node/cluster status), Timeline (health events). Aligns with use cases in Splunk IT Essentials Learn (Storage – Isilon).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see how your arrays and related gear are doing before small issues turn into full outages or restore surprises.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "dell_emc",
                "hardware_bmc",
                "syslog"
              ],
              "em": [
                "hardware_bmc_ilo"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.12",
              "n": "Isilon Capacity and Performance Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks Isilon cluster capacity and throughput (e.g. ops/sec, throughput MB/s) for capacity planning and performance troubleshooting. Matches IT Essentials Learn procedures for Isilon storage.",
              "t": "OneFS API or vendor add-on for Isilon metrics",
              "d": "OneFS statistics (capacity by pool/node, read/write ops, network throughput)",
              "q": "index=storage sourcetype=isilon:metrics\n| timechart span=1h avg(capacity_used_pct) as pct_used, avg(ops_per_sec) as iops by node\n| where pct_used > 80",
              "m": "Poll OneFS stats API or use Isilon TA to collect capacity and performance metrics. Set alerts for pool capacity >85% and for sustained high latency or drop in throughput.",
              "z": "Line chart (capacity and IOPS over time by node/pool), Single value (cluster used %), Table (top consumers).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OneFS API or vendor add-on for Isilon metrics.\n• Ensure the following data sources are available: OneFS statistics (capacity by pool/node, read/write ops, network throughput).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll OneFS stats API or use Isilon TA to collect capacity and performance metrics. Set alerts for pool capacity >85% and for sustained high latency or drop in throughput.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=isilon:metrics\n| timechart span=1h avg(capacity_used_pct) as pct_used, avg(ops_per_sec) as iops by node\n| where pct_used > 80\n```\n\nUnderstanding this SPL\n\n**Isilon Capacity and Performance Trending** — Tracks Isilon cluster capacity and throughput (e.g. ops/sec, throughput MB/s) for capacity planning and performance troubleshooting. Matches IT Essentials Learn procedures for Isilon storage.\n\nDocumented **Data sources**: OneFS statistics (capacity by pool/node, read/write ops, network throughput). **App/TA** (typical add-on context): OneFS API or vendor add-on for Isilon metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: isilon:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=isilon:metrics. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by node** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where pct_used > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (capacity and IOPS over time by node/pool), Single value (cluster used %), Table (top consumers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full your storage is getting and give you time to add space or clean up snapshots and old data before an application or job suddenly stops working.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "dell_emc",
                "hardware_bmc"
              ],
              "em": [
                "hardware_bmc_ilo"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.13",
              "n": "TrueNAS / FreeNAS Pool Health",
              "c": "critical",
              "f": "intermediate",
              "v": "ZFS pool degradation, scrub results, and resilver progress directly impact data integrity. Early detection of unhealthy pools prevents data loss and enables timely intervention during rebuilds.",
              "t": "Custom (TrueNAS REST API)",
              "d": "TrueNAS API (/api/v2.0/pool, /api/v2.0/pool/id/X)",
              "q": "index=storage sourcetype=\"truenas:pool\"\n| search status!=\"ONLINE\" OR health!=\"HEALTHY\" OR \"resilver\" OR \"scrub\"\n| eval health_status=coalesce(health, status)\n| table _time, pool_name, health_status, status, size, used_pct, resilver_progress, scrub_status\n| sort -_time",
              "m": "Create scripted input or HTTP Event Collector (HEC) input that polls TrueNAS REST API every 5–15 minutes. Use `/api/v2.0/pool` for pool list and `/api/v2.0/pool/id/{id}` for detailed status including scrub/resilver. Authenticate with API key. Parse JSON response and index to Splunk with sourcetype `truenas:pool`. Alert on health != HEALTHY or status != ONLINE. Track resilver progress and ETA during rebuilds.",
              "z": "Single value (pools not healthy), Table (pool name, health, resilver %), Timeline (health change events), Gauge (resilver progress during rebuild).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (TrueNAS REST API).\n• Ensure the following data sources are available: TrueNAS API (/api/v2.0/pool, /api/v2.0/pool/id/X).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input or HTTP Event Collector (HEC) input that polls TrueNAS REST API every 5–15 minutes. Use `/api/v2.0/pool` for pool list and `/api/v2.0/pool/id/{id}` for detailed status including scrub/resilver. Authenticate with API key. Parse JSON response and index to Splunk with sourcetype `truenas:pool`. Alert on health != HEALTHY or status != ONLINE. Track resilver progress and ETA during rebuilds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"truenas:pool\"\n| search status!=\"ONLINE\" OR health!=\"HEALTHY\" OR \"resilver\" OR \"scrub\"\n| eval health_status=coalesce(health, status)\n| table _time, pool_name, health_status, status, size, used_pct, resilver_progress, scrub_status\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**TrueNAS / FreeNAS Pool Health** — ZFS pool degradation, scrub results, and resilver progress directly impact data integrity. Early detection of unhealthy pools prevents data loss and enables timely intervention during rebuilds.\n\nDocumented **Data sources**: TrueNAS API (/api/v2.0/pool, /api/v2.0/pool/id/X). **App/TA** (typical add-on context): Custom (TrueNAS REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: truenas:pool. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"truenas:pool\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **health_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **TrueNAS / FreeNAS Pool Health**): table _time, pool_name, health_status, status, size, used_pct, resilver_progress, scrub_status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (pools not healthy), Table (pool name, health, resilver %), Timeline (health change events), Gauge (resilver progress during rebuild).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. 
key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see how your arrays and related gear are doing before small issues turn into full outages or restore surprises.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "truenas"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.14",
              "n": "Ceph Cluster Health and OSD Status",
              "c": "critical",
              "f": "advanced",
              "v": "Ceph health warnings, OSD down/out events, and placement group (PG) state issues can lead to data unavailability or loss. Monitoring ensures rapid response to cluster degradation.",
              "t": "Custom scripted input (ceph status --format json)",
              "d": "ceph status JSON, ceph osd tree, ceph pg stat",
              "q": "index=storage sourcetype=\"ceph:status\"\n| search health!=\"HEALTH_OK\" OR osd_down>0 OR osd_out>0 OR \"degraded\" OR \"stuck\"\n| eval pg_degraded=if(match(pg_summary, \"degraded\"), 1, 0)\n| table _time, health, health_detail, osd_down, osd_out, osd_up, pg_degraded, pg_summary\n| sort -_time",
              "m": "Run `ceph status --format json` and `ceph osd tree --format json` via cron or Splunk scripted input every 5 minutes. Parse JSON and extract health, osd_map (num_up, num_in, num_down), and pg_summary. Index to Splunk. Alert on health != HEALTH_OK, osd_down > 0, osd_out > 0, or PG states containing \"degraded\" or \"stuck\". Correlate OSD events with disk failure logs.",
              "z": "Single value (cluster health status), Table (OSD up/down/out counts), Timeline (health and OSD events), Bar chart (PG states distribution).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (ceph status --format json).\n• Ensure the following data sources are available: ceph status JSON, ceph osd tree, ceph pg stat.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun `ceph status --format json` and `ceph osd tree --format json` via cron or Splunk scripted input every 5 minutes. Parse JSON and extract health, osd_map (num_up, num_in, num_down), and pg_summary. Index to Splunk. Alert on health != HEALTH_OK, osd_down > 0, osd_out > 0, or PG states containing \"degraded\" or \"stuck\". Correlate OSD events with disk failure logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"ceph:status\"\n| search health!=\"HEALTH_OK\" OR osd_down>0 OR osd_out>0 OR \"degraded\" OR \"stuck\"\n| eval pg_degraded=if(match(pg_summary, \"degraded\"), 1, 0)\n| table _time, health, health_detail, osd_down, osd_out, osd_up, pg_degraded, pg_summary\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Ceph Cluster Health and OSD Status** — Ceph health warnings, OSD down/out events, and placement group (PG) state issues can lead to data unavailability or loss. Monitoring ensures rapid response to cluster degradation.\n\nDocumented **Data sources**: ceph status JSON, ceph osd tree, ceph pg stat. **App/TA** (typical add-on context): Custom scripted input (ceph status --format json). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: ceph:status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"ceph:status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **pg_degraded** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Ceph Cluster Health and OSD Status**): table _time, health, health_detail, osd_down, osd_out, osd_up, pg_degraded, pg_summary\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (cluster health status), Table (OSD up/down/out counts), Timeline (health and OSD events), Bar chart (PG states distribution).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see how your arrays and related gear are doing before small issues turn into full outages or restore surprises.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "ceph"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.15",
              "n": "NFS Export Availability",
              "c": "high",
              "f": "intermediate",
              "v": "NFS mount point reachability and latency directly affect application availability. Monitoring from client perspective ensures end-to-end access validation.",
              "t": "Custom scripted input (showmount, mount probes)",
              "d": "NFS mount probe results, rpcinfo output",
              "q": "index=storage sourcetype=\"nfs:probe\"\n| search status!=\"ok\" OR latency_ms>500\n| table _time, export_path, server, status, latency_ms, error_message\n| sort -_time",
              "m": "Deploy scripted input on one or more probe hosts. Script performs `showmount -e <server>` and attempts `mount -t nfs <server>:<export> <mountpoint>` or uses `rpcinfo -p` and a simple read/write test. Measure latency and record success/failure. Run every 5–10 minutes. Index results with export_path, server, status, latency_ms. Alert on status != ok or latency > 500 ms.",
              "z": "Table (exports with status and latency), Single value (unreachable exports count), Line chart (latency trend per export), Status grid (export × server).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (showmount, mount probes).\n• Ensure the following data sources are available: NFS mount probe results, rpcinfo output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy scripted input on one or more probe hosts. Script performs `showmount -e <server>` and attempts `mount -t nfs <server>:<export> <mountpoint>` or uses `rpcinfo -p` and a simple read/write test. Measure latency and record success/failure. Run every 5–10 minutes. Index results with export_path, server, status, latency_ms. Alert on status != ok or latency > 500 ms.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"nfs:probe\"\n| search status!=\"ok\" OR latency_ms>500\n| table _time, export_path, server, status, latency_ms, error_message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**NFS Export Availability** — NFS mount point reachability and latency directly affect application availability. Monitoring from client perspective ensures end-to-end access validation.\n\nDocumented **Data sources**: NFS mount probe results, rpcinfo output. **App/TA** (typical add-on context): Custom scripted input (showmount, mount probes). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: nfs:probe. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"nfs:probe\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **NFS Export Availability**): table _time, export_path, server, status, latency_ms, error_message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (exports with status and latency), Single value (unreachable exports count), Line chart (latency trend per export), Status grid (export × server).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch when disks or arrays slow down for your important workloads, so you can act before people notice a frozen app or missed deadlines.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.16",
              "n": "SMB / CIFS Share Availability",
              "c": "high",
              "f": "intermediate",
              "v": "Windows/SMB share reachability is critical for file-serving workloads. Monitoring ensures shares are accessible before users report issues.",
              "t": "Custom scripted input (smbclient, net use)",
              "d": "SMB share probe results",
              "q": "index=storage sourcetype=\"smb:probe\"\n| search status!=\"ok\" OR latency_ms>1000\n| table _time, share_path, server, status, latency_ms, error_message\n| sort -_time",
              "m": "Deploy scripted input on Windows or Linux probe host. Use `smbclient -L //server` or `net use \\\\server\\share` (Windows) to test connectivity. Optionally perform read/write test and measure latency. Run every 5–10 minutes. Index share_path, server, status, latency_ms. Alert on status != ok or latency exceeding threshold. Use domain credentials with minimal read-only access.",
              "z": "Table (shares with status and latency), Single value (unreachable shares count), Line chart (latency trend per share), Status grid (share × server).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (smbclient, net use).\n• Ensure the following data sources are available: SMB share probe results.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy scripted input on Windows or Linux probe host. Use `smbclient -L //server` or `net use \\\\server\\share` (Windows) to test connectivity. Optionally perform read/write test and measure latency. Run every 5–10 minutes. Index share_path, server, status, latency_ms. Alert on status != ok or latency exceeding threshold. Use domain credentials with minimal read-only access.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"smb:probe\"\n| search status!=\"ok\" OR latency_ms>1000\n| table _time, share_path, server, status, latency_ms, error_message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**SMB / CIFS Share Availability** — Windows/SMB share reachability is critical for file-serving workloads. Monitoring ensures shares are accessible before users report issues.\n\nDocumented **Data sources**: SMB share probe results. **App/TA** (typical add-on context): Custom scripted input (smbclient, net use). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: smb:probe. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"smb:probe\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **SMB / CIFS Share Availability**): table _time, share_path, server, status, latency_ms, error_message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (shares with status and latency), Single value (unreachable shares count), Line chart (latency trend per share), Status grid (share × server).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch when disks or arrays slow down for your important workloads, so you can act before people notice a frozen app or missed deadlines.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.17",
              "n": "RAID Rebuild Progress and Estimated Completion",
              "c": "high",
              "f": "intermediate",
              "v": "During array rebuilds, progress percentage and ETA help plan maintenance and detect stalled rebuilds. Stalled rebuilds increase risk of data loss if another disk fails.",
              "t": "Custom scripted input (mdadm, MegaCli, perccli)",
              "d": "mdadm --detail, vendor RAID CLI output (MegaCli, perccli)",
              "q": "index=storage sourcetype=\"raid:rebuild\"\n| search state=\"rebuild\" OR state=\"resync\"\n| eval progress_pct=if(isnum(progress), progress, tonumber(replace(progress, \"%\", \"\")))\n| where progress_pct < 100\n| table _time, array_name, state, progress_pct, speed_mb_s, eta_hours, spare_disk\n| sort -_time",
              "m": "Create scripted input that runs `mdadm --detail /dev/md*` (Linux software RAID) or vendor CLIs (`MegaCli64 -AdpAllInfo -aAll`, `perccli64 /c0 show` for Dell PERC). Parse rebuild/resync state, progress %, speed, and ETA. Run every 5–15 minutes during rebuilds. Index to Splunk. Alert when rebuild is active and progress has not increased in 2+ hours (stalled). Track ETA for maintenance planning.",
              "z": "Gauge (rebuild progress %), Table (arrays in rebuild with ETA), Line chart (progress over time), Single value (hours until rebuild complete).",
              "kfp": "RAID or vdev state may show degraded briefly during disk replacements before rebuild or resilver progress is reported.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (mdadm, MegaCli, perccli).\n• Ensure the following data sources are available: mdadm --detail, vendor RAID CLI output (MegaCli, perccli).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input that runs `mdadm --detail /dev/md*` (Linux software RAID) or vendor CLIs (`MegaCli64 -AdpAllInfo -aAll`, `perccli64 /c0 show` for Dell PERC). Parse rebuild/resync state, progress %, speed, and ETA. Run every 5–15 minutes during rebuilds. Index to Splunk. Alert when rebuild is active and progress has not increased in 2+ hours (stalled). Track ETA for maintenance planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"raid:rebuild\"\n| search state=\"rebuild\" OR state=\"resync\"\n| eval progress_pct=if(isnum(progress), progress, tonumber(replace(progress, \"%\", \"\")))\n| where progress_pct < 100\n| table _time, array_name, state, progress_pct, speed_mb_s, eta_hours, spare_disk\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**RAID Rebuild Progress and Estimated Completion** — During array rebuilds, progress percentage and ETA help plan maintenance and detect stalled rebuilds. Stalled rebuilds increase risk of data loss if another disk fails.\n\nDocumented **Data sources**: mdadm --detail, vendor RAID CLI output (MegaCli, perccli). **App/TA** (typical add-on context): Custom scripted input (mdadm, MegaCli, perccli). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: raid:rebuild. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"raid:rebuild\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **progress_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where progress_pct < 100` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **RAID Rebuild Progress and Estimated Completion**): table _time, array_name, state, progress_pct, speed_mb_s, eta_hours, spare_disk\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (rebuild progress %), Table (arrays in rebuild with ETA), Line chart (progress over time), Single value (hours until rebuild complete).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see how your arrays and related gear are doing before small issues turn into full outages or restore surprises.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hardware_bmc"
              ],
              "em": [
                "hardware_bmc_megacli",
                "hardware_bmc_perccli"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.18",
              "n": "NetApp ONTAP Performance Counters",
              "c": "high",
              "f": "intermediate",
              "v": "Counter-based throughput, latency, and queue depth from ONTAP complement volume-level views. Trending counters catches node or aggregate saturation before user-visible latency spikes.",
              "t": "`TA-netapp_ontap`, REST API scripted input",
              "d": "ONTAP REST `/api/cluster/counter/tables/*` or ZAPI `perf-object-get-list`",
              "q": "index=storage sourcetype=\"netapp:ontap:counter\"\n| where object_name=\"volume\" OR object_name=\"lun\"\n| timechart span=5m avg(read_latency) as read_ms, avg(write_latency) as write_ms, avg(total_ops) as iops by instance_name\n| where read_ms > 15 OR write_ms > 15",
              "m": "Enable performance counter polling (15m) for volumes/LUNs. Map instance to SVM and export. Baseline p95 latency and IOPS; alert on sustained deviation from baseline.",
              "z": "Line chart (latency and IOPS by object), Table (top latency contributors), Single value (max read/write ms).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-netapp_ontap`, REST API scripted input.\n• Ensure the following data sources are available: ONTAP REST `/api/cluster/counter/tables/*` or ZAPI `perf-object-get-list`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable performance counter polling (15m) for volumes/LUNs. Map instance to SVM and export. Baseline p95 latency and IOPS; alert on sustained deviation from baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"netapp:ontap:counter\"\n| where object_name=\"volume\" OR object_name=\"lun\"\n| timechart span=5m avg(read_latency) as read_ms, avg(write_latency) as write_ms, avg(total_ops) as iops by instance_name\n| where read_ms > 15 OR write_ms > 15\n```\n\nUnderstanding this SPL\n\n**NetApp ONTAP Performance Counters** — Counter-based throughput, latency, and queue depth from ONTAP complement volume-level views. Trending counters catches node or aggregate saturation before user-visible latency spikes.\n\nDocumented **Data sources**: ONTAP REST `/api/cluster/counter/tables/*` or ZAPI `perf-object-get-list`. **App/TA** (typical add-on context): `TA-netapp_ontap`, REST API scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: netapp:ontap:counter. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"netapp:ontap:counter\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where object_name=\"volume\" OR object_name=\"lun\"` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by instance_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where read_ms > 15 OR write_ms > 15` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency and IOPS by object), Table (top latency contributors), Single value (max read/write ms).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch when disks or arrays slow down for your important workloads, so you can act before people notice a frozen app or missed deadlines.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "netapp"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.19",
              "n": "Pure Storage Array Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Pure FA/FB controller, component, and capacity health events indicate hardware or software risk. Unified visibility supports proactive replacement and support cases.",
              "t": "Pure REST API (scripted input), Pure TA if deployed",
              "d": "Pure REST `/api/2.x/arrays`, `/hardware`, `/alerts`",
              "q": "index=storage sourcetype=\"pure:array\"\n| search status!=\"healthy\" OR component_status!=\"ok\" OR severity IN (\"critical\",\"warning\")\n| stats latest(_time) as last_event, values(message) as messages by array_name, component\n| sort -last_event",
              "m": "Poll array health and open alerts every 5–15 minutes. Ingest critical/warning alerts with component ID. Correlate with support bundle generation workflows.",
              "z": "Single value (open critical alerts), Table (array, component, status), Timeline (health transitions).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Pure REST API (scripted input), Pure TA if deployed.\n• Ensure the following data sources are available: Pure REST `/api/2.x/arrays`, `/hardware`, `/alerts`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll array health and open alerts every 5–15 minutes. Ingest critical/warning alerts with component ID. Correlate with support bundle generation workflows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"pure:array\"\n| search status!=\"healthy\" OR component_status!=\"ok\" OR severity IN (\"critical\",\"warning\")\n| stats latest(_time) as last_event, values(message) as messages by array_name, component\n| sort -last_event\n```\n\nUnderstanding this SPL\n\n**Pure Storage Array Health** — Pure FA/FB controller, component, and capacity health events indicate hardware or software risk. Unified visibility supports proactive replacement and support cases.\n\nDocumented **Data sources**: Pure REST `/api/2.x/arrays`, `/hardware`, `/alerts`. **App/TA** (typical add-on context): Pure REST API (scripted input), Pure TA if deployed. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: pure:array. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"pure:array\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by array_name, component** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (open critical alerts), Table (array, component, status), Timeline (health transitions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see how your arrays and related gear are doing before small issues turn into full outages or restore surprises.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.20",
              "n": "iSCSI Session Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Dropped or flapping iSCSI sessions cause path loss and I/O errors. Session count and login state trending validates host-to-array connectivity after network or firmware changes.",
              "t": "Vendor TA, Linux `iscsiadm` scripted input, array iSCSI session API",
              "d": "Host `iscsiadm -m session`, array iSCSI session list",
              "q": "index=storage sourcetype=\"iscsi:session\"\n| bin _time span=5m\n| stats dc(session_id) as sessions by host, target_iqn, _time\n| eventstats avg(sessions) as baseline by host, target_iqn\n| where sessions < baseline OR sessions=0",
              "m": "Scripted input on hosts or array API export of active sessions every 5m. Alert on session count drop to zero or vs baseline. Correlate with NIC/link events.",
              "z": "Line chart (sessions per host/target), Table (hosts with zero sessions), Single value (total active sessions).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA, Linux `iscsiadm` scripted input, array iSCSI session API.\n• Ensure the following data sources are available: Host `iscsiadm -m session`, array iSCSI session list.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted input on hosts or array API export of active sessions every 5m. Alert on session count drop to zero or vs baseline. Correlate with NIC/link events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"iscsi:session\"\n| bin _time span=5m\n| stats dc(session_id) as sessions by host, target_iqn, _time\n| eventstats avg(sessions) as baseline by host, target_iqn\n| where sessions < baseline OR sessions=0\n```\n\nUnderstanding this SPL\n\n**iSCSI Session Monitoring** — Dropped or flapping iSCSI sessions cause path loss and I/O errors. Session count and login state trending validates host-to-array connectivity after network or firmware changes.\n\nDocumented **Data sources**: Host `iscsiadm -m session`, array iSCSI session list. **App/TA** (typical add-on context): Vendor TA, Linux `iscsiadm` scripted input, array iSCSI session API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: iscsi:session. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"iscsi:session\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, target_iqn, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by host, target_iqn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where sessions < baseline OR sessions=0` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (sessions per host/target), Table (hosts with zero sessions), Single value (total active sessions).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see how your arrays and related gear are doing before small issues turn into full outages or restore surprises.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.21",
              "n": "Multipath Failover Events",
              "c": "critical",
              "f": "intermediate",
              "v": "Path failovers indicate cable, SFP, HBA, or array port issues. Rapid detection limits prolonged single-path exposure and data loss risk.",
              "t": "Linux `multipathd` journal, Windows MPIO events, syslog",
              "d": "`multipathd` logs, `mpathadm` status (Solaris), OS MPIO event logs",
              "q": "index=os (sourcetype=linux_syslog OR sourcetype=syslog) (multipath OR \"path failed\" OR \"switching path\" OR mpio)\n| rex \"(?i)path (?<path_id>\\S+).*failed|(?i)switching.*path\"\n| bin _time span=1h\n| stats count by host, path_id, _time\n| where count > 0",
              "m": "Forward multipath daemon logs from all SAN-attached hosts. Tag events for failback/failover. Alert on any path down >5m or repeated failovers per hour.",
              "z": "Timeline (failover events), Table (host, path, count), Single value (failovers last 24h).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Linux `multipathd` journal, Windows MPIO events, syslog.\n• Ensure the following data sources are available: `multipathd` logs, `mpathadm` status (Solaris), OS MPIO event logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward multipath daemon logs from all SAN-attached hosts. Tag events for failback/failover. Alert on any path down >5m or repeated failovers per hour.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os (sourcetype=linux_syslog OR sourcetype=syslog) (multipath OR \"path failed\" OR \"switching path\" OR mpio)\n| rex \"(?i)path (?<path_id>\\S+).*failed|(?i)switching.*path\"\n| bin _time span=1h\n| stats count by host, path_id, _time\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Multipath Failover Events** — Path failovers indicate cable, SFP, HBA, or array port issues. Rapid detection limits prolonged single-path exposure and data loss risk.\n\nDocumented **Data sources**: `multipathd` logs, `mpathadm` status (Solaris), OS MPIO event logs. **App/TA** (typical add-on context): Linux `multipathd` journal, Windows MPIO events, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux_syslog, syslog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux_syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, path_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (failover events), Table (host, path, count), Single value (failovers last 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see how your arrays and related gear are doing before small issues turn into full outages or restore surprises.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.22",
              "n": "Fibre Channel Port Error Rate (Array)",
              "c": "high",
              "f": "intermediate",
              "v": "Array-side FC port CRCs, signal loss, and link failures differ from switch-only views. Port error rate trending isolates HBA/cable issues at the storage attachment point.",
              "t": "Vendor TA, SNMP FC port MIB",
              "d": "Array FC port statistics (CRC, enc_in, enc_out, link_fail)",
              "q": "index=storage sourcetype=\"storage:fc_port\"\n| eval err_rate=crc_errors + link_failures + signal_loss\n| timechart span=15m sum(err_rate) as errors by array_name, port_id\n| where errors > 0",
              "m": "Poll FC port counters per array port every 15m. Baseline error rate; alert on non-zero sustained errors or step changes after maintenance.",
              "z": "Bar chart (errors by port), Line chart (error rate trend), Table (ports with errors).",
              "kfp": "FC port errors can spike during cable replacements, SFP swaps, zoning changes, or approved maintenance on the fabric.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA, SNMP FC port MIB.\n• Ensure the following data sources are available: Array FC port statistics (CRC, enc_in, enc_out, link_fail).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll FC port counters per array port every 15m. Baseline error rate; alert on non-zero sustained errors or step changes after maintenance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"storage:fc_port\"\n| eval err_rate=crc_errors + link_failures + signal_loss\n| timechart span=15m sum(err_rate) as errors by array_name, port_id\n| where errors > 0\n```\n\nUnderstanding this SPL\n\n**Fibre Channel Port Error Rate (Array)** — Array-side FC port CRCs, signal loss, and link failures differ from switch-only views. Port error rate trending isolates HBA/cable issues at the storage attachment point.\n\nDocumented **Data sources**: Array FC port statistics (CRC, enc_in, enc_out, link_fail). **App/TA** (typical add-on context): Vendor TA, SNMP FC port MIB. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: storage:fc_port. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"storage:fc_port\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by array_name, port_id** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where errors > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (errors by port), Line chart (error rate trend), Table (ports with errors).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see how your arrays and related gear are doing before small issues turn into full outages or restore surprises.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.23",
              "n": "LUN Latency Trending",
              "c": "critical",
              "f": "intermediate",
              "v": "Per-LUN latency separates noisy neighbors and misaligned workloads from array-wide issues. Supports QoS and datastore placement decisions.",
              "t": "Vendor TA, VMware vSphere performance (if LUN mapped)",
              "d": "Array LUN performance API, VMware `disk.latency` per datastore",
              "q": "index=storage sourcetype=\"storage:lun_perf\"\n| timechart span=5m perc95(read_latency_ms) as p95_read, perc95(write_latency_ms) as p95_write by lun_id, array_name\n| where p95_read > 20 OR p95_write > 20",
              "m": "Ingest per-LUN latency at 5m granularity. Set SLA thresholds (e.g., p95 >20ms). Split by workload tier. Correlate with IOPS saturation.",
              "z": "Line chart (p95 read/write per LUN), Heatmap (LUN × hour), Table (worst LUNs).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA, VMware vSphere performance (if LUN mapped).\n• Ensure the following data sources are available: Array LUN performance API, VMware `disk.latency` per datastore.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest per-LUN latency at 5m granularity. Set SLA thresholds (e.g., p95 >20ms). Split by workload tier. Correlate with IOPS saturation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"storage:lun_perf\"\n| timechart span=5m perc95(read_latency_ms) as p95_read, perc95(write_latency_ms) as p95_write by lun_id, array_name\n| where p95_read > 20 OR p95_write > 20\n```\n\nUnderstanding this SPL\n\n**LUN Latency Trending** — Per-LUN latency separates noisy neighbors and misaligned workloads from array-wide issues. Supports QoS and datastore placement decisions.\n\nDocumented **Data sources**: Array LUN performance API, VMware `disk.latency` per datastore. **App/TA** (typical add-on context): Vendor TA, VMware vSphere performance (if LUN mapped). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: storage:lun_perf. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"storage:lun_perf\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by lun_id, array_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where p95_read > 20 OR p95_write > 20` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95 read/write per LUN), Heatmap (LUN × hour), Table (worst LUNs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch when disks or arrays slow down for your important workloads, so you can act before people notice a frozen app or missed deadlines.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_vsphere"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.24",
              "n": "Aggregate Space Forecasting",
              "c": "high",
              "f": "advanced",
              "v": "Forecasting aggregate free space prevents sudden write failures on thin-provisioned pools. Supports procurement and volume migration planning.",
              "t": "Vendor TA, REST API",
              "d": "Aggregate used/total bytes, snapshot reserve",
              "q": "index=storage sourcetype=\"netapp:ontap:aggregate\"\n| timechart span=1d latest(physical_used_pct) as used_pct by aggregate_name\n| predict used_pct as forecast future_timespan=30",
              "m": "Daily snapshot of aggregate utilization. Use `predict` or linear regression for 30/60-day runway. Alert when forecast crosses 85% within 30 days.",
              "z": "Line chart (used % with forecast band), Table (aggregates by days-to-full), Single value (soonest full date).",
              "kfp": "Capacity may temporarily spike during snapshot consolidation, deduplication runs, or large bulk imports.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA, REST API.\n• Ensure the following data sources are available: Aggregate used/total bytes, snapshot reserve.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDaily snapshot of aggregate utilization. Use `predict` or linear regression for 30/60-day runway. Alert when forecast crosses 85% within 30 days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"netapp:ontap:aggregate\"\n| timechart span=1d latest(physical_used_pct) as used_pct by aggregate_name\n| predict used_pct as forecast future_timespan=30\n```\n\nUnderstanding this SPL\n\n**Aggregate Space Forecasting** — Forecasting aggregate free space prevents sudden write failures on thin-provisioned pools. Supports procurement and volume migration planning.\n\nDocumented **Data sources**: Aggregate used/total bytes, snapshot reserve. **App/TA** (typical add-on context): Vendor TA, REST API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: netapp:ontap:aggregate. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"netapp:ontap:aggregate\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by aggregate_name** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Aggregate Space Forecasting**): predict used_pct as forecast future_timespan=30\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (used % with forecast band), Table (aggregates by days-to-full), Single value (soonest full date).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full your storage is getting and give you time to add space or clean up snapshots and old data before an application or job suddenly stops working.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.25",
              "n": "Snapshot Schedule Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Missed snapshot schedules break backup and rollback expectations. Verifying snapshot recency per policy supports operational and audit requirements.",
              "t": "Vendor TA, API",
              "d": "Snapshot list with create time, policy name",
              "q": "index=storage sourcetype=\"storage:snapshot\"\n| stats latest(snapshot_time) as last_snap by volume_name, policy_name\n| eval hours_since=round((now()-snapshot_time)/3600,1)\n| lookup snapshot_policy_expected policy_name OUTPUT expected_hours_max\n| where hours_since > expected_hours_max",
              "m": "Maintain lookup of expected max age per policy. Compare latest snapshot timestamp to policy. Alert on volumes with no snapshot within SLA window.",
              "z": "Table (non-compliant volumes), Single value (policy violations count), Timeline (snapshot completions).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA, API.\n• Ensure the following data sources are available: Snapshot list with create time, policy name.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain lookup of expected max age per policy. Compare latest snapshot timestamp to policy. Alert on volumes with no snapshot within SLA window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"storage:snapshot\"\n| stats latest(snapshot_time) as last_snap by volume_name, policy_name\n| eval hours_since=round((now()-snapshot_time)/3600,1)\n| lookup snapshot_policy_expected policy_name OUTPUT expected_hours_max\n| where hours_since > expected_hours_max\n```\n\nUnderstanding this SPL\n\n**Snapshot Schedule Compliance** — Missed snapshot schedules break backup and rollback expectations. Verifying snapshot recency per policy supports operational and audit requirements.\n\nDocumented **Data sources**: Snapshot list with create time, policy name. **App/TA** (typical add-on context): Vendor TA, API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: storage:snapshot. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"storage:snapshot\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by volume_name, policy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where hours_since > expected_hours_max` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant volumes), Single value (policy violations count), Timeline (snapshot completions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track copy and mirror health so a planned outage or a bad link does not leave you with an old or broken remote copy when you need it most.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.26",
              "n": "Deduplication Savings Ratio",
              "c": "medium",
              "f": "beginner",
              "v": "Deduplication ratio trending validates efficiency features and detects anomalies (sudden ratio drop may indicate new data types or misconfiguration).",
              "t": "Vendor TA (NetApp, Dell, Pure)",
              "d": "Logical vs physical used, dedupe savings API fields",
              "q": "index=storage sourcetype=\"storage:dedupe\"\n| eval savings_ratio=round((logical_used_bytes-physical_used_bytes)/nullif(logical_used_bytes,0)*100,1)\n| timechart span=1d avg(savings_ratio) as ratio by aggregate_name\n| where ratio < 30",
              "m": "Poll dedupe stats weekly or daily. Baseline savings ratio per aggregate. Alert on significant drop vs 30-day average (e.g., >20% relative drop).",
              "z": "Line chart (savings ratio over time), Table (aggregate, logical, physical, ratio), Single value (fleet average ratio).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Cisco DC Networking Application for Splunk](https://splunkbase.splunk.com/app/7777)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA (NetApp, Dell, Pure).\n• Ensure the following data sources are available: Logical vs physical used, dedupe savings API fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll dedupe stats weekly or daily. Baseline savings ratio per aggregate. Alert on significant drop vs 30-day average (e.g., >20% relative drop).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"storage:dedupe\"\n| eval savings_ratio=round((logical_used_bytes-physical_used_bytes)/nullif(logical_used_bytes,0)*100,1)\n| timechart span=1d avg(savings_ratio) as ratio by aggregate_name\n| where ratio < 30\n```\n\nUnderstanding this SPL\n\n**Deduplication Savings Ratio** — Deduplication ratio trending validates efficiency features and detects anomalies (sudden ratio drop may indicate new data types or misconfiguration).\n\nDocumented **Data sources**: Logical vs physical used, dedupe savings API fields. **App/TA** (typical add-on context): Vendor TA (NetApp, Dell, Pure). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: storage:dedupe. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"storage:dedupe\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **savings_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by aggregate_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where ratio < 30` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (savings ratio over time), Table (aggregate, logical, physical, ratio), Single value (fleet average ratio).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see whether space savings features still behave the way you expect, so sudden shifts do not mean silent data or layout issues.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "netapp"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.27",
              "n": "MDS Inter-Switch Link (ISL) Utilization",
              "c": "critical",
              "f": "intermediate",
              "v": "ISLs carry all inter-switch SAN traffic. Saturated ISLs cause frame queuing, slow drain propagation, and storage latency spikes. Proactive monitoring prevents cascading congestion before hosts see I/O timeouts.",
              "t": "SNMP TA, `cisco:mds` syslog",
              "d": "SNMP IF-MIB (ifHCInOctets/ifHCOutOctets on ISL ports), MDS syslog",
              "q": "index=network sourcetype=\"snmp:if\" host=\"mds*\" port_type=\"ISL\"\n| eval util_pct=round((ifHCInOctets_delta+ifHCOutOctets_delta)*8/speed/poll_interval*100,1)\n| timechart span=5m avg(util_pct) as avg_util by switch, port\n| where avg_util > 70",
              "m": "Poll ISL port counters via SNMP every 60 seconds. Tag ISL ports in a lookup. Alert at 70% sustained utilization (5-min average). Correlate with storage latency (UC-6.1.2) and FC port errors (UC-6.1.9).",
              "z": "Line chart (ISL utilization over time), Heatmap (switch x ISL port), Single value (peak ISL utilization), Topology map.",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA, `cisco:mds` syslog.\n• Ensure the following data sources are available: SNMP IF-MIB (ifHCInOctets/ifHCOutOctets on ISL ports), MDS syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll ISL port counters via SNMP every 60 seconds. Tag ISL ports in a lookup. Alert at 70% sustained utilization (5-min average). Correlate with storage latency (UC-6.1.2) and FC port errors (UC-6.1.9).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:if\" host=\"mds*\" port_type=\"ISL\"\n| eval util_pct=round((ifHCInOctets_delta+ifHCOutOctets_delta)*8/speed/poll_interval*100,1)\n| timechart span=5m avg(util_pct) as avg_util by switch, port\n| where avg_util > 70\n```\n\nUnderstanding this SPL\n\n**MDS Inter-Switch Link (ISL) Utilization** — ISLs carry all inter-switch SAN traffic. Saturated ISLs cause frame queuing, slow drain propagation, and storage latency spikes. Proactive monitoring prevents cascading congestion before hosts see I/O timeouts.\n\nDocumented **Data sources**: SNMP IF-MIB (ifHCInOctets/ifHCOutOctets on ISL ports), MDS syslog. **App/TA** (typical add-on context): SNMP TA, `cisco:mds` syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:if; **host** filter: mds*. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:if\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by switch, port** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_util > 70` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Network by Performance.host span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**MDS Inter-Switch Link (ISL) Utilization** — ISLs carry all inter-switch SAN traffic. Saturated ISLs cause frame queuing, slow drain propagation, and storage latency spikes. Proactive monitoring prevents cascading congestion before hosts see I/O timeouts.\n\nDocumented **Data sources**: SNMP IF-MIB (ifHCInOctets/ifHCOutOctets on ISL ports), MDS syslog. **App/TA** (typical add-on context): SNMP TA, `cisco:mds` syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.Network` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (ISL utilization over time), Heatmap (switch x ISL port), Single value (peak ISL utilization), Topology map.",
              "script": "",
              "premium": "",
              "hw": "Cisco MDS 9132T, MDS 9148T, MDS 9396T, MDS 9700, MDS 9706, MDS 9710",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the links between your SAN switches so they do not fill up and slow every host that depends on that fabric before anyone spots the bottleneck.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Network by Performance.host span=5m | sort - agg_value",
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_nexus",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.28",
              "n": "MDS Slow Drain Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Slow drain occurs when a target device (storage or host) cannot accept frames fast enough, exhausting buffer-to-buffer credits and stalling the entire FC path. A single slow-drain device can impact hundreds of hosts sharing the same ISL. Early detection via TxWait and B2B credit metrics is essential.",
              "t": "`cisco:mds` syslog, SNMP TA, Cisco DC Networking Application (Splunkbase 7777)",
              "d": "MDS syslog (PORT-MONITOR, SLOW-DRAIN events), SNMP counters (TxWait, B2B credit zeros)",
              "q": "index=network sourcetype=\"cisco:mds\" \"SLOW_DRAIN\" OR \"PORT-5-IF_TXWAIT\" OR \"PORT-MONITOR\"\n| rex \"port (?<port>\\S+).*txwait=(?<txwait>\\d+)\"\n| stats max(txwait) as max_txwait count by switch, port, _time\n| where max_txwait > 100\n| sort -max_txwait",
              "m": "Enable port-monitor policies on MDS switches with appropriate TxWait thresholds. Forward syslog to Splunk. Poll SNMP slow-drain counters. Alert immediately on sustained TxWait. Cross-reference with FLOGI database (UC-6.1.30) to identify the offending host or storage port.",
              "z": "Table (ports with slow drain), Line chart (TxWait over time), Topology (affected path highlighting).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Cisco DC Networking Application](https://splunkbase.splunk.com/app/7777)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `cisco:mds` syslog, SNMP TA, Cisco DC Networking Application (Splunkbase 7777).\n• Ensure the following data sources are available: MDS syslog (PORT-MONITOR, SLOW-DRAIN events), SNMP counters (TxWait, B2B credit zeros).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable port-monitor policies on MDS switches with appropriate TxWait thresholds. Forward syslog to Splunk. Poll SNMP slow-drain counters. Alert immediately on sustained TxWait. Cross-reference with FLOGI database (UC-6.1.30) to identify the offending host or storage port.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:mds\" \"SLOW_DRAIN\" OR \"PORT-5-IF_TXWAIT\" OR \"PORT-MONITOR\"\n| rex \"port (?<port>\\S+).*txwait=(?<txwait>\\d+)\"\n| stats max(txwait) as max_txwait count by switch, port, _time\n| where max_txwait > 100\n| sort -max_txwait\n```\n\nUnderstanding this SPL\n\n**MDS Slow Drain Detection** — Slow drain occurs when a target device (storage or host) cannot accept frames fast enough, exhausting buffer-to-buffer credits and stalling the entire FC path. A single slow-drain device can impact hundreds of hosts sharing the same ISL. Early detection via TxWait and B2B credit metrics is essential.\n\nDocumented **Data sources**: MDS syslog (PORT-MONITOR, SLOW-DRAIN events), SNMP counters (TxWait, B2B credit zeros). **App/TA** (typical add-on context): `cisco:mds` syslog, SNMP TA, Cisco DC Networking Application (Splunkbase 7777). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:mds. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:mds\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by switch, port, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where max_txwait > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (ports with slow drain), Line chart (TxWait over time), Topology (affected path highlighting).",
              "script": "",
              "premium": "",
              "hw": "Cisco MDS 9132T, MDS 9148T, MDS 9396T, MDS 9700 series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who is allowed on the storage network and who just logged in, so stray servers or surprise changes on the fabric are harder to miss.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_nexus",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.29",
              "n": "MDS Zone Configuration Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Zoning controls which initiators can communicate with which targets. Misconfigured zones create security risks (unauthorized access) and operational risks (accidental data access). Tracking zone changes and validating against a known-good baseline prevents drift.",
              "t": "`cisco:mds` syslog, scripted input (MDS NX-API / CLI)",
              "d": "MDS syslog (zone change events), NX-API CLI (`show zone`, `show zoneset active`)",
              "q": "index=network sourcetype=\"cisco:mds\" \"ZONE\" (\"added\" OR \"removed\" OR \"activated\" OR \"changed\")\n| stats count by switch, vsan_id, zone_name, action, user\n| append [| inputlookup mds_approved_zones | eval source=\"baseline\"]\n| stats values(source) as sources by vsan_id, zone_name\n| where NOT match(sources,\"baseline\")\n| table vsan_id, zone_name, sources",
              "m": "Export zone configuration periodically via NX-API. Maintain a baseline lookup of approved zones per VSAN. Detect zone additions, removals, and activations via syslog. Alert on any zone change outside change windows.",
              "z": "Table (zone changes), Timeline (change events), Diff view (current vs baseline).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `cisco:mds` syslog, scripted input (MDS NX-API / CLI).\n• Ensure the following data sources are available: MDS syslog (zone change events), NX-API CLI (`show zone`, `show zoneset active`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport zone configuration periodically via NX-API. Maintain a baseline lookup of approved zones per VSAN. Detect zone additions, removals, and activations via syslog. Alert on any zone change outside change windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:mds\" \"ZONE\" (\"added\" OR \"removed\" OR \"activated\" OR \"changed\")\n| stats count by switch, vsan_id, zone_name, action, user\n| append [| inputlookup mds_approved_zones | eval source=\"baseline\"]\n| stats values(source) as sources by vsan_id, zone_name\n| where NOT match(sources,\"baseline\")\n| table vsan_id, zone_name, sources\n```\n\nUnderstanding this SPL\n\n**MDS Zone Configuration Compliance** — Zoning controls which initiators can communicate with which targets. Misconfigured zones create security risks (unauthorized access) and operational risks (accidental data access). Tracking zone changes and validating against a known-good baseline prevents drift.\n\nDocumented **Data sources**: MDS syslog (zone change events), NX-API CLI (`show zone`, `show zoneset active`). **App/TA** (typical add-on context): `cisco:mds` syslog, scripted input (MDS NX-API / CLI). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:mds. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:mds\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch, vsan_id, zone_name, action, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by vsan_id, zone_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where NOT match(sources,\"baseline\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MDS Zone Configuration Compliance**): table vsan_id, zone_name, sources\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**MDS Zone Configuration Compliance** — Zoning controls which initiators can communicate with which targets. 
Misconfigured zones create security risks (unauthorized access) and operational risks (accidental data access). Tracking zone changes and validating against a known-good baseline prevents drift.\n\nDocumented **Data sources**: MDS syslog (zone change events), NX-API CLI (`show zone`, `show zoneset active`). **App/TA** (typical add-on context): `cisco:mds` syslog, scripted input (MDS NX-API / CLI). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (zone changes), Timeline (change events), Diff view (current vs baseline).",
              "script": "",
              "premium": "",
              "hw": "Cisco MDS 9132T, MDS 9148T, MDS 9396T, MDS 9700 series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who is allowed on the storage network and who just logged in, so stray servers or surprise changes on the fabric are harder to miss.",
              "mtype": [
                "Compliance",
                "Change"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.user | sort - count",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.30",
              "n": "MDS FLOGI Database Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "The FLOGI (Fabric Login) database records every device that has logged into the SAN fabric. Monitoring FLOGI events detects rogue devices, unexpected host logins, and fabric login storms that indicate HBA or driver issues.",
              "t": "`cisco:mds` syslog, scripted input (NX-API)",
              "d": "MDS syslog (FLOGI/FDISC events), NX-API (`show flogi database`)",
              "q": "index=network sourcetype=\"cisco:mds\" \"FLOGI\" OR \"FDISC\"\n| stats count as login_count by switch, port, pwwn, nwwn\n| lookup mds_known_hosts pwwn OUTPUT host_name, authorized\n| where isnull(authorized) OR authorized!=\"yes\"\n| table switch, port, pwwn, nwwn, host_name, authorized, login_count",
              "m": "Forward MDS syslog and periodically poll FLOGI database via NX-API. Maintain a lookup of known/authorized WWNs. Alert on unknown WWN logins. Track FLOGI count trends to detect login storms.",
              "z": "Table (FLOGI entries with authorization status), Bar chart (logins per switch), Timeline (login events).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `cisco:mds` syslog, scripted input (NX-API).\n• Ensure the following data sources are available: MDS syslog (FLOGI/FDISC events), NX-API (`show flogi database`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward MDS syslog and periodically poll FLOGI database via NX-API. Maintain a lookup of known/authorized WWNs. Alert on unknown WWN logins. Track FLOGI count trends to detect login storms.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:mds\" \"FLOGI\" OR \"FDISC\"\n| stats count as login_count by switch, port, pwwn, nwwn\n| lookup mds_known_hosts pwwn OUTPUT host_name, authorized\n| where isnull(authorized) OR authorized!=\"yes\"\n| table switch, port, pwwn, nwwn, host_name, authorized, login_count\n```\n\nUnderstanding this SPL\n\n**MDS FLOGI Database Monitoring** — The FLOGI (Fabric Login) database records every device that has logged into the SAN fabric. Monitoring FLOGI events detects rogue devices, unexpected host logins, and fabric login storms that indicate HBA or driver issues.\n\nDocumented **Data sources**: MDS syslog (FLOGI/FDISC events), NX-API (`show flogi database`). **App/TA** (typical add-on context): `cisco:mds` syslog, scripted input (NX-API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:mds. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:mds\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch, port, pwwn, nwwn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(authorized) OR authorized!=\"yes\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MDS FLOGI Database Monitoring**): table switch, port, pwwn, nwwn, host_name, authorized, login_count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**MDS FLOGI Database Monitoring** — The FLOGI (Fabric Login) database records every device that has logged into the SAN fabric. Monitoring FLOGI events detects rogue devices, unexpected host logins, and fabric login storms that indicate HBA or driver issues.\n\nDocumented **Data sources**: MDS syslog (FLOGI/FDISC events), NX-API (`show flogi database`). **App/TA** (typical add-on context): `cisco:mds` syslog, scripted input (NX-API). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (FLOGI entries with authorization status), Bar chart (logins per switch), Timeline (login events).",
              "script": "",
              "premium": "",
              "hw": "Cisco MDS 9132T, MDS 9148T, MDS 9396T, MDS 9700 series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who is allowed on the storage network and who just logged in, so stray servers or surprise changes on the fabric are harder to miss.",
              "mtype": [
                "Inventory",
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.31",
              "n": "MDS VSAN Health and Isolation Events",
              "c": "critical",
              "f": "intermediate",
              "v": "VSANs provide logical SAN segmentation. VSAN isolation events (caused by ISL failures, misconfigured trunking, or merge failures) split the fabric and break host-to-storage paths. Detecting isolation within seconds is essential for maintaining storage availability.",
              "t": "`cisco:mds` syslog, SNMP TA",
              "d": "MDS syslog (VSAN state change, merge failure, isolation events), SNMP",
              "q": "index=network sourcetype=\"cisco:mds\" \"VSAN\" (\"isolated\" OR \"merge\" OR \"segmented\" OR \"down\")\n| stats count latest(_time) as last_event by switch, vsan_id, event_type\n| where event_type IN (\"isolated\",\"segmented\",\"merge_failure\")\n| table switch, vsan_id, event_type, count, last_event\n| sort -last_event",
              "m": "Forward MDS syslog with facility-level logging. Alert immediately on VSAN isolation or segmentation events. Correlate with ISL link status (UC-6.1.27) and zone changes (UC-6.1.29) to identify root cause.",
              "z": "Status grid (VSAN health), Table (isolation events), Topology map (VSAN segmentation).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `cisco:mds` syslog, SNMP TA.\n• Ensure the following data sources are available: MDS syslog (VSAN state change, merge failure, isolation events), SNMP.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward MDS syslog with facility-level logging. Alert immediately on VSAN isolation or segmentation events. Correlate with ISL link status (UC-6.1.27) and zone changes (UC-6.1.29) to identify root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:mds\" \"VSAN\" (\"isolated\" OR \"merge\" OR \"segmented\" OR \"down\")\n| stats count latest(_time) as last_event by switch, vsan_id, event_type\n| where event_type IN (\"isolated\",\"segmented\",\"merge_failure\")\n| table switch, vsan_id, event_type, count, last_event\n| sort -last_event\n```\n\nUnderstanding this SPL\n\n**MDS VSAN Health and Isolation Events** — VSANs provide logical SAN segmentation. VSAN isolation events (caused by ISL failures, misconfigured trunking, or merge failures) split the fabric and break host-to-storage paths. Detecting isolation within seconds is essential for maintaining storage availability.\n\nDocumented **Data sources**: MDS syslog (VSAN state change, merge failure, isolation events), SNMP. **App/TA** (typical add-on context): `cisco:mds` syslog, SNMP TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:mds. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:mds\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch, vsan_id, event_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where event_type IN (\"isolated\",\"segmented\",\"merge_failure\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MDS VSAN Health and Isolation Events**): table switch, vsan_id, event_type, count, last_event\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (VSAN health), Table (isolation events), Topology map (VSAN segmentation).",
              "script": "",
              "premium": "",
              "hw": "Cisco MDS 9132T, MDS 9148T, MDS 9396T, MDS 9700 series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who is allowed on the storage network and who just logged in, so stray servers or surprise changes on the fabric are harder to miss.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_nexus",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.1.32",
              "n": "MDS SAN Fabric Oversubscription Ratio",
              "c": "medium",
              "f": "intermediate",
              "v": "The ratio of total edge port bandwidth to ISL bandwidth determines oversubscription. High oversubscription ratios (>7:1 for production, >20:1 for backup) increase the risk of congestion. Tracking this metric supports capacity planning and fabric expansion decisions.",
              "t": "SNMP TA, scripted input (NX-API)",
              "d": "SNMP IF-MIB (port speeds, port types), NX-API (`show interface brief`)",
              "q": "index=network sourcetype=\"snmp:if\" host=\"mds*\"\n| stats sum(eval(if(port_type=\"F\",speed,0))) as edge_bw sum(eval(if(port_type=\"E\" OR port_type=\"TE\",speed,0))) as isl_bw by switch\n| eval oversubscription=round(edge_bw/isl_bw,1)\n| where oversubscription > 7\n| table switch, edge_bw, isl_bw, oversubscription\n| sort -oversubscription",
              "m": "Poll interface inventory via SNMP or NX-API. Classify ports by type (F-port=edge, E/TE-port=ISL). Calculate oversubscription ratio per switch. Alert when ratio exceeds policy threshold. Report quarterly for capacity planning.",
              "z": "Table (switch oversubscription), Gauge (ratio per switch), Trend chart (ratio over quarters).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA, scripted input (NX-API).\n• Ensure the following data sources are available: SNMP IF-MIB (port speeds, port types), NX-API (`show interface brief`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll interface inventory via SNMP or NX-API. Classify ports by type (F-port=edge, E/TE-port=ISL). Calculate oversubscription ratio per switch. Alert when ratio exceeds policy threshold. Report quarterly for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:if\" host=\"mds*\"\n| stats sum(eval(if(port_type=\"F\",speed,0))) as edge_bw sum(eval(if(port_type=\"E\" OR port_type=\"TE\",speed,0))) as isl_bw by switch\n| eval oversubscription=round(edge_bw/isl_bw,1)\n| where oversubscription > 7\n| table switch, edge_bw, isl_bw, oversubscription\n| sort -oversubscription\n```\n\nUnderstanding this SPL\n\n**MDS SAN Fabric Oversubscription Ratio** — The ratio of total edge port bandwidth to ISL bandwidth determines oversubscription. High oversubscription ratios (>7:1 for production, >20:1 for backup) increase the risk of congestion. Tracking this metric supports capacity planning and fabric expansion decisions.\n\nDocumented **Data sources**: SNMP IF-MIB (port speeds, port types), NX-API (`show interface brief`). **App/TA** (typical add-on context): SNMP TA, scripted input (NX-API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:if; **host** filter: mds*. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:if\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **oversubscription** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where oversubscription > 7` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MDS SAN Fabric Oversubscription Ratio**): table switch, edge_bw, isl_bw, oversubscription\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (switch oversubscription), Gauge (ratio per switch), Trend chart (ratio over quarters).",
              "script": "",
              "premium": "",
              "hw": "Cisco MDS 9132T, MDS 9148T, MDS 9396T, MDS 9700 series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the links between your SAN switches so they do not fill up and slow every host that depends on that fabric before anyone spots the bottleneck.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 39.7,
          "qd": {
            "gold": 1,
            "silver": 3,
            "bronze": 28,
            "none": 0
          }
        },
        {
          "i": "6.2",
          "n": "Object Storage",
          "u": [
            {
              "i": "6.2.1",
              "n": "Bucket Capacity Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks storage growth for cost forecasting and lifecycle policy effectiveness. Prevents unexpected cloud bills.",
              "t": "`Splunk_TA_aws` (CloudWatch), Splunk_TA_microsoft-cloudservices",
              "d": "CloudWatch S3 metrics (BucketSizeBytes), Azure Blob metrics",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" metric_name=\"BucketSizeBytes\"\n| timechart span=1d latest(Average) as size_bytes by bucket_name\n| eval size_gb=size_bytes/1024/1024/1024",
              "m": "Enable S3 storage metrics in CloudWatch (request metrics may incur cost). Ingest via Splunk Add-on for AWS. Create trending reports by bucket and apply `predict` for growth forecasting.",
              "z": "Line chart (bucket size over time), Stacked area (total storage by bucket), Table (largest buckets).",
              "kfp": "Object size can grow during bulk uploads, replication, or before lifecycle transitions cold; align alerts with finance and application release calendars.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws` (CloudWatch), Splunk_TA_microsoft-cloudservices.\n• Ensure the following data sources are available: CloudWatch S3 metrics (BucketSizeBytes), Azure Blob metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable S3 storage metrics in CloudWatch (request metrics may incur cost). Ingest via Splunk Add-on for AWS. Create trending reports by bucket and apply `predict` for growth forecasting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" metric_name=\"BucketSizeBytes\"\n| timechart span=1d latest(Average) as size_bytes by bucket_name\n| eval size_gb=size_bytes/1024/1024/1024\n```\n\nUnderstanding this SPL\n\n**Bucket Capacity Trending** — Tracks storage growth for cost forecasting and lifecycle policy effectiveness. Prevents unexpected cloud bills.\n\nDocumented **Data sources**: CloudWatch S3 metrics (BucketSizeBytes), Azure Blob metrics. **App/TA** (typical add-on context): `Splunk_TA_aws` (CloudWatch), Splunk_TA_microsoft-cloudservices. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by bucket_name** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **size_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (bucket size over time), Stacked area (total storage by bucket), Table (largest buckets).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how large your object buckets grow over time so you can plan cost, lifecycle rules, and capacity before a surprise bill.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.2.2",
              "n": "Access Pattern Anomalies",
              "c": "high",
              "f": "advanced",
              "v": "Unusual access patterns may indicate data breaches, compromised credentials, or misconfigured applications.",
              "t": "`Splunk_TA_aws` (S3 access logs), Azure Blob diagnostics",
              "d": "S3 access logs, Azure Blob analytics logs",
              "q": "index=aws sourcetype=\"aws:s3:accesslogs\"\n| stats count by bucket_name, requester, operation\n| eventstats avg(count) as avg_ops, stdev(count) as stdev_ops by bucket_name, operation\n| where count > avg_ops + 3*stdev_ops",
              "m": "Enable S3 server access logging to a dedicated logging bucket. Ingest via SQS-based S3 input. Baseline normal access patterns and alert on statistical outliers. Correlate with IAM changes.",
              "z": "Line chart (access volume over time), Table (anomalous access events), Bar chart (operations by requester).",
              "kfp": "Legitimate bulk operations, month-end reporting, or new application releases can change access patterns; compare to change windows and owner notifications.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws` (S3 access logs), Azure Blob diagnostics.\n• Ensure the following data sources are available: S3 access logs, Azure Blob analytics logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable S3 server access logging to a dedicated logging bucket. Ingest via SQS-based S3 input. Baseline normal access patterns and alert on statistical outliers. Correlate with IAM changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:s3:accesslogs\"\n| stats count by bucket_name, requester, operation\n| eventstats avg(count) as avg_ops, stdev(count) as stdev_ops by bucket_name, operation\n| where count > avg_ops + 3*stdev_ops\n```\n\nUnderstanding this SPL\n\n**Access Pattern Anomalies** — Unusual access patterns may indicate data breaches, compromised credentials, or misconfigured applications.\n\nDocumented **Data sources**: S3 access logs, Azure Blob analytics logs. **App/TA** (typical add-on context): `Splunk_TA_aws` (S3 access logs), Azure Blob diagnostics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:s3:accesslogs. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:s3:accesslogs\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by bucket_name, requester, operation** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by bucket_name, operation** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > avg_ops + 3*stdev_ops` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (access volume over time), Table (anomalous access events), Bar chart (operations by requester).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We notice when object storage is used in unusual ways so you can spot stolen credentials, misconfigured apps, or abuse before large data leaves the bucket.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.2.3",
              "n": "Public Bucket Detection",
              "c": "critical",
              "f": "beginner",
              "v": "Public buckets are a top cloud security risk, leading to data breaches. Immediate detection is essential for compliance.",
              "t": "`Splunk_TA_aws` (Config), Azure Policy",
              "d": "AWS Config rules, S3 ACL/policy evaluations",
              "q": "index=aws sourcetype=\"aws:config:rule\"\n| search configRuleName=\"s3-bucket-public-read-prohibited\" OR configRuleName=\"s3-bucket-public-write-prohibited\"\n| where complianceType=\"NON_COMPLIANT\"\n| table _time, resourceId, configRuleName, complianceType",
              "m": "Enable AWS Config rules for S3 public access. Ingest Config compliance data. Create critical alert for any NON_COMPLIANT result. Also monitor S3 Block Public Access settings at account level.",
              "z": "Single value (public bucket count — should be 0), Table (non-compliant buckets), Status indicator (red/green).",
              "kfp": "Security scans, penetration tests, or partner integrations may use patterns that look unusual; validate against known projects before escalation.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws` (Config), Azure Policy.\n• Ensure the following data sources are available: AWS Config rules, S3 ACL/policy evaluations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable AWS Config rules for S3 public access. Ingest Config compliance data. Create critical alert for any NON_COMPLIANT result. Also monitor S3 Block Public Access settings at account level.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:config:rule\"\n| search configRuleName=\"s3-bucket-public-read-prohibited\" OR configRuleName=\"s3-bucket-public-write-prohibited\"\n| where complianceType=\"NON_COMPLIANT\"\n| table _time, resourceId, configRuleName, complianceType\n```\n\nUnderstanding this SPL\n\n**Public Bucket Detection** — Public buckets are a top cloud security risk, leading to data breaches. Immediate detection is essential for compliance.\n\nDocumented **Data sources**: AWS Config rules, S3 ACL/policy evaluations. **App/TA** (typical add-on context): `Splunk_TA_aws` (Config), Azure Policy. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:config:rule. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:config:rule\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Filters the current rows with `where complianceType=\"NON_COMPLIANT\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Public Bucket Detection**): table _time, resourceId, configRuleName, complianceType\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (public bucket count — should be 0), Table (non-compliant buckets), Status indicator (red/green).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you find buckets or containers that were left open to the world so you can lock them down before someone else lists or copies your data.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "6.2.4",
              "n": "Lifecycle Policy Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Ensures storage cost optimization policies are working. Objects not transitioning per policy waste money.",
              "t": "Cloud provider TAs",
              "d": "CloudWatch storage class metrics, lifecycle action logs",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" metric_name=\"BucketSizeBytes\"\n| stats latest(Average) as size by bucket_name, StorageType\n| xyseries bucket_name StorageType size",
              "m": "Monitor storage class distribution per bucket over time. Compare against defined lifecycle policies. Alert when objects remain in expensive storage classes longer than policy dictates.",
              "z": "Stacked bar (storage class distribution per bucket), Table (policy violations), Pie chart (total storage by class).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud provider TAs.\n• Ensure the following data sources are available: CloudWatch storage class metrics, lifecycle action logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor storage class distribution per bucket over time. Compare against defined lifecycle policies. Alert when objects remain in expensive storage classes longer than policy dictates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" metric_name=\"BucketSizeBytes\"\n| stats latest(Average) as size by bucket_name, StorageType\n| xyseries bucket_name StorageType size\n```\n\nUnderstanding this SPL\n\n**Lifecycle Policy Compliance** — Ensures storage cost optimization policies are working. Objects not transitioning per policy waste money.\n\nDocumented **Data sources**: CloudWatch storage class metrics, lifecycle action logs. **App/TA** (typical add-on context): Cloud provider TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by bucket_name, StorageType** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pivots fields for charting with `xyseries`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (storage class distribution per bucket), Table (policy violations), Pie chart (total storage by class).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check that objects move through the right storage classes on schedule so old data does not sit in expensive tiers or miss deletion when policy says it should.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.2.5",
              "n": "Cross-Region Replication Lag",
              "c": "high",
              "f": "beginner",
              "v": "Replication lag affects DR readiness. Monitoring ensures geo-redundant data meets RPO requirements.",
              "t": "Cloud provider TAs",
              "d": "S3 replication metrics (ReplicationLatency, OperationsPendingReplication)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" metric_name=\"ReplicationLatency\"\n| timechart span=1h avg(Average) as replication_lag_sec by bucket_name\n| where replication_lag_sec > 3600",
              "m": "Enable S3 replication metrics in CloudWatch. Ingest and alert when replication latency or pending operations exceed thresholds. Correlate with data ingestion spikes that may cause temporary lag.",
              "z": "Line chart (replication lag over time), Single value (max lag), Table (buckets with lag exceeding SLA).",
              "kfp": "Lag may increase during initial baseline transfers, scheduled resyncs, large volume moves, or upstream throttling.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud provider TAs.\n• Ensure the following data sources are available: S3 replication metrics (ReplicationLatency, OperationsPendingReplication).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable S3 replication metrics in CloudWatch. Ingest and alert when replication latency or pending operations exceed thresholds. Correlate with data ingestion spikes that may cause temporary lag.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" metric_name=\"ReplicationLatency\"\n| timechart span=1h avg(Average) as replication_lag_sec by bucket_name\n| where replication_lag_sec > 3600\n```\n\nUnderstanding this SPL\n\n**Cross-Region Replication Lag** — Replication lag affects DR readiness. Monitoring ensures geo-redundant data meets RPO requirements.\n\nDocumented **Data sources**: S3 replication metrics (ReplicationLatency, OperationsPendingReplication). **App/TA** (typical add-on context): Cloud provider TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by bucket_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where replication_lag_sec > 3600` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (replication lag over time), Single value (max lag), Table (buckets with lag exceeding SLA).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how far behind cross-region copies fall so you know if you are still within your recovery time after link issues or big data moves.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.2.6",
              "n": "S3 and Azure Blob Lifecycle Policy Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Confirms lifecycle rules exist per bucket/container and that transitions match tagging/age rules. Reduces cost leakage from objects stuck in hot tiers.",
              "t": "`Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, Config/Policy inventory",
              "d": "S3 bucket lifecycle XML inventory, Azure Blob management policy JSON, AWS Config",
              "q": "index=aws sourcetype=\"aws:s3:lifecycle_inventory\"\n| stats values(rule_id) as rules, latest(has_expiration) as exp by bucket_name, region\n| where mvcount(rules)=0 OR exp=0\n| table bucket_name region rules exp",
              "m": "Export bucket lifecycle configurations via API/Config daily. For Azure, ingest policy definitions from Activity/Resource Graph. Alert on production buckets missing lifecycle or expiration actions.",
              "z": "Table (buckets without compliant lifecycle), Pie chart (compliant vs non-compliant), Single value (non-compliant count).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, Config/Policy inventory.\n• Ensure the following data sources are available: S3 bucket lifecycle XML inventory, Azure Blob management policy JSON, AWS Config.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport bucket lifecycle configurations via API/Config daily. For Azure, ingest policy definitions from Activity/Resource Graph. Alert on production buckets missing lifecycle or expiration actions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:s3:lifecycle_inventory\"\n| stats values(rule_id) as rules, latest(has_expiration) as exp by bucket_name, region\n| where mvcount(rules)=0 OR exp=0\n| table bucket_name region rules exp\n```\n\nUnderstanding this SPL\n\n**S3 and Azure Blob Lifecycle Policy Compliance** — Confirms lifecycle rules exist per bucket/container and that transitions match tagging/age rules. Reduces cost leakage from objects stuck in hot tiers.\n\nDocumented **Data sources**: S3 bucket lifecycle XML inventory, Azure Blob management policy JSON, AWS Config. **App/TA** (typical add-on context): `Splunk_TA_aws`, `Splunk_TA_microsoft-cloudservices`, Config/Policy inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:s3:lifecycle_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:s3:lifecycle_inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by bucket_name, region** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mvcount(rules)=0 OR exp=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **S3 and Azure Blob Lifecycle Policy Compliance**): table bucket_name region rules exp\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (buckets without compliant lifecycle), Pie chart (compliant vs non-compliant), Single value (non-compliant count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you show that lifecycle rules on buckets and blobs match what you promised auditors and finance, not just what was set once in the console.",
              "mtype": [
                "Compliance",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.2.7",
              "n": "Cross-Region Replication Lag (SLA)",
              "c": "high",
              "f": "beginner",
              "v": "Tracks replication backlog and oldest replicated object age for S3 CRR and Azure geo-replication. Complements byte-level lag with time-based SLA views.",
              "t": "Cloud TAs, CloudWatch, Azure Monitor",
              "d": "S3 `OperationsPendingReplication`, Azure `GeoReplicationLag` (where available)",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" metric_name=\"OperationsPendingReplication\"\n| timechart span=1h max(Maximum) as pending_ops by bucket_name\n| where pending_ops > 100000",
              "m": "Set thresholds from RPO (e.g., pending operations or max lag minutes). Alert when backlog grows for >1h. For Azure Blob, ingest replication health metrics from Monitor diagnostics.",
              "z": "Line chart (pending replication / lag), Table (buckets breaching SLA), Single value (max lag minutes).",
              "kfp": "Lag may increase during initial baseline transfers, scheduled resyncs, large volume moves, or upstream throttling.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud TAs, CloudWatch, Azure Monitor.\n• Ensure the following data sources are available: S3 `OperationsPendingReplication`, Azure `GeoReplicationLag` (where available).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSet thresholds from RPO (e.g., pending operations or max lag minutes). Alert when backlog grows for >1h. For Azure Blob, ingest replication health metrics from Monitor diagnostics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" metric_name=\"OperationsPendingReplication\"\n| timechart span=1h max(Maximum) as pending_ops by bucket_name\n| where pending_ops > 100000\n```\n\nUnderstanding this SPL\n\n**Cross-Region Replication Lag (SLA)** — Tracks replication backlog and oldest replicated object age for S3 CRR and Azure geo-replication. Complements byte-level lag with time-based SLA views.\n\nDocumented **Data sources**: S3 `OperationsPendingReplication`, Azure `GeoReplicationLag` (where available). **App/TA** (typical add-on context): Cloud TAs, CloudWatch, Azure Monitor. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by bucket_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where pending_ops > 100000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (pending replication / lag), Table (buckets breaching SLA), Single value (max lag minutes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch replication delay against your agreed limit so you can fix links or throttling before a real outage leaves the copy too old to use.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.2.8",
              "n": "Bucket Policy Change Audit",
              "c": "critical",
              "f": "beginner",
              "v": "Unexpected bucket policy or IAM policy changes can expose data. Audit trail supports SOC2/PCI evidence and fast rollback.",
              "t": "`Splunk_TA_aws` (CloudTrail), Azure Activity Log",
              "d": "`PutBucketPolicy`, `DeleteBucketPolicy`, `SetContainerAcl` (Azure equivalents)",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventName IN (\"PutBucketPolicy\",\"DeleteBucketPolicy\",\"PutBucketAcl\")\n| table _time, requestParameters.bucketName, userIdentity.arn, sourceIPAddress, eventName\n| sort -_time",
              "m": "Ingest CloudTrail S3 and IAM policy events. Enrich with CMDB owner. Alert on changes outside change windows or from non-break-glass principals.",
              "z": "Timeline (policy changes), Table (bucket, user, action), Single value (changes last 24h).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws` (CloudTrail), Azure Activity Log.\n• Ensure the following data sources are available: `PutBucketPolicy`, `DeleteBucketPolicy`, `SetContainerAcl` (Azure equivalents).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CloudTrail S3 and IAM policy events. Enrich with CMDB owner. Alert on changes outside change windows or from non-break-glass principals.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventName IN (\"PutBucketPolicy\",\"DeleteBucketPolicy\",\"PutBucketAcl\")\n| table _time, requestParameters.bucketName, userIdentity.arn, sourceIPAddress, eventName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Bucket Policy Change Audit** — Unexpected bucket policy or IAM policy changes can expose data. Audit trail supports SOC2/PCI evidence and fast rollback.\n\nDocumented **Data sources**: `PutBucketPolicy`, `DeleteBucketPolicy`, `SetContainerAcl` (Azure equivalents). **App/TA** (typical add-on context): `Splunk_TA_aws` (CloudTrail), Azure Activity Log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Bucket Policy Change Audit**): table _time, requestParameters.bucketName, userIdentity.arn, sourceIPAddress, eventName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (policy changes), Table (bucket, user, action), Single value (changes last 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We record when bucket access rules change so you can spot mistakes, shadow IT, or an attacker opening data to the world.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.2.9",
              "n": "Pre-Signed URL Abuse Detection",
              "c": "high",
              "f": "advanced",
              "v": "Unusual volume of pre-signed GET/PUT or access from unexpected IPs may indicate credential theft or insider abuse.",
              "t": "`Splunk_TA_aws` (S3 access logs), CloudTrail data events",
              "d": "S3 server access logs with `queryString` containing `X-Amz-`, `Signature`",
              "q": "index=aws sourcetype=\"aws:s3:accesslogs\"\n| search query_string=\"*X-Amz-*\" OR query_string=\"*Signature*\"\n| stats count by bucket_name, requester, remote_ip\n| eventstats avg(count) as avg_c, stdev(count) as stdev_c by bucket_name\n| where count > avg_c + 3*stdev_c",
              "m": "Parse query string for presigned parameters. Baseline requests per requester/IP. Alert on spikes or geo anomalies. Correlate with IAM changes.",
              "z": "Table (top presigned requesters), Line chart (presigned request rate), Map (remote_ip).",
              "kfp": "Security scans, penetration tests, or partner integrations may use patterns that look unusual; validate against known projects before escalation.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws` (S3 access logs), CloudTrail data events.\n• Ensure the following data sources are available: S3 server access logs with `queryString` containing `X-Amz-`, `Signature`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse query string for presigned parameters. Baseline requests per requester/IP. Alert on spikes or geo anomalies. Correlate with IAM changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:s3:accesslogs\"\n| search query_string=\"*X-Amz-*\" OR query_string=\"*Signature*\"\n| stats count by bucket_name, requester, remote_ip\n| eventstats avg(count) as avg_c, stdev(count) as stdev_c by bucket_name\n| where count > avg_c + 3*stdev_c\n```\n\nUnderstanding this SPL\n\n**Pre-Signed URL Abuse Detection** — Unusual volume of pre-signed GET/PUT or access from unexpected IPs may indicate credential theft or insider abuse.\n\nDocumented **Data sources**: S3 server access logs with `queryString` containing `X-Amz-`, `Signature`. **App/TA** (typical add-on context): `Splunk_TA_aws` (S3 access logs), CloudTrail data events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:s3:accesslogs. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:s3:accesslogs\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by bucket_name, requester, remote_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by bucket_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > avg_c + 3*stdev_c` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top presigned requesters), Line chart (presigned request rate), Map (remote_ip).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We flag odd use of temporary download links so you can stop abuse or leaked URLs before someone pulls more data than intended.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.2.10",
              "n": "Storage Class Transition Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Validates that objects move to IA/Glacier/Archive per policy. Stalled transitions indicate rule gaps or unsupported objects.",
              "t": "S3 Inventory, Azure Blob inventory, CloudWatch storage class metrics",
              "d": "S3 Inventory reports (CSV), `BucketSizeBytes` by `StorageType`",
              "q": "index=aws sourcetype=\"aws:s3:inventory\" OR sourcetype=\"aws:cloudwatch\" metric_name=\"BucketSizeBytes\"\n| stats sum(size_bytes) as bytes by bucket_name, storage_class\n| eventstats sum(bytes) as total by bucket_name\n| eval pct=round(bytes/total*100,1)\n| where storage_class=\"STANDARD\" AND pct > 40",
              "m": "Ingest periodic inventory or CloudWatch breakdown. Compare STANDARD % vs policy targets. Report buckets with excessive STANDARD after expected transition age.",
              "z": "Stacked bar (storage class % per bucket), Table (buckets with high STANDARD %), Line chart (class mix over time).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: S3 Inventory, Azure Blob inventory, CloudWatch storage class metrics.\n• Ensure the following data sources are available: S3 Inventory reports (CSV), `BucketSizeBytes` by `StorageType`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest periodic inventory or CloudWatch breakdown. Compare STANDARD % vs policy targets. Report buckets with excessive STANDARD after expected transition age.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:s3:inventory\" OR sourcetype=\"aws:cloudwatch\" metric_name=\"BucketSizeBytes\"\n| stats sum(size_bytes) as bytes by bucket_name, storage_class\n| eventstats sum(bytes) as total by bucket_name\n| eval pct=round(bytes/total*100,1)\n| where storage_class=\"STANDARD\" AND pct > 40\n```\n\nUnderstanding this SPL\n\n**Storage Class Transition Tracking** — Validates that objects move to IA/Glacier/Archive per policy. Stalled transitions indicate rule gaps or unsupported objects.\n\nDocumented **Data sources**: S3 Inventory reports (CSV), `BucketSizeBytes` by `StorageType`. **App/TA** (typical add-on context): S3 Inventory, Azure Blob inventory, CloudWatch storage class metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:s3:inventory, aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:s3:inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by bucket_name, storage_class** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by bucket_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where storage_class=\"STANDARD\" AND pct > 40` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (storage class % per bucket), Table (buckets with high STANDARD %), Line chart (class mix over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We show when objects move between hot, warm, and archive tiers so cost and retrieval time line up with what you expect.",
              "mtype": [
                "Capacity",
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.2.11",
              "n": "Object Versioning Compliance",
              "c": "high",
              "f": "beginner",
              "v": "Buckets without versioning risk unrecoverable overwrites. Monitoring ensures critical buckets remain versioned per data policy.",
              "t": "AWS Config, Azure Policy compliance states",
              "d": "`GetBucketVersioning`, Config rule compliance",
              "q": "index=aws sourcetype=\"aws:config:rule\"\n| search configRuleName=\"*s3-bucket-versioning*\" OR resourceType=\"AWS::S3::Bucket\"\n| spath output=versioning resource.configuration.versioning.status\n| where versioning!=\"Enabled\" AND complianceType=\"NON_COMPLIANT\"\n| table resourceId, complianceType, versioning",
              "m": "Map critical buckets via lookup. Alert when versioning is suspended or never enabled on tagged buckets. Include MFA delete status in extended implementation.",
              "z": "Table (non-compliant buckets), Single value (buckets without versioning), Status grid (bucket × region).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AWS Config, Azure Policy compliance states.\n• Ensure the following data sources are available: `GetBucketVersioning`, Config rule compliance.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap critical buckets via lookup. Alert when versioning is suspended or never enabled on tagged buckets. Include MFA delete status in extended implementation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:config:rule\"\n| search configRuleName=\"*s3-bucket-versioning*\" OR resourceType=\"AWS::S3::Bucket\"\n| spath output=versioning resource.configuration.versioning.status\n| where versioning!=\"Enabled\" AND complianceType=\"NON_COMPLIANT\"\n| table resourceId, complianceType, versioning\n```\n\nUnderstanding this SPL\n\n**Object Versioning Compliance** — Buckets without versioning risk unrecoverable overwrites. Monitoring ensures critical buckets remain versioned per data policy.\n\nDocumented **Data sources**: `GetBucketVersioning`, Config rule compliance. **App/TA** (typical add-on context): AWS Config, Azure Policy compliance states. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:config:rule. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:config:rule\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Filters the current rows with `where versioning!=\"Enabled\" AND complianceType=\"NON_COMPLIANT\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Object Versioning Compliance**): table resourceId, complianceType, versioning\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant buckets), Single value (buckets without versioning), Status grid (bucket × region).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you prove that versioning is on where your policy requires it, so you can recover from mistakes and show it in an audit.",
              "mtype": [
                "Compliance",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.2.12",
              "n": "Object Lock Integrity",
              "c": "critical",
              "f": "intermediate",
              "v": "WORM/immutability protects against ransomware and deletion. Verifies Object Lock retention mode and legal hold on regulated buckets.",
              "t": "AWS Config, S3 API inventory",
              "d": "`GetObjectLockConfiguration`, Config compliance, S3 Inventory `ObjectLockEnabled`",
              "q": "index=aws sourcetype=\"aws:s3:object_lock_audit\"\n| where object_lock_enabled!=1 OR retention_mode=\"null\" OR compliance_gap=1\n| stats latest(_time) as last_check by bucket_name, region\n| table bucket_name region object_lock_enabled retention_mode compliance_gap",
              "m": "Scripted audit comparing required lock settings from lookup to actual API responses. Alert on drift or disabled lock. Log tamper-evident checksum of policy JSON if stored in Splunk.",
              "z": "Table (buckets failing lock check), Single value (drift count), Timeline (audit runs).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AWS Config, S3 API inventory.\n• Ensure the following data sources are available: `GetObjectLockConfiguration`, Config compliance, S3 Inventory `ObjectLockEnabled`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScripted audit comparing required lock settings from lookup to actual API responses. Alert on drift or disabled lock. Log tamper-evident checksum of policy JSON if stored in Splunk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:s3:object_lock_audit\"\n| where object_lock_enabled!=1 OR retention_mode=\"null\" OR compliance_gap=1\n| stats latest(_time) as last_check by bucket_name, region\n| table bucket_name region object_lock_enabled retention_mode compliance_gap\n```\n\nUnderstanding this SPL\n\n**Object Lock Integrity** — WORM/immutability protects against ransomware and deletion. Verifies Object Lock retention mode and legal hold on regulated buckets.\n\nDocumented **Data sources**: `GetObjectLockConfiguration`, Config compliance, S3 Inventory `ObjectLockEnabled`. **App/TA** (typical add-on context): AWS Config, S3 API inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:s3:object_lock_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:s3:object_lock_audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where object_lock_enabled!=1 OR retention_mode=\"null\" OR compliance_gap=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by bucket_name, region** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Object Lock Integrity**): table bucket_name region object_lock_enabled retention_mode compliance_gap\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (buckets failing lock check), Single value (drift count), Timeline (audit runs).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch that immutability and legal hold stay in place on critical buckets so ransomware or insiders cannot quietly turn protection off.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.1,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 11,
            "none": 0
          }
        },
        {
          "i": "6.3",
          "n": "Backup & Recovery",
          "u": [
            {
              "i": "6.3.1",
              "n": "Backup Job Success Rate",
              "c": "critical",
              "f": "advanced",
              "v": "Failed backups leave systems unprotected. Tracking success rate ensures recoverability and compliance with data protection policies.",
              "t": "Veeam App for Splunk, Commvault Splunk App, or scripted API input",
              "d": "Backup server job logs (job name, status, start/end time, data size)",
              "q": "index=backup sourcetype=\"veeam:job\"\n| stats count(eval(status=\"Success\")) as success, count(eval(status=\"Failed\")) as failed, count as total by job_name\n| eval success_rate=round(success/total*100,1)\n| where success_rate < 100\n| sort success_rate",
              "m": "For Veeam: use the Veeam App for Splunk or ingest via HEC from Enterprise Manager REST (`/api/v1/jobSessions`); normalize `job_name`, `result`, `end_time` fields. For Veritas NetBackup: forward master/media server syslog or use the OpsCenter REST export. Alert when `result!=Success` for jobs flagged as `backup_tier=critical` in a lookup. Throttle per `job_name` to avoid alert storms during infrastructure outages.",
              "z": "Single value (overall success rate %), Table (failed jobs with details), Bar chart (success/fail by job), Trend line (daily success rate).",
              "kfp": "Failures during DR test runs, application quiesce timeouts, or VSS provider issues can occur; confirm in the backup console whether a retry succeeded.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Veeam App for Splunk, Commvault Splunk App, or scripted API input.\n• Ensure the following data sources are available: Backup server job logs (job name, status, start/end time, data size).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFor Veeam: use the Veeam App for Splunk or ingest via HEC from Enterprise Manager REST (`/api/v1/jobSessions`); normalize `job_name`, `result`, `end_time` fields. For Veritas NetBackup: forward master/media server syslog or use the OpsCenter REST export. Alert when `result!=Success` for jobs flagged as `backup_tier=critical` in a lookup. Throttle per `job_name` to avoid alert storms during infrastructure outages.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"veeam:job\"\n| stats count(eval(status=\"Success\")) as success, count(eval(status=\"Failed\")) as failed, count as total by job_name\n| eval success_rate=round(success/total*100,1)\n| where success_rate < 100\n| sort success_rate\n```\n\nUnderstanding this SPL\n\n**Backup Job Success Rate** — Failed backups leave systems unprotected. Tracking success rate ensures recoverability and compliance with data protection policies.\n\nDocumented **Data sources**: Backup server job logs (job name, status, start/end time, data size). **App/TA** (typical add-on context): Veeam App for Splunk, Commvault Splunk App, or scripted API input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: veeam:job. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"veeam:job\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by job_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **success_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where success_rate < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (overall success rate %), Table (failed jobs with details), Bar chart (success/fail by job), Trend line (daily success rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "23",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that APRA CPS 234 23 (Incident management) is enforced — Splunk UC-6.3.1: Backup Job Success Rate.",
                  "ea": "Saved search 'UC-6.3.1' running on Backup server job logs (job name, status, start/end time, data size), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.apra.gov.au/information-security"
                },
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.08",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ASD E8 E8.08 (Regular backups) is enforced — Splunk UC-6.3.1: Backup Job Success Rate.",
                  "ea": "Saved search 'UC-6.3.1' running on Backup server job logs (job name, status, start/end time, data size), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/essential-eight/essential-eight-maturity-model"
                },
                {
                  "r": "IT-Grundschutz",
                  "v": "2023 Edition",
                  "cl": "OPS.1.1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IT-Grundschutz OPS.1.1.2 (Ordered ICT operation) is enforced — Splunk UC-6.3.1: Backup Job Success Rate.",
                  "ea": "Saved search 'UC-6.3.1' running on Backup server job logs (job name, status, start/end time, data size), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Standards-und-Zertifizierung/IT-Grundschutz/it-grundschutz_node.html"
                }
              ],
              "sapp": [
                {
                  "name": "Veeam App for Splunk",
                  "id": 7312,
                  "url": "https://splunkbase.splunk.com/app/7312",
                  "desc": "Monitoring and security dashboards for Veeam Backup job statuses and events",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/033805d0-fe3c-11ee-a32e-be99bb517a22.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/0a08b346-fe3c-11ee-9b85-7afc4dbed252.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.2",
              "n": "Backup Job Duration Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Increasing backup durations signal data growth, network congestion, or storage performance issues. Prevents backup window overruns.",
              "t": "Vendor TA",
              "d": "Backup job logs (start/end timestamps, data transferred)",
              "q": "index=backup sourcetype=\"veeam:job\" status=\"Success\"\n| eval duration_min=(end_time-start_time)/60\n| timechart span=1d avg(duration_min) as avg_duration by job_name",
              "m": "Calculate job duration from start/end timestamps. Track trend over weeks/months. Alert when duration exceeds historical average by >50%. Correlate with data volume changes.",
              "z": "Line chart (duration trend per job), Table (longest running jobs), Bar chart (avg duration by job).",
              "kfp": "Long-running backups during initial fulls, large database backups, or after data growth events can exceed the window without indicating a broken schedule.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA.\n• Ensure the following data sources are available: Backup job logs (start/end timestamps, data transferred).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCalculate job duration from start/end timestamps. Track trend over weeks/months. Alert when duration exceeds historical average by >50%. Correlate with data volume changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"veeam:job\" status=\"Success\"\n| eval duration_min=(end_time-start_time)/60\n| timechart span=1d avg(duration_min) as avg_duration by job_name\n```\n\nUnderstanding this SPL\n\n**Backup Job Duration Trending** — Increasing backup durations signal data growth, network congestion, or storage performance issues. Prevents backup window overruns.\n\nDocumented **Data sources**: Backup job logs (start/end timestamps, data transferred). **App/TA** (typical add-on context): Vendor TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: veeam:job. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"veeam:job\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by job_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (duration trend per job), Table (longest running jobs), Bar chart (avg duration by job).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.3",
              "n": "Missed Backup Detection",
              "c": "critical",
              "f": "advanced",
              "v": "A backup that doesn't run at all is worse than one that fails — it's invisible. Detection ensures no system is left unprotected.",
              "t": "Vendor TA, custom correlation",
              "d": "Backup scheduler logs, expected schedule lookup",
              "q": "| inputlookup backup_schedule.csv\n| join type=left max=1 job_name\n    [search index=backup sourcetype=\"veeam:job\" earliest=-24h\n     | stats latest(_time) as last_run by job_name]\n| where isnull(last_run) OR last_run < relative_time(now(), \"-26h\")\n| table job_name, expected_schedule, last_run",
              "m": "Maintain a lookup table of expected backup schedules. Run a scheduled search comparing expected vs actual runs. Alert when any job misses its window. Correlate with backup server health events.",
              "z": "Table (missed jobs with schedule details), Single value (number of missed jobs), Status grid (job name × date).",
              "kfp": "Failures during DR test runs, application quiesce timeouts, or VSS provider issues can occur; confirm in the backup console whether a retry succeeded.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA, custom correlation.\n• Ensure the following data sources are available: Backup scheduler logs, expected schedule lookup.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain a lookup table of expected backup schedules. Run a scheduled search comparing expected vs actual runs. Alert when any job misses its window. Correlate with backup server health events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup backup_schedule.csv\n| join type=left max=1 job_name\n    [search index=backup sourcetype=\"veeam:job\" earliest=-24h\n     | stats latest(_time) as last_run by job_name]\n| where isnull(last_run) OR last_run < relative_time(now(), \"-26h\")\n| table job_name, expected_schedule, last_run\n```\n\nUnderstanding this SPL\n\n**Missed Backup Detection** — A backup that doesn't run at all is worse than one that fails — it's invisible. Detection ensures no system is left unprotected.\n\nDocumented **Data sources**: Backup scheduler logs, expected schedule lookup. **App/TA** (typical add-on context): Vendor TA, custom correlation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(last_run) OR last_run < relative_time(now(), \"-26h\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Missed Backup Detection**): table job_name, expected_schedule, last_run\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missed jobs with schedule details), Single value (number of missed jobs), Status grid (job name × date).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.4",
              "n": "Backup Storage Capacity",
              "c": "high",
              "f": "advanced",
              "v": "Running out of backup repository space causes all backup jobs to fail. Proactive monitoring prevents cascading failures.",
              "t": "Vendor TA, scripted input",
              "d": "Backup repository/tape library capacity metrics",
              "q": "index=backup sourcetype=\"veeam:repository\"\n| eval pct_used=round(used_space/total_space*100,1)\n| where pct_used > 80\n| table repository_name, total_space_gb, used_space_gb, pct_used",
              "m": "Poll backup repository capacity via API or scripted input. Alert at 80% and 90% thresholds. Track growth rate and forecast when capacity will be exhausted using `predict`.",
              "z": "Gauge (% used per repository), Line chart (capacity trend), Table (repositories above threshold).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA, scripted input.\n• Ensure the following data sources are available: Backup repository/tape library capacity metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll backup repository capacity via API or scripted input. Alert at 80% and 90% thresholds. Track growth rate and forecast when capacity will be exhausted using `predict`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"veeam:repository\"\n| eval pct_used=round(used_space/total_space*100,1)\n| where pct_used > 80\n| table repository_name, total_space_gb, used_space_gb, pct_used\n```\n\nUnderstanding this SPL\n\n**Backup Storage Capacity** — Running out of backup repository space causes all backup jobs to fail. Proactive monitoring prevents cascading failures.\n\nDocumented **Data sources**: Backup repository/tape library capacity metrics. **App/TA** (typical add-on context): Vendor TA, scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: veeam:repository. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"veeam:repository\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pct_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct_used > 80` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Backup Storage Capacity**): table repository_name, total_space_gb, used_space_gb, pct_used\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (% used per repository), Line chart (capacity trend), Table (repositories above threshold).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full your storage is getting and give you time to add space or clean up snapshots and old data before an application or job suddenly stops working.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.5",
              "n": "Restore Test Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Backups are worthless if restores fail. Tracking restore tests ensures confidence in recoverability and satisfies audit requirements.",
              "t": "Manual/scripted input, backup TA",
              "d": "Restore test logs, manual test result entries",
              "q": "index=backup sourcetype=\"restore_test\"\n| stats latest(_time) as last_test, latest(result) as result by system_name\n| eval days_since_test=round((now()-last_test)/86400)\n| where days_since_test > 90 OR result!=\"Success\"\n| table system_name, last_test, result, days_since_test",
              "m": "Log all restore test results (automated or manual) to a dedicated index. Maintain a lookup of systems requiring quarterly restore tests. Alert when any system exceeds 90 days without a successful test.",
              "z": "Table (systems with test status), Single value (% tested in last 90d), Status grid (system × quarter).",
              "kfp": "Test restores during DR drills or compliance validation are expected and should not be treated as production incidents.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Manual/scripted input, backup TA.\n• Ensure the following data sources are available: Restore test logs, manual test result entries.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog all restore test results (automated or manual) to a dedicated index. Maintain a lookup of systems requiring quarterly restore tests. Alert when any system exceeds 90 days without a successful test.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"restore_test\"\n| stats latest(_time) as last_test, latest(result) as result by system_name\n| eval days_since_test=round((now()-last_test)/86400)\n| where days_since_test > 90 OR result!=\"Success\"\n| table system_name, last_test, result, days_since_test\n```\n\nUnderstanding this SPL\n\n**Restore Test Tracking** — Backups are worthless if restores fail. Tracking restore tests ensures confidence in recoverability and satisfies audit requirements.\n\nDocumented **Data sources**: Restore test logs, manual test result entries. **App/TA** (typical add-on context): Manual/scripted input, backup TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: restore_test. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"restore_test\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by system_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since_test** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since_test > 90 OR result!=\"Success\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Restore Test Tracking**): table system_name, last_test, result, days_since_test\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (systems with test status), Single value (% tested in last 90d), Status grid (system × quarter).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.6",
              "n": "Backup SLA Compliance",
              "c": "critical",
              "f": "advanced",
              "v": "Consolidated view of backup coverage and RPO/RTO compliance. Essential for management reporting and audit evidence.",
              "t": "Combined backup data + CMDB lookup",
              "d": "Backup job logs, CMDB/asset inventory",
              "q": "| inputlookup cmdb_systems.csv WHERE requires_backup=\"yes\"\n| join type=left max=1 system_name\n    [search index=backup sourcetype=\"veeam:job\" status=\"Success\" earliest=-7d\n     | stats latest(_time) as last_backup, max(data_size) as backup_size by system_name]\n| eval compliant=if(isnotnull(last_backup),\"Yes\",\"No\")\n| stats count(eval(compliant=\"Yes\")) as covered, count as total\n| eval coverage_pct=round(covered/total*100,1)",
              "m": "Cross-reference CMDB inventory with backup job data. Identify systems with no backup coverage. Calculate RPO compliance (time since last successful backup vs required RPO). Produce weekly executive report.",
              "z": "Single value (SLA compliance %), Table (non-compliant systems), Pie chart (covered vs uncovered), Dashboard with filters by business unit.",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Combined backup data + CMDB lookup.\n• Ensure the following data sources are available: Backup job logs, CMDB/asset inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCross-reference CMDB inventory with backup job data. Identify systems with no backup coverage. Calculate RPO compliance (time since last successful backup vs required RPO). Produce weekly executive report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup cmdb_systems.csv WHERE requires_backup=\"yes\"\n| join type=left max=1 system_name\n    [search index=backup sourcetype=\"veeam:job\" status=\"Success\" earliest=-7d\n     | stats latest(_time) as last_backup, max(data_size) as backup_size by system_name]\n| eval compliant=if(isnotnull(last_backup),\"Yes\",\"No\")\n| stats count(eval(compliant=\"Yes\")) as covered, count as total\n| eval coverage_pct=round(covered/total*100,1)\n```\n\nUnderstanding this SPL\n\n**Backup SLA Compliance** — Consolidated view of backup coverage and RPO/RTO compliance. Essential for management reporting and audit evidence.\n\nDocumented **Data sources**: Backup job logs, CMDB/asset inventory. **App/TA** (typical add-on context): Combined backup data + CMDB lookup. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (SLA compliance %), Table (non-compliant systems), Pie chart (covered vs uncovered), Dashboard with filters by business unit.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.7",
              "n": "Backup Data Volume Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks data growth rate for capacity planning of backup infrastructure. Identifies unexpected data surges early.",
              "t": "Vendor TA",
              "d": "Backup job statistics (data transferred per job)",
              "q": "index=backup sourcetype=\"veeam:job\" status=\"Success\"\n| timechart span=1d sum(data_transferred_gb) as daily_volume\n| predict daily_volume as predicted future_timespan=30",
              "m": "Sum data transferred across all backup jobs daily. Track trend and apply predictive analytics for 30/60/90-day forecasts. Compare against available repository capacity.",
              "z": "Line chart (daily backup volume with prediction), Bar chart (volume by job type), Single value (total backed up today).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA.\n• Ensure the following data sources are available: Backup job statistics (data transferred per job).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSum data transferred across all backup jobs daily. Track trend and apply predictive analytics for 30/60/90-day forecasts. Compare against available repository capacity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"veeam:job\" status=\"Success\"\n| timechart span=1d sum(data_transferred_gb) as daily_volume\n| predict daily_volume as predicted future_timespan=30\n```\n\nUnderstanding this SPL\n\n**Backup Data Volume Trending** — Tracks data growth rate for capacity planning of backup infrastructure. Identifies unexpected data surges early.\n\nDocumented **Data sources**: Backup job statistics (data transferred per job). **App/TA** (typical add-on context): Vendor TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: veeam:job. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"veeam:job\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Backup Data Volume Trending**): predict daily_volume as predicted future_timespan=30\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily backup volume with prediction), Bar chart (volume by job type), Single value (total backed up today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.8",
              "n": "Tape Library Health",
              "c": "medium",
              "f": "beginner",
              "v": "Tape media and drive failures can silently corrupt backups. Monitoring ensures long-term archival reliability.",
              "t": "SNMP TA, vendor syslog",
              "d": "Tape library logs, SNMP traps, drive error counters",
              "q": "index=backup sourcetype=\"tape_library\"\n| search media_error OR drive_error OR cleaning_required\n| stats count by library, drive_id, error_type\n| where count > 0",
              "m": "Forward tape library syslog to Splunk. Poll SNMP for drive error counters and media faults. Alert on drive errors, media faults, or cleaning cartridge expiration. Track tape media lifecycle.",
              "z": "Table (drive/media errors), Single value (drives needing attention), Timeline (error events).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA, vendor syslog.\n• Ensure the following data sources are available: Tape library logs, SNMP traps, drive error counters.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward tape library syslog to Splunk. Poll SNMP for drive error counters and media faults. Alert on drive errors, media faults, or cleaning cartridge expiration. Track tape media lifecycle.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"tape_library\"\n| search media_error OR drive_error OR cleaning_required\n| stats count by library, drive_id, error_type\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Tape Library Health** — Tape media and drive failures can silently corrupt backups. Monitoring ensures long-term archival reliability.\n\nDocumented **Data sources**: Tape library logs, SNMP traps, drive error counters. **App/TA** (typical add-on context): SNMP TA, vendor syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: tape_library. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"tape_library\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by library, drive_id, error_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (drive/media errors), Single value (drives needing attention), Timeline (error events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp",
                "syslog"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.9",
              "n": "Veeam Backup Job Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Job success/failure/warning status, duration, and data transferred are essential for backup reliability. Immediate visibility into job outcomes ensures data protection SLAs are met and enables rapid troubleshooting.",
              "t": "Custom (Veeam Enterprise Manager REST API, PowerShell output)",
              "d": "Veeam job session data (REST API or PowerShell Get-VBRSession)",
              "q": "index=backup sourcetype=\"veeam:job_session\"\n| stats latest(_time) as last_run, latest(status) as status, latest(duration_min) as duration_min, latest(data_transferred_gb) as data_gb by job_name\n| where status!=\"Success\" OR duration_min>480\n| table job_name, last_run, status, duration_min, data_gb\n| sort last_run",
              "m": "Use Veeam Enterprise Manager REST API (`/api/sessionMgr`) or PowerShell script invoking `Get-VBRSession` to collect job session data. Poll every 15–30 minutes or trigger on job completion. Extract job_name, status (Success/Failed/Warning), start/end time (for duration), and data transferred. Index to Splunk with sourcetype `veeam:job_session`. Alert immediately on status=Failed; warning on status=Warning. Alert when duration exceeds backup window (e.g., >8 hours).",
              "z": "Table (job, status, duration, data transferred), Single value (failed jobs count), Bar chart (duration by job), Status grid (job × date).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Veeam Enterprise Manager REST API, PowerShell output).\n• Ensure the following data sources are available: Veeam job session data (REST API or PowerShell Get-VBRSession).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Veeam Enterprise Manager REST API (`/api/sessionMgr`) or PowerShell script invoking `Get-VBRSession` to collect job session data. Poll every 15–30 minutes or trigger on job completion. Extract job_name, status (Success/Failed/Warning), start/end time (for duration), and data transferred. Index to Splunk with sourcetype `veeam:job_session`. Alert immediately on status=Failed; warning on status=Warning. Alert when duration exceeds backup window (e.g., >8 hours).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"veeam:job_session\"\n| stats latest(_time) as last_run, latest(status) as status, latest(duration_min) as duration_min, latest(data_transferred_gb) as data_gb by job_name\n| where status!=\"Success\" OR duration_min>480\n| table job_name, last_run, status, duration_min, data_gb\n| sort last_run\n```\n\nUnderstanding this SPL\n\n**Veeam Backup Job Monitoring** — Job success/failure/warning status, duration, and data transferred are essential for backup reliability. Immediate visibility into job outcomes ensures data protection SLAs are met and enables rapid troubleshooting.\n\nDocumented **Data sources**: Veeam job session data (REST API or PowerShell Get-VBRSession). **App/TA** (typical add-on context): Custom (Veeam Enterprise Manager REST API, PowerShell output). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: veeam:job_session. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"veeam:job_session\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by job_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status!=\"Success\" OR duration_min>480` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Veeam Backup Job Monitoring**): table job_name, last_run, status, duration_min, data_gb\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (job, status, duration, data transferred), Single value (failed jobs count), Bar chart (duration by job), Status grid (job × date).",
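              "_sketch": "Hedged real-time failure alert sketch for the 'alert immediately on status=Failed' requirement in Step 1. Schedule it every 15 minutes over the last 30 minutes and throttle on `job_name`; field names follow the main search:\n\n```spl\nindex=backup sourcetype=\"veeam:job_session\" status=\"Failed\" earliest=-30m\n| dedup job_name\n| table _time, job_name, status, duration_min, data_transferred_gb\n```",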
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "veeam"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Veeam App for Splunk",
                  "id": 7312,
                  "url": "https://splunkbase.splunk.com/app/7312",
                  "desc": "Monitoring and security dashboards for Veeam Backup job statuses and events",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/033805d0-fe3c-11ee-a32e-be99bb517a22.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/0a08b346-fe3c-11ee-9b85-7afc4dbed252.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.10",
              "n": "Backup Data Growth Rate",
              "c": "medium",
              "f": "intermediate",
              "v": "Backup repository consumption trending enables capacity planning and prevents surprise exhaustion. Proactive forecasting supports budget and procurement decisions.",
              "t": "Custom (backup software API/CLI)",
              "d": "Backup repository size over time",
              "q": "index=backup sourcetype=\"veeam:repository\" OR sourcetype=\"backup:repository\"\n| eval used_pct=round(used_bytes/capacity_bytes*100, 1)\n| timechart span=1d latest(used_bytes) as used, latest(capacity_bytes) as capacity by repository_name\n| eval used_pct=round(used/capacity*100, 1)\n| predict used as predicted future_timespan=30",
              "m": "Poll backup repository capacity via vendor API (Veeam, Commvault, etc.) or scripted input (filesystem df, REST endpoint). Collect used_bytes and capacity_bytes per repository daily. Index to Splunk. Use `predict` or `trendline` for 30/60/90-day forecasts. Alert when projected full date is within 90 days. Correlate growth rate with backup job data volume trends.",
              "z": "Line chart (repository usage % over time with prediction), Table (repositories with growth rate and ETA to full), Single value (days until first repository full).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (backup software API/CLI).\n• Ensure the following data sources are available: Backup repository size over time.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll backup repository capacity via vendor API (Veeam, Commvault, etc.) or scripted input (filesystem df, REST endpoint). Collect used_bytes and capacity_bytes per repository daily. Index to Splunk. Use `predict` or `trendline` for 30/60/90-day forecasts. Alert when projected full date is within 90 days. Correlate growth rate with backup job data volume trends.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"veeam:repository\" OR sourcetype=\"backup:repository\"\n| eval used_pct=round(used_bytes/capacity_bytes*100, 1)\n| timechart span=1d latest(used_bytes) as used, latest(capacity_bytes) as capacity by repository_name\n| eval used_pct=round(used/capacity*100, 1)\n| predict used as predicted future_timespan=30\n```\n\nUnderstanding this SPL\n\n**Backup Data Growth Rate** — Backup repository consumption trending enables capacity planning and prevents surprise exhaustion. Proactive forecasting supports budget and procurement decisions.\n\nDocumented **Data sources**: Backup repository size over time. **App/TA** (typical add-on context): Custom (backup software API/CLI). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: veeam:repository, backup:repository. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"veeam:repository\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by repository_name** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Backup Data Growth Rate**): predict used as predicted future_timespan=30\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (repository usage % over time with prediction), Table (repositories with growth rate and ETA to full), Single value (days until first repository full).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
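              "_sketch": "Hedged per-repository days-to-full sketch for the '90 days to exhaustion' alert in Step 1. It assumes roughly linear growth over the 30-day window, which is the simplification behind this estimate; field names follow the main search:\n\n```spl\nindex=backup sourcetype=\"veeam:repository\" earliest=-30d\n| stats earliest(used_bytes) as first_used, latest(used_bytes) as last_used, latest(capacity_bytes) as capacity by repository_name\n| eval daily_growth=(last_used-first_used)/30\n| eval days_to_full=if(daily_growth>0, round((capacity-last_used)/daily_growth), null())\n| where days_to_full <= 90\n| table repository_name, days_to_full, daily_growth\n```",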
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full your storage is getting and give you time to add space or clean up snapshots and old data before an application or job suddenly stops working.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.11",
              "n": "Veeam Backup Job Status Summary",
              "c": "critical",
              "f": "beginner",
              "v": "Roll-up of last job result per workload (Success/Warning/Failed/Running) for executive and NOC dashboards. Complements session-level detail with a single row per protected entity.",
              "t": "Veeam App for Splunk, Enterprise Manager API",
              "d": "`veeam:job` or `veeam:job_session` with `job_name`, `status`, `end_time`",
              "q": "index=backup sourcetype=\"veeam:job_session\"\n| stats latest(_time) as last_end, latest(status) as status, latest(duration_sec) as duration by job_name\n| where status IN (\"Failed\",\"Warning\") OR duration > 28800\n| table job_name last_end status duration",
              "m": "Schedule hourly. Map Warning to ticket for review. Escalate Failed immediately. Track Running jobs exceeding expected window as Warning.",
              "z": "Status grid (job × last status), Single value (failed count), Table (jobs needing attention).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Veeam App for Splunk, Enterprise Manager API.\n• Ensure the following data sources are available: `veeam:job` or `veeam:job_session` with `job_name`, `status`, `end_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule hourly. Map Warning to ticket for review. Escalate Failed immediately. Track Running jobs exceeding expected window as Warning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"veeam:job_session\"\n| stats latest(_time) as last_end, latest(status) as status, latest(duration_sec) as duration by job_name\n| where status IN (\"Failed\",\"Warning\") OR duration > 28800\n| table job_name last_end status duration\n```\n\nUnderstanding this SPL\n\n**Veeam Backup Job Status Summary** — Roll-up of last job result per workload (Success/Warning/Failed/Running) for executive and NOC dashboards. Complements session-level detail with a single row per protected entity.\n\nDocumented **Data sources**: `veeam:job` or `veeam:job_session` with `job_name`, `status`, `end_time`. **App/TA** (typical add-on context): Veeam App for Splunk, Enterprise Manager API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: veeam:job_session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"veeam:job_session\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by job_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status IN (\"Failed\",\"Warning\") OR duration > 28800` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Veeam Backup Job Status Summary**): table job_name last_end status duration\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (job × last status), Single value (failed count), Table (jobs needing attention).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "veeam"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Veeam App for Splunk",
                  "id": 7312,
                  "url": "https://splunkbase.splunk.com/app/7312",
                  "desc": "Monitoring and security dashboards for Veeam Backup job statuses and events",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/033805d0-fe3c-11ee-a32e-be99bb517a22.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/0a08b346-fe3c-11ee-9b85-7afc4dbed252.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.12",
              "n": "Commvault Job Completion",
              "c": "critical",
              "f": "intermediate",
              "v": "Failed or incomplete Commvault backup jobs leave subclients unprotected. Job-level success tracking is required for audit and restore confidence.",
              "t": "Commvault Splunk App, Commvault REST/CLI export",
              "d": "Commvault job history (subclient, status, error code)",
              "q": "index=backup sourcetype=\"commvault:job\"\n| where status!=\"Completed\" OR job_status=\"Failed\"\n| stats latest(_time) as last_run, latest(error_code) as err by job_name, subclient_name\n| table job_name subclient_name last_run err",
              "m": "Ingest completed job events from Commvault. Normalize status values. Alert on Failed; report Partial with same severity as policy dictates.",
              "z": "Table (failed jobs), Single value (failed jobs 24h), Bar chart (failures by error code).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Commvault Splunk App, Commvault REST/CLI export.\n• Ensure the following data sources are available: Commvault job history (subclient, status, error code).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest completed job events from Commvault. Normalize status values. Alert on Failed; report Partial with same severity as policy dictates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"commvault:job\"\n| where status!=\"Completed\" OR job_status=\"Failed\"\n| stats latest(_time) as last_run, latest(error_code) as err by job_name, subclient_name\n| table job_name subclient_name last_run err\n```\n\nUnderstanding this SPL\n\n**Commvault Job Completion** — Failed or incomplete Commvault backup jobs leave subclients unprotected. Job-level success tracking is required for audit and restore confidence.\n\nDocumented **Data sources**: Commvault job history (subclient, status, error code). **App/TA** (typical add-on context): Commvault Splunk App, Commvault REST/CLI export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: commvault:job. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"commvault:job\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status!=\"Completed\" OR job_status=\"Failed\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by job_name, subclient_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Commvault Job Completion**): table job_name subclient_name last_run err\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed jobs), Single value (failed jobs 24h), Bar chart (failures by error code).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "commvault",
                "hashicorp"
              ],
              "em": [
                "hashicorp_vault"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.13",
              "n": "Backup RPO and RTO Compliance",
              "c": "critical",
              "f": "advanced",
              "v": "Compares actual backup completion time and restore test duration against business RPO/RTO targets per application tier.",
              "t": "Backup TA + CMDB lookup",
              "d": "Last successful backup time, last restore test duration",
              "q": "| inputlookup cmdb_systems.csv WHERE backup_tier=*\n| join system_name max=0\n    [search index=backup sourcetype=\"veeam:job\" status=\"Success\" earliest=-7d\n     | stats latest(_time) as last_ok by system_name]\n| eval hours_since_ok=round((now()-last_ok)/3600,1)\n| lookup backup_rpo_hours tier OUTPUT rpo_hours\n| where hours_since_ok > rpo_hours\n| table system_name tier hours_since_ok rpo_hours",
              "m": "Maintain lookup of RPO hours per tier. Join to last successful backup. Alert when hours_since_ok exceeds RPO. Add parallel search for restore drill duration vs RTO.",
              "z": "Table (systems breaching RPO), Gauge (% RPO compliant), Line chart (hours since backup by tier).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Backup TA + CMDB lookup.\n• Ensure the following data sources are available: Last successful backup time, last restore test duration.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain lookup of RPO hours per tier. Join to last successful backup. Alert when hours_since_ok exceeds RPO. Add parallel search for restore drill duration vs RTO.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup cmdb_systems.csv WHERE backup_tier=*\n| join system_name max=0\n    [search index=backup sourcetype=\"veeam:job\" status=\"Success\" earliest=-7d\n     | stats latest(_time) as last_ok by system_name]\n| eval hours_since_ok=round((now()-last_ok)/3600,1)\n| lookup backup_rpo_hours tier OUTPUT rpo_hours\n| where hours_since_ok > rpo_hours\n| table system_name tier hours_since_ok rpo_hours\n```\n\nUnderstanding this SPL\n\n**Backup RPO and RTO Compliance** — Compares actual backup completion time and restore test duration against business RPO/RTO targets per application tier.\n\nDocumented **Data sources**: Last successful backup time, last restore test duration. **App/TA** (typical add-on context): Backup TA + CMDB lookup. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **hours_since_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where hours_since_ok > rpo_hours` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Backup RPO and RTO Compliance**): table system_name tier hours_since_ok rpo_hours\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (systems breaching RPO), Gauge (% RPO compliant), Line chart (hours since backup by tier).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.08",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.08 (Regular backups) is enforced — Splunk UC-6.3.13: Backup RPO and RTO Compliance.",
                  "ea": "Saved search 'UC-6.3.13' running on Last successful backup time, last restore test duration, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/essential-eight/essential-eight-maturity-model"
                },
                {
                  "r": "FCA SS1/21",
                  "v": "2021",
                  "cl": "§2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SS1/21 §2.1 (Set impact tolerances) is enforced — Splunk UC-6.3.13: Backup RPO and RTO Compliance.",
                  "ea": "Saved search 'UC-6.3.13' running on Last successful backup time, last restore test duration, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fca.org.uk/publication/policy/ps21-3.pdf"
                },
                {
                  "r": "FCA SS1/21",
                  "v": "2021",
                  "cl": "§3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SS1/21 §3.1 (Scenario testing) is enforced — Splunk UC-6.3.13: Backup RPO and RTO Compliance.",
                  "ea": "Saved search 'UC-6.3.13' running on Last successful backup time, last restore test duration, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fca.org.uk/publication/policy/ps21-3.pdf"
                },
                {
                  "r": "PRA SS2/21",
                  "v": "2021",
                  "cl": "§9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PRA SS2/21 §9 (Business continuity & exit plans) is enforced — Splunk UC-6.3.13: Backup RPO and RTO Compliance.",
                  "ea": "Saved search 'UC-6.3.13' running on Last successful backup time, last restore test duration, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bankofengland.co.uk/prudential-regulation/publication/2021/march/outsourcing-and-third-party-risk-management-ss"
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.14",
              "n": "Tape Library Robotics and Drive Health",
              "c": "high",
              "f": "intermediate",
              "v": "Mechanical faults, barcode read errors, and drive cleaning states cause failed backups before media errors. Dedicated robotics metrics reduce MTTR for tape operations.",
              "t": "Vendor SNMP, backup software tape events",
              "d": "Library element status, picker errors, drive cleaning required flags",
              "q": "index=backup sourcetype=\"tape_library:robot\"\n| search (robot_error OR slot_unavailable OR \"inventory failed\" OR cleaning_required=\"true\")\n| stats count by library_name, component, error_code\n| where count > 0",
              "m": "Augment generic tape syslog with SNMP polls for robotics status. Alert on inventory failures or slot errors. Schedule cleaning when `cleaning_required` is set.",
              "z": "Table (library, component, errors), Timeline (robotics faults), Single value (libraries with open faults).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor SNMP, backup software tape events.\n• Ensure the following data sources are available: Library element status, picker errors, drive cleaning required flags.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAugment generic tape syslog with SNMP polls for robotics status. Alert on inventory failures or slot errors. Schedule cleaning when `cleaning_required` is set.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"tape_library:robot\"\n| search (robot_error OR slot_unavailable OR \"inventory failed\" OR cleaning_required=\"true\")\n| stats count by library_name, component, error_code\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Tape Library Robotics and Drive Health** — Mechanical faults, barcode read errors, and drive cleaning states cause failed backups before media errors. Dedicated robotics metrics reduce MTTR for tape operations.\n\nDocumented **Data sources**: Library element status, picker errors, drive cleaning required flags. **App/TA** (typical add-on context): Vendor SNMP, backup software tape events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: tape_library:robot. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"tape_library:robot\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by library_name, component, error_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (library, component, errors), Timeline (robotics faults), Single value (libraries with open faults).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.15",
              "n": "DR Rehearsal Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Tabletop and technical DR tests must occur on schedule. Tracking rehearsal outcomes and dates supports audit and readiness scoring.",
              "t": "Custom (ITSM, spreadsheet ingest, HEC)",
              "d": "DR test results, DR runbook completion events",
              "q": "index=backup sourcetype=\"dr_rehearsal\"\n| stats latest(test_date) as last_test, latest(result) as result by system_name, scenario\n| eval days_since=round((now()-strptime(last_test,\"%Y-%m-%d\"))/86400)\n| where days_since > 365 OR result!=\"Pass\"\n| table system_name scenario last_test result days_since",
              "m": "Log each rehearsal with scenario, duration, pass/fail. Alert when annual test is overdue or result is not Pass. Correlate with actual restore tests from backup tools.",
              "z": "Table (overdue systems), Calendar (scheduled tests), Single value (% scenarios current).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (ITSM, spreadsheet ingest, HEC).\n• Ensure the following data sources are available: DR test results, DR runbook completion events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog each rehearsal with scenario, duration, pass/fail. Alert when annual test is overdue or result is not Pass. Correlate with actual restore tests from backup tools.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"dr_rehearsal\"\n| stats latest(test_date) as last_test, latest(result) as result by system_name, scenario\n| eval days_since=round((now()-strptime(last_test,\"%Y-%m-%d\"))/86400)\n| where days_since > 365 OR result!=\"Pass\"\n| table system_name scenario last_test result days_since\n```\n\nUnderstanding this SPL\n\n**DR Rehearsal Tracking** — Tabletop and technical DR tests must occur on schedule. Tracking rehearsal outcomes and dates supports audit and readiness scoring.\n\nDocumented **Data sources**: DR test results, DR runbook completion events. **App/TA** (typical add-on context): Custom (ITSM, spreadsheet ingest, HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: dr_rehearsal. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"dr_rehearsal\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by system_name, scenario** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since > 365 OR result!=\"Pass\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DR Rehearsal Tracking**): table system_name scenario last_test result days_since\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue systems), Calendar (scheduled tests), Single value (% scenarios current).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.16",
              "n": "Backup Window Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "Jobs that consume most of the backup window risk overlap with production or fail to finish. Utilization % guides schedule tuning and parallel job limits.",
              "t": "Vendor job logs",
              "d": "Job start/end, defined backup window start/end per policy",
              "q": "index=backup sourcetype=\"veeam:job\" status=\"Success\"\n| eval duration_min=(end_time-start_time)/60\n| lookup backup_policy job_name OUTPUT window_start_hour window_end_hour\n| eval window_min=(window_end_hour-window_start_hour)*60\n| eval util_pct=round(duration_min/window_min*100,1)\n| where util_pct > 85\n| table job_name duration_min window_min util_pct",
              "m": "Define backup window per policy in lookup. Compare job duration to window length. Alert when utilization >85% or job end exceeds window end.",
              "z": "Bar chart (utilization % by job), Line chart (duration trend vs window), Table (jobs at risk of overrun).",
              "kfp": "Long-running backups during initial fulls, large database backups, or after data growth events can exceed the window without indicating a broken schedule.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor job logs.\n• Ensure the following data sources are available: Job start/end, defined backup window start/end per policy.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine backup window per policy in lookup. Compare job duration to window length. Alert when utilization >85% or job end exceeds window end.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"veeam:job\" status=\"Success\"\n| eval duration_min=(end_time-start_time)/60\n| lookup backup_policy job_name OUTPUT window_start_hour window_end_hour\n| eval window_min=(window_end_hour-window_start_hour)*60\n| eval util_pct=round(duration_min/window_min*100,1)\n| where util_pct > 85\n| table job_name duration_min window_min util_pct\n```\n\nUnderstanding this SPL\n\n**Backup Window Utilization** — Jobs that consume most of the backup window risk overlap with production or fail to finish. Utilization % guides schedule tuning and parallel job limits.\n\nDocumented **Data sources**: Job start/end, defined backup window start/end per policy. **App/TA** (typical add-on context): Vendor job logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: veeam:job. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"veeam:job\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **window_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where util_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Backup Window Utilization**): table job_name duration_min window_min util_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (utilization % by job), Line chart (duration trend vs window), Table (jobs at risk of overrun).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.17",
              "n": "Incremental Backup Chain Integrity",
              "c": "critical",
              "f": "advanced",
              "v": "Broken increment chains (missing full or corrupted metadata) make restores impossible. Vendor-specific checks detect chain gaps before a failure at restore time.",
              "t": "Veeam/Commvault verification APIs, catalog exports",
              "d": "Backup chain metadata, `Verify` job results",
              "q": "index=backup sourcetype=\"backup:chain_verify\"\n| where chain_ok=0 OR missing_restore_point=1 OR verify_status=\"Failed\"\n| stats latest(_time) as last_check by job_name, vm_name\n| table job_name vm_name chain_ok missing_restore_point verify_status last_check",
              "m": "Ingest synthetic full verification or chain validation jobs. Alert on any `chain_ok=0`. Weekly full verification of random samples for large environments.",
              "z": "Table (broken chains), Single value (VMs with integrity issues), Timeline (verify jobs).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Veeam/Commvault verification APIs, catalog exports.\n• Ensure the following data sources are available: Backup chain metadata, `Verify` job results.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest synthetic full verification or chain validation jobs. Alert on any `chain_ok=0`. Weekly full verification of random samples for large environments.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"backup:chain_verify\"\n| where chain_ok=0 OR missing_restore_point=1 OR verify_status=\"Failed\"\n| stats latest(_time) as last_check by job_name, vm_name\n| table job_name vm_name chain_ok missing_restore_point verify_status last_check\n```\n\nUnderstanding this SPL\n\n**Incremental Backup Chain Integrity** — Broken increment chains (missing full or corrupted metadata) make restores impossible. Vendor-specific checks detect chain gaps before a failure at restore time.\n\nDocumented **Data sources**: Backup chain metadata, `Verify` job results. **App/TA** (typical add-on context): Veeam/Commvault verification APIs, catalog exports. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: backup:chain_verify. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"backup:chain_verify\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where chain_ok=0 OR missing_restore_point=1 OR verify_status=\"Failed\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by job_name, vm_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Incremental Backup Chain Integrity**): table job_name vm_name chain_ok missing_restore_point verify_status last_check\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (broken chains), Single value (VMs with integrity issues), Timeline (verify jobs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "commvault",
                "hashicorp",
                "veeam"
              ],
              "em": [
                "hashicorp_vault"
              ],
              "sapp": [
                {
                  "name": "Veeam App for Splunk",
                  "id": 7312,
                  "url": "https://splunkbase.splunk.com/app/7312",
                  "desc": "Monitoring and security dashboards for Veeam Backup job statuses and events",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/033805d0-fe3c-11ee-a32e-be99bb517a22.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/0a08b346-fe3c-11ee-9b85-7afc4dbed252.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.18",
              "n": "Backup Data Growth Trending by Workload",
              "c": "medium",
              "f": "intermediate",
              "v": "Per-workload front-end bytes backed up trend identifies data sprawl, VM growth, or unexpected database growth before repository exhaustion.",
              "t": "Vendor TA",
              "d": "Job statistics `data_transferred_bytes` or `processed_size` per job/run",
              "q": "index=backup sourcetype=\"veeam:job\" status=\"Success\"\n| timechart span=1d sum(data_transferred_gb) as daily_gb by job_name\n| predict daily_gb as forecast future_timespan=30",
              "m": "Sum data per job daily. Use `predict` for growth. Alert when week-over-week growth exceeds threshold (e.g., 25%). Compare to repository free space.",
              "z": "Line chart (daily GB with forecast per job), Table (fastest-growing jobs), Top values (growth %).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor TA.\n• Ensure the following data sources are available: Job statistics `data_transferred_bytes` or `processed_size` per job/run.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSum data per job daily. Use `predict` for growth. Alert when week-over-week growth exceeds threshold (e.g., 25%). Compare to repository free space.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"veeam:job\" status=\"Success\"\n| timechart span=1d sum(data_transferred_gb) as daily_gb by job_name\n| predict daily_gb as forecast future_timespan=30\n```\n\nUnderstanding this SPL\n\n**Backup Data Growth Trending by Workload** — Per-workload front-end bytes backed up trend identifies data sprawl, VM growth, or unexpected database growth before repository exhaustion.\n\nDocumented **Data sources**: Job statistics `data_transferred_bytes` or `processed_size` per job/run. **App/TA** (typical add-on context): Vendor TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: veeam:job. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"veeam:job\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by job_name** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Backup Data Growth Trending by Workload**): predict daily_gb as forecast future_timespan=30\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily GB with forecast per job), Table (fastest-growing jobs), Top values (growth %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.19",
              "n": "Windows Backup Job Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Windows Server Backup failures mean the server has no recovery point. Silent failures create a false sense of protection.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Microsoft-Windows-Backup` (EventCode 4, 5, 8, 9, 14, 17, 22)",
              "q": "index=wineventlog source=\"WinEventLog:Microsoft-Windows-Backup\"\n  EventCode IN (4, 5, 8, 9, 14)\n| eval status=case(EventCode=4,\"Backup completed\",EventCode=5,\"Backup failed\",EventCode=8,\"Backup failed (VSS)\",EventCode=9,\"Warning\",EventCode=14,\"Backup completed with warnings\")\n| table _time, host, status, EventCode, BackupTarget\n| sort -_time",
              "m": "Forward Windows Backup event logs. EventCode 4=success, 5=failure, 8=VSS failure. Alert on any backup failure (EventCode 5, 8). Also monitor for missing backups — if a server stops reporting EventCode 4, the backup job may have been disabled or deleted. Compare actual backup frequency against RTO/RPO requirements. Escalate servers with no successful backup in 48+ hours.",
              "z": "Status grid (host × backup status), Table (failures), Line chart (backup success rate over time), Single value (hours since last backup).",
              "kfp": "Compare a sample event to the same minute in the Windows Security log on the server to rule out parser or clock skew.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Microsoft-Windows-Backup` (EventCode 4, 5, 8, 9, 14, 17, 22).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Windows Backup event logs. EventCode 4=success, 5=failure, 8=VSS failure. Alert on any backup failure (EventCode 5, 8). Also monitor for missing backups — if a server stops reporting EventCode 4, the backup job may have been disabled or deleted. Compare actual backup frequency against RTO/RPO requirements. Escalate servers with no successful backup in 48+ hours.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog source=\"WinEventLog:Microsoft-Windows-Backup\"\n  EventCode IN (4, 5, 8, 9, 14)\n| eval status=case(EventCode=4,\"Backup completed\",EventCode=5,\"Backup failed\",EventCode=8,\"Backup failed (VSS)\",EventCode=9,\"Warning\",EventCode=14,\"Backup completed with warnings\")\n| table _time, host, status, EventCode, BackupTarget\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Windows Backup Job Monitoring** — Windows Server Backup failures mean the server has no recovery point. Silent failures create a false sense of protection.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Microsoft-Windows-Backup` (EventCode 4, 5, 8, 9, 14, 17, 22). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Windows Backup Job Monitoring**): table _time, host, status, EventCode, BackupTarget\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (host × backup status), Table (failures), Line chart (backup success rate over time), Single value (hours since last backup).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.20",
              "n": "Backup Target Capacity and Growth Rate",
              "c": "high",
              "f": "beginner",
              "v": "Backup destination (disk, dedup appliance, object storage) that fills up causes backup failures and retention gaps. Tracking growth and remaining capacity prevents surprise outages.",
              "t": "Backup vendor API, storage array metrics, S3/CloudWatch",
              "d": "Backup catalog size, target filesystem capacity, object storage metrics",
              "q": "index=backup sourcetype=backup_capacity\n| eval used_pct=round(used_bytes/capacity_bytes*100, 1)\n| stats latest(used_pct) as pct, latest(used_bytes) as used by target_name\n| where pct > 85\n| table target_name pct used capacity_bytes",
              "m": "Poll backup target capacity (vendor API or filesystem/object metrics). Ingest used and total. Alert at 85% (warning) and 95% (critical). Compute week-over-week growth rate for capacity planning.",
              "z": "Gauge per target, Line chart (usage % over time), Table (target, %, growth rate).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Backup vendor API, storage array metrics, S3/CloudWatch.\n• Ensure the following data sources are available: Backup catalog size, target filesystem capacity, object storage metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll backup target capacity (vendor API or filesystem/object metrics). Ingest used and total. Alert at 85% (warning) and 95% (critical). Compute week-over-week growth rate for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=backup_capacity\n| eval used_pct=round(used_bytes/capacity_bytes*100, 1)\n| stats latest(used_pct) as pct, latest(used_bytes) as used by target_name\n| where pct > 85\n| table target_name pct used capacity_bytes\n```\n\nUnderstanding this SPL\n\n**Backup Target Capacity and Growth Rate** — Backup destination (disk, dedup appliance, object storage) that fills up causes backup failures and retention gaps. Tracking growth and remaining capacity prevents surprise outages.\n\nDocumented **Data sources**: Backup catalog size, target filesystem capacity, object storage metrics. **App/TA** (typical add-on context): Backup vendor API, storage array metrics, S3/CloudWatch. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: backup_capacity. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=backup_capacity. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by target_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where pct > 85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Backup Target Capacity and Growth Rate**): table target_name pct used capacity_bytes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge per target, Line chart (usage % over time), Table (target, %, growth rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full your storage is getting and give you time to add space or clean up snapshots and old data before an application or job suddenly stops working.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.21",
              "n": "Restore Job Success and Duration Trending",
              "c": "critical",
              "f": "intermediate",
              "v": "Restore failures or abnormally long restores indicate corrupt backups, network issues, or misconfiguration. Tracking ensures recovery procedures are validated and RTO is achievable.",
              "t": "Backup vendor logs, job status API",
              "d": "Restore job status, duration, bytes restored",
              "q": "index=backup sourcetype=backup_restore job_type=restore\n| bin _time span=1d\n| stats count(eval(status=\"failed\")) as failures, count(eval(status=\"success\")) as success, avg(duration_sec) as avg_duration by job_name, _time\n| eval fail_rate=round(failures/(failures+success)*100, 1)\n| where failures > 0 OR avg_duration > 3600",
              "m": "Ingest restore job completion events. Track success/failure and duration. Alert on any restore failure. Baseline restore duration by job type; alert when duration exceeds 2x baseline. Run periodic test restores and log results.",
              "z": "Table (job, success, failures, avg duration), Line chart (restore duration trend), Single value (last 7d fail rate).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Backup vendor logs, job status API.\n• Ensure the following data sources are available: Restore job status, duration, bytes restored.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest restore job completion events. Track success/failure and duration. Alert on any restore failure. Baseline restore duration by job type; alert when duration exceeds 2x baseline. Run periodic test restores and log results.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=backup_restore job_type=restore\n| bin _time span=1d\n| stats count(eval(status=\"failed\")) as failures, count(eval(status=\"success\")) as success, avg(duration_sec) as avg_duration by job_name, _time\n| eval fail_rate=round(failures/(failures+success)*100, 1)\n| where failures > 0 OR avg_duration > 3600\n```\n\nUnderstanding this SPL\n\n**Restore Job Success and Duration Trending** — Restore failures or abnormally long restores indicate corrupt backups, network issues, or misconfiguration. Tracking ensures recovery procedures are validated and RTO is achievable.\n\nDocumented **Data sources**: Restore job status, duration, bytes restored. **App/TA** (typical add-on context): Backup vendor logs, job status API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: backup_restore. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=backup_restore. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by job_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failures > 0 OR avg_duration > 3600` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (job, success, failures, avg duration), Line chart (restore duration trend), Single value (last 7d fail rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.22",
              "n": "Backup Job Overlap and Schedule Conflict Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Overlapping full backups or too many concurrent jobs overload the backup infrastructure and extend backup windows. Detecting overlap supports schedule tuning and resource sizing.",
              "t": "Backup vendor logs or API",
              "d": "Backup job start/end timestamps, job type (full/incremental)",
              "q": "index=backup sourcetype=backup_job\n| eval start_epoch=_time end_epoch=_time+duration_sec\n| stats values(job_name) as jobs by host, _time\n| where mvcount(jobs) > 3\n| table _time host jobs",
              "m": "Ingest job start and duration. For each time window, count concurrent jobs per host or media server. Alert when more than N full backups run concurrently or when backup window is exceeded.",
              "z": "Timeline (jobs by start/end), Table (overlapping jobs), Single value (max concurrent).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Backup vendor logs or API.\n• Ensure the following data sources are available: Backup job start/end timestamps, job type (full/incremental).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest job start and duration. For each time window, count concurrent jobs per host or media server. Alert when more than N full backups run concurrently or when backup window is exceeded.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=backup_job\n| eval start_epoch=_time end_epoch=_time+duration_sec\n| stats values(job_name) as jobs by host, _time\n| where mvcount(jobs) > 3\n| table _time host jobs\n```\n\nUnderstanding this SPL\n\n**Backup Job Overlap and Schedule Conflict Detection** — Overlapping full backups or too many concurrent jobs overload the backup infrastructure and extend backup windows. Detecting overlap supports schedule tuning and resource sizing.\n\nDocumented **Data sources**: Backup job start/end timestamps, job type (full/incremental). **App/TA** (typical add-on context): Backup vendor logs or API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: backup_job. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=backup_job. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **start_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mvcount(jobs) > 3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Backup Job Overlap and Schedule Conflict Detection**): table _time host jobs\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (jobs by start/end), Table (overlapping jobs), Single value (max concurrent).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.23",
              "n": "Immutable Backup and Ransomware Recovery Readiness",
              "c": "critical",
              "f": "advanced",
              "v": "Immutable or air-gapped copies are the last line of defense against ransomware. Verifying immutability and recovery procedure readiness ensures backups cannot be deleted or encrypted by an attacker.",
              "t": "Backup vendor API, object lock compliance check",
              "d": "Backup copy retention lock status, object lock (S3), backup integrity checksum",
              "q": "index=backup sourcetype=backup_immutable\n| stats latest(immutable_ok) as ok, latest(last_checksum_verify) as last_verify by copy_name\n| where ok != 1 OR (now()-last_verify) > 604800\n| table copy_name ok last_verify",
              "m": "Poll backup copy configuration for retention lock or immutable flag. Optionally run periodic checksum or catalog validation. Alert when any critical copy is not immutable or when last verification is older than 7 days. Document and test recovery runbook.",
              "z": "Status grid (copy, immutable, last verify), Table (non-compliant copies), Single value (ready for recovery %).",
              "kfp": "Patch cycles, clean-up scripts, and archive jobs can spike deletes or renames; correlate with change tickets before treating as ransomware.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Backup vendor API, object lock compliance check.\n• Ensure the following data sources are available: Backup copy retention lock status, object lock (S3), backup integrity checksum.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll backup copy configuration for retention lock or immutable flag. Optionally run periodic checksum or catalog validation. Alert when any critical copy is not immutable or when last verification is older than 7 days. Document and test recovery runbook.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=backup_immutable\n| stats latest(immutable_ok) as ok, latest(last_checksum_verify) as last_verify by copy_name\n| where ok != 1 OR (now()-last_verify) > 604800\n| table copy_name ok last_verify\n```\n\nUnderstanding this SPL\n\n**Immutable Backup and Ransomware Recovery Readiness** — Immutable or air-gapped copies are the last line of defense against ransomware. Verifying immutability and recovery procedure readiness ensures backups cannot be deleted or encrypted by an attacker.\n\nDocumented **Data sources**: Backup copy retention lock status, object lock (S3), backup integrity checksum. **App/TA** (typical add-on context): Backup vendor API, object lock compliance check. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: backup_immutable. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=backup_immutable. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by copy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ok != 1 OR (now()-last_verify) > 604800` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Immutable Backup and Ransomware Recovery Readiness**): table copy_name ok last_verify\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (copy, immutable, last verify), Table (non-compliant copies), Single value (ready for recovery %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 23 (Incident management) is enforced — Splunk UC-6.3.23: Immutable Backup and Ransomware Recovery Readiness.",
                  "ea": "Saved search 'UC-6.3.23' running on Backup copy retention lock status, object lock (S3), backup integrity checksum, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.apra.gov.au/information-security"
                },
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.08",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.08 (Regular backups) is enforced — Splunk UC-6.3.23: Immutable Backup and Ransomware Recovery Readiness.",
                  "ea": "Saved search 'UC-6.3.23' running on Backup copy retention lock status, object lock (S3), backup integrity checksum, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/essential-eight/essential-eight-maturity-model"
                },
                {
                  "r": "PRA SS2/21",
                  "v": "2021",
                  "cl": "§9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PRA SS2/21 §9 (Business continuity & exit plans) is enforced — Splunk UC-6.3.23: Immutable Backup and Ransomware Recovery Readiness.",
                  "ea": "Saved search 'UC-6.3.23' running on Backup copy retention lock status, object lock (S3), backup integrity checksum, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bankofengland.co.uk/prudential-regulation/publication/2021/march/outsourcing-and-third-party-risk-management-ss"
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.3.24",
              "n": "Tape Library Slot Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "Tape library capacity and media expiration tracking prevent backup failures when slots are exhausted or tapes expire. Supports capacity planning and media lifecycle management.",
              "t": "Custom scripted input (tape library SNMP, vendor API)",
              "d": "Tape library management interface (SNMP/API)",
              "q": "index=backup sourcetype=\"tape_library:capacity\"\n| eval slot_util_pct=round(slots_used/total_slots*100, 1)\n| eval media_expiring_30d=if(media_expiration_days<=30, 1, 0)\n| stats latest(slot_util_pct) as pct_used, latest(slots_used) as used, latest(total_slots) as total, sum(media_expiring_30d) as expiring_soon by library_name\n| where pct_used > 85 OR expiring_soon > 0\n| table library_name, used, total, pct_used, expiring_soon",
              "m": "Poll tape library via SNMP (MIB-II, vendor-specific MIBs for slot counts) or vendor REST/CLI API. Collect total_slots, slots_used, and optionally media expiration dates. Run scripted input every 1–4 hours. Index to Splunk. Alert when slot utilization exceeds 85% or when media expiring within 30 days is detected. Maintain lookup of media barcodes and expiration for lifecycle tracking.",
              "z": "Gauge (slot utilization % per library), Table (libraries with slot counts and expiring media), Line chart (slot usage trend), Single value (libraries near capacity).",
              "kfp": "Slot counts change during vaulting, import/export jobs, or library reconfiguration; compare to the operator console before treating as data loss.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (tape library SNMP, vendor API).\n• Ensure the following data sources are available: Tape library management interface (SNMP/API).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll tape library via SNMP (MIB-II, vendor-specific MIBs for slot counts) or vendor REST/CLI API. Collect total_slots, slots_used, and optionally media expiration dates. Run scripted input every 1–4 hours. Index to Splunk. Alert when slot utilization exceeds 85% or when media expiring within 30 days is detected. Maintain lookup of media barcodes and expiration for lifecycle tracking.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"tape_library:capacity\"\n| eval slot_util_pct=round(slots_used/total_slots*100, 1)\n| eval media_expiring_30d=if(media_expiration_days<=30, 1, 0)\n| stats latest(slot_util_pct) as pct_used, latest(slots_used) as used, latest(total_slots) as total, sum(media_expiring_30d) as expiring_soon by library_name\n| where pct_used > 85 OR expiring_soon > 0\n| table library_name, used, total, pct_used, expiring_soon\n```\n\nUnderstanding this SPL\n\n**Tape Library Slot Utilization** — Tape library capacity and media expiration tracking prevent backup failures when slots are exhausted or tapes expire. Supports capacity planning and media lifecycle management.\n\nDocumented **Data sources**: Tape library management interface (SNMP/API). **App/TA** (typical add-on context): Custom scripted input (tape library SNMP, vendor API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: tape_library:capacity. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"tape_library:capacity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **slot_util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **media_expiring_30d** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by library_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where pct_used > 85 OR expiring_soon > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Tape Library Slot Utilization**): table library_name, used, total, pct_used, expiring_soon\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Gauge (slot utilization % per library), Table (libraries with slot counts and expiring media), Line chart (slot usage trend), Single value (libraries near capacity).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full your storage is getting and give you time to add space or clean up snapshots and old data before an application or job suddenly stops working.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.2,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 24,
            "none": 0
          }
        },
        {
          "i": "6.4",
          "n": "File Services",
          "u": [
            {
              "i": "6.4.1",
              "n": "File Access Audit",
              "c": "high",
              "f": "beginner",
              "v": "Provides full audit trail of file access for compliance (SOX, HIPAA, PCI-DSS). Enables investigation of data breaches and unauthorized access.",
              "t": "`Splunk_TA_windows` (Security Event Log)",
              "d": "Windows Security Event Log (Event ID 4663 — object access), NFS access logs",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4663\n| stats count by Account_Name, ObjectName, AccessMask\n| sort -count\n| head 50",
              "m": "Enable \"Audit Object Access\" via GPO on file servers. Configure SACLs on sensitive folders. Forward Security logs via Universal Forwarder. Filter high-volume events to focus on sensitive paths.",
              "z": "Table (user, file, access type, count), Bar chart (top accessed files), Timeline (access events for specific files).",
              "kfp": "Backup jobs, scanners, and legitimate mass access can resemble policy violations; narrow paths and service accounts using a lookup.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Security Event Log).\n• Ensure the following data sources are available: Windows Security Event Log (Event ID 4663 — object access), NFS access logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable \"Audit Object Access\" via GPO on file servers. Configure SACLs on sensitive folders. Forward Security logs via Universal Forwarder. Filter high-volume events to focus on sensitive paths.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4663\n| stats count by Account_Name, ObjectName, AccessMask\n| sort -count\n| head 50\n```\n\nUnderstanding this SPL\n\n**File Access Audit** — Provides full audit trail of file access for compliance (SOX, HIPAA, PCI-DSS). Enables investigation of data breaches and unauthorized access.\n\nDocumented **Data sources**: Windows Security Event Log (Event ID 4663 — object access), NFS access logs. **App/TA** (typical add-on context): `Splunk_TA_windows` (Security Event Log). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Account_Name, ObjectName, AccessMask** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, file, access type, count), Bar chart (top accessed files), Timeline (access events for specific files).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see who touched important files, so you can show auditors a trail and react faster if someone reads or changes what they should not.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.4.2",
              "n": "Ransomware Indicator Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Ransomware causes mass file encryption in minutes. Detecting the pattern early can limit damage by triggering automated isolation.",
              "t": "`Splunk_TA_windows`, custom alert logic",
              "d": "Windows Security Event Log (4663, 4656, 4659 — file create/modify/delete)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4663\n| bucket _time span=1m\n| stats dc(ObjectName) as unique_files count by Account_Name, _time\n| where unique_files > 100 AND count > 500",
              "m": "Enable file audit logging on critical file shares. Create high-urgency alert for mass file modification patterns (>100 unique files modified by one user in 1 minute). Integrate with SOAR for automated account disable/network isolation.",
              "z": "Single value (files modified per minute — current), Line chart (modification rate over time), Table (users with anomalous activity).",
              "kfp": "Patch cycles, clean-up scripts, and archive jobs can spike deletes or renames; correlate with change tickets before treating as ransomware.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, custom alert logic.\n• Ensure the following data sources are available: Windows Security Event Log (4663, 4656, 4659 — file create/modify/delete).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable file audit logging on critical file shares. Create high-urgency alert for mass file modification patterns (>100 unique files modified by one user in 1 minute). Integrate with SOAR for automated account disable/network isolation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4663\n| bucket _time span=1m\n| stats dc(ObjectName) as unique_files count by Account_Name, _time\n| where unique_files > 100 AND count > 500\n```\n\nUnderstanding this SPL\n\n**Ransomware Indicator Detection** — Ransomware causes mass file encryption in minutes. Detecting the pattern early can limit damage by triggering automated isolation.\n\nDocumented **Data sources**: Windows Security Event Log (4663, 4656, 4659 — file create/modify/delete). **App/TA** (typical add-on context): `Splunk_TA_windows`, custom alert logic. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by Account_Name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_files > 100 AND count > 500` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (files modified per minute — current), Line chart (modification rate over time), Table (users with anomalous activity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see who touched important files, so you can show auditors a trail and react faster if someone reads or changes what they should not.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.4.3",
              "n": "DFS Replication Health",
              "c": "high",
              "f": "intermediate",
              "v": "DFS-R backlog and conflicts indicate replication failures that can lead to data inconsistency and user complaints.",
              "t": "`Splunk_TA_windows` (DFS-R event logs)",
              "d": "DFS Replication event log (Event IDs 4012, 4302, 4304, 5002, 5008)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:DFS Replication\"\n| search EventCode=4302 OR EventCode=4304 OR EventCode=5002\n| stats count by EventCode, ComputerName, ReplicationGroupName\n| sort -count",
              "m": "Forward DFS Replication event logs from all DFS servers. Monitor backlog size via `dfsrdiag backlog` scripted input. Alert on replication conflicts and high backlog counts. Track resolution time.",
              "z": "Table (replication groups with backlog), Line chart (backlog trend), Single value (total conflicts today).",
              "kfp": "Lag spikes during initial baseline sync, large data transfers, or scheduled resync windows. A short red window during an approved change is normal if the relationship returns to healthy afterward.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (DFS-R event logs).\n• Ensure the following data sources are available: DFS Replication event log (Event IDs 4012, 4302, 4304, 5002, 5008).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward DFS Replication event logs from all DFS servers. Monitor backlog size via `dfsrdiag backlog` scripted input. Alert on replication conflicts and high backlog counts. Track resolution time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:DFS Replication\"\n| search EventCode=4302 OR EventCode=4304 OR EventCode=5002\n| stats count by EventCode, ComputerName, ReplicationGroupName\n| sort -count\n```\n\nUnderstanding this SPL\n\n**DFS Replication Health** — DFS-R backlog and conflicts indicate replication failures that can lead to data inconsistency and user complaints.\n\nDocumented **Data sources**: DFS Replication event log (Event IDs 4012, 4302, 4304, 5002, 5008). **App/TA** (typical add-on context): `Splunk_TA_windows` (DFS-R event logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:DFS Replication. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:DFS Replication\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by EventCode, ComputerName, ReplicationGroupName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (replication groups with backlog), Line chart (backlog trend), Single value (total conflicts today).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track copy and mirror health so a planned outage or a bad link does not leave you with an old or broken remote copy when you need it most.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.4.4",
              "n": "Share Permission Changes",
              "c": "high",
              "f": "beginner",
              "v": "Unauthorized permission changes can expose sensitive data. Change detection supports compliance and security posture.",
              "t": "`Splunk_TA_windows`",
              "d": "Windows Security Event Log (Event IDs 4670 — permissions changed, 5143 — share modified)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4670 OR EventCode=5143\n| table _time, Account_Name, ObjectName, ObjectServer, ProcessName\n| sort -_time",
              "m": "Enable \"Audit Policy Change\" and \"Audit File System\" via GPO. Forward Security events from file servers. Alert on any permission change to critical shares. Correlate with change management tickets.",
              "z": "Table (permission changes with details), Timeline (change events), Bar chart (changes by user).",
              "kfp": "Backup jobs, scanners, and legitimate mass access can resemble policy violations; narrow paths and service accounts using a lookup.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Windows Security Event Log (Event IDs 4670 — permissions changed, 5143 — share modified).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable \"Audit Policy Change\" and \"Audit File System\" via GPO. Forward Security events from file servers. Alert on any permission change to critical shares. Correlate with change management tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4670 OR EventCode=5143\n| table _time, Account_Name, ObjectName, ObjectServer, ProcessName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Share Permission Changes** — Unauthorized permission changes can expose sensitive data. Change detection supports compliance and security posture.\n\nDocumented **Data sources**: Windows Security Event Log (Event IDs 4670 — permissions changed, 5143 — share modified). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Share Permission Changes**): table _time, Account_Name, ObjectName, ObjectServer, ProcessName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (permission changes with details), Timeline (change events), Bar chart (changes by user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when who can reach a file share changes, so permission drift does not quietly open a sensitive folder to the whole company.",
              "mtype": [
                "Security",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.4.5",
              "n": "Large File Transfer Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Unusually large file copies may indicate data exfiltration. Detection supports data loss prevention and insider threat programs.",
              "t": "`Splunk_TA_windows`, network flow data",
              "d": "Windows file audit logs, SMB session logs",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4663 AccessMask=\"0x1\"\n| stats sum(Size) as total_bytes, dc(ObjectName) as file_count by Account_Name, src\n| eval total_gb=round(total_bytes/1024/1024/1024,2)\n| where total_gb > 1\n| sort -total_gb",
              "m": "Monitor file read events and correlate with SMB session data for volume estimates. Baseline normal transfer patterns per user. Alert when transfers exceed threshold (e.g., >1GB in single session). Correlate with HR/departure lists.",
              "z": "Table (users with large transfers), Bar chart (transfer volume by user), Line chart (daily transfer volume trend).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, network flow data.\n• Ensure the following data sources are available: Windows file audit logs, SMB session logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor file read events and correlate with SMB session data for volume estimates. Baseline normal transfer patterns per user. Alert when transfers exceed threshold (e.g., >1GB in single session). Correlate with HR/departure lists.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4663 AccessMask=\"0x1\"\n| stats sum(Size) as total_bytes, dc(ObjectName) as file_count by Account_Name, src\n| eval total_gb=round(total_bytes/1024/1024/1024,2)\n| where total_gb > 1\n| sort -total_gb\n```\n\nUnderstanding this SPL\n\n**Large File Transfer Detection** — Unusually large file copies may indicate data exfiltration. Detection supports data loss prevention and insider threat programs.\n\nDocumented **Data sources**: Windows file audit logs, SMB session logs. **App/TA** (typical add-on context): `Splunk_TA_windows`, network flow data. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Account_Name, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where total_gb > 1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users with large transfers), Bar chart (transfer volume by user), Line chart (daily transfer volume trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see who touched important files, so you can show auditors a trail and react faster if someone reads or changes what they should not.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.4.6",
              "n": "Backup Encryption and Key Access Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Backup encryption keys must be used only by authorized backup jobs. Unusual key access or decryption attempts may indicate theft or ransomware. Auditing supports compliance and incident response.",
              "t": "Backup vendor logs, KMS/HSM audit logs",
              "d": "Backup software audit log, AWS KMS CloudTrail, Azure Key Vault audit",
              "q": "index=backup sourcetype=backup_audit (event=\"key_access\" OR event=\"decrypt\")\n| bin _time span=1h\n| stats count by user, key_id, event, _time\n| where count > 20\n| sort -count",
              "m": "Forward backup software audit logs and cloud KMS/key vault audit logs. Extract key ID, user, and action. Alert on high volume of decrypt or key access from unexpected principal or outside backup window.",
              "z": "Table (user, key, count), Timeline of key access, Bar chart by principal.",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Backup vendor logs, KMS/HSM audit logs.\n• Ensure the following data sources are available: Backup software audit log, AWS KMS CloudTrail, Azure Key Vault audit.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward backup software audit logs and cloud KMS/key vault audit logs. Extract key ID, user, and action. Alert on high volume of decrypt or key access from unexpected principal or outside backup window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=backup_audit (event=\"key_access\" OR event=\"decrypt\")\n| bin _time span=1h\n| stats count by user, key_id, event, _time\n| where count > 20\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Backup Encryption and Key Access Audit** — Backup encryption keys must be used only by authorized backup jobs. Unusual key access or decryption attempts may indicate theft or ransomware. Auditing supports compliance and incident response.\n\nDocumented **Data sources**: Backup software audit log, AWS KMS CloudTrail, Azure Key Vault audit. **App/TA** (typical add-on context): Backup vendor logs, KMS/HSM audit logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: backup_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=backup_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by user, key_id, event, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, key, count), Timeline of key access, Bar chart by principal.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see which backup runs finished cleanly and which did not, so you are not caught thinking data was protected when a job really failed or stopped early.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.4.12",
              "n": "DFS Replication Backlog and Connectivity Health",
              "c": "high",
              "f": "intermediate",
              "v": "Backlog size and partner connectivity state predict replication stalls before user-visible file divergence. Complements event-only monitoring with quantitative backlog trending.",
              "t": "`Splunk_TA_windows`, scripted `dfsrdiag` / WMI",
              "d": "DFS-R backlog metrics per replicated folder, Event ID 4012/5002",
              "q": "index=storage sourcetype=\"dfsr:backlog\"\n| where backlog_files > 100 OR connected=0\n| timechart span=15m max(backlog_files) as backlog by replication_group, member\n| where backlog > 500",
              "m": "Ingest backlog count from PowerShell `Get-DfsrState` or scheduled dfsrdiag output every 15m. Alert on rising backlog trend or disconnected partners.",
              "z": "Line chart (backlog files over time), Table (RG, member, backlog), Single value (max backlog).",
              "kfp": "Lag spikes during initial baseline sync, large data transfers, or scheduled resync windows. A short red window during an approved change is normal if the relationship returns to healthy afterward.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, scripted `dfsrdiag` / WMI.\n• Ensure the following data sources are available: DFS-R backlog metrics per replicated folder, Event ID 4012/5002.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest backlog count from PowerShell `Get-DfsrState` or scheduled dfsrdiag output every 15m. Alert on rising backlog trend or disconnected partners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"dfsr:backlog\"\n| where backlog_files > 100 OR connected=0\n| timechart span=15m max(backlog_files) as backlog by replication_group, member\n| where backlog > 500\n```\n\nUnderstanding this SPL\n\n**DFS Replication Backlog and Connectivity Health** — Backlog size and partner connectivity state predict replication stalls before user-visible file divergence. Complements event-only monitoring with quantitative backlog trending.\n\nDocumented **Data sources**: DFS-R backlog metrics per replicated folder, Event ID 4012/5002. **App/TA** (typical add-on context): `Splunk_TA_windows`, scripted `dfsrdiag` / WMI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: dfsr:backlog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"dfsr:backlog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where backlog_files > 100 OR connected=0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by replication_group, member** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where backlog > 500` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (backlog files over time), Table (RG, member, backlog), Single value (max backlog).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track copy and mirror health so a planned outage or a bad link does not leave you with an old or broken remote copy when you need it most.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.4.13",
              "n": "NFS Export Capacity and Client Load",
              "c": "medium",
              "f": "intermediate",
              "v": "Export-level capacity and NFS operations/sec highlight hot exports and approaching full filesystems on NAS heads.",
              "t": "NetApp/Isilon API, Linux `nfsstat`, `exportfs -v` metrics",
              "d": "Per-export bytes used, NFS op counters",
              "q": "index=storage sourcetype=\"nas:nfs_export\"\n| eval used_pct=round(used_bytes/capacity_bytes*100,1)\n| timechart span=5m sum(ops_per_sec) as ops, avg(used_pct) as pct by export_path, host\n| where pct > 85 OR ops > 10000",
              "m": "Poll export statistics from NAS API or aggregated nfsd metrics. Alert on high used % or abnormal ops vs baseline.",
              "z": "Table (export, used %, ops/s), Line chart (ops and capacity trend), Bar chart (top exports by ops).",
              "kfp": "Export space and load can jump during backup through NFS, migration, or VM storage vMotion-style activity on the same paths.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NetApp/Isilon API, Linux `nfsstat`, `exportfs -v` metrics.\n• Ensure the following data sources are available: Per-export bytes used, NFS op counters.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll export statistics from NAS API or aggregated nfsd metrics. Alert on high used % or abnormal ops vs baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"nas:nfs_export\"\n| eval used_pct=round(used_bytes/capacity_bytes*100,1)\n| timechart span=5m sum(ops_per_sec) as ops, avg(used_pct) as pct by export_path, host\n| where pct > 85 OR ops > 10000\n```\n\nUnderstanding this SPL\n\n**NFS Export Capacity and Client Load** — Export-level capacity and NFS operations/sec highlight hot exports and approaching full filesystems on NAS heads.\n\nDocumented **Data sources**: Per-export bytes used, NFS op counters. **App/TA** (typical add-on context): NetApp/Isilon API, Linux `nfsstat`, `exportfs -v` metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: nas:nfs_export. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"nas:nfs_export\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by export_path, host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where pct > 85 OR ops > 10000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (export, used %, ops/s), Line chart (ops and capacity trend), Bar chart (top exports by ops).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full your storage is getting and give you time to add space or clean up snapshots and old data before an application or job suddenly stops working.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "dell_emc",
                "hardware_bmc",
                "netapp"
              ],
              "em": [
                "hardware_bmc_ilo"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.4.14",
              "n": "SMB Share Access Audit",
              "c": "high",
              "f": "beginner",
              "v": "Summarizes successful and denied access to sensitive shares for insider threat and access reviews. Extends object-level 4663 views with share-level rollups.",
              "t": "`Splunk_TA_windows`",
              "d": "Security Event ID 5140 (share accessed), 4663 for sensitive paths",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5140\n| stats count by Share_Name, Account_Name, ComputerName\n| where count > 1000\n| sort -count",
              "m": "Enable share auditing on critical shares. Tune volume to avoid noise; focus on privileged groups. Alert on access to “restricted” shares from unexpected subnets via lookup.",
              "z": "Table (share, user, count), Bar chart (top shares by access count), Heatmap (share × hour).",
              "kfp": "Backup jobs, scanners, and legitimate mass access can resemble policy violations; narrow paths and service accounts using a lookup.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Security Event ID 5140 (share accessed), 4663 for sensitive paths.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable share auditing on critical shares. Tune volume to avoid noise; focus on privileged groups. Alert on access to “restricted” shares from unexpected subnets via lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5140\n| stats count by Share_Name, Account_Name, ComputerName\n| where count > 1000\n| sort -count\n```\n\nUnderstanding this SPL\n\n**SMB Share Access Audit** — Summarizes successful and denied access to sensitive shares for insider threat and access reviews. Extends object-level 4663 views with share-level rollups.\n\nDocumented **Data sources**: Security Event ID 5140 (share accessed), 4663 for sensitive paths. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Share_Name, Account_Name, ComputerName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 1000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (share, user, count), Bar chart (top shares by access count), Heatmap (share × hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see when who can reach a file share changes, so permission drift does not quietly open a sensitive folder to the whole company.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.4.15",
              "n": "File Server Capacity Trending",
              "c": "high",
              "f": "beginner",
              "v": "Volume-level free space trending on Windows file servers prevents user and application outages from full disks.",
              "t": "`Splunk_TA_windows` (PerfDisk), scripted `Get-Volume`",
              "d": "Logical disk free MB/%, WMI volume metrics",
              "q": "index=os sourcetype=\"Perfmon:LogicalDisk\" counter=\"% Free Space\"\n| timechart span=1h latest(InstanceValue) as free_pct by host, instance\n| where free_pct < 15",
              "m": "Collect % Free Space every 5–15m. Alert at 15% (warning) and 10% (critical). Use `predict` on large shares for procurement lead time.",
              "z": "Line chart (free % trend), Gauge (current free %), Table (volumes below threshold).",
              "kfp": "Capacity may temporarily dip during indexing, antivirus full scans, or large file copies; compare to scheduled jobs on the host.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (PerfDisk), scripted `Get-Volume`.\n• Ensure the following data sources are available: Logical disk free MB/%, WMI volume metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect % Free Space every 5–15m. Alert at 15% (warning) and 10% (critical). Use `predict` on large shares for procurement lead time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=\"Perfmon:LogicalDisk\" counter=\"% Free Space\"\n| timechart span=1h latest(InstanceValue) as free_pct by host, instance\n| where free_pct < 15\n```\n\nUnderstanding this SPL\n\n**File Server Capacity Trending** — Volume-level free space trending on Windows file servers prevents user and application outages from full disks.\n\nDocumented **Data sources**: Logical disk free MB/%, WMI volume metrics. **App/TA** (typical add-on context): `Splunk_TA_windows` (PerfDisk), scripted `Get-Volume`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: Perfmon:LogicalDisk. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=\"Perfmon:LogicalDisk\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by host, instance** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where free_pct < 15` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (free % trend), Gauge (current free %), Table (volumes below threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full your storage is getting and give you time to add space or clean up snapshots and old data before an application or job suddenly stops working.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.4.16",
              "n": "Ransomware File Extension Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects mass renames or creates with known ransomware extensions (e.g., `.locked`, `.encrypted`) faster than generic mass-modify heuristics in some campaigns.",
              "t": "`Splunk_TA_windows`, EDR feeds",
              "d": "File create/rename events 4663 with ObjectName ending in suspicious extensions",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4663\n| rex field=ObjectName \"(?i)\\.(locked|encrypted|crypt|ryuk|lockbit)(\\\"|$)\"\n| stats dc(ObjectName) as files count by Account_Name, host\n| where files > 20",
              "m": "Maintain lookup of ransomware extensions from threat intel. Combine with mass-delete and entropy signals. Integrate SOAR for host isolation.",
              "z": "Table (user, host, files affected), Timeline (detection), Single value (distinct suspicious files).",
              "kfp": "Patch cycles, clean-up scripts, and archive jobs can spike deletes or renames; correlate with change tickets before treating as ransomware.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, EDR feeds.\n• Ensure the following data sources are available: File create/rename events 4663 with ObjectName ending in suspicious extensions.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain lookup of ransomware extensions from threat intel. Combine with mass-delete and entropy signals. Integrate SOAR for host isolation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4663\n| rex field=ObjectName \"(?i)\\.(locked|encrypted|crypt|ryuk|lockbit)(\\\"|$)\"\n| stats dc(ObjectName) as files count by Account_Name, host\n| where files > 20\n```\n\nUnderstanding this SPL\n\n**Ransomware File Extension Detection** — Detects mass renames or creates with known ransomware extensions (e.g., `.locked`, `.encrypted`) faster than generic mass-modify heuristics in some campaigns.\n\nDocumented **Data sources**: File create/rename events 4663 with ObjectName ending in suspicious extensions. **App/TA** (typical add-on context): `Splunk_TA_windows`, EDR feeds. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by Account_Name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where files > 20` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, host, files affected), Timeline (detection), Single value (distinct suspicious files).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see who touched important files, so you can show auditors a trail and react faster if someone reads or changes what they should not.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.4.17",
              "n": "CIFS Connection Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks concurrent SMB sessions and failed session setups per file server. Spikes may indicate brute force, misconfigured apps, or server resource limits.",
              "t": "`Splunk_TA_windows`, SMB server audit",
              "d": "Event ID 5140/5145, Perfmons `Server Sessions`, `Server Rejects`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5140 OR EventCode=5145\n| bucket _time span=5m\n| stats count as sessions by ComputerName, _time\n| eventstats avg(sessions) as avg_s by ComputerName\n| where sessions > avg_s * 3",
              "m": "Baseline sessions per 5m window per server. Alert on 3× baseline or on SMB error events (551, 552) if enabled. Correlate with auth failures.",
              "z": "Line chart (session rate per server), Table (spike windows), Single value (current sessions).",
              "kfp": "Short spikes during approved changes, maintenance windows, or known batch jobs can match the rule; confirm against the vendor console and change calendar.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, SMB server audit.\n• Ensure the following data sources are available: Event ID 5140/5145, Perfmons `Server Sessions`, `Server Rejects`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline sessions per 5m window per server. Alert on 3× baseline or on SMB error events (551, 552) if enabled. Correlate with auth failures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5140 OR EventCode=5145\n| bucket _time span=5m\n| stats count as sessions by ComputerName, _time\n| eventstats avg(sessions) as avg_s by ComputerName\n| where sessions > avg_s * 3\n```\n\nUnderstanding this SPL\n\n**CIFS Connection Monitoring** — Tracks concurrent SMB sessions and failed session setups per file server. Spikes may indicate brute force, misconfigured apps, or server resource limits.\n\nDocumented **Data sources**: Event ID 5140/5145, Perfmons `Server Sessions`, `Server Rejects`. **App/TA** (typical add-on context): `Splunk_TA_windows`, SMB server audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by ComputerName, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by ComputerName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where sessions > avg_s * 3` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (session rate per server), Table (spike windows), Single value (current sessions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you make sense of file and share activity in plain language, so investigations and compliance questions have a clear story behind them.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "6.4.18",
              "n": "File Deletion Volume Anomaly",
              "c": "critical",
              "f": "advanced",
              "v": "Sudden spike in delete operations may indicate ransomware preparation, malicious insider, or script error. Complements mass-modify ransomware use cases.",
              "t": "`Splunk_TA_windows`",
              "d": "Event ID 4660 (object deleted), 4663 with Delete access",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4660,4663) AccessMask=\"*DELETE*\"\n| bucket _time span=1m\n| stats count as deletes by Account_Name, ShareName, _time\n| eventstats avg(deletes) as avg_d, stdev(deletes) as stdev_d by Account_Name\n| where deletes > avg_d + 4*stdev_d AND deletes > 50",
              "m": "Enable auditing on delete for sensitive trees. Baseline deletes per user/share. Alert on statistical outliers. Exclude known maintenance accounts via lookup.",
              "z": "Timeline (delete bursts), Table (user, share, delete count), Line chart (deletes per minute).",
              "kfp": "Legitimate bulk operations, month-end reporting, or new application releases can change access patterns; compare to change windows and owner notifications.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Event ID 4660 (object deleted), 4663 with Delete access.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable auditing on delete for sensitive trees. Baseline deletes per user/share. Alert on statistical outliers. Exclude known maintenance accounts via lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4660,4663) AccessMask=\"*DELETE*\"\n| bucket _time span=1m\n| stats count as deletes by Account_Name, ShareName, _time\n| eventstats avg(deletes) as avg_d, stdev(deletes) as stdev_d by Account_Name\n| where deletes > avg_d + 4*stdev_d AND deletes > 50\n```\n\nUnderstanding this SPL\n\n**File Deletion Volume Anomaly** — Sudden spike in delete operations may indicate ransomware preparation, malicious insider, or script error. Complements mass-modify ransomware use cases.\n\nDocumented **Data sources**: Event ID 4660 (object deleted), 4663 with Delete access. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by Account_Name, ShareName, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by Account_Name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where deletes > avg_d + 4*stdev_d AND deletes > 50` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (delete bursts), Table (user, share, delete count), Line chart (deletes per minute).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help you see who touched important files, so you can show auditors a trail and react faster if someone reads or changes what they should not.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 13,
            "none": 0
          }
        }
      ],
      "i": 6,
      "n": "Storage & Backup",
      "src": "cat-06-storage-backup.md"
    },
    {
      "s": [
        {
          "i": "7.1",
          "n": "Relational Databases",
          "u": [
            {
              "i": "7.1.1",
              "n": "Slow Query Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Slow queries at the P95/P99 level directly impact checkout, search, and reporting latency, hitting revenue and SLA commitments. They also hold locks and inflate CPU/IO, degrading performance for other tenants and queries. Prioritize fixes by business-critical endpoint and wait type, not just raw query duration.",
              "t": "DB Connect, Splunk_TA_microsoft-sqlserver, MySQL slow query log",
              "d": "Slow query logs, SQL Server DMVs (`sys.dm_exec_query_stats`), PostgreSQL `pg_stat_statements`",
              "q": "index=database sourcetype=\"mysql:slowquery\"\n| rex field=_raw \"Query_time:\\s+(?<query_time>[\\d.]+)\"\n| where query_time > 5\n| stats count, avg(query_time) as avg_time by db, user\n| sort -avg_time",
              "m": "Enable MySQL slow query log (long_query_time=5). For SQL Server, poll DMVs via DB Connect. For PostgreSQL, enable `pg_stat_statements`. Ingest and alert on queries exceeding thresholds. Report top offenders weekly.",
              "z": "Table (slow queries with details), Bar chart (top slow queries by avg duration), Line chart (slow query count trend).",
              "kfp": "Maintenance work (ANALYZE, VACUUM, REINDEX, index builds) and ETL or large report jobs may show up as long-running or blocking — compare with the maintenance and batch schedule.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, Splunk_TA_microsoft-sqlserver, MySQL slow query log.\n• Ensure the following data sources are available: Slow query logs, SQL Server DMVs (`sys.dm_exec_query_stats`), PostgreSQL `pg_stat_statements`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable MySQL slow query log (long_query_time=5). For SQL Server, poll DMVs via DB Connect. For PostgreSQL, enable `pg_stat_statements`. Ingest and alert on queries exceeding thresholds. Report top offenders weekly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mysql:slowquery\"\n| rex field=_raw \"Query_time:\\s+(?<query_time>[\\d.]+)\"\n| where query_time > 5\n| stats count, avg(query_time) as avg_time by db, user\n| sort -avg_time\n```\n\nUnderstanding this SPL\n\n**Slow Query Detection** — Slow queries at the P95/P99 level directly impact checkout, search, and reporting latency, hitting revenue and SLA commitments. They also hold locks and inflate CPU/IO, degrading performance for other tenants and queries. Prioritize fixes by business-critical endpoint and wait type, not just raw query duration.\n\nDocumented **Data sources**: Slow query logs, SQL Server DMVs (`sys.dm_exec_query_stats`), PostgreSQL `pg_stat_statements`. **App/TA** (typical add-on context): DB Connect, Splunk_TA_microsoft-sqlserver, MySQL slow query log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mysql:slowquery. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mysql:slowquery\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where query_time > 5` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by db, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Slow Query Detection** — Slow queries at the P95/P99 level directly impact checkout, search, and reporting latency, hitting revenue and SLA commitments. They also hold locks and inflate CPU/IO, degrading performance for other tenants and queries. Prioritize fixes by business-critical endpoint and wait type, not just raw query duration.\n\nDocumented **Data sources**: Slow query logs, SQL Server DMVs (`sys.dm_exec_query_stats`), PostgreSQL `pg_stat_statements`. **App/TA** (typical add-on context): DB Connect, Splunk_TA_microsoft-sqlserver, MySQL slow query log. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (slow queries with details), Bar chart (top slow queries by avg duration), Line chart (slow query count trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count",
              "e": [
                "db_connect",
                "mssql",
                "mysql"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.2",
              "n": "Deadlock Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Deadlocks cause transaction failures and application errors. Rapid detection and root cause analysis minimizes impact.",
              "t": "Splunk_TA_microsoft-sqlserver, database error logs",
              "d": "SQL Server error log (deadlock graph), PostgreSQL `log_lock_waits`, Oracle alert log",
              "q": "index=database sourcetype=\"mssql:errorlog\"\n| search \"deadlock\" OR \"Deadlock\"\n| stats count by _time, database_name\n| timechart span=1h sum(count) as deadlocks",
              "m": "Enable trace flag 1222 for SQL Server deadlock graphs. For PostgreSQL, set `log_lock_waits=on` and `deadlock_timeout=1s`. Ingest error logs. Alert on any deadlock occurrence. Parse deadlock graphs for involved queries/objects.",
              "z": "Line chart (deadlocks over time), Table (deadlock details), Single value (deadlocks today).",
              "kfp": "Transient deadlocks during high-concurrency batch processing are common in OLTP; tune when frequency or total wait time clearly exceeds the baseline, not on one-off events.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_microsoft-sqlserver, database error logs.\n• Ensure the following data sources are available: SQL Server error log (deadlock graph), PostgreSQL `log_lock_waits`, Oracle alert log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable trace flag 1222 for SQL Server deadlock graphs. For PostgreSQL, set `log_lock_waits=on` and `deadlock_timeout=1s`. Ingest error logs. Alert on any deadlock occurrence. Parse deadlock graphs for involved queries/objects.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mssql:errorlog\"\n| search \"deadlock\" OR \"Deadlock\"\n| stats count by _time, database_name\n| timechart span=1h sum(count) as deadlocks\n```\n\nUnderstanding this SPL\n\n**Deadlock Monitoring** — Deadlocks cause transaction failures and application errors. Rapid detection and root cause analysis minimizes impact.\n\nDocumented **Data sources**: SQL Server error log (deadlock graph), PostgreSQL `log_lock_waits`, Oracle alert log. **App/TA** (typical add-on context): Splunk_TA_microsoft-sqlserver, database error logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mssql:errorlog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mssql:errorlog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by _time, database_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Deadlock Monitoring** — Deadlocks cause transaction failures and application errors. Rapid detection and root cause analysis minimizes impact.\n\nDocumented **Data sources**: SQL Server error log (deadlock graph), PostgreSQL `log_lock_waits`, Oracle alert log. **App/TA** (typical add-on context): Splunk_TA_microsoft-sqlserver, database error logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (deadlocks over time), Table (deadlock details), Single value (deadlocks today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count deadlocks and contention in our databases so we can tune hot paths and application order before they turn into long outages of customer-facing work.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action span=1h | sort - count",
              "e": [
                "mssql"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 65,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier"
              ]
            },
            {
              "i": "7.1.3",
              "n": "Connection Pool Exhaustion",
              "c": "critical",
              "f": "beginner",
              "v": "Exhausted connection pools cause application failures. Monitoring prevents outages and guides pool sizing decisions.",
              "t": "DB Connect, performance counters",
              "d": "SQL Server DMVs (`sys.dm_exec_connections`), PostgreSQL `pg_stat_activity`, app server connection pool metrics",
              "q": "index=database sourcetype=\"dbconnect:mssql_connections\"\n| timechart span=5m max(active_connections) as active, max(max_connections) as max_limit\n| eval pct_used=round(active/max_limit*100,1)\n| where pct_used > 80",
              "m": "Poll connection counts via DB Connect every 5 minutes. Compare against configured maximum. Alert at 80% and 95% thresholds. Track by application/user to identify connection leaks.",
              "z": "Gauge (% connections used), Line chart (connections over time), Table (connections by application).",
              "kfp": "Connection pool warm-up after restarts, blue-green deploys, or autoscaling can look like a spike until the pool or fleet reaches steady state.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, performance counters.\n• Ensure the following data sources are available: SQL Server DMVs (`sys.dm_exec_connections`), PostgreSQL `pg_stat_activity`, app server connection pool metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll connection counts via DB Connect every 5 minutes. Compare against configured maximum. Alert at 80% and 95% thresholds. Track by application/user to identify connection leaks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:mssql_connections\"\n| timechart span=5m max(active_connections) as active, max(max_connections) as max_limit\n| eval pct_used=round(active/max_limit*100,1)\n| where pct_used > 80\n```\n\nUnderstanding this SPL\n\n**Connection Pool Exhaustion** — Exhausted connection pools cause application failures. Monitoring prevents outages and guides pool sizing decisions.\n\nDocumented **Data sources**: SQL Server DMVs (`sys.dm_exec_connections`), PostgreSQL `pg_stat_activity`, app server connection pool metrics. **App/TA** (typical add-on context): DB Connect, performance counters. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:mssql_connections. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:mssql_connections\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **pct_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct_used > 80` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Connection Pool Exhaustion** — Exhausted connection pools cause application failures. Monitoring prevents outages and guides pool sizing decisions.\n\nDocumented **Data sources**: SQL Server DMVs (`sys.dm_exec_connections`), PostgreSQL `pg_stat_activity`, app server connection pool metrics. **App/TA** (typical add-on context): DB Connect, performance counters. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (% connections used), Line chart (connections over time), Table (connections by application).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many sessions and pooled connections the fleet uses so we can scale or fix apps before the database hits its connection limits.",
              "wv": "crawl",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action span=5m | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "7.1.4",
              "n": "Replication Lag Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Replication lag affects data consistency and failover readiness. Monitoring ensures HA/DR objectives are met.",
              "t": "DB Connect, vendor-specific monitoring",
              "d": "SQL Server AG DMVs (`sys.dm_hadr_database_replica_states`), MySQL `SHOW SLAVE STATUS`, PostgreSQL replication slots",
              "q": "index=database sourcetype=\"dbconnect:replication_status\"\n| eval lag_seconds=coalesce(seconds_behind_master, replication_lag_sec)\n| timechart span=5m max(lag_seconds) as max_lag by replica_name\n| where max_lag > 60",
              "m": "Poll replication status via DB Connect at 5-minute intervals. Alert when lag exceeds RPO (e.g., >60 seconds). Track lag trend over time. Correlate spikes with batch jobs or network events.",
              "z": "Line chart (lag over time by replica), Single value (current max lag), Table (replicas with lag status).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, vendor-specific monitoring.\n• Ensure the following data sources are available: SQL Server AG DMVs (`sys.dm_hadr_database_replica_states`), MySQL `SHOW SLAVE STATUS`, PostgreSQL replication slots.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll replication status via DB Connect at 5-minute intervals. Alert when lag exceeds RPO (e.g., >60 seconds). Track lag trend over time. Correlate spikes with batch jobs or network events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:replication_status\"\n| eval lag_seconds=coalesce(seconds_behind_master, replication_lag_sec)\n| timechart span=5m max(lag_seconds) as max_lag by replica_name\n| where max_lag > 60\n```\n\nUnderstanding this SPL\n\n**Replication Lag Monitoring** — Replication lag affects data consistency and failover readiness. Monitoring ensures HA/DR objectives are met.\n\nDocumented **Data sources**: SQL Server AG DMVs (`sys.dm_hadr_database_replica_states`), MySQL `SHOW SLAVE STATUS`, PostgreSQL replication slots. **App/TA** (typical add-on context): DB Connect, vendor-specific monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:replication_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:replication_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **lag_seconds** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by replica_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where max_lag > 60` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Replication Lag Monitoring** — Replication lag affects data consistency and failover readiness. Monitoring ensures HA/DR objectives are met.\n\nDocumented **Data sources**: SQL Server AG DMVs (`sys.dm_hadr_database_replica_states`), MySQL `SHOW SLAVE STATUS`, PostgreSQL replication slots. **App/TA** (typical add-on context): DB Connect, vendor-specific monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Instance_Stats` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (lag over time by replica), Single value (current max lag), Table (replicas with lag status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action span=5m | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.5",
              "n": "Tablespace / Data File Growth",
              "c": "high",
              "f": "intermediate",
              "v": "Uncontrolled database growth leads to disk space exhaustion and outages. Trending enables proactive storage provisioning.",
              "t": "DB Connect",
              "d": "`sys.database_files` (SQL Server), `dba_tablespaces` (Oracle), `pg_database_size()` (PostgreSQL)",
              "q": "index=database sourcetype=\"dbconnect:db_size\"\n| timechart span=1d latest(size_gb) as db_size by database_name\n| predict db_size as predicted future_timespan=30",
              "m": "Poll database size metrics via DB Connect daily. Track growth rate per database. Use `predict` command for 30-day forecast. Alert when projected size exceeds available disk. Report top growing databases.",
              "z": "Line chart (size trend with prediction), Table (databases with growth rate), Bar chart (top databases by size).",
              "kfp": "Capacity grows during ETL loads, month-end batch processing, or data migrations. Growth from `ANALYZE`, statistics runs, or one-off bulk loads is often expected when it matches the schedule.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect.\n• Ensure the following data sources are available: `sys.database_files` (SQL Server), `dba_tablespaces` (Oracle), `pg_database_size()` (PostgreSQL).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll database size metrics via DB Connect daily. Track growth rate per database. Use `predict` command for 30-day forecast. Alert when projected size exceeds available disk. Report top growing databases.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:db_size\"\n| timechart span=1d latest(size_gb) as db_size by database_name\n| predict db_size as predicted future_timespan=30\n```\n\nUnderstanding this SPL\n\n**Tablespace / Data File Growth** — Uncontrolled database growth leads to disk space exhaustion and outages. Trending enables proactive storage provisioning.\n\nDocumented **Data sources**: `sys.database_files` (SQL Server), `dba_tablespaces` (Oracle), `pg_database_size()` (PostgreSQL). **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:db_size. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:db_size\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by database_name** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Tablespace / Data File Growth**): predict db_size as predicted future_timespan=30\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tablespace / Data File Growth** — Uncontrolled database growth leads to disk space exhaustion and outages. Trending enables proactive storage provisioning.\n\nDocumented **Data sources**: `sys.database_files` (SQL Server), `dba_tablespaces` (Oracle), `pg_database_size()` (PostgreSQL). **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Instance_Stats` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (size trend with prediction), Table (databases with growth rate), Bar chart (top databases by size).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend space use on our data files and tablespaces so we can add capacity or archive data before a database runs out of room during peak load.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action span=1d | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.6",
              "n": "Backup Success Verification",
              "c": "critical",
              "f": "intermediate",
              "v": "Database backups are the last line of defense. Verifying success prevents discovering backup failures during a crisis.",
              "t": "DB Connect, Splunk_TA_microsoft-sqlserver",
              "d": "`msdb.dbo.backupset` (SQL Server), `v$rman_backup_job_details` (Oracle), PostgreSQL `pg_basebackup` logs",
              "q": "index=database sourcetype=\"dbconnect:backup_history\"\n| stats latest(backup_finish_date) as last_backup, latest(type) as backup_type by database_name, server_name\n| eval hours_since=round((now()-strptime(last_backup,\"%Y-%m-%d %H:%M:%S\"))/3600,1)\n| where hours_since > 24\n| table server_name, database_name, last_backup, backup_type, hours_since",
              "m": "Query backup history tables via DB Connect daily. Alert on any database without a successful backup in the expected window. Cross-reference with CMDB for backup classification (full/diff/log) requirements.",
              "z": "Table (databases with backup status), Single value (databases missing backup), Status grid (database × backup type).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, Splunk_TA_microsoft-sqlserver.\n• Ensure the following data sources are available: `msdb.dbo.backupset` (SQL Server), `v$rman_backup_job_details` (Oracle), PostgreSQL `pg_basebackup` logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery backup history tables via DB Connect daily. Alert on any database without a successful backup in the expected window. Cross-reference with CMDB for backup classification (full/diff/log) requirements.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:backup_history\"\n| stats latest(backup_finish_date) as last_backup, latest(type) as backup_type by database_name, server_name\n| eval hours_since=round((now()-strptime(last_backup,\"%Y-%m-%d %H:%M:%S\"))/3600,1)\n| where hours_since > 24\n| table server_name, database_name, last_backup, backup_type, hours_since\n```\n\nUnderstanding this SPL\n\n**Backup Success Verification** — Database backups are the last line of defense. Verifying success prevents discovering backup failures during a crisis.\n\nDocumented **Data sources**: `msdb.dbo.backupset` (SQL Server), `v$rman_backup_job_details` (Oracle), PostgreSQL `pg_basebackup` logs. **App/TA** (typical add-on context): DB Connect, Splunk_TA_microsoft-sqlserver. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:backup_history. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:backup_history\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by database_name, server_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours_since > 24` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Backup Success Verification**): table server_name, database_name, last_backup, backup_type, hours_since\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Backup Success Verification** — Database backups are the last line of defense. Verifying success prevents discovering backup failures during a crisis.\n\nDocumented **Data sources**: `msdb.dbo.backupset` (SQL Server), `v$rman_backup_job_details` (Oracle), PostgreSQL `pg_basebackup` logs. **App/TA** (typical add-on context): DB Connect, Splunk_TA_microsoft-sqlserver. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Instance_Stats` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (databases with backup status), Single value (databases missing backup), Status grid (database × backup type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check backup and snapshot success and growth so we can trust restores and long-term storage plans for regulation and for real incidents.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count",
              "e": [
                "db_connect",
                "mssql"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.7",
              "n": "Login Failure Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Repeated login failures may indicate brute-force attacks or misconfigured applications. Detection supports security posture.",
              "t": "Splunk_TA_microsoft-sqlserver, database error logs",
              "d": "SQL Server error log (login failed events), PostgreSQL `log_connections`, Oracle audit trail",
              "q": "index=database sourcetype=\"mssql:errorlog\"\n| search \"Login failed\"\n| rex \"Login failed for user '(?<user>[^']+)'\"\n| stats count by user, src\n| where count > 10\n| sort -count",
              "m": "Ensure failed login auditing is enabled (SQL Server: \"Both failed and successful logins\"). Forward error logs to Splunk. Alert on >10 failures per user per hour. Correlate with AD lockout events.",
              "z": "Table (users with failed logins), Bar chart (failures by user), Line chart (failure rate over time).",
              "kfp": "Pen tests, help desk–driven password resets, misconfigured app credentials, and short IdP or SSO outages can look like the same pattern as a real attack without environment-specific tuning.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_microsoft-sqlserver, database error logs.\n• Ensure the following data sources are available: SQL Server error log (login failed events), PostgreSQL `log_connections`, Oracle audit trail.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure failed login auditing is enabled (SQL Server: \"Both failed and successful logins\"). Forward error logs to Splunk. Alert on >10 failures per user per hour. Correlate with AD lockout events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mssql:errorlog\"\n| search \"Login failed\"\n| rex \"Login failed for user '(?<user>[^']+)'\"\n| stats count by user, src\n| where count > 10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Login Failure Monitoring** — Repeated login failures may indicate brute-force attacks or misconfigured applications. Detection supports security posture.\n\nDocumented **Data sources**: SQL Server error log (login failed events), PostgreSQL `log_connections`, Oracle audit trail. **App/TA** (typical add-on context): Splunk_TA_microsoft-sqlserver, database error logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mssql:errorlog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mssql:errorlog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by user, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Login Failure Monitoring** — Repeated login failures may indicate brute-force attacks or misconfigured applications. Detection supports security posture.\n\nDocumented **Data sources**: SQL Server error log (login failed events), PostgreSQL `log_connections`, Oracle audit trail. **App/TA** (typical add-on context): Splunk_TA_microsoft-sqlserver, database error logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users with failed logins), Bar chart (failures by user), Line chart (failure rate over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for repeated or suspicious sign-in activity on our databases so we can catch brute-force and misconfiguration before they become account takeovers.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count",
              "e": [
                "mssql"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.8",
              "n": "Long-Running Transaction Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Long transactions hold locks, causing blocking chains that degrade application performance for many users.",
              "t": "DB Connect",
              "d": "`sys.dm_exec_requests` (SQL Server), `pg_stat_activity` (PostgreSQL), Oracle `v$transaction`",
              "q": "index=database sourcetype=\"dbconnect:active_transactions\"\n| where transaction_duration_sec > 300\n| table _time, server, database_name, user, transaction_duration_sec, sql_text\n| sort -transaction_duration_sec",
              "m": "Poll active transactions via DB Connect every 5 minutes. Alert when any transaction exceeds 5 minutes. Include SQL text and blocking information. Escalate transactions blocking other sessions.",
              "z": "Table (active long transactions), Single value (longest active transaction), Timeline (long transaction events).",
              "kfp": "Maintenance work (ANALYZE, VACUUM, REINDEX, index builds) and ETL or large report jobs may show up as long-running or blocking — compare with the maintenance and batch schedule.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect.\n• Ensure the following data sources are available: `sys.dm_exec_requests` (SQL Server), `pg_stat_activity` (PostgreSQL), Oracle `v$transaction`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll active transactions via DB Connect every 5 minutes. Alert when any transaction exceeds 5 minutes. Include SQL text and blocking information. Escalate transactions blocking other sessions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:active_transactions\"\n| where transaction_duration_sec > 300\n| table _time, server, database_name, user, transaction_duration_sec, sql_text\n| sort -transaction_duration_sec\n```\n\nUnderstanding this SPL\n\n**Long-Running Transaction Detection** — Long transactions hold locks, causing blocking chains that degrade application performance for many users.\n\nDocumented **Data sources**: `sys.dm_exec_requests` (SQL Server), `pg_stat_activity` (PostgreSQL), Oracle `v$transaction`. **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:active_transactions. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:active_transactions\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where transaction_duration_sec > 300` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Long-Running Transaction Detection**): table _time, server, database_name, user, transaction_duration_sec, sql_text\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Long-Running Transaction Detection** — Long transactions hold locks, causing blocking chains that degrade application performance for many users.\n\nDocumented **Data sources**: `sys.dm_exec_requests` (SQL Server), `pg_stat_activity` (PostgreSQL), Oracle `v$transaction`. **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (active long transactions), Single value (longest active transaction), Timeline (long transaction events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.9",
              "n": "Index Fragmentation",
              "c": "medium",
              "f": "beginner",
              "v": "Highly fragmented indexes cause excessive I/O and slow query performance. Monitoring guides maintenance scheduling.",
              "t": "DB Connect",
              "d": "`sys.dm_db_index_physical_stats` (SQL Server), `pg_stat_user_indexes` (PostgreSQL)",
              "q": "index=database sourcetype=\"dbconnect:index_stats\"\n| where avg_fragmentation_pct > 30\n| table server, database_name, table_name, index_name, avg_fragmentation_pct, page_count\n| sort -avg_fragmentation_pct",
              "m": "Poll index fragmentation stats via DB Connect weekly (resource-intensive query — schedule during off-hours). Alert when critical indexes exceed 30% fragmentation. Track fragmentation trend to optimize rebuild schedules.",
              "z": "Table (fragmented indexes), Bar chart (fragmentation by database), Heatmap (table × index fragmentation).",
              "kfp": "Maintenance work (ANALYZE, VACUUM, REINDEX, index builds) and ETL or large report jobs may show up as long-running or blocking — compare with the maintenance and batch schedule.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect.\n• Ensure the following data sources are available: `sys.dm_db_index_physical_stats` (SQL Server), `pg_stat_user_indexes` (PostgreSQL).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll index fragmentation stats via DB Connect weekly (resource-intensive query — schedule during off-hours). Alert when critical indexes exceed 30% fragmentation. Track fragmentation trend to optimize rebuild schedules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:index_stats\"\n| where avg_fragmentation_pct > 30\n| table server, database_name, table_name, index_name, avg_fragmentation_pct, page_count\n| sort -avg_fragmentation_pct\n```\n\nUnderstanding this SPL\n\n**Index Fragmentation** — Highly fragmented indexes cause excessive I/O and slow query performance. Monitoring guides maintenance scheduling.\n\nDocumented **Data sources**: `sys.dm_db_index_physical_stats` (SQL Server), `pg_stat_user_indexes` (PostgreSQL). **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:index_stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:index_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where avg_fragmentation_pct > 30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Index Fragmentation**): table server, database_name, table_name, index_name, avg_fragmentation_pct, page_count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Index Fragmentation** — Highly fragmented indexes cause excessive I/O and slow query performance. Monitoring guides maintenance scheduling.\n\nDocumented **Data sources**: `sys.dm_db_index_physical_stats` (SQL Server), `pg_stat_user_indexes` (PostgreSQL). **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Instance_Stats` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (fragmented indexes), Bar chart (fragmentation by database), Heatmap (table × index fragmentation).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.10",
              "n": "TempDB Contention (SQL Server)",
              "c": "high",
              "f": "beginner",
              "v": "TempDB contention is a common SQL Server bottleneck. Detection enables configuration tuning (multiple data files, trace flags).",
              "t": "DB Connect, Splunk_TA_microsoft-sqlserver",
              "d": "`sys.dm_os_wait_stats` (PAGELATCH waits), `sys.dm_exec_query_stats`",
              "q": "index=database sourcetype=\"dbconnect:wait_stats\"\n| where wait_type LIKE \"PAGELATCH%\" AND resource_description LIKE \"2:%\"\n| stats sum(wait_time_ms) as total_wait by wait_type",
              "m": "Poll wait statistics via DB Connect. Filter for PAGELATCH waits on TempDB (database_id 2). Alert when TempDB waits exceed baseline. Recommend adding TempDB data files equal to number of CPU cores (up to 8).",
              "z": "Bar chart (wait types), Line chart (TempDB wait trend), Single value (current TempDB wait ms).",
              "kfp": "Planned changes, load tests, and vendor maintenance in the data platform can move the same metrics this search uses; we compare to baselines, change records, and on-call context before we treat a hit as a production incident.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, Splunk_TA_microsoft-sqlserver.\n• Ensure the following data sources are available: `sys.dm_os_wait_stats` (PAGELATCH waits), `sys.dm_exec_query_stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll wait statistics via DB Connect. Filter for PAGELATCH waits on TempDB (database_id 2). Alert when TempDB waits exceed baseline. Recommend adding TempDB data files equal to number of CPU cores (up to 8).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:wait_stats\"\n| where wait_type LIKE \"PAGELATCH%\" AND resource_description LIKE \"2:%\"\n| stats sum(wait_time_ms) as total_wait by wait_type\n```\n\nUnderstanding this SPL\n\n**TempDB Contention (SQL Server)** — TempDB contention is a common SQL Server bottleneck. Detection enables configuration tuning (multiple data files, trace flags).\n\nDocumented **Data sources**: `sys.dm_os_wait_stats` (PAGELATCH waits), `sys.dm_exec_query_stats`. **App/TA** (typical add-on context): DB Connect, Splunk_TA_microsoft-sqlserver. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:wait_stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:wait_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where wait_type LIKE \"PAGELATCH%\" AND resource_description LIKE \"2:%\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by wait_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**TempDB Contention (SQL Server)** — TempDB contention is a common SQL Server bottleneck. Detection enables configuration tuning (multiple data files, trace flags).\n\nDocumented **Data sources**: `sys.dm_os_wait_stats` (PAGELATCH waits), `sys.dm_exec_query_stats`. **App/TA** (typical add-on context): DB Connect, Splunk_TA_microsoft-sqlserver. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Instance_Stats` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (wait types), Line chart (TempDB wait trend), Single value (current TempDB wait ms).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow SQL Server performance and availability signals so the team can catch regressions, AG issues, and tempdb pressure on business-critical systems.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count",
              "e": [
                "db_connect",
                "mssql"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.11",
              "n": "Buffer Cache Hit Ratio",
              "c": "medium",
              "f": "beginner",
              "v": "Low buffer cache hit ratio means excessive disk I/O. Monitoring guides memory allocation decisions.",
              "t": "DB Connect, performance counters",
              "d": "SQL Server performance counters, PostgreSQL `pg_stat_bgwriter`",
              "q": "index=database sourcetype=\"dbconnect:perf_counters\"\n| where counter_name=\"Buffer cache hit ratio\"\n| timechart span=15m avg(cntr_value) as hit_ratio by instance_name\n| where hit_ratio < 95",
              "m": "Poll buffer cache performance counters via DB Connect every 15 minutes. Alert when hit ratio drops below 95% for sustained periods. Correlate with memory pressure and query workload changes.",
              "z": "Gauge (buffer cache hit ratio), Line chart (hit ratio over time), Single value (current hit ratio %).",
              "kfp": "Buffer or page cache hit ratio may drop after instance restart, large table scans, or after data warehouse loads; short dips often normalize without action.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, performance counters.\n• Ensure the following data sources are available: SQL Server performance counters, PostgreSQL `pg_stat_bgwriter`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll buffer cache performance counters via DB Connect every 15 minutes. Alert when hit ratio drops below 95% for sustained periods. Correlate with memory pressure and query workload changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:perf_counters\"\n| where counter_name=\"Buffer cache hit ratio\"\n| timechart span=15m avg(cntr_value) as hit_ratio by instance_name\n| where hit_ratio < 95\n```\n\nUnderstanding this SPL\n\n**Buffer Cache Hit Ratio** — Low buffer cache hit ratio means excessive disk I/O. Monitoring guides memory allocation decisions.\n\nDocumented **Data sources**: SQL Server performance counters, PostgreSQL `pg_stat_bgwriter`. **App/TA** (typical add-on context): DB Connect, performance counters. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:perf_counters. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:perf_counters\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where counter_name=\"Buffer cache hit ratio\"` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by instance_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where hit_ratio < 95` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Database_Instance by Database_Instance.host, Database_Instance.action span=15m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Buffer Cache Hit Ratio** — Low buffer cache hit ratio means excessive disk I/O. Monitoring guides memory allocation decisions.\n\nDocumented **Data sources**: SQL Server performance counters, PostgreSQL `pg_stat_bgwriter`. **App/TA** (typical add-on context): DB Connect, performance counters. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Database_Instance` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (buffer cache hit ratio), Line chart (hit ratio over time), Single value (current hit ratio %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow buffer and cache hit rates so we can see when memory is too small for the working set and query latency starts to depend on disk more than it should.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Database_Instance by Database_Instance.host, Database_Instance.action span=15m | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.12",
              "n": "Database Availability Group Health",
              "c": "critical",
              "f": "beginner",
              "v": "AG/RAC cluster health is essential for HA. Detecting unhealthy replicas prevents unplanned failover failures.",
              "t": "DB Connect, Splunk_TA_microsoft-sqlserver",
              "d": "`sys.dm_hadr_availability_replica_states` (SQL Server), Oracle CRS logs",
              "q": "index=database sourcetype=\"dbconnect:ag_status\"\n| where synchronization_health_desc!=\"HEALTHY\" OR connected_state_desc!=\"CONNECTED\"\n| table _time, ag_name, replica_server_name, role_desc, synchronization_health_desc",
              "m": "Poll AG replica state DMVs every 5 minutes. Alert on any non-HEALTHY or non-CONNECTED state. Track failover events from SQL Server error log. Create dashboard showing full AG topology and health.",
              "z": "Status grid (replica × health state), Table (unhealthy replicas), Timeline (failover events).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, Splunk_TA_microsoft-sqlserver.\n• Ensure the following data sources are available: `sys.dm_hadr_availability_replica_states` (SQL Server), Oracle CRS logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll AG replica state DMVs every 5 minutes. Alert on any non-HEALTHY or non-CONNECTED state. Track failover events from SQL Server error log. Create dashboard showing full AG topology and health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:ag_status\"\n| where synchronization_health_desc!=\"HEALTHY\" OR connected_state_desc!=\"CONNECTED\"\n| table _time, ag_name, replica_server_name, role_desc, synchronization_health_desc\n```\n\nUnderstanding this SPL\n\n**Database Availability Group Health** — AG/RAC cluster health is essential for HA. Detecting unhealthy replicas prevents unplanned failover failures.\n\nDocumented **Data sources**: `sys.dm_hadr_availability_replica_states` (SQL Server), Oracle CRS logs. **App/TA** (typical add-on context): DB Connect, Splunk_TA_microsoft-sqlserver. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:ag_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:ag_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where synchronization_health_desc!=\"HEALTHY\" OR connected_state_desc!=\"CONNECTED\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Database Availability Group Health**): table _time, ag_name, replica_server_name, role_desc, synchronization_health_desc\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Database Availability Group Health** — AG/RAC cluster health is essential for HA. Detecting unhealthy replicas prevents unplanned failover failures.\n\nDocumented **Data sources**: `sys.dm_hadr_availability_replica_states` (SQL Server), Oracle CRS logs. **App/TA** (typical add-on context): DB Connect, Splunk_TA_microsoft-sqlserver. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Instance_Stats` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (replica × health state), Table (unhealthy replicas), Timeline (failover events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow SQL Server performance and availability signals so the team can catch regressions, AG issues, and tempdb pressure on business-critical systems.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count",
              "e": [
                "db_connect",
                "mssql"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "7.1.13",
              "n": "Schema Change Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Unauthorized DDL changes to production databases can break applications. Detection ensures change control compliance.",
              "t": "DB Connect, SQL Server audit",
              "d": "SQL Server DDL triggers, audit logs, PostgreSQL `log_statement='ddl'`",
              "q": "index=database sourcetype=\"mssql:audit\" action_id IN (\"CR\",\"AL\",\"DR\")\n| table _time, server_principal_name, database_name, object_name, statement\n| sort -_time",
              "m": "Enable SQL Server audit for DDL events (CREATE, ALTER, DROP). For PostgreSQL, set `log_statement='ddl'`. Forward audit logs to Splunk. Alert on any DDL outside maintenance windows. Correlate with change tickets.",
              "z": "Table (DDL events with details), Timeline (schema changes), Bar chart (changes by user).",
              "kfp": "Planned release DDL, CI/CD deploys, and data-migration projects emit schema changes during approved windows; require a change ticket or pipeline ID before treating a match as abuse.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, SQL Server audit.\n• Ensure the following data sources are available: SQL Server DDL triggers, audit logs, PostgreSQL `log_statement='ddl'`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable SQL Server audit for DDL events (CREATE, ALTER, DROP). For PostgreSQL, set `log_statement='ddl'`. Forward audit logs to Splunk. Alert on any DDL outside maintenance windows. Correlate with change tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mssql:audit\" action_id IN (\"CR\",\"AL\",\"DR\")\n| table _time, server_principal_name, database_name, object_name, statement\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Schema Change Detection** — Unauthorized DDL changes to production databases can break applications. Detection ensures change control compliance.\n\nDocumented **Data sources**: SQL Server DDL triggers, audit logs, PostgreSQL `log_statement='ddl'`. **App/TA** (typical add-on context): DB Connect, SQL Server audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mssql:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mssql:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Schema Change Detection**): table _time, server_principal_name, database_name, object_name, statement\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Schema Change Detection** — Unauthorized DDL changes to production databases can break applications. Detection ensures change control compliance.\n\nDocumented **Data sources**: SQL Server DDL triggers, audit logs, PostgreSQL `log_statement='ddl'`. **App/TA** (typical add-on context): DB Connect, SQL Server audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (DDL events with details), Timeline (schema changes), Bar chart (changes by user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log unexpected schema and object changes so we can tie database drift to a change ticket or an investigation when something does not look intentional.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-7.1.13: Schema Change Detection.",
                  "ea": "Saved search 'UC-7.1.13' running on SQL Server DDL triggers, audit logs, PostgreSQL log_statement='ddl', archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-21/chapter-I/subchapter-A/part-11"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NERC CIP CIP-010-4 R1 (Configuration change management) is enforced — Splunk UC-7.1.13: Schema Change Detection.",
                  "ea": "Saved search 'UC-7.1.13' running on SQL Server DDL triggers, audit logs, PostgreSQL log_statement='ddl', archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Authorization",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOX-ITGC ITGC.ChangeMgmt.Authorization (Change authorised) is enforced — Splunk UC-7.1.13: Schema Change Detection.",
                  "ea": "Saved search 'UC-7.1.13' running on SQL Server DDL triggers, audit logs, PostgreSQL log_statement='ddl', archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.14",
              "n": "Query Plan Regression",
              "c": "high",
              "f": "advanced",
              "v": "Query plan changes can cause sudden performance degradation. Detection enables rapid intervention (plan forcing, hint application).",
              "t": "DB Connect",
              "d": "SQL Server Query Store, `sys.dm_exec_query_plan`, PostgreSQL `pg_stat_statements`; lookup `query_baselines.csv` with columns `query_id, baseline_avg` (rolling 30-day median rebuilt nightly)",
              "q": "index=database sourcetype=\"dbconnect:query_store\"\n| stats avg(avg_duration) as current_avg by query_id, plan_id\n| lookup query_baselines.csv query_id OUTPUT baseline_avg\n| where isnotnull(baseline_avg) AND baseline_avg > 0\n| eval regression_pct=round((current_avg-baseline_avg)/baseline_avg*100,1)\n| where regression_pct > 50",
              "m": "Enable Query Store on SQL Server databases. Poll query performance metrics via DB Connect. Maintain baseline lookup of normal query durations. Alert when queries regress >50% from baseline. Enable automatic plan correction if available.",
              "z": "Table (regressed queries), Bar chart (regression % by query), Line chart (query duration trend).",
              "kfp": "Maintenance work (ANALYZE, VACUUM, REINDEX, index builds) and ETL or large report jobs may show up as long-running or blocking — compare with the maintenance and batch schedule.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect.\n• Ensure the following data sources are available: SQL Server Query Store, `sys.dm_exec_query_plan`, PostgreSQL `pg_stat_statements`; lookup `query_baselines.csv` with columns `query_id, baseline_avg` (rolling 30-day median rebuilt nightly).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Query Store on SQL Server databases. Poll query performance metrics via DB Connect. Maintain baseline lookup of normal query durations. Alert when queries regress >50% from baseline. Enable automatic plan correction if available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:query_store\"\n| stats avg(avg_duration) as current_avg by query_id, plan_id\n| lookup query_baselines.csv query_id OUTPUT baseline_avg\n| where isnotnull(baseline_avg) AND baseline_avg > 0\n| eval regression_pct=round((current_avg-baseline_avg)/baseline_avg*100,1)\n| where regression_pct > 50\n```\n\nUnderstanding this SPL\n\n**Query Plan Regression** — Query plan changes can cause sudden performance degradation. Detection enables rapid intervention (plan forcing, hint application).\n\nDocumented **Data sources**: SQL Server Query Store, `sys.dm_exec_query_plan`, PostgreSQL `pg_stat_statements`; lookup `query_baselines.csv` with columns `query_id, baseline_avg` (rolling 30-day median rebuilt nightly). **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:query_store. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:query_store\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by query_id, plan_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(baseline_avg) AND baseline_avg > 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **regression_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where regression_pct > 50` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Query Plan Regression** — Query plan changes can cause sudden performance degradation. 
Detection enables rapid intervention (plan forcing, hint application).\n\nDocumented **Data sources**: SQL Server Query Store, `sys.dm_exec_query_plan`, PostgreSQL `pg_stat_statements`; lookup `query_baselines.csv` with columns `query_id, baseline_avg` (rolling 30-day median rebuilt nightly). **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (regressed queries), Bar chart (regression % by query), Line chart (query duration trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow PostgreSQL activity and space trends so we can keep vacuum, replication, and connection paths healthy as load grows.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.15",
              "n": "Privilege Escalation Audit",
              "c": "critical",
              "f": "beginner",
              "v": "Unauthorized privilege changes can enable data theft or sabotage. Audit trail is required for compliance.",
              "t": "DB Connect, SQL Server audit",
              "d": "Database audit logs (GRANT/REVOKE events), security event logs",
              "q": "index=database sourcetype=\"mssql:audit\"\n| search action_id IN (\"G\",\"R\",\"GWG\") statement=\"*GRANT*\" OR statement=\"*REVOKE*\"\n| table _time, server_principal_name, database_name, statement, target_server_principal_name",
              "m": "Enable database audit for security events (GRANT, REVOKE, ALTER ROLE). Forward to Splunk. Alert on any privilege change in production. Correlate with change management tickets and access review cycles.",
              "z": "Table (privilege change events), Timeline (changes), Bar chart (changes by granting user).",
              "kfp": "Planned access reviews, recertification, break-glass accounts, and vendor maintenance can emit privilege- or access-change events that match the rule but are already approved; require a change ticket for context.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, SQL Server audit.\n• Ensure the following data sources are available: Database audit logs (GRANT/REVOKE events), security event logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable database audit for security events (GRANT, REVOKE, ALTER ROLE). Forward to Splunk. Alert on any privilege change in production. Correlate with change management tickets and access review cycles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mssql:audit\"\n| search action_id IN (\"G\",\"R\",\"GWG\") statement=\"*GRANT*\" OR statement=\"*REVOKE*\"\n| table _time, server_principal_name, database_name, statement, target_server_principal_name\n```\n\nUnderstanding this SPL\n\n**Privilege Escalation Audit** — Unauthorized privilege changes can enable data theft or sabotage. Audit trail is required for compliance.\n\nDocumented **Data sources**: Database audit logs (GRANT/REVOKE events), security event logs. **App/TA** (typical add-on context): DB Connect, SQL Server audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mssql:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mssql:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Privilege Escalation Audit**): table _time, server_principal_name, database_name, statement, target_server_principal_name\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privilege Escalation Audit** — Unauthorized privilege changes can enable data theft or sabotage. Audit trail is required for compliance.\n\nDocumented **Data sources**: Database audit logs (GRANT/REVOKE events), security event logs. **App/TA** (typical add-on context): DB Connect, SQL Server audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (privilege change events), Timeline (changes), Bar chart (changes by granting user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track who changed users and privileges in our databases so we can support compliance and spot unauthorized access paths early.",
              "wv": "crawl",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count",
              "e": [
                "db_connect",
                "mssql"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 65,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier"
              ]
            }
          ],
          "qa": 43.0,
          "qd": {
            "gold": 0,
            "silver": 4,
            "bronze": 11,
            "none": 0
          }
        },
        {
          "i": "7.2",
          "n": "NoSQL Databases",
          "u": [
            {
              "i": "7.2.1",
              "n": "Cluster Membership Changes",
              "c": "critical",
              "f": "beginner",
              "v": "Node additions/removals affect data distribution and availability. Unexpected membership changes may indicate failures.",
              "t": "Custom scripted input, database event logs",
              "d": "MongoDB replica set events, Cassandra `system.log`, Elasticsearch cluster state",
              "q": "index=database sourcetype=\"mongodb:log\"\n| search \"replSet\" (\"added\" OR \"removed\" OR \"changed state\" OR \"election\")\n| table _time, host, message\n| sort -_time",
              "m": "Forward database logs to Splunk. Parse membership change events. Alert on unexpected node departures. For Elasticsearch, poll `_cluster/health` API and alert on node count changes.",
              "z": "Timeline (membership events), Single value (current node count), Table (recent cluster changes).",
              "kfp": "Intentional node adds, removals, or elections during scaling, cloud zone work, or rolling upgrades can match the rule; correlate with the change and deployment calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input, database event logs.\n• Ensure the following data sources are available: MongoDB replica set events, Cassandra `system.log`, Elasticsearch cluster state.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward database logs to Splunk. Parse membership change events. Alert on unexpected node departures. For Elasticsearch, poll `_cluster/health` API and alert on node count changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mongodb:log\"\n| search \"replSet\" (\"added\" OR \"removed\" OR \"changed state\" OR \"election\")\n| table _time, host, message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Cluster Membership Changes** — Node additions/removals affect data distribution and availability. Unexpected membership changes may indicate failures.\n\nDocumented **Data sources**: MongoDB replica set events, Cassandra `system.log`, Elasticsearch cluster state. **App/TA** (typical add-on context): Custom scripted input, database event logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mongodb:log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mongodb:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Cluster Membership Changes**): table _time, host, message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (membership events), Single value (current node count), Table (recent cluster changes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch replica set and cluster membership events so we can spot unplanned node changes or elections before they become availability or data-safety issues.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "7.2.2",
              "n": "Replication Lag / Consistency",
              "c": "critical",
              "f": "intermediate",
              "v": "Replication lag causes stale reads and eventual consistency violations. Monitoring ensures data freshness SLAs are met.",
              "t": "Custom scripted input (rs.status(), nodetool)",
              "d": "MongoDB `rs.status()`, Cassandra `nodetool status`, Redis `INFO replication`",
              "q": "index=database sourcetype=\"mongodb:rs_status\"\n| eval lag_sec=optime_primary-optime_secondary\n| where lag_sec > 10\n| table _time, replica_set, member, state, lag_sec",
              "m": "Run scripted input polling replica set status every minute. Parse member states and optime differences. Alert when lag exceeds threshold (e.g., >10 seconds). Track trend for capacity planning.",
              "z": "Line chart (replication lag over time), Table (replicas with lag), Single value (max current lag).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (rs.status(), nodetool).\n• Ensure the following data sources are available: MongoDB `rs.status()`, Cassandra `nodetool status`, Redis `INFO replication`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun scripted input polling replica set status every minute. Parse member states and optime differences. Alert when lag exceeds threshold (e.g., >10 seconds). Track trend for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mongodb:rs_status\"\n| eval lag_sec=optime_primary-optime_secondary\n| where lag_sec > 10\n| table _time, replica_set, member, state, lag_sec\n```\n\nUnderstanding this SPL\n\n**Replication Lag / Consistency** — Replication lag causes stale reads and eventual consistency violations. Monitoring ensures data freshness SLAs are met.\n\nDocumented **Data sources**: MongoDB `rs.status()`, Cassandra `nodetool status`, Redis `INFO replication`. **App/TA** (typical add-on context): Custom scripted input (rs.status(), nodetool). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mongodb:rs_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mongodb:rs_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **lag_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lag_sec > 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Replication Lag / Consistency**): table _time, replica_set, member, state, lag_sec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (replication lag over time), Table (replicas with lag), Single value (max current lag).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cassandra"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.3",
              "n": "Read/Write Latency Trending",
              "c": "high",
              "f": "advanced",
              "v": "Latency trending detects performance degradation before it impacts users. Enables proactive tuning and scaling decisions.",
              "t": "Custom metrics input, database stats API",
              "d": "MongoDB `serverStatus()`, Cassandra JMX, Elasticsearch `_nodes/stats`",
              "q": "index=database sourcetype=\"mongodb:server_status\"\n| timechart span=5m avg(opcounters.query) as reads, avg(opcounters.insert) as writes, avg(opLatencies.reads.latency) as read_lat",
              "m": "Poll database metrics every 5 minutes via scripted input or API. Track read/write latency percentiles (p50, p95, p99). Baseline normal patterns and alert on sustained deviation. Correlate with workload changes.",
              "z": "Line chart (latency percentiles over time), Dual-axis chart (latency + throughput), Table (current latency by operation).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom metrics input, database stats API.\n• Ensure the following data sources are available: MongoDB `serverStatus()`, Cassandra JMX, Elasticsearch `_nodes/stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll database metrics every 5 minutes via scripted input or API. Track read/write latency percentiles (p50, p95, p99). Baseline normal patterns and alert on sustained deviation. Correlate with workload changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mongodb:server_status\"\n| timechart span=5m avg(opcounters.query) as reads, avg(opcounters.insert) as writes, avg(opLatencies.reads.latency) as read_lat\n```\n\nUnderstanding this SPL\n\n**Read/Write Latency Trending** — Latency trending detects performance degradation before it impacts users. Enables proactive tuning and scaling decisions.\n\nDocumented **Data sources**: MongoDB `serverStatus()`, Cassandra JMX, Elasticsearch `_nodes/stats`. **App/TA** (typical add-on context): Custom metrics input, database stats API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mongodb:server_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mongodb:server_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency percentiles over time), Dual-axis chart (latency + throughput), Table (current latency by operation).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow Oracle instance and space signals so the team can keep service levels and archivelog management in line with how the business actually uses the database.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.4",
              "n": "Shard Imbalance Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Uneven shard distribution causes hot spots and performance inconsistency. Rebalancing prevents overloaded nodes.",
              "t": "Custom scripted input",
              "d": "MongoDB `sh.status()`, Elasticsearch `_cat/shards`",
              "q": "index=database sourcetype=\"mongodb:shard_status\"\n| stats sum(count) as doc_count, sum(size) as data_size by shard\n| eventstats avg(doc_count) as avg_count\n| eval imbalance_pct=round(abs(doc_count-avg_count)/avg_count*100,1)\n| where imbalance_pct > 20",
              "m": "Poll shard statistics periodically. Calculate per-shard deviation from average. Alert when any shard deviates >20% from mean size. For Elasticsearch, track unassigned shards as a separate critical alert.",
              "z": "Bar chart (data size per shard), Table (shards with imbalance), Single value (max imbalance %).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input.\n• Ensure the following data sources are available: MongoDB `sh.status()`, Elasticsearch `_cat/shards`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll shard statistics periodically. Calculate per-shard deviation from average. Alert when any shard deviates >20% from mean size. For Elasticsearch, track unassigned shards as a separate critical alert.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mongodb:shard_status\"\n| stats sum(count) as doc_count, sum(size) as data_size by shard\n| eventstats avg(doc_count) as avg_count\n| eval imbalance_pct=round(abs(doc_count-avg_count)/avg_count*100,1)\n| where imbalance_pct > 20\n```\n\nUnderstanding this SPL\n\n**Shard Imbalance Detection** — Uneven shard distribution causes hot spots and performance inconsistency. Rebalancing prevents overloaded nodes.\n\nDocumented **Data sources**: MongoDB `sh.status()`, Elasticsearch `_cat/shards`. **App/TA** (typical add-on context): Custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mongodb:shard_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mongodb:shard_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by shard** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **imbalance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where imbalance_pct > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (data size per shard), Table (shards with imbalance), Single value (max imbalance %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow Oracle instance and space signals so the team can keep service levels and archivelog management in line with how the business actually uses the database.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.5",
              "n": "Compaction Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Pending compactions consume I/O and can cause write amplification. Monitoring ensures compaction keeps pace with writes.",
              "t": "JMX input (Cassandra), database logs",
              "d": "Cassandra `nodetool compactionstats`, MongoDB WiredTiger stats, Elasticsearch merge stats",
              "q": "index=database sourcetype=\"cassandra:compaction\"\n| timechart span=15m avg(pending_tasks) as pending, sum(bytes_compacted) as compacted\n| where pending > 50",
              "m": "Poll compaction stats via JMX (Cassandra) or scripted input. Track pending compaction tasks and throughput. Alert when pending tasks grow consistently, indicating compaction cannot keep up with write volume.",
              "z": "Line chart (pending compactions over time), Dual-axis (pending + throughput), Single value (current pending).",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: JMX input (Cassandra), database logs.\n• Ensure the following data sources are available: Cassandra `nodetool compactionstats`, MongoDB WiredTiger stats, Elasticsearch merge stats.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll compaction stats via JMX (Cassandra) or scripted input. Track pending compaction tasks and throughput. Alert when pending tasks grow consistently, indicating compaction cannot keep up with write volume.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"cassandra:compaction\"\n| timechart span=15m avg(pending_tasks) as pending, sum(bytes_compacted) as compacted\n| where pending > 50\n```\n\nUnderstanding this SPL\n\n**Compaction Monitoring** — Pending compactions consume I/O and can cause write amplification. Monitoring ensures compaction keeps pace with writes.\n\nDocumented **Data sources**: Cassandra `nodetool compactionstats`, MongoDB WiredTiger stats, Elasticsearch merge stats. **App/TA** (typical add-on context): JMX input (Cassandra), database logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: cassandra:compaction. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"cassandra:compaction\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where pending > 50` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (pending compactions over time), Dual-axis (pending + throughput), Single value (current pending).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow MongoDB operations, replication, and resource usage so we can keep queries and writes reliable as sharding and data volume grow.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cassandra"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.6",
              "n": "GC Pause Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Long GC pauses in Java-based databases (Cassandra, Elasticsearch) cause request timeouts and can trigger node eviction from the cluster.",
              "t": "GC log parsing, JMX",
              "d": "JVM GC logs (`gc.log`), JMX GC metrics",
              "q": "index=database sourcetype=\"jvm:gc\"\n| where gc_pause_ms > 500\n| stats count, avg(gc_pause_ms) as avg_pause, max(gc_pause_ms) as max_pause by host, gc_type\n| where max_pause > 1000",
              "m": "Configure JVM GC logging on all Java-based database nodes. Forward GC logs to Splunk with proper field extraction. Alert on pauses >500ms. Track GC frequency and total pause time per hour. Recommend heap tuning when pauses are chronic.",
              "z": "Line chart (GC pause duration over time), Histogram (pause distribution), Table (hosts with excessive GC).",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GC log parsing, JMX.\n• Ensure the following data sources are available: JVM GC logs (`gc.log`), JMX GC metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure JVM GC logging on all Java-based database nodes. Forward GC logs to Splunk with proper field extraction. Alert on pauses >500ms. Track GC frequency and total pause time per hour. Recommend heap tuning when pauses are chronic.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"jvm:gc\"\n| where gc_pause_ms > 500\n| stats count, avg(gc_pause_ms) as avg_pause, max(gc_pause_ms) as max_pause by host, gc_type\n| where max_pause > 1000\n```\n\nUnderstanding this SPL\n\n**GC Pause Detection** — Long GC pauses in Java-based databases (Cassandra, Elasticsearch) cause request timeouts and can trigger node eviction from the cluster.\n\nDocumented **Data sources**: JVM GC logs (`gc.log`), JMX GC metrics. **App/TA** (typical add-on context): GC log parsing, JMX. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: jvm:gc. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"jvm:gc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where gc_pause_ms > 500` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, gc_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where max_pause > 1000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (GC pause duration over time), Histogram (pause distribution), Table (hosts with excessive GC).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.7",
              "n": "Connection Count Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Approaching connection limits causes client rejections. Monitoring enables proactive limit adjustment or connection pooling.",
              "t": "Custom scripted input",
              "d": "MongoDB `serverStatus().connections`, Redis `INFO clients`, Elasticsearch `_nodes/stats/transport`",
              "q": "index=database sourcetype=\"mongodb:server_status\"\n| eval pct_used=round(connections.current/connections.available*100,1)\n| timechart span=5m max(pct_used) as connection_pct by host\n| where connection_pct > 80",
              "m": "Poll connection metrics every 5 minutes. Calculate percentage of max connections used. Alert at 80% and 95%. Track by client application to identify connection leaks.",
              "z": "Gauge (% connections used per node), Line chart (connection count over time), Table (nodes approaching limit).",
              "kfp": "Connection pool warm-up after restarts, blue-green deploys, or autoscaling can look like a spike until the pool or fleet reaches steady state.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input.\n• Ensure the following data sources are available: MongoDB `serverStatus().connections`, Redis `INFO clients`, Elasticsearch `_nodes/stats/transport`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll connection metrics every 5 minutes. Calculate percentage of max connections used. Alert at 80% and 95%. Track by client application to identify connection leaks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mongodb:server_status\"\n| eval pct_used=round(connections.current/connections.available*100,1)\n| timechart span=5m max(pct_used) as connection_pct by host\n| where connection_pct > 80\n```\n\nUnderstanding this SPL\n\n**Connection Count Monitoring** — Approaching connection limits causes client rejections. Monitoring enables proactive limit adjustment or connection pooling.\n\nDocumented **Data sources**: MongoDB `serverStatus().connections`, Redis `INFO clients`, Elasticsearch `_nodes/stats/transport`. **App/TA** (typical add-on context): Custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mongodb:server_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mongodb:server_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pct_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where connection_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (% connections used per node), Line chart (connection count over time), Table (nodes approaching limit).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many sessions and pooled connections the fleet uses so we can scale or fix apps before the database hits its connection limits.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.8",
              "n": "Index Build Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "Index builds consume significant resources and can impact production performance. Tracking ensures builds complete within maintenance windows.",
              "t": "Database log parsing",
              "d": "MongoDB log (`INDEX` messages), Elasticsearch `_tasks` API",
              "q": "index=database sourcetype=\"mongodb:log\"\n| search \"index build\"\n| rex \"building index on (?<collection>\\S+)\"\n| table _time, host, collection, message",
              "m": "Parse database logs for index build events (start, progress, completion). Alert on index builds in production during business hours. Track build duration for capacity planning.",
              "z": "Table (active/recent index builds), Timeline (build events), Single value (builds in progress).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Database log parsing.\n• Ensure the following data sources are available: MongoDB log (`INDEX` messages), Elasticsearch `_tasks` API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse database logs for index build events (start, progress, completion). Alert on index builds in production during business hours. Track build duration for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mongodb:log\"\n| search \"index build\"\n| rex \"building index on (?<collection>\\S+)\"\n| table _time, host, collection, message\n```\n\nUnderstanding this SPL\n\n**Index Build Monitoring** — Index builds consume significant resources and can impact production performance. Tracking ensures builds complete within maintenance windows.\n\nDocumented **Data sources**: MongoDB log (`INDEX` messages), Elasticsearch `_tasks` API. **App/TA** (typical add-on context): Database log parsing. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mongodb:log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mongodb:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **Index Build Monitoring**): table _time, host, collection, message\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (active/recent index builds), Timeline (build events), Single value (builds in progress).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow Oracle instance and space signals so the team can keep service levels and archivelog management in line with how the business actually uses the database.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.9",
              "n": "Memory Utilization",
              "c": "high",
              "f": "beginner",
              "v": "NoSQL databases are memory-intensive. Evictions indicate undersized cache, causing disk reads and performance degradation.",
              "t": "Custom scripted input, JMX",
              "d": "Redis `INFO memory`, MongoDB WiredTiger cache stats, Cassandra JMX heap metrics",
              "q": "index=database sourcetype=\"redis:info\"\n| eval mem_pct=round(used_memory/maxmemory*100,1)\n| timechart span=5m max(mem_pct) as memory_pct, sum(evicted_keys) as evictions by host\n| where memory_pct > 85",
              "m": "Poll memory metrics every 5 minutes. Track used vs max memory, eviction rate, and cache hit ratio. Alert when memory exceeds 85% or eviction rate spikes. Recommend sizing adjustments based on trends.",
              "z": "Gauge (memory % per node), Line chart (memory + evictions), Table (nodes with high utilization).",
              "kfp": "Bursts during known batch, release, or maintenance windows can match the rule; correlate with the change calendar before treating as an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input, JMX.\n• Ensure the following data sources are available: Redis `INFO memory`, MongoDB WiredTiger cache stats, Cassandra JMX heap metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll memory metrics every 5 minutes. Track used vs max memory, eviction rate, and cache hit ratio. Alert when memory exceeds 85% or eviction rate spikes. Recommend sizing adjustments based on trends.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"redis:info\"\n| eval mem_pct=round(used_memory/maxmemory*100,1)\n| timechart span=5m max(mem_pct) as memory_pct, sum(evicted_keys) as evictions by host\n| where memory_pct > 85\n```\n\nUnderstanding this SPL\n\n**Memory Utilization** — NoSQL databases are memory-intensive. Evictions indicate undersized cache, causing disk reads and performance degradation.\n\nDocumented **Data sources**: Redis `INFO memory`, MongoDB WiredTiger cache stats, Cassandra JMX heap metrics. **App/TA** (typical add-on context): Custom scripted input, JMX. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: redis:info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"redis:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mem_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where memory_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (memory % per node), Line chart (memory + evictions), Table (nodes with high utilization).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow Oracle instance and space signals so the team can keep service levels and archivelog management in line with how the business actually uses the database.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.10",
              "n": "Elasticsearch Cluster Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Elasticsearch cluster status directly indicates data availability. Yellow/red status requires immediate attention to prevent data loss.",
              "t": "Custom REST API input",
              "d": "Elasticsearch `_cluster/health` API",
              "q": "index=database sourcetype=\"elasticsearch:cluster_health\"\n| eval status_num=case(status=\"green\",0, status=\"yellow\",1, status=\"red\",2)\n| timechart span=5m latest(status_num) as health, latest(unassigned_shards) as unassigned by cluster_name\n| where health > 0",
              "m": "Poll `_cluster/health` endpoint every minute. Alert on yellow status (warning) and red status (critical). Track unassigned shard count and node count. Correlate with JVM metrics and disk space to identify root cause.",
              "z": "Status indicator (green/yellow/red), Single value (unassigned shards), Line chart (cluster health timeline), Table (cluster details).",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom REST API input.\n• Ensure the following data sources are available: Elasticsearch `_cluster/health` API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `_cluster/health` endpoint every minute. Alert on yellow status (warning) and red status (critical). Track unassigned shard count and node count. Correlate with JVM metrics and disk space to identify root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:cluster_health\"\n| eval status_num=case(status=\"green\",0, status=\"yellow\",1, status=\"red\",2)\n| timechart span=5m latest(status_num) as health, latest(unassigned_shards) as unassigned by cluster_name\n| where health > 0\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Cluster Health** — Elasticsearch cluster status directly indicates data availability. Yellow/red status requires immediate attention to prevent data loss.\n\nDocumented **Data sources**: Elasticsearch `_cluster/health` API. **App/TA** (typical add-on context): Custom REST API input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:cluster_health. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:cluster_health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status_num** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by cluster_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where health > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status indicator (green/yellow/red), Single value (unassigned shards), Line chart (cluster health timeline), Table (cluster details).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.11",
              "n": "MongoDB Oplog Window",
              "c": "critical",
              "f": "advanced",
              "v": "Oplog window shrinking indicates replication at risk of falling behind. Exhausted oplog causes replica set members to resync from scratch (full resync), causing extended downtime.",
              "t": "Custom scripted input (mongosh)",
              "d": "`rs.printReplicationInfo()`, `db.getReplicationInfo()`",
              "q": "index=database sourcetype=\"mongodb:replication_info\"\n| eval window_hours=round(timeDiff/3600, 1)\n| where window_hours < 24\n| timechart span=1h latest(window_hours) as oplog_window_hours by host\n| where oplog_window_hours < 12",
              "m": "Run scripted input polling `rs.printReplicationInfo()` or `db.getReplicationInfo()` every 15–30 minutes via mongosh. Parse `timeDiff` (oplog window in seconds). Alert when window drops below 24 hours (warning) or 12 hours (critical). Correlate with write throughput and replication lag. Recommend oplog size increase when window consistently shrinks.",
              "z": "Line chart (oplog window hours over time), Single value (current window hours), Table (hosts with shrinking oplog).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (mongosh).\n• Ensure the following data sources are available: `rs.printReplicationInfo()`, `db.getReplicationInfo()`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun scripted input polling `rs.printReplicationInfo()` or `db.getReplicationInfo()` every 15–30 minutes via mongosh. Parse `timeDiff` (oplog window in seconds). Alert when window drops below 24 hours (warning) or 12 hours (critical). Correlate with write throughput and replication lag. Recommend oplog size increase when window consistently shrinks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mongodb:replication_info\"\n| eval window_hours=round(timeDiff/3600, 1)\n| where window_hours < 24\n| timechart span=1h latest(window_hours) as oplog_window_hours by host\n| where oplog_window_hours < 12\n```\n\nUnderstanding this SPL\n\n**MongoDB Oplog Window** — Oplog window shrinking indicates replication at risk of falling behind. Exhausted oplog causes replica set members to resync from scratch (full resync), causing extended downtime.\n\nDocumented **Data sources**: `rs.printReplicationInfo()`, `db.getReplicationInfo()`. **App/TA** (typical add-on context): Custom scripted input (mongosh). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mongodb:replication_info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mongodb:replication_info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **window_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where window_hours < 24` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where oplog_window_hours < 12` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (oplog window hours over time), Single value (current window hours), Table (hosts with shrinking oplog).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. 
key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mongodb"
              ],
              "em": [
                "mongodb_mongod"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.12",
              "n": "MongoDB WiredTiger Cache Pressure",
              "c": "high",
              "f": "advanced",
              "v": "Cache dirty/used ratio approaching eviction thresholds causes increased disk I/O and degraded query performance. Early detection enables cache sizing or workload tuning.",
              "t": "Custom scripted input (mongosh serverStatus)",
              "d": "`db.serverStatus().wiredTiger.cache`",
              "q": "index=database sourcetype=\"mongodb:server_status\"\n| eval dirty_pct=round(bytes_dirty/bytes_currently_in_the_cache*100, 1)\n| eval used_pct=round(bytes_currently_in_the_cache/cache_maximum_bytes_configured*100, 1)\n| where dirty_pct > 20 OR used_pct > 90\n| timechart span=5m avg(dirty_pct) as dirty_pct, avg(used_pct) as used_pct by host",
              "m": "Poll `db.serverStatus()` via mongosh every 5 minutes. Extract `wiredTiger.cache`; map MongoDB fields (\"bytes dirty in the cache\", \"bytes currently in the cache\", \"maximum bytes configured\") to bytes_dirty, bytes_currently_in_the_cache, cache_maximum_bytes_configured in the scripted input output. Compute dirty and used percentages. Alert when dirty_pct >20% (eviction pressure) or used_pct >90%. Track eviction count and evicted pages. Correlate with workload spikes.",
              "z": "Line chart (dirty % and used % over time), Gauge (cache pressure), Table (hosts with high cache pressure).",
              "kfp": "Elections, chunk moves, and index builds produce noise during maintenance; match events to approved change and release windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (mongosh serverStatus).\n• Ensure the following data sources are available: `db.serverStatus().wiredTiger.cache`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `db.serverStatus()` via mongosh every 5 minutes. Extract `wiredTiger.cache`; map MongoDB fields (\"bytes dirty in the cache\", \"bytes currently in the cache\", \"maximum bytes configured\") to bytes_dirty, bytes_currently_in_the_cache, cache_maximum_bytes_configured in the scripted input output. Compute dirty and used percentages. Alert when dirty_pct >20% (eviction pressure) or used_pct >90%. Track eviction count and evicted pages. Correlate with workload spikes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mongodb:server_status\"\n| eval dirty_pct=round(bytes_dirty/bytes_currently_in_the_cache*100, 1)\n| eval used_pct=round(bytes_currently_in_the_cache/cache_maximum_bytes_configured*100, 1)\n| where dirty_pct > 20 OR used_pct > 90\n| timechart span=5m avg(dirty_pct) as dirty_pct, avg(used_pct) as used_pct by host\n```\n\nUnderstanding this SPL\n\n**MongoDB WiredTiger Cache Pressure** — Cache dirty/used ratio approaching eviction thresholds causes increased disk I/O and degraded query performance. Early detection enables cache sizing or workload tuning.\n\nDocumented **Data sources**: `db.serverStatus().wiredTiger.cache`. **App/TA** (typical add-on context): Custom scripted input (mongosh serverStatus). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mongodb:server_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mongodb:server_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **dirty_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where dirty_pct > 20 OR used_pct > 90` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (dirty % and used % over time), Gauge (cache pressure), Table (hosts with high cache pressure).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
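              "script": "#!/usr/bin/env bash\n# Illustrative sketch for sourcetype=mongodb:server_status (WiredTiger cache fields);\n# not from any shipped TA. Assumes mongosh on PATH; renames the spaced serverStatus\n# keys to the underscore field names the documented search expects.\nmongosh --quiet --eval 'var c = db.serverStatus().wiredTiger.cache; print(\"timestamp=\" + Math.floor(Date.now() / 1000) + \" bytes_dirty=\" + c[\"tracked dirty bytes in the cache\"] + \" bytes_currently_in_the_cache=\" + c[\"bytes currently in the cache\"] + \" cache_maximum_bytes_configured=\" + c[\"maximum bytes configured\"]);'",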
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow Oracle instance and space signals so the team can keep service levels and archivelog management in line with how the business actually uses the database.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mongodb"
              ],
              "em": [
                "mongodb_mongod"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.13",
              "n": "MongoDB Atlas Cluster Alerts",
              "c": "critical",
              "f": "beginner",
              "v": "Atlas project alerts (CPU, connections, replication) forwarded to Splunk provide a single pane with on-prem MongoDB. Rapid correlation during incidents.",
              "t": "MongoDB Atlas API / Atlas App Services webhook, HEC",
              "d": "Atlas alert payloads (clusterId, alertType, status, metric values)",
              "q": "index=database sourcetype=\"mongodb:atlas:alert\"\n| where status=\"OPEN\" OR severity IN (\"CRITICAL\",\"WARNING\")\n| stats latest(_time) as last_alert, values(alertType) as types by cluster_name, project_id\n| sort -last_alert",
              "m": "Configure Atlas to send alerts to HTTPS endpoint (Splunk HEC) or poll Alerts API every minute. Normalize fields. Page on CRITICAL OPEN alerts.",
              "z": "Timeline (Atlas alerts), Table (cluster, alert type, status), Single value (open critical count).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: MongoDB Atlas API / Atlas App Services webhook, HEC.\n• Ensure the following data sources are available: Atlas alert payloads (clusterId, alertType, status, metric values).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Atlas to send alerts to HTTPS endpoint (Splunk HEC) or poll Alerts API every minute. Normalize fields. Page on CRITICAL OPEN alerts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mongodb:atlas:alert\"\n| where status=\"OPEN\" OR severity IN (\"CRITICAL\",\"WARNING\")\n| stats latest(_time) as last_alert, values(alertType) as types by cluster_name, project_id\n| sort -last_alert\n```\n\nUnderstanding this SPL\n\n**MongoDB Atlas Cluster Alerts** — Atlas project alerts (CPU, connections, replication) forwarded to Splunk provide a single pane with on-prem MongoDB. Rapid correlation during incidents.\n\nDocumented **Data sources**: Atlas alert payloads (clusterId, alertType, status, metric values). **App/TA** (typical add-on context): MongoDB Atlas API / Atlas App Services webhook, HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mongodb:atlas:alert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mongodb:atlas:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status=\"OPEN\" OR severity IN (\"CRITICAL\",\"WARNING\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster_name, project_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (Atlas alerts), Table (cluster, alert type, status), Single value (open critical count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mongodb"
              ],
              "em": [
                "mongodb_mongod"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.14",
              "n": "Cassandra Compaction Backlog and Throughput",
              "c": "high",
              "f": "intermediate",
              "v": "Pending compactions and compaction throughput indicate whether the cluster keeps up with writes. Complements generic compaction UC with nodetool-derived rates.",
              "t": "JMX, `nodetool compactionstats` scripted input",
              "d": "`pending_tasks`, `bytes_compacted`, `compaction throughput`",
              "q": "index=database sourcetype=\"cassandra:compactionstats\"\n| where pending_tasks > 100 OR compaction_throughput_mbps < 5\n| timechart span=15m max(pending_tasks) as pending, avg(compaction_throughput_mbps) as tp_mbps by cluster_name",
              "m": "Poll nodetool every 5m per node. Alert when pending_tasks grows monotonically for 1h or throughput collapses.",
              "z": "Dual-axis (pending vs throughput), Table (nodes with backlog), Line chart (pending tasks).",
              "kfp": "Repairs, bootstraps, and heavy write bursts increase compaction and hinted-handoff backlog as part of normal Cassandra self-healing — alert on stuck or growing queues outside known batch jobs.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: JMX, `nodetool compactionstats` scripted input.\n• Ensure the following data sources are available: `pending_tasks`, `bytes_compacted`, `compaction throughput`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll nodetool every 5m per node. Alert when pending_tasks grows monotonically for 1h or throughput collapses.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"cassandra:compactionstats\"\n| where pending_tasks > 100 OR compaction_throughput_mbps < 5\n| timechart span=15m max(pending_tasks) as pending, avg(compaction_throughput_mbps) as tp_mbps by cluster_name\n```\n\nUnderstanding this SPL\n\n**Cassandra Compaction Backlog and Throughput** — Pending compactions and compaction throughput indicate whether the cluster keeps up with writes. Complements generic compaction UC with nodetool-derived rates.\n\nDocumented **Data sources**: `pending_tasks`, `bytes_compacted`, `compaction throughput`. **App/TA** (typical add-on context): JMX, `nodetool compactionstats` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: cassandra:compactionstats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"cassandra:compactionstats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where pending_tasks > 100 OR compaction_throughput_mbps < 5` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by cluster_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Dual-axis (pending vs throughput), Table (nodes with backlog), Line chart (pending tasks).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Cassandra Compaction Backlog and Throughput so we can keep this part of the data platform within the capacity and quality targets our teams expect.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cassandra"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.15",
              "n": "Redis Memory Fragmentation (Cache Tier)",
              "c": "medium",
              "f": "intermediate",
              "v": "High `mem_fragmentation_ratio` on self-managed Redis (not only ElastiCache) wastes RAM and increases latency. Tracks non-cloud Redis clusters in the NoSQL section.",
              "t": "redis-cli scripted input",
              "d": "`INFO memory` — `mem_fragmentation_ratio`, `used_memory_rss`",
              "q": "index=database sourcetype=\"redis:info\" role=master\n| where mem_fragmentation_ratio > 1.5\n| timechart span=15m avg(mem_fragmentation_ratio) as frag by host",
              "m": "Poll every 15m. Alert when ratio >1.5 for 24h. Recommend active defrag or restart per policy.",
              "z": "Line chart (fragmentation ratio), Table (hosts over threshold), Gauge (current ratio).",
              "kfp": "AOF rewrites, RDB saves, and replica full syncs can spike latency or CPU briefly on healthy hosts; check Redis persistence settings and the backup schedule.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: redis-cli scripted input.\n• Ensure the following data sources are available: `INFO memory` — `mem_fragmentation_ratio`, `used_memory_rss`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll every 15m. Alert when ratio >1.5 for 24h. Recommend active defrag or restart per policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"redis:info\" role=master\n| where mem_fragmentation_ratio > 1.5\n| timechart span=15m avg(mem_fragmentation_ratio) as frag by host\n```\n\nUnderstanding this SPL\n\n**Redis Memory Fragmentation (Cache Tier)** — High `mem_fragmentation_ratio` on self-managed Redis (not only ElastiCache) wastes RAM and increases latency. Tracks non-cloud Redis clusters in the NoSQL section.\n\nDocumented **Data sources**: `INFO memory` — `mem_fragmentation_ratio`, `used_memory_rss`. **App/TA** (typical add-on context): redis-cli scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: redis:info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"redis:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where mem_fragmentation_ratio > 1.5` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (fragmentation ratio), Table (hosts over threshold), Gauge (current ratio).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow memory, persistence, and replication on Redis so we can catch eviction and lag before a cache or queue tier becomes the bottleneck for applications.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "redis"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.16",
              "n": "DynamoDB Throttling Events",
              "c": "high",
              "f": "beginner",
              "v": "Read/write throttle events mean application retries and latency spikes. Identifies hot partitions and undersized capacity modes.",
              "t": "`Splunk_TA_aws` (CloudWatch)",
              "d": "`UserErrors`, `ThrottledRequests`, `ConsumedReadCapacityUnits`",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/DynamoDB\" metric_name=\"ThrottledRequests\"\n| timechart span=5m sum(Sum) as throttled by TableName, Operation\n| where throttled > 0",
              "m": "Enable DynamoDB metrics with table dimension. Alert on any sustained throttling. Correlate with hot key patterns from access logs if available.",
              "z": "Line chart (throttled requests), Table (table, operation), Single value (throttle bursts per day).",
              "kfp": "Planned changes, load tests, and vendor maintenance in the data platform can move the same metrics this search uses; we compare to baselines, change records, and on-call context before we treat a hit as a production incident.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws` (CloudWatch).\n• Ensure the following data sources are available: `UserErrors`, `ThrottledRequests`, `ConsumedReadCapacityUnits`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable DynamoDB metrics with table dimension. Alert on any sustained throttling. Correlate with hot key patterns from access logs if available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/DynamoDB\" metric_name=\"ThrottledRequests\"\n| timechart span=5m sum(Sum) as throttled by TableName, Operation\n| where throttled > 0\n```\n\nUnderstanding this SPL\n\n**DynamoDB Throttling Events** — Read/write throttle events mean application retries and latency spikes. Identifies hot partitions and undersized capacity modes.\n\nDocumented **Data sources**: `UserErrors`, `ThrottledRequests`, `ConsumedReadCapacityUnits`. **App/TA** (typical add-on context): `Splunk_TA_aws` (CloudWatch). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by TableName, Operation** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where throttled > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (throttled requests), Table (table, operation), Single value (throttle bursts per day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Read/write throttle events mean application retries and latency spikes We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.17",
              "n": "CouchDB Replication Conflicts",
              "c": "high",
              "f": "intermediate",
              "v": "Growing `_conflicts` document count indicates divergent replicas and data quality issues. Early detection prevents silent wrong reads.",
              "t": "CouchDB `_stats`, `_active_tasks` API",
              "d": "Replication task errors, document conflict counts (custom view or `_changes` sampling)",
              "q": "index=database sourcetype=\"couchdb:replication\"\n| where conflict_count > 0 OR error IS NOT NULL\n| stats sum(conflict_count) as conflicts, latest(error) as err by database_name, source, target\n| sort -conflicts",
              "m": "Ingest replication task status from `_active_tasks` and periodic conflict counts from a map view. Alert on replication errors or conflict_count increase week-over-week.",
              "z": "Table (DB, conflicts, error), Line chart (conflict trend), Single value (total conflicts).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CouchDB `_stats`, `_active_tasks` API.\n• Ensure the following data sources are available: Replication task errors, document conflict counts (custom view or `_changes` sampling).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest replication task status from `_active_tasks` and periodic conflict counts from a map view. Alert on replication errors or conflict_count increase week-over-week.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"couchdb:replication\"\n| where conflict_count > 0 OR error IS NOT NULL\n| stats sum(conflict_count) as conflicts, latest(error) as err by database_name, source, target\n| sort -conflicts\n```\n\nUnderstanding this SPL\n\n**CouchDB Replication Conflicts** — Growing `_conflicts` document count indicates divergent replicas and data quality issues. Early detection prevents silent wrong reads.\n\nDocumented **Data sources**: Replication task errors, document conflict counts (custom view or `_changes` sampling). **App/TA** (typical add-on context): CouchDB `_stats`, `_active_tasks` API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: couchdb:replication. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"couchdb:replication\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where conflict_count > 0 OR error IS NOT NULL` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by database_name, source, target** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (DB, conflicts, error), Line chart (conflict trend), Single value (total conflicts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.18",
              "n": "MongoDB Oplog Window Sufficiency",
              "c": "critical",
              "f": "advanced",
              "v": "Validates minimum oplog window hours against replica catch-up time under peak load. Extends oplog monitoring with capacity-style thresholds per deployment class.",
              "t": "mongosh scripted input",
              "d": "`getReplicationInfo()`, `rs.printReplicationInfo()`",
              "q": "index=database sourcetype=\"mongodb:replication_info\"\n| eval window_hrs=round(timeDiff/3600,2)\n| lookup mongo_replica_tier class OUTPUT min_oplog_window_hrs\n| where window_hrs < min_oplog_window_hrs\n| table host window_hrs min_oplog_window_hrs",
              "m": "Define minimum window per environment in lookup. Alert below tier minimum. Recommend oplog size change when consistently borderline.",
              "z": "Line chart (oplog window hours), Table (hosts below tier min), Gauge (worst window).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: mongosh scripted input.\n• Ensure the following data sources are available: `getReplicationInfo()`, `rs.printReplicationInfo()`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine minimum window per environment in lookup. Alert below tier minimum. Recommend oplog size change when consistently borderline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mongodb:replication_info\"\n| eval window_hrs=round(timeDiff/3600,2)\n| lookup mongo_replica_tier class OUTPUT min_oplog_window_hrs\n| where window_hrs < min_oplog_window_hrs\n| table host window_hrs min_oplog_window_hrs\n```\n\nUnderstanding this SPL\n\n**MongoDB Oplog Window Sufficiency** — Validates minimum oplog window hours against replica catch-up time under peak load. Extends oplog monitoring with capacity-style thresholds per deployment class.\n\nDocumented **Data sources**: `getReplicationInfo()`, `rs.printReplicationInfo()`. **App/TA** (typical add-on context): mongosh scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mongodb:replication_info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mongodb:replication_info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **window_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where window_hrs < min_oplog_window_hrs` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MongoDB Oplog Window Sufficiency**): table host window_hrs min_oplog_window_hrs\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (oplog window hours), Table (hosts below tier min), Gauge (worst window).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mongodb"
              ],
              "em": [
                "mongodb_mongod"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.19",
              "n": "Cassandra Tombstone Accumulation",
              "c": "high",
              "f": "advanced",
              "v": "High tombstone counts per read and GC pressure slow queries and repairs. Monitoring `TombstoneHistogram` and read repair backlog prevents timeouts.",
              "t": "JMX, `nodetool tablestats`",
              "d": "`Estimated droppable tombstones`, read path tombstone thresholds",
              "q": "index=database sourcetype=\"cassandra:tablestats\"\n| where droppable_tombstones > 100000 OR live_sstable_count > 50\n| stats latest(droppable_tombstones) as tombstones by keyspace, table, host\n| sort -tombstones",
              "m": "Poll tablestats weekly or daily per large tables. Alert on droppable tombstones above baseline. Correlate with TTL/schema design reviews.",
              "z": "Table (KS, table, tombstones), Bar chart (top tables), Line chart (tombstone trend).",
              "kfp": "Repairs, bootstraps, and heavy write bursts increase compaction and hinted-handoff backlog as part of normal Cassandra self-healing — alert on stuck or growing queues outside known batch jobs.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: JMX, `nodetool tablestats`.\n• Ensure the following data sources are available: `Estimated droppable tombstones`, read path tombstone thresholds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll tablestats weekly or daily per large tables. Alert on droppable tombstones above baseline. Correlate with TTL/schema design reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"cassandra:tablestats\"\n| where droppable_tombstones > 100000 OR live_sstable_count > 50\n| stats latest(droppable_tombstones) as tombstones by keyspace, table, host\n| sort -tombstones\n```\n\nUnderstanding this SPL\n\n**Cassandra Tombstone Accumulation** — High tombstone counts per read and GC pressure slow queries and repairs. Monitoring `TombstoneHistogram` and read repair backlog prevents timeouts.\n\nDocumented **Data sources**: `Estimated droppable tombstones`, read path tombstone thresholds. **App/TA** (typical add-on context): JMX, `nodetool tablestats`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: cassandra:tablestats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"cassandra:tablestats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where droppable_tombstones > 100000 OR live_sstable_count > 50` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by keyspace, table, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (KS, table, tombstones), Bar chart (top tables), Line chart (tombstone trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cassandra"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.20",
              "n": "Redis Eviction Rate",
              "c": "high",
              "f": "beginner",
              "v": "Rising `evicted_keys` per second indicates memory pressure and cache miss storms. Distinct from fragmentation and hit ratio for ops response.",
              "t": "redis-cli `INFO stats`",
              "d": "`evicted_keys`, `maxmemory`, `used_memory`",
              "q": "index=database sourcetype=\"redis:info\"\n| timechart span=1m per_second(evicted_keys) as evict_per_sec by host\n| where evict_per_sec > 10",
              "m": "Derive per-second evictions from counter deltas. Alert when sustained above baseline. Correlate with `maxmemory` policy and traffic.",
              "z": "Line chart (evictions/sec), Table (hosts spiking), Dual-axis (evictions + memory).",
              "kfp": "AOF rewrites, RDB saves, and replica full syncs can spike latency or CPU briefly on healthy hosts; check Redis persistence settings and the backup schedule.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: redis-cli `INFO stats`.\n• Ensure the following data sources are available: `evicted_keys`, `maxmemory`, `used_memory`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDerive per-second evictions from counter deltas. Alert when sustained above baseline. Correlate with `maxmemory` policy and traffic.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"redis:info\"\n| timechart span=1m per_second(evicted_keys) as evict_per_sec by host\n| where evict_per_sec > 10\n```\n\nUnderstanding this SPL\n\n**Redis Eviction Rate** — Rising `evicted_keys` per second indicates memory pressure and cache miss storms. Distinct from fragmentation and hit ratio for ops response.\n\nDocumented **Data sources**: `evicted_keys`, `maxmemory`, `used_memory`. **App/TA** (typical add-on context): redis-cli `INFO stats`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: redis:info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"redis:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where evict_per_sec > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (evictions/sec), Table (hosts spiking), Dual-axis (evictions + memory).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow buffer and cache hit rates so we can see when memory is too small for the working set and query latency starts to depend on disk more than it should.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "redis"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.21",
              "n": "HBase RegionServer Failover Events",
              "c": "critical",
              "f": "intermediate",
              "v": "RegionServer death and region reassignment cause latency spikes and possible unavailability. Log and metric correlation speeds recovery.",
              "t": "HBase Master/RS logs, JMX",
              "d": "`ServerShutdownHandler`, `Regions moved`, Dead RegionServer count",
              "q": "index=database sourcetype=\"hbase:master\"\n| search \"ServerShutdownHandler\" OR \"Dead RegionServer\" OR \"FailedServerShutdown\"\n| stats count by cluster_name, host\n| where count > 0",
              "m": "Forward HBase master and RS logs. Alert on any dead RS or failed shutdown. Track region-in-transition duration from JMX if ingested.",
              "z": "Timeline (RS failures), Table (cluster, host, events), Single value (RS down count).",
              "kfp": "Planned changes, load tests, and vendor maintenance in the data platform can move the same metrics this search uses; we compare to baselines, change records, and on-call context before we treat a hit as a production incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HBase Master/RS logs, JMX.\n• Ensure the following data sources are available: `ServerShutdownHandler`, `Regions moved`, Dead RegionServer count.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward HBase master and RS logs. Alert on any dead RS or failed shutdown. Track region-in-transition duration from JMX if ingested.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"hbase:master\"\n| search \"ServerShutdownHandler\" OR \"Dead RegionServer\" OR \"FailedServerShutdown\"\n| stats count by cluster_name, host\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**HBase RegionServer Failover Events** — RegionServer death and region reassignment cause latency spikes and possible unavailability. Log and metric correlation speeds recovery.\n\nDocumented **Data sources**: `ServerShutdownHandler`, `Regions moved`, Dead RegionServer count. **App/TA** (typical add-on context): HBase Master/RS logs, JMX. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: hbase:master. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"hbase:master\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by cluster_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (RS failures), Table (cluster, host, events), Single value (RS down count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "RegionServer death and region reassignment cause latency spikes and possible unavailability We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.22",
              "n": "CouchDB View Build Times",
              "c": "medium",
              "f": "intermediate",
              "v": "Long-running view index builds block compaction and increase disk I/O. Tracks `_active_tasks` indexer progress and failures.",
              "t": "CouchDB `_active_tasks`, log ingestion",
              "d": "Indexer task type, `progress`, `total_changes`",
              "q": "index=database sourcetype=\"couchdb:active_tasks\" type=indexer\n| eval pct=round(progress/total_changes*100,1)\n| where pct < 100 AND updated_in_sec > 3600\n| table database_name design_doc pct updated_in_sec",
              "m": "Poll `_active_tasks` every minute. Alert when indexer runs >1h with low progress or task errors. Correlate with data volume growth.",
              "z": "Table (design doc, % complete), Line chart (indexer duration), Single value (stuck indexers).",
              "kfp": "Maintenance work (ANALYZE, VACUUM, REINDEX, index builds) and ETL or large report jobs may show up as long-running or blocking — compare with the maintenance and batch schedule.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CouchDB `_active_tasks`, log ingestion.\n• Ensure the following data sources are available: Indexer task type, `progress`, `total_changes`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `_active_tasks` every minute. Alert when indexer runs >1h with low progress or task errors. Correlate with data volume growth.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"couchdb:active_tasks\" type=indexer\n| eval pct=round(progress/total_changes*100,1)\n| where pct < 100 AND updated_in_sec > 3600\n| table database_name design_doc pct updated_in_sec\n```\n\nUnderstanding this SPL\n\n**CouchDB View Build Times** — Long-running view index builds block compaction and increase disk I/O. Tracks `_active_tasks` indexer progress and failures.\n\nDocumented **Data sources**: Indexer task type, `progress`, `total_changes`. **App/TA** (typical add-on context): CouchDB `_active_tasks`, log ingestion. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: couchdb:active_tasks. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"couchdb:active_tasks\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct < 100 AND updated_in_sec > 3600` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CouchDB View Build Times**): table database_name design_doc pct updated_in_sec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (design doc, % complete), Line chart (indexer duration), Single value (stuck indexers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.2.23",
              "n": "MongoDB Index Inefficiency (Usage vs Size)",
              "c": "medium",
              "f": "advanced",
              "v": "Indexes with near-zero `accesses.ops` and large `size` waste RAM and slow writes. Identifies candidates for drop or partial indexes.",
              "t": "mongosh `$indexStats`, log export",
              "d": "`collStats`, `$indexStats` output",
              "q": "index=database sourcetype=\"mongodb:index_stats\"\n| eval usage=ops_since_start\n| where index_size_bytes > 104857600 AND usage < 10\n| table ns, name, index_size_bytes, usage\n| sort -index_size_bytes",
              "m": "Weekly job exports `$indexStats`. Flag large indexes with minimal usage. Exclude `_id` and required unique indexes via lookup.",
              "z": "Table (namespace, index, size, ops), Bar chart (wasted index size), Single value (low-usage large indexes count).",
              "kfp": "Planned step-down elections, compactions, and balanced migrations can look like incidents in logs until we compare with the MongoDB ops window or Atlas activity feed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: mongosh `$indexStats`, log export.\n• Ensure the following data sources are available: `collStats`, `$indexStats` output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nWeekly job exports `$indexStats`. Flag large indexes with minimal usage. Exclude `_id` and required unique indexes via lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mongodb:index_stats\"\n| eval usage=ops_since_start\n| where index_size_bytes > 104857600 AND usage < 10\n| table ns, name, index_size_bytes, usage\n| sort -index_size_bytes\n```\n\nUnderstanding this SPL\n\n**MongoDB Index Inefficiency (Usage vs Size)** — Indexes with near-zero `accesses.ops` and large `size` waste RAM and slow writes. Identifies candidates for drop or partial indexes.\n\nDocumented **Data sources**: `collStats`, `$indexStats` output. **App/TA** (typical add-on context): mongosh `$indexStats`, log export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mongodb:index_stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mongodb:index_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **usage** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where index_size_bytes > 104857600 AND usage < 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MongoDB Index Inefficiency (Usage vs Size)**): table ns, name, index_size_bytes, usage\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (namespace, index, size, ops), Bar chart (wasted index size), Single value (low-usage large indexes count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mongodb"
              ],
              "em": [
                "mongodb_mongod"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.5,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 22,
            "none": 0
          }
        },
        {
          "i": "7.3",
          "n": "Cloud-Managed Databases",
          "u": [
            {
              "i": "7.3.1",
              "n": "RDS/Aurora Performance Insights",
              "c": "high",
              "f": "beginner",
              "v": "Performance Insights identifies top SQL and wait events without agent installation. Enables rapid diagnosis of managed database bottlenecks.",
              "t": "`Splunk_TA_aws` (CloudWatch)",
              "d": "RDS Performance Insights API, CloudWatch Enhanced Monitoring",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/RDS\"\n| where metric_name IN (\"CPUUtilization\",\"DatabaseConnections\",\"ReadLatency\",\"WriteLatency\")\n| timechart span=5m avg(Average) by metric_name, DBInstanceIdentifier",
              "m": "Enable Enhanced Monitoring and Performance Insights on RDS instances. Ingest CloudWatch metrics via Splunk Add-on for AWS. Enable RDS log exports (slow query, error, general) to CloudWatch Logs for deeper analysis.",
              "z": "Multi-line chart (CPU, connections, latency), Table (top wait events), Single value (current active sessions).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws` (CloudWatch).\n• Ensure the following data sources are available: RDS Performance Insights API, CloudWatch Enhanced Monitoring.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Enhanced Monitoring and Performance Insights on RDS instances. Ingest CloudWatch metrics via Splunk Add-on for AWS. Enable RDS log exports (slow query, error, general) to CloudWatch Logs for deeper analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/RDS\"\n| where metric_name IN (\"CPUUtilization\",\"DatabaseConnections\",\"ReadLatency\",\"WriteLatency\")\n| timechart span=5m avg(Average) by metric_name, DBInstanceIdentifier\n```\n\nUnderstanding this SPL\n\n**RDS/Aurora Performance Insights** — Performance Insights identifies top SQL and wait events without agent installation. Enables rapid diagnosis of managed database bottlenecks.\n\nDocumented **Data sources**: RDS Performance Insights API, CloudWatch Enhanced Monitoring. **App/TA** (typical add-on context): `Splunk_TA_aws` (CloudWatch). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where metric_name IN (\"CPUUtilization\",\"DatabaseConnections\",\"ReadLatency\",\"WriteLatency\")` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by metric_name, DBInstanceIdentifier** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Multi-line chart (CPU, connections, latency), Table (top wait events), Single value (current active sessions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow Oracle instance and space signals so the team can keep service levels and archivelog management in line with how the business actually uses the database.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.2",
              "n": "Automated Failover Events",
              "c": "critical",
              "f": "beginner",
              "v": "Managed database failovers cause brief outages. Detection enables impact analysis and root cause investigation.",
              "t": "`Splunk_TA_aws` (CloudTrail/EventBridge), Azure Activity Log",
              "d": "RDS events, Azure SQL activity log, Cloud SQL admin activity",
              "q": "index=aws sourcetype=\"aws:cloudwatch:events\"\n| search detail.EventCategories=\"failover\"\n| table _time, detail.SourceIdentifier, detail.Message",
              "m": "Ingest RDS event subscriptions via SNS → SQS → Splunk. Filter for failover events. Alert immediately with PagerDuty/ServiceNow integration. Correlate with application error spikes to measure impact duration.",
              "z": "Timeline (failover events), Table (failover details), Single value (days since last failover).",
              "kfp": "Autoscale events, service tier changes, and Microsoft-side maintenance in Azure can move CPU and storage metrics; compare with the Azure service health and your deployment pipeline.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws` (CloudTrail/EventBridge), Azure Activity Log.\n• Ensure the following data sources are available: RDS events, Azure SQL activity log, Cloud SQL admin activity.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest RDS event subscriptions via SNS → SQS → Splunk. Filter for failover events. Alert immediately with PagerDuty/ServiceNow integration. Correlate with application error spikes to measure impact duration.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch:events\"\n| search detail.EventCategories=\"failover\"\n| table _time, detail.SourceIdentifier, detail.Message\n```\n\nUnderstanding this SPL\n\n**Automated Failover Events** — Managed database failovers cause brief outages. Detection enables impact analysis and root cause investigation.\n\nDocumented **Data sources**: RDS events, Azure SQL activity log, Cloud SQL admin activity. **App/TA** (typical add-on context): `Splunk_TA_aws` (CloudTrail/EventBridge), Azure Activity Log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Automated Failover Events**): table _time, detail.SourceIdentifier, detail.Message\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (failover events), Table (failover details), Single value (days since last failover).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Managed database failovers cause brief outages We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.3",
              "n": "Read Replica Lag",
              "c": "high",
              "f": "beginner",
              "v": "Replica lag affects read consistency for applications using read replicas. Monitoring prevents stale data serving.",
              "t": "Cloud provider TAs (CloudWatch, Azure Monitor)",
              "d": "CloudWatch `ReplicaLag` metric, Azure SQL `replication_lag`",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/RDS\" metric_name=\"ReplicaLag\"\n| timechart span=5m max(Maximum) as replica_lag_sec by DBInstanceIdentifier\n| where replica_lag_sec > 30",
              "m": "Ingest CloudWatch RDS metrics. Alert when ReplicaLag exceeds application tolerance (e.g., >30 seconds). Track trend and correlate with write workload spikes. Alert on replica lag growing consistently.",
              "z": "Line chart (replica lag over time), Single value (current max lag), Table (replicas with lag).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud provider TAs (CloudWatch, Azure Monitor).\n• Ensure the following data sources are available: CloudWatch `ReplicaLag` metric, Azure SQL `replication_lag`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CloudWatch RDS metrics. Alert when ReplicaLag exceeds application tolerance (e.g., >30 seconds). Track trend and correlate with write workload spikes. Alert on replica lag growing consistently.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/RDS\" metric_name=\"ReplicaLag\"\n| timechart span=5m max(Maximum) as replica_lag_sec by DBInstanceIdentifier\n| where replica_lag_sec > 30\n```\n\nUnderstanding this SPL\n\n**Read Replica Lag** — Replica lag affects read consistency for applications using read replicas. Monitoring prevents stale data serving.\n\nDocumented **Data sources**: CloudWatch `ReplicaLag` metric, Azure SQL `replication_lag`. **App/TA** (typical add-on context): Cloud provider TAs (CloudWatch, Azure Monitor). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by DBInstanceIdentifier** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where replica_lag_sec > 30` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (replica lag over time), Single value (current max lag), Table (replicas with lag).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.4",
              "n": "Storage Auto-Scaling Events",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks storage auto-scaling events for cost awareness and identifies databases with rapid growth needing attention.",
              "t": "Cloud provider TAs",
              "d": "CloudTrail (ModifyDBInstance), Azure Activity Log",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventName=\"ModifyDBInstance\"\n| spath output=allocated requestParameters.allocatedStorage\n| where isnotnull(allocated)\n| table _time, requestParameters.dBInstanceIdentifier, allocated, userIdentity.principalId",
              "m": "Ingest CloudTrail events. Filter for storage modification events. Track growth frequency per database. Alert when auto-scaling occurs more than twice per week, indicating rapid growth needing review.",
              "z": "Timeline (scaling events), Table (databases with scaling history), Bar chart (scaling frequency by database).",
              "kfp": "Autoscale events, service tier changes, and Microsoft-side maintenance in Azure can move CPU and storage metrics; compare with the Azure service health and your deployment pipeline.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud provider TAs.\n• Ensure the following data sources are available: CloudTrail (ModifyDBInstance), Azure Activity Log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CloudTrail events. Filter for storage modification events. Track growth frequency per database. Alert when auto-scaling occurs more than twice per week, indicating rapid growth needing review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventName=\"ModifyDBInstance\"\n| spath output=allocated requestParameters.allocatedStorage\n| where isnotnull(allocated)\n| table _time, requestParameters.dBInstanceIdentifier, allocated, userIdentity.principalId\n```\n\nUnderstanding this SPL\n\n**Storage Auto-Scaling Events** — Tracks storage auto-scaling events for cost awareness and identifies databases with rapid growth needing attention.\n\nDocumented **Data sources**: CloudTrail (ModifyDBInstance), Azure Activity Log. **App/TA** (typical add-on context): Cloud provider TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Filters the current rows with `where isnotnull(allocated)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Storage Auto-Scaling Events**): table _time, requestParameters.dBInstanceIdentifier, allocated, userIdentity.principalId\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (scaling events), Table (databases with scaling history), Bar chart (scaling frequency by database).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch storage Auto-Scaling Events so we get early warning if this condition starts hurting reliability or the teams who depend on the data.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.5",
              "n": "Maintenance Window Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Awareness of upcoming and completed maintenance ensures teams are prepared for potential service impact.",
              "t": "Cloud provider TAs",
              "d": "RDS event subscriptions, Azure Service Health, GCP maintenance notifications",
              "q": "index=aws sourcetype=\"aws:cloudwatch:events\"\n| search detail.EventCategories=\"maintenance\"\n| table _time, detail.SourceIdentifier, detail.Message, detail.Date\n| sort detail.Date",
              "m": "Subscribe to RDS maintenance events via SNS. Ingest into Splunk. Create calendar view of upcoming maintenance. Alert 72 hours before scheduled maintenance. Log actual impact duration after completion.",
              "z": "Table (upcoming/recent maintenance), Calendar view, Timeline (maintenance history).",
              "kfp": "Autoscale events, service tier changes, and Microsoft-side maintenance in Azure can move CPU and storage metrics; compare with the Azure service health and your deployment pipeline.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud provider TAs.\n• Ensure the following data sources are available: RDS event subscriptions, Azure Service Health, GCP maintenance notifications.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSubscribe to RDS maintenance events via SNS. Ingest into Splunk. Create calendar view of upcoming maintenance. Alert 72 hours before scheduled maintenance. Log actual impact duration after completion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch:events\"\n| search detail.EventCategories=\"maintenance\"\n| table _time, detail.SourceIdentifier, detail.Message, detail.Date\n| sort detail.Date\n```\n\nUnderstanding this SPL\n\n**Maintenance Window Tracking** — Awareness of upcoming and completed maintenance ensures teams are prepared for potential service impact.\n\nDocumented **Data sources**: RDS event subscriptions, Azure Service Health, GCP maintenance notifications. **App/TA** (typical add-on context): Cloud provider TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Maintenance Window Tracking**): table _time, detail.SourceIdentifier, detail.Message, detail.Date\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (upcoming/recent maintenance), Calendar view, Timeline (maintenance history).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Awareness of upcoming and completed maintenance ensures teams are prepared for potential service impact We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.6",
              "n": "Redis Memory Fragmentation Ratio",
              "c": "medium",
              "f": "intermediate",
              "v": "Fragmentation ratio > 1.5 indicating memory inefficiency. High fragmentation wastes RAM and can trigger OOM or evictions under memory pressure.",
              "t": "Custom scripted input (redis-cli INFO memory)",
              "d": "redis-cli INFO memory (mem_fragmentation_ratio)",
              "q": "index=database sourcetype=\"redis:info\"\n| where mem_fragmentation_ratio > 1.5\n| timechart span=15m avg(mem_fragmentation_ratio) as frag_ratio by host\n| where frag_ratio > 1.5",
              "m": "Create scripted input running `redis-cli INFO memory` every 15 minutes. Parse `mem_fragmentation_ratio` (used_memory_rss/used_memory). Alert when ratio exceeds 1.5. Track `used_memory_rss` and `used_memory` for trend analysis. Consider `MEMORY PURGE` (Redis 4+) or restart for severe fragmentation. Correlate with eviction rate.",
              "z": "Line chart (fragmentation ratio over time), Gauge (current ratio), Table (hosts with high fragmentation).",
              "kfp": "AOF rewrites, RDB saves, and replica full syncs can spike latency or CPU briefly on healthy hosts; check Redis persistence settings and the backup schedule.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (redis-cli INFO memory).\n• Ensure the following data sources are available: redis-cli INFO memory (mem_fragmentation_ratio).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted input running `redis-cli INFO memory` every 15 minutes. Parse `mem_fragmentation_ratio` (used_memory_rss/used_memory). Alert when ratio exceeds 1.5. Track `used_memory_rss` and `used_memory` for trend analysis. Consider `MEMORY PURGE` (Redis 4+) or restart for severe fragmentation. Correlate with eviction rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"redis:info\"\n| where mem_fragmentation_ratio > 1.5\n| timechart span=15m avg(mem_fragmentation_ratio) as frag_ratio by host\n| where frag_ratio > 1.5\n```\n\nUnderstanding this SPL\n\n**Redis Memory Fragmentation Ratio** — Fragmentation ratio > 1.5 indicating memory inefficiency. High fragmentation wastes RAM and can trigger OOM or evictions under memory pressure.\n\nDocumented **Data sources**: redis-cli INFO memory (mem_fragmentation_ratio). **App/TA** (typical add-on context): Custom scripted input (redis-cli INFO memory). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: redis:info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"redis:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where mem_fragmentation_ratio > 1.5` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where frag_ratio > 1.5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (fragmentation ratio over time), Gauge (current ratio), Table (hosts with high fragmentation).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow memory, persistence, and replication on Redis so we can catch eviction and lag before a cache or queue tier becomes the bottleneck for applications.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "redis"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.7",
              "n": "Redis Keyspace Hit / Miss Ratio",
              "c": "medium",
              "f": "intermediate",
              "v": "Cache effectiveness trending. Low hit ratio indicates cache is not serving requests effectively, increasing load on backing stores.",
              "t": "Custom scripted input (redis-cli INFO stats)",
              "d": "redis-cli INFO stats (keyspace_hits, keyspace_misses)",
              "q": "index=database sourcetype=\"redis:info\"\n| eval total_ops=keyspace_hits+keyspace_misses\n| eval hit_ratio_pct=round(100*keyspace_hits/nullif(total_ops,0), 2)\n| where hit_ratio_pct < 90\n| timechart span=15m avg(hit_ratio_pct) as hit_ratio_pct by host",
              "m": "Poll `redis-cli INFO stats` every 15 minutes. Extract `keyspace_hits` and `keyspace_misses`. Compute hit_ratio = hits/(hits+misses)*100. Alert when hit ratio drops below 90% for sustained periods. Track trend to identify cache warming after restarts or workload shifts. Correlate with eviction rate and memory usage.",
              "z": "Gauge (keyspace hit ratio %), Line chart (hit ratio over time), Table (hosts with low hit ratio).",
              "kfp": "AOF rewrites, RDB saves, and replica full syncs can spike latency or CPU briefly on healthy hosts; check Redis persistence settings and the backup schedule.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (redis-cli INFO stats).\n• Ensure the following data sources are available: redis-cli INFO stats (keyspace_hits, keyspace_misses).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `redis-cli INFO stats` every 15 minutes. Extract `keyspace_hits` and `keyspace_misses`. Compute hit_ratio = hits/(hits+misses)*100. Alert when hit ratio drops below 90% for sustained periods. Track trend to identify cache warming after restarts or workload shifts. Correlate with eviction rate and memory usage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"redis:info\"\n| eval total_ops=keyspace_hits+keyspace_misses\n| eval hit_ratio_pct=round(100*keyspace_hits/nullif(total_ops,0), 2)\n| where hit_ratio_pct < 90\n| timechart span=15m avg(hit_ratio_pct) as hit_ratio_pct by host\n```\n\nUnderstanding this SPL\n\n**Redis Keyspace Hit / Miss Ratio** — Cache effectiveness trending. Low hit ratio indicates cache is not serving requests effectively, increasing load on backing stores.\n\nDocumented **Data sources**: redis-cli INFO stats (keyspace_hits, keyspace_misses). **App/TA** (typical add-on context): Custom scripted input (redis-cli INFO stats). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: redis:info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"redis:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **total_ops** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **hit_ratio_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hit_ratio_pct < 90` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (keyspace hit ratio %), Line chart (hit ratio over time), Table (hosts with low hit ratio).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow buffer and cache hit rates so we can see when memory is too small for the working set and query latency starts to depend on disk more than it should.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "redis"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.8",
              "n": "Aurora Serverless Scaling Events",
              "c": "medium",
              "f": "beginner",
              "v": "ACU (capacity unit) scale-up/down events explain latency and cost. Tracks whether scaling policy matches workload bursts.",
              "t": "`Splunk_TA_aws` (RDS events, CloudWatch)",
              "d": "RDS event categories `notification`, `serverless`, CloudWatch `ServerlessDatabaseCapacity`",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/RDS\" metric_name=\"ServerlessDatabaseCapacity\"\n| timechart span=5m avg(Average) as acu by DBClusterIdentifier",
              "m": "Ingest ACU metric and RDS events for scale actions. Alert on repeated scale-to-max or throttling. Correlate with `DatabaseConnections` and CPU.",
              "z": "Line chart (ACU over time), Timeline (scaling events), Table (clusters at max ACU).",
              "kfp": "Planned changes, load tests, and vendor maintenance in the data platform can move the same metrics this search uses; we compare to baselines, change records, and on-call context before we treat a hit as a production incident.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws` (RDS events, CloudWatch).\n• Ensure the following data sources are available: RDS event categories `notification`, `serverless`, CloudWatch `ServerlessDatabaseCapacity`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest ACU metric and RDS events for scale actions. Alert on repeated scale-to-max or throttling. Correlate with `DatabaseConnections` and CPU.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/RDS\" metric_name=\"ServerlessDatabaseCapacity\"\n| timechart span=5m avg(Average) as acu by DBClusterIdentifier\n```\n\nUnderstanding this SPL\n\n**Aurora Serverless Scaling Events** — ACU (capacity unit) scale-up/down events explain latency and cost. Tracks whether scaling policy matches workload bursts.\n\nDocumented **Data sources**: RDS event categories `notification`, `serverless`, CloudWatch `ServerlessDatabaseCapacity`. **App/TA** (typical add-on context): `Splunk_TA_aws` (RDS events, CloudWatch). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by DBClusterIdentifier** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (ACU over time), Timeline (scaling events), Table (clusters at max ACU).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "ACU (capacity unit) scale-up/down events explain latency and cost We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.9",
              "n": "Azure Cosmos DB RU Consumption",
              "c": "high",
              "f": "beginner",
              "v": "Normalized RU/s consumption vs provisioned throughput identifies hot partitions and autoscale effectiveness.",
              "t": "`Splunk_TA_microsoft-cloudservices`, Azure Monitor metrics",
              "d": "`NormalizedRUConsumption`, `Total Request Units`",
              "q": "index=azure sourcetype=\"mssql:azuremonitor\" OR sourcetype=\"azure:metrics\"\n| search metric_name=\"NormalizedRUConsumption\" OR \"*Cosmos*\"\n| timechart span=5m avg(average) as norm_ru by DatabaseName, CollectionName\n| where norm_ru > 0.9",
              "m": "Map exact metric names from your Azure diagnostic settings. Alert when normalized consumption >90% sustained. Split by partition key if available in custom dimensions.",
              "z": "Line chart (RU consumption %), Table (collections over threshold), Single value (hottest collection).",
              "kfp": "Autoscale events, service tier changes, and Microsoft-side maintenance in Azure can move CPU and storage metrics; compare with the Azure service health and your deployment pipeline.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`, Azure Monitor metrics.\n• Ensure the following data sources are available: `NormalizedRUConsumption`, `Total Request Units`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap exact metric names from your Azure diagnostic settings. Alert when normalized consumption >90% sustained. Split by partition key if available in custom dimensions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mssql:azuremonitor\" OR sourcetype=\"azure:metrics\"\n| search metric_name=\"NormalizedRUConsumption\" OR \"*Cosmos*\"\n| timechart span=5m avg(average) as norm_ru by DatabaseName, CollectionName\n| where norm_ru > 0.9\n```\n\nUnderstanding this SPL\n\n**Azure Cosmos DB RU Consumption** — Normalized RU/s consumption vs provisioned throughput identifies hot partitions and autoscale effectiveness.\n\nDocumented **Data sources**: `NormalizedRUConsumption`, `Total Request Units`. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`, Azure Monitor metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mssql:azuremonitor, azure:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mssql:azuremonitor\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by DatabaseName, CollectionName** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where norm_ru > 0.9` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (RU consumption %), Table (collections over threshold), Single value (hottest collection).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Normalized RU/s consumption vs provisioned throughput identifies hot partitions and autoscale effectiveness We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.10",
              "n": "Cloud Spanner Instance Health",
              "c": "critical",
              "f": "intermediate",
              "v": "CPU utilization, hot spots, and replication delay for Spanner nodes indicate risk of write/read stalls on globally distributed data.",
              "t": "GCP Monitoring TA, scripted export",
              "d": "`spanner.googleapis.com/instance/cpu/utilization`, `transaction_count`, `streaming_pull_response_count`",
              "q": "index=gcp sourcetype=\"gcp:monitoring\" metric_type=\"spanner.googleapis.com/instance/cpu/utilization\"\n| timechart span=5m avg(value) as cpu_util by instance_id\n| where cpu_util > 0.65",
              "m": "Ingest Spanner instance metrics per project. Alert on high CPU or increasing 99p latency metrics. Use query insights export for hot keys if enabled.",
              "z": "Line chart (CPU and latency), Table (instances over SLO), Heatmap (instance × region).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GCP Monitoring TA, scripted export.\n• Ensure the following data sources are available: `spanner.googleapis.com/instance/cpu/utilization`, `transaction_count`, `streaming_pull_response_count`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Spanner instance metrics per project. Alert on high CPU or increasing 99p latency metrics. Use query insights export for hot keys if enabled.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"gcp:monitoring\" metric_type=\"spanner.googleapis.com/instance/cpu/utilization\"\n| timechart span=5m avg(value) as cpu_util by instance_id\n| where cpu_util > 0.65\n```\n\nUnderstanding this SPL\n\n**Cloud Spanner Instance Health** — CPU utilization, hot spots, and replication delay for Spanner nodes indicate risk of write/read stalls on globally distributed data.\n\nDocumented **Data sources**: `spanner.googleapis.com/instance/cpu/utilization`, `transaction_count`, `streaming_pull_response_count`. **App/TA** (typical add-on context): GCP Monitoring TA, scripted export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: gcp:monitoring. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"gcp:monitoring\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by instance_id** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where cpu_util > 0.65` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU and latency), Table (instances over SLO), Heatmap (instance × region).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.11",
              "n": "Managed Database Failover Events (Multi-Cloud)",
              "c": "critical",
              "f": "beginner",
              "v": "Single search across RDS failover, Azure SQL failover, and Cloud SQL failover for hybrid teams. Supplements UC-7.3.2 with normalized fields.",
              "t": "CloudTrail, Azure Activity Log, GCP Audit Logs",
              "d": "`Failover`, `failover`, `switchover` events",
              "q": "(index=aws sourcetype=\"aws:cloudwatch:events\") OR (index=azure sourcetype=\"azure:activity\") OR (index=gcp sourcetype=\"gcp:audit\")\n| search failover OR Failover OR switchover\n| eval cloud=case(index==\"aws\",\"AWS\", index==\"azure\",\"Azure\", index==\"gcp\",\"GCP\",1=1,\"unknown\")\n| table _time, cloud, resource_name, message\n| sort -_time",
              "m": "Normalize resource identifiers in CIM-style fields at ingest. Route to incident workflow with application dependency tags.",
              "z": "Timeline (failovers by cloud), Table (resource, cloud, time), Single value (failovers 30d).",
              "kfp": "Autoscale events, service tier changes, and Microsoft-side maintenance in Azure can move CPU and storage metrics; compare with the Azure service health and your deployment pipeline.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CloudTrail, Azure Activity Log, GCP Audit Logs.\n• Ensure the following data sources are available: `Failover`, `failover`, `switchover` events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize resource identifiers in CIM-style fields at ingest. Route to incident workflow with application dependency tags.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=aws sourcetype=\"aws:cloudwatch:events\") OR (index=azure sourcetype=\"azure:activity\") OR (index=gcp sourcetype=\"gcp:audit\")\n| search failover OR Failover OR switchover\n| eval cloud=case(index==\"aws\",\"AWS\", index==\"azure\",\"Azure\", index==\"gcp\",\"GCP\",1=1,\"unknown\")\n| table _time, cloud, resource_name, message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Managed Database Failover Events (Multi-Cloud)** — Single search across RDS failover, Azure SQL failover, and Cloud SQL failover for hybrid teams. Supplements UC-7.3.2 with normalized fields.\n\nDocumented **Data sources**: `Failover`, `failover`, `switchover` events. **App/TA** (typical add-on context): CloudTrail, Azure Activity Log, GCP Audit Logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, azure, gcp; **sourcetype**: aws:cloudwatch:events, azure:activity, gcp:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=azure, index=gcp, sourcetype=\"aws:cloudwatch:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **cloud** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Managed Database Failover Events (Multi-Cloud)**): table _time, cloud, resource_name, message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (failovers by cloud), Table (resource, cloud, time), Single value (failovers 30d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Single search across cloud databases failover, Azure SQL failover, and Cloud SQL failover for hybrid teams We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.12",
              "n": "Azure SQL Database DTU Exhaustion",
              "c": "critical",
              "f": "beginner",
              "v": "DTU/vCore saturation causes throttling and query timeouts. Distinct from generic RDS CPU for Azure-only deployments.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "`dtu_consumption_percent`, `cpu_percent`, `data_io_percent`",
              "q": "index=azure sourcetype=\"azure:sql:metrics\"\n| where dtu_consumption_percent > 85 OR cpu_percent > 90\n| timechart span=5m max(dtu_consumption_percent) as dtu_pct by database_name, elastic_pool_name",
              "m": "Enable Azure Monitor metrics for SQL DB/elastic pool. Alert on sustained high DTU%. Recommend tier upgrade or elastic pool rebalance.",
              "z": "Line chart (DTU %), Gauge (current DTU), Table (databases over 85%).",
              "kfp": "Autoscale events, service tier changes, and Microsoft-side maintenance in Azure can move CPU and storage metrics; compare with the Azure service health and your deployment pipeline.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: `dtu_consumption_percent`, `cpu_percent`, `data_io_percent`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Azure Monitor metrics for SQL DB/elastic pool. Alert on sustained high DTU%. Recommend tier upgrade or elastic pool rebalance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:sql:metrics\"\n| where dtu_consumption_percent > 85 OR cpu_percent > 90\n| timechart span=5m max(dtu_consumption_percent) as dtu_pct by database_name, elastic_pool_name\n```\n\nUnderstanding this SPL\n\n**Azure SQL Database DTU Exhaustion** — DTU/vCore saturation causes throttling and query timeouts. Distinct from generic RDS CPU for Azure-only deployments.\n\nDocumented **Data sources**: `dtu_consumption_percent`, `cpu_percent`, `data_io_percent`. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:sql:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:sql:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where dtu_consumption_percent > 85 OR cpu_percent > 90` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by database_name, elastic_pool_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (DTU %), Gauge (current DTU), Table (databases over 85%).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "DTU/vCore saturation causes throttling and query timeouts We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.13",
              "n": "Cloud SQL Storage Auto-Grow Events",
              "c": "medium",
              "f": "beginner",
              "v": "Automatic storage increases for GCP Cloud SQL (and similar) signal rapid data growth and cost impact.",
              "t": "GCP audit logs, Cloud SQL Admin API events",
              "d": "`storageResize`, disk size change operations",
              "q": "index=gcp sourcetype=\"gcp:audit\" protoPayload.methodName=\"*.sql.instances.patch\"\n| spath output=new_disk_gb protoPayload.request.settings.dataDiskSizeGb\n| where isnotnull(new_disk_gb)\n| table _time, resourceName, new_disk_gb, protoPayload.authenticationInfo.principalEmail",
              "m": "Parse patch operations that change disk size. Alert when more than one resize per week per instance. Forecast disk from `disk_utilization` metrics.",
              "z": "Timeline (resize events), Table (instance, new size GB), Line chart (disk size over time).",
              "kfp": "Capacity grows during ETL loads, month-end batch processing, or data migrations. Growth from `ANALYZE`, statistics runs, or one-off bulk loads is often expected when it matches the schedule.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GCP audit logs, Cloud SQL Admin API events.\n• Ensure the following data sources are available: `storageResize`, disk size change operations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse patch operations that change disk size. Alert when more than one resize per week per instance. Forecast disk from `disk_utilization` metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gcp sourcetype=\"gcp:audit\" protoPayload.methodName=\"*.sql.instances.patch\"\n| spath output=new_disk_gb protoPayload.request.settings.dataDiskSizeGb\n| where isnotnull(new_disk_gb)\n| table _time, resourceName, new_disk_gb, protoPayload.authenticationInfo.principalEmail\n```\n\nUnderstanding this SPL\n\n**Cloud SQL Storage Auto-Grow Events** — Automatic storage increases for GCP Cloud SQL (and similar) signal rapid data growth and cost impact.\n\nDocumented **Data sources**: `storageResize`, disk size change operations. **App/TA** (typical add-on context): GCP audit logs, Cloud SQL Admin API events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gcp; **sourcetype**: gcp:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gcp, sourcetype=\"gcp:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Filters the current rows with `where isnotnull(new_disk_gb)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cloud SQL Storage Auto-Grow Events**): table _time, resourceName, new_disk_gb, protoPayload.authenticationInfo.principalEmail\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (resize events), Table (instance, new size GB), Line chart (disk size over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch managed cloud SQL storage auto-grow events so we can align billing and catch runaway growth before limits or throttling hit production.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.14",
              "n": "Managed Backup Retention Compliance",
              "c": "critical",
              "f": "intermediate",
              "v": "Verifies automated backup snapshots exist within required retention for RDS, Azure SQL LTR, and Cloud SQL backups.",
              "t": "Cloud APIs (describe-db-snapshots, backup list)",
              "d": "Snapshot timestamps, backup policy metadata",
              "q": "index=cloud sourcetype=\"rds:snapshot_inventory\"\n| stats latest(snapshot_time) as last_snap by db_instance_identifier\n| eval days_since=round((now()-strptime(last_snap,\"%Y-%m-%d %H:%M:%S\"))/86400)\n| where days_since > 1\n| table db_instance_identifier last_snap days_since",
              "m": "Ingest daily snapshot inventory from AWS/Azure/GCP APIs. Compare to RPO policy (e.g., last snapshot <25h). Alert on missing snapshot for production tier.",
              "z": "Table (instances missing recent backup), Single value (non-compliant count), Calendar (snapshot coverage).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud APIs (describe-db-snapshots, backup list).\n• Ensure the following data sources are available: Snapshot timestamps, backup policy metadata.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest daily snapshot inventory from AWS/Azure/GCP APIs. Compare to RPO policy (e.g., last snapshot <25h). Alert on missing snapshot for production tier.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"rds:snapshot_inventory\"\n| stats latest(snapshot_time) as last_snap by db_instance_identifier\n| eval days_since=round((now()-strptime(last_snap,\"%Y-%m-%d %H:%M:%S\"))/86400)\n| where days_since > 1\n| table db_instance_identifier last_snap days_since\n```\n\nUnderstanding this SPL\n\n**Managed Backup Retention Compliance** — Verifies automated backup snapshots exist within required retention for RDS, Azure SQL LTR, and Cloud SQL backups.\n\nDocumented **Data sources**: Snapshot timestamps, backup policy metadata. **App/TA** (typical add-on context): Cloud APIs (describe-db-snapshots, backup list). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: rds:snapshot_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"rds:snapshot_inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by db_instance_identifier** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since > 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Managed Backup Retention Compliance**): table db_instance_identifier last_snap days_since\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (instances missing recent backup), Single value (non-compliant count), Calendar (snapshot coverage).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check backup and snapshot success and growth so we can trust restores and long-term storage plans for regulation and for real incidents.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.15",
              "n": "Read Replica Lag Trending (Percentiles)",
              "c": "high",
              "f": "intermediate",
              "v": "p95/p99 replica lag exposes tail behavior missed by max-only dashboards. Applies to RDS, Aurora, and Azure read replicas.",
              "t": "CloudWatch, Azure Monitor",
              "d": "`ReplicaLag` (seconds), `physical_replication_delay`",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/RDS\" metric_name=\"ReplicaLag\"\n| timechart span=5m perc95(Maximum) as p95_lag, max(Maximum) as max_lag by DBInstanceIdentifier\n| where p95_lag > 30",
              "m": "Set SLA based on app freshness needs. Alert on p95 > threshold for 15m. Compare primary write IOPS to replica apply lag.",
              "z": "Line chart (p95/p99 replica lag), Table (replicas breaching SLA), Single value (worst p95).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CloudWatch, Azure Monitor.\n• Ensure the following data sources are available: `ReplicaLag` (seconds), `physical_replication_delay`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSet SLA based on app freshness needs. Alert on p95 > threshold for 15m. Compare primary write IOPS to replica apply lag.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/RDS\" metric_name=\"ReplicaLag\"\n| timechart span=5m perc95(Maximum) as p95_lag, max(Maximum) as max_lag by DBInstanceIdentifier\n| where p95_lag > 30\n```\n\nUnderstanding this SPL\n\n**Read Replica Lag Trending (Percentiles)** — p95/p99 replica lag exposes tail behavior missed by max-only dashboards. Applies to RDS, Aurora, and Azure read replicas.\n\nDocumented **Data sources**: `ReplicaLag` (seconds), `physical_replication_delay`. **App/TA** (typical add-on context): CloudWatch, Azure Monitor. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by DBInstanceIdentifier** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where p95_lag > 30` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95/p99 replica lag), Table (replicas breaching SLA), Single value (worst p95).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.16",
              "n": "Azure SQL Managed Instance Resource Utilization",
              "c": "high",
              "f": "beginner",
              "v": "SQL Managed Instance provides near-100% SQL Server compatibility in Azure. CPU, storage I/O, and memory pressure against provisioned limits directly impact query performance and can cause throttling.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics)",
              "d": "`sourcetype=azure:monitor:metric` (Microsoft.Sql/managedInstances)",
              "q": "index=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.sql/managedinstances\"\n| where metric_name IN (\"avg_cpu_percent\",\"io_bytes_read\",\"io_bytes_written\",\"storage_space_used_mb\")\n| timechart span=5m avg(average) as value by metric_name, resource_name",
              "m": "Collect Azure Monitor metrics for SQL Managed Instance. Key metrics: `avg_cpu_percent` (alert >85% sustained), `io_bytes_read`/`io_bytes_written` against provisioned IOPS for the service tier, and `storage_space_used_mb` versus reserved storage. Monitor `virtual_core_count` utilization to guide tier scaling decisions. Alert on sustained high CPU and storage approaching the limit.",
              "z": "Line chart (CPU % over time), Gauge (storage used vs. limit), Table (instances near capacity).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics).\n• Ensure the following data sources are available: `sourcetype=azure:monitor:metric` (Microsoft.Sql/managedInstances).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Azure Monitor metrics for SQL Managed Instance. Key metrics: `avg_cpu_percent` (alert >85% sustained), `io_bytes_read`/`io_bytes_written` against provisioned IOPS for the service tier, and `storage_space_used_mb` versus reserved storage. Monitor `virtual_core_count` utilization to guide tier scaling decisions. Alert on sustained high CPU and storage approaching the limit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.sql/managedinstances\"\n| where metric_name IN (\"avg_cpu_percent\",\"io_bytes_read\",\"io_bytes_written\",\"storage_space_used_mb\")\n| timechart span=5m avg(average) as value by metric_name, resource_name\n```\n\nUnderstanding this SPL\n\n**Azure SQL Managed Instance Resource Utilization** — SQL Managed Instance provides near-100% SQL Server compatibility in Azure. CPU, storage I/O, and memory pressure against provisioned limits directly impact query performance and can cause throttling.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Sql/managedInstances). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:monitor:metric. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:monitor:metric\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where metric_name IN (\"avg_cpu_percent\",\"io_bytes_read\",\"io_bytes_written\",\"storage_space_used_mb\")` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by metric_name, resource_name** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Storage by Performance.host span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure SQL Managed Instance Resource Utilization** — SQL Managed Instance provides near-100% SQL Server compatibility in Azure. CPU, storage I/O, and memory pressure against provisioned limits directly impact query performance and can cause throttling.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Sql/managedInstances). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.Storage` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU % over time), Gauge (storage used vs. limit), Table (instances near capacity).",
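\n\nOptional storage-headroom sketch: Step 1 compares `storage_space_used_mb` with reserved storage, which Azure Monitor does not emit alongside the metric. A minimal sketch, assuming a hypothetical lookup `sqlmi_reserved_storage.csv` with fields `resource_name` and `reserved_gb` that you maintain from your provisioning records; the 85% threshold is an assumption to tune:\n\n```spl\nindex=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.sql/managedinstances\" metric_name=\"storage_space_used_mb\"\n| stats latest(average) as used_mb by resource_name\n| lookup sqlmi_reserved_storage.csv resource_name OUTPUT reserved_gb\n| eval pct_used=round(used_mb/1024/reserved_gb*100,1)\n| where pct_used > 85\n```",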
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow SQL Server performance and availability signals so the team can catch regressions, AG issues, and tempdb pressure on business-critical systems.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Storage by Performance.host span=5m | sort - agg_value",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.3.17",
              "n": "Azure SQL Managed Instance Failover Group Status",
              "c": "critical",
              "f": "intermediate",
              "v": "Failover groups provide geo-redundancy for SQL Managed Instance. Monitoring replication lag and failover events ensures disaster recovery readiness and detects unplanned failovers.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/Activity Log)",
              "d": "`sourcetype=azure:monitor:activity`, `sourcetype=azure:monitor:metric`",
              "q": "index=cloud sourcetype=\"azure:monitor:activity\" resourceType=\"microsoft.sql/managedinstances/failovergroups\"\n| where operationName=\"Microsoft.Sql/managedInstances/failoverGroups/failover/action\"\n| table _time, caller, status, resource_name\n| sort -_time",
              "m": "Collect Activity Log events for failover group operations and Azure Monitor metrics for replication state. Alert on unplanned failover events (not initiated by known maintenance windows). Monitor `ReplicationState` metric — alert when state is not `SEEDING` or `CATCH_UP` for extended periods. Track replication lag to validate RPO compliance.",
              "z": "Timeline (failover events), Single value (current replication state), Table (failover history).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/Activity Log).\n• Ensure the following data sources are available: `sourcetype=azure:monitor:activity`, `sourcetype=azure:monitor:metric`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Activity Log events for failover group operations and Azure Monitor metrics for replication state. Alert on unplanned failover events (not initiated by known maintenance windows). Monitor `ReplicationState` metric — alert when state is not `SEEDING` or `CATCH_UP` for extended periods. Track replication lag to validate RPO compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:monitor:activity\" resourceType=\"microsoft.sql/managedinstances/failovergroups\"\n| where operationName=\"Microsoft.Sql/managedInstances/failoverGroups/failover/action\"\n| table _time, caller, status, resource_name\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Azure SQL Managed Instance Failover Group Status** — Failover groups provide geo-redundancy for SQL Managed Instance. Monitoring replication lag and failover events ensures disaster recovery readiness and detects unplanned failovers.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:activity`, `sourcetype=azure:monitor:metric`. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/Activity Log). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:monitor:activity, microsoft.sql/managedinstances/failovergroups. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:monitor:activity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where operationName=\"Microsoft.Sql/managedInstances/failoverGroups/failover/action\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Azure SQL Managed Instance Failover Group Status**): table _time, caller, status, resource_name\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (failover events), Single value (current replication state), Table (failover history).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 17,
            "none": 0
          }
        },
        {
          "i": "7.4",
          "n": "Data Warehouses & Analytics Platforms",
          "u": [
            {
              "i": "7.4.1",
              "n": "Query Performance Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Identifies expensive and slow queries impacting warehouse performance and cost. Enables query optimization and cost reduction.",
              "t": "Custom API input (Snowflake ACCOUNT_USAGE), DB Connect",
              "d": "Snowflake `QUERY_HISTORY`, BigQuery `INFORMATION_SCHEMA.JOBS`, Redshift `STL_QUERY`",
              "q": "index=datawarehouse sourcetype=\"snowflake:query_history\"\n| where EXECUTION_STATUS=\"SUCCESS\" AND TOTAL_ELAPSED_TIME > 60000\n| stats avg(TOTAL_ELAPSED_TIME) as avg_ms, sum(CREDITS_USED_CLOUD_SERVICES) as credits by USER_NAME, WAREHOUSE_NAME\n| sort -credits",
              "m": "Poll query history views via REST API or DB Connect daily. Track query duration, queue time, and cost. Identify top resource consumers. Create weekly optimization report for data engineering teams.",
              "z": "Table (expensive queries), Bar chart (cost/duration by warehouse), Line chart (query performance trend).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input (Snowflake ACCOUNT_USAGE), DB Connect.\n• Ensure the following data sources are available: Snowflake `QUERY_HISTORY`, BigQuery `INFORMATION_SCHEMA.JOBS`, Redshift `STL_QUERY`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll query history views via REST API or DB Connect daily. Track query duration, queue time, and cost. Identify top resource consumers. Create weekly optimization report for data engineering teams.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=datawarehouse sourcetype=\"snowflake:query_history\"\n| where EXECUTION_STATUS=\"SUCCESS\" AND TOTAL_ELAPSED_TIME > 60000\n| stats avg(TOTAL_ELAPSED_TIME) as avg_ms, sum(CREDITS_USED_CLOUD_SERVICES) as credits by USER_NAME, WAREHOUSE_NAME\n| sort -credits\n```\n\nUnderstanding this SPL\n\n**Query Performance Trending** — Identifies expensive and slow queries impacting warehouse performance and cost. Enables query optimization and cost reduction.\n\nDocumented **Data sources**: Snowflake `QUERY_HISTORY`, BigQuery `INFORMATION_SCHEMA.JOBS`, Redshift `STL_QUERY`. **App/TA** (typical add-on context): Custom API input (Snowflake ACCOUNT_USAGE), DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: datawarehouse; **sourcetype**: snowflake:query_history. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=datawarehouse, sourcetype=\"snowflake:query_history\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where EXECUTION_STATUS=\"SUCCESS\" AND TOTAL_ELAPSED_TIME > 60000` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by USER_NAME, WAREHOUSE_NAME** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (expensive queries), Bar chart (cost/duration by warehouse), Line chart (query performance trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "servicenow",
                "snowflake"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.2",
              "n": "Cluster Scaling Events",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks auto-scaling decisions for cost optimization. Identifies whether current scaling policies match workload patterns.",
              "t": "Custom API input, cloud provider TAs",
              "d": "Snowflake `WAREHOUSE_EVENTS_HISTORY`, Redshift resize events, BigQuery slot utilization",
              "q": "index=datawarehouse sourcetype=\"snowflake:warehouse_events\"\n| search event_name IN (\"RESIZE_CLUSTER\",\"SUSPEND_CLUSTER\",\"RESUME_CLUSTER\")\n| timechart span=1h count by event_name, warehouse_name",
              "m": "Poll warehouse event history. Track resume/suspend/scaling frequency. Correlate with query concurrency to validate scaling policies. Alert on unexpected scaling events outside business hours.",
              "z": "Timeline (scaling events), Stacked bar (events by type per day), Table (warehouse scaling summary).",
              "kfp": "Larger ETL or analyst workloads and seasonal reporting can raise warehouse credits or queueing without a fault; tune to business hours and team-specific baselines.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input, cloud provider TAs.\n• Ensure the following data sources are available: Snowflake `WAREHOUSE_EVENTS_HISTORY`, Redshift resize events, BigQuery slot utilization.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll warehouse event history. Track resume/suspend/scaling frequency. Correlate with query concurrency to validate scaling policies. Alert on unexpected scaling events outside business hours.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=datawarehouse sourcetype=\"snowflake:warehouse_events\"\n| search event_name IN (\"RESIZE_CLUSTER\",\"SUSPEND_CLUSTER\",\"RESUME_CLUSTER\")\n| timechart span=1h count by event_name, warehouse_name\n```\n\nUnderstanding this SPL\n\n**Cluster Scaling Events** — Tracks auto-scaling decisions for cost optimization. Identifies whether current scaling policies match workload patterns.\n\nDocumented **Data sources**: Snowflake `WAREHOUSE_EVENTS_HISTORY`, Redshift resize events, BigQuery slot utilization. **App/TA** (typical add-on context): Custom API input, cloud provider TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: datawarehouse; **sourcetype**: snowflake:warehouse_events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=datawarehouse, sourcetype=\"snowflake:warehouse_events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by event_name, warehouse_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (scaling events), Stacked bar (events by type per day), Table (warehouse scaling summary).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Cluster Scaling Events so we can keep this part of the data platform within the capacity and quality targets our teams expect.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.3",
              "n": "Data Pipeline Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Failed or delayed ETL/ELT pipelines cause stale data for reporting and analytics. Early detection prevents downstream impact.",
              "t": "Custom API input, orchestrator integration (Airflow, dbt)",
              "d": "Airflow task logs, dbt run results, Snowflake TASK_HISTORY, pipeline orchestrator APIs",
              "q": "index=datawarehouse sourcetype=\"airflow:task_instance\"\n| stats count(eval(state=\"failed\")) as failed, count(eval(state=\"success\")) as success, count as total by dag_id, task_id\n| eval fail_rate=round(failed/total*100,1)\n| where fail_rate > 0\n| sort -fail_rate",
              "m": "Ingest pipeline orchestrator logs (Airflow, dbt, custom). Track job outcomes, durations, and data freshness. Alert on any pipeline failure. Create data freshness SLA dashboard showing when each table was last updated. For dbt and Snowflake pipelines, create similar searches targeting their respective sourcetypes (e.g., snowflake:task_history, dbt:run_results).",
              "z": "Status grid (pipeline × status), Table (failed pipelines), Line chart (pipeline duration trend), Single value (overall success rate).",
              "kfp": "Larger ETL or analyst workloads and seasonal reporting can raise warehouse credits or queueing without a fault; tune to business hours and team-specific baselines.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input, orchestrator integration (Airflow, dbt).\n• Ensure the following data sources are available: Airflow task logs, dbt run results, Snowflake TASK_HISTORY, pipeline orchestrator APIs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest pipeline orchestrator logs (Airflow, dbt, custom). Track job outcomes, durations, and data freshness. Alert on any pipeline failure. Create data freshness SLA dashboard showing when each table was last updated. For dbt and Snowflake pipelines, create similar searches targeting their respective sourcetypes (e.g., snowflake:task_history, dbt:run_results).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=datawarehouse sourcetype=\"airflow:task_instance\"\n| stats count(eval(state=\"failed\")) as failed, count(eval(state=\"success\")) as success, count as total by dag_id, task_id\n| eval fail_rate=round(failed/total*100,1)\n| where fail_rate > 0\n| sort -fail_rate\n```\n\nUnderstanding this SPL\n\n**Data Pipeline Health** — Failed or delayed ETL/ELT pipelines cause stale data for reporting and analytics. Early detection prevents downstream impact.\n\nDocumented **Data sources**: Airflow task logs, dbt run results, Snowflake TASK_HISTORY, pipeline orchestrator APIs. **App/TA** (typical add-on context): Custom API input, orchestrator integration (Airflow, dbt). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: datawarehouse; **sourcetype**: airflow:task_instance. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=datawarehouse, sourcetype=\"airflow:task_instance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dag_id, task_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_rate > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (pipeline × status), Table (failed pipelines), Line chart (pipeline duration trend), Single value (overall success rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for repeated or suspicious sign-in activity on our databases so we can catch brute-force and misconfiguration before they become account takeovers.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.4",
              "n": "Credit / Cost per Query",
              "c": "high",
              "f": "beginner",
              "v": "Directly ties compute cost to individual queries, enabling chargeback and cost optimization. Identifies runaway queries consuming excessive resources.",
              "t": "Custom API input (Snowflake ACCOUNT_USAGE)",
              "d": "Snowflake `QUERY_HISTORY` (CREDITS_USED), BigQuery `INFORMATION_SCHEMA.JOBS` (total_bytes_billed)",
              "q": "index=datawarehouse sourcetype=\"snowflake:query_history\"\n| eval cost=CREDITS_USED_CLOUD_SERVICES * 3\n| stats sum(cost) as total_cost, count as query_count by USER_NAME, WAREHOUSE_NAME\n| eval cost_per_query=round(total_cost/query_count,2)\n| sort -total_cost",
              "m": "Poll query history with cost metrics daily. Calculate cost per query, per user, and per team (using role mapping). Create weekly cost report. Alert on individual queries exceeding cost threshold. Set up warehouse-level budgets.",
              "z": "Bar chart (cost by user/warehouse), Table (most expensive queries), Line chart (daily cost trend), Pie chart (cost by team).",
              "kfp": "Larger ETL or analyst workloads and seasonal reporting can raise warehouse credits or queueing without a fault; tune to business hours and team-specific baselines.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input (Snowflake ACCOUNT_USAGE).\n• Ensure the following data sources are available: Snowflake `QUERY_HISTORY` (CREDITS_USED), BigQuery `INFORMATION_SCHEMA.JOBS` (total_bytes_billed).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll query history with cost metrics daily. Calculate cost per query, per user, and per team (using role mapping). Create weekly cost report. Alert on individual queries exceeding cost threshold. Set up warehouse-level budgets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=datawarehouse sourcetype=\"snowflake:query_history\"\n| eval cost=CREDITS_USED_CLOUD_SERVICES * 3\n| stats sum(cost) as total_cost, count as query_count by USER_NAME, WAREHOUSE_NAME\n| eval cost_per_query=round(total_cost/query_count,2)\n| sort -total_cost\n```\n\nUnderstanding this SPL\n\n**Credit / Cost per Query** — Directly ties compute cost to individual queries, enabling chargeback and cost optimization. Identifies runaway queries consuming excessive resources.\n\nDocumented **Data sources**: Snowflake `QUERY_HISTORY` (CREDITS_USED), BigQuery `INFORMATION_SCHEMA.JOBS` (total_bytes_billed). **App/TA** (typical add-on context): Custom API input (Snowflake ACCOUNT_USAGE). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: datawarehouse; **sourcetype**: snowflake:query_history. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=datawarehouse, sourcetype=\"snowflake:query_history\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by USER_NAME, WAREHOUSE_NAME** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cost_per_query** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (cost by user/warehouse), Table (most expensive queries), Line chart (daily cost trend), Pie chart (cost by team).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Credit / Cost per Query so we can keep this part of the data platform within the capacity and quality targets our teams expect.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow",
                "snowflake"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.5",
              "n": "Warehouse Utilization",
              "c": "medium",
              "f": "beginner",
              "v": "Right-sizing warehouses reduces cost while maintaining performance. Utilization data drives scaling policy decisions.",
              "t": "Custom API input",
              "d": "Snowflake `WAREHOUSE_LOAD_HISTORY`, Redshift `WLM_QUEUE_STATE`, BigQuery reservation utilization",
              "q": "index=datawarehouse sourcetype=\"snowflake:warehouse_load\"\n| timechart span=15m avg(AVG_RUNNING) as avg_queries, avg(AVG_QUEUED_LOAD) as avg_queued by WAREHOUSE_NAME\n| where avg_queued > 1",
              "m": "Poll warehouse utilization metrics every 15 minutes. Track running vs queued queries. Alert when queuing occurs consistently (indicates undersized warehouse). Identify idle warehouses for auto-suspend policy adjustment.",
              "z": "Line chart (running vs queued queries), Heatmap (warehouse × hour utilization), Table (underutilized warehouses), Bar chart (utilization by warehouse).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input.\n• Ensure the following data sources are available: Snowflake `WAREHOUSE_LOAD_HISTORY`, Redshift `WLM_QUEUE_STATE`, BigQuery reservation utilization.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll warehouse utilization metrics every 15 minutes. Track running vs queued queries. Alert when queuing occurs consistently (indicates undersized warehouse). Identify idle warehouses for auto-suspend policy adjustment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=datawarehouse sourcetype=\"snowflake:warehouse_load\"\n| timechart span=15m avg(AVG_RUNNING) as avg_queries, avg(AVG_QUEUED_LOAD) as avg_queued by WAREHOUSE_NAME\n| where avg_queued > 1\n```\n\nUnderstanding this SPL\n\n**Warehouse Utilization** — Right-sizing warehouses reduces cost while maintaining performance. Utilization data drives scaling policy decisions.\n\nDocumented **Data sources**: Snowflake `WAREHOUSE_LOAD_HISTORY`, Redshift `WLM_QUEUE_STATE`, BigQuery reservation utilization. **App/TA** (typical add-on context): Custom API input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: datawarehouse; **sourcetype**: snowflake:warehouse_load. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=datawarehouse, sourcetype=\"snowflake:warehouse_load\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by WAREHOUSE_NAME** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_queued > 1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (running vs queued queries), Heatmap (warehouse × hour utilization), Table (underutilized warehouses), Bar chart (utilization by warehouse).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow Oracle instance and space signals so the team can keep service levels and archivelog management in line with how the business actually uses the database.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.6",
              "n": "Elasticsearch Cluster Health and Shard Status",
              "c": "critical",
              "f": "intermediate",
              "v": "Red/yellow cluster, unassigned shards, and JVM pressure indicate data availability risk. Early detection prevents data loss and service degradation.",
              "t": "Custom scripted input (ES REST API)",
              "d": "`_cluster/health`, `_cluster/stats`, `_cat/shards`",
              "q": "index=database sourcetype=\"elasticsearch:cluster_health\"\n| eval status_num=case(status=\"green\",0, status=\"yellow\",1, status=\"red\",2)\n| where status_num > 0 OR unassigned_shards > 0\n| timechart span=5m latest(status_num) as health, latest(unassigned_shards) as unassigned, latest(active_primary_shards) as primary by cluster_name",
              "m": "Poll `GET _cluster/health?level=shards` and `GET _cat/shards?v&h=index,shard,prirep,state,node` every 1–2 minutes via REST API scripted input. Parse status (green/yellow/red), unassigned_shards, active_primary_shards. Poll `_cluster/stats` for JVM heap usage. Alert on red status (critical) or yellow (warning). Alert when unassigned_shards >0. Correlate with disk space, JVM pressure, and node availability.",
              "z": "Status indicator (green/yellow/red), Single value (unassigned shards), Table (unassigned shard details), Line chart (cluster health and JVM heap over time).",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (ES REST API).\n• Ensure the following data sources are available: `_cluster/health`, `_cluster/stats`, `_cat/shards`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET _cluster/health?level=shards` and `GET _cat/shards?v&h=index,shard,prirep,state,node` every 1–2 minutes via REST API scripted input. Parse status (green/yellow/red), unassigned_shards, active_primary_shards. Poll `_cluster/stats` for JVM heap usage. Alert on red status (critical) or yellow (warning). Alert when unassigned_shards >0. Correlate with disk space, JVM pressure, and node availability.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:cluster_health\"\n| eval status_num=case(status=\"green\",0, status=\"yellow\",1, status=\"red\",2)\n| where status_num > 0 OR unassigned_shards > 0\n| timechart span=5m latest(status_num) as health, latest(unassigned_shards) as unassigned, latest(active_primary_shards) as primary by cluster_name\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Cluster Health and Shard Status** — Red/yellow cluster, unassigned shards, and JVM pressure indicate data availability risk. Early detection prevents data loss and service degradation.\n\nDocumented **Data sources**: `_cluster/health`, `_cluster/stats`, `_cat/shards`. **App/TA** (typical add-on context): Custom scripted input (ES REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:cluster_health. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:cluster_health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status_num** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where status_num > 0 OR unassigned_shards > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by cluster_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status indicator (green/yellow/red), Single value (unassigned shards), Table (unassigned shard details), Line chart (cluster health and JVM heap over time).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "elasticsearch"
              ],
              "em": [
                "elasticsearch_es"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.7",
              "n": "Elasticsearch Index Size and Document Count Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Growth forecasting for indices enables proactive storage provisioning and index lifecycle management (ILM) tuning.",
              "t": "Custom scripted input (ES REST API)",
              "d": "`_cat/indices`, `_stats`",
              "q": "index=database sourcetype=\"elasticsearch:indices\"\n| eval size_gb=round(store.size_in_bytes/1073741824, 2)\n| timechart span=1d sum(size_gb) as total_gb, sum(docs.count) as doc_count by index\n| predict total_gb as predicted_gb future_timespan=30",
              "m": "Poll `GET _cat/indices?v&h=index,docs.count,store.size&bytes=b` or `GET _stats` every 6–24 hours. Parse index name, document count, store size. Track per-index and cluster-wide growth. Use `predict` for 30-day forecast. Alert when projected size exceeds available storage. Support ILM policy tuning based on growth rate.",
              "z": "Line chart (index size and doc count with prediction), Table (indices by size and growth rate), Bar chart (top growing indices).",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (ES REST API).\n• Ensure the following data sources are available: `_cat/indices`, `_stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET _cat/indices?v&h=index,docs.count,store.size&bytes=b` or `GET _stats` every 6–24 hours. Parse index name, document count, store size. Track per-index and cluster-wide growth. Use `predict` for 30-day forecast. Alert when projected size exceeds available storage. Support ILM policy tuning based on growth rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:indices\"\n| eval size_gb=round(store.size_in_bytes/1073741824, 2)\n| timechart span=1d sum(size_gb) as total_gb, sum(docs.count) as doc_count by index\n| predict total_gb as predicted_gb future_timespan=30\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Index Size and Document Count Trending** — Growth forecasting for indices enables proactive storage provisioning and index lifecycle management (ILM) tuning.\n\nDocumented **Data sources**: `_cat/indices`, `_stats`. **App/TA** (typical add-on context): Custom scripted input (ES REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:indices. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:indices\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **size_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by index** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Elasticsearch Index Size and Document Count Trending**): predict total_gb as predicted_gb future_timespan=30\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (index size and doc count with prediction), Table (indices by size and growth rate), Bar chart (top growing indices).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "elasticsearch"
              ],
              "em": [
                "elasticsearch_es"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.8",
              "n": "ClickHouse Query Performance",
              "c": "medium",
              "f": "advanced",
              "v": "Merge operations, insert rate, and query duration indicate system health. Monitoring enables tuning and capacity planning for analytical workloads.",
              "t": "Custom scripted input (ClickHouse system tables)",
              "d": "`system.query_log`, `system.metrics`, `system.merges`",
              "q": "index=database sourcetype=\"clickhouse:query_log\"\n| where query_duration_ms > 30000\n| stats count, avg(query_duration_ms) as avg_duration_ms, sum(read_rows) as total_rows by query_kind, user\n| sort -avg_duration_ms",
              "m": "Poll `system.query_log` (or enable query_log and ingest via DB Connect/scripted input) for completed queries. Extract query_duration_ms, query_kind, read_rows, memory_usage. Poll `system.metrics` for Merge, Insert, Query metrics. Poll `system.merges` for active merge count and progress. Alert on queries >30s, merge backlog >10, or insert rate drop. Track p95/p99 query duration by type.",
              "z": "Table (slow queries with duration and rows), Line chart (query duration p95 over time), Bar chart (merge count and insert rate), Single value (active merges).",
              "kfp": "Maintenance work (ANALYZE, VACUUM, REINDEX, index builds) and ETL or large report jobs may show up as long-running or blocking — compare with the maintenance and batch schedule.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (ClickHouse system tables).\n• Ensure the following data sources are available: `system.query_log`, `system.metrics`, `system.merges`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `system.query_log` (or enable query_log and ingest via DB Connect/scripted input) for completed queries. Extract query_duration_ms, query_kind, read_rows, memory_usage. Poll `system.metrics` for Merge, Insert, Query metrics. Poll `system.merges` for active merge count and progress. Alert on queries >30s, merge backlog >10, or insert rate drop. Track p95/p99 query duration by type.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"clickhouse:query_log\"\n| where query_duration_ms > 30000\n| stats count, avg(query_duration_ms) as avg_duration_ms, sum(read_rows) as total_rows by query_kind, user\n| sort -avg_duration_ms\n```\n\nUnderstanding this SPL\n\n**ClickHouse Query Performance** — Merge operations, insert rate, and query duration indicate system health. Monitoring enables tuning and capacity planning for analytical workloads.\n\nDocumented **Data sources**: `system.query_log`, `system.metrics`, `system.merges`. **App/TA** (typical add-on context): Custom scripted input (ClickHouse system tables). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: clickhouse:query_log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"clickhouse:query_log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where query_duration_ms > 30000` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by query_kind, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (slow queries with duration and rows), Line chart (query duration p95 over time), Bar chart (merge count and insert rate), Single value (active merges).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow Oracle instance and space signals so the team can keep service levels and archivelog management in line with how the business actually uses the database.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "clickhouse"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.9",
              "n": "Snowflake Warehouse Credit Usage",
              "c": "high",
              "f": "beginner",
              "v": "Credits consumed per warehouse and role drive chargeback and right-sizing. Spikes indicate runaway queries or undersized warehouses thrashing.",
              "t": "Snowflake SQL via DB Connect, `ACCOUNT_USAGE` export",
              "d": "`WAREHOUSE_METERING_HISTORY`, `QUERY_HISTORY` (credits)",
              "q": "index=datawarehouse sourcetype=\"snowflake:warehouse_metering\"\n| bin _time span=1d\n| stats sum(credits_used) as credits by warehouse_name, _time\n| eventstats avg(credits) as avg_c, stdev(credits) as s by warehouse_name\n| where credits > avg_c + 3*s\n| table warehouse_name credits avg_c",
              "m": "Daily load from `ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY`. Alert on statistical spikes. Dashboard top warehouses by credits.",
              "z": "Line chart (credits per day by warehouse), Bar chart (top consumers), Single value (total credits MTD).",
              "kfp": "Larger ETL or analyst workloads and seasonal reporting can raise warehouse credits or queueing without a fault; tune to business hours and team-specific baselines.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Snowflake SQL via DB Connect, `ACCOUNT_USAGE` export.\n• Ensure the following data sources are available: `WAREHOUSE_METERING_HISTORY`, `QUERY_HISTORY` (credits).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDaily load from `ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY`. Alert on statistical spikes. Dashboard top warehouses by credits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=datawarehouse sourcetype=\"snowflake:warehouse_metering\"\n| bin _time span=1d\n| stats sum(credits_used) as credits by warehouse_name, _time\n| eventstats avg(credits) as avg_c, stdev(credits) as s by warehouse_name\n| where credits > avg_c + 3*s\n| table warehouse_name credits avg_c\n```\n\nUnderstanding this SPL\n\n**Snowflake Warehouse Credit Usage** — Credits consumed per warehouse and role drive chargeback and right-sizing. Spikes indicate runaway queries or undersized warehouses thrashing.\n\nDocumented **Data sources**: `WAREHOUSE_METERING_HISTORY`, `QUERY_HISTORY` (credits). **App/TA** (typical add-on context): Snowflake SQL via DB Connect, `ACCOUNT_USAGE` export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: datawarehouse; **sourcetype**: snowflake:warehouse_metering. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=datawarehouse, sourcetype=\"snowflake:warehouse_metering\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by warehouse_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by warehouse_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where credits > avg_c + 3*s` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Snowflake Warehouse Credit Usage**): table warehouse_name credits avg_c\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (credits per day by warehouse), Bar chart (top consumers), Single value (total credits MTD).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Snowflake Warehouse Credit Usage so we can keep this part of the data platform within the capacity and quality targets our teams expect.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "servicenow",
                "snowflake"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.10",
              "n": "Databricks Cluster Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "Cluster DBU hours, worker counts, and idle time reveal over-provisioned pools and jobs that keep clusters alive unnecessarily.",
              "t": "Databricks audit logs, cluster events API, `system.billing.usage`",
              "d": "`clusters` API events, billing export",
              "q": "index=databricks sourcetype=\"databricks:cluster_event\"\n| where event_type IN (\"RUNNING\",\"TERMINATED\")\n| bin _time span=1d\n| stats sum(uptime_seconds) as uptime, dc(cluster_id) as clusters by, _time\n| eval dbu_estimate=uptime/3600*0.1",
              "m": "Ingest cluster lifecycle and DBU billing lines. Alert on clusters RUNNING >8h with low task activity (correlate with job logs). Normalize fields from your workspace audit pipeline.",
              "z": "Line chart (DBU per day), Table (long-running clusters), Heatmap (cluster × hour utilization).",
              "kfp": "Planned changes, load tests, and vendor maintenance in the data platform can move the same metrics this search uses; we compare to baselines, change records, and on-call context before we treat a hit as a production incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Databricks audit logs, cluster events API, `system.billing.usage`.\n• Ensure the following data sources are available: `clusters` API events, billing export.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest cluster lifecycle and DBU billing lines. Alert on clusters RUNNING >8h with low task activity (correlate with job logs). Normalize fields from your workspace audit pipeline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=databricks sourcetype=\"databricks:cluster_event\"\n| where event_type IN (\"RUNNING\",\"TERMINATED\")\n| bin _time span=1d\n| stats sum(uptime_seconds) as uptime, dc(cluster_id) as clusters by, _time\n| eval dbu_estimate=uptime/3600*0.1\n```\n\nUnderstanding this SPL\n\n**Databricks Cluster Utilization** — Cluster DBU hours, worker counts, and idle time reveal over-provisioned pools and jobs that keep clusters alive unnecessarily.\n\nDocumented **Data sources**: `clusters` API events, billing export. **App/TA** (typical add-on context): Databricks audit logs, cluster events API, `system.billing.usage`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: databricks; **sourcetype**: databricks:cluster_event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=databricks, sourcetype=\"databricks:cluster_event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where event_type IN (\"RUNNING\",\"TERMINATED\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **dbu_estimate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (DBU per day), Table (long-running clusters), Heatmap (cluster × hour utilization).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cluster DBU hours, worker counts, and idle time reveal over-provisioned pools and jobs that keep clusters alive unnecessarily We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.11",
              "n": "Redshift Query Queue Depth",
              "c": "high",
              "f": "intermediate",
              "v": "WLM queue length and max execution time show concurrency saturation. Growing queue depth precedes disk-based spills and timeouts.",
              "t": "CloudWatch, `STL_WLM_QUERY` export",
              "d": "`WLMQueueDepth`, `WLMQueriesCompletedPerSecond`",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Redshift\" metric_name=\"WLMQueueDepth\"\n| timechart span=5m max(Maximum) as queue_depth by ClusterIdentifier, QueueName\n| where queue_depth > 10",
              "m": "Map queue names to workload classes. Alert when queue_depth sustained above SLA. Tune WLM slots or concurrency scaling.",
              "z": "Line chart (queue depth), Table (cluster, queue, depth), Single value (max depth).",
              "kfp": "Bursts during known batch, release, or maintenance windows can match the rule; correlate with the change calendar before treating as an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CloudWatch, `STL_WLM_QUERY` export.\n• Ensure the following data sources are available: `WLMQueueDepth`, `WLMQueriesCompletedPerSecond`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap queue names to workload classes. Alert when queue_depth sustained above SLA. Tune WLM slots or concurrency scaling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/Redshift\" metric_name=\"WLMQueueDepth\"\n| timechart span=5m max(Maximum) as queue_depth by ClusterIdentifier, QueueName\n| where queue_depth > 10\n```\n\nUnderstanding this SPL\n\n**Redshift Query Queue Depth** — WLM queue length and max execution time show concurrency saturation. Growing queue depth precedes disk-based spills and timeouts.\n\nDocumented **Data sources**: `WLMQueueDepth`, `WLMQueriesCompletedPerSecond`. **App/TA** (typical add-on context): CloudWatch, `STL_WLM_QUERY` export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by ClusterIdentifier, QueueName** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where queue_depth > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (queue depth), Table (cluster, queue, depth), Single value (max depth).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "WLM queue length and max execution time show concurrency saturation We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.12",
              "n": "BigQuery Cost Anomalies",
              "c": "high",
              "f": "advanced",
              "v": "Sudden jumps in bytes billed or slot usage often trace to one bad query or new scheduled job. Statistical alerting limits surprise invoices.",
              "t": "BigQuery `INFORMATION_SCHEMA.JOBS`, billing export to Splunk",
              "d": "`total_bytes_billed`, `total_slot_ms`, `creation_time`",
              "q": "index=datawarehouse sourcetype=\"bigquery:jobs\"\n| bin _time span=1d\n| stats sum(total_bytes_billed) as bytes by project_id, user_email, _time\n| eventstats avg(bytes) as avg_b, stdev(bytes) as s by project_id\n| where bytes > avg_b + 3*s\n| eval gb=round(bytes/1073741824,2)",
              "m": "Ingest completed jobs daily. Alert on project-day cost outliers. Drill into `job_id` for top offenders. Integrate with GCP billing export for ground truth.",
              "z": "Line chart (daily bytes billed), Table (anomalous days/projects), Bar chart (top users by cost).",
              "kfp": "Planned changes, load tests, and vendor maintenance in the data platform can move the same metrics this search uses; we compare to baselines, change records, and on-call context before we treat a hit as a production incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BigQuery `INFORMATION_SCHEMA.JOBS`, billing export to Splunk.\n• Ensure the following data sources are available: `total_bytes_billed`, `total_slot_ms`, `creation_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest completed jobs daily. Alert on project-day cost outliers. Drill into `job_id` for top offenders. Integrate with GCP billing export for ground truth.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=datawarehouse sourcetype=\"bigquery:jobs\"\n| bin _time span=1d\n| stats sum(total_bytes_billed) as bytes by project_id, user_email, _time\n| eventstats avg(bytes) as avg_b, stdev(bytes) as s by project_id\n| where bytes > avg_b + 3*s\n| eval gb=round(bytes/1073741824,2)\n```\n\nUnderstanding this SPL\n\n**BigQuery Cost Anomalies** — Sudden jumps in bytes billed or slot usage often trace to one bad query or new scheduled job. Statistical alerting limits surprise invoices.\n\nDocumented **Data sources**: `total_bytes_billed`, `total_slot_ms`, `creation_time`. **App/TA** (typical add-on context): BigQuery `INFORMATION_SCHEMA.JOBS`, billing export to Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: datawarehouse; **sourcetype**: bigquery:jobs. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=datawarehouse, sourcetype=\"bigquery:jobs\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by project_id, user_email, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by project_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where bytes > avg_b + 3*s` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily bytes billed), Table (anomalous days/projects), Bar chart (top users by cost).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Sudden jumps in bytes billed or slot usage often trace to one bad query or new scheduled job We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.13",
              "n": "Snowflake Query Spillage (Bytes Spilled to Local/Remote Storage)",
              "c": "high",
              "f": "intermediate",
              "v": "Spillage indicates insufficient warehouse size or poorly written queries (exploding joins). Drives warehouse tier and query tuning decisions.",
              "t": "Snowflake `QUERY_HISTORY`, `QUERY_ACCELERATION_HISTORY`",
              "d": "`BYTES_SPILLED_TO_LOCAL_STORAGE`, `BYTES_SPILLED_TO_REMOTE_STORAGE`",
              "q": "index=datawarehouse sourcetype=\"snowflake:query_history\"\n| eval spill_bytes=BYTES_SPILLED_TO_LOCAL_STORAGE+BYTES_SPILLED_TO_REMOTE_STORAGE\n| where spill_bytes > 1073741824\n| stats sum(spill_bytes) as total_spill, count as qcount by USER_NAME, QUERY_ID\n| sort -total_spill",
              "m": "Poll `QUERY_HISTORY` for completed queries. Alert on spill_bytes >1GB. Join with warehouse size for context.",
              "z": "Table (queries with spill), Bar chart (spill by user), Line chart (daily spill volume).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Snowflake `QUERY_HISTORY`, `QUERY_ACCELERATION_HISTORY`.\n• Ensure the following data sources are available: `BYTES_SPILLED_TO_LOCAL_STORAGE`, `BYTES_SPILLED_TO_REMOTE_STORAGE`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `QUERY_HISTORY` for completed queries. Alert on spill_bytes >1GB. Join with warehouse size for context.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=datawarehouse sourcetype=\"snowflake:query_history\"\n| eval spill_bytes=BYTES_SPILLED_TO_LOCAL_STORAGE+BYTES_SPILLED_TO_REMOTE_STORAGE\n| where spill_bytes > 1073741824\n| stats sum(spill_bytes) as total_spill, count as qcount by USER_NAME, QUERY_ID\n| sort -total_spill\n```\n\nUnderstanding this SPL\n\n**Snowflake Query Spillage (Bytes Spilled to Local/Remote Storage)** — Spillage indicates insufficient warehouse size or poorly written queries (exploding joins). Drives warehouse tier and query tuning decisions.\n\nDocumented **Data sources**: `BYTES_SPILLED_TO_LOCAL_STORAGE`, `BYTES_SPILLED_TO_REMOTE_STORAGE`. **App/TA** (typical add-on context): Snowflake `QUERY_HISTORY`, `QUERY_ACCELERATION_HISTORY`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: datawarehouse; **sourcetype**: snowflake:query_history. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=datawarehouse, sourcetype=\"snowflake:query_history\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **spill_bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where spill_bytes > 1073741824` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by USER_NAME, QUERY_ID** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (queries with spill), Bar chart (spill by user), Line chart (daily spill volume).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow",
                "snowflake"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.14",
              "n": "Databricks Job Failure Rate",
              "c": "critical",
              "f": "beginner",
              "v": "Failed notebook/jar jobs block downstream analytics. Failure rate by job name prioritizes fixes for critical pipelines.",
              "t": "Databricks job run API, `jobs` audit",
              "d": "Job run result (`result_state`, `run_id`)",
              "q": "index=databricks sourcetype=\"databricks:job_run\"\n| bin _time span=1d\n| stats count(eval(result_state=\"FAILED\")) as failed, count as total by job_name, _time\n| eval fail_rate=round(failed/total*100,1)\n| where fail_rate > 5 OR failed > 0 AND total < 5\n| table job_name failed total fail_rate",
              "m": "Ingest each run completion. Alert on any failure for tier-1 jobs; use fail_rate for high-volume jobs. Include `run_page_url` in raw events for triage.",
              "z": "Line chart (failure rate by job), Table (failed runs), Single value (failed jobs 24h).",
              "kfp": "Planned changes, load tests, and vendor maintenance in the data platform can move the same metrics this search uses; we compare to baselines, change records, and on-call context before we treat a hit as a production incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Databricks job run API, `jobs` audit.\n• Ensure the following data sources are available: Job run result (`result_state`, `run_id`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest each run completion. Alert on any failure for tier-1 jobs; use fail_rate for high-volume jobs. Include `run_page_url` in raw events for triage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=databricks sourcetype=\"databricks:job_run\"\n| bin _time span=1d\n| stats count(eval(result_state=\"FAILED\")) as failed, count as total by job_name, _time\n| eval fail_rate=round(failed/total*100,1)\n| where fail_rate > 5 OR failed > 0 AND total < 5\n| table job_name failed total fail_rate\n```\n\nUnderstanding this SPL\n\n**Databricks Job Failure Rate** — Failed notebook/jar jobs block downstream analytics. Failure rate by job name prioritizes fixes for critical pipelines.\n\nDocumented **Data sources**: Job run result (`result_state`, `run_id`). **App/TA** (typical add-on context): Databricks job run API, `jobs` audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: databricks; **sourcetype**: databricks:job_run. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=databricks, sourcetype=\"databricks:job_run\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by job_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_rate > 5 OR failed > 0 AND total < 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Databricks Job Failure Rate**): table job_name failed total fail_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failure rate by job), Table (failed runs), Single value (failed jobs 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Failed notebook/jar jobs block downstream analytics We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.15",
              "n": "Azure Synapse Analytics SQL Pool Performance",
              "c": "high",
              "f": "intermediate",
              "v": "Synapse dedicated SQL pools have DWU-based resource limits. Queries competing for resources cause queueing, and tempdb contention degrades batch processing. Monitoring ensures analytics workloads meet SLAs.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics)",
              "d": "`sourcetype=azure:monitor:metric` (Microsoft.Synapse/workspaces/sqlPools), `sourcetype=azure:diagnostics` (SqlRequests)",
              "q": "index=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.synapse/workspaces/sqlpools\"\n| where metric_name IN (\"DWUUsedPercent\",\"ActiveQueries\",\"QueuedQueries\",\"AdaptiveCacheHitPercent\")\n| timechart span=5m avg(average) as value by metric_name, resource_name",
              "m": "Collect Azure Monitor metrics for Synapse SQL pools. Alert when `DWUUsedPercent` exceeds 90% sustained (scale up DWU), when `QueuedQueries` exceeds 10 (resource contention), or when `AdaptiveCacheHitPercent` drops below 50% (cold cache after pause/resume). Enable diagnostics for `SqlRequests` to track query execution times and identify long-running queries consuming resources.",
              "z": "Line chart (DWU % and queued queries), Table (long-running queries), Gauge (cache hit ratio).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics).\n• Ensure the following data sources are available: `sourcetype=azure:monitor:metric` (Microsoft.Synapse/workspaces/sqlPools), `sourcetype=azure:diagnostics` (SqlRequests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Azure Monitor metrics for Synapse SQL pools. Alert when `DWUUsedPercent` exceeds 90% sustained (scale up DWU), when `QueuedQueries` exceeds 10 (resource contention), or when `AdaptiveCacheHitPercent` drops below 50% (cold cache after pause/resume). Enable diagnostics for `SqlRequests` to track query execution times and identify long-running queries consuming resources.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.synapse/workspaces/sqlpools\"\n| where metric_name IN (\"DWUUsedPercent\",\"ActiveQueries\",\"QueuedQueries\",\"AdaptiveCacheHitPercent\")\n| timechart span=5m avg(average) as value by metric_name, resource_name\n```\n\nUnderstanding this SPL\n\n**Azure Synapse Analytics SQL Pool Performance** — Synapse dedicated SQL pools have DWU-based resource limits. Queries competing for resources cause queueing, and tempdb contention degrades batch processing. Monitoring ensures analytics workloads meet SLAs.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Synapse/workspaces/sqlPools), `sourcetype=azure:diagnostics` (SqlRequests). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:monitor:metric. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:monitor:metric\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where metric_name IN (\"DWUUsedPercent\",\"ActiveQueries\",\"QueuedQueries\",\"AdaptiveCacheHitPercent\")` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by metric_name, resource_name** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure Synapse Analytics SQL Pool Performance** — Synapse dedicated SQL pools have DWU-based resource limits. Queries competing for resources cause queueing, and tempdb contention degrades batch processing. Monitoring ensures analytics workloads meet SLAs.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.Synapse/workspaces/sqlPools), `sourcetype=azure:diagnostics` (SqlRequests). 
**App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics/diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (DWU % and queued queries), Table (long-running queries), Gauge (cache hit ratio).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow SQL Server performance and availability signals so the team can catch regressions, AG issues, and tempdb pressure on business-critical systems.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=5m | sort - agg_value",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.4.16",
              "n": "Azure Synapse Pipeline Execution Health",
              "c": "high",
              "f": "beginner",
              "v": "Synapse pipelines orchestrate data movement and transformation. Failed pipeline runs cause stale analytics data, broken reports, and missed business deadlines.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics)",
              "d": "`sourcetype=azure:diagnostics` (SynapsePipelineRuns, SynapseActivityRuns)",
              "q": "index=cloud sourcetype=\"azure:diagnostics\" Category=\"SynapsePipelineRuns\"\n| where Status=\"Failed\"\n| stats count as failures, latest(Start) as last_failure, latest(Error) as last_error by PipelineName, resource_name\n| sort -failures",
              "m": "Enable diagnostics on Synapse workspaces to route `SynapsePipelineRuns` and `SynapseActivityRuns` to Splunk via Event Hub. Alert on failed pipeline runs. Track activity-level errors for root cause analysis (data movement failures, notebook errors, SQL script timeouts). Monitor pipeline duration trending to detect degradation.",
              "z": "Table (failed pipelines with error detail), Bar chart (failures by pipeline), Line chart (duration trend).",
              "kfp": "Autoscale events, service tier changes, and Microsoft-side maintenance in Azure can move CPU and storage metrics; compare with the Azure service health and your deployment pipeline.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics).\n• Ensure the following data sources are available: `sourcetype=azure:diagnostics` (SynapsePipelineRuns, SynapseActivityRuns).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable diagnostics on Synapse workspaces to route `SynapsePipelineRuns` and `SynapseActivityRuns` to Splunk via Event Hub. Alert on failed pipeline runs. Track activity-level errors for root cause analysis (data movement failures, notebook errors, SQL script timeouts). Monitor pipeline duration trending to detect degradation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:diagnostics\" Category=\"SynapsePipelineRuns\"\n| where Status=\"Failed\"\n| stats count as failures, latest(Start) as last_failure, latest(Error) as last_error by PipelineName, resource_name\n| sort -failures\n```\n\nUnderstanding this SPL\n\n**Azure Synapse Pipeline Execution Health** — Synapse pipelines orchestrate data movement and transformation. Failed pipeline runs cause stale analytics data, broken reports, and missed business deadlines.\n\nDocumented **Data sources**: `sourcetype=azure:diagnostics` (SynapsePipelineRuns, SynapseActivityRuns). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:diagnostics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Status=\"Failed\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by PipelineName, resource_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed pipelines with error detail), Bar chart (failures by pipeline), Line chart (duration trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Synapse pipelines orchestrate data movement and transformation We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.16",
              "n": "Open Cursor Leak Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Open cursors that are never closed accumulate in the database session context and eventually exhaust the cursor limit (Oracle ORA-01000, SQL Server max open cursors), causing application errors and forcing emergency restarts. Nagios detects this via threshold checks on V$OPEN_CURSOR; Splunk enables trending, per-session attribution, and correlation with application deployments.",
              "t": "`splunk-db-connect` or `Splunk_TA_oracle`",
              "d": "Oracle `V$OPEN_CURSOR`, `V$SESSION`; SQL Server `sys.dm_exec_cursors`; PostgreSQL `pg_cursors`",
              "q": "| dbxquery connection=\"oracle_prod\" query=\"SELECT s.username, s.program, COUNT(*) AS open_cursors FROM v\\$open_cursor oc JOIN v\\$session s ON oc.sid=s.sid GROUP BY s.username, s.program ORDER BY open_cursors DESC\"\n| where open_cursors > 200\n| eval alert=if(open_cursors > 800, \"CRITICAL\", if(open_cursors > 400, \"WARNING\", \"OK\"))\n| table username, program, open_cursors, alert",
              "m": "Use Splunk DB Connect to poll `V$OPEN_CURSOR` every 5 minutes. Join with `V$SESSION` to identify which application user or service is leaking cursors. Alert when any single session exceeds 400 open cursors (WARNING) or 800 (CRITICAL). Correlate spikes with deployment events from CI/CD logs to pinpoint root cause. For SQL Server, poll `sys.dm_exec_cursors` grouped by `login_name`. Set `OPEN_CURSORS` init parameter baseline in a lookup for dynamic threshold comparison.",
              "z": "Line chart (total open cursors over time by application), Table (top sessions by cursor count), Single value (current max), Bar chart (cursors by application/service).",
              "kfp": "Planned changes, load tests, and vendor maintenance in the data platform can move the same metrics this search uses; we compare to baselines, change records, and on-call context before we treat a hit as a production incident.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `splunk-db-connect` or `Splunk_TA_oracle`.\n• Ensure the following data sources are available: Oracle `V$OPEN_CURSOR`, `V$SESSION`; SQL Server `sys.dm_exec_cursors`; PostgreSQL `pg_cursors`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk DB Connect to poll `V$OPEN_CURSOR` every 5 minutes. Join with `V$SESSION` to identify which application user or service is leaking cursors. Alert when any single session exceeds 400 open cursors (WARNING) or 800 (CRITICAL). Correlate spikes with deployment events from CI/CD logs to pinpoint root cause. For SQL Server, poll `sys.dm_exec_cursors` grouped by `login_name`. Set `OPEN_CURSORS` init parameter baseline in a lookup for dynamic threshold comparison.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| dbxquery connection=\"oracle_prod\" query=\"SELECT s.username, s.program, COUNT(*) AS open_cursors FROM v\\$open_cursor oc JOIN v\\$session s ON oc.sid=s.sid GROUP BY s.username, s.program ORDER BY open_cursors DESC\"\n| where open_cursors > 200\n| eval alert=if(open_cursors > 800, \"CRITICAL\", if(open_cursors > 400, \"WARNING\", \"OK\"))\n| table username, program, open_cursors, alert\n```\n\nUnderstanding this SPL\n\n**Open Cursor Leak Detection** — Open cursors that are never closed accumulate in the database session context and eventually exhaust the cursor limit (Oracle ORA-01000, SQL Server max open cursors), causing application errors and forcing emergency restarts. Nagios detects this via threshold checks on V$OPEN_CURSOR; Splunk enables trending, per-session attribution, and correlation with application deployments.\n\nDocumented **Data sources**: Oracle `V$OPEN_CURSOR`, `V$SESSION`; SQL Server `sys.dm_exec_cursors`; PostgreSQL `pg_cursors`. **App/TA** (typical add-on context): `splunk-db-connect` or `Splunk_TA_oracle`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Pipeline stage (see **Open Cursor Leak Detection**): dbxquery connection=\"oracle_prod\" query=\"SELECT s.username, s.program, COUNT(*) AS open_cursors FROM v\\$open_cursor oc JOIN v\\$session s …\n• Filters the current rows with `where open_cursors > 200` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **alert** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Open Cursor Leak Detection**): table username, program, open_cursors, alert\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (total open cursors over time by application), Table (top sessions by cursor count), Single value (current max), Bar chart (cursors by application/service).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow PostgreSQL activity and space trends so we can keep vacuum, replication, and connection paths healthy as load grows.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "oracle"
              ],
              "em": [
                "oracle_oracle_db"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.17",
              "n": "Database Connection Pool Exhaustion",
              "c": "critical",
              "f": "intermediate",
              "v": "When the application connection pool or database max_connections is exhausted, new requests fail with connection errors. Detecting high connection count and pool saturation prevents outages.",
              "t": "`splunk_app_db_connect`, database performance views",
              "d": "Oracle `V$SESSION`/`V$PROCESS`, PostgreSQL `pg_stat_activity`, MySQL `SHOW PROCESSLIST`, SQL Server `sys.dm_exec_connections`",
              "q": "| dbxquery connection=\"oracle_prod\" query=\"SELECT COUNT(*) AS conn_count FROM v\\$session WHERE type='USER'\"\n| eval usage_pct=round(conn_count/400*100, 1)\n| where usage_pct > 85\n| table conn_count usage_pct",
              "m": "Use DB Connect to poll session/connection count every 1–5 minutes. Compare to max_connections (or pool size). Alert when utilization exceeds 85%. Correlate with application logs for connection leak or traffic spike.",
              "z": "Gauge (connection count vs max), Line chart (connections over time), Table (by program/user).",
              "kfp": "Connection pool warm-up after restarts, blue-green deploys, or autoscaling can look like a spike until the pool or fleet reaches steady state.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `splunk_app_db_connect`, database performance views.\n• Ensure the following data sources are available: Oracle `V$SESSION`/`V$PROCESS`, PostgreSQL `pg_stat_activity`, MySQL `SHOW PROCESSLIST`, SQL Server `sys.dm_exec_connections`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse DB Connect to poll session/connection count every 1–5 minutes. Compare to max_connections (or pool size). Alert when utilization exceeds 85%. Correlate with application logs for connection leak or traffic spike.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| dbxquery connection=\"oracle_prod\" query=\"SELECT COUNT(*) AS conn_count FROM v\\$session WHERE type='USER'\"\n| eval usage_pct=round(conn_count/400*100, 1)\n| where usage_pct > 85\n| table conn_count usage_pct\n```\n\nUnderstanding this SPL\n\n**Database Connection Pool Exhaustion** — When the application connection pool or database max_connections is exhausted, new requests fail with connection errors. Detecting high connection count and pool saturation prevents outages.\n\nDocumented **Data sources**: Oracle `V$SESSION`/`V$PROCESS`, PostgreSQL `pg_stat_activity`, MySQL `SHOW PROCESSLIST`, SQL Server `sys.dm_exec_connections`. **App/TA** (typical add-on context): `splunk_app_db_connect`, database performance views. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Pipeline stage (see **Database Connection Pool Exhaustion**): dbxquery connection=\"oracle_prod\" query=\"SELECT COUNT(*) AS conn_count FROM v\\$session WHERE type='USER'\"\n• `eval` defines or adjusts **usage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where usage_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Database Connection Pool Exhaustion**): table conn_count usage_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (connection count vs max), Line chart (connections over time), Table (by program/user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many sessions and pooled connections the fleet uses so we can scale or fix apps before the database hits its connection limits.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.18",
              "n": "Long-Running Query and Blocking Session Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Queries that run for hours or sessions that block others cause timeouts and user impact. Identifying blocking chains and long-running queries supports tuning and kill decisions.",
              "t": "`splunk_app_db_connect`, database wait/block views",
              "d": "Oracle `V$SESSION`/`V$SQL`, PostgreSQL `pg_stat_activity`, SQL Server `sys.dm_exec_requests`/`sys.dm_os_waiting_tasks`",
              "q": "| dbxquery connection=\"oracle_prod\" query=\"SELECT s.sid, s.serial#, s.username, s.seconds_in_wait, s.blocking_session, sq.sql_text FROM v\\$session s JOIN v\\$sql sq ON s.sql_id=sq.sql_id WHERE s.seconds_in_wait > 300 OR s.blocking_session IS NOT NULL\"\n| table sid username seconds_in_wait blocking_session sql_text",
              "m": "Poll active sessions and wait/block info. Ingest sessions with elapsed time >5 minutes or with blocking_session set. Alert on blocking chains. Dashboard top long-running and blocked sessions with SQL text.",
              "z": "Table (session, user, wait time, blocker), Blocking chain diagram, Line chart (long-running count).",
              "kfp": "Maintenance work (ANALYZE, VACUUM, REINDEX, index builds) and ETL or large report jobs may show up as long-running or blocking — compare with the maintenance and batch schedule.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `splunk_app_db_connect`, database wait/block views.\n• Ensure the following data sources are available: Oracle `V$SESSION`/`V$SQL`, PostgreSQL `pg_stat_activity`, SQL Server `sys.dm_exec_requests`/`sys.dm_os_waiting_tasks`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll active sessions and wait/block info. Ingest sessions with elapsed time >5 minutes or with blocking_session set. Alert on blocking chains. Dashboard top long-running and blocked sessions with SQL text.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| dbxquery connection=\"oracle_prod\" query=\"SELECT s.sid, s.serial#, s.username, s.seconds_in_wait, s.blocking_session, sq.sql_text FROM v\\$session s JOIN v\\$sql sq ON s.sql_id=sq.sql_id WHERE s.seconds_in_wait > 300 OR s.blocking_session IS NOT NULL\"\n| table sid username seconds_in_wait blocking_session sql_text\n```\n\nUnderstanding this SPL\n\n**Long-Running Query and Blocking Session Detection** — Queries that run for hours or sessions that block others cause timeouts and user impact. Identifying blocking chains and long-running queries supports tuning and kill decisions.\n\nDocumented **Data sources**: Oracle `V$SESSION`/`V$SQL`, PostgreSQL `pg_stat_activity`, SQL Server `sys.dm_exec_requests`/`sys.dm_os_waiting_tasks`. **App/TA** (typical add-on context): `splunk_app_db_connect`, database wait/block views. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Pipeline stage (see **Long-Running Query and Blocking Session Detection**): dbxquery connection=\"oracle_prod\" query=\"SELECT s.sid, s.serial#, s.username, s.seconds_in_wait, s.blocking_session, sq.sql_text FROM v\\$…\n• Pipeline stage (see **Long-Running Query and Blocking Session Detection**): table sid username seconds_in_wait blocking_session sql_text\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (session, user, wait time, blocker), Blocking chain diagram, Line chart (long-running count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.19",
              "n": "Table and Index Bloat and Maintenance Window",
              "c": "medium",
              "f": "advanced",
              "v": "Table and index bloat (PostgreSQL) or fragmentation (SQL Server) degrades query performance and wastes space. Tracking bloat and last vacuum/rebuild supports maintenance scheduling.",
              "t": "`splunk_app_db_connect`, maintenance job logs",
              "d": "PostgreSQL `pg_stat_user_tables`/bloat estimates, SQL Server `sys.dm_db_index_physical_stats`, Oracle segment size",
              "q": "| dbxquery connection=\"pg_prod\" query=\"SELECT schemaname, relname, n_dead_tup, n_live_tup, last_vacuum, last_autovacuum FROM pg_stat_user_tables WHERE n_dead_tup > 10000\"\n| eval dead_ratio=round(n_dead_tup/n_live_tup*100, 2)\n| where dead_ratio > 5\n| table schemaname relname n_dead_tup last_autovacuum dead_ratio",
              "m": "Poll table/index stats and last maintenance timestamps. Compute dead tuple ratio or fragmentation %. Alert when bloat exceeds threshold or last vacuum/rebuild is older than 7 days for critical tables.",
              "z": "Table (table, bloat %, last vacuum), Bar chart (bloat by table), Single value (tables overdue for vacuum).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `splunk_app_db_connect`, maintenance job logs.\n• Ensure the following data sources are available: PostgreSQL `pg_stat_user_tables`/bloat estimates, SQL Server `sys.dm_db_index_physical_stats`, Oracle segment size.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll table/index stats and last maintenance timestamps. Compute dead tuple ratio or fragmentation %. Alert when bloat exceeds threshold or last vacuum/rebuild is older than 7 days for critical tables.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| dbxquery connection=\"pg_prod\" query=\"SELECT schemaname, relname, n_dead_tup, n_live_tup, last_vacuum, last_autovacuum FROM pg_stat_user_tables WHERE n_dead_tup > 10000\"\n| eval dead_ratio=round(n_dead_tup/n_live_tup*100, 2)\n| where dead_ratio > 5\n| table schemaname relname n_dead_tup last_autovacuum dead_ratio\n```\n\nUnderstanding this SPL\n\n**Table and Index Bloat and Maintenance Window** — Table and index bloat (PostgreSQL) or fragmentation (SQL Server) degrades query performance and wastes space. Tracking bloat and last vacuum/rebuild supports maintenance scheduling.\n\nDocumented **Data sources**: PostgreSQL `pg_stat_user_tables`/bloat estimates, SQL Server `sys.dm_db_index_physical_stats`, Oracle segment size. **App/TA** (typical add-on context): `splunk_app_db_connect`, maintenance job logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Pipeline stage (see **Table and Index Bloat and Maintenance Window**): dbxquery connection=\"pg_prod\" query=\"SELECT schemaname, relname, n_dead_tup, n_live_tup, last_vacuum, last_autovacuum FROM pg_stat_user_t…\n• `eval` defines or adjusts **dead_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where dead_ratio > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Table and Index Bloat and Maintenance Window**): table schemaname relname n_dead_tup last_autovacuum dead_ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (table, bloat %, last vacuum), Bar chart (bloat by table), Single value (tables overdue for vacuum).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow PostgreSQL activity and space trends so we can keep vacuum, replication, and connection paths healthy as load grows.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.20",
              "n": "Database Backup and Archive Log Retention Verification",
              "c": "critical",
              "f": "intermediate",
              "v": "Failed or missing backups and unarchived redo logs risk data loss and prevent point-in-time recovery. Verifying backup success and archive log retention ensures RPO is met.",
              "t": "`splunk_app_db_connect`, backup job logs",
              "d": "Oracle RMAN output, SQL Server msdb backup history, PostgreSQL pg_backup (or vendor logs)",
              "q": "| dbxquery connection=\"oracle_prod\" query=\"SELECT status, start_time, end_time, output_bytes FROM v\\$rman_backup_job_details WHERE start_time > SYSDATE-1 ORDER BY start_time DESC\"\n| search status!=\"COMPLETED\"\n| table status start_time end_time output_bytes",
              "m": "Ingest backup job status (RMAN, SQL Server backup history, or backup vendor logs). Alert on any failed or incomplete backup. Track archive log destination space and retention; alert when space is low or retention is below policy.",
              "z": "Table (last backup, status, duration), Gauge (backup success %), Timeline of backup jobs.",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `splunk_app_db_connect`, backup job logs.\n• Ensure the following data sources are available: Oracle RMAN output, SQL Server msdb backup history, PostgreSQL pg_backup (or vendor logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest backup job status (RMAN, SQL Server backup history, or backup vendor logs). Alert on any failed or incomplete backup. Track archive log destination space and retention; alert when space is low or retention is below policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| dbxquery connection=\"oracle_prod\" query=\"SELECT status, start_time, end_time, output_bytes FROM v\\$rman_backup_job_details WHERE start_time > SYSDATE-1 ORDER BY start_time DESC\"\n| search status!=\"COMPLETED\"\n| table status start_time end_time output_bytes\n```\n\nUnderstanding this SPL\n\n**Database Backup and Archive Log Retention Verification** — Failed or missing backups and unarchived redo logs risk data loss and prevent point-in-time recovery. Verifying backup success and archive log retention ensures RPO is met.\n\nDocumented **Data sources**: Oracle RMAN output, SQL Server msdb backup history, PostgreSQL pg_backup (or vendor logs). **App/TA** (typical add-on context): `splunk_app_db_connect`, backup job logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Pipeline stage (see **Database Backup and Archive Log Retention Verification**): dbxquery connection=\"oracle_prod\" query=\"SELECT status, start_time, end_time, output_bytes FROM v\\$rman_backup_job_details WHERE start_ti…\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Database Backup and Archive Log Retention Verification**): table status start_time end_time output_bytes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (last backup, status, duration), Gauge (backup success %), Timeline of backup jobs.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for repeated or suspicious sign-in activity on our databases so we can catch brute-force and misconfiguration before they become account takeovers.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.21",
              "n": "Database User and Privilege Change Audit",
              "c": "high",
              "f": "beginner",
              "v": "New users, role grants, or privilege changes can indicate compromise or policy violation. Auditing supports compliance (SOX, PCI) and security investigations.",
              "t": "Database audit logs, `splunk_app_db_connect`",
              "d": "Oracle audit trail, PostgreSQL `pg_audit` or log_statement, SQL Server audit, MySQL general log",
              "q": "index=db_audit sourcetype=oracle_audit (action=\"CREATE USER\" OR action=\"GRANT\" OR action=\"ALTER USER\")\n| bin _time span=1h\n| stats count by db_user, action, object_name, _time\n| where count > 0\n| table _time db_user action object_name",
              "m": "Enable database audit for user and privilege changes. Forward audit logs to Splunk. Alert on any CREATE USER, GRANT, or ALTER USER. Correlate with change management.",
              "z": "Events timeline, Table (user, action, object), Bar chart (changes by user).",
              "kfp": "Planned access reviews, recertification, break-glass accounts, and vendor maintenance can emit privilege- or access-change events that match the rule but are already approved; require a change ticket for context.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Database audit logs, `splunk_app_db_connect`.\n• Ensure the following data sources are available: Oracle audit trail, PostgreSQL `pg_audit` or log_statement, SQL Server audit, MySQL general log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable database audit for user and privilege changes. Forward audit logs to Splunk. Alert on any CREATE USER, GRANT, or ALTER USER. Correlate with change management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=db_audit sourcetype=oracle_audit (action=\"CREATE USER\" OR action=\"GRANT\" OR action=\"ALTER USER\")\n| bin _time span=1h\n| stats count by db_user, action, object_name, _time\n| where count > 0\n| table _time db_user action object_name\n```\n\nUnderstanding this SPL\n\n**Database User and Privilege Change Audit** — New users, role grants, or privilege changes can indicate compromise or policy violation. Auditing supports compliance (SOX, PCI) and security investigations.\n\nDocumented **Data sources**: Oracle audit trail, PostgreSQL `pg_audit` or log_statement, SQL Server audit, MySQL general log. **App/TA** (typical add-on context): Database audit logs, `splunk_app_db_connect`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: db_audit; **sourcetype**: oracle_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=db_audit, sourcetype=oracle_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by db_user, action, object_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Database User and Privilege Change Audit**): table _time db_user action object_name\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Database User and Privilege Change Audit** — New users, role grants, or privilege changes can indicate compromise or policy violation. Auditing supports compliance (SOX, PCI) and security investigations.\n\nDocumented **Data sources**: Oracle audit trail, PostgreSQL `pg_audit` or log_statement, SQL Server audit, MySQL general log. **App/TA** (typical add-on context): Database audit logs, `splunk_app_db_connect`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Events timeline, Table (user, action, object), Bar chart (changes by user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track who changed users and privileges in our databases so we can support compliance and spot unauthorized access paths early.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object span=1h | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CJIS",
                  "v": "v5.9.4",
                  "cl": "5.5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CJIS 5.5.1 (Access control - identification) is enforced — Splunk UC-7.1.21: Database User and Privilege Change Audit.",
                  "ea": "Saved search 'UC-7.1.21' running on Oracle audit trail, PostgreSQL pg_audit or log_statement, SQL Server audit, MySQL general log, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://le.fbi.gov/cjis-division/cjis-security-policy-resource-center"
                },
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(d)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(d) (System access limited to authorized individuals) is enforced — Splunk UC-7.1.21: Database User and Privilege Change Audit.",
                  "ea": "Saved search 'UC-7.1.21' running on Oracle audit trail, PostgreSQL pg_audit or log_statement, SQL Server audit, MySQL general log, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-21/chapter-I/subchapter-A/part-11"
                },
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "AC-2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FedRAMP AC-2 (Account management) is enforced — Splunk UC-7.1.21: Database User and Privilege Change Audit.",
                  "ea": "Saved search 'UC-7.1.21' running on Oracle audit trail, PostgreSQL pg_audit or log_statement, SQL Server audit, MySQL general log, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "HIPAA Privacy",
                  "v": "current",
                  "cl": "§164.502(a)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Privacy §164.502(a) (Uses and disclosures of PHI — general rules) is enforced — Splunk UC-7.1.21: Database User and Privilege Change Audit.",
                  "ea": "Saved search 'UC-7.1.21' running on Oracle audit trail, PostgreSQL pg_audit or log_statement, SQL Server audit, MySQL general log, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "01.b",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HITRUST 01.b (User access management) is enforced — Splunk UC-7.1.21: Database User and Privilege Change Audit.",
                  "ea": "Saved search 'UC-7.1.21' running on Oracle audit trail, PostgreSQL pg_audit or log_statement, SQL Server audit, MySQL general log, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.Privileged",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX-ITGC ITGC.AccessMgmt.Privileged (Privileged access) is enforced — Splunk UC-7.1.21: Database User and Privilege Change Audit.",
                  "ea": "Saved search 'UC-7.1.21' running on Oracle audit trail, PostgreSQL pg_audit or log_statement, SQL Server audit, MySQL general log, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.22",
              "n": "PostgreSQL WAL Growth",
              "c": "high",
              "f": "advanced",
              "v": "WAL accumulation indicating replication issues or archival failures. Uncontrolled WAL growth exhausts disk space and can halt the database.",
              "t": "Splunk DB Connect or custom scripted input",
              "d": "PostgreSQL `pg_stat_replication`, `pg_wal_lsn_diff()`, `pg_ls_waldir()` or filesystem WAL directory size",
              "q": "index=database sourcetype=\"dbconnect:postgresql_wal\"\n| eval wal_size_gb=round(wal_size_bytes/1073741824, 2)\n| timechart span=1h latest(wal_size_gb) as wal_size_gb by host\n| where wal_size_gb > 10",
              "m": "Use DB Connect or a scripted input to poll WAL metrics every 15–30 minutes. Query `pg_current_wal_lsn()` and compare with `pg_walfile_name()` to derive WAL size; alternatively, measure WAL directory on disk. Track replication slot lag via `pg_stat_replication` (replication_lag). Alert when WAL size exceeds threshold (e.g., >10 GB) or when replication lag indicates archival/streaming is falling behind. Correlate with `archive_command` failures and disk space.",
              "z": "Line chart (WAL size over time), Single value (current WAL size GB), Table (host, WAL size, replication lag).",
              "kfp": "Lag spikes during large transactions, schema changes, or network maintenance to standby; align with RPO and change windows.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect or custom scripted input.\n• Ensure the following data sources are available: PostgreSQL `pg_stat_replication`, `pg_wal_lsn_diff()`, `pg_ls_waldir()` or filesystem WAL directory size.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse DB Connect or a scripted input to poll WAL metrics every 15–30 minutes. Query `pg_current_wal_lsn()` and compare with `pg_walfile_name()` to derive WAL size; alternatively, measure WAL directory on disk. Track replication slot lag via `pg_stat_replication` (replication_lag). Alert when WAL size exceeds threshold (e.g., >10 GB) or when replication lag indicates archival/streaming is falling behind. Correlate with `archive_command` failures and disk space.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:postgresql_wal\"\n| eval wal_size_gb=round(wal_size_bytes/1073741824, 2)\n| timechart span=1h latest(wal_size_gb) as wal_size_gb by host\n| where wal_size_gb > 10\n```\n\nUnderstanding this SPL\n\n**PostgreSQL WAL Growth** — WAL accumulation indicating replication issues or archival failures. Uncontrolled WAL growth exhausts disk space and can halt the database.\n\nDocumented **Data sources**: PostgreSQL `pg_stat_replication`, `pg_wal_lsn_diff()`, `pg_ls_waldir()` or filesystem WAL directory size. **App/TA** (typical add-on context): Splunk DB Connect or custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:postgresql_wal. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:postgresql_wal\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **wal_size_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where wal_size_gb > 10` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.dest span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**PostgreSQL WAL Growth** — WAL accumulation indicating replication issues or archival failures. Uncontrolled WAL growth exhausts disk space and can halt the database.\n\nDocumented **Data sources**: PostgreSQL `pg_stat_replication`, `pg_wal_lsn_diff()`, `pg_ls_waldir()` or filesystem WAL directory size. **App/TA** (typical add-on context): Splunk DB Connect or custom scripted input. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (WAL size over time), Single value (current WAL size GB), Table (host, WAL size, replication lag).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We monitor WAL and transaction log growth so replication and recovery do not stall when logs or archive destinations fill the disk.",
              "mtype": [
                "Fault",
                "Capacity"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.dest span=1h | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.23",
              "n": "PostgreSQL Vacuum Activity",
              "c": "high",
              "f": "advanced",
              "v": "Autovacuum running, dead tuples, and table bloat affect query performance. Monitoring ensures vacuum keeps pace with write workload and prevents bloat.",
              "t": "Splunk DB Connect or custom scripted input",
              "d": "`pg_stat_user_tables` (n_dead_tup, n_live_tup, last_autovacuum, last_vacuum)",
              "q": "index=database sourcetype=\"dbconnect:pg_stat_user_tables\"\n| eval dead_ratio=round(n_dead_tup/nullif(n_live_tup,0)*100, 2)\n| where dead_ratio > 5 OR n_dead_tup > 10000\n| eval hours_since_vacuum=round((now()-strptime(last_autovacuum,\"%Y-%m-%d %H:%M:%S\"))/3600, 1)\n| table schemaname, relname, n_dead_tup, n_live_tup, dead_ratio, last_autovacuum, hours_since_vacuum\n| sort -n_dead_tup",
              "m": "Poll `pg_stat_user_tables` via DB Connect every hour. Extract `n_dead_tup`, `n_live_tup`, `last_autovacuum`. Compute dead tuple ratio and time since last vacuum. Alert when dead_ratio >5% or n_dead_tup >10000 for critical tables. Alert when last_autovacuum is >24 hours for high-churn tables. Track autovacuum runs from `pg_stat_progress_vacuum` if available.",
              "z": "Table (tables with bloat risk), Bar chart (dead tuples by table), Line chart (dead tuple ratio trend), Single value (tables overdue for vacuum).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect or custom scripted input.\n• Ensure the following data sources are available: `pg_stat_user_tables` (n_dead_tup, n_live_tup, last_autovacuum, last_vacuum).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `pg_stat_user_tables` via DB Connect every hour. Extract `n_dead_tup`, `n_live_tup`, `last_autovacuum`. Compute dead tuple ratio and time since last vacuum. Alert when dead_ratio >5% or n_dead_tup >10000 for critical tables. Alert when last_autovacuum is >24 hours for high-churn tables. Track autovacuum runs from `pg_stat_progress_vacuum` if available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:pg_stat_user_tables\"\n| eval dead_ratio=round(n_dead_tup/nullif(n_live_tup,0)*100, 2)\n| where dead_ratio > 5 OR n_dead_tup > 10000\n| eval hours_since_vacuum=round((now()-strptime(last_autovacuum,\"%Y-%m-%d %H:%M:%S\"))/3600, 1)\n| table schemaname, relname, n_dead_tup, n_live_tup, dead_ratio, last_autovacuum, hours_since_vacuum\n| sort -n_dead_tup\n```\n\nUnderstanding this SPL\n\n**PostgreSQL Vacuum Activity** — Autovacuum running, dead tuples, and table bloat affect query performance. Monitoring ensures vacuum keeps pace with write workload and prevents bloat.\n\nDocumented **Data sources**: `pg_stat_user_tables` (n_dead_tup, n_live_tup, last_autovacuum, last_vacuum). **App/TA** (typical add-on context): Splunk DB Connect or custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:pg_stat_user_tables. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:pg_stat_user_tables\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **dead_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where dead_ratio > 5 OR n_dead_tup > 10000` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **hours_since_vacuum** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **PostgreSQL Vacuum Activity**): table schemaname, relname, n_dead_tup, n_live_tup, dead_ratio, last_autovacuum, hours_since_vacuum\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**PostgreSQL Vacuum Activity** — Autovacuum running, dead tuples, and table bloat affect query performance. 
Monitoring ensures vacuum keeps pace with write workload and prevents bloat.\n\nDocumented **Data sources**: `pg_stat_user_tables` (n_dead_tup, n_live_tup, last_autovacuum, last_vacuum). **App/TA** (typical add-on context): Splunk DB Connect or custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Instance_Stats` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (tables with bloat risk), Bar chart (dead tuples by table), Line chart (dead tuple ratio trend), Single value (tables overdue for vacuum).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow PostgreSQL activity and space trends so we can keep vacuum, replication, and connection paths healthy as load grows.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.24",
              "n": "PostgreSQL Connection Pool Monitoring (PgBouncer)",
              "c": "high",
              "f": "intermediate",
              "v": "Pool utilization and wait queue length indicate connection pressure. High utilization or growing wait queue causes application timeouts.",
              "t": "Custom scripted input (PgBouncer SHOW POOLS/STATS)",
              "d": "PgBouncer admin console output (`SHOW POOLS`, `SHOW STATS`)",
              "q": "index=database sourcetype=\"pgbouncer:pools\"\n| eval pool_util_pct=round(cl_active+cl_wait)/nullif(max_client_conn,0)*100, 1\n| eval wait_queue=cl_wait\n| where pool_util_pct > 80 OR wait_queue > 5\n| timechart span=5m max(pool_util_pct) as util_pct, max(wait_queue) as wait_queue by database, pool_mode",
              "m": "Create a scripted input that connects to PgBouncer admin console (default port 6432) and runs `SHOW POOLS` and `SHOW STATS` every 5 minutes. Parse output into structured events. Extract `cl_active`, `cl_wait`, `max_client_conn` per database/pool. Alert when pool utilization >80% or `cl_wait` >5. Track `sv_idle`, `sv_used` for server connection usage.",
              "z": "Gauge (pool utilization %), Line chart (active vs wait connections), Table (pools with high utilization or wait queue).",
              "kfp": "Connection pool warm-up after restarts, blue-green deploys, or autoscaling can look like a spike until the pool or fleet reaches steady state.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (PgBouncer SHOW POOLS/STATS).\n• Ensure the following data sources are available: PgBouncer admin console output (`SHOW POOLS`, `SHOW STATS`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a scripted input that connects to PgBouncer admin console (default port 6432) and runs `SHOW POOLS` and `SHOW STATS` every 5 minutes. Parse output into structured events. Extract `cl_active`, `cl_wait`, `max_client_conn` per database/pool. Alert when pool utilization >80% or `cl_wait` >5. Track `sv_idle`, `sv_used` for server connection usage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"pgbouncer:pools\"\n| eval pool_util_pct=round(cl_active+cl_wait)/nullif(max_client_conn,0)*100, 1\n| eval wait_queue=cl_wait\n| where pool_util_pct > 80 OR wait_queue > 5\n| timechart span=5m max(pool_util_pct) as util_pct, max(wait_queue) as wait_queue by database, pool_mode\n```\n\nUnderstanding this SPL\n\n**PostgreSQL Connection Pool Monitoring (PgBouncer)** — Pool utilization and wait queue length indicate connection pressure. High utilization or growing wait queue causes application timeouts.\n\nDocumented **Data sources**: PgBouncer admin console output (`SHOW POOLS`, `SHOW STATS`). **App/TA** (typical add-on context): Custom scripted input (PgBouncer SHOW POOLS/STATS). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: pgbouncer:pools. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"pgbouncer:pools\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pool_util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **wait_queue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pool_util_pct > 80 OR wait_queue > 5` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by database, pool_mode** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**PostgreSQL Connection Pool Monitoring (PgBouncer)** — Pool utilization and wait queue length indicate connection pressure. High utilization or growing wait queue causes application timeouts.\n\nDocumented **Data sources**: PgBouncer admin console output (`SHOW POOLS`, `SHOW STATS`). **App/TA** (typical add-on context): Custom scripted input (PgBouncer SHOW POOLS/STATS). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Instance_Stats` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (pool utilization %), Line chart (active vs wait connections), Table (pools with high utilization or wait queue).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
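A PgBouncer-specific collector sketch to go with the generic stanza above (assumptions: `psql` is installed on the forwarder, the connecting user is listed in PgBouncer's `stats_users`, and the host, port, user, and `PGB_STATS_PASSWORD` variable are placeholders; `SHOW POOLS` column order varies by PgBouncer version, so verify the field mapping against your release):

```bash
#!/usr/bin/env bash
# Sketch: poll the PgBouncer admin console and emit one key=value event per pool.
PGPASSWORD="${PGB_STATS_PASSWORD}" psql -h 127.0.0.1 -p 6432 -U stats_user -d pgbouncer \
  --no-align --tuples-only --field-separator=' ' -c 'SHOW POOLS;' |
while read -r database user cl_active cl_waiting _rest; do
  # Map PgBouncer's cl_waiting column to the cl_wait field name used by the search.
  echo "timestamp=$(date +%s) database=${database} user=${user} cl_active=${cl_active} cl_wait=${cl_waiting}"
done
```

Note that `max_client_conn` is a global PgBouncer setting rather than a `SHOW POOLS` column, so the utilization search assumes your collector also emits it (for example from `SHOW CONFIG`) or that you substitute a per-pool limit such as `pool_size`.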
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many sessions and pooled connections the fleet uses so we can scale or fix apps before the database hits its connection limits.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action span=5m | sort - count",
              "e": [
                "postgresql"
              ],
              "em": [
                "postgresql_pgbouncer"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.25",
              "n": "MySQL / MariaDB InnoDB Buffer Pool Hit Ratio",
              "c": "medium",
              "f": "intermediate",
              "v": "Buffer pool effectiveness; low hit ratio means excessive disk I/O and degraded query performance.",
              "t": "Splunk DB Connect or custom scripted input",
              "d": "`SHOW GLOBAL STATUS` (Innodb_buffer_pool_read_requests, Innodb_buffer_pool_reads)",
              "q": "index=database sourcetype=\"dbconnect:mysql_status\"\n| eval hit_ratio=round(100*(1-Innodb_buffer_pool_reads/nullif(Innodb_buffer_pool_read_requests,0)), 2)\n| where hit_ratio < 99\n| timechart span=15m avg(hit_ratio) as buffer_pool_hit_ratio by host",
              "m": "Poll `SHOW GLOBAL STATUS` via DB Connect every 15 minutes. Extract `Innodb_buffer_pool_read_requests` and `Innodb_buffer_pool_reads`. Compute hit ratio = (1 - reads/requests) * 100. Alert when hit ratio drops below 99% for sustained periods. Correlate with memory allocation and workload changes.",
              "z": "Gauge (buffer pool hit ratio %), Line chart (hit ratio over time), Single value (current hit ratio).",
              "kfp": "Buffer or page cache hit ratio may drop after instance restart, large table scans, or after data warehouse loads; short dips often normalize without action.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect or custom scripted input.\n• Ensure the following data sources are available: `SHOW GLOBAL STATUS` (Innodb_buffer_pool_read_requests, Innodb_buffer_pool_reads).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `SHOW GLOBAL STATUS` via DB Connect every 15 minutes. Extract `Innodb_buffer_pool_read_requests` and `Innodb_buffer_pool_reads`. Compute hit ratio = (1 - reads/requests) * 100. Alert when hit ratio drops below 99% for sustained periods. Correlate with memory allocation and workload changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:mysql_status\"\n| eval hit_ratio=round(100*(1-Innodb_buffer_pool_reads/nullif(Innodb_buffer_pool_read_requests,0)), 2)\n| where hit_ratio < 99\n| timechart span=15m avg(hit_ratio) as buffer_pool_hit_ratio by host\n```\n\nUnderstanding this SPL\n\n**MySQL / MariaDB InnoDB Buffer Pool Hit Ratio** — Buffer pool effectiveness; low hit ratio means excessive disk I/O and degraded query performance.\n\nDocumented **Data sources**: `SHOW GLOBAL STATUS` (Innodb_buffer_pool_read_requests, Innodb_buffer_pool_reads). **App/TA** (typical add-on context): Splunk DB Connect or custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:mysql_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:mysql_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hit_ratio < 99` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.dest span=15m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**MySQL / MariaDB InnoDB Buffer Pool Hit Ratio** — Buffer pool effectiveness; low hit ratio means excessive disk I/O and degraded query performance.\n\nDocumented **Data sources**: `SHOW GLOBAL STATUS` (Innodb_buffer_pool_read_requests, Innodb_buffer_pool_reads). **App/TA** (typical add-on context): Splunk DB Connect or custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (buffer pool hit ratio %), Line chart (hit ratio over time), Single value (current hit ratio).",
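Where Step 1 calls for alerting on a sustained breach rather than a single low sample, one sketch (assuming the 15-minute polling above; the four-bucket window and thresholds are illustrative):

```spl
index=database sourcetype="dbconnect:mysql_status"
| eval hit_ratio=round(100*(1-Innodb_buffer_pool_reads/nullif(Innodb_buffer_pool_read_requests,0)), 2)
| bin _time span=15m
| stats avg(hit_ratio) as hit_ratio by _time, host
| streamstats window=4 max(hit_ratio) as best_of_last_hour by host
| where best_of_last_hour < 99
```

If even the best of the last four 15-minute buckets is below 99%, the ratio has been low for a full hour, which filters out momentary dips after restarts or one-off scans.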
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow buffer and cache hit rates so we can see when memory is too small for the working set and query latency starts to depend on disk more than it should.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.dest span=15m | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.26",
              "n": "MySQL Binary Log Space Usage",
              "c": "medium",
              "f": "intermediate",
              "v": "Binlog accumulation on disk can exhaust disk space and impact replication. Monitoring enables proactive purging or archival.",
              "t": "Splunk DB Connect or custom scripted input",
              "d": "`SHOW BINARY LOGS`, filesystem binlog directory size",
              "q": "index=database sourcetype=\"dbconnect:mysql_binlogs\"\n| eval size_gb=round(File_size/1073741824, 2)\n| stats sum(File_size) as total_bytes by host\n| eval total_gb=round(total_bytes/1073741824, 2)\n| where total_gb > 50\n| table host, total_gb, binlog_count",
              "m": "Poll `SHOW BINARY LOGS` via DB Connect daily or every 6 hours. Sum `File_size` across all binlogs. Optionally measure binlog directory on disk. Alert when total binlog size exceeds threshold (e.g., >50 GB). Track binlog purge lag (oldest binlog age). Correlate with replication lag and `expire_logs_days`/`binlog_expire_logs_seconds` settings.",
              "z": "Line chart (binlog total size over time), Single value (current binlog size GB), Table (host, size, count).",
              "kfp": "Lag spikes during large transactions, schema changes, or network maintenance to standby; align with RPO and change windows.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect or custom scripted input.\n• Ensure the following data sources are available: `SHOW BINARY LOGS`, filesystem binlog directory size.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `SHOW BINARY LOGS` via DB Connect daily or every 6 hours. Sum `File_size` across all binlogs. Optionally measure binlog directory on disk. Alert when total binlog size exceeds threshold (e.g., >50 GB). Track binlog purge lag (oldest binlog age). Correlate with replication lag and `expire_logs_days`/`binlog_expire_logs_seconds` settings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:mysql_binlogs\"\n| eval size_gb=round(File_size/1073741824, 2)\n| stats sum(File_size) as total_bytes by host\n| eval total_gb=round(total_bytes/1073741824, 2)\n| where total_gb > 50\n| table host, total_gb, binlog_count\n```\n\nUnderstanding this SPL\n\n**MySQL Binary Log Space Usage** — Binlog accumulation on disk can exhaust disk space and impact replication. Monitoring enables proactive purging or archival.\n\nDocumented **Data sources**: `SHOW BINARY LOGS`, filesystem binlog directory size. **App/TA** (typical add-on context): Splunk DB Connect or custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:mysql_binlogs. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:mysql_binlogs\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **size_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where total_gb > 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MySQL Binary Log Space Usage**): table host, total_gb, binlog_count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**MySQL Binary Log Space Usage** — Binlog accumulation on disk can exhaust disk space and impact replication. Monitoring enables proactive purging or archival.\n\nDocumented **Data sources**: `SHOW BINARY LOGS`, filesystem binlog directory size. **App/TA** (typical add-on context): Splunk DB Connect or custom scripted input. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (binlog total size over time), Single value (current binlog size GB), Table (host, size, count).",
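To complement the absolute-size alert, a day-over-day growth sketch (the 10 GB/day threshold is illustrative, not from the use case):

```spl
index=database sourcetype="dbconnect:mysql_binlogs"
| bin _time span=1d
| stats sum(File_size) as total_bytes by _time, host
| eval total_gb=round(total_bytes/1073741824, 2)
| streamstats current=f window=1 last(total_gb) as prev_gb by host
| eval growth_gb=round(total_gb-prev_gb, 2)
| where growth_gb > 10
```

The `streamstats current=f window=1 last(...)` idiom fetches the previous day's value per host, so `growth_gb` is the daily delta rather than the running total.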
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track MySQL binary log disk use so retention, replication, and purge jobs do not fill the volume and stop the instance.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.dest | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.27",
              "n": "Oracle Tablespace Utilization",
              "c": "high",
              "f": "intermediate",
              "v": "Approaching max size per tablespace causes ORA-1653 (out of space) errors and application failures. Proactive monitoring prevents outages.",
              "t": "Splunk DB Connect",
              "d": "`DBA_TABLESPACE_USAGE_METRICS`, `DBA_DATA_FILES`",
              "q": "index=database sourcetype=\"dbconnect:oracle_tablespace\"\n| eval used_pct=round(USED_PERCENT, 1)\n| where used_pct > 80\n| timechart span=1d latest(used_pct) as used_pct by TABLESPACE_NAME\n| where used_pct > 85",
              "m": "Poll `DBA_TABLESPACE_USAGE_METRICS` (or `DBA_FREE_SPACE` + `DBA_DATA_FILES`) via DB Connect every 4–6 hours. Extract used percent per tablespace. Alert at 80% (warning) and 90% (critical). Track growth rate for capacity planning. Include temp and undo tablespaces.",
              "z": "Gauge (tablespace used %), Table (tablespaces over threshold), Line chart (utilization trend by tablespace).",
              "kfp": "Capacity grows during ETL loads, month-end batch processing, or data migrations. Growth from `ANALYZE`, statistics runs, or one-off bulk loads is often expected when it matches the schedule.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect.\n• Ensure the following data sources are available: `DBA_TABLESPACE_USAGE_METRICS`, `DBA_DATA_FILES`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `DBA_TABLESPACE_USAGE_METRICS` (or `DBA_FREE_SPACE` + `DBA_DATA_FILES`) via DB Connect every 4–6 hours. Extract used percent per tablespace. Alert at 80% (warning) and 90% (critical). Track growth rate for capacity planning. Include temp and undo tablespaces.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:oracle_tablespace\"\n| eval used_pct=round(USED_PERCENT, 1)\n| where used_pct > 80\n| timechart span=1d latest(used_pct) as used_pct by TABLESPACE_NAME\n| where used_pct > 85\n```\n\nUnderstanding this SPL\n\n**Oracle Tablespace Utilization** — Approaching max size per tablespace causes ORA-1653 (out of space) errors and application failures. Proactive monitoring prevents outages.\n\nDocumented **Data sources**: `DBA_TABLESPACE_USAGE_METRICS`, `DBA_DATA_FILES`. **App/TA** (typical add-on context): Splunk DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:oracle_tablespace. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:oracle_tablespace\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by TABLESPACE_NAME** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where used_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Tablespace by Tablespace.host, Tablespace.action span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Oracle Tablespace Utilization** — Approaching max size per tablespace causes ORA-1653 (out of space) errors and application failures. Proactive monitoring prevents outages.\n\nDocumented **Data sources**: `DBA_TABLESPACE_USAGE_METRICS`, `DBA_DATA_FILES`. **App/TA** (typical add-on context): Splunk DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Tablespace` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (tablespace used %), Table (tablespaces over threshold), Line chart (utilization trend by tablespace).",
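For the warning/critical tiers named in Step 1, a table-panel sketch using the same thresholds:

```spl
index=database sourcetype="dbconnect:oracle_tablespace"
| stats latest(USED_PERCENT) as used_pct by host, TABLESPACE_NAME
| eval severity=case(used_pct>=90, "critical", used_pct>=80, "warning", true(), "ok")
| where severity!="ok"
| table host TABLESPACE_NAME used_pct severity
```

This drives the "tablespaces over threshold" table directly; the alert variant can trigger on any row with `severity="critical"`.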
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend space use on our data files and tablespaces so we can add capacity or archive data before a database runs out of room during peak load.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Tablespace by Tablespace.host, Tablespace.action span=1d | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.28",
              "n": "PostgreSQL Replication Lag (Streaming)",
              "c": "critical",
              "f": "intermediate",
              "v": "`pg_stat_replication` write/flush/replay lag bytes and seconds catch standby drift before read-your-writes violations. Complements generic replication UC with PostgreSQL-native metrics.",
              "t": "DB Connect, `pg_stat_replication` scripted export",
              "d": "`write_lag`, `flush_lag`, `replay_lag`, `sent_lsn`, `lsn_gap_bytes` (computed in DB Connect SQL via `pg_wal_lsn_diff(sent_lsn, replay_lsn)`)",
              "q": "index=database sourcetype=\"dbconnect:pg_replication\"\n| rex field=replay_lag \"(?<replay_lag_sec>\\d+)\"\n| eval replay_lag_sec=tonumber(replay_lag_sec),\n       lsn_gap_bytes=tonumber(lsn_gap_bytes)\n| where replay_lag_sec > 60 OR lsn_gap_bytes > 104857600\n| table application_name client_addr replay_lag_sec lsn_gap_bytes state",
              "m": "Poll replication view every 1m. Map `application_name` to replica. Alert on replay lag > RPO seconds or LSN gap >100MB. Correlate with `archive_command` and network.",
              "z": "Line chart (replay lag per standby), Table (standby, lag sec), Single value (max lag).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, `pg_stat_replication` scripted export.\n• Ensure the following data sources are available: `write_lag`, `flush_lag`, `replay_lag`, `sent_lsn`, `lsn_gap_bytes` (computed in DB Connect SQL via `pg_wal_lsn_diff(sent_lsn, replay_lsn)`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll replication view every 1m. Map `application_name` to replica. Alert on replay lag > RPO seconds or LSN gap >100MB. Correlate with `archive_command` and network.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:pg_replication\"\n| rex field=replay_lag \"(?<replay_lag_sec>\\d+)\"\n| eval replay_lag_sec=tonumber(replay_lag_sec),\n       lsn_gap_bytes=tonumber(lsn_gap_bytes)\n| where replay_lag_sec > 60 OR lsn_gap_bytes > 104857600\n| table application_name client_addr replay_lag_sec lsn_gap_bytes state\n```\n\nUnderstanding this SPL\n\n**PostgreSQL Replication Lag (Streaming)** — `pg_stat_replication` write/flush/replay lag bytes and seconds catch standby drift before read-your-writes violations. Complements generic replication UC with PostgreSQL-native metrics.\n\nDocumented **Data sources**: `write_lag`, `flush_lag`, `replay_lag`, `sent_lsn`, `lsn_gap_bytes` (computed in DB Connect SQL via `pg_wal_lsn_diff(sent_lsn, replay_lsn)`). **App/TA** (typical add-on context): DB Connect, `pg_stat_replication` scripted export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:pg_replication. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:pg_replication\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **replay_lag_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where replay_lag_sec > 60 OR lsn_gap_bytes > 104857600` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PostgreSQL Replication Lag (Streaming)**): table application_name client_addr replay_lag_sec lsn_gap_bytes state\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**PostgreSQL Replication Lag (Streaming)** — `pg_stat_replication` write/flush/replay lag bytes and seconds catch standby drift before read-your-writes violations. Complements generic replication UC with PostgreSQL-native metrics.\n\nDocumented **Data sources**: `write_lag`, `flush_lag`, `replay_lag`, `sent_lsn`, `lsn_gap_bytes` (computed in DB Connect SQL via `pg_wal_lsn_diff(sent_lsn, replay_lsn)`). **App/TA** (typical add-on context): DB Connect, `pg_stat_replication` scripted export. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Instance_Stats` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (replay lag per standby), Table (standby, lag sec), Single value (max lag).",
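For the "replay lag per standby" line chart, a sketch reusing the interval-aware extraction from the main search (the rex assumes lag arrives either as seconds or as an HH:MM:SS interval string):

```spl
index=database sourcetype="dbconnect:pg_replication"
| rex field=replay_lag "(?:(?<lag_h>\d+):(?<lag_m>\d+):)?(?<lag_s>\d+(?:\.\d+)?)"
| eval replay_lag_sec=coalesce(tonumber(lag_h),0)*3600+coalesce(tonumber(lag_m),0)*60+tonumber(lag_s)
| timechart span=5m max(replay_lag_sec) as max_replay_lag_sec by application_name
```

If the DB Connect input can instead export `EXTRACT(EPOCH FROM replay_lag)` as a numeric column, the rex stage can be dropped entirely.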
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.29",
              "n": "MySQL InnoDB Buffer Pool Hit Ratio Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "Fleet-wide buffer pool hit ratio SLO for MySQL/MariaDB with per-instance drilldown. Aligns capacity reviews with read IO pressure.",
              "t": "DB Connect, `SHOW GLOBAL STATUS`",
              "d": "`Innodb_buffer_pool_read_requests`, `Innodb_buffer_pool_reads`",
              "q": "index=database sourcetype=\"dbconnect:mysql_status\"\n| eval hit_ratio=round(100*(1-Innodb_buffer_pool_reads/nullif(Innodb_buffer_pool_read_requests,0)),2)\n| bin _time span=1h\n| stats avg(hit_ratio) as fleet_avg, min(hit_ratio) as worst by, _time\n| where fleet_avg < 99 OR worst < 95",
              "m": "Aggregate hourly for executive view; retain per-host series for alerts. Correlate drops with large table scans or buffer pool size changes.",
              "z": "Line chart (fleet avg vs worst instance), Gauge (current hit ratio), Table (instances below 99%).",
              "kfp": "Cold caches right after restarts, failover, or one-off full scans can lower hit ratios until the working set is warm again — watch trends, not a single low sample.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, `SHOW GLOBAL STATUS`.\n• Ensure the following data sources are available: `Innodb_buffer_pool_read_requests`, `Innodb_buffer_pool_reads`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate hourly for executive view; retain per-host series for alerts. Correlate drops with large table scans or buffer pool size changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:mysql_status\"\n| eval hit_ratio=round(100*(1-Innodb_buffer_pool_reads/nullif(Innodb_buffer_pool_read_requests,0)),2)\n| bin _time span=1h\n| stats avg(hit_ratio) as fleet_avg, min(hit_ratio) as worst by, _time\n| where fleet_avg < 99 OR worst < 95\n```\n\nUnderstanding this SPL\n\n**MySQL InnoDB Buffer Pool Hit Ratio Monitoring** — Fleet-wide buffer pool hit ratio SLO for MySQL/MariaDB with per-instance drilldown. Aligns capacity reviews with read IO pressure.\n\nDocumented **Data sources**: `Innodb_buffer_pool_read_requests`, `Innodb_buffer_pool_reads`. **App/TA** (typical add-on context): DB Connect, `SHOW GLOBAL STATUS`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:mysql_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:mysql_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where fleet_avg < 99 OR worst < 95` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**MySQL InnoDB Buffer Pool Hit Ratio Monitoring** — Fleet-wide buffer pool hit ratio SLO for MySQL/MariaDB with per-instance drilldown. Aligns capacity reviews with read IO pressure.\n\nDocumented **Data sources**: `Innodb_buffer_pool_read_requests`, `Innodb_buffer_pool_reads`. **App/TA** (typical add-on context): DB Connect, `SHOW GLOBAL STATUS`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (fleet avg vs worst instance), Gauge (current hit ratio), Table (instances below 99%).",
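For the "instances below 99%" drilldown table, a sketch using the same ratio computation:

```spl
index=database sourcetype="dbconnect:mysql_status"
| eval hit_ratio=round(100*(1-Innodb_buffer_pool_reads/nullif(Innodb_buffer_pool_read_requests,0)),2)
| stats latest(hit_ratio) as current_hit_ratio by host
| where current_hit_ratio < 99
| sort current_hit_ratio
```

Sorting ascending puts the worst instance at the top, which is the natural drilldown ordering for the fleet panel.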
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow buffer and cache hit rates so we can see when memory is too small for the working set and query latency starts to depend on disk more than it should.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.30",
              "n": "Oracle Tablespace Growth Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Week-over-week growth rate per tablespace drives forecast and ASM/space procurement. Extends point-in-time utilization with trend.",
              "t": "DB Connect",
              "d": "`DBA_TABLESPACE_USAGE_METRICS` (used_space, tablespace_size)",
              "q": "index=database sourcetype=\"dbconnect:oracle_tablespace\"\n| timechart span=1d latest(USED_SPACE) as used_bytes by TABLESPACE_NAME\n| streamstats window=7 range(used_bytes) as growth_7d by TABLESPACE_NAME\n| eval growth_gb=round(growth_7d/1073741824,2)\n| where growth_gb > 10",
              "m": "Daily snapshot. Alert on >10GB/week growth on critical tablespaces. Use `predict` on used_bytes for runway to maxsize.",
              "z": "Line chart (used GB trend), Table (tablespace, growth GB/week), Single value (fastest growing).",
              "kfp": "Capacity grows during ETL loads, month-end batch processing, or data migrations. Growth from `ANALYZE`, statistics runs, or one-off bulk loads is often expected when it matches the schedule.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect.\n• Ensure the following data sources are available: `DBA_TABLESPACE_USAGE_METRICS` (used_space, tablespace_size).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDaily snapshot. Alert on >10GB/week growth on critical tablespaces. Use `predict` on used_bytes for runway to maxsize.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:oracle_tablespace\"\n| timechart span=1d latest(USED_SPACE) as used_bytes by TABLESPACE_NAME\n| streamstats window=7 range(used_bytes) as growth_7d by TABLESPACE_NAME\n| eval growth_gb=round(growth_7d/1073741824,2)\n| where growth_gb > 10\n```\n\nUnderstanding this SPL\n\n**Oracle Tablespace Growth Trending** — Week-over-week growth rate per tablespace drives forecast and ASM/space procurement. Extends point-in-time utilization with trend.\n\nDocumented **Data sources**: `DBA_TABLESPACE_USAGE_METRICS` (used_space, tablespace_size). **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:oracle_tablespace. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:oracle_tablespace\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by TABLESPACE_NAME** — ideal for trending and alerting on this use case.\n• `streamstats` rolls up events into metrics; results are split **by TABLESPACE_NAME** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **growth_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where growth_gb > 10` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Tablespace by Tablespace.host, Tablespace.action span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Oracle Tablespace Growth Trending** — Week-over-week growth rate per tablespace drives forecast and ASM/space procurement. Extends point-in-time utilization with trend.\n\nDocumented **Data sources**: `DBA_TABLESPACE_USAGE_METRICS` (used_space, tablespace_size). **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Tablespace` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (used GB trend), Table (tablespace, growth GB/week), Single value (fastest growing).",
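For the `predict`-based runway mentioned in Step 1, a per-tablespace sketch (TABLESPACE_NAME="USERS" is an illustrative filter; `predict` forecasts one series at a time, so run it per tablespace or drive the filter from a dashboard token):

```spl
index=database sourcetype="dbconnect:oracle_tablespace" TABLESPACE_NAME="USERS"
| timechart span=1d latest(USED_SPACE) as used_bytes
| predict used_bytes as forecast future_timespan=30
```

Compare the 30-day forecast against the tablespace maxsize to estimate how much runway remains before intervention.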
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend space use on our data files and tablespaces so we can add capacity or archive data before a database runs out of room during peak load.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Tablespace by Tablespace.host, Tablespace.action span=1d | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.31",
              "n": "SQL Server Always On AG Health and Replica Sync",
              "c": "critical",
              "f": "beginner",
              "v": "Combined view of `synchronization_health`, redo queue, and log send queue sizes for AG replicas. Operationalizes UC-7.1.12 with queue depth.",
              "t": "DB Connect, `Splunk_TA_microsoft-sqlserver`",
              "d": "`sys.dm_hadr_database_replica_states`, `log_send_queue_size`, `redo_queue_size`",
              "q": "index=database sourcetype=\"dbconnect:ag_replica_state\"\n| where synchronization_health_desc!=\"HEALTHY\" OR log_send_queue_size > 104857600 OR redo_queue_size > 104857600\n| table ag_name replica_server_name synchronization_health_desc log_send_queue_size redo_queue_size",
              "m": "Poll DMVs every 5m. Alert on unhealthy sync or queue >100MB (tune threshold). Track automatic failover readiness.",
              "z": "Status grid (replica × health), Line chart (queue sizes), Table (unhealthy AG databases).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, `Splunk_TA_microsoft-sqlserver`.\n• Ensure the following data sources are available: `sys.dm_hadr_database_replica_states`, `log_send_queue_size`, `redo_queue_size`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll DMVs every 5m. Alert on unhealthy sync or queue >100MB (tune threshold). Track automatic failover readiness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:ag_replica_state\"\n| where synchronization_health_desc!=\"HEALTHY\" OR log_send_queue_size > 104857600 OR redo_queue_size > 104857600\n| table ag_name replica_server_name synchronization_health_desc log_send_queue_size redo_queue_size\n```\n\nUnderstanding this SPL\n\n**SQL Server Always On AG Health and Replica Sync** — Combined view of `synchronization_health`, redo queue, and log send queue sizes for AG replicas. Operationalizes UC-7.1.12 with queue depth.\n\nDocumented **Data sources**: `sys.dm_hadr_database_replica_states`, `log_send_queue_size`, `redo_queue_size`. **App/TA** (typical add-on context): DB Connect, `Splunk_TA_microsoft-sqlserver`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:ag_replica_state. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:ag_replica_state\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where synchronization_health_desc!=\"HEALTHY\" OR log_send_queue_size > 104857600 OR redo_queue_size > 104857600` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SQL Server Always On AG Health and Replica Sync**): table ag_name replica_server_name synchronization_health_desc log_send_queue_size redo_queue_size\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SQL Server Always On AG Health and Replica Sync** — Combined view of `synchronization_health`, redo queue, and log send queue sizes for AG replicas. Operationalizes UC-7.1.12 with queue depth.\n\nDocumented **Data sources**: `sys.dm_hadr_database_replica_states`, `log_send_queue_size`, `redo_queue_size`. **App/TA** (typical add-on context): DB Connect, `Splunk_TA_microsoft-sqlserver`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Instance_Stats` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (replica × health), Line chart (queue sizes), Table (unhealthy AG databases).",
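For the failover-readiness tracking in Step 1, a status-grid sketch (the DMV queue columns are in KB, so 102400 is roughly 100 MB; the readiness rule itself is illustrative and should match your HA policy):

```spl
index=database sourcetype="dbconnect:ag_replica_state"
| stats latest(synchronization_health_desc) as health, latest(log_send_queue_size) as send_q_kb, latest(redo_queue_size) as redo_q_kb by ag_name, replica_server_name
| eval failover_ready=if(health="HEALTHY" AND send_q_kb<102400 AND redo_q_kb<102400, "yes", "no")
| table ag_name replica_server_name health send_q_kb redo_q_kb failover_ready
```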
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow SQL Server performance and availability signals so the team can catch regressions, AG issues, and tempdb pressure on business-critical systems.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count",
              "e": [
                "db_connect",
                "mssql"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.32",
              "n": "Database Backup Chain Validation",
              "c": "critical",
              "f": "advanced",
              "v": "Verifies full→diff→log chain continuity (SQL Server LSN chain, Oracle archivelog sequence) to detect missing backups before restore drills fail.",
              "t": "DB Connect, backup vendor logs",
              "d": "`msdb.dbo.backupset` (first_lsn, last_lsn, type), RMAN backup pieces",
              "q": "index=database sourcetype=\"dbconnect:backup_chain\"\n| sort database_name, backup_finish_date\n| streamstats window=2 previous(last_lsn) as prev_last by database_name\n| where isnotnull(prev_last) AND first_lsn!=prev_last AND type!=1\n| table database_name backup_finish_date type first_lsn prev_last",
              "m": "Custom SQL to flag LSN gaps. For Oracle, check archivelog sequence continuity. Alert on any break in chain for production databases.",
              "z": "Table (broken chains), Timeline (backup types), Single value (databases with gaps).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, backup vendor logs.\n• Ensure the following data sources are available: `msdb.dbo.backupset` (first_lsn, last_lsn, type), RMAN backup pieces.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCustom SQL to flag LSN gaps. For Oracle, check archivelog sequence continuity. Alert on any break in chain for production databases.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:backup_chain\"\n| sort database_name, backup_finish_date\n| streamstats window=2 previous(last_lsn) as prev_last by database_name\n| where isnotnull(prev_last) AND first_lsn!=prev_last AND type!=1\n| table database_name backup_finish_date type first_lsn prev_last\n```\n\nUnderstanding this SPL\n\n**Database Backup Chain Validation** — Verifies full→diff→log chain continuity (SQL Server LSN chain, Oracle archivelog sequence) to detect missing backups before restore drills fail.\n\nDocumented **Data sources**: `msdb.dbo.backupset` (first_lsn, last_lsn, type), RMAN backup pieces. **App/TA** (typical add-on context): DB Connect, backup vendor logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:backup_chain. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:backup_chain\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by database_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where isnotnull(prev_last) AND first_lsn!=prev_last AND type!=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Database Backup Chain Validation**): table database_name backup_finish_date type first_lsn prev_last\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Database Backup Chain Validation** — Verifies full→diff→log chain continuity (SQL Server LSN chain, Oracle archivelog sequence) to detect missing backups before restore drills fail.\n\nDocumented **Data sources**: `msdb.dbo.backupset` (first_lsn, last_lsn, type), RMAN backup pieces. **App/TA** (typical add-on context): DB Connect, backup vendor logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Instance_Stats` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (broken chains), Timeline (backup types), Single value (databases with gaps).",
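              "_ex": "A minimal sketch of the Oracle archivelog continuity check mentioned in Step 1, assuming a hypothetical `dbconnect:archivelog` sourcetype whose events carry `database_name` and a numeric `sequence_num` (illustrative names; map them to your DB Connect input):\n\n```spl\nindex=database sourcetype=\"dbconnect:archivelog\"\n| sort database_name, sequence_num\n| streamstats current=f window=1 last(sequence_num) as prev_seq by database_name\n| where isnotnull(prev_seq) AND sequence_num != prev_seq + 1\n| table database_name prev_seq sequence_num\n```\nAny row returned marks a gap in the archivelog sequence for that database.",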
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check backup and snapshot success and growth so we can trust restores and long-term storage plans for regulation and for real incidents.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.33",
              "n": "Long-Running Query Detection (Active Sessions)",
              "c": "high",
              "f": "intermediate",
              "v": "Surfaces currently running queries exceeding elapsed threshold with SQL hash and wait type—faster triage than transaction-only UC-7.1.8.",
              "t": "DB Connect",
              "d": "`sys.dm_exec_requests`, `pg_stat_activity`, `V$SESSION` + `V$SQL`",
              "q": "index=database sourcetype=\"dbconnect:active_requests\"\n| where elapsed_sec > 300 AND status=\"running\"\n| stats max(elapsed_sec) as max_sec by session_id, database_name, sql_hash\n| table session_id database_name sql_hash max_sec wait_type",
              "m": "Poll every 2m. Exclude known batch accounts via lookup. Alert when max_sec >900 for OLTP. Include optional `sql_text` sampling for compliance.",
              "z": "Table (long-running sessions), Line chart (count of long queries), Single value (longest elapsed sec).",
              "kfp": "Maintenance work (ANALYZE, VACUUM, REINDEX, index builds) and ETL or large report jobs may show up as long-running or blocking — compare with the maintenance and batch schedule.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect.\n• Ensure the following data sources are available: `sys.dm_exec_requests`, `pg_stat_activity`, `V$SESSION` + `V$SQL`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll every 2m. Exclude known batch accounts via lookup. Alert when max_sec >900 for OLTP. Include optional `sql_text` sampling for compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:active_requests\"\n| where elapsed_sec > 300 AND status=\"running\"\n| stats max(elapsed_sec) as max_sec by session_id, database_name, sql_hash\n| table session_id database_name sql_hash max_sec wait_type\n```\n\nUnderstanding this SPL\n\n**Long-Running Query Detection (Active Sessions)** — Surfaces currently running queries exceeding elapsed threshold with SQL hash and wait type—faster triage than transaction-only UC-7.1.8.\n\nDocumented **Data sources**: `sys.dm_exec_requests`, `pg_stat_activity`, `V$SESSION` + `V$SQL`. **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:active_requests. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:active_requests\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where elapsed_sec > 300 AND status=\"running\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by session_id, database_name, sql_hash** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Long-Running Query Detection (Active Sessions)**): table session_id database_name sql_hash max_sec wait_type\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Long-Running Query Detection (Active Sessions)** — Surfaces currently running queries exceeding elapsed threshold with SQL hash and wait type—faster triage than transaction-only UC-7.1.8.\n\nDocumented **Data sources**: `sys.dm_exec_requests`, `pg_stat_activity`, `V$SESSION` + `V$SQL`. **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (long-running sessions), Line chart (count of long queries), Single value (longest elapsed sec).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.34",
              "n": "Deadlock Frequency by Database",
              "c": "high",
              "f": "beginner",
              "v": "Counts deadlocks per hour/database to detect code regressions after releases. Complements UC-7.1.2 event search with KPIs.",
              "t": "Error log ingestion, extended events",
              "d": "SQL Server errorlog deadlock graph frequency, PostgreSQL `log_lock_waits`, Oracle ORA-00060",
              "q": "index=database sourcetype=\"mssql:errorlog\"\n| search deadlock OR \"Deadlock\"\n| bucket _time span=1h\n| stats count as deadlocks by database_name, _time\n| where deadlocks > 5",
              "m": "Parse database name from deadlock XML if available. Alert when hourly deadlocks exceed baseline. Tie to release markers.",
              "z": "Line chart (deadlocks over time), Bar chart (by database), Single value (deadlocks today).",
              "kfp": "Transient deadlocks during high-concurrency batch processing are common in OLTP; tune when frequency or total wait time clearly exceeds the baseline, not on one-off events.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Error log ingestion, extended events.\n• Ensure the following data sources are available: SQL Server errorlog deadlock graph frequency, PostgreSQL `log_lock_waits`, Oracle ORA-00060.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse database name from deadlock XML if available. Alert when hourly deadlocks exceed baseline. Tie to release markers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"mssql:errorlog\"\n| search deadlock OR \"Deadlock\"\n| bucket _time span=1h\n| stats count as deadlocks by database_name, _time\n| where deadlocks > 5\n```\n\nUnderstanding this SPL\n\n**Deadlock Frequency by Database** — Counts deadlocks per hour/database to detect code regressions after releases. Complements UC-7.1.2 event search with KPIs.\n\nDocumented **Data sources**: SQL Server errorlog deadlock graph frequency, PostgreSQL `log_lock_waits`, Oracle ORA-00060. **App/TA** (typical add-on context): Error log ingestion, extended events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mssql:errorlog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"mssql:errorlog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by database_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where deadlocks > 5` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Deadlock Frequency by Database** — Counts deadlocks per hour/database to detect code regressions after releases. Complements UC-7.1.2 event search with KPIs.\n\nDocumented **Data sources**: SQL Server errorlog deadlock graph frequency, PostgreSQL `log_lock_waits`, Oracle ORA-00060. **App/TA** (typical add-on context): Error log ingestion, extended events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (deadlocks over time), Bar chart (by database), Single value (deadlocks today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count deadlocks and contention in our databases so we can tune hot paths and application order before they turn into long outages of customer-facing work.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.35",
              "n": "Connection Pool Exhaustion (Application vs Database Limit)",
              "c": "critical",
              "f": "intermediate",
              "v": "Joins app pool `activeThreads`/`waiting` with DB `session_count` to distinguish app-side pool starvation from DB `max_connections` hits.",
              "t": "OpenTelemetry, DB Connect",
              "d": "HikariCP metrics, `pg_stat_activity` count, `sys.dm_exec_connections`",
              "q": "index=application sourcetype=\"hikaricp:metrics\"\n| eval pct=round(active_connections/max_connections*100,1)\n| where pct > 90 OR threads_awaiting_connection > 5\n| table host pool_name pct threads_awaiting_connection active_connections max_connections",
              "m": "Ingest both sides; use `transaction` or `join` on host+service. Alert when either side >90%. Dashboard side-by-side.",
              "z": "Gauge (app pool vs DB sessions), Line chart (pct over time), Table (hosts in danger).",
              "kfp": "Connection pool warm-up after restarts, blue-green deploys, or autoscaling can look like a spike until the pool or fleet reaches steady state.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenTelemetry, DB Connect.\n• Ensure the following data sources are available: HikariCP metrics, `pg_stat_activity` count, `sys.dm_exec_connections`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest both sides; use `transaction` or `join` on host+service. Alert when either side >90%. Dashboard side-by-side.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=application sourcetype=\"hikaricp:metrics\"\n| eval pct=round(active_connections/max_connections*100,1)\n| where pct > 90 OR threads_awaiting_connection > 5\n| table host pool_name pct threads_awaiting_connection active_connections max_connections\n```\n\nUnderstanding this SPL\n\n**Connection Pool Exhaustion (Application vs Database Limit)** — Joins app pool `activeThreads`/`waiting` with DB `session_count` to distinguish app-side pool starvation from DB `max_connections` hits.\n\nDocumented **Data sources**: HikariCP metrics, `pg_stat_activity` count, `sys.dm_exec_connections`. **App/TA** (typical add-on context): OpenTelemetry, DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: application; **sourcetype**: hikaricp:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=application, sourcetype=\"hikaricp:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct > 90 OR threads_awaiting_connection > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Connection Pool Exhaustion (Application vs Database Limit)**): table host pool_name pct threads_awaiting_connection active_connections max_connections\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Session_Info by Session_Info.host, Session_Info.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Connection Pool Exhaustion (Application vs Database Limit)** — Joins app pool `activeThreads`/`waiting` with DB `session_count` to distinguish app-side pool starvation from DB `max_connections` hits.\n\nDocumented **Data sources**: HikariCP metrics, `pg_stat_activity` count, `sys.dm_exec_connections`. **App/TA** (typical add-on context): OpenTelemetry, DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Session_Info` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (app pool vs DB sessions), Line chart (pct over time), Table (hosts in danger).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many sessions and pooled connections the fleet uses so we can scale or fix apps before the database hits its connection limits.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Session_Info by Session_Info.host, Session_Info.action | sort - count",
              "e": [
                "db_connect",
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.36",
              "n": "Index Fragmentation Maintenance Priority",
              "c": "medium",
              "f": "beginner",
              "v": "Ranks indexes by fragmentation × page_count to schedule rebuilds during maintenance windows. Extends UC-7.1.9 with prioritization score.",
              "t": "DB Connect",
              "d": "`sys.dm_db_index_physical_stats` (avg_fragmentation_in_percent, page_count)",
              "q": "index=database sourcetype=\"dbconnect:index_stats\"\n| eval priority_score=avg_fragmentation_pct * page_count / 1000000\n| where avg_fragmentation_pct > 30 AND page_count > 1000\n| sort -priority_score\n| head 50",
              "m": "Weekly job. Export top 50 for DBA runbook. Exclude tiny indexes via page_count floor.",
              "z": "Table (index, frag %, pages, score), Bar chart (top priority_score).",
              "kfp": "Post-maintenance reorganization, bulk deletes, and heavy updates temporarily raise fragmentation readouts; compare with the index maintenance calendar.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect.\n• Ensure the following data sources are available: `sys.dm_db_index_physical_stats` (avg_fragmentation_in_percent, page_count).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nWeekly job. Export top 50 for DBA runbook. Exclude tiny indexes via page_count floor.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:index_stats\"\n| eval priority_score=avg_fragmentation_pct * page_count / 1000000\n| where avg_fragmentation_pct > 30 AND page_count > 1000\n| sort -priority_score\n| head 50\n```\n\nUnderstanding this SPL\n\n**Index Fragmentation Maintenance Priority** — Ranks indexes by fragmentation × page_count to schedule rebuilds during maintenance windows. Extends UC-7.1.9 with prioritization score.\n\nDocumented **Data sources**: `sys.dm_db_index_physical_stats` (avg_fragmentation_in_percent, page_count). **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:index_stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:index_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **priority_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_fragmentation_pct > 30 AND page_count > 1000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Index Fragmentation Maintenance Priority** — Ranks indexes by fragmentation × page_count to schedule rebuilds during maintenance windows. Extends UC-7.1.9 with prioritization score.\n\nDocumented **Data sources**: `sys.dm_db_index_physical_stats` (avg_fragmentation_in_percent, page_count). **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Instance_Stats` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (index, frag %, pages, score), Bar chart (top priority_score).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We rank the noisiest fragmented indexes by impact so we can rebuild or reorganize the right objects during maintenance and keep application queries predictable.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Instance_Stats by Instance_Stats.host, Instance_Stats.action | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.37",
              "n": "Temp Tablespace Usage (Oracle TEMP)",
              "c": "high",
              "f": "intermediate",
              "v": "High `TEMP` usage for sorts and hashes causes ORA-1652. Tracks session temp consumption vs temp tablespace limits.",
              "t": "DB Connect",
              "d": "`V$TEMPSEG_USAGE`, `DBA_TEMP_FREE_SPACE`",
              "q": "index=database sourcetype=\"dbconnect:oracle_temp\"\n| stats sum(blocks_used) as used_blocks by tablespace_name, session_addr\n| eventstats sum(used_blocks) as total_used by tablespace_name\n| lookup oracle_temp_space tablespace_name OUTPUT max_blocks\n| where total_used > max_blocks*0.85\n| table tablespace_name total_used max_blocks",
              "m": "Poll `V$TEMPSEG_USAGE` every 5m. Alert at 85% of temp max. Identify top SQL by `sql_id` from same view.",
              "z": "Line chart (temp usage %), Table (sessions using temp), Single value (peak temp GB).",
              "kfp": "Capacity grows during ETL loads, month-end batch processing, or data migrations. Growth from `ANALYZE`, statistics runs, or one-off bulk loads is often expected when it matches the schedule.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect.\n• Ensure the following data sources are available: `V$TEMPSEG_USAGE`, `DBA_TEMP_FREE_SPACE`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `V$TEMPSEG_USAGE` every 5m. Alert at 85% of temp max. Identify top SQL by `sql_id` from same view.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:oracle_temp\"\n| stats sum(blocks_used) as used_blocks by tablespace_name, session_addr\n| eventstats sum(used_blocks) as total_used by tablespace_name\n| lookup oracle_temp_space tablespace_name OUTPUT max_blocks\n| where total_used > max_blocks*0.85\n| table tablespace_name total_used max_blocks\n```\n\nUnderstanding this SPL\n\n**Temp Tablespace Usage (Oracle TEMP)** — High `TEMP` usage for sorts and hashes causes ORA-1652. Tracks session temp consumption vs temp tablespace limits.\n\nDocumented **Data sources**: `V$TEMPSEG_USAGE`, `DBA_TEMP_FREE_SPACE`. **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:oracle_temp. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:oracle_temp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by tablespace_name, session_addr** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by tablespace_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where total_used > max_blocks*0.85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Temp Tablespace Usage (Oracle TEMP)**): table tablespace_name total_used max_blocks\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Lock_Stats by Lock_Stats.host, Lock_Stats.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Temp Tablespace Usage (Oracle TEMP)** — High `TEMP` usage for sorts and hashes causes ORA-1652. Tracks session temp consumption vs temp tablespace limits.\n\nDocumented **Data sources**: `V$TEMPSEG_USAGE`, `DBA_TEMP_FREE_SPACE`. **App/TA** (typical add-on context): DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Lock_Stats` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (temp usage %), Table (sessions using temp), Single value (peak temp GB).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend space use on our data files and tablespaces so we can add capacity or archive data before a database runs out of room during peak load.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Lock_Stats by Lock_Stats.host, Lock_Stats.action | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.38",
              "n": "Query Plan Regression (Runtime vs Baseline)",
              "c": "high",
              "f": "advanced",
              "v": "Compares current average CPU/duration from Query Store or AWR to baselines by `query_id`/`plan_hash`. Tightens UC-7.1.14 with explicit baseline lookup.",
              "t": "DB Connect, Query Store export",
              "d": "`sys.query_store_runtime_stats`, `dba_hist_sqlstat`",
              "q": "index=database sourcetype=\"dbconnect:query_store_runtime\"\n| stats avg(avg_cpu_time) as cur_cpu by query_id, plan_id\n| lookup query_baselines query_id OUTPUT baseline_cpu_ms\n| eval regression_pct=round((cur_cpu-baseline_cpu_ms)/baseline_cpu_ms*100,1)\n| where regression_pct > 40",
              "m": "Refresh baseline lookup weekly from stable period. Alert on regression >40% with new `plan_id`. Consider force plan workflow.",
              "z": "Table (regressed queries), Line chart (baseline vs current), Bar chart (regression %).",
              "kfp": "Maintenance work (ANALYZE, VACUUM, REINDEX, index builds) and ETL or large report jobs may show up as long-running or blocking — compare with the maintenance and batch schedule.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, Query Store export.\n• Ensure the following data sources are available: `sys.query_store_runtime_stats`, `dba_hist_sqlstat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRefresh baseline lookup weekly from stable period. Alert on regression >40% with new `plan_id`. Consider force plan workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:query_store_runtime\"\n| stats avg(avg_cpu_time) as cur_cpu by query_id, plan_id\n| lookup query_baselines query_id OUTPUT baseline_cpu_ms\n| eval regression_pct=round((cur_cpu-baseline_cpu_ms)/baseline_cpu_ms*100,1)\n| where regression_pct > 40\n```\n\nUnderstanding this SPL\n\n**Query Plan Regression (Runtime vs Baseline)** — Compares current average CPU/duration from Query Store or AWR to baselines by `query_id`/`plan_hash`. Tightens UC-7.1.14 with explicit baseline lookup.\n\nDocumented **Data sources**: `sys.query_store_runtime_stats`, `dba_hist_sqlstat`. **App/TA** (typical add-on context): DB Connect, Query Store export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:query_store_runtime. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:query_store_runtime\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by query_id, plan_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **regression_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where regression_pct > 40` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Query Plan Regression (Runtime vs Baseline)** — Compares current average CPU/duration from Query Store or AWR to baselines by `query_id`/`plan_hash`. Tightens UC-7.1.14 with explicit baseline lookup.\n\nDocumented **Data sources**: `sys.query_store_runtime_stats`, `dba_hist_sqlstat`. **App/TA** (typical add-on context): DB Connect, Query Store export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (regressed queries), Line chart (baseline vs current), Bar chart (regression %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Compares current average CPU/duration from Query Store or AWR to baselines by / We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.39",
              "n": "Database Patch Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Compares instance `@@VERSION` / `banner` / Oracle `DBA_REGISTRY_HISTORY` to approved patch levels per environment. Supports security patching SLAs.",
              "t": "DB Connect, inventory scripted input",
              "d": "SQL Server `@@VERSION`, Oracle `opatch`, PostgreSQL `pg_version`",
              "q": "index=database sourcetype=\"dbconnect:instance_version\"\n| lookup approved_db_patch matrix_key OUTPUT approved_build\n| where build != approved_build\n| table host, engine, build, approved_build, last_patch_date",
              "m": "Maintain `approved_db_patch` lookup (engine, major, approved CU/RU). Daily compare. Alert on non-compliant production instances.",
              "z": "Table (non-compliant hosts), Pie chart (compliant %), Single value (drift count).",
              "kfp": "Planned changes, load tests, and vendor maintenance in the data platform can move the same metrics this search uses; we compare to baselines, change records, and on-call context before we treat a hit as a production incident.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DB Connect, inventory scripted input.\n• Ensure the following data sources are available: SQL Server `@@VERSION`, Oracle `opatch`, PostgreSQL `pg_version`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain `approved_db_patch` lookup (engine, major, approved CU/RU). Daily compare. Alert on non-compliant production instances.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"dbconnect:instance_version\"\n| lookup approved_db_patch matrix_key OUTPUT approved_build\n| where build != approved_build\n| table host, engine, build, approved_build, last_patch_date\n```\n\nUnderstanding this SPL\n\n**Database Patch Compliance** — Compares instance `@@VERSION` / `banner` / Oracle `DBA_REGISTRY_HISTORY` to approved patch levels per environment. Supports security patching SLAs.\n\nDocumented **Data sources**: SQL Server `@@VERSION`, Oracle `opatch`, PostgreSQL `pg_version`. **App/TA** (typical add-on context): DB Connect, inventory scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: dbconnect:instance_version. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"dbconnect:instance_version\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where build != approved_build` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Database Patch Compliance**): table host, engine, build, approved_build, last_patch_date\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Database_Instance by Database_Instance.host, Database_Instance.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Database Patch Compliance** — Compares instance `@@VERSION` / `banner` / Oracle `DBA_REGISTRY_HISTORY` to approved patch levels per environment. Supports security patching SLAs.\n\nDocumented **Data sources**: SQL Server `@@VERSION`, Oracle `opatch`, PostgreSQL `pg_version`. **App/TA** (typical add-on context): DB Connect, inventory scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Database_Instance` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant hosts), Pie chart (compliant %), Single value (drift count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow PostgreSQL activity and space trends so we can keep vacuum, replication, and connection paths healthy as load grows.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Database_Instance by Database_Instance.host, Database_Instance.action | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.1.40",
              "n": "Database Audit Log Tampering Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Detects unexpected audit trail disable, audit file deletion, or Unified Audit policy changes that may indicate cover-up activity.",
              "t": "OS audit, database audit, syslog",
              "d": "Oracle `V$OPTION` where parameter='Unified Auditing', `ALTER SYSTEM AUDIT`, `DROP AUDIT POLICY`, SQL Server audit shutdown events",
              "q": "index=db_audit sourcetype=oracle_audit OR sourcetype=mssql:audit\n| search action IN (\"AUDIT DISABLED\",\"AUDIT_POLICY_DROP\",\"AUDIT_TRAIL_OFF\") OR statement=\"*AUDIT*FALSE*\"\n| table _time, db_user, action, object_name, statement\n| sort -_time",
              "m": "Forward database and OS audit to tamper-evident storage. Alert on any audit disable or policy drop outside CAB. Correlate with DBA group membership.",
              "z": "Timeline (audit config changes), Table (privileged actions), Single value (critical audit events 24h).",
              "kfp": "Planned access reviews, recertification, break-glass accounts, and vendor maintenance can emit privilege- or access-change events that match the rule but are already approved; require a change ticket for context.",
              "refs": "[CIM: Databases](https://docs.splunk.com/Documentation/CIM/latest/User/Databases)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OS audit, database audit, syslog.\n• Ensure the following data sources are available: Oracle `V$OPTION` where parameter='Unified Auditing', `ALTER SYSTEM AUDIT`, `DROP AUDIT POLICY`, SQL Server audit shutdown events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward database and OS audit to tamper-evident storage. Alert on any audit disable or policy drop outside CAB. Correlate with DBA group membership.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=db_audit sourcetype=oracle_audit OR sourcetype=mssql:audit\n| search action IN (\"AUDIT DISABLED\",\"AUDIT_POLICY_DROP\",\"AUDIT_TRAIL_OFF\") OR statement=\"*AUDIT*FALSE*\"\n| table _time, db_user, action, object_name, statement\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Database Audit Log Tampering Detection** — Detects unexpected audit trail disable, audit file deletion, or Unified Audit policy changes that may indicate cover-up activity.\n\nDocumented **Data sources**: Oracle `V$OPTION` where parameter='Unified Auditing', `ALTER SYSTEM AUDIT`, `DROP AUDIT POLICY`, SQL Server audit shutdown events. **App/TA** (typical add-on context): OS audit, database audit, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: db_audit; **sourcetype**: oracle_audit, mssql:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=db_audit, sourcetype=oracle_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Database Audit Log Tampering Detection**): table _time, db_user, action, object_name, statement\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Database Audit Log Tampering Detection** — Detects unexpected audit trail disable, audit file deletion, or Unified Audit policy changes that may indicate cover-up activity.\n\nDocumented **Data sources**: Oracle `V$OPTION` where parameter='Unified Auditing', `ALTER SYSTEM AUDIT`, `DROP AUDIT POLICY`, SQL Server audit shutdown events. **App/TA** (typical add-on context): OS audit, database audit, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Databases.Query` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (audit config changes), Table (privileged actions), Single value (critical audit events 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow SQL Server performance and availability signals so the team can catch regressions, AG issues, and tempdb pressure on business-critical systems.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Databases"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Databases.Query by Query.host, Query.action | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of FDA Part 11 §11.10(e) (Audit trails) — Splunk UC-7.1.40: Database Audit Log Tampering Detection.",
                  "ea": "Saved search 'UC-7.1.40' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-21/chapter-I/subchapter-A/part-11"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "09.aa",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of HITRUST 09.aa (Audit logging) — Splunk UC-7.1.40: Database Audit Log Tampering Detection.",
                  "ea": "Saved search 'UC-7.1.40' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Logging.Continuity",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of SOX-ITGC ITGC.Logging.Continuity (Audit trail completeness) — Splunk UC-7.1.40: Database Audit Log Tampering Detection.",
                  "ea": "Saved search 'UC-7.1.40' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                },
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.4",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of SWIFT CSP 6.4 (Logging and monitoring) — Splunk UC-7.1.40: Database Audit Log Tampering Detection.",
                  "ea": "Saved search 'UC-7.1.40' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.swift.com/myswift/customer-security-programme-csp/security-controls"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.2,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 41,
            "none": 0
          }
        },
        {
          "i": "7.5",
          "n": "Search & Analytics Platforms",
          "u": [
            {
              "i": "7.5.1",
              "n": "Elasticsearch Cluster Health (Red / Yellow)",
              "c": "critical",
              "f": "beginner",
              "v": "Yellow or red cluster status means primary/replica shards are not fully allocated; search and indexing can fail or degrade. Catching status changes early limits user impact.",
              "t": "Custom REST scripted input (Elasticsearch `_cluster/health`)",
              "d": "`sourcetype=elasticsearch:cluster_health`",
              "q": "index=database sourcetype=\"elasticsearch:cluster_health\"\n| where status IN (\"yellow\",\"red\")\n| eval severity=if(status=\"red\",3,2)\n| timechart span=5m max(severity) as severity by cluster_name",
              "m": "Poll `GET _cluster/health` every 1–2 minutes and index `status`, `active_primary_shards`, `unassigned_shards`, `number_of_nodes`. Alert immediately on `red` and on sustained `yellow`. Correlate with node loss and disk events.",
              "z": "Single value or status indicator (cluster status), Line chart (status over time), Table (clusters not green).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom REST scripted input (Elasticsearch `_cluster/health`).\n• Ensure the following data sources are available: `sourcetype=elasticsearch:cluster_health`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET _cluster/health` every 1–2 minutes and index `status`, `active_primary_shards`, `unassigned_shards`, `number_of_nodes`. Alert immediately on `red` and on sustained `yellow`. Correlate with node loss and disk events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:cluster_health\"\n| where status IN (\"yellow\",\"red\")\n| eval severity=if(status=\"red\",3,2)\n| timechart span=5m max(severity) as severity by cluster_name\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Cluster Health (Red / Yellow)** — Yellow or red cluster status means primary/replica shards are not fully allocated; search and indexing can fail or degrade. Catching status changes early limits user impact.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:cluster_health`. **App/TA** (typical add-on context): Custom REST scripted input (Elasticsearch `_cluster/health`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:cluster_health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:cluster_health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status IN (\"yellow\",\"red\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by cluster_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value or status indicator (cluster status), Line chart (status over time), Table (clusters not green).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "elasticsearch"
              ],
              "em": [
                "elasticsearch_es"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.2",
              "n": "Elasticsearch Shard Allocation Failures",
              "c": "critical",
              "f": "intermediate",
              "v": "Unassigned or stuck relocating shards leave data unavailable or at risk; allocation explain output points to disk, routing, or settings issues before outages spread.",
              "t": "Custom REST scripted input (`_cluster/allocation/explain`, `_cat/shards`)",
              "d": "`sourcetype=elasticsearch:shard_allocation`",
              "q": "index=database sourcetype=\"elasticsearch:shard_allocation\"\n| where state=\"UNASSIGNED\" OR allocation_decision=\"NO\"\n| stats latest(deciders) as deciders, count by index, shard, prirep\n| sort -count",
              "m": "Ingest `_cat/shards` with `state` filter and, for unassigned primaries, poll `POST _cluster/allocation/explain` on a schedule. Parse `allocate_explanation` and decider names. Alert when any primary shard is unassigned >5 minutes or replica unassigned count exceeds policy.",
              "z": "Table (index, shard, state, reason), Single value (unassigned shard count), Timeline of allocation events.",
              "kfp": "Yellow or relocating shards during rolling restarts, ILM/ISM moves, or snapshot restore; compare with maintenance before treating as an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom REST scripted input (`_cluster/allocation/explain`, `_cat/shards`).\n• Ensure the following data sources are available: `sourcetype=elasticsearch:shard_allocation`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest `_cat/shards` with `state` filter and, for unassigned primaries, poll `POST _cluster/allocation/explain` on a schedule. Parse `allocate_explanation` and decider names. Alert when any primary shard is unassigned >5 minutes or replica unassigned count exceeds policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:shard_allocation\"\n| where state=\"UNASSIGNED\" OR allocation_decision=\"NO\"\n| stats latest(deciders) as deciders, count by index, shard, prirep\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Shard Allocation Failures** — Unassigned or stuck relocating shards leave data unavailable or at risk; allocation explain output points to disk, routing, or settings issues before outages spread.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:shard_allocation`. **App/TA** (typical add-on context): Custom REST scripted input (`_cluster/allocation/explain`, `_cat/shards`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:shard_allocation. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:shard_allocation\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where state=\"UNASSIGNED\" OR allocation_decision=\"NO\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by index, shard, prirep** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (index, shard, state, reason), Single value (unassigned shard count), Timeline of allocation events.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.3",
              "n": "OpenSearch Index Performance",
              "c": "high",
              "f": "intermediate",
              "v": "Slow merges, refresh intervals, and segment counts drive query latency and heap use; tracking per-index stats keeps search SLAs achievable.",
              "t": "Custom scripted input (OpenSearch `_stats`, `_cat/indices`)",
              "d": "`sourcetype=opensearch:index_stats`",
              "q": "index=database sourcetype=\"opensearch:index_stats\"\n| eval merge_ms=primaries.merges.total_time_in_millis\n| eval search_qps=primaries.search.query_total / nullif(uptime_sec,0)\n| where merge_ms > 600000 OR primaries.refresh.total_time_in_millis > 300000\n| table index, merge_ms, primaries.refresh.total_time_in_millis, store.size_in_bytes",
              "m": "Poll `GET /<index>/_stats` or per-index `_stats` every 15 minutes. Extract merges, refresh, indexing, and store size. Compare against baselines; alert when merge or refresh time spikes without matching traffic increase. Review ILM/ISM policies for hot indices.",
              "z": "Line chart (merge and refresh time by index), Table (top indices by merge cost), Bar chart (segment count if extracted).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (OpenSearch `_stats`, `_cat/indices`).\n• Ensure the following data sources are available: `sourcetype=opensearch:index_stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET /<index>/_stats` or per-index `_stats` every 15 minutes. Extract merges, refresh, indexing, and store size. Compare against baselines; alert when merge or refresh time spikes without matching traffic increase. Review ILM/ISM policies for hot indices.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"opensearch:index_stats\"\n| eval merge_ms=primaries.merges.total_time_in_millis\n| eval search_qps=primaries.search.query_total / nullif(uptime_sec,0)\n| where merge_ms > 600000 OR primaries.refresh.total_time_in_millis > 300000\n| table index, merge_ms, primaries.refresh.total_time_in_millis, store.size_in_bytes\n```\n\nUnderstanding this SPL\n\n**OpenSearch Index Performance** — Slow merges, refresh intervals, and segment counts drive query latency and heap use; tracking per-index stats keeps search SLAs achievable.\n\nDocumented **Data sources**: `sourcetype=opensearch:index_stats`. **App/TA** (typical add-on context): Custom scripted input (OpenSearch `_stats`, `_cat/indices`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: opensearch:index_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"opensearch:index_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **merge_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **search_qps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where merge_ms > 600000 OR primaries.refresh.total_time_in_millis > 300000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OpenSearch Index Performance**): table index, merge_ms, primaries.refresh.total_time_in_millis, store.size_in_bytes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (merge and refresh time by index), Table (top indices by merge cost), Bar chart (segment count if extracted).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "elasticsearch"
              ],
              "em": [
                "elasticsearch_opensearch"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.4",
              "n": "OpenSearch Search Latency",
              "c": "high",
              "f": "intermediate",
              "v": "P95/P99 query latency directly affects application UX; separating queue time from query time narrows tuning to thread pools vs. mappings and shards.",
              "t": "Custom scripted input (`_nodes/stats` search, slow log), OpenSearch slow search log",
              "d": "`sourcetype=opensearch:search_latency`, `sourcetype=opensearch:slowlog`",
              "q": "index=database sourcetype=\"opensearch:search_latency\" OR sourcetype=\"opensearch:slowlog\"\n| eval took_ms=coalesce(took_ms, took)\n| where took_ms > 500\n| timechart span=5m perc95(took_ms) as p95_ms, perc99(took_ms) as p99_ms by cluster_name",
              "m": "Ingest node-level search metrics (`primaries.search.query_time_in_millis` / `query_total`) for derived latency, and/or enable slow search logging and forward with a dedicated sourcetype. Baseline p95 per cluster; alert when p95 exceeds threshold for 15+ minutes. Correlate with heap GC and segment merges.",
              "z": "Line chart (p95/p99 search latency), Table (slow queries from slowlog), Histogram of `took_ms`.",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`_nodes/stats` search, slow log), OpenSearch slow search log.\n• Ensure the following data sources are available: `sourcetype=opensearch:search_latency`, `sourcetype=opensearch:slowlog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest node-level search metrics (`primaries.search.query_time_in_millis` / `query_total`) for derived latency, and/or enable slow search logging and forward with a dedicated sourcetype. Baseline p95 per cluster; alert when p95 exceeds threshold for 15+ minutes. Correlate with heap GC and segment merges.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"opensearch:search_latency\" OR sourcetype=\"opensearch:slowlog\"\n| eval took_ms=coalesce(took_ms, took)\n| where took_ms > 500\n| timechart span=5m perc95(took_ms) as p95_ms, perc99(took_ms) as p99_ms by cluster_name\n```\n\nUnderstanding this SPL\n\n**OpenSearch Search Latency** — P95/P99 query latency directly affects application UX; separating queue time from query time narrows tuning to thread pools vs. mappings and shards.\n\nDocumented **Data sources**: `sourcetype=opensearch:search_latency`, `sourcetype=opensearch:slowlog`. **App/TA** (typical add-on context): Custom scripted input (`_nodes/stats` search, slow log), OpenSearch slow search log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: opensearch:search_latency, opensearch:slowlog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"opensearch:search_latency\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **took_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where took_ms > 500` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by cluster_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95/p99 search latency), Table (slow queries from slowlog), Histogram of `took_ms`.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "elasticsearch"
              ],
              "em": [
                "elasticsearch_opensearch"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.5",
              "n": "Elasticsearch Indexing Rate Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "A sudden drop in docs/s indexed can signal pipeline failures, bulk rejections, or cluster overload; sustained spikes may require scaling or throttling.",
              "t": "Custom scripted input (`_nodes/stats` indexing)",
              "d": "`sourcetype=elasticsearch:indexing_stats`",
              "q": "index=database sourcetype=\"elasticsearch:indexing_stats\"\n| timechart span=1m per_second(indexing.index_total) as index_rate by node_name",
              "m": "Poll `GET _nodes/stats` every minute; extract `indices.indexing.index_total`, `index_time_in_millis`, and `index_current`. Store prior sample to compute rate of change. Set dynamic or static baselines; alert on drops below expected ingest or on `indexing` rejections from bulk thread pool.",
              "z": "Line chart (documents indexed per second by node), Single value (cluster aggregate rate), Area chart (indexing time vs. count).",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`_nodes/stats` indexing).\n• Ensure the following data sources are available: `sourcetype=elasticsearch:indexing_stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET _nodes/stats` every minute; extract `indices.indexing.index_total`, `index_time_in_millis`, and `index_current`. Store prior sample to compute rate of change. Set dynamic or static baselines; alert on drops below expected ingest or on `indexing` rejections from bulk thread pool.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:indexing_stats\"\n| timechart span=1m per_second(indexing.index_total) as index_rate by node_name\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Indexing Rate Monitoring** — A sudden drop in docs/s indexed can signal pipeline failures, bulk rejections, or cluster overload; sustained spikes may require scaling or throttling.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:indexing_stats`. **App/TA** (typical add-on context): Custom scripted input (`_nodes/stats` indexing). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:indexing_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:indexing_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1m** buckets with a separate series **by node_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (documents indexed per second by node), Single value (cluster aggregate rate), Area chart (indexing time vs. count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.6",
              "n": "Solr Query Cache Hit Ratio",
              "c": "medium",
              "f": "intermediate",
              "v": "Low filter/query cache hit rates increase CPU and latency; tuning caches and queries improves headroom without adding nodes.",
              "t": "Custom scripted input (Solr `metrics` API), Solr log ingestion",
              "d": "`sourcetype=solr:metrics`",
              "q": "index=database sourcetype=\"solr:metrics\"\n| where like(metric_path,\"%queryResultCache%\") OR like(metric_path,\"%filterCache%\")\n| eval hit_ratio=lookup_hits / nullif(lookup_hits+lookup_misses,0)\n| where hit_ratio < 0.7\n| timechart span=15m avg(hit_ratio) as cache_hit_ratio by core, metric_path",
              "m": "Poll `GET /solr/admin/metrics` (or per-core metrics) every 5 minutes. Map `QUERY.queryResultCache` and `FILTER.filterCache` hits/misses. Compute hit ratio; alert below team-defined threshold (e.g., 0.7). Correlate with query pattern changes and deployments.",
              "z": "Gauge (cache hit ratio per core), Line chart (hit ratio trend), Table (cores below threshold).",
              "kfp": "Cold caches right after restarts, failover, or one-off full scans can lower hit ratios until the working set is warm again — watch trends, not a single low sample.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (Solr `metrics` API), Solr log ingestion.\n• Ensure the following data sources are available: `sourcetype=solr:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET /solr/admin/metrics` (or per-core metrics) every 5 minutes. Map `QUERY.queryResultCache` and `FILTER.filterCache` hits/misses. Compute hit ratio; alert below team-defined threshold (e.g., 0.7). Correlate with query pattern changes and deployments.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"solr:metrics\"\n| where like(metric_path,\"%queryResultCache%\") OR like(metric_path,\"%filterCache%\")\n| eval hit_ratio=lookup_hits / nullif(lookup_hits+lookup_misses,0)\n| where hit_ratio < 0.7\n| timechart span=15m avg(hit_ratio) as cache_hit_ratio by core, metric_path\n```\n\nUnderstanding this SPL\n\n**Solr Query Cache Hit Ratio** — Low filter/query cache hit rates increase CPU and latency; tuning caches and queries improves headroom without adding nodes.\n\nDocumented **Data sources**: `sourcetype=solr:metrics`. **App/TA** (typical add-on context): Custom scripted input (Solr `metrics` API), Solr log ingestion. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: solr:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"solr:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where like(metric_path,\"%queryResultCache%\") OR like(metric_path,\"%filterCache%\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **hit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hit_ratio < 0.7` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by core, metric_path** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (cache hit ratio per core), Line chart (hit ratio trend), Table (cores below threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow buffer and cache hit rates so we can see when memory is too small for the working set and query latency starts to depend on disk more than it should.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.7",
              "n": "Solr Replication Lag",
              "c": "high",
              "f": "intermediate",
              "v": "Followers lagging behind leaders serve stale results and extend recovery time; catching replication gaps protects read consistency and failover readiness.",
              "t": "Custom scripted input (Solr Cloud `CLUSTERSTATUS`, replica stats)",
              "d": "`sourcetype=solr:replication`",
              "q": "index=database sourcetype=\"solr:replication\"\n| eval lag_bytes=leader_version - replica_version\n| where lag_bytes > 1048576 OR index_version_lag > 100\n| stats max(lag_bytes) as max_lag, max(replication_time_ms) as max_rep_ms by collection, shard, replica\n| sort -max_lag",
              "m": "Ingest Solr Cloud replica state (version, generation, replication timing) from admin API or `REPLICATION` metrics. For standalone Solr, use master/slave `fetch` lag fields. Alert when replica index version lags leader beyond SLA (bytes or generations). Investigate network, disk, and TLog backlog.",
              "z": "Line chart (replication lag over time), Table (replicas over SLA), Single value (max lag per collection).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (Solr Cloud `CLUSTERSTATUS`, replica stats).\n• Ensure the following data sources are available: `sourcetype=solr:replication`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Solr Cloud replica state (version, generation, replication timing) from admin API or `REPLICATION` metrics. For standalone Solr, use master/slave `fetch` lag fields. Alert when replica index version lags leader beyond SLA (bytes or generations). Investigate network, disk, and TLog backlog.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"solr:replication\"\n| eval lag_bytes=leader_version - replica_version\n| where lag_bytes > 1048576 OR index_version_lag > 100\n| stats max(lag_bytes) as max_lag, max(replication_time_ms) as max_rep_ms by collection, shard, replica\n| sort -max_lag\n```\n\nUnderstanding this SPL\n\n**Solr Replication Lag** — Followers lagging behind leaders serve stale results and extend recovery time; catching replication gaps protects read consistency and failover readiness.\n\nDocumented **Data sources**: `sourcetype=solr:replication`. **App/TA** (typical add-on context): Custom scripted input (Solr Cloud `CLUSTERSTATUS`, replica stats). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: solr:replication. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"solr:replication\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **lag_bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lag_bytes > 1048576 OR index_version_lag > 100` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by collection, shard, replica** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (replication lag over time), Table (replicas over SLA), Single value (max lag per collection).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.8",
              "n": "Elasticsearch Disk Watermark Alerts",
              "c": "critical",
              "f": "beginner",
              "v": "Elasticsearch blocks shard allocation when flood-stage watermarks are hit; proactive disk alerts prevent read-only indices and cluster yellow/red states.",
              "t": "Custom scripted input (`_cat/allocation`, node stats fs)",
              "d": "`sourcetype=elasticsearch:disk_watermark`",
              "q": "index=database sourcetype=\"elasticsearch:disk_watermark\"\n| eval used_pct=round(disk_used_bytes/disk_total_bytes*100,1)\n| where used_pct >= watermark_low_pct OR blocks.has_read_only_allow_delete==\"true\"\n| timechart span=5m max(used_pct) as used_pct by node_name",
              "m": "Poll `GET _cat/allocation?bytes=b&h=node,disk.avail,disk.total` or `_nodes/stats/fs` for each data node. Compare `disk.used_percent` to `cluster.routing.allocation.disk.watermark` settings. Alert at low/high/flood thresholds before Elasticsearch enforces blocks. Trigger capacity or ILM actions when trending toward limits.",
              "z": "Gauge (disk % per node), Table (nodes near watermark), Line chart (free space trend).",
              "kfp": "Yellow or relocating shards during rolling restarts, ILM/ISM moves, or snapshot restore; compare with maintenance before treating as an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`_cat/allocation`, node stats fs).\n• Ensure the following data sources are available: `sourcetype=elasticsearch:disk_watermark`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET _cat/allocation?bytes=b&h=node,disk.avail,disk.total` or `_nodes/stats/fs` for each data node. Compare `disk.used_percent` to `cluster.routing.allocation.disk.watermark` settings. Alert at low/high/flood thresholds before Elasticsearch enforces blocks. Trigger capacity or ILM actions when trending toward limits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:disk_watermark\"\n| eval used_pct=round(disk_used_bytes/disk_total_bytes*100,1)\n| where used_pct >= watermark_low_pct OR blocks.has_read_only_allow_delete==\"true\"\n| timechart span=5m max(used_pct) as used_pct by node_name\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Disk Watermark Alerts** — Elasticsearch blocks shard allocation when flood-stage watermarks are hit; proactive disk alerts prevent read-only indices and cluster yellow/red states.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:disk_watermark`. **App/TA** (typical add-on context): Custom scripted input (`_cat/allocation`, node stats fs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:disk_watermark. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:disk_watermark\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct >= watermark_low_pct OR blocks.has_read_only_allow_delete==\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by node_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (disk % per node), Table (nodes near watermark), Line chart (free space trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.9",
              "n": "Elasticsearch JVM Heap Pressure",
              "c": "critical",
              "f": "intermediate",
              "v": "High heap usage and frequent GC pause search and indexing and can trigger circuit breakers; JVM trends predict node instability before restarts.",
              "t": "Custom scripted input (`_nodes/stats/jvm`)",
              "d": "`sourcetype=elasticsearch:jvm`",
              "q": "index=database sourcetype=\"elasticsearch:jvm\"\n| eval heap_used_pct=round(mem.heap_used_in_bytes/mem.heap_max_in_bytes*100,1)\n| where heap_used_pct > 85 OR gc.collectors.old.collection_time_in_millis > 30000\n| timechart span=5m avg(heap_used_pct) as heap_pct, max(gc.collectors.old.collection_time_in_millis) as old_gc_ms by node_name",
              "m": "Poll JVM stats every 1–2 minutes. Track `heap_used_percent`, young/old GC collection time and count. Alert when heap consistently >85% or old GC time spikes. Correlate with fielddata, merges, and heap dumps policy.",
              "z": "Line chart (heap % and GC time), Area chart (heap used vs. max), Table (nodes over threshold).",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`_nodes/stats/jvm`).\n• Ensure the following data sources are available: `sourcetype=elasticsearch:jvm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll JVM stats every 1–2 minutes. Track `heap_used_percent`, young/old GC collection time and count. Alert when heap consistently >85% or old GC time spikes. Correlate with fielddata, merges, and heap dumps policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:jvm\"\n| eval heap_used_pct=round(mem.heap_used_in_bytes/mem.heap_max_in_bytes*100,1)\n| where heap_used_pct > 85 OR gc.collectors.old.collection_time_in_millis > 30000\n| timechart span=5m avg(heap_used_pct) as heap_pct, max(gc.collectors.old.collection_time_in_millis) as old_gc_ms by node_name\n```\n\nUnderstanding this SPL\n\n**Elasticsearch JVM Heap Pressure** — High heap usage and frequent GC pause search and indexing and can trigger circuit breakers; JVM trends predict node instability before restarts.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:jvm`. **App/TA** (typical add-on context): Custom scripted input (`_nodes/stats/jvm`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:jvm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:jvm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **heap_used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where heap_used_pct > 85 OR gc.collectors.old.collection_time_in_millis > 30000` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by node_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (heap % and GC time), Area chart (heap used vs. max), Table (nodes over threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.10",
              "n": "OpenSearch Snapshot / Backup Status",
              "c": "critical",
              "f": "intermediate",
              "v": "Failed or missing snapshots break restore and compliance RPO; monitoring repository and snapshot completion protects against silent backup gaps.",
              "t": "Custom scripted input (`_snapshot/_status`, `_cat/snapshots`)",
              "d": "`sourcetype=opensearch:snapshot`",
              "q": "index=database sourcetype=\"opensearch:snapshot\"\n| eval end_epoch=if(isnotnull(end_time), end_time, _time)\n| eval stale=if(state=\"SUCCESS\" AND end_epoch < relative_time(now(),\"-25h\"),1,0)\n| where state IN (\"FAILED\",\"PARTIAL\") OR stale=1\n| table repository, snapshot, state, end_epoch, stale",
              "m": "Ingest snapshot completion events from `_snapshot/<repo>/_all` or SLM/ISM policy history. Alert on `FAILED` or `PARTIAL` snapshots. Verify last successful snapshot per policy is within SLA (e.g., 24h). Monitor repository connectivity and `read_only` state.",
              "z": "Table (repository, last snapshot, state), Timeline (snapshot jobs), Single value (hours since last success).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`_snapshot/_status`, `_cat/snapshots`).\n• Ensure the following data sources are available: `sourcetype=opensearch:snapshot`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest snapshot completion events from `_snapshot/<repo>/_all` or SLM/ISM policy history. Alert on `FAILED` or `PARTIAL` snapshots. Verify last successful snapshot per policy is within SLA (e.g., 24h). Monitor repository connectivity and `read_only` state.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"opensearch:snapshot\"\n| eval end_epoch=if(isnotnull(end_time), end_time, _time)\n| eval stale=if(state=\"SUCCESS\" AND end_epoch < relative_time(now(),\"-25h\"),1,0)\n| where state IN (\"FAILED\",\"PARTIAL\") OR stale=1\n| table repository, snapshot, state, end_epoch, stale\n```\n\nUnderstanding this SPL\n\n**OpenSearch Snapshot / Backup Status** — Failed or missing snapshots break restore and compliance RPO; monitoring repository and snapshot completion protects against silent backup gaps.\n\nDocumented **Data sources**: `sourcetype=opensearch:snapshot`. **App/TA** (typical add-on context): Custom scripted input (`_snapshot/_status`, `_cat/snapshots`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: opensearch:snapshot. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"opensearch:snapshot\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **end_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **stale** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where state IN (\"FAILED\",\"PARTIAL\") OR stale=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OpenSearch Snapshot / Backup Status**): table repository, snapshot, state, end_epoch, stale\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (repository, last snapshot, state), Timeline (snapshot jobs), Single value (hours since last success).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check backup and snapshot success and growth so we can trust restores and long-term storage plans for regulation and for real incidents.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.11",
              "n": "Elasticsearch Circuit Breaker Trips",
              "c": "high",
              "f": "advanced",
              "v": "Circuit breaker exceptions stop requests to protect the cluster; repeated trips indicate oversized aggregations, mapping issues, or undersized heap.",
              "t": "Elasticsearch slow logs / server logs forwarded to Splunk, JMX or `_nodes/stats` breaker fields",
              "d": "`sourcetype=elasticsearch:server`, `sourcetype=elasticsearch:circuit_breaker`",
              "q": "index=database (sourcetype=\"elasticsearch:server\" OR sourcetype=\"elasticsearch:circuit_breaker\")\n| search \"CircuitBreakingException\" OR breaker_tripped=1\n| stats count by breaker_name, node_name, index\n| where count > 0\n| sort -count",
              "m": "Forward Elasticsearch logs with circuit breaker messages, or poll `_nodes/stats/breaker` and alert when `tripped` is true or estimated size exceeds limit. Group by `breaker_name` (parent, fielddata, request). Tie alerts to offending queries from slow logs.",
              "z": "Bar chart (trips by breaker type), Table (node, breaker, count), Line chart (breaker estimated size vs. limit).",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Elasticsearch slow logs / server logs forwarded to Splunk, JMX or `_nodes/stats` breaker fields.\n• Ensure the following data sources are available: `sourcetype=elasticsearch:server`, `sourcetype=elasticsearch:circuit_breaker`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Elasticsearch logs with circuit breaker messages, or poll `_nodes/stats/breaker` and alert when `tripped` is true or estimated size exceeds limit. Group by `breaker_name` (parent, fielddata, request). Tie alerts to offending queries from slow logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database (sourcetype=\"elasticsearch:server\" OR sourcetype=\"elasticsearch:circuit_breaker\")\n| search \"CircuitBreakingException\" OR breaker_tripped=1\n| stats count by breaker_name, node_name, index\n| where count > 0\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Circuit Breaker Trips** — Circuit breaker exceptions stop requests to protect the cluster; repeated trips indicate oversized aggregations, mapping issues, or undersized heap.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:server`, `sourcetype=elasticsearch:circuit_breaker`. **App/TA** (typical add-on context): Elasticsearch slow logs / server logs forwarded to Splunk, JMX or `_nodes/stats` breaker fields. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:server, elasticsearch:circuit_breaker. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:server\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by breaker_name, node_name, index** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (trips by breaker type), Table (node, breaker, count), Line chart (breaker estimated size vs. limit).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "elasticsearch"
              ],
              "em": [
                "elasticsearch_es"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.12",
              "n": "Elasticsearch Thread Pool Rejections",
              "c": "critical",
              "f": "intermediate",
              "v": "Thread pool rejections (HTTP 429) mean the cluster cannot keep up with search or indexing load. Sustained rejections cause data loss on ingest and timeouts on search.",
              "t": "Custom REST scripted input (`_nodes/stats/thread_pool`)",
              "d": "`sourcetype=elasticsearch:thread_pool`",
              "q": "index=database sourcetype=\"elasticsearch:thread_pool\"\n| eval search_rejected_delta=search.rejected-prev_search_rejected, write_rejected_delta=write.rejected-prev_write_rejected\n| where search_rejected_delta > 0 OR write_rejected_delta > 0\n| timechart span=5m sum(search_rejected_delta) as search_rejections, sum(write_rejected_delta) as write_rejections by node_name",
              "m": "Poll `GET _nodes/stats/thread_pool/search,write,get` every minute. Store cumulative `rejected` counters and compute deltas between samples. Alert when any node shows rejections in a 5-minute window. Correlate with JVM heap and CPU to determine root cause (undersized cluster vs. expensive queries vs. bulk indexing spikes). Do not increase queue sizes as a fix — address the underlying load.",
              "z": "Line chart (rejections per pool over time), Bar chart (rejections by node), Single value (total rejections last hour).",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom REST scripted input (`_nodes/stats/thread_pool`).\n• Ensure the following data sources are available: `sourcetype=elasticsearch:thread_pool`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET _nodes/stats/thread_pool/search,write,get` every minute. Store cumulative `rejected` counters and compute deltas between samples. Alert when any node shows rejections in a 5-minute window. Correlate with JVM heap and CPU to determine root cause (undersized cluster vs. expensive queries vs. bulk indexing spikes). Do not increase queue sizes as a fix — address the underlying load.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:thread_pool\"\n| eval search_rejected_delta=search.rejected-prev_search_rejected, write_rejected_delta=write.rejected-prev_write_rejected\n| where search_rejected_delta > 0 OR write_rejected_delta > 0\n| timechart span=5m sum(search_rejected_delta) as search_rejections, sum(write_rejected_delta) as write_rejections by node_name\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Thread Pool Rejections** — Thread pool rejections (HTTP 429) mean the cluster cannot keep up with search or indexing load. Sustained rejections cause data loss on ingest and timeouts on search.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:thread_pool`. **App/TA** (typical add-on context): Custom REST scripted input (`_nodes/stats/thread_pool`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:thread_pool. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:thread_pool\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **search_rejected_delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where search_rejected_delta > 0 OR write_rejected_delta > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by node_name** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Elasticsearch Thread Pool Rejections** — Thread pool rejections (HTTP 429) mean the cluster cannot keep up with search or indexing load. Sustained rejections cause data loss on ingest and timeouts on search.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:thread_pool`. **App/TA** (typical add-on context): Custom REST scripted input (`_nodes/stats/thread_pool`). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (rejections per pool over time), Bar chart (rejections by node), Single value (total rejections last hour).",
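              "script": "#!/usr/bin/env python3\n# Hypothetical scripted-input sketch (not a published TA): polls thread-pool stats\n# and carries prev_* counters from the previous run so the documented SPL can\n# compute rejection deltas. ES_URL and the state-file path are assumptions.\nimport json, os, urllib.request\n\nES_URL = os.environ.get('ES_URL', 'http://localhost:9200')\nSTATE = os.environ.get('STATE_FILE', '/var/tmp/es_threadpool_state.json')\n\nwith urllib.request.urlopen(ES_URL + '/_nodes/stats/thread_pool/search,write,get', timeout=30) as r:\n    nodes = json.load(r).get('nodes', {})\n\ntry:\n    with open(STATE) as f:\n        prev = json.load(f)\nexcept (IOError, ValueError):\n    prev = {}\n\ncur = {}\nfor node in nodes.values():\n    name = node.get('name')\n    pools = node.get('thread_pool', {})\n    rejected = {p: pools.get(p, {}).get('rejected', 0) for p in ('search', 'write')}\n    cur[name] = rejected\n    last = prev.get(name, rejected)  # first run: prev == current, delta 0\n    print(json.dumps({\n        'node_name': name,\n        'search': {'rejected': rejected['search']},\n        'write': {'rejected': rejected['write']},\n        'prev_search_rejected': last.get('search', 0),\n        'prev_write_rejected': last.get('write', 0),\n    }))\n\nwith open(STATE, 'w') as f:\n    json.dump(cur, f)",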
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t sum(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=5m | sort - agg_value",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.13",
              "n": "Elasticsearch Search Latency and Slow Queries",
              "c": "high",
              "f": "intermediate",
              "v": "Search latency trending detects degradation before users notice. Slow log analysis identifies expensive queries for optimization.",
              "t": "Custom REST scripted input (`_nodes/stats`), Elasticsearch slow logs forwarded to Splunk",
              "d": "`sourcetype=elasticsearch:search_stats`, `sourcetype=elasticsearch:slow_log`",
              "q": "index=database sourcetype=\"elasticsearch:search_stats\"\n| eval query_latency_ms=search.query_time_in_millis/search.query_total\n| timechart span=5m avg(query_latency_ms) as avg_latency_ms, max(query_latency_ms) as p100_latency_ms by node_name\n| where avg_latency_ms > 500",
              "m": "Poll `GET _nodes/stats/indices/search` to compute per-node average query latency from cumulative counters. Enable slow logs (`index.search.slowlog.threshold.query.warn: 5s`) and forward to Splunk. Correlate slow queries with specific indices and query patterns. Alert on sustained average latency above baseline or frequent slow log entries.",
              "z": "Line chart (query latency p50/p95/p100), Table (slow queries by index), Single value (current avg latency).",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom REST scripted input (`_nodes/stats`), Elasticsearch slow logs forwarded to Splunk.\n• Ensure the following data sources are available: `sourcetype=elasticsearch:search_stats`, `sourcetype=elasticsearch:slow_log`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET _nodes/stats/indices/search` to compute per-node average query latency from cumulative counters. Enable slow logs (`index.search.slowlog.threshold.query.warn: 5s`) and forward to Splunk. Correlate slow queries with specific indices and query patterns. Alert on sustained average latency above baseline or frequent slow log entries.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:search_stats\"\n| eval query_latency_ms=search.query_time_in_millis/search.query_total\n| timechart span=5m avg(query_latency_ms) as avg_latency_ms, max(query_latency_ms) as p100_latency_ms by node_name\n| where avg_latency_ms > 500\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Search Latency and Slow Queries** — Search latency trending detects degradation before users notice. Slow log analysis identifies expensive queries for optimization.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:search_stats`, `sourcetype=elasticsearch:slow_log`. **App/TA** (typical add-on context): Custom REST scripted input (`_nodes/stats`), Elasticsearch slow logs forwarded to Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:search_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:search_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **query_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by node_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_latency_ms > 500` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Elasticsearch Search Latency and Slow Queries** — Search latency trending detects degradation before users notice. Slow log analysis identifies expensive queries for optimization.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:search_stats`, `sourcetype=elasticsearch:slow_log`. **App/TA** (typical add-on context): Custom REST scripted input (`_nodes/stats`), Elasticsearch slow logs forwarded to Splunk. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (query latency p50/p95/p100), Table (slow queries by index), Single value (current avg latency).",
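              "script": "#!/usr/bin/env python3\n# Hypothetical scripted-input sketch (not a published TA): polls per-node search\n# stats; the nested 'search' object surfaces in Splunk as search.query_total and\n# search.query_time_in_millis, which the documented SPL divides into avg latency.\n# ES_URL and auth are assumptions.\nimport json, os, urllib.request\n\nES_URL = os.environ.get('ES_URL', 'http://localhost:9200')\n\nwith urllib.request.urlopen(ES_URL + '/_nodes/stats/indices/search', timeout=30) as r:\n    nodes = json.load(r).get('nodes', {})\n\nfor node in nodes.values():\n    s = node.get('indices', {}).get('search', {})\n    print(json.dumps({\n        'node_name': node.get('name'),\n        'search': {\n            'query_total': s.get('query_total', 0),\n            'query_time_in_millis': s.get('query_time_in_millis', 0),\n        },\n    }))",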
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host span=5m | sort - count",
              "e": [
                "elasticsearch"
              ],
              "em": [
                "elasticsearch_es"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.14",
              "n": "Elasticsearch ILM Policy Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Failed ILM transitions leave indices in the wrong lifecycle phase — hot data stays on expensive storage, old data never deletes, rollover stops creating new indices. Silent failures accumulate until disk fills.",
              "t": "Custom REST scripted input (`_ilm/explain`)",
              "d": "`sourcetype=elasticsearch:ilm_status`",
              "q": "index=database sourcetype=\"elasticsearch:ilm_status\"\n| where step=\"ERROR\" OR action_time_millis > 3600000\n| stats count as error_count, latest(failed_step) as failed_step, latest(step_info) as reason by index, policy\n| sort -error_count",
              "m": "Poll `GET */_ilm/explain` periodically and extract indices where `step` equals `ERROR`. Capture the `failed_step`, `step_info`, and `phase_time` for root cause analysis. Alert on any index stuck in ERROR. Also monitor indices that have been in the same phase longer than expected (e.g., hot phase > 30 days when policy says 7 days).",
              "z": "Table (indices in error with reason), Single value (error count), Bar chart (errors by policy).",
              "kfp": "Yellow or relocating shards during rolling restarts, ILM/ISM moves, or snapshot restore; compare with maintenance before treating as an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom REST scripted input (`_ilm/explain`).\n• Ensure the following data sources are available: `sourcetype=elasticsearch:ilm_status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET */_ilm/explain` periodically and extract indices where `step` equals `ERROR`. Capture the `failed_step`, `step_info`, and `phase_time` for root cause analysis. Alert on any index stuck in ERROR. Also monitor indices that have been in the same phase longer than expected (e.g., hot phase > 30 days when policy says 7 days).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:ilm_status\"\n| where step=\"ERROR\" OR action_time_millis > 3600000\n| stats count as error_count, latest(failed_step) as failed_step, latest(step_info) as reason by index, policy\n| sort -error_count\n```\n\nUnderstanding this SPL\n\n**Elasticsearch ILM Policy Failures** — Failed ILM transitions leave indices in the wrong lifecycle phase — hot data stays on expensive storage, old data never deletes, rollover stops creating new indices. Silent failures accumulate until disk fills.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:ilm_status`. **App/TA** (typical add-on context): Custom REST scripted input (`_ilm/explain`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:ilm_status. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:ilm_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where step=\"ERROR\" OR action_time_millis > 3600000` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by index, policy** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (indices in error with reason), Single value (error count), Bar chart (errors by policy).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.15",
              "n": "Elasticsearch Snapshot Failures",
              "c": "critical",
              "f": "beginner",
              "v": "Failed snapshots mean no viable backup for disaster recovery. Partial snapshots may leave indices unrecoverable. Monitoring ensures RPO commitments are met.",
              "t": "Custom REST scripted input (`_snapshot`)",
              "d": "`sourcetype=elasticsearch:snapshot_status`",
              "q": "index=database sourcetype=\"elasticsearch:snapshot_status\"\n| where state IN (\"FAILED\",\"PARTIAL\",\"INCOMPATIBLE\")\n| stats count by snapshot, repository, state, reason\n| sort -count",
              "m": "Poll `GET _snapshot/_all/_all` or `GET _snapshot/<repo>/_current` to track snapshot state. Alert on any snapshot with state FAILED or PARTIAL. Also monitor time since last successful snapshot — alert when it exceeds RPO threshold (e.g., 24 hours). Check `_snapshot/<repo>/_status` for in-progress snapshot progress.",
              "z": "Table (recent snapshots with state), Single value (hours since last successful), Line chart (snapshot duration trend).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom REST scripted input (`_snapshot`).\n• Ensure the following data sources are available: `sourcetype=elasticsearch:snapshot_status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET _snapshot/_all/_all` or `GET _snapshot/<repo>/_current` to track snapshot state. Alert on any snapshot with state FAILED or PARTIAL. Also monitor time since last successful snapshot — alert when it exceeds RPO threshold (e.g., 24 hours). Check `_snapshot/<repo>/_status` for in-progress snapshot progress.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:snapshot_status\"\n| where state IN (\"FAILED\",\"PARTIAL\",\"INCOMPATIBLE\")\n| stats count by snapshot, repository, state, reason\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Snapshot Failures** — Failed snapshots mean no viable backup for disaster recovery. Partial snapshots may leave indices unrecoverable. Monitoring ensures RPO commitments are met.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:snapshot_status`. **App/TA** (typical add-on context): Custom REST scripted input (`_snapshot`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:snapshot_status. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:snapshot_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where state IN (\"FAILED\",\"PARTIAL\",\"INCOMPATIBLE\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by snapshot, repository, state, reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recent snapshots with state), Single value (hours since last successful), Line chart (snapshot duration trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check backup and snapshot success and growth so we can trust restores and long-term storage plans for regulation and for real incidents.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.16",
              "n": "Elasticsearch Cross-Cluster Replication Lag",
              "c": "critical",
              "f": "advanced",
              "v": "CCR follower lag directly impacts disaster recovery readiness. Excessive lag means a failover would lose recent data. Monitoring ensures replication SLAs are met.",
              "t": "Custom REST scripted input (`_ccr/stats`)",
              "d": "`sourcetype=elasticsearch:ccr_stats`",
              "q": "index=database sourcetype=\"elasticsearch:ccr_stats\"\n| eval lag_seconds=operations_behind/max(1, operations_per_second)\n| where lag_seconds > 60 OR fatal_exception IS NOT NULL\n| timechart span=5m max(lag_seconds) as max_lag_s, max(operations_behind) as ops_behind by follower_index",
              "m": "Poll `GET /_ccr/stats` to extract per-follower-index `operations_written`, `operations_read`, and `time_since_last_read`. Calculate replication lag as operations behind the leader. Alert when lag exceeds threshold (e.g., 60 seconds) or when `fatal_exception` is present. Monitor `read_exceptions` for transient network issues between clusters.",
              "z": "Line chart (lag per follower index), Table (follower status), Single value (max lag across all followers).",
              "kfp": "Planned failovers, network maintenance, or heavy bulk replication can extend lag for a time without an outage; align the alert with the DR runbook and change window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom REST scripted input (`_ccr/stats`).\n• Ensure the following data sources are available: `sourcetype=elasticsearch:ccr_stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET /_ccr/stats` to extract per-follower-index `operations_written`, `operations_read`, and `time_since_last_read`. Calculate replication lag as operations behind the leader. Alert when lag exceeds threshold (e.g., 60 seconds) or when `fatal_exception` is present. Monitor `read_exceptions` for transient network issues between clusters.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:ccr_stats\"\n| eval lag_seconds=operations_behind/max(1, operations_per_second)\n| where lag_seconds > 60 OR fatal_exception IS NOT NULL\n| timechart span=5m max(lag_seconds) as max_lag_s, max(operations_behind) as ops_behind by follower_index\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Cross-Cluster Replication Lag** — CCR follower lag directly impacts disaster recovery readiness. Excessive lag means a failover would lose recent data. Monitoring ensures replication SLAs are met.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:ccr_stats`. **App/TA** (typical add-on context): Custom REST scripted input (`_ccr/stats`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:ccr_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:ccr_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **lag_seconds** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lag_seconds > 60 OR fatal_exception IS NOT NULL` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by follower_index** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (lag per follower index), Table (follower status), Single value (max lag across all followers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication and apply lag so we can meet recovery and read-your-writes expectations when a region or network path is under stress.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.17",
              "n": "Elasticsearch Pending Cluster Tasks",
              "c": "high",
              "f": "beginner",
              "v": "A growing backlog of pending cluster tasks indicates the master node cannot process cluster state updates fast enough. This delays shard allocation, mapping updates, and index creation.",
              "t": "Custom REST scripted input (`_cluster/pending_tasks`)",
              "d": "`sourcetype=elasticsearch:pending_tasks`",
              "q": "index=database sourcetype=\"elasticsearch:pending_tasks\"\n| stats max(insert_order) as queue_depth, max(time_in_queue_millis) as max_wait_ms\n| where queue_depth > 5 OR max_wait_ms > 30000",
              "m": "Poll `GET _cluster/pending_tasks` every minute. Track the number of tasks and the `time_in_queue_millis` for the oldest task. Alert when queue depth stays above 5 for multiple consecutive samples or any task waits longer than 30 seconds. Common causes include frequent mapping changes, too many small indices, or an overloaded master node.",
              "z": "Line chart (pending task count), Single value (current queue depth), Table (pending tasks with wait time).",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom REST scripted input (`_cluster/pending_tasks`).\n• Ensure the following data sources are available: `sourcetype=elasticsearch:pending_tasks`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET _cluster/pending_tasks` every minute. Track the number of tasks and the `time_in_queue_millis` for the oldest task. Alert when queue depth stays above 5 for multiple consecutive samples or any task waits longer than 30 seconds. Common causes include frequent mapping changes, too many small indices, or an overloaded master node.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:pending_tasks\"\n| stats max(insert_order) as queue_depth, max(time_in_queue_millis) as max_wait_ms\n| where queue_depth > 5 OR max_wait_ms > 30000\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Pending Cluster Tasks** — A growing backlog of pending cluster tasks indicates the master node cannot process cluster state updates fast enough. This delays shard allocation, mapping updates, and index creation.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:pending_tasks`. **App/TA** (typical add-on context): Custom REST scripted input (`_cluster/pending_tasks`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:pending_tasks. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:pending_tasks\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where queue_depth > 5 OR max_wait_ms > 30000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (pending task count), Single value (current queue depth), Table (pending tasks with wait time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.18",
              "n": "Elasticsearch Fielddata and Cache Evictions",
              "c": "medium",
              "f": "intermediate",
              "v": "Fielddata evictions force expensive re-computation of in-memory data structures, causing search latency spikes. Query cache evictions reduce the benefit of repeated queries. Tracking eviction rates guides memory tuning.",
              "t": "Custom REST scripted input (`_nodes/stats/indices/fielddata,query_cache,request_cache`)",
              "d": "`sourcetype=elasticsearch:cache_stats`",
              "q": "index=database sourcetype=\"elasticsearch:cache_stats\"\n| eval fd_evict_delta=fielddata.evictions-prev_fd_evictions, qc_evict_delta=query_cache.evictions-prev_qc_evictions\n| where fd_evict_delta > 0 OR qc_evict_delta > 100\n| timechart span=5m sum(fd_evict_delta) as fielddata_evictions, sum(qc_evict_delta) as query_cache_evictions by node_name",
              "m": "Poll `GET _nodes/stats/indices/fielddata,query_cache,request_cache` and compute deltas for `evictions` counters. Any fielddata eviction is significant — alert immediately and investigate which fields use fielddata (should be using doc_values instead). For query cache, alert when eviction rate exceeds a percentage of total cache entries. Correlate with heap usage.",
              "z": "Line chart (eviction rate by cache type), Bar chart (evictions by node), Single value (fielddata memory size).",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom REST scripted input (`_nodes/stats/indices/fielddata,query_cache,request_cache`).\n• Ensure the following data sources are available: `sourcetype=elasticsearch:cache_stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET _nodes/stats/indices/fielddata,query_cache,request_cache` and compute deltas for `evictions` counters. Any fielddata eviction is significant — alert immediately and investigate which fields use fielddata (should be using doc_values instead). For query cache, alert when eviction rate exceeds a percentage of total cache entries. Correlate with heap usage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:cache_stats\"\n| eval fd_evict_delta=fielddata.evictions-prev_fd_evictions, qc_evict_delta=query_cache.evictions-prev_qc_evictions\n| where fd_evict_delta > 0 OR qc_evict_delta > 100\n| timechart span=5m sum(fd_evict_delta) as fielddata_evictions, sum(qc_evict_delta) as query_cache_evictions by node_name\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Fielddata and Cache Evictions** — Fielddata evictions force expensive re-computation of in-memory data structures, causing search latency spikes. Query cache evictions reduce the benefit of repeated queries. Tracking eviction rates guides memory tuning.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:cache_stats`. **App/TA** (typical add-on context): Custom REST scripted input (`_nodes/stats/indices/fielddata,query_cache,request_cache`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:cache_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:cache_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fd_evict_delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fd_evict_delta > 0 OR qc_evict_delta > 100` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by node_name** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Elasticsearch Fielddata and Cache Evictions** — Fielddata evictions force expensive re-computation of in-memory data structures, causing search latency spikes. Query cache evictions reduce the benefit of repeated queries. Tracking eviction rates guides memory tuning.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:cache_stats`. 
**App/TA** (typical add-on context): Custom REST scripted input (`_nodes/stats/indices/fielddata,query_cache,request_cache`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (eviction rate by cache type), Bar chart (evictions by node), Single value (fielddata memory size).",
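              "script": "#!/usr/bin/env python3\n# Hypothetical scripted-input sketch (not a published TA): polls cache stats and\n# carries prev_* counters from the previous run so the documented SPL can compute\n# eviction deltas. ES_URL and the state-file path are assumptions.\nimport json, os, urllib.request\n\nES_URL = os.environ.get('ES_URL', 'http://localhost:9200')\nSTATE = os.environ.get('STATE_FILE', '/var/tmp/es_cache_state.json')\n\nwith urllib.request.urlopen(ES_URL + '/_nodes/stats/indices/fielddata,query_cache,request_cache', timeout=30) as r:\n    nodes = json.load(r).get('nodes', {})\n\ntry:\n    with open(STATE) as f:\n        prev = json.load(f)\nexcept (IOError, ValueError):\n    prev = {}\n\ncur = {}\nfor node in nodes.values():\n    name = node.get('name')\n    ind = node.get('indices', {})\n    fd = ind.get('fielddata', {})\n    qc = ind.get('query_cache', {})\n    cur[name] = {'fd': fd.get('evictions', 0), 'qc': qc.get('evictions', 0)}\n    last = prev.get(name, cur[name])  # first run: prev == current, delta 0\n    print(json.dumps({\n        'node_name': name,\n        'fielddata': {'evictions': cur[name]['fd'], 'memory_size_in_bytes': fd.get('memory_size_in_bytes')},\n        'query_cache': {'evictions': cur[name]['qc']},\n        'prev_fd_evictions': last.get('fd', 0),\n        'prev_qc_evictions': last.get('qc', 0),\n    }))\n\nwith open(STATE, 'w') as f:\n    json.dump(cur, f)",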
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t sum(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=5m | sort - agg_value",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.19",
              "n": "Elasticsearch Segment Merge Pressure",
              "c": "medium",
              "f": "advanced",
              "v": "Heavy segment merge activity competes with search for disk I/O, causing latency spikes. Merge throttling slows indexing. Monitoring merge pressure helps balance indexing throughput against search performance.",
              "t": "Custom REST scripted input (`_nodes/stats/indices/merges`)",
              "d": "`sourcetype=elasticsearch:merge_stats`",
              "q": "index=database sourcetype=\"elasticsearch:merge_stats\"\n| eval merge_rate_mb=merges.total_size_in_bytes/1048576\n| timechart span=5m avg(merges.current) as active_merges, sum(merge_rate_mb) as merge_mb by node_name\n| where active_merges > 3",
              "m": "Poll `GET _nodes/stats/indices/merges` for `current`, `total_size_in_bytes`, `total_time_in_millis`, and `total_throttled_time_in_millis`. Compute merge rate and throttle ratio. Alert when active merges remain high (>3) for sustained periods, or when throttle time exceeds 50% of total merge time. Correlate with indexing rate and search latency to detect I/O contention.",
              "z": "Line chart (active merges over time), Stacked area (merge vs. throttle time), Single value (current merge count).",
              "kfp": "Slow queries during full backup windows, statistics updates, or after index rebuilds; correlate with maintenance and batch schedules before paging.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom REST scripted input (`_nodes/stats/indices/merges`).\n• Ensure the following data sources are available: `sourcetype=elasticsearch:merge_stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET _nodes/stats/indices/merges` for `current`, `total_size_in_bytes`, `total_time_in_millis`, and `total_throttled_time_in_millis`. Compute merge rate and throttle ratio. Alert when active merges remain high (>3) for sustained periods, or when throttle time exceeds 50% of total merge time. Correlate with indexing rate and search latency to detect I/O contention.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:merge_stats\"\n| eval merge_rate_mb=merges.total_size_in_bytes/1048576\n| timechart span=5m avg(merges.current) as active_merges, sum(merge_rate_mb) as merge_mb by node_name\n| where active_merges > 3\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Segment Merge Pressure** — Heavy segment merge activity competes with search for disk I/O, causing latency spikes. Merge throttling slows indexing. Monitoring merge pressure helps balance indexing throughput against search performance.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:merge_stats`. **App/TA** (typical add-on context): Custom REST scripted input (`_nodes/stats/indices/merges`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:merge_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:merge_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **merge_rate_mb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by node_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where active_merges > 3` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Network by Performance.host span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Elasticsearch Segment Merge Pressure** — Heavy segment merge activity competes with search for disk I/O, causing latency spikes. Merge throttling slows indexing. Monitoring merge pressure helps balance indexing throughput against search performance.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:merge_stats`. **App/TA** (typical add-on context): Custom REST scripted input (`_nodes/stats/indices/merges`). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.Network` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (active merges over time), Stacked area (merge vs. throttle time), Single value (current merge count).",
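              "script": "#!/usr/bin/env python3\n# Hypothetical scripted-input sketch (not a published TA): polls merge stats per\n# node; the nested 'merges' object surfaces in Splunk as merges.current and\n# merges.total_size_in_bytes, matching the documented SPL. ES_URL and auth are\n# assumptions.\nimport json, os, urllib.request\n\nES_URL = os.environ.get('ES_URL', 'http://localhost:9200')\n\nwith urllib.request.urlopen(ES_URL + '/_nodes/stats/indices/merges', timeout=30) as r:\n    nodes = json.load(r).get('nodes', {})\n\nfor node in nodes.values():\n    m = node.get('indices', {}).get('merges', {})\n    print(json.dumps({\n        'node_name': node.get('name'),\n        'merges': {\n            'current': m.get('current', 0),\n            'total_size_in_bytes': m.get('total_size_in_bytes', 0),\n            'total_time_in_millis': m.get('total_time_in_millis', 0),\n            'total_throttled_time_in_millis': m.get('total_throttled_time_in_millis', 0),\n        },\n    }))",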
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t sum(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Network by Performance.host span=5m | sort - agg_value",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.20",
              "n": "Solr Core Admin Health",
              "c": "high",
              "f": "beginner",
              "v": "Core-level errors (recovery failures, corrupt index flags, leader election issues) degrade search for specific collections; admin health checks catch per-core problems early.",
              "t": "Custom scripted input (`/admin/cores?action=STATUS`), Solr logs",
              "d": "`sourcetype=solr:core_status`",
              "q": "index=database sourcetype=\"solr:core_status\"\n| where state!=\"active\" OR isnotnull(error_msg)\n| stats latest(state) as state, latest(index_version) as index_version by core, collection, node_name\n| sort state",
              "m": "Poll `STATUS` for all cores on a schedule; capture `instanceDir`, `dataDir`, `uptime`, replication/Cloud role fields. Ingest ERROR lines from `solr.log`. Alert when core state is not active, recovery fails, or leader/replica roles mismatch expectations.",
              "z": "Status grid (core × healthy), Table (cores with errors), Single value (unhealthy core count).",
              "kfp": "Planned changes, load tests, and vendor maintenance in the data platform can move the same metrics this search uses; we compare to baselines, change records, and on-call context before we treat a hit as a production incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`/admin/cores?action=STATUS`), Solr logs.\n• Ensure the following data sources are available: `sourcetype=solr:core_status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `STATUS` for all cores on a schedule; capture `instanceDir`, `dataDir`, `uptime`, replication/Cloud role fields. Ingest ERROR lines from `solr.log`. Alert when core state is not active, recovery fails, or leader/replica roles mismatch expectations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"solr:core_status\"\n| where state!=\"active\" OR isnotnull(error_msg)\n| stats latest(state) as state, latest(index_version) as index_version by core, collection, node_name\n| sort state\n```\n\nUnderstanding this SPL\n\n**Solr Core Admin Health** — Core-level errors (recovery failures, corrupt index flags, leader election issues) degrade search for specific collections; admin health checks catch per-core problems early.\n\nDocumented **Data sources**: `sourcetype=solr:core_status`. **App/TA** (typical add-on context): Custom scripted input (`/admin/cores?action=STATUS`), Solr logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: solr:core_status. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"solr:core_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where state!=\"active\" OR isnotnull(error_msg)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by core, collection, node_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (core × healthy), Table (cores with errors), Single value (unhealthy core count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Core-level errors (recovery failures, corrupt data flags, leader election issues) degrade search for specific collections; admin health checks catch -core problems early We use it to stay ahead of pain for applications and the people who depend on the data.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.5.21",
              "n": "Elasticsearch Ingest Pipeline Error Rate",
              "c": "high",
              "f": "intermediate",
              "v": "Ingest pipeline failures silently drop or corrupt documents before indexing. Monitoring error rates per pipeline ensures data quality and completeness.",
              "t": "Custom REST scripted input (`_nodes/stats/ingest`)",
              "d": "`sourcetype=elasticsearch:ingest_stats`",
              "q": "index=database sourcetype=\"elasticsearch:ingest_stats\"\n| eval fail_rate=round(ingest.pipelines.failed/max(1,ingest.pipelines.count)*100,2)\n| where fail_rate > 1 OR ingest.pipelines.failed > 0\n| timechart span=5m sum(ingest.pipelines.failed) as failures by pipeline_name",
              "m": "Poll `GET _nodes/stats/ingest` and extract per-pipeline `count` and `failed` counters. Compute deltas between samples. Alert when any pipeline shows a non-zero failure rate. Investigate pipeline processor errors in Elasticsearch logs. Common causes include grok pattern mismatches, script errors, and date parsing failures.",
              "z": "Line chart (failures per pipeline), Table (pipeline error details), Single value (total ingest failures).",
              "kfp": "Planned index rollovers, rebalancing, and cluster restarts can move shard counts and health states briefly; correlate with Elasticsearch or OpenSearch change records.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom REST scripted input (`_nodes/stats/ingest`).\n• Ensure the following data sources are available: `sourcetype=elasticsearch:ingest_stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `GET _nodes/stats/ingest` and extract per-pipeline `count` and `failed` counters. Compute deltas between samples. Alert when any pipeline shows a non-zero failure rate. Investigate pipeline processor errors in Elasticsearch logs. Common causes include grok pattern mismatches, script errors, and date parsing failures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"elasticsearch:ingest_stats\"\n| eval fail_rate=round(ingest.pipelines.failed/max(1,ingest.pipelines.count)*100,2)\n| where fail_rate > 1 OR ingest.pipelines.failed > 0\n| timechart span=5m sum(ingest.pipelines.failed) as failures by pipeline_name\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Ingest Pipeline Error Rate** — Ingest pipeline failures silently drop or corrupt documents before indexing. Monitoring error rates per pipeline ensures data quality and completeness.\n\nDocumented **Data sources**: `sourcetype=elasticsearch:ingest_stats`. **App/TA** (typical add-on context): Custom REST scripted input (`_nodes/stats/ingest`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: elasticsearch:ingest_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"elasticsearch:ingest_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_rate > 1 OR ingest.pipelines.failed > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by pipeline_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failures per pipeline), Table (pipeline error details), Single value (total ingest failures).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow search cluster health, indexing, and query latency so we can keep log and search services responsive when load or mapping changes.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.2,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 21,
            "none": 0
          }
        },
        {
          "i": "7.6",
          "n": "Database Trending",
          "u": [
            {
              "i": "7.6.1",
              "n": "Database Connection Pool Utilization Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Peak connection pool utilization over 30 days shows how close applications are to exhausting database sessions. Rising peaks justify pool tuning, connection string fixes, or server scale-up before login storms cause outages.",
              "t": "Splunk DB Connect, vendor DB TAs (MySQL Enterprise, PostgreSQL, Oracle, SQL Server), application pool metrics if forwarded",
              "d": "`index=db` `sourcetype=mysql:status`, `sourcetype=postgresql:metrics`, `sourcetype=mssql:perf`, `sourcetype=oracle:session`",
              "q": "index=db (sourcetype=\"mysql:status\" OR sourcetype=\"postgresql:metrics\" OR sourcetype=\"mssql:perf\" OR sourcetype=\"oracle:session\")\n| eval active=coalesce(threads_connected, numbackends, active_sessions, session_count)\n| eval max_conn=coalesce(max_connections, max_connections_setting, session_limit)\n| eval pool_pct=if(max_conn>0, round(100*active/max_conn,2), null())\n| timechart span=1d max(pool_pct) as peak_pool_util_pct by instance",
              "m": "Map instance identifiers consistently (`host` + `port` + `db_name`). For PgBouncer or RDS proxy, track pool versus backend limits separately. Alert on sustained peaks above policy (for example 80%). Combine with application-side pool settings to find mismatches. Use `perc95` if peaks are noisy from batch jobs only.",
              "z": "Line chart (peak pool % by instance), column chart (30-day max), table (instances over threshold).",
              "kfp": "Connection pool warm-up after restarts, blue-green deploys, or autoscaling can look like a spike until the pool or fleet reaches steady state.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect, vendor DB TAs (MySQL Enterprise, PostgreSQL, Oracle, SQL Server), application pool metrics if forwarded.\n• Ensure the following data sources are available: `index=db` `sourcetype=mysql:status`, `sourcetype=postgresql:metrics`, `sourcetype=mssql:perf`, `sourcetype=oracle:session`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap instance identifiers consistently (`host` + `port` + `db_name`). For PgBouncer or RDS proxy, track pool versus backend limits separately. Alert on sustained peaks above policy (for example 80%). Combine with application-side pool settings to find mismatches. Use `perc95` if peaks are noisy from batch jobs only.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=db (sourcetype=\"mysql:status\" OR sourcetype=\"postgresql:metrics\" OR sourcetype=\"mssql:perf\" OR sourcetype=\"oracle:session\")\n| eval active=coalesce(threads_connected, numbackends, active_sessions, session_count)\n| eval max_conn=coalesce(max_connections, max_connections_setting, session_limit)\n| eval pool_pct=if(max_conn>0, round(100*active/max_conn,2), null())\n| timechart span=1d max(pool_pct) as peak_pool_util_pct by instance\n```\n\nUnderstanding this SPL\n\n**Database Connection Pool Utilization Trending** — Peak connection pool utilization over 30 days shows how close applications are to exhausting database sessions. Rising peaks justify pool tuning, connection string fixes, or server scale-up before login storms cause outages.\n\nDocumented **Data sources**: `index=db` `sourcetype=mysql:status`, `sourcetype=postgresql:metrics`, `sourcetype=mssql:perf`, `sourcetype=oracle:session`. **App/TA** (typical add-on context): Splunk DB Connect, vendor DB TAs (MySQL Enterprise, PostgreSQL, Oracle, SQL Server), application pool metrics if forwarded. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: db; **sourcetype**: mysql:status, postgresql:metrics, mssql:perf, oracle:session. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=db, sourcetype=\"mysql:status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **active** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **max_conn** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **pool_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by instance** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Line chart (peak pool % by instance), column chart (30-day max), table (instances over threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for repeated or suspicious sign-in activity on our databases so we can catch brute-force and misconfiguration before they become account takeovers.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "mssql",
                "mysql",
                "oracle",
                "postgresql"
              ],
              "em": [
                "oracle_oracle_db",
                "postgresql_pg"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.6.2",
              "n": "Slow Query Volume Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Counting queries exceeding a duration threshold per day quantifies database pain for developers and DBAs. Upward trends after releases often indicate missing indexes or plan regressions before p95 latency alerts fire.",
              "t": "Native slow logs, Percona, `pg_stat_statements` export, SQL Server extended events",
              "d": "`index=db` `sourcetype=mysql:slow`, `sourcetype=postgresql:log`, `sourcetype=mssql:query`, `sourcetype=oracle:audit`",
              "q": "index=db sourcetype IN (\"mysql:slow\",\"postgresql:log\",\"mssql:query\",\"oracle:sql\")\n| eval dur_ms=coalesce(query_time_ms, duration_ms, query_duration*1000)\n| where dur_ms > 1000\n| bin _time span=1d\n| stats count as slow_queries by _time, db_name\n| timechart span=1d sum(slow_queries) as daily_slow_queries by db_name limit=12",
              "m": "Tune the millisecond threshold per environment (OLTP vs reporting). Hash or truncate SQL text for cardinality control. Exclude known batch accounts via `user` lookup. Join top patterns to `EXPLAIN` workflow or query store IDs when available. Retention on verbose logs may require summary indexing to `sourcetype=stash`.",
              "z": "Stacked column chart (slow queries per day by database), line chart (total slow count), table (top normalized query signatures).",
              "kfp": "Maintenance work (ANALYZE, VACUUM, REINDEX, index builds) and ETL or large report jobs may show up as long-running or blocking — compare with the maintenance and batch schedule.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Native slow logs, Percona, `pg_stat_statements` export, SQL Server extended events.\n• Ensure the following data sources are available: `index=db` `sourcetype=mysql:slow`, `sourcetype=postgresql:log`, `sourcetype=mssql:query`, `sourcetype=oracle:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune the millisecond threshold per environment (OLTP vs reporting). Hash or truncate SQL text for cardinality control. Exclude known batch accounts via `user` lookup. Join top patterns to `EXPLAIN` workflow or query store IDs when available. Retention on verbose logs may require summary indexing to `sourcetype=stash`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=db sourcetype IN (\"mysql:slow\",\"postgresql:log\",\"mssql:query\",\"oracle:sql\")\n| eval dur_ms=coalesce(query_time_ms, duration_ms, query_duration*1000)\n| where dur_ms > 1000\n| bin _time span=1d\n| stats count as slow_queries by _time, db_name\n| timechart span=1d sum(slow_queries) as daily_slow_queries by db_name limit=12\n```\n\nUnderstanding this SPL\n\n**Slow Query Volume Trending** — Counting queries exceeding a duration threshold per day quantifies database pain for developers and DBAs. Upward trends after releases often indicate missing indexes or plan regressions before p95 latency alerts fire.\n\nDocumented **Data sources**: `index=db` `sourcetype=mysql:slow`, `sourcetype=postgresql:log`, `sourcetype=mssql:query`, `sourcetype=oracle:audit`. **App/TA** (typical add-on context): Native slow logs, Percona, `pg_stat_statements` export, SQL Server extended events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: db.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=db. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **dur_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where dur_ms > 1000` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, db_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by db_name limit=12** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked column chart (slow queries per day by database), line chart (total slow count), table (top normalized query signatures).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mssql"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.6.3",
              "n": "Replication Lag Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Maximum and average replication lag by replica over 30 days validates disaster-recovery readiness and read-consistency expectations. Gradual lag growth can signal network, disk, or write-volume problems before replica promotion fails.",
              "t": "MySQL replica status, PostgreSQL replication, Oracle Data Guard, SQL Server AG metrics",
              "d": "`index=db` `sourcetype=mysql:slave`, `sourcetype=postgresql:replication`, `sourcetype=oracle:dg`, `sourcetype=mssql:ag`",
              "q": "index=db sourcetype IN (\"mysql:slave\",\"postgresql:replication\",\"oracle:dg\",\"mssql:ag\")\n| eval lag_sec=coalesce(seconds_behind_source, replay_lag_seconds, commit_lag_sec, ag_synchronization_health_seconds)\n| timechart span=1d max(lag_sec) as max_replica_lag_sec avg(lag_sec) as avg_replica_lag_sec by replica_host limit=15",
              "m": "For SQL Server AG, prefer `database_replica` lag fields consistent with your sync mode. Filter out replicas in paused maintenance. Correlate spikes with large index builds or log chain breaks. Use the same clock source (NTP) across primary and replicas to avoid false lag. Cloud replicas may expose lag in milliseconds—normalize to seconds in `eval`.",
              "z": "Line chart (max lag per replica), area chart (avg lag), single value (worst replica lag now).",
              "kfp": "Lag spikes during large transactions, schema changes, or network maintenance to standby; align thresholds with RPO and published change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: MySQL replica status, PostgreSQL replication, Oracle Data Guard, SQL Server AG metrics.\n• Ensure the following data sources are available: `index=db` `sourcetype=mysql:slave`, `sourcetype=postgresql:replication`, `sourcetype=oracle:dg`, `sourcetype=mssql:ag`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFor SQL Server AG, prefer `database_replica` lag fields consistent with your sync mode. Filter out replicas in paused maintenance. Correlate spikes with large index builds or log chain breaks. Use the same clock source (NTP) across primary and replicas to avoid false lag. Cloud replicas may expose lag in milliseconds—normalize to seconds in `eval`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=db sourcetype IN (\"mysql:slave\",\"postgresql:replication\",\"oracle:dg\",\"mssql:ag\")\n| eval lag_sec=coalesce(seconds_behind_source, replay_lag_seconds, commit_lag_sec, ag_synchronization_health_seconds)\n| timechart span=1d max(lag_sec) as max_replica_lag_sec avg(lag_sec) as avg_replica_lag_sec by replica_host limit=15\n```\n\nUnderstanding this SPL\n\n**Replication Lag Trending** — Maximum and average replication lag by replica over 30 days validates disaster-recovery readiness and read-consistency expectations. Gradual lag growth can signal network, disk, or write-volume problems before replica promotion fails.\n\nDocumented **Data sources**: `index=db` `sourcetype=mysql:slave`, `sourcetype=postgresql:replication`, `sourcetype=oracle:dg`, `sourcetype=mssql:ag`. **App/TA** (typical add-on context): MySQL replica status, PostgreSQL replication, Oracle Data Guard, SQL Server AG metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: db.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=db. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **lag_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by replica_host limit=15** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (max lag per replica), area chart (avg lag), single value (worst replica lag now).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track replication lag on standbys and replicas so we know when reads or failover could fall behind our recovery expectations before an outage or SLA miss.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mssql",
                "mysql",
                "oracle",
                "postgresql"
              ],
              "em": [
                "oracle_oracle_db",
                "postgresql_pg"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.6.4",
              "n": "Database Backup Size Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Monthly backup size growth forecasts storage for backup appliances and cloud object storage costs. Anomalous jumps can indicate bulk data loads, failed truncations, or ransomware preparation worth investigating.",
              "t": "RMAN, SQL Server backup history, mysqldump / Percona log parsers, cloud backup APIs",
              "d": "`index=db` `sourcetype=mssql:backup`, `sourcetype=mysql:backup`, `sourcetype=oracle:rman`",
              "q": "index=db sourcetype IN (\"mssql:backup\",\"mysql:backup\",\"oracle:rman\",\"postgresql:backup\")\n| eval size_gb=coalesce(backup_size_gb, round(backup_size_bytes/1073741824,3))\n| where backup_status IN (\"success\",\"Success\",\"completed\") OR isnull(backup_status)\n| bin _time span=1mon\n| stats max(size_gb) as backup_size_gb by _time, database_name\n| timechart span=1mon sum(backup_size_gb) as total_backup_gb by database_name limit=10",
              "m": "Deduplicate overlapping full/diff/incremental jobs with `backup_type`. Include compression ratio if logged for better capacity forecasting. Tag cloud vs on-prem targets separately. Alert on failed backups via a companion search; this UC focuses on growth trend only.",
              "z": "Line chart (backup size GB over months), column chart (month-over-month growth %), table (largest databases).",
              "kfp": "Planned full or differential backups, log backup bursts, and restore drills can spike I/O and job duration; suppress or threshold against the known backup window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: RMAN, SQL Server backup history, mysqldump / Percona log parsers, cloud backup APIs.\n• Ensure the following data sources are available: `index=db` `sourcetype=mssql:backup`, `sourcetype=mysql:backup`, `sourcetype=oracle:rman`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeduplicate overlapping full/diff/incremental jobs with `backup_type`. Include compression ratio if logged for better capacity forecasting. Tag cloud vs on-prem targets separately. Alert on failed backups via a companion search; this UC focuses on growth trend only.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=db sourcetype IN (\"mssql:backup\",\"mysql:backup\",\"oracle:rman\",\"postgresql:backup\")\n| eval size_gb=coalesce(backup_size_gb, round(backup_size_bytes/1073741824,3))\n| where backup_status IN (\"success\",\"Success\",\"completed\") OR isnull(backup_status)\n| bin _time span=1mon\n| stats max(size_gb) as backup_size_gb by _time, database_name\n| timechart span=1mon sum(backup_size_gb) as total_backup_gb by database_name limit=10\n```\n\nUnderstanding this SPL\n\n**Database Backup Size Trending** — Monthly backup size growth forecasts storage for backup appliances and cloud object storage costs. Anomalous jumps can indicate bulk data loads, failed truncations, or ransomware preparation worth investigating.\n\nDocumented **Data sources**: `index=db` `sourcetype=mssql:backup`, `sourcetype=mysql:backup`, `sourcetype=oracle:rman`. **App/TA** (typical add-on context): RMAN, SQL Server backup history, mysqldump / Percona log parsers, cloud backup APIs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: db.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=db. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **size_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where backup_status IN (\"success\",\"Success\",\"completed\") OR isnull(backup_status)` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, database_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1mon** buckets with a separate series **by database_name limit=10** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (backup size GB over months), column chart (month-over-month growth %), table (largest databases).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check backup and snapshot success and growth so we can trust restores and long-term storage plans for regulation and for real incidents.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mssql",
                "mysql"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "7.6.5",
              "n": "Index Fragmentation Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Average fragmentation percentage over 30 days guides `REBUILD`/`REORG` scheduling and fill-factor reviews. Slow upward trends on hot tables correlate with extra I/O and slower queries even when CPU looks healthy.",
              "t": "SQL Server DMVs via scripted input, MySQL `information_schema` / InnoDB metrics, Oracle segment advisor exports",
              "d": "`index=db` `sourcetype=mssql:fragmentation`, `sourcetype=mysql:innodb`, `sourcetype=oracle:segment`",
              "q": "index=db sourcetype IN (\"mssql:fragmentation\",\"mysql:innodb\",\"oracle:segment\",\"postgresql:index\")\n| eval frag_pct=coalesce(avg_fragmentation_in_percent, fragmentation_pct, bloat_ratio*100)\n| where isnotnull(frag_pct)\n| timechart span=1d avg(frag_pct) as avg_fragmentation_pct max(frag_pct) as max_fragmentation_pct by table_name limit=12",
              "m": "Sample large catalogs during off-peak windows to control license cost. Exclude tiny tables where fragmentation is meaningless. Join `table_name` to owner/schema for remediation tickets. PostgreSQL bloat metrics may use different units—normalize in `eval`. Pair with maintenance windows from change records.",
              "z": "Line chart (fragmentation % over time), heatmap (table × week), table (tables exceeding DBA threshold).",
              "kfp": "Cold caches right after restarts, failover, or one-off full scans can lower hit ratios until the working set is warm again — watch trends, not a single low sample.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SQL Server DMVs via scripted input, MySQL `information_schema` / InnoDB metrics, Oracle segment advisor exports.\n• Ensure the following data sources are available: `index=db` `sourcetype=mssql:fragmentation`, `sourcetype=mysql:innodb`, `sourcetype=oracle:segment`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSample large catalogs during off-peak windows to control license cost. Exclude tiny tables where fragmentation is meaningless. Join `table_name` to owner/schema for remediation tickets. PostgreSQL bloat metrics may use different units—normalize in `eval`. Pair with maintenance windows from change records.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=db sourcetype IN (\"mssql:fragmentation\",\"mysql:innodb\",\"oracle:segment\",\"postgresql:index\")\n| eval frag_pct=coalesce(avg_fragmentation_in_percent, fragmentation_pct, bloat_ratio*100)\n| where isnotnull(frag_pct)\n| timechart span=1d avg(frag_pct) as avg_fragmentation_pct max(frag_pct) as max_fragmentation_pct by table_name limit=12\n```\n\nUnderstanding this SPL\n\n**Index Fragmentation Trending** — Average fragmentation percentage over 30 days guides `REBUILD`/`REORG` scheduling and fill-factor reviews. Slow upward trends on hot tables correlate with extra I/O and slower queries even when CPU looks healthy.\n\nDocumented **Data sources**: `index=db` `sourcetype=mssql:fragmentation`, `sourcetype=mysql:innodb`, `sourcetype=oracle:segment`. **App/TA** (typical add-on context): SQL Server DMVs via scripted input, MySQL `information_schema` / InnoDB metrics, Oracle segment advisor exports. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: db.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=db. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **frag_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(frag_pct)` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by table_name limit=12** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (fragmentation % over time), heatmap (table × week), table (tables exceeding DBA threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface slow and blocking queries so we can fix the worst offenders first and keep applications and batch jobs within the response times we promise.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mssql",
                "mysql",
                "oracle"
              ],
              "em": [
                "oracle_oracle_db"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 5,
            "none": 0
          }
        }
      ],
      "i": 7,
      "n": "Database & Data Platforms",
      "src": "cat-07-database-data-platforms.md"
    },
    {
      "s": [
        {
          "i": "8.1",
          "n": "Web Servers & Reverse Proxies",
          "u": [
            {
              "i": "8.1.1",
              "n": "HTTP Error Rate Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Rising error rates signal application issues, backend failures, or attacks. Rapid detection reduces user impact and MTTR.",
              "t": "`Splunk_TA_apache`, `TA-nginx`, IIS via Windows TA",
              "d": "Web server access logs (Apache combined, NGINX combined, IIS W3C)",
              "q": "index=web sourcetype=\"access_combined\"\n| eval error=if(status>=400,1,0)\n| timechart span=5m sum(error) as errors, count as total\n| eval error_rate=round(errors/total*100,2)\n| where error_rate > 5",
              "m": "Install appropriate web server TA. Forward access logs via UF. Enable response time logging in web server config. Create tiered alerts: >5% error rate (warning), >10% (critical). Split 4xx from 5xx for different response.",
              "z": "Timechart (line) for 5xx rate % and a second series for 4xx rate % to show trends; stacked column for 4xx vs 5xx counts when triaging the class mix; single-value panel for current 5xx % (the SLA-facing metric); horizontal bar chart for top URIs by 5xx count to guide developer investigation.",
              "kfp": "Client errors (4xx) from bots or invalid requests; consider separate thresholds for 4xx vs 5xx.",
              "refs": "[Splunk Add-on for Apache](https://splunkbase.splunk.com/app/3186), [Splunk Add-on for NGINX](https://splunkbase.splunk.com/app/3258), [Web CIM](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_apache`, `TA-nginx`, IIS via Windows TA.\n• Ensure the following data sources are available: Web server access logs (Apache combined, NGINX combined, IIS W3C).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall appropriate web server TA. Forward access logs via UF. Enable response time logging in web server config. Create tiered alerts: >5% error rate (warning), >10% (critical). Split 4xx from 5xx for different response.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\"\n| eval error=if(status>=400,1,0)\n| timechart span=5m sum(error) as errors, count as total\n| eval error_rate=round(errors/total*100,2)\n| where error_rate > 5\n```\n\nUnderstanding this SPL\n\n**HTTP Error Rate Monitoring** — Rising error rates signal application issues, backend failures, or attacks. Rapid detection reduces user impact and MTTR.\n\nDocumented **Data sources**: Web server access logs (Apache combined, NGINX combined, IIS W3C). **App/TA** (typical add-on context): `Splunk_TA_apache`, `TA-nginx`, IIS via Windows TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **error** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate > 5` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.status>=400\n  by Web.src Web.uri_path Web.status span=5m\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**HTTP Error Rate Monitoring** — Rising error rates signal application issues, backend failures, or attacks. Rapid detection reduces user impact and MTTR.\n\nDocumented **Data sources**: Web server access logs (Apache combined, NGINX combined, IIS W3C). **App/TA** (typical add-on context): `Splunk_TA_apache`, `TA-nginx`, IIS via Windows TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (line) for 5xx rate % and a second series for 4xx rate % to show trends; stacked column for 4xx vs 5xx counts when triaging the class mix; single-value panel for current 5xx % (the SLA-facing metric); horizontal bar chart for top URIs by 5xx count to guide developer investigation.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that rising error rates signal application issues, backend failures, or attacks.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.status>=400\n  by Web.src Web.uri_path Web.status span=5m\n| sort -count",
              "e": [
                "apache",
                "iis",
                "nginx"
              ],
              "em": [
                "apache_httpd",
                "nginx_open"
              ],
              "pillar": "both",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "8.1.2",
              "n": "Response Time Trending",
              "c": "high",
              "f": "beginner",
              "v": "Increasing response times degrade user experience before complete failures occur. Trending enables proactive optimization.",
              "t": "`Splunk_TA_apache`, `TA-nginx`",
              "d": "Access logs with `%D` (Apache) or `$request_time` (NGINX)",
              "q": "index=web sourcetype=\"access_combined\"\n| timechart span=5m perc95(response_time) as p95, avg(response_time) as avg_rt by host\n| where p95 > 2000",
              "m": "Enable response time logging in web server config (Apache: `%D` in LogFormat, NGINX: `$request_time`). Track p50/p95/p99 percentiles. Alert on p95 exceeding SLA threshold. Correlate with backend service health.",
              "z": "Line chart (p50/p95/p99 over time), Histogram (response time distribution), Table (slowest endpoints).",
              "kfp": "Response time spikes during JVM garbage collection, connection pool exhaustion, or backend dependency degradation. Load tests, campaigns, and cold caches also move percentiles.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_apache`, `TA-nginx`.\n• Ensure the following data sources are available: Access logs with `%D` (Apache) or `$request_time` (NGINX).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable response time logging in web server config (Apache: `%D` in LogFormat, NGINX: `$request_time`). Track p50/p95/p99 percentiles. Alert on p95 exceeding SLA threshold. Correlate with backend service health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\"\n| timechart span=5m perc95(response_time) as p95, avg(response_time) as avg_rt by host\n| where p95 > 2000\n```\n\nUnderstanding this SPL\n\n**Response Time Trending** — Increasing response times degrade user experience before complete failures occur. Trending enables proactive optimization.\n\nDocumented **Data sources**: Access logs with `%D` (Apache) or `$request_time` (NGINX). **App/TA** (typical add-on context): `Splunk_TA_apache`, `TA-nginx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where p95 > 2000` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` perc95(Web.duration) as p95_ms avg(Web.duration) as avg_ms\n  from datamodel=Web.Web\n  by Web.dest Web.uri_path span=5m\n| where p95_ms > 2000\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Response Time Trending** — Increasing response times degrade user experience before complete failures occur. Trending enables proactive optimization.\n\nDocumented **Data sources**: Access logs with `%D` (Apache) or `$request_time` (NGINX). **App/TA** (typical add-on context): `Splunk_TA_apache`, `TA-nginx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Filters the current rows with `where p95_ms > 2000` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p50/p95/p99 over time), Histogram (response time distribution), Table (slowest endpoints).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that increasing response times degrade user experience before complete failures occur.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` perc95(Web.duration) as p95_ms avg(Web.duration) as avg_ms\n  from datamodel=Web.Web\n  by Web.dest Web.uri_path span=5m\n| where p95_ms > 2000",
              "e": [
                "apache",
                "nginx"
              ],
              "em": [
                "apache_httpd",
                "nginx_open"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.3",
              "n": "Request Rate Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Traffic trending supports capacity planning and identifies unexpected traffic patterns (bot attacks, viral events, traffic drops).",
              "t": "`Splunk_TA_apache`, `TA-nginx`",
              "d": "Access logs",
              "q": "index=web sourcetype=\"access_combined\"\n| timechart span=1m count as requests_per_min by host\n| predict requests_per_min as predicted",
              "m": "Ingest access logs. Track requests per second/minute. Baseline normal traffic patterns using `predict`. Alert on sudden drops (possible outage) or spikes (possible attack). Break down by URI for endpoint-level trending.",
              "z": "Line chart (request rate with prediction band), Area chart (traffic over time), Bar chart (requests by host).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_apache`, `TA-nginx`.\n• Ensure the following data sources are available: Access logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest access logs. Track requests per second/minute. Baseline normal traffic patterns using `predict`. Alert on sudden drops (possible outage) or spikes (possible attack). Break down by URI for endpoint-level trending.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\"\n| timechart span=1m count as requests_per_min by host\n| predict requests_per_min as predicted\n```\n\nUnderstanding this SPL\n\n**Request Rate Trending** — Traffic trending supports capacity planning and identifies unexpected traffic patterns (bot attacks, viral events, traffic drops).\n\nDocumented **Data sources**: Access logs. **App/TA** (typical add-on context): `Splunk_TA_apache`, `TA-nginx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Request Rate Trending**): predict requests_per_min as predicted\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(Web.bytes) as total_bytes\n  from datamodel=Web.Web\n  by Web.src Web.dest Web.uri_path Web.status span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Request Rate Trending** — Traffic trending supports capacity planning and identifies unexpected traffic patterns (bot attacks, viral events, traffic drops).\n\nDocumented **Data sources**: Access logs. **App/TA** (typical add-on context): `Splunk_TA_apache`, `TA-nginx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (request rate with prediction band), Area chart (traffic over time), Bar chart (requests by host).",
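              "_example": "A hedged sketch of the drop/spike alert described in Step 1: by default `predict` also emits `lower95(predicted)` and `upper95(predicted)` confidence bounds, so flagging points outside the band catches both outage-style drops and attack-style spikes. The 1-minute span is an assumption; widen it for bursty sites.\n\n```spl\nindex=web sourcetype=\"access_combined\"\n| timechart span=1m count as requests_per_min\n| predict requests_per_min as predicted\n| where requests_per_min < 'lower95(predicted)' OR requests_per_min > 'upper95(predicted)'\n```",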
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that traffic trending supports capacity planning and identifies unexpected traffic patterns (bot attacks, viral events, traffic drops).",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` count sum(Web.bytes) as total_bytes\n  from datamodel=Web.Web\n  by Web.src Web.dest Web.uri_path Web.status span=1h\n| sort -count",
              "e": [
                "apache",
                "nginx"
              ],
              "em": [
                "apache_httpd",
                "nginx_open"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.4",
              "n": "Top Error URIs",
              "c": "medium",
              "f": "beginner",
              "v": "Identifies the most problematic endpoints for targeted developer attention. Reduces noise by focusing on high-impact errors.",
              "t": "`Splunk_TA_apache`, `TA-nginx`",
              "d": "Access logs",
              "q": "index=web sourcetype=\"access_combined\" status>=400\n| stats count by uri_path, status\n| sort -count\n| head 20",
              "m": "Parse URI from access logs (ensure proper field extraction). Group by URI and status code. Create daily/weekly report of top error endpoints. Track error trends per URI over time to detect regressions.",
              "z": "Table (URI, status, count), Bar chart (top 20 error URIs), Treemap (errors by URI path).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_apache`, `TA-nginx`.\n• Ensure the following data sources are available: Access logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse URI from access logs (ensure proper field extraction). Group by URI and status code. Create daily/weekly report of top error endpoints. Track error trends per URI over time to detect regressions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\" status>=400\n| stats count by uri_path, status\n| sort -count\n| head 20\n```\n\nUnderstanding this SPL\n\n**Top Error URIs** — Identifies the most problematic endpoints for targeted developer attention. Reduces noise by focusing on high-impact errors.\n\nDocumented **Data sources**: Access logs. **App/TA** (typical add-on context): `Splunk_TA_apache`, `TA-nginx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by uri_path, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.status>=400\n  by Web.src Web.uri_path Web.status span=5m\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Top Error URIs** — Identifies the most problematic endpoints for targeted developer attention. Reduces noise by focusing on high-impact errors.\n\nDocumented **Data sources**: Access logs. **App/TA** (typical add-on context): `Splunk_TA_apache`, `TA-nginx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (URI, status, count), Bar chart (top 20 error URIs), Treemap (errors by URI path).",
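              "_example": "A hedged week-over-week regression sketch for the per-URI error trending mentioned in Step 1; the 14-day window and top-20 cut are assumptions, and `uri_path`/`status` must already be extracted.\n\n```spl\nindex=web sourcetype=\"access_combined\" status>=400 earliest=-14d\n| eval window=if(_time>=relative_time(now(), \"-7d\"), \"this_week\", \"last_week\")\n| chart count over uri_path by window\n| fillnull value=0 this_week last_week\n| eval delta=this_week-last_week\n| sort -delta\n| head 20\n```",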
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the most problematic endpoints for targeted developer attention.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.status>=400\n  by Web.src Web.uri_path Web.status span=5m\n| sort -count",
              "e": [
                "apache",
                "nginx"
              ],
              "em": [
                "apache_httpd",
                "nginx_open"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.5",
              "n": "SSL Certificate Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Expired SSL certificates cause complete service outage and browser security warnings. Proactive monitoring prevents this entirely avoidable failure.",
              "t": "Scripted input (openssl s_client), custom certificate check",
              "d": "Certificate check scripted input, web server config parsing",
              "q": "index=certificates sourcetype=\"cert_check\"\n| eval days_until_expiry=round((cert_expiry_epoch-now())/86400)\n| where days_until_expiry < 30\n| table host, port, cn, issuer, days_until_expiry\n| sort days_until_expiry",
              "m": "Deploy scripted input that runs `openssl s_client` against all HTTPS endpoints daily. Parse certificate details (CN, SAN, expiry, issuer). Alert at 30, 14, and 7 days before expiry. Maintain endpoint inventory via lookup.",
              "z": "Table (certificates with expiry dates), Single value (certs expiring within 30d), Status grid (endpoint × cert status).",
              "kfp": "Short-lived or staging certificates, planned rotations, and CA delays can look like an imminent outage. Certificates approaching expiry should trigger renewal; alerts before the renewal date are expected.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Scripted input (openssl s_client), custom certificate check.\n• Ensure the following data sources are available: Certificate check scripted input, web server config parsing.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy scripted input that runs `openssl s_client` against all HTTPS endpoints daily. Parse certificate details (CN, SAN, expiry, issuer). Alert at 30, 14, and 7 days before expiry. Maintain endpoint inventory via lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=certificates sourcetype=\"cert_check\"\n| eval days_until_expiry=round((cert_expiry_epoch-now())/86400)\n| where days_until_expiry < 30\n| table host, port, cn, issuer, days_until_expiry\n| sort days_until_expiry\n```\n\nUnderstanding this SPL\n\n**SSL Certificate Monitoring** — Expired SSL certificates cause complete service outage and browser security warnings. Proactive monitoring prevents this entirely avoidable failure.\n\nDocumented **Data sources**: Certificate check scripted input, web server config parsing. **App/TA** (typical add-on context): Scripted input (openssl s_client), custom certificate check. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: certificates; **sourcetype**: cert_check. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=certificates, sourcetype=\"cert_check\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_until_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_until_expiry < 30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SSL Certificate Monitoring**): table host, port, cn, issuer, days_until_expiry\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (certificates with expiry dates), Single value (certs expiring within 30d), Status grid (endpoint × cert status).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
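              "_example": "A hedged tiered-severity sketch matching the 30/14/7-day alert ladder in Step 1; field names mirror the documented search and are assumptions wherever your cert_check script differs.\n\n```spl\nindex=certificates sourcetype=\"cert_check\"\n| eval days_until_expiry=round((cert_expiry_epoch-now())/86400)\n| eval severity=case(days_until_expiry<7, \"critical\", days_until_expiry<14, \"high\", days_until_expiry<30, \"warning\", true(), \"ok\")\n| where severity!=\"ok\"\n| table host, port, cn, issuer, days_until_expiry, severity\n| sort days_until_expiry\n```",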
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that expired secure connections certificates cause complete service outage and browser security warnings.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "8.1.6",
              "n": "Upstream Backend Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Backend server failures behind reverse proxies cause partial service degradation. Detection enables rapid failover response.",
              "t": "`TA-nginx` (error logs), HAProxy stats",
              "d": "NGINX error logs (upstream errors), HAProxy stats socket, F5 pool member logs",
              "q": "index=web sourcetype=\"nginx:error\"\n| search \"upstream\" (\"connect() failed\" OR \"no live upstreams\" OR \"timed out\")\n| stats count by upstream_addr, server_name\n| sort -count",
              "m": "Forward NGINX error logs. Parse upstream failure messages. For HAProxy, enable stats socket and poll via scripted input. Alert on backend server failures. Track backend health state over time.",
              "z": "Status grid (backend × health), Table (failed backends), Timeline (backend failure events).",
              "kfp": "5xx errors spike during deployment rollouts, restart sequences, or backend service maintenance. Health-check noise and synthetic traffic can add false volume if not filtered.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nginx` (error logs), HAProxy stats.\n• Ensure the following data sources are available: NGINX error logs (upstream errors), HAProxy stats socket, F5 pool member logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward NGINX error logs. Parse upstream failure messages. For HAProxy, enable stats socket and poll via scripted input. Alert on backend server failures. Track backend health state over time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"nginx:error\"\n| search \"upstream\" (\"connect() failed\" OR \"no live upstreams\" OR \"timed out\")\n| stats count by upstream_addr, server_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Upstream Backend Health** — Backend server failures behind reverse proxies cause partial service degradation. Detection enables rapid failover response.\n\nDocumented **Data sources**: NGINX error logs (upstream errors), HAProxy stats socket, F5 pool member logs. **App/TA** (typical add-on context): `TA-nginx` (error logs), HAProxy stats. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: nginx:error. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"nginx:error\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by upstream_addr, server_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.status>=502 AND Web.status<=504\n  by Web.dest Web.uri_path span=5m\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Upstream Backend Health** — Backend server failures behind reverse proxies cause partial service degradation. Detection enables rapid failover response.\n\nDocumented **Data sources**: NGINX error logs (upstream errors), HAProxy stats socket, F5 pool member logs. **App/TA** (typical add-on context): `TA-nginx` (error logs), HAProxy stats. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (backend × health), Table (failed backends), Timeline (backend failure events).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
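              "_example": "A hedged failure-burst sketch for alerting: it counts upstream failures per 5-minute bucket and flags either a burst on one site or multiple failing backends at once. The `upstream_addr`/`server_name` extractions are assumed to come from your TA-nginx field extractions, and both thresholds are starting points to tune.\n\n```spl\nindex=web sourcetype=\"nginx:error\" (\"connect() failed\" OR \"no live upstreams\" OR \"timed out\")\n| bin _time span=5m\n| stats count as failures dc(upstream_addr) as failing_backends by _time, server_name\n| where failures > 10 OR failing_backends > 1\n```",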
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that backend server failures behind reverse proxies cause partial service degradation.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.status>=502 AND Web.status<=504\n  by Web.dest Web.uri_path span=5m\n| sort -count",
              "e": [
                "haproxy",
                "nginx"
              ],
              "em": [
                "nginx_open"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.7",
              "n": "Bot and Crawler Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Bot traffic inflates metrics and consumes resources. Identification enables accurate capacity planning and bot management policies.",
              "t": "`Splunk_TA_apache`, `TA-nginx`",
              "d": "Access logs (User-Agent field)",
              "q": "index=web sourcetype=\"access_combined\"\n| rex field=useragent \"(?<bot_name>Googlebot|Bingbot|baiduspider|bot|crawler|spider)\"\n| eval is_bot=if(isnotnull(bot_name),\"bot\",\"human\")\n| stats count by is_bot\n| eval pct=round(count/sum(count)*100,1)",
              "m": "Parse User-Agent from access logs. Maintain a lookup of known bot signatures. Classify traffic as bot vs human. Track bot traffic percentage over time. Alert on unknown bots or suspicious crawling patterns.",
              "z": "Pie chart (bot vs human traffic), Bar chart (top bots by request count), Line chart (bot traffic trend).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_apache`, `TA-nginx`.\n• Ensure the following data sources are available: Access logs (User-Agent field).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse User-Agent from access logs. Maintain a lookup of known bot signatures. Classify traffic as bot vs human. Track bot traffic percentage over time. Alert on unknown bots or suspicious crawling patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\"\n| rex field=useragent \"(?<bot_name>Googlebot|Bingbot|baiduspider|bot|crawler|spider)\"\n| eval is_bot=if(isnotnull(bot_name),\"bot\",\"human\")\n| stats count by is_bot\n| eval pct=round(count/sum(count)*100,1)\n```\n\nUnderstanding this SPL\n\n**Bot and Crawler Detection** — Bot traffic inflates metrics and consumes resources. Identification enables accurate capacity planning and bot management policies.\n\nDocumented **Data sources**: Access logs (User-Agent field). **App/TA** (typical add-on context): `Splunk_TA_apache`, `TA-nginx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **is_bot** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by is_bot** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  by Web.http_user_agent Web.src span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Bot and Crawler Detection** — Bot traffic inflates metrics and consumes resources. Identification enables accurate capacity planning and bot management policies.\n\nDocumented **Data sources**: Access logs (User-Agent field). **App/TA** (typical add-on context): `Splunk_TA_apache`, `TA-nginx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (bot vs human traffic), Bar chart (top bots by request count), Line chart (bot traffic trend).",
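              "_example": "A hedged sketch of the unknown-bot alert from Step 1: it keeps only bot-looking user agents, drops those present in a hypothetical `known_bots` lookup you maintain, and ranks the remainder. The lookup name, its `bot_name` output field, and the `clientip` field are assumptions.\n\n```spl\nindex=web sourcetype=\"access_combined\"\n| where match(useragent, \"(?i)bot|crawler|spider\")\n| lookup known_bots useragent OUTPUT bot_name as known_bot\n| where isnull(known_bot)\n| stats count dc(clientip) as unique_ips by useragent\n| sort -count\n```",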
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that bot traffic inflates metrics and consumes resources.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  by Web.http_user_agent Web.src span=1h\n| sort -count",
              "e": [
                "apache",
                "nginx"
              ],
              "em": [
                "apache_httpd",
                "nginx_open"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.8",
              "n": "Connection Pool Saturation",
              "c": "high",
              "f": "advanced",
              "v": "Saturated worker threads/processes cause request queuing and timeouts. Monitoring enables proactive scaling.",
              "t": "Scripted input (Apache `server-status`, NGINX `stub_status`)",
              "d": "Apache mod_status, NGINX stub_status, IIS performance counters",
              "q": "index=web sourcetype=\"apache:server_status\"\n| eval pct_busy=round(BusyWorkers/(BusyWorkers+IdleWorkers)*100,1)\n| timechart span=5m avg(pct_busy) as worker_pct by host\n| where worker_pct > 80",
              "m": "Enable Apache `mod_status` or NGINX `stub_status` module. Poll via scripted input every minute. Alert when busy workers exceed 80% of total. Correlate with request rate to distinguish capacity limits from slow backends.",
              "z": "Gauge (% workers busy), Line chart (worker utilization over time), Table (hosts at capacity).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Scripted input (Apache `server-status`, NGINX `stub_status`).\n• Ensure the following data sources are available: Apache mod_status, NGINX stub_status, IIS performance counters.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Apache `mod_status` or NGINX `stub_status` module. Poll via scripted input every minute. Alert when busy workers exceed 80% of total. Correlate with request rate to distinguish capacity limits from slow backends.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"apache:server_status\"\n| eval pct_busy=round(BusyWorkers/(BusyWorkers+IdleWorkers)*100,1)\n| timechart span=5m avg(pct_busy) as worker_pct by host\n| where worker_pct > 80\n```\n\nUnderstanding this SPL\n\n**Connection Pool Saturation** — Saturated worker threads/processes cause request queuing and timeouts. Monitoring enables proactive scaling.\n\nDocumented **Data sources**: Apache mod_status, NGINX stub_status, IIS performance counters. **App/TA** (typical add-on context): Scripted input (Apache `server-status`, NGINX `stub_status`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: apache:server_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"apache:server_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pct_busy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where worker_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (% workers busy), Line chart (worker utilization over time), Table (hosts at capacity).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that saturated worker threads/processes cause request queuing and timeouts.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "apache",
                "nginx"
              ],
              "em": [
                "apache_httpd",
                "nginx_open"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.9",
              "n": "Slow POST Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Slow POST requests often indicate application-level performance issues (large form submissions, file uploads, database writes) distinct from slow GETs.",
              "t": "`Splunk_TA_apache`, `TA-nginx`",
              "d": "Access logs with response time",
              "q": "index=web sourcetype=\"access_combined\" method=POST\n| where response_time > 5000\n| stats count, avg(response_time) as avg_rt by uri_path\n| sort -avg_rt",
              "m": "Filter access logs for POST requests with high response times. Track by endpoint to identify specific bottlenecks. Correlate with backend database/API latency. Report top slow POST endpoints weekly.",
              "z": "Table (slow POST endpoints), Bar chart (avg response time by URI), Line chart (slow POST count trend).",
              "kfp": "Response time spikes during JVM garbage collection, connection pool exhaustion, or backend dependency degradation. Load tests, campaigns, and cold caches also move percentiles.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_apache`, `TA-nginx`.\n• Ensure the following data sources are available: Access logs with response time.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFilter access logs for POST requests with high response times. Track by endpoint to identify specific bottlenecks. Correlate with backend database/API latency. Report top slow POST endpoints weekly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\" method=POST\n| where response_time > 5000\n| stats count, avg(response_time) as avg_rt by uri_path\n| sort -avg_rt\n```\n\nUnderstanding this SPL\n\n**Slow POST Detection** — Slow POST requests often indicate application-level performance issues (large form submissions, file uploads, database writes) distinct from slow GETs.\n\nDocumented **Data sources**: Access logs with response time. **App/TA** (typical add-on context): `Splunk_TA_apache`, `TA-nginx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where response_time > 5000` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by uri_path** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` perc95(Web.duration) as p95_ms count\n  from datamodel=Web.Web\n  where Web.http_method=\"POST\"\n  by Web.uri_path span=5m\n| where p95_ms > 5000\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Slow POST Detection** — Slow POST requests often indicate application-level performance issues (large form submissions, file uploads, database writes) distinct from slow GETs.\n\nDocumented **Data sources**: Access logs with response time. **App/TA** (typical add-on context): `Splunk_TA_apache`, `TA-nginx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Filters the current rows with `where p95_ms > 5000` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (slow POST endpoints), Bar chart (avg response time by URI), Line chart (slow POST count trend).",
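              "_example": "A hedged ratio sketch complementing the documented search: instead of absolute counts it reports what share of POSTs to each endpoint are slow, which is more stable across traffic swings. The 5000 ms and 5% thresholds are assumptions to tune.\n\n```spl\nindex=web sourcetype=\"access_combined\" method=POST\n| eval slow=if(response_time>5000, 1, 0)\n| stats sum(slow) as slow_posts count as total_posts avg(response_time) as avg_rt by uri_path\n| eval slow_pct=round(slow_posts/total_posts*100,1)\n| where slow_pct > 5\n| sort -slow_pct\n```",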
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that slow POST requests often indicate application-level performance issues (large form submissions, file uploads, database writes) distinct from slow GETs.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` perc95(Web.duration) as p95_ms count\n  from datamodel=Web.Web\n  where Web.http_method=\"POST\"\n  by Web.uri_path span=5m\n| where p95_ms > 5000",
              "e": [
                "apache",
                "nginx"
              ],
              "em": [
                "apache_httpd",
                "nginx_open"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.10",
              "n": "Configuration Reload Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Configuration changes are a common cause of outages. Tracking reloads enables rapid correlation with incidents.",
              "t": "`Splunk_TA_apache`, `TA-nginx`, process monitoring",
              "d": "Web server error/event logs",
              "q": "index=web sourcetype=\"nginx:error\" OR sourcetype=\"apache:error\"\n| search \"signal\" OR \"reload\" OR \"restarting\" OR \"resuming normal operations\"\n| table _time, host, message",
              "m": "Forward error/event logs from web servers. Parse reload/restart messages. Correlate with deployment events and change management tickets. Alert on unexpected restarts outside maintenance windows.",
              "z": "Timeline (reload events), Table (reload history with correlation), Single value (reloads this week).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_apache`, `TA-nginx`, process monitoring.\n• Ensure the following data sources are available: Web server error/event logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward error/event logs from web servers. Parse reload/restart messages. Correlate with deployment events and change management tickets. Alert on unexpected restarts outside maintenance windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"nginx:error\" OR sourcetype=\"apache:error\"\n| search \"signal\" OR \"reload\" OR \"restarting\" OR \"resuming normal operations\"\n| table _time, host, message\n```\n\nUnderstanding this SPL\n\n**Configuration Reload Tracking** — Configuration changes are a common cause of outages. Tracking reloads enables rapid correlation with incidents.\n\nDocumented **Data sources**: Web server error/event logs. **App/TA** (typical add-on context): `Splunk_TA_apache`, `TA-nginx`, process monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: nginx:error, apache:error. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"nginx:error\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Configuration Reload Tracking**): table _time, host, message\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (reload events), Table (reload history with correlation), Single value (reloads this week).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that configuration changes are a common cause of outages.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "apache",
                "nginx"
              ],
              "em": [
                "apache_httpd",
                "nginx_open"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.11",
              "n": "NGINX Upstream Response Errors",
              "c": "high",
              "f": "beginner",
              "v": "Counts upstream HTTP 5xx and connect/timeout errors from NGINX access/error logs. Isolates reverse-proxy vs origin issues faster than aggregate 5xx alone.",
              "t": "`TA-nginx`",
              "d": "`access_combined` with `upstream_status`, `nginx:error` upstream messages",
              "q": "index=web sourcetype=\"nginx:access\" OR sourcetype=\"access_combined\"\n| eval up_err=if(upstream_status>=500 OR status=502 OR status=504,1,0)\n| stats sum(up_err) as upstream_errors, count as total by host, upstream_addr\n| eval err_rate=round(upstream_errors/total*100,2)\n| where err_rate > 2",
              "m": "Enable `upstream_status` and `upstream_addr` in log_format. Alert on upstream error rate >2% for 5m. Correlate with backend pool health.",
              "z": "Line chart (upstream error rate), Table (upstream_addr, errors), Bar chart (5xx by upstream).",
              "kfp": "5xx errors spike during deployment rollouts, restart sequences, or backend service maintenance. Health-check noise and synthetic traffic can add false volume if not filtered.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nginx`.\n• Ensure the following data sources are available: `access_combined` with `upstream_status`, `nginx:error` upstream messages.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable `upstream_status` and `upstream_addr` in log_format. Alert on upstream error rate >2% for 5m. Correlate with backend pool health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"nginx:access\" OR sourcetype=\"access_combined\"\n| eval up_err=if(upstream_status>=500 OR status=502 OR status=504,1,0)\n| stats sum(up_err) as upstream_errors, count as total by host, upstream_addr\n| eval err_rate=round(upstream_errors/total*100,2)\n| where err_rate > 2\n```\n\nUnderstanding this SPL\n\n**NGINX Upstream Response Errors** — Counts upstream HTTP 5xx and connect/timeout errors from NGINX access/error logs. Isolates reverse-proxy vs origin issues faster than aggregate 5xx alone.\n\nDocumented **Data sources**: `access_combined` with `upstream_status`, `nginx:error` upstream messages. **App/TA** (typical add-on context): `TA-nginx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: nginx:access, access_combined. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"nginx:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **up_err** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, upstream_addr** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_rate > 2` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NGINX Upstream Response Errors** — Counts upstream HTTP 5xx and connect/timeout errors from NGINX access/error logs. Isolates reverse-proxy vs origin issues faster than aggregate 5xx alone.\n\nDocumented **Data sources**: `access_combined` with `upstream_status`, `nginx:error` upstream messages. **App/TA** (typical add-on context): `TA-nginx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (upstream error rate), Table (upstream_addr, errors), Bar chart (5xx by upstream).",
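              "_example": "A hedged per-upstream trend sketch built on the documented search: it pivots the error rate into one series per `upstream_addr` for charting, which makes a single failing backend stand out against healthy peers. Field names assume the log_format changes from Step 1.\n\n```spl\nindex=web sourcetype=\"nginx:access\" OR sourcetype=\"access_combined\"\n| bin _time span=5m\n| stats sum(eval(if(upstream_status>=500 OR status=502 OR status=504, 1, 0))) as up_err count as total by _time, upstream_addr\n| eval err_rate=round(up_err/total*100,2)\n| xyseries _time upstream_addr err_rate\n```",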
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that counts upstream HTTP 5xx and connect/timeout errors from NGINX access/error logs.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.dest | sort - count",
              "e": [
                "nginx"
              ],
              "em": [
                "nginx_open"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.12",
              "n": "Apache mod_security WAF Blocks",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks ModSecurity rule IDs and scores for blocked requests. Supports tuning false positives and detecting attack campaigns.",
              "t": "`Splunk_TA_apache`, modsec audit log",
              "d": "`modsec_audit.log`, `SecRule` action deny entries",
              "q": "index=web sourcetype=\"apache:modsec\"\n| search action=\"denied\" OR intercept_phase=\"phase:2\"\n| stats count by rule_id, uri_path, src\n| sort -count\n| head 30",
              "m": "Ingest JSON or native ModSecurity audit format. Extract `rule_id`, `msg`. Alert on spike in unique IPs or new rule_id firing at high volume.",
              "z": "Table (rule, URI, count), Bar chart (blocks by rule), Map (src).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_apache`, modsec audit log.\n• Ensure the following data sources are available: `modsec_audit.log`, `SecRule` action deny entries.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest JSON or native ModSecurity audit format. Extract `rule_id`, `msg`. Alert on spike in unique IPs or new rule_id firing at high volume.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"apache:modsec\"\n| search action=\"denied\" OR intercept_phase=\"phase:2\"\n| stats count by rule_id, uri_path, src\n| sort -count\n| head 30\n```\n\nUnderstanding this SPL\n\n**Apache mod_security WAF Blocks** — Tracks ModSecurity rule IDs and scores for blocked requests. Supports tuning false positives and detecting attack campaigns.\n\nDocumented **Data sources**: `modsec_audit.log`, `SecRule` action deny entries. **App/TA** (typical add-on context): `Splunk_TA_apache`, modsec audit log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: apache:modsec. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"apache:modsec\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by rule_id, uri_path, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Apache mod_security WAF Blocks** — Tracks ModSecurity rule IDs and scores for blocked requests. Supports tuning false positives and detecting attack campaigns.\n\nDocumented **Data sources**: `modsec_audit.log`, `SecRule` action deny entries. **App/TA** (typical add-on context): `Splunk_TA_apache`, modsec audit log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rule, URI, count), Bar chart (blocks by rule), Map (src).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track ModSecurity rule and scores for blocked requests.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.src | sort - count",
              "e": [
                "apache"
              ],
              "em": [
                "apache_httpd"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.13",
              "n": "IIS Worker Process Recycling",
              "c": "medium",
              "f": "beginner",
              "v": "Frequent `w3wp` recycles cause session loss and latency spikes. Event Log IDs 5074, 5002, 1011 indicate config, memory, or crash-driven recycles.",
              "t": "`Splunk_TA_windows`",
              "d": "System/Application Event Log (WAS, W3SVC)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:System\" SourceName=WAS EventCode=5074\n| bucket _time span=5m\n| stats count as recycles by ComputerName, AppPoolName, _time\n| where recycles > 3",
              "m": "Enable WAS/W3SVC auditing. Alert when recycles per app pool exceed baseline. Correlate with private bytes and GC from perfmon.",
              "z": "Timeline (recycle events), Table (app pool, recycle count), Line chart (recycles per hour).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: System/Application Event Log (WAS, W3SVC).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable WAS/W3SVC auditing. Alert when recycles per app pool exceed baseline. Correlate with private bytes and GC from perfmon.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:System\" SourceName=WAS EventCode=5074\n| bucket _time span=5m\n| stats count as recycles by ComputerName, AppPoolName, _time\n| where recycles > 3\n```\n\nUnderstanding this SPL\n\n**IIS Worker Process Recycling** — Frequent `w3wp` recycles cause session loss and latency spikes. Event Log IDs 5074, 5002, 1011 indicate config, memory, or crash-driven recycles.\n\nDocumented **Data sources**: System/Application Event Log (WAS, W3SVC). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:System. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by ComputerName, AppPoolName, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where recycles > 3` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IIS Worker Process Recycling** — Frequent `w3wp` recycles cause session loss and latency spikes. Event Log IDs 5074, 5002, 1011 indicate config, memory, or crash-driven recycles.\n\nDocumented **Data sources**: System/Application Event Log (WAS, W3SVC). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (recycle events), Table (app pool, recycle count), Line chart (recycles per hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that frequent recycles cause session loss and latency spikes.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.14",
              "n": "SSL Certificate Expiry Countdown",
              "c": "critical",
              "f": "beginner",
              "v": "Days-to-expiry dashboard for all TLS endpoints monitored by cert checks. Complements UC-8.1.5 with trend and earliest-expiry focus.",
              "t": "Scripted cert check, `openssl` input",
              "d": "`cert_check` with `cert_expiry_epoch`, `cn`",
              "q": "index=certificates sourcetype=\"cert_check\"\n| eval days_left=round((cert_expiry_epoch-now())/86400,0)\n| stats min(days_left) as soonest by host, port\n| where soonest < 45\n| sort soonest",
              "m": "Daily collection. Alert tiers at 45/30/14/7 days. Include chain validation failures as severity 1.",
              "z": "Table (host, port, days_left), Single value (minimum days_left fleet-wide), Column chart (certs by expiry bucket).",
              "kfp": "Short-lived or staging certificates, planned rotations, and CA delays can look like an imminent outage. Certificates approaching expiry should trigger renewal; alerts before the renewal date are expected.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Scripted cert check, `openssl` input.\n• Ensure the following data sources are available: `cert_check` with `cert_expiry_epoch`, `cn`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDaily collection. Alert tiers at 45/30/14/7 days. Include chain validation failures as severity 1.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=certificates sourcetype=\"cert_check\"\n| eval days_left=round((cert_expiry_epoch-now())/86400,0)\n| stats min(days_left) as soonest by host, port\n| where soonest < 45\n| sort soonest\n```\n\nUnderstanding this SPL\n\n**SSL Certificate Expiry Countdown** — Days-to-expiry dashboard for all TLS endpoints monitored by cert checks. Complements UC-8.1.5 with trend and earliest-expiry focus.\n\nDocumented **Data sources**: `cert_check` with `cert_expiry_epoch`, `cn`. **App/TA** (typical add-on context): Scripted cert check, `openssl` input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: certificates; **sourcetype**: cert_check. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=certificates, sourcetype=\"cert_check\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where soonest < 45` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SSL Certificate Expiry Countdown** — Days-to-expiry dashboard for all TLS endpoints monitored by cert checks. Complements UC-8.1.5 with trend and earliest-expiry focus.\n\nDocumented **Data sources**: `cert_check` with `cert_expiry_epoch`, `cn`. **App/TA** (typical add-on context): Scripted cert check, `openssl` input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, port, days_left), Single value (minimum days_left fleet-wide), Column chart (certs by expiry bucket).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that days-to-expiry dashboard for all secure connections endpoints monitored by cert checks.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.dest | sort - count",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.15",
              "n": "HAProxy Backend Health State",
              "c": "critical",
              "f": "beginner",
              "v": "CSV stats `status` (UP/DOWN/MAINT) per server line with weight. Distinct from UC-8.1.6 NGINX-only upstream errors for HAProxy-native shops.",
              "t": "HAProxy stats socket scripted input",
              "d": "`haproxy:stats` `svname`, `status`, `chkfail`",
              "q": "index=haproxy sourcetype=\"haproxy:stats\" type=server\n| where status!=\"UP\" OR chkfail > 0\n| stats latest(status) as status, sum(chkfail) as fails by pxname, svname\n| sort fails",
              "m": "Poll stats every 30s. Alert on any backend DOWN not in maintenance window. Track flapping (status changes >3 in 10m).",
              "z": "Status grid (backend × UP/DOWN), Table (DOWN servers), Timeline (state changes).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HAProxy stats socket scripted input.\n• Ensure the following data sources are available: `haproxy:stats` `svname`, `status`, `chkfail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll stats every 30s. Alert on any backend DOWN not in maintenance window. Track flapping (status changes >3 in 10m).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=haproxy sourcetype=\"haproxy:stats\" type=server\n| where status!=\"UP\" OR chkfail > 0\n| stats latest(status) as status, sum(chkfail) as fails by pxname, svname\n| sort fails\n```\n\nUnderstanding this SPL\n\n**HAProxy Backend Health State** — CSV stats `status` (UP/DOWN/MAINT) per server line with weight. Distinct from UC-8.1.6 NGINX-only upstream errors for HAProxy-native shops.\n\nDocumented **Data sources**: `haproxy:stats` `svname`, `status`, `chkfail`. **App/TA** (typical add-on context): HAProxy stats socket scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: haproxy; **sourcetype**: haproxy:stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=haproxy, sourcetype=\"haproxy:stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status!=\"UP\" OR chkfail > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by pxname, svname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(Web.status) as agg_value from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**HAProxy Backend Health State** — CSV stats `status` (UP/DOWN/MAINT) per server line with weight. Distinct from UC-8.1.6 NGINX-only upstream errors for HAProxy-native shops.\n\nDocumented **Data sources**: `haproxy:stats` `svname`, `status`, `chkfail`. **App/TA** (typical add-on context): HAProxy stats socket scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (backend × UP/DOWN), Table (DOWN servers), Timeline (state changes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that cSV stats (UP/DOWN/MAINT) per server line with weight.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t sum(Web.status) as agg_value from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - agg_value",
              "e": [
                "haproxy"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.16",
              "n": "Web Server Thread Pool Exhaustion",
              "c": "high",
              "f": "intermediate",
              "v": "IIS `QueueFull`, NGINX worker saturation, or Apache `BusyWorkers` at limit causes queueing. Unified thresholding across stacks.",
              "t": "`TA-nginx` stub_status, `Splunk_TA_windows` perfmon, Apache mod_status",
              "d": "`nginx:stub_status`, `Perfmon:W3SVC_W3WP`, `apache:server_status`",
              "q": "index=web (sourcetype=\"nginx:stub_status\" OR sourcetype=\"apache:server_status\" OR sourcetype=\"Perfmon:W3SVC_W3WP\")\n| eval util_pct=coalesce(worker_util_pct, pct_busy, thread_pool_queue_length/max_threads*100)\n| where util_pct > 85 OR queue_current > 50\n| timechart span=5m max(util_pct) as util by host, sourcetype",
              "m": "Normalize field names at ingest. Alert when util >85% for 10m or IIS request queue length sustained high. Correlate with CPU and backend latency.",
              "z": "Gauge (util %), Line chart (util and queue), Table (hosts over threshold).",
              "kfp": "Thread or worker pools fill during traffic spikes, slow upstream services, or DoS-style load. A surge alone can be benign if backends recover.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nginx` stub_status, `Splunk_TA_windows` perfmon, Apache mod_status.\n• Ensure the following data sources are available: `nginx:stub_status`, `Perfmon:W3SVC_W3WP`, `apache:server_status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names at ingest. Alert when util >85% for 10m or IIS request queue length sustained high. Correlate with CPU and backend latency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web (sourcetype=\"nginx:stub_status\" OR sourcetype=\"apache:server_status\" OR sourcetype=\"Perfmon:W3SVC_W3WP\")\n| eval util_pct=coalesce(worker_util_pct, pct_busy, thread_pool_queue_length/max_threads*100)\n| where util_pct > 85 OR queue_current > 50\n| timechart span=5m max(util_pct) as util by host, sourcetype\n```\n\nUnderstanding this SPL\n\n**Web Server Thread Pool Exhaustion** — IIS `QueueFull`, NGINX worker saturation, or Apache `BusyWorkers` at limit causes queueing. Unified thresholding across stacks.\n\nDocumented **Data sources**: `nginx:stub_status`, `Perfmon:W3SVC_W3WP`, `apache:server_status`. **App/TA** (typical add-on context): `TA-nginx` stub_status, `Splunk_TA_windows` perfmon, Apache mod_status. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: nginx:stub_status, apache:server_status, Perfmon:W3SVC_W3WP. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"nginx:stub_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where util_pct > 85 OR queue_current > 50` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host, sourcetype** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.dest span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Web Server Thread Pool Exhaustion** — IIS `QueueFull`, NGINX worker saturation, or Apache `BusyWorkers` at limit causes queueing. Unified thresholding across stacks.\n\nDocumented **Data sources**: `nginx:stub_status`, `Perfmon:W3SVC_W3WP`, `apache:server_status`. **App/TA** (typical add-on context): `TA-nginx` stub_status, `Splunk_TA_windows` perfmon, Apache mod_status. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (util %), Line chart (util and queue), Table (hosts over threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that iIS, NGINX worker saturation, or Apache at limit causes queueing.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.dest span=5m | sort - count",
              "e": [
                "apache",
                "nginx",
                "windows"
              ],
              "em": [
                "apache_httpd",
                "nginx_open"
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.17",
              "n": "IIS Web Server Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "IIS access logs provide visibility into web application health — error rates, response times, and request volumes. Critical for web-facing services.",
              "t": "`Splunk_TA_windows`, Splunk Add-on for Microsoft IIS",
              "d": "`sourcetype=ms:iis:auto` or `sourcetype=iis`",
              "q": "index=web sourcetype=\"ms:iis:auto\"\n| timechart span=5m count by sc_status\n| eval error_rate = round((sc_status_500 + sc_status_502 + sc_status_503) / (sc_status_200 + sc_status_500 + sc_status_502 + sc_status_503) * 100, 2)",
              "m": "Configure IIS to use W3C Extended Log Format with time-taken field. Forward IIS logs from `%SystemDrive%\\inetpub\\logs\\LogFiles`. Use the Microsoft IIS TA for field extraction. Create alerts on 5xx error rate >5%.",
              "z": "Line chart (requests by status code), Single value (error rate %), Table of top error URIs.",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, Splunk Add-on for Microsoft IIS.\n• Ensure the following data sources are available: `sourcetype=ms:iis:auto` or `sourcetype=iis`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure IIS to use W3C Extended Log Format with time-taken field. Forward IIS logs from `%SystemDrive%\\inetpub\\logs\\LogFiles`. Use the Microsoft IIS TA for field extraction. Create alerts on 5xx error rate >5%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"ms:iis:auto\"\n| timechart span=5m count by sc_status\n| eval error_rate = round((sc_status_500 + sc_status_502 + sc_status_503) / (sc_status_200 + sc_status_500 + sc_status_502 + sc_status_503) * 100, 2)\n```\n\nUnderstanding this SPL\n\n**IIS Web Server Monitoring** — IIS access logs provide visibility into web application health — error rates, response times, and request volumes. Critical for web-facing services.\n\nDocumented **Data sources**: `sourcetype=ms:iis:auto` or `sourcetype=iis`. **App/TA** (typical add-on context): `Splunk_TA_windows`, Splunk Add-on for Microsoft IIS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: ms:iis:auto. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"ms:iis:auto\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by sc_status** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (requests by status code), Single value (error rate %), Table of top error URIs.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that iIS access logs provide visibility into web application health — error rates, response times, and request volumes.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "iis",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.1.18",
              "n": "IIS Application Pool Crashes & Recycling",
              "c": "high",
              "f": "intermediate",
              "v": "Application pool crashes cause HTTP 503 errors and service outages. Frequent recycling indicates memory leaks or configuration issues in web applications.",
              "t": "`Splunk_TA_windows`, Splunk Add-on for Microsoft IIS",
              "d": "`sourcetype=WinEventLog:System` (Source=WAS, EventCode 5002, 5010, 5011, 5012, 5013)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:System\" Source=\"WAS\"\n  EventCode IN (5002, 5010, 5011, 5012, 5013)\n| eval event=case(EventCode=5002,\"AppPool crashed\",EventCode=5010,\"Process termination timeout\",EventCode=5011,\"AppPool auto-disabled\",EventCode=5012,\"AppPool rapid failures\",EventCode=5013,\"AppPool timeout\")\n| table _time, host, event, AppPoolName\n| sort -_time",
              "m": "WAS (Windows Activation Service) events log automatically on IIS servers. EventCode 5002=worker process crashed, 5011=pool auto-disabled due to rapid failures (5 in 5 minutes default), 5012=rapid failure protection triggered. Alert on any 5011 event (pool disabled = site down). Track recycling frequency per pool. Correlate with WER EventCode 1000 for crash details including the faulting module.",
              "z": "Table (app pool events), Timechart (recycling frequency), Status grid (pool × health), Single value (disabled pools — target: 0).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, Splunk Add-on for Microsoft IIS.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:System` (Source=WAS, EventCode 5002, 5010, 5011, 5012, 5013).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nWAS (Windows Activation Service) events log automatically on IIS servers. EventCode 5002=worker process crashed, 5011=pool auto-disabled due to rapid failures (5 in 5 minutes default), 5012=rapid failure protection triggered. Alert on any 5011 event (pool disabled = site down). Track recycling frequency per pool. Correlate with WER EventCode 1000 for crash details including the faulting module.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:System\" Source=\"WAS\"\n  EventCode IN (5002, 5010, 5011, 5012, 5013)\n| eval event=case(EventCode=5002,\"AppPool crashed\",EventCode=5010,\"Process termination timeout\",EventCode=5011,\"AppPool auto-disabled\",EventCode=5012,\"AppPool rapid failures\",EventCode=5013,\"AppPool timeout\")\n| table _time, host, event, AppPoolName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**IIS Application Pool Crashes & Recycling** — Application pool crashes cause HTTP 503 errors and service outages. Frequent recycling indicates memory leaks or configuration issues in web applications.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (Source=WAS, EventCode 5002, 5010, 5011, 5012, 5013). **App/TA** (typical add-on context): `Splunk_TA_windows`, Splunk Add-on for Microsoft IIS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:System. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **event** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **IIS Application Pool Crashes & Recycling**): table _time, host, event, AppPoolName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.object, \"(?i)W3SVC|WAS|AppPool\")\n  by All_Changes.dest All_Changes.user span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IIS Application Pool Crashes & Recycling** — Application pool crashes cause HTTP 503 errors and service outages. Frequent recycling indicates memory leaks or configuration issues in web applications.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:System` (Source=WAS, EventCode 5002, 5010, 5011, 5012, 5013). **App/TA** (typical add-on context): `Splunk_TA_windows`, Splunk Add-on for Microsoft IIS. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (app pool events), Timechart (recycling frequency), Status grid (pool × health), Single value (disabled pools — target: 0).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that application pool crashes cause HTTP 503 errors and service outages.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.object, \"(?i)W3SVC|WAS|AppPool\")\n  by All_Changes.dest All_Changes.user span=1h\n| sort -count",
              "e": [
                "iis",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 40.0,
          "qd": {
            "gold": 1,
            "silver": 1,
            "bronze": 16,
            "none": 0
          }
        },
        {
          "i": "8.2",
          "n": "Application Servers & Runtimes",
          "u": [
            {
              "i": "8.2.1",
              "n": "JVM Heap Utilization",
              "c": "critical",
              "f": "beginner",
              "v": "JVM heap exhaustion causes OutOfMemoryError, crashing the application. Monitoring enables tuning before failures occur.",
              "t": "`TA-jmx`, OpenTelemetry",
              "d": "JMX MBeans (`java.lang:type=Memory`), Prometheus JMX exporter",
              "q": "index=jmx sourcetype=\"jmx:memory\"\n| eval heap_pct=round(HeapMemoryUsage.used/HeapMemoryUsage.max*100,1)\n| timechart span=5m avg(heap_pct) as heap_usage by host\n| where heap_usage > 85",
              "m": "Deploy JMX TA on a heavy forwarder. Configure JMX connection to each app server. Poll memory MBeans every minute. Alert at 85% heap usage. Track heap growth pattern to detect memory leaks (sawtooth with increasing floor).",
              "z": "Line chart (heap usage over time), Gauge (current heap %), Area chart (heap used vs max).",
              "kfp": "OOM during memory leak hunts (with -XX:+HeapDumpOnOutOfMemoryError), large batch jobs, or after misconfigured heap sizes. We check deploy and job windows before calling a new defect.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-jmx`, OpenTelemetry.\n• Ensure the following data sources are available: JMX MBeans (`java.lang:type=Memory`), Prometheus JMX exporter.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy JMX TA on a heavy forwarder. Configure JMX connection to each app server. Poll memory MBeans every minute. Alert at 85% heap usage. Track heap growth pattern to detect memory leaks (sawtooth with increasing floor).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=jmx sourcetype=\"jmx:memory\"\n| eval heap_pct=round(HeapMemoryUsage.used/HeapMemoryUsage.max*100,1)\n| timechart span=5m avg(heap_pct) as heap_usage by host\n| where heap_usage > 85\n```\n\nUnderstanding this SPL\n\n**JVM Heap Utilization** — JVM heap exhaustion causes OutOfMemoryError, crashing the application. Monitoring enables tuning before failures occur.\n\nDocumented **Data sources**: JMX MBeans (`java.lang:type=Memory`), Prometheus JMX exporter. **App/TA** (typical add-on context): `TA-jmx`, OpenTelemetry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: jmx; **sourcetype**: jmx:memory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=jmx, sourcetype=\"jmx:memory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **heap_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where heap_usage > 85` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (heap usage over time), Gauge (current heap %), Area chart (heap used vs max).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that JVM heap exhaustion causes OutOfMemoryError, crashing the application.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "8.2.2",
              "n": "Garbage Collection Impact",
              "c": "high",
              "f": "beginner",
              "v": "Frequent or long GC pauses cause application latency spikes and request timeouts. Monitoring guides JVM tuning.",
              "t": "GC log parsing, `TA-jmx`",
              "d": "JVM GC logs, JMX GarbageCollector MBeans",
              "q": "index=jvm sourcetype=\"jvm:gc\"\n| where gc_pause_ms > 200\n| timechart span=15m count as gc_events, sum(gc_pause_ms) as total_pause_ms by host\n| eval pause_pct=round(total_pause_ms/900000*100,2)",
              "m": "Enable GC logging on all JVM-based app servers (`-Xlog:gc*` for Java 11+). Forward logs via UF. Parse pause duration, type, and cause. Alert on pauses >200ms or total pause time >5% of wall clock time.",
              "z": "Line chart (GC pause duration), Histogram (pause distribution), Single value (total pause time per hour).",
              "kfp": "OOM during memory leak hunts (with -XX:+HeapDumpOnOutOfMemoryError), large batch jobs, or after misconfigured heap sizes. We check deploy and job windows before calling a new defect.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GC log parsing, `TA-jmx`.\n• Ensure the following data sources are available: JVM GC logs, JMX GarbageCollector MBeans.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable GC logging on all JVM-based app servers (`-Xlog:gc*` for Java 11+). Forward logs via UF. Parse pause duration, type, and cause. Alert on pauses >200ms or total pause time >5% of wall clock time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=jvm sourcetype=\"jvm:gc\"\n| where gc_pause_ms > 200\n| timechart span=15m count as gc_events, sum(gc_pause_ms) as total_pause_ms by host\n| eval pause_pct=round(total_pause_ms/900000*100,2)\n```\n\nUnderstanding this SPL\n\n**Garbage Collection Impact** — Frequent or long GC pauses cause application latency spikes and request timeouts. Monitoring guides JVM tuning.\n\nDocumented **Data sources**: JVM GC logs, JMX GarbageCollector MBeans. **App/TA** (typical add-on context): GC log parsing, `TA-jmx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: jvm; **sourcetype**: jvm:gc. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=jvm, sourcetype=\"jvm:gc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where gc_pause_ms > 200` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **pause_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (GC pause duration), Histogram (pause distribution), Single value (total pause time per hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that frequent or long GC pauses cause application latency spikes and request timeouts.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.3",
              "n": "Thread Pool Exhaustion",
              "c": "critical",
              "f": "intermediate",
              "v": "Exhausted thread pools cause request rejection and application unresponsiveness. Detection prevents complete service failure.",
              "t": "`TA-jmx`, application metrics",
              "d": "JMX thread MBeans, Tomcat Connector metrics, application metrics endpoints",
              "q": "index=jmx sourcetype=\"jmx:threading\"\n| eval pct_used=round(currentThreadsBusy/maxThreads*100,1)\n| timechart span=5m max(pct_used) as thread_pct by host\n| where thread_pct > 80",
              "m": "Poll thread pool metrics via JMX (Tomcat: Connector MBeans, WildFly: undertow subsystem). Alert at 80% thread pool utilization. Correlate with request rate and response time to distinguish traffic spikes from slow backends.",
              "z": "Gauge (% threads busy), Line chart (thread utilization over time), Table (servers approaching capacity).",
              "kfp": "Thread or worker pools fill during traffic spikes, slow upstream services, or DoS-style load. A surge alone can be benign if backends recover.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-jmx`, application metrics.\n• Ensure the following data sources are available: JMX thread MBeans, Tomcat Connector metrics, application metrics endpoints.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll thread pool metrics via JMX (Tomcat: Connector MBeans, WildFly: undertow subsystem). Alert at 80% thread pool utilization. Correlate with request rate and response time to distinguish traffic spikes from slow backends.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=jmx sourcetype=\"jmx:threading\"\n| eval pct_used=round(currentThreadsBusy/maxThreads*100,1)\n| timechart span=5m max(pct_used) as thread_pct by host\n| where thread_pct > 80\n```\n\nUnderstanding this SPL\n\n**Thread Pool Exhaustion** — Exhausted thread pools cause request rejection and application unresponsiveness. Detection prevents complete service failure.\n\nDocumented **Data sources**: JMX thread MBeans, Tomcat Connector metrics, application metrics endpoints. **App/TA** (typical add-on context): `TA-jmx`, application metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: jmx; **sourcetype**: jmx:threading. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=jmx, sourcetype=\"jmx:threading\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pct_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where thread_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (% threads busy), Line chart (thread utilization over time), Table (servers approaching capacity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that exhausted thread pools cause request rejection and application unresponsiveness.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.4",
              "n": "Application Error Rate",
              "c": "critical",
              "f": "intermediate",
              "v": "Application exceptions indicate bugs, integration failures, or environmental issues. Tracking error rate by type guides debugging priority.",
              "t": "Custom log input, application framework logging",
              "d": "Application log files (log4j, logback, NLog, Serilog)",
              "q": "index=application sourcetype=\"log4j\" log_level=ERROR\n| timechart span=5m count as error_count by host\n| predict error_count as predicted",
              "m": "Forward application logs via UF. Ensure structured logging (JSON preferred) for reliable field extraction. Classify errors by type/exception. Alert on error rate spikes above baseline. Create error type breakdown for developer triage.",
              "z": "Line chart (error rate with baseline), Table (top error types), Bar chart (errors by component).",
              "kfp": "5xx errors spike during deployment rollouts, restart sequences, or backend service maintenance. If the UC mixes classes, split 4xx and 5xx for triage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom log input, application framework logging.\n• Ensure the following data sources are available: Application log files (log4j, logback, NLog, Serilog).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward application logs via UF. Ensure structured logging (JSON preferred) for reliable field extraction. Classify errors by type/exception. Alert on error rate spikes above baseline. Create error type breakdown for developer triage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=application sourcetype=\"log4j\" log_level=ERROR\n| timechart span=5m count as error_count by host\n| predict error_count as predicted\n```\n\nUnderstanding this SPL\n\n**Application Error Rate** — Application exceptions indicate bugs, integration failures, or environmental issues. Tracking error rate by type guides debugging priority.\n\nDocumented **Data sources**: Application log files (log4j, logback, NLog, Serilog). **App/TA** (typical add-on context): Custom log input, application framework logging. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: application; **sourcetype**: log4j. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=application, sourcetype=\"log4j\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Application Error Rate**): predict error_count as predicted\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (error rate with baseline), Table (top error types), Bar chart (errors by component).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that application exceptions indicate bugs, integration failures, or environmental issues.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.5",
              "n": "Deployment Tracking",
              "c": "high",
              "f": "beginner",
              "v": "Correlating deployments with performance changes is the fastest way to identify deployment-caused regressions. Essential for change management.",
              "t": "Webhook input, CI/CD integration",
              "d": "Deployment tool webhooks (Jenkins, GitHub Actions, ArgoCD), application version endpoints",
              "q": "index=deployments sourcetype=\"deployment_event\"\n| table _time, application, version, environment, deployer, status\n| sort -_time",
              "m": "Configure CI/CD pipeline to send deployment events to Splunk HEC (JSON payload with app, version, environment, deployer). Annotate timecharts with deployment markers. Correlate deployment times with error rate and latency changes.",
              "z": "Timeline (deployment events overlaid on performance charts), Table (recent deployments), Annotation layer on dashboards.",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Webhook input, CI/CD integration.\n• Ensure the following data sources are available: Deployment tool webhooks (Jenkins, GitHub Actions, ArgoCD), application version endpoints.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure CI/CD pipeline to send deployment events to Splunk HEC (JSON payload with app, version, environment, deployer). Annotate timecharts with deployment markers. Correlate deployment times with error rate and latency changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=deployments sourcetype=\"deployment_event\"\n| table _time, application, version, environment, deployer, status\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Deployment Tracking** — Correlating deployments with performance changes is the fastest way to identify deployment-caused regressions. Essential for change management.\n\nDocumented **Data sources**: Deployment tool webhooks (Jenkins, GitHub Actions, ArgoCD), application version endpoints. **App/TA** (typical add-on context): Webhook input, CI/CD integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: deployments; **sourcetype**: deployment_event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=deployments, sourcetype=\"deployment_event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Deployment Tracking**): table _time, application, version, environment, deployer, status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (deployment events overlaid on performance charts), Table (recent deployments), Annotation layer on dashboards.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that correlating deployments with performance changes is the fastest way to identify deployment-caused regressions.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.6",
              "n": "Connection Pool Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Exhausted JDBC/database connection pools cause application errors and cascading failures. Monitoring prevents connection starvation.",
              "t": "`TA-jmx`, application metrics",
              "d": "JMX DataSource MBeans, HikariCP metrics, application framework metrics",
              "q": "index=jmx sourcetype=\"jmx:datasource\"\n| eval pct_used=round(numActive/maxTotal*100,1)\n| timechart span=5m max(pct_used) as pool_pct by host, pool_name\n| where pool_pct > 80",
              "m": "Poll JDBC connection pool MBeans via JMX. Track active, idle, and waiting connections. Alert at 80% pool utilization. Monitor connection wait time — high wait times indicate pool exhaustion even before 100%. Correlate with database latency.",
              "z": "Gauge (% pool used), Line chart (pool utilization over time), Table (pools approaching limits).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-jmx`, application metrics.\n• Ensure the following data sources are available: JMX DataSource MBeans, HikariCP metrics, application framework metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll JDBC connection pool MBeans via JMX. Track active, idle, and waiting connections. Alert at 80% pool utilization. Monitor connection wait time — high wait times indicate pool exhaustion even before 100%. Correlate with database latency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=jmx sourcetype=\"jmx:datasource\"\n| eval pct_used=round(numActive/maxTotal*100,1)\n| timechart span=5m max(pct_used) as pool_pct by host, pool_name\n| where pool_pct > 80\n```\n\nUnderstanding this SPL\n\n**Connection Pool Monitoring** — Exhausted JDBC/database connection pools cause application errors and cascading failures. Monitoring prevents connection starvation.\n\nDocumented **Data sources**: JMX DataSource MBeans, HikariCP metrics, application framework metrics. **App/TA** (typical add-on context): `TA-jmx`, application metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: jmx; **sourcetype**: jmx:datasource. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=jmx, sourcetype=\"jmx:datasource\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pct_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host, pool_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where pool_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (% pool used), Line chart (pool utilization over time), Table (pools approaching limits).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that exhausted JDBC/database connection pools cause application errors and cascading failures.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.7",
              "n": "Session Count Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Active session counts indicate concurrent user load. Trending supports capacity planning and license management.",
              "t": "`TA-jmx`, application metrics",
              "d": "JMX session MBeans, application metrics endpoints",
              "q": "index=jmx sourcetype=\"jmx:manager\"\n| timechart span=15m max(activeSessions) as sessions by host\n| predict sessions as predicted future_timespan=7",
              "m": "Poll session manager MBeans via JMX. Track active sessions per server. Correlate with user authentication events for validation. Use `predict` for capacity forecasting.",
              "z": "Line chart (session count with prediction), Single value (current active sessions), Area chart (sessions over time).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-jmx`, application metrics.\n• Ensure the following data sources are available: JMX session MBeans, application metrics endpoints.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll session manager MBeans via JMX. Track active sessions per server. Correlate with user authentication events for validation. Use `predict` for capacity forecasting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=jmx sourcetype=\"jmx:manager\"\n| timechart span=15m max(activeSessions) as sessions by host\n| predict sessions as predicted future_timespan=7\n```\n\nUnderstanding this SPL\n\n**Session Count Trending** — Active session counts indicate concurrent user load. Trending supports capacity planning and license management.\n\nDocumented **Data sources**: JMX session MBeans, application metrics endpoints. **App/TA** (typical add-on context): `TA-jmx`, application metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: jmx; **sourcetype**: jmx:manager. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=jmx, sourcetype=\"jmx:manager\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Session Count Trending**): predict sessions as predicted future_timespan=7\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (session count with prediction), Single value (current active sessions), Area chart (sessions over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that active session counts indicate concurrent user load.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.8",
              "n": ".NET CLR Performance",
              "c": "high",
              "f": "intermediate",
              "v": "CLR performance issues (high GC, exceptions, thread starvation) directly impact .NET application performance. Monitoring guides runtime tuning.",
              "t": "`Splunk_TA_windows` (Perfmon), custom .NET metrics",
              "d": "Windows Performance Counters (.NET CLR Memory, Exceptions, Threading)",
              "q": "index=perfmon sourcetype=\"Perfmon:CLR_Memory\"\n| timechart span=5m avg(Pct_Time_in_GC) as gc_pct, avg(Gen_2_Collections) as gen2_gc by instance\n| where gc_pct > 10",
              "m": "Configure Perfmon inputs for .NET CLR counters in `inputs.conf`. Monitor % Time in GC, Gen 2 collections, exception throw rate, and thread contention rate. Alert when GC time exceeds 10% or exception rate spikes.",
              "z": "Line chart (GC % over time), Multi-metric chart (CLR counters), Table (instances with high GC).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (Perfmon), custom .NET metrics.\n• Ensure the following data sources are available: Windows Performance Counters (.NET CLR Memory, Exceptions, Threading).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon inputs for .NET CLR counters in `inputs.conf`. Monitor % Time in GC, Gen 2 collections, exception throw rate, and thread contention rate. Alert when GC time exceeds 10% or exception rate spikes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:CLR_Memory\"\n| timechart span=5m avg(Pct_Time_in_GC) as gc_pct, avg(Gen_2_Collections) as gen2_gc by instance\n| where gc_pct > 10\n```\n\nUnderstanding this SPL\n\n**.NET CLR Performance** — CLR performance issues (high GC, exceptions, thread starvation) directly impact .NET application performance. Monitoring guides runtime tuning.\n\nDocumented **Data sources**: Windows Performance Counters (.NET CLR Memory, Exceptions, Threading). **App/TA** (typical add-on context): `Splunk_TA_windows` (Perfmon), custom .NET metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:CLR_Memory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:CLR_Memory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by instance** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where gc_pct > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (GC % over time), Multi-metric chart (CLR counters), Table (instances with high GC).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that cLR performance issues (high GC, exceptions, thread starvation) directly impact.NET application performance.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.9",
              "n": "Node.js Event Loop Lag",
              "c": "high",
              "f": "beginner",
              "v": "Event loop lag indicates blocking operations that prevent Node.js from handling requests. Detection enables code-level investigation.",
              "t": "Custom metrics input, OpenTelemetry",
              "d": "Node.js process metrics (event loop lag, heap usage), Prometheus client metrics",
              "q": "index=application sourcetype=\"nodejs:metrics\"\n| timechart span=1m avg(event_loop_lag_ms) as el_lag, avg(heap_used_mb) as heap by host\n| where el_lag > 100",
              "m": "Instrument Node.js apps with `prom-client` or OpenTelemetry SDK. Export event loop lag, heap stats, and active handles/requests. Forward to Splunk via HEC or Prometheus remote write. Alert when lag exceeds 100ms.",
              "z": "Line chart (event loop lag), Dual-axis (lag + heap usage), Single value (current lag ms).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom metrics input, OpenTelemetry.\n• Ensure the following data sources are available: Node.js process metrics (event loop lag, heap usage), Prometheus client metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstrument Node.js apps with `prom-client` or OpenTelemetry SDK. Export event loop lag, heap stats, and active handles/requests. Forward to Splunk via HEC or Prometheus remote write. Alert when lag exceeds 100ms.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=application sourcetype=\"nodejs:metrics\"\n| timechart span=1m avg(event_loop_lag_ms) as el_lag, avg(heap_used_mb) as heap by host\n| where el_lag > 100\n```\n\nUnderstanding this SPL\n\n**Node.js Event Loop Lag** — Event loop lag indicates blocking operations that prevent Node.js from handling requests. Detection enables code-level investigation.\n\nDocumented **Data sources**: Node.js process metrics (event loop lag, heap usage), Prometheus client metrics. **App/TA** (typical add-on context): Custom metrics input, OpenTelemetry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: application; **sourcetype**: nodejs:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=application, sourcetype=\"nodejs:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where el_lag > 100` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (event loop lag), Dual-axis (lag + heap usage), Single value (current lag ms).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that event loop lag indicates blocking operations that prevent Node.js from handling requests.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.10",
              "n": "Class Loading Issues",
              "c": "medium",
              "f": "beginner",
              "v": "ClassNotFoundException and NoClassDefFoundError indicate deployment or dependency issues that may cause intermittent failures.",
              "t": "Application log parsing",
              "d": "Application error logs (Java stack traces)",
              "q": "index=application sourcetype=\"log4j\" log_level=ERROR\n| search \"ClassNotFoundException\" OR \"NoClassDefFoundError\" OR \"ClassCastException\"\n| rex \"(?<exception_class>ClassNotFoundException|NoClassDefFoundError|ClassCastException):\\s+(?<missing_class>\\S+)\"\n| stats count by host, exception_class, missing_class",
              "m": "Parse Java stack traces from application logs. Extract exception type and missing class name. Alert on new class loading errors (not seen before). Track frequency to distinguish transient from persistent issues.",
              "z": "Table (class loading errors with details), Bar chart (errors by type), Timeline (error occurrences).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Application log parsing.\n• Ensure the following data sources are available: Application error logs (Java stack traces).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse Java stack traces from application logs. Extract exception type and missing class name. Alert on new class loading errors (not seen before). Track frequency to distinguish transient from persistent issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=application sourcetype=\"log4j\" log_level=ERROR\n| search \"ClassNotFoundException\" OR \"NoClassDefFoundError\" OR \"ClassCastException\"\n| rex \"(?<exception_class>ClassNotFoundException|NoClassDefFoundError|ClassCastException):\\s+(?<missing_class>\\S+)\"\n| stats count by host, exception_class, missing_class\n```\n\nUnderstanding this SPL\n\n**Class Loading Issues** — ClassNotFoundException and NoClassDefFoundError indicate deployment or dependency issues that may cause intermittent failures.\n\nDocumented **Data sources**: Application error logs (Java stack traces). **App/TA** (typical add-on context): Application log parsing. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: application; **sourcetype**: log4j. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=application, sourcetype=\"log4j\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, exception_class, missing_class** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (class loading errors with details), Bar chart (errors by type), Timeline (error occurrences).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that classNotFoundException and NoClassDefFoundError indicate deployment or dependency issues that may cause intermittent failures.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.11",
              "n": "PHP-FPM Pool Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Active/idle process counts, listen queue depth, and slow request detection indicate PHP-FPM capacity and backend saturation. Exhausted pools cause 502 errors and request timeouts.",
              "t": "Custom scripted input (PHP-FPM status page)",
              "d": "PHP-FPM status page (JSON output, `/status?json`)",
              "q": "index=php sourcetype=\"phpfpm:status\"\n| eval pool_util=round(active_processes/(active_processes+idle_processes)*100,1)\n| where pool_util > 80 OR listen_queue > 5\n| timechart span=5m max(pool_util) as util_pct, max(listen_queue) as queue_depth by host, pool",
              "m": "Enable PHP-FPM status via `pm.status_path = /status` and `pm.status_listen` in pool config. Add `fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name`; protect with auth. Poll `/status?json` via scripted input every minute. Parse active_processes, idle_processes, listen_queue, max_listen_queue, slow_requests. Forward to Splunk via HEC. Alert when pool_util >80% or listen_queue >5. Track slow_requests for endpoints needing optimization.",
              "z": "Gauge (% pool used), Line chart (pool utilization and queue depth), Table (pools with high utilization), Single value (slow requests).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (PHP-FPM status page).\n• Ensure the following data sources are available: PHP-FPM status page (JSON output, `/status?json`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable PHP-FPM status via `pm.status_path = /status` and `pm.status_listen` in pool config. Add `fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name`; protect with auth. Poll `/status?json` via scripted input every minute. Parse active_processes, idle_processes, listen_queue, max_listen_queue, slow_requests. Forward to Splunk via HEC. Alert when pool_util >80% or listen_queue >5. Track slow_requests for endpoints needing optimization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=php sourcetype=\"phpfpm:status\"\n| eval pool_util=round(active_processes/(active_processes+idle_processes)*100,1)\n| where pool_util > 80 OR listen_queue > 5\n| timechart span=5m max(pool_util) as util_pct, max(listen_queue) as queue_depth by host, pool\n```\n\nUnderstanding this SPL\n\n**PHP-FPM Pool Monitoring** — Active/idle process counts, listen queue depth, and slow request detection indicate PHP-FPM capacity and backend saturation. Exhausted pools cause 502 errors and request timeouts.\n\nDocumented **Data sources**: PHP-FPM status page (JSON output, `/status?json`). **App/TA** (typical add-on context): Custom scripted input (PHP-FPM status page). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: php; **sourcetype**: phpfpm:status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=php, sourcetype=\"phpfpm:status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pool_util** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pool_util > 80 OR listen_queue > 5` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host, pool** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (% pool used), Line chart (pool utilization and queue depth), Table (pools with high utilization), Single value (slow requests).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
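              "_codeSketch": "A concrete poll sketch for the generic scripted input above; the URL and any auth are placeholders, and appending `&full` adds per-process detail when you need slow-request data.\n\n```bash\n#!/usr/bin/env bash\n# Hypothetical PHP-FPM status poll; emit the JSON for Splunk to parse.\ncurl -s 'http://127.0.0.1/status?json'\n```",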
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that active/idle process counts, listen queue depth, and slow request detection indicate PHP-FPM capacity and backend saturation.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "phpfpm"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.12",
              "n": "Tomcat JMX Thread Pool Utilization",
              "c": "high",
              "f": "advanced",
              "v": "Connector thread pool busy percentage and rejected connections indicate Tomcat capacity limits. Exhausted pools cause 503 errors and connection timeouts.",
              "t": "Custom JMX input (Jolokia, JMX modular input)",
              "d": "JMX MBeans (`Catalina:type=ThreadPool,name=\"http-nio-8080\"`)",
              "q": "index=jmx sourcetype=\"jmx:tomcat:threadpool\"\n| eval pool_pct=round(currentThreadsBusy/maxThreads*100,1)\n| where pool_pct > 80 OR connectionCount > 0\n| timechart span=5m max(pool_pct) as busy_pct, sum(connectionCount) as rejected by host, connector_name",
              "m": "Deploy Jolokia agent or Splunk JMX modular input on Tomcat. Configure polling for `Catalina:type=ThreadPool,name=\"http-nio-8080\"` (adjust connector name per instance). Extract currentThreadsBusy, maxThreads, connectionCount (rejected). Poll every 5 minutes. Alert when pool_pct >80% or any rejected connections. Correlate with request rate and response time to distinguish traffic spikes from slow backends.",
              "z": "Gauge (% threads busy), Line chart (thread utilization over time), Table (connectors with rejections), Single value (rejected connections).",
              "kfp": "Thread or worker pools fill during traffic spikes, slow upstream services, or DoS-style load. A surge alone can be benign if backends recover.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom JMX input (Jolokia, JMX modular input).\n• Ensure the following data sources are available: JMX MBeans (`Catalina:type=ThreadPool,name=\"http-nio-8080\"`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Jolokia agent or Splunk JMX modular input on Tomcat. Configure polling for `Catalina:type=ThreadPool,name=\"http-nio-8080\"` (adjust connector name per instance). Extract currentThreadsBusy, maxThreads, connectionCount (rejected). Poll every 5 minutes. Alert when pool_pct >80% or any rejected connections. Correlate with request rate and response time to distinguish traffic spikes from slow backends.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=jmx sourcetype=\"jmx:tomcat:threadpool\"\n| eval pool_pct=round(currentThreadsBusy/maxThreads*100,1)\n| where pool_pct > 80 OR connectionCount > 0\n| timechart span=5m max(pool_pct) as busy_pct, sum(connectionCount) as rejected by host, connector_name\n```\n\nUnderstanding this SPL\n\n**Tomcat JMX Thread Pool Utilization** — Connector thread pool busy percentage and rejected connections indicate Tomcat capacity limits. Exhausted pools cause 503 errors and connection timeouts.\n\nDocumented **Data sources**: JMX MBeans (`Catalina:type=ThreadPool,name=\"http-nio-8080\"`). **App/TA** (typical add-on context): Custom JMX input (Jolokia, JMX modular input). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: jmx; **sourcetype**: jmx:tomcat:threadpool. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=jmx, sourcetype=\"jmx:tomcat:threadpool\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pool_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pool_pct > 80 OR connectionCount > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host, connector_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (% threads busy), Line chart (thread utilization over time), Table (connectors with rejections), Single value (rejected connections).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that connector thread pool busy percentage and rejected connections indicate Tomcat capacity limits.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.13",
              "n": "WildFly / JBoss Datasource Pool Usage",
              "c": "high",
              "f": "advanced",
              "v": "JMX datasource pool active/idle/wait connections indicate database connectivity health. Exhausted pools cause application errors and slow transactions.",
              "t": "Custom JMX input (Jolokia)",
              "d": "JMX MBeans (`jboss.as:subsystem=datasources,data-source=*`)",
              "q": "index=jmx sourcetype=\"jmx:wildfly:datasource\"\n| eval pool_pct=round(AvailableCount/(AvailableCount+InUseCount)*100,1), wait_pct=round(WaitingCount/(AvailableCount+InUseCount+WaitingCount)*100,1)\n| where pool_pct < 20 OR WaitingCount > 0\n| timechart span=5m max(pool_pct) as avail_pct, avg(WaitingCount) as waiting by host, data_source",
              "m": "Deploy Jolokia on WildFly/JBoss. Poll `jboss.as:subsystem=datasources,data-source=*` for AvailableCount, InUseCount, WaitingCount, MaxUsedCount. Poll every 5 minutes. Alert when pool availability drops below 20% or WaitingCount >0 (indicating connection starvation). Track MaxUsedCount for capacity planning.",
              "z": "Gauge (% pool available), Line chart (active vs idle over time), Table (datasources with waiting connections), Single value (total waiting).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom JMX input (Jolokia).\n• Ensure the following data sources are available: JMX MBeans (`jboss.as:subsystem=datasources,data-source=*`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Jolokia on WildFly/JBoss. Poll `jboss.as:subsystem=datasources,data-source=*` for AvailableCount, InUseCount, WaitingCount, MaxUsedCount. Poll every 5 minutes. Alert when pool availability drops below 20% or WaitingCount >0 (indicating connection starvation). Track MaxUsedCount for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=jmx sourcetype=\"jmx:wildfly:datasource\"\n| eval pool_pct=round(AvailableCount/(AvailableCount+InUseCount)*100,1), wait_pct=round(WaitingCount/(AvailableCount+InUseCount+WaitingCount)*100,1)\n| where pool_pct < 20 OR WaitingCount > 0\n| timechart span=5m max(pool_pct) as avail_pct, avg(WaitingCount) as waiting by host, data_source\n```\n\nUnderstanding this SPL\n\n**WildFly / JBoss Datasource Pool Usage** — JMX datasource pool active/idle/wait connections indicate database connectivity health. Exhausted pools cause application errors and slow transactions.\n\nDocumented **Data sources**: JMX MBeans (`jboss.as:subsystem=datasources,data-source=*`). **App/TA** (typical add-on context): Custom JMX input (Jolokia). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: jmx; **sourcetype**: jmx:wildfly:datasource. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=jmx, sourcetype=\"jmx:wildfly:datasource\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pool_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pool_pct < 20 OR WaitingCount > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host, data_source** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (% pool available), Line chart (active vs idle over time), Table (datasources with waiting connections), Single value (total waiting).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that jMX datasource pool active/idle/wait connections indicate database connectivity health.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.14",
              "n": "JVM Garbage Collection Pause Time (STW)",
              "c": "critical",
              "f": "intermediate",
              "v": "Stop-the-world pause duration percentiles from unified GC logs (G1, ZGC) drive SLA breaches before heap % alerts fire.",
              "t": "GC log parsing, `jvm:gc` sourcetype",
              "d": "`-Xlog:gc*` (Java 11+), `gc_pause_ms`, `gc_type`",
              "q": "index=jvm sourcetype=\"jvm:gc\"\n| where gc_pause_ms > 500\n| timechart span=5m perc95(gc_pause_ms) as p95_pause, max(gc_pause_ms) as max_pause by host\n| where p95_pause > 200",
              "m": "Parse pause events only (not concurrent phases). Alert on p95 >200ms or any pause >2s. Split by pool (G1 Old Gen vs Young).",
              "z": "Line chart (p95/max pause), Histogram (pause distribution), Table (worst hosts).",
              "kfp": "OOM during memory leak hunts (with -XX:+HeapDumpOnOutOfMemoryError), large batch jobs, or after misconfigured heap sizes. We check deploy and job windows before calling a new defect.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GC log parsing, `jvm:gc` sourcetype.\n• Ensure the following data sources are available: `-Xlog:gc*` (Java 11+), `gc_pause_ms`, `gc_type`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse pause events only (not concurrent phases). Alert on p95 >200ms or any pause >2s. Split by pool (G1 Old Gen vs Young).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=jvm sourcetype=\"jvm:gc\"\n| where gc_pause_ms > 500\n| timechart span=5m perc95(gc_pause_ms) as p95_pause, max(gc_pause_ms) as max_pause by host\n| where p95_pause > 200\n```\n\nUnderstanding this SPL\n\n**JVM Garbage Collection Pause Time (STW)** — Stop-the-world pause duration percentiles from unified GC logs (G1, ZGC) drive SLA breaches before heap % alerts fire.\n\nDocumented **Data sources**: `-Xlog:gc*` (Java 11+), `gc_pause_ms`, `gc_type`. **App/TA** (typical add-on context): GC log parsing, `jvm:gc` sourcetype. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: jvm; **sourcetype**: jvm:gc. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=jvm, sourcetype=\"jvm:gc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where gc_pause_ms > 500` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where p95_pause > 200` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95/max pause), Histogram (pause distribution), Table (worst hosts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that stop-the-world pause duration percentiles from unified GC logs (G1, ZGC) drive service promises breaches before heap % alerts fire.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.15",
              "n": ".NET CLR Memory Pressure",
              "c": "high",
              "f": "beginner",
              "v": "`# Bytes in all Heaps`, `LOH` size, and `% Time in GC` together indicate memory pressure vs high allocation rate. Refines UC-8.2.8.",
              "t": "`Splunk_TA_windows` Perfmon",
              "d": "`.NET CLR Memory`, `.NET Memory Cache`",
              "q": "index=perfmon sourcetype=\"Perfmon:CLR_Memory\"\n| timechart span=5m avg(Gen_2_heap_size) as gen2_bytes, avg(Pct_Time_in_GC) as gc_pct by instance\n| where gc_pct > 15",
              "m": "Collect every 1m for critical apps. Alert when GC time >15% and Gen 2 heap grows week-over-week. Trigger dump analysis workflow.",
              "z": "Dual-axis (heap vs GC %), Line chart (Gen 2 size), Table (instances over threshold).",
              "kfp": "OOM during memory leak hunts (with -XX:+HeapDumpOnOutOfMemoryError), large batch jobs, or after misconfigured heap sizes. We check deploy and job windows before calling a new defect.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` Perfmon.\n• Ensure the following data sources are available: `.NET CLR Memory`, `.NET Memory Cache`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect every 1m for critical apps. Alert when GC time >15% and Gen 2 heap grows week-over-week. Trigger dump analysis workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:CLR_Memory\"\n| timechart span=5m avg(Gen_2_heap_size) as gen2_bytes, avg(Pct_Time_in_GC) as gc_pct by instance\n| where gc_pct > 15\n```\n\nUnderstanding this SPL\n\n**.NET CLR Memory Pressure** — `# Bytes in all Heaps`, `LOH` size, and `% Time in GC` together indicate memory pressure vs high allocation rate. Refines UC-8.2.8.\n\nDocumented **Data sources**: `.NET CLR Memory`, `.NET Memory Cache`. **App/TA** (typical add-on context): `Splunk_TA_windows` Perfmon. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:CLR_Memory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:CLR_Memory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by instance** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where gc_pct > 15` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Dual-axis (heap vs GC %), Line chart (Gen 2 size), Table (instances over threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow .NET CLR Memory Pressure in plain language so we notice drift or trouble early enough to act calmly.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.16",
              "n": "Node.js Event Loop Lag (High Resolution)",
              "c": "high",
              "f": "intermediate",
              "v": "`eventLoopUtilization` and `delay` histogram from `perf_hooks` or Prometheus `nodejs_eventloop_lag_seconds` for sub-millisecond vs millisecond precision.",
              "t": "OpenTelemetry, `prom-client`",
              "d": "`nodejs:metrics` `event_loop_lag_p99_ms`",
              "q": "index=application sourcetype=\"nodejs:metrics\"\n| timechart span=1m perc99(event_loop_lag_ms) as p99_lag by host\n| where p99_lag > 50",
              "m": "Export p50/p99 lag. Alert on p99 >50ms for 5m. Correlate with blocking `fs` or `dns` calls from traces.",
              "z": "Line chart (p99 event loop lag), Table (hosts breaching SLO), Single value (current p99).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenTelemetry, `prom-client`.\n• Ensure the following data sources are available: `nodejs:metrics` `event_loop_lag_p99_ms`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport p50/p99 lag. Alert on p99 >50ms for 5m. Correlate with blocking `fs` or `dns` calls from traces.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=application sourcetype=\"nodejs:metrics\"\n| timechart span=1m perc99(event_loop_lag_ms) as p99_lag by host\n| where p99_lag > 50\n```\n\nUnderstanding this SPL\n\n**Node.js Event Loop Lag (High Resolution)** — `eventLoopUtilization` and `delay` histogram from `perf_hooks` or Prometheus `nodejs_eventloop_lag_seconds` for sub-millisecond vs millisecond precision.\n\nDocumented **Data sources**: `nodejs:metrics` `event_loop_lag_p99_ms`. **App/TA** (typical add-on context): OpenTelemetry, `prom-client`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: application; **sourcetype**: nodejs:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=application, sourcetype=\"nodejs:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where p99_lag > 50` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p99 event loop lag), Table (hosts breaching SLO), Single value (current p99).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that and histogram from or Prometheus for sub-millisecond vs millisecond precision.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.17",
              "n": "Python WSGI Worker Pool Exhaustion",
              "c": "high",
              "f": "intermediate",
              "v": "Gunicorn/uWSGI `active workers`, `listening queue`, and `timeout` worker kills indicate saturation or slow upstream (DB).",
              "t": "Structured app logs, stats endpoint",
              "d": "`gunicorn:json` `workers`, `req`, `timeout`",
              "q": "index=application sourcetype=\"gunicorn:json\"\n| where worker_timeout > 0 OR active_workers >= max_workers OR backlog > 10\n| stats sum(worker_timeout) as timeouts, max(backlog) as max_backlog by host, app_name\n| where timeouts > 0 OR max_backlog > 10",
              "m": "Enable `--statsd` or JSON access/error with worker fields. Alert on backlog growth or worker timeouts. Scale workers or fix slow queries.",
              "z": "Line chart (backlog and active workers), Table (apps with timeouts), Single value (total worker timeouts 1h).",
              "kfp": "Thread or worker pools fill during traffic spikes, slow upstream services, or DoS-style load. A surge alone can be benign if backends recover.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Structured app logs, stats endpoint.\n• Ensure the following data sources are available: `gunicorn:json` `workers`, `req`, `timeout`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable `--statsd` or JSON access/error with worker fields. Alert on backlog growth or worker timeouts. Scale workers or fix slow queries.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=application sourcetype=\"gunicorn:json\"\n| where worker_timeout > 0 OR active_workers >= max_workers OR backlog > 10\n| stats sum(worker_timeout) as timeouts, max(backlog) as max_backlog by host, app_name\n| where timeouts > 0 OR max_backlog > 10\n```\n\nUnderstanding this SPL\n\n**Python WSGI Worker Pool Exhaustion** — Gunicorn/uWSGI `active workers`, `listening queue`, and `timeout` worker kills indicate saturation or slow upstream (DB).\n\nDocumented **Data sources**: `gunicorn:json` `workers`, `req`, `timeout`. **App/TA** (typical add-on context): Structured app logs, stats endpoint. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: application; **sourcetype**: gunicorn:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=application, sourcetype=\"gunicorn:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where worker_timeout > 0 OR active_workers >= max_workers OR backlog > 10` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, app_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where timeouts > 0 OR max_backlog > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (backlog and active workers), Table (apps with timeouts), Single value (total worker timeouts 1h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that gunicorn/uWSGI, and worker kills indicate saturation or slow upstream (DB).",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.18",
              "n": "Tomcat Active Session Count",
              "c": "medium",
              "f": "beginner",
              "v": "Session explosion may indicate bot traffic, session fixation abuse, or missing session TTL. Per-context session counts from JMX.",
              "t": "`TA-jmx`",
              "d": "`Catalina:type=Manager` `activeSessions`, `sessionMaxAliveTime`",
              "q": "index=jmx sourcetype=\"jmx:tomcat:manager\"\n| timechart span=15m max(activeSessions) as sessions by host, context_path\n| eventstats avg(sessions) as baseline by context_path\n| where sessions > baseline * 3 AND sessions > 5000",
              "m": "Baseline sessions per context. Alert on 3× baseline or absolute cap. Correlate with marketing events or attacks.",
              "z": "Line chart (sessions over time), Table (context, sessions), Single value (peak sessions).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-jmx`.\n• Ensure the following data sources are available: `Catalina:type=Manager` `activeSessions`, `sessionMaxAliveTime`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline sessions per context. Alert on 3× baseline or absolute cap. Correlate with marketing events or attacks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=jmx sourcetype=\"jmx:tomcat:manager\"\n| timechart span=15m max(activeSessions) as sessions by host, context_path\n| eventstats avg(sessions) as baseline by context_path\n| where sessions > baseline * 3 AND sessions > 5000\n```\n\nUnderstanding this SPL\n\n**Tomcat Active Session Count** — Session explosion may indicate bot traffic, session fixation abuse, or missing session TTL. Per-context session counts from JMX.\n\nDocumented **Data sources**: `Catalina:type=Manager` `activeSessions`, `sessionMaxAliveTime`. **App/TA** (typical add-on context): `TA-jmx`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: jmx; **sourcetype**: jmx:tomcat:manager. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=jmx, sourcetype=\"jmx:tomcat:manager\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host, context_path** — ideal for trending and alerting on this use case.\n• `eventstats` rolls up events into metrics; results are split **by context_path** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where sessions > baseline * 3 AND sessions > 5000` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (sessions over time), Table (context, sessions), Single value (peak sessions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that session explosion may indicate bot traffic, session fixation abuse, or missing session TTL.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.19",
              "n": "WebLogic Stuck Threads",
              "c": "critical",
              "f": "intermediate",
              "v": "Stuck thread count >0 blocks request processing and triggers health check failures. Server log `BEA-000337` patterns.",
              "t": "WebLogic Server logs, JMX",
              "d": "`weblogic:server` log, `StuckThreadCount` MBean",
              "q": "index=application sourcetype=\"weblogic:server\"\n| search \"BEA-000337\" OR \"STUCK\" OR stuck_thread_count>0\n| stats count by domain, server_name, thread_name\n| where count > 0",
              "m": "Forward stdout/stderr and domain logs. Alert on first stuck thread. Thread dump automation on critical domains.",
              "z": "Table (domain, server, stuck count), Timeline (stuck events), Single value (stuck threads now).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: WebLogic Server logs, JMX.\n• Ensure the following data sources are available: `weblogic:server` log, `StuckThreadCount` MBean.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward stdout/stderr and domain logs. Alert on first stuck thread. Thread dump automation on critical domains.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=application sourcetype=\"weblogic:server\"\n| search \"BEA-000337\" OR \"STUCK\" OR stuck_thread_count>0\n| stats count by domain, server_name, thread_name\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**WebLogic Stuck Threads** — Stuck thread count >0 blocks request processing and triggers health check failures. Server log `BEA-000337` patterns.\n\nDocumented **Data sources**: `weblogic:server` log, `StuckThreadCount` MBean. **App/TA** (typical add-on context): WebLogic Server logs, JMX. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: application; **sourcetype**: weblogic:server. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=application, sourcetype=\"weblogic:server\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by domain, server_name, thread_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (domain, server, stuck count), Timeline (stuck events), Single value (stuck threads now).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that stuck thread count >0 blocks request processing and triggers health check failures.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.20",
              "n": "JBoss / WildFly Deployment Failures",
              "c": "critical",
              "f": "beginner",
              "v": "Failed deployments leave apps stopped or partial. Log markers `WFLYSRV0026`, `Deployment FAILED` require immediate attention.",
              "t": "JBoss server.log ingestion",
              "d": "`jboss:server.log`, `server.log` deployment phase",
              "q": "index=application sourcetype=\"jboss:server\"\n| search \"Deployment FAILED\" OR \"WFLYSRV0059\" OR \"Services with missing/unavailable dependencies\"\n| table _time, host, deployment, message\n| sort -_time",
              "m": "Parse deployment name from log line. Alert on any FAILURE during CI/CD window or outside window (rogue deploy). Correlate with Git commit from pipeline ID if present.",
              "z": "Timeline (deployment outcomes), Table (failed deployment, error), Single value (failures 24h).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: JBoss server.log ingestion.\n• Ensure the following data sources are available: `jboss:server.log`, `server.log` deployment phase.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse deployment name from log line. Alert on any FAILURE during CI/CD window or outside window (rogue deploy). Correlate with Git commit from pipeline ID if present.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=application sourcetype=\"jboss:server\"\n| search \"Deployment FAILED\" OR \"WFLYSRV0059\" OR \"Services with missing/unavailable dependencies\"\n| table _time, host, deployment, message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**JBoss / WildFly Deployment Failures** — Failed deployments leave apps stopped or partial. Log markers `WFLYSRV0026`, `Deployment FAILED` require immediate attention.\n\nDocumented **Data sources**: `jboss:server.log`, `server.log` deployment phase. **App/TA** (typical add-on context): JBoss server.log ingestion. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: application; **sourcetype**: jboss:server. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=application, sourcetype=\"jboss:server\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **JBoss / WildFly Deployment Failures**): table _time, host, deployment, message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (deployment outcomes), Table (failed deployment, error), Single value (failures 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that failed deployments leave apps stopped or partial.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "jboss"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.21",
              "n": "Spring Boot Actuator Health Down",
              "c": "critical",
              "f": "beginner",
              "v": "`/actuator/health` JSON with `status:DOWN` from liveness/readiness probes. Aggregates component failures (diskSpace, db, redis).",
              "t": "HEC from K8s probe sidecar, access log",
              "d": "`spring:actuator` JSON lines, probe stderr",
              "q": "index=application sourcetype=\"spring:actuator\" OR path=\"/actuator/health\"\n| spath output=status status\n| spath output=components components\n| where status!=\"UP\"\n| table _time, host, app_name, status, components",
              "m": "Ship health check responses (avoid PII). Alert on non-UP. Break down `components.*.status` for root cause.",
              "z": "Status grid (app × component), Table (DOWN components), Timeline (health flaps).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC from K8s probe sidecar, access log.\n• Ensure the following data sources are available: `spring:actuator` JSON lines, probe stderr.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nShip health check responses (avoid PII). Alert on non-UP. Break down `components.*.status` for root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=application sourcetype=\"spring:actuator\" OR path=\"/actuator/health\"\n| spath output=status status\n| spath output=components components\n| where status!=\"UP\"\n| table _time, host, app_name, status, components\n```\n\nUnderstanding this SPL\n\n**Spring Boot Actuator Health Down** — `/actuator/health` JSON with `status:DOWN` from liveness/readiness probes. Aggregates component failures (diskSpace, db, redis).\n\nDocumented **Data sources**: `spring:actuator` JSON lines, probe stderr. **App/TA** (typical add-on context): HEC from K8s probe sidecar, access log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: application; **sourcetype**: spring:actuator. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=application, sourcetype=\"spring:actuator\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Filters the current rows with `where status!=\"UP\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Spring Boot Actuator Health Down**): table _time, host, app_name, status, components\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (app × component), Table (DOWN components), Timeline (health flaps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that jSON with from liveness/readiness probes.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.22",
              "n": ".NET Exception Rate Trending",
              "c": "high",
              "f": "beginner",
              "v": "`# of Exceps Thrown / sec` and first-chance exception logs show error storms after deploys. Complements log-based UC-8.2.4 with runtime counters.",
              "t": "`Splunk_TA_windows` Perfmon, Serilog/NLog",
              "d": "`.NET CLR Exceptions` `# of Exceps Thrown / sec`",
              "q": "index=perfmon sourcetype=\"Perfmon:CLR_Exceptions\"\n| timechart span=5m sum(Exceps_Thrown_per_sec) as ex_rate by process_name\n| eventstats avg(ex_rate) as baseline by process_name\n| where ex_rate > baseline * 5 AND ex_rate > 1",
              "m": "Baseline per process. Alert on 5× baseline. Join with deployment markers from UC-8.2.5.",
              "z": "Line chart (exception rate), Table (process, spike factor), Single value (total exceptions/sec).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` Perfmon, Serilog/NLog.\n• Ensure the following data sources are available: `.NET CLR Exceptions` `# of Exceps Thrown / sec`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline per process. Alert on 5× baseline. Join with deployment markers from UC-8.2.5.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon sourcetype=\"Perfmon:CLR_Exceptions\"\n| timechart span=5m sum(Exceps_Thrown_per_sec) as ex_rate by process_name\n| eventstats avg(ex_rate) as baseline by process_name\n| where ex_rate > baseline * 5 AND ex_rate > 1\n```\n\nUnderstanding this SPL\n\n**.NET Exception Rate Trending** — `# of Exceps Thrown / sec` and first-chance exception logs show error storms after deploys. Complements log-based UC-8.2.4 with runtime counters.\n\nDocumented **Data sources**: `.NET CLR Exceptions` `# of Exceps Thrown / sec`. **App/TA** (typical add-on context): `Splunk_TA_windows` Perfmon, Serilog/NLog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon; **sourcetype**: Perfmon:CLR_Exceptions. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, sourcetype=\"Perfmon:CLR_Exceptions\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by process_name** — ideal for trending and alerting on this use case.\n• `eventstats` rolls up events into metrics; results are split **by process_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ex_rate > baseline * 5 AND ex_rate > 1` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (exception rate), Table (process, spike factor), Single value (total exceptions/sec).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that and first-chance exception logs show error storms after deploys.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hardware_bmc",
                "windows"
              ],
              "em": [
                "hardware_bmc_ilo"
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.2.23",
              "n": "Jira Data Center Performance",
              "c": "medium",
              "f": "advanced",
              "v": "JMX metrics, request duration, and attachment storage indicate Jira health. Slow requests frustrate users; disk usage growth risks outages. Enables capacity planning and performance tuning.",
              "t": "Custom JMX input (Jolokia), Jira REST API",
              "d": "Jira JMX MBeans (heap, threads, request duration), /rest/api/2/serverInfo, Jira access logs",
              "q": "index=jira sourcetype=\"jira:jmx\"\n| eval heap_used_pct=if(HeapMemoryMax>0, round(HeapMemoryUsed/HeapMemoryMax*100, 1), null())\n| where heap_used_pct > 85 OR ThreadCount > 500 OR RequestDurationP95 > 3000\n| bin _time span=5m\n| stats latest(HeapMemoryUsed) as heap_used, latest(HeapMemoryMax) as heap_max, latest(heap_used_pct) as heap_pct, latest(ThreadCount) as threads, latest(RequestDurationP95) as p95_ms by _time, host\n| where heap_pct > 85 OR threads > 500 OR p95_ms > 3000\n| table _time, host, heap_used, heap_max, heap_pct, threads, p95_ms",
              "m": "Deploy Jolokia agent on Jira application nodes and configure Splunk to poll JMX MBeans (java.lang:type=Memory, java.lang:type=Threading, com.atlassian.jira:type=RequestMetrics). Poll every 60 seconds. Ingest Jira access logs for request duration percentiles. Optionally poll /rest/api/2/serverInfo for version and build. Alert on heap >85%, thread count >500, or P95 request duration >3 seconds. Track attachment storage via JMX or filesystem metrics. Correlate with database and disk I/O.",
              "z": "Line chart (heap usage, thread count, P95 latency), Gauge (heap %), Table (performance metrics by node), Bar chart (request duration by endpoint).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Connect for Kafka](https://splunkbase.splunk.com/app/3862)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom JMX input (Jolokia), Jira REST API.\n• Ensure the following data sources are available: Jira JMX MBeans (heap, threads, request duration), /rest/api/2/serverInfo, Jira access logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Jolokia agent on Jira application nodes and configure Splunk to poll JMX MBeans (java.lang:type=Memory, java.lang:type=Threading, com.atlassian.jira:type=RequestMetrics). Poll every 60 seconds. Ingest Jira access logs for request duration percentiles. Optionally poll /rest/api/2/serverInfo for version and build. Alert on heap >85%, thread count >500, or P95 request duration >3 seconds. Track attachment storage via JMX or filesystem metrics. Correlate with database and disk I/O.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=jira sourcetype=\"jira:jmx\"\n| eval heap_used_pct=if(HeapMemoryMax>0, round(HeapMemoryUsed/HeapMemoryMax*100, 1), null())\n| where heap_used_pct > 85 OR ThreadCount > 500 OR RequestDurationP95 > 3000\n| bin _time span=5m\n| stats latest(HeapMemoryUsed) as heap_used, latest(HeapMemoryMax) as heap_max, latest(heap_used_pct) as heap_pct, latest(ThreadCount) as threads, latest(RequestDurationP95) as p95_ms by _time, host\n| where heap_pct > 85 OR threads > 500 OR p95_ms > 3000\n| table _time, host, heap_used, heap_max, heap_pct, threads, p95_ms\n```\n\nUnderstanding this SPL\n\n**Jira Data Center Performance** — JMX metrics, request duration, and attachment storage indicate Jira health. Slow requests frustrate users; disk usage growth risks outages. Enables capacity planning and performance tuning.\n\nDocumented **Data sources**: Jira JMX MBeans (heap, threads, request duration), /rest/api/2/serverInfo, Jira access logs. **App/TA** (typical add-on context): Custom JMX input (Jolokia), Jira REST API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: jira; **sourcetype**: jira:jmx. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=jira, sourcetype=\"jira:jmx\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **heap_used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where heap_used_pct > 85 OR ThreadCount > 500 OR RequestDurationP95 > 3000` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where heap_pct > 85 OR threads > 500 OR p95_ms > 3000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Jira Data Center Performance**): table _time, host, heap_used, heap_max, heap_pct, threads, p95_ms\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (heap usage, thread count, P95 latency), Gauge (heap %), Table (performance metrics by node), Bar chart (request duration by endpoint).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that jMX metrics, request duration, and attachment storage indicate Jira health.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "jira"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.0,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 22,
            "none": 0
          }
        },
        {
          "i": "8.3",
          "n": "Message Queues & Event Streaming",
          "u": [
            {
              "i": "8.3.1",
              "n": "Consumer Lag Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Growing consumer lag means messages aren't being processed fast enough, leading to data staleness and eventual message loss if retention is exceeded.",
              "t": "`Splunk Connect for Kafka` (Splunkbase 3862), Burrow integration, JMX",
              "d": "Kafka consumer group offsets (JMX, Burrow, `kafka-consumer-groups.sh`)",
              "q": "index=kafka sourcetype=\"kafka:consumer_lag\"\n| timechart span=5m max(lag) as consumer_lag by consumer_group, topic\n| where consumer_lag > 10000",
              "m": "Deploy Kafka consumer lag monitoring via Burrow or JMX. Poll lag per consumer group/topic/partition every minute. Alert when lag exceeds threshold (e.g., >10K messages or >5 minutes equivalent). Track lag trend for capacity planning.",
              "z": "Line chart (lag per consumer group), Heatmap (topic × partition lag), Single value (max lag), Table (lagging consumers).",
              "kfp": "Queues grow during consumer maintenance, slow consumers, or burst producer activity. Seasonal traffic can also create sustained backlog without a hard fault.",
              "refs": "[Splunkbase app 3862](https://splunkbase.splunk.com/app/3862)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Connect for Kafka` (Splunkbase 3862), Burrow integration, JMX.\n• Ensure the following data sources are available: Kafka consumer group offsets (JMX, Burrow, `kafka-consumer-groups.sh`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Kafka consumer lag monitoring via Burrow or JMX. Poll lag per consumer group/topic/partition every minute. Alert when lag exceeds threshold (e.g., >10K messages or >5 minutes equivalent). Track lag trend for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kafka sourcetype=\"kafka:consumer_lag\"\n| timechart span=5m max(lag) as consumer_lag by consumer_group, topic\n| where consumer_lag > 10000\n```\n\nUnderstanding this SPL\n\n**Consumer Lag Monitoring** — Growing consumer lag means messages aren't being processed fast enough, leading to data staleness and eventual message loss if retention is exceeded.\n\nDocumented **Data sources**: Kafka consumer group offsets (JMX, Burrow, `kafka-consumer-groups.sh`). **App/TA** (typical add-on context): `Splunk Connect for Kafka` (Splunkbase 3862), Burrow integration, JMX. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kafka; **sourcetype**: kafka:consumer_lag. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kafka, sourcetype=\"kafka:consumer_lag\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by consumer_group, topic** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where consumer_lag > 10000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (lag per consumer group), Heatmap (topic × partition lag), Single value (max lag), Table (lagging consumers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that growing consumer lag means messages aren't being processed fast enough, leading to data staleness and eventual message loss if retention is exceeded.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kafka"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "8.3.2",
              "n": "Queue Depth Trending",
              "c": "high",
              "f": "advanced",
              "v": "Growing queue depths indicate consumers can't keep up or are down. Trending prevents message loss and processing delays.",
              "t": "RabbitMQ management API, ActiveMQ JMX",
              "d": "RabbitMQ management API (`/api/queues`), ActiveMQ JMX",
              "q": "index=messaging sourcetype=\"rabbitmq:queue\"\n| timechart span=5m max(messages) as depth by queue_name, vhost\n| where depth > 1000",
              "m": "Poll RabbitMQ management API every minute via scripted input. Track message count, publish/deliver rates per queue. Alert when depth exceeds threshold or grows consistently. Correlate with consumer status.",
              "z": "Line chart (queue depth over time), Bar chart (top queues by depth), Table (queues exceeding threshold).",
              "kfp": "Queues grow during consumer maintenance, slow consumers, or burst producer activity. Seasonal traffic can also create sustained backlog without a hard fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: RabbitMQ management API, ActiveMQ JMX.\n• Ensure the following data sources are available: RabbitMQ management API (`/api/queues`), ActiveMQ JMX.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll RabbitMQ management API every minute via scripted input. Track message count, publish/deliver rates per queue. Alert when depth exceeds threshold or grows consistently. Correlate with consumer status.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=messaging sourcetype=\"rabbitmq:queue\"\n| timechart span=5m max(messages) as depth by queue_name, vhost\n| where depth > 1000\n```\n\nUnderstanding this SPL\n\n**Queue Depth Trending** — Growing queue depths indicate consumers can't keep up or are down. Trending prevents message loss and processing delays.\n\nDocumented **Data sources**: RabbitMQ management API (`/api/queues`), ActiveMQ JMX. **App/TA** (typical add-on context): RabbitMQ management API, ActiveMQ JMX. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: messaging; **sourcetype**: rabbitmq:queue. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=messaging, sourcetype=\"rabbitmq:queue\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by queue_name, vhost** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where depth > 1000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (queue depth over time), Bar chart (top queues by depth), Table (queues exceeding threshold).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that growing queue depths indicate consumers can't keep up or are down.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "activemq",
                "rabbitmq"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.3",
              "n": "Broker Health Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Broker failures cause message loss and application disruption. Health monitoring ensures cluster stability.",
              "t": "JMX, broker metrics",
              "d": "Kafka JMX (broker metrics), RabbitMQ management API (`/api/nodes`)",
              "q": "index=kafka sourcetype=\"kafka:broker\"\n| stats latest(UnderReplicatedPartitions) as under_replicated, latest(ActiveControllerCount) as controllers by broker_id\n| eventstats sum(controllers) as cluster_controllers\n| where under_replicated > 0 OR cluster_controllers != 1",
              "m": "Poll broker health metrics via JMX every minute. Track disk usage, CPU, memory, network I/O. Alert on broker offline, under-replicated partitions, or controller election. Monitor ISR (In-Sync Replica) shrink rate.",
              "z": "Status grid (broker × health), Single value (under-replicated partitions), Table (broker metrics), Line chart (broker resource usage).",
              "kfp": "Queues and broker metrics swing during rebalancing, replay, or maintenance. We align with change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: JMX, broker metrics.\n• Ensure the following data sources are available: Kafka JMX (broker metrics), RabbitMQ management API (`/api/nodes`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll broker health metrics via JMX every minute. Track disk usage, CPU, memory, network I/O. Alert on broker offline, under-replicated partitions, or controller election. Monitor ISR (In-Sync Replica) shrink rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kafka sourcetype=\"kafka:broker\"\n| stats latest(UnderReplicatedPartitions) as under_replicated, latest(ActiveControllerCount) as controllers by broker_id\n| eventstats sum(controllers) as cluster_controllers\n| where under_replicated > 0 OR cluster_controllers != 1\n```\n\nUnderstanding this SPL\n\n**Broker Health Monitoring** — Broker failures cause message loss and application disruption. Health monitoring ensures cluster stability.\n\nDocumented **Data sources**: Kafka JMX (broker metrics), RabbitMQ management API (`/api/nodes`). **App/TA** (typical add-on context): JMX, broker metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kafka; **sourcetype**: kafka:broker. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kafka, sourcetype=\"kafka:broker\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by broker_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where under_replicated > 0 OR cluster_controllers != 1` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (broker × health), Single value (under-replicated partitions), Table (broker metrics), Line chart (broker resource usage).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that broker failures cause message loss and application disruption.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "8.3.4",
              "n": "Under-Replicated Partitions",
              "c": "critical",
              "f": "intermediate",
              "v": "Under-replicated partitions mean data is at risk of loss if additional brokers fail. Immediate remediation is required.",
              "t": "`Splunk Connect for Kafka` (Splunkbase 3862), JMX",
              "d": "Kafka JMX (`UnderReplicatedPartitions`, `UnderMinIsrPartitionCount`)",
              "q": "index=kafka sourcetype=\"kafka:broker\"\n| where UnderReplicatedPartitions > 0\n| stats sum(UnderReplicatedPartitions) as total_under_replicated by _time\n| timechart span=5m max(total_under_replicated) as under_replicated",
              "m": "Poll Kafka broker JMX metrics. Alert immediately on any under-replicated partitions. Track duration of under-replication. Correlate with broker disk usage and network metrics to identify root cause.",
              "z": "Single value (under-replicated count — target: 0), Line chart (under-replicated over time), Table (affected topics/partitions).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunkbase app 3862](https://splunkbase.splunk.com/app/3862)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Connect for Kafka` (Splunkbase 3862), JMX.\n• Ensure the following data sources are available: Kafka JMX (`UnderReplicatedPartitions`, `UnderMinIsrPartitionCount`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Kafka broker JMX metrics. Alert immediately on any under-replicated partitions. Track duration of under-replication. Correlate with broker disk usage and network metrics to identify root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kafka sourcetype=\"kafka:broker\"\n| where UnderReplicatedPartitions > 0\n| stats sum(UnderReplicatedPartitions) as total_under_replicated by _time\n| timechart span=5m max(total_under_replicated) as under_replicated\n```\n\nUnderstanding this SPL\n\n**Under-Replicated Partitions** — Under-replicated partitions mean data is at risk of loss if additional brokers fail. Immediate remediation is required.\n\nDocumented **Data sources**: Kafka JMX (`UnderReplicatedPartitions`, `UnderMinIsrPartitionCount`). **App/TA** (typical add-on context): `Splunk Connect for Kafka` (Splunkbase 3862), JMX. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kafka; **sourcetype**: kafka:broker. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kafka, sourcetype=\"kafka:broker\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where UnderReplicatedPartitions > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (under-replicated count — target: 0), Line chart (under-replicated over time), Table (affected topics/partitions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that -replicated partitions mean data is at risk of loss if additional brokers fail.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kafka"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.5",
              "n": "Dead Letter Queue Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Messages in DLQ represent processing failures that need investigation. They may indicate bugs, schema changes, or downstream failures.",
              "t": "Queue management API, custom input",
              "d": "RabbitMQ DLQ queues, AWS SQS DLQ, Kafka DLT topics",
              "q": "index=messaging sourcetype=\"rabbitmq:queue\"\n| search queue_name=\"*dead*\" OR queue_name=\"*dlq*\" OR queue_name=\"*error*\"\n| where messages > 0\n| table _time, vhost, queue_name, messages, message_stats.publish_details.rate",
              "m": "Monitor DLQ/DLT queues specifically. Alert when any DLQ has messages (should normally be 0). Track DLQ ingestion rate to detect ongoing issues. Sample DLQ messages for root cause analysis.",
              "z": "Single value (total DLQ messages), Table (DLQs with counts), Line chart (DLQ growth over time).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Queue management API, custom input.\n• Ensure the following data sources are available: RabbitMQ DLQ queues, AWS SQS DLQ, Kafka DLT topics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor DLQ/DLT queues specifically. Alert when any DLQ has messages (should normally be 0). Track DLQ ingestion rate to detect ongoing issues. Sample DLQ messages for root cause analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=messaging sourcetype=\"rabbitmq:queue\"\n| search queue_name=\"*dead*\" OR queue_name=\"*dlq*\" OR queue_name=\"*error*\"\n| where messages > 0\n| table _time, vhost, queue_name, messages, message_stats.publish_details.rate\n```\n\nUnderstanding this SPL\n\n**Dead Letter Queue Monitoring** — Messages in DLQ represent processing failures that need investigation. They may indicate bugs, schema changes, or downstream failures.\n\nDocumented **Data sources**: RabbitMQ DLQ queues, AWS SQS DLQ, Kafka DLT topics. **App/TA** (typical add-on context): Queue management API, custom input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: messaging; **sourcetype**: rabbitmq:queue. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=messaging, sourcetype=\"rabbitmq:queue\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Filters the current rows with `where messages > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Dead Letter Queue Monitoring**): table _time, vhost, queue_name, messages, message_stats.publish_details.rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (total DLQ messages), Table (DLQs with counts), Line chart (DLQ growth over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that messages in DLQ represent processing failures that need investigation.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.6",
              "n": "Message Throughput Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Throughput trending identifies capacity limits and validates scaling decisions. Unexpected drops indicate producer or broker issues.",
              "t": "JMX, broker management APIs",
              "d": "Kafka broker metrics (MessagesInPerSec), RabbitMQ message rates",
              "q": "index=kafka sourcetype=\"kafka:broker\"\n| timechart span=5m sum(MessagesInPerSec) as msgs_in, sum(BytesInPerSec) as bytes_in",
              "m": "Poll broker throughput metrics via JMX. Track messages and bytes in/out per broker and per topic. Baseline normal patterns. Alert on sudden throughput drops (possible producer failure).",
              "z": "Line chart (throughput over time), Stacked area (throughput by topic), Dual-axis (messages + bytes).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: JMX, broker management APIs.\n• Ensure the following data sources are available: Kafka broker metrics (MessagesInPerSec), RabbitMQ message rates.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll broker throughput metrics via JMX. Track messages and bytes in/out per broker and per topic. Baseline normal patterns. Alert on sudden throughput drops (possible producer failure).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kafka sourcetype=\"kafka:broker\"\n| timechart span=5m sum(MessagesInPerSec) as msgs_in, sum(BytesInPerSec) as bytes_in\n```\n\nUnderstanding this SPL\n\n**Message Throughput Trending** — Throughput trending identifies capacity limits and validates scaling decisions. Unexpected drops indicate producer or broker issues.\n\nDocumented **Data sources**: Kafka broker metrics (MessagesInPerSec), RabbitMQ message rates. **App/TA** (typical add-on context): JMX, broker management APIs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kafka; **sourcetype**: kafka:broker. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kafka, sourcetype=\"kafka:broker\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (throughput over time), Stacked area (throughput by topic), Dual-axis (messages + bytes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that throughput trending identifies capacity limits and validates scaling decisions.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.7",
              "n": "Topic/Queue Creation Audit",
              "c": "medium",
              "f": "beginner",
              "v": "Uncontrolled topic/queue creation can lead to resource sprawl. Audit trail supports governance and cleanup.",
              "t": "Broker audit logs, Kafka authorizer logs",
              "d": "Kafka authorizer logs, RabbitMQ audit log, broker event logs",
              "q": "index=kafka sourcetype=\"kafka:authorizer\"\n| search operation=\"Create\" resource_type=\"Topic\"\n| table _time, principal, resource_name, allowed",
              "m": "Enable Kafka authorizer logging or audit log. Forward broker logs to Splunk. Parse topic/queue creation events. Alert on creation of topics matching naming convention violations. Report on topic inventory growth.",
              "z": "Table (created topics with details), Timeline (creation events), Bar chart (topics created per week).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Broker audit logs, Kafka authorizer logs.\n• Ensure the following data sources are available: Kafka authorizer logs, RabbitMQ audit log, broker event logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Kafka authorizer logging or audit log. Forward broker logs to Splunk. Parse topic/queue creation events. Alert on creation of topics matching naming convention violations. Report on topic inventory growth.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kafka sourcetype=\"kafka:authorizer\"\n| search operation=\"Create\" resource_type=\"Topic\"\n| table _time, principal, resource_name, allowed\n```\n\nUnderstanding this SPL\n\n**Topic/Queue Creation Audit** — Uncontrolled topic/queue creation can lead to resource sprawl. Audit trail supports governance and cleanup.\n\nDocumented **Data sources**: Kafka authorizer logs, RabbitMQ audit log, broker event logs. **App/TA** (typical add-on context): Broker audit logs, Kafka authorizer logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kafka; **sourcetype**: kafka:authorizer. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kafka, sourcetype=\"kafka:authorizer\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Topic/Queue Creation Audit**): table _time, principal, resource_name, allowed\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (created topics with details), Timeline (creation events), Bar chart (topics created per week).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that uncontrolled topic/queue creation can lead to resource sprawl.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kafka"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.8",
              "n": "Consumer Group Rebalancing",
              "c": "high",
              "f": "intermediate",
              "v": "Frequent rebalances cause processing pauses and duplicate message delivery. Detection identifies unstable consumers.",
              "t": "Kafka broker logs, JMX",
              "d": "Kafka GroupCoordinator logs, consumer group state",
              "q": "index=kafka sourcetype=\"kafka:server\"\n| search \"Preparing to rebalance group\" OR \"Stabilized group\"\n| rex \"group (?<consumer_group>\\S+)\"\n| stats count by consumer_group\n| where count > 5",
              "m": "Parse Kafka broker logs for rebalance events. Track rebalance frequency per consumer group. Alert when rebalances occur more than 5 times per hour. Correlate with consumer heartbeat timeouts and session timeouts.",
              "z": "Bar chart (rebalances per consumer group), Timeline (rebalance events), Line chart (rebalance frequency trend).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kafka broker logs, JMX.\n• Ensure the following data sources are available: Kafka GroupCoordinator logs, consumer group state.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse Kafka broker logs for rebalance events. Track rebalance frequency per consumer group. Alert when rebalances occur more than 5 times per hour. Correlate with consumer heartbeat timeouts and session timeouts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kafka sourcetype=\"kafka:server\"\n| search \"Preparing to rebalance group\" OR \"Stabilized group\"\n| rex \"group (?<consumer_group>\\S+)\"\n| stats count by consumer_group\n| where count > 5\n```\n\nUnderstanding this SPL\n\n**Consumer Group Rebalancing** — Frequent rebalances cause processing pauses and duplicate message delivery. Detection identifies unstable consumers.\n\nDocumented **Data sources**: Kafka GroupCoordinator logs, consumer group state. **App/TA** (typical add-on context): Kafka broker logs, JMX. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kafka; **sourcetype**: kafka:server. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kafka, sourcetype=\"kafka:server\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by consumer_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (rebalances per consumer group), Timeline (rebalance events), Line chart (rebalance frequency trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that frequent rebalances cause processing pauses and duplicate message delivery.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kafka"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.9",
              "n": "Partition Leader Elections",
              "c": "high",
              "f": "beginner",
              "v": "Frequent leader elections indicate broker instability, causing temporary unavailability for affected partitions.",
              "t": "JMX, Kafka controller logs",
              "d": "Kafka JMX (`LeaderElectionRateAndTimeMs`), controller logs",
              "q": "index=kafka sourcetype=\"kafka:controller\"\n| search \"leader\" \"election\"\n| timechart span=15m count as elections\n| where elections > 10",
              "m": "Monitor Kafka controller logs and JMX metrics. Track leader election rate and duration. Alert on elevated election rates. Correlate with broker restarts, network events, and ZooKeeper/KRaft issues.",
              "z": "Line chart (elections over time), Single value (elections per hour), Table (affected topics/partitions).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: JMX, Kafka controller logs.\n• Ensure the following data sources are available: Kafka JMX (`LeaderElectionRateAndTimeMs`), controller logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Kafka controller logs and JMX metrics. Track leader election rate and duration. Alert on elevated election rates. Correlate with broker restarts, network events, and ZooKeeper/KRaft issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kafka sourcetype=\"kafka:controller\"\n| search \"leader\" \"election\"\n| timechart span=15m count as elections\n| where elections > 10\n```\n\nUnderstanding this SPL\n\n**Partition Leader Elections** — Frequent leader elections indicate broker instability, causing temporary unavailability for affected partitions.\n\nDocumented **Data sources**: Kafka JMX (`LeaderElectionRateAndTimeMs`), controller logs. **App/TA** (typical add-on context): JMX, Kafka controller logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kafka; **sourcetype**: kafka:controller. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kafka, sourcetype=\"kafka:controller\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `timechart` plots the metric over time using **span=15m** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where elections > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (elections over time), Single value (elections per hour), Table (affected topics/partitions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that frequent leader elections indicate broker instability, causing temporary unavailability for affected partitions.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kafka"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.10",
              "n": "Message Age Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Old messages in queues indicate processing delays that may violate SLAs. Age tracking provides a business-relevant metric beyond raw queue depth.",
              "t": "Queue management API",
              "d": "RabbitMQ management API (message age), custom consumer timestamp comparison",
              "q": "index=messaging sourcetype=\"rabbitmq:queue\"\n| eval message_age_sec=now()-oldest_message_timestamp\n| where message_age_sec > 300\n| table queue_name, vhost, messages, message_age_sec\n| sort -message_age_sec",
              "m": "Poll message age metrics from queue management APIs. For Kafka, compare consumer offset timestamp with current time. Alert when message age exceeds SLA (e.g., >5 minutes for real-time queues). Differentiate by queue priority.",
              "z": "Table (queues with old messages), Bar chart (message age by queue), Single value (max message age).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Queue management API.\n• Ensure the following data sources are available: RabbitMQ management API (message age), custom consumer timestamp comparison.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll message age metrics from queue management APIs. For Kafka, compare consumer offset timestamp with current time. Alert when message age exceeds SLA (e.g., >5 minutes for real-time queues). Differentiate by queue priority.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=messaging sourcetype=\"rabbitmq:queue\"\n| eval message_age_sec=now()-oldest_message_timestamp\n| where message_age_sec > 300\n| table queue_name, vhost, messages, message_age_sec\n| sort -message_age_sec\n```\n\nUnderstanding this SPL\n\n**Message Age Monitoring** — Old messages in queues indicate processing delays that may violate SLAs. Age tracking provides a business-relevant metric beyond raw queue depth.\n\nDocumented **Data sources**: RabbitMQ management API (message age), custom consumer timestamp comparison. **App/TA** (typical add-on context): Queue management API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: messaging; **sourcetype**: rabbitmq:queue. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=messaging, sourcetype=\"rabbitmq:queue\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **message_age_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where message_age_sec > 300` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Message Age Monitoring**): table queue_name, vhost, messages, message_age_sec\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (queues with old messages), Bar chart (message age by queue), Single value (max message age).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that old messages in queues indicate processing delays that may violate service promises.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.11",
              "n": "RabbitMQ Queue Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Queue depth, consumer count, message rate, and unacknowledged messages indicate message processing health. Growing depth or unacked messages signal consumer lag or failures.",
              "t": "Custom (RabbitMQ Management API)",
              "d": "RabbitMQ Management API (`/api/queues`)",
              "q": "index=messaging sourcetype=\"rabbitmq:queue\"\n| eval unacked_pct=if(messages>0, round(messages_unacknowledged/messages*100,1), 0)\n| where messages > 1000 OR messages_unacknowledged > 100 OR consumer_count==0\n| timechart span=5m max(messages) as queue_depth, avg(messages_unacknowledged) as unacked by vhost, name",
              "m": "Enable RabbitMQ Management Plugin. Poll `/api/queues` via scripted input (curl with auth) every minute. Parse name, vhost, messages, messages_unacknowledged, messages_ready, consumers, message_stats.publish_details.rate, message_stats.deliver_get_details.rate. Forward to Splunk via HEC. Alert when queue depth exceeds threshold, unacked messages grow, or consumer_count drops to 0 for critical queues. Track publish vs deliver rate delta for backlog detection.",
              "z": "Line chart (queue depth and unacked over time), Table (queues with high depth), Single value (queues with no consumers), Bar chart (message rate by queue).",
              "kfp": "Queues and broker metrics swing during rebalancing, replay, or maintenance. We align with change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (RabbitMQ Management API).\n• Ensure the following data sources are available: RabbitMQ Management API (`/api/queues`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable RabbitMQ Management Plugin. Poll `/api/queues` via scripted input (curl with auth) every minute. Parse name, vhost, messages, messages_unacknowledged, messages_ready, consumers, message_stats.publish_details.rate, message_stats.deliver_get_details.rate. Forward to Splunk via HEC. Alert when queue depth exceeds threshold, unacked messages grow, or consumer_count drops to 0 for critical queues. Track publish vs deliver rate delta for backlog detection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=messaging sourcetype=\"rabbitmq:queue\"\n| eval unacked_pct=if(messages>0, round(messages_unacknowledged/messages*100,1), 0)\n| where messages > 1000 OR messages_unacknowledged > 100 OR consumer_count==0\n| timechart span=5m max(messages) as queue_depth, avg(messages_unacknowledged) as unacked by vhost, name\n```\n\nUnderstanding this SPL\n\n**RabbitMQ Queue Monitoring** — Queue depth, consumer count, message rate, and unacknowledged messages indicate message processing health. Growing depth or unacked messages signal consumer lag or failures.\n\nDocumented **Data sources**: RabbitMQ Management API (`/api/queues`). **App/TA** (typical add-on context): Custom (RabbitMQ Management API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: messaging; **sourcetype**: rabbitmq:queue. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=messaging, sourcetype=\"rabbitmq:queue\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **unacked_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where messages > 1000 OR messages_unacknowledged > 100 OR consumer_count==0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by vhost, name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (queue depth and unacked over time), Table (queues with high depth), Single value (queues with no consumers), Bar chart (message rate by queue).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that queue depth, consumer count, message rate, and unacknowledged messages indicate message processing health.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "rabbitmq"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.12",
              "n": "ZooKeeper Ensemble Health",
              "c": "high",
              "f": "intermediate",
              "v": "Leader election state, outstanding requests, and watch count indicate ZooKeeper stability. Frequent leader changes or growing outstanding requests signal ensemble instability affecting Kafka, HBase, and other dependents.",
              "t": "Custom scripted input (ZooKeeper 4-letter commands or AdminServer)",
              "d": "mntr command output, ZooKeeper AdminServer `/commands/monitor`",
              "q": "index=zookeeper sourcetype=\"zookeeper:mntr\"\n| where outstanding_requests > 100 OR (mode!=\"standalone\" AND num_alive_connections < 2)\n| timechart span=5m max(outstanding_requests) as outstanding by host",
              "m": "Enable ZooKeeper AdminServer or use 4-letter commands (`echo mntr | nc localhost 2181`). Poll mntr output every minute via scripted input. Parse mode (leader/follower/standalone), outstanding_requests, num_alive_connections, watch_count, zk_approximate_data_size. Forward to Splunk via HEC. Alert when outstanding_requests exceeds 100 or num_alive_connections drops (ensemble partition). Track leader changes via mode transitions.",
              "z": "Status grid (node × mode), Line chart (outstanding requests over time), Single value (leader node), Table (ensemble health summary).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (ZooKeeper 4-letter commands or AdminServer).\n• Ensure the following data sources are available: mntr command output, ZooKeeper AdminServer `/commands/monitor`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable ZooKeeper AdminServer or use 4-letter commands (`echo mntr | nc localhost 2181`). Poll mntr output every minute via scripted input. Parse mode (leader/follower/standalone), outstanding_requests, num_alive_connections, watch_count, zk_approximate_data_size. Forward to Splunk via HEC. Alert when outstanding_requests exceeds 100 or num_alive_connections drops (ensemble partition). Track leader changes via mode transitions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zookeeper sourcetype=\"zookeeper:mntr\"\n| where outstanding_requests > 100 OR (mode!=\"standalone\" AND num_alive_connections < 2)\n| timechart span=5m max(outstanding_requests) as outstanding by host\n```\n\nUnderstanding this SPL\n\n**ZooKeeper Ensemble Health** — Leader election state, outstanding requests, and watch count indicate ZooKeeper stability. Frequent leader changes or growing outstanding requests signal ensemble instability affecting Kafka, HBase, and other dependents.\n\nDocumented **Data sources**: mntr command output, ZooKeeper AdminServer `/commands/monitor`. **App/TA** (typical add-on context): Custom scripted input (ZooKeeper 4-letter commands or AdminServer). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zookeeper; **sourcetype**: zookeeper:mntr. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zookeeper, sourcetype=\"zookeeper:mntr\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where outstanding_requests > 100 OR (mode!=\"standalone\" AND num_alive_connections < 2)` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (node × mode), Line chart (outstanding requests over time), Single value (leader node), Table (ensemble health summary).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. 
key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that leader election state, outstanding requests, and watch count indicate ZooKeeper stability.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "zookeeper"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.13",
              "n": "Kafka Consumer Lag Monitoring (Consumer Group)",
              "c": "critical",
              "f": "beginner",
              "v": "Lag in messages and approximate time lag per partition for each `group.id`. Tightens UC-8.3.1 with `kafka-consumer-groups` export fields.",
              "t": "Burrow, Kafka Connect, `kafka:consumer_lag` scripted input",
              "d": "`LAG`, `CONSUMER-ID`, `TOPIC`, `PARTITION`",
              "q": "index=kafka sourcetype=\"kafka:consumer_lag\"\n| eval lag_sec=coalesce(lag_seconds, estimated_lag_sec)\n| where lag > 100000 OR lag_sec > 300\n| timechart span=5m max(lag) as max_lag by consumer_group, topic",
              "m": "Poll `kafka-consumer-groups.sh --describe` every minute. Alert on lag > SLA messages or estimated seconds. Exclude bursty batch groups via lookup.",
              "z": "Line chart (lag by group/topic), Heatmap (partition lag), Single value (worst consumer group).",
              "kfp": "Queues grow during consumer maintenance, slow consumers, or burst producer activity. Seasonal traffic can also create sustained backlog without a hard fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Burrow, Kafka Connect, `kafka:consumer_lag` scripted input.\n• Ensure the following data sources are available: `LAG`, `CONSUMER-ID`, `TOPIC`, `PARTITION`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `kafka-consumer-groups.sh --describe` every minute. Alert on lag > SLA messages or estimated seconds. Exclude bursty batch groups via lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kafka sourcetype=\"kafka:consumer_lag\"\n| eval lag_sec=coalesce(lag_seconds, estimated_lag_sec)\n| where lag > 100000 OR lag_sec > 300\n| timechart span=5m max(lag) as max_lag by consumer_group, topic\n```\n\nUnderstanding this SPL\n\n**Kafka Consumer Lag Monitoring (Consumer Group)** — Lag in messages and approximate time lag per partition for each `group.id`. Tightens UC-8.3.1 with `kafka-consumer-groups` export fields.\n\nDocumented **Data sources**: `LAG`, `CONSUMER-ID`, `TOPIC`, `PARTITION`. **App/TA** (typical add-on context): Burrow, Kafka Connect, `kafka:consumer_lag` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kafka; **sourcetype**: kafka:consumer_lag. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kafka, sourcetype=\"kafka:consumer_lag\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **lag_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lag > 100000 OR lag_sec > 300` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by consumer_group, topic** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (lag by group/topic), Heatmap (partition lag), Single value (worst consumer group).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that lag in messages and approximate time lag per partition for each.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kafka"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.14",
              "n": "RabbitMQ Queue Depth Alerts",
              "c": "high",
              "f": "beginner",
              "v": "Per-queue `messages_ready` thresholds with business priority tags. Alert routing by `queue` name pattern (`critical.*`).",
              "t": "RabbitMQ management API",
              "d": "`rabbitmq:queue` `messages`, `messages_ready`, `consumers`",
              "q": "index=messaging sourcetype=\"rabbitmq:queue\"\n| lookup rabbitmq_queue_sla queue_name OUTPUT max_depth\n| where messages_ready > max_depth OR consumers=0 OR consumers IS NULL\n| table vhost name messages_ready consumers max_depth",
              "m": "Maintain SLA lookup per queue. Page on critical queue depth. Auto-scale consumers from orchestrator if integrated.",
              "z": "Line chart (depth vs threshold), Table (breached queues), Single value (queues in alert).",
              "kfp": "Queues grow during consumer maintenance, slow consumers, or burst producer activity. Seasonal traffic can also create sustained backlog without a hard fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: RabbitMQ management API.\n• Ensure the following data sources are available: `rabbitmq:queue` `messages`, `messages_ready`, `consumers`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain SLA lookup per queue. Page on critical queue depth. Auto-scale consumers from orchestrator if integrated.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=messaging sourcetype=\"rabbitmq:queue\"\n| lookup rabbitmq_queue_sla queue_name OUTPUT max_depth\n| where messages_ready > max_depth OR consumers=0 OR consumers IS NULL\n| table vhost name messages_ready consumers max_depth\n```\n\nUnderstanding this SPL\n\n**RabbitMQ Queue Depth Alerts** — Per-queue `messages_ready` thresholds with business priority tags. Alert routing by `queue` name pattern (`critical.*`).\n\nDocumented **Data sources**: `rabbitmq:queue` `messages`, `messages_ready`, `consumers`. **App/TA** (typical add-on context): RabbitMQ management API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: messaging; **sourcetype**: rabbitmq:queue. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=messaging, sourcetype=\"rabbitmq:queue\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where messages_ready > max_depth OR consumers=0 OR consumers IS NULL` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **RabbitMQ Queue Depth Alerts**): table vhost name messages_ready consumers max_depth\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (depth vs threshold), Table (breached queues), Single value (queues in alert).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that -queue thresholds with business priority tags.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "rabbitmq"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.15",
              "n": "Azure Service Bus Dead Letter Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "DLQ message count per topic/subscription and dead-letter reasons (`DeliveryCount`, `ExceptionDescription`) for cloud-native messaging.",
              "t": "Azure Monitor Diagnostic Settings → Splunk",
              "d": "`DeadletteredMessages` metric, operational logs",
              "q": "index=azure sourcetype=\"azure:servicebus:metrics\"\n| where metric_name=\"DeadletteredMessages\" OR EntityName=\"*DeadLetter*\"\n| timechart span=5m sum(Total) as dlq_count by EntityName, SubscriptionName\n| where dlq_count > 0",
              "m": "Enable metrics on topics/subscriptions. Alert on any DLQ growth for tier-1 entities. Sample DLQ messages via separate secure pipeline (not full body in Splunk if PII).",
              "z": "Line chart (DLQ count), Table (entity, subscription, count), Single value (total DLQ messages).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Azure Monitor Diagnostic Settings → Splunk.\n• Ensure the following data sources are available: `DeadletteredMessages` metric, operational logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable metrics on topics/subscriptions. Alert on any DLQ growth for tier-1 entities. Sample DLQ messages via separate secure pipeline (not full body in Splunk if PII).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:servicebus:metrics\"\n| where metric_name=\"DeadletteredMessages\" OR EntityName=\"*DeadLetter*\"\n| timechart span=5m sum(Total) as dlq_count by EntityName, SubscriptionName\n| where dlq_count > 0\n```\n\nUnderstanding this SPL\n\n**Azure Service Bus Dead Letter Monitoring** — DLQ message count per topic/subscription and dead-letter reasons (`DeliveryCount`, `ExceptionDescription`) for cloud-native messaging.\n\nDocumented **Data sources**: `DeadletteredMessages` metric, operational logs. **App/TA** (typical add-on context): Azure Monitor Diagnostic Settings → Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:servicebus:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:servicebus:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where metric_name=\"DeadletteredMessages\" OR EntityName=\"*DeadLetter*\"` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by EntityName, SubscriptionName** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where dlq_count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (DLQ count), Table (entity, subscription, count), Single value (total DLQ messages).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that dLQ message count per topic/subscription and dead-letter reasons (, ) for cloud-native messaging.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.16",
              "n": "Kafka Connect Task Failures",
              "c": "critical",
              "f": "intermediate",
              "v": "Connector `FAILED` state, task failures, and `offset_commit` errors stop data pipelines. Distinct from broker-only monitoring.",
              "t": "Connect worker logs, Connect REST `/status`",
              "d": "`kafka_connect:connector_status`, worker log",
              "q": "index=kafka sourcetype=\"kafka_connect:status\"\n| where connector_state=\"FAILED\" OR task_state=\"FAILED\"\n| stats latest(trace) as err by connector, task_id\n| table connector task_id connector_state task_state err",
              "m": "Poll `/connectors/*/status` every 2m. Alert on any FAILED. Include stack trace first line only for indexing size.",
              "z": "Table (failed connectors/tasks), Timeline (state changes), Single value (open failures).",
              "kfp": "Queues and broker metrics swing during rebalancing, replay, or maintenance. We align with change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Connect worker logs, Connect REST `/status`.\n• Ensure the following data sources are available: `kafka_connect:connector_status`, worker log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `/connectors/*/status` every 2m. Alert on any FAILED. Include stack trace first line only for indexing size.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kafka sourcetype=\"kafka_connect:status\"\n| where connector_state=\"FAILED\" OR task_state=\"FAILED\"\n| stats latest(trace) as err by connector, task_id\n| table connector task_id connector_state task_state err\n```\n\nUnderstanding this SPL\n\n**Kafka Connect Task Failures** — Connector `FAILED` state, task failures, and `offset_commit` errors stop data pipelines. Distinct from broker-only monitoring.\n\nDocumented **Data sources**: `kafka_connect:connector_status`, worker log. **App/TA** (typical add-on context): Connect worker logs, Connect REST `/status`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kafka; **sourcetype**: kafka_connect:status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kafka, sourcetype=\"kafka_connect:status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where connector_state=\"FAILED\" OR task_state=\"FAILED\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by connector, task_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Kafka Connect Task Failures**): table connector task_id connector_state task_state err\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed connectors/tasks), Timeline (state changes), Single value (open failures).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that connector state, task failures, and errors stop data pipelines.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.17",
              "n": "Kafka Topic Partition Skew",
              "c": "high",
              "f": "advanced",
              "v": "Byte size and message count skew across partitions causes hot brokers and uneven consumer lag. Uses `kafka-log-dirs` or broker metrics.",
              "t": "JMX, broker metrics export",
              "d": "`Size` per partition, `LogEndOffset` per partition",
              "q": "index=kafka sourcetype=\"kafka:partition_skew\"\n| eventstats avg(partition_size_bytes) as avg_sz by topic\n| eval skew_pct=round(abs(partition_size_bytes-avg_sz)/avg_sz*100,1)\n| where skew_pct > 25\n| table topic partition partition_size_bytes skew_pct",
              "m": "Nightly job from log size per partition. Alert when skew >25%. Recommend partition key review or reassign.",
              "z": "Bar chart (skew % by partition), Table (top skewed topics), Heatmap (broker × partition size).",
              "kfp": "Queues and broker metrics swing during rebalancing, replay, or maintenance. We align with change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: JMX, broker metrics export.\n• Ensure the following data sources are available: `Size` per partition, `LogEndOffset` per partition.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNightly job from log size per partition. Alert when skew >25%. Recommend partition key review or reassign.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kafka sourcetype=\"kafka:partition_skew\"\n| eventstats avg(partition_size_bytes) as avg_sz by topic\n| eval skew_pct=round(abs(partition_size_bytes-avg_sz)/avg_sz*100,1)\n| where skew_pct > 25\n| table topic partition partition_size_bytes skew_pct\n```\n\nUnderstanding this SPL\n\n**Kafka Topic Partition Skew** — Byte size and message count skew across partitions causes hot brokers and uneven consumer lag. Uses `kafka-log-dirs` or broker metrics.\n\nDocumented **Data sources**: `Size` per partition, `LogEndOffset` per partition. **App/TA** (typical add-on context): JMX, broker metrics export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kafka; **sourcetype**: kafka:partition_skew. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kafka, sourcetype=\"kafka:partition_skew\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eventstats` rolls up events into metrics; results are split **by topic** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **skew_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where skew_pct > 25` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Kafka Topic Partition Skew**): table topic partition partition_size_bytes skew_pct\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (skew % by partition), Table (top skewed topics), Heatmap (broker × partition size).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that byte size and message count skew across partitions causes hot brokers and uneven consumer lag.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.18",
              "n": "RabbitMQ Memory Alarm",
              "c": "critical",
              "f": "beginner",
              "v": "`mem_alarm` blocks publishers when `vm_memory_high_watermark` is hit. Early warning from `memory` and `allocated` fields.",
              "t": "RabbitMQ management API `/api/nodes`",
              "d": "`mem_used`, `mem_limit`, `mem_alarm`",
              "q": "index=messaging sourcetype=\"rabbitmq:node\"\n| where mem_alarm==true OR mem_used/mem_limit > 0.75\n| table _time, name, mem_used, mem_limit, mem_alarm",
              "m": "Poll nodes every minute. Alert at 75% memory or alarm true. Flow control from alarm requires immediate consumer scale-up or queue purge policy.",
              "z": "Gauge (memory % per node), Line chart (mem_used trend), Table (nodes in alarm).",
              "kfp": "Queues and broker metrics swing during rebalancing, replay, or maintenance. We align with change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: RabbitMQ management API `/api/nodes`.\n• Ensure the following data sources are available: `mem_used`, `mem_limit`, `mem_alarm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll nodes every minute. Alert at 75% memory or alarm true. Flow control from alarm requires immediate consumer scale-up or queue purge policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=messaging sourcetype=\"rabbitmq:node\"\n| where mem_alarm==true OR mem_used/mem_limit > 0.75\n| table _time, name, mem_used, mem_limit, mem_alarm\n```\n\nUnderstanding this SPL\n\n**RabbitMQ Memory Alarm** — `mem_alarm` blocks publishers when `vm_memory_high_watermark` is hit. Early warning from `memory` and `allocated` fields.\n\nDocumented **Data sources**: `mem_used`, `mem_limit`, `mem_alarm`. **App/TA** (typical add-on context): RabbitMQ management API `/api/nodes`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: messaging; **sourcetype**: rabbitmq:node. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=messaging, sourcetype=\"rabbitmq:node\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where mem_alarm==true OR mem_used/mem_limit > 0.75` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **RabbitMQ Memory Alarm**): table _time, name, mem_used, mem_limit, mem_alarm\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (memory % per node), Line chart (mem_used trend), Table (nodes in alarm).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that blocks publishers when is hit.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "rabbitmq"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.19",
              "n": "ActiveMQ Broker Store Usage",
              "c": "high",
              "f": "intermediate",
              "v": "Persistent store percent used (KahaDB) or JDBC store growth causes broker pause and producer blocking. JMX `StoreLimit` usage.",
              "t": "ActiveMQ JMX, `activemq` log",
              "d": "`org.apache.activemq:type=Broker` `StoreLimit`, `TempLimit`",
              "q": "index=messaging sourcetype=\"activemq:broker\"\n| eval store_pct=round(store_used/store_limit*100,1)\n| where store_pct > 80\n| timechart span=5m max(store_pct) as pct by broker_name",
              "m": "Poll JMX every 5m. Alert at 80% store. Schedule garbage collection or archive old messages per policy.",
              "z": "Gauge (store %), Line chart (store usage), Table (brokers over threshold).",
              "kfp": "Queues and broker metrics swing during rebalancing, replay, or maintenance. We align with change windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ActiveMQ JMX, `activemq` log.\n• Ensure the following data sources are available: `org.apache.activemq:type=Broker` `StoreLimit`, `TempLimit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll JMX every 5m. Alert at 80% store. Schedule garbage collection or archive old messages per policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=messaging sourcetype=\"activemq:broker\"\n| eval store_pct=round(store_used/store_limit*100,1)\n| where store_pct > 80\n| timechart span=5m max(store_pct) as pct by broker_name\n```\n\nUnderstanding this SPL\n\n**ActiveMQ Broker Store Usage** — Persistent store percent used (KahaDB) or JDBC store growth causes broker pause and producer blocking. JMX `StoreLimit` usage.\n\nDocumented **Data sources**: `org.apache.activemq:type=Broker` `StoreLimit`, `TempLimit`. **App/TA** (typical add-on context): ActiveMQ JMX, `activemq` log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: messaging; **sourcetype**: activemq:broker. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=messaging, sourcetype=\"activemq:broker\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **store_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where store_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by broker_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (store %), Line chart (store usage), Table (brokers over threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that persistent store percent used (KahaDB) or JDBC store growth causes broker pause and producer blocking.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "activemq"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.20",
              "n": "NATS JetStream Consumer Ack Lag",
              "c": "high",
              "f": "intermediate",
              "v": "`NumAckPending`, `NumRedelivered`, and consumer lag for JetStream streams indicate slow consumers or poison messages.",
              "t": "NATS Prometheus exporter, `nats` server varz/jsz",
              "d": "`jetstream_consumer_lag`, `ack_pending`",
              "q": "index=messaging sourcetype=\"nats:jetstream\"\n| where num_ack_pending > 1000 OR num_redelivered > 100\n| stats max(num_ack_pending) as lag by stream_name, consumer_name\n| sort -lag",
              "m": "Scrape `/jsz` or Prometheus metrics. Alert on rising ack_pending. Correlate with consumer pod restarts.",
              "z": "Line chart (ack pending), Table (stream, consumer, lag), Single value (max redelivered).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NATS Prometheus exporter, `nats` server varz/jsz.\n• Ensure the following data sources are available: `jetstream_consumer_lag`, `ack_pending`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScrape `/jsz` or Prometheus metrics. Alert on rising ack_pending. Correlate with consumer pod restarts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=messaging sourcetype=\"nats:jetstream\"\n| where num_ack_pending > 1000 OR num_redelivered > 100\n| stats max(num_ack_pending) as lag by stream_name, consumer_name\n| sort -lag\n```\n\nUnderstanding this SPL\n\n**NATS JetStream Consumer Ack Lag** — `NumAckPending`, `NumRedelivered`, and consumer lag for JetStream streams indicate slow consumers or poison messages.\n\nDocumented **Data sources**: `jetstream_consumer_lag`, `ack_pending`. **App/TA** (typical add-on context): NATS Prometheus exporter, `nats` server varz/jsz. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: messaging; **sourcetype**: nats:jetstream. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=messaging, sourcetype=\"nats:jetstream\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where num_ack_pending > 1000 OR num_redelivered > 100` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by stream_name, consumer_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (ack pending), Table (stream, consumer, lag), Single value (max redelivered).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow NATS JetStream Consumer Ack Lag in plain language so we notice drift or trouble early enough to act calmly.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "prometheus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.3.21",
              "n": "MSMQ Queue Depth Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Message queue buildup indicates application processing failures. Monitoring queue depth prevents message loss and detects downstream system outages.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=Perfmon:MSMQ`",
              "q": "index=perfmon source=\"Perfmon:MSMQ Service\" counter=\"Total Messages in all Queues\"\n| timechart span=5m avg(Value) as AvgQueueDepth by host\n| foreach * [eval <<FIELD>>=round('<<FIELD>>', 0)]",
              "m": "Configure Perfmon input for MSMQ Service counters: Total Messages in all Queues, Total Bytes in all Queues, Sessions. Also monitor individual queue counters via `MSMQ Queue` object. Alert when queue depth exceeds baseline (messages accumulating). Monitor journal queue size for message delivery confirmations. Track dead-letter queue growth for undeliverable messages.",
              "z": "Timechart (queue depth trend), Single value (current depth), Alert on queue growth exceeding threshold.",
              "kfp": "Queues grow during consumer maintenance, slow consumers, or burst producer activity. Seasonal traffic can also create sustained backlog without a hard fault.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=Perfmon:MSMQ`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Perfmon input for MSMQ Service counters: Total Messages in all Queues, Total Bytes in all Queues, Sessions. Also monitor individual queue counters via `MSMQ Queue` object. Alert when queue depth exceeds baseline (messages accumulating). Monitor journal queue size for message delivery confirmations. Track dead-letter queue growth for undeliverable messages.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perfmon source=\"Perfmon:MSMQ Service\" counter=\"Total Messages in all Queues\"\n| timechart span=5m avg(Value) as AvgQueueDepth by host\n| foreach * [eval <<FIELD>>=round('<<FIELD>>', 0)]\n```\n\nUnderstanding this SPL\n\n**MSMQ Queue Depth Monitoring** — Message queue buildup indicates application processing failures. Monitoring queue depth prevents message loss and detects downstream system outages.\n\nDocumented **Data sources**: `sourcetype=Perfmon:MSMQ`. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Iterates over multivalue fields with `foreach`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (queue depth trend), Single value (current depth), Alert on queue growth exceeding threshold.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that message queue buildup indicates application processing failures.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.6,
          "qd": {
            "gold": 0,
            "silver": 2,
            "bronze": 19,
            "none": 0
          }
        },
        {
          "i": "8.4",
          "n": "API Gateways & Service Mesh",
          "u": [
            {
              "i": "8.4.1",
              "n": "API Error Rate by Endpoint",
              "c": "critical",
              "f": "intermediate",
              "v": "Per-endpoint error rates pinpoint failing services, enabling targeted debugging rather than broad investigation.",
              "t": "Custom log input, gateway access logs",
              "d": "API gateway access logs (Kong, Apigee, AWS API Gateway)",
              "q": "index=api sourcetype=\"kong:access\"\n| eval is_error=if(status>=400,1,0)\n| stats count, sum(is_error) as errors by request_uri, upstream_service\n| eval error_rate=round(errors/count*100,2)\n| where error_rate > 5\n| sort -error_rate",
              "m": "Forward API gateway access logs to Splunk. Parse endpoint, status code, latency, and consumer identity. Calculate error rates per endpoint. Alert when any endpoint exceeds error threshold. Break down by 4xx vs 5xx.",
              "z": "Table (endpoints with error rates), Bar chart (error rate by endpoint), Line chart (error rate trend per endpoint).",
              "kfp": "5xx errors spike during deployment rollouts, restart sequences, or backend service maintenance. If the UC mixes classes, split 4xx and 5xx for triage.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom log input, gateway access logs.\n• Ensure the following data sources are available: API gateway access logs (Kong, Apigee, AWS API Gateway).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward API gateway access logs to Splunk. Parse endpoint, status code, latency, and consumer identity. Calculate error rates per endpoint. Alert when any endpoint exceeds error threshold. Break down by 4xx vs 5xx.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=api sourcetype=\"kong:access\"\n| eval is_error=if(status>=400,1,0)\n| stats count, sum(is_error) as errors by request_uri, upstream_service\n| eval error_rate=round(errors/count*100,2)\n| where error_rate > 5\n| sort -error_rate\n```\n\nUnderstanding this SPL\n\n**API Error Rate by Endpoint** — Per-endpoint error rates pinpoint failing services, enabling targeted debugging rather than broad investigation.\n\nDocumented **Data sources**: API gateway access logs (Kong, Apigee, AWS API Gateway). **App/TA** (typical add-on context): Custom log input, gateway access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: api; **sourcetype**: kong:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=api, sourcetype=\"kong:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_error** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by request_uri, upstream_service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.status>=400\n  by Web.src Web.uri_path Web.status span=5m\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**API Error Rate by Endpoint** — Per-endpoint error rates pinpoint failing services, enabling targeted debugging rather than broad investigation.\n\nDocumented **Data sources**: API gateway access logs (Kong, Apigee, AWS API Gateway). **App/TA** (typical add-on context): Custom log input, gateway access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (endpoints with error rates), Bar chart (error rate by endpoint), Line chart (error rate trend per endpoint).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that -endpoint error rates pinpoint failing services, enabling targeted debugging rather than broad investigation.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.status>=400\n  by Web.src Web.uri_path Web.status span=5m\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.2",
              "n": "API Latency Percentiles",
              "c": "high",
              "f": "beginner",
              "v": "P95/P99 latency reveals the experience of the slowest users. Averages hide tail latency problems.",
              "t": "Custom log input, gateway metrics",
              "d": "API gateway access logs with latency fields",
              "q": "index=api sourcetype=\"kong:access\"\n| stats perc50(latency) as p50, perc95(latency) as p95, perc99(latency) as p99 by request_uri\n| where p95 > 1000\n| sort -p99",
              "m": "Ensure gateway logs include request and upstream latency. Calculate p50/p95/p99 per endpoint. Alert when p95 exceeds SLA target. Track percentile trends to detect gradual degradation before it becomes critical.",
              "z": "Line chart (p50/p95/p99 over time), Table (endpoints with high latency), Histogram (latency distribution).",
              "kfp": "Response time spikes during JVM garbage collection, connection pool exhaustion, or backend dependency degradation. Load tests, campaigns, and cold caches also move percentiles.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom log input, gateway metrics.\n• Ensure the following data sources are available: API gateway access logs with latency fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure gateway logs include request and upstream latency. Calculate p50/p95/p99 per endpoint. Alert when p95 exceeds SLA target. Track percentile trends to detect gradual degradation before it becomes critical.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=api sourcetype=\"kong:access\"\n| stats perc50(latency) as p50, perc95(latency) as p95, perc99(latency) as p99 by request_uri\n| where p95 > 1000\n| sort -p99\n```\n\nUnderstanding this SPL\n\n**API Latency Percentiles** — P95/P99 latency reveals the experience of the slowest users. Averages hide tail latency problems.\n\nDocumented **Data sources**: API gateway access logs with latency fields. **App/TA** (typical add-on context): Custom log input, gateway metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: api; **sourcetype**: kong:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=api, sourcetype=\"kong:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by request_uri** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95 > 1000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` perc50(Web.duration) as p50 perc95(Web.duration) as p95 perc99(Web.duration) as p99\n  from datamodel=Web.Web\n  by Web.uri_path span=5m\n| where p95 > 1000\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**API Latency Percentiles** — P95/P99 latency reveals the experience of the slowest users. Averages hide tail latency problems.\n\nDocumented **Data sources**: API gateway access logs with latency fields. **App/TA** (typical add-on context): Custom log input, gateway metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Filters the current rows with `where p95 > 1000` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p50/p95/p99 over time), Table (endpoints with high latency), Histogram (latency distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that slowness reveals the experience of the slowest users.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` perc50(Web.duration) as p50 perc95(Web.duration) as p95 perc99(Web.duration) as p99\n  from datamodel=Web.Web\n  by Web.uri_path span=5m\n| where p95 > 1000",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.3",
              "n": "Rate Limiting Events",
              "c": "medium",
              "f": "beginner",
              "v": "Rate limiting indicates consumers exceeding their quotas. May signal API abuse, misconfigured clients, or quota adjustments needed.",
              "t": "Gateway logs",
              "d": "API gateway rate limit logs (429 responses)",
              "q": "index=api sourcetype=\"kong:access\" status=429\n| stats count by consumer_id, request_uri\n| sort -count",
              "m": "Track 429 responses from API gateway. Identify rate-limited consumers and endpoints. Alert on sustained rate limiting for critical consumers. Review quota configuration if legitimate traffic is being limited.",
              "z": "Bar chart (rate-limited consumers), Line chart (429 rate over time), Table (rate limit events).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Gateway logs.\n• Ensure the following data sources are available: API gateway rate limit logs (429 responses).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack 429 responses from API gateway. Identify rate-limited consumers and endpoints. Alert on sustained rate limiting for critical consumers. Review quota configuration if legitimate traffic is being limited.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=api sourcetype=\"kong:access\" status=429\n| stats count by consumer_id, request_uri\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Rate Limiting Events** — Rate limiting indicates consumers exceeding their quotas. May signal API abuse, misconfigured clients, or quota adjustments needed.\n\nDocumented **Data sources**: API gateway rate limit logs (429 responses). **App/TA** (typical add-on context): Gateway logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: api; **sourcetype**: kong:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=api, sourcetype=\"kong:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by consumer_id, request_uri** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (rate-limited consumers), Line chart (429 rate over time), Table (rate limit events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that rate limiting indicates consumers exceeding their quotas.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.4",
              "n": "Authentication Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Authentication failures may indicate credential compromise, API key rotation issues, or brute-force attacks.",
              "t": "Gateway auth logs",
              "d": "API gateway authentication logs (401/403 responses), OAuth error logs",
              "q": "index=api sourcetype=\"kong:access\" status IN (401, 403)\n| stats count by consumer_id, src, request_uri\n| where count > 50\n| sort -count",
              "m": "Track 401/403 responses with source IP and consumer identity. Alert on high failure rates from single sources (potential brute force). Correlate with successful authentications to detect account compromise patterns.",
              "z": "Table (auth failures by consumer/IP), Line chart (failure rate over time), Geo map (failures by source location).",
              "kfp": "4xx spikes from web crawlers, scanners, or after API routing, key rotation, or WAF and auth policy changes. Legitimate lockouts, password changes, and scheduled penetration tests can also spike 401/403. We correlate with change records and identity team intent before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Gateway auth logs.\n• Ensure the following data sources are available: API gateway authentication logs (401/403 responses), OAuth error logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack 401/403 responses with source IP and consumer identity. Alert on high failure rates from single sources (potential brute force). Correlate with successful authentications to detect account compromise patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=api sourcetype=\"kong:access\" status IN (401, 403)\n| stats count by consumer_id, src, request_uri\n| where count > 50\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Authentication Failures** — Authentication failures may indicate credential compromise, API key rotation issues, or brute-force attacks.\n\nDocumented **Data sources**: API gateway authentication logs (401/403 responses), OAuth error logs. **App/TA** (typical add-on context): Gateway auth logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: api; **sourcetype**: kong:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=api, sourcetype=\"kong:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by consumer_id, src, request_uri** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (auth failures by consumer/IP), Line chart (failure rate over time), Geo map (failures by source location).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that authentication failures may indicate credential compromise, key rotation issues, or brute-force attacks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.5",
              "n": "Service-to-Service Call Failures",
              "c": "critical",
              "f": "intermediate",
              "v": "Inter-service communication failures in microservices architectures cascade quickly. Detection enables rapid isolation of failing services.",
              "t": "Istio/Envoy access logs, Linkerd tap",
              "d": "Envoy access logs (upstream_cluster, response_code), Istio telemetry",
              "q": "index=mesh sourcetype=\"envoy:access\"\n| where response_code >= 500\n| stats count by upstream_cluster, downstream_cluster, response_code\n| sort -count",
              "m": "Configure Envoy/Istio to export access logs to Splunk. Parse source service, destination service, status code, and latency. Build service dependency map. Alert on inter-service error rate spikes. Track per-service error budgets.",
              "z": "Service dependency map (with error highlighting), Table (failing service pairs), Heatmap (service × service error rate).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Istio/Envoy access logs, Linkerd tap.\n• Ensure the following data sources are available: Envoy access logs (upstream_cluster, response_code), Istio telemetry.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Envoy/Istio to export access logs to Splunk. Parse source service, destination service, status code, and latency. Build service dependency map. Alert on inter-service error rate spikes. Track per-service error budgets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mesh sourcetype=\"envoy:access\"\n| where response_code >= 500\n| stats count by upstream_cluster, downstream_cluster, response_code\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Service-to-Service Call Failures** — Inter-service communication failures in microservices architectures cascade quickly. Detection enables rapid isolation of failing services.\n\nDocumented **Data sources**: Envoy access logs (upstream_cluster, response_code), Istio telemetry. **App/TA** (typical add-on context): Istio/Envoy access logs, Linkerd tap. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mesh; **sourcetype**: envoy:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mesh, sourcetype=\"envoy:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where response_code >= 500` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by upstream_cluster, downstream_cluster, response_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Service dependency map (with error highlighting), Table (failing service pairs), Heatmap (service × service error rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that inter-service communication failures in microservices architectures cascade quickly.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "envoy"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.6",
              "n": "Circuit Breaker Activations",
              "c": "critical",
              "f": "intermediate",
              "v": "Circuit breaker trips indicate a downstream service is failing. Quick detection enables proactive communication and remediation.",
              "t": "Service mesh metrics, Envoy stats",
              "d": "Envoy cluster stats (circuit breaker metrics), Istio DestinationRule events",
              "q": "index=mesh sourcetype=\"envoy:stats\"\n| search \"circuit_breaker\" \"cx_open\" OR \"rq_open\"\n| stats count by upstream_cluster\n| where count > 0",
              "m": "Monitor Envoy circuit breaker metrics. Alert on any circuit breaker opening. Track circuit breaker state transitions. Correlate with upstream service health to validate circuit breaker configuration thresholds.",
              "z": "Status grid (service × circuit breaker state), Timeline (circuit breaker events), Table (active circuit breakers).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Service mesh metrics, Envoy stats.\n• Ensure the following data sources are available: Envoy cluster stats (circuit breaker metrics), Istio DestinationRule events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Envoy circuit breaker metrics. Alert on any circuit breaker opening. Track circuit breaker state transitions. Correlate with upstream service health to validate circuit breaker configuration thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mesh sourcetype=\"envoy:stats\"\n| search \"circuit_breaker\" \"cx_open\" OR \"rq_open\"\n| stats count by upstream_cluster\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Circuit Breaker Activations** — Circuit breaker trips indicate a downstream service is failing. Quick detection enables proactive communication and remediation.\n\nDocumented **Data sources**: Envoy cluster stats (circuit breaker metrics), Istio DestinationRule events. **App/TA** (typical add-on context): Service mesh metrics, Envoy stats. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mesh; **sourcetype**: envoy:stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mesh, sourcetype=\"envoy:stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by upstream_cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (service × circuit breaker state), Timeline (circuit breaker events), Table (active circuit breakers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that circuit breaker trips indicate a downstream service is failing.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "envoy"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.7",
              "n": "API Consumer Usage Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Usage tracking per API consumer enables billing, quota management, and partner relationship management.",
              "t": "Gateway access logs",
              "d": "API gateway logs with consumer identification (API key, OAuth client ID)",
              "q": "index=api sourcetype=\"kong:access\"\n| stats count, sum(request_size) as total_bytes, avg(latency) as avg_latency by consumer_id\n| sort -count",
              "m": "Ensure API gateway logs include consumer identity. Aggregate usage by consumer, endpoint, and time period. Create monthly usage reports for billing/chargeback. Track usage trends per consumer for capacity planning.",
              "z": "Table (consumer usage summary), Bar chart (top consumers), Line chart (usage trends per consumer), Pie chart (traffic by consumer).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Gateway access logs.\n• Ensure the following data sources are available: API gateway logs with consumer identification (API key, OAuth client ID).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure API gateway logs include consumer identity. Aggregate usage by consumer, endpoint, and time period. Create monthly usage reports for billing/chargeback. Track usage trends per consumer for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=api sourcetype=\"kong:access\"\n| stats count, sum(request_size) as total_bytes, avg(latency) as avg_latency by consumer_id\n| sort -count\n```\n\nUnderstanding this SPL\n\n**API Consumer Usage Tracking** — Usage tracking per API consumer enables billing, quota management, and partner relationship management.\n\nDocumented **Data sources**: API gateway logs with consumer identification (API key, OAuth client ID). **App/TA** (typical add-on context): Gateway access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: api; **sourcetype**: kong:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=api, sourcetype=\"kong:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by consumer_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (consumer usage summary), Bar chart (top consumers), Line chart (usage trends per consumer), Pie chart (traffic by consumer).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that usage tracking per consumer enables billing, quota management, and partner relationship management.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.8",
              "n": "mTLS Certificate Expiration",
              "c": "critical",
              "f": "intermediate",
              "v": "Expired mTLS certificates break service-to-service communication, causing complete mesh failures. Proactive monitoring is essential.",
              "t": "Service mesh metrics, scripted input",
              "d": "Istio/Linkerd certificate metadata, `istioctl proxy-config` output",
              "q": "index=mesh sourcetype=\"istio:cert_status\"\n| eval days_until_expiry=round((cert_expiry_epoch-now())/86400)\n| where days_until_expiry < 7\n| table service, namespace, days_until_expiry, issuer\n| sort days_until_expiry",
              "m": "Monitor Istio/Linkerd certificate lifetimes. For auto-rotated certs, verify rotation is working by tracking cert age. Alert when certs approach expiry or rotation fails. Monitor CA health (Citadel, cert-manager).",
              "z": "Table (certs with expiry), Single value (certs expiring within 7d), Timeline (cert rotation events).",
              "kfp": "Short-lived or staging certificates, planned rotations, and CA delays can look like an imminent outage. Certificates approaching expiry should trigger renewal; alerts before the renewal date are expected.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Service mesh metrics, scripted input.\n• Ensure the following data sources are available: Istio/Linkerd certificate metadata, `istioctl proxy-config` output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Istio/Linkerd certificate lifetimes. For auto-rotated certs, verify rotation is working by tracking cert age. Alert when certs approach expiry or rotation fails. Monitor CA health (Citadel, cert-manager).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mesh sourcetype=\"istio:cert_status\"\n| eval days_until_expiry=round((cert_expiry_epoch-now())/86400)\n| where days_until_expiry < 7\n| table service, namespace, days_until_expiry, issuer\n| sort days_until_expiry\n```\n\nUnderstanding this SPL\n\n**mTLS Certificate Expiration** — Expired mTLS certificates break service-to-service communication, causing complete mesh failures. Proactive monitoring is essential.\n\nDocumented **Data sources**: Istio/Linkerd certificate metadata, `istioctl proxy-config` output. **App/TA** (typical add-on context): Service mesh metrics, scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mesh; **sourcetype**: istio:cert_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mesh, sourcetype=\"istio:cert_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_until_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_until_expiry < 7` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **mTLS Certificate Expiration**): table service, namespace, days_until_expiry, issuer\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (certs with expiry), Single value (certs expiring within 7d), Timeline (cert rotation events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that expired mTLS certificates break service-to-service communication, causing complete mesh failures.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.9",
              "n": "HAProxy Backend and Frontend Health",
              "c": "high",
              "f": "intermediate",
              "v": "Backend server state, connection queue depth, and HTTP response distribution indicate load balancer effectiveness and backend capacity. Detection of DOWN backends or saturated queues enables rapid failover and scaling decisions.",
              "t": "Custom (HAProxy stats socket/CSV, syslog)",
              "d": "HAProxy stats CSV (`/haproxy?stats;csv`), syslog",
              "q": "index=haproxy sourcetype=\"haproxy:stats\"\n| eval backend_status=case(status==\"UP\",1, status==\"DOWN\",0, 1=1,null())\n| stats latest(backend_status) as up, latest(qcur) as queue_depth, latest(scur) as sessions by pxname, svname, type\n| where type==\"backend\" AND (up==0 OR queue_depth>10)\n| table pxname, svname, up, queue_depth, sessions",
              "m": "Enable HAProxy stats via `stats uri /haproxy?stats` and `stats enable` in the frontend. Poll stats CSV via scripted input (curl or socket) every 30–60 seconds. Parse backend/frontend rows; extract status (UP/DOWN), qcur (current queued requests), scur (current sessions), and response code distribution. Forward to Splunk via HEC. Alert when any backend is DOWN or queue_depth exceeds 10. Correlate with syslog for connection errors and backend health transitions.",
              "z": "Status grid (backend × health), Table (backends with queue depth), Line chart (queue depth over time), Single value (DOWN backends count).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (HAProxy stats socket/CSV, syslog).\n• Ensure the following data sources are available: HAProxy stats CSV (`/haproxy?stats;csv`), syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable HAProxy stats via `stats uri /haproxy?stats` and `stats enable` in the frontend. Poll stats CSV via scripted input (curl or socket) every 30–60 seconds. Parse backend/frontend rows; extract status (UP/DOWN), qcur (current queued requests), scur (current sessions), and response code distribution. Forward to Splunk via HEC. Alert when any backend is DOWN or queue_depth exceeds 10. Correlate with syslog for connection errors and backend health transitions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=haproxy sourcetype=\"haproxy:stats\"\n| eval backend_status=case(status==\"UP\",1, status==\"DOWN\",0, 1=1,null())\n| stats latest(backend_status) as up, latest(qcur) as queue_depth, latest(scur) as sessions by pxname, svname, type\n| where type==\"backend\" AND (up==0 OR queue_depth>10)\n| table pxname, svname, up, queue_depth, sessions\n```\n\nUnderstanding this SPL\n\n**HAProxy Backend and Frontend Health** — Backend server state, connection queue depth, and HTTP response distribution indicate load balancer effectiveness and backend capacity. Detection of DOWN backends or saturated queues enables rapid failover and scaling decisions.\n\nDocumented **Data sources**: HAProxy stats CSV (`/haproxy?stats;csv`), syslog. **App/TA** (typical add-on context): Custom (HAProxy stats socket/CSV, syslog). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: haproxy; **sourcetype**: haproxy:stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=haproxy, sourcetype=\"haproxy:stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **backend_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by pxname, svname, type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where type==\"backend\" AND (up==0 OR queue_depth>10)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HAProxy Backend and Frontend Health**): table pxname, svname, up, queue_depth, sessions\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Status grid (backend × health), Table (backends with queue depth), Line chart (queue depth over time), Single value (DOWN backends count).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
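              "script": "#!/usr/bin/env bash\n# Hypothetical sketch: pull the HAProxy stats CSV for indexing (matches the stats uri from Step 1).\n# Assumes the stats page is exposed on localhost:8404; appending ';csv' returns machine-readable rows.\ncurl -s 'http://localhost:8404/haproxy?stats;csv'\n# Alternative via the admin socket, if 'stats socket /var/run/haproxy.sock' is configured:\n# echo 'show stat' | socat stdio /var/run/haproxy.sock",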
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that backend server state, connection queue depth, and HTTP response distribution indicate load balancer effectiveness and backend capacity.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "haproxy",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.10",
              "n": "Kong Rate Limit Violations",
              "c": "medium",
              "f": "beginner",
              "v": "Kong `rate_limiting` plugin log lines and `429` with `RateLimit-*` headers. Identifies abusive consumers vs tight quotas.",
              "t": "Kong admin/access logs",
              "d": "`kong:access` `status=429`, `rate_limiting` plugin log",
              "q": "index=api sourcetype=\"kong:access\" status=429\n| stats count by consumer_id, request_uri, rate_limit_plugin\n| sort -count\n| head 50",
              "m": "Enable plugin logging. Baseline 429s per consumer. Alert on spike vs baseline or new consumer_id hitting limit.",
              "z": "Bar chart (429 by consumer), Line chart (429 rate), Table (top limited routes).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kong admin/access logs.\n• Ensure the following data sources are available: `kong:access` `status=429`, `rate_limiting` plugin log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable plugin logging. Baseline 429s per consumer. Alert on spike vs baseline or new consumer_id hitting limit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=api sourcetype=\"kong:access\" status=429\n| stats count by consumer_id, request_uri, rate_limit_plugin\n| sort -count\n| head 50\n```\n\nUnderstanding this SPL\n\n**Kong Rate Limit Violations** — Kong `rate_limiting` plugin log lines and `429` with `RateLimit-*` headers. Identifies abusive consumers vs tight quotas.\n\nDocumented **Data sources**: `kong:access` `status=429`, `rate_limiting` plugin log. **App/TA** (typical add-on context): Kong admin/access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: api; **sourcetype**: kong:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=api, sourcetype=\"kong:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by consumer_id, request_uri, rate_limit_plugin** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Kong Rate Limit Violations** — Kong `rate_limiting` plugin log lines and `429` with `RateLimit-*` headers. Identifies abusive consumers vs tight quotas.\n\nDocumented **Data sources**: `kong:access` `status=429`, `rate_limiting` plugin log. **App/TA** (typical add-on context): Kong admin/access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (429 by consumer), Line chart (429 rate), Table (top limited routes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that kong plugin log lines and with headers.",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.11",
              "n": "AWS API Gateway 4xx/5xx Trends",
              "c": "critical",
              "f": "beginner",
              "v": "CloudWatch `4XXError`, `5XXError`, `Latency` per API stage. Single pane for serverless API frontends.",
              "t": "`Splunk_TA_aws` CloudWatch",
              "d": "`AWS/ApiGateway` metrics, execution logs",
              "q": "index=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ApiGateway\" metric_name IN (\"4XXError\",\"5XXError\")\n| timechart span=5m sum(Sum) as err by ApiName, Stage, metric_name\n| where err > 0",
              "m": "Enable detailed metrics per stage. Alert on 5XX >0 sustained or 4XX spike vs baseline. Join with Lambda logs for root cause.",
              "z": "Stacked area (4xx vs 5xx), Line chart (error rate), Table (API, stage, errors).",
              "kfp": "5xx errors spike during deployment rollouts, restart sequences, or backend service maintenance. Health-check noise and synthetic traffic can add false volume if not filtered.",
              "refs": "[Splunk_TA_aws](https://splunkbase.splunk.com/app/1876), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_aws` CloudWatch.\n• Ensure the following data sources are available: `AWS/ApiGateway` metrics, execution logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable detailed metrics per stage. Alert on 5XX >0 sustained or 4XX spike vs baseline. Join with Lambda logs for root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudwatch\" namespace=\"AWS/ApiGateway\" metric_name IN (\"4XXError\",\"5XXError\")\n| timechart span=5m sum(Sum) as err by ApiName, Stage, metric_name\n| where err > 0\n```\n\nUnderstanding this SPL\n\n**AWS API Gateway 4xx/5xx Trends** — CloudWatch `4XXError`, `5XXError`, `Latency` per API stage. Single pane for serverless API frontends.\n\nDocumented **Data sources**: `AWS/ApiGateway` metrics, execution logs. **App/TA** (typical add-on context): `Splunk_TA_aws` CloudWatch. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by ApiName, Stage, metric_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where err > 0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**AWS API Gateway 4xx/5xx Trends** — CloudWatch `4XXError`, `5XXError`, `Latency` per API stage. Single pane for serverless API frontends.\n\nDocumented **Data sources**: `AWS/ApiGateway` metrics, execution logs. **App/TA** (typical add-on context): `Splunk_TA_aws` CloudWatch. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area (4xx vs 5xx), Line chart (error rate), Table (API, stage, errors).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that cloudWatch, per stage.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=5m | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for AWS",
                "id": 1876,
                "url": "https://splunkbase.splunk.com/app/1876"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.12",
              "n": "Apigee Policy Violations",
              "c": "high",
              "f": "intermediate",
              "v": "Apigee analytics API or syslog with `fault` policy name (SOAPThreat, JSONThreat, Quota, SpikeArrest) for blocked requests.",
              "t": "Apigee export (BigQuery/Splunk), `apigee:analytics`",
              "d": "`fault` policy, `developer_app`, `response_status_code`",
              "q": "index=api sourcetype=\"apigee:analytics\"\n| where isnotnull(fault_policy) OR response_status_code=\"429\"\n| stats count by fault_policy, proxy_name, developer_app\n| sort -count",
              "m": "Ingest nightly or hourly analytics. Alert on new fault_policy or high `SpikeArrest` counts. Tune policies vs false positives.",
              "z": "Bar chart (faults by policy), Table (proxy, policy, count), Line chart (policy violations over time).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Apigee export (BigQuery/Splunk), `apigee:analytics`.\n• Ensure the following data sources are available: `fault` policy, `developer_app`, `response_status_code`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest nightly or hourly analytics. Alert on new fault_policy or high `SpikeArrest` counts. Tune policies vs false positives.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=api sourcetype=\"apigee:analytics\"\n| where isnotnull(fault_policy) OR response_status_code=\"429\"\n| stats count by fault_policy, proxy_name, developer_app\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Apigee Policy Violations** — Apigee analytics API or syslog with `fault` policy name (SOAPThreat, JSONThreat, Quota, SpikeArrest) for blocked requests.\n\nDocumented **Data sources**: `fault` policy, `developer_app`, `response_status_code`. **App/TA** (typical add-on context): Apigee export (BigQuery/Splunk), `apigee:analytics`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: api; **sourcetype**: apigee:analytics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=api, sourcetype=\"apigee:analytics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(fault_policy) OR response_status_code=\"429\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by fault_policy, proxy_name, developer_app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Apigee Policy Violations** — Apigee analytics API or syslog with `fault` policy name (SOAPThreat, JSONThreat, Quota, SpikeArrest) for blocked requests.\n\nDocumented **Data sources**: `fault` policy, `developer_app`, `response_status_code`. **App/TA** (typical add-on context): Apigee export (BigQuery/Splunk), `apigee:analytics`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (faults by policy), Table (proxy, policy, count), Line chart (policy violations over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that apigee analytics or syslog with policy name (SOAPThreat, JSONThreat, Quota, SpikeArrest) for blocked requests.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.13",
              "n": "API Response Time SLA Breaches",
              "c": "critical",
              "f": "beginner",
              "v": "p95/p99 latency from gateway access logs vs documented SLA per route (`/api/v1/orders`). Complements UC-8.4.2 with SLA lookup join.",
              "t": "Kong, Envoy, AWS API GW access logs",
              "d": "`latency`, `request_uri`, `route_id`",
              "q": "index=api sourcetype=\"kong:access\"\n| lookup api_route_sla route_uri OUTPUT p95_ms_sla\n| stats perc95(latency) as p95 by route_uri\n| where p95 > p95_ms_sla\n| table route_uri p95 p95_ms_sla",
              "m": "Maintain SLA lookup per route. Run every 15m. Alert on breach for 3 consecutive windows. Exclude OPTIONS from stats.",
              "z": "Line chart (p95 vs SLA), Table (breached routes), Heatmap (route × hour).",
              "kfp": "Response time spikes during JVM garbage collection, connection pool exhaustion, or backend dependency degradation. Load tests, campaigns, and cold caches also move percentiles.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kong, Envoy, AWS API GW access logs.\n• Ensure the following data sources are available: `latency`, `request_uri`, `route_id`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain SLA lookup per route. Run every 15m. Alert on breach for 3 consecutive windows. Exclude OPTIONS from stats.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=api sourcetype=\"kong:access\"\n| lookup api_route_sla route_uri OUTPUT p95_ms_sla\n| stats perc95(latency) as p95 by route_uri\n| where p95 > p95_ms_sla\n| table route_uri p95 p95_ms_sla\n```\n\nUnderstanding this SPL\n\n**API Response Time SLA Breaches** — p95/p99 latency from gateway access logs vs documented SLA per route (`/api/v1/orders`). Complements UC-8.4.2 with SLA lookup join.\n\nDocumented **Data sources**: `latency`, `request_uri`, `route_id`. **App/TA** (typical add-on context): Kong, Envoy, AWS API GW access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: api; **sourcetype**: kong:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=api, sourcetype=\"kong:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by route_uri** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95 > p95_ms_sla` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **API Response Time SLA Breaches**): table route_uri p95 p95_ms_sla\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**API Response Time SLA Breaches** — p95/p99 latency from gateway access logs vs documented SLA per route (`/api/v1/orders`). Complements UC-8.4.2 with SLA lookup join.\n\nDocumented **Data sources**: `latency`, `request_uri`, `route_id`. **App/TA** (typical add-on context): Kong, Envoy, AWS API GW access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95 vs SLA), Table (breached routes), Heatmap (route × hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that slowness from gateway access logs vs documented service promises per route.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "aws",
                "envoy"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.14",
              "n": "API Key Abuse Detection",
              "c": "high",
              "f": "advanced",
              "v": "Unusual volume of requests per API key or key used from many distinct IPs/countries in short window.",
              "t": "Gateway logs with `consumer_id` or `api_key` hash",
              "d": "`kong:access` `credential_id`, `src`",
              "q": "index=api sourcetype=\"kong:access\"\n| bin _time span=1h\n| stats count, dc(src) as ips by credential_id, _time\n| where count > 10000 OR ips > 50\n| table credential_id count ips",
              "m": "Never log raw API keys. Use hashed id. Baseline per credential. Alert on volume or IP diversity anomaly. Integrate with IP reputation.",
              "z": "Table (credential, count, ips), Map (src), Timeline (abuse spikes).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Gateway logs with `consumer_id` or `api_key` hash.\n• Ensure the following data sources are available: `kong:access` `credential_id`, `src`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNever log raw API keys. Use hashed id. Baseline per credential. Alert on volume or IP diversity anomaly. Integrate with IP reputation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=api sourcetype=\"kong:access\"\n| bin _time span=1h\n| stats count, dc(src) as ips by credential_id, _time\n| where count > 10000 OR ips > 50\n| table credential_id count ips\n```\n\nUnderstanding this SPL\n\n**API Key Abuse Detection** — Unusual volume of requests per API key or key used from many distinct IPs/countries in short window.\n\nDocumented **Data sources**: `kong:access` `credential_id`, `src`. **App/TA** (typical add-on context): Gateway logs with `consumer_id` or `api_key` hash. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: api; **sourcetype**: kong:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=api, sourcetype=\"kong:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by credential_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10000 OR ips > 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **API Key Abuse Detection**): table credential_id count ips\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(Web.src) as agg_value from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**API Key Abuse Detection** — Unusual volume of requests per API key or key used from many distinct IPs/countries in short window.\n\nDocumented **Data sources**: `kong:access` `credential_id`, `src`. **App/TA** (typical add-on context): Gateway logs with `consumer_id` or `api_key` hash. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (credential, count, ips), Map (src), Timeline (abuse spikes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that unusual volume of requests per key or key used from many distinct /countries in short window.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t dc(Web.src) as agg_value from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - agg_value",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.15",
              "n": "GraphQL Query Depth Violations",
              "c": "medium",
              "f": "intermediate",
              "v": "Depth/complexity limit errors from Apollo/GraphQL server logs prevent DoS via deep queries.",
              "t": "Application logs, GraphQL gateway",
              "d": "`graphql:request` `depth`, `errors`, `operationName`",
              "q": "index=application sourcetype=\"graphql:server\"\n| search \"depth limit\" OR \"complexity\" OR \"Query is too deep\"\n| stats count by operationName, client_name, depth\n| where count > 10",
              "m": "Log structured rejection reason. Alert on high rejection rate from single client or operation. Tune limits for legitimate mobile apps.",
              "z": "Table (operation, depth, count), Bar chart (rejections by client), Line chart (depth violations over time).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Application logs, GraphQL gateway.\n• Ensure the following data sources are available: `graphql:request` `depth`, `errors`, `operationName`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog structured rejection reason. Alert on high rejection rate from single client or operation. Tune limits for legitimate mobile apps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=application sourcetype=\"graphql:server\"\n| search \"depth limit\" OR \"complexity\" OR \"Query is too deep\"\n| stats count by operationName, client_name, depth\n| where count > 10\n```\n\nUnderstanding this SPL\n\n**GraphQL Query Depth Violations** — Depth/complexity limit errors from Apollo/GraphQL server logs prevent DoS via deep queries.\n\nDocumented **Data sources**: `graphql:request` `depth`, `errors`, `operationName`. **App/TA** (typical add-on context): Application logs, GraphQL gateway. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: application; **sourcetype**: graphql:server. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=application, sourcetype=\"graphql:server\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by operationName, client_name, depth** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GraphQL Query Depth Violations** — Depth/complexity limit errors from Apollo/GraphQL server logs prevent DoS via deep queries.\n\nDocumented **Data sources**: `graphql:request` `depth`, `errors`, `operationName`. **App/TA** (typical add-on context): Application logs, GraphQL gateway. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operation, depth, count), Bar chart (rejections by client), Line chart (depth violations over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that depth/complexity limit errors from Apollo/GraphQL server logs prevent DoS using deep queries.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.4.16",
              "n": "API Version Deprecation Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Traffic to `/v1/` deprecated routes vs `/v2/` for migration planning. Header `Sunset` or path-based routing logs.",
              "t": "API gateway access logs",
              "d": "`request_uri` path version segment, `X-API-Version`",
              "q": "index=api sourcetype=\"kong:access\"\n| rex field=request_uri \"^/api/v(?<api_version>\\d+)/\"\n| stats count by api_version, request_uri\n| lookup api_version_deprecation api_version OUTPUT sunset_epoch\n| eval days_to_sunset=round((sunset_epoch-now())/86400)\n| where days_to_sunset < 90 AND api_version=\"1\"",
              "m": "Maintain deprecation calendar lookup. Weekly report of traffic still on old versions. Alert on any `/v1/*` usage after sunset date.",
              "z": "Pie chart (traffic by version), Line chart (v1 traffic trend), Table (routes still on deprecated version).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: API gateway access logs.\n• Ensure the following data sources are available: `request_uri` path version segment, `X-API-Version`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain deprecation calendar lookup. Weekly report of traffic still on old versions. Alert on any `/v1/*` usage after sunset date.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=api sourcetype=\"kong:access\"\n| rex field=request_uri \"^/api/v(?<api_version>\\d+)/\"\n| stats count by api_version, request_uri\n| lookup api_version_deprecation api_version OUTPUT sunset_epoch\n| eval days_to_sunset=round((sunset_epoch-now())/86400)\n| where days_to_sunset < 90 AND api_version=\"1\"\n```\n\nUnderstanding this SPL\n\n**API Version Deprecation Tracking** — Traffic to `/v1/` deprecated routes vs `/v2/` for migration planning. Header `Sunset` or path-based routing logs.\n\nDocumented **Data sources**: `request_uri` path version segment, `X-API-Version`. **App/TA** (typical add-on context): API gateway access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: api; **sourcetype**: kong:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=api, sourcetype=\"kong:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by api_version, request_uri** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **days_to_sunset** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_sunset < 90 AND api_version=\"1\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**API Version Deprecation Tracking** — Traffic to `/v1/` deprecated routes vs `/v2/` for migration planning. Header `Sunset` or path-based routing logs.\n\nDocumented **Data sources**: `request_uri` path version segment, `X-API-Version`. **App/TA** (typical add-on context): API gateway access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (traffic by version), Line chart (v1 traffic trend), Table (routes still on deprecated version).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that traffic to deprecated routes vs for migration planning.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 16,
            "none": 0
          }
        },
        {
          "i": "8.5",
          "n": "Caching Layers",
          "u": [
            {
              "i": "8.5.1",
              "n": "Cache Hit/Miss Ratio",
              "c": "high",
              "f": "advanced",
              "v": "Cache hit ratio directly measures cache effectiveness. Declining ratio means more backend load and higher latency.",
              "t": "Custom scripted input (`redis-cli INFO`)",
              "d": "Redis INFO stats, Memcached stats, Varnish stats",
              "q": "index=cache sourcetype=\"redis:info\"\n| eval hit_ratio=round(keyspace_hits/(keyspace_hits+keyspace_misses)*100,2)\n| timechart span=5m avg(hit_ratio) as cache_hit_pct by host\n| where cache_hit_pct < 90",
              "m": "Run `redis-cli INFO` via scripted input every minute. Parse keyspace_hits and keyspace_misses. Calculate hit ratio. Alert when ratio drops below 90%. Correlate with application deployment events (new code may change access patterns).",
              "z": "Gauge (hit ratio %), Line chart (hit ratio over time), Single value (current hit ratio).",
              "kfp": "Cache misses spike after cache flush, deploys with new cache keys, or during warm-up after restart. Compare with those events before treating as a fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`redis-cli INFO`).\n• Ensure the following data sources are available: Redis INFO stats, Memcached stats, Varnish stats.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun `redis-cli INFO` via scripted input every minute. Parse keyspace_hits and keyspace_misses. Calculate hit ratio. Alert when ratio drops below 90%. Correlate with application deployment events (new code may change access patterns).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cache sourcetype=\"redis:info\"\n| eval hit_ratio=round(keyspace_hits/(keyspace_hits+keyspace_misses)*100,2)\n| timechart span=5m avg(hit_ratio) as cache_hit_pct by host\n| where cache_hit_pct < 90\n```\n\nUnderstanding this SPL\n\n**Cache Hit/Miss Ratio** — Cache hit ratio directly measures cache effectiveness. Declining ratio means more backend load and higher latency.\n\nDocumented **Data sources**: Redis INFO stats, Memcached stats, Varnish stats. **App/TA** (typical add-on context): Custom scripted input (`redis-cli INFO`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cache; **sourcetype**: redis:info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cache, sourcetype=\"redis:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where cache_hit_pct < 90` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (hit ratio %), Line chart (hit ratio over time), Single value (current hit ratio).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that cache hit ratio directly measures cache effectiveness.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "redis"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.5.2",
              "n": "Memory Utilization",
              "c": "high",
              "f": "beginner",
              "v": "Cache memory exhaustion triggers evictions, degrading performance. Monitoring enables timely scaling.",
              "t": "Custom scripted input",
              "d": "Redis INFO memory, Memcached stats",
              "q": "index=cache sourcetype=\"redis:info\"\n| eval mem_pct=round(used_memory/maxmemory*100,1)\n| timechart span=5m max(mem_pct) as memory_pct by host\n| where memory_pct > 85",
              "m": "Poll memory metrics every minute. Track used vs max memory and RSS vs used ratio (fragmentation). Alert at 85% memory usage. Monitor memory fragmentation ratio — values >1.5 indicate excessive fragmentation.",
              "z": "Gauge (% memory used), Line chart (memory usage over time), Table (instances approaching limit).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input.\n• Ensure the following data sources are available: Redis INFO memory, Memcached stats.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll memory metrics every minute. Track used vs max memory and RSS vs used ratio (fragmentation). Alert at 85% memory usage. Monitor memory fragmentation ratio — values >1.5 indicate excessive fragmentation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cache sourcetype=\"redis:info\"\n| eval mem_pct=round(used_memory/maxmemory*100,1)\n| timechart span=5m max(mem_pct) as memory_pct by host\n| where memory_pct > 85\n```\n\nUnderstanding this SPL\n\n**Memory Utilization** — Cache memory exhaustion triggers evictions, degrading performance. Monitoring enables timely scaling.\n\nDocumented **Data sources**: Redis INFO memory, Memcached stats. **App/TA** (typical add-on context): Custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cache; **sourcetype**: redis:info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cache, sourcetype=\"redis:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mem_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where memory_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (% memory used), Line chart (memory usage over time), Table (instances approaching limit).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that cache memory exhaustion triggers evictions, degrading performance.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.5.3",
              "n": "Eviction Rate Trending",
              "c": "high",
              "f": "beginner",
              "v": "High eviction rates mean the cache is too small, causing frequent backend roundtrips. Tracking guides capacity decisions.",
              "t": "Custom scripted input",
              "d": "Redis INFO stats (evicted_keys), Memcached stats (evictions)",
              "q": "index=cache sourcetype=\"redis:info\"\n| timechart span=5m per_second(evicted_keys) as eviction_rate by host\n| where eviction_rate > 10",
              "m": "Track evicted_keys counter over time. Calculate eviction rate per second. Alert when eviction rate exceeds threshold. Correlate with memory usage — evictions with memory below max indicates maxmemory-policy is active.",
              "z": "Line chart (eviction rate over time), Single value (current eviction rate), Dual-axis (evictions + memory usage).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input.\n• Ensure the following data sources are available: Redis INFO stats (evicted_keys), Memcached stats (evictions).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack evicted_keys counter over time. Calculate eviction rate per second. Alert when eviction rate exceeds threshold. Correlate with memory usage — evictions with memory below max indicates maxmemory-policy is active.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cache sourcetype=\"redis:info\"\n| timechart span=5m per_second(evicted_keys) as eviction_rate by host\n| where eviction_rate > 10\n```\n\nUnderstanding this SPL\n\n**Eviction Rate Trending** — High eviction rates mean the cache is too small, causing frequent backend roundtrips. Tracking guides capacity decisions.\n\nDocumented **Data sources**: Redis INFO stats (evicted_keys), Memcached stats (evictions). **App/TA** (typical add-on context): Custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cache; **sourcetype**: redis:info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cache, sourcetype=\"redis:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where eviction_rate > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (eviction rate over time), Single value (current eviction rate), Dual-axis (evictions + memory usage).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that high eviction rates mean the cache is too small, causing frequent backend roundtrips.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.5.4",
              "n": "Connection Count Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "Approaching connection limits causes client rejections. Monitoring enables proactive limit adjustment.",
              "t": "Custom scripted input",
              "d": "Redis INFO clients, Memcached stats",
              "q": "index=cache sourcetype=\"redis:info\"\n| timechart span=5m max(connected_clients) as clients, max(maxclients) as limit by host\n| eval pct=round(clients/limit*100,1)\n| where pct > 80",
              "m": "Poll connection metrics every minute. Track connected clients vs maxclients setting. Alert at 80% threshold. Monitor rejected connections counter for actual connection refusals.",
              "z": "Line chart (connections over time), Gauge (% of max), Single value (current connections).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input.\n• Ensure the following data sources are available: Redis INFO clients, Memcached stats.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll connection metrics every minute. Track connected clients vs maxclients setting. Alert at 80% threshold. Monitor rejected connections counter for actual connection refusals.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cache sourcetype=\"redis:info\"\n| timechart span=5m max(connected_clients) as clients, max(maxclients) as limit by host\n| eval pct=round(clients/limit*100,1)\n| where pct > 80\n```\n\nUnderstanding this SPL\n\n**Connection Count Monitoring** — Approaching connection limits causes client rejections. Monitoring enables proactive limit adjustment.\n\nDocumented **Data sources**: Redis INFO clients, Memcached stats. **App/TA** (typical add-on context): Custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cache; **sourcetype**: redis:info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cache, sourcetype=\"redis:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (connections over time), Gauge (% of max), Single value (current connections).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that approaching connection limits causes client rejections.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.5.5",
              "n": "Replication Lag (Redis)",
              "c": "high",
              "f": "beginner",
              "v": "Redis replication lag affects read consistency for apps reading from replicas. Monitoring ensures data freshness.",
              "t": "Custom scripted input",
              "d": "Redis INFO replication (`master_repl_offset`, `slave_repl_offset`)",
              "q": "index=cache sourcetype=\"redis:info\" role=\"slave\"\n| eval lag_bytes=master_repl_offset-slave_repl_offset\n| timechart span=1m max(lag_bytes) as repl_lag by host\n| where repl_lag > 1000000",
              "m": "Poll Redis INFO replication from replicas every minute. Calculate byte offset lag. Alert when lag exceeds threshold (e.g., >1MB or growing). Monitor replication link status (master_link_status).",
              "z": "Line chart (replication lag over time), Single value (current lag), Table (replica status).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input.\n• Ensure the following data sources are available: Redis INFO replication (`master_repl_offset`, `slave_repl_offset`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Redis INFO replication from replicas every minute. Calculate byte offset lag. Alert when lag exceeds threshold (e.g., >1MB or growing). Monitor replication link status (master_link_status).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cache sourcetype=\"redis:info\" role=\"slave\"\n| eval lag_bytes=master_repl_offset-slave_repl_offset\n| timechart span=1m max(lag_bytes) as repl_lag by host\n| where repl_lag > 1000000\n```\n\nUnderstanding this SPL\n\n**Replication Lag (Redis)** — Redis replication lag affects read consistency for apps reading from replicas. Monitoring ensures data freshness.\n\nDocumented **Data sources**: Redis INFO replication (`master_repl_offset`, `slave_repl_offset`). **App/TA** (typical add-on context): Custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cache; **sourcetype**: redis:info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cache, sourcetype=\"redis:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **lag_bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where repl_lag > 1000000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (replication lag over time), Single value (current lag), Table (replica status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that redis replication lag affects read consistency for apps reading from replicas.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.5.6",
              "n": "Slow Command Detection",
              "c": "high",
              "f": "advanced",
              "v": "Slow Redis commands block the single-threaded event loop, impacting all clients. Detection enables command optimization.",
              "t": "Custom scripted input (`SLOWLOG GET`)",
              "d": "Redis SLOWLOG",
              "q": "index=cache sourcetype=\"redis:slowlog\"\n| table _time, host, duration_ms, command, key\n| where duration_ms > 10\n| sort -duration_ms",
              "m": "Run `redis-cli SLOWLOG GET 100` via scripted input every minute. Parse command, duration, and key pattern. Alert on commands exceeding 10ms. Identify O(N) commands (KEYS, SMEMBERS on large sets) for optimization.",
              "z": "Table (slow commands with details), Bar chart (slow commands by type), Line chart (slow command frequency).",
              "kfp": "Response time spikes during JVM garbage collection, connection pool exhaustion, or backend dependency degradation. Load tests, campaigns, and cold caches also move percentiles.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`SLOWLOG GET`).\n• Ensure the following data sources are available: Redis SLOWLOG.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun `redis-cli SLOWLOG GET 100` via scripted input every minute. Parse command, duration, and key pattern. Alert on commands exceeding 10ms. Identify O(N) commands (KEYS, SMEMBERS on large sets) for optimization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cache sourcetype=\"redis:slowlog\"\n| table _time, host, duration_ms, command, key\n| where duration_ms > 10\n| sort -duration_ms\n```\n\nUnderstanding this SPL\n\n**Slow Command Detection** — Slow Redis commands block the single-threaded event loop, impacting all clients. Detection enables command optimization.\n\nDocumented **Data sources**: Redis SLOWLOG. **App/TA** (typical add-on context): Custom scripted input (`SLOWLOG GET`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cache; **sourcetype**: redis:slowlog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cache, sourcetype=\"redis:slowlog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Slow Command Detection**): table _time, host, duration_ms, command, key\n• Filters the current rows with `where duration_ms > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (slow commands with details), Bar chart (slow commands by type), Line chart (slow command frequency).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that slow Redis commands block the single-threaded event loop, impacting all clients.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.5.7",
              "n": "Key Expiration Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Monitoring TTL patterns ensures cache freshness strategy is working. Unusual patterns may indicate application bugs.",
              "t": "Custom scripted input",
              "d": "Redis INFO keyspace (expires, expired_keys)",
              "q": "index=cache sourcetype=\"redis:info\"\n| eval expire_pct=round(expires/keys*100,1)\n| timechart span=15m avg(expire_pct) as pct_with_ttl, per_second(expired_keys) as expire_rate by host",
              "m": "Track keys with TTL vs total keys. Monitor expiration rate. Alert if expire_pct drops significantly (new code not setting TTL on keys). Track expired_stale_perc for lazy expiration health.",
              "z": "Line chart (expiration rate), Dual-axis (keys with TTL % + expiration rate), Single value (% keys with TTL).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input.\n• Ensure the following data sources are available: Redis INFO keyspace (expires, expired_keys).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack keys with TTL vs total keys. Monitor expiration rate. Alert if expire_pct drops significantly (new code not setting TTL on keys). Track expired_stale_perc for lazy expiration health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cache sourcetype=\"redis:info\"\n| eval expire_pct=round(expires/keys*100,1)\n| timechart span=15m avg(expire_pct) as pct_with_ttl, per_second(expired_keys) as expire_rate by host\n```\n\nUnderstanding this SPL\n\n**Key Expiration Trending** — Monitoring TTL patterns ensures cache freshness strategy is working. Unusual patterns may indicate application bugs.\n\nDocumented **Data sources**: Redis INFO keyspace (expires, expired_keys). **App/TA** (typical add-on context): Custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cache; **sourcetype**: redis:info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cache, sourcetype=\"redis:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **expire_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (expiration rate), Dual-axis (keys with TTL % + expiration rate), Single value (% keys with TTL).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that monitoring TTL patterns ensures cache freshness strategy is working.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.5.8",
              "n": "Memcached Hit Ratio and Eviction Rate",
              "c": "medium",
              "f": "intermediate",
              "v": "Cache hit ratio and eviction rate measure cache effectiveness and capacity pressure. Declining hit ratio or rising evictions indicate undersized cache or changing access patterns.",
              "t": "Custom scripted input (memcached stats)",
              "d": "memcached stats command (get_hits, get_misses, evictions)",
              "q": "index=cache sourcetype=\"memcached:stats\"\n| eval hit_ratio=round(get_hits/(get_hits+get_misses)*100,2)\n| timechart span=5m avg(hit_ratio) as hit_pct, per_second(evictions) as eviction_rate by host\n| where hit_pct < 85 OR eviction_rate > 5",
              "m": "Run `echo stats | nc localhost 11211` (or memcached stats protocol) via scripted input every minute. Parse get_hits, get_misses, evictions, bytes, bytes_read, bytes_written. Forward to Splunk via HEC. Calculate hit ratio; alert when below 85%. Track eviction rate; alert when evictions per second exceed 5. Correlate with memory usage (limit_maxbytes).",
              "z": "Gauge (hit ratio %), Line chart (hit ratio and eviction rate over time), Single value (current eviction rate), Table (instances with low hit ratio).",
              "kfp": "Cache misses spike after cache flush, deploys with new cache keys, or during warm-up after restart. Compare with those events before treating as a fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (memcached stats).\n• Ensure the following data sources are available: memcached stats command (get_hits, get_misses, evictions).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun `echo stats | nc localhost 11211` (or memcached stats protocol) via scripted input every minute. Parse get_hits, get_misses, evictions, bytes, bytes_read, bytes_written. Forward to Splunk via HEC. Calculate hit ratio; alert when below 85%. Track eviction rate; alert when evictions per second exceed 5. Correlate with memory usage (limit_maxbytes).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cache sourcetype=\"memcached:stats\"\n| eval hit_ratio=round(get_hits/(get_hits+get_misses)*100,2)\n| timechart span=5m avg(hit_ratio) as hit_pct, per_second(evictions) as eviction_rate by host\n| where hit_pct < 85 OR eviction_rate > 5\n```\n\nUnderstanding this SPL\n\n**Memcached Hit Ratio and Eviction Rate** — Cache hit ratio and eviction rate measure cache effectiveness and capacity pressure. Declining hit ratio or rising evictions indicate undersized cache or changing access patterns.\n\nDocumented **Data sources**: memcached stats command (get_hits, get_misses, evictions). **App/TA** (typical add-on context): Custom scripted input (memcached stats). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cache; **sourcetype**: memcached:stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cache, sourcetype=\"memcached:stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where hit_pct < 85 OR eviction_rate > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (hit ratio %), Line chart (hit ratio and eviction rate over time), Single value (current eviction rate), Table (instances with low hit ratio).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that cache hit ratio and eviction rate measure cache effectiveness and capacity pressure.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "memcached"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.5.9",
              "n": "Squid Proxy Cache Hit Ratio",
              "c": "medium",
              "f": "intermediate",
              "v": "Cache HIT/MISS/DENY rates on forward/reverse proxy indicate cache effectiveness and upstream load. Declining ratio increases origin latency and bandwidth.",
              "t": "Custom (Squid access log, SNMP)",
              "d": "Squid access.log (cache result codes), SNMP",
              "q": "index=proxy sourcetype=\"squid:access\"\n| rex \"TCP_(?<cache_result>HIT|MISS|DENIED|REFRESH)\"\n| stats count by cache_result\n| eventstats sum(count) as total\n| eval pct=round(count/total*100,2)\n| where cache_result==\"MISS\" AND pct > 30",
              "m": "Configure Squid to log cache result codes (TCP_HIT, TCP_MISS, TCP_DENIED, TCP_REFRESH) in access.log. Forward via Universal Forwarder. Parse cache_result field. Alternatively poll Squid SNMP cacheHitRatio if available. Calculate hit ratio per 5-minute window. Alert when MISS rate exceeds 30%. Correlate with request rate for capacity planning.",
              "z": "Pie chart (HIT vs MISS vs DENY), Line chart (hit ratio over time), Table (cache result distribution), Single value (hit ratio %).",
              "kfp": "Cache misses spike after cache flush, deploys with new cache keys, or during warm-up after restart. Compare with those events before treating as a fault.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Squid access log, SNMP).\n• Ensure the following data sources are available: Squid access.log (cache result codes), SNMP.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Squid to log cache result codes (TCP_HIT, TCP_MISS, TCP_DENIED, TCP_REFRESH) in access.log. Forward via Universal Forwarder. Parse cache_result field. Alternatively poll Squid SNMP cacheHitRatio if available. Calculate hit ratio per 5-minute window. Alert when MISS rate exceeds 30%. Correlate with request rate for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"squid:access\"\n| rex \"TCP_(?<cache_result>HIT|MISS|DENIED|REFRESH)\"\n| stats count by cache_result\n| eventstats sum(count) as total\n| eval pct=round(count/total*100,2)\n| where cache_result==\"MISS\" AND pct > 30\n```\n\nUnderstanding this SPL\n\n**Squid Proxy Cache Hit Ratio** — Cache HIT/MISS/DENY rates on forward/reverse proxy indicate cache effectiveness and upstream load. Declining ratio increases origin latency and bandwidth.\n\nDocumented **Data sources**: Squid access.log (cache result codes), SNMP. **App/TA** (typical add-on context): Custom (Squid access log, SNMP). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: squid:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"squid:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by cache_result** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cache_result==\"MISS\" AND pct > 30` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Squid Proxy Cache Hit Ratio** — Cache HIT/MISS/DENY rates on forward/reverse proxy indicate cache effectiveness and upstream load. Declining ratio increases origin latency and bandwidth.\n\nDocumented **Data sources**: Squid access.log (cache result codes), SNMP. **App/TA** (typical add-on context): Custom (Squid access log, SNMP). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (HIT vs MISS vs DENY), Line chart (hit ratio over time), Table (cache result distribution), Single value (hit ratio %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that cache HIT/MISS/DENY rates on forward/reverse proxy indicate cache effectiveness and upstream load.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "snmp",
                "squid"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.5.10",
              "n": "Varnish Cache Hit Rate and Backend Health",
              "c": "medium",
              "f": "intermediate",
              "v": "Cache efficiency and backend connection failures indicate Varnish health. Backend failures cause cache misses and user-facing errors.",
              "t": "Custom scripted input (varnishstat, varnishlog)",
              "d": "varnishstat JSON output",
              "q": "index=cache sourcetype=\"varnish:stats\"\n| eval hit_ratio=round(cache_hit/(cache_hit+cache_miss)*100,2)\n| where hit_ratio < 80 OR backend_fail > 0 OR backend_busy > 0\n| timechart span=5m avg(hit_ratio) as hit_pct, sum(backend_fail) as backend_failures by host",
              "m": "Run `varnishstat -j` via scripted input every minute. Parse MAIN.cache_hit, MAIN.cache_miss, MAIN.backend_fail, MAIN.backend_busy, MAIN.backend_unhealthy. Forward to Splunk via HEC. Alert when hit ratio drops below 80% or backend failures occur. Correlate backend_fail with backend health probes.",
              "z": "Gauge (hit ratio %), Line chart (hit ratio and backend failures), Table (backend health status), Single value (backend failures).",
              "kfp": "Cache misses spike after cache flush, deploys with new cache keys, or during warm-up after restart. Compare with those events before treating as a fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (varnishstat, varnishlog).\n• Ensure the following data sources are available: varnishstat JSON output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun `varnishstat -j` via scripted input every minute. Parse MAIN.cache_hit, MAIN.cache_miss, MAIN.backend_fail, MAIN.backend_busy, MAIN.backend_unhealthy. Forward to Splunk via HEC. Alert when hit ratio drops below 80% or backend failures occur. Correlate backend_fail with backend health probes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cache sourcetype=\"varnish:stats\"\n| eval hit_ratio=round(cache_hit/(cache_hit+cache_miss)*100,2)\n| where hit_ratio < 80 OR backend_fail > 0 OR backend_busy > 0\n| timechart span=5m avg(hit_ratio) as hit_pct, sum(backend_fail) as backend_failures by host\n```\n\nUnderstanding this SPL\n\n**Varnish Cache Hit Rate and Backend Health** — Cache efficiency and backend connection failures indicate Varnish health. Backend failures cause cache misses and user-facing errors.\n\nDocumented **Data sources**: varnishstat JSON output. **App/TA** (typical add-on context): Custom scripted input (varnishstat, varnishlog). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cache; **sourcetype**: varnish:stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cache, sourcetype=\"varnish:stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hit_ratio < 80 OR backend_fail > 0 OR backend_busy > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (hit ratio %), Line chart (hit ratio and backend failures), Table (backend health status), Single value (backend failures).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that cache efficiency and backend connection failures indicate Varnish health.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "varnish"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.5.11",
              "n": "Synthetic Transaction Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Simulated multi-step user journeys with timing per step validate end-to-end availability and detect degradation before users report issues. Step-level timing enables pinpointing of slow components.",
              "t": "Splunk Synthetic Monitoring, custom scripted input (Selenium, Playwright)",
              "d": "Synthetic test runner output with step-level timings",
              "q": "index=synthetic sourcetype=\"synthetic:test\"\n| eval step_duration=step_end_time-step_start_time\n| where overall_status==\"FAIL\" OR step_duration > 5000\n| stats count, avg(step_duration) as avg_step_ms by test_name, step_name\n| sort -avg_step_ms",
              "m": "Run synthetic tests via Splunk Synthetic Monitoring, Selenium, or Playwright. Configure tests to log JSON with test_name, step_name, step_start_time, step_end_time, overall_status, error_message. Forward to Splunk via HEC. Alert when any test fails or step duration exceeds SLA (e.g., 5s). Track step-level trends to identify regressions. Use transaction for multi-step journey correlation.",
              "z": "Timeline (test runs with pass/fail), Table (slow steps by test), Line chart (step duration trend), Single value (failed tests).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Synthetic Monitoring, custom scripted input (Selenium, Playwright).\n• Ensure the following data sources are available: Synthetic test runner output with step-level timings.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun synthetic tests via Splunk Synthetic Monitoring, Selenium, or Playwright. Configure tests to log JSON with test_name, step_name, step_start_time, step_end_time, overall_status, error_message. Forward to Splunk via HEC. Alert when any test fails or step duration exceeds SLA (e.g., 5s). Track step-level trends to identify regressions. Use transaction for multi-step journey correlation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=synthetic sourcetype=\"synthetic:test\"\n| eval step_duration=step_end_time-step_start_time\n| where overall_status==\"FAIL\" OR step_duration > 5000\n| stats count, avg(step_duration) as avg_step_ms by test_name, step_name\n| sort -avg_step_ms\n```\n\nUnderstanding this SPL\n\n**Synthetic Transaction Monitoring** — Simulated multi-step user journeys with timing per step validate end-to-end availability and detect degradation before users report issues. Step-level timing enables pinpointing of slow components.\n\nDocumented **Data sources**: Synthetic test runner output with step-level timings. **App/TA** (typical add-on context): Splunk Synthetic Monitoring, custom scripted input (Selenium, Playwright). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: synthetic; **sourcetype**: synthetic:test. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=synthetic, sourcetype=\"synthetic:test\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **step_duration** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overall_status==\"FAIL\" OR step_duration > 5000` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by test_name, step_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (test runs with pass/fail), Table (slow steps by test), Line chart (step duration trend), Single value (failed tests).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that simulated multi-step user journeys with timing per step validate end-to-end availability and detect degradation before users report issues.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.5.12",
              "n": "Website Page Load Time Breakdown",
              "c": "medium",
              "f": "advanced",
              "v": "DNS, connect, TLS, TTFB, and download timing per page element enable root cause analysis of slow page loads. Breakdown identifies whether slowness is network, backend, or resource-related.",
              "t": "Splunk RUM or custom scripted input (curl timing)",
              "d": "Navigation Timing API, curl -w format, RUM beacon data",
              "q": "index=rum sourcetype=\"rum:timing\"\n| eval dns_ms=domain_dns_end-domain_dns_start, connect_ms=connect_end-connect_start, ttfb_ms=response_start-request_start\n| timechart span=5m perc95(ttfb_ms) as p95_ttfb, perc95(dns_ms) as p95_dns by page_url\n| where p95_ttfb > 1000",
              "m": "Instrument frontend with RUM (Splunk RUM, Boomerang, or custom beacon) to capture Navigation Timing API fields. Alternatively run curl with `-w` format for key endpoints. Parse domainLookupEnd-domainLookupStart (DNS), connectEnd-connectStart (connect), responseStart-requestStart (TTFB). Forward to Splunk via HEC. Alert when p95 TTFB exceeds 1s. Correlate with backend latency and CDN metrics.",
              "z": "Waterfall (timing breakdown by resource), Line chart (p95 TTFB/DNS/connect over time), Table (slowest pages), Single value (p95 page load).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk RUM or custom scripted input (curl timing).\n• Ensure the following data sources are available: Navigation Timing API, curl -w format, RUM beacon data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstrument frontend with RUM (Splunk RUM, Boomerang, or custom beacon) to capture Navigation Timing API fields. Alternatively run curl with `-w` format for key endpoints. Parse domainLookupEnd-domainLookupStart (DNS), connectEnd-connectStart (connect), responseStart-requestStart (TTFB). Forward to Splunk via HEC. Alert when p95 TTFB exceeds 1s. Correlate with backend latency and CDN metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=rum sourcetype=\"rum:timing\"\n| eval dns_ms=domain_dns_end-domain_dns_start, connect_ms=connect_end-connect_start, ttfb_ms=response_start-request_start\n| timechart span=5m perc95(ttfb_ms) as p95_ttfb, perc95(dns_ms) as p95_dns by page_url\n| where p95_ttfb > 1000\n```\n\nUnderstanding this SPL\n\n**Website Page Load Time Breakdown** — DNS, connect, TLS, TTFB, and download timing per page element enable root cause analysis of slow page loads. Breakdown identifies whether slowness is network, backend, or resource-related.\n\nDocumented **Data sources**: Navigation Timing API, curl -w format, RUM beacon data. **App/TA** (typical add-on context): Splunk RUM or custom scripted input (curl timing). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: rum; **sourcetype**: rum:timing. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=rum, sourcetype=\"rum:timing\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **dns_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by page_url** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where p95_ttfb > 1000` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Website Page Load Time Breakdown** — DNS, connect, TLS, TTFB, and download timing per page element enable root cause analysis of slow page loads. Breakdown identifies whether slowness is network, backend, or resource-related.\n\nDocumented **Data sources**: Navigation Timing API, curl -w format, RUM beacon data. **App/TA** (typical add-on context): Splunk RUM or custom scripted input (curl timing). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Waterfall (timing breakdown by resource), Line chart (p95 TTFB/DNS/connect over time), Table (slowest pages), Single value (p95 page load).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow website Page Load Time Breakdown in plain language so we notice drift or trouble early enough to act calmly.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=5m | sort - count",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 12,
            "none": 0
          }
        },
        {
          "i": "8.6",
          "n": "Network Service Availability",
          "u": [
            {
              "i": "8.6.1",
              "n": "SSH Service Availability Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "SSH is the primary remote administration channel for Linux and Unix servers. An unresponsive SSH daemon locks out operators and often signals broader system distress (OOM, hung kernel, storage full). Nagios `check_ssh` is one of the most universally deployed checks; Splunk replicates it through absence-of-event detection and syslog-based availability trending.",
              "t": "`Splunk_TA_nix`, `Splunk_TA_syslog`",
              "d": "`sourcetype=syslog` (sshd messages), scripted input or Stream for TCP/22 probe",
              "q": "| inputlookup monitored_linux_hosts.csv\n| fields host\n| join type=left max=1 host [search index=os sourcetype=syslog process=sshd earliest=-15m | stats count as ssh_events by host]\n| where isnull(ssh_events) OR ssh_events=0\n| eval status=\"SSH_DOWN\"\n| table host, status",
              "m": "Ingest sshd syslog messages (Linux) via Universal Forwarder. Maintain a lookup (`monitored_linux_hosts.csv`) of expected hosts. Use `tstats` or a scheduled search every 5 minutes to detect hosts with no sshd events in the last 10 minutes. Optionally deploy a scripted input that performs a TCP connect to port 22 and logs result (0=up, 1=down) for direct availability data. Alert on SSH_DOWN status for more than 2 consecutive intervals to reduce false positives during restart.",
              "z": "Single value (hosts with SSH down), Table (host, last seen, duration down), Timeline (SSH availability per host), Heatmap (host × time availability).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, `Splunk_TA_syslog`.\n• Ensure the following data sources are available: `sourcetype=syslog` (sshd messages), scripted input or Stream for TCP/22 probe.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest sshd syslog messages (Linux) via Universal Forwarder. Maintain a lookup (`monitored_linux_hosts.csv`) of expected hosts. Use `tstats` or a scheduled search every 5 minutes to detect hosts with no sshd events in the last 10 minutes. Optionally deploy a scripted input that performs a TCP connect to port 22 and logs result (0=up, 1=down) for direct availability data. Alert on SSH_DOWN status for more than 2 consecutive intervals to reduce false positives during restart.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup monitored_linux_hosts.csv\n| fields host\n| join type=left max=1 host [search index=os sourcetype=syslog process=sshd earliest=-15m | stats count as ssh_events by host]\n| where isnull(ssh_events) OR ssh_events=0\n| eval status=\"SSH_DOWN\"\n| table host, status\n```\n\nUnderstanding this SPL\n\n**SSH Service Availability Monitoring** — SSH is the primary remote administration channel for Linux and Unix servers. An unresponsive SSH daemon locks out operators and often signals broader system distress (OOM, hung kernel, storage full). Nagios `check_ssh` is one of the most universally deployed checks; Splunk replicates it through absence-of-event detection and syslog-based availability trending.\n\nDocumented **Data sources**: `sourcetype=syslog` (sshd messages), scripted input or Stream for TCP/22 probe. **App/TA** (typical add-on context): `Splunk_TA_nix`, `Splunk_TA_syslog`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(ssh_events) OR ssh_events=0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **SSH Service Availability Monitoring**): table host, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (hosts with SSH down), Table (host, last seen, duration down), Timeline (SSH availability per host), Heatmap (host × time availability).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that remote login is the primary remote administration channel for Linux and Unix servers.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux",
                "syslog"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.6.2",
              "n": "FTP / SFTP Service Availability Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "FTP and SFTP services support automated file transfer workflows between systems, partners, and legacy integrations. Silent service failures cause missed file deliveries that may not surface until business processes fail downstream. Nagios `check_ftp` provides port-level verification; Splunk replicates this through daemon log monitoring and scripted probes.",
              "t": "`Splunk_TA_syslog`, custom scripted input",
              "d": "`vsftpd`, `proftpd`, or `openssh-sftp-server` logs; scripted TCP port probe output",
              "q": "| inputlookup ftp_hosts.csv\n| fields host, service_name\n| join type=left max=1 host [search index=os (sourcetype=vsftpd OR sourcetype=syslog process=sftp-server) earliest=-15m | stats count as ftp_events by host]\n| where isnull(ftp_events) OR ftp_events=0\n| eval status=\"FTP_DOWN\"\n| table host, service_name, status",
              "m": "Monitor vsftpd, proftpd, or OpenSSH SFTP subsystem logs via Universal Forwarder. For SFTP (port 22 subsystem), filter syslog for `sftp-server` process events. Alert when no daemon activity is seen for more than 15 minutes on a host expected to serve FTP/SFTP. Supplement with a scripted input using `nc -z -w5 host 21` (FTP) or `nc -z -w5 host 22` (SFTP) logged as synthetic check results. Correlate FTP availability with file-transfer success/failure logs.",
              "z": "Table (host, port, status, last event), Single value (unavailable FTP hosts), Line chart (event rate over time per host), Alert timeline.",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_syslog`, custom scripted input.\n• Ensure the following data sources are available: `vsftpd`, `proftpd`, or `openssh-sftp-server` logs; scripted TCP port probe output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor vsftpd, proftpd, or OpenSSH SFTP subsystem logs via Universal Forwarder. For SFTP (port 22 subsystem), filter syslog for `sftp-server` process events. Alert when no daemon activity is seen for more than 15 minutes on a host expected to serve FTP/SFTP. Supplement with a scripted input using `nc -z -w5 host 21` (FTP) or `nc -z -w5 host 22` (SFTP) logged as synthetic check results. Correlate FTP availability with file-transfer success/failure logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup ftp_hosts.csv\n| fields host, service_name\n| join type=left max=1 host [search index=os (sourcetype=vsftpd OR sourcetype=syslog process=sftp-server) earliest=-15m | stats count as ftp_events by host]\n| where isnull(ftp_events) OR ftp_events=0\n| eval status=\"FTP_DOWN\"\n| table host, service_name, status\n```\n\nUnderstanding this SPL\n\n**FTP / SFTP Service Availability Monitoring** — FTP and SFTP services support automated file transfer workflows between systems, partners, and legacy integrations. Silent service failures cause missed file deliveries that may not surface until business processes fail downstream. Nagios `check_ftp` provides port-level verification; Splunk replicates this through daemon log monitoring and scripted probes.\n\nDocumented **Data sources**: `vsftpd`, `proftpd`, or `openssh-sftp-server` logs; scripted TCP port probe output. **App/TA** (typical add-on context): `Splunk_TA_syslog`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(ftp_events) OR ftp_events=0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **FTP / SFTP Service Availability Monitoring**): table host, service_name, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, port, status, last event), Single value (unavailable FTP hosts), Line chart (event rate over time per host), Alert timeline.\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that FTP and SFTP services support automated file transfer workflows between systems, partners, and legacy integrations.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.6.10",
              "n": "Envoy Proxy Upstream Health",
              "c": "high",
              "f": "intermediate",
              "v": "Upstream cluster health, retry rate, and circuit breaker trips indicate Envoy proxy and backend service health. Detection enables rapid isolation of failing upstreams.",
              "t": "Custom (Envoy admin /stats, Prometheus metrics)",
              "d": "Envoy /stats endpoint (envoy_cluster_upstream_cx_active, envoy_cluster_upstream_rq_retry)",
              "q": "index=mesh sourcetype=\"envoy:stats\"\n| search \"envoy_cluster_upstream\" (\"cx_active\" OR \"rq_retry\" OR \"circuit_breakers\")\n| rex \"envoy_cluster\\.(?<cluster>[^.]+)\\.(?<metric>\\w+)=(?<value>\\d+)\"\n| stats latest(value) as val by cluster, metric\n| where metric==\"rq_retry\" AND val > 0 OR metric==\"circuit_breakers_default_rq_open\" AND val > 0",
              "m": "Enable Envoy admin interface (`/stats`). Poll via scripted input or Prometheus scrape every 30 seconds. Parse envoy_cluster_upstream_cx_active, envoy_cluster_upstream_rq_retry, envoy_cluster_upstream_rq_retry_overflow, circuit_breakers.*.rq_open. Forward to Splunk via HEC. Alert when retry rate spikes or circuit breaker opens. Correlate with upstream service health checks.",
              "z": "Status grid (cluster × health), Line chart (retry rate over time), Table (clusters with circuit breaker trips), Single value (active circuit breakers).",
              "kfp": "5xx errors spike during deployment rollouts, restart sequences, or backend service maintenance. Health-check noise and synthetic traffic can add false volume if not filtered.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Envoy admin /stats, Prometheus metrics).\n• Ensure the following data sources are available: Envoy /stats endpoint (envoy_cluster_upstream_cx_active, envoy_cluster_upstream_rq_retry).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Envoy admin interface (`/stats`). Poll via scripted input or Prometheus scrape every 30 seconds. Parse envoy_cluster_upstream_cx_active, envoy_cluster_upstream_rq_retry, envoy_cluster_upstream_rq_retry_overflow, circuit_breakers.*.rq_open. Forward to Splunk via HEC. Alert when retry rate spikes or circuit breaker opens. Correlate with upstream service health checks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mesh sourcetype=\"envoy:stats\"\n| search \"envoy_cluster_upstream\" (\"cx_active\" OR \"rq_retry\" OR \"circuit_breakers\")\n| rex \"envoy_cluster\\.(?<cluster>[^.]+)\\.(?<metric>\\w+)=(?<value>\\d+)\"\n| stats latest(value) as val by cluster, metric\n| where metric==\"rq_retry\" AND val > 0 OR metric==\"circuit_breakers_default_rq_open\" AND val > 0\n```\n\nUnderstanding this SPL\n\n**Envoy Proxy Upstream Health** — Upstream cluster health, retry rate, and circuit breaker trips indicate Envoy proxy and backend service health. Detection enables rapid isolation of failing upstreams.\n\nDocumented **Data sources**: Envoy /stats endpoint (envoy_cluster_upstream_cx_active, envoy_cluster_upstream_rq_retry). **App/TA** (typical add-on context): Custom (Envoy admin /stats, Prometheus metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mesh; **sourcetype**: envoy:stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mesh, sourcetype=\"envoy:stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by cluster, metric** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where metric==\"rq_retry\" AND val > 0 OR metric==\"circuit_breakers_default_rq_open\" AND val > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Status grid (cluster × health), Line chart (retry rate over time), Table (clusters with circuit breaker trips), Single value (active circuit breakers).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that upstream cluster health, retry rate, and circuit breaker trips indicate Envoy proxy and backend service health.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "envoy",
                "prometheus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.6.11",
              "n": "HashiCorp Vault Seal Status and Token Count",
              "c": "critical",
              "f": "intermediate",
              "v": "Vault health, auto-unseal events, and token creation rate indicate secrets management availability. Sealed Vault blocks all secret access; token anomalies may indicate abuse.",
              "t": "Custom (Vault API, audit log)",
              "d": "Vault `/v1/sys/health`, `/v1/sys/audit`, audit log",
              "q": "index=vault sourcetype=\"vault:health\"\n| where sealed==true\n| table _time, host, sealed, standby, version",
              "m": "Poll Vault `/v1/sys/health` via scripted input every minute. Parse sealed, standby, version, replication_performance_mode. Forward to Splunk via HEC. Enable Vault audit log; forward audit events for token creation and auth attempts. Alert immediately when sealed==true. Track token creation rate; alert on anomalies. Correlate unseal events with operator actions.",
              "z": "Single value (sealed status — target: false), Table (Vault cluster health), Line chart (token creation rate), Timeline (unseal events).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Vault API, audit log).\n• Ensure the following data sources are available: Vault `/v1/sys/health`, `/v1/sys/audit`, audit log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Vault `/v1/sys/health` via scripted input every minute. Parse sealed, standby, version, replication_performance_mode. Forward to Splunk via HEC. Enable Vault audit log; forward audit events for token creation and auth attempts. Alert immediately when sealed==true. Track token creation rate; alert on anomalies. Correlate unseal events with operator actions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vault sourcetype=\"vault:health\"\n| where sealed==true\n| table _time, host, sealed, standby, version\n```\n\nUnderstanding this SPL\n\n**HashiCorp Vault Seal Status and Token Count** — Vault health, auto-unseal events, and token creation rate indicate secrets management availability. Sealed Vault blocks all secret access; token anomalies may indicate abuse.\n\nDocumented **Data sources**: Vault `/v1/sys/health`, `/v1/sys/audit`, audit log. **App/TA** (typical add-on context): Custom (Vault API, audit log). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vault; **sourcetype**: vault:health. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vault, sourcetype=\"vault:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where sealed==true` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HashiCorp Vault Seal Status and Token Count**): table _time, host, sealed, standby, version\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (sealed status — target: false), Table (Vault cluster health), Line chart (token creation rate), Timeline (unseal events).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that vault health, auto-unseal events, and token creation rate indicate secrets management availability.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hashicorp"
              ],
              "em": [
                "hashicorp_vault"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.6.12",
              "n": "HashiCorp Consul Service Health",
              "c": "high",
              "f": "intermediate",
              "v": "Service registration, deregistration, and health check failures indicate Consul service discovery health. Critical checks mean services are unavailable for discovery and routing.",
              "t": "Custom (Consul HTTP API)",
              "d": "Consul `/v1/health/state/critical`, `/v1/catalog/services`",
              "q": "index=consul sourcetype=\"consul:health\"\n| where status==\"critical\"\n| stats count, latest(Output) as Output by Node, ServiceID, CheckID\n| table Node, ServiceID, CheckID, count, Output",
              "m": "Poll Consul `/v1/health/state/critical` and `/v1/catalog/services` via scripted input every minute. Parse Node, ServiceID, CheckID, Status, Output. Forward to Splunk via HEC. Alert when any service has critical health. Track service registration/deregistration events from catalog changes. Correlate with deployment events.",
              "z": "Status grid (service × health), Table (critical services), Single value (critical check count), Timeline (health transitions).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Consul HTTP API).\n• Ensure the following data sources are available: Consul `/v1/health/state/critical`, `/v1/catalog/services`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Consul `/v1/health/state/critical` and `/v1/catalog/services` via scripted input every minute. Parse Node, ServiceID, CheckID, Status, Output. Forward to Splunk via HEC. Alert when any service has critical health. Track service registration/deregistration events from catalog changes. Correlate with deployment events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=consul sourcetype=\"consul:health\"\n| where status==\"critical\"\n| stats count, latest(Output) as Output by Node, ServiceID, CheckID\n| table Node, ServiceID, CheckID, count, Output\n```\n\nUnderstanding this SPL\n\n**HashiCorp Consul Service Health** — Service registration, deregistration, and health check failures indicate Consul service discovery health. Critical checks mean services are unavailable for discovery and routing.\n\nDocumented **Data sources**: Consul `/v1/health/state/critical`, `/v1/catalog/services`. **App/TA** (typical add-on context): Custom (Consul HTTP API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: consul; **sourcetype**: consul:health. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=consul, sourcetype=\"consul:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status==\"critical\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by Node, ServiceID, CheckID** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **HashiCorp Consul Service Health**): table Node, ServiceID, CheckID, count, Output\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (service × health), Table (critical services), Single value (critical check count), Timeline (health transitions).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
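Consul-specific poller (hedged example)\nA minimal sketch for Step 1, assuming `curl` and `jq` on the forwarder; the endpoint is from the documented Data sources, the address variable is illustrative:\n\n```bash\n#!/usr/bin/env bash\n# Hedged sketch: emit each critical health check as one compact JSON event.\n# Splunk's JSON auto-extraction then yields Node, ServiceID, CheckID, Status, Output.\nCONSUL_ADDR=\"${CONSUL_ADDR:-http://127.0.0.1:8500}\"\ncurl -s \"$CONSUL_ADDR/v1/health/state/critical\" | jq -c '.[]'\n```\n\nBecause the Consul API returns capitalized keys, the `Status` comparison in the SPL above lines up with what JSON extraction produces.",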
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that service registration, deregistration, and health check failures indicate Consul service discovery health.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hashicorp"
              ],
              "em": [
                "hashicorp_consul"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.6.13",
              "n": "HashiCorp Nomad Job and Allocation Status",
              "c": "high",
              "f": "intermediate",
              "v": "Failed allocations and job deployment health indicate Nomad scheduler and workload availability. Failed allocations mean tasks are not running; deployment failures block rollouts.",
              "t": "Custom (Nomad HTTP API)",
              "d": "Nomad `/v1/jobs`, `/v1/allocations`",
              "q": "index=nomad sourcetype=\"nomad:allocations\"\n| where ClientStatus==\"failed\" OR (DesiredStatus==\"run\" AND ClientStatus!=\"running\")\n| stats count by JobID, TaskGroup, ClientStatus\n| sort -count",
              "m": "Poll Nomad `/v1/jobs` and `/v1/allocations` via scripted input every 5 minutes. Parse JobID, TaskGroup, ClientStatus, DesiredStatus, CreateIndex. Forward to Splunk via HEC. Alert when ClientStatus==failed or allocations are pending/running when desired is stop. Track deployment status (job version, allocation placement). Correlate with node availability.",
              "z": "Table (failed allocations by job), Single value (failed allocation count), Status grid (job × allocation status), Timeline (allocation events).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Nomad HTTP API).\n• Ensure the following data sources are available: Nomad `/v1/jobs`, `/v1/allocations`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Nomad `/v1/jobs` and `/v1/allocations` via scripted input every 5 minutes. Parse JobID, TaskGroup, ClientStatus, DesiredStatus, CreateIndex. Forward to Splunk via HEC. Alert when ClientStatus==failed or allocations are pending/running when desired is stop. Track deployment status (job version, allocation placement). Correlate with node availability.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nomad sourcetype=\"nomad:allocations\"\n| where ClientStatus==\"failed\" OR (DesiredStatus==\"run\" AND ClientStatus!=\"running\")\n| stats count by JobID, TaskGroup, ClientStatus\n| sort -count\n```\n\nUnderstanding this SPL\n\n**HashiCorp Nomad Job and Allocation Status** — Failed allocations and job deployment health indicate Nomad scheduler and workload availability. Failed allocations mean tasks are not running; deployment failures block rollouts.\n\nDocumented **Data sources**: Nomad `/v1/jobs`, `/v1/allocations`. **App/TA** (typical add-on context): Custom (Nomad HTTP API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nomad; **sourcetype**: nomad:allocations. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nomad, sourcetype=\"nomad:allocations\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where ClientStatus==\"failed\" OR (DesiredStatus==\"run\" AND ClientStatus!=\"running\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by JobID, TaskGroup, ClientStatus** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed allocations by job), Single value (failed allocation count), Status grid (job × allocation status), Timeline (allocation events).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
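Nomad-specific poller (hedged example)\nA minimal sketch for Step 1, assuming `curl` and `jq`; the endpoint is from the documented Data sources, the address variable is illustrative:\n\n```bash\n#!/usr/bin/env bash\n# Hedged sketch: emit each allocation as one compact JSON event.\n# JSON auto-extraction then yields JobID, TaskGroup, ClientStatus, DesiredStatus for the SPL above.\nNOMAD_ADDR=\"${NOMAD_ADDR:-http://127.0.0.1:4646}\"\ncurl -s \"$NOMAD_ADDR/v1/allocations\" | jq -c '.[]'\n```",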
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that failed allocations and job deployment health indicate Nomad scheduler and workload availability.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hashicorp"
              ],
              "em": [
                "hashicorp_nomad"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.6.14",
              "n": "Asterisk / FreePBX Call Quality and Trunk Status",
              "c": "high",
              "f": "advanced",
              "v": "Call volume, ASR (Answer Seizure Ratio), ACD (Average Call Duration), and trunk registration indicate VoIP/PBX health. Trunk failures block inbound/outbound calls; quality metrics affect user experience.",
              "t": "Custom (Asterisk AMI, CDR logs)",
              "d": "Asterisk CDR logs, AMI events, SIP trunk status",
              "q": "index=asterisk sourcetype=\"asterisk:cdr\"\n| eval duration_sec=tonumber(duration)\n| bin _time span=1h\n| stats count as calls, avg(duration_sec) as acd, count(eval(disposition==\"ANSWERED\")) as answered by, _time\n| eval asr=round(answered/calls*100,2)\n| where asr < 80 OR acd < 60",
              "m": "Forward Asterisk CDR (Call Detail Record) logs via Universal Forwarder. Parse caller, callee, duration, disposition, channel. For trunk status, use AMI (Asterisk Manager Interface) or `asterisk -rx \"pjsip show endpoints\"` via scripted input. Poll trunk registration status every 5 minutes. Calculate ASR (answered/total*100) and ACD per hour. Alert when ASR drops below 80% or trunk shows UNREACHABLE. Track call volume for capacity planning.",
              "z": "Line chart (ASR and ACD over time), Table (trunk status), Single value (calls per hour), Bar chart (call volume by trunk).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Asterisk AMI, CDR logs).\n• Ensure the following data sources are available: Asterisk CDR logs, AMI events, SIP trunk status.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Asterisk CDR (Call Detail Record) logs via Universal Forwarder. Parse caller, callee, duration, disposition, channel. For trunk status, use AMI (Asterisk Manager Interface) or `asterisk -rx \"pjsip show endpoints\"` via scripted input. Poll trunk registration status every 5 minutes. Calculate ASR (answered/total*100) and ACD per hour. Alert when ASR drops below 80% or trunk shows UNREACHABLE. Track call volume for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=asterisk sourcetype=\"asterisk:cdr\"\n| eval duration_sec=tonumber(duration)\n| bin _time span=1h\n| stats count as calls, avg(duration_sec) as acd, count(eval(disposition==\"ANSWERED\")) as answered by, _time\n| eval asr=round(answered/calls*100,2)\n| where asr < 80 OR acd < 60\n```\n\nUnderstanding this SPL\n\n**Asterisk / FreePBX Call Quality and Trunk Status** — Call volume, ASR (Answer Seizure Ratio), ACD (Average Call Duration), and trunk registration indicate VoIP/PBX health. Trunk failures block inbound/outbound calls; quality metrics affect user experience.\n\nDocumented **Data sources**: Asterisk CDR logs, AMI events, SIP trunk status. **App/TA** (typical add-on context): Custom (Asterisk AMI, CDR logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: asterisk; **sourcetype**: asterisk:cdr. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=asterisk, sourcetype=\"asterisk:cdr\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **asr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where asr < 80 OR acd < 60` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (ASR and ACD over time), Table (trunk status), Single value (calls per hour), Bar chart (call volume by trunk).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
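Trunk status poller (hedged example)\nA minimal sketch for the trunk half of Step 1, assuming the forwarder account may run `asterisk -rx` (the command itself is quoted from Step 1); the timestamp prefixing is illustrative:\n\n```bash\n#!/usr/bin/env bash\n# Hedged sketch: snapshot PJSIP endpoint status, one prefixed line per endpoint.\n# Lines containing UNREACHABLE indicate trunk registration trouble.\nts=$(date +%s)\nasterisk -rx \"pjsip show endpoints\" | while IFS= read -r line; do\n  echo \"timestamp=$ts $line\"\ndone\n```",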
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that call volume, ASR (Answer Seizure Ratio), ACD (Average Call Duration), and trunk registration indicate VoIP/PBX health.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "asterisk"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.6.16",
              "n": "NTP Stratum Drift",
              "c": "high",
              "f": "intermediate",
              "v": "Stratum jumps or large offset indicate bad upstream clock or local drift — affects Kerberos, TLS, and distributed logs.",
              "t": "`Splunk_TA_nix`, `ntpq`/`chronyc` scripted input",
              "d": "`ntp:peer` `stratum`, `offset_ms`, `jitter_ms`",
              "q": "index=os sourcetype=\"ntp:peer\"\n| where stratum > 4 OR abs(offset_ms) > 100\n| timechart span=5m max(stratum) as stratum, max(abs(offset_ms)) as abs_offset by host",
              "m": "Poll `chronyc tracking` or `ntpq -pn` every 5m. Alert when stratum >4 or |offset| >100ms sustained. Correlate with VM time sync settings.",
              "z": "Line chart (offset and stratum), Table (hosts with bad clock), Single value (max |offset|).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, `ntpq`/`chronyc` scripted input.\n• Ensure the following data sources are available: `ntp:peer` `stratum`, `offset_ms`, `jitter_ms`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `chronyc tracking` or `ntpq -pn` every 5m. Alert when stratum >4 or |offset| >100ms sustained. Correlate with VM time sync settings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=\"ntp:peer\"\n| where stratum > 4 OR abs(offset_ms) > 100\n| timechart span=5m max(stratum) as stratum, max(abs(offset_ms)) as abs_offset by host\n```\n\nUnderstanding this SPL\n\n**NTP Stratum Drift** — Stratum jumps or large offset indicate bad upstream clock or local drift — affects Kerberos, TLS, and distributed logs.\n\nDocumented **Data sources**: `ntp:peer` `stratum`, `offset_ms`, `jitter_ms`. **App/TA** (typical add-on context): `Splunk_TA_nix`, `ntpq`/`chronyc` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: ntp:peer. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=\"ntp:peer\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where stratum > 4 OR abs(offset_ms) > 100` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (offset and stratum), Table (hosts with bad clock), Single value (max |offset|).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that stratum jumps or large offset indicate bad upstream clock or local drift — affects Kerberos, secure connections, and distributed logs.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.6.17",
              "n": "DNS Recursive Query Volume",
              "c": "medium",
              "f": "beginner",
              "v": "Sudden spike in recursive queries on internal resolvers may indicate DDoS, malware, or misconfigured application loops.",
              "t": "BIND `named` logs, Infoblox DNS, CoreDNS logs",
              "d": "`dns:query` recursive flag, `client` IP, `qname`",
              "q": "index=dns sourcetype=\"bind:query\" OR sourcetype=\"dns:query\"\n| where recursive=1\n| bucket _time span=1m\n| stats count as qps by client_ip, _time\n| eventstats avg(qps) as avg_q by client_ip\n| where qps > avg_q*10 AND qps > 1000",
              "m": "Baseline QPS per client subnet. Alert on 10× baseline or absolute flood. Top `qname` for tunneling investigation.",
              "z": "Line chart (recursive QPS), Table (top clients), Bar chart (query types).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BIND `named` logs, Infoblox DNS, CoreDNS logs.\n• Ensure the following data sources are available: `dns:query` recursive flag, `client` IP, `qname`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline QPS per client subnet. Alert on 10× baseline or absolute flood. Top `qname` for tunneling investigation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns sourcetype=\"bind:query\" OR sourcetype=\"dns:query\"\n| where recursive=1\n| bucket _time span=1m\n| stats count as qps by client_ip, _time\n| eventstats avg(qps) as avg_q by client_ip\n| where qps > avg_q*10 AND qps > 1000\n```\n\nUnderstanding this SPL\n\n**DNS Recursive Query Volume** — Sudden spike in recursive queries on internal resolvers may indicate DDoS, malware, or misconfigured application loops.\n\nDocumented **Data sources**: `dns:query` recursive flag, `client` IP, `qname`. **App/TA** (typical add-on context): BIND `named` logs, Infoblox DNS, CoreDNS logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns; **sourcetype**: bind:query, dns:query. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns, sourcetype=\"bind:query\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where recursive=1` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by client_ip, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by client_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where qps > avg_q*10 AND qps > 1000` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (recursive QPS), Table (top clients), Bar chart (query types).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that a sudden spike in recursive queries on internal resolvers may indicate overload attacks, malware, or misconfigured application loops.",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "infoblox"
              ],
              "em": [
                "infoblox_dns"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.6.18",
              "n": "TFTP Unauthorized Access",
              "c": "high",
              "f": "beginner",
              "v": "TFTP should be rare in enterprise networks. Any RRQ/WRQ outside PXE scope may indicate data exfil or firmware abuse.",
              "t": "Firewall logs, `atftpd`/`tftpd` syslog",
              "d": "`tftp:syslog` `filename`, `op`, `src`",
              "q": "index=network sourcetype=\"tftp:log\" OR sourcetype=\"syslog\" process=tftpd\n| search RRQ OR WRQ\n| lookup tftp_allowed_subnets src OUTPUT allowed\n| where allowed!=1 OR isnull(allowed)\n| table _time, src, filename, op",
              "m": "Maintain allowlist for PXE subnets. Alert on any other TFTP read/write. Block TFTP at firewall unless required.",
              "z": "Timeline (TFTP events), Table (unauthorized attempts), Single value (blocked attempts).",
              "kfp": "4xx spikes from web crawlers, scanners, or after URL routing and auth policy changes. We correlate with change records and intent.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Firewall logs, `atftpd`/`tftpd` syslog.\n• Ensure the following data sources are available: `tftp:syslog` `filename`, `op`, `src`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain allowlist for PXE subnets. Alert on any other TFTP read/write. Block TFTP at firewall unless required.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"tftp:log\" OR sourcetype=\"syslog\" process=tftpd\n| search RRQ OR WRQ\n| lookup tftp_allowed_subnets src OUTPUT allowed\n| where allowed!=1 OR isnull(allowed)\n| table _time, src, filename, op\n```\n\nUnderstanding this SPL\n\n**TFTP Unauthorized Access** — TFTP should be rare in enterprise networks. Any RRQ/WRQ outside PXE scope may indicate data exfil or firmware abuse.\n\nDocumented **Data sources**: `tftp:syslog` `filename`, `op`, `src`. **App/TA** (typical add-on context): Firewall logs, `atftpd`/`tftpd` syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: tftp:log, syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"tftp:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where allowed!=1 OR isnull(allowed)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **TFTP Unauthorized Access**): table _time, src, filename, op\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (TFTP events), Table (unauthorized attempts), Single value (blocked attempts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that TFTP should be rare in enterprise networks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.6.19",
              "n": "SNMP Community String Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Detects use of default `public`/`private` or unauthorized SNMP GETs to network devices for SNMPv2c exposure auditing.",
              "t": "Device syslog, SNMP proxy audit",
              "d": "`snmpd` auth failures, `community` in trap receiver logs",
              "q": "index=network sourcetype=\"snmp:audit\" OR (sourcetype=syslog process=snmpd)\n| search \"Authentication failed\" OR community=\"public\" OR community=\"private\"\n| stats count by src, device, community\n| where count > 10",
              "m": "Forward snmpd auth failures. Alert on default community strings in use or brute-force patterns. Migrate devices to SNMPv3.",
              "z": "Table (src, device, community), Bar chart (failures by device), Line chart (auth failure rate).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Device syslog, SNMP proxy audit.\n• Ensure the following data sources are available: `snmpd` auth failures, `community` in trap receiver logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward snmpd auth failures. Alert on default community strings in use or brute-force patterns. Migrate devices to SNMPv3.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:audit\" OR (sourcetype=syslog process=snmpd)\n| search \"Authentication failed\" OR community=\"public\" OR community=\"private\"\n| stats count by src, device, community\n| where count > 10\n```\n\nUnderstanding this SPL\n\n**SNMP Community String Audit** — Detects use of default `public`/`private` or unauthorized SNMP GETs to network devices for SNMPv2c exposure auditing.\n\nDocumented **Data sources**: `snmpd` auth failures, `community` in trap receiver logs. **App/TA** (typical add-on context): Device syslog, SNMP proxy audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:audit, syslog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by src, device, community** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (src, device, community), Bar chart (failures by device), Line chart (auth failure rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch use of default / or unauthorized network signals GETs to network devices for SNMPv2c exposure auditing.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp",
                "syslog"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.4,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 11,
            "none": 0
          }
        },
        {
          "i": "8.7",
          "n": "Application Trending",
          "u": [
            {
              "i": "8.7.1",
              "n": "User Session Volume Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Daily or weekly active session counts show adoption, campaign effects, and capacity needs before saturation. Seasonal patterns become visible for staffing and infrastructure scale plans.",
              "t": "Splunk OTel Collector / app instrumentation, Tomcat / IIS / NGINX TAs as applicable",
              "d": "`index=web` `sourcetype=access_combined`, `index=app` session or access logs, optional `JSESSIONID` / `session_id` fields",
              "q": "index=web OR index=app (sourcetype=access_combined OR sourcetype=\"tomcat:access\" OR sourcetype=\"iis:access\")\n| eval sid=coalesce(JSESSIONID, session_id, client_session)\n| bin _time span=1d\n| stats dc(sid) as approx_active_sessions by _time\n| timechart span=1d sum(approx_active_sessions) as daily_sessions",
              "m": "Prefer application-native session metrics if available (Spring session registry, .NET session state). Deduplicate proxies and bots with a known crawler user-agent lookup. For stateless APIs, substitute `dc(client_ip)` or OAuth `sub` as a proxy with documented caveats. Align time zones with business reporting.",
              "z": "Line chart (daily sessions), column chart (week-over-week), single value (rolling 7-day average).",
              "kfp": "Session counts move with campaigns, holidays, and launches. A jump is often expected; we compare to the business calendar before treating as capacity trouble.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector / app instrumentation, Tomcat / IIS / NGINX TAs as applicable.\n• Ensure the following data sources are available: `index=web` `sourcetype=access_combined`, `index=app` session or access logs, optional `JSESSIONID` / `session_id` fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPrefer application-native session metrics if available (Spring session registry, .NET session state). Deduplicate proxies and bots with a known crawler user-agent lookup. For stateless APIs, substitute `dc(client_ip)` or OAuth `sub` as a proxy with documented caveats. Align time zones with business reporting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web OR index=app (sourcetype=access_combined OR sourcetype=\"tomcat:access\" OR sourcetype=\"iis:access\")\n| eval sid=coalesce(JSESSIONID, session_id, client_session)\n| bin _time span=1d\n| stats dc(sid) as approx_active_sessions by _time\n| timechart span=1d sum(approx_active_sessions) as daily_sessions\n```\n\nUnderstanding this SPL\n\n**User Session Volume Trending** — Daily or weekly active session counts show adoption, campaign effects, and capacity needs before saturation. Seasonal patterns become visible for staffing and infrastructure scale plans.\n\nDocumented **Data sources**: `index=web` `sourcetype=access_combined`, `index=app` session or access logs, optional `JSESSIONID` / `session_id` fields. **App/TA** (typical add-on context): Splunk OTel Collector / app instrumentation, Tomcat / IIS / NGINX TAs as applicable. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web, app; **sourcetype**: access_combined, tomcat:access, iis:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, index=app, sourcetype=access_combined. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sid** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**User Session Volume Trending** — Daily or weekly active session counts show adoption, campaign effects, and capacity needs before saturation. Seasonal patterns become visible for staffing and infrastructure scale plans.\n\nDocumented **Data sources**: `index=web` `sourcetype=access_combined`, `index=app` session or access logs, optional `JSESSIONID` / `session_id` fields. 
**App/TA** (typical add-on context): Splunk OTel Collector / app instrumentation, Tomcat / IIS / NGINX TAs as applicable. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily sessions), column chart (week-over-week), single value (rolling 7-day average).",
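Crawler exclusion (hedged example)\nStep 1 recommends deduplicating bots with a user-agent lookup; a sketch assuming a hypothetical `known_crawlers` lookup with `useragent` and `is_bot` columns:\n\n```spl\nindex=web OR index=app (sourcetype=access_combined OR sourcetype=\"tomcat:access\" OR sourcetype=\"iis:access\")\n| lookup known_crawlers useragent OUTPUT is_bot\n| where isnull(is_bot)\n| eval sid=coalesce(JSESSIONID, session_id, client_session)\n| bin _time span=1d\n| stats dc(sid) as approx_active_sessions by _time\n```",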
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that daily or weekly active session counts show adoption, campaign effects, and capacity needs before saturation.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=1d | sort - count",
              "e": [
                "iis",
                "nginx",
                "opentelemetry",
                "tomcat"
              ],
              "em": [
                "nginx_open"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.7.2",
              "n": "API Endpoint Latency Percentile Trending",
              "c": "high",
              "f": "intermediate",
              "v": "p50, p95, and p99 latency over 30 days highlights tail latency regressions that averages hide. Trends support SLO setting and regression detection after releases.",
              "t": "API gateway TAs (Kong, AWS API Gateway), reverse proxy logs, OpenTelemetry span export to Splunk",
              "d": "`index=web` or `index=app`, `sourcetype=access_combined` with response time, `index=middleware` gateway logs",
              "q": "index=web OR index=app sourcetype=access_combined earliest=-30d\n| eval ms=coalesce(response_time_ms, duration_ms, tonumber(substr(response_time,1,10)))\n| where isnotnull(ms) AND match(uri_path,\"/api/\")\n| timechart span=1d p50(ms) as p50 p95(ms) as p95 p99(ms) as p99",
              "m": "Normalize time units (seconds vs milliseconds) at ingest. Filter to API paths only; exclude static assets. Tag `service_name` for microservice drilldowns. Compare against canary or blue-green cohorts with a `deployment` field when available. Store weekly aggregates in `sourcetype=stash` for long retention.",
              "z": "Line chart (p50/p95/p99 over time), heatmap (endpoint × day for p95), table (worst endpoints).",
              "kfp": "Response time spikes during JVM garbage collection, connection pool exhaustion, or backend dependency degradation. Load tests, campaigns, and cold caches also move percentiles.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: API gateway TAs (Kong, AWS API Gateway), reverse proxy logs, OpenTelemetry span export to Splunk.\n• Ensure the following data sources are available: `index=web` or `index=app`, `sourcetype=access_combined` with response time, `index=middleware` gateway logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize time units (seconds vs milliseconds) at ingest. Filter to API paths only; exclude static assets. Tag `service_name` for microservice drilldowns. Compare against canary or blue-green cohorts with a `deployment` field when available. Store weekly aggregates in `sourcetype=stash` for long retention.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web OR index=app sourcetype=access_combined earliest=-30d\n| eval ms=coalesce(response_time_ms, duration_ms, tonumber(substr(response_time,1,10)))\n| where isnotnull(ms) AND match(uri_path,\"/api/\")\n| timechart span=1d p50(ms) as p50 p95(ms) as p95 p99(ms) as p99\n```\n\nUnderstanding this SPL\n\n**API Endpoint Latency Percentile Trending** — p50, p95, and p99 latency over 30 days highlights tail latency regressions that averages hide. Trends support SLO setting and regression detection after releases.\n\nDocumented **Data sources**: `index=web` or `index=app`, `sourcetype=access_combined` with response time, `index=middleware` gateway logs. **App/TA** (typical add-on context): API gateway TAs (Kong, AWS API Gateway), reverse proxy logs, OpenTelemetry span export to Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web, app; **sourcetype**: access_combined. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, index=app, sourcetype=access_combined, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(ms) AND match(uri_path,\"/api/\")` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**API Endpoint Latency Percentile Trending** — p50, p95, and p99 latency over 30 days highlights tail latency regressions that averages hide. Trends support SLO setting and regression detection after releases.\n\nDocumented **Data sources**: `index=web` or `index=app`, `sourcetype=access_combined` with response time, `index=middleware` gateway logs. **App/TA** (typical add-on context): API gateway TAs (Kong, AWS API Gateway), reverse proxy logs, OpenTelemetry span export to Splunk. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p50/p95/p99 over time), heatmap (endpoint × day for p95), table (worst endpoints).",
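Week-over-week overlay (hedged example)\nTo spot post-release regressions, overlay this week's p95 against last week's; field names follow the same assumptions as the main search:\n\n```spl\nindex=web OR index=app sourcetype=access_combined earliest=-14d\n| eval ms=coalesce(response_time_ms, duration_ms)\n| where isnotnull(ms) AND match(uri_path,\"/api/\")\n| timechart span=1h p95(ms) as p95\n| timewrap 1w\n```",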
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow API Endpoint Latency Percentile Trending in plain language so we notice drift or trouble early enough to act calmly.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=1d | sort - count",
              "e": [
                "aws",
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.7.3",
              "n": "Application Error Budget Burn Rate Trending",
              "c": "high",
              "f": "advanced",
              "v": "Error budget remaining over a sprint or month shows whether reliability goals are sustainable. Accelerating burn triggers freeze or rollback decisions before users experience widespread outages.",
              "t": "Custom SLO pipeline, Splunk Observability Cloud export, or scripted inputs from service catalogs",
              "d": "`index=app` SLO metrics, `sourcetype=stash` error-budget summaries, `index=middleware` synthetic or gateway SLO fields",
              "q": "index=app sourcetype=stash source=\"*error_budget*\" OR index=middleware sourcetype=\"slos:metrics\"\n| eval remaining_pct=coalesce(error_budget_remaining_pct, slo_remaining_percent, 100 - burn_rate_pct)\n| eval sprint=strftime(_time,\"%Y-W%V\")\n| bin _time span=1d\n| stats first(remaining_pct) as budget_remaining by _time, service\n| timechart span=1d min(budget_remaining) as min_budget_remaining by service limit=10",
              "m": "Populate `remaining_pct` from your SLO tool (Datadog, Dynatrace, homemade) via HEC or scheduled pull. Define calendar alignment (monthly vs rolling 30d) consistently with product owners. Combine with release markers using `annotate` or a `releases` lookup. Alert on multi-day burn-rate thresholds per Google SRE multi-window practice if you export windowed burn fields.",
              "z": "Area chart (budget remaining %), line chart with release annotations, single value (days of budget left at current burn).",
              "kfp": "Spikes or gaps can follow approved maintenance, load tests, or short collection outages. We add a second signal and a change window before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom SLO pipeline, Splunk Observability Cloud export, or scripted inputs from service catalogs.\n• Ensure the following data sources are available: `index=app` SLO metrics, `sourcetype=stash` error-budget summaries, `index=middleware` synthetic or gateway SLO fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPopulate `remaining_pct` from your SLO tool (Datadog, Dynatrace, homemade) via HEC or scheduled pull. Define calendar alignment (monthly vs rolling 30d) consistently with product owners. Combine with release markers using `annotate` or a `releases` lookup. Alert on multi-day burn-rate thresholds per Google SRE multi-window practice if you export windowed burn fields.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=stash source=\"*error_budget*\" OR index=middleware sourcetype=\"slos:metrics\"\n| eval remaining_pct=coalesce(error_budget_remaining_pct, slo_remaining_percent, 100 - burn_rate_pct)\n| eval sprint=strftime(_time,\"%Y-W%V\")\n| bin _time span=1d\n| stats first(remaining_pct) as budget_remaining by _time, service\n| timechart span=1d min(budget_remaining) as min_budget_remaining by service limit=10\n```\n\nUnderstanding this SPL\n\n**Application Error Budget Burn Rate Trending** — Error budget remaining over a sprint or month shows whether reliability goals are sustainable. Accelerating burn triggers freeze or rollback decisions before users experience widespread outages.\n\nDocumented **Data sources**: `index=app` SLO metrics, `sourcetype=stash` error-budget summaries, `index=middleware` synthetic or gateway SLO fields. **App/TA** (typical add-on context): Custom SLO pipeline, Splunk Observability Cloud export, or scripted inputs from service catalogs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app, middleware; **sourcetype**: stash, slos:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, index=middleware, sourcetype=stash. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **remaining_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sprint** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by service limit=10** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (budget remaining %), line chart with release annotations, single value (days of budget left at current burn).",
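              "_example": "The Step 1 note above mentions multi-window burn-rate alerting per the Google SRE workbook. A minimal sketch, assuming your pipeline exports pre-computed `burn_rate_1h` and `burn_rate_6h` fields per service (the field names and the 99.9%-SLO multipliers of 14.4 and 6 are illustrative assumptions, not part of this catalog's documented feed):\n\n```spl\nindex=app sourcetype=stash source=\"*error_budget*\"\n| stats latest(burn_rate_1h) as burn_1h, latest(burn_rate_6h) as burn_6h by service\n| where burn_1h > 14.4 AND burn_6h > 6\n```\n\nPaging only when both windows exceed their multiplier keeps alerts actionable: the short window confirms the burn is still active, while the long window filters transient spikes.",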
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that error budget remaining over a sprint or month shows whether reliability goals are sustainable.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.7.4",
              "n": "Cache Hit Ratio Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Cache hit ratio over 30 days reveals memory sizing issues, key churn, and upstream slowdowns that force more origin fetches. Declining trends often precede latency SLO breaches.",
              "t": "Splunk Add-on for Redis, Memcached, Varnish or CDN logs",
              "d": "`index=middleware` `sourcetype=redis:info`, `memcached:stats`, or application-emitted cache metrics",
              "q": "index=middleware (sourcetype=\"redis:info\" OR sourcetype=\"memcached:stats\" OR sourcetype=\"app:cache:metrics\")\n| eval hits=coalesce(keyspace_hits, cache_hits, 0)\n| eval misses=coalesce(keyspace_misses, cache_misses, 0)\n| eval hit_ratio=if((hits+misses)>0, round(100*hits/(hits+misses),2), null())\n| timechart span=1d avg(hit_ratio) as cache_hit_ratio_pct",
              "m": "Poll INFO/stats on a fixed interval; compute ratio in SPL or at ingest for accuracy. Split by `instance` or `cluster` for sharded caches. Correlate drops with deployments and TTL changes. For HTTP caches, derive hits from `X-Cache` or CDN logs instead.",
              "z": "Line chart (hit ratio %), dual axis (hits and misses counts), single value (30-day min hit ratio).",
              "kfp": "Cache misses spike after cache flush, deploys with new cache keys, or during warm-up after restart. Compare with those events before treating as a fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Redis, Memcached, Varnish or CDN logs.\n• Ensure the following data sources are available: `index=middleware` `sourcetype=redis:info`, `memcached:stats`, or application-emitted cache metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll INFO/stats on a fixed interval; compute ratio in SPL or at ingest for accuracy. Split by `instance` or `cluster` for sharded caches. Correlate drops with deployments and TTL changes. For HTTP caches, derive hits from `X-Cache` or CDN logs instead.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=middleware (sourcetype=\"redis:info\" OR sourcetype=\"memcached:stats\" OR sourcetype=\"app:cache:metrics\")\n| eval hits=coalesce(keyspace_hits, cache_hits, 0)\n| eval misses=coalesce(keyspace_misses, cache_misses, 0)\n| eval hit_ratio=if((hits+misses)>0, round(100*hits/(hits+misses),2), null())\n| timechart span=1d avg(hit_ratio) as cache_hit_ratio_pct\n```\n\nUnderstanding this SPL\n\n**Cache Hit Ratio Trending** — Cache hit ratio over 30 days reveals memory sizing issues, key churn, and upstream slowdowns that force more origin fetches. Declining trends often precede latency SLO breaches.\n\nDocumented **Data sources**: `index=middleware` `sourcetype=redis:info`, `memcached:stats`, or application-emitted cache metrics. **App/TA** (typical add-on context): Splunk Add-on for Redis, Memcached, Varnish or CDN logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: middleware; **sourcetype**: redis:info, memcached:stats, app:cache:metrics. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=middleware, sourcetype=\"redis:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hits** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **misses** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **hit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (hit ratio %), dual axis (hits and misses counts), single value (30-day min hit ratio).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that cache hit ratio over 30 days reveals memory sizing issues, key churn, and upstream slowdowns that force more origin fetches.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "memcached",
                "redis",
                "varnish"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "8.7.5",
              "n": "Message Queue Backlog Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Queue depth over 7 and 30 days shows sustained consumer lag versus transient spikes. Growth trends drive consumer scaling, partition adds, or poison-message handling before disk limits or SLA misses.",
              "t": "Splunk Add-on for Kafka, RabbitMQ, ActiveMQ, AWS MSK / Azure Event Hubs TAs",
              "d": "`index=middleware` `sourcetype=kafka:consumer`, `rabbitmq:queue`, `activemq:queue`, `azure:eventhub:metrics`",
              "q": "index=middleware earliest=-30d (sourcetype=\"kafka:consumer\" OR sourcetype=\"rabbitmq:queue\" OR sourcetype=\"activemq:queue\")\n| eval depth=coalesce(consumer_lag, lag, queue_depth, backlog_messages)\n| eval qname=coalesce(topic_queue, queue, destination)\n| timechart span=1d max(depth) as queue_depth by qname limit=15",
              "m": "Align Kafka lag with consumer group and partition; use `max` across partitions for worst-case visibility. Exclude retry/DLQ topics from primary charts or show separately. Set thresholds from peak business hours using historical baselines. For cloud queues, map metric dimensions to the same `qname` namespace.",
              "z": "Line chart (max depth by queue), area chart (7d vs 30d overlay using `timewrap`), table (top queues by growth rate).",
              "kfp": "Queues grow during consumer maintenance, slow consumers, or burst producer activity. Seasonal traffic can also create sustained backlog without a hard fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Kafka, RabbitMQ, ActiveMQ, AWS MSK / Azure Event Hubs TAs.\n• Ensure the following data sources are available: `index=middleware` `sourcetype=kafka:consumer`, `rabbitmq:queue`, `activemq:queue`, `azure:eventhub:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign Kafka lag with consumer group and partition; use `max` across partitions for worst-case visibility. Exclude retry/DLQ topics from primary charts or show separately. Set thresholds from peak business hours using historical baselines. For cloud queues, map metric dimensions to the same `qname` namespace.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=middleware earliest=-30d (sourcetype=\"kafka:consumer\" OR sourcetype=\"rabbitmq:queue\" OR sourcetype=\"activemq:queue\")\n| eval depth=coalesce(consumer_lag, lag, queue_depth, backlog_messages)\n| eval qname=coalesce(topic_queue, queue, destination)\n| timechart span=1d max(depth) as queue_depth by qname limit=15\n```\n\nUnderstanding this SPL\n\n**Message Queue Backlog Trending** — Queue depth over 7 and 30 days shows sustained consumer lag versus transient spikes. Growth trends drive consumer scaling, partition adds, or poison-message handling before disk limits or SLA misses.\n\nDocumented **Data sources**: `index=middleware` `sourcetype=kafka:consumer`, `rabbitmq:queue`, `activemq:queue`, `azure:eventhub:metrics`. **App/TA** (typical add-on context): Splunk Add-on for Kafka, RabbitMQ, ActiveMQ, AWS MSK / Azure Event Hubs TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: middleware; **sourcetype**: kafka:consumer, rabbitmq:queue, activemq:queue. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=middleware, sourcetype=\"kafka:consumer\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **depth** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **qname** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by qname limit=15** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (max depth by queue), area chart (7d vs 30d overlay using `timewrap`), table (top queues by growth rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs that queue depth over 7 and 30 days shows sustained consumer lag versus transient spikes.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "activemq",
                "aws",
                "azure",
                "kafka",
                "rabbitmq"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 5,
            "none": 0
          }
        }
      ],
      "i": 8,
      "n": "Application Infrastructure",
      "src": "cat-08-application-infrastructure.md"
    },
    {
      "s": [
        {
          "i": "9.1",
          "n": "Active Directory / Entra ID",
          "u": [
            {
              "i": "9.1.1",
              "n": "Brute-Force Login Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Brute-force attacks are a primary credential compromise vector. Early detection prevents account takeover.",
              "t": "`Splunk_TA_windows`",
              "d": "Windows Security Event Log (Event ID 4625 — failed logon)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4625\n| stats count by Account_Name, Source_Network_Address\n| where count > 10\n| sort -count",
              "m": "Deploy the Universal Forwarder on all Domain Controllers with `[WinEventLog://Security]` enabled. Ensure the GPO enables Audit Logon Success and Failure. Alert with `stats count by Account_Name, src span=15m` when count exceeds 10. Suppress break-glass and service accounts via a `privileged_accounts` lookup to reduce false positives.",
              "z": "Table (accounts with failure counts), Line chart (failure rate over time), Geo map (source IPs).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1110",
                "T1110.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Windows Security Event Log (Event ID 4625 — failed logon).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy the Universal Forwarder on all Domain Controllers with `[WinEventLog://Security]` enabled. Ensure the GPO enables Audit Logon Success and Failure. Alert with `stats count by Account_Name, src span=15m` when count exceeds 10. Suppress break-glass and service accounts via a `privileged_accounts` lookup to reduce false positives.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4625\n| stats count by Account_Name, Source_Network_Address\n| where count > 10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Brute-Force Login Detection** — Brute-force attacks are a primary credential compromise vector. Early detection prevents account takeover.\n\nDocumented **Data sources**: Windows Security Event Log (Event ID 4625 — failed logon). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Account_Name, Source_Network_Address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Brute-Force Login Detection** — Brute-force attacks are a primary credential compromise vector. Early detection prevents account takeover.\n\nDocumented **Data sources**: Windows Security Event Log (Event ID 4625 — failed logon). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (accounts with failure counts), Line chart (failure rate over time), Geo map (source IPs).",
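              "_example": "Step 1 calls for a 15-minute rolling window with suppression via a `privileged_accounts` lookup. A minimal alert-shaped sketch (the lookup definition and its `is_known` output field are assumptions; build them to match your exclusion list):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4625\n| bin _time span=15m\n| stats count by _time, Account_Name, Source_Network_Address\n| lookup privileged_accounts Account_Name OUTPUT is_known\n| where isnull(is_known) AND count > 10\n```",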
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for many failed sign-in attempts from the same place or against the same account so we can stop password guessing before a takeover.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CJIS",
                  "v": "v5.9.4",
                  "cl": "5.5.1",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of CJIS 5.5.1 (Access control - identification) — Splunk UC-9.1.1: Brute-Force Login Detection.",
                  "ea": "Saved search 'UC-9.1.1' running on Windows Security Event Log (Event ID 4625 — failed logon), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://le.fbi.gov/cjis-division/cjis-security-policy-resource-center"
                },
                {
                  "r": "Cyber Essentials",
                  "v": "Montpellier (2025)",
                  "cl": "CE.SAU.1",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of Cyber Essentials CE.SAU.1 (Secure authentication & access) — Splunk UC-9.1.1: Brute-Force Login Detection.",
                  "ea": "Saved search 'UC-9.1.1' running on Windows Security Event Log (Event ID 4625 — failed logon), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ncsc.gov.uk/cyberessentials/overview"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "01.b",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of HITRUST 01.b (User access management) — Splunk UC-9.1.1: Brute-Force Login Detection.",
                  "ea": "Saved search 'UC-9.1.1' running on Windows Security Event Log (Event ID 4625 — failed logon), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                },
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of PSD2 Art.97 (Strong customer authentication) — Splunk UC-9.1.1: Brute-Force Login Detection.",
                  "ea": "Saved search 'UC-9.1.1' running on Windows Security Event Log (Event ID 4625 — failed logon), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2015/2366/oj"
                }
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.2",
              "n": "Account Lockout Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Lockouts cause user productivity loss and help desk load. Source identification enables rapid resolution.",
              "t": "`Splunk_TA_windows`",
              "d": "Security Event Log (Event ID 4740 — account locked out)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4740\n| table _time, Account_Name, CallerComputerName\n| sort -_time",
              "m": "Forward DC Security logs. Alert on each lockout with source workstation included. Create report of recurring lockouts for proactive investigation. Correlate with 4625 events to find the failing source.",
              "z": "Table (lockouts with source), Bar chart (top locked accounts), Line chart (lockout trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1110"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Security Event Log (Event ID 4740 — account locked out).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward DC Security logs. Alert on each lockout with source workstation included. Create report of recurring lockouts for proactive investigation. Correlate with 4625 events to find the failing source.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4740\n| table _time, Account_Name, CallerComputerName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Account Lockout Monitoring** — Lockouts cause user productivity loss and help desk load. Source identification enables rapid resolution.\n\nDocumented **Data sources**: Security Event Log (Event ID 4740 — account locked out). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Account Lockout Monitoring**): table _time, Account_Name, CallerComputerName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where match(Authentication.signature, \"4740\") OR match(Authentication.vendor_action, \"(?i)lockout\")\n  by Authentication.user Authentication.src span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Account Lockout Monitoring** — Lockouts cause user productivity loss and help desk load. Source identification enables rapid resolution.\n\nDocumented **Data sources**: Security Event Log (Event ID 4740 — account locked out). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (lockouts with source), Bar chart (top locked accounts), Line chart (lockout trend).",
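              "_example": "Step 1 says to correlate 4740 lockouts with 4625 failures to find the failing source. A minimal sketch (the source fields coalesced here vary by TA version, so verify against your extractions):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" (EventCode=4740 OR EventCode=4625) earliest=-4h\n| eval src=coalesce(CallerComputerName, Workstation_Name, Source_Network_Address)\n| stats count(eval(EventCode==\"4625\")) as failed_logons, count(eval(EventCode==\"4740\")) as lockouts, values(src) as sources by Account_Name\n| where lockouts > 0\n| sort -failed_logons\n```",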
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Account Lockout Monitoring",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where match(Authentication.signature, \"4740\") OR match(Authentication.vendor_action, \"(?i)lockout\")\n  by Authentication.user Authentication.src span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.3",
              "n": "Privileged Group Membership Changes",
              "c": "critical",
              "f": "beginner",
              "v": "Adding accounts to Domain Admins or Enterprise Admins (EventCode 4728/4732/4756) in minutes limits blast radius from stolen Tier-0 credentials. Immediate detection supports audit evidence for privileged access changes and enables rapid containment before lateral movement escalates.",
              "t": "`Splunk_TA_windows`",
              "d": "Security Event Log (4728 — member added to security-enabled global group, 4732 — local group, 4756 — universal group)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4728,4732,4756)\n| search TargetUserName IN (\"Domain Admins\",\"Enterprise Admins\",\"Schema Admins\",\"Administrators\")\n| table _time, MemberName, TargetUserName, SubjectUserName",
              "m": "Forward DC Security logs. Create alert for any membership change to privileged groups (Domain Admins, Enterprise Admins, Schema Admins, Backup Operators). Integrate with change management for validation.",
              "z": "Table (membership changes), Timeline (change events), Single value (changes this week).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Security Event Log (4728 — member added to security-enabled global group, 4732 — local group, 4756 — universal group).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward DC Security logs. Create alert for any membership change to privileged groups (Domain Admins, Enterprise Admins, Schema Admins, Backup Operators). Integrate with change management for validation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4728,4732,4756)\n| search TargetUserName IN (\"Domain Admins\",\"Enterprise Admins\",\"Schema Admins\",\"Administrators\")\n| table _time, MemberName, TargetUserName, SubjectUserName\n```\n\nUnderstanding this SPL\n\n**Privileged Group Membership Changes** — Adding accounts to Domain Admins or Enterprise Admins (EventCode 4728/4732/4756) in minutes limits blast radius from stolen Tier-0 credentials. Immediate detection supports audit evidence for privileged access changes and enables rapid containment before lateral movement escalates.\n\nDocumented **Data sources**: Security Event Log (4728 — member added to security-enabled global group, 4732 — local group, 4756 — universal group). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Privileged Group Membership Changes**): table _time, MemberName, TargetUserName, SubjectUserName\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.object_category, \"(?i)group\") OR match(All_Changes.result, \"(?i)4728|4732|4756\")\n  by All_Changes.user All_Changes.object span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privileged Group Membership Changes** — Adding accounts to Domain Admins or Enterprise Admins (EventCode 4728/4732/4756) in minutes limits blast radius from stolen Tier-0 credentials. Immediate detection supports audit evidence for privileged access changes and enables rapid containment before lateral movement escalates.\n\nDocumented **Data sources**: Security Event Log (4728 — member added to security-enabled global group, 4732 — local group, 4756 — universal group). **App/TA** (typical add-on context): `Splunk_TA_windows`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (membership changes), Timeline (change events), Single value (changes this week).",
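              "_example": "Step 1 says to integrate with change management for validation. A minimal sketch that flags additions without a matching approved change (the `approved_changes` lookup and its `member`/`change_id` fields are hypothetical; export them from your ITSM tool):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4728,4732,4756)\n| search TargetUserName IN (\"Domain Admins\",\"Enterprise Admins\",\"Schema Admins\",\"Administrators\")\n| lookup approved_changes member AS MemberName OUTPUT change_id\n| where isnull(change_id)\n| table _time, MemberName, TargetUserName, SubjectUserName\n```",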
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see when powerful groups gain or lose members so we can catch privilege creep or mistakes before attackers abuse admin access.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.object_category, \"(?i)group\") OR match(All_Changes.result, \"(?i)4728|4732|4756\")\n  by All_Changes.user All_Changes.object span=1h\n| sort -count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.05",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.05 (Restrict administrative privileges) is enforced — Splunk UC-9.1.3: Privileged Group Membership Changes.",
                  "ea": "Saved search 'UC-9.1.3' running on Security Event Log (4728 — member added to security-enabled global group, 4732 — local group, 4756 — universal group), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/essential-eight/essential-eight-maturity-model"
                },
                {
                  "r": "BAIT/KAIT",
                  "v": "Aug 2021",
                  "cl": "§5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that BAIT/KAIT §5 (Identity & access management) is enforced — Splunk UC-9.1.3: Privileged Group Membership Changes.",
                  "ea": "Saved search 'UC-9.1.3' running on Security Event Log (4728 — member added to security-enabled global group, 4732 — local group, 4756 — universal group), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bafin.de/SharedDocs/Veroeffentlichungen/EN/Rundschreiben/2021/rs_1021_BAIT_en.html"
                },
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "AC-2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FedRAMP AC-2 (Account management) is enforced — Splunk UC-9.1.3: Privileged Group Membership Changes.",
                  "ea": "Saved search 'UC-9.1.3' running on Security Event Log (4728 — member added to security-enabled global group, 4732 — local group, 4756 — universal group), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "01.b",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HITRUST 01.b (User access management) is enforced — Splunk UC-9.1.3: Privileged Group Membership Changes.",
                  "ea": "Saved search 'UC-9.1.3' running on Security Event Log (4728 — member added to security-enabled global group, 4732 — local group, 4756 — universal group), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                },
                {
                  "r": "IT-Grundschutz",
                  "v": "2023 Edition",
                  "cl": "ORP.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IT-Grundschutz ORP.4 (Identity & access) is enforced — Splunk UC-9.1.3: Privileged Group Membership Changes.",
                  "ea": "Saved search 'UC-9.1.3' running on Security Event Log (4728 — member added to security-enabled global group, 4732 — local group, 4756 — universal group), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Standards-und-Zertifizierung/IT-Grundschutz/it-grundschutz_node.html"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-2(7)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST 800-53 AC-2(7) is enforced — Splunk UC-9.1.3: Privileged Group Membership Changes.",
                  "ea": "Saved search 'UC-9.1.3' running on Security Event Log (4728 — member added to security-enabled global group, 4732 — local group, 4756 — universal group), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://csrc.nist.gov/pubs/sp/800/53/r5/final"
                }
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "9.1.4",
              "n": "Service Account Anomalies",
              "c": "critical",
              "f": "intermediate",
              "v": "Service accounts used interactively or from unexpected hosts indicate compromise. Detection prevents lateral movement.",
              "t": "`Splunk_TA_windows`",
              "d": "Security Event Log (4624 — successful logon, Logon Type field)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624\n| lookup service_accounts.csv Account_Name OUTPUT expected_hosts, account_type\n| where account_type=\"service\" AND (Logon_Type=2 OR Logon_Type=10 OR NOT match(src_host, expected_hosts))\n| table _time, Account_Name, Logon_Type, src_host",
              "m": "Maintain lookup of service accounts with expected Logon Types and source hosts. Alert on interactive logon (Type 2, 10) or unexpected source. Regularly audit service account inventory with AD queries.",
              "z": "Table (anomalous service account usage), Timeline (events), Bar chart (anomalies by account).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1078.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Security Event Log (4624 — successful logon, Logon Type field).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain lookup of service accounts with expected Logon Types and source hosts. Alert on interactive logon (Type 2, 10) or unexpected source. Regularly audit service account inventory with AD queries.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624\n| lookup service_accounts.csv Account_Name OUTPUT expected_hosts, account_type\n| where account_type=\"service\" AND (Logon_Type=2 OR Logon_Type=10 OR NOT match(src_host, expected_hosts))\n| table _time, Account_Name, Logon_Type, src_host\n```\n\nUnderstanding this SPL\n\n**Service Account Anomalies** — Service accounts used interactively or from unexpected hosts indicate compromise. Detection prevents lateral movement.\n\nDocumented **Data sources**: Security Event Log (4624 — successful logon, Logon Type field). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where account_type=\"service\" AND (Logon_Type=2 OR Logon_Type=10 OR NOT match(src_host, expected_hosts))` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Service Account Anomalies**): table _time, Account_Name, Logon_Type, src_host\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\" AND match(Authentication.user, \"(?i)svc|service|_sa$\")\n  by Authentication.user Authentication.src Authentication.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Service Account Anomalies** — Service accounts used interactively or from unexpected hosts indicate compromise. Detection prevents lateral movement.\n\nDocumented **Data sources**: Security Event Log (4624 — successful logon, Logon Type field). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalous service account usage), Timeline (events), Bar chart (anomalies by account).",
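              "_example": "The detection is only as good as the `service_accounts.csv` lookup that Step 1 says to maintain. A minimal hygiene check that lists service accounts still missing an `expected_hosts` pattern (rows the main search would otherwise flag on every logon):\n\n```spl\n| inputlookup service_accounts.csv\n| where account_type==\"service\" AND (isnull(expected_hosts) OR expected_hosts==\"\")\n| table Account_Name, account_type, expected_hosts\n```",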
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Service Account Anomalies",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\" AND match(Authentication.user, \"(?i)svc|service|_sa$\")\n  by Authentication.user Authentication.src Authentication.app span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.5",
              "n": "Kerberos Ticket Anomalies",
              "c": "critical",
              "f": "beginner",
              "v": "Detects Kerberoasting and Golden Ticket attacks, which are advanced AD compromise techniques. Essential for security monitoring.",
              "t": "`Splunk_TA_windows`",
              "d": "Security Event Log (4768 — TGT request, 4769 — TGS request)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4769 Ticket_Encryption_Type=0x17\n| stats count by Account_Name, Service_Name\n| where count > 5\n| sort -count",
              "m": "Forward 4768/4769 events from DCs. Detect Kerberoasting by filtering for RC4 encryption (0x17) on TGS requests. Detect Golden Ticket by looking for TGT requests with unusual encryption types or from non-DC sources.",
              "z": "Table (suspicious Kerberos requests), Bar chart (requests by encryption type), Timeline (anomalous events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1558.001",
                "T1558.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Security Event Log (4768 — TGT request, 4769 — TGS request).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward 4768/4769 events from DCs. Detect Kerberoasting by filtering for RC4 encryption (0x17) on TGS requests. Detect Golden Ticket by looking for TGT requests with unusual encryption types or from non-DC sources.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4769 Ticket_Encryption_Type=0x17\n| stats count by Account_Name, Service_Name\n| where count > 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Kerberos Ticket Anomalies** — Detects Kerberoasting and Golden Ticket attacks, which are advanced AD compromise techniques. Essential for security monitoring.\n\nDocumented **Data sources**: Security Event Log (4768 — TGT request, 4769 — TGS request). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Account_Name, Service_Name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where match(Authentication.signature, \"4768|4769|4771\")\n  by Authentication.user Authentication.src Authentication.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Kerberos Ticket Anomalies** — Detects Kerberoasting and Golden Ticket attacks, which are advanced AD compromise techniques. Essential for security monitoring.\n\nDocumented **Data sources**: Security Event Log (4768 — TGT request, 4769 — TGS request). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious Kerberos requests), Bar chart (requests by encryption type), Timeline (anomalous events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Kerberos Ticket Anomalies",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where match(Authentication.signature, \"4768|4769|4771\")\n  by Authentication.user Authentication.src Authentication.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
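A tuning note for the Kerberos Ticket Anomalies search above: the base query counts raw 4769 volume per account, which machine accounts and the krbtgt service inflate. Below is a minimal lower-noise sketch, assuming the same index/sourcetype and classic WinEventLog field names (`Account_Name`, `Service_Name`); the distinct-service threshold is an assumption to tune locally:

```spl
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4769 Ticket_Encryption_Type=0x17
    NOT Account_Name="*$" NOT Service_Name="krbtgt"
| stats count as requests, dc(Service_Name) as distinct_services by Account_Name
| where distinct_services > 5
| sort -distinct_services
```

One account requesting RC4 tickets for many distinct SPNs in a short window is the classic Kerberoasting pattern; counting distinct services rather than raw events reduces false positives from legitimately busy service accounts.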
            {
              "i": "9.1.6",
              "n": "Password Policy Violations",
              "c": "medium",
              "f": "beginner",
              "v": "Failed password changes indicate users struggling with policy or potential social engineering. Monitoring supports security awareness.",
              "t": "`Splunk_TA_windows`",
              "d": "Security Event Log (4723 — password change attempt, 4724 — password reset attempt)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4723, 4724)\n| stats count(eval(Keywords=\"Audit Failure\")) as failures, count(eval(Keywords=\"Audit Success\")) as successes by Account_Name\n| where failures > 3",
              "m": "Forward DC Security logs. Track password change success/failure rates. Alert on excessive failures per user. Monitor 4724 (admin resets) separately as these bypass self-service and may indicate social engineering.",
              "z": "Table (users with failures), Bar chart (failure rate by user), Pie chart (change vs reset).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Security Event Log (4723 — password change attempt, 4724 — password reset attempt).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward DC Security logs. Track password change success/failure rates. Alert on excessive failures per user. Monitor 4724 (admin resets) separately as these bypass self-service and may indicate social engineering.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4723, 4724)\n| stats count(eval(Keywords=\"Audit Failure\")) as failures, count(eval(Keywords=\"Audit Success\")) as successes by Account_Name\n| where failures > 3\n```\n\nUnderstanding this SPL\n\n**Password Policy Violations** — Failed password changes indicate users struggling with policy or potential social engineering. Monitoring supports security awareness.\n\nDocumented **Data sources**: Security Event Log (4723 — password change attempt, 4724 — password reset attempt). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Account_Name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failures > 3` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where match(Authentication.signature, \"4723|4724|4725\") OR match(Authentication.vendor_action, \"(?i)password\")\n  by Authentication.user Authentication.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Password Policy Violations** — Failed password changes indicate users struggling with policy or potential social engineering. Monitoring supports security awareness.\n\nDocumented **Data sources**: Security Event Log (4723 — password change attempt, 4724 — password reset attempt). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users with failures), Bar chart (failure rate by user), Pie chart (change vs reset).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Password Policy Violations",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where match(Authentication.signature, \"4723|4724|4725\") OR match(Authentication.vendor_action, \"(?i)password\")\n  by Authentication.user Authentication.action span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
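The Password Policy Violations search above alerts on absolute failure counts; a failure ratio separates users who fail occasionally amid many changes from users who fail almost every attempt. A minimal sketch reusing the catalog's own `count(eval(...))` idiom; the 50% cut-off is an assumption to tune:

```spl
index=wineventlog sourcetype="WinEventLog:Security" EventCode IN (4723, 4724)
| stats count(eval(Keywords="Audit Failure")) as failures, count as attempts by Account_Name
| eval failure_pct=round(failures/attempts*100, 1)
| where failures > 3 AND failure_pct > 50
| sort -failure_pct
```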
            {
              "i": "9.1.7",
              "n": "GPO Modification Detection",
              "c": "critical",
              "f": "beginner",
              "v": "GPO changes affect all domain-joined machines. Unauthorized modifications can disable security controls across the organization.",
              "t": "`Splunk_TA_windows`",
              "d": "Security Event Log (5136 — directory service object modified)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5136\n| search ObjectClass=\"groupPolicyContainer\"\n| table _time, SubjectUserName, ObjectDN, AttributeLDAPDisplayName, AttributeValue",
              "m": "Enable \"Audit Directory Service Changes\" via GPO. Forward DC Security logs. Alert on any GPO modification. Correlate with change management tickets. Track which GPOs are modified most frequently.",
              "z": "Table (GPO changes with details), Timeline (modification events), Bar chart (changes by admin).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1484.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Security Event Log (5136 — directory service object modified).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable \"Audit Directory Service Changes\" via GPO. Forward DC Security logs. Alert on any GPO modification. Correlate with change management tickets. Track which GPOs are modified most frequently.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5136\n| search ObjectClass=\"groupPolicyContainer\"\n| table _time, SubjectUserName, ObjectDN, AttributeLDAPDisplayName, AttributeValue\n```\n\nUnderstanding this SPL\n\n**GPO Modification Detection** — GPO changes affect all domain-joined machines. Unauthorized modifications can disable security controls across the organization.\n\nDocumented **Data sources**: Security Event Log (5136 — directory service object modified). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **GPO Modification Detection**): table _time, SubjectUserName, ObjectDN, AttributeLDAPDisplayName, AttributeValue\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.object, \"(?i)groupPolicyContainer|Policies|GPO\")\n  by All_Changes.user All_Changes.object span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GPO Modification Detection** — GPO changes affect all domain-joined machines. Unauthorized modifications can disable security controls across the organization.\n\nDocumented **Data sources**: Security Event Log (5136 — directory service object modified). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (GPO changes with details), Timeline (modification events), Bar chart (changes by admin).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — GPO Modification Detection",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.object, \"(?i)groupPolicyContainer|Policies|GPO\")\n  by All_Changes.user All_Changes.object span=1h\n| sort -count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 90,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison"
              ]
            },
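The implementation notes for GPO Modification Detection suggest correlating with change-management tickets. A minimal sketch of that correlation, assuming a hypothetical `approved_gpo_changes` CSV lookup you maintain with a `user` key and a `ticket_id` column; events with no matching ticket surface as unapproved changes:

```spl
index=wineventlog sourcetype="WinEventLog:Security" EventCode=5136 ObjectClass="groupPolicyContainer"
| lookup approved_gpo_changes user AS SubjectUserName OUTPUT ticket_id
| where isnull(ticket_id)
| table _time, SubjectUserName, ObjectDN, AttributeLDAPDisplayName, AttributeValue
```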
            {
              "i": "9.1.8",
              "n": "AD Replication Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Replication failures cause authentication issues, stale group memberships, and inconsistent policy application across sites.",
              "t": "`Splunk_TA_windows`, `repadmin` scripted input",
              "d": "Directory Service event log, `repadmin /showrepl` output",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Directory Service\" EventCode IN (1864,1865,2042,2087)\n| table _time, ComputerName, EventCode, Message\n| sort -_time",
              "m": "Forward Directory Service event logs from DCs. Run `repadmin /showrepl` via scripted input daily. Alert on replication failures (Event IDs 1864, 2042, 2087). Track replication latency between sites.",
              "z": "Table (replication status by DC pair), Status grid (DC × replication health), Timeline (failure events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, `repadmin` scripted input.\n• Ensure the following data sources are available: Directory Service event log, `repadmin /showrepl` output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Directory Service event logs from DCs. Run `repadmin /showrepl` via scripted input daily. Alert on replication failures (Event IDs 1864, 2042, 2087). Track replication latency between sites.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Directory Service\" EventCode IN (1864,1865,2042,2087)\n| table _time, ComputerName, EventCode, Message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**AD Replication Monitoring** — Replication failures cause authentication issues, stale group memberships, and inconsistent policy application across sites.\n\nDocumented **Data sources**: Directory Service event log, `repadmin /showrepl` output. **App/TA** (typical add-on context): `Splunk_TA_windows`, `repadmin` scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Directory Service. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Directory Service\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **AD Replication Monitoring**): table _time, ComputerName, EventCode, Message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (replication status by DC pair), Status grid (DC × replication health), Timeline (failure events).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — AD Replication Monitoring",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
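The AD Replication Monitoring search above covers Directory Service event log signals; the `repadmin /showrepl` scripted input needs its own parsing. A minimal sketch, assuming the input runs `repadmin /showrepl * /csv` and the output is ingested under a hypothetical `ad:repadmin` sourcetype with CSV header extraction (the column names below are what repadmin emits, but verify against your events):

```spl
index=ad sourcetype="ad:repadmin"
| rename "Source DSA" as source_dc, "Destination DSA" as dest_dc, "Number of Failures" as failures
| eval failures=tonumber(failures)
| where failures > 0
| stats max(failures) as consecutive_failures, latest("Last Failure Status") as last_status by source_dc, dest_dc
| sort -consecutive_failures
```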
            {
              "i": "9.1.9",
              "n": "LDAP Query Performance",
              "c": "medium",
              "f": "beginner",
              "v": "Expensive LDAP queries degrade DC performance affecting authentication for all users. Detection enables query optimization.",
              "t": "`Splunk_TA_windows`, Directory Service diagnostics",
              "d": "Directory Service event log (Event ID 1644 — expensive search), Field Engineering diagnostics",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Directory Service\" EventCode=1644\n| rex \"Entries Visited\\s+:\\s+(?<entries_visited>\\d+)\"\n| where entries_visited > 10000\n| table _time, ComputerName, entries_visited, Message",
              "m": "Enable LDAP search diagnostics (registry key: \"15 Field Engineering\" value \"Expensive Search Results Threshold\" = 10000). Forward Directory Service logs. Alert on queries visiting >10K entries. Identify and optimize expensive applications.",
              "z": "Table (expensive queries), Bar chart (queries by source application), Line chart (expensive query frequency).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, Directory Service diagnostics.\n• Ensure the following data sources are available: Directory Service event log (Event ID 1644 — expensive search), Field Engineering diagnostics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable LDAP search diagnostics (registry key: \"15 Field Engineering\" value \"Expensive Search Results Threshold\" = 10000). Forward Directory Service logs. Alert on queries visiting >10K entries. Identify and optimize expensive applications.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Directory Service\" EventCode=1644\n| rex \"Entries Visited\\s+:\\s+(?<entries_visited>\\d+)\"\n| where entries_visited > 10000\n| table _time, ComputerName, entries_visited, Message\n```\n\nUnderstanding this SPL\n\n**LDAP Query Performance** — Expensive LDAP queries degrade DC performance affecting authentication for all users. Detection enables query optimization.\n\nDocumented **Data sources**: Directory Service event log (Event ID 1644 — expensive search), Field Engineering diagnostics. **App/TA** (typical add-on context): `Splunk_TA_windows`, Directory Service diagnostics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Directory Service. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Directory Service\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where entries_visited > 10000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **LDAP Query Performance**): table _time, ComputerName, entries_visited, Message\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (expensive queries), Bar chart (queries by source application), Line chart (expensive query frequency).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — LDAP Query Performance",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
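Event 1644 also records which client issued the expensive LDAP query, which the base search drops. A minimal sketch that attributes load per client, assuming the classic-format Message text; the `Client:` extraction pattern varies by Windows version, so treat the first rex as an assumption to validate against your events:

```spl
index=wineventlog sourcetype="WinEventLog:Directory Service" EventCode=1644
| rex "Client:\s+(?<client_ip>\d{1,3}(?:\.\d{1,3}){3})"
| rex "Entries Visited\s+:\s+(?<entries_visited>\d+)"
| stats count as expensive_queries, sum(entries_visited) as total_entries_visited by client_ip
| sort -total_entries_visited
```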
            {
              "i": "9.1.10",
              "n": "Stale Account Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Stale accounts are an attack surface — unused accounts may be compromised without detection. Regular cleanup reduces risk.",
              "t": "Scripted input (PowerShell AD query)",
              "d": "AD attributes (lastLogonTimestamp, pwdLastSet) via scripted input",
              "q": "index=ad sourcetype=\"ad:accounts\"\n| eval days_inactive=round((now()-lastLogon)/86400)\n| where days_inactive > 90 AND enabled=\"True\"\n| table samAccountName, displayName, days_inactive, ou, lastLogon\n| sort -days_inactive",
              "m": "Run PowerShell script querying AD for lastLogonTimestamp weekly. Export to CSV/JSON and ingest. Flag accounts inactive >90 days. Cross-reference with HR systems for departed employees. Report for access review.",
              "z": "Table (stale accounts), Bar chart (stale accounts by OU), Single value (total stale accounts), Pie chart (by account type).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1078.002",
                "T1087.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Scripted input (PowerShell AD query).\n• Ensure the following data sources are available: AD attributes (lastLogonTimestamp, pwdLastSet) via scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun PowerShell script querying AD for lastLogonTimestamp weekly. Export to CSV/JSON and ingest. Flag accounts inactive >90 days. Cross-reference with HR systems for departed employees. Report for access review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ad sourcetype=\"ad:accounts\"\n| eval days_inactive=round((now()-lastLogon)/86400)\n| where days_inactive > 90 AND enabled=\"True\"\n| table samAccountName, displayName, days_inactive, ou, lastLogon\n| sort -days_inactive\n```\n\nUnderstanding this SPL\n\n**Stale Account Detection** — Stale accounts are an attack surface — unused accounts may be compromised without detection. Regular cleanup reduces risk.\n\nDocumented **Data sources**: AD attributes (lastLogonTimestamp, pwdLastSet) via scripted input. **App/TA** (typical add-on context): Scripted input (PowerShell AD query). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ad; **sourcetype**: ad:accounts. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ad, sourcetype=\"ad:accounts\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_inactive** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_inactive > 90 AND enabled=\"True\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Stale Account Detection**): table samAccountName, displayName, days_inactive, ou, lastLogon\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale accounts), Bar chart (stale accounts by OU), Single value (total stale accounts), Pie chart (by account type).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Stale Account Detection",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
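The baseline search for Stale Account Detection assumes the export already converts `lastLogon` to epoch seconds. If the PowerShell export ships the raw 64-bit FILETIME value instead, convert it first with the same formula this catalog applies to `pwdLastSet` in use case 9.1.14; the magnitude test below is a heuristic for telling the two formats apart:

```spl
index=ad sourcetype="ad:accounts"
| eval ll=tonumber(lastLogon)
| eval lastLogon_epoch=if(ll > 100000000000, round(ll/10000000 - 11644473600), ll)
| eval days_inactive=round((now()-lastLogon_epoch)/86400)
| where days_inactive > 90 AND enabled="True"
| table samAccountName, displayName, days_inactive, ou
| sort -days_inactive
```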
            {
              "i": "9.1.11",
              "n": "Entra ID Risky Sign-Ins",
              "c": "critical",
              "f": "intermediate",
              "v": "Entra ID Identity Protection detects risky sign-ins using Microsoft's threat intelligence. Ingesting into Splunk enables correlation with on-prem events.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Entra ID sign-in logs, risk detection events (via Graph API)",
              "q": "index=azure sourcetype=\"azure:aad:signin\"\n| where riskLevelDuringSignIn IN (\"high\",\"medium\")\n| table _time, userPrincipalName, ipAddress, location, riskLevelDuringSignIn, riskDetail\n| sort -_time",
              "m": "Configure Splunk Add-on for Microsoft Cloud Services to ingest Entra ID sign-in logs via Graph API. Filter for medium/high risk detections. Alert on high-risk sign-ins. Correlate with on-prem AD events for hybrid investigations.",
              "z": "Table (risky sign-ins), Geo map (sign-in locations), Line chart (risk events over time), Bar chart (risk types).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Entra ID sign-in logs, risk detection events (via Graph API).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk Add-on for Microsoft Cloud Services to ingest Entra ID sign-in logs via Graph API. Filter for medium/high risk detections. Alert on high-risk sign-ins. Correlate with on-prem AD events for hybrid investigations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:aad:signin\"\n| where riskLevelDuringSignIn IN (\"high\",\"medium\")\n| table _time, userPrincipalName, ipAddress, location, riskLevelDuringSignIn, riskDetail\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Entra ID Risky Sign-Ins** — Entra ID Identity Protection detects risky sign-ins using Microsoft's threat intelligence. Ingesting into Splunk enables correlation with on-prem events.\n\nDocumented **Data sources**: Entra ID sign-in logs, risk detection events (via Graph API). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:aad:signin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:aad:signin\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where riskLevelDuringSignIn IN (\"high\",\"medium\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Entra ID Risky Sign-Ins**): table _time, userPrincipalName, ipAddress, location, riskLevelDuringSignIn, riskDetail\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\" AND match(Authentication.app, \"(?i)azure|entra|aad\")\n  by Authentication.user Authentication.src Authentication.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Entra ID Risky Sign-Ins** — Entra ID Identity Protection detects risky sign-ins using Microsoft's threat intelligence. Ingesting into Splunk enables correlation with on-prem events.\n\nDocumented **Data sources**: Entra ID sign-in logs, risk detection events (via Graph API). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (risky sign-ins), Geo map (sign-in locations), Line chart (risk events over time), Bar chart (risk types).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Entra ID Risky Sign-Ins",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\" AND match(Authentication.app, \"(?i)azure|entra|aad\")\n  by Authentication.user Authentication.src Authentication.app span=1h\n| sort -count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
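The value statement for Entra ID Risky Sign-Ins centres on correlating cloud risk events with on-prem activity. A minimal hybrid sketch, assuming the indexes and sourcetypes used elsewhere in this category; mapping a UPN prefix onto sAMAccountName is environment-specific, so treat the `user` normalization as an assumption (in classic-format 4624 events `Account_Name` is multivalued, and `mvindex(..., -1)` picks the logging-on account):

```spl
index=azure sourcetype="azure:aad:signin" riskLevelDuringSignIn IN ("high","medium")
| eval user=lower(mvindex(split(userPrincipalName, "@"), 0))
| join type=left user
    [ search index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624
      | eval user=lower(mvindex(Account_Name, -1))
      | stats latest(_time) as last_onprem_logon by user ]
| table _time, userPrincipalName, ipAddress, riskLevelDuringSignIn, last_onprem_logon
```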
            {
              "i": "9.1.12",
              "n": "Conditional Access Policy Failures",
              "c": "high",
              "f": "beginner",
              "v": "Conditional Access blocks indicate non-compliant devices or policy misconfigurations. Monitoring ensures security policies work without excessive user friction.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Entra ID sign-in logs (conditionalAccessStatus field)",
              "q": "index=azure sourcetype=\"azure:aad:signin\" conditionalAccessStatus=\"failure\"\n| stats count by userPrincipalName, appDisplayName, conditionalAccessPolicies{}.displayName\n| sort -count",
              "m": "Ingest Entra ID sign-in logs. Filter for Conditional Access failures. Track failure rates per policy and per user. Alert on sudden spikes indicating policy misconfiguration. Report on most-blocked policies and applications.",
              "z": "Bar chart (failures by policy), Table (blocked users), Line chart (failure rate trend), Pie chart (failures by application).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Entra ID sign-in logs (conditionalAccessStatus field).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Entra ID sign-in logs. Filter for Conditional Access failures. Track failure rates per policy and per user. Alert on sudden spikes indicating policy misconfiguration. Report on most-blocked policies and applications.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:aad:signin\" conditionalAccessStatus=\"failure\"\n| stats count by userPrincipalName, appDisplayName, conditionalAccessPolicies{}.displayName\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Conditional Access Policy Failures** — Conditional Access blocks indicate non-compliant devices or policy misconfigurations. Monitoring ensures security policies work without excessive user friction.\n\nDocumented **Data sources**: Entra ID sign-in logs (conditionalAccessStatus field). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:aad:signin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:aad:signin\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userPrincipalName, appDisplayName, conditionalAccessPolicies{}.displayName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Conditional Access Policy Failures** — Conditional Access blocks indicate non-compliant devices or policy misconfigurations. Monitoring ensures security policies work without excessive user friction.\n\nDocumented **Data sources**: Entra ID sign-in logs (conditionalAccessStatus field). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failures by policy), Table (blocked users), Line chart (failure rate trend), Pie chart (failures by application).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Conditional Access Policy Failures",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
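The implementation notes for Conditional Access Policy Failures call for alerting on sudden spikes rather than a fixed count. A minimal rolling-baseline sketch, assuming hourly buckets and a 24-bucket lookback; the three-sigma multiplier is an assumption to tune:

```spl
index=azure sourcetype="azure:aad:signin" conditionalAccessStatus="failure"
| timechart span=1h count as failures
| streamstats current=f window=24 avg(failures) as baseline, stdev(failures) as dev
| where failures > baseline + 3 * dev
```

Excluding the current bucket (`current=f`) keeps a spike from inflating its own baseline.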
            {
              "i": "9.1.13",
              "n": "AD Certificate Services Certificate Expiration",
              "c": "high",
              "f": "intermediate",
              "v": "Internal CA-issued certificates approaching expiry; missed renewals cause outages.",
              "t": "`Splunk_TA_windows`, custom scripted input (certutil)",
              "d": "ADCS issued certificates database (certutil -view), Certificate Services logs",
              "q": "index=adcs sourcetype=\"adcs:cert_inventory\"\n| eval days_to_expiry=round((expiry_epoch-now())/86400)\n| where days_to_expiry < 30 AND days_to_expiry > 0\n| table _time, subject, issuer, days_to_expiry, serial_number\n| sort days_to_expiry",
              "m": "Run `certutil -view -restrict \"Disposition=20\"` (issued certs) on CA servers via scripted input daily. Parse output and compute days until expiry. Alert on certificates expiring within 30 days. Include Certificate Services event logs (Event ID 100–107) for issuance/renewal events. Maintain lookup of critical certs (e.g., LDAPS, VPN) for prioritized alerts.",
              "z": "Table (expiring certificates), Single value (certs expiring in 30 days), Gauge (days until next expiry), Bar chart (expiry by issuer).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, custom scripted input (certutil).\n• Ensure the following data sources are available: ADCS issued certificates database (certutil -view), Certificate Services logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun `certutil -view -restrict \"Disposition=20\"` (issued certs) on CA servers via scripted input daily. Parse output and compute days until expiry. Alert on certificates expiring within 30 days. Include Certificate Services event logs (Event ID 100–107) for issuance/renewal events. Maintain lookup of critical certs (e.g., LDAPS, VPN) for prioritized alerts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=adcs sourcetype=\"adcs:cert_inventory\"\n| eval days_to_expiry=round((expiry_epoch-now())/86400)\n| where days_to_expiry < 30 AND days_to_expiry > 0\n| table _time, subject, issuer, days_to_expiry, serial_number\n| sort days_to_expiry\n```\n\nUnderstanding this SPL\n\n**AD Certificate Services Certificate Expiration** — Internal CA-issued certificates approaching expiry; missed renewals cause outages.\n\nDocumented **Data sources**: ADCS issued certificates database (certutil -view), Certificate Services logs. **App/TA** (typical add-on context): `Splunk_TA_windows`, custom scripted input (certutil). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: adcs; **sourcetype**: adcs:cert_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=adcs, sourcetype=\"adcs:cert_inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_to_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_expiry < 30 AND days_to_expiry > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **AD Certificate Services Certificate Expiration**): table _time, subject, issuer, days_to_expiry, serial_number\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (expiring certificates), Single value (certs expiring in 30 days), Gauge (days until next expiry), Bar chart (expiry by issuer).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
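              "_spl_sketch": "Prioritization sketch for the critical-certs lookup mentioned in Step 1. A minimal example, assuming a hypothetical lookup file `critical_certs.csv` with columns `subject` and `service_tier`; adjust the names to your environment:\n\n```spl\nindex=adcs sourcetype=\"adcs:cert_inventory\"\n| eval days_to_expiry=round((expiry_epoch-now())/86400)\n| where days_to_expiry < 30 AND days_to_expiry > 0\n| lookup critical_certs.csv subject OUTPUT service_tier\n| eval severity=if(service_tier=\"critical\",\"high\",\"medium\")\n| table _time, subject, issuer, days_to_expiry, serial_number, severity\n| sort days_to_expiry\n```\n\nSaving a second alert scoped to `severity=\"high\"` on a shorter schedule gives faster paging on crown-jewel certificates such as LDAPS or VPN.",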
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — AD Certificate Services Certificate Expiration",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.14",
              "n": "Service Account Password Age",
              "c": "medium",
              "f": "intermediate",
              "v": "Service accounts with passwords older than policy permits increase risk exposure.",
              "t": "`Splunk_TA_windows` (AD inventory), SA-ldapsearch",
              "d": "AD attribute pwdLastSet on service accounts",
              "q": "index=ad sourcetype=\"ad:accounts\"\n| search objectClass=serviceAccount OR samAccountName=svc_* OR samAccountName=*_svc\n| eval days_since_pwd=round((now()-(pwdLastSet/10000000-11644473600))/86400)\n| where days_since_pwd > 90 AND enabled=\"True\"\n| table samAccountName, displayName, days_since_pwd, ou\n| sort -days_since_pwd",
              "m": "Run PowerShell or ldapsearch script querying AD for service accounts (filter by naming convention or OU). Export pwdLastSet and convert to days. Ingest via scripted input. Alert on accounts exceeding policy (e.g., >90 days). Maintain lookup of accounts with approved exceptions. Report for quarterly access reviews.",
              "z": "Table (overdue service accounts), Bar chart (password age by OU), Single value (accounts over policy limit), Gauge (compliance %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (AD inventory), SA-ldapsearch.\n• Ensure the following data sources are available: AD attribute pwdLastSet on service accounts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun PowerShell or ldapsearch script querying AD for service accounts (filter by naming convention or OU). Export pwdLastSet and convert to days. Ingest via scripted input. Alert on accounts exceeding policy (e.g., >90 days). Maintain lookup of accounts with approved exceptions. Report for quarterly access reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ad sourcetype=\"ad:accounts\"\n| search objectClass=serviceAccount OR samAccountName=svc_* OR samAccountName=*_svc\n| eval days_since_pwd=round((now()-(pwdLastSet/10000000-11644473600))/86400)\n| where days_since_pwd > 90 AND enabled=\"True\"\n| table samAccountName, displayName, days_since_pwd, ou\n| sort -days_since_pwd\n```\n\nUnderstanding this SPL\n\n**Service Account Password Age** — Service accounts with passwords older than policy permits increase risk exposure.\n\nDocumented **Data sources**: AD attribute pwdLastSet on service accounts. **App/TA** (typical add-on context): `Splunk_TA_windows` (AD inventory), SA-ldapsearch. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ad; **sourcetype**: ad:accounts. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ad, sourcetype=\"ad:accounts\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **days_since_pwd** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since_pwd > 90 AND enabled=\"True\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Service Account Password Age**): table samAccountName, displayName, days_since_pwd, ou\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue service accounts), Bar chart (password age by OU), Single value (accounts over policy limit), Gauge (compliance %).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
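              "_spl_sketch": "Exception-handling sketch for the approved-exceptions lookup mentioned in Step 1. A minimal example, assuming a hypothetical lookup `svc_pwd_exceptions.csv` with columns `samAccountName` and `exception_expiry` (epoch seconds); adjust to your naming:\n\n```spl\nindex=ad sourcetype=\"ad:accounts\" (samAccountName=svc_* OR samAccountName=*_svc)\n| eval days_since_pwd=round((now()-(pwdLastSet/10000000-11644473600))/86400)\n| where days_since_pwd > 90 AND enabled=\"True\"\n| lookup svc_pwd_exceptions.csv samAccountName OUTPUT exception_expiry\n| eval exception_expiry=tonumber(exception_expiry)\n| where isnull(exception_expiry) OR exception_expiry < now()\n| table samAccountName, displayName, days_since_pwd, ou\n| sort -days_since_pwd\n```\n\nAccounts with an unexpired exception drop out of the alert while still appearing in the quarterly review report.",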
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Service Account Password Age",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.15",
              "n": "Kerberoasting Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Attackers request weakly encrypted TGS tickets for service accounts to crack passwords offline. Focused Kerberoasting detection complements generic Kerberos monitoring.",
              "t": "`Splunk_TA_windows`",
              "d": "Security Event Log (4769 — Kerberos service ticket requested)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4769 Ticket_Encryption_Type=0x17\n| stats count, values(Service_Name) as spns by Account_Name\n| where count >= 5\n| sort -count",
              "m": "Forward 4769 from DCs. Flag RC4 (0x17) TGS requests in bulk per user; tune thresholds for service accounts that legitimately use RC4. Enforce AES for sensitive SPNs in AD and rotate krbtgt on schedule.",
              "z": "Table (user, SPN, request count), Bar chart (Kerberoasting candidates by OU), Timeline (spikes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1558.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Security Event Log (4769 — Kerberos service ticket requested).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward 4769 from DCs. Flag RC4 (0x17) TGS requests in bulk per user; tune thresholds for service accounts that legitimately use RC4. Enforce AES for sensitive SPNs in AD and rotate krbtgt on schedule.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4769 Ticket_Encryption_Type=0x17\n| stats count, values(Service_Name) as spns by Account_Name\n| where count >= 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Kerberoasting Detection** — Attackers request weakly encrypted TGS tickets for service accounts to crack passwords offline. Focused Kerberoasting detection complements generic Kerberos monitoring.\n\nDocumented **Data sources**: Security Event Log (4769 — Kerberos service ticket requested). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Account_Name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count >= 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src span=1h\n| where count > 50\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Kerberoasting Detection** — Attackers request weakly encrypted TGS tickets for service accounts to crack passwords offline. Focused Kerberoasting detection complements generic Kerberos monitoring.\n\nDocumented **Data sources**: Security Event Log (4769 — Kerberos service ticket requested). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 50` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, SPN, request count), Bar chart (Kerberoasting candidates by OU), Timeline (spikes).",
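              "_spl_sketch": "Burst-detection sketch that tightens the base search: it counts distinct SPNs per user in a short window and suppresses known RC4-dependent services. A minimal example, assuming a hypothetical allowlist lookup `rc4_allowlist.csv` with a `Service_Name` column and an `approved` flag:\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4769 Ticket_Encryption_Type=0x17\n| lookup rc4_allowlist.csv Service_Name OUTPUT approved\n| where isnull(approved)\n| bin _time span=10m\n| stats dc(Service_Name) as distinct_spns, values(Service_Name) as spns by Account_Name, _time\n| where distinct_spns >= 5\n| sort -distinct_spns\n```\n\nMany distinct SPNs requested by one account inside ten minutes is a stronger Kerberoasting signal than raw request volume.",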
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for odd Kerberos service ticket requests so we can spot attackers trying to crack service accounts.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=success\n  by Authentication.user Authentication.src span=1h\n| where count > 50",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.16",
              "n": "Golden Ticket Indicators",
              "c": "critical",
              "f": "advanced",
              "v": "Forged TGTs often produce anomalous ticket lifetimes, encryption types, or DC sourcing. Heuristic alerts support hunt teams when krbtgt may be compromised.",
              "t": "`Splunk_TA_windows`",
              "d": "Security Event Log (4768 — Kerberos authentication ticket requested), 4624 (logon type 10 with Kerberos)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4768\n| eval ticket_life_h=(Ticket_Lifetime/3600)\n| where ticket_life_h > 10 OR Ticket_Encryption_Type IN (\"0xffffffff\",\"0x12\")\n| table _time, Account_Name, Ticket_Encryption_Type, ticket_life_h, IpAddress",
              "m": "Baseline normal TGT lifetimes and encryption types per domain. Alert on unusual lifetimes, unknown ETYPE, or TGT requests not originating from expected workstations. Correlate with 4624 type 10 and lateral movement analytics.",
              "z": "Table (suspicious TGT events), Timeline, Single value (anomalies per day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1558.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Security Event Log (4768 — Kerberos authentication ticket requested), 4624 (logon type 10 with Kerberos).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline normal TGT lifetimes and encryption types per domain. Alert on unusual lifetimes, unknown ETYPE, or TGT requests not originating from expected workstations. Correlate with 4624 type 10 and lateral movement analytics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4768\n| eval ticket_life_h=(Ticket_Lifetime/3600)\n| where ticket_life_h > 10 OR Ticket_Encryption_Type IN (\"0xffffffff\",\"0x12\")\n| table _time, Account_Name, Ticket_Encryption_Type, ticket_life_h, IpAddress\n```\n\nUnderstanding this SPL\n\n**Golden Ticket Indicators** — Forged TGTs often produce anomalous ticket lifetimes, encryption types, or DC sourcing. Heuristic alerts support hunt teams when krbtgt may be compromised.\n\nDocumented **Data sources**: Security Event Log (4768 — Kerberos authentication ticket requested), 4624 (logon type 10 with Kerberos). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ticket_life_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ticket_life_h > 10 OR Ticket_Encryption_Type IN (\"0xffffffff\",\"0x12\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Golden Ticket Indicators**): table _time, Account_Name, Ticket_Encryption_Type, ticket_life_h, IpAddress\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Golden Ticket Indicators** — Forged TGTs often produce anomalous ticket lifetimes, encryption types, or DC sourcing. Heuristic alerts support hunt teams when krbtgt may be compromised.\n\nDocumented **Data sources**: Security Event Log (4768 — Kerberos authentication ticket requested), 4624 (logon type 10 with Kerberos). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious TGT events), Timeline, Single value (anomalies per day).",
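              "_spl_sketch": "Baseline sketch for the per-domain lifetime baseline described in Step 1. A minimal example; `Ticket_Lifetime` and `Account_Domain` depend on your TA's field extractions, so verify them against a sample 4768 event first:\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4768\n| eval ticket_life_h=Ticket_Lifetime/3600\n| eventstats avg(ticket_life_h) as avg_life, stdev(ticket_life_h) as sd_life by Account_Domain\n| where ticket_life_h > avg_life + 3*sd_life\n| table _time, Account_Name, Account_Domain, Ticket_Encryption_Type, ticket_life_h, avg_life, IpAddress\n```\n\nA three-sigma outlier per domain adapts to local Kerberos policy instead of a fixed 10-hour cutoff.",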
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch sign-in and ticket patterns that look like forged domain tickets so we can respond before wide compromise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.17",
              "n": "Entra Conditional Access Policy Changes",
              "c": "critical",
              "f": "intermediate",
              "v": "Policy edits can weaken MFA, device compliance, or location controls org-wide. Auditing changes supports SOC2/ISO and incident response.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Entra ID audit logs (`DirectoryAudit` — Conditional Access policy create/update/delete)",
              "q": "index=azure sourcetype=\"azure:aad:audit\"\n| search \"Conditional Access\" OR activityDisplayName=\"Update conditional access policy\"\n| table _time, initiatedBy.user.userPrincipalName, targetResources{}.displayName, activityDisplayName, result\n| sort -_time",
              "m": "Ingest Entra audit logs via Graph. Alert on any CA policy lifecycle change; require change ticket correlation. Snapshot policy IDs in lookups for crown-jewel apps.",
              "z": "Timeline (policy changes), Table (actor, policy, result), Bar chart (changes by admin).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098.003",
                "T1562.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Entra ID audit logs (`DirectoryAudit` — Conditional Access policy create/update/delete).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Entra audit logs via Graph. Alert on any CA policy lifecycle change; require change ticket correlation. Snapshot policy IDs in lookups for crown-jewel apps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:aad:audit\"\n| search \"Conditional Access\" OR activityDisplayName=\"Update conditional access policy\"\n| table _time, initiatedBy.user.userPrincipalName, targetResources{}.displayName, activityDisplayName, result\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Entra Conditional Access Policy Changes** — Policy edits can weaken MFA, device compliance, or location controls org-wide. Auditing changes supports SOC2/ISO and incident response.\n\nDocumented **Data sources**: Entra ID audit logs (`DirectoryAudit` — Conditional Access policy create/update/delete). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:aad:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:aad:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Entra Conditional Access Policy Changes**): table _time, initiatedBy.user.userPrincipalName, targetResources{}.displayName, activityDisplayName, result\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Entra Conditional Access Policy Changes** — Policy edits can weaken MFA, device compliance, or location controls org-wide. Auditing changes supports SOC2/ISO and incident response.\n\nDocumented **Data sources**: Entra ID audit logs (`DirectoryAudit` — Conditional Access policy create/update/delete). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (policy changes), Table (actor, policy, result), Bar chart (changes by admin).",
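              "_spl_sketch": "Change-ticket correlation sketch for the requirement in Step 1. A minimal example, assuming a hypothetical lookup `approved_changes.csv` with columns `actor` and `ticket_id` populated from your change system:\n\n```spl\nindex=azure sourcetype=\"azure:aad:audit\" activityDisplayName=\"Update conditional access policy\"\n| rename \"initiatedBy.user.userPrincipalName\" as actor\n| lookup approved_changes.csv actor OUTPUT ticket_id\n| where isnull(ticket_id)\n| table _time, actor, activityDisplayName, result\n| sort -_time\n```\n\nOnly changes without a matching ticket surface, which keeps planned rollouts out of the alert queue.",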
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Entra Conditional Access Policy Changes",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.18",
              "n": "Hybrid Join Device Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Hybrid Azure AD join state and Intune compliance gate access; drift from compliant blocks users and signals stale or tampered endpoints.",
              "t": "`Splunk_TA_microsoft-cloudservices`, Microsoft Intune / Graph scripted input",
              "d": "Entra ID device objects (`trustType`, `isCompliant`, `profileType`), Intune compliance reports",
              "q": "index=azure sourcetype=\"azure:intune:devices\" OR sourcetype=\"azure:aad:devices\"\n| where trustType=\"ServerAd\" AND (isCompliant=\"false\" OR isCompliant=\"False\")\n| stats latest(_time) as last_seen by deviceId, displayName, managementType, isCompliant\n| sort -last_seen",
              "m": "Ingest device inventory from Graph/Intune on a schedule. Join with sign-in logs for non-compliant hybrid devices. Alert on compliance flip from true to false or long-running non-compliance.",
              "z": "Table (non-compliant hybrid devices), Pie chart (compliant vs not), Line chart (non-compliance trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`, Microsoft Intune / Graph scripted input.\n• Ensure the following data sources are available: Entra ID device objects (`trustType`, `isCompliant`, `profileType`), Intune compliance reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest device inventory from Graph/Intune on a schedule. Join with sign-in logs for non-compliant hybrid devices. Alert on compliance flip from true to false or long-running non-compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:intune:devices\" OR sourcetype=\"azure:aad:devices\"\n| where trustType=\"ServerAd\" AND (isCompliant=\"false\" OR isCompliant=\"False\")\n| stats latest(_time) as last_seen by deviceId, displayName, managementType, isCompliant\n| sort -last_seen\n```\n\nUnderstanding this SPL\n\n**Hybrid Join Device Compliance** — Hybrid Azure AD join state and Intune compliance gate access; drift from compliant blocks users and signals stale or tampered endpoints.\n\nDocumented **Data sources**: Entra ID device objects (`trustType`, `isCompliant`, `profileType`), Intune compliance reports. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`, Microsoft Intune / Graph scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:intune:devices, azure:aad:devices. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:intune:devices\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where trustType=\"ServerAd\" AND (isCompliant=\"false\" OR isCompliant=\"False\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by deviceId, displayName, managementType, isCompliant** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Hybrid Join Device Compliance** — Hybrid Azure AD join state and Intune compliance gate access; drift from compliant blocks users and signals stale or tampered endpoints.\n\nDocumented **Data sources**: Entra ID device objects (`trustType`, `isCompliant`, `profileType`), Intune compliance reports. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`, Microsoft Intune / Graph scripted input. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant hybrid devices), Pie chart (compliant vs not), Line chart (non-compliance trend).",
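              "_spl_sketch": "Compliance-flip sketch for the true-to-false transition described in Step 1. A minimal example; field names come from the Graph/Intune feed, so confirm `deviceId` and `isCompliant` in your events:\n\n```spl\nindex=azure sourcetype=\"azure:intune:devices\" trustType=\"ServerAd\"\n| eval compliant=lower(isCompliant)\n| sort 0 deviceId, _time\n| streamstats current=f last(compliant) as prev_compliant by deviceId\n| where prev_compliant=\"true\" AND compliant=\"false\"\n| table _time, deviceId, displayName, prev_compliant, compliant\n```\n\n`streamstats` compares each snapshot with the previous one per device, so only the moment of the flip alerts rather than every non-compliant snapshot.",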
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We list hybrid-joined devices that are out of policy so you can fix them before they block people or let risky devices stay on the network.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.19",
              "n": "LAPS Password Rotation Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Failed LAPS rotations leave predictable local admin passwords; attackers target stale LAPS attributes.",
              "t": "`Splunk_TA_windows`",
              "d": "Operational log `Microsoft-Windows-LAPS/Operational` (Event IDs 10023, 10024, 10025, 10026), or legacy CSE events",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-LAPS/Operational\" EventCode IN (10023,10024,10025,10026)\n| stats count by ComputerName, EventCode, Message\n| where count > 0\n| sort -count",
              "m": "Forward LAPS Operational log from all domain-joined clients that use LAPS. Map Event IDs to rotation success/failure. Alert on repeated failures per OU or GPO scope. Correlate with GPO and network issues.",
              "z": "Table (hosts with failures), Bar chart (failures by OU), Single value (failed rotations 24h).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [
                "T1552"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Operational log `Microsoft-Windows-LAPS/Operational` (Event IDs 10023, 10024, 10025, 10026), or legacy CSE events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward LAPS Operational log from all domain-joined clients that use LAPS. Map Event IDs to rotation success/failure. Alert on repeated failures per OU or GPO scope. Correlate with GPO and network issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-LAPS/Operational\" EventCode IN (10023,10024,10025,10026)\n| stats count by ComputerName, EventCode, Message\n| where count > 0\n| sort -count\n```\n\nUnderstanding this SPL\n\n**LAPS Password Rotation Failures** — Failed LAPS rotations leave predictable local admin passwords; attackers target stale LAPS attributes.\n\nDocumented **Data sources**: Operational log `Microsoft-Windows-LAPS/Operational` (Event IDs 10023, 10024, 10025, 10026), or legacy CSE events. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-LAPS/Operational. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-LAPS/Operational\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ComputerName, EventCode, Message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hosts with failures), Bar chart (failures by OU), Single value (failed rotations 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — LAPS Password Rotation Failures",
              "mtype": [
                "Fault",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.20",
              "n": "AD Replication Topology Changes",
              "c": "critical",
              "f": "advanced",
              "v": "New connections, site link, or bridgehead changes can indicate persistence or misconfiguration affecting auth paths.",
              "t": "`Splunk_TA_windows`, `repadmin` / scripted input",
              "d": "Directory Service events (KCC topology), scripted `repadmin /showconn` / `nltest`",
              "q": "index=wineventlog (sourcetype=\"WinEventLog:Directory Service\" EventCode IN (1308,1311,1394)) OR sourcetype=\"ad:topology\"\n| table _time, host, EventCode, Message, connection_from, connection_to\n| sort -_time",
              "m": "Enable KCC and replication diagnostics. Ingest periodic topology snapshots. Alert on new unexpected replication partners or disabled site links outside change windows.",
              "z": "Timeline (topology events), Table (connection changes), Diagram export (optional via lookup).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, `repadmin` / scripted input.\n• Ensure the following data sources are available: Directory Service events (KCC topology), scripted `repadmin /showconn` / `nltest`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable KCC and replication diagnostics. Ingest periodic topology snapshots. Alert on new unexpected replication partners or disabled site links outside change windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog (sourcetype=\"WinEventLog:Directory Service\" EventCode IN (1308,1311,1394)) OR sourcetype=\"ad:topology\"\n| table _time, host, EventCode, Message, connection_from, connection_to\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**AD Replication Topology Changes** — New connections, site link, or bridgehead changes can indicate persistence or misconfiguration affecting auth paths.\n\nDocumented **Data sources**: Directory Service events (KCC topology), scripted `repadmin /showconn` / `nltest`. **App/TA** (typical add-on context): `Splunk_TA_windows`, `repadmin` / scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Directory Service, ad:topology. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Directory Service\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **AD Replication Topology Changes**): table _time, host, EventCode, Message, connection_from, connection_to\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (topology events), Table (connection changes), Diagram export (optional via lookup).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — AD Replication Topology Changes",
              "mtype": [
                "Security",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.21",
              "n": "AdminSDHolder Modification",
              "c": "critical",
              "f": "intermediate",
              "v": "Changes to AdminSDHolder or SDProp timing can preserve attacker persistence on privileged accounts.",
              "t": "`Splunk_TA_windows`",
              "d": "Security Event Log (5136 — directory service object modified), object DN containing AdminSDHolder",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5136\n| search ObjectDN=\"*CN=AdminSDHolder,CN=System*\"\n| table _time, SubjectUserName, ObjectDN, AttributeLDAPDisplayName, AttributeValue\n| sort -_time",
              "m": "Enable DS change auditing on DCs. Alert on any modification to AdminSDHolder ACL or attributes. Review regularly for expected adminSDHolder propagation delays.",
              "z": "Table (changes), Timeline, Single value (changes per quarter — expect near zero).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Security Event Log (5136 — directory service object modified), object DN containing AdminSDHolder.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable DS change auditing on DCs. Alert on any modification to AdminSDHolder ACL or attributes. Review regularly for expected adminSDHolder propagation delays.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5136\n| search ObjectDN=\"*CN=AdminSDHolder,CN=System*\"\n| table _time, SubjectUserName, ObjectDN, AttributeLDAPDisplayName, AttributeValue\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**AdminSDHolder Modification** — Changes to AdminSDHolder or SDProp timing can preserve attacker persistence on privileged accounts.\n\nDocumented **Data sources**: Security Event Log (5136 — directory service object modified), object DN containing AdminSDHolder. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **AdminSDHolder Modification**): table _time, SubjectUserName, ObjectDN, AttributeLDAPDisplayName, AttributeValue\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**AdminSDHolder Modification** — Changes to AdminSDHolder or SDProp timing can preserve attacker persistence on privileged accounts.\n\nDocumented **Data sources**: Security Event Log (5136 — directory service object modified), object DN containing AdminSDHolder. **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (changes), Timeline, Single value (changes per quarter — expect near zero).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — AdminSDHolder Modification",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.22",
              "n": "GPO Tampering Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Tampering via SYSVOL (file-level) may bypass 5136-only monitoring. File integrity on GPO paths catches unauthorized edits.",
              "t": "`Splunk_TA_windows`, FIM TA (e.g., Splunk FIM or OSSEC)",
              "d": "GPO change events (5136), SYSVOL file integrity events, DFS-R replication errors for SYSVOL",
              "q": "index=ossec sourcetype=\"ossec:fim\" OR index=fim sourcetype=\"fim:change\"\n| search path=\"*\\\\SYSVOL\\\\*\\\\Policies\\\\*\"\n| stats count by path, user, action\n| sort -count",
              "m": "Deploy FIM on DCs or SYSVOL replica members. Alert on new/modified GPO files outside change windows. Correlate with 5136 and DFS-R 4412/5004 events.",
              "z": "Table (file paths changed), Timeline, Bar chart (changes by DC).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[GPO change events](https://splunkbase.splunk.com/app/5136), [Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1484.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, FIM TA (e.g., Splunk FIM or OSSEC).\n• Ensure the following data sources are available: GPO change events (5136), SYSVOL file integrity events, DFS-R replication errors for SYSVOL.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy FIM on DCs or SYSVOL replica members. Alert on new/modified GPO files outside change windows. Correlate with 5136 and DFS-R 4412/5004 events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ossec sourcetype=\"ossec:fim\" OR index=fim sourcetype=\"fim:change\"\n| search path=\"*\\\\SYSVOL\\\\*\\\\Policies\\\\*\"\n| stats count by path, user, action\n| sort -count\n```\n\nUnderstanding this SPL\n\n**GPO Tampering Detection** — Tampering via SYSVOL (file-level) may bypass 5136-only monitoring. File integrity on GPO paths catches unauthorized edits.\n\nDocumented **Data sources**: GPO change events (5136), SYSVOL file integrity events, DFS-R replication errors for SYSVOL. **App/TA** (typical add-on context): `Splunk_TA_windows`, FIM TA (e.g., Splunk FIM or OSSEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ossec, fim; **sourcetype**: ossec:fim, fim:change. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ossec, index=fim, sourcetype=\"ossec:fim\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by path, user, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user, All_Changes.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GPO Tampering Detection** — Tampering via SYSVOL (file-level) may bypass 5136-only monitoring. File integrity on GPO paths catches unauthorized edits.\n\nDocumented **Data sources**: GPO change events (5136), SYSVOL file integrity events, DFS-R replication errors for SYSVOL. **App/TA** (typical add-on context): `Splunk_TA_windows`, FIM TA (e.g., Splunk FIM or OSSEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (file paths changed), Timeline, Bar chart (changes by DC).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — GPO Tampering Detection",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user, All_Changes.action | sort - count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.23",
              "n": "Entra PIM Activation Audit",
              "c": "critical",
              "f": "beginner",
              "v": "Privileged Identity Management activations grant time-bound admin roles; auditing ensures approvals and detects abuse.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Entra audit logs (`Add member to role completed`, PIM `RequestApproved` / `RoleAssignmentSchedule`)",
              "q": "index=azure sourcetype=\"azure:aad:audit\"\n| search \"PIM\" OR activityDisplayName IN (\"Add member to role in PIM completed\",\"Add member to role completed\")\n| table _time, initiatedBy.user.userPrincipalName, targetResources{}.displayName, result, activityDisplayName\n| sort -_time",
              "m": "Ingest PIM-related audit events. Alert on activations outside business hours, without ticket ID (custom field), or for highly privileged roles. Report monthly for access reviews.",
              "z": "Table (activations), Bar chart (role activations by user), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.004",
                "T1098.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Entra audit logs (`Add member to role completed`, PIM `RequestApproved` / `RoleAssignmentSchedule`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest PIM-related audit events. Alert on activations outside business hours, without ticket ID (custom field), or for highly privileged roles. Report monthly for access reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:aad:audit\"\n| search \"PIM\" OR activityDisplayName IN (\"Add member to role in PIM completed\",\"Add member to role completed\")\n| table _time, initiatedBy.user.userPrincipalName, targetResources{}.displayName, result, activityDisplayName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Entra PIM Activation Audit** — Privileged Identity Management activations grant time-bound admin roles; auditing ensures approvals and detects abuse.\n\nDocumented **Data sources**: Entra audit logs (`Add member to role completed`, PIM `RequestApproved` / `RoleAssignmentSchedule`). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:aad:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:aad:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Entra PIM Activation Audit**): table _time, initiatedBy.user.userPrincipalName, targetResources{}.displayName, result, activityDisplayName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Entra PIM Activation Audit** — Privileged Identity Management activations grant time-bound admin roles; auditing ensures approvals and detects abuse.\n\nDocumented **Data sources**: Entra audit logs (`Add member to role completed`, PIM `RequestApproved` / `RoleAssignmentSchedule`). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (activations), Bar chart (role activations by user), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Entra PIM Activation Audit",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.24",
              "n": "Stale Computer Account Cleanup",
              "c": "medium",
              "f": "intermediate",
              "v": "Stale computer objects enable rogue domain joins and clutter access reviews. Tracking supports automated disable/delete workflows.",
              "t": "Scripted input (PowerShell `Get-ADComputer`)",
              "d": "AD computer attributes (`lastLogonTimestamp`, `pwdLastSet`, `whenCreated`)",
              "q": "index=ad sourcetype=\"ad:computers\"\n| eval days_stale=round((now()-lastLogonTimestamp)/86400)\n| where days_stale > 90 AND Enabled=\"True\"\n| table samAccountName, operatingSystem, days_stale, distinguishedName\n| sort -days_stale",
              "m": "Export computer inventory weekly. Join with DHCP/DNS for false positives. Feed cleanup automation; exclude known appliance OUs via lookup.",
              "z": "Table (stale computers), Bar chart (stale count by OU), Single value (candidates for cleanup).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1078.002",
                "T1087.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Scripted input (PowerShell `Get-ADComputer`).\n• Ensure the following data sources are available: AD computer attributes (`lastLogonTimestamp`, `pwdLastSet`, `whenCreated`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport computer inventory weekly. Join with DHCP/DNS for false positives. Feed cleanup automation; exclude known appliance OUs via lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ad sourcetype=\"ad:computers\"\n| eval days_stale=round((now()-lastLogonTimestamp)/86400)\n| where days_stale > 90 AND Enabled=\"True\"\n| table samAccountName, operatingSystem, days_stale, distinguishedName\n| sort -days_stale\n```\n\nUnderstanding this SPL\n\n**Stale Computer Account Cleanup** — Stale computer objects enable rogue domain joins and clutter access reviews. Tracking supports automated disable/delete workflows.\n\nDocumented **Data sources**: AD computer attributes (`lastLogonTimestamp`, `pwdLastSet`, `whenCreated`). **App/TA** (typical add-on context): Scripted input (PowerShell `Get-ADComputer`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ad; **sourcetype**: ad:computers. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ad, sourcetype=\"ad:computers\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_stale** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_stale > 90 AND Enabled=\"True\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Stale Computer Account Cleanup**): table samAccountName, operatingSystem, days_stale, distinguishedName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale computers), Bar chart (stale count by OU), Single value (candidates for cleanup).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Stale Computer Account Cleanup",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.25",
              "n": "AD Forest Trust Changes",
              "c": "critical",
              "f": "intermediate",
              "v": "Trust direction and selective authentication changes alter cross-forest attack surface; distinct from one-off session events.",
              "t": "`Splunk_TA_windows`",
              "d": "Security Event Log (4706 — trust modified, 4713 — trust deleted, 4716 — trusted domain information modified)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4706,4713,4716)\n| table _time, SubjectUserName, TargetDomainName, TrustType, TrustDirection, SidFiltering\n| sort -_time",
              "m": "Forward all DC Security logs. Require CAB approval for trust changes. Alert on selective auth disablement or inbound trust creation.",
              "z": "Table (trust changes), Timeline, Single value (changes per year).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1484.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Security Event Log (4706 — trust modified, 4713 — trust deleted, 4716 — trusted domain information modified).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward all DC Security logs. Require CAB approval for trust changes. Alert on selective auth disablement or inbound trust creation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4706,4713,4716)\n| table _time, SubjectUserName, TargetDomainName, TrustType, TrustDirection, SidFiltering\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**AD Forest Trust Changes** — Trust direction and selective authentication changes alter cross-forest attack surface; distinct from one-off session events.\n\nDocumented **Data sources**: Security Event Log (4706 — trust modified, 4713 — trust deleted, 4716 — trusted domain information modified). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **AD Forest Trust Changes**): table _time, SubjectUserName, TargetDomainName, TrustType, TrustDirection, SidFiltering\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**AD Forest Trust Changes** — Trust direction and selective authentication changes alter cross-forest attack surface; distinct from one-off session events.\n\nDocumented **Data sources**: Security Event Log (4706 — trust modified, 4713 — trust deleted, 4716 — trusted domain information modified). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (trust changes), Timeline, Single value (changes per year).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — AD Forest Trust Changes",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.26",
              "n": "Certificate Template Abuse (ESC Attacks)",
              "c": "critical",
              "f": "advanced",
              "v": "Misconfigured templates (ESC1/ESC8) allow domain escalation via certificate requests. Monitoring issuance and template edits reduces exposure.",
              "t": "`Splunk_TA_windows`, AD CS logs",
              "d": "Certificate Services (4886, 4887, 4888), AD CS template change auditing (5136 on `CN=Certificate Templates`)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4886\n| search Requester!=\"\" Template_OID=*\n| lookup cert_template_risk Template_OID OUTPUT risk_esc\n| where risk_esc IN (\"ESC1\",\"ESC8\")\n| table _time, Requester, Template_OID, risk_esc, ComputerName",
              "m": "Enable CA and template auditing. Maintain lookup mapping template OIDs to ESC categories (per SpecterOps research). Alert on enrollment to high-risk templates and on template ACL/schema changes.",
              "z": "Table (risky enrollments), Bar chart (requests by template), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [
                "T1552.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, AD CS logs.\n• Ensure the following data sources are available: Certificate Services (4886, 4887, 4888), AD CS template change auditing (5136 on `CN=Certificate Templates`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CA and template auditing. Maintain lookup mapping template OIDs to ESC categories (per SpecterOps research). Alert on enrollment to high-risk templates and on template ACL/schema changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4886\n| search Requester!=\"\" Template_OID=*\n| lookup cert_template_risk Template_OID OUTPUT risk_esc\n| where risk_esc IN (\"ESC1\",\"ESC8\")\n| table _time, Requester, Template_OID, risk_esc, ComputerName\n```\n\nUnderstanding this SPL\n\n**Certificate Template Abuse (ESC Attacks)** — Misconfigured templates (ESC1/ESC8) allow domain escalation via certificate requests. Monitoring issuance and template edits reduces exposure.\n\nDocumented **Data sources**: Certificate Services (4886, 4887, 4888), AD CS template change auditing (5136 on `CN=Certificate Templates`). **App/TA** (typical add-on context): `Splunk_TA_windows`, AD CS logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where risk_esc IN (\"ESC1\",\"ESC8\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Certificate Template Abuse (ESC Attacks)**): table _time, Requester, Template_OID, risk_esc, ComputerName\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (risky enrollments), Bar chart (requests by template), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Certificate Template Abuse (ESC Attacks)",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.27",
              "n": "Active Directory Replication",
              "c": "critical",
              "f": "intermediate",
              "v": "AD replication failures cause authentication inconsistencies — users locked out in one site but not another, stale GPOs, and split-brain scenarios.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Directory Service`, custom scripted input (`repadmin /replsummary`)",
              "q": "index=ad sourcetype=repadmin_replsummary\n| where failures > 0\n| table source_dc dest_dc failures last_failure last_success",
              "m": "Collect Directory Service event log from all DCs. Create scripted input running `repadmin /replsummary /csv` daily. Alert on any replication failure events. Critical alert on EventCode 2042 (tombstone lifetime exceeded).",
              "z": "Table of replication partners with status, Events timeline, Network diagram of DC replication.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Directory Service`, custom scripted input (`repadmin /replsummary`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Directory Service event log from all DCs. Create scripted input running `repadmin /replsummary /csv` daily. Alert on any replication failure events. Critical alert on EventCode 2042 (tombstone lifetime exceeded).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ad sourcetype=repadmin_replsummary\n| where failures > 0\n| table source_dc dest_dc failures last_failure last_success\n```\n\nUnderstanding this SPL\n\n**Active Directory Replication** — AD replication failures cause authentication inconsistencies — users locked out in one site but not another, stale GPOs, and split-brain scenarios.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Directory Service`, custom scripted input (`repadmin /replsummary`). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ad; **sourcetype**: repadmin_replsummary. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ad, sourcetype=repadmin_replsummary. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where failures > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Active Directory Replication**): table source_dc dest_dc failures last_failure last_success\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of replication partners with status, Events timeline, Network diagram of DC replication.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Active Directory Replication",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.1.28",
              "n": "AD Certificate Services (ADCS) Anomalies",
              "c": "critical",
              "f": "intermediate",
              "v": "ADCS misconfigurations enable privilege escalation (ESC1-ESC8 attacks). Monitoring certificate requests catches unauthorized certificate enrollment for domain admin impersonation.",
              "t": "`Splunk_TA_windows`",
              "d": "`sourcetype=WinEventLog:Security` (EventCode 4886, 4887, 4888)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4886, 4887, 4888)\n| eval action=case(EventCode=4886,\"Request received\",EventCode=4887,\"Certificate issued\",EventCode=4888,\"Certificate denied\")\n| stats count by Requester, CertificateTemplate, SubjectName, action\n| where CertificateTemplate IN (\"User\",\"SmartcardLogon\",\"Machine\") AND NOT match(Requester, \"(?i)(SYSTEM|machine\\\\$)\")\n| sort -count",
              "m": "Enable Certificate Services auditing on CA servers. EventCode 4887=certificate issued — track who requested which template. Alert on certificates with Subject Alternative Names (SANs) containing admin usernames (ESC1 attack). Monitor for certificate requests from non-standard templates. Track enrollment agent certificates (ESC3). Audit CA configuration for overly permissive templates with `certutil -v -template`.",
              "z": "Table (certificate issuances), Bar chart (requests by template), Timeline, Alert on SAN mismatches.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [
                "T1552.004",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: `sourcetype=WinEventLog:Security` (EventCode 4886, 4887, 4888).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Certificate Services auditing on CA servers. EventCode 4887=certificate issued — track who requested which template. Alert on certificates with Subject Alternative Names (SANs) containing admin usernames (ESC1 attack). Monitor for certificate requests from non-standard templates. Track enrollment agent certificates (ESC3). Audit CA configuration for overly permissive templates with `certutil -v -template`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4886, 4887, 4888)\n| eval action=case(EventCode=4886,\"Request received\",EventCode=4887,\"Certificate issued\",EventCode=4888,\"Certificate denied\")\n| stats count by Requester, CertificateTemplate, SubjectName, action\n| where CertificateTemplate IN (\"User\",\"SmartcardLogon\",\"Machine\") AND NOT match(Requester, \"(?i)(SYSTEM|machine\\\\$)\")\n| sort -count\n```\n\nUnderstanding this SPL\n\n**AD Certificate Services (ADCS) Anomalies** — ADCS misconfigurations enable privilege escalation (ESC1-ESC8 attacks). Monitoring certificate requests catches unauthorized certificate enrollment for domain admin impersonation.\n\nDocumented **Data sources**: `sourcetype=WinEventLog:Security` (EventCode 4886, 4887, 4888). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by Requester, CertificateTemplate, SubjectName, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where CertificateTemplate IN (\"User\",\"SmartcardLogon\",\"Machine\") AND NOT match(Requester, \"(?i)(SYSTEM|machine\\\\$)\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (certificate issuances), Bar chart (requests by template), Timeline, Alert on SAN mismatches.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — AD Certificate Services (ADCS) Anomalies",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 41.1,
          "qd": {
            "gold": 3,
            "silver": 0,
            "bronze": 25,
            "none": 0
          }
        },
        {
          "i": "9.2",
          "n": "LDAP Directories",
          "u": [
            {
              "i": "9.2.1",
              "n": "Bind Failure Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "LDAP bind failures indicate authentication issues, misconfigured applications, or brute-force attempts against directory services.",
              "t": "Syslog, LDAP server logs",
              "d": "OpenLDAP syslog, 389 Directory access log",
              "q": "index=ldap sourcetype=\"syslog\" \"BIND\" \"err=49\"\n| stats count by src, bind_dn\n| where count > 10\n| sort -count",
              "m": "Forward LDAP server syslog to Splunk. Parse bind operations and result codes (err=49 = invalid credentials). Alert on >10 failures per source per 15 minutes. Correlate with application health monitoring.",
              "z": "Table (bind failures by source/DN), Line chart (failure rate), Bar chart (top failing sources).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1110",
                "T1110.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Syslog, LDAP server logs.\n• Ensure the following data sources are available: OpenLDAP syslog, 389 Directory access log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward LDAP server syslog to Splunk. Parse bind operations and result codes (err=49 = invalid credentials). Alert on >10 failures per source per 15 minutes. Correlate with application health monitoring.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ldap sourcetype=\"syslog\" \"BIND\" \"err=49\"\n| stats count by src, bind_dn\n| where count > 10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Bind Failure Monitoring** — LDAP bind failures indicate authentication issues, misconfigured applications, or brute-force attempts against directory services.\n\nDocumented **Data sources**: OpenLDAP syslog, 389 Directory access log. **App/TA** (typical add-on context): Syslog, LDAP server logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ldap; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ldap, sourcetype=\"syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, bind_dn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Bind Failure Monitoring** — LDAP bind failures indicate authentication issues, misconfigured applications, or brute-force attempts against directory services.\n\nDocumented **Data sources**: OpenLDAP syslog, 389 Directory access log. **App/TA** (typical add-on context): Syslog, LDAP server logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (bind failures by source/DN), Line chart (failure rate), Bar chart (top failing sources).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Bind Failure Monitoring",
              "mtype": [
                "Security",
                "Fault"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10",
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.2.2",
              "n": "Search Performance Degradation",
              "c": "medium",
              "f": "beginner",
              "v": "Slow LDAP searches impact all applications relying on directory services for authentication and authorization.",
              "t": "LDAP access log parsing",
              "d": "OpenLDAP access log (search duration), 389 Directory access log",
              "q": "index=ldap sourcetype=\"openldap:access\" operation=\"SEARCH\"\n| where elapsed_ms > 1000\n| stats count, avg(elapsed_ms) as avg_ms by base_dn, filter\n| sort -avg_ms",
              "m": "Enable LDAP access logging with timing information. Parse search operations with duration. Alert on searches exceeding 1 second. Identify expensive filters (unindexed attributes, broad base DN). Recommend index creation.",
              "z": "Table (slow searches), Bar chart (avg duration by filter), Line chart (search latency trend).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: LDAP access log parsing.\n• Ensure the following data sources are available: OpenLDAP access log (search duration), 389 Directory access log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable LDAP access logging with timing information. Parse search operations with duration. Alert on searches exceeding 1 second. Identify expensive filters (unindexed attributes, broad base DN). Recommend index creation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ldap sourcetype=\"openldap:access\" operation=\"SEARCH\"\n| where elapsed_ms > 1000\n| stats count, avg(elapsed_ms) as avg_ms by base_dn, filter\n| sort -avg_ms\n```\n\nUnderstanding this SPL\n\n**Search Performance Degradation** — Slow LDAP searches impact all applications relying on directory services for authentication and authorization.\n\nDocumented **Data sources**: OpenLDAP access log (search duration), 389 Directory access log. **App/TA** (typical add-on context): LDAP access log parsing. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ldap; **sourcetype**: openldap:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ldap, sourcetype=\"openldap:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where elapsed_ms > 1000` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by base_dn, filter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (slow searches), Bar chart (avg duration by filter), Line chart (search latency trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Search Performance Degradation",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.2.3",
              "n": "Schema Modification Audit",
              "c": "critical",
              "f": "beginner",
              "v": "Schema changes to directory services can break applications and are rarely expected. Detection ensures change control compliance.",
              "t": "LDAP audit log",
              "d": "LDAP server audit log (schema modification events)",
              "q": "index=ldap sourcetype=\"openldap:audit\"\n| search \"cn=schema\" (\"add:\" OR \"delete:\" OR \"replace:\")\n| table _time, modifier_dn, changetype, modification",
              "m": "Enable LDAP audit logging (overlay in OpenLDAP, audit log in 389 DS). Forward to Splunk. Alert on any schema modification. These should be extremely rare and always correlated with change tickets.",
              "z": "Timeline (schema changes), Table (change details), Single value (schema changes this month).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1484"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: LDAP audit log.\n• Ensure the following data sources are available: LDAP server audit log (schema modification events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable LDAP audit logging (overlay in OpenLDAP, audit log in 389 DS). Forward to Splunk. Alert on any schema modification. These should be extremely rare and always correlated with change tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ldap sourcetype=\"openldap:audit\"\n| search \"cn=schema\" (\"add:\" OR \"delete:\" OR \"replace:\")\n| table _time, modifier_dn, changetype, modification\n```\n\nUnderstanding this SPL\n\n**Schema Modification Audit** — Schema changes to directory services can break applications and are rarely expected. Detection ensures change control compliance.\n\nDocumented **Data sources**: LDAP server audit log (schema modification events). **App/TA** (typical add-on context): LDAP audit log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ldap; **sourcetype**: openldap:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ldap, sourcetype=\"openldap:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Schema Modification Audit**): table _time, modifier_dn, changetype, modification\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.object, \"(?i)cn=schema\")\n  by All_Changes.user All_Changes.object span=1d\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Schema Modification Audit** — Schema changes to directory services can break applications and are rarely expected. Detection ensures change control compliance.\n\nDocumented **Data sources**: LDAP server audit log (schema modification events). **App/TA** (typical add-on context): LDAP audit log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (schema changes), Table (change details), Single value (schema changes this month).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Schema Modification Audit",
              "wv": "crawl",
              "mtype": [
                "Security",
                "Configuration"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.object, \"(?i)cn=schema\")\n  by All_Changes.user All_Changes.object span=1d\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "9.2.4",
              "n": "Replication Health Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "LDAP replication failures cause authentication inconsistencies and stale directory data across sites.",
              "t": "Scripted input, LDAP server logs",
              "d": "LDAP replication logs, `ldapsearch` monitoring attributes (contextCSN)",
              "q": "index=ldap sourcetype=\"openldap:syncrepl\"\n| search \"syncrepl\" (\"ERROR\" OR \"RETRY\" OR \"failed\")\n| stats count by host, provider\n| where count > 0",
              "m": "Monitor LDAP replication status via scripted input querying contextCSN or replication agreements. Forward syncrepl logs. Alert on replication failures or increasing lag between providers and consumers.",
              "z": "Status grid (provider × consumer health), Table (replication status), Timeline (failure events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Scripted input, LDAP server logs.\n• Ensure the following data sources are available: LDAP replication logs, `ldapsearch` monitoring attributes (contextCSN).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor LDAP replication status via scripted input querying contextCSN or replication agreements. Forward syncrepl logs. Alert on replication failures or increasing lag between providers and consumers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ldap sourcetype=\"openldap:syncrepl\"\n| search \"syncrepl\" (\"ERROR\" OR \"RETRY\" OR \"failed\")\n| stats count by host, provider\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Replication Health Monitoring** — LDAP replication failures cause authentication inconsistencies and stale directory data across sites.\n\nDocumented **Data sources**: LDAP replication logs, `ldapsearch` monitoring attributes (contextCSN). **App/TA** (typical add-on context): Scripted input, LDAP server logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ldap; **sourcetype**: openldap:syncrepl. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ldap, sourcetype=\"openldap:syncrepl\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, provider** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (provider × consumer health), Table (replication status), Timeline (failure events).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Replication Health Monitoring",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.2.5",
              "n": "Azure AD / Entra ID Conditional Access Policy Evaluation Failures",
              "c": "medium",
              "f": "intermediate",
              "v": "Policy conflicts causing access denials; helps fine-tune conditional access and reduce user friction.",
              "t": "Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`)",
              "d": "Azure AD Sign-in logs (conditionalAccessStatus, appliedConditionalAccessPolicies)",
              "q": "index=azure sourcetype=\"azure:aad:signin\"\n| where conditionalAccessStatus=\"failure\" OR conditionalAccessStatus=\"reportOnlyNotApplied\"\n| spath path=conditionalAccessPolicies{}\n| mvexpand conditionalAccessPolicies{} limit=500\n| spath input=conditionalAccessPolicies{} path=displayName\n| spath input=conditionalAccessPolicies{} path=result\n| where result=\"failure\" OR result=\"reportOnlyNotApplied\"\n| stats count by displayName, result\n| sort -count",
              "m": "Configure Splunk Add-on for Microsoft Cloud Services to ingest Entra ID sign-in logs via Graph API. Parse appliedConditionalAccessPolicies array for policy names and results. Alert on spikes in failures per policy. Track reportOnlyNotApplied for policy tuning. Correlate with userPrincipalName and appDisplayName to identify affected users and apps.",
              "z": "Bar chart (failures by policy), Table (blocked users with policy details), Line chart (failure rate trend), Pie chart (failures by application).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`).\n• Ensure the following data sources are available: Azure AD Sign-in logs (conditionalAccessStatus, appliedConditionalAccessPolicies).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk Add-on for Microsoft Cloud Services to ingest Entra ID sign-in logs via Graph API. Parse appliedConditionalAccessPolicies array for policy names and results. Alert on spikes in failures per policy. Track reportOnlyNotApplied for policy tuning. Correlate with userPrincipalName and appDisplayName to identify affected users and apps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:aad:signin\"\n| where conditionalAccessStatus=\"failure\" OR conditionalAccessStatus=\"reportOnlyNotApplied\"\n| spath path=conditionalAccessPolicies{}\n| mvexpand conditionalAccessPolicies{} limit=500\n| spath input=conditionalAccessPolicies{} path=displayName\n| spath input=conditionalAccessPolicies{} path=result\n| where result=\"failure\" OR result=\"reportOnlyNotApplied\"\n| stats count by displayName, result\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Azure AD / Entra ID Conditional Access Policy Evaluation Failures** — Policy conflicts causing access denials; helps fine-tune conditional access and reduce user friction.\n\nDocumented **Data sources**: Azure AD Sign-in logs (conditionalAccessStatus, appliedConditionalAccessPolicies). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:aad:signin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:aad:signin\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where conditionalAccessStatus=\"failure\" OR conditionalAccessStatus=\"reportOnlyNotApplied\"` — typically the threshold or rule expression for this monitoring goal.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Filters the current rows with `where result=\"failure\" OR result=\"reportOnlyNotApplied\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by displayName, result** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure AD / Entra ID Conditional Access Policy Evaluation Failures** — Policy conflicts causing access denials; helps fine-tune conditional access and reduce user friction.\n\nDocumented **Data sources**: Azure AD Sign-in logs (conditionalAccessStatus, appliedConditionalAccessPolicies). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failures by policy), Table (blocked users with policy details), Line chart (failure rate trend), Pie chart (failures by application).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Azure AD / Entra ID Conditional Access Policy Evaluation Failures",
              "mtype": [
                "Security",
                "Operations"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.2.6",
              "n": "LDAP Query Volume Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "Sudden spikes in LDAP searches may indicate reconnaissance, brute enumeration, or misbehaving applications hammering directory services.",
              "t": "LDAP access log parsing, `Splunk_TA_windows` (Directory Service 1644)",
              "d": "OpenLDAP access log (SEARCH count), AD DS expensive search / query stats",
              "q": "index=ldap sourcetype=\"openldap:access\" operation=\"SEARCH\"\n| bin _time span=15m\n| stats count by src, _time\n| eventstats median(count) as med by src\n| where count > med*10 AND count > 100\n| sort -count",
              "m": "Baseline searches per source per interval. Alert on statistical outliers. Correlate with known ETL jobs via lookup. On AD, combine with 1644 expensive search events.",
              "z": "Line chart (query volume by source), Table (spikes), Bar chart (top talkers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1087"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: LDAP access log parsing, `Splunk_TA_windows` (Directory Service 1644).\n• Ensure the following data sources are available: OpenLDAP access log (SEARCH count), AD DS expensive search / query stats.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline searches per source per interval. Alert on statistical outliers. Correlate with known ETL jobs via lookup. On AD, combine with 1644 expensive search events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ldap sourcetype=\"openldap:access\" operation=\"SEARCH\"\n| bin _time span=15m\n| stats count by src, _time\n| eventstats median(count) as med by src\n| where count > med*10 AND count > 100\n| sort -count\n```\n\nUnderstanding this SPL\n\n**LDAP Query Volume Anomalies** — Sudden spikes in LDAP searches may indicate reconnaissance, brute enumeration, or misbehaving applications hammering directory services.\n\nDocumented **Data sources**: OpenLDAP access log (SEARCH count), AD DS expensive search / query stats. **App/TA** (typical add-on context): LDAP access log parsing, `Splunk_TA_windows` (Directory Service 1644). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ldap; **sourcetype**: openldap:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ldap, sourcetype=\"openldap:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by src, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > med*10 AND count > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src span=15m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**LDAP Query Volume Anomalies** — Sudden spikes in LDAP searches may indicate reconnaissance, brute enumeration, or misbehaving applications hammering directory services.\n\nDocumented **Data sources**: OpenLDAP access log (SEARCH count), AD DS expensive search / query stats. **App/TA** (typical add-on context): LDAP access log parsing, `Splunk_TA_windows` (Directory Service 1644). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (query volume by source), Table (spikes), Bar chart (top talkers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — LDAP Query Volume Anomalies",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src span=15m | sort - count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.2.7",
              "n": "Bind Failure Rate Spikes",
              "c": "high",
              "f": "beginner",
              "v": "Elevated invalid credential rates often precede password spraying or application misconfiguration; complements per-event bind failure monitoring.",
              "t": "Syslog, LDAP server logs",
              "d": "OpenLDAP syslog (err=49), AD DS LDAP interface events",
              "q": "index=ldap sourcetype=\"syslog\" \"BIND\" (\"err=49\" OR \"data 52e\")\n| bin _time span=15m\n| stats count by src, _time\n| where count > 50\n| sort -count",
              "m": "Tune threshold to environment. Whitelist scanners and load balancers. Correlate with account lockouts and Entra hybrid sign-in failures if applicable.",
              "z": "Line chart (bind failure rate), Table (source IP, window count), Single value (spikes per day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1110",
                "T1110.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Syslog, LDAP server logs.\n• Ensure the following data sources are available: OpenLDAP syslog (err=49), AD DS LDAP interface events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune threshold to environment. Whitelist scanners and load balancers. Correlate with account lockouts and Entra hybrid sign-in failures if applicable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ldap sourcetype=\"syslog\" \"BIND\" (\"err=49\" OR \"data 52e\")\n| bin _time span=15m\n| stats count by src, _time\n| where count > 50\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Bind Failure Rate Spikes** — Elevated invalid credential rates often precede password spraying or application misconfiguration; complements per-event bind failure monitoring.\n\nDocumented **Data sources**: OpenLDAP syslog (err=49), AD DS LDAP interface events. **App/TA** (typical add-on context): Syslog, LDAP server logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ldap; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ldap, sourcetype=\"syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by src, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src span=15m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Bind Failure Rate Spikes** — Elevated invalid credential rates often precede password spraying or application misconfiguration; complements per-event bind failure monitoring.\n\nDocumented **Data sources**: OpenLDAP syslog (err=49), AD DS LDAP interface events. **App/TA** (typical add-on context): Syslog, LDAP server logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (bind failure rate), Table (source IP, window count), Single value (spikes per day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Bind Failure Rate Spikes",
              "mtype": [
                "Security",
                "Fault"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src span=15m | sort - count",
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.2.8",
              "n": "Active Directory Schema Modification Audit",
              "c": "critical",
              "f": "intermediate",
              "v": "Schema changes in AD (classes/attributes) are rare and high impact; complements generic LDAP schema logging for OpenLDAP/389.",
              "t": "`Splunk_TA_windows`",
              "d": "Security Event Log (5136 — directory service object modified under `CN=Schema,CN=Configuration`)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5136\n| search ObjectDN=\"*CN=Schema,CN=Configuration*\"\n| table _time, SubjectUserName, ObjectDN, AttributeLDAPDisplayName\n| sort -_time",
              "m": "Enable auditing on schema partition. Alert on any schema object add/modify. Require schema admin CAB approval for all changes.",
              "z": "Timeline (schema changes), Table (detail), Single value (changes per year).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1484"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Security Event Log (5136 — directory service object modified under `CN=Schema,CN=Configuration`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable auditing on schema partition. Alert on any schema object add/modify. Require schema admin CAB approval for all changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5136\n| search ObjectDN=\"*CN=Schema,CN=Configuration*\"\n| table _time, SubjectUserName, ObjectDN, AttributeLDAPDisplayName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Active Directory Schema Modification Audit** — Schema changes in AD (classes/attributes) are rare and high impact; complements generic LDAP schema logging for OpenLDAP/389.\n\nDocumented **Data sources**: Security Event Log (5136 — directory service object modified under `CN=Schema,CN=Configuration`). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Active Directory Schema Modification Audit**): table _time, SubjectUserName, ObjectDN, AttributeLDAPDisplayName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Active Directory Schema Modification Audit** — Schema changes in AD (classes/attributes) are rare and high impact; complements generic LDAP schema logging for OpenLDAP/389.\n\nDocumented **Data sources**: Security Event Log (5136 — directory service object modified under `CN=Schema,CN=Configuration`). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (schema changes), Table (detail), Single value (changes per year).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Active Directory Schema Modification Audit",
              "mtype": [
                "Security",
                "Configuration",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.2.9",
              "n": "LDAP Signing Enforcement",
              "c": "high",
              "f": "beginner",
              "v": "Unsigned LDAP binds expose credentials to interception. Tracking enforcement and bind failures ensures GPO and domain controller settings are effective.",
              "t": "`Splunk_TA_windows`",
              "d": "Directory Service event log (2886 — unsigned LDAP bind, 2887 — unsigned SASL)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Directory Service\" EventCode IN (2886,2887)\n| stats count by ComputerName, EventCode, Client_IP\n| where count > 10\n| sort -count",
              "m": "Enable LDAP signing requirements via GPO. Alert on sustained unsigned binds from specific apps; work with owners to enable signing/TLS. Do not alert on one-off legacy until remediated.",
              "z": "Table (clients with unsigned binds), Bar chart (by subnet), Line chart (trend toward zero).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1557"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Directory Service event log (2886 — unsigned LDAP bind, 2887 — unsigned SASL).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable LDAP signing requirements via GPO. Alert on sustained unsigned binds from specific apps; work with owners to enable signing/TLS. Do not alert on one-off legacy until remediated.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Directory Service\" EventCode IN (2886,2887)\n| stats count by ComputerName, EventCode, Client_IP\n| where count > 10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**LDAP Signing Enforcement** — Unsigned LDAP binds expose credentials to interception. Tracking enforcement and bind failures ensures GPO and domain controller settings are effective.\n\nDocumented **Data sources**: Directory Service event log (2886 — unsigned LDAP bind, 2887 — unsigned SASL). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Directory Service. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Directory Service\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ComputerName, EventCode, Client_IP** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**LDAP Signing Enforcement** — Unsigned LDAP binds expose credentials to interception. Tracking enforcement and bind failures ensures GPO and domain controller settings are effective.\n\nDocumented **Data sources**: Directory Service event log (2886 — unsigned LDAP bind, 2887 — unsigned SASL). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (clients with unsigned binds), Bar chart (by subnet), Line chart (trend toward zero).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — LDAP Signing Enforcement",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.2.10",
              "n": "LDAPS Certificate Validation",
              "c": "high",
              "f": "intermediate",
              "v": "LDAPS clients failing TLS handshakes or cert validation indicate expired CAs, hostname mismatches, or MITM attempts.",
              "t": "Windows Schannel, OpenLDAP TLS logs",
              "d": "System log Schannel errors (36870, 36866), slapd TLS errors",
              "q": "index=wineventlog sourcetype=\"WinEventLog:System\" SourceName=\"Schannel\" EventCode IN (36870,36866)\n| stats count by ComputerName, EventCode, Message\n| sort -count",
              "m": "Forward Schannel and LDAP server TLS logs. Map to cert renewal runbook. Alert on spike in handshake failures after cert rotation.",
              "z": "Table (hosts with TLS errors), Timeline, Single value (LDAPS errors 24h).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1557"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows Schannel, OpenLDAP TLS logs.\n• Ensure the following data sources are available: System log Schannel errors (36870, 36866), slapd TLS errors.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Schannel and LDAP server TLS logs. Map to cert renewal runbook. Alert on spike in handshake failures after cert rotation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:System\" SourceName=\"Schannel\" EventCode IN (36870,36866)\n| stats count by ComputerName, EventCode, Message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**LDAPS Certificate Validation** — LDAPS clients failing TLS handshakes or cert validation indicate expired CAs, hostname mismatches, or MITM attempts.\n\nDocumented **Data sources**: System log Schannel errors (36870, 36866), slapd TLS errors. **App/TA** (typical add-on context): Windows Schannel, OpenLDAP TLS logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:System. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:System\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ComputerName, EventCode, Message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hosts with TLS errors), Timeline, Single value (LDAPS errors 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — LDAPS Certificate Validation",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.2.11",
              "n": "LDAP Channel Binding Status",
              "c": "high",
              "f": "intermediate",
              "v": "Channel binding tokens for LDAP/SASL mitigate relay attacks; monitoring confirms clients meet `ldapEnforceChannelBinding` policy.",
              "t": "`Splunk_TA_windows`",
              "d": "Directory Service (3039 — rejected bind missing channel binding tokens when required)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Directory Service\" EventCode=3039\n| stats count by ComputerName, Client_IP\n| where count > 5\n| sort -count",
              "m": "Phase enforcement with reporting mode first. Identify legacy apps from Client_IP. Alert when moving to enforced mode and failures persist.",
              "z": "Table (clients failing channel binding), Bar chart (by application owner), Line chart (remediation trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1556"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Directory Service (3039 — rejected bind missing channel binding tokens when required).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPhase enforcement with reporting mode first. Identify legacy apps from Client_IP. Alert when moving to enforced mode and failures persist.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Directory Service\" EventCode=3039\n| stats count by ComputerName, Client_IP\n| where count > 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**LDAP Channel Binding Status** — Channel binding tokens for LDAP/SASL mitigate relay attacks; monitoring confirms clients meet `ldapEnforceChannelBinding` policy.\n\nDocumented **Data sources**: Directory Service (3039 — rejected bind missing channel binding tokens when required). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Directory Service. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Directory Service\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ComputerName, Client_IP** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**LDAP Channel Binding Status** — Channel binding tokens for LDAP/SASL mitigate relay attacks; monitoring confirms clients meet `ldapEnforceChannelBinding` policy.\n\nDocumented **Data sources**: Directory Service (3039 — rejected bind missing channel binding tokens when required). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (clients failing channel binding), Bar chart (by application owner), Line chart (remediation trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — LDAP Channel Binding Status",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.2.12",
              "n": "LDAP Referral Chaining Monitoring",
              "c": "medium",
              "f": "advanced",
              "v": "Excessive or looping referrals degrade auth and may indicate misconfigured base DNs or cross-domain abuse.",
              "t": "OpenLDAP / 389 DS access logs, AD debug (optional)",
              "d": "LDAP access log lines containing `REFERRAL` or `v3 referral`",
              "q": "index=ldap sourcetype=\"openldap:access\" (message=\"REFERRAL\" OR like(_raw,\"%referral%\"))\n| stats count, values(dn) as refs by src, base\n| where count > 20\n| sort -count",
              "m": "Parse referral responses in access logs. Baseline per app. Alert on referral storms or new referral targets. Correlate with GSSAPI/SASL cross-realm issues in hybrid setups.",
              "z": "Table (referral chains), Line chart (referral volume), Bar chart (by base DN).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenLDAP / 389 DS access logs, AD debug (optional).\n• Ensure the following data sources are available: LDAP access log lines containing `REFERRAL` or `v3 referral`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse referral responses in access logs. Baseline per app. Alert on referral storms or new referral targets. Correlate with GSSAPI/SASL cross-realm issues in hybrid setups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ldap sourcetype=\"openldap:access\" (message=\"REFERRAL\" OR like(_raw,\"%referral%\"))\n| stats count, values(dn) as refs by src, base\n| where count > 20\n| sort -count\n```\n\nUnderstanding this SPL\n\n**LDAP Referral Chaining Monitoring** — Excessive or looping referrals degrade auth and may indicate misconfigured base DNs or cross-domain abuse.\n\nDocumented **Data sources**: LDAP access log lines containing `REFERRAL` or `v3 referral`. **App/TA** (typical add-on context): OpenLDAP / 389 DS access logs, AD debug (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ldap; **sourcetype**: openldap:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ldap, sourcetype=\"openldap:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, base** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**LDAP Referral Chaining Monitoring** — Excessive or looping referrals degrade auth and may indicate misconfigured base DNs or cross-domain abuse.\n\nDocumented **Data sources**: LDAP access log lines containing `REFERRAL` or `v3 referral`. **App/TA** (typical add-on context): OpenLDAP / 389 DS access logs, AD debug (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (referral chains), Line chart (referral volume), Bar chart (by base DN).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — LDAP Referral Chaining Monitoring",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.1,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 11,
            "none": 0
          }
        },
        {
          "i": "9.3",
          "n": "Identity Providers (IdP) & SSO",
          "u": [
            {
              "i": "9.3.1",
              "n": "MFA Challenge Failure Rate",
              "c": "high",
              "f": "intermediate",
              "v": "High MFA failure rates indicate user friction, potential phishing, or MFA fatigue attacks. Monitoring supports both security and user experience.",
              "t": "`Splunk_TA_okta`, `Cisco Security Cloud` app (Splunkbase, replaces Duo Splunk Connector)",
              "d": "Okta system log, Duo authentication log",
              "q": "index=okta sourcetype=\"OktaIM2:log\" eventType=\"user.authentication.auth_via_mfa\"\n| stats count(eval(outcome.result=\"FAILURE\")) as failures, count(eval(outcome.result=\"SUCCESS\")) as successes by actor.displayName\n| eval fail_rate=round(failures/(failures+successes)*100,1)\n| where fail_rate > 20",
              "m": "Ingest IdP logs via API. Track MFA success/failure rates per user and per factor type. Alert on high failure rates (>20% per user). Detect MFA fatigue patterns (rapid repeated pushes). Report on factor type distribution.",
              "z": "Bar chart (failure rate by user), Pie chart (factor type distribution), Line chart (MFA success rate trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [Cisco Security Cloud](https://splunkbase.splunk.com/app/7404), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1621"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`, `Cisco Security Cloud` app (Splunkbase, replaces Duo Splunk Connector).\n• Ensure the following data sources are available: Okta system log, Duo authentication log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest IdP logs via API. Track MFA success/failure rates per user and per factor type. Alert on high failure rates (>20% per user). Detect MFA fatigue patterns (rapid repeated pushes). Report on factor type distribution.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"user.authentication.auth_via_mfa\"\n| stats count(eval(outcome.result=\"FAILURE\")) as failures, count(eval(outcome.result=\"SUCCESS\")) as successes by actor.displayName\n| eval fail_rate=round(failures/(failures+successes)*100,1)\n| where fail_rate > 20\n```\n\nUnderstanding this SPL\n\n**MFA Challenge Failure Rate** — High MFA failure rates indicate user friction, potential phishing, or MFA fatigue attacks. Monitoring supports both security and user experience.\n\nDocumented **Data sources**: Okta system log, Duo authentication log. **App/TA** (typical add-on context): `Splunk_TA_okta`, `Cisco Security Cloud` app (Splunkbase, replaces Duo Splunk Connector). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by actor.displayName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_rate > 20` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**MFA Challenge Failure Rate** — High MFA failure rates indicate user friction, potential phishing, or MFA fatigue attacks. Monitoring supports both security and user experience.\n\nDocumented **Data sources**: Okta system log, Duo authentication log. **App/TA** (typical add-on context): `Splunk_TA_okta`, `Cisco Security Cloud` app (Splunkbase, replaces Duo Splunk Connector). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failure rate by user), Pie chart (factor type distribution), Line chart (MFA success rate trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track when extra sign-in steps fail or spike so we can see attacks, misconfigured apps, or users who need help.",
              "mtype": [
                "Security",
                "Fault"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src span=1h\n| where count > 10",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "Cyber Essentials",
                  "v": "Montpellier (2025)",
                  "cl": "CE.SAU.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that Cyber Essentials CE.SAU.1 (Secure authentication & access) is enforced — Splunk UC-9.3.1: MFA Challenge Failure Rate.",
                  "ea": "Saved search 'UC-9.3.1' running on Okta system log, Duo authentication log, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ncsc.gov.uk/cyberessentials/overview"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "01.b",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HITRUST 01.b (User access management) is enforced — Splunk UC-9.3.1: MFA Challenge Failure Rate.",
                  "ea": "Saved search 'UC-9.3.1' running on Okta system log, Duo authentication log, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                },
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-9.3.1: MFA Challenge Failure Rate.",
                  "ea": "Saved search 'UC-9.3.1' running on Okta system log, Duo authentication log, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2015/2366/oj"
                }
              ],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.2",
              "n": "Impossible Travel Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Authentication from two geographically distant locations within an impossibly short timeframe strongly indicates credential compromise.",
              "t": "`Splunk_TA_okta`, custom correlation",
              "d": "IdP sign-in logs with IP geolocation",
              "q": "index=okta sourcetype=\"OktaIM2:log\" eventType=\"user.session.start\"\n| iplocation client.ipAddress\n| sort actor.alternateId, _time\n| streamstats window=2 earliest(_time) as prev_time, earliest(lat) as prev_lat, earliest(lon) as prev_lon by actor.alternateId\n| eval distance_km=round(6371*2*asin(sqrt(pow(sin((lat-prev_lat)*pi()/360),2)+cos(lat*pi()/180)*cos(prev_lat*pi()/180)*pow(sin((lon-prev_lon)*pi()/360),2))),0) , time_diff_hr=((_time-prev_time)/3600)\n| where distance_km > 500 AND time_diff_hr < 2",
              "m": "Ingest IdP sign-in logs. Enrich with GeoIP. Calculate distance and time between consecutive logins per user. Alert when distance/time ratio is impossible (>500km in <2 hours). Whitelist VPN exit IPs and known travel patterns.",
              "z": "Geo map (sign-in locations with lines), Table (impossible travel events), Timeline (flagged events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`, custom correlation.\n• Ensure the following data sources are available: IdP sign-in logs with IP geolocation.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest IdP sign-in logs. Enrich with GeoIP. Calculate distance and time between consecutive logins per user. Alert when distance/time ratio is impossible (>500km in <2 hours). Whitelist VPN exit IPs and known travel patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"user.session.start\"\n| iplocation client.ipAddress\n| sort actor.alternateId, _time\n| streamstats window=2 earliest(_time) as prev_time, earliest(lat) as prev_lat, earliest(lon) as prev_lon by actor.alternateId\n| eval distance_km=round(6371*2*asin(sqrt(pow(sin((lat-prev_lat)*pi()/360),2)+cos(lat*pi()/180)*cos(prev_lat*pi()/180)*pow(sin((lon-prev_lon)*pi()/360),2))),0) , time_diff_hr=((_time-prev_time)/3600)\n| where distance_km > 500 AND time_diff_hr < 2\n```\n\nUnderstanding this SPL\n\n**Impossible Travel Detection** — Authentication from two geographically distant locations within an impossibly short timeframe strongly indicates credential compromise.\n\nDocumented **Data sources**: IdP sign-in logs with IP geolocation. **App/TA** (typical add-on context): `Splunk_TA_okta`, custom correlation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Impossible Travel Detection**): iplocation client.ipAddress\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by actor.alternateId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **distance_km** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where distance_km > 500 AND time_diff_hr < 2` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count dc(Authentication.src) as src_count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\"\n  by Authentication.user span=1h\n| where src_count > 2\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Impossible Travel Detection** — Authentication from two geographically distant locations within an impossibly short timeframe strongly indicates credential compromise.\n\nDocumented **Data sources**: IdP sign-in logs with IP geolocation. **App/TA** (typical add-on context): `Splunk_TA_okta`, custom correlation. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where src_count > 2` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Geo map (sign-in locations with lines), Table (impossible travel events), Timeline (flagged events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We notice sign-ins that do not line up with how fast a person could move so we can catch stolen sessions or account sharing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count dc(Authentication.src) as src_count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\"\n  by Authentication.user span=1h\n| where src_count > 2",
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.3",
              "n": "Token Anomaly Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Token replay attacks bypass authentication entirely. Detection prevents persistent unauthorized access.",
              "t": "`Splunk_TA_okta`, IdP audit logs",
              "d": "IdP token issuance logs, application token validation logs",
              "q": "index=okta sourcetype=\"OktaIM2:log\" eventType=\"app.oauth2.token.grant\"\n| stats count, dc(client.ipAddress) as unique_ips by actor.alternateId, target{}.displayName\n| where unique_ips > 3",
              "m": "Monitor token issuance and usage patterns. Alert on tokens used from multiple IPs (potential replay). Track token lifetime and refresh patterns. Detect anomalous token requests outside normal application patterns.",
              "z": "Table (anomalous token usage), Timeline (suspicious events), Bar chart (tokens by application).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1528",
                "T1550.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`, IdP audit logs.\n• Ensure the following data sources are available: IdP token issuance logs, application token validation logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor token issuance and usage patterns. Alert on tokens used from multiple IPs (potential replay). Track token lifetime and refresh patterns. Detect anomalous token requests outside normal application patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"app.oauth2.token.grant\"\n| stats count, dc(client.ipAddress) as unique_ips by actor.alternateId, target{}.displayName\n| where unique_ips > 3\n```\n\nUnderstanding this SPL\n\n**Token Anomaly Detection** — Token replay attacks bypass authentication entirely. Detection prevents persistent unauthorized access.\n\nDocumented **Data sources**: IdP token issuance logs, application token validation logs. **App/TA** (typical add-on context): `Splunk_TA_okta`, IdP audit logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by actor.alternateId, target{}.displayName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_ips > 3` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count dc(Authentication.src) as src_count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\"\n  by Authentication.user Authentication.app span=1h\n| where src_count > 3\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Token Anomaly Detection** — Token replay attacks bypass authentication entirely. Detection prevents persistent unauthorized access.\n\nDocumented **Data sources**: IdP token issuance logs, application token validation logs. **App/TA** (typical add-on context): `Splunk_TA_okta`, IdP audit logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where src_count > 3` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalous token usage), Timeline (suspicious events), Bar chart (tokens by application).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Token Anomaly Detection",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count dc(Authentication.src) as src_count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\"\n  by Authentication.user Authentication.app span=1h\n| where src_count > 3",
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.4",
              "n": "Application Access Patterns",
              "c": "medium",
              "f": "beginner",
              "v": "Monitors which applications users access for license optimization and detects anomalous access indicating potential compromise.",
              "t": "`Splunk_TA_okta`",
              "d": "IdP application access logs",
              "q": "index=okta sourcetype=\"OktaIM2:log\" eventType=\"user.authentication.sso\"\n| stats dc(actor.alternateId) as unique_users, count as total_access by target{}.displayName\n| sort -unique_users",
              "m": "Track SSO events per application. Build user-application access matrix. Detect users accessing applications outside their normal pattern. Report on application usage for license optimization and access reviews.",
              "z": "Bar chart (top applications by user count), Table (application usage summary), Heatmap (user × application access).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1528"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`.\n• Ensure the following data sources are available: IdP application access logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack SSO events per application. Build user-application access matrix. Detect users accessing applications outside their normal pattern. Report on application usage for license optimization and access reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"user.authentication.sso\"\n| stats dc(actor.alternateId) as unique_users, count as total_access by target{}.displayName\n| sort -unique_users\n```\n\nUnderstanding this SPL\n\n**Application Access Patterns** — Monitors which applications users access for license optimization and detects anomalous access indicating potential compromise.\n\nDocumented **Data sources**: IdP application access logs. **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by target{}.displayName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count dc(Authentication.user) as user_count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\"\n  by Authentication.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Application Access Patterns** — Monitors which applications users access for license optimization and detects anomalous access indicating potential compromise.\n\nDocumented **Data sources**: IdP application access logs. **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top applications by user count), Table (application usage summary), Heatmap (user × application access).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Application Access Patterns",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count dc(Authentication.user) as user_count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\"\n  by Authentication.app span=1h\n| sort -count",
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.5",
              "n": "IdP Availability Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "IdP outage blocks all SSO authentication across the organization. Rapid detection enables failover and communication.",
              "t": "Scripted input (HTTP check), `Splunk_TA_okta`",
              "d": "IdP status API, synthetic monitoring, Okta system health",
              "q": "index=synthetic sourcetype=\"http_check\" target=\"*.okta.com\"\n| timechart span=1m avg(response_time_ms) as rt, count(eval(status_code>=500)) as errors\n| where rt > 5000 OR errors > 0",
              "m": "Set up synthetic HTTP checks against IdP login endpoints every minute. Track response time and availability. Alert on response time >5 seconds or any 5xx errors. Subscribe to vendor status page updates as secondary source.",
              "z": "Single value (IdP uptime %), Line chart (response time), Status indicator (available/degraded/down).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Scripted input (HTTP check), `Splunk_TA_okta`.\n• Ensure the following data sources are available: IdP status API, synthetic monitoring, Okta system health.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSet up synthetic HTTP checks against IdP login endpoints every minute. Track response time and availability. Alert on response time >5 seconds or any 5xx errors. Subscribe to vendor status page updates as secondary source.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=synthetic sourcetype=\"http_check\" target=\"*.okta.com\"\n| timechart span=1m avg(response_time_ms) as rt, count(eval(status_code>=500)) as errors\n| where rt > 5000 OR errors > 0\n```\n\nUnderstanding this SPL\n\n**IdP Availability Monitoring** — IdP outage blocks all SSO authentication across the organization. Rapid detection enables failover and communication.\n\nDocumented **Data sources**: IdP status API, synthetic monitoring, Okta system health. **App/TA** (typical add-on context): Scripted input (HTTP check), `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: synthetic; **sourcetype**: http_check. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=synthetic, sourcetype=\"http_check\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1m** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where rt > 5000 OR errors > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (IdP uptime %), Line chart (response time), Status indicator (available/degraded/down).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — IdP Availability Monitoring",
              "wv": "crawl",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "both",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "9.3.6",
              "n": "Phishing-Resistant MFA Adoption",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks migration from phishable factors (SMS, phone) to phishing-resistant factors (FIDO2, WebAuthn). Supports zero-trust maturity goals.",
              "t": "`Splunk_TA_okta`, IdP MFA enrollment data",
              "d": "IdP MFA enrollment logs, factor type metadata",
              "q": "index=okta sourcetype=\"OktaIM2:log\" eventType=\"user.authentication.auth_via_mfa\"\n| stats count by debugContext.debugData.factor\n| eval factor_type=case(match(factor,\"FIDO\"),\"phishing_resistant\", match(factor,\"push\"),\"medium\", 1=1,\"phishable\")\n| stats sum(count) as total by factor_type",
              "m": "Track MFA factor types used in authentication events. Classify as phishing-resistant (FIDO2, WebAuthn) vs phishable (SMS, voice, email). Report adoption percentages. Set organizational targets for phishing-resistant adoption.",
              "z": "Pie chart (factor type distribution), Line chart (phishing-resistant adoption trend), Table (users still on SMS).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553)",
              "mitre": [
                "T1621"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`, IdP MFA enrollment data.\n• Ensure the following data sources are available: IdP MFA enrollment logs, factor type metadata.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack MFA factor types used in authentication events. Classify as phishing-resistant (FIDO2, WebAuthn) vs phishable (SMS, voice, email). Report adoption percentages. Set organizational targets for phishing-resistant adoption.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"user.authentication.auth_via_mfa\"\n| stats count by debugContext.debugData.factor\n| eval factor_type=case(match(factor,\"FIDO\"),\"phishing_resistant\", match(factor,\"push\"),\"medium\", 1=1,\"phishable\")\n| stats sum(count) as total by factor_type\n```\n\nUnderstanding this SPL\n\n**Phishing-Resistant MFA Adoption** — Tracks migration from phishable factors (SMS, phone) to phishing-resistant factors (FIDO2, WebAuthn). Supports zero-trust maturity goals.\n\nDocumented **Data sources**: IdP MFA enrollment logs, factor type metadata. **App/TA** (typical add-on context): `Splunk_TA_okta`, IdP MFA enrollment data. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by debugContext.debugData.factor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **factor_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by factor_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (factor type distribution), Line chart (phishing-resistant adoption trend), Table (users still on SMS).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Phishing-Resistant MFA Adoption",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.7",
              "n": "Session Hijacking Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Sessions used from multiple locations simultaneously indicate session token theft. Detection prevents ongoing unauthorized access.",
              "t": "`Splunk_TA_okta`, IdP session logs",
              "d": "IdP session activity logs, application session logs",
              "q": "index=okta sourcetype=\"OktaIM2:log\"\n| stats dc(client.ipAddress) as unique_ips, values(client.ipAddress) as ips by authenticationContext.externalSessionId, actor.alternateId\n| where unique_ips > 2\n| table actor.alternateId, authenticationContext.externalSessionId, unique_ips, ips",
              "m": "Track session IDs across events. Alert when a single session is used from multiple IP addresses simultaneously (excluding known VPN/proxy IPs). Correlate with user agent changes for additional confidence.",
              "z": "Table (hijacked sessions), Timeline (suspicious session events), Bar chart (users with multi-IP sessions).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1539",
                "T1550.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`, IdP session logs.\n• Ensure the following data sources are available: IdP session activity logs, application session logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack session IDs across events. Alert when a single session is used from multiple IP addresses simultaneously (excluding known VPN/proxy IPs). Correlate with user agent changes for additional confidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\"\n| stats dc(client.ipAddress) as unique_ips, values(client.ipAddress) as ips by authenticationContext.externalSessionId, actor.alternateId\n| where unique_ips > 2\n| table actor.alternateId, authenticationContext.externalSessionId, unique_ips, ips\n```\n\nUnderstanding this SPL\n\n**Session Hijacking Detection** — Sessions used from multiple locations simultaneously indicate session token theft. Detection prevents ongoing unauthorized access.\n\nDocumented **Data sources**: IdP session activity logs, application session logs. **App/TA** (typical add-on context): `Splunk_TA_okta`, IdP session logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by authenticationContext.externalSessionId, actor.alternateId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_ips > 2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Session Hijacking Detection**): table actor.alternateId, authenticationContext.externalSessionId, unique_ips, ips\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` dc(Authentication.src) as src_count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\"\n  by Authentication.user span=1h\n| where src_count > 3\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Session Hijacking Detection** — Sessions used from multiple locations simultaneously indicate session token theft. Detection prevents ongoing unauthorized access.\n\nDocumented **Data sources**: IdP session activity logs, application session logs. **App/TA** (typical add-on context): `Splunk_TA_okta`, IdP session logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where src_count > 3` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hijacked sessions), Timeline (suspicious session events), Bar chart (users with multi-IP sessions).",
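              "_example": "Optional hedged sketch: Step 1 suggests correlating with user-agent changes for additional confidence. Assuming the Okta System Log exposes `client.userAgent.rawUserAgent` alongside the session fields already used, a combined variant could be:\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\"\n| stats dc(client.ipAddress) as unique_ips, dc(client.userAgent.rawUserAgent) as unique_uas by authenticationContext.externalSessionId, actor.alternateId\n| where unique_ips > 2 AND unique_uas > 1\n```\n\nExclude known VPN/proxy egress ranges via a lookup before alerting, as the implementation note advises.",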
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Session Hijacking Detection",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` dc(Authentication.src) as src_count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\"\n  by Authentication.user span=1h\n| where src_count > 3",
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.8",
              "n": "SAML Assertion Replay Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Replayed SAML assertions can grant access without fresh authentication. Correlating assertion IDs and NotOnOrAfter windows catches reuse.",
              "t": "IdP logs, application SAML trace (e.g., Shibboleth, Okta, ADFS)",
              "d": "SAML response logs with `AssertionID`, `InResponseTo`, `NotOnOrAfter`",
              "q": "index=saml sourcetype=\"saml:assertion\"\n| stats count by assertion_id, sp_entity_id\n| where count > 1\n| table assertion_id, sp_entity_id, count",
              "m": "Ingest assertion IDs from IdP or SP debug logs (privacy-safe hashing if needed). Alert on duplicate assertion_id for same SP. Enforce short assertion lifetimes at IdP.",
              "z": "Table (duplicate assertions), Timeline, Single value (replay attempts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1550.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IdP logs, application SAML trace (e.g., Shibboleth, Okta, ADFS).\n• Ensure the following data sources are available: SAML response logs with `AssertionID`, `InResponseTo`, `NotOnOrAfter`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest assertion IDs from IdP or SP debug logs (privacy-safe hashing if needed). Alert on duplicate assertion_id for same SP. Enforce short assertion lifetimes at IdP.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=saml sourcetype=\"saml:assertion\"\n| stats count by assertion_id, sp_entity_id\n| where count > 1\n| table assertion_id, sp_entity_id, count\n```\n\nUnderstanding this SPL\n\n**SAML Assertion Replay Detection** — Replayed SAML assertions can grant access without fresh authentication. Correlating assertion IDs and NotOnOrAfter windows catches reuse.\n\nDocumented **Data sources**: SAML response logs with `AssertionID`, `InResponseTo`, `NotOnOrAfter`. **App/TA** (typical add-on context): IdP logs, application SAML trace (e.g., Shibboleth, Okta, ADFS). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: saml; **sourcetype**: saml:assertion. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=saml, sourcetype=\"saml:assertion\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by assertion_id, sp_entity_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SAML Assertion Replay Detection**): table assertion_id, sp_entity_id, count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SAML Assertion Replay Detection** — Replayed SAML assertions can grant access without fresh authentication. Correlating assertion IDs and NotOnOrAfter windows catches reuse.\n\nDocumented **Data sources**: SAML response logs with `AssertionID`, `InResponseTo`, `NotOnOrAfter`. **App/TA** (typical add-on context): IdP logs, application SAML trace (e.g., Shibboleth, Okta, ADFS). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (duplicate assertions), Timeline, Single value (replay attempts).",
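              "_example": "Optional hedged sketch: the data sources include `NotOnOrAfter`, which the documented search does not use. Assuming that field is extracted in ISO-8601 UTC form, reuse after expiry can be flagged alongside duplicates:\n\n```spl\nindex=saml sourcetype=\"saml:assertion\"\n| eval used_after_expiry=if(_time > strptime(NotOnOrAfter,\"%Y-%m-%dT%H:%M:%SZ\"), 1, 0)\n| stats count, max(used_after_expiry) as expired_use by assertion_id, sp_entity_id\n| where count > 1 OR expired_use=1\n```\n\nThe timestamp format string is an assumption; match it to the SAML trace your SP/IdP actually emits.",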
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — SAML Assertion Replay Detection",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "okta"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.9",
              "n": "OAuth Token Abuse",
              "c": "critical",
              "f": "intermediate",
              "v": "Excessive refresh grants, scope expansion, or token use from new ASNs indicates stolen refresh tokens or malicious OAuth clients.",
              "t": "`Splunk_TA_okta`, Entra sign-in + Graph audit, API gateway logs",
              "d": "`app.oauth2.token.grant`, Entra `TokenIssuance` / `Update application` (consent)",
              "q": "index=okta sourcetype=\"OktaIM2:log\" eventType=\"app.oauth2.token.grant\"\n| stats count, dc(client.ipAddress) as ips by actor.alternateId, client_id\n| where count > 200 OR ips > 5\n| sort -count",
              "m": "Baseline grants per user and client. Alert on burst refresh or grants from many IPs. Revoke client on anomaly. Mirror logic for `azure:aad:signin` with `tokenIssuerType`.",
              "z": "Table (abusive clients), Line chart (grants over time), Bar chart (by client_id).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1528",
                "T1550.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`, Entra sign-in + Graph audit, API gateway logs.\n• Ensure the following data sources are available: `app.oauth2.token.grant`, Entra `TokenIssuance` / `Update application` (consent).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline grants per user and client. Alert on burst refresh or grants from many IPs. Revoke client on anomaly. Mirror logic for `azure:aad:signin` with `tokenIssuerType`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"app.oauth2.token.grant\"\n| stats count, dc(client.ipAddress) as ips by actor.alternateId, client_id\n| where count > 200 OR ips > 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**OAuth Token Abuse** — Excessive refresh grants, scope expansion, or token use from new ASNs indicates stolen refresh tokens or malicious OAuth clients.\n\nDocumented **Data sources**: `app.oauth2.token.grant`, Entra `TokenIssuance` / `Update application` (consent). **App/TA** (typical add-on context): `Splunk_TA_okta`, Entra sign-in + Graph audit, API gateway logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by actor.alternateId, client_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 200 OR ips > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OAuth Token Abuse** — Excessive refresh grants, scope expansion, or token use from new ASNs indicates stolen refresh tokens or malicious OAuth clients.\n\nDocumented **Data sources**: `app.oauth2.token.grant`, Entra `TokenIssuance` / `Update application` (consent). **App/TA** (typical add-on context): `Splunk_TA_okta`, Entra sign-in + Graph audit, API gateway logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (abusive clients), Line chart (grants over time), Bar chart (by client_id).",
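              "_example": "Optional hedged sketch: Step 1 says to baseline grants per user and client, which the fixed thresholds above do not do. A minimal per-client deviation check, assuming hourly granularity is appropriate for your tenant, could be:\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"app.oauth2.token.grant\"\n| bin _time span=1h\n| stats count by _time, client_id\n| eventstats avg(count) as avg_grants, stdev(count) as sd_grants by client_id\n| where count > avg_grants + 3*sd_grants\n```\n\nRun it over at least a week so the baseline is meaningful, and keep the static `count > 200 OR ips > 5` rule as a backstop.",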
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — OAuth Token Abuse",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "m365",
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.10",
              "n": "SSO Session Hijacking Indicators",
              "c": "critical",
              "f": "intermediate",
              "v": "Complements session ID correlation with user-agent flips, ASN changes mid-session, and impossible concurrent SSO from IdP telemetry.",
              "t": "`Splunk_TA_okta`, Entra sign-in logs",
              "d": "IdP `user.authentication.sso` with session correlation ID, device fingerprint fields",
              "q": "index=okta sourcetype=\"OktaIM2:log\" eventType=\"user.authentication.sso\"\n| transaction authenticationContext.externalSessionId maxpause=300 maxevents=50\n| eval ua_change=if(mvcount(client.userAgent.rawUserAgent)>2,1,0)\n| where ua_change=1\n| table authenticationContext.externalSessionId, actor.alternateId, client.userAgent.rawUserAgent",
              "m": "Flag sessions with multiple user agents or countries within short windows. Tune for corporate VPN that rotates egress. Pair with UC-9.3.7 for IP-based hijack detection.",
              "z": "Table (suspicious sessions), Timeline, Bar chart (sessions with UA churn).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1539",
                "T1550.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`, Entra sign-in logs.\n• Ensure the following data sources are available: IdP `user.authentication.sso` with session correlation ID, device fingerprint fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFlag sessions with multiple user agents or countries within short windows. Tune for corporate VPN that rotates egress. Pair with UC-9.3.7 for IP-based hijack detection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"user.authentication.sso\"\n| transaction authenticationContext.externalSessionId maxpause=300 maxevents=50\n| eval ua_change=if(mvcount(client.userAgent.rawUserAgent)>2,1,0)\n| where ua_change=1\n| table authenticationContext.externalSessionId, actor.alternateId, client.userAgent.rawUserAgent\n```\n\nUnderstanding this SPL\n\n**SSO Session Hijacking Indicators** — Complements session ID correlation with user-agent flips, ASN changes mid-session, and impossible concurrent SSO from IdP telemetry.\n\nDocumented **Data sources**: IdP `user.authentication.sso` with session correlation ID, device fingerprint fields. **App/TA** (typical add-on context): `Splunk_TA_okta`, Entra sign-in logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• `eval` defines or adjusts **ua_change** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ua_change=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SSO Session Hijacking Indicators**): table authenticationContext.externalSessionId, actor.alternateId, client.userAgent.rawUserAgent\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SSO Session Hijacking Indicators** — Complements session ID correlation with user-agent flips, ASN changes mid-session, and impossible concurrent SSO from IdP telemetry.\n\nDocumented **Data sources**: IdP `user.authentication.sso` with session correlation ID, device fingerprint fields. **App/TA** (typical add-on context): `Splunk_TA_okta`, Entra sign-in logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious sessions), Timeline, Bar chart (sessions with UA churn).",
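              "_example": "Optional hedged sketch: the walkthrough notes that `transaction` needs bounding for memory. A `stats`-based variant avoids it entirely, assuming the fields already used plus `client.geographicalContext.country` from the Okta System Log:\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"user.authentication.sso\"\n| stats dc(client.userAgent.rawUserAgent) as unique_uas, dc(client.geographicalContext.country) as countries by authenticationContext.externalSessionId, actor.alternateId\n| where unique_uas > 1 OR countries > 1\n```\n\nThis drops the strict 300-second pause semantics; add `bin _time span=5m` and include `_time` in the by-clause if you need a windowed approximation.",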
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — SSO Session Hijacking Indicators",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "m365",
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.11",
              "n": "Federated Trust Modifications",
              "c": "critical",
              "f": "intermediate",
              "v": "Adding SAML/OIDC federation to new domains or apps expands blast radius; auditing trust metadata changes is essential.",
              "t": "`Splunk_TA_microsoft-cloudservices`, `Splunk_TA_okta`",
              "d": "Entra `Add federation to domain`, Okta `trustedOrigin.*` / `idp.*` lifecycle events",
              "q": "index=azure sourcetype=\"azure:aad:audit\" activityDisplayName=\"Add external user\"\n   OR activityDisplayName=\"Add federation to domain\"\n| table _time, initiatedBy.user.userPrincipalName, targetResources{}.displayName, activityDisplayName\n| sort -_time",
              "m": "Alert on new federation partners, domain verification, or IdP metadata uploads. Require security review for new trust relationships.",
              "z": "Timeline (trust changes), Table (actor, target), Single value (new trusts per quarter).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1484.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_okta`.\n• Ensure the following data sources are available: Entra `Add federation to domain`, Okta `trustedOrigin.*` / `idp.*` lifecycle events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on new federation partners, domain verification, or IdP metadata uploads. Require security review for new trust relationships.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:aad:audit\" activityDisplayName=\"Add external user\"\n   OR activityDisplayName=\"Add federation to domain\"\n| table _time, initiatedBy.user.userPrincipalName, targetResources{}.displayName, activityDisplayName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Federated Trust Modifications** — Adding SAML/OIDC federation to new domains or apps expands blast radius; auditing trust metadata changes is essential.\n\nDocumented **Data sources**: Entra `Add federation to domain`, Okta `trustedOrigin.*` / `idp.*` lifecycle events. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:aad:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:aad:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Federated Trust Modifications**): table _time, initiatedBy.user.userPrincipalName, targetResources{}.displayName, activityDisplayName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Federated Trust Modifications** — Adding SAML/OIDC federation to new domains or apps expands blast radius; auditing trust metadata changes is essential.\n\nDocumented **Data sources**: Entra `Add federation to domain`, Okta `trustedOrigin.*` / `idp.*` lifecycle events. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (trust changes), Table (actor, target), Single value (new trusts per quarter).",
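              "_example": "Optional hedged sketch: Step 1 requires security review for new trust relationships, so a daily digest of who added which trust can drive that workflow. Assuming the same Entra audit feed and its auto-extracted `targetResources{}.displayName` field:\n\n```spl\nindex=azure sourcetype=\"azure:aad:audit\" activityDisplayName=\"Add federation to domain\"\n| stats earliest(_time) as first_seen, values(initiatedBy.user.userPrincipalName) as actors, count by targetResources{}.displayName\n| convert ctime(first_seen)\n```\n\nWire the alert action to your ticketing system so each new trust relationship opens a review ticket.",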
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Federated Trust Modifications",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "azure",
                "m365",
                "okta"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.12",
              "n": "Consent Grant Abuse",
              "c": "critical",
              "f": "intermediate",
              "v": "Users granting excessive delegated permissions to malicious OAuth apps is a common attack; monitoring consent events enables revocation.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Entra audit logs (`Consent to application`, `Add OAuth2PermissionGrant`)",
              "q": "index=azure sourcetype=\"azure:aad:audit\"\n| search \"Consent\" OR activityDisplayName=\"Add OAuth2PermissionGrant\"\n| spath path=targetResources{}\n| mvexpand targetResources{} limit=500\n| spath input=targetResources{} path=displayName\n| table _time, initiatedBy.user.userPrincipalName, displayName, activityDisplayName\n| sort -_time",
              "m": "Ingest consent-related audit events. Alert on consent to apps with high privilege (`RoleManagement.ReadWrite.Directory`) or new publisher IDs. Integrate with admin consent workflow.",
              "z": "Table (consent events), Bar chart (apps by consent count), Pie chart (user vs admin consent).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Entra audit logs (`Consent to application`, `Add OAuth2PermissionGrant`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest consent-related audit events. Alert on consent to apps with high privilege (`RoleManagement.ReadWrite.Directory`) or new publisher IDs. Integrate with admin consent workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:aad:audit\"\n| search \"Consent\" OR activityDisplayName=\"Add OAuth2PermissionGrant\"\n| spath path=targetResources{}\n| mvexpand targetResources{} limit=500\n| spath input=targetResources{} path=displayName\n| table _time, initiatedBy.user.userPrincipalName, displayName, activityDisplayName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Consent Grant Abuse** — Users granting excessive delegated permissions to malicious OAuth apps is a common attack; monitoring consent events enables revocation.\n\nDocumented **Data sources**: Entra audit logs (`Consent to application`, `Add OAuth2PermissionGrant`). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:aad:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:aad:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Pipeline stage (see **Consent Grant Abuse**): table _time, initiatedBy.user.userPrincipalName, displayName, activityDisplayName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Consent Grant Abuse** — Users granting excessive delegated permissions to malicious OAuth apps is a common attack; monitoring consent events enables revocation.\n\nDocumented **Data sources**: Entra audit logs (`Consent to application`, `Add OAuth2PermissionGrant`). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (consent events), Bar chart (apps by consent count), Pie chart (user vs admin consent).",
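              "_example": "Optional hedged sketch: Step 1 singles out high-privilege scopes such as `RoleManagement.ReadWrite.Directory`. Matching the scope as a raw search term avoids guessing the exact JSON path in the audit payload:\n\n```spl\nindex=azure sourcetype=\"azure:aad:audit\" activityDisplayName=\"Add OAuth2PermissionGrant\" \"RoleManagement.ReadWrite.Directory\"\n| table _time, initiatedBy.user.userPrincipalName, activityDisplayName\n| sort -_time\n```\n\nExtend the term list with other tier-0 Graph scopes your security team tracks (for example `Directory.ReadWrite.All`).",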
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Consent Grant Abuse",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-9.3.12: Consent Grant Abuse.",
                  "ea": "Saved search 'UC-9.3.12' running on Entra audit logs (Consent to application, Add OAuth2PermissionGrant), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.13",
              "n": "App Registration Secret Expiry",
              "c": "high",
              "f": "beginner",
              "v": "Expired client secrets break automation and encourage long-lived secrets; proactive alerting avoids outages and insecure workarounds.",
              "t": "`Splunk_TA_microsoft-cloudservices`, Graph scripted input",
              "d": "Application credential inventory (`passwordCredentials.endDateTime`), audit when secret added",
              "q": "index=azure sourcetype=\"azure:graph:applications\"\n| eval days_left=round((strptime(endDateTime,\"%Y-%m-%dT%H:%M:%SZ\")-now())/86400)\n| where days_left < 30 AND days_left > 0\n| table appId, displayName, days_left, endDateTime\n| sort days_left",
              "m": "Schedule Graph export of app registrations with secrets/certificates. Alert at 30/14/7 days. Map apps to owners via lookup.",
              "z": "Table (expiring secrets), Single value (next expiry), Gauge (apps past due).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1528"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`, Graph scripted input.\n• Ensure the following data sources are available: Application credential inventory (`passwordCredentials.endDateTime`), audit when secret added.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule Graph export of app registrations with secrets/certificates. Alert at 30/14/7 days. Map apps to owners via lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:graph:applications\"\n| eval days_left=round((strptime(endDateTime,\"%Y-%m-%dT%H:%M:%SZ\")-now())/86400)\n| where days_left < 30 AND days_left > 0\n| table appId, displayName, days_left, endDateTime\n| sort days_left\n```\n\nUnderstanding this SPL\n\n**App Registration Secret Expiry** — Expired client secrets break automation and encourage long-lived secrets; proactive alerting avoids outages and insecure workarounds.\n\nDocumented **Data sources**: Application credential inventory (`passwordCredentials.endDateTime`), audit when secret added. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`, Graph scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:graph:applications. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:graph:applications\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left < 30 AND days_left > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **App Registration Secret Expiry**): table appId, displayName, days_left, endDateTime\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (expiring secrets), Single value (next expiry), Gauge (apps past due).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — App Registration Secret Expiry",
              "mtype": [
                "Availability",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.14",
              "n": "Multi-Tenant App Access Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "Unexpected tenants or guest users accessing multi-tenant apps may indicate consent phishing or lateral SaaS movement.",
              "t": "`Splunk_TA_microsoft-cloudservices`",
              "d": "Entra sign-in logs (`resourceTenantId`, `crossTenantAccessType`, `homeTenantId`)",
              "q": "index=azure sourcetype=\"azure:aad:signin\"\n| where crossTenantAccessType IN (\"b2bCollaboration\",\"passthrough\") AND resourceTenantId!=homeTenantId\n| stats count by userPrincipalName, appDisplayName, resourceTenantId\n| where count > 10\n| sort -count",
              "m": "Baseline B2B access patterns. Alert on new resource tenants for crown-jewel apps. Correlate with consent events (UC-9.3.12).",
              "z": "Table (cross-tenant access), Heatmap (user × tenant), Line chart (volume).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.004",
                "T1098.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`.\n• Ensure the following data sources are available: Entra sign-in logs (`resourceTenantId`, `crossTenantAccessType`, `homeTenantId`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline B2B access patterns. Alert on new resource tenants for crown-jewel apps. Correlate with consent events (UC-9.3.12).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:aad:signin\"\n| where crossTenantAccessType IN (\"b2bCollaboration\",\"passthrough\") AND resourceTenantId!=homeTenantId\n| stats count by userPrincipalName, appDisplayName, resourceTenantId\n| where count > 10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Multi-Tenant App Access Anomalies** — Unexpected tenants or guest users accessing multi-tenant apps may indicate consent phishing or lateral SaaS movement.\n\nDocumented **Data sources**: Entra sign-in logs (`resourceTenantId`, `crossTenantAccessType`, `homeTenantId`). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:aad:signin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:aad:signin\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where crossTenantAccessType IN (\"b2bCollaboration\",\"passthrough\") AND resourceTenantId!=homeTenantId` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by userPrincipalName, appDisplayName, resourceTenantId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Multi-Tenant App Access Anomalies** — Unexpected tenants or guest users accessing multi-tenant apps may indicate consent phishing or lateral SaaS movement.\n\nDocumented **Data sources**: Entra sign-in logs (`resourceTenantId`, `crossTenantAccessType`, `homeTenantId`). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cross-tenant access), Heatmap (user × tenant), Line chart (volume).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Multi-Tenant App Access Anomalies",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.15",
              "n": "OAuth Scope Creep Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Applications accumulating scopes over time violate least privilege; comparing current vs approved scopes finds drift.",
              "t": "Graph API inventory, Okta `app.oauth2.*` events",
              "d": "OAuth scope grants per `client_id`, approved scope lookup CSV",
              "q": "index=oauth sourcetype=\"oauth:scope_inventory\"\n| lookup oauth_scope_approved client_id OUTPUT approved_scopes\n| eval extra_scopes=mvfilter(NOT match(approved_scopes, scope))\n| where mvcount(extra_scopes)>0\n| table client_id, scope, approved_scopes, extra_scopes",
              "m": "Export delegated/app role assignments from Graph weekly. Join with approved baseline. Alert on new sensitive scopes (`Mail.ReadWrite`, `Directory.ReadWrite.All`).",
              "z": "Table (scope drift), Bar chart (apps with extra scopes), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1098.001",
                "T1528"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Graph API inventory, Okta `app.oauth2.*` events.\n• Ensure the following data sources are available: OAuth scope grants per `client_id`, approved scope lookup CSV.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport delegated/app role assignments from Graph weekly. Join with approved baseline. Alert on new sensitive scopes (`Mail.ReadWrite`, `Directory.ReadWrite.All`).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=oauth sourcetype=\"oauth:scope_inventory\"\n| lookup oauth_scope_approved client_id OUTPUT approved_scopes\n| eval extra_scopes=mvfilter(NOT match(approved_scopes, scope))\n| where mvcount(extra_scopes)>0\n| table client_id, scope, approved_scopes, extra_scopes\n```\n\nUnderstanding this SPL\n\n**OAuth Scope Creep Detection** — Applications accumulating scopes over time violate least privilege; comparing current vs approved scopes finds drift.\n\nDocumented **Data sources**: OAuth scope grants per `client_id`, approved scope lookup CSV. **App/TA** (typical add-on context): Graph API inventory, Okta `app.oauth2.*` events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: oauth; **sourcetype**: oauth:scope_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=oauth, sourcetype=\"oauth:scope_inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **extra_scopes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mvcount(extra_scopes)>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OAuth Scope Creep Detection**): table client_id, scope, approved_scopes, extra_scopes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (scope drift), Bar chart (apps with extra scopes), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — OAuth Scope Creep Detection",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "okta"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.3.16",
              "n": "Token Endpoint Rate Limiting",
              "c": "medium",
              "f": "beginner",
              "v": "Throttling at `/oauth2/token` breaks integrations and may indicate credential stuffing or runaway automation.",
              "t": "API gateway / WAF logs, Entra `SignInLogs` with error codes, custom HEC from reverse proxy",
              "d": "HTTP 429, `AADSTS50196` / `invalid_client` bursts, `rateLimit` in response headers",
              "q": "index=proxy sourcetype=\"access_combined\" uri_path=\"/oauth2/v2.0/token\"\n| search status=429 OR like(_raw,\"%rate limit%\")\n| bin _time span=5m\n| stats count by client_id, _time\n| where count > 100\n| sort -count",
              "m": "Log token endpoint from AAD Application Proxy or API Management. Alert on 429 spikes per client_id. Implement exponential backoff in callers.",
              "z": "Line chart (429 rate), Table (top clients), Single value (throttled requests/hour).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1110.004",
                "T1110"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: API gateway / WAF logs, Entra `SignInLogs` with error codes, custom HEC from reverse proxy.\n• Ensure the following data sources are available: HTTP 429, `AADSTS50196` / `invalid_client` bursts, `rateLimit` in response headers.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog token endpoint from AAD Application Proxy or API Management. Alert on 429 spikes per client_id. Implement exponential backoff in callers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"access_combined\" uri_path=\"/oauth2/v2.0/token\"\n| search status=429 OR like(_raw,\"%rate limit%\")\n| bin _time span=5m\n| stats count by client_id, _time\n| where count > 100\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Token Endpoint Rate Limiting** — Throttling at `/oauth2/token` breaks integrations and may indicate credential stuffing or runaway automation.\n\nDocumented **Data sources**: HTTP 429, `AADSTS50196` / `invalid_client` bursts, `rateLimit` in response headers. **App/TA** (typical add-on context): API gateway / WAF logs, Entra `SignInLogs` with error codes, custom HEC from reverse proxy. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: access_combined. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"access_combined\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by client_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (429 rate), Table (top clients), Single value (throttled requests/hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Token Endpoint Rate Limiting",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.2,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 15,
            "none": 0
          }
        },
        {
          "i": "9.4",
          "n": "Privileged Access Management (PAM)",
          "u": [
            {
              "i": "9.4.1",
              "n": "Privileged Session Audit",
              "c": "critical",
              "f": "beginner",
              "v": "Complete audit trail of privileged sessions is required for compliance (SOX, PCI, HIPAA) and security investigation.",
              "t": "Splunk_TA_cyberark, BeyondTrust TA for Splunk",
              "d": "PAM session logs (session start/end, target system, user, protocol)",
              "q": "index=pam sourcetype=\"cyberark:session\"\n| table _time, user, target_host, target_account, protocol, duration_min, session_id\n| sort -_time",
              "m": "Install vendor PAM TA. Forward PAM vault/session logs to Splunk. Track all privileged sessions with full metadata. Alert on sessions outside business hours or to unexpected targets. Retain logs per compliance requirements.",
              "z": "Table (session history), Bar chart (sessions by user), Timeline (privileged access events), Heatmap (user × time of day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_cyberark, BeyondTrust TA for Splunk.\n• Ensure the following data sources are available: PAM session logs (session start/end, target system, user, protocol).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall vendor PAM TA. Forward PAM vault/session logs to Splunk. Track all privileged sessions with full metadata. Alert on sessions outside business hours or to unexpected targets. Retain logs per compliance requirements.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:session\"\n| table _time, user, target_host, target_account, protocol, duration_min, session_id\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Privileged Session Audit** — Complete audit trail of privileged sessions is required for compliance (SOX, PCI, HIPAA) and security investigation.\n\nDocumented **Data sources**: PAM session logs (session start/end, target system, user, protocol). **App/TA** (typical add-on context): Splunk_TA_cyberark, BeyondTrust TA for Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:session. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:session\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Privileged Session Audit**): table _time, user, target_host, target_account, protocol, duration_min, session_id\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\" AND match(Authentication.app, \"(?i)pam|cyberark|beyondtrust|delinea\")\n  by Authentication.user Authentication.dest Authentication.src span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privileged Session Audit** — Complete audit trail of privileged sessions is required for compliance (SOX, PCI, HIPAA) and security investigation.\n\nDocumented **Data sources**: PAM session logs (session start/end, target system, user, protocol). **App/TA** (typical add-on context): Splunk_TA_cyberark, BeyondTrust TA for Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (session history), Bar chart (sessions by user), Timeline (privileged access events), Heatmap (user × time of day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep a clear record of who used privileged vault sessions and when, so investigators and auditors can trace access to critical systems without guessing.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\" AND match(Authentication.app, \"(?i)pam|cyberark|beyondtrust|delinea\")\n  by Authentication.user Authentication.dest Authentication.src span=1h\n| sort -count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.05",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.05 (Restrict administrative privileges) is enforced — Splunk UC-9.4.1: Privileged Session Audit.",
                  "ea": "Saved search 'UC-9.4.1' running on PAM session logs (session start/end, target system, user, protocol), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/essential-eight/essential-eight-maturity-model"
                },
                {
                  "r": "BAIT/KAIT",
                  "v": "Aug 2021",
                  "cl": "§5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that BAIT/KAIT §5 (Identity & access management) is enforced — Splunk UC-9.4.1: Privileged Session Audit.",
                  "ea": "Saved search 'UC-9.4.1' running on PAM session logs (session start/end, target system, user, protocol), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bafin.de/SharedDocs/Veroeffentlichungen/EN/Rundschreiben/2021/rs_1021_BAIT_en.html"
                },
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "AC-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that FedRAMP AC-2 (Account management) is enforced — Splunk UC-9.4.1: Privileged Session Audit.",
                  "ea": "Saved search 'UC-9.4.1' running on PAM session logs (session start/end, target system, user, protocol), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "01.b",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HITRUST 01.b (User access management) is enforced — Splunk UC-9.4.1: Privileged Session Audit.",
                  "ea": "Saved search 'UC-9.4.1' running on PAM session logs (session start/end, target system, user, protocol), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                },
                {
                  "r": "SAMA CSF",
                  "v": "v1.0 (2017)",
                  "cl": "3.3.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SAMA CSF 3.3.5 (Security monitoring) is enforced — Splunk UC-9.4.1: Privileged Session Audit.",
                  "ea": "Saved search 'UC-9.4.1' running on PAM session logs (session start/end, target system, user, protocol), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.sama.gov.sa/en-US/Laws/BankingRules/SAMA%20Cyber%20Security%20Framework.pdf"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.2",
              "n": "Password Checkout Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Unusual checkout patterns may indicate misuse. Tracking ensures accountability and supports investigations.",
              "t": "Splunk_TA_cyberark",
              "d": "PAM vault logs (password retrieve/checkin events)",
              "q": "index=pam sourcetype=\"cyberark:vault\"\n| search action=\"Retrieve\" OR action=\"Checkin\"\n| transaction user, account maxspan=8h\n| eval checkout_duration_hr=duration/3600\n| where checkout_duration_hr > 4\n| table user, account, target, checkout_duration_hr",
              "m": "Track password checkout and checkin events. Calculate checkout duration. Alert on checkouts exceeding policy limits (e.g., >4 hours). Flag accounts checked out but never checked in (hoarding). Report on checkout frequency per user.",
              "z": "Table (active checkouts), Bar chart (checkout duration by user), Line chart (checkout frequency trend).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_cyberark.\n• Ensure the following data sources are available: PAM vault logs (password retrieve/checkin events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack password checkout and checkin events. Calculate checkout duration. Alert on checkouts exceeding policy limits (e.g., >4 hours). Flag accounts checked out but never checked in (hoarding). Report on checkout frequency per user.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:vault\"\n| search action=\"Retrieve\" OR action=\"Checkin\"\n| transaction user, account maxspan=8h\n| eval checkout_duration_hr=duration/3600\n| where checkout_duration_hr > 4\n| table user, account, target, checkout_duration_hr\n```\n\nUnderstanding this SPL\n\n**Password Checkout Tracking** — Unusual checkout patterns may indicate misuse. Tracking ensures accountability and supports investigations.\n\nDocumented **Data sources**: PAM vault logs (password retrieve/checkin events). **App/TA** (typical add-on context): Splunk_TA_cyberark. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:vault. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:vault\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• `eval` defines or adjusts **checkout_duration_hr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where checkout_duration_hr > 4` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Password Checkout Tracking**): table user, account, target, checkout_duration_hr\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (active checkouts), Bar chart (checkout duration by user), Line chart (checkout frequency trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Password Checkout Tracking",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cyberark"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.3",
              "n": "Break-Glass Account Usage",
              "c": "critical",
              "f": "beginner",
              "v": "Break-glass accounts provide emergency access and should rarely be used. Any usage requires immediate investigation and documentation.",
              "t": "Splunk_TA_cyberark, custom alert",
              "d": "PAM vault events for break-glass accounts",
              "q": "index=pam sourcetype=\"cyberark:vault\"\n| search account_type=\"break_glass\" OR account IN (\"emergency_admin\",\"firecall_*\")\n| table _time, user, account, target, action\n| sort -_time",
              "m": "Tag break-glass accounts in PAM. Create critical alert for any break-glass access. Require documented reason within 24 hours. Send notifications to security team and management. Track usage frequency for trend reporting.",
              "z": "Single value (break-glass uses this month — target: 0), Table (usage history), Timeline (break-glass events).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1078.002",
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_cyberark, custom alert.\n• Ensure the following data sources are available: PAM vault events for break-glass accounts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag break-glass accounts in PAM. Create critical alert for any break-glass access. Require documented reason within 24 hours. Send notifications to security team and management. Track usage frequency for trend reporting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:vault\"\n| search account_type=\"break_glass\" OR account IN (\"emergency_admin\",\"firecall_*\")\n| table _time, user, account, target, action\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Break-Glass Account Usage** — Break-glass accounts provide emergency access and should rarely be used. Any usage requires immediate investigation and documentation.\n\nDocumented **Data sources**: PAM vault events for break-glass accounts. **App/TA** (typical add-on context): Splunk_TA_cyberark, custom alert. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:vault. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:vault\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Break-Glass Account Usage**): table _time, user, account, target, action\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (break-glass uses this month — target: 0), Table (usage history), Timeline (break-glass events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Break-Glass Account Usage",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cyberark"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.4",
              "n": "Credential Rotation Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Overdue password rotations increase exposure window if credentials are compromised. Compliance tracking ensures policy adherence.",
              "t": "PAM TA, scripted input",
              "d": "PAM vault credential metadata (last rotation date, policy)",
              "q": "index=pam sourcetype=\"cyberark:account_inventory\"\n| eval days_since_rotation=round((now()-last_rotation_epoch)/86400)\n| eval overdue=if(days_since_rotation > rotation_policy_days, \"Yes\", \"No\")\n| where overdue=\"Yes\"\n| table account, target, days_since_rotation, rotation_policy_days\n| sort -days_since_rotation",
              "m": "Export credential inventory from PAM periodically. Calculate days since last rotation vs policy requirement. Alert on overdue rotations. Track compliance percentage over time. Report to management monthly.",
              "z": "Table (overdue credentials), Single value (compliance %), Gauge (% compliant), Bar chart (overdue by platform).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PAM TA, scripted input.\n• Ensure the following data sources are available: PAM vault credential metadata (last rotation date, policy).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport credential inventory from PAM periodically. Calculate days since last rotation vs policy requirement. Alert on overdue rotations. Track compliance percentage over time. Report to management monthly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:account_inventory\"\n| eval days_since_rotation=round((now()-last_rotation_epoch)/86400)\n| eval overdue=if(days_since_rotation > rotation_policy_days, \"Yes\", \"No\")\n| where overdue=\"Yes\"\n| table account, target, days_since_rotation, rotation_policy_days\n| sort -days_since_rotation\n```\n\nUnderstanding this SPL\n\n**Credential Rotation Compliance** — Overdue password rotations increase exposure window if credentials are compromised. Compliance tracking ensures policy adherence.\n\nDocumented **Data sources**: PAM vault credential metadata (last rotation date, policy). **App/TA** (typical add-on context): PAM TA, scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:account_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:account_inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_since_rotation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overdue=\"Yes\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Credential Rotation Compliance**): table account, target, days_since_rotation, rotation_policy_days\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue credentials), Single value (compliance %), Gauge (% compliant), Bar chart (overdue by platform).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Credential Rotation Compliance",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.5",
              "n": "Suspicious Session Commands",
              "c": "critical",
              "f": "beginner",
              "v": "Detecting dangerous commands during privileged sessions enables real-time intervention before damage occurs.",
              "t": "CyberArk PSM, BeyondTrust session monitoring",
              "d": "PAM session recordings/keystroke logs",
              "q": "index=pam sourcetype=\"cyberark:psm_transcript\"\n| search command IN (\"rm -rf\",\"format\",\"del /s\",\"DROP DATABASE\",\"shutdown\",\"halt\",\"init 0\")\n| table _time, user, target_host, command, session_id",
              "m": "Enable PAM session recording and command logging. Parse keystroke transcripts. Alert immediately on high-risk commands (rm -rf, format, DROP DATABASE, etc.). Integrate with SOAR for automated session termination on critical detections.",
              "z": "Table (suspicious commands), Timeline (command events), Single value (high-risk commands today).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1078",
                "T1003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CyberArk PSM, BeyondTrust session monitoring.\n• Ensure the following data sources are available: PAM session recordings/keystroke logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable PAM session recording and command logging. Parse keystroke transcripts. Alert immediately on high-risk commands (rm -rf, format, DROP DATABASE, etc.). Integrate with SOAR for automated session termination on critical detections.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:psm_transcript\"\n| search command IN (\"rm -rf\",\"format\",\"del /s\",\"DROP DATABASE\",\"shutdown\",\"halt\",\"init 0\")\n| table _time, user, target_host, command, session_id\n```\n\nUnderstanding this SPL\n\n**Suspicious Session Commands** — Detecting dangerous commands during privileged sessions enables real-time intervention before damage occurs.\n\nDocumented **Data sources**: PAM session recordings/keystroke logs. **App/TA** (typical add-on context): CyberArk PSM, BeyondTrust session monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:psm_transcript. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:psm_transcript\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Suspicious Session Commands**): table _time, user, target_host, command, session_id\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious commands), Timeline (command events), Single value (high-risk commands today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Suspicious Session Commands",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "beyondtrust",
                "cyberark"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.6",
              "n": "Vault Health Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "PAM vault downtime prevents all privileged access, blocking critical operations. Health monitoring ensures continuous availability.",
              "t": "PAM infrastructure monitoring, SNMP",
              "d": "PAM vault system logs, component health APIs",
              "q": "index=pam sourcetype=\"cyberark:vault_health\"\n| stats latest(status) as status, latest(replication_lag) as lag by vault_server, component\n| where status!=\"Running\" OR lag > 300",
              "m": "Monitor PAM vault components (vault server, PVWA, PSM, CPM). Track service availability, replication between primary/DR vault, and component health. Alert on any component failure or replication lag >5 minutes.",
              "z": "Status grid (component × health), Single value (vault uptime %), Table (unhealthy components), Line chart (replication lag).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PAM infrastructure monitoring, SNMP.\n• Ensure the following data sources are available: PAM vault system logs, component health APIs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor PAM vault components (vault server, PVWA, PSM, CPM). Track service availability, replication between primary/DR vault, and component health. Alert on any component failure or replication lag >5 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:vault_health\"\n| stats latest(status) as status, latest(replication_lag) as lag by vault_server, component\n| where status!=\"Running\" OR lag > 300\n```\n\nUnderstanding this SPL\n\n**Vault Health Monitoring** — PAM vault downtime prevents all privileged access, blocking critical operations. Health monitoring ensures continuous availability.\n\nDocumented **Data sources**: PAM vault system logs, component health APIs. **App/TA** (typical add-on context): PAM infrastructure monitoring, SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:vault_health. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:vault_health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vault_server, component** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status!=\"Running\" OR lag > 300` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (component × health), Single value (vault uptime %), Table (unhealthy components), Line chart (replication lag).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Vault Health Monitoring",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.7",
              "n": "Federated Identity Provider Health",
              "c": "critical",
              "f": "intermediate",
              "v": "IdP outages block all federated application access. Health monitoring ensures SSO availability and rapid incident response.",
              "t": "IdP monitoring, SAML/OIDC audit logs",
              "d": "IdP health endpoints, federation error logs",
              "q": "index=iam sourcetype=\"idp:health\"\n| stats latest(status) as status, latest(response_ms) as latency by idp_host, tenant\n| where status!=\"healthy\" OR latency > 5000\n| table idp_host, tenant, status, latency",
              "m": "Poll IdP health endpoints (e.g., SAML metadata, OIDC discovery) every 60 seconds. Ingest federation errors from app and IdP logs. Alert on status unhealthy or latency >5s. Correlate with user-reported SSO issues.",
              "z": "Status grid (IdP × health), Single value (IdP uptime %), Line chart (latency trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IdP monitoring, SAML/OIDC audit logs.\n• Ensure the following data sources are available: IdP health endpoints, federation error logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll IdP health endpoints (e.g., SAML metadata, OIDC discovery) every 60 seconds. Ingest federation errors from app and IdP logs. Alert on status unhealthy or latency >5s. Correlate with user-reported SSO issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iam sourcetype=\"idp:health\"\n| stats latest(status) as status, latest(response_ms) as latency by idp_host, tenant\n| where status!=\"healthy\" OR latency > 5000\n| table idp_host, tenant, status, latency\n```\n\nUnderstanding this SPL\n\n**Federated Identity Provider Health** — IdP outages block all federated application access. Health monitoring ensures SSO availability and rapid incident response.\n\nDocumented **Data sources**: IdP health endpoints, federation error logs. **App/TA** (typical add-on context): IdP monitoring, SAML/OIDC audit logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iam; **sourcetype**: idp:health. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iam, sourcetype=\"idp:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by idp_host, tenant** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status!=\"healthy\" OR latency > 5000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Federated Identity Provider Health**): table idp_host, tenant, status, latency\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Federated Identity Provider Health** — IdP outages block all federated application access. Health monitoring ensures SSO availability and rapid incident response.\n\nDocumented **Data sources**: IdP health endpoints, federation error logs. **App/TA** (typical add-on context): IdP monitoring, SAML/OIDC audit logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (IdP × health), Single value (IdP uptime %), Line chart (latency trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Federated Identity Provider Health",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.8",
              "n": "API Token Usage Anomaly",
              "c": "high",
              "f": "intermediate",
              "v": "Unusual API token usage may indicate token theft or abuse. Detection supports least-privilege and incident response.",
              "t": "Cloud identity TAs, API gateway logs",
              "d": "Token audit logs, API request logs",
              "q": "index=iam sourcetype=\"api:token_audit\"\n| bin _time span=1h\n| stats dc(ip) as unique_ips, count as requests by token_id, _time\n| where unique_ips > 3 OR requests > 1000\n| sort -requests",
              "m": "Ingest token usage from IdP and API gateways. Baseline normal usage per token. Alert on new IPs, high request volume, or off-hours spikes. Rotate tokens on anomaly.",
              "z": "Table (anomalous tokens), Line chart (requests by token), Bar chart (unique IPs per token).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1528",
                "T1550.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud identity TAs, API gateway logs.\n• Ensure the following data sources are available: Token audit logs, API request logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest token usage from IdP and API gateways. Baseline normal usage per token. Alert on new IPs, high request volume, or off-hours spikes. Rotate tokens on anomaly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iam sourcetype=\"api:token_audit\"\n| bin _time span=1h\n| stats dc(ip) as unique_ips, count as requests by token_id, _time\n| where unique_ips > 3 OR requests > 1000\n| sort -requests\n```\n\nUnderstanding this SPL\n\n**API Token Usage Anomaly** — Unusual API token usage may indicate token theft or abuse. Detection supports least-privilege and incident response.\n\nDocumented **Data sources**: Token audit logs, API request logs. **App/TA** (typical add-on context): Cloud identity TAs, API gateway logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iam; **sourcetype**: api:token_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iam, sourcetype=\"api:token_audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by token_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_ips > 3 OR requests > 1000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**API Token Usage Anomaly** — Unusual API token usage may indicate token theft or abuse. Detection supports least-privilege and incident response.\n\nDocumented **Data sources**: Token audit logs, API request logs. **App/TA** (typical add-on context): Cloud identity TAs, API gateway logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalous tokens), Line chart (requests by token), Bar chart (unique IPs per token).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — API Token Usage Anomaly",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.9",
              "n": "Cross-Domain Trust Change Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Trust relationship changes can enable cross-domain abuse. Early detection prevents privilege escalation across forests.",
              "t": "`Splunk_TA_windows`",
              "d": "Security Event Log (4706 — trust modified, 4714 — trust created)",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4706, 4714)\n| table _time, SubjectUserName, TargetDomainName, TrustType, TrustDirection\n| sort -_time",
              "m": "Forward DC Security logs. Alert on any trust creation or modification. Require change approval for trust changes. Report on trust topology for audit.",
              "z": "Table (trust changes), Timeline (events), Single value (changes this week).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1484.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`.\n• Ensure the following data sources are available: Security Event Log (4706 — trust modified, 4714 — trust created).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward DC Security logs. Alert on any trust creation or modification. Require change approval for trust changes. Report on trust topology for audit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4706, 4714)\n| table _time, SubjectUserName, TargetDomainName, TrustType, TrustDirection\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Cross-Domain Trust Change Detection** — Trust relationship changes can enable cross-domain abuse. Early detection prevents privilege escalation across forests.\n\nDocumented **Data sources**: Security Event Log (4706 — trust modified, 4714 — trust created). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Cross-Domain Trust Change Detection**): table _time, SubjectUserName, TargetDomainName, TrustType, TrustDirection\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cross-Domain Trust Change Detection** — Trust relationship changes can enable cross-domain abuse. Early detection prevents privilege escalation across forests.\n\nDocumented **Data sources**: Security Event Log (4706 — trust modified, 4714 — trust created). **App/TA** (typical add-on context): `Splunk_TA_windows`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (trust changes), Timeline (events), Single value (changes this week).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Cross-Domain Trust Change Detection",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.10",
              "n": "Just-in-Time Access Request Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "JIT access reduces standing privilege. Monitoring request and approval patterns ensures policy compliance and detects abuse.",
              "t": "PAM / JIT access system logs",
              "d": "Access request and approval audit logs",
              "q": "index=pam sourcetype=\"jit:requests\"\n| stats count, values(approver) as approvers by requester, resource, action\n| where count > 20\n| sort -count",
              "m": "Ingest JIT request and approval events. Alert on excessive requests per user, self-approvals, or access outside business hours. Report on approval latency and denial rate.",
              "z": "Table (request summary), Bar chart (requests by requester), Line chart (approval latency).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PAM / JIT access system logs.\n• Ensure the following data sources are available: Access request and approval audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest JIT request and approval events. Alert on excessive requests per user, self-approvals, or access outside business hours. Report on approval latency and denial rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"jit:requests\"\n| stats count, values(approver) as approvers by requester, resource, action\n| where count > 20\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Just-in-Time Access Request Monitoring** — JIT access reduces standing privilege. Monitoring request and approval patterns ensures policy compliance and detects abuse.\n\nDocumented **Data sources**: Access request and approval audit logs. **App/TA** (typical add-on context): PAM / JIT access system logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: jit:requests. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"jit:requests\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by requester, resource, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Just-in-Time Access Request Monitoring** — JIT access reduces standing privilege. Monitoring request and approval patterns ensures policy compliance and detects abuse.\n\nDocumented **Data sources**: Access request and approval audit logs. **App/TA** (typical add-on context): PAM / JIT access system logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (request summary), Bar chart (requests by requester), Line chart (approval latency).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Just-in-Time Access Request Monitoring",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.11",
              "n": "Identity Sync Failure Detection",
              "c": "critical",
              "f": "beginner",
              "v": "Sync failures cause stale or missing identities in target systems, leading to access denials or orphaned accounts. Detection enables quick remediation.",
              "t": "Identity sync / SCIM connector logs",
              "d": "Sync job logs, connector error logs",
              "q": "index=iam sourcetype=\"sync:job\"\n| where status=\"failed\" OR error_count > 0\n| stats latest(_time) as last_failure, values(error_message) as errors by connector, target_system\n| table connector, target_system, last_failure, errors",
              "m": "Ingest sync job results from IdP and HR-driven connectors. Alert on any failed run or error count >0. Track sync latency and delta size. Report on sync health by target.",
              "z": "Table (failed syncs), Single value (sync success %), Timeline (failure events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Identity sync / SCIM connector logs.\n• Ensure the following data sources are available: Sync job logs, connector error logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest sync job results from IdP and HR-driven connectors. Alert on any failed run or error count >0. Track sync latency and delta size. Report on sync health by target.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iam sourcetype=\"sync:job\"\n| where status=\"failed\" OR error_count > 0\n| stats latest(_time) as last_failure, values(error_message) as errors by connector, target_system\n| table connector, target_system, last_failure, errors\n```\n\nUnderstanding this SPL\n\n**Identity Sync Failure Detection** — Sync failures cause stale or missing identities in target systems, leading to access denials or orphaned accounts. Detection enables quick remediation.\n\nDocumented **Data sources**: Sync job logs, connector error logs. **App/TA** (typical add-on context): Identity sync / SCIM connector logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iam; **sourcetype**: sync:job. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iam, sourcetype=\"sync:job\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status=\"failed\" OR error_count > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by connector, target_system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Identity Sync Failure Detection**): table connector, target_system, last_failure, errors\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Identity Sync Failure Detection** — Sync failures cause stale or missing identities in target systems, leading to access denials or orphaned accounts. Detection enables quick remediation.\n\nDocumented **Data sources**: Sync job logs, connector error logs. **App/TA** (typical add-on context): Identity sync / SCIM connector logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed syncs), Single value (sync success %), Timeline (failure events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Identity Sync Failure Detection",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.12",
              "n": "RADIUS / TACACS+ Server Response Time",
              "c": "high",
              "f": "intermediate",
              "v": "Authentication server latency and availability for network device auth; slow or unavailable RADIUS/TACACS+ blocks admin access to routers and switches.",
              "t": "Custom scripted input (radtest, tacacs_plus probe), syslog from NAS devices",
              "d": "RADIUS accounting logs, NAS syslog, synthetic probe results",
              "q": "index=radius sourcetype=\"radius:probe\"\n| bin _time span=5m\n| stats avg(response_ms) as avg_ms, max(response_ms) as max_ms, count(eval(response_ms>2000)) as slow_count by server, _time\n| where avg_ms > 500 OR max_ms > 2000 OR slow_count > 0\n| table _time, server, avg_ms, max_ms, slow_count",
              "m": "Run radtest (FreeRADIUS) or equivalent probe against RADIUS servers every 60 seconds. For TACACS+, use tacacs_plus Python library or custom script. Ingest probe results with response time. Forward NAS syslog (e.g., Cisco, Arista) for accounting and auth events. Alert on avg response >500ms or any probe timeout. Correlate with NAS auth failures to distinguish server vs network issues.",
              "z": "Line chart (response time by server), Table (slow probes), Single value (current avg latency), Status grid (server × health).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (radtest, tacacs_plus probe), syslog from NAS devices.\n• Ensure the following data sources are available: RADIUS accounting logs, NAS syslog, synthetic probe results.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun radtest (FreeRADIUS) or equivalent probe against RADIUS servers every 60 seconds. For TACACS+, use tacacs_plus Python library or custom script. Ingest probe results with response time. Forward NAS syslog (e.g., Cisco, Arista) for accounting and auth events. Alert on avg response >500ms or any probe timeout. Correlate with NAS auth failures to distinguish server vs network issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=radius sourcetype=\"radius:probe\"\n| bin _time span=5m\n| stats avg(response_ms) as avg_ms, max(response_ms) as max_ms, count(eval(response_ms>2000)) as slow_count by server, _time\n| where avg_ms > 500 OR max_ms > 2000 OR slow_count > 0\n| table _time, server, avg_ms, max_ms, slow_count\n```\n\nUnderstanding this SPL\n\n**RADIUS / TACACS+ Server Response Time** — Authentication server latency and availability for network device auth; slow or unavailable RADIUS/TACACS+ blocks admin access to routers and switches.\n\nDocumented **Data sources**: RADIUS accounting logs, NAS syslog, synthetic probe results. **App/TA** (typical add-on context): Custom scripted input (radtest, tacacs_plus probe), syslog from NAS devices. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: radius; **sourcetype**: radius:probe. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=radius, sourcetype=\"radius:probe\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by server, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_ms > 500 OR max_ms > 2000 OR slow_count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **RADIUS / TACACS+ Server Response Time**): table _time, server, avg_ms, max_ms, slow_count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (response time by server), Table (slow probes), Single value (current avg latency), Status grid (server × health).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — RADIUS / TACACS+ Server Response Time",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.13",
              "n": "Active Directory Domain Controller Response Time",
              "c": "high",
              "f": "intermediate",
              "v": "LDAP bind time, DNS query time per DC — slow DCs cause auth delays and user lockouts.",
              "t": "`Splunk_TA_windows`, custom scripted input",
              "d": "LDAP bind latency probes, DNS query timing, Windows DC perf counters",
              "q": "index=ad_perf sourcetype=\"ad:dc_probe\"\n| bin _time span=5m\n| stats avg(ldap_bind_ms) as avg_ldap, avg(dns_query_ms) as avg_dns, count(eval(ldap_bind_ms>1000)) as slow_ldap by dc_host, _time\n| where avg_ldap > 500 OR avg_dns > 200 OR slow_ldap > 0\n| table _time, dc_host, avg_ldap, avg_dns, slow_ldap",
              "m": "Run scripted input from monitoring host: perform LDAP bind (e.g., ldapsearch -x -H ldap://dc:389 -b \"\" -s base) and measure elapsed time; run nslookup or Resolve-DnsName for _ldap._tcp.dc._msdcs.domain. Ingest Windows perf counters (NTDS, LDAP Client Sessions) via Splunk_TA_windows. Alert on LDAP bind >1s or DNS >200ms. Identify overloaded DCs for load balancing.",
              "z": "Line chart (LDAP/DNS latency by DC), Table (slow DCs), Status grid (DC × response time tier), Single value (worst DC latency).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [
                "T1003.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, custom scripted input.\n• Ensure the following data sources are available: LDAP bind latency probes, DNS query timing, Windows DC perf counters.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun scripted input from monitoring host: perform LDAP bind (e.g., ldapsearch -x -H ldap://dc:389 -b \"\" -s base) and measure elapsed time; run nslookup or Resolve-DnsName for _ldap._tcp.dc._msdcs.domain. Ingest Windows perf counters (NTDS, LDAP Client Sessions) via Splunk_TA_windows. Alert on LDAP bind >1s or DNS >200ms. Identify overloaded DCs for load balancing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ad_perf sourcetype=\"ad:dc_probe\"\n| bin _time span=5m\n| stats avg(ldap_bind_ms) as avg_ldap, avg(dns_query_ms) as avg_dns, count(eval(ldap_bind_ms>1000)) as slow_ldap by dc_host, _time\n| where avg_ldap > 500 OR avg_dns > 200 OR slow_ldap > 0\n| table _time, dc_host, avg_ldap, avg_dns, slow_ldap\n```\n\nUnderstanding this SPL\n\n**Active Directory Domain Controller Response Time** — LDAP bind time, DNS query time per DC — slow DCs cause auth delays and user lockouts.\n\nDocumented **Data sources**: LDAP bind latency probes, DNS query timing, Windows DC perf counters. **App/TA** (typical add-on context): `Splunk_TA_windows`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ad_perf; **sourcetype**: ad:dc_probe. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ad_perf, sourcetype=\"ad:dc_probe\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by dc_host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_ldap > 500 OR avg_dns > 200 OR slow_ldap > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Active Directory Domain Controller Response Time**): table _time, dc_host, avg_ldap, avg_dns, slow_ldap\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (LDAP/DNS latency by DC), Table (slow DCs), Status grid (DC × response time tier), Single value (worst DC latency).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Active Directory Domain Controller Response Time",
              "mtype": [
                "Security",
                "Performance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.14",
              "n": "CyberArk Session Recording Alerts",
              "c": "critical",
              "f": "beginner",
              "v": "Real-time alerts on PSM recordings—policy violations, blocked commands, or session anomalies—enable SOC response before logout.",
              "t": "Splunk TA for CyberArk",
              "d": "PSM recording events, policy violation syslog from PSM",
              "q": "index=pam sourcetype=\"cyberark:psm\" OR sourcetype=\"cyberark:psm_alert\"\n| search alert_level IN (\"High\",\"Critical\") OR policy_violation=\"true\"\n| table _time, user, target_account, session_id, alert_reason\n| sort -_time",
              "m": "Forward PSM alert stream to Splunk. Map vendor severity to SOC tiers. Integrate with SOAR for session kill on critical patterns.",
              "z": "Timeline (alerts), Table (session detail), Single value (critical alerts 24h).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk TA for CyberArk.\n• Ensure the following data sources are available: PSM recording events, policy violation syslog from PSM.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward PSM alert stream to Splunk. Map vendor severity to SOC tiers. Integrate with SOAR for session kill on critical patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:psm\" OR sourcetype=\"cyberark:psm_alert\"\n| search alert_level IN (\"High\",\"Critical\") OR policy_violation=\"true\"\n| table _time, user, target_account, session_id, alert_reason\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**CyberArk Session Recording Alerts** — Real-time alerts on PSM recordings—policy violations, blocked commands, or session anomalies—enable SOC response before logout.\n\nDocumented **Data sources**: PSM recording events, policy violation syslog from PSM. **App/TA** (typical add-on context): Splunk TA for CyberArk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:psm, cyberark:psm_alert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:psm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **CyberArk Session Recording Alerts**): table _time, user, target_account, session_id, alert_reason\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CyberArk Session Recording Alerts** — Real-time alerts on PSM recordings—policy violations, blocked commands, or session anomalies—enable SOC response before logout.\n\nDocumented **Data sources**: PSM recording events, policy violation syslog from PSM. **App/TA** (typical add-on context): Splunk TA for CyberArk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (alerts), Table (session detail), Single value (critical alerts 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch session recording and policy alerts from the vault so we can respond when privileged use breaks the rules or looks dangerous.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.15",
              "n": "Privileged Session Duration Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "Sessions far longer than peer baseline may indicate data exfiltration or abandoned hijacked sessions.",
              "t": "CyberArk / BeyondTrust session logs",
              "d": "PAM session start/end with duration",
              "q": "index=pam sourcetype=\"cyberark:session\"\n| eval dur_min=duration_sec/60\n| eventstats median(dur_min) as med by target_account\n| where dur_min > med*3 AND dur_min > 60\n| table _time, user, target_account, dur_min, med",
              "m": "Baseline duration per target system type. Exclude known maintenance windows via lookup. Pair with UC-9.4.1 audit trail.",
              "z": "Table (long sessions), Box plot (duration by target), Line chart (max duration trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CyberArk / BeyondTrust session logs.\n• Ensure the following data sources are available: PAM session start/end with duration.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline duration per target system type. Exclude known maintenance windows via lookup. Pair with UC-9.4.1 audit trail.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:session\"\n| eval dur_min=duration_sec/60\n| eventstats median(dur_min) as med by target_account\n| where dur_min > med*3 AND dur_min > 60\n| table _time, user, target_account, dur_min, med\n```\n\nUnderstanding this SPL\n\n**Privileged Session Duration Anomalies** — Sessions far longer than peer baseline may indicate data exfiltration or abandoned hijacked sessions.\n\nDocumented **Data sources**: PAM session start/end with duration. **App/TA** (typical add-on context): CyberArk / BeyondTrust session logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:session. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:session\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **dur_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by target_account** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where dur_min > med*3 AND dur_min > 60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Privileged Session Duration Anomalies**): table _time, user, target_account, dur_min, med\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privileged Session Duration Anomalies** — Sessions far longer than peer baseline may indicate data exfiltration or abandoned hijacked sessions.\n\nDocumented **Data sources**: PAM session start/end with duration. **App/TA** (typical add-on context): CyberArk / BeyondTrust session logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (long sessions), Box plot (duration by target), Line chart (max duration trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Privileged Session Duration Anomalies",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "beyondtrust",
                "cyberark"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.16",
              "n": "Vault Synchronization Failures",
              "c": "critical",
              "f": "intermediate",
              "v": "Vault replication or DR sync failures risk split-brain or stale credentials; distinct from generic component health.",
              "t": "CyberArk Vault DR logs, vendor HA APIs",
              "d": "`VaultReplication`, `DR` sync job status, cluster replication lag metrics",
              "q": "index=pam sourcetype=\"cyberark:vault_replication\"\n| where status!=\"Success\" OR lag_seconds > 120\n| stats latest(_time) as last_evt, values(error) as errs by primary_vault, dr_vault\n| table primary_vault, dr_vault, lag_seconds, errs",
              "m": "Ingest replication job results every minute. Alert on lag > policy (e.g., 2 minutes) or failed sync. Page vault admins for DR sites.",
              "z": "Line chart (lag), Table (failed jobs), Status grid (primary × DR).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CyberArk Vault DR logs, vendor HA APIs.\n• Ensure the following data sources are available: `VaultReplication`, `DR` sync job status, cluster replication lag metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest replication job results every minute. Alert on lag > policy (e.g., 2 minutes) or failed sync. Page vault admins for DR sites.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:vault_replication\"\n| where status!=\"Success\" OR lag_seconds > 120\n| stats latest(_time) as last_evt, values(error) as errs by primary_vault, dr_vault\n| table primary_vault, dr_vault, lag_seconds, errs\n```\n\nUnderstanding this SPL\n\n**Vault Synchronization Failures** — Vault replication or DR sync failures risk split-brain or stale credentials; distinct from generic component health.\n\nDocumented **Data sources**: `VaultReplication`, `DR` sync job status, cluster replication lag metrics. **App/TA** (typical add-on context): CyberArk Vault DR logs, vendor HA APIs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:vault_replication. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:vault_replication\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status!=\"Success\" OR lag_seconds > 120` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by primary_vault, dr_vault** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Vault Synchronization Failures**): table primary_vault, dr_vault, lag_seconds, errs\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (lag), Table (failed jobs), Status grid (primary × DR).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Vault Synchronization Failures",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cyberark",
                "hashicorp"
              ],
              "em": [
                "hashicorp_vault"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.17",
              "n": "Just-in-Time Access Request Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Analytics on JIT volume, self-approval, and after-hours patterns complements simple volume alerts (UC-9.4.10).",
              "t": "PAM JIT / Entra PIM logs",
              "d": "Request ID, requester, approver, time-to-approve, business justification field",
              "q": "index=pam sourcetype=\"jit:requests\"\n| eval same_approver=if(requester=approver,1,0)\n| eval after_hours=if(hour(_time) < 6 OR hour(_time) > 22,1,0)\n| stats count, sum(same_approver) as self_approvals, sum(after_hours) as off_hours by requester\n| where self_approvals > 0 OR off_hours > 5\n| sort -count",
              "m": "Require justification text; alert on empty justification with approval. Report monthly JIT metrics to IAM governance.",
              "z": "Table (risky patterns), Bar chart (self-approvals), Heatmap (hour × requester).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PAM JIT / Entra PIM logs.\n• Ensure the following data sources are available: Request ID, requester, approver, time-to-approve, business justification field.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequire justification text; alert on empty justification with approval. Report monthly JIT metrics to IAM governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"jit:requests\"\n| eval same_approver=if(requester=approver,1,0)\n| eval after_hours=if(hour(_time) < 6 OR hour(_time) > 22,1,0)\n| stats count, sum(same_approver) as self_approvals, sum(after_hours) as off_hours by requester\n| where self_approvals > 0 OR off_hours > 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Just-in-Time Access Request Analysis** — Analytics on JIT volume, self-approval, and after-hours patterns complements simple volume alerts (UC-9.4.10).\n\nDocumented **Data sources**: Request ID, requester, approver, time-to-approve, business justification field. **App/TA** (typical add-on context): PAM JIT / Entra PIM logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: jit:requests. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"jit:requests\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **same_approver** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **after_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by requester** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where self_approvals > 0 OR off_hours > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Just-in-Time Access Request Analysis** — Analytics on JIT volume, self-approval, and after-hours patterns complements simple volume alerts (UC-9.4.10).\n\nDocumented **Data sources**: Request ID, requester, approver, time-to-approve, business justification field. **App/TA** (typical add-on context): PAM JIT / Entra PIM logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (risky patterns), Bar chart (self-approvals), Heatmap (hour × requester).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Just-in-Time Access Request Analysis",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "m365"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.18",
              "n": "Emergency Break-Glass Account Usage",
              "c": "critical",
              "f": "beginner",
              "v": "Real-time paging for emergency-only vault accounts beyond standard break-glass (UC-9.4.3)—includes usage from non-SOC networks.",
              "t": "Splunk TA for CyberArk, AD Security logs",
              "d": "PAM checkout for accounts tagged `emergency_only`, 4624 for same sAMAccountName",
              "q": "index=pam sourcetype=\"cyberark:vault\" account_tag=\"emergency_only\"\n| lookup soc_networks subnet OUTPUT network_name\n| where isnull(network_name)\n| table _time, user, account, client_ip, action\n| sort -_time",
              "m": "Define emergency accounts in PAM and AD. Alert on any checkout or interactive logon; require post-incident report within SLA. Correlate with major incident tickets.",
              "z": "Timeline (emergency usage), Table (detail), Single value (events outside SOC net).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.002",
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk TA for CyberArk, AD Security logs.\n• Ensure the following data sources are available: PAM checkout for accounts tagged `emergency_only`, 4624 for same sAMAccountName.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine emergency accounts in PAM and AD. Alert on any checkout or interactive logon; require post-incident report within SLA. Correlate with major incident tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:vault\" account_tag=\"emergency_only\"\n| lookup soc_networks subnet OUTPUT network_name\n| where isnull(network_name)\n| table _time, user, account, client_ip, action\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Emergency Break-Glass Account Usage** — Real-time paging for emergency-only vault accounts beyond standard break-glass (UC-9.4.3)—includes usage from non-SOC networks.\n\nDocumented **Data sources**: PAM checkout for accounts tagged `emergency_only`, 4624 for same sAMAccountName. **App/TA** (typical add-on context): Splunk TA for CyberArk, AD Security logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:vault. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:vault\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(network_name)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Emergency Break-Glass Account Usage**): table _time, user, account, client_ip, action\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Emergency Break-Glass Account Usage** — Real-time paging for emergency-only vault accounts beyond standard break-glass (UC-9.4.3)—includes usage from non-SOC networks.\n\nDocumented **Data sources**: PAM checkout for accounts tagged `emergency_only`, 4624 for same sAMAccountName. **App/TA** (typical add-on context): Splunk TA for CyberArk, AD Security logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (emergency usage), Table (detail), Single value (events outside SOC net).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Emergency Break-Glass Account Usage",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.19",
              "n": "Shared Account Concurrent Login Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Shared privileged accounts used from two locations simultaneously indicate credential sharing or theft.",
              "t": "PAM session logs, bastion logs",
              "d": "Session start with same `target_account` and overlapping time ranges",
              "q": "index=pam sourcetype=\"cyberark:session\"\n| eval end_time=_time+duration_sec\n| sort target_account, _time\n| streamstats window=2 current(src) as ip1 next(src) as ip2 current(_time) as t1 next(_time) as t2 by target_account\n| where ip1!=ip2 AND t2 < end_time\n| table target_account, ip1, ip2, t1, t2",
              "m": "Tune for load-balanced egress using known NAT pools. Prefer per-user vaulted accounts to eliminate shared IDs.",
              "z": "Table (concurrent sessions), Timeline, Bar chart (accounts with overlap events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PAM session logs, bastion logs.\n• Ensure the following data sources are available: Session start with same `target_account` and overlapping time ranges.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune for load-balanced egress using known NAT pools. Prefer per-user vaulted accounts to eliminate shared IDs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:session\"\n| eval end_time=_time+duration_sec\n| sort target_account, _time\n| streamstats window=2 current(src) as ip1 next(src) as ip2 current(_time) as t1 next(_time) as t2 by target_account\n| where ip1!=ip2 AND t2 < end_time\n| table target_account, ip1, ip2, t1, t2\n```\n\nUnderstanding this SPL\n\n**Shared Account Concurrent Login Detection** — Shared privileged accounts used from two locations simultaneously indicate credential sharing or theft.\n\nDocumented **Data sources**: Session start with same `target_account` and overlapping time ranges. **App/TA** (typical add-on context): PAM session logs, bastion logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:session. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:session\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **end_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by target_account** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ip1!=ip2 AND t2 < end_time` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Shared Account Concurrent Login Detection**): table target_account, ip1, ip2, t1, t2\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Shared Account Concurrent Login Detection** — Shared privileged accounts used from two locations simultaneously indicate credential sharing or theft.\n\nDocumented **Data sources**: Session start with same `target_account` and overlapping time ranges. **App/TA** (typical add-on context): PAM session logs, bastion logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (concurrent sessions), Timeline, Bar chart (accounts with overlap events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Shared Account Concurrent Login Detection",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.4.20",
              "n": "PAM Agent Health Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "CPM, PSM, and PVWA agents offline block rotation and session capture; distinct from vault binary health (UC-9.4.6).",
              "t": "CyberArk component monitoring, SNMP/HEARTBEAT logs",
              "d": "Agent heartbeat, service status, `Get-PMPServerHealth`-style scripted input",
              "q": "index=pam sourcetype=\"cyberark:agent_heartbeat\"\n| stats latest(_time) as last_hb by agent_type, hostname\n| eval secs_since=now()-last_hb\n| where secs_since > 300\n| table agent_type, hostname, secs_since",
              "m": "Agents send heartbeat every 60s. Alert if no heartbeat >5 minutes. Auto-ticket remediation for PSM in production zones.",
              "z": "Status grid (agent × host), Single value (unhealthy agents), Line chart (heartbeat age).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Cisco Security Cloud](https://splunkbase.splunk.com/app/7404)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CyberArk component monitoring, SNMP/HEARTBEAT logs.\n• Ensure the following data sources are available: Agent heartbeat, service status, `Get-PMPServerHealth`-style scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAgents send heartbeat every 60s. Alert if no heartbeat >5 minutes. Auto-ticket remediation for PSM in production zones.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:agent_heartbeat\"\n| stats latest(_time) as last_hb by agent_type, hostname\n| eval secs_since=now()-last_hb\n| where secs_since > 300\n| table agent_type, hostname, secs_since\n```\n\nUnderstanding this SPL\n\n**PAM Agent Health Monitoring** — CPM, PSM, and PVWA agents offline block rotation and session capture; distinct from vault binary health (UC-9.4.6).\n\nDocumented **Data sources**: Agent heartbeat, service status, `Get-PMPServerHealth`-style scripted input. **App/TA** (typical add-on context): CyberArk component monitoring, SNMP/HEARTBEAT logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:agent_heartbeat. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:agent_heartbeat\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by agent_type, hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **secs_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where secs_since > 300` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PAM Agent Health Monitoring**): table agent_type, hostname, secs_since\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (agent × host), Single value (unhealthy agents), Line chart (heartbeat age).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — PAM Agent Health Monitoring",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cyberark",
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.2,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 20,
            "none": 0
          }
        },
        {
          "i": "9.5",
          "n": "Cloud Identity Providers — Okta & Duo",
          "u": [
            {
              "i": "9.5.1",
              "n": "Okta Authentication Failures",
              "c": "critical",
              "f": "beginner",
              "v": "Spikes in failed logins reveal credential attacks, misconfigured apps, or lockout conditions before accounts are fully compromised.",
              "t": "`Splunk_TA_okta`",
              "d": "`sourcetype=OktaIM2:log` (`user.authentication.sso`, `user.authentication.auth_via_*`)",
              "q": "index=okta sourcetype=\"OktaIM2:log\" outcome.result=\"FAILURE\"\n| bin _time span=15m\n| stats count by actor.alternateId, client.ipAddress, _time\n| where count > 5\n| sort -count",
              "m": "Ingest Okta System Log via the TA. Normalize `outcome.result` and actor fields. Baseline failures per user and IP; alert on threshold breaches and on impossible concurrent sources. Correlate with Duo denials if both are present.",
              "z": "Table (user, IP, failure count), Line chart (failures over time), Bar chart (top source IPs).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1110",
                "T1110.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`.\n• Ensure the following data sources are available: `sourcetype=OktaIM2:log` (`user.authentication.sso`, `user.authentication.auth_via_*`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Okta System Log via the TA. Normalize `outcome.result` and actor fields. Baseline failures per user and IP; alert on threshold breaches and on impossible concurrent sources. Correlate with Duo denials if both are present.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" outcome.result=\"FAILURE\"\n| bin _time span=15m\n| stats count by actor.alternateId, client.ipAddress, _time\n| where count > 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Okta Authentication Failures** — Spikes in failed logins reveal credential attacks, misconfigured apps, or lockout conditions before accounts are fully compromised.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`user.authentication.sso`, `user.authentication.auth_via_*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by actor.alternateId, client.ipAddress, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=15m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Okta Authentication Failures** — Spikes in failed logins reveal credential attacks, misconfigured apps, or lockout conditions before accounts are fully compromised.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`user.authentication.sso`, `user.authentication.auth_via_*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, IP, failure count), Line chart (failures over time), Bar chart (top source IPs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Okta sign-in records so we can see failed logins, risky sessions, and admin changes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=15m | sort - count",
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.2",
              "n": "Okta MFA Bypass Attempts",
              "c": "critical",
              "f": "intermediate",
              "v": "Attempts to skip or weaken MFA (policy gaps, risky grant flows) are a direct path to account takeover; monitoring policy evaluation outcomes closes that gap.",
              "t": "`Splunk_TA_okta`",
              "d": "`sourcetype=OktaIM2:log` (`policy.evaluate_sign_on`, `user.authentication`)",
              "q": "index=okta sourcetype=\"OktaIM2:log\" eventType=\"policy.evaluate_sign_on\"\n| where outcome.reason IN (\"MFA_NOT_ENROLLED\",\"FACTOR_NOT_USED\",\"NONE\")\n| stats count by actor.alternateId, client.ipAddress, outcome.reason\n| where count > 0",
              "m": "Track sign-on policy evaluations where MFA was not satisfied or only password was used. Tune to your org’s allowed “password-only” apps and break-glass accounts. Alert on unexpected ALLOW without MFA for protected apps. Review `policy.evaluate_sign_on` with `outcome.result` and debug fields.",
              "z": "Table (user, IP, reason), Timeline of policy events, Single value (bypass events per hour).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1556"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`.\n• Ensure the following data sources are available: `sourcetype=OktaIM2:log` (`policy.evaluate_sign_on`, `user.authentication`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack sign-on policy evaluations where MFA was not satisfied or only password was used. Tune to your org’s allowed “password-only” apps and break-glass accounts. Alert on unexpected ALLOW without MFA for protected apps. Review `policy.evaluate_sign_on` with `outcome.result` and debug fields.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"policy.evaluate_sign_on\"\n| where outcome.reason IN (\"MFA_NOT_ENROLLED\",\"FACTOR_NOT_USED\",\"NONE\")\n| stats count by actor.alternateId, client.ipAddress, outcome.reason\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Okta MFA Bypass Attempts** — Attempts to skip or weaken MFA (policy gaps, risky grant flows) are a direct path to account takeover; monitoring policy evaluation outcomes closes that gap.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`policy.evaluate_sign_on`, `user.authentication`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where outcome.reason IN (\"MFA_NOT_ENROLLED\",\"FACTOR_NOT_USED\",\"NONE\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by actor.alternateId, client.ipAddress, outcome.reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Okta MFA Bypass Attempts** — Attempts to skip or weaken MFA (policy gaps, risky grant flows) are a direct path to account takeover; monitoring policy evaluation outcomes closes that gap.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`policy.evaluate_sign_on`, `user.authentication`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, IP, reason), Timeline of policy events, Single value (bypass events per hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Okta sign-in records so we can see failed logins, risky sessions, and admin changes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.3",
              "n": "Okta Suspicious Sign-In Activity",
              "c": "high",
              "f": "intermediate",
              "v": "Okta threat signals and anomalous sessions (new device, new country, Tor) surface account takeovers before lateral movement.",
              "t": "`Splunk_TA_okta`",
              "d": "`sourcetype=OktaIM2:log` (`security.threat.detected`, `user.session.start`)",
              "q": "index=okta sourcetype=\"OktaIM2:log\" (eventType=\"security.threat.detected\" OR severity=\"WARN\")\n| stats count by actor.alternateId, client.ipAddress, outcome.result, displayMessage\n| where count > 0\n| sort -count",
              "m": "Forward full threat and session events. Map `severity`, `outcome`, and Okta risk context. Create alerts for `security.threat.detected` and for sessions with risk scores above your baseline. Integrate with SOAR for step-up auth.",
              "z": "Table (user, IP, message), Map (sign-in geo), Line chart (threat events per day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`.\n• Ensure the following data sources are available: `sourcetype=OktaIM2:log` (`security.threat.detected`, `user.session.start`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward full threat and session events. Map `severity`, `outcome`, and Okta risk context. Create alerts for `security.threat.detected` and for sessions with risk scores above your baseline. Integrate with SOAR for step-up auth.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" (eventType=\"security.threat.detected\" OR severity=\"WARN\")\n| stats count by actor.alternateId, client.ipAddress, outcome.result, displayMessage\n| where count > 0\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Okta Suspicious Sign-In Activity** — Okta threat signals and anomalous sessions (new device, new country, Tor) surface account takeovers before lateral movement.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`security.threat.detected`, `user.session.start`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by actor.alternateId, client.ipAddress, outcome.result, displayMessage** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Okta Suspicious Sign-In Activity** — Okta threat signals and anomalous sessions (new device, new country, Tor) surface account takeovers before lateral movement.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`security.threat.detected`, `user.session.start`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, IP, message), Map (sign-in geo), Line chart (threat events per day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Okta sign-in records so we can see failed logins, risky sessions, and admin changes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.4",
              "n": "Okta Admin Console Changes",
              "c": "critical",
              "f": "intermediate",
              "v": "Changes in the admin console affect global security posture; auditing who changed what supports SOC2/ISO investigations and insider-threat programs.",
              "t": "`Splunk_TA_okta`",
              "d": "`sourcetype=OktaIM2:log` (`system.*`, `user.session.access_admin_app`, `resource.*`)",
              "q": "index=okta sourcetype=\"OktaIM2:log\" (eventType=\"user.session.access_admin_app\" OR like(eventType,\"system.org%\"))\n| stats count by actor.alternateId, eventType, client.ipAddress, displayMessage\n| sort -count",
              "m": "Capture all admin app sessions and high-privilege system events. Restrict alerts to production Okta orgs; exclude known automation actors. Store lookups for approved admins and compare. Alert on first-time admin access from new ASN or country.",
              "z": "Timeline (admin actions), Table (actor, event, IP), Bar chart (events by admin user).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098.003",
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`.\n• Ensure the following data sources are available: `sourcetype=OktaIM2:log` (`system.*`, `user.session.access_admin_app`, `resource.*`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture all admin app sessions and high-privilege system events. Restrict alerts to production Okta orgs; exclude known automation actors. Store lookups for approved admins and compare. Alert on first-time admin access from new ASN or country.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" (eventType=\"user.session.access_admin_app\" OR like(eventType,\"system.org%\"))\n| stats count by actor.alternateId, eventType, client.ipAddress, displayMessage\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Okta Admin Console Changes** — Changes in the admin console affect global security posture; auditing who changed what supports SOC2/ISO investigations and insider-threat programs.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`system.*`, `user.session.access_admin_app`, `resource.*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by actor.alternateId, eventType, client.ipAddress, displayMessage** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Okta Admin Console Changes** — Changes in the admin console affect global security posture; auditing who changed what supports SOC2/ISO investigations and insider-threat programs.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`system.*`, `user.session.access_admin_app`, `resource.*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (admin actions), Table (actor, event, IP), Bar chart (events by admin user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Okta sign-in records so we can see failed logins, risky sessions, and admin changes in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.5",
              "n": "Okta Policy Modifications",
              "c": "critical",
              "f": "intermediate",
              "v": "Sign-on, MFA, and password policy edits can weaken security org-wide; detecting unauthorized or out-of-window changes is essential for governance.",
              "t": "`Splunk_TA_okta`",
              "d": "`sourcetype=OktaIM2:log` (`policy.*`)",
              "q": "index=okta sourcetype=\"OktaIM2:log\"\n| where like(eventType,\"policy.lifecycle%\") OR like(eventType,\"policy.rule%\")\n| stats count by actor.alternateId, eventType, target{}.displayName\n| sort -count",
              "m": "Ingest policy lifecycle and rule events. Correlate with change tickets. Alert on any policy change outside maintenance windows or from non-admin service accounts. Snapshot policy names in a lookup for critical resources.",
              "z": "Table (policy, actor, target), Timeline (policy changes), Single value (changes in last 24h).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1562.001",
                "T1098.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`.\n• Ensure the following data sources are available: `sourcetype=OktaIM2:log` (`policy.*`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest policy lifecycle and rule events. Correlate with change tickets. Alert on any policy change outside maintenance windows or from non-admin service accounts. Snapshot policy names in a lookup for critical resources.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\"\n| where like(eventType,\"policy.lifecycle%\") OR like(eventType,\"policy.rule%\")\n| stats count by actor.alternateId, eventType, target{}.displayName\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Okta Policy Modifications** — Sign-on, MFA, and password policy edits can weaken security org-wide; detecting unauthorized or out-of-window changes is essential for governance.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`policy.*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where like(eventType,\"policy.lifecycle%\") OR like(eventType,\"policy.rule%\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by actor.alternateId, eventType, target{}.displayName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Okta Policy Modifications** — Sign-on, MFA, and password policy edits can weaken security org-wide; detecting unauthorized or out-of-window changes is essential for governance.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`policy.*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (policy, actor, target), Timeline (policy changes), Single value (changes in last 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Okta sign-in records so we can see failed logins, risky sessions, and admin changes in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.6",
              "n": "Okta New Admin Creation",
              "c": "critical",
              "f": "beginner",
              "v": "New super-admin or role assignments are high-value targets for attackers; immediate notification enables rapid validation.",
              "t": "`Splunk_TA_okta`",
              "d": "`sourcetype=OktaIM2:log` (`user.privilege.grant`, `group.privilege*`)",
              "q": "index=okta sourcetype=\"OktaIM2:log\"\n| where eventType=\"user.privilege.grant\" OR like(eventType,\"group.privilege%\")\n| eval tgt=lower(mvjoin('target{}.displayName',\" \"))\n| where like(tgt,\"%admin%\") OR like(tgt,\"%super%\")\n| table _time, actor.alternateId, target{}.displayName, target{}.type",
              "m": "Parse `target` for admin roles and groups. Use lookups for approved role-assignment paths. Alert on any new admin grant or role elevation. Include `actor` and `client.ipAddress` for triage.",
              "z": "Table (who, what role, when), Timeline, Single value (admin grants today).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`.\n• Ensure the following data sources are available: `sourcetype=OktaIM2:log` (`user.privilege.grant`, `group.privilege*`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse `target` for admin roles and groups. Use lookups for approved role-assignment paths. Alert on any new admin grant or role elevation. Include `actor` and `client.ipAddress` for triage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\"\n| where eventType=\"user.privilege.grant\" OR like(eventType,\"group.privilege%\")\n| eval tgt=lower(mvjoin('target{}.displayName',\" \"))\n| where like(tgt,\"%admin%\") OR like(tgt,\"%super%\")\n| table _time, actor.alternateId, target{}.displayName, target{}.type\n```\n\nUnderstanding this SPL\n\n**Okta New Admin Creation** — New super-admin or role assignments are high-value targets for attackers; immediate notification enables rapid validation.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`user.privilege.grant`, `group.privilege*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where eventType=\"user.privilege.grant\" OR like(eventType,\"group.privilege%\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **tgt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where like(tgt,\"%admin%\") OR like(tgt,\"%super%\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Okta New Admin Creation**): table _time, actor.alternateId, target{}.displayName, target{}.type\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Okta New Admin Creation** — New super-admin or role assignments are high-value targets for attackers; immediate notification enables rapid validation.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`user.privilege.grant`, `group.privilege*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (who, what role, when), Timeline, Single value (admin grants today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Okta sign-in records so we can see failed logins, risky sessions, and admin changes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.7",
              "n": "Duo Authentication Denials",
              "c": "high",
              "f": "beginner",
              "v": "Denied logins (fraud, policy, or lockout) indicate attacks or misconfigurations; volume and user patterns guide response.",
              "t": "Cisco Duo TA",
              "d": "`sourcetype=duo:authentication`",
              "q": "index=duo sourcetype=\"duo:authentication\" result=\"deny\"\n| bin _time span=1h\n| stats count by user, ip, application\n| where count > 10\n| sort -count",
              "m": "Ingest Duo Authentication API or proxy logs with the TA. Map `result`, `reason`, `factor`, and `application`. Baseline per-user and global deny rates. Alert on spikes and on denies from many IPs for one user.",
              "z": "Table (user, IP, count), Line chart (denials over time), Bar chart (denials by application).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1110"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco Duo TA.\n• Ensure the following data sources are available: `sourcetype=duo:authentication`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Duo Authentication API or proxy logs with the TA. Map `result`, `reason`, `factor`, and `application`. Baseline per-user and global deny rates. Alert on spikes and on denies from many IPs for one user.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=duo sourcetype=\"duo:authentication\" result=\"deny\"\n| bin _time span=1h\n| stats count by user, ip, application\n| where count > 10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Duo Authentication Denials** — Denied logins (fraud, policy, or lockout) indicate attacks or misconfigurations; volume and user patterns guide response.\n\nDocumented **Data sources**: `sourcetype=duo:authentication`. **App/TA** (typical add-on context): Cisco Duo TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: duo; **sourcetype**: duo:authentication. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=duo, sourcetype=\"duo:authentication\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by user, ip, application** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Duo Authentication Denials** — Denied logins (fraud, policy, or lockout) indicate attacks or misconfigurations; volume and user patterns guide response.\n\nDocumented **Data sources**: `sourcetype=duo:authentication`. **App/TA** (typical add-on context): Cisco Duo TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, IP, count), Line chart (denials over time), Bar chart (denials by application).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Duo’s records so we can see who passed or failed the extra check, and when devices or policies block access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user span=1h | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.8",
              "n": "Duo Device Trust Posture",
              "c": "high",
              "f": "intermediate",
              "v": "Non-compliant or out-of-date devices that still attempt access signal policy gaps and endpoint risk exposure.",
              "t": "Cisco Duo TA",
              "d": "`sourcetype=duo:authentication`, `sourcetype=duo:telephony` (device trust), Duo admin logs",
              "q": "index=duo sourcetype=\"duo:authentication\"\n| where device_trust_level!=\"trusted\" OR like(lower(_raw),\"%unmanaged%\")\n| stats count by user, device, device_trust_level, application\n| where count > 0\n| sort -count",
              "m": "Ensure device fields (OS, encryption, posture) are extracted from Duo or endpoint telemetry. Alert on repeated access from untrusted posture or when trust level changes. Pair with Duo Device Trust policies.",
              "z": "Table (user, device, trust level), Pie chart (trusted vs untrusted attempts), Line chart (untrusted attempts over time).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco Duo TA.\n• Ensure the following data sources are available: `sourcetype=duo:authentication`, `sourcetype=duo:telephony` (device trust), Duo admin logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure device fields (OS, encryption, posture) are extracted from Duo or endpoint telemetry. Alert on repeated access from untrusted posture or when trust level changes. Pair with Duo Device Trust policies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=duo sourcetype=\"duo:authentication\"\n| where device_trust_level!=\"trusted\" OR like(lower(_raw),\"%unmanaged%\")\n| stats count by user, device, device_trust_level, application\n| where count > 0\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Duo Device Trust Posture** — Non-compliant or out-of-date devices that still attempt access signal policy gaps and endpoint risk exposure.\n\nDocumented **Data sources**: `sourcetype=duo:authentication`, `sourcetype=duo:telephony` (device trust), Duo admin logs. **App/TA** (typical add-on context): Cisco Duo TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: duo; **sourcetype**: duo:authentication. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=duo, sourcetype=\"duo:authentication\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where device_trust_level!=\"trusted\" OR like(lower(_raw),\"%unmanaged%\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, device, device_trust_level, application** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Duo Device Trust Posture** — Non-compliant or out-of-date devices that still attempt access signal policy gaps and endpoint risk exposure.\n\nDocumented **Data sources**: `sourcetype=duo:authentication`, `sourcetype=duo:telephony` (device trust), Duo admin logs. **App/TA** (typical add-on context): Cisco Duo TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, device, trust level), Pie chart (trusted vs untrusted attempts), Line chart (untrusted attempts over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Duo’s records so we can see who passed or failed the extra check, and when devices or policies block access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.9",
              "n": "Duo Enrollment Anomalies",
              "c": "medium",
              "f": "intermediate",
              "v": "Sudden bulk enrollments or enrollments from unusual locations can indicate attacker-driven device registration or help-desk abuse.",
              "t": "Cisco Duo TA",
              "d": "`sourcetype=duo:admin`, `sourcetype=duo:authentication` (enrollment events)",
              "q": "index=duo sourcetype=\"duo:admin\" event_type=\"enrollment\"\n| bin _time span=15m\n| stats dc(user) as new_users by _time\n| where new_users > 20",
              "m": "Ingest Duo admin enrollment events. Baseline enrollment rate per hour per location. Alert on spikes and on enrollments outside business hours. Correlate with HR onboarding feeds when available.",
              "z": "Line chart (enrollments per hour), Table (spike windows), Bar chart (enrollments by integration).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1098.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco Duo TA.\n• Ensure the following data sources are available: `sourcetype=duo:admin`, `sourcetype=duo:authentication` (enrollment events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Duo admin enrollment events. Baseline enrollment rate per hour per location. Alert on spikes and on enrollments outside business hours. Correlate with HR onboarding feeds when available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=duo sourcetype=\"duo:admin\" event_type=\"enrollment\"\n| bin _time span=15m\n| stats dc(user) as new_users by _time\n| where new_users > 20\n```\n\nUnderstanding this SPL\n\n**Duo Enrollment Anomalies** — Sudden bulk enrollments or enrollments from unusual locations can indicate attacker-driven device registration or help-desk abuse.\n\nDocumented **Data sources**: `sourcetype=duo:admin`, `sourcetype=duo:authentication` (enrollment events). **App/TA** (typical add-on context): Cisco Duo TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: duo; **sourcetype**: duo:admin. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=duo, sourcetype=\"duo:admin\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where new_users > 20` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (enrollments per hour), Table (spike windows), Bar chart (enrollments by integration).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Duo’s records so we can see who passed or failed the extra check, and when devices or policies block access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.10",
              "n": "Federated SSO Token Abuse",
              "c": "critical",
              "f": "advanced",
              "v": "Excessive OAuth/OIDC grants, refresh-token reuse, or token minting from new clients can indicate session theft or malicious automation.",
              "t": "`Splunk_TA_okta`",
              "d": "`sourcetype=OktaIM2:log` (`app.oauth2.*`, `app.oauth2.token.*`)",
              "q": "index=okta sourcetype=\"OktaIM2:log\" eventType=\"app.oauth2.token.grant\"\n| stats count by actor.alternateId, client.ipAddress\n| where count > 100\n| sort -count",
              "m": "Track token grants per user per IP and client. Use `transaction` or `streamstats` to detect rapid grants. Alert on unusual client IDs or scopes. Correlate with OAuth abuse detections from IdP.",
              "z": "Table (user, IP, grant count), Line chart (grants per minute), Bar chart (token grants by client).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1528",
                "T1550.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`.\n• Ensure the following data sources are available: `sourcetype=OktaIM2:log` (`app.oauth2.*`, `app.oauth2.token.*`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack token grants per user per IP and client. Use `transaction` or `streamstats` to detect rapid grants. Alert on unusual client IDs or scopes. Correlate with OAuth abuse detections from IdP.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"app.oauth2.token.grant\"\n| stats count by actor.alternateId, client.ipAddress\n| where count > 100\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Federated SSO Token Abuse** — Excessive OAuth/OIDC grants, refresh-token reuse, or token minting from new clients can indicate session theft or malicious automation.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`app.oauth2.*`, `app.oauth2.token.*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by actor.alternateId, client.ipAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Federated SSO Token Abuse** — Excessive OAuth/OIDC grants, refresh-token reuse, or token minting from new clients can indicate session theft or malicious automation.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`app.oauth2.*`, `app.oauth2.token.*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, IP, grant count), Line chart (grants per minute), Bar chart (token grants by client).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Federated SSO Token Abuse",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.11",
              "n": "Impossible Travel Detection (Okta)",
              "c": "high",
              "f": "intermediate",
              "v": "Two successful sessions from geolocations that cannot be reached in the elapsed time indicate credential theft or shared accounts.",
              "t": "`Splunk_TA_okta`",
              "d": "`sourcetype=OktaIM2:log` (`user.session.start`, `user.authentication.sso`)",
              "q": "index=okta sourcetype=\"OktaIM2:log\" outcome.result=\"SUCCESS\" eventType=\"user.authentication.sso\"\n| sort 0 actor.alternateId _time\n| streamstats window=1 last(client.geographicalContext.country) as prev_country last(_time) as prev_time last(client.ipAddress) as prev_ip current(client.geographicalContext.country) as country by actor.alternateId\n| eval delta_sec=_time-prev_time\n| where delta_sec > 0 AND delta_sec < 3600 AND country!=prev_country AND isnotnull(prev_country)\n| table _time, actor.alternateId, prev_country, country, delta_sec, prev_ip, client.ipAddress",
              "m": "Use Okta geo fields (or enrich IP with `iplocation`). Tune minimum distance and maximum time windows. Exclude VPN and satellite egress via ASN lookups. Combine with Okta’s built-in impossible travel if licensed.",
              "z": "Table (user, country A → B, delta), Map (sequential points), Single value (impossible travel count per day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`.\n• Ensure the following data sources are available: `sourcetype=OktaIM2:log` (`user.session.start`, `user.authentication.sso`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Okta geo fields (or enrich IP with `iplocation`). Tune minimum distance and maximum time windows. Exclude VPN and satellite egress via ASN lookups. Combine with Okta’s built-in impossible travel if licensed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" outcome.result=\"SUCCESS\" eventType=\"user.authentication.sso\"\n| sort 0 actor.alternateId _time\n| streamstats window=1 last(client.geographicalContext.country) as prev_country last(_time) as prev_time last(client.ipAddress) as prev_ip current(client.geographicalContext.country) as country by actor.alternateId\n| eval delta_sec=_time-prev_time\n| where delta_sec > 0 AND delta_sec < 3600 AND country!=prev_country AND isnotnull(prev_country)\n| table _time, actor.alternateId, prev_country, country, delta_sec, prev_ip, client.ipAddress\n```\n\nUnderstanding this SPL\n\n**Impossible Travel Detection (Okta)** — Two successful sessions from geolocations that cannot be reached in the elapsed time indicate credential theft or shared accounts.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`user.session.start`, `user.authentication.sso`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by actor.alternateId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **delta_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delta_sec > 0 AND delta_sec < 3600 AND country!=prev_country AND isnotnull(prev_country)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Impossible Travel Detection (Okta)**): table _time, actor.alternateId, prev_country, country, delta_sec, prev_ip, client.ipAddress\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Impossible Travel Detection (Okta)** — Two successful sessions from geolocations that cannot be reached in the elapsed time indicate credential theft or shared accounts.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`user.session.start`, `user.authentication.sso`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, country A → B, delta), Map (sequential points), Single value (impossible travel count per day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We notice sign-ins that do not line up with how fast a person could move so we can catch stolen sessions or account sharing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.12",
              "n": "Okta API Rate Limit Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "Hitting rate limits breaks automation, integrations, and provisioning; trending usage prevents surprise throttling during peak loads.",
              "t": "`Splunk_TA_okta`, custom HEC ingestion of API responses",
              "d": "`sourcetype=OktaIM2:log` (`system.*rate*`), API response headers ingested via scripted input",
              "q": "index=okta (sourcetype=\"okta:api\" OR sourcetype=\"OktaIM2:log\")\n| search http_status=429 OR like(lower(_raw),\"%rate limit%\")\n| stats count by client_id, endpoint, http_status\n| where count > 0\n| sort -count",
              "m": "Log API calls from integrations with `X-Rate-Limit-*` headers or ingest Okta rate-limit system events. Alert on HTTP 429 or sustained high utilization. Work with app owners to add backoff and caching.",
              "z": "Line chart (429s over time), Table (client, endpoint), Gauge (rate limit remaining %).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`, custom HEC ingestion of API responses.\n• Ensure the following data sources are available: `sourcetype=OktaIM2:log` (`system.*rate*`), API response headers ingested via scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog API calls from integrations with `X-Rate-Limit-*` headers or ingest Okta rate-limit system events. Alert on HTTP 429 or sustained high utilization. Work with app owners to add backoff and caching.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta (sourcetype=\"okta:api\" OR sourcetype=\"OktaIM2:log\")\n| search http_status=429 OR like(lower(_raw),\"%rate limit%\")\n| stats count by client_id, endpoint, http_status\n| where count > 0\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Okta API Rate Limit Monitoring** — Hitting rate limits breaks automation, integrations, and provisioning; trending usage prevents surprise throttling during peak loads.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`system.*rate*`), API response headers ingested via scripted input. **App/TA** (typical add-on context): `Splunk_TA_okta`, custom HEC ingestion of API responses. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: okta:api, OktaIM2:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"okta:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by client_id, endpoint, http_status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (429s over time), Table (client, endpoint), Gauge (rate limit remaining %).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
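\n\nOptional utilization trending (a minimal sketch: `ratelimit_limit`, `ratelimit_remaining`, and `endpoint` are assumed field names from your header ingestion, so rename them to match your extractions):\n\n```spl\nindex=okta sourcetype=\"okta:api\" ratelimit_limit=* ratelimit_remaining=*\n| eval pct_used = round((ratelimit_limit - ratelimit_remaining) * 100 / ratelimit_limit, 2)\n| stats max(pct_used) as peak_pct_used by endpoint\n| where peak_pct_used > 80\n| sort -peak_pct_used\n```\n\nAlerting near 80% utilization leaves integrations room to back off before Okta starts returning 429s.",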
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Okta sign-in records so we can see failed logins, risky sessions, and admin changes in one place.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.13",
              "n": "Okta App Assignment Changes",
              "c": "high",
              "f": "beginner",
              "v": "New access to sensitive SaaS apps is a common attack path; monitoring assignments supports least-privilege and access reviews.",
              "t": "`Splunk_TA_okta`",
              "d": "`sourcetype=OktaIM2:log` (`application.user_membership.*`, `group.user_membership.*`)",
              "q": "index=okta sourcetype=\"OktaIM2:log\" (eventType=\"application.user_membership.add\" OR eventType=\"group.user_membership.add\")\n| stats count by actor.alternateId, target{}.displayName, target{}.type\n| sort -_time",
              "m": "Capture adds/removes for apps and groups tied to apps. Use lookups for crown-jewel applications. Alert on assignment to privileged groups. Include `actor` for service-account vs human.",
              "z": "Table (app, user, actor), Timeline (assignments), Bar chart (assignments by app).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`.\n• Ensure the following data sources are available: `sourcetype=OktaIM2:log` (`application.user_membership.*`, `group.user_membership.*`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture adds/removes for apps and groups tied to apps. Use lookups for crown-jewel applications. Alert on assignment to privileged groups. Include `actor` for service-account vs human.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" (eventType=\"application.user_membership.add\" OR eventType=\"group.user_membership.add\")\n| stats count by actor.alternateId, target{}.displayName, target{}.type\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Okta App Assignment Changes** — New access to sensitive SaaS apps is a common attack path; monitoring assignments supports least-privilege and access reviews.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`application.user_membership.*`, `group.user_membership.*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by actor.alternateId, target{}.displayName, target{}.type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Okta App Assignment Changes** — New access to sensitive SaaS apps is a common attack path; monitoring assignments supports least-privilege and access reviews.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`application.user_membership.*`, `group.user_membership.*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (app, user, actor), Timeline (assignments), Bar chart (assignments by app).",
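\n\nOptional crown-jewel filter (a minimal sketch: `crown_jewel_apps` is a hypothetical lookup you would maintain, keyed on app display name with a `tier` column):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"application.user_membership.add\"\n| rename target{}.displayName as target_name\n| lookup crown_jewel_apps app_name as target_name OUTPUT tier\n| where tier=\"crown_jewel\"\n| stats count by actor.alternateId, target_name\n```\n\nBecause `target{}` is multivalued (it carries both the user and the app), validate the rename and the lookup behavior against sample events before alerting on this.",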
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Okta sign-in records so we can see failed logins, risky sessions, and admin changes in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "okta"
              ],
              "em": [],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.14",
              "n": "Duo Push Fraud Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Push bombing and fraudulent approve taps are common MFA bypass techniques; correlating push volume and user behavior stops approval fatigue attacks.",
              "t": "Cisco Duo TA",
              "d": "`sourcetype=duo:authentication`",
              "q": "index=duo sourcetype=\"duo:authentication\" factor=\"push\"\n| bin _time span=5m\n| stats count by user, _time\n| where count > 5\n| sort -count",
              "m": "Track push attempts per user per short window. Alert on high-frequency pushes (fatigue) or pushes with `result=\"fraud\"` or Duo fraud reasons. Integrate with Duo Risk-Based Authentication. Pair with Okta MFA events for dual IdP visibility.",
              "z": "Table (user, push count in window), Line chart (pushes per user), Timeline (fraud-marked events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1556"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco Duo TA.\n• Ensure the following data sources are available: `sourcetype=duo:authentication`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack push attempts per user per short window. Alert on high-frequency pushes (fatigue) or pushes with `result=\"fraud\"` or Duo fraud reasons. Integrate with Duo Risk-Based Authentication. Pair with Okta MFA events for dual IdP visibility.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=duo sourcetype=\"duo:authentication\" factor=\"push\"\n| bin _time span=5m\n| stats count by user, _time\n| where count > 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Duo Push Fraud Detection** — Push bombing and fraudulent approve taps are common MFA bypass techniques; correlating push volume and user behavior stops approval fatigue attacks.\n\nDocumented **Data sources**: `sourcetype=duo:authentication`. **App/TA** (typical add-on context): Cisco Duo TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: duo; **sourcetype**: duo:authentication. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=duo, sourcetype=\"duo:authentication\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by user, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Duo Push Fraud Detection** — Push bombing and fraudulent approve taps are common MFA bypass techniques; correlating push volume and user behavior stops approval fatigue attacks.\n\nDocumented **Data sources**: `sourcetype=duo:authentication`. **App/TA** (typical add-on context): Cisco Duo TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, push count in window), Line chart (pushes per user), Timeline (fraud-marked events).",
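\n\nOptional fraud-marker search (a minimal sketch: the `result` values and the `ip` field name depend on your Duo feed, so confirm them against sample events):\n\n```spl\nindex=duo sourcetype=\"duo:authentication\" factor=\"push\" result=\"fraud\"\n| stats count as fraud_reports, values(ip) as source_ips, latest(_time) as last_report by user\n| convert ctime(last_report)\n| sort -fraud_reports\n```\n\nEven a single user-reported fraudulent push usually warrants an immediate credential reset, so consider alerting from the first match rather than waiting for a threshold.",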
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Duo’s records so we can see who passed or failed the extra check, and when devices or policies block access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user span=5m | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.5.15",
              "n": "Okta User Lifecycle Events (Provisioning / Deprovisioning)",
              "c": "high",
              "f": "intermediate",
              "v": "Orphaned accounts, failed deprovisions, and unexpected creates drive audit findings and residual access after employee exit.",
              "t": "`Splunk_TA_okta`",
              "d": "`sourcetype=OktaIM2:log` (`user.lifecycle.*`, `user.account.*`)",
              "q": "index=okta sourcetype=\"OktaIM2:log\"\n| where like(eventType,\"user.lifecycle%\")\n| stats count by actor.alternateId, eventType, target{}.displayName\n| sort -_time",
              "m": "Align event types with HRIS-driven lifecycle (create, activate, deactivate). Alert on deactivations that fail or retry, and on manual creates outside HR correlation. Feed summaries to access reviews.",
              "z": "Table (event, target user, actor), Line chart (lifecycle events per day), Bar chart (events by type).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1136.003",
                "T1531"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_okta`.\n• Ensure the following data sources are available: `sourcetype=OktaIM2:log` (`user.lifecycle.*`, `user.account.*`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign event types with HRIS-driven lifecycle (create, activate, deactivate). Alert on deactivations that fail or retry, and on manual creates outside HR correlation. Feed summaries to access reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\"\n| where like(eventType,\"user.lifecycle%\")\n| stats count by actor.alternateId, eventType, target{}.displayName\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Okta User Lifecycle Events (Provisioning / Deprovisioning)** — Orphaned accounts, failed deprovisions, and unexpected creates drive audit findings and residual access after employee exit.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`user.lifecycle.*`, `user.account.*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: OktaIM2:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"OktaIM2:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where like(eventType,\"user.lifecycle%\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by actor.alternateId, eventType, target{}.displayName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Okta User Lifecycle Events (Provisioning / Deprovisioning)** — Orphaned accounts, failed deprovisions, and unexpected creates drive audit findings and residual access after employee exit.\n\nDocumented **Data sources**: `sourcetype=OktaIM2:log` (`user.lifecycle.*`, `user.account.*`). **App/TA** (typical add-on context): `Splunk_TA_okta`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (event, target user, actor), Line chart (lifecycle events per day), Bar chart (events by type).",
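\n\nOptional failed-deactivation check (a minimal sketch: `outcome.result` follows the Okta System Log schema, but verify the extraction in your events):\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"user.lifecycle.deactivate\" outcome.result!=\"SUCCESS\"\n| stats count as failed_attempts, latest(_time) as last_attempt by target{}.alternateId\n| convert ctime(last_attempt)\n| sort -failed_attempts\n```\n\nA deactivation that keeps failing is exactly the residual-access gap this use case exists to close, so route these results to the IAM team rather than a generic queue.",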
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Okta sign-in records so we can see failed logins, risky sessions, and admin changes in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "BAIT/KAIT",
                  "v": "Aug 2021",
                  "cl": "§5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that BAIT/KAIT §5 (Identity & access management) is enforced — Splunk UC-9.5.15: Okta User Lifecycle Events (Provisioning / Deprovisioning).",
                  "ea": "Saved search 'UC-9.5.15' running on sourcetype=OktaIM2:log (user.lifecycle.*, user.account.*), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bafin.de/SharedDocs/Veroeffentlichungen/EN/Rundschreiben/2021/rs_1021_BAIT_en.html"
                },
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "AC-2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that FedRAMP AC-2 (Account management) is enforced — Splunk UC-9.5.15: Okta User Lifecycle Events (Provisioning / Deprovisioning).",
                  "ea": "Saved search 'UC-9.5.15' running on sourcetype=OktaIM2:log (user.lifecycle.*, user.account.*), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "01.b",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HITRUST 01.b (User access management) is enforced — Splunk UC-9.5.15: Okta User Lifecycle Events (Provisioning / Deprovisioning).",
                  "ea": "Saved search 'UC-9.5.15' running on sourcetype=OktaIM2:log (user.lifecycle.*, user.account.*), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                },
                {
                  "r": "IT-Grundschutz",
                  "v": "2023 Edition",
                  "cl": "ORP.4",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that IT-Grundschutz ORP.4 (Identity & access) is enforced — Splunk UC-9.5.15: Okta User Lifecycle Events (Provisioning / Deprovisioning).",
                  "ea": "Saved search 'UC-9.5.15' running on sourcetype=OktaIM2:log (user.lifecycle.*, user.account.*), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Standards-und-Zertifizierung/IT-Grundschutz/it-grundschutz_node.html"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.Provisioning",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOX-ITGC ITGC.AccessMgmt.Provisioning (User provisioning) is enforced — Splunk UC-9.5.15: Okta User Lifecycle Events (Provisioning / Deprovisioning).",
                  "ea": "Saved search 'UC-9.5.15' running on sourcetype=OktaIM2:log (user.lifecycle.*, user.account.*), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "ta_link": {
                "name": "Okta Identity Cloud Add-on for Splunk",
                "id": 4412,
                "url": "https://splunkbase.splunk.com/app/4412"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 15,
            "none": 0
          }
        },
        {
          "i": "9.6",
          "n": "Endpoint & Mobile Device Management",
          "u": [
            {
              "i": "9.6.1",
              "n": "Device Compliance Status and Policy Enforcement",
              "c": "high",
              "f": "intermediate",
              "v": "Ensures all managed devices comply with security policies and configuration standards.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api compliance_status=*`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" compliance_status=*\n| stats count as total_devices,\n        count(eval(compliance_status IN (\"noncompliant\",\"unknown\"))) as noncompliant_count\n        by os_type, compliance_reason\n| eval compliance_pct=round(noncompliant_count*100/total_devices, 2)\n| where noncompliant_count > 0",
              "m": "Query device compliance status from SM API. Alert on noncompliance.",
              "z": "Compliance status table; compliance percentage gauge; noncompliant device list.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api compliance_status=*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery device compliance status from SM API. Alert on noncompliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" compliance_status=*\n| stats count as total_devices,\n        count(eval(compliance_status IN (\"noncompliant\",\"unknown\"))) as noncompliant_count\n        by os_type, compliance_reason\n| eval compliance_pct=round(noncompliant_count*100/total_devices, 2)\n| where noncompliant_count > 0\n```\n\nUnderstanding this SPL\n\n**Device Compliance Status and Policy Enforcement** — Ensures all managed devices comply with security policies and configuration standards.\n\nDocumented **Data sources**: `sourcetype=meraki:api compliance_status=*`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by os_type, compliance_reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **compliance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where noncompliant_count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Compliance status table; compliance percentage gauge; noncompliant device list.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Device Compliance Status and Policy Enforcement",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.6.2",
              "n": "Mobile Device Enrollment and MDM Status Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks device enrollment status to ensure mobile device management coverage.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api enrollment_status=*`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" enrollment_status IN (\"enrolled\", \"pending\", \"failed\")\n| stats count as device_count by enrollment_status, os_type\n| eval enrollment_pct=round(count*100/sum(count), 2)",
              "m": "Query device enrollment status. Track pending and failed enrollments.",
              "z": "Enrollment status pie chart; pending enrollment timeline; device count by OS.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api enrollment_status=*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery device enrollment status. Track pending and failed enrollments.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" enrollment_status IN (\"enrolled\", \"pending\", \"failed\")\n| stats count as device_count by enrollment_status, os_type\n| eval enrollment_pct=round(count*100/sum(count), 2)\n```\n\nUnderstanding this SPL\n\n**Mobile Device Enrollment and MDM Status Tracking** — Tracks device enrollment status to ensure mobile device management coverage.\n\nDocumented **Data sources**: `sourcetype=meraki:api enrollment_status=*`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by enrollment_status, os_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **enrollment_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Enrollment status pie chart; pending enrollment timeline; device count by OS.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Mobile Device Enrollment and MDM Status Tracking",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.6.3",
              "n": "Geofencing Alerts and Location-Based Policy Triggers",
              "c": "medium",
              "f": "beginner",
              "v": "Uses geofencing to detect when devices leave secure zones and trigger location-based policies.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*geofence*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*geofence*\"\n| stats count as geofence_event_count by device_id, zone_name, event_type\n| where event_type=\"left_zone\"",
              "m": "Monitor geofence event triggers. Track zone entry/exit by device.",
              "z": "Geofence event timeline; zone heat map; affected device list.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*geofence*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor geofence event triggers. Track zone entry/exit by device.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*geofence*\"\n| stats count as geofence_event_count by device_id, zone_name, event_type\n| where event_type=\"left_zone\"\n```\n\nUnderstanding this SPL\n\n**Geofencing Alerts and Location-Based Policy Triggers** — Uses geofencing to detect when devices leave secure zones and trigger location-based policies.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*geofence*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_id, zone_name, event_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where event_type=\"left_zone\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Geofence event timeline; zone heat map; affected device list.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Geofencing Alerts and Location-Based Policy Triggers",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.6.4",
              "n": "Mobile Security Policy Violations and App Restrictions",
              "c": "high",
              "f": "beginner",
              "v": "Detects policy violations and restricted app usage attempts.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*policy*\" OR signature=\"*app*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*policy*\" OR signature=\"*app*\") violation=\"true\"\n| stats count as violation_count by device_id, policy_name, violation_type\n| where violation_count > 5",
              "m": "Monitor security policy violation events. Alert on repeated violations.",
              "z": "Policy violation timeline; violation type breakdown; affected device list.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*policy*\" OR signature=\"*app*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor security policy violation events. Alert on repeated violations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*policy*\" OR signature=\"*app*\") violation=\"true\"\n| stats count as violation_count by device_id, policy_name, violation_type\n| where violation_count > 5\n```\n\nUnderstanding this SPL\n\n**Mobile Security Policy Violations and App Restrictions** — Detects policy violations and restricted app usage attempts.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*policy*\" OR signature=\"*app*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_id, policy_name, violation_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where violation_count > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Policy violation timeline; violation type breakdown; affected device list.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Mobile Security Policy Violations and App Restrictions",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.6.5",
              "n": "Lost Mode Device Activation and Recovery Tracking",
              "c": "high",
              "f": "beginner",
              "v": "Tracks activation of lost mode on devices to ensure recovery protocols are working.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*lost mode*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*lost mode*\"\n| stats count as lost_mode_count, latest(timestamp) as last_activation by device_id, activation_reason",
              "m": "Monitor lost mode activation events. Track recovery time.",
              "z": "Lost mode event timeline; affected device table; recovery status dashboard.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*lost mode*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor lost mode activation events. Track recovery time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*lost mode*\"\n| stats count as lost_mode_count, latest(timestamp) as last_activation by device_id, activation_reason\n```\n\nUnderstanding this SPL\n\n**Lost Mode Device Activation and Recovery Tracking** — Tracks activation of lost mode on devices to ensure recovery protocols are working.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*lost mode*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_id, activation_reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Lost mode event timeline; affected device table; recovery status dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Lost Mode Device Activation and Recovery Tracking",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.6.6",
              "n": "Mobile App Deployment Success Rate and Distribution Status",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks app deployment success and identifies devices with failed or incomplete deployments.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*app*deployment*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*app*deployment*\"\n| stats count as deployment_count, count(eval(status=\"success\")) as success_count, count(eval(status=\"failed\")) as failed_count by app_name\n| eval success_rate=round(success_count*100/deployment_count, 2)\n| where success_rate < 95",
              "m": "Monitor app deployment status events. Alert on low success rates.",
              "z": "Deployment success rate gauge; app deployment timeline; failure detail table.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*app*deployment*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor app deployment status events. Alert on low success rates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*app*deployment*\"\n| stats count as deployment_count, count(eval(status=\"success\")) as success_count, count(eval(status=\"failed\")) as failed_count by app_name\n| eval success_rate=round(success_count*100/deployment_count, 2)\n| where success_rate < 95\n```\n\nUnderstanding this SPL\n\n**Mobile App Deployment Success Rate and Distribution Status** — Tracks app deployment success and identifies devices with failed or incomplete deployments.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*app*deployment*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **success_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where success_rate < 95` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Deployment success rate gauge; app deployment timeline; failure detail table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use identity and sign-in data in Splunk so we can notice unusual logins, access changes, and privileged use while it still matters — Mobile App Deployment Success Rate and Distribution Status",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 6,
            "none": 0
          }
        },
        {
          "i": "9.7",
          "n": "Identity & Access Trending",
          "u": [
            {
              "i": "9.7.1",
              "n": "Authentication Volume Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Daily authentication success and failure volumes show whether login load, credential attacks, or misconfiguration are drifting over a quarter. A seven-day moving average smooths weekly noise so you can spot sustained shifts before they overwhelm help desks or mask intrusions.",
              "t": "Splunk Common Information Model (CIM), Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`), Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`), Okta Add-on for Splunk",
              "d": "`Authentication` data model (accelerated); underlying `sourcetype` values such as `WinEventLog:Security`, `azure:aad:signin`, `Okta:im` / `OktaIM2`, `duo` (normalized to CIM Authentication)",
              "q": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where earliest=-90d@d latest=@d\n  by _time span=1d Authentication.action\n| rename \"Authentication.action\" as action\n| timechart span=1d sum(count) by action\n| trendline sma7(success) as success_sma7 sma7(failure) as failure_sma7\n| predict failure as failure_forecast algorithm=LLP future_timespan=7",
              "m": "Accelerate the Authentication data model and confirm identity sources are tagged to CIM. Schedule the search over `-90d` with daily `span` for executive and SOC review dashboards. Treat a rising failure trend with flat success as password-spray or IdP issues; rising both may indicate bulk user or application changes. Tune out known maintenance windows with a time-bound macro if needed.",
              "z": "Multi-series line or area chart (success vs failure vs SMA); optional overlay for short-term forecast.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Common Information Model (CIM), Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`), Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`), Okta Add-on for Splunk.\n• Ensure the following data sources are available: `Authentication` data model (accelerated); underlying `sourcetype` values such as `WinEventLog:Security`, `azure:aad:signin`, `Okta:im` / `OktaIM2`, `duo` (normalized to CIM Authentication).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAccelerate the Authentication data model and confirm identity sources are tagged to CIM. Schedule the search over `-90d` with daily `span` for executive and SOC review dashboards. Treat a rising failure trend with flat success as password-spray or IdP issues; rising both may indicate bulk user or application changes. Tune out known maintenance windows with a time-bound macro if needed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where earliest=-90d@d latest=@d\n  by _time span=1d Authentication.action\n| rename \"Authentication.action\" as action\n| timechart span=1d sum(count) by action\n| trendline sma7(success) as success_sma7 sma7(failure) as failure_sma7\n| predict failure as failure_forecast algorithm=LLP future_timespan=7\n```\n\nUnderstanding this SPL\n\n**Authentication Volume Trending** — Daily authentication success and failure volumes show whether login load, credential attacks, or misconfiguration are drifting over a quarter. A seven-day moving average smooths weekly noise so you can spot sustained shifts before they overwhelm help desks or mask intrusions.\n\nDocumented **Data sources**: `Authentication` data model (accelerated); underlying `sourcetype` values such as `WinEventLog:Security`, `azure:aad:signin`, `Okta:im` / `OktaIM2`, `duo` (normalized to CIM Authentication). **App/TA** (typical add-on context): Splunk Common Information Model (CIM), Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`), Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`), Okta Add-on for Splunk. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by action** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Authentication Volume Trending**): trendline sma7(success) as success_sma7 sma7(failure) as failure_sma7\n• Pipeline stage (see **Authentication Volume Trending**): predict failure as failure_forecast algorithm=LLP future_timespan=7\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Authentication Volume Trending** — Daily authentication success and failure volumes show whether login load, credential attacks, or misconfiguration are drifting over a quarter. A seven-day moving average smooths weekly noise so you can spot sustained shifts before they overwhelm help desks or mask intrusions.\n\nDocumented **Data sources**: `Authentication` data model (accelerated); underlying `sourcetype` values such as `WinEventLog:Security`, `azure:aad:signin`, `Okta:im` / `OktaIM2`, `duo` (normalized to CIM Authentication). **App/TA** (typical add-on context): Splunk Common Information Model (CIM), Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`), Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`), Okta Add-on for Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Multi-series line or area chart (success vs failure vs SMA); optional overlay for short-term forecast.",
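If you want an alertable companion to the trend panels above, a minimal sketch is to watch the daily failure share against a fixed ceiling. The 20% threshold and the 7-day window here are illustrative assumptions to tune, not values taken from the data model.

```spl
| tstats `summariesonly` count
  from datamodel=Authentication.Authentication
  where earliest=-7d@d latest=@d
  by _time span=1d Authentication.action
| rename "Authentication.action" as action
| timechart span=1d sum(count) by action
| fillnull value=0 success failure
| eval failure_pct=round(100*failure/(failure+success),2)
| where failure_pct > 20
```

The `fillnull` guard keeps the ratio computable on days when one action type is absent; a rising `failure_pct` with flat success is the password-spray signature called out in Step 1.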
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how sign-in and identity activity change over time so we can spot surges, drift, and seasonal norms without guessing.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action span=1d | sort - count",
              "e": [
                "azure",
                "m365",
                "okta",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                },
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.7.2",
              "n": "MFA Adoption Rate Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Tracking the percentage of users enrolled in multi-factor authentication over time proves progress toward zero-trust and regulatory expectations. Flat or declining adoption after rollout indicates gaps in onboarding, excluded groups, or integration problems that leave accounts easier to abuse.",
              "t": "Okta Add-on for Splunk, Duo Security App for Splunk, Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`) (Entra ID reporting)",
              "d": "`index=okta` `sourcetype` in (`Okta:im`, `OktaIM2`) user objects; `index=duo` `sourcetype=duo:admin` or authentication logs with enrollment fields; Entra ID audit / user detail exports ingested with MFA columns",
              "q": "index=okta (sourcetype=\"Okta:im\" OR sourcetype=\"OktaIM2\") objectType=user earliest=-90d@d\n| eval has_mfa=if(mvcount('mfaFactors{}.factorType')>0 OR mvcount(mfaFactors)>0 OR like(lower(mfaStatus),\"active\"),1,0)\n| bin _time span=1d\n| stats sum(has_mfa) as mfa_enrolled_users, dc(id) as distinct_users by _time\n| eval adoption_pct=round(100*mfa_enrolled_users/distinct_users,2)\n| sort _time\n| trendline sma7(adoption_pct) as adoption_sma7\n| predict adoption_pct as adoption_forecast algorithm=LLP future_timespan=14",
              "m": "Align field names with your vendor (Okta `mfaFactors`, Duo `is_enrolled`, Entra `strongAuthenticationDetail`). Prefer a daily saved search that snapshots user inventory or use change events if full-state logs are large. Compare adoption_pct to HR onboarding cohorts to find departments lagging behind. Use the same definition of “enrolled” as your security policy for audit evidence.",
              "z": "Single-value with sparkline; line chart of adoption_pct and sma7; stacked bar of enrolled vs not enrolled by day.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Okta Add-on for Splunk, Duo Security App for Splunk, Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`) (Entra ID reporting).\n• Ensure the following data sources are available: `index=okta` `sourcetype` in (`Okta:im`, `OktaIM2`) user objects; `index=duo` `sourcetype=duo:admin` or authentication logs with enrollment fields; Entra ID audit / user detail exports ingested with MFA columns.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign field names with your vendor (Okta `mfaFactors`, Duo `is_enrolled`, Entra `strongAuthenticationDetail`). Prefer a daily saved search that snapshots user inventory or use change events if full-state logs are large. Compare adoption_pct to HR onboarding cohorts to find departments lagging behind. Use the same definition of “enrolled” as your security policy for audit evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=okta (sourcetype=\"Okta:im\" OR sourcetype=\"OktaIM2\") objectType=user earliest=-90d@d\n| eval has_mfa=if(mvcount('mfaFactors{}.factorType')>0 OR mvcount(mfaFactors)>0 OR like(lower(mfaStatus),\"active\"),1,0)\n| bin _time span=1d\n| stats sum(has_mfa) as mfa_enrolled_users, dc(id) as distinct_users by _time\n| eval adoption_pct=round(100*mfa_enrolled_users/distinct_users,2)\n| sort _time\n| trendline sma7(adoption_pct) as adoption_sma7\n| predict adoption_pct as adoption_forecast algorithm=LLP future_timespan=14\n```\n\nUnderstanding this SPL\n\n**MFA Adoption Rate Trending** — Tracking the percentage of users enrolled in multi-factor authentication over time proves progress toward zero-trust and regulatory expectations. Flat or declining adoption after rollout indicates gaps in onboarding, excluded groups, or integration problems that leave accounts easier to abuse.\n\nDocumented **Data sources**: `index=okta` `sourcetype` in (`Okta:im`, `OktaIM2`) user objects; `index=duo` `sourcetype=duo:admin` or authentication logs with enrollment fields; Entra ID audit / user detail exports ingested with MFA columns. **App/TA** (typical add-on context): Okta Add-on for Splunk, Duo Security App for Splunk, Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`) (Entra ID reporting). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: okta; **sourcetype**: Okta:im, OktaIM2. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=okta, sourcetype=\"Okta:im\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **has_mfa** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **adoption_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **MFA Adoption Rate Trending**): trendline sma7(adoption_pct) as adoption_sma7\n• Pipeline stage (see **MFA Adoption Rate Trending**): predict adoption_pct as adoption_forecast algorithm=LLP future_timespan=14\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single-value with sparkline; line chart of adoption_pct and sma7; stacked bar of enrolled vs not enrolled by day.",
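One way to build the daily snapshot suggested in Step 1 is to `collect` the computed rate into a summary index. A hedged sketch: the `summary` index and the `mfa_adoption_daily` source name are placeholders, and the `has_mfa` logic simply mirrors the main search.

```spl
index=okta (sourcetype="Okta:im" OR sourcetype="OktaIM2") objectType=user earliest=-1d@d latest=@d
| eval has_mfa=if(mvcount('mfaFactors{}.factorType')>0 OR mvcount(mfaFactors)>0 OR like(lower(mfaStatus),"active"),1,0)
| stats sum(has_mfa) as mfa_enrolled_users, dc(id) as distinct_users
| eval adoption_pct=round(100*mfa_enrolled_users/distinct_users,2)
| collect index=summary source="mfa_adoption_daily"
```

Scheduled daily, this lets you trend adoption with `index=summary source="mfa_adoption_daily" | timechart span=1d max(adoption_pct)` instead of rescanning the full user inventory each time.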
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how sign-in and identity activity change over time so we can spot surges, drift, and seasonal norms without guessing.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365",
                "okta"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.7.3",
              "n": "Privileged Account Activity Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Privileged logon volume should be relatively stable; spikes can indicate credential theft, mass admin activity during an incident, or automation run amok. Trending over thirty days highlights gradual increases that point-in-time thresholds miss.",
              "t": "Splunk Common Information Model (CIM), Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`)",
              "d": "`Authentication` data model; `privileged_users.csv` lookup (user, is_privileged) aligned with `Authentication.user`",
              "q": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where earliest=-30d@d latest=@d Authentication.action=success\n  by _time span=1d Authentication.user\n| lookup privileged_users user AS \"Authentication.user\" OUTPUT is_privileged\n| where is_privileged=\"true\"\n| timechart span=1d sum(count) as privileged_logons\n| trendline sma7(privileged_logons) as priv_sma7\n| predict privileged_logons as priv_forecast algorithm=LLP future_timespan=7",
              "m": "Build `privileged_users.csv` from Active Directory privileged groups, cloud Global Administrator roles, and break-glass accounts; refresh on a schedule. Require `Authentication.action=success` to measure real sessions. Investigate sustained upward trends with parallel searches on source IP and workstation. Pair with change tickets to expected elevation work.",
              "z": "Line chart of privileged_logons with SMA; anomaly overlay if using MLTK or `anomalydetection`.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Common Information Model (CIM), Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`).\n• Ensure the following data sources are available: `Authentication` data model; `privileged_users.csv` lookup (user, is_privileged) aligned with `Authentication.user`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild `privileged_users.csv` from Active Directory privileged groups, cloud Global Administrator roles, and break-glass accounts; refresh on a schedule. Require `Authentication.action=success` to measure real sessions. Investigate sustained upward trends with parallel searches on source IP and workstation. Pair with change tickets to expected elevation work.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where earliest=-30d@d latest=@d Authentication.action=success\n  by _time span=1d Authentication.user\n| lookup privileged_users user AS \"Authentication.user\" OUTPUT is_privileged\n| where is_privileged=\"true\"\n| timechart span=1d sum(count) as privileged_logons\n| trendline sma7(privileged_logons) as priv_sma7\n| predict privileged_logons as priv_forecast algorithm=LLP future_timespan=7\n```\n\nUnderstanding this SPL\n\n**Privileged Account Activity Trending** — Privileged logon volume should be relatively stable; spikes can indicate credential theft, mass admin activity during an incident, or automation run amok. Trending over thirty days highlights gradual increases that point-in-time thresholds miss.\n\nDocumented **Data sources**: `Authentication` data model; `privileged_users.csv` lookup (user, is_privileged) aligned with `Authentication.user`. **App/TA** (typical add-on context): Splunk Common Information Model (CIM), Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where is_privileged=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Privileged Account Activity Trending**): trendline sma7(privileged_logons) as priv_sma7\n• Pipeline stage (see **Privileged Account Activity Trending**): predict privileged_logons as priv_forecast algorithm=LLP future_timespan=7\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privileged Account Activity Trending** — Privileged logon volume should be relatively stable; spikes can indicate credential theft, mass admin activity during an incident, or automation run amok. 
Trending over thirty days highlights gradual increases that point-in-time thresholds miss.\n\nDocumented **Data sources**: `Authentication` data model; `privileged_users.csv` lookup (user, is_privileged) aligned with `Authentication.user`. **App/TA** (typical add-on context): Splunk Common Information Model (CIM), Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart of privileged_logons with SMA; anomaly overlay if using MLTK or `anomalydetection`.",
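To catch a single privileged account breaking its own baseline rather than the aggregate trend, a hedged variant adds a per-user band with `eventstats`. The three-sigma cut-off is an illustrative assumption, not a vetted threshold.

```spl
| tstats `summariesonly` count
  from datamodel=Authentication.Authentication
  where earliest=-30d@d latest=@d Authentication.action=success
  by _time span=1d Authentication.user
| lookup privileged_users user AS "Authentication.user" OUTPUT is_privileged
| where is_privileged="true"
| eventstats avg(count) as avg_daily, stdev(count) as stdev_daily by Authentication.user
| where count > avg_daily + 3*stdev_daily
| table _time, Authentication.user, count, avg_daily, stdev_daily
```

Because `eventstats` keeps the daily rows, each result shows the offending day next to the account's own 30-day average, which simplifies triage against change tickets.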
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how sign-in and identity activity change over time so we can spot surges, drift, and seasonal norms without guessing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user span=1d | sort - count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.7.4",
              "n": "Service Account Usage Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Service accounts should authenticate in predictable volumes from known automation. A dormant account suddenly trending upward may indicate compromise, scope creep, or shadow IT scripts. Ninety-day views expose slow burns and seasonal batch jobs alike.",
              "t": "Splunk Common Information Model (CIM), Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`)",
              "d": "`Authentication` data model; `service_accounts.csv` lookup (`Account_Name`, `account_type`) or naming convention `svc_*` / `service_*`",
              "q": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where earliest=-90d@d latest=@d\n  by _time span=1d Authentication.user\n| lookup service_accounts.csv Account_Name AS \"Authentication.user\" OUTPUT account_type\n| eval is_service=if(account_type=\"service\" OR match(lower('Authentication.user'),\"^(svc|service)[-_].*\"),1,0)\n| where is_service=1\n| timechart span=1d sum(count) as service_auth_volume\n| trendline sma7(service_auth_volume) as svc_sma7\n| predict service_auth_volume as svc_forecast algorithm=LLP future_timespan=14",
              "m": "Populate the lookup from AD and cloud app registrations; treat unknown machine accounts carefully. Baseline expected daily volume per account in a separate panel if volumes differ widely. Alert when a low-volume account crosses its historical band or when the aggregate trend jumps after no change tickets. Cross-check with password last set and owner field.",
              "z": "Line chart with SMA and forecast; small multiples per high-risk service account if volume allows.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Common Information Model (CIM), Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`).\n• Ensure the following data sources are available: `Authentication` data model; `service_accounts.csv` lookup (`Account_Name`, `account_type`) or naming convention `svc_*` / `service_*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPopulate the lookup from AD and cloud app registrations; treat unknown machine accounts carefully. Baseline expected daily volume per account in a separate panel if volumes differ widely. Alert when a low-volume account crosses its historical band or when the aggregate trend jumps after no change tickets. Cross-check with password last set and owner field.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where earliest=-90d@d latest=@d\n  by _time span=1d Authentication.user\n| lookup service_accounts.csv Account_Name AS \"Authentication.user\" OUTPUT account_type\n| eval is_service=if(account_type=\"service\" OR match(lower('Authentication.user'),\"^(svc|service)[-_].*\"),1,0)\n| where is_service=1\n| timechart span=1d sum(count) as service_auth_volume\n| trendline sma7(service_auth_volume) as svc_sma7\n| predict service_auth_volume as svc_forecast algorithm=LLP future_timespan=14\n```\n\nUnderstanding this SPL\n\n**Service Account Usage Trending** — Service accounts should authenticate in predictable volumes from known automation. A dormant account suddenly trending upward may indicate compromise, scope creep, or shadow IT scripts. Ninety-day views expose slow burns and seasonal batch jobs alike.\n\nDocumented **Data sources**: `Authentication` data model; `service_accounts.csv` lookup (`Account_Name`, `account_type`) or naming convention `svc_*` / `service_*`. **App/TA** (typical add-on context): Splunk Common Information Model (CIM), Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **is_service** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_service=1` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Service Account Usage Trending**): trendline sma7(service_auth_volume) as svc_sma7\n• Pipeline stage (see **Service Account Usage Trending**): predict service_auth_volume as svc_forecast algorithm=LLP future_timespan=14\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Service Account Usage Trending** — Service accounts should authenticate in predictable volumes from known automation. A dormant account suddenly trending upward may indicate compromise, scope creep, or shadow IT scripts. Ninety-day views expose slow burns and seasonal batch jobs alike.\n\nDocumented **Data sources**: `Authentication` data model; `service_accounts.csv` lookup (`Account_Name`, `account_type`) or naming convention `svc_*` / `service_*`. **App/TA** (typical add-on context): Splunk Common Information Model (CIM), Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart with SMA and forecast; small multiples per high-risk service account if volume allows.",
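The per-account band described in Step 1 (alerting when a low-volume account crosses its historical range) can be sketched by splitting the 90-day window into a baseline and the latest day. The 3x multiplier is an illustrative assumption, and the naming-convention match mirrors the main search.

```spl
| tstats `summariesonly` count
  from datamodel=Authentication.Authentication
  where earliest=-90d@d latest=@d
  by _time span=1d Authentication.user
| where match(lower('Authentication.user'),"^(svc|service)[-_].*")
| eval bucket=if(_time>=relative_time(now(),"-1d@d"),"today","baseline")
| stats avg(eval(if(bucket="baseline",count,null()))) as baseline_daily,
        sum(eval(if(bucket="today",count,0))) as today
  by Authentication.user
| where baseline_daily>0 AND today > 3*baseline_daily
```

Requiring `baseline_daily>0` skips brand-new accounts, which deserve their own onboarding review rather than a volume alert.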
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how sign-in and identity activity change over time so we can spot surges, drift, and seasonal norms without guessing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user span=1d | sort - count",
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.7.5",
              "n": "Conditional Access Policy Block Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracking which conditional access policies block the most sign-ins over time shows whether controls are too strict, mis-targeted, or under attack. Policy-level trends support tuning reviews and prove enforcement for audits without relying on one-off investigations.",
              "t": "Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`), Microsoft Azure Add-on",
              "d": "`index=azure` or `index=mscs` `sourcetype=\"azure:aad:signin\"` (fields such as `conditionalAccessStatus`, `conditionalAccessPolicies`, `status.errorCode`)",
              "q": "index=azure sourcetype=\"azure:aad:signin\" earliest=-90d@d\n| search conditionalAccessStatus=\"failure\" OR status.errorCode=\"53003\"\n| eval policy_name=mvindex('conditionalAccessPolicies{}.displayName',0)\n| fillnull value=\"unknown_policy\" policy_name\n| bin _time span=1d\n| stats count as block_count by _time, policy_name\n| timechart span=1d sum(block_count) by policy_name useother=f limit=10\n| appendcols [\n    search index=azure sourcetype=\"azure:aad:signin\" earliest=-90d@d\n      conditionalAccessStatus=\"failure\" OR status.errorCode=\"53003\"\n    | bin _time span=1d\n    | stats count as daily_ca_blocks by _time\n    | trendline sma7(daily_ca_blocks) as daily_ca_blocks_sma7\n  ]",
              "m": "Ensure sign-in logs include conditional access evaluation results (license and diagnostic settings in Entra). Expand `policy_name` with `mvexpand` if you need each policy in a multi-policy evaluation. Review top blockers monthly with app owners; correlate spikes with device compliance changes or new locations. Document exclusions for break-glass and service principals separately.",
              "z": "Stacked area or line chart per policy; heatmap of policy vs week for executive summaries.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`), Microsoft Azure Add-on.\n• Ensure the following data sources are available: `index=azure` or `index=mscs` `sourcetype=\"azure:aad:signin\"` (fields such as `conditionalAccessStatus`, `conditionalAccessPolicies`, `status.errorCode`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure sign-in logs include conditional access evaluation results (license and diagnostic settings in Entra). Expand `policy_name` with `mvexpand` if you need each policy in a multi-policy evaluation. Review top blockers monthly with app owners; correlate spikes with device compliance changes or new locations. Document exclusions for break-glass and service principals separately.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"azure:aad:signin\" earliest=-90d@d\n| search conditionalAccessStatus=\"failure\" OR status.errorCode=\"53003\"\n| eval policy_name=mvindex('conditionalAccessPolicies{}.displayName',0)\n| fillnull value=\"unknown_policy\" policy_name\n| bin _time span=1d\n| stats count as block_count by _time, policy_name\n| timechart span=1d sum(block_count) by policy_name useother=f limit=10\n| appendcols [\n    search index=azure sourcetype=\"azure:aad:signin\" earliest=-90d@d\n      conditionalAccessStatus=\"failure\" OR status.errorCode=\"53003\"\n    | bin _time span=1d\n    | stats count as daily_ca_blocks by _time\n    | trendline sma7(daily_ca_blocks) as daily_ca_blocks_sma7\n  ]\n```\n\nUnderstanding this SPL\n\n**Conditional Access Policy Block Trending** — Tracking which conditional access policies block the most sign-ins over time shows whether controls are too strict, mis-targeted, or under attack. Policy-level trends support tuning reviews and prove enforcement for audits without relying on one-off investigations.\n\nDocumented **Data sources**: `index=azure` or `index=mscs` `sourcetype=\"azure:aad:signin\"` (fields such as `conditionalAccessStatus`, `conditionalAccessPolicies`, `status.errorCode`). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`), Microsoft Azure Add-on. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: azure:aad:signin. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"azure:aad:signin\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **policy_name** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Fills null values with `fillnull`.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, policy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by policy_name useother=f limit=10** — ideal for trending and alerting on this use case.\n• Adds columns from a subsearch with `appendcols`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Conditional Access Policy Block Trending** — Tracking which conditional access policies block the most sign-ins over time shows whether controls are too strict, mis-targeted, or under attack. Policy-level trends support tuning reviews and prove enforcement for audits without relying on one-off investigations.\n\nDocumented **Data sources**: `index=azure` or `index=mscs` `sourcetype=\"azure:aad:signin\"` (fields such as `conditionalAccessStatus`, `conditionalAccessPolicies`, `status.errorCode`). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (`Splunk_TA_microsoft-cloudservices`), Microsoft Azure Add-on. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area or line chart per policy; heatmap of policy vs week for executive summaries.",
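Step 1 suggests `mvexpand` when one sign-in is evaluated by several policies. A hedged sketch of that expansion, assuming the raw JSON keeps `conditionalAccessPolicies` as an array whose members expose `displayName` and `result` keys (verify against your events before trusting the counts):

```spl
index=azure sourcetype="azure:aad:signin" earliest=-7d@d conditionalAccessStatus="failure"
| spath path=conditionalAccessPolicies{} output=policies
| mvexpand policies
| spath input=policies path=displayName output=policy_name
| spath input=policies path=result output=policy_result
| search policy_result="failure"
| stats count as block_count by policy_name
| sort - block_count
```

Unlike the `mvindex(...,0)` shortcut in the main search, this attributes a block to every policy that failed, at the cost of an `mvexpand` over the result set.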
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how sign-in and identity activity change over time so we can spot surges, drift, and seasonal norms without guessing.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=1d | sort - count",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.7.6",
              "n": "Password Reset Volume Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Sudden increases in password resets—self-service or helpdesk—often align with phishing waves, credential stuffing after a breach, or attacker-driven resets. A ninety-day trend with a moving average makes campaign-scale activity visible before individual tickets pile up.",
              "t": "Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`), Okta Add-on for Splunk, Splunk Add-on for ServiceNow or your ITSM (optional)",
              "d": "AD Security log `EventCode` 4724 (password reset attempt); `index=okta` `sourcetype=Okta:system` password reset events; ITSM `category=password` incidents",
              "q": "(index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4724 earliest=-90d@d)\n OR (index=okta sourcetype=Okta:system EVENT_TYPE=user.account.reset_password earliest=-90d@d)\n| bin _time span=1d\n| stats count as reset_volume by _time\n| sort _time\n| trendline sma7(reset_volume) as reset_sma7\n| predict reset_volume as reset_forecast algorithm=LLP future_timespan=7",
              "m": "Normalize multiple sources into one panel or use `eval source_system` before `stats`. Exclude routine bulk resets from known automation via a lookup of service accounts or change windows. When the SMA breaches a static or adaptive threshold, open a phishing hunt and check MFA and impossible-travel dashboards. Add helpdesk ticket volume from ITSM if self-service is low but calls spike.",
              "z": "Column or line chart of daily reset_volume with SMA; optional forecast ribbon.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [
                "T1110.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`), Okta Add-on for Splunk, Splunk Add-on for ServiceNow or your ITSM (optional).\n• Ensure the following data sources are available: AD Security log `EventCode` 4724 (password reset attempt); `index=okta` `sourcetype=Okta:system` password reset events; ITSM `category=password` incidents.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize multiple sources into one panel or use `eval source_system` before `stats`. Exclude routine bulk resets from known automation via a lookup of service accounts or change windows. When the SMA breaches a static or adaptive threshold, open a phishing hunt and check MFA and impossible-travel dashboards. Add helpdesk ticket volume from ITSM if self-service is low but calls spike.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4724 earliest=-90d@d)\n OR (index=okta sourcetype=Okta:system EVENT_TYPE=user.account.reset_password earliest=-90d@d)\n| bin _time span=1d\n| stats count as reset_volume by _time\n| sort _time\n| trendline sma7(reset_volume) as reset_sma7\n| predict reset_volume as reset_forecast algorithm=LLP future_timespan=7\n```\n\nUnderstanding this SPL\n\n**Password Reset Volume Trending** — Sudden increases in password resets—self-service or helpdesk—often align with phishing waves, credential stuffing after a breach, or attacker-driven resets. A ninety-day trend with a moving average makes campaign-scale activity visible before individual tickets pile up.\n\nDocumented **Data sources**: AD Security log `EventCode` 4724 (password reset attempt); `index=okta` `sourcetype=Okta:system` password reset events; ITSM `category=password` incidents. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (`Splunk_TA_windows`), Okta Add-on for Splunk, Splunk Add-on for ServiceNow or your ITSM (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog, okta; **sourcetype**: WinEventLog:Security, Okta:system. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, index=okta, sourcetype=\"WinEventLog:Security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Password Reset Volume Trending**): trendline sma7(reset_volume) as reset_sma7\n• Pipeline stage (see **Password Reset Volume Trending**): predict reset_volume as reset_forecast algorithm=LLP future_timespan=7\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column or line chart of daily reset_volume with SMA; optional forecast ribbon.",
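Step 1 recommends normalizing the two feeds with `eval source_system` before aggregating. A minimal sketch that keeps AD and Okta resets distinguishable on one panel; the `active_directory` and `okta` labels are arbitrary names:

```spl
(index=wineventlog sourcetype="WinEventLog:Security" EventCode=4724 earliest=-30d@d)
 OR (index=okta sourcetype=Okta:system EVENT_TYPE=user.account.reset_password earliest=-30d@d)
| eval source_system=if(sourcetype="WinEventLog:Security","active_directory","okta")
| timechart span=1d count by source_system
```

Splitting the series this way shows whether a spike is directory-wide or confined to one IdP, which changes the first triage question.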
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how sign-in and identity activity change over time so we can spot surges, drift, and seasonal norms without guessing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "okta",
                "servicenow",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                },
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "9.7.7",
              "n": "Identity Provider Availability Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Identity provider outages block all applications that rely on them; weekly or monthly uptime trends show whether your vendor or network path is degrading over a quarter. That supports SLA discussions, architecture decisions, and communication to the business before users flood the service desk.",
              "t": "Splunk Synthetic Monitoring, Splunk Observability Cloud Synthetics, or custom `curl`-based scripted input; vendor status is optional enrichment only",
              "d": "`sourcetype=synthetics:url_probe` or `http:response` (fields `http_status`, `url`, `target_name`); map probes to IdP login URLs",
              "q": "index=synthetics sourcetype=synthetics:url_probe earliest=-90d@d url IN (\"https://login.microsoftonline.com/\",\"https://*.okta.com/oauth2/\",\"https://accounts.google.com/\")\n| bin _time span=1d\n| stats count(eval(http_status<500)) as successes, count as probes by _time, url\n| eval daily_uptime_pct=round(100*successes/probes,3)\n| bin _time span=7d aligntime=@w0\n| stats avg(daily_uptime_pct) as weekly_uptime_pct by _time, url\n| sort _time\n| trendline sma4(weekly_uptime_pct) as uptime_sma\n| predict weekly_uptime_pct as uptime_forecast algorithm=LLP future_timespan=4",
              "m": "Point synthetic checks at the same endpoints your users hit for interactive login; run probes at least every few minutes from locations that match your user base. Tag each series with IdP name for clarity. Treat `weekly_uptime_pct` drops below your internal SLO as incidents even if vendor status pages are green. Combine with IdP `system` / `health` API logs if ingested for root-cause context.",
              "z": "Line chart of weekly_uptime_pct by IdP; SLA threshold band; optional forecast.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Synthetic Monitoring, Splunk Observability Cloud Synthetics, or custom `curl`-based scripted input; vendor status is optional enrichment only.\n• Ensure the following data sources are available: `sourcetype=synthetics:url_probe` or `http:response` (fields `http_status`, `url`, `target_name`); map probes to IdP login URLs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoint synthetic checks at the same endpoints your users hit for interactive login; run probes at least every few minutes from locations that match your user base. Tag each series with IdP name for clarity. Treat `weekly_uptime_pct` drops below your internal SLO as incidents even if vendor status pages are green. Combine with IdP `system` / `health` API logs if ingested for root-cause context.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=synthetics sourcetype=synthetics:url_probe earliest=-90d@d url IN (\"https://login.microsoftonline.com/\",\"https://*.okta.com/oauth2/\",\"https://accounts.google.com/\")\n| bin _time span=1d\n| stats count(eval(http_status<500)) as successes, count as probes by _time, url\n| eval daily_uptime_pct=round(100*successes/probes,3)\n| bin _time span=7d aligntime=@w0\n| stats avg(daily_uptime_pct) as weekly_uptime_pct by _time, url\n| sort _time\n| trendline sma4(weekly_uptime_pct) as uptime_sma\n| predict weekly_uptime_pct as uptime_forecast algorithm=LLP future_timespan=4\n```\n\nUnderstanding this SPL\n\n**Identity Provider Availability Trending** — Identity provider outages block all applications that rely on them; weekly or monthly uptime trends show whether your vendor or network path is degrading over a quarter. That supports SLA discussions, architecture decisions, and communication to the business before users flood the service desk.\n\nDocumented **Data sources**: `sourcetype=synthetics:url_probe` or `http:response` (fields `http_status`, `url`, `target_name`); map probes to IdP login URLs. **App/TA** (typical add-on context): Splunk Synthetic Monitoring, Splunk Observability Cloud Synthetics, or custom `curl`-based scripted input; vendor status is optional enrichment only. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: synthetics; **sourcetype**: synthetics:url_probe. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=synthetics, sourcetype=synthetics:url_probe, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, url** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **daily_uptime_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• A second `bin` re-buckets the daily rows into weeks (**span=7d**, aligned to the start of the week with `aligntime=@w0`).\n• The second `stats` then averages the daily values into **weekly_uptime_pct**, still split **by _time, url**.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `trendline sma4(weekly_uptime_pct)` computes a 4-week simple moving average so one bad week neither masks nor fakes a quarter-long trend.\n• `predict weekly_uptime_pct as uptime_forecast algorithm=LLP future_timespan=4` forecasts the next 4 weeks; LLP models a local level plus a seasonal component, useful when dips follow a recurring maintenance cadence.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart of weekly_uptime_pct by IdP; SLA threshold band; optional forecast.
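\n\nOptional labeling sketch: Step 1 says to tag each series with the IdP name. A minimal way to do that (the `idp` field name and the URL patterns are illustrative; match them to the probes you actually run):\n\n```spl\nindex=synthetics sourcetype=synthetics:url_probe earliest=-90d@d\n| eval idp=case(match(url,\"microsoftonline\"),\"Entra ID\", match(url,\"okta\"),\"Okta\", match(url,\"accounts.google\"),\"Google\", true(),\"other\")\n| bin _time span=1d\n| stats count(eval(http_status<500)) as successes, count as probes by _time, idp\n| eval daily_uptime_pct=round(100*successes/probes,3)\n```\n\nSwap `idp` for `url` in the rest of the pipeline to trend per provider instead of per probe URL.",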
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how sign-in and identity activity change over time so we can spot surges, drift, and seasonal norms without guessing.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 7,
            "none": 0
          }
        }
      ],
      "i": 9,
      "n": "Identity & Access Management",
      "src": "cat-09-identity-access-management.md"
    },
    {
      "s": [
        {
          "i": "10.1",
          "n": "Next-Gen Firewalls (Security-Focused)",
          "u": [
            {
              "i": "10.1.1",
              "n": "Threat Prevention Event Trending",
              "c": "critical",
              "f": "intermediate",
              "v": "Trending threat detections reveals attack campaigns, persistent threats, and the effectiveness of security controls.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on",
              "d": "Threat logs (IPS, AV, anti-spyware detections)",
              "q": "index=pan sourcetype=\"pan:threat\" severity IN (\"critical\",\"high\") earliest=-24h\n| timechart span=1h count by subtype",
              "m": "Ingest Palo Alto threat logs (`pan:threat`) via syslog to a Heavy Forwarder or via the Cortex Data Lake API. Key fields: `severity`, `threat_id`, `src`, `dest`, `action`, `rule`. Alert on `severity IN (critical, high)` with a minimum count threshold per 5-minute window to filter scanner noise. Suppress known vulnerability scanner source IPs via a `scanner_ips` lookup. Correlate threat signatures with CrowdStrike/EDR process events for endpoint confirmation.",
              "z": "Line chart (threat events by severity), Bar chart (top threats), Table (critical events), Stacked area (threat types over time).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on.\n• Ensure the following data sources are available: Threat logs (IPS, AV, anti-spyware detections).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Palo Alto threat logs (`pan:threat`) via syslog to a Heavy Forwarder or via the Cortex Data Lake API. Key fields: `severity`, `threat_id`, `src`, `dest`, `action`, `rule`. Alert on `severity IN (critical, high)` with a minimum count threshold per 5-minute window to filter scanner noise. Suppress known vulnerability scanner source IPs via a `scanner_ips` lookup. Correlate threat signatures with CrowdStrike/EDR process events for endpoint confirmation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:threat\" severity IN (\"critical\",\"high\") earliest=-24h\n| timechart span=1h count by subtype\n```\n\nUnderstanding this SPL\n\n**Threat Prevention Event Trending** — Trending threat detections reveals attack campaigns, persistent threats, and the effectiveness of security controls.\n\nDocumented **Data sources**: Threat logs (IPS, AV, anti-spyware detections). **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:threat. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:threat\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by subtype** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.src IDS_Attacks.signature IDS_Attacks.severity span=1h\n| sort 50 -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Threat Prevention Event Trending** — Trending threat detections reveals attack campaigns, persistent threats, and the effectiveness of security controls.\n\nDocumented **Data sources**: Threat logs (IPS, AV, anti-spyware detections). **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (threat events by severity), Bar chart (top threats), Table (critical events), Stacked area (threat types over time).
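\n\nOptional tuning sketch for the scanner suppression and 5-minute threshold described in Step 1 (the `scanner_ips` lookup, its `src` field, and the count threshold are placeholders to adapt):\n\n```spl\nindex=pan sourcetype=\"pan:threat\" severity IN (\"critical\",\"high\") earliest=-24h\n| search NOT [| inputlookup scanner_ips | fields src ]\n| bin _time span=5m\n| stats count by _time, src, threat_id\n| where count >= 5\n```",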
              "script": "",
              "premium": "",
              "hw": "Cisco Secure Firewall 3110, 3120, 3130, 3140, Firepower 1010, 1120, 1140, 1150, Firepower 2110, 2120, 2130, 2140, Firepower 4110, 4120, 4140, 4150, Firepower 9300, Firepower Management Center (FMC); Palo Alto PA-220/PA-440/PA-450/PA-460/PA-3200/PA-5200/PA-5400/PA-7000, Panorama; Fortinet FortiGate 40F/60F/80F/100F/200F/400F/600F/1000F/1800F/2600F/3500F/3700F/4400F, FortiManager",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track NGFW threat and prevention events in Splunk so we see attack spikes and when controls stop malicious traffic.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic",
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.src IDS_Attacks.signature IDS_Attacks.severity span=1h\n| sort 50 -count",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.2",
              "n": "Wildfire / Sandbox Verdicts",
              "c": "critical",
              "f": "beginner",
              "v": "Tracks zero-day and unknown malware detection effectiveness. Malicious verdicts require immediate investigation of affected hosts.",
              "t": "`Splunk_TA_paloalto`",
              "d": "Wildfire submission logs, sandbox analysis results",
              "q": "index=pan sourcetype=\"pan:wildfire\"\n| stats count by verdict, filetype\n| eval verdict_label=case(verdict=0,\"benign\", verdict=1,\"malware\", verdict=2,\"grayware\", verdict=4,\"phishing\")",
              "m": "Enable Wildfire logging on NGFW. Forward submission results to Splunk. Alert immediately on malware verdicts. Track affected users/hosts for investigation. Report on submission volumes and malicious file types.",
              "z": "Pie chart (verdict distribution), Table (malware verdicts with details), Line chart (submissions over time), Bar chart (by file type).",
              "kfp": "Legitimate file submissions for scanning (e.g. sandbox testing, IR) can trigger; correlate with policy and submitter.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: Wildfire submission logs, sandbox analysis results.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Wildfire logging on NGFW. Forward submission results to Splunk. Alert immediately on malware verdicts. Track affected users/hosts for investigation. Report on submission volumes and malicious file types.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:wildfire\"\n| stats count by verdict, filetype\n| eval verdict_label=case(verdict=0,\"benign\", verdict=1,\"malware\", verdict=2,\"grayware\", verdict=4,\"phishing\")\n```\n\nUnderstanding this SPL\n\n**Wildfire / Sandbox Verdicts** — Tracks zero-day and unknown malware detection effectiveness. Malicious verdicts require immediate investigation of affected hosts.\n\nDocumented **Data sources**: Wildfire submission logs, sandbox analysis results. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:wildfire. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:wildfire\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by verdict, filetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **verdict_label** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.signature Malware_Attacks.dest Malware_Attacks.file_name span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Wildfire / Sandbox Verdicts** — Tracks zero-day and unknown malware detection effectiveness. Malicious verdicts require immediate investigation of affected hosts.\n\nDocumented **Data sources**: Wildfire submission logs, sandbox analysis results. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (verdict distribution), Table (malware verdicts with details), Line chart (submissions over time), Bar chart (by file type).
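\n\nOptional triage sketch: Step 1 says to alert immediately on malware verdicts and track affected hosts. A minimal starting point (field names such as `file_name` and `src_user` depend on your TA's extractions; verify them before saving the alert):\n\n```spl\nindex=pan sourcetype=\"pan:wildfire\" verdict=1\n| stats earliest(_time) as first_seen, values(file_name) as files, count by src, src_user\n| convert ctime(first_seen)\n| sort -count\n```",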
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We read Wildfire sandbox verdicts so we catch malicious and suspicious files at the network edge before they spread inside.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.signature Malware_Attacks.dest Malware_Attacks.file_name span=1h\n| sort -count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "10.1.3",
              "n": "C2 Communication Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Command-and-control communication indicates active compromise. Detection enables containment before data exfiltration or lateral movement.",
              "t": "`Splunk_TA_paloalto`, threat intel feeds",
              "d": "Threat logs (C2 signatures), URL filtering (malware/C2 categories), DNS logs",
              "q": "index=pan sourcetype=\"pan:threat\" category=\"command-and-control\" earliest=-24h\n| stats count, values(dest) as c2_servers by src, src_user\n| sort -count",
              "m": "Enable URL filtering and threat prevention with C2 categories. Forward to Splunk. Alert immediately on any C2 detection. Integrate with threat intel feeds for IP/domain enrichment. Trigger automated containment via SOAR.",
              "z": "Table (C2 detections with source/dest), Geo map (C2 server locations), Timeline (C2 events), Network diagram.",
              "kfp": "Some C2-like traffic may be from approved tools or research; tune by destination and context.",
              "refs": "https://attack.mitre.org/techniques/T1071/",
              "mitre": [
                "T1071"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, threat intel feeds.\n• Ensure the following data sources are available: Threat logs (C2 signatures), URL filtering (malware/C2 categories), DNS logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable URL filtering and threat prevention with C2 categories. Forward to Splunk. Alert immediately on any C2 detection. Integrate with threat intel feeds for IP/domain enrichment. Trigger automated containment via SOAR.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:threat\" category=\"command-and-control\" earliest=-24h\n| stats count, values(dest) as c2_servers by src, src_user\n| sort -count\n```\n\nUnderstanding this SPL\n\n**C2 Communication Detection** — Command-and-control communication indicates active compromise. Detection enables containment before data exfiltration or lateral movement.\n\nDocumented **Data sources**: Threat logs (C2 signatures), URL filtering (malware/C2 categories), DNS logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, threat intel feeds. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:threat. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:threat\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, src_user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.src IDS_Attacks.signature IDS_Attacks.severity span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**C2 Communication Detection** — Command-and-control communication indicates active compromise. Detection enables containment before data exfiltration or lateral movement.\n\nDocumented **Data sources**: Threat logs (C2 signatures), URL filtering (malware/C2 categories), DNS logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, threat intel feeds. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (C2 detections with source/dest), Geo map (C2 server locations), Timeline (C2 events), Network diagram.
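\n\nOptional enrichment sketch: Step 1 calls for threat intel enrichment of C2 destinations. One lookup-based pattern (the `threat_intel_iocs` lookup and its `ioc`, `campaign` and `confidence` fields are placeholders for whatever feed you maintain):\n\n```spl\nindex=pan sourcetype=\"pan:threat\" category=\"command-and-control\" earliest=-24h\n| lookup threat_intel_iocs ioc as dest OUTPUT campaign, confidence\n| stats count, values(campaign) as campaigns, values(confidence) as confidence by src, dest\n| sort -count\n```",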
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for C2-style traffic at the firewall so we can contain active compromise while there is time to respond.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic",
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.src IDS_Attacks.signature IDS_Attacks.severity span=1h\n| sort -count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.4",
              "n": "DNS Sinkhole Hits",
              "c": "critical",
              "f": "beginner",
              "v": "DNS sinkhole hits confirm infected endpoints attempting to reach malicious domains. Each hit is a confirmed compromise indicator.",
              "t": "`Splunk_TA_paloalto`",
              "d": "DNS proxy logs (sinkhole action), threat logs",
              "q": "index=pan sourcetype=\"pan:threat\" action=\"sinkhole\" earliest=-24h\n| stats count by src, domain, threat_name\n| sort -count",
              "m": "Configure DNS sinkhole on NGFW. Forward threat logs with sinkhole actions to Splunk. Alert on each unique source IP hitting sinkhole. Trigger automated endpoint investigation. Track resolution status.",
              "z": "Table (sinkholed hosts with domains), Single value (compromised hosts count), Bar chart (top sinkholed domains).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: DNS proxy logs (sinkhole action), threat logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure DNS sinkhole on NGFW. Forward threat logs with sinkhole actions to Splunk. Alert on each unique source IP hitting sinkhole. Trigger automated endpoint investigation. Track resolution status.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:threat\" action=\"sinkhole\" earliest=-24h\n| stats count by src, domain, threat_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**DNS Sinkhole Hits** — DNS sinkhole hits confirm infected endpoints attempting to reach malicious domains. Each hit is a confirmed compromise indicator.\n\nDocumented **Data sources**: DNS proxy logs (sinkhole action), threat logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:threat. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:threat\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, domain, threat_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.query DNS.answer DNS.src span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DNS Sinkhole Hits** — DNS sinkhole hits confirm infected endpoints attempting to reach malicious domains. Each hit is a confirmed compromise indicator.\n\nDocumented **Data sources**: DNS proxy logs (sinkhole action), threat logs. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sinkholed hosts with domains), Single value (compromised hosts count), Bar chart (top sinkholed domains).
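\n\nOptional alerting sketch: Step 1 says to alert on each unique source hitting the sinkhole. This variant flags hosts whose first hit inside a 7-day window is recent (the window and the 24-hour cutoff are illustrative):\n\n```spl\nindex=pan sourcetype=\"pan:threat\" action=\"sinkhole\" earliest=-7d\n| stats earliest(_time) as first_seen, latest(_time) as last_seen, count by src\n| where first_seen >= relative_time(now(), \"-24h@h\")\n| convert ctime(first_seen) ctime(last_seen)\n```",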
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log and review DNS sinkhole hits so we know which systems tried to reach malicious infrastructure.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.query DNS.answer DNS.src span=1h\n| sort -count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "10.1.5",
              "n": "SSL Decryption Coverage",
              "c": "high",
              "f": "beginner",
              "v": "Encrypted traffic that isn't inspected creates a blind spot. Measuring decryption coverage ensures security visibility.",
              "t": "`Splunk_TA_paloalto`",
              "d": "Decryption statistics, traffic logs (encrypted vs decrypted flags)",
              "q": "index=pan sourcetype=\"pan:traffic\"\n| eval decrypted=if(flags LIKE \"%decrypt%\",1,0)\n| stats sum(decrypted) as decrypted_sessions, count as total_sessions\n| eval coverage_pct=round(decrypted_sessions/total_sessions*100,1)",
              "m": "Enable decryption logging on NGFW. Track percentage of HTTPS traffic being decrypted. Identify exempted destinations and evaluate risk. Report coverage to security leadership. Target >80% coverage.",
              "z": "Single value (decryption coverage %), Pie chart (decrypted vs bypassed), Bar chart (top bypassed destinations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: Decryption statistics, traffic logs (encrypted vs decrypted flags).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable decryption logging on NGFW. Track percentage of HTTPS traffic being decrypted. Identify exempted destinations and evaluate risk. Report coverage to security leadership. Target >80% coverage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:traffic\"\n| eval decrypted=if(flags LIKE \"%decrypt%\",1,0)\n| stats sum(decrypted) as decrypted_sessions, count as total_sessions\n| eval coverage_pct=round(decrypted_sessions/total_sessions*100,1)\n```\n\nUnderstanding this SPL\n\n**SSL Decryption Coverage** — Encrypted traffic that isn't inspected creates a blind spot. Measuring decryption coverage ensures security visibility.\n\nDocumented **Data sources**: Decryption statistics, traffic logs (encrypted vs decrypted flags). **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:traffic. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:traffic\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **decrypted** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (decryption coverage %), Pie chart (decrypted vs bypassed), Bar chart (top bypassed destinations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We measure how much traffic the firewall decrypts and inspects so we are not blind to threats inside encrypted sessions.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.6",
              "n": "Cisco ASA Reconnaissance Command Activity",
              "c": "high",
              "f": "intermediate",
              "v": "Detects reconnaissance and discovery commands on Cisco ASA devices that may indicate an attacker or insider mapping the network or preparing for further exploitation.",
              "t": "Splunk Security Essentials, Splunk Add-on for Cisco ASA",
              "d": "Cisco ASA syslog",
              "q": "index=network sourcetype=cisco:asa (command=\"show run\" OR command=\"write net\" OR command=\"dir\" OR command=\"more system:running-config\")\n| stats count by src_user, command, src, _time",
              "m": "Forward Cisco ASA syslog to Splunk and deploy the ESCU detection from security_content. Correlate with admin session and change management.",
              "z": "Table (user, command, source IP), Timeline.",
              "kfp": "Legitimate admin discovery or change procedures; verify with change management.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA Reconnaissance Command Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). When triggered, it creates a Notable Event in the Incident Review dashboard for analyst triage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA Reconnaissance Command Activity\" or filter by Analytic Story.\n3. Review the detection configuration — verify the scheduling interval and throttling settings match your operational tempo.\n4. Enable the detection as a Correlation Search. It will create Notable Events directly when triggered.\n5. Set the Notable Event severity and urgency appropriate to your environment’s risk posture.\n6. Configure Adaptive Response Actions: email notifications, ServiceNow ticket creation, SOAR playbook triggers, or other response workflows.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate admin discovery or change procedures; verify with change management.\n\n• Review the detection’s filter criteria and adjust for known-good activity in your environment.\n• Configure throttling in Content Management to prevent duplicate Notable Events for the same entity within a configurable window (typically 1–24 hours depending on detection frequency).\n• Use Notable Event Suppression for entities or patterns that are consistently benign after investigation.\n\nAnalyst Response Workflow\n\nWhen this detection generates a Notable Event:\n\n1. Open the Notable Event in Incident Review. Review the triggering event details, affected entities, and assigned severity.\n2. Investigate the involved entities using the Asset Investigator and Identity Investigator dashboards for historical context and behavioral patterns.\n3. Correlate with related Notable Events and threat intelligence to assess whether this is an isolated event or part of a broader campaign.\n4. Take appropriate response actions: contain, remediate, and recover. Leverage SOAR playbooks where available for consistent and rapid response.\n5. Update the Notable Event status and document investigation findings for post-incident review and compliance.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch reconnaissance and discovery commands on Cisco ASA devices that may indicate an attacker or insider mapping the network or preparing for further exploitation — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "cisco",
                "security_essentials"
              ],
              "em": [
                "cisco_asa"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ASA",
                "id": 1621,
                "url": "https://splunkbase.splunk.com/app/1621"
              },
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.7",
              "n": "Cisco ASA - AAA Policy Tampering",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco ASA - AAA Policy Tampering. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco ASA Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN ($host$) starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco ASA Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/asa/asa-cli-reference/A-H/asa-command-ref-A-H/aa-ac-commands.html",
              "mitre": [
                "T1556.004"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA - AAA Policy Tampering\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA - AAA Policy Tampering\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
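              "qs": "# Hedged sketch: accelerated view of cumulative risk per entity, complementing the drilldown in q. Assumes the Risk data model is accelerated; the 100-point threshold is illustrative and should mirror your Risk Notable rule.\n| tstats summariesonly=true sum(All_Risk.risk_score) as total_risk values(All_Risk.search_name) as detections from datamodel=Risk.All_Risk by All_Risk.normalized_risk_object\n| where total_risk >= 100",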
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco ASA - AAA Policy Tampering — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.8",
              "n": "Cisco ASA - Core Syslog Message Volume Drop",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco ASA - Core Syslog Message Volume Drop. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco ASA Logs",
              "q": "`cisco_asa`\n    message_id IN (302013, 302014, 609002, 710005)\n    | eval msg_desc=case(\n      message_id=\"302013\",\"Built inbound TCP connection\",\n      message_id=\"302014\",\"Teardown TCP connection\",\n      message_id=\"609002\",\"Teardown local-host management\",\n      message_id=\"710005\",\"TCP request discarded\"\n    )\n    | bin _time span=15m\n    | stats count values(msg_desc) as message_description\n                  values(dest) as dest\n      by _time message_id\n    | xyseries _time message_id count\n    | `cisco_asa___core_syslog_message_volume_drop_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Cisco ASA Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://blog.talosintelligence.com/arcanedoor-new-espionage-focused-campaign-found-targeting-perimeter-network-devices/, https://sec.cloudapps.cisco.com/security/center/resources/asa_ftd_continued_attacks, https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-asaftd-webvpn-z5xP8EUB, https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-http-code-exec-WmfP3h3O, https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-asaftd-webvpn-YROOTUW, https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-http-code-exec-WmfP3h3O, https://www.cisa.gov/news-events/directives/ed-25-03-identify-and-mitigate-potential-compromise-cisco-devices, https://www.ncsc.gov.uk/news/persistent-malicious-targeting-cisco-devices",
              "mitre": [
                "T1562"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA - Core Syslog Message Volume Drop\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA - Core Syslog Message Volume Drop\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco ASA - Core Syslog Message Volume Drop — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.9",
              "n": "Cisco ASA - Device File Copy Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco ASA - Device File Copy Activity. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco ASA Logs",
              "q": "# Shared SPL: intentional — see UC-10.1.7\n| from datamodel Risk.All_Risk | search normalized_risk_object IN ($host$) starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco ASA Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://blog.talosintelligence.com/arcanedoor-new-espionage-focused-campaign-found-targeting-perimeter-network-devices/",
              "mitre": [
                "T1005",
                "T1530"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA - Device File Copy Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1005, T1530. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA - Device File Copy Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
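              "qs": "# Hedged raw-log pivot for manual triage. Assumes command executions surface as syslog messages 111008/111010 (per Cisco documentation) and that the cisco_asa macro matches your ASA feed; the string match is illustrative.\n`cisco_asa` message_id IN (111008, 111010) \"copy\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host, message_id\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`",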
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco ASA - Device File Copy Activity — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.10",
              "n": "Cisco ASA - Device File Copy to Remote Location",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco ASA - Device File Copy to Remote Location. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco ASA Logs",
              "q": "# Shared SPL: intentional — see UC-10.1.7\n| from datamodel Risk.All_Risk | search normalized_risk_object IN ($host$) starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco ASA Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://community.cisco.com/t5/security-knowledge-base/asa-how-to-download-images-using-tftp-ftp-http-https-and-scp/ta-p/3109769, https://blog.talosintelligence.com/arcanedoor-new-espionage-focused-campaign-found-targeting-perimeter-network-devices/",
              "mitre": [
                "T1005",
                "T1041",
                "T1048.003"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA - Device File Copy to Remote Location\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1005, T1041, T1048.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA - Device File Copy to Remote Location\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
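              "qs": "# Hedged raw-log pivot for manual triage. Assumes copy commands to remote URLs appear in command-execution messages 111008/111010; the protocol list is illustrative.\n`cisco_asa` message_id IN (111008, 111010) \"copy\" (\"tftp://\" OR \"ftp://\" OR \"scp://\" OR \"http://\" OR \"https://\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host, message_id",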
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco ASA - Device File Copy to Remote Location — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.11",
              "n": "Cisco ASA - Logging Disabled via CLI",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco ASA - Logging Disabled via CLI. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco ASA Logs",
              "q": "# Shared SPL: intentional — see UC-10.1.7\n| from datamodel Risk.All_Risk | search normalized_risk_object IN ($host$) starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco ASA Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/site/us/en/products/security/firewalls/adaptive-security-appliance-asa-software/index.html, https://sec.cloudapps.cisco.com/security/center/resources/asa_ftd_continued_attacks",
              "mitre": [
                "T1562"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA - Logging Disabled via CLI\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA - Logging Disabled via CLI\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
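              "qs": "# Hedged raw-log pivot for manual triage. Assumes the 'no logging enable' command is recorded in messages 111008/111010; verify against your ASA syslog format.\n`cisco_asa` message_id IN (111008, 111010) \"no logging enable\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host",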
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco ASA - Logging Disabled using CLI — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.12",
              "n": "Cisco ASA - Logging Filters Configuration Tampering",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco ASA - Logging Filters Configuration Tampering. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco ASA Logs",
              "q": "# Shared SPL: intentional — see UC-10.1.7\n| from datamodel Risk.All_Risk | search normalized_risk_object IN ($host$) starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco ASA Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/asa/asa-cli-reference/I-R/asa-command-ref-I-R/m_log-lz.html",
              "mitre": [
                "T1562"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA - Logging Filters Configuration Tampering\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA - Logging Filters Configuration Tampering\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
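              "qs": "# Hedged raw-log pivot for manual triage. Assumes logging-destination changes surface as 'logging list', 'logging trap', or 'logging host' commands in messages 111008/111010; terms are illustrative.\n`cisco_asa` message_id IN (111008, 111010) (\"logging list\" OR \"logging trap\" OR \"logging host\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host",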
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco ASA - Logging Filters Configuration Tampering — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.13",
              "n": "Cisco ASA - Logging Message Suppression",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco ASA - Logging Message Suppression. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco ASA Logs",
              "q": "# Shared SPL: intentional — see UC-10.1.7\n| from datamodel Risk.All_Risk | search normalized_risk_object IN ($host$) starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco ASA Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.ncsc.gov.uk/static-assets/documents/malware-analysis-reports/RayInitiator-LINE-VIPER/ncsc-mar-rayinitiator-line-viper.pdf",
              "mitre": [
                "T1562.002",
                "T1070"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA - Logging Message Suppression\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.002, T1070. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA - Logging Message Suppression\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
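              "qs": "# Hedged raw-log pivot for manual triage. Assumes per-message suppression is issued as 'no logging message <id>' and recorded in messages 111008/111010.\n`cisco_asa` message_id IN (111008, 111010) \"no logging message\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host",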
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco ASA - Logging Message Suppression — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.14",
              "n": "Cisco ASA - New Local User Account Created",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco ASA - New Local User Account Created. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco ASA Logs",
              "q": "# Shared SPL: intentional — see UC-10.1.7\n| from datamodel Risk.All_Risk | search normalized_risk_object IN ($host$) starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco ASA Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/asa/syslog/asa-syslog/syslog-messages-500000-to-520025.html#con_4773963",
              "mitre": [
                "T1136.001",
                "T1078.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA - New Local User Account Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.001, T1078.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA - New Local User Account Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
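              "qs": "# Hedged raw-log pivot for manual triage. Assumes message 502101 ('New user added to local dbase') covers local account creation, per the Cisco syslog reference in refs.\n`cisco_asa` message_id=502101\n| stats count min(_time) as firstTime max(_time) as lastTime by host",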
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco ASA - New Local User Account Created — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.15",
              "n": "Cisco ASA - Packet Capture Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco ASA - Packet Capture Activity. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco ASA Logs",
              "q": "# Shared SPL: intentional — see UC-10.1.7\n| from datamodel Risk.All_Risk | search normalized_risk_object IN ($host$) starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco ASA Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/asa/asa-cli-reference/A-H/asa-command-ref-A-H/ca-cld-commands.html, https://www.cisco.com/c/en/us/support/docs/security/asa-5500-x-series-next-generation-firewalls/118097-configure-asa-00.html, https://www.ncsc.gov.uk/static-assets/documents/malware-analysis-reports/RayInitiator-LINE-VIPER/ncsc-mar-rayinitiator-line-viper.pdf",
              "mitre": [
                "T1040",
                "T1557"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA - Packet Capture Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1040, T1557. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA - Packet Capture Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
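              "_spl_example": "A minimal hunting sketch; the index, the sourcetype, and command-audit message ID 111008 (user executed a command, which requires command accounting on the ASA) are assumptions to verify against your deployment. It lists raw events where a capture command was run:\n\n```spl\n# assumed index/sourcetype and message ID; adjust to your environment\nindex=netfw sourcetype=cisco:asa \"111008\" \"capture\"\n| table _time, host, _raw\n| sort -_time\n```",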
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco ASA - Packet Capture Activity — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.16",
              "n": "Cisco ASA - Reconnaissance Command Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco ASA - Reconnaissance Command Activity. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco ASA Logs",
              "q": "# Shared SPL: intentional — see UC-10.1.7\n| from datamodel Risk.All_Risk | search normalized_risk_object IN ($host$) starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco ASA Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/asa/asa-cli-reference/S/asa-command-ref-S/sa-shov-commands.html",
              "mitre": [
                "T1082",
                "T1590.001",
                "T1590.005"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA - Reconnaissance Command Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082, T1590.001, T1590.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA - Reconnaissance Command Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
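              "_spl_example": "A quick post-enablement check that this detection is writing risk events, assuming the Enterprise Security default risk index and field names (rename index=risk if your deployment customizes it):\n\n```spl\n# ES default risk index assumed; adjust if customized\nindex=risk search_name=\"*Cisco ASA - Reconnaissance*\"\n| stats count values(risk_object) as entities by search_name\n```",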
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco ASA - Reconnaissance Command Activity — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.17",
              "n": "Cisco ASA - User Account Deleted From Local Database",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco ASA - User Account Deleted From Local Database. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco ASA Logs",
              "q": "# Shared SPL: intentional — see UC-10.1.7\n| from datamodel Risk.All_Risk | search normalized_risk_object IN ($host$) starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco ASA Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/asa/syslog/asa-syslog/syslog-messages-500000-to-520025.html#con_4773969",
              "mitre": [
                "T1531",
                "T1070.008"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA - User Account Deleted From Local Database\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1531, T1070.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA - User Account Deleted From Local Database\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
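              "_spl_example": "A sketch of the per-entity exception pattern described under Tuning. The asa_admin_allowlist lookup is hypothetical; create a CSV lookup with a user column of approved administrators before using it:\n\n```spl\n# asa_admin_allowlist is a hypothetical lookup; create it first\n| from datamodel Risk.All_Risk\n| search search_name=\"Cisco ASA - User Account Deleted From Local Database\"\n| lookup asa_admin_allowlist user as normalized_risk_object OUTPUT user as allowed\n| where isnull(allowed)\n```",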
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco ASA - User Account Deleted From Local Database — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.18",
              "n": "Cisco ASA - User Account Lockout Threshold Exceeded",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco ASA - User Account Lockout Threshold Exceeded. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco ASA Logs",
              "q": "# Shared SPL: intentional — see UC-10.1.7\n| from datamodel Risk.All_Risk | search normalized_risk_object IN ($host$) starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco ASA Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/asa/syslog/asa-syslog/syslog-messages-101001-to-199021.html#con_4769446",
              "mitre": [
                "T1110.001",
                "T1110.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA - User Account Lockout Threshold Exceeded\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.001, T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA - User Account Lockout Threshold Exceeded\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
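              "_spl_example": "A corroboration sketch over the CIM Authentication data model, assuming your ASA feed is mapped to it; the threshold of 10 failures is illustrative:\n\n```spl\n# threshold is illustrative; tune to your environment\n| tstats count from datamodel=Authentication where Authentication.action=\"failure\" by Authentication.src, Authentication.user\n| rename Authentication.* as *\n| where count > 10\n| sort -count\n```",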
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco ASA - User Account Lockout Threshold Exceeded — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.19",
              "n": "Cisco ASA - User Privilege Level Change",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco ASA - User Privilege Level Change. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco ASA Logs",
              "q": "# Shared SPL: intentional — see UC-10.1.7\n| from datamodel Risk.All_Risk | search normalized_risk_object IN ($host$) starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco ASA Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/asa/syslog/asa-syslog/syslog-messages-500000-to-520025.html#con_4773975, https://www.ncsc.gov.uk/static-assets/documents/malware-analysis-reports/RayInitiator-LINE-VIPER/ncsc-mar-rayinitiator-line-viper.pdf",
              "mitre": [
                "T1078.003",
                "T1098"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco ASA - User Privilege Level Change\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco ASA Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.003, T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco ASA - User Privilege Level Change\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
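              "_spl_example": "A minimal corroboration sketch; the index, the sourcetype, and syslog message ID 502103 (user privilege level changed) are assumptions to verify against your deployment and the Cisco syslog documentation in this entry's references:\n\n```spl\n# assumed index/sourcetype and message ID; adjust to your environment\nindex=netfw sourcetype=cisco:asa \"502103\"\n| table _time, host, _raw\n| sort -_time\n```",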
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco ASA - User Privilege Level Change — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.20",
              "n": "ESXi Firewall Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies when the ESXi firewall is disabled or set to permissive mode, which can expose the host to unauthorized access and network-based attacks. Such changes are often a precursor to lateral movement, data exfiltration, or the installation of malicious software by a threat actor.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1562.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Firewall Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Firewall Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
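              "_spl_example": "A quick ingestion check for the ESXi syslog feed; the sourcetype name is an assumption (it varies by VMware add-on and version), so confirm it against your configuration:\n\n```spl\n# sourcetype is an assumption; confirm against your VMware TA configuration\nindex=* sourcetype=vmw-syslog* \"firewall\"\n| stats count latest(_time) as lastSeen by host, sourcetype\n```",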
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies when the ESXi firewall is disabled or set to permissive mode, which can expose the host to unauthorized access and network-based attacks — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.21",
              "n": "Abnormally High Number Of Cloud Security Group API Calls",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a spike in the number of API calls made to cloud security groups by a user. It leverages data from the Change data model, focusing on successful firewall-related changes. This activity is significant because an abnormal increase in security group API calls can indicate potential malicious activity, such as unauthorized access or configuration changes. If confirmed malicious, this could allow an attacker to manipulate security group settings, potentially exposing se…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Abnormally High Number Of Cloud Security Group API Calls\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Abnormally High Number Of Cloud Security Group API Calls\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
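              "_spl_example": "A sketch of the spike logic this analytic describes, over the CIM Change data model; the object_category=firewall filter and the 3-sigma threshold are illustrative, so compare with the packaged search in Content Management:\n\n```spl\n# filter and threshold are illustrative; compare with the packaged ESCU search\n| tstats count from datamodel=Change where All_Changes.object_category=firewall by All_Changes.user _time span=1h\n| rename All_Changes.user as user\n| eventstats avg(count) as avg_c, stdev(count) as stdev_c by user\n| where count > avg_c + 3 * stdev_c\n```",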
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects a spike in the number of calls made to cloud security groups by a user — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.22",
              "n": "ASL AWS Defense Evasion Impair Security Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of critical AWS Security Services configurations, such as CloudWatch alarms, GuardDuty detectors, and Web Application Firewall rules. It leverages Amazon Security Lake logs to identify specific API calls like \"DeleteLogStream\" and \"DeleteDetector.\" This activity is significant because adversaries often use these actions to disable security monitoring and evade detection. If confirmed malicious, this could allow attackers to operate undetected, leading …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "`amazon_security_lake` api.operation IN (\"DeleteLogStream\",\"DeleteDetector\",\"DeleteIPSet\",\"DeleteWebACL\",\"DeleteRule\",\"DeleteRuleGroup\",\"DeleteLoggingConfiguration\",\"DeleteAlarms\")\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY actor.user.uid api.operation api.service.name\n           http_request.user_agent src_endpoint.ip actor.user.account.uid\n           cloud.provider cloud.region\n      | rename actor.user.uid as user api.operation as action api.service.name as dest http_request.user_agent as user_agent src_endpoint.ip as src actor.user.account.uid as vendor_account cloud.provider as vendor_product cloud.region as vendor_region\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `asl_aws_defense_evasion_impair_security_services_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that it is a legitimate admin activity. Please consider filtering out these noisy events using userAgent, user_arn field names.",
              "refs": "https://docs.aws.amazon.com/cli/latest/reference/guardduty/index.html, https://docs.aws.amazon.com/cli/latest/reference/waf/index.html, https://www.elastic.co/guide/en/security/current/prebuilt-rules.html",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Defense Evasion Impair Security Services\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Defense Evasion Impair Security Services\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that it is a legitimate admin activity. Please consider filtering out these noisy events using userAgent, user_arn field names.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the deletion of critical AWS Security Services configurations, such as CloudWatch alarms, GuardDuty detectors, and Web Application Firewall rules — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.23",
              "n": "Allow File And Printing Sharing In Firewall",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of firewall settings to allow file and printer sharing. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving 'netsh' commands that enable file and printer sharing. This activity is significant because it can indicate an attempt by ransomware to discover and encrypt files on additional machines connected to the compromised host. If confirmed malicious, this could lead to widespread file e…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network admin may modify this firewall feature that may cause this rule to be triggered.",
              "refs": "https://community.fortinet.com:443/t5/FortiEDR/How-FortiEDR-detects-and-blocks-Revil-Ransomware-aka-sodinokibi/ta-p/189638?externalID=FD52469, https://app.any.run/tasks/c0f98850-af65-4352-9746-fbebadee4f05/",
              "mitre": [
                "T1562.007"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Allow File And Printing Sharing In Firewall\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Allow File And Printing Sharing In Firewall\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network admin may modify this firewall feature that may cause this rule to be triggered.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the modification of firewall settings to allow file and printer sharing — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.24",
              "n": "Allow Inbound Traffic By Firewall Rule Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications to firewall rule registry settings that allow inbound traffic on specific ports with a public profile. It leverages data from the Endpoint.Registry data model, focusing on registry paths and values indicative of such changes. This activity is significant as it may indicate an adversary attempting to grant remote access to a machine by modifying firewall rules. If confirmed malicious, this could enable unauthorized remote access, potentially…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network admin may add/remove/modify public inbound firewall rule that may cause this rule to be triggered.",
              "refs": "https://learn.microsoft.com/en-us/powershell/module/netsecurity/new-netfirewallrule?view=windowsserver2019-ps",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Allow Inbound Traffic By Firewall Rule Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Allow Inbound Traffic By Firewall Rule Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network admin may add/remove/modify public inbound firewall rule that may cause this rule to be triggered.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects suspicious modifications to firewall rule registry settings that allow inbound traffic on specific ports with a public profile — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.25",
              "n": "Allow Inbound Traffic In Firewall Rule",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious PowerShell command that allows inbound traffic to a specific local port within the public profile. It leverages PowerShell script block logging (EventCode 4104) to identify commands containing keywords like \"firewall,\" \"Inbound,\" \"Allow,\" and \"-LocalPort.\" This activity is significant because it may indicate an attacker attempting to establish remote access by modifying firewall rules. If confirmed malicious, this could allow unauthorized access to the…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_id$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrator may allow inbound traffic in certain network or machine.",
              "refs": "https://learn.microsoft.com/en-us/powershell/module/netsecurity/new-netfirewallrule?view=windowsserver2019-ps",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Allow Inbound Traffic In Firewall Rule\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Allow Inbound Traffic In Firewall Rule\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrator may allow inbound traffic in certain network or machine.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects a suspicious PowerShell command that allows inbound traffic to a specific local port within the public profile — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.26",
              "n": "Allow Network Discovery In Firewall",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious modification to the firewall to allow network discovery on a machine. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving the 'netsh' command to enable network discovery. This activity is significant because it is commonly used by ransomware, such as REvil and RedDot, to discover and compromise additional machines on the network. If confirmed malicious, this could lead to widespread file en…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network admin may modify this firewall feature that may cause this rule to be triggered.",
              "refs": "https://community.fortinet.com:443/t5/FortiEDR/How-FortiEDR-detects-and-blocks-Revil-Ransomware-aka-sodinokibi/ta-p/189638?externalID=FD52469, https://app.any.run/tasks/c0f98850-af65-4352-9746-fbebadee4f05/",
              "mitre": [
                "T1562.007"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Allow Network Discovery In Firewall\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Allow Network Discovery In Firewall\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network admin may modify this firewall feature that may cause this rule to be triggered.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects a suspicious modification to the firewall to allow network discovery on a machine — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.27",
              "n": "Disabling Firewall with Netsh",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the disabling of the firewall using the netsh application. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions that include keywords like \"firewall,\" \"off,\" or \"disable.\" This activity is significant because disabling the firewall can expose the system to external threats, allowing malware to communicate with its command and control (C2) server. If confirmed malicious, this action could lead to unauthorized da…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin may disable firewall during testing or fixing network problem.",
              "refs": "https://tccontre.blogspot.com/2020/01/remcos-rat-evading-windows-defender-av.html",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disabling Firewall with Netsh\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disabling Firewall with Netsh\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin may disable firewall during testing or fixing network problem.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies the disabling of the firewall using the netsh application — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.28",
              "n": "Firewall Allowed Program Enable",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of a firewall rule to allow the execution of a specific application. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events with command-line arguments related to firewall rule changes. This activity is significant as it may indicate an attempt to bypass firewall restrictions, potentially allowing unauthorized applications to communicate over the network. If confirmed malicious, this cou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A network operator or systems administrator may utilize an automated or manual execution of this firewall rule that may generate false positives. Filter as needed.",
              "refs": "https://app.any.run/tasks/ad4c3cda-41f2-4401-8dba-56cc2d245488/",
              "mitre": [
                "T1562.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Firewall Allowed Program Enable\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Firewall Allowed Program Enable\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A network operator or systems administrator may utilize an automated or manual execution of this firewall rule that may generate false positives. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the modification of a firewall rule to allow the execution of a specific application — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
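To preview roughly what this detection looks for before enabling it, the sketch below approximates the trigger logic against CIM process telemetry. It is not the packaged ESCU search: the `security_content_summariesonly` and `security_content_ctime` ESCU macros, the Endpoint data model mapping, and the `netsh.exe` command-line patterns are assumptions drawn from the data sources listed above; rename them to match your deployment.

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="netsh.exe"
    AND Processes.process="*firewall*" AND Processes.process="*allowedprogram*"
    AND Processes.process="*enable*"
  by Processes.dest Processes.user Processes.parent_process_name Processes.process
| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`
```

Hosts that surface here through legitimate automation are candidates for the filter macro or suppression lookup described in the tuning guidance.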
            {
              "i": "10.1.29",
              "n": "Linux Auditd Disable Or Modify System Firewall",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious disable or modify system firewall. This behavior is critical for a SOC to monitor because it may indicate attempts to gain unauthorized access or maintain control over a system. Such actions could be signs of malicious activity. If confirmed, this could lead to serious consequences, including a compromised system, unauthorized access to sensitive data, or even a wider breach affecting the entire network. Detecting and responding to these signs early …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Service Stop",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Service Stop ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1562.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Disable Or Modify System Firewall\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Service Stop. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Disable Or Modify System Firewall\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the suspicious disable or modify system firewall — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
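As a quick data check before enabling the detection, the sketch below looks for auditd SERVICE_STOP records touching common firewall services. It is illustrative only, not the packaged ESCU logic: the `auditd` sourcetype name and the `unit`, `comm`, and `exe` field extractions depend on which Linux auditd add-on you run.

```spl
sourcetype=auditd type=SERVICE_STOP unit IN ("firewalld", "iptables", "ufw")
| stats count min(_time) as firstTime max(_time) as lastTime by host, unit, comm, exe
| sort -lastTime
```

If planned maintenance dominates the results, capture those hosts and windows in the filter macros referenced above rather than disabling the detection.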
            {
              "i": "10.1.30",
              "n": "Linux Iptables Firewall Modification",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious command-line activity that modifies the iptables firewall settings on a Linux machine. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific command patterns that alter firewall rules to accept traffic on certain TCP ports. This activity is significant as it can indicate malware, such as CyclopsBlink, modifying firewall settings to allow communication with a Command and Control (C2) server. If confirmed malicious, this…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrator may do this commandline for auditing and testing purposes. In this scenario filter is needed.",
              "refs": "https://www.ncsc.gov.uk/files/Cyclops-Blink-Malware-Analysis-Report.pdf, https://www.trendmicro.com/en_us/research/22/c/cyclops-blink-sets-sights-on-asus-routers--.html",
              "mitre": [
                "T1562.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Iptables Firewall Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Iptables Firewall Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrator may do this commandline for auditing and testing purposes. In this scenario filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects suspicious command-line activity that modifies the iptables firewall settings on a Linux machine — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
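The known-false-positive note above usually becomes a lookup- or macro-based filter. A hypothetical example follows: the `sysmon:linux` sourcetype and the `approved_admin_hosts` lookup are placeholder names for this sketch, not shipped artifacts, and the iptables command-line patterns are a simplified approximation of the detection's scope.

```spl
sourcetype="sysmon:linux" EventCode=1 process="*iptables*" process="*-A*" process="*ACCEPT*"
| search NOT [| inputlookup approved_admin_hosts | fields host]
| stats count min(_time) as firstTime max(_time) as lastTime by host, user, process
```

Populating the lookup from your CMDB or change records keeps the exclusion auditable, which is preferable to hard-coding hostnames into the search.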
            {
              "i": "10.1.31",
              "n": "Linux Stdout Redirection To Dev Null File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects command-line activities that redirect stdout or stderr to the /dev/null file. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. This behavior is significant as it can indicate attempts to hide command outputs, a technique observed in the CyclopsBlink malware to conceal modifications to iptables firewall settings. If confirmed malicious, this activity could allow an attacker to stealthily alter system configurat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.ncsc.gov.uk/files/Cyclops-Blink-Malware-Analysis-Report.pdf, https://www.trendmicro.com/en_us/research/22/c/cyclops-blink-sets-sights-on-asus-routers--.html",
              "mitre": [
                "T1562.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Stdout Redirection To Dev Null File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Stdout Redirection To Dev Null File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects command-line activities that redirect stdout or stderr to the /dev/null file — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
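To gauge how common this pattern is in your environment before enabling the detection, a raw-event sketch like the following can help. Sourcetype and field names (`sysmon:linux`, `process`, `parent_process_name`) are assumptions that depend on your Sysmon for Linux add-on and CIM mappings.

```spl
sourcetype="sysmon:linux" EventCode=1 process="*>/dev/null*" process="*2>&1*"
| stats count min(_time) as firstTime max(_time) as lastTime by host, user, parent_process_name, process
| sort -count
```

Shell scripts and scheduled jobs commonly redirect output, so pay particular attention to redirection attached to firewall-related commands such as iptables, consistent with the CyclopsBlink behavior cited above.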
            {
              "i": "10.1.32",
              "n": "Linux System Network Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential enumeration of local network configuration on Linux systems. It detects this activity by monitoring processes such as \"arp,\" \"ifconfig,\" \"ip,\" \"netstat,\" \"firewall-cmd,\" \"ufw,\" \"iptables,\" \"ss,\" and \"route\" within a 30-minute window. This behavior is significant as it often indicates reconnaissance efforts by adversaries to gather network information for subsequent attacks. If confirmed malicious, this activity could enable attackers to map the network…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1016/T1016.md",
              "mitre": [
                "T1016"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux System Network Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1016. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux System Network Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies potential enumeration of local network configuration on Linux systems — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
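The description above counts discovery tools within a 30-minute window; the sketch below shows that windowing shape. The threshold of four distinct tools and the `sysmon:linux` sourcetype are illustrative assumptions for this sketch, not the ESCU defaults.

```spl
sourcetype="sysmon:linux" EventCode=1
  process_name IN ("arp", "ifconfig", "ip", "netstat", "firewall-cmd", "ufw", "iptables", "ss", "route")
| bin _time span=30m
| stats dc(process_name) as distinct_tools values(process_name) as tools by _time, host, user
| where distinct_tools >= 4
```

Raising the distinct-tool threshold is usually a better first tuning step than excluding individual commands, since single invocations of `ip` or `ss` are routine administration.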
            {
              "i": "10.1.33",
              "n": "Windows Delete or Modify System Firewall",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies 'netsh' processes that delete or modify firewall configurations. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions containing specific keywords. This activity is significant because it can indicate malware, such as NJRAT, attempting to alter firewall settings to evade detection or remove traces. If confirmed malicious, this behavior could allow an attacker to disable security measures, facilitating further c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator may modify or delete firewall configuration.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.njrat",
              "mitre": [
                "T1562.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Delete or Modify System Firewall\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Delete or Modify System Firewall\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator may modify or delete firewall configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies 'netsh' processes that delete or modify firewall configurations — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
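A raw-telemetry approximation of this detection's trigger conditions is useful for scoping data quality first. As with the other sketches, this is not the packaged ESCU logic: it assumes CIM-mapped process data in the Endpoint data model, the ESCU convenience macros, and simplified `netsh.exe` command-line patterns.

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="netsh.exe"
    AND Processes.process="*firewall*"
    AND (Processes.process="*delete*" OR Processes.process="*disable*")
  by Processes.dest Processes.user Processes.process
| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`
```

Results dominated by configuration-management tooling point to the suppression-lookup approach in the tuning guidance above.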
            {
              "i": "10.1.34",
              "n": "Windows Firewall Rule Added",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies instances where a Windows Firewall rule is added by monitoring Event ID 4946 in the Windows Security Event Log. Firewall rule modifications can indicate legitimate administrative actions, but they may also signal unauthorized changes, misconfigurations, or malicious activity such as attackers allowing traffic for backdoors or persistence mechanisms. By analyzing fields like RuleName, RuleId, Computer, and ProfileChanged, security teams can determine whether the change a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4946",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4946 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate admin changes, Group Policy updates, software installs, security tools, and automated scripts can trigger false positives for Event ID 4946.",
              "refs": "https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-10/security/threat-protection/auditing/event-4946",
              "mitre": [
                "T1562.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Firewall Rule Added\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4946. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Firewall Rule Added\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate admin changes, Group Policy updates, software installs, security tools, and automated scripts can trigger false positives for Event ID 4946.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies instances where a Windows Firewall rule is added by monitoring Event ID 4946 in the Windows Security Event Log — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
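Event ID 4946 is only emitted when the "Audit MPSSVC Rule-Level Policy Change" subcategory is enabled, so it is worth previewing event volume before turning the detection on. The `wineventlog` index name and the XML-style field extractions (RuleName, ProfileChanged) in this sketch are assumptions to adapt to your Windows TA configuration.

```spl
index=wineventlog EventCode=4946
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, RuleName, ProfileChanged
| sort -count
```

High-count rule names from Group Policy or software installers, as noted in the known false positives, are the first candidates for suppression.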
            {
              "i": "10.1.35",
              "n": "Windows Firewall Rule Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies instances where a Windows Firewall rule has been deleted, potentially exposing the system to security risks. Unauthorized removal of firewall rules can indicate an attacker attempting to bypass security controls or malware disabling protections for persistence and command-and-control communication. The event logs details such as the deleted rule name, protocol, port, and the user responsible for the action. Security teams should monitor for unexpected deletions, correla…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4948",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4948 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate admin delete, Group Policy updates, software installs, security tools, and automated scripts can trigger false positives for Event ID 4948.",
              "refs": "https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-10/security/threat-protection/auditing/event-4948",
              "mitre": [
                "T1562.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Firewall Rule Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4948. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Firewall Rule Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate admin delete, Group Policy updates, software installs, security tools, and automated scripts can trigger false positives for Event ID 4948.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies instances where a Windows Firewall rule has been deleted, potentially exposing the system to security risks — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.36",
              "n": "Windows Firewall Rule Modification",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies instances where a Windows Firewall rule has been modified, which may indicate an attempt to alter security policies. Unauthorized modifications can weaken firewall protections, allowing malicious traffic or preventing legitimate communications. The event logs details such as the modified rule name, protocol, ports, application path, and the user responsible for the change. Security teams should monitor unexpected modifications, correlate them with related events, and in…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4947",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4947 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate admin changes, Group Policy updates, software installs, security tools, and automated scripts can trigger false positives for Event ID 4947.",
              "refs": "https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-10/security/threat-protection/auditing/event-4947",
              "mitre": [
                "T1562.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Firewall Rule Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4947. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Firewall Rule Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate admin changes, Group Policy updates, software installs, security tools, and automated scripts can trigger false positives for Event ID 4947.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies instances where a Windows Firewall rule has been modified, which may indicate an attempt to alter security policies — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.37",
              "n": "Windows Impair Defense Disable Defender Firewall And Network",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications in the Windows registry to disable firewall and network protection settings within Windows Defender Security Center. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the UILockdown registry value. This activity is significant as it may indicate an attempt to impair system defenses, potentially restricting users from modifying firewall or network protection settings. If confirmed malicious, this could allow an…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Disable Defender Firewall And Network\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Disable Defender Firewall And Network\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects modifications in the Windows registry to disable firewall and network protection settings within Windows Defender Security Center — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.38",
              "n": "Windows Modify Registry Delete Firewall Rules",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a potential deletion of firewall rules, indicating a possible security breach or unauthorized access attempt. It identifies actions where firewall rules are removed using commands like netsh advfirewall firewall delete rule, which can expose the network to external threats by disabling critical security measures. Monitoring these activities helps maintain network integrity and prevent malicious attacks.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 12",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network admin may add/remove/modify public inbound firewall rule that may cause this rule to be triggered.",
              "refs": "https://www.bleepingcomputer.com/news/security/new-shrinklocker-ransomware-uses-bitlocker-to-encrypt-your-files/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Delete Firewall Rules\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 12. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Delete Firewall Rules\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network admin may add/remove/modify public inbound firewall rule that may cause this rule to be triggered.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects a potential deletion of firewall rules, indicating a possible security breach or unauthorized access attempt — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.39",
              "n": "Windows Modify Registry to Add or Modify Firewall Rule",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a potential addition or modification of firewall rules, signaling possible configuration changes or security policy adjustments. It tracks commands such as netsh advfirewall firewall add rule and netsh advfirewall firewall set rule, which may indicate attempts to alter network access controls. Monitoring these actions ensures the integrity of firewall settings and helps prevent unauthorized network access.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13, Sysmon EventID 14",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network admin may add/remove/modify public inbound firewall rule that may cause this rule to be triggered.",
              "refs": "https://www.bleepingcomputer.com/news/security/new-shrinklocker-ransomware-uses-bitlocker-to-encrypt-your-files/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry to Add or Modify Firewall Rule\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13, Sysmon EventID 14. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry to Add or Modify Firewall Rule\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network admin may add/remove/modify public inbound firewall rule that may cause this rule to be triggered.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects a potential addition or modification of firewall rules, signaling possible configuration changes or security policy adjustments — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.40",
              "n": "Windows Modify System Firewall with Notable Process Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications to system firewall rules, specifically allowing execution of applications from notable and potentially malicious file paths. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving firewall rule changes. This activity is significant as it may indicate an adversary attempting to bypass firewall restrictions to execute malicious files. If confirmed malicious, this could al…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A network operator or systems administrator may utilize an automated or manual execution of this firewall rule that may generate false positives. Filter as needed.",
              "refs": "https://www.splunk.com/en_us/blog/security/more-than-just-a-rat-unveiling-njrat-s-mbr-wiping-capabilities.html",
              "mitre": [
                "T1562.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify System Firewall with Notable Process Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify System Firewall with Notable Process Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A network operator or systems administrator may utilize an automated or manual execution of this firewall rule that may generate false positives. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects suspicious modifications to system firewall rules, specifically allowing execution of applications from notable and potentially malicious file paths — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.41",
              "n": "Windows Remote Services Allow Rdp In Firewall",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows firewall to enable Remote Desktop Protocol (RDP) on a targeted machine. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving \"netsh.exe\" to allow TCP port 3389. This activity is significant as it may indicate an adversary attempting to gain remote access to a compromised host, a common tactic for lateral movement. If confirmed malicious, this could allow attackers to remotely…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Remote Services Allow Rdp In Firewall\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Remote Services Allow Rdp In Firewall\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects modifications to the Windows firewall to enable Remote Desktop Protocol (remote desktop) on a targeted machine — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.42",
              "n": "Windows Set Network Profile Category to Private via Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to modify the Windows Registry to change a network profile's category to \"Private\", which may indicate an adversary is preparing the environment for lateral movement or reducing firewall restrictions. Specifically, this activity involves changes to the Category value within the HKLM\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\NetworkList\\Profiles\\{GUID} registry path. A value of 1 corresponds to a private network profile, which typically enables less rest…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2025/07/31/frozen-in-transit-secret-blizzards-aitm-campaign-against-diplomats/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Set Network Profile Category to Private via Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Set Network Profile Category to Private via Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
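              "_exampleSpl": "Optional illustrative sketch, not part of the upstream ESCU detection. A raw Sysmon search for registry writes to a network profile's Category value; the index, sourcetype, and wildcard path below are assumptions to adapt to your Sysmon ingestion.\n\n```spl\nindex=windows sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=13 TargetObject=\"*NetworkList*Profiles*Category*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, TargetObject, Details\n| sort -count\n```",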
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to modify the Windows Registry to change a network profile's category to \"Private\", which may indicate an adversary is preparing the environment for lateral movement or reducing firewall restrictions — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.43",
              "n": "Windows System Network Connections Discovery Netsh",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Windows built-in tool netsh.exe to display the state, configuration, and profile of the host firewall. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process metadata. Monitoring this activity is crucial as netsh.exe can be used by adversaries to bypass firewall rules or discover firewall settings. If confirmed malicious, this activity could allow attackers to manipulate …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network administrator can use this tool for auditing process.",
              "refs": "https://attack.mitre.org/techniques/T1049/, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1049"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System Network Connections Discovery Netsh\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1049. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System Network Connections Discovery Netsh\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network administrator can use this tool for auditing process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
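              "_exampleSpl": "Optional illustrative sketch, not part of the upstream ESCU detection. A CIM tstats search for netsh.exe firewall-discovery command lines; it assumes an accelerated Endpoint data model populated by your Sysmon/4688 add-on, and the command-line patterns are examples only.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=\"netsh.exe\" AND Processes.process IN (\"*firewall*show*\", \"*show*profile*\", \"*show*state*\") by Processes.dest Processes.user Processes.process\n```",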
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the execution of the Windows built-in tool netsh.exe to display the state, configuration, and profile of the host firewall — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.44",
              "n": "Cisco Secure Firewall - Binary File Type Download",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Binary File Type Download. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense File Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file hash entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense File Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "IT admins or developers may legitimately download executables or scripts as part of their normal workflow. Apply additional filters accordingly.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1203",
                "T1059"
              ],
              "dtype": "file_hash",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Binary File Type Download\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file hash entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense File Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1203, T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Binary File Type Download\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: IT admins or developers may legitimately download executables or scripts as part of their normal workflow. Apply additional filters accordingly.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
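              "_exampleSpl": "Optional illustrative sketch, not part of the upstream ESCU detection. A raw search over Secure Firewall file events for executable file types; the index, sourcetype, event_type value, and field names (file_type, file_name, sha256) are assumptions to verify against your eStreamer add-on's actual schema.\n\n```spl\nindex=cisco sourcetype=\"cisco:sfw:estreamer\" event_type=\"FileEvent\" file_type IN (\"MSEXE\", \"MSOLE2\", \"ELF\")\n| stats count values(file_name) as file_names by src_ip, dest_ip, sha256\n| sort -count\n```",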
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.45",
              "n": "Cisco Secure Firewall - Bits Network Activity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the Background Intelligent Transfer Service (BITS) client application in allowed outbound connections. It leverages logs from Cisco Secure Firewall Threat Defense devices and identifies instances where BITS is used to initiate downloads from non-standard or unexpected domains. While BITS is a legitimate Windows service used for downloading updates, it is also commonly abused by adversaries to stealthily retrieve payloads or tools. This analytic filters o…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Bits Network Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Bits Network Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
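              "_exampleSpl": "Optional illustrative sketch, not part of the upstream ESCU detection. A CIM tstats search for the default BITS user agent in web/proxy telemetry; it assumes the Web data model is populated by your firewall or proxy logs and that BITS traffic retains its stock user agent.\n\n```spl\n| tstats summariesonly=true count from datamodel=Web where Web.http_user_agent=\"Microsoft BITS*\" by Web.src Web.dest Web.url\n| sort -count\n```",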
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.46",
              "n": "Cisco Secure Firewall - Blocked Connection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a blocked connection event by identifying a \"Block\" value in the action field. It leverages logs from Cisco Secure Firewall Threat Defense devices. This activity is significant as it can identify attempts from users or applications initiating network connection to explicitly or implicitly blocked range or zones. If confirmed malicious, attackers could be attempting to perform a forbidden action on the network such as data exfiltration, lateral movement, or network …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Blocked connection events are generated via an Access Control policy on the Firewall management console. Hence no false positives should be present.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1018",
                "T1046",
                "T1110",
                "T1203",
                "T1595.002"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Blocked Connection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018, T1046, T1110, T1203, T1595.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Blocked Connection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Blocked connection events are generated via an Access Control policy on the Firewall management console. Hence no false positives should be present.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
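              "_exampleSpl": "Optional illustrative sketch, not part of the upstream ESCU detection. A CIM tstats view of blocked connections by source, destination, and port; it assumes Secure Firewall connection events are CIM-mapped into an accelerated Network_Traffic data model.\n\n```spl\n| tstats summariesonly=true count from datamodel=Network_Traffic where All_Traffic.action=\"blocked\" by All_Traffic.src All_Traffic.dest All_Traffic.dest_port\n| sort -count\n```",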
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.47",
              "n": "Cisco Secure Firewall - Citrix NetScaler Memory Overread Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Citrix NetScaler Memory Overread Attempt. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://support.citrix.com/support-home/kbsearch/article?articleNumber=CTX693420, https://support.citrix.com/support-home/kbsearch/article?articleNumber=CTX693420, https://www.netscaler.com/blog/news/critical-security-updates-for-netscaler-netscaler-gateway-and-netscaler-console/, https://github.com/mingshenhk/CitrixBleed-2-CVE-2025-5777-PoC-, https://horizon3.ai/attack-research/attack-blogs/cve-2025-5777-citrixbleed-2-write-up-maybe/, https://labs.watchtowr.com/how-much-more-must-we-bleed-citrix-netscaler-memory-disclosure-citrixbleed-2-cve-2025-5777/, https://github.com/projectdiscovery/nuclei-templates/blob/main/http/cves/2025/CVE-2025-5777.yaml",
              "mitre": [
                "T1203",
                "T1059"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Citrix NetScaler Memory Overread Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1203, T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Citrix NetScaler Memory Overread Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
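              "_exampleSpl": "Optional illustrative sketch, not part of the upstream ESCU detection. A raw search over Secure Firewall intrusion events for Snort rule messages related to CitrixBleed 2; the index, sourcetype, event_type value, and signature field are assumptions to verify against your eStreamer add-on.\n\n```spl\nindex=cisco sourcetype=\"cisco:sfw:estreamer\" event_type=\"IntrusionEvent\" (\"CVE-2025-5777\" OR \"NetScaler\")\n| stats count by src_ip, dest_ip, signature\n| sort -count\n```",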
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.48",
              "n": "Cisco Secure Firewall - Communication Over Suspicious Ports",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Communication Over Suspicious Ports. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1021",
                "T1055",
                "T1059.001",
                "T1105",
                "T1219",
                "T1571"
              ],
              "dtype": "url",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Communication Over Suspicious Ports\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1021, T1055, T1059.001, T1105, T1219, T1571. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Communication Over Suspicious Ports\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
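              "_exampleSpl": "Optional illustrative sketch, not part of the upstream ESCU detection. A CIM tstats search for allowed traffic to ports that are often abused; the port list is illustrative only and should be tuned to what is genuinely uncommon in your environment.\n\n```spl\n| tstats summariesonly=true count from datamodel=Network_Traffic where All_Traffic.action=\"allowed\" AND All_Traffic.dest_port IN (1080, 4444, 5900, 6667) by All_Traffic.src All_Traffic.dest All_Traffic.dest_port\n| sort -count\n```",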
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.49",
              "n": "Cisco Secure Firewall - Connection to File Sharing Domain",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Connection to File Sharing Domain. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1071.001",
                "T1090.002",
                "T1105",
                "T1567.002",
                "T1588.002"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Connection to File Sharing Domain\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.001, T1090.002, T1105, T1567.002, T1588.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Connection to File Sharing Domain\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.50",
              "n": "Cisco Secure Firewall - File Download Over Uncommon Port",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - File Download Over Uncommon Port. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense File Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file hash entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense File Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legitimate applications may download files over custom ports (e.g., CDN mirrors, APIs). Apply additional filters accordingly.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1105",
                "T1571"
              ],
              "dtype": "file_hash",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - File Download Over Uncommon Port\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file hash entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense File Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105, T1571. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - File Download Over Uncommon Port\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legitimate applications may download files over custom ports (e.g., CDN mirrors, APIs). Apply additional filters accordingly.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.51",
              "n": "Cisco Secure Firewall - High EVE Threat Confidence",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - High EVE Threat Confidence. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1041",
                "T1071.001",
                "T1105",
                "T1573.002"
              ],
              "dtype": "process_name",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - High EVE Threat Confidence\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1041, T1071.001, T1105, T1573.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - High EVE Threat Confidence\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.52",
              "n": "Cisco Secure Firewall - High Priority Intrusion Classification",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - High Priority Intrusion Classification. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some intrusion events that are linked to these classifications might be noisy in certain environments. Apply a combination of filters for specific snort IDs and other indicators.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1203",
                "T1003",
                "T1071",
                "T1190",
                "T1078"
              ],
              "dtype": "ip_address",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - High Priority Intrusion Classification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1203, T1003, T1071, T1190, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - High Priority Intrusion Classification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some intrusion events that are linked to these classifications might be noisy in certain environments. Apply a combination of filters for specific snort IDs and other indicators.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.53",
              "n": "Cisco Secure Firewall - High Volume of Intrusion Events Per Host",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - High Volume of Intrusion Events Per Host. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1059",
                "T1071",
                "T1595.002"
              ],
              "dtype": "signature",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - High Volume of Intrusion Events Per Host\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1059, T1071, T1595.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - High Volume of Intrusion Events Per Host\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.54",
              "n": "Cisco Secure Firewall - Intrusion Events by Threat Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Intrusion Events by Threat Activity. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur due to legitimate security testing or research activities.",
              "refs": "https://www.cisco.com/c/en/us/products/security/firewalls/index.html, https://blog.talosintelligence.com/static-tundra/",
              "mitre": [
                "T1041",
                "T1573.002"
              ],
              "dtype": "signature",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Intrusion Events by Threat Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1041, T1573.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Intrusion Events by Threat Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur due to legitimate security testing or research activities.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.55",
              "n": "Cisco Secure Firewall - Lumma Stealer Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Lumma Stealer Activity. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be very unlikely.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.lumma",
              "mitre": [
                "T1190",
                "T1210",
                "T1027",
                "T1204"
              ],
              "dtype": "ip_address",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Lumma Stealer Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1210, T1027, T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Lumma Stealer Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be very unlikely.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.56",
              "n": "Cisco Secure Firewall - Lumma Stealer Download Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Lumma Stealer Download Attempt. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be unlikely.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.lumma",
              "mitre": [
                "T1041",
                "T1573.002"
              ],
              "dtype": "ip_address",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Lumma Stealer Download Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1041, T1573.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Lumma Stealer Download Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be unlikely.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.57",
              "n": "Cisco Secure Firewall - Lumma Stealer Outbound Connection Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Lumma Stealer Outbound Connection Attempt. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be unlikely.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.lumma",
              "mitre": [
                "T1041",
                "T1573.002"
              ],
              "dtype": "ip_address",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Lumma Stealer Outbound Connection Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1041, T1573.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Lumma Stealer Outbound Connection Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be unlikely.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.58",
              "n": "Cisco Secure Firewall - Malware File Downloaded",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Malware File Downloaded. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense File Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file hash entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense File Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Malicious verdicts could be outdated or incorrect due to retroactive threat intel.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1203",
                "T1105"
              ],
              "dtype": "file_hash",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Malware File Downloaded\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file hash entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense File Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1203, T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Malware File Downloaded\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Malicious verdicts could be outdated or incorrect due to retroactive threat intel.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.59",
              "n": "Cisco Secure Firewall - Oracle E-Business Suite Correlation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Oracle E-Business Suite Correlation. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be very unlikely.",
              "refs": "https://www.oracle.com/security-alerts/alert-cve-2025-61882.html, https://www.oracle.com/security-alerts/alert-cve-2025-61884.html, https://labs.watchtowr.com/well-well-well-its-another-day-oracle-e-business-suite-pre-auth-rce-chain-cve-2025-61882well-well-well-its-another-day-oracle-e-business-suite-pre-auth-rce-chain-cve-2025-61882/, https://cloud.google.com/blog/topics/threat-intelligence/oracle-ebusiness-suite-zero-day-exploitation",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Oracle E-Business Suite Correlation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Oracle E-Business Suite Correlation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be very unlikely.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.60",
              "n": "Cisco Secure Firewall - Oracle E-Business Suite Exploitation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Oracle E-Business Suite Exploitation. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be unlikely.",
              "refs": "https://www.oracle.com/security-alerts/alert-cve-2025-61882.html, https://www.oracle.com/security-alerts/alert-cve-2025-61884.html, https://labs.watchtowr.com/well-well-well-its-another-day-oracle-e-business-suite-pre-auth-rce-chain-cve-2025-61882well-well-well-its-another-day-oracle-e-business-suite-pre-auth-rce-chain-cve-2025-61882/, https://cloud.google.com/blog/topics/threat-intelligence/oracle-ebusiness-suite-zero-day-exploitation",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Oracle E-Business Suite Exploitation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Oracle E-Business Suite Exploitation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be unlikely.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.61",
              "n": "Cisco Secure Firewall - Possibly Compromised Host",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Possibly Compromised Host. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are directly related to their snort rules triggering and the firewall scoring. Apply additional filters if the rules are too noisy by disabling them or simply ignoring certain IP ranges that trigger it.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1203",
                "T1059",
                "T1587.001"
              ],
              "dtype": "signature",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Possibly Compromised Host\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1203, T1059, T1587.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Possibly Compromised Host\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are directly related to their snort rules triggering and the firewall scoring. Apply additional filters if the rules are too noisy by disabling them or simply ignoring certain IP ranges that trigger it.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.62",
              "n": "Cisco Secure Firewall - Potential Data Exfiltration",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Potential Data Exfiltration. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1041",
                "T1567.002",
                "T1048.003"
              ],
              "dtype": "url",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Potential Data Exfiltration\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1041, T1567.002, T1048.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Potential Data Exfiltration\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.63",
              "n": "Cisco Secure Firewall - Privileged Command Execution via HTTP",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Privileged Command Execution via HTTP. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-239a",
              "mitre": [
                "T1059",
                "T1505.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Privileged Command Execution via HTTP\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059, T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Privileged Command Execution via HTTP\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.64",
              "n": "Cisco Secure Firewall - Rare Snort Rule Triggered",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Rare Snort Rule Triggered. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "`cisco_secure_firewall` EventType=IntrusionEvent earliest=-7d\n    | stats dc(_time) as TriggerCount min(_time) as firstTime max(_time) as lastTime\n            values(signature) as signature\n            values(src) as src\n            values(dest) as dest\n            values(dest_port) as dest_port\n            values(transport) as transport\n            values(app) as app\n            values(rule) as rule\n            by signature_id class_desc MitreAttackGroups InlineResult InlineResultReason\n    | where TriggerCount = 1\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `cisco_secure_firewall___rare_snort_rule_triggered_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur with certain rare activity. Apply additional filters where required.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1598",
                "T1583.006"
              ],
              "dtype": "Hunting",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Rare Snort Rule Triggered\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1598, T1583.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Rare Snort Rule Triggered\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur with certain rare activity. Apply additional filters where required.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.65",
              "n": "Cisco Secure Firewall - React Server Components RCE Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - React Server Components RCE Attempt. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://react.dev/blog/2025/12/03/critical-security-vulnerability-in-react-server-components, https://nextjs.org/blog/CVE-2025-66478, https://nvd.nist.gov/vuln/detail/CVE-2025-55182, https://gist.github.com/maple3142/48bc9393f45e068cf8c90ab865c0f5f3, https://www.wiz.io/blog/critical-vulnerability-in-react-cve-2025-55182",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - React Server Components RCE Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - React Server Components RCE Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.66",
              "n": "Cisco Secure Firewall - Remote Access Software Usage Traffic",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Remote Access Software Usage Traffic. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1219/, https://thedfirreport.com/2022/08/08/bumblebee-roasts-its-way-to-domain-admin/, https://thedfirreport.com/2022/11/28/emotet-strikes-again-lnk-file-leads-to-domain-wide-ransomware/",
              "mitre": [
                "T1219"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Remote Access Software Usage Traffic\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1219. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Remote Access Software Usage Traffic\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.67",
              "n": "Cisco Secure Firewall - Repeated Blocked Connections",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Repeated Blocked Connections. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1018",
                "T1046",
                "T1110",
                "T1203",
                "T1595.002"
              ],
              "dtype": "url",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Repeated Blocked Connections\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1018, T1046, T1110, T1203, T1595.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Repeated Blocked Connections\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.68",
              "n": "Cisco Secure Firewall - Repeated Malware Downloads",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Repeated Malware Downloads. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense File Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file hash entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense File Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be minimal here, tuning may be required to exclude known test machines or development hosts.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1105",
                "T1027"
              ],
              "dtype": "file_hash",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Repeated Malware Downloads\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file hash entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense File Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1105, T1027. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Repeated Malware Downloads\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be minimal here, tuning may be required to exclude known test machines or development hosts.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.69",
              "n": "Cisco Secure Firewall - Snort Rule Triggered Across Multiple Hosts",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Snort Rule Triggered Across Multiple Hosts. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$signature_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be minimal. Simultaneous vulnerability scanning across multiple internal hosts might trigger this, as well as some snort rules that are noisy. Disable those if necessary or increase the threshold.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1105",
                "T1027"
              ],
              "dtype": "signature",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Snort Rule Triggered Across Multiple Hosts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1105, T1027. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Snort Rule Triggered Across Multiple Hosts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be minimal. Simultaneous vulnerability scanning across multiple internal hosts might trigger this, as well as some snort rules that are noisy. Disable those if necessary or increase the threshold.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.70",
              "n": "Cisco Secure Firewall - SSH Connection to Non-Standard Port",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - SSH Connection to Non-Standard Port. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-239a",
              "mitre": [
                "T1021.004"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - SSH Connection to Non-Standard Port\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - SSH Connection to Non-Standard Port\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
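              "_exampleSpl": "Illustrative hunting sketch, separate from the packaged ESCU logic: it assumes Cisco Secure Firewall connection events are CIM-mapped into the Network_Traffic data model and that the `app` and `dest_port` values shown here match your deployment; rename before use.\n\n```spl\n| tstats summariesonly=true count from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.app=\"ssh\" AND NOT All_Traffic.dest_port=22\n  by All_Traffic.src All_Traffic.dest All_Traffic.dest_port\n| rename All_Traffic.* as *\n```",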
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.71",
              "n": "Cisco Secure Firewall - SSH Connection to sshd_operns",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - SSH Connection to sshd_operns. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-239a",
              "mitre": [
                "T1021.004"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - SSH Connection to sshd_operns\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - SSH Connection to sshd_operns\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
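              "_exampleSpl": "Illustrative raw-event sketch, separate from the packaged ESCU logic. The index and sourcetype below (`index=cisco`, `sourcetype=\"cisco:sfw:estreamer\"`) are assumptions; point them at wherever your Secure Firewall eStreamer or syslog feed lands.\n\n```spl\nindex=cisco sourcetype=\"cisco:sfw:estreamer\" \"sshd_operns\"\n| stats count min(_time) as firstTime max(_time) as lastTime by src_ip dest_ip dest_port\n```",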
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.72",
              "n": "Cisco Secure Firewall - Static Tundra Smart Install Abuse",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Static Tundra Smart Install Abuse. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://blog.talosintelligence.com/static-tundra/, https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180328-smi2",
              "mitre": [
                "T1190",
                "T1210",
                "T1499"
              ],
              "dtype": "signature",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Static Tundra Smart Install Abuse\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1210, T1499. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Static Tundra Smart Install Abuse\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
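              "_exampleSpl": "Illustrative exposure check: Cisco Smart Install listens on TCP/4786 (per the Cisco advisory in the references), so reachable Smart Install endpoints can be hunted in the Network_Traffic data model. The CIM mapping and field names are assumptions; adjust to your deployment.\n\n```spl\n| tstats summariesonly=true count from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.dest_port=4786\n  by All_Traffic.src All_Traffic.dest All_Traffic.action\n| rename All_Traffic.* as *\n```",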
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.73",
              "n": "Cisco Secure Firewall - Wget or Curl Download",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Wget or Curl Download. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1053.003",
                "T1059",
                "T1071.001",
                "T1105"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Wget or Curl Download\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.003, T1059, T1071.001, T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Wget or Curl Download\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
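              "_exampleSpl": "Illustrative sketch using the CIM Web data model rather than the packaged detection: it flags requests whose user agent begins with wget or curl. It assumes your firewall's HTTP metadata is mapped to `Web.http_user_agent`; adjust the data model and fields if your deployment differs.\n\n```spl\n| tstats summariesonly=true count from datamodel=Web.Web\n  where Web.http_user_agent IN (\"wget*\", \"curl*\")\n  by Web.src Web.url Web.http_user_agent\n| rename Web.* as *\n```",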
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cisco Secure Firewall / Firepower events in Splunk so we see the same blocks, intrusions, and risky connections that security teams review in Management Center.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.74",
              "n": "Detect Remote Access Software Usage Traffic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects network traffic associated with known remote access software applications, such as AnyDesk, GoToMyPC, LogMeIn, and TeamViewer. It leverages Palo Alto traffic logs mapped to the Network_Traffic data model in Splunk. This activity is significant because adversaries often use remote access tools to maintain unauthorized access to compromised environments. If confirmed malicious, this activity could allow attackers to control systems remotely, exfiltrate data, or deplo…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Palo Alto Network Traffic",
              "q": "| from datamodel:Network_Traffic.All_Traffic | search src=$src$ app=$app$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Palo Alto Network Traffic ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that legitimate remote access software is used within the environment. Ensure that the lookup is reviewed and updated with any additional remote access software that is used within the environment. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content",
              "refs": "https://attack.mitre.org/techniques/T1219/, https://thedfirreport.com/2022/08/08/bumblebee-roasts-its-way-to-domain-admin/, https://thedfirreport.com/2022/11/28/emotet-strikes-again-lnk-file-leads-to-domain-wide-ransomware/, https://applipedia.paloaltonetworks.com/",
              "mitre": [
                "T1219"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Remote Access Software Usage Traffic\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Palo Alto Network Traffic. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1219. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Remote Access Software Usage Traffic\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that legitimate remote access software is used within the environment. Ensure that the lookup is reviewed and updated with any additional remote access software that is used within the environment. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
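              "_exampleSpl": "Illustrative sketch approximating the behavior with firewall application identification. The App-ID values listed are examples only, not a complete inventory; the shipped ESCU detection instead matches against a maintained remote-access-software lookup.\n\n```spl\n| tstats summariesonly=true count from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.app IN (\"anydesk\", \"teamviewer*\", \"logmein*\", \"gotomypc*\")\n  by All_Traffic.src All_Traffic.dest All_Traffic.app\n| rename All_Traffic.* as *\n```",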
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects network traffic associated with known remote access software applications, such as AnyDesk, GoToMyPC, LogMeIn, and TeamViewer — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.75",
              "n": "Detect Traffic Mirroring",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the initiation of traffic mirroring sessions on Cisco network devices. It leverages logs with specific mnemonics and facilities related to traffic mirroring, such as \"ETH_SPAN_SESSION_UP\" and \"PKTCAP_START.\" This activity is significant because adversaries may use traffic mirroring to exfiltrate data by duplicating and forwarding network traffic to an external destination. If confirmed malicious, this could allow attackers to capture sensitive information, monitor …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco IOS Logs",
              "q": "`cisco_networks` (facility=\"MIRROR\" mnemonic=\"ETH_SPAN_SESSION_UP\") OR (facility=\"SPAN\" mnemonic=\"SESSION_UP\") OR (facility=\"SPAN\" mnemonic=\"PKTCAP_START\") OR (mnemonic=\"CFGLOG_LOGGEDCMD\" command=\"monitor session*\")\n      | stats min(_time) AS firstTime max(_time) AS lastTime count\n        BY host facility mnemonic\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `detect_traffic_mirroring_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco IOS Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This search will return false positives for any legitimate traffic captures by network administrators.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1020.001",
                "T1200",
                "T1498"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Traffic Mirroring\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco IOS Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1020.001, T1200, T1498. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Traffic Mirroring\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This search will return false positives for any legitimate traffic captures by network administrators.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
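              "_exampleSpl": "Illustrative tuning sketch for the false positives noted above: suppress mirroring sessions on devices where captures are sanctioned, via a lookup. The `authorized_span_sessions` lookup (keyed on `host` with an `approved` flag) is hypothetical; create and populate it for your environment.\n\n```spl\n`cisco_networks` (facility=\"SPAN\" mnemonic=\"SESSION_UP\") OR (facility=\"MIRROR\" mnemonic=\"ETH_SPAN_SESSION_UP\")\n| lookup authorized_span_sessions host OUTPUT approved\n| where isnull(approved)\n```",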
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the initiation of traffic mirroring sessions on Cisco network devices — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.76",
              "n": "Protocol or Port Mismatch",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies network traffic where the higher layer protocol does not match the expected port, such as non-HTTP traffic on TCP port 80. It leverages data from network traffic inspection technologies like Bro or Palo Alto Networks firewalls. This activity is significant because it may indicate attempts to bypass firewall restrictions or conceal malicious communications. If confirmed malicious, this behavior could allow attackers to evade detection, maintain persistence, or ex…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Running this search properly requires a technology that can inspect network traffic and identify common protocols. Technologies such as Zeek, Cisco Secure Firewall or Palo Alto Networks firewalls are examples that will identify protocols via inspection, and not just assume a specific protocol based on the transport protocol and ports.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some false positive could occur with some applications that change their default communication port for an added layer of obscurity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Protocol or Port Mismatch\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1048.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Protocol or Port Mismatch\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some false positive could occur with some applications that change their default communication port for an added layer of obscurity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
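              "_exampleSpl": "Illustrative sketch of the mismatch idea: compare the inspected application to the port it arrived on. The application labels (\"web-browsing\", \"ssl\") are Palo Alto-style names and are assumptions; other vendors label applications differently, so substitute your firewall's values.\n\n```spl\n| tstats summariesonly=true count from datamodel=Network_Traffic.All_Traffic\n  where (All_Traffic.dest_port=80 AND NOT All_Traffic.app=\"web-browsing\") OR (All_Traffic.dest_port=443 AND NOT All_Traffic.app=\"ssl\")\n  by All_Traffic.src All_Traffic.dest All_Traffic.dest_port All_Traffic.app\n| rename All_Traffic.* as *\n```",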
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies network traffic where the higher layer protocol does not match the expected port, such as non-HTTP traffic on TCP port 80 — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.1.77",
              "n": "TOR Traffic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies allowed network traffic to The Onion Router (TOR), an anonymity network often exploited for malicious activities. It leverages data from Next Generation Firewalls, using the Network_Traffic data model to detect traffic where the application is TOR and the action is allowed. This activity is significant as TOR can be used to bypass conventional monitoring, facilitating hacking, data breaches, and illicit content dissemination. If confirmed malicious, this could l…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Palo Alto Network Traffic, Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Palo Alto Network Traffic, Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000ClRtCAK, https://unit42.paloaltonetworks.com/tor-traffic-enterprise-networks/#:~:text=For%20enterprises%20concerned%20about%20the, the%20most%20important%20security%20risks.",
              "mitre": [
                "T1090.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"TOR Traffic\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Palo Alto Network Traffic, Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1090.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"TOR Traffic\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
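              "_example": "ESCU detections end with a `<detection_name>_filter` macro (default `search *`) intended for local tuning. A minimal suppression sketch: redefine that macro (Settings → Advanced search → Search macros) to drop known-good sources. The macro name `tor_traffic_filter` and the `approved_tor_sources.csv` lookup are assumptions, not shipped content:\n\n```spl\nsearch NOT [| inputlookup approved_tor_sources.csv | fields src]\n```",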
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies allowed network traffic to The Onion Router (TOR), an anonymity network often exploited for malicious activities — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 29.0,
          "qd": {
            "gold": 2,
            "silver": 0,
            "bronze": 75,
            "none": 0
          }
        },
        {
          "i": "10.2",
          "n": "Intrusion Detection/Prevention (IDS/IPS)",
          "u": [
            {
              "i": "10.2.1",
              "n": "Alert Severity Trending",
              "c": "high",
              "f": "beginner",
              "v": "Trending IDS alerts reveals attack patterns, campaign surges, and tuning opportunities. Supports SOC workload planning.",
              "t": "TA-suricata, Cisco Secure Firewall syslog",
              "d": "IDS/IPS alert logs",
              "q": "index=ids sourcetype=\"snort:alert\"\n| timechart span=1h count by priority",
              "m": "Forward IDS alerts to Splunk via syslog. Normalize severity/priority fields. Track alert volume by severity over time. Identify noisy signatures for tuning. Alert on sudden spikes in high-severity events.",
              "z": "Stacked area (alerts by severity), Line chart (alert volume trend), Table (top alerts today).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA-suricata, Cisco Secure Firewall syslog.\n• Ensure the following data sources are available: IDS/IPS alert logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward IDS alerts to Splunk via syslog. Normalize severity/priority fields. Track alert volume by severity over time. Identify noisy signatures for tuning. Alert on sudden spikes in high-severity events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ids sourcetype=\"snort:alert\"\n| timechart span=1h count by priority\n```\n\nUnderstanding this SPL\n\n**Alert Severity Trending** — Trending IDS alerts reveals attack patterns, campaign surges, and tuning opportunities. Supports SOC workload planning.\n\nDocumented **Data sources**: IDS/IPS alert logs. **App/TA** (typical add-on context): TA-suricata, Cisco Secure Firewall syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ids; **sourcetype**: snort:alert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ids, sourcetype=\"snort:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by priority** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.src IDS_Attacks.signature IDS_Attacks.severity span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Alert Severity Trending** — Trending IDS alerts reveals attack patterns, campaign surges, and tuning opportunities. Supports SOC workload planning.\n\nDocumented **Data sources**: IDS/IPS alert logs. **App/TA** (typical add-on context): TA-suricata, Cisco Secure Firewall syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area (alerts by severity), Line chart (alert volume trend), Table (top alerts today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how many network-threat alerts fire at each severity over time, so we can spot surges, noisy rules, and attack campaigns that need analyst attention or tuning.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.src IDS_Attacks.signature IDS_Attacks.severity span=1h\n| sort -count",
              "e": [
                "cisco",
                "suricata",
                "syslog"
              ],
              "em": [
                "cisco_firepower"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.2",
              "n": "Top Targeted Hosts",
              "c": "high",
              "f": "beginner",
              "v": "Identifies the most-attacked internal hosts, prioritizing vulnerability remediation and incident investigation.",
              "t": "IDS/IPS TA",
              "d": "IDS/IPS alert logs (destination host)",
              "q": "index=ids sourcetype=\"snort:alert\" priority<=2\n| stats count, dc(signature) as unique_sigs by dest\n| sort 20 -count\n| head 20",
              "m": "Parse destination IP from IDS alerts. Aggregate by target host. Enrich with CMDB data (asset owner, criticality). Alert when a single host receives multiple high-severity alerts. Trigger vulnerability scan for top targets.",
              "z": "Table (top targeted hosts), Bar chart (alerts by host), Geo map (source attackers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IDS/IPS TA.\n• Ensure the following data sources are available: IDS/IPS alert logs (destination host).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse destination IP from IDS alerts. Aggregate by target host. Enrich with CMDB data (asset owner, criticality). Alert when a single host receives multiple high-severity alerts. Trigger vulnerability scan for top targets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ids sourcetype=\"snort:alert\" priority<=2\n| stats count, dc(signature) as unique_sigs by dest\n| sort 20 -count\n| head 20\n```\n\nUnderstanding this SPL\n\n**Top Targeted Hosts** — Identifies the most-attacked internal hosts, prioritizing vulnerability remediation and incident investigation.\n\nDocumented **Data sources**: IDS/IPS alert logs (destination host). **App/TA** (typical add-on context): IDS/IPS TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ids; **sourcetype**: snort:alert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ids, sourcetype=\"snort:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count dc(IDS_Attacks.signature) as sig_count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.dest span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Top Targeted Hosts** — Identifies the most-attacked internal hosts, prioritizing vulnerability remediation and incident investigation.\n\nDocumented **Data sources**: IDS/IPS alert logs (destination host). **App/TA** (typical add-on context): IDS/IPS TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top targeted hosts), Bar chart (alerts by host), Geo map (source attackers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see which internal systems get the most high-priority network threat hits so we can patch, investigate, and calm down the noisiest assets first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count dc(IDS_Attacks.signature) as sig_count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.dest span=1h\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.3",
              "n": "Signature Coverage Gaps",
              "c": "medium",
              "f": "advanced",
              "v": "Identifying network segments without IDS coverage ensures comprehensive threat detection across the infrastructure.",
              "t": "IDS sensor health monitoring",
              "d": "Sensor health reports, network segment inventory",
              "q": "| inputlookup network_segments.csv\n| join type=left max=1 segment_name\n    [search index=ids sourcetype=\"snort:alert\" earliest=-7d\n     | stats count by sensor, segment_name]\n| where isnull(count) OR count=0\n| table segment_name, expected_sensor, count",
              "m": "Maintain network segment inventory with expected IDS sensor mapping. Compare against actual alert data. Alert when a segment has no IDS events for >7 days (sensor may be down or misconfigured).",
              "z": "Table (uncovered segments), Status grid (segment × coverage), Single value (coverage %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IDS sensor health monitoring.\n• Ensure the following data sources are available: Sensor health reports, network segment inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain network segment inventory with expected IDS sensor mapping. Compare against actual alert data. Alert when a segment has no IDS events for >7 days (sensor may be down or misconfigured).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup network_segments.csv\n| join type=left max=1 segment_name\n    [search index=ids sourcetype=\"snort:alert\" earliest=-7d\n     | stats count by sensor, segment_name]\n| where isnull(count) OR count=0\n| table segment_name, expected_sensor, count\n```\n\nUnderstanding this SPL\n\n**Signature Coverage Gaps** — Identifying network segments without IDS coverage ensures comprehensive threat detection across the infrastructure.\n\nDocumented **Data sources**: Sensor health reports, network segment inventory. **App/TA** (typical add-on context): IDS sensor health monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(count) OR count=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Signature Coverage Gaps**): table segment_name, expected_sensor, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (uncovered segments), Status grid (segment × coverage), Single value (coverage %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare our network map to the alerts we are actually seeing so we are not blind on segments where a sensor is down, mis-aimed, or missing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.4",
              "n": "False Positive Tracking",
              "c": "medium",
              "f": "advanced",
              "v": "High false positive rates waste analyst time and cause alert fatigue. Systematic tracking drives tuning improvements.",
              "t": "IDS TA + analyst workflow",
              "d": "IDS alerts + analyst disposition data (true/false positive)",
              "q": "index=ids sourcetype=\"snort:alert\"\n| join max=1 signature [| inputlookup signature_dispositions.csv]\n| stats count(eval(disposition=\"false_positive\")) as fp, count as total by signature\n| eval fp_rate=round(fp/total*100,1)\n| where fp_rate > 50\n| sort -fp_rate",
              "m": "Track analyst dispositions for IDS alerts (true positive, false positive, benign). Calculate false positive rate per signature. Flag signatures with >50% FP rate for tuning. Report on overall alert quality metrics.",
              "z": "Bar chart (FP rate by signature), Line chart (FP rate trend), Table (signatures needing tuning).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IDS TA + analyst workflow.\n• Ensure the following data sources are available: IDS alerts + analyst disposition data (true/false positive).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack analyst dispositions for IDS alerts (true positive, false positive, benign). Calculate false positive rate per signature. Flag signatures with >50% FP rate for tuning. Report on overall alert quality metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ids sourcetype=\"snort:alert\"\n| join max=1 signature [| inputlookup signature_dispositions.csv]\n| stats count(eval(disposition=\"false_positive\")) as fp, count as total by signature\n| eval fp_rate=round(fp/total*100,1)\n| where fp_rate > 50\n| sort -fp_rate\n```\n\nUnderstanding this SPL\n\n**False Positive Tracking** — High false positive rates waste analyst time and cause alert fatigue. Systematic tracking drives tuning improvements.\n\nDocumented **Data sources**: IDS alerts + analyst disposition data (true/false positive). **App/TA** (typical add-on context): IDS TA + analyst workflow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ids; **sourcetype**: snort:alert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ids, sourcetype=\"snort:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `stats` rolls up events into metrics; results are split **by signature** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fp_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fp_rate > 50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (FP rate by signature), Line chart (FP rate trend), Table (signatures needing tuning).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track which network-threat rule names keep getting marked as false alarms so the team can tune the rules that waste the most time.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.5",
              "n": "Lateral Movement Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "IDS detections on internal network segments indicate an attacker has breached the perimeter and is moving laterally.",
              "t": "IDS TA (internal sensors)",
              "d": "IDS alerts from internal/east-west sensors",
              "q": "index=ids sourcetype=\"snort:alert\" sensor_zone=\"internal\"\n| search category IN (\"attempted-admin\",\"trojan-activity\",\"policy-violation\",\"misc-attack\")\n| stats count by src, dest, signature\n| sort -count",
              "m": "Deploy IDS sensors on internal network segments (not just perimeter). Forward alerts to Splunk. Alert on any high-severity internal detections. Correlate with AD authentication events and endpoint data for full attack chain visibility.",
              "z": "Network diagram (lateral movement paths), Table (internal IDS alerts), Timeline (attack progression).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IDS TA (internal sensors).\n• Ensure the following data sources are available: IDS alerts from internal/east-west sensors.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy IDS sensors on internal network segments (not just perimeter). Forward alerts to Splunk. Alert on any high-severity internal detections. Correlate with AD authentication events and endpoint data for full attack chain visibility.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ids sourcetype=\"snort:alert\" sensor_zone=\"internal\"\n| search category IN (\"attempted-admin\",\"trojan-activity\",\"policy-violation\",\"misc-attack\")\n| stats count by src, dest, signature\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Lateral Movement Detection** — IDS detections on internal network segments indicate an attacker has breached the perimeter and is moving laterally.\n\nDocumented **Data sources**: IDS alerts from internal/east-west sensors. **App/TA** (typical add-on context): IDS TA (internal sensors). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ids; **sourcetype**: snort:alert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ids, sourcetype=\"snort:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by src, dest, signature** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  where IDS_Attacks.src_category=\"internal\" OR IDS_Attacks.dest_category=\"internal\"\n  by IDS_Attacks.src IDS_Attacks.dest IDS_Attacks.signature span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Lateral Movement Detection** — IDS detections on internal network segments indicate an attacker has breached the perimeter and is moving laterally.\n\nDocumented **Data sources**: IDS alerts from internal/east-west sensors. **App/TA** (typical add-on context): IDS TA (internal sensors). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Network diagram (lateral movement paths), Table (internal IDS alerts), Timeline (attack progression).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for known attack patterns on internal network paths so we can tell when someone is moving around inside the environment after a breach, not just knocking on the front door.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  where IDS_Attacks.src_category=\"internal\" OR IDS_Attacks.dest_category=\"internal\"\n  by IDS_Attacks.src IDS_Attacks.dest IDS_Attacks.signature span=1h\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.6",
              "n": "Detect New Login Attempts to Routers",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies new login attempts to routers. It leverages authentication logs from the ES Assets and Identity Framework, focusing on assets categorized as routers. The detection flags connections that have not been observed in the past 30 days. This activity is significant because unauthorized access to routers can lead to network disruptions or data interception. If confirmed malicious, attackers could gain control over network traffic, potentially leading to data breaches o…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count earliest(_time) as earliest latest(_time) as latest FROM datamodel=Authentication\n      WHERE Authentication.dest_category=router\n      BY Authentication.dest Authentication.user\n    | eval isOutlier=if(earliest >= relative_time(now(), \"-30d@d\"), 1, 0)\n    | where isOutlier=1\n    | `security_content_ctime(earliest)`\n    | `security_content_ctime(latest)`\n    | `drop_dm_object_name(\"Authentication\")`\n    | `detect_new_login_attempts_to_routers_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate router connections may appear as new connections",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "system",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect New Login Attempts to Routers\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect New Login Attempts to Routers\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate router connections may appear as new connections\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot new login attempts to routers — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.7",
              "n": "ESXi Lockdown Mode Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies when Lockdown Mode is disabled on an ESXi host, which can indicate that a threat actor is attempting to weaken host security controls. Disabling Lockdown Mode allows broader remote access via SSH or the host client and may precede further malicious actions such as data exfiltration, lateral movement, or VM tampering.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1562"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Lockdown Mode Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Lockdown Mode Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot when Lockdown Mode is disabled on an ESXi host, which can indicate that a threat actor is attempting to weaken host security controls — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.8",
              "n": "ESXi Malicious VIB Forced Install",
              "c": "high",
              "f": "intermediate",
              "v": "Detects potentially malicious installation of VMware Installation Bundles (VIBs) using the --force flag. The --force option bypasses signature and compatibility checks, allowing unsigned, community-supported, or incompatible VIBs to be installed on an ESXi host. This behavior is uncommon in normal administrative operations and is often observed in post-compromise scenarios where adversaries attempt to install backdoored or unauthorized kernel modules, drivers, or monitoring tools to establish pe…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some third party vendor VIBs or patches may require the force option.",
              "refs": "https://detect.fyi/detecting-and-responding-to-esxi-compromise-with-splunk-f33998ce7823",
              "mitre": [
                "T1505.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Malicious VIB Forced Install\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Malicious VIB Forced Install\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some third party vendor VIBs or patches may require the force option.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch potentially malicious installation of VMware Installation Bundles (VIBs) using--force flag.--force option bypasses signature and compatibility checks, allowing unsigned, community-supported, or incompatible VIBs to be installed on an ESXi host — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.9",
              "n": "ESXi Sensitive Files Accessed",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies access to sensitive system and configuration files on an ESXi host, including authentication data, service configurations, and VMware-specific management settings. Interaction with these files may indicate adversary reconnaissance, credential harvesting, or preparation for privilege escalation, lateral movement, or persistence.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may access these files for initial setup or troubleshooting. Limited in most environments. Tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1003.008",
                "T1005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Sensitive Files Accessed\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.008, T1005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Sensitive Files Accessed\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may access these files for initial setup or troubleshooting. Limited in most environments. Tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
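\n\nOptional illustrative sketch (not the packaged ESCU logic)\n\nThe scheduled ESCU correlation search is the authoritative detection. As a hedged way to confirm that sensitive-file access is visible in your raw ESXi syslog, the sketch below keyword-matches a few commonly targeted paths; `index=esxi`, `sourcetype=vmw-syslog`, and the path list are assumptions, not the detection's full file set:\n\n```spl\nindex=esxi sourcetype=vmw-syslog (\"/etc/shadow\" OR \"/etc/passwd\" OR \"/etc/vmware/hostd/authorization.xml\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n| convert ctime(firstTime) ctime(lastTime)\n```",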
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot access to sensitive system and configuration files on an ESXi host, including authentication data, service configurations, and VMware-specific management settings — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.10",
              "n": "ESXi Shared or Stolen Root Account",
              "c": "high",
              "f": "intermediate",
              "v": "This detection monitors for signs of a shared or potentially compromised root account on ESXi hosts by tracking the number of unique IP addresses logging in as root within a short time window. Multiple logins from different IPs in a brief period may indicate credential misuse, lateral movement, or account compromise.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward logs to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed",
              "refs": "https://detect.fyi/detecting-and-responding-to-esxi-compromise-with-splunk-f33998ce7823",
              "mitre": [
                "T1078"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Shared or Stolen Root Account\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Shared or Stolen Root Account\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
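\n\nOptional illustrative sketch (not the packaged ESCU logic)\n\nThe packaged correlation search is authoritative. To approximate the documented behavior (several source addresses using root in a short window) against raw ESXi syslog, a minimal sketch follows; `index=esxi`, `sourcetype=vmw-syslog`, and the `src_ip` field name are assumptions that depend on your add-on's extractions:\n\n```spl\nindex=esxi sourcetype=vmw-syslog user=root (login OR \"UserLoginSessionEvent\")\n| bin _time span=5m\n| stats dc(src_ip) as distinct_sources values(src_ip) as sources by _time, host\n| where distinct_sources > 1\n```",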
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for the same root account on ESXi being used from many different source addresses in a short window, so we can tell a shared or stolen credential from a normal jump host or support session.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.11",
              "n": "Okta Authentication Failed During MFA Challenge",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies failed authentication attempts during the Multi-Factor Authentication (MFA) challenge in an Okta tenant. It uses the Authentication datamodel to detect specific failed events where the authentication signature is `user.authentication.auth_via_mfa`. This activity is significant as it may indicate an adversary attempting to authenticate with compromised credentials on an account with MFA enabled. If confirmed malicious, this could suggest an ongoing attempt to byp…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A user may have accidentally entered the wrong credentials during the MFA challenge. If the user is new to MFA, they may have trouble authenticating. Ensure that the user is aware of the MFA process and has the correct credentials.",
              "refs": "https://splunkbase.splunk.com/app/6553",
              "mitre": [
                "T1078.004",
                "T1586.003",
                "T1621"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Authentication Failed During MFA Challenge\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004, T1586.003, T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Authentication Failed During MFA Challenge\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A user may have accidentally entered the wrong credentials during the MFA challenge. If the user is new to MFA, they may have trouble authenticating. Ensure that the user is aware of the MFA process and has the correct credentials.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
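\n\nOptional illustrative sketch (not the packaged ESCU logic)\n\nThe description notes that the detection works from the CIM Authentication data model, so a hedged approximation can be expressed with `tstats` (assuming your Okta add-on populates that model; `summariesonly` and `drop_dm_object_name` are the standard ES/CIM macros):\n\n```spl\n| tstats `summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Authentication\n  where Authentication.signature=user.authentication.auth_via_mfa Authentication.action=failure\n  by Authentication.src, Authentication.user\n| `drop_dm_object_name(\"Authentication\")`\n| convert ctime(firstTime) ctime(lastTime)\n```",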
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot failed authentication attempts during the Multi-Factor Authentication (multi-factor sign-in) challenge in an Okta tenant — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.12",
              "n": "ASL AWS Concurrent Sessions From Different Ips",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an AWS IAM account with concurrent sessions originating from more than one unique IP address within a 5-minute span. This detection leverages AWS CloudTrail logs, specifically the `DescribeEventAggregates` API call, to identify multiple IP addresses associated with the same user session. This behavior is significant as it may indicate a session hijacking attack, where an adversary uses stolen session cookies to access AWS resources from a different location. If …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A user with concurrent sessions from different Ips may also represent the legitimate use of more than one device. Filter as needed and/or customize the threshold to fit your environment.",
              "refs": "https://attack.mitre.org/techniques/T1185/, https://breakdev.org/evilginx-2-next-generation-of-phishing-2fa-tokens/, https://github.com/kgretzky/evilginx2",
              "mitre": [
                "T1185"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Concurrent Sessions From Different Ips\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1185. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Concurrent Sessions From Different Ips\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A user with concurrent sessions from different Ips may also represent the legitimate use of more than one device. Filter as needed and/or customize the threshold to fit your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
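\n\nOptional illustrative sketch (not the packaged ESCU logic)\n\nA hedged approximation of the documented logic over Amazon Security Lake data, reusing the `amazon_security_lake` macro and OCSF field names that appear elsewhere in this catalog; the 5-minute window and the more-than-one-IP threshold mirror the description and should be tuned per the false-positive notes:\n\n```spl\n`amazon_security_lake` api.operation=DescribeEventAggregates\n| bin _time span=5m\n| stats dc(src_endpoint.ip) as distinct_ips values(src_endpoint.ip) as src by _time, actor.user.uid\n| where distinct_ips > 1\n```",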
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot an AWS access control account with concurrent sessions originating from more than one unique IP address within a 5-minute span — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.13",
              "n": "ASL AWS Credential Access GetPasswordData",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifiesGetPasswordData API calls in your AWS account. It leverages  CloudTrail logs from Amazon Security Lake to detect this activity by counting the distinct instance IDs accessed. This behavior is significant as it may indicate an attempt to retrieve encrypted administrator passwords for running Windows instances, which is a critical security concern. If confirmed malicious, attackers could gain unauthorized access to administrative credentials, potentially leading to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator tooling or automated scripts may make these calls but it is highly unlikely to make several calls in a short period of time.",
              "refs": "https://attack.mitre.org/techniques/T1552/, https://stratus-red-team.cloud/attack-techniques/AWS/aws.credential-access.ec2-get-password-data/",
              "mitre": [
                "T1110.001",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Credential Access GetPasswordData\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.001, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Credential Access GetPasswordData\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator tooling or automated scripts may make these calls but it is highly unlikely to make several calls in a short period of time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
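\n\nOptional illustrative sketch (not the packaged ESCU logic)\n\nA hedged approximation over Amazon Security Lake data, reusing the `amazon_security_lake` macro and OCSF field names used elsewhere in this catalog; the 10-calls-in-5-minutes threshold is borrowed from the non-ASL variant of this detection and is an assumption to tune:\n\n```spl\n`amazon_security_lake` api.operation=GetPasswordData\n| bin _time span=5m\n| stats count values(src_endpoint.ip) as src by _time, actor.user.uid\n| where count > 10\n```",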
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "IdentifiesGetPasswordData calls in your AWS account — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.14",
              "n": "ASL AWS IAM Successful Group Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the successful deletion of a group within AWS IAM, leveraging CloudTrail IAM events. This action, while not inherently malicious, can serve as a precursor to more sinister activities, such as unauthorized access or privilege escalation attempts. By monitoring for such deletions, the analytic aids in identifying potential preparatory steps towards an attack, allowing for early detection and mitigation. The identification of this behavior is crucial for a SOC to prev…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "`amazon_security_lake` api.operation=DeleteGroup status=Success\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY actor.user.uid api.operation api.service.name\n           http_request.user_agent src_endpoint.ip actor.user.account.uid\n           cloud.provider cloud.region\n      | rename actor.user.uid as user api.operation as action api.service.name as dest http_request.user_agent as user_agent src_endpoint.ip as src actor.user.account.uid as vendor_account cloud.provider as vendor_product cloud.region as vendor_region\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `asl_aws_iam_successful_group_deletion_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users. Not every user with AWS access should have permission to delete groups (least privilege).",
              "refs": "https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/delete-group.html, https://docs.aws.amazon.com/IAM/latest/APIReference/API_DeleteGroup.html",
              "mitre": [
                "T1069.003",
                "T1098"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS IAM Successful Group Deletion\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.003, T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS IAM Successful Group Deletion\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users. Not every user with AWS access should have permission to delete groups (least privilege).\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the successful deletion of a group within AWS access control, leveraging CloudTrail access control events — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.15",
              "n": "AWS Concurrent Sessions From Different Ips",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an AWS IAM account with concurrent sessions originating from more than one unique IP address within a 5-minute window. It leverages AWS CloudTrail logs, specifically the `DescribeEventAggregates` event, to detect this behavior. This activity is significant as it may indicate a session hijacking attack, where an adversary uses stolen session cookies to access AWS resources from a different location. If confirmed malicious, this could allow unauthorized access to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DescribeEventAggregates",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail DescribeEventAggregates ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A user with concurrent sessions from different Ips may also represent the legitimate use of more than one device. Filter as needed and/or customize the threshold to fit your environment.",
              "refs": "https://attack.mitre.org/techniques/T1185/, https://breakdev.org/evilginx-2-next-generation-of-phishing-2fa-tokens/, https://github.com/kgretzky/evilginx2",
              "mitre": [
                "T1185"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Concurrent Sessions From Different Ips\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DescribeEventAggregates. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1185. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Concurrent Sessions From Different Ips\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A user with concurrent sessions from different Ips may also represent the legitimate use of more than one device. Filter as needed and/or customize the threshold to fit your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
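\n\nOptional illustrative sketch (not the packaged ESCU logic)\n\nA hedged raw-CloudTrail approximation of the documented logic; `index=aws` is an assumption, `aws:cloudtrail` is the Splunk Add-on for AWS sourcetype, and the `src_ip` / `user_arn` field names depend on that add-on's extractions:\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=DescribeEventAggregates src_ip!=\"AWS Internal\"\n| bin _time span=5m\n| stats dc(src_ip) as distinct_ips values(src_ip) as src by _time, user_arn\n| where distinct_ips > 1\n```",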
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot an AWS access control account with concurrent sessions originating from more than one unique IP address within a 5-minute window — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.16",
              "n": "AWS Credential Access GetPasswordData",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies more than 10 GetPasswordData API calls within a 5-minute window in your AWS account. It leverages AWS CloudTrail logs to detect this activity by counting the distinct instance IDs accessed. This behavior is significant as it may indicate an attempt to retrieve encrypted administrator passwords for running Windows instances, which is a critical security concern. If confirmed malicious, attackers could gain unauthorized access to administrative credentials, potent…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail GetPasswordData",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail GetPasswordData ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator tooling or automated scripts may make these calls but it is highly unlikely to make several calls in a short period of time.",
              "refs": "https://attack.mitre.org/techniques/T1552/, https://stratus-red-team.cloud/attack-techniques/AWS/aws.credential-access.ec2-get-password-data/",
              "mitre": [
                "T1110.001",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Credential Access GetPasswordData\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail GetPasswordData. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.001, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Credential Access GetPasswordData\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator tooling or automated scripts may make these calls but it is highly unlikely to make several calls in a short period of time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
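\n\nIllustrative Detection Logic (Sketch)\n\nThe shipped detection SPL lives in the ESCU Correlation Search definition and may differ from what is shown here. The following is a rough, hypothetical sketch of the documented approach: count GetPasswordData calls and the distinct instance IDs they touch in 5-minute windows. It assumes CloudTrail lands in index=aws with sourcetype aws:cloudtrail; rename both to match your environment.\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=GetPasswordData\n| bin span=5m _time\n| stats count as api_calls dc(requestParameters.instanceId) as distinct_instances values(userIdentity.arn) as arn by _time, src\n| where api_calls > 10\n```\n\nTreat the threshold and field names as starting points and validate them against your own CloudTrail events before alerting on this sketch.",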
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot more than 10 GetPasswordData calls within a 5-minute window in your AWS account — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.17",
              "n": "AWS S3 Exfiltration Behavior Identified",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential AWS S3 exfiltration behavior by correlating multiple risk events related to Collection and Exfiltration techniques. It leverages risk events from AWS sources, focusing on instances where two or more unique analytics and distinct MITRE ATT&CK IDs are triggered for a specific risk object. This activity is significant as it may indicate an ongoing data exfiltration attempt, which is critical for security teams to monitor. If confirmed malicious, this coul…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "alse positives may be present based on automated tooling or system administrators. Filter as needed.",
              "refs": "https://labs.nettitude.com/blog/how-to-exfiltrate-aws-ec2-data/, https://stratus-red-team.cloud/attack-techniques/AWS/aws.exfiltration.ec2-share-ebs-snapshot/, https://hackingthe.cloud/aws/enumeration/loot_public_ebs_snapshots/",
              "mitre": [
                "T1537"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS S3 Exfiltration Behavior Identified\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1537. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS S3 Exfiltration Behavior Identified\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: alse positives may be present based on automated tooling or system administrators. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. 
Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
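\n\nIllustrative Detection Logic (Sketch)\n\nThe shipped detection SPL lives in the ESCU Correlation Search definition and may differ from what is shown here. The following is a rough, hypothetical sketch of the documented approach: find risk objects that have accumulated two or more distinct analytics and two or more distinct MITRE ATT&CK technique IDs under the Collection and Exfiltration tactics. It assumes the Risk data model is populated and accelerated.\n\n```spl\n| tstats summariesonly=true count\n  dc(All_Risk.source) as unique_analytics\n  values(All_Risk.annotations.mitre_attack.mitre_technique_id) as mitre_ids\n  from datamodel=Risk.All_Risk\n  where All_Risk.annotations.mitre_attack.mitre_tactic IN (\"collection\", \"exfiltration\")\n  by All_Risk.risk_object\n| eval unique_mitre_ids = mvcount(mitre_ids)\n| where unique_analytics >= 2 AND unique_mitre_ids >= 2\n```\n\nConfirm the All_Risk field names with `| datamodel Risk All_Risk search` before scheduling anything based on this sketch.",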
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot potential AWS cloud storage exfiltration behavior by correlating multiple risk events related to Collection and Exfiltration techniques — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.18",
              "n": "Azure AD Concurrent Sessions From Different Ips",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an Azure AD account with concurrent sessions originating from multiple unique IP addresses within a 5-minute window. It leverages Azure Active Directory NonInteractiveUserSignInLogs to identify this behavior by analyzing successful authentication events and counting distinct source IPs. This activity is significant as it may indicate session hijacking, where an attacker uses stolen session cookies to access corporate resources from a different location. If confirme…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A user with concurrent sessions from different Ips may also represent the legitimate use of more than one device. Filter as needed and/or customize the threshold to fit your environment. Also consider the geographic location of the IP addresses and filter out IP space that belong to your organization.",
              "refs": "https://attack.mitre.org/techniques/T1185/, https://breakdev.org/evilginx-2-next-generation-of-phishing-2fa-tokens/, https://github.com/kgretzky/evilginx2",
              "mitre": [
                "T1185"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Concurrent Sessions From Different Ips\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1185. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Concurrent Sessions From Different Ips\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A user with concurrent sessions from different Ips may also represent the legitimate use of more than one device. Filter as needed and/or customize the threshold to fit your environment. Also consider the geographic location of the IP addresses and filter out IP space that belong to your organization.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
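\n\nIllustrative Detection Logic (Sketch)\n\nThe shipped detection SPL lives in the ESCU Correlation Search definition and may differ from what is shown here. The following is a rough, hypothetical sketch of the documented approach: count distinct source IPs per user across 5-minute windows of successful non-interactive sign-ins. It reuses the `azure_monitor_aad` macro and field names that appear in the related Azure AD detections in this catalog; verify both against your ingest.\n\n```spl\n`azure_monitor_aad` category=NonInteractiveUserSignInLogs properties.status.errorCode=0\n| rename properties.* as *\n| bin span=5m _time\n| stats count dc(src) as unique_ips values(src) as ips by _time, user\n| where unique_ips > 1\n```\n\nRaise the unique_ips threshold or exclude your corporate IP space if legitimate multi-device use is common in your tenant.",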
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch an Azure AD account with concurrent sessions originating from multiple unique IP addresses within a 5-minute window — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.19",
              "n": "Azure AD Multi-Source Failed Authentications Spike",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential distributed password spraying attacks in an Azure AD environment. It identifies a spike in failed authentication attempts across various user-and-IP combinations from multiple source IPs and countries, using different user agents. This detection leverages Azure AD SignInLogs, focusing on error code 50126 for failed authentications. This activity is significant as it indicates an adversary's attempt to bypass security controls by distributing login attempt…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "`azure_monitor_aad` category=*SignInLogs properties.status.errorCode=50126 properties.authenticationDetails{}.succeeded=false\n      | rename properties.* as *\n      | bucket span=5m _time\n      | eval uniqueIPUserCombo = src . \"-\" . user\n      | rename userAgent as user_agent\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime dc(uniqueIPUserCombo) as uniqueIpUserCombinations, dc(user) as uniqueUsers, dc(src) as uniqueIPs, dc(user_agent) as uniqueUserAgents, dc(location.countryOrRegion) as uniqueCountries values(location.countryOrRegion) as countries  values(action) as action values(dest) as dest values(user) as user values(src) as src values(vendor_account) as vendor_account values(vendor_product) as vendor_product values(user_agent) as user_agent\n      | where uniqueIpUserCombinations > 20 AND uniqueUsers > 20 AND uniqueIPs > 20 AND uniqueUserAgents >= 1\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `azure_ad_multi_source_failed_authentications_spike_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection may yield false positives in scenarios where legitimate bulk sign-in activities occur, such as during company-wide system updates or when users are accessing resources from varying locations in a short time frame, such as in the case of VPNs or cloud services that rotate IP addresses. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/security/compass/incident-response-playbook-password-spray, https://www.cisa.gov/uscert/ncas/alerts/aa21-008a, https://learn.microsoft.com/azure/active-directory/reports-monitoring/reference-sign-ins-error-codes",
              "mitre": [
                "T1110.003",
                "T1110.004",
                "T1586.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Multi-Source Failed Authentications Spike\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003, T1110.004, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Multi-Source Failed Authentications Spike\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection may yield false positives in scenarios where legitimate bulk sign-in activities occur, such as during company-wide system updates or when users are accessing resources from varying locations in a short time frame, such as in the case of VPNs or cloud services that rotate IP addresses. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch potential distributed password spraying attacks in an Azure AD environment — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.20",
              "n": "Azure AD Multiple AppIDs and UserAgents Authentication Spike",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects unusual authentication activity in Azure AD, specifically when a single user account has over 8 authentication attempts using 3+ unique application IDs and 5+ unique user agents within a short period. It leverages Azure AD audit logs, focusing on authentication events and using statistical thresholds. This behavior is significant as it may indicate an adversary probing for MFA requirements. If confirmed malicious, it suggests a compromised account, potentially lead…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Sign-in activity",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Sign-in activity ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Rapid authentication from the same user using more than 5 different user agents and 3 application IDs is highly unlikely under normal circumstances. However, there are potential scenarios that could lead to false positives.",
              "refs": "https://attack.mitre.org/techniques/T1078/, https://www.blackhillsinfosec.com/exploiting-mfa-inconsistencies-on-microsoft-services/, https://github.com/dafthack/MFASweep, https://www.youtube.com/watch?v=SK1zgqaAZ2E",
              "mitre": [
                "T1078"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Multiple AppIDs and UserAgents Authentication Spike\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Sign-in activity. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Multiple AppIDs and UserAgents Authentication Spike\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Rapid authentication from the same user using more than 5 different user agents and 3 application IDs is highly unlikely under normal circumstances. However, there are potential scenarios that could lead to false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch when one account suddenly signs in through many different apps and browsers in a short window, so we can spot guessing, token abuse, or a hijacked identity before it spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.21",
              "n": "Azure AD Service Principal Authentication",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies authentication events of service principals in Azure Active Directory. It leverages the `azure_monitor_aad` data source, specifically targeting \"Sign-in activity\" within ServicePrincipalSignInLogs. This detection gathers details such as sign-in frequency, timing, source IPs, and accessed resources. Monitoring these events is significant for SOC teams to distinguish between normal application authentication and potential anomalies, which could indicate compromise…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Sign-in activity",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Sign-in activity ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Service Principals will legitimally authenticate remotely to your tenant. Implementing this detection after establishing a baseline enables a more accurate identification of security threats, ensuring proactive and informed responses to safeguard the Azure AD environment. source ips.",
              "refs": "https://attack.mitre.org/techniques/T1078/004/, https://learn.microsoft.com/en-us/entra/identity/monitoring-health/concept-sign-ins#service-principal-sign-ins",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Service Principal Authentication\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Sign-in activity. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Service Principal Authentication\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Service Principals will legitimally authenticate remotely to your tenant. Implementing this detection after establishing a baseline enables a more accurate identification of security threats, ensuring proactive and informed responses to safeguard the Azure AD environment. source ips.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
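\n\nIllustrative Detection Logic (Sketch)\n\nThe shipped detection SPL lives in the ESCU Correlation Search definition and may differ from what is shown here. The following is a rough, hypothetical sketch of the documented approach: profile each service principal’s sign-in frequency, source IPs, and accessed resources so that new sources stand out against the baseline. The `azure_monitor_aad` macro and the field names are assumptions; verify them against your ServicePrincipalSignInLogs events.\n\n```spl\n`azure_monitor_aad` category=ServicePrincipalSignInLogs\n| rename properties.* as *\n| stats count earliest(_time) as firstTime latest(_time) as lastTime dc(src) as unique_ips values(src) as src values(resourceDisplayName) as resources by servicePrincipalName\n| sort - unique_ips\n```\n\nRun it over 30 days or more to establish the baseline, then investigate service principals whose source set suddenly grows.",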
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot authentication events of service principals in Azure Active Directory — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.22",
              "n": "Azure AD Unusual Number of Failed Authentications From Ip",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a single source IP failing to authenticate with multiple valid users, potentially indicating a Password Spraying attack against an Azure Active Directory tenant. It uses Azure SignInLogs data and calculates the standard deviation for source IPs, applying the 3-sigma rule to detect unusual numbers of failed authentication attempts. This activity is significant as it may signal an adversary attempting to gain initial access or elevate privileges. If confirmed mali…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$userPrincipalName$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A source Ip failing to authenticate with multiple users is not a common for legitimate behavior.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/security/compass/incident-response-playbook-password-spray, https://www.cisa.gov/uscert/ncas/alerts/aa21-008a, https://learn.microsoft.com/azure/active-directory/reports-monitoring/reference-sign-ins-error-codes",
              "mitre": [
                "T1110.003",
                "T1110.004",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Unusual Number of Failed Authentications From Ip\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003, T1110.004, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Unusual Number of Failed Authentications From Ip\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A source Ip failing to authenticate with multiple users is not a common for legitimate behavior.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
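\n\nIllustrative Detection Logic (Sketch)\n\nThe shipped detection SPL lives in the ESCU Correlation Search definition and may differ from what is shown here. The following is a rough, hypothetical sketch of the documented 3-sigma approach: count distinct users failing from each source IP and flag sources more than three standard deviations above the mean. It reuses the `azure_monitor_aad` macro and error code 50126 that appear in the related spike detection in this catalog.\n\n```spl\n`azure_monitor_aad` category=SignInLogs properties.status.errorCode=50126\n| rename properties.* as *\n| bin span=5m _time\n| stats dc(user) as unique_users by _time, src\n| eventstats avg(unique_users) as avg_users stdev(unique_users) as stdev_users\n| where unique_users > avg_users + 3 * stdev_users\n```\n\nThe eventstats baseline is computed over the search window itself; for production use, persist a longer baseline in a lookup or summary index.",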
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot a single source IP failing to authenticate with multiple valid users, potentially indicating a Password Spraying attack against an Azure Active Directory tenant — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.23",
              "n": "Circle CI Disable Security Job",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the disabling of security jobs in CircleCI pipelines. It leverages CircleCI log data, renaming and extracting fields such as job names, workflow IDs, user information, commit messages, URLs, and branches. The detection identifies mandatory jobs for each workflow and checks if they were executed. This activity is significant because disabling security jobs can allow malicious code to bypass security checks, leading to potential data breaches, system downtime, and re…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "CircleCI",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires CircleCI ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1554"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Circle CI Disable Security Job\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: CircleCI. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1554. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Circle CI Disable Security Job\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
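              "_splExample": "A minimal per-entity suppression sketch (hypothetical, not ESCU content): exclude risk events for pipeline administrators who legitimately modify CI jobs, assuming the default `risk` index and a `circleci_approved_pipeline_admins` lookup with a `user` column that you create and maintain:\n\n```spl\nindex=risk search_name=\"*Circle CI Disable Security Job*\"\n| search NOT [| inputlookup circleci_approved_pipeline_admins | fields user | rename user as risk_object]\n| stats count values(risk_message) as messages by risk_object\n```\n\nKeeping the exclusion in a lookup preserves detection coverage for everyone else and gives you an auditable allowlist.",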
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the disabling of security jobs in CircleCI pipelines — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.24",
              "n": "Cloud Compute Instance Created With Previously Unseen Image",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of cloud compute instances using previously unseen image IDs. It leverages cloud infrastructure logs to identify new image IDs that have not been observed before. This activity is significant because it may indicate unauthorized or suspicious activity, such as the deployment of malicious payloads or unauthorized access to sensitive information. If confirmed malicious, this could lead to data breaches, unauthorized access, or further compromise of the c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "After a new image is created, the first systems created with that image will cause this alert to fire.  Verify that the image being used was created by a legitimate user.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "user",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cloud Compute Instance Created With Previously Unseen Image\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cloud Compute Instance Created With Previously Unseen Image\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: After a new image is created, the first systems created with that image will cause this alert to fire.  Verify that the image being used was created by a legitimate user.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
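              "_splExample": "A minimal triage sketch (assumptions: Splunk Add-on for AWS with `sourcetype=aws:cloudtrail`; the CloudTrail field paths below are typical but may differ in your ingest) showing when each AMI was first used, so you can confirm whether a flagged image is genuinely new:\n\n```spl\nsourcetype=aws:cloudtrail eventName=RunInstances\n| rename requestParameters.instancesSet.items{}.imageId as imageId\n| stats earliest(_time) as firstSeen latest(_time) as lastSeen dc(userIdentity.arn) as launching_identities by imageId\n| sort firstSeen\n| convert ctime(firstSeen) ctime(lastSeen)\n```\n\nIf the first launch of a new image traces back to a known image-build account, the alert is likely the expected rollout noise described under known false positives.",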
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the creation of cloud compute instances using previously unseen image — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.25",
              "n": "Detect AWS Console Login by User from New Region",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies AWS console login attempts by users from a new region. It leverages AWS CloudTrail events and compares current login regions against a baseline of previously seen regions for each user. This activity is significant as it may indicate unauthorized access attempts or compromised credentials. If confirmed malicious, an attacker could gain unauthorized access to AWS resources, potentially leading to data breaches, resource manipulation, or further lateral movement w…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| tstats earliest(_time) as firstTime latest(_time) as lastTime FROM datamodel=Authentication\n      WHERE Authentication.signature=ConsoleLogin\n      BY Authentication.user Authentication.src\n    | iplocation Authentication.src\n    | `drop_dm_object_name(Authentication)`\n    | rename Region as justSeenRegion\n    | table firstTime lastTime user justSeenRegion\n    | join max=1 user type=outer [\n    | inputlookup previously_seen_users_console_logins\n    | rename Region as previouslySeenRegion\n    | stats min(firstTime) AS earliestseen\n      BY user previouslySeenRegion\n    | fields earliestseen user previouslySeenRegion]\n    | eval userRegion=if(firstTime >= relative_time(now(), \"-24h@h\"), \"New Region\",\"Previously Seen Region\")\n    | where userRegion= \"New Region\"\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | table firstTime lastTime user previouslySeenRegion justSeenRegion userRegion\n    | `detect_aws_console_login_by_user_from_new_region_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "When a legitimate new user logins for the first time, this activity will be detected. Check how old the account is and verify that the user activity is legitimate.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1535",
                "T1586.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect AWS Console Login by User from New Region\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1535, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect AWS Console Login by User from New Region\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: When a legitimate new user logins for the first time, this activity will be detected. Check how old the account is and verify that the user activity is legitimate.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot AWS console login attempts by users from a new region — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.26",
              "n": "Detect Spike in AWS Security Hub Alerts for User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a spike in the number of AWS Security Hub alerts for an AWS IAM User within a 4-hour interval. It leverages AWS Security Hub findings data, calculating the average and standard deviation of alerts to detect significant deviations. This activity is significant as a sudden increase in alerts for a specific user may indicate suspicious behavior or a potential security incident. If confirmed malicious, this could signify an ongoing attack, unauthorized access, or mi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS Security Hub",
              "q": "`aws_securityhub_finding` \"findings{}.Resources{}.Type\"= AwsIamUser\n      | rename findings{}.Resources{}.Id as user\n      | bucket span=4h _time\n      | stats count AS alerts\n        BY _time user\n      | eventstats avg(alerts) as total_launched_avg, stdev(alerts) as total_launched_stdev\n      | eval threshold_value = 2\n      | eval isOutlier=if(alerts > total_launched_avg+(total_launched_stdev * threshold_value), 1, 0)\n      | search isOutlier=1\n      | table _time user alerts\n      | `detect_spike_in_aws_security_hub_alerts_for_user_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS Security Hub ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "user",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Spike in AWS Security Hub Alerts for User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS Security Hub. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Spike in AWS Security Hub Alerts for User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
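              "_splExample": "A tuning sketch derived from the detection SPL above (a variant, not the ESCU default): raising the deviation threshold from 2 to 3 standard deviations and requiring a minimum alert count usually cuts noise; treat both values as starting points:\n\n```spl\n`aws_securityhub_finding` \"findings{}.Resources{}.Type\"=AwsIamUser\n| rename findings{}.Resources{}.Id as user\n| bucket span=4h _time\n| stats count as alerts by _time user\n| eventstats avg(alerts) as avg_alerts, stdev(alerts) as stdev_alerts\n| eval isOutlier=if(alerts > avg_alerts + 3 * stdev_alerts AND alerts >= 5, 1, 0)\n| where isOutlier=1\n| table _time user alerts avg_alerts\n```\n\nBecause `eventstats` computes the average across all users in the window, a single very noisy account can mask smaller spikes; consider a per-user baseline if that becomes a problem.",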
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot a spike in the number of AWS Security Hub alerts for an AWS access control User within a 4-hour interval — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.27",
              "n": "Detect Spike in blocked Outbound Traffic from your AWS",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies spikes in blocked outbound network connections originating from within your AWS environment. It leverages VPC Flow Logs data from CloudWatch, focusing on blocked actions from internal IP ranges to external destinations. This detection is significant as it can indicate potential exfiltration attempts or misconfigurations leading to data leakage. If confirmed malicious, such activity could allow attackers to bypass network defenses, leading to unauthorized data tr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`cloudwatchlogs_vpcflow` action=blocked (src=10.0.0.0/8 OR src=172.16.0.0/12 OR src=192.168.0.0/16) ( dest!=10.0.0.0/8 AND dest!=172.16.0.0/12 AND dest!=192.168.0.0/16)  [search  `cloudwatchlogs_vpcflow` action=blocked (src=10.0.0.0/8 OR src=172.16.0.0/12 OR src=192.168.0.0/16) ( dest!=10.0.0.0/8 AND dest!=172.16.0.0/12 AND dest!=192.168.0.0/16)\n      | stats count as numberOfBlockedConnections\n        BY src\n      | inputlookup baseline_blocked_outbound_connections append=t\n      | fields - latestCount\n      | stats values(*) as *\n        BY src\n      | rename numberOfBlockedConnections as latestCount\n      | eval newAvgBlockedConnections=avgBlockedConnections + (latestCount-avgBlockedConnections)/720\n      | eval newStdevBlockedConnections=sqrt(((pow(stdevBlockedConnections, 2)*719 + (latestCount-newAvgBlockedConnections)*(latestCount-avgBlockedConnections))/720))\n      | eval avgBlockedConnections=coalesce(newAvgBlockedConnections, avgBlockedConnections), stdevBlockedConnections=coalesce(newStdevBlockedConnections, stdevBlockedConnections), numDataPoints=if(isnull(latestCount), numDataPoints, numDataPoints+1)\n      | table src, latestCount, numDataPoints, avgBlockedConnections, stdevBlockedConnections\n      | outputlookup baseline_blocked_outbound_connections\n      | eval dataPointThreshold = 5, deviationThreshold = 3\n      | eval isSpike=if((latestCount > avgBlockedConnections+deviationThreshold*stdevBlockedConnections) AND numDataPoints > dataPointThreshold, 1, 0)\n      | where isSpike=1\n      | table src]\n      | stats values(dest) as dest, values(interface_id) as \"resourceId\" count as numberOfBlockedConnections, dc(dest) as uniqueDestConnections\n        BY src\n      | `detect_spike_in_blocked_outbound_traffic_from_your_aws_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The false-positive rate may vary based on the values of`dataPointThreshold` and `deviationThreshold`. Additionally, false positives may result when AWS administrators roll out policies enforcing network blocks, causing sudden increases in the number of blocked outbound connections.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "system",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Spike in blocked Outbound Traffic from your AWS\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Spike in blocked Outbound Traffic from your AWS\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The false-positive rate may vary based on the values of`dataPointThreshold` and `deviationThreshold`. Additionally, false positives may result when AWS administrators roll out policies enforcing network blocks, causing sudden increases in the number of blocked outbound connections.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
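              "_splExample": "A minimal health-check sketch for the `baseline_blocked_outbound_connections` lookup that this detection maintains; the field names come from the detection SPL above, and the threshold expression mirrors its default `deviationThreshold` of 3:\n\n```spl\n| inputlookup baseline_blocked_outbound_connections\n| eval alert_threshold = avgBlockedConnections + 3 * stdevBlockedConnections\n| table src numDataPoints latestCount avgBlockedConnections stdevBlockedConnections alert_threshold\n| sort - latestCount\n```\n\nSources with `numDataPoints` at or below the `dataPointThreshold` of 5 cannot alert yet, which is worth knowing when you validate the detection after first enabling it.",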
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot spikes in blocked outbound network connections originating from within your AWS environment — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.28",
              "n": "GCP Detect gcploit framework",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of the GCPloit exploitation framework within Google Cloud Platform (GCP). It detects specific GCP Pub/Sub messages with a function timeout of 539 seconds, which is indicative of GCPloit activity. This detection is significant as GCPloit can be used to escalate privileges and facilitate lateral movement from compromised high-privilege accounts. If confirmed malicious, this activity could allow attackers to gain unauthorized access, escalate their privileg…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`google_gcp_pubsub_message` data.protoPayload.request.function.timeout=539s\n      | table src src_user data.resource.labels.project_id data.protoPayload.request.function.serviceAccountEmail data.protoPayload.authorizationInfo{}.permission data.protoPayload.request.location http_user_agent\n      | `gcp_detect_gcploit_framework_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Payload.request.function.timeout value can possibly be match with other functions or requests however the source user and target request account may indicate an attempt to move laterally accross acounts or projects",
              "refs": "https://github.com/dxa4481/gcploit, https://www.youtube.com/watch?v=Ml09R38jpok",
              "mitre": [
                "T1078"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GCP Detect gcploit framework\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GCP Detect gcploit framework\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Payload.request.function.timeout value can possibly be match with other functions or requests however the source user and target request account may indicate an attempt to move laterally accross acounts or projects\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
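              "_splExample": "A broader hunting sketch (a variant of the detection SPL, not ESCU content): instead of matching only the exact 539s timeout, summarize Cloud Function deployment requests per user and project so an analyst can review unusual service-account targeting; the field paths are taken from the detection:\n\n```spl\n`google_gcp_pubsub_message` data.protoPayload.request.function.timeout=*\n| stats count values(data.protoPayload.request.function.timeout) as timeouts values(data.protoPayload.request.function.serviceAccountEmail) as target_accounts by src_user, data.resource.labels.project_id\n| sort - count\n```\n\nA low-volume user suddenly deploying functions that run as high-privilege service accounts in projects they do not normally touch is the pattern GCPloit abuse tends to produce.",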
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the use of the GCPloit exploitation framework within Google Cloud Platform (GCP) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.29",
              "n": "GCP Unusual Number of Failed Authentications From Ip",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a single source IP failing to authenticate into Google Workspace with multiple valid users, potentially indicating a Password Spraying attack. It uses Google Workspace login failure events and calculates the standard deviation for source IPs, applying the 3-sigma rule to detect unusual failed authentication attempts. This activity is significant as it may signal an adversary attempting to gain initial access or elevate privileges. If confirmed malicious, this co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Google Workspace",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$tried_accounts$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Google Workspace ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No known false positives for this detection. Please review this alert",
              "refs": "https://cloud.google.com/blog/products/identity-security/how-google-cloud-can-help-stop-credential-stuffing-attacks, https://www.slideshare.net/dafthack/ok-google-how-do-i-red-team-gsuite, https://attack.mitre.org/techniques/T1110/003/, https://www.blackhillsinfosec.com/wp-content/uploads/2020/05/Breaching-the-Cloud-Perimeter-Slides.pdf",
              "mitre": [
                "T1110.003",
                "T1110.004",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GCP Unusual Number of Failed Authentications From Ip\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Google Workspace. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003, T1110.004, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GCP Unusual Number of Failed Authentications From Ip\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No known false positives for this detection. Please review this alert\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
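              "_splExample": "A minimal raw-data sketch (not the ESCU logic) for spotting the spraying pattern directly, assuming the Splunk Add-on for Google Workspace with `sourcetype=gws:reports:login` and extracted `user` and `src` fields; rename to match your ingest if they differ, and tune the distinct-user threshold to your tenant size:\n\n```spl\nsourcetype=\"gws:reports:login\" event.name=login_failure\n| stats dc(user) as unique_users count as failures by src\n| where unique_users >= 10\n| sort - unique_users\n```\n\nThis is cruder than the detection's 3-sigma baseline, but it is useful for quickly validating that the underlying login-failure data is present and parsing correctly.",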
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot a single source IP failing to authenticate into Google Workspace with multiple valid users, potentially indicating a Password Spraying attack — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.30",
              "n": "Active Directory Lateral Movement Identified",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential lateral movement activities within an organization's Active Directory (AD) environment. It detects this activity by correlating multiple analytics from the Active Directory Lateral Movement analytic story within a specified time frame. This is significant for a SOC as lateral movement is a common tactic used by attackers to expand their access within a network, posing a substantial risk. If confirmed malicious, this activity could allow attackers to es…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will most likely be present based on risk scoring and how the organization handles system to system communication. Filter, or modify as needed. In addition to count by analytics, adding a risk score may be useful. In our testing, with 22 events over 30 days, the risk scores ranged from 500 to 80,000. Your organization will be different, monitor and modify as needed.",
              "refs": "https://attack.mitre.org/tactics/TA0008/, https://research.splunk.com/stories/active_directory_lateral_movement/",
              "mitre": [
                "T1210"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Active Directory Lateral Movement Identified\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1210. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Active Directory Lateral Movement Identified\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will most likely be present based on risk scoring and how the organization handles system to system communication. Filter, or modify as needed. In addition to count by analytics, adding a risk score may be useful. In our testing, with 22 events over 30 days, the risk scores ranged from 500 to 80,000. Your organization will be different, monitor and modify as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot potential lateral movement activities within an organization's Active Directory (AD) environment — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.31",
              "n": "Cisco Isovalent - Access To Cloud Metadata Service",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects workloads accessing the cloud instance metadata service at 169.254.169.254. This IP is used by AWS, GCP and Azure metadata endpoints and is frequently abused in SSRF or lateral movement scenarios to obtain credentials and sensitive environment details. Monitor unexpected access to this service from application pods or namespaces where such behavior is atypical.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Isovalent Process Connect",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$pod_name$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Isovalent Process Connect ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate platform components and node agents may query the metadata service. Validate by namespace, labels and workload identity; suppress expected sources and alert on atypical pods or namespaces.",
              "refs": "https://attack.mitre.org/techniques/T1552/005/, https://hackerone.com/reports/341876, https://docs.isovalent.com/user-guide/sec-ops-visibility/lateral-movement/index.html",
              "mitre": [
                "T1552.005"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Isovalent - Access To Cloud Metadata Service\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Isovalent Process Connect. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Isovalent - Access To Cloud Metadata Service\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate platform components and node agents may query the metadata service. Validate by namespace, labels and workload identity; suppress expected sources and alert on atypical pods or namespaces.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch workloads accessing the cloud instance metadata service at 169.254.169.254 — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.32",
              "n": "Detect Excessive User Account Lockouts",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies user accounts experiencing an excessive number of lockouts within a short timeframe. It leverages the 'Change' data model, specifically focusing on events where the result indicates a lockout. This activity is significant as it may indicate a brute-force attack or misconfiguration, both of which require immediate attention. If confirmed malicious, this behavior could lead to account compromise, unauthorized access, and potential lateral movement within the netwo…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that a legitimate user is experiencing an issue causing multiple account login failures leading to lockouts.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Excessive User Account Lockouts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Excessive User Account Lockouts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that a legitimate user is experiencing an issue causing multiple account login failures leading to lockouts.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot user accounts experiencing an excessive number of lockouts within a short timeframe — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.33",
              "n": "Detect Mimikatz With PowerShell Script Block Logging",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of Mimikatz commands via PowerShell by leveraging PowerShell Script Block Logging (EventCode=4104). This method captures and logs the full command sent to PowerShell, allowing for the identification of suspicious activities such as Pass the Ticket, Pass the Hash, and credential dumping. This activity is significant as Mimikatz is a well-known tool used for credential theft and lateral movement. If confirmed malicious, this could lead to unauthorized a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "# Shared SPL: intentional — see UC-10.7.286\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as the commands being identifies are quite specific to EventCode 4104 and Mimikatz. Filter as needed.",
              "refs": "https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html",
              "mitre": [
                "T1003",
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Mimikatz With PowerShell Script Block Logging\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003, T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Mimikatz With PowerShell Script Block Logging\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as the commands being identifies are quite specific to EventCode 4104 and Mimikatz. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of Mimikatz commands using PowerShell by leveraging PowerShell Script Block Logging (EventCode=4104) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.34",
              "n": "Detect Regsvcs with No Command Line Arguments",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances of regsvcs.exe running without command line arguments. This behavior typically indicates process injection, where another process manipulates regsvcs.exe. The detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, IDs, and command-line executions. This activity is significant as it may signal an attempt to evade detection and execute malicious code. If confirmed malicious, the attacker could achieve code exe…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, limited instances of regsvcs.exe may cause a false positive. Filter based endpoint usage, command line arguments, or process lineage.",
              "refs": "https://attack.mitre.org/techniques/T1218/009/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.009/T1218.009.md, https://lolbas-project.github.io/lolbas/Binaries/Regsvcs/",
              "mitre": [
                "T1218.009"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Regsvcs with No Command Line Arguments\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.009. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Regsvcs with No Command Line Arguments\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, limited instances of regsvcs.exe may cause a false positive. Filter based endpoint usage, command line arguments, or process lineage.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch instances of regsvcs.exe running without command line arguments — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.35",
              "n": "Detect Renamed PSExec",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where `PsExec.exe` has been renamed and executed on an endpoint. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and original file names. This activity is significant because renaming `PsExec.exe` is a common tactic to evade detection. If confirmed malicious, this could allow an attacker to execute commands remotely, potentially leading to unauthorized access, lateral movement, or further compromise of the…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name!=psexec.exe\n            AND\n            Processes.process_name!=psexec64.exe\n        )\n        AND Processes.original_file_name=psexec.c\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `detect_renamed_psexec_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives should be present. It is possible some third party applications may use older versions of PsExec, filter as needed.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1569.002/T1569.002.yaml, https://redcanary.com/blog/threat-hunting-psexec-lateral-movement/",
              "mitre": [
                "T1569.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Renamed PSExec\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1569.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Renamed PSExec\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives should be present. It is possible some third party applications may use older versions of PsExec, filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot instances where has been renamed and executed on an endpoint — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.36",
              "n": "Detect SharpHound Command-Line Arguments",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of SharpHound command-line arguments, specifically `-collectionMethod` and `invoke-bloodhound`. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as SharpHound is commonly used for Active Directory enumeration, which can be a precursor to lateral movement or privilege escalation. If confirmed malicious, this activity could allow an attacker to map ou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as the arguments used are specific to SharpHound. Filter as needed or add more command-line arguments as needed.",
              "refs": "https://attack.mitre.org/software/S0521/, https://thedfirreport.com/?s=bloodhound, https://github.com/BloodHoundAD/BloodHound/tree/master/Collectors, https://github.com/BloodHoundAD/SharpHound3, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1059.001/T1059.001.md#atomic-test-2---run-bloodhound-from-local-disk",
              "mitre": [
                "T1069.001",
                "T1069.002",
                "T1087.001",
                "T1087.002",
                "T1482"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect SharpHound Command-Line Arguments\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001, T1069.002, T1087.001, T1087.002, T1482. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect SharpHound Command-Line Arguments\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as the arguments used are specific to SharpHound. Filter as needed or add more command-line arguments as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of SharpHound command-line arguments, specifically and — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.37",
              "n": "Domain Account Discovery with Wmic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `wmic.exe` with command-line arguments used to query for domain users. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific command-line patterns indicative of domain account discovery. This activity is significant as it often precedes lateral movement or privilege escalation attempts by adversaries. If confirmed malicious, this behavior could allow attackers to map out user accounts within the domain, facilitat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1087/002/",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Domain Account Discovery with Wmic\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Domain Account Discovery with Wmic\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
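For orientation, a minimal sketch of the kind of Endpoint.Processes search that flags wmic-based domain user enumeration. It assumes the classic idiom `wmic /NAMESPACE:\\root\directory\ldap PATH ds_user GET ds_samaccountname`; this is an illustration, not the shipped ESCU logic:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where (Processes.process_name=wmic.exe OR Processes.original_file_name=wmic.exe)
    Processes.process="*root\\directory\\ldap*" Processes.process="*ds_user*"
  by Processes.dest Processes.user Processes.parent_process_name Processes.process
| `drop_dm_object_name(Processes)`
```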
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of with command-line arguments used to query for domain users — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.38",
              "n": "DSQuery Domain Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of \"dsquery.exe\" with arguments targeting `TrustedDomain` queries directly from the command line. This behavior is identified using Endpoint Detection and Response (EDR) telemetry, focusing on process names and command-line arguments. This activity is significant as it often indicates domain trust discovery, a common step in lateral movement or privilege escalation by adversaries. If confirmed malicious, this could allow attackers to map domain trusts…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives. If there is a true false positive, filter based on command-line or parent process.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1482/T1482.md, https://blog.harmj0y.net/redteaming/a-guide-to-attacking-domain-trusts/, https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc732952(v=ws.11), https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc754232(v=ws.11)",
              "mitre": [
                "T1482"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"DSQuery Domain Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1482. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"DSQuery Domain Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives. If there is a true false positive, filter based on command-line or parent process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
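For orientation, a minimal sketch of the underlying pattern. Trust discovery with dsquery typically looks like `dsquery * -filter "(objectClass=trustedDomain)"`, so the match keys on the `trustedDomain` argument. Illustrative only; the shipped ESCU search is authoritative:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where (Processes.process_name=dsquery.exe OR Processes.original_file_name=dsquery.exe)
    Processes.process="*trustedDomain*"
  by Processes.dest Processes.user Processes.process
| `drop_dm_object_name(Processes)`
```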
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of \"dsquery.exe\" with arguments targeting queries directly from the command line — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.39",
              "n": "Dump LSASS via comsvcs DLL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the behavior of dumping credentials from memory by exploiting the Local Security Authority Subsystem Service (LSASS) using the comsvcs.dll and MiniDump via rundll32. This detection leverages process information from Endpoint Detection and Response (EDR) logs, focusing on specific command-line executions. This activity is significant because it indicates potential credential theft, which can lead to broader system compromise, persistence, lateral movement, and privi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://modexp.wordpress.com/2019/08/30/minidumpwritedump-via-com-services-dll/, https://twitter.com/SBousseaden/status/1167417096374050817, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1003.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Dump LSASS via comsvcs DLL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Dump LSASS via comsvcs DLL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
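For orientation, a minimal sketch of the underlying pattern. The canonical invocation is `rundll32.exe C:\Windows\System32\comsvcs.dll, MiniDump <lsass_pid> <dump_path> full`, so the match keys on `comsvcs` plus `MiniDump` in the command line. Illustrative only; the shipped ESCU search is authoritative:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name=rundll32.exe
    Processes.process="*comsvcs*" Processes.process="*MiniDump*"
  by Processes.dest Processes.user Processes.parent_process_name Processes.process
| `drop_dm_object_name(Processes)`
```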
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the behavior of dumping credentials from memory by exploiting the Local Security Authority Subsystem Service (LSASS) using the comsvcs.dll and MiniDump using rundll32 — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.40",
              "n": "Enable RDP In Other Port Number",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the registry that enable RDP on a machine using a non-default port number. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path \"HKLM\\SYSTEM\\CurrentControlSet\\Control\\Terminal Server\\WinStations\\RDP-Tcp\" and the \"PortNumber\" value. This activity is significant as attackers often modify RDP settings to facilitate lateral movement and maintain remote access to compromised systems. If confirmed …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mvps.net/docs/how-to-secure-remote-desktop-rdp/",
              "mitre": [
                "T1021"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Enable RDP In Other Port Number\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Enable RDP In Other Port Number\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
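For orientation, a minimal sketch over the Endpoint.Registry data model keyed to the documented path and value. How `registry_value_data` renders (decimal vs. hex) varies by TA version, so any exclusion of the default port 3389 may need adjusting. Illustrative only; the shipped ESCU search is authoritative:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Registry
  where Registry.registry_path="*\\Control\\Terminal Server\\WinStations\\RDP-Tcp*"
    Registry.registry_value_name=PortNumber
  by Registry.dest Registry.registry_path Registry.registry_value_name Registry.registry_value_data
| `drop_dm_object_name(Registry)`
```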
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch modifications to the registry that enable remote desktop on a machine using a non-default port number — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.41",
              "n": "Esentutl SAM Copy",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `esentutl.exe` to access credentials stored in the ntds.dit or SAM file. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. This activity is significant because it may indicate an attempt to extract sensitive credential information, which is a common tactic in lateral movement and privilege escalation. If confirmed malicious, this could allow an attacker t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time)\n    as lastTime from datamodel=Endpoint.Processes where\n    (Processes.process_name=esentutl.exe OR Processes.original_file_name=esentutl.exe)\n    Processes.process IN (\"*ntds*\", \"*SAM*\")\n    by Processes.action Processes.dest Processes.original_file_name\n       Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n       Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n       Processes.process Processes.process_exec Processes.process_guid Processes.process_hash\n       Processes.process_id Processes.process_integrity_level Processes.process_name Processes.process_path\n       Processes.user Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `esentutl_sam_copy_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited. Filter as needed.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/6a570c2a4630cf0c2bd41a2e8375b5d5ab92f700/atomics/T1003.002/T1003.002.md, https://attack.mitre.org/software/S0404/",
              "mitre": [
                "T1003.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Esentutl SAM Copy\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Esentutl SAM Copy\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the use of to access credentials stored in the ntds.dit or SAM file — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.42",
              "n": "Excessive number of service control start as disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an excessive number of `sc.exe` processes launched with the command line argument `start= disabled` within a short period. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, command-line executions, and process GUIDs. This activity is significant as it may indicate an attempt to disable critical services, potentially impairing system defenses. If confirmed malicious, this behavior could allow an attacker to disrupt secur…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate programs and administrators will execute sc.exe with the start disabled flag.  It is possible, but unlikely from the telemetry of normal Windows operation we observed, that sc.exe will be called more than seven times in a short period of time.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/sc-create, https://attack.mitre.org/techniques/T1562/001/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Excessive number of service control start as disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Excessive number of service control start as disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate programs and administrators will execute sc.exe with the start disabled flag.  It is possible, but unlikely from the telemetry of normal Windows operation we observed, that sc.exe will be called more than seven times in a short period of time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
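For orientation, a minimal sketch of the windowed-count idea behind this detection: bucket sc.exe `start= disabled` executions per host and flag hosts above a threshold. The 10-minute span and the threshold of 8 (echoing the "more than seven times" observation above) are illustrative; the shipped ESCU search is authoritative:

```spl
| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes
  where Processes.process_name=sc.exe Processes.process="*start= disabled*"
  by Processes.dest _time span=10m
| `drop_dm_object_name(Processes)`
| where count >= 8
```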
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch an excessive number of processes launched with the command line argument within a short period — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.43",
              "n": "Excessive number of taskhost processes",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an excessive number of taskhost.exe and taskhostex.exe processes running within a short time frame. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and their counts. This behavior is significant as it is commonly associated with post-exploitation tools like Meterpreter and Koadic, which use multiple instances of these processes for actions such as discovery and lateral movement. If confirmed malicious, this activity…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators, administrative actions or certain applications may run many instances of taskhost and taskhostex concurrently.  Filter as needed.",
              "refs": "https://attack.mitre.org/software/S0250/",
              "mitre": [
                "T1059"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Excessive number of taskhost processes\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Excessive number of taskhost processes\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators, administrative actions or certain applications may run many instances of taskhost and taskhostex concurrently.  Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
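For orientation, the same windowed-count pattern applies here: bucket taskhost/taskhostex launches per host and flag bursts. The span and threshold below are placeholders to tune against your own baseline; the shipped ESCU search is authoritative:

```spl
| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes
  where Processes.process_name IN ("taskhost.exe", "taskhostex.exe")
  by Processes.dest _time span=10m
| `drop_dm_object_name(Processes)`
| where count >= 10
```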
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot an excessive number of taskhost.exe and taskhostex.exe processes running within a short time frame — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.44",
              "n": "Executable File Written in Administrative SMB Share",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects executable files (.exe or .dll) being written to Windows administrative SMB shares (Admin$, IPC$, C$). It leverages Windows Security Event Logs with EventCode 5145 to identify this activity. This behavior is significant as it is commonly used by tools like PsExec/PaExec for staging binaries before creating and starting services on remote endpoints, a technique often employed for lateral movement and remote code execution. If confirmed malicious, this activity could…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5145",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting Windows Security Event Logs with 5145 EventCode enabled. The Windows TA is also required. Also enable the object Audit access success/failure in your group policy.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "System Administrators may use looks like PsExec for troubleshooting or administrations tasks. However, this will typically come only from certain users and certain systems that can be added to an allow list.",
              "refs": "https://attack.mitre.org/techniques/T1021/002/, https://labs.vipre.com/trickbot-and-its-modules/, https://thedfirreport.com/2023/05/22/icedid-macro-ends-in-nokoyawa-ransomware/",
              "mitre": [
                "T1021.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Executable File Written in Administrative SMB Share\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5145. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Executable File Written in Administrative SMB Share\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: System Administrators may use looks like PsExec for troubleshooting or administrations tasks. However, this will typically come only from certain users and certain systems that can be added to an allow list.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the wildcards allow for the naming prefix and suffix ES adds to ESCU searches):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Executable File Written in Administrative SMB Share*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
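\n\nOptional spot check (a sketch under assumptions): count executable writes to the administrative shares directly from the raw 5145 events before relying on the packaged detection. The index name is an assumption, and the Share_Name / Relative_Target_Name extractions vary by Windows TA version, so rename fields to match your environment.\n\n```spl\nindex=wineventlog EventCode=5145 (Relative_Target_Name=\"*.exe\" OR Relative_Target_Name=\"*.dll\") (Share_Name=\"*ADMIN$\" OR Share_Name=\"*C$\" OR Share_Name=\"*IPC$\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host, user, Share_Name, Relative_Target_Name\n```",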
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch executable files (.exe or.dll) being written to Windows administrative SMB shares (Admin$, IPC$, C$) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.45",
              "n": "Get-DomainTrust with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of the Get-DomainTrust command from PowerView using PowerShell, which is used to gather domain trust information. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and command-line telemetry. This activity is significant as it indicates potential reconnaissance efforts by an adversary to understand domain trust relationships, which can inform lateral movement strategies. If confirmed malicious, thi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives as this requires an active Administrator or adversary to bring in, import, and execute.",
              "refs": "https://blog.harmj0y.net/redteaming/a-guide-to-attacking-domain-trusts/",
              "mitre": [
                "T1482"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get-DomainTrust with PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1482. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get-DomainTrust with PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives as this requires an active Administrator or adversary to bring in, import, and execute.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the wildcards allow for the naming prefix and suffix ES adds to ESCU searches):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Get-DomainTrust with PowerShell*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
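\n\nOptional spot check (a minimal sketch, assuming the CIM Endpoint data model is populated and accelerated): pull candidate Get-DomainTrust executions with their parent processes so you can gauge volume before enabling the detection. Adjust the wildcard if your EDR records command lines differently.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process=\"*Get-DomainTrust*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```",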
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the execution of the Get-DomainTrust command from PowerView using PowerShell, which is used to gather domain trust information — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.46",
              "n": "Get-DomainTrust with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Get-DomainTrust command from PowerView using PowerShell Script Block Logging (EventCode=4104). This method captures the full command sent to PowerShell, allowing for detailed inspection. Identifying this activity is significant because it may indicate an attempt to gather domain trust information, which is often a precursor to lateral movement or privilege escalation. If confirmed malicious, this activity could enable an attacker to map trust r…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_id$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible certain system management frameworks utilize this command to gather trust information.",
              "refs": "https://blog.harmj0y.net/redteaming/a-guide-to-attacking-domain-trusts/, https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/",
              "mitre": [
                "T1482"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get-DomainTrust with PowerShell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1482. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get-DomainTrust with PowerShell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible certain system management frameworks utilize this command to gather trust information.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the wildcards allow for the naming prefix and suffix ES adds to ESCU searches):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Get-DomainTrust with PowerShell Script Block*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
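\n\nOptional spot check against the raw script-block events (a sketch: the index name is an assumption, and Computer / UserID are the XmlWinEventLog field names, so rename them if your TA normalizes to dest / user_id):\n\n```spl\nindex=wineventlog EventCode=4104 ScriptBlockText=\"*Get-DomainTrust*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID\n```",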
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of the Get-DomainTrust command from PowerView using PowerShell Script Block Logging (EventCode=4104) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.47",
              "n": "Get-ForestTrust with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Get-ForestTrust command from PowerSploit using PowerShell Script Block Logging (EventCode=4104). This method captures the full command sent to PowerShell, providing detailed visibility into potentially suspicious activities. Monitoring this behavior is crucial as it can indicate an attempt to gather domain trust information, which is often a precursor to lateral movement or privilege escalation. If confirmed malicious, this activity could allow…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_id$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present. Tune as needed.",
              "refs": "https://powersploit.readthedocs.io/en/latest/Recon/Get-ForestTrust/, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html",
              "mitre": [
                "T1482",
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get-ForestTrust with PowerShell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1482, T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get-ForestTrust with PowerShell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present. Tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the wildcards allow for the naming prefix and suffix ES adds to ESCU searches):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Get-ForestTrust with PowerShell Script Block*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
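\n\nOptional spot check (same assumptions as other 4104-based detections: the index name is a placeholder and field names depend on your TA version): confirm whether Get-ForestTrust appears in script-block logs at all before scheduling the detection.\n\n```spl\nindex=wineventlog EventCode=4104 ScriptBlockText=\"*Get-ForestTrust*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID\n```",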
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of the Get-ForestTrust command from PowerSploit using PowerShell Script Block Logging (EventCode=4104) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.48",
              "n": "Get WMIObject Group Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the `Get-WMIObject Win32_Group` command executed via PowerShell to enumerate local groups on an endpoint. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. Identifying local groups can be a precursor to privilege escalation or lateral movement. If confirmed malicious, this activity could allow an attacker to map out group memberships, aiding in further exploitation or u…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=powershell.exe\n            OR\n            processes.process_name=cmd.exe\n        )\n        (Processes.process=\"*Get-WMIObject*\" AND Processes.process=\"*Win32_Group*\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `get_wmiobject_group_discovery_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present. Tune as needed.",
              "refs": "https://attack.mitre.org/techniques/T1069/001/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1069.001/T1069.001.md",
              "mitre": [
                "T1069.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get WMIObject Group Discovery\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get WMIObject Group Discovery\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present. Tune as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the use of the command executed using PowerShell to enumerate local groups on an endpoint — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.49",
              "n": "GetAdGroup with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-AdGroup` PowerShell cmdlet using PowerShell Script Block Logging (EventCode=4104). This cmdlet is used to enumerate all domain groups, which adversaries may exploit for situational awareness and Active Directory discovery. Monitoring this activity is crucial as it can indicate reconnaissance efforts within the network. If confirmed malicious, this behavior could lead to further exploitation, such as privilege escalation or lateral movement…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104 ScriptBlockText = \"*Get-ADGroup*\"\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `getadgroup_with_powershell_script_block_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerShell commandlet for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/, https://learn.microsoft.com/en-us/powershell/module/activedirectory/get-adgroup?view=windowsserver2019-ps",
              "mitre": [
                "T1069.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetAdGroup with PowerShell Script Block\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetAdGroup with PowerShell Script Block\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerShell commandlet for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of the PowerShell cmdlet using PowerShell Script Block Logging (EventCode=4104) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.50",
              "n": "Impacket Lateral Movement Commandline Parameters",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of suspicious command-line parameters associated with Impacket tools, such as `wmiexec.py`, `smbexec.py`, `dcomexec.py`, and `atexec.py`, which are used for lateral movement and remote code execution. It detects these activities by analyzing process execution logs from Endpoint Detection and Response (EDR) agents, focusing on specific command-line patterns. This activity is significant because Impacket tools are commonly used by adversaries and Red Teams…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although uncommon, Administrators may leverage Impackets tools to start a process on remote systems for system administration or automation use cases.",
              "refs": "https://attack.mitre.org/techniques/T1021/002/, https://attack.mitre.org/techniques/T1021/003/, https://attack.mitre.org/techniques/T1047/, https://attack.mitre.org/techniques/T1053/, https://attack.mitre.org/techniques/T1053/005/, https://github.com/SecureAuthCorp/impacket, https://vk9-sec.com/impacket-remote-code-execution-rce-on-windows-from-linux/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1021.002",
                "T1021.003",
                "T1047",
                "T1543.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Impacket Lateral Movement Commandline Parameters\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.002, T1021.003, T1047, T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Impacket Lateral Movement Commandline Parameters\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although uncommon, Administrators may leverage Impackets tools to start a process on remote systems for system administration or automation use cases.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the wildcards allow for the naming prefix and suffix ES adds to ESCU searches):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Impacket Lateral Movement Commandline Parameters*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
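\n\nOptional spot check for the wmiexec.py pattern specifically, one of several tools this detection covers: cmd.exe spawned by WmiPrvSE.exe with the characteristic /Q /c switches. A sketch that assumes a populated Endpoint data model; other Impacket tools leave different parent/child patterns.\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=wmiprvse.exe Processes.process_name=cmd.exe Processes.process=\"* /Q /c *\"\n  by Processes.dest Processes.user Processes.process\n```",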
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the use of suspicious command-line parameters associated with Impacket tools, such as, and, which are used for lateral movement and remote code execution — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.51",
              "n": "Impacket Lateral Movement smbexec CommandLine Parameters",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious command-line parameters associated with the use of Impacket's smbexec.py for lateral movement. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific command-line patterns indicative of Impacket tool usage. This activity is significant as both Red Teams and adversaries use Impacket for remote code execution and lateral movement. If confirmed malicious, this activity could allow attackers to execute commands on remote…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although uncommon, Administrators may leverage Impackets tools to start a process on remote systems for system administration or automation use cases.",
              "refs": "https://attack.mitre.org/techniques/T1021/002/, https://attack.mitre.org/techniques/T1021/003/, https://attack.mitre.org/techniques/T1047/, https://attack.mitre.org/techniques/T1053/, https://attack.mitre.org/techniques/T1053/005/, https://github.com/SecureAuthCorp/impacket, https://vk9-sec.com/impacket-remote-code-execution-rce-on-windows-from-linux/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1021.002",
                "T1021.003",
                "T1047",
                "T1543.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Impacket Lateral Movement smbexec CommandLine Parameters\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.002, T1021.003, T1047, T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Impacket Lateral Movement smbexec CommandLine Parameters\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although uncommon, Administrators may leverage Impackets tools to start a process on remote systems for system administration or automation use cases.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot suspicious command-line parameters associated with the use of Impacket's smbexec.py for lateral movement — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.52",
              "n": "Impacket Lateral Movement WMIExec Commandline Parameters",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Impacket's `wmiexec.py` tool for lateral movement by identifying specific command-line parameters. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on processes spawned by `wmiprvse.exe` with command-line patterns indicative of Impacket usage. This activity is significant as Impacket tools are commonly used by adversaries for remote code execution and lateral movement within a network. If confirmed malicious, this could allow…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although uncommon, Administrators may leverage Impackets tools to start a process on remote systems for system administration or automation use cases.",
              "refs": "https://attack.mitre.org/techniques/T1021/002/, https://attack.mitre.org/techniques/T1021/003/, https://attack.mitre.org/techniques/T1047/, https://attack.mitre.org/techniques/T1053/, https://attack.mitre.org/techniques/T1053/005/, https://github.com/SecureAuthCorp/impacket, https://vk9-sec.com/impacket-remote-code-execution-rce-on-windows-from-linux/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1021.002",
                "T1021.003",
                "T1047",
                "T1543.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Impacket Lateral Movement WMIExec Commandline Parameters\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.002, T1021.003, T1047, T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Impacket Lateral Movement WMIExec Commandline Parameters\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although uncommon, Administrators may leverage Impackets tools to start a process on remote systems for system administration or automation use cases.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the use of Impacket's tool for lateral movement by identifying specific command-line parameters — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.53",
              "n": "Interactive Session on Remote Endpoint with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the `Enter-PSSession` cmdlet to establish an interactive session on a remote endpoint via the WinRM protocol. It leverages PowerShell Script Block Logging (EventCode=4104) to identify this activity by searching for specific script block text patterns. This behavior is significant as it may indicate lateral movement or remote code execution attempts by adversaries. If confirmed malicious, this activity could allow attackers to execute commands remotely, p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage WinRM and `Enter-PSSession` for administrative and troubleshooting tasks. This activity is usually limited to a small set of hosts or users. In certain environments, tuning may not be possible.",
              "refs": "https://attack.mitre.org/techniques/T1021/006/, https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/enter-pssession?view=powershell-7.2",
              "mitre": [
                "T1021.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Interactive Session on Remote Endpoint with PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Interactive Session on Remote Endpoint with PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage WinRM and `Enter-PSSession` for administrative and troubleshooting tasks. This activity is usually limited to a small set of hosts or users. In certain environments, tuning may not be possible.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the use of the cmdlet to establish an interactive session on a remote endpoint using the WinRM protocol — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.54",
              "n": "Linux Auditd Database File And Directory Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious database file and directory discovery activities, which may signal an attacker attempt to locate and assess critical database assets on a compromised system. This behavior is often a precursor to data theft, unauthorized access, or privilege escalation, as attackers seek to identify valuable information stored in databases. By monitoring for unusual or unauthorized attempts to locate database files and directories, this analytic aids in early detection o…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html, https://github.com/peass-ng/PEASS-ng/tree/master/linPEAS",
              "mitre": [
                "T1083"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Database File And Directory Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1083. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Database File And Directory Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch suspicious database file and directory discovery activities, which may signal an attacker attempt to locate and assess critical database assets on a compromised system — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.55",
              "n": "Linux Auditd Find Credentials From Password Managers",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious attempts to find credentials stored in password managers, which may indicate an attacker's effort to retrieve sensitive login information. Password managers are often targeted by adversaries seeking to access stored passwords for further compromise or lateral movement within a network. By monitoring for unusual or unauthorized access to password manager files or processes, this analytic helps identify potential credential theft attempts, enabling securit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html, https://github.com/peass-ng/PEASS-ng/tree/master/linPEAS",
              "mitre": [
                "T1555.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Find Credentials From Password Managers\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1555.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Find Credentials From Password Managers\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch suspicious attempts to find credentials stored in password managers, which may indicate an attacker's effort to retrieve sensitive login information — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.56",
              "n": "Linux Auditd System Network Configuration Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious system network configuration discovery activities, which may indicate an adversary's attempt to gather information about the network environment. Such actions typically involve commands or tools used to identify network interfaces, routing tables, and active connections. Detecting these activities is crucial, as they often precede more targeted attacks like lateral movement or data exfiltration. By identifying unusual or unauthorized network discovery ef…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Syscall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Syscall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1016"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd System Network Configuration Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Syscall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1016. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd System Network Configuration Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch suspicious system network configuration discovery activities, which may indicate an adversary's attempt to gather information about the network environment — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.57",
              "n": "Linux SSH Remote Services Script Execute",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of SSH to move laterally and execute a script or file on a remote host. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific SSH command-line parameters and URLs. This activity is significant as it may indicate an attacker attempting to execute remote commands or scripts, potentially leading to unauthorized access or control over additional systems. If confirmed malicious, this could result in lateral movement, privilege…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This is not a common command to be executed. Filter as needed.",
              "refs": "https://redcanary.com/blog/lateral-movement-with-secure-shell/",
              "mitre": [
                "T1021.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux SSH Remote Services Script Execute\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux SSH Remote Services Script Execute\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This is not a common command to be executed. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the use of remote login to move laterally and execute a script or file on a remote host — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.58",
              "n": "LOLBAS With Network Traffic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of Living Off the Land Binaries and Scripts (LOLBAS) with network traffic. It leverages data from the Network Traffic data model to detect when native Windows binaries, often abused by adversaries, initiate network connections. This activity is significant as LOLBAS are frequently used to download malicious payloads, enabling lateral movement, command-and-control, or data exfiltration. If confirmed malicious, this behavior could allow attackers to execut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this detection you must ingest events into the Network traffic data model that contain the source, destination, and communicating process in the app field. Relevant processes must also be ingested in the Endpoint data model with matching process_id field. Sysmon EID1 and EID3 are good examples of this type this data type.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate usage of internal automation or scripting, especially powershell.exe or pwsh.exe, internal to internal or logon scripts. It may be necessary to omit internal IP ranges if extremely noisy. ie NOT dest IN (\"10.0.0.0/8\",\"172.16.0.0/12\",\"192.168.0.0/16\",\"170.98.0.0/16\",\"0:0:0:0:0:0:0:1\")",
              "refs": "https://lolbas-project.github.io/#",
              "mitre": [
                "T1105",
                "T1567",
                "T1218"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"LOLBAS With Network Traffic\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105, T1567, T1218. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"LOLBAS With Network Traffic\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate usage of internal automation or scripting, especially powershell.exe or pwsh.exe, internal to internal or logon scripts. It may be necessary to omit internal IP ranges if extremely noisy. ie NOT dest IN (\"10.0.0.0/8\",\"172.16.0.0/12\",\"192.168.0.0/16\",\"170.98.0.0/16\",\"0:0:0:0:0:0:0:1\")\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
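\n\nSimplified Hunt Sketch\n\nA minimal sketch of the underlying pattern, not the packaged ESCU detection logic; the binary list below is illustrative rather than the full LOLBAS set, and should be adjusted to your environment:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n    from datamodel=Network_Traffic.All_Traffic\n    where All_Traffic.app IN (\"*\\\\certutil.exe\",\"*\\\\mshta.exe\",\"*\\\\regsvr32.exe\",\"*\\\\rundll32.exe\",\"*\\\\bitsadmin.exe\")\n    by All_Traffic.src All_Traffic.dest All_Traffic.dest_port All_Traffic.app\n| `drop_dm_object_name(All_Traffic)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",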
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the use of Living Off the Land Binaries and Scripts (LOLBAS) with network traffic — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.59",
              "n": "Microsoft Defender ATP Alerts",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic is to leverage alerts from Microsoft Defender ATP Alerts. This query aggregates and summarizes all alerts from Microsoft Defender ATP Alerts, providing details such as the source, file name, severity, process command line, ip address, registry key, signature, description, unique id, and timestamps. This detection is not intended to detect new activity from raw data, but leverages Microsoft provided alerts to be correlated with other data as part of risk based alerting. The…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "MS Defender ATP Alerts",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires MS Defender ATP Alerts ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may vary based on Microsfot Defender configuration; monitor and filter out the alerts that are not relevant to your environment.",
              "refs": "https://learn.microsoft.com/en-us/defender-xdr/api-list-incidents?view=o365-worldwide, https://learn.microsoft.com/en-us/graph/api/resources/security-alert?view=graph-rest-1.0, https://splunkbase.splunk.com/app/6207, https://jasonconger.com/splunk-azure-gdi/",
              "mitre": [],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Microsoft Defender ATP Alerts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: MS Defender ATP Alerts. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Microsoft Defender ATP Alerts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may vary based on Microsfot Defender configuration; monitor and filter out the alerts that are not relevant to your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
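\n\nFocused Ingest Check\n\nA minimal sketch for confirming Defender alert ingestion; sourcetype names vary by add-on and version, and the values below assume the Splunk Add-on for Microsoft Security, so adjust them to your configuration:\n\n```spl\nindex=* sourcetype IN (\"ms365:defender:incident\", \"ms365:defender:incident:alerts\")\n| stats count by sourcetype, source\n| sort -count\n```",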
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Is to leverage alerts from Microsoft Defender ATP Alerts — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.60",
              "n": "Mmc LOLBAS Execution Process Spawn",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies `mmc.exe` spawning a LOLBAS execution process. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where `mmc.exe` is the parent process. This activity is significant because adversaries can abuse the DCOM protocol and MMC20 COM object to execute malicious code, using Windows native binaries documented by the LOLBAS project. If confirmed malicious, this behavior could indicate lateral movement, allowing attack…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may trigger this behavior, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1021/003/, https://www.cybereason.com/blog/dcom-lateral-movement-techniques, https://lolbas-project.github.io/",
              "mitre": [
                "T1021.003",
                "T1218.014"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Mmc LOLBAS Execution Process Spawn\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.003, T1218.014. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Mmc LOLBAS Execution Process Spawn\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may trigger this behavior, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot spawning a LOLBAS execution process — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.61",
              "n": "Network Discovery Using Route Windows App",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `route.exe` Windows application, commonly used for network discovery. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events. This activity is significant because adversaries often use `route.exe` to map network routes and identify potential targets within a network. If confirmed malicious, this behavior could allow attackers to gain insights into network topology, facilitating lateral movement …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time)\n    as lastTime from datamodel=Endpoint.Processes where\n    (Processes.process_name=route.exe OR Processes.original_file_name=route.exe)\n    by Processes.action Processes.dest Processes.original_file_name Processes.parent_process\n       Processes.parent_process_exec Processes.parent_process_guid Processes.parent_process_id\n       Processes.parent_process_name Processes.parent_process_path Processes.process Processes.process_exec\n       Processes.process_guid Processes.process_hash Processes.process_id Processes.process_integrity_level\n       Processes.process_name Processes.process_path Processes.user Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `network_discovery_using_route_windows_app_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A network operator or systems administrator may utilize an automated host discovery application that may generate false positives or an amazon ec2 script that uses this application. Filter as needed.",
              "refs": "https://app.any.run/tasks/ad4c3cda-41f2-4401-8dba-56cc2d245488/",
              "mitre": [
                "T1016.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Network Discovery Using Route Windows App\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1016.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Network Discovery Using Route Windows App\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A network operator or systems administrator may utilize an automated host discovery application that may generate false positives or an amazon ec2 script that uses this application. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of the Windows application, commonly used for network discovery — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.62",
              "n": "Network Share Discovery Via Dir Command",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects access to Windows administrative SMB shares (Admin$, IPC$, C$) using the 'dir' command. It leverages Windows Security Event Logs with EventCode 5140 to identify this activity. This behavior is significant as it is commonly used by tools like PsExec/PaExec for staging binaries before creating and starting services on remote endpoints, a technique often employed by adversaries for lateral movement and remote code execution. If confirmed malicious, this activity could…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5140",
              "q": "`wineventlog_security` EventCode=5140 ShareName IN(\"\\\\\\\\*\\\\ADMIN$\",\"\\\\\\\\*\\\\C$\",\"*\\\\\\\\*\\\\IPC$\") AccessMask= 0x1 | stats min(_time) as firstTime max(_time) as lastTime count by ShareName IpAddress ObjectType SubjectUserName SubjectDomainName IpPort AccessMask Computer | rename Computer as dest | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `network_share_discovery_via_dir_command_filter`",
              "m": "To successfully implement this search, you need to be ingesting Windows Security Event Logs with 5140 EventCode enabled. The Windows TA is also required. Also enable the object Audit access success/failure in your group policy.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "System Administrators may use looks like net.exe or \"dir commandline\" for troubleshooting or administrations tasks. However, this will typically come only from certain users and certain systems that can be added to an allow list.",
              "refs": "https://thedfirreport.com/2023/05/22/icedid-macro-ends-in-nokoyawa-ransomware/",
              "mitre": [
                "T1135"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Network Share Discovery Via Dir Command\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5140. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1135. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Network Share Discovery Via Dir Command\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: System Administrators may use looks like net.exe or \"dir commandline\" for troubleshooting or administrations tasks. However, this will typically come only from certain users and certain systems that can be added to an allow list.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch access to Windows administrative SMB shares (Admin$, IPC$, C$) using the 'dir' command — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.63",
              "n": "NLTest Domain Trust Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `nltest.exe` with command-line arguments `/domain_trusts` or `/all_trusts` to query Domain Trust information. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. This activity is significant as it indicates potential reconnaissance efforts by adversaries to understand domain trust relationships, which can inform their lateral movement strategies. If confirmed malicio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may use nltest for troubleshooting purposes, otherwise, rarely used.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1482/T1482.md, https://malware.news/t/lets-learn-trickbot-implements-network-collector-module-leveraging-cmd-wmi-ldap/19104, https://attack.mitre.org/techniques/T1482/, https://owasp.org/www-pdf-archive/Red_Team_Operating_in_a_Modern_Environment.pdf, https://ss64.com/nt/nltest.html, https://redcanary.com/threat-detection-report/techniques/domain-trust-discovery/, https://thedfirreport.com/2020/10/08/ryuks-return/",
              "mitre": [
                "T1482"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"NLTest Domain Trust Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1482. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"NLTest Domain Trust Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may use nltest for troubleshooting purposes, otherwise, rarely used.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the execution of with command-line arguments or to query Domain Trust information — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.64",
              "n": "Possible Lateral Movement PowerShell Spawn",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the spawning of a PowerShell process as a child or grandchild of commonly abused processes like services.exe, wmiprvse.exe, svchost.exe, wsmprovhost.exe, and mmc.exe. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and parent process names, as well as command-line executions. This activity is significant as it often indicates lateral movement or remote code execution attempts by adversaries. If confirmed malicious, this beha…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may spawn PowerShell as a child process of the the identified processes. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1021/003/, https://attack.mitre.org/techniques/T1021/006/, https://attack.mitre.org/techniques/T1047/, https://attack.mitre.org/techniques/T1053/005/, https://attack.mitre.org/techniques/T1543/003/",
              "mitre": [
                "T1021.003",
                "T1021.006",
                "T1047",
                "T1053.005",
                "T1059.001",
                "T1218.014",
                "T1543.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Possible Lateral Movement PowerShell Spawn\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.003, T1021.006, T1047, T1053.005, T1059.001, T1218.014, T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Possible Lateral Movement PowerShell Spawn\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may spawn PowerShell as a child process of the the identified processes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
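\n\nSimplified Pattern Sketch\n\nA minimal sketch of the underlying pattern, not the packaged ESCU detection logic; the parent-process list is taken from the description above, and the sketch matches only direct children, so adjust to your environment:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n    from datamodel=Endpoint.Processes\n    where Processes.process_name IN (\"powershell.exe\",\"pwsh.exe\")\n      AND Processes.parent_process_name IN (\"services.exe\",\"wmiprvse.exe\",\"svchost.exe\",\"wsmprovhost.exe\",\"mmc.exe\")\n    by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",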
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the spawning of a PowerShell process as a child or grandchild of commonly abused processes like services.exe, wmiprvse.exe, svchost.exe, wsmprovhost.exe, and mmc.exe — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.65",
              "n": "Potential System Network Configuration Discovery Activity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the rapid execution of processes used for system network configuration discovery on an endpoint. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process GUIDs, names, parent processes, and command-line executions. This activity can be significant as it may indicate an attacker attempting to map the network, which is a common precursor to lateral movement or further exploitation. If confirmed malicious, this behavior could allow a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is uncommon for normal users to execute a series of commands used for network discovery. System administrators often use scripts to execute these commands. These can generate false positives.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1016"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Potential System Network Configuration Discovery Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1016. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Potential System Network Configuration Discovery Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is uncommon for normal users to execute a series of commands used for network discovery. System administrators often use scripts to execute these commands. These can generate false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
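\n\nAs a concrete instance of the lookup-based suppression described above, a hypothetical allowlist lookup (network_discovery_allowlist.csv with a single normalized_risk_object column of approved admin hosts) can filter the drilldown results; this is a sketch, not ESCU-shipped content:\n\n```spl\n| from datamodel Risk.All_Risk\n| search search_name=\"*System Network Configuration Discovery*\" NOT [| inputlookup network_discovery_allowlist.csv | fields normalized_risk_object]\n| stats count values(risk_message) as risk_message by normalized_risk_object\n```",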
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the rapid execution of processes used for system network configuration discovery on an endpoint — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.66",
              "n": "Powershell Enable SMB1Protocol Feature",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the enabling of the SMB1 protocol via `powershell.exe`. It leverages PowerShell script block logging (EventCode 4104) to identify the execution of the `Enable-WindowsOptionalFeature` cmdlet with the `SMB1Protocol` parameter. This activity is significant because enabling SMB1 can facilitate lateral movement and file encryption by ransomware, such as RedDot. If confirmed malicious, this action could allow an attacker to propagate through the network, encrypt files, a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network operator may enable or disable this windows feature.",
              "refs": "https://app.any.run/tasks/c0f98850-af65-4352-9746-fbebadee4f05/, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html",
              "mitre": [
                "T1027.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Enable SMB1Protocol Feature\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Enable SMB1Protocol Feature\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network operator may enable or disable this windows feature.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
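\n\nTo spot-check the underlying 4104 activity ad hoc, a sketch that reuses the `powershell` macro and field conventions from the ESCU searches in this catalog (adjust if your script block events are indexed differently):\n\n```spl\n`powershell` EventCode=4104 ScriptBlockText=\"*Enable-WindowsOptionalFeature*\" ScriptBlockText=\"*SMB1Protocol*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, user_id\n```",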
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the enabling of the SMB1 protocol using — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.67",
              "n": "PowerShell Get LocalGroup Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of the `get-localgroup` command executed via PowerShell or cmd.exe to enumerate local groups on an endpoint. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. Monitoring this activity is significant as it may indicate an attacker attempting to gather information about local group memberships, which can be a precursor to privilege escalation. If confirmed malicious, this …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=powershell.exe\n            OR\n            Processes.process_name=cmd.exe\n        )\n        (Processes.process=\"*get-localgroup*\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `powershell_get_localgroup_discovery_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present. Tune as needed.",
              "refs": "https://attack.mitre.org/techniques/T1069/001/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1069.001/T1069.001.md",
              "mitre": [
                "T1069.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PowerShell Get LocalGroup Discovery\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PowerShell Get LocalGroup Discovery\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present. Tune as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the use of the command executed using PowerShell or cmd.exe to enumerate local groups on an endpoint — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.68",
              "n": "Powershell Get LocalGroup Discovery with Script Block Logging",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the PowerShell cmdlet `get-localgroup` using PowerShell Script Block Logging (EventCode=4104). This method captures the full command sent to PowerShell, providing detailed visibility into script execution. Monitoring this activity is significant as it can indicate an attempt to enumerate local groups, which may be a precursor to privilege escalation or lateral movement. If confirmed malicious, an attacker could gain insights into group memberships,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104 ScriptBlockText = \"*get-localgroup*\"\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `powershell_get_localgroup_discovery_with_script_block_logging_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present. Tune as needed.",
              "refs": "https://www.splunk.com/en_us/blog/security/powershell-detections-threat-research-release-august-2021.html, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1069.001/T1069.001.md, https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba, https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/",
              "mitre": [
                "T1069.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Get LocalGroup Discovery with Script Block Logging\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Get LocalGroup Discovery with Script Block Logging\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present. Tune as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of the PowerShell cmdlet using PowerShell Script Block Logging (EventCode=4104) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.69",
              "n": "PowerShell Invoke WmiExec Usage",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Invoke-WMIExec utility within PowerShell Script Block Logging (EventCode 4104). This detection leverages PowerShell script block logs to identify instances where the Invoke-WMIExec command is used. Monitoring this activity is crucial as it indicates potential lateral movement using WMI commands with NTLMv2 pass-the-hash authentication. If confirmed malicious, this activity could allow an attacker to execute commands remotely on target systems, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as this analytic is designed to detect a specific utility. It is recommended to apply appropriate filters as needed to minimize the number of false positives.",
              "refs": "https://github.com/Kevin-Robertson/Invoke-TheHash/blob/master/Invoke-WMIExec.ps1",
              "mitre": [
                "T1047"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PowerShell Invoke WmiExec Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PowerShell Invoke WmiExec Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as this analytic is designed to detect a specific utility. It is recommended to apply appropriate filters as needed to minimize the number of false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
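\n\nTo confirm raw Invoke-WMIExec activity outside the risk index, a sketch using the same `powershell` macro convention as the ESCU 4104 searches (swap in your sourcetype if the macro is not defined):\n\n```spl\n`powershell` EventCode=4104 ScriptBlockText=\"*Invoke-WMIExec*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, user_id\n```",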
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of the Invoke-WMIExec utility within PowerShell Script Block Logging (EventCode 4104) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.70",
              "n": "Process Execution via WMI",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a process by `WmiPrvSE.exe`, indicating potential use of WMI (Windows Management Instrumentation) for process creation. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and parent process relationships. This activity is significant as WMI can be used for lateral movement, remote code execution, or persistence by attackers. If confirmed malicious, this could allow an attacker to execute arbitrary c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, administrators may use wmi to execute commands for legitimate purposes.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1047"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Process Execution via WMI\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Process Execution via WMI\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, administrators may use wmi to execute commands for legitimate purposes.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
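\n\nFor ad-hoc triage of what WmiPrvSE.exe is spawning on a flagged host, a sketch against the CIM Endpoint data model (an investigation aid, not the packaged detection logic):\n\n```spl\n| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes where Processes.parent_process_name=wmiprvse.exe by Processes.dest Processes.parent_process_name Processes.process_name\n| `drop_dm_object_name(Processes)`\n| sort -count\n```",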
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of a process by, indicating potential use of WMI (Windows Management Instrumentation) for process creation — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.71",
              "n": "Processes launching netsh",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies processes launching netsh.exe, a command-line utility used to modify network configurations. It detects this activity by analyzing data from Endpoint Detection and Response (EDR) agents, focusing on process GUIDs, names, parent processes, and command-line executions. This behavior is significant because netsh.exe can be exploited to execute malicious helper DLLs, serving as a persistence mechanism. If confirmed malicious, an attacker could gain persistent access…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some VPN applications are known to launch netsh.exe. Outside of these instances, it is unusual for an executable to launch netsh.exe and run commands.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1562.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Processes launching netsh\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Processes launching netsh\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some VPN applications are known to launch netsh.exe. Outside of these instances, it is unusual for an executable to launch netsh.exe and run commands.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
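\n\nAs an example of suppressing the known VPN false positives noted above, a sketch in which the excluded parent process names are hypothetical stand-ins for the VPN clients in your estate:\n\n```spl\n| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes where Processes.process_name=netsh.exe NOT Processes.parent_process_name IN (\"vpnui.exe\", \"vpnagent.exe\") by Processes.dest Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",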
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot processes launching netsh.exe, a command-line utility used to modify network configurations — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.72",
              "n": "Randomly Generated Scheduled Task Name",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a Scheduled Task with a high entropy, randomly generated name, leveraging Event ID 4698. It uses the `ut_shannon` function from the URL ToolBox Splunk application to measure the entropy of the Task Name. This activity is significant as adversaries often use randomly named Scheduled Tasks for lateral movement and remote code execution, employing tools like Impacket or CrackMapExec. If confirmed malicious, this could allow attackers to execute arbitra…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4698",
              "q": "`wineventlog_security` EventCode=4698\n      | xmlkv Message\n      | lookup ut_shannon_lookup word as Task_Name\n      | where ut_shannon > 3\n      | table  _time, dest, Task_Name, ut_shannon, Command, Author, Enabled, Hidden\n      | `randomly_generated_scheduled_task_name_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log Security 4698 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may use random Scheduled Task names.",
              "refs": "https://attack.mitre.org/techniques/T1053/005/, https://splunkbase.splunk.com/app/2734/, https://en.wikipedia.org/wiki/Entropy_(information_theory)",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Randomly Generated Scheduled Task Name\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4698. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Randomly Generated Scheduled Task Name\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may use random Scheduled Task names.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the creation of a Scheduled Task with a high entropy, randomly generated name, leveraging Event ID 4698 — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.73",
              "n": "Randomly Generated Windows Service Name",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the installation of a Windows Service with a suspicious, high-entropy name, indicating potential malicious activity. It leverages Event ID 7045 and the `ut_shannon` function from the URL ToolBox Splunk application to identify services with random names. This behavior is significant as adversaries often use randomly named services for lateral movement and remote code execution. If confirmed malicious, this activity could allow attackers to execute arbitrary code, es…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7045",
              "q": "`wineventlog_system` EventCode=7045\n      | lookup ut_shannon_lookup word as Service_Name\n      | where ut_shannon > 3\n      | table EventCode ComputerName Service_Name ut_shannon Service_Start_Type Service_Type Service_File_Name\n      | `randomly_generated_windows_service_name_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log System 7045 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may use random Windows Service names.",
              "refs": "https://attack.mitre.org/techniques/T1543/003/",
              "mitre": [
                "T1543.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Randomly Generated Windows Service Name\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7045. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Randomly Generated Windows Service Name\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may use random Windows Service names.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the installation of a Windows Service with a suspicious, high-entropy name, indicating potential malicious activity — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.74",
              "n": "Remote Desktop Process Running On System",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the remote desktop process (mstsc.exe) on systems where it is not typically run. This detection leverages data from Endpoint Detection and Response (EDR) agents, filtering out systems categorized as common RDP sources. This activity is significant because unauthorized use of mstsc.exe can indicate lateral movement or unauthorized remote access attempts. If confirmed malicious, this could allow an attacker to gain remote control of a system, potenti…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process=*mstsc.exe\n        AND\n        Processes.dest_category!=common_rdp_source\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `drop_dm_object_name(Processes)`\n    | `remote_desktop_process_running_on_system_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Remote Desktop may be used legitimately by users on the network.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote Desktop Process Running On System\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote Desktop Process Running On System\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Remote Desktop may be used legitimately by users on the network.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of the remote desktop process (mstsc.exe) on systems where it is not typically run — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.75",
              "n": "Remote Process Instantiation via DCOM and PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with arguments used to start a process on a remote endpoint by abusing the DCOM protocol, specifically targeting ShellExecute and ExecuteShellCommand. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, parent processes, and command-line executions. This activity is significant as it indicates potential lateral movement and remote code execution attempts by adversaries. If confirmed malic…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage DCOM to start a process on remote systems, but this activity is usually limited to a small set of hosts or users.",
              "refs": "https://attack.mitre.org/techniques/T1021/003/, https://www.cybereason.com/blog/dcom-lateral-movement-techniques",
              "mitre": [
                "T1021.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote Process Instantiation via DCOM and PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote Process Instantiation via DCOM and PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage DCOM to start a process on remote systems, but this activity is usually limited to a small set of hosts or users.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of with arguments used to start a process on a remote endpoint by abusing the DCOM protocol, specifically targeting ShellExecute and ExecuteShellCommand — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.76",
              "n": "Remote Process Instantiation via DCOM and PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of PowerShell commands that initiate a process on a remote endpoint via the DCOM protocol. It leverages PowerShell Script Block Logging (EventCode=4104) to identify the use of ShellExecute and ExecuteShellCommand. This activity is significant as it may indicate lateral movement or remote code execution attempts by adversaries. If confirmed malicious, this behavior could allow attackers to execute arbitrary code on remote systems, potentially leading t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage DCOM to start a process on remote systems, but this activity is usually limited to a small set of hosts or users.",
              "refs": "https://attack.mitre.org/techniques/T1021/003/, https://www.cybereason.com/blog/dcom-lateral-movement-techniques",
              "mitre": [
                "T1021.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote Process Instantiation via DCOM and PowerShell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote Process Instantiation via DCOM and PowerShell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage DCOM to start a process on remote systems, but this activity is usually limited to a small set of hosts or users.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of PowerShell commands that initiate a process on a remote endpoint using the DCOM protocol — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.77",
              "n": "Remote Process Instantiation via WinRM and PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with arguments used to start a process on a remote endpoint via the WinRM protocol, specifically targeting the `Invoke-Command` cmdlet. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process telemetry. This activity is significant as it may indicate lateral movement or remote code execution attempts by adversaries. If confirmed malicious, this could allow attackers to ex…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage WinRM and `Invoke-Command` to start a process on remote systems for system administration or automation use cases. However, this activity is usually limited to a small set of hosts or users.",
              "refs": "https://attack.mitre.org/techniques/T1021/006/, https://pentestlab.blog/2018/05/15/lateral-movement-winrm/",
              "mitre": [
                "T1021.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote Process Instantiation via WinRM and PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote Process Instantiation via WinRM and PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage WinRM and `Invoke-Command` to start a process on remote systems for system administration or automation use cases. However, this activity is usually limited to a small set of hosts or users.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of with arguments used to start a process on a remote endpoint using the WinRM protocol, specifically targeting the cmdlet — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.78",
              "n": "Remote Process Instantiation via WinRM and PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of PowerShell commands that use the `Invoke-Command` cmdlet to start a process on a remote endpoint via the WinRM protocol. It leverages PowerShell Script Block Logging (EventCode=4104) to identify such activities. This behavior is significant as it may indicate lateral movement or remote code execution attempts by adversaries. If confirmed malicious, this activity could allow attackers to execute arbitrary code on remote systems, potentially leading …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage WinRM and `Invoke-Command` to start a process on remote systems for system administration or automation use cases. This activity is usually limited to a small set of hosts or users. In certain environments, tuning may not be possible.",
              "refs": "https://attack.mitre.org/techniques/T1021/006/, https://pentestlab.blog/2018/05/15/lateral-movement-winrm/",
              "mitre": [
                "T1021.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote Process Instantiation via WinRM and PowerShell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote Process Instantiation via WinRM and PowerShell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage WinRM and `Invoke-Command` to start a process on remote systems for system administration or automation use cases. This activity is usually limited to a small set of hosts or users. In certain environments, tuning may not be possible.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of PowerShell commands that use the cmdlet to start a process on a remote endpoint using the WinRM protocol — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.79",
              "n": "Remote Process Instantiation via WinRM and Winrs",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `winrs.exe` with command-line arguments used to start a process on a remote endpoint. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions mapped to the `Processes` node of the `Endpoint` data model. This activity is significant as it may indicate lateral movement or remote code execution attempts by adversaries. If confirmed malicious, this could allow attackers to execute arbit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage WinRM and WinRs to start a process on remote systems, but this activity is usually limited to a small set of hosts or users.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/winrs, https://attack.mitre.org/techniques/T1021/006/",
              "mitre": [
                "T1021.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote Process Instantiation via WinRM and Winrs\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote Process Instantiation via WinRM and Winrs\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage WinRM and WinRs to start a process on remote systems, but this activity is usually limited to a small set of hosts or users.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
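              "_sketch": "Hedged sketch, not the ESCU source: the underlying correlation search for this detection is approximately a CIM Endpoint.Processes query for winrs.exe launched with a remote-target argument. The wildcard flag patterns (-r:, /r:, -remote:) and the grouping fields are assumptions for illustration; `security_content_summariesonly`, `security_content_ctime`, and `drop_dm_object_name` are macros bundled with ESCU. Compare against the real search in Content Management before relying on it.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"winrs.exe\"\n  (Processes.process=\"*-r:*\" OR Processes.process=\"*/r:*\" OR Processes.process=\"*-remote:*\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nBaseline which administrative hosts legitimately use winrs before alerting on this pattern.",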
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of with command-line arguments used to start a process on a remote endpoint — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.80",
              "n": "Remote Process Instantiation via WMI",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of wmic.exe with parameters to spawn a process on a remote system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process telemetry mapped to the `Processes` node of the `Endpoint` data model. This activity is significant as WMI can be abused for lateral movement and remote code execution, often used by adversaries and Red Teams. If confirmed malicious, this could allow attackers to execute…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The wmic.exe utility is a benign Windows application. It may be used legitimately by Administrators with these parameters for remote system administration, but it's relatively uncommon.",
              "refs": "https://attack.mitre.org/techniques/T1047/, https://learn.microsoft.com/en-us/windows/win32/cimwin32prov/create-method-in-class-win32-process",
              "mitre": [
                "T1047"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote Process Instantiation via WMI\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote Process Instantiation via WMI\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The wmic.exe utility is a benign Windows application. It may be used legitimately by Administrators with these parameters for remote system administration, but it's relatively uncommon.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
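              "_sketch": "Hedged sketch, not the ESCU source: comparable detection logic over the CIM Endpoint.Processes data model looks for wmic.exe invoked with a /node: target plus a process-creation verb. The wildcard patterns and grouping fields are illustrative assumptions; the backtick macros are bundled with ESCU. Verify against the actual correlation search in Content Management.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where (Processes.process_name=\"wmic.exe\" OR Processes.original_file_name=\"wmic.exe\")\n  Processes.process=\"*/node:*\" Processes.process=\"*process*call*create*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",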
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of wmic.exe with parameters to spawn a process on a remote system — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.81",
              "n": "Remote Process Instantiation via WMI and PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` using the `Invoke-WmiMethod` cmdlet to start a process on a remote endpoint via WMI. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process telemetry. This activity is significant as it indicates potential lateral movement or remote code execution attempts by adversaries. If confirmed malicious, this could allow attackers to execute arbitrary code on remote systems, lead…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage WWMI and powershell.exe to start a process on remote systems, but this activity is usually limited to a small set of hosts or users.",
              "refs": "https://attack.mitre.org/techniques/T1047/, https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/invoke-wmimethod?view=powershell-5.1",
              "mitre": [
                "T1047"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote Process Instantiation via WMI and PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote Process Instantiation via WMI and PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage WWMI and powershell.exe to start a process on remote systems, but this activity is usually limited to a small set of hosts or users.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
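              "_sketch": "Hedged sketch, not the ESCU source: comparable logic searches CIM Endpoint.Processes for powershell.exe whose command line combines Invoke-WmiMethod with a remote -ComputerName argument. Matching pwsh.exe as well, the exact wildcards, and the grouping fields are assumptions; the backtick macros ship with ESCU.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where (Processes.process_name=\"powershell.exe\" OR Processes.process_name=\"pwsh.exe\")\n  Processes.process=\"*Invoke-WmiMethod*\" Processes.process=\"*-ComputerName*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",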
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of using the cmdlet to start a process on a remote endpoint using WMI — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.82",
              "n": "Remote Process Instantiation via WMI and PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Invoke-WmiMethod` commandlet with parameters used to start a process on a remote endpoint via WMI, leveraging PowerShell Script Block Logging (EventCode=4104). This method identifies specific script block text patterns associated with remote process instantiation. This activity is significant as it may indicate lateral movement or remote code execution attempts by adversaries. If confirmed malicious, this could allow attackers to execute arbit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage WWMI and powershell.exe to start a process on remote systems, but this activity is usually limited to a small set of hosts or users.",
              "refs": "https://attack.mitre.org/techniques/T1047/, https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/invoke-wmimethod?view=powershell-5.1",
              "mitre": [
                "T1047"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote Process Instantiation via WMI and PowerShell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote Process Instantiation via WMI and PowerShell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage WWMI and powershell.exe to start a process on remote systems, but this activity is usually limited to a small set of hosts or users.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
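              "_sketch": "Hedged sketch, not the ESCU source: comparable logic searches PowerShell Script Block Logging (EventCode 4104) for Invoke-WmiMethod combined with a remote -ComputerName argument. The source value assumes XML-format Windows event log inputs, and the Computer/UserID/ScriptBlockText field names come from that format; scope the search to your Windows index, since none is specified here.\n\n```spl\nsource=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104\nScriptBlockText=\"*Invoke-WmiMethod*\" ScriptBlockText=\"*-ComputerName*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer UserID ScriptBlockText\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",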
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of the commandlet with parameters used to start a process on a remote endpoint using WMI, leveraging PowerShell Script Block Logging (EventCode=4104) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.83",
              "n": "Remote WMI Command Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `wmic.exe` with the `node` switch, indicating an attempt to spawn a local or remote process. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events and command-line arguments. This activity is significant as it may indicate lateral movement or remote code execution attempts by an attacker. If confirmed malicious, the attacker could gain remote control over the targeted system, execute ar…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may use this legitimately to gather info from remote systems. Filter as needed.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1047/T1047.yaml, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/, https://thedfirreport.com/2023/05/22/icedid-macro-ends-in-nokoyawa-ransomware/",
              "mitre": [
                "T1047"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote WMI Command Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote WMI Command Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may use this legitimately to gather info from remote systems. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
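              "_sketch": "Hedged sketch, not the ESCU source: comparable logic flags any wmic.exe invocation carrying the /node: switch, which is broader (and noisier) than the process-creation variant in use case 10.2.80. Wildcards and grouping fields are illustrative assumptions; the backtick macros ship with ESCU.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where (Processes.process_name=\"wmic.exe\" OR Processes.original_file_name=\"wmic.exe\")\n  Processes.process=\"*/node:*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",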
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of with the switch, indicating an attempt to spawn a local or remote process — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.84",
              "n": "Revil Registry Entry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious modifications in the registry entry, specifically targeting paths used by malware like REVIL. It detects changes in registry paths such as `SOFTWARE\\\\WOW6432Node\\\\Facebook_Assistant` and `SOFTWARE\\\\WOW6432Node\\\\BlackLivesMatter`. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on registry modifications linked to process GUIDs. This activity is significant as it indicates potential malware persistence mechanism…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 12, Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://krebsonsecurity.com/2021/05/a-closer-look-at-the-darkside-ransomware-gang/, https://www.mcafee.com/blogs/other-blogs/mcafee-labs/mcafee-atr-analyzes-sodinokibi-aka-revil-ransomware-as-a-service-what-the-code-tells-us/",
              "mitre": [
                "T1112"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Revil Registry Entry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 12, Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Revil Registry Entry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
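              "_sketch": "Hedged sketch, not the ESCU source: comparable logic watches the CIM Endpoint.Registry data model for writes under the registry paths named in this use case. The IN() patterns and grouping fields are assumptions, and registry_path formatting varies by add-on, so validate against a known-good Sysmon EventID 12/13 event first.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path IN (\"*\\\\SOFTWARE\\\\WOW6432Node\\\\Facebook_Assistant*\", \"*\\\\SOFTWARE\\\\WOW6432Node\\\\BlackLivesMatter*\")\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",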
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot suspicious modifications in the registry entry, specifically targeting paths used by malware like REVIL — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.85",
              "n": "Rubeus Command Line Parameters",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Rubeus command line parameters, a toolset for Kerberos attacks within Active Directory environments. It leverages Endpoint Detection and Response (EDR) data to identify specific command-line arguments associated with actions like ticket manipulation, kerberoasting, and password spraying. This activity is significant as Rubeus is commonly used by adversaries to exploit Kerberos for privilege escalation and lateral movement. If confirmed malicious, this co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, legitimate applications may use the same command line parameters as Rubeus. Filter as needed.",
              "refs": "https://github.com/GhostPack/Rubeus, https://web.archive.org/web/20210725005734/http://www.harmj0y.net/blog/redteaming/from-kekeo-to-rubeus/, https://attack.mitre.org/techniques/T1550/003/, https://en.hackndo.com/kerberos-silver-golden-tickets/",
              "mitre": [
                "T1550.003",
                "T1558.003",
                "T1558.004"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Rubeus Command Line Parameters\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1550.003, T1558.003, T1558.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Rubeus Command Line Parameters\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, legitimate applications may use the same command line parameters as Rubeus. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
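              "_sketch": "Hedged sketch, not the ESCU source: comparable logic over CIM Endpoint.Processes flags command lines containing well-known Rubeus verbs. The verb list below is a partial, assumed subset (the shipped detection covers many more parameters), and short tokens such as *ptt* will also match unrelated command lines, so expect tuning. The backtick macros ship with ESCU.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process IN (\"*asktgt*\", \"*asktgs*\", \"*kerberoast*\", \"*s4u*\", \"*golden*\", \"*silver*\", \"*ptt*\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",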
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the use of Rubeus command line parameters, a toolset for Kerberos attacks within Active Directory environments — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.86",
              "n": "Runas Execution in CommandLine",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the runas.exe process with administrator user options. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process details. This activity is significant as it may indicate an attempt to gain elevated privileges, a common tactic in privilege escalation and lateral movement. If confirmed malicious, this could allow an attacker to execute commands with higher privileges, potentially leading to u…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time)\n    as lastTime from datamodel=Endpoint.Processes where\n    (Processes.process_name=\"runas.exe\" OR Processes.original_file_name=\"runas.exe\")\n    Processes.process =\"*/user:*\"\n    Processes.process = \"*admin*\"\n    by Processes.action Processes.dest\n       Processes.original_file_name Processes.parent_process Processes.parent_process_exec\n       Processes.parent_process_guid Processes.parent_process_id Processes.parent_process_name\n       Processes.parent_process_path Processes.process Processes.process_exec Processes.process_guid\n       Processes.process_hash Processes.process_id Processes.process_integrity_level Processes.process_name\n       Processes.process_path Processes.user Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `runas_execution_in_commandline_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A network operator or systems administrator may utilize an automated or manual execute this command that may generate false positives. filter is needed.",
              "refs": "https://app.any.run/tasks/ad4c3cda-41f2-4401-8dba-56cc2d245488/",
              "mitre": [
                "T1134.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Runas Execution in CommandLine\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1134.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Runas Execution in CommandLine\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A network operator or systems administrator may utilize an automated or manual execute this command that may generate false positives. filter is needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of the runas.exe process with administrator user options — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.87",
              "n": "Scheduled Task Creation on Remote Endpoint using At",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of scheduled tasks on remote Windows endpoints using the at.exe command. This detection leverages Endpoint Detection and Response (EDR) telemetry, focusing on process creation events involving at.exe with remote command-line arguments. Identifying this activity is significant for a SOC as it may indicate lateral movement or remote code execution attempts by an attacker. If confirmed malicious, this activity could lead to unauthorized access, persistenc…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may create scheduled tasks on remote systems, but this activity is usually limited to a small set of hosts or users.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/at, https://learn.microsoft.com/en-us/windows/win32/cimwin32prov/win32-scheduledjob?redirectedfrom=MSDN",
              "mitre": [
                "T1053.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Scheduled Task Creation on Remote Endpoint using At\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Scheduled Task Creation on Remote Endpoint using At\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may create scheduled tasks on remote systems, but this activity is usually limited to a small set of hosts or users.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the creation of scheduled tasks on remote Windows endpoints using the at.exe command — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.88",
              "n": "Scheduled Task Initiation on Remote Endpoint",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of 'schtasks.exe' to start a Scheduled Task on a remote endpoint. This detection leverages Endpoint Detection and Response (EDR) data, focusing on process details such as process name, parent process, and command-line executions. This activity is significant as adversaries often abuse Task Scheduler for lateral movement and remote code execution. If confirmed malicious, this behavior could allow attackers to execute arbitrary code remotely, potentially lead…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may start scheduled tasks on remote systems, but this activity is usually limited to a small set of hosts or users.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/schtasks, https://attack.mitre.org/techniques/T1053/005/",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Scheduled Task Initiation on Remote Endpoint\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Scheduled Task Initiation on Remote Endpoint\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may start scheduled tasks on remote systems, but this activity is usually limited to a small set of hosts or users.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the use of 'schtasks.exe' to start a Scheduled Task on a remote endpoint — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.89",
              "n": "Schtasks Run Task On Demand",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a Windows Scheduled Task on demand via the shell or command line. It leverages process-related data, including process name, parent process, and command-line executions, sourced from endpoint logs. The detection focuses on 'schtasks.exe' with an associated 'run' command. This activity is significant as adversaries often use it to force the execution of their created Scheduled Tasks for persistent access or lateral movement within a compromised mach…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Bear in mind, administrators debugging Scheduled Task entries may trigger this analytic, necessitating fine-tuning and filtering to distinguish between legitimate and potentially malicious use of 'schtasks.exe'.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1053"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Schtasks Run Task On Demand\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Schtasks Run Task On Demand\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Bear in mind, administrators debugging Scheduled Task entries may trigger this analytic, necessitating fine-tuning and filtering to distinguish between legitimate and potentially malicious use of 'schtasks.exe'.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of a Windows Scheduled Task on demand using the shell or command line — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.90",
              "n": "Schtasks scheduling job on remote system",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of 'schtasks.exe' to create a scheduled task on a remote system, indicating potential lateral movement or remote code execution. It leverages process data from Endpoint Detection and Response (EDR) agents, focusing on specific command-line arguments and flags. This activity is significant as it may signify an adversary's attempt to persist or execute code remotely. If confirmed malicious, this could allow attackers to maintain access, execute arbitrary comm…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While it is possible to have false positives, due to legitimate administrative tasks, these are usually limited and should still be validated and investigated as appropriate.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Schtasks scheduling job on remote system\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Schtasks scheduling job on remote system\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While it is possible to have false positives, due to legitimate administrative tasks, these are usually limited and should still be validated and investigated as appropriate.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the use of 'schtasks.exe' to create a scheduled task on a remote system, indicating potential lateral movement or remote code execution — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.91",
              "n": "Services LOLBAS Execution Process Spawn",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies `services.exe` spawning a LOLBAS (Living Off the Land Binaries and Scripts) execution process. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where `services.exe` is the parent process. This activity is significant because adversaries often abuse the Service Control Manager to execute malicious code via native Windows binaries, facilitating lateral movement. If confirmed malicious, this behavior could all…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may trigger this behavior, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1543/003/, https://pentestlab.blog/2020/07/21/lateral-movement-services/, https://lolbas-project.github.io/",
              "mitre": [
                "T1543.003"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Services LOLBAS Execution Process Spawn\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Services LOLBAS Execution Process Spawn\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may trigger this behavior, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot spawning a LOLBAS (Living Off the Land Binaries and Scripts) execution process — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.92",
              "n": "Set Default PowerShell Execution Policy To Unrestricted or Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects changes to the PowerShell ExecutionPolicy in the registry to \"Unrestricted\" or \"Bypass.\" It leverages data from Endpoint Detection and Response (EDR) agents, focusing on registry modifications under the path *Software\\Microsoft\\Powershell\\1\\ShellIds\\Microsoft.PowerShell*. This activity is significant because setting the ExecutionPolicy to these values can allow the execution of potentially malicious scripts without restriction. If confirmed malicious, this could en…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may attempt to change the default execution policy on a system for a variety of reasons. However, setting the policy to \"unrestricted\" or \"bypass\" as this search is designed to identify, would be unusual. Hits should be reviewed and investigated as appropriate.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "registry_path",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Set Default PowerShell Execution Policy To Unrestricted or Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to registry key entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Set Default PowerShell Execution Policy To Unrestricted or Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may attempt to change the default execution policy on a system for a variety of reasons. However, setting the policy to \"unrestricted\" or \"bypass\" as this search is designed to identify, would be unusual. Hits should be reviewed and investigated as appropriate.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch changes to the PowerShell ExecutionPolicy in the registry to \"Unrestricted\" or \"Bypass.\" It leverages data from Endpoint Detection and Response agents, focusing on registry modifications under the path *Software\\Microsoft\\Powershell\\1\\ShellIds\\Microsoft.PowerShell* — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.93",
              "n": "Short Lived Scheduled Task",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation and deletion of scheduled tasks within a short time frame (less than 30 seconds) using Windows Security EventCodes 4698 and 4699. This behavior is identified by analyzing Windows Security Event Logs and leveraging the Windows TA for parsing. Such activity is significant as it may indicate lateral movement or remote code execution attempts by adversaries. If confirmed malicious, this could lead to unauthorized access, data exfiltration, or execution of …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4698, Windows Event Log Security 4699",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4698, Windows Event Log Security 4699 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although uncommon, legitimate applications may create and delete a Scheduled Task within 30 seconds. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1053/005/, https://learn.microsoft.com/en-us/windows/win32/taskschd/about-the-task-scheduler",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Short Lived Scheduled Task\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4698, Windows Event Log Security 4699. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Short Lived Scheduled Task\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although uncommon, legitimate applications may create and delete a Scheduled Task within 30 seconds. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
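              "_ex": "Optional raw-data review (a rough sketch, not the packaged ESCU logic; assumes Security events in index=wineventlog and a TaskName field — adjust the index and the field name, e.g. TaskName vs. Task_Name, to your Windows TA's extractions):\n\n```spl\nindex=wineventlog EventCode IN (4698,4699)\n| stats min(_time) as created max(_time) as deleted range(_time) as lifetime values(EventCode) as codes by host, TaskName\n| where mvcount(codes)=2 AND lifetime<30\n```",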
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the creation and deletion of scheduled tasks within a short time frame (less than 30 seconds) using Windows Security EventCodes 4698 and 4699 — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.94",
              "n": "Short Lived Windows Accounts",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the rapid creation and deletion of Windows accounts within a short time frame of 1 hour. It leverages the \"Change\" data model in Splunk, specifically monitoring events with result IDs 4720 (account creation) and 4726 (account deletion). This behavior is significant as it may indicate an attacker attempting to create and remove accounts quickly to evade detection or gain unauthorized access. If confirmed malicious, this activity could lead to unauthorized access, pr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 4720, Windows Event Log System 4726",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log System 4720, Windows Event Log System 4726 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that an administrator created and deleted an account in a short time period.  Verifying activity with an administrator is advised.",
              "refs": "https://www.youtube.com/watch?v=D4Cd-KK4ctk, https://attack.mitre.org/techniques/T1078/",
              "mitre": [
                "T1078.003",
                "T1136.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Short Lived Windows Accounts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 4720, Windows Event Log System 4726. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.003, T1136.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Short Lived Windows Accounts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that an administrator created and deleted an account in a short time period.  Verifying activity with an administrator is advised.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
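              "_ex": "Optional raw-data review (a rough sketch, not the packaged ESCU logic; assumes Security events in index=wineventlog and that the affected account is extracted as user — adjust to your Windows TA's field names):\n\n```spl\nindex=wineventlog EventCode IN (4720,4726)\n| stats min(_time) as created max(_time) as deleted range(_time) as lifetime values(EventCode) as codes by user\n| where mvcount(codes)=2 AND lifetime<3600\n```",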
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the rapid creation and deletion of Windows accounts within a short time frame of 1 hour — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.95",
              "n": "Suspicious MSBuild Spawn",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where wmiprvse.exe spawns msbuild.exe, which is unusual and indicative of potential misuse of a COM object. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process relationships and command-line executions. This activity is significant because msbuild.exe is typically spawned by devenv.exe during legitimate Visual Studio use, not by wmiprvse.exe. If confirmed malicious, this behavior could indicate an attack…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate applications may exhibit this behavior, triggering a false positive.",
              "refs": "https://lolbas-project.github.io/lolbas/Binaries/Msbuild/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1127.001/T1127.001.md",
              "mitre": [
                "T1127.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious MSBuild Spawn\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1127.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious MSBuild Spawn\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate applications may exhibit this behavior, triggering a false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
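              "_ex": "Optional raw-data review (a minimal sketch, not the packaged ESCU logic; assumes the CIM Endpoint data model is accelerated and populated by your EDR add-on):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=\"wmiprvse.exe\" Processes.process_name=\"msbuild.exe\"\n  by Processes.dest Processes.user Processes.parent_process Processes.process\n```",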
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot instances where wmiprvse.exe spawns msbuild.exe, which is unusual and indicative of potential misuse of a COM object — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.96",
              "n": "Svchost LOLBAS Execution Process Spawn",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances of 'svchost.exe' spawning Living Off The Land Binaries and Scripts (LOLBAS) processes. It leverages Endpoint Detection and Response (EDR) data to monitor child processes of 'svchost.exe' that match known LOLBAS executables. This activity is significant as adversaries often use LOLBAS techniques to execute malicious code stealthily, potentially indicating lateral movement or code execution attempts. If confirmed malicious, this behavior could allow attacke…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may trigger this behavior, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1053/005/, https://www.ired.team/offensive-security/persistence/t1053-schtask, https://lolbas-project.github.io/",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Svchost LOLBAS Execution Process Spawn\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Svchost LOLBAS Execution Process Spawn\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may trigger this behavior, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
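              "_ex": "Optional raw-data review (a minimal sketch, not the packaged ESCU logic; assumes an accelerated CIM Endpoint data model — the child-process list below is an abbreviated illustration, whereas the packaged detection matches the full LOLBAS set):\n\n```spl\n| tstats summariesonly=true count\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=\"svchost.exe\"\n    Processes.process_name IN (\"regsvr32.exe\",\"rundll32.exe\",\"mshta.exe\",\"certutil.exe\",\"wscript.exe\")\n  by Processes.dest Processes.parent_process_name Processes.process_name Processes.process\n```",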
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch instances of 'svchost.exe' spawning Living Off The Land Binaries and Scripts (LOLBAS) processes — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.97",
              "n": "Unusual Number of Computer Service Tickets Requested",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an unusual number of computer service ticket requests from a single source, leveraging Event ID 4769, \"A Kerberos service ticket was requested.\" It uses statistical analysis, including standard deviation and the 3-sigma rule, to detect anomalies in service ticket requests. This activity is significant as it may indicate malicious behavior such as lateral movement, malware staging, or reconnaissance. If confirmed malicious, an attacker could gain unauthorized acc…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4769",
              "q": "`wineventlog_security` EventCode=4769 Service_Name=\"*$\" Account_Name!=\"*$*\"\n      | bucket span=2m _time\n      | stats dc(Service_Name) AS unique_targets values(Service_Name) as host_targets\n        BY _time, Client_Address, Account_Name\n      | eventstats avg(unique_targets) as comp_avg , stdev(unique_targets) as comp_std\n        BY Client_Address, Account_Name\n      | eval upperBound=(comp_avg+comp_std*3)\n      | eval isOutlier=if(unique_targets >10 and unique_targets >= upperBound, 1, 0)\n      | `unusual_number_of_computer_service_tickets_requested_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log Security 4769 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "An single endpoint requesting a large number of computer service tickets is not common behavior. Possible false positive scenarios include but are not limited to vulnerability scanners, administration systeams and missconfigured systems.",
              "refs": "https://attack.mitre.org/techniques/T1078/",
              "mitre": [
                "T1078"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Unusual Number of Computer Service Tickets Requested\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4769. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Unusual Number of Computer Service Tickets Requested\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: An single endpoint requesting a large number of computer service tickets is not common behavior. Possible false positive scenarios include but are not limited to vulnerability scanners, administration systeams and missconfigured systems.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot an unusual number of computer service ticket requests from a single source, leveraging Event ID 4769, \"A Kerberos service ticket was requested.\" It uses statistical analysis, including standard deviation and the 3-sigma rule, to detect anomalies in service ticket requests — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.98",
              "n": "Unusual Number of Remote Endpoint Authentication Events",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an unusual number of remote authentication attempts from a single source by leveraging Windows Event ID 4624, which logs successful account logons. It uses statistical analysis, specifically the 3-sigma rule, to detect deviations from normal behavior. This activity is significant for a SOC as it may indicate lateral movement, malware staging, or reconnaissance. If confirmed malicious, this behavior could allow an attacker to move laterally within the network, es…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4624",
              "q": "`wineventlog_security` EventCode=4624 Logon_Type=3 Account_Name!=\"*$\"\n      | eval Source_Account = mvindex(Account_Name, 1)\n      | bucket span=2m _time\n      | stats dc(ComputerName) AS unique_targets values(ComputerName) as target_hosts\n        BY _time, Source_Network_Address, Source_Account\n      | eventstats avg(unique_targets) as comp_avg , stdev(unique_targets) as comp_std\n        BY Source_Network_Address, Source_Account\n      | eval upperBound=(comp_avg+comp_std*3)\n      | eval isOutlier=if(unique_targets >10 and unique_targets >= upperBound, 1, 0)\n      | `unusual_number_of_remote_endpoint_authentication_events_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log Security 4624 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "An single endpoint authenticating to a large number of hosts is not common behavior. Possible false positive scenarios include but are not limited to vulnerability scanners, jump servers and missconfigured systems.",
              "refs": "https://attack.mitre.org/techniques/T1078/",
              "mitre": [
                "T1078"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Unusual Number of Remote Endpoint Authentication Events\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4624. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Unusual Number of Remote Endpoint Authentication Events\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: An single endpoint authenticating to a large number of hosts is not common behavior. Possible false positive scenarios include but are not limited to vulnerability scanners, jump servers and missconfigured systems.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot an unusual number of remote authentication attempts from a single source by leveraging Windows Event ID 4624, which logs successful account logons — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.99",
              "n": "Verclsid CLSID Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the potential abuse of the verclsid.exe utility to execute malicious files via generated CLSIDs. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific command-line patterns associated with verclsid.exe. This activity is significant because verclsid.exe is a legitimate Windows application used to verify CLSID COM objects, and its misuse can indicate an attempt to bypass security controls. If confirmed malicious, this technique cou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` values(Processes.process) as process\n    values(Processes.parent_process) as parent_process values(Processes.process_id)\n    as process_id count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where\n    (Processes.process_name=\"verclsid.exe\" OR Processes.original_file_name=\"verclsid.exe\")\n    Processes.process=\"*/S*\"\n    Processes.process=\"*/C*\"\n    Processes.process=\"*{*\"\n    Processes.process=\"*}*\"\n    by Processes.action Processes.dest Processes.original_file_name\n       Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n       Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n       Processes.process Processes.process_exec Processes.process_guid Processes.process_hash\n       Processes.process_id Processes.process_integrity_level Processes.process_name Processes.process_path\n       Processes.user Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `verclsid_clsid_execution_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "windows can used this application for its normal COM object validation.",
              "refs": "https://gist.github.com/NickTyrer/0598b60112eaafe6d07789f7964290d5, https://bohops.com/2018/08/18/abusing-the-com-registry-structure-part-2-loading-techniques-for-evasion-and-persistence/",
              "mitre": [
                "T1218.012"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Verclsid CLSID Execution\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Verclsid CLSID Execution\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: windows can used this application for its normal COM object validation.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the potential abuse of the verclsid.exe utility to execute malicious files using generated CLSIDs — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.100",
              "n": "Wermgr Process Spawned CMD Or Powershell Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the spawning of cmd or PowerShell processes by the wermgr.exe process. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process telemetry, including parent-child process relationships and command-line executions. This behavior is significant as it is commonly associated with code injection techniques used by malware like TrickBot to execute shellcode or malicious DLL modules. If confirmed malicious, this activity could allow attacker…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://labs.vipre.com/trickbot-and-its-modules/",
              "mitre": [
                "T1059"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Wermgr Process Spawned CMD Or Powershell Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Wermgr Process Spawned CMD Or Powershell Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the spawning of cmd or PowerShell processes by the wermgr.exe process — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
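The md above notes that the actual detection logic for "Wermgr Process Spawned CMD Or Powershell Process" ships inside the ESCU Correlation Search and is not stored in this catalog. As a rough, event-level sketch of the same idea (wermgr.exe spawning a command shell), the following search works directly against Sysmon EventID 1. It is an illustration, not the packaged ESCU logic; `index=win` and the Sysmon sourcetype are assumptions to rename for your deployment:

```spl
index=win sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1
  ParentImage="*\\wermgr.exe"
  Image IN ("*\\cmd.exe", "*\\powershell.exe", "*\\pwsh.exe")
| stats count min(_time) as firstTime max(_time) as lastTime
  by host, ParentImage, Image, CommandLine
| sort -count
```

The shipped ESCU detection typically drives the same logic through `tstats` over the CIM `Endpoint.Processes` data model, which is what lets it also cover Security 4688 and CrowdStrike ProcessRollup2 once those feeds are CIM-normalized.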
            {
              "i": "10.2.101",
              "n": "Windows Account Discovery for Sam Account Name",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the PowerView PowerShell cmdlet Get-NetUser, specifically querying for \"samaccountname\" and \"pwdlastset\" attributes. It leverages Event ID 4104 from PowerShell Script Block Logging to identify this activity. This behavior is significant as it may indicate an attempt to gather user account information from Active Directory, which is a common reconnaissance step in lateral movement or privilege escalation attacks. If confirmed malicious, this activit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage PowerView for legitimate purposes, filter as needed.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-347a",
              "mitre": [
                "T1087"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Account Discovery for Sam Account Name\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Account Discovery for Sam Account Name\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage PowerView for legitimate purposes, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of the PowerView PowerShell cmdlet Get-NetUser, specifically querying for \"samaccountname\" and \"pwdlastset\" attributes — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
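Because the SPL stored above is only the Risk drilldown, it can help to see what the underlying 4104 logic roughly looks like. The sketch below matches PowerView's Get-NetUser querying the samaccountname and pwdlastset attributes in PowerShell Script Block Logging; it assumes script block logging is enabled and that the events land in an assumed `index=win` with the standard `XmlWinEventLog` sourcetype:

```spl
index=win sourcetype="XmlWinEventLog" EventCode=4104
  ScriptBlockText="*Get-NetUser*"
  ScriptBlockText="*samaccountname*" ScriptBlockText="*pwdlastset*"
| stats count min(_time) as firstTime max(_time) as lastTime
  by Computer, UserID, Path
```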
            {
              "i": "10.2.102",
              "n": "Windows AD Privileged Object Access Activity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects access attempts to privileged Active Directory objects, such as Domain Admins or Enterprise Admins. It leverages Windows Security Event Code 4662 to identify when these sensitive objects are accessed. This activity is significant because such objects should rarely be accessed by normal users or processes, and unauthorized access attempts may indicate attacker enumeration or lateral movement within the domain. If confirmed malicious, this activity could allow attack…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4662",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4662 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Service accounts or applications that routinely query Active Directory for information.",
              "refs": "https://medium.com/securonix-tech-blog/detecting-ldap-enumeration-and-bloodhound-s-sharphound-collector-using-active-directory-decoys-dfc840f2f644, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4662, https://attack.mitre.org/tactics/TA0007/",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Privileged Object Access Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4662. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Privileged Object Access Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Service accounts or applications that routinely query Active Directory for information.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch access attempts to privileged Active Directory objects, such as Domain Admins or Enterprise Admins — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
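For orientation, a hand-rolled version of this 4662 detection might look like the sketch below. It assumes a hypothetical `privileged_ad_object_guids` lookup that you maintain, mapping the GUIDs of sensitive objects (Domain Admins, Enterprise Admins, and so on, as they appear in the 4662 `ObjectName` field) to readable labels; the index name is likewise an assumption:

```spl
index=win sourcetype="XmlWinEventLog" EventCode=4662
| lookup privileged_ad_object_guids object_guid AS ObjectName OUTPUT object_label
| where isnotnull(object_label)
| stats count min(_time) as firstTime max(_time) as lastTime
  by SubjectUserName, object_label, AccessMask
```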
            {
              "i": "10.2.103",
              "n": "Windows Administrative Shares Accessed On Multiple Hosts",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a source computer accessing Windows administrative shares (C$, Admin$, IPC$) on 30 or more remote endpoints within a 5-minute window. It leverages Event IDs 5140 and 5145 from file share events. This behavior is significant as it may indicate an adversary enumerating network shares to locate sensitive files, a common tactic used by threat actors. If confirmed malicious, this activity could lead to unauthorized access to critical data, lateral movement, and potentia…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5140, Windows Event Log Security 5145",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$host_targets$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting file share events. The Advanced Security Audit policy setting `Audit Detailed File Share` or `Audit File Share` within `Object Access` need to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "An single endpoint accessing windows administrative shares across a large number of endpoints is not common behavior. Possible false positive scenarios include but are not limited to vulnerability scanners, administration systems and missconfigured systems.",
              "refs": "https://attack.mitre.org/techniques/T1135/, https://en.wikipedia.org/wiki/Administrative_share, https://thedfirreport.com/2023/01/23/sharefinder-how-threat-actors-discover-file-shares/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-5140, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-5145",
              "mitre": [
                "T1135"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Administrative Shares Accessed On Multiple Hosts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5140, Windows Event Log Security 5145. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1135. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Administrative Shares Accessed On Multiple Hosts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: An single endpoint accessing windows administrative shares across a large number of endpoints is not common behavior. Possible false positive scenarios include but are not limited to vulnerability scanners, administration systems and missconfigured systems.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch a source computer accessing Windows administrative shares (C$, Admin$, IPC$) on 30 or more remote endpoints within a 5-minute window — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
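The documented behavior (one source touching admin shares on 30 or more endpoints within 5 minutes) translates fairly directly into event-level SPL. The sketch below is not the ESCU search, and `index=win` is an assumption, but it shows the shape of the logic once the `Audit File Share` / `Audit Detailed File Share` policies from the implementation note are feeding 5140/5145 events:

```spl
index=win sourcetype="XmlWinEventLog" EventCode IN (5140, 5145)
  ShareName IN ("\\\\*\\C$", "\\\\*\\ADMIN$", "\\\\*\\IPC$")
| bucket _time span=5m
| stats dc(Computer) as target_count values(ShareName) as shares
  by _time, IpAddress
| where target_count >= 30
```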
            {
              "i": "10.2.104",
              "n": "Windows Archive Collected Data via Rar",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of RAR utilities to archive files on a system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, GUIDs, and command-line arguments. This activity is significant as threat actors, including red-teamers and malware like DarkGate, use RAR archiving to compress and exfiltrate collected data from compromised hosts. If confirmed malicious, this behavior could lead to the unauthorized transfer of sensitive inf…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "user and network administrator can execute this command.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.darkgate",
              "mitre": [
                "T1560.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Archive Collected Data via Rar\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1560.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Archive Collected Data via Rar\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: user and network administrator can execute this command.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the execution of RAR utilities to archive files on a system — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
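As with the other ESCU entries, only the drilldown SPL is stored here. A minimal raw-event approximation of the RAR-archiving behavior over Sysmon EventID 1 follows; the binary names, the `a` (add-to-archive) command-line flag, and the index/sourcetype scoping are all assumptions to adapt:

```spl
index=win sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1
  (Image="*\\rar.exe" OR Image="*\\winrar.exe" OR OriginalFileName="Rar.exe")
  CommandLine="* a *"
| stats count min(_time) as firstTime max(_time) as lastTime
  by host, User, Image, CommandLine
```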
            {
              "i": "10.2.105",
              "n": "Windows Computer Account Created by Computer Account",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a computer account creating a new computer account with a specific Service Principal Name (SPN) \"RestrictedKrbHost\". This detection leverages Windows Security Event Logs, specifically EventCode 4741, to identify such activities. This behavior is significant as it may indicate an attempt to establish unauthorized Kerberos authentication channels, potentially leading to lateral movement or privilege escalation. If confirmed malicious, this activity could allow an …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4741",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4741 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible third party applications may have a computer account that adds computer accounts, filtering may be required.",
              "refs": "https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-kile/445e4499-7e49-4f2a-8d82-aaf2d1ee3c47, https://github.com/Dec0ne/KrbRelayUp",
              "mitre": [
                "T1558"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Computer Account Created by Computer Account\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4741. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Computer Account Created by Computer Account\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible third party applications may have a computer account that adds computer accounts, filtering may be required.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot a computer account creating a new computer account with a specific Service Principal Name (SPN) \"RestrictedKrbHost\" — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
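A quick way to reason about this detection is that 4741 records who created the account (SubjectUserName), and computer accounts always end in `$`. A hedged event-level sketch, assuming `index=win` and that your Windows TA extracts the "Service Principal Names" message attribute as `ServicePrincipalNames` (field naming varies by TA version):

```spl
index=win sourcetype="XmlWinEventLog" EventCode=4741
  SubjectUserName="*$" ServicePrincipalNames="*RestrictedKrbHost*"
| table _time, SubjectUserName, TargetUserName, ServicePrincipalNames
```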
            {
              "i": "10.2.106",
              "n": "Windows Computer Account With SPN",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of Service Principal Names (SPNs) HOST and RestrictedKrbHost to a computer account, indicative of KrbRelayUp behavior. This detection leverages Windows Security Event Logs, specifically EventCode 4741, to identify changes in SPNs. This activity is significant as it is commonly associated with Kerberos-based attacks, which can be used to escalate privileges or perform lateral movement within a network. If confirmed malicious, this behavior could allow a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4741",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4741 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible third party applications may add these SPNs to Computer Accounts, filtering may be needed.",
              "refs": "https://www.trustedsec.com/blog/an-attack-path-mapping-approach-to-cves-2021-42287-and-2021-42278, https://github.com/Dec0ne/KrbRelayUp",
              "mitre": [
                "T1558"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Computer Account With SPN\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4741. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Computer Account With SPN\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible third party applications may add these SPNs to Computer Accounts, filtering may be needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the addition of Service Principal Names (SPNs) HOST and RestrictedKrbHost to a computer account, indicative of KrbRelayUp behavior — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
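This detection differs from the previous one mainly in what it matches inside the 4741 event: both the HOST and RestrictedKrbHost SPNs being attached to the new computer account, the combination KrbRelayUp writes. Under the same assumptions about index scoping and field extraction as the sketch above:

```spl
index=win sourcetype="XmlWinEventLog" EventCode=4741
  ServicePrincipalNames="*HOST/*" ServicePrincipalNames="*RestrictedKrbHost/*"
| stats count min(_time) as firstTime max(_time) as lastTime
  by SubjectUserName, TargetUserName, ServicePrincipalNames
```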
            {
              "i": "10.2.107",
              "n": "Windows ComputerDefaults Spawning a Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the spawning of ComputerDefaults.exe, a Windows system process used to manage default application associations. While normally legitimate, this process can be exploited by attackers to bypass User Account Control (UAC) and execute unauthorized code with elevated privileges. Detection focuses on abnormal execution patterns, unusual parent-child process relationships, or deviations from standard paths. Such behavior may indicate attempts to modify system defaults or …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.recordedfuture.com/research/from-castleloader-to-castlerat-tag-150-advances-operations",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows ComputerDefaults Spawning a Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows ComputerDefaults Spawning a Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the spawning of ComputerDefaults.exe, a Windows system process used to manage default application associations — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
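A quick way to preview the telemetry behind "Windows ComputerDefaults Spawning a Process" before enabling the correlation search is an ad-hoc raw-event search. This is a minimal sketch, not the packaged ESCU logic: `index=win` and `sourcetype="XmlWinEventLog"` are placeholder assumptions, so point them at wherever your Sysmon EventID 1 data actually lands.

```spl
index=win sourcetype="XmlWinEventLog" EventCode=1 ParentImage="*\\computerdefaults.exe"
| stats count min(_time) as firstTime max(_time) as lastTime
    by host, ParentImage, Image, CommandLine
| convert ctime(firstTime) ctime(lastTime)
```

Any child process outside legitimate default-app administration is worth pivoting on before the RBA detection goes live.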
            {
              "i": "10.2.108",
              "n": "Windows Create Local Account",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a new local user account on a Windows system. It leverages Windows Security Audit logs, specifically event ID 4720, to identify this activity. Monitoring the creation of local accounts is crucial for a SOC as it can indicate unauthorized access or lateral movement within the network. If confirmed malicious, this activity could allow an attacker to establish persistence, escalate privileges, or gain unauthorized access to sensitive systems and data.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4720",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4720 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://thedfirreport.com/2022/03/21/apt35-automates-initial-access-using-proxyshell/",
              "mitre": [
                "T1136.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Create Local Account\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4720. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Create Local Account\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the creation of a new local user account on a Windows system — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
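To sanity-check the Event ID 4720 feed that "Windows Create Local Account" depends on, a raw-event sketch like the following can be run first. `index=wineventlog` is an assumption (use whatever index holds your Security channel); `TargetUserName` (the new account) and `SubjectUserName` (the creator) are the native Security-log field names.

```spl
index=wineventlog EventCode=4720
| stats count min(_time) as firstTime max(_time) as lastTime
    by Computer, TargetUserName, SubjectUserName
| convert ctime(firstTime) ctime(lastTime)
```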
            {
              "i": "10.2.109",
              "n": "Windows Credential Dumping LSASS Memory Createdump",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of CreateDump.exe to perform a process dump. This binary is not native to Windows and is often introduced by third-party applications, including PowerShell 7. The detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, GUIDs, and complete command-line executions. This activity is significant as it may indicate an attempt to dump LSASS memory, which can be used to extract credentials. If confirmed malicious, thi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if an application is dumping processes, filter as needed. Recommend reviewing createdump.exe usage across the fleet to better understand all usage and by what.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1003.001/T1003.001.md#atomic-test-11---dump-lsass-with-createdumpexe-from-net-v5",
              "mitre": [
                "T1003.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Credential Dumping LSASS Memory Createdump\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Credential Dumping LSASS Memory Createdump\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if an application is dumping processes, filter as needed. Recommend reviewing createdump.exe usage across the fleet to better understand all usage and by what.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the use of CreateDump.exe to perform a process dump — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
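Because "Windows Credential Dumping LSASS Memory Createdump" expects CIM-normalized endpoint data, a `tstats` preview over the Endpoint data model is a reasonable first look. This approximates, but is not identical to, the packaged ESCU logic (which also inspects the command line); it assumes the Endpoint data model is populated and accelerated.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where Processes.process_name="createdump.exe"
    by Processes.dest Processes.user Processes.parent_process_name Processes.process
| rename Processes.* as *
| convert ctime(firstTime) ctime(lastTime)
```

Running this fleet-wide over 30 days is also a quick way to build the usage baseline the tuning notes recommend.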
            {
              "i": "10.2.110",
              "n": "Windows Debugger Tool Execution",
              "c": "high",
              "f": "intermediate",
              "v": "This analysis detects the use of debugger tools within a production environment. While these tools are legitimate for file analysis and debugging, they are abused by malware like PlugX and DarkGate for malicious DLL side-loading. The hunting query aids Security Operations Centers (SOCs) in identifying potentially suspicious tool executions, particularly for non-technical users in the production network.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name = \"x32dbg.exe\"\n        OR\n        Processes.process_name = \"x64dbg.exe\"\n        OR\n        Processes.process_name = \"windbg.exe\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_debugger_tool_execution_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrator or IT professional may execute this application for verifying files or debugging application.",
              "refs": "https://www.splunk.com/en_us/blog/security/enter-the-gates-an-analysis-of-the-darkgate-autoit-loader.html, https://www.trendmicro.com/en_us/research/23/b/investigating-the-plugx-trojan-disguised-as-a-legitimate-windows.html",
              "mitre": [
                "T1036"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Debugger Tool Execution\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Debugger Tool Execution\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrator or IT professional may execute this application for verifying files or debugging application.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analysis detects the use of debugger tools within a production environment — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
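The tuning notes for "Windows Debugger Tool Execution" suggest refining scope rather than eliminating all noise. One way to do that is a lookup-based exclusion of known engineering machines. In this sketch, `developer_hosts` is a hypothetical lookup you would populate yourself, and the data model query is a simplified take on the documented hunt rather than the exact ESCU search.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where Processes.process_name IN ("x32dbg.exe", "x64dbg.exe", "windbg.exe")
    by Processes.dest Processes.user Processes.process_name
| rename Processes.* as *
| lookup developer_hosts dest OUTPUT is_dev_host
| where isnull(is_dev_host)
| convert ctime(firstTime) ctime(lastTime)
```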
            {
              "i": "10.2.111",
              "n": "Windows Default Group Policy Object Modified with GPME",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to default Group Policy Objects (GPOs) using the Group Policy Management Editor (GPME). It leverages the Endpoint data model to identify processes where `mmc.exe` executes `gpme.msc` with specific GUIDs related to default GPOs. This activity is significant because default GPOs, such as the `Default Domain Controllers Policy` and `Default Domain Policy`, are critical for enforcing security policies across the domain. If malicious, such modifications co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The default Group Policy Objects within an AD network may be legitimately updated for administrative operations, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1484/, https://attack.mitre.org/techniques/T1484/001, https://www.trustedsec.com/blog/weaponizing-group-policy-objects-access/, https://adsecurity.org/?p=2716, https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn265969(v=ws.11)",
              "mitre": [
                "T1484.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Default Group Policy Object Modified with GPME\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1484.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Default Group Policy Object Modified with GPME\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The default Group Policy Objects within an AD network may be legitimately updated for administrative operations, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch modifications to default Group Policy Objects (GPOs) using the Group Policy Management Editor (GPME) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
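For "Windows Default Group Policy Object Modified with GPME", a preview over the Endpoint data model can key on the well-known GUIDs of the Default Domain Policy ({31B2F340-016D-11D2-945F-00C04FB984F9}) and Default Domain Controllers Policy ({6AC1786C-016F-11D2-945F-00C04fB984F9}). This is a sketch of the documented idea (mmc.exe launching gpme.msc against those GUIDs), not the exact ESCU search.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where Processes.process_name="mmc.exe" Processes.process="*gpme.msc*"
      (Processes.process="*31B2F340-016D-11D2-945F-00C04FB984F9*" OR Processes.process="*6AC1786C-016F-11D2-945F-00C04fB984F9*")
    by Processes.dest Processes.user Processes.process
| rename Processes.* as *
| convert ctime(firstTime) ctime(lastTime)
```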
            {
              "i": "10.2.112",
              "n": "Windows Defender ASR Rules Stacking",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies security events from Microsoft Defender, focusing on Exploit Guard and Attack Surface Reduction (ASR) features. It detects Event IDs 1121, 1126, 1131, and 1133 for blocked operations, and Event IDs 1122, 1125, 1132, and 1134 for audit logs. Event ID 1129 indicates user overrides, while Event ID 5007 signals configuration changes. This detection uses a lookup to correlate ASR rule GUIDs with descriptive names. Monitoring these events is crucial for identifying un…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Defender 1121, Windows Event Log Defender 1122, Windows Event Log Defender 1125, Windows Event Log Defender 1126, Windows Event Log Defender 1129, Windows Event Log Defender 1131, Windows Event Log Defender 1133, Windows Event Log Defender 1134, Windows Event Log Defender 5007",
              "q": "`ms_defender` EventCode IN (1121, 1122, 1125, 1126, 1129, 1131, 1132, 1133, 1134, 5007)\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY host Parent_Commandline, Process_Name,\n           Path, ID, EventCode\n      | lookup asr_rules ID OUTPUT ASR_Rule\n      | fillnull value=NULL\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | rename host as dest\n      | `windows_defender_asr_rules_stacking_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log Defender 1121, Windows Event Log Defender 1122, Windows Event Log Defender 1125, Windows Event Log Defender 1126, Windows Event Log Defender 1129, Windows Event Log Defender 1131, Windows Event Log Defender 1133, Windows Event Log Defender 1134, Windows Event Log Defender 5007 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are not expected with this analytic, since it is a hunting analytic. It is meant to show the use of ASR rules and how they can be used to detect malicious activity.",
              "refs": "https://asrgen.streamlit.app/, https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/attack-surface-reduction?view=o365-worldwide",
              "mitre": [
                "T1566.001",
                "T1566.002",
                "T1059"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Defender ASR Rules Stacking\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Defender 1121, Windows Event Log Defender 1122, Windows Event Log Defender 1125, Windows Event Log Defender 1126, Windows Event Log Defender 1129, Windows Event Log Defender 1131, Windows Event Log Defender 1133, Windows Event Log Defender 1134, Windows Event Log Defender 5007. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001, T1566.002, T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Defender ASR Rules Stacking\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are not expected with this analytic, since it is a hunting analytic. It is meant to show the use of ASR rules and how they can be used to detect malicious activity.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot security events from Microsoft Defender, focusing on Exploit Guard and Attack Surface Reduction (ASR) features — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
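When the "Windows Defender ASR Rules Stacking" hunt surfaces activity, Event ID 1129 (user overrides) is often the most interesting slice. A narrowed follow-up, reusing the `ms_defender` macro and `asr_rules` lookup from the documented search, might look like the sketch below; it simply stacks overrides by host and rule so repeat offenders stand out.

```spl
`ms_defender` EventCode=1129
| lookup asr_rules ID OUTPUT ASR_Rule
| stats count min(_time) as firstTime max(_time) as lastTime by host, ASR_Rule
| convert ctime(firstTime) ctime(lastTime)
| sort -count
```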
            {
              "i": "10.2.113",
              "n": "Windows Developer-Signed MSIX Package Installation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the installation of developer-signed MSIX packages that lack Microsoft Store signatures. All malicious MSIX packages observed in recent threat campaigns (including those from FIN7, Zloader/Storm-0569, and FakeBat/Storm-1113) were developer-signed rather than Microsoft Store signed. Microsoft Store apps have specific publisher IDs containing '8wekyb3d8bbwe' or 'cw5n1h2txyewy', while developer-signed packages lack these identifiers. This detection focuses on EventID 855 f…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log AppXDeployment-Server 855",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log AppXDeployment-Server 855 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate developer-signed applications that are not from the Microsoft Store will trigger this detection. Organizations should maintain a baseline of expected developer-signed applications in their environment and tune the detection accordingly. Common legitimate developer-signed applications include in-house developed applications and some third-party applications that are not distributed through the Microsoft Store.",
              "refs": "https://redcanary.com/blog/threat-intelligence/msix-installers/, https://learn.microsoft.com/en-us/windows/win32/appxpkg/troubleshooting, https://www.advancedinstaller.com/msix-installation-or-launching-errors-and-fixes.html, https://redcanary.com/blog/threat-detection/code-signing-certificates/",
              "mitre": [
                "T1553.005",
                "T1204.002"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Developer-Signed MSIX Package Installation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log AppXDeployment-Server 855. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1553.005, T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Developer-Signed MSIX Package Installation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate developer-signed applications that are not from the Microsoft Store will trigger this detection. Organizations should maintain a baseline of expected developer-signed applications in their environment and tune the detection accordingly. Common legitimate developer-signed applications include in-house developed applications and some third-party applications that are not distributed through the Microsoft Store.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the installation of developer-signed MSIX packages that lack Microsoft Store signatures — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
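A raw-event preview for "Windows Developer-Signed MSIX Package Installation" can filter EventID 855 for publisher IDs that are not Microsoft Store identifiers (the '8wekyb3d8bbwe' / 'cw5n1h2txyewy' suffixes noted above). Treat the specifics as assumptions: `index=wineventlog`, the AppXDeploymentServer source filter, and the `PackageFullName` field name all depend on how your add-on ingests and extracts this channel.

```spl
index=wineventlog source="*AppXDeploymentServer*" EventCode=855
| where NOT match(PackageFullName, "8wekyb3d8bbwe|cw5n1h2txyewy")
| stats count min(_time) as firstTime max(_time) as lastTime by host, PackageFullName
| convert ctime(firstTime) ctime(lastTime)
```

Results from a search like this are also a practical starting point for the developer-signed application baseline the tuning notes call for.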
            {
              "i": "10.2.114",
              "n": "Windows Drivers Loaded by Signature",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies all drivers being loaded on Windows systems using Sysmon EventCode 6 (Driver Load). It leverages fields such as driver path, signature status, and hash to detect potentially suspicious drivers. This activity is significant for a SOC as malicious drivers can be used to gain kernel-level access, bypass security controls, or persist in the environment. If confirmed malicious, this activity could allow an attacker to execute arbitrary code with high privileges, lead…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 6",
              "q": "`sysmon` EventCode=6\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY ImageLoaded dest dvc\n           process_hash process_path signature\n           signature_id user_id vendor_product\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_drivers_loaded_by_signature_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 6 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This analytic is meant to assist with identifying and hunting drivers loaded in the environment.",
              "refs": "https://redcanary.com/blog/tracking-driver-inventory-to-expose-rootkits/, https://attack.mitre.org/techniques/T1014/, https://www.fuzzysecurity.com/tutorials/28.html",
              "mitre": [
                "T1014",
                "T1068"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Drivers Loaded by Signature\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 6. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1014, T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Drivers Loaded by Signature\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This analytic is meant to assist with identifying and hunting drivers loaded in the environment.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot all drivers being loaded on Windows systems using Sysmon EventCode 6 (Driver Load) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.115",
              "n": "Windows File and Directory Permissions Remove Inheritance",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the removal of permission inheritance using ICACLS. This analytic identifies instances where ICACLS is used to remove permission inheritance from files or directories. The /inheritance:r flag, which strips inherited permissions while optionally preserving or altering explicit permissions, is monitored to detect changes that may restrict access or establish isolated permission configurations. Removing inheritance can be a legitimate administrative action but may als…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or administrative scripts may use this application. Filter as needed.",
              "refs": "https://www.splunk.com/en_us/blog/security/-applocker-rules-as-defense-evasion-complete-analysis.html",
              "mitre": [
                "T1222.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows File and Directory Permissions Remove Inheritance\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows File and Directory Permissions Remove Inheritance\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or administrative scripts may use this application. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the removal of permission inheritance using ICACLS — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.116",
              "n": "Windows Find Domain Organizational Units with GetDomainOU",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-DomainOU` cmdlet, a part of the PowerView toolkit used for Windows domain enumeration. It leverages PowerShell Script Block Logging (EventCode=4104) to identify this activity. Detecting `Get-DomainOU` usage is significant as adversaries may use it to gather information about organizational units within Active Directory, which can facilitate lateral movement or privilege escalation. If confirmed malicious, this activity could allow attacker…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage PowerSploit tools for legitimate reasons, filter as needed.",
              "refs": "https://powersploit.readthedocs.io/en/latest/Recon/Get-DomainOU/, https://attack.mitre.org/techniques/T1087/002/, https://book.hacktricks.xyz/windows-hardening/basic-powershell-for-pentesters/powerview",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Find Domain Organizational Units with GetDomainOU\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Find Domain Organizational Units with GetDomainOU\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage PowerSploit tools for legitimate reasons, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of the cmdlet, a part of the PowerView toolkit used for Windows domain enumeration — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.117",
              "n": "Windows Get Local Admin with FindLocalAdminAccess",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Find-LocalAdminAccess` cmdlet using PowerShell Script Block Logging (EventCode=4104). This cmdlet is part of PowerView, a toolkit for Windows domain enumeration. Identifying the use of `Find-LocalAdminAccess` is crucial as adversaries may use it to find machines where the current user has local administrator access, facilitating lateral movement or privilege escalation. If confirmed malicious, this activity could allow attackers to target and …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage PowerSploit tools for legitimate reasons, filter as needed.",
              "refs": "https://powersploit.readthedocs.io/en/latest/Recon/Find-LocalAdminAccess/, https://attack.mitre.org/techniques/T1087/002/, https://book.hacktricks.xyz/windows-hardening/basic-powershell-for-pentesters/powerview",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Get Local Admin with FindLocalAdminAccess\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Get Local Admin with FindLocalAdminAccess\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage PowerSploit tools for legitimate reasons, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of the cmdlet using PowerShell Script Block Logging (EventCode=4104) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.118",
              "n": "Windows Group Policy Object Created",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a new Group Policy Object (GPO) by leveraging Event IDs 5136 and 5137. This detection uses directory service change events to identify when a new GPO is created. Monitoring GPO creation is crucial as adversaries can exploit GPOs to escalate privileges or deploy malware across an Active Directory network. If confirmed malicious, this activity could allow attackers to control system configurations, deploy ransomware, or propagate malware, leading to w…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136, Windows Event Log Security 5137",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$User$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5136, Windows Event Log Security 5137 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Group Policy Objects are created as part of regular administrative operations, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1484/, https://attack.mitre.org/techniques/T1484/001, https://www.trustedsec.com/blog/weaponizing-group-policy-objects-access/, https://adsecurity.org/?p=2716, https://www.bleepingcomputer.com/news/security/lockbit-ransomware-now-encrypts-windows-domains-using-group-policies/, https://www.varonis.com/blog/group-policy-objects",
              "mitre": [
                "T1078.002",
                "T1484.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Group Policy Object Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136, Windows Event Log Security 5137. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.002, T1484.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Group Policy Object Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Group Policy Objects are created as part of regular administrative operations, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the creation of a new Group Policy Object (GPO) by leveraging Event 5136 and 5137 — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.119",
              "n": "Windows High File Deletion Frequency",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a high frequency of file deletions by monitoring Sysmon EventCodes 23 and 26 for specific file extensions. This detection leverages Sysmon logs to track deleted target filenames, process names, and process IDs. Such activity is significant as it often indicates ransomware behavior, where files are encrypted and the originals are deleted. If confirmed malicious, this activity could lead to extensive data loss and operational disruption, as ransomware can render c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 23, Sysmon EventID 26",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to ingest logs that include the deleted target file name, process name, and process ID from your endpoints. If you are using Sysmon, ensure you have at least version 2.0 of the Sysmon TA installed.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users may delete a large number of pictures or files in a folder, which could trigger this detection. Additionally, heavy usage of PowerBI and Outlook may also result in false positives.",
              "refs": "https://www.mandiant.com/resources/fin11-email-campaigns-precursor-for-ransomware-data-theft, https://blog.virustotal.com/2020/11/keep-your-friends-close-keep-ransomware.html, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1485"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows High File Deletion Frequency\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 23, Sysmon EventID 26. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows High File Deletion Frequency\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users may delete a large number of pictures or files in a folder, which could trigger this detection. Additionally, heavy usage of PowerBI and Outlook may also result in false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot a high frequency of file deletions by monitoring Sysmon EventCodes 23 and 26 for specific file extensions — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.120",
              "n": "Windows Impair Defense Disable Realtime Signature Delivery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable the Windows Defender real-time signature delivery feature. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path associated with Windows Defender signature updates. This activity is significant because disabling real-time signature delivery can prevent Windows Defender from receiving timely malware definitions, reducing its effectiveness. If confirmed maliciou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Disable Realtime Signature Delivery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Disable Realtime Signature Delivery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch modifications to the Windows registry that disable the Windows Defender real-time signature delivery feature — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.121",
              "n": "Windows Impair Defense Disable Win Defender Signature Retirement",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable Windows Defender Signature Retirement. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the DisableSignatureRetirement registry setting. This activity is significant because disabling signature retirement can prevent Windows Defender from removing outdated antivirus signatures, potentially reducing its effectiveness in detecting threats. If confirmed malicious, this action…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Disable Win Defender Signature Retirement\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Disable Win Defender Signature Retirement\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
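\n\nApproximate Detection Logic (Sketch)\n\nThe packaged ESCU correlation search remains the authoritative logic. The following is only a minimal, unofficial sketch of the underlying idea, useful for checking data availability before enabling the detection. It assumes an accelerated CIM Endpoint.Registry data model fed by Sysmon EventID 13; the wildcard path filter is illustrative, and the exact Defender registry values are defined in the packaged search.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*Windows Defender*\"\n  by Registry.dest Registry.user Registry.registry_value_name Registry.registry_value_data\n```\n\nHosts that appear here because of approved policy changes are candidates for the detection’s filter macro rather than reasons to disable the rule.",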
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch modifications to the Windows registry that disable Windows Defender Signature Retirement — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.122",
              "n": "Windows Large Number of Computer Service Tickets Requested",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a high volume of Kerberos service ticket requests, specifically more than 30, from a single source within a 5-minute window. It leverages Event ID 4769, which logs when a Kerberos service ticket is requested, focusing on requests with computer names as the Service Name. This behavior is significant as it may indicate malicious activities such as lateral movement, malware staging, or reconnaissance. If confirmed malicious, an attacker could gain unauthorized access …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4769",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$IpAddress$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4769 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "An single endpoint requesting a large number of kerberos service tickets is not common behavior. Possible false positive scenarios include but are not limited to vulnerability scanners, administration systems and missconfigured systems.",
              "refs": "https://thedfirreport.com/2023/01/23/sharefinder-how-threat-actors-discover-file-shares/, https://attack.mitre.org/techniques/T1135/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4769",
              "mitre": [
                "T1135",
                "T1078"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Large Number of Computer Service Tickets Requested\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4769. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1135, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Large Number of Computer Service Tickets Requested\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: An single endpoint requesting a large number of kerberos service tickets is not common behavior. Possible false positive scenarios include but are not limited to vulnerability scanners, administration systems and missconfigured systems.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
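\n\nApproximate Detection Logic (Sketch)\n\nThe packaged ESCU correlation search remains the authoritative logic. This is only a minimal, unofficial sketch for validating the Event ID 4769 feed and the volume threshold. It assumes XML-rendered Security logs in index=wineventlog; field names such as ServiceName and IpAddress differ in the classic rendering, so adjust the index, fields, and threshold to your environment.\n\n```spl\nindex=wineventlog EventCode=4769 ServiceName=\"*$\"\n| bin _time span=5m\n| stats count dc(ServiceName) as unique_computers by _time, IpAddress\n| where count > 30\n```\n\nThe trailing $ keeps only computer-account service names, matching the behavior this analytic targets. Scanners and administration systems that exceed the threshold belong in the lookup-based suppressions described above.",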
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch a high volume of Kerberos service ticket requests, specifically more than 30, from a single source within a 5-minute window — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.123",
              "n": "Windows Masquerading Explorer As Child Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where explorer.exe is spawned by unusual parent processes such as cmd.exe, powershell.exe, or regsvr32.exe. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and parent process relationships. This activity is significant because explorer.exe is typically initiated by userinit.exe, and deviations from this norm can indicate code injection or process masquerading attempts by malware like Qakbot. If confi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.qakbot",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Masquerading Explorer As Child Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Masquerading Explorer As Child Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
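\n\nApproximate Detection Logic (Sketch)\n\nThe packaged ESCU correlation search remains the authoritative logic. This is only a minimal, unofficial sketch of the parent-child relationship it looks for, useful for confirming that your EDR feed populates the data model. It assumes an accelerated CIM Endpoint.Processes data model; the parent process list comes from this use case’s description, so reconcile it with the packaged search if they diverge.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"explorer.exe\" Processes.parent_process_name IN (\"cmd.exe\", \"powershell.exe\", \"regsvr32.exe\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nLegitimate hits should be rare, since explorer.exe is normally started by userinit.exe.",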
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot instances where explorer.exe is spawned by unusual parent processes such as cmd.exe, powershell.exe, or regsvr32.exe — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.124",
              "n": "Windows Modify Registry MaxConnectionPerServer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a suspicious modification of the Windows registry setting for max connections per server. It detects changes to specific registry paths using data from the Endpoint.Registry datamodel. This activity is significant because altering this setting can be exploited by attackers to increase the number of concurrent connections to a remote server, potentially facilitating DDoS attacks or enabling more effective lateral movement within a compromised network. If confirme…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://asec.ahnlab.com/en/17692/, )%20is%20a%20remote%20access%20trojan, is%20as%20an%20information%20stealer.",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry MaxConnectionPerServer\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry MaxConnectionPerServer\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
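\n\nApproximate Detection Logic (Sketch)\n\nThe packaged ESCU correlation search remains the authoritative logic. This is only a minimal, unofficial sketch for confirming that registry telemetry reaches the data model. It assumes an accelerated CIM Endpoint.Registry data model fed by Sysmon EventID 13; the value names shown are the standard Internet Settings entries and are assumptions here, so confirm them against the packaged search.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name IN (\"MaxConnectionsPerServer\", \"MaxConnectionsPer1_0Server\")\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_data\n```\n\nAdministrator-driven changes surfaced here are the known false positive case for this detection and belong in the filter macro.",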
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot a suspicious modification of the Windows registry setting for max connections per server — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.125",
              "n": "Windows Modify Registry Utilize ProgIDs",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows Registry specifically targeting Programmatic Identifier associations to bypass User Account Control (UAC) Windows OS feature. ValleyRAT may create or alter registry entries to targetted progIDs like `.pwn` files with malicious processes, allowing it to execute harmful scripts or commands when these files are opened. By monitoring for unusual changes in registry keys linked to ProgIDs, this detection enables security analysts to identify…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.proofpoint.com/us/blog/threat-insight/chinese-malware-appears-earnest-across-cybercrime-threat-landscape, https://www.fortinet.com/blog/threat-research/valleyrat-campaign-targeting-chinese-speakers, https://v3ded.github.io/redteam/utilizing-programmatic-identifiers-progids-for-uac-bypasses",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Utilize ProgIDs\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Utilize ProgIDs\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
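\n\nApproximate Detection Logic (Sketch)\n\nThe packaged ESCU correlation search remains the authoritative logic. This is only a minimal, unofficial sketch for spotting ProgID handler changes, useful as a data-availability check. It assumes an accelerated CIM Endpoint.Registry data model fed by Sysmon EventID 13; the path fragments are illustrative stand-ins for handler keys beneath Software\\Classes and are not the packaged filter.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*Classes*\" Registry.registry_path=\"*shell*open*command*\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_data\n```\n\nA newly registered extension such as `.pwn` whose handler points at a script host or living-off-the-land binary is the pattern to look for.",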
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch modifications to the Windows Registry specifically targeting Programmatic Identifier associations to bypass User Account Control (UAC) Windows OS feature — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.126",
              "n": "Windows Modify Show Compress Color And Info Tip Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications to the Windows registry keys related to file compression color and information tips. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the \"ShowCompColor\" and \"ShowInfoTip\" values under the \"Microsoft\\\\Windows\\\\CurrentVersion\\\\Explorer\\\\Advanced\" path. This activity is significant as it was observed in the Hermetic Wiper malware, indicating potential malicious intent to alter file attributes and use…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://blog.talosintelligence.com/2022/02/threat-advisory-hermeticwiper.html",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Show Compress Color And Info Tip Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Show Compress Color And Info Tip Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
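\n\nApproximate Detection Logic (Sketch)\n\nThe packaged ESCU correlation search remains the authoritative logic. This is only a minimal, unofficial sketch built from the value names this use case describes, useful for confirming the registry feed. It assumes an accelerated CIM Endpoint.Registry data model fed by Sysmon EventID 13; verify the path scoping against the packaged search.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name IN (\"ShowCompColor\", \"ShowInfoTip\")\n  by Registry.dest Registry.user Registry.registry_value_name Registry.registry_value_data\n```\n\nThese values are rarely touched outside of user-profile provisioning, so results should be easy to triage.",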
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch suspicious modifications to the Windows registry keys related to file compression color and information tips — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.127",
              "n": "Windows MOF Event Triggered Execution via WMI",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of MOFComp.exe loading a MOF file, often triggered by cmd.exe or powershell.exe, or from unusual paths like User Profile directories. It leverages Endpoint Detection and Response (EDR) data, focusing on process names, parent processes, and command-line executions. This activity is significant as it may indicate an attacker using WMI for persistence or lateral movement. If confirmed malicious, it could allow the attacker to execute arbitrary code, main…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present from automation based applications (SCCM), filtering may be required. In addition, break the query out based on volume of usage. Filter process names or file paths.",
              "refs": "https://attack.mitre.org/techniques/T1546/003/, https://thedfirreport.com/2022/07/11/select-xmrig-from-sqlserver/, https://learn.microsoft.com/en-us/windows/win32/wmisdk/mofcomp, https://pentestlab.blog/2020/01/21/persistence-wmi-event-subscription/",
              "mitre": [
                "T1546.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MOF Event Triggered Execution via WMI\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MOF Event Triggered Execution via WMI\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present from automation based applications (SCCM), filtering may be required. In addition, break the query out based on volume of usage. Filter process names or file paths.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
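\n\nApproximate Detection Logic (Sketch)\n\nThe packaged ESCU correlation search remains the authoritative logic. This is only a minimal, unofficial sketch of the process relationship described above, useful for checking EDR data availability. It assumes an accelerated CIM Endpoint.Processes data model; the packaged logic also flags MOF files loaded from unusual paths such as user profile directories, which this sketch omits.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"mofcomp.exe\" Processes.parent_process_name IN (\"cmd.exe\", \"powershell.exe\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nExpect hits from SCCM and similar automation; those belong in the filter macro, not in a disabled detection.",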
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of MOFComp.exe loading a MOF file, often triggered by cmd.exe or powershell.exe, or from unusual paths like User Profile directories — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.128",
              "n": "Windows MSIExec DLLRegisterServer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of msiexec.exe with the /y switch parameter, which enables the loading of DLLRegisterServer. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process command-line arguments and parent-child process relationships. This activity is significant because it can indicate an attempt to register malicious DLLs, potentially leading to code execution or persistence on the system. If confirmed malicious, this could all…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This analytic will need to be tuned for your environment based on legitimate usage of msiexec.exe. Filter as needed.",
              "refs": "https://thedfirreport.com/2022/06/06/will-the-real-msiexec-please-stand-up-exploit-leads-to-data-exfiltration/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.007/T1218.007.md",
              "mitre": [
                "T1218.007"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MSIExec DLLRegisterServer\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MSIExec DLLRegisterServer\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This analytic will need to be tuned for your environment based on legitimate usage of msiexec.exe. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of msiexec.exe with the /y switch parameter, which enables the loading of DLLRegisterServer — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.129",
              "n": "Windows MSTSC RDP Commandline",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the mstsc.exe command-line, which is commonly used to initiate Remote Desktop Protocol (RDP) connections. This detection focuses on instances where mstsc.exe is executed with specific parameters that may indicate suspicious or unauthorized remote access attempts. Monitoring command-line arguments such as /v:<target> for direct connections or /admin for administrative sessions can help identify potential misuse or lateral movement within a network.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrator may remote desktop a spe",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-071a",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MSTSC RDP Commandline\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MSTSC RDP Commandline\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrator may remote desktop a spe\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the use of the mstsc.exe command-line, which is commonly used to initiate Remote Desktop Protocol (remote desktop) connections — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.130",
              "n": "Windows Net System Service Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the enumeration of Windows services using the net start command, which is a built-in utility that lists all running services on a system. Adversaries, system administrators, or automated tools may use this command to gain situational awareness of what services are active, identify potential security software, or discover opportunities for privilege escalation and lateral movement. The execution of net start is often associated with reconnaissance activity during th…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://cert.gov.ua/article/6284730",
              "mitre": [
                "T1007"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Net System Service Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Net System Service Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the enumeration of Windows services using the net start command, which is a built-in utility that lists all running services on a system — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.131",
              "n": "Windows Office Product Spawned Rundll32 With No DLL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects any Windows Office Product spawning `rundll32.exe` without a `.dll` file extension. This behavior is identified using Endpoint Detection and Response (EDR) telemetry, focusing on process and parent process relationships. This activity is significant as it is a known tactic of the IcedID malware family, which can lead to unauthorized code execution. If confirmed malicious, this could allow attackers to execute arbitrary code, potentially leading to data exfiltration…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited, but if any are present, filter as needed.",
              "refs": "https://www.joesandbox.com/analysis/395471/0/html, https://app.any.run/tasks/cef4b8ba-023c-4b3b-b2ef-6486a44f6ed9/, https://any.run/malware-trends/icedid",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Office Product Spawned Rundll32 With No DLL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Office Product Spawned Rundll32 With No DLL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited, but if any are present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch any Windows Office Product spawning without a file extension — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.132",
              "n": "Windows PowerShell Process With Malicious String",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of multiple offensive toolkits and commands through the process execution datamodel. This method captures commands given directly to powershell.exe, allowing for the identification of suspicious activities including several well-known tools used for credential theft, lateral movement, and persistence. If confirmed malicious, this could lead to unauthorized access, privilege escalation, and potential compromise of sensitive information within the envir…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4688, Sysmon EventID 1, CrowdStrike ProcessRollup2",
              "q": "| from datamodel:Endpoint.Processes | search dest=$dest|s$ process_name=$process_name$ \"*$match$*",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4688, Sysmon EventID 1, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. commands with overlap.",
              "refs": "https://attack.mitre.org/techniques/T1059/001/, https://github.com/PowerShellMafia/PowerSploit, https://github.com/PowerShellEmpire/, https://github.com/S3cur3Th1sSh1t/PowerSharpPack",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell Process With Malicious String\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4688, Sysmon EventID 1, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell Process With Malicious String\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. commands with overlap.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of multiple offensive toolkits and commands through the process execution a summary — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.133",
              "n": "Windows PowerShell Script Block With Malicious String",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of multiple offensive toolkits and commands by leveraging PowerShell Script Block Logging (EventCode=4104). This method captures and logs the full command sent to PowerShell, allowing for the identification of suspicious activities including several well-known tools used for credential theft, lateral movement, and persistence. If confirmed malicious, this could lead to unauthorized access, privilege escalation, and potential compromise of sensitive in…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The following analytic requires PowerShell operational logs to be imported. Modify the powershell macro as needed to match the sourcetype or add index. This analytic is specific to 4104, or PowerShell Script Block Logging.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. commands with overlap.",
              "refs": "https://attack.mitre.org/techniques/T1059/001/, https://github.com/PowerShellMafia/PowerSploit, https://github.com/PowerShellEmpire/, https://github.com/S3cur3Th1sSh1t/PowerSharpPack",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell Script Block With Malicious String\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell Script Block With Malicious String\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. commands with overlap.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of multiple offensive toolkits and commands by leveraging PowerShell Script Block Logging (EventCode=4104) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.134",
              "n": "Windows Process Injection Wermgr Child Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a suspicious instance of wermgr.exe spawning a child process unrelated to error or fault handling. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process relationships and command-line executions. This activity is significant as it can indicate Qakbot malware, which injects malicious code into wermgr.exe to evade detection and execute malicious actions. If confirmed malicious, this behavior could allow an attacker to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://twitter.com/pr0xylife/status/1585612370441031680?s=46&t=Dc3CJi4AnM-8rNoacLbScg",
              "mitre": [
                "T1055"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process Injection Wermgr Child Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process Injection Wermgr Child Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot a suspicious instance of wermgr.exe spawning a child process unrelated to error or fault handling — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.135",
              "n": "Windows Protocol Tunneling with Plink",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects the use of Plink (including renamed versions like pvhost.exe) for protocol tunneling, which may be used for egress or lateral movement within an organization. It identifies specific command-line options (-R, -L, -D, -l, -N, -P, -pw) commonly used for port forwarding and tunneling by analyzing process execution logs from Endpoint Detection and Response (EDR) agents. This activity is significant as it may indicate an attempt to bypass network security controls or establish un…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if the organization allows for SSH tunneling outbound or internally. Filter as needed.",
              "refs": "https://thedfirreport.com/2022/06/06/will-the-real-msiexec-please-stand-up-exploit-leads-to-data-exfiltration/, https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html, https://attack.mitre.org/techniques/T1572/, https://documentation.help/PuTTY/using-cmdline-portfwd.html#S3.8.3.5, https://media.defense.gov/2024/Jul/25/2003510137/-1/-1/0/Joint-CSA-North-Korea-Cyber-Espionage-Advance-Military-Nuclear-Programs.PDF, https://blog.talosintelligence.com/lazarus-three-rats/",
              "mitre": [
                "T1572",
                "T1021.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Protocol Tunneling with Plink\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1572, T1021.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Protocol Tunneling with Plink\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if the organization allows for SSH tunneling outbound or internally. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic detects the use of Plink (including renamed versions like pvhost.exe) for protocol tunneling, which may be used for egress or lateral movement within an organization — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.136",
              "n": "Windows Rapid Authentication On Multiple Hosts",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a source computer authenticating to 30 or more remote endpoints within a 5-minute timespan using Event ID 4624. This behavior is identified by analyzing Windows Event Logs for LogonType 3 events and counting unique target computers. Such activity is significant as it may indicate lateral movement or network share enumeration by an adversary. If confirmed malicious, this could lead to unauthorized access to multiple systems, potentially compromising sensitive data a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4624",
              "q": "# Shared SPL: intentional — see UC-10.2.103\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$host_targets$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4624 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Vulnerability scanners or system administration tools may also trigger this detection. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1135/, https://thedfirreport.com/2023/01/23/sharefinder-how-threat-actors-discover-file-shares/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4624",
              "mitre": [
                "T1003.002"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Rapid Authentication On Multiple Hosts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4624. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Rapid Authentication On Multiple Hosts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Vulnerability scanners or system administration tools may also trigger this detection. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch a source computer authenticating to 30 or more remote endpoints within a 5-minute timespan using Event ID 4624 — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.137",
              "n": "Windows RDP Bitmap Cache File Creation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the creation of Remote Desktop Protocol (RDP) bitmap cache files on a Windows system, typically located in the user’s profile under the Terminal Server Client cache directory. These files (*.bmc, cache*.bin) are generated when a user initiates an RDP session using the built-in mstsc.exe client. Their presence can indicate interactive remote access activity and may be useful in detecting lateral movement or unauthorized RDP usage. Monitoring this behavior is especially i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest logs that include the process name, TargetFilename, and ProcessID executions from your endpoints. If you are utilizing Sysmon, ensure you have at least version 2.0 of the Sysmon TA installed.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present, filter as needed or restrict to critical assets on the perimeter.",
              "refs": "https://medium.com/@bonguides25/how-to-clear-rdp-connections-history-in-windows-cf0ffb67f344, https://thelocalh0st.github.io/posts/rdp/",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows RDP Bitmap Cache File Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows RDP Bitmap Cache File Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present, filter as needed or restrict to critical assets on the perimeter.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the creation of Remote Desktop Protocol (remote desktop) bitmap cache files on a Windows system, typically located in the user’s profile under the Terminal Server Client cache directory — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.138",
              "n": "Windows RDP Connection Successful",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects successful Remote Desktop Protocol (RDP) connections by monitoring EventCode 1149 from the Windows TerminalServices RemoteConnectionManager Operational log. This detection is significant as successful RDP connections can indicate remote access to a system, which may be leveraged by attackers to control or exfiltrate data. If confirmed malicious, this activity could lead to unauthorized access, data theft, or further lateral movement within the network. Monitoring s…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log RemoteConnectionManager 1149",
              "q": "`remoteconnectionmanager` EventCode=1149\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY Computer, user_id\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | rename Computer as dest\n      | `windows_rdp_connection_successful_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log RemoteConnectionManager 1149 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present, filter as needed or restrict to critical assets on the perimeter.",
              "refs": "https://gist.github.com/MHaggis/138c6bf563bacbda4a2524f089773706, https://doublepulsar.com/rdp-hijacking-how-to-hijack-rds-and-remoteapp-sessions-transparently-to-move-through-an-da2a1e73a5f6",
              "mitre": [
                "T1563.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows RDP Connection Successful\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log RemoteConnectionManager 1149. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1563.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows RDP Connection Successful\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present, filter as needed or restrict to critical assets on the perimeter.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch successful Remote Desktop Protocol (remote desktop) connections by monitoring EventCode 1149 from the Windows TerminalServices RemoteConnectionManager Operational log — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.139",
              "n": "Windows Remote Create Service",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of a new service on a remote endpoint using sc.exe. It leverages data from Endpoint Detection and Response (EDR) agents, specifically monitoring for EventCode 7045, which indicates a new service creation. This activity is significant as it may indicate lateral movement or remote code execution attempts by an attacker. If confirmed malicious, this could allow the attacker to establish persistence, escalate privileges, or execute arbitrary code on the…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Note that false positives may occur, and filtering may be necessary, especially when it comes to remote service creation by administrators or software management utilities.",
              "refs": "https://attack.mitre.org/techniques/T1543/003/",
              "mitre": [
                "T1543.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Remote Create Service\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Remote Create Service\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Note that false positives may occur, and filtering may be necessary, especially when it comes to remote service creation by administrators or software management utilities.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the creation of a new service on a remote endpoint using sc.exe — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.140",
              "n": "Windows Replication Through Removable Media",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation or dropping of executable or script files in the root directory of a removable drive. It leverages data from the Endpoint.Filesystem datamodel, focusing on specific file types and their creation paths. This activity is significant as it may indicate an attempt to spread malware, such as ransomware, via removable media. If confirmed malicious, this behavior could lead to unauthorized code execution, lateral movement, or persistence within the network, p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the Filesystem responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may allow creation of script or exe in the paths specified. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1204/002/, https://www.fortinet.com/blog/threat-research/chaos-ransomware-variant-sides-with-russia",
              "mitre": [
                "T1091"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Replication Through Removable Media\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1091. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Replication Through Removable Media\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may allow creation of script or exe in the paths specified. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the creation or dropping of executable or script files in the root directory of a removable drive — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.141",
              "n": "Windows Scheduled Task Service Spawned Shell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when the Task Scheduler service (\"svchost.exe -k netsvcs -p -s Schedule\") spawns common command line, scripting, or shell execution binaries such as \"powershell.exe\" or \"cmd.exe\". This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and parent process relationships. This activity is significant as attackers often abuse the Task Scheduler for execution and persistence, blending in with legitimate Windows operations. If…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mandiant.com/resources/blog/tracking-evolution-gootloader-operations, https://nasbench.medium.com/a-deep-dive-into-windows-scheduled-tasks-and-the-processes-running-them-218d1eed4cce, https://attack.mitre.org/techniques/T1053/005/",
              "mitre": [
                "T1053.005",
                "T1059"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Scheduled Task Service Spawned Shell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005, T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Scheduled Task Service Spawned Shell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch when the Task Scheduler service (\"svchost.exe -k netsvcs -p -s Schedule\") spawns common command line, scripting, or shell execution binaries such as \"powershell.exe\" or \"cmd.exe\" — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.142",
              "n": "Windows Service Create RemComSvc",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of the RemComSvc service on a Windows endpoint, typically indicating lateral movement using RemCom.exe. It leverages Windows EventCode 7045 from the System event log, specifically looking for the \"RemCom Service\" name. This activity is significant as it often signifies unauthorized lateral movement within the network, which is a common tactic used by attackers to spread malware or gain further access. If confirmed malicious, this could lead to unauthor…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7045",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log System 7045 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed based on administrative activity.",
              "refs": "https://www.crowdstrike.com/blog/bears-midst-intrusion-democratic-national-committee/, https://github.com/kavika13/RemCom",
              "mitre": [
                "T1543.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Create RemComSvc\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7045. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Create RemComSvc\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed based on administrative activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
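\n\nUnderlying Detection Logic (Sketch)\n\nThe authoritative logic ships inside the ESCU Correlation Search; the following is only a minimal sketch of the kind of raw-event search it performs, assuming the ESCU `wineventlog_system` macro and Windows TA field names (`ServiceName`, `ImagePath`). Verify against the packaged detection before relying on it:\n\n```spl\n`wineventlog_system` EventCode=7045 ServiceName=\"RemComSvc\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest ServiceName ImagePath\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",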
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the creation of the RemComSvc service on a Windows endpoint, typically indicating lateral movement using RemCom.exe — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.143",
              "n": "Windows Service Create SliverC2",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a Windows service named \"Sliver\" with the description \"Sliver Implant,\" indicative of SliverC2 lateral movement using the PsExec module. It leverages Windows EventCode 7045 from the System Event log to identify this activity. This behavior is significant as it may indicate an adversary's attempt to establish persistence or execute commands remotely. If confirmed malicious, this activity could allow attackers to maintain control over the compromised …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7045",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log System 7045 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited, but if another service out there is named Sliver, filtering may be needed.",
              "refs": "https://github.com/BishopFox/sliver/blob/71f94928bf36c1557ea5fbeffa161b71116f56b2/client/command/exec/psexec.go#LL61C5-L61C16, https://www.microsoft.com/en-us/security/blog/2022/08/24/looking-for-the-sliver-lining-hunting-for-emerging-command-and-control-frameworks/, https://regex101.com/r/DWkkXm/1",
              "mitre": [
                "T1569.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Create SliverC2\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7045. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1569.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Create SliverC2\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited, but if another service out there is named Sliver, filtering may be needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
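\n\nUnderlying Detection Logic (Sketch)\n\nThe shipped logic lives in the ESCU Correlation Search; this minimal sketch only illustrates the raw-event pattern, assuming the ESCU `wineventlog_system` macro and Windows TA field names. The packaged search may also key on the \"Sliver Implant\" service description, so treat this as illustrative:\n\n```spl\n`wineventlog_system` EventCode=7045 ServiceName=\"Sliver\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest ServiceName ImagePath\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",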
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the creation of a Windows service named \"Sliver\" with the description \"Sliver Implant,\" indicative of SliverC2 lateral movement using the PsExec module — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.144",
              "n": "Windows Service Created with Suspicious Service Name",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a Windows Service with a known suspicious or malicious name using Windows Event ID 7045. It leverages logs from the `wineventlog_system` to identify these services installations. This activity is significant as adversaries, including those deploying Clop ransomware, often create malicious services for lateral movement, remote code execution, persistence, and execution. If confirmed malicious, this could allow attackers to maintain persistence, execu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7045",
              "q": "`wineventlog_system` EventCode=7045 ServiceName=\"$object_name$\" dest=\"$dest$\"",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log System 7045 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may install services with uncommon services paths.",
              "refs": "https://attack.mitre.org/techniques/T1569/002/, https://github.com/BishopFox/sliver/blob/71f94928bf36c1557ea5fbeffa161b71116f56b2/client/command/exec/psexec.go#LL61C5-L61C16, https://www.microsoft.com/en-us/security/blog/2022/08/24/looking-for-the-sliver-lining-hunting-for-emerging-command-and-control-frameworks/, https://github.com/mthcht/awesome-lists/blob/main/Lists/suspicious_windows_services_names_list.csv",
              "mitre": [
                "T1569.002"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Created with Suspicious Service Name\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7045. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1569.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Created with Suspicious Service Name\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may install services with uncommon services paths.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
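\n\nUnderlying Detection Logic (Sketch)\n\nThe packaged Correlation Search compares new service names against a list of known-suspicious names (see the mthcht CSV in the references). Below is a minimal sketch of that pattern, using a hypothetical lookup named `suspicious_service_names` that you would populate yourself from such a list; field names assume the Windows TA:\n\n```spl\n`wineventlog_system` EventCode=7045\n| lookup suspicious_service_names service_name AS ServiceName OUTPUT service_name AS matched_name\n| where isnotnull(matched_name)\n| stats count min(_time) as firstTime max(_time) as lastTime by dest ServiceName ImagePath\n```",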
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the creation of a Windows Service with a known suspicious or malicious name using Windows Event ID 7045 — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.145",
              "n": "Windows Service Created with Suspicious Service Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a Windows Service with a binary path located in uncommon directories, using Windows Event ID 7045. It leverages logs from the `wineventlog_system` to identify services installed outside typical system directories. This activity is significant as adversaries, including those deploying Clop ransomware, often create malicious services for lateral movement, remote code execution, persistence, and execution. If confirmed malicious, this could allow attac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7045",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the Service name, Service File Name Service Start type, and Service Type from your endpoints.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may install services with uncommon services paths.",
              "refs": "https://www.mandiant.com/resources/fin11-email-campaigns-precursor-for-ransomware-data-theft, https://blog.virustotal.com/2020/11/keep-your-friends-close-keep-ransomware.html",
              "mitre": [
                "T1569.002"
              ],
              "dtype": "service",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Created with Suspicious Service Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to service entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7045. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1569.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Created with Suspicious Service Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may install services with uncommon services paths.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
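\n\nUnderlying Detection Logic (Sketch)\n\nThe shipped detection checks the service binary path against a list of uncommon directories; the directory set below is an illustrative subset only, and field names assume the Windows TA. Verify against the packaged Correlation Search:\n\n```spl\n`wineventlog_system` EventCode=7045 ImagePath IN (\"*\\\\Windows\\\\Temp\\\\*\", \"*\\\\Users\\\\Public\\\\*\", \"*\\\\PerfLogs\\\\*\", \"*\\\\AppData\\\\Local\\\\Temp\\\\*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by dest ServiceName ImagePath\n```",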
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the creation of a Windows Service with a binary path located in uncommon directories, using Windows Event ID 7045 — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.146",
              "n": "Windows Service Creation on Remote Endpoint",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of a Windows Service on a remote endpoint using `sc.exe`. It detects this activity by analyzing process execution logs from Endpoint Detection and Response (EDR) agents, focusing on command-line arguments that include remote paths and service creation commands. This behavior is significant because adversaries often exploit the Service Control Manager for lateral movement and remote code execution. If confirmed malicious, this activity could allow at…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may create Windows Services on remote systems, but this activity is usually limited to a small set of hosts or users.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/services/service-control-manager, https://learn.microsoft.com/en-us/windows/win32/services/controlling-a-service-using-sc, https://attack.mitre.org/techniques/T1543/003/",
              "mitre": [
                "T1543.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Creation on Remote Endpoint\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Creation on Remote Endpoint\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may create Windows Services on remote systems, but this activity is usually limited to a small set of hosts or users.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
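\n\nUnderlying Detection Logic (Sketch)\n\nThe authoritative search ships in the ESCU Correlation Search; this sketch assumes the CIM Endpoint data model and the standard ESCU macros (`security_content_summariesonly`, `drop_dm_object_name`). The doubled-backslash wildcard approximates the check for a remote UNC path in the sc.exe command line:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=\"sc.exe\" Processes.process=\"*\\\\\\\\*\" Processes.process=\"*create*\" by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",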
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the creation of a Windows Service on a remote endpoint using — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.147",
              "n": "Windows Service Execution RemCom",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of RemCom.exe, an open-source alternative to PsExec, used for lateral movement and remote command execution. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, original file names, and command-line arguments. This activity is significant as it indicates potential lateral movement within the network. If confirmed malicious, this could allow an attacker to execute commands remotely, potentially leading to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on Administrative use. Filter as needed.",
              "refs": "https://www.crowdstrike.com/blog/bears-midst-intrusion-democratic-national-committee/, https://github.com/kavika13/RemCom",
              "mitre": [
                "T1569.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Execution RemCom\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1569.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Execution RemCom\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on Administrative use. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
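\n\nUnderlying Detection Logic (Sketch)\n\nA minimal sketch of the EDR-based pattern, assuming the CIM Endpoint data model and standard ESCU macros; `original_file_name` availability depends on your TA and CIM version, so verify against the packaged Correlation Search:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where (Processes.process_name=\"remcom.exe\" OR Processes.original_file_name=\"RemCom.exe\") by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",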
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the execution of RemCom.exe, an open-source alternative to PsExec, used for lateral movement and remote command execution — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.148",
              "n": "Windows Service Initiation on Remote Endpoint",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `sc.exe` with command-line arguments used to start a Windows Service on a remote endpoint. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant because adversaries may exploit the Service Control Manager for lateral movement and remote code execution. If confirmed malicious, this could allow attackers to execute arbitrary code on remote systems, pote…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may start Windows Services on remote systems, but this activity is usually limited to a small set of hosts or users.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/services/controlling-a-service-using-sc, https://attack.mitre.org/techniques/T1543/003/",
              "mitre": [
                "T1543.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Initiation on Remote Endpoint\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Initiation on Remote Endpoint\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may start Windows Services on remote systems, but this activity is usually limited to a small set of hosts or users.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
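\n\nUnderlying Detection Logic (Sketch)\n\nAs with the service-creation variant, this sketch assumes the CIM Endpoint data model and standard ESCU macros, with the doubled-backslash wildcard approximating the remote UNC path check; the shipped Correlation Search remains authoritative:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=\"sc.exe\" Processes.process=\"*\\\\\\\\*\" Processes.process=\"*start*\" by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",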
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the execution of with command-line arguments used to start a Windows Service on a remote endpoint — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.149",
              "n": "Windows Set Account Password Policy To Unlimited Via Net",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of net.exe to update user account policies to set passwords as non-expiring. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving \"/maxpwage:unlimited\" or \"/maxpwage:49710\", which achieve a similar outcome theoretically. This activity is significant as it can indicate an attempt to maintain persistence, escalate privileges, evade defenses, or facilitate lateral movement. If confirmed malicious, t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This behavior is not commonly seen in production environment and not advisable, filter as needed.",
              "refs": "https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/, https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/net-commands-on-operating-systems",
              "mitre": [
                "T1489"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Set Account Password Policy To Unlimited Via Net\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Set Account Password Policy To Unlimited Via Net\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This behavior is not commonly seen in production environment and not advisable, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
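              "_sketch": "Illustrative sketch of the underlying detection pattern, not the packaged ESCU search (view that in Content Management). Assumes the Endpoint.Processes data model is populated by your endpoint TA; adjust macro and field names to your deployment.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name IN (\"net.exe\", \"net1.exe\") Processes.process=\"*maxpwage*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",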
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the use of net.exe to update user account policies to set passwords as non-expiring — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.150",
              "n": "Windows SIP WinVerifyTrust Failed Trust Validation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects failed trust validation attempts using Windows Event Log - CAPI2 (CryptoAPI 2). It specifically triggers on EventID 81, which indicates that \"The digital signature of the object did not verify.\" This detection leverages the CAPI2 Operational log to identify instances where digital signatures fail to validate. Monitoring this activity is crucial as it can indicate attempts to execute untrusted or potentially malicious binaries. If confirmed malicious, this activity …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log CAPI2 81",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log CAPI2 81 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present in some instances of legitimate binaries with invalid signatures. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1553/003/, https://specterops.io/wp-content/uploads/sites/3/2022/06/SpecterOps_Subverting_Trust_in_Windows.pdf, https://github.com/gtworek/PSBits/tree/master/SIP, https://github.com/mattifestation/PoCSubjectInterfacePackage, https://pentestlab.blog/2017/11/06/hijacking-digital-signatures/",
              "mitre": [
                "T1553.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SIP WinVerifyTrust Failed Trust Validation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log CAPI2 81. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1553.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SIP WinVerifyTrust Failed Trust Validation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present in some instances of legitimate binaries with invalid signatures. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
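              "_sketch": "Illustrative raw-event sketch for verifying CAPI2 data before enabling the detection. The CAPI2 Operational channel is disabled by default in Windows, so enable it and add a Windows event log input first. The index and source names below are assumptions; adjust to your Windows TA configuration.\n\n```spl\nindex=wineventlog source=\"XmlWinEventLog:Microsoft-Windows-CAPI2/Operational\" EventCode=81\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",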
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch failed trust validation attempts using Windows Event Log - CAPI2 (CryptoAPI 2) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.151",
              "n": "Windows Snake Malware Registry Modification wav OpenWithProgIds",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies modifications to the registry path .wav\\\\OpenWithProgIds, associated with the Snake Malware campaign. It leverages data from the Endpoint.Registry datamodel to detect changes in this specific registry location. This activity is significant because Snake's WerFault.exe uses this registry path to decrypt an encrypted blob containing critical components like the AES key, IV, and paths for its kernel driver and loader. If confirmed malicious, this could allow the at…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present and will require tuning based on program Ids in large organizations.",
              "refs": "https://media.defense.gov/2023/May/09/2003218554/-1/-1/0/JOINT_CSA_HUNTING_RU_INTEL_SNAKE_MALWARE_20230509.PDF",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Snake Malware Registry Modification wav OpenWithProgIds\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Snake Malware Registry Modification wav OpenWithProgIds\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present and will require tuning based on program Ids in large organizations.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
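              "_sketch": "Illustrative sketch of the registry-path logic this detection applies, not the packaged ESCU search. Assumes Sysmon EventID 13 is feeding the Endpoint.Registry data model node; adjust macros and paths to your deployment.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\.wav\\\\OpenWithProgIds\\\\*\"\n  by Registry.dest Registry.registry_path Registry.registry_value_name Registry.process_guid\n| `drop_dm_object_name(Registry)`\n```",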
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot modifications to the registry path.wav\\\\OpenWithProgIds, associated with the Snake Malware campaign — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.152",
              "n": "Windows Special Privileged Logon On Multiple Hosts",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a user authenticating with special privileges on 30 or more remote endpoints within a 5-minute window. It leverages Event ID 4672 from Windows Security logs to identify this behavior. This activity is significant as it may indicate lateral movement or remote code execution by an adversary. If confirmed malicious, the attacker could gain extensive control over the network, potentially leading to privilege escalation, data exfiltration, or further compromise of the e…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4672",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4672 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Vulnerability scanners or system administration tools may also trigger this detection. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4672, https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn319113(v=ws.11), https://thedfirreport.com/2023/01/23/sharefinder-how-threat-actors-discover-file-shares/, https://attack.mitre.org/tactics/TA0008/",
              "mitre": [
                "T1087",
                "T1021.002",
                "T1135"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Special Privileged Logon On Multiple Hosts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4672. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087, T1021.002, T1135. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Special Privileged Logon On Multiple Hosts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Vulnerability scanners or system administration tools may also trigger this detection. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
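              "_sketch": "Illustrative sketch of the thresholding idea behind this detection, not the packaged ESCU search. Assumes the `wineventlog_security` macro resolves to your Security event log data and that `dest`/`user` are CIM-mapped by the Windows TA; tune the 30-host / 5-minute threshold to your environment.\n\n```spl\n`wineventlog_security` EventCode=4672\n| bin _time span=5m\n| stats dc(dest) as unique_hosts values(dest) as hosts by _time, user\n| where unique_hosts >= 30\n```",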
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch a user authenticating with special privileges on 30 or more remote endpoints within a 5-minute window — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.153",
              "n": "Windows SQL Server xp_cmdshell Config Change",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies when the xp_cmdshell configuration is modified in SQL Server. The xp_cmdshell extended stored procedure allows execution of operating system commands and programs from SQL Server, making it a high-risk feature commonly abused by attackers for privilege escalation and lateral movement.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Application 15457",
              "q": "`wineventlog_application` EventCode=15457 host=\"$dest$\" | rex field=EventData_Xml \"<Data>(?<config_name>[^<]+)</Data><Data>(?<old_value>[^<]+)</Data><Data>(?<new_value>[^<]+)</Data>\" | stats count values(config_name) as \"Changed Settings\" values(new_value) as \"New Values\" by _time dest",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to entity entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Application 15457 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Database administrators may legitimately enable xp_cmdshell for maintenance tasks, such as database maintenance scripts requiring OS-level operations, legacy applications, or automated system management tasks; however, this feature should generally remain disabled in production environments due to security risks. To reduce false positives, document when xp_cmdshell is required, monitor for unauthorized changes, create change control procedures for xp_cmdshell modifications, and consider alerting on the enabled state rather than configuration changes if preferred.",
              "refs": "https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/xp-cmdshell-transact-sql, https://attack.mitre.org/techniques/T1505/003/, https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/xp-cmdshell-server-configuration-option",
              "mitre": [
                "T1505.001"
              ],
              "dtype": "other",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SQL Server xp_cmdshell Config Change\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to entity entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Application 15457. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SQL Server xp_cmdshell Config Change\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Database administrators may legitimately enable xp_cmdshell for maintenance tasks, such as database maintenance scripts requiring OS-level operations, legacy applications, or automated system management tasks; however, this feature should generally remain disabled in production environments due to security risks. To reduce false positives, document when xp_cmdshell is required, monitor for unauthorized changes, create change control procedures for xp_cmdshell modifications, and consider alerting on the enabled state rather than configuration changes if preferred.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
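              "_sketch": "Illustrative variant for the tuning note above: alert on xp_cmdshell being enabled rather than on any configuration change. Reuses the rex from the documented search; the `wineventlog_application` macro and the Data-element order are assumptions to validate against your events.\n\n```spl\n`wineventlog_application` EventCode=15457\n| rex field=EventData_Xml \"<Data>(?<config_name>[^<]+)</Data><Data>(?<old_value>[^<]+)</Data><Data>(?<new_value>[^<]+)</Data>\"\n| search config_name=\"xp_cmdshell\" new_value=1\n| table _time host config_name old_value new_value\n```",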
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot when the xp_cmdshell configuration is modified in SQL Server — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.154",
              "n": "Windows Steal or Forge Kerberos Tickets Klist",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of the Windows OS tool klist.exe, often used by post-exploitation tools like winpeas. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and parent process details. Monitoring klist.exe is significant as it can indicate attempts to list or gather cached Kerberos tickets, which are crucial for lateral movement or privilege escalation. If confirmed malicious, this activity could enable attackers to mo…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name=\"klist.exe\"\n        OR\n        Processes.original_file_name = \"klist.exe\" Processes.parent_process_name IN (\"cmd.exe\", \"powershell*\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_steal_or_forge_kerberos_tickets_klist_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1558"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Steal or Forge Kerberos Tickets Klist\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Steal or Forge Kerberos Tickets Klist\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the execution of the Windows OS tool klist.exe, often used by post-exploitation tools like winpeas — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.155",
              "n": "Windows UAC Bypass Suspicious Child Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when an executable known for User Account Control (UAC) bypass exploitation spawns a child process in a user-controlled location or a command shell executable (e.g., cmd.exe, powershell.exe). This detection leverages Sysmon EventID 1 data, focusing on high or system integrity level processes with specific parent-child process relationships. This activity is significant as it may indicate an attacker has successfully used a UAC bypass exploit to escalate privileges.…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Target environment must ingest sysmon data, specifically Event ID 1 with process integrity level data.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Including Werfault.exe may cause some unintended false positives related to normal application faulting, but is used in a number of UAC bypass techniques.",
              "refs": "https://attack.mitre.org/techniques/T1548/002/, https://atomicredteam.io/defense-evasion/T1548.002/, https://hadess.io/user-account-control-uncontrol-mastering-the-art-of-bypassing-windows-uac/, https://enigma0x3.net/2016/08/15/fileless-uac-bypass-using-eventvwr-exe-and-registry-hijacking/",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows UAC Bypass Suspicious Child Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows UAC Bypass Suspicious Child Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Including Werfault.exe may cause some unintended false positives related to normal application faulting, but is used in a number of UAC bypass techniques.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
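              "_sketch": "Illustrative raw-Sysmon sketch of the parent/child and integrity-level pattern this detection targets, not the packaged ESCU search (which covers a longer list of auto-elevating parents). The `sysmon` macro and the three example parents are assumptions; adjust to your deployment.\n\n```spl\n`sysmon` EventCode=1 IntegrityLevel IN (\"High\", \"System\")\n  ParentImage IN (\"*\\\\fodhelper.exe\", \"*\\\\eventvwr.exe\", \"*\\\\sdclt.exe\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host, ParentImage, Image, CommandLine, User\n```",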
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch when an executable known for User Account Control (UAC) bypass exploitation spawns a child process in a user-controlled location or a command shell executable (cmd.exe, powershell.exe) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.156",
              "n": "Windows Unsigned DLL Side-Loading",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of potentially malicious unsigned DLLs in the c:\\windows\\system32 or c:\\windows\\syswow64 folders. It leverages Sysmon EventCode 7 logs to identify unsigned DLLs with unavailable signatures loaded in these critical directories. This activity is significant as it may indicate a DLL hijacking attempt, a technique used by attackers to gain unauthorized access and execute malicious code. If confirmed malicious, this could lead to privilege escalation, allow…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and imageloaded executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible some Administrative utilities will load dismcore.dll outside of normal system paths, filter as needed.",
              "refs": "https://asec.ahnlab.com/en/17692/, )%20is%20a%20remote%20access%20trojan, is%20as%20an%20information%20stealer.",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Unsigned DLL Side-Loading\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Unsigned DLL Side-Loading\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible some Administrative utilities will load dismcore.dll outside of normal system paths, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the creation of potentially malicious unsigned DLLs in the c:\\windows\\system32 or c:\\windows\\syswow64 folders — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.157",
              "n": "Windows Unsigned DLL Side-Loading In Same Process Path",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies unsigned DLLs loaded through DLL side-loading with same file path with the process loaded the DLL, a technique observed in DarkGate malware. This detection monitors DLL loading, verifies signatures, and flags unsigned DLLs. Suspicious file paths and known executable associations are checked. Detecting such suspicious DLLs is crucial in preventing privilege escalation attacks and other potential security breaches. Regular security assessments, thorough monitoring, and im…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and imageloaded executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.splunk.com/en_us/blog/security/enter-the-gates-an-analysis-of-the-darkgate-autoit-loader.html, https://www.trendmicro.com/en_us/research/23/b/investigating-the-plugx-trojan-disguised-as-a-legitimate-windows.html",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Unsigned DLL Side-Loading In Same Process Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Unsigned DLL Side-Loading In Same Process Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot unsigned DLLs loaded through DLL side-loading with same file path with the process loaded the DLL, a technique observed in DarkGate malware — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.158",
              "n": "Windows Unsigned MS DLL Side-Loading",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential DLL side-loading instances involving unsigned DLLs mimicking Microsoft signatures. It detects this activity by analyzing Sysmon logs for Event Code 7, where both the `Image` and `ImageLoaded` paths do not match system directories like `system32`, `syswow64`, and `programfiles`. This behavior is significant as adversaries often exploit DLL side-loading to execute malicious code via legitimate processes. If confirmed malicious, this activity could allow …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 7 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible if legitimate processes are loading vcruntime140.dll from non-standard directories. It is recommended to investigate the context of the process loading vcruntime140.dll to determine if it is malicious or not. Modify the search to include additional known good paths for vcruntime140.dll to reduce false positives.",
              "refs": "https://www.mandiant.com/resources/blog/apt29-wineloader-german-political-parties, https://www.zscaler.com/blogs/security-research/european-diplomats-targeted-spikedwine-wineloader",
              "mitre": [
                "T1574.001",
                "T1547"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Unsigned MS DLL Side-Loading\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001, T1547. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Unsigned MS DLL Side-Loading\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible if legitimate processes are loading vcruntime140.dll from non-standard directories. It is recommended to investigate the context of the process loading vcruntime140.dll to determine if it is malicious or not. Modify the search to include additional known good paths for vcruntime140.dll to reduce false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot potential DLL side-loading instances involving unsigned DLLs mimicking Microsoft signatures — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.159",
              "n": "Windows User Deletion Via Net",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of net.exe or net1.exe command-line to delete a user account on a system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and command-line execution logs. This activity is significant as it may indicate an attempt to impair user accounts or cover tracks during lateral movement. If confirmed malicious, this could lead to unauthorized access removal, disruption of legitimate user activities, or concealment of adversari…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "System administrators or scripts may delete user accounts via this technique. Filter as needed.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1531"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows User Deletion Via Net\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1531. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows User Deletion Via Net\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: System administrators or scripts may delete user accounts via this technique. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the use of net.exe or net1.exe command-line to delete a user account on a system — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.160",
              "n": "Windows WinLogon with Public Network Connection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances of Winlogon.exe, a critical Windows process, connecting to public IP addresses. This behavior is identified using Endpoint Detection and Response (EDR) telemetry, focusing on network connections made by Winlogon.exe. Under normal circumstances, Winlogon.exe should not connect to public IPs, and such activity may indicate a compromise, such as the BlackLotus bootkit attack. This detection is significant as it highlights potential system integrity breaches.…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 3",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name IN (winlogon.exe) Processes.process!=unknown\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | join max=1 process_id [\n    | tstats `security_content_summariesonly` count FROM datamodel=Network_Traffic.All_Traffic\n      WHERE All_Traffic.dest_port != 0 NOT (All_Traffic.dest IN (127.0.0.1,10.0.0.0/8,172.16.0.0/12, 192.168.0.0/16, 0:0:0:0:0:0:0:1))\n      BY All_Traffic.process_id All_Traffic.dest All_Traffic.dest_port\n    | `drop_dm_object_name(All_Traffic)`\n    | rename dest as publicIp ]\n    | table dest parent_process_name process_name process_path process process_id dest_port publicIp\n    | `windows_winlogon_with_public_network_connection_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1 AND Sysmon EventID 3 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present and filtering will be required. Legitimate IPs will be present and need to be filtered.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2023/04/11/guidance-for-investigating-attacks-using-cve-2022-21894-the-blacklotus-campaign/",
              "mitre": [
                "T1542.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows WinLogon with Public Network Connection\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1542.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows WinLogon with Public Network Connection\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present and filtering will be required. Legitimate IPs will be present and need to be filtered.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch instances of Winlogon.exe, a critical Windows process, connecting to public IP addresses — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.161",
              "n": "Wmic Group Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of `wmic.exe` to enumerate local groups on an endpoint. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs, including command-line details. Monitoring this activity is significant as it can indicate reconnaissance efforts by an attacker to understand group memberships, which could be a precursor to privilege escalation or lateral movement. If confirmed malicious, this activity could allow a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/001/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1069.001/T1069.001.md",
              "mitre": [
                "T1069.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Wmic Group Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Wmic Group Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the use of to enumerate local groups on an endpoint — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.162",
              "n": "Wmiprvse LOLBAS Execution Process Spawn",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects `wmiprvse.exe` spawning a LOLBAS execution process. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where `wmiprvse.exe` is the parent process and the child process is a known LOLBAS binary. This activity is significant as it may indicate lateral movement or remote code execution by an adversary abusing Windows Management Instrumentation (WMI). If confirmed malicious, this behavior could allow attackers to ex…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may trigger this behavior, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1047/, https://www.ired.team/offensive-security/lateral-movement/t1047-wmi-for-lateral-movement, https://lolbas-project.github.io/",
              "mitre": [
                "T1047"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Wmiprvse LOLBAS Execution Process Spawn\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Wmiprvse LOLBAS Execution Process Spawn\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may trigger this behavior, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch spawning a LOLBAS execution process — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.163",
              "n": "Wsmprovhost LOLBAS Execution Process Spawn",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies `Wsmprovhost.exe` spawning a LOLBAS execution process. It leverages Endpoint Detection and Response (EDR) data to detect when `Wsmprovhost.exe` spawns child processes that are known LOLBAS (Living Off the Land Binaries and Scripts) executables. This activity is significant because it may indicate an adversary using Windows Remote Management (WinRM) to execute code on remote endpoints, a common technique for lateral movement. If confirmed malicious, this could al…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may trigger this behavior, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1021/006/, https://lolbas-project.github.io/, https://pentestlab.blog/2018/05/15/lateral-movement-winrm/",
              "mitre": [
                "T1021.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Wsmprovhost LOLBAS Execution Process Spawn\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Wsmprovhost LOLBAS Execution Process Spawn\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may trigger this behavior, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
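\n\nOptional raw-data spot check for this behavior (a hedged sketch, not the ESCU detection logic; wsmprovhost.exe is the WinRM host process, and the index and sourcetype below are assumptions to adjust to your Sysmon ingestion):\n\n```spl\nindex=* sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=1 ParentImage=\"*\\\\wsmprovhost.exe\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, CommandLine\n```",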
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot spawning a LOLBAS execution process — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.164",
              "n": "XMRIG Driver Loaded",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the installation of the XMRIG coinminer driver on a system. It identifies the loading of the `WinRing0x64.sys` driver, commonly associated with XMRIG, by analyzing Sysmon EventCode 6 logs for specific signatures and image loads. This activity is significant because XMRIG is an open-source CPU miner frequently exploited by adversaries to mine cryptocurrency illicitly. If confirmed malicious, this activity could lead to unauthorized resource consumption, degraded sys…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 6",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the driver loaded and Signature from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited.",
              "refs": "https://www.trendmicro.com/vinfo/hk/threat-encyclopedia/malware/trojan.ps1.powtran.a/",
              "mitre": [
                "T1543.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"XMRIG Driver Loaded\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 6. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"XMRIG Driver Loaded\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
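\n\nOptional raw-data spot check for the driver load itself (a hedged sketch, not the ESCU detection logic; the index and sourcetype are assumptions to adjust to your Sysmon ingestion):\n\n```spl\nindex=* sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=6 ImageLoaded=\"*WinRing0x64.sys\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host, ImageLoaded, Signature\n```",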
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch the installation of the XMRIG coinminer driver on a system — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.165",
              "n": "Cisco Network Interface Modifications",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects the creation or modification of network interfaces on Cisco devices, which could indicate an attacker establishing persistence or preparing for lateral movement. After gaining initial access to network devices, threat actors like Static Tundra often create new interfaces (particularly loopback interfaces) to establish covert communication channels or maintain persistence. This detection specifically looks for the configuration of new interfaces, interface state changes, and…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco IOS Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to command entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco IOS Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate network interface configuration changes may trigger this detection during routine network maintenance or initial device setup. Network administrators often need to create or modify interfaces as part of normal operations. To reduce false positives, consider implementing a baseline of expected administrative activities, including approved administrative usernames, typical times for interface configuration changes, and scheduled maintenance windows. You may also want to create a lookup table of approved interface naming conventions and filter out alerts for standard interface configurations.",
              "refs": "https://blog.talosintelligence.com/static-tundra/, https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180328-smi2, https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/interface/command/ir-cr-book/ir-i1.html#wp1389942834",
              "mitre": [
                "T1556",
                "T1021",
                "T1133"
              ],
              "dtype": "command",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Network Interface Modifications\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to command entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco IOS Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556, T1021, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Network Interface Modifications\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate network interface configuration changes may trigger this detection during routine network maintenance or initial device setup. Network administrators often need to create or modify interfaces as part of normal operations. To reduce false positives, consider implementing a baseline of expected administrative activities, including approved administrative usernames, typical times for interface configuration changes, and scheduled maintenance windows. You may also want to create a lookup table of approved interface naming conventions and filter out alerts for standard interface configurations.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
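\n\nOptional raw-data spot check for interface configuration activity (a hedged sketch, not the ESCU detection logic; the sourcetype and syslog message patterns below are assumptions that vary by Cisco TA and IOS logging configuration, so adjust to your environment):\n\n```spl\nindex=* sourcetype=\"cisco:ios\" (\"%SYS-5-CONFIG_I\" OR \"interface Loopback\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n```",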
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic detects the creation or modification of network interfaces on Cisco devices, which could indicate an attacker establishing persistence or preparing for lateral movement — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.166",
              "n": "Detect Outbound SMB Traffic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects outbound SMB (Server Message Block) connections from internal hosts to external servers. It identifies this activity by monitoring network traffic for SMB requests directed towards the Internet, which are unusual for standard operations. This detection is significant for a SOC as it can indicate an attacker's attempt to retrieve credential hashes through compromised servers, a key step in lateral movement and privilege escalation. If confirmed malicious, this activ…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is likely that the outbound Server Message Block (SMB) traffic is legitimate, if the company's internal networks are not well-defined in the Assets and Identity Framework. Categorize the internal CIDR blocks as `internal` in the lookup file to avoid creating findings for traffic destined to those CIDR blocks. Any other network connection that is going out to the Internet should be investigated and blocked. Best practices suggest preventing external communications of all SMB versions and related protocols at the network boundary.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1071.002"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Outbound SMB Traffic\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Outbound SMB Traffic\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is likely that the outbound Server Message Block (SMB) traffic is legitimate, if the company's internal networks are not well-defined in the Assets and Identity Framework. Categorize the internal CIDR blocks as `internal` in the lookup file to avoid creating findings for traffic destined to those CIDR blocks. Any other network connection that is going out to the Internet should be investigated and blocked. Best practices suggest preventing external communications of all SMB versions and related protocols at the network boundary.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
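\n\nOptional raw-data spot check for outbound SMB (a hedged sketch, not the ESCU detection logic; the RFC 1918 ranges below are an assumption, so substitute your own internal CIDR blocks):\n\n```spl\n| tstats summariesonly=false count from datamodel=Network_Traffic where All_Traffic.dest_port IN (139, 445) by All_Traffic.src_ip, All_Traffic.dest_ip\n| `drop_dm_object_name(All_Traffic)`\n| where NOT (cidrmatch(\"10.0.0.0/8\", dest_ip) OR cidrmatch(\"172.16.0.0/12\", dest_ip) OR cidrmatch(\"192.168.0.0/16\", dest_ip))\n```",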
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch outbound SMB (Server Message Block) connections from internal hosts to external servers — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.167",
              "n": "SMB Traffic Spike - MLTK",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies spikes in the number of Server Message Block (SMB) connections using the Machine Learning Toolkit (MLTK). It leverages the Network_Traffic data model to monitor SMB traffic on ports 139 and 445, applying a machine learning model to detect anomalies. This activity is significant because sudden increases in SMB traffic can indicate lateral movement or data exfiltration attempts by attackers. If confirmed malicious, this behavior could lead to unauthorized access, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count values(All_Traffic.dest) as dest values(All_Traffic.dest_port) as port FROM datamodel=Network_Traffic\n      WHERE All_Traffic.dest_port=139\n        OR\n        All_Traffic.dest_port=445\n        OR\n        All_Traffic.app=smb\n      BY _time span=1h, All_Traffic.src\n    | eval HourOfDay=strftime(_time, \"%H\")\n    | eval DayOfWeek=strftime(_time, \"%A\")\n    | `drop_dm_object_name(All_Traffic)`\n    | apply smb_pdfmodel threshold=0.001\n    | rename \"IsOutlier(count)\" as isOutlier\n    | search isOutlier > 0\n    | sort -count\n    | table _time src dest port count\n    | `smb_traffic_spike___mltk_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "If you are seeing more results than desired, you may consider reducing the value of the threshold in the search. You should also periodically re-run the support search to re-build the ML model on the latest data. Please update the `smb_traffic_spike_mltk_filter` macro to filter out false positive results",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1021.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SMB Traffic Spike - MLTK\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SMB Traffic Spike - MLTK\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: If you are seeing more results than desired, you may consider reducing the value of the threshold in the search. You should also periodically re-run the support search to re-build the ML model on the latest data. Please update the `smb_traffic_spike_mltk_filter` macro to filter out false positive results\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
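\n\nThe detection applies a pre-trained MLTK model (`smb_pdfmodel`). A hedged sketch of the companion baseline (support) search that builds the model is shown below; the field groupings follow the detection's own eval statements, but consult the ESCU content pack for the authoritative version:\n\n```spl\n| tstats `security_content_summariesonly` count FROM datamodel=Network_Traffic WHERE All_Traffic.dest_port=139 OR All_Traffic.dest_port=445 OR All_Traffic.app=smb BY _time span=1h, All_Traffic.src\n| eval HourOfDay=strftime(_time, \"%H\")\n| eval DayOfWeek=strftime(_time, \"%A\")\n| `drop_dm_object_name(All_Traffic)`\n| fit DensityFunction count by \"HourOfDay,DayOfWeek\" into smb_pdfmodel\n```",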
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot spikes in the number of Server Message Block (SMB) connections using the Machine Learning Toolkit (MLTK) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.2.168",
              "n": "Windows Remote Desktop Network Bruteforce Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential Remote Desktop Protocol (RDP) brute force attacks by monitoring network traffic for RDP application activity. This query detects potential RDP brute force attacks by identifying source IPs that have made more than 10 connection attempts to the same RDP port on a host within a one-hour window. The results are presented in a table that includes the source and destination IPs, destination port, number of attempts, and the times of the first and last conne…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 3 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "RDP gateways may have unusually high amounts of traffic from all other hosts' RDP applications in the network.Any legitimate RDP traffic using wrong/expired credentials will be also detected as a false positive.",
              "refs": "https://www.zscaler.com/blogs/security-research/ransomware-delivered-using-rdp-brute-force-attack, https://www.reliaquest.com/blog/rdp-brute-force-attacks/",
              "mitre": [
                "T1110.001"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Remote Desktop Network Bruteforce Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Remote Desktop Network Bruteforce Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: RDP gateways may have unusually high amounts of traffic from all other hosts' RDP applications in the network.Any legitimate RDP traffic using wrong/expired credentials will be also detected as a false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
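\n\nOptional raw-data spot check mirroring the documented thresholds (a hedged sketch, not the ESCU detection logic; index, sourcetype, and the 10-attempt threshold are assumptions to adjust to your environment):\n\n```spl\nindex=* sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=3 DestinationPort=3389\n| bin _time span=1h\n| stats count min(_time) as firstTime max(_time) as lastTime by _time, SourceIp, DestinationIp, DestinationPort\n| where count > 10\n```",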
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot potential Remote Desktop Protocol (remote desktop) brute force attacks by monitoring network traffic for remote desktop application activity — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 26.2,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 168,
            "none": 0
          }
        },
        {
          "i": "10.3",
          "n": "Endpoint Detection & Response (EDR)",
          "u": [
            {
              "i": "10.3.1",
              "n": "Malware Detection Trending",
              "c": "critical",
              "f": "intermediate",
              "v": "Detection trends reveal campaign targeting, endpoint hygiene, and control effectiveness. Spikes indicate active incidents.",
              "t": "`CrowdStrike Falcon Event Streams Technical Add-On` (Splunkbase 5082), `Microsoft Defender Advanced Hunting Add-on` (Splunkbase 5518)",
              "d": "EDR detection events",
              "q": "index=edr sourcetype=\"crowdstrike:detection\"\n| timechart span=1d count by severity",
              "m": "Ingest EDR detection events via TA or API. Normalize detection severity. Track daily detection rates by severity, type, and business unit. Alert on spikes exceeding 2× daily baseline. Report on detection-to-response times.",
              "z": "Line chart (detections over time), Bar chart (detections by type), Pie chart (severity distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5082](https://splunkbase.splunk.com/app/5082), [Splunkbase app 5518](https://splunkbase.splunk.com/app/5518), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `CrowdStrike Falcon Event Streams Technical Add-On` (Splunkbase 5082), `Microsoft Defender Advanced Hunting Add-on` (Splunkbase 5518).\n• Ensure the following data sources are available: EDR detection events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest EDR detection events via TA or API. Normalize detection severity. Track daily detection rates by severity, type, and business unit. Alert on spikes exceeding 2× daily baseline. Report on detection-to-response times.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=\"crowdstrike:detection\"\n| timechart span=1d count by severity\n```\n\nUnderstanding this SPL\n\n**Malware Detection Trending** — Detection trends reveal campaign targeting, endpoint hygiene, and control effectiveness. Spikes indicate active incidents.\n\nDocumented **Data sources**: EDR detection events. **App/TA** (typical add-on context): `CrowdStrike Falcon Event Streams Technical Add-On` (Splunkbase 5082), `Microsoft Defender Advanced Hunting Add-on` (Splunkbase 5518). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: crowdstrike:detection. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=\"crowdstrike:detection\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by severity** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.dest Malware_Attacks.signature Malware_Attacks.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Malware Detection Trending** — Detection trends reveal campaign targeting, endpoint hygiene, and control effectiveness. Spikes indicate active incidents.\n\nDocumented **Data sources**: EDR detection events. **App/TA** (typical add-on context): `CrowdStrike Falcon Event Streams Technical Add-On` (Splunkbase 5082), `Microsoft Defender Advanced Hunting Add-on` (Splunkbase 5518). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (detections over time), Bar chart (detections by type), Pie chart (severity distribution).",
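\n\nOptional spike alert: a hedged sketch of the \"2× daily baseline\" rule from Step 1 (compares each day to a trailing 30-day average; keep the index and sourcetype aligned with your feed):\n\n```spl\nindex=edr sourcetype=\"crowdstrike:detection\" earliest=-30d@d latest=@d\n| bin _time span=1d\n| stats count by _time\n| eventstats avg(count) as daily_baseline\n| where count > 2 * daily_baseline\n```",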
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how our endpoint security findings trend over time so we can see campaigns, blind spots, and whether the controls we paid for are really doing their job.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware",
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.dest Malware_Attacks.signature Malware_Attacks.action span=1h\n| sort -count",
              "e": [
                "crowdstrike",
                "defender"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.2",
              "n": "Quarantine Action Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Failed quarantine means malware remains active on the endpoint. Monitoring ensures automated remediation is working.",
              "t": "EDR TA",
              "d": "EDR remediation/action logs",
              "q": "index=edr sourcetype=\"crowdstrike:detection\"\n| stats count(eval(action=\"quarantined\")) as quarantined, count(eval(action=\"allowed\")) as allowed by severity\n| eval quarantine_rate=round(quarantined/(quarantined+allowed)*100,1)",
              "m": "Track EDR remediation actions (quarantine, kill process, isolate). Calculate quarantine success rate. Alert on failed quarantine actions. Follow up on \"allowed\" malware detections to ensure analyst review.",
              "z": "Pie chart (action distribution), Single value (quarantine success %), Table (failed quarantine events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EDR TA.\n• Ensure the following data sources are available: EDR remediation/action logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack EDR remediation actions (quarantine, kill process, isolate). Calculate quarantine success rate. Alert on failed quarantine actions. Follow up on \"allowed\" malware detections to ensure analyst review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=\"crowdstrike:detection\"\n| stats count(eval(action=\"quarantined\")) as quarantined, count(eval(action=\"allowed\")) as allowed by severity\n| eval quarantine_rate=round(quarantined/(quarantined+allowed)*100,1)\n```\n\nUnderstanding this SPL\n\n**Quarantine Action Monitoring** — Failed quarantine means malware remains active on the endpoint. Monitoring ensures automated remediation is working.\n\nDocumented **Data sources**: EDR remediation/action logs. **App/TA** (typical add-on context): EDR TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: crowdstrike:detection. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=\"crowdstrike:detection\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **quarantine_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Quarantine Action Monitoring** — Failed quarantine means malware remains active on the endpoint. Monitoring ensures automated remediation is working.\n\nDocumented **Data sources**: EDR remediation/action logs. **App/TA** (typical add-on context): EDR TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (action distribution), Single value (quarantine success %), Table (failed quarantine events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch whether quarantine and block steps actually run so malware is not still running because a fix action failed quietly in the background.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.3",
              "n": "Agent Health Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Endpoints without healthy EDR agents are blind spots. Gap detection ensures comprehensive coverage.",
              "t": "EDR TA, scripted input",
              "d": "EDR agent status API, last check-in timestamps",
              "q": "index=edr sourcetype=\"crowdstrike:sensor_health\"\n| eval hours_since_checkin=round((now()-last_seen_epoch)/3600,1)\n| where hours_since_checkin > 24 OR sensor_version < \"6.50\"\n| table hostname, os, sensor_version, hours_since_checkin, status",
              "m": "Poll EDR agent status API daily. Identify agents offline >24 hours, outdated versions, or degraded status. Cross-reference with CMDB for full coverage analysis. Alert on critical servers with unhealthy agents.",
              "z": "Table (unhealthy agents), Single value (% healthy), Pie chart (agent version distribution), Bar chart (offline by OS).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EDR TA, scripted input.\n• Ensure the following data sources are available: EDR agent status API, last check-in timestamps.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll EDR agent status API daily. Identify agents offline >24 hours, outdated versions, or degraded status. Cross-reference with CMDB for full coverage analysis. Alert on critical servers with unhealthy agents.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=\"crowdstrike:sensor_health\"\n| eval hours_since_checkin=round((now()-last_seen_epoch)/3600,1)\n| where hours_since_checkin > 24 OR sensor_version < \"6.50\"\n| table hostname, os, sensor_version, hours_since_checkin, status\n```\n\nUnderstanding this SPL\n\n**Agent Health Monitoring** — Endpoints without healthy EDR agents are blind spots. Gap detection ensures comprehensive coverage.\n\nDocumented **Data sources**: EDR agent status API, last check-in timestamps. **App/TA** (typical add-on context): EDR TA, scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: crowdstrike:sensor_health. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=\"crowdstrike:sensor_health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hours_since_checkin** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours_since_checkin > 24 OR sensor_version < \"6.50\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Agent Health Monitoring**): table hostname, os, sensor_version, hours_since_checkin, status\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Agent Health Monitoring** — Endpoints without healthy EDR agents are blind spots. Gap detection ensures comprehensive coverage.\n\nDocumented **Data sources**: EDR agent status API, last check-in timestamps. **App/TA** (typical add-on context): EDR TA, scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unhealthy agents), Single value (% healthy), Pie chart (agent version distribution), Bar chart (offline by OS).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check that laptops and servers are really checking in to their security sensor so a broken install does not leave a wide-open hole for attackers.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.4",
              "n": "Behavioral Detection Alerts",
              "c": "critical",
              "f": "intermediate",
              "v": "Behavioral detections catch attacks that bypass signatures (fileless malware, LOLBins, living-off-the-land). These are high-fidelity signals.",
              "t": "EDR TA",
              "d": "EDR behavioral/heuristic alerts",
              "q": "index=edr sourcetype=\"crowdstrike:detection\" technique_id=\"T*\"\n| stats count by technique_id, tactic, hostname\n| sort -count",
              "m": "Ingest behavioral detection data. Map to MITRE ATT&CK framework (technique_id, tactic). Alert on high-confidence behavioral detections. Track most common techniques for threat intelligence and red team exercises.",
              "z": "MITRE ATT&CK heatmap, Table (behavioral detections), Bar chart (top techniques).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EDR TA.\n• Ensure the following data sources are available: EDR behavioral/heuristic alerts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest behavioral detection data. Map to MITRE ATT&CK framework (technique_id, tactic). Alert on high-confidence behavioral detections. Track most common techniques for threat intelligence and red team exercises.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=\"crowdstrike:detection\" technique_id=\"T*\"\n| stats count by technique_id, tactic, hostname\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Behavioral Detection Alerts** — Behavioral detections catch attacks that bypass signatures (fileless malware, LOLBins, living-off-the-land). These are high-fidelity signals.\n\nDocumented **Data sources**: EDR behavioral/heuristic alerts. **App/TA** (typical add-on context): EDR TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: crowdstrike:detection. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=\"crowdstrike:detection\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by technique_id, tactic, hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.dest Malware_Attacks.signature Malware_Attacks.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Behavioral Detection Alerts** — Behavioral detections catch attacks that bypass signatures (fileless malware, LOLBins, living-off-the-land). These are high-fidelity signals.\n\nDocumented **Data sources**: EDR behavioral/heuristic alerts. **App/TA** (typical add-on context): EDR TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: MITRE ATT&CK heatmap, Table (behavioral detections), Bar chart (top techniques).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We lean on behavior-based rules on the endpoint to catch fileless and living-off-the-land tricks, not only a known-bad file name.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware",
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.dest Malware_Attacks.signature Malware_Attacks.action span=1h\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.5",
              "n": "Endpoint Isolation Events",
              "c": "critical",
              "f": "beginner",
              "v": "Isolation events indicate active incident response. Tracking ensures isolation is maintained and properly lifted when resolved.",
              "t": "EDR TA",
              "d": "EDR containment/isolation logs",
              "q": "index=edr sourcetype=\"crowdstrike:containment\"\n| table _time, hostname, action, initiated_by, reason\n| sort -_time",
              "m": "Track all isolation events (isolate, un-isolate). Alert on isolation events for awareness. Track isolation duration. Alert when endpoints remain isolated >24 hours without resolution. Maintain isolation audit trail.",
              "z": "Table (isolated endpoints), Timeline (isolation events), Single value (currently isolated count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EDR TA.\n• Ensure the following data sources are available: EDR containment/isolation logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack all isolation events (isolate, un-isolate). Alert on isolation events for awareness. Track isolation duration. Alert when endpoints remain isolated >24 hours without resolution. Maintain isolation audit trail.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=\"crowdstrike:containment\"\n| table _time, hostname, action, initiated_by, reason\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Endpoint Isolation Events** — Isolation events indicate active incident response. Tracking ensures isolation is maintained and properly lifted when resolved.\n\nDocumented **Data sources**: EDR containment/isolation logs. **App/TA** (typical add-on context): EDR TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: crowdstrike:containment. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=\"crowdstrike:containment\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Endpoint Isolation Events**): table _time, hostname, action, initiated_by, reason\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Endpoint Isolation Events** — Isolation events indicate active incident response. Tracking ensures isolation is maintained and properly lifted when resolved.\n\nDocumented **Data sources**: EDR containment/isolation logs. **App/TA** (typical add-on context): EDR TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (isolated endpoints), Timeline (isolation events), Single value (currently isolated count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow when machines get cut off from the network in an emergency so that isolation is only lifted when the problem is really cleared.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "10.3.6",
              "n": "Threat Hunting Indicators",
              "c": "high",
              "f": "beginner",
              "v": "Proactive threat hunting using EDR telemetry detects stealthy threats that evade automated detection.",
              "t": "EDR TA (telemetry data)",
              "d": "EDR process telemetry, file events, network connections",
              "q": "index=edr sourcetype=\"crowdstrike:events\"\n| search (process_name=\"powershell.exe\" AND command_line=\"*-enc*\")\n    OR (process_name=\"rundll32.exe\" AND parent_process_name!=\"explorer.exe\")\n    OR (process_name=\"certutil.exe\" AND command_line=\"*-urlcache*\")\n| table _time, hostname, user, process_name, command_line, parent_process_name",
              "m": "Ingest EDR telemetry (process creation, network connections, file writes). Create hunting queries for LOLBin usage, encoded PowerShell, suspicious parent-child process relationships. Schedule as recurring searches for continuous hunting.",
              "z": "Table (suspicious indicators), Timeline (hunting hits), Bar chart (indicators by technique).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EDR TA (telemetry data).\n• Ensure the following data sources are available: EDR process telemetry, file events, network connections.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest EDR telemetry (process creation, network connections, file writes). Create hunting queries for LOLBin usage, encoded PowerShell, suspicious parent-child process relationships. Schedule as recurring searches for continuous hunting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=\"crowdstrike:events\"\n| search (process_name=\"powershell.exe\" AND command_line=\"*-enc*\")\n    OR (process_name=\"rundll32.exe\" AND parent_process_name!=\"explorer.exe\")\n    OR (process_name=\"certutil.exe\" AND command_line=\"*-urlcache*\")\n| table _time, hostname, user, process_name, command_line, parent_process_name\n```\n\nUnderstanding this SPL\n\n**Threat Hunting Indicators** — Proactive threat hunting using EDR telemetry detects stealthy threats that evade automated detection.\n\nDocumented **Data sources**: EDR process telemetry, file events, network connections. **App/TA** (typical add-on context): EDR TA (telemetry data). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: crowdstrike:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=\"crowdstrike:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Threat Hunting Indicators**): table _time, hostname, user, process_name, command_line, parent_process_name\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.dest Malware_Attacks.signature Malware_Attacks.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Threat Hunting Indicators** — Proactive threat hunting using EDR telemetry detects stealthy threats that evade automated detection.\n\nDocumented **Data sources**: EDR process telemetry, file events, network connections. **App/TA** (typical add-on context): EDR TA (telemetry data). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious indicators), Timeline (hunting hits), Bar chart (indicators by technique).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We search our endpoint activity for odd process trees and command lines, because that is how attackers often abuse the same trusted tools you already have installed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware",
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.dest Malware_Attacks.signature Malware_Attacks.action span=1h\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.7",
              "n": "EDR Coverage Gaps",
              "c": "high",
              "f": "advanced",
              "v": "Identifies endpoints without EDR protection, closing blind spots that attackers exploit.",
              "t": "EDR API + CMDB lookup",
              "d": "EDR agent inventory, CMDB/asset inventory",
              "q": "| inputlookup cmdb_endpoints.csv WHERE os_type IN (\"Windows\",\"Linux\",\"macOS\")\n| join type=left max=1 hostname [search index=edr sourcetype=\"crowdstrike:sensor_health\" | stats latest(status) as edr_status by hostname]\n| where isnull(edr_status) OR edr_status!=\"active\"\n| table hostname, os_type, department, edr_status",
              "m": "Export EDR agent inventory and cross-reference with CMDB/AD computer accounts. Identify systems without agents. Report coverage percentage. Alert when coverage drops below target (e.g., <98%). Prioritize critical servers.",
              "z": "Single value (coverage %), Table (uncovered endpoints), Pie chart (covered vs uncovered), Bar chart (gaps by department).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EDR API + CMDB lookup.\n• Ensure the following data sources are available: EDR agent inventory, CMDB/asset inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport EDR agent inventory and cross-reference with CMDB/AD computer accounts. Identify systems without agents. Report coverage percentage. Alert when coverage drops below target (e.g., <98%). Prioritize critical servers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup cmdb_endpoints.csv WHERE os_type IN (\"Windows\",\"Linux\",\"macOS\")\n| join type=left max=1 hostname [search index=edr sourcetype=\"crowdstrike:sensor_health\" | stats latest(status) as edr_status by hostname]\n| where isnull(edr_status) OR edr_status!=\"active\"\n| table hostname, os_type, department, edr_status\n```\n\nUnderstanding this SPL\n\n**EDR Coverage Gaps** — Identifies endpoints without EDR protection, closing blind spots that attackers exploit.\n\nDocumented **Data sources**: EDR agent inventory, CMDB/asset inventory. **App/TA** (typical add-on context): EDR API + CMDB lookup. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(edr_status) OR edr_status!=\"active\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **EDR Coverage Gaps**): table hostname, os_type, department, edr_status\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**EDR Coverage Gaps** — Identifies endpoints without EDR protection, closing blind spots that attackers exploit.\n\nDocumented **Data sources**: EDR agent inventory, CMDB/asset inventory. **App/TA** (typical add-on context): EDR API + CMDB lookup. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (coverage %), Table (uncovered endpoints), Pie chart (covered vs uncovered), Bar chart (gaps by department).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare the full list of company machines to who actually has a working sensor so we are not comforted by a green dashboard when real devices are not reporting.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process_name Processes.user span=1h\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.8",
              "n": "Ransomware Canary Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "EDR-detected mass file encryption patterns provide earliest possible ransomware detection, enabling automated containment.",
              "t": "EDR TA",
              "d": "EDR behavioral detection (mass file modification patterns)",
              "q": "index=edr sourcetype=\"crowdstrike:detection\"\n| search tactic=\"impact\" technique_id=\"T1486\"\n| table _time, hostname, user, process_name, severity, description",
              "m": "Ensure EDR has behavioral ransomware detection enabled. Alert at critical priority on any ransomware behavioral detection. Integrate with SOAR for automated endpoint isolation. Track affected file scope from EDR telemetry.",
              "z": "Single value (ransomware detections — target: 0), Table (detection details), Timeline (ransomware events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EDR TA.\n• Ensure the following data sources are available: EDR behavioral detection (mass file modification patterns).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure EDR has behavioral ransomware detection enabled. Alert at critical priority on any ransomware behavioral detection. Integrate with SOAR for automated endpoint isolation. Track affected file scope from EDR telemetry.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=\"crowdstrike:detection\"\n| search tactic=\"impact\" technique_id=\"T1486\"\n| table _time, hostname, user, process_name, severity, description\n```\n\nUnderstanding this SPL\n\n**Ransomware Canary Detection** — EDR-detected mass file encryption patterns provide earliest possible ransomware detection, enabling automated containment.\n\nDocumented **Data sources**: EDR behavioral detection (mass file modification patterns). **App/TA** (typical add-on context): EDR TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: crowdstrike:detection. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=\"crowdstrike:detection\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Ransomware Canary Detection**): table _time, hostname, user, process_name, severity, description\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.dest Malware_Attacks.signature Malware_Attacks.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Ransomware Canary Detection** — EDR-detected mass file encryption patterns provide earliest possible ransomware detection, enabling automated containment.\n\nDocumented **Data sources**: EDR behavioral detection (mass file modification patterns). **App/TA** (typical add-on context): EDR TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (ransomware detections — target: 0), Table (detection details), Timeline (ransomware events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for signs of runaway file impact on endpoints so we can act while there is still time to stop a small problem from growing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware",
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.dest Malware_Attacks.signature Malware_Attacks.action span=1h\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.9",
              "n": "Cisco AI Defense Security Alerts by Application Name",
              "c": "high",
              "f": "intermediate",
              "v": "The search surfaces alerts from the Cisco AI Defense product for potential attacks against the AI models running in your environment. This analytic identifies security events within Cisco AI Defense by examining event messages, actions, and policy names. It focuses on connections and applications associated with specific guardrail entities and ruleset types. By aggregating and analyzing these elements, the search helps detect potential policy violations and security threats, enabling proactive d…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco AI Defense Alerts",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$application_name$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to entity entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco AI Defense Alerts ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may vary based on Cisco AI Defense configuration; monitor and filter out the alerts that are not relevant to your environment.",
              "refs": "https://www.robustintelligence.com/blog-posts/prompt-injection-attack-on-gpt-4, https://docs.aws.amazon.com/prescriptive-guidance/latest/llm-prompt-engineering-best-practices/common-attacks.html",
              "mitre": [],
              "dtype": "other",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco AI Defense Security Alerts by Application Name\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to entity entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco AI Defense Alerts. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco AI Defense Security Alerts by Application Name\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may vary based on Cisco AI Defense configuration; monitor and filter out the alerts that are not relevant to your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.10",
              "n": "Detect HTML Help Spawn Child Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of hh.exe (HTML Help) spawning a child process, indicating the use of a Compiled HTML Help (CHM) file to execute Windows script code. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where hh.exe is the parent process. This activity is significant as it may indicate an attempt to execute malicious scripts via CHM files, a known technique for bypassing security controls. If confirmed m…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate applications (ex. web browsers) may spawn a child process. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1218/001/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.001/T1218.001.md, https://lolbas-project.github.io/lolbas/Binaries/Hh/, https://gist.github.com/mgeeky/cce31c8602a144d8f2172a73d510e0e7, https://web.archive.org/web/20220119133748/https://cyberforensicator.com/2019/01/20/silence-dissecting-malicious-chm-files-and-performing-forensic-analysis/",
              "mitre": [
                "T1218.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect HTML Help Spawn Child Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect HTML Help Spawn Child Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate applications (ex. web browsers) may spawn a child process. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect HTML Help Spawn Child Process” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.11",
              "n": "ESXi Bulk VM Termination",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies when all virtual machines on an ESXi host are abruptly terminated, which may indicate malicious activity such as a deliberate denial-of-service, ransomware staging, or an attempt to destroy critical workloads.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1673",
                "T1529",
                "T1499"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Bulk VM Termination\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1673, T1529, T1499. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Bulk VM Termination\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “ESXi Bulk VM Termination” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.12",
              "n": "Ollama Excessive API Requests",
              "c": "high",
              "f": "intermediate",
              "v": "Detects potential Distributed Denial of Service (DDoS) attacks or rate limit abuse against Ollama API endpoints by identifying excessive request volumes from individual client IP addresses. This detection monitors GIN-formatted Ollama server logs to identify clients generating abnormally high request rates within short time windows, which may indicate automated attacks, botnet activity, or resource exhaustion attempts targeting local AI model infrastructure.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Ollama Server",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Ingest Ollama logs via Splunk TA-ollama add-on by configuring file monitoring inputs pointed to your Ollama server log directories (sourcetype: ollama:server), or enable HTTP Event Collector (HEC) for real-time API telemetry and prompt analytics (sourcetypes: ollama:api, ollama:prompts). CIM compatibility using the Web datamodel for standardized security detections.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate automated services (CI/CD pipelines, monitoring tools, batch jobs), multiple users behind NAT/proxy infrastructure, or authorized load testing activities may trigger this detection during normal operations. Operator must adjust threshold accordingly.",
              "refs": "https://github.com/rosplk/ta-ollama",
              "mitre": [
                "T1498"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ollama Excessive API Requests\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Ollama Server. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1498. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ollama Excessive API Requests\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate automated services (CI/CD pipelines, monitoring tools, batch jobs), multiple users behind NAT/proxy infrastructure, or authorized load testing activities may trigger this detection during normal operations. Operator must adjust threshold accordingly.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Ollama Excessive API Requests” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.13",
              "n": "Ollama Possible Model Exfiltration Data Leakage",
              "c": "high",
              "f": "intermediate",
              "v": "Detects data leakage and exfiltration attempts targeting Ollama model metadata and configuration endpoints. Adversaries repeatedly query /api/show, /api/tags, and /api/v1/models to systematically extract sensitive model information including architecture details, fine-tuning parameters, system paths, Modelfile configurations, and proprietary customizations. Multiple inspection attempts within a 15-minute window indicate automated exfiltration of valuable intellectual property such as custom mode…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Ollama Server",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Ingest Ollama logs via Splunk TA-ollama add-on by configuring file monitoring inputs pointed to your Ollama server log directories (sourcetype: ollama:server), or enable HTTP Event Collector (HEC) for real-time API telemetry and prompt analytics (sourcetypes: ollama:api, ollama:prompts). CIM compatibility using the Web datamodel for standardized security detections.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrative activities such as model inventory management, monitoring dashboards polling model status, automated health checks verifying model availability, CI/CD pipelines validating deployments, development tools inspecting model configurations, or users browsing available models through management interfaces may trigger this detection during normal operations. Adjust the threshold based on your environment's baseline activity.",
              "refs": "https://github.com/rosplk/ta-ollama",
              "mitre": [
                "T1048"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ollama Possible Model Exfiltration Data Leakage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Ollama Server. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1048. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ollama Possible Model Exfiltration Data Leakage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrative activities such as model inventory management, monitoring dashboards polling model status, automated health checks verifying model availability, CI/CD pipelines validating deployments, development tools inspecting model configurations, or users browsing available models through management interfaces may trigger this detection during normal operations. Adjust the threshold based on your environment's baseline activity.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.14",
              "n": "Ollama Suspicious Prompt Injection Jailbreak",
              "c": "high",
              "f": "intermediate",
              "v": "Detects potential prompt injection or jailbreak attempts against Ollama API endpoints by identifying requests with abnormally long response times. Attackers often craft complex, layered prompts designed to bypass AI safety controls, which typically result in extended processing times as the model attempts to parse and respond to these malicious inputs. This detection monitors /api/generate and /api/chat endpoints for requests exceeding 30 seconds, which may indicate sophisticated jailbreak techn…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Ollama Server",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Ingest Ollama logs via Splunk TA-ollama add-on by configuring file monitoring inputs pointed to your Ollama server log directories (sourcetype: ollama:server), or enable HTTP Event Collector (HEC) for real-time API telemetry and prompt analytics (sourcetypes: ollama:api, ollama:prompts). CIM compatibility using the Web datamodel for standardized security detections.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate complex queries requiring extensive model reasoning, large context windows processing substantial amounts of text, batch processing operations, or resource-constrained systems experiencing performance degradation may trigger this detection during normal operations.",
              "refs": "https://github.com/rosplk/ta-ollama, https://github.com/OWASP/www-project-ai-testing-guide",
              "mitre": [
                "T1190",
                "T1059"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ollama Suspicious Prompt Injection Jailbreak\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Ollama Server. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ollama Suspicious Prompt Injection Jailbreak\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate complex queries requiring extensive model reasoning, large context windows processing substantial amounts of text, batch processing operations, or resource-constrained systems experiencing performance degradation may trigger this detection during normal operations.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.15",
              "n": "ASL AWS Disable Bucket Versioning",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when AWS S3 bucket versioning is suspended by a user. It leverages AWS CloudTrail logs to identify `PutBucketVersioning` events with the `VersioningConfiguration.Status` set to `Suspended`. This activity is significant because disabling versioning can prevent recovery of deleted or modified data, which is a common tactic in ransomware attacks. If confirmed malicious, this action could lead to data loss and hinder recovery efforts, severely impacting data integrity …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that an AWS Administrator has legitimately disabled versioning on certain buckets to avoid costs.",
              "refs": "https://invictus-ir.medium.com/ransomware-in-the-cloud-7f14805bbe82, https://bleemb.medium.com/data-exfiltration-with-native-aws-s3-features-c94ae4d13436",
              "mitre": [
                "T1490"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Disable Bucket Versioning\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Disable Bucket Versioning\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that an AWS Administrator has legitimately disabled versioning on certain buckets to avoid costs.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use cloud account activity to see mis-steps and misuse early so security and cloud teams are not the last to know.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.16",
              "n": "AWS Bedrock Delete GuardRails",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to delete AWS Bedrock GuardRails, which are security controls designed to prevent harmful, biased, or inappropriate AI outputs. It leverages AWS CloudTrail logs to detect when a user or service calls the DeleteGuardrail API. This activity is significant as it may indicate an adversary attempting to remove safety guardrails after compromising credentials, potentially to enable harmful or malicious model outputs. Removing guardrails could allow attackers …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DeleteGuardrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The Splunk AWS Add-on is required to utilize this data. The search requires AWS CloudTrail logs with Bedrock service events enabled. You must install and configure the AWS App for Splunk (version 6.0.0 or later) and Splunk Add-on for AWS (version 5.1.0 or later) to collect CloudTrail logs from AWS. Ensure the CloudTrail is capturing Bedrock GuardRails management events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrators may delete GuardRails as part of normal operations, such as when replacing outdated guardrails with updated versions, cleaning up test resources, or consolidating security controls. Consider implementing an allowlist for expected administrators who regularly manage GuardRails configurations.",
              "refs": "https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html, https://docs.aws.amazon.com/bedrock/latest/APIReference/API_DeleteGuardrail.html, https://attack.mitre.org/techniques/T1562/",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Bedrock Delete GuardRails\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DeleteGuardrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Bedrock Delete GuardRails\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrators may delete GuardRails as part of normal operations, such as when replacing outdated guardrails with updated versions, cleaning up test resources, or consolidating security controls. Consider implementing an allowlist for expected administrators who regularly manage GuardRails configurations.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.17",
              "n": "AWS Bedrock Delete Knowledge Base",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to delete AWS Bedrock Knowledge Bases, which are resources that store and manage domain-specific information for AI models. It monitors AWS CloudTrail logs for DeleteKnowledgeBase API calls. This activity could indicate an adversary attempting to remove knowledge bases after compromising credentials, potentially to disrupt business operations or remove traces of data access. Deleting knowledge bases could impact model performance, remove critical busine…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DeleteKnowledgeBase",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The Splunk AWS Add-on is required to utilize this data. The search requires AWS CloudTrail logs with Bedrock service events enabled. You must install and configure the AWS App for Splunk (version 6.0.0 or later) and Splunk Add-on for AWS (version 5.1.0 or later) to collect CloudTrail logs from AWS. Ensure the CloudTrail is capturing Bedrock Knowledge Base management events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrators may delete Knowledge Bases as part of normal operations, such as when replacing outdated knowledge bases, removing test resources, or consolidating information. Consider implementing an allowlist for expected administrators who regularly manage Knowledge Base configurations.",
              "refs": "https://www.sumologic.com/blog/defenders-guide-to-aws-bedrock/, https://attack.mitre.org/techniques/T1562/",
              "mitre": [
                "T1485"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Bedrock Delete Knowledge Base\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DeleteKnowledgeBase. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Bedrock Delete Knowledge Base\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrators may delete Knowledge Bases as part of normal operations, such as when replacing outdated knowledge bases, removing test resources, or consolidating information. Consider implementing an allowlist for expected administrators who regularly manage Knowledge Base configurations.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.18",
              "n": "AWS Bedrock Delete Model Invocation Logging Configuration",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to delete AWS Bedrock model invocation logging configurations. It leverages AWS CloudTrail logs to detect when a user or service calls the DeleteModelInvocationLogging API. This activity is significant as it may indicate an adversary attempting to remove audit trails of model interactions after compromising credentials. Deleting model invocation logs could allow attackers to interact with AI models without leaving traces, potentially enabling them to co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DeleteModelInvocationLoggingConfiguration",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The Splunk AWS Add-on is required to utilize this data. The search requires AWS CloudTrail logs with Bedrock service events enabled. You must install and configure the AWS App for Splunk (version 6.0.0 or later) and Splunk Add-on for AWS (version 5.1.0 or later) to collect CloudTrail logs from AWS. Ensure the CloudTrail is capturing Bedrock model invocation logging management events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrators may delete model invocation logging configurations during maintenance, when updating logging policies, or when cleaning up unused resources. Consider implementing an allowlist for expected administrators who regularly manage logging configurations.",
              "refs": "https://www.sumologic.com/blog/defenders-guide-to-aws-bedrock/, https://attack.mitre.org/techniques/T1562/008/",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Bedrock Delete Model Invocation Logging Configuration\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DeleteModelInvocationLoggingConfiguration. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Bedrock Delete Model Invocation Logging Configuration\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrators may delete model invocation logging configurations during maintenance, when updating logging policies, or when cleaning up unused resources. Consider implementing an allowlist for expected administrators who regularly manage logging configurations.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.19",
              "n": "AWS Bedrock High Number List Foundation Model Failures",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an high number of AccessDenied attempts to list AWS Bedrock foundation models. It leverages AWS CloudTrail logs to detect when a user or service experiences multiple failures when calling the ListFoundationModels API. This activity is significant as it may indicate an adversary performing reconnaissance of available AI models after compromising credentials with limited permissions. Repeated failures could suggest brute force attempts to enumerate accessible reso…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The Splunk AWS Add-on is required to utilize this data. The search requires AWS CloudTrail logs with Bedrock service events enabled. You must install and configure the AWS App for Splunk (version 6.0.0 or later) and Splunk Add-on for AWS (version 5.1.0 or later) to collect CloudTrail logs from AWS.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate users may encounter multiple failures during permission testing, role transitions, or when service permissions are being reconfigured. High volumes of API errors may also occur during automated processes with misconfigured IAM policies or when new Bedrock features are being explored through API testing.",
              "refs": "https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListFoundationModels.html, https://trustoncloud.com/blog/exposing-the-weakness-how-we-identified-a-flaw-in-bedrocks-foundation-model-access-control/, https://attack.mitre.org/techniques/T1595/",
              "mitre": [
                "T1580"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Bedrock High Number List Foundation Model Failures\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1580. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Bedrock High Number List Foundation Model Failures\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate users may encounter multiple failures during permission testing, role transitions, or when service permissions are being reconfigured. High volumes of API errors may also occur during automated processes with misconfigured IAM policies or when new Bedrock features are being explored through API testing.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.20",
              "n": "AWS Bedrock Invoke Model Access Denied",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies access denied error when attempting to invoke AWS Bedrock models. It leverages AWS CloudTrail logs to detect when a user or service receives an AccessDenied error when calling the InvokeModel API. This activity is significant as it may indicate an adversary attempting to access Bedrock models with insufficient permissions after compromising credentials. If confirmed malicious, this could suggest reconnaissance activities or privilege escalation attempts targetin…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The Splunk AWS Add-on is required to utilize this data. The search requires AWS CloudTrail logs with Bedrock service events enabled. You must install and configure the AWS App for Splunk (version 6.0.0 or later) and Splunk Add-on for AWS (version 5.1.0 or later) to collect CloudTrail logs from AWS.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate users may encounter access denied errors during permission testing, role transitions, or when service permissions are being reconfigured. Access denials may also happen when automated processes are using outdated credentials or when new Bedrock features are being explored.",
              "refs": "https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListFoundationModels.html, https://trustoncloud.com/blog/exposing-the-weakness-how-we-identified-a-flaw-in-bedrocks-foundation-model-access-control/, https://attack.mitre.org/techniques/T1595/",
              "mitre": [
                "T1078",
                "T1550"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Bedrock Invoke Model Access Denied\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078, T1550. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Bedrock Invoke Model Access Denied\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate users may encounter access denied errors during permission testing, role transitions, or when service permissions are being reconfigured. Access denials may also happen when automated processes are using outdated credentials or when new Bedrock features are being explored.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.21",
              "n": "AWS Disable Bucket Versioning",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when AWS S3 bucket versioning is suspended by a user. It leverages AWS CloudTrail logs to identify `PutBucketVersioning` events with the `VersioningConfiguration.Status` set to `Suspended`. This activity is significant because disabling versioning can prevent recovery of deleted or modified data, which is a common tactic in ransomware attacks. If confirmed malicious, this action could lead to data loss and hinder recovery efforts, severely impacting data integrity …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail PutBucketVersioning",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail PutBucketVersioning ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that an AWS Administrator has legitimately disabled versioning on certain buckets to avoid costs.",
              "refs": "https://invictus-ir.medium.com/ransomware-in-the-cloud-7f14805bbe82, https://bleemb.medium.com/data-exfiltration-with-native-aws-s3-features-c94ae4d13436",
              "mitre": [
                "T1490"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Disable Bucket Versioning\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail PutBucketVersioning. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Disable Bucket Versioning\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that an AWS Administrator has legitimately disabled versioning on certain buckets to avoid costs.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use cloud account activity to see mis-steps and misuse early so security and cloud teams are not the last to know.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.22",
              "n": "Cloud API Calls From Previously Unseen User Roles",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects cloud API calls executed by user roles that have not previously run these commands. It leverages the Change data model in Splunk to identify commands executed by users with the user_type of AssumedRole and a status of success. This activity is significant because new commands from different user roles can indicate potential malicious activity or unauthorized actions. If confirmed malicious, this behavior could lead to unauthorized access, data breaches, or other da…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cloud API Calls From Previously Unseen User Roles\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cloud API Calls From Previously Unseen User Roles\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.23",
              "n": "Detect Spike in AWS Security Hub Alerts for EC2 Instance",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a spike in the number of AWS Security Hub alerts for an EC2 instance within a 4-hour interval. It leverages AWS Security Hub findings data, calculating the average and standard deviation of alerts to detect anomalies. This activity is significant for a SOC as a sudden increase in alerts may indicate potential security incidents or misconfigurations requiring immediate attention. If confirmed malicious, this could signify an ongoing attack, leading to unauthorize…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS Security Hub",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS Security Hub ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "system",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Spike in AWS Security Hub Alerts for EC2 Instance\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS Security Hub. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Spike in AWS Security Hub Alerts for EC2 Instance\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect Spike in AWS Security Hub Alerts for EC2 Instance” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.24",
              "n": "7zip CommandLine To SMB Share Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of 7z or 7za processes with command lines pointing to SMB network shares. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant as it may indicate an attempt to archive and exfiltrate sensitive files to a network share, a technique observed in CONTI LEAK tools. If confirmed malicious, this behavior could lead to data exfiltration, compromising sensitive i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where (Processes.process_name =\"7z.exe\" OR Processes.process_name = \"7za.exe\" OR Processes.process_name = \"7zr.exe\" OR Processes.original_file_name = \"7z.exe\" OR Processes.original_file_name =  \"7za.exe\" OR Processes.original_file_name =  \"7zr.exe\") AND (Processes.process=\"*\\\\C$\\\\*\" OR Processes.process=\"*\\\\Admin$\\\\*\" OR Processes.process=\"*\\\\IPC$\\\\*\") by Processes.action Processes.dest Processes.original_file_name Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path Processes.process Processes.process_exec Processes.process_guid Processes.process_hash Processes.process_id Processes.process_integrity_level Processes.process_name Processes.process_path Processes.user Processes.user_id Processes.vendor_product | `drop_dm_object_name(Processes)` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `7zip_commandline_to_smb_share_path_filter`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://threadreaderapp.com/thread/1423361119926816776.html",
              "mitre": [
                "T1560.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"7zip CommandLine To SMB Share Path\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1560.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"7zip CommandLine To SMB Share Path\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “7zip CommandLine To SMB Share Path” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.25",
              "n": "Active Setup Registry Autostart",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications to the Active Setup registry for persistence and privilege escalation. It leverages data from the Endpoint.Registry data model, focusing on changes to the \"StubPath\" value within the \"SOFTWARE\\\\Microsoft\\\\Active Setup\\\\Installed Components\" path. This activity is significant as it is commonly used by malware, adware, and APTs to maintain persistence on compromised machines. If confirmed malicious, this could allow attackers to execute code …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Active setup installer may add or modify this registry.",
              "refs": "https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=Backdoor%3AWin32%2FPoisonivy.E, https://attack.mitre.org/techniques/T1547/014/",
              "mitre": [
                "T1547.014"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Active Setup Registry Autostart\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.014. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Active Setup Registry Autostart\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Active setup installer may add or modify this registry.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
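\n\nIllustrative Detection Logic (Sketch)\n\nThe authoritative search ships inside the ESCU Correlation Search definition and can change between content releases, so treat Content Management as the source of truth. For orientation only, here is a minimal sketch of the registry logic this detection describes, assuming CIM `Endpoint.Registry` field names (this is not the packaged ESCU search):\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\SOFTWARE\\\\Microsoft\\\\Active Setup\\\\Installed Components*\"\n  AND Registry.registry_value_name=\"StubPath\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",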
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.26",
              "n": "Add DefaultUser And Password In Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious registry modifications that implement auto admin logon by adding DefaultUserName and DefaultPassword values. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the \"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon\" registry path. This activity is significant because it is associated with BlackMatter ransomware, which uses this technique to automatically log on to compromised hosts and continue encryption afte…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 12, Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://news.sophos.com/en-us/2021/08/09/blackmatter-ransomware-emerges-from-the-shadow-of-darkside/",
              "mitre": [
                "T1552.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Add DefaultUser And Password In Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 12, Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Add DefaultUser And Password In Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
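\n\nIllustrative Detection Logic (Sketch)\n\nThe packaged ESCU search in Content Management is the source of truth. The following is a minimal sketch of the Winlogon logic this detection describes, assuming CIM `Endpoint.Registry` field names (it is not the packaged search). The registry value data is deliberately left out of the output because DefaultPassword stores a plaintext credential:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\SOFTWARE\\\\Microsoft\\\\Windows NT\\\\CurrentVersion\\\\Winlogon*\"\n  AND Registry.registry_value_name IN (\"DefaultUserName\", \"DefaultPassword\")\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",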
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.27",
              "n": "Allow Operation with Consent Admin",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a registry modification that allows the 'Consent Admin' to perform operations requiring elevation without user consent or credentials. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the 'ConsentPromptBehaviorAdmin' value within the Windows Policies System registry path. This activity is significant as it indicates a potential privilege escalation attempt, which could allow an attacker to execute high-privilege tasks with…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-gpsb/341747f5-6b5d-4d30-85fc-fa1cc04038d4, https://www.trendmicro.com/vinfo/no/threat-encyclopedia/malware/Ransom.Win32.MRDEC.MRA/",
              "mitre": [
                "T1548"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Allow Operation with Consent Admin\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Allow Operation with Consent Admin\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
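\n\nIllustrative Detection Logic (Sketch)\n\nView the authoritative SPL in Content Management; the following is only a minimal sketch of the UAC policy logic this detection describes, assuming CIM `Endpoint.Registry` field names. The value data is surfaced in the output because a ConsentPromptBehaviorAdmin setting of 0 permits elevation without prompting, which is the state of interest here:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\CurrentVersion\\\\Policies\\\\System*\"\n  AND Registry.registry_value_name=\"ConsentPromptBehaviorAdmin\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",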
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.28",
              "n": "Anomalous usage of 7zip",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of 7z.exe, a 7-Zip utility, spawned from rundll32.exe or dllhost.exe. This behavior is identified using Endpoint Detection and Response (EDR) telemetry, focusing on process names and parent processes. This activity is significant as it may indicate an adversary attempting to use 7-Zip for data exfiltration, often by renaming the executable to evade detection. If confirmed malicious, this could lead to unauthorized data archiving and exfiltration, comp…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as this behavior is not normal for `rundll32.exe` or `dllhost.exe` to spawn and run 7zip.",
              "refs": "https://attack.mitre.org/techniques/T1560/001/, https://www.microsoft.com/security/blog/2021/01/20/deep-dive-into-the-solorigate-second-stage-activation-from-sunburst-to-teardrop-and-raindrop/, https://thedfirreport.com/2021/01/31/bazar-no-ryuk/",
              "mitre": [
                "T1560.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Anomalous usage of 7zip\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1560.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Anomalous usage of 7zip\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as this behavior is not normal for `rundll32.exe` or `dllhost.exe` to spawn and run 7zip.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
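\n\nIllustrative Detection Logic (Sketch)\n\nThe packaged ESCU search remains the source of truth in Content Management. The following is a minimal sketch of the parent-child process logic this detection describes, assuming CIM `Endpoint.Processes` field names (not the packaged search). The original_file_name clause reflects the renaming behavior noted above, since PE metadata survives a rename of the binary on disk:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name IN (\"rundll32.exe\", \"dllhost.exe\")\n  AND (Processes.process_name IN (\"7z.exe\", \"7za.exe\") OR Processes.original_file_name IN (\"7z.exe\", \"7za.exe\"))\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",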
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Anomalous usage of 7zip” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.29",
              "n": "Auto Admin Logon Registry Entry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious registry modification that enables auto admin logon on a host. It leverages data from the Endpoint.Registry data model, specifically looking for changes to the \"AutoAdminLogon\" value within the \"SOFTWARE\\\\Microsoft\\\\Windows NT\\\\CurrentVersion\\\\Winlogon\" registry path. This activity is significant because it was observed in BlackMatter ransomware attacks to maintain access after a safe mode reboot, facilitating further encryption. If confirmed malicious…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://news.sophos.com/en-us/2021/08/09/blackmatter-ransomware-emerges-from-the-shadow-of-darkside/",
              "mitre": [
                "T1552.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Auto Admin Logon Registry Entry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Auto Admin Logon Registry Entry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
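\n\nIllustrative Detection Logic (Sketch)\n\nFor orientation only (the authoritative SPL ships with ESCU and is visible in Content Management), here is a minimal sketch of the Winlogon logic this detection describes, assuming CIM `Endpoint.Registry` field names. The value data is included in the output because a setting of 1 enables automatic logon:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\SOFTWARE\\\\Microsoft\\\\Windows NT\\\\CurrentVersion\\\\Winlogon*\"\n  AND Registry.registry_value_name=\"AutoAdminLogon\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",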
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.30",
              "n": "Batch File Write to System32",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a batch file (.bat) within the Windows system directory tree, specifically in the System32 or SysWOW64 folders. It leverages data from the Endpoint datamodel, focusing on process and filesystem events to identify this behavior. This activity is significant because writing batch files to system directories can be indicative of malicious intent, such as persistence mechanisms or system manipulation. If confirmed malicious, this could allow an attacker…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible for this search to generate a finding event for a batch file write to a path that includes the string \"system32\", but is not the actual Windows system directory. As such, you should confirm the path of the batch file identified by the search. In addition, a false positive may be generated by an administrator copying a legitimate batch file in this directory tree. You should confirm that the activity is legitimate and modify the search to add exclusions, as necessary.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1204.002"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Batch File Write to System32\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Batch File Write to System32\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible for this search to generate a finding event for a batch file write to a path that includes the string \"system32\", but is not the actual Windows system directory. As such, you should confirm the path of the batch file identified by the search. In addition, a false positive may be generated by an administrator copying a legitimate batch file in this directory tree. You should confirm that the activity is legitimate and modify the search to add exclusions, as necessary.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
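\n\nIllustrative Detection Logic (Sketch)\n\nThe packaged ESCU search in Content Management is the source of truth. The following is a minimal sketch of the filesystem logic this detection describes, assuming CIM `Endpoint.Filesystem` field names (not the packaged search). Because the path test is a substring match, any file_path containing \"system32\" will match; confirm it points at the real Windows directory during triage, per the known false positives above:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Filesystem\n  where Filesystem.file_name=\"*.bat\"\n  AND (Filesystem.file_path=\"*\\\\system32\\\\*\" OR Filesystem.file_path=\"*\\\\syswow64\\\\*\")\n  by Filesystem.dest Filesystem.user Filesystem.file_name Filesystem.file_path\n| `drop_dm_object_name(Filesystem)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",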
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.31",
              "n": "Bcdedit Command Back To Normal Mode Boot",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a suspicious `bcdedit` command that reconfigures a host from safe mode back to normal boot. This detection leverages Endpoint Detection and Response (EDR) data, focusing on command-line executions involving `bcdedit.exe` with specific parameters. This activity is significant as it may indicate the presence of ransomware, such as BlackMatter, which manipulates boot configurations to facilitate encryption processes. If confirmed malicious, this behav…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://news.sophos.com/en-us/2021/08/09/blackmatter-ransomware-emerges-from-the-shadow-of-darkside/",
              "mitre": [
                "T1490"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Bcdedit Command Back To Normal Mode Boot\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Bcdedit Command Back To Normal Mode Boot\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Bcdedit Command Back To Normal Mode Boot” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
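Because the `q` field above holds the generic Risk Investigation drilldown, the detection logic itself is easy to misplace. What follows is a hedged approximation of the ESCU search shape for this use case (not the shipped definition, which you should read in Content Management), assuming an accelerated, CIM-normalized Endpoint.Processes dataset and using `summariesonly=true` in place of the ESCU macros:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="bcdedit.exe"
        Processes.process="*deletevalue*" Processes.process="*safeboot*"
  by Processes.dest Processes.user Processes.parent_process_name
     Processes.process_name Processes.process
| convert ctime(firstTime) ctime(lastTime)
```

Deleting the `safeboot` value is how BlackMatter returns a host to normal boot after encrypting in safe mode, so a hit here often pairs with the companion safe-boot detection (10.3.36).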
            {
              "i": "10.3.32",
              "n": "BCDEdit Failure Recovery Modification",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows error recovery boot configurations using bcdedit.exe with flags such as \"recoveryenabled\" and \"no\". It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, parent processes, and command-line executions. This activity is significant because ransomware often disables recovery options to prevent system restoration, making it crucial for SOC analysts to investigate. If confirmed malicious, this could …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may modify the boot configuration.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1490/T1490.md#atomic-test-4---windows---disable-windows-recovery-console-repair",
              "mitre": [
                "T1490"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"BCDEdit Failure Recovery Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"BCDEdit Failure Recovery Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may modify the boot configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “BCDEdit Failure Recovery Modification” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
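As above, the `q` field shows the drilldown, not the detection. A sketch of the typical detection shape for this rule, under the same Endpoint.Processes assumptions; the wildcard patterns approximate the documented `recoveryenabled` / `no` flags and may need tightening in your environment:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="bcdedit.exe"
        Processes.process="*recoveryenabled*" Processes.process="* no*"
  by Processes.dest Processes.user Processes.parent_process_name
     Processes.process_name Processes.process
| convert ctime(firstTime) ctime(lastTime)
```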
            {
              "i": "10.3.33",
              "n": "BITS Job Persistence",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `bitsadmin.exe` to schedule a BITS job for persistence on an endpoint. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific command-line parameters such as `create`, `addfile`, and `resume`. This activity is significant because BITS jobs can be used by attackers to maintain persistence, download malicious payloads, or exfiltrate data. If confirmed malicious, this could allow an attacker to persist in the environment, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives will be present. Typically, applications will use `BitsAdmin.exe`. Any filtering should be done based on command-line arguments (legitimate applications) or parent process.",
              "refs": "https://attack.mitre.org/techniques/T1197/, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/bitsadmin, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1197/T1197.md#atomic-test-3---persist-download--execute, https://lolbas-project.github.io/lolbas/Binaries/Bitsadmin/",
              "mitre": [
                "T1197"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"BITS Job Persistence\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1197. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"BITS Job Persistence\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives will be present. Typically, applications will use `BitsAdmin.exe`. Any filtering should be done based on command-line arguments (legitimate applications) or parent process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “BITS Job Persistence” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
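To make the documented command-line parameters concrete, here is a hedged sketch of the detection pattern, assuming CIM-normalized Endpoint.Processes data. The shipped ESCU search may also match additional BITS flags such as `setnotifycmdline`; verify in Content Management:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="bitsadmin.exe"
        Processes.process IN ("*create*", "*addfile*", "*resume*")
  by Processes.dest Processes.user Processes.parent_process_name
     Processes.process_name Processes.process
| convert ctime(firstTime) ctime(lastTime)
```

Because legitimate software also drives BITS, the known-false-positive guidance above applies directly: filter on parent process or on command-line arguments that belong to approved installers.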
            {
              "i": "10.3.34",
              "n": "BITSAdmin Download File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `bitsadmin.exe` with the `transfer` parameter to download a remote object. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and command-line telemetry. This activity is significant because `bitsadmin.exe` can be exploited to download and execute malicious files without immediate detection. If confirmed malicious, an attacker could use this technique to download and execute payloads, potentially leading to code exec…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives, however it may be required to filter based on parent process name or network connection.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/8eb52117b748d378325f7719554a896e37bccec7/atomics/T1105/T1105.md#atomic-test-9---windows---bitsadmin-bits-download, https://github.com/redcanaryco/atomic-red-team/blob/bc705cb7aaa5f26f2d96585fac8e4c7052df0ff9/atomics/T1197/T1197.md, https://learn.microsoft.com/en-us/windows/win32/bits/bitsadmin-tool, https://thedfirreport.com/2021/03/29/sodinokibi-aka-revil-ransomware/",
              "mitre": [
                "T1197",
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"BITSAdmin Download File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1197, T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"BITSAdmin Download File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives, however it may be required to filter based on parent process name or network connection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “BITSAdmin Download File” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
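A hedged approximation of the `transfer` detection, again assuming Endpoint.Processes acceleration. Keeping the full `process` field in the output preserves the download URL for triage:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="bitsadmin.exe" Processes.process="*transfer*"
  by Processes.dest Processes.user Processes.parent_process_name
     Processes.process_name Processes.process
| convert ctime(firstTime) ctime(lastTime)
```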
            {
              "i": "10.3.35",
              "n": "CertUtil With Decode Argument",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of CertUtil.exe with the 'decode' argument, which may indicate an attempt to decode a previously encoded file, potentially containing malicious payloads. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving CertUtil.exe. This activity is significant because attackers often use CertUtil to decode malicious files downloaded from the internet, which are then executed to compromise the sy…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Typically seen used to `encode` files, but it is possible to see legitimate use of `decode`. Filter based on parent-child relationship, file paths, endpoint or user.",
              "refs": "https://attack.mitre.org/techniques/T1140/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1140/T1140.md, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/certutil, https://www.bleepingcomputer.com/news/security/certutilexe-could-allow-attackers-to-download-malware-while-bypassing-av/",
              "mitre": [
                "T1140"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"CertUtil With Decode Argument\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1140. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"CertUtil With Decode Argument\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Typically seen used to `encode` files, but it is possible to see legitimate use of `decode`. Filter based on parent-child relationship, file paths, endpoint or user.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “CertUtil With Decode Argument” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
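The decode pattern reduces to a single command-line match. An illustrative sketch under the same Endpoint.Processes assumptions; the wildcard covers both the `-decode` and `/decode` argument styles:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="certutil.exe" Processes.process="*decode*"
  by Processes.dest Processes.user Processes.parent_process_name
     Processes.process_name Processes.process
| convert ctime(firstTime) ctime(lastTime)
```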
            {
              "i": "10.3.36",
              "n": "Change To Safe Mode With Network Config",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a suspicious `bcdedit` command that configures a host to boot in safe mode with network support. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving `bcdedit.exe` with specific parameters. This activity is significant because it is a known technique used by BlackMatter ransomware to force a compromised host into safe mode for continued encryption. If confirmed malicious, this could allo…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://news.sophos.com/en-us/2021/08/09/blackmatter-ransomware-emerges-from-the-shadow-of-darkside/",
              "mitre": [
                "T1490"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Change To Safe Mode With Network Config\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Change To Safe Mode With Network Config\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Change To Safe Mode With Network Config” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
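This is the companion to 10.3.31: instead of deleting the `safeboot` value, the adversary sets it to `network`. A hedged sketch, again assuming an accelerated Endpoint.Processes data model rather than the shipped ESCU definition:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="bcdedit.exe" Processes.process="*set*"
        Processes.process="*safeboot*" Processes.process="*network*"
  by Processes.dest Processes.user Processes.parent_process_name
     Processes.process_name Processes.process
| convert ctime(firstTime) ctime(lastTime)
```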
            {
              "i": "10.3.37",
              "n": "CHCP Command Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the chcp.com utility, which is used to change the active code page of the console. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events. This activity is significant because it can indicate the presence of malware, such as IcedID, which uses this technique to determine the locale region, language, or country of the compromised host. If confirmed malicious, this could lead to further sy…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "other tools or script may used this to change code page to UTF-* or others",
              "refs": "https://ss64.com/nt/chcp.html, https://twitter.com/tccontre18/status/1419941156633329665?s=20",
              "mitre": [
                "T1059"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"CHCP Command Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"CHCP Command Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: other tools or script may used this to change code page to UTF-* or others\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “CHCP Command Execution” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.38",
              "n": "Check Elevated CMD using whoami",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of the \"whoami\" command with the \"/group\" flag, where the results are passed to the \"find\" command in order to look for a the string \"12288\". This string represents the SID of the group \"Mandatory Label\\High Mandatory Level\" effectively checking if the current process is running as a \"High\" integrity process or with Administrator privileges. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and command-line te…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The combination of these commands is unlikely to occur in a production environment. Any matches should be investigated.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1033"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Check Elevated CMD using whoami\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1033. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Check Elevated CMD using whoami\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The combination of these commands is unlikely to occur in a production environment. Any matches should be investigated.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Check Elevated CMD using whoami” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.39",
              "n": "Clear Unallocated Sector Using Cipher App",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `cipher.exe` with the `/w` flag to clear unallocated sectors on a disk. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, command-line arguments, and parent processes. This activity is significant because it is a technique used by ransomware to prevent forensic recovery of deleted files. If confirmed malicious, this action could hinder incident response efforts by making it impossible to recover critica…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrator may execute this app to manage disk",
              "refs": "https://unit42.paloaltonetworks.com/vatet-pyxie-defray777/3/, https://www.sophos.com/en-us/medialibrary/PDFs/technical-papers/sophoslabs-ransomware-behavior-report.pdf",
              "mitre": [
                "T1070.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Clear Unallocated Sector Using Cipher App\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Clear Unallocated Sector Using Cipher App\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrator may execute this app to manage disk\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU saved-search names typically carry an ESCU prefix and Rule suffix, hence the wildcards):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Clear Unallocated Sector Using Cipher App*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
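\n\nSuppression Lookup Illustration\n\nAs a sketch of the lookup-based suppression pattern described under Tuning, the search below drops risk events for entities listed in a hypothetical known_admin_hosts.csv lookup; the lookup name and its risk_object column are placeholders you would create to fit your environment:\n\n```spl\n| from datamodel Risk.All_Risk\n| search search_name=\"*Clear Unallocated Sector Using Cipher App*\"\n| lookup known_admin_hosts.csv risk_object OUTPUT risk_object as is_known_admin\n| where isnull(is_known_admin)\n```",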
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Clear Unallocated Sector Using Cipher App” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.40",
              "n": "Clop Common Exec Parameter",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of CLOP ransomware variants using specific arguments (\"runrun\" or \"temp.dat\") to trigger their malicious activities. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. Monitoring this activity is crucial as it indicates potential ransomware behavior, which can lead to file encryption on network shares or local machines. If confirmed malicious, this activity could re…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Operators can execute third party tools using these parameters.",
              "refs": "https://www.mandiant.com/resources/fin11-email-campaigns-precursor-for-ransomware-data-theft, https://blog.virustotal.com/2020/11/keep-your-friends-close-keep-ransomware.html",
              "mitre": [
                "T1204"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Clop Common Exec Parameter\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Clop Common Exec Parameter\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Operators can execute third party tools using these parameters.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Clop Common Exec Parameter” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.41",
              "n": "Clop Ransomware Known Service Name",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of a service with a known name used by CLOP ransomware for persistence and high-privilege code execution. It detects this activity by monitoring Windows Event Logs (EventCode 7045) for specific service names (\"SecurityCenterIBM\", \"WinCheckDRVs\"). This activity is significant because the creation of such services is a common tactic used by ransomware to maintain control over infected systems. If confirmed malicious, this could allow attackers to exec…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7045",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log System 7045 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mandiant.com/resources/fin11-email-campaigns-precursor-for-ransomware-data-theft, https://blog.virustotal.com/2020/11/keep-your-friends-close-keep-ransomware.html",
              "mitre": [
                "T1543"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Clop Ransomware Known Service Name\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7045. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Clop Ransomware Known Service Name\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Clop Ransomware Known Service Name” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.42",
              "n": "CMD Carry Out String Command Parameter",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `cmd.exe /c` to execute commands, a technique often employed by adversaries and malware to run batch commands or invoke other shells like PowerShell. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process metadata. Monitoring this activity is crucial as it can indicate script-based attacks or unauthorized command execution. If confirmed malicious, this behavior could lead to unauth…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE `process_cmd`\n        AND\n        Processes.process IN (\"*/c*\", \"*/k*\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `cmd_carry_out_string_command_parameter_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be high based on legitimate scripted code in any environment. Filter as needed.",
              "refs": "https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1059.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"CMD Carry Out String Command Parameter\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"CMD Carry Out String Command Parameter\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be high based on legitimate scripted code in any environment. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “CMD Carry Out String Command Parameter” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.43",
              "n": "CMD Echo Pipe - Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of named-pipe impersonation for privilege escalation, commonly associated with Cobalt Strike and similar frameworks. It detects command-line executions where `cmd.exe` uses `echo` to write to a named pipe, such as `cmd.exe /c echo 4sgryt3436 > \\\\.\\Pipe\\5erg53`. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and command-line telemetry. This activity is significant as it indicates potential privilege es…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. fidelity.",
              "refs": "https://redcanary.com/threat-detection-report/threats/cobalt-strike/, https://github.com/rapid7/meterpreter/blob/master/source/extensions/priv/server/elevate/namedpipe.c",
              "mitre": [
                "T1059.003",
                "T1543.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"CMD Echo Pipe - Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.003, T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"CMD Echo Pipe - Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. fidelity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “CMD Echo Pipe - Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.44",
              "n": "CMLUA Or CMSTPLUA UAC Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of COM objects like CMLUA or CMSTPLUA to bypass User Account Control (UAC). It leverages Sysmon EventCode 7 to identify the loading of specific DLLs (CMLUA.dll, CMSTPLUA.dll, CMLUAUTIL.dll) by processes not typically associated with these libraries. This activity is significant as it indicates an attempt to gain elevated privileges, a common tactic used by ransomware adversaries. If confirmed malicious, this could allow attackers to execute code with admini…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and imageloaded executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate windows application that are not on the list loading this dll. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1218/003/",
              "mitre": [
                "T1218.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"CMLUA Or CMSTPLUA UAC Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"CMLUA Or CMSTPLUA UAC Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate windows application that are not on the list loading this dll. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU saved-search names typically carry an ESCU prefix and Rule suffix, hence the wildcards):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*CMLUA Or CMSTPLUA UAC Bypass*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
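\n\nData-Presence Illustration\n\nBecause this detection depends on DLL-load telemetry rather than process launches, a quick check that Sysmon EventCode 7 is arriving with the DLLs of interest can save tuning time. A minimal sketch assuming the common Sysmon TA sourcetype name; adjust the index and sourcetype to your deployment:\n\n```spl\nindex=* sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=7 ImageLoaded IN (\"*CMLUA.dll\", \"*CMSTPLUA.dll\", \"*CMLUAUTIL.dll\")\n| stats count by host, Image, ImageLoaded\n```",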
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “CMLUA Or CMSTPLUA UAC Bypass” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.45",
              "n": "Common Ransomware Extensions",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to files with extensions commonly associated with ransomware. It leverages the Endpoint.Filesystem data model to identify changes in file extensions that match known ransomware patterns. This activity is significant because it suggests an attacker is attempting to encrypt or alter files, potentially leading to severe data loss and operational disruption. If confirmed malicious, this activity could result in the encryption of critical data, rendering i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible for a legitimate file with these extensions to be created. If this is a true ransomware attack, there will be a large number of files created with these extensions.",
              "refs": "https://github.com/splunk/security_content/issues/2448",
              "mitre": [
                "T1485"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Common Ransomware Extensions\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Common Ransomware Extensions\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible for a legitimate file with these extensions to be created. If this is a true ransomware attack, there will be a large number of files created with these extensions.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.46",
              "n": "Common Ransomware Notes",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Common Ransomware Notes. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| tstats `security_content_summariesonly`\n      count\n      min(_time) as firstTime\n      max(_time) as lastTime\n      values(Filesystem.user) as user\n      values(Filesystem.dest) as dest\n      values(Filesystem.file_path) as file_path\n    from datamodel=Endpoint.Filesystem\n    where [\n      | inputlookup ransomware_notes_lookup\n      | search status=true\n      | fields ransomware_notes\n      | dedup ransomware_notes\n      | rename ransomware_notes as Filesystem.file_name\n    ]\n    by Filesystem.action Filesystem.dest Filesystem.file_access_time\n       Filesystem.file_create_time Filesystem.file_hash\n       Filesystem.file_modify_time Filesystem.file_name\n       Filesystem.file_path Filesystem.file_acl Filesystem.file_size\n       Filesystem.process_guid Filesystem.process_id Filesystem.user\n       Filesystem.vendor_product\n    | `drop_dm_object_name(Filesystem)`\n    | `security_content_ctime(lastTime)`\n    | `security_content_ctime(firstTime)`\n    | `common_ransomware_notes_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1485"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Common Ransomware Notes\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Common Ransomware Notes\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Common Ransomware Notes” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.47",
              "n": "Conti Common Exec parameter",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of suspicious command-line arguments commonly associated with Conti ransomware, specifically targeting local drives and network shares for encryption. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. This activity is significant because it indicates a potential ransomware attack, which can lead to widespread data encryption and operational disruption. If confirme…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "3rd party tool may have commandline parameter that can trigger this detection.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.conti",
              "mitre": [
                "T1204"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Conti Common Exec parameter\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Conti Common Exec parameter\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: 3rd party tool may have commandline parameter that can trigger this detection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Conti Common Exec parameter” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.48",
              "n": "Create or delete windows shares using net exe",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation or deletion of Windows shares using the net.exe command. It leverages Endpoint Detection and Response (EDR) data to identify processes involving net.exe with actions related to share management. This activity is significant because it may indicate an attacker attempting to manipulate network shares for malicious purposes, such as data exfiltration, malware distribution, or establishing persistence. If confirmed malicious, this activity could lead to un…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators often leverage net.exe to create or delete network shares. You should verify that the activity was intentional and is legitimate.",
              "refs": "https://attack.mitre.org/techniques/T1070/005/",
              "mitre": [
                "T1070.005"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Create or delete windows shares using net exe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Create or delete windows shares using net exe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators often leverage net.exe to create or delete network shares. You should verify that the activity was intentional and is legitimate.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Create or delete windows shares using net exe” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.49",
              "n": "Create Remote Thread In Shell Application",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious process injection in command shell applications, specifically targeting `cmd.exe` and `powershell.exe`. It leverages Sysmon EventCode 8 to identify the creation of remote threads within these shell processes. This activity is significant because it is a common technique used by malware, such as IcedID, to inject malicious code and execute it within legitimate processes. If confirmed malicious, this behavior could allow an attacker to execute arbitrary co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 8",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://thedfirreport.com/2021/07/19/icedid-and-cobalt-strike-vs-antivirus/",
              "mitre": [
                "T1055"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Create Remote Thread In Shell Application\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 8. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Create Remote Thread In Shell Application\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Create Remote Thread In Shell Application” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.50",
              "n": "Creation of Shadow Copy",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of shadow copies using Vssadmin or Wmic. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. This activity is significant because creating shadow copies can be a precursor to ransomware attacks or data exfiltration, allowing attackers to bypass file locks and access sensitive data. If confirmed malicious, this behavior could enable attackers to maintain persistence, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrator usage of Vssadmin or Wmic will create false positives.",
              "refs": "https://2017.zeronights.org/wp-content/uploads/materials/ZN17_Kheirkhabarov_Hunting_for_Credentials_Dumping_in_Windows_Environment.pdf, https://media.defense.gov/2023/May/24/2003229517/-1/-1/0/CSA_Living_off_the_Land.PDF",
              "mitre": [
                "T1003.003"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Creation of Shadow Copy\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Creation of Shadow Copy\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrator usage of Vssadmin or Wmic will create false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Creation of Shadow Copy” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.51",
              "n": "Creation of Shadow Copy with wmic and powershell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of shadow copies using \"wmic\" or \"Powershell\" commands. It leverages the Endpoint.Processes data model in Splunk to identify processes where the command includes \"shadowcopy\" and \"create\". This activity is significant because it may indicate an attacker attempting to manipulate or access data in an unauthorized manner, potentially leading to data theft or manipulation. If confirmed malicious, this behavior could allow attackers to backup and exfiltrate…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrator usage of wmic to create a shadow copy.",
              "refs": "https://2017.zeronights.org/wp-content/uploads/materials/ZN17_Kheirkhabarov_Hunting_for_Credentials_Dumping_in_Windows_Environment.pdf, https://media.defense.gov/2023/May/24/2003229517/-1/-1/0/CSA_Living_off_the_Land.PDF",
              "mitre": [
                "T1003.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Creation of Shadow Copy with wmic and powershell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Creation of Shadow Copy with wmic and powershell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrator usage of wmic to create a shadow copy.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
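Illustrative sketch (not the packaged ESCU logic; view that in Content Management): the detection idea behind "Creation of Shadow Copy with wmic and powershell" can be approximated over an accelerated, CIM-normalized Endpoint.Processes data model, assuming the ESCU helper macros used elsewhere in this catalog (`security_content_summariesonly`, `drop_dm_object_name`, `security_content_ctime`) are installed:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where (Processes.process_name=wmic.exe OR Processes.process_name=powershell.exe)
        Processes.process="*shadowcopy*" Processes.process="*create*"
    by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process
| `drop_dm_object_name(Processes)`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```

Treat this as a lab-validation starting point, not a replacement for the shipped correlation search.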
            {
              "i": "10.3.52",
              "n": "Credential Dumping via Copy Command from Shadow Copy",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the copy command to dump credentials from a shadow copy. It leverages Endpoint Detection and Response (EDR) data to identify processes with command lines referencing critical files like \"sam\", \"security\", \"system\", and \"ntds.dit\" in system directories. This activity is significant as it indicates an attempt to extract credentials, a common technique for unauthorized access and privilege escalation. If confirmed malicious, this could lead to attackers gai…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://2017.zeronights.org/wp-content/uploads/materials/ZN17_Kheirkhabarov_Hunting_for_Credentials_Dumping_in_Windows_Environment.pdf",
              "mitre": [
                "T1003.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Credential Dumping via Copy Command from Shadow Copy\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Credential Dumping via Copy Command from Shadow Copy\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Credential Dumping via Copy Command from Shadow Copy” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
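Illustrative sketch for "Credential Dumping via Copy Command from Shadow Copy" (orientation only; the shipped ESCU search may differ): per the analytic description, it keys on command lines that reference a shadow-copy device path together with credential stores such as sam, security, system, or ntds.dit. Assumes accelerated Endpoint.Processes data and the ESCU helper macros:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where Processes.process="*HarddiskVolumeShadowCopy*"
        (Processes.process="*\\sam*" OR Processes.process="*\\security*" OR Processes.process="*\\system*" OR Processes.process="*ntds.dit*")
    by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process
| `drop_dm_object_name(Processes)`
```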
            {
              "i": "10.3.53",
              "n": "Credential Dumping via Symlink to Shadow Copy",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a symlink to a shadow copy, which may indicate credential dumping attempts. It leverages the Endpoint.Processes data model in Splunk to identify processes executing commands containing \"mklink\" and \"HarddiskVolumeShadowCopy\". This activity is significant because attackers often use this technique to manipulate or delete shadow copies, hindering system backup and recovery efforts. If confirmed malicious, this could prevent data restoration, complicat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://2017.zeronights.org/wp-content/uploads/materials/ZN17_Kheirkhabarov_Hunting_for_Credentials_Dumping_in_Windows_Environment.pdf",
              "mitre": [
                "T1003.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Credential Dumping via Symlink to Shadow Copy\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Credential Dumping via Symlink to Shadow Copy\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
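Illustrative sketch for "Credential Dumping via Symlink to Shadow Copy" (orientation only; the shipped ESCU search may differ): per the analytic description, it looks for command lines containing both mklink and HarddiskVolumeShadowCopy. Assumes accelerated Endpoint.Processes data and the ESCU helper macros:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where Processes.process="*mklink*" Processes.process="*HarddiskVolumeShadowCopy*"
    by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process
| `drop_dm_object_name(Processes)`
```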
            {
              "i": "10.3.54",
              "n": "CSC Net On The Fly Compilation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the .NET compiler csc.exe for on-the-fly compilation of potentially malicious .NET code. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific command-line patterns associated with csc.exe. This activity is significant because adversaries and malware often use this technique to evade detection by compiling malicious code at runtime. If confirmed malicious, this could allow attackers to execute arbitrary code, potential…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=csc.exe\n            OR\n            Processes.original_file_name=csc.exe\n        )\n        Processes.process = \"*/noconfig*\" Processes.process = \"*/fullpaths*\" Processes.process = \"*@*\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `csc_net_on_the_fly_compilation_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A network operator or systems administrator may utilize an automated powershell script taht execute .net code that may generate false positive. filter is needed.",
              "refs": "https://app.any.run/tasks/ad4c3cda-41f2-4401-8dba-56cc2d245488/, https://tccontre.blogspot.com/2019/06/maicious-macro-that-compile-c-code-as.html",
              "mitre": [
                "T1027.004"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"CSC Net On The Fly Compilation\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"CSC Net On The Fly Compilation\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A network operator or systems administrator may utilize an automated powershell script taht execute .net code that may generate false positive. filter is needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “CSC Net On The Fly Compilation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
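The known-false-positives note for "CSC Net On The Fly Compilation" says an automated administrative PowerShell script can trigger this hunt and that a filter is needed. The detection SPL above already ends in the `csc_net_on_the_fly_compilation_filter` macro for exactly this purpose. One hedged way to use it is to set the macro definition (Settings → Advanced search → Search macros) to exclude the known-good automation; the account name below is a hypothetical placeholder:

```spl
search NOT (parent_process_name=powershell.exe user=svc_build_automation)
```

Scoping the exclusion to a specific parent process and service account keeps coverage for every other entity, which is preferable to disabling the hunt.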
            {
              "i": "10.3.55",
              "n": "Delete ShadowCopy With PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of PowerShell to delete shadow copies via the WMIC PowerShell module. It leverages EventCode 4104 and searches for specific keywords like \"ShadowCopy,\" \"Delete,\" or \"Remove\" within the ScriptBlockText. This activity is significant because deleting shadow copies is a common tactic used by ransomware, such as DarkSide, to prevent data recovery. If confirmed malicious, this action could lead to irreversible data loss and hinder recovery efforts, significantly …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_id$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mandiant.com/resources/shining-a-light-on-darkside-ransomware-operations, https://www.techtarget.com/searchwindowsserver/tutorial/Set-up-PowerShell-script-block-logging-for-added-security",
              "mitre": [
                "T1490"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Delete ShadowCopy With PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Delete ShadowCopy With PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Delete ShadowCopy With PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
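Illustrative sketch for "Delete ShadowCopy With PowerShell" over raw script-block logs (orientation only; the shipped ESCU search may differ): per the analytic description, it keys on EventCode 4104 ScriptBlockText containing ShadowCopy together with Delete or Remove. The index and source below are assumptions; point them at wherever your PowerShell Operational log is ingested:

```spl
index=win source="XmlWinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4104
    ScriptBlockText="*ShadowCopy*" (ScriptBlockText="*Delete*" OR ScriptBlockText="*Remove*")
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID, ScriptBlockText
```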
            {
              "i": "10.3.56",
              "n": "Deleting Shadow Copies",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of shadow copies using the vssadmin.exe or wmic.exe utilities. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because deleting shadow copies is a common tactic used by attackers to prevent recovery and hide their tracks. If confirmed malicious, this action could hinder incident response efforts and allow attackers to maintain persistence and cover t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "vssadmin.exe and wmic.exe are standard applications shipped with modern versions of windows. They may be used by administrators to legitimately delete old backup copies, although this is typically rare.",
              "refs": "https://blogs.vmware.com/security/2022/10/lockbit-3-0-also-known-as-lockbit-black.html",
              "mitre": [
                "T1490"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Deleting Shadow Copies\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Deleting Shadow Copies\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: vssadmin.exe and wmic.exe are standard applications shipped with modern versions of windows. They may be used by administrators to legitimately delete old backup copies, although this is typically rare.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Deleting Shadow Copies” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
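Illustrative sketch for "Deleting Shadow Copies" (orientation only; the shipped ESCU search may differ): per the analytic description, it keys on vssadmin.exe deleting shadows or wmic.exe issuing shadowcopy delete. Assumes accelerated Endpoint.Processes data and the ESCU helper macros:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where (Processes.process_name=vssadmin.exe Processes.process="*delete*" Processes.process="*shadow*")
        OR (Processes.process_name=wmic.exe Processes.process="*shadowcopy*" Processes.process="*delete*")
    by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process
| `drop_dm_object_name(Processes)`
```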
            {
              "i": "10.3.57",
              "n": "Detect AzureHound Command-Line Arguments",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Invoke-AzureHound` command-line argument, commonly used by the AzureHound tool. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant because AzureHound is often used for reconnaissance in Azure environments, potentially exposing sensitive information. If confirmed malicious, this activity could allow an attacker to map out Azure Active Directory…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/software/S0521/, https://github.com/BloodHoundAD/BloodHound/tree/master/Collectors, https://posts.specterops.io/introducing-bloodhound-4-0-the-azure-update-9b2b26c5e350, https://github.com/BloodHoundAD/Legacy-AzureHound.ps1/blob/master/AzureHound.ps1",
              "mitre": [
                "T1069.001",
                "T1069.002",
                "T1087.001",
                "T1087.002",
                "T1482"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect AzureHound Command-Line Arguments\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001, T1069.002, T1087.001, T1087.002, T1482. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect AzureHound Command-Line Arguments\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect AzureHound Command-Line Arguments” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
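Illustrative sketch for "Detect AzureHound Command-Line Arguments" (orientation only; the shipped ESCU search may differ): per the analytic description, it keys on command lines containing Invoke-AzureHound. Assumes accelerated Endpoint.Processes data and the ESCU helper macros:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where Processes.process="*invoke-azurehound*"
    by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process
| `drop_dm_object_name(Processes)`
```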
            {
              "i": "10.3.58",
              "n": "Detect AzureHound File Modifications",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of specific AzureHound-related files, such as `*-azurecollection.zip` and various `.json` files, on disk. It leverages data from the Endpoint.Filesystem datamodel, focusing on file creation events with specific filenames. This activity is significant because AzureHound is a tool used to gather information about Azure environments, similar to SharpHound for on-premises Active Directory. If confirmed malicious, this activity could indicate an attacker is…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as the analytic is specific to a filename with extension .zip. Filter as needed.",
              "refs": "https://posts.specterops.io/introducing-bloodhound-4-0-the-azure-update-9b2b26c5e350, https://github.com/BloodHoundAD/Legacy-AzureHound.ps1/blob/master/AzureHound.ps1",
              "mitre": [
                "T1069.001",
                "T1069.002",
                "T1087.001",
                "T1087.002",
                "T1482"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect AzureHound File Modifications\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001, T1069.002, T1087.001, T1087.002, T1482. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect AzureHound File Modifications\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as the analytic is specific to a filename with extension .zip. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
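A hedged preview for the entry above ("Detect AzureHound File Modifications"): the SPL in `q` is the RBA drilldown, not the detection logic. To gauge local volume before enabling the packaged search, a minimal approximation against the CIM Endpoint.Filesystem dataset can help. Assumptions: Sysmon EventID 11 is CIM-mapped to Endpoint.Filesystem, and the `*azurehound*.zip` filename token is illustrative, not the exact ESCU filter.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Filesystem
  where Filesystem.file_name="*azurehound*.zip"
  by Filesystem.dest Filesystem.file_name Filesystem.file_path Filesystem.user
| `drop_dm_object_name(Filesystem)`
```

Treat any hits as context feeding the risk pipeline, not a verdict; the packaged Correlation Search in Content Management remains authoritative.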
            {
              "i": "10.3.59",
              "n": "Detect Excessive Account Lockouts From Endpoint",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects endpoints causing a high number of account lockouts within a short period. It leverages the Windows security event logs ingested into the `Change` datamodel, specifically under the `Account_Management` node, to identify and count lockout events. This activity is significant as it may indicate a brute-force attack or misconfigured system causing repeated authentication failures. If confirmed malicious, this behavior could lead to account lockouts, disrupting user ac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's possible that a widely used system, such as a kiosk, could cause a large number of account lockouts.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Excessive Account Lockouts From Endpoint\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Excessive Account Lockouts From Endpoint\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's possible that a widely used system, such as a kiosk, could cause a large number of account lockouts.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
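A minimal sketch of the idea behind "Detect Excessive Account Lockouts From Endpoint" (the `q` above is the drilldown). Assumptions: Windows security events are CIM-mapped into the Change data model's Account_Management node, lockouts normalize to `result="lockout"`, and the 10-minute window with a threshold of 5 are illustrative values, not ESCU defaults.

```spl
| tstats summariesonly=true count as lockouts values(All_Changes.user) as users
  from datamodel=Change.All_Changes
  where nodename=All_Changes.Account_Management AND All_Changes.result="lockout"
  by _time span=10m, All_Changes.dest
| where lockouts > 5
```

A kiosk or shared workstation (the documented false-positive case) will surface here quickly; raise the threshold or allowlist such hosts rather than disabling the detection.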
            {
              "i": "10.3.60",
              "n": "Detect HTML Help Renamed",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where hh.exe (HTML Help) has been renamed and is executing a Compiled HTML Help (CHM) file. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and original file names. This activity is significant because attackers can use renamed hh.exe to execute malicious scripts embedded in CHM files, potentially leading to code execution. If confirmed malicious, this technique could allow attackers to run arbitr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name!=hh.exe\n        AND\n        Processes.original_file_name=HH.EXE\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `detect_html_help_renamed_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely a renamed instance of hh.exe will be used legitimately, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1218/001/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.001/T1218.001.md, https://lolbas-project.github.io/lolbas/Binaries/Hh/",
              "mitre": [
                "T1218.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect HTML Help Renamed\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect HTML Help Renamed\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely a renamed instance of hh.exe will be used legitimately, filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect HTML Help Renamed” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
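If the "Detect HTML Help Renamed" hunt above keeps returning the same vetted hosts, a lookup-driven suppression keeps the hunt broad while hiding known noise. A minimal sketch; `hh_rename_allowlist` is a hypothetical lookup you would create with a single `dest` column.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name!=hh.exe AND Processes.original_file_name=HH.EXE
  by Processes.dest Processes.process_name Processes.process
| `drop_dm_object_name(Processes)`
| search NOT [| inputlookup hh_rename_allowlist | fields dest]
```

The same effect can be achieved by populating the `detect_html_help_renamed_filter` macro the packaged search already calls; the subsearch form is simply easier to test ad hoc.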
            {
              "i": "10.3.61",
              "n": "Detect HTML Help URL in Command Line",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of hh.exe (HTML Help) loading a Compiled HTML Help (CHM) file from a remote URL. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions containing URLs. This activity is significant as it can indicate an attempt to execute malicious scripts via CHM files, potentially leading to unauthorized code execution. If confirmed malicious, this could allow an attacker to run scripts using engines lik…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate applications may retrieve a CHM remotely, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1218/001/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.001/T1218.001.md, https://lolbas-project.github.io/lolbas/Binaries/Hh/, https://blog.sevagas.com/?Hacking-around-HTA-files, https://gist.github.com/mgeeky/cce31c8602a144d8f2172a73d510e0e7, https://web.archive.org/web/20220119133748/https://cyberforensicator.com/2019/01/20/silence-dissecting-malicious-chm-files-and-performing-forensic-analysis/",
              "mitre": [
                "T1218.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect HTML Help URL in Command Line\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect HTML Help URL in Command Line\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate applications may retrieve a CHM remotely, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect HTML Help URL in Command Line” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
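For "Detect HTML Help URL in Command Line" above, the detection concept is hh.exe launched with a URL on its command line. A rough, hedged approximation for previewing local volume; the packaged ESCU search is authoritative and carries more field coverage.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name=hh.exe
    AND (Processes.process="*http://*" OR Processes.process="*https://*")
  by Processes.dest Processes.user Processes.process
| `drop_dm_object_name(Processes)`
```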
            {
              "i": "10.3.62",
              "n": "Detect HTML Help Using InfoTech Storage Handlers",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of hh.exe (HTML Help) using InfoTech Storage Handlers to load Windows script code from a Compiled HTML Help (CHM) file. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant because it can be used to execute malicious scripts embedded within CHM files, potentially leading to code execution. If confirmed malicious, this technique could allow a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is rare to see instances of InfoTech Storage Handlers being used, but it does happen in some legitimate instances. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1218/001/, https://www.kb.cert.org/vuls/id/851869, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.001/T1218.001.md, https://lolbas-project.github.io/lolbas/Binaries/Hh/, https://gist.github.com/mgeeky/cce31c8602a144d8f2172a73d510e0e7, https://web.archive.org/web/20220119133748/https://cyberforensicator.com/2019/01/20/silence-dissecting-malicious-chm-files-and-performing-forensic-analysis/",
              "mitre": [
                "T1218.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect HTML Help Using InfoTech Storage Handlers\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect HTML Help Using InfoTech Storage Handlers\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is rare to see instances of InfoTech Storage Handlers being used, but it does happen in some legitimate instances. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect HTML Help Using InfoTech Storage Handlers” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
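For "Detect HTML Help Using InfoTech Storage Handlers" above, the commonly documented handler prefixes are `its:` and `mk:@MSITStore:` (see the LOLBAS reference in `refs`). A hedged approximation for local previewing; verify against the packaged search, which may match additional forms.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name=hh.exe
    AND (Processes.process="*its:*" OR Processes.process="*mk:@MSITStore:*")
  by Processes.dest Processes.user Processes.process
| `drop_dm_object_name(Processes)`
```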
            {
              "i": "10.3.63",
              "n": "Detect mshta inline hta execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of \"mshta.exe\" with inline protocol handlers such as \"JavaScript\", \"VBScript\", and \"About\". It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line arguments and process details. This activity is significant because mshta.exe can be exploited to execute malicious scripts, potentially leading to unauthorized code execution. If confirmed malicious, this could allow an attacker to execute arbitrary code, escalate pri…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate applications may exhibit this behavior, triggering a false positive.",
              "refs": "https://github.com/redcanaryco/AtomicTestHarnesses, https://redcanary.com/blog/introducing-atomictestharnesses/, https://learn.microsoft.com/en-us/windows/win32/search/-search-3x-wds-extidx-prot-implementing",
              "mitre": [
                "T1218.005"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect mshta inline hta execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect mshta inline hta execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate applications may exhibit this behavior, triggering a false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect mshta inline hta execution” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
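For "Detect mshta inline hta execution" above, the inline protocol handlers named in the description (JavaScript, VBScript, About) appear directly on the command line. A hedged approximation for previewing volume; the packaged ESCU search remains authoritative.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name=mshta.exe
    AND (Processes.process="*javascript:*" OR Processes.process="*vbscript:*" OR Processes.process="*about:*")
  by Processes.dest Processes.user Processes.process
| `drop_dm_object_name(Processes)`
```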
            {
              "i": "10.3.64",
              "n": "Detect mshta renamed",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where mshta.exe has been renamed and executed. It leverages Endpoint Detection and Response (EDR) data, specifically focusing on the original file name field to detect discrepancies. This activity is significant because renaming mshta.exe is a common tactic used by attackers to evade detection and execute malicious scripts. If confirmed malicious, this could allow an attacker to execute arbitrary code, potentially leading to system compromise, data exf…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name!=mshta.exe\n        AND\n        Processes.original_file_name=MSHTA.EXE\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `detect_mshta_renamed_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate applications may use a moved copy of mshta.exe, but never renamed, triggering a false positive.",
              "refs": "https://github.com/redcanaryco/AtomicTestHarnesses, https://redcanary.com/blog/introducing-atomictestharnesses/",
              "mitre": [
                "T1218.005"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect mshta renamed\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect mshta renamed\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate applications may use a moved copy of mshta.exe, but never renamed, triggering a false positive.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect mshta renamed” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
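"Detect mshta renamed" above depends entirely on `original_file_name` (Sysmon's OriginalFileName) being populated. Before scheduling the hunt, check field coverage. A minimal sketch; the sourcetype shown is a common Sysmon TA default and may differ in your deployment.

```spl
index=* sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1
| stats count as total count(eval(isnotnull(OriginalFileName))) as has_ofn
| eval pct_populated=round(100*has_ofn/total, 1)
```

If `pct_populated` is low, renamed binaries will slip past both this hunt and the related TTP detections above.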
            {
              "i": "10.3.65",
              "n": "Detect MSHTA Url in Command Line",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Microsoft HTML Application Host (mshta.exe) to make remote HTTP or HTTPS connections. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line arguments containing URLs. This activity is significant because adversaries often use mshta.exe to download and execute remote .hta files, bypassing security controls. If confirmed malicious, this behavior could allow attackers to execute arbitrary code, potentially leading to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible legitimate applications may perform this behavior and will need to be filtered.",
              "refs": "https://github.com/redcanaryco/AtomicTestHarnesses, https://redcanary.com/blog/introducing-atomictestharnesses/, https://learn.microsoft.com/en-us/windows/win32/search/-search-3x-wds-extidx-prot-implementing, https://denwp.com/dissecting-lumma-malware/, https://www.mcafee.com/blogs/other-blogs/mcafee-labs/behind-the-captcha-a-clever-gateway-of-malware/",
              "mitre": [
                "T1218.005"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect MSHTA Url in Command Line\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect MSHTA Url in Command Line\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible legitimate applications may perform this behavior and will need to be filtered.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect MSHTA Url in Command Line” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.66",
              "n": "Detect Path Interception By Creation Of program exe",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of a program executable in an unquoted service path, a common technique for privilege escalation. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where the parent process is 'services.exe'. This activity is significant because unquoted service paths can be exploited by attackers to execute arbitrary code with elevated privileges. If confirmed malicious, this could allow an attacker to gain hig…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://medium.com/@SumitVerma101/windows-privilege-escalation-part-1-unquoted-service-path-c7a011a8d8ae",
              "mitre": [
                "T1574.009"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Path Interception By Creation Of program exe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.009. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Path Interception By Creation Of program exe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
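              "_splSketch": "Hedged sketch, not the packaged ESCU logic (view the authoritative search in Content Management): the rule plausibly inspects processes spawned by services.exe whose image path contains a space, the unquoted-service-path precondition. The quoting check is deliberately simplified here:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=services.exe\n  by Processes.dest Processes.user Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| search process=\"* *.exe\"\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nField names and the simplified path filter are assumptions inferred from this entry's description; the packaged rule also verifies that the path is unquoted.",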
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect Path Interception By Creation Of program exe” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.67",
              "n": "Detect Prohibited Applications Spawning cmd exe",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects executions of cmd.exe spawned by processes that are commonly abused by attackers and do not typically launch cmd.exe. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process GUID, process name, parent process, and command-line executions. This activity is significant because it may indicate an attempt to execute unauthorized commands or scripts, often a precursor to further malicious actions. If confirmed malicious, this behavior co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count values(Processes.process)\n    as process min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where (Processes.process_name=cmd.exe OR Processes.original_file_name=Cmd.Exe)\n    by Processes.action Processes.dest Processes.original_file_name\n       Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n       Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n       Processes.process Processes.process_exec Processes.process_guid Processes.process_hash\n       Processes.process_id Processes.process_integrity_level Processes.process_name Processes.process_path\n       Processes.user Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    |search [\n        | inputlookup prohibited_apps_launching_cmd\n        | rename prohibited_applications as parent_process_name\n        | eval parent_process_name=\"*\" . parent_process_name\n        | table parent_process_name\n      ]\n    | `detect_prohibited_applications_spawning_cmd_exe_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There are circumstances where an application may legitimately execute and interact with the Windows command-line interface. Investigate and modify the lookup file, as appropriate.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1059.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Prohibited Applications Spawning cmd exe\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Prohibited Applications Spawning cmd exe\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There are circumstances where an application may legitimately execute and interact with the Windows command-line interface. Investigate and modify the lookup file, as appropriate.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect Prohibited Applications Spawning cmd exe” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.68",
              "n": "Detect PsExec With accepteula Flag",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `PsExec.exe` with the `accepteula` flag in the command line. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. This activity is significant because PsExec is commonly used by threat actors to execute code on remote systems, and the `accepteula` flag indicates first-time usage, which could signify initial compromise. If confirmed malicious, this activity could allow…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators can leverage PsExec for accessing remote systems and might pass `accepteula` as an argument if they are running this tool for the first time. However, it is not likely that you'd see multiple occurrences of this event on a machine",
              "refs": "https://media.defense.gov/2023/May/24/2003229517/-1/-1/0/CSA_Living_off_the_Land.PDF",
              "mitre": [
                "T1021.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect PsExec With accepteula Flag\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect PsExec With accepteula Flag\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators can leverage PsExec for accessing remote systems and might pass `accepteula` as an argument if they are running this tool for the first time. However, it is not likely that you'd see multiple occurrences of this event on a machine\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
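              "_splSketch": "Hedged sketch, not the packaged ESCU logic (view the authoritative search in Content Management): the underlying detection plausibly matches PsExec executions whose command line carries the accepteula switch, e.g.:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where (Processes.process_name=psexec.exe OR Processes.process_name=psexec64.exe) Processes.process=\"*accepteula*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nThe psexec64.exe variant and the wildcarded flag match are assumptions inferred from this entry's description.",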
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect PsExec With accepteula Flag” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.69",
              "n": "Detect RClone Command-Line Usage",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the usage of `rclone.exe` with specific command-line arguments indicative of file transfer activities. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process details. This activity is significant as `rclone.exe` is often used by adversaries for data exfiltration, especially during ransomware attacks. If confirmed malicious, this behavior could lead to unauthorized data transfer, resulting in data breache…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://redcanary.com/blog/rclone-mega-extortion/, https://www.mandiant.com/resources/shining-a-light-on-darkside-ransomware-operations, https://thedfirreport.com/2021/03/29/sodinokibi-aka-revil-ransomware/, https://thedfirreport.com/2021/11/29/continuing-the-bazar-ransomware-story/",
              "mitre": [
                "T1020"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect RClone Command-Line Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1020. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect RClone Command-Line Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
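              "_splSketch": "Hedged sketch, not the packaged ESCU logic (view the authoritative search in Content Management): the underlying detection plausibly matches rclone.exe with transfer-style arguments; the argument list below is illustrative, not exhaustive:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=rclone.exe Processes.process IN (\"*copy*\", \"*sync*\", \"*--config*\", \"*--progress*\", \"*ftp*\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nAll constants here are assumptions inferred from this entry's description.",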
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect RClone Command-Line Usage” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.70",
              "n": "Detect Regasm Spawning a Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects regasm.exe spawning a child process. This behavior is identified using data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where regasm.exe is the parent process. This activity is significant because regasm.exe spawning a process is rare and can indicate an attempt to bypass application control mechanisms. If confirmed malicious, this could allow an attacker to execute arbitrary code, potentially leading to privilege escalati…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, limited instances of regasm.exe or regsvcs.exe may cause a false positive. Filter based endpoint usage, command line arguments, or process lineage.",
              "refs": "https://attack.mitre.org/techniques/T1218/009/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.009/T1218.009.md, https://lolbas-project.github.io/lolbas/Binaries/Regsvcs/, https://lolbas-project.github.io/lolbas/Binaries/Regasm/",
              "mitre": [
                "T1218.009"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Regasm Spawning a Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.009. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Regasm Spawning a Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, limited instances of regasm.exe or regsvcs.exe may cause a false positive. Filter based endpoint usage, command line arguments, or process lineage.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
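              "_splSketch": "Hedged sketch, not the packaged ESCU logic (view the authoritative search in Content Management): the underlying detection plausibly keys on regasm.exe as the parent of any child process, e.g.:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=regasm.exe\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nField names are assumptions inferred from this entry's description.",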
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect Regasm Spawning a Process” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.71",
              "n": "Detect Regasm with no Command Line Arguments",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances of regasm.exe running without command line arguments. This behavior typically indicates process injection, where another process manipulates regasm.exe. The detection leverages Endpoint Detection and Response (EDR) data, focusing on process names and command-line executions. This activity is significant as it may signal an attempt to evade detection or execute malicious code. If confirmed malicious, attackers could achieve code execution, potentially lead…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, limited instances of regasm.exe or may cause a false positive. Filter based endpoint usage, command line arguments, or process lineage.",
              "refs": "https://attack.mitre.org/techniques/T1218/009/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.009/T1218.009.md, https://lolbas-project.github.io/lolbas/Binaries/Regasm/",
              "mitre": [
                "T1218.009"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Regasm with no Command Line Arguments\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.009. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Regasm with no Command Line Arguments\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, limited instances of regasm.exe or may cause a false positive. Filter based endpoint usage, command line arguments, or process lineage.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
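              "_splSketch": "Hedged sketch, not the packaged ESCU logic (view the authoritative search in Content Management): the underlying detection plausibly matches regasm.exe command lines that end immediately after the binary name; the regex tolerance below is illustrative:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=regasm.exe\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| regex process=\"(?i)regasm\\.exe.{0,4}$\"\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nThe regex and field names are assumptions inferred from this entry's description.",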
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect Regasm with no Command Line Arguments” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.72",
              "n": "Detect Regsvcs Spawning a Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies regsvcs.exe spawning a child process. This behavior is detected using Endpoint Detection and Response (EDR) telemetry, focusing on process creation events where the parent process is regsvcs.exe. This activity is significant because regsvcs.exe rarely spawns child processes, and such behavior can indicate an attempt to bypass application control mechanisms. If confirmed malicious, this could allow an attacker to execute arbitrary code, potentially leading to pri…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, limited instances of regasm.exe or regsvcs.exe may cause a false positive. Filter based endpoint usage, command line arguments, or process lineage.",
              "refs": "https://attack.mitre.org/techniques/T1218/009/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.009/T1218.009.md, https://lolbas-project.github.io/lolbas/Binaries/Regsvcs/",
              "mitre": [
                "T1218.009"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Regsvcs Spawning a Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.009. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Regsvcs Spawning a Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, limited instances of regasm.exe or regsvcs.exe may cause a false positive. Filter based endpoint usage, command line arguments, or process lineage.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
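\n\nIllustrative Detection Logic\n\nThe packaged ESCU correlation search remains the source of truth (see \"About the SPL Query Shown Above\"). The sketch below is a rough approximation of the behavior this detection targets (child processes spawned by regsvcs.exe or regasm.exe), assuming CIM-normalized Endpoint data. The suppression lookup regsvcs_known_good_children.csv is a hypothetical placeholder you would create for lineage-based tuning:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name IN (\"regsvcs.exe\", \"regasm.exe\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| search NOT [| inputlookup regsvcs_known_good_children.csv | fields process_name]\n```\n\nMove confirmed-benign child processes into the lookup rather than editing the detection itself.",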
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect Regsvcs Spawning a Process” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.73",
              "n": "Detect Regsvr32 Application Control Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the abuse of Regsvr32.exe to proxy execution of malicious code, specifically detecting the loading of \"scrobj.dll\" by Regsvr32.exe. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events and command-line executions. This activity is significant because Regsvr32.exe is a trusted, signed Microsoft binary, often used in \"Squiblydoo\" attacks to bypass application control mechanisms. If confirmed malicious…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives related to third party software registering .DLL's.",
              "refs": "https://attack.mitre.org/techniques/T1218/010/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.010/T1218.010.md, https://lolbas-project.github.io/lolbas/Binaries/Regsvr32/, https://support.microsoft.com/en-us/topic/how-to-use-the-regsvr32-tool-and-troubleshoot-regsvr32-error-messages-a98d960a-7392-e6fe-d90a-3f4e0cb543e5",
              "mitre": [
                "T1218.010"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Regsvr32 Application Control Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.010. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Regsvr32 Application Control Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives related to third party software registering .DLL's.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
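\n\nIllustrative Detection Logic\n\nThe packaged ESCU correlation search remains the source of truth (see \"About the SPL Query Shown Above\"). The sketch below is a rough approximation of the \"Squiblydoo\" pattern this detection targets (regsvr32.exe referencing scrobj.dll on the command line), assuming CIM-normalized Endpoint data; it is not the shipped search:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=regsvr32.exe Processes.process=\"*scrobj.dll*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",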
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect Regsvr32 Application Control Bypass” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.74",
              "n": "Detect Remote Access Software Usage Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of known remote access software within the environment. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and parent processes mapped to the Endpoint data model. We then compare with with a list of known remote access software shipped as a lookup file - remote_access_software. This activity is significant as adversaries often use remote access tools like AnyDesk, GoToMyPC, LogMeIn, and TeamViewer to maintai…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "# Shared SPL: intentional — see UC-10.7.294\n| from datamodel:Endpoint.Processes| search dest=$dest$ process_name=$process_name$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that legitimate remote access software is used within the environment. Ensure that the lookup is reviewed and updated with any additional remote access software that is used within the environment. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content",
              "refs": "https://attack.mitre.org/techniques/T1219/, https://thedfirreport.com/2022/08/08/bumblebee-roasts-its-way-to-domain-admin/, https://thedfirreport.com/2022/11/28/emotet-strikes-again-lnk-file-leads-to-domain-wide-ransomware/",
              "mitre": [
                "T1219"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Remote Access Software Usage Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1219. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Remote Access Software Usage Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that legitimate remote access software is used within the environment. Ensure that the lookup is reviewed and updated with any additional remote access software that is used within the environment. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
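\n\nIllustrative Detection Logic\n\nThe packaged ESCU correlation search remains the source of truth; the SPL shown above for this use case is the shared Endpoint.Processes drilldown. The sketch below approximates the lookup-driven matching this detection performs against the ESCU-shipped remote_access_software lookup, assuming CIM-normalized Endpoint data. The lookup field names remote_utility and isutility are assumptions that may differ in your ESCU version:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.user Processes.process_name\n| `drop_dm_object_name(Processes)`\n| lookup remote_access_software remote_utility AS process_name OUTPUT isutility description\n| search isutility=true\n```",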
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.75",
              "n": "Detect Renamed RClone",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a renamed `rclone.exe` process, which is commonly used for data exfiltration to remote destinations. This detection leverages Endpoint Detection and Response (EDR) telemetry, focusing on process names and original file names that do not match. This activity is significant because ransomware groups often use RClone to exfiltrate sensitive data. If confirmed malicious, this behavior could indicate an ongoing data exfiltration attempt, potentially lea…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.original_file_name=rclone.exe\n            AND\n            Processes.process_name!=rclone.exe\n        )\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `detect_renamed_rclone_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as this analytic identifies renamed instances of `rclone.exe`. Filter as needed if there is a legitimate business use case.",
              "refs": "https://redcanary.com/blog/rclone-mega-extortion/, https://www.mandiant.com/resources/shining-a-light-on-darkside-ransomware-operations, https://thedfirreport.com/2021/03/29/sodinokibi-aka-revil-ransomware/",
              "mitre": [
                "T1020"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Renamed RClone\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1020. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Renamed RClone\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as this analytic identifies renamed instances of `rclone.exe`. Filter as needed if there is a legitimate business use case.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect Renamed RClone” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.76",
              "n": "Detect Renamed WinRAR",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where `WinRAR.exe` has been renamed and executed. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and original file names within the Endpoint data model. This activity is significant because renaming executables is a common tactic used by attackers to evade detection. If confirmed malicious, this could indicate an attempt to bypass security controls, potentially leading to unauthorized data extraction or f…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.original_file_name=WinRAR.exe (Processes.process_name!=rar.exe\n        AND\n        Processes.process_name!=winrar.exe)\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `detect_renamed_winrar_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. instances of WinRAR.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1560.001/T1560.001.md",
              "mitre": [
                "T1560.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Renamed WinRAR\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1560.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Renamed WinRAR\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. instances of WinRAR.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.77",
              "n": "Detect RTLO In Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the abuse of the right-to-left override (RTLO) character (U+202E) in process names. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line data. This activity is significant because adversaries use the RTLO character to disguise malicious files or commands, making them appear benign. If confirmed malicious, this technique can allow attackers to execute harmful code undetected, potentially leading …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Implementation in regions that use right to left in native language.",
              "refs": "https://attack.mitre.org/techniques/T1036/002/, https://resources.infosecinstitute.com/topic/spoof-using-right-to-left-override-rtlo-technique-2/, https://www.trendmicro.com/en_us/research/17/f/following-trail-blacktech-cyber-espionage-campaigns.html",
              "mitre": [
                "T1036.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect RTLO In Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect RTLO In Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Implementation in regions that use right to left in native language.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
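\n\nIllustrative Detection Logic\n\nThe packaged ESCU correlation search remains the source of truth (see \"About the SPL Query Shown Above\"). The sketch below shows one way to surface the RTLO character (U+202E) in process command lines, assuming CIM-normalized Endpoint data; it is an approximation, not the shipped search:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  by Processes.dest Processes.user Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| regex process=\"\\x{202E}\"\n```",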
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect RTLO In Process” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.78",
              "n": "Detect Rundll32 Inline HTA Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of \"rundll32.exe\" with inline protocol handlers such as \"JavaScript\", \"VBScript\", and \"About\". This behavior is identified using Endpoint Detection and Response (EDR) telemetry, focusing on command-line arguments. This activity is significant as it is often associated with fileless malware or application whitelisting bypass techniques. If confirmed malicious, this could allow an attacker to execute arbitrary code, bypass security controls, and maintai…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate applications may exhibit this behavior, triggering a false positive.",
              "refs": "https://github.com/redcanaryco/AtomicTestHarnesses, https://redcanary.com/blog/introducing-atomictestharnesses/, https://learn.microsoft.com/en-us/windows/win32/search/-search-3x-wds-extidx-prot-implementing",
              "mitre": [
                "T1218.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Rundll32 Inline HTA Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Rundll32 Inline HTA Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate applications may exhibit this behavior, triggering a false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
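\n\nIllustrative Detection Logic\n\nThe packaged ESCU correlation search remains the source of truth (see \"About the SPL Query Shown Above\"). The sketch below approximates the inline protocol-handler pattern this detection targets, assuming CIM-normalized Endpoint data; it is not the shipped search:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=rundll32.exe\n    (Processes.process=\"*vbscript:*\" OR Processes.process=\"*javascript:*\" OR Processes.process=\"*about:*\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",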
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect Rundll32 Inline HTA Execution” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.79",
              "n": "Detect SharpHound File Modifications",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of files typically associated with SharpHound, a reconnaissance tool used for gathering domain and trust data. It leverages file modification events from the Endpoint.Filesystem data model, focusing on default file naming patterns like `*_BloodHound.zip` and various JSON files. This activity is significant as it indicates potential domain enumeration, which is a precursor to more targeted attacks. If confirmed malicious, an attacker could gain detailed…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as the analytic is specific to a filename with extension .zip. Filter as needed.",
              "refs": "https://attack.mitre.org/software/S0521/, https://thedfirreport.com/?s=bloodhound, https://github.com/BloodHoundAD/BloodHound/tree/master/Collectors, https://github.com/BloodHoundAD/SharpHound3, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1059.001/T1059.001.md#atomic-test-2---run-bloodhound-from-local-disk",
              "mitre": [
                "T1069.001",
                "T1069.002",
                "T1087.001",
                "T1087.002",
                "T1482"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect SharpHound File Modifications\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001, T1069.002, T1087.001, T1087.002, T1482. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect SharpHound File Modifications\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as the analytic is specific to a filename with extension .zip. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
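\n\nIllustrative Detection Logic\n\nThe packaged ESCU correlation search remains the source of truth (see \"About the SPL Query Shown Above\"). The sketch below approximates the default SharpHound output naming this detection targets, assuming the Endpoint.Filesystem data model is populated from Sysmon EventID 11; it is not the shipped search:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Filesystem\n  where Filesystem.file_name=\"*_BloodHound.zip\"\n  by Filesystem.dest Filesystem.user Filesystem.file_name Filesystem.file_path\n| `drop_dm_object_name(Filesystem)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",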
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.80",
              "n": "Detect SharpHound Usage",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the usage of the SharpHound binary by identifying its original filename, `SharpHound.exe`, and the process name. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process metadata and command-line executions. SharpHound is a tool used for Active Directory enumeration, often by attackers during the reconnaissance phase. If confirmed malicious, this activity could allow an attacker to map out the network, identify high-value…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as this is specific to a file attribute not used by anything else. Filter as needed.",
              "refs": "https://attack.mitre.org/software/S0521/, https://thedfirreport.com/?s=bloodhound, https://github.com/BloodHoundAD/BloodHound/tree/master/Collectors, https://github.com/BloodHoundAD/SharpHound3, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1059.001/T1059.001.md#atomic-test-2---run-bloodhound-from-local-disk",
              "mitre": [
                "T1069.001",
                "T1069.002",
                "T1087.001",
                "T1087.002",
                "T1482"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect SharpHound Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001, T1069.002, T1087.001, T1087.002, T1482. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect SharpHound Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as this is specific to a file attribute not used by anything else. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
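\n\nOptional ad-hoc triage sketch (illustrative only, not the packaged ESCU logic; it assumes the CIM Endpoint.Processes data model is populated and that the binary keeps its original metadata):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.original_file_name=\"SharpHound.exe\" OR Processes.process_name=\"sharphound.exe\"\n  by Processes.dest Processes.user Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nA renamed binary with scrubbed metadata will evade a name-based sketch like this; the ESCU detection and its Analytic Story provide broader coverage.",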
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detect SharpHound Usage” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.81",
              "n": "Detect suspicious processnames using pretrained model in DSDL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious process names using a pre-trained Deep Learning model. It leverages Endpoint Detection and Response (EDR) telemetry to analyze process names and predict their likelihood of being malicious. The model, a character-level Recurrent Neural Network (RNN), classifies process names as benign or suspicious based on a threshold score of 0.5. This detection is significant as it helps identify malware, such as TrickBot, which often uses randomly generated filena…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | rename process_name as text\n    | fields text, parent_process_name, process, user, dest\n    | apply detect_suspicious_processnames_using_pretrained_model_in_dsdl\n    | rename predicted_label as is_suspicious_score\n    | rename text as process_name\n    | where is_suspicious_score > 0.5\n    | `detect_suspicious_processnames_using_pretrained_model_in_dsdl_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if a suspicious processname is similar to a benign processname.",
              "refs": "https://www.cisa.gov/uscert/ncas/alerts/aa20-302a, https://www.splunk.com/en_us/blog/security/random-words-on-entropy-and-dns.html",
              "mitre": [
                "T1059"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect suspicious processnames using pretrained model in DSDL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect suspicious processnames using pretrained model in DSDL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if a suspicious processname is similar to a benign processname.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
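\n\nOptional pre-flight sketch before enabling this DSDL-backed detection (illustrative; it only confirms that Endpoint.Processes is populated with process names for the model to score, while the scoring itself happens via the apply command in the main SPL):\n\n```spl\n| tstats summariesonly=true count\n  from datamodel=Endpoint.Processes\n  by Processes.process_name\n| `drop_dm_object_name(Processes)`\n| sort -count\n| head 20\n```\n\nIf false positives persist after deployment, raising the 0.5 score threshold (for example to 0.7) trades recall for precision; tune against a sample of known-good hosts first.",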
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.82",
              "n": "Detect Use of cmd exe to Launch Script Interpreters",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of cscript.exe or wscript.exe processes initiated by cmd.exe. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and parent processes within the Endpoint data model. This activity is significant as it may indicate script-based attacks or administrative actions that could be leveraged for malicious purposes. If confirmed malicious, this behavior could allow attackers to execute scripts, potentially leading to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection may also be triggered by legitimate applications and numerous service accounts, which often end with a $ sign. To manage this, it's advised to check the service account's activities and, if they are valid, modify the filter macro to exclude them.",
              "refs": "https://attack.mitre.org/techniques/T1059/, https://redcanary.com/threat-detection-report/techniques/windows-command-shell/",
              "mitre": [
                "T1059.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Use of cmd exe to Launch Script Interpreters\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Use of cmd exe to Launch Script Interpreters\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection may also be triggered by legitimate applications and numerous service accounts, which often end with a $ sign. To manage this, it's advised to check the service account's activities and, if they are valid, modify the filter macro to exclude them.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
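\n\nOptional ad-hoc triage sketch (illustrative only, not the packaged ESCU logic; assumes the CIM Endpoint.Processes data model is populated):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=\"cmd.exe\" AND (Processes.process_name=\"cscript.exe\" OR Processes.process_name=\"wscript.exe\")\n  by Processes.dest Processes.user Processes.parent_process Processes.process\n| `drop_dm_object_name(Processes)`\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nExpect legitimate administrative scripting to surface here; use the filter macro described under Tuning to exclude verified service accounts.",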
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.83",
              "n": "Detection of tools built by NirSoft",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of tools built by NirSoft by detecting specific command-line arguments such as \"/stext\" and \"/scomma\". It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, parent processes, and command-line executions. This activity is significant because NirSoft tools, while legitimate, can be exploited by attackers for malicious purposes such as credential theft or system reconnaissance. If confirmed malicious, this act…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) values(Processes.process) as process max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process=\"* /stext *\"\n            OR\n            Processes.process=\"* /scomma *\"\n        )\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `detection_of_tools_built_by_nirsoft_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While legitimate, these NirSoft tools are prone to abuse. You should verify that the tool was used for a legitimate purpose.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1072"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detection of tools built by NirSoft\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1072. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detection of tools built by NirSoft\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While legitimate, these NirSoft tools are prone to abuse. You should verify that the tool was used for a legitimate purpose.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
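\n\nOptional raw-event fallback sketch for environments without data model acceleration (illustrative; index, sourcetype, and field names assume the default Splunk Add-on for Sysmon and should be adjusted to your ingestion):\n\n```spl\nindex=* sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=1\n    (CommandLine=\"* /stext *\" OR CommandLine=\"* /scomma *\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host, user, Image, CommandLine\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nVerify any match with the operator before acting; /stext and /scomma are legitimate NirSoft export switches.",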
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Detection of tools built by NirSoft” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.84",
              "n": "Disable Defender AntiVirus Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of Windows Defender registry settings to disable antivirus and antispyware protections. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to registry paths associated with Windows Defender policies. This activity is significant because disabling antivirus protections is a common tactic used by adversaries to evade detection and maintain persistence on compromised systems. If confirmed malicious, this action co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin or user may choose to disable windows defender product",
              "refs": "https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Defender AntiVirus Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Defender AntiVirus Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin or user may choose to disable windows defender product\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
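\n\nOptional ad-hoc triage sketch (illustrative only, not the packaged ESCU logic; the registry path and value names below are assumptions based on the documented Defender policy keys, so verify them against the detection definition in Content Management):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\Policies\\\\Microsoft\\\\Windows Defender*\"\n    AND Registry.registry_value_name IN (\"DisableAntiSpyware\", \"DisableAntiVirus\")\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nCorrelate hits with change tickets; deliberate administrative changes are the main benign source.",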
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.85",
              "n": "Disable Defender BlockAtFirstSeen Feature",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the Windows registry to disable the Windows Defender BlockAtFirstSeen feature. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path associated with Windows Defender SpyNet and the DisableBlockAtFirstSeen value. This activity is significant because disabling this feature can allow malicious files to bypass initial detection by Windows Defender, increasing the risk of malware infection. If c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin or user may choose to disable windows defender product",
              "refs": "https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Defender BlockAtFirstSeen Feature\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Defender BlockAtFirstSeen Feature\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin or user may choose to disable windows defender product\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
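\n\nOptional ad-hoc triage sketch (illustrative only, not the packaged ESCU logic; the SpyNet registry path and value name are assumptions based on documented Defender cloud-protection keys, so verify them against the detection definition in Content Management):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\Microsoft\\\\Windows Defender\\\\SpyNet*\"\n    AND Registry.registry_value_name=\"DisableBlockAtFirstSeen\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nA registry_value_data of 0x00000001 indicates the feature was turned off; cross-check against endpoint management policy pushes.",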
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.86",
              "n": "Disable Defender Enhanced Notification",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the registry to disable Windows Defender's Enhanced Notification feature. It leverages data from Endpoint Detection and Response (EDR) agents, specifically monitoring changes to the registry path associated with Windows Defender reporting. This activity is significant because disabling Enhanced Notifications can prevent users and administrators from receiving critical security alerts, potentially allowing malicious activities to go unnoticed. If…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "user may choose to disable windows defender AV",
              "refs": "https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Defender Enhanced Notification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Defender Enhanced Notification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: user may choose to disable windows defender AV\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Disable Defender Enhanced Notification” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.87",
              "n": "Disable Defender MpEngine Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the Windows Defender MpEngine registry value, specifically setting MpEnablePus to 0x00000000. This detection leverages endpoint registry logs, focusing on changes within the path \"*\\\\Policies\\\\Microsoft\\\\Windows Defender\\\\MpEngine*\". This activity is significant as it indicates an attempt to disable key Windows Defender features, potentially allowing malware to evade detection. If confirmed malicious, this could lead to undetected malware execut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin or user may choose to disable windows defender product",
              "refs": "https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Defender MpEngine Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Defender MpEngine Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin or user may choose to disable windows defender product\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Disable Defender MpEngine Registry” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.88",
              "n": "Disable Defender Spynet Reporting",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the registry to disable Windows Defender SpyNet reporting. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path associated with Windows Defender SpyNet settings. This activity is significant because disabling SpyNet reporting can prevent Windows Defender from sending telemetry data, potentially allowing malicious activities to go undetected. If confirmed malicious, this action could enable…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin or user may choose to disable windows defender product",
              "refs": "https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Defender Spynet Reporting\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Defender Spynet Reporting\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin or user may choose to disable windows defender product\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.89",
              "n": "Disable Defender Submit Samples Consent Feature",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the Windows registry to disable the Windows Defender Submit Samples Consent feature. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path associated with Windows Defender SpyNet and the SubmitSamplesConsent value set to 0x00000000. This activity is significant as it indicates an attempt to bypass or evade detection by preventing Windows Defender from submitting samples for further analysis…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin or user may choose to disable windows defender product",
              "refs": "https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Defender Submit Samples Consent Feature\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Defender Submit Samples Consent Feature\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin or user may choose to disable windows defender product\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.3.89: Disable Defender Submit Samples Consent Feature.",
                  "ea": "Saved search 'UC-10.3.89' running on Sysmon EventID 13, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.90",
              "n": "Disable ETW Through Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the registry that disable the Event Tracing for Windows (ETW) feature. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path \"*\\\\SOFTWARE\\\\Microsoft\\\\.NETFramework\\\\ETWEnabled\" with a value set to \"0x00000000\". This activity is significant because disabling ETW can allow attackers to evade detection mechanisms, making it harder for security tools to monitor malicious activities. If confirmed m…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network operator may disable this feature of windows but not so common.",
              "refs": "https://app.any.run/tasks/c0f98850-af65-4352-9746-fbebadee4f05/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable ETW Through Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable ETW Through Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network operator may disable this feature of windows but not so common.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.91",
              "n": "Disable Logs Using WevtUtil",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of \"wevtutil.exe\" with parameters to disable event logs. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because disabling event logs is a common tactic used by ransomware to evade detection and hinder forensic investigations. If confirmed malicious, this action could allow attackers to operate undetected, making it difficult to trace their activiti…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network operator may disable audit event logs for debugging purposes.",
              "refs": "https://www.bleepingcomputer.com/news/security/new-ransom-x-ransomware-used-in-texas-txdot-cyberattack/",
              "mitre": [
                "T1070.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Logs Using WevtUtil\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Logs Using WevtUtil\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network operator may disable audit event logs for debugging purposes.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Disable Logs Using WevtUtil” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.92",
              "n": "Disable Registry Tool",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry aimed at disabling the Registry Editor (regedit). It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path \"*\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Policies\\\\System\\\\DisableRegistryTools\" with a value of \"0x00000001\". This activity is significant because malware, such as RATs or trojans, often disable registry tools to prevent the removal of their entries, aiding in …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin may disable this application for non technical user.",
              "refs": "https://any.run/report/ea4ea08407d4ee72e009103a3b77e5a09412b722fdef67315ea63f22011152af/a866d7b1-c236-4f26-a391-5ae32213dfc4#registry",
              "mitre": [
                "T1112",
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Registry Tool\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112, T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Registry Tool\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin may disable this application for non technical user.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.93",
              "n": "Disable Schedule Task",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a command to disable an existing scheduled task using 'schtasks.exe' with the '/change' and '/disable' parameters. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. Disabling scheduled tasks is significant as it is a common tactic used by adversaries, including malware like IcedID, to disable security applications and evade detection. If confirmed malicious, this a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin may disable problematic schedule task",
              "refs": "https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Schedule Task\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Schedule Task\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin may disable problematic schedule task\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
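              "qs": "```Hedged sketch of the underlying detection idea, not the packaged ESCU search (view that in Content Management). Assumes Endpoint.Processes is CIM-mapped and accelerated, and that the behavior of interest is schtasks.exe run with /change and /disable, inferred from the detection title. Inline comments require Splunk 9.0 or later.``` | tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=\"schtasks.exe\" Processes.process=\"*/change*\" Processes.process=\"*/disable*\" by Processes.dest Processes.user Processes.process_name Processes.process",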
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Disable Schedule Task” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.94",
              "n": "Disable Security Logs Using MiniNt Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious registry modification aimed at disabling security audit logs by adding a specific registry entry. It leverages data from the Endpoint.Registry data model, focusing on changes to the \"Control\\\\MiniNt\" registry path. This activity is significant because it can prevent Windows from logging any events to the Security Log, effectively blinding security monitoring efforts. If confirmed malicious, this technique could allow an attacker to operate undetected, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://twitter.com/0gtweet/status/1182516740955226112",
              "mitre": [
                "T1112"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Security Logs Using MiniNt Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Security Logs Using MiniNt Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
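              "qs": "```Hedged sketch of the underlying detection idea, not the packaged ESCU search (view that in Content Management). Assumes Endpoint.Registry is CIM-mapped via the Sysmon TA and accelerated; the MiniNt control path comes from this use case's description. Inline comments require Splunk 9.0 or later.``` | tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry where Registry.registry_path=\"*\\\\Control\\\\MiniNt*\" by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data",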
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.95",
              "n": "Disable Show Hidden Files",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable the display of hidden files. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to registry paths associated with hidden file settings. This activity is significant because malware, such as worms and trojan spyware, often use hidden files to evade detection. If confirmed malicious, this behavior could allow an attacker to conceal malicious files on the system, making it harder …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.sophos.com/en-us/threat-center/threat-analyses/viruses-and-spyware/W32~Tiotua-P/detailed-analysis",
              "mitre": [
                "T1112",
                "T1562.001",
                "T1564.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Show Hidden Files\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112, T1562.001, T1564.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Show Hidden Files\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
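              "qs": "```Hedged sketch, not the packaged ESCU search. Assumes Endpoint.Registry is CIM-mapped via the Sysmon TA; the Explorer Advanced path and the 'Hidden' value name are illustrative assumptions, since this use case's description only cites registry paths associated with hidden file settings. Inline comments require Splunk 9.0 or later.``` | tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry where Registry.registry_path=\"*\\\\Explorer\\\\Advanced*\" Registry.registry_value_name=\"Hidden\" by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data",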
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.96",
              "n": "Disable UAC Remote Restriction",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the registry to disable UAC remote restriction by setting the \"LocalAccountTokenFilterPolicy\" value to \"0x00000001\". It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path \"*\\\\CurrentVersion\\\\Policies\\\\System*\". This activity is significant because disabling UAC remote restriction can allow an attacker to bypass User Account Control (UAC) protections, potentially leading to privilege escalat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin may set this policy for non-critical machine.",
              "refs": "https://learn.microsoft.com/en-us/troubleshoot/windows-server/windows-security/user-account-control-and-remote-restriction",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable UAC Remote Restriction\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable UAC Remote Restriction\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin may set this policy for non-critical machine.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
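              "qs": "```Hedged sketch, not the packaged ESCU search (view that in Content Management). Assumes Endpoint.Registry is CIM-mapped via the Sysmon TA and accelerated; the path, value name, and value data come from this use case's description. Inline comments require Splunk 9.0 or later.``` | tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry where Registry.registry_path=\"*\\\\CurrentVersion\\\\Policies\\\\System*\" Registry.registry_value_name=\"LocalAccountTokenFilterPolicy\" Registry.registry_value_data=\"0x00000001\" by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data",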
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.97",
              "n": "Disable Windows App Hotkeys",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious registry modification aimed at disabling Windows hotkeys for native applications. It leverages data from the Endpoint.Registry data model, focusing on specific registry paths and values indicative of this behavior. This activity is significant as it can impair an analyst's ability to use essential tools like Task Manager and Command Prompt, hindering incident response efforts. If confirmed malicious, this technique can allow an attacker to maintain per…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1112",
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Windows App Hotkeys\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112, T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Windows App Hotkeys\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
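              "qs": "```Hedged sketch, not the packaged ESCU search. Assumes Endpoint.Registry is CIM-mapped via the Sysmon TA; the Image File Execution Options path is an illustrative assumption, since this use case's description does not name the exact registry paths. Inline comments require Splunk 9.0 or later.``` | tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry where Registry.registry_path=\"*\\\\Image File Execution Options*\" by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data",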
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.98",
              "n": "Disable Windows Behavior Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies modifications in the registry to disable Windows Defender's real-time behavior monitoring. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to registry paths associated with Windows Defender settings. This activity is significant because disabling real-time protection is a common tactic used by malware such as RATs, bots, or Trojans to evade detection. If confirmed malicious, this action could allow an attacker to execute …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin or user may choose to disable this windows features.",
              "refs": "https://tccontre.blogspot.com/2020/01/remcos-rat-evading-windows-defender-av.html",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Windows Behavior Monitoring\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Windows Behavior Monitoring\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin or user may choose to disable this windows features.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
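              "qs": "```Hedged sketch, not the packaged ESCU search. Assumes Endpoint.Registry is CIM-mapped via the Sysmon TA; the DisableBehaviorMonitoring value name is an assumption inferred from the detection title, since the description only cites Windows Defender registry paths. Inline comments require Splunk 9.0 or later.``` | tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry where Registry.registry_path=\"*\\\\Windows Defender*\" Registry.registry_value_name=\"DisableBehaviorMonitoring\" by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data",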
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.99",
              "n": "Disabling CMD Application",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the registry that disable the CMD prompt application. It leverages data from the Endpoint.Registry data model, specifically looking for changes to the \"DisableCMD\" registry value. This activity is significant because disabling CMD can hinder an analyst's ability to investigate and remediate threats, a tactic often used by malware such as RATs, Trojans, or Worms. If confirmed malicious, this could prevent security teams from using CMD for directory …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin may disable this application for non technical user.",
              "refs": "https://any.run/report/ea4ea08407d4ee72e009103a3b77e5a09412b722fdef67315ea63f22011152af/a866d7b1-c236-4f26-a391-5ae32213dfc4#registry",
              "mitre": [
                "T1112",
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disabling CMD Application\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112, T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disabling CMD Application\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin may disable this application for non technical user.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
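              "qs": "```Hedged sketch, not the packaged ESCU search (view that in Content Management). Assumes Endpoint.Registry is CIM-mapped via the Sysmon TA and accelerated; the DisableCMD value name comes from this use case's description. Inline comments require Splunk 9.0 or later.``` | tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry where Registry.registry_value_name=\"DisableCMD\" by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data",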
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.100",
              "n": "Disabling ControlPanel",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects registry modifications that disable the Control Panel on Windows systems. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path \"*\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Policies\\\\Explorer\\\\NoControlPanel\" with a value of \"0x00000001\". This activity is significant as it is commonly used by malware to prevent users from accessing the Control Panel, thereby hindering the removal of malicious artifacts an…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin may disable this application for non technical user.",
              "refs": "https://any.run/report/ea4ea08407d4ee72e009103a3b77e5a09412b722fdef67315ea63f22011152af/a866d7b1-c236-4f26-a391-5ae32213dfc4#registry",
              "mitre": [
                "T1112",
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disabling ControlPanel\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112, T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disabling ControlPanel\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin may disable this application for non technical user.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
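              "_spl_example": "Hypothetical spot-check of the underlying telemetry, not the shipped detection logic (that lives in the ESCU correlation search in Content Management). It assumes Sysmon EventID 13 is CIM-mapped into an accelerated Endpoint.Registry dataset and that this detection targets the NoControlPanel policy value; set summariesonly=false if acceleration is disabled, and note the rendered registry_value_data format can vary by add-on version:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\Policies\\\\Explorer\\\\NoControlPanel*\" Registry.registry_value_data=\"0x00000001\"\n  by Registry.dest Registry.registry_path Registry.registry_value_data\n```\n\nIf known test activity returns no rows, verify the add-on is extracting registry_path, registry_value_name, and registry_value_data before enabling the detection.",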
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.101",
              "n": "Disabling Defender Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the disabling of Windows Defender services by monitoring registry modifications. It leverages registry event data to identify changes to specific registry paths associated with Defender services, where the 'Start' value is set to '0x00000004'. This activity is significant because disabling Defender services can indicate an attempt by an adversary to evade detection and maintain persistence on the endpoint. If confirmed malicious, this action could allow attackers t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin or user may choose to disable windows defender product",
              "refs": "https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disabling Defender Services\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disabling Defender Services\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin or user may choose to disable windows defender product\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Disabling Defender Services” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.102",
              "n": "Disabling FolderOptions Windows Feature",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the Windows registry to disable the Folder Options feature, which prevents users from showing hidden files and file extensions. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path \"*\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Policies\\\\Explorer\\\\NoFolderOptions\" with a value of \"0x00000001\". This activity is significant as it is commonly used by malware to conceal malicious files and …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin may disable this application for non technical user.",
              "refs": "https://any.run/report/ea4ea08407d4ee72e009103a3b77e5a09412b722fdef67315ea63f22011152af/a866d7b1-c236-4f26-a391-5ae32213dfc4#registry",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disabling FolderOptions Windows Feature\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disabling FolderOptions Windows Feature\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin may disable this application for non technical user.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.103",
              "n": "Disabling NoRun Windows App",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the Windows registry to disable the Run application in the Start menu. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path \"*\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Policies\\\\Explorer\\\\NoRun\" with a value of \"0x00000001\". This activity is significant because the Run application is a useful shortcut for executing known applications and scripts. If confirmed malicious, this action c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin may disable this application for non technical user.",
              "refs": "https://any.run/report/ea4ea08407d4ee72e009103a3b77e5a09412b722fdef67315ea63f22011152af/a866d7b1-c236-4f26-a391-5ae32213dfc4#registry, https://www.malwarebytes.com/blog/detections/pum-optional-norun/",
              "mitre": [
                "T1112",
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disabling NoRun Windows App\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112, T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disabling NoRun Windows App\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin may disable this application for non technical user.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.104",
              "n": "Disabling SystemRestore In Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of registry keys to disable System Restore on a machine. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to registry paths associated with System Restore settings. This activity is significant because disabling System Restore can hinder recovery efforts and is a tactic often used by Remote Access Trojans (RATs) to maintain persistence on an infected system. If confirmed malicious, this action could prevent s…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "in some cases admin can disable systemrestore on a machine.",
              "refs": "https://tccontre.blogspot.com/2020/01/remcos-rat-evading-windows-defender-av.html",
              "mitre": [
                "T1490"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disabling SystemRestore In Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disabling SystemRestore In Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: in some cases admin can disable systemrestore on a machine.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.105",
              "n": "Disabling Task Manager",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies modifications to the Windows registry that disable Task Manager. It leverages data from the Endpoint.Registry data model, specifically looking for changes to the registry path \"*\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Policies\\\\System\\\\DisableTaskMgr\" with a value of \"0x00000001\". This activity is significant as it is commonly associated with malware such as RATs, Trojans, and worms, which disable Task Manager to prevent users from terminating malicious …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin may disable this application for non technical user.",
              "refs": "https://any.run/report/ea4ea08407d4ee72e009103a3b77e5a09412b722fdef67315ea63f22011152af/a866d7b1-c236-4f26-a391-5ae32213dfc4#registry, https://blog.talosintelligence.com/2020/05/threat-roundup-0424-0501.html",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disabling Task Manager\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disabling Task Manager\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin may disable this application for non technical user.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.106",
              "n": "DNS Exfiltration Using Nslookup App",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential DNS exfiltration using the nslookup application. It detects specific command-line parameters such as query type (TXT, A, AAAA) and retry options, which are commonly used by attackers to exfiltrate data. The detection leverages Endpoint Detection and Response (EDR) telemetry, focusing on process execution logs. This activity is significant as it may indicate an attempt to communicate with a Command and Control (C2) server or exfiltrate sensitive data. I…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin nslookup usage",
              "refs": "https://www.mandiant.com/resources/fin7-spear-phishing-campaign-targets-personnel-involved-sec-filings, https://www.varonis.com/blog/dns-tunneling, https://www.microsoft.com/security/blog/2021/01/20/deep-dive-into-the-solorigate-second-stage-activation-from-sunburst-to-teardrop-and-raindrop/",
              "mitre": [
                "T1048"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"DNS Exfiltration Using Nslookup App\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1048. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"DNS Exfiltration Using Nslookup App\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin nslookup usage\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “DNS Exfiltration Using Nslookup App” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.107",
              "n": "Domain Account Discovery with Dsquery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `dsquery.exe` with command-line arguments used to discover domain users. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it indicates potential reconnaissance efforts by adversaries to map out domain users, which is a common precursor to further attacks. If confirmed malicious, this behavior could allow attackers to gain insights into user…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://jpcertcc.github.io/ToolAnalysisResultSheet/details/dsquery.htm, https://attack.mitre.org/techniques/T1087/002/",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Domain Account Discovery with Dsquery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Domain Account Discovery with Dsquery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
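A quick sanity check that this detection is actually writing risk events before the Notable threshold is ever crossed. This is a minimal sketch, assuming the default `risk` index and the standard ESCU convention that `search_name` contains the detection title:

```spl
index=risk search_name="*Domain Account Discovery with Dsquery*"
| stats count min(_time) as firstTime max(_time) as lastTime
        values(risk_message) as risk_message
  by risk_object, risk_object_type
| sort -count
```

If this returns nothing after the Correlation Search has run, check the search's schedule and its risk adaptive-response configuration before touching thresholds.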
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Domain Account Discovery with Dsquery” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.108",
              "n": "Domain Controller Discovery with Nltest",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `nltest.exe` with command-line arguments `/dclist:` or `/dsgetdc:` to discover domain controllers. It leverages Endpoint Detection and Response (EDR) data, focusing on process names and command-line arguments. This activity is significant because both Red Teams and adversaries use `nltest.exe` for situational awareness and Active Directory discovery. If confirmed malicious, this behavior could allow attackers to map out domain controllers, facilita…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/, https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/cc731935(v=ws.11)",
              "mitre": [
                "T1018"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Domain Controller Discovery with Nltest\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Domain Controller Discovery with Nltest\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
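To preview what the packaged ESCU logic will match before enabling the Correlation Search, an ad-hoc approximation against the Endpoint data model can help. This is a sketch, not the shipped detection; it assumes CIM-normalized `Endpoint.Processes` data and the standard `drop_dm_object_name` macro:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="nltest.exe"
    (Processes.process="*/dclist:*" OR Processes.process="*/dsgetdc:*")
  by Processes.dest Processes.user Processes.parent_process_name Processes.process
| `drop_dm_object_name(Processes)`
```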
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Domain Controller Discovery with Nltest” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.109",
              "n": "Domain Controller Discovery with Wmic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `wmic.exe` with command-line arguments used to discover domain controllers in a Windows domain. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because it is commonly used by adversaries and Red Teams for situational awareness and Active Directory discovery. If confirmed malicious, this behavior could allow attackers to map out the network, id…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"wmic.exe\"\n        )\n        (Processes.process=\"\" OR Processes.process=\"*DomainControllerAddress*\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `domain_controller_discovery_with_wmic_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/",
              "mitre": [
                "T1018"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Domain Controller Discovery with Wmic\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Domain Controller Discovery with Wmic\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Domain Controller Discovery with Wmic” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.110",
              "n": "Domain Group Discovery With Dsquery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `dsquery.exe` with command-line arguments used to query for domain groups. It leverages Endpoint Detection and Response (EDR) data, focusing on process names and command-line arguments. This activity is significant because both Red Teams and adversaries use `dsquery.exe` to enumerate domain groups, gaining situational awareness and facilitating further Active Directory discovery. If confirmed malicious, this behavior could allow attackers to map…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/",
              "mitre": [
                "T1069.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Domain Group Discovery With Dsquery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Domain Group Discovery With Dsquery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
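The per-entity risk exception suggested above can be prototyped directly on the drilldown. This sketch assumes a hypothetical `risk_exceptions` lookup keyed by `normalized_risk_object`:

```spl
| from datamodel Risk.All_Risk
| search normalized_risk_object IN ("$user$", "$dest$")
| search NOT [ | inputlookup risk_exceptions | fields normalized_risk_object ]
| stats count values(search_name) as "Search Name" by normalized_risk_object
```

Once the exception list behaves as expected, move the same NOT-subsearch into the detection's filter macro so suppression happens before risk events are written.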
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Domain Group Discovery With Dsquery” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.111",
              "n": "Domain Group Discovery With Wmic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `wmic.exe` with command-line arguments used to query for domain groups. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it indicates potential reconnaissance efforts by adversaries to gain situational awareness and map out Active Directory structures. If confirmed malicious, this behavior could allow attackers to identify and target specif…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where `process_wmic` (Processes.process=*/NAMESPACE:\\\\\\\\root\\\\directory\\\\ldap* AND Processes.process=*ds_group* AND Processes.process=\"*GET ds_samaccountname*\") by Processes.action Processes.dest Processes.original_file_name Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path Processes.process Processes.process_exec Processes.process_guid Processes.process_hash Processes.process_id Processes.process_integrity_level Processes.process_name Processes.process_path Processes.user Processes.user_id Processes.vendor_product | `drop_dm_object_name(Processes)` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `domain_group_discovery_with_wmic_filter`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/",
              "mitre": [
                "T1069.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Domain Group Discovery With Wmic\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Domain Group Discovery With Wmic\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Domain Group Discovery With Wmic” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.112",
              "n": "Drop IcedID License dat",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the dropping of a suspicious file named \"license.dat\" in %appdata% or %programdata%. This behavior is associated with the IcedID malware, which uses this file to inject its core bot into other processes for banking credential theft. The detection leverages Sysmon EventCode 11 to monitor file creation events in these directories. This activity is significant as it indicates a potential malware infection aiming to steal sensitive banking information. If confirmed mal…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "`sysmon` EventCode= 11  TargetFilename = \"*\\\\license.dat\" AND (TargetFilename=\"*\\\\appdata\\\\*\" OR TargetFilename=\"*\\\\programdata\\\\*\") |stats count min(_time) as firstTime max(_time) as lastTime by TargetFilename EventCode process_id  process_name dest | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `drop_icedid_license_dat_filter`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.cisecurity.org/insights/white-papers/security-primer-icedid",
              "mitre": [
                "T1204.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Drop IcedID License dat\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Drop IcedID License dat\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Drop IcedID License dat” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.113",
              "n": "Elevated Group Discovery With Wmic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `wmic.exe` with command-line arguments querying specific elevated domain groups. It leverages Endpoint Detection and Response (EDR) telemetry to identify processes that access the LDAP namespace and search for groups like \"Domain Admins\" or \"Enterprise Admins.\" This activity is significant as it indicates potential reconnaissance efforts by adversaries to identify high-privilege accounts within Active Directory. If confirmed malicious, this behavio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/, https://learn.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/appendix-b--privileged-accounts-and-groups-in-active-directory, https://adsecurity.org/?p=3658",
              "mitre": [
                "T1069.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Elevated Group Discovery With Wmic\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Elevated Group Discovery With Wmic\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
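To preview the elevated-group queries this detection targets, an ad-hoc approximation against the Endpoint data model is shown below. It is a sketch, not the shipped ESCU search; the two group names are illustrative and the packaged detection covers a longer list:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="wmic.exe" Processes.process="*ds_group*"
    (Processes.process="*Domain Admins*" OR Processes.process="*Enterprise Admins*")
  by Processes.dest Processes.user Processes.process
| `drop_dm_object_name(Processes)`
```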
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Elevated Group Discovery With Wmic” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.114",
              "n": "Enable WDigest UseLogonCredential Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious registry modification that enables the plain text credential feature in Windows by setting the \"UseLogonCredential\" value to 1 in the WDigest registry path. This detection leverages data from the Endpoint.Registry data model, focusing on specific registry paths and values. This activity is significant because it is commonly used by malware and tools like Mimikatz to dump plain text credentials, indicating a potential credential dumping attempt. If conf…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.csoonline.com/article/3438824/how-to-detect-and-halt-credential-theft-via-windows-wdigest.html",
              "mitre": [
                "T1112",
                "T1003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Enable WDigest UseLogonCredential Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112, T1003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Enable WDigest UseLogonCredential Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
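To preview the registry telemetry this detection depends on, the sketch below assumes CIM-normalized `Endpoint.Registry` data; the path and value name mirror the WDigest key described above:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Registry
  where Registry.registry_path="*\\SecurityProviders\\WDigest\\*"
    Registry.registry_value_name="UseLogonCredential"
  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_data
| `drop_dm_object_name(Registry)`
```

Any hit with `registry_value_data` set to 1 warrants immediate review, since legitimate re-enablement of WDigest plain-text credentials is rare on modern Windows.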
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.115",
              "n": "ETW Registry Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a registry modification that disables the ETW for the .NET Framework. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the ETWEnabled registry value under the .NETFramework path. This activity is significant because disabling ETW can allow attackers to evade Endpoint Detection and Response (EDR) tools and hide their execution from audit logs. If confirmed malicious, this action could enable attackers to operate undetected,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://gist.github.com/Cyb3rWard0g/a4a115fd3ab518a0e593525a379adee3, https://blog.xpnsec.com/hiding-your-dotnet-complus-etwenabled/",
              "mitre": [
                "T1127",
                "T1562.006"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ETW Registry Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1127, T1562.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ETW Registry Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
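\n\nIllustrative Detection Logic (Sketch)\n\nThe authoritative logic ships inside the ESCU correlation search and can change between content releases. For orientation only, a minimal sketch of the typical approach follows, assuming CIM-normalized Endpoint.Registry data and the CIM-provided `drop_dm_object_name` macro; the exact registry path and value-data filters in your ESCU version may differ:\n\n```spl\n| tstats summariesonly=false count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\SOFTWARE\\\\Microsoft\\\\.NETFramework*\" Registry.registry_value_name=\"ETWEnabled\" Registry.registry_value_data=\"0x00000000\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nIf this sketch and the deployed detection disagree, compare the filters against the shipping ESCU definition in Content Management before tuning.",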
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.116",
              "n": "Eventvwr UAC Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an Eventvwr UAC bypass by identifying suspicious registry modifications in the path that Eventvwr.msc references upon execution. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on registry changes and process execution details. This activity is significant because it indicates a potential privilege escalation attempt, allowing an attacker to execute arbitrary commands with elevated privileges. If confirmed malicious, this c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some false positives may be present and will need to be filtered.",
              "refs": "https://www.malwarebytes.com/blog/malwarebytes-news/2021/02/lazyscripter-from-empire-to-double-rat/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1548.002/T1548.002.md, https://attack.mitre.org/techniques/T1548/002/, https://enigma0x3.net/2016/08/15/fileless-uac-bypass-using-eventvwr-exe-and-registry-hijacking/",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Eventvwr UAC Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Eventvwr UAC Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some false positives may be present and will need to be filtered.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
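\n\nIllustrative Detection Logic (Sketch)\n\nThe shipping search lives in the ESCU correlation search definition. As a rough orientation, the core logic typically resembles the sketch below, assuming CIM-normalized Endpoint.Registry data; the wildcard mirrors the mscfile shell handler key that Eventvwr.msc consults, but verify the exact filter against your ESCU version:\n\n```spl\n| tstats summariesonly=false count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*mscfile\\\\shell\\\\open\\\\command*\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nLegitimate software rarely writes to this key, which is part of why the technique is treated as a high-fidelity TTP signal.",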
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Eventvwr UAC Bypass” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.117",
              "n": "Excessive Attempt To Disable Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a suspicious series of command-line executions attempting to disable multiple services. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on processes where \"sc.exe\" is used with parameters like \"config\" or \"Disabled\" within a short time frame. This activity is significant as it may indicate an adversary's attempt to disable security or other critical services to further compromise the system. If confirmed malicious, this could lead t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1489"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Excessive Attempt To Disable Services\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Excessive Attempt To Disable Services\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
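\n\nIllustrative Detection Logic (Sketch)\n\nThe deployed search is defined in ESCU; the sketch below only illustrates the shape of the logic, assuming CIM-normalized Endpoint.Processes data. The 10-minute span and the count threshold are illustrative placeholders, not the shipping values:\n\n```spl\n| tstats summariesonly=false count values(Processes.process) as process\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"sc.exe\" Processes.process=\"*config*\" Processes.process=\"*disabled*\"\n  by Processes.dest _time span=10m\n| `drop_dm_object_name(Processes)`\n| where count >= 5\n```\n\nGrouping by dest within a short time bucket is what turns individual sc.exe executions into the \"excessive attempts\" pattern this detection targets.",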
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Excessive Attempt To Disable Services” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.118",
              "n": "Excessive distinct processes from Windows Temp",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an excessive number of distinct processes executing from the Windows\\Temp directory. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process paths and counts within a 20-minute window. This behavior is significant as it often indicates the presence of post-exploit frameworks like Koadic and Meterpreter, which use this technique to execute malicious actions. If confirmed malicious, this activity could allow attackers to execute ar…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Many benign applications will create processes from executables in Windows\\Temp, although unlikely to exceed the given threshold.  Filter as needed.",
              "refs": "https://www.offensive-security.com/metasploit-unleashed/about-meterpreter/",
              "mitre": [
                "T1059"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Excessive distinct processes from Windows Temp\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Excessive distinct processes from Windows Temp\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Many benign applications will create processes from executables in Windows\\Temp, although unlikely to exceed the given threshold.  Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
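\n\nIllustrative Detection Logic (Sketch)\n\nFor orientation, the core logic typically resembles the following, assuming CIM-normalized Endpoint.Processes data; the shipping ESCU detection pins its own distinct-process threshold, so treat the value here as a placeholder:\n\n```spl\n| tstats summariesonly=false dc(Processes.process) as distinct_process_count values(Processes.process) as processes\n  from datamodel=Endpoint.Processes\n  where Processes.process_path=\"*\\\\windows\\\\temp\\\\*\"\n  by Processes.dest _time span=20m\n| `drop_dm_object_name(Processes)`\n| where distinct_process_count > 30\n```\n\nThe 20-minute bucket matches the window described above; raising or lowering the distinct-count cutoff is the primary tuning lever.",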
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Excessive distinct processes from Windows Temp” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.119",
              "n": "Excessive File Deletion In WinDefender Folder",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects excessive file deletion events in the Windows Defender folder. It leverages Sysmon EventCodes 23 and 26 to identify processes deleting multiple files within this directory. This behavior is significant as it may indicate an attempt to corrupt or disable Windows Defender, a key security component. If confirmed malicious, this activity could allow an attacker to disable endpoint protection, facilitating further malicious actions without detection.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 23, Sysmon EventID 26",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest logs that include the process name, TargetFilename, and ProcessID executions from your endpoints. If you are utilizing Sysmon, ensure you have at least version 2.0 of the Sysmon TA installed.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Windows Defender AV updates may trigger this alert. Please adjust the filter macros to mitigate false positives.",
              "refs": "https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1485"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Excessive File Deletion In WinDefender Folder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 23, Sysmon EventID 26. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Excessive File Deletion In WinDefender Folder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Windows Defender AV updates may trigger this alert. Please adjust the filter macros to mitigate false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
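\n\nIllustrative Detection Logic (Sketch)\n\nBecause this detection keys on Sysmon file-deletion events rather than a data model, a raw-event sketch is shown, assuming the ESCU `sysmon` macro (substitute your own index and sourcetype filter if you do not use ESCU macros) and the Sysmon TA field mappings; the count threshold is illustrative:\n\n```spl\n`sysmon` EventCode IN (23, 26) TargetFilename=\"*\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\*\"\n| stats count values(TargetFilename) as deleted_files min(_time) as firstTime max(_time) as lastTime by dest process_name\n| where count >= 50\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nA high per-process deletion count inside the Defender directory separates bulk tampering from routine single-file churn such as signature updates.",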
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Excessive File Deletion In WinDefender Folder” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.120",
              "n": "Excessive Usage of NSLOOKUP App",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects excessive usage of the nslookup application, which may indicate potential DNS exfiltration attempts. It leverages Sysmon EventCode 1 to monitor process executions, specifically focusing on nslookup.exe. The detection identifies outliers by comparing the frequency of nslookup executions against a calculated threshold. This activity is significant as it can reveal attempts by malware or APT groups to exfiltrate data via DNS queries. If confirmed malicious, this behav…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mandiant.com/resources/fin7-spear-phishing-campaign-targets-personnel-involved-sec-filings, https://www.varonis.com/blog/dns-tunneling, https://www.microsoft.com/security/blog/2021/01/20/deep-dive-into-the-solorigate-second-stage-activation-from-sunburst-to-teardrop-and-raindrop/",
              "mitre": [
                "T1048"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Excessive Usage of NSLOOKUP App\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1048. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Excessive Usage of NSLOOKUP App\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
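\n\nIllustrative Detection Logic (Sketch)\n\nThe description above mentions an outlier comparison against a calculated threshold; a minimal sketch of that idea follows, assuming the ESCU `sysmon` macro and Sysmon TA field mappings. The three-standard-deviation cutoff is an illustrative stand-in for whatever baseline the shipping detection computes:\n\n```spl\n`sysmon` EventCode=1 process_name=\"nslookup.exe\"\n| bucket _time span=1m\n| stats count as nslookup_count by _time, dest\n| eventstats avg(nslookup_count) as avg_count stdev(nslookup_count) as stdev_count by dest\n| where nslookup_count > avg_count + (3 * stdev_count)\n```\n\nHosts that legitimately script nslookup (for example, DNS health checks) will skew their own baseline; scope the search or add suppressions for those systems.",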
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Excessive Usage of NSLOOKUP App” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.121",
              "n": "Excessive Usage Of SC Service Utility",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects excessive usage of the `sc.exe` service utility on a host machine. It leverages Sysmon EventCode 1 logs to identify instances where `sc.exe` is executed more frequently than normal within a 15-minute window. This behavior is significant as it is commonly associated with ransomware, cryptocurrency miners, and other malware attempting to create, modify, delete, or disable services, potentially related to security applications or for privilege escalation. If confirmed…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "excessive execution of sc.exe is quite suspicious since it can modify or execute app in high privilege permission.",
              "refs": "https://app.any.run/tasks/c0f98850-af65-4352-9746-fbebadee4f05/",
              "mitre": [
                "T1569.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Excessive Usage Of SC Service Utility\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1569.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Excessive Usage Of SC Service Utility\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: excessive execution of sc.exe is quite suspicious since it can modify or execute app in high privilege permission.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
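\n\nIllustrative Detection Logic (Sketch)\n\nA minimal sketch of the typical logic follows, assuming the ESCU `sysmon` macro and Sysmon TA field mappings; the 15-minute window comes from the description above, while the count threshold is an illustrative placeholder:\n\n```spl\n`sysmon` EventCode=1 process_name=\"sc.exe\"\n| bucket _time span=15m\n| stats count values(process) as commands by _time, dest\n| where count >= 10\n```\n\nReviewing the captured command lines (the commands column) shows whether the burst of sc.exe activity was creating, disabling, or deleting services.",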
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Excessive Usage Of SC Service Utility” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.122",
              "n": "Excessive Usage Of Taskkill",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies excessive usage of `taskkill.exe`, a command-line utility used to terminate processes. The detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on instances where `taskkill.exe` is executed ten or more times within a one-minute span. This behavior is significant as adversaries often use `taskkill.exe` to disable security tools or other critical processes to evade detection. If confirmed malicious, this activity could allow attacke…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/, https://www.joesandbox.com/analysis/702680/0/html",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Excessive Usage Of Taskkill\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Excessive Usage Of Taskkill\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the ESCU saved search name contains the detection title):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Excessive Usage Of Taskkill*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
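\n\nIllustrative Detection Logic (Sketch)\n\nThe authoritative detection ships in ESCU; view it in Content Management. As a rough sketch only, the threshold behavior described above (ten or more taskkill.exe executions within one minute) could be expressed against the accelerated Endpoint data model like this; the grouping fields and the threshold are illustrative assumptions, not the ESCU definition:\n\n```spl\n| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes\n  where Processes.process_name=taskkill.exe by _time span=1m Processes.dest Processes.user\n| `drop_dm_object_name(Processes)`\n| where count >= 10\n```",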
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Excessive Usage Of Taskkill” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.123",
              "n": "Execution of File with Multiple Extensions",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of files with multiple extensions, such as \".doc.exe\" or \".pdf.exe\". This behavior is identified using Endpoint Detection and Response (EDR) telemetry, focusing on process creation events where the file name contains double extensions. This activity is significant because attackers often use double extensions to disguise malicious executables as benign documents, increasing the likelihood of user execution. If confirmed malicious, this technique can l…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.asyncrat",
              "mitre": [
                "T1036.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Execution of File with Multiple Extensions\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Execution of File with Multiple Extensions\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the ESCU saved search name contains the detection title):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Execution of File with Multiple Extensions*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
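\n\nIllustrative Detection Logic (Sketch)\n\nThe authoritative detection ships in ESCU. As a rough sketch only, double-extension matching over the accelerated Endpoint data model could look like the following; the extension list is an illustrative assumption, not the ESCU definition, and is deliberately not exhaustive:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process IN (\"*.doc.exe*\", \"*.docx.exe*\", \"*.pdf.exe*\", \"*.xls.exe*\", \"*.jpg.exe*\")\n  by Processes.dest Processes.user Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",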
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Execution of File with Multiple Extensions” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.124",
              "n": "File with Samsam Extension",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies File with Samsam Extension. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "file_name",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"File with Samsam Extension\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"File with Samsam Extension\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the ESCU saved search name contains the detection title):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*File with Samsam Extension*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
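\n\nIllustrative Detection Logic (Sketch)\n\nThe authoritative detection ships in ESCU. As a rough sketch only, file-creation telemetry (Sysmon EventID 11, normalized into Endpoint.Filesystem) could be checked for extensions publicly attributed to SamSam ransomware; the extension list below is an illustrative assumption and should be sourced from current threat intelligence:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Filesystem\n  where Filesystem.file_name IN (\"*.stubbin\", \"*.berkshire\", \"*.satoshi\", \"*.sophos\", \"*.keyxml\")\n  by Filesystem.dest Filesystem.file_name\n| `drop_dm_object_name(Filesystem)`\n```",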
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “File with Samsam Extension” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.125",
              "n": "First Time Seen Child Process of Zoom",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the first-time execution of child processes spawned by Zoom (zoom.exe or zoom.us). It leverages Endpoint Detection and Response (EDR) data, specifically monitoring process creation events and comparing them against previously seen child processes. This activity is significant because the execution of unfamiliar child processes by Zoom could indicate malicious exploitation or misuse of the application. If confirmed malicious, this could lead to unauthorized code …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` min(_time) as firstTime max(_time) as lastTime values(Processes.user) as user values(Processes.action) as action values(Processes.dest) as dest values(Processes.original_file_name) as original_file_name values(Processes.parent_process) as parent_process values(Processes.parent_process_exec) as parent_process_exec values(Processes.parent_process_guid) as parent_process_guid values(Processes.parent_process_id) as parent_process_id values(Processes.parent_process_name) as parent_process_name values(Processes.parent_process_path) as parent_process_path values(Processes.process) as process values(Processes.process_exec) as process_exec values(Processes.process_guid) as process_guid values(Processes.process_hash) as process_hash values(Processes.process_integrity_level) as process_integrity_level values(Processes.process_name) as process_name values(Processes.process_path) as process_path  values(Processes.user_id) as user_id values(Processes.vendor_product) as vendor_product FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.parent_process_name=zoom.exe\n            OR\n            Processes.parent_process_name=zoom.us\n        )\n      BY Processes.process_id Processes.dest\n    | `drop_dm_object_name(Processes)`\n    | lookup zoom_first_time_child_process dest as dest process_name as process_name OUTPUT firstTimeSeen\n    | where isnull(firstTimeSeen) OR firstTimeSeen > relative_time(now(), \"`previously_seen_zoom_child_processes_window`\")\n    | `security_content_ctime(firstTime)`\n    | `first_time_seen_child_process_of_zoom_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A new child process of zoom isn't malicious by that fact alone. Further investigation of the actions of the child process is needed to verify any malicious behavior is taken.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1068"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"First Time Seen Child Process of Zoom\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"First Time Seen Child Process of Zoom\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A new child process of zoom isn't malicious by that fact alone. Further investigation of the actions of the child process is needed to verify any malicious behavior is taken.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the ESCU saved search name contains the detection title):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*First Time Seen Child Process of Zoom*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
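\n\nCompanion Baseline Search (Sketch)\n\nThe detection SPL above reads from the `zoom_first_time_child_process` lookup to decide which child processes have been seen before. A minimal sketch of a baseline search that could populate that lookup is below; the field names follow the lookup call in the detection, but scheduling, time range, and any merge with prior contents (for example via `inputlookup append=t`) are assumptions; prefer the ESCU-provided support search where available:\n\n```spl\n| tstats `security_content_summariesonly` min(_time) as firstTimeSeen\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=zoom.exe OR Processes.parent_process_name=zoom.us\n  by Processes.dest Processes.process_name\n| `drop_dm_object_name(Processes)`\n| outputlookup zoom_first_time_child_process\n```\n\nNote that `outputlookup` replaces the lookup contents with the current results, so run the baseline over a wide enough time range to avoid forgetting legitimate history.",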
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “First Time Seen Child Process of Zoom” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.126",
              "n": "FodHelper UAC Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of fodhelper.exe, which is known to exploit a User Account Control (UAC) bypass by leveraging specific registry keys. The detection method uses Endpoint Detection and Response (EDR) telemetry to identify when fodhelper.exe spawns a child process and accesses the registry keys. This activity is significant because it indicates a potential privilege escalation attempt by an attacker. If confirmed malicious, the attacker could execute commands with eleva…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited to no false positives are expected.",
              "refs": "https://www.malwarebytes.com/blog/malwarebytes-news/2021/02/lazyscripter-from-empire-to-double-rat/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1548.002/T1548.002.md, https://github.com/gushmazuko/WinBypass/blob/master/FodhelperBypass.ps1, https://attack.mitre.org/techniques/T1548/002/",
              "mitre": [
                "T1112",
                "T1548.002"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"FodHelper UAC Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112, T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"FodHelper UAC Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited to no false positives are expected.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the ESCU saved search name contains the detection title):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*FodHelper UAC Bypass*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
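\n\nIllustrative Detection Logic (Sketch)\n\nThe authoritative detection ships in ESCU. As a rough sketch only, the core signal (fodhelper.exe spawning a child process) could be expressed against the accelerated Endpoint data model as follows; pairing it with registry telemetry on the ms-settings handler keys, as the full detection does, raises fidelity:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=fodhelper.exe\n  by Processes.dest Processes.user Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",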
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “FodHelper UAC Bypass” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.127",
              "n": "Fsutil Zeroing File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'fsutil' command with the 'setzerodata' parameter, which zeros out a target file. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because it is a technique used by ransomware, such as LockBit, to evade detection by erasing its malware path after encrypting the host. If confirmed malicious, this action could hinder forensic investi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://app.any.run/tasks/e0ac072d-58c9-4f53-8a3b-3e491c7ac5db/, https://news.sophos.com/en-us/2020/04/24/lockbit-ransomware-borrows-tricks-to-keep-up-with-revil-and-maze/",
              "mitre": [
                "T1070"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Fsutil Zeroing File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Fsutil Zeroing File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the ESCU saved search name contains the detection title):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Fsutil Zeroing File*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
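\n\nIllustrative Detection Logic (Sketch)\n\nThe authoritative detection ships in ESCU. As a rough sketch only, the command-line signal described above (fsutil invoked with the setzerodata parameter) could be expressed as; the grouping fields are illustrative assumptions:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=fsutil.exe AND Processes.process=\"*setzerodata*\"\n  by Processes.dest Processes.user Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",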
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Fsutil Zeroing File” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.128",
              "n": "Get ADDefaultDomainPasswordPolicy with Powershell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` running the `Get-ADDefaultDomainPasswordPolicy` cmdlet, which is used to retrieve the password policy in a Windows domain. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. Monitoring this activity is crucial as it can indicate attempts by adversaries to gather information about domain policies for situational awareness and Active Directory discov…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"cmd.exe\"\n            OR\n            Processes.process_name=\"powershell*\"\n        )\n        AND Processes.process = \"*Get-ADDefaultDomainPasswordPolicy*\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `get_addefaultdomainpasswordpolicy_with_powershell_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://github.com/S1ckB0y1337/Active-Directory-Exploitation-Cheat-Sheet, https://attack.mitre.org/techniques/T1201/, https://learn.microsoft.com/en-us/powershell/module/activedirectory/get-addefaultdomainpasswordpolicy?view=windowsserver2019-ps",
              "mitre": [
                "T1201"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get ADDefaultDomainPasswordPolicy with Powershell\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1201. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get ADDefaultDomainPasswordPolicy with Powershell\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Get ADDefaultDomainPasswordPolicy with Powershell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.129",
              "n": "Get ADUser with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with command-line arguments used to enumerate domain users via the `Get-ADUser` cmdlet. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it may indicate an attempt by adversaries to gather information about domain users for situational awareness and Active Directory discovery. If confirmed malicious, this behavior could lead t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"cmd.exe\"\n            OR\n            Processes.process_name=\"powershell*\"\n        )\n        AND Processes.process = \"*Get-ADUser*\" AND Processes.process = \"*-filter*\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `get_aduser_with_powershell_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://www.blackhillsinfosec.com/red-blue-purple/, https://attack.mitre.org/techniques/T1087/002/, https://learn.microsoft.com/en-us/powershell/module/activedirectory/get-aduser?view=windowsserver2019-ps",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get ADUser with PowerShell\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get ADUser with PowerShell\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Get ADUser with PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.130",
              "n": "Get ADUserResultantPasswordPolicy with Powershell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` running the `Get-ADUserResultantPasswordPolicy` cmdlet, which is used to obtain the password policy in a Windows domain. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it indicates potential enumeration of domain policies, a common tactic for situational awareness and Active Directory discovery by adversaries. If confirmed m…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://github.com/S1ckB0y1337/Active-Directory-Exploitation-Cheat-Sheet, https://attack.mitre.org/techniques/T1201/, https://learn.microsoft.com/en-us/powershell/module/activedirectory/get-aduserresultantpasswordpolicy?view=windowsserver2019-ps",
              "mitre": [
                "T1201"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get ADUserResultantPasswordPolicy with Powershell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1201. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get ADUserResultantPasswordPolicy with Powershell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Get ADUserResultantPasswordPolicy with Powershell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.131",
              "n": "Get DomainPolicy with Powershell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` running the `Get-DomainPolicy` cmdlet, which is used to retrieve password policies in a Windows domain. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it indicates potential reconnaissance efforts by adversaries to gather domain policy information, which is crucial for planning further attacks. If confirmed malicious, this c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://github.com/S1ckB0y1337/Active-Directory-Exploitation-Cheat-Sheet, https://powersploit.readthedocs.io/en/latest/Recon/Get-DomainPolicy/, https://attack.mitre.org/techniques/T1201/",
              "mitre": [
                "T1201"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get DomainPolicy with Powershell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1201. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get DomainPolicy with Powershell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Get DomainPolicy with Powershell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.132",
              "n": "Get DomainUser with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with command-line arguments used to enumerate domain users via the `Get-DomainUser` command. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions mapped to the `Processes` node of the `Endpoint` data model. This activity is significant as it indicates potential reconnaissance efforts by adversaries or Red Teams using PowerView for Active Directory dis…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://powersploit.readthedocs.io/en/latest/Recon/Get-DomainUser/",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get DomainUser with PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get DomainUser with PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.133",
              "n": "Get-ForestTrust with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Get-ForestTrust command via PowerShell, commonly used by adversaries to gather domain trust information. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. Identifying this activity is crucial as it indicates potential reconnaissance efforts to map out domain trusts, which can inform further attacks. If confirmed malicious, this activity could allow attackers t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives as this requires an active Administrator or adversary to bring in, import, and execute.",
              "refs": "https://powersploit.readthedocs.io/en/latest/Recon/Get-ForestTrust/",
              "mitre": [
                "T1482"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get-ForestTrust with PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1482. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get-ForestTrust with PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives as this requires an active Administrator or adversary to bring in, import, and execute.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Get-ForestTrust with PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.134",
              "n": "Get WMIObject Group Discovery with Script Block Logging",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-WMIObject Win32_Group` command using PowerShell Script Block Logging (EventCode=4104). This method captures the full command sent to PowerShell, allowing for detailed analysis. Identifying group information on an endpoint is not inherently malicious but can be suspicious based on context such as time, endpoint, and user. This activity is significant as it may indicate reconnaissance efforts by an attacker. If confirmed malicious, it could …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104 ScriptBlockText = \"*Get-WMIObject*\" AND ScriptBlockText = \"*Win32_Group*\"\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `get_wmiobject_group_discovery_with_script_block_logging_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present. Tune as needed.",
              "refs": "https://www.splunk.com/en_us/blog/security/powershell-detections-threat-research-release-august-2021.html, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1069.001/T1069.001.md, https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/",
              "mitre": [
                "T1069.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get WMIObject Group Discovery with Script Block Logging\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get WMIObject Group Discovery with Script Block Logging\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present. Tune as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Get WMIObject Group Discovery with Script Block Logging” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.135",
              "n": "GetAdComputer with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with the `Get-AdComputer` commandlet, which is used to discover remote systems within a domain. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because it indicates potential reconnaissance efforts by adversaries to map out domain computers, which is a common step in the attack lifecycle. If confirmed malicious, this …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"powershell.exe\"\n        )\n        (Processes.process=*Get-AdComputer*)\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `getadcomputer_with_powershell_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/",
              "mitre": [
                "T1018"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetAdComputer with PowerShell\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetAdComputer with PowerShell\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “GetAdComputer with PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.136",
              "n": "GetAdGroup with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with the `Get-AdGroup` commandlet, which is used to query domain groups in a Windows Domain. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. Monitoring this activity is crucial as it may indicate an adversary or Red Team enumerating domain groups for situational awareness and Active Directory discovery. If confirmed malicious, this activity could…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"powershell.exe\"\n        )\n        (Processes.process=*Get-AdGroup*)\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `getadgroup_with_powershell_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/, https://learn.microsoft.com/en-us/powershell/module/activedirectory/get-adgroup?view=windowsserver2019-ps",
              "mitre": [
                "T1069.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetAdGroup with PowerShell\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetAdGroup with PowerShell\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “GetAdGroup with PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.137",
              "n": "GetCurrent User with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with command-line arguments invoking the `GetCurrent` method of the WindowsIdentity .NET class. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as adversaries may use this method to identify the logged-in user on a compromised endpoint, aiding in situational awareness and Active Directory discovery. If confirmed mali…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"powershell.exe\"\n        )\n        (Processes.process=*System.Security.Principal.WindowsIdentity* OR Processes.process=*GetCurrent()*)\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `getcurrent_user_with_powershell_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1033/",
              "mitre": [
                "T1033"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetCurrent User with PowerShell\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1033. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetCurrent User with PowerShell\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “GetCurrent User with PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.138",
              "n": "GetCurrent User with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `GetCurrent` method from the WindowsIdentity .NET class using PowerShell Script Block Logging (EventCode=4104). This method identifies the current Windows user. The detection leverages PowerShell script block logs to identify when this method is called. This activity is significant because adversaries and Red Teams may use it to gain situational awareness and perform Active Directory discovery on compromised endpoints. If confirmed malicious, t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104 ScriptBlockText = \"*[System.Security.Principal.WindowsIdentity]*\"  ScriptBlockText = \"*GetCurrent()*\"\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `getcurrent_user_with_powershell_script_block_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerShell commandlet for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1033/, https://learn.microsoft.com/en-us/dotnet/api/system.security.principal.windowsidentity.getcurrent?view=net-6.0&viewFallbackFrom=net-5.0",
              "mitre": [
                "T1033"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetCurrent User with PowerShell Script Block\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1033. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetCurrent User with PowerShell Script Block\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerShell commandlet for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “GetCurrent User with PowerShell Script Block” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.139",
              "n": "GetDomainComputer with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with command-line arguments that utilize `Get-DomainComputer` to discover remote systems. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as `Get-DomainComputer` is part of PowerView, a tool often used by adversaries for domain enumeration and situational awareness. If confirmed malicious, this activity could allow a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use PowerView for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/",
              "mitre": [
                "T1018"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetDomainComputer with PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetDomainComputer with PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use PowerView for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “GetDomainComputer with PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.140",
              "n": "GetDomainController with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with the `Get-DomainController` command, which is used to discover remote systems within a Windows domain. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. Monitoring this activity is crucial as it may indicate an attempt to enumerate domain controllers, a common tactic in Active Directory discovery. If confirmed malicious, this activity could all…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"powershell.exe\"\n        )\n        (Processes.process=*Get-DomainController*)\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `getdomaincontroller_with_powershell_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use PowerView for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/, https://powersploit.readthedocs.io/en/latest/Recon/Get-DomainController/",
              "mitre": [
                "T1018"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetDomainController with PowerShell\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetDomainController with PowerShell\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use PowerView for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “GetDomainController with PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.141",
              "n": "GetDomainGroup with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with command-line arguments that query for domain groups using `Get-DomainGroup`. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions mapped to the `Processes` node of the `Endpoint` data model. Monitoring this activity is crucial as `Get-DomainGroup` is part of PowerView, a tool often used by adversaries for domain enumeration and situational awaren…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/, https://powersploit.readthedocs.io/en/latest/Recon/Get-DomainGroup/",
              "mitre": [
                "T1069.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetDomainGroup with PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetDomainGroup with PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.142",
              "n": "GetLocalUser with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with the `Get-LocalUser` commandlet, which is used to query local user accounts. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. Monitoring this activity is significant because adversaries and Red Teams may use it to enumerate local users for situational awareness and Active Directory discovery. If confirmed malicious, this activity could allow a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"powershell.exe\"\n        )\n        (Processes.process=*Get-LocalUser*)\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `getlocaluser_with_powershell_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerShell commandlet for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1087/001/",
              "mitre": [
                "T1087.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetLocalUser with PowerShell\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetLocalUser with PowerShell\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerShell commandlet for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “GetLocalUser with PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.143",
              "n": "GetNetTcpconnection with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `powershell.exe` with the `Get-NetTcpConnection` command, which lists current TCP connections on a system. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. Monitoring this activity is significant as it may indicate an adversary or Red Team performing network reconnaissance or situational awareness. If confirmed malicious, this activity could allow attackers to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"powershell.exe\"\n        )\n        (Processes.process=*Get-NetTcpConnection*)\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `getnettcpconnection_with_powershell_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1049/, https://learn.microsoft.com/en-us/powershell/module/nettcpip/get-nettcpconnection?view=windowsserver2019-ps",
              "mitre": [
                "T1049"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetNetTcpconnection with PowerShell\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1049. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetNetTcpconnection with PowerShell\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “GetNetTcpconnection with PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.144",
              "n": "GetWmiObject Ds Computer with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with command-line arguments that utilize the `Get-WmiObject` cmdlet to discover remote systems, specifically targeting the `DS_Computer` parameter. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it indicates potential reconnaissance efforts by adversaries to enumerate domain computers and gather situational aware…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/",
              "mitre": [
                "T1018"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetWmiObject Ds Computer with PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetWmiObject Ds Computer with PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “GetWmiObject Ds Computer with PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.145",
              "n": "GetWmiObject Ds Group with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `powershell.exe` with command-line arguments used to query domain groups via the `Get-WmiObject` cmdlet and the `-class ds_group` parameter. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it indicates potential reconnaissance efforts by adversaries to enumerate domain groups, which is a common step in Active Directory Discover…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/, https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/get-wmiobject?view=powershell-5.1",
              "mitre": [
                "T1069.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetWmiObject Ds Group with PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetWmiObject Ds Group with PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “GetWmiObject Ds Group with PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.146",
              "n": "GetWmiObject DS User with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with command-line arguments used to query domain users via the `Get-WmiObject` cmdlet and `-class ds_user` parameter. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it indicates potential reconnaissance efforts by adversaries to enumerate domain users, which is a common step in Active Directory Discovery. If conf…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://jpcertcc.github.io/ToolAnalysisResultSheet/details/dsquery.htm",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetWmiObject DS User with PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetWmiObject DS User with PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “GetWmiObject DS User with PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.147",
              "n": "GetWmiObject User Account with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with command-line arguments that utilize the `Get-WmiObject` cmdlet and the `Win32_UserAccount` parameter to query local user accounts. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it may indicate an attempt by adversaries to enumerate user accounts for situational awareness or Active Directory discovery. If confirmed mali…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"powershell.exe\"\n        )\n        (Processes.process=*Get-WmiObject* AND Processes.process=*Win32_UserAccount*)\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `getwmiobject_user_account_with_powershell_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerShell commandlet for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1087/001/",
              "mitre": [
                "T1087.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetWmiObject User Account with PowerShell\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetWmiObject User Account with PowerShell\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerShell commandlet for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “GetWmiObject User Account with PowerShell” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.148",
              "n": "GPUpdate with no Command Line Arguments with Network",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of gpupdate.exe without command line arguments and with an active network connection. This behavior is identified using Endpoint Detection and Response (EDR) telemetry, focusing on process execution and network traffic data. It is significant because gpupdate.exe typically runs with specific arguments, and its execution without them, especially with network activity, is often associated with malicious software like Cobalt Strike. If confirmed maliciou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 3 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives may be present in small environments. Tuning may be required based on parent process.",
              "refs": "https://raw.githubusercontent.com/xx0hcd/Malleable-C2-Profiles/0ef8cf4556e26f6d4190c56ba697c2159faa5822/crimeware/trick_ryuk.profile, https://www.cobaltstrike.com/blog/learn-pipe-fitting-for-all-of-your-offense-projects/",
              "mitre": [
                "T1055"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GPUpdate with no Command Line Arguments with Network\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GPUpdate with no Command Line Arguments with Network\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives may be present in small environments. Tuning may be required based on parent process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “GPUpdate with no Command Line Arguments with Network” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.149",
              "n": "Headless Browser Usage",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the usage of headless browsers within an organization. It identifies processes containing the \"--headless\" and \"--disable-gpu\" command line arguments, which are indicative of headless browsing. This detection leverages data from the Endpoint.Processes datamodel to identify such processes. Monitoring headless browser usage is significant as these tools can be exploited by adversaries for malicious activities like web scraping, automated testing, and undetected web i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature for framework testing that may cause some false positive.",
              "refs": "https://www.trendmicro.com/en_us/research/26/a/analysis-of-the-evelyn-stealer-campaign.html, https://cert.gov.ua/article/5702579",
              "mitre": [
                "T1497",
                "T1564.003"
              ],
              "dtype": "parent_process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Headless Browser Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1497, T1564.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Headless Browser Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature for framework testing that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.150",
              "n": "Hide User Account From Sign-In Screen",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious registry modification that hides a user account from the Windows Login screen. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path \"*\\\\Windows NT\\\\CurrentVersion\\\\Winlogon\\\\SpecialAccounts\\\\Userlist*\" with a value of \"0x00000000\". This activity is significant as it may indicate an adversary attempting to create a hidden admin account to avoid detection and maintain persistence on the compromised…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "registry_value_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Hide User Account From Sign-In Screen\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to registry value entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Hide User Account From Sign-In Screen\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.151",
              "n": "Hiding Files And Directories With Attrib exe",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the Windows binary attrib.exe to hide files or directories by marking them with specific flags. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line arguments that include the \"+h\" flag. This activity is significant because hiding files can be a tactic used by attackers to conceal malicious files or tools from users and security software. If confirmed malicious, this behavior could allow an attacker to persist in …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some applications and users may legitimately use attrib.exe to interact with the files.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1222.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Hiding Files And Directories With Attrib exe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Hiding Files And Directories With Attrib exe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some applications and users may legitimately use attrib.exe to interact with the files.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
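              "_spl_sketch": "The authoritative detection logic ships inside the ESCU correlation search; the following is only a minimal raw sketch of the idea, assuming CIM-accelerated Endpoint data and plain summariesonly syntax in place of the ESCU macros. The flag pattern and field names are assumptions to adjust for your environment:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"attrib.exe\" Processes.process=\"* +h *\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```",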
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Hiding Files And Directories With Attrib exe” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.152",
              "n": "High Process Termination Frequency",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a high frequency of process termination events on a computer within a short period. It leverages Sysmon EventCode 5 logs to detect instances where 15 or more processes are terminated within a 3-second window. This behavior is significant as it is commonly associated with ransomware attempting to avoid exceptions during file encryption. If confirmed malicious, this activity could indicate an active ransomware attack, potentially leading to widespread file encrypt…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 5",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 5 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin or user tool that can terminate multiple process.",
              "refs": "https://www.mandiant.com/resources/fin11-email-campaigns-precursor-for-ransomware-data-theft, https://blog.virustotal.com/2020/11/keep-your-friends-close-keep-ransomware.html",
              "mitre": [
                "T1486"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"High Process Termination Frequency\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 5. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"High Process Termination Frequency\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin or user tool that can terminate multiple process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
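              "_spl_sketch": "The packaged ESCU search is authoritative; this raw sketch only illustrates the documented idea (15 or more Sysmon EventCode 5 terminations within a 3-second window). The index name and Sysmon source shown here are assumptions; substitute your own:\n\n```spl\nindex=endpoint source=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=5\n| bin _time span=3s\n| stats count as terminations values(Image) as terminated_processes by _time, Computer\n| where terminations >= 15\n```",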
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “High Process Termination Frequency” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.153",
              "n": "Java Writing JSP File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the Java process writing a .jsp file to disk, which may indicate a web shell being deployed. It leverages data from the Endpoint datamodel, specifically monitoring process and filesystem activities. This activity is significant because web shells can provide attackers with remote control over the compromised server, leading to further exploitation. If confirmed malicious, this could allow unauthorized access, data exfiltration, or further compromise of the affected…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1 AND Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 AND Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible and filtering may be required. Restrict by assets or filter known jsp files that are common for the environment.",
              "refs": "https://www.microsoft.com/security/blog/2022/04/04/springshell-rce-vulnerability-guidance-for-protecting-against-and-detecting-cve-2022-22965/, https://github.com/TheGejr/SpringShell, https://www.tenable.com/blog/spring4shell-faq-spring-framework-remote-code-execution-vulnerability",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Java Writing JSP File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1 AND Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Java Writing JSP File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible and filtering may be required. Restrict by assets or filter known jsp files that are common for the environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
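              "_spl_sketch": "The packaged ESCU search is authoritative and also ties each .jsp write back to a java process; this minimal sketch covers only the filesystem half of the idea, assuming CIM-accelerated Endpoint data and plain summariesonly syntax in place of the ESCU macros:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Filesystem\n  where Filesystem.file_name=\"*.jsp\"\n  by Filesystem.dest Filesystem.file_path Filesystem.file_name\n```",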
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.154",
              "n": "Jscript Execution Using Cscript App",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of JScript using the cscript.exe process. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and command-line telemetry. This behavior is significant because JScript files are typically executed by wscript.exe, making cscript.exe execution unusual and potentially indicative of malicious activity, such as the FIN7 group's tactics. If confirmed malicious, this activity could allow attackers to execute arbitrary scri…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mandiant.com/resources/fin7-pursuing-an-enigmatic-and-evasive-global-criminal-operation, https://attack.mitre.org/groups/G0046/",
              "mitre": [
                "T1059.007"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Jscript Execution Using Cscript App\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Jscript Execution Using Cscript App\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
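              "_spl_sketch": "The packaged ESCU search is authoritative; this is a minimal sketch of the idea (cscript.exe forcing the JScript engine via the //E switch), assuming CIM-accelerated Endpoint data and plain summariesonly syntax in place of the ESCU macros:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"cscript.exe\" Processes.process=\"*//e:jscript*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```",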
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Jscript Execution Using Cscript App” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.155",
              "n": "Kerberos User Enumeration",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an unusual number of Kerberos Ticket Granting Ticket (TGT) requests for non-existing users from a single source endpoint. It leverages Event ID 4768 and identifies anomalies using the 3-sigma statistical rule. This behavior is significant as it may indicate an adversary performing a user enumeration attack against Active Directory. If confirmed malicious, the attacker could validate a list of usernames, potentially leading to further attacks such as brute force or …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4768",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4768 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems.",
              "refs": "https://github.com/ropnop/kerbrute, https://attack.mitre.org/techniques/T1589/002/, https://redsiege.com/tools-techniques/2020/04/user-enumeration-part-3-windows/",
              "mitre": [
                "T1589.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kerberos User Enumeration\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4768. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1589.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kerberos User Enumeration\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
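              "_spl_sketch": "The packaged ESCU search is authoritative; this raw sketch only illustrates the 3-sigma idea over Event ID 4768 requests for unknown principals (Status 0x6). The index and XML field names are assumptions based on the Windows TA; adjust to your environment:\n\n```spl\nindex=wineventlog source=\"XmlWinEventLog:Security\" EventCode=4768 Status=0x6 TargetUserName!=\"*$\"\n| bin _time span=2m\n| stats dc(TargetUserName) as unique_users by _time, IpAddress\n| eventstats avg(unique_users) as avg_u stdev(unique_users) as stdev_u by IpAddress\n| where unique_users > avg_u + 3 * stdev_u\n```",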
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Kerberos User Enumeration” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.156",
              "n": "Linux Account Manipulation Of SSH Config and Keys",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of SSH keys on a Linux machine. It leverages filesystem event logs to identify when files within \"/etc/ssh/*\" or \"~/.ssh/*\" are deleted. This activity is significant because attackers may delete or modify SSH keys to evade security measures or as part of a destructive payload, similar to the AcidRain malware. If confirmed malicious, this behavior could lead to impaired security features, hindered forensic investigations, or further unauthorized access,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://www.sentinelone.com/labs/acidrain-a-modem-wiper-rains-down-on-europe/",
              "mitre": [
                "T1070.004",
                "T1485"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Account Manipulation Of SSH Config and Keys\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004, T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Account Manipulation Of SSH Config and Keys\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
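              "_spl_sketch": "The packaged ESCU search is authoritative; this is a minimal sketch of the idea (deletions under the SSH config and key paths), assuming CIM-accelerated Endpoint data and plain summariesonly syntax in place of the ESCU macros:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Filesystem\n  where Filesystem.action=deleted Filesystem.file_path IN (\"/etc/ssh/*\", \"*/.ssh/*\")\n  by Filesystem.dest Filesystem.file_path Filesystem.file_name\n```",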
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Account Manipulation Of SSH Config and Keys” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.157",
              "n": "Linux Add Files In Known Crontab Directories",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects unauthorized file creation in known crontab directories on Unix-based systems. It leverages filesystem data to identify new files in directories such as /etc/cron* and /var/spool/cron/*. This activity is significant as it may indicate an attempt by threat actors or malware to establish persistence on a compromised host. If confirmed malicious, this could allow attackers to execute arbitrary code at scheduled intervals, potentially leading to further system compromi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can create file in crontab folders for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.sandflysecurity.com/blog/detecting-cronrat-malware-on-linux-instantly/, https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/",
              "mitre": [
                "T1053.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Add Files In Known Crontab Directories\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Add Files In Known Crontab Directories\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can create file in crontab folders for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
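              "_spl_sketch": "The packaged ESCU search is authoritative; this is a minimal sketch of the idea (new files appearing under known cron paths), assuming CIM-accelerated Endpoint data and plain summariesonly syntax in place of the ESCU macros:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Filesystem\n  where Filesystem.file_path IN (\"/etc/cron*\", \"/var/spool/cron/*\")\n  by Filesystem.dest Filesystem.file_path Filesystem.file_name\n```",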
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Add Files In Known Crontab Directories” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.158",
              "n": "Linux Add User Account",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of new user accounts on Linux systems using commands like \"useradd\" or \"adduser.\" It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as adversaries often create new user accounts to establish persistence on compromised hosts. If confirmed malicious, this could allow attackers to maintain access, escalate privileges, and further compromise the system, p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1, Cisco Isovalent Process Exec",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name IN (\"useradd\", \"adduser\")\n        OR\n        Processes.process IN (\"*useradd *\", \"*adduser *\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `linux_add_user_account_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon for Linux EventID 1, Cisco Isovalent Process Exec ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://linuxize.com/post/how-to-create-users-in-linux-using-the-useradd-command/",
              "mitre": [
                "T1136.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Add User Account\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1, Cisco Isovalent Process Exec. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Add User Account\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Add User Account” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.159",
              "n": "Linux Adding Crontab Using List Parameter",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications to cron jobs on Linux systems using the crontab command with list parameters. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it may indicate an attempt to establish persistence or execute malicious code on a schedule. If confirmed malicious, the impact could include unauthorized code execution, data destruction, or other damaging out…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name = \"crontab\" Processes.process= \"* -l*\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `linux_adding_crontab_using_list_parameter_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/, https://cert.gov.ua/article/39518",
              "mitre": [
                "T1053.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Adding Crontab Using List Parameter\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Adding Crontab Using List Parameter\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Adding Crontab Using List Parameter” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.160",
              "n": "Linux APT Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the Advanced Package Tool (APT) or apt-get with elevated privileges via sudo on Linux systems. It leverages Endpoint Detection and Response (EDR) telemetry to identify processes where APT commands are executed with sudo rights. This activity is significant because it indicates a user can run system commands as root, potentially leading to unauthorized root shell access. If confirmed malicious, this could allow an attacker to escalate privileges, execute …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1, Cisco Isovalent Process Exec",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/apt/, https://www.digitalocean.com/community/tutorials/what-is-apt",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux APT Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1, Cisco Isovalent Process Exec. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux APT Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux APT Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.161",
              "n": "Linux At Allow Config File Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of the /etc/at.allow or /etc/at.deny configuration files in Linux. It leverages file creation events from the Endpoint datamodel to identify when these files are created. This activity is significant as these files control user permissions for the \"at\" scheduling application and can be abused by attackers to establish persistence. If confirmed malicious, this could allow unauthorized execution of malicious code, leading to potential data theft or furth…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can create this file for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://linuxize.com/post/at-command-in-linux/",
              "mitre": [
                "T1053.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux At Allow Config File Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux At Allow Config File Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can create this file for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.162",
              "n": "Linux At Application Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the \"At\" application in Linux, which can be used by attackers to create persistence entries on a compromised host. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and parent process names associated with \"at\" or \"atd\". This activity is significant because the \"At\" application can be exploited to maintain unauthorized access or deliver additional malicious payloads. If confirmed malicious, t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://attack.mitre.org/techniques/T1053/001/, https://www.linkedin.com/pulse/getting-attacker-ip-address-from-malicious-linux-job-craig-rowland/",
              "mitre": [
                "T1053.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux At Application Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux At Application Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux At Application Execution” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.163",
              "n": "Linux Auditd Add User Account",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of new user accounts on Linux systems using commands like \"useradd\" or \"adduser.\" It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as adversaries often create new user accounts to establish persistence on compromised hosts. If confirmed malicious, this could allow attackers to maintain access, escalate privileges, and further compromise the system, p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Proctitle",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Proctitle ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://linuxize.com/post/how-to-create-users-in-linux-using-the-useradd-command/",
              "mitre": [
                "T1136.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Add User Account\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Proctitle. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Add User Account\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Auditd Add User Account” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.164",
              "n": "Linux Auditd At Application Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the \"At\" application in Linux, which can be used by attackers to create persistence entries on a compromised host. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and parent process names associated with \"at\" or \"atd\". This activity is significant because the \"At\" application can be exploited to maintain unauthorized access or deliver additional malicious payloads. If confirmed malicious, t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Syscall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Syscall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://attack.mitre.org/techniques/T1053/001/, https://www.linkedin.com/pulse/getting-attacker-ip-address-from-malicious-linux-job-craig-rowland/",
              "mitre": [
                "T1053.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd At Application Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Syscall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd At Application Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Auditd At Application Execution” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.165",
              "n": "Linux Auditd Change File Owner To Root",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the 'chown' command to change a file owner to 'root' on a Linux system. It leverages Linux Auditd telemetry, specifically monitoring command-line executions and process details. This activity is significant as it may indicate an attempt to escalate privileges by adversaries, malware, or red teamers. If confirmed malicious, this action could allow an attacker to gain root-level access, leading to full control over the compromised host and potential persis…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Proctitle",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Proctitle ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://unix.stackexchange.com/questions/101073/how-to-change-permissions-from-root-user-to-all-users, https://askubuntu.com/questions/617850/changing-from-user-to-superuser",
              "mitre": [
                "T1222.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Change File Owner To Root\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Proctitle. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Change File Owner To Root\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Auditd Change File Owner To Root” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.166",
              "n": "Linux Auditd Data Destruction Command",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a Unix shell command designed to wipe root directories on a Linux host. It leverages data from Linux Auditd, focusing on the 'rm' command with force recursive deletion and the '--no-preserve-root' option. This activity is significant as it indicates potential data destruction attempts, often associated with malware like Awfulshred. If confirmed malicious, this behavior could lead to severe data loss, system instability, and compromised integrity of…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Proctitle",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Proctitle ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://cert.gov.ua/article/3718487, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/overview-of-the-cyber-weapons-used-in-the-ukraine-russia-war/",
              "mitre": [
                "T1485"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Data Destruction Command\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Proctitle. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Data Destruction Command\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scope `ss_name` to this detection; a bare wildcard would list every scheduled alert in the deployment):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Linux Auditd Data Destruction Command*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
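\n\nBefore enabling, you can also spot-check the raw auditd feed for the command pattern this detection targets. A hedged example; the sourcetype names below are common auditd assignments and may differ in your deployment:\n\n```spl\nindex=* sourcetype IN (\"linux:audit\", \"linux_audit\", \"auditd\") \"rm\" \"--no-preserve-root\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n```\n\nAny hit outside a sanctioned decommissioning window deserves immediate investigation.",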
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Auditd Data Destruction Command” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.167",
              "n": "Linux Auditd Hardware Addition Swapoff",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the \"swapoff\" command, which disables the swapping of paging devices on a Linux system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. This activity is significant because disabling swap can be a tactic used by malware, such as Awfulshred, to evade detection and hinder forensic analysis. If confirmed malicious, this action could allow an attacker to manipulate system memory management, poten…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrator may disable swapping of devices in a linux host. Filter is needed.",
              "refs": "https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/overview-of-the-cyber-weapons-used-in-the-ukraine-russia-war/",
              "mitre": [
                "T1200"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Hardware Addition Swapoff\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1200. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Hardware Addition Swapoff\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrator may disable swapping of devices in a linux host. Filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scope `ss_name` to this detection; a bare wildcard would list every scheduled alert in the deployment):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Linux Auditd Hardware Addition Swapoff*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
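\n\nTo prototype the suppression suggested above before committing it to the detection’s filter macro, test an allowlist against the risk events this detection produces. A sketch; the `swapoff_admin_allowlist` lookup and its `host`/`approved` fields are hypothetical, and `index=risk` assumes the ES default:\n\n```spl\nindex=risk search_name=\"*Linux Auditd Hardware Addition Swapoff*\"\n| lookup swapoff_admin_allowlist host AS risk_object OUTPUT approved\n| where isnull(approved)\n```\n\nRows that survive the filter should still raise risk; everything filtered out belongs in the filter macro.",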
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Auditd Hardware Addition Swapoff” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.168",
              "n": "Linux Auditd Hidden Files And Directories Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious creation of hidden files and directories, which may indicate an attacker's attempt to conceal malicious activities or unauthorized data. Hidden files and directories are often used to evade detection by security tools and administrators, providing a stealthy means for storing malware, logs, or sensitive information. By monitoring for unusual or unauthorized creation of hidden files and directories, this analytic helps identify potential attempts to hide …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html, https://github.com/peass-ng/PEASS-ng/tree/master/linPEAS",
              "mitre": [
                "T1083"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Hidden Files And Directories Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1083. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Hidden Files And Directories Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scope `ss_name` to this detection; a bare wildcard would list every scheduled alert in the deployment):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Linux Auditd Hidden Files And Directories Creation*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
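\n\nTo gauge how noisy this pattern is in your environment before enabling, spot-check raw auditd EXECVE records. A hedged sketch; the `a0`/`a1` argument fields come from auditd’s EXECVE record format, and sourcetype names vary by add-on:\n\n```spl\nindex=* sourcetype IN (\"linux:audit\", \"linux_audit\", \"auditd\") type=EXECVE a0 IN (\"touch\", \"mkdir\") (a1=\".*\" OR a1=\"*/.*\")\n| stats count by host, a0, a1\n| sort -count\n```\n\nA high baseline of benign dotfile creation (shell profiles, tool caches) means the filter macro will need tuning before the detection is useful.",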
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Auditd Hidden Files And Directories Creation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.169",
              "n": "Linux Auditd Preload Hijack Library Calls",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the LD_PRELOAD environment variable to hijack or hook library functions on a Linux platform. It leverages data from Linux Auditd, focusing on process execution logs that include command-line details. This activity is significant because adversaries, malware authors, and red teamers commonly use this technique to gain elevated privileges and establish persistence on a compromised machine. If confirmed malicious, this behavior could allow attackers to exec…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://compilepeace.medium.com/memory-malware-part-0x2-writing-userland-rootkits-via-ld-preload-30121c8343d5",
              "mitre": [
                "T1574.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Preload Hijack Library Calls\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Preload Hijack Library Calls\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scope `ss_name` to this detection; a bare wildcard would list every scheduled alert in the deployment):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Linux Auditd Preload Hijack Library Calls*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
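\n\nBecause some build systems and profiling tools set LD_PRELOAD legitimately, it helps to baseline usage before enabling. A hedged sketch; sourcetype names vary by add-on, and `proctitle` may be hex-encoded depending on how your auditd data is parsed:\n\n```spl\nindex=* sourcetype IN (\"linux:audit\", \"linux_audit\", \"auditd\") \"LD_PRELOAD\"\n| stats count values(proctitle) as proctitles by host\n| sort -count\n```\n\nHosts with no legitimate reason to set LD_PRELOAD that appear here are worth a closer look.",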
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Auditd Preload Hijack Library Calls” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.170",
              "n": "Linux Auditd Shred Overwrite Command",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'shred' command on a Linux machine, which is used to overwrite files to make them unrecoverable. It leverages data from Linux Auditd, focusing on process names and command-line arguments. This activity is significant because the 'shred' command can be used in destructive attacks, such as those seen in the Industroyer2 malware targeting energy facilities. If confirmed malicious, this activity could lead to the permanent destruction of critical f…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Proctitle",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Proctitle ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/, https://cert.gov.ua/article/39518",
              "mitre": [
                "T1485"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Shred Overwrite Command\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Proctitle. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Shred Overwrite Command\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scope `ss_name` to this detection; a bare wildcard would list every scheduled alert in the deployment):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Linux Auditd Shred Overwrite Command*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
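\n\nShred also has legitimate uses (secure-delete policies, disk sanitization jobs), so baseline it before enabling. A hedged sketch; sourcetype names vary by add-on:\n\n```spl\nindex=* sourcetype IN (\"linux:audit\", \"linux_audit\", \"auditd\") \"shred\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n| sort -count\n```\n\nBursts of shred activity on hosts with no sanitization mandate, especially against logs or databases, are exactly the signal this detection exists to catch.",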
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Auditd Shred Overwrite Command” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.171",
              "n": "Linux Auditd Stop Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to stop a service on Linux systems. It leverages data from Linux Auditd. This activity is significant as adversaries often stop or terminate security or critical services to disable defenses or disrupt operations, as seen in malware like Industroyer2. If confirmed malicious, this could lead to the disabling of security mechanisms, allowing attackers to persist, escalate privileges, or deploy destructive payloads, severely impacting system integrity and ava…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Service Stop",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Linux Auditd Service Stop ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/, https://cert.gov.ua/article/39518",
              "mitre": [
                "T1489"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Stop Services\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Service Stop. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Stop Services\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. 
To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
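\n\nFor the hunt itself, a hedged starting point over raw auditd service events; systemd writes SERVICE_STOP records to the audit log with a `unit` field naming the stopped service, and sourcetype names vary by add-on:\n\n```spl\nindex=* sourcetype IN (\"linux:audit\", \"linux_audit\", \"auditd\") type=SERVICE_STOP\n| stats count min(_time) as firstTime max(_time) as lastTime by host, unit\n| sort -count\n```\n\nStops of security tooling (auditd itself, EDR agents, logging daemons) deserve the closest look.",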
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Auditd Stop Services” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.172",
              "n": "Linux AWK Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the AWK command with elevated privileges to execute system commands. It leverages Endpoint Detection and Response (EDR) telemetry, specifically monitoring processes that include \"sudo,\" \"awk,\" and \"BEGIN*system\" in their command lines. This activity is significant because it indicates a potential privilege escalation attempt, where a user could gain root access by executing commands as the root user. If confirmed malicious, this could allow an attacker t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are present based on automated tooling or system administrative usage. Filter as needed.",
              "refs": "https://www.hacknos.com/awk-privilege-escalation/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux AWK Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux AWK Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are present based on automated tooling or system administrative usage. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scope `ss_name` to this detection; a bare wildcard would list every scheduled alert in the deployment):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Linux AWK Privilege Escalation*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
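\n\nTo see the raw telemetry behind this detection, spot-check Sysmon for Linux process-creation events. A hedged sketch; `sourcetype=sysmon:linux` and the `EventCode`/`CommandLine` fields assume the Sysmon add-on’s extractions (some deployments expose `EventID` instead):\n\n```spl\nindex=* sourcetype=\"sysmon:linux\" EventCode=1 CommandLine=\"*sudo*\" CommandLine=\"*awk*\" CommandLine=\"*BEGIN*system*\"\n| stats count by host, user, CommandLine\n```\n\nLegitimate administrative one-liners will show up too; the `BEGIN {system(...)}` pattern executed under sudo is the part that matters.",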
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux AWK Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.173",
              "n": "Linux Busybox Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of BusyBox with sudo privileges, which can lead to privilege escalation on Linux systems. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where BusyBox is executed with both 'sh' and 'sudo' commands. This activity is significant because it indicates a user may be attempting to gain root access, bypassing standard security controls. If confirmed malicious, this could allow an attacker to execute …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/busybox/, https://man.archlinux.org/man/busybox.1.en",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Busybox Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Busybox Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
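              "_detectionLogicSketchSpl": "The authoritative detection logic ships inside the ESCU correlation search; this is a simplified sketch of the documented pattern (BusyBox invoked through sudo with an sh payload) against the CIM Endpoint.Processes data model. Field names come from CIM and the macros from ESCU; verify against the packaged search in Content Management before relying on it.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process=\"*sudo*\" Processes.process=\"*busybox*\" Processes.process=\"*sh*\" by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",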
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Busybox Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.174",
              "n": "Linux c89 Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'c89' command with elevated privileges, which can be used to compile and execute C programs as root. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events that include command-line arguments. This activity is significant because it indicates a potential privilege escalation attempt, allowing a user to execute arbitrary commands as root. If confirmed malicious, this could lead to ful…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/c89/, https://www.ibm.com/docs/en/zos/2.1.0?topic=guide-c89-compiler-invocation-using-host-environment-variables",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux c89 Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux c89 Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
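              "_suppressionExampleSpl": "A minimal sketch of the lookup-based suppression recommended under Tuning, assuming a hypothetical allowlist lookup linux_privesc_allowlist.csv with a dest column (in practice, put the NOT clause inside the detection’s filter macro, conventionally linux_c89_privilege_escalation_filter):\n\n```spl\n| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes where Processes.process=\"*sudo*\" Processes.process=\"*c89*\" by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n| search NOT [| inputlookup linux_privesc_allowlist.csv | fields dest]\n```",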
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux c89 Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.175",
              "n": "Linux c99 Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the c99 utility with sudo privileges, which can lead to privilege escalation on Linux systems. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. This activity is significant because it indicates a potential misuse of the c99 utility to gain root access, which is critical for maintaining system security. If confirmed malicious, this could allow an attacker to ex…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/c99/, https://pubs.opengroup.org/onlinepubs/009604499/utilities/c99.html",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux c99 Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux c99 Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
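              "_dataValidationExampleSpl": "A tighter variant of the generic data-source check in the Validation section, scoped to Sysmon for Linux. The sourcetype name is an assumption (the Splunk Add-on for Sysmon for Linux commonly emits sysmon:linux); substitute whatever your TA produces.\n\n```spl\n| tstats count where index=* sourcetype=\"sysmon:linux\" by index, host\n| sort -count\n```\nZero results usually means the add-on is missing, the sourcetype differs, or forwarders are not sending.",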
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux c99 Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.176",
              "n": "Linux Change File Owner To Root",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the 'chown' command to change a file owner to 'root' on a Linux system. It leverages Endpoint Detection and Response (EDR) telemetry, specifically monitoring command-line executions and process details. This activity is significant as it may indicate an attempt to escalate privileges by adversaries, malware, or red teamers. If confirmed malicious, this action could allow an attacker to gain root-level access, leading to full control over the compromised …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://unix.stackexchange.com/questions/101073/how-to-change-permissions-from-root-user-to-all-users, https://askubuntu.com/questions/617850/changing-from-user-to-superuser",
              "mitre": [
                "T1222.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Change File Owner To Root\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Change File Owner To Root\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
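              "_drilldownUsageExampleSpl": "The drilldown SPL above contains the ES token $dest$, so it will not run verbatim from the Search bar. A runnable ad-hoc variant, substituting the hypothetical host web-prod-01 (replace it with the entity under investigation):\n\n```spl\n| from datamodel Risk.All_Risk\n| search normalized_risk_object IN (\"web-prod-01\") starthoursago=168\n| stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" by normalized_risk_object\n```",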
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Change File Owner To Root” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.177",
              "n": "Linux Clipboard Data Copy",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the Linux 'xclip' command to copy data from the clipboard. It leverages Endpoint Detection and Response (EDR) telemetry, focusing on process names and command-line arguments related to clipboard operations. This activity is significant because adversaries can exploit clipboard data to capture sensitive information such as passwords or IP addresses. If confirmed malicious, this technique could lead to unauthorized data exfiltration, compromising sensitive…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present on Linux desktop as it may commonly be used by administrators or end users. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1115/, https://man.archlinux.org/man/xclip.1",
              "mitre": [
                "T1115"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Clipboard Data Copy\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1115. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Clipboard Data Copy\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present on Linux desktop as it may commonly be used by administrators or end users. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
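              "_schedulerCheckExampleSpl": "Complementary to the _audit check in the Validation section: confirm the correlation search is actually being scheduled and completing. The saved-search name assumes the typical ESCU convention \"ESCU - <name> - Rule\"; copy the exact name from Content Management.\n\n```spl\nindex=_internal sourcetype=scheduler savedsearch_name=\"ESCU - Linux Clipboard Data Copy - Rule\"\n| stats count by status\n```\nRows with status=success mean the search ran; skipped rows usually point to scheduler load or concurrency limits.",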
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Clipboard Data Copy” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.178",
              "n": "Linux Common Process For Elevation Control",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of common Linux processes used for elevation control, such as `chmod`, `chown`, and `setuid`. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant because these processes are often abused by adversaries to gain persistence or escalate privileges on compromised hosts. If confirmed malicious, this behavior could allow attackers to modify file attribute…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name IN (\"chmod\", \"chown\", \"fchmod\", \"fchmodat\", \"fchown\", \"fchownat\", \"fremovexattr\", \"fsetxattr\", \"lchown\", \"lremovexattr\", \"lsetxattr\", \"removexattr\", \"setuid\", \"setgid\", \"setreuid\", \"setregid\", \"chattr\")\n        OR\n        Processes.process IN (\"*chmod *\", \"*chown *\", \"*fchmod *\", \"*fchmodat *\", \"*fchown *\", \"*fchownat *\", \"*fremovexattr *\", \"*fsetxattr *\", \"*lchown *\", \"*lremovexattr *\", \"*lsetxattr *\", \"*removexattr *\", \"*setuid *\", \"*setgid *\", \"*setreuid *\", \"*setregid *\", \"*setcap *\", \"*chattr *\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `linux_common_process_for_elevation_control_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://attack.mitre.org/techniques/T1548/001/, https://github.com/Neo23x0/auditd/blob/master/audit.rules#L285-L297, https://github.com/microsoft/MSTIC-Sysmon/blob/main/linux/configs/attack-based/privilege_escalation/T1548.001_ElevationControl_CommonProcesses.xml",
              "mitre": [
                "T1548.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Common Process For Elevation Control\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Common Process For Elevation Control\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Common Process For Elevation Control” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.179",
              "n": "Linux Composer Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Composer tool with elevated privileges on a Linux system. It identifies instances where Composer is run with the 'sudo' command, allowing the user to execute system commands as root. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. This activity is significant because it can indicate an attempt to escalate privileges, potentially leading to unauthoriz…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/composer/, https://getcomposer.org/doc/00-intro.md",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Composer Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Composer Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Composer Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.180",
              "n": "Linux Cpulimit Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the 'cpulimit' command with specific flags ('-l', '-f') executed with 'sudo' privileges. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process command-line arguments and execution details. This activity is significant because if 'cpulimit' is granted sudo rights, a user can potentially execute system commands as root, leading to privilege escalation. If confirmed malicious, this could allow an attacker to gain root acce…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/cpulimit/, http://cpulimit.sourceforge.net/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Cpulimit Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Cpulimit Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
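For orientation only, the underlying ESCU logic for this technique typically takes the shape of the `Endpoint.Processes` tstats sketch below, matching `cpulimit` run through sudo with its `-l` and `-f` flags as described above. This is a hedged approximation, not the packaged search: it assumes an accelerated Endpoint data model and CIM-normalized Sysmon for Linux, and the wildcard flag patterns are illustrative.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="cpulimit" AND Processes.process="*sudo*"
    AND Processes.process="*-l*" AND Processes.process="*-f*"
  by Processes.dest Processes.user Processes.parent_process_name Processes.process
| `drop_dm_object_name(Processes)`
| convert ctime(firstTime) ctime(lastTime)
```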
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Cpulimit Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.181",
              "n": "Linux Csvtool Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'csvtool' command with 'sudo' privileges, which can allow a user to run system commands as root. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. This activity is significant because it indicates a potential privilege escalation attempt, where a user could gain unauthorized root access. If confirmed malicious, this could lead to full system com…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/csvtool/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Csvtool Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Csvtool Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
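As a rough, hedged sketch of the underlying logic (the authoritative search is inside the ESCU package), the detection pattern for sudo-driven `csvtool` abuse resembles the `Endpoint.Processes` search below. It assumes an accelerated Endpoint data model and CIM-normalized Sysmon for Linux; matching only sudo plus csvtool is deliberately broad here, and tightening on the GTFOBins `call` subcommand is one plausible way to cut noise.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="csvtool" AND Processes.process="*sudo*"
  by Processes.dest Processes.user Processes.parent_process_name Processes.process
| `drop_dm_object_name(Processes)`
| convert ctime(firstTime) ctime(lastTime)
```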
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Csvtool Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.182",
              "n": "Linux Curl Upload File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the curl command with specific switches (-F, --form, --upload-file, -T, -d, --data, --data-raw, -I, --head) to upload AWS credentials or configuration files to a remote destination. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process details. This activity is significant as it may indicate an attempt to exfiltrate sensitive AWS credentials, a technique known to be used by the Te…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1, Cisco Isovalent Process Exec",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1, Cisco Isovalent Process Exec ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Filtering may be required. In addition to AWS credentials, add other important files and monitor. The inverse would be to look for _all_ -F behavior and tune from there.",
              "refs": "https://curl.se/docs/manpage.html, https://www.cadosecurity.com/team-tnt-the-first-crypto-mining-worm-to-steal-aws-credentials/, https://gtfobins.github.io/gtfobins/curl/",
              "mitre": [
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Curl Upload File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1, Cisco Isovalent Process Exec. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Curl Upload File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Filtering may be required. In addition to AWS credentials, add other important files and monitor. The inverse would be to look for _all_ -F behavior and tune from there.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
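As a hedged illustration of the packaged detection logic (view the authoritative search in Content Management), the shape below pairs curl's upload switches with AWS credential paths in `Endpoint.Processes`. It assumes an accelerated Endpoint data model and CIM-normalized process data; the switch list and file paths are illustrative, and the known-false-positives note above suggests extending the path list to other sensitive files.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="curl"
    AND Processes.process IN ("*-F *", "*--form*", "*--upload-file*", "*-T *", "*-d *", "*--data*")
    AND Processes.process IN ("*.aws/credentials*", "*.aws/config*")
  by Processes.dest Processes.user Processes.process
| `drop_dm_object_name(Processes)`
| convert ctime(firstTime) ctime(lastTime)
```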
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Curl Upload File” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.183",
              "n": "Linux Data Destruction Command",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a Unix shell command designed to wipe root directories on a Linux host. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on the 'rm' command with force recursive deletion and the '--no-preserve-root' option. This activity is significant as it indicates potential data destruction attempts, often associated with malware like Awfulshred. If confirmed malicious, this behavior could lead to severe data loss, system instabili…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://cert.gov.ua/article/3718487, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/overview-of-the-cyber-weapons-used-in-the-ukraine-russia-war/",
              "mitre": [
                "T1485"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Data Destruction Command\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Data Destruction Command\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
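For reference, a hedged sketch of the underlying logic (the packaged ESCU search remains authoritative): match `rm` invocations that combine force-recursive deletion with `--no-preserve-root`, as described above. It assumes an accelerated Endpoint data model fed by CIM-normalized Sysmon for Linux; the wildcard patterns are illustrative.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="rm" AND Processes.process="*-rf*"
    AND Processes.process="*--no-preserve-root*"
  by Processes.dest Processes.user Processes.parent_process_name Processes.process
| `drop_dm_object_name(Processes)`
| convert ctime(firstTime) ctime(lastTime)
```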
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Data Destruction Command” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.184",
              "n": "Linux DD File Overwrite",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the 'dd' command to overwrite files on a Linux system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. This activity is significant because adversaries often use the 'dd' command to destroy or irreversibly overwrite files, disrupting system availability and services. If confirmed malicious, this behavior could lead to data destruction, making recovery difficult and…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://gtfobins.github.io/gtfobins/dd/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1485/T1485.md",
              "mitre": [
                "T1485"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux DD File Overwrite\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux DD File Overwrite\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
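A hedged sketch of the underlying logic (the packaged ESCU search is authoritative): flag `dd` executions that write to an `of=` target. It assumes an accelerated Endpoint data model and CIM-normalized Sysmon for Linux. The `of=/dev/null` exclusion is an illustrative noise cut, since writes to /dev/null are almost always benign; extend the exclusions as your tuning dictates.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="dd" AND Processes.process="*of=*"
    AND NOT Processes.process="*of=/dev/null*"
  by Processes.dest Processes.user Processes.process
| `drop_dm_object_name(Processes)`
| convert ctime(firstTime) ctime(lastTime)
```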
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux DD File Overwrite” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.185",
              "n": "Linux Decode Base64 to Shell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the behavior of decoding base64-encoded data and passing it to a Linux shell. Additionally, it mitigates the potential damage and protects the organization's systems and data.The detection is made by searching for specific commands in the Splunk query, namely \"base64 -d\" and \"base64 --decode\", within the Endpoint.Processes data model. The analytic also includes a filter for Linux shells. The detection is important because  it indicates the presence of malicious act…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1, Cisco Isovalent Process Exec",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1, Cisco Isovalent Process Exec ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on legitimate software being utilized. Filter as needed.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1027/T1027.md#atomic-test-1---decode-base64-data-into-script, https://redcanary.com/blog/lateral-movement-with-secure-shell/, https://man.archlinux.org/man/base64.1",
              "mitre": [
                "T1027",
                "T1059.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Decode Base64 to Shell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1, Cisco Isovalent Process Exec. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027, T1059.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Decode Base64 to Shell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on legitimate software being utilized. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
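A hedged sketch of the underlying logic (the packaged ESCU search is authoritative): look for shell processes whose command line contains a base64 decode, per the description above. It assumes an accelerated Endpoint data model and CIM-normalized process data; the inline shell list stands in for the macro-based Linux-shell filter the description mentions, and both the list and the decode patterns are illustrative.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name IN ("sh", "bash", "dash", "zsh")
    AND Processes.process IN ("*base64 -d*", "*base64 --decode*")
  by Processes.dest Processes.user Processes.process
| `drop_dm_object_name(Processes)`
| convert ctime(firstTime) ctime(lastTime)
```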
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.186",
              "n": "Linux Deleting Critical Directory Using RM Command",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of critical directories on a Linux machine using the `rm` command with argument rf. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions targeting directories like /boot, /var/log, /etc, and /dev. This activity is significant because deleting these directories can severely disrupt system operations and is often associated with destructive campaigns like Industroyer2. If confirmed malicious, this actio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/, https://cert.gov.ua/article/39518",
              "mitre": [
                "T1485"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Deleting Critical Directory Using RM Command\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Deleting Critical Directory Using RM Command\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
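A hedged sketch of the underlying logic (the packaged ESCU search is authoritative): match force-recursive `rm` runs whose command line touches the critical directories named above. It assumes an accelerated Endpoint data model and CIM-normalized Sysmon for Linux; the broad wildcard paths (for example "*/etc*") are illustrative and will need anchoring and tuning in a real deployment.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="rm" AND Processes.process="*-rf*"
    AND Processes.process IN ("*/boot*", "*/var/log*", "*/etc*", "*/dev*")
  by Processes.dest Processes.user Processes.process
| `drop_dm_object_name(Processes)`
| convert ctime(firstTime) ctime(lastTime)
```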
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Deleting Critical Directory Using RM Command” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.187",
              "n": "Linux Deletion Of Cron Jobs",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of cron jobs on a Linux machine. It leverages filesystem event logs to identify when files within the \"/etc/cron.*\" directory are deleted. This activity is significant because attackers or malware may delete cron jobs to disable scheduled security tasks or evade detection mechanisms. If confirmed malicious, this action could allow an attacker to disrupt system operations, evade security measures, or facilitate further malicious activities such as data …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://www.sentinelone.com/labs/acidrain-a-modem-wiper-rains-down-on-europe/",
              "mitre": [
                "T1070.004",
                "T1485"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Deletion Of Cron Jobs\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004, T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Deletion Of Cron Jobs\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU correlation searches are saved as \"ESCU - <detection name> - Rule\", so the name below is wildcarded):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Linux Deletion Of Cron Jobs*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
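\n\nIllustrative Detection Logic (sketch)\n\nThe authoritative search logic ships inside the ESCU Correlation Search and can change between content updates. As orientation only, it approximates the sketch below, which assumes Sysmon for Linux EventID 11 is CIM-normalized into the accelerated Endpoint.Filesystem data model and that your add-on maps deletions to Filesystem.action=\"deleted\"; paths and field names are illustrative, not the shipped ESCU search:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Filesystem\n  where Filesystem.action=\"deleted\" AND Filesystem.file_path=\"/etc/cron.*\"\n  by Filesystem.dest Filesystem.file_name Filesystem.file_path Filesystem.action\n```\n\nCompare the sketch against the real definition in Content Management before using it for tuning decisions.",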
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Deletion Of Cron Jobs” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.188",
              "n": "Linux Disable Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to disable a service on a Linux system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on processes like \"systemctl,\" \"service,\" and \"svcadm\" with commands containing \"disable.\" This activity is significant as adversaries may disable security or critical services to evade detection and facilitate further malicious actions, such as deploying destructive payloads. If confirmed malicious, this could lead to the termination of es…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/, https://cert.gov.ua/article/39518",
              "mitre": [
                "T1489"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Disable Services\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Disable Services\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU correlation searches are saved as \"ESCU - <detection name> - Rule\", so the name below is wildcarded):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Linux Disable Services*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
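\n\nIllustrative Detection Logic (sketch)\n\nThe authoritative search logic ships inside the ESCU Correlation Search and can change between content updates. As orientation only, it approximates the sketch below, which assumes Sysmon for Linux EventID 1 is CIM-normalized into the accelerated Endpoint.Processes data model; the process-name list mirrors the tools named in the description (systemctl, service, svcadm) and is illustrative, not the shipped ESCU search:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name IN (\"systemctl\", \"service\", \"svcadm\") AND Processes.process=\"*disable*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nCompare the sketch against the real definition in Content Management before using it for tuning decisions.",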
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Disable Services” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.189",
              "n": "Linux Doas Conf File Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of the doas.conf file on a Linux host. This file is used by the doas utility to allow standard users to perform tasks as root, similar to sudo. The detection leverages filesystem data from the Endpoint data model, focusing on the creation of the doas.conf file. This activity is significant because it can indicate an attempt to gain elevated privileges, potentially by an adversary. If confirmed malicious, this could allow an attacker to execute commands…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://wiki.gentoo.org/wiki/Doas, https://www.makeuseof.com/how-to-install-and-use-doas/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Doas Conf File Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Doas Conf File Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU correlation searches are saved as \"ESCU - <detection name> - Rule\", so the name below is wildcarded):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Linux Doas Conf File Creation*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
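\n\nIllustrative Detection Logic (sketch)\n\nThe authoritative search logic ships inside the ESCU Correlation Search and can change between content updates. As orientation only, it approximates the sketch below, which assumes Sysmon for Linux EventID 11 is CIM-normalized into the accelerated Endpoint.Filesystem data model; the path pattern and field names are illustrative, not the shipped ESCU search:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Filesystem\n  where Filesystem.file_path=\"*doas.conf*\"\n  by Filesystem.dest Filesystem.file_name Filesystem.file_path\n```\n\nCompare the sketch against the real definition in Content Management before using it for tuning decisions.",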
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.190",
              "n": "Linux Doas Tool Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'doas' tool on a Linux host. This tool allows standard users to perform tasks with root privileges, similar to 'sudo'. The detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as 'doas' can be exploited by adversaries to gain elevated privileges on a compromised host. If confirmed malicious, this could lead to unauthorized administrative a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://wiki.gentoo.org/wiki/Doas, https://www.makeuseof.com/how-to-install-and-use-doas/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Doas Tool Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Doas Tool Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU correlation searches are saved as \"ESCU - <detection name> - Rule\", so the name below is wildcarded):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Linux Doas Tool Execution*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
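\n\nIllustrative Detection Logic (sketch)\n\nThe authoritative search logic ships inside the ESCU Correlation Search and can change between content updates. As orientation only, it approximates the sketch below, which assumes Sysmon for Linux EventID 1 is CIM-normalized into the accelerated Endpoint.Processes data model; field names are illustrative, not the shipped ESCU search:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"doas\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nCompare the sketch against the real definition in Content Management before using it for tuning decisions; if the Endpoint data model is not accelerated in your environment, rerun it with summariesonly=false.",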
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Doas Tool Execution” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.191",
              "n": "Linux Docker Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to escalate privileges on a Linux system using Docker. It identifies processes where Docker commands are used to mount the root directory or execute shell commands within a container. This detection leverages Endpoint Detection and Response (EDR) telemetry, focusing on process names, command-line arguments, and parent processes. This activity is significant because it can allow an attacker with Docker privileges to modify critical system files, such as /et…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are present based on automated tooling or system administrative usage. Filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/docker/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Docker Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Docker Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are present based on automated tooling or system administrative usage. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU correlation searches are saved as \"ESCU - <detection name> - Rule\", so the name below is wildcarded):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Linux Docker Privilege Escalation*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
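\n\nIllustrative Detection Logic (sketch)\n\nThe authoritative search logic ships inside the ESCU Correlation Search and can change between content updates. As orientation only, it approximates the sketch below, which assumes Sysmon for Linux EventID 1 is CIM-normalized into the accelerated Endpoint.Processes data model; the two command-line patterns (mounting the root filesystem into a container, spawning a shell via exec) are common GTFOBins-style variants chosen for illustration, not the shipped ESCU search:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"docker\"\n    AND (Processes.process=\"* -v /:*\" OR Processes.process=\"*exec*sh*\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nCompare the sketch against the real definition in Content Management before using it for tuning decisions.",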
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Docker Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.192",
              "n": "Linux Emacs Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of Emacs with elevated privileges using the `sudo` command and the `--eval` option. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line arguments. This activity is significant because it indicates a potential privilege escalation attempt, where a user could gain root access by running Emacs with elevated permissions. If confirmed malicious, this could allow an attacker to ex…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/emacs/, https://en.wikipedia.org/wiki/Emacs",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Emacs Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Emacs Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU correlation searches are saved as \"ESCU - <detection name> - Rule\", so the name below is wildcarded):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Linux Emacs Privilege Escalation*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
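\n\nIllustrative Detection Logic (sketch)\n\nThe authoritative search logic ships inside the ESCU Correlation Search and can change between content updates. As orientation only, it approximates the sketch below, which assumes Sysmon for Linux EventID 1 is CIM-normalized into the accelerated Endpoint.Processes data model; the wildcard patterns for the sudo/emacs/--eval combination are illustrative, not the shipped ESCU search:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process=\"*sudo *\" AND Processes.process=\"*emacs*\" AND Processes.process=\"*--eval*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nCompare the sketch against the real definition in Content Management before using it for tuning decisions.",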
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Emacs Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.193",
              "n": "Linux File Creation In Init Boot Directory",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of files in Linux init boot directories, which are used for automatic execution upon system startup. It leverages file system logs to identify new files in directories such as /etc/init.d/ and /etc/rc.d/. This activity is significant as it is a common persistence technique used by adversaries, malware authors, and red teamers. If confirmed malicious, this could allow an attacker to maintain persistence on the compromised host, potentially leading to fu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can create file in this folders for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.intezer.com/blog/research/kaiji-new-chinese-linux-malware-turning-to-golang/",
              "mitre": [
                "T1037.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux File Creation In Init Boot Directory\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1037.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux File Creation In Init Boot Directory\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can create file in this folders for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU correlation searches are saved as \"ESCU - <detection name> - Rule\", so the name below is wildcarded):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Linux File Creation In Init Boot Directory*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
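\n\nIllustrative Detection Logic (sketch)\n\nThe authoritative search logic ships inside the ESCU Correlation Search and can change between content updates. As orientation only, it approximates the sketch below, which assumes Sysmon for Linux EventID 11 is CIM-normalized into the accelerated Endpoint.Filesystem data model; the directory list mirrors the paths named in the description (/etc/init.d/, /etc/rc.d/) and is illustrative, not the shipped ESCU search:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Filesystem\n  where Filesystem.file_path IN (\"*/etc/init.d/*\", \"*/etc/rc.d/*\")\n  by Filesystem.dest Filesystem.file_name Filesystem.file_path\n```\n\nCompare the sketch against the real definition in Content Management before using it for tuning decisions.",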
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux File Creation In Init Boot Directory” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.194",
              "n": "Linux Find Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the 'find' command with 'sudo' and '-exec' options, which can indicate an attempt to escalate privileges on a Linux system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line arguments. This activity is significant because it can allow a user to execute system commands as root, potentially leading to a root shell. If confirmed malicious, this could enable an attacker to gain f…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are present based on automated tooling or system administrative usage. Filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/find/, https://en.wikipedia.org/wiki/Find_(Unix)",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Find Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Find Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are present based on automated tooling or system administrative usage. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
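\n\nDetection Logic Sketch\n\nAs a point of reference, the following approximates the packaged logic over the Endpoint.Processes data model. The shipped ESCU search adds time formatting plus this detection’s filter macro, so treat the field values as illustrative rather than authoritative:\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Processes\n  where Processes.process_name=\"find\" Processes.process=\"*sudo*\" Processes.process=\"*-exec*\"\n  by Processes.dest, Processes.user, Processes.process\n```",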
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Find Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.195",
              "n": "Linux GDB Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the GNU Debugger (GDB) with specific flags that indicate an attempt to escalate privileges on a Linux system. It leverages Endpoint Detection and Response (EDR) telemetry to identify processes where GDB is run with the `-nx`, `-ex`, and `sudo` flags. This activity is significant because it can allow a user to execute system commands as root, potentially leading to a root shell. If confirmed malicious, this could result in full system compromise, al…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/gdb/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux GDB Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux GDB Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
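\n\nPer-Entity Exception Sketch\n\nOne way to implement the per-entity risk exception mentioned under Tuning is a lookup-based exclusion when reviewing risk events. This is a sketch: `benign_risk_entities.csv` is a hypothetical lookup you would create, with a single `normalized_risk_object` column listing vetted entities:\n\n```spl\n| from datamodel Risk.All_Risk\n| search NOT [ | inputlookup benign_risk_entities.csv | fields normalized_risk_object ]\n| stats count values(search_name) as \"Search Name\" by normalized_risk_object\n```",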
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux GDB Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.196",
              "n": "Linux Gdrive Binary Activity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'gdrive' tool on a Linux host. This tool allows standard users to perform tasks associated with Google Drive via the command line. This is used by actors to stage tools as well as exfiltrate data. The detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. If confirmed malicious, this could lead to compromise of systems or sensitive data being stolen.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://cloud.google.com/blog/topics/threat-intelligence/uncovering-unc3886-espionage-operations",
              "mitre": [
                "T1567"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Gdrive Binary Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1567. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Gdrive Binary Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
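\n\nRisk Accumulation Review\n\nTo see how this detection’s risk events stack toward the Notable threshold, a review search along the following lines can help. The 100-point cutoff is a placeholder; use the threshold configured in your Risk Notable rule and run over the same window that rule uses:\n\n```spl\n| tstats summariesonly=true sum(All_Risk.calculated_risk_score) as total_risk\n    values(All_Risk.search_name) as contributing_detections\n  from datamodel=Risk.All_Risk\n  by All_Risk.risk_object, All_Risk.risk_object_type\n| where total_risk > 100\n| sort - total_risk\n```",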
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Gdrive Binary Activity” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.197",
              "n": "Linux Gem Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the RubyGems utility with elevated privileges, specifically when it is used to run system commands as root. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions that include \"gem open -e\" and \"sudo\". This activity is significant because it indicates a potential privilege escalation attempt, allowing a user to execute commands as the root user. If confirmed malicious, this could lead to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/gem/, https://en.wikipedia.org/wiki/RubyGems",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Gem Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Gem Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
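\n\nFilter Macro Check\n\nESCU detections ship a per-detection filter macro, by convention `<detection_name>_filter`, which defaults to `search *`. To confirm whether this detection’s macro has been customized, a REST lookup such as the following can be used; the macro name assumes the standard ESCU naming convention:\n\n```spl\n| rest /servicesNS/-/-/admin/macros splunk_server=local\n| search title=\"linux_gem_privilege_escalation_filter\"\n| table title, definition, eai:acl.app\n```",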
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Gem Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.198",
              "n": "Linux GNU Awk Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'gawk' command with elevated privileges on a Linux system. It leverages Endpoint Detection and Response (EDR) telemetry to identify command-line executions where 'gawk' is used with 'sudo' and 'BEGIN{system' patterns. This activity is significant because it indicates a potential privilege escalation attempt, allowing a user to execute system commands as root. If confirmed malicious, this could lead to full root access, enabling the attacker to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/gawk/, https://www.geeksforgeeks.org/gawk-command-in-linux-with-examples/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux GNU Awk Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux GNU Awk Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
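\n\nTelemetry Coverage Check\n\nBeyond the generic validation above, it helps to confirm which hosts are actually reporting CIM-normalized process telemetry. A coverage sketch follows; narrow the results to your Linux fleet as appropriate:\n\n```spl\n| tstats summariesonly=true count latest(_time) as last_seen\n  from datamodel=Endpoint.Processes\n  by Processes.dest\n| eval last_seen=strftime(last_seen, \"%F %T\")\n| sort - last_seen\n```",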
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux GNU Awk Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.199",
              "n": "Linux Hardware Addition SwapOff",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the \"swapoff\" command, which disables the swapping of paging devices on a Linux system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. This activity is significant because disabling swap can be a tactic used by malware, such as Awfulshred, to evade detection and hinder forensic analysis. If confirmed malicious, this action could allow an attacker to manipulate system memory management, poten…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrator may disable swapping of devices in a linux host. Filter is needed.",
              "refs": "https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/overview-of-the-cyber-weapons-used-in-the-ukraine-russia-war/",
              "mitre": [
                "T1200"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Hardware Addition SwapOff\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1200. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Hardware Addition SwapOff\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrator may disable swapping of devices in a linux host. Filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
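\n\nRunning the Drilldown Manually\n\nThe `$dest$` and `$user$` tokens in the drilldown search are populated from the notable’s entity fields. To run it by hand, substitute concrete values; the host and account names below are placeholders:\n\n```spl\n| from datamodel Risk.All_Risk\n| search normalized_risk_object IN (\"linux-web01\", \"svc_backup\") starthoursago=168\n| stats count min(_time) as firstTime max(_time) as lastTime\n    values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\"\n  by normalized_risk_object\n```",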
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Hardware Addition SwapOff” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.200",
              "n": "Linux High Frequency Of File Deletion In Boot Folder",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a high frequency of file deletions in the /boot/ folder on Linux systems. It leverages filesystem event logs to identify when 200 or more files are deleted within an hour by the same process. This behavior is significant as it may indicate the presence of wiper malware, such as Industroyer2, which targets critical system directories. If confirmed malicious, this activity could lead to system instability or failure, hindering the boot process and potentially causing…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "linux package installer/uninstaller may cause this event. Please update you filter macro to remove false positives.",
              "refs": "https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/, https://cert.gov.ua/article/39518",
              "mitre": [
                "T1070.004",
                "T1485"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux High Frequency Of File Deletion In Boot Folder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004, T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux High Frequency Of File Deletion In Boot Folder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: linux package installer/uninstaller may cause this event. Please update you filter macro to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
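\n\nDetection Logic Sketch\n\nFor orientation, the following is a simplified sketch of the packaged threshold logic over the Endpoint.Filesystem data model. The 200-per-hour cutoff mirrors the analytic description; the shipped ESCU search adds formatting and this detection’s filter macro, so treat this as illustrative:\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Filesystem\n  where Filesystem.action=deleted Filesystem.file_path=\"/boot/*\"\n  by _time span=1h, Filesystem.dest, Filesystem.process_guid\n| where count >= 200\n```",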
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux High Frequency Of File Deletion In Boot Folder” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.201",
              "n": "Linux High Frequency Of File Deletion In Etc Folder",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a high frequency of file deletions in the /etc/ folder on Linux systems. It leverages the Endpoint.Filesystem data model to identify instances where 200 or more files are deleted within an hour, grouped by process name and process ID. This behavior is significant as it may indicate the presence of wiper malware, such as AcidRain, which aims to delete critical system files. If confirmed malicious, this activity could lead to severe system instability, data loss, and…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "linux package installer/uninstaller may cause this event. Please update you filter macro to remove false positives.",
              "refs": "https://www.sentinelone.com/labs/acidrain-a-modem-wiper-rains-down-on-europe/",
              "mitre": [
                "T1070.004",
                "T1485"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux High Frequency Of File Deletion In Etc Folder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004, T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux High Frequency Of File Deletion In Etc Folder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: linux package installer/uninstaller may cause this event. Please update you filter macro to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
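              "_example": "Illustrative sketch only (not the packaged ESCU correlation search): per the analytic description above, it flags 200 or more deletions under /etc within an hour, grouped by process. Field names assume standard CIM Endpoint.Filesystem mappings; verify process_name availability in your Filesystem mapping or fall back to process_guid.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Filesystem where Filesystem.action=deleted Filesystem.file_path=\"/etc/*\" by _time span=1h Filesystem.dest Filesystem.process_name Filesystem.process_id\n| `drop_dm_object_name(Filesystem)`\n| where count >= 200\n```",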
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.202",
              "n": "Linux Indicator Removal Clear Cache",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects processes that clear or free page cache on a Linux system. It leverages Endpoint Detection and Response (EDR) data, focusing on specific command-line executions involving the kernel system request `drop_caches`. This activity is significant as it may indicate an attempt to delete forensic evidence or the presence of wiper malware like Awfulshred. If confirmed malicious, this behavior could allow an attacker to cover their tracks, making it difficult to investigate …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/overview-of-the-cyber-weapons-used-in-the-ukraine-russia-war/, https://cert.gov.ua/article/3718487",
              "mitre": [
                "T1070"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Indicator Removal Clear Cache\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Indicator Removal Clear Cache\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
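              "_example": "Illustrative sketch only (not the packaged ESCU correlation search): the analytic looks for command lines invoking the kernel `drop_caches` system request. The wildcard match below is an assumption; the shipped detection may constrain shell patterns or parent processes further.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process=\"*drop_caches*\" by Processes.dest Processes.user Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",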
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Indicator Removal Clear Cache” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.203",
              "n": "Linux Indicator Removal Service File Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of Linux service unit configuration files by suspicious processes. It leverages Endpoint Detection and Response (EDR) telemetry, focusing on processes executing the 'rm' command targeting '.service' files. This activity is significant as it may indicate malware attempting to disable critical services or security products, a common defense evasion tactic. If confirmed malicious, this behavior could lead to service disruption, security tool incapacitatio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network admin can delete services unit configuration file as part of normal software installation. Filter is needed.",
              "refs": "https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/overview-of-the-cyber-weapons-used-in-the-ukraine-russia-war/, https://cert.gov.ua/article/3718487",
              "mitre": [
                "T1070.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Indicator Removal Service File Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Indicator Removal Service File Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network admin can delete services unit configuration file as part of normal software installation. Filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
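              "_example": "Illustrative sketch only (not the packaged ESCU correlation search): the analytic targets `rm` executions against `.service` unit files. Field names assume standard CIM Endpoint.Processes mappings; tune the path pattern to your distribution's systemd locations.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=rm Processes.process=\"*.service*\" by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",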
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Indicator Removal Service File Deletion” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.204",
              "n": "Linux Ingress Tool Transfer Hunting",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of 'curl' and 'wget' commands within a Linux environment. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, user information, and command-line executions. This activity is significant as 'curl' and 'wget' are commonly used for downloading files, which can indicate potential ingress of malicious tools. If confirmed malicious, this activity could lead to unauthorized code execution, data exfiltration, or further c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=curl\n            OR\n            Processes.process_name=wget\n        )\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `linux_ingress_tool_transfer_hunting_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present. This query is meant to help tune other curl and wget analytics.",
              "refs": "https://gtfobins.github.io/gtfobins/curl/, https://curl.se/docs/manpage.html#-I, https://gtfobins.github.io/gtfobins/curl/, https://github.com/rapid7/metasploit-framework/search?q=curl",
              "mitre": [
                "T1105"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Ingress Tool Transfer Hunting\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Ingress Tool Transfer Hunting\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present. This query is meant to help tune other curl and wget analytics.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Ingress Tool Transfer Hunting” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.205",
              "n": "Linux Ingress Tool Transfer with Curl",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the curl command with specific switches (-O, -sO, -ksO, --output) commonly used to download remote scripts or binaries. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant as it may indicate an attempt to download and execute potentially malicious files, often used in initial stages of an attack. If confirmed malicious, this could lead to unaut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present. Tune and then change type to TTP.",
              "refs": "https://gtfobins.github.io/gtfobins/curl/, https://curl.se/docs/manpage.html#-I, https://gtfobins.github.io/gtfobins/curl/, https://github.com/rapid7/metasploit-framework/search?q=curl",
              "mitre": [
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Ingress Tool Transfer with Curl\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Ingress Tool Transfer with Curl\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present. Tune and then change type to TTP.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
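              "_example": "Illustrative sketch only (not the packaged ESCU correlation search): per the analytic description, it keys on curl download switches. The switch patterns below come from the description (-O, -sO, -ksO, --output); exact matching in the shipped search may differ.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=curl (Processes.process=\"* -O *\" OR Processes.process=\"* -sO *\" OR Processes.process=\"* -ksO *\" OR Processes.process=\"*--output*\") by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n```",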
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Ingress Tool Transfer with Curl” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.206",
              "n": "Linux Insert Kernel Module Using Insmod Utility",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the insertion of a Linux kernel module using the insmod utility. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include process names and command-line details. This activity is significant as it may indicate the installation of a rootkit or malicious kernel module, potentially allowing an attacker to gain elevated privileges and bypass security detections. If confirmed malicious, this could lead to unaut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://docs.fedoraproject.org/en-US/fedora/rawhide/system-administrators-guide/kernel-module-driver-configuration/Working_with_Kernel_Modules/, https://security.stackexchange.com/questions/175953/how-to-load-a-malicious-lkm-at-startup, https://0x00sec.org/t/kernel-rootkits-getting-your-hands-dirty/1485",
              "mitre": [
                "T1547.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Insert Kernel Module Using Insmod Utility\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Insert Kernel Module Using Insmod Utility\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
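              "_example": "Illustrative sketch only (not the packaged ESCU correlation search): flags insmod executions from EDR process telemetry. Field names assume standard CIM Endpoint.Processes mappings; pair with the filter macro noted under known false positives before alerting.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=insmod OR Processes.process=\"*insmod *\" by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",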
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Insert Kernel Module Using Insmod Utility” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.207",
              "n": "Linux Install Kernel Module Using Modprobe Utility",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the installation of a Linux kernel module using the modprobe utility. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant because installing a kernel module can indicate an attempt to deploy a rootkit or other malicious kernel-level code, potentially leading to elevated privileges and bypassing security detections. If confirmed malicious, this could allow an attacke…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://docs.fedoraproject.org/en-US/fedora/rawhide/system-administrators-guide/kernel-module-driver-configuration/Working_with_Kernel_Modules/, https://security.stackexchange.com/questions/175953/how-to-load-a-malicious-lkm-at-startup, https://0x00sec.org/t/kernel-rootkits-getting-your-hands-dirty/1485",
              "mitre": [
                "T1547.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Install Kernel Module Using Modprobe Utility\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Install Kernel Module Using Modprobe Utility\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
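              "_example": "Illustrative sketch only (not the packaged ESCU correlation search): flags modprobe module loads from EDR process telemetry. Field names assume standard CIM Endpoint.Processes mappings; legitimate administrative loads are expected, so route results through the filter macro rather than alerting directly.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=modprobe by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",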
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Install Kernel Module Using Modprobe Utility” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.208",
              "n": "Linux Kernel Module Enumeration",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of the 'kmod' process to list kernel modules on a Linux system. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. While listing kernel modules is not inherently malicious, it can be a precursor to loading unauthorized modules using 'insmod'. If confirmed malicious, this activity could allow an attacker to load kernel modules, potentially leading to privilege escalation,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are present based on automated tooling or system administrative usage. Filter as needed.",
              "refs": "https://man.archlinux.org/man/kmod.8",
              "mitre": [
                "T1082",
                "T1014"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Kernel Module Enumeration\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082, T1014. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Kernel Module Enumeration\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are present based on automated tooling or system administrative usage. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Kernel Module Enumeration” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.209",
              "n": "Linux Kworker Process In Writable Process Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a kworker process with a command line in writable directories such as /home/, /var/log, and /tmp on a Linux machine. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and parent process paths. This activity is significant as kworker processes are typically kernel threads, and their presence in writable directories is unusual and indicative of potential malware, such as CyclopsBlink. If confirmed malicious, thi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.parent_process = \"*[kworker/*\" Processes.parent_process_path IN (\"/home/*\", \"/tmp/*\", \"/var/log/*\") Processes.process=\"*iptables*\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `linux_kworker_process_in_writable_process_path_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.ncsc.gov.uk/files/Cyclops-Blink-Malware-Analysis-Report.pdf, https://www.trendmicro.com/en_us/research/22/c/cyclops-blink-sets-sights-on-asus-routers--.html",
              "mitre": [
                "T1036.004"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Kworker Process In Writable Process Path\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Kworker Process In Writable Process Path\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Kworker Process In Writable Process Path” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.210",
              "n": "Linux Make Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the 'make' command with elevated privileges to execute system commands as root, potentially leading to a root shell. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions that include 'make', '--eval', and 'sudo'. This activity is significant because it indicates a possible privilege escalation attempt, allowing a user to gain root access. If confirmed malicious, an attacker could achieve full control ov…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/make/, https://www.javatpoint.com/linux-make-command",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Make Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Make Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Make Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.211",
              "n": "Linux MySQL Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of MySQL commands with elevated privileges using sudo, which can lead to privilege escalation. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. This activity is significant because it indicates a potential misuse of MySQL to execute system commands as root, which could allow an attacker to gain root shell access. If confirmed malicious, this could result in full …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are present based on automated tooling or system administrative usage. Filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/mysql/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux MySQL Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux MySQL Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are present based on automated tooling or system administrative usage. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux MySQL Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.212",
              "n": "Linux Ngrok Reverse Proxy Usage",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Ngrok on a Linux operating system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments associated with Ngrok. This activity is significant because Ngrok can be used by adversaries to establish reverse proxies, potentially bypassing network defenses. If confirmed malicious, this could allow attackers to create persistent, unauthorized access channels, facilitating data exfiltration or f…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if Ngrok is an authorized utility. Filter as needed.",
              "refs": "https://ngrok.com, https://www.cisa.gov/uscert/sites/default/files/publications/aa22-320a_joint_csa_iranian_government-sponsored_apt_actors_compromise_federal%20network_deploy_crypto%20miner_credential_harvester.pdf",
              "mitre": [
                "T1572",
                "T1090",
                "T1102"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Ngrok Reverse Proxy Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1572, T1090, T1102. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Ngrok Reverse Proxy Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if Ngrok is an authorized utility. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Ngrok Reverse Proxy Usage” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.213",
              "n": "Linux Node Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of Node.js with elevated privileges using sudo, specifically when spawning child processes. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions that include specific Node.js commands. This activity is significant because running Node.js as a superuser without dropping privileges can allow unauthorized access to the file system and potential privilege escalation. If confirmed malicious, this could…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are present based on automated tooling or system administrative usage. Filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/docker/, https://en.wikipedia.org/wiki/Node.js",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Node Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Node Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are present based on automated tooling or system administrative usage. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Node Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.214",
              "n": "Linux NOPASSWD Entry In Sudoers File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of NOPASSWD entries to the /etc/sudoers file on Linux systems. It leverages Endpoint Detection and Response (EDR) telemetry to identify command lines containing \"NOPASSWD:\". This activity is significant because it allows users to execute commands with elevated privileges without requiring a password, which can be exploited by adversaries to maintain persistent, privileged access. If confirmed malicious, this could lead to unauthorized privilege escalat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://askubuntu.com/questions/334318/sudoers-file-enable-nopasswd-for-user-all-commands, https://help.ubuntu.com/community/Sudoers",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux NOPASSWD Entry In Sudoers File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux NOPASSWD Entry In Sudoers File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux NOPASSWD Entry In Sudoers File” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.215",
              "n": "Linux Obfuscated Files or Information Base64 Decode",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the base64 decode command on Linux systems, which is often used to deobfuscate files. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions that include \"base64 -d\" or \"base64 --decode\". This activity is significant as it may indicate an attempt to hide malicious payloads or scripts. If confirmed malicious, an attacker could use this technique to execute hidden code, potentially leading to unauthorized a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present and will require some tuning based on processes. Filter as needed.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1027/T1027.md#atomic-test-1---decode-base64-data-into-script, https://redcanary.com/blog/lateral-movement-with-secure-shell/, https://man.archlinux.org/man/base64.1",
              "mitre": [
                "T1027"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Obfuscated Files or Information Base64 Decode\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Obfuscated Files or Information Base64 Decode\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present and will require some tuning based on processes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
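For reference while reviewing the packaged search in Content Management: the documented pattern (command lines containing "base64 -d" or "base64 --decode") can be reproduced ad hoc against the accelerated Endpoint data model. This is a minimal sketch, assuming CIM-normalized Sysmon for Linux process data and the standard ESCU/CIM macros (`security_content_summariesonly`, `drop_dm_object_name`, `security_content_ctime`); the ESCU Correlation Search remains the authoritative detection logic.

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process="*base64 -d*" OR Processes.process="*base64 --decode*"
  by Processes.dest Processes.user Processes.process_name Processes.process
| `drop_dm_object_name(Processes)`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```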
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Obfuscated Files or Information Base64 Decode” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.216",
              "n": "Linux Octave Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of GNU Octave with elevated privileges, specifically when it runs system commands via sudo. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process command-line arguments that include \"octave-cli,\" \"--eval,\" \"system,\" and \"sudo.\" This activity is significant because it indicates a potential privilege escalation attempt, allowing a user to execute commands as root. If confirmed malicious, this could lead to full system …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/octave/, https://en.wikipedia.org/wiki/GNU_Octave",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Octave Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Octave Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
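To see which entities this detection currently contributes risk to (a quick complement to the Risk Analysis dashboard mentioned in the tuning notes), query the risk index directly. A minimal sketch, assuming the default ES `risk` index and the standard risk-event fields (`risk_object`, `risk_object_type`, `risk_score`, `risk_message`).

```spl
index=risk search_name="*Linux Octave Privilege Escalation*"
| stats count sum(risk_score) as total_risk values(risk_message) as risk_messages
  by risk_object, risk_object_type
| sort - total_risk
```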
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Octave Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.217",
              "n": "Linux OpenVPN Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of OpenVPN with elevated privileges, specifically when combined with the `--dev`, `--script-security`, `--up`, and `sudo` options. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process command-line arguments and execution details. This activity is significant because it indicates a potential privilege escalation attempt, allowing a user to execute system commands as root. If confirmed malicious, this coul…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/openvpn/, https://en.wikipedia.org/wiki/OpenVPN",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux OpenVPN Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux OpenVPN Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
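The documented trigger combines the `--dev`, `--script-security`, and `--up` options with `sudo`. As an illustrative cross-check (not the packaged logic), the same token combination can be searched over the accelerated Endpoint data model; field names assume CIM-normalized Sysmon for Linux data and the standard ESCU/CIM macros.

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process="*openvpn*" AND Processes.process="*--dev*"
        AND Processes.process="*--script-security*" AND Processes.process="*--up*"
        AND Processes.process="*sudo*"
  by Processes.dest Processes.user Processes.process
| `drop_dm_object_name(Processes)`
```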
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux OpenVPN Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.218",
              "n": "Linux PHP Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of PHP commands with elevated privileges on a Linux system. It identifies instances where PHP is used in conjunction with 'sudo' and 'system' commands, indicating an attempt to run system commands as the root user. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process command-line arguments. This activity is significant because it can indicate an attempt to escalate privileges, potentially leading to full…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/php/, https://en.wikipedia.org/wiki/PHP",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux PHP Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux PHP Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
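The generic validation search above confirms that data is flowing somewhere; to verify this record's documented source specifically, scope the check to the Sysmon for Linux feed. A minimal sketch: `sysmon:linux` is the sourcetype commonly produced by the Splunk Add-on for Sysmon for Linux, so substitute whatever sourcetype your deployment assigns if it differs.

```spl
| tstats count where index=* sourcetype="sysmon:linux" earliest=-24h by host
| sort - count
```

Hosts that should be reporting Sysmon for Linux data but are absent from this result are onboarding gaps to close before relying on the detection.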
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux PHP Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.219",
              "n": "Linux Possible Access Or Modification Of sshd Config File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious access or modification of the sshd_config file on Linux systems. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving processes like \"cat,\" \"nano,\" \"vim,\" and \"vi\" accessing the sshd_config file. This activity is significant because unauthorized changes to sshd_config can allow threat actors to redirect port connections or use unauthorized keys, potentially compromising the system. If confirme…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.hackingarticles.in/ssh-penetration-testing-port-22/, https://attack.mitre.org/techniques/T1098/004/",
              "mitre": [
                "T1098.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Possible Access Or Modification Of sshd Config File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Possible Access Or Modification Of sshd Config File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
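As an illustrative cross-check of the documented pattern (editors and pagers such as `cat`, `nano`, `vim`, or `vi` touching `sshd_config`), the equivalent ad hoc search over the accelerated Endpoint data model looks roughly like the sketch below, assuming CIM-normalized Sysmon for Linux data; the packaged Correlation Search is authoritative.

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name IN ("cat", "nano", "vim", "vi") Processes.process="*sshd_config*"
  by Processes.dest Processes.user Processes.process_name Processes.process
| `drop_dm_object_name(Processes)`
```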
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Possible Access Or Modification Of sshd Config File” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.220",
              "n": "Linux Possible Access To Credential Files",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to access or dump the contents of /etc/passwd and /etc/shadow files on Linux systems. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on processes like 'cat', 'nano', 'vim', and 'vi' accessing these files. This activity is significant as it may indicate credential dumping, a technique used by adversaries to gain persistence or escalate privileges. If confirmed malicious, attackers could obtain hashed passwords for offline crac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://askubuntu.com/questions/445361/what-is-difference-between-etc-shadow-and-etc-passwd, https://attack.mitre.org/techniques/T1003/008/",
              "mitre": [
                "T1003.008"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Possible Access To Credential Files\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Possible Access To Credential Files\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
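Step 4 of the analyst workflow recommends assessing scope across related risk events. Below is a minimal sketch against the Risk data model; the `analyticstories` filter uses an illustrative `*Credential*` wildcard, so replace it with the exact Analytic Story annotation shown on the risk event under investigation.

```spl
| from datamodel Risk.All_Risk
| search analyticstories="*Credential*"
| stats count min(_time) as firstTime max(_time) as lastTime
  values(search_name) as contributing_detections by normalized_risk_object
| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`
```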
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Possible Access To Credential Files” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.221",
              "n": "Linux Possible Access To Sudoers File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential access or modification of the /etc/sudoers file on a Linux system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on processes like \"cat,\" \"nano,\" \"vim,\" and \"vi\" accessing the /etc/sudoers file. This activity is significant because the sudoers file controls user permissions for executing commands with elevated privileges. If confirmed malicious, an attacker could gain persistence or escalate privileges, compromising the sec…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://attack.mitre.org/techniques/T1548/003/, https://web.archive.org/web/20210708035426/https://www.cobaltstrike.com/downloads/csmanual43.pdf",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Possible Access To Sudoers File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Possible Access To Sudoers File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
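The generic firing check above lists every scheduled search that has fired; to confirm this specific detection, scope `ss_name` to the detection name. ESCU typically prefixes its correlation searches (e.g. with "ESCU - "), so a wildcard match is the safer assumption.

```spl
index=_audit action="alert_fired" ss_name="*Linux Possible Access To Sudoers File*"
| stats count max(trigger_time) as last_fired by ss_name
```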
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Possible Access To Sudoers File” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.222",
              "n": "Linux Possible Append Command To At Allow Config File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious command lines that append user entries to /etc/at.allow or /etc/at.deny files. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving these files. This activity is significant because altering these configuration files can allow attackers to schedule tasks with elevated permissions, facilitating persistence on a compromised Linux host. If confirmed malicious, this could enable attackers to execu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://linuxize.com/post/at-command-in-linux/, https://attack.mitre.org/techniques/T1053/001/",
              "mitre": [
                "T1053.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Possible Append Command To At Allow Config File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Possible Append Command To At Allow Config File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Possible Append Command To At Allow Config File” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.223",
              "n": "Linux Possible Append Command To Profile Config File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious command-lines that modify user profile files to automatically execute scripts or executables upon system reboot. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving profile files like ~/.bashrc and /etc/profile. This activity is significant as it indicates potential persistence mechanisms used by adversaries to maintain access to compromised hosts. If confirmed malicious, this could allow att…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://unix.stackexchange.com/questions/129143/what-is-the-purpose-of-bashrc-and-how-does-it-work, https://attack.mitre.org/techniques/T1546/004/",
              "mitre": [
                "T1546.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Possible Append Command To Profile Config File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Possible Append Command To Profile Config File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Possible Append Command To Profile Config File” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.224",
              "n": "Linux Possible Append Cronjob Entry on Existing Cronjob File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential tampering with cronjob files on a Linux system by identifying 'echo' commands that append code to existing cronjob files. It leverages logs from Endpoint Detection and Response (EDR) agents, focusing on process names, parent processes, and command-line executions. This activity is significant because adversaries often use it for persistence or privilege escalation. If confirmed malicious, this could allow attackers to execute unauthorized code automatical…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Endpoint.Processes\n      WHERE Processes.process = \"*echo*\"\n        AND\n        Processes.process IN(\"*/etc/cron*\", \"*/var/spool/cron/*\", \"*/etc/anacrontab*\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `linux_possible_append_cronjob_entry_on_existing_cronjob_file_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may arise from legitimate actions by administrators or network operators who may use these commands for automation purposes. Therefore, it's recommended to adjust filter macros to eliminate such false positives.",
              "refs": "https://attack.mitre.org/techniques/T1053/003/, https://blog.aquasec.com/threat-alert-kinsing-malware-container-vulnerability, https://www.intezer.com/blog/research/kaiji-new-chinese-linux-malware-turning-to-golang/",
              "mitre": [
                "T1053.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Possible Append Cronjob Entry on Existing Cronjob File\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Possible Append Cronjob Entry on Existing Cronjob File\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may arise from legitimate actions by administrators or network operators who may use these commands for automation purposes. Therefore, it's recommended to adjust filter macros to eliminate such false positives.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Possible Append Cronjob Entry on Existing Cronjob File” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.225",
              "n": "Linux Preload Hijack Library Calls",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the LD_PRELOAD environment variable to hijack or hook library functions on a Linux platform. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. This activity is significant because adversaries, malware authors, and red teamers commonly use this technique to gain elevated privileges and establish persistence on a compromised machine. If confirmed malicious, this behavi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://compilepeace.medium.com/memory-malware-part-0x2-writing-userland-rootkits-via-ld-preload-30121c8343d5",
              "mitre": [
                "T1574.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Preload Hijack Library Calls\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Preload Hijack Library Calls\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Preload Hijack Library Calls” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.226",
              "n": "Linux Proxy Socks Curl",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the `curl` command with proxy-related arguments such as `-x`, `socks`, `--preproxy`, and `--proxy`. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process details. This activity is significant as it may indicate an adversary attempting to use a proxy to evade network monitoring and obscure their actions. If confirmed malicious, this behavior could allow attackers to bypass security…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on proxy usage internally. Filter as needed.",
              "refs": "https://www.offensive-security.com/metasploit-unleashed/proxytunnels/, https://curl.se/docs/manpage.html, https://en.wikipedia.org/wiki/SOCKS, https://oxylabs.io/blog/curl-with-proxy, https://reqbin.com/req/c-ddxflki5/curl-proxy-server#:~:text=To%20use%20a%20proxy%20with, be%20URL%20decoded%20by%20Curl., https://gtfobins.github.io/gtfobins/curl/",
              "mitre": [
                "T1090",
                "T1095"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Proxy Socks Curl\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1090, T1095. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Proxy Socks Curl\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on proxy usage internally. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Proxy Socks Curl” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.227",
              "n": "Linux Puppet Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of Puppet commands with elevated privileges, specifically when Puppet is used to apply configurations with sudo rights. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. This activity is significant because it indicates a potential privilege escalation attempt, where a user could gain root access and execute system commands as the root user. If confirm…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/puppet/, https://en.wikipedia.org/wiki/Puppet_(software)",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Puppet Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Puppet Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Puppet Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.228",
              "n": "Linux RPM Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the RPM Package Manager with elevated privileges, specifically when it is used to run system commands as root via the `--eval` and `lua:os.execute` options. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process metadata. This activity is significant because it indicates a potential privilege escalation attempt, allowing a user to gain root access. If confirmed malicious, thi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are present based on automated tooling or system administrative usage. Filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/rpm/, https://en.wikipedia.org/wiki/RPM_Package_Manager",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux RPM Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux RPM Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are present based on automated tooling or system administrative usage. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux RPM Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.229",
              "n": "Linux Ruby Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of Ruby commands with elevated privileges on a Linux system. It identifies processes where Ruby is used with the `-e` flag to execute commands via `sudo`, leveraging Endpoint Detection and Response (EDR) telemetry. This activity is significant because it indicates a potential privilege escalation attempt, allowing a user to execute commands as root. If confirmed malicious, this could lead to full system compromise, enabling an attacker to gain root ac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are present based on automated tooling or system administrative usage. Filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/ruby/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Ruby Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Ruby Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are present based on automated tooling or system administrative usage. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Ruby Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.230",
              "n": "Linux Service File Created In Systemd Directory",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of suspicious service files within the systemd directories on Linux platforms. It leverages logs containing file name, file path, and process GUID data from endpoints. This activity is significant for a SOC as it may indicate an adversary attempting to establish persistence on a compromised host. If confirmed malicious, this could lead to system compromise or data exfiltration, allowing attackers to maintain control over the system and execute further …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may arise when administrators or network operators create files in systemd directories for legitimate automation tasks. Therefore, it's important to adjust filter macros to account for valid activities. To implement this search successfully, it's crucial to ingest appropriate logs, preferably using the Linux Sysmon Add-on from Splunkbase for those using Sysmon.",
              "refs": "https://attack.mitre.org/techniques/T1053/006/, https://www.intezer.com/blog/research/kaiji-new-chinese-linux-malware-turning-to-golang/, https://redcanary.com/blog/attck-t1501-understanding-systemd-service-persistence/, https://github.com/microsoft/MSTIC-Sysmon/blob/main/linux/configs/attack-based/persistence/T1053.003_Cron_Activity.xml",
              "mitre": [
                "T1053.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Service File Created In Systemd Directory\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Service File Created In Systemd Directory\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may arise when administrators or network operators create files in systemd directories for legitimate automation tasks. Therefore, it's important to adjust filter macros to account for valid activities. To implement this search successfully, it's crucial to ingest appropriate logs, preferably using the Linux Sysmon Add-on from Splunkbase for those using Sysmon.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Service File Created In Systemd Directory” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.231",
              "n": "Linux Service Restarted",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the restarting or re-enabling of services on Linux systems using the `systemctl` or `service` commands. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and command-line execution logs. This activity is significant as adversaries may use it to maintain persistence or execute unauthorized actions. If confirmed malicious, this behavior could lead to repeated execution of malicious payloads, unauthorized access, or data destruct…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://attack.mitre.org/techniques/T1543/003/",
              "mitre": [
                "T1053.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Service Restarted\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Service Restarted\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Service Restarted” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.232",
              "n": "Linux Service Started Or Enabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation or enabling of services on Linux platforms using the systemctl or service tools. It leverages Endpoint Detection and Response (EDR) logs, focusing on process names, parent processes, and command-line executions. This activity is significant as adversaries may create or modify services to maintain persistence or execute malicious payloads. If confirmed malicious, this behavior could lead to persistent access, data theft, ransomware deployment, or other …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://attack.mitre.org/techniques/T1543/003/",
              "mitre": [
                "T1053.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Service Started Or Enabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Service Started Or Enabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Service Started Or Enabled” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.233",
              "n": "Linux Setuid Using Chmod Utility",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the chmod utility to set the SUID or SGID bit on files, which can allow users to temporarily gain root or group-level access. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments related to chmod. This activity is significant as it can indicate an attempt to escalate privileges or maintain persistence on a system. If confirmed malicious, an attacker could gain elevated…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://www.hackingarticles.in/linux-privilege-escalation-using-capabilities/",
              "mitre": [
                "T1548.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Setuid Using Chmod Utility\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Setuid Using Chmod Utility\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Setuid Using Chmod Utility” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.234",
              "n": "Linux Setuid Using Setcap Utility",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'setcap' utility to enable the SUID bit on Linux systems. It leverages Endpoint Detection and Response (EDR) data, focusing on process names and command-line arguments that indicate the use of 'setcap' with specific capabilities. This activity is significant because setting the SUID bit allows a user to temporarily gain root access, posing a substantial security risk. If confirmed malicious, an attacker could escalate privileges, execute arbitr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://www.hackingarticles.in/linux-privilege-escalation-using-capabilities/",
              "mitre": [
                "T1548.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Setuid Using Setcap Utility\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Setuid Using Setcap Utility\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Setuid Using Setcap Utility” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.235",
              "n": "Linux Shred Overwrite Command",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'shred' command on a Linux machine, which is used to overwrite files to make them unrecoverable. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because the 'shred' command can be used in destructive attacks, such as those seen in the Industroyer2 malware targeting energy facilities. If confirmed malicious, this activity could lead to the per…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/, https://cert.gov.ua/article/39518",
              "mitre": [
                "T1485"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Shred Overwrite Command\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Shred Overwrite Command\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (replace the ss_name wildcard with this detection’s name to scope results):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
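\n\nOptional Raw-Event Sanity Check\n\nBefore enabling the detection, you may want to confirm that candidate events exist in your environment at all. The sketch below is not the ESCU detection logic (that ships inside the correlation search); it is a minimal illustration that assumes Sysmon for Linux events are CIM-mapped into the Endpoint.Processes data model and simply lists every `shred` execution, deliberately broader than the shipped detection. Adjust fields and scope to your deployment:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"shred\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| convert ctime(firstTime) ctime(lastTime)\n```",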
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Shred Overwrite Command” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.236",
              "n": "Linux Sqlite3 Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the sqlite3 command with elevated privileges, which can be exploited for privilege escalation. It leverages Endpoint Detection and Response (EDR) telemetry to identify instances where sqlite3 is used in conjunction with shell commands and sudo. This activity is significant because it indicates a potential attempt to gain root access, which could lead to full system compromise. If confirmed malicious, an attacker could execute arbitrary commands as …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://gtfobins.github.io/gtfobins/sqlite3/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Sqlite3 Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Sqlite3 Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (replace the ss_name wildcard with this detection’s name to scope results):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
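\n\nOptional Raw-Event Sanity Check\n\nThe GTFOBins reference for this use case documents the escalation pattern `sudo sqlite3 /dev/null '.shell /bin/sh'`. As a hedged starting point (again, not the shipped ESCU logic), the sketch below assumes CIM-mapped Endpoint.Processes data and looks for sqlite3 launched under sudo or carrying a `.shell` argument; both match patterns are assumptions to refine locally:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"sqlite3\" (Processes.parent_process_name=\"sudo\" OR Processes.process=\"*.shell*\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| convert ctime(firstTime) ctime(lastTime)\n```",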
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Sqlite3 Privilege Escalation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.237",
              "n": "Linux SSH Authorized Keys Modification",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of SSH Authorized Keys on Linux systems. It leverages process execution data from Endpoint Detection and Response (EDR) agents, specifically monitoring commands like \"bash\" and \"cat\" interacting with \"authorized_keys\" files. This activity is significant as adversaries often modify SSH Authorized Keys to establish persistent access to compromised endpoints. If confirmed malicious, this behavior could allow attackers to maintain unauthorized access, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Filtering will be required as system administrators will add and remove. One way to filter query is to add \"echo\".",
              "refs": "https://redcanary.com/blog/lateral-movement-with-secure-shell/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1098.004/T1098.004.md",
              "mitre": [
                "T1098.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux SSH Authorized Keys Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux SSH Authorized Keys Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Filtering will be required as system administrators will add and remove. One way to filter query is to add \"echo\".\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (replace the ss_name wildcard with this detection’s name to scope results):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
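\n\nOptional Raw-Event Sanity Check\n\nCandidate events for this detection mention the authorized_keys file on the process command line. The minimal sketch below assumes CIM-mapped Endpoint.Processes data; the wildcard match is an assumption and will also catch legitimate key administration, which the false-positive note above anticipates:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process=\"*authorized_keys*\"\n  by Processes.dest Processes.user Processes.process_name Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| convert ctime(firstTime) ctime(lastTime)\n```",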
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux SSH Authorized Keys Modification” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.238",
              "n": "Linux Stop Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to stop or clear a service on Linux systems. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on processes like \"systemctl,\" \"service,\" and \"svcadm\" executing stop commands. This activity is significant as adversaries often terminate security or critical services to disable defenses or disrupt operations, as seen in malware like Industroyer2. If confirmed malicious, this could lead to the disabling of security mechanisms, allow…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/, https://cert.gov.ua/article/39518",
              "mitre": [
                "T1489"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Stop Services\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Stop Services\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (replace the ss_name wildcard with this detection’s name to scope results):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
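\n\nOptional Raw-Event Sanity Check\n\nThe analytic description names systemctl, service, and svcadm as the processes of interest. Below is a rough sanity-check sketch under the same CIM-mapped Endpoint.Processes assumption; the `*stop*` command-line match is an approximation that will also surface benign service restarts during maintenance windows:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name IN (\"systemctl\", \"service\", \"svcadm\") Processes.process=\"*stop*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| convert ctime(firstTime) ctime(lastTime)\n```",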
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Stop Services” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.239",
              "n": "Linux Sudo OR Su Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the \"sudo\" or \"su\" command on a Linux operating system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and parent process names. This activity is significant because \"sudo\" and \"su\" commands are commonly used by adversaries to elevate privileges, potentially leading to unauthorized access or control over the system. If confirmed malicious, this activity could allow attackers to execute commands with r…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name IN (\"sudo\", \"su\")\n        OR\n        Processes.parent_process_name IN (\"sudo\", \"su\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `linux_sudo_or_su_execution_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://attack.mitre.org/techniques/T1548/003/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Sudo OR Su Execution\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Sudo OR Su Execution\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Sudo OR Su Execution” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.240",
              "n": "Linux System Reboot Via System Request Key",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the SysReq hack to reboot a Linux system host. It leverages Endpoint Detection and Response (EDR) data to identify processes executing the command to pipe 'b' to /proc/sysrq-trigger. This activity is significant as it is an uncommon method to reboot a system and was observed in the Awfulshred malware wiper. If confirmed malicious, this technique could indicate the presence of suspicious processes and potential system compromise, leading to unauthor…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html, https://cert.gov.ua/article/3718487, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/overview-of-the-cyber-weapons-used-in-the-ukraine-russia-war/",
              "mitre": [
                "T1529"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux System Reboot Via System Request Key\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1529. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux System Reboot Via System Request Key\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (replace the ss_name wildcard with this detection’s name to scope results):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
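\n\nOptional Raw-Event Sanity Check\n\nThis technique pipes 'b' to /proc/sysrq-trigger, so the trigger path itself is a distinctive string to pivot on. A minimal sketch assuming CIM-mapped Endpoint.Processes data; matching on the path alone (rather than the full echo pipeline) is an assumption that trades precision for recall:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process=\"*/proc/sysrq-trigger*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| convert ctime(firstTime) ctime(lastTime)\n```",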
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux System Reboot Via System Request Key” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.241",
              "n": "Linux Unix Shell Enable All SysRq Functions",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a command to enable all SysRq functions on a Linux system, a technique associated with the AwfulShred malware. It leverages Endpoint Detection and Response (EDR) data to identify processes executing the command to pipe bitmask '1' to /proc/sys/kernel/sysrq. This activity is significant as it can indicate an attempt to manipulate kernel system requests, which is uncommon and potentially malicious. If confirmed, this could allow an attacker to reboot…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html, https://cert.gov.ua/article/3718487, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/overview-of-the-cyber-weapons-used-in-the-ukraine-russia-war/",
              "mitre": [
                "T1059.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Unix Shell Enable All SysRq Functions\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Unix Shell Enable All SysRq Functions\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (replace the ss_name wildcard with this detection’s name to scope results):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
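\n\nOptional Raw-Event Sanity Check\n\nHere the distinctive artifact is the kernel parameter path /proc/sys/kernel/sysrq. As a hedged illustration over CIM-mapped Endpoint.Processes data (not the shipped detection logic), the sketch flags any command line referencing that path, which may also catch legitimate sysctl tuning:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process=\"*/proc/sys/kernel/sysrq*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| convert ctime(firstTime) ctime(lastTime)\n```",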
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Unix Shell Enable All SysRq Functions” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.242",
              "n": "Linux Visudo Utility Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'visudo' utility to modify the /etc/sudoers file on a Linux system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. This activity is significant because unauthorized changes to the /etc/sudoers file can grant elevated privileges to users, potentially allowing adversaries to execute commands as root. If confirmed malicious, this could lead to full system compromise, privilege escalation, a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://askubuntu.com/questions/334318/sudoers-file-enable-nopasswd-for-user-all-commands",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Visudo Utility Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Visudo Utility Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (replace the ss_name wildcard with this detection’s name to scope results):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
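\n\nOptional Raw-Event Sanity Check\n\nSince the analytic centers on the visudo utility itself, a simple process-name match is enough for a first look. A minimal sketch under the usual CIM-mapped Endpoint.Processes assumption; expect hits from routine sudoers administration, as the false-positive note above warns:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"visudo\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| convert ctime(firstTime) ctime(lastTime)\n```",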
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Linux Visudo Utility Execution” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.243",
              "n": "Local Account Discovery With Wmic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `wmic.exe` with command-line arguments used to query local user accounts, specifically the `useraccount` argument. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. This activity is significant as it indicates potential reconnaissance efforts by adversaries to enumerate local users, which is a common step in situational awareness and Active Directory discovery.…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE `process_wmic` (Processes.process=*useraccount*)\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `local_account_discovery_with_wmic_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1087/001/",
              "mitre": [
                "T1087.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Local Account Discovery With Wmic\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Local Account Discovery With Wmic\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Local Account Discovery With Wmic” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.244",
              "n": "Logon Script Event Trigger Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the UserInitMprLogonScript registry entry, which is often used by attackers to establish persistence and gain privilege escalation upon system boot. It leverages data from the Endpoint.Registry data model, focusing on changes to the specified registry path. This activity is significant because it is a common technique used by APT groups and malware to ensure their payloads execute automatically when the system starts. If confirmed malicious, thi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records registry activity from your hosts to populate the endpoint data model in the registry node. This is typically populated via endpoint detection-and-response product, such as Carbon Black or endpoint data sources, such as Sysmon. The data used for this search is typically generated via logs that report reads and writes to…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1037/001/",
              "mitre": [
                "T1037.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Logon Script Event Trigger Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1037.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Logon Script Event Trigger Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
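              "_exampleSpl": "Per-entity risk exception sketch (hypothetical lookup and field names): instead of disabling the detection for one consistently benign host, append an expiring exception to the correlation search before risk events are written.\n\n```spl\n| lookup risk_entity_exceptions entity as dest OUTPUTNEW exception_expires\n| where isnull(exception_expires) OR now() > exception_expires\n```\n\nStoring an expiry epoch forces periodic re-review of each exception instead of leaving a permanent blind spot.",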
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.245",
              "n": "MacOS - Re-opened Applications",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies processes referencing plist files that determine which applications are re-opened when a user reboots their MacOS machine. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and parent processes related to \"com.apple.loginwindow.\" This activity is significant because it can indicate attempts to persist across reboots, a common tactic used by attackers to maintain access. If confirmed malicious, this could allow an atta…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| tstats `security_content_summariesonly` count values(Processes.process) as process values(Processes.parent_process) as parent_process min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process=\"*com.apple.loginwindow*\"\n      BY Processes.user Processes.process_name Processes.parent_process_name\n         Processes.dest\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `macos___re_opened_applications_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "At this stage, there are no known false positives. During testing, no process events referring the com.apple.loginwindow.plist files were observed during normal operation of re-opening applications on reboot. Therefore, it can be assumed that any occurrences of this in the process events would be worth investigating. In the event that the legitimate modification by the system of these files is in fact logged to the process log, then the process_name of that process can be added to an allow list.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "system",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MacOS - Re-opened Applications\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MacOS - Re-opened Applications\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: At this stage, there are no known false positives. During testing, no process events referring the com.apple.loginwindow.plist files were observed during normal operation of re-opening applications on reboot. Therefore, it can be assumed that any occurrences of this in the process events would be worth investigating. In the event that the legitimate modification by the system of these files is in fact logged to the process log, then the process_name of that process can be added to an allow list.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
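              "_exampleSpl": "Allowlist sketch per the known-false-positive note: if the system itself legitimately touches the com.apple.loginwindow plist files, add that process_name to an allow list inside the detection's filter macro, `macos___re_opened_applications_filter` (macro name taken from the detection SPL above; the lookup name is hypothetical).\n\n```spl\n| lookup macos_loginwindow_allowlist process_name OUTPUTNEW allowed\n| where isnull(allowed)\n```",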
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “MacOS - Re-opened Applications” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.246",
              "n": "Malicious InProcServer32 Modification",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a process modifying the registry with a known malicious CLSID under InProcServer32. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on registry modifications within the HKLM or HKCU Software Classes CLSID paths. This activity is significant as it may indicate an attempt to load a malicious DLL, potentially leading to code execution. If confirmed malicious, this could allow an attacker to persist in the environment, execute arbitrary co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 12, Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited, filter as needed. In our test case, Remcos used regsvr32.exe to modify the registry. It may be required, dependent upon the EDR tool producing registry events, to remove (Default) from the command-line.",
              "refs": "https://bohops.com/2018/06/28/abusing-com-registry-structure-clsid-localserver32-inprocserver32/, https://tria.ge/210929-ap75vsddan, https://www.virustotal.com/gui/file/cb77b93150cb0f7fe65ce8a7e2a5781e727419451355a7736db84109fa215a89",
              "mitre": [
                "T1218.010",
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Malicious InProcServer32 Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 12, Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.010, T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Malicious InProcServer32 Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited, filter as needed. In our test case, Remcos used regsvr32.exe to modify the registry. It may be required, dependent upon the EDR tool producing registry events, to remove (Default) from the command-line.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
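              "_exampleSpl": "Suppression sketch (hypothetical lookup name): the known-false-positive note says some EDR products may require stripping \"(Default)\" from the command line before matching, so normalize first, then drop processes on a vetted allowlist.\n\n```spl\n| eval process_clean = replace(process, \"\\\\(Default\\\\)\", \"\")\n| lookup clsid_modification_allowlist process_name OUTPUTNEW allowed\n| where isnull(allowed)\n```",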
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Malicious InProcServer32 Modification” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.247",
              "n": "Malicious PowerShell Process - Encoded Command",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the EncodedCommand parameter in PowerShell processes. It leverages Endpoint Detection and Response (EDR) data to identify variations of the EncodedCommand parameter, including shortened forms and different command switch types. This activity is significant because adversaries often use encoded commands to obfuscate malicious scripts, making detection harder. If confirmed malicious, this behavior could allow attackers to execute hidden code, potentially l…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where `process_powershell` by Processes.action Processes.dest Processes.original_file_name Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path Processes.process Processes.process_exec Processes.process_guid Processes.process_hash Processes.process_id Processes.process_integrity_level Processes.process_name Processes.process_path Processes.user Processes.user_id Processes.vendor_product | `drop_dm_object_name(Processes)` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | where match(process,\\\"(?i)[\\\\-|\\\\/|–|—|―][Ee^]{1,2}[NnCcOoDdEeMmAa^]+\\\\s+[\\\\\\\"]?[A-Za-z0-9+/=]{5,}[\\\\\\\"]?\\\") | `malicious_powershell_process___encoded_command_filter`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "System administrators may use this option, but it's not common.",
              "refs": "https://regexr.com/662ov, https://github.com/redcanaryco/AtomicTestHarnesses/blob/master/Windows/TestHarnesses/T1059.001_PowerShell/OutPowerShellCommandLineParameter.ps1, https://ss64.com/ps/powershell.html, https://twitter.com/M_haggis/status/1440758396534214658?s=20, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1027"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Malicious PowerShell Process - Encoded Command\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Malicious PowerShell Process - Encoded Command\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: System administrators may use this option, but it's not common.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Malicious PowerShell Process - Encoded Command” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.248",
              "n": "Malicious PowerShell Process - Execution Policy Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects PowerShell processes initiated with parameters that bypass the local execution policy for scripts. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions containing specific flags like \"-ex\" or \"bypass.\" This activity is significant because bypassing execution policies is a common tactic used by attackers to run malicious scripts undetected. If confirmed malicious, this could allow an attacker to execute arbitrary c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There may be legitimate reasons to bypass the PowerShell execution policy. The PowerShell script being run with this parameter should be validated to ensure that it is legitimate.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Malicious PowerShell Process - Execution Policy Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Malicious PowerShell Process - Execution Policy Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There may be legitimate reasons to bypass the PowerShell execution policy. The PowerShell script being run with this parameter should be validated to ensure that it is legitimate.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
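              "_exampleSpl": "Validation sketch per the known-false-positive note: before tuning, enumerate which scripts actually launch with bypass-style flags so each can be vetted. This mirrors the Endpoint.Processes pattern used by the detection; the wildcards are illustrative.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=powershell.exe (Processes.process=\"*-ex*\" OR Processes.process=\"*bypass*\")\n  by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",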
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Malicious PowerShell Process - Execution Policy Bypass” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.249",
              "n": "Malicious PowerShell Process With Obfuscation Techniques",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects PowerShell processes launched with command-line arguments indicative of obfuscation techniques. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, parent processes, and complete command-line executions. This activity is significant because obfuscated PowerShell commands are often used by attackers to evade detection and execute malicious scripts. If confirmed malicious, this activity could lead to unauthorized code execu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "These characters might be legitimately on the command-line, but it is not common.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Malicious PowerShell Process With Obfuscation Techniques\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Malicious PowerShell Process With Obfuscation Techniques\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: These characters might be legitimately on the command-line, but it is not common.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
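              "_exampleSpl": "Risk-review sketch before adjusting this detection's score weight: summarize which entities accumulate risk from it, mirroring the Risk data model drilldown pattern shown in the SPL above. The search_name wildcard is illustrative; match it to the deployed correlation search name.\n\n```spl\n| from datamodel Risk.All_Risk\n| search search_name=\"*Obfuscation*\"\n| stats count sum(risk_score) as total_risk values(risk_message) as messages by normalized_risk_object\n| sort -total_risk\n```",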
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Malicious PowerShell Process With Obfuscation Techniques” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.250",
              "n": "Mimikatz PassTheTicket CommandLine Parameters",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Mimikatz command line parameters associated with pass-the-ticket attacks. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific command-line patterns related to Kerberos ticket manipulation. This activity is significant because pass-the-ticket attacks allow adversaries to move laterally within an environment using stolen Kerberos tickets, bypassing normal access controls. If confirmed malicious, this could enable attac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although highly unlikely, legitimate applications may use the same command line parameters as Mimikatz.",
              "refs": "https://github.com/gentilkiwi/mimikatz, https://attack.mitre.org/techniques/T1550/003/",
              "mitre": [
                "T1550.003"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Mimikatz PassTheTicket CommandLine Parameters\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1550.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Mimikatz PassTheTicket CommandLine Parameters\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although highly unlikely, legitimate applications may use the same command line parameters as Mimikatz.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
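              "_spl_sketch": "Illustrative only: a minimal tstats sketch of the kind of Kerberos-ticket command-line match this detection performs over CIM-normalized EDR data; it approximates, rather than reproduces, the shipped ESCU logic, which you can view in Content Management.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process=\"*sekurlsa::tickets*\" OR Processes.process=\"*kerberos::ptt*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",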
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Mimikatz PassTheTicket CommandLine Parameters” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.251",
              "n": "Modification Of Wallpaper",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of registry keys related to the desktop wallpaper settings. It leverages Sysmon EventCode 13 to identify changes to the \"Control Panel\\\\Desktop\\\\Wallpaper\" and \"Control Panel\\\\Desktop\\\\WallpaperStyle\" registry keys, especially when the modifying process is not explorer.exe or involves suspicious file paths like temp or public directories. This activity is significant as it can indicate ransomware behavior, such as the REVIL ransomware, which change…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the Image, TargetObject registry key, registry Details from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "3rd party tool may used to changed the wallpaper of the machine",
              "refs": "https://krebsonsecurity.com/2021/05/a-closer-look-at-the-darkside-ransomware-gang/, https://www.mcafee.com/blogs/other-blogs/mcafee-labs/mcafee-atr-analyzes-sodinokibi-aka-revil-ransomware-as-a-service-what-the-code-tells-us/, https://news.sophos.com/en-us/2020/04/24/lockbit-ransomware-borrows-tricks-to-keep-up-with-revil-and-maze/",
              "mitre": [
                "T1491"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Modification Of Wallpaper\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1491. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Modification Of Wallpaper\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: 3rd party tool may used to changed the wallpaper of the machine\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
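              "_spl_sketch": "Illustrative only: a minimal tstats sketch of the wallpaper registry match this detection performs, flagging writers other than explorer.exe; it assumes the Endpoint.Registry data model is populated and approximates, rather than reproduces, the shipped ESCU logic.\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\Control Panel\\\\Desktop\\\\Wallpaper*\"\n  by Registry.dest Registry.process_name Registry.registry_path\n| `drop_dm_object_name(Registry)`\n| where process_name!=\"explorer.exe\"\n```",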
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Modification Of Wallpaper” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.252",
              "n": "Modify ACL permission To Files Or Folder",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of ACL permissions to files or folders, making them accessible to everyone or to system account. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on processes like \"cacls.exe,\" \"icacls.exe,\" and \"xcacls.exe\" with specific command-line arguments. This activity is significant as it may indicate an adversary attempting to evade ACLs or access protected files. If confirmed malicious, this could allow unauthorized access to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may use this command. Filter as needed.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1222"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Modify ACL permission To Files Or Folder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Modify ACL permission To Files Or Folder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may use this command. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
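              "_spl_sketch": "Illustrative only: a minimal tstats sketch of the cacls/icacls/xcacls command-line pattern this detection targets; it assumes CIM-normalized process data in Endpoint.Processes and approximates, rather than reproduces, the shipped ESCU logic.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name IN (\"cacls.exe\", \"icacls.exe\", \"xcacls.exe\")\n    AND (Processes.process=\"*/grant*\" OR Processes.process=\"*everyone:*\" OR Processes.process=\"*SYSTEM*\")\n  by Processes.dest Processes.user Processes.process_name Processes.process\n```",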
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Modify ACL permission To Files Or Folder” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.253",
              "n": "Monitor Registry Keys for Print Monitors",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the registry key `HKLM\\SYSTEM\\CurrentControlSet\\Control\\Print\\Monitors`. It leverages data from the Endpoint.Registry data model, focusing on events where the registry path is modified. This activity is significant because attackers can exploit this registry key to load arbitrary .dll files, which will execute with elevated SYSTEM permissions and persist after a reboot. If confirmed malicious, this could allow attackers to maintain persistence, exe…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "You will encounter noise from legitimate print-monitor registry entries.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1547.010"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Monitor Registry Keys for Print Monitors\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.010. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Monitor Registry Keys for Print Monitors\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: You will encounter noise from legitimate print-monitor registry entries.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
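              "_spl_sketch": "Illustrative only: a minimal tstats sketch of the print-monitor registry-path match this detection performs, paired with the lookup-based suppression suggested under Tuning; the lookup name `known_good_print_monitors` is a hypothetical placeholder, and the shipped ESCU search remains authoritative.\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\CurrentControlSet\\\\Control\\\\Print\\\\Monitors*\"\n  by Registry.dest Registry.process_name Registry.registry_path Registry.registry_value_name\n| `drop_dm_object_name(Registry)`\n| lookup known_good_print_monitors registry_value_name OUTPUT approved\n| where isnull(approved)\n```",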
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.254",
              "n": "MS Scripting Process Loading WMI Module",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the loading of WMI modules by Microsoft scripting processes like wscript.exe or cscript.exe. It leverages Sysmon EventCode 7 to identify instances where these scripting engines load specific WMI-related DLLs. This activity is significant because it can indicate the presence of malware, such as the FIN7 implant, which uses JavaScript to execute WMI queries for gathering host information to send to a C2 server. If confirmed malicious, this behavior could allow attack…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA. Tune and filter known instances where renamed rundll32.exe may be used.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "automation scripting language may used by network operator to do ldap query.",
              "refs": "https://www.mandiant.com/resources/fin7-pursuing-an-enigmatic-and-evasive-global-criminal-operation, https://attack.mitre.org/groups/G0046/",
              "mitre": [
                "T1059.007"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MS Scripting Process Loading WMI Module\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MS Scripting Process Loading WMI Module\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: automation scripting language may used by network operator to do ldap query.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
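              "_spl_sketch": "Illustrative only: a minimal raw-event sketch of the Sysmon EventCode 7 (image load) match this detection performs; the index and sourcetype are deployment-specific placeholders, and the DLL list is representative of common WMI modules rather than the exact ESCU set.\n\n```spl\nindex=windows sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=7\n  (Image=\"*\\\\wscript.exe\" OR Image=\"*\\\\cscript.exe\")\n  (ImageLoaded=\"*\\\\wbemdisp.dll\" OR ImageLoaded=\"*\\\\wbemprox.dll\" OR ImageLoaded=\"*\\\\wbemsvc.dll\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, ImageLoaded\n```",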
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “MS Scripting Process Loading WMI Module” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.255",
              "n": "MSBuild Suspicious Spawned By Script Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious spawning of MSBuild.exe by Windows Script Host processes (cscript.exe or wscript.exe). This behavior is often associated with malware or adversaries executing malicious MSBuild processes via scripts on compromised hosts. The detection leverages Endpoint Detection and Response (EDR) telemetry, focusing on process creation events where MSBuild is a child of script hosts. This activity is significant as it may indicate an attempt to execute malicious co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as developers do not spawn MSBuild via a WSH.",
              "refs": "https://app.any.run/tasks/dc93ee63-050c-4ff8-b07e-8277af9ab939/",
              "mitre": [
                "T1127.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MSBuild Suspicious Spawned By Script Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1127.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MSBuild Suspicious Spawned By Script Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as developers do not spawn MSBuild via a WSH.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
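              "_spl_sketch": "Illustrative only: a minimal tstats sketch of the parent/child relationship this detection targets (msbuild.exe spawned by wscript.exe or cscript.exe); it assumes CIM-normalized process data and approximates, rather than reproduces, the shipped ESCU logic.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name IN (\"wscript.exe\", \"cscript.exe\")\n    AND Processes.process_name=\"msbuild.exe\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n```",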
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “MSBuild Suspicious Spawned By Script Process” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.256",
              "n": "Mshta spawning Rundll32 OR Regsvr32 Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious mshta.exe process spawning rundll32 or regsvr32 child processes. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process GUID, process name, and parent process fields. This activity is significant as it is a known technique used by malware like Trickbot to load malicious DLLs and execute payloads. If confirmed malicious, this behavior could allow attackers to execute arbitrary code, escalate privileges, or download addi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "limitted. this anomaly behavior is not commonly seen in clean host.",
              "refs": "https://twitter.com/cyb3rops/status/1416050325870587910?s=21",
              "mitre": [
                "T1218.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Mshta spawning Rundll32 OR Regsvr32 Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Mshta spawning Rundll32 OR Regsvr32 Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: limitted. this anomaly behavior is not commonly seen in clean host.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
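              "_spl_sketch": "Illustrative only: a minimal tstats sketch of the mshta.exe parent with rundll32/regsvr32 children this detection targets; it assumes CIM-normalized process data and approximates, rather than reproduces, the shipped ESCU logic.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=\"mshta.exe\"\n    AND Processes.process_name IN (\"rundll32.exe\", \"regsvr32.exe\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n```",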
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Mshta spawning Rundll32 OR Regsvr32 Process” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.257",
              "n": "Msmpeng Application DLL Side Loading",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious creation of msmpeng.exe or mpsvc.dll in non-default Windows Defender folders. It leverages the Endpoint.Filesystem datamodel to identify instances where these files are created outside their expected directories. This activity is significant because it is associated with the REvil ransomware, which uses DLL side-loading to execute malicious payloads. If confirmed malicious, this could lead to ransomware deployment, resulting in data encryption, syste…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the Filesystem responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "quite minimal false positive expected.",
              "refs": "https://community.sophos.com/b/security-blog/posts/active-ransomware-attack-on-kaseya-customers",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Msmpeng Application DLL Side Loading\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Msmpeng Application DLL Side Loading\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: quite minimal false positive expected.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
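\n\nOptional spot-check (a minimal sketch, not part of the ESCU content: it assumes the Endpoint data model is accelerated and that Sysmon EventID 11 file-creation events populate the Filesystem node; set summariesonly=false if the model is not accelerated):\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Filesystem\n    where Filesystem.file_name IN (\"msmpeng.exe\", \"mpsvc.dll\")\n    by Filesystem.dest Filesystem.file_path\n| sort -count\n```\n\nHits outside the default Windows Defender directories warrant review.",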
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.258",
              "n": "NET Profiler UAC bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the registry aimed at bypassing the User Account Control (UAC) feature in Windows. It identifies changes to the .NET COR_PROFILER_PATH registry key, which can be exploited to load a malicious DLL via mmc.exe. This detection leverages data from the Endpoint.Registry datamodel, focusing on specific registry paths and values. Monitoring this activity is crucial as it can indicate an attempt to escalate privileges or persist within the environment. If …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "limited false positive. It may trigger by some windows update that will modify this registry.",
              "refs": "https://offsec.almond.consulting/UAC-bypass-dotnet.html",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"NET Profiler UAC bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"NET Profiler UAC bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: limited false positive. It may trigger by some windows update that will modify this registry.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
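\n\nOptional spot-check (a minimal sketch, not part of the ESCU content: it assumes registry telemetry such as Sysmon EventID 13 is CIM-mapped into the Registry node and that the Endpoint data model is accelerated; set summariesonly=false otherwise):\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Registry\n    where Registry.registry_path=\"*COR_PROFILER*\"\n    by Registry.dest Registry.registry_path Registry.registry_value_name\n| sort -count\n```\n\nAny COR_PROFILER modifications outside sanctioned .NET profiling or diagnostics tooling warrant review.",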
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.259",
              "n": "Network Connection Discovery With Arp",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `arp.exe` with the `-a` flag, which is used to list network connections on a compromised system. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, command-line executions, and related telemetry. Monitoring this activity is significant because both Red Teams and adversaries use `arp.exe` for situational awareness and Active Directory discovery. If confirmed malicious, this activity could allo…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"arp.exe\"\n        )\n        (Processes.process=*-a*)\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `network_connection_discovery_with_arp_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1049/, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/, https://thedfirreport.com/2023/05/22/icedid-macro-ends-in-nokoyawa-ransomware/",
              "mitre": [
                "T1049"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Network Connection Discovery With Arp\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1049. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Network Connection Discovery With Arp\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Network Connection Discovery With Arp” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.260",
              "n": "Network Connection Discovery With Netstat",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `netstat.exe` with command-line arguments to list network connections on a system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, command-line executions, and parent processes. This activity is significant as both Red Teams and adversaries use `netstat.exe` for situational awareness and Active Directory discovery. If confirmed malicious, this behavior could allow attackers to map network connections,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"netstat.exe\"\n        )\n        (Processes.process=*-a*)\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `network_connection_discovery_with_netstat_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1049/, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1049"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Network Connection Discovery With Netstat\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1049. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Network Connection Discovery With Netstat\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Network Connection Discovery With Netstat” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.261",
              "n": "Nishang PowershellTCPOneLine",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the Nishang Invoke-PowerShellTCPOneLine utility, which initiates a callback to a remote Command and Control (C2) server. It leverages Endpoint Detection and Response (EDR) data, focusing on PowerShell processes that include specific .NET classes like Net.Sockets.TCPClient and System.Text.ASCIIEncoding. This activity is significant as it indicates potential remote control or data exfiltration attempts by an attacker. If confirmed malicious, this could lea…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives may be present. Filter as needed based on initial analysis.",
              "refs": "https://github.com/samratashok/nishang/blob/master/Shells/Invoke-PowerShellTcpOneLine.ps1, https://www.volexity.com/blog/2021/03/02/active-exploitation-of-microsoft-exchange-zero-day-vulnerabilities/, https://www.microsoft.com/security/blog/2021/03/02/hafnium-targeting-exchange-servers/, https://www.rapid7.com/blog/post/2021/03/03/rapid7s-insightidr-enables-detection-and-response-to-microsoft-exchange-0-day/",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Nishang PowershellTCPOneLine\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Nishang PowershellTCPOneLine\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives may be present. Filter as needed based on initial analysis.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
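\n\nOptional spot-check (a minimal sketch, not part of the ESCU content: it assumes process telemetry is CIM-mapped into the Processes node and the Endpoint data model is accelerated; set summariesonly=false otherwise):\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Processes\n    where Processes.process_name IN (\"powershell.exe\", \"pwsh.exe\") Processes.process=\"*Net.Sockets.TCPClient*\"\n    by Processes.dest Processes.user\n| sort -count\n```\n\nPowerShell command lines instantiating raw TCP clients are rare in normal administration and warrant review.",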
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Nishang PowershellTCPOneLine” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.262",
              "n": "Non Firefox Process Access Firefox Profile Dir",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects non-Firefox processes accessing the Firefox profile directory, which contains sensitive user data such as login credentials, browsing history, and cookies. It leverages Windows Security Event logs, specifically event code 4663, to monitor access attempts. This activity is significant because it may indicate attempts by malware, such as RATs or trojans, to harvest user information. If confirmed malicious, this behavior could lead to data exfiltration, unauthorized a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest Windows Security Event logs and track event code 4663. For 4663, enable \"Audit Object Access\" in Group Policy. Then check the two boxes listed for both \"Success\" and \"Failure.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "other browser not listed related to firefox may catch by this rule.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1555.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Non Firefox Process Access Firefox Profile Dir\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1555.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Non Firefox Process Access Firefox Profile Dir\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: other browser not listed related to firefox may catch by this rule.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
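\n\nOptional spot-check (a minimal sketch, not part of the ESCU content: the field names Object_Name and Process_Name follow the classic Windows TA extractions; XML-format inputs typically use ObjectName and ProcessName, and your index name may differ):\n\n```spl\nindex=wineventlog EventCode=4663 Object_Name=\"*\\\\Mozilla\\\\Firefox\\\\Profiles\\\\*\"\n| stats count by host, Process_Name\n| sort -count\n```\n\nProcesses other than firefox.exe touching profile directories, especially unsigned binaries, warrant review.",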
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Non Firefox Process Access Firefox Profile Dir” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.263",
              "n": "Notepad with no Command Line Arguments",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where Notepad.exe is launched without any command line arguments, a behavior commonly associated with the SliverC2 framework. This detection leverages process creation events from Endpoint Detection and Response (EDR) agents, focusing on processes initiated by Notepad.exe within a short time frame. This activity is significant as it may indicate an attempt to inject malicious code into Notepad.exe, a known tactic for evading detection. If confirmed mal…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present and filtering may need to occur based on organization endpoint behavior.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2022/08/24/looking-for-the-sliver-lining-hunting-for-emerging-command-and-control-frameworks/, https://www.cybereason.com/blog/sliver-c2-leveraged-by-many-threat-actors#Purple-Team-Section",
              "mitre": [
                "T1055"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Notepad with no Command Line Arguments\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Notepad with no Command Line Arguments\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present and filtering may need to occur based on organization endpoint behavior.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
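\n\nOptional spot-check (a minimal sketch, not part of the ESCU content: it assumes process telemetry is CIM-mapped into the Processes node and the Endpoint data model is accelerated; set summariesonly=false otherwise):\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Processes\n    where Processes.process_name=\"notepad.exe\"\n    by Processes.dest Processes.process\n| sort -count\n```\n\nReview which notepad.exe launches carry a bare process field (no command-line arguments) before treating them as suspicious.",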
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Notepad with no Command Line Arguments” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.264",
              "n": "Ntdsutil Export NTDS",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Ntdsutil to export the Active Directory database (NTDS.dit). It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because exporting NTDS.dit can be a precursor to offline password cracking, posing a severe security risk. If confirmed malicious, an attacker could gain access to sensitive credentials, potentially leading to unauthorized access and privilege e…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Highly possible Server Administrators will troubleshoot with ntdsutil.exe, generating false positives.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1003.003/T1003.003.md#atomic-test-3---dump-active-directory-database-with-ntdsutil, https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc753343(v=ws.11), https://2017.zeronights.org/wp-content/uploads/materials/ZN17_Kheirkhabarov_Hunting_for_Credentials_Dumping_in_Windows_Environment.pdf, https://strontic.github.io/xcyclopedia/library/vss_ps.dll-97B15BDAE9777F454C9A6BA25E938DB3.html, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1003.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ntdsutil Export NTDS\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ntdsutil Export NTDS\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Highly possible Server Administrators will troubleshoot with ntdsutil.exe, generating false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
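              "_ex_note": "Editor-added sketch; these _ex_* fields are illustrative additions, not catalog schema. It approximates the underlying detection logic, ntdsutil.exe invoked with \"ntds\" and \"create\" on the command line, via the CIM Endpoint.Processes data model. The wildcard patterns are assumptions; compare with the packaged ESCU correlation search before relying on it.",
              "_ex_spl": "| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=\"ntdsutil.exe\" Processes.process=\"*ntds*\" Processes.process=\"*create*\" by Processes.dest Processes.user Processes.process | `drop_dm_object_name(Processes)` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",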
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Ntdsutil Export NTDS” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.265",
              "n": "Overwriting Accessibility Binaries",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to Windows accessibility binaries such as sethc.exe, utilman.exe, osk.exe, Magnify.exe, Narrator.exe, DisplaySwitch.exe, and AtBroker.exe. It leverages filesystem activity data from the Endpoint.Filesystem data model to identify changes to these specific files. This activity is significant because adversaries can exploit these binaries to gain unauthorized access or execute commands without logging in. If confirmed malicious, this could allow attacker…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must be ingesting data that records the filesystem activity from your hosts to populate the Endpoint file-system data model node. If you are using Sysmon, you will need a Splunk Universal Forwarder on each endpoint from which you want to collect data.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Microsoft may provide updates to these binaries. Verify that these changes do not correspond with your normal software update cycle.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1546.008"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Overwriting Accessibility Binaries\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Overwriting Accessibility Binaries\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Microsoft may provide updates to these binaries. Verify that these changes do not correspond with your normal software update cycle.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
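              "_ex_note": "Editor-added sketch; these _ex_* fields are illustrative additions, not catalog schema. It watches the CIM Endpoint.Filesystem data model, populated here from Sysmon EventID 11, for writes to the accessibility binaries named in the description. File-name casing and the absence of a path filter are assumptions; tune to your environment.",
              "_ex_spl": "| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Filesystem where Filesystem.file_name IN (\"sethc.exe\", \"utilman.exe\", \"osk.exe\", \"Magnify.exe\", \"Narrator.exe\", \"DisplaySwitch.exe\", \"AtBroker.exe\") by Filesystem.dest Filesystem.file_path Filesystem.file_name | `drop_dm_object_name(Filesystem)`",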
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.266",
              "n": "Permission Modification using Takeown App",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of file or directory permissions using the takeown.exe Windows application. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include process GUID, process name, and command-line details. This activity is significant because it is a common technique used by ransomware to take ownership of files or folders for encryption or deletion. If confirmed malicious, this could lead to unauthorized ac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "takeown.exe is a normal windows application that may used by network operator.",
              "refs": "https://research.nccgroup.com/2020/06/23/wastedlocker-a-new-ransomware-variant-developed-by-the-evil-corp-group/",
              "mitre": [
                "T1222"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Permission Modification using Takeown App\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Permission Modification using Takeown App\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: takeown.exe is a normal windows application that may used by network operator.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
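              "_ex_note": "Editor-added sketch; these _ex_* fields are illustrative additions, not catalog schema. It approximates the detection over the CIM Endpoint.Processes data model: takeown.exe run with the /f flag. The */f* pattern is an assumption; to surface ransomware-style bulk ownership changes, consider appending a threshold such as | where count > 10.",
              "_ex_spl": "| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=\"takeown.exe\" Processes.process=\"*/f*\" by Processes.dest Processes.user Processes.parent_process_name Processes.process | `drop_dm_object_name(Processes)`",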
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Permission Modification using Takeown App” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.267",
              "n": "Possible Browser Pass View Parameter",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies processes with command-line parameters associated with web browser credential dumping tools, specifically targeting behaviors used by Remcos RAT malware. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and specific file paths. This activity is significant as it indicates potential credential theft, a common tactic in broader cyber-espionage campaigns. If confirmed malicious, attackers could gain unauthoriz…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process  IN (\"*/stext *\", \"*/shtml *\", \"*/LoadPasswordsIE*\", \"*/LoadPasswordsFirefox*\", \"*/LoadPasswordsChrome*\", \"*/LoadPasswordsOpera*\", \"*/LoadPasswordsSafari*\" , \"*/UseOperaPasswordFile*\", \"*/OperaPasswordFile*\",\"*/stab*\", \"*/scomma*\", \"*/stabular*\", \"*/shtml*\", \"*/sverhtml*\", \"*/sxml*\", \"*/skeepass*\" ) AND Processes.process IN (\"*\\\\temp\\\\*\", \"*\\\\users\\\\public\\\\*\", \"*\\\\programdata\\\\*\") by Processes.action Processes.dest Processes.original_file_name Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path Processes.process Processes.process_exec Processes.process_guid Processes.process_hash Processes.process_id Processes.process_integrity_level Processes.process_name Processes.process_path Processes.user Processes.user_id Processes.vendor_product | `drop_dm_object_name(Processes)` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `possible_browser_pass_view_parameter_filter`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positive is quite limited. Filter is needed",
              "refs": "https://www.nirsoft.net/utils/web_browser_password.html, https://app.any.run/tasks/df0baf9f-8baf-4c32-a452-16562ecb19be/",
              "mitre": [
                "T1555.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Possible Browser Pass View Parameter\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1555.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Possible Browser Pass View Parameter\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positive is quite limited. Filter is needed\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Possible Browser Pass View Parameter” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.268",
              "n": "Potential Telegram API Request Via CommandLine",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the presence of \"api.telegram.org\" in the CommandLine of a process. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. This activity can be significant as the telegram API has been used as an exfiltration mechanism or even as a C2 channel. If confirmed malicious, this could allow an attacker or malware to exfiltrate data or receive additional C2 instruction, potentially leading …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positive may stem from application or users requesting the API directly via CommandLine for testing purposes. Investigate the matches and apply the necessary filters.",
              "refs": "https://www.virustotal.com/gui/file/0b3ef5e04329cefb5bb4bf30b3edcb32d1ec6bbcb29d22695a079bfb5b56e8ac/behavior, https://www.virustotal.com/gui/file/72c59eeb15b5ec1d95e72e4b06a030bc058822bc10e5cb807e78a4624d329666/behavior, https://www.virustotal.com/gui/file/72c59eeb15b5ec1d95e72e4b06a030bc058822bc10e5cb807e78a4624d329666/content, https://www.virustotal.com/gui/file/1c4541bf70b6e251ef024ec4dde8dce400539c2368461c0d90e15a81b11ace44/content",
              "mitre": [
                "T1102.002",
                "T1041"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Potential Telegram API Request Via CommandLine\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1102.002, T1041. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Potential Telegram API Request Via CommandLine\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positive may stem from application or users requesting the API directly via CommandLine for testing purposes. Investigate the matches and apply the necessary filters.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
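              "_ex_note": "Editor-added sketch; these _ex_* fields are illustrative additions, not catalog schema. It approximates the detection over the CIM Endpoint.Processes data model: any process with api.telegram.org on its command line. Treat matches as leads; the tuning note above lists command-line API testing as a known benign cause.",
              "_ex_spl": "| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process=\"*api.telegram.org*\" by Processes.dest Processes.user Processes.parent_process_name Processes.process | `drop_dm_object_name(Processes)`",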
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Potential Telegram API Request Via CommandLine” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.269",
              "n": "Potentially malicious code on commandline",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potentially malicious command lines using a pretrained machine learning text classifier. It identifies unusual keyword combinations in command lines, such as \"streamreader,\" \"webclient,\" \"mutex,\" \"function,\" and \"computehash,\" which are often associated with adversarial PowerShell code execution for C2 communication. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command lines longer than 200 characters. This activity i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This model is an anomaly detector that identifies usage of APIs and scripting constructs that are correllated with malicious activity.  These APIs and scripting constructs are part of the programming langauge and advanced scripts may generate false positives.",
              "refs": "https://attack.mitre.org/techniques/T1059/003/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1059.001/T1059.001.md",
              "mitre": [
                "T1059.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Potentially malicious code on commandline\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Potentially malicious code on commandline\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This model is an anomaly detector that identifies usage of APIs and scripting constructs that are correllated with malicious activity.  These APIs and scripting constructs are part of the programming langauge and advanced scripts may generate false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
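              "_ex_note": "Editor-added sketch; these _ex_* fields are illustrative additions, not catalog schema. The real detection uses a pretrained machine learning text classifier, which this keyword search only loosely approximates. The keyword shortlist and the 200-character threshold come from the description above; they are not the model and will miss what the classifier catches.",
              "_ex_spl": "| tstats summariesonly=true count from datamodel=Endpoint.Processes where Processes.process=\"*webclient*\" OR Processes.process=\"*streamreader*\" OR Processes.process=\"*computehash*\" by Processes.dest Processes.user Processes.process | `drop_dm_object_name(Processes)` | where len(process) > 200",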
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Potentially malicious code on commandline” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.270",
              "n": "PowerShell - Connect To Internet With Hidden Window",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects PowerShell commands using the WindowStyle parameter to hide the window while connecting to the Internet. This behavior is identified through Endpoint Detection and Response (EDR) telemetry, focusing on command-line executions that include variations of the WindowStyle parameter. This activity is significant because it attempts to bypass default PowerShell execution policies and conceal its actions, which is often indicative of malicious intent. If confirmed malicio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE `process_powershell`\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | where match(process,\"(?i)[\\-\n    | \\/\n    | –\n    | —\n    | ―]w(in*d*o*w*s*t*y*l*e*)*\\s+[^-]\")\n    | `powershell___connect_to_internet_with_hidden_window_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate process can have this combination of command-line options, but it's not common.",
              "refs": "https://regexr.com/663rr, https://github.com/redcanaryco/AtomicTestHarnesses/blob/master/Windows/TestHarnesses/T1059.001_PowerShell/OutPowerShellCommandLineParameter.ps1, https://ss64.com/ps/powershell.html, https://twitter.com/M_haggis/status/1440758396534214658?s=20, https://blog.netlab.360.com/ten-families-of-malicious-samples-are-spreading-using-the-log4j2-vulnerability-now/",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PowerShell - Connect To Internet With Hidden Window\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PowerShell - Connect To Internet With Hidden Window\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate process can have this combination of command-line options, but it's not common.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “PowerShell - Connect To Internet With Hidden Window” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.271",
              "n": "Powershell Creating Thread Mutex",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of PowerShell scripts using the `mutex` function via EventCode 4104. This detection leverages PowerShell Script Block Logging to identify scripts that create thread mutexes, a technique often used in obfuscated scripts to ensure only one instance runs on a compromised machine. This activity is significant as it may indicate the presence of sophisticated malware or persistence mechanisms. If confirmed malicious, the attacker could maintain exclusive co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "powershell developer may used this function in their script for instance checking too.",
              "refs": "https://isc.sans.edu/forums/diary/Some+Powershell+Malicious+Code/22988/, https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/",
              "mitre": [
                "T1027.005",
                "T1059.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Creating Thread Mutex\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027.005, T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Creating Thread Mutex\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: powershell developer may used this function in their script for instance checking too.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
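              "_exampleSpl": "A minimal raw-event sketch of the underlying detection idea (script blocks that create a System.Threading.Mutex), not the packaged ESCU correlation search; the authoritative logic lives in Content Management. The sourcetype and the field names `ScriptBlockText`, `Computer`, and `UserID` assume the Windows TA's XML rendering of the PowerShell Operational log; adjust to your environment.\n\n```spl\nsourcetype=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104 ScriptBlockText=\"*System.Threading.Mutex*\"\n| stats count min(_time) as firstTime max(_time) as lastTime values(ScriptBlockText) as script_blocks by Computer, UserID\n| convert ctime(firstTime) ctime(lastTime)\n```",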
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Powershell Creating Thread Mutex” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.272",
              "n": "Powershell Execute COM Object",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a COM CLSID through PowerShell. It leverages EventCode 4104 and searches for specific script block text indicating the creation of a COM object. This activity is significant as it is commonly used by adversaries and malware, such as the Conti ransomware, to execute commands, potentially for privilege escalation or bypassing User Account Control (UAC). If confirmed malicious, this technique could allow attackers to gain elevated privileges or persis…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network operrator may use this command.",
              "refs": "https://threadreaderapp.com/thread/1423361119926816776.html, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html",
              "mitre": [
                "T1059.001",
                "T1546.015"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Execute COM Object\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1546.015. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Execute COM Object\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network operrator may use this command.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
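              "_exampleSpl": "A minimal raw-event sketch of the underlying detection idea (script blocks instantiating a COM object by CLSID), not the packaged ESCU correlation search; the authoritative logic lives in Content Management. Sourcetype and field names are assumptions based on the Windows TA's XML rendering; adjust to your environment.\n\n```spl\nsourcetype=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104 ScriptBlockText=\"*GetTypeFromCLSID(*\"\n| stats count min(_time) as firstTime max(_time) as lastTime values(ScriptBlockText) as script_blocks by Computer, UserID\n| convert ctime(firstTime) ctime(lastTime)\n```",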
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Powershell Execute COM Object” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.273",
              "n": "Powershell Remote Thread To Known Windows Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious PowerShell processes attempting to inject code into critical Windows processes using CreateRemoteThread. It leverages Sysmon EventCode 8 to identify instances where PowerShell spawns threads in processes like svchost.exe, csrss.exe, and others. This activity is significant as it is commonly used by malware such as TrickBot and offensive tools like Cobalt Strike to execute malicious payloads, establish reverse shells, or download additional malware. If co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 8",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, Create Remote thread from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA. Tune and filter known instances of create remote thread may be used.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://thedfirreport.com/2021/01/11/trickbot-still-alive-and-well/",
              "mitre": [
                "T1055"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Remote Thread To Known Windows Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 8. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Remote Thread To Known Windows Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
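              "_exampleSpl": "A minimal raw-event sketch of the underlying detection idea (PowerShell opening a remote thread into core Windows processes via Sysmon EventCode 8), not the packaged ESCU correlation search; the authoritative logic lives in Content Management. The index, the sourcetype, and the target-process list are illustrative assumptions; adjust to your environment.\n\n```spl\nindex=windows sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=8 SourceImage=\"*\\\\powershell.exe\" (TargetImage=\"*\\\\svchost.exe\" OR TargetImage=\"*\\\\csrss.exe\" OR TargetImage=\"*\\\\explorer.exe\" OR TargetImage=\"*\\\\winlogon.exe\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host, SourceImage, TargetImage\n| convert ctime(firstTime) ctime(lastTime)\n```",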
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Powershell Remote Thread To Known Windows Process” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.274",
              "n": "Powershell Remove Windows Defender Directory",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious PowerShell command attempting to delete the Windows Defender directory. It leverages PowerShell Script Block Logging to identify commands containing \"rmdir\" and targeting the Windows Defender path. This activity is significant as it may indicate an attempt to disable or corrupt Windows Defender, a key security component. If confirmed malicious, this action could allow an attacker to bypass endpoint protection, facilitating further malicious activities …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this analytic, you will need to enable PowerShell Script Block Logging on some or all endpoints. Additional setup here https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Remove Windows Defender Directory\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Remove Windows Defender Directory\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
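              "_exampleSpl": "A minimal raw-event sketch of the underlying detection idea (script blocks combining `rmdir` with the Windows Defender path), not the packaged ESCU correlation search; the authoritative logic lives in Content Management. Sourcetype and field names are assumptions; adjust to your environment.\n\n```spl\nsourcetype=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104 ScriptBlockText=\"*rmdir*\" ScriptBlockText=\"*Windows Defender*\"\n| stats count min(_time) as firstTime max(_time) as lastTime values(ScriptBlockText) as script_blocks by Computer, UserID\n| convert ctime(firstTime) ctime(lastTime)\n```",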
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Powershell Remove Windows Defender Directory” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.275",
              "n": "PowerShell Script Block With URL Chain",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious PowerShell script execution via EventCode 4104 that contains multiple URLs within a function or array. It leverages PowerShell operational logs to detect script blocks with embedded URLs, often indicative of obfuscated scripts or those attempting to download secondary payloads. This activity is significant as it may signal an attempt to execute malicious code or download additional malware. If confirmed malicious, this could lead to code execution, fu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The following analytic requires PowerShell operational logs to be imported. Modify the powershell macro as needed to match the sourcetype or add index. This analytic is specific to 4104, or PowerShell Script Block Logging.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mandiant.com/resources/blog/tracking-evolution-gootloader-operations, https://thedfirreport.com/2022/05/09/seo-poisoning-a-gootloader-story/, https://attack.mitre.org/techniques/T1059/001/",
              "mitre": [
                "T1059.001",
                "T1105"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PowerShell Script Block With URL Chain\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PowerShell Script Block With URL Chain\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
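              "_exampleSpl": "A minimal raw-event sketch of the underlying detection idea (script blocks embedding several URLs), not the packaged ESCU correlation search; the authoritative logic lives in Content Management. The sourcetype, the field names, and the threshold of more than 3 URLs per script block are illustrative assumptions; tune them to your environment.\n\n```spl\nsourcetype=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104\n| rex field=ScriptBlockText max_match=0 \"(?<url>https?://)\"\n| where mvcount(url) > 3\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID\n| convert ctime(firstTime) ctime(lastTime)\n```",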
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “PowerShell Script Block With URL Chain” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.276",
              "n": "PowerShell Start-BitsTransfer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the PowerShell command `Start-BitsTransfer`, which can be used for file transfers, including potential data exfiltration. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events and command-line arguments. This activity is significant because `Start-BitsTransfer` can be abused by adversaries to upload sensitive files to remote locations, posing a risk of data loss. If confirmed malicious, this could …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives. It is possible administrators will utilize Start-BitsTransfer for administrative tasks, otherwise filter based parent process or command-line arguments.",
              "refs": "https://isc.sans.edu/diary/Investigating+Microsoft+BITS+Activity/23281, https://learn.microsoft.com/en-us/windows/win32/bits/using-windows-powershell-to-create-bits-transfer-jobs",
              "mitre": [
                "T1197"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PowerShell Start-BitsTransfer\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1197. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PowerShell Start-BitsTransfer\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives. It is possible administrators will utilize Start-BitsTransfer for administrative tasks, otherwise filter based parent process or command-line arguments.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
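              "_exampleSpl": "A minimal CIM-normalized sketch of the underlying detection idea (process events whose command line invokes Start-BitsTransfer), not the packaged ESCU correlation search; the authoritative logic lives in Content Management. Assumes the Endpoint data model is populated and that the ESCU macros `security_content_summariesonly` and `drop_dm_object_name` are installed.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process=\"*start-bitstransfer*\" by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n| convert ctime(firstTime) ctime(lastTime)\n```",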
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “PowerShell Start-BitsTransfer” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.277",
              "n": "PowerShell Start or Stop Service",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of PowerShell's Start-Service or Stop-Service cmdlets on an endpoint. It leverages PowerShell Script Block Logging to detect these commands. This activity is significant because attackers can manipulate services to disable or stop critical functions, causing system instability or disrupting business operations. If confirmed malicious, this behavior could allow attackers to disable security services, evade detection, or disrupt essential services, leading…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This behavior may be noisy, as these cmdlets are commonly used by system administrators or other legitimate users to manage services. Therefore, it is recommended not to enable this analytic as a direct finding Instead, it should be used as part of a broader set of security controls to detect and investigate potential threats.",
              "refs": "https://learn-powershell.net/2012/01/15/startingstopping-and-restarting-remote-services-with-powershell/, https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/start-service?view=powershell-7.3",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PowerShell Start or Stop Service\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PowerShell Start or Stop Service\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This behavior may be noisy, as these cmdlets are commonly used by system administrators or other legitimate users to manage services. Therefore, it is recommended not to enable this analytic as a direct finding Instead, it should be used as part of a broader set of security controls to detect and investigate potential threats.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
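              "_exampleSpl": "A minimal raw-event sketch of the underlying detection idea (script blocks invoking Start-Service or Stop-Service), not the packaged ESCU correlation search; the authoritative logic lives in Content Management. Sourcetype and field names are assumptions; given the known noisiness of these cmdlets, treat results as a contributing risk signal rather than a direct finding.\n\n```spl\nsourcetype=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104 (ScriptBlockText=\"*Start-Service*\" OR ScriptBlockText=\"*Stop-Service*\")\n| stats count min(_time) as firstTime max(_time) as lastTime values(ScriptBlockText) as commands by Computer, UserID\n| convert ctime(firstTime) ctime(lastTime)\n```",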
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “PowerShell Start or Stop Service” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.278",
              "n": "PowerShell WebRequest Using Memory Stream",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of .NET classes in PowerShell to download a URL payload directly into memory, a common fileless malware staging technique. It leverages PowerShell Script Block Logging (EventCode=4104) to identify suspicious PowerShell commands involving `system.net.webclient`, `system.net.webrequest`, and `IO.MemoryStream`. This activity is significant as it indicates potential fileless malware execution, which is harder to detect and can bypass traditional file-based defe…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mandiant.com/resources/blog/tracking-evolution-gootloader-operations, https://thedfirreport.com/2022/05/09/seo-poisoning-a-gootloader-story/, https://attack.mitre.org/techniques/T1059/001/",
              "mitre": [
                "T1059.001",
                "T1105",
                "T1027.011"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PowerShell WebRequest Using Memory Stream\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1105, T1027.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PowerShell WebRequest Using Memory Stream\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU saved searches typically follow an \"ESCU - <detection name> - Rule\" naming pattern; adjust the wildcard if your naming differs):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*PowerShell WebRequest Using Memory Stream*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
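\n\nOptional raw-event spot-check\n\nTo preview the behavior this detection targets, a raw hunt over Script Block Logging can surface the documented keywords directly. This is a hedged sketch rather than the ESCU logic itself; the source and field names assume Splunk Add-on for Microsoft Windows defaults, so adjust `index`, `source`, and fields to your ingestion.\n\n```spl\nindex=* source=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104\n  ScriptBlockText=\"*IO.MemoryStream*\"\n  (ScriptBlockText=\"*system.net.webclient*\" OR ScriptBlockText=\"*system.net.webrequest*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n```",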
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “PowerShell WebRequest Using Memory Stream” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.279",
              "n": "Prevent Automatic Repair Mode using Bcdedit",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of \"bcdedit.exe\" with parameters to set the boot status policy to ignore all failures. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because it can indicate an attempt by ransomware to prevent a compromised machine from booting into automatic repair mode, thereby hindering recovery efforts. If confirmed malicious, this action could allow attackers…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may modify the boot configuration ignore failure during testing and debugging.",
              "refs": "https://jsac.jpcert.or.jp/archive/2020/pdf/JSAC2020_1_tamada-yamazaki-nakatsuru_en.pdf",
              "mitre": [
                "T1490"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Prevent Automatic Repair Mode using Bcdedit\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Prevent Automatic Repair Mode using Bcdedit\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may modify the boot configuration ignore failure during testing and debugging.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU saved searches typically follow an \"ESCU - <detection name> - Rule\" naming pattern; adjust the wildcard if your naming differs):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Prevent Automatic Repair Mode using Bcdedit*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
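\n\nOptional raw-event spot-check\n\nTo preview matching endpoint telemetry outside the data model, the sketch below searches Sysmon process-creation events directly. It is an illustration, not the ESCU search: the source and field names assume Splunk Add-on for Microsoft Sysmon defaults, and the loose `Image` wildcard trades precision for simplicity, so tighten both for your environment.\n\n```spl\nindex=* source=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=1\n  Image=\"*bcdedit.exe\" CommandLine=\"*bootstatuspolicy*\" CommandLine=\"*ignoreallfailures*\"\n| stats count min(_time) as firstTime by host, User, ParentImage, CommandLine\n```",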
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Prevent Automatic Repair Mode using Bcdedit” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.280",
              "n": "Print Processor Registry Autostart",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications or new entries in the Print Processor registry path. It leverages registry activity data from the Endpoint data model to identify changes in the specified registry path. This activity is significant because the Print Processor registry is known to be exploited by APT groups like Turla for persistence and privilege escalation. If confirmed malicious, this could allow an attacker to execute a malicious DLL payload by restarting the spoolsv.ex…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records registry activity from your hosts to populate the endpoint data model in the registry node. This is typically populated via endpoint detection-and-response product, such as Carbon Black or endpoint data sources, such as Sysmon. The data used for this search is typically generated via logs that report reads and writes to…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "possible new printer installation may add driver component on this registry.",
              "refs": "https://attack.mitre.org/techniques/T1547/012/, https://www.welivesecurity.com/2020/05/21/no-game-over-winnti-group/",
              "mitre": [
                "T1547.012"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Print Processor Registry Autostart\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Print Processor Registry Autostart\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: possible new printer installation may add driver component on this registry.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU saved searches typically follow an \"ESCU - <detection name> - Rule\" naming pattern; adjust the wildcard if your naming differs):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Print Processor Registry Autostart*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
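\n\nOptional raw-event spot-check\n\nA direct look at Sysmon registry events (EventID 13) can preview what the detection consumes from the data model. This is a hedged sketch: the `TargetObject` filter is a loose substring match on the Print Processors key name, and the source and field names assume Splunk Add-on for Microsoft Sysmon defaults.\n\n```spl\nindex=* source=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=13\n  TargetObject=\"*Print Processors*\"\n| stats count min(_time) as firstTime by host, Image, TargetObject, Details\n```\n\nExpect benign hits during printer or driver installations; correlate with change records before escalating.",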
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.281",
              "n": "Process Deleting Its Process File Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a process attempting to delete its own file path, a behavior often associated with defense evasion techniques. This detection leverages Sysmon EventCode 1 logs, focusing on command lines executed via cmd.exe that include deletion commands. This activity is significant as it may indicate malware, such as Clop ransomware, trying to evade detection by removing its executable file if certain conditions are met. If confirmed malicious, this could allow the attacker t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mandiant.com/resources/fin11-email-campaigns-precursor-for-ransomware-data-theft, https://blog.virustotal.com/2020/11/keep-your-friends-close-keep-ransomware.html, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1070"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Process Deleting Its Process File Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Process Deleting Its Process File Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU saved searches typically follow an \"ESCU - <detection name> - Rule\" naming pattern; adjust the wildcard if your naming differs):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Process Deleting Its Process File Path*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
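\n\nOptional raw-event spot-check\n\nThe sketch below previews self-deletion-style command lines in raw Sysmon process-creation events. It is deliberately broad and illustrative only; it assumes Splunk Add-on for Microsoft Sysmon defaults, and the loose `del` match will also return routine administrative deletions, so treat the output as hunting input rather than alerts.\n\n```spl\nindex=* source=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=1\n  Image=\"*cmd.exe\" (CommandLine=\"*del *\" OR CommandLine=\"*erase *\")\n| stats count min(_time) as firstTime by host, ParentImage, CommandLine\n```",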
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Process Deleting Its Process File Path” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.282",
              "n": "Process Kill Base On File Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `wmic.exe` with the `delete` command to remove an executable path. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, parent processes, and command-line executions. This activity is significant because it often indicates the initial stages of an adversary setting up malicious activities, such as cryptocurrency mining, on an endpoint. If confirmed malicious, this behavior could allow an attacker to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Process Kill Base On File Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Process Kill Base On File Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU saved searches typically follow an \"ESCU - <detection name> - Rule\" naming pattern; adjust the wildcard if your naming differs):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Process Kill Base On File Path*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
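\n\nOptional raw-event spot-check\n\nTo preview wmic-based deletion activity in raw telemetry, the sketch below searches Sysmon process-creation events for `wmic.exe` invocations that combine `process` and `delete`. It is an illustration under assumed Splunk Add-on for Microsoft Sysmon defaults, not the ESCU detection logic; adjust index, source, and fields to your ingestion.\n\n```spl\nindex=* source=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=1\n  Image=\"*wmic.exe\" CommandLine=\"*process*\" CommandLine=\"*delete*\"\n| stats count min(_time) as firstTime by host, User, CommandLine\n```",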
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Process Kill Base On File Path” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.283",
              "n": "Process Writing DynamicWrapperX",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a process writing the dynwrapx.dll file to disk and registering it in the registry. It leverages data from the Endpoint datamodel, specifically monitoring process and filesystem events. This activity is significant because DynamicWrapperX is an ActiveX component often used in scripts to call Windows API functions, and its presence in non-standard locations is highly suspicious. If confirmed malicious, this could allow an attacker to execute arbitrary code, escalate…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Endpoint.Filesystem\n      WHERE Filesystem.file_name=\"dynwrapx.dll\"\n      BY Filesystem.action Filesystem.dest Filesystem.file_access_time\n         Filesystem.file_create_time Filesystem.file_hash Filesystem.file_modify_time\n         Filesystem.file_name Filesystem.file_path Filesystem.file_acl\n         Filesystem.file_size Filesystem.process_guid Filesystem.process_id\n         Filesystem.user Filesystem.vendor_product\n    | `drop_dm_object_name(Filesystem)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `process_writing_dynamicwrapperx_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited, however it is possible to filter by Processes.process_name and specific processes (ex. wscript.exe). Filter as needed. This may need modification based on EDR telemetry and how it brings in registry data. For example, removal of (Default).",
              "refs": "https://blog.f-secure.com/hunting-for-koadic-a-com-based-rootkit/, https://www.script-coding.com/dynwrapx_eng.html, https://bohops.com/2018/06/28/abusing-com-registry-structure-clsid-localserver32-inprocserver32/, https://tria.ge/210929-ap75vsddan, https://www.virustotal.com/gui/file/cb77b93150cb0f7fe65ce8a7e2a5781e727419451355a7736db84109fa215a89",
              "mitre": [
                "T1059",
                "T1559.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Process Writing DynamicWrapperX\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059, T1559.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Process Writing DynamicWrapperX\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited, however it is possible to filter by Processes.process_name and specific processes (ex. wscript.exe). Filter as needed. This may need modification based on EDR telemetry and how it brings in registry data. For example, removal of (Default).\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.284",
              "n": "Ransomware Notes bulk creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the bulk creation of ransomware notes (e.g., .txt, .html, .hta files) on an infected machine. It leverages Sysmon EventCode 11 to detect multiple instances of these file types being created within a short time frame. This activity is significant as it often indicates an active ransomware attack, where the attacker is notifying the victim of the encryption. If confirmed malicious, this behavior could lead to widespread data encryption, rendering critical files in…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mandiant.com/resources/fin11-email-campaigns-precursor-for-ransomware-data-theft, https://blog.virustotal.com/2020/11/keep-your-friends-close-keep-ransomware.html",
              "mitre": [
                "T1486"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ransomware Notes bulk creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ransomware Notes bulk creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (ESCU saved searches typically follow an \"ESCU - <detection name> - Rule\" naming pattern; adjust the wildcard if your naming differs):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Ransomware Notes bulk creation*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
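\n\nOptional raw-event spot-check\n\nThe bulk-creation pattern this detection looks for can be approximated directly against Sysmon EventID 11. The sketch below is illustrative only: the 5-minute bin and the >50 threshold are hedged starting points rather than ESCU defaults, and the source and field names assume Splunk Add-on for Microsoft Sysmon defaults.\n\n```spl\nindex=* source=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=11\n  (TargetFilename=\"*.txt\" OR TargetFilename=\"*.html\" OR TargetFilename=\"*.hta\")\n| bin _time span=5m\n| stats dc(TargetFilename) as note_count by _time, host, Image\n| where note_count > 50\n```\n\nTune the span and threshold against your baseline; software installers and document conversions can legitimately write many text files quickly.",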
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Ransomware Notes bulk creation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.285",
              "n": "Recon AVProduct Through Pwh or WMI",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious PowerShell script execution via EventCode 4104, specifically targeting checks for installed anti-virus products using WMI or PowerShell commands. This detection leverages PowerShell Script Block Logging to identify scripts containing keywords like \"SELECT,\" \"WMIC,\" \"AntiVirusProduct,\" or \"AntiSpywareProduct.\" This activity is significant as it is commonly used by malware and APT actors to map running security applications or services, potentially aiding …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network administrator may used this command for checking purposes",
              "refs": "https://news.sophos.com/en-us/2020/05/12/maze-ransomware-1-year-counting/, https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html",
              "mitre": [
                "T1592"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Recon AVProduct Through Pwh or WMI\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1592. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Recon AVProduct Through Pwh or WMI\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network administrator may used this command for checking purposes\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
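              "_sketchSpl": "Hypothetical sketch, not the shipped ESCU search (view that in Content Management). It shows the general shape of the keyword match over raw PowerShell Script Block Logging; rename the index and sourcetype to match your Microsoft-Windows-PowerShell/Operational feed (field names assume the Splunk Add-on for Microsoft Windows extractions), and treat the keyword list as an assumption to tune:\n\n```spl\nindex=win sourcetype=XmlWinEventLog EventCode=4104\n  (ScriptBlockText=\"*AntiVirusProduct*\" OR ScriptBlockText=\"*AntiSpywareProduct*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer ScriptBlockText\n```\n\nAV-product reconnaissance is usually an early-stage signal, so pivot to process activity on the flagged host rather than triaging the script block alone.",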
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Recon AVProduct Through Pwh or WMI” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.286",
              "n": "Recursive Delete of Directory In Batch CMD",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a batch command designed to recursively delete files or directories, a technique often used by ransomware like Reddot to delete files in the recycle bin and prevent recovery. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions that include specific flags for recursive and quiet deletions. This activity is significant as it indicates potential ransomware behavior aimed at data destruction. If conf…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network operator may use this batch command to delete recursively a directory or files within directory",
              "refs": "https://app.any.run/tasks/c0f98850-af65-4352-9746-fbebadee4f05/",
              "mitre": [
                "T1070.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Recursive Delete of Directory In Batch CMD\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Recursive Delete of Directory In Batch CMD\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network operator may use this batch command to delete recursively a directory or files within directory\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
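              "_sketchSpl": "Hypothetical sketch, not the shipped ESCU search (view that in Content Management). It shows the general shape of the command-line match over the CIM Endpoint.Processes data model; the flag patterns are assumptions, pairing the recursive /s switch with the quiet /q switch this use case describes:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"cmd.exe\" Processes.process=\"*del*\" Processes.process=\"* /s*\" Processes.process=\"* /q*\"\n  by Processes.dest Processes.user Processes.process\n```\n\nCheck hits against change windows and known maintenance scripts before escalating, per the tuning guidance above.",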
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Recursive Delete of Directory In Batch CMD” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.287",
              "n": "Reg exe Manipulating Windows Services Registry Keys",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of reg.exe to modify registry keys associated with Windows services and their configurations. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, parent processes, and command-line executions. This activity is significant because unauthorized changes to service registry keys can indicate an attempt to establish persistence or escalate privileges. If confirmed malicious, this could allow an attacker to control serv…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual for a service to be created or modified by directly manipulating the registry. However, there may be legitimate instances of this behavior. It is important to validate and investigate, as appropriate.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1574.011"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Reg exe Manipulating Windows Services Registry Keys\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Reg exe Manipulating Windows Services Registry Keys\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual for a service to be created or modified by directly manipulating the registry. However, there may be legitimate instances of this behavior. It is important to validate and investigate, as appropriate.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
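              "_sketchSpl": "Hypothetical sketch, not the shipped ESCU search (view that in Content Management). It shows the general shape of a reg.exe match over the CIM Endpoint.Processes data model; wildcards stand in for path separators to avoid escape handling, and the path pattern is an assumption to tune:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"reg.exe\" Processes.process=\"*CurrentControlSet*Services*\"\n  by Processes.dest Processes.parent_process_name Processes.process\n```\n\nThe parent process is the key triage field: a service change driven by an unexpected parent (an Office application or a script host) deserves far more scrutiny than one from deployment tooling.",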
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Reg exe Manipulating Windows Services Registry Keys” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.288",
              "n": "Registry Keys for Creating SHIM Databases",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects registry activity related to the creation of application compatibility shims. It leverages data from the Endpoint.Registry data model, specifically monitoring registry paths associated with AppCompatFlags. This activity is significant because attackers can use shims to bypass security controls, achieve persistence, or escalate privileges. If confirmed malicious, this could allow an attacker to maintain long-term access, execute arbitrary code, or manipulate applica…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There are many legitimate applications that leverage shim databases for compatibility purposes for legacy applications",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1546.011"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Registry Keys for Creating SHIM Databases\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Registry Keys for Creating SHIM Databases\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There are many legitimate applications that leverage shim databases for compatibility purposes for legacy applications\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
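              "_sketchSpl": "Hypothetical sketch, not the shipped ESCU search (view that in Content Management). It shows the general shape of the registry-path match over the CIM Endpoint.Registry data model, populated here by Sysmon EventID 13; wildcards stand in for path separators, and the AppCompatFlags pattern is an assumption to tune:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*CurrentVersion*AppCompatFlags*\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name\n```\n\nBecause many legacy applications register shims legitimately, baseline which processes write these keys and suppress known installers first.",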
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.289",
              "n": "Registry Keys Used For Persistence",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies modifications to registry keys commonly used for persistence mechanisms. It leverages data from endpoint detection sources like Sysmon or Carbon Black, focusing on specific registry paths known to initiate applications or services during system startup. This activity is significant as unauthorized changes to these keys can indicate attempts to maintain persistence or execute malicious actions upon system boot. If confirmed malicious, this could allow attackers t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records registry activity from your hosts to populate the endpoint data model in the registry node. This is typically populated via endpoint detection-and-response product, such as Carbon Black or endpoint data sources, such as Sysmon. The data used for this search is typically generated via logs that report reads and writes to…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There are many legitimate applications that must execute on system startup and will use these registry keys to accomplish that task.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1547.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Registry Keys Used For Persistence\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Registry Keys Used For Persistence\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There are many legitimate applications that must execute on system startup and will use these registry keys to accomplish that task.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
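              "_sketchSpl": "Hypothetical sketch, not the shipped ESCU search (view that in Content Management). It shows the general shape of a run-key match over the CIM Endpoint.Registry data model plus the lookup-based suppression the tuning section recommends; the lookup approved_startup_apps and its fields are placeholders for your own allowlist:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*CurrentVersion*Run*\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name\n| rename Registry.* as *\n| lookup approved_startup_apps registry_value_name OUTPUT approved\n| where isnull(approved)\n```\n\nStartup entries are overwhelmingly legitimate, so the allowlist does the heavy lifting; review it on a schedule so attacker entries cannot hide behind stale approvals.",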
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Registry Keys Used For Persistence” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.290",
              "n": "Registry Keys Used For Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to registry keys under \"Image File Execution Options\" that can be used for privilege escalation. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to registry paths and values like GlobalFlag and Debugger. This activity is significant because attackers can use these modifications to intercept executable calls and attach malicious binaries to legitimate system binaries. If confirmed malicious, this could allow att…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There are many legitimate applications that must execute upon system startup and will use these registry keys to accomplish that task.",
              "refs": "https://www.malwarebytes.com/blog/101/2015/12/an-introduction-to-image-file-execution-options/",
              "mitre": [
                "T1546.012"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Registry Keys Used For Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Registry Keys Used For Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There are many legitimate applications that must execute upon system startup and will use these registry keys to accomplish that task.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
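              "_sketchSpl": "Hypothetical sketch, not the shipped ESCU search (view that in Content Management). It shows the general shape of the Image File Execution Options match over the CIM Endpoint.Registry data model, keyed to the GlobalFlag and Debugger values this use case describes; the patterns are assumptions to tune:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*Image File Execution Options*\"\n    (Registry.registry_value_name=\"Debugger\" OR Registry.registry_value_name=\"GlobalFlag\")\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name\n```\n\nA Debugger value pointing at anything other than a known debugging tool is a strong escalation signal; pair the hit with process lineage on the host.",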
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.291",
              "n": "Regsvr32 Silent and Install Param Dll Loading",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the loading of a DLL using the regsvr32 application with the silent parameter and DLLInstall execution. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process command-line arguments and parent process details. This activity is significant as it is commonly used by RAT malware like Remcos and njRAT to load malicious DLLs on compromised machines. If confirmed malicious, this technique could allow attackers to execute arbitrary code, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Other third part application may used this parameter but not so common in base windows environment.",
              "refs": "https://app.any.run/tasks/dc93ee63-050c-4ff8-b07e-8277af9ab939/, https://attack.mitre.org/techniques/T1218/010/",
              "mitre": [
                "T1218.010"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Regsvr32 Silent and Install Param Dll Loading\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.010. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Regsvr32 Silent and Install Param Dll Loading\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Other third part application may used this parameter but not so common in base windows environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Regsvr32 Silent and Install Param Dll Loading” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.292",
              "n": "Regsvr32 with Known Silent Switch Cmdline",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of Regsvr32.exe with the silent switch to load DLLs. This behavior is identified using Endpoint Detection and Response (EDR) telemetry, focusing on command-line executions containing the `-s` or `/s` switches. This activity is significant as it is commonly used in malware campaigns, such as IcedID, to stealthily load malicious DLLs. If confirmed malicious, this could allow an attacker to execute arbitrary code, download additional payloads, and potent…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "minimal. but network operator can use this application to load dll.",
              "refs": "https://app.any.run/tasks/56680cba-2bbc-4b34-8633-5f7878ddf858/, https://regexr.com/699e2",
              "mitre": [
                "T1218.010"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Regsvr32 with Known Silent Switch Cmdline\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.010. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Regsvr32 with Known Silent Switch Cmdline\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: minimal. but network operator can use this application to load dll.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Regsvr32 with Known Silent Switch Cmdline” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.293",
              "n": "Remcos client registry install entry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the presence of a registry key associated with the Remcos RAT agent on a host. It leverages data from the Endpoint.Processes and Endpoint.Registry data models in Splunk, focusing on instances where the \"license\" key is found in the \"Software\\Remcos\" path. This behavior is significant as it indicates potential compromise by the Remcos RAT, a remote access Trojan used for unauthorized access and data exfiltration. If confirmed malicious, the attacker could gain contr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 12, Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/software/S0332/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remcos client registry install entry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 12, Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remcos client registry install entry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.294",
              "n": "Remcos RAT File Creation in Remcos Folder",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of files in the Remcos folder within the AppData directory, specifically targeting keylog and clipboard log files. It leverages the Endpoint.Filesystem data model to identify .dat files created in paths containing \"remcos.\" This activity is significant as it indicates the presence of the Remcos RAT, which performs keylogging, clipboard capturing, and audio recording. If confirmed malicious, this could lead to unauthorized data exfiltration and extensiv…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.malwarebytes.com/blog/threat-intelligence/2021/07/remcos-rat-delivered-via-visual-basic/",
              "mitre": [
                "T1113"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remcos RAT File Creation in Remcos Folder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1113. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remcos RAT File Creation in Remcos Folder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.295",
              "n": "Remote System Discovery with Dsquery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `dsquery.exe` with the `computer` argument, which is used to discover remote systems within a domain. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. Remote system discovery is significant as it indicates potential reconnaissance activities by adversaries or Red Teams to map out network resources and Active Directory structures. If confirmed malicious, this activ…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/, https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/cc732952(v=ws.11)",
              "mitre": [
                "T1018"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote System Discovery with Dsquery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote System Discovery with Dsquery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Remote System Discovery with Dsquery” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.296",
              "n": "Remote System Discovery with Wmic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `wmic.exe` with specific command-line arguments used to discover remote systems within a domain. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it indicates potential reconnaissance efforts by adversaries to map out network resources and Active Directory structures. If confirmed malicious, this behavior could allow attackers to gain situatio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/, https://learn.microsoft.com/en-us/windows/win32/wmisdk/wmic",
              "mitre": [
                "T1018"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote System Discovery with Wmic\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote System Discovery with Wmic\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Remote System Discovery with Wmic” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.297",
              "n": "Resize ShadowStorage volume",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the resizing of shadow storage volumes, a technique used by ransomware like CLOP to prevent the recreation of shadow volumes. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving \"vssadmin.exe\" with parameters related to resizing shadow storage. This activity is significant as it indicates an attempt to hinder recovery efforts by manipulating shadow copies. If confirmed malicious, this cou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network admin can resize the shadowstorage for valid purposes.",
              "refs": "https://www.mandiant.com/resources/fin11-email-campaigns-precursor-for-ransomware-data-theft, https://blog.virustotal.com/2020/11/keep-your-friends-close-keep-ransomware.html, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1490/T1490.md, https://redcanary.com/blog/blackbyte-ransomware/, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/vssadmin-resize-shadowstorage",
              "mitre": [
                "T1490"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Resize ShadowStorage volume\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Resize ShadowStorage volume\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network admin can resize the shadowstorage for valid purposes.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Resize ShadowStorage volume” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.298",
              "n": "Revil Common Exec Parameter",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of command-line parameters commonly associated with REVIL ransomware, such as \"-nolan\", \"-nolocal\", \"-fast\", and \"-full\". It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs mapped to the `Processes` node of the `Endpoint` data model. This activity is significant because these parameters are indicative of ransomware attempting to encrypt files on a compromised machine. If confirmed malicious, this co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "third party tool may have same command line parameters as revil ransomware.",
              "refs": "https://krebsonsecurity.com/2021/05/a-closer-look-at-the-darkside-ransomware-gang/, https://www.mcafee.com/blogs/other-blogs/mcafee-labs/mcafee-atr-analyzes-sodinokibi-aka-revil-ransomware-as-a-service-what-the-code-tells-us/",
              "mitre": [
                "T1204"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Revil Common Exec Parameter\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Revil Common Exec Parameter\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: third party tool may have same command line parameters as revil ransomware.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
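              "_spl_sketch": "Hypothetical illustration, added outside the catalog schema: a minimal sketch of the style of command-line check this detection performs. The two parameter strings below are placeholders, not the real indicator list, and a populated Endpoint data model is assumed; the packaged ESCU correlation search remains the authoritative logic.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process IN (\"*-smode*\", \"*-fullpc*\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| eval note=\"illustrative placeholder parameters; the ESCU correlation search is authoritative\"\n```\n\nIf this returns rows, pivot to the Risk Investigation drilldown above to see how those executions contribute to entity risk.",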
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.299",
              "n": "Rundll32 Create Remote Thread To A Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a remote thread by rundll32.exe into another process. It leverages Sysmon EventCode 8 logs, specifically monitoring SourceImage and TargetImage fields. This activity is significant as it is a common technique used by malware, such as IcedID, to execute malicious code within legitimate processes, aiding in defense evasion and data theft. If confirmed malicious, this behavior could allow an attacker to execute arbitrary code, escalate privileges, and …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 8",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the SourceImage, TargetImage, and EventCode executions from your endpoints related to create remote thread or injecting codes. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.joesandbox.com/analysis/380662/0/html",
              "mitre": [
                "T1055"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Rundll32 Create Remote Thread To A Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 8. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Rundll32 Create Remote Thread To A Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
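              "_spl_sketch": "Hypothetical illustration, added outside the catalog schema: a minimal raw Sysmon triage search for the behavior this detection covers (rundll32.exe creating a remote thread, EventCode 8). The index and sourcetype names are assumptions; rename them to match your deployment. The packaged ESCU correlation search remains the authoritative logic.\n\n```spl\nindex=windows sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=8\n  SourceImage=\"*\\\\rundll32.exe\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host, SourceImage, TargetImage\n| eval note=\"illustrative sketch only; see the ESCU detection for the authoritative search\"\n```\n\nTreat a non-empty result as a starting point for triage, and correlate it with the risk events shown by the drilldown above.",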
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Rundll32 Create Remote Thread To A Process” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.300",
              "n": "Rundll32 CreateRemoteThread In Browser",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious creation of a remote thread by rundll32.exe targeting browser processes such as firefox.exe, chrome.exe, iexplore.exe, and microsoftedgecp.exe. This detection leverages Sysmon EventCode 8, focusing on SourceImage and TargetImage fields to identify the behavior. This activity is significant as it is commonly associated with malware like IcedID, which hooks browsers to steal sensitive information such as banking details. If confirmed malicious, this co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 8",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the SourceImage, TargetImage, and EventCode executions from your endpoints related to create remote thread or injecting codes. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.joesandbox.com/analysis/380662/0/html",
              "mitre": [
                "T1055"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Rundll32 CreateRemoteThread In Browser\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 8. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Rundll32 CreateRemoteThread In Browser\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
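              "_spl_sketch": "Hypothetical illustration, added outside the catalog schema: a minimal raw Sysmon triage search for rundll32.exe injecting into browser processes (EventCode 8 with browser targets, per this use case's description). The index and sourcetype names are assumptions; rename them to match your deployment. The packaged ESCU correlation search remains the authoritative logic.\n\n```spl\nindex=windows sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=8\n  SourceImage=\"*\\\\rundll32.exe\"\n  TargetImage IN (\"*\\\\firefox.exe\", \"*\\\\chrome.exe\", \"*\\\\iexplore.exe\", \"*\\\\microsoftedgecp.exe\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host, SourceImage, TargetImage\n| eval note=\"illustrative sketch only; see the ESCU detection for the authoritative search\"\n```\n\nBrowser hooking of this kind is the pattern associated with credential-stealing malware such as IcedID, so validate any hits against the entity's risk timeline.",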
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Rundll32 CreateRemoteThread In Browser” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.301",
              "n": "Rundll32 LockWorkStation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the rundll32.exe command with the user32.dll,LockWorkStation parameter, which is used to lock the workstation via command line. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it is an uncommon method to lock a screen and has been observed in CONTI ransomware tooling for defense evasion. If confirmed malicious, this technique coul…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://threadreaderapp.com/thread/1423361119926816776.html",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Rundll32 LockWorkStation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Rundll32 LockWorkStation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
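              "_spl_sketch": "Hypothetical illustration, added outside the catalog schema: a minimal data-model sketch for the command line this use case describes (rundll32.exe invoking user32.dll,LockWorkStation). It assumes a populated Endpoint data model; the packaged ESCU correlation search remains the authoritative logic.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=rundll32.exe Processes.process=\"*user32.dll,LockWorkStation*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| eval note=\"illustrative sketch only; see the ESCU detection for the authoritative search\"\n```\n\nThe parent process is included in the group-by because a script or unfamiliar binary launching this command is far more suspicious than an interactive shell.",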
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Rundll32 LockWorkStation” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.302",
              "n": "Rundll32 Process Creating Exe Dll Files",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a rundll32 process creating executable (.exe) or dynamic link library (.dll) files. It leverages Sysmon EventCode 11 to identify instances where rundll32.exe generates these file types. This activity is significant because rundll32 is often exploited by malware, such as IcedID, to drop malicious payloads in directories like Temp, AppData, or ProgramData. If confirmed malicious, this behavior could allow an attacker to execute arbitrary code, establish persistence, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://any.run/malware-trends/icedid",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Rundll32 Process Creating Exe Dll Files\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Rundll32 Process Creating Exe Dll Files\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
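              "_spl_sketch": "Hypothetical illustration, added outside the catalog schema: a minimal raw Sysmon triage search for rundll32.exe writing .exe or .dll payloads (FileCreate, EventCode 11, per this use case's description). The index and sourcetype names are assumptions; rename them to match your deployment. The packaged ESCU correlation search remains the authoritative logic.\n\n```spl\nindex=windows sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=11\n  Image=\"*\\\\rundll32.exe\" (TargetFilename=\"*.exe\" OR TargetFilename=\"*.dll\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, TargetFilename\n| eval note=\"illustrative sketch only; see the ESCU detection for the authoritative search\"\n```\n\nPay particular attention to writes under Temp, AppData, or ProgramData, the drop locations this use case calls out.",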
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Rundll32 Process Creating Exe Dll Files” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.303",
              "n": "Rundll32 Shimcache Flush",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a suspicious rundll32 command line used to clear the shim cache. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. This activity is significant because clearing the shim cache is an anti-forensic technique aimed at evading detection and removing forensic artifacts. If confirmed malicious, this action could hinder incident response efforts, allowing an attacker to cove…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://blueteamops.medium.com/shimcache-flush-89daff28d15e",
              "mitre": [
                "T1112"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Rundll32 Shimcache Flush\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Rundll32 Shimcache Flush\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
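              "_spl_sketch": "Hypothetical illustration, added outside the catalog schema: a minimal data-model sketch for shim-cache-flush command lines. The two DLL export strings are taken from the public write-up in this use case's references but should be treated as an assumption, and a populated Endpoint data model is assumed; the packaged ESCU correlation search remains the authoritative logic.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=rundll32.exe\n    Processes.process IN (\"*apphelp.dll,ShimFlushCache*\", \"*kernel32.dll,BaseFlushAppcompatCache*\")\n  by Processes.dest Processes.user Processes.process\n| eval note=\"illustrative export names from public reporting; see the ESCU detection for the authoritative search\"\n```\n\nBecause flushing the shim cache is anti-forensic, any hit warrants an immediate look at what else the same user did on that host around the same time.",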
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Rundll32 Shimcache Flush” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.304",
              "n": "Rundll32 with no Command Line Arguments with Network",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of rundll32.exe without command line arguments, followed by a network connection. This behavior is identified using Endpoint Detection and Response (EDR) telemetry and network traffic data. It is significant because rundll32.exe typically requires arguments to function, and its absence is often associated with malicious activity, such as Cobalt Strike. If confirmed malicious, this activity could indicate an attempt to establish unauthorized network co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 3 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate applications may use a moved copy of rundll32, triggering a false positive.",
              "refs": "https://attack.mitre.org/techniques/T1218/011/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.011/T1218.011.md, https://lolbas-project.github.io/lolbas/Binaries/Rundll32/, https://bohops.com/2018/02/26/leveraging-inf-sct-fetch-execute-techniques-for-bypass-evasion-persistence/",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Rundll32 with no Command Line Arguments with Network\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Rundll32 with no Command Line Arguments with Network\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate applications may use a moved copy of rundll32, triggering a false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
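              "_spl_sketch": "Hypothetical illustration, added outside the catalog schema: a minimal raw-event correlation sketch pairing a bare rundll32 launch (Sysmon EventCode 1, no arguments) with a network connection from the same process GUID (EventCode 3). The index, sourcetype, and exact correlation approach are assumptions; the packaged ESCU correlation search remains the authoritative logic.\n\n```spl\nindex=windows sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode IN (1, 3)\n  Image=\"*\\\\rundll32.exe\"\n| eval bare_launch=if(EventCode=1 AND match(coalesce(CommandLine, \"\"), \"(?i)rundll32(\\.exe)?\\s*$\"), 1, 0)\n| stats max(bare_launch) as bare_launch count(eval(EventCode=3)) as net_events by host, ProcessGuid, Image\n| where bare_launch=1 AND net_events>0\n| eval note=\"illustrative correlation sketch only; see the ESCU detection for the authoritative search\"\n```\n\nGrouping on ProcessGuid ties the process-creation and network events to the same rundll32 instance rather than to any rundll32 on the host.",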
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Rundll32 with no Command Line Arguments with Network” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.305",
              "n": "RunDLL Loading DLL By Ordinal",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects rundll32.exe loading a DLL export function by ordinal value. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process command-line executions. This behavior is significant because adversaries may use rundll32.exe to execute malicious code while evading security tools that do not monitor this process. If confirmed malicious, this activity could allow attackers to execute arbitrary code, potentially leading to system compromise, privil…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible with native utilities and third party applications. Filtering may be needed based on command-line, or add world writeable paths to restrict query.",
              "refs": "https://thedfirreport.com/2022/02/07/qbot-likes-to-move-it-move-it/, https://twitter.com/M_haggis/status/1491109262428635136, https://twitter.com/pr0xylife/status/1590394227758104576",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"RunDLL Loading DLL By Ordinal\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"RunDLL Loading DLL By Ordinal\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible with native utilities and third party applications. Filtering may be needed based on command-line, or add world writeable paths to restrict query.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “RunDLL Loading DLL By Ordinal” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.306",
              "n": "Ryuk Test Files Detected",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the presence of files containing the keyword \"Ryuk\" in any folder on the C drive, indicative of Ryuk ransomware activity. It leverages the Endpoint Filesystem data model to detect file paths matching this pattern. This activity is significant as Ryuk ransomware is known for its destructive impact, encrypting critical files and demanding ransom. If confirmed malicious, this could lead to significant data loss, operational disruption, and financial damage due to r…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must be ingesting data that records the filesystem activity from your hosts to populate the Endpoint Filesystem data-model object. If you are using Sysmon, you will need a Splunk Universal Forwarder on each endpoint from which you want to collect data.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "If there are files with this keywoord as file names it might trigger false possitives, please make use of our filters to tune out potential FPs.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ryuk Test Files Detected\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ryuk Test Files Detected\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: If there are files with this keywoord as file names it might trigger false possitives, please make use of our filters to tune out potential FPs.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.307",
              "n": "Ryuk Wake on LAN Command",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Wake-on-LAN commands associated with Ryuk ransomware. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific process and command-line activities. This behavior is significant as Ryuk ransomware uses Wake-on-LAN to power on devices in a compromised network, increasing its encryption success rate. If confirmed malicious, this activity could lead to widespread ransomware encryption across multiple endpoints, causing signif…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited to no known false positives.",
              "refs": "https://www.bleepingcomputer.com/news/security/ryuk-ransomware-uses-wake-on-lan-to-encrypt-offline-devices/, https://www.bleepingcomputer.com/news/security/ryuk-ransomware-now-self-spreads-to-other-windows-lan-devices/, https://www.cert.ssi.gouv.fr/uploads/CERTFR-2021-CTI-006.pdf",
              "mitre": [
                "T1059.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ryuk Wake on LAN Command\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ryuk Wake on LAN Command\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited to no known false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Ryuk Wake on LAN Command” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.308",
              "n": "Samsam Test File Write",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a file named \"test.txt\" within the Windows system directory, indicative of Samsam ransomware propagation. It leverages file-system activity data from the Endpoint data model, specifically monitoring file paths within the Windows System32 directory. This activity is significant as it aligns with known Samsam ransomware behavior, which uses such files for propagation and execution. If confirmed malicious, this could lead to ransomware deployment, resu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must be ingesting data that records the file-system activity from your hosts to populate the Endpoint file-system data-model node. If you are using Sysmon, you will need a Splunk Universal Forwarder on each endpoint from which you want to collect data.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Samsam Test File Write\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Samsam Test File Write\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.309",
              "n": "Sc exe Manipulating Windows Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation or modification of Windows services using the sc.exe command. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because manipulating Windows services can be a method for attackers to establish persistence, escalate privileges, or execute arbitrary code. If confirmed malicious, this behavior could allow an attacker to maintain long-term access, disrupt …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Using sc.exe to manipulate Windows services is uncommon. However, there may be legitimate instances of this behavior. It is important to validate and investigate as appropriate.",
              "refs": "https://www.secureworks.com/blog/drokbk-malware-uses-github-as-dead-drop-resolver",
              "mitre": [
                "T1543.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Sc exe Manipulating Windows Services\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Sc exe Manipulating Windows Services\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Using sc.exe to manipulate Windows services is uncommon. However, there may be legitimate instances of this behavior. It is important to validate and investigate as appropriate.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Sc exe Manipulating Windows Services” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.310",
              "n": "SchCache Change By App Connect And Create ADSI Object",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an application attempting to connect and create an ADSI object to perform an LDAP query. It leverages Sysmon EventCode 11 to identify changes in the Active Directory Schema cache files located in %LOCALAPPDATA%\\Microsoft\\Windows\\SchCache or %systemroot%\\SchCache. This activity is significant as it can indicate the presence of suspicious applications, such as ransomware, using ADSI object APIs for LDAP queries. If confirmed malicious, this behavior could allow attac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "normal application like mmc.exe and other ldap query tool may trigger this detections.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/adsi/adsi-and-uac, https://news.sophos.com/en-us/2021/08/09/blackmatter-ransomware-emerges-from-the-shadow-of-darkside/",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SchCache Change By App Connect And Create ADSI Object\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SchCache Change By App Connect And Create ADSI Object\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: normal application like mmc.exe and other ldap query tool may trigger this detections.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “SchCache Change By App Connect And Create ADSI Object” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.311",
              "n": "Schedule Task with HTTP Command Arguments",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of scheduled tasks on Windows systems that include HTTP command arguments, using Windows Security EventCode 4698. It identifies tasks registered via schtasks.exe or TaskService with HTTP in their command arguments. This behavior is significant as it often indicates malware activity or the use of Living off the Land binaries (lolbins) to download additional payloads. If confirmed malicious, this activity could lead to data exfiltration, malware propagat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4698",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4698 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://app.any.run/tasks/92d7ef61-bfd7-4c92-bc15-322172b4ebec/",
              "mitre": [
                "T1053"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Schedule Task with HTTP Command Arguments\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4698. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Schedule Task with HTTP Command Arguments\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Schedule Task with HTTP Command Arguments” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.312",
              "n": "Schedule Task with Rundll32 Command Trigger",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of scheduled tasks in Windows that use the rundll32 command. It leverages Windows Security EventCode 4698, which logs the creation of scheduled tasks, and filters for tasks executed via rundll32. This activity is significant as it is a common technique used by malware, such as TrickBot, to persist in an environment or deliver additional payloads. If confirmed malicious, this could lead to data theft, ransomware deployment, or other damaging outcomes. I…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4698",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4698 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://labs.vipre.com/trickbot-and-its-modules/",
              "mitre": [
                "T1053"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Schedule Task with Rundll32 Command Trigger\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4698. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Schedule Task with Rundll32 Command Trigger\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Schedule Task with Rundll32 Command Trigger” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.313",
              "n": "Scheduled Task Deleted Or Created via CMD",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation or deletion of scheduled tasks using the schtasks.exe utility with the -create or -delete flags. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it can indicate unauthorized system manipulation or malicious intent, often associated with threat actors like Dragonfly and incidents such as the SUNBURST attack. If confirmed malicious, this activit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While it is possible for legitimate scripts or administrators to trigger this behavior, filtering can be applied based on the parent process and application to reduce false positives. Analysts should reference the provided references to understand the context and threat landscape associated with this activity.",
              "refs": "https://thedfirreport.com/2022/02/21/qbot-and-zerologon-lead-to-full-domain-compromise/, https://www.joesandbox.com/analysis/691823/0/html",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Scheduled Task Deleted Or Created via CMD\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Scheduled Task Deleted Or Created via CMD\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While it is possible for legitimate scripts or administrators to trigger this behavior, filtering can be applied based on the parent process and application to reduce false positives. Analysts should reference the provided references to understand the context and threat landscape associated with this activity.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Scheduled Task Deleted Or Created via CMD” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.314",
              "n": "Schtasks used for forcing a reboot",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of 'schtasks.exe' to schedule forced system reboots using the 'shutdown' and '/create' flags. It leverages endpoint process data to identify instances where these specific command-line arguments are used. This activity is significant because it may indicate an adversary attempting to disrupt operations or force a reboot to execute further malicious actions. If confirmed malicious, this could lead to system downtime, potential data loss, and provide an attac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This analytic may also capture legitimate administrative activities such as system updates or maintenance tasks, which can be classified as false positives. Filter as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Schtasks used for forcing a reboot\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Schtasks used for forcing a reboot\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This analytic may also capture legitimate administrative activities such as system updates or maintenance tasks, which can be classified as false positives. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Schtasks used for forcing a reboot” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.315",
              "n": "Screensaver Event Trigger Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the SCRNSAVE.EXE registry entry, indicating potential event trigger execution via screensaver settings for persistence or privilege escalation. It leverages registry activity data from the Endpoint data model to identify changes to the specified registry path. This activity is significant as it is a known technique used by APT groups and malware to maintain persistence or escalate privileges. If confirmed malicious, this could allow an attacker to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records registry activity from your hosts to populate the endpoint data model in the registry node. This is typically populated via endpoint detection-and-response product, such as Carbon Black or endpoint data sources, such as Sysmon. The data used for this search is typically generated via logs that report reads and writes to…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1546/002/, https://dmcxblue.gitbook.io/red-team-notes-2-0/red-team-techniques/privilege-escalation/untitled-3/screensaver",
              "mitre": [
                "T1546.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Screensaver Event Trigger Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Screensaver Event Trigger Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.316",
              "n": "Script Execution via WMI",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of scripts via Windows Management Instrumentation (WMI) by monitoring the process 'scrcons.exe'. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events. WMI-based script execution is significant because adversaries often use it to perform malicious activities stealthily, such as system compromise, data exfiltration, or establishing persistence. If confirmed malicious, this activity could al…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, administrators may use wmi to launch scripts for legitimate purposes. Filter as needed.",
              "refs": "https://redcanary.com/blog/child-processes/",
              "mitre": [
                "T1047"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Script Execution via WMI\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Script Execution via WMI\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, administrators may use wmi to launch scripts for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Script Execution via WMI” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.317",
              "n": "Sdclt UAC Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications to the sdclt.exe registry, a technique often used to bypass User Account Control (UAC). It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific registry paths and values associated with sdclt.exe. This activity is significant because UAC bypasses can allow attackers to execute payloads with elevated privileges without user consent. If confirmed malicious, this could lead to unauthorized code execution, priv…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 12, Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited to no false positives are expected.",
              "refs": "https://enigma0x3.net/2017/03/17/fileless-uac-bypass-using-sdclt-exe/, https://github.com/hfiref0x/UACME",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Sdclt UAC Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 12, Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Sdclt UAC Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited to no false positives are expected.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Sdclt UAC Bypass” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.318",
              "n": "Sdelete Application Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the sdelete.exe application, a Sysinternals tool often used by adversaries to securely delete files and remove forensic evidence from a targeted host. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. Monitoring this activity is crucial as sdelete.exe is not commonly used in regular operations and its presence may indicate an attempt to cover malicious activities. If confirmed malic…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "user may execute and use this application",
              "refs": "https://app.any.run/tasks/956f50be-2c13-465a-ac00-6224c14c5f89/",
              "mitre": [
                "T1070.004",
                "T1485"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Sdelete Application Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004, T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Sdelete Application Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: user may execute and use this application\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Sdelete Application Execution” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.319",
              "n": "SearchProtocolHost with no Command Line with Network",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances of searchprotocolhost.exe running without command line arguments but with an active network connection. This behavior is identified using Endpoint Detection and Response (EDR) telemetry, focusing on process execution and network traffic data. It is significant because searchprotocolhost.exe typically runs with specific command line arguments, and deviations from this norm can indicate malicious activity, such as Cobalt Strike usage. If confirmed malicious…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 3 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives may be present in small environments. Tuning may be required based on parent process.",
              "refs": "https://github.com/mandiant/red_team_tool_countermeasures/blob/master/rules/PGF/supplemental/hxioc/SUSPICIOUS%20EXECUTION%20OF%20SEARCHPROTOCOLHOST%20(METHODOLOGY).ioc",
              "mitre": [
                "T1055"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SearchProtocolHost with no Command Line with Network\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SearchProtocolHost with no Command Line with Network\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives may be present in small environments. Tuning may be required based on parent process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “SearchProtocolHost with no Command Line with Network” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.320",
              "n": "SecretDumps Offline NTDS Dumping Tool",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the potential use of the secretsdump.py tool to dump NTLM hashes from a copy of ntds.dit and the SAM, SYSTEM, and SECURITY registry hives. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific command-line patterns and process names associated with secretsdump.py. This activity is significant because it indicates an attempt to extract sensitive credential information offline, which is a common post-exploitation technique. If conf…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/SecureAuthCorp/impacket/blob/master/examples/secretsdump.py",
              "mitre": [
                "T1003.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SecretDumps Offline NTDS Dumping Tool\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SecretDumps Offline NTDS Dumping Tool\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “SecretDumps Offline NTDS Dumping Tool” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.321",
              "n": "ServicePrincipalNames Discovery with SetSPN",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `setspn.exe` to query the domain for Service Principal Names (SPNs). This detection leverages Endpoint Detection and Response (EDR) data, focusing on specific command-line arguments associated with `setspn.exe`. Monitoring this activity is crucial as it often precedes Kerberoasting or Silver Ticket attacks, which can lead to credential theft. If confirmed malicious, an attacker could use the gathered SPNs to escalate privileges or persist within the envi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be caused by Administrators resetting SPNs or querying for SPNs. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/ad/service-principal-names, https://www.ired.team/offensive-security-experiments/active-directory-kerberos-abuse/t1208-kerberoasting, https://strontic.github.io/xcyclopedia/library/setspn.exe-5C184D581524245DAD7A0A02B51FD2C2.html, https://attack.mitre.org/techniques/T1558/003/, https://social.technet.microsoft.com/wiki/contents/articles/717.service-principal-names-spn-setspn-syntax.aspx, https://web.archive.org/web/20220212163642/https://www.harmj0y.net/blog/powershell/kerberoasting-without-mimikatz/, https://blog.zsec.uk/paving-2-da-wholeset/, https://msitpros.com/?p=3113, https://adsecurity.org/?p=3466",
              "mitre": [
                "T1558.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ServicePrincipalNames Discovery with SetSPN\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ServicePrincipalNames Discovery with SetSPN\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be caused by Administrators resetting SPNs or querying for SPNs. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “ServicePrincipalNames Discovery with SetSPN” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.322",
              "n": "Services Escalate Exe",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of a randomly named binary via `services.exe`, indicative of privilege escalation using Cobalt Strike's `svc-exe`. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process lineage and command-line executions. This activity is significant as it often follows initial access, allowing adversaries to escalate privileges and establish persistence. If confirmed malicious, this behavior could enable attackers to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as `services.exe` should never spawn a process from `ADMIN$`. Filter as needed.",
              "refs": "https://thedfirreport.com/2021/03/29/sodinokibi-aka-revil-ransomware/, https://attack.mitre.org/techniques/T1548/, https://hstechdocs.helpsystems.com/manuals/cobaltstrike/current/userguide/index.htm#cshid=1085",
              "mitre": [
                "T1548"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Services Escalate Exe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Services Escalate Exe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as `services.exe` should never spawn a process from `ADMIN$`. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Services Escalate Exe” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.323",
              "n": "Shim Database Installation With Suspicious Parameters",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of sdbinst.exe with parameters indicative of silently creating a shim database. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, parent processes, and command-line arguments. This activity is significant because shim databases can be used to intercept and manipulate API calls, potentially allowing attackers to bypass security controls or achieve persistence. If confirmed malicious, this could enable unaut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1546.011"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Shim Database Installation With Suspicious Parameters\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Shim Database Installation With Suspicious Parameters\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Shim Database Installation With Suspicious Parameters” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.324",
              "n": "SilentCleanup UAC Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications to the registry that may indicate a UAC (User Account Control) bypass attempt via the SilentCleanup task. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on registry changes in the path \"*\\\\Environment\\\\windir\" with executable values. This activity is significant as it can allow an attacker to gain high-privilege execution without user consent, bypassing UAC protections. If confirmed malicious, this could lead …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/hfiref0x/UACME, https://www.intezer.com/blog/malware-analysis/klingon-rat-holding-on-for-dear-life/",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SilentCleanup UAC Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SilentCleanup UAC Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “SilentCleanup UAC Bypass” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.325",
              "n": "Single Letter Process On Endpoint",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects processes with names consisting of a single letter, which is often indicative of malware or an attacker attempting to evade detection. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant because attackers use such techniques to obscure their presence and carry out malicious activities like data theft or ransomware attacks. If confirmed malicious, this be…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Single-letter executables are not always malicious. Investigate this activity with your normal incident-response process.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1204.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Single Letter Process On Endpoint\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Single Letter Process On Endpoint\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Single-letter executables are not always malicious. Investigate this activity with your normal incident-response process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Single Letter Process On Endpoint” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.326",
              "n": "SLUI RunAs Elevated",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Microsoft Software Licensing User Interface Tool (`slui.exe`) with elevated privileges using the `-verb runas` function. This activity is identified through logs from Endpoint Detection and Response (EDR) agents, focusing on specific registry keys and command-line parameters. This behavior is significant as it indicates a potential privilege escalation attempt, which could allow an attacker to gain elevated access and execute malicious actions …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives should be present as this is not commonly used by legitimate applications.",
              "refs": "https://www.exploit-db.com/exploits/46998, https://mattharr0ey.medium.com/privilege-escalation-uac-bypass-in-changepk-c40b92818d1b, https://gist.github.com/r00t-3xp10it/0c92cd554d3156fd74f6c25660ccc466, https://www.rapid7.com/db/modules/exploit/windows/local/bypassuac_sluihijack/, https://www.mandiant.com/resources/shining-a-light-on-darkside-ransomware-operations",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SLUI RunAs Elevated\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SLUI RunAs Elevated\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives should be present as this is not commonly used by legitimate applications.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “SLUI RunAs Elevated” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.327",
              "n": "SLUI Spawning a Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the Microsoft Software Licensing User Interface Tool (`slui.exe`) spawning a child process. This behavior is identified using Endpoint Detection and Response (EDR) telemetry, focusing on process creation events where `slui.exe` is the parent process. This activity is significant because `slui.exe` should not typically spawn child processes, and doing so may indicate a UAC bypass attempt, leading to elevated privileges. If confirmed malicious, an attacker could leve…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Certain applications may spawn from `slui.exe` that are legitimate. Filtering will be needed to ensure proper monitoring.",
              "refs": "https://www.exploit-db.com/exploits/46998, https://www.rapid7.com/db/modules/exploit/windows/local/bypassuac_sluihijack/, https://www.mandiant.com/resources/shining-a-light-on-darkside-ransomware-operations",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SLUI Spawning a Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SLUI Spawning a Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Certain applications may spawn from `slui.exe` that are legitimate. Filtering will be needed to ensure proper monitoring.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “SLUI Spawning a Process” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.328",
              "n": "Spike in File Writes",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a sharp increase in the number of files written to a specific host. It leverages the Endpoint.Filesystem data model, focusing on 'created' actions and comparing current file write counts against historical averages and standard deviations. This activity is significant as a sudden spike in file writes can indicate malicious activities such as ransomware encryption or data exfiltration. If confirmed malicious, this behavior could lead to significant data loss, system…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Endpoint.Filesystem\n      WHERE Filesystem.action=created\n      BY _time span=1h, Filesystem.dest\n    | `drop_dm_object_name(Filesystem)`\n    | eventstats max(_time) as maxtime\n    | stats count as num_data_samples max(eval(if(_time >= relative_time(maxtime, \"-1d@d\"), count, null))) as \"count\" avg(eval(if(_time<relative_time(maxtime, \"-1d@d\"), count,null))) as avg stdev(eval(if(_time<relative_time(maxtime, \"-1d@d\"), count, null))) as stdev\n      BY \"dest\"\n    | eval upperBound=(avg+stdev*4), isOutlier=if((count > upperBound) AND num_data_samples >=20, 1, 0)\n    | search isOutlier=1\n    | `spike_in_file_writes_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is important to understand that if you happen to install any new applications on your hosts or are copying a large number of files, you can expect to see a large increase of file modifications.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "system",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Spike in File Writes\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Spike in File Writes\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is important to understand that if you happen to install any new applications on your hosts or are copying a large number of files, you can expect to see a large increase of file modifications.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.329",
              "n": "Sqlite Module In Temp Folder",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of sqlite3.dll files in the %temp% folder. It leverages Sysmon EventCode 11 to identify when these files are written to the temporary directory. This activity is significant because it is associated with IcedID malware, which uses the sqlite3 module to parse browser databases and steal sensitive information such as banking details, credit card information, and credentials. If confirmed malicious, this behavior could lead to significant data theft and c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.cisecurity.org/insights/white-papers/security-primer-icedid",
              "mitre": [
                "T1005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Sqlite Module In Temp Folder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Sqlite Module In Temp Folder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Sqlite Module In Temp Folder” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.330",
              "n": "Sunburst Correlation DLL and Network Event",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the loading of the malicious SolarWinds.Orion.Core.BusinessLayer.dll by SolarWinds.BusinessLayerHost.exe and subsequent DNS queries to avsvmcloud.com. It uses Sysmon EventID 7 for DLL loading and Event ID 22 for DNS queries, correlating these events within a 12-14 day period. This activity is significant as it indicates potential Sunburst malware infection, a known supply chain attack. If confirmed malicious, this could lead to unauthorized network access, data …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7, Sysmon EventID 22",
              "q": "(`sysmon` EventCode=7 ImageLoaded=*SolarWinds.Orion.Core.BusinessLayer.dll) OR (`sysmon` EventCode=22 QueryName=*avsvmcloud.com)\n      | eventstats dc(EventCode) AS dc_events\n      | where dc_events=2\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY Image ImageLoaded dest\n           loaded_file loaded_file_path original_file_name\n           process_exec process_guid process_hash\n           process_id process_name process_path\n           service_dll_signature_exists service_dll_signature_verified signature\n           signature_id user_id vendor_product\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `sunburst_correlation_dll_and_network_event_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 7, Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mandiant.com/resources/evasive-attacker-leverages-solarwinds-supply-chain-compromises-with-sunburst-backdoor",
              "mitre": [
                "T1203"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Sunburst Correlation DLL and Network Event\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7, Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1203. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Sunburst Correlation DLL and Network Event\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Sunburst Correlation DLL and Network Event” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.331",
              "n": "Suspicious Curl Network Connection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the curl command contacting suspicious remote domains, such as s3.amazonaws.com, which is indicative of Command and Control (C2) activity or downloading further implants. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. This activity is significant as it may indicate the presence of MacOS adware or other malicious software attempting to establish persistence or…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Sysmon for Linux EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time)\n    as lastTime from datamodel=Endpoint.Processes where Processes.process_name IN (\"curl\", \"curl.exe\")\n    Processes.process IN (\"*s3.amazonaws.com*\")\n    by Processes.action Processes.dest Processes.original_file_name\n       Processes.parent_process Processes.parent_process_exec\n       Processes.parent_process_guid Processes.parent_process_id\n       Processes.parent_process_name Processes.parent_process_path\n       Processes.process Processes.process_exec Processes.process_guid\n       Processes.process_hash Processes.process_id\n       Processes.process_integrity_level Processes.process_name\n       Processes.process_path Processes.user Processes.user_id\n       Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `suspicious_curl_network_connection_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Sysmon for Linux EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://redcanary.com/blog/clipping-silver-sparrows-wings/",
              "mitre": [
                "T1105"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Curl Network Connection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Sysmon for Linux EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Curl Network Connection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Suspicious Curl Network Connection” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.332",
              "n": "Suspicious DLLHost no Command Line Arguments",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances of DLLHost.exe executing without command line arguments. This behavior is unusual and often associated with malicious activities, such as those performed by Cobalt Strike. The detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. This activity is significant because DLLHost.exe typically requires arguments to function correctly, and its absence may indicate an attempt to evade detection. If confirm…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives may be present in small environments. Tuning may be required based on parent process.",
              "refs": "https://raw.githubusercontent.com/threatexpress/malleable-c2/c3385e481159a759f79b8acfe11acf240893b830/jquery-c2.4.2.profile, https://www.cobaltstrike.com/blog/learn-pipe-fitting-for-all-of-your-offense-projects/",
              "mitre": [
                "T1055"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious DLLHost no Command Line Arguments\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious DLLHost no Command Line Arguments\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives may be present in small environments. Tuning may be required based on parent process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Suspicious DLLHost no Command Line Arguments” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.333",
              "n": "Suspicious GPUpdate no Command Line Arguments",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of gpupdate.exe without any command line arguments. This behavior is identified using data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. It is significant because gpupdate.exe typically runs with specific arguments, and its execution without them is often associated with malicious activities, such as those performed by Cobalt Strike. If confirmed malicious, this activity could indicate an attempt to execute una…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives may be present in small environments. Tuning may be required based on parent process.",
              "refs": "https://raw.githubusercontent.com/xx0hcd/Malleable-C2-Profiles/0ef8cf4556e26f6d4190c56ba697c2159faa5822/crimeware/trick_ryuk.profile, https://www.cobaltstrike.com/blog/learn-pipe-fitting-for-all-of-your-offense-projects/",
              "mitre": [
                "T1055"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious GPUpdate no Command Line Arguments\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious GPUpdate no Command Line Arguments\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives may be present in small environments. Tuning may be required based on parent process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
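              "_spl_example": "Illustrative companion search, not the packaged ESCU detection logic: a minimal sketch of the underlying idea, assuming the Endpoint data model is populated and accelerated. The no-argument heuristic (`NOT like(process, \"% %\")` keeps only command lines with no space, i.e. nothing after the image path) is a simplification; adapt filters to your environment.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=gpupdate.exe\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| rename \"Processes.*\" as \"*\"\n| where NOT like(process, \"% %\")\n```",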
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Suspicious GPUpdate no Command Line Arguments” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.334",
              "n": "Suspicious IcedID Rundll32 Cmdline",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious `rundll32.exe` command line used to execute a DLL file, a technique associated with IcedID malware. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions containing the pattern `*/i:*`. This activity is significant as it indicates potential malware attempting to load an encrypted DLL payload, often named `license.dat`. If confirmed malicious, this could allow attackers to execute arbitrary code, leadin…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "limitted. this parameter is not commonly used by windows application but can be used by the network operator.",
              "refs": "https://threatpost.com/icedid-banking-trojan-surges-emotet/165314/",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious IcedID Rundll32 Cmdline\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious IcedID Rundll32 Cmdline\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: limitted. this parameter is not commonly used by windows application but can be used by the network operator.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
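              "_spl_example": "Illustrative companion search, not the packaged ESCU detection logic: a minimal accelerated sketch of the documented pattern (rundll32.exe with `*/i:*` in the command line), assuming the Endpoint data model is populated; adjust filters to your environment.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=rundll32.exe Processes.process=\"*/i:*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| rename \"Processes.*\" as \"*\"\n```",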
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Suspicious IcedID Rundll32 Cmdline” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.335",
              "n": "Suspicious Image Creation In Appdata Folder",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of image files in the AppData folder by processes that also have a file reference in the same folder. It leverages data from the Endpoint.Processes and Endpoint.Filesystem datamodels to identify this behavior. This activity is significant because it is commonly associated with malware, such as the Remcos RAT, which captures screenshots and stores them in the AppData folder before exfiltrating them to a command-and-control server. If confirmed malicious…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.malwarebytes.com/blog/threat-intelligence/2021/07/remcos-rat-delivered-via-visual-basic/",
              "mitre": [
                "T1113"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Image Creation In Appdata Folder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1113. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Image Creation In Appdata Folder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
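              "_spl_example": "Illustrative companion search, not the packaged ESCU detection logic: a raw-event sketch against Sysmon file-creation events (EventID 11). The index name, the source path, and the loose `*AppData*` match are assumptions for brevity; point it at your Sysmon index and tighten the path and extensions as needed.\n\n```spl\nindex=windows source=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=11\n  TargetFilename=\"*AppData*\" (TargetFilename=\"*.png\" OR TargetFilename=\"*.jpg\" OR TargetFilename=\"*.bmp\")\n| stats count values(TargetFilename) as image_files by host, Image\n```",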
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.336",
              "n": "Suspicious Linux Discovery Commands",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of suspicious bash commands commonly used in scripts like AutoSUID, LinEnum, and LinPeas for system discovery on a Linux host. It leverages Endpoint Detection and Response (EDR) data, specifically looking for a high number of distinct commands executed within a short time frame. This activity is significant as it often precedes privilege escalation or other malicious actions. If confirmed malicious, an attacker could gain detailed system information, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Unless an administrator is using these commands to troubleshoot or audit a system, the execution of these commands should be monitored.",
              "refs": "https://attack.mitre.org/matrices/enterprise/linux/, https://attack.mitre.org/techniques/T1059/004/, https://github.com/IvanGlinkin/AutoSUID, https://github.com/carlospolop/PEASS-ng/tree/master/linPEAS, https://github.com/rebootuser/LinEnum",
              "mitre": [
                "T1059.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Linux Discovery Commands\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Linux Discovery Commands\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Unless an administrator is using these commands to troubleshoot or audit a system, the execution of these commands should be monitored.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
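              "_spl_example": "Illustrative companion search, not the packaged ESCU detection logic: a minimal sketch of the burst-of-discovery-commands idea, assuming the Endpoint data model is populated from Sysmon for Linux. The command list and the `distinct_commands >= 3` threshold are example values to tune locally.\n\n```spl\n| tstats summariesonly=true dc(Processes.process) as distinct_commands values(Processes.process) as commands\n  from datamodel=Endpoint.Processes\n  where Processes.process IN (\"whoami*\", \"uname -*\", \"sudo -l*\", \"crontab -l*\", \"cat /etc/passwd*\")\n  by _time span=5m, Processes.dest\n| where distinct_commands >= 3\n```",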
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Suspicious Linux Discovery Commands” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.337",
              "n": "Suspicious microsoft workflow compiler rename",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the renaming of microsoft.workflow.compiler.exe, a rarely used executable typically located in C:\\Windows\\Microsoft.NET\\Framework64\\v4.0.30319. This detection leverages Endpoint Detection and Response (EDR) data, focusing on process names and original file names. This activity is significant because renaming this executable can indicate an attempt to evade security controls. If confirmed malicious, an attacker could use this renamed executable to execute arbitrary …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name!=microsoft.workflow.compiler.exe\n        AND\n        Processes.original_file_name=Microsoft.Workflow.Compiler.exe\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `suspicious_microsoft_workflow_compiler_rename_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate applications may use a moved copy of microsoft.workflow.compiler.exe, triggering a false positive.",
              "refs": "https://lolbas-project.github.io/lolbas/Binaries/Microsoft.Workflow.Compiler/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218/T1218.md#atomic-test-6---microsoftworkflowcompilerexe-payload-execution",
              "mitre": [
                "T1036.003",
                "T1127"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious microsoft workflow compiler rename\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.003, T1127. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious microsoft workflow compiler rename\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate applications may use a moved copy of microsoft.workflow.compiler.exe, triggering a false positive.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Suspicious microsoft workflow compiler rename” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.338",
              "n": "Suspicious microsoft workflow compiler usage",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the usage of microsoft.workflow.compiler.exe, a rarely utilized executable typically found in C:\\Windows\\Microsoft.NET\\Framework64\\v4.0.30319. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution telemetry. The significance of this activity lies in its uncommon usage, which may indicate malicious intent such as code execution or persistence mechanisms. If confirmed malicious, an attacker could leverage th…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, limited instances have been identified coming from native Microsoft utilities similar to SCCM.",
              "refs": "https://lolbas-project.github.io/lolbas/Binaries/Msbuild/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218/T1218.md#atomic-test-6---microsoftworkflowcompilerexe-payload-execution",
              "mitre": [
                "T1127"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious microsoft workflow compiler usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1127. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious microsoft workflow compiler usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, limited instances have been identified coming from native Microsoft utilities similar to SCCM.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
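              "_spl_example": "Illustrative companion search, not the packaged ESCU detection logic: because this binary is rarely used legitimately, a minimal accelerated sketch simply inventories its executions, assuming the Endpoint data model is populated.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=microsoft.workflow.compiler.exe\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| rename \"Processes.*\" as \"*\"\n```",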
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Suspicious microsoft workflow compiler usage” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.339",
              "n": "Suspicious msbuild path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of msbuild.exe from a non-standard path. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that deviate from typical msbuild.exe locations. This activity is significant because msbuild.exe is commonly abused by attackers to execute malicious code, and running it from an unusual path can indicate an attempt to evade detection. If confirmed malicious, this behavior could allow an attacker to execute …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legitimate applications may use a moved copy of msbuild.exe, triggering a false positive. Baselining of MSBuild.exe usage is recommended to better understand it's path usage. Visual Studio runs an instance out of a path that will need to be filtered on.",
              "refs": "https://lolbas-project.github.io/lolbas/Binaries/Msbuild/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1127.001/T1127.001.md",
              "mitre": [
                "T1036.003",
                "T1127.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious msbuild path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.003, T1127.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious msbuild path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legitimate applications may use a moved copy of msbuild.exe, triggering a false positive. Baselining of MSBuild.exe usage is recommended to better understand it's path usage. Visual Studio runs an instance out of a path that will need to be filtered on.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
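              "_spl_example": "Illustrative companion search, not the packaged ESCU detection logic: a minimal sketch that flags msbuild.exe executing outside the usual .NET Framework and Visual Studio locations, assuming the Endpoint data model is populated. The two `like()` patterns are deliberately loose assumptions; baseline and tighten them per the false-positive note above.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=msbuild.exe\n  by Processes.dest Processes.user Processes.process_path Processes.process\n| rename \"Processes.*\" as \"*\"\n| where NOT (like(process_path, \"%Microsoft.NET%\") OR like(process_path, \"%Microsoft Visual Studio%\"))\n```",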
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint and security-agent signals around “Suspicious msbuild path” so we can see attacker-style moves—not only a one-off file or password mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.340",
              "n": "Suspicious MSBuild Rename",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of renamed instances of msbuild.exe. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and original file names within the Endpoint data model. This activity is significant because msbuild.exe is a legitimate tool often abused by attackers to execute malicious code while evading detection. If confirmed malicious, this behavior could allow an attacker to execute arbitrary code, potentially leading to system c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name!=msbuild.exe\n        AND\n        Processes.original_file_name=MSBuild.exe\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `suspicious_msbuild_rename_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate applications may use a moved copy of msbuild, triggering a false positive.",
              "refs": "https://lolbas-project.github.io/lolbas/Binaries/Msbuild/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1127.001/T1127.001.md, https://github.com/infosecn1nja/MaliciousMacroMSBuild/",
              "mitre": [
                "T1036.003",
                "T1127.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious MSBuild Rename\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.003, T1127.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious MSBuild Rename\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate applications may use a moved copy of msbuild, triggering a false positive.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow security findings around how AI, models, and connected apps are used, so policy breaks and misuse are caught with context—not just a lonely alert in a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.341",
              "n": "Suspicious mshta child process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies child processes spawned from \"mshta.exe\". It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific child processes like \"powershell.exe\" and \"cmd.exe\". This activity is significant because \"mshta.exe\" is often exploited by attackers to execute malicious scripts or commands. If confirmed malicious, this behavior could allow an attacker to execute arbitrary code, escalate privileges, or maintain persistence within the environment. …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate applications may exhibit this behavior, triggering a false positive.",
              "refs": "https://github.com/redcanaryco/AtomicTestHarnesses, https://redcanary.com/blog/introducing-atomictestharnesses/",
              "mitre": [
                "T1218.005"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious mshta child process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious mshta child process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate applications may exhibit this behavior, triggering a false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious mshta child process”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.342",
              "n": "Suspicious mshta spawn",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the spawning of mshta.exe by wmiprvse.exe or svchost.exe. This behavior is identified using Endpoint Detection and Response (EDR) data, focusing on process creation events where the parent process is either wmiprvse.exe or svchost.exe. This activity is significant as it may indicate the use of a DCOM object to execute malicious scripts via mshta.exe, a common tactic in sophisticated attacks. If confirmed malicious, this could allow an attacker to execute arbitrary …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate applications may exhibit this behavior, triggering a false positive.",
              "refs": "https://codewhitesec.blogspot.com/2018/07/lethalhta.html, https://github.com/redcanaryco/AtomicTestHarnesses, https://redcanary.com/blog/introducing-atomictestharnesses/",
              "mitre": [
                "T1218.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious mshta spawn\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious mshta spawn\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate applications may exhibit this behavior, triggering a false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious mshta spawn”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.343",
              "n": "Suspicious PlistBuddy Usage",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of the native macOS utility, PlistBuddy, to create or modify property list (.plist) files. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions involving PlistBuddy. This activity is significant because PlistBuddy can be used to establish persistence by modifying LaunchAgents, as seen in the Silver Sparrow malware. If confirmed malicious, this could allow an attacker to mai…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name=PlistBuddy (Processes.process=*LaunchAgents*\n        OR\n        Processes.process=*RunAtLoad*\n        OR\n        Processes.process=*true*)\n      BY Processes.dest Processes.user Processes.parent_process\n         Processes.process_name Processes.process Processes.process_id\n         Processes.parent_process_id\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `suspicious_plistbuddy_usage_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legitimate applications may use PlistBuddy to create or modify property lists and possibly generate false positives. Review the property list being modified or created to confirm.",
              "refs": "https://www.marcosantadev.com/manage-plist-files-plistbuddy/",
              "mitre": [
                "T1543.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious PlistBuddy Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious PlistBuddy Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legitimate applications may use PlistBuddy to create or modify property lists and possibly generate false positives. Review the property list being modified or created to confirm.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious PlistBuddy Usage”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.344",
              "n": "Suspicious PlistBuddy Usage via OSquery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the PlistBuddy utility on macOS to create or modify property list (.plist) files. It leverages OSQuery to monitor process events, specifically looking for commands that interact with LaunchAgents and set properties like RunAtLoad. This activity is significant because PlistBuddy can be used to establish persistence mechanisms, as seen in malware like Silver Sparrow. If confirmed malicious, this could allow an attacker to maintain persistence, execute arbi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "osquery",
              "q": "`osquery_process` \"columns.cmdline\"=\"*LaunchAgents*\" OR \"columns.cmdline\"=\"*RunAtLoad*\" OR \"columns.cmdline\"=\"*true*\"\n      | `suspicious_plistbuddy_usage_via_osquery_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires osquery ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legitimate applications may use PlistBuddy to create or modify property lists and possibly generate false positives. Review the property list being modified or created to confirm.",
              "refs": "https://www.marcosantadev.com/manage-plist-files-plistbuddy/",
              "mitre": [
                "T1543.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious PlistBuddy Usage via OSquery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: osquery. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious PlistBuddy Usage via OSquery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legitimate applications may use PlistBuddy to create or modify property lists and possibly generate false positives. Review the property list being modified or created to confirm.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious PlistBuddy Usage via OSquery”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.345",
              "n": "Suspicious Process Executed From Container File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a suspicious process executed from within common container/archive file types such as ZIP, ISO, IMG, and others. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it is a common technique used by adversaries to execute scripts or evade defenses. If confirmed malicious, this behavior could allow attackers to execute arbitrary code, escalate privileges, or per…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Various business process or userland applications and behavior.",
              "refs": "https://www.mandiant.com/resources/blog/tracking-evolution-gootloader-operations, https://www.crowdstrike.com/blog/weaponizing-disk-image-files-analysis/, https://attack.mitre.org/techniques/T1204/002/",
              "mitre": [
                "T1204.002",
                "T1036.008"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Process Executed From Container File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.002, T1036.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Process Executed From Container File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Various business process or userland applications and behavior.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious Process Executed From Container File”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.346",
              "n": "Suspicious Reg exe Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances of reg.exe being launched from a command prompt (cmd.exe) that was not initiated by the user, as indicated by a parent process other than explorer.exe. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and parent process names. This activity is significant because reg.exe is often used in registry manipulation, which can be indicative of malicious behavior such as persistence mechanisms or system confi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's possible for system administrators to write scripts that exhibit this behavior. If this is the case, the search will need to be modified to filter them out.",
              "refs": "https://car.mitre.org/wiki/CAR-2013-03-001/",
              "mitre": [
                "T1112"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Reg exe Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Reg exe Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's possible for system administrators to write scripts that exhibit this behavior. If this is the case, the search will need to be modified to filter them out.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious Reg exe Process”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.347",
              "n": "Suspicious Regsvr32 Register Suspicious Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Regsvr32.exe to register DLLs from suspicious paths such as AppData, ProgramData, or Windows Temp directories. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. This activity is significant because Regsvr32.exe can be abused to proxy execution of malicious code, bypassing traditional security controls. If confirmed malicious, this could allow an attacker to execute arbitrar…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives with the query restricted to specified paths. Add more world writeable paths as tuning continues.",
              "refs": "https://attack.mitre.org/techniques/T1218/010/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.010/T1218.010.md, https://lolbas-project.github.io/lolbas/Binaries/Regsvr32/, https://support.microsoft.com/en-us/topic/how-to-use-the-regsvr32-tool-and-troubleshoot-regsvr32-error-messages-a98d960a-7392-e6fe-d90a-3f4e0cb543e5, https://any.run/report/f29a7d2ecd3585e1e4208e44bcc7156ab5388725f1d29d03e7699da0d4598e7c/0826458b-5367-45cf-b841-c95a33a01718",
              "mitre": [
                "T1218.010"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Regsvr32 Register Suspicious Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.010. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Regsvr32 Register Suspicious Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives with the query restricted to specified paths. Add more world writeable paths as tuning continues.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious Regsvr32 Register Suspicious Path”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.348",
              "n": "Suspicious Rundll32 dllregisterserver",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of rundll32.exe with the DllRegisterServer command to load a DLL. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process details. This activity is significant as it may indicate an attempt to register a malicious DLL, which can be a method for code execution or persistence. If confirmed malicious, an attacker could gain unauthorized code execution, escalate privileges, or maintain persisten…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This is likely to produce false positives and will require some filtering. Tune the query by adding command line paths to known good DLLs, or filtering based on parent process names.",
              "refs": "https://attack.mitre.org/techniques/T1218/011/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.011/T1218.011.md, https://lolbas-project.github.io/lolbas/Binaries/Rundll32/, https://symantec-enterprise-blogs.security.com/blogs/threat-intelligence/seedworm-apt-iran-middle-east, https://github.com/pan-unit42/tweets/blob/master/2020-12-10-IOCs-from-Ursnif-infection-with-Delf-variant.txt, https://www.crowdstrike.com/blog/duck-hunting-with-falcon-complete-qakbot-zip-based-campaign/, https://learn.microsoft.com/en-us/windows/win32/api/olectl/nf-olectl-dllregisterserver?redirectedfrom=MSDN",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Rundll32 dllregisterserver\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Rundll32 dllregisterserver\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This is likely to produce false positives and will require some filtering. Tune the query by adding command line paths to known good DLLs, or filtering based on parent process names.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious Rundll32 dllregisterserver”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.349",
              "n": "Suspicious Rundll32 no Command Line Arguments",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of rundll32.exe without any command line arguments. This behavior is identified using Endpoint Detection and Response (EDR) telemetry, focusing on process execution logs. It is significant because rundll32.exe typically requires command line arguments to function properly, and its absence is often associated with malicious activities, such as those performed by Cobalt Strike. If confirmed malicious, this activity could indicate an attempt to execute a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate applications may use a moved copy of rundll32, triggering a false positive.",
              "refs": "https://attack.mitre.org/techniques/T1218/011/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.011/T1218.011.md, https://lolbas-project.github.io/lolbas/Binaries/Rundll32/, https://bohops.com/2018/02/26/leveraging-inf-sct-fetch-execute-techniques-for-bypass-evasion-persistence/",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Rundll32 no Command Line Arguments\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Rundll32 no Command Line Arguments\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate applications may use a moved copy of rundll32, triggering a false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious Rundll32 no Command Line Arguments”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.350",
              "n": "Suspicious Rundll32 PluginInit",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of the rundll32.exe process with the \"plugininit\" parameter. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events and command-line arguments. This activity is significant because the \"plugininit\" parameter is commonly associated with IcedID malware, which uses it to execute an initial DLL stager to download additional payloads. If confirmed malicious, this behavior could lead to furthe…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "third party application may used this dll export name to execute function.",
              "refs": "https://threatpost.com/icedid-banking-trojan-surges-emotet/165314/",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Rundll32 PluginInit\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Rundll32 PluginInit\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: third party application may used this dll export name to execute function.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious Rundll32 PluginInit”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.351",
              "n": "Suspicious Rundll32 StartW",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of rundll32.exe with the DLL function names \"Start\" and \"StartW,\" commonly associated with Cobalt Strike payloads. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process metadata. This activity is significant as it often indicates the presence of malicious payloads, such as Cobalt Strike, which can lead to unauthorized code execution. If confirmed malicious, this activity cou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate applications may use Start as a function and call it via the command line. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1218/011/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.011/T1218.011.md, https://hstechdocs.helpsystems.com/manuals/cobaltstrike/current/userguide/index.htm#cshid=1036, https://lolbas-project.github.io/lolbas/Binaries/Rundll32/, https://bohops.com/2018/02/26/leveraging-inf-sct-fetch-execute-techniques-for-bypass-evasion-persistence/",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Rundll32 StartW\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Rundll32 StartW\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate applications may use Start as a function and call it via the command line. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious Rundll32 StartW”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.352",
              "n": "Suspicious Scheduled Task from Public Directory",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of scheduled tasks that execute binaries or scripts from public directories, such as users\\public, \\programdata\\, or \\windows\\temp, using schtasks.exe with the /create command. It leverages Sysmon Event ID 1 data to detect this behavior. This activity is significant because it often indicates an attempt to maintain persistence or execute malicious scripts, which are common tactics in malware deployment. If confirmed as malicious, this could lead to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The main source of false positives could be the legitimate use of scheduled tasks from these directories. Careful tuning of this search may be necessary to suit the specifics of your environment, reducing the rate of false positives.",
              "refs": "https://attack.mitre.org/techniques/T1053/005/",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Scheduled Task from Public Directory\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Scheduled Task from Public Directory\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The main source of false positives could be the legitimate use of scheduled tasks from these directories. Careful tuning of this search may be necessary to suit the specifics of your environment, reducing the rate of false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious Scheduled Task from Public Directory”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.353",
              "n": "Suspicious SearchProtocolHost no Command Line Arguments",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances of searchprotocolhost.exe running without command line arguments. This behavior is unusual and often associated with malicious activities, such as those performed by Cobalt Strike. The detection leverages Endpoint Detection and Response (EDR) telemetry, focusing on process execution data. This activity is significant because searchprotocolhost.exe typically runs with specific arguments, and its absence may indicate an attempt to evade detection. If confir…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives may be present in small environments. Tuning may be required based on parent process.",
              "refs": "https://github.com/mandiant/red_team_tool_countermeasures/blob/master/rules/PGF/supplemental/hxioc/SUSPICIOUS%20EXECUTION%20OF%20SEARCHPROTOCOLHOST%20(METHODOLOGY).ioc",
              "mitre": [
                "T1055"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious SearchProtocolHost no Command Line Arguments\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious SearchProtocolHost no Command Line Arguments\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives may be present in small environments. Tuning may be required based on parent process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious SearchProtocolHost no Command Line Arguments”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.354",
              "n": "Suspicious SQLite3 LSQuarantine Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of SQLite3 querying the MacOS preferences to determine the original URL from which a package was downloaded. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions involving LSQuarantine. This activity is significant as it is commonly associated with MacOS adware and other malicious software. If confirmed malicious, this behavior could indicate an attempt to track or manipula…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name=sqlite3 Processes.process=*LSQuarantine*\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `suspicious_sqlite3_lsquarantine_behavior_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://redcanary.com/blog/clipping-silver-sparrows-wings/, https://www.marcosantadev.com/manage-plist-files-plistbuddy/",
              "mitre": [
                "T1074"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious SQLite3 LSQuarantine Behavior\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1074. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious SQLite3 LSQuarantine Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
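\n\nWorked Example: Reviewing This Detection’s Risk Contributions\n\nA minimal sketch for reviewing the risk this detection has actually written, assuming the Risk data model is populated and the correlation search keeps its default ESCU name (the search_name wildcard is an assumption; adjust it if you renamed the search):\n\n```spl\n| tstats summariesonly=true count sum(All_Risk.calculated_risk_score) as total_risk\n    from datamodel=Risk.All_Risk\n    where All_Risk.search_name=\"*Suspicious SQLite3 LSQuarantine Behavior*\"\n    by All_Risk.risk_object All_Risk.risk_object_type\n| `drop_dm_object_name(All_Risk)`\n| sort -total_risk\n```\n\nEntities that accumulate an unexpectedly high total_risk are usually better handled with a per-entity risk exception than by disabling the detection outright.",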
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious SQLite3 LSQuarantine Behavior”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.355",
              "n": "Suspicious WAV file in Appdata Folder",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of .wav files in the AppData folder, a behavior associated with Remcos RAT malware, which stores audio recordings in this location for data exfiltration. The detection leverages endpoint process and filesystem data to identify .wav file creation within the AppData\\Roaming directory. This activity is significant as it indicates potential unauthorized data collection and exfiltration by malware. If confirmed malicious, this could lead to sensitive inform…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 11, Windows Event Log Security 4688 AND Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, file_name, file_path and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.malwarebytes.com/blog/threat-intelligence/2021/07/remcos-rat-delivered-via-visual-basic/",
              "mitre": [
                "T1113"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious WAV file in Appdata Folder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 11, Windows Event Log Security 4688 AND Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1113. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious WAV file in Appdata Folder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
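\n\nWorked Example: Spot-Checking the Underlying Telemetry\n\nA minimal sketch, assuming the Endpoint.Filesystem data model is populated by your Sysmon TA, that mirrors the documented detection logic for ad-hoc review; it is not the packaged ESCU search, and the path wildcard may need adjusting for your environment:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n    from datamodel=Endpoint.Filesystem\n    where Filesystem.file_name=\"*.wav\" Filesystem.file_path=\"*\\\\AppData\\\\Roaming\\\\*\"\n    by Filesystem.dest Filesystem.file_path Filesystem.file_name\n| `drop_dm_object_name(Filesystem)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nHosts showing .wav creation by non-interactive processes warrant the full investigation workflow above.",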
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious WAV file in Appdata Folder”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.356",
              "n": "Suspicious wevtutil Usage",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the usage of wevtutil.exe with parameters for clearing event logs such as Application, Security, Setup, Trace, or System. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because clearing event logs can be an attempt to cover tracks after malicious actions, hindering forensic investigations. If confirmed malicious, this behavior could allow an attacker to erase ev…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The wevtutil.exe application is a legitimate Windows event log utility. Administrators may use it to manage Windows event logs.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1070.001/T1070.001.md",
              "mitre": [
                "T1070.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious wevtutil Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious wevtutil Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The wevtutil.exe application is a legitimate Windows event log utility. Administrators may use it to manage Windows event logs.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
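\n\nWorked Example: Lookup-Based Suppression\n\nTo illustrate the lookup-based suppression bullet above, a minimal sketch that spot-checks wevtutil clear-log activity (`cl` is the clear-log verb) and filters out a hypothetical approved_log_admins.csv lookup; both the lookup name and its user column are assumptions for your environment:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n    from datamodel=Endpoint.Processes\n    where Processes.process_name=wevtutil.exe Processes.process=\"* cl *\"\n    by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n| search NOT [| inputlookup approved_log_admins.csv | fields user]\n```\n\nSurviving rows are clear-log events from accounts outside the approved list and deserve a closer look.",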
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious wevtutil Usage”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.357",
              "n": "Suspicious writes to windows Recycle Bin",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a process other than explorer.exe writes to the Windows Recycle Bin. It leverages the Endpoint.Filesystem and Endpoint.Processes data models in Splunk to identify any process writing to the \"*$Recycle.Bin*\" file path, excluding explorer.exe. This activity is significant because it may indicate an attacker attempting to hide their actions, potentially leading to data theft, ransomware, or other malicious outcomes. If confirmed malicious, this behavior could all…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Because the Recycle Bin is a hidden folder in modern versions of Windows, it would be unusual for a process other than explorer.exe to write to it. Incidents should be investigated as appropriate.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1036"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious writes to windows Recycle Bin\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious writes to windows Recycle Bin\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Because the Recycle Bin is a hidden folder in modern versions of Windows, it would be unusual for a process other than explorer.exe to write to it. Incidents should be investigated as appropriate.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
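\n\nWorked Example: Reviewing Raw Recycle Bin Writes\n\nA minimal sketch for eyeballing the Filesystem activity behind this detection, assuming your CIM version populates Filesystem.process_guid; note that the packaged ESCU search additionally joins Endpoint.Processes on process_guid to exclude explorer.exe, which this spot check does not do:\n\n```spl\n| tstats `security_content_summariesonly` count values(Filesystem.file_name) as file_name\n    from datamodel=Endpoint.Filesystem\n    where Filesystem.file_path=\"*$Recycle.Bin*\"\n    by Filesystem.dest Filesystem.process_guid\n| `drop_dm_object_name(Filesystem)`\n| sort -count\n```\n\nUse the process_guid values to pivot into Endpoint.Processes and identify which binaries performed the writes.",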
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious writes to windows Recycle Bin”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.358",
              "n": "System Info Gathering Using Dxdiag Application",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of the dxdiag.exe process with specific command-line arguments, which is used to gather system information. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events and command-line details. This activity is significant because dxdiag.exe is rarely used in corporate environments and its execution may indicate reconnaissance efforts by malicious actors. If confirmed malicious, this activity…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time)\n    as lastTime from datamodel=Endpoint.Processes where\n    (Processes.process_name=dxdiag.exe OR Processes.original_file_name=dxdiag.exe)\n    Processes.process = \"* /t *\"\n    by Processes.action Processes.dest Processes.original_file_name Processes.parent_process\n    Processes.parent_process_exec Processes.parent_process_guid Processes.parent_process_id\n    Processes.parent_process_name Processes.parent_process_path Processes.process Processes.process_exec\n    Processes.process_guid Processes.process_hash Processes.process_id Processes.process_integrity_level\n    Processes.process_name Processes.process_path Processes.user Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`\n    | `system_info_gathering_using_dxdiag_application_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This commandline can be used by a network administrator to audit host machine specifications. Thus, a filter is needed.",
              "refs": "https://app.any.run/tasks/df0baf9f-8baf-4c32-a452-16562ecb19be/",
              "mitre": [
                "T1592"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"System Info Gathering Using Dxdiag Application\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1592. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"System Info Gathering Using Dxdiag Application\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This commandline can be used by a network administrator to audit host machine specifications. Thus, a filter is needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “System Info Gathering Using Dxdiag Application”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.359",
              "n": "System Information Discovery Detection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies system information discovery techniques, such as the execution of commands like `wmic qfe`, `systeminfo`, and `hostname`. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. This activity is significant because attackers often use these commands to gather system configuration details, which can aid in further exploitation. If confirmed malicious, this behavior could allow attackers to tailor their attacks base…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators debugging servers",
              "refs": "https://web.archive.org/web/20210119205146/https://oscp.infosecsanyam.in/priv-escalation/windows-priv-escalation",
              "mitre": [
                "T1082"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"System Information Discovery Detection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"System Information Discovery Detection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators debugging servers\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
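\n\nWorked Example: Baselining Discovery Commands\n\nSince the known false positive here is administrators debugging servers, a minimal sketch that baselines discovery-command volume per host and user and keeps only statistical outliers; the command list is an illustrative subset and the two-sigma threshold is an assumption, not ESCU logic:\n\n```spl\n| tstats `security_content_summariesonly` count\n    from datamodel=Endpoint.Processes\n    where (Processes.process_name IN (\"systeminfo.exe\", \"hostname.exe\") OR Processes.process=\"* qfe *\")\n    by Processes.dest Processes.user\n| `drop_dm_object_name(Processes)`\n| eventstats avg(count) as avg_count stdev(count) as stdev_count\n| where count > avg_count + 2 * stdev_count\n```\n\nOutlier pairs can then be cross-checked against the Risk Investigation drilldown shown above before any tuning decision.",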
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “System Information Discovery Detection”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.360",
              "n": "System Processes Run From Unexpected Locations",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies system processes running from unexpected locations outside of paths such as `C:\\Windows\\System32\\` or `C:\\Windows\\SysWOW64`. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process paths, names, and hashes. This activity is significant as it may indicate a malicious process attempting to masquerade as a legitimate system process. If confirmed malicious, this behavior could allow an attacker to execute code, escalate privileges, o…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection may require tuning based on third party applications utilizing native Windows binaries in non-standard paths.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1036.003/T1036.003.yaml, https://attack.mitre.org/techniques/T1036/003/",
              "mitre": [
                "T1036.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"System Processes Run From Unexpected Locations\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"System Processes Run From Unexpected Locations\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection may require tuning based on third party applications utilizing native Windows binaries in non-standard paths.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
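\n\nWorked Example: Spot-Checking System Binary Paths\n\nA minimal sketch, assuming the Endpoint.Processes data model is accelerated, that looks for a handful of core system binaries executing outside the expected Windows directories; the binary list is an illustrative subset, while the packaged ESCU detection covers a broader set of names and paths:\n\n```spl\n| tstats `security_content_summariesonly` count values(Processes.process_path) as process_path\n    from datamodel=Endpoint.Processes\n    where Processes.process_name IN (\"svchost.exe\", \"lsass.exe\", \"services.exe\", \"winlogon.exe\")\n      NOT (Processes.process_path IN (\"*\\\\system32\\\\*\", \"*\\\\syswow64\\\\*\"))\n    by Processes.dest Processes.process_name Processes.user\n| `drop_dm_object_name(Processes)`\n```\n\nAny hit is worth manual review, since legitimate third-party software occasionally ships renamed native binaries in non-standard paths.",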
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “System Processes Run From Unexpected Locations”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.361",
              "n": "System User Discovery With Query",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `query.exe` with command-line arguments aimed at discovering logged-in users. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as adversaries may use `query.exe` to gain situational awareness and perform Active Directory discovery on compromised endpoints. If confirmed malicious, this behavior could allow attackers to identify active users, aidin…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"query.exe\"\n            OR\n            Processes.original_file_name=\"query.exe\"\n        )\n        AND Processes.process=\"*user*\" AND ((NOT Processes.process=\"*/server*\") OR Processes.process IN (\"*/server:localhost*\", \"*/server:127.0.0.1*\"))\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `system_user_discovery_with_query_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1033/",
              "mitre": [
                "T1033"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"System User Discovery With Query\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1033. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"System User Discovery With Query\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “System User Discovery With Query”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.362",
              "n": "System User Discovery With Whoami",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `whoami.exe` without any arguments. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. This activity is significant because both Red Teams and adversaries use `whoami.exe` to identify the current logged-in user, aiding in situational awareness and Active Directory discovery. If confirmed malicious, this behavior could indicate an attacker is gathering information to further compromise the system…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1033/",
              "mitre": [
                "T1033"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"System User Discovery With Whoami\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1033. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"System User Discovery With Whoami\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
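\nOptional Triage Example\n\nA minimal sketch for reviewing the risk events this detection writes, assuming the default ES risk index (`risk`) and the typical ESCU correlation search naming convention (\"ESCU - <name> - Rule\"); adjust the index and search name to match your deployment.\n\n```spl\nindex=risk search_name=\"ESCU - System User Discovery With Whoami - Rule\"\n| stats count min(_time) as firstTime max(_time) as lastTime values(risk_message) as risk_message\n  by risk_object, risk_object_type\n| sort -count\n```",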
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “System User Discovery With Whoami”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.363",
              "n": "Time Provider Persistence Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications to the time provider registry for persistence and autostart. It leverages data from the Endpoint.Registry data model, focusing on changes to the \"CurrentControlSet\\\\Services\\\\W32Time\\\\TimeProviders\" registry path. This activity is significant because such modifications are uncommon and can indicate an attempt to establish persistence on a compromised host. If confirmed malicious, this technique allows an attacker to maintain access and exec…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://pentestlab.blog/2019/10/22/persistence-time-providers/, https://attack.mitre.org/techniques/T1547/003/",
              "mitre": [
                "T1547.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Time Provider Persistence Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Time Provider Persistence Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
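\nOptional Triage Example\n\nA minimal verification sketch against the Endpoint.Registry data model referenced by this detection. The macro and CIM field names follow standard ESCU conventions; confirm the registry path filter matches how your data is normalized.\n\n```spl\n| tstats `security_content_summariesonly` count from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\CurrentControlSet\\\\Services\\\\W32Time\\\\TimeProviders*\"\n  by Registry.dest Registry.registry_path Registry.registry_value_name\n| `drop_dm_object_name(Registry)`\n```",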
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Time Provider Persistence Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.364",
              "n": "Trickbot Named Pipe",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation or connection to a named pipe associated with Trickbot malware. It leverages Sysmon EventCodes 17 and 18 to identify named pipes with the pattern \"\\\\pipe\\\\*lacesomepipe\". This activity is significant as Trickbot uses named pipes for communication with its command and control (C2) servers, facilitating data exfiltration and command execution. If confirmed malicious, this behavior could allow attackers to maintain persistence, execute arbitrary commands,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 17, Sysmon EventID 18",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and pipename from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA. .",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://labs.vipre.com/trickbot-and-its-modules/",
              "mitre": [
                "T1055"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Trickbot Named Pipe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 17, Sysmon EventID 18. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Trickbot Named Pipe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
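\nOptional Triage Example\n\nA minimal raw-event sketch against Sysmon pipe events (EventCodes 17 and 18). The index and sourcetype are placeholders for your environment, and the loosened pipe-name pattern is taken from the analytic description above.\n\n```spl\nindex=main sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode IN (17, 18) PipeName=\"*lacesomepipe\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, Image, PipeName\n```",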
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Trickbot Named Pipe”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.365",
              "n": "UAC Bypass With Colorui COM Object",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a potential UAC bypass using the colorui.dll COM Object. It leverages Sysmon EventCode 7 to identify instances where colorui.dll is loaded by a process other than colorcpl.exe, excluding common system directories. This activity is significant because UAC bypass techniques are often used by malware, such as LockBit ransomware, to gain elevated privileges without user consent. If confirmed malicious, this could allow an attacker to execute code with higher privileges…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "not so common. but 3rd part app may load this dll.",
              "refs": "https://news.sophos.com/en-us/2020/04/24/lockbit-ransomware-borrows-tricks-to-keep-up-with-revil-and-maze/",
              "mitre": [
                "T1218.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"UAC Bypass With Colorui COM Object\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"UAC Bypass With Colorui COM Object\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: not so common. but 3rd part app may load this dll.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
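\nOptional Triage Example\n\nA minimal raw-event sketch for the image-load condition described above, using Sysmon EventCode 7. The index and sourcetype are placeholders; the packaged detection also excludes common system directories, which this sketch omits for brevity.\n\n```spl\nindex=main sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=7 ImageLoaded=\"*\\\\colorui.dll\" NOT Image=\"*\\\\colorcpl.exe\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, Image, ImageLoaded\n```",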
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “UAC Bypass With Colorui COM Object”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.366",
              "n": "Uninstall App Using MsiExec",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the uninstallation of applications using msiexec with specific command-line arguments. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. This activity is significant because it is an uncommon practice in enterprise environments and has been associated with malicious behavior, such as disabling antivirus software. If confirmed malicious, this could allow an attacker to remove se…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://threadreaderapp.com/thread/1423361119926816776.html",
              "mitre": [
                "T1218.007"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Uninstall App Using MsiExec\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Uninstall App Using MsiExec\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
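\nOptional Triage Example\n\nA minimal hunting sketch over the Endpoint.Processes data model for msiexec uninstall activity. The `/x` and `/qn` switches are illustrative assumptions about the command-line arguments involved; align them with the packaged detection logic in Content Management.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"msiexec.exe\" AND Processes.process IN (\"*/x*\", \"*/qn*\")\n  by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n```",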
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Uninstall App Using MsiExec”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.367",
              "n": "Unknown Process Using The Kerberos Protocol",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a non-lsass.exe process making an outbound connection on port 88, which is typically used by the Kerberos authentication protocol. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and network traffic logs. This activity is significant because, under normal circumstances, only the lsass.exe process should interact with the Kerberos Distribution Center. If confirmed malicious, this behavior could indicate an adve…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 3 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Custom applications may leverage the Kerberos protocol. Filter as needed.",
              "refs": "https://stealthbits.com/blog/how-to-detect-overpass-the-hash-attacks/, https://www.thehacker.recipes/ad/movement/kerberos/ptk",
              "mitre": [
                "T1550"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Unknown Process Using The Kerberos Protocol\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1550. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Unknown Process Using The Kerberos Protocol\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Custom applications may leverage the Kerberos protocol. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
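\nOptional Triage Example\n\nA minimal raw-event sketch for the port 88 condition described above, using Sysmon network connection events (EventCode 3). The index and sourcetype are placeholders for your environment.\n\n```spl\nindex=main sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=3 DestinationPort=88 NOT Image=\"*\\\\lsass.exe\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, Image, DestinationIp\n```",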
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Unknown Process Using The Kerberos Protocol”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.368",
              "n": "Unload Sysmon Filter Driver",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `fltMC.exe` to unload the Sysmon driver, which stops Sysmon from collecting data. It leverages Endpoint Detection and Response (EDR) logs, focusing on process names and command-line executions. This activity is significant because disabling Sysmon can blind security monitoring, allowing malicious actions to go undetected. If confirmed malicious, this could enable attackers to execute further attacks without being logged, leading to potential data breache…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.ired.team/offensive-security/defense-evasion/unloading-sysmon-driver",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Unload Sysmon Filter Driver\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Unload Sysmon Filter Driver\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
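\n\nOptional risk-event spot check (a minimal sketch, assuming the default ES `risk` index and that the saved-search name contains the detection title; rename both if your deployment differs):\n\n```spl\nindex=risk search_name=\"*Unload Sysmon Filter Driver*\"\n| stats count min(_time) as firstTime max(_time) as lastTime values(risk_message) by risk_object, risk_object_type\n| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`\n```",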
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Unload Sysmon Filter Driver”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.369",
              "n": "Unusually Long Command Line",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects unusually long command lines, which may indicate malicious activity. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on the length of command lines executed on hosts. This behavior is significant because attackers often use obfuscated or complex command lines to evade detection and execute malicious payloads. If confirmed malicious, this activity could lead to data theft, ransomware deployment, or further system compromise. Analysts sh…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(\"Processes\")`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | eval processlen=len(process)\n    | eventstats stdev(processlen) as stdev, avg(processlen) as avg\n      BY dest\n    | stats max(processlen) as maxlen, values(stdev) as stdevperhost, values(avg) as avgperhost\n      BY dest, user, process_name,\n         process\n    | eval threshold = 3\n    | where maxlen > ((threshold*stdevperhost) + avgperhost)\n    | `unusually_long_command_line_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legitimate applications start with long command lines.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "process_name",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Unusually Long Command Line\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Unusually Long Command Line\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legitimate applications start with long command lines.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
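\n\nOptional baseline check: a minimal sketch using CIM Endpoint field names (an illustrative pre-enablement query, not part of the packaged detection) to inspect command-line length per host before tuning the threshold:\n\n```spl\n| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes\n  by Processes.dest Processes.process\n| `drop_dm_object_name(Processes)`\n| eval processlen=len(process)\n| stats avg(processlen) as avg_len, max(processlen) as max_len by dest\n| sort -max_len\n```",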
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Unusually Long Command Line”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.370",
              "n": "Unusually Long Command Line - MLTK",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies unusually long command lines executed on hosts, which may indicate malicious activity. It leverages the Machine Learning Toolkit (MLTK) to detect command lines with lengths that deviate from the norm for a given user. This is significant for a SOC as unusually long command lines can be a sign of obfuscation or complex malicious scripts. If confirmed malicious, this activity could allow attackers to execute sophisticated commands, potentially leading to unauthori…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | eval processlen=len(process)\n    | search user!=unknown\n    | apply cmdline_pdfmodel threshold=0.01\n    | rename \"IsOutlier(processlen)\" as isOutlier\n    | search isOutlier > 0\n    | table firstTime lastTime user dest process_name process processlen count\n    | `unusually_long_command_line___mltk_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legitimate applications use long command lines for installs or updates. You should review identified command lines for legitimacy. You may modify the first part of the search to omit legitimate command lines from consideration. If you are seeing more results than desired, you may consider changing the value of threshold in the search to a smaller value. You should also periodically re-run the support search to re-build the ML model on the latest data. You may get unexpected results if the user identified in the results is not present in the data used to build the associated model.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "system",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Unusually Long Command Line - MLTK\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Unusually Long Command Line - MLTK\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legitimate applications use long command lines for installs or updates. You should review identified command lines for legitimacy. You may modify the first part of the search to omit legitimate command lines from consideration. If you are seeing more results than desired, you may consider changing the value of threshold in the search to a smaller value. You should also periodically re-run the support search to re-build the ML model on the latest data. You may get unexpected results if the user identified in the results is not present in the data used to build the associated model.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
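\n\nOptional model rebuild: a minimal sketch of the support search that (re)builds the `cmdline_pdfmodel` MLTK model, assuming the Machine Learning Toolkit is installed. Consult the ESCU baseline content for the canonical version:\n\n```spl\n| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes\n  by Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n| eval processlen=len(process)\n| search user!=unknown\n| fit DensityFunction processlen by \"user\" into cmdline_pdfmodel\n```",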
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Unusually Long Command Line - MLTK”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.371",
              "n": "User Discovery With Env Vars PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `powershell.exe` with command-line arguments that use PowerShell environment variables to identify the current logged user. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as adversaries may use it for situational awareness and Active Directory discovery on compromised endpoints. If confirmed malicious, this behavior could allow attackers to gat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"powershell.exe\"\n        )\n        (Processes.process=\"*$env:UserName*\" OR Processes.process=\"*[System.Environment]::UserName*\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `user_discovery_with_env_vars_powershell_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1033/",
              "mitre": [
                "T1033"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"User Discovery With Env Vars PowerShell\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1033. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"User Discovery With Env Vars PowerShell\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “User Discovery With Env Vars PowerShell”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.372",
              "n": "User Discovery With Env Vars PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of PowerShell environment variables to identify the current logged user by leveraging PowerShell Script Block Logging (EventCode=4104). This method monitors script blocks containing `$env:UserName` or `[System.Environment]::UserName`. Identifying this activity is significant as adversaries and Red Teams may use it for situational awareness and Active Directory discovery on compromised endpoints. If confirmed malicious, this activity could allow attackers to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104 (ScriptBlockText = \"*$env:UserName*\" OR ScriptBlockText = \"*[System.Environment]::UserName*\")\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `user_discovery_with_env_vars_powershell_script_block_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerShell commandlet for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1033/",
              "mitre": [
                "T1033"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"User Discovery With Env Vars PowerShell Script Block\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1033. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"User Discovery With Env Vars PowerShell Script Block\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerShell commandlet for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “User Discovery With Env Vars PowerShell Script Block”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.373",
              "n": "USN Journal Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of the USN Journal using the fsutil.exe utility. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. This activity is significant because the USN Journal maintains a log of all changes made to files on the disk, and its deletion can be an indicator of an attempt to cover tracks or hinder forensic investigations. If confirmed malicious, this action could allow an atta…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/fsutil-usn",
              "mitre": [
                "T1070"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"USN Journal Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"USN Journal Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
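\n\nOptional ad-hoc check, an illustrative approximation of the packaged logic (the authoritative search lives in Content Management): fsutil.exe deleting the USN journal:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"fsutil.exe\" Processes.process=\"*deletejournal*\"\n  by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`\n```",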
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “USN Journal Deletion”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.374",
              "n": "Vbscript Execution Using Wscript App",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of VBScript using the wscript.exe application. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and command-line telemetry. This activity is significant because wscript.exe is typically not used to execute VBScript, which is usually associated with cscript.exe. This deviation can indicate an attempt to evade traditional process monitoring and antivirus defenses. If confirmed malicious, this technique could allow…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.joesandbox.com/analysis/369332/0/html, https://malpedia.caad.fkie.fraunhofer.de/details/win.asyncrat",
              "mitre": [
                "T1059.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Vbscript Execution Using Wscript App\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Vbscript Execution Using Wscript App\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
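\n\nOptional ad-hoc check, an illustrative approximation rather than the packaged ESCU logic: wscript.exe invoking VBScript content:\n\n```spl\n| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes\n  where Processes.process_name=\"wscript.exe\" Processes.process=\"*.vbs*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",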
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Vbscript Execution Using Wscript App”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.375",
              "n": "WBAdmin Delete System Backups",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of wbadmin.exe with flags that delete backup files, specifically targeting catalog or system state backups. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because it is commonly used by ransomware to prevent recovery by deleting system backups. If confirmed malicious, this action could severely hinder recovery efforts, leading to prolonged downtime…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may modify the boot configuration.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1490/T1490.md, https://thedfirreport.com/2020/10/08/ryuks-return/, https://attack.mitre.org/techniques/T1490/, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/wbadmin, https://www.welivesecurity.com/2019/04/30/buhtrap-backdoor-ransomware-advertising-platform/",
              "mitre": [
                "T1490"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WBAdmin Delete System Backups\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WBAdmin Delete System Backups\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may modify the boot configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
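\n\nOptional ad-hoc check, an illustrative approximation of the packaged logic: wbadmin.exe deleting catalog or system state backups:\n\n```spl\n| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes\n  where Processes.process_name=\"wbadmin.exe\" Processes.process=\"*delete*\"\n    (Processes.process=\"*catalog*\" OR Processes.process=\"*systemstatebackup*\")\n  by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n```",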
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “WBAdmin Delete System Backups”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.376",
              "n": "Web Servers Executing Suspicious Processes",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of suspicious processes on systems identified as web servers. It leverages the Splunk data model \"Endpoint.Processes\" to search for specific process names such as \"whoami\", \"ping\", \"iptables\", \"wget\", \"service\", and \"curl\". This activity is significant because these processes are often used by attackers for reconnaissance, persistence, or data exfiltration. If confirmed malicious, this could lead to data theft, deployment of additional malware, or eve…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.dest_category=\"web_server\"\n        AND\n        (Processes.process=\"*whoami*\"\n        OR\n        Processes.process=\"*ping*\"\n        OR\n        Processes.process=\"*iptables*\"\n        OR\n        Processes.process=\"*wget*\"\n        OR\n        Processes.process=\"*service*\"\n        OR\n        Processes.process=\"*curl*\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `web_servers_executing_suspicious_processes_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some of these processes may be used legitimately on web servers during maintenance or other administrative tasks.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1082"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Web Servers Executing Suspicious Processes\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Web Servers Executing Suspicious Processes\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some of these processes may be used legitimately on web servers during maintenance or other administrative tasks.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Web Servers Executing Suspicious Processes”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.377",
              "n": "Wermgr Process Create Executable File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the wermgr.exe process creating an executable file. It leverages Sysmon EventCode 11 to identify instances where wermgr.exe generates a .exe file. This behavior is unusual because wermgr.exe is typically associated with error reporting, not file creation. Such activity is significant as it may indicate TrickBot malware, which injects code into wermgr.exe to execute malicious actions like downloading additional payloads. If confirmed malicious, this could lead to fu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://labs.vipre.com/trickbot-and-its-modules/",
              "mitre": [
                "T1027"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Wermgr Process Create Executable File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Wermgr Process Create Executable File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Wermgr Process Create Executable File”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.378",
              "n": "Windows AdFind Exe",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `adfind.exe` standalone or with specific command-line arguments related to Active Directory queries. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, command-line arguments, and parent Processes. This activity is significant because `adfind.exe` is a powerful tool often used by threat actors like Wizard Spider and FIN6 to gather sensitive AD information. If confirmed malicious, this activity could a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "ADfind is a command-line tool for AD administration and management that is seen to be leveraged by various adversaries. Filter out legitimate administrator usage using the filter macro.",
              "refs": "https://www.volexity.com/blog/2020/12/14/dark-halo-leverages-solarwinds-compromise-to-breach-organizations/, https://www.mandiant.com/resources/a-nasty-trick-from-credential-theft-malware-to-business-disruption, https://www.joeware.net/freetools/tools/adfind/index.htm, https://thedfirreport.com/2023/05/22/icedid-macro-ends-in-nokoyawa-ransomware/",
              "mitre": [
                "T1018"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AdFind Exe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AdFind Exe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: ADfind is a command-line tool for AD administration and management that is seen to be leveraged by various adversaries. Filter out legitimate administrator usage using the filter macro.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows AdFind Exe”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.379",
              "n": "Windows Admin Permission Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of a suspicious file named 'win.dat' in the root directory (C:\\). It leverages data from the Endpoint.Filesystem datamodel to detect this activity. This behavior is significant as it is commonly used by malware like NjRAT to check for administrative privileges on a compromised host. If confirmed malicious, this activity could indicate that the malware has administrative access, allowing it to perform high-privilege actions, potentially leading to fu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the Filesystem responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur if there are legitimate accounts with the privilege to drop files in the root of the C drive. It's recommended to verify the legitimacy of such actions and the accounts involved.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.njrat",
              "mitre": [
                "T1069.001"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Admin Permission Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Admin Permission Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur if there are legitimate accounts with the privilege to drop files in the root of the C drive. It's recommended to verify the legitimacy of such actions and the accounts involved.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Admin Permission Discovery”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.380",
              "n": "Windows Admon Default Group Policy Object Modified",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the default Group Policy Objects (GPOs) in an Active Directory environment. It leverages Splunk's Admon to monitor updates to the \"Default Domain Policy\" and \"Default Domain Controllers Policy.\" This activity is significant because changes to these default GPOs can indicate an adversary with privileged access attempting to gain further control, establish persistence, or deploy malware across multiple hosts. If confirmed malicious, such modification…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Active Directory Admon",
              "q": "# Shared SPL: intentional — see UC-10.3.381\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dcName$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Active Directory Admon ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The default Group Policy Objects within an AD network may be legitimately updated for administrative operations, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1484/, https://attack.mitre.org/techniques/T1484/001, https://www.trustedsec.com/blog/weaponizing-group-policy-objects-access/, https://adsecurity.org/?p=2716, https://help.splunk.com/en/splunk-cloud-platform/get-started/get-data-in/9.3.2411/get-windows-data/monitor-active-directory",
              "mitre": [
                "T1484.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Admon Default Group Policy Object Modified\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Active Directory Admon. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1484.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Admon Default Group Policy Object Modified\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The default Group Policy Objects within an AD network may be legitimately updated for administrative operations, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Admon Default Group Policy Object Modified”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.381",
              "n": "Windows Admon Group Policy Object Created",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a new Group Policy Object (GPO) using Splunk's Admon data. It identifies events where a new GPO is created, excluding default \"New Group Policy Object\" entries. Monitoring GPO creation is crucial as adversaries can exploit GPOs to escalate privileges or deploy malware across an Active Directory network. If confirmed malicious, this activity could allow attackers to control system configurations, deploy ransomware, or propagate malware, significantly…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Active Directory Admon",
              "q": "# Shared SPL: intentional — see UC-10.3.380\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dcName$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Active Directory Admon ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Group Policy Objects are created as part of regular administrative operations, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1484/, https://attack.mitre.org/techniques/T1484/001, https://www.trustedsec.com/blog/weaponizing-group-policy-objects-access/, https://adsecurity.org/?p=2716, https://help.splunk.com/en/splunk-cloud-platform/get-started/get-data-in/9.3.2411/get-windows-data/monitor-active-directory",
              "mitre": [
                "T1484.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Admon Group Policy Object Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Active Directory Admon. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1484.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Admon Group Policy Object Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Group Policy Objects are created as part of regular administrative operations, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Admon Group Policy Object Created”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.382",
              "n": "Windows Advanced Installer MSIX with AI_STUBS Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of Advanced Installer MSIX Package Support Framework (PSF) components, specifically the AI_STUBS executables with the original filename 'popupwrapper.exe'. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process paths and original filenames. This activity is significant as adversaries have been observed packaging malicious content within MSIX files built with Advanced Installer to bypass security control…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain process execution information, including process paths. These logs must be processed using the appropriate Splunk Technology Add-…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications packaged with Advanced Installer using the Package Support Framework may trigger this detection. Verify if the MSIX package is from a trusted source and signed by a trusted publisher before taking action. Organizations that use Advanced Installer for legitimate software packaging may see false positives.",
              "refs": "https://redcanary.com/blog/threat-intelligence/msix-installers/, https://redcanary.com/threat-detection-report/techniques/installer-packages/, https://learn.microsoft.com/en-us/windows/msix/package/package-support-framework, https://learn.microsoft.com/en-us/windows/msix/desktop/desktop-to-uwp-behind-the-scenes, https://attack.mitre.org/techniques/T1218/",
              "mitre": [
                "T1218",
                "T1553.005",
                "T1204.002"
              ],
              "dtype": "file_path",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Advanced Installer MSIX with AI_STUBS Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file path entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218, T1553.005, T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Advanced Installer MSIX with AI_STUBS Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications packaged with Advanced Installer using the Package Support Framework may trigger this detection. Verify if the MSIX package is from a trusted source and signed by a trusted publisher before taking action. Organizations that use Advanced Installer for legitimate software packaging may see false positives.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
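The drilldown SPL in the `q` field above uses Enterprise Security tokens such as `$dest$`, which only resolve inside Incident Review. To exercise it ad hoc while validating this detection, substitute a concrete entity. A minimal sketch, assuming a hypothetical host name `WIN-WS-042` (the `starthoursago=168` clause mirrors the packaged drilldown; you can drop it and use the time picker instead):

```spl
| from datamodel Risk.All_Risk
| search normalized_risk_object IN ("WIN-WS-042") starthoursago=168
| stats count min(_time) as firstTime max(_time) as lastTime
        values(search_name) as "Search Name"
        values(risk_message) as "Risk Message"
  by normalized_risk_object
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```

The `security_content_ctime` macros ship with ESCU; if they are unavailable, replace them with `| convert ctime(firstTime) ctime(lastTime)`.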
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Advanced Installer MSIX with AI_STUBS Execution”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.383",
              "n": "Windows Alternate DataStream - Base64 Content",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of Alternate Data Streams (ADS) with Base64 content on Windows systems. It leverages Sysmon EventID 15, which captures file creation events, including the content of named streams. ADS can conceal malicious payloads, making them significant for SOC monitoring. This detection identifies hidden streams that may contain executables, scripts, or configuration data, often used by malware to evade detection. If confirmed malicious, this activity could allow …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 15",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Target environment must ingest sysmon data, specifically Event ID 15.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://gist.github.com/api0cradle/cdd2d0d0ec9abb686f0e89306e277b8f, https://car.mitre.org/analytics/CAR-2020-08-001/, https://blogs.juniper.net/en-us/threat-research/bitpaymer-ransomware-hides-behind-windows-alternate-data-streams, https://blog.netwrix.com/2022/12/16/alternate_data_stream/, https://github.com/trustedsec/SysmonCommunityGuide/blob/master/chapters/file-stream-creation-hash.md",
              "mitre": [
                "T1564.004"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Alternate DataStream - Base64 Content\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 15. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1564.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Alternate DataStream - Base64 Content\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
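Before enabling the Correlation Search, it can help to eyeball the raw events this detection inspects. This is not the ESCU detection logic (that lives in the Correlation Search definition in Content Management); it is a minimal sketch, assuming a `sysmon` macro scoping your Sysmon index/sourcetype and a Sysmon configuration that captures stream contents for EventID 15:

```spl
`sysmon` EventCode=15 Contents!="-"
| regex Contents="(?:[A-Za-z0-9+/]{4}){8,}(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?"
| table _time, host, Image, TargetFilename, Contents
```

The regex (at least 32 contiguous base64 characters, with optional padding) is illustrative; tighten or loosen it to match your tolerance for short encoded strings.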
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Alternate DataStream - Base64 Content”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.384",
              "n": "Windows Apache Benchmark Binary",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Apache Benchmark binary (ab.exe), commonly used by MetaSploit payloads. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where the original file name is ab.exe. This activity is significant as it may indicate the presence of a MetaSploit attack, which uses Apache Benchmark to generate malicious payloads. If confirmed malicious, this could lead to unauthorized network connections, further s…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as there is a small subset of binaries that contain the original file name of ab.exe. Filter as needed.",
              "refs": "https://seclists.org/metasploit/2013/q3/13",
              "mitre": [
                "T1059"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Apache Benchmark Binary\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Apache Benchmark Binary\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as there is a small subset of binaries that contain the original file name of ab.exe. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
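The detection keys on process creation events whose original file name is `ab.exe`. As a hedged approximation of that logic (the authoritative search is viewable in Content Management), a CIM `tstats` sketch over the Endpoint data model:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.original_file_name="ab.exe"
  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process
| `drop_dm_object_name(Processes)`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```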
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Apache Benchmark Binary”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.385",
              "n": "Windows App Layer Protocol Qakbot NamedPipe",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious process creating or connecting to a potential Qakbot named pipe. It leverages Sysmon EventCodes 17 and 18, focusing on specific processes known to be abused by Qakbot and identifying randomly generated named pipes in GUID form. This activity is significant as Qakbot malware uses named pipes for inter-process communication after code injection, facilitating data theft. If confirmed malicious, this behavior could indicate a Qakbot infection, leading to u…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 17, Sysmon EventID 18",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, pipename, processguid and named pipe event type from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://strontic.github.io/xcyclopedia/library/wermgr.exe-0F652BF7ADA772981E8AAB0D108FCC92.html, https://www.trellix.com/en-us/about/newsroom/stories/research/demystifying-qbot-malware.html, https://www.elastic.co/security-labs/qbot-malware-analysis",
              "mitre": [
                "T1071"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows App Layer Protocol Qakbot NamedPipe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 17, Sysmon EventID 18. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows App Layer Protocol Qakbot NamedPipe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
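Qakbot's named pipes are randomly generated GUIDs, so the underlying logic pairs commonly injected host processes with GUID-shaped pipe names. A minimal sketch of that idea, assuming a `sysmon` macro; the process list is illustrative, not the detection's exhaustive set:

```spl
`sysmon` EventCode IN (17, 18)
  Image IN ("*\\wermgr.exe", "*\\OneDriveSetup.exe", "*\\mobsync.exe", "*\\msra.exe")
| regex PipeName="\{[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}\}"
| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, PipeName, EventCode
```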
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows App Layer Protocol Qakbot NamedPipe”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.386",
              "n": "Windows App Layer Protocol Wermgr Connect To NamedPipe",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the wermgr.exe process creating or connecting to a named pipe. It leverages Sysmon EventCodes 17 and 18 to identify these actions. This activity is significant because wermgr.exe, a legitimate Windows OS Problem Reporting application, is often abused by malware such as Trickbot and Qakbot to execute malicious code. If confirmed malicious, this behavior could indicate that an attacker has injected code into wermgr.exe, potentially allowing them to communicate covert…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 17, Sysmon EventID 18",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, pipename, processguid and named pipe event type from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://strontic.github.io/xcyclopedia/library/wermgr.exe-0F652BF7ADA772981E8AAB0D108FCC92.html, https://www.trellix.com/en-us/about/newsroom/stories/research/demystifying-qbot-malware.html",
              "mitre": [
                "T1071"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows App Layer Protocol Wermgr Connect To NamedPipe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 17, Sysmon EventID 18. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows App Layer Protocol Wermgr Connect To NamedPipe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
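Because this detection depends on pipe-event fields being extracted (hence the Sysmon TA 6.0.4+ requirement), confirm those fields are populated before enabling the Correlation Search. A quick check, assuming a `sysmon` macro:

```spl
`sysmon` EventCode IN (17, 18) Image="*\\wermgr.exe"
| table _time, host, Image, PipeName, EventType, ProcessGuid
```

Empty `PipeName` or `ProcessGuid` columns usually point to an outdated Sysmon TA or a Sysmon configuration that does not log PipeEvent.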
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows App Layer Protocol Wermgr Connect To NamedPipe”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.387",
              "n": "Windows Application Layer Protocol RMS Radmin Tool Namedpipe",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of default or publicly known named pipes associated with the RMX remote admin tool. It leverages Sysmon EventCodes 17 and 18 to identify named pipe creation and connection events. This activity is significant as the RMX tool has been abused by adversaries and malware like Azorult to collect data from targeted hosts. If confirmed malicious, this could indicate unauthorized remote administration capabilities, leading to data exfiltration or further compromise…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 17, Sysmon EventID 18",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present. Filter based on pipe name or process.",
              "refs": "https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/, https://attack.mitre.org/techniques/T1071/",
              "mitre": [
                "T1071"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Application Layer Protocol RMS Radmin Tool Namedpipe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 17, Sysmon EventID 18. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Application Layer Protocol RMS Radmin Tool Namedpipe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present. Filter based on pipe name or process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
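The tuning note above suggests filtering on pipe name or process. One way to do that is a suppression lookup layered onto the results; `approved_named_pipes` below is a hypothetical CSV lookup you would create yourself (Settings → Lookups) with a `pipe_name` column listing sanctioned pipes:

```spl
`sysmon` EventCode IN (17, 18)
| lookup approved_named_pipes pipe_name AS PipeName OUTPUT pipe_name AS approved
| where isnull(approved)
| stats count by host, Image, PipeName
```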
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Application Layer Protocol RMS Radmin Tool Namedpipe”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.388",
              "n": "Windows AppLocker Execution from Uncommon Locations",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of applications or scripts from uncommon or suspicious file paths, potentially indicating malware or unauthorized activity. It leverages Windows AppLocker event logs and uses statistical analysis to detect anomalies. By calculating the average and standard deviation of execution counts per file path, it flags paths with execution counts significantly higher than expected. This behavior is significant as it can uncover malicious activities or policy…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`applocker`\n      | spath input=UserData_Xml\n      | rename RuleAndFileData.* as *, Computer as dest, TargetUser AS user\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest, PolicyName, RuleId,\n           user, TargetProcessId, FilePath,\n           FullFilePath\n      | eventstats avg(count) as avg, stdev(count) as stdev\n      | eval upperBound=(avg+stdev*2), anomaly=if(count > upperBound, \"Yes\", \"No\")\n      | where anomaly=\"Yes\"\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_applocker_execution_from_uncommon_locations_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible if legitimate users are executing applications from file paths that are not permitted by AppLocker. It is recommended to investigate the context of the application execution to determine if it is malicious or not. Modify the threshold as needed to reduce false positives.",
              "refs": "https://learn.microsoft.com/en-us/windows/security/application-security/application-control/windows-defender-application-control/operations/querying-application-control-events-centrally-using-advanced-hunting, https://learn.microsoft.com/en-us/windows/security/application-security/application-control/windows-defender-application-control/applocker/using-event-viewer-with-applocker",
              "mitre": [
                "T1218"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AppLocker Execution from Uncommon Locations\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AppLocker Execution from Uncommon Locations\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible if legitimate users are executing applications from file paths that are not permitted by AppLocker. It is recommended to investigate the context of the application execution to determine if it is malicious or not. Modify the threshold as needed to reduce false positives.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows AppLocker Execution from Uncommon Locations”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.389",
              "n": "Windows Attempt To Stop Security Service",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to stop security-related services on an endpoint, which may indicate malicious activity. It leverages data from Endpoint Detection and Response (EDR) agents, specifically searching for processes involving the \"sc.exe\" or \"net.exe\" command with the \"stop\" parameter or the PowerShell \"Stop-Service\" cmdlet. This activity is significant because disabling security services can undermine the organization's security posture, potentially leading to unauthorized ac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. should be identified and understood.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1562.001/T1562.001.md#atomic-test-14---disable-arbitrary-security-windows-service, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Attempt To Stop Security Service\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Attempt To Stop Security Service\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. should be identified and understood.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
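\n\nIllustrative Detection Logic (Sketch)\n\nThe shipped ESCU search is authoritative; copy it from Content Management rather than from here. Purely to illustrate the shape this family of process-based detections usually takes, below is a minimal, hypothetical sketch over the Endpoint.Processes data model. The macros (`security_content_summariesonly`, `security_content_ctime`, `drop_dm_object_name`) are standard ESCU conventions; the process names and the generic \"stop\" pattern are illustrative assumptions (the shipped rule likely matches specific security service names), not the shipped logic:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name IN (\"net.exe\", \"net1.exe\", \"sc.exe\") Processes.process=\"* stop *\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nIf a sketch like this returns rows while the shipped detection stays silent, compare the filters against the version in Content Management before assuming a data-onboarding problem.",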
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Attempt To Stop Security Service”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.390",
              "n": "Windows Audit Policy Auditing Option Disabled via Auditpol",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `auditpol.exe` with the \"/set\", \"/option\" and \"/value:disable\" command-line arguments used to disable specific auditing options of the audit policy. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity can be significant as it indicates potential defense evasion by adversaries or Red Teams, aiming to limit data that can be leveraged for detections and audits. If…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. and understood.",
              "refs": "https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-gpac/262a2bed-93d4-4c04-abec-cf06e9ec72fd, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/auditpol-set",
              "mitre": [
                "T1562.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Audit Policy Auditing Option Disabled via Auditpol\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Audit Policy Auditing Option Disabled via Auditpol\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. and understood.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
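\n\nIllustrative Detection Logic (Sketch)\n\nThe shipped ESCU search is authoritative; copy it from Content Management rather than from here. As a rough illustration of the usual shape, below is a minimal, hypothetical sketch over the Endpoint.Processes data model. The macros (`security_content_summariesonly`, `security_content_ctime`, `drop_dm_object_name`) are standard ESCU conventions; the command-line filters paraphrase the \"/set\", \"/option\" and \"/value:disable\" arguments described above and are assumptions, not the shipped logic:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=auditpol.exe Processes.process=\"*/set*\" Processes.process=\"*/option:*\" Processes.process=\"*/value:disable*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nIf this sketch returns rows while the shipped detection stays silent, compare the filters against the version in Content Management before assuming a data-onboarding problem.",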
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Audit Policy Auditing Option Disabled via Auditpol”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.391",
              "n": "Windows Audit Policy Auditing Option Modified - Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potentially suspicious modifications to the Audit Policy auditing options registry values. It leverages data from the Endpoint.Registry data model, focusing on changes to one of the following auditing option values \"CrashOnAuditFail\", \"FullPrivilegeAuditing\", \"AuditBaseObjects\" and \"AuditBaseDirectories\" within the \"HKLM\\\\System\\\\CurrentControlSet\\\\Control\\\\Lsa\\\\\" registry key. This activity is significant as it could be a sign of a threat actor trying to tamper wi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Active setup installer may add or modify this registry.",
              "refs": "https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-gpac/262a2bed-93d4-4c04-abec-cf06e9ec72fd, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/auditpol-set",
              "mitre": [
                "T1547.014"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Audit Policy Auditing Option Modified - Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.014. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Audit Policy Auditing Option Modified - Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Active setup installer may add or modify this registry.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
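\n\nIllustrative Detection Logic (Sketch)\n\nThe shipped ESCU search is authoritative; copy it from Content Management rather than from here. As a rough illustration of the usual shape for registry-based detections, below is a minimal, hypothetical sketch over the Endpoint.Registry data model. The macros (`security_content_summariesonly`, `security_content_ctime`, `drop_dm_object_name`) are standard ESCU conventions; the path and value-name filters paraphrase the registry locations described above and are assumptions, not the shipped logic:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\System\\\\CurrentControlSet\\\\Control\\\\Lsa\\\\*\"\n    Registry.registry_value_name IN (\"CrashOnAuditFail\", \"FullPrivilegeAuditing\", \"AuditBaseObjects\", \"AuditBaseDirectories\")\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nIf this sketch returns rows while the shipped detection stays silent, compare the filters against the version in Content Management before assuming a data-onboarding problem.",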
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Audit Policy Auditing Option Modified - Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.392",
              "n": "Windows Audit Policy Cleared via Auditpol",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `auditpol.exe` with the \"/clear\" command-line argument used to clears the audit policy. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity can be significant as it indicates potential defense evasion by adversaries or Red Teams, aiming to limit data that can be leveraged for detections and audits. If confirmed malicious, this behavior could allow attackers to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. and understood.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2021/01/20/deep-dive-into-the-solorigate-second-stage-activation-from-sunburst-to-teardrop-and-raindrop/, https://www.cybereason.com/blog/research/prometei-botnet-exploiting-microsoft-exchange-vulnerabilities, https://attack.mitre.org/techniques/T1562/002/, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/auditpol-clear, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/auditpol-remove",
              "mitre": [
                "T1562.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Audit Policy Cleared via Auditpol\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Audit Policy Cleared via Auditpol\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. and understood.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
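\n\nIllustrative Detection Logic (Sketch)\n\nThe shipped ESCU search is authoritative; copy it from Content Management rather than from here. As a rough illustration of the usual shape, below is a minimal, hypothetical sketch over the Endpoint.Processes data model. The macros (`security_content_summariesonly`, `security_content_ctime`, `drop_dm_object_name`) are standard ESCU conventions; the command-line filter paraphrases the \"/clear\" argument described above and is an assumption, not the shipped logic:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=auditpol.exe Processes.process=\"*/clear*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nIf this sketch returns rows while the shipped detection stays silent, compare the filters against the version in Content Management before assuming a data-onboarding problem.",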
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Audit Policy Cleared via Auditpol”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.393",
              "n": "Windows Audit Policy Disabled via Auditpol",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `auditpol.exe` with the \"/set\" command-line argument in order to disable a specific category or sub-category from the audit policy. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity can be significant as it indicates potential defense evasion by adversaries or Red Teams, aiming to limit data that can be leveraged for detections and audits. If confirmed malici…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be rare, investigate the activity, and apply additional filters when necessary.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2021/01/20/deep-dive-into-the-solorigate-second-stage-activation-from-sunburst-to-teardrop-and-raindrop/, https://www.cybereason.com/blog/research/prometei-botnet-exploiting-microsoft-exchange-vulnerabilities, https://attack.mitre.org/techniques/T1562/002/, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/auditpol-set",
              "mitre": [
                "T1562.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Audit Policy Disabled via Auditpol\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Audit Policy Disabled via Auditpol\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be rare, investigate the activity, and apply additional filters when necessary.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
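\n\nIllustrative Detection Logic (Sketch)\n\nThe shipped ESCU search is authoritative; copy it from Content Management rather than from here. As a rough illustration of the usual shape, below is a minimal, hypothetical sketch over the Endpoint.Processes data model. The macros (`security_content_summariesonly`, `security_content_ctime`, `drop_dm_object_name`) are standard ESCU conventions; the command-line filters paraphrase the \"/set\" argument with a disable option as described above and are assumptions, not the shipped logic:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=auditpol.exe Processes.process=\"*/set*\" Processes.process=\"*disable*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nIf this sketch returns rows while the shipped detection stays silent, compare the filters against the version in Content Management before assuming a data-onboarding problem.",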
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Audit Policy Disabled via Auditpol”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.394",
              "n": "Windows Audit Policy Disabled via Legacy Auditpol",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of the legacy `auditpol.exe` included with the Windows 2000 Resource Kit Tools, with the \"/disable\" command-line argument or one of the allowed category flags and the \"none\" option, in order to disable a specific logging category from the audit policy. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity can be significant as it indicates potential defense evasion …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be rare, investigate the activity, and apply additional filters when necessary.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2021/01/20/deep-dive-into-the-solorigate-second-stage-activation-from-sunburst-to-teardrop-and-raindrop/, https://www.cybereason.com/blog/research/prometei-botnet-exploiting-microsoft-exchange-vulnerabilities, https://attack.mitre.org/techniques/T1562/002/, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/auditpol-set",
              "mitre": [
                "T1562.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Audit Policy Disabled via Legacy Auditpol\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Audit Policy Disabled via Legacy Auditpol\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be rare, investigate the activity, and apply additional filters when necessary.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
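\n\nIllustrative Detection Logic (Sketch)\n\nThe shipped ESCU search is authoritative; copy it from Content Management rather than from here. As a rough illustration of the usual shape, below is a minimal, hypothetical sketch over the Endpoint.Processes data model. The macros (`security_content_summariesonly`, `security_content_ctime`, `drop_dm_object_name`) are standard ESCU conventions. Note that the legacy Resource Kit binary accepts \"/disable\" or per-category \"none\" options, so the real filters vary; the single \"/disable\" filter here is an assumption, not the shipped logic:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=auditpol.exe Processes.process=\"*/disable*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nIf this sketch returns rows while the shipped detection stays silent, compare the filters against the version in Content Management before assuming a data-onboarding problem.",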
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Audit Policy Disabled via Legacy Auditpol”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.395",
              "n": "Windows Audit Policy Excluded Category via Auditpol",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `auditpol.exe` with the \"/set\" and \"/exclude\" command-line arguments which indicates that the user's per-user policy will cause audit to be suppressed regardless of the system audit policy. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity can be significant as it indicates potential defense evasion by adversaries or Red Teams, aiming to exclude specific user…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be rare, investigate the activity, and apply additional filters when necessary.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2021/01/20/deep-dive-into-the-solorigate-second-stage-activation-from-sunburst-to-teardrop-and-raindrop/, https://www.cybereason.com/blog/research/prometei-botnet-exploiting-microsoft-exchange-vulnerabilities, https://attack.mitre.org/techniques/T1562/002/, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/auditpol-set",
              "mitre": [
                "T1562.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Audit Policy Excluded Category via Auditpol\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Audit Policy Excluded Category via Auditpol\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be rare, investigate the activity, and apply additional filters when necessary.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
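\n\nIllustrative Detection Logic (Sketch)\n\nThe shipped ESCU search is authoritative; copy it from Content Management rather than from here. As a rough illustration of the usual shape, below is a minimal, hypothetical sketch over the Endpoint.Processes data model. The macros (`security_content_summariesonly`, `security_content_ctime`, `drop_dm_object_name`) are standard ESCU conventions; the command-line filters paraphrase the \"/set\" and \"/exclude\" arguments described above and are assumptions, not the shipped logic:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=auditpol.exe Processes.process=\"*/set*\" Processes.process=\"*/exclude*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nIf this sketch returns rows while the shipped detection stays silent, compare the filters against the version in Content Management before assuming a data-onboarding problem.",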
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Audit Policy Excluded Category via Auditpol”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.396",
              "n": "Windows Audit Policy Restored via Auditpol",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `auditpol.exe` with the \"/restore\" command-line argument used to restore the audit policy from a file. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity can be significant as it indicates potential defense evasion by adversaries or Red Teams, aiming to limit data that can be leveraged for detections and audits. Attackers can provide an audit policy file that …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives could arise from administrative activity such as audit policy setup. Apply additional filters to known scripts and parent processes performing this action where necessary.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2021/01/20/deep-dive-into-the-solorigate-second-stage-activation-from-sunburst-to-teardrop-and-raindrop/, https://www.cybereason.com/blog/research/prometei-botnet-exploiting-microsoft-exchange-vulnerabilities, https://attack.mitre.org/techniques/T1562/002/, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/auditpol-restore",
              "mitre": [
                "T1562.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Audit Policy Restored via Auditpol\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Audit Policy Restored via Auditpol\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives could arise from administrative activity such as audit policy setup. Apply additional filters to known scripts and parent processes performing this action where necessary.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
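\n\nIllustrative Detection Logic (Sketch)\n\nFor orientation only: the packaged ESCU detection behind this use case is shaped roughly like the tstats search below over the CIM Endpoint.Processes data model, keying on auditpol.exe with the \"/restore\" argument described in the analytic summary above. Treat it as a sketch, not the packaged logic; the search definition in Content Management is authoritative:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=auditpol.exe Processes.process=\"*/restore*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",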
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Audit Policy Restored via Auditpol”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.397",
              "n": "Windows Audit Policy Security Descriptor Tampering via Auditpol",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `auditpol.exe` with the \"/set\" flag, and \"/sd\" command-line arguments used to modify the security descriptor of the audit policy. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity can be significant as it indicates potential defense evasion by adversaries or Red Teams, aiming to limit data that can be leveraged for detections and audits. An attacker, can disa…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be rare to non existent. Any activity detected by this analytic should be investigated and approved or denied.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/auditpol-set",
              "mitre": [
                "T1562.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Audit Policy Security Descriptor Tampering via Auditpol\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Audit Policy Security Descriptor Tampering via Auditpol\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be rare to non existent. Any activity detected by this analytic should be investigated and approved or denied.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
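\n\nIllustrative Detection Logic (Sketch)\n\nAs with the related auditpol detections in this catalog, here is a rough sketch of the underlying logic over the CIM Endpoint.Processes data model, keying on the \"/set\" and \"/sd\" arguments named in the analytic summary above. The packaged search in Content Management is authoritative:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=auditpol.exe Processes.process=\"*/set*\" Processes.process=\"*/sd*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",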
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Audit Policy Security Descriptor Tampering via Auditpol”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.398",
              "n": "Windows Autostart Execution LSASS Driver Registry Modification",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to undocumented registry keys that allow a DLL to load into lsass.exe, potentially capturing credentials. It leverages the Endpoint.Registry data model to identify changes to \\CurrentControlSet\\Services\\NTDS\\DirectoryServiceExtPt or \\CurrentControlSet\\Services\\NTDS\\LsaDbExtPt. This activity is significant as it indicates a possible attempt to inject malicious code into the Local Security Authority Subsystem Service (LSASS), which can lead to credentia…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present on recent Windows Operating Systems. Filtering may be required based on process_name. In addition, look for non-standard, unsigned, module loads into LSASS. If query is too noisy, modify by adding Endpoint.processes process_name to query to identify the process making the modification.",
              "refs": "https://blog.xpnsec.com/exploring-mimikatz-part-1/, https://github.com/oxfemale/LogonCredentialsSteal/tree/master/lsass_lib",
              "mitre": [
                "T1547.008"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Autostart Execution LSASS Driver Registry Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Autostart Execution LSASS Driver Registry Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present on recent Windows Operating Systems. Filtering may be required based on process_name. In addition, look for non-standard, unsigned, module loads into LSASS. If query is too noisy, modify by adding Endpoint.processes process_name to query to identify the process making the modification.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
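\n\nIllustrative Detection Logic (Sketch)\n\nA rough sketch of the underlying registry-focused logic over the CIM Endpoint.Registry data model, keying on the two NTDS extension-point value names from the analytic summary above. The wildcards are deliberately loose; tighten them to the full CurrentControlSet service-key paths in production, and defer to the packaged search in Content Management:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path IN (\"*DirectoryServiceExtPt*\", \"*LsaDbExtPt*\")\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",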
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Autostart Execution LSASS Driver Registry Modification”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.399",
              "n": "Windows Binary Proxy Execution Mavinject DLL Injection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of mavinject.exe for DLL injection into running processes, identified by specific command-line parameters such as /INJECTRUNNING and /HMODULE. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant because it indicates potential arbitrary code execution, a common tactic for malware deployment and persistence. If confirmed malicious, this could allow…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter on DLL name or parent process.",
              "refs": "https://attack.mitre.org/techniques/T1218/013/, https://posts.specterops.io/mavinject-exe-functionality-deconstructed-c29ab2cf5c0e, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218/T1218.md#atomic-test-1---mavinject---inject-dll-into-running-process",
              "mitre": [
                "T1218.013"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Binary Proxy Execution Mavinject DLL Injection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.013. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Binary Proxy Execution Mavinject DLL Injection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter on DLL name or parent process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
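\n\nIllustrative Detection Logic (Sketch)\n\nA rough sketch of the underlying process logic over the CIM Endpoint.Processes data model, keying on the /INJECTRUNNING and /HMODULE parameters named in the analytic summary above. Wildcards are loose on purpose; the packaged search in Content Management is authoritative:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=mavinject.exe Processes.process IN (\"*INJECTRUNNING*\", \"*HMODULE*\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",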
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Binary Proxy Execution Mavinject DLL Injection”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.400",
              "n": "Windows BitLocker Suspicious Command Usage",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic is developed to detect the usage of BitLocker commands used to disable or impact boot settings. The malware ShrinkLocker uses various commands change how BitLocker handles encryption, potentially bypassing TPM requirements, enabling BitLocker without TPM, and enforcing specific startup key and PIN configurations. Such modifications can weaken system security, making it easier for unauthorized access and data breaches. Detecting these changes is crucial for maintaining robust encryp…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Endpoint.Processes | search process_name = $process_name$ AND dest = \"$dest$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://attack.mitre.org/techniques/T1486/, https://www.nccgroup.com/us/research-blog/nameless-and-shameless-ransomware-encryption-via-bitlocker/, https://www.bitdefender.com/en-us/blog/businessinsights/shrinklocker-decryptor-from-friend-to-foe-and-back-again, https://www.bleepingcomputer.com/news/security/new-shrinklocker-ransomware-uses-bitlocker-to-encrypt-your-files/",
              "mitre": [
                "T1486",
                "T1490"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows BitLocker Suspicious Command Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1486, T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows BitLocker Suspicious Command Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
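\n\nIllustrative Detection Logic (Sketch)\n\nThe SPL field for this use case holds the Endpoint drilldown, so for orientation here is a rough sketch of what the underlying detection looks for: command lines that weaken BitLocker boot protections. The two FVE policy value names used below (EnableBDEWithNoTPM, UseAdvancedStartup) are assumptions drawn from the ShrinkLocker write-ups cited in the references; the packaged search in Content Management enumerates the authoritative patterns:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process IN (\"*EnableBDEWithNoTPM*\", \"*UseAdvancedStartup*\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",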
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows BitLocker Suspicious Command Usage”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.401",
              "n": "Windows BitLockerToGo Process Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects BitLockerToGo.exe execution, which has been observed being abused by Lumma stealer malware. The malware leverages this legitimate Windows utility to manipulate registry keys, search for cryptocurrency wallets and credentials, and exfiltrate sensitive data. This activity is significant because BitLockerToGo.exe provides functionality for viewing, copying, and writing files as well as modifying registry branches - capabilities that the Lumma stealer exploits. However…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name=bitlockertogo.exe\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_bitlockertogo_process_execution_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are likely, as BitLockerToGo.exe is a legitimate Windows utility used for managing BitLocker encryption. However, monitor for usage of BitLockerToGo.exe in your environment, tune as needed. If BitLockerToGo.exe is not used in your environment, move to TTP.",
              "refs": "https://securelist.com/fake-captcha-delivers-lumma-amadey/114312/",
              "mitre": [
                "T1218"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows BitLockerToGo Process Execution\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows BitLockerToGo Process Execution\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are likely, as BitLockerToGo.exe is a legitimate Windows utility used for managing BitLocker encryption. However, monitor for usage of BitLockerToGo.exe in your environment, tune as needed. If BitLockerToGo.exe is not used in your environment, move to TTP.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows BitLockerToGo Process Execution”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.402",
              "n": "Windows BitLockerToGo with Network Activity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious usage of BitLockerToGo.exe, which has been observed being abused by Lumma stealer malware. The malware leverages this legitimate Windows utility to manipulate registry keys, search for cryptocurrency wallets and credentials, and exfiltrate sensitive data. This activity is significant because BitLockerToGo.exe provides functionality for viewing, copying, and writing files as well as modifying registry branches - capabilities that the Lumma stealer exploit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "`sysmon` EventCode=22 process_name=\"bitlockertogo.exe\"\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY answer answer_count dvc\n           process_exec process_guid process_name\n           query query_count reply_code_id\n           signature signature_id src\n           user_id vendor_product QueryName\n           QueryResults QueryStatus\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_bitlockertogo_with_network_activity_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are likely, as BitLockerToGo.exe is a legitimate Windows utility used for managing BitLocker encryption. However, the detection is designed to flag unusual execution patterns that deviate from standard usage. Filtering may be required to reduce false positives, once confirmed - move to TTP.",
              "refs": "https://any.run/report/5e9ba24639f70787e56f10a241271ae819ef9c573edb22b9eeade7cb40a2df2a/66f16c7b-2cfc-40c5-91cc-f1cbe9743fa3, https://securelist.com/fake-captcha-delivers-lumma-amadey/114312/",
              "mitre": [
                "T1218"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows BitLockerToGo with Network Activity\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows BitLockerToGo with Network Activity\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are likely, as BitLockerToGo.exe is a legitimate Windows utility used for managing BitLocker encryption. However, the detection is designed to flag unusual execution patterns that deviate from standard usage. Filtering may be required to reduce false positives, once confirmed - move to TTP.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows BitLockerToGo with Network Activity”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.403",
              "n": "Windows Boot or Logon Autostart Execution In Startup Folder",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of files in the Windows %startup% folder, a common persistence technique. It leverages the Endpoint.Filesystem data model to identify file creation events in this specific directory. This activity is significant because adversaries often use the startup folder to ensure their malicious code executes automatically upon system boot or user logon. If confirmed malicious, this could allow attackers to maintain persistence on the host, potentially leading t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the Filesystem responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may allow creation of script or exe in this path.",
              "refs": "https://attack.mitre.org/techniques/T1204/002/, https://www.fortinet.com/blog/threat-research/chaos-ransomware-variant-sides-with-russia",
              "mitre": [
                "T1547.001"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Boot or Logon Autostart Execution In Startup Folder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Boot or Logon Autostart Execution In Startup Folder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may allow creation of script or exe in this path.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Boot or Logon Autostart Execution In Startup Folder”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.404",
              "n": "Windows BootLoader Inventory",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the bootloader paths on Windows endpoints. It leverages a PowerShell Scripted input to capture this data, which is then processed and aggregated using Splunk. Monitoring bootloader paths is significant for a SOC as it helps detect unauthorized modifications that could indicate bootkits or other persistent threats. If confirmed malicious, such activity could allow attackers to maintain persistence, bypass security controls, and potentially control the boot proces…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`bootloader_inventory`\n      | stats count min(_time) as firstTime max(_time) as lastTime values(_raw)\n        BY host\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_bootloader_inventory_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives here, only bootloaders. Filter as needed or create a lookup as a baseline.",
              "refs": "https://gist.github.com/MHaggis/26518cd2844b0e03de6126660bb45707, https://www.microsoft.com/en-us/security/blog/2023/04/11/guidance-for-investigating-attacks-using-cve-2022-21894-the-blacklotus-campaign/",
              "mitre": [
                "T1542.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows BootLoader Inventory\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1542.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows BootLoader Inventory\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives here, only bootloaders. Filter as needed or create a lookup as a baseline.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows BootLoader Inventory”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.405",
              "n": "Windows Bypass UAC via Pkgmgr Tool",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the deprecated 'pkgmgr.exe' process with an XML input file, which is unusual and potentially suspicious. This detection leverages Endpoint Detection and Response (EDR) telemetry, focusing on process execution details and command-line arguments. The significance lies in the deprecated status of 'pkgmgr.exe' and the use of XML files, which could indicate an attempt to bypass User Account Control (UAC). If confirmed malicious, this activity could allo…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present on recent Windows Operating Systems. Filtering may be required based on process_name. In addition, look for non-standard, unsigned, module loads into LSASS. If query is too noisy, modify by adding Endpoint.processes process_name to query to identify the process making the modification.",
              "refs": "https://asec.ahnlab.com/en/17692/, )%20is%20a%20remote%20access%20trojan, is%20as%20an%20information%20stealer.",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Bypass UAC via Pkgmgr Tool\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Bypass UAC via Pkgmgr Tool\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present on recent Windows Operating Systems. Filtering may be required based on process_name. In addition, look for non-standard, unsigned, module loads into LSASS. If query is too noisy, modify by adding Endpoint.processes process_name to query to identify the process making the modification.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Bypass UAC via Pkgmgr Tool”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.406",
              "n": "Windows CAB File on Disk",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects .cab files being written to disk. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on events where the file name is '*.cab' and the action is 'write'. This activity is significant as .cab files can be used to deliver malicious payloads, including embedded .url files that execute harmful code. If confirmed malicious, this behavior could lead to unauthorized code execution and potential system compromise. Analysts should review the file p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will only be present if a process legitimately writes a .cab file to disk. Modify the analytic as needed by file path. Filter as needed.",
              "refs": "https://github.com/PaloAltoNetworks/Unit42-timely-threat-intel/blob/main/2023-10-25-IOCs-from-DarkGate-activity.txt",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows CAB File on Disk\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows CAB File on Disk\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will only be present if a process legitimately writes a .cab file to disk. Modify the analytic as needed by file path. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows CAB File on Disk”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.407",
              "n": "Windows Cached Domain Credentials Reg Query",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a process command line querying the CachedLogonsCount registry value in the Winlogon registry. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and registry queries. Monitoring this activity is significant as it can indicate the use of post-exploitation tools like Winpeas, which gather information about login caching settings. If confirmed malicious, this activity could help attackers understand…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/, https://learn.microsoft.com/de-de/troubleshoot/windows-server/user-profiles-and-logon/cached-domain-logon-information, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS",
              "mitre": [
                "T1003.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Cached Domain Credentials Reg Query\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Cached Domain Credentials Reg Query\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Cached Domain Credentials Reg Query”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.408",
              "n": "Windows Chromium Browser Launched with Small Window Size",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where a Chromium-based browser process, including Chrome, Edge, Brave, Opera, or Vivaldi, is launched with an unusually small window size, typically less than 100 pixels in width or height. Such configurations render the browser effectively invisible to the user and are uncommon in normal user activity. When observed on endpoints, especially in combination with automation, off-screen positioning, or suppression flags, this behavior may indicate attempts t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.trendmicro.com/en_us/research/26/a/analysis-of-the-evelyn-stealer-campaign.html, https://peter.sh/experiments/chromium-command-line-switches/",
              "mitre": [
                "T1497"
              ],
              "dtype": "parent_process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Chromium Browser Launched with Small Window Size\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1497. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Chromium Browser Launched with Small Window Size\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Chromium Browser Launched with Small Window Size”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.409",
              "n": "Windows Chromium process Launched with Disable Popup Blocking",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where a Windows Chromium-based browser process is launched with the `--disable-popup-blocking` flag. This flag is typically used to bypass the browser’s built-in pop-up protections, allowing automatic execution of pop-ups or redirects without user interaction. While legitimate in some testing or automation scenarios, its presence on endpoints, particularly when combined with other automation or concealment flags, may indicate attempts by malicious actors …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature for framework testing that may cause some false positive.",
              "refs": "https://www.trendmicro.com/en_us/research/26/a/analysis-of-the-evelyn-stealer-campaign.html, https://peter.sh/experiments/chromium-command-line-switches/",
              "mitre": [
                "T1497"
              ],
              "dtype": "parent_process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Chromium process Launched with Disable Popup Blocking\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1497. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Chromium process Launched with Disable Popup Blocking\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature for framework testing that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Chromium process Launched with Disable Popup Blocking”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.410",
              "n": "Windows Chromium Process with Disabled Extensions",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances of Chromium-based browser processes on Windows launched with extensions explicitly disabled via command-line arguments. Disabling extensions can be used by automation frameworks, testing tools, or headless browser activity, but may also indicate defense evasion or abuse of browser functionality by malicious scripts or malware. This behavior reduces browser visibility and bypasses user-installed security extensions, making it relevant for detecting non-int…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature for framework testing that may cause some false positive.",
              "refs": "https://www.trendmicro.com/en_us/research/26/a/analysis-of-the-evelyn-stealer-campaign.html, https://peter.sh/experiments/chromium-command-line-switches/",
              "mitre": [
                "T1497"
              ],
              "dtype": "parent_process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Chromium Process with Disabled Extensions\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1497. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Chromium Process with Disabled Extensions\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature for framework testing that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
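\n\nTo check the primary endpoint feed more narrowly, a minimal sketch (it assumes Sysmon events arrive under the common `XmlWinEventLog` sourcetype used by the Splunk Add-on for Sysmon; rename the index and sourcetype to match your ingestion):\n\n```spl\nindex=* sourcetype=XmlWinEventLog EventCode=1\n| stats count by host\n| sort -count\n```",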
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Chromium Process with Disabled Extensions”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.411",
              "n": "Windows Cisco Secure Endpoint Related Service Stopped",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious termination of known services commonly targeted by ransomware before file encryption. It leverages Windows System Event Logs (EventCode 7036) to identify when critical services such as Volume Shadow Copy, backup, and antivirus services are stopped. This activity is significant because ransomware often disables these services to avoid errors and ensure successful file encryption. If confirmed malicious, this behavior could lead to widespread data encr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7036",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to service entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log System 7036 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or troubleshooting activities may trigger this alert. Investigate the process performing this action to determine if its a legitimate activity.",
              "refs": "https://krebsonsecurity.com/2021/05/a-closer-look-at-the-darkside-ransomware-gang/, https://www.mcafee.com/blogs/other-blogs/mcafee-labs/mcafee-atr-analyzes-sodinokibi-aka-revil-ransomware-as-a-service-what-the-code-tells-us/, https://news.sophos.com/en-us/2020/04/24/lockbit-ransomware-borrows-tricks-to-keep-up-with-revil-and-maze/, https://blogs.vmware.com/security/2022/10/lockbit-3-0-also-known-as-lockbit-black.html",
              "mitre": [
                "T1490"
              ],
              "dtype": "service",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Cisco Secure Endpoint Related Service Stopped\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to service entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7036. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Cisco Secure Endpoint Related Service Stopped\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or troubleshooting activities may trigger this alert. Investigate the process performing this action to determine if its a legitimate activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
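\n\nTo check the service-control feed more narrowly, a minimal sketch (it assumes System event logs arrive under the classic `WinEventLog` sourcetype from the Splunk Add-on for Microsoft Windows; rename the index and sourcetype to match your ingestion):\n\n```spl\nindex=* sourcetype=WinEventLog EventCode=7036\n| stats count by host\n| sort -count\n```",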
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Cisco Secure Endpoint Related Service Stopped”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.412",
              "n": "Windows Cisco Secure Endpoint Stop Immunet Service Via Sfc",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the `sfc.exe` utility, in order to stop the Immunet Protect service. The Sfc.exe utility is part of Cisco Secure Endpoint installation. This detection leverages telemetry from the endpoint, focusing on command-line executions involving the `-k` parameter. This activity is significant as it indicates potential tampering with defensive mechanisms. If confirmed malicious, attackers could partially blind the EDR, enabling further compromise and lateral movem…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that this action is executed during troubleshooting activity. Activity needs to be confirmed on a case by case basis.",
              "refs": "https://www.cisco.com/c/en/us/support/docs/security/amp-endpoints/213690-amp-for-endpoint-command-line-switches.html",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Cisco Secure Endpoint Stop Immunet Service Via Sfc\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Cisco Secure Endpoint Stop Immunet Service Via Sfc\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that this action is executed during troubleshooting activity. Activity needs to be confirmed on a case by case basis.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
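\n\nAs a rough approximation of the underlying detection logic (a hedged sketch only: the authoritative search is the packaged Correlation Search in Content Management, and this version assumes an accelerated, CIM-populated Endpoint data model):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"sfc.exe\" Processes.process=\"*-k*\"\n  by Processes.dest Processes.user Processes.process\n```",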
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Cisco Secure Endpoint Stop Immunet Service Via Sfc”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.413",
              "n": "Windows Cisco Secure Endpoint Unblock File Via Sfc",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the sfc.exe utility with the \"-unblock\" parameter, a feature within Cisco Secure Endpoint. The \"-unblock\" flag is used to remove system blocks imposed by the endpoint protection. This detection focuses on command-line activity that includes the \"-unblock\" parameter, as it may indicate an attempt to restore access to files or processes previously blocked by the security software. While this action could be legitimate in troubleshooting scenarios, maliciou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that this action is executed during troubleshooting activity. Activity needs to be confirmed on a case by case basis.",
              "refs": "https://www.cisco.com/c/en/us/support/docs/security/amp-endpoints/213690-amp-for-endpoint-command-line-switches.html",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Cisco Secure Endpoint Unblock File Via Sfc\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Cisco Secure Endpoint Unblock File Via Sfc\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that this action is executed during troubleshooting activity. Activity needs to be confirmed on a case by case basis.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
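\n\nAs a rough approximation of the underlying detection logic (a hedged sketch only: the authoritative search is the packaged Correlation Search in Content Management, and this version assumes an accelerated, CIM-populated Endpoint data model; adjust patterns and casing to your data):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"sfc.exe\" Processes.process=\"*-unblock*\"\n  by Processes.dest Processes.user Processes.process\n```",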
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Cisco Secure Endpoint Unblock File Via Sfc”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.414",
              "n": "Windows Cmdline Tool Execution From Non-Shell Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where `ipconfig.exe`, `systeminfo.exe`, or similar tools are executed by a non-standard shell parent process, excluding CMD, PowerShell, or Explorer. This detection leverages Endpoint Detection and Response (EDR) telemetry to monitor process creation events. Such behavior is significant as it may indicate adversaries using injected processes to perform system discovery, a tactic observed in FIN7's JSSLoader. If confirmed malicious, this activity could …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A network operator or systems administrator may utilize an automated host discovery application that may generate false positives. Filter as needed.",
              "refs": "https://www.mandiant.com/resources/fin7-pursuing-an-enigmatic-and-evasive-global-criminal-operation, https://attack.mitre.org/groups/G0046/, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1059.007"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Cmdline Tool Execution From Non-Shell Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Cmdline Tool Execution From Non-Shell Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A network operator or systems administrator may utilize an automated host discovery application that may generate false positives. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
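\n\nAs a rough approximation of the underlying detection logic (a hedged sketch only: the authoritative search is the packaged Correlation Search in Content Management, and this version assumes an accelerated, CIM-populated Endpoint data model; extend the tool and parent lists to fit your environment):\n\n```spl\n| tstats summariesonly=true count\n  from datamodel=Endpoint.Processes\n  where Processes.process_name IN (\"ipconfig.exe\", \"systeminfo.exe\")\n  NOT Processes.parent_process_name IN (\"cmd.exe\", \"powershell.exe\", \"explorer.exe\")\n  by Processes.dest Processes.parent_process_name Processes.process_name\n```",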
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Cmdline Tool Execution From Non-Shell Process”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.415",
              "n": "Windows COM Hijacking InprocServer32 Modification",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the InProcServer32 registry key by reg.exe, indicative of potential COM hijacking. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and command-line execution logs. COM hijacking is significant as it allows adversaries to insert malicious code that executes in place of legitimate software, providing a means for persistence. If confirmed malicious, this activity could enable attackers to execute…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present and some filtering may be required.",
              "refs": "https://attack.mitre.org/techniques/T1546/015/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1546.015/T1546.015.md",
              "mitre": [
                "T1546.015"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows COM Hijacking InprocServer32 Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.015. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows COM Hijacking InprocServer32 Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present and some filtering may be required.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
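\n\nAs a rough approximation of the underlying detection logic (a hedged sketch only: the authoritative search is the packaged Correlation Search in Content Management, and this version assumes an accelerated, CIM-populated Endpoint data model; adjust the key-path pattern and casing to your data):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"reg.exe\" Processes.process=\"*InprocServer32*\"\n  by Processes.dest Processes.user Processes.process\n```",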
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows COM Hijacking InprocServer32 Modification”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.416",
              "n": "Windows Command and Scripting Interpreter Path Traversal Exec",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects path traversal command-line execution, often used in malicious documents to execute code via msdt.exe for defense evasion. It leverages Endpoint Detection and Response (EDR) data, focusing on specific patterns in process paths. This activity is significant as it can indicate an attempt to bypass security controls and execute unauthorized code. If confirmed malicious, this behavior could lead to code execution, privilege escalation, or persistence within the environ…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Not known at this moment.",
              "refs": "https://app.any.run/tasks/713f05d2-fe78-4b9d-a744-f7c133e3fafb/",
              "mitre": [
                "T1059"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Command and Scripting Interpreter Path Traversal Exec\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Command and Scripting Interpreter Path Traversal Exec\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Not known at this moment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
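\n\nOptionally, confirm this specific detection is writing risk events (a minimal sketch: it assumes the ES default `risk` index and the standard ESCU correlation search naming of \"ESCU - <detection name> - Rule\"; adjust both if your deployment renames either):\n\n```spl\nindex=risk search_name=\"ESCU - Windows Command and Scripting Interpreter Path Traversal Exec - Rule\"\n| stats count min(_time) as firstTime max(_time) as lastTime by risk_object, risk_object_type\n```",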
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Command and Scripting Interpreter Path Traversal Exec”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.417",
              "n": "Windows Command Shell DCRat ForkBomb Payload",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a DCRat \"forkbomb\" payload, which spawns multiple cmd.exe processes that launch notepad.exe instances in quick succession. This detection leverages Endpoint Detection and Response (EDR) data, focusing on the rapid creation of cmd.exe and notepad.exe processes within a 30-second window. This activity is significant as it indicates a potential DCRat infection, a known Remote Access Trojan (RAT) with destructive capabilities. If confirmed malicious, t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://cert.gov.ua/article/405538, https://malpedia.caad.fkie.fraunhofer.de/details/win.dcrat, https://www.mandiant.com/resources/analyzing-dark-crystal-rat-backdoor",
              "mitre": [
                "T1059.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Command Shell DCRat ForkBomb Payload\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Command Shell DCRat ForkBomb Payload\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Command Shell DCRat ForkBomb Payload”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.418",
              "n": "Windows Compatibility Telemetry Suspicious Child Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of CompatTelRunner.exe with parameters indicative of a process not part of the normal \"Microsoft Compatibility Appraiser\" telemetry collection. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, parent processes, and command-line arguments. This activity is significant because CompatTelRunner.exe and the \"Microsoft Compatibility Appraiser\" task always run as System and can be used to elevate privileges or e…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4688, Sysmon EventID 1, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Endpoint.Processes | search dest = \"$dest$\" AND process_name = \"$process_name$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4688, Sysmon EventID 1, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1546/, https://scythe.io/threat-thursday/windows-telemetry-persistence, https://www.trustedsec.com/blog/abusing-windows-telemetry-for-persistence",
              "mitre": [
                "T1546",
                "T1053.005"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Compatibility Telemetry Suspicious Child Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4688, Sysmon EventID 1, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546, T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Compatibility Telemetry Suspicious Child Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Compatibility Telemetry Suspicious Child Process”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.419",
              "n": "Windows Compatibility Telemetry Tampering Through Registry",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies suspicious modifications to the Windows Compatibility Telemetry registry settings, specifically within the \"TelemetryController\" registry key and \"Command\" registry value. It leverages data from the Endpoint.Registry data model, focusing on registry paths and values indicative of such changes. This activity is significant because CompatTelRunner.exe and the \"Microsoft Compatibility Appraiser\" task always run as System and can be used to elevate privileges or establish a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Endpoint.Registry | search registry_path = \"*\\\\SOFTWARE\\\\Microsoft\\\\Windows NT\\\\CurrentVersion\\\\AppCompatFlags\\\\TelemetryController*\" AND dest = \"$dest$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 13 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1546/, https://scythe.io/threat-thursday/windows-telemetry-persistence, https://www.trustedsec.com/blog/abusing-windows-telemetry-for-persistence",
              "mitre": [
                "T1546",
                "T1053.005"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Compatibility Telemetry Tampering Through Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546, T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Compatibility Telemetry Tampering Through Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Compatibility Telemetry Tampering Through Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.420",
              "n": "Windows ConHost with Headless Argument",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the unusual invocation of the Windows Console Host process (conhost.exe) with the undocumented --headless parameter. This detection leverages Endpoint Detection and Response (EDR) telemetry, specifically monitoring for command-line executions where conhost.exe is executed with the --headless argument. This activity is significant for a SOC as it is not commonly used in legitimate operations and may indicate an attacker's attempt to execute commands stealthily. If c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if the application is legitimately used, filter by user or endpoint as needed.",
              "refs": "https://x.com/embee_research/status/1559410767564181504?s=20, https://x.com/GroupIB_TI/status/1719675754886131959?s=20",
              "mitre": [
                "T1564.003",
                "T1564.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows ConHost with Headless Argument\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1564.003, T1564.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows ConHost with Headless Argument\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if the application is legitimately used, filter by user or endpoint as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows ConHost with Headless Argument”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.421",
              "n": "Windows Create Local Administrator Account Via Net",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a local administrator account using the \"net.exe\" command. It leverages Endpoint Detection and Response (EDR) data to identify processes named \"net.exe\" with the \"/add\" parameter and keywords related to administrator accounts. This activity is significant as it may indicate an attacker attempting to gain persistent access or escalate privileges. If confirmed malicious, this could lead to unauthorized access, data theft, or further system compromise.…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators often leverage net.exe to create admin accounts.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1136.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Create Local Administrator Account Via Net\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Create Local Administrator Account Via Net\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators often leverage net.exe to create admin accounts.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Create Local Administrator Account Via Net”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.422",
              "n": "Windows Credentials from Password Stores Chrome Copied in TEMP Dir",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the copying of Chrome's Local State and Login Data files into temporary folders, a tactic often used by the Braodo stealer malware. These files contain encrypted user credentials, including saved passwords and login session details. The detection monitors for suspicious copying activity involving these specific Chrome files, particularly in temp directories where malware typically processes the stolen data. Identifying this behavior enables security teams to act qu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://x.com/suyog41/status/1825869470323056748, https://g0njxa.medium.com/from-vietnam-to-united-states-malware-fraud-and-dropshipping-98b7a7b2c36d",
              "mitre": [
                "T1555.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Credentials from Password Stores Chrome Copied in TEMP Dir\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1555.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Credentials from Password Stores Chrome Copied in TEMP Dir\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Credentials from Password Stores Chrome Copied in TEMP Dir”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.423",
              "n": "Windows Credentials from Password Stores Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Windows OS tool cmdkey.exe, which is used to create stored usernames, passwords, or credentials. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. This activity is significant because cmdkey.exe is often abused by post-exploitation tools and malware, such as Darkgate, to gain unauthorized access. If confirmed malicious, this behavior could allow attack…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network administrator can use this tool for auditing process.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.darkgate",
              "mitre": [
                "T1555"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Credentials from Password Stores Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1555. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Credentials from Password Stores Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network administrator can use this tool for auditing process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Credentials from Password Stores Creation”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.424",
              "n": "Windows Credentials from Password Stores Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Windows OS tool cmdkey.exe with the /delete parameter. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. The activity is significant because cmdkey.exe can be used by attackers to delete stored credentials, potentially leading to privilege escalation and persistence. If confirmed malicious, this behavior could allow attackers to remove stored user cred…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network administrator can use this tool for auditing process.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.darkgate",
              "mitre": [
                "T1555"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Credentials from Password Stores Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1555. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Credentials from Password Stores Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network administrator can use this tool for auditing process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Credentials from Password Stores Deletion”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.425",
              "n": "Windows Credentials from Password Stores Query",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Windows OS tool cmdkey.exe, which is often abused by post-exploitation tools like winpeas, commonly used in ransomware attacks to list stored usernames, passwords, or credentials. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. This activity is significant as it indicates potential credential harvesting, which can lead to privilege escalation and persistence. If confirmed mali…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network administrator can use this tool for auditing process.",
              "refs": "https://ss64.com/nt/cmdkey.html, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1555"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Credentials from Password Stores Query\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1555. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Credentials from Password Stores Query\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network administrator can use this tool for auditing process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Credentials from Password Stores Query”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.426",
              "n": "Windows Credentials from Web Browsers Saved in TEMP Folder",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of files containing passwords, cookies, and saved login account information by the Braodo stealer malware in temporary folders. Braodo often collects these credentials from browsers and applications, storing them in temp directories before exfiltration. This detection focuses on monitoring for the creation of files with patterns or formats commonly associated with stolen credentials. By identifying these activities, security teams can take needed actio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://x.com/suyog41/status/1825869470323056748, https://g0njxa.medium.com/from-vietnam-to-united-states-malware-fraud-and-dropshipping-98b7a7b2c36d",
              "mitre": [
                "T1555.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Credentials from Web Browsers Saved in TEMP Folder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1555.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Credentials from Web Browsers Saved in TEMP Folder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Credentials from Web Browsers Saved in TEMP Folder”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.427",
              "n": "Windows Credentials in Registry Reg Query",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies processes querying the registry for potential passwords or credentials. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions that access specific registry paths known to store sensitive information. This activity is significant as it may indicate credential theft attempts, often used by adversaries or post-exploitation tools like winPEAS. If confirmed malicious, this behavior could lead to privilege escalation,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1552/002/, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1552.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Credentials in Registry Reg Query\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Credentials in Registry Reg Query\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Credentials in Registry Reg Query”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.428",
              "n": "Windows Data Destruction Recursive Exec Files Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a suspicious process that is recursively deleting executable files on a compromised host. It leverages Sysmon Event Codes 23 and 26 to detect this activity by monitoring for a high volume of deletions or overwrites of files with extensions like .exe, .sys, and .dll. This behavior is significant as it is commonly associated with destructive malware such as CaddyWiper, DoubleZero, and SwiftSlicer, which aim to make file recovery impossible. If confirmed malicious,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 23, Sysmon EventID 26",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 23, Sysmon EventID 26 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The uninstallation of a large software application or the use of cleanmgr.exe may trigger this detection. A filter is necessary to reduce false positives.",
              "refs": "https://www.welivesecurity.com/2023/01/27/swiftslicer-new-destructive-wiper-malware-ukraine/",
              "mitre": [
                "T1485"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Data Destruction Recursive Exec Files Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 23, Sysmon EventID 26. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Data Destruction Recursive Exec Files Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The uninstallation of a large software application or the use of cleanmgr.exe may trigger this detection. A filter is necessary to reduce false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Data Destruction Recursive Exec Files Deletion”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.429",
              "n": "Windows Defacement Modify Transcodedwallpaper File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies modifications to the TranscodedWallpaper file in the wallpaper theme directory, excluding changes made by explorer.exe. This detection leverages the Endpoint.Processes and Endpoint.Filesystem data models to correlate process activity with file modifications. This activity is significant as it may indicate an adversary attempting to deface or change the desktop wallpaper of a targeted host, a tactic often used to signal compromise or deliver a message. If confirm…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "3rd part software application can change the wallpaper. Filter is needed.",
              "refs": "https://www.trendmicro.com/vinfo/us/threat-encyclopedia/malware/ransom_sifreli.a",
              "mitre": [
                "T1491"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Defacement Modify Transcodedwallpaper File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1491. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Defacement Modify Transcodedwallpaper File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: 3rd part software application can change the wallpaper. Filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Defacement Modify Transcodedwallpaper File”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.430",
              "n": "Windows Default Group Policy Object Modified",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to default Group Policy Objects (GPOs) using Event ID 5136. It monitors changes to the `Default Domain Controllers Policy` and `Default Domain Policy`, which are critical for enforcing security settings across domain controllers and all users/computers, respectively. This activity is significant because unauthorized changes to these GPOs can indicate an adversary with privileged access attempting to deploy persistence mechanisms or execute malware acr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5136 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The default Group Policy Objects within an AD network may be legitimately updated for administrative operations, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1484/, https://attack.mitre.org/techniques/T1484/001, https://www.trustedsec.com/blog/weaponizing-group-policy-objects-access/, https://adsecurity.org/?p=2716",
              "mitre": [
                "T1484.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Default Group Policy Object Modified\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1484.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Default Group Policy Object Modified\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The default Group Policy Objects within an AD network may be legitimately updated for administrative operations, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
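              "_spl_sketch": "Illustrative raw-event probe, not part of the ESCU content: a minimal sketch assuming Security event logs in index=wineventlog with the standard 5136 fields (ObjectClass, ObjectDN, SubjectUserName) extracted; rename the index and fields to match your ingest:\n\n```spl\nindex=wineventlog EventCode=5136 ObjectClass=groupPolicyContainer\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, ObjectDN, SubjectUserName\n| sort -count\n```",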
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Default Group Policy Object Modified”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.431",
              "n": "Windows Defender ASR Audit Events",
              "c": "high",
              "f": "intermediate",
              "v": "This detection searches for Windows Defender ASR audit events. ASR is a feature of Windows Defender Exploit Guard that prevents actions and apps that are typically used by exploit-seeking malware to infect machines. ASR rules are applied to processes and applications. When a process or application attempts to perform an action that is blocked by an ASR rule, an event is generated. This detection searches for ASR audit events that are generated when a process or application attempts to perform an…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Defender 1122, Windows Event Log Defender 1125, Windows Event Log Defender 1126, Windows Event Log Defender 1132, Windows Event Log Defender 1134",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Defender 1122, Windows Event Log Defender 1125, Windows Event Log Defender 1126, Windows Event Log Defender 1132, Windows Event Log Defender 1134 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are expected from legitimate applications generating events that are similar to those generated by malicious activity. For example, Event ID 1122 is generated when a process attempts to load a DLL that is blocked by an ASR rule. This can be triggered by legitimate applications that attempt to load DLLs that are not blocked by ASR rules. This is audit only.",
              "refs": "https://asrgen.streamlit.app/",
              "mitre": [
                "T1059",
                "T1566.001",
                "T1566.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Defender ASR Audit Events\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Defender 1122, Windows Event Log Defender 1125, Windows Event Log Defender 1126, Windows Event Log Defender 1132, Windows Event Log Defender 1134. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059, T1566.001, T1566.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Defender ASR Audit Events\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are expected from legitimate applications generating events that are similar to those generated by malicious activity. For example, Event ID 1122 is generated when a process attempts to load a DLL that is blocked by an ASR rule. This can be triggered by legitimate applications that attempt to load DLLs that are not blocked by ASR rules. This is audit only.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
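              "_spl_sketch": "Illustrative raw-event probe, not part of the ESCU content: a minimal sketch assuming the ESCU `ms_defender` macro resolves to your Defender Operational log feed; adjust if those events land elsewhere:\n\n```spl\n`ms_defender` EventCode IN (1122, 1125, 1126, 1132, 1134)\n| stats count min(_time) as firstTime max(_time) as lastTime by host, EventCode\n| sort -count\n```",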
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Defender ASR Audit Events”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.432",
              "n": "Windows Defender ASR Block Events",
              "c": "high",
              "f": "intermediate",
              "v": "This detection searches for Windows Defender ASR block events. ASR is a feature of Windows Defender Exploit Guard that prevents actions and apps that are typically used by exploit-seeking malware to infect machines. ASR rules are applied to processes and applications. When a process or application attempts to perform an action that is blocked by an ASR rule, an event is generated. This detection searches for ASR block events that are generated when a process or application attempts to perform an…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Defender 1121, Windows Event Log Defender 1126, Windows Event Log Defender 1129, Windows Event Log Defender 1131, Windows Event Log Defender 1133",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Defender 1121, Windows Event Log Defender 1126, Windows Event Log Defender 1129, Windows Event Log Defender 1131, Windows Event Log Defender 1133 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are expected from legitimate applications generating events that are similar to those generated by malicious activity. For example, Event ID 1122 is generated when a process attempts to load a DLL that is blocked by an ASR rule. This can be triggered by legitimate applications that attempt to load DLLs that are not blocked by ASR rules. This is block only.",
              "refs": "https://asrgen.streamlit.app/",
              "mitre": [
                "T1059",
                "T1566.001",
                "T1566.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Defender ASR Block Events\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Defender 1121, Windows Event Log Defender 1126, Windows Event Log Defender 1129, Windows Event Log Defender 1131, Windows Event Log Defender 1133. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059, T1566.001, T1566.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Defender ASR Block Events\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are expected from legitimate applications generating events that are similar to those generated by malicious activity. For example, Event ID 1122 is generated when a process attempts to load a DLL that is blocked by an ASR rule. This can be triggered by legitimate applications that attempt to load DLLs that are not blocked by ASR rules. This is block only.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
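              "_spl_sketch": "Illustrative raw-event probe, not part of the ESCU content: a minimal sketch assuming the ESCU `ms_defender` macro resolves to your Defender Operational log feed; adjust if those events land elsewhere:\n\n```spl\n`ms_defender` EventCode IN (1121, 1126, 1129, 1131, 1133)\n| stats count min(_time) as firstTime max(_time) as lastTime by host, EventCode\n| sort -count\n```",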
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Defender ASR Block Events”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.433",
              "n": "Windows Defender ASR Registry Modification",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to Windows Defender Attack Surface Reduction (ASR) registry settings. It leverages Windows Defender Operational logs, specifically EventCode 5007, to identify changes in ASR rules. This activity is significant because ASR rules are designed to block actions commonly used by malware to exploit systems. Unauthorized modifications to these settings could indicate an attempt to weaken system defenses. If confirmed malicious, this could allow an attacker t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Defender 5007",
              "q": "`ms_defender` EventCode IN (5007) | rex field=New_Value \"0x(?<New_Registry_Value>\\\\d+)$\" | rex field=Old_Value \"0x(?<Old_Registry_Value>\\\\d+)$\" | rex field=New_Value \"Rules\\\\\\\\(?<ASR_ID>[A-Fa-f0-9\\\\-]+)\\\\s*=\" | eval New_Registry_Value=case(New_Registry_Value==\"0\", \"Disabled\", New_Registry_Value==\"1\", \"Block\", New_Registry_Value==\"2\", \"Audit\", New_Registry_Value==\"6\", \"Warn\") | eval Old_Registry_Value=case(Old_Registry_Value==\"0\", \"Disabled\", Old_Registry_Value==\"1\", \"Block\", Old_Registry_Value==\"2\", \"Audit\", Old_Registry_Value==\"6\", \"Warn\") | stats count min(_time) as firstTime max(_time) as lastTime by host, New_Value, Old_Value, Old_Registry_Value, New_Registry_Value, ASR_ID | lookup asr_rules ID AS ASR_ID OUTPUT ASR_Rule | `security_content_ctime(firstTime)`| rename host as dest | `security_content_ctime(lastTime)` | `windows_defender_asr_registry_modification_filter`",
              "m": "The following analytic requires collection of Windows Defender Operational logs in either XML or multi-line. To collect, setup a new input for the Windows Defender Operational logs. In addition, it does require a lookup that maps the ID to ASR Rule name.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are expected from legitimate applications generating events that are similar to those generated by malicious activity. For example, Event ID 5007 is generated when a process attempts to modify a registry key that is related to ASR rules. This can be triggered by legitimate applications that attempt to modify registry keys that are not blocked by ASR rules.",
              "refs": "https://asrgen.streamlit.app/",
              "mitre": [
                "T1112"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Defender ASR Registry Modification\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Defender 5007. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Defender ASR Registry Modification\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are expected from legitimate applications generating events that are similar to those generated by malicious activity. For example, Event ID 5007 is generated when a process attempts to modify a registry key that is related to ASR rules. This can be triggered by legitimate applications that attempt to modify registry keys that are not blocked by ASR rules.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Defender ASR Registry Modification”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.434",
              "n": "Windows Defender ASR Rule Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when a Windows Defender ASR rule disabled events. ASR is a feature of Windows Defender Exploit Guard that prevents actions and apps that are typically used by exploit-seeking malware to infect machines. ASR rules are applied to processes and applications. When a process or application attempts to perform an action that is blocked by an ASR rule, an event is generated. This detection searches for ASR rule disabled events that are generated when an ASR rule is dis…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Defender 5007",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The following analytic requires collection of Windows Defender Operational logs in either XML or multi-line. To collect, setup a new input for the Windows Defender Operational logs. In addition, it does require a lookup that maps the ID to ASR Rule name.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur if applications are typically disabling ASR rules in the environment. Monitor for changes to ASR rules to determine if this is a false positive.",
              "refs": "https://asrgen.streamlit.app/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Defender ASR Rule Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Defender 5007. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Defender ASR Rule Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur if applications are typically disabling ASR rules in the environment. Monitor for changes to ASR rules to determine if this is a false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
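              "_spl_sketch": "Illustrative raw-event sketch, not part of the ESCU content: assumes the `ms_defender` macro and the Old_Value/New_Value fields shown in the related ASR registry detection; the exact value rendering varies, so review Old_Value against New_Value manually:\n\n```spl\n`ms_defender` EventCode=5007 New_Value=\"*ASR*Rules*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Old_Value, New_Value\n| sort -lastTime\n```",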
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Defender ASR Rule Disabled”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.435",
              "n": "Windows Defender Exclusion Registry Entry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows Defender exclusion registry entries. It leverages endpoint registry data to identify changes in the registry path \"*\\\\SOFTWARE\\\\Policies\\\\Microsoft\\\\Windows Defender\\\\Exclusions\\\\*\". This activity is significant because adversaries often modify these entries to bypass Windows Defender, allowing malicious code to execute without detection. If confirmed malicious, this behavior could enable attackers to evade antivirus defenses, maintain …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin or user may choose to use this windows features.",
              "refs": "https://tccontre.blogspot.com/2020/01/remcos-rat-evading-windows-defender-av.html, https://app.any.run/tasks/cf1245de-06a7-4366-8209-8e3006f2bfe5/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Defender Exclusion Registry Entry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Defender Exclusion Registry Entry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin or user may choose to use this windows features.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
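              "_spl_sketch": "Illustrative raw-event sketch, not part of the ESCU content: assumes Sysmon data in index=sysmon with the standard EventCode 13 fields (TargetObject, Image); rename the index to match your deployment:\n\n```spl\nindex=sysmon EventCode=13 TargetObject=\"*\\\\Windows Defender\\\\Exclusions\\\\*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, TargetObject\n```",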
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Defender Exclusion Registry Entry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.436",
              "n": "Windows Deleted Registry By A Non Critical Process File Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of registry keys by non-critical processes. It leverages Endpoint Detection and Response (EDR) data, focusing on registry deletion events and correlating them with processes not typically associated with system or program files. This activity is significant as it may indicate malware, such as the Double Zero wiper, attempting to evade defenses or cause destructive payload impacts. If confirmed malicious, this behavior could lead to significant system d…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 12",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection can catch for third party application updates or installation. In this scenario false positive filter is needed.",
              "refs": "https://blog.talosintelligence.com/2022/03/threat-advisory-doublezero.html",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Deleted Registry By A Non Critical Process File Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 12. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Deleted Registry By A Non Critical Process File Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection can catch for third party application updates or installation. In this scenario false positive filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
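              "_spl_sketch": "Illustrative raw-event sketch, not part of the ESCU content: assumes Sysmon data in index=sysmon; EventCode 12 carries registry create/delete events, and the Image filter below is only a rough stand-in for \"non-critical process path\"; tune both to your environment:\n\n```spl\nindex=sysmon EventCode=12 EventType=DeleteKey NOT Image IN (\"C:\\\\Windows\\\\*\", \"C:\\\\Program Files*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, TargetObject\n| sort -count\n```",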
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Deleted Registry By A Non Critical Process File Path”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.437",
              "n": "Windows Disable Change Password Through Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious registry modification that disables the Change Password feature on a Windows host. It identifies changes to the registry path \"*\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Policies\\\\System\\\\DisableChangePassword\" with a value of \"0x00000001\". This activity is significant as it can prevent users from changing their passwords, a tactic often used by ransomware to maintain control over compromised systems. If confirmed malicious, this could hinder use…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This windows feature may implemented by administrator to prevent normal user to change the password of a critical host or server, In this type of scenario filter is needed to minimized false positive.",
              "refs": "https://www.trendmicro.com/vinfo/us/threat-encyclopedia/malware/ransom_heartbleed.thdobah",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Disable Change Password Through Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Disable Change Password Through Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This windows feature may implemented by administrator to prevent normal user to change the password of a critical host or server, In this type of scenario filter is needed to minimized false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
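              "qs": "```Hedged sketch, not the packaged ESCU search (view that under Content Management): the DisableChangePassword value name, path, and 0x00000001 data are inferred from this detection's title and tuning notes; assumes ESCU macros and a CIM-accelerated Endpoint.Registry data model.``` | tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry where Registry.registry_path=\"*\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Policies\\\\System*\" Registry.registry_value_name=DisableChangePassword Registry.registry_value_data=\"0x00000001\" by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data | `drop_dm_object_name(Registry)` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",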
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Disable Change Password Through Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.438",
              "n": "Windows Disable Lock Workstation Feature Through Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious registry modification that disables the Lock Computer feature in Windows. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path \"*\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Policies\\\\System\\\\DisableLockWorkstation\" with a value of \"0x00000001\". This activity is significant because it prevents users from locking their screens, a tactic often used by malware, including ransomware, to maintain c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.bleepingcomputer.com/news/security/in-dev-ransomware-forces-you-do-to-survey-before-unlocking-computer/, https://heimdalsecurity.com/blog/fatalrat-targets-telegram/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Disable Lock Workstation Feature Through Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Disable Lock Workstation Feature Through Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Disable Lock Workstation Feature Through Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.439",
              "n": "Windows Disable Memory Crash Dump",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to disable the memory crash dump feature on Windows systems by setting the registry value to 0. It leverages data from the Endpoint.Registry datamodel, specifically monitoring changes to the CrashDumpEnabled registry key. This activity is significant because disabling crash dumps can hinder forensic analysis and incident response efforts. If confirmed malicious, this action could be part of a broader attack strategy, such as data destruction or system dest…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the Filesystem responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` and `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://blog.talosintelligence.com/2022/02/threat-advisory-hermeticwiper.html, https://learn.microsoft.com/en-us/troubleshoot/windows-server/performance/memory-dump-file-options",
              "mitre": [
                "T1485"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Disable Memory Crash Dump\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Disable Memory Crash Dump\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Disable Memory Crash Dump”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.440",
              "n": "Windows Disable Notification Center",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the Windows registry to disable the Notification Center on a host machine. It leverages data from the Endpoint.Registry data model, specifically looking for changes to the \"DisableNotificationCenter\" registry value set to \"0x00000001.\" This activity is significant because disabling the Notification Center can be a tactic used by RAT malware to hide its presence and subsequent actions. If confirmed malicious, this could allow an attacker to opera…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 13 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin or user may choose to disable this windows features.",
              "refs": "https://tccontre.blogspot.com/2020/01/remcos-rat-evading-windows-defender-av.html",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Disable Notification Center\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Disable Notification Center\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin or user may choose to disable this windows features.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Disable Notification Center”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.441",
              "n": "Windows Disable or Modify Tools Via Taskkill",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of taskkill.exe to forcibly terminate processes. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions that include specific taskkill parameters. This activity is significant because it can indicate attempts to disable security tools or disrupt legitimate applications, a common tactic in malware operations. If confirmed malicious, this behavior could allow attackers to evade detection, disrupt system sta…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Network administrator can use this application to kill process during audit or investigation.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.njrat",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Disable or Modify Tools Via Taskkill\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Disable or Modify Tools Via Taskkill\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Network administrator can use this application to kill process during audit or investigation.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Disable or Modify Tools Via Taskkill”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.442",
              "n": "Windows Disable or Stop Browser Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the taskkill command in a process command line to terminate several known browser processes, a technique commonly employed by the Braodo stealer malware to steal credentials. By forcefully closing browsers like Chrome, Edge, and Firefox, the malware can unlock files that store sensitive information, such as passwords and login data. This detection focuses on identifying taskkill commands targeting these browsers, signaling malicious intent. Early detecti…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Admin or user may choose to terminate browser via taskkill.exe. Filter as needed.",
              "refs": "https://x.com/suyog41/status/1825869470323056748, https://g0njxa.medium.com/from-vietnam-to-united-states-malware-fraud-and-dropshipping-98b7a7b2c36d",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Disable or Stop Browser Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Disable or Stop Browser Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Admin or user may choose to terminate browser via taskkill.exe. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Disable or Stop Browser Process”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.443",
              "n": "Windows Disable Shutdown Button Through Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious registry modifications that disable the shutdown button on a user's logon screen. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to registry paths associated with shutdown policies. This activity is significant because it is a tactic used by malware, particularly ransomware like KillDisk, to hinder system usability and prevent the removal of malicious changes. If confirmed malicious, this could impede system reco…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This windows feature may implement by administrator in some server where shutdown is critical. In that scenario filter of machine and users that can modify this registry is needed.",
              "refs": "https://www.trendmicro.com/vinfo/us/threat-encyclopedia/malware/ransom.msil.screenlocker.a/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Disable Shutdown Button Through Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Disable Shutdown Button Through Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This windows feature may implement by administrator in some server where shutdown is critical. In that scenario filter of machine and users that can modify this registry is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Disable Shutdown Button Through Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.444",
              "n": "Windows Disable Windows Event Logging Disable HTTP Logging",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of AppCmd.exe to disable HTTP logging on IIS servers. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution events where AppCmd.exe is used with specific parameters to alter logging settings. This activity is significant because disabling HTTP logging can help adversaries hide their tracks and avoid detection by removing evidence of their actions. If confirmed malicious, this could allow attackers to operate unde…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present only if scripts or Administrators are disabling logging. Filter as needed by parent process or other.",
              "refs": "https://www.crowdstrike.com/wp-content/uploads/2022/05/crowdstrike-iceapple-a-novel-internet-information-services-post-exploitation-framework-1.pdf, https://unit42.paloaltonetworks.com/unit42-oilrig-uses-rgdoor-iis-backdoor-targets-middle-east/, https://www.secureworks.com/research/bronze-union, https://strontic.github.io/xcyclopedia/library/appcmd.exe-055B2B09409F980BF9B5A3969D01E5B2.html",
              "mitre": [
                "T1505.004",
                "T1562.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Disable Windows Event Logging Disable HTTP Logging\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.004, T1562.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Disable Windows Event Logging Disable HTTP Logging\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present only if scripts or Administrators are disabling logging. Filter as needed by parent process or other.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
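\n\nIllustrative Tuning Sketch\n\nAs a concrete starting point for the lookup-based suppression described above, the sketch below post-filters CIM process events against an allow-list of parent processes. It is an orientation aid, not the shipped ESCU search: scoping to \"appcmd.exe\" is an illustrative assumption about how a logging-change detection is keyed, and the approved_logging_change_parents.csv lookup is hypothetical. Adapt both to your environment, and fold the final condition into the detection’s filter macro rather than editing the shipped search directly.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"appcmd.exe\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| search NOT [| inputlookup approved_logging_change_parents.csv | fields parent_process_name ]\n```",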
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Disable Windows Event Logging Disable HTTP Logging”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.445",
              "n": "Windows Disable Windows Group Policy Features Through Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious registry modifications aimed at disabling Windows Group Policy features. It leverages data from the Endpoint.Registry data model, focusing on specific registry paths and values associated with disabling key Windows functionalities. This activity is significant because it is commonly used by ransomware to hinder mitigation and forensic response efforts. If confirmed malicious, this behavior could severely impair the ability of security teams to analyze an…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Disabling these features for legitimate purposes is not a common use case but can still be implemented by the administrators. Filter as needed.",
              "refs": "https://hybrid-analysis.com/sample/ef1c427394c205580576d18ba68d5911089c7da0386f19d1ca126929d3e671ab?environmentId=120&lang=en, https://www.sophos.com/en-us/threat-center/threat-analyses/viruses-and-spyware/Troj~Krotten-N/detailed-analysis, https://www.virustotal.com/gui/file/2d7855bf6470aa323edf2949b54ce2a04d9e38770f1322c3d0420c2303178d91/details",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Disable Windows Group Policy Features Through Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Disable Windows Group Policy Features Through Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Disabling these features for legitimate purposes is not a common use case but can still be implemented by the administrators. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
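\n\nIllustrative Detection Logic (Sketch)\n\nThe packaged ESCU correlation search is authoritative; view it in Content Management. For orientation only, a minimal sketch of this style of registry detection against the Endpoint.Registry data model follows. The value names listed are common Group Policy restriction values abused by ransomware and are illustrative assumptions, not the shipped search’s exact match list.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name IN (\"DisableTaskMgr\", \"DisableCMD\", \"DisableRegistryTools\")\n  by Registry.dest Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",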
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Disable Windows Group Policy Features Through Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.446",
              "n": "Windows DisableAntiSpyware Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the Windows Registry key \"DisableAntiSpyware\" being set to disable. This detection leverages data from the Endpoint.Registry datamodel, specifically looking for the registry value name \"DisableAntiSpyware\" with a value of \"0x00000001\". This activity is significant as it is commonly associated with Ryuk ransomware infections, indicating potential malicious intent to disable Windows Defender. If confirmed malicious, this action could allow attacke…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://www.malwarebytes.com/blog/malwarebytes-news/2021/02/lazyscripter-from-empire-to-double-rat/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DisableAntiSpyware Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DisableAntiSpyware Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
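\n\nIllustrative Detection Logic (Sketch)\n\nThe packaged ESCU correlation search is authoritative; view it in Content Management. For orientation only, the minimal sketch below expresses the documented trigger condition (registry value name \"DisableAntiSpyware\" set to \"0x00000001\") against the Endpoint.Registry data model. Output field choices are illustrative, not the shipped search.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=\"DisableAntiSpyware\" Registry.registry_value_data=\"0x00000001\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",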
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows DisableAntiSpyware Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.447",
              "n": "Windows DiskCryptor Usage",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of DiskCryptor, identified by the process names \"dcrypt.exe\" or \"dcinst.exe\". This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and original file names. DiskCryptor is significant because adversaries use it to manually encrypt disks during an operation, potentially leading to data inaccessibility. If confirmed malicious, this activity could result in complete disk encryption, causing data loss a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name=\"dcrypt.exe\"\n            OR\n            Processes.original_file_name=dcinst.exe\n        )\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_diskcryptor_usage_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible false positives may be present based on the internal name dcinst.exe, filter as needed. It may be worthy to alert on the service name.",
              "refs": "https://thedfirreport.com/2021/11/15/exchange-exploit-leads-to-domain-wide-ransomware/, https://github.com/DavidXanatos/DiskCryptor",
              "mitre": [
                "T1486"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DiskCryptor Usage\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DiskCryptor Usage\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible false positives may be present based on the internal name dcinst.exe, filter as needed. It may be worthy to alert on the service name.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows DiskCryptor Usage”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.448",
              "n": "Windows Diskshadow Proxy Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of DiskShadow.exe in scripting mode, which can execute arbitrary unsigned code. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions with scripting mode flags. This activity is significant because DiskShadow.exe is typically used for legitimate backup operations, but its misuse can indicate an attempt to execute unauthorized code. If confirmed malicious, this could lead to unauthorized code execution, pote…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators using the DiskShadow tool in their infrastructure as a main backup tool with scripts will cause false positives that can be filtered with `windows_diskshadow_proxy_execution_filter`",
              "refs": "https://bohops.com/2018/03/26/diskshadow-the-return-of-vss-evasion-persistence-and-active-directory-database-extraction/",
              "mitre": [
                "T1218"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Diskshadow Proxy Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Diskshadow Proxy Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators using the DiskShadow tool in their infrastructure as a main backup tool with scripts will cause false positives that can be filtered with `windows_diskshadow_proxy_execution_filter`\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
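\n\nIllustrative Detection Logic (Sketch)\n\nThe packaged ESCU correlation search is authoritative; view it in Content Management. For orientation only, the sketch below looks for diskshadow.exe invoked with a script-mode flag. Diskshadow’s script mode is driven by /s, but the exact command-line token matching here is an illustrative assumption and the shipped search may match differently; `windows_diskshadow_proxy_execution_filter` is the documented tuning point.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"diskshadow.exe\" (Processes.process=\"* /s*\" OR Processes.process=\"* -s*\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n| `windows_diskshadow_proxy_execution_filter`\n```",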
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Diskshadow Proxy Execution”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.449",
              "n": "Windows DISM Remove Defender",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `dism.exe` to remove Windows Defender. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions that include specific parameters for disabling and removing Windows Defender. This activity is significant because adversaries may disable Defender to evade detection and carry out further malicious actions undetected. If confirmed malicious, this could lead to the attacker gaining persistent access, executing ad…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legitimate administrative tools leverage `dism.exe` to manipulate packages and features of the operating system. Filter as needed.",
              "refs": "https://thedfirreport.com/2020/11/23/pysa-mespinoza-ransomware/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DISM Remove Defender\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DISM Remove Defender\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legitimate administrative tools leverage `dism.exe` to manipulate packages and features of the operating system. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
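\n\nIllustrative Detection Logic (Sketch)\n\nThe packaged ESCU correlation search is authoritative; view it in Content Management. For orientation only, the sketch below flags dism.exe command lines that both remove a capability and reference Windows Defender, matching this use case’s description. The wildcard tokens are illustrative assumptions; the shipped search’s exact patterns may differ.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"dism.exe\" Processes.process=\"*remove*\" Processes.process=\"*Windows-Defender*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",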
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows DISM Remove Defender”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.450",
              "n": "Windows DLL Search Order Hijacking with iscsicpl",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects DLL search order hijacking involving iscsicpl.exe. It identifies when iscsicpl.exe loads a malicious DLL from a new path, triggering the payload execution. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on child processes spawned by iscsicpl.exe. This activity is significant as it indicates a potential attempt to execute unauthorized code via DLL hijacking. If confirmed malicious, this could allow an attacker to execute ar…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filtering may be required. Remove the Windows Shells macro to determine if other utilities are using iscsicpl.exe.",
              "refs": "https://github.com/hackerhouse-opensource/iscsicpl_bypassUAC, https://github.com/422926799/csplugin/tree/master/bypassUAC",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DLL Search Order Hijacking with iscsicpl\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DLL Search Order Hijacking with iscsicpl\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filtering may be required. Remove the Windows Shells macro to determine if other utilities are using iscsicpl.exe.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
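\n\nIllustrative Detection Logic (Sketch)\n\nThe packaged ESCU correlation search is authoritative; view it in Content Management. For orientation only, the sketch below surfaces child processes spawned by iscsicpl.exe, the behavior this use case documents. The shipped search constrains the child processes (for example via a shells macro, as noted under false positives); the unconstrained version here is a deliberately broad illustration.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=\"iscsicpl.exe\"\n  by Processes.dest Processes.user Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",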
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows DLL Search Order Hijacking with iscsicpl”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.451",
              "n": "Windows DNS Gather Network Info",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the dnscmd.exe command to enumerate DNS records. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process command-line executions. This activity is significant as it may indicate an adversary gathering network information, a common precursor to more targeted attacks. If confirmed malicious, this behavior could enable attackers to map the network, identify critical assets, and plan subsequent actions, potentially leading to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network administrator can execute this command to enumerate DNS record. Filter or add other paths to the exclusion as needed.",
              "refs": "https://cert.gov.ua/article/3718487, https://media.defense.gov/2023/May/24/2003229517/-1/-1/0/CSA_Living_off_the_Land.PDF",
              "mitre": [
                "T1590.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DNS Gather Network Info\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1590.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DNS Gather Network Info\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network administrator can execute this command to enumerate DNS record. Filter or add other paths to the exclusion as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
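Supplementing the generic validation searches above, here is a minimal sketch for confirming that the process telemetry behind this detection is populating the CIM Endpoint data model. It assumes `nslookup.exe` is the utility behind the DNS record enumeration behavior (verify against the detection's underlying search in Content Management); the data model and field names follow standard CIM conventions, and `security_content_ctime` is the ESCU-provided time-formatting macro:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="nslookup.exe"
  by Processes.dest Processes.process
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```

If this returns nothing over a representative window, either the behavior is genuinely absent or command-line telemetry is not reaching the `Processes` node; re-check the Sysmon/4688 ingestion path before enabling the detection.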
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows DNS Gather Network Info”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.452",
              "n": "Windows Enable Win32 ScheduledJob via Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a new DWORD value named \"EnableAt\" in the registry path \"HKLM:\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Schedule\\Configuration\". This modification enables the use of the at.exe or wmi Win32_ScheduledJob commands to add scheduled tasks on a Windows endpoint. The detection leverages registry event data from the Endpoint datamodel. This activity is significant because it may indicate that an attacker is enabling the ability to schedule tasks, poten…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "In some cases, an automated script or system may enable this setting continuously, leading to false positives. To avoid such situations, it is recommended to monitor the frequency and context of the registry modification and modify or filter the detection rules as needed. This can help to reduce the number of false positives and ensure that only genuine threats are identified. Additionally, it is important to investigate any detected instances of this modification and analyze them in the broader context of the system and network to determine if further action is necessary.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/cimwin32prov/win32-scheduledjob",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Enable Win32 ScheduledJob via Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Enable Win32 ScheduledJob via Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: In some cases, an automated script or system may enable this setting continuously, leading to false positives. To avoid such situations, it is recommended to monitor the frequency and context of the registry modification and modify or filter the detection rules as needed. This can help to reduce the number of false positives and ensure that only genuine threats are identified. Additionally, it is important to investigate any detected instances of this modification and analyze them in the broader context of the system and network to determine if further action is necessary.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
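A hedged sketch for verifying that Sysmon EventID 13 registry telemetry reaches the `Endpoint.Registry` node for the path this detection watches. Field names follow CIM; the path wildcard is an assumption to adjust if your environment records it differently:

```spl
| tstats summariesonly=true count latest(_time) as lastTime
  from datamodel=Endpoint.Registry
  where Registry.registry_path="*\\CurrentVersion\\Schedule\\Configuration*"
  by Registry.dest Registry.registry_path Registry.registry_value_name
```

Rows showing the `EnableAt` value name are precisely the events the correlation search keys on.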
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Enable Win32 ScheduledJob via Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.453",
              "n": "Windows Excessive Service Stop Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects multiple attempts to stop or delete services on a system using `net.exe` or `sc.exe`. It leverages Endpoint Detection and Response (EDR) telemetry, focusing on process names and command-line executions within a one-minute window. This activity is significant as it may indicate an adversary attempting to disable security or critical services to evade detection and further their objectives. If confirmed malicious, this could lead to the attacker gaining persistence, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1489"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Excessive Service Stop Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Excessive Service Stop Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
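For orientation, a simplified sketch of what the underlying detection logic looks like against raw CIM process data: bursts of `net.exe`/`sc.exe` stop or delete commands bucketed per minute. The threshold of 5 is an illustrative placeholder, not the ESCU default; the authoritative logic lives in the Correlation Search definition in Content Management:

```spl
| tstats summariesonly=true count
  from datamodel=Endpoint.Processes
  where Processes.process_name IN ("net.exe", "net1.exe", "sc.exe")
    (Processes.process="*stop*" OR Processes.process="*delete*")
  by Processes.dest _time span=1m
| where count >= 5
```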
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Excessive Service Stop Attempt”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.454",
              "n": "Windows Excessive Usage Of Net App",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects excessive usage of `net.exe` within a one-minute interval. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, parent processes, and command-line executions. This behavior is significant as it may indicate an adversary attempting to create, delete, or disable multiple user accounts rapidly, a tactic observed in Monero mining incidents. If confirmed malicious, this activity could lead to unauthorized user account manipulat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1531"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Excessive Usage Of Net App\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1531. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Excessive Usage Of Net App\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
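Because every RBA detection in this section writes risk events instead of alerting directly, a sketch for reviewing accumulated risk before the Notable threshold trips can be useful during tuning. It assumes the default `risk` index and the standard ES risk fields (`risk_score`, `risk_object`, `search_name`):

```spl
index=risk earliest=-7d
| stats sum(risk_score) as total_risk dc(search_name) as distinct_detections
        values(search_name) as detections
  by risk_object, risk_object_type
| sort - total_risk
```

Entities near the top that have not yet produced a Notable are good candidates for threshold and risk-weight review.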
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Excessive Usage Of Net App”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.455",
              "n": "Windows Executable in Loaded Modules",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where executable files (.exe) are loaded as modules, detected through 'ImageLoaded' events in Sysmon logs. This method leverages Sysmon EventCode 7 to track unusual module loading behavior, which is significant as it deviates from the norm of loading .dll files. This activity is crucial for SOC monitoring because it can indicate the presence of malware like NjRAT, which uses this technique to load malicious modules. If confirmed malicious, this behavio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 7 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.njrat",
              "mitre": [
                "T1129"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Executable in Loaded Modules\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1129. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Executable in Loaded Modules\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
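A raw-event sketch for spot-checking Sysmon EventID 7 (ImageLoaded) coverage before enabling the detection. The index and source values shown are common Splunk Add-on for Sysmon defaults and may differ in your deployment:

```spl
index=wineventlog source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=7 ImageLoaded="*.exe"
| stats count min(_time) as firstTime by Computer, Image, ImageLoaded
| sort - count
```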
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Executable in Loaded Modules”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.456",
              "n": "Windows Execute Arbitrary Commands with MSDT",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects arbitrary command execution using Windows msdt.exe, a Diagnostics Troubleshooting Wizard. It leverages Endpoint Detection and Response (EDR) data to identify instances where msdt.exe is invoked via the ms-msdt:/ protocol handler to retrieve a remote payload. This activity is significant as it can indicate an exploitation attempt leveraging msdt.exe to execute arbitrary commands, potentially leading to unauthorized code execution. If confirmed malicious, this could …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed. Added .xml to potentially capture any answer file usage. Remove as needed.",
              "refs": "https://isc.sans.edu/diary/rss/28694, https://doublepulsar.com/follina-a-microsoft-office-code-execution-vulnerability-1a47fce5629e, https://twitter.com/nao_sec/status/1530196847679401984?s=20&t=ZiXYI4dQuA-0_dzQzSUb3A, https://app.any.run/tasks/713f05d2-fe78-4b9d-a744-f7c133e3fafb/, https://www.virustotal.com/gui/file/4a24048f81afbe9fb62e7a6a49adbd1faf41f266b5f9feecdceb567aec096784/detection, https://strontic.github.io/xcyclopedia/library/msdt.exe-152D4C9F63EFB332CCB134C6953C0104.html",
              "mitre": [
                "T1218"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Execute Arbitrary Commands with MSDT\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Execute Arbitrary Commands with MSDT\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed. Added .xml to potentially capture any answer file usage. Remove as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
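A hedged hunting sketch for `ms-msdt:` invocations via the CIM `Processes` node, usable both for pre-enablement validation and for retro-hunting. The wildcard patterns are assumptions drawn from the description above; adjust to your telemetry:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="msdt.exe" Processes.process="*ms-msdt:*"
  by Processes.dest Processes.parent_process_name Processes.process
```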
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Execute Arbitrary Commands with MSDT”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.457",
              "n": "Windows Exfiltration Over C2 Via Powershell UploadString",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential data exfiltration using the PowerShell `net.webclient` command with the `UploadString` method. It leverages PowerShell Script Block Logging to detect instances where this command is executed. This activity is significant as it may indicate an attempt to upload sensitive data, such as desktop screenshots or files, to an external or internal URI, often associated with malware like Winter-Vivern. If confirmed malicious, this could lead to unauthorized dat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited. Filter as needed.",
              "refs": "https://twitter.com/_CERT_UA/status/1620781684257091584, https://cert.gov.ua/article/3761104",
              "mitre": [
                "T1041"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Exfiltration Over C2 Via Powershell UploadString\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1041. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Exfiltration Over C2 Via Powershell UploadString\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
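A raw-event sketch against PowerShell Script Block Logging (EventCode 4104). The index and source values are typical Splunk Add-on for Microsoft Windows defaults; substitute your own:

```spl
index=wineventlog source="XmlWinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4104 ScriptBlockText="*UploadString*"
| stats count by Computer, ScriptBlockText
| sort - count
```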
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Exfiltration Over C2 Via Powershell UploadString”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.458",
              "n": "Windows File Download Via CertUtil",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `certutil.exe` to download files using the `-URL`, `-urlcache` or '-verifyctl' arguments. This behavior is identified by monitoring command-line executions for these specific arguments via Endpoint Detection and Response (EDR) telemetry. This activity is significant because `certutil.exe` is a legitimate tool often abused by attackers to download and execute malicious payloads. If confirmed malicious, this could allow an attacker to download and execute …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed based on parent-child relationship or network connection.",
              "refs": "https://attack.mitre.org/techniques/T1105/, https://www.hexacorn.com/blog/2020/08/23/certutil-one-more-gui-lolbin/, https://web.archive.org/web/20210921110637/https://www.fireeye.com/blog/threat-research/2019/10/certutil-qualms-they-came-to-drop-fombs.html, https://lolbas-project.github.io/lolbas/Binaries/Certutil/, https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc732443(v=ws.11)#-verifyctl",
              "mitre": [
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows File Download Via CertUtil\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows File Download Via CertUtil\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed based on parent-child relationship or network connection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows File Download Via CertUtil”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.459",
              "n": "Windows File Transfer Protocol In Non-Common Process Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects FTP connections initiated by processes located in non-standard installation paths on Windows systems. It leverages Sysmon EventCode 3 to identify network connections where the process image path does not match common directories like \"Program Files\" or \"Windows\\System32\". This activity is significant as FTP is often used by adversaries and malware, such as AgentTesla, for Command and Control (C2) communications to exfiltrate stolen data. If confirmed malicious, thi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and sysmon eventcode = 3 connection events from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Third party FTP based applications will trigger this. Apply additional filters as needed. Also consider excluding known FTP based clients installed outside of the Program Files and Windows directories.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.agent_tesla",
              "mitre": [
                "T1071.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows File Transfer Protocol In Non-Common Process Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows File Transfer Protocol In Non-Common Process Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Third party FTP based applications will trigger this. Apply additional filters as needed. Also consider excluding known FTP based clients installed outside of the Program Files and Windows directories.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows File Transfer Protocol In Non-Common Process Path”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.460",
              "n": "Windows File Without Extension In Critical Folder",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects the creation of files without extensions in critical Windows system and driver-related directories, including but not limited to System32\\Drivers, Windows\\WinSxS, and other known Windows driver storage and loading paths. The detection has been expanded to comprehensively cover all commonly abused and legitimate Windows driver folder locations, increasing visibility into attempts to stage or deploy kernel-mode components. The analytic leverages telemetry from the Endpoint.Fi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://blog.talosintelligence.com/2022/02/threat-advisory-hermeticwiper.html, https://www.splunk.com/en_us/blog/security/detecting-hermeticwiper.html, https://learn.microsoft.com/en-us/answers/questions/2184241/where-does-windows-installation-search-for-drivers",
              "mitre": [
                "T1485"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows File Without Extension In Critical Folder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows File Without Extension In Critical Folder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows File Without Extension In Critical Folder”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.461",
              "n": "Windows Findstr GPP Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the findstr command to search for unsecured credentials in Group Policy Preferences (GPP). It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving findstr.exe with references to SYSVOL and cpassword. This activity is significant because it indicates an attempt to locate and potentially decrypt embedded credentials in GPP, which could lead to unauthorized access. If confirmed malicious, this could…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage findstr to find passwords in GPO to validate exposure. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1552/006/, https://pentestlab.blog/2017/03/20/group-policy-preferences/, https://adsecurity.org/?p=2288, https://www.hackingarticles.in/credential-dumping-group-policy-preferences-gpp/, https://support.microsoft.com/en-us/topic/ms14-025-vulnerability-in-group-policy-preferences-could-allow-elevation-of-privilege-may-13-2014-60734e15-af79-26ca-ea53-8cd617073c30",
              "mitre": [
                "T1552.006"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Findstr GPP Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Findstr GPP Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage findstr to find passwords in GPO to validate exposure. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Findstr GPP Discovery”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.462",
              "n": "Windows Gather Victim Host Information Camera",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a PowerShell script that enumerates camera devices on the targeted host. This detection leverages PowerShell Script Block Logging, specifically looking for commands querying Win32_PnPEntity for camera-related information. This activity is significant as it is commonly observed in DCRat malware, which collects camera data to send to its command-and-control server. If confirmed malicious, this behavior could indicate an attempt to gather sensitive visual information …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may execute this powershell command to get hardware information related to camera on $dest$.",
              "refs": "https://cert.gov.ua/article/405538, https://malpedia.caad.fkie.fraunhofer.de/details/win.dcrat, https://www.mandiant.com/resources/analyzing-dark-crystal-rat-backdoor",
              "mitre": [
                "T1592.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Gather Victim Host Information Camera\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1592.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Gather Victim Host Information Camera\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may execute this powershell command to get hardware information related to camera on $dest$.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Gather Victim Host Information Camera”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.463",
              "n": "Windows Gdrive Binary Activity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'gdrive' tool on a Windows host. This tool allows standard users to perform tasks associated with Google Drive via the command line. This is used by actors to stage tools as well as exfiltrate data. The detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. If confirmed malicious, this could lead to compromise of systems or sensitive data being stolen.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://cloud.google.com/blog/topics/threat-intelligence/uncovering-unc3886-espionage-operations",
              "mitre": [
                "T1567"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Gdrive Binary Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1567. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Gdrive Binary Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Gdrive Binary Activity”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.464",
              "n": "Windows Get-AdComputer Unconstrained Delegation Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the Get-ADComputer cmdlet with parameters indicating a search for Windows endpoints with Kerberos Unconstrained Delegation. It leverages PowerShell Script Block Logging (EventCode=4104) to identify this specific activity. This behavior is significant as it may indicate an attempt by adversaries or Red Teams to gain situational awareness and perform Active Directory discovery. If confirmed malicious, this activity could allow attackers to identify high-va…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may leverage PowerView for system management or troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/, https://adsecurity.org/?p=1667, https://learn.microsoft.com/en-us/defender-for-identity/cas-isp-unconstrained-kerberos, https://www.ired.team/offensive-security-experiments/active-directory-kerberos-abuse/domain-compromise-via-unrestricted-kerberos-delegation, https://www.cyberark.com/resources/threat-research-blog/weakness-within-kerberos-delegation",
              "mitre": [
                "T1018"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Get-AdComputer Unconstrained Delegation Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Get-AdComputer Unconstrained Delegation Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may leverage PowerView for system management or troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Get-AdComputer Unconstrained Delegation Discovery”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.465",
              "n": "Windows Global Object Access Audit List Cleared Via Auditpol",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `auditpol.exe` with the \"/resourceSACL\" flag, and either the \"/clear\" or \"/remove\" command-line arguments used to remove or clear the global object access audit policy. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity can be significant as it indicates potential defense evasion by adversaries or Red Teams, aiming to limit data that can be leveraged for detec…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be rare to non existent. Any activity detected by this analytic should be investigated and approved or denied.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/auditpol-resourcesacl",
              "mitre": [
                "T1562.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Global Object Access Audit List Cleared Via Auditpol\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Global Object Access Audit List Cleared Via Auditpol\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be rare to non existent. Any activity detected by this analytic should be investigated and approved or denied.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Global Object Access Audit List Cleared Via Auditpol”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.466",
              "n": "Windows Group Discovery Via Net",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `net.exe` with command-line arguments used to query global, local and domain groups. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant as it indicates potential reconnaissance efforts by adversaries to enumerate local or domain groups, which is a common step in Active Directory or privileged accounts discovery. If confirmed malicious, this behav…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where `process_net` Processes.process=\"*group*\" AND NOT (Processes.process=\"*/add\" OR Processes.process=\"*/delete\") by Processes.action Processes.dest Processes.original_file_name Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path Processes.process Processes.process_exec Processes.process_guid Processes.process_hash Processes.process_id Processes.process_integrity_level Processes.process_name Processes.process_path Processes.user Processes.user_id Processes.vendor_product | `drop_dm_object_name(Processes)` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_group_discovery_via_net_filter`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/, https://attack.mitre.org/techniques/T1069/001/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1069.001/T1069.001.md, https://media.defense.gov/2023/May/24/2003229517/-1/-1/0/CSA_Living_off_the_Land.PDF, https://thedfirreport.com/2023/05/22/icedid-macro-ends-in-nokoyawa-ransomware/",
              "mitre": [
                "T1069.001",
                "T1069.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Group Discovery Via Net\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001, T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Group Discovery Via Net\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Group Discovery Via Net”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.467",
              "n": "Windows Hidden Schedule Task Settings",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of hidden scheduled tasks on Windows systems, which are not visible in the UI. It leverages Windows Security EventCode 4698 to identify tasks where the 'Hidden' setting is enabled. This behavior is significant as it may indicate malware activity, such as Industroyer2, or the use of living-off-the-land binaries (LOLBINs) to download additional payloads. If confirmed malicious, this activity could allow attackers to execute code stealthily, maintain pers…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4698",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4698 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/, https://cert.gov.ua/article/39518",
              "mitre": [
                "T1053"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Hidden Schedule Task Settings\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4698. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Hidden Schedule Task Settings\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Hidden Schedule Task Settings”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.468",
              "n": "Windows Hide Notification Features Through Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious registry modifications aimed at hiding common Windows notification features on a compromised host. It leverages data from the Endpoint.Registry data model, focusing on specific registry paths and values. This activity is significant as it is often used by ransomware to obscure visual indicators, increasing the impact of the attack. If confirmed malicious, this could prevent users from noticing critical system alerts, thereby aiding the attacker in mainta…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.trendmicro.com/vinfo/us/threat-encyclopedia/malware/Ransom.Win32.ONALOCKER.A/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Hide Notification Features Through Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Hide Notification Features Through Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Hide Notification Features Through Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.469",
              "n": "Windows Hijack Execution Flow Version Dll Side Load",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a process loading a version.dll file from a directory other than %windir%\\system32 or %windir%\\syswow64. This detection leverages Sysmon EventCode 7 to identify instances where an unsigned or improperly located version.dll is loaded. This activity is significant as it is a common technique used in ransomware and APT malware campaigns, including Brute Ratel C4, to execute malicious code via DLL side loading. If confirmed malicious, this could allow attackers to exec…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The latest Sysmon TA 3.0 https://splunkbase.splunk.com/app/5709 will add the ImageLoaded name to the process_name field, allowing this query to work. Use as an example and implement for other products.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mdsec.co.uk/2022/08/part-3-how-i-met-your-beacon-brute-ratel/",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Hijack Execution Flow Version Dll Side Load\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Hijack Execution Flow Version Dll Side Load\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Hijack Execution Flow Version Dll Side Load”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.470",
              "n": "Windows HTTP Network Communication From MSIExec",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects MSIExec making network connections over ports 443 or 80. This behavior is identified by correlating process creation events from Endpoint Detection and Response (EDR) agents with network traffic logs. Typically, MSIExec does not perform network communication to the internet, making this activity unusual and potentially indicative of malicious behavior. If confirmed malicious, an attacker could be using MSIExec to download or communicate with external servers, poten…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 3, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present and filtering is required.",
              "refs": "https://thedfirreport.com/2022/06/06/will-the-real-msiexec-please-stand-up-exploit-leads-to-data-exfiltration/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.007/T1218.007.md",
              "mitre": [
                "T1218.007"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows HTTP Network Communication From MSIExec\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 3, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows HTTP Network Communication From MSIExec\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present and filtering is required.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows HTTP Network Communication From MSIExec”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.471",
              "n": "Windows Identify Protocol Handlers",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of protocol handlers executed via the command line. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and command-line telemetry. This activity is significant because protocol handlers can be exploited to execute arbitrary commands or launch applications, potentially leading to unauthorized actions. If confirmed malicious, an attacker could use this technique to gain code execution, escalate privileges, or maintain …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime values(Processes.process) as process values(Processes.parent_process) as parent_process FROM datamodel=Endpoint.Processes\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `drop_dm_object_name(Processes)`\n    | lookup windows_protocol_handlers handler AS process OUTPUT handler ishandler\n    | where ishandler=\"TRUE\"\n    | `windows_identify_protocol_handlers_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be found. https and http is a URL Protocol handler that will trigger this analytic. Tune based on process or command-line.",
              "refs": "https://gist.github.com/MHaggis/a0d3edb57d36e0916c94c0a464b2722e, https://www.oreilly.com/library/view/learning-java/1565927184/apas02.html, https://blogs.windows.com/msedgedev/2022/01/20/getting-started-url-protocol-handlers-microsoft-edge/, https://github.com/Mr-Un1k0d3r/PoisonHandler, https://www.mdsec.co.uk/2021/03/phishing-users-to-take-a-test/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218/T1218.md#atomic-test-5---protocolhandlerexe-downloaded-a-suspicious-file, https://techcommunity.microsoft.com/t5/windows-it-pro-blog/disabling-the-msix-ms-appinstaller-protocol-handler/ba-p/3119479, https://www.huntress.com/blog/microsoft-office-remote-code-execution-follina-msdt-bug, https://parsiya.net/blog/2021-03-17-attack-surface-analysis-part-2-custom-protocol-handlers/",
              "mitre": [
                "T1059"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Identify Protocol Handlers\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Identify Protocol Handlers\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be found. https and http is a URL Protocol handler that will trigger this analytic. Tune based on process or command-line.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Identify Protocol Handlers”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.472",
              "n": "Windows IIS Components Add New Module",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of AppCmd.exe to install a new module in IIS. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as adversaries may use it to install webshells or backdoors, leading to credit card scraping, persistence, and further post-exploitation. If confirmed malicious, this could allow attackers to maintain persistent access, execute arbitrary code, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present until properly tuned. Filter as needed.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2022/12/12/iis-modules-the-evolution-of-web-shells-and-how-to-detect-them/, https://www.crowdstrike.com/wp-content/uploads/2022/05/crowdstrike-iceapple-a-novel-internet-information-services-post-exploitation-framework-1.pdf, https://unit42.paloaltonetworks.com/unit42-oilrig-uses-rgdoor-iis-backdoor-targets-middle-east/, https://www.secureworks.com/research/bronze-union, https://github.com/redcanaryco/atomic-red-team/tree/master/atomics/T1505.004, https://strontic.github.io/xcyclopedia/library/appcmd.exe-055B2B09409F980BF9B5A3969D01E5B2.html",
              "mitre": [
                "T1505.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows IIS Components Add New Module\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows IIS Components Add New Module\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present until properly tuned. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
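As a quick check that step 4 above is producing output, you can query the risk index directly for events written by this detection. A minimal sketch, assuming the Enterprise Security default `index=risk` (rename it if your deployment routes risk events elsewhere):

```spl
index=risk search_name="*Windows IIS Components Add New Module*"
| stats count min(_time) as firstTime max(_time) as lastTime by risk_object, risk_object_type, risk_score
| convert ctime(firstTime) ctime(lastTime)
```

No results after the scheduled search has run usually means the detection is disabled, the data sources are not CIM-mapped, or the search window missed the events.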
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows IIS Components Add New Module”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.473",
              "n": "Windows Impair Defense Add Xml Applocker Rules",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of a PowerShell commandlet to import an AppLocker XML policy. This behavior is identified by monitoring processes that execute the \"Import-Module Applocker\" and \"Set-AppLockerPolicy\" commands with the \"-XMLPolicy\" parameter. This activity is significant because it can indicate an attempt to disable or bypass security controls, as seen in the Azorult malware. If confirmed malicious, this could allow an attacker to disable antivirus products, leading to furth…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` values(Processes.process) as process min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE `process_powershell`\n        AND\n        Processes.process=\"*Import-Module Applocker*\"\n        AND\n        Processes.process=\"*Set-AppLockerPolicy *\"\n        AND\n        Processes.process=\"* -XMLPolicy *\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_impair_defense_add_xml_applocker_rules_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may execute this command that may cause some false positive.",
              "refs": "https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Add Xml Applocker Rules\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Add Xml Applocker Rules\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may execute this command that may cause some false positive.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Add Xml Applocker Rules”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.474",
              "n": "Windows Impair Defense Change Win Defender Throttle Rate",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the ThrottleDetectionEventsRate registry setting in Windows Defender. It leverages data from the Endpoint.Registry datamodel to identify changes in the registry path related to Windows Defender's event logging rate. This activity is significant because altering the ThrottleDetectionEventsRate can reduce the frequency of logged detection events, potentially masking malicious activities. If confirmed malicious, this could allow an attacker to evade d…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Change Win Defender Throttle Rate\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Change Win Defender Throttle Rate\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
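Because Sysmon EventID 13 coverage depends entirely on the registry paths your Sysmon configuration monitors, it is worth confirming that Windows Defender keys actually reach the accelerated data model before relying on this detection. A minimal sketch (the path wildcard is an assumption; widen or narrow it for your environment):

```spl
| tstats `security_content_summariesonly` count
  from datamodel=Endpoint.Registry
  where Registry.registry_path="*\\Windows Defender\\*"
  by Registry.registry_path
| sort -count
```

Zero results here points at a collection gap, not an absence of activity.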
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Change Win Defender Throttle Rate”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.475",
              "n": "Windows Impair Defense Change Win Defender Tracing Level",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry specifically targeting the \"WppTracingLevel\" setting within Windows Defender. This detection leverages data from the Endpoint.Registry data model to identify changes in the registry path associated with Windows Defender tracing levels. Such modifications are significant as they can impair the diagnostic capabilities of Windows Defender, potentially hiding malicious activities. If confirmed malicious, this activity could allow a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Change Win Defender Tracing Level\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Change Win Defender Tracing Level\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
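One way to implement the per-entity risk exception suggested in the tuning bullets is a suppression lookup appended inside the detection's filter macro. A sketch assuming a hypothetical CSV lookup named `defender_registry_change_exceptions` with a `dest` column; both the lookup and its name are illustrative, not shipped with ESCU:

```spl
| lookup defender_registry_change_exceptions dest OUTPUT dest as suppressed_dest
| where isnull(suppressed_dest)
```

This drops rows for approved hosts while leaving detection coverage unchanged for every other entity.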
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Change Win Defender Tracing Level”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.476",
              "n": "Windows Impair Defense Define Win Defender Threat Action",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows Defender ThreatSeverityDefaultAction registry setting. It leverages data from the Endpoint.Registry datamodel to identify changes in registry values that define how Windows Defender responds to threats. This activity is significant because altering these settings can impair the system's defense mechanisms, potentially allowing threats to go unaddressed. If confirmed malicious, this could enable attackers to bypass antivirus protections,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Define Win Defender Threat Action\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Define Win Defender Threat Action\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
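During triage it helps to see exactly what value was written to ThreatSeverityDefaultAction. A sketch pivoting to the raw Sysmon events; the index and sourcetype shown are common defaults for Sysmon ingestion and are assumptions to adjust for your deployment:

```spl
index=windows sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=13
  TargetObject="*ThreatSeverityDefaultAction*"
| table _time, host, Image, TargetObject, Details
```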
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Define Win Defender Threat Action”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.477",
              "n": "Windows Impair Defense Delete Win Defender Context Menu",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of the Windows Defender context menu entry from the registry. It leverages data from the Endpoint datamodel, specifically monitoring registry actions where the path includes \"*\\\\shellex\\\\ContextMenuHandlers\\\\EPP\" and the action is 'deleted'. This activity is significant as it is commonly associated with Remote Access Trojan (RAT) malware attempting to disable security features. If confirmed malicious, this could allow an attacker to impair defenses, fa…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry where Registry.registry_path = \"*\\\\shellex\\\\ContextMenuHandlers\\\\EPP\" Registry.action = deleted by Registry.action Registry.dest Registry.process_guid Registry.process_id Registry.registry_hive Registry.registry_path Registry.registry_key_name Registry.registry_value_data Registry.registry_value_name Registry.registry_value_type Registry.status Registry.user Registry.vendor_product | `drop_dm_object_name(Registry)` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_impair_defense_delete_win_defender_context_menu_filter`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://www.malwarebytes.com/blog/malwarebytes-news/2021/02/lazyscripter-from-empire-to-double-rat/, https://app.any.run/tasks/45f5d114-91ea-486c-ab01-41c4093d2861/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Delete Win Defender Context Menu\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Delete Win Defender Context Menu\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Delete Win Defender Context Menu”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.478",
              "n": "Windows Impair Defense Delete Win Defender Profile Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of the Windows Defender main profile registry key. It leverages data from the Endpoint.Registry datamodel, specifically monitoring for deleted actions within the Windows Defender registry path. This activity is significant as it indicates potential tampering with security defenses, often associated with Remote Access Trojans (RATs) and other malware. If confirmed malicious, this action could allow an attacker to disable Windows Defender, reducing the s…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://www.malwarebytes.com/blog/malwarebytes-news/2021/02/lazyscripter-from-empire-to-double-rat/, https://app.any.run/tasks/45f5d114-91ea-486c-ab01-41c4093d2861/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Delete Win Defender Profile Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Delete Win Defender Profile Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
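To see how much this detection contributes to a host's composite score relative to other signals (workflow step 1), the drilldown shown above can be re-aggregated by contributing search. A sketch assuming the Risk data model's `risk_score` field is populated as in a default ES install; replace `$dest$` with a concrete host when running it ad hoc:

```spl
| from datamodel Risk.All_Risk
| search normalized_risk_object="$dest$" starthoursago=168
| stats sum(risk_score) as total_risk count by search_name
| sort -total_risk
```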
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Delete Win Defender Profile Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.479",
              "n": "Windows Impair Defense Deny Security Software With Applocker",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications in the Windows registry by the Applocker utility that deny the execution of various security products. This detection leverages data from the Endpoint.Registry datamodel, focusing on specific registry paths and values indicating a \"Deny\" action against known antivirus and security software. This activity is significant as it may indicate an attempt to disable security defenses, a tactic observed in malware like Azorult. If confirmed malicious, this co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on organization use of Applocker. Filter as needed.",
              "refs": "https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/, https://www.microsoftpressstore.com/articles/article.aspx?p=2228450&seqNum=11",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Deny Security Software With Applocker\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Deny Security Software With Applocker\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on organization use of Applocker. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
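\n\nIllustrative Detection Logic (Sketch)\n\nFor orientation only, the sketch below shows the general shape of an ESCU-style registry detection over the Endpoint.Registry data model, assuming the standard ESCU helper macros (`security_content_summariesonly`, `drop_dm_object_name`, `security_content_ctime`) are installed. The wildcarded SrpV2 path (where AppLocker rules are stored in the registry) and the Deny value match are assumptions inferred from this detection’s description. This is not the shipped search; confirm the authoritative logic in Content Management before relying on it.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*SrpV2*\" Registry.registry_value_data=\"*Deny*\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nDepending on your add-on, registry_value_data formatting can vary, so adjust the Deny match to your events.",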
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Deny Security Software With Applocker”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.480",
              "n": "Windows Impair Defense Disable Controlled Folder Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a modification in the Windows registry that disables the Windows Defender Controlled Folder Access feature. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the EnableControlledFolderAccess registry setting. This activity is significant because Controlled Folder Access is designed to protect critical folders from unauthorized access, including ransomware attacks. If this activity is confirmed malicious, it could allow atta…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Disable Controlled Folder Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Disable Controlled Folder Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
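\n\nIllustrative Detection Logic (Sketch)\n\nFor orientation only, the sketch below shows the general shape of an ESCU-style registry detection over the Endpoint.Registry data model, assuming the standard ESCU helper macros (`security_content_summariesonly`, `drop_dm_object_name`, `security_content_ctime`) are installed. The value name EnableControlledFolderAccess comes from this detection’s description; the 0x00000000 data match is an assumption about how your add-on renders the value. This is not the shipped search; confirm the authoritative logic in Content Management before relying on it.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=EnableControlledFolderAccess Registry.registry_value_data=\"0x00000000\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nDepending on your add-on, registry_value_data formatting can vary (decimal versus hex, with or without a type prefix), so adjust the value match to your events.",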
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Disable Controlled Folder Access”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.481",
              "n": "Windows Impair Defense Disable Defender Protocol Recognition",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable the Windows Defender protocol recognition feature. It leverages data from the Endpoint.Registry data model, specifically looking for changes to the \"DisableProtocolRecognition\" setting. This activity is significant because disabling protocol recognition can hinder Windows Defender's ability to detect and respond to malware or suspicious software. If confirmed malicious, this action could allow an attacker to bypass…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Disable Defender Protocol Recognition\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Disable Defender Protocol Recognition\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
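\n\nIllustrative Detection Logic (Sketch)\n\nFor orientation only, the sketch below shows the general shape of an ESCU-style registry detection over the Endpoint.Registry data model, assuming the standard ESCU helper macros (`security_content_summariesonly`, `drop_dm_object_name`, `security_content_ctime`) are installed. The value name DisableProtocolRecognition comes from this detection’s description; the 0x00000001 data match is an assumption. This is not the shipped search; confirm the authoritative logic in Content Management before relying on it.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=DisableProtocolRecognition Registry.registry_value_data=\"0x00000001\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nDepending on your add-on, registry_value_data formatting can vary (decimal versus hex, with or without a type prefix), so adjust the value match to your events.",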
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Disable Defender Protocol Recognition”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.482",
              "n": "Windows Impair Defense Disable PUA Protection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a modification in the Windows registry to disable Windows Defender PUA protection by setting PUAProtection to 0. This detection leverages data from the Endpoint.Registry datamodel, focusing on registry path changes related to Windows Defender. Disabling PUA protection is significant as it reduces defenses against Potentially Unwanted Applications (PUAs), which, while not always malicious, can negatively impact user experience and security. If confirmed malicious, t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Disable PUA Protection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Disable PUA Protection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
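\n\nIllustrative Detection Logic (Sketch)\n\nFor orientation only, the sketch below shows the general shape of an ESCU-style registry detection over the Endpoint.Registry data model, assuming the standard ESCU helper macros (`security_content_summariesonly`, `drop_dm_object_name`, `security_content_ctime`) are installed. The value name PUAProtection and the zero data value come from this detection’s description; the hex rendering is an assumption. This is not the shipped search; confirm the authoritative logic in Content Management before relying on it.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=PUAProtection Registry.registry_value_data=\"0x00000000\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nDepending on your add-on, registry_value_data formatting can vary (decimal versus hex, with or without a type prefix), so adjust the value match to your events.",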
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Disable PUA Protection”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.483",
              "n": "Windows Impair Defense Disable Web Evaluation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry entry \"EnableWebContentEvaluation\" to disable Windows Defender web content evaluation. It leverages data from the Endpoint.Registry datamodel, specifically monitoring changes where the registry value is set to \"0x00000000\". This activity is significant as it indicates an attempt to impair browser security features, potentially allowing malicious web content to bypass security checks. If confirmed malicious, this could lead to u…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Disable Web Evaluation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Disable Web Evaluation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
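\n\nIllustrative Detection Logic (Sketch)\n\nFor orientation only, the sketch below shows the general shape of an ESCU-style registry detection over the Endpoint.Registry data model, assuming the standard ESCU helper macros (`security_content_summariesonly`, `drop_dm_object_name`, `security_content_ctime`) are installed. The value name EnableWebContentEvaluation and the 0x00000000 data value come from this detection’s description. This is not the shipped search; confirm the authoritative logic in Content Management before relying on it.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=EnableWebContentEvaluation Registry.registry_value_data=\"0x00000000\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nDepending on your add-on, registry_value_data formatting can vary (decimal versus hex, with or without a type prefix), so adjust the value match to your events.",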
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Disable Web Evaluation”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.484",
              "n": "Windows Impair Defense Disable Win Defender App Guard",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable Windows Defender Application Guard auditing. It leverages data from the Endpoint.Registry data model, focusing on specific registry paths and values. This activity is significant because disabling auditing can hinder security monitoring and threat detection within the isolated environment, making it easier for malicious activities to go unnoticed. If confirmed malicious, this action could allow attackers to bypass …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Disable Win Defender App Guard\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Disable Win Defender App Guard\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
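\n\nIllustrative Detection Logic (Sketch)\n\nFor orientation only, the sketch below shows the general shape of an ESCU-style registry detection over the Endpoint.Registry data model, assuming the standard ESCU helper macros (`security_content_summariesonly`, `drop_dm_object_name`, `security_content_ctime`) are installed. The value name AuditApplicationGuard is an assumption inferred from this detection’s description of disabled Application Guard auditing, as is the 0x00000000 data match. This is not the shipped search; confirm the authoritative logic in Content Management before relying on it.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=AuditApplicationGuard Registry.registry_value_data=\"0x00000000\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```\n\nDepending on your add-on, registry_value_data formatting can vary (decimal versus hex, with or without a type prefix), so adjust the value match to your events.",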
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Disable Win Defender App Guard”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.485",
              "n": "Windows Impair Defense Disable Win Defender Gen reports",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications in the Windows registry to disable Windows Defender generic reports. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the \"DisableGenericRePorts\" registry value. This activity is significant as it can prevent the transmission of error reports to Microsoft's Windows Error Reporting service, potentially hiding malicious activities. If confirmed malicious, this action could allow attackers to bypass Windows Defe…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Disable Win Defender Gen reports\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Disable Win Defender Gen reports\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
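              "_spl_example": "Illustrative sketch only; not part of the shipped ESCU content. The tuning notes above point at the detection’s provided filter macro. One hedged way to implement a lookup-based suppression is to maintain a lookup of approved hosts (hypothetical name: defender_policy_exception_hosts.csv, one dest per row) and override the macro definition to exclude them:\n\n```spl\nsearch NOT [ | inputlookup defender_policy_exception_hosts.csv | fields dest ]\n```\n\nESCU appends the filter macro as the last pipeline stage, so this definition expands to `| search NOT [...]`, and a local macro override (Settings → Advanced search → Search macros) survives ESCU content updates.",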
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Disable Win Defender Gen reports”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.486",
              "n": "Windows Impair Defense Disable Win Defender Report Infection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable Windows Defender's infection reporting. It leverages data from the Endpoint.Registry datamodel, specifically monitoring changes to the \"DontReportInfectionInformation\" registry key. This activity is significant because it can prevent Windows Defender from reporting detailed threat information to Microsoft, potentially allowing malware to evade detection. If confirmed malicious, this action could enable attackers to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Disable Win Defender Report Infection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Disable Win Defender Report Infection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
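              "_spl_example": "Illustrative sketch only; not part of the shipped ESCU content. A hedged pre-flight check that the Endpoint.Registry datamodel actually carries the value this analytic watches (DontReportInfectionInformation); adjust the datamodel scoping if your environment differs:\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=\"DontReportInfectionInformation\"\n  by Registry.dest Registry.registry_path Registry.registry_value_data\n```\n\nZero results can mean either no such activity, or a normalization gap in the Sysmon TA field extractions; verify which before trusting the detection.",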
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Disable Win Defender Report Infection”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.487",
              "n": "Windows Impair Defense Override SmartScreen Prompt",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that override the Windows Defender SmartScreen prompt. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the \"PreventSmartScreenPromptOverride\" registry setting. This activity is significant because it indicates an attempt to disable the prevention of user overrides for SmartScreen prompts, potentially allowing users to bypass security warnings. If confirmed malicious, this could lead t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Override SmartScreen Prompt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Override SmartScreen Prompt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
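              "_spl_example": "Illustrative sketch only; not part of the shipped ESCU content. A hedged pre-flight check that the Endpoint.Registry datamodel carries the setting this analytic watches (PreventSmartScreenPromptOverride); adjust the datamodel scoping if your environment differs:\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=\"PreventSmartScreenPromptOverride\"\n  by Registry.dest Registry.registry_path Registry.registry_value_data\n```\n\nZero results can mean either no such activity, or a normalization gap in the Sysmon TA field extractions; verify which before trusting the detection.",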
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Override SmartScreen Prompt”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.488",
              "n": "Windows Impair Defense Set Win Defender Smart Screen Level To Warn",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that set the Windows Defender SmartScreen level to \"warn.\" This detection leverages data from the Endpoint.Registry data model, specifically monitoring changes to the ShellSmartScreenLevel registry value. This activity is significant because altering SmartScreen settings to \"warn\" can reduce immediate suspicion from users, allowing potentially malicious executables to run with just a warning prompt. If confirmed malicious, this…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Set Win Defender Smart Screen Level To Warn\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Set Win Defender Smart Screen Level To Warn\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
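              "_spl_example": "Illustrative sketch only; not part of the shipped ESCU content. A hedged pre-flight check for the ShellSmartScreenLevel value this analytic monitors; the registry_value_data column shows the level written (this analytic focuses on \"Warn\"). Adjust the datamodel scoping if your environment differs:\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=\"ShellSmartScreenLevel\"\n  by Registry.dest Registry.registry_path Registry.registry_value_data\n```\n\nZero results can mean either no such activity, or a normalization gap in the Sysmon TA field extractions; verify which before trusting the detection.",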
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defense Set Win Defender Smart Screen Level To Warn”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.489",
              "n": "Windows Impair Defenses Disable Auto Logger Session",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the disabling of an AutoLogger session or one of its providers, by identifying changes to the Registry values \"Start\" and \"Enabled\" part of the \"\\WMI\\Autologger\\\" key path. It leverages data from the Endpoint.Registry datamodel to monitor specific registry paths and values. This activity is significant as attackers and adversaries can leverage this in order to evade defense and blind EDRs and log ingest tooling. If confirmed malicious, this action could allow an at…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://www.malwarebytes.com/blog/malwarebytes-news/2021/02/lazyscripter-from-empire-to-double-rat/, https://app.any.run/tasks/45f5d114-91ea-486c-ab01-41c4093d2861/, https://isc.sans.edu/diary/rss/28628, https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/, https://learn.microsoft.com/en-us/windows/win32/etw/configuring-and-starting-an-autologger-session",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defenses Disable Auto Logger Session\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defenses Disable Auto Logger Session\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
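              "_spl_example": "Illustrative sketch only; not part of the shipped ESCU content. A hedged pre-flight check for AutoLogger registry telemetry. The path wildcard is deliberately loose (matching any *Autologger* path rather than spelling out the full \\WMI\\Autologger\\ key with escaped backslashes), so tighten it for your environment:\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*Autologger*\" Registry.registry_value_name IN (\"Start\", \"Enabled\")\n  by Registry.dest Registry.registry_path Registry.registry_value_data\n```",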
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defenses Disable Auto Logger Session”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.490",
              "n": "Windows Impair Defenses Disable AV AutoStart via Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the registry related to the disabling of autostart functionality for certain antivirus products, such as Kingsoft and Tencent. Malware like ValleyRAT may alter specific registry keys to prevent these security tools from launching automatically at startup, thereby weakening system defenses. By monitoring changes in the registry entries associated with antivirus autostart settings, this detection enables security analysts to identify attempts to disa…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.proofpoint.com/us/blog/threat-insight/chinese-malware-appears-earnest-across-cybercrime-threat-landscape, https://www.fortinet.com/blog/threat-research/valleyrat-campaign-targeting-chinese-speakers",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defenses Disable AV AutoStart via Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defenses Disable AV AutoStart via Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
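              "_spl_example": "Illustrative sketch only; not part of the shipped ESCU content. This detection’s drilldown attributes risk to both user and dest, so a hedged per-entity exception (see the tuning bullets above) can live in a two-column lookup (hypothetical name: av_autostart_exceptions.csv with fields dest, user) applied through the detection’s filter macro:\n\n```spl\nsearch NOT [ | inputlookup av_autostart_exceptions.csv | fields dest user ]\n```\n\nA two-field subsearch only excludes rows matching both dest and user, which is narrower (and safer) than suppressing an entire host.",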
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defenses Disable AV AutoStart via Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.491",
              "n": "Windows Impair Defenses Disable HVCI",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the disabling of Hypervisor-protected Code Integrity (HVCI) by monitoring changes in the Windows registry. It leverages data from the Endpoint datamodel, specifically focusing on registry paths and values related to HVCI settings. This activity is significant because HVCI helps protect the kernel and system processes from tampering by malicious code. If confirmed malicious, disabling HVCI could allow attackers to execute unsigned kernel-mode code, potentially leadi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be limited to administrative scripts disabling HVCI. Filter as needed.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2023/04/11/guidance-for-investigating-attacks-using-cve-2022-21894-the-blacklotus-campaign/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defenses Disable HVCI\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defenses Disable HVCI\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be limited to administrative scripts disabling HVCI. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
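              "_spl_example": "Illustrative sketch only; not part of the shipped ESCU content. A hedged pre-flight check for HVCI registry telemetry. The path wildcard assumes the setting lives under a key containing HypervisorEnforcedCodeIntegrity (value \"Enabled\", where 0 disables HVCI); verify against your endpoints before relying on it:\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*HypervisorEnforcedCodeIntegrity*\"\n  by Registry.dest Registry.registry_value_name Registry.registry_value_data\n```",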
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defenses Disable HVCI”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.492",
              "n": "Windows Impair Defenses Disable Win Defender Auto Logging",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the disabling of Windows Defender logging by identifying changes to the Registry keys DefenderApiLogger or DefenderAuditLogger set to disable. It leverages data from the Endpoint.Registry datamodel to monitor specific registry paths and values. This activity is significant as it is commonly associated with Remote Access Trojan (RAT) malware attempting to evade detection. If confirmed malicious, this action could allow an attacker to conceal their activities, making…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://www.malwarebytes.com/blog/malwarebytes-news/2021/02/lazyscripter-from-empire-to-double-rat/, https://app.any.run/tasks/45f5d114-91ea-486c-ab01-41c4093d2861/, https://isc.sans.edu/diary/rss/28628, https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defenses Disable Win Defender Auto Logging\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defenses Disable Win Defender Auto Logging\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Impair Defenses Disable Win Defender Auto Logging”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.493",
              "n": "Windows Indicator Removal Via Rmdir",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'rmdir' command with '/s' and '/q' options to delete files and directory trees. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process metadata. This activity is significant as it may indicate malware attempting to remove traces or components during cleanup operations. If confirmed malicious, this behavior could allow attackers to eliminate forensic evidence, hinder incid…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "user and network administrator can execute this command.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.darkgate",
              "mitre": [
                "T1070"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Indicator Removal Via Rmdir\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Indicator Removal Via Rmdir\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: user and network administrator can execute this command.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Indicator Removal Via Rmdir”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.494",
              "n": "Windows Indirect Command Execution Via forfiles",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of programs initiated by forfiles.exe. This command is typically used to run commands on multiple files, often within batch scripts. The detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where forfiles.exe is the parent process. This activity is significant because forfiles.exe can be exploited to bypass command line execution protections, making it a potential vector for malicious activity…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legacy applications may be run using pcalua.exe. Similarly, forfiles.exe may be used in legitimate batch scripts.  Filter these results as needed.",
              "refs": "https://twitter.com/KyleHanslovan/status/912659279806640128, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/forfiles",
              "mitre": [
                "T1202"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Indirect Command Execution Via forfiles\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1202. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Indirect Command Execution Via forfiles\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legacy applications may be run using pcalua.exe. Similarly, forfiles.exe may be used in legitimate batch scripts.  Filter these results as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Indirect Command Execution Via forfiles”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.495",
              "n": "Windows Indirect Command Execution Via pcalua",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects programs initiated by pcalua.exe, the Microsoft Windows Program Compatibility Assistant. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and parent process information. While pcalua.exe can start legitimate programs, it is significant because attackers may use it to bypass command line execution protections. If confirmed malicious, this activity could allow attackers to execute arbitrary commands, potentially lea…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legacy applications may be run using pcalua.exe.  Filter these results as needed.",
              "refs": "https://twitter.com/KyleHanslovan/status/912659279806640128, https://lolbas-project.github.io/lolbas/Binaries/Pcalua/",
              "mitre": [
                "T1202"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Indirect Command Execution Via pcalua\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1202. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Indirect Command Execution Via pcalua\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legacy applications may be run using pcalua.exe.  Filter these results as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Indirect Command Execution Via pcalua”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.496",
              "n": "Windows Indirect Command Execution Via Series Of Forfiles",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects excessive usage of the forfiles.exe process, which is often indicative of post-exploitation activities. The detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include process GUID, process name, and parent process. This activity is significant because forfiles.exe can be abused to execute commands on multiple files, a technique used by ransomware like Prestige. If confirmed malicious, this behavior co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/forfiles, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1202"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Indirect Command Execution Via Series Of Forfiles\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1202. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Indirect Command Execution Via Series Of Forfiles\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Indirect Command Execution Via Series Of Forfiles”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.497",
              "n": "Windows Ingress Tool Transfer Using Explorer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where the Windows Explorer process (explorer.exe) is executed with a URL in its command line. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. This activity is significant because adversaries, such as those using DCRat malware, may abuse explorer.exe to open URLs with the default browser, which is an uncommon and suspicious behavior. If confirmed malicious, this technique could allow a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on legitimate applications or third party utilities. Filter out any additional parent process names.",
              "refs": "https://www.mandiant.com/resources/analyzing-dark-crystal-rat-backdoor",
              "mitre": [
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Ingress Tool Transfer Using Explorer\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Ingress Tool Transfer Using Explorer\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on legitimate applications or third party utilities. Filter out any additional parent process names.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Ingress Tool Transfer Using Explorer”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.498",
              "n": "Windows InstallUtil in Non Standard Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of InstallUtil.exe from non-standard paths. It leverages Endpoint Detection and Response (EDR) data, focusing on process names and original file names outside typical directories. This activity is significant because InstallUtil.exe is often used by attackers to execute malicious code or scripts. If confirmed malicious, this behavior could allow an attacker to bypass security controls, execute arbitrary code, and potentially gain unauthorized access o…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present and filtering may be required. Certain utilities will run from non-standard paths based on the third-party application in use.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1036.003/T1036.003.yaml, https://attack.mitre.org/techniques/T1036/003/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.004/T1218.004.md",
              "mitre": [
                "T1036.003",
                "T1218.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows InstallUtil in Non Standard Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.003, T1218.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows InstallUtil in Non Standard Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present and filtering may be required. Certain utilities will run from non-standard paths based on the third-party application in use.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows InstallUtil in Non Standard Path”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.499",
              "n": "Windows InstallUtil Remote Network Connection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the Windows InstallUtil.exe binary making a remote network connection. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and network telemetry. This activity is significant because InstallUtil.exe can be exploited to download and execute malicious code, bypassing application control mechanisms. If confirmed malicious, an attacker could achieve code execution, potentially leading to further system compromise, data exfiltration,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 3, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 3, Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives should be present as InstallUtil is not typically used to download remote files. Filter as needed based on Developers requirements.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.004/T1218.004.md",
              "mitre": [
                "T1218.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows InstallUtil Remote Network Connection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 3, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows InstallUtil Remote Network Connection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives should be present as InstallUtil is not typically used to download remote files. Filter as needed based on Developers requirements.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows InstallUtil Remote Network Connection”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.500",
              "n": "Windows InstallUtil Uninstall Option",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the Windows InstallUtil.exe binary with the `/u` (uninstall) switch, which can execute code while bypassing application control. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, parent processes, and command-line executions. This activity is significant because it can indicate an attempt to execute malicious code without administrative privileges. If confirmed malicious, an attacker could achieve…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives should be present. Filter as needed by parent process or application.",
              "refs": "https://evi1cg.me/archives/AppLocker_Bypass_Techniques.html#menu_index_12, https://github.com/api0cradle/UltimateAppLockerByPassList/blob/master/md/Installutil.exe.md, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.004/T1218.004.md",
              "mitre": [
                "T1218.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows InstallUtil Uninstall Option\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows InstallUtil Uninstall Option\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives should be present. Filter as needed by parent process or application.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows InstallUtil Uninstall Option”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.501",
              "n": "Windows InstallUtil URL in Command Line",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Windows InstallUtil.exe with an HTTP or HTTPS URL in the command line. This is identified through Endpoint Detection and Response (EDR) telemetry, focusing on command-line executions containing URLs. This activity is significant as it may indicate an attempt to download and execute malicious code, potentially bypassing application control mechanisms. If confirmed malicious, this could lead to unauthorized code execution, privilege escalation, or persiste…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives should be present as InstallUtil is not typically used to download remote files. Filter as needed based on Developers requirements.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.004/T1218.004.md, https://gist.github.com/DanielRTeixeira/0fd06ec8f041f34a32bf5623c6dd479d",
              "mitre": [
                "T1218.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows InstallUtil URL in Command Line\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows InstallUtil URL in Command Line\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives should be present as InstallUtil is not typically used to download remote files. Filter as needed based on Developers requirements.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows InstallUtil URL in Command Line”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.502",
              "n": "Windows ISO LNK File Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of .iso.lnk files in the %USER%\\AppData\\Local\\Temp\\<random folder name>\\ path, indicating that an ISO file has been mounted and accessed. This detection leverages the Endpoint.Filesystem data model, specifically monitoring file creation events in the Windows Recent folder. This activity is significant as it may indicate the delivery and execution of potentially malicious payloads via ISO files. If confirmed malicious, this could lead to unauthorized co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Filesystem where Filesystem.file_path IN (\"*\\\\Microsoft\\\\Windows\\\\Recent\\\\*\") Filesystem.file_name IN (\"*.iso.lnk\", \"*.img.lnk\", \"*.vhd.lnk\", \"*vhdx.lnk\") by Filesystem.action Filesystem.dest Filesystem.file_access_time Filesystem.file_create_time Filesystem.file_hash Filesystem.file_modify_time Filesystem.file_name Filesystem.file_path Filesystem.file_acl Filesystem.file_size Filesystem.process_guid Filesystem.process_id Filesystem.user Filesystem.vendor_product | `drop_dm_object_name(Filesystem)` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_iso_lnk_file_creation_filter`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be high depending on the environment and consistent use of ISOs mounting. Restrict to servers, or filter out based on commonly used ISO names. Filter as needed.",
              "refs": "https://www.microsoft.com/security/blog/2021/05/27/new-sophisticated-email-based-attack-from-nobelium/, https://github.com/MHaggis/notes/blob/master/utilities/ISOBuilder.ps1, https://isc.sans.edu/diary/Recent+AZORult+activity/25120, https://tccontre.blogspot.com/2020/01/remcos-rat-evading-windows-defender-av.html",
              "mitre": [
                "T1204.001",
                "T1566.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows ISO LNK File Creation\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.001, T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows ISO LNK File Creation\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be high depending on the environment and consistent use of ISOs mounting. Restrict to servers, or filter out based on commonly used ISO names. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows ISO LNK File Creation”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.503",
              "n": "Windows Kerberos Local Successful Logon",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a local successful authentication event on a Windows endpoint using the Kerberos package. It detects EventCode 4624 with LogonType 3 and source address 127.0.0.1, indicating a login to the built-in local Administrator account. This activity is significant as it may suggest a Kerberos relay attack, a method attackers use to escalate privileges. If confirmed malicious, this could allow an attacker to gain unauthorized access to sensitive systems, execute arbitrary…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4624",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4624 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible, filtering may be required to restrict to workstations vs domain controllers. Filter as needed.",
              "refs": "https://github.com/Dec0ne/KrbRelayUp",
              "mitre": [
                "T1558"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Kerberos Local Successful Logon\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4624. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Kerberos Local Successful Logon\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible, filtering may be required to restrict to workstations vs domain controllers. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Kerberos Local Successful Logon”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.504",
              "n": "Windows Known Abused DLL Created",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of Dynamic Link Libraries (DLLs) with a known history of exploitation in atypical locations. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and filesystem events. This activity is significant as it may indicate DLL search order hijacking or sideloading, techniques used by attackers to execute arbitrary code, maintain persistence, or escalate privileges. If confirmed malicious, this activity could allow attac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This analytic may flag instances where DLLs are loaded by user mode programs for entirely legitimate and benign purposes. It is important for users to be aware that false positives are not only possible but likely, and that careful tuning of this analytic is necessary to distinguish between malicious activity and normal, everyday operations of applications. This may involve adjusting thresholds, whitelisting known good software, or incorporating additional context from other security tools and logs to reduce the rate of false positives.",
              "refs": "https://attack.mitre.org/techniques/T1574/002/, https://hijacklibs.net/api/, https://wietze.github.io/blog/hijacking-dlls-in-windows, https://github.com/olafhartong/sysmon-modular/pull/195/files",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Known Abused DLL Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Known Abused DLL Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This analytic may flag instances where DLLs are loaded by user mode programs for entirely legitimate and benign purposes. It is important for users to be aware that false positives are not only possible but likely, and that careful tuning of this analytic is necessary to distinguish between malicious activity and normal, everyday operations of applications. This may involve adjusting thresholds, whitelisting known good software, or incorporating additional context from other security tools and logs to reduce the rate of false positives.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Known Abused DLL Created”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.505",
              "n": "Windows Ldifde Directory Object Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of Ldifde.exe, a command-line utility for creating, modifying, or deleting LDAP directory objects. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution and command-line arguments. Monitoring Ldifde.exe is significant because it can be used by attackers to manipulate directory objects, potentially leading to unauthorized changes or data exfiltration. If confirmed malicious, this activity could allo…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://lolbas-project.github.io/lolbas/Binaries/Ldifde/, https://media.defense.gov/2023/May/24/2003229517/-1/-1/0/CSA_Living_off_the_Land.PDF, https://twitter.com/0gtweet/status/1564968845726580736?s=20, https://strontic.github.io/xcyclopedia/library/ldifde.exe-45D28FB47E9B6ACC5DCA9FDA3E790210.html",
              "mitre": [
                "T1105",
                "T1069.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Ldifde Directory Object Behavior\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105, T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Ldifde Directory Object Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
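              "_spl_sketch": "Illustrative sketch only (a hypothetical helper field; this is not the shipped ESCU search, whose logic lives in Content Management). Assuming CIM-accelerated Endpoint.Processes data, a first-pass hunt for ldifde.exe directory export/import activity could look like:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"ldifde.exe\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nScope to your process index and allow-list legitimate directory-migration hosts before turning anything like this into an alert.",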
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Ldifde Directory Object Behavior”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.506",
              "n": "Windows List ENV Variables Via SET Command From Uncommon Parent",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a suspicious process command line fetching environment variables using the cmd.exe \"set\" command, with a non-shell parent process. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and parent process names. This activity could be significant as it is commonly associated with malware like Qakbot, which uses this technique to gather system information. If confirmed malicious, this behavior could indicate that …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "shell process that are not included in this search may cause False positive. Filter as needed.",
              "refs": "https://twitter.com/pr0xylife/status/1585612370441031680?s=46&t=Dc3CJi4AnM-8rNoacLbScg",
              "mitre": [
                "T1055"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows List ENV Variables Via SET Command From Uncommon Parent\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows List ENV Variables Via SET Command From Uncommon Parent\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: shell process that are not included in this search may cause False positive. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
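              "_spl_sketch": "Illustrative sketch only (a hypothetical helper field; not the shipped ESCU search). Assuming CIM-accelerated Endpoint.Processes data, the idea described above (cmd.exe running \"set\" under a non-shell parent) could be approximated as follows; the command-line wildcards and the parent allow-list are assumptions to tune:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"cmd.exe\" Processes.process IN (\"*cmd /c set*\", \"*cmd.exe /c set*\")\n    NOT Processes.parent_process_name IN (\"cmd.exe\", \"powershell.exe\", \"explorer.exe\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```",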
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows List ENV Variables Via SET Command From Uncommon Parent”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.507",
              "n": "Windows Local Administrator Credential Stuffing",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to authenticate using the built-in local Administrator account across more than 30 endpoints within a 5-minute window. It leverages Windows Event Logs, specifically events 4625 and 4624, to identify this behavior. This activity is significant as it may indicate an adversary attempting to validate stolen local credentials across multiple hosts, potentially leading to privilege escalation. If confirmed malicious, this could allow the attacker to gain widespr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4624, Windows Event Log Security 4625",
              "q": "# Shared SPL: intentional — see UC-10.2.103\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$host_targets$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4624, Windows Event Log Security 4625 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Vulnerability scanners or system administration tools may also trigger this detection. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1110/004/, https://attack.mitre.org/techniques/T1110/, https://www.blackhillsinfosec.com/wide-spread-local-admin-testing/, https://www.pentestpartners.com/security-blog/admin-password-re-use-dont-do-it/, https://www.praetorian.com/blog/microsofts-local-administrator-password-solution-laps/, https://wiki.porchetta.industries/smb-protocol/password-spraying",
              "mitre": [
                "T1110.004"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Local Administrator Credential Stuffing\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4624, Windows Event Log Security 4625. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Local Administrator Credential Stuffing\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Vulnerability scanners or system administration tools may also trigger this detection. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
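              "_spl_sketch": "Illustrative sketch only (a hypothetical helper field; not the shipped ESCU search). Assuming the CIM Authentication data model is populated from Security 4624/4625 events, the \"more than 30 endpoints in 5 minutes\" idea could be approximated like this; the account name, span, and threshold are assumptions taken from the description:\n\n```spl\n| tstats summariesonly=true count from datamodel=Authentication\n  where Authentication.user=\"Administrator\"\n  by _time span=5m Authentication.src Authentication.dest\n| stats dc(Authentication.dest) as unique_targets by _time Authentication.src\n| where unique_targets > 30\n```",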
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Local Administrator Credential Stuffing”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.508",
              "n": "Windows LSA Secrets NoLMhash Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry related to the Local Security Authority (LSA) NoLMHash setting. It identifies when the registry value is set to 0, indicating that the system will store passwords in the weaker Lan Manager (LM) hash format. This detection leverages registry activity logs from endpoint data sources like Sysmon or EDR tools. Monitoring this activity is crucial as it can indicate attempts to weaken password storage security. If confirmed malicious…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records registry activity from your hosts to populate the endpoint data model in the registry node. This is typically populated via endpoint detection-and-response product, such as Carbon Black or endpoint data sources, such as Sysmon. The data used for this search is typically generated via logs that report reads and writes to…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator may change this registry setting.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-347a",
              "mitre": [
                "T1003.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows LSA Secrets NoLMhash Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows LSA Secrets NoLMhash Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator may change this registry setting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
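              "_spl_sketch": "Illustrative sketch only (a hypothetical helper field; not the shipped ESCU search). Assuming CIM-accelerated Endpoint.Registry data from Sysmon EventID 13, the registry change described above could be surfaced like this; the value name and data literals mirror the description and may need adjusting to your TA's output:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=\"NoLMHash\" Registry.registry_value_data=\"0x00000000\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_data\n```",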
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows LSA Secrets NoLMhash Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.509",
              "n": "Windows Mark Of The Web Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a suspicious process that deletes the Mark-of-the-Web (MOTW) data stream. It leverages Sysmon EventCode 23 to detect when a file's Zone.Identifier stream is removed. This activity is significant because it is a common technique used by malware, such as Ave Maria RAT, to bypass security restrictions on files downloaded from the internet. If confirmed malicious, this behavior could allow an attacker to execute potentially harmful files without triggering security …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 23",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 23 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1553/005/, https://github.com/nmantani/PS-MOTW#remove-motwps1",
              "mitre": [
                "T1553.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Mark Of The Web Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 23. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1553.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Mark Of The Web Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
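              "_spl_sketch": "Illustrative sketch only (a hypothetical helper field; not the shipped ESCU search). Assuming the Splunk Add-on for Sysmon default sourcetype, Zone.Identifier stream deletions (Sysmon EventCode 23) could be surfaced like this; add your own index scoping:\n\n```spl\nsourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=23 TargetFilename=\"*:Zone.Identifier\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, User, Image, TargetFilename\n```",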
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Mark Of The Web Bypass”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.510",
              "n": "Windows Masquerading Msdtc Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of msdtc.exe with specific command-line parameters (-a or -b), which are indicative of the PlugX malware. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because PlugX uses these parameters to masquerade its malicious operations within legitimate processes, making it harder to detect. If confirmed malicious, this behavior could allow …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.plugx",
              "mitre": [
                "T1036"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Masquerading Msdtc Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Masquerading Msdtc Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
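              "_spl_sketch": "Illustrative sketch only (a hypothetical helper field; not the shipped ESCU search). Assuming CIM-accelerated Endpoint.Processes data, msdtc.exe launched with the -a or -b switches described above could be approximated as follows; the wildcard patterns are assumptions to tune:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"msdtc.exe\" Processes.process IN (\"* -a*\", \"* -b*\")\n  by Processes.dest Processes.user Processes.process Processes.process_path\n```",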
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Masquerading Msdtc Process”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.511",
              "n": "Windows Mimikatz Binary Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of the native mimikatz.exe binary on Windows systems, including instances where the binary is renamed. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and original file names. This activity is significant because Mimikatz is a widely used tool for extracting authentication credentials, posing a severe security risk. If confirmed malicious, this activity could allow attackers to obtain sensitive credent…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as this is directly looking for Mimikatz, the credential dumping utility.",
              "refs": "https://www.cisa.gov/uscert/sites/default/files/publications/aa22-320a_joint_csa_iranian_government-sponsored_apt_actors_compromise_federal%20network_deploy_crypto%20miner_credential_harvester.pdf, https://www.varonis.com/blog/what-is-mimikatz, https://media.defense.gov/2023/May/24/2003229517/-1/-1/0/CSA_Living_off_the_Land.PDF",
              "mitre": [
                "T1003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Mimikatz Binary Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Mimikatz Binary Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as this is directly looking for Mimikatz, the credential dumping utility.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
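              "_spl_sketch": "Illustrative sketch only (a hypothetical helper field; not the shipped ESCU search). Assuming CIM-accelerated Endpoint.Processes data with original_file_name populated, the rename-resistant check described above (process name or original file name of mimikatz.exe) could look like:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"mimikatz.exe\" OR Processes.original_file_name=\"mimikatz.exe\"\n  by Processes.dest Processes.user Processes.process_name Processes.original_file_name Processes.process\n```",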
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Mimikatz Binary Execution”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.512",
              "n": "Windows MMC Loaded Script Engine DLL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when a Windows process loads scripting libraries like jscript.dll or vbscript.dll to execute script code on a target system. While these DLLs are legitimate parts of the operating system, their use by unexpected processes or in unusual contexts can indicate malicious activity, such as script-based malware, living-off-the-land techniques, or automated attacks. This detection monitors which processes load these libraries, along with their command-line arguments an…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and ImageLoaded (Like sysmon EventCode 7) from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA. Also be sure to include those monitored dll to your own sysmon config.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "built in Windows tools such as Group Policy Management, Task Scheduler, Event Viewer, or custom MMC snap-ins may load vbscript.dll or jscript.dll to support scripted extensions, automation, or legacy management components. Filter as needed.",
              "refs": "https://www.securonix.com/blog/analyzing-fluxconsole-using-tax-themed-lures-threat-actors-exploit-windows-management-console-to-deliver-backdoor-payloads/, https://research.checkpoint.com/2019/microsoft-management-console-mmc-vulnerabilities/",
              "mitre": [
                "T1620"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MMC Loaded Script Engine DLL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1620. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MMC Loaded Script Engine DLL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: built in Windows tools such as Group Policy Management, Task Scheduler, Event Viewer, or custom MMC snap-ins may load vbscript.dll or jscript.dll to support scripted extensions, automation, or legacy management components. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows MMC Loaded Script Engine DLL”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.513",
              "n": "Windows Modify Registry AuthenticationLevelOverride",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry key \"AuthenticationLevelOverride\" within the Terminal Server Client settings. It leverages data from the Endpoint.Registry datamodel to identify changes where the registry value is set to 0x00000000. This activity is significant as it may indicate an attempt to override authentication levels for remote connections, a tactic used by DarkGate malware for malicious installations. If confirmed malicious, this could allow attackers …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.darkgate",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry AuthenticationLevelOverride\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry AuthenticationLevelOverride\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry AuthenticationLevelOverride”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.514",
              "n": "Windows Modify Registry Auto Minor Updates",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a suspicious modification to the Windows auto update configuration registry. It detects changes to the registry path \"*\\\\SOFTWARE\\\\Policies\\\\Microsoft\\\\Windows\\\\WindowsUpdate\\\\AU\\\\AutoInstallMinorUpdates\" with a value of \"0x00000000\". This activity is significant as it is commonly used by adversaries, including malware like RedLine Stealer, to bypass detection and deploy additional payloads. If confirmed malicious, this modification could allow attackers to evad…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry where Registry.registry_path=\"*\\\\SOFTWARE\\\\Policies\\\\Microsoft\\\\Windows\\\\WindowsUpdate\\\\AU\\\\AutoInstallMinorUpdates\" AND Registry.registry_value_data=\"0x00000000\" by Registry.action Registry.dest Registry.process_guid Registry.process_id Registry.registry_hive Registry.registry_path Registry.registry_key_name Registry.registry_value_data Registry.registry_value_name Registry.registry_value_type Registry.status Registry.user Registry.vendor_product | `drop_dm_object_name(Registry)` | `security_content_ctime(lastTime)` | `security_content_ctime(firstTime)` | `windows_modify_registry_auto_minor_updates_filter`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://learn.microsoft.com/de-de/security-updates/windowsupdateservices/18127499",
              "mitre": [
                "T1112"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Auto Minor Updates\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Auto Minor Updates\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Auto Minor Updates”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.515",
              "n": "Windows Modify Registry Auto Update Notif",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious modification to the Windows registry that changes the auto-update notification setting to \"Notify before download.\" This detection leverages data from the Endpoint.Registry data model, focusing on specific registry paths and values. This activity is significant because it is a known technique used by adversaries, including malware like RedLine Stealer, to evade detection and potentially deploy additional payloads. If confirmed malicious, this modificat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://learn.microsoft.com/de-de/security-updates/windowsupdateservices/18127499",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Auto Update Notif\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Auto Update Notif\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Auto Update Notif”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.516",
              "n": "Windows Modify Registry Configure BitLocker",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic is developed to detect suspicious registry modifications targeting BitLocker settings. The malware ShrinkLocker alters various registry keys to change how BitLocker handles encryption, potentially bypassing TPM requirements, enabling BitLocker without TPM, and enforcing specific startup key and PIN configurations. Such modifications can weaken system security, making it easier for unauthorized access and data breaches. Detecting these changes is crucial for maintaining robust encry…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://www.bleepingcomputer.com/news/security/new-shrinklocker-ransomware-uses-bitlocker-to-encrypt-your-files/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Configure BitLocker\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Configure BitLocker\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Configure BitLocker”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.517",
              "n": "Windows Modify Registry Default Icon Setting",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications to the Windows registry's default icon settings, a technique associated with Lockbit ransomware. It leverages data from the Endpoint Registry data model, focusing on changes to registry paths under \"*HKCR\\\\*\\\\defaultIcon\\\\(Default)*\". This activity is significant as it is uncommon for normal users to modify these settings, and such changes can indicate ransomware infection or other malware. If confirmed malicious, this could lead to system …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records registry activity from your hosts to populate the endpoint data model in the registry node. This is typically populated via endpoint detection-and-response product, such as Carbon Black or endpoint data sources, such as Sysmon. The data used for this search is typically generated via logs that report reads and writes to…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://blogs.vmware.com/security/2022/10/lockbit-3-0-also-known-as-lockbit-black.html, https://news.sophos.com/en-us/2020/04/24/lockbit-ransomware-borrows-tricks-to-keep-up-with-revil-and-maze/",
              "mitre": [
                "T1112"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Default Icon Setting\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Default Icon Setting\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Default Icon Setting”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.518",
              "n": "Windows Modify Registry Disable RDP",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic is developed to detect suspicious registry modifications that disable Remote Desktop Protocol (RDP) by altering the \"fDenyTSConnections\" key. Changing this key's value to 1 prevents remote connections, which can disrupt remote management and access. Such modifications could indicate an attempt to hinder remote administration or isolate the system from remote intervention, potentially signifying malicious activity.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://www.bleepingcomputer.com/news/security/new-shrinklocker-ransomware-uses-bitlocker-to-encrypt-your-files/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Disable RDP\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Disable RDP\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Disable RDP”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.519",
              "n": "Windows Modify Registry Disable Restricted Admin",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry entry \"DisableRestrictedAdmin,\" which controls the Restricted Admin mode behavior. This detection leverages registry activity logs from endpoint data sources like Sysmon or Carbon Black. Monitoring this activity is crucial as changes to this setting can disable a security feature that limits credential exposure during remote connections. If confirmed malicious, an attacker could weaken security controls, increasing the risk of …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records registry activity from your hosts to populate the endpoint data model in the registry node. This is typically populated via endpoint detection-and-response product, such as Carbon Black or endpoint data sources, such as Sysmon. The data used for this search is typically generated via logs that report reads and writes to…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator may change this registry setting. Filter as needed.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-347a",
              "mitre": [
                "T1112"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Disable Restricted Admin\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Disable Restricted Admin\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator may change this registry setting. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Disable Restricted Admin”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.520",
              "n": "Windows Modify Registry Disable Toast Notifications",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable toast notifications. It leverages data from the Endpoint.Registry datamodel, specifically monitoring changes to the registry path \"*\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\PushNotifications\\\\ToastEnabled*\" with a value set to \"0x00000000\". This activity is significant because disabling toast notifications can prevent users from receiving critical system and application updates, which adversaries like Azorul…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://learn.microsoft.com/en-us/windows-hardware/customize/desktop/unattend/microsoft-windows-remoteassistance-exe-fallowtogethelp, https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Disable Toast Notifications\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Disable Toast Notifications\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
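\n\nIllustrative Detection Logic\n\nFor orientation, the underlying detection logic is approximately the following. This is a minimal sketch reconstructed from the documented registry path and value; treat the ESCU copy in Content Management as authoritative:\n\n```spl\n| tstats `summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\PushNotifications\\\\ToastEnabled*\"\n    Registry.registry_value_data=\"0x00000000\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_data\n```",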
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Disable Toast Notifications”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.521",
              "n": "Windows Modify Registry Disable Win Defender Raw Write Notif",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable the Windows Defender raw write notification feature. It leverages data from the Endpoint.Registry datamodel, specifically monitoring changes to the registry path associated with Windows Defender's real-time protection settings. This activity is significant because disabling raw write notifications can allow malware, such as Azorult, to bypass Windows Defender's behavior monitoring, potentially leading to undetected…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive. Filter as needed.",
              "refs": "https://admx.help/?Category=SystemCenterEndpointProtection&Policy=Microsoft.Policies.Antimalware::real-time_protection_disablerawwritenotification, https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Disable Win Defender Raw Write Notif\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Disable Win Defender Raw Write Notif\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
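\n\nTo check the correlation search’s scheduling state without clicking through Content Management, a REST sketch can help (run it on the ES search head; it requires permission to read saved searches, and the title wildcard is an assumption you should match to the exact detection name):\n\n```spl\n| rest /services/saved/searches splunk_server=local\n| search title=\"*Disable Win Defender Raw Write Notif*\"\n| table title, disabled, cron_schedule, next_scheduled_time\n```",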
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Disable Win Defender Raw Write Notif”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.522",
              "n": "Windows Modify Registry Disable WinDefender Notifications",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious registry modification aimed at disabling Windows Defender notifications. It leverages data from the Endpoint.Registry data model, specifically looking for changes to the registry path \"*\\\\SOFTWARE\\\\Policies\\\\Microsoft\\\\Windows Defender Security Center\\\\Notifications\\\\DisableNotifications\" with a value of \"0x00000001\". This activity is significant as it indicates an attempt to evade detection by disabling security alerts, a technique used by adversaries…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.redline_stealer",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Disable WinDefender Notifications\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Disable WinDefender Notifications\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
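\n\nExample Suppression\n\nAs a concrete form of the lookup-based suppression described above, the following sketch appends a filter to the detection search before its final statistics command. The lookup name `registry_change_allowlist.csv` is hypothetical; you would create it with a `dest` column listing approved hosts:\n\n```spl\n| search NOT [| inputlookup registry_change_allowlist.csv | fields dest]\n```\n\nThe subsearch expands to the allowlisted `dest` values, and `NOT` drops matching results before they generate risk.",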
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Disable WinDefender Notifications”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.523",
              "n": "Windows Modify Registry Disable Windows Security Center Notif",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry aimed at disabling Windows Security Center notifications. It leverages data from the Endpoint.Registry datamodel, specifically monitoring changes to the registry path \"*\\\\Windows\\\\CurrentVersion\\\\ImmersiveShell\\\\UseActionCenterExperience*\" with a value of \"0x00000000\". This activity is significant as it can indicate an attempt by adversaries or malware, such as Azorult, to evade defenses by suppressing critical update notificat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://learn.microsoft.com/en-us/windows-hardware/customize/desktop/unattend/microsoft-windows-remoteassistance-exe-fallowtogethelp, https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Disable Windows Security Center Notif\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Disable Windows Security Center Notif\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
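\n\nRunning the Drilldown Ad Hoc\n\nThe drilldown SPL above uses the `$dest$` token, which Incident Review fills in automatically. To run it ad hoc, substitute a concrete entity (the hostname below is hypothetical) and set the time picker to the window you want to review:\n\n```spl\n| from datamodel Risk.All_Risk\n| search normalized_risk_object IN (\"wkstn-042.corp.example.com\")\n| stats count values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" by normalized_risk_object\n```",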
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Disable Windows Security Center Notif”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.524",
              "n": "Windows Modify Registry DisableRemoteDesktopAntiAlias",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry key \"DisableRemoteDesktopAntiAlias\" with a value set to 0x00000001. This detection leverages data from the Endpoint datamodel, specifically monitoring changes in the Registry node. This activity is significant as it may indicate the presence of DarkGate malware, which alters this registry setting to enhance its remote desktop capabilities. If confirmed malicious, this modification could allow an attacker to maintain persistence…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.darkgate",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry DisableRemoteDesktopAntiAlias\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry DisableRemoteDesktopAntiAlias\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
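\n\nTo spot-check that Sysmon is actually recording this key before relying on the data model, a raw-event sketch like the following can help (the index and sourcetype are assumptions; match them to your Sysmon TA configuration):\n\n```spl\nindex=windows sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=13\n  TargetObject=\"*DisableRemoteDesktopAntiAlias*\"\n| stats count by host, Image, Details\n```",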
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry DisableRemoteDesktopAntiAlias”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.525",
              "n": "Windows Modify Registry DisableSecuritySettings",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable security settings for Terminal Services. It leverages the Endpoint data model, specifically monitoring changes to the registry path associated with Terminal Services security settings. This activity is significant because altering these settings can weaken the security posture of Remote Desktop Services, potentially allowing unauthorized remote access. If confirmed malicious, such modifications could enable attacke…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.darkgate",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry DisableSecuritySettings\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry DisableSecuritySettings\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
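\n\nAfter enabling the detection, confirm risk events are landing in the risk index (a sketch: `risk` is the default ES risk index, and the search_name wildcard is an assumption to match against the exact detection title):\n\n```spl\nindex=risk search_name=\"*DisableSecuritySettings*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by risk_object, risk_object_type\n```",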
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry DisableSecuritySettings”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.526",
              "n": "Windows Modify Registry Disabling WER Settings",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications in the Windows registry to disable Windows Error Reporting (WER) settings. It leverages data from the Endpoint.Registry datamodel, specifically monitoring changes to registry paths related to WER with a value set to \"0x00000001\". This activity is significant as adversaries may disable WER to suppress error notifications, hiding the presence of malicious activities. If confirmed malicious, this could allow attackers to operate undetected, potentially l…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/windows-hardware/customize/desktop/unattend/microsoft-windows-remoteassistance-exe-fallowtogethelp, https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Disabling WER Settings\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Disabling WER Settings\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Disabling WER Settings”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.527",
              "n": "Windows Modify Registry DisAllow Windows App",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry aimed at preventing the execution of specific computer programs. It leverages data from the Endpoint.Registry datamodel, focusing on changes to the registry path \"*\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Policies\\\\Explorer\\\\DisallowRun*\" with a value of \"0x00000001\". This activity is significant as it can indicate an attempt to disable security tools, a tactic used by malware like Azorult. If confirmed malicious, this c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive. Filter as needed.",
              "refs": "https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry DisAllow Windows App\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry DisAllow Windows App\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry DisAllow Windows App”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.528",
              "n": "Windows Modify Registry Do Not Connect To Win Update",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious modification to the Windows registry that disables automatic updates. It leverages data from the Endpoint datamodel, specifically monitoring changes to the registry path \"*\\\\SOFTWARE\\\\Policies\\\\Microsoft\\\\Windows\\\\WindowsUpdate\\\\DoNotConnectToWindowsUpdateInternetLocations\" with a value of \"0x00000001\". This activity is significant as it can be used by adversaries, including malware like RedLine Stealer, to evade detection and prevent the system from r…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://learn.microsoft.com/de-de/security-updates/windowsupdateservices/18127499, https://admx.help/?Category=Windows_10_2016&Policy=Microsoft.Policies.WindowsUpdate::DoNotConnectToWindowsUpdateInternetLocations",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Do Not Connect To Win Update\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Do Not Connect To Win Update\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Do Not Connect To Win Update”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.529",
              "n": "Windows Modify Registry DontShowUI",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows Error Reporting registry key \"DontShowUI\" to suppress error reporting dialogs. It leverages data from the Endpoint datamodel's Registry node to identify changes where the registry value is set to 0x00000001. This activity is significant as it is commonly associated with DarkGate malware, which uses this modification to avoid detection during its installation. If confirmed malicious, this behavior could allow attackers to maintain a low …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.darkgate",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry DontShowUI\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry DontShowUI\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry DontShowUI”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.530",
              "n": "Windows Modify Registry EnableLinkedConnections",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious modification to the Windows registry setting for EnableLinkedConnections. It leverages data from the Endpoint.Registry datamodel to identify changes where the registry path is \"*\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Policies\\\\System\\\\EnableLinkedConnections\" and the value is set to \"0x00000001\". This activity is significant because enabling linked connections can allow network shares to be accessed with both standard and administrator-level privileges,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2023/07/06/the-five-day-job-a-blackbyte-ransomware-intrusion-case-study/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry EnableLinkedConnections\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry EnableLinkedConnections\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry EnableLinkedConnections”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.531",
              "n": "Windows Modify Registry LongPathsEnabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a modification to the Windows registry setting \"LongPathsEnabled,\" which allows file paths longer than 260 characters. This detection leverages data from the Endpoint.Registry datamodel, focusing on changes to the specific registry path and value. This activity is significant because adversaries, including malware like BlackByte, exploit this setting to bypass file path limitations, potentially aiding in evasion techniques. If confirmed malicious, this modification…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2023/07/06/the-five-day-job-a-blackbyte-ransomware-intrusion-case-study/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry LongPathsEnabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry LongPathsEnabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry LongPathsEnabled”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.532",
              "n": "Windows Modify Registry No Auto Reboot With Logon User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious modification to the Windows registry that disables automatic reboot with a logged-on user. This detection leverages the Endpoint data model to identify changes to the registry path `SOFTWARE\\Policies\\Microsoft\\Windows\\WindowsUpdate\\AU\\NoAutoRebootWithLoggedOnUsers` with a value of `0x00000001`. This activity is significant as it is commonly used by adversaries, including malware like RedLine Stealer, to evade detection and maintain persistence. If conf…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://learn.microsoft.com/de-de/security-updates/windowsupdateservices/18127499",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry No Auto Reboot With Logon User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry No Auto Reboot With Logon User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry No Auto Reboot With Logon User”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.533",
              "n": "Windows Modify Registry No Auto Update",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a suspicious modification to the Windows registry that disables automatic updates. It detects changes to the registry path `SOFTWARE\\Policies\\Microsoft\\Windows\\WindowsUpdate\\AU\\NoAutoUpdate` with a value of `0x00000001`. This activity is significant as it is commonly used by adversaries, including malware like RedLine Stealer, to evade detection and maintain persistence. If confirmed malicious, this could allow attackers to bypass security updates, leaving the s…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://learn.microsoft.com/de-de/security-updates/windowsupdateservices/18127499",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry No Auto Update\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry No Auto Update\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry No Auto Update”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.534",
              "n": "Windows Modify Registry NoChangingWallPaper",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry aimed at preventing wallpaper changes. It leverages data from the Endpoint.Registry datamodel, specifically monitoring changes to the \"NoChangingWallPaper\" registry value. This activity is significant as it is a known tactic used by Rhysida ransomware to enforce a malicious wallpaper, thereby limiting user control over system settings. If confirmed malicious, this registry change could indicate a ransomware infection, leading t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-319a",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry NoChangingWallPaper\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry NoChangingWallPaper\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry NoChangingWallPaper”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.535",
              "n": "Windows Modify Registry ProxyEnable",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry key \"ProxyEnable\" to enable proxy settings. It leverages data from the Endpoint.Registry datamodel, specifically monitoring changes to the \"Internet Settings\\ProxyEnable\" registry path. This activity is significant as it is commonly exploited by malware and adversaries to establish proxy communication, potentially connecting to malicious Command and Control (C2) servers. If confirmed malicious, this could allow attackers to red…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.darkgate",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry ProxyEnable\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry ProxyEnable\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry ProxyEnable”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.536",
              "n": "Windows Modify Registry ProxyServer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry key for setting up a proxy server. It leverages data from the Endpoint.Registry datamodel, focusing on changes to the \"Internet Settings\\\\ProxyServer\" registry path. This activity is significant as it can indicate malware or adversaries configuring a proxy to facilitate unauthorized communication with Command and Control (C2) servers. If confirmed malicious, this could allow attackers to establish persistent, covert channels fo…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.darkgate",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry ProxyServer\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry ProxyServer\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive, however is not common. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry ProxyServer”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.537",
              "n": "Windows Modify Registry Qakbot Binary Data Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a suspicious registry entry by Qakbot malware, characterized by 8 random registry value names with encrypted binary data. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on registry modifications under the \"SOFTWARE\\\\Microsoft\\\\\" path by processes like explorer.exe. This activity is significant as it indicates potential Qakbot infection, which uses the registry to store malicious code or configuration data. …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/decrypting-qakbots-encrypted-registry-keys/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Qakbot Binary Data Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Qakbot Binary Data Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Qakbot Binary Data Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.538",
              "n": "Windows Modify Registry Regedit Silent Reg Import",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the Windows registry using the regedit.exe application with the silent mode parameter. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant because the silent mode allows registry changes without user confirmation, which can be exploited by adversaries to import malicious registry settings. If confirmed malicious, this could enable attackers to pe…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may execute this command that may cause some false positive. Filter as needed.",
              "refs": "https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/, https://www.techtarget.com/searchwindowsserver/tip/Command-line-options-for-Regeditexe",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Regedit Silent Reg Import\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Regedit Silent Reg Import\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may execute this command that may cause some false positive. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Regedit Silent Reg Import”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.539",
              "n": "Windows Modify Registry Suppress Win Defender Notif",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications in the Windows registry to suppress Windows Defender notifications. It leverages data from the Endpoint.Registry datamodel, specifically targeting changes to the \"Notification_Suppress\" registry value. This activity is significant because adversaries, including those deploying Azorult malware, use this technique to bypass Windows Defender and disable critical notifications. If confirmed malicious, this behavior could allow attackers to evade detection…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://learn.microsoft.com/en-us/windows-hardware/customize/desktop/unattend/microsoft-windows-remoteassistance-exe-fallowtogethelp, https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Suppress Win Defender Notif\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Suppress Win Defender Notif\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Suppress Win Defender Notif”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.540",
              "n": "Windows Modify Registry Tamper Protection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious modification to the Windows Defender Tamper Protection registry setting. It leverages data from the Endpoint datamodel, specifically targeting changes where the registry path is set to disable Tamper Protection. This activity is significant because disabling Tamper Protection can allow adversaries to make further undetected changes to Windows Defender settings, potentially leading to reduced security on the system. If confirmed malicious, this could en…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.redline_stealer",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Tamper Protection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Tamper Protection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
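              "_spl_sketch": "Illustrative sketch only — not the ESCU source; view the authoritative search in Content Management. A minimal approximation of the underlying detection over the CIM `Registry` node, assuming Sysmon EventID 13 is mapped to `Endpoint.Registry` and that Defender's Tamper Protection state lives under the `TamperProtection` value (verify the exact path against the shipped search):\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\Microsoft\\\\Windows Defender\\\\Features\\\\TamperProtection\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n```",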
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry Tamper Protection”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.541",
              "n": "Windows Modify Registry UpdateServiceUrlAlternate",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious modification to the Windows Update configuration registry key, specifically targeting the UpdateServiceUrlAlternate setting. It leverages data from the Endpoint.Registry datamodel to identify changes to this registry path. This activity is significant because adversaries, including malware like RedLine Stealer, exploit this technique to bypass detection and deploy additional payloads. If confirmed malicious, this modification could allow attackers to r…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://learn.microsoft.com/de-de/security-updates/windowsupdateservices/18127499",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry UpdateServiceUrlAlternate\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry UpdateServiceUrlAlternate\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry UpdateServiceUrlAlternate”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.542",
              "n": "Windows Modify Registry ValleyRat PWN Reg Entry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows Registry specifically targeting `.pwn` file associations related to the ValleyRAT malware. ValleyRAT may create or alter registry entries to associate `.pwn` files with malicious processes, allowing it to execute harmful scripts or commands when these files are opened. By monitoring for unusual changes in registry keys linked to `.pwn` extensions, this detection enables security analysts to identify potential ValleyRAT infection attempt…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.proofpoint.com/us/blog/threat-insight/chinese-malware-appears-earnest-across-cybercrime-threat-landscape, https://www.fortinet.com/blog/threat-research/valleyrat-campaign-targeting-chinese-speakers",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry ValleyRat PWN Reg Entry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry ValleyRat PWN Reg Entry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry ValleyRat PWN Reg Entry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.543",
              "n": "Windows Modify Registry With MD5 Reg Key Name",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potentially malicious registry modifications characterized by MD5-like registry key names. It leverages the Endpoint data model to identify registry entries under the SOFTWARE path with 32-character hexadecimal names, a technique often used by NjRAT malware for fileless storage of keylogs and .DLL plugins. This activity is significant as it can indicate the presence of NjRAT or similar malware, which can lead to unauthorized data access and persistent threats withi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the Filesystem responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.njrat",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry With MD5 Reg Key Name\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry With MD5 Reg Key Name\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry With MD5 Reg Key Name”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.544",
              "n": "Windows Modify Registry WuServer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications to the Windows Update Server (WUServer) registry settings. It leverages data from the Endpoint.Registry data model to identify changes in the registry path associated with Windows Update configurations. This activity is significant because adversaries, including malware like RedLine Stealer, exploit this technique to bypass detection and deploy additional payloads. If confirmed malicious, this registry modification could allow attackers to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry where Registry.registry_path=\"*\\\\SOFTWARE\\\\Policies\\\\Microsoft\\\\Windows\\\\WindowsUpdate\\\\WUServer\" by Registry.action Registry.dest Registry.process_guid Registry.process_id Registry.registry_hive Registry.registry_path Registry.registry_key_name Registry.registry_value_data Registry.registry_value_name Registry.registry_value_type Registry.status Registry.user Registry.vendor_product | `drop_dm_object_name(Registry)` | `security_content_ctime(lastTime)` | `security_content_ctime(firstTime)` | `windows_modify_registry_wuserver_filter`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://learn.microsoft.com/de-de/security-updates/windowsupdateservices/18127499",
              "mitre": [
                "T1112"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry WuServer\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry WuServer\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry WuServer”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.545",
              "n": "Windows Modify Registry wuStatusServer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious modifications to the Windows Update configuration registry, specifically targeting the WUStatusServer key. It leverages data from the Endpoint datamodel to detect changes in the registry path associated with Windows Update settings. This activity is significant as it is commonly used by adversaries, including malware like RedLine Stealer, to bypass detection and deploy additional payloads. If confirmed malicious, this modification could allow attacker…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry where Registry.registry_path=\"*\\\\SOFTWARE\\\\Policies\\\\Microsoft\\\\Windows\\\\WindowsUpdate\\\\WUStatusServer\" by Registry.action Registry.dest Registry.process_guid Registry.process_id Registry.registry_hive Registry.registry_path Registry.registry_key_name Registry.registry_value_data Registry.registry_value_name Registry.registry_value_type Registry.status Registry.user Registry.vendor_product | `drop_dm_object_name(Registry)` | `security_content_ctime(lastTime)` | `security_content_ctime(firstTime)` | `windows_modify_registry_wustatusserver_filter`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://learn.microsoft.com/de-de/security-updates/windowsupdateservices/18127499",
              "mitre": [
                "T1112"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry wuStatusServer\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry wuStatusServer\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Modify Registry wuStatusServer”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.546",
              "n": "Windows Mshta Execution In Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of mshta.exe via registry entries to run malicious scripts. It leverages registry activity logs to identify entries containing \"mshta,\" \"javascript,\" \"vbscript,\" or \"WScript.Shell.\" This behavior is significant as it indicates potential fileless malware, such as Kovter, which uses encoded scripts in the registry to persist and execute without files. If confirmed malicious, this activity could allow attackers to maintain persistence, execute arbitrary …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 13 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://redcanary.com/threat-detection-report/techniques/mshta/, https://learn.microsoft.com/en-us/microsoft-365/security/intelligence/fileless-threats?view=o365-worldwide",
              "mitre": [
                "T1218.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Mshta Execution In Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Mshta Execution In Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Mshta Execution In Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.547",
              "n": "Windows MsiExec HideWindow Rundll32 Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the msiexec.exe process with the /HideWindow and rundll32 command-line parameters. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events and command-line arguments. This activity is significant because it is a known tactic used by malware like QakBot to mask malicious operations under legitimate system processes. If confirmed malicious, this behavior could allow an attacker to download additional p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Other possible 3rd party msi software installers use this technique as part of its installation process.",
              "refs": "https://twitter.com/Max_Mal_/status/1736392741758611607, https://twitter.com/1ZRR4H/status/1735944522075386332",
              "mitre": [
                "T1218.007"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MsiExec HideWindow Rundll32 Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MsiExec HideWindow Rundll32 Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Other possible 3rd party msi software installers use this technique as part of its installation process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows MsiExec HideWindow Rundll32 Execution”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.548",
              "n": "Windows MSIExec Spawn Discovery Command",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects MSIExec spawning multiple discovery commands, such as Cmd.exe or PowerShell.exe. This behavior is identified using data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where MSIExec is the parent process. This activity is significant because MSIExec typically does not spawn child processes other than itself, making this behavior highly suspicious. If confirmed malicious, an attacker could use these discovery commands to gather…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present with MSIExec spawning Cmd or PowerShell. Filtering will be needed. In addition, add other known discovery processes to enhance query.",
              "refs": "https://thedfirreport.com/2022/06/06/will-the-real-msiexec-please-stand-up-exploit-leads-to-data-exfiltration/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.007/T1218.007.md",
              "mitre": [
                "T1218.007"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MSIExec Spawn Discovery Command\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MSIExec Spawn Discovery Command\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present with MSIExec spawning Cmd or PowerShell. Filtering will be needed. In addition, add other known discovery processes to enhance query.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
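              "qs": "``` Hypothetical CIM sketch: approximates MSIExec spawning discovery commands via the Endpoint.Processes data model. Not the shipped ESCU logic; the child-process list is an illustrative starting set, extend it per the tuning notes above ```\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.parent_process_name=msiexec.exe Processes.process_name IN (\"cmd.exe\", \"powershell.exe\", \"ipconfig.exe\", \"net.exe\", \"whoami.exe\") by Processes.dest, Processes.user, Processes.parent_process_name, Processes.process_name, Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`",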
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows MSIExec Spawn Discovery Command”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.549",
              "n": "Windows MSIExec Spawn WinDBG",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the unusual behavior of MSIExec spawning WinDBG. It detects this activity by analyzing endpoint telemetry data, specifically looking for instances where 'msiexec.exe' is the parent process of 'windbg.exe'. This behavior is significant as it may indicate an attempt to debug or tamper with system processes, which is uncommon in typical user activity and could signify malicious intent. If confirmed malicious, this activity could allow an attacker to manipulate or i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will only be present if the MSIExec process legitimately spawns WinDBG. Filter as needed.",
              "refs": "https://github.com/PaloAltoNetworks/Unit42-timely-threat-intel/blob/main/2023-10-25-IOCs-from-DarkGate-activity.txt",
              "mitre": [
                "T1218.007"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MSIExec Spawn WinDBG\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MSIExec Spawn WinDBG\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will only be present if the MSIExec process legitimately spawns WinDBG. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
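              "qs": "``` Hypothetical CIM sketch: approximates the msiexec.exe to windbg.exe parent/child pair over the Endpoint.Processes data model. Not the shipped ESCU logic; confirm field names and macros in your environment ```\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.parent_process_name=msiexec.exe Processes.process_name=windbg.exe by Processes.dest, Processes.user, Processes.parent_process_name, Processes.process_name, Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`",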
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows MSIExec Spawn WinDBG”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.550",
              "n": "Windows MSIExec Unregister DLLRegisterServer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of msiexec.exe with the /z switch parameter, which is used to unload DLLRegisterServer. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs, including command-line arguments. This activity is significant because unloading DLLRegisterServer can be indicative of an attempt to deregister a DLL, potentially disrupting legitimate services or hiding malicious activity. If confirmed malicious, this co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This analytic will need to be tuned for your environment based on legitimate usage of msiexec.exe. Filter as needed.",
              "refs": "https://thedfirreport.com/2022/06/06/will-the-real-msiexec-please-stand-up-exploit-leads-to-data-exfiltration/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.007/T1218.007.md",
              "mitre": [
                "T1218.007"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MSIExec Unregister DLLRegisterServer\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MSIExec Unregister DLLRegisterServer\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This analytic will need to be tuned for your environment based on legitimate usage of msiexec.exe. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
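              "qs": "``` Hypothetical CIM sketch: approximates msiexec.exe invoked with the /z (or -z) unregister switch via the Endpoint.Processes data model. Not the shipped ESCU logic; the wildcard patterns are illustrative ```\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=msiexec.exe (Processes.process=\"* /z*\" OR Processes.process=\"* -z*\") by Processes.dest, Processes.user, Processes.process_name, Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`",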
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows MSIExec Unregister DLLRegisterServer”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.551",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a single source endpoint failing to authenticate with 30 unique disabled domain users using the Kerberos protocol within 5 minutes. It leverages Windows Security Event 4768, focusing on failure code `0x12`, indicating revoked credentials. This activity is significant as it may indicate a Password Spraying attack targeting disabled accounts, a tactic used by adversaries to gain initial access or elevate privileges. If confirmed malicious, this could lead to unauthor…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4768",
              "q": "`wineventlog_security` EventCode=4768 TargetUserName!=*$ Status=0x12\n      | bucket span=5m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as user values(dest) as dest\n        BY _time, IpAddress\n      | where unique_accounts > 30\n      | `windows_multiple_disabled_users_failed_to_authenticate_wth_kerberos_filter`",
              "m": "To successfully implement this search, you need to be ingesting Domain Controller and Kerberos events. The Advanced Security Audit policy setting `Audit Kerberos Authentication Service` within `Account Logon` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A host failing to authenticate with multiple disabled domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, multi-user systems missconfigured systems.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4768. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A host failing to authenticate with multiple disabled domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, multi-user systems missconfigured systems.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
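              "qs": "``` Hypothetical CIM approximation: the detection keys on raw EventCode 4768 with failure code 0x12, which the Authentication data model cannot see. This sketch only flags generic failure bursts per source; treat it as a coarser fallback ```\n| tstats summariesonly=true dc(Authentication.user) as unique_accounts values(Authentication.user) as tried_accounts from datamodel=Authentication where Authentication.action=failure by _time span=5m, Authentication.src\n| `drop_dm_object_name(Authentication)`\n| where unique_accounts > 30",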
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for a single address trying many disabled accounts over Kerberos in a few minutes, so we catch password-spray style activity against ghost accounts before it widens.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.552",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a source endpoint failing to authenticate with 30 unique invalid domain users using the Kerberos protocol. This detection leverages EventCode 4768, specifically looking for failure code 0x6, indicating the user is not found in the Kerberos database. This activity is significant as it may indicate a Password Spraying attack, where an adversary attempts to gain initial access or elevate privileges. If confirmed malicious, this could lead to unauthorized access or …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4768",
              "q": "`wineventlog_security` EventCode=4768 TargetUserName!=*$ Status=0x6\n      | bucket span=5m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as user values(dest) as dest\n        BY _time, IpAddress\n      | where unique_accounts > 30\n      | `windows_multiple_invalid_users_fail_to_authenticate_using_kerberos_filter`",
              "m": "To successfully implement this search, you need to be ingesting Domain Controller and Kerberos events. The Advanced Security Audit policy setting `Audit Kerberos Authentication Service` within `Account Logon` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A host failing to authenticate with multiple invalid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, multi-user systems and missconfigured systems.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4768. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A host failing to authenticate with multiple invalid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, multi-user systems and missconfigured systems.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
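              "qs": "``` Hypothetical CIM approximation: like the raw 4768/0x6 search but over the Authentication data model, which drops the invalid-principal distinction. Treat as a broader failed-auth spray sketch and tune the threshold to your baseline ```\n| tstats summariesonly=true count dc(Authentication.user) as unique_accounts from datamodel=Authentication where Authentication.action=failure by _time span=5m, Authentication.src, Authentication.dest\n| `drop_dm_object_name(Authentication)`\n| where unique_accounts > 30\n| sort -unique_accounts",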
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for a single address failing many user logons that should not be active, so we catch guessing against stale identities.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.553",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a single source endpoint failing to authenticate with 30 unique invalid users using the NTLM protocol. It leverages EventCode 4776 from Domain Controller logs, focusing on error code 0xC0000064, which indicates non-existent usernames. This behavior is significant as it may indicate a Password Spraying attack, where an adversary attempts to gain initial access or elevate privileges. If confirmed malicious, this activity could lead to unauthorized access, privilege e…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4776",
              "q": "`wineventlog_security` EventCode=4776 TargetUserName!=*$ Status=0xc0000064\n      | bucket span=5m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as tried_accounts values(dest) as dest\n        BY _time, Workstation\n      | where unique_accounts > 30\n      | `windows_multiple_invalid_users_failed_to_authenticate_using_ntlm_filter`",
              "m": "To successfully implement this search, you need to be ingesting Domain Controller events. The Advanced Security Audit policy setting `Audit Credential Validation' within `Account Logon` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A host failing to authenticate with multiple invalid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems. If this detection triggers on a host other than a Domain Controller, the behavior could represent a password spraying attack against the host's local accounts.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/audit-credential-validation, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4776",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4776. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A host failing to authenticate with multiple invalid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems. If this detection triggers on a host other than a Domain Controller, the behavior could represent a password spraying attack against the host's local accounts.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
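              "qs": "``` Hypothetical threshold-tuning companion over the Authentication data model, not the shipped ESCU search (which uses raw 4776/0xC0000064). Profiles per-source failure spread so you can judge whether the >30 cutoff fits your environment ```\n| tstats summariesonly=true dc(Authentication.user) as unique_accounts from datamodel=Authentication where Authentication.action=failure by _time span=5m, Authentication.src\n| `drop_dm_object_name(Authentication)`\n| stats max(unique_accounts) as peak_unique avg(unique_accounts) as avg_unique by src\n| sort -peak_unique",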
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for many failed or unknown username attempts in one window from a single source, so we catch account enumeration and guessing early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.554",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a single source endpoint failing to authenticate with 30 unique valid users using the NTLM protocol. It leverages EventCode 4776 from Domain Controller logs, focusing on error code 0xC000006A, which indicates a bad password. This behavior is significant as it may indicate a Password Spraying attack, where an adversary attempts to gain initial access or elevate privileges. If confirmed malicious, this activity could lead to unauthorized access to sensitive inform…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4776",
              "q": "`wineventlog_security` EventCode=4776 TargetUserName!=*$ Status=0xC000006A\n      | bucket span=5m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as tried_accounts values(dest) as dest\n        BY _time, Workstation\n      | where unique_accounts > 30\n      | `windows_multiple_users_failed_to_authenticate_from_host_using_ntlm_filter`",
              "m": "To successfully implement this search, you need to be ingesting Domain Controller events. The Advanced Security Audit policy setting `Audit Credential Validation` within `Account Logon` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A host failing to authenticate with multiple valid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems. If this detection triggers on a host other than a Domain Controller, the behavior could represent a password spraying attack against the host's local accounts.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/audit-credential-validation, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4776",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4776. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A host failing to authenticate with multiple valid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems. If this detection triggers on a host other than a Domain Controller, the behavior could represent a password spraying attack against the host's local accounts.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
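              "qs": "``` Hypothetical CIM sketch with suppression, not the shipped ESCU logic. The known_scanner_sources lookup is an illustrative name for the lookup-based suppressions described in the tuning notes ```\n| tstats summariesonly=true dc(Authentication.user) as unique_accounts from datamodel=Authentication where Authentication.action=failure by _time span=5m, Authentication.src\n| `drop_dm_object_name(Authentication)`\n| search NOT [| inputlookup known_scanner_sources | fields src]\n| where unique_accounts > 30",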
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for many failed or unknown username attempts in one window from a single source, so we catch account enumeration and guessing early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.555",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a single source endpoint failing to authenticate with 30 unique users using the Kerberos protocol. It leverages EventCode 4771 with Status 0x18, indicating wrong password attempts, and aggregates these events over a 5-minute window. This behavior is significant as it may indicate a Password Spraying attack, where an adversary attempts to gain initial access or elevate privileges in an Active Directory environment. If confirmed malicious, this activity could lead…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4771",
              "q": "`wineventlog_security` EventCode=4771 TargetUserName!=\"*$\" Status=0x18\n      | bucket span=5m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as user values(dest) as dest\n        BY _time, IpAddress\n      | where unique_accounts > 30\n      | `windows_multiple_users_failed_to_authenticate_using_kerberos_filter`",
              "m": "To successfully implement this search, you need to be ingesting Domain Controller and Kerberos events. The Advanced Security Audit policy setting `Audit Kerberos Authentication Service` within `Account Logon` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A host failing to authenticate with multiple valid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, missconfigured systems and multi-user systems like Citrix farms.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn319109(v=ws.11), https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4771",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4771. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A host failing to authenticate with multiple valid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, missconfigured systems and multi-user systems like Citrix farms.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kerberos pre-auth failures in tight bursts, so we catch noisy guessing against real account names before tickets get issued.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.556",
              "n": "Windows Network Connection Discovery Via Net",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `net.exe` with command-line arguments used to list or display information about computer connections. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity can be significant as it indicates potential network reconnaissance by adversaries or Red Teams, aiming to gather situational awareness and Active Directory information. If confirmed malicious, this behavior c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            `process_net`\n            OR\n            (Processes.process_name=\"net.exe\"\n            OR\n            Processes.original_file_name=\"net.exe\")\n        )\n        AND (Processes.process=*use)\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_network_connection_discovery_via_net_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1049/",
              "mitre": [
                "T1049"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Network Connection Discovery Via Net\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1049. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Network Connection Discovery Via Net\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Network Connection Discovery Via Net”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.557",
              "n": "Windows New Custom Security Descriptor Set On EventLog Channel",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications to the EventLog security descriptor registry value for defense evasion. It leverages data from the Endpoint.Registry data model, focusing on changes to the \"CustomSD\" value within the \"HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Services\\Eventlog\\<Channel>\\CustomSD\" path. This activity is significant as changes to the access permissions of the event log could blind security products and help attackers evade defenses. If confirmed malicious,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records registry activity from your hosts to populate the endpoint data model in the registry node. This is typically populated via endpoint detection-and-response product, such as Carbon Black or endpoint data sources, such as Sysmon. The data used for this search is typically generated via logs that report reads and writes to…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. a legacy option and shouldn't be a common activity.",
              "refs": "https://learn.microsoft.com/en-us/troubleshoot/windows-server/group-policy/set-event-log-security-locally-or-via-group-policy, https://attack.mitre.org/techniques/T1562/002/",
              "mitre": [
                "T1562.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows New Custom Security Descriptor Set On EventLog Channel\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows New Custom Security Descriptor Set On EventLog Channel\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. a legacy option and shouldn't be a common activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows New Custom Security Descriptor Set On EventLog Channel”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.558",
              "n": "Windows New Default File Association Value Set",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects registry changes to the default file association value. It leverages data from the Endpoint data model, specifically monitoring registry paths under \"HKCR\\\\*\\\\shell\\\\open\\\\command\\\\*\". This activity can be significant because, attackers might alter the default file associations in order to execute arbitrary scripts or payloads when a user opens a file, leading to potential code execution. If confirmed malicious, this technique can enable attackers to persist on the…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records registry activity from your hosts to populate the endpoint data model in the registry node. This is typically populated via endpoint detection-and-response product, such as Carbon Black or endpoint data sources, such as Sysmon. The data used for this search is typically generated via logs that report reads and writes to…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Windows and third party software will create and modify these file associations during installation or upgrades. Additional filters needs to be applied to tune environment specific false positives.",
              "refs": "https://dmcxblue.gitbook.io/red-team-notes-2-0/red-team-techniques/privilege-escalation/untitled-3/accessibility-features",
              "mitre": [
                "T1546.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows New Default File Association Value Set\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows New Default File Association Value Set\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Windows and third party software will create and modify these file associations during installation or upgrades. Additional filters needs to be applied to tune environment specific false positives.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. 
To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows New Default File Association Value Set”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.559",
              "n": "Windows New Deny Permission Set On Service SD Via Sc.EXE",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects changes in a service security descriptor where a new deny ace has been added. It leverages data from Endpoint Detection and Response (EDR) agents, specifically searching for any process execution involving the \"sc.exe\" binary with the \"sdset\" flag targeting any service and adding a dedicated deny ace. If confirmed malicious, this could allow an attacker to escalate their privileges, blind defenses and more.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. security-related services should be immediately investigated.",
              "refs": "https://www.sans.org/blog/red-team-tactics-hiding-windows-services/, https://news.sophos.com/wp-content/uploads/2020/06/glupteba_final-1.pdf, https://attack.mitre.org/techniques/T1564/",
              "mitre": [
                "T1564"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows New Deny Permission Set On Service SD Via Sc.EXE\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1564. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows New Deny Permission Set On Service SD Via Sc.EXE\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. security-related services should be immediately investigated.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows New Deny Permission Set On Service SD Via Sc.EXE”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.560",
              "n": "Windows New EventLog ChannelAccess Registry Value Set",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications to the EventLog security descriptor registry value for defense evasion. It leverages data from the Endpoint.Registry data model, focusing on changes to the \"CustomSD\" value within the \"HKEY_LOCAL_MACHINE\\System\\CurrentControlSet\\Services\\Eventlog\\<Channel>\\CustomSD\" path. This activity is significant as changes to the access permissions of the event log could blind security products and help attackers evade defenses. If confirmed malicious,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records registry activity from your hosts to populate the endpoint data model in the registry node. This is typically populated via endpoint detection-and-response product, such as Carbon Black or endpoint data sources, such as Sysmon. The data used for this search is typically generated via logs that report reads and writes to…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be triggered from newly installed event providers or windows updates, new \"ChannelAccess\" values must be investigated.",
              "refs": "https://web.archive.org/web/20220710181255/https://blog.minerva-labs.com/lockbit-3.0-aka-lockbit-black-is-here-with-a-new-icon-new-ransom-note-new-wallpaper-but-less-evasiveness, https://attack.mitre.org/techniques/T1562/002/",
              "mitre": [
                "T1562.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows New EventLog ChannelAccess Registry Value Set\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows New EventLog ChannelAccess Registry Value Set\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be triggered from newly installed event providers or windows updates, new \"ChannelAccess\" values must be investigated.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows New EventLog ChannelAccess Registry Value Set”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.561",
              "n": "Windows New InProcServer32 Added",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of new InProcServer32 registry keys on Windows endpoints. It leverages data from the Endpoint.Registry datamodel to identify changes in registry paths associated with InProcServer32. This activity is significant because malware often uses this mechanism to achieve persistence or execute malicious code by registering a new InProcServer32 key pointing to a harmful DLL. If confirmed malicious, this could allow an attacker to persist in the environment or …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Endpoint.Registry where Registry.registry_path=\"*\\\\InProcServer32\\\\*\" by Registry.registry_path Registry.registry_key_name Registry.registry_value_name Registry.registry_value_data Registry.dest Registry.process_guid Registry.user | `drop_dm_object_name(Registry)` |`security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_new_inprocserver32_added_filter`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are expected. Filtering will be needed to properly reduce legitimate applications from the results.",
              "refs": "https://www.netspi.com/blog/technical/red-team-operations/microsoft-outlook-remote-code-execution-cve-2024-21378/",
              "mitre": [
                "T1112"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows New InProcServer32 Added\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows New InProcServer32 Added\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are expected. Filtering will be needed to properly reduce legitimate applications from the results.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows New InProcServer32 Added”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.562",
              "n": "Windows New Service Security Descriptor Set Via Sc.EXE",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects changes in a service security descriptor where a new deny ace has been added. It leverages data from Endpoint Detection and Response (EDR) agents, specifically searching for any process execution involving the \"sc.exe\" binary with the \"sdset\" flag targeting any service and adding a dedicated deny ace. If confirmed malicious, this could allow an attacker to escalate their privileges, blind defenses and more.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. should be identified and understood.",
              "refs": "https://www.sans.org/blog/red-team-tactics-hiding-windows-services/, https://news.sophos.com/wp-content/uploads/2020/06/glupteba_final-1.pdf, https://attack.mitre.org/techniques/T1564/",
              "mitre": [
                "T1564"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows New Service Security Descriptor Set Via Sc.EXE\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1564. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows New Service Security Descriptor Set Via Sc.EXE\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. should be identified and understood.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
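\n\nIllustrative Detection Logic\n\nThe SPL above is only the investigation drilldown; the authoritative search ships inside ESCU and should be read in Content Management. As a rough, hypothetical sketch of the kind of logic involved (the CIM Endpoint.Processes field names are standard, but the exact filters and output fields of the packaged detection may differ), a search for sc.exe invoked with the sdset flag could look like:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"sc.exe\" Processes.process=\"*sdset*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```\n\nESCU wraps the `summariesonly` handling in its `security_content_summariesonly` macro; treat this sketch as orientation only.",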
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows New Service Security Descriptor Set Via Sc.EXE”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.563",
              "n": "Windows Ngrok Reverse Proxy Usage",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of ngrok.exe on a Windows operating system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because while ngrok is a legitimate tool for creating secure tunnels, it is increasingly used by adversaries to bypass network defenses and establish reverse proxies. If confirmed malicious, this could allow attackers to exfiltrate data, maintain persistence,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present based on organizations that allow the use of Ngrok. Filter or monitor as needed.",
              "refs": "https://www.cisa.gov/uscert/sites/default/files/publications/aa22-320a_joint_csa_iranian_government-sponsored_apt_actors_compromise_federal%20network_deploy_crypto%20miner_credential_harvester.pdf",
              "mitre": [
                "T1572",
                "T1090",
                "T1102"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Ngrok Reverse Proxy Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1572, T1090, T1102. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Ngrok Reverse Proxy Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present based on organizations that allow the use of Ngrok. Filter or monitor as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
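\n\nIllustrative Detection Logic\n\nAs with the other RBA detections, the SPL above is the Risk drilldown, not the detection itself. A minimal sketch of the underlying idea, assuming CIM-normalized process telemetry (field names are CIM-standard; the name matching is illustrative and the packaged ESCU search may be broader):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where (Processes.process_name=\"ngrok.exe\" OR Processes.original_file_name=\"ngrok.exe\")\n  by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n```\n\nIf ngrok is sanctioned in your environment, pair the detection with an allowlist lookup of approved hosts rather than disabling it.",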
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Ngrok Reverse Proxy Usage”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.564",
              "n": "Windows NirSoft AdvancedRun",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of AdvancedRun.exe, a tool with capabilities similar to remote administration programs like PsExec. It identifies the process by its name or original file name and flags common command-line arguments. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and command-line telemetry. Monitoring this activity is crucial as AdvancedRun can be used for remote code execution and configuration-based automation. …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as it is specific to AdvancedRun. Filter as needed based on legitimate usage.",
              "refs": "http://www.nirsoft.net/utils/advanced_run.html, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1588.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows NirSoft AdvancedRun\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1588.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows NirSoft AdvancedRun\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as it is specific to AdvancedRun. Filter as needed based on legitimate usage.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
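\n\nChecking Risk Accumulation (illustrative)\n\nBecause this detection contributes risk rather than alerting directly, it can help to see how much risk each entity has accumulated before the Risk Notable threshold is crossed. A minimal sketch, assuming the default `risk` index and an illustrative threshold of 100 (both are configurable in ES):\n\n```spl\nindex=risk\n| stats sum(risk_score) as total_risk values(search_name) as contributing_detections by risk_object\n| where total_risk >= 100\n| sort -total_risk\n```\n\nThis approximates what the Risk Notable aggregation rule evaluates and confirms the detection is writing risk events at all.",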
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows NirSoft AdvancedRun”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.565",
              "n": "Windows Njrat Fileless Storage via Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious registry modifications indicative of NjRat's fileless storage technique. It leverages the Endpoint.Registry data model to identify specific registry paths and values commonly used by NjRat for keylogging and executing DLL plugins. This activity is significant as it helps evade traditional file-based detection systems, making it crucial for SOC analysts to monitor. If confirmed malicious, this behavior could allow attackers to persist on the host, execute…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.njrat",
              "mitre": [
                "T1027.011"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Njrat Fileless Storage via Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Njrat Fileless Storage via Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
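\n\nIllustrative Detection Logic\n\nThe packaged ESCU search is the source of truth; this is only a sketch of the registry-centric approach it takes. Public NjRat analyses describe a keylogger registry value named \"[kl]\"; the Endpoint.Registry field names below are CIM-standard, but treat the value filter as an illustrative assumption and verify it against the packaged search:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=\"[kl]\"\n  by Registry.dest Registry.registry_path Registry.registry_value_name\n| `drop_dm_object_name(Registry)`\n```",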
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Njrat Fileless Storage via Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.566",
              "n": "Windows Obfuscated Files or Information via RAR SFX",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of RAR Self-Extracting (SFX) files by monitoring the generation of file related to rar sfx .tmp file creation during sfx installation. This method leverages a heuristic to identify RAR SFX archives based on specific markers that indicate a combination of executable code and compressed RAR data. By tracking such activity, the analytic helps pinpoint potentially unauthorized or suspicious file creation events, which are often associated with malware pack…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It can detect a third part utility software tool compiled to rar sfx.",
              "refs": "https://www.splunk.com/en_us/blog/security/-applocker-rules-as-defense-evasion-complete-analysis.html",
              "mitre": [
                "T1027.013"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Obfuscated Files or Information via RAR SFX\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027.013. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Obfuscated Files or Information via RAR SFX\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It can detect a third part utility software tool compiled to rar sfx.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
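\n\nIllustrative Detection Logic\n\nThe authoritative search ships in ESCU. As a loose sketch of the file-creation angle it takes (Sysmon EventID 11 mapped to CIM Endpoint.Filesystem; the .tmp and path patterns here are placeholders, not the exact heuristic markers the packaged detection uses):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Filesystem\n  where Filesystem.file_name=\"*.tmp\" Filesystem.file_path=\"*Temp*\"\n  by Filesystem.dest Filesystem.file_name Filesystem.file_path\n| `drop_dm_object_name(Filesystem)`\n```\n\nExpect this broad form to be noisy; the real detection narrows on markers specific to RAR SFX unpacking.",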
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Obfuscated Files or Information via RAR SFX”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.567",
              "n": "Windows Odbcconf Hunting",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of Odbcconf.exe within the environment. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where the process name is Odbcconf.exe. This activity is significant because Odbcconf.exe can be used by attackers to execute arbitrary commands or load malicious DLLs, potentially leading to code execution or persistence. If confirmed malicious, this behavior could allow an attacker to maintain access to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name=odbcconf.exe\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_odbcconf_hunting_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present as this is meant to assist with filtering and tuning.",
              "refs": "https://strontic.github.io/xcyclopedia/library/odbcconf.exe-07FBA12552331355C103999806627314.html, https://twitter.com/redcanary/status/1541838407894171650?s=20&t=kp3WBPtfnyA3xW7D7wx0uw",
              "mitre": [
                "T1218.008"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Odbcconf Hunting\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Odbcconf Hunting\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present as this is meant to assist with filtering and tuning.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Odbcconf Hunting”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.568",
              "n": "Windows Odbcconf Load DLL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of odbcconf.exe with the regsvr action to load a DLL. This is identified by monitoring command-line arguments in process creation logs from Endpoint Detection and Response (EDR) agents. This activity is significant as it may indicate an attempt to execute arbitrary code via DLL loading, a common technique used in various attack vectors. If confirmed malicious, this could allow an attacker to execute code with the privileges of the odbcconf.exe process…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present and filtering may need to occur based on legitimate application usage. Filter as needed.",
              "refs": "https://strontic.github.io/xcyclopedia/library/odbcconf.exe-07FBA12552331355C103999806627314.html, https://twitter.com/redcanary/status/1541838407894171650?s=20&t=kp3WBPtfnyA3xW7D7wx0uw",
              "mitre": [
                "T1218.008"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Odbcconf Load DLL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Odbcconf Load DLL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present and filtering may need to occur based on legitimate application usage. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
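\n\nIllustrative Detection Logic\n\nThe SPL above is the Risk drilldown, not the detection. A rough sketch of the command-line pattern involved (odbcconf.exe invoked with the regsvr action; CIM field names are standard, the filter is illustrative and the packaged ESCU search may differ):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"odbcconf.exe\" Processes.process=\"*regsvr*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",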
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Odbcconf Load DLL”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.569",
              "n": "Windows Odbcconf Load Response File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of odbcconf.exe with a response file, which may contain commands to load a DLL (REGSVR) or other instructions. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant as it may indicate an attempt to execute arbitrary code or load malicious DLLs, potentially leading to unauthorized actions. If confirmed malicious, this could allow an attacker to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present and filtering may need to occur based on legitimate application usage. Filter as needed.",
              "refs": "https://strontic.github.io/xcyclopedia/library/odbcconf.exe-07FBA12552331355C103999806627314.html, https://twitter.com/redcanary/status/1541838407894171650?s=20&t=kp3WBPtfnyA3xW7D7wx0uw",
              "mitre": [
                "T1218.008"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Odbcconf Load Response File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Odbcconf Load Response File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present and filtering may need to occur based on legitimate application usage. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
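\n\nIllustrative Detection Logic\n\nThe authoritative logic ships inside the packaged ESCU Correlation Search (see above). As a minimal sketch only, assuming an accelerated, CIM-normalized Endpoint data model, the underlying logic broadly looks for odbcconf.exe invoked with a response-file flag (response files commonly use the .rsp extension); the flag matching here is an illustrative assumption, not the shipped detection:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"odbcconf.exe\"\n  (Processes.process=\"* -f *\" OR Processes.process=\"* /f *\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nInspect the referenced response file on the endpoint before deciding disposition; benign installers occasionally use this mechanism.",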
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Odbcconf Load Response File”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.570",
              "n": "Windows Office Product Loading Taskschd DLL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an Office document creating a scheduled task, either through a macro VBA API or by loading `taskschd.dll`. This detection leverages Sysmon EventCode 7 to identify when Office applications load the `taskschd.dll` file. This activity is significant as it is a common technique used by malicious macro malware to establish persistence or initiate beaconing. If confirmed malicious, this could allow an attacker to maintain persistence, execute arbitrary commands, or sched…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and ImageLoaded (Like sysmon EventCode 7) from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA. Also be sure to include those monitored dll to your own sysmon config.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur if legitimate office documents are creating scheduled tasks. Ensure to investigate the scheduled task and the command to be executed. If the task is benign, add the task name to the exclusion list. Some applications may legitimately load taskschd.dll.",
              "refs": "https://research.checkpoint.com/2021/irans-apt34-returns-with-an-updated-arsenal/, https://redcanary.com/threat-detection-report/techniques/scheduled-task-job/, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/trojanized-onenote-document-leads-to-formbook-malware/",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Office Product Loading Taskschd DLL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Office Product Loading Taskschd DLL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur if legitimate office documents are creating scheduled tasks. Ensure to investigate the scheduled task and the command to be executed. If the task is benign, add the task name to the exclusion list. Some applications may legitimately load taskschd.dll.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
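\n\nIllustrative Detection Logic\n\nThe authoritative logic ships inside the packaged ESCU Correlation Search (see above). As a minimal sketch only, a raw Sysmon EventCode 7 search of roughly this shape surfaces Office processes loading the Task Scheduler DLL; the index scope, the sourcetype (Splunk Add-on for Sysmon defaults assumed), and the Office process list are assumptions to adapt:\n\n```spl\nindex=* sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=7\n  Image IN (\"*winword.exe\", \"*excel.exe\", \"*powerpnt.exe\", \"*outlook.exe\")\n  ImageLoaded=\"*taskschd.dll\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, ImageLoaded\n```\n\nNarrow index=* to your Sysmon index, and confirm your Sysmon configuration actually logs ImageLoaded events for this DLL; default configurations often do not.",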
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Office Product Loading Taskschd DLL”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.571",
              "n": "Windows Office Product Loading VBE7 DLL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies office documents executing macro code. It leverages Sysmon EventCode 7 to detect when processes like WINWORD.EXE or EXCEL.EXE load specific DLLs associated with macros (e.g., VBE7.DLL). This activity is significant because macros are a common attack vector for delivering malicious payloads, such as malware. If confirmed malicious, this could lead to unauthorized code execution, data exfiltration, or further compromise of the system. Disabling macros by default i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and ImageLoaded (Like sysmon EventCode 7) from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA. Also be sure to include those monitored dll to your own sysmon config.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur if legitimate office documents are executing macro code. Ensure to investigate the macro code and the command to be executed. If the macro code is benign, add the document name to the exclusion list. Some applications may legitimately load VBE7INTL.DLL, VBE7.DLL, or VBEUI.DLL.",
              "refs": "https://www.joesandbox.com/analysis/386500/0/html, https://www.joesandbox.com/analysis/702680/0/html, https://bazaar.abuse.ch/sample/02cbc1ab80695fc12ff8822b926957c3a600247b9ca412a137f69cb5716c8781/, https://www.fortinet.com/blog/threat-research/latest-remcos-rat-phishing, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/trojanized-onenote-document-leads-to-formbook-malware/, https://www.fortinet.com/blog/threat-research/leveraging-microsoft-office-documents-to-deliver-agent-tesla-and-njrat",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Office Product Loading VBE7 DLL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Office Product Loading VBE7 DLL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur if legitimate office documents are executing macro code. Ensure to investigate the macro code and the command to be executed. If the macro code is benign, add the document name to the exclusion list. Some applications may legitimately load VBE7INTL.DLL, VBE7.DLL, or VBEUI.DLL.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
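\n\nIllustrative Detection Logic\n\nThe authoritative logic ships inside the packaged ESCU Correlation Search (see above). As a minimal sketch only, a raw Sysmon EventCode 7 search of roughly this shape surfaces Office processes loading the VBA runtime DLLs named in the known false positives; the index scope, the sourcetype (Splunk Add-on for Sysmon defaults assumed), and the Office process list are assumptions to adapt:\n\n```spl\nindex=* sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=7\n  Image IN (\"*winword.exe\", \"*excel.exe\", \"*powerpnt.exe\", \"*outlook.exe\")\n  ImageLoaded IN (\"*VBE7.DLL\", \"*VBE7INTL.DLL\", \"*VBEUI.DLL\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, ImageLoaded\n```\n\nExpect some benign volume wherever macros are part of normal business workflows; baseline before alerting.",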
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Office Product Loading VBE7 DLL”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.572",
              "n": "Windows Office Product Spawned Child Process For Download",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies Office applications spawning child processes to download content via HTTP/HTTPS. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where Office applications like Word or Excel initiate network connections, excluding common browsers. This activity is significant as it often indicates the use of malicious documents to execute living-off-the-land binaries (LOLBins) for payload delivery. If confirmed malicious, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Default browser not in the filter list.",
              "refs": "https://app.any.run/tasks/92d7ef61-bfd7-4c92-bc15-322172b4ebec/, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/trojanized-onenote-document-leads-to-formbook-malware/",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Office Product Spawned Child Process For Download\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Office Product Spawned Child Process For Download\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Default browser not in the filter list.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
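\n\nIllustrative Detection Logic\n\nThe authoritative logic ships inside the packaged ESCU Correlation Search (see above). As a minimal sketch only, assuming an accelerated, CIM-normalized Endpoint data model, the underlying logic broadly looks for Office parents spawning children with a URL on the command line while excluding browsers; the parent and browser lists are illustrative assumptions:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name IN (\"winword.exe\", \"excel.exe\", \"powerpnt.exe\", \"outlook.exe\")\n  (Processes.process=\"*http://*\" OR Processes.process=\"*https://*\")\n  NOT (Processes.process_name IN (\"chrome.exe\", \"firefox.exe\", \"msedge.exe\", \"iexplore.exe\"))\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n```\n\nAdd your organization’s default browser to the exclusion list first; per the known false positives above, a missing browser is the most likely noise source.",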
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Office Product Spawned Child Process For Download”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.573",
              "n": "Windows Office Product Spawned MSDT",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a Microsoft Office product spawning the Windows msdt.exe process. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where Office applications are the parent process. This activity is significant as it may indicate an attempt to exploit protocol handlers to bypass security controls, even if macros are disabled. If confirmed malicious, this behavior could allow an attacker to execute arbitrary code, p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited, however filter as needed.",
              "refs": "https://isc.sans.edu/diary/rss/28694, https://doublepulsar.com/follina-a-microsoft-office-code-execution-vulnerability-1a47fce5629e, https://twitter.com/nao_sec/status/1530196847679401984?s=20&t=ZiXYI4dQuA-0_dzQzSUb3A, https://app.any.run/tasks/713f05d2-fe78-4b9d-a744-f7c133e3fafb/, https://www.virustotal.com/gui/file/4a24048f81afbe9fb62e7a6a49adbd1faf41f266b5f9feecdceb567aec096784/detection, https://strontic.github.io/xcyclopedia/library/msdt.exe-152D4C9F63EFB332CCB134C6953C0104.html, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/trojanized-onenote-document-leads-to-formbook-malware/",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Office Product Spawned MSDT\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Office Product Spawned MSDT\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited, however filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
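\n\nIllustrative Detection Logic\n\nThe authoritative logic ships inside the packaged ESCU Correlation Search (see above). As a minimal sketch only, assuming an accelerated, CIM-normalized Endpoint data model, the underlying logic broadly looks for Office parents spawning msdt.exe; the Office parent list is an illustrative assumption:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name IN (\"winword.exe\", \"excel.exe\", \"powerpnt.exe\", \"outlook.exe\")\n  Processes.process_name=\"msdt.exe\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nAny hit warrants a close look at the msdt.exe command line, for example the ms-msdt: protocol handler abused in the Follina exploitation covered in the references.",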
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Office Product Spawned MSDT”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.574",
              "n": "Windows PaperCut NG Spawn Shell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where the PaperCut NG application (pc-app.exe) spawns a Windows shell, such as cmd.exe or PowerShell. This behavior is identified using Endpoint Detection and Response (EDR) telemetry, focusing on process creation events where the parent process is pc-app.exe. This activity is significant as it may indicate an attacker attempting to gain unauthorized access or execute malicious commands on the system. If confirmed malicious, this could lead to unauthorize…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, but most likely not. Filter as needed.",
              "refs": "https://www.cisa.gov/news-events/alerts/2023/05/11/cisa-and-fbi-release-joint-advisory-response-active-exploitation-papercut-vulnerability, https://www.papercut.com/kb/Main/PO-1216-and-PO-1219",
              "mitre": [
                "T1059",
                "T1190",
                "T1133"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PaperCut NG Spawn Shell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059, T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PaperCut NG Spawn Shell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, but most likely not. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
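\n\nIllustrative Detection Logic\n\nThe authoritative logic ships inside the packaged ESCU Correlation Search (see above). As a minimal sketch only, assuming an accelerated, CIM-normalized Endpoint data model, the underlying logic broadly looks for the PaperCut application process spawning a Windows shell; the shell list is an illustrative assumption:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=\"pc-app.exe\"\n  Processes.process_name IN (\"cmd.exe\", \"powershell.exe\", \"pwsh.exe\")\n  by Processes.dest Processes.user Processes.process_name Processes.process\n```\n\nGiven the active exploitation covered in the CISA advisory in the references, treat any hit on an internet-facing PaperCut server as high priority.",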
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows PaperCut NG Spawn Shell”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.575",
              "n": "Windows Parent PID Spoofing with Explorer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a suspicious `explorer.exe` process with the `/root` command-line parameter. This detection leverages Endpoint Detection and Response (EDR) telemetry, focusing on process and command-line data. The presence of `/root` in `explorer.exe` is significant as it may indicate parent process spoofing, a technique used by malware to evade detection. If confirmed malicious, this activity could allow an attacker to operate undetected, potentially leading to unauthorized ac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://x.com/CyberRaiju/status/1273597319322058752?s=20",
              "mitre": [
                "T1134.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Parent PID Spoofing with Explorer\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1134.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Parent PID Spoofing with Explorer\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
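\n\nIllustrative Detection Logic\n\nThe authoritative logic ships inside the packaged ESCU Correlation Search (see above). As a minimal sketch only, assuming an accelerated, CIM-normalized Endpoint data model, the underlying logic broadly looks for explorer.exe carrying the /root command-line parameter; the \"/root,\" token reflects how a spoofed explorer.exe launch commonly appears, so treat the exact match as an assumption to verify against the packaged detection:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"explorer.exe\" Processes.process=\"*/root,*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nSince no false positives have been identified for this detection, any hit should be escalated for review of the child processes that follow.",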
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Parent PID Spoofing with Explorer”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.576",
              "n": "Windows Password Managers Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies command-line activity that searches for files related to password manager software, such as \"*.kdbx*\" and \"*credential*\". It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. This activity is significant because attackers often target password manager databases to extract stored credentials, which can be used for further exploitation. If confirmed malicious, this behavior could lead to unauthorized access to se…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1555/005/, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1555.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Password Managers Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1555.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Password Managers Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
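\n\nOptional: estimate baseline noise before enabling. This sketch reuses the Endpoint.Processes fields and macros that other ESCU process detections in this catalog use; the shipped detection’s exact logic lives in Content Management:\n\n```spl\n| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes\n  where Processes.process=\"*.kdbx*\" OR Processes.process=\"*credential*\"\n  by Processes.dest Processes.user Processes.process_name\n| `drop_dm_object_name(Processes)`\n| sort -count\n```",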
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Password Managers Discovery”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.577",
              "n": "Windows Password Policy Discovery with Net",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of `net.exe` with command line arguments aimed at obtaining the computer or domain password policy. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it indicates potential reconnaissance efforts by adversaries to gather information about Active Directory password policies. If confirmed malicious, this behavior could allow attackers to understa…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE `process_net`\n        AND\n        Processes.process = \"*accounts*\"\n        AND\n        NOT Processes.process IN (\"*/FORCELOGOFF*\", \"*/MINPWLEN*\", \"*/MAXPWAGE*\", \"*/MINPWAGE*\", \"*/UNIQUEPW*\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_password_policy_discovery_with_net_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://github.com/S1ckB0y1337/Active-Directory-Exploitation-Cheat-Sheet",
              "mitre": [
                "T1201"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Password Policy Discovery with Net\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1201. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Password Policy Discovery with Net\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Password Policy Discovery with Net”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.578",
              "n": "Windows Powershell Cryptography Namespace",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious PowerShell script execution involving the cryptography namespace via EventCode 4104. It leverages PowerShell Script Block Logging to identify scripts using cryptographic functions, excluding common hashes like SHA and MD5. This activity is significant as it is often associated with malware that decrypts or decodes additional malicious payloads. If confirmed malicious, this could allow an attacker to execute further code, escalate privileges, or establish…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited. Filter as needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.asyncrat",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Powershell Cryptography Namespace\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Powershell Cryptography Namespace\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
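\n\nOptional: confirm PowerShell Script Block Logging events are arriving before enabling. A sketch assuming the standard Microsoft-Windows-PowerShell/Operational channel as collected by the Splunk Add-on for Microsoft Windows; adjust index and source to match your inputs:\n\n```spl\nindex=* source=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104\n| stats count by host\n| sort -count\n```",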
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Powershell Cryptography Namespace”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.579",
              "n": "Windows Powershell Import Applocker Policy",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the import of Windows PowerShell Applocker cmdlets, specifically identifying the use of \"Import-Module Applocker\" and \"Set-AppLockerPolicy\" with an XML policy. It leverages PowerShell Script Block Logging (EventCode 4104) to capture and analyze script block text. This activity is significant as it may indicate an attempt to enforce restrictive Applocker policies, potentially used by malware like Azorult to disable antivirus products. If confirmed malicious, this co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may execute this command that may cause some false positive.",
              "refs": "https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/",
              "mitre": [
                "T1059.001",
                "T1562.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Powershell Import Applocker Policy\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Powershell Import Applocker Policy\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may execute this command that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
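\n\nOptional: hunt the raw script block events for the exact cmdlets this analytic keys on. A sketch assuming 4104 events carry the standard ScriptBlockText field; adjust index and source to match your inputs:\n\n```spl\nindex=* source=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104\n  (\"Import-Module Applocker\" OR \"Set-AppLockerPolicy\")\n| table _time host ScriptBlockText\n```",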
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Powershell Import Applocker Policy”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.580",
              "n": "Windows Powershell RemoteSigned File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of the \"remotesigned\" execution policy for PowerShell scripts. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions containing \"remotesigned\" and \"-File\". This activity is significant because the \"remotesigned\" policy allows locally created scripts to run without restrictions, posing a potential security risk. If confirmed malicious, an attacker could execute unauthorized scripts, leading to code execut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible administrators or scripts may run these commands, filtering may be required.",
              "refs": "https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.security/set-executionpolicy?view=powershell-7.3",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Powershell RemoteSigned File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Powershell RemoteSigned File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible administrators or scripts may run these commands, filtering may be required.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
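\n\nOptional: review which hosts and users run remotesigned scripts today, to size the per-entity exceptions suggested above. A sketch using the same Endpoint.Processes fields as other ESCU process detections:\n\n```spl\n| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes\n  where Processes.process=\"*remotesigned*\" AND Processes.process=\"*-File*\"\n  by Processes.dest Processes.user Processes.parent_process_name\n| `drop_dm_object_name(Processes)`\n| sort -count\n```",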
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Powershell RemoteSigned File”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.581",
              "n": "Windows PowerShell Script From WindowsApps Directory",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of PowerShell scripts from the WindowsApps directory, which is a common technique used in malicious MSIX package execution. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process command lines and parent process paths. This activity is significant as adversaries have been observed using MSIX packages with embedded PowerShell scripts (particularly StartingScriptWrapper.ps1) to execute malicious code. If …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Endpoint.Processes where Processes.process_name=\"powershell.exe\" AND Processes.process=\"*StartingScriptWrapper.ps1*\" by Processes.dest Processes.process Processes.parent_process_name",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain process execution information, including process paths and command lines. These logs must be processed using the appropriate Splu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications installed via the Microsoft Store or MSIX packages may execute PowerShell scripts from the WindowsApps directory as part of their normal operation. Verify if the MSIX package is from a trusted source and signed by a trusted publisher before taking action. Look for additional suspicious activities like network connections to unknown domains or execution of known malicious payloads.",
              "refs": "https://redcanary.com/blog/threat-intelligence/msix-installers/, https://redcanary.com/threat-detection-report/techniques/installer-packages/, https://learn.microsoft.com/en-us/windows/msix/package/package-support-framework, https://attack.mitre.org/techniques/T1059/001/",
              "mitre": [
                "T1059.001",
                "T1204.002"
              ],
              "dtype": "command",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell Script From WindowsApps Directory\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to command entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell Script From WindowsApps Directory\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications installed via the Microsoft Store or MSIX packages may execute PowerShell scripts from the WindowsApps directory as part of their normal operation. Verify if the MSIX package is from a trusted source and signed by a trusted publisher before taking action. Look for additional suspicious activities like network connections to unknown domains or execution of known malicious payloads.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
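\n\nOptional: size how common WindowsApps-hosted execution is in your estate before enabling. A sketch; the parent_process_path field is populated only if your add-on extracts it, and wildcard path filters can be expensive at scale:\n\n```spl\n| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes\n  where Processes.parent_process_path=\"*WindowsApps*\"\n  by Processes.dest Processes.parent_process_name Processes.process_name\n| `drop_dm_object_name(Processes)`\n| sort -count\n```",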
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows PowerShell Script From WindowsApps Directory”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.582",
              "n": "Windows PowerView Constrained Delegation Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of PowerView commandlets to discover Windows endpoints with Kerberos Constrained Delegation. It leverages PowerShell Script Block Logging (EventCode=4104) to identify specific commandlets like `Get-DomainComputer` or `Get-NetComputer` with the `-TrustedToAuth` parameter. This activity is significant as it indicates potential reconnaissance efforts by adversaries or Red Teams to map out privileged delegation settings in Active Directory. If confirmed malicio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may leverage PowerView for system management or troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/, https://adsecurity.org/?p=1667, https://learn.microsoft.com/en-us/defender-for-identity/cas-isp-unconstrained-kerberos, https://www.guidepointsecurity.com/blog/delegating-like-a-boss-abusing-kerberos-delegation-in-active-directory/, https://book.hacktricks.xyz/windows-hardening/active-directory-methodology/constrained-delegation, https://www.cyberark.com/resources/threat-research-blog/weakness-within-kerberos-delegation",
              "mitre": [
                "T1018"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerView Constrained Delegation Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerView Constrained Delegation Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may leverage PowerView for system management or troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
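\n\nOptional: hunt the raw 4104 events for the commandlets this analytic keys on. A sketch assuming the standard PowerShell Operational channel and ScriptBlockText field; adjust index and source to match your inputs:\n\n```spl\nindex=* source=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104\n  (\"Get-DomainComputer\" OR \"Get-NetComputer\") \"-TrustedToAuth\"\n| table _time host ScriptBlockText\n```",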
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows PowerView Constrained Delegation Discovery”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.583",
              "n": "Windows PowerView Unconstrained Delegation Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of PowerView commandlets to discover Windows endpoints with Kerberos Unconstrained Delegation. It leverages PowerShell Script Block Logging (EventCode=4104) to identify specific commands like `Get-DomainComputer` or `Get-NetComputer` with the `-Unconstrained` parameter. This activity is significant as it indicates potential reconnaissance efforts by adversaries or Red Teams to map out privileged delegation settings in Active Directory. If confirmed maliciou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may leverage PowerView for system management or troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/, https://adsecurity.org/?p=1667, https://learn.microsoft.com/en-us/defender-for-identity/cas-isp-unconstrained-kerberos, https://www.ired.team/offensive-security-experiments/active-directory-kerberos-abuse/domain-compromise-via-unrestricted-kerberos-delegation, https://www.cyberark.com/resources/threat-research-blog/weakness-within-kerberos-delegation",
              "mitre": [
                "T1018"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerView Unconstrained Delegation Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerView Unconstrained Delegation Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may leverage PowerView for system management or troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows PowerView Unconstrained Delegation Discovery”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.584",
              "n": "Windows Process Commandline Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Windows Management Instrumentation Command-line (WMIC) to retrieve information about running processes, specifically targeting the command lines used to launch those processes. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on logs containing process details and command-line executions. This activity is significant as it may indicate suspicious behavior, such as a user or process gathering detailed process infor…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE `process_wmic` Processes.process= \"* process *\" Processes.process= \"* get *\" Processes.process= \"*CommandLine*\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_process_commandline_discovery_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting. Filter as needed.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-347a",
              "mitre": [
                "T1057"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process Commandline Discovery\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1057. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process Commandline Discovery\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Process Commandline Discovery”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.585",
              "n": "Windows Process Execution in Temp Dir",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies processes running from %temp% directory file paths. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific process paths within the Endpoint data model. This activity is significant because adversaries often use unconventional file paths to execute malicious code without requiring administrative privileges. If confirmed malicious, this behavior could indicate an attempt to bypass security controls, leading to unauthorized softw…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may allow execution of specific binaries in non-standard paths. Filter as needed.",
              "refs": "https://www.trendmicro.com/vinfo/hk/threat-encyclopedia/malware/trojan.ps1.powtran.a/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/, https://twitter.com/pr0xylife/status/1590394227758104576, https://malpedia.caad.fkie.fraunhofer.de/details/win.asyncrat, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1543",
                "T1036.005"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process Execution in Temp Dir\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543, T1036.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process Execution in Temp Dir\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may allow execution of specific binaries in non-standard paths. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Process Execution in Temp Dir”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.586",
              "n": "Windows Process Injection In Non-Service SearchIndexer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances of the searchindexer.exe process that are not spawned by services.exe, indicating potential process injection. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and parent processes. This activity is significant because QakBot malware often uses a fake searchindexer.exe to evade detection and perform malicious actions such as data exfiltration and keystroke logging. If confirmed malicious, this a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://twitter.com/Max_Mal_/status/1736392741758611607, https://twitter.com/1ZRR4H/status/1735944522075386332",
              "mitre": [
                "T1055"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process Injection In Non-Service SearchIndexer\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process Injection In Non-Service SearchIndexer\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Process Injection In Non-Service SearchIndexer”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.587",
              "n": "Windows Process Injection Of Wermgr to Known Browser",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the suspicious remote thread execution of the wermgr.exe process into known browsers such as firefox.exe, chrome.exe, and others. It leverages Sysmon EventCode 8 logs to detect this behavior by monitoring SourceImage and TargetImage fields. This activity is significant because it is indicative of Qakbot malware, which injects malicious code into legitimate processes to steal information. If confirmed malicious, this activity could allow attackers to execute arbi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 8",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the SourceImage, TargetImage, and EventCode executions from your endpoints related to create remote thread or injecting codes. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://news.sophos.com/en-us/2022/03/10/qakbot-decoded/, https://www.trellix.com/en-us/about/newsroom/stories/research/demystifying-qbot-malware.html",
              "mitre": [
                "T1055.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process Injection Of Wermgr to Known Browser\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 8. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process Injection Of Wermgr to Known Browser\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Process Injection Of Wermgr to Known Browser”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.588",
              "n": "Windows Process With NamedPipe CommandLine",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects processes with command lines containing named pipes. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process command-line executions. This behavior is significant as it is often used by adversaries, such as those behind the Olympic Destroyer malware, for inter-process communication post-injection, aiding in defense evasion and privilege escalation. If confirmed malicious, this activity could allow attackers to maintain persistence, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Normal browser application may use this technique. Please update the filter macros to remove false positives.",
              "refs": "https://blog.talosintelligence.com/2018/02/olympic-destroyer.html",
              "mitre": [
                "T1055"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process With NamedPipe CommandLine\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process With NamedPipe CommandLine\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Normal browser application may use this technique. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Process With NamedPipe CommandLine”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.589",
              "n": "Windows Process With NetExec Command Line Parameters",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of NetExec (formally CrackmapExec) a toolset used for post-exploitation enumeration and attack within Active Directory environments through command line parameters. It leverages Endpoint Detection and Response (EDR) data to identify specific command-line arguments associated with actions like ticket manipulation, kerberoasting, and password spraying. This activity is significant as NetExec is used by adversaries to exploit Kerberos for privilege escalation …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4688, Sysmon EventID 1, CrowdStrike ProcessRollup2",
              "q": "| from datamodel:Endpoint.Processes | search dest=$dest$ process_name = $process_name$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4688, Sysmon EventID 1, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, legitimate applications may use the same command line parameters as NetExec. Filter as needed.",
              "refs": "https://www.netexec.wiki/, https://www.johnvictorwolfe.com/2024/07/21/the-successor-to-crackmapexec/, https://attack.mitre.org/software/S0488/",
              "mitre": [
                "T1550.003",
                "T1558.003",
                "T1558.004"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process With NetExec Command Line Parameters\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4688, Sysmon EventID 1, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1550.003, T1558.003, T1558.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process With NetExec Command Line Parameters\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, legitimate applications may use the same command line parameters as NetExec. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Process With NetExec Command Line Parameters”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.590",
              "n": "Windows Process Writing File to World Writable Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a process writing a .txt file to a world writable path. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on file creation events within specific directories. This activity is significant as adversaries often use such techniques to deliver payloads to a system, which is uncommon for legitimate processes. If confirmed malicious, this behavior could allow attackers to execute arbitrary code, escalate privileges, or maintain …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Filesystem where Filesystem.file_name=*.txt Filesystem.file_path IN (\"*\\\\Windows\\\\Tasks\\\\*\", \"*\\\\Windows\\\\Temp\\\\*\", \"*\\\\Windows\\\\tracing\\\\*\", \"*\\\\Windows\\\\PLA\\\\Reports\\\\*\", \"*\\\\Windows\\\\PLA\\\\Rules\\\\*\", \"*\\\\Windows\\\\PLA\\\\Templates\\\\*\", \"*\\\\Windows\\\\PLA\\\\Reports\\\\en-US\\\\*\", \"*\\\\Windows\\\\PLA\\\\Rules\\\\en-US\\\\*\", \"*\\\\Windows\\\\Registration\\\\CRMLog\\\\*\", \"*\\\\Windows\\\\System32\\\\Tasks\\\\*\", \"*\\\\Windows\\\\System32\\\\Com\\\\dmp\\\\*\", \"*\\\\Windows\\\\System32\\\\LogFiles\\\\WMI\\\\*\", \"*\\\\Windows\\\\System32\\\\Microsoft\\\\Crypto\\\\RSA\\\\MachineKeys\\\\*\", \"*\\\\Windows\\\\System32\\\\spool\\\\PRINTERS\\\\*\", \"*\\\\Windows\\\\System32\\\\spool\\\\SERVERS\\\\*\", \"*\\\\Windows\\\\System32\\\\spool\\\\drivers\\\\color\\\\*\", \"*\\\\Windows\\\\System32\\\\Tasks\\\\Microsoft\\\\Windows\\\\RemoteApp and Desktop Connections Update\\\\*\", \"*\\\\Windows\\\\SysWOW64\\\\Tasks\\\\*\", \"*\\\\Windows\\\\SysWOW64\\\\Com\\\\dmp\\\\*\", \"*\\\\Windows\\\\SysWOW64\\\\Tasks\\\\Microsoft\\\\Windows\\\\PLA\\\\*\", \"*\\\\Windows\\\\SysWOW64\\\\Tasks\\\\Microsoft\\\\Windows\\\\RemoteApp and Desktop Connections Update\\\\*\", \"*\\\\Windows\\\\SysWOW64\\\\Tasks\\\\Microsoft\\\\Windows\\\\PLA\\\\System\\\\*\") by Filesystem.dest, Filesystem.user, Filesystem.file_name Filesystem.file_path | `drop_dm_object_name(\"Filesystem\")` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_process_writing_file_to_world_writable_path_filter`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the file creation event, process name, file path and, file name. These logs must be processed using the appropriate Splunk Techno…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur if legitimate software writes to these paths. Modify the search to include additional file name extensions. To enhance it further, adding a join on Processes.process_name may assist with restricting the analytic to specific process names. Investigate the process and file to determine if it is malicious.",
              "refs": "https://research.splunk.com/",
              "mitre": [
                "T1218.005"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process Writing File to World Writable Path\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process Writing File to World Writable Path\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur if legitimate software writes to these paths. Modify the search to include additional file name extensions. To enhance it further, adding a join on Processes.process_name may assist with restricting the analytic to specific process names. Investigate the process and file to determine if it is malicious.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Process Writing File to World Writable Path”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.591",
              "n": "Windows Processes Killed By Industroyer2 Malware",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the termination of specific processes by the Industroyer2 malware. It leverages Sysmon EventCode 5 to identify when processes like \"PServiceControl.exe\" and \"PService_PPD.exe\" are killed. This activity is significant as it targets processes related to energy facility networks, indicating a potential attack on critical infrastructure. If confirmed malicious, this could lead to disruption of essential services, loss of control over energy systems, and significant ope…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 5",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 5 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible if legitimate applications are allowed to terminate this process during testing or updates. Filter as needed based on paths that are used legitimately.",
              "refs": "https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/",
              "mitre": [
                "T1489"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Processes Killed By Industroyer2 Malware\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 5. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Processes Killed By Industroyer2 Malware\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible if legitimate applications are allowed to terminate this process during testing or updates. Filter as needed based on paths that are used legitimately.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
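Editorial example: to run the Risk Investigation drilldown described above outside of Incident Review, substitute a concrete entity for the `$dest$` token. A sketch using a hypothetical host name (`plc-hmi-01` is a placeholder):

```spl
| from datamodel Risk.All_Risk
| search normalized_risk_object IN ("plc-hmi-01") starthoursago=168
| stats count min(_time) as firstTime max(_time) as lastTime
    values(search_name) as "Search Name" values(risk_message) as "Risk Message"
  by normalized_risk_object
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```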
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Processes Killed By Industroyer2 Malware”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.592",
              "n": "Windows Proxy Via Netsh",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of netsh.exe to configure a connection proxy, which can be leveraged for persistence by executing a helper DLL. It detects this activity by analyzing process creation events from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving \"portproxy\" and \"v4tov4\" parameters. This activity is significant because it indicates potential unauthorized network configuration changes, which could be used to maintain persistence or…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some VPN applications are known to launch netsh.exe. Outside of these instances, it is unusual for an executable to launch netsh.exe and run commands.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1090.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Proxy Via Netsh\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1090.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Proxy Via Netsh\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some VPN applications are known to launch netsh.exe. Outside of these instances, it is unusual for an executable to launch netsh.exe and run commands.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
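Editorial example: a hedged approximation of the underlying process-based logic, reconstructed from the "portproxy" and "v4tov4" parameters named in the description, with the parent-process allowlist suggested in the tuning notes. The authoritative logic lives in the ESCU Correlation Search; `corpvpn.exe` is a placeholder for your approved VPN client:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="netsh.exe" Processes.process="*portproxy*" Processes.process="*v4tov4*"
  by Processes.dest, Processes.user, Processes.parent_process_name, Processes.process
| `drop_dm_object_name("Processes")`
| search NOT parent_process_name="corpvpn.exe"
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```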
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Proxy Via Netsh”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.593",
              "n": "Windows Proxy Via Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of registry keys related to the Windows Proxy settings via netsh.exe. It leverages data from the Endpoint.Registry data model, focusing on changes to the registry path \"*\\\\System\\\\CurrentControlSet\\\\Services\\\\PortProxy\\\\v4tov4\\\\tcp*\". This activity is significant because netsh.exe can be used to establish a persistent proxy, potentially allowing an attacker to execute a helper DLL whenever netsh.exe runs. If confirmed malicious, this could enable t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1090.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Proxy Via Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1090.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Proxy Via Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
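Editorial example: a hedged approximation of the underlying registry logic, reconstructed from the PortProxy path named in the description; the authoritative search is viewable in Content Management:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Registry
  where Registry.registry_path="*\\System\\CurrentControlSet\\Services\\PortProxy\\v4tov4\\tcp*"
  by Registry.dest, Registry.user, Registry.registry_path, Registry.registry_value_name, Registry.registry_value_data
| `drop_dm_object_name("Registry")`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```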
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Proxy Via Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.594",
              "n": "Windows Raccine Scheduled Task Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the deletion of the Raccine Rules Updater scheduled task using the `schtasks.exe` command. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant because adversaries may delete this task to disable Raccine, a tool designed to prevent ransomware attacks. If confirmed malicious, this action could allow ransomware to execute without interference, leading to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited, however filter as needed.",
              "refs": "https://redcanary.com/blog/blackbyte-ransomware/, https://github.com/Neo23x0/Raccine",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Raccine Scheduled Task Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Raccine Scheduled Task Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited, however filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
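Editorial example: a hedged approximation of the underlying detection logic, reconstructed from the description (schtasks.exe deleting the "Raccine Rules Updater" task); the authoritative version is the ESCU Correlation Search:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="schtasks.exe" Processes.process="*delete*" Processes.process="*Raccine Rules Updater*"
  by Processes.dest, Processes.user, Processes.parent_process_name, Processes.process
| `drop_dm_object_name("Processes")`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```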
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Raccine Scheduled Task Deletion”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.595",
              "n": "Windows Raw Access To Disk Volume Partition",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious raw access reads to the device disk partition of a host machine. It leverages Sysmon EventCode 9 logs to identify processes attempting to read or write to the boot sector, excluding legitimate system processes. This activity is significant as it is commonly associated with destructive actions by adversaries, such as wiping, encrypting, or overwriting the boot sector, as seen in attacks involving malware like HermeticWiper. If confirmed malicious, this be…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 9",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the raw access read event (like sysmon eventcode 9), process name and process guid from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There are som minimal number of normal applications from system32 folder like svchost.exe accessing the MBR. In this case we used 'system32' and 'syswow64' path as a filter for this detection.",
              "refs": "https://blog.talosintelligence.com/2022/02/threat-advisory-hermeticwiper.html",
              "mitre": [
                "T1561.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Raw Access To Disk Volume Partition\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 9. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1561.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Raw Access To Disk Volume Partition\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There are som minimal number of normal applications from system32 folder like svchost.exe accessing the MBR. In this case we used 'system32' and 'syswow64' path as a filter for this detection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
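Editorial example: a hedged raw-event sketch of the logic described above (Sysmon EventCode 9 with the system32/syswow64 exclusion noted in the false-positive guidance). The index name is an assumption and the device pattern is illustrative; rename both to match your ingestion:

```spl
index=windows sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=9
    Device="\\Device\\HarddiskVolume*"
| search NOT Image IN ("*\\system32\\*", "*\\syswow64\\*")
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, Image, Device
```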
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Raw Access To Disk Volume Partition”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.596",
              "n": "Windows Registry BootExecute Modification",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the BootExecute registry key, which manages applications and services executed during system boot. It leverages data from the Endpoint.Registry data model, focusing on changes to the registry path \"HKLM\\\\System\\\\CurrentControlSet\\\\Control\\\\Session Manager\\\\BootExecute\". This activity is significant because unauthorized changes to this key can indicate attempts to achieve persistence, load malicious code, or tamper with the boot process. If confirme…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on Windows Registry that include the name of the path and key responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present and will need to be filtered.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2023/04/11/guidance-for-investigating-attacks-using-cve-2022-21894-the-blacklotus-campaign/",
              "mitre": [
                "T1542",
                "T1547.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Registry BootExecute Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1542, T1547.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Registry BootExecute Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present and will need to be filtered.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
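Editorial example: a hedged approximation of the underlying registry logic, reconstructed from the BootExecute key named in the description; the authoritative search is in Content Management:

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Registry
  where Registry.registry_path="*\\Control\\Session Manager\\BootExecute*"
  by Registry.dest, Registry.user, Registry.registry_path, Registry.registry_value_data
| `drop_dm_object_name("Registry")`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```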
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Registry BootExecute Modification”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.597",
              "n": "Windows Registry Dotnet ETW Disabled Via ENV Variable",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a registry modification that disables the ETW for the .NET Framework. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the COMPlus_ETWEnabled registry value under the \"Environment\" registry key path for both user (HKCU\\Environment) and machine (HKLM\\SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Environment) scopes. This activity is significant because disabling ETW can allow attackers to evade Endpoint Detection and Res…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Setting the \"COMPlus_ETWEnabled\" value as a global environment variable either in user or machine scope should only happens during debugging use cases, hence the false positives rate should be very minimal.",
              "refs": "https://gist.github.com/Cyb3rWard0g/a4a115fd3ab518a0e593525a379adee3, https://blog.xpnsec.com/hiding-your-dotnet-complus-etwenabled/, https://attack.mitre.org/techniques/T1562/006/",
              "mitre": [
                "T1562.006"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Registry Dotnet ETW Disabled Via ENV Variable\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Registry Dotnet ETW Disabled Via ENV Variable\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Setting the \"COMPlus_ETWEnabled\" value as a global environment variable either in user or machine scope should only happens during debugging use cases, hence the false positives rate should be very minimal.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
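              "_md_example": "Illustrative sketch (not the shipped ESCU search): the SPL above is only the Risk drilldown, so the following approximates the underlying detection logic. It assumes Endpoint.Registry is populated and accelerated and that the ESCU macros are installed; the path filter covers both the HKCU and HKLM Environment scopes, and the registry_value_data encoding (some sources log \"0x00000000\", others \"0\") is an assumption to verify against the correlation search in Content Management.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\Environment*\"\n    AND Registry.registry_value_name=\"COMPlus_ETWEnabled\"\n    AND Registry.registry_value_data IN (\"0x00000000\", \"0\")\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n```",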
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Registry Dotnet ETW Disabled Via ENV Variable”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.598",
              "n": "Windows Registry Entries Exported Via Reg",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the reg.exe process with either the \"save\" or \"export\" parameters. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. This activity is significant because threat actors often use the \"reg save\" or \"reg export\" command to dump credentials or test registry modification capabilities on compromised hosts. If confirmed malicious, this behavior could allow attack…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE `process_reg`\n        AND\n        Processes.process IN (\"* save *\", \"* export *\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_registry_entries_exported_via_reg_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network administrator can use this command tool to backup registry before updates or modifying critical registries.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/quser, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1012"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Registry Entries Exported Via Reg\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Registry Entries Exported Via Reg\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network administrator can use this command tool to backup registry before updates or modifying critical registries.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Registry Entries Exported Via Reg”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.599",
              "n": "Windows Registry Entries Restored Via Reg",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of reg.exe with the \"restore\" parameter, indicating an attempt to restore registry backup data on a host. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. This activity is significant as it may indicate post-exploitation actions, such as those performed by tools like winpeas, which use \"reg save\" and \"reg restore\" to manipulate registry settings. If confirme…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE `process_reg`\n        AND\n        Processes.process = \"* restore *\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_registry_entries_restored_via_reg_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network administrator can use this command tool to backup registry before updates or modifying critical registries.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/quser, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1012"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Registry Entries Restored Via Reg\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Registry Entries Restored Via Reg\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network administrator can use this command tool to backup registry before updates or modifying critical registries.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Registry Entries Restored Via Reg”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.600",
              "n": "Windows Registry Modification for Safe Mode Persistence",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies modifications to the SafeBoot registry keys, specifically within the Minimal and Network paths. This detection leverages registry activity logs from endpoint data sources like Sysmon or EDR tools. Monitoring these keys is crucial as adversaries can use them to persist drivers or services in Safe Mode, with Network allowing network connections. If confirmed malicious, this activity could enable attackers to maintain persistence even in Safe Mode, potentially bypa…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records registry activity from your hosts to populate the endpoint data model in the registry node. This is typically populated via endpoint detection-and-response product, such as Carbon Black or endpoint data sources, such as Sysmon. The data used for this search is typically generated via logs that report reads and writes to…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "updated windows application needed in safe boot may used this registry",
              "refs": "https://malware.news/t/threat-analysis-unit-tau-threat-intelligence-notification-snatch-ransomware/36365, https://redcanary.com/blog/tracking-driver-inventory-to-expose-rootkits/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1112/T1112.md, https://blog.didierstevens.com/2007/03/26/playing-with-safe-mode/",
              "mitre": [
                "T1547.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Registry Modification for Safe Mode Persistence\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Registry Modification for Safe Mode Persistence\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: updated windows application needed in safe boot may used this registry\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
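              "_md_example": "Illustrative sketch (not the shipped ESCU search): the SPL above is only the Risk drilldown, so the following approximates the underlying detection logic for SafeBoot key monitoring. It assumes Endpoint.Registry is populated and accelerated and that the ESCU macros are installed; verify the exact path filters against the correlation search in Content Management.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path IN (\"*\\\\Control\\\\SafeBoot\\\\Minimal\\\\*\", \"*\\\\Control\\\\SafeBoot\\\\Network\\\\*\")\n  by Registry.dest Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n```",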
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Registry Modification for Safe Mode Persistence”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.601",
              "n": "Windows Registry Payload Injection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspiciously long data written to the Windows registry, a behavior often linked to fileless malware or persistence techniques. It leverages Endpoint Detection and Response (EDR) telemetry, focusing on registry events with data lengths exceeding 512 characters. This activity is significant as it can indicate an attempt to evade traditional file-based defenses, making it crucial for SOC monitoring. If confirmed malicious, this technique could allow attackers to maint…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 13 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mandiant.com/resources/blog/tracking-evolution-gootloader-operations, https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/kovter-an-evolving-malware-gone-fileless, https://attack.mitre.org/techniques/T1027/011/",
              "mitre": [
                "T1027.011"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Registry Payload Injection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Registry Payload Injection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
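              "_md_example": "Illustrative sketch (not the shipped ESCU search): the SPL above is only the Risk drilldown, so the following approximates the underlying logic of flagging unusually long registry writes. It assumes Endpoint.Registry is populated and accelerated and that the ESCU macros are installed; the 512-character threshold comes from the description above and may need tuning for your environment.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_data=*\n  by Registry.dest Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| eval data_length=len(registry_value_data)\n| where data_length > 512\n```",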
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Registry Payload Injection”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.602",
              "n": "Windows Regsvr32 Renamed Binary",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where the regsvr32.exe binary has been renamed and executed. This detection leverages Endpoint Detection and Response (EDR) data, specifically focusing on the original filename metadata. Renaming regsvr32.exe is significant as it can be an evasion technique used by attackers to bypass security controls. If confirmed malicious, this activity could allow an attacker to execute arbitrary DLLs, potentially leading to code execution, privilege escalation, o…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://twitter.com/pr0xylife/status/1585612370441031680?s=46&t=Dc3CJi4AnM-8rNoacLbScg",
              "mitre": [
                "T1218.010"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Regsvr32 Renamed Binary\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.010. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Regsvr32 Renamed Binary\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
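              "_md_example": "Illustrative sketch (not the shipped ESCU search): the SPL above is only the Risk drilldown, so the following approximates the underlying logic of comparing the on-disk process name against the PE original_file_name metadata. It assumes Endpoint.Processes is populated with original_file_name and that the ESCU macros are installed; confirm your EDR source actually supplies that field before relying on it.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.original_file_name=\"regsvr32.exe\" AND Processes.process_name!=\"regsvr32.exe\"\n  by Processes.dest Processes.user Processes.process_name Processes.original_file_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",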
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Regsvr32 Renamed Binary”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.603",
              "n": "Windows Remote Access Software RMS Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation or modification of Windows registry entries related to the Remote Manipulator System (RMS) Remote Admin tool. It leverages data from the Endpoint.Registry datamodel, focusing on registry paths containing \"SYSTEM\\\\Remote Manipulator System.\" This activity is significant because RMS, while legitimate, is often abused by adversaries, such as in the Azorult malware campaigns, to gain unauthorized remote access. If confirmed malicious, this could allow atta…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/, https://malpedia.caad.fkie.fraunhofer.de/details/win.rms",
              "mitre": [
                "T1219"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Remote Access Software RMS Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1219. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Remote Access Software RMS Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
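              "_md_example": "Illustrative sketch (not the shipped ESCU search): the SPL above is only the Risk drilldown, so the following approximates the underlying logic of watching for Remote Manipulator System registry keys. It assumes Endpoint.Registry is populated and accelerated and that the ESCU macros are installed; the path filter mirrors the description above and should be verified against the correlation search in Content Management.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\SYSTEM\\\\Remote Manipulator System*\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n```",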
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Remote Access Software RMS Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.604",
              "n": "Windows Remote Assistance Spawning Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects Microsoft Remote Assistance (msra.exe) spawning PowerShell.exe or cmd.exe as a child process. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where msra.exe is the parent process. This activity is significant because msra.exe typically does not spawn command-line interfaces, indicating potential process injection or misuse. If confirmed malicious, an attacker could use this technique to execute ar…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited, filter as needed. Add additional shells as needed.",
              "refs": "https://thedfirreport.com/2022/02/07/qbot-likes-to-move-it-move-it/, https://app.any.run/tasks/ca1616de-89a1-4afc-a3e4-09d428df2420/",
              "mitre": [
                "T1055"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Remote Assistance Spawning Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Remote Assistance Spawning Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited, filter as needed. Add additional shells as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Remote Assistance Spawning Process”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.605",
              "n": "Windows Remote Service Rdpwinst Tool Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the RDPWInst.exe tool, which is an RDP wrapper library used to enable remote desktop host support and concurrent RDP sessions. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, original file names, and specific command-line arguments. This activity is significant because adversaries can abuse this tool to establish unauthorized RDP connections, facilitating remote access and potential latera…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This tool was designed for home usage and not commonly seen in production environment. Filter as needed.",
              "refs": "https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Remote Service Rdpwinst Tool Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Remote Service Rdpwinst Tool Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This tool was designed for home usage and not commonly seen in production environment. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Remote Service Rdpwinst Tool Execution”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.606",
              "n": "Windows Remote Services Allow Remote Assistance",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications in the Windows registry to enable remote desktop assistance on a targeted machine. It leverages data from the Endpoint.Registry datamodel, specifically monitoring changes to the \"Control\\\\Terminal Server\\\\fAllowToGetHelp\" registry path. This activity is significant because enabling remote assistance via registry is uncommon and often associated with adversaries or malware like Azorult. If confirmed malicious, this could allow an attacker to remotely a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://learn.microsoft.com/en-us/windows-hardware/customize/desktop/unattend/microsoft-windows-remoteassistance-exe-fallowtogethelp, https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Remote Services Allow Remote Assistance\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Remote Services Allow Remote Assistance\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Remote Services Allow Remote Assistance”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.607",
              "n": "Windows Remote Services Rdp Enable",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications in the Windows registry to enable Remote Desktop Protocol (RDP) on a targeted machine. It leverages data from the Endpoint.Registry datamodel, specifically monitoring changes to the \"fDenyTSConnections\" registry value. This activity is significant as enabling RDP via registry is uncommon and often associated with adversaries or malware attempting to gain remote access. If confirmed malicious, this could allow attackers to remotely control the compromi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://www.hybrid-analysis.com/sample/9d6611c2779316f1ef4b4a6edcfdfb5e770fe32b31ec2200df268c3bd236ed75?environmentId=100",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Remote Services Rdp Enable\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Remote Services Rdp Enable\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Remote Services Rdp Enable”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.608",
              "n": "Windows Renamed Powershell Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where the PowerShell executable has been renamed and executed under an alternate filename. This behavior is commonly associated with attempts to evade security controls or bypass logging mechanisms that monitor standard PowerShell usage. While rare in legitimate environments, renamed PowerShell binaries are frequently observed in malicious campaigns leveraging Living-off-the-Land Binaries (LOLBins) and fileless malware techniques. This detection flags …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.xworm",
              "mitre": [
                "T1036.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Renamed Powershell Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Renamed Powershell Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Renamed Powershell Execution”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.609",
              "n": "Windows Rundll32 Load DLL in Temp Dir",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies instances where rundll32.exe is used to load a DLL from a temporary directory, such as C:\\Users\\<User>\\AppData\\Local\\Temp\\ or C:\\Windows\\Temp\\. While rundll32.exe is a legitimate Windows utility used to execute functions exported from DLLs, its use to load libraries from temporary locations is highly suspicious. These directories are commonly used by malware and red team tools to stage payloads or execute code in-memory without writing it to more persistent locations. T…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://blog.sekoia.io/interlock-ransomware-evolving-under-the-radar/",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Rundll32 Load DLL in Temp Dir\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Rundll32 Load DLL in Temp Dir\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Rundll32 Load DLL in Temp Dir”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.610",
              "n": "Windows RunMRU Command Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows RunMRU registry key, which stores a history of commands executed through the Run dialog box (Windows+R). It leverages Endpoint Detection and Response (EDR) telemetry to monitor registry events targeting this key. This activity is significant as malware often uses the Run dialog to execute malicious commands while attempting to appear legitimate. If confirmed malicious, this could indicate an attacker using indirect command execution tec…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection may generate a few false positives, such as legitimate software updates or legitimate system maintenance activities that modify the RunMRU key. However, the exclusion of MRUList value changes helps reduce the number of false positives by focusing only on actual command entries. Add any specific false positives to the built in filter to reduce findings as needed.",
              "refs": "https://medium.com/@ahmed.moh.farou2/fake-captcha-campaign-on-arabic-pirated-movie-sites-delivers-lumma-stealer-4f203f7adabf, https://medium.com/@shaherzakaria8/downloading-trojan-lumma-infostealer-through-capatcha-1f25255a0e71, https://www.forensafe.com/blogs/runmrukey.html, https://github.com/SigmaHQ/sigma/blob/master/rules-threat-hunting/windows/registry/registry_set/registry_set_runmru_command_execution.yml",
              "mitre": [
                "T1202"
              ],
              "dtype": "registry_value_text",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows RunMRU Command Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to registry value entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1202. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows RunMRU Command Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection may generate a few false positives, such as legitimate software updates or legitimate system maintenance activities that modify the RunMRU key. However, the exclusion of MRUList value changes helps reduce the number of false positives by focusing only on actual command entries. Add any specific false positives to the built in filter to reduce findings as needed.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows RunMRU Command Execution”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.611",
              "n": "Windows Scheduled Task DLL Module Loaded",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where the taskschd.dll is loaded by processes running in suspicious or writable directories. This activity is unusual, as legitimate processes that load taskschd.dll typically reside in protected system locations. Malware or threat actors may attempt to load this DLL from writable or non-standard directories to manipulate the Task Scheduler and execute malicious tasks. By identifying processes that load taskschd.dll in these unsafe locations, this detecti…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and imageloaded executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Third party Legitimate application may load this task schedule dll module.",
              "refs": "https://www.proofpoint.com/us/blog/threat-insight/chinese-malware-appears-earnest-across-cybercrime-threat-landscape, https://www.fortinet.com/blog/threat-research/valleyrat-campaign-targeting-chinese-speakers",
              "mitre": [
                "T1053"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Scheduled Task DLL Module Loaded\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Scheduled Task DLL Module Loaded\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Third party Legitimate application may load this task schedule dll module.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Scheduled Task DLL Module Loaded”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.612",
              "n": "Windows Scheduled Task with Highest Privileges",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a new scheduled task with the highest execution privileges via Schtasks.exe. It leverages Endpoint Detection and Response (EDR) logs to monitor for specific command-line parameters ('/rl' and 'highest') in schtasks.exe executions. This activity is significant as it is commonly used in AsyncRAT attacks for persistence and privilege escalation. If confirmed malicious, this could allow an attacker to maintain persistent access and execute tasks with el…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may arise from legitimate applications that create tasks to run as SYSTEM. Therefore, it's recommended to adjust filters based on parent process or modify the query to include world writable paths for restriction.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.asyncrat",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Scheduled Task with Highest Privileges\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Scheduled Task with Highest Privileges\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may arise from legitimate applications that create tasks to run as SYSTEM. Therefore, it's recommended to adjust filters based on parent process or modify the query to include world writable paths for restriction.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Scheduled Task with Highest Privileges”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.613",
              "n": "Windows Schtasks Create Run As System",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a new scheduled task using Schtasks.exe to run as the SYSTEM user. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions and process details. This activity is significant as it often indicates an attempt to gain elevated privileges or maintain persistence within the environment. If confirmed malicious, an attacker could execute code with SYSTEM-level privileges, potentially leading to da…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be limited to legitimate applications creating a task to run as SYSTEM. Filter as needed based on parent process, or modify the query to have world writeable paths to restrict it.",
              "refs": "https://pentestlab.blog/2019/11/04/persistence-scheduled-tasks/, https://www.ired.team/offensive-security/persistence/t1053-schtask, https://thedfirreport.com/2022/02/07/qbot-likes-to-move-it-move-it/",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Schtasks Create Run As System\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Schtasks Create Run As System\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be limited to legitimate applications creating a task to run as SYSTEM. Filter as needed based on parent process, or modify the query to have world writeable paths to restrict it.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Schtasks Create Run As System”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.614",
              "n": "Windows ScManager Security Descriptor Tampering Via Sc.EXE",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects changes in the ScManager service security descriptor settings. It leverages data from Endpoint Detection and Response (EDR) agents, specifically searching for any process execution involving the \"sc.exe\" binary with the \"sdset\" flag targeting the \"scmanager\" service. If confirmed malicious, this could allow an attacker to escalate their privileges.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. descriptor settings of the scmanager service should be immediately investigated and understood.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/8ac5c4f84692b11ea2832d18d3dc6f1ce7fb4e41/atomics/T1569.002/T1569.002.md#atomic-test-7---modifying-acl-of-service-control-manager-via-sdet, https://0xv1n.github.io/posts/scmanager/, https://attack.mitre.org/techniques/T1569/002/",
              "mitre": [
                "T1569.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows ScManager Security Descriptor Tampering Via Sc.EXE\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1569.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows ScManager Security Descriptor Tampering Via Sc.EXE\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. descriptor settings of the scmanager service should be immediately investigated and understood.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows ScManager Security Descriptor Tampering Via Sc.EXE”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.615",
              "n": "Windows Screen Capture in TEMP folder",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of screen capture files by the Braodo stealer malware. This stealer is known to capture screenshots of the victim's desktop as part of its data theft activities. The detection focuses on identifying unusual screen capture activity, especially when images are saved in directories often used by malware, such as temporary or hidden folders. Monitoring for these files helps to quickly identify malicious screen capture attempts, allowing security teams to r…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://x.com/suyog41/status/1825869470323056748, https://g0njxa.medium.com/from-vietnam-to-united-states-malware-fraud-and-dropshipping-98b7a7b2c36d",
              "mitre": [
                "T1113"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Screen Capture in TEMP folder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1113. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Screen Capture in TEMP folder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Screen Capture in TEMP folder”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.616",
              "n": "Windows Security Account Manager Stopped",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the stopping of the Windows Security Account Manager (SAM) service via command-line, typically using the \"net stop samss\" command. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant because stopping the SAM service can disrupt authentication mechanisms and is often associated with ransomware attacks like Ryuk. If confirmed malicious, this action could l…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "SAM is a critical windows service, stopping it would cause major issues on an endpoint this makes false positive rare. AlthoughNo false positives have been identified.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1489"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Security Account Manager Stopped\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Security Account Manager Stopped\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: SAM is a critical windows service, stopping it would cause major issues on an endpoint this makes false positive rare. AlthoughNo false positives have been identified.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Security Account Manager Stopped”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.617",
              "n": "Windows Security And Backup Services Stop",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious termination of known services commonly targeted by ransomware before file encryption. It leverages Windows System Event Logs (EventCode 7036) to identify when critical services such as Volume Shadow Copy, backup, and antivirus services are stopped. This activity is significant because ransomware often disables these services to avoid errors and ensure successful file encryption. If confirmed malicious, this behavior could lead to widespread data encr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7036",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the 7036 EventCode ScManager in System audit Logs from your endpoints.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Admin activities or installing related updates may do a sudden stop to list of services we monitor.",
              "refs": "https://krebsonsecurity.com/2021/05/a-closer-look-at-the-darkside-ransomware-gang/, https://www.mcafee.com/blogs/other-blogs/mcafee-labs/mcafee-atr-analyzes-sodinokibi-aka-revil-ransomware-as-a-service-what-the-code-tells-us/, https://news.sophos.com/en-us/2020/04/24/lockbit-ransomware-borrows-tricks-to-keep-up-with-revil-and-maze/, https://blogs.vmware.com/security/2022/10/lockbit-3-0-also-known-as-lockbit-black.html",
              "mitre": [
                "T1490"
              ],
              "dtype": "service",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Security And Backup Services Stop\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to service entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7036. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Security And Backup Services Stop\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Admin activities or installing related updates may do a sudden stop to list of services we monitor.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Security And Backup Services Stop”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.618",
              "n": "Windows Security Support Provider Reg Query",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies command-line activity querying the registry for Security Support Providers (SSPs) related to Local Security Authority (LSA) protection and configuration. This detection leverages Endpoint Detection and Response (EDR) telemetry, focusing on processes accessing specific LSA registry paths. Monitoring this activity is crucial as adversaries and post-exploitation tools like winpeas may use it to gather information on LSA protections, potentially leading to credentia…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://blog.netwrix.com/2022/01/11/understanding-lsa-protection/, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1547.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Security Support Provider Reg Query\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Security Support Provider Reg Query\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Security Support Provider Reg Query”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.619",
              "n": "Windows Sensitive Group Discovery With Net",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `net.exe` with command-line arguments used to query elevated domain or sensitive groups. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it indicates potential reconnaissance efforts by adversaries to identify high-privileged users within Active Directory. If confirmed malicious, this behavior could lead to further attacks aimed at compromisi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/, https://learn.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/appendix-b--privileged-accounts-and-groups-in-active-directory, https://adsecurity.org/?p=3658, https://media.defense.gov/2023/May/24/2003229517/-1/-1/0/CSA_Living_off_the_Land.PDF, https://thedfirreport.com/2023/05/22/icedid-macro-ends-in-nokoyawa-ransomware/",
              "mitre": [
                "T1069.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Sensitive Group Discovery With Net\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Sensitive Group Discovery With Net\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Sensitive Group Discovery With Net”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.620",
              "n": "Windows Sensitive Registry Hive Dump Via CommandLine",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `reg.exe` to export Windows Registry hives, which may contain sensitive credentials. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving `save` or `export` actions targeting the `sam`, `system`, or `security` hives. This activity is significant as it indicates potential offline credential access attacks, often executed from untrusted processes or scripts. If confirmed malicious, a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible some agent based products will generate false positives. Filter as needed.",
              "refs": "https://www.mandiant.com/resources/shining-a-light-on-darkside-ransomware-operations, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1003.002/T1003.002.md, https://media.defense.gov/2023/May/24/2003229517/-1/-1/0/CSA_Living_off_the_Land.PDF",
              "mitre": [
                "T1003.002"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Sensitive Registry Hive Dump Via CommandLine\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Sensitive Registry Hive Dump Via CommandLine\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible some agent based products will generate false positives. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Sensitive Registry Hive Dump Via CommandLine”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.621",
              "n": "Windows Server Software Component GACUtil Install to GAC",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of GACUtil.exe to add a DLL into the Global Assembly Cache (GAC). It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant because adding a DLL to the GAC allows it to be called by any application, potentially enabling widespread code execution. If confirmed malicious, this could allow an attacker to execute arbitrary code across the operating system, leading to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if gacutil.exe is utilized day to day by developers. Filter as needed.",
              "refs": "https://strontic.github.io/xcyclopedia/library/gacutil.exe-F2FE4DF74BD214EDDC1A658043828089.html, https://www.microsoft.com/en-us/security/blog/2022/12/12/iis-modules-the-evolution-of-web-shells-and-how-to-detect-them/, https://www.microsoft.com/en-us/security/blog/2022/07/26/malicious-iis-extensions-quietly-open-persistent-backdoors-into-servers/, https://learn.microsoft.com/en-us/dotnet/framework/app-domains/gac",
              "mitre": [
                "T1505.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Server Software Component GACUtil Install to GAC\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Server Software Component GACUtil Install to GAC\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if gacutil.exe is utilized day to day by developers. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Server Software Component GACUtil Install to GAC”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.622",
              "n": "Windows Service Create Kernel Mode Driver",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of a new kernel mode driver using the sc.exe command. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs that include command-line details. The activity is significant because adding a kernel driver is uncommon in regular operations and can indicate an attempt to gain low-level access to the system. If confirmed malicious, this could allow an attacker to execute code with high privileg…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on common applications adding new drivers, however, filter as needed.",
              "refs": "https://whiteknightlabs.com/2025/11/25/discreet-driver-loading-in-windows/, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/sc-config",
              "mitre": [
                "T1068",
                "T1543.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Create Kernel Mode Driver\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068, T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Create Kernel Mode Driver\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on common applications adding new drivers, however, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Service Create Kernel Mode Driver”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.623",
              "n": "Windows Service Create with Tscon",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential RDP Hijacking attempts by identifying the creation of a Windows service using sc.exe with a binary path that includes tscon.exe. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events and command-line arguments. This activity is significant as it indicates an attacker may be trying to hijack a disconnected RDP session, posing a risk of unauthorized access. If confirmed malicious, the attacker c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may arise in the RDP Hijacking analytic when legitimate administrators access remote sessions for maintenance or troubleshooting purposes. These activities might resemble an attacker''s attempt to hijack a disconnected session, leading to false alarms. To mitigate the risk of false positives and improve the overall security posture, organizations can implement Group Policy to automatically disconnect RDP sessions when they are complete. By enforcing this policy, administrators ensure that disconnected sessions are promptly terminated, reducing the window of opportunity for an attacker to hijack a session. Additionally, organizations can also implement access control mechanisms and monitor the behavior of privileged accounts to further enhance security and reduce the chances of false positives in RDP Hijacking detection.",
              "refs": "https://doublepulsar.com/rdp-hijacking-how-to-hijack-rds-and-remoteapp-sessions-transparently-to-move-through-an-da2a1e73a5f6, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1563.002/T1563.002.md",
              "mitre": [
                "T1543.003",
                "T1563.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Create with Tscon\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.003, T1563.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Create with Tscon\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may arise in the RDP Hijacking analytic when legitimate administrators access remote sessions for maintenance or troubleshooting purposes. These activities might resemble an attacker''s attempt to hijack a disconnected session, leading to false alarms. To mitigate the risk of false positives and improve the overall security posture, organizations can implement Group Policy to automatically disconnect RDP sessions when they are complete. By enforcing this policy, administrators ensure that disconnected sessions are promptly terminated, reducing the window of opportunity for an attacker to hijack a session. Additionally, organizations can also implement access control mechanisms and monitor the behavior of privileged accounts to further enhance security and reduce the chances of false positives in RDP Hijacking detection.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Service Create with Tscon”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.624",
              "n": "Windows Service Deletion In Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of a service from the Windows Registry under CurrentControlSet\\Services. It leverages data from the Endpoint.Registry datamodel, specifically monitoring registry paths and actions related to service deletion. This activity is significant as adversaries may delete services to evade detection and hinder incident response efforts. If confirmed malicious, this action could disrupt legitimate services, impair system functionality, and potentially allow atta…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This event can be seen when administrator delete a service or uninstall/reinstall a software that creates service entry, but it is still recommended to check this alert with high priority.",
              "refs": "https://unit42.paloaltonetworks.com/brute-ratel-c4-tool/",
              "mitre": [
                "T1489"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Deletion In Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Deletion In Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This event can be seen when administrator delete a service or uninstall/reinstall a software that creates service entry, but it is still recommended to check this alert with high priority.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Service Deletion In Registry”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.625",
              "n": "Windows Service Stop Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to stop services on a system using `net.exe`, `sc.exe` or the \"Stop-Service\" cmdlet. It leverages Endpoint Detection and Response (EDR) telemetry. This activity can be significant as adversaries often terminate security or critical services to evade detection and further their objectives. If confirmed malicious, this behavior could allow attackers to disable security defenses, facilitate ransomware encryption, or disrupt essential services, leading to p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Windows OS or software may stop and restart services due to some critical update.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1489"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Stop Attempt\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Stop Attempt\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Windows OS or software may stop and restart services due to some critical update.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. 
To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nIf you have scheduled the detection, verify it is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Service Stop Attempt”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.626",
              "n": "Windows Service Stop By Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `sc.exe` to delete a Windows service. It leverages Endpoint Detection and Response (EDR) data, focusing on process execution logs that capture command-line arguments. This activity is significant because adversaries often delete services to disable security mechanisms or critical system functions, aiding in evasion and persistence. If confirmed malicious, this action could lead to the termination of essential security services, allowing attackers to oper…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible administrative scripts may start/stop/delete services. Filter as needed.",
              "refs": "https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/, https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1543.003/T1543.003.md",
              "mitre": [
                "T1489"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Stop By Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Stop By Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible administrative scripts may start/stop/delete services. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Service Stop By Deletion”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.627",
              "n": "Windows Snake Malware File Modification Crmlog",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of a .crmlog file within the %windows%\\Registration directory, typically with a format of <RANDOM_GUID>.<RANDOM_GUID>.crmlog. This detection leverages the Endpoint.Filesystem datamodel to monitor file creation events in the specified directory. This activity is significant as it is associated with the Snake malware, which uses this file for its operations. If confirmed malicious, this could indicate the presence of Snake malware, leading to potentia…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present as the file pattern does match legitimate files on disk. It is possible other native tools write the same file name scheme.",
              "refs": "https://media.defense.gov/2023/May/09/2003218554/-1/-1/0/JOINT_CSA_HUNTING_RU_INTEL_SNAKE_MALWARE_20230509.PDF",
              "mitre": [
                "T1027"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Snake Malware File Modification Crmlog\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Snake Malware File Modification Crmlog\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present as the file pattern does match legitimate files on disk. It is possible other native tools write the same file name scheme.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Snake Malware File Modification Crmlog”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.628",
              "n": "Windows Snake Malware Kernel Driver Comadmin",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of the comadmin.dat file in the %windows%\\system32\\Com directory, which is associated with Snake Malware. This detection leverages the Endpoint.Filesystem data model to identify file creation events matching the specified path and filename. This activity is significant because the comadmin.dat file is part of Snake Malware's installation process, which includes dropping a kernel driver and a custom DLL. If confirmed malicious, this activity could allow…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://media.defense.gov/2023/May/09/2003218554/-1/-1/0/JOINT_CSA_HUNTING_RU_INTEL_SNAKE_MALWARE_20230509.PDF",
              "mitre": [
                "T1547.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Snake Malware Kernel Driver Comadmin\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Snake Malware Kernel Driver Comadmin\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Snake Malware Kernel Driver Comadmin”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.629",
              "n": "Windows Snake Malware Service Create",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a new service named WerFaultSvc with a binary path in the Windows WinSxS directory. It leverages Windows System logs, specifically EventCode 7045, to identify this activity. This behavior is significant because it indicates the presence of Snake malware, which uses this service to maintain persistence by blending in with legitimate Windows services. If confirmed malicious, this activity could allow an attacker to execute Snake malware components, le…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7045",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting Windows System logs with the Service name, Service File Name Service Start type, and Service Type from your endpoints.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as this is a strict primary indicator used by Snake Malware.",
              "refs": "https://media.defense.gov/2023/May/09/2003218554/-1/-1/0/JOINT_CSA_HUNTING_RU_INTEL_SNAKE_MALWARE_20230509.PDF",
              "mitre": [
                "T1547.006",
                "T1569.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Snake Malware Service Create\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7045. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.006, T1569.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Snake Malware Service Create\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as this is a strict primary indicator used by Snake Malware.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Snake Malware Service Create”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.630",
              "n": "Windows SOAPHound Binary Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the SOAPHound binary (`soaphound.exe`) with specific command-line arguments. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, command-line arguments, and other process-related metadata. This activity is significant because SOAPHound is a known tool used for credential dumping and other malicious activities. If confirmed malicious, this behavior could allow an attacker to extract sensitive information, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as the command-line arguments are specific to SOAPHound. Filter as needed.",
              "refs": "https://github.com/FalconForceTeam/SOAPHound",
              "mitre": [
                "T1069.001",
                "T1069.002",
                "T1087.001",
                "T1087.002",
                "T1482"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SOAPHound Binary Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001, T1069.002, T1087.001, T1087.002, T1482. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SOAPHound Binary Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as the command-line arguments are specific to SOAPHound. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows SOAPHound Binary Execution”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.631",
              "n": "Windows SQL Spawning CertUtil",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of certutil to download software, specifically when spawned by SQL-related processes. This detection leverages Endpoint Detection and Response (EDR) data, focusing on command-line executions involving certutil with parameters like *urlcache* and *split*. This activity is significant as it may indicate a compromise by threat actors, such as Flax Typhoon, who use certutil to establish persistent VPN connections. If confirmed malicious, this behavior could all…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.parent_process_name IN (\"sqlservr.exe\", \"sqlagent.exe\", \"sqlps.exe\", \"launchpad.exe\", \"sqldumper.exe\") `process_certutil` (Processes.process=\"*urlcache*\"\n        OR\n        Processes.process=\"*verifyctl*\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_sql_spawning_certutil_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The occurrence of false positives should be minimal, given that the SQL agent does not typically download software using CertUtil.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2023/08/24/flax-typhoon-using-legitimate-software-to-quietly-access-taiwanese-organizations/",
              "mitre": [
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SQL Spawning CertUtil\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SQL Spawning CertUtil\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The occurrence of false positives should be minimal, given that the SQL agent does not typically download software using CertUtil.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
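\n\nOptional macro-free spot check (a sketch under assumptions): it approximates the packaged detection without the ESCU `process_certutil` and `windows_sql_spawning_certutil_filter` macros, so results may differ from the correlation search; field names follow the CIM Endpoint data model.\n\n```spl\n| tstats summariesonly=false count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name IN (\"sqlservr.exe\",\"sqlagent.exe\",\"sqlps.exe\") Processes.process_name=\"certutil.exe\" Processes.process IN (\"*urlcache*\",\"*verifyctl*\")\n  by Processes.dest Processes.parent_process_name Processes.process\n| convert ctime(firstTime) ctime(lastTime)\n```",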
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows SQL Spawning CertUtil”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.632",
              "n": "Windows SubInAcl Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the SubInAcl utility. SubInAcl is a legacy Windows Resource Kit tool from the Windows 2003 era, used to manipulate security descriptors of securable objects. It leverages data from Endpoint Detection and Response (EDR) agents, specifically searching for any process execution involving \"SubInAcl.exe\" binary. This activity can be significant because the utility should be rarely found on modern Windows machines, which mean any execution could potentia…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. should be identified and understood.",
              "refs": "https://attack.mitre.org/techniques/T1222/001/",
              "mitre": [
                "T1222.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SubInAcl Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SubInAcl Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. should be identified and understood.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
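\n\nOptional suppression sketch: the lookup name `subinacl_allowed_hosts` is hypothetical (you would create and maintain it), and ESCU detections end with a trailing `_filter` macro, which is the intended place for exclusions like this rather than the search body itself.\n\n```spl\n| search NOT [| inputlookup subinacl_allowed_hosts | fields dest]\n```",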
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows SubInAcl Execution”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.633",
              "n": "Windows Suspicious Process File Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies processes running from file paths not typically associated with legitimate software. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific process paths within the Endpoint data model. This activity is significant because adversaries often use unconventional file paths to execute malicious code without requiring administrative privileges. If confirmed malicious, this behavior could indicate an attempt to bypass security contro…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may allow execution of specific binaries in non-standard paths. Filter as needed.",
              "refs": "https://www.trendmicro.com/vinfo/hk/threat-encyclopedia/malware/trojan.ps1.powtran.a/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/, https://twitter.com/pr0xylife/status/1590394227758104576, https://malpedia.caad.fkie.fraunhofer.de/details/win.asyncrat, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1543",
                "T1036.005"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Suspicious Process File Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543, T1036.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Suspicious Process File Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may allow execution of specific binaries in non-standard paths. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
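\n\nOptional prevalence triage (a sketch for tuning, not ESCU content): it ranks the rarest process paths recorded in the Endpoint data model so analysts can review outliers before building suppressions; the result count and scope are assumptions to adapt.\n\n```spl\n| tstats summariesonly=false count from datamodel=Endpoint.Processes\n  by Processes.process_path\n| sort count\n| head 20\n```",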
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Suspicious Process File Path”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.634",
              "n": "Windows System Binary Proxy Execution Compiled HTML File Decompile",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the decompile parameter with the HTML Help application (HH.exe). This behavior is identified through Endpoint Detection and Response (EDR) telemetry, focusing on command-line executions involving the decompile parameter. This activity is significant because it is an uncommon command and has been associated with APT41 campaigns, where it was used to unpack HTML help files for further malicious actions. If confirmed malicious, this technique could allow at…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited, filter as needed.",
              "refs": "https://www.ptsecurity.com/ww-en/analytics/pt-esc-threat-intelligence/higaisa-or-winnti-apt-41-backdoors-old-and-new/, https://redcanary.com/blog/introducing-atomictestharnesses/, https://attack.mitre.org/techniques/T1218/001/, https://learn.microsoft.com/en-us/windows/win32/api/htmlhelp/nf-htmlhelp-htmlhelpa",
              "mitre": [
                "T1218.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System Binary Proxy Execution Compiled HTML File Decompile\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System Binary Proxy Execution Compiled HTML File Decompile\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
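\n\nOptional macro-free spot check (a sketch under assumptions): it looks for the `hh.exe` decompile behavior described above without the ESCU helper macros, so it casts a slightly wider net than the packaged detection.\n\n```spl\n| tstats summariesonly=false count min(_time) as firstTime from datamodel=Endpoint.Processes\n  where Processes.process_name=\"hh.exe\" Processes.process=\"*-decompile*\"\n  by Processes.dest Processes.user Processes.process\n| convert ctime(firstTime)\n```",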
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows System Binary Proxy Execution Compiled HTML File Decompile”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.635",
              "n": "Windows System Discovery Using ldap Nslookup",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of nslookup.exe to query domain information using LDAP. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant as nslookup.exe can be abused by malware like Qakbot to gather critical domain details, such as SRV records and server names. If confirmed malicious, this behavior could allow attackers to map the network, identify key servers, and plan further at…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "dministrator may execute this commandline tool for auditing purposes. Filter as needed.",
              "refs": "https://securelist.com/qakbot-technical-analysis/103931/, https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/verify-srv-dns-records-have-been-created",
              "mitre": [
                "T1033"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System Discovery Using ldap Nslookup\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1033. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System Discovery Using ldap Nslookup\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: dministrator may execute this commandline tool for auditing purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
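\n\nOptional macro-free spot check (a sketch under assumptions): it surfaces `nslookup.exe` invocations whose command line references ldap, approximating the packaged detection without the ESCU macros; field names follow the CIM Endpoint data model.\n\n```spl\n| tstats summariesonly=false count from datamodel=Endpoint.Processes\n  where Processes.process_name=\"nslookup.exe\" Processes.process=\"*ldap*\"\n  by Processes.dest Processes.user Processes.process\n| sort -count\n```",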
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows System Discovery Using ldap Nslookup”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.636",
              "n": "Windows System Discovery Using Qwinsta",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of \"qwinsta.exe\" on a Windows operating system. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. The \"qwinsta.exe\" tool is significant because it can display detailed session information on a remote desktop session host server. This behavior is noteworthy as it is commonly abused by Qakbot malware to gather system information and send it back to its Command and Control (C2) server. If…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name = \"qwinsta.exe\"\n        OR\n        Processes.original_file_name = \"qwinsta.exe\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(\"Processes\")`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_system_discovery_using_qwinsta_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator may execute this commandline tool for auditing purposes. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/qwinsta, https://securelist.com/qakbot-technical-analysis/103931/",
              "mitre": [
                "T1033"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System Discovery Using Qwinsta\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1033. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System Discovery Using Qwinsta\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator may execute this commandline tool for auditing purposes. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows System Discovery Using Qwinsta”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.637",
              "n": "Windows System File on Disk",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of new .sys files on disk. It leverages the Endpoint.Filesystem data model to identify and log instances where .sys files are written to the filesystem. This activity is significant because .sys files are often used as kernel mode drivers, and their unauthorized creation can indicate malicious activity such as rootkit installation. If confirmed malicious, this could allow an attacker to gain kernel-level access, leading to full system compromise, persi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Filesystem\n      WHERE Filesystem.file_name=\"*.sys*\"\n      BY Filesystem.action Filesystem.dest Filesystem.file_access_time\n         Filesystem.file_create_time Filesystem.file_hash Filesystem.file_modify_time\n         Filesystem.file_name Filesystem.file_path Filesystem.file_acl\n         Filesystem.file_size Filesystem.process_guid Filesystem.process_id\n         Filesystem.user Filesystem.vendor_product\n    | `drop_dm_object_name(Filesystem)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_system_file_on_disk_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present. Filter as needed.",
              "refs": "https://redcanary.com/blog/tracking-driver-inventory-to-expose-rootkits/",
              "mitre": [
                "T1068"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System File on Disk\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System File on Disk\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows System File on Disk”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.638",
              "n": "Windows System LogOff Commandline",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Windows command line to log off a host machine. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on processes involving `shutdown.exe` with specific parameters. This activity is significant as it is often associated with Advanced Persistent Threats (APTs) and Remote Access Trojans (RATs) like dcrat, which use this technique to disrupt operations, aid in system destruction, or inhibit recovery. If confirmed malicious…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator may execute this commandline to trigger shutdown, logoff or restart the host machine.",
              "refs": "https://attack.mitre.org/techniques/T1529/, https://www.mandiant.com/resources/analyzing-dark-crystal-rat-backdoor",
              "mitre": [
                "T1529"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System LogOff Commandline\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1529. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System LogOff Commandline\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator may execute this commandline to trigger shutdown, logoff or restart the host machine.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows System LogOff Commandline”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.639",
              "n": "Windows System Network Config Discovery Display DNS",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of the \"ipconfig /displaydns\" command, which retrieves DNS reply information using the built-in Windows tool IPConfig. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process command-line executions. Monitoring this activity is significant as threat actors and post-exploitation tools like WINPEAS often abuse this command to gather network information. If confirmed malicious, this activity could allow att…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://superuser.com/questions/230308/explain-output-of-ipconfig-displaydns, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1016"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System Network Config Discovery Display DNS\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1016. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System Network Config Discovery Display DNS\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows System Network Config Discovery Display DNS”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.640",
              "n": "Windows System Reboot CommandLine",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of the Windows command line to reboot a host machine using \"shutdown.exe\" with specific parameters. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant as it is often associated with advanced persistent threats (APTs) and remote access trojans (RATs) like dcrat, which may use system reboots to disrupt operations, aid in system destruction…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator may execute this commandline to trigger shutdown or restart the host machine.",
              "refs": "https://attack.mitre.org/techniques/T1529/, https://www.mandiant.com/resources/analyzing-dark-crystal-rat-backdoor",
              "mitre": [
                "T1529"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System Reboot CommandLine\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1529. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System Reboot CommandLine\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator may execute this commandline to trigger shutdown or restart the host machine.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows System Reboot CommandLine”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.641",
              "n": "Windows System Remote Discovery With Query",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `query.exe` with command-line arguments aimed at discovering data on remote devices. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as adversaries may use `query.exe` to gain situational awareness and perform Active Directory discovery on compromised endpoints. If confirmed malicious, this behavior could allow attackers to identify various deta…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel:Endpoint.Processes | search dest=$dest$ process_name = $process_name|s$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1033/",
              "mitre": [
                "T1033"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System Remote Discovery With Query\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1033. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System Remote Discovery With Query\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows System Remote Discovery With Query”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.642",
              "n": "Windows System Script Proxy Execution Syncappvpublishingserver",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of Syncappvpublishingserver.vbs via wscript.exe or cscript.exe, which may indicate an attempt to download remote files or perform privilege escalation. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. Monitoring this activity is crucial as it can signify malicious use of a native Windows script for unauthorized actions. If confirmed malicious, this behavior could le…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if the vbscript syncappvpublishingserver is used for legitimate purposes. Filter as needed. Adding a n; to the command-line arguments may help reduce any noise.",
              "refs": "https://lolbas-project.github.io/lolbas/Scripts/Syncappvpublishingserver/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1216/T1216.md#atomic-test-1---syncappvpublishingserver-signed-script-powershell-command-execution",
              "mitre": [
                "T1216",
                "T1218"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System Script Proxy Execution Syncappvpublishingserver\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1216, T1218. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System Script Proxy Execution Syncappvpublishingserver\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if the vbscript syncappvpublishingserver is used for legitimate purposes. Filter as needed. Adding a n; to the command-line arguments may help reduce any noise.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows System Script Proxy Execution Syncappvpublishingserver”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.643",
              "n": "Windows System Shutdown CommandLine",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of the Windows shutdown command via the command line interface. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because attackers may use the shutdown command to erase tracks, cause disruption, or ensure changes take effect after installing backdoors. If confirmed malicious, this activity could lead to system downtime, denial of service, or evasi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator may execute this commandline to trigger shutdown or restart the host machine.",
              "refs": "https://attack.mitre.org/techniques/T1529/, https://www.mandiant.com/resources/analyzing-dark-crystal-rat-backdoor",
              "mitre": [
                "T1529"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System Shutdown CommandLine\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1529. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System Shutdown CommandLine\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator may execute this commandline to trigger shutdown or restart the host machine.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows System Shutdown CommandLine”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.644",
              "n": "Windows System Time Discovery W32tm Delay",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of the w32tm.exe utility with the /stripchart function, which is indicative of DCRat malware delaying its payload execution. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific command-line arguments used by w32tm.exe. This activity is significant as it may indicate an attempt to evade detection by delaying malicious actions such as C2 communication and beaconing. If confirmed malicious, this behavior cou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://cert.gov.ua/article/405538, https://malpedia.caad.fkie.fraunhofer.de/details/win.dcrat, https://www.mandiant.com/resources/analyzing-dark-crystal-rat-backdoor",
              "mitre": [
                "T1124"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System Time Discovery W32tm Delay\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1124. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System Time Discovery W32tm Delay\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows System Time Discovery W32tm Delay”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.645",
              "n": "Windows System User Discovery Via Quser",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Windows OS tool quser.exe, commonly used to gather information about user sessions on a Remote Desktop Session Host server. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs. Monitoring this activity is crucial as quser.exe is often abused by post-exploitation tools like winpeas, used in ransomware attacks to enumerate user sessions. If confirmed malicious, attackers could levera…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name=\"quser.exe\"\n        OR\n        Processes.original_file_name = \"quser.exe\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_system_user_discovery_via_quser_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network administrator can use this command tool to audit RDP access of user in specific network or host.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/quser, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1033"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System User Discovery Via Quser\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1033. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System User Discovery Via Quser\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network administrator can use this command tool to audit RDP access of user in specific network or host.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows System User Discovery Via Quser”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.646",
              "n": "Windows System User Privilege Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `whoami.exe` with the `/priv` parameter, which displays the privileges assigned to the current user account. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it may indicate an adversary attempting to enumerate user privileges, a common step in the reconnaissance phase of an attack. If confirmed malicious, this could lead to privilege escalati…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name=\"whoami.exe\" Processes.process= \"*/priv*\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_system_user_privilege_discovery_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1033/, https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-347a",
              "mitre": [
                "T1033"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows System User Privilege Discovery\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1033. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows System User Privilege Discovery\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows System User Privilege Discovery”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.647",
              "n": "Windows Time Based Evasion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potentially malicious processes that initiate a ping delay using an invalid IP address. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving \"ping 0 -n\". This behavior is significant as it is commonly used by malware like NJRAT to introduce time delays for evasion tactics, such as delaying self-deletion. If confirmed malicious, this activity could indicate an active infection attempting to evade detectio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.njrat",
              "mitre": [
                "T1497.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Time Based Evasion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1497.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Time Based Evasion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Time Based Evasion”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.648",
              "n": "Windows Time Based Evasion via Choice Exec",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of choice.exe in batch files as a delay tactic, a technique observed in SnakeKeylogger malware. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it indicates potential time-based evasion techniques used by malware to avoid detection. If confirmed malicious, this behavior could allow attackers to execute code stealthily, delete malicious files, and pers…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrator may use choice.exe to allow user to choose from and indexes of choices from a batch script.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/choice, https://malpedia.caad.fkie.fraunhofer.de/details/win.404keylogger",
              "mitre": [
                "T1497.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Time Based Evasion via Choice Exec\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1497.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Time Based Evasion via Choice Exec\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrator may use choice.exe to allow user to choose from and indexes of choices from a batch script.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Time Based Evasion via Choice Exec”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.649",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a source endpoint failing to authenticate with multiple disabled domain users using the Kerberos protocol. It leverages EventCode 4768, which is generated when the Key Distribution Center issues a Kerberos Ticket Granting Ticket (TGT) and detects failure code `0x12` (credentials revoked). This behavior is significant as it may indicate a Password Spraying attack targeting disabled accounts, potentially leading to initial access or privilege escalation. If confir…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4768",
              "q": "`wineventlog_security` EventCode=4768 TargetUserName!=*$ Status=0x12\n      | bucket span=5m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as user values(dest) as dest\n        BY _time, IpAddress\n      | eventstats avg(unique_accounts) as comp_avg , stdev(unique_accounts) as comp_std\n        BY IpAddress\n      | eval upperBound=(comp_avg+comp_std*3)\n      | eval isOutlier=if(unique_accounts > 10 and unique_accounts >= upperBound, 1, 0)\n      | search isOutlier=1\n      | `windows_unusual_count_of_disabled_users_failed_auth_using_kerberos_filter`",
              "m": "To successfully implement this search, you need to be ingesting Domain Controller and Kerberos events. The Advanced Security Audit policy setting `Audit Kerberos Authentication Service` within `Account Logon` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A host failing to authenticate with multiple disabled domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, multi-user systems missconfigured systems.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4768. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A host failing to authenticate with multiple disabled domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, multi-user systems missconfigured systems.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for a single address trying many disabled accounts over Kerberos in a few minutes, so we catch password-spray style activity against ghost accounts before it widens.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.650",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a source endpoint failing to authenticate with multiple invalid domain users using the Kerberos protocol. It leverages Event ID 4768, which is generated when the Key Distribution Center issues a Kerberos Ticket Granting Ticket (TGT) and detects failure code 0x6, indicating the user is not found in the Kerberos database. This behavior is significant as it may indicate a Password Spraying attack, where an adversary attempts to gain initial access or elevate privil…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4768",
              "q": "`wineventlog_security` EventCode=4768 TargetUserName!=*$ Status=0x6\n      | bucket span=5m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as user values(dest) as dest\n        BY _time, IpAddress\n      | eventstats avg(unique_accounts) as comp_avg , stdev(unique_accounts) as comp_std\n        BY IpAddress\n      | eval upperBound=(comp_avg+comp_std*3)\n      | eval isOutlier=if(unique_accounts > 10 and unique_accounts >= upperBound, 1, 0)\n      | search isOutlier=1\n      | `windows_unusual_count_of_invalid_users_fail_to_auth_using_kerberos_filter`",
              "m": "To successfully implement this search, you need to be ingesting Domain Controller and Kerberos events. The Advanced Security Audit policy setting `Audit Kerberos Authentication Service` within `Account Logon` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A host failing to authenticate with multiple invalid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, multi-user systems and missconfigured systems.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4768. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A host failing to authenticate with multiple invalid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, multi-user systems and missconfigured systems.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for a single address failing many user logons that should not be active, so we catch guessing against stale identities.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.651",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a source endpoint failing to authenticate with multiple invalid users using the NTLM protocol. It leverages EventCode 4776 and calculates the standard deviation for each host, using the 3-sigma rule to detect anomalies. This behavior is significant as it may indicate a Password Spraying attack, where an adversary attempts to gain initial access or elevate privileges. If confirmed malicious, this activity could lead to unauthorized access or privilege escalation,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4776",
              "q": "`wineventlog_security` EventCode=4776 TargetUserName!=*$ Status=0xc0000064\n      | bucket span=2m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as user values(dest) as dest\n        BY _time, Workstation\n      | eventstats avg(unique_accounts) as comp_avg , stdev(unique_accounts) as comp_std\n        BY Workstation\n      | eval upperBound=(comp_avg+comp_std*3)\n      | eval isOutlier=if(unique_accounts > 10 and unique_accounts >= upperBound, 1, 0)\n      | search isOutlier=1\n      | rename Workstation as src\n      | `windows_unusual_count_of_invalid_users_failed_to_auth_using_ntlm_filter`",
              "m": "To successfully implement this search, you need to be ingesting Domain Controller events. The Advanced Security Audit policy setting `Audit Credential Validation' within `Account Logon` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A host failing to authenticate with multiple invalid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems. If this detection triggers on a host other than a Domain Controller, the behavior could represent a password spraying attack against the host's local accounts.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/audit-credential-validation, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4776",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4776. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A host failing to authenticate with multiple invalid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems. If this detection triggers on a host other than a Domain Controller, the behavior could represent a password spraying attack against the host's local accounts.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for many failed or unknown username attempts in one window from a single source, so we catch account enumeration and guessing early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.652",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a source endpoint failing to authenticate multiple valid users using the Kerberos protocol, potentially indicating a Password Spraying attack. It leverages Event 4771, which is generated when the Key Distribution Center fails to issue a Kerberos Ticket Granting Ticket (TGT) due to a wrong password (failure code 0x18). This detection uses statistical analysis, specifically the 3-sigma rule, to identify unusual authentication failures. If confirmed malicious, this…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4771",
              "q": "`wineventlog_security` EventCode=4771 TargetUserName!=\"*$\" Status=0x18\n      | bucket span=5m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as user values(dest) as dest\n        BY _time, IpAddress\n      | eventstats avg(unique_accounts) as comp_avg , stdev(unique_accounts) as comp_std\n        BY IpAddress\n      | eval upperBound=(comp_avg+comp_std*3)\n      | eval isOutlier=if(unique_accounts > 10 and unique_accounts >= upperBound, 1, 0)\n      | search isOutlier=1\n      | `windows_unusual_count_of_users_failed_to_auth_using_kerberos_filter`",
              "m": "To successfully implement this search, you need to be ingesting Domain Controller and Kerberos events. The Advanced Security Audit policy setting `Audit Kerberos Authentication Service` within `Account Logon` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A host failing to authenticate with multiple valid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, missconfigured systems and multi-user systems like Citrix farms.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn319109(v=ws.11), https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4771",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4771. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A host failing to authenticate with multiple valid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, missconfigured systems and multi-user systems like Citrix farms.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
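\n\nOptional pre-flight check\n\nA minimal sketch (not part of the packaged ESCU content) to confirm that Event 4771 pre-authentication failures with failure code 0x18 are arriving and to preview per-source volumes before enabling the detection. It assumes the `wineventlog_security` macro used by the detection resolves in your environment:\n\n```spl\n`wineventlog_security` EventCode=4771 Status=0x18 TargetUserName!=\"*$\"\n| timechart span=1h limit=10 dc(TargetUserName) as unique_accounts by IpAddress\n```",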
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kerberos pre-auth failures in tight bursts, so we catch noisy guessing against real account names before tickets get issued.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.653",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a source endpoint failing to authenticate multiple valid users using the NTLM protocol, potentially indicating a Password Spraying attack. It leverages Event 4776 from Domain Controllers, calculating the standard deviation for each host and applying the 3-sigma rule to detect anomalies. This activity is significant as it may represent an adversary attempting to gain initial access or elevate privileges. If confirmed malicious, the attacker could compromise multi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4776",
              "q": "`wineventlog_security`  EventCode=4776 TargetUserName!=*$ Status=0xC000006A\n      | bucket span=2m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as tried_accounts values(dest) as dest\n        BY _time, Workstation\n      | eventstats avg(unique_accounts) as comp_avg , stdev(unique_accounts) as comp_std\n        BY Workstation\n      | eval upperBound=(comp_avg+comp_std*3)\n      | eval isOutlier=if(unique_accounts > 10 and unique_accounts >= upperBound, 1, 0)\n      | search isOutlier=1\n      | `windows_unusual_count_of_users_failed_to_authenticate_using_ntlm_filter`",
              "m": "To successfully implement this search, you need to be ingesting Domain Controller events. The Advanced Security Audit policy setting `Audit Credential Validation` within `Account Logon` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A host failing to authenticate with multiple valid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems. If this detection triggers on a host other than a Domain Controller, the behavior could represent a password spraying attack against the host's local accounts.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/audit-credential-validation, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4776",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4776. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A host failing to authenticate with multiple valid domain users is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems. If this detection triggers on a host other than a Domain Controller, the behavior could represent a password spraying attack against the host's local accounts.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
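\n\nOptional pre-flight check\n\nA minimal sketch (not part of the packaged ESCU content) to preview how many distinct accounts each workstation fails to validate via Event 4776 with status 0xC000006A before enabling the detection. It assumes the `wineventlog_security` macro resolves in your environment:\n\n```spl\n`wineventlog_security` EventCode=4776 Status=0xC000006A TargetUserName!=*$\n| stats dc(TargetUserName) as unique_accounts by Workstation\n| sort -unique_accounts\n```",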
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for many failed or unknown username attempts in one window from a single source, so we catch account enumeration and guessing early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.654",
              "n": "Windows USBSTOR Registry Key Modification",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic is used to identify when a USB removable media device is attached to a Windows host. In this scenario we are querying the Endpoint Registry data model to look for modifications to the HKLM\\System\\CurrentControlSet\\Enum\\USBSTOR\\ key. Adversaries and Insider Threats may use removable media devices for several malicious activities, including initial access, execution, and exfiltration.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 12, Sysmon EventID 13",
              "q": "| from datamodel:Endpoint.Registry | search dest=$dest$ registry_path IN (\"HKLM\\\\System\\\\CurrentControlSet\\\\Enum\\\\USBSTOR\\\\*\")",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to registry value entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 12, Sysmon EventID 13 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate USB activity will also be detected. Please verify and investigate as appropriate.",
              "refs": "https://attack.mitre.org/techniques/T1200/, https://www.cisa.gov/news-events/news/using-caution-usb-drives, https://www.bleepingcomputer.com/news/security/fbi-hackers-use-badusb-to-target-defense-firms-with-ransomware/",
              "mitre": [
                "T1200",
                "T1025",
                "T1091"
              ],
              "dtype": "registry_value_text",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows USBSTOR Registry Key Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to registry value entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 12, Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1200, T1025, T1091. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows USBSTOR Registry Key Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate USB activity will also be detected. Please verify and investigate as appropriate.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
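\n\nOptional data model spot check\n\nA minimal sketch (not part of the packaged ESCU content) to confirm USBSTOR registry modifications are reaching the Endpoint.Registry data model. It assumes CIM data model acceleration is enabled; set summariesonly=false to include unaccelerated events:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"HKLM\\\\System\\\\CurrentControlSet\\\\Enum\\\\USBSTOR\\\\*\"\n  by Registry.dest Registry.registry_path\n```",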
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows USBSTOR Registry Key Modification”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.655",
              "n": "Windows User Disabled Via Net",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the `net.exe` utility to disable a user account via the command line. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. This activity is significant as it may indicate an adversary's attempt to disrupt user availability, potentially as a precursor to further malicious actions. If confirmed malicious, this could lead to denial of service for legitimate users, aiding the atta…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1531"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows User Disabled Via Net\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1531. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows User Disabled Via Net\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
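\n\nIllustrative detection logic\n\nThe authoritative logic ships inside the ESCU Correlation Search (see the note above). As a rough, unofficial approximation over the Endpoint.Processes data model (field names assume CIM normalization), an account disable via `net.exe` typically surfaces as:\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Processes\n  where (Processes.process_name=\"net.exe\" OR Processes.process_name=\"net1.exe\")\n    Processes.process=\"*/active:no*\"\n  by Processes.dest Processes.user Processes.process\n```",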
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows User Disabled Via Net”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.656",
              "n": "Windows User Discovery Via Net",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `net.exe` or `net1.exe` with command-line arguments `user` or `users` to query local user accounts. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it indicates potential reconnaissance efforts by adversaries to enumerate local users, which is a common step in situational awareness and Active Directory discovery. If confirmed malicious, this …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE `process_net` (Processes.process=\"*user\"\n        OR\n        Processes.process=\"*users\"\n        OR\n        Processes.process=\"*users *\"\n        OR\n        Processes.process=\"*user *\")\n        AND\n        NOT (Processes.process=\"*/add\"\n        OR\n        Processes.process=\"*/delete\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_user_discovery_via_net_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1087/001/",
              "mitre": [
                "T1087.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows User Discovery Via Net\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows User Discovery Via Net\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows User Discovery Via Net”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.657",
              "n": "Windows User Execution Malicious URL Shortcut File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation URL shortcut files, often used by malware like CHAOS ransomware. It leverages the Endpoint.Filesystem datamodel to identify \".url\" files created outside common directories, such as \"Program Files\". This activity can be significant as \".URL\" files can be used as mean to trick the user into visiting certain websites unknowingly, or when placed in certain locations such as \"\\\\AppData\\\\Roaming\\\\Microsoft\\\\Windows\\\\Start Menu\\\\Programs\\\\Startup\\\\\", it may a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the Filesystem responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may allow creation of script or exe in this path.",
              "refs": "https://attack.mitre.org/techniques/T1204/002/, https://www.fortinet.com/blog/threat-research/chaos-ransomware-variant-sides-with-russia",
              "mitre": [
                "T1204.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows User Execution Malicious URL Shortcut File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows User Execution Malicious URL Shortcut File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may allow creation of script or exe in this path.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
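\n\nIllustrative detection logic\n\nThe authoritative logic ships inside the ESCU Correlation Search (see the note above). As a rough, unofficial approximation over the Endpoint.Filesystem data model (the startup-folder path filter and field names are assumptions to adapt), suspicious \".url\" drops typically surface as:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime from datamodel=Endpoint.Filesystem\n  where Filesystem.file_name=\"*.url\"\n    Filesystem.file_path=\"*\\\\Start Menu\\\\Programs\\\\Startup\\\\*\"\n  by Filesystem.dest Filesystem.file_path Filesystem.file_name\n```",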
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows User Execution Malicious URL Shortcut File”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.658",
              "n": "Windows WinDBG Spawning AutoIt3",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances of the WinDBG process spawning AutoIt3. This behavior is detected by monitoring endpoint telemetry for processes where 'windbg.exe' is the parent process and 'autoit3.exe' or similar is the child process. This activity is significant because AutoIt3 is frequently used by threat actors for scripting malicious automation, potentially indicating an ongoing attack. If confirmed malicious, this could allow attackers to automate tasks, execute arbitrary code…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will only be present if the WinDBG process legitimately spawns AutoIt3. Filter as needed.",
              "refs": "https://github.com/PaloAltoNetworks/Unit42-timely-threat-intel/blob/main/2023-10-25-IOCs-from-DarkGate-activity.txt",
              "mitre": [
                "T1059"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows WinDBG Spawning AutoIt3\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows WinDBG Spawning AutoIt3\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will only be present if the WinDBG process legitimately spawns AutoIt3. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
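\n\nIllustrative detection logic\n\nThe authoritative logic ships inside the ESCU Correlation Search (see the note above). As a rough, unofficial approximation over the Endpoint.Processes data model (field names assume CIM normalization), the parent/child pairing looks like:\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=\"windbg.exe\" Processes.process_name=\"autoit3.exe\"\n  by Processes.dest Processes.parent_process Processes.process\n```",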
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows WinDBG Spawning AutoIt3”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.659",
              "n": "Windows WMI Impersonate Token",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential WMI token impersonation activities in a process or command. It leverages Sysmon EventCode 10 to identify instances where `wmiprvse.exe` has a duplicate handle or full granted access in a target process. This behavior is significant as it is commonly used by malware like Qakbot for privilege escalation or defense evasion. If confirmed malicious, this activity could allow an attacker to gain elevated privileges, evade defenses, and maintain persistence with…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This search requires Sysmon Logs and a Sysmon configuration, which includes EventCode 10. This search uses an input macro named `sysmon`. We strongly recommend that you specify your environment-specific configurations (index, source, sourcetype, etc.) for Windows Sysmon logs. Replace the macro definition with configurations for your Splunk environment. The search also uses a post-filter macro desi…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrator may execute impersonate wmi object script for auditing. Filter is needed.",
              "refs": "https://github.com/trustedsec/SysmonCommunityGuide/blob/master/chapters/process-access.md, https://www.joesandbox.com/analysis/278341/0/html",
              "mitre": [
                "T1047"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows WMI Impersonate Token\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows WMI Impersonate Token\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrator may execute impersonate wmi object script for auditing. Filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows WMI Impersonate Token”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.660",
              "n": "Windows WMI Process And Service List",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious WMI command lines querying for running processes or services. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific process and command-line events. This activity is significant as adversaries often use WMI to gather system information and identify services on compromised machines. If confirmed malicious, this behavior could allow attackers to map out the system, identify critical services, and plan further attacks,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "netowrk administrator or IT may execute this command for auditing processes and services.",
              "refs": "https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1047"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows WMI Process And Service List\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows WMI Process And Service List\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: netowrk administrator or IT may execute this command for auditing processes and services.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows WMI Process And Service List”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.661",
              "n": "Windows WMI Process Call Create",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of WMI command lines used to create or execute processes. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line events that include specific keywords like \"process,\" \"call,\" and \"create.\" This activity is significant because adversaries often use WMI to execute malicious payloads on local or remote hosts, potentially bypassing traditional security controls. If confirmed malicious, this behavior could allow attac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE `process_wmic` Processes.process = \"* process *\" Processes.process = \"* call *\" Processes.process = \"* create *\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_wmi_process_call_create_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may execute this command for testing or auditing.",
              "refs": "https://github.com/NVISOsecurity/sigma-public/blob/master/rules/windows/process_creation/win_susp_wmi_execution.yml, https://github.com/redcanaryco/atomic-red-team/blob/2b804d25418004a5f1ba50e9dc637946ab8733c7/atomics/T1047/T1047.md, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/, https://thedfirreport.com/2023/05/22/icedid-macro-ends-in-nokoyawa-ransomware/",
              "mitre": [
                "T1047"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows WMI Process Call Create\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows WMI Process Call Create\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may execute this command for testing or auditing.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows WMI Process Call Create”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.662",
              "n": "Windows WMIC Shadowcopy Delete",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects the use of WMIC to delete volume shadow copies, which is a common technique used by ransomware actors to prevent system recovery. Ransomware like Cactus often delete shadow copies before encrypting files to ensure victims cannot recover their data without paying the ransom. This behavior is particularly concerning as it indicates potential ransomware activity or malicious actors attempting to prevent system recovery.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$process_name$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate system maintenance or backup operations may occasionally delete shadow copies. However, this activity should be rare and typically performed through approved administrative tools rather than direct WMIC commands. Tune and modify the search to fit your environment, enable as TTP.",
              "refs": "https://any.run/malware-trends/cactus, https://attack.mitre.org/techniques/T1490/",
              "mitre": [
                "T1490"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows WMIC Shadowcopy Delete\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows WMIC Shadowcopy Delete\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate system maintenance or backup operations may occasionally delete shadow copies. However, this activity should be rare and typically performed through approved administrative tools rather than direct WMIC commands. Tune and modify the search to fit your environment, enable as TTP.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows WMIC Shadowcopy Delete”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.663",
              "n": "Windows WPDBusEnum Registry Key Modification",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic is used to identify when a USB removable media device is attached to a Windows host. In this scenario we are querying the Endpoint Registry data model to look for modifications to the Windows Portable Device keys HKLM\\SOFTWARE\\Microsoft\\Windows Portable Devices\\Devices\\ or HKLM\\System\\CurrentControlSet\\Enum\\SWD\\WPDBUSENUM\\ . Adversaries and Insider Threats may use removable media devices for several malicious activities, including initial access, execution, and exfiltration.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 12, Sysmon EventID 13",
              "q": "| from datamodel:Endpoint.Registry | search dest=$dest$ registry_path IN (\"HKLM\\\\SOFTWARE\\\\Microsoft\\\\Windows Portable Devices\\\\Devices\\\\*\",\"HKLM\\\\System\\\\CurrentControlSet\\\\Enum\\\\SWD\\\\WPDBUSENUM\\\\*\")",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to registry value entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 12, Sysmon EventID 13 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate USB activity will also be detected. Please verify and investigate as appropriate.",
              "refs": "https://attack.mitre.org/techniques/T1200/, https://www.cisa.gov/news-events/news/using-caution-usb-drives, https://www.bleepingcomputer.com/news/security/fbi-hackers-use-badusb-to-target-defense-firms-with-ransomware/",
              "mitre": [
                "T1200",
                "T1025",
                "T1091"
              ],
              "dtype": "registry_value_text",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows WPDBusEnum Registry Key Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to registry value entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 12, Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1200, T1025, T1091. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows WPDBusEnum Registry Key Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate USB activity will also be detected. Please verify and investigate as appropriate.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows WPDBusEnum Registry Key Modification”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.664",
              "n": "WMI Recon Running Process Or Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious PowerShell script execution via EventCode 4104, where WMI performs an event query to list running processes or services. This detection leverages PowerShell Script Block Logging to capture and analyze script block text for specific WMI queries. This activity is significant as it is commonly used by malware and APT actors to map security applications or services on a compromised machine. If confirmed malicious, this could allow attackers to identify an…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Network administrator may used this command for checking purposes",
              "refs": "https://news.sophos.com/en-us/2020/05/12/maze-ransomware-1-year-counting/, https://www.eideon.com/2018-03-02-THL03-WMIBackdoors/, https://github.com/trustedsec/SysmonCommunityGuide/blob/master/chapters/WMI-events.md, https://in.security/2019/04/03/an-intro-into-abusing-and-identifying-wmi-event-subscriptions-for-persistence/",
              "mitre": [
                "T1592"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WMI Recon Running Process Or Services\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1592. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WMI Recon Running Process Or Services\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Network administrator may used this command for checking purposes\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “WMI Recon Running Process Or Services”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.665",
              "n": "Wmic NonInteractive App Uninstallation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of the WMIC command-line tool attempting to uninstall applications non-interactively. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific command-line patterns associated with WMIC. This activity is significant because it is uncommon and may indicate an attempt to evade detection by uninstalling security software, as seen in IcedID malware campaigns. If confirmed malicious, this behavior could allow an attacker to di…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name=wmic.exe Processes.process=\"* product *\" Processes.process=\"*where name*\" Processes.process=\"*call uninstall*\" Processes.process=\"*/nointeractive*\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `wmic_noninteractive_app_uninstallation_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Third party application may use this approach to uninstall applications.",
              "refs": "https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Wmic NonInteractive App Uninstallation\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Wmic NonInteractive App Uninstallation\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Third party application may use this approach to uninstall applications.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Wmic NonInteractive App Uninstallation”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.666",
              "n": "Wscript Or Cscript Suspicious Child Process",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic identifies a suspicious spawned process by WScript or CScript process. This technique was a common technique used by adversaries and malware to execute different LOLBIN, other scripts like PowerShell or spawn a suspended process to inject its code as a defense evasion. This TTP may detect some normal script that uses several application tools that are in the list of the child process it detects but a good pivot and indicator that a script may execute suspicious code.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may create vbs or js script that use several tool as part of its execution. Filter as needed.",
              "refs": "https://www.hybrid-analysis.com/sample/8da5b75b6380a41eee3a399c43dfe0d99eeefaa1fd21027a07b1ecaa4cd96fdd?environmentId=120, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1055",
                "T1134.004",
                "T1543"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Wscript Or Cscript Suspicious Child Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055, T1134.004, T1543. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Wscript Or Cscript Suspicious Child Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may create vbs or js script that use several tool as part of its execution. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Wscript Or Cscript Suspicious Child Process”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.667",
              "n": "WSReset UAC Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious modification of the registry aimed at bypassing User Account Control (UAC) by leveraging WSReset.exe. It identifies the creation or modification of specific registry values under the path \"*\\\\AppX82a6gwre4fdg3bt635tn5ctqjf8msdd2\\\\Shell\\\\open\\\\command*\". This detection uses data from Endpoint Detection and Response (EDR) agents, focusing on process and registry events. This activity is significant because UAC bypass techniques can allow attackers to exe…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 12, Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/hfiref0x/UACME, https://blog.morphisec.com/trickbot-uses-a-new-windows-10-uac-bypass",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WSReset UAC Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 12, Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WSReset UAC Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “WSReset UAC Bypass”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.668",
              "n": "XSL Script Execution With WMIC",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of an XSL script using the WMIC process, which is often indicative of malicious activity. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions involving WMIC and XSL files. This behavior is significant as it has been associated with the FIN7 group, known for using this technique to execute malicious scripts. If confirmed malicious, this activity could allow attackers to execute arbitrary code, potent…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mandiant.com/resources/fin7-pursuing-an-enigmatic-and-evasive-global-criminal-operation, https://attack.mitre.org/groups/G0046/, https://web.archive.org/web/20190814201250/https://subt0x11.blogspot.com/2018/04/wmicexe-whitelisting-bypass-hacking.html, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1220/T1220.md#atomic-test-3---wmic-bypass-using-local-xsl-file",
              "mitre": [
                "T1220"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"XSL Script Execution With WMIC\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1220. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"XSL Script Execution With WMIC\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “XSL Script Execution With WMIC”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.669",
              "n": "Detect Remote Access Software Usage DNS",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects DNS queries to domains associated with known remote access software such as AnyDesk, GoToMyPC, LogMeIn, and TeamViewer. This detection is crucial as adversaries often use these tools to maintain access and control over compromised environments. Identifying such behavior is vital for a Security Operations Center (SOC) because unauthorized remote access can lead to data breaches, ransomware attacks, and other severe impacts if these threats are not mitigated promptly…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel:Network_Resolution.DNS | search src=$src$ query=$query$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that legitimate remote access software is used within the environment. Ensure that the lookup is reviewed and updated with any additional remote access software that is used within the environment. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content",
              "refs": "https://attack.mitre.org/techniques/T1219/, https://thedfirreport.com/2022/08/08/bumblebee-roasts-its-way-to-domain-admin/, https://thedfirreport.com/2022/11/28/emotet-strikes-again-lnk-file-leads-to-domain-wide-ransomware/",
              "mitre": [
                "T1219"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Remote Access Software Usage DNS\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1219. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Remote Access Software Usage DNS\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that legitimate remote access software is used within the environment. Ensure that the lookup is reviewed and updated with any additional remote access software that is used within the environment. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Detect Remote Access Software Usage DNS”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.670",
              "n": "Excessive DNS Failures",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies excessive DNS query failures by counting DNS responses that do not indicate success, triggering when there are more than 50 occurrences. It leverages the Network_Resolution data model, focusing on DNS reply codes that signify errors. This activity is significant because a high number of DNS failures can indicate potential network misconfigurations, DNS poisoning attempts, or malware communication issues. If confirmed malicious, this activity could lead to disrup…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Network_Resolution\n      WHERE nodename=DNS \"DNS.reply_code\"!=\"No Error\" \"DNS.reply_code\"!=\"NoError\" DNS.reply_code!=\"unknown\" NOT \"DNS.query\"=\"*.arpa\" \"DNS.query\"=\"*.*\"\n      BY \"DNS.src\" \"DNS.query\" \"DNS.reply_code\"\n    | `drop_dm_object_name(\"DNS\")`\n    | lookup cim_corporate_web_domain_lookup domain as query OUTPUT domain\n    | where isnull(domain)\n    | lookup update=true alexa_lookup_by_str domain as query OUTPUT rank\n    | where isnull(rank)\n    | eventstats max(count) as mc\n      BY src reply_code\n    | eval mode_query=if(count=mc, query, null())\n    | stats sum(count) as count values(mode_query) as query values(mc) as max_query_count\n      BY src reply_code\n    | where count>50\n    | `get_asset(src)`\n    | `excessive_dns_failures_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible legitimate traffic can trigger this rule. Please investigate as appropriate. The threshold for generating an event can also be customized to better suit your environment.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1071.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Excessive DNS Failures\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Excessive DNS Failures\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible legitimate traffic can trigger this rule. Please investigate as appropriate. The threshold for generating an event can also be customized to better suit your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Excessive DNS Failures”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.671",
              "n": "HTTP Malware User Agent",
              "c": "high",
              "f": "intermediate",
              "v": "This Splunk query analyzes web logs to identify and categorize user agents, detecting various types of malware. This activity can signify possible compromised hosts on the network.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Filtering may be required in some instances depending on legacy system usage, filter as needed.",
              "refs": "https://github.com/mthcht/awesome-lists/blob/main/Lists/suspicious_http_user_agents_list.csv",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"HTTP Malware User Agent\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"HTTP Malware User Agent\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Filtering may be required in some instances depending on legacy system usage, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “HTTP Malware User Agent”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.672",
              "n": "Remote Desktop Network Traffic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects unusual Remote Desktop Protocol (RDP) traffic on TCP/3389 by filtering out known RDP sources and destinations, focusing on atypical connections within the network. This detection leverages network traffic data to identify potentially unauthorized RDP access. Monitoring this activity is crucial for a SOC as unauthorized RDP access can indicate an attacker's attempt to control networked systems, leading to data theft, ransomware deployment, or further network comprom…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Zeek Conn",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Zeek Conn ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Remote Desktop may be used legitimately by users on the network.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote Desktop Network Traffic\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Zeek Conn. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote Desktop Network Traffic\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Remote Desktop may be used legitimately by users on the network.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Remote Desktop Network Traffic”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.673",
              "n": "Rundll32 DNSQuery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious `rundll32.exe` process making HTTP connections and performing DNS queries to web domains. It leverages Sysmon EventCode 22 logs to identify these activities. This behavior is significant as it is commonly associated with IcedID malware, where `rundll32.exe` checks internet connectivity and communicates with C&C servers to download configurations and other components. If confirmed malicious, this activity could allow attackers to establish persistence, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://any.run/malware-trends/icedid",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Rundll32 DNSQuery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Rundll32 DNSQuery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Rundll32 DNSQuery”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.674",
              "n": "Suspicious Process With Discord DNS Query",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a process making a DNS query to Discord, excluding legitimate Discord application paths. It leverages Sysmon logs with Event ID 22 to detect DNS queries containing \"discord\" in the QueryName field. This activity is significant because Discord can be abused by adversaries to host and download malicious files, as seen in the WhisperGate campaign. If confirmed malicious, this could indicate malware attempting to download additional payloads from Discord, potentiall…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "his detection relies on sysmon logs with the Event ID 22, DNS Query.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Noise and false positive can be seen if the following instant messaging is allowed to use within corporate network. In this case, a filter is needed.",
              "refs": "https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/, https://medium.com/s2wblog/analysis-of-destructive-malware-whispergate-targeting-ukraine-9d5d158f19f3, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1059.005"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Process With Discord DNS Query\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Process With Discord DNS Query\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Noise and false positive can be seen if the following instant messaging is allowed to use within corporate network. In this case, a filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
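              "script": "Illustrative sketch only; the authoritative detection logic ships in the ESCU Correlation Search (view it in Content Management). The `sysmon` macro, field names, and the path exclusion below are assumptions based on this use case’s description; adjust to your environment:\n\n```spl\n`sysmon` EventCode=22 QueryName=\"*discord*\" Image!=\"*\\\\AppData\\\\Local\\\\Discord\\\\*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Image QueryName dest\n```",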
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Suspicious Process With Discord DNS Query”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.675",
              "n": "Wermgr Process Connecting To IP Check Web Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the wermgr.exe process attempting to connect to known IP check web services. It leverages Sysmon EventCode 22 to identify DNS queries made by wermgr.exe to specific IP check services. This activity is significant because wermgr.exe is typically used for Windows error reporting, and its connection to these services may indicate malicious code injection, often associated with malware like Trickbot. If confirmed malicious, this behavior could allow attackers to recon …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://labs.vipre.com/trickbot-and-its-modules/",
              "mitre": [
                "T1590.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Wermgr Process Connecting To IP Check Web Services\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1590.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Wermgr Process Connecting To IP Check Web Services\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
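              "script": "Illustrative sketch only; the authoritative detection logic ships in the ESCU Correlation Search (view it in Content Management). The `sysmon` macro, field names, and the service list below are assumptions based on this use case’s description; adjust to your environment:\n\n```spl\n`sysmon` EventCode=22 process_name=wermgr.exe QueryName IN (\"*wtfismyip.com\", \"*ipinfo.io\", \"*checkip.amazonaws.com\")\n| stats count min(_time) as firstTime max(_time) as lastTime by dest process_name QueryName\n```",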
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Wermgr Process Connecting To IP Check Web Services”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.676",
              "n": "Windows DNS Query Request by Telegram Bot API",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a DNS query by a process to the associated Telegram API domain, which could indicate access via a Telegram bot commonly used by malware for command and control (C2) communications. By monitoring DNS queries related to Telegram's infrastructure, the detection identifies potential attempts to establish covert communication channels between a compromised system and external malicious actors. This behavior is often observed in cyberattacks where Telegr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "a third part automation using telegram API.",
              "refs": "https://www.splunk.com/en_us/blog/security/threat-advisory-telegram-crypto-botnet-strt-ta01.html",
              "mitre": [
                "T1071.004",
                "T1102.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DNS Query Request by Telegram Bot API\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.004, T1102.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DNS Query Request by Telegram Bot API\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: a third part automation using telegram API.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
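              "script": "Illustrative sketch only; the authoritative detection logic ships in the ESCU Correlation Search (view it in Content Management). The `sysmon` macro and field names below are assumptions based on this use case’s description; adjust to your environment:\n\n```spl\n`sysmon` EventCode=22 QueryName=\"api.telegram.org\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Image QueryName dest\n```",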
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows DNS Query Request by Telegram Bot API”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.677",
              "n": "Windows Gather Victim Network Info Through Ip Check Web Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects processes attempting to connect to known IP check web services. This behavior is identified using Sysmon EventCode 22 logs, specifically monitoring DNS queries to services like \"wtfismyip.com\" and \"ipinfo.io\". This activity is significant as it is commonly used by malware, such as Trickbot, for reconnaissance to determine the infected machine's IP address. If confirmed malicious, this could allow attackers to gather network information, aiding in further attacks or…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Filter internet browser application to minimize the false positive of this detection.",
              "refs": "https://app.any.run/tasks/a6f2ffe2-e6e2-4396-ae2e-04ea0143f2d8/",
              "mitre": [
                "T1590.005"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Gather Victim Network Info Through Ip Check Web Services\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1590.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Gather Victim Network Info Through Ip Check Web Services\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Filter internet browser application to minimize the false positive of this detection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
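              "script": "Illustrative sketch only; the authoritative detection logic ships in the ESCU Correlation Search (view it in Content Management). The `sysmon` macro and the service list below are assumptions based on this use case’s description (\"wtfismyip.com\", \"ipinfo.io\"); adjust to your environment:\n\n```spl\n`sysmon` EventCode=22 QueryName IN (\"*wtfismyip.com\", \"*ipinfo.io\")\n| stats count min(_time) as firstTime max(_time) as lastTime by dest Image QueryName\n```",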
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Gather Victim Network Info Through Ip Check Web Services”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.678",
              "n": "Windows Multi hop Proxy TOR Website Query",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies DNS queries to known TOR proxy websites, such as \"*.torproject.org\" and \"www.theonionrouter.com\". It leverages Sysmon EventCode 22 to detect these queries by monitoring DNS query events from endpoints. This activity is significant because adversaries often use TOR proxies to disguise the source of their malicious traffic, making it harder to trace their actions. If confirmed malicious, this behavior could indicate an attempt to obfuscate network traffic, potenti…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "third party application may use this proxies if allowed in production environment. Filter is needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.agent_tesla",
              "mitre": [
                "T1071.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Multi hop Proxy TOR Website Query\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Multi hop Proxy TOR Website Query\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: third party application may use this proxies if allowed in production environment. Filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
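              "script": "Illustrative sketch only; the authoritative detection logic ships in the ESCU Correlation Search (view it in Content Management). The `sysmon` macro and the domain list below are assumptions based on this use case’s description; adjust to your environment:\n\n```spl\n`sysmon` EventCode=22 QueryName IN (\"*.torproject.org\", \"www.theonionrouter.com\")\n| stats count min(_time) as firstTime max(_time) as lastTime by dest Image QueryName\n```",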
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Windows Multi hop Proxy TOR Website Query”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.679",
              "n": "High Volume of Bytes Out to Url",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a high volume of outbound web traffic, specifically over 1GB of data sent to a URL within a 2-minute window. It leverages the Web data model to identify significant uploads by analyzing the sum of bytes out. This activity is significant as it may indicate potential data exfiltration by malware or malicious insiders. If confirmed as malicious, this behavior could lead to unauthorized data transfer, resulting in data breaches and loss of sensitive information. Immedi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This search may trigger false positives if there is a legitimate reason for a high volume of bytes out to a URL. We recommend to investigate these findings. Consider updating the filter macro to exclude the applications that are relevant to your environment.",
              "refs": "https://attack.mitre.org/techniques/T1567/, https://www.trendmicro.com/en_us/research/20/l/pawn-storm-lack-of-sophistication-as-a-strategy.html, https://www.bleepingcomputer.com/news/security/hacking-group-s-new-malware-abuses-google-and-facebook-services/",
              "mitre": [
                "T1567"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"High Volume of Bytes Out to Url\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1567. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"High Volume of Bytes Out to Url\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This search may trigger false positives if there is a legitimate reason for a high volume of bytes out to a URL. We recommend to investigate these findings. Consider updating the filter macro to exclude the applications that are relevant to your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
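              "script": "Illustrative sketch only; the authoritative detection logic ships in the ESCU Correlation Search (view it in Content Management). The Web data model fields and the 1GB-per-2-minute threshold below follow this use case’s description; `summariesonly=true` assumes the Web data model is accelerated:\n\n```spl\n| tstats summariesonly=true sum(Web.bytes_out) as bytes_out from datamodel=Web by _time span=2m Web.src Web.url\n| rename Web.* as *\n| where bytes_out > 1073741824\n```",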
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “High Volume of Bytes Out to Url”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.680",
              "n": "HTTP Scripting Tool User Agent",
              "c": "high",
              "f": "intermediate",
              "v": "This Splunk query analyzes web access logs to identify and categorize non-browser user agents, detecting various types of security tools, scripting languages, automation frameworks, and suspicious patterns. This activity can signify malicious actors attempting to interact with web endpoints in non-standard ways.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if the activity is part of diagnostics or testing. Filter as needed.",
              "refs": "https://portswigger.net/web-security/request-smuggling, https://portswigger.net/research/http1-must-die, https://www.vaadata.com/blog/what-is-http-request-smuggling-exploitations-and-security-best-practices/, https://www.securityweek.com/new-http-request-smuggling-attacks-impacted-cdns-major-orgs-millions-of-websites/, https://github.com/SigmaHQ/sigma/blob/master/rules/web/proxy_generic/proxy_ua_hacktool.yml, https://help.aikido.dev/zen-firewall/miscellaneous/bot-protection-details",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"HTTP Scripting Tool User Agent\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"HTTP Scripting Tool User Agent\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if the activity is part of diagnostics or testing. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “HTTP Scripting Tool User Agent”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.3.681",
              "n": "Plain HTTP POST Exfiltrated Data",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential data exfiltration using plain HTTP POST requests. It leverages network traffic logs, specifically monitoring the `stream_http` data source for POST methods containing suspicious form data such as \"wermgr.exe\" or \"svchost.exe\". This activity is significant because it is commonly associated with malware like Trickbot, trojans, keyloggers, or APT adversaries, which use plain text HTTP POST requests to communicate with remote C2 servers. If confirmed maliciou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Splunk Stream HTTP",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Splunk Stream HTTP ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://blog.talosintelligence.com/2020/03/trickbot-primer.html",
              "mitre": [
                "T1048.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Plain HTTP POST Exfiltrated Data\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Splunk Stream HTTP. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1048.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Plain HTTP POST Exfiltrated Data\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use endpoint security telemetry to stay ahead of the behavior this rule calls “Plain HTTP POST Exfiltrated Data”, so we catch real tradecraft and not just noise from everyday work.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 25.9,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 680,
            "none": 0
          }
        },
        {
          "i": "10.4",
          "n": "Email Security",
          "u": [
            {
              "i": "10.4.1",
              "n": "Phishing Detection Rate",
              "c": "critical",
              "f": "intermediate",
              "v": "Measures email security effectiveness. Increasing phishing volumes or declining detection rates indicate evolving threats.",
              "t": "Splunk_TA_MS_O365, TA-proofpoint",
              "d": "Email security gateway logs, EOP message trace",
              "q": "index=email sourcetype=\"ms:o365:messageTrace\"\n| eval is_phish=if(match(FilteringResult,\"Phish\") OR match(FilteringResult,\"Spoof\"),1,0)\n| stats sum(is_phish) as phishing_caught, count as total_messages\n| eval phish_rate=round(phishing_caught/total_messages*100,4)",
              "m": "Ingest email security logs (EOP message trace, gateway logs). Track phishing detections over time. Calculate detection rate vs total messages. Alert on spikes in phishing volume. Report on phishing types and targeted users.",
              "z": "Line chart (phishing volume trend), Single value (phishing rate %), Bar chart (phishing by type), Table (top targeted users).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_MS_O365, TA-proofpoint.\n• Ensure the following data sources are available: Email security gateway logs, EOP message trace.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest email security logs (EOP message trace, gateway logs). Track phishing detections over time. Calculate detection rate vs total messages. Alert on spikes in phishing volume. Report on phishing types and targeted users.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=email sourcetype=\"ms:o365:messageTrace\"\n| eval is_phish=if(match(FilteringResult,\"Phish\") OR match(FilteringResult,\"Spoof\"),1,0)\n| stats sum(is_phish) as phishing_caught, count as total_messages\n| eval phish_rate=round(phishing_caught/total_messages*100,4)\n```\n\nUnderstanding this SPL\n\n**Phishing Detection Rate** — Measures email security effectiveness. Increasing phishing volumes or declining detection rates indicate evolving threats.\n\nDocumented **Data sources**: Email security gateway logs, EOP message trace. **App/TA** (typical add-on context): Splunk_TA_MS_O365, TA-proofpoint. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: email; **sourcetype**: ms:o365:messageTrace. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=email, sourcetype=\"ms:o365:messageTrace\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_phish** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **phish_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  where All_Email.action=blocked\n  by All_Email.src_user All_Email.recipient All_Email.message_type span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Phishing Detection Rate** — Measures email security effectiveness. Increasing phishing volumes or declining detection rates indicate evolving threats.\n\nDocumented **Data sources**: Email security gateway logs, EOP message trace. **App/TA** (typical add-on context): Splunk_TA_MS_O365, TA-proofpoint. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (phishing volume trend), Single value (phishing rate %), Bar chart (phishing by type), Table (top targeted users).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how often phishing and spoof-style mail is caught versus overall volume, so we see attack waves and whether filtering and policies still match real threats.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  where All_Email.action=blocked\n  by All_Email.src_user All_Email.recipient All_Email.message_type span=1h\n| sort -count",
              "e": [
                "m365",
                "proofpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.2",
              "n": "Malicious Attachment Tracking",
              "c": "critical",
              "f": "beginner",
              "v": "Attachment-based threats bypass URL filtering. Tracking by file type reveals attack vectors and informs policy decisions.",
              "t": "Email security TA",
              "d": "Email gateway attachment scanning logs, safe attachments logs",
              "q": "index=email sourcetype=\"ms:o365:messageTrace\"\n| search FilteringResult=\"*malware*\" OR FilteringResult=\"*SafeAttachment*\"\n| stats count by SenderAddress, Subject, FilteringResult\n| sort -count",
              "m": "Enable attachment scanning in email gateway. Ingest scanning results. Track detections by file type, sender domain, and verdict. Alert on malicious attachments reaching users (detection after delivery). Report on blocked attachment statistics.",
              "z": "Bar chart (detections by file type), Table (malicious attachments), Line chart (detection trend), Pie chart (verdict distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Email security TA.\n• Ensure the following data sources are available: Email gateway attachment scanning logs, safe attachments logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable attachment scanning in email gateway. Ingest scanning results. Track detections by file type, sender domain, and verdict. Alert on malicious attachments reaching users (detection after delivery). Report on blocked attachment statistics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=email sourcetype=\"ms:o365:messageTrace\"\n| search FilteringResult=\"*malware*\" OR FilteringResult=\"*SafeAttachment*\"\n| stats count by SenderAddress, Subject, FilteringResult\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Malicious Attachment Tracking** — Attachment-based threats bypass URL filtering. Tracking by file type reveals attack vectors and informs policy decisions.\n\nDocumented **Data sources**: Email gateway attachment scanning logs, safe attachments logs. **App/TA** (typical add-on context): Email security TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: email; **sourcetype**: ms:o365:messageTrace. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=email, sourcetype=\"ms:o365:messageTrace\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by SenderAddress, Subject, FilteringResult** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  where All_Email.action=blocked\n  by All_Email.src_user All_Email.recipient All_Email.message_type span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Malicious Attachment Tracking** — Attachment-based threats bypass URL filtering. Tracking by file type reveals attack vectors and informs policy decisions.\n\nDocumented **Data sources**: Email gateway attachment scanning logs, safe attachments logs. **App/TA** (typical add-on context): Email security TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (detections by file type), Table (malicious attachments), Line chart (detection trend), Pie chart (verdict distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for malware and high-risk attachments in the mail path, so dangerous files are blocked or flagged before they reach inboxes and spread inside the org.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  where All_Email.action=blocked\n  by All_Email.src_user All_Email.recipient All_Email.message_type span=1h\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "10.4.3",
              "n": "URL Click Tracking",
              "c": "critical",
              "f": "beginner",
              "v": "Tracks which users click malicious URLs in emails — the moment a phishing email becomes an active incident.",
              "t": "Splunk_TA_MS_O365 (Safe Links), Proofpoint URL Defense",
              "d": "URL rewrite/protection logs, click tracking events",
              "q": "index=email sourcetype=\"ms:o365:dlp\" OR sourcetype=\"proofpoint:click\"\n| search verdict=\"malicious\" AND action=\"allowed\"\n| table _time, userPrincipalName, url, verdict, action",
              "m": "Enable Safe Links (M365) or URL Defense (Proofpoint). Ingest click tracking data. Alert immediately when a user clicks a malicious URL. Trigger automated password reset and endpoint scan. Track click-through rates for security awareness metrics.",
              "z": "Table (malicious URL clicks), Bar chart (clicks by user), Timeline (click events), Single value (clicks on malicious URLs today).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_MS_O365 (Safe Links), Proofpoint URL Defense.\n• Ensure the following data sources are available: URL rewrite/protection logs, click tracking events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Safe Links (M365) or URL Defense (Proofpoint). Ingest click tracking data. Alert immediately when a user clicks a malicious URL. Trigger automated password reset and endpoint scan. Track click-through rates for security awareness metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=email sourcetype=\"ms:o365:dlp\" OR sourcetype=\"proofpoint:click\"\n| search verdict=\"malicious\" AND action=\"allowed\"\n| table _time, userPrincipalName, url, verdict, action\n```\n\nUnderstanding this SPL\n\n**URL Click Tracking** — Tracks which users click malicious URLs in emails — the moment a phishing email becomes an active incident.\n\nDocumented **Data sources**: URL rewrite/protection logs, click tracking events. **App/TA** (typical add-on context): Splunk_TA_MS_O365 (Safe Links), Proofpoint URL Defense. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: email; **sourcetype**: ms:o365:dlp, proofpoint:click. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=email, sourcetype=\"ms:o365:dlp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **URL Click Tracking**): table _time, userPrincipalName, url, verdict, action\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  by All_Email.src_user All_Email.recipient All_Email.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**URL Click Tracking** — Tracks which users click malicious URLs in emails — the moment a phishing email becomes an active incident.\n\nDocumented **Data sources**: URL rewrite/protection logs, click tracking events. **App/TA** (typical add-on context): Splunk_TA_MS_O365 (Safe Links), Proofpoint URL Defense. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (malicious URL clicks), Bar chart (clicks by user), Timeline (click events), Single value (clicks on malicious URLs today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch when people click links that were scanned or rewritten in email, so we can respond if someone reaches a harmful page even when the message first looked acceptable.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  by All_Email.src_user All_Email.recipient All_Email.action span=1h\n| sort -count",
              "e": [
                "m365",
                "proofpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "both",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "10.4.4",
              "n": "DLP Policy Violations",
              "c": "high",
              "f": "beginner",
              "v": "Email DLP violations indicate potential data exfiltration or policy non-compliance. Monitoring supports regulatory compliance.",
              "t": "Splunk_TA_MS_O365",
              "d": "M365 DLP logs, email gateway DLP events",
              "q": "index=email sourcetype=\"ms:o365:dlp\"\n| stats count by PolicyName, UserPrincipalName, SensitiveInformationType\n| sort -count",
              "m": "Configure M365 DLP policies for sensitive data types (SSN, credit card, etc.). Ingest DLP violation events. Alert on high-severity violations. Track violation trends per policy and user for compliance reporting.",
              "z": "Bar chart (violations by policy), Table (top violators), Line chart (violation trend), Pie chart (by data type).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_MS_O365.\n• Ensure the following data sources are available: M365 DLP logs, email gateway DLP events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure M365 DLP policies for sensitive data types (SSN, credit card, etc.). Ingest DLP violation events. Alert on high-severity violations. Track violation trends per policy and user for compliance reporting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=email sourcetype=\"ms:o365:dlp\"\n| stats count by PolicyName, UserPrincipalName, SensitiveInformationType\n| sort -count\n```\n\nUnderstanding this SPL\n\n**DLP Policy Violations** — Email DLP violations indicate potential data exfiltration or policy non-compliance. Monitoring supports regulatory compliance.\n\nDocumented **Data sources**: M365 DLP logs, email gateway DLP events. **App/TA** (typical add-on context): Splunk_TA_MS_O365. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: email; **sourcetype**: ms:o365:dlp. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=email, sourcetype=\"ms:o365:dlp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by PolicyName, UserPrincipalName, SensitiveInformationType** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  by All_Email.src_user All_Email.recipient All_Email.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DLP Policy Violations** — Email DLP violations indicate potential data exfiltration or policy non-compliance. Monitoring supports regulatory compliance.\n\nDocumented **Data sources**: M365 DLP logs, email gateway DLP events. **App/TA** (typical add-on context): Splunk_TA_MS_O365. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (violations by policy), Table (top violators), Line chart (violation trend), Pie chart (by data type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch when outbound or inbound mail hits data-loss policy blocks, so sensitive content leaving by email is visible for security and compliance follow-up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  by All_Email.src_user All_Email.recipient All_Email.action span=1h\n| sort -count",
              "e": [
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.5",
              "n": "Spoofed Email Detection",
              "c": "high",
              "f": "intermediate",
              "v": "DMARC/SPF/DKIM failures indicate email spoofing attempts. Monitoring validates email authentication configuration.",
              "t": "Email security TA",
              "d": "DMARC aggregate reports, email authentication logs",
              "q": "index=email sourcetype=\"dmarc:aggregate\"\n| where dkim_result!=\"pass\" OR spf_result!=\"pass\"\n| stats count by src, header_from, dkim_result, spf_result, disposition\n| sort -count",
              "m": "Configure DMARC reporting (aggregate to a designated mailbox). Ingest DMARC XML reports. Track authentication failures by sending domain. Alert on spoofing of your own domains. Move toward DMARC p=reject for full protection.",
              "z": "Table (authentication failures), Bar chart (failures by domain), Pie chart (pass vs fail), Line chart (spoofing trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Email security TA.\n• Ensure the following data sources are available: DMARC aggregate reports, email authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure DMARC reporting (aggregate to a designated mailbox). Ingest DMARC XML reports. Track authentication failures by sending domain. Alert on spoofing of your own domains. Move toward DMARC p=reject for full protection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=email sourcetype=\"dmarc:aggregate\"\n| where dkim_result!=\"pass\" OR spf_result!=\"pass\"\n| stats count by src, header_from, dkim_result, spf_result, disposition\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Spoofed Email Detection** — DMARC/SPF/DKIM failures indicate email spoofing attempts. Monitoring validates email authentication configuration.\n\nDocumented **Data sources**: DMARC aggregate reports, email authentication logs. **App/TA** (typical add-on context): Email security TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: email; **sourcetype**: dmarc:aggregate. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=email, sourcetype=\"dmarc:aggregate\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where dkim_result!=\"pass\" OR spf_result!=\"pass\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, header_from, dkim_result, spf_result, disposition** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  by All_Email.src_user All_Email.recipient All_Email.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Spoofed Email Detection** — DMARC/SPF/DKIM failures indicate email spoofing attempts. Monitoring validates email authentication configuration.\n\nDocumented **Data sources**: DMARC aggregate reports, email authentication logs. **App/TA** (typical add-on context): Email security TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (authentication failures), Bar chart (failures by domain), Pie chart (pass vs fail), Line chart (spoofing trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for mail that fails or weakens sender authentication, so spoofing and lookalike senders are caught and we can keep improving DMARC, SPF, and DKIM alignment.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  by All_Email.src_user All_Email.recipient All_Email.action span=1h\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.6",
              "n": "Email Volume Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "Unusual outbound email volumes may indicate compromised accounts used for spam/phishing or mass data exfiltration via email.",
              "t": "Splunk_TA_MS_O365, Splunk_TA_microsoft-exchange",
              "d": "Email message tracking logs (outbound)",
              "q": "index=email sourcetype=\"ms:o365:messageTrace\" Direction=\"Outbound\"\n| stats count by SenderAddress\n| eventstats avg(count) as avg_sent, stdev(count) as stdev_sent\n| where count > avg_sent + 3*stdev_sent\n| table SenderAddress, count, avg_sent",
              "m": "Track outbound email volume per sender. Baseline normal patterns. Alert when any sender exceeds 3× standard deviation. Correlate with sign-in events to detect compromised accounts. Report on top senders for capacity planning.",
              "z": "Bar chart (top senders), Line chart (outbound volume trend), Table (anomalous senders).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_MS_O365, Splunk_TA_microsoft-exchange.\n• Ensure the following data sources are available: Email message tracking logs (outbound).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack outbound email volume per sender. Baseline normal patterns. Alert when any sender exceeds 3× standard deviation. Correlate with sign-in events to detect compromised accounts. Report on top senders for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=email sourcetype=\"ms:o365:messageTrace\" Direction=\"Outbound\"\n| stats count by SenderAddress\n| eventstats avg(count) as avg_sent, stdev(count) as stdev_sent\n| where count > avg_sent + 3*stdev_sent\n| table SenderAddress, count, avg_sent\n```\n\nUnderstanding this SPL\n\n**Email Volume Anomalies** — Unusual outbound email volumes may indicate compromised accounts used for spam/phishing or mass data exfiltration via email.\n\nDocumented **Data sources**: Email message tracking logs (outbound). **App/TA** (typical add-on context): Splunk_TA_MS_O365, Splunk_TA_microsoft-exchange. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: email; **sourcetype**: ms:o365:messageTrace. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=email, sourcetype=\"ms:o365:messageTrace\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by SenderAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where count > avg_sent + 3*stdev_sent` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Email Volume Anomalies**): table SenderAddress, count, avg_sent\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  by All_Email.src_user All_Email.recipient All_Email.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Email Volume Anomalies** — Unusual outbound email volumes may indicate compromised accounts used for spam/phishing or mass data exfiltration via email.\n\nDocumented **Data sources**: Email message tracking logs (outbound). **App/TA** (typical add-on context): Splunk_TA_MS_O365, Splunk_TA_microsoft-exchange. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top senders), Line chart (outbound volume trend), Table (anomalous senders).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for unusual mail volume to and from users and systems, so sudden spikes from compromise, harvesting, or misconfiguration stand out from normal business traffic.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  by All_Email.src_user All_Email.recipient All_Email.action span=1h\n| sort -count",
              "e": [
                "exchange",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.7",
              "n": "Quarantine Management",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks quarantine effectiveness and user release requests to balance security with user productivity.",
              "t": "Splunk_TA_MS_O365, email gateway TA",
              "d": "Email quarantine logs, release request logs",
              "q": "index=email sourcetype=\"ms:o365:messageTrace\"\n| search FilteringResult=\"*Quarantine*\"\n| stats count by FilteringResult, SenderAddress\n| sort -count",
              "m": "Track quarantine volumes, reasons, and user release requests. Alert on unusual quarantine rates (may indicate new phishing campaign). Monitor false positive rate (legitimate emails quarantined) for policy tuning.",
              "z": "Bar chart (quarantine reasons), Line chart (quarantine volume trend), Table (release requests), Single value (quarantine rate %).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_MS_O365, email gateway TA.\n• Ensure the following data sources are available: Email quarantine logs, release request logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack quarantine volumes, reasons, and user release requests. Alert on unusual quarantine rates (may indicate new phishing campaign). Monitor false positive rate (legitimate emails quarantined) for policy tuning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=email sourcetype=\"ms:o365:messageTrace\"\n| search FilteringResult=\"*Quarantine*\"\n| stats count by FilteringResult, SenderAddress\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Quarantine Management** — Tracks quarantine effectiveness and user release requests to balance security with user productivity.\n\nDocumented **Data sources**: Email quarantine logs, release request logs. **App/TA** (typical add-on context): Splunk_TA_MS_O365, email gateway TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: email; **sourcetype**: ms:o365:messageTrace. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=email, sourcetype=\"ms:o365:messageTrace\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by FilteringResult, SenderAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  where All_Email.action=blocked\n  by All_Email.src_user All_Email.recipient All_Email.message_type span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Quarantine Management** — Tracks quarantine effectiveness and user release requests to balance security with user productivity.\n\nDocumented **Data sources**: Email quarantine logs, release request logs. **App/TA** (typical add-on context): Splunk_TA_MS_O365, email gateway TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (quarantine reasons), Line chart (quarantine volume trend), Table (release requests), Single value (quarantine rate %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch quarantine and user release activity on mail, so both analyst actions and end-user pull-through are visible and abuse of release rights is easier to catch.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  where All_Email.action=blocked\n  by All_Email.src_user All_Email.recipient All_Email.message_type span=1h\n| sort -count",
              "e": [
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.8",
              "n": "Email Attachments With Lots Of Spaces",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects email attachments with an unusually high number of spaces in their file names, which is a common tactic used by attackers to obfuscate file extensions. It leverages the Email data model to identify attachments where the ratio of spaces to the total file name length exceeds 10%. This behavior is significant as it may indicate an attempt to bypass security filters and deliver malicious payloads. If confirmed malicious, this activity could lead to the execution of har…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count values(All_Email.recipient) as recipient_address min(_time) as firstTime max(_time) as lastTime FROM datamodel=Email\n      WHERE All_Email.file_name=\"*\"\n      BY All_Email.src_user, All_Email.file_name All_Email.message_id\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `drop_dm_object_name(\"All_Email\")`\n    | eval space_ratio = (mvcount(split(file_name,\" \"))-1)/len(file_name)\n    | search space_ratio >= 0.1\n    | rex field=recipient_address \"(?<recipient_user>.*)@\"\n    | `email_attachments_with_lots_of_spaces_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "user",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Email Attachments With Lots Of Spaces\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Email Attachments With Lots Of Spaces\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.9",
              "n": "Email files written outside of the Outlook directory",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects email files (.pst or .ost) being created outside the standard Outlook directories. It leverages the Endpoint.Filesystem data model to identify file creation events and filters for email files not located in \"C:\\Users\\*\\My Documents\\Outlook Files\\*\" or \"C:\\Users\\*\\AppData\\Local\\Microsoft\\Outlook*\". This activity is significant as it may indicate data exfiltration or unauthorized access to email data. If confirmed malicious, an attacker could potentially access sensi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| tstats `security_content_summariesonly` count values(Filesystem.file_path) as file_path min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Filesystem\n      WHERE (\n            Filesystem.file_name=*.pst\n            OR\n            Filesystem.file_name=*.ost\n        )\n        Filesystem.file_path != \"C:\\Users\\*\\My Documents\\Outlook Files\\*\"  Filesystem.file_path!=\"C:\\Users\\*\\AppData\\Local\\Microsoft\\Outlook*\"\n      BY Filesystem.action Filesystem.dest Filesystem.file_access_time\n         Filesystem.file_create_time Filesystem.file_hash Filesystem.file_modify_time\n         Filesystem.file_name Filesystem.file_path Filesystem.file_acl\n         Filesystem.file_size Filesystem.process_guid Filesystem.process_id\n         Filesystem.user Filesystem.vendor_product\n    | `drop_dm_object_name(\"Filesystem\")`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `email_files_written_outside_of_the_outlook_directory_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators and users sometimes prefer backing up their email data by moving the email files into a different folder. These attempts will be detected by the search.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1114.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Email files written outside of the Outlook directory\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Email files written outside of the Outlook directory\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators and users sometimes prefer backing up their email data by moving the email files into a different folder. These attempts will be detected by the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.10",
              "n": "Email servers sending high volume traffic to hosts",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a significant increase in data transfers from your email server to client hosts. It leverages the Network_Traffic data model to monitor outbound traffic from email servers, using statistical analysis to detect anomalies based on average and standard deviation metrics. This activity is significant as it may indicate a malicious actor exfiltrating data via your email server. If confirmed malicious, this could lead to unauthorized data access and potential data bre…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` sum(All_Traffic.bytes_out) as bytes_out FROM datamodel=Network_Traffic\n      WHERE All_Traffic.src_category=email_server\n      BY All_Traffic.dest _time span=1d\n    | `drop_dm_object_name(\"All_Traffic\")`\n    | eventstats avg(bytes_out) as avg_bytes_out stdev(bytes_out) as stdev_bytes_out\n    | eventstats count as num_data_samples avg(eval(if(_time < relative_time(now(), \"@d\"), bytes_out, null))) as per_source_avg_bytes_out stdev(eval(if(_time < relative_time(now(), \"@d\"), bytes_out, null))) as per_source_stdev_bytes_out\n      BY dest\n    | eval minimum_data_samples = 4, deviation_threshold = 3\n    | where num_data_samples >= minimum_data_samples AND bytes_out > (avg_bytes_out + (deviation_threshold * stdev_bytes_out)) AND bytes_out > (per_source_avg_bytes_out + (deviation_threshold * per_source_stdev_bytes_out)) AND _time >= relative_time(now(), \"@d\")\n    | eval num_standard_deviations_away_from_server_average = round(abs(bytes_out - avg_bytes_out) / stdev_bytes_out, 2), num_standard_deviations_away_from_client_average = round(abs(bytes_out - per_source_avg_bytes_out) / per_source_stdev_bytes_out, 2)\n    | table dest, _time, bytes_out, avg_bytes_out, per_source_avg_bytes_out, num_standard_deviations_away_from_server_average, num_standard_deviations_away_from_client_average\n    | `email_servers_sending_high_volume_traffic_to_hosts_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The false-positive rate will vary based on how you set the deviation_threshold and data_samples values. Our recommendation is to adjust these values based on your network traffic to and from your email servers.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1114.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Email servers sending high volume traffic to hosts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Email servers sending high volume traffic to hosts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The false-positive rate will vary based on how you set the deviation_threshold and data_samples values. Our recommendation is to adjust these values based on your network traffic to and from your email servers.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.11",
              "n": "M365 Copilot Agentic Jailbreak Attack",
              "c": "high",
              "f": "intermediate",
              "v": "Detects agentic AI jailbreak attempts that try to establish persistent control over M365 Copilot through rule injection, universal triggers, response automation, system overrides, and persona establishment techniques. The detection analyzes the PromptText field for keywords like \"from now on,\" \"always respond,\" \"ignore previous,\" \"new rule,\" \"override,\" and role-playing commands (e.g., \"act as,\" \"you are now\") that attempt to inject persistent instructions. The search computes risk by counting d…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "M365 Exported eDiscovery Prompts",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object=\"$user$\" starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To export M365 Copilot prompt logs, navigate to the Microsoft Purview compliance portal (compliance.microsoft.com) and access eDiscovery. Create a new eDiscovery case, add target user accounts or date ranges as data sources, then create a search query targeting M365 Copilot interactions across relevant workloads. Once the search completes, export the results to generate a package containing prompt…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate users discussing AI ethics research, security professionals testing system robustness, developers creating training materials for AI safety, or academic discussions about AI limitations and behavioral constraints may trigger false positives.",
              "refs": "https://www.splunk.com/en_us/blog/artificial-intelligence/m365-copilot-log-analysis-splunk.html",
              "mitre": [
                "T1562"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"M365 Copilot Agentic Jailbreak Attack\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: M365 Exported eDiscovery Prompts. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"M365 Copilot Agentic Jailbreak Attack\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate users discussing AI ethics research, security professionals testing system robustness, developers creating training materials for AI safety, or academic discussions about AI limitations and behavioral constraints may trigger false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how Copilot is used in Microsoft 365, so we see risky or abusive prompts and odd access before sensitive data is exposed through AI features.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.12",
              "n": "M365 Copilot Application Usage Pattern Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "Detects M365 Copilot users exhibiting suspicious application usage patterns including multi-location access, abnormally high activity volumes, or access to multiple Copilot applications that may indicate account compromise or automated abuse. The detection aggregates M365 Copilot Graph API events per user, calculating metrics like distinct cities/countries accessed, unique IP addresses, number of different Copilot apps used, and average events per day over the observation period. Users are flagg…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "M365 Copilot Graph API",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This detection requires ingesting M365 Copilot access logs via the Splunk Add-on for Microsoft Office 365. Configure the add-on to collect Azure AD Sign-in logs (AuditLogs.SignIns) through the Graph API data input. Ensure proper authentication and permissions are configured to access sign-in audit logs. The `m365_copilot_graph_api` macro should be defined to filter for sourcetype o365:graph:api da…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Power users, executives with heavy AI workloads, employees traveling for business, users accessing multiple Copilot applications legitimately, or teams using shared corporate accounts across different office locations may trigger false positives.",
              "refs": "https://www.splunk.com/en_us/blog/artificial-intelligence/m365-copilot-log-analysis-splunk.html",
              "mitre": [
                "T1078"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"M365 Copilot Application Usage Pattern Anomalies\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: M365 Copilot Graph API. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"M365 Copilot Application Usage Pattern Anomalies\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Power users, executives with heavy AI workloads, employees traveling for business, users accessing multiple Copilot applications legitimately, or teams using shared corporate accounts across different office locations may trigger false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how Copilot is used in Microsoft 365, so we see risky or abusive prompts and odd access before sensitive data is exposed through AI features.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.13",
              "n": "M365 Copilot Failed Authentication Patterns",
              "c": "high",
              "f": "intermediate",
              "v": "Detects M365 Copilot users with failed authentication attempts, MFA failures, or multi-location access patterns indicating potential credential attacks or account compromise. The detection aggregates M365 Copilot Graph API authentication events per user, calculating metrics like distinct cities/countries accessed, unique IP addresses and browsers, failed login attempts (status containing \"fail\" or \"error\"), and MFA failures (error code 50074). Users are flagged when they access Copilot from mult…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object=\"$user$\" | where _time >= relative_time(now(), \"-168h@h\") | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This detection requires ingesting M365 Copilot access logs via the Splunk Add-on for Microsoft Office 365. Configure the add-on to collect Azure AD Sign-in logs (AuditLogs.SignIns) through the Graph API data input. Ensure proper authentication and permissions are configured to access sign-in audit logs. The `m365_copilot_graph_api` macro should be defined to filter for sourcetype o365:graph:api da…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate users experiencing network connectivity issues, traveling employees with intermittent VPN connections, users in regions with unstable internet infrastructure, or password reset activities during business travel may trigger false positives.",
              "refs": "https://www.splunk.com/en_us/blog/artificial-intelligence/m365-copilot-log-analysis-splunk.html",
              "mitre": [
                "T1110"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"M365 Copilot Failed Authentication Patterns\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"M365 Copilot Failed Authentication Patterns\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate users experiencing network connectivity issues, traveling employees with intermittent VPN connections, users in regions with unstable internet infrastructure, or password reset activities during business travel may trigger false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how Copilot is used in Microsoft 365, so we see risky or abusive prompts and odd access before sensitive data is exposed through AI features.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.14",
              "n": "M365 Copilot Impersonation Jailbreak Attack",
              "c": "high",
              "f": "intermediate",
              "v": "Detects M365 Copilot impersonation and roleplay jailbreak attempts where users try to manipulate the AI into adopting alternate personas, behaving as unrestricted entities, or impersonating malicious AI systems to bypass safety controls. The detection searches exported eDiscovery prompt logs for roleplay keywords like \"pretend you are,\" \"act as,\" \"you are now,\" \"amoral,\" and \"roleplay as\" in the Subject_Title field. Prompts are categorized into specific impersonation types (AI_Impersonation, Mal…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "M365 Exported eDiscovery Prompts",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object=\"$user$\" | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires M365 Exported eDiscovery Prompts ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate creative writers developing fictional characters, game developers creating roleplay scenarios, educators teaching about AI ethics and limitations, researchers studying AI behavior, or users engaging in harmless creative storytelling may trigger false positives.",
              "refs": "https://www.splunk.com/en_us/blog/artificial-intelligence/m365-copilot-log-analysis-splunk.html",
              "mitre": [
                "T1562"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"M365 Copilot Impersonation Jailbreak Attack\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: M365 Exported eDiscovery Prompts. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"M365 Copilot Impersonation Jailbreak Attack\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate creative writers developing fictional characters, game developers creating roleplay scenarios, educators teaching about AI ethics and limitations, researchers studying AI behavior, or users engaging in harmless creative storytelling may trigger false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how Copilot is used in Microsoft 365, so we see risky or abusive prompts and odd access before sensitive data is exposed through AI features.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.15",
              "n": "M365 Copilot Information Extraction Jailbreak Attack",
              "c": "high",
              "f": "intermediate",
              "v": "Detects M365 Copilot information extraction jailbreak attacks that attempt to obtain sensitive, classified, or comprehensive data through various social engineering techniques including fictional entity impersonation, bulk data requests, and privacy bypass attempts. The detection searches exported eDiscovery prompt logs for extraction keywords like \"transcendent,\" \"tell me everything,\" \"confidential,\" \"dump,\" \"extract,\" \"reveal,\" and \"bypass\" in the Subject_Title field, categorizing each attempt…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "M365 Exported eDiscovery Prompts",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To export M365 Copilot prompt logs, navigate to the Microsoft Purview compliance portal (compliance.microsoft.com) and access eDiscovery. Create a new eDiscovery case, add target user accounts or date ranges as data sources, then create a search query targeting M365 Copilot interactions across relevant workloads. Once the search completes, export the results to generate a package containing prompt…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate researchers studying data classification systems, cybersecurity professionals testing information handling policies, compliance officers reviewing data access procedures, journalists researching transparency issues, or employees asking for comprehensive project documentation may trigger false positives.",
              "refs": "https://www.splunk.com/en_us/blog/artificial-intelligence/m365-copilot-log-analysis-splunk.html",
              "mitre": [
                "T1562"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"M365 Copilot Information Extraction Jailbreak Attack\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: M365 Exported eDiscovery Prompts. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"M365 Copilot Information Extraction Jailbreak Attack\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate researchers studying data classification systems, cybersecurity professionals testing information handling policies, compliance officers reviewing data access procedures, journalists researching transparency issues, or employees asking for comprehensive project documentation may trigger false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how Copilot is used in Microsoft 365, so we see risky or abusive prompts and odd access before sensitive data is exposed through AI features.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.16",
              "n": "M365 Copilot Jailbreak Attempts",
              "c": "high",
              "f": "intermediate",
              "v": "Detects M365 Copilot jailbreak attempts through prompt injection techniques including rule manipulation, system bypass commands, and AI impersonation requests that attempt to circumvent built-in safety controls. The detection searches exported eDiscovery prompt logs for jailbreak keywords like \"pretend you are,\" \"act as,\" \"rules=,\" \"ignore,\" \"bypass,\" and \"override\" in the Subject_Title field, assigning severity scores based on the manipulation type (score of 4 for amoral impersonation or explic…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "M365 Exported eDiscovery Prompts",
              "q": "# Shared SPL: intentional — see UC-10.4.15\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires M365 Exported eDiscovery Prompts ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate users discussing AI ethics research, security professionals testing system robustness, developers creating training materials for AI safety, or academic discussions about AI limitations and behavioral constraints may trigger false positives.",
              "refs": "https://www.splunk.com/en_us/blog/artificial-intelligence/m365-copilot-log-analysis-splunk.html",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"M365 Copilot Jailbreak Attempts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: M365 Exported eDiscovery Prompts. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"M365 Copilot Jailbreak Attempts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate users discussing AI ethics research, security professionals testing system robustness, developers creating training materials for AI safety, or academic discussions about AI limitations and behavioral constraints may trigger false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how Copilot is used in Microsoft 365, so we see risky or abusive prompts and odd access before sensitive data is exposed through AI features.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.17",
              "n": "M365 Copilot Non Compliant Devices Accessing M365 Copilot",
              "c": "high",
              "f": "intermediate",
              "v": "Detects M365 Copilot access from non-compliant or unmanaged devices that violate corporate security policies, indicating potential shadow IT usage, BYOD policy violations, or compromised endpoint access. The detection filters M365 Copilot Graph API events where deviceDetail.isCompliant=false or deviceDetail.isManaged=false, then aggregates by user, operating system, and browser to calculate metrics including event counts, unique IPs and locations, and compliance/management status over time. User…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "# Shared SPL: intentional — see UC-10.4.15\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This detection requires ingesting M365 Copilot access logs via the Splunk Add-on for Microsoft Office 365. Configure the add-on to collect Azure AD Sign-in logs (AuditLogs.SignIns) through the Graph API data input. Ensure proper authentication and permissions are configured to access sign-in audit logs. The `m365_copilot_graph_api` macro should be defined to filter for sourcetype o365:graph:api da…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate employees using personal devices during emergencies, new hires awaiting device provisioning, temporary workers with unmanaged equipment, or users accessing Copilot from approved but temporarily non-compliant devices may trigger false positives.",
              "refs": "https://www.splunk.com/en_us/blog/artificial-intelligence/m365-copilot-log-analysis-splunk.html",
              "mitre": [
                "T1562"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"M365 Copilot Non Compliant Devices Accessing M365 Copilot\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"M365 Copilot Non Compliant Devices Accessing M365 Copilot\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate employees using personal devices during emergencies, new hires awaiting device provisioning, temporary workers with unmanaged equipment, or users accessing Copilot from approved but temporarily non-compliant devices may trigger false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how Copilot is used in Microsoft 365, so we see risky or abusive prompts and odd access before sensitive data is exposed through AI features.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.18",
              "n": "M365 Copilot Session Origin Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "Detects M365 Copilot users accessing from multiple geographic locations to identify potential account compromise, credential sharing, or impossible travel patterns. The detection aggregates M365 Copilot Graph API events per user, calculating distinct cities and countries accessed, unique IP addresses, and the observation timeframe to compute a locations-per-day metric that measures geographic mobility. Users accessing Copilot from more than one city (cities_count > 1) are flagged and sorted by c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object=\"$user\" | where _time >= relative_time(now(), \"-168h@h\") | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This detection requires ingesting M365 Copilot access logs via the Splunk Add-on for Microsoft Office 365. Configure the add-on to collect Azure AD Sign-in logs (AuditLogs.SignIns) through the Graph API data input. Ensure proper authentication and permissions are configured to access sign-in audit logs. The `m365_copilot_graph_api` macro should be defined to filter for sourcetype o365:graph:api da…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate business travelers, remote workers using VPNs, users with corporate offices in multiple locations, or employees accessing Copilot during international travel may trigger false positives.",
              "refs": "https://www.splunk.com/en_us/blog/artificial-intelligence/m365-copilot-log-analysis-splunk.html",
              "mitre": [
                "T1078"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"M365 Copilot Session Origin Anomalies\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"M365 Copilot Session Origin Anomalies\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate business travelers, remote workers using VPNs, users with corporate offices in multiple locations, or employees accessing Copilot during international travel may trigger false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how Copilot is used in Microsoft 365, so we see risky or abusive prompts and odd access before sensitive data is exposed through AI features.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.19",
              "n": "Monitor Email For Brand Abuse",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies emails claiming to be sent from a domain similar to one you are monitoring for potential abuse. It leverages email header data, specifically the sender's address, and cross-references it with a lookup table of known domain permutations generated by the \"ESCU - DNSTwist Domain Names\" search. This activity is significant as it can indicate phishing attempts or brand impersonation, which are common tactics used in social engineering attacks. If confirmed malicious,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` values(All_Email.recipient) as recipients, min(_time) as firstTime, max(_time) as lastTime FROM datamodel=Email\n      BY All_Email.src_user, All_Email.message_id\n    | `drop_dm_object_name(\"All_Email\")`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | eval temp=split(src_user, \"@\")\n    | eval email_domain=mvindex(temp, 1)\n    | lookup update=true brandMonitoring_lookup domain as email_domain OUTPUT domain_abuse\n    | search domain_abuse=true\n    | table message_id, src_user, email_domain, recipients, firstTime, lastTime\n    | `monitor_email_for_brand_abuse_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "user",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Monitor Email For Brand Abuse\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Monitor Email For Brand Abuse\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.20",
              "n": "Okta Phishing Detection with FastPass Origin Check",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies failed user authentication attempts in Okta due to FastPass declining a phishing attempt. It leverages Okta logs, specifically looking for events where multi-factor authentication (MFA) fails with the reason \"FastPass declined phishing attempt.\" This activity is significant as it indicates that attackers are targeting users with real-time phishing proxies, attempting to capture credentials. If confirmed malicious, this could lead to unauthorized access to user a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "`okta` eventType=\"user.authentication.auth_via_mfa\" AND result=\"FAILURE\" AND outcome.reason=\"FastPass declined phishing attempt\"\n      | stats count min(_time) as firstTime max(_time) as lastTime values(displayMessage)\n        BY user eventType client.userAgent.rawUserAgent\n           client.userAgent.browser outcome.reason\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `okta_phishing_detection_with_fastpass_origin_check_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Fidelity of this is high as Okta is specifying malicious infrastructure. Filter and modify as needed.",
              "refs": "https://sec.okta.com/fastpassphishingdetection",
              "mitre": [
                "T1078.001",
                "T1556"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Phishing Detection with FastPass Origin Check\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.001, T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Phishing Detection with FastPass Origin Check\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Fidelity of this is high as Okta is specifying malicious infrastructure. Filter and modify as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.21",
              "n": "Okta Suspicious Activity Reported",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when an associate reports a login attempt as suspicious via an email from Okta. It leverages Okta Identity Management logs, specifically the `user.account.report_suspicious_activity_by_enduser` event type. This activity is significant as it indicates potential unauthorized access attempts, warranting immediate investigation to prevent possible security breaches. If confirmed malicious, the attacker could gain unauthorized access to sensitive systems and data, le…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be minimal, given the high fidelity of this detection. marker.",
              "refs": "https://help.okta.com/en-us/Content/Topics/Security/suspicious-activity-reporting.htm",
              "mitre": [
                "T1078.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Suspicious Activity Reported\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Suspicious Activity Reported\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be minimal, given the high fidelity of this detection. marker.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.22",
              "n": "Suspicious Email Attachment Extensions",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects emails containing attachments with suspicious file extensions. It leverages the Email data model in Splunk, using the tstats command to identify emails where the attachment filename is not empty. This detection is significant for SOC analysts as it highlights potential phishing or malware delivery attempts, which are common vectors for data breaches and malware infections. If confirmed malicious, this activity could lead to unauthorized access to sensitive informat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time)\n    as lastTime from datamodel=Email.All_Email where All_Email.file_name=\"*\"",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Email Attachment Extensions\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Email Attachment Extensions\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.23",
              "n": "AWS Successful Console Authentication From Multiple IPs",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an AWS account successfully authenticating from multiple unique IP addresses within a 5-minute window. It leverages AWS CloudTrail logs, specifically monitoring `ConsoleLogin` events and counting distinct source IPs. This behavior is significant as it may indicate compromised credentials, potentially from a phishing attack, being used concurrently by an adversary and a legitimate user. If confirmed malicious, this activity could allow unauthorized access to corpora…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail ConsoleLogin",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ConsoleLogin ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A user with successful authentication events from different Ips may also represent the legitimate use of more than one device. Filter as needed and/or customize the threshold to fit your environment.",
              "refs": "https://rhinosecuritylabs.com/aws/mfa-phishing-on-aws/",
              "mitre": [
                "T1586",
                "T1535"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Successful Console Authentication From Multiple IPs\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail ConsoleLogin. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1586, T1535. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Successful Console Authentication From Multiple IPs\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A user with successful authentication events from different Ips may also represent the legitimate use of more than one device. Filter as needed and/or customize the threshold to fit your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.24",
              "n": "Azure AD Block User Consent For Risky Apps Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when the risk-based step-up consent security setting in Azure AD is disabled. It monitors Azure Active Directory logs for the \"Update authorization policy\" operation, specifically changes to the \"AllowUserConsentForRiskyApps\" setting. This activity is significant because disabling this feature can expose the organization to OAuth phishing threats by allowing users to grant consent to potentially malicious applications. If confirmed malicious, attackers could gain u…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Update authorization policy",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the latest version of Splunk Add-on for Microsoft Cloud Services from Splunkbase (https://splunkbase.splunk.com/app/3110/#/details). You must be ingesting Azure Active Directory events into your Splunk environment through an EventHub. This analytic was written to be used with the azure:monitor:aad sourcetype leveraging the AuditLog log category.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate changes to the 'risk-based step-up consent' setting by administrators, perhaps as part of a policy update or security assessment, may trigger this alert, necessitating verification of the change's intent and authorization",
              "refs": "https://attack.mitre.org/techniques/T1562/, https://goodworkaround.com/2020/10/19/a-look-behind-the-azure-ad-permission-classifications-preview/, https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/configure-risk-based-step-up-consent, https://learn.microsoft.com/en-us/defender-cloud-apps/investigate-risky-oauth",
              "mitre": [
                "T1562"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Block User Consent For Risky Apps Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Update authorization policy. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Block User Consent For Risky Apps Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate changes to the 'risk-based step-up consent' setting by administrators, perhaps as part of a policy update or security assessment, may trigger this alert, necessitating verification of the change's intent and authorization\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.4.24: Azure AD Block User Consent For Risky Apps Disabled.",
                  "ea": "Saved search 'UC-10.4.24' running on Azure Active Directory Update authorization policy, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.25",
              "n": "Azure AD Device Code Authentication",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies Azure Device Code Phishing attacks, which can lead to Azure Account Take-Over (ATO). It leverages Azure AD SignInLogs to detect suspicious authentication requests using the device code authentication protocol. This activity is significant as it indicates potential bypassing of Multi-Factor Authentication (MFA) and Conditional Access Policies (CAPs) through phishing emails. If confirmed malicious, attackers could gain unauthorized access to Azure AD, Exchange mai…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "In most organizations, device code authentication will be used to access common Microsoft service but it may be legitimate for others. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1528, https://github.com/rvrsh3ll/TokenTactics, https://embracethered.com/blog/posts/2022/device-code-phishing/, https://0xboku.com/2021/07/12/ArtOfDeviceCodePhish.html, https://learn.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-device-code",
              "mitre": [
                "T1528",
                "T1566.002"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Device Code Authentication\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1528, T1566.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Device Code Authentication\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: In most organizations, device code authentication will be used to access common Microsoft service but it may be legitimate for others. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.26",
              "n": "Azure AD FullAccessAsApp Permission Assigned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the assignment of the 'full_access_as_app' permission to an application within Office 365 Exchange Online. This is identified by the GUID 'dc890d15-9560-4a4c-9b7f-a736ec74ec40' and the ResourceAppId '00000002-0000-0ff1-ce00-000000000000'. The detection leverages the azure_monitor_aad data source, focusing on AuditLogs with the operation name 'Update application'. This activity is significant as it grants broad control over Office 365 operations, including full acce…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Update application",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the latest version of Splunk Add-on for Microsoft Cloud Services from Splunkbase(https://splunkbase.splunk.com/app/3110/#/details). You must be ingesting Azure Active Directory events into your Splunk environment through an EventHub. This analytic was written to be used with the azure:monitor:aad sourcetype leveraging the AuditLogs log category.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The full_access_as_app API permission may be assigned to legitimate applications. Filter as needed.",
              "refs": "https://msrc.microsoft.com/blog/2024/01/microsoft-actions-following-attack-by-nation-state-actor-midnight-blizzard/, https://www.microsoft.com/en-us/security/blog/2024/01/25/midnight-blizzard-guidance-for-responders-on-nation-state-attack/, https://attack.mitre.org/techniques/T1098/002/",
              "mitre": [
                "T1098.002",
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD FullAccessAsApp Permission Assigned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Update application. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.002, T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD FullAccessAsApp Permission Assigned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The full_access_as_app API permission may be assigned to legitimate applications. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.27",
              "n": "Azure AD Successful Authentication From Different Ips",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an Azure AD account successfully authenticating from multiple unique IP addresses within a 30-minute window. It leverages Azure AD SignInLogs to identify instances where the same user logs in from different IPs in a short time frame. This behavior is significant as it may indicate compromised credentials being used by an adversary, potentially following a phishing attack. If confirmed malicious, this activity could allow unauthorized access to corporate resources, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A user with successful authentication events from different Ips may also represent the legitimate use of more than one device. Filter as needed and/or customize the threshold to fit your environment.",
              "refs": "https://attack.mitre.org/techniques/T1110, https://attack.mitre.org/techniques/T1110/001/, https://attack.mitre.org/techniques/T1110/003/",
              "mitre": [
                "T1110.001",
                "T1110.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Successful Authentication From Different Ips\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.001, T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Successful Authentication From Different Ips\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A user with successful authentication events from different Ips may also represent the legitimate use of more than one device. Filter as needed and/or customize the threshold to fit your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.28",
              "n": "Gdrive suspicious file sharing",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious file-sharing activity on Google Drive, where internal users share documents with more than 50 external recipients. It leverages GSuite Drive logs, focusing on changes in user access and filtering for emails outside the organization's domain. This activity is significant as it may indicate compromised accounts or intentional data exfiltration. If confirmed malicious, this behavior could lead to unauthorized access to sensitive information, data leaks, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`gsuite_drive` name=change_user_access\n      | rename parameters.* as *\n      | search email = \"*@yourdomain.com\" target_user != \"*@yourdomain.com\"\n      | stats count values(owner) as owner values(target_user) as target values(doc_type) as doc_type values(doc_title) as doc_title dc(target_user) as distinct_target\n        BY src email\n      | where distinct_target > 50\n      | `gdrive_suspicious_file_sharing_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This is an anomaly search, you must specify your domain in the parameters so it either filters outside domains or focus on internal domains. This search may also help investigate compromise of accounts. By looking at for example source ip addresses, document titles and abnormal number of shares and shared target users.",
              "refs": "https://www.splunk.com/en_us/blog/security/investigating-gsuite-phishing-attacks-with-splunk.html",
              "mitre": [
                "T1566"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Gdrive suspicious file sharing\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Gdrive suspicious file sharing\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This is an anomaly search, you must specify your domain in the parameters so it either filters outside domains or focus on internal domains. This search may also help investigate compromise of accounts. By looking at for example source ip addresses, document titles and abnormal number of shares and shared target users.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Google mail and file-sharing signals for odd attachments, subjects, and external shares, so we catch credential theft and data sprawl in Workspace.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.29",
              "n": "Gsuite Drive Share In External Email",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects Google Drive or Google Docs files shared externally from an internal domain. It leverages GSuite Drive logs, extracting and comparing the source and destination email domains to identify external sharing. This activity is significant as it may indicate potential data exfiltration by an attacker or insider. If confirmed malicious, this could lead to unauthorized access to sensitive information, data leakage, and potential compliance violations. Monitoring this behav…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "G Suite Drive",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires G Suite Drive ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network admin or normal user may share files to customer and external team.",
              "refs": "https://www.redhat.com/en/topics/devops/what-is-devsecops",
              "mitre": [
                "T1567.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Gsuite Drive Share In External Email\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: G Suite Drive. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1567.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Gsuite Drive Share In External Email\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network admin or normal user may share files to customer and external team.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
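              "_tuning_sketch": "A hedged sketch of the lookup-based suppression recommended above. The approved_external_share_domains lookup and its domain column are placeholders you would create yourself, appended to the detection’s underlying search in Content Management:\n\n```spl\n| lookup approved_external_share_domains domain as dest_domain OUTPUT domain as approved\n| where isnull(approved)\n```\n\nEvents whose destination domain appears in the lookup are dropped before a risk event is written, preserving coverage for unapproved domains.",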
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.30",
              "n": "GSuite Email Suspicious Attachment",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious attachment file extensions in GSuite emails, potentially indicating a spear-phishing attack. It leverages GSuite Gmail logs to identify emails with attachments having file extensions commonly associated with malware, such as .exe, .bat, and .js. This activity is significant as these file types are often used to deliver malicious payloads, posing a risk of compromising targeted machines. If confirmed malicious, this could lead to unauthorized code executi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "G Suite Gmail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$destination{}.address$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to email address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires G Suite Gmail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network admin and normal user may send this file attachment as part of their day to day work. having a good protocol in attaching this file type to an e-mail may reduce the risk of having a spear phishing attack.",
              "refs": "https://www.redhat.com/en/topics/devops/what-is-devsecops",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "email_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GSuite Email Suspicious Attachment\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to email address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: G Suite Gmail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GSuite Email Suspicious Attachment\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network admin and normal user may send this file attachment as part of their day to day work. having a good protocol in attaching this file type to an e-mail may reduce the risk of having a spear phishing attack.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
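              "_tuning_sketch": "Before enabling, you can profile which attachment extensions actually occur in your tenant to gauge the false-positive surface. A hedged sketch assuming the ESCU `gsuite_gmail` macro and the GSuite Gmail attachment{}.file_extension_type field:\n\n```spl\n`gsuite_gmail` num_message_attachments > 0\n| stats count by attachment{}.file_extension_type\n| sort - count\n```\n\nExtensions that are both rare and executable (.exe, .bat, .js) are worth alerting on; common business formats belong in a suppression lookup instead.",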
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.31",
              "n": "Gsuite Email Suspicious Subject With Attachment",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies Gsuite emails with suspicious subjects and attachments commonly used in spear phishing attacks. It leverages Gsuite email logs, focusing on specific keywords in the subject line and known malicious file types in attachments. This activity is significant for a SOC as spear phishing is a prevalent method for initial compromise, often leading to further malicious actions. If confirmed malicious, this activity could result in unauthorized access, data exfiltration, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "G Suite Gmail",
              "q": "# Shared SPL: intentional — see UC-10.4.30\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$destination{}.address$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to email address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires G Suite Gmail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "normal user or normal transaction may contain the subject and file type attachment that this detection try to search.",
              "refs": "https://www.redhat.com/en/topics/devops/what-is-devsecops, https://www.mandiant.com/resources/top-words-used-in-spear-phishing-attacks",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "email_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Gsuite Email Suspicious Subject With Attachment\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to email address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: G Suite Gmail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Gsuite Email Suspicious Subject With Attachment\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: normal user or normal transaction may contain the subject and file type attachment that this detection try to search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
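              "_tuning_sketch": "A hedged hunting sketch pairing subject keywords with attachments. `gsuite_gmail` is the ESCU macro convention and the keyword list is purely illustrative (the Mandiant reference above catalogs common spear-phishing terms):\n\n```spl\n`gsuite_gmail` num_message_attachments > 0 subject IN (\"*invoice*\", \"*payment*\", \"*urgent*\", \"*delivery*\")\n| table _time, source.from_header_address, destination{}.address, subject, attachment{}.file_extension_type\n```\n\nRun this over a representative window first; subjects that constantly match routine business mail should be dropped from the keyword list or routed to a suppression lookup.",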
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.32",
              "n": "Gsuite Email With Known Abuse Web Service Link",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects emails in Gsuite containing links to known abuse web services such as Pastebin, Telegram, and Discord. It leverages Gsuite Gmail logs to identify emails with these specific domains in their links. This activity is significant because these services are commonly used by attackers to deliver malicious payloads. If confirmed malicious, this could lead to the delivery of malware, phishing attacks, or other harmful activities, potentially compromising sensitive informat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "G Suite Gmail",
              "q": "# Shared SPL: intentional — see UC-10.4.30\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$destination{}.address$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to email address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires G Suite Gmail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "normal email contains this link that are known application within the organization or network can be catched by this detection.",
              "refs": "https://news.sophos.com/en-us/2021/07/22/malware-increasingly-targets-discord-for-abuse/",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "email_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Gsuite Email With Known Abuse Web Service Link\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to email address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: G Suite Gmail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Gsuite Email With Known Abuse Web Service Link\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: normal email contains this link that are known application within the organization or network can be catched by this detection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
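              "_tuning_sketch": "A hedged sketch of the underlying idea: match link domains against known abuse-hosting services. `gsuite_gmail` is the ESCU macro convention, the link_domain{} field follows the GSuite Gmail log format, and the domain list is illustrative:\n\n```spl\n`gsuite_gmail` \"link_domain{}\" IN (\"*pastebin.com*\", \"*discord*\", \"*telegram*\", \"t.me\")\n| stats count values(link_domain{}) as link_domains by source.from_header_address, subject\n```\n\nIf one of these services is legitimately used inside your organization (the known false-positive case above), move it into a per-domain suppression rather than disabling the detection.",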
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.33",
              "n": "Gsuite Outbound Email With Attachment To External Domain",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects outbound emails with attachments sent from an internal email domain to an external domain. It leverages Gsuite Gmail logs, parsing the source and destination email domains, and flags emails with fewer than 20 outbound instances. This activity is significant as it may indicate potential data exfiltration or insider threats. If confirmed malicious, an attacker could use this method to exfiltrate sensitive information, leading to data breaches and compliance violation…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "G Suite Gmail",
              "q": "`gsuite_gmail` num_message_attachments > 0\n      | rex field=source.from_header_address \"[^@]+@(?<source_domain>[^@]+)\"\n      | rex field=destination{}.address \"[^@]+@(?<dest_domain>[^@]+)\"\n      | where source_domain=\"internal_test_email.com\" and not dest_domain=\"internal_test_email.com\"\n      | eval phase=\"plan\"\n      | eval severity=\"low\"\n      | stats values(subject) as subject, values(source.from_header_address) as src_domain_list, count as numEvents, dc(source.from_header_address) as numSrcAddresses, min(_time) as firstTime max(_time) as lastTime\n        BY dest_domain phase severity\n      | where numSrcAddresses < 20\n      | sort - numSrcAddresses\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `gsuite_outbound_email_with_attachment_to_external_domain_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires G Suite Gmail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network admin and normal user may send this file attachment as part of their day to day work. having a good protocol in attaching this file type to an e-mail may reduce the risk of having a spear phishing attack.",
              "refs": "https://www.redhat.com/en/topics/devops/what-is-devsecops",
              "mitre": [
                "T1048.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Gsuite Outbound Email With Attachment To External Domain\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: G Suite Gmail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1048.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Gsuite Outbound Email With Attachment To External Domain\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network admin and normal user may send this file attachment as part of their day to day work. having a good protocol in attaching this file type to an e-mail may reduce the risk of having a spear phishing attack.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.34",
              "n": "Gsuite suspicious calendar invite",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious calendar invites sent via GSuite, potentially indicating compromised accounts or malicious internal activity. It leverages GSuite calendar logs, focusing on events where a high volume of invites (over 100) is sent within a 5-minute window. This behavior is significant as it may involve the distribution of malicious links or attachments, posing a security risk. If confirmed malicious, this activity could lead to widespread phishing attacks, unauthorized a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`gsuite_calendar`\n      | bin span=5m _time\n      | rename parameters.* as *\n      | search target_calendar_id!=null email=\"*yourdomain.com\"\n      | stats  count values(target_calendar_id) values(event_title) values(event_guest)\n        BY email _time\n      | where count >100\n      | `gsuite_suspicious_calendar_invite_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This search will also produce normal activity statistics. Fields such as email, ip address, name, parameters.organizer_calendar_id, parameters.target_calendar_id and parameters.event_title may give away phishing intent.For more specific results use email parameter.",
              "refs": "https://www.techrepublic.com/article/how-to-avoid-the-dreaded-google-calendar-malicious-invite-issue/",
              "mitre": [
                "T1566"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Gsuite suspicious calendar invite\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Gsuite suspicious calendar invite\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This search will also produce normal activity statistics. Fields such as email, ip address, name, parameters.organizer_calendar_id, parameters.target_calendar_id and parameters.event_title may give away phishing intent.For more specific results use email parameter.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.35",
              "n": "Gsuite Suspicious Shared File Name",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects shared files in Google Drive with suspicious filenames commonly used in spear phishing campaigns. It leverages GSuite Drive logs to identify documents with titles that include keywords like \"dhl,\" \"ups,\" \"invoice,\" and \"shipment.\" This activity is significant because such filenames are often used to lure users into opening malicious documents or clicking harmful links. If confirmed malicious, this activity could lead to unauthorized access, data theft, or further c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "G Suite Drive",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$email$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires G Suite Drive ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "normal user or normal transaction may contain the subject and file type attachment that this detection try to search",
              "refs": "https://www.redhat.com/en/topics/devops/what-is-devsecops, https://www.mandiant.com/resources/top-words-used-in-spear-phishing-attacks",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Gsuite Suspicious Shared File Name\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: G Suite Drive. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Gsuite Suspicious Shared File Name\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: normal user or normal transaction may contain the subject and file type attachment that this detection try to search\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
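              "_tuning_sketch": "A hedged sketch for validating the keyword premise against your own Drive activity. `gsuite_drive` follows the ESCU macro convention and the keyword list mirrors the examples named above (dhl, ups, invoice, shipment):\n\n```spl\n`gsuite_drive`\n| rename parameters.* as *\n| search doc_title IN (\"*dhl*\", \"*ups*\", \"*invoice*\", \"*shipment*\")\n| stats count values(doc_title) as titles by owner\n| sort - count\n```\n\nOwners who routinely share genuine logistics or billing documents will dominate this list; give them a per-entity risk exception rather than pruning the keywords.",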
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.36",
              "n": "Kubernetes Pod With Host Network Attachment",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation or update of a Kubernetes pod with host network attachment. It leverages Kubernetes Audit logs to identify pods configured with host network settings. This activity is significant for a SOC as it could allow an attacker to monitor all network traffic on the node, potentially capturing sensitive information and escalating privileges. If confirmed malicious, this could lead to unauthorized access, data breaches, and service disruptions, severely impactin…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1204"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Pod With Host Network Attachment\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Pod With Host Network Attachment\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
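              "_ex": "A hedged raw-event sketch of the detection idea, not the packaged ESCU logic; `index=kubernetes` and sourcetype `kube:apiserver:audit` are placeholders for wherever your Kubernetes Audit feed lands. It surfaces pods created or updated with host network attachment, the condition this use case targets.\n\n```spl\nindex=kubernetes sourcetype=\"kube:apiserver:audit\" \"objectRef.resource\"=pods verb IN (create, update) \"requestObject.spec.hostNetwork\"=true\n| stats count min(_time) as firstTime max(_time) as lastTime by user.username, objectRef.namespace, objectRef.name\n```",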
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.37",
              "n": "O365 Add App Role Assignment Grant User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of an application role assignment grant to a user in Office 365. It leverages data from the `o365_management_activity` dataset, specifically monitoring the \"Add app role assignment grant to user\" operation. This activity is significant as it can indicate unauthorized privilege escalation or the assignment of sensitive roles to users. If confirmed malicious, this could allow an attacker to gain elevated permissions, potentially leading to unauthorized a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Add app role assignment grant to user.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 Add app role assignment grant to user ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The creation of a new Federation is not necessarily malicious, however this events need to be followed closely, as it may indicate federated credential abuse or backdoor via federated identities at a different cloud provider.",
              "refs": "https://www.fireeye.com/content/dam/fireeye-www/blog/pdfs/wp-m-unc2452-2021-000343-01.pdf, https://www.cisa.gov/uscert/ncas/alerts/aa21-008a",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Add App Role Assignment Grant User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Add app role assignment grant to user.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Add App Role Assignment Grant User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The creation of a new Federation is not necessarily malicious, however this events need to be followed closely, as it may indicate federated credential abuse or backdoor via federated identities at a different cloud provider.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
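              "_ex": "A hedged raw-event sketch of the operation this detection watches, assuming `index=o365` and sourcetype `o365:management:activity` from the Microsoft Office 365 Add-on (the packaged ESCU correlation search may scope and score differently; the trailing period in the Operation value is part of the Azure AD audit operation name).\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Workload=AzureActiveDirectory Operation=\"Add app role assignment grant to user.\"\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId, ObjectId\n```",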
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.38",
              "n": "O365 Added Service Principal",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of new service principal accounts in O365 tenants. It leverages data from the `o365_management_activity` dataset, specifically monitoring for operations related to adding or creating service principals. This activity is significant because attackers can exploit service principals to gain unauthorized access and perform malicious actions within an organization's environment. If confirmed malicious, this could allow attackers to interact with APIs, acces…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The creation of a new Federation is not necessarily malicious, however these events need to be followed closely, as it may indicate federated credential abuse or backdoor via federated identities at a different cloud provider.",
              "refs": "https://www.fireeye.com/content/dam/fireeye-www/blog/pdfs/wp-m-unc2452-2021-000343-01.pdf, https://www.cisa.gov/uscert/ncas/alerts/aa21-008a, https://www.splunk.com/en_us/blog/security/a-golden-saml-journey-solarwinds-continued.html, https://blog.sygnia.co/detection-and-hunting-of-golden-saml-attack?hsLang=en",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Added Service Principal\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Added Service Principal\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The creation of a new Federation is not necessarily malicious, however these events need to be followed closely, as it may indicate federated credential abuse or backdoor via federated identities at a different cloud provider.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
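              "_ex": "A minimal sketch of the underlying audit operation, assuming the `o365:management:activity` sourcetype in `index=o365`; adjust both to your deployment. Use it to confirm the raw events exist before enabling the correlation search.\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Workload=AzureActiveDirectory Operation=\"Add service principal.\"\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId, ObjectId\n```",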
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.39",
              "n": "O365 Admin Consent Bypassed by Service Principal",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a service principal in Office 365 Azure Active Directory assigns app roles without standard admin consent. It leverages `o365_management_activity` logs, specifically focusing on the 'Add app role assignment to service principal' operation. This activity is significant for SOCs as it may indicate a bypass of critical administrative controls, potentially leading to unauthorized access or privilege escalation. If confirmed malicious, this could allo…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Add app role assignment to service principal.",
              "q": "# Shared SPL: intentional — see UC-10.4.53\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Service Principals are sometimes configured to legitimately bypass the consent process for purposes of automation. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/003/, https://msrc.microsoft.com/blog/2024/01/microsoft-actions-following-attack-by-nation-state-actor-midnight-blizzard/, https://www.microsoft.com/en-us/security/blog/2024/01/25/midnight-blizzard-guidance-for-responders-on-nation-state-attack/, https://attack.mitre.org/techniques/T1098/002/, https://www.microsoft.com/en-us/security/blog/2024/01/25/midnight-blizzard-guidance-for-responders-on-nation-state-attack/, https://winsmarts.com/how-to-grant-admin-consent-to-an-api-programmatically-e32f4a100e9d",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Admin Consent Bypassed by Service Principal\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Add app role assignment to service principal.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Admin Consent Bypassed by Service Principal\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Service Principals are sometimes configured to legitimately bypass the consent process for purposes of automation. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
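              "_ex": "A hedged sketch of the operation behind this detection (assumes `index=o365` and sourcetype `o365:management:activity`). Note the packaged ESCU logic additionally checks that the initiating identity is itself a service principal; this sketch omits that test.\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Workload=AzureActiveDirectory Operation=\"Add app role assignment to service principal.\"\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId, ObjectId\n```",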
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.4.39: O365 Admin Consent Bypassed by Service Principal.",
                  "ea": "Saved search 'UC-10.4.39' running on O365 Add app role assignment to service principal., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.40",
              "n": "O365 Advanced Audit Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where the O365 advanced audit is disabled for a specific user within the Office 365 tenant. It uses O365 audit logs, focusing on events related to audit license changes in AzureActiveDirectory workloads. This activity is significant because the O365 advanced audit provides critical logging and insights into user and administrator activities. Disabling it can blind security teams to potential malicious actions. If confirmed malicious, attackers could opera…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Change user license.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators might temporarily disable the advanced audit for troubleshooting, performance reasons, or other administrative tasks. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1562/008/, https://www.csoonline.com/article/570381/microsoft-365-advanced-audit-what-you-need-to-know.html",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Advanced Audit Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Change user license.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Advanced Audit Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators might temporarily disable the advanced audit for troubleshooting, performance reasons, or other administrative tasks. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
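              "_ex": "A minimal triage sketch, assuming `index=o365` and sourcetype `o365:management:activity`. The packaged detection further inspects the modified license properties to confirm the advanced-audit service plan was removed; this sketch only surfaces license-change events for review.\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Workload=AzureActiveDirectory Operation=\"Change user license.\"\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId, ObjectId\n```",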
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.41",
              "n": "O365 Application Available To Other Tenants",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the configuration of Azure Active Directory Applications in a manner that allows authentication from external tenants or personal accounts. This configuration can lead to inappropriate or malicious access of any data or capabilities the application is allowed to access. This detection leverages the O365 Universal Audit Log data source.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Business approved changes by known administrators.",
              "refs": "https://attack.mitre.org/techniques/T1098/, https://msrc.microsoft.com/blog/2023/03/guidance-on-potential-misconfiguration-of-authorization-of-multi-tenant-applications-that-use-azure-ad/, https://www.wiz.io/blog/azure-active-directory-bing-misconfiguration",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "service",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Application Available To Other Tenants\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to service entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Application Available To Other Tenants\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Business approved changes by known administrators.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
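              "_ex": "A hedged sketch of the underlying application-update audit event (assumes `index=o365` and sourcetype `o365:management:activity`; the `ModifiedProperties{}.Name` filter is an assumption about how the add-on extracts the audit payload, so adjust to your field extractions).\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Workload=AzureActiveDirectory Operation=\"Update application.\" \"ModifiedProperties{}.Name\"=\"AvailableToOtherTenants\"\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId, ObjectId\n```",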
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.42",
              "n": "O365 Application Registration Owner Added",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a new owner is assigned to an application registration within an Azure AD and Office 365 tenant. It leverages O365 audit logs, specifically events related to changes in owner assignments within the AzureActiveDirectory workload. This activity is significant because assigning a new owner to an application registration can grant significant control over the application's configuration, permissions, and behavior. If confirmed malicious, an attacker …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Add owner to application.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Application owners may be added for legitimate reasons, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/, https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/overview-assign-app-owners",
              "mitre": [
                "T1098"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Application Registration Owner Added\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Add owner to application.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Application Registration Owner Added\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Application owners may be added for legitimate reasons, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
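              "_ex": "A minimal raw-event sketch, assuming `index=o365` and sourcetype `o365:management:activity`; it lists who added owners to which application registrations so you can eyeball the feed before turning on the correlation search.\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Workload=AzureActiveDirectory Operation=\"Add owner to application.\"\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId, ObjectId\n```",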
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.43",
              "n": "O365 ApplicationImpersonation Role Assigned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the assignment of the ApplicationImpersonation role in Office 365 to a user or application. It uses the Office 365 Management Activity API to monitor Azure Active Directory audit logs for role assignment events. This activity is significant because the ApplicationImpersonation role allows impersonation of any user, enabling access to and modification of their mailbox. If confirmed malicious, an attacker could gain unauthorized access to sensitive information, manip…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$target_user$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While infrequent, the ApplicationImpersonation role may be granted for leigimate reasons, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/002/, https://www.mandiant.com/resources/blog/remediation-and-hardening-strategies-for-microsoft-365-to-defend-against-unc2452, https://www.mandiant.com/media/17656",
              "mitre": [
                "T1098.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 ApplicationImpersonation Role Assigned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 ApplicationImpersonation Role Assigned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While infrequent, the ApplicationImpersonation role may be granted for leigimate reasons, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
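\n\nExample Tuning Search\n\nA minimal sketch of the per-entity risk exception described above. The lookup sanctioned_impersonation_accounts.csv and its user column are hypothetical; populate them with the few service accounts (for example, approved migration or archiving tools) that legitimately hold this role:\n\n```spl\n| from datamodel Risk.All_Risk\n| search search_name=\"*O365 ApplicationImpersonation Role Assigned*\"\n| lookup sanctioned_impersonation_accounts.csv user as normalized_risk_object OUTPUT user as sanctioned\n| where isnull(sanctioned)\n```\n\nBecause this role grant is rare, any event that survives the filter deserves prompt review.",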
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.44",
              "n": "O365 BEC Email Hiding Rule Created",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects mailbox rule creation, a common technique used in Business Email Compromise. It uses a scoring mechanism to identify a combination of attributes often featured in mailbox rules created by attackers. This may indicate that an attacker has gained access to the account.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object=\"$user$\" starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Short rule names may trigger false positives. Adjust the entropy and length thresholds as needed.",
              "refs": "https://attack.mitre.org/techniques/T1564/008/",
              "mitre": [
                "T1564.008"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 BEC Email Hiding Rule Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1564.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 BEC Email Hiding Rule Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Short rule names may trigger false positives. Adjust the entropy and length thresholds as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
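\n\nExample Tuning Search\n\nA minimal sketch for reviewing how this detection fires before adjusting its entropy and length thresholds. It assumes risk events land in the Enterprise Security default risk index; adjust if your deployment writes them elsewhere:\n\n```spl\nindex=risk search_name=\"*O365 BEC Email Hiding Rule Created*\"\n| stats count values(risk_message) as messages by risk_object\n| sort -count\n```\n\nEntities at the top of this table whose rule names are consistently benign are candidates for raised thresholds or a suppression.",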
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who changes inbox and transport rules, forwarding, and email-related tenant settings, so sneaky auto-forwarding or admin tampering is visible before it becomes a full account takeover.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.45",
              "n": "O365 Block User Consent For Risky Apps Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when the \"risk-based step-up consent\" security setting in Microsoft 365 is disabled. It monitors Azure Active Directory logs for the \"Update authorization policy\" operation, specifically changes to the \"AllowUserConsentForRiskyApps\" setting. This activity is significant because disabling this feature can expose the organization to OAuth phishing threats, allowing users to grant consent to malicious applications. If confirmed malicious, attackers could gain unauthor…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Update authorization policy.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate changes to the 'risk-based step-up consent' setting by administrators, perhaps as part of a policy update or security assessment, may trigger this alert, necessitating verification of the change's intent and authorization.",
              "refs": "https://attack.mitre.org/techniques/T1562/, https://goodworkaround.com/2020/10/19/a-look-behind-the-azure-ad-permission-classifications-preview/, https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/configure-risk-based-step-up-consent, https://learn.microsoft.com/en-us/defender-cloud-apps/investigate-risky-oauth",
              "mitre": [
                "T1562"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Block User Consent For Risky Apps Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Update authorization policy.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Block User Consent For Risky Apps Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate changes to the 'risk-based step-up consent' setting by administrators, perhaps as part of a policy update or security assessment, may trigger this alert, necessitating verification of the change's intent and authorization.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
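\n\nExample Validation Search\n\nA minimal sketch to confirm the specific audit operation behind this detection is being ingested. It assumes the Splunk Microsoft Office 365 Add-on default sourcetype; scope index=* to your O365 index in production:\n\n```spl\nindex=* sourcetype=\"o365:management:activity\" Operation=\"Update authorization policy.\"\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId\n```\n\nEach row is an account that changed the tenant authorization policy; cross-check these against approved change records during triage.",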
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.4.45: O365 Block User Consent For Risky Apps Disabled.",
                  "ea": "Saved search 'UC-10.4.45' running on O365 Update authorization policy., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.46",
              "n": "O365 Bypass MFA via Trusted IP",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where new IP addresses are added to the trusted IPs list in Office 365, potentially allowing users from these IPs to bypass Multi-Factor Authentication (MFA) during login. It leverages O365 audit logs, specifically focusing on events related to the modification of trusted IP settings. This activity is significant because adding trusted IPs can weaken the security posture by bypassing MFA, which is a critical security control. If confirmed malicious, th…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Set Company Information.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install Splunk Microsoft Office 365 add-on. This search works with o365:management:activity",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Unless it is a special case, it is uncommon to continually update Trusted IPs to MFA configuration.",
              "refs": "https://i.blackhat.com/USA-20/Thursday/us-20-Bienstock-My-Cloud-Is-APTs-Cloud-Investigating-And-Defending-Office-365.pdf, https://attack.mitre.org/techniques/T1562/007/, https://learn.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-mfasettings",
              "mitre": [
                "T1562.007"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Bypass MFA via Trusted IP\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Set Company Information.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Bypass MFA via Trusted IP\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Unless it is a special case, it is uncommon to continually update Trusted IPs to MFA configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
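\n\nExample Validation Search\n\nA minimal sketch to surface the raw events behind this detection so trusted IP changes can be reviewed directly. It assumes the Splunk Microsoft Office 365 Add-on default sourcetype; scope index=* to your O365 index:\n\n```spl\nindex=* sourcetype=\"o365:management:activity\" Operation=\"Set Company Information.\"\n| stats count by UserId, ClientIP\n```\n\nAny UserId listed here has modified tenant-level company settings, which include the MFA trusted IP list; verify each change was authorized.",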
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Microsoft 365 sign-in and email-related identity signals, so we spot risky logins, token abuse, and account takeover before mailboxes are used to blast attackers’ messages.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.47",
              "n": "O365 Compliance Content Search Exported",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when the results of a content search within the Office 365 Security and Compliance Center are exported. It uses the SearchExported operation from the SecurityComplianceCenter workload in the o365_management_activity data source. This activity is significant because exporting search results can involve sensitive or critical organizational data, potentially leading to data exfiltration. If confirmed malicious, an attacker could gain access to and exfiltrate sensit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Compliance content searche exports may be executed for legitimate purposes, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1114/002/, https://learn.microsoft.com/en-us/purview/ediscovery-content-search-overview, https://learn.microsoft.com/en-us/purview/ediscovery-keyword-queries-and-search-conditions, https://learn.microsoft.com/en-us/purview/ediscovery-search-for-activities-in-the-audit-log",
              "mitre": [
                "T1114.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Compliance Content Search Exported\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Compliance Content Search Exported\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Compliance content searche exports may be executed for legitimate purposes, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
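\n\nExample Tuning Search\n\nA minimal sketch of the lookup-based suppression described above. The lookup ediscovery_team.csv and its user column are hypothetical; populate them with the analysts authorized to export compliance content searches:\n\n```spl\n| from datamodel Risk.All_Risk\n| search search_name=\"*O365 Compliance Content Search Exported*\"\n| lookup ediscovery_team.csv user as normalized_risk_object OUTPUT user as authorized\n| where isnull(authorized)\n```\n\nExports by anyone outside the authorized team warrant immediate follow-up, since exported results can contain sensitive mail and documents.",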
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for sensitive data leaving M365 and unusual mailbox behavior, so data-loss and eDiscovery abuse does not go unnoticed between mail and file workloads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.48",
              "n": "O365 Compliance Content Search Started",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a content search is initiated within the Office 365 Security and Compliance Center. It leverages the SearchCreated operation from the o365_management_activity logs under the SecurityComplianceCenter workload. This activity is significant as it may indicate an attempt to access sensitive organizational data, including emails and documents. If confirmed malicious, this could lead to unauthorized data access, potential data exfiltration, and compliance violations…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Compliance content searches may be executed for legitimate purposes, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1114/002/, https://learn.microsoft.com/en-us/purview/ediscovery-content-search-overview, https://learn.microsoft.com/en-us/purview/ediscovery-keyword-queries-and-search-conditions, https://learn.microsoft.com/en-us/purview/ediscovery-search-for-activities-in-the-audit-log",
              "mitre": [
                "T1114.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Compliance Content Search Started\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Compliance Content Search Started\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Compliance content searches may be executed for legitimate purposes, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
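\n\nExample Tuning Search\n\nA minimal sketch for baselining how often each user starts content searches, which helps separate routine eDiscovery staff from anomalies. It assumes risk events land in the Enterprise Security default risk index:\n\n```spl\nindex=risk search_name=\"*O365 Compliance Content Search Started*\"\n| timechart span=1d count by risk_object\n```\n\nA user who has never appeared before and suddenly starts several searches is a stronger signal than steady activity from known compliance staff.",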
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for sensitive data leaving M365 and unusual mailbox behavior, so data-loss and eDiscovery abuse does not go unnoticed between mail and file workloads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.49",
              "n": "O365 Concurrent Sessions From Different Ips",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies user sessions in Office 365 accessed from multiple IP addresses, indicating potential adversary-in-the-middle (AiTM) phishing attacks. It detects this activity by analyzing Azure Active Directory logs for 'UserLoggedIn' operations and flags sessions with more than one associated IP address. This behavior is significant as it suggests unauthorized concurrent access, which is uncommon in normal usage. If confirmed malicious, the impact could include data theft, ac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 UserLoggedIn",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 UserLoggedIn ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1185/, https://breakdev.org/evilginx-2-next-generation-of-phishing-2fa-tokens/, https://github.com/kgretzky/evilginx2",
              "mitre": [
                "T1185"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Concurrent Sessions From Different Ips\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 UserLoggedIn. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1185. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Concurrent Sessions From Different Ips\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Microsoft 365 sign-in and email-related identity signals, so we spot risky logins, token abuse, and account takeover before mailboxes are used to blast attackers’ messages.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.50",
              "n": "O365 Cross-Tenant Access Change",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when cross-tenant access/synchronization policies are changed in an Azure tenant. Adversaries have been observed altering victim cross-tenant policies as a method of lateral movement or maintaining persistent access to compromised environments. These policies should be considered sensitive and monitored for changes and/or loose configuration.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Business approved changes by known administrators.",
              "refs": "https://attack.mitre.org/techniques/T1484/002/, https://thehackernews.com/2023/08/emerging-attacker-exploit-microsoft.html, https://cyberaffairs.com/news/emerging-attacker-exploit-microsoft-cross-tenant-synchronization/, https://www.crowdstrike.com/blog/crowdstrike-defends-against-azure-cross-tenant-synchronization-attacks/",
              "mitre": [
                "T1484.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Cross-Tenant Access Change\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1484.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Cross-Tenant Access Change\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Business approved changes by known administrators.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.51",
              "n": "O365 Disable MFA",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where Multi-Factor Authentication (MFA) is disabled for a user within the Office 365 environment. It leverages O365 audit logs, specifically focusing on events related to MFA settings. Disabling MFA removes a critical security layer, making accounts more vulnerable to unauthorized access. If confirmed malicious, this activity could indicate an attacker attempting to maintain persistence or an insider threat, significantly increasing the risk of unautho…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Disable Strong Authentication.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 Disable Strong Authentication ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Unless it is a special case, it is uncommon to disable MFA or Strong Authentication",
              "refs": "https://attack.mitre.org/techniques/T1556/",
              "mitre": [
                "T1556"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Disable MFA\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Disable Strong Authentication.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Disable MFA\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Unless it is a special case, it is uncommon to disable MFA or Strong Authentication\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Microsoft 365 sign-in and email-related identity signals, so we spot risky logins, token abuse, and account takeover before mailboxes are used to blast attackers’ messages.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.52",
              "n": "O365 DLP Rule Triggered",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when Microsoft Office 365 Data Loss Prevention (DLP) rules have been triggered. DLP rules can be configured for any number of security, regulatory, or business compliance reasons, as such this analytic will only be as accurate as the upstream DLP configuration. Detections from this analytic should be evaluated thoroughly to de termine what, if any, security relevance the underlying DLP events contain.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events. You must deploy DLP rules through O365 security and compliance functions.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "WIll depending on accuracy of DLP rules, these can be noisy so tune appropriately.",
              "refs": "https://learn.microsoft.com/en-us/purview/dlp-learn-about-dlp",
              "mitre": [
                "T1048",
                "T1567"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 DLP Rule Triggered\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1048, T1567. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 DLP Rule Triggered\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: WIll depending on accuracy of DLP rules, these can be noisy so tune appropriately.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.53",
              "n": "O365 Elevated Mailbox Permission Assigned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the assignment of elevated mailbox permissions in an Office 365 environment via the Add-MailboxPermission operation. It leverages logs from the Exchange workload in the o365_management_activity data source, focusing on permissions such as FullAccess, ChangePermission, or ChangeOwner. This activity is significant as it indicates potential unauthorized access or control over mailboxes, which could lead to data exfiltration or privilege escalation. If confirmed mal…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Add-MailboxPermission",
              "q": "# Shared SPL: intentional — see UC-10.4.39\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 Add-MailboxPermission ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "FullAccess mailbox delegation may be assigned for legitimate purposes, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/002/, https://learn.microsoft.com/en-us/powershell/module/exchange/add-mailboxpermission, https://learn.microsoft.com/en-us/exchange/recipients/mailbox-permissions?view=exchserver-2019",
              "mitre": [
                "T1098.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Elevated Mailbox Permission Assigned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Add-MailboxPermission. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Elevated Mailbox Permission Assigned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: FullAccess mailbox delegation may be assigned for legitimate purposes, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.54",
              "n": "O365 Email Access By Security Administrator",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when a user with sufficient access to O365 Security & Compliance portal uses premium investigation features (Threat Explorer) to directly view email. Adversaries may exploit privileged access with this premium feature to enumerate or exfiltrate sensitive data.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitamate access by security administators for incident response measures.",
              "refs": "https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/threat-explorer-investigate-delivered-malicious-email?view=o365-worldwide",
              "mitre": [
                "T1114.002",
                "T1567"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email Access By Security Administrator\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.002, T1567. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email Access By Security Administrator\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitamate access by security administators for incident response measures.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.55",
              "n": "O365 Email Hard Delete Excessive Volume",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when an O365 email account hard deletes an excessive number of emails within a short period (within 1 hour). This behavior may indicate a compromised account where the threat actor is attempting to permanently purge a large amount of items from the mailbox. Threat actors may attempt to remove evidence of their activity by purging items from the compromised mailbox. --- Some account owner legitimate behaviors can trigger this alert, however these actions may not …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "`o365_management_activity` Workload=Exchange (Operation IN (\"HardDelete\") AND Folder.Path IN (\"\\\\Sent Items\",\"\\\\Recoverable Items\\\\Deletions\")) AND UserId = \"$user$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users that habitually/proactively cleaning the recoverable items folder may trigger this alert.",
              "refs": "https://attack.mitre.org/techniques/T1114/, https://www.hhs.gov/sites/default/files/help-desk-social-engineering-sector-alert-tlpclear.pdf, https://intelligence.abnormalsecurity.com/attack-library/threat-actor-convincingly-impersonates-employee-requesting-direct-deposit-update-in-likely-ai-generated-attack",
              "mitre": [
                "T1070.008",
                "T1485"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email Hard Delete Excessive Volume\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.008, T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email Hard Delete Excessive Volume\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users that habitually/proactively cleaning the recoverable items folder may trigger this alert.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.56",
              "n": "O365 Email New Inbox Rule Created",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of new email inbox rules in an Office 365 environment. It detects events logged under New-InboxRule and Set-InboxRule operations within the o365_management_activity data source, focusing on parameters that may indicate mail forwarding, removal, or obfuscation. Inbox rule creation is a typical end-user activity however attackers also leverage this technique for multiple reasons.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "`o365_management_activity` Workload=Exchange AND (Operation=New-InboxRule OR Operation=Set-InboxRule) AND UserId = \"$user$",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users may create email rules for legitimate purposes. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1114/, https://www.hhs.gov/sites/default/files/help-desk-social-engineering-sector-alert-tlpclear.pdf, https://intelligence.abnormalsecurity.com/attack-library/threat-actor-convincingly-impersonates-employee-requesting-direct-deposit-update-in-likely-ai-generated-attack",
              "mitre": [
                "T1114.003",
                "T1564.008"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email New Inbox Rule Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.003, T1564.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email New Inbox Rule Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users may create email rules for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who changes inbox and transport rules, forwarding, and email-related tenant settings, so sneaky auto-forwarding or admin tampering is visible before it becomes a full account takeover.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.57",
              "n": "O365 Email Password and Payroll Compromise Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when an O365 email recipient receives and then deletes emails for the combination of both password and banking/payroll changes within a short period. This behavior may indicate a compromised account where the threat actor is attempting to redirect the victims payroll to an attacker controlled bank account.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log, Office 365 Reporting Message Trace",
              "q": "`o365_messagetrace` subject IN (\"*banking*\",\"*direct deposit*\",\"*password*\",\"*passcode*\") RecipientAddress = \"$user$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log, Office 365 Reporting Message Trace ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1114/, https://www.hhs.gov/sites/default/files/help-desk-social-engineering-sector-alert-tlpclear.pdf, https://intelligence.abnormalsecurity.com/attack-library/threat-actor-convincingly-impersonates-employee-requesting-direct-deposit-update-in-likely-ai-generated-attack",
              "mitre": [
                "T1070.008",
                "T1485",
                "T1114.001"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email Password and Payroll Compromise Behavior\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log, Office 365 Reporting Message Trace. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.008, T1485, T1114.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email Password and Payroll Compromise Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
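\n\nOptional RBA Sketch: Review Cumulative Risk per Entity\n\nA minimal sketch for reviewing how much risk this detection has contributed per entity, using the same Risk data model fields as the ESCU drilldown searches. The search_name wildcard is an assumption; match it to the detection name as saved in Content Management:\n\n```spl\n| from datamodel Risk.All_Risk\n| search search_name=\"*Password and Payroll*\"\n| stats sum(calculated_risk_score) as total_risk values(risk_message) as messages by normalized_risk_object\n| sort - total_risk\n```",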
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.58",
              "n": "O365 Email Receive and Hard Delete Takeover Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when an O365 email recipient receives and then deletes emails related to password or banking/payroll changes within a short period. This behavior may indicate a compromised account where the threat actor is attempting to redirect the victims payroll to an attacker controlled bank account.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log, Office 365 Reporting Message Trace",
              "q": "`o365_messagetrace` subject IN (\"*banking*\",\"*direct deposit*\",\"*pay-to*\",\"*password *\",\"*passcode *\",\"*OTP *\",\"*MFA *\",\"*Account Recovery*\") AND RecipientAddress = \"$user$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log, Office 365 Reporting Message Trace ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Possible new user/account onboarding processes.",
              "refs": "https://attack.mitre.org/techniques/T1114/, https://www.hhs.gov/sites/default/files/help-desk-social-engineering-sector-alert-tlpclear.pdf, https://intelligence.abnormalsecurity.com/attack-library/threat-actor-convincingly-impersonates-employee-requesting-direct-deposit-update-in-likely-ai-generated-attack",
              "mitre": [
                "T1070.008",
                "T1485",
                "T1114.001"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email Receive and Hard Delete Takeover Behavior\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log, Office 365 Reporting Message Trace. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.008, T1485, T1114.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email Receive and Hard Delete Takeover Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Possible new user/account onboarding processes.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.59",
              "n": "O365 Email Reported By Admin Found Malicious",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when an email manually submitted to Microsoft through the Security & Compliance portal is found to be malicious. This capability is an enhanced protection feature that can be used within o365 tenants by administrative users to report potentially malicious emails. This correlation looks for any submission that returns a Phish or Malware verdict upon submission.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to email entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators that submit known phishing training exercises.",
              "refs": "https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/submissions-outlook-report-messages?view=o365-worldwide",
              "mitre": [
                "T1566.001",
                "T1566.002"
              ],
              "dtype": "email_subject",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email Reported By Admin Found Malicious\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to email entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001, T1566.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email Reported By Admin Found Malicious\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators that submit known phishing training exercises.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.60",
              "n": "O365 Email Reported By User Found Malicious",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when an email submitted to Microsoft using the built-in report button in Outlook is found to be malicious. This capability is an enhanced protection feature that can be used within o365 tenants by users to report potentially malicious emails. This correlation looks for any submission that returns a Phish or Malware verdict upon submission.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to email entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/submissions-outlook-report-messages?view=o365-worldwide",
              "mitre": [
                "T1566.001",
                "T1566.002"
              ],
              "dtype": "email_subject",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email Reported By User Found Malicious\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to email entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001, T1566.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email Reported By User Found Malicious\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.61",
              "n": "O365 Email Security Feature Changed",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when specific O365 advanced security settings are altered within the Office 365 tenant. If an attacker successfully disables O365 security settings, they can operate within the tenant with reduced risk of detection. This can lead to unauthorized data access, data exfiltration, account compromise, or other malicious activities without leaving a detailed audit trail.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators might alter features for troubleshooting, performance reasons, or other administrative tasks. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/entra/fundamentals/security-defaults, https://attack.mitre.org/techniques/T1562/008/",
              "mitre": [
                "T1562.001",
                "T1562.008"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email Security Feature Changed\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email Security Feature Changed\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators might alter features for troubleshooting, performance reasons, or other administrative tasks. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who changes inbox and transport rules, forwarding, and email-related tenant settings, so sneaky auto-forwarding or admin tampering is visible before it becomes a full account takeover.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.62",
              "n": "O365 Email Send and Hard Delete Exfiltration Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when an O365 email account sends and then hard deletes an email to an external recipient within a short period (within 1 hour). This behavior may indicate a compromised account where the threat actor is attempting to remove forensic artifacts or evidence of exfiltration activity. This behavior is often seen when threat actors want to reduce the probability of detection by the compromised account owner.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log, Office 365 Reporting Message Trace",
              "q": "`o365_management_activity` Workload=Exchange (Operation IN (\"Send\")) OR (Operation IN (\"HardDelete\") AND Folder.Path IN (\"\\\\Sent Items\",\"\\\\Recoverable Items\\\\Deletions\")) AND UserId = \"$user$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to email entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log, Office 365 Reporting Message Trace ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users that habitually/proactively cleaning the recoverable items folder may trigger this alert.",
              "refs": "https://attack.mitre.org/techniques/T1114/, https://www.hhs.gov/sites/default/files/help-desk-social-engineering-sector-alert-tlpclear.pdf, https://intelligence.abnormalsecurity.com/attack-library/threat-actor-convincingly-impersonates-employee-requesting-direct-deposit-update-in-likely-ai-generated-attack",
              "mitre": [
                "T1114.001",
                "T1070.008",
                "T1485"
              ],
              "dtype": "email_subject",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email Send and Hard Delete Exfiltration Behavior\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to email entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log, Office 365 Reporting Message Trace. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.001, T1070.008, T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email Send and Hard Delete Exfiltration Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users that habitually/proactively cleaning the recoverable items folder may trigger this alert.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
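\n\nOptional Detection-Logic Sketch: Send Followed by Hard Delete\n\nA simplified, hypothetical sketch of the send-then-hard-delete pairing this use case describes; the authoritative logic lives in the ESCU Correlation Search definition. This version pairs operations per user within hourly buckets and skips per-message matching:\n\n```spl\n`o365_management_activity` Workload=Exchange Operation IN (\"Send\",\"HardDelete\")\n| bin _time span=1h\n| stats values(Operation) as operations count by _time, UserId\n| where mvcount(operations) > 1\n```",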
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.63",
              "n": "O365 Email Send and Hard Delete Suspicious Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when an O365 email account sends and then hard deletes email with within a short period (within 1 hour). This behavior may indicate a compromised account where the threat actor is attempting to remove forensic artifacts or evidence of activity. Threat actors often use this technique to prevent defenders and victims from knowing the account has been compromised. --- Some account owner legitimate behaviors can trigger this alert, however these actions may not be a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "`o365_management_activity` Workload=Exchange (Operation IN (\"Send*\")) OR (Operation IN (\"HardDelete\") AND Folder.Path IN (\"\\\\Sent Items\",\"\\\\Recoverable Items\\\\Deletions\")) AND UserId = \"$user$\" AND \"$subject$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to email entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users that habitually/proactively cleaning the recoverable items folder may trigger this alert.",
              "refs": "https://attack.mitre.org/techniques/T1114/, https://www.hhs.gov/sites/default/files/help-desk-social-engineering-sector-alert-tlpclear.pdf, https://intelligence.abnormalsecurity.com/attack-library/threat-actor-convincingly-impersonates-employee-requesting-direct-deposit-update-in-likely-ai-generated-attack",
              "mitre": [
                "T1114.001",
                "T1070.008",
                "T1485"
              ],
              "dtype": "email_subject",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email Send and Hard Delete Suspicious Behavior\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to email entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.001, T1070.008, T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email Send and Hard Delete Suspicious Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users that habitually/proactively cleaning the recoverable items folder may trigger this alert.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
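\n\nOptional Validation Sketch: Scope the Audit Check to This Detection\n\nA minimal refinement of the generic firing check above. The ss_name wildcard is an assumption; match it to the saved-search name in your environment:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Send and Hard Delete Suspicious Behavior*\"\n| stats count max(trigger_time) as last_fired by ss_name\n```",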
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.64",
              "n": "O365 Email Send Attachments Excessive Volume",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when an O365 email account sends an excessive number of email attachments to external recipients within a short period (within 1 hour). This behavior may indicate a compromised account where the threat actor is attempting to exfiltrate data from the mailbox. Threat actors may attempt to transfer data through email as a simple means of exfiltration from the compromised mailbox. Some account owner legitimate behaviors can trigger this alert, however these actions …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "`o365_management_activity` Workload=Exchange (Operation IN (\"Send*\")) AND Item.Attachments=* AND UserId = \"$user$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to email address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users or processes that are send a large number of attachments may trigger this alert, adjust thresholds accordingly.",
              "refs": "https://attack.mitre.org/techniques/T1114/, https://www.hhs.gov/sites/default/files/help-desk-social-engineering-sector-alert-tlpclear.pdf, https://intelligence.abnormalsecurity.com/attack-library/threat-actor-convincingly-impersonates-employee-requesting-direct-deposit-update-in-likely-ai-generated-attack",
              "mitre": [
                "T1070.008",
                "T1485"
              ],
              "dtype": "email_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email Send Attachments Excessive Volume\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to email address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.008, T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email Send Attachments Excessive Volume\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users or processes that are send a large number of attachments may trigger this alert, adjust thresholds accordingly.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
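\n\nIllustrative Tuning Search\n\nAs a rough baselining sketch (not the shipped ESCU logic), the search below estimates each sender’s typical hourly attachment volume and flags hours more than three standard deviations above it. It reuses the `o365_management_activity` macro and the `Item.Attachments` field from the detection SPL; the three-sigma cutoff is a placeholder.\n\n```spl\n`o365_management_activity` Workload=Exchange Operation=\"Send*\" Item.Attachments=*\n| bin _time span=1h\n| stats count as attachment_sends by _time, UserId\n| eventstats avg(attachment_sends) as avg_sends stdev(attachment_sends) as sd_sends by UserId\n| where attachment_sends > avg_sends + (3 * sd_sends)\n```\n\nReview the flagged hours against known bulk senders (marketing tools, scan-to-mail devices) before setting the production threshold.",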
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.65",
              "n": "O365 Email Suspicious Behavior Alert",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when one of O365 the built-in security detections for suspicious email behaviors are triggered.  These alerts often indicate that an attacker may have compromised a mailbox within the environment. Any detections from built-in Office 365 capabilities should be monitored and responded to appropriately. Certain premium Office 365 capabilities further enhance these detection and response functions.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users emailing for legitimate business purposes that appear suspicious.",
              "refs": "https://learn.microsoft.com/en-us/purview/alert-policies",
              "mitre": [
                "T1114.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email Suspicious Behavior Alert\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email Suspicious Behavior Alert\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users emailing for legitimate business purposes that appear suspicious.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
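\n\nIllustrative Baseline Search\n\nBefore enabling this detection, it can help to see which built-in O365 alerts actually fire in your tenant. The sketch below is illustrative only; it assumes AlertTriggered events arrive via the `o365_management_activity` macro with a Workload of SecurityComplianceCenter and an alert Name field, so verify those values in your own data.\n\n```spl\n`o365_management_activity` Workload=SecurityComplianceCenter Operation=AlertTriggered\n| stats count min(_time) as firstTime max(_time) as lastTime by Name\n| convert ctime(firstTime) ctime(lastTime)\n| sort -count\n```",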
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.66",
              "n": "O365 Email Suspicious Search Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when Office 365 users search for suspicious keywords or have an excessive number of queries to a mailbox within a limited timeframe. This behavior may indicate that a malicious actor has gained control of a mailbox and is conducting discovery or enumeration activities.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "`o365_management_activity` AND Operation=SearchQueryInitiatedExchange AND UserId = \"$user$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users searching excessively or possible false positives related to matching conditions.",
              "refs": "https://learn.microsoft.com/en-us/purview/audit-get-started#step-3-enable-searchqueryinitiated-events, https://www.cisa.gov/sites/default/files/2025-01/microsoft-expanded-cloud-logs-implementation-playbook-508c.pdf, https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-320a, https://attack.mitre.org/techniques/T1114/002/",
              "mitre": [
                "T1114.002",
                "T1552"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email Suspicious Search Behavior\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.002, T1552. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email Suspicious Search Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users searching excessively or possible false positives related to matching conditions.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
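\n\nIllustrative Tuning Search\n\nFor threshold-testing, the sketch below counts mailbox searches per user in 15-minute windows. It is illustrative, not the shipped ESCU logic; it reuses the SearchQueryInitiatedExchange operation from the detection SPL, and both the window size and the cutoff of 20 are placeholders. Remember that this operation must be enabled in your tenant per the Microsoft audit documentation referenced above.\n\n```spl\n`o365_management_activity` Operation=SearchQueryInitiatedExchange\n| bin _time span=15m\n| stats count as searches by _time, UserId\n| where searches > 20\n```",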
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.67",
              "n": "O365 Email Transport Rule Changed",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when a user with sufficient access to Exchange Online alters the mail flow/transport rule configuration of the organization. Transport rules are a set of rules that can be used by attackers to modify or delete emails based on specific conditions, this activity could indicate an attacker hiding or exfiltrated data.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "`o365_management_activity` Workload=Exchange AND Operation IN (\"Set-*\",\"Disable-*\",\"New-*\",\"Remove-*\") AND Operation=\"*Transport*\" UserId=$user$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrative changes for business needs.",
              "refs": "https://attack.mitre.org/techniques/T1114/003/, https://cardinalops.com/blog/cardinalops-contributes-new-mitre-attck-techniques-related-to-abuse-of-mail-transport-rules/, https://www.microsoft.com/en-us/security/blog/2022/09/22/malicious-OAuth-applications-used-to-compromise-email-servers-and-spread-spam/",
              "mitre": [
                "T1114.003",
                "T1564.008"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Email Transport Rule Changed\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.003, T1564.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Email Transport Rule Changed\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrative changes for business needs.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
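\n\nIllustrative Baseline Search\n\nIt can help to baseline which administrators legitimately change transport rules before enabling the detection, so that a suppression lookup can be built. The sketch below is illustrative only, reusing the operation filters from the detection SPL via the `o365_management_activity` macro.\n\n```spl\n`o365_management_activity` Workload=Exchange Operation IN (\"Set-*\",\"Disable-*\",\"New-*\",\"Remove-*\") Operation=\"*Transport*\"\n| stats count values(Operation) as operations min(_time) as firstTime max(_time) as lastTime by UserId\n| convert ctime(firstTime) ctime(lastTime)\n| sort -count\n```\n\nAccounts appearing here with regular, expected changes are candidates for the lookup-based suppressions described in the tuning section.",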
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who changes inbox and transport rules, forwarding, and email-related tenant settings, so sneaky auto-forwarding or admin tampering is visible before it becomes a full account takeover.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.68",
              "n": "O365 Excessive Authentication Failures Alert",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an excessive number of authentication failures, including failed attempts against MFA prompt codes. It uses data from the `o365_management_activity` dataset, focusing on events where the authentication status is marked as failure. This behavior is significant as it may indicate a brute force attack or an attempt to compromise user accounts. If confirmed malicious, this activity could lead to unauthorized access, data breaches, or further exploitation within the …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The threshold for alert is above 10 attempts and this should reduce the number of false positives.",
              "refs": "https://attack.mitre.org/techniques/T1110/",
              "mitre": [
                "T1110"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Excessive Authentication Failures Alert\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Excessive Authentication Failures Alert\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The threshold for alert is above 10 attempts and this should reduce the number of false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
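\n\nIllustrative Tuning Search\n\nA quick way to sanity-check the >10-attempt threshold against your own traffic is to bucket failures by source address. This sketch is illustrative, not the shipped ESCU logic; it assumes UserLoginFailed events via the `o365_management_activity` macro and a ClientIP field (some record types expose the address as ActorIpAddress instead, so verify in your data).\n\n```spl\n`o365_management_activity` Operation=UserLoginFailed\n| bin _time span=10m\n| stats count as failures dc(UserId) as distinct_users by _time, ClientIP\n| where failures > 10\n```\n\nSources failing against many distinct users in a single window look more like password spraying than a single forgetful user.",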
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Microsoft 365 sign-in and email-related identity signals, so we spot risky logins, token abuse, and account takeover before mailboxes are used to blast attackers’ messages.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.69",
              "n": "O365 Excessive SSO logon errors",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects accounts experiencing a high number of Single Sign-On (SSO) logon errors. It leverages data from the `o365_management_activity` dataset, focusing on failed user login attempts with SSO errors. This activity is significant as it may indicate brute-force attempts or the hijacking/reuse of SSO tokens. If confirmed malicious, attackers could potentially gain unauthorized access to user accounts, leading to data breaches, privilege escalation, or further lateral movemen…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 UserLoginFailed",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 UserLoginFailed ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Logon errors may not be malicious in nature however it may indicate attempts to reuse a token or password obtained via credential access attack.",
              "refs": "https://stealthbits.com/blog/bypassing-mfa-with-pass-the-cookie/",
              "mitre": [
                "T1556"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Excessive SSO logon errors\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 UserLoginFailed. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Excessive SSO logon errors\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Logon errors may not be malicious in nature however it may indicate attempts to reuse a token or password obtained via credential access attack.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
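\n\nIllustrative Tuning Search\n\nTo gauge baseline SSO error volume before enabling the detection, the sketch below groups failures by user and error string. It is illustrative only; it assumes UserLoginFailed events carry a LogonError field whose SSO-related values contain \"Sso\", so verify the exact error strings in your tenant. The cutoff of 5 is a placeholder.\n\n```spl\n`o365_management_activity` Operation=UserLoginFailed LogonError=\"*Sso*\"\n| stats count as sso_errors min(_time) as firstTime max(_time) as lastTime by UserId, LogonError\n| where sso_errors > 5\n| convert ctime(firstTime) ctime(lastTime)\n```",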
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Microsoft 365 sign-in and email-related identity signals, so we spot risky logins, token abuse, and account takeover before mailboxes are used to blast attackers’ messages.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.70",
              "n": "O365 Exfiltration via File Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when an excessive number of files are access from o365 by the same user over a short period of time. A malicious actor may abuse the \"open in app\" functionality of SharePoint through scripted or Graph API based access to evade triggering the FileDownloaded Event. This behavior may indicate an attacker staging data for exfiltration or an insider threat removing organizational data. Additional attention should be take with any Azure Guest (#EXT#) accounts.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "`o365_management_activity` Operation IN (\"fileaccessed\") UserId=\"$UserId$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that certain file access scenarios may trigger this alert, specifically OneDrive syncing and users accessing personal onedrives of other users. Adjust threshold and filtering as needed.",
              "refs": "https://attack.mitre.org/techniques/T1567/002/, https://www.varonis.com/blog/sidestepping-detection-while-exfiltrating-sharepoint-data, https://thedfirjournal.com/posts/m365-data-exfiltration-rclone/",
              "mitre": [
                "T1567",
                "T1530"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Exfiltration via File Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1567, T1530. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Exfiltration via File Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that certain file access scenarios may trigger this alert, specifically OneDrive syncing and users accessing personal onedrives of other users. Adjust threshold and filtering as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
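\n\nIllustrative Tuning Search\n\nFor threshold-testing, the sketch below counts distinct files accessed per user per hour and flags Azure guest accounts, which the description above singles out. It is illustrative, not the shipped ESCU logic; it reuses the fileaccessed operation from the detection SPL and assumes a SourceFileName field, so verify against your SharePoint/OneDrive audit events. The 100-file cutoff is a placeholder.\n\n```spl\n`o365_management_activity` Operation=fileaccessed\n| bin _time span=1h\n| stats count as accesses dc(SourceFileName) as distinct_files by _time, UserId\n| eval is_guest=if(match(UserId, \"#EXT#\"), \"yes\", \"no\")\n| where distinct_files > 100\n```",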
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for sensitive data leaving M365 and unusual mailbox behavior, so data-loss and eDiscovery abuse does not go unnoticed between mail and file workloads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.71",
              "n": "O365 Exfiltration via File Download",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when an excessive number of files are downloaded from o365 by the same user over a short period of time. O365 may bundle these files together as a ZIP file, however each file will have it's own download event. This behavior may indicate an attacker staging data for exfiltration or an insider threat removing organizational data. Additional attention should be taken with any Azure Guest (#EXT#) accounts.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "`o365_management_activity` Operation IN (\"filedownloaded\") UserId=\"$UserId$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that certain file download scenarios may trigger this alert, specifically OneDrive syncing. Adjust threshold and filtering as needed.",
              "refs": "https://attack.mitre.org/techniques/T1567/002/, https://www.varonis.com/blog/sidestepping-detection-while-exfiltrating-sharepoint-data, https://thedfirjournal.com/posts/m365-data-exfiltration-rclone/",
              "mitre": [
                "T1567",
                "T1530"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Exfiltration via File Download\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1567, T1530. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Exfiltration via File Download\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that certain file download scenarios may trigger this alert, specifically OneDrive syncing. Adjust threshold and filtering as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for sensitive data leaving M365 and unusual mailbox behavior, so data-loss and eDiscovery abuse does not go unnoticed between mail and file workloads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.72",
              "n": "O365 Exfiltration via File Sync Download",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when an excessive number of files are sync from o365 by the same user over a short period of time. A malicious actor abuse the user-agent string through GUI or API access to evade triggering the FileDownloaded event. This behavior may indicate an attacker staging data for exfiltration or an insider threat removing organizational data. Additional attention should be taken with any Azure Guest (#EXT#) accounts.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "`o365_management_activity` Operation IN (\"filesyncdownload*\") UserId=\"$UserId$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that certain file sync scenarios may trigger this alert, specifically OneNote. Adjust threshold and filtering as needed.",
              "refs": "https://attack.mitre.org/techniques/T1567/002/, https://www.varonis.com/blog/sidestepping-detection-while-exfiltrating-sharepoint-data, https://thedfirjournal.com/posts/m365-data-exfiltration-rclone/",
              "mitre": [
                "T1567",
                "T1530"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Exfiltration via File Sync Download\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1567, T1530. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Exfiltration via File Sync Download\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that certain file sync scenarios may trigger this alert, specifically OneNote. Adjust threshold and filtering as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for sensitive data leaving M365 and unusual mailbox behavior, so data-loss and eDiscovery abuse does not go unnoticed between mail and file workloads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.73",
              "n": "O365 External Guest User Invited",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the invitation of an external guest user within Azure AD. With Azure AD B2B collaboration, users and administrators can invite external users to collaborate with internal users. External guest account invitations should be monitored by security teams as they could potentially lead to unauthorized access. An example of this attack vector was described at BlackHat 2022 by security researcher Dirk-Jan during his tall `Backdooring and Hijacking Azure AD Accounts by …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator may legitimately invite external guest users. Filter as needed.",
              "refs": "https://dirkjanm.io/assets/raw/US-22-Mollema-Backdooring-and-hijacking-Azure-AD-accounts_final.pdf, https://www.blackhat.com/us-22/briefings/schedule/#backdooring-and-hijacking-azure-ad-accounts-by-abusing-external-identities-26999, https://attack.mitre.org/techniques/T1136/003/, https://learn.microsoft.com/en-us/azure/active-directory/external-identities/b2b-quickstart-add-guest-users-portal",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 External Guest User Invited\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 External Guest User Invited\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator may legitimately invite external guest users. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.74",
              "n": "O365 External Identity Policy Changed",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when changes are made to the external guest policies within Azure AD. With Azure AD B2B collaboration, users and administrators can invite external users to collaborate with internal users. This detection also attempts to highlight what may have changed. External guest account invitations should be monitored by security teams as they could potentially lead to unauthorized access. An example of this attack vector was described at BlackHat 2022 by security researc…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Business approved changes by known administrators.",
              "refs": "https://medium.com/tenable-techblog/roles-allowing-to-abuse-entra-id-federation-for-persistence-and-privilege-escalation-df9ca6e58360, https://learn.microsoft.com/en-us/entra/external-id/external-identities-overview",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 External Identity Policy Changed\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 External Identity Policy Changed\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Business approved changes by known administrators.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.75",
              "n": "O365 File Permissioned Application Consent Granted by User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a user in the Office 365 environment grants consent to an application requesting file permissions for OneDrive or SharePoint. It leverages O365 audit logs, focusing on OAuth application consent events. This activity is significant because granting such permissions can allow applications to access, modify, or delete files, posing a risk if the application is malicious or overly permissive. If confirmed malicious, this could lead to data breaches, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Consent to application.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "OAuth applications that require file permissions may be legitimate, investigate and filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1528/, https://www.microsoft.com/en-us/security/blog/2022/09/22/malicious-oauth-applications-used-to-compromise-email-servers-and-spread-spam/, https://learn.microsoft.com/en-us/defender-cloud-apps/investigate-risky-oauth, https://www.alteredsecurity.com/post/introduction-to-365-stealer, https://github.com/AlteredSecurity/365-Stealer",
              "mitre": [
                "T1528"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 File Permissioned Application Consent Granted by User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Consent to application.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1528. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 File Permissioned Application Consent Granted by User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: OAuth applications that require file permissions may be legitimate, investigate and filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.4.75: O365 File Permissioned Application Consent Granted by User.",
                  "ea": "Saved search 'UC-10.4.75' running on O365 Consent to application., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.76",
              "n": "O365 FullAccessAsApp Permission Assigned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the assignment of the 'full_access_as_app' permission to an application registration in Office 365 Exchange Online. This detection leverages Office 365 management activity logs and filters Azure Active Directory workload events to identify when the specific permission, identified by GUID 'dc890d15-9560-4a4c-9b7f-a736ec74ec40', is granted. This activity is significant because it provides extensive control over Office 365 operations, including access to all mailboxes…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Update application.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The full_access_as_app API permission may be assigned to legitimate applications. Filter as needed.",
              "refs": "https://msrc.microsoft.com/blog/2024/01/microsoft-actions-following-attack-by-nation-state-actor-midnight-blizzard/, https://www.microsoft.com/en-us/security/blog/2024/01/25/midnight-blizzard-guidance-for-responders-on-nation-state-attack/, https://attack.mitre.org/techniques/T1098/002/",
              "mitre": [
                "T1098.002",
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 FullAccessAsApp Permission Assigned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Update application.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.002, T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 FullAccessAsApp Permission Assigned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The full_access_as_app API permission may be assigned to legitimate applications. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.77",
              "n": "O365 High Number Of Failed Authentications for User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an O365 account experiencing more than 20 failed authentication attempts within 5 minutes. It uses O365 Unified Audit Logs, specifically \"UserLoginFailed\" events, to monitor and flag accounts exceeding this threshold. This activity is significant as it may indicate a brute force attack or password guessing attempt. If confirmed malicious, an attacker could gain unauthorized access to the O365 environment, potentially compromising sensitive emails, documents, and…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 UserLoginFailed",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 UserLoginFailed ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unusual, users who have lost their passwords may trigger this detection. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1110/, https://attack.mitre.org/techniques/T1110/001/",
              "mitre": [
                "T1110.001"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 High Number Of Failed Authentications for User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 UserLoginFailed. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 High Number Of Failed Authentications for User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unusual, users who have lost their passwords may trigger this detection. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Microsoft 365 sign-in and email-related identity signals, so we spot risky logins, token abuse, and account takeover before mailboxes are used to blast attackers’ messages.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.78",
              "n": "O365 High Privilege Role Granted",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when high-privilege roles such as \"Exchange Administrator,\" \"SharePoint Administrator,\" or \"Global Administrator\" are granted within Office 365. It leverages O365 audit logs to identify events where these roles are assigned to any user or service account. This activity is significant for SOCs as these roles provide extensive permissions, allowing broad access and control over critical resources and data. If confirmed malicious, this could enable attackers to gain s…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Add member to role.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Privilege roles may be assigned for legitimate purposes, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/003/, https://learn.microsoft.com/en-us/azure/active-directory/roles/permissions-reference, https://learn.microsoft.com/en-us/microsoft-365/admin/add-users/about-exchange-online-admin-role?view=o365-worldwide, https://learn.microsoft.com/en-us/sharepoint/sharepoint-admin-role",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 High Privilege Role Granted\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Add member to role.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 High Privilege Role Granted\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Privilege roles may be assigned for legitimate purposes, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
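              "_exampleSpl": "Added illustration, not part of the ESCU content: once the detection is enabled, a quick way to confirm it is writing risk events is to query the risk index directly. This sketch assumes the ES default risk index (`risk`) and the standard ESCU correlation search naming convention ('ESCU - <name> - Rule'); adjust both if your deployment differs.\n\n```spl\nindex=risk search_name=\"ESCU - O365 High Privilege Role Granted - Rule\"\n| stats count min(_time) as firstTime max(_time) as lastTime values(risk_message) as risk_message by risk_object, risk_object_type\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",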
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.79",
              "n": "O365 Mail Permissioned Application Consent Granted by User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a user grants consent to an application requesting mail-related permissions within the Office 365 environment. It leverages O365 audit logs, specifically focusing on events related to application permissions and user consent actions. This activity is significant as it can indicate potential security risks, such as data exfiltration or spear phishing, if malicious applications gain access. If confirmed malicious, this could lead to unauthorized da…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Consent to application.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "OAuth applications that require mail permissions may be legitimate, investigate and filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1528/, https://www.microsoft.com/en-us/security/blog/2022/09/22/malicious-oauth-applications-used-to-compromise-email-servers-and-spread-spam/, https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/protect-against-consent-phishing, https://learn.microsoft.com/en-us/defender-cloud-apps/investigate-risky-oauth, https://www.alteredsecurity.com/post/introduction-to-365-stealer, https://github.com/AlteredSecurity/365-Stealer",
              "mitre": [
                "T1528"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Mail Permissioned Application Consent Granted by User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Consent to application.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1528. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Mail Permissioned Application Consent Granted by User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: OAuth applications that require mail permissions may be legitimate, investigate and filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
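              "_exampleSpl": "Added illustration, not the ESCU detection logic: a coarse raw-event check for user consent grants. Assumes the `o365_management_activity` macro from the Microsoft Office 365 Add-on (used elsewhere in this catalog) and the raw `UserId` field; verify field names against your events before relying on this.\n\n```spl\n`o365_management_activity` Workload=AzureActiveDirectory Operation=\"Consent to application.\"\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",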
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.4.79: O365 Mail Permissioned Application Consent Granted by User.",
                  "ea": "Saved search 'UC-10.4.79' running on O365 Consent to application., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.80",
              "n": "O365 Mailbox Email Forwarding Enabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where email forwarding has been enabled on mailboxes within an Office 365 environment. It detects this activity by monitoring the Set-Mailbox operation within the o365_management_activity logs, specifically looking for changes to the ForwardingAddress or ForwardingSmtpAddress parameters. This activity is significant as unauthorized email forwarding can lead to data exfiltration and unauthorized access to sensitive information. If confirmed malicious, a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Email forwarding may be configured for legitimate purposes, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1114/003/, https://learn.microsoft.com/en-us/exchange/recipients/user-mailboxes/email-forwarding?view=exchserver-2019",
              "mitre": [
                "T1114.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Mailbox Email Forwarding Enabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Mailbox Email Forwarding Enabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Email forwarding may be configured for legitimate purposes, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
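              "_exampleSpl": "Added illustration, not the ESCU detection logic: per the description above, the detection watches Set-Mailbox operations that touch ForwardingAddress or ForwardingSmtpAddress. A coarse raw-term sketch, assuming the `o365_management_activity` macro and the raw `UserId`/`ObjectId` fields (both assumptions; verify against your events):\n\n```spl\n`o365_management_activity` Operation=Set-Mailbox (\"ForwardingAddress\" OR \"ForwardingSmtpAddress\")\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId, ObjectId\n```",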
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who changes inbox and transport rules, forwarding, and email-related tenant settings, so sneaky auto-forwarding or admin tampering is visible before it becomes a full account takeover.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.81",
              "n": "O365 Mailbox Folder Read Permission Assigned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where read permissions are assigned to mailbox folders within an Office 365 environment. It leverages the `o365_management_activity` data source, specifically monitoring the `ModifyFolderPermissions` and `AddFolderPermissions` operations, while excluding Calendar, Contacts, and PersonMetadata objects. This activity is significant as unauthorized read permissions can lead to data exposure and potential information leakage. If confirmed malicious, an att…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 ModifyFolderPermissions",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Mailbox folder permissions may be configured for legitimate purposes, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/002/, https://learn.microsoft.com/en-us/openspecs/exchange_server_protocols/ms-oxodlgt/5610c6e6-3268-44e3-adff-8804f5315946, https://learn.microsoft.com/en-us/purview/audit-mailboxes",
              "mitre": [
                "T1098.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Mailbox Folder Read Permission Assigned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 ModifyFolderPermissions. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Mailbox Folder Read Permission Assigned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Mailbox folder permissions may be configured for legitimate purposes, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
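              "_exampleSpl": "Added illustration, not the ESCU detection logic: the description above monitors the ModifyFolderPermissions and AddFolderPermissions operations (excluding Calendar, Contacts, and PersonMetadata objects, which this sketch omits for brevity). Assumes the `o365_management_activity` macro and the raw `UserId`/`MailboxOwnerUPN` fields; verify against your events.\n\n```spl\n`o365_management_activity` Operation IN (\"ModifyFolderPermissions\", \"AddFolderPermissions\")\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId, MailboxOwnerUPN, Operation\n```",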
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.82",
              "n": "O365 Mailbox Folder Read Permission Granted",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where read permissions are granted to mailbox folders within an Office 365 environment. It detects this activity by monitoring the `o365_management_activity` data source for the `Set-MailboxFolderPermission` and `Add-MailboxFolderPermission` operations. This behavior is significant as it may indicate unauthorized access or changes to mailbox folder permissions, potentially exposing sensitive email content. If confirmed malicious, an attacker could gain…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 ModifyFolderPermissions",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 ModifyFolderPermissions ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Mailbox folder permissions may be configured for legitimate purposes, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/002/, https://learn.microsoft.com/en-us/powershell/module/exchange/add-mailboxfolderpermission?view=exchange-ps, https://learn.microsoft.com/en-us/powershell/module/exchange/set-mailboxfolderpermission?view=exchange-ps",
              "mitre": [
                "T1098.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Mailbox Folder Read Permission Granted\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 ModifyFolderPermissions. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Mailbox Folder Read Permission Granted\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Mailbox folder permissions may be configured for legitimate purposes, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
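              "_exampleSpl": "Added illustration, not the ESCU detection logic: per the description above, the detection watches the Set-MailboxFolderPermission and Add-MailboxFolderPermission operations. Assumes the `o365_management_activity` macro and the raw `UserId` field; verify against your events.\n\n```spl\n`o365_management_activity` Operation IN (\"Set-MailboxFolderPermission\", \"Add-MailboxFolderPermission\")\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId, Operation\n```",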
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.83",
              "n": "O365 Mailbox Inbox Folder Shared with All Users",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where the inbox folder of an Office 365 mailbox is shared with all users within the tenant. It leverages Office 365 management activity events to identify when the 'Inbox' folder permissions are modified to include 'Everyone' with read rights. This activity is significant as it represents a potential security risk, allowing unauthorized access to sensitive emails. If confirmed malicious, this could lead to data breaches, exfiltration of confidential infor…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 ModifyFolderPermissions",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$MailboxOwnerUPN$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators might temporarily share a mailbox with all users for legitimate reasons, such as troubleshooting, migrations, or other administrative tasks. Some organizations use shared mailboxes for teams or departments where multiple users need access to the same mailbox. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1114/002/, https://www.blackhillsinfosec.com/abusing-exchange-mailbox-permissions-mailsniper/, https://learn.microsoft.com/en-us/purview/audit-mailboxes, https://learn.microsoft.com/en-us/openspecs/exchange_server_protocols/ms-oxodlgt/5610c6e6-3268-44e3-adff-8804f5315946",
              "mitre": [
                "T1114.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Mailbox Inbox Folder Shared with All Users\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 ModifyFolderPermissions. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Mailbox Inbox Folder Shared with All Users\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators might temporarily share a mailbox with all users for legitimate reasons, such as troubleshooting, migrations, or other administrative tasks. Some organizations use shared mailboxes for teams or departments where multiple users need access to the same mailbox. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
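              "_exampleSpl": "Added illustration, not the ESCU detection logic: per the description above, the detection looks for the 'Inbox' folder being granted read rights to 'Everyone'. A coarse raw-term sketch, assuming the `o365_management_activity` macro and the raw `MailboxOwnerUPN`/`UserId` fields; verify against your events.\n\n```spl\n`o365_management_activity` Operation=ModifyFolderPermissions \"Inbox\" \"Everyone\"\n| stats count min(_time) as firstTime max(_time) as lastTime by MailboxOwnerUPN, UserId\n```",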
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.84",
              "n": "O365 Mailbox Read Access Granted to Application",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where the Mail.Read Graph API permissions are granted to an application registration within an Office 365 tenant. It leverages O365 audit logs, specifically events related to changes in application permissions within the AzureActiveDirectory workload. This activity is significant because the Mail.Read permission allows applications to access and read all emails within a user's mailbox, which often contain sensitive or confidential information. If confi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Update application.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There are legitimate scenarios in wich an Application registrations requires Mailbox read access. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/003/, https://attack.mitre.org/techniques/T1114/002/, https://www.cisa.gov/sites/default/files/publications/Supply_Chain_Compromise_Detecting_APT_Activity_from_known_TTPs.pdf, https://learn.microsoft.com/en-us/graph/permissions-reference, https://graphpermissions.merill.net/permission/Mail.Read",
              "mitre": [
                "T1098.003",
                "T1114.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Mailbox Read Access Granted to Application\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Update application.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003, T1114.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Mailbox Read Access Granted to Application\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There are legitimate scenarios in wich an Application registrations requires Mailbox read access. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
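              "_exampleSpl": "Added illustration, not the ESCU detection logic: per the description above, the detection looks for the Mail.Read Graph API permission being added to an application registration. A coarse raw-term sketch, assuming the `o365_management_activity` macro and the raw `UserId` field; verify against your events.\n\n```spl\n`o365_management_activity` Workload=AzureActiveDirectory Operation=\"Update application.\" \"Mail.Read\"\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId\n```",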
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.85",
              "n": "O365 Multi-Source Failed Authentications Spike",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a spike in failed authentication attempts within an Office 365 environment, indicative of a potential distributed password spraying attack. It leverages UserLoginFailed events from O365 Management Activity logs, focusing on ErrorNumber 50126. This detection is significant as it highlights attempts to bypass security controls using multiple IP addresses and user agents. If confirmed malicious, this activity could lead to unauthorized access, data breaches, privil…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 UserLoginFailed",
              "q": "`o365_management_activity` Workload=AzureActiveDirectory Operation=UserLoginFailed ErrorNumber=50126\n      | bucket span=5m _time\n      | eval uniqueIPUserCombo = src . \"-\" . user\n      | fillnull\n      | stats earliest(_time) as firstTime max(_time) as lastTime dc(uniqueIPUserCombo) as uniqueIpUserCombinations, dc(user) as uniqueUsers, dc(src) as uniqueIPs, values(user) as user, values(src) as ips, values(user_agent) as user_agents values(signature) as signature values(src) as src values(dest) as dest\n        BY _time vendor_account vendor_product\n      | where uniqueIpUserCombinations > 20 AND uniqueUsers > 20 AND uniqueIPs > 20\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `o365_multi_source_failed_authentications_spike_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires O365 UserLoginFailed ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection may yield false positives in scenarios where legitimate bulk sign-in activities occur, such as during company-wide system updates or when users are accessing resources from varying locations in a short time frame, such as in the case of VPNs or cloud services that rotate IP addresses. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/security/compass/incident-response-playbook-password-spray, https://www.cisa.gov/uscert/ncas/alerts/aa21-008a, https://learn.microsoft.com/azure/active-directory/reports-monitoring/reference-sign-ins-error-codes",
              "mitre": [
                "T1110.003",
                "T1110.004",
                "T1586.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Multi-Source Failed Authentications Spike\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 UserLoginFailed. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003, T1110.004, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Multi-Source Failed Authentications Spike\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection may yield false positives in scenarios where legitimate bulk sign-in activities occur, such as during company-wide system updates or when users are accessing resources from varying locations in a short time frame, such as in the case of VPNs or cloud services that rotate IP addresses. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Microsoft 365 sign-in and email-related identity signals, so we spot risky logins, token abuse, and account takeover before mailboxes are used to blast attackers’ messages.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.86",
              "n": "O365 Multiple AppIDs and UserAgents Authentication Spike",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies unusual authentication activity in an O365 environment, where a single user account experiences more than 8 authentication attempts using 3 or more unique application IDs and over 5 unique user agents within a short timeframe. It leverages O365 audit logs, focusing on authentication events and applying statistical thresholds. This behavior is significant as it may indicate an adversary probing for multi-factor authentication weaknesses. If confirmed malicious, i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 UserLoggedIn, O365 UserLoginFailed",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 UserLoggedIn, O365 UserLoginFailed ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Rapid authentication from the same user using more than 5 different user agents and 3 application IDs is highly unlikely under normal circumstances. However, there are potential scenarios that could lead to false positives.",
              "refs": "https://attack.mitre.org/techniques/T1078/, https://www.blackhillsinfosec.com/exploiting-mfa-inconsistencies-on-microsoft-services/, https://github.com/dafthack/MFASweep, https://www.youtube.com/watch?v=SK1zgqaAZ2E",
              "mitre": [
                "T1078"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Multiple AppIDs and UserAgents Authentication Spike\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 UserLoggedIn, O365 UserLoginFailed. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Multiple AppIDs and UserAgents Authentication Spike\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Rapid authentication from the same user using more than 5 different user agents and 3 application IDs is highly unlikely under normal circumstances. However, there are potential scenarios that could lead to false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
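\n\nIllustrative Detection Logic (Sketch)\n\nThe drilldown SPL above shows risk events, not the detection logic. As a reading aid only, here is a rough sketch of the documented conditions (more than 8 attempts, 3 or more application IDs, more than 5 user agents), assuming the `o365_management_activity` macro and raw field names `UserId`, `ApplicationId`, and `UserAgent`, which vary by add-on version:\n\n```spl\n`o365_management_activity` Operation IN (UserLoggedIn, UserLoginFailed)\n| bin _time span=5m\n| stats count dc(ApplicationId) as unique_appids dc(UserAgent) as unique_useragents by UserId, _time\n| where count > 8 AND unique_appids >= 3 AND unique_useragents > 5\n```\n\nThe packaged Correlation Search in Content Management remains the authoritative logic.",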
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Microsoft 365 sign-in and email-related identity signals, so we spot risky logins, token abuse, and account takeover before mailboxes are used to blast attackers’ messages.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.87",
              "n": "O365 Multiple Failed MFA Requests For User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential \"MFA fatigue\" attacks targeting Office 365 users by detecting more than nine Multi-Factor Authentication (MFA) prompts within a 10-minute timeframe. It leverages O365 management activity logs, focusing on Azure Active Directory events with the UserLoginFailed operation, a Success ResultStatus, and an ErrorNumber of 500121. This activity is significant as attackers may exploit MFA fatigue to gain unauthorized access by overwhelming users with repeated M…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 UserLoginFailed",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 UserLoginFailed ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Multiple Failed MFA requests may also be a sign of authentication or application issues. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1621/",
              "mitre": [
                "T1621"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Multiple Failed MFA Requests For User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 UserLoginFailed. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Multiple Failed MFA Requests For User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Multiple Failed MFA requests may also be a sign of authentication or application issues. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
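\n\nIllustrative Detection Logic (Sketch)\n\nThe drilldown SPL above shows risk events, not the detection logic. The analytic description names the key conditions: UserLoginFailed with a Success ResultStatus and ErrorNumber 500121, more than nine prompts within 10 minutes. A sketch under those assumptions (field names such as `UserId` and `ErrorNumber` depend on your add-on’s extractions):\n\n```spl\n`o365_management_activity` Workload=AzureActiveDirectory Operation=UserLoginFailed ResultStatus=Success ErrorNumber=500121\n| bin _time span=10m\n| stats count as mfa_prompts by UserId, _time\n| where mfa_prompts > 9\n```\n\nThe packaged Correlation Search in Content Management remains the authoritative logic.",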
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Microsoft 365 sign-in and email-related identity signals, so we spot risky logins, token abuse, and account takeover before mailboxes are used to blast attackers’ messages.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.88",
              "n": "O365 Multiple Mailboxes Accessed via API",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a high number of Office 365 Exchange mailboxes are accessed via API (Microsoft Graph API or Exchange Web Services) within a short timeframe. It leverages 'MailItemsAccessed' operations in Exchange, using AppId and regex to identify API interactions. This activity is significant as it may indicate unauthorized mass email access, potentially signaling data exfiltration or account compromise. If confirmed malicious, attackers could gain access to sensitive inform…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 MailItemsAccessed",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 MailItemsAccessed ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may access multiple mailboxes via an API. You can filter by the ClientAppId or the CLientIpAddress fields.",
              "refs": "https://attack.mitre.org/techniques/T1114/002/, https://learn.microsoft.com/en-us/troubleshoot/azure/active-directory/verify-first-party-apps-sign-in, https://learn.microsoft.com/en-us/graph/permissions-reference, https://attack.mitre.org/techniques/T1114/002/, https://www.microsoft.com/en-us/security/blog/2024/01/25/midnight-blizzard-guidance-for-responders-on-nation-state-attack/, https://learn.microsoft.com/en-us/exchange/client-developer/exchange-web-services/ews-applications-and-the-exchange-architecture",
              "mitre": [
                "T1114.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Multiple Mailboxes Accessed via API\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 MailItemsAccessed. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Multiple Mailboxes Accessed via API\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may access multiple mailboxes via an API. You can filter by the ClientAppId or the CLientIpAddress fields.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
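\n\nIllustrative Detection Logic (Sketch)\n\nThe drilldown SPL above shows risk events, not the detection logic. As a reading aid only, here is a rough sketch of the documented idea (many distinct mailboxes touched via API in a short window). It assumes MailItemsAccessed events expose an application identifier and a mailbox owner field; the names `AppId`, `ClientAppId`, and `MailboxOwnerUPN` and the threshold are assumptions to adjust to your add-on:\n\n```spl\n`o365_management_activity` Workload=Exchange Operation=MailItemsAccessed AppId=*\n| bin _time span=10m\n| stats dc(MailboxOwnerUPN) as unique_mailboxes values(ClientAppId) as client_app_ids by AppId, _time\n| where unique_mailboxes > 5\n```\n\nFilter known-good ClientAppId or ClientIpAddress values per the false-positive note above.",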
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.89",
              "n": "O365 Multiple OS Vendors Authenticating From User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when multiple operating systems are used to authenticate to Azure/EntraID/Office 365 by the same user account over a short period of time. This activity could be indicative of attackers enumerating various logon capabilities of Azure/EntraID/Office 365 and attempting to discover weaknesses in the organizational MFA or conditional access configurations. Usage of the tools like \"MFASweep\" will trigger this detection.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "`o365_management_activity` Operation IN (UserLoginFailed,UserLoggedIn) \"$user$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "IP or users where the usage of multiple Operating systems is expected, filter accordingly.",
              "refs": "https://attack.mitre.org/techniques/T1110, https://www.blackhillsinfosec.com/exploiting-mfa-inconsistencies-on-microsoft-services/, https://sra.io/blog/msspray-wait-how-many-endpoints-dont-have-mfa/, https://github.com/dafthack/MFASweep/tree/master",
              "mitre": [
                "T1110"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Multiple OS Vendors Authenticating From User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Multiple OS Vendors Authenticating From User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: IP or users where the usage of multiple Operating systems is expected, filter accordingly.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
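\n\nIllustrative Detection Logic (Sketch)\n\nThe catalog SPL above is a per-user event drilldown. The detection’s core idea is deriving an operating system family from the sign-in user agent and counting distinct families per user over a short window. A sketch under assumed extractions (the `UserAgent` field and OS patterns are illustrative):\n\n```spl\n`o365_management_activity` Operation IN (UserLoggedIn, UserLoginFailed)\n| eval os=case(match(UserAgent, \"Windows\"), \"Windows\", match(UserAgent, \"Mac OS|Macintosh\"), \"macOS\", match(UserAgent, \"iPhone|iPad|iOS\"), \"iOS\", match(UserAgent, \"Android\"), \"Android\", match(UserAgent, \"Linux\"), \"Linux\", true(), \"Other\")\n| bin _time span=30m\n| stats dc(os) as unique_os values(os) as os_seen by UserId, _time\n| where unique_os >= 3\n```\n\nTools like MFASweep produce exactly this multi-OS pattern in a tight window.",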
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.90",
              "n": "O365 Multiple Service Principals Created by SP",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a single service principal creates more than three unique OAuth applications within a 10-minute timeframe. It leverages O365 logs from the Unified Audit Log, focusing on the 'Add service principal' operation in the Office 365 Azure Active Directory environment. This activity is significant as it may indicate a compromised or malicious service principal attempting to expand control or access within the network. If confirmed malicious, this could l…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Add service principal.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Certain users or applications may create multiple service principals in a short period of time for legitimate purposes. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1136/003/, https://www.microsoft.com/en-us/security/blog/2024/01/25/midnight-blizzard-guidance-for-responders-on-nation-state-attack/",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Multiple Service Principals Created by SP\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Add service principal.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Multiple Service Principals Created by SP\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Certain users or applications may create multiple service principals in a short period of time for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
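\n\nIllustrative Detection Logic (Sketch)\n\nThe drilldown SPL above shows risk events, not the detection logic. The documented idea is counting distinct service principals created per initiating actor within 10 minutes. A sketch under assumed field names (`UserId` for the initiating identity, `ObjectId` for the created object; your add-on’s Actor field handling may differ, and separating service-principal actors from users needs an actor-type filter):\n\n```spl\n`o365_management_activity` Workload=AzureActiveDirectory Operation=\"Add service principal.\"\n| bin _time span=10m\n| stats dc(ObjectId) as sp_created values(ObjectId) as created_objects by UserId, _time\n| where sp_created > 3\n```\n\nThe companion use case 10.4.91 applies the same logic to user-initiated creations.",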
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.91",
              "n": "O365 Multiple Service Principals Created by User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a single user creates more than three unique OAuth applications within a 10-minute window in the Office 365 environment. It leverages O365 logs from the Unified Audit Log, focusing on the 'Add service principal' operation in Azure Active Directory. This activity is significant as it may indicate a compromised user account or unauthorized actions, potentially leading to broader network infiltration or privilege escalation. If confirmed malicious, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Add service principal.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Certain users or applications may create multiple service principals in a short period of time for legitimate purposes. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1136/003/, https://www.microsoft.com/en-us/security/blog/2024/01/25/midnight-blizzard-guidance-for-responders-on-nation-state-attack/",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Multiple Service Principals Created by User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Add service principal.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Multiple Service Principals Created by User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Certain users or applications may create multiple service principals in a short period of time for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.92",
              "n": "O365 Multiple Users Failing To Authenticate From Ip",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where more than 10 unique user accounts fail to authenticate from a single IP address within a 5-minute window. This detection leverages O365 audit logs, specifically Azure Active Directory login failures (AzureActiveDirectoryStsLogon). Such activity is significant as it may indicate brute-force attacks or password spraying attempts. If confirmed malicious, this behavior suggests an external entity is attempting to breach security by targeting multiple…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 UserLoginFailed",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 UserLoginFailed ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A source Ip failing to authenticate with multiple users in a short period of time is not common legitimate behavior.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/security/compass/incident-response-playbook-password-spray, https://www.cisa.gov/uscert/ncas/alerts/aa21-008a, https://learn.microsoft.com/azure/active-directory/reports-monitoring/reference-sign-ins-error-codes",
              "mitre": [
                "T1110.003",
                "T1110.004",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Multiple Users Failing To Authenticate From Ip\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 UserLoginFailed. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003, T1110.004, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Multiple Users Failing To Authenticate From Ip\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A source Ip failing to authenticate with multiple users in a short period of time is not common legitimate behavior.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
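\n\nIllustrative Detection Logic (Sketch)\n\nThe drilldown SPL above shows risk events, not the detection logic. The documented idea is classic password-spray detection: more than 10 distinct accounts failing from one source IP within 5 minutes. A sketch under assumed raw field names (`UserId` and `ClientIP`; CIM-normalized data may expose `user` and `src_ip` instead):\n\n```spl\n`o365_management_activity` Operation=UserLoginFailed\n| bin _time span=5m\n| stats dc(UserId) as unique_users values(UserId) as users by ClientIP, _time\n| where unique_users > 10\n```\n\nScope to the AzureActiveDirectoryStsLogon record type if your ingestion distinguishes it, per the analytic description.",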
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Microsoft 365 sign-in and email-related identity signals, so we spot risky logins, token abuse, and account takeover before mailboxes are used to blast attackers’ messages.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.93",
              "n": "O365 New Email Forwarding Rule Created",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of new email forwarding rules in an Office 365 environment. It detects events logged under New-InboxRule and Set-InboxRule operations within the o365_management_activity data source, focusing on parameters like ForwardTo, ForwardAsAttachmentTo, and RedirectTo. This activity is significant as unauthorized email forwarding can lead to data exfiltration and unauthorized access to sensitive information. If confirmed malicious, attackers could intercept …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users may create email forwarding rules for legitimate purposes. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1114/003/",
              "mitre": [
                "T1114.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 New Email Forwarding Rule Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 New Email Forwarding Rule Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users may create email forwarding rules for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
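              "_exampleSpl": "Hedged raw-event sketch of the documented logic (New-InboxRule/Set-InboxRule with forwarding parameters). Not the ESCU search; it assumes `index=o365`, `sourcetype=o365:management:activity`, and the multivalue parameter fields shown, so verify against your events first:\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Workload=Exchange Operation IN (\"New-InboxRule\",\"Set-InboxRule\")\n| where isnotnull(mvfind('Parameters{}.Name', \"ForwardTo|ForwardAsAttachmentTo|RedirectTo\"))\n| stats count min(_time) as firstTime max(_time) as lastTime values(Parameters{}.Value) as rule_params by UserId, Operation, ClientIP\n```",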
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who changes inbox and transport rules, forwarding, and email-related tenant settings, so sneaky auto-forwarding or admin tampering is visible before it becomes a full account takeover.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.94",
              "n": "O365 New Email Forwarding Rule Enabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of new email forwarding rules in an Office 365 environment via the UpdateInboxRules operation. It leverages Office 365 management activity events to detect rules that forward emails to external recipients by examining the OperationProperties for specific forwarding actions. This activity is significant as it may indicate unauthorized email redirection, potentially leading to data exfiltration. If confirmed malicious, attackers could intercept sensit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users may create email forwarding rules for legitimate purposes. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1114/003/",
              "mitre": [
                "T1114.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 New Email Forwarding Rule Enabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 New Email Forwarding Rule Enabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users may create email forwarding rules for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
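              "_exampleSpl": "Hedged raw-event sketch for the UpdateInboxRules variant. Not the ESCU search; it assumes `index=o365`, `sourcetype=o365:management:activity`, and that forwarding actions appear in the `OperationProperties{}.Value` multivalue field (verify the JSON shape of your UpdateInboxRules events before relying on this):\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Workload=Exchange Operation=UpdateInboxRules\n| where isnotnull(mvfind('OperationProperties{}.Value', \"ForwardToRecipientsAction|ForwardAsAttachmentToRecipientsAction|RedirectToRecipientsAction\"))\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId, ClientIP\n```",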
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who changes inbox and transport rules, forwarding, and email-related tenant settings, so sneaky auto-forwarding or admin tampering is visible before it becomes a full account takeover.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.95",
              "n": "O365 New Federated Domain Added",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the addition of a new federated domain in an Office 365 environment. This behavior is detected by analyzing Office 365 management activity logs, specifically filtering for Workload=Exchange and Operation=\"Add-FederatedDomain\". The addition of a new federated domain is significant as it may indicate unauthorized changes or potential compromises. If confirmed malicious, attackers could establish a backdoor, bypass security measures, or exfiltrate data, leading to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The creation of a new Federated domain is not necessarily malicious, however these events need to be followed closely, as it may indicate federated credential abuse or backdoor via federated identities at a similar or different cloud provider.",
              "refs": "https://www.fireeye.com/content/dam/fireeye-www/blog/pdfs/wp-m-unc2452-2021-000343-01.pdf, https://www.cisa.gov/uscert/ncas/alerts/aa21-008a, https://www.splunk.com/en_us/blog/security/a-golden-saml-journey-solarwinds-continued.html, https://blog.sygnia.co/detection-and-hunting-of-golden-saml-attack?hsLang=en, https://o365blog.com/post/aadbackdoor/",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 New Federated Domain Added\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 New Federated Domain Added\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The creation of a new Federated domain is not necessarily malicious, however these events need to be followed closely, as it may indicate federated credential abuse or backdoor via federated identities at a similar or different cloud provider.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
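              "_exampleSpl": "Hedged raw-event sketch of the documented logic (Workload=Exchange, Operation=\"Add-FederatedDomain\"). Not the ESCU search; it assumes `index=o365` and `sourcetype=o365:management:activity` (rename to match your ingestion):\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Workload=Exchange Operation=\"Add-FederatedDomain\"\n| stats count min(_time) as firstTime max(_time) as lastTime values(Parameters{}.Value) as domain by UserId, Operation\n```\n\nEvery hit deserves review: cross-check the added domain against your change records before closing.",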
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who changes inbox and transport rules, forwarding, and email-related tenant settings, so sneaky auto-forwarding or admin tampering is visible before it becomes a full account takeover.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.96",
              "n": "O365 New Forwarding Mailflow Rule Created",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of new mail flow rules in Office 365 that may redirect or copy emails to unauthorized or external addresses. It leverages Office 365 Management Activity logs, specifically querying for the \"New-TransportRule\" operation and parameters like \"BlindCopyTo\", \"CopyTo\", and \"RedirectMessageTo\". This activity is significant as it can indicate potential data exfiltration or unauthorized access to sensitive information. If confirmed malicious, attackers could in…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Forwarding mail flow rules may be created for legitimate reasons, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1114/, https://learn.microsoft.com/en-us/exchange/security-and-compliance/mail-flow-rules/mail-flow-rules, https://learn.microsoft.com/en-us/exchange/security-and-compliance/mail-flow-rules/mail-flow-rule-actions",
              "mitre": [
                "T1114"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 New Forwarding Mailflow Rule Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 New Forwarding Mailflow Rule Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Forwarding mail flow rules may be created for legitimate reasons, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
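              "_exampleSpl": "Hedged raw-event sketch of the documented logic (New-TransportRule with BlindCopyTo/CopyTo/RedirectMessageTo). Not the ESCU search; it assumes `index=o365`, `sourcetype=o365:management:activity`, and the multivalue parameter fields shown:\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Workload=Exchange Operation=\"New-TransportRule\"\n| where isnotnull(mvfind('Parameters{}.Name', \"BlindCopyTo|CopyTo|RedirectMessageTo\"))\n| stats count min(_time) as firstTime max(_time) as lastTime values(Parameters{}.Value) as rule_params by UserId, Operation\n```",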
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who changes inbox and transport rules, forwarding, and email-related tenant settings, so sneaky auto-forwarding or admin tampering is visible before it becomes a full account takeover.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.97",
              "n": "O365 New MFA Method Registered",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the registration of a new Multi-Factor Authentication (MFA) method for a user account within Office 365. It leverages O365 audit logs to identify changes in MFA configurations. This activity is significant as it may indicate an attacker's attempt to maintain persistence on a compromised account. If confirmed malicious, the attacker could bypass existing security measures, solidify their access, and potentially escalate privileges or access sensitive data. Immediate…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Update user.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 Update user ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users may register MFA methods legitimately, investigate and filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/005/, https://www.microsoft.com/en-us/security/blog/2023/06/08/detecting-and-mitigating-a-multi-stage-aitm-phishing-and-bec-campaign/, https://www.csoonline.com/article/573451/sophisticated-bec-scammers-bypass-microsoft-365-multi-factor-authentication.html",
              "mitre": [
                "T1098.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 New MFA Method Registered\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Update user.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 New MFA Method Registered\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users may register MFA methods legitimately, investigate and filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
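              "_exampleSpl": "Hedged raw-event sketch: surfacing \"Update user.\" operations that touch StrongAuthenticationMethod. Not the ESCU search; it assumes `index=o365`, `sourcetype=o365:management:activity`, and that MFA changes land in `ModifiedProperties{}.Name` (verify against your audit events):\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Workload=AzureActiveDirectory Operation=\"Update user.\"\n| where isnotnull(mvfind('ModifiedProperties{}.Name', \"StrongAuthenticationMethod\"))\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId, ObjectId\n```\n\nCorrelate hits with recent risky sign-ins for the same user; a new MFA method shortly after a suspicious login is a classic persistence pattern.",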
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Microsoft 365 sign-in and email-related identity signals, so we spot risky logins, token abuse, and account takeover before mailboxes are used to blast attackers’ messages.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.98",
              "n": "O365 OAuth App Mailbox Access via EWS",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when emails are accessed in Office 365 Exchange via Exchange Web Services (EWS) using OAuth-authenticated applications. It leverages the ClientInfoString field to identify EWS interactions and aggregates metrics such as access counts, timing, and client IP addresses, categorized by user, ClientAppId, OperationCount, and AppId. Monitoring OAuth applications accessing emails through EWS is crucial for identifying potential abuse or unauthorized data access. If confir…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 MailItemsAccessed",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 MailItemsAccessed ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "OAuth applications may access mailboxes for legitimate purposes, you can use the src to add trusted sources to an allow list.",
              "refs": "https://attack.mitre.org/techniques/T1114/002/, https://www.microsoft.com/en-us/security/blog/2024/01/25/midnight-blizzard-guidance-for-responders-on-nation-state-attack/, https://learn.microsoft.com/en-us/exchange/client-developer/exchange-web-services/ews-applications-and-the-exchange-architecture",
              "mitre": [
                "T1114.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 OAuth App Mailbox Access via EWS\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 MailItemsAccessed. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 OAuth App Mailbox Access via EWS\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: OAuth applications may access mailboxes for legitimate purposes, you can use the src to add trusted sources to an allow list.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
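              "_exampleSpl": "Hedged raw-event sketch: OAuth app mailbox reads over EWS, keyed off ClientInfoString. Not the ESCU search; it assumes `index=o365`, `sourcetype=o365:management:activity`, and the field names shown (ClientInfoString, ClientAppId, AppId, ClientIPAddress):\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Workload=Exchange Operation=MailItemsAccessed ClientInfoString=\"Client=WebServices*\"\n| stats count min(_time) as firstTime max(_time) as lastTime values(ClientIPAddress) as src by UserId, ClientAppId, AppId\n```\n\nBaseline known-good ClientAppId values first, then alert on new ones.",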
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.99",
              "n": "O365 OAuth App Mailbox Access via Graph API",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when emails are accessed in Office 365 Exchange via the Microsoft Graph API using the client ID '00000003-0000-0000-c000-000000000000'. It leverages the 'MailItemsAccessed' operation within the Exchange workload, focusing on OAuth-authenticated applications. This activity is significant as unauthorized access to emails can lead to data breaches and information theft. If confirmed malicious, attackers could exfiltrate sensitive information, compromise user accounts,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 MailItemsAccessed",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 MailItemsAccessed ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "OAuth applications may access mailboxes for legitimate purposes, you can use the ClientAppId to add trusted applications to an allow list.",
              "refs": "https://attack.mitre.org/techniques/T1114/002/, https://learn.microsoft.com/en-us/troubleshoot/azure/active-directory/verify-first-party-apps-sign-in, https://learn.microsoft.com/en-us/graph/permissions-reference",
              "mitre": [
                "T1114.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 OAuth App Mailbox Access via Graph API\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 MailItemsAccessed. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 OAuth App Mailbox Access via Graph API\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: OAuth applications may access mailboxes for legitimate purposes, you can use the ClientAppId to add trusted applications to an allow list.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
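              "_exampleSpl": "Hedged raw-event sketch: MailItemsAccessed where the app ID is the Microsoft Graph first-party client (00000003-0000-0000-c000-000000000000, per the description above). Not the ESCU search; it assumes `index=o365`, `sourcetype=o365:management:activity`, and the field names shown:\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Workload=Exchange Operation=MailItemsAccessed AppId=\"00000003-0000-0000-c000-000000000000\"\n| stats count min(_time) as firstTime max(_time) as lastTime values(ClientIPAddress) as src by UserId, ClientAppId\n```",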
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.100",
              "n": "O365 Privileged Graph API Permission Assigned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the assignment of critical Graph API permissions in Azure AD using the O365 Unified Audit Log. It focuses on permissions such as Application.ReadWrite.All, AppRoleAssignment.ReadWrite.All, and RoleManagement.ReadWrite.Directory. The detection method leverages Azure Active Directory workload events, specifically 'Update application' operations. This activity is significant as these permissions provide extensive control over Azure AD settings, posing a high risk if m…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Update application.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Privileged Graph API permissions may be assigned for legitimate purposes. Filter as needed.",
              "refs": "https://cloudbrothers.info/en/azure-attack-paths/, https://github.com/mandiant/Mandiant-Azure-AD-Investigator/blob/master/MandiantAzureADInvestigator.json, https://learn.microsoft.com/en-us/graph/permissions-reference, https://www.microsoft.com/en-us/security/blog/2024/01/25/midnight-blizzard-guidance-for-responders-on-nation-state-attack/, https://posts.specterops.io/azure-privilege-escalation-via-azure-api-permissions-abuse-74aee1006f48",
              "mitre": [
                "T1003.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Privileged Graph API Permission Assigned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Update application.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Privileged Graph API Permission Assigned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Privileged Graph API permissions may be assigned for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.101",
              "n": "O365 Privileged Role Assigned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the assignment of sensitive and privileged Azure Active Directory roles to an Azure AD user. Adversaries and red teams alike may assign these roles to a compromised account to establish Persistence in an Azure AD environment. This detection leverages the O365 Universal Audit Log data source.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators will legitimately assign the privileged roles users as part of administrative tasks. Microsoft Privileged Identity Management (PIM) may cause false positives / less accurate alerting.",
              "refs": "https://attack.mitre.org/techniques/T1098/003/, https://learn.microsoft.com/en-us/azure/active-directory/roles/permissions-reference, https://learn.microsoft.com/en-us/microsoft-365/admin/add-users/about-exchange-online-admin-role?view=o365-worldwide",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Privileged Role Assigned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Privileged Role Assigned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators will legitimately assign the privileged roles users as part of administrative tasks. Microsoft Privileged Identity Management (PIM) may cause false positives / less accurate alerting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.102",
              "n": "O365 Privileged Role Assigned To Service Principal",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential privilege escalation threats in Azure Active Directory (AD). This detection is important because it identifies instances where privileged roles that hold elevated permissions are assigned to service principals. This prevents unauthorized access or malicious activities, which occur when these non-human entities access Azure resources to exploit them. False positives might occur since administrators can legitimately assign privileged roles to service princi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may legitimately assign the privileged roles to Service Principals as part of administrative tasks. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/003/, https://learn.microsoft.com/en-us/azure/active-directory/roles/permissions-reference, https://learn.microsoft.com/en-us/microsoft-365/admin/add-users/about-exchange-online-admin-role?view=o365-worldwide, https://posts.specterops.io/azure-privilege-escalation-via-service-principal-abuse-210ae2be2a5",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Privileged Role Assigned To Service Principal\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Privileged Role Assigned To Service Principal\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may legitimately assign the privileged roles to Service Principals as part of administrative tasks. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.103",
              "n": "O365 PST export alert",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where a user has initiated an eDiscovery search or exported a PST file in an Office 365 environment. It leverages Office 365 management activity logs, specifically filtering for events under ThreatManagement with the name \"eDiscovery search started or exported.\" This activity is significant as it may indicate data exfiltration attempts or unauthorized access to sensitive information. If confirmed malicious, it suggests an attacker or insider threat is att…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "PST export can be done for legitimate purposes but due to the sensitive nature of its content it must be monitored.",
              "refs": "https://attack.mitre.org/techniques/T1114/",
              "mitre": [
                "T1114"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 PST export alert\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 PST export alert\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: PST export can be done for legitimate purposes but due to the sensitive nature of its content it must be monitored.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.104",
              "n": "O365 Safe Links Detection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when any Microsoft Safe Links alerting is triggered. This behavior may indicate when user has interacted with a phishing or otherwise malicious link within the Microsoft Office ecosystem.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Based on Safe Links policies, may vary.",
              "refs": "https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/safe-links-about?view=o365-worldwide, https://attack.mitre.org/techniques/T1566/",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Safe Links Detection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Safe Links Detection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Based on Safe Links policies, may vary.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.105",
              "n": "O365 Security And Compliance Alert Triggered",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies alerts triggered by the Office 365 Security and Compliance Center, indicating potential threats or policy violations. It leverages data from the `o365_management_activity` dataset, focusing on events where the workload is SecurityComplianceCenter and the operation is AlertTriggered. This activity is significant as it highlights security and compliance issues within the O365 environment, which are crucial for maintaining organizational security. If confirmed mali…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "O365 Security and Compliance may also generate false positives or trigger on legitimate behavior, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1078/004/, https://learn.microsoft.com/en-us/purview/alert-policies?view=o365-worldwide, https://learn.microsoft.com/en-us/purview/alert-policies",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Security And Compliance Alert Triggered\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Security And Compliance Alert Triggered\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: O365 Security and Compliance may also generate false positives or trigger on legitimate behavior, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for sensitive data leaving M365 and unusual mailbox behavior, so data-loss and eDiscovery abuse does not go unnoticed between mail and file workloads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.106",
              "n": "O365 Service Principal New Client Credentials",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of new credentials for Service Principals within an Office 365 tenant. It uses O365 audit logs, focusing on events related to credential modifications or additions in the AzureActiveDirectory workload. This activity is significant because Service Principals represent application identities, and their credentials allow applications to authenticate and access resources. If an attacker successfully adds or modifies these credentials, they can impersonate …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Service Principal client credential modifications may be part of legitimate administrative operations. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/001/, https://www.mandiant.com/resources/blog/remediation-and-hardening-strategies-for-microsoft-365-to-defend-against-unc2452, https://microsoft.github.io/Azure-Threat-Research-Matrix/Persistence/AZT501/AZT501-2/, https://github.com/swisskyrepo/PayloadsAllTheThings/blob/master/Methodology%20and%20Resources/Cloud%20-%20Azure%20Pentest.md#add-credentials-to-all-enterprise-applications",
              "mitre": [
                "T1098.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Service Principal New Client Credentials\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Service Principal New Client Credentials\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Service Principal client credential modifications may be part of legitimate administrative operations. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.107",
              "n": "O365 Service Principal Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies when an Azure Service Principal elevates privileges by adding themself to a new app role assignment.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Add app role assignment grant to user.",
              "q": "# Shared SPL: intentional — see UC-10.7.163 (parallel Entra detection via Azure add-on / Event Hub).\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$servicePrincipal$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The Splunk Add-on for Microsoft Office 365 add-on is required to ingest EntraID audit logs via the 365 API. See references for links for further details on how to onboard this log source.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/4055, https://github.com/mvelazc0/BadZure, https://www.splunk.com/en_us/blog/security/hunting-m365-invaders-navigating-the-shadows-of-midnight-blizzard.html, https://posts.specterops.io/microsoft-breach-what-happened-what-should-azure-admins-do-da2b7e674ebc",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Service Principal Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Add app role assignment grant to user.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Service Principal Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
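\n\nOptional data-source spot check\n\nTo verify that the app role assignment events this detection depends on are arriving, run a quick count. A minimal sketch, assuming the `o365_management_activity` macro from the Splunk Add-on for Microsoft Office 365; the Operation wildcard is an assumption based on the documented data source (\"Add app role assignment grant to user.\"), so adjust it and the index/sourcetype to your deployment:\n\n```spl\n`o365_management_activity` Workload=AzureActiveDirectory Operation=\"Add app role assignment*\"\n| stats count by UserId, Operation\n| sort -count\n```",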
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.108",
              "n": "O365 SharePoint Allowed Domains Policy Changed",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when the allowed domain settings for O365 SharePoint have been changed. With Azure AD B2B collaboration, users and administrators can invite external users to collaborate with internal users. External guest account invitations may also need access to OneDrive/SharePoint resources. These changed should be monitored by security teams as they could potentially lead to unauthorized access.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Business approved changes by known administrators.",
              "refs": "https://learn.microsoft.com/en-us/sharepoint/external-sharing-overview",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 SharePoint Allowed Domains Policy Changed\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 SharePoint Allowed Domains Policy Changed\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Business approved changes by known administrators.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
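\n\nOptional data-source spot check\n\nThe exact Operation name for sharing-policy changes varies by tenant and audit schema version, so a broad filter helps surface the candidate events before you tune. A minimal sketch, assuming the `o365_management_activity` macro from the Splunk Add-on for Microsoft Office 365; the Operation wildcard is an assumption, so adjust it and the index/sourcetype to your deployment:\n\n```spl\n`o365_management_activity` Workload=SharePoint Operation=\"*Policy*\"\n| stats count by Operation, UserId\n| sort -count\n```",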
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.109",
              "n": "O365 SharePoint Malware Detection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when a malicious file is detected within the SharePoint Online ecosystem. Attackers may stage and execute malicious files from within the Microsoft Office 365 ecosystem. Any detections from built-in Office 365 capabilities should be monitored and responded to appropriately. Certain premium Office 365 capabilities further enhance these detection and response functions.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/anti-malware-protection-for-spo-odfb-teams-about?view=o365-worldwide",
              "mitre": [
                "T1204.002"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 SharePoint Malware Detection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 SharePoint Malware Detection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
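\n\nOptional data-source spot check\n\nTo confirm SharePoint malware verdicts are reaching Splunk, look for the malware-detection operation. A minimal sketch, assuming the `o365_management_activity` macro from the Splunk Add-on for Microsoft Office 365; the Operation and field names (FileMalwareDetected, SiteUrl, SourceFileName) are assumptions based on common O365 audit schemas, so verify them against your events:\n\n```spl\n`o365_management_activity` Workload=SharePoint Operation=FileMalwareDetected\n| stats count by SiteUrl, SourceFileName\n| sort -count\n```",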
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.110",
              "n": "O365 SharePoint Suspicious Search Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when Office 365 users search for suspicious keywords or have an excessive number of queries to a SharePoint site within a limited timeframe. This behavior may indicate that a malicious actor has gained control of a user account and is conducting discovery or enumeration activities.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "`o365_management_activity` (Workload=SharePoint Operation=\"SearchQueryPerformed\" SearchQueryText=* EventData=*search* AND UserId = \"$user$\") OR (OR Operation=SearchQueryInitiatedSharepoint AND UserId = \"$user$\")",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users searching excessively or possible false positives related to matching conditions.",
              "refs": "https://learn.microsoft.com/en-us/purview/audit-get-started#step-3-enable-searchqueryinitiated-events, https://www.cisa.gov/sites/default/files/2025-01/microsoft-expanded-cloud-logs-implementation-playbook-508c.pdf, https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-320a, https://attack.mitre.org/techniques/T1213/002/",
              "mitre": [
                "T1213.002",
                "T1552"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 SharePoint Suspicious Search Behavior\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1213.002, T1552. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 SharePoint Suspicious Search Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users searching excessively or possible false positives related to matching conditions.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
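\n\nOptional baselining sketch\n\nBefore relying on the detection, it helps to baseline normal SharePoint search volume per user. A minimal sketch, assuming the `o365_management_activity` macro from the Splunk Add-on for Microsoft Office 365 and reusing the Operation names from the detection SPL above; the 50-queries-per-10-minutes threshold is purely illustrative, so tune it to your environment:\n\n```spl\n`o365_management_activity` Workload=SharePoint (Operation=\"SearchQueryPerformed\" OR Operation=\"SearchQueryInitiatedSharepoint\")\n| bin _time span=10m\n| stats count as queries by UserId, _time\n| where queries > 50\n```",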
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.111",
              "n": "O365 Tenant Wide Admin Consent Granted",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where admin consent is granted to an application within an Azure AD and Office 365 tenant. It leverages O365 audit logs, specifically events related to the admin consent action within the AzureActiveDirectory workload. This activity is significant because admin consent allows applications to access data across the entire tenant, potentially exposing vast amounts of organizational data. If confirmed malicious, an attacker could gain extensive and persis…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Consent to application.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may be granted tenant wide consent, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/003/, https://www.mandiant.com/resources/blog/remediation-and-hardening-strategies-for-microsoft-365-to-defend-against-unc2452, https://learn.microsoft.com/en-us/security/operations/incident-response-playbook-app-consent, https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/grant-admin-consent?pivots=portal, https://microsoft.github.io/Azure-Threat-Research-Matrix/Persistence/AZT501/AZT501-2/",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Tenant Wide Admin Consent Granted\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Consent to application.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Tenant Wide Admin Consent Granted\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may be granted tenant wide consent, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
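\n\nOptional data-source spot check\n\nTo verify that consent events are arriving, count them by user. A minimal sketch, assuming the `o365_management_activity` macro from the Splunk Add-on for Microsoft Office 365; the Operation value is taken from the documented data source (the trailing period is part of the O365 operation name), so adjust the index/sourcetype to your deployment:\n\n```spl\n`o365_management_activity` Workload=AzureActiveDirectory Operation=\"Consent to application.\"\n| stats count by UserId\n| sort -count\n```",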
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.4.111: O365 Tenant Wide Admin Consent Granted.",
                  "ea": "Saved search 'UC-10.4.111' running on O365 Consent to application., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.112",
              "n": "O365 Threat Intelligence Suspicious Email Delivered",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when a suspicious email is detected within the Microsoft Office 365 ecosystem through the Advanced Threat Protection engine and delivered to an end user. Attackers may execute several attacks through email, any detections from built-in Office 365 capabilities should be monitored and responded to appropriately. Certain premium Office 365 capabilities such as Safe Attachment and Safe Links further enhance these detection and response functions.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to email entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/anti-malware-protection-for-spo-odfb-teams-about?view=o365-worldwide, https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/office-365-ti?view=o365-worldwide",
              "mitre": [
                "T1566.001",
                "T1566.002"
              ],
              "dtype": "email_subject",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Threat Intelligence Suspicious Email Delivered\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to email entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001, T1566.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Threat Intelligence Suspicious Email Delivered\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
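\n\nOptional data-source spot check\n\nThreat-intelligence verdict events only appear with the appropriate Office 365 licensing, so inventory what your tenant actually emits before enabling the detection. A minimal sketch, assuming the `o365_management_activity` macro from the Splunk Add-on for Microsoft Office 365; the Workload value is an assumption, so adjust it and the index/sourcetype to your deployment:\n\n```spl\n`o365_management_activity` Workload=ThreatIntelligence\n| stats count by Operation\n| sort -count\n```",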
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.113",
              "n": "O365 Threat Intelligence Suspicious File Detected",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when a malicious file is detected within the Microsoft Office 365 ecosystem through the Advanced Threat Protection engine. Attackers may stage and execute malicious files from within the Microsoft Office 365 ecosystem. Any detections from built-in Office 365 capabilities should be monitored and responded to appropriately. Certain premium Office 365 capabilities such as Safe Attachment and Safe Links further enhance these detection and response functions.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/threat-explorer-real-time-detections-about?view=o365-worldwide, https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/anti-malware-protection-for-spo-odfb-teams-about?view=o365-worldwide",
              "mitre": [
                "T1204.002"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 Threat Intelligence Suspicious File Detected\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 Threat Intelligence Suspicious File Detected\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
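\n\nOptional data-source spot check\n\nAs with the suspicious-email use case, confirm which threat-intelligence operations your tenant emits, then filter to the file-verdict events once you have identified their Operation name. A minimal sketch, assuming the `o365_management_activity` macro from the Splunk Add-on for Microsoft Office 365; the Workload value is an assumption:\n\n```spl\n`o365_management_activity` Workload=ThreatIntelligence\n| stats count by Operation, UserId\n| sort -count\n```",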
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.114",
              "n": "O365 User Consent Blocked for Risky Application",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where Office 365 has blocked a user's attempt to grant consent to an application deemed risky or potentially malicious. This detection leverages O365 audit logs, specifically focusing on failed user consent actions due to system-driven blocks. Monitoring these blocked consent attempts is crucial as it highlights potential threats early on, indicating that a user might be targeted or that malicious applications are attempting to infiltrate the organizat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 Consent to application.",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the Splunk Microsoft Office 365 Add-on and ingest Office 365 management activity events.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Microsofts algorithm to identify risky applications is unknown and may flag legitimate applications.",
              "refs": "https://attack.mitre.org/techniques/T1528/, https://www.microsoft.com/en-us/security/blog/2022/09/22/malicious-oauth-applications-used-to-compromise-email-servers-and-spread-spam/, https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/protect-against-consent-phishing, https://learn.microsoft.com/en-us/defender-cloud-apps/investigate-risky-oauth, https://www.alteredsecurity.com/post/introduction-to-365-stealer, https://github.com/AlteredSecurity/365-Stealer",
              "mitre": [
                "T1528"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 User Consent Blocked for Risky Application\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 Consent to application.. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1528. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 User Consent Blocked for Risky Application\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Microsofts algorithm to identify risky applications is unknown and may flag legitimate applications.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.4.114: O365 User Consent Blocked for Risky Application.",
                  "ea": "Saved search 'UC-10.4.114' running on O365 Consent to application., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.115",
              "n": "O365 User Consent Denied for OAuth Application",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a user has denied consent to an OAuth application seeking permissions within the Office 365 environment. This detection leverages O365 audit logs, focusing on events related to user consent actions. By filtering for denied consent actions associated with OAuth applications, it captures instances where users have actively rejected permission requests. This activity is significant as it may indicate users spotting potentially suspicious or unfamili…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "OAuth applications that require mail permissions may be legitimate, investigate and filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1528/, https://www.microsoft.com/en-us/security/blog/2022/09/22/malicious-oauth-applications-used-to-compromise-email-servers-and-spread-spam/, https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/protect-against-consent-phishing, https://learn.microsoft.com/en-us/defender-cloud-apps/investigate-risky-oauth, https://www.alteredsecurity.com/post/introduction-to-365-stealer, https://github.com/AlteredSecurity/365-Stealer",
              "mitre": [
                "T1528"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 User Consent Denied for OAuth Application\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1528. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 User Consent Denied for OAuth Application\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: OAuth applications that require mail permissions may be legitimate, investigate and filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
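              "_example": "Hypothetical raw-event hunting sketch for blocked consent attempts, assuming the Splunk Add-on for Microsoft Office 365 (sourcetype `o365:management:activity`); the index name and status filter are assumptions, and the authoritative detection logic is the ESCU correlation search itself.\n\n```spl\nindex=o365 sourcetype=\"o365:management:activity\" Operation=\"Consent to application.\" ResultStatus=\"Failed\"\n| stats count min(_time) as firstTime max(_time) as lastTime by UserId, ObjectId\n| sort -count\n```",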
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.4.115: O365 User Consent Denied for OAuth Application.",
                  "ea": "Saved search 'UC-10.4.115' running on O365, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.116",
              "n": "O365 ZAP Activity Detection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when the Microsoft Zero-hour Automatic Purge (ZAP) capability takes action against a user's mailbox. This capability is an enhanced protection feature that retro-actively removes email with known malicious content for user inboxes. Since this is a retroactive capability, there is still a window in which the user may fall victim to the malicious content.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Office 365 Universal Audit Log",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to email address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Office 365 Universal Audit Log ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/microsoft-365/security/office-365-security/zero-hour-auto-purge?view=o365-worldwide",
              "mitre": [
                "T1566.001",
                "T1566.002"
              ],
              "dtype": "email_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"O365 ZAP Activity Detection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to email address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Office 365 Universal Audit Log. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001, T1566.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"O365 ZAP Activity Detection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
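              "_example": "Hypothetical lookup-based suppression sketch, illustrating the tuning guidance above: drop risk rows for entities recorded in a known-good lookup before they accrue score. The lookup name and its field names are assumptions.\n\n```spl\n| from datamodel Risk.All_Risk\n| search search_name=\"*O365 ZAP Activity Detection*\"\n| lookup known_good_entities entity AS normalized_risk_object OUTPUT is_known_good\n| where isnull(is_known_good)\n```",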
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mail flow, attachments, and policy outcomes in normalized email data, so threats and data-loss events tied to messages are easier to see across gateways and inboxes in one place.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.117",
              "n": "Detect Exchange Web Shell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of suspicious .aspx files in known drop locations for Exchange exploitation, specifically targeting paths associated with HAFNIUM group and vulnerabilities like ProxyShell and ProxyNotShell. It leverages data from the Endpoint datamodel, focusing on process and filesystem events. This activity is significant as it may indicate a web shell deployment, a common method for persistent access and remote code execution. If confirmed malicious, attackers c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node and `Filesystem` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The query is structured in a way that `action` (read, create) is not defined. Review the results of this query, filter, and tune as necessary. It may be necessary to generate this query specific to your endpoint product.",
              "refs": "https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Sample%20Data/Feeds/MSTICIoCs-ExchangeServerVulnerabilitiesDisclosedMarch2021.csv, https://www.zerodayinitiative.com/blog/2021/8/17/from-pwn2own-2021-a-new-attack-surface-on-microsoft-exchange-proxyshell, https://www.youtube.com/watch?v=FC6iHw258RI, https://www.huntress.com/blog/rapid-response-microsoft-exchange-servers-still-vulnerable-to-proxyshell-exploit#what-should-you-do",
              "mitre": [
                "T1133",
                "T1190",
                "T1505.003"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Exchange Web Shell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1133, T1190, T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Exchange Web Shell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The query is structured in a way that `action` (read, create) is not defined. Review the results of this query, filter, and tune as necessary. It may be necessary to generate this query specific to your endpoint product.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
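              "_example": "Hypothetical hunting sketch over the Endpoint.Filesystem data model for .aspx files in Exchange-related paths; the two path patterns are illustrative examples of commonly cited drop locations, not the full ESCU list.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Filesystem where Filesystem.file_name=\"*.aspx\" AND Filesystem.file_path IN (\"*HttpProxy*owa*auth*\", \"*inetpub*wwwroot*aspnet_client*\") by Filesystem.dest, Filesystem.file_path, Filesystem.file_name\n```",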
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.118",
              "n": "Detect Outlook exe writing a zip file",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Detect Outlook exe writing a zip file. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file path entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is not uncommon for outlook to write legitimate zip files to the disk.",
              "refs": "https://www.paubox.com/news/hackers-exploit-corrupted-zip-and-office-files-to-bypass-email-security, https://docs.datadoghq.com/security/default_rules/def-000-14w/, https://theweborion.com/blog/zip-files/",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "file_path",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Outlook exe writing a zip file\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file path entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Outlook exe writing a zip file\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is not uncommon for outlook to write legitimate zip files to the disk.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
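              "_example": "Hypothetical raw Sysmon hunting sketch (EventID 11, FileCreate) for outlook.exe writing zip files; the index and sourcetype names are assumptions, so adjust to your Windows ingestion.\n\n```spl\nindex=windows sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=11 Image=\"*outlook.exe\" TargetFilename=\"*.zip\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, TargetFilename\n```",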
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.119",
              "n": "Disable Windows SmartScreen Protection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable SmartScreen protection. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to registry paths associated with SmartScreen settings. This activity is significant because SmartScreen provides an early warning system against phishing and malware. Disabling it can indicate malicious intent, often seen in Remote Access Trojans (RATs) to evade detection while downloading additional pa…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin or user may choose to disable this windows features.",
              "refs": "https://tccontre.blogspot.com/2020/01/remcos-rat-evading-windows-defender-av.html",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable Windows SmartScreen Protection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable Windows SmartScreen Protection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin or user may choose to disable this windows features.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
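              "_example": "Hypothetical raw Sysmon hunting sketch (EventID 13, registry value set) for SmartScreen-related registry changes; index and sourcetype names are assumptions, and the TargetObject pattern is a loose match intended for manual review rather than alerting.\n\n```spl\nindex=windows sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=13 TargetObject=\"*SmartScreen*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host, TargetObject, Details\n```",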
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.120",
              "n": "Exchange PowerShell Abuse via SSRF",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious behavior indicative of ProxyShell exploitation against on-premise Microsoft Exchange servers. It identifies HTTP POST requests to `autodiscover.json` containing `PowerShell` in the URI, leveraging server-side request forgery (SSRF) to access backend PowerShell. This detection uses Exchange server logs ingested into Splunk. Monitoring this activity is crucial as it may indicate an attacker attempting to execute commands or scripts on the Exchange server. …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`windows_exchange_iis` c_uri=\"*//autodiscover*\" cs_uri_query=\"*PowerShell*\" cs_method=\"POST\"\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest, cs_uri_query, cs_method,\n           c_uri\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `exchange_powershell_abuse_via_ssrf_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives, however, tune as needed.",
              "refs": "https://github.com/GossiTheDog/ThreatHunting/blob/master/AzureSentinel/Exchange-Powershell-via-SSRF, https://blog.orange.tw/2021/08/proxylogon-a-new-attack-surface-on-ms-exchange-part-1.html, https://peterjson.medium.com/reproducing-the-proxyshell-pwn2own-exploit-49743a4ea9a1",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Exchange PowerShell Abuse via SSRF\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Exchange PowerShell Abuse via SSRF\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives, however, tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
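\n\nOptional Risk Event Review\n\nOnce the detection is enabled, the following sketch summarizes the risk events it contributes, straight from the Risk data model. The `search_name` wildcard is an assumption; match it to the detection name as it appears in your risk index:\n\n```spl\n| from datamodel Risk.All_Risk\n| search search_name=\"*Exchange PowerShell Abuse via SSRF*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by normalized_risk_object, risk_object_type\n```",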
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.121",
              "n": "Exchange PowerShell Module Usage",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the usage of specific Exchange PowerShell modules, such as New-MailboxExportRequest, New-ManagementRoleAssignment, New-MailboxSearch, and Get-Recipient. It leverages PowerShell Script Block Logging (EventCode 4104) to identify these commands. This activity is significant because these modules can be exploited by adversaries who have gained access via ProxyShell or ProxyNotShell vulnerabilities. If confirmed malicious, attackers could export mailbox contents, assign…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerShell commandlet for troubleshooting.",
              "refs": "https://learn.microsoft.com/en-us/powershell/module/exchange/new-mailboxexportrequest?view=exchange-ps, https://learn.microsoft.com/en-us/powershell/module/exchange/new-managementroleassignment?view=exchange-ps, https://blog.orange.tw/2021/08/proxyshell-a-new-attack-surface-on-ms-exchange-part-3.html, https://www.zerodayinitiative.com/blog/2021/8/17/from-pwn2own-2021-a-new-attack-surface-on-microsoft-exchange-proxyshell, https://thedfirreport.com/2021/11/15/exchange-exploit-leads-to-domain-wide-ransomware/, https://www.cisa.gov/uscert/ncas/alerts/aa22-264a, https://learn.microsoft.com/en-us/powershell/module/exchange/new-mailboxsearch?view=exchange-ps, https://learn.microsoft.com/en-us/powershell/module/exchange/get-recipient?view=exchange-ps, https://thedfirreport.com/2022/03/21/apt35-automates-initial-access-using-proxyshell/",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Exchange PowerShell Module Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Exchange PowerShell Module Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerShell commandlet for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
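\n\nOptional Raw-Data Sketch\n\nBefore enabling the detection, it can help to eyeball the underlying telemetry. The following is a minimal sketch over PowerShell Script Block Logging (EventCode 4104) for the Exchange cmdlets this analytic covers; the index, sourcetype, and field names (`ScriptBlockText`, `Computer`) are assumptions, so point them at wherever your 4104 events land:\n\n```spl\nindex=win sourcetype=\"XmlWinEventLog\" EventCode=4104\n    (ScriptBlockText=\"*New-MailboxExportRequest*\" OR ScriptBlockText=\"*New-ManagementRoleAssignment*\" OR ScriptBlockText=\"*New-MailboxSearch*\" OR ScriptBlockText=\"*Get-Recipient*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer\n```",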
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.122",
              "n": "Mailsniper Invoke functions",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of known MailSniper PowerShell functions on a machine. It leverages PowerShell logs (EventCode 4104) to identify specific script block text associated with MailSniper activities. This behavior is significant as MailSniper is often used by attackers to harvest sensitive emails from compromised Exchange servers. If confirmed malicious, this activity could lead to unauthorized access to sensitive email data, credential theft, and further compromise of th…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.blackhillsinfosec.com/introducing-mailsniper-a-tool-for-searching-every-users-email-for-sensitive-data/",
              "mitre": [
                "T1114.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Mailsniper Invoke functions\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Mailsniper Invoke functions\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
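\n\nOptional Raw-Data Sketch\n\nTo confirm the underlying telemetry exists, a minimal raw search over PowerShell Script Block Logging (EventCode 4104) can look for an illustrative subset of MailSniper function names (see the Black Hills write-up in the references for the full list). The index, sourcetype, and field names are assumptions; adjust them to your environment:\n\n```spl\nindex=win sourcetype=\"XmlWinEventLog\" EventCode=4104\n    (ScriptBlockText=\"*Invoke-GlobalMailSearch*\" OR ScriptBlockText=\"*Invoke-SelfSearch*\" OR ScriptBlockText=\"*Invoke-PasswordSprayOWA*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer\n```",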
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.123",
              "n": "Microsoft Defender Incident Alerts",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic is to leverage alerts from Microsoft Defender O365 Incidents. This query aggregates and summarizes all alerts from Microsoft Defender O365 Incidents, providing details such as the destination, file name, severity, process command line, ip address, registry key, signature, description, unique id, and timestamps. This detection is not intended to detect new activity from raw data, but leverages Microsoft provided alerts to be correlated with other data as part of risk based …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "MS365 Defender Incident Alerts",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\",\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires MS365 Defender Incident Alerts ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may vary based on Microsoft Defender configuration; monitor and filter out the alerts that are not relevant to your environment.",
              "refs": "https://learn.microsoft.com/en-us/defender-xdr/api-list-incidents?view=o365-worldwide, https://learn.microsoft.com/en-us/graph/api/resources/security-alert?view=graph-rest-1.0, https://splunkbase.splunk.com/app/6207, https://jasonconger.com/splunk-azure-gdi/",
              "mitre": [],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Microsoft Defender Incident Alerts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: MS365 Defender Incident Alerts. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Microsoft Defender Incident Alerts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may vary based on Microsoft Defender configuration; monitor and filter out the alerts that are not relevant to your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
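\n\nOptional Risk Event Review\n\nBecause this detection re-raises vendor alerts rather than detecting new activity from raw data, reviewing which Defender alerts actually contribute risk is the main tuning lever. A minimal sketch (the `search_name` wildcard is an assumption; match it to the detection name in your risk index):\n\n```spl\n| from datamodel Risk.All_Risk\n| search search_name=\"*Microsoft Defender Incident Alerts*\"\n| stats count values(risk_message) as risk_messages by normalized_risk_object\n| sort -count\n```",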
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.124",
              "n": "MS Exchange Mailbox Replication service writing Active Server Pages",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of suspicious .aspx files in specific directories associated with Exchange exploitation by the HAFNIUM group and the ProxyShell vulnerability. It detects this activity by monitoring the MSExchangeMailboxReplication.exe process, which typically does not write .aspx files. This behavior is significant as it may indicate an active exploitation attempt on Exchange servers. If confirmed malicious, attackers could gain unauthorized access, execute arbitra…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 11",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Endpoint.Processes where Processes.process_name=MSExchangeMailboxReplication.exe  by _time span=1h Processes.process_id Processes.process_name Processes.process_guid Processes.dest | `drop_dm_object_name(Processes)` | join max=1 process_guid, _time [| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Filesystem where Filesystem.file_path IN (\"*\\\\HttpProxy\\\\owa\\\\auth\\\\*\", \"*\\\\inetpub\\\\wwwroot\\\\aspnet_client\\\\*\", \"*\\\\HttpProxy\\\\OAB\\\\*\") Filesystem.file_name=\"*.aspx\" by _time span=1h Filesystem.dest Filesystem.file_create_time Filesystem.file_name Filesystem.file_path | `drop_dm_object_name(Filesystem)` | fields _time dest file_create_time file_name file_path process_name process_path process process_guid] | dedup file_create_time | table dest file_create_time, file_name, file_path, process_name | `ms_exchange_mailbox_replication_service_writing_active_server_pages_filter`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node and `Filesystem` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The query is structured in a way that `action` (read, create) is not defined. Review the results of this query, filter, and tune as necessary. It may be necessary to generate this query specific to your endpoint product.",
              "refs": "https://redcanary.com/blog/blackbyte-ransomware/",
              "mitre": [
                "T1133",
                "T1190",
                "T1505.003"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MS Exchange Mailbox Replication service writing Active Server Pages\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1133, T1190, T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MS Exchange Mailbox Replication service writing Active Server Pages\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The query is structured in a way that `action` (read, create) is not defined. Review the results of this query, filter, and tune as necessary. It may be necessary to generate this query specific to your endpoint product.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
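\n\nOptional Raw-Data Sketch\n\nTo spot-check the underlying telemetry without the data model join, a minimal raw search over Sysmon EventID 11 can look for the same behavior. The index, sourcetype, and field names (`Image`, `TargetFilename`, `Computer`) are assumptions; adjust them to your Sysmon ingestion:\n\n```spl\nindex=win sourcetype=\"XmlWinEventLog\" EventCode=11 Image=\"*\\\\MSExchangeMailboxReplication.exe\" TargetFilename=\"*.aspx\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, Image, TargetFilename\n```",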
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.125",
              "n": "Windows Impair Defense Overide Win Defender Phishing Filter",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable the Windows Defender phishing filter. It leverages data from the Endpoint.Registry data model, focusing on changes to specific registry values related to Microsoft Edge's phishing filter settings. This activity is significant because disabling the phishing filter can allow attackers to deceive users into visiting malicious websites without triggering browser warnings. If confirmed malicious, this could lead to user…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Overide Win Defender Phishing Filter\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Overide Win Defender Phishing Filter\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
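\n\nOptional Raw-Data Sketch\n\nTo preview the registry telemetry this detection depends on, a minimal `tstats` sketch over the Endpoint.Registry data model can surface phishing filter value changes. The registry path pattern is an assumption based on the Microsoft Edge phishing filter policy keys; confirm it against the detection’s underlying search in Content Management:\n\n```spl\n| tstats count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry\n    where Registry.registry_path=\"*\\\\PhishingFilter*\"\n    by Registry.dest Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n```",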
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.126",
              "n": "Windows InProcServer32 New Outlook Form",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation or modification of registry keys associated with new Outlook form installations, potentially indicating exploitation of CVE-2024-21378. It leverages data from the Endpoint.Registry datamodel, focusing on registry paths involving InProcServer32 keys linked to Outlook forms. This activity is significant as it may signify an attempt to achieve authenticated remote code execution via malicious form objects. If confirmed malicious, this could allow an attac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible if the organization adds new forms to Outlook via an automated method. Filter by name or path to reduce false positives.",
              "refs": "https://www.netspi.com/blog/technical/red-team-operations/microsoft-outlook-remote-code-execution-cve-2024-21378/",
              "mitre": [
                "T1566",
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows InProcServer32 New Outlook Form\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566, T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows InProcServer32 New Outlook Form\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible if the organization adds new forms to Outlook via an automated method. Filter by name or path to reduce false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
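\n\nOptional Raw-Data Sketch\n\nTo gauge baseline volume before enabling the detection, a minimal `tstats` sketch over the Endpoint.Registry data model can count InProcServer32 key changes per host. This broad pattern will be noisy; the real detection narrows it to Outlook form-related paths, so treat this only as a data-presence check:\n\n```spl\n| tstats count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry\n    where Registry.registry_path=\"*\\\\InProcServer32\\\\*\"\n    by Registry.dest Registry.registry_path\n| sort -count\n```",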
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.127",
              "n": "Windows Mail Protocol In Non-Common Process Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a Windows application establishing an SMTP connection from a non-common installation path. It leverages Sysmon EventCode 3 to identify processes not typically associated with email clients (e.g., Thunderbird, Outlook) making SMTP connections. This activity is significant as adversaries, including malware like AgentTesla, use such connections for Command and Control (C2) communication to exfiltrate stolen data. If confirmed malicious, this behavior could lead to una…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 3 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Third party email or SMTP based applications will trigger this. Apply additional filters as needed. Also consider excluding known email or any SMTP based clients installed outside of the Program Files and Windows directories.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.agent_tesla",
              "mitre": [
                "T1071.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Mail Protocol In Non-Common Process Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Mail Protocol In Non-Common Process Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Third party email or SMTP based applications will trigger this. Apply additional filters as needed. Also consider excluding known email or any SMTP based clients installed outside of the Program Files and Windows directories.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
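              "qs": "```Hedged CIM sketch, not the packaged ESCU logic: assumes Sysmon EventID 3 is mapped to the Network_Traffic data model with the process image in All_Traffic.app, that the model is accelerated, and that the ESCU macros exist; ports and path filters are illustrative. Inline comments like this one need Splunk 9.x.``` | tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Network_Traffic where All_Traffic.dest_port IN (25, 465, 587) by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port, All_Traffic.app | `drop_dm_object_name(\"All_Traffic\")` | search NOT app IN (\"*\\\\Program Files*\", \"*\\\\Windows\\\\*\") | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",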
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.128",
              "n": "Windows MSExchange Management Mailbox Cmdlet Usage",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious Cmdlet usage in Exchange Management logs, focusing on commands like New-MailboxExportRequest and New-ManagementRoleAssignment. It leverages EventCode 1 and specific Message patterns to detect potential ProxyShell and ProxyNotShell abuse. This activity is significant as it may indicate unauthorized access or manipulation of mailboxes and roles, which are critical for maintaining email security. If confirmed malicious, attackers could export mailbox dat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present when an Administrator utilizes the cmdlets in the query. Filter or monitor as needed.",
              "refs": "https://gist.github.com/MHaggis/f66f1d608ea046efb9157020cd34c178",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MSExchange Management Mailbox Cmdlet Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MSExchange Management Mailbox Cmdlet Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present when an Administrator utilizes the cmdlets in the query. Filter or monitor as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.129",
              "n": "Windows Office Product Dropped Uncommon File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects Microsoft Office applications dropping or creating executables or scripts on a Windows OS. It leverages process creation and file system events from the Endpoint data model to identify Office applications like Word or Excel generating files with extensions such as \".exe\", \".dll\", or \".ps1\". This behavior is significant as it is often associated with spear-phishing attacks where malicious files are dropped to compromise the host. If confirmed malicious, this activit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "office macro for automation may do this behavior",
              "refs": "https://www.mandiant.com/resources/fin7-pursuing-an-enigmatic-and-evasive-global-criminal-operation, https://attack.mitre.org/groups/G0046/, https://www.joesandbox.com/analysis/702680/0/html, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/trojanized-onenote-document-leads-to-formbook-malware/",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Office Product Dropped Uncommon File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Office Product Dropped Uncommon File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: office macro for automation may do this behavior\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.130",
              "n": "Windows Outlook Dialogs Disabled from Unusual Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the Windows Registry key \"PONT_STRING\" under Outlook Options. This disables certain dialog popups, which could allow malicious scripts to run without notice. This detection leverages data from the Endpoint.Registry datamodel to search for this key changing from an unusual process. This activity is significant as it is commonly associated with some malware infections, indicating potential malicious intent to harvest email information.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual for processes other than Outlook to modify this feature on a Windows system since it is a default Outlook functionality. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://lab52.io/blog/analyzing-notdoor-inside-apt28s-expanding-arsenal/, https://hackread.com/russian-apt28-notdoor-backdoor-microsoft-outlook/",
              "mitre": [
                "T1112",
                "T1562"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Outlook Dialogs Disabled from Unusual Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112, T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Outlook Dialogs Disabled from Unusual Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual for processes other than Outlook to modify this feature on a Windows system since it is a default Outlook functionality. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.131",
              "n": "Windows Outlook LoadMacroProviderOnBoot Persistence",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the Windows Registry key \"LoadMacroProviderOnBoot\" under Outlook. This enables automatic loading of macros, which could allow malicious scripts to run without notice. This detection leverages data from the Endpoint.Registry datamodel to search for this key being enabled. This activity is significant as it is commonly associated with some malware infections, indicating potential malicious intent to harvest email information.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to modify this feature on a Windows system. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://lab52.io/blog/analyzing-notdoor-inside-apt28s-expanding-arsenal/, https://hackread.com/russian-apt28-notdoor-backdoor-microsoft-outlook/",
              "mitre": [
                "T1112",
                "T1137"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Outlook LoadMacroProviderOnBoot Persistence\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112, T1137. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Outlook LoadMacroProviderOnBoot Persistence\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to modify this feature on a Windows system. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.132",
              "n": "Windows Outlook Macro Created by Suspicious Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of an Outlook Macro (VbaProject.OTM) by a suspicious process. This file is normally created when you create a macro from within Outlook. If this file is created by a process other than Outlook.exe it may be maliciously created. This detection leverages data from the Filesystem datamodel, specifically looking for the file creation event for VbaProject.OTM. This activity is significant as it is commonly associated with some malware infections, indicating…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must be ingesting data that records file-system activity from your hosts to populate the Endpoint file-system data-model node. If you are using Sysmon, you will need a Splunk Universal Forwarder on each endpoint from which you want to collect data.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Because this file are always created by Outlook in normal operations, you should investigate all results.",
              "refs": "https://lab52.io/blog/analyzing-notdoor-inside-apt28s-expanding-arsenal/, https://hackread.com/russian-apt28-notdoor-backdoor-microsoft-outlook/",
              "mitre": [
                "T1137",
                "T1059.005"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Outlook Macro Created by Suspicious Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1137, T1059.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Outlook Macro Created by Suspicious Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Because this file are always created by Outlook in normal operations, you should investigate all results.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.133",
              "n": "Windows Outlook Macro Security Modified",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of the Windows Registry key \"Level\" under Outlook Security. This allows macros to execute without warning, which could allow malicious scripts to run without notice. This detection leverages data from the Endpoint.Registry datamodel, specifically looking for the registry value name \"Level\" with a value of \"0x00000001\". This activity is significant as it is commonly associated with some malware infections, indicating potential malicious intent to ha…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to modify this feature on a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://lab52.io/blog/analyzing-notdoor-inside-apt28s-expanding-arsenal/, https://hackread.com/russian-apt28-notdoor-backdoor-microsoft-outlook/",
              "mitre": [
                "T1137",
                "T1008"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Outlook Macro Security Modified\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1137, T1008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Outlook Macro Security Modified\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to modify this feature on a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.134",
              "n": "Windows Outlook WebView Registry Modification",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies modifications to specific Outlook registry values related to WebView and Today features. It detects when a URL is set in these registry locations, which could indicate attempts to manipulate Outlook's web-based components. The analytic focuses on changes to the \"URL\" value within Outlook's WebView and Today registry paths. This activity is significant as it may represent an attacker's effort to redirect Outlook's web content or inject malicious URLs. If successf…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur if legitimate Outlook processes are modified.",
              "refs": "https://gist.github.com/MHaggis/c6318acde2e2f691b550e3a491f49ff1, https://github.com/trustedsec/specula/wiki",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Outlook WebView Registry Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Outlook WebView Registry Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur if legitimate Outlook processes are modified.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
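              "qs": "``` Illustrative sketch only, not the packaged ESCU search (the registry path pattern is an assumption): surfaces writes to Outlook WebView profile keys via the CIM Registry node ``` | tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry where Registry.registry_path=\"*\\\\Software\\\\Microsoft\\\\Office\\\\*\\\\Outlook\\\\WebView\\\\*\" by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data | `drop_dm_object_name(Registry)` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",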
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.135",
              "n": "Windows Phishing Outlook Drop Dll In FORM Dir",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a DLL file by an outlook.exe process in the AppData\\Local\\Microsoft\\FORMS directory. This detection leverages data from the Endpoint.Processes and Endpoint.Filesystem datamodels, focusing on process and file creation events. This activity is significant as it may indicate an attempt to exploit CVE-2024-21378, where a custom MAPI form loads a potentially malicious DLL. If confirmed malicious, this could allow an attacker to execute arbitrary code, le…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.netspi.com/blog/technical/red-team-operations/microsoft-outlook-remote-code-execution-cve-2024-21378/",
              "mitre": [
                "T1566"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Phishing Outlook Drop Dll In FORM Dir\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Phishing Outlook Drop Dll In FORM Dir\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.136",
              "n": "Windows Phishing PDF File Executes URL Link",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious PDF viewer processes spawning browser application child processes. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and parent process names. This activity is significant as it may indicate a PDF spear-phishing attempt where a malicious URL link is executed, leading to potential payload download. If confirmed malicious, this could allow attackers to execute code, escalate privileges, or persist in the environment b…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives in PDF file opened PDF Viewer having legitimate URL link, however filter as needed.",
              "refs": "https://twitter.com/pr0xylife/status/1615382907446767616?s=20",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Phishing PDF File Executes URL Link\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Phishing PDF File Executes URL Link\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives in PDF file opened PDF Viewer having legitimate URL link, however filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.137",
              "n": "Windows Phishing Recent ISO Exec Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of registry artifacts when an ISO container is opened, clicked, or mounted on a Windows operating system. It leverages data from the Endpoint.Registry data model, specifically monitoring registry keys related to recent ISO or IMG file executions. This activity is significant as adversaries increasingly use container-based phishing campaigns to bypass macro-based document execution controls. If confirmed malicious, this behavior could indicate an initia…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Registry where Registry.registry_path= \"*\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Explorer\\\\RecentDocs\\\\.iso*\" OR Registry.registry_path= \"*\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Explorer\\\\RecentDocs\\\\.img*\" by Registry.action Registry.dest Registry.process_guid Registry.process_id Registry.registry_hive Registry.registry_path Registry.registry_key_name Registry.registry_value_data Registry.registry_value_name Registry.registry_value_type Registry.status Registry.user Registry.vendor_product | `drop_dm_object_name(Registry)` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_phishing_recent_iso_exec_registry_filter`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be high depending on the environment and consistent use of ISOs. Restrict to servers, or filter out based on commonly used ISO names. Filter as needed.",
              "refs": "https://www.microsoft.com/security/blog/2021/05/27/new-sophisticated-email-based-attack-from-nobelium/, https://unit42.paloaltonetworks.com/brute-ratel-c4-tool/, https://isc.sans.edu/diary/Recent+AZORult+activity/25120, https://tccontre.blogspot.com/2020/01/remcos-rat-evading-windows-defender-av.html",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Phishing Recent ISO Exec Registry\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Phishing Recent ISO Exec Registry\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be high depending on the environment and consistent use of ISOs. Restrict to servers, or filter out based on commonly used ISO names. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.138",
              "n": "Windows RDP File Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a Windows RDP client attempts to execute an RDP file from a temporary directory, downloads directory, or Outlook directories. This detection is significant as it can indicate an attempt for an adversary to deliver a .rdp file, which may be leveraged by attackers to control or exfiltrate data. If confirmed malicious, this activity could lead to unauthorized access, data theft, or further lateral movement within the network.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on administrators using RDP files for legitimate purposes. Filter as needed.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2024/10/29/midnight-blizzard-conducts-large-scale-spear-phishing-campaign-using-rdp-files/",
              "mitre": [
                "T1598.002",
                "T1021.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows RDP File Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1598.002, T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows RDP File Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on administrators using RDP files for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.139",
              "n": "Windows RDPClient Connection Sequence Events",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic monitors Windows RDP client connection sequence events (EventCode 1024) from the Microsoft-Windows-TerminalServices-RDPClient/Operational log. These events track when RDP ClientActiveX initiates connection attempts to remote servers. The connection sequence is a critical phase of RDP where the client and server exchange settings and establish common parameters for the session. Monitoring these events can help identify unusual RDP connection patterns, potential lateral movement atte…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Microsoft Windows TerminalServices RDPClient 1024",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Microsoft Windows TerminalServices RDPClient 1024 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate RDP connections from authorized administrators and users will generate these events. To reduce false positives, you should baseline normal RDP connection patterns in your environment, whitelist expected RDP connection chains between known administrative workstations and servers, and track authorized remote support sessions.",
              "refs": "https://gist.github.com/MHaggis/acd5dcbf1d4fb705b77f0a48e772eefc, https://www.microsoft.com/en-us/security/blog/2024/10/29/midnight-blizzard-conducts-large-scale-spear-phishing-campaign-using-rdp-files/",
              "mitre": [
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows RDPClient Connection Sequence Events\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Microsoft Windows TerminalServices RDPClient 1024. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows RDPClient Connection Sequence Events\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate RDP connections from authorized administrators and users will generate these events. To reduce false positives, you should baseline normal RDP connection patterns in your environment, whitelist expected RDP connection chains between known administrative workstations and servers, and track authorized remote support sessions.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.140",
              "n": "Windows Spearphishing Attachment Onenote Spawn Mshta",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects OneNote spawning `mshta.exe`, a behavior often associated with spearphishing attacks. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where OneNote is the parent process. This activity is significant as it is commonly used by malware families like TA551, AsyncRat, Redline, and DCRAT to execute malicious scripts. If confirmed malicious, this could allow attackers to execute arbitrary code, potentially leading …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives known. Filter as needed.",
              "refs": "https://www.bleepingcomputer.com/news/security/hackers-now-use-microsoft-onenote-attachments-to-spread-malware/, https://malpedia.caad.fkie.fraunhofer.de/details/win.asyncrat",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Spearphishing Attachment Onenote Spawn Mshta\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Spearphishing Attachment Onenote Spawn Mshta\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives known. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.141",
              "n": "Windows Unsecured Outlook Credentials Access In Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects unauthorized access to Outlook credentials stored in the Windows registry. It leverages Windows Security Event logs, specifically EventCode 4663, to identify access attempts to registry paths associated with Outlook profiles. This activity is significant as it may indicate attempts to steal sensitive email credentials, which could lead to unauthorized access to email accounts. If confirmed malicious, this could allow attackers to exfiltrate sensitive information, i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest Windows Security Event logs and track event code 4663. For 4663, enable \"Audit Object Access\" in Group Policy. Then check the two boxes listed for both \"Success\" and \"Failure.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "third party software may access this outlook registry.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/choice, https://malpedia.caad.fkie.fraunhofer.de/details/win.404keylogger",
              "mitre": [
                "T1552"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Unsecured Outlook Credentials Access In Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Unsecured Outlook Credentials Access In Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: third party software may access this outlook registry.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```\n\nAlso confirm Windows Security object-access auditing events (EventCode 4663, the data source documented above) are being collected. This is a hedged check; adjust the index and sourcetype to match your Windows TA configuration:\n\n
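```spl\nindex=* sourcetype=XmlWinEventLog EventCode=4663\n| stats count by host\n| sort -count\n```",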
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.142",
              "n": "Detect Large ICMP Traffic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies ICMP traffic to external IP addresses with total bytes (sum of bytes in and bytes out) greater than 1,000 bytes. It leverages the Network_Traffic data model to detect large ICMP packet that aren't blocked and are directed toward external networks. We use  All_Traffic.bytes in the detection to capture variations in inbound versus outbound traffic sizes, as significant discrepancies or unusually large ICMP exchanges can indicate information smuggling, covert commu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Palo Alto Network Traffic",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Palo Alto Network Traffic ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "ICMP packets are used in a variety of ways to help troubleshoot networking issues and ensure the proper flow of traffic. As such, it is possible that a large ICMP packet could be perfectly legitimate. If large ICMP packets are associated with Command And Control traffic, there will typically be a large number of these packets observed over time. If the search is providing a large number of false positives, you can modify the macro `detect_large_icmp_traffic_filter` to adjust the byte threshold or add specific IP addresses to an allow list.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1095"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Large ICMP Traffic\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Palo Alto Network Traffic. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1095. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Large ICMP Traffic\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: ICMP packets are used in a variety of ways to help troubleshoot networking issues and ensure the proper flow of traffic. As such, it is possible that a large ICMP packet could be perfectly legitimate. If large ICMP packets are associated with Command And Control traffic, there will typically be a large number of these packets observed over time. If the search is providing a large number of false positives, you can modify the macro `detect_large_icmp_traffic_filter` to adjust the byte threshold or add specific IP addresses to an allow list.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.143",
              "n": "Hosts receiving high volume of network traffic from email server",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Hosts receiving high volume of network traffic from email server. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` sum(All_Traffic.bytes_in) as bytes_in from datamodel=Network_Traffic where All_Traffic.dest_category=email_server by All_Traffic.src _time span=1d | `drop_dm_object_name(\"All_Traffic\")` | eventstats avg(bytes_in) as avg_bytes_in stdev(bytes_in) as stdev_bytes_in | eventstats count as num_data_samples avg(eval(if(_time < relative_time(now(), \"@d\"), bytes_in, null))) as per_source_avg_bytes_in stdev(eval(if(_time < relative_time(now(), \"@d\"), bytes_in, null))) as per_source_stdev_bytes_in by src | eval minimum_data_samples = 4, deviation_threshold = 3\n    | where num_data_samples >= minimum_data_samples AND bytes_in > (avg_bytes_in + (deviation_threshold * stdev_bytes_in)) AND bytes_in > (per_source_avg_bytes_in + (deviation_threshold * per_source_stdev_bytes_in)) AND _time >= relative_time(now(), \"@d\")\n    | eval num_standard_deviations_away_from_server_average = round(abs(bytes_in - avg_bytes_in) / stdev_bytes_in, 2), num_standard_deviations_away_from_client_average = round(abs(bytes_in - per_source_avg_bytes_in) / per_source_stdev_bytes_in, 2)\n    | table src, _time, bytes_in, avg_bytes_in, per_source_avg_bytes_in, num_standard_deviations_away_from_server_average, num_standard_deviations_away_from_client_average\n    | `hosts_receiving_high_volume_of_network_traffic_from_email_server_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The false-positive rate will vary based on how you set the deviation_threshold and data_samples values. Our recommendation is to adjust these values based on your network traffic to and from your email servers.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1114.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Hosts receiving high volume of network traffic from email server\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1114.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Hosts receiving high volume of network traffic from email server\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The false-positive rate will vary based on how you set the deviation_threshold and data_samples values. Our recommendation is to adjust these values based on your network traffic to and from your email servers.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```\n\nAlso confirm your email servers are categorized: the packaged search filters on `All_Traffic.dest_category=email_server` (typically set via the ES Asset & Identity framework) and returns nothing if no destination carries that category. A hedged check:\n\n
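```spl\n| tstats count from datamodel=Network_Traffic where All_Traffic.dest_category=email_server by All_Traffic.dest\n```",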
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.144",
              "n": "SSL Certificates with Punycode",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects SSL certificates with Punycode domains in the SSL issuer email domain, identified by the prefix \"xn--\". It leverages the Certificates Datamodel to flag these domains and uses CyberChef for decoding. This activity is significant as Punycode can be used for domain spoofing and phishing attacks. If confirmed malicious, attackers could deceive users and systems, potentially leading to unauthorized access and data breaches.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Certificates.All_Certificates\n      BY All_Certificates.SSL.ssl_issuer_email_domain All_Certificates.SSL.ssl_issuer All_Certificates.SSL.ssl_subject_email\n         All_Certificates.SSL.dest All_Certificates.SSL.src All_Certificates.SSL.sourcetype\n         All_Certificates.SSL.ssl_subject_email_domain\n    | `drop_dm_object_name(\"All_Certificates.SSL\")`\n    | eval punycode=if(like(ssl_issuer_email_domain,\"%xn--%\"),1,0)\n    | where punycode=1\n    | cyberchef infield=\"ssl_issuer_email_domain\" outfield=\"convertedPuny\" jsonrecipe=\"[{\"op\":\"From Punycode\",\"args\":[true]}]\"\n    | table ssl_issuer_email_domain convertedPuny ssl_issuer ssl_subject_email dest src sourcetype ssl_subject_email_domain\n    | `ssl_certificates_with_punycode_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if the organization works with international businesses. Filter as needed.",
              "refs": "https://www.splunk.com/en_us/blog/security/nothing-puny-about-cve-2022-3602.html, https://www.openssl.org/blog/blog/2022/11/01/email-address-overflows/, https://community.emergingthreats.net/t/out-of-band-ruleset-update-summary-2022-11-01/117, https://github.com/corelight/CVE-2022-3602/tree/master/scripts",
              "mitre": [
                "T1573"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SSL Certificates with Punycode\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1573. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SSL Certificates with Punycode\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if the organization works with international businesses. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.145",
              "n": "Windows Spearphishing Attachment Connect To None MS Office Domain",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious Office documents that connect to non-Microsoft Office domains. It leverages Sysmon EventCode 22 to detect processes like winword.exe or excel.exe making DNS queries to domains outside of *.office.com or *.office.net. This activity is significant as it may indicate a spearphishing attempt using malicious documents to download or connect to harmful content. If confirmed malicious, this could lead to unauthorized data access, malware infection, or furthe…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "`sysmon` EventCode=22 Image IN (\"*\\\\winword.exe\",\"*\\\\excel.exe\",\"*\\\\powerpnt.exe\",\"*\\\\mspub.exe\",\"*\\\\visio.exe\",\"*\\\\wordpad.exe\",\"*\\\\wordview.exe\",\"*\\\\onenote.exe\", \"*\\\\onenotem.exe\",\"*\\\\onenoteviewer.exe\",\"*\\\\onenoteim.exe\", \"*\\\\msaccess.exe\") AND NOT(QueryName IN (\"*.office.com\", \"*.office.net\")) | stats count min(_time) as firstTime max(_time) as lastTime by answer answer_count dvc process_exec process_guid process_name query query_count reply_code_id signature signature_id src user_id vendor_product QueryName QueryResults QueryStatus | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_spearphishing_attachment_connect_to_none_ms_office_domain_filter`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Windows Office document may contain legitimate url link other than MS office Domain. filter is needed",
              "refs": "https://www.netskope.com/blog/asyncrat-using-fully-undetected-downloader, https://malpedia.caad.fkie.fraunhofer.de/details/win.asyncrat",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Spearphishing Attachment Connect To None MS Office Domain\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Spearphishing Attachment Connect To None MS Office Domain\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Windows Office document may contain legitimate url link other than MS office Domain. filter is needed\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.146",
              "n": "Zeek x509 Certificate with Punycode",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the presence of punycode within x509 certificates using Zeek x509 logs. It identifies punycode in the subject alternative name email and other fields by searching for the \"xn--\" prefix. This activity is significant as punycode can be used in phishing attacks or to bypass domain filters, posing a security risk. If confirmed malicious, attackers could use these certificates to impersonate legitimate domains, potentially leading to unauthorized access or data breaches…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`zeek_x509`\n      | rex field=san.email{} \"\\@(?<domain_detected>xn--.*)\"\n      | rex field=san.other_fields{} \"\\@(?<domain_detected>xn--.*)\"\n      | stats values(domain_detected)\n        BY basic_constraints.ca source host\n      | `zeek_x509_certificate_with_punycode_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if the organization works with international businesses. Filter as needed.",
              "refs": "https://community.emergingthreats.net/t/out-of-band-ruleset-update-summary-2022-11-01/117, https://github.com/corelight/CVE-2022-3602/tree/master/scripts, https://docs.zeek.org/en/master/logs/x509.html, https://www.splunk.com/en_us/blog/security/nothing-puny-about-cve-2022-3602.html, https://www.openssl.org/blog/blog/2022/11/01/email-address-overflows/, https://docs.zeek.org/en/master/scripts/base/init-bare.zeek.html#type-X509::SubjectAlternativeName",
              "mitre": [
                "T1573"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zeek x509 Certificate with Punycode\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1573. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zeek x509 Certificate with Punycode\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if the organization works with international businesses. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.147",
              "n": "Monitor Web Traffic For Brand Abuse",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies web requests to domains that closely resemble your monitored brand's domain, indicating potential brand abuse. It leverages data from web traffic sources, such as web proxies or network traffic analysis tools, and cross-references these with known domain permutations generated by the \"ESCU - DNSTwist Domain Names\" search. This activity is significant as it can indicate phishing attempts or other malicious activities targeting your brand. If confirmed malicious, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly`\n      values(Web.url) as urls\n      min(_time) as firstTime\n      from datamodel=Web\n      by Web.src\n    | `drop_dm_object_name(\"Web\")`\n    | `security_content_ctime(firstTime)`\n    | lookup update=true brandMonitoring_lookup domain as urls OUTPUT domain_abuse\n    | search domain_abuse=true\n    | `monitor_web_traffic_for_brand_abuse_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "system",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Monitor Web Traffic For Brand Abuse\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Monitor Web Traffic For Brand Abuse\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```\n\nAlso confirm the brand-monitoring lookup is populated: the packaged search matches URLs against `brandMonitoring_lookup`, which is filled by the companion \"ESCU - DNSTwist Domain Names\" search, and it returns nothing while that lookup is empty. A hedged check:\n\n
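```spl\n| inputlookup brandMonitoring_lookup\n| stats count\n```",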
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for senders and domains that mimic our brand, so we catch impersonation and phishing that plain blocklists often miss.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.148",
              "n": "ProxyShell ProxyNotShell Behavior Detected",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential exploitation of Windows Exchange servers via ProxyShell or ProxyNotShell vulnerabilities, followed by post-exploitation activities such as running nltest, Cobalt Strike, Mimikatz, and adding new users. It leverages data from multiple analytic stories, requiring at least five distinct sources to trigger, thus reducing noise. This activity is significant as it indicates a high likelihood of an active compromise, potentially leading to unauthorized access…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be limited, however tune or modify the query as needed.",
              "refs": "https://www.gteltsc.vn/blog/warning-new-attack-campaign-utilized-a-new-0day-rce-vulnerability-on-microsoft-exchange-server-12715.html, https://msrc-blog.microsoft.com/2022/09/29/customer-guidance-for-reported-zero-day-vulnerabilities-in-microsoft-exchange-server/",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ProxyShell ProxyNotShell Behavior Detected\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ProxyShell ProxyNotShell Behavior Detected\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be limited, however tune or modify the query as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. 
Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.149",
              "n": "Windows Exchange Autodiscover SSRF Abuse",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic identifies potential exploitation attempts of ProxyShell (CVE-2021-34473, CVE-2021-34523, CVE-2021-31207) and ProxyNotShell (CVE-2022-41040, CVE-2022-41082) vulnerabilities in Microsoft Exchange Server. The detection focuses on identifying the SSRF attack patterns used in these exploit chains. The analytic monitors for suspicious POST requests to /autodiscover/autodiscover.json endpoints that may indicate attempts to enumerate LegacyDN attributes as part of initial reconnaissance. …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows IIS",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows IIS ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are limited.",
              "refs": "https://www.gteltsc.vn/blog/warning-new-attack-campaign-utilized-a-new-0day-rce-vulnerability-on-microsoft-exchange-server-12715.html, https://msrc-blog.microsoft.com/2022/09/29/customer-guidance-for-reported-zero-day-vulnerabilities-in-microsoft-exchange-server/, https://twitter.com/GossiTheDog/status/1575762721353916417?s=20&t=67gq9xCWuyPm1VEm8ydfyA, https://twitter.com/cglyer/status/1575793769814728705?s=20&t=67gq9xCWuyPm1VEm8ydfyA, https://www.gteltsc.vn/blog/warning-new-attack-campaign-utilized-a-new-0day-rce-vulnerability-on-microsoft-exchange-server-12715.html, https://research.splunk.com/stories/proxyshell/, https://splunk.github.io/splunk-add-on-for-microsoft-iis/, https://highon.coffee/blog/ssrf-cheat-sheet/, https://owasp.org/Top10/A10_2021-Server-Side_Request_Forgery_%28SSRF%29/",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Exchange Autodiscover SSRF Abuse\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows IIS. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Exchange Autodiscover SSRF Abuse\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are limited.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help watch email, identity, and related infrastructure together so we catch phishing, data theft, and misuse while keeping noise tied to what your teams actually use day to day.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.4.150",
              "n": "Zscaler Phishing Activity Threat Blocked",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential phishing attempts blocked by Zscaler within a network. It leverages web proxy logs to detect actions tagged as HTML.Phish. The detection method involves analyzing critical data points such as user, threat name, URL, and hostname. This activity is significant for a SOC as it serves as an early warning system for phishing threats, enabling prompt investigation and mitigation. If confirmed malicious, this activity could indicate an attempt to deceive user…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are limited to Zscalar configuration.",
              "refs": "https://help.zscaler.com/zia/nss-feed-output-format-web-logs",
              "mitre": [
                "T1566"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zscaler Phishing Activity Threat Blocked\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zscaler Phishing Activity Threat Blocked\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are limited to Zscalar configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the web filter for URLs tagged as phishing, so we can jump on credential-theft pages even before they show up in email reports alone.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 27.1,
          "qd": {
            "gold": 0,
            "silver": 2,
            "bronze": 148,
            "none": 0
          }
        },
        {
          "i": "10.5",
          "n": "Web Security / Secure Web Gateway",
          "u": [
            {
              "i": "10.5.1",
              "n": "Blocked Category Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Trending blocked categories reveals user behavior patterns and informs acceptable use policy. Spikes may indicate infections.",
              "t": "Splunk Add-on for Cisco Umbrella, TA-zscaler",
              "d": "SWG/proxy logs (URL category, action)",
              "q": "index=proxy sourcetype=\"cisco:umbrella\" action=\"Blocked\"\n| top limit=20 categories",
              "m": "Forward SWG logs to Splunk. Track blocked requests by category over time. Identify trending categories. Alert on spikes in malware/phishing categories. Report on policy effectiveness.",
              "z": "Bar chart (top blocked categories), Line chart (blocks over time), Pie chart (block distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Cisco Umbrella, TA-zscaler.\n• Ensure the following data sources are available: SWG/proxy logs (URL category, action).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward SWG logs to Splunk. Track blocked requests by category over time. Identify trending categories. Alert on spikes in malware/phishing categories. Report on policy effectiveness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"cisco:umbrella\" action=\"Blocked\"\n| top limit=20 categories\n```\n\nUnderstanding this SPL\n\n**Blocked Category Trending** — Trending blocked categories reveals user behavior patterns and informs acceptable use policy. Spikes may indicate infections.\n\nDocumented **Data sources**: SWG/proxy logs (URL category, action). **App/TA** (typical add-on context): Splunk Add-on for Cisco Umbrella, TA-zscaler. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: cisco:umbrella. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"cisco:umbrella\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `top` shows the most frequent field values (limit with an explicit `limit=` if needed).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src All_Traffic.dest All_Traffic.dest_port span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Blocked Category Trending** — Trending blocked categories reveals user behavior patterns and informs acceptable use policy. Spikes may indicate infections.\n\nDocumented **Data sources**: SWG/proxy logs (URL category, action). **App/TA** (typical add-on context): Splunk Add-on for Cisco Umbrella, TA-zscaler. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top blocked categories), Line chart (blocks over time), Pie chart (block distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track blocked web categories at the secure web gateway so policy gaps, user behavior, and attack spikes are visible.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src All_Traffic.dest All_Traffic.dest_port span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -count",
              "e": [
                "cisco",
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.2",
              "n": "Shadow IT Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Unapproved SaaS usage creates data security risks and compliance gaps. Discovery enables governance and risk assessment.",
              "t": "SWG/CASB TA",
              "d": "SWG logs (application identification), CASB logs",
              "q": "index=proxy sourcetype=\"cisco:umbrella\"\n| stats dc(src) as unique_users, sum(bytes) as total_bytes by app_name\n| lookup approved_apps.csv app_name OUTPUT approved\n| where isnull(approved) OR approved=\"No\"\n| sort -unique_users",
              "m": "Enable application identification in SWG. Maintain lookup of approved SaaS applications. Identify unapproved apps by user count and data volume. Report to IT governance for risk assessment. Track adoption of approved alternatives.",
              "z": "Table (unapproved apps with user counts), Bar chart (top shadow IT apps), Pie chart (approved vs unapproved traffic).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SWG/CASB TA.\n• Ensure the following data sources are available: SWG logs (application identification), CASB logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable application identification in SWG. Maintain lookup of approved SaaS applications. Identify unapproved apps by user count and data volume. Report to IT governance for risk assessment. Track adoption of approved alternatives.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"cisco:umbrella\"\n| stats dc(src) as unique_users, sum(bytes) as total_bytes by app_name\n| lookup approved_apps.csv app_name OUTPUT approved\n| where isnull(approved) OR approved=\"No\"\n| sort -unique_users\n```\n\nUnderstanding this SPL\n\n**Shadow IT Detection** — Unapproved SaaS usage creates data security risks and compliance gaps. Discovery enables governance and risk assessment.\n\nDocumented **Data sources**: SWG logs (application identification), CASB logs. **App/TA** (typical add-on context): SWG/CASB TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: cisco:umbrella. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"cisco:umbrella\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved) OR approved=\"No\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Shadow IT Detection** — Unapproved SaaS usage creates data security risks and compliance gaps. Discovery enables governance and risk assessment.\n\nDocumented **Data sources**: SWG logs (application identification), CASB logs. **App/TA** (typical add-on context): SWG/CASB TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unapproved apps with user counts), Bar chart (top shadow IT apps), Pie chart (approved vs unapproved traffic).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We combine SWG and CASB signals to find unsanctioned cloud apps and heavy users so IT can govern SaaS and reduce data risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.3",
              "n": "Malware Download Blocks",
              "c": "critical",
              "f": "beginner",
              "v": "Each blocked malware download represents a prevented infection. Tracking reveals targeted users and attack vectors.",
              "t": "SWG TA",
              "d": "SWG threat logs (malware blocks)",
              "q": "index=proxy sourcetype=\"zscaler:web\" action=\"Blocked\" threat_category=\"Malware\"\n| stats count by src_user, url, threat_name\n| sort -count",
              "m": "Enable threat scanning in SWG. Forward threat events to Splunk. Alert on malware download blocks for user awareness. Track targeted users for phishing correlation. Report on malware types and delivery methods.",
              "z": "Bar chart (malware blocks by type), Table (blocked downloads), Line chart (block rate trend), Single value (blocks today).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SWG TA.\n• Ensure the following data sources are available: SWG threat logs (malware blocks).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable threat scanning in SWG. Forward threat events to Splunk. Alert on malware download blocks for user awareness. Track targeted users for phishing correlation. Report on malware types and delivery methods.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"zscaler:web\" action=\"Blocked\" threat_category=\"Malware\"\n| stats count by src_user, url, threat_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Malware Download Blocks** — Each blocked malware download represents a prevented infection. Tracking reveals targeted users and attack vectors.\n\nDocumented **Data sources**: SWG threat logs (malware blocks). **App/TA** (typical add-on context): SWG TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: zscaler:web. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"zscaler:web\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_user, url, threat_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src All_Traffic.dest All_Traffic.dest_port span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Malware Download Blocks** — Each blocked malware download represents a prevented infection. Tracking reveals targeted users and attack vectors.\n\nDocumented **Data sources**: SWG threat logs (malware blocks). **App/TA** (typical add-on context): SWG TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (malware blocks by type), Table (blocked downloads), Line chart (block rate trend), Single value (blocks today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We monitor malware and bad-file blocks at the web gateway so infections are stopped at the border and repeat attempts stand out.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src All_Traffic.dest All_Traffic.dest_port span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.4",
              "n": "DLP over Web Traffic",
              "c": "high",
              "f": "beginner",
              "v": "Web DLP events indicate sensitive data being uploaded to unauthorized destinations. Critical for compliance.",
              "t": "SWG/CASB TA",
              "d": "SWG DLP logs (file uploads, paste detection)",
              "q": "index=proxy sourcetype=\"netskope:events\" alert_type=\"DLP\"\n| stats count by user, app, policy_name, file_type\n| sort -count",
              "m": "Configure DLP policies in SWG/CASB for sensitive data patterns. Ingest DLP violation events. Alert on high-severity violations. Track by user, destination app, and data type. Report for compliance audits.",
              "z": "Table (DLP violations), Bar chart (violations by policy), Line chart (violation trend), Pie chart (by data type).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SWG/CASB TA.\n• Ensure the following data sources are available: SWG DLP logs (file uploads, paste detection).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure DLP policies in SWG/CASB for sensitive data patterns. Ingest DLP violation events. Alert on high-severity violations. Track by user, destination app, and data type. Report for compliance audits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"netskope:events\" alert_type=\"DLP\"\n| stats count by user, app, policy_name, file_type\n| sort -count\n```\n\nUnderstanding this SPL\n\n**DLP over Web Traffic** — Web DLP events indicate sensitive data being uploaded to unauthorized destinations. Critical for compliance.\n\nDocumented **Data sources**: SWG DLP logs (file uploads, paste detection). **App/TA** (typical add-on context): SWG/CASB TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: netskope:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"netskope:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, app, policy_name, file_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DLP over Web Traffic** — Web DLP events indicate sensitive data being uploaded to unauthorized destinations. Critical for compliance.\n\nDocumented **Data sources**: SWG DLP logs (file uploads, paste detection). **App/TA** (typical add-on context): SWG/CASB TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (DLP violations), Bar chart (violations by policy), Line chart (violation trend), Pie chart (by data type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch DLP on web and upload paths so sensitive data is not exfiltrated inappropriately and compliance has evidence.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Web",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.5",
              "n": "DNS Security Events",
              "c": "critical",
              "f": "intermediate",
              "v": "Blocked DNS queries to malicious domains indicate infection attempts or active compromise. Each block is a security win.",
              "t": "Splunk Add-on for Cisco Umbrella",
              "d": "Umbrella/DNS security logs",
              "q": "index=dns_security sourcetype=\"cisco:umbrella\" action=\"Blocked\"\n| stats count by internalIp, domain, categories\n| where match(categories,\"Malware|Command and Control|Phishing\")\n| sort -count",
              "m": "Deploy DNS security (Umbrella, Zscaler). Forward blocked query logs to Splunk. Alert on blocks in malware/C2/phishing categories. Track affected internal IPs for investigation. Report on DNS security effectiveness.",
              "z": "Table (blocked domains with sources), Bar chart (blocks by category), Single value (unique blocked domains today).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Cisco Umbrella.\n• Ensure the following data sources are available: Umbrella/DNS security logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy DNS security (Umbrella, Zscaler). Forward blocked query logs to Splunk. Alert on blocks in malware/C2/phishing categories. Track affected internal IPs for investigation. Report on DNS security effectiveness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns_security sourcetype=\"cisco:umbrella\" action=\"Blocked\"\n| stats count by internalIp, domain, categories\n| where match(categories,\"Malware|Command and Control|Phishing\")\n| sort -count\n```\n\nUnderstanding this SPL\n\n**DNS Security Events** — Blocked DNS queries to malicious domains indicate infection attempts or active compromise. Each block is a security win.\n\nDocumented **Data sources**: Umbrella/DNS security logs. **App/TA** (typical add-on context): Splunk Add-on for Cisco Umbrella. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns_security; **sourcetype**: cisco:umbrella. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns_security, sourcetype=\"cisco:umbrella\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by internalIp, domain, categories** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where match(categories,\"Malware|Command and Control|Phishing\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DNS Security Events** — Blocked DNS queries to malicious domains indicate infection attempts or active compromise. Each block is a security win.\n\nDocumented **Data sources**: Umbrella/DNS security logs. **App/TA** (typical add-on context): Splunk Add-on for Cisco Umbrella. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (blocked domains with sources), Bar chart (blocks by category), Single value (unique blocked domains today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We review blocked and risky DNS security events so malware, phishing, and C2 do not get past the resolver unlogged.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [
                "cisco"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.6",
              "n": "Bandwidth Abuse Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Excessive bandwidth on non-business sites impacts network performance and productivity. Detection supports acceptable use enforcement.",
              "t": "SWG TA",
              "d": "SWG traffic logs (bytes transferred, URL category)",
              "q": "index=proxy sourcetype=\"cisco:umbrella\"\n| stats sum(bytes) as total_bytes by src_user, categories\n| where match(categories,\"Streaming|Gaming|Social\") AND total_bytes > 1073741824\n| eval gb=round(total_bytes/1073741824,2)\n| table src_user, categories, gb",
              "m": "Track bandwidth usage per user by URL category. Alert when individual users exceed thresholds on non-business categories (>1GB/day on streaming). Report top bandwidth consumers for management review.",
              "z": "Bar chart (bandwidth by user/category), Table (top consumers), Pie chart (bandwidth by category).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SWG TA.\n• Ensure the following data sources are available: SWG traffic logs (bytes transferred, URL category).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack bandwidth usage per user by URL category. Alert when individual users exceed thresholds on non-business categories (>1GB/day on streaming). Report top bandwidth consumers for management review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"cisco:umbrella\"\n| stats sum(bytes) as total_bytes by src_user, categories\n| where match(categories,\"Streaming|Gaming|Social\") AND total_bytes > 1073741824\n| eval gb=round(total_bytes/1073741824,2)\n| table src_user, categories, gb\n```\n\nUnderstanding this SPL\n\n**Bandwidth Abuse Detection** — Excessive bandwidth on non-business sites impacts network performance and productivity. Detection supports acceptable use enforcement.\n\nDocumented **Data sources**: SWG traffic logs (bytes transferred, URL category). **App/TA** (typical add-on context): SWG TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: cisco:umbrella. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"cisco:umbrella\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_user, categories** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where match(categories,\"Streaming|Gaming|Social\") AND total_bytes > 1073741824` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Bandwidth Abuse Detection**): table src_user, categories, gb\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Bandwidth Abuse Detection** — Excessive bandwidth on non-business sites impacts network performance and productivity. Detection supports acceptable use enforcement.\n\nDocumented **Data sources**: SWG traffic logs (bytes transferred, URL category). **App/TA** (typical add-on context): SWG TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (bandwidth by user/category), Table (top consumers), Pie chart (bandwidth by category).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We measure bandwidth on non-business site categories so abuse and performance problems surface before they hurt the business.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.app span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.7",
              "n": "Unencrypted Traffic Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Sensitive data transmitted over HTTP is vulnerable to interception. Detection ensures encryption compliance.",
              "t": "SWG TA",
              "d": "SWG traffic logs (protocol, URL)",
              "q": "index=proxy sourcetype=\"cisco:umbrella\" protocol=\"HTTP\"\n| search NOT url=\"http://ocsp.\" NOT url=\"http://crl.\"\n| stats count by src_user, domain\n| sort -count",
              "m": "Monitor HTTP (non-HTTPS) traffic in SWG logs. Filter out legitimate HTTP uses (OCSP, CRL). Alert when sensitive applications are accessed over HTTP. Report unencrypted traffic percentage as a security metric.",
              "z": "Table (HTTP traffic by destination), Pie chart (HTTP vs HTTPS), Line chart (unencrypted traffic trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SWG TA.\n• Ensure the following data sources are available: SWG traffic logs (protocol, URL).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor HTTP (non-HTTPS) traffic in SWG logs. Filter out legitimate HTTP uses (OCSP, CRL). Alert when sensitive applications are accessed over HTTP. Report unencrypted traffic percentage as a security metric.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"cisco:umbrella\" protocol=\"HTTP\"\n| search NOT url=\"http://ocsp.\" NOT url=\"http://crl.\"\n| stats count by src_user, domain\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Unencrypted Traffic Detection** — Sensitive data transmitted over HTTP is vulnerable to interception. Detection ensures encryption compliance.\n\nDocumented **Data sources**: SWG traffic logs (protocol, URL). **App/TA** (typical add-on context): SWG TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: cisco:umbrella. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"cisco:umbrella\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by src_user, domain** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Unencrypted Traffic Detection** — Sensitive data transmitted over HTTP is vulnerable to interception. Detection ensures encryption compliance.\n\nDocumented **Data sources**: SWG traffic logs (protocol, URL). **App/TA** (typical add-on context): SWG TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (HTTP traffic by destination), Pie chart (HTTP vs HTTPS), Line chart (unencrypted traffic trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We flag plain HTTP where policy expects HTTPS so sensitive browsing is not left exposed on the network.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| eval bytes=bytes_in+bytes_out\n| sort -bytes",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.8",
              "n": "Detect Web Access to Decommissioned S3 Bucket",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies web requests to domains that match previously decommissioned S3 buckets through web proxy logs. This activity is significant because attackers may attempt to access or recreate deleted S3 buckets that were previously public to hijack them for malicious purposes. If successful, this could allow attackers to host malicious content or exfiltrate data through compromised bucket names that may still be referenced by legitimate applications.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS Cloudfront",
              "q": "| from datamodel:Web | search src=\"$src$\" url_domain=\"$url_domain$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to domain entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS Cloudfront ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some applications or web pages may continue to reference old S3 bucket URLs after they have been decommissioned. These should be investigated and updated to prevent potential security risks.",
              "refs": "https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html, https://labs.watchtowr.com/8-million-requests-later-we-made-the-solarwinds-supply-chain-attack-look-amateur/",
              "mitre": [
                "T1485"
              ],
              "dtype": "domain",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Web Access to Decommissioned S3 Bucket\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to domain entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS Cloudfront. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Web Access to Decommissioned S3 Bucket\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some applications or web pages may continue to reference old S3 bucket URLs after they have been decommissioned. These should be investigated and updated to prevent potential security risks.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies web requests to domains that match previously decommissioned cloud storage buckets through web proxy logs — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.9",
              "n": "Zscaler Adware Activities Threat Blocked",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential adware activity blocked by Zscaler. It leverages web proxy logs to detect blocked actions associated with adware threats. Key data points such as device owner, user, URL category, destination URL, and IP are analyzed. This activity is significant as adware can degrade system performance, lead to unwanted advertisements, and potentially expose users to further malicious content. If confirmed malicious, it could indicate an attempt to compromise user sys…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are limited to Zscaler configuration.",
              "refs": "https://help.zscaler.com/zia/nss-feed-output-format-web-logs",
              "mitre": [
                "T1566"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zscaler Adware Activities Threat Blocked\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zscaler Adware Activities Threat Blocked\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are limited to Zscaler configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We align Splunk with Zscaler so we can see which web threats the SWG blocks and which users and destinations need follow-up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.10",
              "n": "Zscaler Behavior Analysis Threat Blocked",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies threats blocked by the Zscaler proxy based on behavior analysis. It leverages web proxy logs to detect entries where actions are blocked and threat names and classes are specified. This detection is significant as it highlights potential malicious activities that were intercepted by Zscaler's behavior analysis, providing early indicators of threats. If confirmed malicious, these blocked threats could indicate attempted breaches or malware infections, helping sec…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are limited to Zscalar configuration.",
              "refs": "https://help.zscaler.com/zia/nss-feed-output-format-web-logs",
              "mitre": [
                "T1566"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zscaler Behavior Analysis Threat Blocked\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zscaler Behavior Analysis Threat Blocked\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are limited to Zscalar configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We align Splunk with Zscaler so we can see which web threats the SWG blocks and which users and destinations need follow-up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.11",
              "n": "Zscaler CryptoMiner Downloaded Threat Blocked",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to download cryptomining software that are blocked by Zscaler. It leverages web proxy logs to detect blocked actions associated with cryptominer threats, analyzing key data points such as device owner, user, URL category, destination URL, and IP. This activity is significant for a SOC as it helps in early identification and mitigation of cryptomining activities, which can compromise network integrity and resource availability. If confirmed malicious, th…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are limited to Zscaler configuration.",
              "refs": "https://help.zscaler.com/zia/nss-feed-output-format-web-logs",
              "mitre": [
                "T1566"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zscaler CryptoMiner Downloaded Threat Blocked\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zscaler CryptoMiner Downloaded Threat Blocked\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are limited to Zscaler configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We align Splunk with Zscaler so we can see which web threats the SWG blocks and which users and destinations need follow-up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.12",
              "n": "Zscaler Employment Search Web Activity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies web activity related to employment searches within a network. It leverages Zscaler web proxy logs, focusing on entries categorized as 'Job/Employment Search'. Key data points such as device owner, user, URL category, destination URL, and IP are analyzed. This detection is significant for SOCs as it helps monitor potential insider threats by identifying users who may be seeking new employment. If confirmed malicious, this activity could indicate a risk of data ex…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are limited to Zscaler configuration.",
              "refs": "https://help.zscaler.com/zia/nss-feed-output-format-web-logs",
              "mitre": [
                "T1566"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zscaler Employment Search Web Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zscaler Employment Search Web Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are limited to Zscaler configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We align Splunk with Zscaler so we can see which web threats the SWG blocks and which users and destinations need follow-up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.13",
              "n": "Zscaler Exploit Threat Blocked",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential exploit attempts involving command and script interpreters blocked by Zscaler. It leverages web proxy logs to detect incidents where actions are blocked due to exploit references. The detection compiles statistics by user, threat name, URL, hostname, file class, and filename. This activity is significant as it helps identify and mitigate exploit attempts, which are critical for maintaining security. If confirmed malicious, such activity could lead to u…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are limited to Zscaler configuration.",
              "refs": "https://help.zscaler.com/zia/nss-feed-output-format-web-logs",
              "mitre": [
                "T1566"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zscaler Exploit Threat Blocked\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zscaler Exploit Threat Blocked\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are limited to Zscaler configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We align Splunk with Zscaler so we can see which web threats the SWG blocks and which users and destinations need follow-up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.14",
              "n": "Zscaler Legal Liability Threat Blocked",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies significant legal liability threats blocked by the Zscaler web proxy. It uses web proxy logs to track destinations, device owners, users, URL categories, and actions associated with legal liability. By leveraging statistics on unique fields, it ensures a precise focus on these threats. This activity is significant for SOC as it helps enforce legal compliance and risk management. If confirmed malicious, it could indicate attempts to access legally sensitive or re…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are limited to Zscaler configuration.",
              "refs": "https://help.zscaler.com/zia/nss-feed-output-format-web-logs",
              "mitre": [
                "T1566"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zscaler Legal Liability Threat Blocked\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zscaler Legal Liability Threat Blocked\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are limited to Zscaler configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
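              "_example": "A minimal sketch of the per-entity risk exception mentioned under Tuning, assuming a hypothetical lookup `zscaler_risk_exceptions.csv` with columns `risk_object` and `reason` (rename to match your environment):\n\n```spl\n| from datamodel Risk.All_Risk\n| search search_name=\"*Zscaler Legal Liability*\"\n| lookup zscaler_risk_exceptions.csv risk_object as normalized_risk_object OUTPUT reason\n| where isnull(reason)\n| stats count values(risk_message) as risk_message by normalized_risk_object\n```\n\nEntities listed in the lookup are dropped before aggregation, which preserves detection coverage for everything else.",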
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We align Splunk with Zscaler so we can see which web threats the SWG blocks and which users and destinations need follow-up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.15",
              "n": "Zscaler Malware Activity Threat Blocked",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential malware activities within a network that are blocked by Zscaler. It leverages web proxy logs to filter for blocked actions associated with malware, aggregating occurrences by user, URL, and threat category. This detection is significant for SOC as it highlights attempts to access malicious content, indicating potential compromise or targeted attacks. If confirmed malicious, this activity could signify an ongoing attempt to infiltrate the network, neces…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are limited to Zscalar configuration.",
              "refs": "https://help.zscaler.com/zia/nss-feed-output-format-web-logs",
              "mitre": [
                "T1566"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zscaler Malware Activity Threat Blocked\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zscaler Malware Activity Threat Blocked\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are limited to Zscalar configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
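              "_example": "A quick normalization check before enabling the correlation search (a sketch, not part of the ESCU detection; the vendor_product value is an assumption, so match it to your Zscaler TA output):\n\n```spl\n| tstats summariesonly=false count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\" Web.vendor_product=\"Zscaler*\"\n  by Web.vendor_product Web.url_domain\n| sort -count\n```\n\nIf this returns nothing, the feed is likely not CIM-normalized yet and the detection will not fire.",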
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We align Splunk with Zscaler so we can see which web threats the SWG blocks and which users and destinations need follow-up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.16",
              "n": "Zscaler Potentially Abused File Download",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the download of potentially malicious file types, such as .scr, .dll, .bat, and .lnk, within a network. It leverages web proxy logs from Zscaler, focusing on blocked actions and analyzing fields like deviceowner, user, urlcategory, url, dest, and filename. This activity is significant as these file types are often used to spread malware, posing a threat to network security. If confirmed malicious, this activity could lead to malware execution, data compromise, o…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are limited to Zscaler configuration.",
              "refs": "https://help.zscaler.com/zia/nss-feed-output-format-web-logs",
              "mitre": [
                "T1566"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zscaler Potentially Abused File Download\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zscaler Potentially Abused File Download\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are limited to Zscaler configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
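              "_example": "For ad-hoc validation outside RBA, a raw-log sketch against the risky extensions named in the description (the index, the `zscaler:web` sourcetype, and the `filename` field are assumptions; align them with your Zscaler NSS web feed):\n\n```spl\nindex=zscaler sourcetype=\"zscaler:web\" action=blocked\n| regex filename=\"(?i)\\.(scr|dll|bat|lnk)$\"\n| stats count by user, url, filename, urlcategory\n| sort -count\n```",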
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We align Splunk with Zscaler so we can see which web threats the SWG blocks and which users and destinations need follow-up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.17",
              "n": "Zscaler Privacy Risk Destinations Threat Blocked",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies blocked destinations within a network that are deemed privacy risks by Zscaler. It leverages web proxy logs, focusing on entries marked as \"Privacy Risk.\" Key data points such as device owner, user, URL category, destination URL, and IP are analyzed. This activity is significant for a SOC as it helps monitor and manage privacy risks, ensuring a secure network environment. If confirmed malicious, this activity could indicate attempts to access or exfiltrate sensi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are limited to Zscaler configuration.",
              "refs": "https://help.zscaler.com/zia/nss-feed-output-format-web-logs",
              "mitre": [
                "T1566"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zscaler Privacy Risk Destinations Threat Blocked\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zscaler Privacy Risk Destinations Threat Blocked\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are limited to Zscaler configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
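              "_example": "An ad-hoc triage summary against the raw feed (a sketch; the index, sourcetype, and urlcategory value are assumptions, so align them with your Zscaler ingest):\n\n```spl\nindex=zscaler sourcetype=\"zscaler:web\" action=blocked urlcategory=\"Privacy Risk\"\n| stats count dc(url) as unique_urls by user, dest\n| sort -count\n```",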
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We align Splunk with Zscaler so we can see which web threats the SWG blocks and which users and destinations need follow-up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.18",
              "n": "Zscaler Scam Destinations Threat Blocked",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies blocked scam-related activities detected by Zscaler within a network. It leverages web proxy logs to examine actions flagged as scam threats, focusing on data points such as device owner, user, URL category, destination URL, and IP. This detection is significant for SOC as it helps in the early identification and mitigation of scam activities, ensuring network safety. If confirmed malicious, this activity could indicate attempts to deceive users, potentially lea…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are limited to Zscaler configuration.",
              "refs": "https://help.zscaler.com/zia/nss-feed-output-format-web-logs",
              "mitre": [
                "T1566"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zscaler Scam Destinations Threat Blocked\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zscaler Scam Destinations Threat Blocked\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are limited to Zscaler configuration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
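              "_example": "To confirm the detection writes risk events after enablement (a sketch; `risk` is the default ES risk index and an assumption if yours is renamed):\n\n```spl\nindex=risk search_name=\"*Zscaler Scam Destinations*\"\n| stats count min(_time) as first_seen max(_time) as last_seen by risk_object, risk_score\n| convert ctime(first_seen) ctime(last_seen)\n```",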
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We align Splunk with Zscaler so we can see which web threats the SWG blocks and which users and destinations need follow-up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.19",
              "n": "Zscaler Virus Download threat blocked",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to download viruses that were blocked by Zscaler within a network. It leverages web proxy logs to detect blocked actions indicative of virus download attempts. Key data points such as device owner, user, URL category, destination URL, and IP are analyzed. This activity is significant as it helps in early detection and remediation of potential virus threats, enhancing network security. If confirmed malicious, this activity could indicate an attempt to co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Deploy the detection from Splunk Security Essentials (ESCU) or the security_content repository. Ensure the required data sources (Various) are ingested and normalized. For Risk-based detections, Splunk Enterprise Security is required to populate the Risk datamodel and generate Notable events. Configure as a correlation search or saved search with alerting as needed.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Security Essentials / ESCU.\n• Ensure the following data sources are available: Various.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy the detection from Splunk Security Essentials (ESCU) or the security_content repository. Ensure the required data sources (Various) are ingested and normalized. For Risk-based detections, Splunk Enterprise Security is required to populate the Risk datamodel and generate Notable events. Configure as a correlation search or saved search with alerting as needed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`\n```\n\nUnderstanding this SPL\n\n**Zscaler Virus Download threat blocked** — The following analytic identifies attempts to download viruses that were blocked by Zscaler within a network. It leverages web proxy logs to detect blocked actions indicative of virus download attempts. Key data points such as device owner, user, URL category, destination URL, and IP are analyzed. This activity is significant as it helps in early detection and remediation of potential virus threats, enhancing network security. If confirmed malicious, this activity could…\n\nDocumented **Data sources**: Various. **App/TA** (typical add-on context): Splunk Security Essentials / ESCU. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `from` (dataset / Federated Search) — verify dataset availability and permissions.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by normalized_risk_object** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Invokes macro `security_content_ctime(firstTime)` — in Search, use the UI or expand to inspect the underlying SPL.\n• Invokes macro `security_content_ctime(lastTime)` — in Search, use the UI or expand to inspect the underlying SPL.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline (from ESCU).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We align Splunk with Zscaler so we can see which web threats the SWG blocks and which users and destinations need follow-up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.5.20",
              "n": "Shadow IT Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "Identifies SaaS and unsanctioned apps via proxy/DNS logs not mapped to approved services in CMDB or SSO.",
              "t": "Zscaler, Umbrella, firewall logs",
              "d": "`sourcetype=zscaler:web`, `sourcetype=pan:traffic`",
              "q": "index=proxy sourcetype=\"zscaler:web\" earliest=-24h\n| stats sum(bytes_in) as bytes by app_name, user\n| lookup approved_saas_apps.csv app_name OUTPUT approved\n| where isnull(approved)\n| sort -bytes\n| head 50",
              "m": "Maintain `approved_saas_apps.csv` from architecture/ITAM. Top unapproved apps by volume; correlate with CMDB business service owner for intake.",
              "z": "Table (shadow apps), Bar chart (bytes by app), Pie chart (approved vs unknown).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Zscaler, Umbrella, firewall logs.\n• Ensure the following data sources are available: `sourcetype=zscaler:web`, `sourcetype=pan:traffic`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain `approved_saas_apps.csv` from architecture/ITAM. Top unapproved apps by volume; correlate with CMDB business service owner for intake.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"zscaler:web\" earliest=-24h\n| stats sum(bytes_in) as bytes by app_name, user\n| lookup approved_saas_apps.csv app_name OUTPUT approved\n| where isnull(approved)\n| sort -bytes\n| head 50\n```\n\nUnderstanding this SPL\n\n**Shadow IT Discovery** — Identifies SaaS and unsanctioned apps via proxy/DNS logs not mapped to approved services in CMDB or SSO.\n\nDocumented **Data sources**: `sourcetype=zscaler:web`, `sourcetype=pan:traffic`. **App/TA** (typical add-on context): Zscaler, Umbrella, firewall logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: zscaler:web. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"zscaler:web\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_name, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (shadow apps), Bar chart (bytes by app), Pie chart (approved vs unknown).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use proxy and DNS-like logs to find shadow IT so we know which unapproved apps need review, approval, or blocking.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 29.8,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 20,
            "none": 0
          }
        },
        {
          "i": "10.6",
          "n": "Vulnerability Management",
          "u": [
            {
              "i": "10.6.1",
              "n": "Critical Vulnerability Trending",
              "c": "critical",
              "f": "beginner",
              "v": "Tracking critical vulnerabilities over time measures security posture improvement and identifies remediation stalls.",
              "t": "`Tenable Add-On for Splunk` (Splunkbase 4060), `Qualys Technology Add-on (TA) for Splunk` (Splunkbase 2964)",
              "d": "Vulnerability scan results (severity, CVE, affected asset)",
              "q": "index=vulnerability sourcetype=\"tenable:vuln\"\n| where severity IN (\"Critical\",\"High\")\n| timechart span=1d dc(cve_id) as unique_vulns by severity",
              "m": "Ingest scan results from vulnerability management platform. Track unique vulnerabilities by severity over time. Alert when critical count exceeds threshold or increases. Report on remediation progress weekly.",
              "z": "Line chart (vuln count trend by severity), Single value (critical vuln count), Bar chart (top CVEs), Table (critical vulnerabilities).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 4060](https://splunkbase.splunk.com/app/4060), [Splunkbase app 2964](https://splunkbase.splunk.com/app/2964), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Tenable Add-On for Splunk` (Splunkbase 4060), `Qualys Technology Add-on (TA) for Splunk` (Splunkbase 2964).\n• Ensure the following data sources are available: Vulnerability scan results (severity, CVE, affected asset).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest scan results from vulnerability management platform. Track unique vulnerabilities by severity over time. Alert when critical count exceeds threshold or increases. Report on remediation progress weekly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vulnerability sourcetype=\"tenable:vuln\"\n| where severity IN (\"Critical\",\"High\")\n| timechart span=1d dc(cve_id) as unique_vulns by severity\n```\n\nUnderstanding this SPL\n\n**Critical Vulnerability Trending** — Tracking critical vulnerabilities over time measures security posture improvement and identifies remediation stalls.\n\nDocumented **Data sources**: Vulnerability scan results (severity, CVE, affected asset). **App/TA** (typical add-on context): `Tenable Add-On for Splunk` (Splunkbase 4060), `Qualys Technology Add-on (TA) for Splunk` (Splunkbase 2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vulnerability; **sourcetype**: tenable:vuln. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vulnerability, sourcetype=\"tenable:vuln\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where severity IN (\"Critical\",\"High\")` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by severity** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Critical Vulnerability Trending** — Tracking critical vulnerabilities over time measures security posture improvement and identifies remediation stalls.\n\nDocumented **Data sources**: Vulnerability scan results (severity, CVE, affected asset). **App/TA** (typical add-on context): `Tenable Add-On for Splunk` (Splunkbase 4060), `Qualys Technology Add-on (TA) for Splunk` (Splunkbase 2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (vuln count trend by severity), Single value (critical vuln count), Bar chart (top CVEs), Table (critical vulnerabilities).",
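              "_example": "An alert-oriented variant of the same search (a sketch; the threshold of 50 is an arbitrary placeholder, so set it from your own baseline):\n\n```spl\nindex=vulnerability sourcetype=\"tenable:vuln\" severity=\"Critical\"\n| stats dc(cve_id) as critical_vulns\n| where critical_vulns > 50\n```\n\nSchedule it daily and trigger the alert when results are returned.",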
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many serious, known software flaws on our systems trend over time so we can focus patching, track progress, and see when we are getting stuck.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count",
              "e": [
                "qualys",
                "tenable"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.2",
              "n": "Mean Time to Remediation",
              "c": "high",
              "f": "beginner",
              "v": "MTTR measures remediation efficiency. Long MTTR indicates process bottlenecks or resource constraints requiring management attention.",
              "t": "Vuln management TA",
              "d": "Scan results with first_seen and last_seen dates",
              "q": "index=vulnerability sourcetype=\"tenable:vuln\" state=\"Fixed\"\n| eval mttr_days=round((fixed_date-first_seen)/86400)\n| stats avg(mttr_days) as avg_mttr, median(mttr_days) as median_mttr by severity",
              "m": "Track first_seen and fixed_date for each vulnerability. Calculate MTTR by severity. Report against SLA targets (Critical: 7d, High: 30d, Medium: 90d). Identify teams with consistently high MTTR for process improvement.",
              "z": "Bar chart (MTTR by severity), Line chart (MTTR trend), Table (SLA compliance by team).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vuln management TA.\n• Ensure the following data sources are available: Scan results with first_seen and last_seen dates.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack first_seen and fixed_date for each vulnerability. Calculate MTTR by severity. Report against SLA targets (Critical: 7d, High: 30d, Medium: 90d). Identify teams with consistently high MTTR for process improvement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vulnerability sourcetype=\"tenable:vuln\" state=\"Fixed\"\n| eval mttr_days=round((fixed_date-first_seen)/86400)\n| stats avg(mttr_days) as avg_mttr, median(mttr_days) as median_mttr by severity\n```\n\nUnderstanding this SPL\n\n**Mean Time to Remediation** — MTTR measures remediation efficiency. Long MTTR indicates process bottlenecks or resource constraints requiring management attention.\n\nDocumented **Data sources**: Scan results with first_seen and last_seen dates. **App/TA** (typical add-on context): Vuln management TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vulnerability; **sourcetype**: tenable:vuln. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vulnerability, sourcetype=\"tenable:vuln\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mttr_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Mean Time to Remediation** — MTTR measures remediation efficiency. Long MTTR indicates process bottlenecks or resource constraints requiring management attention.\n\nDocumented **Data sources**: Scan results with first_seen and last_seen dates. **App/TA** (typical add-on context): Vuln management TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (MTTR by severity), Line chart (MTTR trend), Table (SLA compliance by team).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see how long it takes to fix reported flaws so we can tell when remediation is healthy, when a team is stuck, or when an approved exception explains the delay.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.3",
              "n": "Scan Coverage Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Assets not scanned are unknown risks. Coverage monitoring ensures comprehensive vulnerability assessment.",
              "t": "Vuln management TA + CMDB",
              "d": "Scan activity, asset inventory",
              "q": "| inputlookup cmdb_assets.csv\n| join type=left max=1 hostname [search index=vulnerability sourcetype=\"tenable:vuln\" | stats latest(_time) as last_scan by hostname]\n| eval days_since_scan=round((now()-last_scan)/86400)\n| where isnull(last_scan) OR days_since_scan > 30\n| table hostname, os, department, last_scan, days_since_scan",
              "m": "Cross-reference scan targets with CMDB. Identify assets not scanned in 30 days. Alert on scan failures. Track coverage percentage as a KPI. Report on uncovered assets for remediation.",
              "z": "Single value (scan coverage %), Table (unscanned assets), Pie chart (scanned vs unscanned), Bar chart (gaps by department).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vuln management TA + CMDB.\n• Ensure the following data sources are available: Scan activity, asset inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCross-reference scan targets with CMDB. Identify assets not scanned in 30 days. Alert on scan failures. Track coverage percentage as a KPI. Report on uncovered assets for remediation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup cmdb_assets.csv\n| join type=left max=1 hostname [search index=vulnerability sourcetype=\"tenable:vuln\" | stats latest(_time) as last_scan by hostname]\n| eval days_since_scan=round((now()-last_scan)/86400)\n| where isnull(last_scan) OR days_since_scan > 30\n| table hostname, os, department, last_scan, days_since_scan\n```\n\nUnderstanding this SPL\n\n**Scan Coverage Monitoring** — Assets not scanned are unknown risks. Coverage monitoring ensures comprehensive vulnerability assessment.\n\nDocumented **Data sources**: Scan activity, asset inventory. **App/TA** (typical add-on context): Vuln management TA + CMDB. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **days_since_scan** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(last_scan) OR days_since_scan > 30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Scan Coverage Monitoring**): table hostname, os, department, last_scan, days_since_scan\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Scan Coverage Monitoring** — Assets not scanned are unknown risks. Coverage monitoring ensures comprehensive vulnerability assessment.\n\nDocumented **Data sources**: Scan activity, asset inventory. **App/TA** (typical add-on context): Vuln management TA + CMDB. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (scan coverage %), Table (unscanned assets), Pie chart (scanned vs unscanned), Bar chart (gaps by department).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare our asset list to recent scan activity so we are not blind to whole systems that never got checked for known weaknesses.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.4",
              "n": "Patch Compliance by Team/BU",
              "c": "high",
              "f": "beginner",
              "v": "Per-team compliance views drive accountability and enable targeted remediation efforts where they're most needed.",
              "t": "Vuln management TA + CMDB",
              "d": "Scan results enriched with CMDB ownership data",
              "q": "index=vulnerability sourcetype=\"tenable:vuln\" severity IN (\"Critical\",\"High\")\n| lookup cmdb_assets.csv hostname OUTPUT department, owner\n| stats dc(cve_id) as open_vulns, dc(hostname) as affected_hosts by department\n| sort -open_vulns",
              "m": "Enrich vulnerability data with asset ownership from CMDB. Aggregate by team/business unit. Create weekly compliance scorecard. Share with leadership for accountability. Track improvement trends per team.",
              "z": "Bar chart (vulns by team), Table (team compliance scorecard), Line chart (compliance trend by team).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vuln management TA + CMDB.\n• Ensure the following data sources are available: Scan results enriched with CMDB ownership data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnrich vulnerability data with asset ownership from CMDB. Aggregate by team/business unit. Create weekly compliance scorecard. Share with leadership for accountability. Track improvement trends per team.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vulnerability sourcetype=\"tenable:vuln\" severity IN (\"Critical\",\"High\")\n| lookup cmdb_assets.csv hostname OUTPUT department, owner\n| stats dc(cve_id) as open_vulns, dc(hostname) as affected_hosts by department\n| sort -open_vulns\n```\n\nUnderstanding this SPL\n\n**Patch Compliance by Team/BU** — Per-team compliance views drive accountability and enable targeted remediation efforts where they're most needed.\n\nDocumented **Data sources**: Scan results enriched with CMDB ownership data. **App/TA** (typical add-on context): Vuln management TA + CMDB. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vulnerability; **sourcetype**: tenable:vuln. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vulnerability, sourcetype=\"tenable:vuln\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by department** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Patch Compliance by Team/BU** — Per-team compliance views drive accountability and enable targeted remediation efforts where they're most needed.\n\nDocumented **Data sources**: Scan results enriched with CMDB ownership data. **App/TA** (typical add-on context): Vuln management TA + CMDB. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (vulns by team), Table (team compliance scorecard), Line chart (compliance trend by team).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see which parts of the business still carry the most open serious flaws so leaders know where to focus people and time, not just a single global number.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.5",
              "n": "Exploitable Vulnerability Prioritization",
              "c": "critical",
              "f": "intermediate",
              "v": "Not all vulnerabilities are equal — those with known exploits pose immediate risk. Prioritization focuses remediation on the highest-risk items.",
              "t": "Vuln management TA + threat intel",
              "d": "Scan results + CISA KEV catalog + EPSS scores",
              "q": "index=vulnerability sourcetype=\"tenable:vuln\" severity=\"Critical\"\n| lookup cisa_kev.csv cve_id OUTPUT known_exploited, ransomware_associated\n| lookup epss_scores.csv cve_id OUTPUT epss_score\n| where known_exploited=\"Yes\" OR epss_score > 0.5\n| table hostname, cve_id, severity, epss_score, known_exploited, ransomware_associated\n| sort -epss_score",
              "m": "Maintain CISA KEV and EPSS lookup tables (update weekly). Enrich vulnerability data with exploit intelligence. Prioritize vulnerabilities with known exploits and high EPSS scores. Alert immediately on new KEV vulnerabilities found in environment.",
              "z": "Table (exploitable vulns prioritized), Single value (KEV vulns in environment), Bar chart (EPSS distribution), Scatter plot (severity × EPSS).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vuln management TA + threat intel.\n• Ensure the following data sources are available: Scan results + CISA KEV catalog + EPSS scores.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain CISA KEV and EPSS lookup tables (update weekly). Enrich vulnerability data with exploit intelligence. Prioritize vulnerabilities with known exploits and high EPSS scores. Alert immediately on new KEV vulnerabilities found in environment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vulnerability sourcetype=\"tenable:vuln\" severity=\"Critical\"\n| lookup cisa_kev.csv cve_id OUTPUT known_exploited, ransomware_associated\n| lookup epss_scores.csv cve_id OUTPUT epss_score\n| where known_exploited=\"Yes\" OR epss_score > 0.5\n| table hostname, cve_id, severity, epss_score, known_exploited, ransomware_associated\n| sort -epss_score\n```\n\nUnderstanding this SPL\n\n**Exploitable Vulnerability Prioritization** — Not all vulnerabilities are equal — those with known exploits pose immediate risk. Prioritization focuses remediation on the highest-risk items.\n\nDocumented **Data sources**: Scan results + CISA KEV catalog + EPSS scores. **App/TA** (typical add-on context): Vuln management TA + threat intel. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vulnerability; **sourcetype**: tenable:vuln. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vulnerability, sourcetype=\"tenable:vuln\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where known_exploited=\"Yes\" OR epss_score > 0.5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Exploitable Vulnerability Prioritization**): table hostname, cve_id, severity, epss_score, known_exploited, ransomware_associated\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Exploitable Vulnerability Prioritization** — Not all vulnerabilities are equal — those with known exploits pose immediate risk. Prioritization focuses remediation on the highest-risk items.\n\nDocumented **Data sources**: Scan results + CISA KEV catalog + EPSS scores. **App/TA** (typical add-on context): Vuln management TA + threat intel. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (exploitable vulns prioritized), Single value (KEV vulns in environment), Bar chart (EPSS distribution), Scatter plot (severity × EPSS).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We line up our scanner output with which issues are already being attacked in the wild so we fix what criminals would try first, not just what looks noisy on a list.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.6",
              "n": "Vulnerability SLA Compliance",
              "c": "high",
              "f": "beginner",
              "v": "SLA tracking ensures vulnerabilities are remediated within policy timeframes. Non-compliance creates audit findings.",
              "t": "Vuln management TA",
              "d": "Scan results with detection timestamps, SLA policy lookup",
              "q": "index=vulnerability sourcetype=\"tenable:vuln\" state=\"Active\"\n| eval age_days=round((now()-first_seen)/86400)\n| eval sla_days=case(severity=\"Critical\",7, severity=\"High\",30, severity=\"Medium\",90, 1=1,180)\n| eval sla_status=if(age_days>sla_days,\"Overdue\",\"Compliant\")\n| stats count by severity, sla_status",
              "m": "Define SLA targets per severity. Calculate vulnerability age against SLA. Track compliance percentage. Alert when critical/high vulns approach SLA deadline. Produce compliance reports for audit evidence.",
              "z": "Gauge (SLA compliance %), Table (overdue vulnerabilities), Bar chart (compliance by severity), Line chart (compliance trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vuln management TA.\n• Ensure the following data sources are available: Scan results with detection timestamps, SLA policy lookup.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine SLA targets per severity. Calculate vulnerability age against SLA. Track compliance percentage. Alert when critical/high vulns approach SLA deadline. Produce compliance reports for audit evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vulnerability sourcetype=\"tenable:vuln\" state=\"Active\"\n| eval age_days=round((now()-first_seen)/86400)\n| eval sla_days=case(severity=\"Critical\",7, severity=\"High\",30, severity=\"Medium\",90, 1=1,180)\n| eval sla_status=if(age_days>sla_days,\"Overdue\",\"Compliant\")\n| stats count by severity, sla_status\n```\n\nUnderstanding this SPL\n\n**Vulnerability SLA Compliance** — SLA tracking ensures vulnerabilities are remediated within policy timeframes. Non-compliance creates audit findings.\n\nDocumented **Data sources**: Scan results with detection timestamps, SLA policy lookup. **App/TA** (typical add-on context): Vuln management TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vulnerability; **sourcetype**: tenable:vuln. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vulnerability, sourcetype=\"tenable:vuln\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by severity, sla_status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Vulnerability SLA Compliance** — SLA tracking ensures vulnerabilities are remediated within policy timeframes. Non-compliance creates audit findings.\n\nDocumented **Data sources**: Scan results with detection timestamps, SLA policy lookup. **App/TA** (typical add-on context): Vuln management TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (SLA compliance %), Table (overdue vulnerabilities), Bar chart (compliance by severity), Line chart (compliance trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check which open issues have sat past the fix times we promised so audits and service owners see gaps before they become incidents.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.7",
              "n": "New Vulnerability Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Newly discovered critical vulnerabilities require immediate triage. Alerting ensures rapid response to emerging risks.",
              "t": "Vuln management TA",
              "d": "Scan results (first_seen within last scan window)",
              "q": "index=vulnerability sourcetype=\"tenable:vuln\" severity=\"Critical\"\n| where first_seen > relative_time(now(), \"-24h\")\n| table hostname, cve_id, plugin_name, severity, first_seen\n| sort -first_seen",
              "m": "After each scan, identify new critical/high vulnerabilities (first_seen within scan window). Alert immediately on new critical findings. Include CVE details and affected hosts. Integrate with ticketing for automated remediation tracking.",
              "z": "Table (new vulnerabilities), Single value (new criticals today), Timeline (discovery events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vuln management TA.\n• Ensure the following data sources are available: Scan results (first_seen within last scan window).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAfter each scan, identify new critical/high vulnerabilities (first_seen within scan window). Alert immediately on new critical findings. Include CVE details and affected hosts. Integrate with ticketing for automated remediation tracking.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vulnerability sourcetype=\"tenable:vuln\" severity=\"Critical\"\n| where first_seen > relative_time(now(), \"-24h\")\n| table hostname, cve_id, plugin_name, severity, first_seen\n| sort -first_seen\n```\n\nUnderstanding this SPL\n\n**New Vulnerability Detection** — Newly discovered critical vulnerabilities require immediate triage. Alerting ensures rapid response to emerging risks.\n\nDocumented **Data sources**: Scan results (first_seen within last scan window). **App/TA** (typical add-on context): Vuln management TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vulnerability; **sourcetype**: tenable:vuln. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vulnerability, sourcetype=\"tenable:vuln\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where first_seen > relative_time(now(), \"-24h\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **New Vulnerability Detection**): table hostname, cve_id, plugin_name, severity, first_seen\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**New Vulnerability Detection** — Newly discovered critical vulnerabilities require immediate triage. Alerting ensures rapid response to emerging risks.\n\nDocumented **Data sources**: Scan results (first_seen within last scan window). **App/TA** (typical add-on context): Vuln management TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Applies an explicit `search` filter to narrow the current result set.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (new vulnerabilities), Single value (new criticals today), Timeline (discovery events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We get pushed when a serious new problem shows up on a machine so we can jump on it before the risk sits there unnoticed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Vulnerabilities.Vulnerabilities\n  by Vulnerabilities.dest Vulnerabilities.severity Vulnerabilities.cve\n| search Vulnerabilities.severity IN (\"critical\",\"high\")\n| sort -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.8",
              "n": "Cisco Duo Policy Allow Old Flash",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a Duo administrator creates or updates a policy to allow the use of outdated Flash components, specifically by detecting policy changes with the flash_remediation=no remediation attribute. It leverages Duo activity logs ingested via the Cisco Security Cloud App, searching for policy_update or policy_create actions and parsing the policy description for indicators of weakened security controls. This behavior is significant for a SOC because permit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Administrator",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Duo Administrator ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Policy Allow Old Flash\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Administrator. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Policy Allow Old Flash\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies instances where a Duo administrator creates or updates a policy to allow the use of outdated Flash components, specifically by detecting policy changes with the flash_remediation=no remediation attribute, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.9",
              "n": "CrushFTP Server Side Template Injection",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic is designed to identify attempts to exploit a server-side template injection vulnerability in CrushFTP, designated as CVE-2024-4040. This severe vulnerability enables unauthenticated remote attackers to access and read files beyond the VFS Sandbox, circumvent authentication protocols, and execute arbitrary commands on the affected server. The issue impacts all versions of CrushFTP up to 10.7.1 and 11.1.0 on all supported platforms. It is highly recommended to apply patches immediat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "CrushFTP",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "CrushFTP Session logs, from Windows or Linux, must be ingested to Splunk. Currently, there is no TA for CrushFTP, so the data must be extracted from the raw logs.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited, however tune or filter as needed.",
              "refs": "https://github.com/airbus-cert/CVE-2024-4040, https://www.bleepingcomputer.com/news/security/crushftp-warns-users-to-patch-exploited-zero-day-immediately/",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"CrushFTP Server Side Template Injection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: CrushFTP. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"CrushFTP Server Side Template Injection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited, however tune or filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic is designed to identify attempts to exploit a server-side template injection vulnerability in CrushFTP, designated as, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.10",
              "n": "ESXi SSH Enabled",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies SSH being enabled on ESXi hosts, which can be an early indicator of malicious activity. Threat actors often use SSH to gain persistent remote access after compromising credentials or exploiting vulnerabilities.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed. Some Administrators may use SSH for troubleshooting.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1021.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi SSH Enabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi SSH Enabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed. Some Administrators may use SSH for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
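              "_ex": "Illustrative sketch (not part of the ESCU content): because each firing only adds risk, you can watch an entity's cumulative score approach the Notable threshold. Assumes the default ES risk index name (index=risk) and a hypothetical host value; substitute your own entity.\n\n```spl\nindex=risk risk_object=\"esxi01.example.com\" earliest=-7d\n| stats sum(risk_score) as total_risk values(search_name) as contributing_detections by risk_object\n```",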
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies remote login being enabled on ESXi hosts, which can be an early indicator of malicious activity, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.11",
              "n": "Ivanti VTM New Account Creation",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects potential exploitation of the Ivanti Virtual Traffic Manager (vTM) authentication bypass vulnerability (CVE-2024-7593) to create new administrator accounts. The vulnerability allows unauthenticated remote attackers to bypass authentication on the admin panel and create new admin users. This detection looks for suspicious new account creation events in the Ivanti vTM audit logs that lack expected authentication details, which may indicate exploitation attempts.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Ivanti VTM Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$MODUSER$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Ivanti VTM Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate new account creation by authorized administrators will generate similar log entries. However, those should include proper authentication details. Verify any detected events against expected administrative activities and authorized user lists.",
              "refs": "https://nvd.nist.gov/vuln/detail/CVE-2024-7593",
              "mitre": [
                "T1190"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ivanti VTM New Account Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Ivanti VTM Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ivanti VTM New Account Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate new account creation by authorized administrators will generate similar log entries. However, those should include proper authentication details. Verify any detected events against expected administrative activities and authorized user lists.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
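              "_ex": "A minimal sketch of the lookup-based suppression pattern recommended above, assuming a hypothetical approved_vtm_admins lookup with a user column of known-good accounts; append before the detection's final filter macro so approved activity is dropped while coverage is preserved.\n\n```spl\n| lookup approved_vtm_admins user OUTPUT user as approved_user\n| where isnull(approved_user)\n```",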
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic detects potential exploitation of the Ivanti Virtual Traffic Manager (vTM) authentication bypass vulnerability to create new administrator accounts, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.12",
              "n": "MCP Github Suspicious Operation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies potentially malicious activity through MCP GitHub server connections, monitoring for secret hunting in code searches, organization and repository reconnaissance, branch protection abuse, CI/CD workflow manipulation, sensitive file access, and vulnerability intelligence gathering. These patterns indicate potential supply chain attacks, credential harvesting, or pre-attack reconnaissance.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "MCP Server",
              "q": "`mcp_server` direction=inbound\n    | eval dest=host\n    | eval\n        query_lower=lower('params.query'),\n        file_path_lower=lower('params.path'),\n        search_query='params.query',\n        file_path='params.path',\n        target_owner='params.owner',\n        is_secret_hunting=if(method=\"search_code\" AND (like(query_lower, \"%password%\") OR like(query_lower, \"%api_key%\") OR like(query_lower, \"%secret%\") OR like(query_lower, \"%token%\") OR like(query_lower, \"%aws_%\") OR like(query_lower, \"%private_key%\") OR like(query_lower, \"%credential%\") OR like(query_lower, \"%.env%\") OR like(query_lower, \"%config%\")), 1, 0),\n        is_org_recon=if(method IN (\"list_repositories\", \"get_repository\", \"get_organization\", \"list_organization_members\", \"get_collaborators\", \"list_forks\", \"fork_repository\"), 1, 0),\n        is_branch_protection_abuse=if(method IN (\"update_branch_protection\", \"delete_branch_protection\"), 1, 0),\n        is_workflow_manipulation=if((method IN (\"create_or_update_file\", \"push_files\")) AND like(file_path_lower, \"%github/workflows%\"), 1, 0),\n        is_sensitive_file_access=if((method IN (\"create_or_update_file\", \"push_files\", \"get_file_contents\")) AND (like(file_path_lower, \"%dockerfile%\") OR like(file_path_lower, \"%package.json%\") OR like(file_path_lower, \"%requirements.txt%\") OR like(file_path_lower, \"%.env%\") OR like(file_path_lower, \"%settings.py%\") OR like(file_path_lower, \"%config%\")), 1, 0),\n        is_issue_intel=if(method IN (\"list_issues\", \"search_issues\") AND (like(query_lower, \"%vulnerability%\") OR like(query_lower, \"%cve%\") OR like(query_lower, \"%security%\") OR like(query_lower, \"%exploit%\") OR like(query_lower, \"%bug%\")), 1, 0)\n    | where is_secret_hunting=1 OR is_org_recon=1 OR is_branch_protection_abuse=1 OR is_workflow_manipulation=1 OR is_sensitive_file_access=1 OR is_issue_intel=1\n    | eval attack_type=case(\n        is_secret_hunting=1, \"Secret Hunting\",\n        is_branch_protection_abuse=1, \"Branch Protection Abuse\",\n        is_workflow_manipulation=1, \"Workflow Manipulation\",\n        is_sensitive_file_access=1, \"Sensitive File Access\",\n        is_issue_intel=1, \"Vulnerability Intelligence Gathering\",\n        is_org_recon=1, \"Organization Reconnaissance\",\n        1=1, \"Unknown\")\n    | stats count min(_time) as firstTime max(_time) as lastTime values(method) as methods values(search_query) as search_queries values(file_path) as file_paths values(target_owner) as target_owners values(attack_type) as attack_types dc(attack_type) as attack_diversity by dest\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | table dest firstTime lastTime count attack_diversity attack_types methods search_queries file_paths target_owners\n    | `mcp_github_suspicious_operation_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires MCP Server ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate developers searching code for refactoring purposes, security teams conducting authorized secret scanning, DevOps engineers modifying workflow files, and repository administrators managing branch protection settings.",
              "refs": "https://splunkbase.splunk.com/app/8377, https://www.docker.com/blog/mcp-horror-stories-github-prompt-injection/, https://www.splunk.com/en_us/blog/security/securing-ai-agents-model-context-protocol.html",
              "mitre": [
                "T1552.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MCP Github Suspicious Operation\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: MCP Server. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MCP Github Suspicious Operation\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate developers searching code for refactoring purposes, security teams conducting authorized secret scanning, DevOps engineers modifying workflow files, and repository administrators managing branch protection settings.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies potentially malicious activity through MCP GitHub server connections, monitoring for secret hunting in code searches, organization and repository reconnaissance, branch protection abuse, CI/CD workflow manipulation, sensitive file access, and vulnerability intelligence gathering, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.13",
              "n": "MCP Prompt Injection",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies potential prompt injection attempts within MCP (Model Context Protocol) communications by monitoring for known malicious phrases and patterns commonly used to manipulate AI assistants. Prompt injection is a critical vulnerability where adversaries embed hidden instructions in content processed by AI tools, attempting to override system prompts, bypass security controls, or hijack the AI's behavior. The search monitors JSON-RPC traffic for phrases such as \"IGNORE PREVIOU…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "MCP Server",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object=\"$dest$\" starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires MCP Server ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Known false positives include security research and testing activities where red teams or developers intentionally test prompt injection defenses, as well as educational content where documentation, tutorials, or training materials discussing prompt injection techniques are legitimately processed by the AI assistant. Additionally, security tool development involving code reviews or development of prompt injection detection mechanisms may contain these patterns, and quoted references in conversations where users discuss or report prompt injection attempts they encountered elsewhere could trigger this detection.",
              "refs": "https://splunkbase.splunk.com/app/8377, https://www.tenable.com/blog/mcp-prompt-injection-not-just-for-evil, https://www.splunk.com/en_us/blog/security/securing-ai-agents-model-context-protocol.html",
              "mitre": [
                "T1059"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MCP Prompt Injection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: MCP Server. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MCP Prompt Injection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Known false positives include security research and testing activities where red teams or developers intentionally test prompt injection defenses, as well as educational content where documentation, tutorials, or training materials discussing prompt injection techniques are legitimately processed by the AI assistant. Additionally, security tool development involving code reviews or development of prompt injection detection mechanisms may contain these patterns, and quoted references in conversations where users discuss or report prompt injection attempts they encountered elsewhere could trigger this detection.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
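              "_ex": "The SPL above is a drilldown that expects a $dest$ token from Incident Review. To run it ad hoc, substitute a concrete entity (the host name below is a placeholder):\n\n```spl\n| from datamodel Risk.All_Risk\n| search normalized_risk_object=\"mcp-host-01\" starthoursago=168\n| stats count values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" by normalized_risk_object\n```",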
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies potential prompt injection attempts within MCP (Model Context Protocol) communications by monitoring for known malicious phrases and patterns commonly used to manipulate AI assistants, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.14",
              "n": "No Windows Updates in a time frame",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies Windows endpoints that have not generated an event indicating a successful Windows update in the last 60 days. It leverages the 'Update' data model in Splunk, specifically looking for the latest 'Installed' status events from Microsoft Windows. This activity is significant for a SOC because endpoints that are not regularly patched are vulnerable to known exploits and security vulnerabilities. If confirmed malicious, this could indicate a compromised endpoint tha…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` max(_time) as lastTime FROM datamodel=Updates\n      WHERE Updates.status=Installed Updates.vendor_product=\"Microsoft Windows\"\n      BY Updates.dest Updates.status Updates.vendor_product\n    | rename Updates.dest as Host\n    | rename Updates.status as \"Update Status\"\n    | rename Updates.vendor_product as Product\n    | eval isOutlier=if(lastTime <= relative_time(now(), \"-60d@d\"), 1, 0)\n    | `security_content_ctime(lastTime)`\n    | search isOutlier=1\n    | rename lastTime as \"Last Update Time\",\n    | table Host, \"Update Status\", Product, \"Last Update Time\"\n    | `no_windows_updates_in_a_time_frame_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"No Windows Updates in a time frame\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"No Windows Updates in a time frame\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies Windows endpoints that have not generated an event indicating a successful Windows update in the last 60 days, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.15",
              "n": "Ollama Possible API Endpoint Scan Reconnaissance",
              "c": "high",
              "f": "intermediate",
              "v": "Detects API reconnaissance and endpoint scanning activity against Ollama servers by identifying sources probing multiple API endpoints within short timeframes, particularly when using HEAD requests or accessing diverse endpoint paths, which indicates systematic enumeration to map the API surface, discover hidden endpoints, or identify vulnerabilities before launching targeted attacks.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Ollama Server",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Ollama Server ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate web application clients or mobile apps that access multiple API endpoints as part of normal functionality, monitoring and health check systems probing various endpoints for availability, load balancers performing health checks across different paths, API testing frameworks during development and QA processes, or users navigating through web interfaces that trigger multiple API calls may generate similar patterns during normal operations.",
              "refs": "https://github.com/rosplk/ta-ollama",
              "mitre": [
                "T1595"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ollama Possible API Endpoint Scan Reconnaissance\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Ollama Server. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1595. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ollama Possible API Endpoint Scan Reconnaissance\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate web application clients or mobile apps that access multiple API endpoints as part of normal functionality, monitoring and health check systems probing various endpoints for availability, load balancers performing health checks across different paths, API testing frameworks during development and QA processes, or users navigating through web interfaces that trigger multiple API calls may generate similar patterns during normal operations.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
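              "_ex": "A rough ad-hoc variant of the scanning logic for environments without ES, assuming TA-ollama's ollama:api sourcetype with CIM Web fields (http_method, uri_path, src); the one-hour window and the threshold of 10 distinct endpoints are example values to tune.\n\n```spl\nsourcetype=ollama:api http_method=HEAD earliest=-1h\n| stats dc(uri_path) as distinct_endpoints values(uri_path) as probed_paths by src\n| where distinct_endpoints > 10\n```",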
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch reconnaissance and endpoint scanning activity against Ollama servers by identifying sources probing multiple endpoints within short timeframes, particularly when using HEAD requests or accessing diverse endpoint paths, which indicates systematic enumeration to map the surface, discover hidden endpoints, or identify vulnerabilities before launching targeted attacks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.16",
              "n": "Ollama Possible RCE via Model Loading",
              "c": "high",
              "f": "intermediate",
              "v": "Detects Ollama server errors and failures during model loading operations that may indicate malicious model injection, path traversal attempts, or exploitation of model loading mechanisms to achieve remote code execution. Adversaries may attempt to load specially crafted malicious models or exploit vulnerabilities in the model loading process to execute arbitrary code on the server. This detection monitors error messages and failure patterns that could signal attempts to abuse model loading func…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Ollama Server",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$host$\", starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Ingest Ollama logs via Splunk TA-ollama add-on by configuring file monitoring inputs pointed to your Ollama server log directories (sourcetype: ollama:server), or enable HTTP Event Collector (HEC) for real-time API telemetry and prompt analytics (sourcetypes: ollama:api, ollama:prompts). CIM compatibility using the Web datamodel for standardized security detections.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Corrupted model files from interrupted downloads, insufficient disk space or memory during legitimate model loading, incompatible model formats or versions, network timeouts when pulling models from registries, file permission issues in multi-user environments, or genuine configuration errors during initial Ollama setup may generate similar error patterns during normal operations.",
              "refs": "https://github.com/rosplk/ta-ollama",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ollama Possible RCE via Model Loading\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Ollama Server. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ollama Possible RCE via Model Loading\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Corrupted model files from interrupted downloads, insufficient disk space or memory during legitimate model loading, incompatible model formats or versions, network timeouts when pulling models from registries, file permission issues in multi-user environments, or genuine configuration errors during initial Ollama setup may generate similar error patterns during normal operations.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch Ollama server errors and failures during model loading operations that may indicate malicious model injection, path traversal attempts, or exploitation of model loading mechanisms to achieve remote code execution, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.17",
              "n": "Suspicious Java Classes",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious Java classes often used for remote command execution exploits in Java frameworks like Apache Struts. It detects this activity by analyzing HTTP POST requests with specific content patterns using Splunk's `stream_http` data source. This behavior is significant because it may indicate an attempt to exploit vulnerabilities in web applications, potentially leading to unauthorized remote code execution. If confirmed malicious, this activity could allow att…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`stream_http` http_method=POST http_content_length>1\n      | regex form_data=\"(?i)java\\.lang\\.(?:runtime\n      | processbuilder)\"\n      | rename src as src\n      | stats count earliest(_time) as firstTime, latest(_time) as lastTime, values(url) as uri, values(status) as status, values(http_user_agent) as http_user_agent\n        BY src, dest\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `suspicious_java_classes_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There are no known false positives.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "system",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Java Classes\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Java Classes\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There are no known false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
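              "_spl_note": "Hedged refinement of the generic validation search above: scoping the _audit query to this detection’s saved-search name confirms that this specific rule is firing. The name pattern is an assumption (ESCU typically prefixes correlation searches with \"ESCU - \"); confirm the exact name in Content Management first.\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Suspicious Java Classes*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by ss_name\n| convert ctime(firstTime) ctime(lastTime)\n```",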
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies suspicious Java classes often used for remote command execution exploits in Java frameworks like Apache Struts, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.18",
              "n": "Amazon EKS Kubernetes cluster scan detection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects unauthenticated requests to an Amazon EKS Kubernetes cluster, specifically identifying actions by the \"system:anonymous\" user. It leverages AWS CloudWatch Logs data, focusing on user agents and authentication details. This activity is significant as it may indicate unauthorized scanning or probing of the Kubernetes cluster, which could be a precursor to an attack. If confirmed malicious, this could lead to unauthorized access, data exfiltration, or disruption of se…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`aws_cloudwatchlogs_eks` \"user.username\"=\"system:anonymous\" userAgent!=\"AWS Security Scanner\"\n      | rename sourceIPs{} as src\n      | stats count min(_time) as firstTime max(_time) as lastTime values(responseStatus.reason) values(source) as cluster_name values(responseStatus.code) values(userAgent) as http_user_agent values(verb) values(requestURI)\n        BY src user.username user.groups{}\n      | `security_content_ctime(lastTime)`\n      | `security_content_ctime(firstTime)`\n      | `amazon_eks_kubernetes_cluster_scan_detection_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Not all unauthenticated requests are malicious, but frequency, UA and source IPs will provide context.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1526"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Amazon EKS Kubernetes cluster scan detection\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1526. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Amazon EKS Kubernetes cluster scan detection\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Not all unauthenticated requests are malicious, but frequency, UA and source IPs will provide context.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects unauthenticated requests to an Amazon clusters Kubernetes cluster, specifically identifying actions by the \"system:anonymous\" user, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.19",
              "n": "Amazon EKS Kubernetes Pod scan detection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects unauthenticated requests made against the Kubernetes Pods API, indicating potential unauthorized access attempts. It leverages the `aws_cloudwatchlogs_eks` data source, filtering for events where `user.username` is \"system:anonymous\", `verb` is \"list\", and `objectRef.resource` is \"pods\", with `requestURI` set to \"/api/v1/pods\". This activity is significant as it may signal attempts to access sensitive resources or execute unauthorized commands within the Kubernetes…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`aws_cloudwatchlogs_eks` \"user.username\"=\"system:anonymous\" verb=list objectRef.resource=pods requestURI=\"/api/v1/pods\"\n      | rename source as cluster_name sourceIPs{} as src\n      | stats count min(_time) as firstTime max(_time) as lastTime values(responseStatus.reason) values(responseStatus.code) values(userAgent) values(verb) values(requestURI)\n        BY src cluster_name user.username\n           user.groups{}\n      | `security_content_ctime(lastTime)`\n      | `security_content_ctime(firstTime)`\n      | `amazon_eks_kubernetes_pod_scan_detection_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Not all unauthenticated requests are malicious, but frequency, UA and source IPs and direct request to API provide context.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1526"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Amazon EKS Kubernetes Pod scan detection\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1526. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Amazon EKS Kubernetes Pod scan detection\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Not all unauthenticated requests are malicious, but frequency, UA and source IPs and direct request to API provide context.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects unauthenticated requests made against the Kubernetes Pods, indicating potential unauthorized access attempts, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.20",
              "n": "AWS ECR Container Scanning Findings High",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies high-severity findings from AWS Elastic Container Registry (ECR) image scans. It detects these activities by analyzing AWS CloudTrail logs for the DescribeImageScanFindings event, specifically filtering for findings with a high severity level. This activity is significant for a SOC because high-severity vulnerabilities in container images can lead to potential exploitation if not addressed. If confirmed malicious, attackers could exploit these vulnerabilities to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DescribeImageScanFindings",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail DescribeImageScanFindings ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning.html",
              "mitre": [
                "T1204.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS ECR Container Scanning Findings High\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DescribeImageScanFindings. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS ECR Container Scanning Findings High\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
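              "_spl_note": "The drilldown SPL above only reads the Risk data model. As a hedged sketch, the underlying CloudTrail search looks roughly like the following; the index, sourcetype, and findings path are assumptions (ECR scan results are nested under responseElements), so open the detection in Content Management for the authoritative logic:\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventSource=ecr.amazonaws.com eventName=DescribeImageScanFindings\n| spath output=findings path=responseElements.imageScanFindings.findings{}\n| mvexpand findings\n| spath input=findings\n| search severity=HIGH\n| stats count by name severity\n```",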
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies high-severity findings from AWS Elastic Container Registry (ECR) image scans, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.21",
              "n": "AWS ECR Container Scanning Findings Low Informational Unknown",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies low, informational, or unknown severity findings from AWS Elastic Container Registry (ECR) image scans. It leverages AWS CloudTrail logs, specifically the DescribeImageScanFindings event, to detect these findings. This activity is significant for a SOC as it helps in early identification of potential vulnerabilities or misconfigurations in container images, which could be exploited if left unaddressed. If confirmed malicious, these findings could lead to unautho…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DescribeImageScanFindings",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail DescribeImageScanFindings ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning.html",
              "mitre": [
                "T1204.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS ECR Container Scanning Findings Low Informational Unknown\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DescribeImageScanFindings. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS ECR Container Scanning Findings Low Informational Unknown\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
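              "_spl_note": "Low, informational, and unknown findings rarely justify per-event triage; a hedged alternative is to trend finding volume per repository and watch for growth instead. The repository field path is an assumption for a typical DescribeImageScanFindings event; verify with spath on a sample:\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventName=DescribeImageScanFindings\n| timechart span=1d count by requestParameters.repositoryName\n```",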
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies low, informational, or unknown severity findings from AWS Elastic Container Registry (ECR) image scans, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.22",
              "n": "AWS ECR Container Scanning Findings Medium",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies medium-severity findings from AWS Elastic Container Registry (ECR) image scans. It leverages AWS CloudTrail logs, specifically the DescribeImageScanFindings event, to detect vulnerabilities in container images. This activity is significant for a SOC as it highlights potential security risks in containerized applications, which could be exploited if not addressed. If confirmed malicious, these vulnerabilities could lead to unauthorized access, data breaches, or f…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DescribeImageScanFindings",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail DescribeImageScanFindings ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning.html",
              "mitre": [
                "T1204.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS ECR Container Scanning Findings Medium\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DescribeImageScanFindings. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS ECR Container Scanning Findings Medium\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
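              "_spl_note": "A hedged sketch of the lookup-based suppression suggested above, applied inside the detection’s filter macro. The lookup file and field name are illustrative placeholders, not ESCU content:\n\n```spl\nsearch NOT [| inputlookup ecr_medium_accepted_repos.csv | fields repositoryName | format]\n```\n\nPopulate ecr_medium_accepted_repos.csv with repositories whose medium-severity findings are documented accepted risk, and review it periodically so suppressions do not outlive their justification.",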
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies medium-severity findings from AWS Elastic Container Registry (ECR) image scans, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.23",
              "n": "AWS Excessive Security Scanning",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies excessive security scanning activities in AWS by detecting a high number of Describe, List, or Get API calls from a single user. It leverages AWS CloudTrail logs to count distinct event names and flags users with more than 50 such events. This behavior is significant as it may indicate reconnaissance activities by an attacker attempting to map out your AWS environment. If confirmed malicious, this could lead to unauthorized access, data exfiltration, or further …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives.",
              "refs": "https://github.com/aquasecurity/cloudsploit",
              "mitre": [
                "T1526"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Excessive Security Scanning\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1526. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Excessive Security Scanning\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
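              "_spl_note": "Hedged sketch of the detection’s core idea, counting distinct read-only API calls per user, for ad-hoc verification outside ES. The 50-call threshold mirrors the description above; index, sourcetype, and the user field are assumptions for a typical CloudTrail ingest:\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" (eventName=Describe* OR eventName=List* OR eventName=Get*)\n| stats dc(eventName) as distinct_api_calls values(eventName) as api_calls by user\n| where distinct_api_calls > 50\n```",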
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies excessive security scanning activities in AWS by detecting a high number of Describe, List, or Get calls from a single user, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.24",
              "n": "Azure AD AzureHound UserAgent Detected",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the presence of the default AzureHound user-agent string within Microsoft Graph Activity logs and NonInteractive SignIn Logs. AzureHound is a tool used for gathering information about Azure Active Directory environments, often employed by security professionals for legitimate auditing purposes. However, it can also be leveraged by malicious actors to perform reconnaissance activities, mapping out the Azure AD infrastructure to identify potential vulnerabilities and targ…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory NonInteractiveUserSignInLogs, Azure Active Directory MicrosoftGraphActivityLogs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory NonInteractiveUserSignInLogs, Azure Active Directory MicrosoftGraphActivityLogs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/SpecterOps/AzureHound, https://splunkbase.splunk.com/app/3110, https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/Install/",
              "mitre": [
                "T1087.004",
                "T1526"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD AzureHound UserAgent Detected\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory NonInteractiveUserSignInLogs, Azure Active Directory MicrosoftGraphActivityLogs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.004, T1526. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD AzureHound UserAgent Detected\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
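              "_spl_note": "Hedged ad-hoc check for the same indicator outside the correlation search. The sourcetype follows the Splunk Add-on for Microsoft Cloud Services referenced by this entry, while the category values and user-agent pattern (AzureHound’s default string starts with \"azurehound\") are assumptions to verify against your data:\n\n```spl\nsourcetype=azure:monitor:aad category IN (\"NonInteractiveUserSignInLogs\", \"MicrosoftGraphActivityLogs\") userAgent=\"azurehound*\"\n| stats count min(_time) as firstTime max(_time) as lastTime values(category) as categories by userAgent\n| convert ctime(firstTime) ctime(lastTime)\n```",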
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies the presence of the default AzureHound user-agent string within Microsoft Graph Activity logs and NonInteractive SignIn Logs, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.25",
              "n": "Azure AD Privileged Graph API Permission Assigned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the assignment of high-risk Graph API permissions in Azure AD, specifically Application.ReadWrite.All, AppRoleAssignment.ReadWrite.All, and RoleManagement.ReadWrite.Directory. It uses azure_monitor_aad data to scan AuditLogs for 'Update application' operations, identifying when these permissions are assigned. This activity is significant as it grants broad control over Azure AD, including application and directory settings. If confirmed malicious, it could lead to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Update application",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the latest version of Splunk Add-on for Microsoft Cloud Services from Splunkbase (https://splunkbase.splunk.com/app/3110/#/details). You must be ingesting Azure Active Directory events into your Splunk environment. This analytic was written to be used with the azure:monitor:aad sourcetype leveraging the AuditLog log category.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Privileged Graph API permissions may be assigned for legitimate purposes. Filter as needed.",
              "refs": "https://cloudbrothers.info/en/azure-attack-paths/, https://github.com/mandiant/Mandiant-Azure-AD-Investigator/blob/master/MandiantAzureADInvestigator.json, https://learn.microsoft.com/en-us/graph/permissions-reference, https://www.microsoft.com/en-us/security/blog/2024/01/25/midnight-blizzard-guidance-for-responders-on-nation-state-attack/, https://posts.specterops.io/azure-privilege-escalation-via-azure-api-permissions-abuse-74aee1006f48",
              "mitre": [
                "T1003.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Privileged Graph API Permission Assigned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Update application. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Privileged Graph API Permission Assigned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Privileged Graph API permissions may be assigned for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the assignment of high-risk Graph permissions in Azure AD, specifically Application.ReadWrite.All, AppRoleAssignment.ReadWrite.All, and RoleManagement.ReadWrite.Directory, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.26",
              "n": "Circle CI Disable Security Step",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the disablement of security steps in a CircleCI pipeline. It leverages CircleCI logs, using field renaming, joining, and statistical analysis to identify instances where mandatory security steps are not executed. This activity is significant because disabling security steps can introduce vulnerabilities, unauthorized changes, or malicious code into the pipeline. If confirmed malicious, this could lead to potential attacks, data breaches, or compromised infrastructu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "CircleCI",
              "q": "`circleci`\n      | rename workflows.job_id AS job_id\n      | join max=1 job_id [\n      | search `circleci`\n      | stats values(name) as step_names count\n        BY job_id job_name ]\n      | stats count\n        BY step_names job_id job_name\n           vcs.committer_name vcs.subject vcs.url\n           owners{}\n      | rename vcs.* as * , owners{} as user\n      | lookup mandatory_step_for_job job_name OUTPUTNEW step_name AS mandatory_step\n      | search mandatory_step=*\n      | eval mandatory_step_executed=if(like(step_names, \"%\".mandatory_step.\"%\"), 1, 0)\n      | where mandatory_step_executed=0\n      | rex field=url \"(?<repository>[^\\/]*\\/[^\\/]*)$\"\n      | eval phase=\"build\"\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `circle_ci_disable_security_step_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires CircleCI ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1554"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Circle CI Disable Security Step\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: CircleCI. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1554. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Circle CI Disable Security Step\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the disablement of security steps in a CircleCI pipeline, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.27",
              "n": "GCP Kubernetes cluster pod scan detection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies unauthenticated requests to Kubernetes cluster pods. It detects this activity by analyzing GCP Pub/Sub messages for audit logs where the response status code is 401, indicating unauthorized access attempts. This activity is significant for a SOC because it may indicate reconnaissance or scanning attempts by an attacker trying to identify vulnerable pods. If confirmed malicious, this activity could lead to unauthorized access, allowing the attacker to exploit vul…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`google_gcp_pubsub_message` category=kube-audit\n      | spath input=properties.log\n      | search responseStatus.code=401\n      | table sourceIPs{} userAgent verb requestURI responseStatus.reason properties.pod\n      | `gcp_kubernetes_cluster_pod_scan_detection_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Not all unauthenticated requests are malicious, but frequency, User Agent, source IPs and pods  will provide context.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1526"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GCP Kubernetes cluster pod scan detection\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1526. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GCP Kubernetes cluster pod scan detection\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Not all unauthenticated requests are malicious, but frequency, User Agent, source IPs and pods  will provide context.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies unauthenticated requests to Kubernetes cluster pods, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.28",
              "n": "GitHub Enterprise Disable Dependabot",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a user disables Dependabot security features within a GitHub repository. Dependabot helps automatically identify and fix security vulnerabilities in dependencies. The detection monitors GitHub Enterprise logs for configuration changes that disable Dependabot functionality. This behavior could indicate an attacker attempting to prevent the automatic detection of vulnerable dependencies, which would allow them to exploit known vulnerabilities that would otherwis…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Enterprise Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Enterprise Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610, https://docs.github.com/en/enterprise-cloud@latest/admin/monitoring-activity-in-your-enterprise/reviewing-audit-logs-for-your-enterprise/streaming-the-audit-log-for-your-enterprise#setting-up-streaming-to-splunk",
              "mitre": [
                "T1562.001",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Enterprise Disable Dependabot\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Enterprise Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Enterprise Disable Dependabot\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects when a user disables Dependabot security features within a GitHub repository, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.29",
              "n": "GitHub Organizations Disable Dependabot",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a user disables Dependabot security features within a GitHub repository. Dependabot helps automatically identify and fix security vulnerabilities in dependencies. The detection monitors GitHub Enterprise logs for configuration changes that disable Dependabot functionality. This behavior could indicate an attacker attempting to prevent the automatic detection of vulnerable dependencies, which would allow them to exploit known vulnerabilities that would otherwis…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Organizations Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Organizations Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunk.github.io/splunk-add-on-for-github-audit-log-monitoring/Install/, https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610",
              "mitre": [
                "T1562.001",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Organizations Disable Dependabot\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Organizations Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Organizations Disable Dependabot\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects when a user disables Dependabot security features within a GitHub repository, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.30",
              "n": "Kubernetes Access Scanning",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential scanning activities within a Kubernetes environment. It identifies unauthorized access attempts, probing of public APIs, or attempts to exploit known vulnerabilities by monitoring Kubernetes audit logs for repeated failed access attempts or unusual API requests. This activity is significant for a SOC as it may indicate an attacker's preliminary reconnaissance to gather information about the system. If confirmed malicious, this activity could lead to unaut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1046"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Access Scanning\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1046. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Access Scanning\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects potential scanning activities within a Kubernetes environment, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.31",
              "n": "Kubernetes Previously Unseen Container Image Name",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of containerized workloads using previously unseen images in a Kubernetes cluster. It leverages process metrics from an OTEL collector and Kubernetes cluster receiver, pulled from Splunk Observability Cloud. The detection compares container image names seen in the last hour with those from the previous 30 days. This activity is significant as unfamiliar container images may introduce vulnerabilities, malware, or misconfigurations, posing threats to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats  count(k8s.container.ready) as k8s.container.ready_count where `kubernetes_metrics` AND earliest=-24h by host.name k8s.cluster.name k8s.node.name container.image.name\n    | eval current=\"True\"\n    | append [mstats  count(k8s.container.ready) as k8s.container.ready_count where `kubernetes_metrics` AND earliest=-30d latest=-1h  by host.name k8s.cluster.name k8s.node.name container.image.name\n    | eval current=\"false\" ]\n    | stats values(current) as current\n      BY host.name k8s.cluster.name k8s.node.name\n         container.image.name\n    | search current=\"true\" AND current!=\"false\"\n    | rename host.name as host\n    | `kubernetes_previously_unseen_container_image_name_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Previously Unseen Container Image Name\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Previously Unseen Container Image Name\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies the creation of containerized workloads using previously unseen images in a Kubernetes cluster, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.32",
              "n": "Kubernetes Scanner Image Pulling",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the pulling of known Kubernetes security scanner images such as kube-hunter, kube-bench, and kube-recon. It leverages Kubernetes logs ingested through Splunk Connect for Kubernetes, specifically monitoring for messages indicating the pulling of these images. This activity is significant because the use of security scanners can indicate an attempt to identify vulnerabilities within the Kubernetes environment. If confirmed malicious, this could lead to the discovery …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$host$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/splunk/splunk-connect-for-kubernetes",
              "mitre": [
                "T1526"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Scanner Image Pulling\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1526. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Scanner Image Pulling\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
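\n\nIllustrative Raw-Data Sketch\n\nThe packaged ESCU correlation search remains the authoritative detection; the sketch below only illustrates the underlying idea for ad-hoc validation in Search. The index, sourcetype, and search terms are assumptions (a hypothetical `kubernetes` index receiving Kubernetes object events); rename them to match your ingestion.\n\n```spl\nindex=kubernetes sourcetype=kube:objects:events \"Pulling image\" (\"kube-hunter\" OR \"kube-bench\" OR \"kube-recon\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host source\n```",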
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the pulling of known Kubernetes security scanner images such as kube-hunter, kube-bench, and kube-recon, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.33",
              "n": "Kubernetes Scanning by Unauthenticated IP Address",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential scanning activities within a Kubernetes environment by unauthenticated IP addresses. It leverages Kubernetes audit logs to detect multiple unauthorized access attempts (HTTP 403 responses) from the same source IP. This activity is significant as it may indicate an attacker probing for vulnerabilities or attempting to exploit known issues. If confirmed malicious, such scanning could lead to unauthorized access, data breaches, or further exploitation of …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1046"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Scanning by Unauthenticated IP Address\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1046. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Scanning by Unauthenticated IP Address\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
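\n\nIllustrative Raw-Data Sketch\n\nThe packaged ESCU correlation search remains the authoritative detection; as a rough sketch of the idea, repeated HTTP 403 responses from an unauthenticated source can be approximated directly against raw audit events. The index, sourcetype, field names, and threshold below are assumptions; align them with your Kubernetes audit ingestion.\n\n```spl\nindex=kubernetes sourcetype=kube:apiserver:audit responseStatus.code=403 user.username=\"system:anonymous\"\n| stats count min(_time) as firstTime max(_time) as lastTime by \"sourceIPs{}\"\n| where count > 5\n```",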
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies potential scanning activities within a Kubernetes environment by unauthenticated IP addresses, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.34",
              "n": "Advanced IP or Port Scanner Execution",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Advanced IP or Port Scanner Execution. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://news.sophos.com/en-us/2019/12/09/snatch-ransomware-reboots-pcs-into-safe-mode-to-bypass-protection/, https://cloud.google.com/blog/topics/threat-intelligence/tactics-techniques-procedures-associated-with-maze-ransomware-incidents/, https://assets.documentcloud.org/documents/20444693/fbi-pin-egregor-ransomware-bc-01062021.pdf, https://thedfirreport.com/2021/01/18/all-that-for-a-coinminer, https://github.com/3CORESec/MAL-CL/tree/master/Descriptors/Other/Advanced%20IP%20Scanner",
              "mitre": [
                "T1046",
                "T1135"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Advanced IP or Port Scanner Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1046, T1135. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Advanced IP or Port Scanner Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
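\n\nIllustrative Raw-Data Sketch\n\nThe shipped ESCU correlation search is authoritative; as a rough sketch of the idea, process executions can be checked for the scanner binaries via the Endpoint data model (assuming it is CIM-populated and accelerated). The process names below are illustrative assumptions, not the packaged matching logic.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name IN (\"advanced_ip_scanner.exe\", \"advanced_port_scanner.exe\")\n  by Processes.dest Processes.user Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",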
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Advanced IP or Port Scanner Execution, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.35",
              "n": "Attacker Tools On Endpoint",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of tools commonly exploited by cybercriminals, such as those used for unauthorized access, network scanning, or data exfiltration. It leverages process activity data from Endpoint Detection and Response (EDR) agents, focusing on known attacker tool names. This activity is significant because it serves as an early warning system for potential security incidents, enabling prompt response. If confirmed malicious, this activity could lead to unauthorized …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some administrator activity can be potentially triggered, please add those users to the filter macro.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1003",
                "T1036.005",
                "T1595"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Attacker Tools On Endpoint\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003, T1036.005, T1595. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Attacker Tools On Endpoint\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some administrator activity can be potentially triggered, please add those users to the filter macro.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
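\n\nIllustrative Raw-Data Sketch\n\nThe packaged detection matches process activity against a curated list of known attacker tool names; the sketch below substitutes a short illustrative list so the idea can be validated ad hoc (assuming an accelerated, CIM-populated Endpoint data model). The tool names are assumptions; the shipped correlation search remains authoritative.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name IN (\"nmap.exe\", \"netscan.exe\", \"psexec.exe\", \"mimikatz.exe\")\n  by Processes.dest Processes.user Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",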
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the execution of tools commonly exploited by cybercriminals, such as those used for unauthorized access, network scanning, or data exfiltration, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.36",
              "n": "Child Processes of Spoolsv exe",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies child processes spawned by spoolsv.exe, the Print Spooler service in Windows, which typically runs with SYSTEM privileges. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and parent process relationships. Monitoring this activity is crucial as it can indicate exploitation attempts, such as those associated with CVE-2018-8440, which can lead to privilege escalation. If confirmed malicious, attackers could gain …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count values(Processes.process_name) as process_name values(Processes.process) as process min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.parent_process_name=spoolsv.exe\n        AND\n        Processes.process_name!=regsvr32.exe\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `child_processes_of_spoolsv_exe_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legitimate printer-related processes may show up as children of spoolsv.exe. You should confirm that any activity as legitimate and may be added as exclusions in the search.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1068"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Child Processes of Spoolsv exe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Child Processes of Spoolsv exe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legitimate printer-related processes may show up as children of spoolsv.exe. You should confirm that any activity as legitimate and may be added as exclusions in the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
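\n\nIllustrative Raw-Data Sketch\n\nThe tstats search documented for this use case reflects the packaged detection logic; for quick triage without data model acceleration, a raw Sysmon search can approximate it. The index and sourcetype here are assumptions; point them at wherever your Sysmon events land.\n\n```spl\nindex=sysmon sourcetype=XmlWinEventLog EventCode=1 ParentImage=\"*spoolsv.exe\" NOT Image=\"*regsvr32.exe\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host Image CommandLine User\n```",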
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies child processes spawned by spoolsv.exe, the Print Spooler service in Windows, which typically runs with SYSTEM privileges, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.37",
              "n": "Cisco Isovalent - Pods Running Offensive Tools",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects execution of known offensive tooling from within Kubernetes pods, including network scanners and post-exploitation frameworks (e.g., nmap, masscan, zmap, impacket-*, hashcat, john, SharpHound, kube-hunter, peirates). We have created a macro named `linux_offsec_tool_processes` that contains the list of known offensive tooling found on linux systems. Adversaries commonly introduce these tools into compromised workloads to conduct discovery, lateral movement, credenti…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Isovalent Process Exec",
              "q": "# Shared SPL: intentional — see UC-10.2.31\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$pod_name$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Isovalent Process Exec ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Security testing, approved red team exercises, or sanctioned diagnostics can trigger this analytic. Coordinate allowlists and maintenance windows with platform/SecOps teams. Please update a macro named `linux_offsec_tool_processes` that contains the list of known offensive tooling found on linux systems if your environment has additional known offensive tools that are not included in the macro.",
              "refs": "https://dev.to/thenjdevopsguy/attacking-a-kubernetes-cluster-enter-red-team-mode-2onj, https://www.reddit.com/r/kubernetes/comments/l6e5yr/one_of_our_kubernetes_containers_was_compromised/",
              "mitre": [
                "T1204.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Isovalent - Pods Running Offensive Tools\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Isovalent Process Exec. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Isovalent - Pods Running Offensive Tools\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Security testing, approved red team exercises, or sanctioned diagnostics can trigger this analytic. Coordinate allowlists and maintenance windows with platform/SecOps teams. Please update a macro named `linux_offsec_tool_processes` that contains the list of known offensive tooling found on linux systems if your environment has additional known offensive tools that are not included in the macro.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
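\n\nMacro Coverage Check\n\nWhen extending the `linux_offsec_tool_processes` macro, a quick way to confirm that a new tool name is covered is to run the macro against a synthetic event. This sketch assumes the macro expands to a predicate over `process_name`; adjust the field if your macro is defined differently.\n\n```spl\n| makeresults\n| eval process_name=\"kube-hunter\"\n| search `linux_offsec_tool_processes`\n```\n\nIf the synthetic row survives the final search command, the macro covers that tool name.",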
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects execution of known offensive tooling from within Kubernetes pods, including network scanners and post-exploitation frameworks (nmap, masscan, zmap, impacket-*, hashcat, john, SharpHound, kube-hunter, peirates), and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.38",
              "n": "ConnectWise ScreenConnect Path Traversal",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the ConnectWise ScreenConnect CVE-2024-1708 vulnerability, which allows path traversal attacks by manipulating file_path and file_name parameters in the URL. It leverages the Endpoint datamodel Filesystem node to identify suspicious file system events, specifically targeting paths and filenames associated with ScreenConnect. This activity is significant as it can lead to unauthorized access to sensitive files and directories, potentially resulti…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This analytic utilizes the Endpoint datamodel Filesystem node to identify path traversal attempts against ScreenConnect. Note that using SACL auditing or other file system monitoring tools may also be used to detect path traversal attempts. Typically the data for this analytic will come from EDR or other properly CIM mapped data sources.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are not expected, as the detection is based on the presence of file system events that indicate path traversal attempts. The analytic may be modified to look for any file writes to this path as it is not common for files to write here.",
              "refs": "https://www.huntress.com/blog/a-catastrophe-for-control-understanding-the-screenconnect-authentication-bypass, https://www.huntress.com/blog/detection-guidance-for-connectwise-cwe-288-2, https://www.connectwise.com/company/trust/security-bulletins/connectwise-screenconnect-23.9.8",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ConnectWise ScreenConnect Path Traversal\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ConnectWise ScreenConnect Path Traversal\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are not expected, as the detection is based on the presence of file system events that indicate path traversal attempts. The analytic may be modified to look for any file writes to this path as it is not common for files to write here.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
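\n\nIllustrative Raw-Data Sketch\n\nThe shipped ESCU correlation search is authoritative; as a rough sketch of the idea, Sysmon EventID 11 (FileCreate) events can be checked for writes under the ScreenConnect extensions path. The index, sourcetype, and path pattern are assumptions; adjust them to your Sysmon ingestion and ScreenConnect install location.\n\n```spl\nindex=sysmon sourcetype=XmlWinEventLog EventCode=11 TargetFilename=\"*ScreenConnect*App_Extensions*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host Image TargetFilename\n```",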
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the ConnectWise ScreenConnect vulnerability, which allows path traversal attacks by manipulating file_path and file_name parameters in the URL, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.39",
              "n": "ConnectWise ScreenConnect Path Traversal Windows SACL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the ConnectWise ScreenConnect CVE-2024-1708 vulnerability using Windows SACL EventCode 4663. It identifies path traversal attacks by monitoring file system events related to the ScreenConnect service. This activity is significant as it allows unauthorized access to sensitive files and directories, potentially leading to data exfiltration or arbitrary code execution. If confirmed malicious, attackers could gain unauthorized access to critical dat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To implement the following query, enable SACL auditing for the ScreenConnect directory(ies). With this data, the following analytic will work correctly. A GIST is provided in the references to assist with enabling SACL Auditing.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as the analytic is specific to ScreenConnect path traversal attempts. Tune as needed, or restrict to specific hosts if false positives are encountered.",
              "refs": "https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4663, https://www.huntress.com/blog/a-catastrophe-for-control-understanding-the-screenconnect-authentication-bypass, https://www.huntress.com/blog/detection-guidance-for-connectwise-cwe-288-2, https://www.connectwise.com/company/trust/security-bulletins/connectwise-screenconnect-23.9.8",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ConnectWise ScreenConnect Path Traversal Windows SACL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ConnectWise ScreenConnect Path Traversal Windows SACL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as the analytic is specific to ScreenConnect path traversal attempts. Tune as needed, or restrict to specific hosts if false positives are encountered.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the ConnectWise ScreenConnect vulnerability using Windows SACL EventCode 4663, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.40",
              "n": "Control Loading from World Writable Directory",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances of control.exe loading a .cpl or .inf file from a writable directory, which is related to CVE-2021-40444. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions mapped to the `Processes` node of the `Endpoint` data model. This activity is significant as it may indicate an attempt to exploit a known vulnerability, potentially leading to unauthorized code execution. If confir…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives will be present as control.exe does not natively load from writable paths as defined. One may add .cpl or .inf to the command-line if there is any false positives. Tune as needed.",
              "refs": "https://strontic.github.io/xcyclopedia/library/rundll32.exe-111474C61232202B5B588D2B512CBB25.html, https://app.any.run/tasks/36c14029-9df8-439c-bba0-45f2643b0c70/, https://attack.mitre.org/techniques/T1218/011/, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-40444, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.002/T1218.002.yaml",
              "mitre": [
                "T1218.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Control Loading from World Writable Directory\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Control Loading from World Writable Directory\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives will be present as control.exe does not natively load from writable paths as defined. One may add .cpl or .inf to the command-line if there is any false positives. Tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies instances of control.exe loading.cpl or.inf file from a writable directory, which is related to, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.41",
              "n": "Crowdstrike Admin Weak Password Policy",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects CrowdStrike alerts for admin weak password policy violations, identifying instances where administrative passwords do not meet security standards. These alerts highlight significant vulnerabilities that could be exploited by attackers to gain unauthorized access. Promptly addressing these alerts is crucial for maintaining robust security and protecting critical systems and data from potential threats.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.crowdstrike.com/wp-content/uploads/2022/12/CrowdStrike-Falcon-Event-Streams-Add-on-Guide-v3.pdf",
              "mitre": [
                "T1110"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Crowdstrike Admin Weak Password Policy\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Crowdstrike Admin Weak Password Policy\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects CrowdStrike alerts for admin weak password policy violations, identifying instances where administrative passwords do not meet security standards, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.42",
              "n": "Crowdstrike High Identity Risk Severity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects CrowdStrike alerts for High Identity Risk Severity with a risk score of 70 or higher. These alerts indicate significant vulnerabilities in user identities, such as suspicious behavior or compromised credentials. Promptly investigating and addressing these alerts is crucial to prevent potential security breaches and ensure the integrity and protection of sensitive information and systems.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.crowdstrike.com/wp-content/uploads/2022/12/CrowdStrike-Falcon-Event-Streams-Add-on-Guide-v3.pdf",
              "mitre": [
                "T1110"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Crowdstrike High Identity Risk Severity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Crowdstrike High Identity Risk Severity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects CrowdStrike alerts for High Identity Risk Severity with a risk score of 70 or higher, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.43",
              "n": "Crowdstrike Medium Identity Risk Severity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects CrowdStrike alerts for Medium Identity Risk Severity with a risk score of 55 or higher. These alerts indicate significant vulnerabilities in user identities, such as suspicious behavior or compromised credentials. Promptly investigating and addressing these alerts is crucial to prevent potential security breaches and ensure the integrity and protection of sensitive information and systems.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.crowdstrike.com/wp-content/uploads/2022/12/CrowdStrike-Falcon-Event-Streams-Add-on-Guide-v3.pdf",
              "mitre": [
                "T1110"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Crowdstrike Medium Identity Risk Severity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Crowdstrike Medium Identity Risk Severity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects CrowdStrike alerts for Medium Identity Risk Severity with a risk score of 55 or higher, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.44",
              "n": "Crowdstrike User Weak Password Policy",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects CrowdStrike alerts for weak password policy violations, identifying instances where passwords do not meet the required security standards. These alerts highlight potential vulnerabilities that could be exploited by attackers, emphasizing the need for stronger password practices. Addressing these alerts promptly helps to enhance overall security and protect sensitive information from unauthorized access.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.crowdstrike.com/wp-content/uploads/2022/12/CrowdStrike-Falcon-Event-Streams-Add-on-Guide-v3.pdf",
              "mitre": [
                "T1110"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Crowdstrike User Weak Password Policy\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Crowdstrike User Weak Password Policy\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects CrowdStrike alerts for weak password policy violations, identifying instances where passwords do not meet the required security standards, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.45",
              "n": "Detect Baron Samedit CVE-2021-3156",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the Baron Samedit vulnerability (CVE-2021-3156) by identifying the use of the \"sudoedit -s \\\\\" command. This detection leverages logs from Linux systems, specifically searching for instances of the sudoedit command with the \"-s\" flag followed by a double quote. This activity is significant because it indicates an attempt to exploit a known vulnerability that allows attackers to gain root privileges. If confirmed malicious, this could lead to com…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`linux_hosts` \"sudoedit -s \\\\\" | `detect_baron_samedit_cve_2021_3156_filter`",
              "m": "Splunk Universal Forwarder running on Linux systems, capturing logs from the /var/log directory. The vulnerability is exposed when a non privledged user tries passing in a single \\ character at the end of the command while using the shell and edit flags.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1068"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Baron Samedit CVE-2021-3156\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Baron Samedit CVE-2021-3156\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the Baron Samedit vulnerability by identifying the use of the \"sudoedit -s \\\\\" command, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.46",
              "n": "Detect Baron Samedit CVE-2021-3156 Segfault",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a heap-based buffer overflow in sudoedit by detecting Linux logs containing both \"sudoedit\" and \"segfault\" terms. This detection leverages Splunk to monitor for more than five occurrences of these terms on a single host within a specified timeframe. This activity is significant because exploiting this vulnerability (CVE-2021-3156) can allow attackers to gain root privileges, leading to potential system compromise, unauthorized access, and data breaches. If confi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`linux_hosts` TERM(sudoedit) TERM(segfault)\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY host\n      | where count > 5\n      | `detect_baron_samedit_cve_2021_3156_segfault_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "If sudoedit is throwing segfaults for other reasons this will pick those up too.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1068"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Baron Samedit CVE-2021-3156 Segfault\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Baron Samedit CVE-2021-3156 Segfault\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: If sudoedit is throwing segfaults for other reasons this will pick those up too.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
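              "_sketch": "The `linux_hosts` and `detect_baron_samedit_cve_2021_3156_segfault_filter` macros in the SPL above are defined by ESCU. To trial the logic outside Enterprise Security, a minimal macro-free sketch — `index=os` is an assumed location for Linux syslog; substitute your own:\n\n```spl\nindex=os TERM(sudoedit) TERM(segfault)\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n| where count > 5\n```",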
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies a heap-based buffer overflow in sudoedit by detecting Linux logs containing both \"sudoedit\" and \"segfault\" terms, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.47",
              "n": "Detect Baron Samedit CVE-2021-3156 via OSQuery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the \"sudoedit -s *\" command, which is associated with the Baron Samedit CVE-2021-3156 heap-based buffer overflow vulnerability. This detection leverages the `osquery_process` data source to identify instances where this specific command is run. This activity is significant because it indicates an attempt to exploit a known vulnerability that allows privilege escalation. If confirmed malicious, an attacker could gain full control of the system, exec…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`osquery_process` | search \"columns.cmdline\"=\"sudoedit -s \\\\*\" | `detect_baron_samedit_cve_2021_3156_via_osquery_filter`",
              "m": "OSQuery installed and configured to pick up process events (info at https://osquery.io) as well as using the Splunk OSQuery Add-on https://splunkbase.splunk.com/app/4402. The vulnerability is exposed when a non privledged user tries passing in a single \\ character at the end of the command while using the shell and edit flags.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1068"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Baron Samedit CVE-2021-3156 via OSQuery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Baron Samedit CVE-2021-3156 via OSQuery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
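              "_sketch": "The `osquery_process` macro in the SPL above is defined by ESCU. To trial the logic outside Enterprise Security, a macro-free sketch — the index and sourcetype are assumptions; check what the Splunk OSQuery Add-on actually writes in your deployment:\n\n```spl\nindex=main sourcetype=osquery:results \"columns.cmdline\"=\"sudoedit -s \\\\*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n```",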
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the execution of the \"sudoedit -s *\" command, which is associated with the Baron Samedit heap-based buffer overflow vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.48",
              "n": "Disable AMSI Through Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable the Antimalware Scan Interface (AMSI) by setting the \"AmsiEnable\" value to \"0x00000000\". This detection leverages data from the Endpoint.Registry data model, specifically monitoring changes to the registry path \"*\\\\SOFTWARE\\\\Microsoft\\\\Windows Script\\\\Settings\\\\AmsiEnable\". Disabling AMSI is significant as it is a common technique used by ransomware, Remote Access Trojans (RATs), and Advanced Persistent Threats (AP…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network operator may disable this feature of windows but not so common.",
              "refs": "https://blog.f-secure.com/hunting-for-amsi-bypasses/, https://gist.github.com/rxwx/8955e5abf18dc258fd6b43a3a7f4dbf9",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disable AMSI Through Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disable AMSI Through Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network operator may disable this feature of windows but not so common.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
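              "_sketch": "The detection logic itself ships inside the ESCU Correlation Search; a minimal sketch of the documented condition over the CIM Endpoint.Registry data model (a sketch under CIM-field assumptions, not the shipped search — verify data model acceleration before relying on tstats):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*\\\\SOFTWARE\\\\Microsoft\\\\Windows Script\\\\Settings\\\\AmsiEnable\" AND Registry.registry_value_data=\"0x00000000\"\n  by Registry.dest Registry.registry_path Registry.registry_value_data\n```",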
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects modifications to the Windows registry that disable the Antimalware Scan Interface (AMSI) by setting the \"AmsiEnable\" value to \"0x00000000\", and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.49",
              "n": "Disabling Windows Local Security Authority Defences via Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the deletion of registry keys that disable Local Security Authority (LSA) protection and Microsoft Defender Device Guard. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on registry actions and paths associated with LSA and Device Guard settings. This activity is significant because disabling these defenses can leave a system vulnerable to various attacks, including credential theft and unauthorized code execution. If confirmed mali…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Potential to be triggered by an administrator disabling protections for troubleshooting purposes.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/security/credentials-protection-and-management/configuring-additional-lsa-protection, https://learn.microsoft.com/en-us/windows/security/identity-protection/credential-guard/credential-guard-manage",
              "mitre": [
                "T1556"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disabling Windows Local Security Authority Defences via Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disabling Windows Local Security Authority Defences via Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Potential to be triggered by an administrator disabling protections for troubleshooting purposes.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
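              "_sketch": "The detection logic itself ships inside the ESCU Correlation Search; a minimal sketch of the documented condition — deletion of the LSA-protection (RunAsPPL) and Device Guard value paths — over CIM Endpoint.Registry. The registry paths here are assumptions based on the documented defenses; confirm them against the ESCU definition:\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Registry\n  where Registry.action=deleted AND (Registry.registry_path=\"*\\\\Control\\\\Lsa\\\\RunAsPPL\" OR Registry.registry_path=\"*\\\\DeviceGuard\\\\EnableVirtualizationBasedSecurity\")\n  by Registry.dest Registry.registry_path Registry.action\n```",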
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies the deletion of registry keys that disable Local Security Authority (LSA) protection and Microsoft Defender Device Guard, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.50",
              "n": "Download Files Using Telegram",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious file downloads by the Telegram application on a Windows system. It leverages Sysmon EventCode 15 to identify instances where Telegram.exe creates files with a Zone.Identifier, indicating a download. This activity is significant as it may indicate an adversary using Telegram to download malicious tools, such as network scanners, for further exploitation. If confirmed malicious, this behavior could lead to network mapping, lateral movement, and potential c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 15",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 15 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "normal download of file in telegram app. (if it was a common app in network)",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1105"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Download Files Using Telegram\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 15. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Download Files Using Telegram\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: normal download of file in telegram app. (if it was a common app in network)\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
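              "_sketch": "A minimal raw-event sketch of the documented condition — Telegram.exe creating a file with a Zone.Identifier stream (Sysmon EventCode 15). The index and sourcetype are assumptions; adjust to your Sysmon TA configuration:\n\n```spl\nindex=windows sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=15 Image=\"*\\\\Telegram.exe\" TargetFilename=\"*:Zone.Identifier\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, Image, TargetFilename\n```",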
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects suspicious file downloads by the Telegram application on a Windows system, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.51",
              "n": "Execute Javascript With Jscript COM CLSID",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of JavaScript using the JScript.Encode CLSID (COM Object) by cscript.exe. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names, command-line executions, and parent processes. This activity is significant as it is a known technique used by ransomware, such as Reddot, to execute malicious scripts and potentially disable AMSI (Antimalware Scan Interface). If confirmed malicious, this behavior could allow attacker…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://app.any.run/tasks/c0f98850-af65-4352-9746-fbebadee4f05/",
              "mitre": [
                "T1059.005"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Execute Javascript With Jscript COM CLSID\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Execute Javascript With Jscript COM CLSID\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
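              "_sketch": "The detection logic itself ships inside the ESCU Correlation Search; a minimal sketch over CIM Endpoint.Processes for cscript.exe invoking the JScript.Encode COM class by CLSID (the GUID below is the well-known JScript.Encode registration — treat it as an assumption and confirm against the ESCU definition):\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=cscript.exe AND Processes.parent_process=\"*-e:{F414C262-6AC0-11CF-B6D1-00AA00BBBB58}*\"\n  by Processes.dest Processes.user Processes.parent_process\n```",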
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the execution of JavaScript using the JScript.Encode CLSID (COM Object) by cscript.exe, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.52",
              "n": "Hunting 3CXDesktopApp Software",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the presence of any version of the 3CXDesktopApp, also known as the 3CX Desktop App, on Mac or Windows systems. It leverages the Endpoint data model's Processes node to identify instances of the application running, although it does not provide file version information. This activity is significant because 3CX has identified vulnerabilities in versions 18.12.407 and 18.12.416, which could be exploited by attackers. If confirmed malicious, this could lead to unautho…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name=3CXDesktopApp.exe\n        OR\n        Processes.process_name=\"3CX Desktop App\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `hunting_3cxdesktopapp_software_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There may be false positives generated due to the reliance on version numbers for identification purposes. Despite this limitation, the primary goal of this approach is to aid in the detection of the software within the environment.",
              "refs": "https://www.sentinelone.com/blog/smoothoperator-ongoing-campaign-trojanizes-3cx-software-in-software-supply-chain-attack/, https://www.cisa.gov/news-events/alerts/2023/03/30/supply-chain-attack-against-3cxdesktopapp, https://www.reddit.com/r/crowdstrike/comments/125r3uu/20230329_situational_awareness_crowdstrike/, https://www.3cx.com/community/threads/crowdstrike-endpoint-security-detection-re-3cx-desktop-app.119934/page-2#post-558898, https://www.3cx.com/community/threads/3cx-desktopapp-security-alert.119951/",
              "mitre": [
                "T1195.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Hunting 3CXDesktopApp Software\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1195.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Hunting 3CXDesktopApp Software\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There may be false positives generated due to the reliance on version numbers for identification purposes. Despite this limitation, the primary goal of this approach is to aid in the detection of the software within the environment.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the presence of any version of the 3CXDesktopApp, also known as the 3CX Desktop App, on Mac or Windows systems, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.53",
              "n": "Linux pkexec Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of `pkexec` without any command-line arguments. This behavior leverages data from Endpoint Detection and Response (EDR) agents, focusing on process telemetry. The significance lies in the fact that this pattern is associated with the exploitation of CVE-2021-4034 (PwnKit), a critical vulnerability in Polkit's pkexec component. If confirmed malicious, this activity could allow an attacker to gain full root privileges on the affected Linux system, leadi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://www.reddit.com/r/crowdstrike/comments/sdfeig/20220126_cool_query_friday_hunting_pwnkit_local/, https://man.archlinux.org/man/pkexec.1, https://www.bleepingcomputer.com/news/security/linux-system-service-bug-gives-root-on-all-major-distros-exploit-released/, https://access.redhat.com/security/security-updates/#/?q=polkit&p=1&sort=portal_publication_date%20desc&rows=10&portal_advisory_type=Security%20Advisory&documentKind=PortalProduct",
              "mitre": [
                "T1068"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux pkexec Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux pkexec Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the execution of without any command-line arguments, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.54",
              "n": "Linux Telnet Authentication Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "Detects an authentication bypass in telnet tracked as CVE-2026-24061. An attacker can supply a specifically crafted USER environment variable (-f root) that is passed to /usr/bin/login. Because this input isn't sanitized an attacker can force the system to skip authentication and login directly as root.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is rare for the telnetd to spawn login process with these arguments.",
              "refs": "https://www.safebreach.com/blog/safebreach-labs-root-cause-analysis-and-poc-exploit-for-cve-2026-24061/",
              "mitre": [
                "T1548"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Telnet Authentication Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Telnet Authentication Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is rare for the telnetd to spawn login process with these arguments.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch an authentication bypass in telnet tracked as, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.55",
              "n": "Log4Shell CVE-2021-44228 Exploitation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential exploitation of Log4Shell CVE-2021-44228 by correlating multiple MITRE ATT&CK tactics detected in risk events. It leverages Splunk's risk data model to calculate the distinct count of MITRE ATT&CK tactics from Log4Shell-related detections. This activity is significant because it indicates a high probability of exploitation if two or more distinct tactics are observed. If confirmed malicious, this activity could lead to initial payload delivery, callbac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There are no known false positive for this search, but it could contain false positives as multiple detections can trigger and not have successful exploitation.",
              "refs": "https://research.splunk.com/stories/log4shell_cve-2021-44228/, https://www.splunk.com/en_us/blog/security/simulating-detecting-and-responding-to-log4shell-with-splunk.html",
              "mitre": [
                "T1105",
                "T1190",
                "T1059",
                "T1133"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Log4Shell CVE-2021-44228 Exploitation\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105, T1190, T1059, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Log4Shell CVE-2021-44228 Exploitation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There are no known false positive for this search, but it could contain false positives as multiple detections can trigger and not have successful exploitation.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies potential exploitation of Log4Shell by correlating multiple known attack techniques tactics detected in risk events, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.56",
              "n": "MOVEit Certificate Store Access Failure",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies potential exploitation attempts of the CVE-2024-5806 vulnerability in Progress MOVEit Transfer. It looks for log entries indicating failures to access the certificate store, which can occur when an attacker attempts to exploit the authentication bypass vulnerability. This behavior is a key indicator of attempts to impersonate valid users without proper credentials. While certificate store access failures can occur during normal operations, an unusual increase in such ev…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`moveit_sftp_logs` \"IpWorksKeyService: Caught exception of type IPWorksSSHException: The certificate store could not be opened\"\n      | stats count\n        BY source _raw\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `moveit_certificate_store_access_failure_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur, therefore utilize the analytic as a jump off point to identifiy potential certificate store errors.",
              "refs": "https://labs.watchtowr.com/auth-bypass-in-un-limited-scenarios-progress-moveit-transfer-cve-2024-5806/",
              "mitre": [
                "T1190"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MOVEit Certificate Store Access Failure\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MOVEit Certificate Store Access Failure\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur, therefore utilize the analytic as a jump off point to identifiy potential certificate store errors.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies potential exploitation attempts of the vulnerability in Progress MOVEit Transfer, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.57",
              "n": "MOVEit Empty Key Fingerprint Authentication Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies attempts to authenticate with an empty public key fingerprint in Progress MOVEit Transfer, which is a key indicator of potential exploitation of the CVE-2024-5806 vulnerability. Such attempts are characteristic of the authentication bypass technique used in this vulnerability, where attackers try to impersonate valid users without providing proper credentials. While occasional empty key fingerprint authentication attempts might occur due to misconfigurations, a sudden i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`moveit_sftp_logs` \"UserAuthRequestHandler: SftpPublicKeyAuthenticator: Attempted to authenticate empty public key fingerprint\"\n      | stats count\n        BY source _raw\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `moveit_empty_key_fingerprint_authentication_attempt_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur, therefore utilize the analytic as a jump off point to identify potential empty key fingerprint authentication attempts.",
              "refs": "https://labs.watchtowr.com/auth-bypass-in-un-limited-scenarios-progress-moveit-transfer-cve-2024-5806/",
              "mitre": [
                "T1190"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MOVEit Empty Key Fingerprint Authentication Attempt\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MOVEit Empty Key Fingerprint Authentication Attempt\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur, therefore utilize the analytic as a jump off point to identify potential empty key fingerprint authentication attempts.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies attempts to authenticate with an empty public key fingerprint in Progress MOVEit Transfer, which is a key indicator of potential exploitation of the vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.58",
              "n": "MSI Module Loaded by Non-System Binary",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the loading of `msi.dll` by a binary not located in `system32`, `syswow64`, `winsxs`, or `windows` directories. This is identified using Sysmon EventCode 7, which logs DLL loads, and filters out legitimate system paths. This activity is significant as it may indicate exploitation of CVE-2021-41379 or DLL side-loading attacks, both of which can lead to unauthorized system modifications. If confirmed malicious, this could allow an attacker to execute arbitrary code, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "`sysmon` EventCode=7 ImageLoaded=\"*\\\\msi.dll\" NOT (Image IN (\"*\\\\System32\\\\*\",\"*\\\\syswow64\\\\*\",\"*\\\\windows\\\\*\", \"*\\\\winsxs\\\\*\")) | fillnull | stats count min(_time) as firstTime max(_time) as lastTime by Image ImageLoaded dest loaded_file loaded_file_path original_file_name process_exec process_guid process_hash process_id process_name process_path service_dll_signature_exists service_dll_signature_verified signature signature_id user_id vendor_product | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `msi_module_loaded_by_non_system_binary_filter`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and imageloaded executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible some Administrative utilities will load msi.dll outside of normal system paths, filter as needed.",
              "refs": "https://attackerkb.com/topics/7LstI2clmF/cve-2021-41379, https://github.com/AlexandrVIvanov/InstallerFileTakeOver, https://github.com/mandiant/red_team_tool_countermeasures/blob/master/rules/PGF/supplemental/hxioc/msi.dll%20Hijack%20(Methodology).ioc",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MSI Module Loaded by Non-System Binary\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MSI Module Loaded by Non-System Binary\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible some Administrative utilities will load msi.dll outside of normal system paths, filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the loading of by a binary not located, or directories, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.59",
              "n": "Outbound Network Connection from Java Using Default Ports",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects outbound network connections from Java processes to default ports used by LDAP and RMI protocols, which may indicate exploitation of the CVE-2021-44228-Log4j vulnerability. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and network traffic logs. Monitoring this activity is crucial as it can signify an attacker’s attempt to perform JNDI lookups and retrieve malicious payloads. If confirmed malicious, this activit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 3 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate Java applications may use perform outbound connections to these ports. Filter as needed",
              "refs": "https://www.lunasec.io/docs/blog/log4j-zero-day/",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Outbound Network Connection from Java Using Default Ports\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Outbound Network Connection from Java Using Default Ports\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate Java applications may use perform outbound connections to these ports. Filter as needed\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects outbound network connections from Java processes to default ports used by and RMI protocols, which may indicate exploitation of-Log4j vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.60",
              "n": "PetitPotam Network Share Access Request",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects network share access requests indicative of the PetitPotam attack (CVE-2021-36942). It leverages Windows Event Code 5145, which logs attempts to access network share objects. This detection is significant as PetitPotam can coerce authentication from domain controllers, potentially leading to unauthorized access. If confirmed malicious, this activity could allow attackers to escalate privileges or move laterally within the network, posing a severe security risk. Ens…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5145",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5145 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives have been limited when the Anonymous Logon is used for Account Name.",
              "refs": "https://attack.mitre.org/techniques/T1187/, https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventid=5145, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-5145",
              "mitre": [
                "T1187"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PetitPotam Network Share Access Request\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5145. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1187. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PetitPotam Network Share Access Request\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives have been limited when the Anonymous Logon is used for Account Name.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects network share access requests indicative of the PetitPotam attack, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.61",
              "n": "PetitPotam Suspicious Kerberos TGT Request",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious Kerberos Ticket Granting Ticket (TGT) request, identified by Event Code 4768. This detection leverages Windows Security Event Logs to identify TGT requests with unusual fields, which may indicate the use of tools like Rubeus following the exploitation of CVE-2021-36942 (PetitPotam). This activity is significant as it can signal an attacker leveraging a compromised certificate to request Kerberos tickets, potentially leading to unauthorized access. If c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4768",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4768 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible if the environment is using certificates for authentication.",
              "refs": "https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventid=4768, https://isc.sans.edu/forums/diary/Active+Directory+Certificate+Services+ADCS+PKI+domain+admin+vulnerability/27668/",
              "mitre": [
                "T1003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PetitPotam Suspicious Kerberos TGT Request\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4768. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PetitPotam Suspicious Kerberos TGT Request\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible if the environment is using certificates for authentication.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects a suspicious Kerberos Ticket Granting Ticket (TGT) request, identified by Event Code 4768, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
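A hedged pre-flight sketch for this detection (not the packaged ESCU logic): before enabling it, gauge how many Kerberos TGT requests (Event Code 4768) your environment produces. The sketch assumes the ESCU `wineventlog_security` macro resolves to your Windows Security index and sourcetype, and that your TA extracts a `user` field; rename to match your deployment.

```spl
`wineventlog_security` EventCode=4768
| stats count min(_time) as firstTime max(_time) as lastTime by user, host
| sort -count
```

High baseline volume here suggests starting with a conservative risk score weight when you enable the RBA detection.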
            {
              "i": "10.6.62",
              "n": "Print Spooler Adding A Printer Driver",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of new printer drivers by monitoring Windows PrintService operational logs, specifically EventCode 316. This detection leverages log data to identify messages indicating the addition or update of printer drivers, such as \"kernelbase.dll\" and \"UNIDRV.DLL.\" This activity is significant as it may indicate exploitation attempts related to vulnerabilities like CVE-2021-34527 (PrintNightmare). If confirmed malicious, attackers could gain code execution or es…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Printservice 316",
              "q": "# Shared SPL: intentional — see UC-10.6.63\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$ComputerName$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Printservice 316 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://twitter.com/MalwareJake/status/1410421445608476679?s=20, https://www.truesec.com/hub/blog/fix-for-printnightmare-cve-2021-1675-exploit-to-keep-your-print-servers-running-while-a-patch-is-not-available, https://www.truesec.com/hub/blog/exploitable-critical-rce-vulnerability-allows-regular-users-to-fully-compromise-active-directory-printnightmare-cve-2021-1675, https://www.reddit.com/r/msp/comments/ob6y02/critical_vulnerability_printnightmare_exposes",
              "mitre": [
                "T1547.012"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Print Spooler Adding A Printer Driver\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Printservice 316. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Print Spooler Adding A Printer Driver\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the addition of new printer drivers by monitoring Windows PrintService operational logs, specifically EventCode 316, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
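Before enabling this detection, it can help to confirm that PrintService Operational events are reaching Splunk at all. A minimal sketch, assuming the default Windows channel name; substitute your actual index for the wildcard:

```spl
index=* source="*Microsoft-Windows-PrintService/Operational" EventCode=316
| stats count by host
| sort -count
```

Zero results usually means the PrintService/Operational log is disabled on the endpoints or the forwarder input is missing.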
            {
              "i": "10.6.63",
              "n": "Print Spooler Failed to Load a Plug-in",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects driver load errors in the Windows PrintService Admin logs, specifically identifying issues related to CVE-2021-34527 (PrintNightmare). It triggers on error messages indicating the print spooler failed to load a plug-in module, such as \"meterpreter.dll,\" with error code 0x45A. This detection method leverages specific event codes and error messages. This activity is significant as it may indicate an exploitation attempt of a known vulnerability. If confirmed maliciou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Printservice 808, Windows Event Log Printservice 4909",
              "q": "# Shared SPL: intentional — see UC-10.6.62\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$ComputerName$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You will need to ensure PrintService Admin and Operational logs are being logged to Splunk from critical or all systems.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are unknown and filtering may be required.",
              "refs": "https://www.truesec.com/hub/blog/fix-for-printnightmare-cve-2021-1675-exploit-to-keep-your-print-servers-running-while-a-patch-is-not-available, https://www.truesec.com/hub/blog/exploitable-critical-rce-vulnerability-allows-regular-users-to-fully-compromise-active-directory-printnightmare-cve-2021-1675, https://www.reddit.com/r/msp/comments/ob6y02/critical_vulnerability_printnightmare_exposes",
              "mitre": [
                "T1547.012"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Print Spooler Failed to Load a Plug-in\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Printservice 808, Windows Event Log Printservice 4909. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Print Spooler Failed to Load a Plug-in\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are unknown and filtering may be required.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects driver load errors in the Windows PrintService Admin logs, specifically identifying issues related to (PrintNightmare), and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
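As with the driver-add detection above, verify that the PrintService Admin channel is flowing before relying on this rule. A minimal sketch under the same assumptions (default channel names, your own index):

```spl
index=* source="*Microsoft-Windows-PrintService/Admin" EventCode IN (808, 4909)
| stats count by host, EventCode
```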
            {
              "i": "10.6.64",
              "n": "Rundll32 Control RunDLL Hunt",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances of rundll32.exe executing with `Control_RunDLL` in the command line, which is indicative of loading a .cpl or other file types. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. This activity is significant as rundll32.exe can be exploited to execute malicious Control Panel Item files, potentially linked to CVE-2021-40444. If confirmed malicious, this could al…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE `process_rundll32` Processes.process=*Control_RunDLL*\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `rundll32_control_rundll_hunt_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This is a hunting detection, meant to provide a understanding of how voluminous control_rundll is within the environment.",
              "refs": "https://strontic.github.io/xcyclopedia/library/rundll32.exe-111474C61232202B5B588D2B512CBB25.html, https://app.any.run/tasks/36c14029-9df8-439c-bba0-45f2643b0c70/, https://attack.mitre.org/techniques/T1218/011/, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-40444, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.002/T1218.002.yaml, https://redcanary.com/blog/intelligence-insights-december-2021/",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Rundll32 Control RunDLL Hunt\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Rundll32 Control RunDLL Hunt\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This is a hunting detection, meant to provide a understanding of how voluminous control_rundll is within the environment.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies instances of rundll32.exe executing with in the command line, which is indicative of loading.cpl or other file types, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
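For a first-pass volume baseline before running the full hunt, a trimmed variant of the documented search may be useful. This is a sketch that assumes the Endpoint data model is accelerated and the ESCU macros (`security_content_summariesonly`, `drop_dm_object_name`) are installed; unlike the full hunt, it does not restrict to rundll32.exe.

```spl
| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes
  where Processes.process="*Control_RunDLL*"
  by Processes.dest Processes.parent_process_name
| `drop_dm_object_name(Processes)`
| sort -count
```

This directly answers the question posed under known false positives: how voluminous Control_RunDLL is, per host and parent process, in your environment.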
            {
              "i": "10.6.65",
              "n": "Rundll32 Control RunDLL World Writable Directory",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of rundll32.exe with the `Control_RunDLL` command, loading files from world-writable directories such as windows\\temp, programdata, or appdata. This detection leverages Endpoint Detection and Response (EDR) telemetry, focusing on process command-line data and specific directory paths. This activity is significant as it may indicate an attempt to exploit CVE-2021-40444 or similar vulnerabilities, allowing attackers to execute arbitrary code. If confirm…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This may be tuned, or a new one related, by adding .cpl to command-line. However, it's important to look for both. Tune/filter as needed.",
              "refs": "https://strontic.github.io/xcyclopedia/library/rundll32.exe-111474C61232202B5B588D2B512CBB25.html, https://app.any.run/tasks/36c14029-9df8-439c-bba0-45f2643b0c70/, https://attack.mitre.org/techniques/T1218/011/, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-40444, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.002/T1218.002.yaml, https://redcanary.com/blog/intelligence-insights-december-2021/",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Rundll32 Control RunDLL World Writable Directory\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Rundll32 Control RunDLL World Writable Directory\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This may be tuned, or a new one related, by adding .cpl to command-line. However, it's important to look for both. Tune/filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the execution of rundll32.exe with the command, loading files from world-writable directories such as windows\\temp, programdata, or appdata, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
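Because the SPL shown for this use case is the RBA drilldown, the block below is a hedged approximation of the underlying endpoint logic, not the packaged ESCU search. It constrains Control_RunDLL executions to the world-writable paths named in the description and assumes an accelerated Endpoint data model plus the ESCU helper macros.

```spl
| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes
  where Processes.process="*Control_RunDLL*"
    Processes.process IN ("*\\windows\\temp\\*", "*\\programdata\\*", "*\\appdata\\*")
  by Processes.dest Processes.user Processes.process
| `drop_dm_object_name(Processes)`
```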
            {
              "i": "10.6.66",
              "n": "SAM Database File Access Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to access the SAM, SYSTEM, or SECURITY database files within the `windows\\system32\\config` directory using Windows Security EventCode 4663. This detection leverages Windows Security Event logs to identify unauthorized access attempts. Monitoring this activity is crucial as it indicates potential credential access attempts, possibly exploiting vulnerabilities like CVE-2021-36934. If confirmed malicious, an attacker could extract user passwords, leading to u…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "`wineventlog_security` (EventCode=4663)  ProcessName!=*\\\\dllhost.exe ObjectName IN (\"*\\\\Windows\\\\System32\\\\config\\\\SAM*\",\"*\\\\Windows\\\\System32\\\\config\\\\SYSTEM*\",\"*\\\\Windows\\\\System32\\\\config\\\\SECURITY*\") | stats values(AccessList) count by ProcessName ObjectName dest src_user | rename ProcessName as process_name | `sam_database_file_access_attempt_filter`",
              "m": "To successfully implement this search, you must ingest Windows Security Event logs and track event code 4663. For 4663, enable \"Audit Object Access\" in Group Policy. Then check the two boxes listed for both \"Success\" and \"Failure.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Natively, `dllhost.exe` will access the files. Every environment will have additional native processes that do as well. Filter by process_name. As an aside, one can remove process_name entirely and add `Object_Name=*ShadowCopy*`.",
              "refs": "https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=4663, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4663, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36934, https://github.com/GossiTheDog/HiveNightmare, https://github.com/JumpsecLabs/Guidance-Advice/tree/main/SAM_Permissions, https://en.wikipedia.org/wiki/Security_Account_Manager",
              "mitre": [
                "T1003.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SAM Database File Access Attempt\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SAM Database File Access Attempt\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Natively, `dllhost.exe` will access the files. Every environment will have additional native processes that do as well. Filter by process_name. As an aside, one can remove process_name entirely and add `Object_Name=*ShadowCopy*`.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to access the SAM, SYSTEM, or SECURITY database files within the directory using Windows Security EventCode 4663, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
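Because EventCode 4663 only appears when object-access auditing is configured, a quick sanity check that the Group Policy settings from the implementation notes have taken effect may save time. A minimal sketch, assuming the same `wineventlog_security` macro used by the main search:

```spl
`wineventlog_security` EventCode=4663
| timechart span=1h count
```

A flat zero line means auditing is not generating events and the hunt above will never return results.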
            {
              "i": "10.6.67",
              "n": "Shim Database File Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of shim database files (.sdb) in default directories using the sdbinst.exe application. It leverages filesystem activity data from the Endpoint.Filesystem data model to identify file writes to the Windows\\AppPatch\\Custom directory. This activity is significant because shims can intercept and alter API calls, potentially allowing attackers to bypass security controls or execute malicious code. If confirmed malicious, this could lead to unauthorized code…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must be ingesting data that records the filesystem activity from your hosts to populate the Endpoint file-system data model node. If you are using Sysmon, you will need a Splunk Universal Forwarder on each endpoint from which you want to collect data.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Because legitimate shim files are created and used all the time, this event, in itself, is not suspicious. However, if there are other correlating events, it may warrant further investigation.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1546.011"
              ],
              "dtype": "file_path",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Shim Database File Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file path entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Shim Database File Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Because legitimate shim files are created and used all the time, this event, in itself, is not suspicious. However, if there are other correlating events, it may warrant further investigation.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the creation of shim database files (.sdb) in default directories using the sdbinst.exe application, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
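The SPL shown here is again the RBA drilldown. Below is a hedged approximation of the underlying filesystem logic, assuming an accelerated Endpoint.Filesystem data model and the ESCU helper macros; the trailing wildcard on the path also catches the Custom64 subdirectory used on 64-bit hosts.

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Filesystem
  where Filesystem.file_path="*\\AppPatch\\Custom*" Filesystem.file_name="*.sdb"
  by Filesystem.dest Filesystem.file_name Filesystem.file_path
| `drop_dm_object_name(Filesystem)`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```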
            {
              "i": "10.6.68",
              "n": "Spoolsv Spawning Rundll32",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the spawning of `rundll32.exe` without command-line arguments by `spoolsv.exe`, which is unusual and potentially indicative of exploitation attempts like CVE-2021-34527 (PrintNightmare). This detection leverages Endpoint Detection and Response (EDR) telemetry, focusing on process creation events where `spoolsv.exe` is the parent process. This activity is significant as `spoolsv.exe` typically does not spawn other processes, and such behavior could indicate an activ…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives have been identified. There are limited instances where `rundll32.exe` may be spawned by a legitimate print driver.",
              "refs": "https://www.truesec.com/hub/blog/fix-for-printnightmare-cve-2021-1675-exploit-to-keep-your-print-servers-running-while-a-patch-is-not-available, https://www.truesec.com/hub/blog/exploitable-critical-rce-vulnerability-allows-regular-users-to-fully-compromise-active-directory-printnightmare-cve-2021-1675, https://www.reddit.com/r/msp/comments/ob6y02/critical_vulnerability_printnightmare_exposes",
              "mitre": [
                "T1547.012"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Spoolsv Spawning Rundll32\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Spoolsv Spawning Rundll32\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives have been identified. There are limited instances where `rundll32.exe` may be spawned by a legitimate print driver.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
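              "_splSketch": "Illustrative sketch only; the authoritative logic ships in the ESCU correlation search (view it in Content Management). This hedged approximation looks for rundll32.exe launched by spoolsv.exe with an empty command line via the CIM Endpoint.Processes data model. Field names assume CIM-normalized Sysmon/4688 data; adjust index scoping and the trailing-arguments regex to your environment.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=\"spoolsv.exe\" Processes.process_name=\"rundll32.exe\"\n  by Processes.dest Processes.parent_process Processes.process\n| rename Processes.* as *\n| where match(process, \"rundll32\\\\.exe\\\\s*$\")\n| convert ctime(firstTime) ctime(lastTime)\n```",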
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the spawning of without command-line arguments by, which is unusual and potentially indicative of exploitation attempts like (PrintNightmare), and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.69",
              "n": "Spoolsv Suspicious Loaded Modules",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious loading of DLLs by spoolsv.exe, potentially indicating PrintNightmare exploitation. It leverages Sysmon EventCode 7 to identify instances where spoolsv.exe loads multiple DLLs from the Windows System32 spool drivers x64 directory. This activity is significant as it may signify an attacker exploiting the PrintNightmare vulnerability to execute arbitrary code. If confirmed malicious, this could lead to unauthorized code execution, privilege escalation,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and imageloaded executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://raw.githubusercontent.com/hieuttmmo/sigma/dceb13fe3f1821b119ae495b41e24438bd97e3d0/rules/windows/image_load/sysmon_cve_2021_1675_print_nightmare.yml",
              "mitre": [
                "T1547.012"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Spoolsv Suspicious Loaded Modules\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Spoolsv Suspicious Loaded Modules\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
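              "_splSketch": "Illustrative sketch only; the authoritative logic ships in the ESCU correlation search. This hedged approximation flags spoolsv.exe loading several DLLs from the spool drivers directory via Sysmon EventCode 7. The sourcetype and the dest field assume the Splunk Add-on for Sysmon, and the >=3 threshold mirrors the description above — tune both to your environment.\n\n```spl\nsourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=7\n  Image=\"*\\\\spoolsv.exe\" ImageLoaded=\"*\\\\spool\\\\drivers\\\\x64\\\\*\"\n| stats min(_time) as firstTime max(_time) as lastTime dc(ImageLoaded) as dll_count values(ImageLoaded) as loaded_dlls by dest Image\n| where dll_count >= 3\n| convert ctime(firstTime) ctime(lastTime)\n```",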
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the suspicious loading of DLLs by spoolsv.exe, potentially indicating PrintNightmare exploitation, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.70",
              "n": "Spoolsv Suspicious Process Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious process access by spoolsv.exe, potentially indicating exploitation of the PrintNightmare vulnerability (CVE-2021-34527). It leverages Sysmon EventCode 10 to identify when spoolsv.exe accesses critical system files or processes like rundll32.exe with elevated privileges. This activity is significant as it may signal an attempt to gain unauthorized privilege escalation on a vulnerable machine. If confirmed malicious, an attacker could achieve elevated priv…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with process access event where SourceImage, TargetImage, GrantedAccess and CallTrace executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA. Tune and filter known instances of spoolsv.exe.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/cube0x0/impacket/commit/73b9466c17761384ece11e1028ec6689abad6818, https://www.truesec.com/hub/blog/fix-for-printnightmare-cve-2021-1675-exploit-to-keep-your-print-servers-running-while-a-patch-is-not-available, https://www.truesec.com/hub/blog/exploitable-critical-rce-vulnerability-allows-regular-users-to-fully-compromise-active-directory-printnightmare-cve-2021-1675, https://www.reddit.com/r/msp/comments/ob6y02/critical_vulnerability_printnightmare_exposes",
              "mitre": [
                "T1068"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Spoolsv Suspicious Process Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Spoolsv Suspicious Process Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
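              "_splSketch": "Illustrative sketch only; the authoritative logic ships in the ESCU correlation search. This hedged approximation surfaces spoolsv.exe opening handles to other processes with broad access rights via Sysmon EventCode 10 (0x1fffff is PROCESS_ALL_ACCESS; the packaged detection applies additional TargetImage and CallTrace conditions). Sourcetype and dest assume the Splunk Add-on for Sysmon.\n\n```spl\nsourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=10\n  SourceImage=\"*\\\\spoolsv.exe\" GrantedAccess=\"0x1fffff\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest SourceImage TargetImage GrantedAccess CallTrace\n| convert ctime(firstTime) ctime(lastTime)\n```",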
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects suspicious process access by spoolsv.exe, potentially indicating exploitation of the PrintNightmare vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.71",
              "n": "Spoolsv Writing a DLL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects `spoolsv.exe` writing a `.dll` file, which is unusual behavior and may indicate exploitation of vulnerabilities like CVE-2021-34527 (PrintNightmare). This detection leverages the Endpoint datamodel, specifically monitoring process and filesystem events to identify `.dll` file creation within the `\\spool\\drivers\\x64\\` path. This activity is significant as it may signify an attacker attempting to execute malicious code via the Print Spooler service. If confirmed mali…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 11, Windows Event Log Security 4688 AND Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node and `Filesystem` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.truesec.com/hub/blog/fix-for-printnightmare-cve-2021-1675-exploit-to-keep-your-print-servers-running-while-a-patch-is-not-available, https://www.truesec.com/hub/blog/exploitable-critical-rce-vulnerability-allows-regular-users-to-fully-compromise-active-directory-printnightmare-cve-2021-1675, https://www.reddit.com/r/msp/comments/ob6y02/critical_vulnerability_printnightmare_exposes",
              "mitre": [
                "T1547.012"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Spoolsv Writing a DLL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 11, Windows Event Log Security 4688 AND Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Spoolsv Writing a DLL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
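              "_splSketch": "Illustrative sketch only; the authoritative logic ships in the ESCU correlation search, which correlates the Endpoint.Processes node (spoolsv.exe) with Endpoint.Filesystem. This simplified, hedged version checks only the Filesystem node for .dll creation under the spool drivers path; CIM-normalized endpoint data is assumed.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Filesystem\n  where Filesystem.file_path=\"*\\\\spool\\\\drivers\\\\x64\\\\*\" Filesystem.file_name=\"*.dll\"\n  by Filesystem.dest Filesystem.file_path Filesystem.file_name\n| rename Filesystem.* as *\n| convert ctime(firstTime) ctime(lastTime)\n```",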
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects writing a file, which is unusual behavior and may indicate exploitation of vulnerabilities like (PrintNightmare), and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.72",
              "n": "Spoolsv Writing a DLL - Sysmon",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects `spoolsv.exe` writing a `.dll` file, which is unusual behavior and may indicate exploitation of vulnerabilities like CVE-2021-34527 (PrintNightmare). This detection leverages Sysmon EventID 11 to monitor file creation events in the `\\spool\\drivers\\x64\\` directory. This activity is significant because `spoolsv.exe` typically does not write DLL files, and such behavior could signify an ongoing attack. If confirmed malicious, this could allow an attacker to execute ar…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA. Tune and filter known instances where renamed rundll32.exe may be used.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives. Filter as needed.",
              "refs": "https://github.com/cube0x0/impacket/commit/73b9466c17761384ece11e1028ec6689abad6818, https://www.truesec.com/hub/blog/fix-for-printnightmare-cve-2021-1675-exploit-to-keep-your-print-servers-running-while-a-patch-is-not-available, https://www.truesec.com/hub/blog/exploitable-critical-rce-vulnerability-allows-regular-users-to-fully-compromise-active-directory-printnightmare-cve-2021-1675, https://www.reddit.com/r/msp/comments/ob6y02/critical_vulnerability_printnightmare_exposes",
              "mitre": [
                "T1547.012"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Spoolsv Writing a DLL - Sysmon\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Spoolsv Writing a DLL - Sysmon\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
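              "_splSketch": "Illustrative sketch only; the authoritative logic ships in the ESCU correlation search. This hedged approximation flags spoolsv.exe creating .dll files in the spool drivers directory via Sysmon EventCode 11. Sourcetype and dest assume the Splunk Add-on for Sysmon; adjust to your environment.\n\n```spl\nsourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=11\n  Image=\"*\\\\spoolsv.exe\" TargetFilename=\"*\\\\spool\\\\drivers\\\\x64\\\\*.dll\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest Image TargetFilename\n| convert ctime(firstTime) ctime(lastTime)\n```",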
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects writing a file, which is unusual behavior and may indicate exploitation of vulnerabilities like (PrintNightmare), and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.73",
              "n": "Suspicious Computer Account Name Change",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious computer account name change in Active Directory. It leverages Event ID 4781, which logs account name changes, to identify instances where a computer account name is changed to one that does not end with a `$`. This behavior is significant as it may indicate an attempt to exploit CVE-2021-42278 and CVE-2021-42287, which can lead to domain controller impersonation and privilege escalation. If confirmed malicious, this activity could allow an attacker to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4781",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$OldTargetUserName$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4781 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Renaming a computer account name to a name that not end with '$' is highly unsual and may not have any legitimate scenarios.",
              "refs": "https://exploit.ph/cve-2021-42287-cve-2021-42278-weaponisation.html, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-42278, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-42287",
              "mitre": [
                "T1078.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Computer Account Name Change\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4781. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Computer Account Name Change\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Renaming a computer account name to a name that not end with '$' is highly unsual and may not have any legitimate scenarios.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
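              "_splSketch": "Illustrative sketch only; the authoritative logic ships in the ESCU correlation search. This hedged approximation finds Event ID 4781 renames where the old name was a computer account (ends in $) but the new name is not. The `wineventlog_security` macro follows the convention used elsewhere in this catalog; substitute your Security event log scoping if it is not defined.\n\n```spl\n`wineventlog_security` EventCode=4781 OldTargetUserName=\"*$\" NewTargetUserName!=\"*$\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, OldTargetUserName, NewTargetUserName\n| convert ctime(firstTime) ctime(lastTime)\n```",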
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects a suspicious computer account name change in Active Directory, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.74",
              "n": "Suspicious Kerberos Service Ticket Request",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious Kerberos Service Ticket (TGS) requests where the requesting account name matches the service name, potentially indicating an exploitation attempt of CVE-2021-42278 and CVE-2021-42287. This detection leverages Event ID 4769 from Domain Controller and Kerberos events. Such activity is significant as it may represent an adversary attempting to escalate privileges by impersonating a domain controller. If confirmed malicious, this could allow an attacker to t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4769",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4769 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "We have tested this detection logic with ~2 million 4769 events and did not identify false positives. However, they may be possible in certain environments. Filter as needed.",
              "refs": "https://exploit.ph/cve-2021-42287-cve-2021-42278-weaponisation.html, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-42278, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-42287, https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-sfu/02636893-7a1f-4357-af9a-b672e3e3de13",
              "mitre": [
                "T1078.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Kerberos Service Ticket Request\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4769. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Kerberos Service Ticket Request\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: We have tested this detection logic with ~2 million 4769 events and did not identify false positives. However, they may be possible in certain environments. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
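              "_splSketch": "Illustrative sketch only; the authoritative logic ships in the ESCU correlation search. This hedged approximation compares the requesting account in Event ID 4769 (realm suffix stripped) against the requested service name; a match is the impersonation pattern associated with CVE-2021-42278/CVE-2021-42287. Field names assume the Splunk Add-on for Windows, and `wineventlog_security` follows this catalog's macro convention.\n\n```spl\n`wineventlog_security` EventCode=4769\n| eval req_account = lower(mvindex(split(TargetUserName, \"@\"), 0))\n| where req_account = lower(ServiceName)\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, TargetUserName, ServiceName\n| convert ctime(firstTime) ctime(lastTime)\n```",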
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects suspicious Kerberos Service Ticket (TGS) requests where the requesting account name matches the service name, potentially indicating an exploitation attempt of and, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.75",
              "n": "Suspicious Ticket Granting Ticket Request",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious Kerberos Ticket Granting Ticket (TGT) requests that may indicate exploitation of CVE-2021-42278 and CVE-2021-42287. It leverages Event ID 4781 (account name change) and Event ID 4768 (TGT request) to identify sequences where a newly renamed computer account requests a TGT. This behavior is significant as it could represent an attempt to escalate privileges by impersonating a Domain Controller. If confirmed malicious, this activity could allow attackers t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4768, Windows Event Log Security 4781",
              "q": "`wineventlog_security` (EventCode=4781 OldTargetUserName=\"*$\" NewTargetUserName!=\"*$\") OR (EventCode=4768 TargetUserName!=\"*$\")\n      | eval RenamedComputerAccount = coalesce(NewTargetUserName, TargetUserName)\n      | transaction RenamedComputerAccount startswith=(EventCode=4781) endswith=(EventCode=4768) maxspan=5m maxpause=2m maxevents=100\n      | eval short_lived=case((duration<2),\"TRUE\")\n      | search short_lived = TRUE\n      | table _time, Computer, EventCode, TargetUserName, RenamedComputerAccount, short_lived\n      | rename Computer as dest\n      | `suspicious_ticket_granting_ticket_request_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log Security 4768, Windows Event Log Security 4781 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A computer account name change event inmediately followed by a kerberos TGT request with matching fields is unsual. However, legitimate behavior may trigger it. Filter as needed.",
              "refs": "https://exploit.ph/cve-2021-42287-cve-2021-42278-weaponisation.html, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-42278, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-42287",
              "mitre": [
                "T1078.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Ticket Granting Ticket Request\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4768, Windows Event Log Security 4781. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Ticket Granting Ticket Request\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A computer account name change event inmediately followed by a kerberos TGT request with matching fields is unsual. However, legitimate behavior may trigger it. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects suspicious Kerberos Ticket Granting Ticket (TGT) requests that may indicate exploitation of and, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.76",
              "n": "Unloading AMSI via Reflection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the tampering of AMSI (Antimalware Scan Interface) via PowerShell reflection. It leverages PowerShell Script Block Logging (EventCode=4104) to capture and analyze suspicious PowerShell commands, specifically those involving `system.management.automation.amsi`. This activity is significant as it indicates an attempt to bypass AMSI, a critical security feature that helps detect and block malicious scripts. If confirmed malicious, this could allow an attacker to execu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Potential for some third party applications to disable AMSI upon invocation. Filter as needed.",
              "refs": "https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/",
              "mitre": [
                "T1059.001",
                "T1562"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Unloading AMSI via Reflection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Unloading AMSI via Reflection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Potential for some third party applications to disable AMSI upon invocation. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
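              "_exampleSpl": "A raw-event sketch of the underlying behavior, not the packaged ESCU correlation search. It assumes the standard ESCU `powershell` macro resolves to your 4104 script-block logs:\n\n```spl\n`powershell` EventCode=4104 ScriptBlockText=\"*system.management.automation.amsi*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, user_id, ScriptBlockText\n```\n\nAny hit on `system.management.automation.amsi` in a script block warrants review against the known false positives above.",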
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the tampering of AMSI (Antimalware Scan Interface) using PowerShell reflection, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.77",
              "n": "Windows Account Discovery With NetUser PreauthNotRequire",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the PowerView PowerShell cmdlet Get-NetUser with the -PreauthNotRequire parameter, leveraging Event ID 4104. This method identifies attempts to query Active Directory user accounts that do not require Kerberos preauthentication. Monitoring this activity is crucial as it can indicate reconnaissance efforts by an attacker to identify potentially vulnerable accounts. If confirmed malicious, this behavior could lead to further exploitation, such as una…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104  ScriptBlockText = \"*Get-NetUser*\" ScriptBlockText = \"*-PreauthNotRequire*\"\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_account_discovery_with_netuser_preauthnotrequire_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage PowerView for legitimate purposes, filter as needed.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-347a",
              "mitre": [
                "T1087"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Account Discovery With NetUser PreauthNotRequire\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Account Discovery With NetUser PreauthNotRequire\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage PowerView for legitimate purposes, filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the execution of the PowerView PowerShell cmdlet Get-NetUser with-PreauthNotRequire parameter, leveraging Event ID 4104, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.78",
              "n": "Windows Cisco Secure Endpoint Uninstall Immunet Service Via Sfc",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the sfc.exe utility with the \"-u\" parameter, which is part of the Cisco Secure Endpoint installation. The \"-u\" flag allows the uninstallation of Cisco Secure Endpoint components. This detection leverages endpoint telemetry to monitor command-line executions that include the \"-u\" parameter. The use of this flag is significant as it could indicate an attempt to disable or remove endpoint protection, potentially leaving the system vulnerable to further expl…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that this action is executed during troubleshooting activity. Activity needs to be confirmed on a case by case basis.",
              "refs": "https://www.cisco.com/c/en/us/support/docs/security/amp-endpoints/213690-amp-for-endpoint-command-line-switches.html",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Cisco Secure Endpoint Uninstall Immunet Service Via Sfc\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Cisco Secure Endpoint Uninstall Immunet Service Via Sfc\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that this action is executed during troubleshooting activity. Activity needs to be confirmed on a case by case basis.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
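              "_exampleSpl": "A CIM-normalized sketch of the underlying behavior, not the packaged ESCU correlation search. It assumes the Endpoint data model is accelerated (drop summariesonly=true if it is not):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=sfc.exe Processes.process=\"* -u*\"\n  by Processes.dest, Processes.user, Processes.process\n```\n\nWindows also ships its own sfc.exe (System File Checker) under system32; check the process path to confirm a hit is the Cisco Secure Endpoint binary.",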
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the use of the sfc.exe utility with the \"-u\" parameter, which is part of the Cisco Secure Endpoint installation, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.79",
              "n": "Windows Credential Target Information Structure in Commandline",
              "c": "high",
              "f": "intermediate",
              "v": "Detects DNS-based Kerberos coercion attacks where adversaries inject marshaled credential structures into DNS records to spoof SPNs and redirect authentication such as in CVE-2025-33073. This detection leverages process creation events looking for specific CREDENTIAL_TARGET_INFORMATION structures.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Commands with all of these base64 encoded values are unusual in production environments. Filter as needed.",
              "refs": "https://web.archive.org/web/20250617122747/https://www.synacktiv.com/publications/ntlm-reflection-is-dead-long-live-ntlm-reflection-an-in-depth-analysis-of-cve-2025, https://www.synacktiv.com/publications/relaying-kerberos-over-smb-using-krbrelayx, https://www.guidepointsecurity.com/blog/the-birth-and-death-of-loopyticket/",
              "mitre": [
                "T1557.001",
                "T1187",
                "T1071.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Credential Target Information Structure in Commandline\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1557.001, T1187, T1071.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Credential Target Information Structure in Commandline\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Commands with all of these base64 encoded values are unusual in production environments. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
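              "_exampleSpl": "A loose heuristic sketch, not the packaged ESCU search (which matches the specific marshaled-structure markers): it flags long base64-like blobs in Sysmon process command lines, assuming the standard `sysmon` macro:\n\n```spl\n`sysmon` EventCode=1\n| regex CommandLine=\"[A-Za-z0-9+/]{60,}={0,2}\"\n| stats count by host, ParentImage, Image, CommandLine\n```\n\nExpect noise from installers and scripts; use it only to scope hosts for deeper review.",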
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch -based Kerberos coercion attacks where adversaries inject marshaled credential structures into records to spoof SPNs and redirect authentication such as, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.80",
              "n": "Windows Detect Network Scanner Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when an application is used to connect a large number of unique ports/targets within a short time frame. Network enumeration may be used by adversaries as a method of discovery, lateral movement, or remote execution. This analytic may require significant tuning depending on the organization and applications being actively used, highly recommended to pre-populate the filter macro prior to activation.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This detection relies on Sysmon EventID 3 events being ingested AND tagged into the Network_Traffic datamodel.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Various, could be noisy depending on processes in the organization and sysmon configuration used. Adjusted port/dest count thresholds as needed.",
              "refs": "https://attack.mitre.org/techniques/T1595",
              "mitre": [
                "T1595.001",
                "T1595.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Detect Network Scanner Behavior\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1595.001, T1595.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Detect Network Scanner Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Various, could be noisy depending on processes in the organization and sysmon configuration used. Adjusted port/dest count thresholds as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
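              "_exampleSpl": "A CIM-normalized sketch of scanner-like fan-out, not the packaged ESCU search. It assumes Sysmon EventID 3 is tagged into an accelerated Network_Traffic data model; the thresholds are illustrative and need tuning per the note above:\n\n```spl\n| tstats summariesonly=true dc(All_Traffic.dest_port) as port_count dc(All_Traffic.dest) as dest_count\n  from datamodel=Network_Traffic.All_Traffic\n  by _time span=1h, All_Traffic.src, All_Traffic.app\n| where port_count > 100 OR dest_count > 100\n```",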
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects when an application is used to connect a large number of unique ports/targets within a short time frame, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.81",
              "n": "Windows Disable LogOff Button Through Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious registry modification that disables the logoff feature on a Windows host. It leverages data from the Endpoint.Registry data model to identify changes to specific registry values associated with logoff functionality. This activity is significant because it can indicate ransomware attempting to make the compromised host unusable and hinder remediation efforts. If confirmed malicious, this action could prevent users from logging off, complicate incident r…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This windows feature may implement by administrator in some server where shutdown is critical. In that scenario filter of machine and users that can modify this registry is needed.",
              "refs": "https://www.hybrid-analysis.com/sample/e2d4018fd3bd541c153af98ef7c25b2bf4a66bc3bfb89e437cde89fd08a9dd7b/5b1f4d947ca3e10f22714774, https://malwiki.org/index.php?title=DigiPop.xp, https://www.trendmicro.com/vinfo/be/threat-encyclopedia/search/js_noclose.e/2",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Disable LogOff Button Through Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Disable LogOff Button Through Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This windows feature may implement by administrator in some server where shutdown is critical. In that scenario filter of machine and users that can modify this registry is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
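              "_exampleSpl": "A CIM-normalized sketch, not the packaged ESCU search. It assumes the registry value of interest is NoLogOff (confirm against the detection's definition in Content Management) and that the Endpoint data model is accelerated:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=NoLogOff\n  by Registry.dest, Registry.registry_path, Registry.registry_value_data\n```",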
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects a suspicious registry modification that disables the logoff feature on a Windows host, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.82",
              "n": "Windows DLL Search Order Hijacking Hunt with Sysmon",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential DLL search order hijacking or DLL sideloading by detecting known Windows libraries loaded from non-standard directories. It leverages Sysmon EventCode 7 to monitor DLL loads and cross-references them with a lookup of known hijackable libraries. This activity is significant as it may indicate an attempt to execute malicious code by exploiting DLL search order vulnerabilities. If confirmed malicious, this could allow attackers to gain code execution, esc…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "`sysmon` EventCode=7 NOT (process_path IN (\"*\\\\system32\\\\*\", \"*\\\\syswow64\\\\*\",\"*\\\\winsxs\\\\*\",\"*\\\\wbem\\\\*\")) | lookup hijacklibs library AS loaded_file OUTPUT islibrary | search islibrary = True | stats count min(_time) as firstTime max(_time) as lastTime by Image ImageLoaded dest loaded_file loaded_file_path original_file_name process_exec process_guid process_hash process_id process_name process_path service_dll_signature_exists service_dll_signature_verified signature signature_id user_id vendor_product | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_dll_search_order_hijacking_hunt_with_sysmon_filter`",
              "m": "The search is written against the latest Sysmon TA 4.0 https://splunkbase.splunk.com/app/5709. For this specific event ID 7, the sysmon TA will extract the ImageLoaded name to the loaded_file field which is used in the search to compare against the hijacklibs lookup.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present based on paths. Filter or add other paths to the exclusion as needed. Some applications may legitimately load libraries from non-standard paths.",
              "refs": "https://hijacklibs.net",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DLL Search Order Hijacking Hunt with Sysmon\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DLL Search Order Hijacking Hunt with Sysmon\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present based on paths. Filter or add other paths to the exclusion as needed. Some applications may legitimately load libraries from non-standard paths.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies potential DLL search order hijacking or DLL sideloading by detecting known Windows libraries loaded from non-standard directories, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.83",
              "n": "Windows DLL Side-Loading In Calc",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the loading of the \"WindowsCodecs.dll\" by calc.exe from a non-standard location This could be indicative of a potential DLL side-loading technique. This detection leverages Sysmon EventCode 7 to identify the DLL side-loading activity. In previous versions of the \"calc.exe\" binary, namely on Windows 7, it was vulnerable to DLL side-loading, where an attacker is able to load an arbitrary DLL named \"WindowsCodecs.dll\". This technique has been observed in Qakbot malwar…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on processes that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` and `Filesystem` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.bitdefender.com/blog/hotforsecurity/new-qakbot-malware-strain-replaces-windows-calculator-dll-to-infected-pcs/, https://www.menlosecurity.com/blog/an-anatomy-of-heat-attacks-used-by-qakbot-campaigns",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DLL Side-Loading In Calc\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DLL Side-Loading In Calc\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
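\n\nOptional Example Search\n\nA hedged sketch of the underlying side-loading pattern as a raw Sysmon search; index=sysmon is an assumption, so point it at the index and sourcetype your deployment actually uses:\n\n```spl\nindex=sysmon EventCode=7 Image=\"*\\\\calc.exe\" ImageLoaded=\"*\\\\WindowsCodecs.dll\"\n    NOT ImageLoaded IN (\"C:\\\\Windows\\\\System32\\\\*\", \"C:\\\\Windows\\\\SysWOW64\\\\*\")\n| table _time, host, Image, ImageLoaded\n```\n\nAny hit outside the Windows system directories deserves an immediate pivot to the loaded file's hash and signature details.",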
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the loading of the \"WindowsCodecs.dll\" by calc.exe from a non-standard location This could be indicative of a potential DLL side-loading technique, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.84",
              "n": "Windows DLL Side-Loading Process Child Of Calc",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious child processes spawned by calc.exe, indicative of a potential DLL side-loading technique. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process GUIDs, names, and parent processes. In previous versions of the \"calc.exe\" binary, namely on Windows 7, it was vulnerable to DLL side-loading, where an attacker is able to load an arbitrary DLL named \"WindowsCodecs.dll\". This activity was observed in Qakbot malwa…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.qakbot, https://www.menlosecurity.com/blog/an-anatomy-of-heat-attacks-used-by-qakbot-campaigns",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DLL Side-Loading Process Child Of Calc\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DLL Side-Loading Process Child Of Calc\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
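\n\nOptional Example Search\n\nA minimal tstats sketch over the Endpoint data model for the same parent/child pattern. The summariesonly=true flag assumes the Endpoint data model is accelerated; drop it if acceleration is disabled:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime\n    from datamodel=Endpoint.Processes\n    where Processes.parent_process_name=\"calc.exe\"\n    by Processes.dest, Processes.user, Processes.parent_process, Processes.process_name\n| `drop_dm_object_name(Processes)`\n```\n\nLegitimate calc.exe rarely spawns children, so even a small result set is worth reviewing.",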
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies suspicious child processes spawned by calc.exe, indicative of a potential DLL side-loading technique, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.85",
              "n": "Windows Driver Load Non-Standard Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the loading of new Kernel Mode Drivers from non-standard paths using Windows EventCode 7045. It identifies drivers not located in typical directories like Windows, Program Files, or SystemRoot. This activity is significant because adversaries may use these non-standard paths to load malicious or vulnerable drivers, potentially bypassing security controls. If confirmed malicious, this could allow attackers to execute code at the kernel level, escalate privileges, or…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7045",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log System 7045 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://redcanary.com/blog/tracking-driver-inventory-to-expose-rootkits/, https://attack.mitre.org/techniques/T1014/, https://www.fuzzysecurity.com/tutorials/28.html",
              "mitre": [
                "T1014",
                "T1068"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Driver Load Non-Standard Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7045. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1014, T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Driver Load Non-Standard Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
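\n\nOptional Example Search\n\nA hedged raw-event sketch of the 7045 logic. The index name and the XML field names (ServiceType, ImagePath, ServiceName) are assumptions that vary with your Windows TA configuration:\n\n```spl\nindex=wineventlog EventCode=7045 ServiceType=\"kernel mode driver\"\n| where NOT match(ImagePath, \"(?i)(system32|syswow64|program files)\")\n| table _time, host, ServiceName, ImagePath\n```\n\nCross-check any surviving paths against your driver inventory before suppressing them.",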
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the loading of new Kernel Mode Drivers from non-standard paths using Windows EventCode 7045, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.86",
              "n": "Windows ESX Admins Group Creation Security Event",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects creation, deletion, or modification of the \"ESX Admins\" group in Active Directory. These events may indicate attempts to exploit the VMware ESXi Active Directory Integration Authentication Bypass vulnerability (CVE-2024-37085).",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4727, Windows Event Log Security 4730, Windows Event Log Security 4737",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4727, Windows Event Log Security 4730, Windows Event Log Security 4737 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrators might create, delete, or modify an \"ESX Admins\" group for valid reasons. Verify that the group changes are authorized and part of normal administrative tasks. Consider the context of the action, such as the user performing it and any related activities.",
              "refs": "https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24505, https://www.microsoft.com/en-us/security/blog/2024/07/29/ransomware-operators-exploit-esxi-hypervisor-vulnerability-for-mass-encryption/, https://www.securityweek.com/microsoft-says-ransomware-gangs-exploiting-just-patched-vmware-esxi-flaw/",
              "mitre": [
                "T1136.001",
                "T1136.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows ESX Admins Group Creation Security Event\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4727, Windows Event Log Security 4730, Windows Event Log Security 4737. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.001, T1136.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows ESX Admins Group Creation Security Event\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrators might create, delete, or modify an \"ESX Admins\" group for valid reasons. Verify that the group changes are authorized and part of normal administrative tasks. Consider the context of the action, such as the user performing it and any related activities.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
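\n\nOptional Example Search\n\nA minimal raw-event sketch; index=wineventlog is an assumption, and the quoted group name relies on raw-text matching, so verify your field extractions before depending on it:\n\n```spl\nindex=wineventlog EventCode IN (4727, 4730, 4737) \"ESX Admins\"\n| stats count min(_time) as firstTime by host, EventCode, user\n```\n\nPair any hit with change-ticket data to separate sanctioned administration from CVE-2024-37085 exploitation attempts.",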
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic detects creation, deletion, or modification of the \"ESX Admins\" group in Active Directory, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.87",
              "n": "Windows ESX Admins Group Creation via Net",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects attempts to create an \"ESX Admins\" group using the Windows net.exe or net1.exe commands. This activity may indicate an attempt to exploit the VMware ESXi Active Directory Integration Authentication Bypass vulnerability (CVE-2024-37085). Attackers can use this method to gain unauthorized access to ESXi hosts by recreating the \"ESX Admins\" group after its deletion from Active Directory.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrators might create an \"ESX Admins\" group for valid reasons. Verify that the group creation is authorized and part of normal administrative tasks. Consider the context of the action, such as the user performing it and any related activities.",
              "refs": "https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24505, https://www.microsoft.com/en-us/security/blog/2024/07/29/ransomware-operators-exploit-esxi-hypervisor-vulnerability-for-mass-encryption/, https://www.securityweek.com/microsoft-says-ransomware-gangs-exploiting-just-patched-vmware-esxi-flaw/",
              "mitre": [
                "T1136.002",
                "T1136.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows ESX Admins Group Creation via Net\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.002, T1136.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows ESX Admins Group Creation via Net\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrators might create an \"ESX Admins\" group for valid reasons. Verify that the group creation is authorized and part of normal administrative tasks. Consider the context of the action, such as the user performing it and any related activities.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
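\n\nOptional Example Search\n\nA hedged tstats sketch of the net.exe pattern over the Endpoint data model; summariesonly=true assumes acceleration is enabled:\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Processes\n    where Processes.process_name IN (\"net.exe\", \"net1.exe\")\n        Processes.process=\"*ESX Admins*\" Processes.process=\"*/add*\"\n    by Processes.dest, Processes.user, Processes.process\n| `drop_dm_object_name(Processes)`\n```\n\nThe */add* term keeps deletions and queries out of scope; widen it if you also want to see group removal.",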
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic detects attempts to create an \"ESX Admins\" group using the Windows net.exe or net1.exe commands, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.88",
              "n": "Windows ESX Admins Group Creation via PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects attempts to create an \"ESX Admins\" group using PowerShell commands. This activity may indicate an attempt to exploit the VMware ESXi Active Directory Integration Authentication Bypass vulnerability (CVE-2024-37085). Attackers can use this method to gain unauthorized access to ESXi hosts by recreating the 'ESX Admins' group after its deletion from Active Directory.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_id$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrators might create an \"ESX Admins\" group for valid reasons. Verify that the group creation is authorized and part of normal administrative tasks. Consider the context of the action, such as the user performing it and any related activities.",
              "refs": "https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24505, https://www.microsoft.com/en-us/security/blog/2024/07/29/ransomware-operators-exploit-esxi-hypervisor-vulnerability-for-mass-encryption/, https://www.securityweek.com/microsoft-says-ransomware-gangs-exploiting-just-patched-vmware-esxi-flaw/",
              "mitre": [
                "T1136.002",
                "T1136.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows ESX Admins Group Creation via PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.002, T1136.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows ESX Admins Group Creation via PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrators might create an \"ESX Admins\" group for valid reasons. Verify that the group creation is authorized and part of normal administrative tasks. Consider the context of the action, such as the user performing it and any related activities.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
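\n\nOptional Example Search\n\nA minimal raw 4104 sketch. The index and source values are assumptions, and ScriptBlockText, Computer, and UserID are the usual XML field names; verify all of them in your environment:\n\n```spl\nindex=wineventlog source=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104\n    ScriptBlockText=\"*ESX Admins*\"\n| table _time, Computer, UserID, ScriptBlockText\n```\n\nCapturing the full script block is what lets analysts separate group-creation attempts from benign queries.",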
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic detects attempts to create an \"ESX Admins\" group using PowerShell commands, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.89",
              "n": "Windows Explorer.exe Spawning PowerShell or Cmd",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies instances where Windows Explorer.exe spawns PowerShell or cmd.exe processes, particularly focusing on executions initiated by LNK files. This behavior is associated with the ZDI-CAN-25373 Windows shortcut zero-day vulnerability, where specially crafted LNK files are used to trigger malicious code execution through cmd.exe or powershell.exe. This technique has been actively exploited by multiple APT groups in targeted attacks through both HTTP and SMB delivery methods.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.parent_process_path=\"*\\\\explorer.exe\" `process_powershell` OR `process_cmd`  by  Processes.dest Processes.process_current_directory Processes.process_path Processes.process Processes.original_file_name Processes.parent_process Processes.parent_process_name Processes.parent_process_path Processes.parent_process_guid Processes.parent_process_id Processes.process_guid Processes.process_id Processes.user | `drop_dm_object_name(Processes)` | `security_content_ctime(firstTime)`| `security_content_ctime(lastTime)` | `windows_explorer_exe_spawning_powershell_or_cmd_filter`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legitimate user actions may trigger Explorer.exe to spawn PowerShell or cmd.exe, such as right-clicking and selecting \"Open PowerShell window here\" or similar options. Filter as needed based on your environment's normal behavior patterns.",
              "refs": "https://www.zerodayinitiative.com/advisories/ZDI-CAN-25373/, https://www.trendmicro.com/en_us/research/25/c/windows-shortcut-zero-day-exploit.html",
              "mitre": [
                "T1059.001",
                "T1204.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Explorer.exe Spawning PowerShell or Cmd\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Explorer.exe Spawning PowerShell or Cmd\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legitimate user actions may trigger Explorer.exe to spawn PowerShell or cmd.exe, such as right-clicking and selecting \"Open PowerShell window here\" or similar options. Filter as needed based on your environment's normal behavior patterns.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies instances where Windows Explorer.exe spawns PowerShell or cmd.exe processes, particularly focusing on executions initiated by LNK files, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.90",
              "n": "Windows Explorer LNK Exploit Process Launch With Padding",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies instances where Windows Explorer.exe spawns PowerShell or cmd.exe processes with abnormally large padding (50 or more spaces) in the command line. This specific pattern is a key indicator of the ZDI-CAN-25373 Windows shortcut zero-day vulnerability exploitation, where threat actors craft malicious LNK files containing padded content to trigger code execution. The excessive spacing in the command line is used to manipulate the way Windows processes the shortcut file, ena…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legitimate user actions may trigger Explorer.exe to spawn PowerShell or cmd.exe, such as right-clicking and selecting \"Open PowerShell window here\" or similar options. Filter as needed based on your environment's normal behavior patterns. Reduce or increase the padding threshold based on observed false positives.",
              "refs": "https://www.trendmicro.com/en_us/research/25/c/windows-shortcut-zero-day-exploit.html",
              "mitre": [
                "T1059.001",
                "T1204.002"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Explorer LNK Exploit Process Launch With Padding\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Explorer LNK Exploit Process Launch With Padding\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legitimate user actions may trigger Explorer.exe to spawn PowerShell or cmd.exe, such as right-clicking and selecting \"Open PowerShell window here\" or similar options. Filter as needed based on your environment's normal behavior patterns. Reduce or increase the padding threshold based on observed false positives.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
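\n\nExample Ad-Hoc Verification Search\n\nTo preview the behavior this detection targets before enabling it — a minimal sketch over CIM Endpoint.Processes. The 50-space regex mirrors the documented padding indicator, and the field names are assumptions based on the listed data sources rather than the packaged ESCU logic:\n\n```spl\n| tstats `summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=\"explorer.exe\"\n    Processes.process_name IN (\"powershell.exe\", \"cmd.exe\")\n  by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n| regex process=\"\\\\s{50,}\"\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nTune the `{50,}` quantifier in step with the padding-threshold guidance above.",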
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies instances where Windows Explorer.exe spawns PowerShell or cmd.exe processes with abnormally large padding (50 or more spaces) in the command line, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.91",
              "n": "Windows IIS Components Get-WebGlobalModule Module Query",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of the PowerShell cmdlet Get-WebGlobalModule, which lists all IIS Modules installed on a system. It leverages PowerShell input data to detect this activity by capturing the module names and the image paths of the DLLs. This activity is significant for a SOC because it can indicate an attempt to enumerate installed IIS modules, which could be a precursor to exploiting vulnerabilities or misconfigurations. If confirmed malicious, this could allow an …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Installed IIS Modules",
              "q": "`iis_get_webglobalmodule`\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY host name image\n      | rename host as dest\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_iis_components_get_webglobalmodule_module_query_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Installed IIS Modules ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This analytic is meant to assist with hunting modules across a fleet of IIS servers. Filter and modify as needed.",
              "refs": "https://help.splunk.com/en/splunk-cloud-platform/get-started/get-data-in/9.3.2411/get-windows-data/monitor-windows-data-with-powershell-scripts, https://gist.github.com/MHaggis/64396dfd9fc3734e1d1901a8f2f07040, https://github.com/redcanaryco/atomic-red-team/tree/master/atomics/T1505.004",
              "mitre": [
                "T1505.004"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows IIS Components Get-WebGlobalModule Module Query\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Installed IIS Modules. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows IIS Components Get-WebGlobalModule Module Query\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This analytic is meant to assist with hunting modules across a fleet of IIS servers. Filter and modify as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies the execution of the PowerShell cmdlet Get-WebGlobalModule, which lists all IIS Modules installed on a system, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.92",
              "n": "Windows IIS Components Module Failed to Load",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when an IIS Module DLL fails to load due to a configuration problem, identified by EventCode 2282. This detection leverages Windows Application event logs to identify repeated failures in loading IIS modules. Such failures can indicate misconfigurations or potential tampering with IIS components. If confirmed malicious, this activity could lead to service disruptions or provide an attacker with opportunities to exploit vulnerabilities within the IIS environment. Im…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Application 2282",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Application 2282 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present until all module failures are resolved or reviewed.",
              "refs": "https://social.technet.microsoft.com/wiki/contents/articles/21757.event-id-2282-iis-worker-process-availability.aspx, https://www.microsoft.com/en-us/security/blog/2022/12/12/iis-modules-the-evolution-of-web-shells-and-how-to-detect-them/, https://www.crowdstrike.com/wp-content/uploads/2022/05/crowdstrike-iceapple-a-novel-internet-information-services-post-exploitation-framework-1.pdf, https://unit42.paloaltonetworks.com/unit42-oilrig-uses-rgdoor-iis-backdoor-targets-middle-east/, https://www.secureworks.com/research/bronze-union, https://github.com/redcanaryco/atomic-red-team/tree/master/atomics/T1505.004, https://strontic.github.io/xcyclopedia/library/appcmd.exe-055B2B09409F980BF9B5A3969D01E5B2.html",
              "mitre": [
                "T1505.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows IIS Components Module Failed to Load\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Application 2282. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows IIS Components Module Failed to Load\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present until all module failures are resolved or reviewed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
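\n\nExample Ad-Hoc Verification Search\n\nTo see the raw events behind this detection — a minimal sketch against the Windows Application event log; the index name is an environment-dependent assumption, and the packaged ESCU correlation search remains the authoritative logic:\n\n```spl\nindex=wineventlog source=\"WinEventLog:Application\" EventCode=2282\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n| convert ctime(firstTime) ctime(lastTime)\n| sort -count\n```\n\nRepeated 2282 failures on one host immediately after a deployment window usually match the known-false-positive case above; failures outside change windows warrant the full RBA triage workflow.",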
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects when an IIS Module DLL fails to load due to a configuration problem, identified by EventCode 2282, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.93",
              "n": "Windows Impair Defense Change Win Defender Health Check Intervals",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that change the health check interval of Windows Defender. It leverages data from the Endpoint datamodel, specifically monitoring changes to the \"ServiceKeepAlive\" registry path with a value of \"0x00000001\". This activity is significant because altering Windows Defender settings can impair its ability to perform timely health checks, potentially leaving the system vulnerable. If confirmed malicious, this could allow an attacker…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Change Win Defender Health Check Intervals\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Change Win Defender Health Check Intervals\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
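\n\nExample Ad-Hoc Verification Search\n\nTo preview matching registry activity before enabling the correlation search — a minimal sketch over CIM Endpoint.Registry; the loose path wildcard and the value name/data follow the description above, but treat them as assumptions and confirm against the packaged detection:\n\n```spl\n| tstats `summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*Windows Defender*\"\n    Registry.registry_value_name=\"ServiceKeepAlive\"\n    Registry.registry_value_data=\"0x00000001\"\n  by Registry.dest Registry.user Registry.registry_path\n| `drop_dm_object_name(Registry)`\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nCross-reference any hits with Group Policy changes before escalating, per the known-false-positive note above.",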
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects modifications to the Windows registry that change the health check interval of Windows Defender, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.94",
              "n": "Windows Impair Defense Change Win Defender Quick Scan Interval",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that change the Windows Defender Quick Scan Interval. It leverages data from the Endpoint.Registry data model, focusing on changes to the \"QuickScanInterval\" registry path. This activity is significant because altering the scan interval can impair Windows Defender's ability to detect malware promptly, potentially allowing threats to persist undetected. If confirmed malicious, this modification could enable attackers to bypass s…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Change Win Defender Quick Scan Interval\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Change Win Defender Quick Scan Interval\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
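\n\nExample Ad-Hoc Verification Search\n\nA minimal Endpoint.Registry sketch scoped to the value name from the description — the CIM field names assume a populated Endpoint data model and should be checked against the packaged detection:\n\n```spl\n| tstats `summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=\"QuickScanInterval\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nValidate any surfaced interval change against endpoint-management baselines before treating it as hostile.",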
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects modifications to the Windows registry that change the Windows Defender Quick Scan Interval, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.95",
              "n": "Windows Impair Defense Configure App Install Control",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable the Windows Defender SmartScreen App Install Control feature. It leverages data from the Endpoint.Registry data model to identify changes to specific registry values. This activity is significant because disabling App Install Control can allow users to install potentially malicious web-based applications without restrictions, increasing the risk of security vulnerabilities. If confirmed malicious, this action could…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Configure App Install Control\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Configure App Install Control\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
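\n\nExample Ad-Hoc Verification Search\n\nA minimal Endpoint.Registry sketch — the `*SmartScreen*` path wildcard is a deliberately loose assumption since this summary does not name the exact value; take the precise path and value from the packaged detection before relying on results:\n\n```spl\n| tstats `summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*SmartScreen*\"\n  by Registry.dest Registry.user Registry.registry_path Registry.registry_value_name Registry.registry_value_data\n| `drop_dm_object_name(Registry)`\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nExpect benign policy-driven changes here; the per-entity risk-exception guidance above applies.",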
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects modifications to the Windows registry that disable the Windows Defender SmartScreen App Install Control feature, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.96",
              "n": "Windows Impair Defense Disable Win Defender Compute File Hashes",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable Windows Defender's file hash computation by setting the EnableFileHashComputation value to 0. This detection leverages data from the Endpoint.Registry data model, focusing on changes to the specific registry path associated with Windows Defender. Disabling file hash computation can significantly impair Windows Defender's ability to detect and scan for malware, making it a critical behavior to monitor. If confirmed …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Disable Win Defender Compute File Hashes\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Disable Win Defender Compute File Hashes\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
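\n\nExample Ad-Hoc Verification Search\n\nA minimal Endpoint.Registry sketch keyed on the value name and the \"set to 0\" condition from the description — the 0x00000000 encoding varies by data source, so relax that filter if it returns nothing, and confirm specifics against the packaged detection:\n\n```spl\n| tstats `summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_value_name=\"EnableFileHashComputation\"\n    Registry.registry_value_data=\"0x00000000\"\n  by Registry.dest Registry.user Registry.registry_path\n| `drop_dm_object_name(Registry)`\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nA hit paired with other Defender-tampering risk events on the same host is a strong escalation signal.",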
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects modifications to the Windows registry that disable Windows Defender's file hash computation by setting the EnableFileHashComputation value to 0, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.97",
              "n": "Windows Impair Defense Disable Win Defender Network Protection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable Windows Defender Network Protection. It leverages data from the Endpoint.Registry data model, specifically monitoring changes to the EnableNetworkProtection registry entry. This activity is significant because disabling Network Protection can leave the system vulnerable to network-based threats by preventing Windows Defender from analyzing and blocking malicious network activity. If confirmed malicious, this action…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Disable Win Defender Network Protection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Disable Win Defender Network Protection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects modifications to the Windows registry that disable Windows Defender Network Protection, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.98",
              "n": "Windows Impair Defense Disable Win Defender Scan On Update",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry that disable the Windows Defender Scan On Update feature. It leverages data from the Endpoint.Registry datamodel, specifically looking for changes to the \"DisableScanOnUpdate\" registry setting with a value of \"0x00000001\". This activity is significant because disabling automatic scans can leave systems vulnerable to malware and other threats. If confirmed malicious, this action could allow attackers to bypass Windows Defender, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.",
              "refs": "https://x.com/malmoeb/status/1742604217989415386?s=20, https://github.com/undergroundwires/privacy.sexy",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Impair Defense Disable Win Defender Scan On Update\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Impair Defense Disable Win Defender Scan On Update\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is unusual to turn this feature off a Windows system since it is a default security control, although it is not rare for some policies to disable it. Although no false positives have been identified, use the provided filter macro to tune the search.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects modifications to the Windows registry that disable the Windows Defender Scan On Update feature, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.99",
              "n": "Windows Kerberos Coercion via DNS",
              "c": "high",
              "f": "intermediate",
              "v": "Detects DNS-based Kerberos coercion attacks where adversaries inject marshaled credential structures into DNS records to spoof SPNs and redirect authentication such as in CVE-2025-33073. This detection leverages Windows Security Event Codes 5136, 5137, 4662, looking for DNS events with specific CREDENTIAL_TARGET_INFORMATION entries.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4662, Windows Event Log Security 5136, Windows Event Log Security 5137",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4662, Windows Event Log Security 5136, Windows Event Log Security 5137 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Creating a DNS entry matching this pattern is very unusual in a production environment. Filter as needed.",
              "refs": "https://web.archive.org/web/20250617122747/https://www.synacktiv.com/publications/ntlm-reflection-is-dead-long-live-ntlm-reflection-an-in-depth-analysis-of-cve-2025, https://www.synacktiv.com/publications/relaying-kerberos-over-smb-using-krbrelayx, https://www.guidepointsecurity.com/blog/the-birth-and-death-of-loopyticket/",
              "mitre": [
                "T1071.004",
                "T1557.001",
                "T1187"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Kerberos Coercion via DNS\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4662, Windows Event Log Security 5136, Windows Event Log Security 5137. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.004, T1557.001, T1187. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Kerberos Coercion via DNS\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Creating a DNS entry matching this pattern is very unusual in a production environment. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch -based Kerberos coercion attacks where adversaries inject marshaled credential structures into records to spoof SPNs and redirect authentication such as, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.100",
              "n": "Windows Modify Registry USeWuServer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious modification to the Windows Update configuration registry key \"UseWUServer.\" It leverages data from the Endpoint.Registry data model to identify changes where the registry value is set to \"0x00000001.\" This activity is significant because it is commonly used by adversaries, including malware like RedLine Stealer, to bypass detection mechanisms and potentially exploit zero-day vulnerabilities. If confirmed malicious, this modification could allow attack…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Registry where Registry.registry_path=\"*\\\\SOFTWARE\\\\Policies\\\\Microsoft\\\\Windows\\\\WindowsUpdate\\\\AU\\\\UseWUServer\" AND Registry.registry_value_data=\"0x00000001\" by Registry.action Registry.dest Registry.process_guid Registry.process_id Registry.registry_hive Registry.registry_path Registry.registry_key_name Registry.registry_value_data Registry.registry_value_name Registry.registry_value_type Registry.status Registry.user Registry.vendor_product | `drop_dm_object_name(Registry)` | `security_content_ctime(lastTime)` | `security_content_ctime(firstTime)` | `windows_modify_registry_usewuserver_filter`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://learn.microsoft.com/de-de/security-updates/windowsupdateservices/18127499",
              "mitre": [
                "T1112"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry USeWuServer\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry USeWuServer\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects a suspicious modification to the Windows Update configuration registry key \"UseWUServer.\" It leverages data from the Endpoint.Registry a summary to identify changes where the registry value is set to \"0x00000001.\" This activity is significant because it is commonly used by adversaries, including malware like RedLine Stealer, to bypass detection mechanisms and.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.101",
              "n": "Windows MOVEit Transfer Writing ASPX",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of new ASPX files in the MOVEit Transfer application's \"wwwroot\" directory. It leverages endpoint data on process and filesystem activity to identify processes responsible for creating these files. This activity is significant as it may indicate exploitation of a critical zero-day vulnerability in MOVEit Transfer, used by threat actors to install malicious ASPX files. If confirmed malicious, this could lead to exfiltration of sensitive data, including …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://community.progress.com/s/article/MOVEit-Transfer-Critical-Vulnerability-31May2023, https://www.reddit.com/r/sysadmin/comments/13wxuej/critical_vulnerability_moveit_file_transfer/, https://www.bleepingcomputer.com/news/security/new-moveit-transfer-zero-day-mass-exploited-in-data-theft-attacks/, https://www.reddit.com/r/sysadmin/comments/13wxuej/critical_vulnerability_moveit_file_transfer/, https://www.mandiant.com/resources/blog/zero-day-moveit-data-theft",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MOVEit Transfer Writing ASPX\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MOVEit Transfer Writing ASPX\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the creation of new ASPX files in the MOVEit Transfer application's \"wwwroot\" directory, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.102",
              "n": "Windows MSC EvilTwin Directory Path Manipulation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential MSC EvilTwin loader exploitation, which manipulates directory paths with spaces to bypass security controls. The technique, described as CVE-2025-26633, involves crafting malicious MSC files that leverage MUIPath parameter manipulation. This detection focuses on suspicious MSC file execution patterns with unconventional command-line parameters, particularly those containing unusual spaces in Windows System32 paths or suspicious additional parameters after…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legitimate system maintenance tools might use MSC files with unusual parameters. Filter for specific known maintenance activities in your environment.",
              "refs": "https://securityintelligence.com/posts/new-threat-actor-water-gamayun-targets-telecom-finance/, https://www.ncsc.gov.uk/report/weekly-threat-report-12th-april-2024",
              "mitre": [
                "T1218",
                "T1036.005",
                "T1203"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MSC EvilTwin Directory Path Manipulation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218, T1036.005, T1203. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MSC EvilTwin Directory Path Manipulation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legitimate system maintenance tools might use MSC files with unusual parameters. Filter for specific known maintenance activities in your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects potential MSC EvilTwin loader exploitation, which manipulates directory paths with spaces to bypass security controls, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.103",
              "n": "Windows Office Product Dropped Cab or Inf File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects Office products writing .cab or .inf files, indicative of CVE-2021-40444 exploitation. It leverages the Endpoint.Processes and Endpoint.Filesystem data models to identify Office applications creating these file types. This activity is significant as it may signal an attempt to load malicious ActiveX controls and download remote payloads, a known attack vector. If confirmed malicious, this could lead to remote code execution, allowing attackers to gain control over …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 11, Windows Event Log Security 4688 AND Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 11, Windows Event Log Security 4688 AND Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The query is structured in a way that `action` (read, create) is not defined. Review the results of this query, filter, and tune as necessary. It may be necessary to generate this query specific to your endpoint product.",
              "refs": "https://twitter.com/vxunderground/status/1436326057179860992?s=20, https://app.any.run/tasks/36c14029-9df8-439c-bba0-45f2643b0c70/, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-40444, https://twitter.com/RonnyTNL/status/1436334640617373699?s=20, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/trojanized-onenote-document-leads-to-formbook-malware/",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Office Product Dropped Cab or Inf File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 11, Windows Event Log Security 4688 AND Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Office Product Dropped Cab or Inf File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The query is structured in a way that `action` (read, create) is not defined. Review the results of this query, filter, and tune as necessary. It may be necessary to generate this query specific to your endpoint product.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
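Hedged raw-data sketch (not the packaged ESCU logic): the behavior behind "Windows Office Product Dropped Cab or Inf File" can be approximated directly against Sysmon EventID 11 file-creation events before you enable the Correlation Search. The index, source, and Office process list below are assumptions to adapt to your environment; field names follow raw Sysmon XML naming.

```spl
index=sysmon source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=11
  Image IN ("*\\winword.exe", "*\\excel.exe", "*\\powerpnt.exe", "*\\outlook.exe")
  (TargetFilename="*.cab" OR TargetFilename="*.inf")
| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, TargetFilename
```

If this surfaces legitimate installer or add-in activity, capture those file paths in a suppression lookup before enabling the detection rather than raising the risk threshold.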
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects Office products writing.cab or.inf files, indicative of exploitation, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.104",
              "n": "Windows Office Product Loaded MSHTML Module",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the loading of the mshtml.dll module into an Office product, which is indicative of CVE-2021-40444 exploitation. It leverages Sysmon EventID 7 to monitor image loads by specific Office processes. This activity is significant because it can indicate an attempt to exploit a vulnerability in the MSHTML component via a malicious document. If confirmed malicious, this could allow an attacker to execute arbitrary code, potentially leading to system compromise, data exfil…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process names and image loads from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives will be present, however, tune as necessary. Some applications may legitimately load mshtml.dll.",
              "refs": "https://app.any.run/tasks/36c14029-9df8-439c-bba0-45f2643b0c70/, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-40444, https://strontic.github.io/xcyclopedia/index-dll, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/trojanized-onenote-document-leads-to-formbook-malware/",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Office Product Loaded MSHTML Module\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Office Product Loaded MSHTML Module\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives will be present, however, tune as necessary. Some applications may legitimately load mshtml.dll.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the loading of the mshtml.dll module into an Office product, which is indicative of exploitation, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.105",
              "n": "Windows Office Product Spawned Control",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where `control.exe` is spawned by a Microsoft Office product. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and parent process relationships. This activity is significant because it can indicate exploitation attempts related to CVE-2021-40444, where `control.exe` is used to execute malicious .cpl or .inf files. If confirmed malicious, this behavior could allow an attacker to execute arbitrary code, potentially…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives should be present.",
              "refs": "https://strontic.github.io/xcyclopedia/library/control.exe-1F13E714A0FEA8887707DFF49287996F.html, https://app.any.run/tasks/36c14029-9df8-439c-bba0-45f2643b0c70/, https://attack.mitre.org/techniques/T1218/011/, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-40444, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.002/T1218.002.yaml, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/trojanized-onenote-document-leads-to-formbook-malware/",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Office Product Spawned Control\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Office Product Spawned Control\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives should be present.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies instances where is spawned by a Microsoft Office product, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.106",
              "n": "Windows Office Product Spawned Uncommon Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a Microsoft Office product spawning uncommon processes. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where Office applications are the parent process. This activity is significant as it may indicate an attempt of a malicious macro execution or exploitation of an unknown vulnerability in an office product, in order to bypass security controls. If confirmed malicious, this behavior could allow an…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited, however filter as needed.",
              "refs": "https://any.run/malware-trends/trickbot, https://any.run/report/47561b4e949041eff0a0f4693c59c81726591779fe21183ae9185b5eb6a69847/aba3722a-b373-4dae-8273-8730fb40cdbe, https://app.any.run/tasks/fb894ab8-a966-4b72-920b-935f41756afd/, https://attack.mitre.org/techniques/T1047/, https://bazaar.abuse.ch/sample/02cbc1ab80695fc12ff8822b926957c3a600247b9ca412a137f69cb5716c8781/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1047/T1047.md, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1105/T1105.md, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1197/T1197.md, https://redcanary.com/threat-detection-report/threats/TA551/, https://twitter.com/cyb3rops/status/1416050325870587910?s=21, https://www.fortinet.com/blog/threat-research/latest-remcos-rat-phishing, https://www.joesandbox.com/analysis/380662/0/html, https://www.joesandbox.com/analysis/702680/0/html, https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/trojanized-onenote-document-leads-to-formbook-malware/",
              "mitre": [
                "T1566.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Office Product Spawned Uncommon Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Office Product Spawned Uncommon Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited, however filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects a Microsoft Office product spawning uncommon processes, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.107",
              "n": "Windows Privileged Group Modification",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects modifications to privileged groups in Active Directory, including creation, deletion, and changes to various types of groups such as local, global, universal, and LDAP query groups. It specifically monitors for changes to high-privilege groups like \"Administrators\", \"Domain Admins\", \"Enterprise Admins\", and \"ESX Admins\", among others. This detection is particularly relevant in the context of potential exploitation of vulnerabilities like the VMware ESXi Active Directory Int…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4727, Windows Event Log Security 4731, Windows Event Log Security 4744, Windows Event Log Security 4749, Windows Event Log Security 4754, Windows Event Log Security 4759, Windows Event Log Security 4783, Windows Event Log Security 4790",
              "q": "# Shared SPL: intentional — see UC-10.7.466\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4727, Windows Event Log Security 4731, Windows Event Log Security 4744, Windows Event Log Security 4749, Windows Event Log Security 4754, Windows Event Log Security 4759, Windows Event Log Security 4783, Windows Event Log Security 4790 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrators might create, delete, or modify an a privileged group for valid reasons. Verify that the group changes are authorized and part of normal administrative tasks. Consider the context of the action, such as the user performing it and any related activities.",
              "refs": "https://nvd.nist.gov/vuln/detail/CVE-2024-37085, https://www.rapid7.com/blog/post/2024/07/30/vmware-esxi-cve-2024-37085-targeted-in-ransomware-campaigns/, https://x.com/mthcht/status/1818196168515461431?s=12&t=kwffmj0KM1sZtg3MrqC0QQ",
              "mitre": [
                "T1136.001",
                "T1136.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Privileged Group Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4727, Windows Event Log Security 4731, Windows Event Log Security 4744, Windows Event Log Security 4749, Windows Event Log Security 4754, Windows Event Log Security 4759, Windows Event Log Security 4783, Windows Event Log Security 4790. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.001, T1136.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Privileged Group Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrators might create, delete, or modify an a privileged group for valid reasons. Verify that the group changes are authorized and part of normal administrative tasks. Consider the context of the action, such as the user performing it and any related activities.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic detects modifications to privileged groups in Active Directory, including creation, deletion, and changes to various types of groups such as local, global, universal, and query groups, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.108",
              "n": "Windows Query Registry UnInstall Program List",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an access request on the uninstall registry key. It leverages Windows Security Event logs, specifically event code 4663. This activity is significant because adversaries or malware can exploit this key to gather information about installed applications, aiding in further attacks. If confirmed malicious, this behavior could allow attackers to map out installed software, potentially identifying vulnerabilities or software to exploit, leading to further system comprom…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest Windows Security Event logs and track event code 4663. For Event code 4663, enable the \"Audit Object Access\" in Group Policy. Then check the two boxes listed for both \"Success\" and \"Failure.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Uninstallers may access this registry to remove the entry of the target application. Filter as needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.redline_stealer",
              "mitre": [
                "T1012"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Query Registry UnInstall Program List\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Query Registry UnInstall Program List\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Uninstallers may access this registry to remove the entry of the target application. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects an access request on the uninstall registry key, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.109",
              "n": "Windows Rundll32 WebDAV Request",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of rundll32.exe with command-line arguments loading davclnt.dll and the davsetcookie function to access a remote WebDAV instance. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant as it may indicate an attempt to exploit CVE-2023-23397, a known vulnerability. If confirmed malicious, this could allow an attacker to execute remote code o…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present based on legitimate software, filtering may need to occur.",
              "refs": "https://strontic.github.io/xcyclopedia/library/davclnt.dll-0EA3050E7CC710526E330C413C165DA0.html, https://twitter.com/ACEResponder/status/1636116096506818562?s=20, https://twitter.com/domchell/status/1635999068282408962?s=20, https://msrc.microsoft.com/blog/2023/03/microsoft-mitigates-outlook-elevation-of-privilege-vulnerability/, https://www.pwndefend.com/2023/03/15/the-long-game-persistent-hash-theft/",
              "mitre": [
                "T1048.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Rundll32 WebDAV Request\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1048.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Rundll32 WebDAV Request\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present based on legitimate software, filtering may need to occur.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies the execution of rundll32.exe with command-line arguments loading davclnt.dll and the davsetcookie function to access a remote WebDAV instance, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.110",
              "n": "Windows Rundll32 WebDav With Network Connection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of rundll32.exe with command-line arguments loading davclnt.dll and the davsetcookie function to access a remote WebDav instance. It uses data from Endpoint Detection and Response (EDR) agents, correlating process execution and network traffic data. This activity is significant as it may indicate exploitation of CVE-2023-23397, a known vulnerability. If confirmed malicious, this could allow an attacker to establish unauthorized remote connections, pot…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 3 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://strontic.github.io/xcyclopedia/library/davclnt.dll-0EA3050E7CC710526E330C413C165DA0.html, https://twitter.com/ACEResponder/status/1636116096506818562?s=20, https://twitter.com/domchell/status/1635999068282408962?s=20, https://msrc.microsoft.com/blog/2023/03/microsoft-mitigates-outlook-elevation-of-privilege-vulnerability/, https://www.pwndefend.com/2023/03/15/the-long-game-persistent-hash-theft/",
              "mitre": [
                "T1048.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Rundll32 WebDav With Network Connection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1048.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Rundll32 WebDav With Network Connection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the execution of rundll32.exe with command-line arguments loading davclnt.dll and the davsetcookie function to access a remote WebDav instance, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.111",
              "n": "Windows Service Stop Win Updates",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the disabling of Windows Update services, such as \"Update Orchestrator Service for Windows Update,\" \"WaaSMedicSvc,\" and \"Windows Update.\" It leverages Windows System Event ID 7040 logs to identify changes in service start modes to 'disabled.' This activity is significant as it can indicate an adversary's attempt to evade defenses by preventing critical updates, leaving the system vulnerable to exploits. If confirmed malicious, this could allow attackers to maintain…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7040",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log System 7040 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Network administrator may disable this services as part of its audit process within the network. Filter is needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.redline_stealer",
              "mitre": [
                "T1489"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Stop Win Updates\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7040. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Stop Win Updates\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Network administrator may disable this services as part of its audit process within the network. Filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the disabling of Windows Update services, such as \"Update Orchestrator Service for Windows Update,\" \"WaaSMedicSvc,\" and \"Windows Update.\" It leverages Windows System Event ID 7040 logs to identify changes in service start modes to 'disabled.' This activity is significant as it can indicate an adversary's attempt to evade defenses by preventing critical updates.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.112",
              "n": "Windows SharePoint Spinstall0 Webshell File Creation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the creation or modification of the \"spinstall0.aspx\" webshell file in Microsoft SharePoint directories. This file is a known indicator of compromise associated with the exploitation of CVE-2025-53770 (ToolShell vulnerability). Attackers exploit the vulnerability to drop webshells that provide persistent access to compromised SharePoint servers, allowing them to execute arbitrary commands, access sensitive data, and move laterally within the network.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the file name, file path, and process information from your endpoints. If you are using Sysmon, you must have at least Sysmon version 6.0.4 with Event Code 11 enabled. You can also use other EDR products or Windows Event Logs that capture file creation events. The detection requires the Endpoint data model, populated with fi…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives are expected as the spinstall0.aspx file is not a legitimate SharePoint component. However, there might be rare cases where legitimate files with similar names are created during SharePoint updates or maintenance. Verify the process that created the file and the file content to confirm malicious intent.",
              "refs": "https://research.eye.security/sharepoint-under-siege/, https://www.cisa.gov/news-events/alerts/2025/07/20/microsoft-releases-guidance-exploitation-sharepoint-vulnerability-cve-2025-53770, https://msrc.microsoft.com/blog/2025/07/customer-guidance-for-sharepoint-vulnerability-cve-2025-53770/",
              "mitre": [
                "T1190",
                "T1505.003"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SharePoint Spinstall0 Webshell File Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SharePoint Spinstall0 Webshell File Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives are expected as the spinstall0.aspx file is not a legitimate SharePoint component. However, there might be rare cases where legitimate files with similar names are created during SharePoint updates or maintenance. Verify the process that created the file and the file content to confirm malicious intent.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies the creation or modification of the \"spinstall0.aspx\" webshell file in Microsoft SharePoint directories, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.113",
              "n": "Windows Shell Process from CrushFTP",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where CrushFTP's service process (crushftpservice.exe) spawns shell processes like cmd.exe or powershell.exe. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events. This activity is significant because CrushFTP should not normally spawn interactive shell processes during regular operations. If confirmed malicious, this behavior could indicate successful exploitation of vulnerabilities like …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://nvd.nist.gov/vuln/detail/CVE-2025-31161, https://www.crushftp.com/crush11wiki/Wiki.jsp?page=Update, https://www.huntress.com/blog/crushftp-cve-2025-31161-auth-bypass-and-post-exploitation",
              "mitre": [
                "T1059.001",
                "T1059.003",
                "T1190",
                "T1505"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Shell Process from CrushFTP\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1059.003, T1190, T1505. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Shell Process from CrushFTP\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies instances where CrushFTP's service process (crushftpservice.exe) spawns shell processes like cmd.exe or powershell.exe, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.114",
              "n": "Windows SpeechRuntime COM Hijacking DLL Load",
              "c": "high",
              "f": "intermediate",
              "v": "SpeechRuntime is vulnerable to an attack that allows a user to run code on another user's session remotely and stealthily by exploiting a Windows COM class. When this class is invoked, it launches SpeechRuntime.exe in the context of the currently logged-on user. Because this COM class is susceptible to COM Hijacking, the attacker can alter the registry remotely to point to a malicious DLL. By dropping that DLL on the target system (e.g., via SMB) and triggering the COM object, the attacker cause…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 7 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This process should normally never be loading dlls from outside the Windows system directory.",
              "refs": "https://github.com/rtecCyberSec/SpeechRuntimeMove",
              "mitre": [
                "T1021.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SpeechRuntime COM Hijacking DLL Load\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SpeechRuntime COM Hijacking DLL Load\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This process should normally never be loading dlls from outside the Windows system directory.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "SpeechRuntime is vulnerable to an attack that allows a user to run code on another user's session remotely and stealthily by exploiting a Windows COM class, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.115",
              "n": "Windows SpeechRuntime Suspicious Child Process",
              "c": "high",
              "f": "intermediate",
              "v": "SpeechRuntime is vulnerable to an attack that allows a user to run code on another user's session remotely and stealthily by exploiting a Windows COM class. When this class is invoked, it launches SpeechRuntime.exe in the context of the currently logged-on user. Because this COM class is susceptible to COM Hijacking, the attacker can alter the registry remotely to point to a malicious DLL. By dropping that DLL on the target system (e.g., via SMB) and triggering the COM object, the attacker cause…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This process should normally never be spawning these child processes.",
              "refs": "https://github.com/rtecCyberSec/SpeechRuntimeMove",
              "mitre": [
                "T1021.003"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SpeechRuntime Suspicious Child Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SpeechRuntime Suspicious Child Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This process should normally never be spawning these child processes.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "SpeechRuntime is vulnerable to an attack that allows a user to run code on another user's session remotely and stealthily by exploiting a Windows COM class, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.116",
              "n": "Windows Sqlservr Spawning Shell",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects instances where the sqlservr.exe process spawns a command shell (cmd.exe) or PowerShell process. This behavior is often indicative of command execution initiated from within the SQL Server process, potentially due to exploitation of SQL injection vulnerabilities or the use of extended stored procedures like xp_cmdshell.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrative activities or monitoring tools might occasionally spawn command shells from sqlservr.exe. Review the process command-line arguments and consider filtering out known legitimate processes or users.",
              "refs": "https://attack.mitre.org/techniques/T1505/001/, https://github.com/MHaggis/notes/tree/master/utilities/SQLSSTT",
              "mitre": [
                "T1505.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Sqlservr Spawning Shell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Sqlservr Spawning Shell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrative activities or monitoring tools might occasionally spawn command shells from sqlservr.exe. Review the process command-line arguments and consider filtering out known legitimate processes or users.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic detects instances where the sqlservr.exe process spawns a command shell (cmd.exe) or PowerShell process, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.117",
              "n": "Windows Suspicious Child Process Spawned From WebServer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of suspicious processes typically associated with WebShell activity on web servers. It detects when processes like `cmd.exe`, `powershell.exe`, or `bash.exe` are spawned by web server processes such as `w3wp.exe` or `nginx.exe`. This behavior is significant as it may indicate an adversary exploiting a web application vulnerability to install a WebShell, providing persistent access and command execution capabilities. If confirmed malicious, this act…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate OS functions called by vendor applications, baseline the environment and filter before enabling. Recommend throttle by dest/process_name",
              "refs": "https://attack.mitre.org/techniques/T1505/003/, https://github.com/nsacyber/Mitigating-Web-Shells",
              "mitre": [
                "T1505.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Suspicious Child Process Spawned From WebServer\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Suspicious Child Process Spawned From WebServer\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate OS functions called by vendor applications, baseline the environment and filter before enabling. Recommend throttle by dest/process_name\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
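\n\nIllustrative Detection Logic (Sketch)\n\nThe authoritative search logic ships inside the ESCU correlation search definition and is viewable in Content Management. For orientation only, here is a minimal, hypothetical approximation of the typical underlying pattern, using the web server parents and shell children named in this use case; the packaged detection may cover additional binaries and apply exclusion filters:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name IN (\"w3wp.exe\", \"nginx.exe\") Processes.process_name IN (\"cmd.exe\", \"powershell.exe\", \"bash.exe\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n```",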
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies the execution of suspicious processes typically associated with WebShell activity on web servers, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.118",
              "n": "Windows Suspicious VMWare Tools Child Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies child processes spawned by vmtoolsd.exe, the VMWare Tools service in Windows, which typically runs with SYSTEM privileges. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and parent process relationships. Monitoring this activity is crucial as it can indicate exploitation attempts, such as CVE-2023-20867. If confirmed malicious, attackers could gain SYSTEM-level access, allowing them to execute arbitrary code,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, some legitimate Administrative scripts may utilize VMWare Tools to execute commands on virtual machines.",
              "refs": "https://cloud.google.com/blog/topics/threat-intelligence/vmware-esxi-zero-day-bypass/",
              "mitre": [
                "T1059"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Suspicious VMWare Tools Child Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Suspicious VMWare Tools Child Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, some legitimate Administrative scripts may utilize VMWare Tools to execute commands on virtual machines.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scoped to this detection’s saved-search name):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Suspicious VMWare Tools Child Process*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
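\n\nIllustrative Detection Logic (Sketch)\n\nThe authoritative search logic ships inside the ESCU correlation search definition and is viewable in Content Management. The sketch below is a minimal, hypothetical approximation keyed on vmtoolsd.exe as the parent process; the child-process list shown here is an assumption for illustration, and the packaged detection defines the authoritative scoping and exclusions:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=\"vmtoolsd.exe\" Processes.process_name IN (\"cmd.exe\", \"powershell.exe\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n```",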
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies child processes spawned by vmtoolsd.exe, the VMWare Tools service in Windows, which typically runs with SYSTEM privileges, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.119",
              "n": "Windows Vulnerable 3CX Software",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances of the 3CXDesktopApp.exe with a FileVersion of 18.12.x, leveraging Sysmon logs. This detection focuses on identifying vulnerable versions 18.12.407 and 18.12.416 of the 3CX desktop app. Monitoring this activity is crucial as these specific versions have known vulnerabilities that could be exploited by attackers. If confirmed malicious, exploitation of this vulnerability could lead to unauthorized access, code execution, or further compromise of the affect…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on file version, modify the analytic to only look for version between 18.12.407 and 18.12.416 as needed.",
              "refs": "https://www.sentinelone.com/blog/smoothoperator-ongoing-campaign-trojanizes-3cx-software-in-software-supply-chain-attack/, https://www.cisa.gov/news-events/alerts/2023/03/30/supply-chain-attack-against-3cxdesktopapp, https://www.reddit.com/r/crowdstrike/comments/125r3uu/20230329_situational_awareness_crowdstrike/, https://www.3cx.com/community/threads/crowdstrike-endpoint-security-detection-re-3cx-desktop-app.119934/page-2#post-558898, https://www.3cx.com/community/threads/3cx-desktopapp-security-alert.119951/",
              "mitre": [
                "T1195.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Vulnerable 3CX Software\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1195.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Vulnerable 3CX Software\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on file version, modify the analytic to only look for version between 18.12.407 and 18.12.416 as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
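\n\nIllustrative Detection Logic (Sketch)\n\nThe authoritative search logic ships inside the ESCU correlation search definition and is viewable in Content Management. The sketch below is a minimal, hypothetical approximation of the typical underlying pattern: a raw Sysmon EventID 1 search keyed on the FileVersion field. The `sysmon` macro is an ESCU convention; substitute your Sysmon index and sourcetype if the macro is not defined. As noted under known false positives, tighten FileVersion to 18.12.407 and 18.12.416 to reduce noise:\n\n```spl\n`sysmon` EventCode=1 Image=\"*3CXDesktopApp.exe\" FileVersion=\"18.12.*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, user, Image, FileVersion\n```",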
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects instances of the 3CXDesktopApp.exe with a FileVersion of 18.12.x, leveraging Sysmon logs, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.120",
              "n": "Windows Vulnerable Driver Installed",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the loading of known vulnerable Windows drivers, which may indicate potential persistence or privilege escalation attempts. It leverages Windows System service install EventCode 7045 to identify driver loading events and cross-references them with a list of vulnerable drivers. This activity is significant as attackers often exploit vulnerable drivers to gain elevated privileges or maintain persistence on a system. If confirmed malicious, this could allow attackers …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7045",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log System 7045 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present. Drill down into the driver further by version number and cross reference by signer. Review the reference material in the lookup. In addition, modify the query to look within specific paths, which will remove a lot of \"normal\" drivers.",
              "refs": "https://loldrivers.io/, https://github.com/SpikySabra/Kernel-Cactus, https://github.com/wavestone-cdt/EDRSandblast, https://research.splunk.com/, https://www.splunk.com/en_us/blog/security/these-are-the-drivers-you-are-looking-for-detect-and-prevent-malicious-drivers.html",
              "mitre": [
                "T1543.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Vulnerable Driver Installed\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7045. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Vulnerable Driver Installed\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present. Drill down into the driver further by version number and cross reference by signer. Review the reference material in the lookup. In addition, modify the query to look within specific paths, which will remove a lot of \"normal\" drivers.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scoped to this detection’s saved-search name):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Vulnerable Driver Installed*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
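\n\nIllustrative Detection Logic (Sketch)\n\nThe authoritative search logic ships inside the ESCU correlation search definition and is viewable in Content Management. For orientation, here is a minimal, hypothetical approximation of the typical underlying pattern: a raw System log search for service installs of kernel-mode drivers. The `wineventlog_system` macro is an ESCU convention, field names such as ServiceName and ImagePath vary with the Windows TA version in use, and the packaged detection additionally cross-references results against a vulnerable-driver lookup:\n\n```spl\n`wineventlog_system` EventCode=7045 ServiceType=\"kernel mode driver\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, ServiceName, ImagePath\n```",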
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the loading of known vulnerable Windows drivers, which may indicate potential persistence or privilege escalation attempts, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.121",
              "n": "Windows Vulnerable Driver Loaded",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the loading of known vulnerable Windows drivers, which may indicate potential persistence or privilege escalation attempts. It leverages Sysmon EventCode 6 to identify driver loading events and cross-references them with a list of vulnerable drivers. This activity is significant as attackers often exploit vulnerable drivers to gain elevated privileges or maintain persistence on a system. If confirmed malicious, this could allow attackers to execute arbitrary code w…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 6",
              "q": "`sysmon` EventCode=6\n      | stats  min(_time) as firstTime max(_time) as lastTime count\n        BY ImageLoaded dest dvc\n           process_hash process_path signature\n           signature_id user_id vendor_product\n      | lookup loldrivers driver_name AS ImageLoaded OUTPUT is_driver driver_description\n      | search is_driver = TRUE\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_vulnerable_driver_loaded_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 6 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present. Drill down into the driver further by version number and cross reference by signer. Review the reference material in the lookup. In addition, modify the query to look within specific paths, which will remove a lot of \"normal\" drivers.",
              "refs": "https://github.com/eclypsium/Screwed-Drivers/blob/master/DRIVERS.md, https://learn.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/microsoft-recommended-driver-block-rules, https://www.rapid7.com/blog/post/2021/12/13/driver-based-attacks-past-and-present/, https://github.com/jbaines-r7/dellicious, https://github.com/namazso/physmem_drivers, https://github.com/stong/CVE-2020-15368, https://github.com/CaledoniaProject/drivers-binaries, https://www.welivesecurity.com/2022/01/11/signed-kernel-drivers-unguarded-gateway-windows-core/, https://eclypsium.com/2019/11/12/mother-of-all-drivers/, https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-37969",
              "mitre": [
                "T1543.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Vulnerable Driver Loaded\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 6. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Vulnerable Driver Loaded\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present. Drill down into the driver further by version number and cross reference by signer. Review the reference material in the lookup. In addition, modify the query to look within specific paths, which will remove a lot of \"normal\" drivers.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the loading of known vulnerable Windows drivers, which may indicate potential persistence or privilege escalation attempts, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.122",
              "n": "Windows WSUS Spawning Shell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a shell (PowerShell.exe or Cmd.exe) is spawned from wsusservice.exe, the Windows Server Update Services process. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events where the parent process is wsusservice.exe. This activity is significant as it may indicate exploitation of CVE-2025-59287, a critical deserialization vulnerability in WSUS that allows unauthenticated remote code execut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate WSUS maintenance scripts or administrative tools may spawn shells in rare cases. However, wsusservice.exe spawning interactive shells is highly abnormal behavior. Review the command line arguments and user context to determine legitimacy. Filter known administrative scripts if needed.",
              "refs": "https://www.huntress.com/blog/exploitation-of-windows-server-update-services-remote-code-execution-vulnerability, https://msrc.microsoft.com/update-guide/vulnerability/CVE-2025-59287, https://hawktrace.com/blog/CVE-2025-59287-UNAUTH",
              "mitre": [
                "T1190",
                "T1505.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows WSUS Spawning Shell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows WSUS Spawning Shell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate WSUS maintenance scripts or administrative tools may spawn shells in rare cases. However, wsusservice.exe spawning interactive shells is highly abnormal behavior. Review the command line arguments and user context to determine legitimacy. Filter known administrative scripts if needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scoped to this detection’s saved-search name):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*WSUS Spawning Shell*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
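\n\nIllustrative Detection Logic (Sketch)\n\nThe authoritative search logic ships inside the ESCU correlation search definition and is viewable in Content Management. The sketch below is a minimal, hypothetical approximation keyed on wsusservice.exe as the parent of the shells named in this use case; the packaged detection defines the authoritative scoping and any exclusions:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.parent_process_name=\"wsusservice.exe\" Processes.process_name IN (\"cmd.exe\", \"powershell.exe\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process\n```",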
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies instances where a shell (PowerShell.exe or Cmd.exe) is spawned from wsusservice.exe, the Windows Server Update Services process, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.123",
              "n": "Winhlp32 Spawning a Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects winhlp32.exe spawning a child process that loads a file from appdata, programdata, or temp directories. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process creation events. This activity is significant because winhlp32.exe has known vulnerabilities and can be exploited to execute malicious code. If confirmed malicious, an attacker could use this technique to execute arbitrary scripts, escalate privileges, or maintain…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as winhlp32.exe is typically not used with the latest flavors of Windows OS. However, filter as needed.",
              "refs": "https://www.exploit-db.com/exploits/16541, https://tria.ge/210929-ap75vsddan, https://www.virustotal.com/gui/file/cb77b93150cb0f7fe65ce8a7e2a5781e727419451355a7736db84109fa215a89",
              "mitre": [
                "T1055"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Winhlp32 Spawning a Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Winhlp32 Spawning a Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as winhlp32.exe is typically not used with the latest flavors of Windows OS. However, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
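\n\nEntity Risk Review\n\nTo see how risk accumulates on an entity before a Risk Notable fires, you can inspect the risk index directly. A minimal sketch, assuming RBA events land in the default `risk` index; adjust the index name if your deployment routes risk events elsewhere:\n\n```spl\nindex=risk\n| stats sum(risk_score) as total_risk values(search_name) as contributing_detections by risk_object\n| sort -total_risk\n```",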
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects winhlp32.exe spawning a child process that loads a file from appdata, programdata, or temp directories, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.124",
              "n": "WinRAR Spawning Shell Application",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of Windows shell processes initiated by WinRAR, such as \"cmd.exe\", \"powershell.exe\", \"certutil.exe\", \"mshta.exe\", or \"bitsadmin.exe\". This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process and parent process relationships. This activity is significant because it may indicate exploitation of the WinRAR CVE-2023-38831 vulnerability, where malicious scripts are executed from spoofed ZIP archives. If confirmed…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Be aware of potential false positives - legitimate uses of WinRAR and the listed processes in your environment may cause benign activities to be flagged. Upon triage, review the destination, user, parent process, and process name involved in the flagged activity. Capture and inspect any relevant on-disk artifacts, and look for concurrent processes to identify the attack source. This approach helps analysts detect potential threats earlier and mitigate the risks.",
              "refs": "https://www.group-ib.com/blog/cve-2023-38831-winrar-zero-day/, https://github.com/BoredHackerBlog/winrar_CVE-2023-38831_lazy_poc, https://github.com/b1tg/CVE-2023-38831-winrar-exploit",
              "mitre": [
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WinRAR Spawning Shell Application\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WinRAR Spawning Shell Application\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Be aware of potential false positives - legitimate uses of WinRAR and the listed processes in your environment may cause benign activities to be flagged. Upon triage, review the destination, user, parent process, and process name involved in the flagged activity. Capture and inspect any relevant on-disk artifacts, and look for concurrent processes to identify the attack source. This approach helps analysts detect potential threats earlier and mitigate the risks.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
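\n\nSuppression Macro Example\n\nESCU detections expose a tuning macro (for this detection, typically `winrar_spawning_shell_application_filter`, defaulting to `search *`). A minimal sketch of a lookup-based macro override; the lookup name and its `dest` field are illustrative assumptions, not shipped content:\n\n```spl\nsearch NOT [| inputlookup winrar_known_good_hosts.csv | fields dest]\n```\n\nRedefine the macro under Settings → Advanced search → Search macros, and keep the lookup small and reviewed so the suppression stays auditable.",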
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the execution of Windows shell processes initiated by WinRAR, such as \"cmd.exe\", \"powershell.exe\", \"certutil.exe\", \"mshta.exe\", or \"bitsadmin.exe\", and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.125",
              "n": "WinRM Spawning a Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious processes spawned by WinRM (wsmprovhost.exe). It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific child processes like cmd.exe, powershell.exe, and others. This activity is significant as it may indicate exploitation attempts of vulnerabilities like CVE-2021-31166, which could lead to system instability or compromise. If confirmed malicious, attackers could execute arbitrary commands, escalate privileges, or maintain…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.parent_process_name=wsmprovhost.exe Processes.process_name IN (\"cmd.exe\",\"sh.exe\",\"bash.exe\",\"powershell.exe\",\"pwsh.exe\",\"schtasks.exe\",\"certutil.exe\",\"whoami.exe\",\"bitsadmin.exe\",\"scp.exe\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `winrm_spawning_a_process_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. system management software may spawn processes from `wsmprovhost.exe`.",
              "refs": "https://github.com/SigmaHQ/sigma/blob/9b7fb0c0f3af2e53ed483e29e0d0f88ccf1c08ca/rules/windows/process_access/win_susp_shell_spawn_from_winrm.yml, https://www.zerodayinitiative.com/blog/2021/5/17/cve-2021-31166-a-wormable-code-execution-bug-in-httpsys, https://github.com/0vercl0k/CVE-2021-31166/blob/main/cve-2021-31166.py",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WinRM Spawning a Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WinRM Spawning a Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. system management software may spawn processes from `wsmprovhost.exe`.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
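\n\nPre-Enablement Data Check\n\nBefore enabling, it is worth confirming that the Endpoint data model actually records wsmprovhost.exe parentage. A quick sketch against the same data model the detection uses:\n\n```spl\n| tstats count from datamodel=Endpoint.Processes where Processes.parent_process_name=wsmprovhost.exe by Processes.process_name, Processes.dest\n| sort -count\n```\n\nAn empty result can mean either no WinRM-spawned processes or a data model population gap; verify with a broader search before concluding you have coverage.",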
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects suspicious processes spawned by WinRM (wsmprovhost.exe), and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.126",
              "n": "Cisco Secure Firewall - Veeam CVE-2023-27532 Exploitation Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Veeam CVE-2023-27532 Exploitation Activity. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Intrusion Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Intrusion Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be very unlikely.",
              "refs": "https://nvd.nist.gov/vuln/detail/CVE-2023-27532, https://www.veeam.com/kb4424",
              "mitre": [
                "T1190",
                "T1210",
                "T1059.001",
                "T1003.001"
              ],
              "dtype": "ip_address",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Veeam CVE-2023-27532 Exploitation Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Intrusion Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1210, T1059.001, T1003.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Veeam CVE-2023-27532 Exploitation Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be very unlikely.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco Secure Firewall - Veeam Exploitation Activity, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.127",
              "n": "Cisco Smart Install Port Discovery and Status",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects network traffic to TCP port 4786, which is used by the Cisco Smart Install protocol. Smart Install is a plug-and-play configuration and image-management feature that helps customers to deploy Cisco switches. This protocol has been exploited via CVE-2018-0171, a vulnerability that allows unauthenticated remote attackers to execute arbitrary code or cause denial of service conditions. Recently, Cisco Talos reported that a Russian state-sponsored threat actor called \"Static Tu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Splunk Stream TCP",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Splunk Stream TCP ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of Cisco Smart Install may generate traffic to port 4786 within environments that actively use this feature for switch deployment and management. Network administrators might use Smart Install for legitimate device configuration purposes, especially during network deployment or maintenance windows. To reduce false positives, baseline normal Smart Install usage patterns in your environment and consider implementing time-based filtering to alert only on unexpected usage outside of scheduled maintenance periods. Additionally, consider whitelisting known management stations that legitimately use Smart Install.",
              "refs": "https://blog.talosintelligence.com/static-tundra/, https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180328-smi2, https://github.com/AlrikRr/Cisco-Smart-Exploit, https://www.exploit-db.com/exploits/44451",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Smart Install Port Discovery and Status\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Splunk Stream TCP. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Smart Install Port Discovery and Status\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of Cisco Smart Install may generate traffic to port 4786 within environments that actively use this feature for switch deployment and management. Network administrators might use Smart Install for legitimate device configuration purposes, especially during network deployment or maintenance windows. To reduce false positives, baseline normal Smart Install usage patterns in your environment and consider implementing time-based filtering to alert only on unexpected usage outside of scheduled maintenance periods. Additionally, consider whitelisting known management stations that legitimately use Smart Install.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
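\n\nBaseline Example\n\nThe false-positive guidance above suggests baselining normal Smart Install usage. A minimal sketch using the Network_Traffic data model; it assumes your Stream or firewall traffic is CIM-mapped into that model:\n\n```spl\n| tstats summariesonly=true count from datamodel=Network_Traffic where All_Traffic.dest_port=4786 by All_Traffic.src, All_Traffic.dest, _time span=1d\n| rename All_Traffic.* as *\n```\n\nRun it over 30 days; recurring src/dest pairs are candidates for a management-station allowlist.",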
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic detects network traffic to TCP port 4786, which is used by the Cisco Smart Install protocol, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.128",
              "n": "Detect Windows DNS SIGRed via Splunk Stream",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the SIGRed vulnerability (CVE-2020-1350) in Windows DNS servers. It leverages Splunk Stream DNS and TCP data to identify DNS SIG and KEY records, as well as TCP payloads exceeding 65KB. This activity is significant because SIGRed is a critical wormable vulnerability that allows remote code execution. If confirmed malicious, an attacker could gain unauthorized access, execute arbitrary code, and potentially disrupt services, leading to severe dat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`stream_dns`\n    | spath \"query_type{}\"\n    | search \"query_type{}\" IN (SIG,KEY)\n    | spath protocol_stack\n    | search protocol_stack=\"ip:tcp:dns\"\n    | append [search `stream_tcp` bytes_out>65000]\n    | stats count by flow_id\n    | where count>1\n    | fields - count\n    | `detect_windows_dns_sigred_via_splunk_stream_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to entity entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1203"
              ],
              "dtype": "other",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Windows DNS SIGRed via Splunk Stream\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to entity entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1203. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Windows DNS SIGRed via Splunk Stream\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the SIGRed vulnerability in Windows servers, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.129",
              "n": "Detect Windows DNS SIGRed via Zeek",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the presence of SIGRed, a critical DNS vulnerability, using Zeek DNS and Zeek Conn data. It identifies specific DNS query types (SIG and KEY) and checks for high data transfer within a flow. This detection is significant because SIGRed allows attackers to execute remote code on Windows DNS servers, potentially leading to unauthorized access and control. If confirmed malicious, this activity could result in data exfiltration, service disruption, or further network c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count from datamodel=Network_Resolution where\n      DNS.query_type IN (SIG,KEY) by DNS.flow_id\n    | rename DNS.flow_id as flow_id\n    | append [\n      | tstats  `security_content_summariesonly` count\n      from datamodel=Network_Traffic where\n      All_Traffic.bytes_in>65000\n      by All_Traffic.flow_id\n      | rename All_Traffic.flow_id as flow_id\n    ]\n    | stats count by flow_id\n    | where count>1\n    | fields - count'\n    | `detect_windows_dns_sigred_via_zeek_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to entity entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1203"
              ],
              "dtype": "other",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Windows DNS SIGRed via Zeek\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to entity entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1203. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Windows DNS SIGRed via Zeek\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the presence of SIGRed, a critical vulnerability, using Zeek and Zeek Conn data, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.130",
              "n": "Detect Zerologon via Zeek",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the Zerologon CVE-2020-1472 vulnerability via Zeek RPC. It leverages Zeek DCE-RPC data to identify specific operations: NetrServerPasswordSet2, NetrServerReqChallenge, and NetrServerAuthenticate3. This activity is significant because it indicates an attempt to gain unauthorized access to a domain controller, potentially leading to a complete takeover of an organization''s IT infrastructure. If confirmed malicious, the impact could be severe, inc…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`zeek_rpc` operation IN (NetrServerPasswordSet2,NetrServerReqChallenge,NetrServerAuthenticate3)\n      | bin span=5m _time\n      | stats values(operation) dc(operation) as opscount count(eval(operation==\"NetrServerReqChallenge\")) as challenge count(eval(operation==\"NetrServerAuthenticate3\")) as authcount count(eval(operation==\"NetrServerPasswordSet2\")) as passcount count as totalcount\n        BY _time,src,dest\n      | search opscount=3 authcount>4 passcount>0\n      | search `detect_zerologon_via_zeek_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.secura.com/blog/zero-logon, https://github.com/SecuraBV/CVE-2020-1472, https://msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2020-1472, https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-319a",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Zerologon via Zeek\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Zerologon via Zeek\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the Zerologon vulnerability using Zeek RPC, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.131",
              "n": "DNS Kerberos Coercion",
              "c": "high",
              "f": "intermediate",
              "v": "Detects DNS-based Kerberos coercion attacks where adversaries inject marshaled credential structures into DNS records to spoof SPNs and redirect authentication such as in CVE-2025-33073. This detection leverages suricata looking for specific CREDENTIAL_TARGET_INFORMATION structures in DNS queries.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata, Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata, Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's unlikely that a DNS entry contains the specific structure used by this attack. Filter as needed for your organization.",
              "refs": "https://web.archive.org/web/20250617122747/https://www.synacktiv.com/publications/ntlm-reflection-is-dead-long-live-ntlm-reflection-an-in-depth-analysis-of-cve-2025, https://www.synacktiv.com/publications/relaying-kerberos-over-smb-using-krbrelayx, https://www.guidepointsecurity.com/blog/the-birth-and-death-of-loopyticket/",
              "mitre": [
                "T1557.001",
                "T1187",
                "T1071.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"DNS Kerberos Coercion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata, Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1557.001, T1187, T1071.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"DNS Kerberos Coercion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's unlikely that a DNS entry contains the specific structure used by this attack. Filter as needed for your organization.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch -based Kerberos coercion attacks where adversaries inject marshaled credential structures into records to spoof SPNs and redirect authentication such as, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.132",
              "n": "F5 BIG-IP iControl REST Vulnerability CVE-2022-1388",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the F5 BIG-IP iControl REST API vulnerability (CVE-2022-1388) for unauthenticated remote code execution. It identifies suspicious URI paths and POST HTTP methods, along with specific request headers containing potential commands in the `utilcmdargs` field and a random base64 encoded value in the `X-F5-Auth-Token` field. This activity is significant as it targets a critical vulnerability that can allow attackers to execute arbitrary commands on t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Palo Alto Network Threat",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Palo Alto Network Threat ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if the activity is blocked or was not successful. Filter known vulnerablity scanners. Filter as needed.",
              "refs": "https://github.com/dk4trin/templates-nuclei/blob/main/CVE-2022-1388.yaml, https://www.randori.com/blog/vulnerability-analysis-cve-2022-1388/, https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-1388, https://twitter.com/da_667/status/1523770267327250438?s=20&t=-JnB_aNWuJFsmcOmxGUWLQ, https://github.com/horizon3ai/CVE-2022-1388/blob/main/CVE-2022-1388.py",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"F5 BIG-IP iControl REST Vulnerability CVE-2022-1388\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Palo Alto Network Threat. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"F5 BIG-IP iControl REST Vulnerability CVE-2022-1388\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if the activity is blocked or was not successful. Filter known vulnerablity scanners. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the F5 BIG-IP iControl REST vulnerability for unauthenticated remote code execution, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.133",
              "n": "Internal Horizontal Port Scan",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic identifies instances where an internal host has attempted to communicate with 250 or more destination IP addresses using the same port and protocol. Horizontal port scans from internal hosts can indicate reconnaissance or scanning activities, potentially signaling malicious intent or misconfiguration. By monitoring network traffic logs, this detection helps detect and respond to such behavior promptly, enhancing network security and preventing potential threats.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudWatchLogs VPCflow, Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudWatchLogs VPCflow, Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1046"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Internal Horizontal Port Scan\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudWatchLogs VPCflow, Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1046. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Internal Horizontal Port Scan\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic identifies instances where an internal host has attempted to communicate with 250 or more destination IP addresses using the same port and protocol, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.134",
              "n": "Internal Horizontal Port Scan NMAP Top 20",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic identifies instances where an internal host has attempted to communicate with 250 or more destination IP addresses using on of the NMAP top 20 ports. Horizontal port scans from internal hosts can indicate reconnaissance or scanning activities, potentially signaling malicious intent or misconfiguration. By monitoring network traffic logs, this detection helps detect and respond to such behavior promptly, enhancing network security and preventing potential threats.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudWatchLogs VPCflow, Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN ($src$) starthoursago=168 endhoursago=1 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To properly run this search, Splunk needs to ingest data from networking telemetry sources such as firewalls like Cisco Secure Firewall, NetFlow, or host-based networking events. Ensure that the Network_Traffic data model is populated to enable this search effectively.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1046"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Internal Horizontal Port Scan NMAP Top 20\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudWatchLogs VPCflow, Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1046. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Internal Horizontal Port Scan NMAP Top 20\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic identifies instances where an internal host has attempted to communicate with 250 or more destination IP addresses using on of the NMAP top 20 ports, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.135",
              "n": "Internal Vertical Port Scan",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects instances where an internal host attempts to communicate with over 500 ports on a single destination IP address. It includes filtering criteria to exclude applications performing scans over ephemeral port ranges, focusing on potential reconnaissance or scanning activities. Monitoring network traffic logs allows for timely detection and response to such behavior, enhancing network security by identifying and mitigating potential threats promptly.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudWatchLogs VPCflow, Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To properly run this search, Splunk needs to ingest data from networking telemetry sources such as firewalls, NetFlow, or host-based networking events. Ensure that the Network_Traffic data model is populated to enable this search effectively.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1046"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Internal Vertical Port Scan\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudWatchLogs VPCflow, Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1046. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Internal Vertical Port Scan\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic detects instances where an internal host attempts to communicate with over 500 ports on a single destination IP address, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.136",
              "n": "Internal Vulnerability Scan",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects internal hosts triggering multiple IDS signatures, which may include either more than 25 signatures against a single host or a single signature across over 25 destination IP addresses. Such patterns can indicate active vulnerability scanning activities within the network. By monitoring IDS logs, this detection helps identify and respond to potential vulnerability scanning attempts, enhancing the network's security posture and preventing potential exploits.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` values(IDS_Attacks.action) as action values(IDS_Attacks.src_category) as src_category values(IDS_Attacks.dest_category) as dest_category count FROM datamodel=Intrusion_Detection.IDS_Attacks\n      WHERE IDS_Attacks.src IN (10.0.0.0/8,192.168.0.0/16,172.16.0.0/12) IDS_Attacks.severity IN (critical, high, medium)\n      BY IDS_Attacks.src IDS_Attacks.severity IDS_Attacks.signature\n         IDS_Attacks.dest IDS_Attacks.dest_port IDS_Attacks.transport\n         span=1s _time\n    | `drop_dm_object_name(\"IDS_Attacks\")`\n    | eval gtime=_time\n    | bin span=1h gtime\n    | eventstats count as sevCount\n      BY severity src\n    | eventstats count as sigCount\n      BY signature src\n    | eval severity=severity +\"(\"+sevCount+\")\"\n    | eval signature=signature +\"(\"+sigCount+\")\"\n    | eval dest_port=transport + \"/\" + dest_port\n    | stats min(_time) as _time values(action) as action dc(dest) as destCount dc(signature) as sigCount values(signature) values(src_category) as src_category values(dest_category) as dest_category values(severity) as severity values(dest_port) as dest_ports\n      BY src gtime\n    | fields - gtime\n    | where destCount>25 OR sigCount>25\n    | `internal_vulnerability_scan_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Internal vulnerability scanners will trigger this detection.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1595.002",
                "T1046"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Internal Vulnerability Scan\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1595.002, T1046. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Internal Vulnerability Scan\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Internal vulnerability scanners will trigger this detection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This analytic detects internal hosts triggering multiple signatures, which may include either more than 25 signatures against a single host or a single signature across over 25 destination IP addresses, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.137",
              "n": "Access to Vulnerable Ivanti Connect Secure Bookmark Endpoint",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies access to the /api/v1/configuration/users/user-roles/user-role/rest-userrole1/web/web-bookmarks/bookmark endpoint, which is associated with CVE-2023-46805 and CVE-2024-21887 vulnerabilities. It detects this activity by monitoring for GET requests that receive a 403 Forbidden response with an empty body. This behavior is significant as it indicates potential exploitation attempts against Ivanti Connect Secure systems. If confirmed malicious, attackers could explo…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This analytic is limited to HTTP Status 403; adjust as necessary. False positives may occur if the URI path is IP-restricted or externally blocked. It's recommended to review the context of the alerts and adjust the analytic parameters to better fit the specific environment.",
              "refs": "https://github.com/RootUp/PersonalStuff/blob/master/http-vuln-cve2023-46805_2024_21887.nse, https://github.com/projectdiscovery/nuclei-templates/blob/c6b351e71b0fb0e40e222e97038f1fe09ac58194/http/misconfiguration/ivanti/CVE-2023-46085-CVE-2024-21887-mitigation-not-applied.yaml, https://github.com/rapid7/metasploit-framework/pull/18708/files",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Access to Vulnerable Ivanti Connect Secure Bookmark Endpoint\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Access to Vulnerable Ivanti Connect Secure Bookmark Endpoint\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This analytic is limited to HTTP Status 403; adjust as necessary. False positives may occur if the URI path is IP-restricted or externally blocked. It's recommended to review the context of the alerts and adjust the analytic parameters to better fit the specific environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies access to the //v1/configuration/users/user-roles/user-role/rest-userrole1/web/web-bookmarks/bookmark endpoint, which is associated with and vulnerabilities, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.138",
              "n": "Adobe ColdFusion Access Control Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential exploitation attempts against Adobe ColdFusion vulnerabilities CVE-2023-29298 and CVE-2023-26360. It monitors requests to specific ColdFusion Administrator endpoints, especially those with an unexpected additional forward slash, using the Web datamodel. This activity is significant for a SOC as it indicates attempts to bypass access controls, which can lead to unauthorized access to ColdFusion administration endpoints. If confirmed malicious, this could r…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This analytic is limited to HTTP Status 200; adjust as necessary. False positives may occur if the URI path is IP-restricted or externally blocked. It's recommended to review the context of the alerts and adjust the analytic parameters to better fit the specific environment.",
              "refs": "https://www.rapid7.com/blog/post/2023/07/11/cve-2023-29298-adobe-coldfusion-access-control-bypass/",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Adobe ColdFusion Access Control Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Adobe ColdFusion Access Control Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This analytic is limited to HTTP Status 200; adjust as necessary. False positives may occur if the URI path is IP-restricted or externally blocked. It's recommended to review the context of the alerts and adjust the analytic parameters to better fit the specific environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects potential exploitation attempts against Adobe ColdFusion vulnerabilities and, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.139",
              "n": "Adobe ColdFusion Unauthenticated Arbitrary File Read",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential exploitation of the Adobe ColdFusion vulnerability, CVE-2023-26360, which allows unauthenticated arbitrary file read. It monitors web requests to the \"/cf_scripts/scripts/ajax/ckeditor/*\" path using the Web datamodel, focusing on specific ColdFusion paths to differentiate malicious activity from normal traffic. This activity is significant due to the vulnerability's high CVSS score of 9.8, indicating severe risk. If confirmed malicious, it could lead to u…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "In the wild, we have observed three different types of attempts that could potentially trigger false positives if the HTTP status code is not in the query. Please check this github gist for the specific URIs : https://gist.github.com/patel-bhavin/d10830f3f375a2397233f6a4fe38d5c9 . These could be legitimate requests depending on the context of your organization. Therefore, it is recommended to modify the analytic as needed to suit your specific environment.",
              "refs": "https://www.rapid7.com/db/modules/auxiliary/gather/adobe_coldfusion_fileread_cve_2023_26360/, https://github.com/projectdiscovery/nuclei-templates/blob/main/http/cves/2023/CVE-2023-26360.yaml",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Adobe ColdFusion Unauthenticated Arbitrary File Read\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Adobe ColdFusion Unauthenticated Arbitrary File Read\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: In the wild, we have observed three different types of attempts that could potentially trigger false positives if the HTTP status code is not in the query. Please check this github gist for the specific URIs : https://gist.github.com/patel-bhavin/d10830f3f375a2397233f6a4fe38d5c9 . These could be legitimate requests depending on the context of your organization. Therefore, it is recommended to modify the analytic as needed to suit your specific environment.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects potential exploitation of the Adobe ColdFusion vulnerability, which allows unauthenticated arbitrary file read, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.140",
              "n": "Cisco IOS XE Implant Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the potential exploitation of a vulnerability (CVE-2023-20198) in the Web User Interface of Cisco IOS XE software. It detects suspicious account creation and subsequent actions, including the deployment of a non-persistent implant configuration file. The detection leverages the Web datamodel, focusing on specific URL patterns and HTTP methods. This activity is significant as it indicates unauthorized administrative access, which can lead to full control of the d…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, restrict to Cisco IOS XE devices or perimeter appliances. Modify the analytic as needed based on hunting for successful exploitation of CVE-2023-20198.",
              "refs": "https://blog.talosintelligence.com/active-exploitation-of-cisco-ios-xe-software/, https://github.com/vulncheck-oss/cisco-ios-xe-implant-scanner",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco IOS XE Implant Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco IOS XE Implant Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, restrict to Cisco IOS XE devices or perimeter appliances. Modify the analytic as needed based on hunting for successful exploitation of CVE-2023-20198.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies the potential exploitation of a vulnerability in the Web User Interface of Cisco IOS XE software, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.141",
              "n": "Citrix ADC and Gateway Unauthorized Data Disclosure",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the Citrix Bleed vulnerability (CVE-2023-4966), which can lead to the leaking of session tokens. It identifies HTTP requests with a 200 status code targeting the /oauth/idp/.well-known/openid-configuration URL endpoint. By parsing web traffic and filtering based on user agent details, HTTP method, source and destination IPs, and sourcetype, it aims to identify potentially malicious requests. This activity is significant for a SOC because success…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on organization use of Citrix ADC and Gateway. Filter, or restrict the analytic to Citrix devices only.",
              "refs": "https://www.assetnote.io/resources/research/citrix-bleed-leaking-session-tokens-with-cve-2023-4966, https://github.com/assetnote/exploits/tree/main/citrix/CVE-2023-4966",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Citrix ADC and Gateway Unauthorized Data Disclosure\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Citrix ADC and Gateway Unauthorized Data Disclosure\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on organization use of Citrix ADC and Gateway. Filter, or restrict the analytic to Citrix devices only.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the Citrix Bleed vulnerability, which can lead to the leaking of session tokens, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.142",
              "n": "Citrix ADC Exploitation CVE-2023-3519",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential exploitation attempts against Citrix ADC related to CVE-2023-3519. It detects POST requests to specific web endpoints associated with this vulnerability by leveraging the Web datamodel. This activity is significant as CVE-2023-3519 involves a SAML processing overflow issue that can lead to memory corruption, posing a high risk. If confirmed malicious, attackers could exploit this to execute arbitrary code, escalate privileges, or disrupt services, maki…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Palo Alto Network Threat",
              "q": "| tstats count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Web\n      WHERE Web.url IN (\"*/saml/login\",\"/cgi/samlauth\",\"*/saml/activelogin\",\"/cgi/samlart?samlart=*\",\"*/cgi/logout\",\"/gwtest/formssso?event=start&target=*\",\"/netscaler/ns_gui/vpn/*\")  Web.http_method=POST\n      BY Web.http_user_agent, Web.status Web.http_method,\n         Web.url, Web.url_length, Web.src,\n         Web.dest, sourcetype\n    | `drop_dm_object_name(\"Web\")`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `citrix_adc_exploitation_cve_2023_3519_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Palo Alto Network Threat ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on organization use of SAML utilities. Filter, or restrict the analytic to Citrix devices only.",
              "refs": "https://blog.assetnote.io/2023/07/21/citrix-CVE-2023-3519-analysis/, https://support.citrix.com/article/CTX561482/citrix-adc-and-citrix-gateway-security-bulletin-for-cve20233519-cve20233466-cve20233467, https://securityintelligence.com/x-force/x-force-uncovers-global-netscaler-gateway-credential-harvesting-campaign/, https://support.citrix.com/article/CTX579459/netscaler-adc-and-netscaler-gateway-security-bulletin-for-cve20234966-and-cve20234967",
              "mitre": [
                "T1190"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Citrix ADC Exploitation CVE-2023-3519\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Palo Alto Network Threat. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Citrix ADC Exploitation CVE-2023-3519\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on organization use of SAML utilities. Filter, or restrict the analytic to Citrix devices only.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies potential exploitation attempts against Citrix ADC related to, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.143",
              "n": "Citrix ShareFile Exploitation CVE-2023-24489",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potentially malicious file upload attempts to Citrix ShareFile via specific suspicious URLs and the HTTP POST method. It leverages the Web datamodel to identify URL patterns such as \"/documentum/upload.aspx?parentid=\", \"/documentum/upload.aspx?filename=\", and \"/documentum/upload.aspx?uploadId=*\", combined with the HTTP POST method. This activity is significant for a SOC as it may indicate an attempt to upload harmful scripts or content, potentially compromising the…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| tstats count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Web\n      WHERE Web.url=\"/documentum/upload.aspx?*\"\n        AND\n        Web.url IN (\"*parentid=*\",\"*filename=*\",\"*uploadId=*\")\n        AND\n        Web.url IN (\"*unzip=*\", \"*raw=*\") Web.http_method=POST\n      BY Web.http_user_agent, Web.status Web.http_method,\n         Web.url, Web.url_length, Web.src,\n         Web.dest, sourcetype\n    | `drop_dm_object_name(\"Web\")`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `citrix_sharefile_exploitation_cve_2023_24489_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filtering may be needed. Also, restricting to known web servers running IIS or ShareFile will change this from Hunting to TTP.",
              "refs": "https://blog.assetnote.io/2023/07/04/citrix-sharefile-rce/",
              "mitre": [
                "T1190"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Citrix ShareFile Exploitation CVE-2023-24489\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Citrix ShareFile Exploitation CVE-2023-24489\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filtering may be needed. Also, restricting to known web servers running IIS or ShareFile will change this from Hunting to TTP.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects potentially malicious file upload attempts to Citrix ShareFile using specific suspicious URLs and the HTTP POST method, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.144",
              "n": "Confluence CVE-2023-22515 Trigger Vulnerability",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential exploitation attempts of the Confluence CVE-2023-22515 vulnerability. It detects successful accesses (HTTP status 200) to specific vulnerable endpoints by analyzing web logs within the Splunk 'Web' Data Model. This activity is significant for a SOC as it indicates possible privilege escalation attempts in Confluence. If confirmed malicious, attackers could gain unauthorized access or create accounts with escalated privileges, leading to potential data …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present with legitimate applications. Attempt to filter by dest IP or use Asset groups to restrict to Confluence servers.",
              "refs": "https://github.com/Chocapikk/CVE-2023-22515/blob/main/exploit.py, https://x.com/Shadowserver/status/1712378833536741430?s=20, https://github.com/j3seer/CVE-2023-22515-POC",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Confluence CVE-2023-22515 Trigger Vulnerability\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Confluence CVE-2023-22515 Trigger Vulnerability\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present with legitimate applications. Attempt to filter by dest IP or use Asset groups to restrict to Confluence servers.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies potential exploitation attempts of the Confluence vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.145",
              "n": "Confluence Data Center and Server Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential exploitation attempts on a known vulnerability in Atlassian Confluence, specifically targeting the /setup/*.action* URL pattern. It leverages web logs within the Splunk 'Web' Data Model, filtering for successful accesses (HTTP status 200) to these endpoints. This activity is significant as it suggests attackers might be exploiting a privilege escalation flaw in Confluence. If confirmed malicious, it could result in unauthorized access or account creati…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present with legitimate applications. Attempt to filter by dest IP or use Asset groups to restrict to confluence servers.",
              "refs": "https://confluence.atlassian.com/security/cve-2023-22515-privilege-escalation-vulnerability-in-confluence-data-center-and-server-1295682276.html, https://confluence.atlassian.com/security/cve-2023-22518-improper-authorization-vulnerability-in-confluence-data-center-and-server-1311473907.html, https://www.rapid7.com/blog/post/2023/10/04/etr-cve-2023-22515-zero-day-privilege-escalation-in-confluence-server-and-data-center/, https://attackerkb.com/topics/Q5f0ItSzw5/cve-2023-22515",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Confluence Data Center and Server Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Confluence Data Center and Server Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present with legitimate applications. Attempt to filter by dest IP or use Asset groups to restrict to confluence servers.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies potential exploitation attempts on a known vulnerability in Atlassian Confluence, specifically targeting the /setup/*.action* URL pattern, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.146",
              "n": "Confluence Pre-Auth RCE via OGNL Injection CVE-2023-22527",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to exploit a critical template injection vulnerability (CVE-2023-22527) in outdated Confluence Data Center and Server versions. It detects POST requests to the \"/template/aui/text-inline.vm\" endpoint with HTTP status codes 200 or 202, indicating potential OGNL injection attacks. This activity is significant as it allows unauthenticated attackers to execute arbitrary code remotely. If confirmed malicious, attackers could gain full control over the affect…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present with legitimate applications. Attempt to filter by dest IP or use Asset groups to restrict to confluence servers.",
              "refs": "https://confluence.atlassian.com/security/cve-2023-22527-rce-remote-code-execution-vulnerability-in-confluence-data-center-and-confluence-server-1333990257.html",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Confluence Pre-Auth RCE via OGNL Injection CVE-2023-22527\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Confluence Pre-Auth RCE via OGNL Injection CVE-2023-22527\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present with legitimate applications. Attempt to filter by dest IP or use Asset groups to restrict to confluence servers.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies attempts to exploit a critical template injection vulnerability in outdated Confluence Data Center and Server versions, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.147",
              "n": "Confluence Unauthenticated Remote Code Execution CVE-2022-26134",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit CVE-2022-26134, an unauthenticated remote code execution vulnerability in Confluence. It leverages the Web datamodel to analyze network and CIM-compliant web logs, identifying suspicious URL patterns and parameters indicative of exploitation attempts. This activity is significant as it allows attackers to execute arbitrary code on the Confluence server without authentication, potentially leading to full system compromise. If confirmed malicious,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Palo Alto Network Threat",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Palo Alto Network Threat ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Tune based on assets if possible, or restrict to known Confluence servers. Remove the ${ for a more broad query. To identify more exec, remove everything up to the last parameter (Runtime().exec) for a broad query.",
              "refs": "https://confluence.atlassian.com/doc/confluence-security-advisory-2022-06-02-1130377146.html, https://www.splunk.com/en_us/blog/security/atlassian-confluence-vulnerability-cve-2022-26134.html, https://www.rapid7.com/blog/post/2022/06/02/active-exploitation-of-confluence-cve-2022-26134/, https://www.volexity.com/blog/2022/06/02/zero-day-exploitation-of-atlassian-confluence/",
              "mitre": [
                "T1505",
                "T1190",
                "T1133"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Confluence Unauthenticated Remote Code Execution CVE-2022-26134\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Palo Alto Network Threat. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505, T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Confluence Unauthenticated Remote Code Execution CVE-2022-26134\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Tune based on assets if possible, or restrict to known Confluence servers. Remove the ${ for a more broad query. To identify more exec, remove everything up to the last parameter (Runtime().exec) for a broad query.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit, an unauthenticated remote code execution vulnerability in Confluence, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.148",
              "n": "ConnectWise ScreenConnect Authentication Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the ConnectWise ScreenConnect CVE-2024-1709 vulnerability, which allows attackers to bypass authentication via an alternate path or channel. It leverages web request logs to identify access to the SetupWizard.aspx page, indicating potential exploitation. This activity is significant as it can lead to unauthorized administrative access and remote code execution. If confirmed malicious, attackers could create administrative users and gain full con…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are not expected, as the detection is based on the presence of web requests to the SetupWizard.aspx page, which is not a common page to be accessed by legitimate users. Note that the analytic is limited to HTTP POST and a status of 200 to reduce false positives. Modify the query as needed to reduce false positives or hunt for additional indicators of compromise.",
              "refs": "https://www.huntress.com/blog/a-catastrophe-for-control-understanding-the-screenconnect-authentication-bypass, https://www.huntress.com/blog/detection-guidance-for-connectwise-cwe-288-2, https://www.connectwise.com/company/trust/security-bulletins/connectwise-screenconnect-23.9.8",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ConnectWise ScreenConnect Authentication Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ConnectWise ScreenConnect Authentication Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are not expected, as the detection is based on the presence of web requests to the SetupWizard.aspx page, which is not a common page to be accessed by legitimate users. Note that the analytic is limited to HTTP POST and a status of 200 to reduce false positives. Modify the query as needed to reduce false positives or hunt for additional indicators of compromise.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the ConnectWise ScreenConnect vulnerability, which allows attackers to bypass authentication using an alternate path or channel, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.149",
              "n": "CrushFTP Authentication Bypass Exploitation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential exploitation of the CrushFTP authentication bypass vulnerability (CVE-2025-31161). This detection identifies suspicious command execution patterns associated with exploitation of this vulnerability, such as executing mesch.exe with specific arguments like b64exec or fullinstall. This activity is indicative of an attacker exploiting CVE-2025-31161 to gain unauthorized access to the CrushFTP server and perform post-exploitation activities.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "CrushFTP",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To implement this detection, you need to ingest CrushFTP logs into your Splunk environment. Configure CrushFTP to forward logs to Splunk via a syslog forwarder or direct file monitoring. This detection searches for CrushFTP logs containing suspicious command execution patterns commonly associated with exploitation of the CVE-2025-31161 vulnerability.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur if there are legitimate administrative commands being executed on the CrushFTP server that match the suspicious patterns. Review the commands being executed to determine if the activity is legitimate administrative work or potential malicious activity.",
              "refs": "https://www.huntress.com/blog/crushftp-cve-2025-31161-auth-bypass-and-post-exploitation, https://nvd.nist.gov/vuln/detail/CVE-2025-31161, https://www.crushftp.com/crush11wiki/Wiki.jsp?page=Update",
              "mitre": [
                "T1190",
                "T1059.003",
                "T1059.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"CrushFTP Authentication Bypass Exploitation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: CrushFTP. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1059.003, T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"CrushFTP Authentication Bypass Exploitation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur if there are legitimate administrative commands being executed on the CrushFTP server that match the suspicious patterns. Review the commands being executed to determine if the activity is legitimate administrative work or potential malicious activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects potential exploitation of the CrushFTP authentication bypass vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.150",
              "n": "CrushFTP Max Simultaneous Users From IP",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where CrushFTP has blocked access due to exceeding the maximum number of simultaneous connections from a single IP address. This activity may indicate brute force attempts, credential stuffing, or automated attacks against the CrushFTP server. This detection is particularly relevant following the discovery of CVE-2025-31161, an authentication bypass vulnerability in CrushFTP versions 10.0.0 through 10.8.3 and 11.0.0 through 11.3.0.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "CrushFTP",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To implement this detection, you need to ingest CrushFTP logs into your Splunk environment. Configure CrushFTP to forward logs to Splunk via a syslog forwarder or direct file monitoring. Ensure the sourcetype is correctly set for the CrushFTP logs. The detection requires the SESSION field and the \"[HTTP:*:user:IP]\" format in the logs. Adjust the threshold in the \"where count >= 3\" clause based on …",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "In environments where multiple users legitimately access CrushFTP from behind the same NAT or proxy, this may generate false positives. Tune the threshold based on your organization's usage patterns.",
              "refs": "https://www.huntress.com/blog/crushftp-cve-2025-31161-auth-bypass-and-post-exploitation, https://nvd.nist.gov/vuln/detail/CVE-2025-31161, https://www.crushftp.com/crush11wiki/Wiki.jsp?page=Update",
              "mitre": [
                "T1110.001",
                "T1110.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"CrushFTP Max Simultaneous Users From IP\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: CrushFTP. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.001, T1110.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"CrushFTP Max Simultaneous Users From IP\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: In environments where multiple users legitimately access CrushFTP from behind the same NAT or proxy, this may generate false positives. Tune the threshold based on your organization's usage patterns.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies instances where CrushFTP has blocked access due to exceeding the maximum number of simultaneous connections from a single IP address, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.151",
              "n": "Detect attackers scanning for vulnerable JBoss servers",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies specific GET or HEAD requests to web servers that indicate reconnaissance attempts to find vulnerable JBoss servers. It leverages data from the Web data model, focusing on HTTP methods and URLs associated with JBoss management interfaces. This activity is significant because it often precedes exploitation attempts using tools like JexBoss, which can compromise the server. If confirmed malicious, attackers could gain unauthorized access, execute arbitrary code, o…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Web\n      WHERE (\n            Web.http_method=\"GET\"\n            OR\n            Web.http_method=\"HEAD\"\n        )\n        AND (Web.url=\"*/web-console/ServerInfo.jsp*\" OR Web.url=\"*web-console*\" OR Web.url=\"*jmx-console*\" OR Web.url = \"*invoker*\")\n      BY Web.http_method, Web.url, Web.src,\n         Web.dest\n    | `drop_dm_object_name(\"Web\")`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `detect_attackers_scanning_for_vulnerable_jboss_servers_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's possible for legitimate HTTP requests to be made to URLs containing the suspicious paths.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1082",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect attackers scanning for vulnerable JBoss servers\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect attackers scanning for vulnerable JBoss servers\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's possible for legitimate HTTP requests to be made to URLs containing the suspicious paths.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies specific GET or HEAD requests to web servers that indicate reconnaissance attempts to find vulnerable JBoss servers, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.152",
              "n": "Detect F5 TMUI RCE CVE-2020-5902",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies remote code execution (RCE) attempts targeting F5 BIG-IP, BIG-IQ, and Traffix SDC devices, specifically exploiting CVE-2020-5902. It uses regex to detect patterns in syslog data that match known exploit strings such as \"hsqldb;\" and directory traversal sequences. This activity is significant because successful exploitation can allow attackers to execute arbitrary commands on the affected devices, leading to full system compromise. If confirmed malicious, this co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`f5_bigip_rogue` | regex _raw=\"(hsqldb;|.*\\\\.\\\\.;.*)\" | search `detect_f5_tmui_rce_cve_2020_5902_filter`",
              "m": "To consistently detect exploit attempts on F5 devices using the vulnerabilities contained within CVE-2020-5902 it is recommended to ingest logs via syslog.  As many BIG-IP devices will have SSL enabled on their management interfaces, detections via wire data may not pick anything up unless you are decrypting SSL traffic in order to inspect it.  I am using a regex string from a Cloudflare mitigatio…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.ptsecurity.com/ww-en/about/news/f5-fixes-critical-vulnerability-discovered-by-positive-technologies-in-big-ip-application-delivery-controller/, https://support.f5.com/csp/article/K52145254",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect F5 TMUI RCE CVE-2020-5902\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect F5 TMUI RCE CVE-2020-5902\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies remote code execution (RCE) attempts targeting F5 BIG-IP, BIG-IQ, and Traffix SDC devices, specifically exploiting, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.153",
              "n": "Detect malicious requests to exploit JBoss servers",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies malicious HTTP requests targeting the jmx-console in JBoss servers. It detects unusually long URLs, indicative of embedded payloads, by analyzing web server logs for GET or HEAD requests with specific URL patterns and lengths. This activity is significant as it may indicate an attempt to exploit JBoss vulnerabilities, potentially leading to unauthorized remote code execution. If confirmed malicious, attackers could gain control over the server, escalate privileg…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Web\n      WHERE (\n            Web.http_method=\"GET\"\n            OR\n            Web.http_method=\"HEAD\"\n        )\n      BY Web.http_method, Web.url,Web.url_length Web.src,\n         Web.dest\n    | search Web.url=\"*jmx-console/HtmlAdaptor?action=invokeOpByName&name=jboss.admin*import*\" AND Web.url_length > 200\n    | `drop_dm_object_name(\"Web\")`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | table src, dest, http_method, url, firstTime, lastTime\n    | `detect_malicious_requests_to_exploit_jboss_servers_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No known false positives for this detection.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "system",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect malicious requests to exploit JBoss servers\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect malicious requests to exploit JBoss servers\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No known false positives for this detection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies malicious HTTP requests targeting the jmx-console in JBoss servers, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.154",
              "n": "Exploit Public Facing Application via Apache Commons Text",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the CVE-2022-42889 vulnerability in the Apache Commons Text Library, known as Text4Shell. It leverages the Web datamodel to identify suspicious HTTP requests containing specific lookup keys (url, dns, script) that can lead to Remote Code Execution (RCE). This activity is significant as it targets a critical vulnerability that can allow attackers to execute arbitrary code on the server. If confirmed malicious, this could lead to full system compr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are present when the values are set to 1 for utf and lookup. It's possible to raise this to TTP (direct finding) if removal of other_lookups occur and Score is raised to 2 (down from 4).",
              "refs": "https://sysdig.com/blog/cve-2022-42889-text4shell/, https://nvd.nist.gov/vuln/detail/CVE-2022-42889, https://lists.apache.org/thread/n2bd4vdsgkqh2tm14l1wyc3jyol7s1om, https://www.rapid7.com/blog/post/2022/10/17/cve-2022-42889-keep-calm-and-stop-saying-4shell/, https://github.com/kljunowsky/CVE-2022-42889-text4shell, https://medium.com/geekculture/text4shell-exploit-walkthrough-ebc02a01f035",
              "mitre": [
                "T1133",
                "T1190",
                "T1505.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Exploit Public Facing Application via Apache Commons Text\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1133, T1190, T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Exploit Public Facing Application via Apache Commons Text\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are present when the values are set to 1 for utf and lookup. It's possible to raise this to TTP (direct finding) if removal of other_lookups occur and Score is raised to 2 (down from 4).\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the vulnerability in the Apache Commons Text Library, known as Text4Shell, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.155",
              "n": "Exploit Public-Facing Fortinet FortiNAC CVE-2022-39952",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the Fortinet FortiNAC CVE-2022-39952 vulnerability. It identifies HTTP POST requests to the URI configWizard/keyUpload.jsp with a payload.zip file. The detection leverages the Web datamodel, analyzing fields such as URL, HTTP method, and user agent. This activity is significant as it indicates an attempt to exploit a known vulnerability, potentially leading to remote code execution. If confirmed malicious, attackers could gain control over the a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Palo Alto Network Threat",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Palo Alto Network Threat ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present. Modify the query as needed to POST, or add additional filtering (based on log source).",
              "refs": "https://github.com/horizon3ai/CVE-2022-39952, https://www.horizon3.ai/fortinet-fortinac-cve-2022-39952-deep-dive-and-iocs/, https://viz.greynoise.io/tag/fortinac-rce-attempt?days=30",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Exploit Public-Facing Fortinet FortiNAC CVE-2022-39952\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Palo Alto Network Threat. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Exploit Public-Facing Fortinet FortiNAC CVE-2022-39952\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present. Modify the query as needed to POST, or add additional filtering (based on log source).\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the Fortinet FortiNAC vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.156",
              "n": "F5 TMUI Authentication Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the CVE-2023-46747 vulnerability, an authentication bypass flaw in F5 BIG-IP's Configuration utility (TMUI). It identifies this activity by monitoring for specific URI paths such as \"*/mgmt/tm/auth/user/*\" with the PATCH method and a 200 status code. This behavior is significant for a SOC as it indicates potential unauthorized access attempts, leading to remote code execution. If confirmed malicious, an attacker could gain unauthorized access, e…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited to as this is strict to active exploitation. Reduce noise by filtering to F5 devices with TMUI enabled or filter data as needed.",
              "refs": "https://www.praetorian.com/blog/refresh-compromising-f5-big-ip-with-request-smuggling-cve-2023-46747/, https://github.com/projectdiscovery/nuclei-templates/blob/3b0bb71bd627c6c3139e1d06c866f8402aa228ae/http/cves/2023/CVE-2023-46747.yaml",
              "mitre": [],
              "dtype": "ip_address",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"F5 TMUI Authentication Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"F5 TMUI Authentication Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited to as this is strict to active exploitation. Reduce noise by filtering to F5 devices with TMUI enabled or filter data as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the vulnerability, an authentication bypass flaw in F5 BIG-IP's Configuration utility (TMUI), and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.157",
              "n": "Fortinet Appliance Auth bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit CVE-2022-40684, a Fortinet appliance authentication bypass vulnerability. It identifies REST API requests to the /api/v2/ endpoint using various HTTP methods (GET, POST, PUT, DELETE) that may indicate unauthorized modifications, such as adding SSH keys or creating new users. This detection leverages the Web datamodel to monitor specific URL patterns and HTTP methods. This activity is significant as it can lead to unauthorized access and control …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Palo Alto Network Threat",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Palo Alto Network Threat ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "GET requests will be noisy and need to be filtered out or removed from the query based on volume. Restrict analytic to known publically facing Fortigates, or run analytic as a Hunt until properly tuned. It is also possible the user agent may be filtered on Report Runner or Node.js only for the exploit, however, it is unknown at this if other user agents may be used.",
              "refs": "https://www.wordfence.com/blog/2022/10/threat-advisory-cve-2022-40684-fortinet-appliance-auth-bypass/, https://www.horizon3.ai/fortios-fortiproxy-and-fortiswitchmanager-authentication-bypass-technical-deep-dive-cve-2022-40684/, https://github.com/horizon3ai/CVE-2022-40684, https://www.horizon3.ai/fortinet-iocs-cve-2022-40684/, https://attackerkb.com/topics/QWOxGIKkGx/cve-2022-40684, https://github.com/rapid7/metasploit-framework/pull/17143",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Fortinet Appliance Auth bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Palo Alto Network Threat. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Fortinet Appliance Auth bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: GET requests will be noisy and need to be filtered out or removed from the query based on volume. Restrict analytic to known publically facing Fortigates, or run analytic as a Hunt until properly tuned. It is also possible the user agent may be filtered on Report Runner or Node.js only for the exploit, however, it is unknown at this if other user agents may be used.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit, a Fortinet appliance authentication bypass vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.158",
              "n": "HTTP Possible Request Smuggling",
              "c": "high",
              "f": "intermediate",
              "v": "HTTP request smuggling is a technique for interfering with the way a web site processes sequences of HTTP requests that are received from one or more users. Request smuggling vulnerabilities are often critical in nature, allowing an attacker to bypass security controls, gain unauthorized access to sensitive data, and directly compromise other application users. This detection identifies a common request smuggling technique of using both Content-Length and Transfer-Encoding headers to cause a par…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are not expected, however, monitor, filter, and tune as needed based on organization log sources.",
              "refs": "https://portswigger.net/web-security/request-smuggling, https://portswigger.net/research/http1-must-die, https://www.vaadata.com/blog/what-is-http-request-smuggling-exploitations-and-security-best-practices/, https://www.securityweek.com/new-http-request-smuggling-attacks-impacted-cdns-major-orgs-millions-of-websites/",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"HTTP Possible Request Smuggling\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"HTTP Possible Request Smuggling\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are not expected, however, monitor, filter, and tune as needed based on organization log sources.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "HTTP request smuggling is a technique for interfering with the way a web site processes sequences of HTTP requests that are received from one or more users, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.159",
              "n": "HTTP Rapid POST with Mixed Status Codes",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies rapid-fire POST request attacks where an attacker sends more than 20 POST requests within a 5-second window, potentially attempting to exploit race conditions or overwhelm request handling. The pattern is particularly suspicious when responses vary in size or status codes, indicating successful exploitation attempts or probing for vulnerable endpoints.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if the activity is part of diagnostics or testing. Filter as needed.",
              "refs": "https://portswigger.net/web-security/request-smuggling, https://portswigger.net/research/http1-must-die, https://www.vaadata.com/blog/what-is-http-request-smuggling-exploitations-and-security-best-practices/, https://www.securityweek.com/new-http-request-smuggling-attacks-impacted-cdns-major-orgs-millions-of-websites/",
              "mitre": [
                "T1071.001",
                "T1190",
                "T1595"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"HTTP Rapid POST with Mixed Status Codes\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.001, T1190, T1595. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"HTTP Rapid POST with Mixed Status Codes\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if the activity is part of diagnostics or testing. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies rapid-fire POST request attacks where an attacker sends more than 20 POST requests within a 5-second window, potentially attempting to exploit race conditions or overwhelm request handling, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.160",
              "n": "Hunting for Log4Shell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential exploitation attempts of the Log4Shell vulnerability (CVE-2021-44228) by analyzing HTTP headers for specific patterns. It leverages the Web Datamodel and evaluates various indicators such as the presence of `{jndi:`, environment variables, and common URI paths. This detection is significant as Log4Shell allows remote code execution, posing a severe threat to systems. If confirmed malicious, attackers could gain unauthorized access, execute arbitrary code,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Web.Web | eval jndi=if(match(_raw, \"(\\{|%7B)[jJnNdDiI]{4}:\"),4,0) | eval jndi_fastmatch=if(match(_raw, \"[jJnNdDiI]{4}\"),2,0) | eval jndi_proto=if(match(_raw,\"(?i)jndi:(ldap[s]?|rmi|dns|nis|iiop|corba|nds|http|https):\"),5,0) | eval all_match = if(match(_raw, \"(?i)(%(25){0,}20|\\s)*(%(25){0,}24|\\$)(%(25){0,}20|\\s)*(%(25){0,}7B|{)(%(25){0,}20|\\s)*(%(25){0,}(6A|4A)|J)(%(25){0,}(6E|4E)|N)(%(25){0,}(64|44)|D)(%(25){0,}(69|49)|I)(%(25){0,}20|\\s)*(%(25){0,}3A|:)[\\w\\%]+(%(25){1,}3A|:)(%(25){1,}2F|\\/)[^\\n]+\"),5,0) | eval env_var = if(match(_raw, \"env:\") OR match(_raw, \"env:AWS_ACCESS_KEY_ID\") OR match(_raw, \"env:AWS_SECRET_ACCESS_KEY\"),5,0) | eval uridetect = if(match(_raw, \"(?i)Basic\\/Command\\/Base64|Basic\\/ReverseShell|Basic\\/TomcatMemshell|Basic\\/JBossMemshell|Basic\\/WebsphereMemshell|Basic\\/SpringMemshell|Basic\\/Command|Deserialization\\/CommonsCollectionsK|Deserialization\\/CommonsBeanutils|Deserialization\\/Jre8u20\\/TomcatMemshell|Deserialization\\/CVE_2020_2555\\/WeblogicMemshell|TomcatBypass|GroovyBypass|WebsphereBypass\"),4,0) | eval keywords = if(match(_raw,\"(?i)\\$\\{ctx\\:loginId\\}|\\$\\{map\\:type\\}|\\$\\{filename\\}|\\$\\{date\\:MM-dd-yyyy\\}|\\$\\{docker\\:containerId\\}|\\$\\{docker\\:containerName\\}|\\$\\{docker\\:imageName\\}|\\$\\{env\\:USER\\}|\\$\\{event\\:Marker\\}|\\$\\{mdc\\:UserId\\}|\\$\\{java\\:runtime\\}|\\$\\{java\\:vm\\}|\\$\\{java\\:os\\}|\\$\\{jndi\\:logging/context-name\\}|\\$\\{hostName\\}|\\$\\{docker\\:containerId\\}|\\$\\{k8s\\:accountName\\}|\\$\\{k8s\\:clusterName\\}|\\$\\{k8s\\:containerId\\}|\\$\\{k8s\\:containerName\\}|\\$\\{k8s\\:host\\}|\\$\\{k8s\\:labels.app\\}|\\$\\{k8s\\:labels.podTemplateHash\\}|\\$\\{k8s\\:masterUrl\\}|\\$\\{k8s\\:namespaceId\\}|\\$\\{k8s\\:namespaceName\\}|\\$\\{k8s\\:podId\\}|\\$\\{k8s\\:podIp\\}|\\$\\{k8s\\:podName\\}|\\$\\{k8s\\:imageId\\}|\\$\\{k8s\\:imageName\\}|\\$\\{log4j\\:configLocation\\}|\\$\\{log4j\\:configParentLocation\\}|\\$\\{spring\\:spring.application.name\\}|\\$\\{main\\:myString\\}|\\$\\{main\\:0\\}|\\$\\{main\\:1\\}|\\$\\{main\\:2\\}|\\$\\{main\\:3\\}|\\$\\{main\\:4\\}|\\$\\{main\\:bar\\}|\\$\\{name\\}|\\$\\{marker\\}|\\$\\{marker\\:name\\}|\\$\\{spring\\:profiles.active[0]|\\$\\{sys\\:logPath\\}|\\$\\{web\\:rootDir\\}|\\$\\{sys\\:user.name\\}\"),4,0) | eval obf = if(match(_raw, \"(\\$|%24)[^ /]*({|%7b)[^ /]*(j|%6a)[^ /]*(n|%6e)[^ /]*(d|%64)[^ /]*(i|%69)[^ /]*(:|%3a)[^ /]*(:|%3a)[^ /]*(/|%2f)\"),5,0) | eval lookups = if(match(_raw, \"(?i)({|%7b)(main|sys|k8s|spring|lower|upper|env|date|sd)\"),4,0)  | addtotals fieldname=Score, jndi, jndi_proto, env_var, uridetect, all_match, jndi_fastmatch, keywords, obf, lookups | where Score > 2 | stats values(Score) by  jndi, jndi_proto, env_var, uridetect, all_match, jndi_fastmatch, keywords, lookups, obf, dest, src, http_method, _raw | `hunting_for_log4shell_filter`",
              "m": "Out of the box, the Web datamodel is required to be pre-filled. However, tested was performed against raw httpd access logs. Change the first line to any dataset to pass the regex's against.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is highly possible you will find false positives, however, the base score is set to 2 for _any_ jndi found in raw logs. tune and change as needed, include any filtering.",
              "refs": "https://gist.github.com/olafhartong/916ebc673ba066537740164f7e7e1d72, https://gist.github.com/Neo23x0/e4c8b03ff8cdf1fa63b7d15db6e3860b#gistcomment-3994449, https://regex101.com/r/OSrm0q/1/, https://github.com/Neo23x0/signature-base/blob/master/yara/expl_log4j_cve_2021_44228.yar, https://news.sophos.com/en-us/2021/12/12/log4shell-hell-anatomy-of-an-exploit-outbreak/, https://gist.github.com/MHaggis/1899b8554f38c8692a9fb0ceba60b44c, https://twitter.com/sasi2103/status/1469764719850442760?s=20",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Hunting for Log4Shell\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Hunting for Log4Shell\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is highly possible you will find false positives, however, the base score is set to 2 for _any_ jndi found in raw logs. tune and change as needed, include any filtering.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects potential exploitation attempts of the Log4Shell vulnerability by analyzing HTTP headers for specific patterns, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.161",
              "n": "Ivanti Connect Secure Command Injection Attempts",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to exploit the CVE-2023-46805 and CVE-2024-21887 vulnerabilities in Ivanti Connect Secure. It detects POST requests to specific URIs that leverage command injection to execute arbitrary commands. The detection uses the Web datamodel to monitor for these requests and checks for a 200 OK response, indicating a successful exploit attempt. This activity is significant as it can lead to unauthorized command execution on the server. If confirmed malicious, at…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This analytic is limited to HTTP Status 200; adjust as necessary. False positives may occur if the URI path is IP-restricted or externally blocked. It's recommended to review the context of the alerts and adjust the analytic parameters to better fit the specific environment.",
              "refs": "https://github.com/RootUp/PersonalStuff/blob/master/http-vuln-cve2023-46805_2024_21887.nse, https://github.com/projectdiscovery/nuclei-templates/blob/c6b351e71b0fb0e40e222e97038f1fe09ac58194/http/misconfiguration/ivanti/CVE-2023-46085-CVE-2024-21887-mitigation-not-applied.yaml, https://github.com/rapid7/metasploit-framework/pull/18708/files, https://attackerkb.com/topics/AdUh6by52K/cve-2023-46805, https://labs.watchtowr.com/welcome-to-2024-the-sslvpn-chaos-continues-ivanti-cve-2023-46805-cve-2024-21887/, https://twitter.com/GreyNoiseIO/status/1747711939466453301",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ivanti Connect Secure Command Injection Attempts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ivanti Connect Secure Command Injection Attempts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This analytic is limited to HTTP Status 200; adjust as necessary. False positives may occur if the URI path is IP-restricted or externally blocked. It's recommended to review the context of the alerts and adjust the analytic parameters to better fit the specific environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies attempts to exploit the and vulnerabilities in Ivanti Connect Secure, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.162",
              "n": "Ivanti Connect Secure SSRF in SAML Component",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies POST requests targeting endpoints vulnerable to the SSRF issue (CVE-2024-21893) in Ivanti's products. It leverages the Web data model, focusing on endpoints such as /dana-ws/saml20.ws, /dana-ws/saml.ws, /dana-ws/samlecp.ws, and /dana-na/auth/saml-logout.cgi. The detection filters for POST requests that received an HTTP 200 OK response, indicating successful execution. This activity is significant as it may indicate an attempt to exploit SSRF vulnerabilities, pot…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This analytic is limited to HTTP Status 200; adjust as necessary. False positives may occur if the HTTP Status is removed, as most failed attempts result in a 301. It's recommended to review the context of the alerts and adjust the analytic parameters to better fit the specific environment.",
              "refs": "https://attackerkb.com/topics/FGlK1TVnB2/cve-2024-21893, https://www.assetnote.io/resources/research/ivantis-pulse-connect-secure-auth-bypass-round-two",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ivanti Connect Secure SSRF in SAML Component\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ivanti Connect Secure SSRF in SAML Component\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This analytic is limited to HTTP Status 200; adjust as necessary. False positives may occur if the HTTP Status is removed, as most failed attempts result in a 301. It's recommended to review the context of the alerts and adjust the analytic parameters to better fit the specific environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies POST requests targeting endpoints vulnerable to the SSRF issue in Ivanti's products, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.163",
              "n": "Ivanti Connect Secure System Information Access via Auth Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to exploit the CVE-2023-46805 and CVE-2024-21887 vulnerabilities in Ivanti Connect Secure. It detects GET requests to the /api/v1/totp/user-backup-code/../../system/system-information URI, which leverage an authentication bypass to access system information. The detection uses the Web datamodel to identify requests with a 200 OK response, indicating a successful exploit attempt. This activity is significant as it reveals potential unauthorized access to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This analytic is limited to HTTP Status 200; adjust as necessary. False positives may occur if the URI path is IP-restricted or externally blocked. It's recommended to review the context of the alerts and adjust the analytic parameters to better fit the specific environment.",
              "refs": "https://github.com/RootUp/PersonalStuff/blob/master/http-vuln-cve2023-46805_2024_21887.nse, https://github.com/projectdiscovery/nuclei-templates/blob/c6b351e71b0fb0e40e222e97038f1fe09ac58194/http/misconfiguration/ivanti/CVE-2023-46085-CVE-2024-21887-mitigation-not-applied.yaml, https://github.com/rapid7/metasploit-framework/pull/18708/files",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ivanti Connect Secure System Information Access via Auth Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ivanti Connect Secure System Information Access via Auth Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This analytic is limited to HTTP Status 200; adjust as necessary. False positives may occur if the URI path is IP-restricted or externally blocked. It's recommended to review the context of the alerts and adjust the analytic parameters to better fit the specific environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies attempts to exploit the and vulnerabilities in Ivanti Connect Secure, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.164",
              "n": "Ivanti EPM SQL Injection Remote Code Execution",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies potential exploitation of a critical SQL injection vulnerability in Ivanti Endpoint Manager (EPM), identified as CVE-2024-29824. The vulnerability, which has a CVSS score of 9.8, allows for remote code execution through the `RecordGoodApp` function in the `PatchBiz.dll` file. An attacker can exploit this vulnerability by manipulating the `goodApp.md5` value in an HTTP POST request to the `/WSStatusEvents/EventHandler.asmx` endpoint, leading to unauthorized command execu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are not expected, as this detection is based on monitoring HTTP POST requests to a specific endpoint with a status code of 200. However, ensure that legitimate requests to the `/WSStatusEvents/EventHandler.asmx` endpoint are accounted for in the environment to avoid false positives.",
              "refs": "https://www.horizon3.ai/attack-research/attack-blogs/cve-2024-29824-deep-dive-ivanti-epm-sql-injection-remote-code-execution-vulnerability/, https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-29824, https://github.com/projectdiscovery/nuclei-templates/pull/10020/files",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ivanti EPM SQL Injection Remote Code Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ivanti EPM SQL Injection Remote Code Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are not expected, as this detection is based on monitoring HTTP POST requests to a specific endpoint with a status code of 200. However, ensure that legitimate requests to the `/WSStatusEvents/EventHandler.asmx` endpoint are accounted for in the environment to avoid false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies potential exploitation of a critical SQL injection vulnerability in Ivanti Endpoint Manager (EPM), identified as, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.165",
              "n": "Ivanti EPMM Remote Unauthenticated API Access CVE-2023-35078",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit CVE-2023-35078, a vulnerability in Ivanti Endpoint Manager Mobile (EPMM) versions up to 11.4. It identifies HTTP requests to the endpoint \"/mifs/aad/api/v2/authorized/users?*\" with a status code of 200 in web logs. This activity is significant as it indicates unauthorized remote access to restricted functionalities or resources. If confirmed malicious, this could lead to data theft, unauthorized modifications, or further system compromise, neces…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The Proof of Concept exploit script indicates that status=200 is required for successful exploitation of the vulnerability. False positives may be present if status=200 is removed from the search.  If it is removed,then the search also alert on status=301 and status=404 which indicates unsuccessful exploitation attempts.  Analysts may find it useful to hunt for these status codes as well, but it is likely to produce a significant number of alerts as this is a widespread vulnerability.",
              "refs": "https://github.com/vchan-in/CVE-2023-35078-Exploit-POC/blob/main/cve_2023_35078_poc.py",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ivanti EPMM Remote Unauthenticated API Access CVE-2023-35078\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ivanti EPMM Remote Unauthenticated API Access CVE-2023-35078\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The Proof of Concept exploit script indicates that status=200 is required for successful exploitation of the vulnerability. False positives may be present if status=200 is removed from the search.  If it is removed,then the search also alert on status=301 and status=404 which indicates unsuccessful exploitation attempts.  Analysts may find it useful to hunt for these status codes as well, but it is likely to produce a significant number of alerts as this is a widespread vulnerability.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
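\n\nExample Hunting Search (illustrative)\n\nThe authoritative detection logic ships inside the ESCU correlation search; the sketch below only approximates the behavior this use case describes (requests to the /mifs/aad/api/v2/authorized/users endpoint returning HTTP 200). It assumes your Suricata web traffic is CIM-normalized into the Web data model and that Web.url carries the request path; the wildcards, field names, and scoping are placeholders to adapt to your environment.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Web where Web.url=\"*/mifs/aad/api/v2/authorized/users*\" Web.status=200\n  by Web.src, Web.dest, Web.url\n```\n\nDrop `summariesonly=true` if the Web data model is not accelerated, and cross-check any matches against the appliance’s patch level before escalating.",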
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit, a vulnerability in Ivanti Endpoint Manager Mobile (EPMM) versions up to 11.4, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.166",
              "n": "Ivanti EPMM Remote Unauthenticated API Access CVE-2023-35082",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential unauthorized access attempts exploiting CVE-2023-35082 within Ivanti's software products. It identifies access to the specific URI path /mifs/asfV3/api/v2/ with an HTTP 200 response code in web access logs, indicating successful unauthorized access. This activity is significant for a SOC as it highlights potential security breaches that could lead to unauthorized data access or system modifications. If confirmed malicious, an attacker could gain unbridled…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Similar to CVE-2023-35078, the path for exploitation indicates that status=200 is required for successful exploitation of the vulnerability. False positives may be present if status=200 is removed from the search.  If it is removed,then the search also alert on status=301 and status=404 which indicates unsuccessful exploitation attempts.  Analysts may find it useful to hunt for these status codes as well, but it is likely to produce a significant number of alerts as this is a widespread vulnerability.",
              "refs": "https://github.com/vchan-in/CVE-2023-35078-Exploit-POC/blob/main/cve_2023_35078_poc.py, https://www.rapid7.com/blog/post/2023/08/02/cve-2023-35082-mobileiron-core-unauthenticated-api-access-vulnerability/",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ivanti EPMM Remote Unauthenticated API Access CVE-2023-35082\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ivanti EPMM Remote Unauthenticated API Access CVE-2023-35082\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Similar to CVE-2023-35078, the path for exploitation indicates that status=200 is required for successful exploitation of the vulnerability. False positives may be present if status=200 is removed from the search.  If it is removed,then the search also alert on status=301 and status=404 which indicates unsuccessful exploitation attempts.  Analysts may find it useful to hunt for these status codes as well, but it is likely to produce a significant number of alerts as this is a widespread vulnerability.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
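\n\nExample Hunting Search (illustrative)\n\nThe packaged ESCU correlation search holds the authoritative logic; this sketch merely approximates the behavior described above (requests to the /mifs/asfV3/api/v2/ URI path returning HTTP 200), assuming CIM-normalized web traffic in the Web data model. Field names and wildcards are assumptions to adjust for your environment.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Web where Web.url=\"*/mifs/asfV3/api/v2/*\" Web.status=200\n  by Web.src, Web.dest, Web.url\n```\n\nRemove `summariesonly=true` if the Web data model is not accelerated in your deployment.",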
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects potential unauthorized access attempts exploiting within Ivanti's software products, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.167",
              "n": "Ivanti Sentry Authentication Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies unauthenticated access attempts to the System Manager Portal in Ivanti Sentry, exploiting CVE-2023-38035. It detects this activity by monitoring HTTP requests to specific endpoints (\"/mics/services/configservice/*\", \"/mics/services/*\", \"/mics/services/MICSLogService*\") with a status code of 200. This behavior is significant for a SOC as it indicates potential unauthorized access, which could lead to OS command execution as root. If confirmed malicious, this acti…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is important to note that false positives may occur if the search criteria are expanded beyond the HTTP status code 200. In other words, if the search includes other HTTP status codes, the likelihood of encountering false positives increases. This is due to the fact that HTTP status codes other than 200 may not necessarily indicate a successful exploitation attempt.",
              "refs": "https://github.com/horizon3ai/CVE-2023-38035/blob/main/CVE-2023-38035.py, https://www.horizon3.ai/ivanti-sentry-authentication-bypass-cve-2023-38035-deep-dive/",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ivanti Sentry Authentication Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ivanti Sentry Authentication Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is important to note that false positives may occur if the search criteria are expanded beyond the HTTP status code 200. In other words, if the search includes other HTTP status codes, the likelihood of encountering false positives increases. This is due to the fact that HTTP status codes other than 200 may not necessarily indicate a successful exploitation attempt.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
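\n\nExample Hunting Search (illustrative)\n\nThe detection logic itself lives in the ESCU correlation search; as a rough hunting approximation of the behavior described above (requests to the /mics/services/ endpoints returning HTTP 200), the following sketch assumes CIM-normalized web traffic in the Web data model. The IN operator, wildcards, and field names are assumptions; replace IN with OR terms on older Splunk versions.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Web where Web.url IN (\"*/mics/services/configservice/*\", \"*/mics/services/MICSLogService*\", \"*/mics/services/*\") Web.status=200\n  by Web.src, Web.dest, Web.url\n```\n\nNote that the broader */mics/services/* pattern subsumes the first two; keeping all three mirrors the documented endpoint list.",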
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies unauthenticated access attempts to the System Manager Portal in Ivanti Sentry, exploiting, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.168",
              "n": "Java Class File download by Java User Agent",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a Java user agent performing a GET request for a .class file from a remote site. It leverages web or proxy logs within the Web Datamodel to detect this activity. This behavior is significant as it may indicate exploitation attempts, such as those related to CVE-2021-44228 (Log4Shell). If confirmed malicious, an attacker could exploit vulnerabilities in the Java application, potentially leading to remote code execution and further compromise of the affected syste…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Splunk Stream HTTP",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Splunk Stream HTTP ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Filtering may be required in some instances, filter as needed.",
              "refs": "https://arstechnica.com/information-technology/2021/12/as-log4shell-wreaks-havoc-payroll-service-reports-ransomware-attack/",
              "mitre": [
                "T1190"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Java Class File download by Java User Agent\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Splunk Stream HTTP. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Java Class File download by Java User Agent\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Filtering may be required in some instances, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
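\n\nExample Hunting Search (illustrative)\n\nAs a rough approximation of the behavior described above (a Java user agent issuing a GET request for a .class file), the sketch below assumes Splunk Stream HTTP data is CIM-normalized into the Web data model with http_user_agent and http_method populated; adjust field names and scoping for your environment.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Web where Web.http_user_agent=\"*Java*\" Web.url=\"*.class\" Web.http_method=GET\n  by Web.src, Web.dest, Web.url, Web.http_user_agent\n```\n\nExpect benign hits from legitimate Java applications fetching remote classes; baseline this before turning it into an alert.",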
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies a Java user agent performing a GET request for.class file from a remote site, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.169",
              "n": "Jenkins Arbitrary File Read CVE-2024-23897",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to exploit Jenkins Arbitrary File Read CVE-2024-23897. It detects HTTP POST requests to Jenkins URLs containing \"*/cli?remoting=false*\" with a 200 status code. This activity is significant as it indicates potential unauthorized access to sensitive files on the Jenkins server, such as credentials and private keys. If confirmed malicious, this could lead to severe data breaches, unauthorized access, and further exploitation within the environment.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as this detection is based on a specific URL path and HTTP status code. Adjust the search as necessary to fit the environment.",
              "refs": "https://github.com/projectdiscovery/nuclei-templates/pull/9025, https://github.com/jenkinsci-cert/SECURITY-3314-3315, https://github.com/binganao/CVE-2024-23897, https://github.com/h4x0r-dz/CVE-2024-23897, https://www.sonarsource.com/blog/excessive-expansion-uncovering-critical-security-vulnerabilities-in-jenkins/, https://www.shodan.io/search?query=product%3A%22Jenkins%22, https://thehackernews.com/2024/01/critical-jenkins-vulnerability-exposes.html",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Jenkins Arbitrary File Read CVE-2024-23897\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Jenkins Arbitrary File Read CVE-2024-23897\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as this detection is based on a specific URL path and HTTP status code. Adjust the search as necessary to fit the environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
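\n\nExample Hunting Search (illustrative)\n\nThe authoritative logic ships in the ESCU correlation search; this sketch approximates the documented behavior (POST requests to Jenkins URLs containing /cli?remoting=false returning HTTP 200), assuming your Nginx access logs are CIM-normalized into the Web data model. Field names and wildcards are assumptions to adapt for your environment.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Web where Web.url=\"*/cli?remoting=false*\" Web.http_method=POST Web.status=200\n  by Web.src, Web.dest, Web.url\n```\n\nA status other than 200 on the same path may still indicate attempted (unsuccessful) exploitation and can be hunted separately.",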
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies attempts to exploit Jenkins Arbitrary File Read, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.170",
              "n": "JetBrains TeamCity Authentication Bypass CVE-2024-27198",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to exploit the JetBrains TeamCity Authentication Bypass vulnerability (CVE-2024-27198). It detects suspicious POST requests to the `/app/rest/users` and `/app/rest/users/id:1/tokens` endpoints, which are indicative of attempts to create new administrator users or generate admin access tokens without authentication. This detection leverages the Web datamodel and CIM-compliant log sources, such as Nginx or TeamCity logs. This activity is significant as it…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are not expected, as this detection is based on the presence of specific URI paths and HTTP methods that are indicative of the CVE-2024-27198 vulnerability exploitation. Monitor, filter and tune as needed based on organization log sources.",
              "refs": "https://github.com/projectdiscovery/nuclei-templates/pull/9279/files, https://www.rapid7.com/blog/post/2024/03/04/etr-cve-2024-27198-and-cve-2024-27199-jetbrains-teamcity-multiple-authentication-bypass-vulnerabilities-fixed/, https://blog.jetbrains.com/teamcity/2024/03/teamcity-2023-11-4-is-out/, https://blog.jetbrains.com/teamcity/2024/03/additional-critical-security-issues-affecting-teamcity-on-premises-cve-2024-27198-and-cve-2024-27199-update-to-2023-11-4-now/, https://github.com/yoryio/CVE-2024-27198/blob/main/CVE-2024-27198.py",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"JetBrains TeamCity Authentication Bypass CVE-2024-27198\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"JetBrains TeamCity Authentication Bypass CVE-2024-27198\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are not expected, as this detection is based on the presence of specific URI paths and HTTP methods that are indicative of the CVE-2024-27198 vulnerability exploitation. Monitor, filter and tune as needed based on organization log sources.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
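\n\nExample Hunting Search (illustrative)\n\nAs an approximation of the behavior described above (unauthenticated POST requests to the /app/rest/users and /app/rest/users/id:1/tokens endpoints), the sketch below assumes CIM-normalized web traffic in the Web data model; endpoints, wildcards, and field names are assumptions to adjust for your environment.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Web where Web.url IN (\"*/app/rest/users\", \"*/app/rest/users/id:1/tokens*\") Web.http_method=POST\n  by Web.src, Web.dest, Web.url, Web.status\n```\n\nSuccessful user or token creation (HTTP 200) from an external source address is the highest-fidelity signal here.",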
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies attempts to exploit the JetBrains TeamCity Authentication Bypass vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.171",
              "n": "JetBrains TeamCity Authentication Bypass Suricata CVE-2024-27198",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the CVE-2024-27198 vulnerability in JetBrains TeamCity on-premises servers, which allows attackers to bypass authentication mechanisms. It leverages Suricata HTTP traffic logs to identify suspicious POST requests to the `/app/rest/users` and `/app/rest/users/id:1/tokens` endpoints. This activity is significant because it can lead to unauthorized administrative access, enabling attackers to gain full control over the TeamCity server, including pr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are not expected, as this detection is based on the presence of specific URI paths and HTTP methods that are indicative of the CVE-2024-27198 vulnerability exploitation. Monitor, filter and tune as needed based on organization log sources.",
              "refs": "https://github.com/projectdiscovery/nuclei-templates/pull/9279/files, https://www.rapid7.com/blog/post/2024/03/04/etr-cve-2024-27198-and-cve-2024-27199-jetbrains-teamcity-multiple-authentication-bypass-vulnerabilities-fixed/, https://blog.jetbrains.com/teamcity/2024/03/teamcity-2023-11-4-is-out/, https://blog.jetbrains.com/teamcity/2024/03/additional-critical-security-issues-affecting-teamcity-on-premises-cve-2024-27198-and-cve-2024-27199-update-to-2023-11-4-now/",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"JetBrains TeamCity Authentication Bypass Suricata CVE-2024-27198\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"JetBrains TeamCity Authentication Bypass Suricata CVE-2024-27198\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are not expected, as this detection is based on the presence of specific URI paths and HTTP methods that are indicative of the CVE-2024-27198 vulnerability exploitation. Monitor, filter and tune as needed based on organization log sources.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
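\n\nOptional Hunting Sketch\n\nTo complement the lookup-based suppression guidance above, the sketch below reviews this detection’s risk attributions while excluding approved scanners. It is a minimal illustration rather than ESCU content: the scanner_allowlist lookup and its src_ip column are hypothetical placeholders for your own allowlist, and the search_name filter assumes the default ESCU detection name.\n\n```spl\nindex=risk search_name=\"*TeamCity Authentication Bypass*\"\n| lookup scanner_allowlist src_ip AS risk_object OUTPUT src_ip AS approved_scanner\n| where isnull(approved_scanner)\n| stats count values(risk_message) as messages by risk_object\n```",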
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the vulnerability in JetBrains TeamCity on-premises servers, which allows attackers to bypass authentication mechanisms, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.172",
              "n": "JetBrains TeamCity Limited Auth Bypass Suricata CVE-2024-27199",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to exploit CVE-2024-27199, a critical vulnerability in JetBrains TeamCity web server, allowing unauthenticated access to specific endpoints. It detects unusual access patterns to vulnerable paths such as /res/, /update/, and /.well-known/acme-challenge/ by monitoring HTTP traffic logs via Suricata. This activity is significant as it could indicate an attacker bypassing authentication to access or modify system settings. If confirmed malicious, this coul…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are not expected, however, monitor, filter, and tune as needed based on organization log sources. The analytic is restricted to 200 and GET requests to specific URI paths, which should limit false positives.",
              "refs": "https://github.com/projectdiscovery/nuclei-templates/blob/f644ec82dfe018890c6aa308967424d26c0f1522/http/cves/2024/CVE-2024-27199.yaml, https://www.rapid7.com/blog/post/2024/03/04/etr-cve-2024-27198-and-cve-2024-27199-jetbrains-teamcity-multiple-authentication-bypass-vulnerabilities-fixed/, https://blog.jetbrains.com/teamcity/2024/03/teamcity-2023-11-4-is-out/, https://blog.jetbrains.com/teamcity/2024/03/additional-critical-security-issues-affecting-teamcity-on-premises-cve-2024-27198-and-cve-2024-27199-update-to-2023-11-4-now/",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"JetBrains TeamCity Limited Auth Bypass Suricata CVE-2024-27199\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"JetBrains TeamCity Limited Auth Bypass Suricata CVE-2024-27199\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are not expected, however, monitor, filter, and tune as needed based on organization log sources. The analytic is restricted to 200 and GET requests to specific URI paths, which should limit false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
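\n\nOptional Hunting Sketch\n\nA minimal hunt for the vulnerable paths this detection targets (/res/, /update/, /.well-known/acme-challenge/), assuming your Suricata HTTP logs are mapped to the CIM Web data model. summariesonly=false lets the search run without data model acceleration, at the cost of speed. This is an illustrative sketch, not the packaged ESCU logic; rename fields if your mappings differ.\n\n```spl\n| tstats summariesonly=false count min(_time) as firstTime max(_time) as lastTime from datamodel=Web\n  where Web.url IN (\"*/res/*\", \"*/update/*\", \"*/.well-known/acme-challenge/*\") Web.status=200 Web.http_method=GET\n  by Web.src, Web.dest, Web.url\n| `drop_dm_object_name(\"Web\")`\n```",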
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies attempts to exploit, a critical vulnerability in JetBrains TeamCity web server, allowing unauthenticated access to specific endpoints, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.173",
              "n": "JetBrains TeamCity RCE Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the CVE-2023-42793 vulnerability in JetBrains TeamCity On-Premises. It identifies suspicious POST requests to /app/rest/users/id:1/tokens/RPC2, leveraging the Web datamodel to monitor specific URL patterns and HTTP methods. This activity is significant as it may indicate an unauthenticated attacker attempting to gain administrative access via Remote Code Execution (RCE). If confirmed malicious, this could allow the attacker to execute arbitrary …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "If TeamCity is not in use, this analytic will not return results. Monitor and tune for your environment.",
              "refs": "https://blog.jetbrains.com/teamcity/2023/09/critical-security-issue-affecting-teamcity-on-premises-update-to-2023-05-4-now/, https://www.sonarsource.com/blog/teamcity-vulnerability/, https://github.com/rapid7/metasploit-framework/pull/18408, https://attackerkb.com/topics/1XEEEkGHzt/cve-2023-42793",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"JetBrains TeamCity RCE Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"JetBrains TeamCity RCE Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: If TeamCity is not in use, this analytic will not return results. Monitor and tune for your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
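\n\nOptional Hunting Sketch\n\nA minimal hunt for the suspicious POST pattern described above (/app/rest/users/id:1/tokens/RPC2), assuming CIM Web data model mappings; summariesonly=false lets it run without acceleration. Treat it as a sketch to adapt, not the packaged ESCU logic.\n\n```spl\n| tstats summariesonly=false count from datamodel=Web\n  where Web.url=\"*/app/rest/users/id:1/tokens/RPC2*\" Web.http_method=POST\n  by Web.src, Web.dest, Web.url, Web.status\n| `drop_dm_object_name(\"Web\")`\n```",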
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the vulnerability in JetBrains TeamCity On-Premises, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.174",
              "n": "Juniper Networks Remote Code Execution Exploit Detection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit a remote code execution vulnerability in Juniper Networks devices. It identifies requests to /webauth_operation.php?PHPRC=*, which are indicative of uploading and executing malicious PHP files. This detection leverages the Web data model, focusing on specific URL patterns and HTTP status codes. This activity is significant because it signals an attempt to gain unauthorized access and execute arbitrary code on the device. If confirmed malicious, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Be aware of potential false positives - legitimate uses of the /webauth_operation.php endpoint may cause benign activities to be flagged.The URL in the analytic is specific to a successful attempt to exploit the vulnerability. Review contents of the HTTP body to determine if the request is malicious. If the request is benign, add the URL to the whitelist or continue to monitor.",
              "refs": "https://supportportal.juniper.net/s/article/2023-08-Out-of-Cycle-Security-Bulletin-Junos-OS-SRX-Series-and-EX-Series-Multiple-vulnerabilities-in-J-Web-can-be-combined-to-allow-a-preAuth-Remote-Code-Execution?language=en_US, https://github.com/projectdiscovery/nuclei-templates/blob/main/http/cves/2023/CVE-2023-36844.yaml, https://thehackernews.com/2023/08/new-juniper-junos-os-flaws-expose.html, https://github.com/watchtowrlabs/juniper-rce_cve-2023-36844, https://labs.watchtowr.com/cve-2023-36844-and-friends-rce-in-juniper-firewalls/, https://vulncheck.com/blog/juniper-cve-2023-36845",
              "mitre": [
                "T1190",
                "T1105",
                "T1059"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Juniper Networks Remote Code Execution Exploit Detection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1105, T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Juniper Networks Remote Code Execution Exploit Detection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Be aware of potential false positives - legitimate uses of the /webauth_operation.php endpoint may cause benign activities to be flagged.The URL in the analytic is specific to a successful attempt to exploit the vulnerability. Review contents of the HTTP body to determine if the request is malicious. If the request is benign, add the URL to the whitelist or continue to monitor.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
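\n\nOptional Hunting Sketch\n\nA minimal hunt for the exploit URL pattern noted above (/webauth_operation.php?PHPRC=*), assuming CIM Web data model mappings; summariesonly=false lets it run without acceleration. A sketch to adapt, not the packaged detection.\n\n```spl\n| tstats summariesonly=false count from datamodel=Web\n  where Web.url=\"*/webauth_operation.php?PHPRC=*\"\n  by Web.src, Web.dest, Web.url, Web.status\n| `drop_dm_object_name(\"Web\")`\n```",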
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit a remote code execution vulnerability in Juniper Networks devices, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.175",
              "n": "Log4Shell JNDI Payload Injection Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to inject Log4Shell JNDI payloads via web calls. It leverages the Web datamodel and uses regex to detect patterns like `${jndi:ldap://` in raw web event data, including HTTP headers. This activity is significant because it targets vulnerabilities in Java web applications using Log4j, such as Apache Struts and Solr. If confirmed malicious, this could allow attackers to execute arbitrary code, potentially leading to full system compromise. Immediate inves…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "If there is a vulnerablility scannner looking for log4shells this will trigger, otherwise likely to have low false positives.",
              "refs": "https://www.lunasec.io/docs/blog/log4j-zero-day/",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Log4Shell JNDI Payload Injection Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Log4Shell JNDI Payload Injection Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: If there is a vulnerablility scannner looking for log4shells this will trigger, otherwise likely to have low false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
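\n\nOptional Hunting Sketch\n\nA minimal raw-event hunt for the ${jndi: prefix this analytic looks for. The index and sourcetype below are placeholders for wherever your NGINX access logs land; rename them to match your deployment. The regex covers common JNDI schemes but is intentionally simple, and attackers obfuscate this string heavily, so treat misses as inconclusive.\n\n```spl\nindex=web sourcetype=nginx:plus:access\n| regex _raw=\"\\$\\{jndi:(ldap|ldaps|rmi|dns|http)\"\n| stats count by host, source\n```",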
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies attempts to inject Log4Shell JNDI payloads using web calls, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.176",
              "n": "Log4Shell JNDI Payload Injection with Outbound Connection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects Log4Shell JNDI payload injections via outbound connections. It identifies suspicious LDAP lookup functions in web logs, such as `${jndi:ldap://PAYLOAD_INJECTED}`, and correlates them with network traffic to known malicious IP addresses. This detection leverages the Web and Network_Traffic data models in Splunk. Monitoring this activity is crucial as it targets vulnerabilities in Java web applications using log4j, potentially leading to remote code execution. If con…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "If there is a vulnerablility scannner looking for log4shells this will trigger, otherwise likely to have low false positives.",
              "refs": "https://www.lunasec.io/docs/blog/log4j-zero-day/",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Log4Shell JNDI Payload Injection with Outbound Connection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Log4Shell JNDI Payload Injection with Outbound Connection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: If there is a vulnerablility scannner looking for log4shells this will trigger, otherwise likely to have low false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
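\n\nOptional Hunting Sketch\n\nTo eyeball the outbound-connection half of this correlation, the sketch below lists egress flows to LDAP-style ports often used by JNDI exploit servers; the port list (389, 636, 1389) is illustrative, so adjust it for your environment. It assumes CIM Network_Traffic mappings and is a starting point, not the packaged two-data-model logic.\n\n```spl\n| tstats summariesonly=false count from datamodel=Network_Traffic\n  where All_Traffic.dest_port IN (389, 636, 1389)\n  by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port\n| `drop_dm_object_name(\"All_Traffic\")`\n```",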
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects Log4Shell JNDI payload injections using outbound connections, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.177",
              "n": "Microsoft SharePoint Server Elevation of Privilege",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential exploitation attempts against Microsoft SharePoint Server vulnerability CVE-2023-29357. It leverages the Web datamodel to monitor for specific API calls and HTTP methods indicative of privilege escalation attempts. This activity is significant as it may indicate an attacker is trying to gain unauthorized privileged access to the SharePoint environment. If confirmed malicious, the impact could include unauthorized access to sensitive data, potential data t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur if there are legitimate activities that mimic the exploitation pattern. It's recommended to review the context of the alerts and adjust the analytic parameters to better fit the specific environment.",
              "refs": "https://socradar.io/microsoft-sharepoint-server-elevation-of-privilege-vulnerability-exploit-cve-2023-29357/, https://github.com/LuemmelSec/CVE-2023-29357/blob/main/CVE-2023-29357/Program.cs",
              "mitre": [
                "T1068"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Microsoft SharePoint Server Elevation of Privilege\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Microsoft SharePoint Server Elevation of Privilege\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur if there are legitimate activities that mimic the exploitation pattern. It's recommended to review the context of the alerts and adjust the analytic parameters to better fit the specific environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
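\n\nOptional Hunting Sketch\n\nTo see how much risk this detection contributes per entity before the Risk Notable threshold trips, the sketch below sums risk scores from the risk index. The search_name filter assumes the default ESCU detection name, and 100 is only a commonly used notable threshold; substitute the value configured in your ES instance.\n\n```spl\nindex=risk search_name=\"*Microsoft SharePoint Server Elevation of Privilege*\"\n| stats sum(risk_score) as total_risk values(risk_message) as messages by risk_object\n| where total_risk >= 100\n```",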
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects potential exploitation attempts against Microsoft SharePoint Server vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.178",
              "n": "Nginx ConnectWise ScreenConnect Authentication Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the ConnectWise ScreenConnect CVE-2024-1709 vulnerability, which allows attackers to bypass authentication via alternate paths or channels. It leverages Nginx access logs to identify web requests to the SetupWizard.aspx page, indicating potential exploitation. This activity is significant as it can lead to unauthorized administrative access and remote code execution. If confirmed malicious, attackers could create administrative users and gain fu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are not expected, as the detection is based on the presence of web requests to the SetupWizard.aspx page, which is not a common page to be accessed by legitimate users. Note that the analytic is limited to HTTP POST and a status of 200 to reduce false positives. Modify the query as needed to reduce false positives or hunt for additional indicators of compromise.",
              "refs": "https://docs.splunk.com/Documentation/AddOns/released/NGINX/Sourcetypes, https://gist.github.com/MHaggis/26f59108b04da8f1d870c9cc3a3c8eec, https://www.huntress.com/blog/a-catastrophe-for-control-understanding-the-screenconnect-authentication-bypass, https://www.huntress.com/blog/detection-guidance-for-connectwise-cwe-288-2, https://www.connectwise.com/company/trust/security-bulletins/connectwise-screenconnect-23.9.8",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Nginx ConnectWise ScreenConnect Authentication Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Nginx ConnectWise ScreenConnect Authentication Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are not expected, as the detection is based on the presence of web requests to the SetupWizard.aspx page, which is not a common page to be accessed by legitimate users. Note that the analytic is limited to HTTP POST and a status of 200 to reduce false positives. Modify the query as needed to reduce false positives or hunt for additional indicators of compromise.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the ConnectWise ScreenConnect vulnerability, which allows attackers to bypass authentication using alternate paths or channels, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.179",
              "n": "Spring4Shell Payload URL Request",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to exploit the Spring4Shell vulnerability (CVE-2022-22963) by identifying specific URL patterns associated with web shell payloads. It leverages web traffic data, focusing on HTTP GET requests with URLs containing indicators like \"tomcatwar.jsp,\" \"poc.jsp,\" and \"shell.jsp.\" This activity is significant as it suggests an attacker is trying to deploy a web shell, which can lead to remote code execution. If confirmed malicious, this could allow the attacker t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The jsp file names are static names used in current proof of concept code. =",
              "refs": "https://www.microsoft.com/security/blog/2022/04/04/springshell-rce-vulnerability-guidance-for-protecting-against-and-detecting-cve-2022-22965/, https://github.com/TheGejr/SpringShell, https://www.tenable.com/blog/spring4shell-faq-spring-framework-remote-code-execution-vulnerability",
              "mitre": [
                "T1133",
                "T1190",
                "T1505.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Spring4Shell Payload URL Request\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1133, T1190, T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Spring4Shell Payload URL Request\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The jsp file names are static names used in current proof of concept code. =\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects attempts to exploit the Spring4Shell vulnerability by identifying specific URL patterns associated with web shell payloads, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.180",
              "n": "Tomcat Session Deserialization Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies potential exploitation of CVE-2025-24813 in Apache Tomcat through the second stage of the attack. This phase occurs when an attacker attempts to trigger deserialization of a previously uploaded malicious session file by sending a GET request with a specially crafted JSESSIONID cookie. These requests typically have specific characteristics, including a JSESSIONID cookie with a leading dot that matches a previously uploaded filename, and typically result in a HTTP 500 err…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Web.Web | search http_method=GET AND cookie=\"*JSESSIONID=.*\" src=$src$ | table src dest http_method uri_path http_user_agent status",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives should occur as this pattern is highly specific to CVE-2025-24813 exploitation. However, legitimate application errors that use similar cookie patterns and result in 500 status codes might trigger false positives. Review the JSESSIONID cookie format and the associated request context to confirm exploitation attempts.",
              "refs": "https://lists.apache.org/thread/j5fkjv2k477os90nczf2v9l61fb0kkgq, https://nvd.nist.gov/vuln/detail/CVE-2025-24813, https://github.com/vulhub/vulhub/tree/master/tomcat/CVE-2025-24813, https://www.rapid7.com/db/vulnerabilities/apache-tomcat-cve-2025-24813/, https://gist.github.com/MHaggis/e106367f6649fbb09ab27e7b4a01cf73",
              "mitre": [
                "T1190",
                "T1505.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Tomcat Session Deserialization Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Tomcat Session Deserialization Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives should occur as this pattern is highly specific to CVE-2025-24813 exploitation. However, legitimate application errors that use similar cookie patterns and result in 500 status codes might trigger false positives. Review the JSESSIONID cookie format and the associated request context to confirm exploitation attempts.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies potential exploitation of in Apache Tomcat through the second stage of the attack, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.181",
              "n": "Tomcat Session File Upload Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies potential exploitation of CVE-2025-24813 in Apache Tomcat through the initial stage of the attack. This first phase occurs when an attacker attempts to upload a malicious serialized Java object with a .session file extension via an HTTP PUT request. When successful, these uploads typically result in HTTP status codes 201 (Created) or 409 (Conflict) and create the foundation for subsequent deserialization attacks by placing malicious content in a location where Tomcat's …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Web.Web | search http_method = PUT uri_path=\"*.session\" src=$src$ | table src dest http_method uri_path http_user_agent status",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legitimate applications might use PUT requests to create .session files, especially in custom implementations that leverage Tomcat's session persistence mechanism. Verify if the detected activity is part of a normal application flow or if it correlates with other suspicious behavior, such as subsequent GET requests with manipulated JSESSIONID cookies.",
              "refs": "https://lists.apache.org/thread/j5fkjv2k477os90nczf2v9l61fb0kkgq, https://nvd.nist.gov/vuln/detail/CVE-2025-24813, https://github.com/vulhub/vulhub/tree/master/tomcat/CVE-2025-24813, https://www.rapid7.com/db/vulnerabilities/apache-tomcat-cve-2025-24813/, https://gist.github.com/MHaggis/e106367f6649fbb09ab27e7b4a01cf73",
              "mitre": [
                "T1190",
                "T1505.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Tomcat Session File Upload Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Tomcat Session File Upload Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legitimate applications might use PUT requests to create .session files, especially in custom implementations that leverage Tomcat's session persistence mechanism. Verify if the detected activity is part of a normal application flow or if it correlates with other suspicious behavior, such as subsequent GET requests with manipulated JSESSIONID cookies.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies potential exploitation of in Apache Tomcat through the initial stage of the attack, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.182",
              "n": "Unusually Long Content-Type Length",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies unusually long strings in the Content-Type HTTP header sent by the client to the server. It uses data from the Stream:HTTP source, specifically evaluating the length of the `cs_content_type` field. This activity is significant because excessively long Content-Type headers can indicate attempts to exploit vulnerabilities or evade detection mechanisms. If confirmed malicious, this behavior could allow attackers to execute code, manipulate data, or bypass security …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`stream_http` | eval cs_content_type_length = len(cs_content_type)\n| where cs_content_type_length > 100\n| table _time src dest cs_content_type cs_content_type_length url\n| sort -cs_content_type_length",
              "m": "This particular search leverages data extracted from Stream:HTTP. You must configure the http stream using the Splunk Stream App on your Splunk Stream deployment server to extract the cs_content_type field.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Very few legitimate Content-Type fields will have a length greater than 100 characters.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "system",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Unusually Long Content-Type Length\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Unusually Long Content-Type Length\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Very few legitimate Content-Type fields will have a length greater than 100 characters.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies unusually long strings in the Content-Type HTTP header sent by the client to the server, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.183",
              "n": "VMWare Aria Operations Exploit Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential exploitation attempts against VMWare vRealize Network Insight, specifically targeting the CVE-2023-20887 vulnerability. It monitors web traffic for HTTP POST requests directed at the vulnerable endpoint \"/saas./resttosaasservlet.\" This detection leverages web traffic data, focusing on specific URL patterns and HTTP methods. Identifying this behavior is crucial for a SOC as it indicates an active exploit attempt. If confirmed malicious, the attacker could …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Palo Alto Network Threat",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Palo Alto Network Threat ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present based on gateways in use, modify the status field as needed.",
              "refs": "https://nvd.nist.gov/vuln/detail/CVE-2023-20887, https://viz.greynoise.io/tag/vmware-aria-operations-for-networks-rce-attempt?days=30, https://github.com/sinsinology/CVE-2023-20887, https://summoning.team/blog/vmware-vrealize-network-insight-rce-cve-2023-20887/",
              "mitre": [
                "T1133",
                "T1190",
                "T1210",
                "T1068"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"VMWare Aria Operations Exploit Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Palo Alto Network Threat. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1133, T1190, T1210, T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"VMWare Aria Operations Exploit Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present based on gateways in use, modify the status field as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects potential exploitation attempts against VMWare vRealize Network Insight, specifically targeting the vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.184",
              "n": "VMware Server Side Template Injection Hunt",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential server-side template injection attempts related to CVE-2022-22954. It detects suspicious URL patterns containing \"deviceudid\" and keywords like \"java.lang.ProcessBuilder\" or \"freemarker.template.utility.ObjectConstructor\" using web or proxy logs within the Web Datamodel. This activity is significant as it may indicate an attempt to exploit a known vulnerability in VMware, potentially leading to remote code execution. If confirmed malicious, attackers c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Palo Alto Network Threat",
              "q": "| tstats count FROM datamodel=Web\n      WHERE Web.http_method IN (\"GET\") Web.url=\"*deviceudid=*\"\n        AND\n        Web.url IN (\"*java.lang.ProcessBuilder*\",\"*freemarker.template.utility.ObjectConstructor*\")\n      BY Web.http_user_agent Web.http_method, Web.url,Web.url_length\n         Web.src, Web.dest sourcetype\n    | `drop_dm_object_name(\"Web\")`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `vmware_server_side_template_injection_hunt_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Palo Alto Network Threat ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if the activity is blocked or was not successful. Filter known vulnerablity scanners. Filter as needed.",
              "refs": "https://www.cisa.gov/uscert/ncas/alerts/aa22-138b, https://github.com/wvu/metasploit-framework/blob/master/modules/exploits/linux/http/vmware_workspace_one_access_cve_2022_22954.rb, https://github.com/sherlocksecurity/VMware-CVE-2022-22954, https://www.vmware.com/security/advisories/VMSA-2022-0011.html, https://attackerkb.com/topics/BDXyTqY1ld/cve-2022-22954, https://twitter.com/wvuuuuuuuuuuuuu/status/1519476924757778433",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"VMware Server Side Template Injection Hunt\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Palo Alto Network Threat. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"VMware Server Side Template Injection Hunt\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if the activity is blocked or was not successful. Filter known vulnerablity scanners. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies potential server-side template injection attempts related to, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.185",
              "n": "VMware Workspace ONE Freemarker Server-side Template Injection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects server-side template injection attempts related to CVE-2022-22954 in VMware Workspace ONE. It leverages web or proxy logs to identify HTTP GET requests to the endpoint catalog-portal/ui/oauth/verify with the freemarker.template.utility.Execute command. This activity is significant as it indicates potential exploitation attempts that could lead to remote code execution. If confirmed malicious, an attacker could execute arbitrary commands on the server, leading to fu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Palo Alto Network Threat",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Palo Alto Network Threat ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if the activity is blocked or was not successful. Filter known vulnerablity scanners. Filter as needed.",
              "refs": "https://www.cisa.gov/uscert/ncas/alerts/aa22-138b, https://github.com/wvu/metasploit-framework/blob/master/modules/exploits/linux/http/vmware_workspace_one_access_cve_2022_22954.rb, https://github.com/sherlocksecurity/VMware-CVE-2022-22954, https://www.vmware.com/security/advisories/VMSA-2022-0011.html, https://attackerkb.com/topics/BDXyTqY1ld/cve-2022-22954",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"VMware Workspace ONE Freemarker Server-side Template Injection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Palo Alto Network Threat. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"VMware Workspace ONE Freemarker Server-side Template Injection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if the activity is blocked or was not successful. Filter known vulnerablity scanners. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects server-side template injection attempts related to in VMware Workspace ONE, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.186",
              "n": "Web JSP Request via URL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies URL requests associated with CVE-2022-22965 (Spring4Shell) exploitation attempts, specifically targeting webshell access on a remote webserver. It detects HTTP GET requests with URLs containing \".jsp?cmd=\" or \"j&cmd=\" patterns. This activity is significant as it indicates potential webshell deployment, which can lead to unauthorized remote command execution. If confirmed malicious, attackers could gain control over the webserver, execute arbitrary commands, and …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present with legitimate applications. Attempt to filter by dest IP or use Asset groups to restrict to servers.",
              "refs": "https://www.microsoft.com/security/blog/2022/04/04/springshell-rce-vulnerability-guidance-for-protecting-against-and-detecting-cve-2022-22965/, https://github.com/TheGejr/SpringShell, https://www.tenable.com/blog/spring4shell-faq-spring-framework-remote-code-execution-vulnerability",
              "mitre": [
                "T1133",
                "T1190",
                "T1505.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Web JSP Request via URL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1133, T1190, T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Web JSP Request via URL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present with legitimate applications. Attempt to filter by dest IP or use Asset groups to restrict to servers.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies URL requests associated with (Spring4Shell) exploitation attempts, specifically targeting webshell access on a remote webserver, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.187",
              "n": "Web Remote ShellServlet Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts to access the Remote ShellServlet on a web server, specifically targeting Confluence servers vulnerable to CVE-2023-22518 and CVE-2023-22515. It leverages web data to detect URLs containing \"*plugins/servlet/com.jsos.shell/*\" with a status code of 200. This activity is significant as it is commonly associated with web shells and other malicious behaviors, potentially leading to unauthorized command execution. If confirmed malicious, attackers could gain…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur depending on the web server's configuration. If the web server is intentionally configured to utilize the Remote ShellServlet, then the detections by this analytic would not be considered true positives.",
              "refs": "http://www.servletsuite.com/servlets/shell.htm",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Web Remote ShellServlet Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Web Remote ShellServlet Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur depending on the web server's configuration. If the web server is intentionally configured to utilize the Remote ShellServlet, then the detections by this analytic would not be considered true positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies attempts to access the Remote ShellServlet on a web server, specifically targeting Confluence servers vulnerable to and, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.188",
              "n": "Web Spring4Shell HTTP Request Class Module",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects HTTP requests containing payloads related to the Spring4Shell vulnerability (CVE-2022-22965). It leverages Splunk Stream HTTP data to inspect the HTTP request body and form data for specific fields such as \"class.module.classLoader.resources.context.parent.pipeline.first\". This activity is significant as it indicates an attempt to exploit a critical vulnerability in Spring Framework, potentially leading to remote code execution. If confirmed malicious, this could a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Splunk Stream HTTP",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Splunk Stream HTTP ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur and filtering may be required. Restrict analytic to asset type.",
              "refs": "https://github.com/DDuarte/springshell-rce-poc/blob/master/poc.py",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Web Spring4Shell HTTP Request Class Module\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Splunk Stream HTTP. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Web Spring4Shell HTTP Request Class Module\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur and filtering may be required. Restrict analytic to asset type.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects HTTP requests containing payloads related to the Spring4Shell vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.189",
              "n": "Web Spring Cloud Function FunctionRouter",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies HTTP POST requests to the Spring Cloud Function endpoint containing \"functionRouter\" in the URL. It leverages the Web data model to detect these requests based on specific fields such as http_method, url, and http_user_agent. This activity is significant because it targets CVE-2022-22963, a known vulnerability in Spring Cloud Function, which has multiple proof-of-concept exploits available. If confirmed malicious, this activity could allow attackers to execute a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Splunk Stream HTTP",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Splunk Stream HTTP ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present with legitimate applications. Attempt to filter by dest IP or use Asset groups to restrict to servers.",
              "refs": "https://github.com/rapid7/metasploit-framework/pull/16395, https://github.com/hktalent/spring-spel-0day-poc",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Web Spring Cloud Function FunctionRouter\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Splunk Stream HTTP. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Web Spring Cloud Function FunctionRouter\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present with legitimate applications. Attempt to filter by dest IP or use Asset groups to restrict to servers.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies HTTP POST requests to the Spring Cloud Function endpoint containing \"functionRouter\" in the URL, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.190",
              "n": "Windows SharePoint Spinstall0 GET Request",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential post-exploitation activity related to the Microsoft SharePoint CVE-2025-53770 vulnerability. After successful exploitation via the ToolPane.aspx endpoint, attackers typically deploy a webshell named \"spinstall0.aspx\" in the SharePoint layouts directory. This detection identifies GET requests to this webshell, which indicates active use of the backdoor for command execution, data exfiltration, or credential/key extraction. Attackers commonly use these webs…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives are expected as spinstall0.aspx is not a legitimate SharePoint component. However, security teams investigating the incident might also access this file for analysis purposes. Verify the source IP addresses against known security team IPs and the timing of the requests in relation to the initial exploitation attempt.",
              "refs": "https://research.eye.security/sharepoint-under-siege/, https://www.cisa.gov/news-events/alerts/2025/07/20/microsoft-releases-guidance-exploitation-sharepoint-vulnerability-cve-2025-53770, https://msrc.microsoft.com/blog/2025/07/customer-guidance-for-sharepoint-vulnerability-cve-2025-53770/, https://splunkbase.splunk.com/app/3185",
              "mitre": [
                "T1190",
                "T1505.003",
                "T1552"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SharePoint Spinstall0 GET Request\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1505.003, T1552. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SharePoint Spinstall0 GET Request\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives are expected as spinstall0.aspx is not a legitimate SharePoint component. However, security teams investigating the incident might also access this file for analysis purposes. Verify the source IP addresses against known security team IPs and the timing of the requests in relation to the initial exploitation attempt.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects potential post-exploitation activity related to the Microsoft SharePoint vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.191",
              "n": "Windows SharePoint ToolPane Endpoint Exploitation Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential exploitation attempts against Microsoft SharePoint Server vulnerability CVE-2025-53770, also known as \"ToolShell\". This detection monitors for POST requests to the ToolPane.aspx endpoint with specific DisplayMode parameter, which is a key indicator of the exploit. This vulnerability allows unauthenticated remote code execution on affected SharePoint servers, enabling attackers to fully access SharePoint content, file systems, internal configurations, and …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives are expected as legitimate use of the ToolPane.aspx endpoint with DisplayMode=Edit parameter in POST requests is uncommon. However, some SharePoint administration activities might trigger this detection. Verify against known administrator IPs and activity patterns.",
              "refs": "https://research.eye.security/sharepoint-under-siege/, https://cybersecuritynews.com/sharepoint-0-day-rce-vulnerability-exploited/, https://msrc.microsoft.com/blog/2025/07/customer-guidance-for-sharepoint-vulnerability-cve-2025-53770/, https://www.cisa.gov/news-events/alerts/2025/07/20/microsoft-releases-guidance-exploitation-sharepoint-vulnerability-cve-2025-53770, https://splunkbase.splunk.com/app/3185",
              "mitre": [
                "T1190",
                "T1505.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SharePoint ToolPane Endpoint Exploitation Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SharePoint ToolPane Endpoint Exploitation Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives are expected as legitimate use of the ToolPane.aspx endpoint with DisplayMode=Edit parameter in POST requests is uncommon. However, some SharePoint administration activities might trigger this detection. Verify against known administrator IPs and activity patterns.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects potential exploitation attempts against Microsoft SharePoint Server vulnerability, also known as \"ToolShell\", and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.192",
              "n": "WordPress Bricks Builder plugin RCE",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential exploitation of the WordPress Bricks Builder plugin RCE vulnerability. It detects HTTP POST requests to the URL path \"/wp-json/bricks/v1/render_element\" with a status code of 200, leveraging the Web datamodel. This activity is significant as it indicates an attempt to exploit CVE-2024-25600, a known vulnerability that allows remote code execution. If confirmed malicious, an attacker could execute arbitrary commands on the target server, leading to pote…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Nginx Access",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Nginx Access ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be possible, however we restricted it to HTTP Status 200 and POST requests, based on the POC. Upon investigation review the POST body for the actual payload - or command - being executed.",
              "refs": "https://attack.mitre.org/techniques/T1190, https://github.com/Tornad0007/CVE-2024-25600-Bricks-Builder-plugin-for-WordPress/blob/main/exploit.py, https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-25600, https://op-c.net/blog/cve-2024-25600-wordpresss-bricks-builder-rce-flaw-under-active-exploitation/, https://thehackernews.com/2024/02/wordpress-bricks-theme-under-active.html",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WordPress Bricks Builder plugin RCE\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Nginx Access. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WordPress Bricks Builder plugin RCE\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be possible, however we restricted it to HTTP Status 200 and POST requests, based on the POC. Upon investigation review the POST body for the actual payload - or command - being executed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
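              "_spl_example": "Hedged sketch of the underlying detection idea (the exact search ships inside the ESCU Correlation Search): using the Web data model feed documented above, a POST to the Bricks render_element endpoint returning HTTP 200 can be surfaced as follows. Field names assume CIM Web; tune to your ingestion.\n\n```spl\n| tstats count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Web where Web.url=\"*/wp-json/bricks/v1/render_element*\" Web.http_method=POST Web.status=200\n  by Web.src, Web.dest, Web.url\n| `drop_dm_object_name(\"Web\")`\n```",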
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies potential exploitation of the WordPress Bricks Builder plugin RCE vulnerability, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.193",
              "n": "WS FTP Remote Code Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential Remote Code Execution (RCE) attempts exploiting CVE-2023-40044 in WS_FTP software. It identifies HTTP POST requests to the \"/AHT/AhtApiService.asmx/AuthUser\" URL with a status code of 200. This detection leverages the Web datamodel to monitor specific URL patterns and HTTP status codes. This activity is significant as it may indicate an exploitation attempt, potentially allowing an attacker to execute arbitrary code on the server. If confirmed malicious, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "If WS_FTP Server is not in use, this analytic will not return results. Monitor and tune for your environment. Note the MetaSploit module is focused on only hitting /AHT/ and not the full /AHT/AhtApiService.asmx/AuthUser URL.",
              "refs": "https://github.com/projectdiscovery/nuclei-templates/pull/8296/files, https://www.assetnote.io/resources/research/rce-in-progress-ws-ftp-ad-hoc-via-iis-http-modules-cve-2023-40044, https://github.com/rapid7/metasploit-framework/pull/18414",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WS FTP Remote Code Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WS FTP Remote Code Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: If WS_FTP Server is not in use, this analytic will not return results. Monitor and tune for your environment. Note the MetaSploit module is focused on only hitting /AHT/ and not the full /AHT/AhtApiService.asmx/AuthUser URL.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
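              "_spl_example": "Hedged sketch of the underlying detection idea (the exact search ships inside the ESCU Correlation Search): a POST to the WS_FTP Ad Hoc Transfer endpoint returning HTTP 200, via CIM Web fields. Per the known-false-positive note, the Metasploit module only hits /AHT/, so a broader pattern such as \"*/AHT/*\" may be worth hunting during an active campaign.\n\n```spl\n| tstats count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Web where Web.url=\"*/AHT/AhtApiService.asmx/AuthUser*\" Web.http_method=POST Web.status=200\n  by Web.src, Web.dest, Web.url\n| `drop_dm_object_name(\"Web\")`\n```",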
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects potential Remote Code Execution (RCE) attempts exploiting in WS_FTP software, and we use Splunk so the team can see it early and act.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 26.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 193,
            "none": 0
          }
        },
        {
          "i": "10.7",
          "n": "SIEM & SOAR",
          "u": [
            {
              "i": "10.7.1",
              "n": "Alert Volume Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Alert volume trends reveal SOC workload, detection rule effectiveness, and potential alert fatigue risks.",
              "t": "Splunk Enterprise Security",
              "d": "ES notable events (`` `notable` `` macro)",
              "q": "`notable`\n| timechart span=1d count by source\n| sort -count",
              "m": "Track notable event volume from ES over time. Break down by source (correlation search). Identify noisy rules for tuning. Alert when daily volume exceeds analyst capacity thresholds. Report on volume trends.",
              "z": "Stacked area (alerts by source), Line chart (total alert volume), Bar chart (top alerting rules).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security.\n• Ensure the following data sources are available: ES notable events (`` `notable` `` macro).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack notable event volume from ES over time. Break down by source (correlation search). Identify noisy rules for tuning. Alert when daily volume exceeds analyst capacity thresholds. Report on volume trends.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable`\n| timechart span=1d count by source\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Alert Volume Trending** — Alert volume trends reveal SOC workload, detection rule effectiveness, and potential alert fatigue risks.\n\nDocumented **Data sources**: ES notable events (`` `notable` `` macro). **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by source** — ideal for trending and alerting on this use case.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area (alerts by source), Line chart (total alert volume), Bar chart (top alerting rules).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this for Alert Volume Trending so leadership can see trends and the SOC can see workload and quality over time.",
              "wv": "crawl",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 55,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "10.7.2",
              "n": "Analyst Workload Distribution",
              "c": "medium",
              "f": "beginner",
              "v": "Uneven workload distribution leads to analyst burnout and inconsistent response times. Monitoring enables fair distribution.",
              "t": "Splunk ES",
              "d": "ES investigation/ownership logs, notable event audit",
              "q": "`notable`\n| stats count, avg(time_to_close) as avg_close_time by owner\n| sort -count",
              "m": "Track alert assignment and closure by analyst. Calculate workload distribution and average handling time. Report to SOC management. Identify training needs based on handling time variations.",
              "z": "Bar chart (alerts per analyst), Table (workload summary), Pie chart (distribution).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ES.\n• Ensure the following data sources are available: ES investigation/ownership logs, notable event audit.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack alert assignment and closure by analyst. Calculate workload distribution and average handling time. Report to SOC management. Identify training needs based on handling time variations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable`\n| stats count, avg(time_to_close) as avg_close_time by owner\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Analyst Workload Distribution** — Uneven workload distribution leads to analyst burnout and inconsistent response times. Monitoring enables fair distribution.\n\nDocumented **Data sources**: ES investigation/ownership logs, notable event audit. **App/TA** (typical add-on context): Splunk ES. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by owner** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (alerts per analyst), Table (workload summary), Pie chart (distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this for Analyst Workload Distribution so leadership can see trends and the SOC can see workload and quality over time.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.3",
              "n": "MTTD and MTTR Tracking",
              "c": "critical",
              "f": "intermediate",
              "v": "MTTD and MTTR are the primary metrics for SOC effectiveness. Tracking drives process improvement and justifies investment.",
              "t": "Splunk ES",
              "d": "ES notable events (detection time, response time, closure time)",
              "q": "`notable` status=\"Closed\"\n| eval mttd_hours=round((detection_time-event_time)/3600,1)\n| eval mttr_hours=round((closure_time-detection_time)/3600,1)\n| stats avg(mttd_hours) as avg_mttd, avg(mttr_hours) as avg_mttr, perc95(mttr_hours) as p95_mttr",
              "m": "Ensure ES workflows capture detection, triage, and resolution timestamps. Calculate MTTD (event to detection) and MTTR (detection to resolution). Track by severity, type, and analyst. Report weekly/monthly to leadership.",
              "z": "Single value (avg MTTD/MTTR), Line chart (MTTD/MTTR trends), Bar chart (by incident type), Gauge (vs target).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ES.\n• Ensure the following data sources are available: ES notable events (detection time, response time, closure time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure ES workflows capture detection, triage, and resolution timestamps. Calculate MTTD (event to detection) and MTTR (detection to resolution). Track by severity, type, and analyst. Report weekly/monthly to leadership.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` status=\"Closed\"\n| eval mttd_hours=round((detection_time-event_time)/3600,1)\n| eval mttr_hours=round((closure_time-detection_time)/3600,1)\n| stats avg(mttd_hours) as avg_mttd, avg(mttr_hours) as avg_mttr, perc95(mttr_hours) as p95_mttr\n```\n\nUnderstanding this SPL\n\n**MTTD and MTTR Tracking** — MTTD and MTTR are the primary metrics for SOC effectiveness. Tracking drives process improvement and justifies investment.\n\nDocumented **Data sources**: ES notable events (detection time, response time, closure time). **App/TA** (typical add-on context): Splunk ES. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **mttd_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mttr_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (avg MTTD/MTTR), Line chart (MTTD/MTTR trends), Bar chart (by incident type), Gauge (vs target).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this for MTTD and MTTR Tracking so leadership can see trends and the SOC can see workload and quality over time.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.4",
              "n": "Playbook Execution Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "SOAR playbook failures leave incidents unhandled. Monitoring ensures automation reliability and identifies integration issues.",
              "t": "Splunk SOAR",
              "d": "SOAR execution logs, playbook run results",
              "q": "index=soar sourcetype=\"phantom:playbook_run\"\n| stats count(eval(status=\"success\")) as success, count(eval(status=\"failed\")) as failed by playbook_name\n| eval success_rate=round(success/(success+failed)*100,1)\n| where success_rate < 95",
              "m": "Ingest SOAR execution logs into Splunk. Track playbook success/failure rates. Alert on failures for critical playbooks. Identify failing action steps for debugging. Report on automation coverage and time savings.",
              "z": "Table (playbook success rates), Bar chart (failure rate by playbook), Line chart (execution trend), Single value (overall success %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk SOAR.\n• Ensure the following data sources are available: SOAR execution logs, playbook run results.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest SOAR execution logs into Splunk. Track playbook success/failure rates. Alert on failures for critical playbooks. Identify failing action steps for debugging. Report on automation coverage and time savings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=soar sourcetype=\"phantom:playbook_run\"\n| stats count(eval(status=\"success\")) as success, count(eval(status=\"failed\")) as failed by playbook_name\n| eval success_rate=round(success/(success+failed)*100,1)\n| where success_rate < 95\n```\n\nUnderstanding this SPL\n\n**Playbook Execution Monitoring** — SOAR playbook failures leave incidents unhandled. Monitoring ensures automation reliability and identifies integration issues.\n\nDocumented **Data sources**: SOAR execution logs, playbook run results. **App/TA** (typical add-on context): Splunk SOAR. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: soar; **sourcetype**: phantom:playbook_run. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=soar, sourcetype=\"phantom:playbook_run\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by playbook_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **success_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where success_rate < 95` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (playbook success rates), Bar chart (failure rate by playbook), Line chart (execution trend), Single value (overall success %).",
              "script": "",
              "premium": "Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Playbook Execution Monitoring' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.5",
              "n": "Correlation Search Performance",
              "c": "medium",
              "f": "intermediate",
              "v": "Slow or resource-intensive correlation searches degrade ES performance and may miss detections if they timeout.",
              "t": "Splunk internal metrics",
              "d": "`_internal` scheduler logs",
              "q": "index=_internal sourcetype=scheduler savedsearch_name=\"*Correlation*\"\n| stats avg(run_time) as avg_runtime, max(run_time) as max_runtime by savedsearch_name\n| where avg_runtime > 60\n| sort -avg_runtime",
              "m": "Monitor ES correlation search run times from `_internal`. Alert when searches exceed their schedule interval (running longer than they should). Identify skipped searches. Optimize SPL for slow searches.",
              "z": "Table (search performance), Bar chart (avg runtime by search), Line chart (runtime trend), Single value (skipped searches).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk internal metrics.\n• Ensure the following data sources are available: `_internal` scheduler logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor ES correlation search run times from `_internal`. Alert when searches exceed their schedule interval (running longer than they should). Identify skipped searches. Optimize SPL for slow searches.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=scheduler savedsearch_name=\"*Correlation*\"\n| stats avg(run_time) as avg_runtime, max(run_time) as max_runtime by savedsearch_name\n| where avg_runtime > 60\n| sort -avg_runtime\n```\n\nUnderstanding this SPL\n\n**Correlation Search Performance** — Slow or resource-intensive correlation searches degrade ES performance and may miss detections if they timeout.\n\nDocumented **Data sources**: `_internal` scheduler logs. **App/TA** (typical add-on context): Splunk internal metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: scheduler. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=scheduler. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by savedsearch_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_runtime > 60` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (search performance), Bar chart (avg runtime by search), Line chart (runtime trend), Single value (skipped searches).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Correlation Search Performance' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.6",
              "n": "False Positive Rate Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "High false positive rates cause alert fatigue, leading analysts to miss real threats. Tracking drives detection rule optimization.",
              "t": "Splunk ES",
              "d": "ES notable events with analyst disposition",
              "q": "`notable` status=\"Closed\"\n| stats count(eval(disposition=\"True Positive\")) as tp, count(eval(disposition=\"False Positive\")) as fp by source\n| eval fp_rate=round(fp/(tp+fp)*100,1)\n| where fp_rate > 30\n| sort -fp_rate",
              "m": "Ensure analysts set dispositions when closing notables (TP, FP, Benign). Calculate FP rate per detection rule. Flag rules with >30% FP rate for tuning. Track overall FP rate as a SOC quality metric. Target <20% FP rate.",
              "z": "Bar chart (FP rate by rule), Line chart (overall FP trend), Table (rules needing tuning), Gauge (overall FP rate).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ES.\n• Ensure the following data sources are available: ES notable events with analyst disposition.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure analysts set dispositions when closing notables (TP, FP, Benign). Calculate FP rate per detection rule. Flag rules with >30% FP rate for tuning. Track overall FP rate as a SOC quality metric. Target <20% FP rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` status=\"Closed\"\n| stats count(eval(disposition=\"True Positive\")) as tp, count(eval(disposition=\"False Positive\")) as fp by source\n| eval fp_rate=round(fp/(tp+fp)*100,1)\n| where fp_rate > 30\n| sort -fp_rate\n```\n\nUnderstanding this SPL\n\n**False Positive Rate Tracking** — High false positive rates cause alert fatigue, leading analysts to miss real threats. Tracking drives detection rule optimization.\n\nDocumented **Data sources**: ES notable events with analyst disposition. **App/TA** (typical add-on context): Splunk ES. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by source** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fp_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fp_rate > 30` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (FP rate by rule), Line chart (overall FP trend), Table (rules needing tuning), Gauge (overall FP rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this for False Positive Rate Tracking so leadership can see trends and the SOC can see workload and quality over time.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.7",
              "n": "AWS Defense Evasion Delete CloudTrail",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects the deletion of AWS CloudTrail logs via DeleteTrail events. Adversaries may delete CloudTrail to evade detection and operate stealthily. Confirmed malicious activity could allow attackers to cover their tracks and prolong unauthorized access.",
              "t": "Splunk Security Essentials, Splunk Add-on for AWS",
              "d": "AWS CloudTrail DeleteTrail",
              "q": "`cloudtrail` eventName = DeleteTrail eventSource = cloudtrail.amazonaws.com userAgent != console.amazonaws.com errorCode = success\n| rename user_name as user\n| stats count min(_time) as firstTime max(_time) as lastTime BY signature dest user user_agent src vendor_account vendor_region vendor_product\n| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `aws_defense_evasion_delete_cloudtrail_filter`",
              "m": "Install Splunk AWS Add-on and enable CloudTrail logs in your AWS environment. Deploy the ESCU detection from Splunk Security Essentials or security_content.",
              "z": "Table (user, dest, firstTime, lastTime), Timeline.",
              "kfp": "Legitimate teardown of CloudTrail by an admin; verify with change management.",
              "refs": "https://attack.mitre.org/techniques/T1562/008/",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "TTP",
              "sdomain": "cloud",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Defense Evasion Delete CloudTrail\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). When triggered, it creates a Notable Event in the Incident Review dashboard for analyst triage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DeleteTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (cloud): Ensure data relevant to the cloud security domain is ingested and CIM-normalized.\n• MITRE ATT&CK: T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Defense Evasion Delete CloudTrail\" or filter by Analytic Story.\n3. Review the detection configuration — verify the scheduling interval and throttling settings match your operational tempo.\n4. Enable the detection as a Correlation Search. It will create Notable Events directly when triggered.\n5. Set the Notable Event severity and urgency appropriate to your environment’s risk posture.\n6. Configure Adaptive Response Actions: email notifications, ServiceNow ticket creation, SOAR playbook triggers, or other response workflows.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate teardown of CloudTrail by an admin; verify with change management.\n\n• Review the detection’s filter criteria and adjust for known-good activity in your environment.\n• Configure throttling in Content Management to prevent duplicate Notable Events for the same entity within a configurable window (typically 1–24 hours depending on detection frequency).\n• Use Notable Event Suppression for entities or patterns that are consistently benign after investigation.\n\nAnalyst Response Workflow\n\nWhen this detection generates a Notable Event:\n\n1. Open the Notable Event in Incident Review. Review the triggering event details, affected entities, and assigned severity.\n2. Investigate the involved entities using the Asset Investigator and Identity Investigator dashboards for historical context and behavioral patterns.\n3. Correlate with related Notable Events and threat intelligence to assess whether this is an isolated event or part of a broader campaign.\n4. Take appropriate response actions: contain, remediate, and recover. Leverage SOAR playbooks where available for consistent and rapid response.\n5. Update the Notable Event status and document investigation findings for post-incident review and compliance.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'AWS Defense Evasion Delete CloudTrail' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "aws",
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.8",
              "n": "Detect Password Spray Attempts",
              "c": "critical",
              "f": "intermediate",
              "v": "Identifies password spray attacks across accounts: many accounts targeted with few attempts each to avoid lockout. Early detection prevents credential compromise and account takeover.",
              "t": "Splunk Security Essentials, Splunk_TA_windows, Splunk_TA_nix",
              "d": "Windows Security Event Log (4625), LDAP, authentication logs",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" EventCode=4625\n| stats dc(Account) as unique_accounts count as attempt_count by Computer, IpAddress, _time\n| where unique_accounts >= 5 AND attempt_count < 50\n| table _time, Computer, IpAddress, unique_accounts, attempt_count",
              "m": "Ingest Windows 4625 (failed logon) and/or IdP logs. Deploy the ESCU detection from security_content; tune threshold (unique_accounts, attempt_count) for your environment.",
              "z": "Table (source IP, unique accounts, attempts), Timeline.",
              "kfp": "Legitimate lockouts or lab/testing; tune by source IP and account set.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "TTP",
              "sdomain": "identity",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Password Spray Attempts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). When triggered, it creates a Notable Event in the Incident Review dashboard for analyst triage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Security Event Log (4625), LDAP, authentication logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Data Model Acceleration: enable DMA for Authentication under Settings → Data Models. Set the acceleration summary range to cover the detection’s lookback window (typically 7 days minimum).\n• Security domain (identity): Ensure authentication and identity logs are ingested. Configure the Asset and Identity framework with your CMDB and HR data sources for entity enrichment.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Password Spray Attempts\" or filter by Analytic Story.\n3. Review the detection configuration — verify the scheduling interval and throttling settings match your operational tempo.\n4. Enable the detection as a Correlation Search. It will create Notable Events directly when triggered.\n5. Set the Notable Event severity and urgency appropriate to your environment’s risk posture.\n6. Configure Adaptive Response Actions: email notifications, ServiceNow ticket creation, SOAR playbook triggers, or other response workflows.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate lockouts or lab/testing; tune by source IP and account set.\n\n• Review the detection’s filter criteria and adjust for known-good activity in your environment.\n• Configure throttling in Content Management to prevent duplicate Notable Events for the same entity within a configurable window (typically 1–24 hours depending on detection frequency).\n• Use Notable Event Suppression for entities or patterns that are consistently benign after investigation.\n\nAnalyst Response Workflow\n\nWhen this detection generates a Notable Event:\n\n1. Open the Notable Event in Incident Review. Review the triggering event details, affected entities, and assigned severity.\n2. Investigate the involved entities using the Asset Investigator and Identity Investigator dashboards for historical context and behavioral patterns.\n3. Correlate with related Notable Events and threat intelligence to assess whether this is an isolated event or part of a broader campaign.\n4. Take appropriate response actions: contain, remediate, and recover. Leverage SOAR playbooks where available for consistent and rapid response.\n5. Update the Notable Event status and document investigation findings for post-incident review and compliance.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for many failed logon attempts spread across a lot of accounts from the same place, a pattern that often means someone is trying common passwords without locking anyone out, so the team can stop takeover early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t dc(Authentication.user) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - agg_value",
              "escu": true,
              "escu_rba": false,
              "e": [
                "linux",
                "security_essentials",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.9",
              "n": "Cisco Duo Admin Login Unusual Browser",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a Duo admin logs in using a browser other than Chrome, which is considered unusual based on typical access patterns. Please adjust as needed to your environment. The detection leverages Duo activity logs ingested via the Cisco Security Cloud App and filters for admin login actions where the browser is not Chrome. By renaming and aggregating relevant fields such as user, browser, IP address, and location, the analytic highlights potentially suspic…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Activity",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Duo Activity ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Admin Login Unusual Browser\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Activity. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Admin Login Unusual Browser\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for unusual Cisco Duo admin sign-in patterns described in the title so we notice possible stolen or misused admin sessions early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.10",
              "n": "Cisco Duo Admin Login Unusual Country",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where a Duo admin login originates from a country outside of the United States, which may indicate suspicious or unauthorized access attempts. Please adjust as needed to your environment. It works by analyzing Duo activity logs for admin login actions and filtering out events where the access device's country is not within the expected region. By correlating user, device, browser, and location details, the analytic highlights anomalies in geographic login…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Activity",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Duo Activity ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Admin Login Unusual Country\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Activity. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Admin Login Unusual Country\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for unusual Cisco Duo admin sign-in patterns described in the title so we notice possible stolen or misused admin sessions early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.11",
              "n": "Cisco Duo Admin Login Unusual Os",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies Duo admin login attempts from operating systems that are unusual for your environment, excluding commonly used OS such as Mac OS X. Please adjust to your environment. It works by analyzing Duo activity logs for admin login actions and filtering out logins from expected operating systems. The analytic then aggregates events by browser, version, source IP, location, and OS details to highlight anomalies. Detecting admin logins from unexpected operating systems is …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Activity",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Duo Activity ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Admin Login Unusual Os\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Activity. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Admin Login Unusual Os\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for unusual Cisco Duo admin sign-in patterns described in the title so we notice possible stolen or misused admin sessions early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.12",
              "n": "Cisco Duo Bulk Policy Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where a Duo administrator performs a bulk deletion of more than three policies in a single action. It identifies this behavior by searching Duo activity logs for the policy_bulk_delete action, extracting the names of deleted policies, and counting them. If the count exceeds three, the event is flagged. This behavior is significant for a Security Operations Center (SOC) because mass deletion of security policies can indicate malicious activity, such as an …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Administrator",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The analytic leverages Duo activity logs to be ingested using the Cisco Security Cloud App (https://splunkbase.splunk.com/app/7404).",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Bulk Policy Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Administrator. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Bulk Policy Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for unusual Cisco Duo admin sign-in patterns described in the title so we notice possible stolen or misused admin sessions early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.13",
              "n": "Cisco Duo Bypass Code Generation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Duo Bypass Code Generation. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Administrator",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Duo Administrator ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Bypass Code Generation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Administrator. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Bypass Code Generation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for unusual Cisco Duo admin sign-in patterns described in the title so we notice possible stolen or misused admin sessions early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.14",
              "n": "Cisco Duo Policy Allow Devices Without Screen Lock",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Duo Policy Allow Devices Without Screen Lock. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Administrator",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Duo Administrator ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Policy Allow Devices Without Screen Lock\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Administrator. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Policy Allow Devices Without Screen Lock\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for unusual Cisco Duo admin sign-in patterns described in the title so we notice possible stolen or misused admin sessions early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.15",
              "n": "Cisco Duo Policy Allow Network Bypass 2FA",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Duo Policy Allow Network Bypass 2FA. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Administrator",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Duo Administrator ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Policy Allow Network Bypass 2FA\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Administrator. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Policy Allow Network Bypass 2FA\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for unusual Cisco Duo admin sign-in patterns described in the title so we notice possible stolen or misused admin sessions early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.16",
              "n": "Cisco Duo Policy Allow Old Java",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Duo Policy Allow Old Java. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Administrator",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Duo Administrator ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Policy Allow Old Java\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Administrator. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Policy Allow Old Java\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for unusual Cisco Duo admin sign-in patterns described in the title so we notice possible stolen or misused admin sessions early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.17",
              "n": "Cisco Duo Policy Allow Tampered Devices",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Duo Policy Allow Tampered Devices. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Administrator",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Duo Administrator ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Policy Allow Tampered Devices\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Administrator. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Policy Allow Tampered Devices\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for unusual Cisco Duo admin sign-in patterns described in the title so we notice possible stolen or misused admin sessions early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.18",
              "n": "Cisco Duo Policy Bypass 2FA",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where a Duo policy is created or updated to allow access without two-factor authentication (2FA). It identifies this behavior by searching Duo administrator activity logs for policy changes that set the authentication status to \"Allow access without 2FA.\" By monitoring for these specific actions, the analytic highlights potential attempts to weaken authentication controls, which could be indicative of malicious activity or insider threats. This behavior i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Administrator",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Duo Administrator ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Policy Bypass 2FA\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Administrator. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Policy Bypass 2FA\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for unusual Cisco Duo admin sign-in patterns described in the title so we notice possible stolen or misused admin sessions early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.19",
              "n": "Cisco Duo Policy Deny Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a Duo administrator creates or updates a policy to explicitly deny user access within the Duo environment. It detects this behavior by searching Duo administrator activity logs for policy creation or update actions where the authentication status is set to \"Deny access.\" By correlating these events with user and admin details, the analytic highlights potential misuse or malicious changes to access policies. This behavior is critical for a SOC to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Administrator",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Duo Administrator ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Policy Deny Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Administrator. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Policy Deny Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for unusual Cisco Duo admin sign-in patterns described in the title so we notice possible stolen or misused admin sessions early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.20",
              "n": "Cisco Duo Policy Skip 2FA for Other Countries",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Duo Policy Skip 2FA for Other Countries. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Administrator",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Duo Administrator ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Policy Skip 2FA for Other Countries\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Administrator. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Policy Skip 2FA for Other Countries\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for unusual Cisco Duo admin sign-in patterns described in the title so we notice possible stolen or misused admin sessions early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.21",
              "n": "Cisco Duo Set User Status to Bypass 2FA",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Duo Set User Status to Bypass 2FA. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Duo Administrator",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Duo Administrator ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/7404",
              "mitre": [
                "T1556"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Duo Set User Status to Bypass 2FA\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Duo Administrator. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Duo Set User Status to Bypass 2FA\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for unusual Cisco Duo admin sign-in patterns described in the title so we notice possible stolen or misused admin sessions early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.22",
              "n": "Detect Distributed Password Spray Attempts",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic employs the 3-sigma approach to identify distributed password spray attacks. A distributed password spray attack is a type of brute force attack where the attacker attempts a few common passwords against many different accounts, connecting from multiple IP addresses to avoid detection. By utilizing the Authentication Data Model, this detection is effective for all CIM-mapped authentication events, providing comprehensive coverage and enhancing security against these attacks.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Sign-in activity",
              "q": "| tstats `security_content_summariesonly` count dc(Authentication.user) AS unique_accounts dc(Authentication.src) AS unique_src values(Authentication.app) AS app from datamodel=Authentication where Authentication.action=\"failure\" by _time span=5m\n| eventstats avg(unique_accounts) AS avg_accounts stdev(unique_accounts) AS stdev_accounts avg(unique_src) AS avg_src stdev(unique_src) AS stdev_src\n| eval upper_accounts = avg_accounts + (3 * stdev_accounts), upper_src = avg_src + (3 * stdev_src)\n| where unique_accounts > upper_accounts AND unique_src > upper_src",
              "m": "Ensure that all relevant authentication data is mapped to the Common Information Model (CIM) and that the src field is populated with the source device information. Additionally, ensure that fill_nullvalue is set within the security_content_summariesonly macro to include authentication events from log sources that do not feature the signature_id field in the results.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is common to see a spike of legitimate failed authentication events on monday mornings.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Distributed Password Spray Attempts\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Sign-in activity. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Distributed Password Spray Attempts\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is common to see a spike of legitimate failed authentication events on monday mornings.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the behavior in “Detect Distributed Password Spray Attempts” using the same log fields the detection already uses, so the team can act before impact spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.23",
              "n": "ESXi Account Modified",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the creation, deletion, or modification of a local user account on an ESXi host. This activity may indicate unauthorized access, indicator removal, or persistence attempts by an attacker seeking to establish or maintain control of the host.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "New local accounts being created in ESXi is rare in most environments. Tune as needed.",
              "refs": "https://detect.fyi/detecting-and-responding-to-esxi-compromise-with-splunk-f33998ce7823",
              "mitre": [
                "T1136.001",
                "T1078",
                "T1098"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Account Modified\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.001, T1078, T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Account Modified\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: New local accounts being created in ESXi is rare in most environments. Tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi Account Modified so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.24",
              "n": "ESXi Audit Tampering",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the use of the esxcli system auditrecords commands, which can be used to tamper with logging on an ESXi host. This action may indicate an attempt to evade detection or hinder forensic analysis by preventing the recording of system-level audit events.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed.",
              "refs": "https://detect.fyi/detecting-and-responding-to-esxi-compromise-with-splunk-f33998ce7823",
              "mitre": [
                "T1562.003",
                "T1070"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Audit Tampering\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.003, T1070. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Audit Tampering\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi Audit Tampering so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.25",
              "n": "ESXi Download Errors",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies failed file download attempts on ESXi hosts by looking for specific error messages in the system logs. These failures may indicate unauthorized or malicious attempts to install or update components—such as VIBs or scripts",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1601.001",
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Download Errors\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1601.001, T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Download Errors\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi Download Errors so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.26",
              "n": "ESXi Encryption Settings Modified",
              "c": "high",
              "f": "intermediate",
              "v": "Detects the disabling of critical encryption enforcement settings on an ESXi host, such as secure boot or executable verification requirements, which may indicate an attempt to weaken hypervisor integrity or allow unauthorized code execution.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1562"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Encryption Settings Modified\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Encryption Settings Modified\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi Encryption Settings Modified so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.27",
              "n": "ESXi External Root Login Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies instances where the ESXi UI is accessed using the root account instead of a delegated administrative user. Direct root access to the UI bypasses role-based access controls and auditing practices, and may indicate risky behavior, misconfiguration, or unauthorized activity by a malicious actor using compromised credentials.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed. Administrators may use the root account for troubleshooting or initial user creation.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi External Root Login Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi External Root Login Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed. Administrators may use the root account for troubleshooting or initial user creation.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi External Root Login Activity so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.28",
              "n": "ESXi Loghost Config Tampering",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies changes to the syslog loghost configuration on an ESXi host, which may indicate an attempt to disrupt log forwarding and evade detection.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1562"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Loghost Config Tampering\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Loghost Config Tampering\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi Loghost Config Tampering so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.29",
              "n": "ESXi Reverse Shell Patterns",
              "c": "high",
              "f": "intermediate",
              "v": "This detection looks for reverse shell string patterns on an ESXi host, which may indicate that a threat actor is attempting to establish remote control over the system.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1059"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Reverse Shell Patterns\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Reverse Shell Patterns\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi Reverse Shell Patterns so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.30",
              "n": "ESXi Shell Access Enabled",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies when the ESXi Shell is enabled on a host, which may indicate that a malicious actor is preparing to execute commands locally or establish persistent access. Enabling the shell outside of approved maintenance windows can be a sign of compromise or unauthorized administrative activity.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed. Some Administrators may enable this for troubleshooting.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1021"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Shell Access Enabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Shell Access Enabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed. Some Administrators may enable this for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi Shell Access Enabled so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.31",
              "n": "ESXi SSH Brute Force",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies signs of SSH brute-force attacks by monitoring for a high number of failed login attempts within a short time frame. Such activity may indicate an attacker attempting to gain unauthorized access through password guessing.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1110"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi SSH Brute Force\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi SSH Brute Force\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
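\n\nOptional ESXi feed spot-check (a minimal sketch; `vmw-syslog` is an assumed sourcetype from the VMware ESXi logs add-on, so substitute whatever index and sourcetype your ingestion actually assigns):\n\n```spl\nindex=* sourcetype=vmw-syslog\n| stats count max(_time) as lastTime by host\n| `security_content_ctime(lastTime)`\n| sort -count\n```\n\nEvery ESXi host in scope should appear with recent events; if one is missing, fix its syslog forwarding before relying on this detection.",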
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi SSH Brute Force so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.32",
              "n": "ESXi Syslog Config Change",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies changes to the syslog configuration on an ESXi host using esxcli, which may indicate an attempt to disrupt log collection and evade detection.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1562.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi Syslog Config Change\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi Syslog Config Change\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
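\n\nOptional ESXi feed spot-check (a minimal sketch; `vmw-syslog` is an assumed sourcetype from the VMware ESXi logs add-on, so substitute whatever index and sourcetype your ingestion actually assigns):\n\n```spl\nindex=* sourcetype=vmw-syslog\n| stats count max(_time) as lastTime by host\n| `security_content_ctime(lastTime)`\n| sort -count\n```\n\nEvery ESXi host in scope should appear with recent events; if one is missing, fix its syslog forwarding before relying on this detection.",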
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi Syslog Config Change so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.33",
              "n": "ESXi System Clock Manipulation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies a significant change to the system clock on an ESXi host, which may indicate an attempt to manipulate timestamps and evade detection or forensic analysis",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments, however tune as needed",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1070.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi System Clock Manipulation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi System Clock Manipulation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments, however tune as needed\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
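\n\nOptional ESXi feed spot-check (a minimal sketch; `vmw-syslog` is an assumed sourcetype from the VMware ESXi logs add-on, so substitute whatever index and sourcetype your ingestion actually assigns):\n\n```spl\nindex=* sourcetype=vmw-syslog\n| stats count max(_time) as lastTime by host\n| `security_content_ctime(lastTime)`\n| sort -count\n```\n\nEvery ESXi host in scope should appear with recent events; if one is missing, fix its syslog forwarding before relying on this detection.",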
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi System Clock Manipulation so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.34",
              "n": "ESXi System Information Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the use of ESXCLI system-level commands that retrieve configuration details. While used for legitimate administration, this behavior may also indicate adversary reconnaissance aimed at profiling the ESXi host's capabilities, build information, or system role in preparation for further compromise.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may use this command when troubleshooting. Tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1082"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi System Information Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi System Information Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may use this command when troubleshooting. Tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
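\n\nOptional ESXi feed spot-check (a minimal sketch; `vmw-syslog` is an assumed sourcetype from the VMware ESXi logs add-on, so substitute whatever index and sourcetype your ingestion actually assigns):\n\n```spl\nindex=* sourcetype=vmw-syslog\n| stats count max(_time) as lastTime by host\n| `security_content_ctime(lastTime)`\n| sort -count\n```\n\nEvery ESXi host in scope should appear with recent events; if one is missing, fix its syslog forwarding before relying on this detection.",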
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi System Information Discovery so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.35",
              "n": "ESXi User Granted Admin Role",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies when a user is granted the Administrator role on an ESXi host. Assigning elevated privileges is a critical action that can indicate potential malicious behavior if performed unexpectedly. Adversaries who gain access may use this to escalate privileges, maintain persistence, or disable security controls.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives in most environments after initial setup, however tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1098",
                "T1078"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi User Granted Admin Role\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi User Granted Admin Role\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives in most environments after initial setup, however tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
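\n\nOptional ESXi feed spot-check (a minimal sketch; `vmw-syslog` is an assumed sourcetype from the VMware ESXi logs add-on, so substitute whatever index and sourcetype your ingestion actually assigns):\n\n```spl\nindex=* sourcetype=vmw-syslog\n| stats count max(_time) as lastTime by host\n| `security_content_ctime(lastTime)`\n| sort -count\n```\n\nEvery ESXi host in scope should appear with recent events; if one is missing, fix its syslog forwarding before relying on this detection.",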
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi User Granted Admin Role so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.36",
              "n": "ESXi VIB Acceptance Level Tampering",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies changes to the VIB (vSphere Installation Bundle) acceptance level on an ESXi host. Modifying the acceptance level, such as setting it to CommunitySupported, lowers the system's integrity enforcement and may allow the installation of unsigned or unverified software.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may use this command when installing third party VIBs. Tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1562"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi VIB Acceptance Level Tampering\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi VIB Acceptance Level Tampering\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may use this command when installing third party VIBs. Tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
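\n\nOptional ESXi feed spot-check (a minimal sketch; `vmw-syslog` is an assumed sourcetype from the VMware ESXi logs add-on, so substitute whatever index and sourcetype your ingestion actually assigns):\n\n```spl\nindex=* sourcetype=vmw-syslog\n| stats count max(_time) as lastTime by host\n| `security_content_ctime(lastTime)`\n| sort -count\n```\n\nEvery ESXi host in scope should appear with recent events; if one is missing, fix its syslog forwarding before relying on this detection.",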
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi VIB Acceptance Level Tampering so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.37",
              "n": "ESXi VM Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the use of ESXCLI commands to discover virtual machines on an ESXi host While used by administrators, this activity may also indicate adversary reconnaissance aimed at identifying high value targets, mapping the virtual environment, or preparing for data theft or destructive operations.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may use this command when troubleshooting. Tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1673"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi VM Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1673. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi VM Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may use this command when troubleshooting. Tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
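\n\nOptional ESXi feed spot-check (a minimal sketch; `vmw-syslog` is an assumed sourcetype from the VMware ESXi logs add-on, so substitute whatever index and sourcetype your ingestion actually assigns):\n\n```spl\nindex=* sourcetype=vmw-syslog\n| stats count max(_time) as lastTime by host\n| `security_content_ctime(lastTime)`\n| sort -count\n```\n\nEvery ESXi host in scope should appear with recent events; if one is missing, fix its syslog forwarding before relying on this detection.",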
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi VM Discovery so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.38",
              "n": "ESXi VM Exported via Remote Tool",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the use of a remote tool to download virtual machine disk files from a datastore. The NFC protocol is used by management tools to transfer files to and from ESXi hosts, but it can also be abused by attackers or insiders to exfiltrate full virtual disk images",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMWare ESXi Syslog",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This is based on syslog data generated by VMware ESXi hosts. To implement this search, you must configure your ESXi systems to forward syslog output to your Splunk deployment. These logs must be ingested with the appropriate Splunk Technology Add-on for VMware ESXi Logs, which provides field extractions and CIM compatibility.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may use this command when troubleshooting. Tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ESXi VM Exported via Remote Tool\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMWare ESXi Syslog. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ESXi VM Exported via Remote Tool\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may use this command when troubleshooting. Tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ESXi VM Exported via Remote Tool so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.39",
              "n": "MCP Filesystem Server Suspicious Extension Write",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies attempts to create executable or script files through MCP filesystem server connections. Threat actors leveraging LLM-based tools may attempt to write malicious executables, scripts, or batch files to disk for persistence or code execution. The detection prioritizes files written to system directories or startup locations which indicate higher likelihood of malicious intent.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "MCP Server",
              "q": "`mcp_server` method IN (\"write_file\", \"create_file\") direction=inbound\n    | spath output=file_path path=params.path\n    | spath output=file_content path=params.content\n    | eval dest=host\n    | eval file_extension=lower(mvindex(split(file_path, \".\"), -1))\n    | where file_extension IN (\n        \"exe\", \"dll\", \"ps1\", \"bat\", \"cmd\", \"vbs\", \"js\", \"scr\", \"msi\", \"hta\", \"wsf\", \"wsh\", \"pif\", \"com\", \"cpl\",\n        \"sh\", \"bash\", \"zsh\", \"ksh\", \"csh\", \"tcsh\", \"fish\",\n        \"py\", \"pl\", \"rb\", \"php\", \"lua\", \"awk\",\n        \"so\", \"dylib\", \"bin\", \"elf\", \"run\", \"AppImage\",\n        \"deb\", \"rpm\", \"pkg\", \"dmg\",\n        \"plist\", \"service\", \"timer\", \"socket\", \"conf\"\n        )\n    | eval\n        file_path_lower=lower(file_path),\n        is_system_path = if(match(file_path_lower, \"(windows|system32|syswow64|program files|/usr|/bin|/sbin|/lib|/lib64|/etc|/opt)\"), 1, 0),\n        is_startup_path = if(match(file_path_lower, \"(startup|autorun|cron\\.d|crontab|launchd|launchagents|launchdaemons|systemd|init\\.d|rc\\.d|rc\\.local|profile\\.d|bashrc|zshrc|bash_profile)\"), 1, 0),\n        is_hidden_unix = if(match(file_path, \"/\\.[^/]+$\"), 1, 0),\n        content_length=len(file_content)\n    | stats count min(_time) as firstTime max(_time) as lastTime values(file_path) as file_paths values(file_extension) as extensions max(is_system_path) as targets_system_path max(is_startup_path) as targets_startup_path max(is_hidden_unix) as targets_hidden_file avg(content_length) as avg_content_size by dest, method\n    | eval\n        targets_system_path=if(isnull(targets_system_path), 0, targets_system_path),\n        targets_startup_path=if(isnull(targets_startup_path), 0, targets_startup_path),\n        targets_hidden_file=if(isnull(targets_hidden_file), 0, targets_hidden_file)\n    | sort - targets_startup_path, - targets_system_path, - targets_hidden_file, - count\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | table dest firstTime lastTime count method extensions file_paths targets_system_path targets_startup_path targets_hidden_file avg_content_size\n    | `mcp_filesystem_server_suspicious_extension_write_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires MCP Server ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate developers using LLM assistants to generate scripts or automation tools, DevOps engineers creating deployment scripts, and system administrators generating batch files for maintenance tasks.",
              "refs": "https://splunkbase.splunk.com/app/8377, https://cymulate.com/blog/cve-2025-53109-53110-escaperoute-anthropic/, https://www.splunk.com/en_us/blog/security/securing-ai-agents-model-context-protocol.html",
              "mitre": [
                "T1059"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MCP Filesystem Server Suspicious Extension Write\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: MCP Server. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MCP Filesystem Server Suspicious Extension Write\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate developers using LLM assistants to generate scripts or automation tools, DevOps engineers creating deployment scripts, and system administrators generating batch files for maintenance tasks.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'MCP Filesystem Server Suspicious Extension Write' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.40",
              "n": "MCP Postgres Suspicious Query",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies potentially malicious SQL queries executed through MCP PostgreSQL server connections, monitoring for privilege escalation attempts, credential theft, and schema reconnaissance. These patterns are commonly observed in SQL injection attacks, compromised application credentials, and insider threat scenarios targeting database assets.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "MCP Server",
              "q": "`mcp_server` method=query direction=inbound\n    | eval dest=host\n    | eval query_lower=lower('params.query')\n    | eval suspicious_query='params.query'\n    | eval is_priv_escalation=if(like(query_lower, \"%update%users%role%admin%\") OR like(query_lower, \"%grant%admin%\") OR like(query_lower, \"%grant%superuser%\"), 1, 0)\n    | eval is_credential_theft=if(like(query_lower, \"%password%\") OR like(query_lower, \"%credential%\") OR like(query_lower, \"%api_key%\") OR like(query_lower, \"%secret%\"), 1, 0)\n    | eval is_recon=if(like(query_lower, \"%information_schema%\") OR like(query_lower, \"%pg_catalog%\") OR like(query_lower, \"%pg_tables%\") OR like(query_lower, \"%pg_user%\"), 1, 0)\n    | where is_priv_escalation=1 OR is_credential_theft=1 OR is_recon=1\n    | eval attack_type=case(\n        is_priv_escalation=1, \"Privilege Escalation\",\n        is_credential_theft=1, \"Credential Theft\",\n        is_recon=1, \"Schema Reconnaissance\",\n        1=1, \"Unknown\")\n    | stats count min(_time) as firstTime max(_time) as lastTime values(suspicious_query) as suspicious_queries values(attack_type) as attack_types dc(attack_type) as attack_diversity by dest\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | table dest firstTime lastTime count suspicious_queries attack_types attack_diversity\n    | `mcp_postgres_suspicious_query_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires MCP Server ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate database administrators performing user management tasks, ORM frameworks querying information_schema for schema validation, password reset functionality, and CI/CD pipelines running database migrations.",
              "refs": "https://splunkbase.splunk.com/app/8377, https://www.nodejs-security.com/blog/the-tale-of-the-vulnerable-mcp-database-server, https://www.splunk.com/en_us/blog/security/securing-ai-agents-model-context-protocol.html",
              "mitre": [
                "T1555"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MCP Postgres Suspicious Query\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: MCP Server. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1555. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MCP Postgres Suspicious Query\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate database administrators performing user management tasks, ORM frameworks querying information_schema for schema validation, password reset functionality, and CI/CD pipelines running database migrations.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'MCP Postgres Suspicious Query' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.41",
              "n": "MCP Sensitive System File Search",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies MCP filesystem tool usage attempting to search for files containing sensitive patterns such as passwords, credentials, API keys, secrets, and configuration files. Adversaries and malicious insiders may abuse legitimate MCP filesystem capabilities to conduct reconnaissance and discover sensitive data stores for exfiltration or credential harvesting.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "MCP Server",
              "q": "`mcp_server`\n    (method IN (\"read_file\", \"get_file_contents\", \"read\", \"search_files\", \"find_files\", \"grep\", \"search\", \"list_directory\", \"read_directory\"))\n    (params.path=\"*.ssh*\" OR params.path=\"*Administrator*\" OR params.path=\"*credentials*\" OR params.path=\"*password*\" OR params.path=\"*.env*\" OR params.path=\"*id_rsa*\" OR params.path=\"*.pem*\" OR params.path=\"*.ppk*\" OR params.path=\"*.key*\" OR params.path=\"*secrets*\" OR params.path=\"*.aws*\" OR params.path=\"*.config*\"\n    OR params.pattern=\"*password*\" OR params.pattern=\"*key*\" OR params.pattern=\"*secret*\" OR params.pattern=\"*credential*\" OR params.pattern=\"*token*\" OR params.pattern=\"*auth*\" OR params.pattern=\"*api_key*\" OR params.pattern=\"*private_key*\")\n    | eval dest=host\n    | eval detection_type=case(\n        method IN (\"read_file\", \"get_file_contents\", \"read\"), \"PATH_ACCESS\",\n        method IN (\"search_files\", \"find_files\", \"grep\", \"search\"), \"PATTERN_SEARCH\",\n        method IN (\"list_directory\", \"read_directory\"), \"DIRECTORY_ENUM\",\n        1=1, \"UNKNOWN\")\n    | eval target_path=coalesce('params.path', 'params.directory', 'params.file')\n    | eval search_pattern=coalesce('params.pattern', 'params.query', 'params.search')\n    | stats count min(_time) as firstTime max(_time) as lastTime values(detection_type) as detection_types values(target_path) as targeted_paths values(search_pattern) as search_patterns values(method) as methods_used by dest, source\n    | eval time_span_seconds=lastTime-firstTime\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | table dest firstTime lastTime count source detection_types methods_used targeted_paths search_patterns time_span_seconds\n    | `mcp_sensitive_system_file_search_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires MCP Server ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Known false positives include legitimate development activities where developers search for configuration files, environment variables, or authentication modules as part of normal coding tasks, as well as security audits involving authorized security reviews or code scanning tools searching for hardcoded secrets. Additionally, documentation lookups for example config files or authentication documentation may trigger this detection, along with refactoring tasks where developers rename or consolidate credential management code across a codebase, and onboarding activities where new developers explore unfamiliar codebases to understand authentication flows.",
              "refs": "https://splunkbase.splunk.com/app/8377, https://www.splunk.com/en_us/blog/security/securing-ai-agents-model-context-protocol.html",
              "mitre": [
                "T1552.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MCP Sensitive System File Search\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: MCP Server. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MCP Sensitive System File Search\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Known false positives include legitimate development activities where developers search for configuration files, environment variables, or authentication modules as part of normal coding tasks, as well as security audits involving authorized security reviews or code scanning tools searching for hardcoded secrets. Additionally, documentation lookups for example config files or authentication documentation may trigger this detection, along with refactoring tasks where developers rename or consolidate credential management code across a codebase, and onboarding activities where new developers explore unfamiliar codebases to understand authentication flows.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. 
Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'MCP Sensitive System File Search' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.42",
              "n": "Okta IDP Lifecycle Modifications",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies modifications to Okta Identity Provider (IDP) lifecycle events, including creation, activation, deactivation, and deletion of IDP configurations. It uses OktaIm2 logs ingested via the Splunk Add-on for Okta Identity Cloud. Monitoring these events is crucial for maintaining the integrity and security of authentication mechanisms. Unauthorized or anomalous changes could indicate potential security breaches or misconfigurations. If confirmed malicious, attackers co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's possible for legitimate administrative actions or automated processes to trigger this detection, especially if there are bulk modifications to Okta IDP lifecycle events. Review the context of the modification, such as the user making the change and the specific lifecycle event modified, to determine if it aligns with expected behavior.",
              "refs": "https://www.obsidiansecurity.com/blog/behind-the-breach-cross-tenant-impersonation-in-okta/, https://splunkbase.splunk.com/app/6553",
              "mitre": [
                "T1087.004"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta IDP Lifecycle Modifications\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta IDP Lifecycle Modifications\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's possible for legitimate administrative actions or automated processes to trigger this detection, especially if there are bulk modifications to Okta IDP lifecycle events. Review the context of the modification, such as the user making the change and the specific lifecycle event modified, to determine if it aligns with expected behavior.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.43",
              "n": "Okta MFA Exhaustion Hunt",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects patterns of successful and failed Okta MFA push attempts to identify potential MFA exhaustion attacks. It leverages Okta event logs, specifically focusing on push verification events, and uses statistical evaluations to determine suspicious activity. This activity is significant as it may indicate an attacker attempting to bypass MFA by overwhelming the user with push notifications. If confirmed malicious, this could lead to unauthorized access, compromising the se…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "`okta` eventType=system.push.send_factor_verify_push OR ((legacyEventType=core.user.factor.attempt_success) AND (debugContext.debugData.factor=OKTA_VERIFY_PUSH)) OR ((legacyEventType=core.user.factor.attempt_fail) AND (debugContext.debugData.factor=OKTA_VERIFY_PUSH))\n      | stats count(eval(legacyEventType=\"core.user.factor.attempt_success\"))  as successes count(eval(legacyEventType=\"core.user.factor.attempt_fail\")) as failures count(eval(eventType=\"system.push.send_factor_verify_push\")) as pushes\n        BY user,_time\n      | stats latest(_time) as lasttime earliest(_time) as firsttime sum(successes) as successes sum(failures) as failures sum(pushes) as pushes\n        BY user\n      | eval seconds=lasttime-firsttime\n      | eval lasttime=strftime(lasttime, \"%c\")\n      | search (pushes>1)\n      | eval totalattempts=successes+failures\n      | eval finding=\"Normal authentication pattern\"\n      | eval finding=if(failures==pushes AND pushes>1,\"Authentication attempts not successful because multiple pushes denied\",finding)\n      | eval finding=if(totalattempts==0,\"Multiple pushes sent and ignored\",finding)\n      | eval finding=if(successes>0 AND pushes>3,\"Probably should investigate. Multiple pushes sent, eventual successful authentication!\",finding)\n      | `okta_mfa_exhaustion_hunt_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present. Tune Okta and tune the analytic to ensure proper fidelity. Modify risk score as needed. Drop to anomaly until tuning is complete.",
              "refs": "https://developer.okta.com/docs/reference/api/event-types/?q=user.acount.lock, https://splunkbase.splunk.com/app/6553",
              "mitre": [
                "T1110"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta MFA Exhaustion Hunt\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta MFA Exhaustion Hunt\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present. Tune Okta and tune the analytic to ensure proper fidelity. Modify risk score as needed. Drop to anomaly until tuning is complete.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Okta MFA Exhaustion Hunt' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.44",
              "n": "Okta Mismatch Between Source and Response for Verify Push Request",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies discrepancies between the source and response events for Okta Verify Push requests, indicating potential suspicious behavior. It leverages Okta System Log events, specifically `system.push.send_factor_verify_push` and `user.authentication.auth_via_mfa` with the factor \"OKTA_VERIFY_PUSH.\" The detection groups events by SessionID, calculates the ratio of successful sign-ins to push requests, and checks for session roaming and new device/IP usage. This activity is …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on organization size and configuration of Okta. Monitor, tune and filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1621, https://splunkbase.splunk.com/app/6553",
              "mitre": [
                "T1621"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Mismatch Between Source and Response for Verify Push Request\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Mismatch Between Source and Response for Verify Push Request\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on organization size and configuration of Okta. Monitor, tune and filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.45",
              "n": "Okta Multi-Factor Authentication Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an attempt to disable multi-factor authentication (MFA) for an Okta user. It leverages OktaIM2 logs to detect when the 'user.mfa.factor.deactivate' command is executed. This activity is significant because disabling MFA can allow an adversary to maintain persistence within the environment using a compromised valid account. If confirmed malicious, this action could enable attackers to bypass additional security layers, potentially leading to unauthorized access t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use case may require for users to disable MFA. Filter lightly and monitor for any unusual activity.",
              "refs": "https://attack.mitre.org/techniques/T1556/, https://splunkbase.splunk.com/app/6553",
              "mitre": [
                "T1556.006"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Multi-Factor Authentication Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Multi-Factor Authentication Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use case may require for users to disable MFA. Filter lightly and monitor for any unusual activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.46",
              "n": "Okta Multiple Accounts Locked Out",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects multiple Okta accounts being locked out within a short period. It uses the user.account.lock event from Okta logs, aggregated over a 5-minute window, to identify this behavior. This activity is significant as it may indicate a brute force or password spraying attack, where an adversary attempts to guess passwords, leading to account lockouts. If confirmed malicious, this could result in potential account takeovers or unauthorized access to sensitive Okta accounts, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Multiple account lockouts may be also triggered by an application malfunction. Filter as needed, and monitor for any unusual activity.",
              "refs": "https://attack.mitre.org/techniques/T1110/, https://splunkbase.splunk.com/app/6553",
              "mitre": [
                "T1110"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Multiple Accounts Locked Out\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Multiple Accounts Locked Out\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Multiple account lockouts may be also triggered by an application malfunction. Filter as needed, and monitor for any unusual activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.47",
              "n": "Okta Multiple Failed MFA Requests For User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies multiple failed multi-factor authentication (MFA) requests for a single user within an Okta tenant. It triggers when more than 10 MFA attempts fail within 5 minutes, using Okta event logs to detect this pattern. This activity is significant as it may indicate an adversary attempting to bypass MFA by bombarding the user with repeated authentication requests, a technique used by threat actors like Lapsus and APT29. If confirmed malicious, this could lead to unauth…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Multiple Failed MFA requests may also be a sign of authentication or application issues. Filter as needed and monitor for any unusual activity.",
              "refs": "https://attack.mitre.org/techniques/T1621/",
              "mitre": [
                "T1621"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Multiple Failed MFA Requests For User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Multiple Failed MFA Requests For User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Multiple Failed MFA requests may also be a sign of authentication or application issues. Filter as needed and monitor for any unusual activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.48",
              "n": "Okta Multiple Failed Requests to Access Applications",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects multiple failed attempts to access applications in Okta, potentially indicating the reuse of a stolen web session cookie. It leverages Okta logs to evaluate policy and SSO events, aggregating data by user, session, and IP. The detection triggers when more than half of the app sign-on attempts are unsuccessful across multiple applications. This activity is significant as it may indicate an attempt to bypass authentication mechanisms. If confirmed malicious, it could…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "`okta` target{}.type=AppInstance (eventType=policy.evaluate_sign_on outcome.result=CHALLENGE) OR (eventType=user.authentication.sso outcome.result=SUCCESS) | eval targets=mvzip('target{}.type', 'target{}.displayName', \\\": \\\") | eval targets=mvfilter(targets LIKE \\\"AppInstance%\\\") | stats count min(_time) as _time values(outcome.result) as outcome.result dc(eval(if(eventType=\\\"policy.evaluate_sign_on\\\",targets,NULL))) as total_challenges sum(eval(if(eventType=\\\"user.authentication.sso\\\",1,0))) as total_successes by authenticationContext.externalSessionId targets actor.alternateId client.ipAddress | search total_challenges > 0 | stats min(_time) as _time values(*) as * sum(total_challenges) as total_challenges sum(total_successes) as total_successes values(eval(if(\\\"outcome.result\\\"=\\\"SUCCESS\\\",targets,NULL))) as success_apps values(eval(if(\\\":outcome.result\\\"!=\\\"SUCCESS\\\",targets,NULL))) as no_success_apps by authenticationContext.externalSessionId actor.alternateId client.ipAddress | fillnull | eval ratio=round(total_successes/total_challenges,2), severity=\\\"HIGH\\\", mitre_technique_id=\\\"T1538\\\", description=\\\"actor.alternateId\\\". \\\" from \\\" . \\\"client.ipAddress\\\" . \\\" seen opening \\\" . total_challenges . \\\" chiclets/apps with \\\" . total_successes . \\\" challenges successfully passed\\\" | fields - count, targets | search ratio < 0.5 total_challenges > 2 | `okta_multiple_failed_requests_to_access_applications_filter`",
              "m": "This analytic is specific to Okta and requires Okta:im2 logs to be ingested.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on organization size and configuration of Okta.",
              "refs": "https://attack.mitre.org/techniques/T1538, https://attack.mitre.org/techniques/T1550/004",
              "mitre": [
                "T1550.004",
                "T1538"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Multiple Failed Requests to Access Applications\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1550.004, T1538. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Multiple Failed Requests to Access Applications\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on organization size and configuration of Okta.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Okta Multiple Failed Requests to Access Applications' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.49",
              "n": "Okta Multiple Users Failing To Authenticate From Ip",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where more than 10 unique user accounts have failed to authenticate from a single IP address within a 5-minute window in an Okta tenant. This detection uses OktaIm2 logs ingested via the Splunk Add-on for Okta Identity Cloud. Such activity is significant as it may indicate brute-force attacks or password spraying attempts. If confirmed malicious, this behavior suggests an external entity is attempting to compromise multiple user accounts, potentially l…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A source Ip failing to authenticate with multiple users in a short period of time is not common legitimate behavior.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://splunkbase.splunk.com/app/6553",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Multiple Users Failing To Authenticate From Ip\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Multiple Users Failing To Authenticate From Ip\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A source Ip failing to authenticate with multiple users in a short period of time is not common legitimate behavior.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.50",
              "n": "Okta New API Token Created",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a new API token within an Okta tenant. It uses OktaIm2 logs ingested via the Splunk Add-on for Okta Identity Cloud to identify events where the `system.api_token.create` command is executed. This activity is significant because creating a new API token can indicate potential account takeover attempts or unauthorized access, allowing an adversary to maintain persistence. If confirmed malicious, this could enable attackers to execute API calls, access…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present. Tune Okta and tune the analytic to ensure proper fidelity. Modify risk score as needed.",
              "refs": "https://developer.okta.com/docs/reference/api/event-types/?q=security.threat.detected, https://splunkbase.splunk.com/app/6553",
              "mitre": [
                "T1078.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta New API Token Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta New API Token Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present. Tune Okta and tune the analytic to ensure proper fidelity. Modify risk score as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.51",
              "n": "Okta New Device Enrolled on Account",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when a new device is enrolled on an Okta account. It uses OktaIm2 logs ingested via the Splunk Add-on for Okta Identity Cloud to detect the creation of new device enrollments. This activity is significant as it may indicate a legitimate user setting up a new device or an adversary adding a device to maintain unauthorized access. If confirmed malicious, this could lead to potential account takeover, unauthorized access, and persistent control over the compromised…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that the user has legitimately added a new device to their account. Please verify this activity.",
              "refs": "https://attack.mitre.org/techniques/T1098/005/, https://developer.okta.com/docs/reference/api/event-types/?q=device.enrollment.create",
              "mitre": [
                "T1098.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta New Device Enrolled on Account\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta New Device Enrolled on Account\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that the user has legitimately added a new device to their account. Please verify this activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.52",
              "n": "Okta Risk Threshold Exceeded",
              "c": "high",
              "f": "intermediate",
              "v": "The following correlation identifies when a user exceeds a risk threshold based on multiple suspicious Okta activities. It leverages the Risk Framework from Enterprise Security, aggregating risk events from \"Suspicious Okta Activity,\" \"Okta Account Takeover,\" and \"Okta MFA Exhaustion\" analytic stories. This detection is significant as it highlights potentially compromised user accounts exhibiting multiple tactics, techniques, and procedures (TTPs) within a 24-hour period. If confirmed malicious,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be limited to the number of events generated by the analytics tied to the stories. Analytics will need to be tested and tuned, and the risk score reduced as needed based on the organization.",
              "refs": "https://developer.okta.com/docs/reference/api/event-types",
              "mitre": [
                "T1078",
                "T1110"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Risk Threshold Exceeded\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078, T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Risk Threshold Exceeded\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be limited to the number of events generated by the analytics tied to the stories. Analytics will need to be tested and tuned, and the risk score reduced as needed based on the organization.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.53",
              "n": "Okta Successful Single Factor Authentication",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies successful single-factor authentication events against the Okta Dashboard for accounts without Multi-Factor Authentication (MFA) enabled. It detects this activity by analyzing Okta logs for successful authentication events where \"Okta Verify\" is not used. This behavior is significant as it may indicate a misconfiguration, policy violation, or potential account takeover. If confirmed malicious, an attacker could gain unauthorized access to the account, potentiall…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although not recommended, certain users may be exempt from multi-factor authentication. Adjust the filter as necessary.",
              "refs": "https://attack.mitre.org/techniques/T1078/004/",
              "mitre": [
                "T1078.004",
                "T1586.003",
                "T1621"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Successful Single Factor Authentication\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004, T1586.003, T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Successful Single Factor Authentication\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although not recommended, certain users may be exempt from multi-factor authentication. Adjust the filter as necessary.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
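              "_spl_sketch": "A hedged, illustrative sketch of the detection idea (the authoritative logic ships inside the ESCU correlation search). It lists successful Okta session starts together with the credential factors observed per user, so single-factor logins stand out for review. Index, sourcetype, and field names (OktaIM2:log, outcome.result, actor.alternateId, authenticationContext.credentialType) are assumptions based on the Splunk Add-on for Okta Identity Cloud; rename them to match your deployment.\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"user.session.start\" outcome.result=SUCCESS\n| stats count min(_time) as firstTime max(_time) as lastTime values(authenticationContext.credentialType) as factors_used values(client.ipAddress) as src by actor.alternateId\n```",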
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.54",
              "n": "Okta Suspicious Use of a Session Cookie",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious use of a session cookie by detecting multiple client values (IP, User Agent, etc.) changing for the same Device Token associated with a specific user. It leverages policy evaluation events from successful authentication logs in Okta. This activity is significant as it may indicate an adversary attempting to reuse a stolen web session cookie, potentially bypassing authentication mechanisms. If confirmed malicious, this could allow unauthorized access t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur, depending on the organization's size and the configuration of Okta.",
              "refs": "https://attack.mitre.org/techniques/T1539/",
              "mitre": [
                "T1539"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Suspicious Use of a Session Cookie\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1539. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Suspicious Use of a Session Cookie\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur, depending on the organization's size and the configuration of Okta.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
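              "_spl_sketch": "A hedged, illustrative sketch of the detection idea (the authoritative logic ships inside the ESCU correlation search). It flags device tokens whose successful sign-on policy evaluations arrive from more than one IP and more than one user agent, the core signal of a reused session cookie. The event type, the device-token field (debugContext.debugData.dtHash), and the thresholds are assumptions; verify them against your Okta data.\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"policy.evaluate_sign_on\" outcome.result=ALLOW\n| stats dc(client.ipAddress) as dc_ip dc(client.userAgent.rawUserAgent) as dc_user_agent values(client.ipAddress) as ips by actor.alternateId, debugContext.debugData.dtHash\n| where dc_ip > 1 AND dc_user_agent > 1\n```",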
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.55",
              "n": "Okta ThreatInsight Threat Detected",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies threats detected by Okta ThreatInsight, such as password spraying, login failures, and high counts of unknown user login attempts. It leverages Okta Identity Management logs, specifically focusing on security.threat.detected events. This activity is significant for a SOC as it highlights potential unauthorized access attempts and credential-based attacks. If confirmed malicious, these activities could lead to unauthorized access, data breaches, and further explo…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$app$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur. It is recommended to fine-tune Okta settings and the analytic to ensure high fidelity. Adjust the risk score as necessary.",
              "refs": "https://developer.okta.com/docs/reference/api/event-types/?q=security.threat.detected",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta ThreatInsight Threat Detected\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta ThreatInsight Threat Detected\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur. It is recommended to fine-tune Okta settings and the analytic to ensure high fidelity. Adjust the risk score as necessary.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
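              "_spl_sketch": "A minimal sketch for previewing the raw signal behind this detection: the security.threat.detected event type cited in the reference above. Index, sourcetype, and the outcome.reason field are assumptions based on the Splunk Add-on for Okta Identity Cloud; adjust to your deployment.\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"security.threat.detected\"\n| stats count min(_time) as firstTime max(_time) as lastTime values(outcome.reason) as threat_reasons by client.ipAddress\n| sort - count\n```",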
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.56",
              "n": "Okta Unauthorized Access to Application",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies attempts by users to access Okta applications that have not been assigned to them. It leverages Okta Identity Management logs, specifically focusing on failed access attempts to unassigned applications. This activity is significant for a SOC as it may indicate potential unauthorized access attempts, which could lead to exposure of sensitive information or disruption of services. If confirmed malicious, such activity could result in data breaches, non-compliance …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There is a possibility that a user may accidentally click on the wrong application, which could trigger this event. It is advisable to verify the location from which this activity originates.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/",
              "mitre": [
                "T1087.004"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Unauthorized Access to Application\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Unauthorized Access to Application\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There is a possibility that a user may accidentally click on the wrong application, which could trigger this event. It is advisable to verify the location from which this activity originates.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
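              "_spl_sketch": "A hedged, illustrative sketch of the detection idea (the authoritative logic ships inside the ESCU correlation search). It counts attempts per user and source IP to open applications they are not assigned. The event type app.generic.unauth_app_access_attempt, the field names, and the threshold are assumptions; confirm the exact event type your Okta tenant emits.\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"app.generic.unauth_app_access_attempt\"\n| stats count values(target.displayName) as attempted_apps by actor.alternateId, client.ipAddress\n| where count > 2\n```",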
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.57",
              "n": "Okta User Logins from Multiple Cities",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where the same Okta user logs in from different cities within a 24-hour period. This detection leverages Okta Identity Management logs, analyzing login events and their geographic locations. Such behavior is significant as it may indicate a compromised account, with an attacker attempting unauthorized access from multiple locations. If confirmed malicious, this activity could lead to account takeovers and data breaches, allowing attackers to access sen…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Okta ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is uncommon for a user to log in from multiple cities simultaneously, which may indicate a false positive.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/",
              "mitre": [
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta User Logins from Multiple Cities\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta User Logins from Multiple Cities\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is uncommon for a user to log in from multiple cities simultaneously, which may indicate a false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
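              "_spl_sketch": "A hedged, illustrative sketch of the detection idea (the authoritative logic ships inside the ESCU correlation search). It counts distinct login cities per user over 24 hours using the geolocation Okta embeds in System Log events; the field path client.geographicalContext.city and the threshold are assumptions to verify against your data.\n\n```spl\nindex=okta sourcetype=\"OktaIM2:log\" eventType=\"user.session.start\" outcome.result=SUCCESS earliest=-24h\n| stats dc(client.geographicalContext.city) as city_count values(client.geographicalContext.city) as cities by actor.alternateId\n| where city_count > 1\n```",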
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.58",
              "n": "Ollama Abnormal Network Connectivity",
              "c": "high",
              "f": "intermediate",
              "v": "Detects abnormal network activity and connectivity issues in Ollama including non-localhost API access attempts and warning-level network errors such as DNS lookup failures, TCP connection issues, or host resolution problems that may indicate network-based attacks, unauthorized access attempts, or infrastructure reconnaissance activity.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Ollama Server",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\",) starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Ollama Server ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate remote access from authorized users or applications connecting from non-localhost addresses, temporary network infrastructure issues causing DNS resolution failures, firewall or network configuration changes resulting in connection timeouts, cloud-hosted Ollama instances receiving valid external API requests, or intermittent connectivity problems during network maintenance may trigger this detection during normal operations.",
              "refs": "https://github.com/rosplk/ta-ollama",
              "mitre": [
                "T1571"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ollama Abnormal Network Connectivity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Ollama Server. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1571. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ollama Abnormal Network Connectivity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate remote access from authorized users or applications connecting from non-localhost addresses, temporary network infrastructure issues causing DNS resolution failures, firewall or network configuration changes resulting in connection timeouts, cloud-hosted Ollama instances receiving valid external API requests, or intermittent connectivity problems during network maintenance may trigger this detection during normal operations.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
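              "_spl_sketch": "A minimal sketch for previewing the raw signal in the Ollama server logs, assuming the ollama:server sourcetype documented for the TA-ollama add-on. The search terms are typical Go networking failures (DNS lookup, TCP connect, timeout) and are assumptions; extend the list with whatever your Ollama version actually logs.\n\n```spl\nsourcetype=\"ollama:server\" (\"no such host\" OR \"connection refused\" OR \"i/o timeout\" OR \"lookup\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n| sort - count\n```",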
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Ollama Abnormal Network Connectivity so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.59",
              "n": "Ollama Abnormal Service Crash Availability Attack",
              "c": "high",
              "f": "intermediate",
              "v": "Detects critical service crashes, fatal errors, and abnormal process terminations in Ollama that may indicate exploitation attempts, resource exhaustion attacks, malicious input triggering unhandled exceptions, or deliberate denial of service attacks designed to disrupt AI model availability and degrade system stability.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Ollama Server",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$host$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Ingest Ollama logs via Splunk TA-ollama add-on by configuring file monitoring inputs pointed to your Ollama server log directories (sourcetype: ollama:server), or enable HTTP Event Collector (HEC) for real-time API telemetry and prompt analytics (sourcetypes: ollama:api, ollama:prompts). CIM compatibility using the Web datamodel for standardized security detections.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Normal service restarts during system updates or maintenance windows, graceful shutdowns with non-zero exit codes, intentional service stops by administrators, software upgrades requiring process termination, out-of-memory conditions on resource-constrained systems, or known bugs in specific Ollama versions that cause benign crashes may trigger this detection during routine operations.",
              "refs": "https://github.com/rosplk/ta-ollama",
              "mitre": [
                "T1489"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ollama Abnormal Service Crash Availability Attack\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Ollama Server. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ollama Abnormal Service Crash Availability Attack\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Normal service restarts during system updates or maintenance windows, graceful shutdowns with non-zero exit codes, intentional service stops by administrators, software upgrades requiring process termination, out-of-memory conditions on resource-constrained systems, or known bugs in specific Ollama versions that cause benign crashes may trigger this detection during routine operations.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
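              "_spl_sketch": "A minimal sketch for previewing crash indicators in the raw Ollama server logs, assuming the ollama:server sourcetype documented in the implementation note above. The keywords are typical Go crash markers (panics, fatal errors, segfaults) and are assumptions; tune them to the crash signatures your Ollama version emits.\n\n```spl\nsourcetype=\"ollama:server\" (\"panic:\" OR \"fatal\" OR \"SIGSEGV\" OR \"exit status\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n| sort - lastTime\n```",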
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Ollama Abnormal Service Crash Availability Attack so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.60",
              "n": "Ollama Possible Memory Exhaustion Resource Abuse",
              "c": "high",
              "f": "intermediate",
              "v": "Detects abnormal memory allocation patterns and excessive runner operations in Ollama that may indicate resource exhaustion attacks, memory abuse through malicious model loading, or attempts to degrade system performance by overwhelming GPU/CPU resources. Adversaries may deliberately load multiple large models, trigger repeated model initialization cycles, or exploit memory allocation mechanisms to exhaust available system resources, causing denial of service conditions or degrading performance …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Ollama Server",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$host$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Ingest Ollama logs via Splunk TA-ollama add-on by configuring file monitoring inputs pointed to your Ollama server log directories (sourcetype: ollama:server), or enable HTTP Event Collector (HEC) for real-time API telemetry and prompt analytics (sourcetypes: ollama:api, ollama:prompts). CIM compatibility using the Web datamodel for standardized security detections.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate high-volume production workloads processing multiple concurrent requests, users loading large language models (7B+ parameters) that naturally require substantial memory allocation, simultaneous multi-model deployments during system scaling, batch processing operations, or initial system startup sequences may generate similar memory allocation patterns during normal operations.",
              "refs": "https://github.com/rosplk/ta-ollama",
              "mitre": [
                "T1499"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ollama Possible Memory Exhaustion Resource Abuse\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Ollama Server. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1499. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ollama Possible Memory Exhaustion Resource Abuse\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate high-volume production workloads processing multiple concurrent requests, users loading large language models (7B+ parameters) that naturally require substantial memory allocation, simultaneous multi-model deployments during system scaling, batch processing operations, or initial system startup sequences may generate similar memory allocation patterns during normal operations.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Ollama Possible Memory Exhaustion Resource Abuse so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.61",
              "n": "PingID Mismatch Auth Source and Verification Response",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies discrepancies between the IP address of an authentication event and the IP address of the verification response event, focusing on differences in the originating countries. It leverages JSON logs from PingID, comparing the 'auth_Country' and 'verify_Country' fields. This activity is significant as it may indicate suspicious sign-in behavior, such as account compromise or unauthorized access attempts. If confirmed malicious, this could allow attackers to bypass a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "PingID",
              "q": "# Shared SPL: intentional — see UC-10.7.64\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Target environment must ingest JSON logging from a PingID(PingOne) enterprise environment, either via Webhook or Push Subscription.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be generated by users working out the geographic region where the organizations services or technology is hosted.",
              "refs": "https://twitter.com/jhencinski/status/1618660062352007174, https://attack.mitre.org/techniques/T1098/005/, https://attack.mitre.org/techniques/T1556/006/",
              "mitre": [
                "T1621",
                "T1556.006",
                "T1098.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PingID Mismatch Auth Source and Verification Response\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: PingID. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1621, T1556.006, T1098.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PingID Mismatch Auth Source and Verification Response\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be generated by users working out the geographic region where the organizations services or technology is hosted.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for PingID Mismatch Auth Source and Verification Response so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.62",
              "n": "PingID Multiple Failed MFA Requests For User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies multiple failed multi-factor authentication (MFA) requests for a single user within a PingID environment. It triggers when 10 or more MFA prompts fail within 10 minutes, using JSON logs from PingID. This activity is significant as it may indicate an adversary attempting to bypass MFA by bombarding the user with repeated authentication requests. If confirmed malicious, this could lead to unauthorized access, as the user might eventually accept the fraudulent requ…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "PingID",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Target environment must ingest JSON logging from a PingID(PingOne) enterprise environment, either via Webhook or Push Subscription.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be generated by normal provisioning workflows for user device registration.",
              "refs": "https://therecord.media/russian-hackers-bypass-2fa-by-annoying-victims-with-repeated-push-notifications/, https://attack.mitre.org/techniques/T1621/, https://attack.mitre.org/techniques/T1110/, https://attack.mitre.org/techniques/T1078/004/",
              "mitre": [
                "T1621",
                "T1078",
                "T1110"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PingID Multiple Failed MFA Requests For User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: PingID. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1621, T1078, T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PingID Multiple Failed MFA Requests For User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be generated by normal provisioning workflows for user device registration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for PingID Multiple Failed MFA Requests For User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.63",
              "n": "PingID New MFA Method After Credential Reset",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the provisioning of a new MFA device shortly after a password reset. It detects this activity by correlating Windows Event Log events for password changes (EventID 4723, 4724) with PingID logs indicating device pairing. This behavior is significant as it may indicate a social engineering attack where a threat actor impersonates a valid user to reset credentials and add a new MFA device. If confirmed malicious, this activity could allow an attacker to gain persis…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "PingID",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Target environment must ingest Windows Event Log and PingID(PingOne) data sources. Specifically from logs from Active Directory Domain Controllers and JSON logging from a PingID(PingOne) enterprise environment, either via Webhook or Push Subscription.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be generated by normal provisioning workflows that generate a password reset followed by a device registration.",
              "refs": "https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/defend-your-users-from-mfa-fatigue-attacks/ba-p/2365677, https://www.bleepingcomputer.com/news/security/mfa-fatigue-hackers-new-favorite-tactic-in-high-profile-breaches/, https://attack.mitre.org/techniques/T1098/005/, https://attack.mitre.org/techniques/T1556/006/",
              "mitre": [
                "T1621",
                "T1556.006",
                "T1098.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PingID New MFA Method After Credential Reset\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: PingID. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1621, T1556.006, T1098.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PingID New MFA Method After Credential Reset\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be generated by normal provisioning workflows that generate a password reset followed by a device registration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for PingID New MFA Method After Credential Reset so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.64",
              "n": "PingID New MFA Method Registered For User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the registration of a new Multi-Factor Authentication (MFA) method for a PingID (PingOne) account. It leverages JSON logs from PingID, specifically looking for successful device pairing events. This activity is significant as adversaries who gain unauthorized access to a user account may register a new MFA method to maintain persistence. If confirmed malicious, this could allow attackers to bypass existing security measures, maintain long-term access, and potential…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "PingID",
              "q": "# Shared SPL: intentional — see UC-10.7.61\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Target environment must ingest JSON logging from a PingID(PingOne) enterprise environment, either via Webhook or Push Subscription.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be generated by normal provisioning workflows for user device registration.",
              "refs": "https://twitter.com/jhencinski/status/1618660062352007174, https://attack.mitre.org/techniques/T1098/005/, https://attack.mitre.org/techniques/T1556/006/",
              "mitre": [
                "T1621",
                "T1556.006",
                "T1098.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PingID New MFA Method Registered For User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: PingID. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1621, T1556.006, T1098.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PingID New MFA Method Registered For User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be generated by normal provisioning workflows for user device registration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for PingID New MFA Method Registered For User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.65",
              "n": "Splunk AppDynamics Secure Application Alerts",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Splunk AppDynamics Secure Application Alerts. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Splunk AppDynamics Secure Application Alert",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$app_name$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Splunk AppDynamics Secure Application Alert ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No known false positives for this detection. If the alerts are noisy, consider tuning this detection by using the _filter macro in this search, and/or updating the tool this alert originates from.",
              "refs": "https://docs.appdynamics.com/appd/24.x/latest/en/application-security-monitoring/integrate-cisco-secure-application-with-splunk",
              "mitre": [],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Splunk AppDynamics Secure Application Alerts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Splunk AppDynamics Secure Application Alert. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Splunk AppDynamics Secure Application Alerts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No known false positives for this detection. If the alerts are noisy, consider tuning this detection by using the _filter macro in this search, and/or updating the tool this alert originates from.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Splunk AppDynamics Secure Application Alerts so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.66",
              "n": "Zoom High Video Latency",
              "c": "high",
              "f": "intermediate",
              "v": "Detects particularly high latency from Zoom logs. Latency observed from threat actors performing Remote Employment Fraud (REF) is typically well above what’s normal for the majority of employees. Note: The displayed SPL is the generic ESCU Risk.All_Risk drilldown keyed on `$email$`, the same template as UC-10.4.35; it is not Zoom-specific. Consider replacing with a Zoom-appropriate risk object in a content uplift.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$email$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The analytic leverages Zoom logs to be ingested using Splunk Connect for Zoom (https://splunkbase.splunk.com/app/4961)",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While latency could simply indicate a slow network connection, when combined with other indicators, it can help build a more complete picture. Tune the threshold as needed for your environment baseline.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zoom High Video Latency\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zoom High Video Latency\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While latency could simply indicate a slow network connection, when combined with other indicators, it can help build a more complete picture. Tune the threshold as needed for your environment baseline.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
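              "_sketchSpl": "Hedged ad-hoc example (not ESCU content): a minimal sketch for surfacing per-user Zoom latency outliers before enabling the RBA detection. It reuses the `zoom_index` macro from the other Zoom searches in this catalog; the `video_latency` and `user_email` field names are assumptions, so verify them against your Zoom add-on's field extractions.\n\n```spl\n`zoom_index` video_latency=*\n| stats avg(video_latency) as user_avg_latency by user_email\n| eventstats avg(user_avg_latency) as org_avg stdev(user_avg_latency) as org_stdev\n| where user_avg_latency > org_avg + (3 * org_stdev)\n| sort - user_avg_latency\n```\n\nUsers whose average latency sits three standard deviations above the organization-wide mean are candidates for the closer review described in the workflow above.",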
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Zoom High Video Latency so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.67",
              "n": "Zoom Rare Audio Devices",
              "c": "high",
              "f": "intermediate",
              "v": "Detects rare audio devices from Zoom logs. Actors performing Remote Employment Fraud (REF) typically use unusual device information compared to a majority of employees. Detecting this activity requires careful analysis, regular review, and a thorough understanding of the audio and video devices commonly used within your environment.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`zoom_index` speaker=* NOT (camera=*iPhone* OR camera=\"*FaceTime*\" OR speaker=\"*AirPods*\" OR camera=\"*MacBook*\" OR microphone=\"*MacBook Pro Microphone*\")\n      | rare speaker limit=50\n      | `zoom_rare_audio_devices_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This is a hunting query meant to identify rare audio devices.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1123"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zoom Rare Audio Devices\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1123. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zoom Rare Audio Devices\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This is a hunting query meant to identify rare audio devices.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Zoom Rare Audio Devices' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.68",
              "n": "Zoom Rare Input Devices",
              "c": "high",
              "f": "intermediate",
              "v": "Detects rare input devices from Zoom logs. Actors performing Remote Employment Fraud (REF) typically use unusual device information compared to a majority of employees. Detecting this activity requires careful analysis, regular review, and a thorough understanding of the audio and video devices commonly used within your environment.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`zoom_index` microphone=* NOT (camera=*iPhone* OR camera=\"*FaceTime*\" OR speaker=\"*AirPods*\" OR camera=\"*MacBook*\" OR microphone=\"*MacBook Pro Microphone*\")\n      | rare microphone limit=50\n      | `zoom_rare_input_devices_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This is a hunting query meant to identify rare microphone devices.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1123"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zoom Rare Input Devices\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1123. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zoom Rare Input Devices\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This is a hunting query meant to identify rare microphone devices.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Zoom Rare Input Devices' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.69",
              "n": "Zoom Rare Video Devices",
              "c": "high",
              "f": "intermediate",
              "v": "Detects rare video devices from Zoom logs. Actors performing Remote Employment Fraud (REF) typically use unusual device information compared to a majority of employees. Detecting this activity requires careful analysis, regular review, and a thorough understanding of the audio and video devices commonly used within your environment.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`zoom_index` camera=* NOT (camera=*iPhone* OR camera=\"*FaceTime*\" OR speaker=\"*AirPods*\" OR camera=\"*MacBook*\" OR microphone=\"*MacBook Pro Microphone*\")\n      | rare camera limit=50\n      | `zoom_rare_video_devices_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This is a hunting query meant to identify rare video devices.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1123"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Zoom Rare Video Devices\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1123. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Zoom Rare Video Devices\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This is a hunting query meant to identify rare video devices.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Zoom Rare Video Devices' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.70",
              "n": "Abnormally High Number Of Cloud Infrastructure API Calls",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a spike in the number of API calls made to your cloud infrastructure by a user. It leverages cloud infrastructure logs and compares the current API call volume against a baseline probability density function to identify anomalies. This activity is significant because an unusual increase in API calls can indicate potential misuse or compromise of cloud resources. If confirmed malicious, this could lead to unauthorized access, data exfiltration, or disruption of clou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Abnormally High Number Of Cloud Infrastructure API Calls\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Abnormally High Number Of Cloud Infrastructure API Calls\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
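              "_sketchSpl": "Hedged alternative (not the ESCU logic): where the MLTK density model is unavailable, a plain standard-deviation baseline over raw CloudTrail approximates the idea. `index=aws`, `sourcetype=aws:cloudtrail`, and the TA-provided `user` field are assumptions; adjust to your ingestion.\n\n```spl\nindex=aws sourcetype=aws:cloudtrail\n| bin _time span=1h\n| stats count as api_calls by user, _time\n| eventstats avg(api_calls) as avg_calls stdev(api_calls) as stdev_calls by user\n| where api_calls > avg_calls + (3 * stdev_calls)\n| table _time, user, api_calls, avg_calls, stdev_calls\n```",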
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Abnormally High Number Of Cloud Infrastructure API Calls so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.71",
              "n": "Abnormally High Number Of Cloud Instances Destroyed",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an abnormally high number of cloud instances being destroyed within a 4-hour period. It leverages cloud infrastructure logs and applies a probability density model to detect outliers. This activity is significant for a SOC because a sudden spike in destroyed instances could indicate malicious activity, such as an insider threat or a compromised account attempting to disrupt services. If confirmed malicious, this could lead to significant operational disruptions,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| tstats count as instances_destroyed values(All_Changes.object_id) as object_id FROM datamodel=Change\n      WHERE All_Changes.action=deleted\n        AND\n        All_Changes.status=success\n        AND\n        All_Changes.object_category=instance\n      BY All_Changes.user _time span=1h\n    | `drop_dm_object_name(\"All_Changes\")`\n    | eval HourOfDay=strftime(_time, \"%H\")\n    | eval HourOfDay=floor(HourOfDay/4)*4\n    | eval DayOfWeek=strftime(_time, \"%w\")\n    | eval isWeekend=if(DayOfWeek >= 1 AND DayOfWeek <= 5, 0, 1)\n    | join max=1 HourOfDay isWeekend [summary cloud_excessive_instances_destroyed_v1]\n    | where cardinality >=16\n    | apply cloud_excessive_instances_destroyed_v1 threshold=0.005\n    | rename \"IsOutlier(instances_destroyed)\" as isOutlier\n    | where isOutlier=1\n    | eval expected_upper_threshold = mvindex(split(mvindex(BoundaryRanges, -1), \":\"), 0)\n    | eval distance_from_threshold = instances_destroyed - expected_upper_threshold\n    | table _time, user, instances_destroyed, expected_upper_threshold, distance_from_threshold, object_id\n    | `abnormally_high_number_of_cloud_instances_destroyed_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Many service accounts configured within a cloud infrastructure are known to exhibit this behavior. Please adjust the threshold values and filter out service accounts from the output. Always verify if this search alerted on a human user.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Abnormally High Number Of Cloud Instances Destroyed\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Abnormally High Number Of Cloud Instances Destroyed\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Many service accounts configured within a cloud infrastructure are known to exhibit this behavior. Please adjust the threshold values and filter out service accounts from the output. Always verify if this search alerted on a human user.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
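              "_sketchSpl": "Hedged tuning example (not ESCU content): the tuning guidance above recommends filtering out service accounts. A lookup-driven suppression, ideally placed inside the detection's `abnormally_high_number_of_cloud_instances_destroyed_filter` macro, keeps the exclusions out of the SPL itself; the `cloud_service_accounts.csv` lookup is hypothetical.\n\n```spl\n| tstats count as instances_destroyed from datamodel=Change where All_Changes.action=deleted All_Changes.status=success All_Changes.object_category=instance by All_Changes.user _time span=1h\n| `drop_dm_object_name(\"All_Changes\")`\n| search NOT [| inputlookup cloud_service_accounts.csv | fields user ]\n```",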
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the behavior in “Abnormally High Number Of Cloud Instances Destroyed” using the same log fields the detection already uses, so the team can act before impact spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.72",
              "n": "Abnormally High Number Of Cloud Instances Launched",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an abnormally high number of cloud instances launched within a 4-hour period. It leverages cloud infrastructure logs and applies a probability density model to identify outliers based on historical data. This activity is significant for a SOC because a sudden spike in instance creation could indicate unauthorized access or misuse of cloud resources. If confirmed malicious, this behavior could lead to resource exhaustion, increased costs, or provide attackers with a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| tstats count as instances_launched values(All_Changes.object_id) as object_id FROM datamodel=Change\n      WHERE (\n            All_Changes.action=created\n        )\n        AND All_Changes.status=success AND All_Changes.object_category=instance\n      BY All_Changes.user _time span=1h\n    | `drop_dm_object_name(\"All_Changes\")`\n    | eval HourOfDay=strftime(_time, \"%H\")\n    | eval HourOfDay=floor(HourOfDay/4)*4\n    | eval DayOfWeek=strftime(_time, \"%w\")\n    | eval isWeekend=if(DayOfWeek >= 1 AND DayOfWeek <= 5, 0, 1)\n    | join max=1 HourOfDay isWeekend [summary cloud_excessive_instances_created_v1]\n    | where cardinality >=16\n    | apply cloud_excessive_instances_created_v1 threshold=0.005\n    | rename \"IsOutlier(instances_launched)\" as isOutlier\n    | where isOutlier=1\n    | eval expected_upper_threshold = mvindex(split(mvindex(BoundaryRanges, -1), \":\"), 0)\n    | eval distance_from_threshold = instances_launched - expected_upper_threshold\n    | table _time, user, instances_launched, expected_upper_threshold, distance_from_threshold, object_id\n    | `abnormally_high_number_of_cloud_instances_launched_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Many service accounts configured within an AWS infrastructure are known to exhibit this behavior. Please adjust the threshold values and filter out service accounts from the output. Always verify if this search alerted on a human user.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Abnormally High Number Of Cloud Instances Launched\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Abnormally High Number Of Cloud Instances Launched\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Many service accounts configured within an AWS infrastructure are known to exhibit this behavior. Please adjust the threshold values and filter out service accounts from the output. Always verify if this search alerted on a human user.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
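              "_sketchSpl": "Hedged validation step (assumption: the Machine Learning Toolkit is installed on the ES search head): this detection joins against and applies the `cloud_excessive_instances_created_v1` model named in the SPL above, which is built by the companion ESCU baseline search rather than shipped ready to use. Before enabling, confirm the model exists:\n\n```spl\n| summary cloud_excessive_instances_created_v1\n```\n\nIf this errors, run the corresponding ESCU baseline search for this analytic story first, then re-check.",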
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the behavior in “Abnormally High Number Of Cloud Instances Launched” using the same log fields the detection already uses, so the team can act before impact spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.73",
              "n": "ASL AWS Create Access Key",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of AWS IAM access keys by a user for another user, which can indicate privilege escalation. It leverages AWS CloudTrail logs to detect instances where the user creating the access key is different from the user for whom the key is created. This activity is significant because unauthorized access key creation can allow attackers to establish persistence or exfiltrate data via AWS APIs. If confirmed malicious, this could lead to unauthorized access to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "`amazon_security_lake` api.operation=CreateAccessKey\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY actor.user.uid api.operation api.service.name\n           http_request.user_agent src_endpoint.ip actor.user.account.uid\n           cloud.provider cloud.region\n      | rename actor.user.uid as user api.operation as action api.service.name as dest http_request.user_agent as user_agent src_endpoint.ip as src actor.user.account.uid as vendor_account cloud.provider as vendor_product cloud.region as vendor_region\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `asl_aws_create_access_key_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has legitimately created keys for another user.",
              "refs": "https://bishopfox.com/blog/privilege-escalation-in-aws, https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Create Access Key\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Create Access Key\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has legitimately created keys for another user.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'ASL AWS Create Access Key' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.74",
              "n": "ASL AWS Create Policy Version to allow all resources",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of a new AWS IAM policy version that allows access to all resources. It detects this activity by analyzing AWS CloudTrail logs for the CreatePolicyVersion event with a policy document that grants broad permissions. This behavior is significant because it violates the principle of least privilege, potentially exposing the environment to misuse or abuse. If confirmed malicious, an attacker could gain extensive access to AWS resources, leading to unaut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has legitimately created a policy to allow a user to access all resources. That said, AWS strongly advises against granting full control to all AWS resources and you must verify this activity.",
              "refs": "https://bishopfox.com/blog/privilege-escalation-in-aws, https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Create Policy Version to allow all resources\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Create Policy Version to allow all resources\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has legitimately created a policy to allow a user to access all resources. That said, AWS strongly advises against granting full control to all AWS resources and you must verify this activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
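              "_sketchSpl": "Hedged companion sketch for vanilla (non-ASL) CloudTrail (not the ESCU logic): inspect the submitted policy document for an allow-all statement. `where` is used instead of `search` so that \"*\" is compared literally rather than as a wildcard. Field paths follow the Splunk Add-on for AWS and are assumptions; some events URL-encode `policyDocument`, in which case decode it with `eval` and `urldecode()` first.\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=CreatePolicyVersion errorCode=success\n| spath input=requestParameters.policyDocument output=statements path=Statement{}\n| mvexpand statements\n| spath input=statements\n| where Effect=\"Allow\" AND Resource=\"*\"\n| table _time, user, requestParameters.policyArn, Effect, Action, Resource\n```",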
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
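For quick triage of the behavior behind "ASL AWS Create Policy Version to allow all resources", an ad-hoc raw-events search can complement the packaged ESCU correlation search, which remains the authoritative detection logic. This is a minimal hunting sketch, assuming the same `amazon_security_lake` macro and OCSF field layout (`api.operation`, `actor.user.uid`, `src_endpoint.ip`, `http_request.user_agent`) as the PutBucketLifecycle query later in this subcategory; the `policyDocument` payload path is an assumption about the request data.

```spl
`amazon_security_lake` api.operation=CreatePolicyVersion
| spath input=api.request.data path=policyDocument output=policy_doc
| regex policy_doc="\"Resource\":\s*\"\*\""
| stats count min(_time) as firstTime max(_time) as lastTime
    by actor.user.uid src_endpoint.ip http_request.user_agent
| rename actor.user.uid as user, src_endpoint.ip as src
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```

Compare the returned principals against your approved IAM administrators before treating any hit as suspicious.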
            {
              "i": "10.7.75",
              "n": "ASL AWS Credential Access RDS Password reset",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the resetting of the master user password for an Amazon RDS DB instance. It leverages AWS CloudTrail logs from Amazon Security Lake to identify events where the `ModifyDBInstance` API call includes a new `masterUserPassword` parameter. This activity is significant because unauthorized password resets can grant attackers access to sensitive data stored in production databases, such as credit card information, PII, and healthcare data. If confirmed malicious, this co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users may genuinely reset the RDS password.",
              "refs": "https://aws.amazon.com/premiumsupport/knowledge-center/reset-master-user-password-rds",
              "mitre": [
                "T1110",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Credential Access RDS Password reset\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Credential Access RDS Password reset\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users may genuinely reset the RDS password.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
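To spot-check the `ModifyDBInstance` password-reset behavior this detection covers without waiting for a Risk Notable, a raw-events sketch against ASL CloudTrail can help. It assumes the same `amazon_security_lake` macro and OCSF fields as the PutBucketLifecycle query in this subcategory; the `masterUserPassword` payload path is an assumption about the request data (CloudTrail typically redacts the value, so only key presence is tested).

```spl
`amazon_security_lake` api.operation=ModifyDBInstance
| spath input=api.request.data path=masterUserPassword output=master_password
| where isnotnull(master_password)
| stats count min(_time) as firstTime max(_time) as lastTime
    by actor.user.uid src_endpoint.ip cloud.region
| rename actor.user.uid as user, src_endpoint.ip as src
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```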
            {
              "i": "10.7.76",
              "n": "ASL AWS Defense Evasion Delete CloudWatch Log Group",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of CloudWatch log groups in AWS, identified through `DeleteLogGroup` events in CloudTrail logs. This method leverages Amazon Security Lake logs parsed in the OCSF format. The activity is significant because attackers may delete log groups to evade detection and disrupt logging capabilities, hindering incident response efforts. If confirmed malicious, this action could allow attackers to cover their tracks, making it difficult to trace their activities …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has deleted CloudWatch logging. Please investigate this activity.",
              "refs": "https://attack.mitre.org/techniques/T1562/008/",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Defense Evasion Delete CloudWatch Log Group\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Defense Evasion Delete CloudWatch Log Group\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has deleted CloudWatch logging. Please investigate this activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
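For analyst-driven review of `DeleteLogGroup` activity alongside this RBA detection, a raw-events sketch can show exactly which log groups were removed and by whom. It reuses the `amazon_security_lake` macro and OCSF fields from the PutBucketLifecycle query in this subcategory; the `logGroupName` payload path is an assumption about the request data.

```spl
`amazon_security_lake` api.operation=DeleteLogGroup
| spath input=api.request.data path=logGroupName output=log_group
| stats count min(_time) as firstTime max(_time) as lastTime
    values(log_group) as deleted_log_groups
    by actor.user.uid src_endpoint.ip cloud.region
| rename actor.user.uid as user, src_endpoint.ip as src
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```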
            {
              "i": "10.7.77",
              "n": "ASL AWS Defense Evasion PutBucketLifecycle",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects `PutBucketLifecycle` events in AWS CloudTrail logs where a user sets a lifecycle rule for an S3 bucket with an expiration period of fewer than three days. This detection leverages CloudTrail logs to identify suspicious lifecycle configurations. This activity is significant because attackers may use it to delete CloudTrail logs quickly, thereby evading detection and impairing forensic investigations. If confirmed malicious, this could allow attackers to cover their …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "`amazon_security_lake` api.operation=PutBucketLifecycle\n      | spath input=api.request.data path=LifecycleConfiguration.Rule.NoncurrentVersionExpiration.NoncurrentDays output=NoncurrentDays\n      | where NoncurrentDays < 3\n      | spath input=api.request.data\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY actor.user.uid api.operation api.service.name\n           http_request.user_agent src_endpoint.ip actor.user.account.uid\n           cloud.provider cloud.region NoncurrentDays\n           bucketName\n      | rename actor.user.uid as user api.operation as action api.service.name as dest http_request.user_agent as user_agent src_endpoint.ip as src actor.user.account.uid as vendor_account cloud.provider as vendor_product cloud.region as vendor_region\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `asl_aws_defense_evasion_putbucketlifecycle_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that it is a legitimate admin activity. Please consider filtering out these noisy events using userAgent, user_arn field names.",
              "refs": "https://stratus-red-team.cloud/attack-techniques/AWS/aws.defense-evasion.cloudtrail-lifecycle-rule/",
              "mitre": [
                "T1485.001",
                "T1562.008"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Defense Evasion PutBucketLifecycle\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485.001, T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Defense Evasion PutBucketLifecycle\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that it is a legitimate admin activity. Please consider filtering out these noisy events using userAgent, user_arn field names.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'ASL AWS Defense Evasion PutBucketLifecycle' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
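Before tuning the `asl_aws_defense_evasion_putbucketlifecycle_filter` macro, it can help to baseline who sets short lifecycle expirations and with what tooling, in line with the userAgent/user_arn filtering advice above. A minimal baselining sketch, reusing the same fields as the detection query:

```spl
`amazon_security_lake` api.operation=PutBucketLifecycle
| spath input=api.request.data path=LifecycleConfiguration.Rule.NoncurrentVersionExpiration.NoncurrentDays output=NoncurrentDays
| stats count min(_time) as firstTime max(_time) as lastTime
    by NoncurrentDays actor.user.uid http_request.user_agent
| sort - count
```

Recurring rows with high counts and a known automation user agent are candidates for the filter macro; one-off short expirations from interactive sessions deserve a closer look.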
            {
              "i": "10.7.78",
              "n": "ASL AWS Defense Evasion Stop Logging Cloudtrail",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects `StopLogging` events within AWS CloudTrail logs, a critical action that adversaries may use to evade detection. By halting the logging of their malicious activities, attackers aim to operate undetected within a compromised AWS environment. This detection is achieved by monitoring for specific CloudTrail log entries that indicate the cessation of logging activities. Identifying such behavior is crucial for a Security Operations Center (SOC), as it signals an attempt…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has stopped cloudtrail logging. Please investigate this activity.",
              "refs": "https://attack.mitre.org/techniques/T1562/008/",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Defense Evasion Stop Logging Cloudtrail\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Defense Evasion Stop Logging Cloudtrail\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has stopped cloudtrail logging. Please investigate this activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
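To confirm a `StopLogging` event during investigation of this detection, a raw-events sketch over ASL CloudTrail shows the acting principal and affected region directly. It assumes the same `amazon_security_lake` macro and OCSF field layout as the PutBucketLifecycle query in this subcategory.

```spl
`amazon_security_lake` api.operation=StopLogging
| stats count min(_time) as firstTime max(_time) as lastTime
    values(api.request.data) as request_data
    by actor.user.uid src_endpoint.ip cloud.region
| rename actor.user.uid as user, src_endpoint.ip as src
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```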
            {
              "i": "10.7.79",
              "n": "ASL AWS Defense Evasion Update Cloudtrail",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects `UpdateTrail` events within AWS CloudTrail logs, aiming to identify attempts by attackers to evade detection by altering logging configurations. By updating CloudTrail settings with incorrect parameters, such as changing multi-regional logging to a single region, attackers can impair the logging of their activities across other regions. This behavior is crucial for Security Operations Centers (SOCs) to identify, as it indicates an adversary's intent to operate unde…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has updated cloudtrail logging. Please investigate this activity.",
              "refs": "https://attack.mitre.org/techniques/T1562/008/",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Defense Evasion Update Cloudtrail\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Defense Evasion Update Cloudtrail\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has updated cloudtrail logging. Please investigate this activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
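To investigate `UpdateTrail` events flagged by this detection, a raw-events sketch can surface trails being narrowed from multi-region to single-region logging, the scenario described above. It reuses the `amazon_security_lake` macro and OCSF fields from the PutBucketLifecycle query in this subcategory; the `isMultiRegionTrail` payload path is an assumption about the request data.

```spl
`amazon_security_lake` api.operation=UpdateTrail
| spath input=api.request.data path=isMultiRegionTrail output=is_multi_region
| search is_multi_region="false"
| stats count min(_time) as firstTime max(_time) as lastTime
    by actor.user.uid src_endpoint.ip cloud.region
| rename actor.user.uid as user, src_endpoint.ip as src
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```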
            {
              "i": "10.7.80",
              "n": "ASL AWS Detect Users creating keys with encrypt policy without MFA",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of AWS KMS keys with an encryption policy accessible to everyone, including external entities. It leverages AWS CloudTrail logs from Amazon Security Lake to identify `CreateKey` or `PutKeyPolicy` events where the `kms:Encrypt` action is granted to all principals. This activity is significant as it may indicate a compromised account, allowing an attacker to misuse the encryption key to target other organizations. If confirmed malicious, this could lead …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://rhinosecuritylabs.com/aws/s3-ransomware-part-1-attack-vector/, https://github.com/d1vious/git-wild-hunt, https://www.youtube.com/watch?v=PgzNib37g0M",
              "mitre": [
                "T1486"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Detect Users creating keys with encrypt policy without MFA\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Detect Users creating keys with encrypt policy without MFA\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
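For ad-hoc review of overly permissive KMS key policies, the `CreateKey` / `PutKeyPolicy` events described above can be inspected directly. This sketch assumes the `amazon_security_lake` macro and OCSF fields used elsewhere in this subcategory; the `policy` payload path and the wildcard-principal regex are assumptions about how the key policy document appears in the request data.

```spl
`amazon_security_lake` api.operation IN (CreateKey, PutKeyPolicy)
| spath input=api.request.data path=policy output=key_policy
| regex key_policy="\"Principal\"\s*:\s*(\{\s*\"AWS\"\s*:\s*)?\"\*\""
| search key_policy="*kms:Encrypt*"
| stats count min(_time) as firstTime max(_time) as lastTime
    by actor.user.uid src_endpoint.ip
| rename actor.user.uid as user, src_endpoint.ip as src
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```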
            {
              "i": "10.7.81",
              "n": "ASL AWS EC2 Snapshot Shared Externally",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when an EC2 snapshot is shared publicly by analyzing AWS CloudTrail events. This detection method leverages CloudTrail logs to identify modifications in snapshot permissions, specifically when the snapshot is shared outside the originating AWS account. This activity is significant as it may indicate an attempt to exfiltrate sensitive data stored in the snapshot. If confirmed malicious, an attacker could gain unauthorized access to the snapshot's data, potentially l…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that an AWS admin has legitimately shared a snapshot with others for  a specific purpose.",
              "refs": "https://labs.nettitude.com/blog/how-to-exfiltrate-aws-ec2-data/, https://stratus-red-team.cloud/attack-techniques/AWS/aws.exfiltration.ec2-share-ebs-snapshot/, https://hackingthe.cloud/aws/enumeration/loot_public_ebs_snapshots/",
              "mitre": [
                "T1537"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS EC2 Snapshot Shared Externally\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1537. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS EC2 Snapshot Shared Externally\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that an AWS admin has legitimately shared a snapshot with others for  a specific purpose.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.82",
              "n": "ASL AWS ECR Container Upload Outside Business Hours",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the upload of new containers to AWS Elastic Container Service (ECR) outside of standard business hours through AWS CloudTrail events. It identifies this behavior by monitoring for `PutImage` events occurring before 8 AM or after 8 PM, as well as any uploads on weekends. This activity is significant for a SOC to investigate as it may indicate unauthorized access or malicious deployments, potentially leading to compromised services or data breaches. Identifying and a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "When your development is spreaded in different time zones, applying this rule can be difficult.",
              "refs": "https://attack.mitre.org/techniques/T1204/003/",
              "mitre": [
                "T1204.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS ECR Container Upload Outside Business Hours\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS ECR Container Upload Outside Business Hours\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: When your development is spreaded in different time zones, applying this rule can be difficult.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.83",
              "n": "ASL AWS ECR Container Upload Unknown User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects unauthorized container uploads to AWS Elastic Container Service (ECR) by monitoring AWS CloudTrail events. It identifies instances where a new container is uploaded by a user not previously recognized as authorized. This detection is crucial for a SOC as it can indicate a potential compromise or misuse of AWS ECR, which could lead to unauthorized access to sensitive data or the deployment of malicious containers. By identifying and investigating these events, organ…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1204/003/",
              "mitre": [
                "T1204.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS ECR Container Upload Unknown User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS ECR Container Upload Unknown User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.84",
              "n": "ASL AWS IAM AccessDenied Discovery Events",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies excessive AccessDenied events within an hour timeframe for IAM users in AWS. It leverages AWS CloudTrail logs to detect multiple failed access attempts from the same source IP and user identity. This activity is significant as it may indicate that an access key has been compromised and is being misused for unauthorized discovery actions. If confirmed malicious, this could allow attackers to gather information about the AWS environment, potentially leading to fur…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible to start this detection will need to be tuned by source IP or user. In addition, change the count values to an upper threshold to restrict false positives.",
              "refs": "https://aws.amazon.com/premiumsupport/knowledge-center/troubleshoot-iam-permission-errors/",
              "mitre": [
                "T1580"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS IAM AccessDenied Discovery Events\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1580. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS IAM AccessDenied Discovery Events\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible to start this detection will need to be tuned by source IP or user. In addition, change the count values to an upper threshold to restrict false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.85",
              "n": "ASL AWS IAM Assume Role Policy Brute Force",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects multiple failed attempts to assume an AWS IAM role, indicating a potential brute force attack. It leverages AWS CloudTrail logs to identify `MalformedPolicyDocumentException` errors with a status of `failure` and filters out legitimate AWS services. This activity is significant as repeated failures to assume roles can indicate an adversary attempting to guess role names, which is a precursor to unauthorized access. If confirmed malicious, this could lead to unautho…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users.",
              "refs": "https://www.praetorian.com/blog/aws-iam-assume-role-vulnerabilities/, https://rhinosecuritylabs.com/aws/assume-worst-aws-assume-role-enumeration/",
              "mitre": [
                "T1580",
                "T1110"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS IAM Assume Role Policy Brute Force\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1580, T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS IAM Assume Role Policy Brute Force\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.86",
              "n": "ASL AWS IAM Delete Policy",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when a policy is deleted in AWS. It leverages Amazon Security Lake logs to detect the DeletePolicy API operation. Monitoring policy deletions is crucial as it can indicate unauthorized attempts to weaken security controls. If confirmed malicious, this activity could allow an attacker to remove critical security policies, potentially leading to privilege escalation or unauthorized access to sensitive resources.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "`amazon_security_lake` api.operation=DeletePolicy\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY actor.user.uid api.operation api.service.name\n           http_request.user_agent src_endpoint.ip actor.user.account.uid\n           cloud.provider cloud.region\n      | rename actor.user.uid as user api.operation as action api.service.name as dest http_request.user_agent as user_agent src_endpoint.ip as src actor.user.account.uid as vendor_account cloud.provider as vendor_product cloud.region as vendor_region\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `asl_aws_iam_delete_policy_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users. Not every user with AWS access should have permission to delete policies (least privilege). In addition, this may be saved seperately and tuned for failed or success attempts only.",
              "refs": "https://docs.aws.amazon.com/IAM/latest/APIReference/API_DeletePolicy.html, https://docs.aws.amazon.com/cli/latest/reference/iam/delete-policy.html",
              "mitre": [
                "T1098"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS IAM Delete Policy\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS IAM Delete Policy\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users. Not every user with AWS access should have permission to delete policies (least privilege). In addition, this may be saved seperately and tuned for failed or success attempts only.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'ASL AWS IAM Delete Policy' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.87",
              "n": "ASL AWS IAM Failure Group Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects failed attempts to delete AWS IAM groups, triggered by access denial, conflicts, or non-existent groups. It operates by monitoring CloudTrail logs for specific error codes related to deletion failures. This behavior is significant for a SOC as it may indicate unauthorized attempts to modify access controls or disrupt operations by removing groups. Such actions could be part of a larger attack aiming to escalate privileges or impair security protocols. Identifying t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users. Not every user with AWS access should have permission to delete groups (least privilege).",
              "refs": "https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/delete-group.html, https://docs.aws.amazon.com/IAM/latest/APIReference/API_DeleteGroup.html",
              "mitre": [
                "T1098"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS IAM Failure Group Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS IAM Failure Group Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users. Not every user with AWS access should have permission to delete groups (least privilege).\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.88",
              "n": "ASL AWS Multi-Factor Authentication Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to disable multi-factor authentication (MFA) for an AWS IAM user. It leverages Amazon Security Lake logs, specifically monitoring for `DeleteVirtualMFADevice` or `DeactivateMFADevice` API operations. This activity is significant as disabling MFA can indicate an adversary attempting to weaken account security to maintain persistence using a compromised account. If confirmed malicious, this action could allow attackers to retain access to the AWS environment…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "AWS Administrators may disable MFA but it is highly unlikely for this event to occur without prior notice to the company",
              "refs": "https://attack.mitre.org/techniques/T1621/, https://aws.amazon.com/what-is/mfa/",
              "mitre": [
                "T1556.006",
                "T1586.003",
                "T1621"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Multi-Factor Authentication Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556.006, T1586.003, T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Multi-Factor Authentication Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: AWS Administrators may disable MFA but it is highly unlikely for this event to occur without prior notice to the company\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.89",
              "n": "ASL AWS Network Access Control List Created with All Open Ports",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of AWS Network Access Control Lists (ACLs) with all ports open to a specified CIDR. It leverages AWS CloudTrail events, specifically monitoring for `CreateNetworkAclEntry` or `ReplaceNetworkAclEntry` actions with rules allowing all traffic. This activity is significant because it can expose the network to unauthorized access, increasing the risk of data breaches and other malicious activities. If confirmed malicious, an attacker could exploit this misc…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's possible that an admin has created this ACL with all ports open for some legitimate purpose however, this should be scoped and not allowed in production environment.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1562.007"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Network Access Control List Created with All Open Ports\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Network Access Control List Created with All Open Ports\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's possible that an admin has created this ACL with all ports open for some legitimate purpose however, this should be scoped and not allowed in production environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.90",
              "n": "ASL AWS Network Access Control List Deleted",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of AWS Network Access Control Lists (ACLs). It leverages AWS CloudTrail logs to identify events where a user deletes a network ACL entry. This activity is significant because deleting a network ACL can remove critical access restrictions, potentially allowing unauthorized access to cloud instances. If confirmed malicious, this action could enable attackers to bypass network security controls, leading to unauthorized access, data exfiltration, or furthe…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's possible that a user has legitimately deleted a network ACL.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1562.007"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS Network Access Control List Deleted\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS Network Access Control List Deleted\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's possible that a user has legitimately deleted a network ACL.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.91",
              "n": "ASL AWS New MFA Method Registered For User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the registration of a new Multi-Factor Authentication (MFA) method for an AWS account, as logged through Amazon Security Lake (ASL). It detects this activity by monitoring the `CreateVirtualMFADevice` API operation within ASL logs. This behavior is significant because adversaries who gain unauthorized access to an AWS account may register a new MFA method to maintain persistence. If confirmed malicious, this activity could allow attackers to secure their access,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Newly onboarded users who are registering an MFA method for the first time will also trigger this detection.",
              "refs": "https://aws.amazon.com/blogs/security/you-can-now-assign-multiple-mfa-devices-in-iam/, https://attack.mitre.org/techniques/T1556/, https://attack.mitre.org/techniques/T1556/006/, https://twitter.com/jhencinski/status/1618660062352007174",
              "mitre": [
                "T1556.006"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS New MFA Method Registered For User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS New MFA Method Registered For User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Newly onboarded users who are registering an MFA method for the first time will also trigger this detection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.92",
              "n": "ASL AWS SAML Update identity provider",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects updates to the SAML provider in AWS. It leverages AWS CloudTrail logs to identify the `UpdateSAMLProvider` event, analyzing fields such as `sAMLProviderArn`, `sourceIPAddress`, and `userIdentity` details. Monitoring updates to the SAML provider is crucial as it may indicate a perimeter compromise of federated credentials or unauthorized backdoor access set by an attacker. If confirmed malicious, this activity could allow attackers to manipulate identity federation,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Updating a SAML provider or creating a new one may not necessarily be malicious however it needs to be closely monitored.",
              "refs": "https://www.cisa.gov/uscert/ncas/alerts/aa21-008a, https://www.splunk.com/en_us/blog/security/a-golden-saml-journey-solarwinds-continued.html, https://www.fireeye.com/content/dam/fireeye-www/blog/pdfs/wp-m-unc2452-2021-000343-01.pdf, https://www.cyberark.com/resources/threat-research-blog/golden-saml-newly-discovered-attack-technique-forges-authentication-to-cloud-apps",
              "mitre": [
                "T1078"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS SAML Update identity provider\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS SAML Update identity provider\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Updating a SAML provider or creating a new one may not necessarily be malicious however it needs to be closely monitored.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.93",
              "n": "ASL AWS UpdateLoginProfile",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an AWS CloudTrail event where a user with permissions updates the login profile of another user. It leverages CloudTrail logs to identify instances where the user making the change is different from the user whose profile is being updated. This activity is significant because it can indicate privilege escalation attempts, where an attacker uses a compromised account to gain higher privileges. If confirmed malicious, this could allow the attacker to escalate their p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ASL AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ASL AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has legitimately created keys for another user.",
              "refs": "https://bishopfox.com/blog/privilege-escalation-in-aws, https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ASL AWS UpdateLoginProfile\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ASL AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ASL AWS UpdateLoginProfile\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has legitimately created keys for another user.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.94",
              "n": "AWS AMI Attribute Modification for Exfiltration",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious modifications to AWS AMI attributes, such as sharing an AMI with another AWS account or making it publicly accessible. It leverages AWS CloudTrail logs to identify these changes by monitoring specific API calls. This activity is significant because adversaries can exploit these modifications to exfiltrate sensitive data stored in AWS resources. If confirmed malicious, this could lead to unauthorized access and potential data breaches, compromising the co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail ModifyImageAttribute",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ModifyImageAttribute ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that an AWS admin has legitimately shared a snapshot with others for  a specific purpose.",
              "refs": "https://labs.nettitude.com/blog/how-to-exfiltrate-aws-ec2-data/, https://stratus-red-team.cloud/attack-techniques/AWS/aws.exfiltration.ec2-share-ami/, https://hackingthe.cloud/aws/enumeration/loot_public_ebs_snapshots/",
              "mitre": [
                "T1537"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS AMI Attribute Modification for Exfiltration\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail ModifyImageAttribute. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1537. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS AMI Attribute Modification for Exfiltration\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that an AWS admin has legitimately shared a snapshot with others for  a specific purpose.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.95",
              "n": "AWS Console Login Failed During MFA Challenge",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies failed authentication attempts to the AWS Console during the Multi-Factor Authentication (MFA) challenge. It leverages AWS CloudTrail logs, specifically the `additionalEventData` field, to detect when MFA was used but the login attempt still failed. This activity is significant as it may indicate an adversary attempting to access an account with compromised credentials but being thwarted by MFA. If confirmed malicious, this could suggest an ongoing attempt to br…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail ConsoleLogin",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ConsoleLogin ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate users may miss to reply the MFA challenge within the time window or deny it by mistake.",
              "refs": "https://attack.mitre.org/techniques/T1621/, https://aws.amazon.com/what-is/mfa/",
              "mitre": [
                "T1586.003",
                "T1621"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Console Login Failed During MFA Challenge\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail ConsoleLogin. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1586.003, T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Console Login Failed During MFA Challenge\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate users may miss to reply the MFA challenge within the time window or deny it by mistake.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
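Illustrative only: the `q` field above is the RBA drilldown, and the authoritative detection logic lives in the ESCU Correlation Search. As a hedged sketch of the raw CloudTrail pattern this detection family targets, assuming the Splunk Add-on for AWS with `sourcetype=aws:cloudtrail` (rename `index=`/`sourcetype=` to match your ingestion), something like the following surfaces failed console logins where an MFA challenge was in play; confirm the exact logic in Content Management before reusing it.

```spl
sourcetype=aws:cloudtrail eventName=ConsoleLogin "additionalEventData.MFAUsed"=Yes errorMessage="Failed authentication"
| stats count min(_time) as firstTime max(_time) as lastTime
    by sourceIPAddress, userIdentity.arn
| convert ctime(firstTime) ctime(lastTime)
```

`additionalEventData.MFAUsed` and `errorMessage` are native CloudTrail ConsoleLogin fields; the `stats` keeps one row per source IP and identity for triage.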
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.96",
              "n": "AWS Create Policy Version to allow all resources",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of a new AWS IAM policy version that allows access to all resources. It detects this activity by analyzing AWS CloudTrail logs for the CreatePolicyVersion event with a policy document that grants broad permissions. This behavior is significant because it violates the principle of least privilege, potentially exposing the environment to misuse or abuse. If confirmed malicious, an attacker could gain extensive access to AWS resources, leading to unaut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail CreatePolicyVersion",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail CreatePolicyVersion ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has legitimately created a policy to allow a user to access all resources. That said, AWS strongly advises against granting full control to all AWS resources and you must verify this activity.",
              "refs": "https://bishopfox.com/blog/privilege-escalation-in-aws, https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Create Policy Version to allow all resources\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail CreatePolicyVersion. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Create Policy Version to allow all resources\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has legitimately created a policy to allow a user to access all resources. That said, AWS strongly advises against granting full control to all AWS resources and you must verify this activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
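For reference, a minimal sketch of the CreatePolicyVersion pattern described above, assuming `sourcetype=aws:cloudtrail`. The regex matching over the policy document is illustrative (CloudTrail's JSON spacing and encoding vary), so treat this as a starting point and verify against the ESCU definition:

```spl
sourcetype=aws:cloudtrail eventName=CreatePolicyVersion errorCode=success
| spath output=policy_doc path=requestParameters.policyDocument
| where match(policy_doc, "\"Effect\"\s*:\s*\"Allow\"") AND match(policy_doc, "\"Resource\"\s*:\s*\"\*\"")
| stats count min(_time) as firstTime max(_time) as lastTime
    by userIdentity.arn, requestParameters.policyArn
```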
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.97",
              "n": "AWS CreateAccessKey",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of AWS IAM access keys by a user for another user, which can indicate privilege escalation. It leverages AWS CloudTrail logs to detect instances where the user creating the access key is different from the user for whom the key is created. This activity is significant because unauthorized access key creation can allow attackers to establish persistence or exfiltrate data via AWS APIs. If confirmed malicious, this could lead to unauthorized access to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail CreateAccessKey",
              "q": "`cloudtrail` eventName = CreateAccessKey userAgent !=console.amazonaws.com errorCode = success\n      | eval match=if(match(userIdentity.userName,requestParameters.userName),1,0)\n      | search match=0\n      | rename user_name as user\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY signature dest user\n           user_agent src vendor_account\n           vendor_region vendor_product\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `aws_createaccesskey_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires AWS CloudTrail CreateAccessKey ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has legitimately created keys for another user.",
              "refs": "https://bishopfox.com/blog/privilege-escalation-in-aws, https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS CreateAccessKey\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail CreateAccessKey. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS CreateAccessKey\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has legitimately created keys for another user.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'AWS CreateAccessKey' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.98",
              "n": "AWS CreateLoginProfile",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of a login profile for one AWS user by another, followed by a console login from the same source IP. It uses AWS CloudTrail logs to correlate the `CreateLoginProfile` and `ConsoleLogin` events based on the source IP and user identity. This activity is significant as it may indicate privilege escalation, where an attacker creates a new login profile to gain unauthorized access. If confirmed malicious, this could allow the attacker to escalate privile…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail CreateLoginProfile AND AWS CloudTrail ConsoleLogin",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail CreateLoginProfile AND AWS CloudTrail ConsoleLogin ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has legitimately created a login profile for another user.",
              "refs": "https://bishopfox.com/blog/privilege-escalation-in-aws, https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS CreateLoginProfile\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail CreateLoginProfile AND AWS CloudTrail ConsoleLogin. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS CreateLoginProfile\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has legitimately created a login profile for another user.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
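A simplified, hedged sketch of the correlation idea (a login profile created for a user, then a console login as that user from the same source IP), using a `stats`-based grouping rather than the ESCU implementation. It ignores event ordering, so use it for orientation only; assumes `sourcetype=aws:cloudtrail`:

```spl
sourcetype=aws:cloudtrail eventName IN (CreateLoginProfile, ConsoleLogin)
| rename sourceIPAddress as src
| eval profile_target=coalesce('requestParameters.userName', 'userIdentity.userName')
| stats values(eventName) as events dc(eventName) as distinct_events
    min(_time) as firstTime by src, profile_target
| where distinct_events == 2
```

For CreateLoginProfile events the `coalesce` picks the target user; for ConsoleLogin it picks the logging-in user, so a pair under the same source IP mirrors the documented pattern.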
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.99",
              "n": "AWS Credential Access Failed Login",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies unsuccessful login attempts to the AWS Management Console using a specific user identity. It leverages AWS CloudTrail logs to detect failed authentication events associated with the AWS ConsoleLogin action. This activity is significant for a SOC because repeated failed login attempts may indicate a brute force attack or unauthorized access attempts. If confirmed malicious, an attacker could potentially gain access to AWS account services and resources, leading t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail ConsoleLogin",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ConsoleLogin ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users may genuinely mistype or forget the password.",
              "refs": "https://attack.mitre.org/techniques/T1110/001/",
              "mitre": [
                "T1110.001",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Credential Access Failed Login\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail ConsoleLogin. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.001, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Credential Access Failed Login\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users may genuinely mistype or forget the password.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
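For context, a hedged sketch of the underlying brute-force signal, assuming `sourcetype=aws:cloudtrail`. `responseElements.ConsoleLogin=Failure` marks failed console logins in CloudTrail; the 10-minute window and threshold of 5 are arbitrary starting points to tune:

```spl
sourcetype=aws:cloudtrail eventName=ConsoleLogin "responseElements.ConsoleLogin"=Failure
| bin _time span=10m
| stats count dc(userIdentity.userName) as distinct_users
    by _time, sourceIPAddress
| where count >= 5
```

A high `distinct_users` value alongside a high count suggests password spraying rather than a single forgotten password.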
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.100",
              "n": "AWS Credential Access RDS Password reset",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the resetting of the master user password for an Amazon RDS DB instance. It leverages AWS CloudTrail logs to identify events where the `ModifyDBInstance` API call includes a new `masterUserPassword` parameter. This activity is significant because unauthorized password resets can grant attackers access to sensitive data stored in production databases, such as credit card information, PII, and healthcare data. If confirmed malicious, this could lead to data breaches,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail ModifyDBInstance",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ModifyDBInstance ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users may genuinely reset the RDS password.",
              "refs": "https://aws.amazon.com/premiumsupport/knowledge-center/reset-master-user-password-rds",
              "mitre": [
                "T1110",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Credential Access RDS Password reset\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail ModifyDBInstance. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Credential Access RDS Password reset\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users may genuinely reset the RDS password.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
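A hedged sketch of the underlying event pattern, assuming `sourcetype=aws:cloudtrail`. CloudTrail redacts the new password value, but the presence of the `masterUserPassword` key in `requestParameters` marks a reset; the `dBInstanceIdentifier` casing follows RDS CloudTrail records, so verify it against your data:

```spl
sourcetype=aws:cloudtrail eventName=ModifyDBInstance "requestParameters.masterUserPassword"=*
| stats count min(_time) as firstTime max(_time) as lastTime
    by userIdentity.arn, requestParameters.dBInstanceIdentifier, sourceIPAddress
| convert ctime(firstTime) ctime(lastTime)
```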
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.101",
              "n": "AWS Defense Evasion Delete CloudWatch Log Group",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of CloudWatch log groups in AWS, identified through `DeleteLogGroup` events in CloudTrail logs. This detection leverages CloudTrail data to monitor for successful log group deletions, excluding console-based actions. This activity is significant as it indicates potential attempts to evade logging and monitoring, which is crucial for maintaining visibility into AWS activities. If confirmed malicious, this could allow attackers to hide their tracks, maki…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DeleteLogGroup",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail DeleteLogGroup ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has deleted CloudWatch logging. Please investigate this activity.",
              "refs": "https://attack.mitre.org/techniques/T1562/008/",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Defense Evasion Delete CloudWatch Log Group\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DeleteLogGroup. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Defense Evasion Delete CloudWatch Log Group\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has deleted CloudWatch logging. Please investigate this activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
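Two of the ideas above in one hedged sketch: the raw DeleteLogGroup pattern (excluding console activity, per the description) plus the lookup-based suppression recommended under Tuning. Assumes `sourcetype=aws:cloudtrail`; `aws_admin_allowlist.csv` is a hypothetical lookup you would create with an `arn` column listing approved administrators:

```spl
sourcetype=aws:cloudtrail eventName=DeleteLogGroup errorCode=success userAgent!=console.amazonaws.com
| rename userIdentity.arn as arn
| lookup aws_admin_allowlist.csv arn OUTPUT arn as allowlisted
| where isnull(allowlisted)
| stats count min(_time) as firstTime max(_time) as lastTime
    by arn, requestParameters.logGroupName, sourceIPAddress
```

Rows that match the allowlist are dropped by the `isnull` filter, which preserves coverage for every ARN you have not explicitly approved.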
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.102",
              "n": "AWS Defense Evasion Impair Security Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to impair or disable AWS security services by monitoring specific deletion operations across GuardDuty, AWS WAF (classic and v2), CloudWatch, Route 53, and CloudWatch Logs. These actions include deleting detectors, rule groups, IP sets, web ACLs, logging configurations, alarms, and log streams. Adversaries may perform such operations to evade detection or remove visibility from defenders. By explicitly pairing eventName values with their corresponding even…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DeleteLogStream, AWS CloudTrail DeleteDetector, AWS CloudTrail DeleteIPSet, AWS CloudTrail DeleteWebACL, AWS CloudTrail DeleteRule, AWS CloudTrail DeleteRuleGroup, AWS CloudTrail DeleteLoggingConfiguration, AWS CloudTrail DeleteAlarms",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail DeleteLogStream, AWS CloudTrail DeleteDetector, AWS CloudTrail DeleteIPSet, AWS CloudTrail DeleteWebACL, AWS CloudTrail DeleteRule, AWS CloudTrail DeleteRuleGroup, AWS CloudTrail DeleteLoggingConfiguration, AWS CloudTrail DeleteAlarms ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrators may occasionally delete GuardDuty detectors, WAF rule groups, or CloudWatch alarms during environment reconfiguration, migration, or decommissioning activities. In such cases, these events are expected and benign. These should be validated against approved change tickets or deployment pipelines to differentiate malicious activity from normal operations. Please consider filtering out these noisy events using userAgent, user_arn field names.",
              "refs": "https://docs.aws.amazon.com/cli/latest/reference/guardduty/index.html, https://docs.aws.amazon.com/cli/latest/reference/waf/index.html, https://www.elastic.co/guide/en/security/current/prebuilt-rules.html",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Defense Evasion Impair Security Services\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DeleteLogStream, AWS CloudTrail DeleteDetector, AWS CloudTrail DeleteIPSet, AWS CloudTrail DeleteWebACL, AWS CloudTrail DeleteRule, AWS CloudTrail DeleteRuleGroup, AWS CloudTrail DeleteLoggingConfiguration, AWS CloudTrail DeleteAlarms. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Defense Evasion Impair Security Services\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrators may occasionally delete GuardDuty detectors, WAF rule groups, or CloudWatch alarms during environment reconfiguration, migration, or decommissioning activities. In such cases, these events are expected and benign. These should be validated against approved change tickets or deployment pipelines to differentiate malicious activity from normal operations. Please consider filtering out these noisy events using userAgent, user_arn field names.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
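              "_spl_sketch": "Editor's sketch, not the authoritative ESCU logic (view that in Content Management): the SPL in the q field is the generic Risk drilldown, so this illustrates roughly what the underlying detection hunts for across the listed Delete* events. The cloudtrail macro and the user_arn, src, and userAgent field names are assumptions from a typical Splunk Add-on for AWS deployment; rename them if your ingestion differs.\n\n```spl\n`cloudtrail` eventName IN (DeleteLogStream, DeleteDetector, DeleteIPSet, DeleteWebACL, DeleteRule, DeleteRuleGroup, DeleteLoggingConfiguration, DeleteAlarms) errorCode=success\n| stats count min(_time) as firstTime max(_time) as lastTime by user_arn, eventName, src, userAgent\n```\n\nLow counts per user_arn are often the interesting rows; validate hits against change tickets per the known-false-positive guidance.",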
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.103",
              "n": "AWS Defense Evasion PutBucketLifecycle",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects `PutBucketLifecycle` events in AWS CloudTrail logs where a user sets a lifecycle rule for an S3 bucket with an expiration period of fewer than three days. This detection leverages CloudTrail logs to identify suspicious lifecycle configurations. This activity is significant because attackers may use it to delete CloudTrail logs quickly, thereby evading detection and impairing forensic investigations. If confirmed malicious, this could allow attackers to cover their …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail PutBucketLifecycle",
              "q": "`cloudtrail` eventName=PutBucketLifecycle user_type=IAMUser errorCode=success\n      | spath path=requestParameters{}.LifecycleConfiguration{}.Rule{}.Expiration{}.Days output=expiration_days\n      | spath path=requestParameters{}.bucketName output=bucket_name\n      | rename user_name as user\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY signature dest user\n           user_agent src vendor_account\n           vendor_region vendor_product bucket_name\n           expiration_days\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `aws_defense_evasion_putbucketlifecycle_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires AWS CloudTrail PutBucketLifecycle ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that it is a legitimate admin activity. Please consider filtering out these noisy events using userAgent, user_arn field names.",
              "refs": "https://stratus-red-team.cloud/attack-techniques/AWS/aws.defense-evasion.cloudtrail-lifecycle-rule/",
              "mitre": [
                "T1485.001",
                "T1562.008"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Defense Evasion PutBucketLifecycle\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail PutBucketLifecycle. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485.001, T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Defense Evasion PutBucketLifecycle\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that it is a legitimate admin activity. Please consider filtering out these noisy events using userAgent, user_arn field names.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'AWS Defense Evasion PutBucketLifecycle' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.104",
              "n": "AWS Defense Evasion Stop Logging Cloudtrail",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects `StopLogging` events in AWS CloudTrail logs. It leverages CloudTrail event data to identify when logging is intentionally stopped, excluding console-based actions and focusing on successful attempts. This activity is significant because adversaries may stop logging to evade detection and operate stealthily within the compromised environment. If confirmed malicious, this action could allow attackers to perform further activities without being logged, hindering incid…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail StopLogging",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail StopLogging ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has stopped cloudtrail logging. Please investigate this activity.",
              "refs": "https://attack.mitre.org/techniques/T1562/008/",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Defense Evasion Stop Logging Cloudtrail\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail StopLogging. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Defense Evasion Stop Logging Cloudtrail\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has stopped cloudtrail logging. Please investigate this activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.105",
              "n": "AWS Defense Evasion Update Cloudtrail",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects `UpdateTrail` events in AWS CloudTrail logs. It identifies attempts to modify CloudTrail settings, potentially to evade logging. The detection leverages CloudTrail logs, focusing on `UpdateTrail` events where the user agent is not the AWS console and the operation is successful. This activity is significant because altering CloudTrail settings can disable or limit logging, hindering visibility into AWS account activities. If confirmed malicious, this could allow at…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail UpdateTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail UpdateTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has updated cloudtrail logging. Please investigate this activity.",
              "refs": "https://attack.mitre.org/techniques/T1562/008/",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Defense Evasion Update Cloudtrail\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail UpdateTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Defense Evasion Update Cloudtrail\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has updated cloudtrail logging. Please investigate this activity.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.106",
              "n": "AWS Detect Users creating keys with encrypt policy without MFA",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of AWS KMS keys with an encryption policy accessible to everyone, including external entities. It leverages AWS CloudTrail logs to identify `CreateKey` or `PutKeyPolicy` events where the `kms:Encrypt` action is granted to all principals. This activity is significant as it may indicate a compromised account, allowing an attacker to misuse the encryption key to target other organizations. If confirmed malicious, this could lead to unauthorized data encry…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail CreateKey, AWS CloudTrail PutKeyPolicy",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail CreateKey, AWS CloudTrail PutKeyPolicy ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://rhinosecuritylabs.com/aws/s3-ransomware-part-1-attack-vector/, https://github.com/d1vious/git-wild-hunt, https://www.youtube.com/watch?v=PgzNib37g0M",
              "mitre": [
                "T1486"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Detect Users creating keys with encrypt policy without MFA\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail CreateKey, AWS CloudTrail PutKeyPolicy. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Detect Users creating keys with encrypt policy without MFA\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.107",
              "n": "AWS Detect Users with KMS keys performing encryption S3",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies users with KMS keys performing encryption operations on S3 buckets. It leverages AWS CloudTrail logs to detect the `CopyObject` event where server-side encryption with AWS KMS is specified. This activity is significant as it may indicate unauthorized or suspicious encryption of data, potentially masking exfiltration or tampering efforts. If confirmed malicious, an attacker could be encrypting sensitive data to evade detection or preparing it for exfiltration, po…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There maybe buckets provisioned with S3 encryption",
              "refs": "https://rhinosecuritylabs.com/aws/s3-ransomware-part-1-attack-vector/, https://github.com/d1vious/git-wild-hunt, https://www.youtube.com/watch?v=PgzNib37g0M",
              "mitre": [
                "T1486"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Detect Users with KMS keys performing encryption S3\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Detect Users with KMS keys performing encryption S3\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There maybe buckets provisioned with S3 encryption\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.108",
              "n": "AWS EC2 Snapshot Shared Externally",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when an EC2 snapshot is shared with an external AWS account by analyzing AWS CloudTrail events. This detection method leverages CloudTrail logs to identify modifications in snapshot permissions, specifically when the snapshot is shared outside the originating AWS account. This activity is significant as it may indicate an attempt to exfiltrate sensitive data stored in the snapshot. If confirmed malicious, an attacker could gain unauthorized access to the snapshot's…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail ModifySnapshotAttribute",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ModifySnapshotAttribute ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that an AWS admin has legitimately shared a snapshot with others for  a specific purpose.",
              "refs": "https://labs.nettitude.com/blog/how-to-exfiltrate-aws-ec2-data/, https://stratus-red-team.cloud/attack-techniques/AWS/aws.exfiltration.ec2-share-ebs-snapshot/, https://hackingthe.cloud/aws/enumeration/loot_public_ebs_snapshots/",
              "mitre": [
                "T1537"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS EC2 Snapshot Shared Externally\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail ModifySnapshotAttribute. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1537. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS EC2 Snapshot Shared Externally\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that an AWS admin has legitimately shared a snapshot with others for  a specific purpose.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.109",
              "n": "AWS ECR Container Upload Outside Business Hours",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the upload of a new container image to AWS Elastic Container Registry (ECR) outside of standard business hours. It leverages AWS CloudTrail logs to identify `PutImage` events occurring between 8 PM and 8 AM or on weekends. This activity is significant because container uploads outside business hours can indicate unauthorized or suspicious activity, potentially pointing to a compromised account or insider threat. If confirmed malicious, this could allow an attacker …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail PutImage",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail PutImage ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "When your development is spreaded in different time zones, applying this rule can be difficult.",
              "refs": "https://attack.mitre.org/techniques/T1204/003/",
              "mitre": [
                "T1204.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS ECR Container Upload Outside Business Hours\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail PutImage. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS ECR Container Upload Outside Business Hours\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: When your development is spreaded in different time zones, applying this rule can be difficult.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
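\n\nIllustrative Detection Logic\n\nThe authoritative search ships inside the ESCU Correlation Search definition; view it in Content Management. The sketch below is only a minimal illustration of the detection shape. It assumes CloudTrail events land in `index=aws` with `sourcetype=aws:cloudtrail` and that business hours run 08:00-18:00; the index, sourcetype, and hour bounds are assumptions to adapt to your environment:\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventSource=ecr.amazonaws.com eventName=PutImage\n| where date_hour < 8 OR date_hour > 18\n| stats count min(_time) as firstTime max(_time) as lastTime by userIdentity.arn, requestParameters.repositoryName, sourceIPAddress\n```\n\nNote that `date_hour` reflects the parsed event time on the indexer; if your teams span time zones, convert the event time to each team’s local zone before applying the hour filter.",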
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.110",
              "n": "AWS ECR Container Upload Unknown User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the upload of a new container image to AWS Elastic Container Registry (ECR) by an unknown user. It leverages AWS CloudTrail logs to identify `PutImage` events from the ECR service, filtering out known users. This activity is significant because container uploads should typically be performed by a limited set of authorized users. If confirmed malicious, this could indicate unauthorized access, potentially leading to the deployment of malicious containers, data exfil…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail PutImage",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail PutImage ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1204/003/",
              "mitre": [
                "T1204.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS ECR Container Upload Unknown User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail PutImage. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS ECR Container Upload Unknown User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
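\n\nIllustrative Detection Logic\n\nThe authoritative search ships inside the ESCU Correlation Search definition; view it in Content Management. As a minimal sketch only, assuming CloudTrail events in `index=aws` with `sourcetype=aws:cloudtrail` and a hypothetical `known_ecr_users.csv` lookup holding a `user_arn` column of approved uploaders:\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventSource=ecr.amazonaws.com eventName=PutImage\n| rename userIdentity.arn as user_arn\n| search NOT [| inputlookup known_ecr_users.csv | fields user_arn]\n| stats count min(_time) as firstTime max(_time) as lastTime by user_arn, requestParameters.repositoryName\n```\n\nKeep the lookup current, for example via a scheduled search that appends newly approved users; otherwise the filter drifts and re-flags known staff.",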
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.111",
              "n": "AWS Exfiltration via Anomalous GetObject API Activity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies anomalous GetObject API activity in AWS, indicating potential data exfiltration attempts. It leverages AWS CloudTrail logs and uses the `anomalydetection` command to detect unusual patterns in the frequency of GetObject API calls by analyzing fields such as \"count,\" \"user_type,\" and \"user_arn\" within a 10-minute window. This activity is significant as it may indicate unauthorized data access or exfiltration from S3 buckets. If confirmed malicious, attackers coul…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail GetObject",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail GetObject ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that a user downloaded these files to use them locally and there are AWS services in configured that perform these activities for a legitimate reason. Filter is needed.",
              "refs": "https://labs.nettitude.com/blog/how-to-exfiltrate-aws-ec2-data/, https://help.splunk.com/en/splunk-enterprise/search/spl-search-reference/9.4/search-commands/anomalydetection",
              "mitre": [
                "T1119"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Exfiltration via Anomalous GetObject API Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail GetObject. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1119. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Exfiltration via Anomalous GetObject API Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that a user downloaded these files to use them locally and there are AWS services in configured that perform these activities for a legitimate reason. Filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
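\n\nIllustrative Detection Logic\n\nThe authoritative search ships inside the ESCU Correlation Search definition; view it in Content Management. As a minimal sketch of the anomaly approach described above, assuming CloudTrail S3 data events in `index=aws` with `sourcetype=aws:cloudtrail` (GetObject is only logged when S3 data-event logging is enabled in CloudTrail):\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=GetObject\n| bin _time span=10m\n| rename userIdentity.type as user_type, userIdentity.arn as user_arn\n| stats count by _time, user_type, user_arn\n| anomalydetection count user_type user_arn action=annotate\n| search probable_cause=count\n```\n\nWith `action=annotate`, `anomalydetection` adds probability fields rather than dropping rows; filtering on `probable_cause=count` keeps the 10-minute buckets where the call volume, not the identity fields, looked anomalous.",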
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.112",
              "n": "AWS Exfiltration via Batch Service",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of AWS Batch jobs that could potentially abuse the AWS Bucket Replication feature on S3 buckets. It leverages AWS CloudTrail logs to detect the `JobCreated` event, analyzing job details and their status. This activity is significant because attackers can exploit this feature to exfiltrate data by creating malicious batch jobs. If confirmed malicious, this could lead to unauthorized data transfer between S3 buckets, resulting in data breaches and los…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail JobCreated",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail JobCreated ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that an AWS Administrator or a user has legitimately created this job for some tasks.",
              "refs": "https://hackingthe.cloud/aws/exploitation/s3-bucket-replication-exfiltration/, https://bleemb.medium.com/data-exfiltration-with-native-aws-s3-features-c94ae4d13436",
              "mitre": [
                "T1119"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Exfiltration via Batch Service\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail JobCreated. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1119. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Exfiltration via Batch Service\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that an AWS Administrator or a user has legitimately created this job for some tasks.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
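\n\nIllustrative Detection Logic\n\nThe authoritative search ships inside the ESCU Correlation Search definition; view it in Content Management. As a minimal sketch, assuming CloudTrail events in `index=aws` with `sourcetype=aws:cloudtrail`; the field paths carrying job details vary by account setup, so inspect a raw `JobCreated` event and extend the `by` clause with the job name and status fields present in your data:\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=JobCreated\n| stats count min(_time) as firstTime max(_time) as lastTime values(awsRegion) as region by userIdentity.arn, sourceIPAddress\n```\n\nCorrelate flagged jobs with the S3 buckets they read from and write to; a batch job whose destination bucket sits outside your accounts is the pattern this analytic targets.",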
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.113",
              "n": "AWS Exfiltration via Bucket Replication",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects API calls to enable S3 bucket replication services. It leverages AWS CloudTrail logs to identify `PutBucketReplication` events, focusing on fields like `bucketName`, `ReplicationConfiguration.Rule.Destination.Bucket`, and user details. This activity is significant as it can indicate unauthorized data replication, potentially leading to data exfiltration. If confirmed malicious, attackers could replicate sensitive data to external accounts, leading to data breaches …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail PutBucketReplication",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_arn$\", \"$aws_account_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail PutBucketReplication ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that an AWS admin has legitimately implemented data replication to ensure data availability and improve data protection/backup strategies.",
              "refs": "https://hackingthe.cloud/aws/exploitation/s3-bucket-replication-exfiltration/",
              "mitre": [
                "T1537"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Exfiltration via Bucket Replication\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail PutBucketReplication. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1537. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Exfiltration via Bucket Replication\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that an AWS admin has legitimately implemented data replication to ensure data availability and improve data protection/backup strategies.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
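\n\nIllustrative Detection Logic\n\nThe authoritative search ships inside the ESCU Correlation Search definition; view it in Content Management. As a minimal sketch, assuming CloudTrail events in `index=aws` with `sourcetype=aws:cloudtrail`; the JSON paths below mirror the fields named in the description but may differ in your extractions:\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=PutBucketReplication\n| rename requestParameters.bucketName as src_bucket\n| spath output=dest_bucket path=requestParameters.ReplicationConfiguration.Rule.Destination.Bucket\n| stats count min(_time) as firstTime max(_time) as lastTime by userIdentity.arn, src_bucket, dest_bucket\n```\n\nTriage by checking whether `dest_bucket` belongs to an account you own; replication into an unfamiliar account is the exfiltration signal here.",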
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.114",
              "n": "AWS Exfiltration via DataSync Task",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of an AWS DataSync task, which could indicate potential data exfiltration. It leverages AWS CloudTrail logs to identify the `CreateTask` event from the DataSync service. This activity is significant because attackers can misuse DataSync to transfer sensitive data from a private AWS location to a public one, leading to data compromise. If confirmed malicious, this could result in unauthorized access to sensitive information, causing severe data breaches…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail CreateTask",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail CreateTask ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that an AWS Administrator has legitimately created this task for creating backup. Please check the `sourceLocationArn` and `destinationLocationArn` of this task",
              "refs": "https://labs.nettitude.com/blog/how-to-exfiltrate-aws-ec2-data/, https://www.shehackske.com/how-to/data-exfiltration-on-cloud-1606/",
              "mitre": [
                "T1119"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Exfiltration via DataSync Task\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail CreateTask. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1119. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Exfiltration via DataSync Task\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that an AWS Administrator has legitimately created this task for creating backup. Please check the `sourceLocationArn` and `destinationLocationArn` of this task\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
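\n\nIllustrative Detection Logic\n\nThe authoritative search ships inside the ESCU Correlation Search definition; view it in Content Management. As a minimal sketch, assuming CloudTrail events in `index=aws` with `sourcetype=aws:cloudtrail`:\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventSource=datasync.amazonaws.com eventName=CreateTask\n| rename requestParameters.sourceLocationArn as src_location, requestParameters.destinationLocationArn as dest_location\n| stats count min(_time) as firstTime max(_time) as lastTime by userIdentity.arn, src_location, dest_location\n```\n\nSurfacing `src_location` and `dest_location` directly supports the triage advice above: a task copying from a private location to one outside your accounts deserves immediate review.",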
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.115",
              "n": "AWS Exfiltration via EC2 Snapshot",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a series of AWS API calls related to EC2 snapshots within a short time window, indicating potential exfiltration via EC2 Snapshot modifications. It leverages AWS CloudTrail logs to identify actions such as creating, describing, and modifying snapshot attributes. This activity is significant as it may indicate an attacker attempting to exfiltrate data by sharing EC2 snapshots externally. If confirmed malicious, the attacker could gain access to sensitive information…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail CreateSnapshot, AWS CloudTrail DescribeSnapshotAttribute, AWS CloudTrail ModifySnapshotAttribute, AWS CloudTrail DeleteSnapshot",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail CreateSnapshot, AWS CloudTrail DescribeSnapshotAttribute, AWS CloudTrail ModifySnapshotAttribute, AWS CloudTrail DeleteSnapshot ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that an AWS admin has legitimately shared a snapshot with an other account for a specific purpose. Please check any recent change requests filed in your organization.",
              "refs": "https://labs.nettitude.com/blog/how-to-exfiltrate-aws-ec2-data/, https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifySnapshotAttribute.html, https://bleemb.medium.com/data-exfiltration-with-native-aws-s3-features-c94ae4d13436, https://stratus-red-team.cloud/attack-techniques/list/",
              "mitre": [
                "T1537"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Exfiltration via EC2 Snapshot\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail CreateSnapshot, AWS CloudTrail DescribeSnapshotAttribute, AWS CloudTrail ModifySnapshotAttribute, AWS CloudTrail DeleteSnapshot. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1537. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Exfiltration via EC2 Snapshot\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that an AWS admin has legitimately shared a snapshot with an other account for a specific purpose. Please check any recent change requests filed in your organization.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
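\n\nIllustrative Detection Logic\n\nThe authoritative search ships inside the ESCU Correlation Search definition; view it in Content Management. As a minimal sketch, assuming CloudTrail events in `index=aws` with `sourcetype=aws:cloudtrail`; the 30-minute window and the three-call threshold are illustrative assumptions, not the ESCU defaults:\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName IN (CreateSnapshot, DescribeSnapshotAttribute, ModifySnapshotAttribute, DeleteSnapshot)\n| bin _time span=30m\n| stats values(eventName) as api_calls dc(eventName) as distinct_calls by _time, userIdentity.arn\n| where distinct_calls >= 3\n| search api_calls=ModifySnapshotAttribute\n```\n\nRequiring `ModifySnapshotAttribute` in the sequence matters because that call is what grants another account permission to the snapshot; creation and deletion alone are routine.",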
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.116",
              "n": "AWS High Number Of Failed Authentications For User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an AWS account experiencing more than 20 failed authentication attempts within a 5-minute window. It leverages AWS CloudTrail logs to identify multiple failed ConsoleLogin events. This behavior is significant as it may indicate a brute force attack targeting the account. If confirmed malicious, the attacker could potentially gain unauthorized access, leading to data breaches or further exploitation of the AWS environment. Security teams should consider adjusting th…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail ConsoleLogin",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ConsoleLogin ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A user with more than 20 failed authentication attempts in the span of 5 minutes may also be triggered by a broken application.",
              "refs": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/IAM/password-policy.html",
              "mitre": [
                "T1201"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS High Number Of Failed Authentications For User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail ConsoleLogin. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1201. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS High Number Of Failed Authentications For User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A user with more than 20 failed authentication attempts in the span of 5 minutes may also be triggered by a broken application.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
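              "_spl_sketch": "A minimal raw-SPL sketch of the kind of per-user brute-force logic this detection packages; the authoritative version ships in ESCU and should be viewed in Content Management. It assumes CloudTrail ConsoleLogin events in index=aws with sourcetype=aws:cloudtrail and Splunk Add-on for AWS field names (user_name, src); rename to match your deployment.\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=ConsoleLogin \"responseElements.ConsoleLogin\"=Failure\n| bin _time span=5m\n| stats count values(src) as src by _time, user_name\n| where count >= 20\n```",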
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.117",
              "n": "AWS High Number Of Failed Authentications From Ip",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an IP address with 20 or more failed authentication attempts to the AWS Web Console within a 5-minute window. This detection leverages CloudTrail logs, aggregating failed login events by IP address and time span. This activity is significant as it may indicate a brute force attack aimed at gaining unauthorized access or escalating privileges within an AWS environment. If confirmed malicious, this could lead to unauthorized access, data breaches, or further exploita…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail ConsoleLogin",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ConsoleLogin ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "An Ip address with more than 20 failed authentication attempts in the span of 5 minutes may also be triggered by a broken application.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://www.whiteoaksecurity.com/blog/goawsconsolespray-password-spraying-tool/, https://softwaresecuritydotblog.wordpress.com/2019/09/28/how-to-protect-against-credential-stuffing-on-aws/",
              "mitre": [
                "T1110.003",
                "T1110.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS High Number Of Failed Authentications From Ip\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail ConsoleLogin. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003, T1110.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS High Number Of Failed Authentications From Ip\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: An Ip address with more than 20 failed authentication attempts in the span of 5 minutes may also be triggered by a broken application.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.118",
              "n": "AWS IAM AccessDenied Discovery Events",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies excessive AccessDenied events within an hour timeframe for IAM users in AWS. It leverages AWS CloudTrail logs to detect multiple failed access attempts from the same source IP and user identity. This activity is significant as it may indicate that an access key has been compromised and is being misused for unauthorized discovery actions. If confirmed malicious, this could allow attackers to gather information about the AWS environment, potentially leading to fur…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible to start this detection will need to be tuned by source IP or user. In addition, change the count values to an upper threshold to restrict false positives.",
              "refs": "https://aws.amazon.com/premiumsupport/knowledge-center/troubleshoot-iam-permission-errors/",
              "mitre": [
                "T1580"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS IAM AccessDenied Discovery Events\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1580. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS IAM AccessDenied Discovery Events\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible to start this detection will need to be tuned by source IP or user. In addition, change the count values to an upper threshold to restrict false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.119",
              "n": "AWS IAM Assume Role Policy Brute Force",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects multiple failed attempts to assume an AWS IAM role, indicating a potential brute force attack. It leverages AWS CloudTrail logs to identify `MalformedPolicyDocumentException` errors with a status of `failure` and filters out legitimate AWS services. This activity is significant as repeated failures to assume roles can indicate an adversary attempting to guess role names, which is a precursor to unauthorized access. If confirmed malicious, this could lead to unautho…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users.",
              "refs": "https://www.praetorian.com/blog/aws-iam-assume-role-vulnerabilities/, https://rhinosecuritylabs.com/aws/assume-worst-aws-assume-role-enumeration/",
              "mitre": [
                "T1580",
                "T1110"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS IAM Assume Role Policy Brute Force\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1580, T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS IAM Assume Role Policy Brute Force\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.120",
              "n": "AWS IAM Delete Policy",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of an IAM policy in AWS. It leverages AWS CloudTrail logs to identify `DeletePolicy` events, excluding those from AWS internal services. This activity is significant as unauthorized policy deletions can disrupt access controls and weaken security postures. If confirmed malicious, an attacker could remove critical security policies, potentially leading to privilege escalation, unauthorized access, or data exfiltration. Monitoring this behavior helps ens…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DeletePolicy",
              "q": "`cloudtrail` eventName=DeletePolicy (userAgent!=*.amazonaws.com)\n      | rename user_name as user\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY signature dest user\n           user_agent src vendor_account\n           vendor_region vendor_product\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `aws_iam_delete_policy_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires AWS CloudTrail DeletePolicy ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users. Not every user with AWS access should have permission to delete policies (least privilege). In addition, this may be saved seperately and tuned for failed or success attempts only.",
              "refs": "https://docs.aws.amazon.com/IAM/latest/APIReference/API_DeletePolicy.html, https://docs.aws.amazon.com/cli/latest/reference/iam/delete-policy.html",
              "mitre": [
                "T1098"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS IAM Delete Policy\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DeletePolicy. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS IAM Delete Policy\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users. Not every user with AWS access should have permission to delete policies (least privilege). In addition, this may be saved seperately and tuned for failed or success attempts only.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'AWS IAM Delete Policy' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.121",
              "n": "AWS IAM Failure Group Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies failed attempts to delete AWS IAM groups. It leverages AWS CloudTrail logs to detect events where the DeleteGroup action fails due to errors like NoSuchEntityException, DeleteConflictException, or AccessDenied. This activity is significant as it may indicate unauthorized attempts to modify IAM group configurations, which could be a precursor to privilege escalation or other malicious actions. If confirmed malicious, this could allow an attacker to disrupt IAM po…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DeleteGroup",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail DeleteGroup ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users. Not every user with AWS access should have permission to delete groups (least privilege).",
              "refs": "https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/delete-group.html, https://docs.aws.amazon.com/IAM/latest/APIReference/API_DeleteGroup.html",
              "mitre": [
                "T1098"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS IAM Failure Group Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DeleteGroup. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS IAM Failure Group Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users. Not every user with AWS access should have permission to delete groups (least privilege).\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.122",
              "n": "AWS IAM Successful Group Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the successful deletion of an IAM group in AWS. It leverages CloudTrail logs to detect `DeleteGroup` events with a success status. This activity is significant as it could indicate potential changes in user permissions or access controls, which may be a precursor to further unauthorized actions. If confirmed malicious, an attacker could disrupt access management, potentially leading to privilege escalation or unauthorized access to sensitive resources. Analysts …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DeleteGroup",
              "q": "`cloudtrail` eventSource=iam.amazonaws.com eventName=DeleteGroup errorCode=success (userAgent!=*.amazonaws.com)\n      | rename user_name as user\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY signature dest user\n           user_agent src vendor_account\n           vendor_region vendor_product\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `aws_iam_successful_group_deletion_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires AWS CloudTrail DeleteGroup ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users. Not every user with AWS access should have permission to delete groups (least privilege).",
              "refs": "https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/delete-group.html, https://docs.aws.amazon.com/IAM/latest/APIReference/API_DeleteGroup.html",
              "mitre": [
                "T1069.003",
                "T1098"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS IAM Successful Group Deletion\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DeleteGroup. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.003, T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS IAM Successful Group Deletion\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection will require tuning to provide high fidelity detection capabilties. Tune based on src addresses (corporate offices, VPN terminations) or by groups of users. Not every user with AWS access should have permission to delete groups (least privilege).\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'AWS IAM Successful Group Deletion' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.123",
              "n": "AWS Lambda UpdateFunctionCode",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies IAM users attempting to update or modify AWS Lambda code via the AWS CLI. It leverages CloudTrail logs to detect successful `UpdateFunctionCode` events initiated by IAM users. This activity is significant as it may indicate an attempt to gain persistence, further access, or plant backdoors within your AWS environment. If confirmed malicious, an attacker could upload and execute malicious code automatically when the Lambda function is triggered, potentially compr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "`cloudtrail` eventSource=lambda.amazonaws.com eventName=UpdateFunctionCode*  errorCode = success  user_type=IAMUser\n      | rename user_name as user\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY signature dest user\n           user_agent src vendor_account\n           vendor_region vendor_product\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `aws_lambda_updatefunctioncode_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin or an autorized IAM user has updated the lambda fuction code legitimately.",
              "refs": "https://sysdig.com/blog/exploit-mitigate-aws-lambdas-mitre/",
              "mitre": [
                "T1204"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Lambda UpdateFunctionCode\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Lambda UpdateFunctionCode\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin or an autorized IAM user has updated the lambda fuction code legitimately.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'AWS Lambda UpdateFunctionCode' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.124",
              "n": "AWS Multi-Factor Authentication Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to disable multi-factor authentication (MFA) for an AWS IAM user. It leverages AWS CloudTrail logs to identify events where MFA devices are deleted or deactivated. This activity is significant because disabling MFA can indicate an adversary attempting to weaken account security, potentially to maintain persistence using a compromised account. If confirmed malicious, this action could allow attackers to retain access to the AWS environment without detection…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DeleteVirtualMFADevice, AWS CloudTrail DeactivateMFADevice",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$vendor_account$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail DeleteVirtualMFADevice, AWS CloudTrail DeactivateMFADevice ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "AWS Administrators may disable MFA but it is highly unlikely for this event to occur without prior notice to the company",
              "refs": "https://attack.mitre.org/techniques/T1621/, https://aws.amazon.com/what-is/mfa/",
              "mitre": [
                "T1556.006",
                "T1586.003",
                "T1621"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Multi-Factor Authentication Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DeleteVirtualMFADevice, AWS CloudTrail DeactivateMFADevice. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556.006, T1586.003, T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Multi-Factor Authentication Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: AWS Administrators may disable MFA but it is highly unlikely for this event to occur without prior notice to the company\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.125",
              "n": "AWS Multiple Failed MFA Requests For User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies multiple failed multi-factor authentication (MFA) requests to an AWS Console for a single user. It leverages AWS CloudTrail logs, specifically the `additionalEventData` field, to detect more than 10 failed MFA prompts within 5 minutes. This activity is significant as it may indicate an adversary attempting to bypass MFA by bombarding the user with repeated authentication requests. If confirmed malicious, this could lead to unauthorized access to the AWS environm…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail ConsoleLogin",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ConsoleLogin ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Multiple Failed MFA requests may also be a sign of authentication or application issues. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1621/, https://aws.amazon.com/what-is/mfa/",
              "mitre": [
                "T1586.003",
                "T1621"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Multiple Failed MFA Requests For User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail ConsoleLogin. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1586.003, T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Multiple Failed MFA Requests For User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Multiple Failed MFA requests may also be a sign of authentication or application issues. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.126",
              "n": "AWS Multiple Users Failing To Authenticate From Ip",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a single source IP failing to authenticate into the AWS Console with 30 unique valid users within 10 minutes. It leverages CloudTrail logs to detect multiple failed login attempts from the same IP address. This behavior is significant as it may indicate a Password Spraying attack, where an adversary attempts to gain unauthorized access or elevate privileges by trying common passwords across many accounts. If confirmed malicious, this activity could lead to unaut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail ConsoleLogin",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ConsoleLogin ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No known false postives for this detection. Please review this alert",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://www.whiteoaksecurity.com/blog/goawsconsolespray-password-spraying-tool/, https://softwaresecuritydotblog.wordpress.com/2019/09/28/how-to-protect-against-credential-stuffing-on-aws/",
              "mitre": [
                "T1110.003",
                "T1110.004"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Multiple Users Failing To Authenticate From Ip\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail ConsoleLogin. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003, T1110.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Multiple Users Failing To Authenticate From Ip\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No known false postives for this detection. Please review this alert\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.127",
              "n": "AWS Network Access Control List Created with All Open Ports",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of AWS Network Access Control Lists (ACLs) with all ports open to a specified CIDR. It leverages AWS CloudTrail events, specifically monitoring for `CreateNetworkAclEntry` or `ReplaceNetworkAclEntry` actions with rules allowing all traffic. This activity is significant because it can expose the network to unauthorized access, increasing the risk of data breaches and other malicious activities. If confirmed malicious, an attacker could exploit this misc…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail CreateNetworkAclEntry, AWS CloudTrail ReplaceNetworkAclEntry",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail CreateNetworkAclEntry, AWS CloudTrail ReplaceNetworkAclEntry ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's possible that an admin has created this ACL with all ports open for some legitimate purpose however, this should be scoped and not allowed in production environment.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1562.007"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Network Access Control List Created with All Open Ports\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail CreateNetworkAclEntry, AWS CloudTrail ReplaceNetworkAclEntry. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Network Access Control List Created with All Open Ports\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's possible that an admin has created this ACL with all ports open for some legitimate purpose however, this should be scoped and not allowed in production environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.128",
              "n": "AWS Network Access Control List Deleted",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of AWS Network Access Control Lists (ACLs). It leverages AWS CloudTrail logs to identify events where a user deletes a network ACL entry. This activity is significant because deleting a network ACL can remove critical access restrictions, potentially allowing unauthorized access to cloud instances. If confirmed malicious, this action could enable attackers to bypass network security controls, leading to unauthorized access, data exfiltration, or furthe…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail DeleteNetworkAclEntry",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail DeleteNetworkAclEntry ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's possible that a user has legitimately deleted a network ACL.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1562.007"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Network Access Control List Deleted\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail DeleteNetworkAclEntry. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Network Access Control List Deleted\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's possible that a user has legitimately deleted a network ACL.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.129",
              "n": "AWS New MFA Method Registered For User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the registration of a new Multi-Factor Authentication (MFA) method for an AWS account. It leverages AWS CloudTrail logs to identify the `CreateVirtualMFADevice` event. This activity is significant because adversaries who gain unauthorized access to an AWS account may register a new MFA method to maintain persistence. If confirmed malicious, this could allow attackers to secure their access, making it difficult to detect and remove their presence, potentially leadin…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail CreateVirtualMFADevice",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail CreateVirtualMFADevice ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Newly onboarded users who are registering an MFA method for the first time will also trigger this detection.",
              "refs": "https://aws.amazon.com/blogs/security/you-can-now-assign-multiple-mfa-devices-in-iam/, https://attack.mitre.org/techniques/T1556/, https://attack.mitre.org/techniques/T1556/006/, https://twitter.com/jhencinski/status/1618660062352007174",
              "mitre": [
                "T1556.006"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS New MFA Method Registered For User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail CreateVirtualMFADevice. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS New MFA Method Registered For User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Newly onboarded users who are registering an MFA method for the first time will also trigger this detection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.130",
              "n": "AWS Password Policy Changes",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects successful API calls to view, update, or delete the password policy in an AWS organization. It leverages AWS CloudTrail logs to identify events such as \"UpdateAccountPasswordPolicy,\" \"GetAccountPasswordPolicy,\" and \"DeleteAccountPasswordPolicy.\" This activity is significant because it is uncommon for regular users to perform these actions, and such changes can indicate an adversary attempting to understand or weaken password defenses. If confirmed malicious, this c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail UpdateAccountPasswordPolicy, AWS CloudTrail GetAccountPasswordPolicy, AWS CloudTrail DeleteAccountPasswordPolicy",
              "q": "`cloudtrail` eventName IN (\"UpdateAccountPasswordPolicy\",\"GetAccountPasswordPolicy\",\"DeleteAccountPasswordPolicy\") errorCode=success\n      | rename user_name as user\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY signature dest user\n           user_agent src vendor_account\n           vendor_region vendor_product\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `aws_password_policy_changes_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires AWS CloudTrail UpdateAccountPasswordPolicy, AWS CloudTrail GetAccountPasswordPolicy, AWS CloudTrail DeleteAccountPasswordPolicy ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has legitimately triggered an AWS audit tool activity which may trigger this event.",
              "refs": "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/IAM/password-policy.html",
              "mitre": [
                "T1201"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Password Policy Changes\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail UpdateAccountPasswordPolicy, AWS CloudTrail GetAccountPasswordPolicy, AWS CloudTrail DeleteAccountPasswordPolicy. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1201. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Password Policy Changes\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has legitimately triggered an AWS audit tool activity which may trigger this event.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'AWS Password Policy Changes' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.131",
              "n": "AWS SAML Update identity provider",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects updates to the SAML provider in AWS. It leverages AWS CloudTrail logs to identify the `UpdateSAMLProvider` event, analyzing fields such as `sAMLProviderArn`, `sourceIPAddress`, and `userIdentity` details. Monitoring updates to the SAML provider is crucial as it may indicate a perimeter compromise of federated credentials or unauthorized backdoor access set by an attacker. If confirmed malicious, this activity could allow attackers to manipulate identity federation,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail UpdateSAMLProvider",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail UpdateSAMLProvider ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Updating a SAML provider or creating a new one may not necessarily be malicious however it needs to be closely monitored.",
              "refs": "https://www.cisa.gov/uscert/ncas/alerts/aa21-008a, https://www.splunk.com/en_us/blog/security/a-golden-saml-journey-solarwinds-continued.html, https://www.fireeye.com/content/dam/fireeye-www/blog/pdfs/wp-m-unc2452-2021-000343-01.pdf, https://www.cyberark.com/resources/threat-research-blog/golden-saml-newly-discovered-attack-technique-forges-authentication-to-cloud-apps",
              "mitre": [
                "T1078"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS SAML Update identity provider\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail UpdateSAMLProvider. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS SAML Update identity provider\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Updating a SAML provider or creating a new one may not necessarily be malicious however it needs to be closely monitored.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
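              "_ex": "To see how this detection contributes to an entity's cumulative score before the Risk Notable threshold is crossed, a sketch against the ES risk index (the index name and saved-search naming pattern are assumptions; adjust to your deployment):\n\n```spl\nindex=risk search_name=\"*AWS SAML Update identity provider*\"\n| stats sum(risk_score) as total_risk values(risk_message) as messages by risk_object, risk_object_type\n| sort -total_risk\n```",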
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.132",
              "n": "AWS SetDefaultPolicyVersion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a user sets a default policy version in AWS. It leverages AWS CloudTrail logs to identify the `SetDefaultPolicyVersion` event from the IAM service. This activity is significant because attackers may exploit this technique for privilege escalation, especially if previous policy versions grant more extensive permissions than the current one. If confirmed malicious, this could allow an attacker to gain elevated access to AWS resources, potentially leading to unau…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail SetDefaultPolicyVersion",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail SetDefaultPolicyVersion ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has legitimately set a default policy to allow a user to access all resources. That said, AWS strongly advises against granting full control to all AWS resources",
              "refs": "https://bishopfox.com/blog/privilege-escalation-in-aws, https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS SetDefaultPolicyVersion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail SetDefaultPolicyVersion. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS SetDefaultPolicyVersion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has legitimately set a default policy to allow a user to access all resources. That said, AWS strongly advises against granting full control to all AWS resources\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
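              "_ex": "A minimal sketch of the lookup-based suppression pattern recommended above; approved_iam_admins.csv is a hypothetical lookup you would maintain, and index=aws with sourcetype=aws:cloudtrail are assumptions:\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=SetDefaultPolicyVersion\n| rename userIdentity.arn as user_arn\n| search NOT [| inputlookup approved_iam_admins.csv | fields user_arn]\n| table _time, user_arn, requestParameters.policyArn, sourceIPAddress\n```",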
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.133",
              "n": "AWS Successful Single-Factor Authentication",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a successful Console Login authentication event for an AWS IAM user account without Multi-Factor Authentication (MFA) enabled. It leverages AWS CloudTrail logs to detect instances where MFA was not used during login. This activity is significant as it may indicate a misconfiguration, policy violation, or potential account takeover attempt. If confirmed malicious, an attacker could gain unauthorized access to the AWS environment, potentially leading to data exfil…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail ConsoleLogin",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ConsoleLogin ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that some accounts do not have MFA enabled for the AWS account however its agaisnt the best practices of securing AWS.",
              "refs": "https://attack.mitre.org/techniques/T1621/, https://attack.mitre.org/techniques/T1078/004/, https://aws.amazon.com/what-is/mfa/",
              "mitre": [
                "T1078.004",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Successful Single-Factor Authentication\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail ConsoleLogin. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Successful Single-Factor Authentication\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that some accounts do not have MFA enabled for the AWS account however its agaisnt the best practices of securing AWS.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
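              "_ex": "For context on the behavior this detection targets, a raw CloudTrail sketch (assumed index/sourcetype; field paths follow the CloudTrail JSON schema; not the packaged ESCU search):\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=ConsoleLogin userIdentity.type=IAMUser additionalEventData.MFAUsed=No responseElements.ConsoleLogin=Success\n| table _time, userIdentity.userName, sourceIPAddress, userAgent\n```",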
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.134",
              "n": "AWS Unusual Number of Failed Authentications From Ip",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a single source IP failing to authenticate into the AWS Console with multiple valid users. It uses CloudTrail logs and calculates the standard deviation for source IP, leveraging the 3-sigma rule to detect unusual numbers of failed authentication attempts. This behavior is significant as it may indicate a Password Spraying attack, where an adversary attempts to gain initial access or elevate privileges. If confirmed malicious, this activity could lead to unautho…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail ConsoleLogin",
              "q": "# Shared SPL: intentional — see UC-10.2.29\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$tried_accounts$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ConsoleLogin ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No known false postives for this detection. Please review this alert",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://www.whiteoaksecurity.com/blog/goawsconsolespray-password-spraying-tool/, https://softwaresecuritydotblog.wordpress.com/2019/09/28/how-to-protect-against-credential-stuffing-on-aws/",
              "mitre": [
                "T1110.003",
                "T1110.004",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS Unusual Number of Failed Authentications From Ip\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail ConsoleLogin. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003, T1110.004, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS Unusual Number of Failed Authentications From Ip\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No known false postives for this detection. Please review this alert\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
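              "_ex": "To illustrate the 3-sigma idea the analytic relies on, a simplified sketch over raw CloudTrail (assumed index/sourcetype; the packaged ESCU search differs in detail):\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=ConsoleLogin errorMessage=\"Failed authentication\"\n| bin _time span=10m\n| stats dc(userIdentity.userName) as unique_users count as failures by sourceIPAddress, _time\n| eventstats avg(failures) as avg_failures stdev(failures) as stdev_failures\n| where failures > (avg_failures + 3 * stdev_failures)\n```",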
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.135",
              "n": "AWS UpdateLoginProfile",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an AWS CloudTrail event where a user with permissions updates the login profile of another user. It leverages CloudTrail logs to identify instances where the user making the change is different from the user whose profile is being updated. This activity is significant because it can indicate privilege escalation attempts, where an attacker uses a compromised account to gain higher privileges. If confirmed malicious, this could allow the attacker to escalate their p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail UpdateLoginProfile",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail UpdateLoginProfile ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has legitimately created keys for another user.",
              "refs": "https://bishopfox.com/blog/privilege-escalation-in-aws, https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AWS UpdateLoginProfile\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail UpdateLoginProfile. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AWS UpdateLoginProfile\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has legitimately created keys for another user.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
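              "_ex": "The core condition (the actor updating a login profile is not the profile's owner) can be sketched over raw CloudTrail as follows (assumed index/sourcetype; not the packaged ESCU logic):\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=UpdateLoginProfile errorCode=success\n| where 'userIdentity.userName' != 'requestParameters.userName'\n| table _time, userIdentity.userName, requestParameters.userName, sourceIPAddress\n```",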
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.136",
              "n": "Azure Active Directory High Risk Sign-in",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects high-risk sign-in attempts against Azure Active Directory, identified by Azure Identity Protection. It leverages the RiskyUsers and UserRiskEvents log categories from Azure AD events ingested via EventHub. This activity is significant as it indicates potentially compromised accounts, flagged by heuristics and machine learning. If confirmed malicious, attackers could gain unauthorized access to sensitive resources, leading to data breaches or further exploitation wi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Details for the risk calculation algorithm used by Identity Protection are unknown and may be prone to false positives.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/security/compass/incident-response-playbook-password-spray, https://learn.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection, https://learn.microsoft.com/en-us/azure/active-directory/identity-protection/concept-identity-protection-risks",
              "mitre": [
                "T1110.003",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure Active Directory High Risk Sign-in\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure Active Directory High Risk Sign-in\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Details for the risk calculation algorithm used by Identity Protection are unknown and may be prone to false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
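              "_ex": "A raw-event sketch of the underlying signal (assumes EventHub data arrives with sourcetype=azure:monitor:aad; property paths can vary by add-on version; not the packaged ESCU search):\n\n```spl\nsourcetype=azure:monitor:aad category=UserRiskEvents properties.riskLevel=high\n| rename properties.* as *\n| table _time, userPrincipalName, riskEventType, riskLevel, ipAddress\n```",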
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure Active Directory High Risk Sign-in so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.137",
              "n": "Azure AD Admin Consent Bypassed by Service Principal",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a service principal in Azure Active Directory assigns app roles without standard admin consent. It uses Entra ID logs from the `azure_monitor_aad` data source, focusing on the \"Add app role assignment to service principal\" operation. This detection is significant as it highlights potential bypasses of critical administrative consent processes, which could lead to unauthorized privileges being granted. If confirmed malicious, this activity could a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Add app role assignment to service principal",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the latest version of Splunk Add-on for Microsoft Cloud Services from Splunkbase(https://splunkbase.splunk.com/app/3110/#/details). You must be ingesting Azure Active Directory events into your Splunk environment. This analytic was written to be used with the azure:monitor:aad sourcetype leveraging the Auditlog log category",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Service Principals are sometimes configured to legitimately bypass the consent process for purposes of automation. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/003/",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Admin Consent Bypassed by Service Principal\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Add app role assignment to service principal. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Admin Consent Bypassed by Service Principal\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Service Principals are sometimes configured to legitimately bypass the consent process for purposes of automation. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
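              "_ex": "A raw-event sketch of the operation this detection inspects (assumes sourcetype=azure:monitor:aad from the Microsoft Cloud Services add-on; property paths can vary; not the packaged ESCU logic):\n\n```spl\nsourcetype=azure:monitor:aad operationName=\"Add app role assignment to service principal\"\n| rename properties.* as *\n| eval initiator=coalesce('initiatedBy.app.displayName', 'initiatedBy.user.userPrincipalName')\n| table _time, initiator, targetResources{}.displayName\n```",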
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Admin Consent Bypassed by Service Principal so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.7.137: Azure AD Admin Consent Bypassed by Service Principal.",
                  "ea": "Saved search 'UC-10.7.137' running on Azure Active Directory Add app role assignment to service principal, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.138",
              "n": "Azure AD Application Administrator Role Assigned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the assignment of the Application Administrator role to an Azure AD user. It leverages Azure Active Directory events, specifically monitoring the \"Add member to role\" operation. This activity is significant because users in this role can manage all aspects of enterprise applications, including credentials, which can be used to impersonate application identities. If confirmed malicious, an attacker could escalate privileges, manage application settings, and poten…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Add member to role",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Add member to role ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may legitimately assign the Application Administrator role to a user. Filter as needed.",
              "refs": "https://dirkjanm.io/azure-ad-privilege-escalation-application-admin/, https://posts.specterops.io/azure-privilege-escalation-via-service-principal-abuse-210ae2be2a5, https://learn.microsoft.com/en-us/azure/active-directory/roles/concept-understand-roles, https://attack.mitre.org/techniques/T1098/003/, https://learn.microsoft.com/en-us/azure/active-directory/roles/permissions-reference#application-administrator",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Application Administrator Role Assigned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Add member to role. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Application Administrator Role Assigned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may legitimately assign the Application Administrator role to a user. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
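Illustrative detection sketch (UC 10.7.138). A hedged raw-event sketch of the underlying logic, not the shipped ESCU search (`index=azuread` and the `properties.*` paths are assumptions to verify against your azure:monitor:aad data); the wildcards allow for the JSON-quoted role name stored in `newValue`:

```spl
index=azuread sourcetype=azure:monitor:aad operationName="Add member to role"
    properties.targetResources{}.modifiedProperties{}.newValue="*Application Administrator*"
| stats count min(_time) as firstTime max(_time) as lastTime
    by properties.initiatedBy.user.userPrincipalName, properties.targetResources{}.userPrincipalName
| convert ctime(firstTime) ctime(lastTime)
```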
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Application Administrator Role Assigned so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.139",
              "n": "Azure AD Authentication Failed During MFA Challenge",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies failed authentication attempts against an Azure AD tenant during the Multi-Factor Authentication (MFA) challenge, specifically flagged by error code 500121. It leverages Azure AD SignInLogs to detect these events. This activity is significant as it may indicate an adversary attempting to authenticate using compromised credentials on an account with MFA enabled. If confirmed malicious, this could suggest an ongoing effort to bypass MFA protections, potentially le…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the latest version of Splunk Add-on for Microsoft Cloud Services from Splunkbase (https://splunkbase.splunk.com/app/3110/#/details). You must be ingesting Azure Active Directory events into your Splunk environment through an EventHub. This analytic was written to be used with the azure:monitor:aad sourcetype leveraging the SignInLogs log category.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives have been minimized by removing attempts that result in 'MFA successfully completed messages', which were found to be generated when a user opts to use a different MFA method than the default.\\nFurther reductions in finding events can be achieved through filtering 'MFA denied; duplicate authentication attempt' messages within the auth_msg field, as they could arguably be considered as false positives.",
              "refs": "https://attack.mitre.org/techniques/T1621/, https://attack.mitre.org/techniques/T1078/004/, https://learn.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-howitworks, https://learn.microsoft.com/en-us/entra/identity/monitoring-health/concept-sign-in-log-activity-details",
              "mitre": [
                "T1078.004",
                "T1586.003",
                "T1621"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Authentication Failed During MFA Challenge\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004, T1586.003, T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Authentication Failed During MFA Challenge\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives have been minimized by removing attempts that result in 'MFA successfully completed messages', which were found to be generated when a user opts to use a different MFA method than the default.\\nFurther reductions in finding events can be achieved through filtering 'MFA denied; duplicate authentication attempt' messages within the auth_msg field, as they could arguably be considered as false positives.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
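Illustrative detection sketch (UC 10.7.139). As a rough sketch of the raw-event logic (the shipped ESCU search is authoritative; the index name and field paths below are assumptions), filter SignInLogs to error code 500121 and drop the 'MFA successfully completed' messages called out under known false positives:

```spl
index=azuread sourcetype=azure:monitor:aad category=SignInLogs properties.status.errorCode=500121
    properties.status.additionalDetails!="MFA successfully completed"
| stats count min(_time) as firstTime max(_time) as lastTime
    by properties.userPrincipalName, properties.ipAddress, properties.status.additionalDetails
| convert ctime(firstTime) ctime(lastTime)
```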
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Authentication Failed During MFA Challenge so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.140",
              "n": "Azure AD External Guest User Invited",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the invitation of an external guest user within Azure AD. It leverages Azure AD AuditLogs to identify events where an external user is invited, using fields such as operationName and initiatedBy. Monitoring these invitations is crucial as they can lead to unauthorized access if abused. If confirmed malicious, this activity could allow attackers to gain access to internal resources, potentially leading to data breaches or further exploitation of the environment.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Invite external user",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Invite external user ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator may legitimately invite external guest users. Filter as needed.",
              "refs": "https://dirkjanm.io/assets/raw/US-22-Mollema-Backdooring-and-hijacking-Azure-AD-accounts_final.pdf, https://www.blackhat.com/us-22/briefings/schedule/#backdooring-and-hijacking-azure-ad-accounts-by-abusing-external-identities-26999, https://attack.mitre.org/techniques/T1136/003/, https://learn.microsoft.com/en-us/azure/active-directory/external-identities/b2b-quickstart-add-guest-users-portal",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD External Guest User Invited\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Invite external user. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD External Guest User Invited\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator may legitimately invite external guest users. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
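Illustrative detection sketch (UC 10.7.140). A minimal raw-event sketch under stated assumptions (the `index=azuread` name, the `category=AuditLogs` filter, and the `properties.*` paths are all assumptions to confirm against your azure:monitor:aad events; the packaged ESCU search remains authoritative):

```spl
index=azuread sourcetype=azure:monitor:aad category=AuditLogs operationName="Invite external user"
| stats count min(_time) as firstTime max(_time) as lastTime
    by operationName, properties.initiatedBy.user.userPrincipalName, properties.targetResources{}.displayName
| convert ctime(firstTime) ctime(lastTime)
```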
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD External Guest User Invited so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.141",
              "n": "Azure AD Global Administrator Role Assigned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the assignment of the Azure AD Global Administrator role to a user. It leverages Azure Active Directory AuditLogs to identify when the \"Add member to role\" operation includes the \"Global Administrator\" role. This activity is significant because the Global Administrator role grants extensive access to data, resources, and settings, similar to a Domain Administrator in traditional AD environments. If confirmed malicious, this could allow an attacker to establish pers…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Add member to role",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Add member to role ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may legitimately assign the Global Administrator role to a user. Filter as needed.",
              "refs": "https://o365blog.com/post/admin/, https://adsecurity.org/?p=4277, https://www.mandiant.com/resources/detecting-microsoft-365-azure-active-directory-backdoors, https://learn.microsoft.com/en-us/azure/active-directory/roles/security-planning, https://learn.microsoft.com/en-us/azure/role-based-access-control/elevate-access-global-admin, https://attack.mitre.org/techniques/T1098/003/",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Global Administrator Role Assigned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Add member to role. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Global Administrator Role Assigned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may legitimately assign the Global Administrator role to a user. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
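Illustrative detection sketch (UC 10.7.141). Structurally the same shape as the Application Administrator sketch above, keyed to the "Global Administrator" role; a hedged illustration only (index name and field paths are assumptions, not the shipped ESCU logic):

```spl
index=azuread sourcetype=azure:monitor:aad operationName="Add member to role"
    properties.targetResources{}.modifiedProperties{}.newValue="*Global Administrator*"
| stats count min(_time) as firstTime max(_time) as lastTime
    by properties.initiatedBy.user.userPrincipalName, properties.targetResources{}.userPrincipalName
| convert ctime(firstTime) ctime(lastTime)
```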
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Global Administrator Role Assigned so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.142",
              "n": "Azure AD High Number Of Failed Authentications For User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an Azure AD account experiencing more than 20 failed authentication attempts within a 10-minute window. This detection leverages Azure SignInLogs data, specifically monitoring for error code 50126 and unsuccessful authentication attempts. This behavior is significant as it may indicate a brute force attack targeting the account. If confirmed malicious, an attacker could potentially gain unauthorized access, leading to data breaches or further exploitation within…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A user with more than 20 failed authentication attempts in the span of 10 minutes may also be triggered by a broken application.",
              "refs": "https://attack.mitre.org/techniques/T1110/, https://attack.mitre.org/techniques/T1110/001/",
              "mitre": [
                "T1110.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD High Number Of Failed Authentications For User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD High Number Of Failed Authentications For User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A user with more than 20 failed authentication attempts in the span of 10 minutes may also be triggered by a broken application.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
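Illustrative detection sketch (UC 10.7.142). The analytic counts error-code-50126 failures per user in 10-minute windows; a minimal counting sketch under the usual assumptions (`index=azuread` and the `properties.*` paths must be verified against your data):

```spl
index=azuread sourcetype=azure:monitor:aad category=SignInLogs properties.status.errorCode=50126
| bucket span=10m _time
| stats count dc(properties.ipAddress) as distinct_sources by _time, properties.userPrincipalName
| where count > 20
```

Fixed 10-minute bins are a simplification (a burst straddling a bin boundary can evade the threshold), so prefer the packaged detection; `distinct_sources` helps separate a spray from a single broken client.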
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD High Number Of Failed Authentications For User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.143",
              "n": "Azure AD High Number Of Failed Authentications From Ip",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an IP address with 20 or more failed authentication attempts to an Azure AD tenant within 10 minutes. It leverages Azure AD SignInLogs to identify repeated failed logins from the same IP. This behavior is significant as it may indicate a brute force attack aimed at gaining unauthorized access or escalating privileges. If confirmed malicious, the attacker could potentially compromise user accounts, leading to unauthorized access to sensitive information and resource…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "An Ip address with more than 20 failed authentication attempts in the span of 10 minutes may also be triggered by a broken application.",
              "refs": "https://attack.mitre.org/techniques/T1110/, https://attack.mitre.org/techniques/T1110/001/, https://attack.mitre.org/techniques/T1110/003/",
              "mitre": [
                "T1110.001",
                "T1110.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD High Number Of Failed Authentications From Ip\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.001, T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD High Number Of Failed Authentications From Ip\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: An Ip address with more than 20 failed authentication attempts in the span of 10 minutes may also be triggered by a broken application.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
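Illustrative detection sketch (UC 10.7.143). The per-IP counterpart of the previous sketch; the 50126 error-code filter is an assumption here (the analytic description says only "failed authentication attempts"), as are the index name and field paths:

```spl
index=azuread sourcetype=azure:monitor:aad category=SignInLogs properties.status.errorCode=50126
| bucket span=10m _time
| stats count dc(properties.userPrincipalName) as distinct_users by _time, properties.ipAddress
| where count >= 20
```

A high `distinct_users` count alongside the threshold points to password spraying rather than one misconfigured host.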
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD High Number Of Failed Authentications From Ip so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.144",
              "n": "Azure AD Multi-Factor Authentication Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to disable multi-factor authentication (MFA) for an Azure AD user. It leverages Azure Active Directory AuditLogs to identify the \"Disable Strong Authentication\" operation. This activity is significant because disabling MFA can allow adversaries to maintain persistence using compromised accounts without raising suspicion. If confirmed malicious, this action could enable attackers to bypass an essential security control, potentially leading to unauthorized a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Disable Strong Authentication",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Disable Strong Authentication ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use case may require for users to disable MFA. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-howitworks, https://learn.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-userstates, https://attack.mitre.org/tactics/TA0005/, https://attack.mitre.org/techniques/T1556/",
              "mitre": [
                "T1556.006",
                "T1586.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Multi-Factor Authentication Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Disable Strong Authentication. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556.006, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Multi-Factor Authentication Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use case may require for users to disable MFA. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Multi-Factor Authentication Disabled so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.145",
              "n": "Azure AD Multiple Denied MFA Requests For User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an unusually high number of denied Multi-Factor Authentication (MFA) requests for a single user within a 10-minute window, specifically when more than nine MFA prompts are declined. It leverages Azure Active Directory (Azure AD) sign-in logs, focusing on \"Sign-in activity\" events with error code 500121 and additional details indicating \"MFA denied; user declined the authentication.\" This behavior is significant as it may indicate a targeted attack or account compro…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Sign-in activity",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Sign-in activity ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Multiple denifed MFA requests in a short period of span may also be a sign of authentication errors. Investigate and filter as needed.",
              "refs": "https://www.mandiant.com/resources/blog/russian-targeting-gov-business, https://arstechnica.com/information-technology/2022/03/lapsus-and-solar-winds-hackers-both-use-the-same-old-trick-to-bypass-mfa/, https://therecord.media/russian-hackers-bypass-2fa-by-annoying-victims-with-repeated-push-notifications/, https://attack.mitre.org/techniques/T1621/, https://attack.mitre.org/techniques/T1078/004/, https://www.cisa.gov/sites/default/files/publications/fact-sheet-implement-number-matching-in-mfa-applications-508c.pdf",
              "mitre": [
                "T1621"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Multiple Denied MFA Requests For User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Sign-in activity. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Multiple Denied MFA Requests For User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Multiple denifed MFA requests in a short period of span may also be a sign of authentication errors. Investigate and filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Multiple Denied MFA Requests For User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.146",
              "n": "Azure AD Multiple Failed MFA Requests For User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies multiple failed multi-factor authentication (MFA) requests for a single user within an Azure AD tenant. It leverages Azure AD Sign-in Logs, specifically error code 500121, to detect more than 10 failed MFA attempts within 10 minutes. This behavior is significant as it may indicate an adversary attempting to bypass MFA by bombarding the user with repeated authentication prompts. If confirmed malicious, this activity could lead to unauthorized access, allowing att…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Sign-in activity",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Sign-in activity ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Multiple Failed MFA requests may also be a sign of authentication or application issues. Filter as needed.",
              "refs": "https://www.mandiant.com/resources/blog/russian-targeting-gov-business, https://arstechnica.com/information-technology/2022/03/lapsus-and-solar-winds-hackers-both-use-the-same-old-trick-to-bypass-mfa/, https://therecord.media/russian-hackers-bypass-2fa-by-annoying-victims-with-repeated-push-notifications/, https://attack.mitre.org/techniques/T1621/, https://attack.mitre.org/techniques/T1078/004/, https://www.cisa.gov/sites/default/files/publications/fact-sheet-implement-number-matching-in-mfa-applications-508c.pdf",
              "mitre": [
                "T1078.004",
                "T1586.003",
                "T1621"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Multiple Failed MFA Requests For User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Sign-in activity. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004, T1586.003, T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Multiple Failed MFA Requests For User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Multiple Failed MFA requests may also be a sign of authentication or application issues. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Multiple Failed MFA Requests For User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.147",
              "n": "Azure AD Multiple Service Principals Created by SP",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a single service principal in Azure AD creates more than three unique OAuth applications within a 10-minute span. It leverages Azure AD audit logs, specifically monitoring the 'Add service principal' operation initiated by service principals. This behavior is significant as it may indicate an attacker using a compromised or malicious service principal to rapidly establish multiple service principals, potentially staging an attack. If confirmed malicious, this …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Add service principal",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Add service principal ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Certain users or applications may create multiple service principals in a short period of time for legitimate purposes. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1136/003/, https://www.microsoft.com/en-us/security/blog/2024/01/25/midnight-blizzard-guidance-for-responders-on-nation-state-attack/",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Multiple Service Principals Created by SP\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Add service principal. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Multiple Service Principals Created by SP\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Certain users or applications may create multiple service principals in a short period of time for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Multiple Service Principals Created by SP so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.148",
              "n": "Azure AD Multiple Service Principals Created by User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a single user creates more than three unique OAuth applications within a 10-minute timeframe in Azure AD. It detects this activity by monitoring the 'Add service principal' operation and aggregating data in 10-minute intervals. This behavior is significant as it may indicate an adversary rapidly creating multiple service principals to stage an attack or expand their foothold within the network. If confirmed malicious, this activity could allow at…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Add service principal",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Add service principal ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Certain users or applications may create multiple service principals in a short period of time for legitimate purposes. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1136/003/, https://www.microsoft.com/en-us/security/blog/2024/01/25/midnight-blizzard-guidance-for-responders-on-nation-state-attack/",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Multiple Service Principals Created by User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Add service principal. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Multiple Service Principals Created by User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Certain users or applications may create multiple service principals in a short period of time for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Multiple Service Principals Created by User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.149",
              "n": "Azure AD Multiple Users Failing To Authenticate From Ip",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a single source IP failing to authenticate with 30 unique valid users within 5 minutes in Azure Active Directory. It leverages Azure AD SignInLogs with error code 50126, indicating invalid passwords. This behavior is significant as it may indicate a Password Spraying attack, where an adversary attempts to gain initial access or elevate privileges by trying common passwords across many accounts. If confirmed malicious, this activity could lead to unauthorized access…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A source Ip failing to authenticate with multiple users is not a common for legitimate behavior.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/security/compass/incident-response-playbook-password-spray, https://www.cisa.gov/uscert/ncas/alerts/aa21-008a, https://learn.microsoft.com/azure/active-directory/reports-monitoring/reference-sign-ins-error-codes",
              "mitre": [
                "T1110.003",
                "T1110.004",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Multiple Users Failing To Authenticate From Ip\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003, T1110.004, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Multiple Users Failing To Authenticate From Ip\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A source Ip failing to authenticate with multiple users is not a common for legitimate behavior.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Multiple Users Failing To Authenticate From Ip so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.150",
              "n": "Azure AD New Custom Domain Added",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of a new custom domain within an Azure Active Directory (AD) tenant. It leverages Azure AD AuditLogs to identify successful \"Add unverified domain\" operations. This activity is significant as it may indicate an adversary attempting to establish persistence by setting up identity federation backdoors, allowing them to impersonate users and bypass authentication mechanisms. If confirmed malicious, this could enable attackers to gain unauthorized access, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Add unverified domain",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Add unverified domain ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "In most organizations, new customm domains will be updated infrequently. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/azure/active-directory/enterprise-users/domains-manage, https://www.mandiant.com/resources/remediation-and-hardening-strategies-microsoft-365-defend-against-apt29-v13, https://o365blog.com/post/federation-vulnerability/, https://www.inversecos.com/2021/11/how-to-detect-azure-active-directory.html, https://www.mandiant.com/resources/blog/detecting-microsoft-365-azure-active-directory-backdoors, https://attack.mitre.org/techniques/T1484/002/",
              "mitre": [
                "T1484.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD New Custom Domain Added\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Add unverified domain. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1484.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD New Custom Domain Added\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: In most organizations, new customm domains will be updated infrequently. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD New Custom Domain Added so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.151",
              "n": "Azure AD New Federated Domain Added",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of a new federated domain within an Azure Active Directory tenant. It leverages Azure AD AuditLogs to identify successful \"Set domain authentication\" operations. This activity is significant as it may indicate the use of the Azure AD identity federation backdoor technique, allowing an adversary to establish persistence. If confirmed malicious, the attacker could impersonate any user, bypassing password and MFA requirements, potentially leading to unaut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Set domain authentication",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Set domain authentication ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "In most organizations, domain federation settings will be updated infrequently. Filter as needed.",
              "refs": "https://www.mandiant.com/resources/remediation-and-hardening-strategies-microsoft-365-defend-against-apt29-v13, https://o365blog.com/post/federation-vulnerability/, https://www.inversecos.com/2021/11/how-to-detect-azure-active-directory.html, https://www.mandiant.com/resources/blog/detecting-microsoft-365-azure-active-directory-backdoors, https://attack.mitre.org/techniques/T1484/002/",
              "mitre": [
                "T1484.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD New Federated Domain Added\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Set domain authentication. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1484.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD New Federated Domain Added\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: In most organizations, domain federation settings will be updated infrequently. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
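\n\nOptional Raw-Event Spot Check\n\nTo review the raw \"Set domain authentication\" operations behind a firing, you can query the audit feed directly. Minimal sketch only: the index name `azuread` and the nested field paths are assumptions, so align them with your Microsoft Cloud Services add-on ingestion.\n\n```spl\nindex=azuread sourcetype=azure:monitor:aad category=AuditLogs operationName=\"Set domain authentication\"\n| rename properties.initiatedBy.user.userPrincipalName as actor, properties.targetResources{}.displayName as domain\n| stats count min(_time) as firstTime max(_time) as lastTime by actor, domain\n| convert ctime(firstTime) ctime(lastTime)\n```",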
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD New Federated Domain Added so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.152",
              "n": "Azure AD New MFA Method Registered",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the registration of a new Multi-Factor Authentication (MFA) method for a user account in Azure Active Directory. It leverages Azure AD audit logs to identify changes in MFA configurations. This activity is significant because adding a new MFA method can indicate an attacker's attempt to maintain persistence on a compromised account. If confirmed malicious, the attacker could bypass existing security measures, solidify their access, and potentially escalate privileg…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Update user",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the latest version of Splunk Add-on for Microsoft Cloud Services from Splunkbase (https://splunkbase.splunk.com/app/3110/#/details). You must be ingesting Azure Active Directory events into your Splunk environment through an EventHub. This analytic was written to be used with the azure:monitor:aad sourcetype leveraging the AuditLog log category.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users may register MFA methods legitimally, investigate and filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/005/, https://www.microsoft.com/en-us/security/blog/2023/06/08/detecting-and-mitigating-a-multi-stage-aitm-phishing-and-bec-campaign/, https://www.csoonline.com/article/573451/sophisticated-bec-scammers-bypass-microsoft-365-multi-factor-authentication.html",
              "mitre": [
                "T1098.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD New MFA Method Registered\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Update user. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD New MFA Method Registered\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users may register MFA methods legitimally, investigate and filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
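\n\nOptional Raw-Event Spot Check\n\nA direct look at \"Update user\" audit events can separate routine profile edits from MFA-related changes. This is a sketch only: the keyword `StrongAuthenticationMethod`, the index name `azuread`, and the field paths are assumptions to verify against your own audit payloads.\n\n```spl\nindex=azuread sourcetype=azure:monitor:aad category=AuditLogs operationName=\"Update user\" \"StrongAuthenticationMethod\"\n| rename properties.targetResources{}.userPrincipalName as target_user\n| stats count min(_time) as firstTime max(_time) as lastTime by target_user\n| convert ctime(firstTime) ctime(lastTime)\n```",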
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD New MFA Method Registered so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.153",
              "n": "Azure AD New MFA Method Registered For User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the registration of a new Multi-Factor Authentication (MFA) method for an Azure AD account. It leverages Azure AD AuditLogs to identify when a user registers new security information. This activity is significant because adversaries who gain unauthorized access to an account may add their own MFA method to maintain persistence. If confirmed malicious, this could allow attackers to bypass existing security controls, maintain long-term access, and potentially escalat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory User registered security info",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory User registered security info ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Newly onboarded users who are registering an MFA method for the first time will also trigger this detection.",
              "refs": "https://learn.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-howitworks, https://attack.mitre.org/techniques/T1556/, https://attack.mitre.org/techniques/T1556/006/, https://twitter.com/jhencinski/status/1618660062352007174",
              "mitre": [
                "T1556.006"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD New MFA Method Registered For User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory User registered security info. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD New MFA Method Registered For User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Newly onboarded users who are registering an MFA method for the first time will also trigger this detection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
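\n\nOptional Raw-Event Spot Check\n\nTo confirm which accounts recently registered new security info, query the audit feed for the \"User registered security info\" operation. A minimal sketch, assuming `index=azuread` and standard azure:monitor:aad field paths; adjust both to your environment.\n\n```spl\nindex=azuread sourcetype=azure:monitor:aad category=AuditLogs operationName=\"User registered security info\"\n| rename properties.initiatedBy.user.userPrincipalName as user\n| stats count min(_time) as firstTime max(_time) as lastTime by user, operationName\n| convert ctime(firstTime) ctime(lastTime)\n```",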
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD New MFA Method Registered For User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.154",
              "n": "Azure AD OAuth Application Consent Granted By User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a user in an Azure AD environment grants consent to an OAuth application. It leverages Azure AD audit logs to identify events where users approve application consents. This activity is significant as it can expose organizational data to third-party applications, a common tactic used by malicious actors to gain unauthorized access. If confirmed malicious, this could lead to unauthorized access to sensitive information and resources. Immediate investigation is r…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Consent to application",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the latest version of Splunk Add-on for Microsoft Cloud Services from Splunkbase (https://splunkbase.splunk.com/app/3110/#/details). You must be ingesting Azure Active Directory events into your Splunk environment through an EventHub. This analytic was written to be used with the azure:monitor:aad sourcetype leveraging the AuditLog log category.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur if users are granting consents as part of legitimate application integrations or setups. It is crucial to review the application and the permissions it requests to ensure they align with organizational policies and security best practices.",
              "refs": "https://attack.mitre.org/techniques/T1528/, https://www.microsoft.com/en-us/security/blog/2022/09/22/malicious-oauth-applications-used-to-compromise-email-servers-and-spread-spam/, https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/protect-against-consent-phishing, https://learn.microsoft.com/en-us/defender-cloud-apps/investigate-risky-oauth, https://www.alteredsecurity.com/post/introduction-to-365-stealer, https://github.com/AlteredSecurity/365-Stealer",
              "mitre": [
                "T1528"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD OAuth Application Consent Granted By User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Consent to application. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1528. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD OAuth Application Consent Granted By User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur if users are granting consents as part of legitimate application integrations or setups. It is crucial to review the application and the permissions it requests to ensure they align with organizational policies and security best practices.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
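\n\nOptional Raw-Event Spot Check\n\nBecause triage hinges on which application received consent, it is useful to pull the raw \"Consent to application\" events and group them by user and application. Minimal sketch under assumptions (`index=azuread`, nested field paths); adapt to your ingestion.\n\n```spl\nindex=azuread sourcetype=azure:monitor:aad category=AuditLogs operationName=\"Consent to application\"\n| rename properties.initiatedBy.user.userPrincipalName as user, properties.targetResources{}.displayName as app_name\n| stats count min(_time) as firstTime max(_time) as lastTime by user, app_name\n| convert ctime(firstTime) ctime(lastTime)\n```",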
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD OAuth Application Consent Granted By User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.7.154: Azure AD OAuth Application Consent Granted By User.",
                  "ea": "Saved search 'UC-10.7.154' running on Azure Active Directory Consent to application, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.155",
              "n": "Azure AD PIM Role Assigned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the assignment of an Azure AD Privileged Identity Management (PIM) role. It leverages Azure Active Directory events to identify when a user is added as an eligible member to a PIM role. This activity is significant because PIM roles grant elevated privileges, and their assignment should be closely monitored to prevent unauthorized access. If confirmed malicious, an attacker could exploit this to gain privileged access, potentially leading to unauthorized actions, d…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "As part of legitimate administrative behavior, users may be assigned PIM roles. Filter as needed",
              "refs": "https://learn.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-configure, https://learn.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-how-to-activate-role, https://microsoft.github.io/Azure-Threat-Research-Matrix/PrivilegeEscalation/AZT401/AZT401/",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD PIM Role Assigned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD PIM Role Assigned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: As part of legitimate administrative behavior, users may be assigned PIM roles. Filter as needed\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
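\n\nOptional Raw-Event Spot Check\n\nTo see recent PIM eligibility assignments in the raw audit feed, a wildcarded operation-name search is a reasonable starting point. This is a sketch only: the wildcard pattern, the index name `azuread`, and the field paths are assumptions, since PIM operation names vary slightly across tenants.\n\n```spl\nindex=azuread sourcetype=azure:monitor:aad category=AuditLogs operationName=\"Add eligible member to role*\"\n| rename properties.initiatedBy.user.userPrincipalName as initiator, properties.targetResources{}.userPrincipalName as target_user\n| stats count min(_time) as firstTime max(_time) as lastTime by initiator, target_user\n| convert ctime(firstTime) ctime(lastTime)\n```",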
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD PIM Role Assigned so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.156",
              "n": "Azure AD PIM Role Assignment Activated",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the activation of an Azure AD Privileged Identity Management (PIM) role. It leverages Azure Active Directory events to identify when a user activates a PIM role assignment, indicated by the \"Add member to role completed (PIM activation)\" operation. Monitoring this activity is crucial as PIM roles grant elevated privileges, and unauthorized activation could indicate an adversary attempting to gain privileged access. If confirmed malicious, this could lead to unautho…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "As part of legitimate administrative behavior, users may activate PIM roles. Filter as needed",
              "refs": "https://learn.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-configure, https://learn.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-how-to-activate-role, https://microsoft.github.io/Azure-Threat-Research-Matrix/PrivilegeEscalation/AZT401/AZT401/",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD PIM Role Assignment Activated\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD PIM Role Assignment Activated\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: As part of legitimate administrative behavior, users may activate PIM roles. Filter as needed\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
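\n\nOptional Raw-Event Spot Check\n\nPIM activations appear in the audit log as \"Add member to role completed (PIM activation)\" operations. The sketch below assumes `index=azuread` and standard azure:monitor:aad field paths; rename both to match your deployment.\n\n```spl\nindex=azuread sourcetype=azure:monitor:aad category=AuditLogs operationName=\"Add member to role completed (PIM activation)\"\n| rename properties.initiatedBy.user.userPrincipalName as initiator, properties.targetResources{}.displayName as role\n| stats count min(_time) as firstTime max(_time) as lastTime by initiator, role\n| convert ctime(firstTime) ctime(lastTime)\n```",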
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD PIM Role Assignment Activated so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.157",
              "n": "Azure AD Privileged Authentication Administrator Role Assigned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the assignment of the Privileged Authentication Administrator role to an Azure AD user. It leverages Azure Active Directory audit logs to identify when this specific role is assigned. This activity is significant because users in this role can set or reset authentication methods for any user, including those in privileged roles like Global Administrators. If confirmed malicious, an attacker could change credentials and assume the identity and permissions of high-pr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Add member to role",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Add member to role ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may legitimately assign the Privileged Authentication Administrator role as part of administrative tasks. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/azure/active-directory/roles/permissions-reference#privileged-authentication-administrator, https://posts.specterops.io/azure-privilege-escalation-via-azure-api-permissions-abuse-74aee1006f48, https://learn.microsoft.com/en-us/azure/active-directory/roles/permissions-reference",
              "mitre": [
                "T1003.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Privileged Authentication Administrator Role Assigned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Add member to role. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Privileged Authentication Administrator Role Assigned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may legitimately assign the Privileged Authentication Administrator role as part of administrative tasks. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
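\n\nOptional Raw-Event Spot Check\n\nSince this role is rarely assigned, a simple keyword-scoped search over \"Add member to role\" events is often enough to enumerate every assignment. Minimal sketch; `index=azuread` and the field paths are assumptions to adapt to your ingestion.\n\n```spl\nindex=azuread sourcetype=azure:monitor:aad category=AuditLogs operationName=\"Add member to role\" \"Privileged Authentication Administrator\"\n| rename properties.initiatedBy.user.userPrincipalName as initiator, properties.targetResources{}.userPrincipalName as target_user\n| stats count min(_time) as firstTime max(_time) as lastTime by initiator, target_user\n| convert ctime(firstTime) ctime(lastTime)\n```",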
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Privileged Authentication Administrator Role Assigned so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.158",
              "n": "Azure AD Privileged Role Assigned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the assignment of privileged Azure Active Directory roles to a user. It leverages Azure AD audit logs, specifically monitoring the \"Add member to role\" operation. This activity is significant as adversaries may assign privileged roles to compromised accounts to maintain persistence within the Azure AD environment. If confirmed malicious, this could allow attackers to escalate privileges, access sensitive information, and maintain long-term control over the Azure AD…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Add member to role",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Add member to role ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators will legitimately assign the privileged roles users as part of administrative tasks. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/azure/active-directory/roles/concept-understand-roles, https://learn.microsoft.com/en-us/azure/active-directory/roles/permissions-reference, https://adsecurity.org/?p=4277, https://www.mandiant.com/resources/detecting-microsoft-365-azure-active-directory-backdoors, https://learn.microsoft.com/en-us/azure/active-directory/roles/security-planning, https://attack.mitre.org/techniques/T1098/003/",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Privileged Role Assigned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Add member to role. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Privileged Role Assigned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators will legitimately assign the privileged roles users as part of administrative tasks. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
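The tuning guidance above mentions lookup-based suppressions for known-good activity. A hedged sketch of that pattern follows; the `approved_role_admins` lookup and its `user` and `approved` fields are hypothetical placeholders you would create and populate yourself, and the clause is intended to be appended near the end of the correlation search, before risk events are written.

```spl
... existing detection search ...
| lookup approved_role_admins user OUTPUT approved
| where isnull(approved)
```

Events initiated by users listed in the lookup are dropped before risk attribution; all other role assignments continue to generate risk, which preserves coverage while silencing routine administration.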
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Privileged Role Assigned so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.159",
              "n": "Azure AD Privileged Role Assigned to Service Principal",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the assignment of privileged roles to service principals in Azure Active Directory (AD). It leverages the AuditLogs log category from ingested Azure AD events. This activity is significant because assigning elevated permissions to non-human entities can lead to unauthorized access or malicious activities. If confirmed malicious, attackers could exploit these service principals to gain elevated access to Azure resources, potentially compromising sensitive data and c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Add member to role",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$initiatedBy$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Add member to role ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may legitimately assign the privileged roles to Service Principals as part of administrative tasks. Filter as needed.",
              "refs": "https://posts.specterops.io/azure-privilege-escalation-via-service-principal-abuse-210ae2be2a5",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Privileged Role Assigned to Service Principal\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Add member to role. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Privileged Role Assigned to Service Principal\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may legitimately assign the privileged roles to Service Principals as part of administrative tasks. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
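For ad-hoc triage of the underlying events, a raw search against the Azure AD audit feed can complement the Risk drilldown. This is a sketch under stated assumptions: `index=azure` and `sourcetype=azure:monitor:aad` reflect a typical Splunk Add-on for Microsoft Cloud Services setup, and the `properties.*` paths follow the standard Azure AD audit log JSON schema; verify all three against your own ingestion.

```spl
index=azure sourcetype=azure:monitor:aad operationName="Add member to role"
| stats count min(_time) as firstTime max(_time) as lastTime
  values(properties.targetResources{}.displayName) as targets
  by properties.initiatedBy.user.userPrincipalName
```

Reviewing initiators and targets side by side makes it easier to spot assignments made outside normal change processes before risk even accumulates.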
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Privileged Role Assigned to Service Principal so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.160",
              "n": "Azure AD Service Principal Created",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a Service Principal in an Azure AD environment. It leverages Azure Active Directory events ingested through EventHub, specifically monitoring the \"Add service principal\" operation. This activity is significant because Service Principals can be used by adversaries to establish persistence and bypass multi-factor authentication and conditional access policies. If confirmed malicious, this could allow attackers to maintain single-factor access to the A…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Add service principal",
              "q": "# Shared SPL: intentional — see UC-10.7.162\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$displayName$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Add service principal ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator may legitimately create Service Principal. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals, https://learn.microsoft.com/en-us/powershell/azure/create-azure-service-principal-azureps?view=azps-8.2.0, https://www.truesec.com/hub/blog/using-a-legitimate-application-to-create-persistence-and-initiate-email-campaigns, https://www.inversecos.com/2021/10/how-to-backdoor-azure-applications-and.html, https://attack.mitre.org/techniques/T1136/003/",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Service Principal Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Add service principal. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Service Principal Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator may legitimately create Service Principal. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
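Before relying on the correlation search, it can help to eyeball recent service principal creations directly. A minimal sketch, with the same hedged assumptions as the other raw searches in this catalog (`index=azure`, `sourcetype=azure:monitor:aad`, Azure AD audit field paths; adjust to your environment):

```spl
index=azure sourcetype=azure:monitor:aad operationName="Add service principal"
| stats count min(_time) as firstTime max(_time) as lastTime
  values(properties.targetResources{}.displayName) as new_spn
  by properties.initiatedBy.user.userPrincipalName
```

A small, stable set of initiators usually indicates automation pipelines; new or unexpected initiators are the rows worth investigating first.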
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Service Principal Created so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.161",
              "n": "Azure AD Service Principal Enumeration",
              "c": "high",
              "f": "intermediate",
              "v": "Detects enumeration of Azure AD service principals via Microsoft Graph API activity logs. Excessive enumeration of service principals may indicate reconnaissance activity by an attacker seeking to identify privileged applications and automation accounts for lateral movement or privilege escalation.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory MicrosoftGraphActivityLogs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory MicrosoftGraphActivityLogs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/SpecterOps/AzureHound, https://github.com/dirkjanm/ROADtools, https://splunkbase.splunk.com/app/3110, https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/Install/",
              "mitre": [
                "T1087.004",
                "T1526"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Service Principal Enumeration\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory MicrosoftGraphActivityLogs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.004, T1526. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Service Principal Enumeration\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
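Enumeration is fundamentally a volume signal, so baselining Graph API request rates per source IP is a useful sanity check alongside this detection. A sketch only: the index, the `mscs:azure:eventhub` sourcetype, and the 100-requests-per-10-minutes threshold are all assumptions (Graph activity logs often land under a generic Event Hub sourcetype); `requestUri` and `ipAddress` are standard MicrosoftGraphActivityLogs fields, but confirm they are extracted in your deployment.

```spl
index=azure sourcetype=mscs:azure:eventhub "MicrosoftGraphActivityLogs" requestUri="*/servicePrincipals*"
| bin _time span=10m
| stats count by _time, ipAddress
| where count > 100
```

Run this over a quiet week first to see what normal looks like, then set the threshold above your observed peak legitimate rate.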
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Service Principal Enumeration so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.162",
              "n": "Azure AD Service Principal Owner Added",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of a new owner to a Service Principal within an Azure AD tenant. It leverages Azure Active Directory events from the AuditLog log category to identify this activity. This behavior is significant because Service Principals do not support multi-factor authentication or conditional access policies, making them a target for adversaries seeking persistence or privilege escalation. If confirmed malicious, this activity could allow attackers to maintain acces…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Add owner to application",
              "q": "# Shared SPL: intentional — see UC-10.7.160\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$displayName$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Add owner to application ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator may legitimately add new owners for Service Principals. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/",
              "mitre": [
                "T1098"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Service Principal Owner Added\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Add owner to application. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Service Principal Owner Added\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator may legitimately add new owners for Service Principals. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
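A raw-event view of owner additions helps separate routine application administration from the suspicious changes this detection targets. A hedged sketch, using the documented \"Add owner to application\" operation and the same assumed index, sourcetype, and Azure AD audit field paths as the other sketches in this catalog:

```spl
index=azure sourcetype=azure:monitor:aad operationName="Add owner to application"
| stats count min(_time) as firstTime max(_time) as lastTime
  values(properties.targetResources{}.displayName) as target_app
  by properties.initiatedBy.user.userPrincipalName
```

Pay particular attention to owners added to applications holding privileged API permissions, since Service Principals bypass MFA and conditional access.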
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Service Principal Owner Added so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.163",
              "n": "Azure AD Service Principal Privilege Escalation",
              "c": "high",
              "f": "intermediate",
              "v": "Note: This UC shares its detection logic with UC-10.4.107; consider consolidating. This detection identifies when an Azure Service Principal elevates privileges by adding themself to a new app role assignment.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Add app role assignment to service principal",
              "q": "# Shared SPL: intentional — see UC-10.4.107 (parallel Entra detection via Office 365 management activity).\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$servicePrincipal$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The Splunk Add-on for Microsoft Cloud Services add-on is required to ingest EntraID audit logs via Azure EventHub. See reference for links for further details on how to onboard this log source.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/3110, https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/Install/, https://github.com/mvelazc0/BadZure, https://www.splunk.com/en_us/blog/security/hunting-m365-invaders-navigating-the-shadows-of-midnight-blizzard.html, https://posts.specterops.io/microsoft-breach-what-happened-what-should-azure-admins-do-da2b7e674ebc",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Service Principal Privilege Escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Add app role assignment to service principal. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Service Principal Privilege Escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
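When running the Risk drilldown above ad hoc (outside ES, where the `$servicePrincipal$` token is not substituted for you), replace the token with a concrete entity name. A minimal sketch built from this use case's own drilldown; the entity value is a placeholder, and `risk_score` is a standard Risk data model field:

```spl
| from datamodel Risk.All_Risk
| search normalized_risk_object IN ("example-service-principal") starthoursago=168
| stats count sum(risk_score) as total_risk values(search_name) as detections
  by normalized_risk_object
```

Summing `risk_score` alongside the contributing detection names shows at a glance how close the entity is to the Risk Notable threshold.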
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Service Principal Privilege Escalation so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.164",
              "n": "Azure AD Successful PowerShell Authentication",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a successful authentication event against an Azure AD tenant using PowerShell cmdlets. This detection leverages Azure AD SignInLogs to identify successful logins where the appDisplayName is \"Microsoft Azure PowerShell.\" This activity is significant because it is uncommon for regular, non-administrative users to authenticate using PowerShell, and it may indicate enumeration and discovery techniques by an attacker. If confirmed malicious, this activity could allow…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrative users will likely use PowerShell commandlets to troubleshoot and maintain the environment. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1078/004/, https://learn.microsoft.com/en-us/powershell/module/azuread/connect-azuread?view=azureadps-2.0, https://securitycafe.ro/2022/04/29/pentesting-azure-recon-techniques/, https://github.com/swisskyrepo/PayloadsAllTheThings/blob/master/Methodology%20and%20Resources/Cloud%20-%20Azure%20Pentest.md",
              "mitre": [
                "T1078.004",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Successful PowerShell Authentication\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Successful PowerShell Authentication\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrative users will likely use PowerShell commandlets to troubleshoot and maintain the environment. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
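              "qs": "```Sketch, not ESCU-shipped content: a tstats form of the generic Risk drilldown shown in q, for environments with an accelerated Risk data model. Assumes Splunk 9.0 or later for inline comments and the ESCU security_content_ctime macro; verify the All_Risk field names against your CIM version before use.```\n| tstats summariesonly=false count min(_time) as firstTime max(_time) as lastTime values(source) as \"Search Name\" from datamodel=Risk.All_Risk where All_Risk.normalized_risk_object IN (\"$user$\") earliest=-168h by All_Risk.normalized_risk_object\n| rename All_Risk.normalized_risk_object as normalized_risk_object\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`",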
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Successful PowerShell Authentication so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.165",
              "n": "Azure AD Successful Single-Factor Authentication",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a successful single-factor authentication event against Azure Active Directory. It leverages Azure SignInLogs data, specifically focusing on events where single-factor authentication succeeded. This activity is significant as it may indicate a misconfiguration, policy violation, or potential account takeover attempt. If confirmed malicious, an attacker could gain unauthorized access to the account, potentially leading to data breaches, privilege escalation, or f…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although not recommended, certain users may be required without multi-factor authentication. Filter as needed",
              "refs": "https://attack.mitre.org/techniques/T1078/004/, https://learn.microsoft.com/en-us/azure/active-directory/authentication/concept-mfa-howitworks, https://www.forbes.com/sites/daveywinder/2020/07/08/new-dark-web-audit-reveals-15-billion-stolen-logins-from-100000-breaches-passwords-hackers-cybercrime/?sh=69927b2a180f",
              "mitre": [
                "T1078.004",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Successful Single-Factor Authentication\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Successful Single-Factor Authentication\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although not recommended, certain users may be required without multi-factor authentication. Filter as needed\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Successful Single-Factor Authentication so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.166",
              "n": "Azure AD Tenant Wide Admin Consent Granted",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where admin consent is granted to an application within an Azure AD tenant. It leverages Azure AD audit logs, specifically events related to the admin consent action within the ApplicationManagement category. This activity is significant because admin consent allows applications to access data across the entire tenant, potentially exposing vast amounts of organizational data. If confirmed malicious, an attacker could gain extensive and persistent acces…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Consent to application",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the latest version of Splunk Add-on for Microsoft Cloud Services from Splunkbase (https://splunkbase.splunk.com/app/3110/#/details). You must be ingesting Azure Active Directory events into your Splunk environment through an EventHub. This analytic was written to be used with the azure:monitor:aad sourcetype leveraging the Auditlogs log category.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may be granted tenant wide consent, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/003/, https://www.mandiant.com/resources/blog/remediation-and-hardening-strategies-for-microsoft-365-to-defend-against-unc2452, https://learn.microsoft.com/en-us/security/operations/incident-response-playbook-app-consent, https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/grant-admin-consent?pivots=portal, https://microsoft.github.io/Azure-Threat-Research-Matrix/Persistence/AZT501/AZT501-2/",
              "mitre": [
                "T1098.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Tenant Wide Admin Consent Granted\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Consent to application. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Tenant Wide Admin Consent Granted\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may be granted tenant wide consent, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD Tenant Wide Admin Consent Granted so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.7.166: Azure AD Tenant Wide Admin Consent Granted.",
                  "ea": "Saved search 'UC-10.7.166' running on Azure Active Directory Consent to application, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.167",
              "n": "Azure AD User Consent Blocked for Risky Application",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where Azure AD has blocked a user's attempt to grant consent to a risky or potentially malicious application. This detection leverages Azure AD audit logs, focusing on user consent actions and system-driven blocks. Monitoring these blocked consent attempts is crucial as it highlights potential threats early on, indicating that a user might be targeted or that malicious applications are attempting to infiltrate the organization. If confirmed malicious, thi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Consent to application",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must install the latest version of Splunk Add-on for Microsoft Cloud Services from Splunkbase (https://splunkbase.splunk.com/app/3110/#/details). You must be ingesting Azure Active Directory events into your Splunk environment through an EventHub. This analytic was written to be used with the azure:monitor:aad sourcetype leveraging the AuditLog log category.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "UPDATE_KNOWN_FALSE_POSITIVES",
              "refs": "https://attack.mitre.org/techniques/T1528/, https://www.microsoft.com/en-us/security/blog/2022/09/22/malicious-oauth-applications-used-to-compromise-email-servers-and-spread-spam/, https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/protect-against-consent-phishing, https://learn.microsoft.com/en-us/defender-cloud-apps/investigate-risky-oauth, https://www.alteredsecurity.com/post/introduction-to-365-stealer, https://github.com/AlteredSecurity/365-Stealer",
              "mitre": [
                "T1528"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD User Consent Blocked for Risky Application\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Consent to application. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1528. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD User Consent Blocked for Risky Application\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: UPDATE_KNOWN_FALSE_POSITIVES\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD User Consent Blocked for Risky Application so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.7.167: Azure AD User Consent Blocked for Risky Application.",
                  "ea": "Saved search 'UC-10.7.167' running on Azure Active Directory Consent to application, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.168",
              "n": "Azure AD User Consent Denied for OAuth Application",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where a user has denied consent to an OAuth application seeking permissions within the Azure AD environment. This detection leverages Azure AD's audit logs, specifically focusing on user consent actions with error code 65004. Monitoring denied consent actions is significant as it can indicate users recognizing potentially suspicious or untrusted applications. If confirmed malicious, this activity could suggest attempts by unauthorized applications to g…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Sign-in activity",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Sign-in activity ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Users may deny consent for legitimate applications by mistake, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1528/, https://www.microsoft.com/en-us/security/blog/2022/09/22/malicious-oauth-applications-used-to-compromise-email-servers-and-spread-spam/, https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/protect-against-consent-phishing, https://learn.microsoft.com/en-us/defender-cloud-apps/investigate-risky-oauth, https://www.alteredsecurity.com/post/introduction-to-365-stealer, https://github.com/AlteredSecurity/365-Stealer",
              "mitre": [
                "T1528"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD User Consent Denied for OAuth Application\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Sign-in activity. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1528. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD User Consent Denied for OAuth Application\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Users may deny consent for legitimate applications by mistake, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD User Consent Denied for OAuth Application so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.7.168: Azure AD User Consent Denied for OAuth Application.",
                  "ea": "Saved search 'UC-10.7.168' running on Azure Active Directory Sign-in activity, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.169",
              "n": "Azure AD User Enabled And Password Reset",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an Azure AD user enabling a previously disabled account and resetting its password within 2 minutes. It uses Azure Active Directory events to identify this sequence of actions. This activity is significant because it may indicate an adversary with administrative access attempting to establish a backdoor identity within the Azure AD tenant. If confirmed malicious, this could allow the attacker to maintain persistent access, escalate privileges, and potentially exfil…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Enable account, Azure Active Directory Reset password (by admin), Azure Active Directory Update user",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Enable account, Azure Active Directory Reset password (by admin), Azure Active Directory Update user ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While not common, Administrators may enable accounts and reset their passwords for legitimate reasons. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/",
              "mitre": [
                "T1098"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD User Enabled And Password Reset\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Enable account, Azure Active Directory Reset password (by admin), Azure Active Directory Update user. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD User Enabled And Password Reset\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While not common, Administrators may enable accounts and reset their passwords for legitimate reasons. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD User Enabled And Password Reset so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.170",
              "n": "Azure AD User ImmutableId Attribute Updated",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the modification of the SourceAnchor (ImmutableId) attribute for an Azure Active Directory user. This detection leverages Azure AD audit logs, specifically monitoring the \"Update user\" operation and changes to the SourceAnchor attribute. This activity is significant as it is a step in setting up an Azure AD identity federation backdoor, allowing an adversary to establish persistence. If confirmed malicious, the attacker could impersonate any user, bypassing pass…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory Update user",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory Update user ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The SourceAnchor (also called ImmutableId) Azure AD attribute has legitimate uses for directory synchronization. Investigate and filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/azure/active-directory/hybrid/plan-connect-design-concepts, https://www.mandiant.com/resources/remediation-and-hardening-strategies-microsoft-365-defend-against-apt29-v13, https://o365blog.com/post/federation-vulnerability/, https://www.inversecos.com/2021/11/how-to-detect-azure-active-directory.html, https://www.mandiant.com/resources/blog/detecting-microsoft-365-azure-active-directory-backdoors, https://attack.mitre.org/techniques/T1098/",
              "mitre": [
                "T1098"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD User ImmutableId Attribute Updated\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory Update user. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD User ImmutableId Attribute Updated\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The SourceAnchor (also called ImmutableId) Azure AD attribute has legitimate uses for directory synchronization. Investigate and filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure AD User ImmutableId Attribute Updated so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.171",
              "n": "Azure Automation Account Created",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a new Azure Automation account within an Azure tenant. It leverages Azure Audit events, specifically the Azure Activity log category, to identify when an account is created or updated. This activity is significant because Azure Automation accounts can be used to automate tasks and orchestrate actions across Azure and on-premise environments. If an attacker creates an Automation account with elevated privileges, they could maintain persistence, execu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Audit Create or Update an Azure Automation account",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Audit Create or Update an Azure Automation account ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may legitimately create Azure Automation accounts. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/azure/automation/overview, https://learn.microsoft.com/en-us/azure/automation/automation-create-standalone-account?tabs=azureportal, https://learn.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker, https://www.inversecos.com/2021/12/how-to-detect-malicious-azure.html, https://www.netspi.com/blog/technical/cloud-penetration-testing/maintaining-azure-persistence-via-automation-accounts/, https://microsoft.github.io/Azure-Threat-Research-Matrix/Persistence/AZT503/AZT503-3/, https://attack.mitre.org/techniques/T1136/003/",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure Automation Account Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Audit Create or Update an Azure Automation account. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure Automation Account Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may legitimately create Azure Automation accounts. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure Automation Account Created so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.172",
              "n": "Azure Automation Runbook Created",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a new Azure Automation Runbook within an Azure tenant. It leverages Azure Audit events, specifically the Azure Activity log category, to identify when a new Runbook is created or updated. This activity is significant because adversaries with privileged access can use Runbooks to maintain persistence, escalate privileges, or execute malicious code. If confirmed malicious, this could lead to unauthorized actions such as creating Global Administrators,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Audit Create or Update an Azure Automation Runbook",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Audit Create or Update an Azure Automation Runbook ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may legitimately create Azure Automation Runbooks. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/azure/automation/overview, https://learn.microsoft.com/en-us/azure/automation/automation-runbook-types, https://learn.microsoft.com/en-us/azure/automation/manage-runbooks, https://www.inversecos.com/2021/12/how-to-detect-malicious-azure.html, https://www.netspi.com/blog/technical/cloud-penetration-testing/maintaining-azure-persistence-via-automation-accounts/, https://microsoft.github.io/Azure-Threat-Research-Matrix/Persistence/AZT503/AZT503-3/, https://attack.mitre.org/techniques/T1136/003/",
              "mitre": [
                "T1136.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure Automation Runbook Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Audit Create or Update an Azure Automation Runbook. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure Automation Runbook Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may legitimately create Azure Automation Runbooks. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure Automation Runbook Created so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.173",
              "n": "Azure Runbook Webhook Created",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a new Automation Runbook Webhook within an Azure tenant. It leverages Azure Audit events, specifically the \"Create or Update an Azure Automation webhook\" operation, to identify this activity. This behavior is significant because Webhooks can trigger Automation Runbooks via unauthenticated URLs exposed to the Internet, posing a security risk. If confirmed malicious, an attacker could use this to execute code, create users, or maintain persistence wit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Audit Create or Update an Azure Automation webhook",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Audit Create or Update an Azure Automation webhook ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may legitimately create Azure Runbook Webhooks. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/azure/automation/overview, https://learn.microsoft.com/en-us/azure/automation/automation-runbook-types, https://learn.microsoft.com/en-us/azure/automation/automation-webhooks?tabs=portal, https://www.inversecos.com/2021/12/how-to-detect-malicious-azure.html, https://www.netspi.com/blog/technical/cloud-penetration-testing/maintaining-azure-persistence-via-automation-accounts/, https://microsoft.github.io/Azure-Threat-Research-Matrix/Persistence/AZT503/AZT503-3/, https://attack.mitre.org/techniques/T1078/004/",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure Runbook Webhook Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Audit Create or Update an Azure Automation webhook. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure Runbook Webhook Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may legitimately create Azure Runbook Webhooks. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Azure Runbook Webhook Created so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.174",
              "n": "Cloud Compute Instance Created By Previously Unseen User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of cloud compute instances by users who have not previously created them. It leverages data from the Change data model, focusing on 'create' actions by users, and cross-references with a baseline of known user activities. This activity is significant as it may indicate unauthorized access or misuse of cloud resources by new or compromised accounts. If confirmed malicious, attackers could deploy unauthorized compute instances, leading to potential da…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's possible that a user will start to create compute instances for the first time, for any number of reasons. Verify with the user launching instances that this is the intended behavior.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cloud Compute Instance Created By Previously Unseen User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cloud Compute Instance Created By Previously Unseen User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's possible that a user will start to create compute instances for the first time, for any number of reasons. Verify with the user launching instances that this is the intended behavior.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cloud Compute Instance Created By Previously Unseen User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.175",
              "n": "Cloud Compute Instance Created In Previously Unused Region",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a cloud compute instance in a region that has not been previously used within the last hour. It leverages cloud infrastructure logs and compares the regions of newly created instances against a lookup file of historically used regions. This activity is significant because the creation of instances in new regions can indicate unauthorized or suspicious activity, such as an attacker attempting to evade detection or establish a foothold in a less monit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's possible that a user has unknowingly started an instance in a new region. Please verify that this activity is legitimate.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1535"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cloud Compute Instance Created In Previously Unused Region\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1535. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cloud Compute Instance Created In Previously Unused Region\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's possible that a user has unknowingly started an instance in a new region. Please verify that this activity is legitimate.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cloud Compute Instance Created In Previously Unused Region so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.176",
              "n": "Cloud Compute Instance Created With Previously Unseen Instance Type",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of EC2 instances with previously unseen instance types. It leverages Splunk's tstats command to analyze data from the Change data model, identifying instance types that have not been previously recorded. This activity is significant for a SOC because it may indicate unauthorized or suspicious activity, such as an attacker attempting to create instances for malicious purposes. If confirmed malicious, this could lead to unauthorized access, data exfiltra…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that an admin will create a new system using a new instance type that has never been used before. Verify with the creator that they intended to create the system with the new instance type.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "user",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cloud Compute Instance Created With Previously Unseen Instance Type\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cloud Compute Instance Created With Previously Unseen Instance Type\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that an admin will create a new system using a new instance type that has never been used before. Verify with the creator that they intended to create the system with the new instance type.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cloud Compute Instance Created With Previously Unseen Instance Type so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.177",
              "n": "Cloud Instance Modified By Previously Unseen User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies cloud instances being modified by users who have not previously modified them. It leverages data from the Change data model, focusing on successful modifications of EC2 instances. This activity is significant because it can indicate unauthorized or suspicious changes by potentially compromised or malicious users. If confirmed malicious, this could lead to unauthorized access, configuration changes, or potential disruption of cloud services, posing a significant …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's possible that a new user will start to modify EC2 instances when they haven't before for any number of reasons. Verify with the user that is modifying instances that this is the intended behavior.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cloud Instance Modified By Previously Unseen User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cloud Instance Modified By Previously Unseen User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's possible that a new user will start to modify EC2 instances when they haven't before for any number of reasons. Verify with the user that is modifying instances that this is the intended behavior.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
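              "_exampleSpl": "Optional companion query, a minimal sketch for checking how much cumulative risk each entity has accrued in the default ES risk index (rename `index=risk` if your deployment routes risk events elsewhere; the threshold of 100 is illustrative, match it to your Risk Notable configuration):\n\n```spl\nindex=risk\n| stats sum(risk_score) as total_risk values(search_name) as contributing_detections by risk_object\n| where total_risk > 100\n| sort -total_risk\n```\n\nThis is useful when validating that this detection's risk events are being attributed to the expected user entities.",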
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cloud Instance Modified By Previously Unseen User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.178",
              "n": "Cloud Provisioning Activity From Previously Unseen City",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects cloud provisioning activities originating from previously unseen cities. It leverages cloud infrastructure logs and compares the geographic location of the source IP address against a baseline of known locations. This activity is significant as it may indicate unauthorized access or misuse of cloud resources from an unexpected location. If confirmed malicious, this could lead to unauthorized resource creation, potential data exfiltration, or further compromise of c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This is a strictly behavioral search, so we define \\\"false positive\\\" slightly differently. Every time this fires, it will accurately reflect the first occurrence in the time period you're searching within, plus what is stored in the cache feature. But while there are really no \\\"false positives\\\" in a traditional sense, there is definitely lots of noise.\\nThis search will fire any time a new IP address is seen in the **GeoIP** database for any kind of provisioning activity. If you typically do all provisioning from tools inside of your country, there should be few false positives. If you are located in countries where the free version of **MaxMind GeoIP** that ships by default with Splunk has weak resolution (particularly small countries in less economically powerful regions), this may be much less valuable to you.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cloud Provisioning Activity From Previously Unseen City\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cloud Provisioning Activity From Previously Unseen City\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This is a strictly behavioral search, so we define \\\"false positive\\\" slightly differently. Every time this fires, it will accurately reflect the first occurrence in the time period you're searching within, plus what is stored in the cache feature. But while there are really no \\\"false positives\\\" in a traditional sense, there is definitely lots of noise.\\nThis search will fire any time a new IP address is seen in the **GeoIP** database for any kind of provisioning activity. If you typically do all provisioning from tools inside of your country, there should be few false positives. If you are located in countries where the free version of **MaxMind GeoIP** that ships by default with Splunk has weak resolution (particularly small countries in less economically powerful regions), this may be much less valuable to you.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
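              "_exampleSpl": "Optional companion query, a minimal sketch of the underlying geo-baselining idea (the index, sourcetype, and eventName filter are assumptions; align them with your CloudTrail ingestion). It resolves each provisioning source IP to a city with `iplocation` and surfaces cities first observed in the last 24 hours:\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=RunInstances\n| iplocation sourceIPAddress\n| stats earliest(_time) as firstTime latest(_time) as lastTime count by City\n| where firstTime >= relative_time(now(), \"-24h@h\")\n```\n\nThe packaged ESCU detection implements the same pattern with a persistent lookup cache; use Content Management to view its exact logic.",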
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cloud Provisioning Activity From Previously Unseen City so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.179",
              "n": "Cloud Provisioning Activity From Previously Unseen Country",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects cloud provisioning activities originating from previously unseen countries. It leverages cloud infrastructure logs and compares the geographic location of the source IP address against a baseline of known locations. This activity is significant as it may indicate unauthorized access or potential compromise of cloud resources. If confirmed malicious, an attacker could gain control over cloud assets, leading to data breaches, service disruptions, or further infiltrat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This is a strictly behavioral search, so we define \\\"false positive\\\" slightly differently. Every time this fires, it will accurately reflect the first occurrence in the time period you're searching within, plus what is stored in the cache feature. But while there are really no \\\"false positives\\\" in a traditional sense, there is definitely lots of noise.\\nThis search will fire any time a new IP address is seen in the **GeoIP** database for any kind of provisioning activity. If you typically do all provisioning from tools inside of your country, there should be few false positives. If you are located in countries where the free version of **MaxMind GeoIP** that ships by default with Splunk has weak resolution (particularly small countries in less economically powerful regions), this may be much less valuable to you.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cloud Provisioning Activity From Previously Unseen Country\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cloud Provisioning Activity From Previously Unseen Country\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This is a strictly behavioral search, so we define \\\"false positive\\\" slightly differently. Every time this fires, it will accurately reflect the first occurrence in the time period you're searching within, plus what is stored in the cache feature. But while there are really no \\\"false positives\\\" in a traditional sense, there is definitely lots of noise.\\nThis search will fire any time a new IP address is seen in the **GeoIP** database for any kind of provisioning activity. If you typically do all provisioning from tools inside of your country, there should be few false positives. If you are located in countries where the free version of **MaxMind GeoIP** that ships by default with Splunk has weak resolution (particularly small countries in less economically powerful regions), this may be much less valuable to you.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
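              "_exampleSpl": "Optional companion query, a minimal sketch of the lookup-based suppression pattern described under tuning (the lookup `approved_provisioning_countries` is hypothetical; create and populate it with your sanctioned countries). Provisioning events from approved countries are filtered out before review:\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=RunInstances\n| iplocation sourceIPAddress\n| search NOT [| inputlookup approved_provisioning_countries | fields Country]\n| stats count by Country, sourceIPAddress\n```\n\nKeeping suppressions in a lookup rather than hard-coding them in SPL makes them auditable and easy to update.",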
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cloud Provisioning Activity From Previously Unseen Country so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.180",
              "n": "Cloud Provisioning Activity From Previously Unseen IP Address",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects cloud provisioning activities originating from previously unseen IP addresses. It leverages cloud infrastructure logs to identify events where resources are created or started, and cross-references these with a baseline of known IP addresses. This activity is significant as it may indicate unauthorized access or potential misuse of cloud resources. If confirmed malicious, an attacker could gain unauthorized control over cloud resources, leading to data breaches, se…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$object_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This is a strictly behavioral search, so we define \\\"false positive\\\" slightly differently. Every time this fires, it will accurately reflect the first occurrence in the time period you're searching within, plus what is stored in the cache feature. But while there are really no \\\"false positives\\\" in a traditional sense, there is definitely lots of noise.\\nThis search will fire any time a new IP address is seen in the **GeoIP** database for any kind of provisioning activity. If you typically do all provisioning from tools inside of your country, there should be few false positives. If you are located in countries where the free version of **MaxMind GeoIP** that ships by default with Splunk has weak resolution (particularly small countries in less economically powerful regions), this may be much less valuable to you.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cloud Provisioning Activity From Previously Unseen IP Address\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cloud Provisioning Activity From Previously Unseen IP Address\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This is a strictly behavioral search, so we define \\\"false positive\\\" slightly differently. Every time this fires, it will accurately reflect the first occurrence in the time period you're searching within, plus what is stored in the cache feature. But while there are really no \\\"false positives\\\" in a traditional sense, there is definitely lots of noise.\\nThis search will fire any time a new IP address is seen in the **GeoIP** database for any kind of provisioning activity. If you typically do all provisioning from tools inside of your country, there should be few false positives. If you are located in countries where the free version of **MaxMind GeoIP** that ships by default with Splunk has weak resolution (particularly small countries in less economically powerful regions), this may be much less valuable to you.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
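              "_exampleSpl": "Optional companion query, a minimal sketch of the baseline-caching pattern this detection family relies on (the lookup `previously_seen_provisioning_ips` is hypothetical; the ESCU content ships its own baseline searches). It merges newly observed provisioning source IPs into the cache, keeping the earliest sighting per address:\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=RunInstances\n| stats earliest(_time) as firstTime by sourceIPAddress\n| inputlookup append=t previously_seen_provisioning_ips\n| stats min(firstTime) as firstTime by sourceIPAddress\n| outputlookup previously_seen_provisioning_ips\n```\n\nSchedule a search like this ahead of the detection so \"previously unseen\" has a populated baseline to compare against.",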
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cloud Provisioning Activity From Previously Unseen IP Address so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.181",
              "n": "Cloud Provisioning Activity From Previously Unseen Region",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects cloud provisioning activities originating from previously unseen regions. It leverages cloud infrastructure logs to identify events where resources are started or created, and cross-references these with a baseline of known regions. This activity is significant as it may indicate unauthorized access or misuse of cloud resources from unfamiliar locations. If confirmed malicious, this could lead to unauthorized resource creation, potential data exfiltration, or furth…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This is a strictly behavioral search, so we define \\\"false positive\\\" slightly differently. Every time this fires, it will accurately reflect the first occurrence in the time period you're searching within, plus what is stored in the cache feature. But while there are really no \\\"false positives\\\" in a traditional sense, there is definitely lots of noise.\\nThis search will fire any time a new IP address is seen in the **GeoIP** database for any kind of provisioning activity. If you typically do all provisioning from tools inside of your country, there should be few false positives. If you are located in countries where the free version of **MaxMind GeoIP** that ships by default with Splunk has weak resolution (particularly small countries in less economically powerful regions), this may be much less valuable to you.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cloud Provisioning Activity From Previously Unseen Region\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cloud Provisioning Activity From Previously Unseen Region\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This is a strictly behavioral search, so we define \\\"false positive\\\" slightly differently. Every time this fires, it will accurately reflect the first occurrence in the time period you're searching within, plus what is stored in the cache feature. But while there are really no \\\"false positives\\\" in a traditional sense, there is definitely lots of noise.\\nThis search will fire any time a new IP address is seen in the **GeoIP** database for any kind of provisioning activity. If you typically do all provisioning from tools inside of your country, there should be few false positives. If you are located in countries where the free version of **MaxMind GeoIP** that ships by default with Splunk has weak resolution (particularly small countries in less economically powerful regions), this may be much less valuable to you.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
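              "_exampleSpl": "Optional companion query, a minimal sketch using the CloudTrail `awsRegion` field directly (index and sourcetype are assumptions). It lists regions whose first observed provisioning activity falls inside the last 24 hours, which approximates what this detection flags:\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=RunInstances\n| stats earliest(_time) as firstTime count by awsRegion\n| where firstTime >= relative_time(now(), \"-24h@h\")\n```\n\nNote that over a short search window every region looks \"new\"; the packaged detection avoids this by persisting its baseline in a lookup cache.",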
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cloud Provisioning Activity From Previously Unseen Region so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.182",
              "n": "Cloud Security Groups Modifications by User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies unusual modifications to security groups in your cloud environment by users, focusing on actions such as modifications, deletions, or creations over 30-minute intervals. It leverages cloud infrastructure logs and calculates the standard deviation for each user, using the 3-sigma rule to detect anomalies. This activity is significant as it may indicate a compromised account or insider threat. If confirmed malicious, attackers could alter security group configurat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that legitimate user/admin may modify a number of security groups",
              "refs": "https://attack.mitre.org/techniques/T1578/005/",
              "mitre": [
                "T1578.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cloud Security Groups Modifications by User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1578.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cloud Security Groups Modifications by User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that legitimate user/admin may modify a number of security groups\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
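              "_exampleSpl": "Optional companion query, a minimal sketch of the 3-sigma anomaly logic summarized above (index, sourcetype, and the `user` field are assumptions; CloudTrail deployments often use `userIdentity.arn` or `userName`). It counts security group changes per user in 30-minute buckets and flags buckets more than three standard deviations above that user's average:\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName IN (\"AuthorizeSecurityGroupIngress\", \"RevokeSecurityGroupIngress\", \"CreateSecurityGroup\", \"DeleteSecurityGroup\")\n| bin _time span=30m\n| stats count as changes by _time, user\n| eventstats avg(changes) as avg_changes stdev(changes) as stdev_changes by user\n| where changes > avg_changes + 3 * stdev_changes\n```\n\n`eventstats` keeps the per-bucket rows while attaching each user's mean and standard deviation, which is what makes the per-user 3-sigma comparison possible.",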
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cloud Security Groups Modifications by User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.183",
              "n": "Detect AWS Console Login by New User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects AWS console login events by new users. It leverages AWS CloudTrail events and compares them against a lookup file of previously seen users based on ARN values. This detection is significant because a new user logging into the AWS console could indicate the creation of new accounts or potential unauthorized access. If confirmed malicious, this activity could lead to unauthorized access to AWS resources, data exfiltration, or further exploitation within the cloud env…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| tstats earliest(_time) as firstTime latest(_time) as lastTime FROM datamodel=Authentication\n      WHERE Authentication.signature=ConsoleLogin\n      BY Authentication.user\n    | `drop_dm_object_name(Authentication)`\n    | join max=1 user type=outer [\n    | inputlookup previously_seen_users_console_logins\n    | stats min(firstTime) as earliestseen\n      BY user]\n    | eval userStatus=if(earliestseen >= relative_time(now(), \"-24h@h\") OR isnull(earliestseen), \"First Time Logging into AWS Console\", \"Previously Seen User\")\n    | where userStatus=\"First Time Logging into AWS Console\"\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `detect_aws_console_login_by_new_user_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "When a legitimate new user logins for the first time, this activity will be detected. Check how old the account is and verify that the user activity is legitimate.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1552",
                "T1586.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect AWS Console Login by New User\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect AWS Console Login by New User\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: When a legitimate new user logins for the first time, this activity will be detected. Check how old the account is and verify that the user activity is legitimate.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the behavior in “Detect AWS Console Login by New User” using the same log fields the detection already uses, so the team can act before impact spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.184",
              "n": "Detect AWS Console Login by User from New City",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies AWS console login events by users from a new city within the last hour. It leverages AWS CloudTrail events and compares them against a lookup file of previously seen user locations. This activity is significant for a SOC as it may indicate unauthorized access or credential compromise, especially if the login originates from an unusual location. If confirmed malicious, this could lead to unauthorized access to AWS resources, data exfiltration, or further exploita…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| tstats earliest(_time) as firstTime latest(_time) as lastTime FROM datamodel=Authentication\n      WHERE Authentication.signature=ConsoleLogin\n      BY Authentication.user Authentication.src\n    | iplocation Authentication.src\n    | `drop_dm_object_name(Authentication)`\n    | rename City as justSeenCity\n    | table firstTime lastTime user justSeenCity\n    | join max=1 user type=outer [\n    | inputlookup previously_seen_users_console_logins\n    | rename City as previouslySeenCity\n    | stats min(firstTime) AS earliestseen\n      BY user previouslySeenCity\n    | fields earliestseen user previouslySeenCity]\n    | eval userCity=if(firstTime >= relative_time(now(), \"-24h@h\"), \"New City\",\"Previously Seen City\")\n    | where userCity = \"New City\"\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | table firstTime lastTime user previouslySeenCity justSeenCity userCity\n    | `detect_aws_console_login_by_user_from_new_city_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "When a legitimate new user logins for the first time, this activity will be detected. Check how old the account is and verify that the user activity is legitimate.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1535",
                "T1586.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect AWS Console Login by User from New City\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1535, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect AWS Console Login by User from New City\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: When a legitimate new user logins for the first time, this activity will be detected. Check how old the account is and verify that the user activity is legitimate.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the behavior in “Detect AWS Console Login by User from New City” using the same log fields the detection already uses, so the team can act before impact spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.185",
              "n": "Detect AWS Console Login by User from New Country",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies AWS console login events by users from a new country. It leverages AWS CloudTrail events and compares them against a lookup file of previously seen users and their login locations. This activity is significant because logins from new countries can indicate potential unauthorized access or compromised accounts. If confirmed malicious, this could lead to unauthorized access to AWS resources, data exfiltration, or further exploitation within the AWS environment.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| tstats earliest(_time) as firstTime latest(_time) as lastTime FROM datamodel=Authentication\n      WHERE Authentication.signature=ConsoleLogin\n      BY Authentication.user Authentication.src\n    | iplocation Authentication.src\n    | `drop_dm_object_name(Authentication)`\n    | rename Country as justSeenCountry\n    | table firstTime lastTime user justSeenCountry\n    | join max=1 user type=outer [\n    | inputlookup previously_seen_users_console_logins\n    | rename Country as previouslySeenCountry\n    | stats min(firstTime) AS earliestseen\n      BY user previouslySeenCountry\n    | fields earliestseen user previouslySeenCountry]\n    | eval userCountry=if(firstTime >= relative_time(now(), \"-24h@h\"), \"New Country\",\"Previously Seen Country\")\n    | where userCountry = \"New Country\"\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | table firstTime lastTime user previouslySeenCountry justSeenCountry userCountry\n    | `detect_aws_console_login_by_user_from_new_country_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "When a legitimate new user logins for the first time, this activity will be detected. Check how old the account is and verify that the user activity is legitimate.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1535",
                "T1586.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect AWS Console Login by User from New Country\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1535, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect AWS Console Login by User from New Country\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: When a legitimate new user logins for the first time, this activity will be detected. Check how old the account is and verify that the user activity is legitimate.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the behavior in “Detect AWS Console Login by User from New Country” using the same log fields the detection already uses, so the team can act before impact spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.186",
              "n": "Detect GCP Storage access from a new IP",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies access to GCP Storage buckets from new or previously unseen remote IP addresses. It leverages GCP Storage bucket-access logs ingested via Cloud Pub/Sub and compares current access events against a lookup table of previously seen IP addresses. This activity is significant as it may indicate unauthorized access or potential reconnaissance by an attacker. If confirmed malicious, this could lead to data exfiltration, unauthorized data manipulation, or further compro…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`google_gcp_pubsub_message`\n      | multikv\n      | rename sc_status_ as status\n      | rename cs_object_ as bucket_name\n      | rename c_ip_ as remote_ip\n      | rename cs_uri_ as request_uri\n      | rename cs_method_ as operation\n      | search status=\"\\\"200\\\"\"\n      | stats earliest(_time) as firstTime latest(_time) as lastTime\n        BY bucket_name remote_ip operation\n           request_uri\n      | table firstTime, lastTime, bucket_name, remote_ip, operation, request_uri\n      | inputlookup append=t previously_seen_gcp_storage_access_from_remote_ip\n      | stats min(firstTime) as firstTime, max(lastTime) as lastTime\n        BY bucket_name remote_ip operation\n           request_uri\n      | outputlookup previously_seen_gcp_storage_access_from_remote_ip\n      | eval newIP=if(firstTime >= relative_time(now(),\"-70m@m\"), 1, 0)\n      | where newIP=1\n      | eval first_time=strftime(firstTime,\"%m/%d/%y %H:%M:%S\")\n      | eval last_time=strftime(lastTime,\"%m/%d/%y %H:%M:%S\")\n      | table  first_time last_time bucket_name remote_ip operation request_uri\n      | `detect_gcp_storage_access_from_a_new_ip_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "GCP Storage buckets can be accessed from any IP (if the ACLs are open to allow it), as long as it can make a successful connection. This will be a false postive, since the search is looking for a new IP within the past two hours.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1530"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect GCP Storage access from a new IP\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1530. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect GCP Storage access from a new IP\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: GCP Storage buckets can be accessed from any IP (if the ACLs are open to allow it), as long as it can make a successful connection. This will be a false postive, since the search is looking for a new IP within the past two hours.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Detect GCP Storage access from a new IP' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.187",
              "n": "Detect New Open GCP Storage Buckets",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of new open/public GCP Storage buckets. It leverages GCP PubSub events, specifically monitoring for the `storage.setIamPermissions` method and checks if the `allUsers` member is added. This activity is significant because open storage buckets can expose sensitive data to the public, posing a severe security risk. If confirmed malicious, an attacker could access, modify, or delete data within the bucket, leading to data breaches and potential complia…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`google_gcp_pubsub_message` data.resource.type=gcs_bucket data.protoPayload.methodName=storage.setIamPermissions\n      | spath output=action path=data.protoPayload.serviceData.policyDelta.bindingDeltas{}.action\n      | spath output=user path=data.protoPayload.authenticationInfo.principalEmail\n      | spath output=location path=data.protoPayload.resourceLocation.currentLocations{}\n      | spath output=src path=data.protoPayload.requestMetadata.callerIp\n      | spath output=bucketName path=data.protoPayload.resourceName\n      | spath output=role path=data.protoPayload.serviceData.policyDelta.bindingDeltas{}.role\n      | spath output=member path=data.protoPayload.serviceData.policyDelta.bindingDeltas{}.member\n      | search (member=allUsers AND action=ADD)\n      | table  _time, bucketName, src, user, location, action, role, member\n      | search `detect_new_open_gcp_storage_buckets_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that a GCP admin has legitimately created a public bucket for a specific purpose. That said, GCP strongly advises against granting full control to the \"allUsers\" group.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1530"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect New Open GCP Storage Buckets\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1530. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect New Open GCP Storage Buckets\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that a GCP admin has legitimately created a public bucket for a specific purpose. That said, GCP strongly advises against granting full control to the \"allUsers\" group.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Detect New Open GCP Storage Buckets' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.188",
              "n": "Detect New Open S3 buckets",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of open/public S3 buckets in AWS. It detects this activity by analyzing AWS CloudTrail events for `PutBucketAcl` actions where the access control list (ACL) grants permissions to all users or authenticated users. This activity is significant because open S3 buckets can expose sensitive data to unauthorized access, leading to data breaches. If confirmed malicious, an attacker could read, write, or fully control the contents of the bucket, potentially…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_arn$\", \"$bucketName$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has legitimately created a public bucket for a specific purpose. That said, AWS strongly advises against granting full control to the \"All Users\" group.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1530"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect New Open S3 buckets\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1530. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect New Open S3 buckets\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has legitimately created a public bucket for a specific purpose. That said, AWS strongly advises against granting full control to the \"All Users\" group.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Detect New Open S3 buckets so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.189",
              "n": "Detect New Open S3 Buckets over AWS CLI",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of open/public S3 buckets via the AWS CLI. It leverages AWS CloudTrail logs to identify events where a user has set bucket permissions to allow access to \"AuthenticatedUsers\" or \"AllUsers.\" This activity is significant because open S3 buckets can expose sensitive data to unauthorized users, leading to data breaches. If confirmed malicious, an attacker could gain unauthorized access to potentially sensitive information stored in the S3 bucket, posing a …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "While this search has no known false positives, it is possible that an AWS admin has legitimately created a public bucket for a specific purpose. That said, AWS strongly advises against granting full control to the \"All Users\" group.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1530"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect New Open S3 Buckets over AWS CLI\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1530. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect New Open S3 Buckets over AWS CLI\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: While this search has no known false positives, it is possible that an AWS admin has legitimately created a public bucket for a specific purpose. That said, AWS strongly advises against granting full control to the \"All Users\" group.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this AWS and CloudTrail pattern so we can see risky API or login behavior while it can still be tied to a user or role in the console.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.190",
              "n": "Detect S3 access from a new IP",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies access to an S3 bucket from a new or previously unseen remote IP address. It leverages S3 bucket-access logs, specifically focusing on successful access events (http_status=200). This activity is significant because access from unfamiliar IP addresses could indicate unauthorized access or potential data exfiltration attempts. If confirmed malicious, this activity could lead to unauthorized data access, data theft, or further exploitation of the compromised S3 bu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`aws_s3_accesslogs` http_status=200  [search `aws_s3_accesslogs` http_status=200\n      | stats earliest(_time) as firstTime latest(_time) as lastTime\n        BY bucket_name remote_ip\n      | inputlookup append=t previously_seen_S3_access_from_remote_ip\n      | stats min(firstTime) as firstTime, max(lastTime) as lastTime\n        BY bucket_name remote_ip\n      | outputlookup previously_seen_S3_access_from_remote_ip\n      | eval newIP=if(firstTime >= relative_time(now(), \"-70m@m\"), 1, 0)\n      | where newIP=1\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | table bucket_name remote_ip]\n      | iplocation remote_ip\n      | rename remote_ip as src\n      | table _time bucket_name src City Country operation request_uri\n      | `detect_s3_access_from_a_new_ip_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "S3 buckets can be accessed from any IP, as long as it can make a successful connection. This will be a false postive, since the search is looking for a new IP within the past hour",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1530"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect S3 access from a new IP\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1530. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect S3 access from a new IP\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: S3 buckets can be accessed from any IP, as long as it can make a successful connection. This will be a false postive, since the search is looking for a new IP within the past hour\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Detect S3 access from a new IP' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.191",
              "n": "Detect Spike in S3 Bucket deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a spike in API activity related to the deletion of S3 buckets in your AWS environment. It leverages AWS CloudTrail logs to detect anomalies by comparing current deletion activity against a historical baseline. This activity is significant as unusual spikes in S3 bucket deletions could indicate malicious actions such as data exfiltration or unauthorized data destruction. If confirmed malicious, this could lead to significant data loss, disruption of services, and…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "AWS CloudTrail",
              "q": "`cloudtrail` eventName=DeleteBucket [search `cloudtrail` eventName=DeleteBucket\n      | spath output=arn path=userIdentity.arn\n      | stats count as apiCalls\n        BY arn\n      | inputlookup s3_deletion_baseline append=t\n      | fields - latestCount\n      | stats values(*) as *\n        BY arn\n      | rename apiCalls as latestCount\n      | eval newAvgApiCalls=avgApiCalls + (latestCount-avgApiCalls)/720\n      | eval newStdevApiCalls=sqrt(((pow(stdevApiCalls, 2)*719 + (latestCount-newAvgApiCalls)*(latestCount-avgApiCalls))/720))\n      | eval avgApiCalls=coalesce(newAvgApiCalls, avgApiCalls), stdevApiCalls=coalesce(newStdevApiCalls, stdevApiCalls), numDataPoints=if(isnull(latestCount), numDataPoints, numDataPoints+1)\n      | table arn, latestCount, numDataPoints, avgApiCalls, stdevApiCalls\n      | outputlookup s3_deletion_baseline\n      | eval dataPointThreshold = 15, deviationThreshold = 3\n      | eval isSpike=if((latestCount > avgApiCalls+deviationThreshold*stdevApiCalls) AND numDataPoints > dataPointThreshold, 1, 0)\n      | where isSpike=1\n      | rename arn as userIdentity.arn\n      | table userIdentity.arn]\n      | spath output=user userIdentity.arn\n      | spath output=bucketName path=requestParameters.bucketName\n      | stats values(bucketName) as bucketName, count as numberOfApiCalls, dc(eventName) as uniqueApisCalled\n        BY user\n      | `detect_spike_in_s3_bucket_deletion_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires AWS CloudTrail ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Based on the values of`dataPointThreshold` and `deviationThreshold`, the false positive rate may vary. Please modify this according the your environment.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1530"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Spike in S3 Bucket deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: AWS CloudTrail. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1530. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Spike in S3 Bucket deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Based on the values of`dataPointThreshold` and `deviationThreshold`, the false positive rate may vary. Please modify this according the your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Detect Spike in S3 Bucket deletion' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.192",
              "n": "GCP Authentication Failed During MFA Challenge",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects failed authentication attempts during the Multi-Factor Authentication (MFA) challenge on a Google Cloud Platform (GCP) tenant. It uses Google Workspace login failure events to identify instances where MFA methods were challenged but not successfully completed. This activity is significant as it may indicate an adversary attempting to access an account with compromised credentials despite MFA protection. If confirmed malicious, this could lead to unauthorized access…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Google Workspace login_failure",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Google Workspace login_failure ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate users may miss to reply the MFA challenge within the time window or deny it by mistake.",
              "refs": "https://attack.mitre.org/techniques/T1621/, https://attack.mitre.org/techniques/T1078/004/",
              "mitre": [
                "T1078.004",
                "T1586.003",
                "T1621"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GCP Authentication Failed During MFA Challenge\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Google Workspace login_failure. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004, T1586.003, T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GCP Authentication Failed During MFA Challenge\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate users may miss to reply the MFA challenge within the time window or deny it by mistake.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for GCP Authentication Failed During MFA Challenge so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.193",
              "n": "GCP Multi-Factor Authentication Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an attempt to disable multi-factor authentication (MFA) for a Google Cloud Platform (GCP) user. It leverages Google Workspace Admin log events, specifically the `UNENROLL_USER_FROM_STRONG_AUTH` command. This activity is significant because disabling MFA can allow an adversary to maintain persistence within the environment using a compromised account without raising suspicion. If confirmed malicious, this action could enable attackers to bypass additional security l…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Google Workspace",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Google Workspace ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use case may require for users to disable MFA. Filter as needed.",
              "refs": "https://attack.mitre.org/tactics/TA0005/, https://attack.mitre.org/techniques/T1556/",
              "mitre": [
                "T1556.006",
                "T1586.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GCP Multi-Factor Authentication Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Google Workspace. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1556.006, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GCP Multi-Factor Authentication Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use case may require for users to disable MFA. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for GCP Multi-Factor Authentication Disabled so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.194",
              "n": "GCP Multiple Failed MFA Requests For User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects multiple failed multi-factor authentication (MFA) requests for a single user within a Google Cloud Platform (GCP) tenant. It triggers when 10 or more MFA prompts fail within a 5-minute window, using Google Workspace login failure events. This behavior is significant as it may indicate an adversary attempting to bypass MFA by bombarding the user with repeated authentication requests. If confirmed malicious, this activity could lead to unauthorized access, allowing a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Google Workspace",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Google Workspace ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Multiple Failed MFA requests may also be a sign of authentication or application issues. Filter as needed.",
              "refs": "https://www.mandiant.com/resources/blog/russian-targeting-gov-business, https://arstechnica.com/information-technology/2022/03/lapsus-and-solar-winds-hackers-both-use-the-same-old-trick-to-bypass-mfa/, https://therecord.media/russian-hackers-bypass-2fa-by-annoying-victims-with-repeated-push-notifications/, https://attack.mitre.org/techniques/T1621/, https://attack.mitre.org/techniques/T1078/004/",
              "mitre": [
                "T1078.004",
                "T1586.003",
                "T1621"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GCP Multiple Failed MFA Requests For User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Google Workspace. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004, T1586.003, T1621. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GCP Multiple Failed MFA Requests For User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Multiple Failed MFA requests may also be a sign of authentication or application issues. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for GCP Multiple Failed MFA Requests For User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.195",
              "n": "GCP Multiple Users Failing To Authenticate From Ip",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a single source IP address failing to authenticate into more than 20 unique Google Workspace user accounts within a 5-minute window. It leverages Google Workspace login failure events to identify potential password spraying attacks. This activity is significant as it may indicate an adversary attempting to gain unauthorized access or elevate privileges within the Google Cloud Platform. If confirmed malicious, this behavior could lead to unauthorized access to sensi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Google Workspace",
              "q": "# Shared SPL: intentional — see UC-10.2.29\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$tried_accounts$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Google Workspace ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No known false postives for this detection. Please review this alert.",
              "refs": "https://cloud.google.com/blog/products/identity-security/how-google-cloud-can-help-stop-credential-stuffing-attacks, https://www.slideshare.net/dafthack/ok-google-how-do-i-red-team-gsuite, https://attack.mitre.org/techniques/T1110/003/, https://www.blackhillsinfosec.com/wp-content/uploads/2020/05/Breaching-the-Cloud-Perimeter-Slides.pdf",
              "mitre": [
                "T1110.003",
                "T1110.004",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GCP Multiple Users Failing To Authenticate From Ip\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Google Workspace. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003, T1110.004, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GCP Multiple Users Failing To Authenticate From Ip\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No known false postives for this detection. Please review this alert.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for GCP Multiple Users Failing To Authenticate From Ip so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.196",
              "n": "GCP Successful Single-Factor Authentication",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a successful single-factor authentication event against Google Cloud Platform (GCP) for an account without Multi-Factor Authentication (MFA) enabled. It uses Google Workspace login event data to detect instances where MFA is not utilized. This activity is significant as it may indicate a misconfiguration, policy violation, or potential account takeover attempt. If confirmed malicious, an attacker could gain unauthorized access to GCP resources, potentially leadi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Google Workspace",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Google Workspace ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although not recommended, certain users may be required without multi-factor authentication. Filter as needed",
              "refs": "https://attack.mitre.org/techniques/T1078/004/, https://www.forbes.com/sites/daveywinder/2020/07/08/new-dark-web-audit-reveals-15-billion-stolen-logins-from-100000-breaches-passwords-hackers-cybercrime/?sh=69927b2a180f",
              "mitre": [
                "T1078.004",
                "T1586.003"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GCP Successful Single-Factor Authentication\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Google Workspace. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.004, T1586.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GCP Successful Single-Factor Authentication\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although not recommended, certain users may be required without multi-factor authentication. Filter as needed\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for GCP Successful Single-Factor Authentication so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.197",
              "n": "Geographic Improbable Location",
              "c": "high",
              "f": "intermediate",
              "v": "Geolocation data can be inaccurate or easily spoofed by Remote Employment Fraud (REF) workers. REF actors sometimes slip up and reveal their true location, creating what we call 'improbable travel' scenarios — logins from opposite sides of the world within minutes. This identifies situations where these travel scenarios occur.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The analytic leverages Okta OktaIm2 logs to be ingested using the Splunk Add-on for Okta Identity Cloud (https://splunkbase.splunk.com/app/6553). This also utilizes Splunk Enterprise Security Suite for several macros and lookups. The known_devices_public_ip_filter lookup is a placeholder for known public edge devices in your network.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate usage of some VPNs may cause false positives. Tune as needed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Okta Identity Cloud](https://splunkbase.splunk.com/app/6553)",
              "mitre": [
                "T1078"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Geographic Improbable Location\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Geographic Improbable Location\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate usage of some VPNs may cause false positives. Tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Geographic Improbable Location so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.198",
              "n": "GitHub Enterprise Delete Branch Ruleset",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when branch rules are deleted in GitHub Enterprise. The detection monitors GitHub Enterprise audit logs for branch rule deletion events by tracking actor details, repository information, and associated metadata. For a SOC, identifying deleted branch rules is critical as it could indicate attempts to bypass code review requirements and security controls. Branch deletion rules are essential security controls that enforce code review, prevent force pushes, and maintai…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Enterprise Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Enterprise Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610, https://docs.github.com/en/enterprise-cloud@latest/admin/monitoring-activity-in-your-enterprise/reviewing-audit-logs-for-your-enterprise/streaming-the-audit-log-for-your-enterprise#setting-up-streaming-to-splunk",
              "mitre": [
                "T1562.001",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Enterprise Delete Branch Ruleset\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Enterprise Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Enterprise Delete Branch Ruleset\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.199",
              "n": "GitHub Enterprise Disable 2FA Requirement",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when two-factor authentication (2FA) requirements are disabled in GitHub Enterprise. The detection monitors GitHub Enterprise audit logs for 2FA requirement changes by tracking actor details, organization information, and associated metadata. For a SOC, identifying disabled 2FA requirements is critical as it could indicate attempts to weaken account security controls. Two-factor authentication is a fundamental security control that helps prevent unauthorized access…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Enterprise Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Enterprise Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610, https://docs.github.com/en/enterprise-cloud@latest/admin/monitoring-activity-in-your-enterprise/reviewing-audit-logs-for-your-enterprise/streaming-the-audit-log-for-your-enterprise#setting-up-streaming-to-splunk",
              "mitre": [
                "T1562.001",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Enterprise Disable 2FA Requirement\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Enterprise Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Enterprise Disable 2FA Requirement\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.200",
              "n": "GitHub Enterprise Disable Audit Log Event Stream",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a user disables audit log event streaming in GitHub Enterprise. The detection monitors GitHub Enterprise audit logs for configuration changes that disable the audit log streaming functionality, which is used to send audit events to security monitoring platforms. This behavior could indicate an attacker attempting to prevent their malicious activities from being logged and detected by disabling the audit trail. For a SOC, identifying the disabling of audit logg…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Enterprise Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Enterprise Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610, https://docs.github.com/en/enterprise-cloud@latest/admin/monitoring-activity-in-your-enterprise/reviewing-audit-logs-for-your-enterprise/streaming-the-audit-log-for-your-enterprise#setting-up-streaming-to-splunk",
              "mitre": [
                "T1562.008",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Enterprise Disable Audit Log Event Stream\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Enterprise Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Enterprise Disable Audit Log Event Stream\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.201",
              "n": "GitHub Enterprise Disable Classic Branch Protection Rule",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when classic branch protection rules are disabled in GitHub Enterprise. The detection monitors GitHub Enterprise audit logs for branch protection removal events by tracking actor details, repository information, and associated metadata. For a SOC, identifying disabled branch protection is critical as it could indicate attempts to bypass code review requirements and security controls. Branch protection rules are essential security controls that enforce code review, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Enterprise Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Enterprise Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610, https://docs.github.com/en/enterprise-cloud@latest/admin/monitoring-activity-in-your-enterprise/reviewing-audit-logs-for-your-enterprise/streaming-the-audit-log-for-your-enterprise#setting-up-streaming-to-splunk",
              "mitre": [
                "T1562.001",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Enterprise Disable Classic Branch Protection Rule\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Enterprise Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Enterprise Disable Classic Branch Protection Rule\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.202",
              "n": "GitHub Enterprise Disable IP Allow List",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when an IP allow list is disabled in GitHub Enterprise. The detection monitors GitHub Enterprise audit logs for actions related to disabling IP allow lists at the organization or enterprise level. This behavior is concerning because IP allow lists are a critical security control that restricts access to GitHub Enterprise resources to only trusted IP addresses. When disabled, it could indicate an attacker attempting to bypass access controls to gain unauthorized …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Enterprise Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Enterprise Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610, https://docs.github.com/en/enterprise-cloud@latest/admin/monitoring-activity-in-your-enterprise/reviewing-audit-logs-for-your-enterprise/streaming-the-audit-log-for-your-enterprise#setting-up-streaming-to-splunk",
              "mitre": [
                "T1562.001",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Enterprise Disable IP Allow List\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Enterprise Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Enterprise Disable IP Allow List\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.203",
              "n": "GitHub Enterprise Modify Audit Log Event Stream",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a user modifies or disables audit log event streaming in GitHub Enterprise. The detection monitors GitHub Enterprise audit logs for configuration changes that affect the audit log streaming functionality, which is used to send audit events to security monitoring platforms. This behavior could indicate an attacker attempting to prevent their malicious activities from being logged and detected by tampering with the audit trail. For a SOC, identifying modificatio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Enterprise Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Enterprise Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610, https://docs.github.com/en/enterprise-cloud@latest/admin/monitoring-activity-in-your-enterprise/reviewing-audit-logs-for-your-enterprise/streaming-the-audit-log-for-your-enterprise#setting-up-streaming-to-splunk",
              "mitre": [
                "T1562.008",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Enterprise Modify Audit Log Event Stream\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Enterprise Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Enterprise Modify Audit Log Event Stream\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.204",
              "n": "GitHub Enterprise Pause Audit Log Event Stream",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a user pauses audit log event streaming in GitHub Enterprise. The detection monitors GitHub Enterprise audit logs for configuration changes that temporarily suspend the audit log streaming functionality, which is used to send audit events to security monitoring platforms. This behavior could indicate an attacker attempting to prevent their malicious activities from being logged and detected by temporarily disabling the audit trail. For a SOC, identifying the p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Enterprise Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Enterprise Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610, https://docs.github.com/en/enterprise-cloud@latest/admin/monitoring-activity-in-your-enterprise/reviewing-audit-logs-for-your-enterprise/streaming-the-audit-log-for-your-enterprise#setting-up-streaming-to-splunk",
              "mitre": [
                "T1562.008",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Enterprise Pause Audit Log Event Stream\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Enterprise Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.008, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Enterprise Pause Audit Log Event Stream\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.205",
              "n": "GitHub Enterprise Register Self Hosted Runner",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when a self-hosted runner is created in GitHub Enterprise. The detection monitors GitHub Enterprise audit logs for actions related to creating new self-hosted runners at the organization or enterprise level. his behavior warrants monitoring because self-hosted runners execute workflow jobs on customer-controlled infrastructure, which could be exploited by attackers to execute malicious code, access sensitive data, or pivot to other systems. While self-hosted run…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Enterprise Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Enterprise Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.wiz.io/blog/shai-hulud-2-0-ongoing-supply-chain-attack, https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610, https://docs.github.com/en/enterprise-cloud@latest/admin/monitoring-activity-in-your-enterprise/reviewing-audit-logs-for-your-enterprise/streaming-the-audit-log-for-your-enterprise#setting-up-streaming-to-splunk",
              "mitre": [
                "T1562.001",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Enterprise Register Self Hosted Runner\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Enterprise Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Enterprise Register Self Hosted Runner\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.206",
              "n": "GitHub Enterprise Remove Organization",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a user removes an organization from GitHub Enterprise. The detection monitors GitHub Enterprise audit logs for organization deletion events, which could indicate unauthorized removal of critical business resources. For a SOC, identifying organization removals is crucial as it may signal account compromise, insider threats, or malicious attempts to disrupt business operations by deleting entire organizational structures. The impact could be severe, potentially …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Enterprise Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Enterprise Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610, https://docs.github.com/en/enterprise-cloud@latest/admin/monitoring-activity-in-your-enterprise/reviewing-audit-logs-for-your-enterprise/streaming-the-audit-log-for-your-enterprise#setting-up-streaming-to-splunk",
              "mitre": [
                "T1485",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Enterprise Remove Organization\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Enterprise Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Enterprise Remove Organization\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.207",
              "n": "GitHub Enterprise Repository Archived",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a repository is archived in GitHub Enterprise. The detection monitors GitHub Enterprise audit logs for repository archival events by tracking actor details, repository information, and associated metadata. For a SOC, identifying repository archival is important as it could indicate attempts to make critical code inaccessible or preparation for repository deletion. While archiving is a legitimate feature, unauthorized archival of active repositories could signa…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Enterprise Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Enterprise Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610, https://docs.github.com/en/enterprise-cloud@latest/admin/monitoring-activity-in-your-enterprise/reviewing-audit-logs-for-your-enterprise/streaming-the-audit-log-for-your-enterprise#setting-up-streaming-to-splunk",
              "mitre": [
                "T1485",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Enterprise Repository Archived\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Enterprise Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Enterprise Repository Archived\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.208",
              "n": "GitHub Enterprise Repository Deleted",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a user deletes a repository in GitHub Enterprise. The detection monitors GitHub Enterprise audit logs for repository deletion events, which could indicate unauthorized removal of critical source code and project resources. For a SOC, identifying repository deletions is crucial as it may signal account compromise, insider threats, or malicious attempts to destroy intellectual property and disrupt development operations. The impact could be severe, potentially r…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Enterprise Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Enterprise Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610, https://docs.github.com/en/enterprise-cloud@latest/admin/monitoring-activity-in-your-enterprise/reviewing-audit-logs-for-your-enterprise/streaming-the-audit-log-for-your-enterprise#setting-up-streaming-to-splunk",
              "mitre": [
                "T1485",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Enterprise Repository Deleted\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Enterprise Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Enterprise Repository Deleted\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.209",
              "n": "GitHub Organizations Delete Branch Ruleset",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when branch rulesets are deleted in GitHub Organizations. The detection monitors GitHub Organizations audit logs for branch ruleset deletion events by tracking actor details, repository information, and associated metadata. For a SOC, identifying deleted branch rulesets is critical as it could indicate attempts to bypass code review requirements and security controls. Branch rulesets are essential security controls that enforce code review, prevent force pushes, an…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Organizations Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Organizations Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunk.github.io/splunk-add-on-for-github-audit-log-monitoring/Install/, https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610",
              "mitre": [
                "T1562.001",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Organizations Delete Branch Ruleset\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Organizations Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Organizations Delete Branch Ruleset\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.210",
              "n": "GitHub Organizations Disable 2FA Requirement",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when two-factor authentication (2FA) requirements are disabled in GitHub Organizations. The detection monitors GitHub Organizations audit logs for 2FA requirement changes by tracking actor details, organization information, and associated metadata. For a SOC, identifying disabled 2FA requirements is critical as it could indicate attempts to weaken account security controls. Two-factor authentication is a fundamental security control that helps prevent unauthorized …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Organizations Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Organizations Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunk.github.io/splunk-add-on-for-github-audit-log-monitoring/Install/, https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610",
              "mitre": [
                "T1562.001",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Organizations Disable 2FA Requirement\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Organizations Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Organizations Disable 2FA Requirement\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.211",
              "n": "GitHub Organizations Disable Classic Branch Protection Rule",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when classic branch protection rules are disabled in GitHub Organizations. The detection monitors GitHub Organizations audit logs for branch protection removal events by tracking actor details, repository information, and associated metadata. For a SOC, identifying disabled branch protection is critical as it could indicate attempts to bypass code review requirements and security controls. Branch protection rules are essential security controls that enforce code re…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Organizations Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Organizations Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunk.github.io/splunk-add-on-for-github-audit-log-monitoring/Install/, https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610",
              "mitre": [
                "T1562.001",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Organizations Disable Classic Branch Protection Rule\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Organizations Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Organizations Disable Classic Branch Protection Rule\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.212",
              "n": "GitHub Organizations Repository Archived",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a repository is archived in GitHub Organizations. The detection monitors GitHub Organizations audit logs for repository archival events by tracking actor details, repository information, and associated metadata. For a SOC, identifying repository archival is important as it could indicate attempts to make critical code inaccessible or preparation for repository deletion. While archiving is a legitimate feature, unauthorized archival of active repositories could…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Organizations Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Organizations Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunk.github.io/splunk-add-on-for-github-audit-log-monitoring/Install/, https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610",
              "mitre": [
                "T1485",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Organizations Repository Archived\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Organizations Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Organizations Repository Archived\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.213",
              "n": "GitHub Organizations Repository Deleted",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when a repository is deleted within a GitHub organization. The detection monitors GitHub Organizations audit logs for repository deletion events by tracking actor details, repository information, and associated metadata. This behavior is concerning for SOC teams as malicious actors may attempt to delete repositories to destroy source code, intellectual property, or evidence of compromise. Repository deletion can result in permanent loss of code, documentation, a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "GitHub Organizations Audit Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires GitHub Organizations Audit Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunk.github.io/splunk-add-on-for-github-audit-log-monitoring/Install/, https://www.googlecloudcommunity.com/gc/Community-Blog/Monitoring-for-Suspicious-GitHub-Activity-with-Google-Security/ba-p/763610",
              "mitre": [
                "T1485",
                "T1195"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Organizations Repository Deleted\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: GitHub Organizations Audit Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Organizations Repository Deleted\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitHub Enterprise audit and workflow signals that match this issue so an attacker cannot weaken logging or the software supply chain without the team noticing.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.214",
              "n": "High Number of Login Failures from a single source",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects multiple failed login attempts in Office365 Azure Active Directory from a single source IP address. It leverages Office365 management activity logs, specifically AzureActiveDirectoryStsLogon records, aggregating these logs in 5-minute intervals to count failed login attempts. This activity is significant as it may indicate brute-force attacks or password spraying, which are critical to monitor. If confirmed malicious, an attacker could gain unauthorized access to O…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "O365 UserLoginFailed",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires O365 UserLoginFailed ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "An Ip address with more than 10 failed authentication attempts in the span of 5 minutes may also be triggered by a broken application.",
              "refs": "https://attack.mitre.org/techniques/T1110/001/, https://learn.microsoft.com/en-us/security/compass/incident-response-playbook-password-spray, https://www.cisa.gov/uscert/ncas/alerts/aa21-008a, https://learn.microsoft.com/azure/active-directory/reports-monitoring/reference-sign-ins-error-codes",
              "mitre": [
                "T1110.001"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"High Number of Login Failures from a single source\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: O365 UserLoginFailed. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"High Number of Login Failures from a single source\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: An Ip address with more than 10 failed authentication attempts in the span of 5 minutes may also be triggered by a broken application.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for High Number of Login Failures from a single source so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.215",
              "n": "Kubernetes Abuse of Secret by Unusual Location",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects unauthorized access or misuse of Kubernetes Secrets from unusual locations. It leverages Kubernetes Audit logs to identify anomalies in access patterns by analyzing the source of requests by country. This activity is significant for a SOC as Kubernetes Secrets store sensitive information like passwords, OAuth tokens, and SSH keys, making them critical assets. If confirmed malicious, this behavior could indicate an attacker attempting to exfiltrate or misuse these s…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1552.007"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Abuse of Secret by Unusual Location\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Abuse of Secret by Unusual Location\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes Abuse of Secret by Unusual Location so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.216",
              "n": "Kubernetes Abuse of Secret by Unusual User Agent",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects unauthorized access or misuse of Kubernetes Secrets by unusual user agents. It leverages Kubernetes Audit logs to identify anomalies in access patterns by analyzing the source of requests based on user agents. This activity is significant for a SOC because Kubernetes Secrets store sensitive information like passwords, OAuth tokens, and SSH keys, making them critical assets. If confirmed malicious, this activity could lead to unauthorized access to sensitive systems…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1552.007"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Abuse of Secret by Unusual User Agent\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Abuse of Secret by Unusual User Agent\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes Abuse of Secret by Unusual User Agent so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.217",
              "n": "Kubernetes Abuse of Secret by Unusual User Group",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects unauthorized access or misuse of Kubernetes Secrets by unusual user groups. It leverages Kubernetes Audit logs to identify anomalies in access patterns by analyzing the source of requests and user groups. This activity is significant for a SOC as Kubernetes Secrets store sensitive information like passwords, OAuth tokens, and SSH keys. If confirmed malicious, this behavior could indicate an attacker attempting to exfiltrate or misuse these secrets, potentially lead…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1552.007"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Abuse of Secret by Unusual User Group\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Abuse of Secret by Unusual User Group\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes Abuse of Secret by Unusual User Group so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.218",
              "n": "Kubernetes Abuse of Secret by Unusual User Name",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects unauthorized access or misuse of Kubernetes Secrets by unusual user names. It leverages Kubernetes Audit logs to identify anomalies in access patterns by analyzing the source of requests based on user names. This activity is significant for a SOC as Kubernetes Secrets store sensitive information like passwords, OAuth tokens, and SSH keys, making them critical assets. If confirmed malicious, this activity could lead to unauthorized access to sensitive systems or dat…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1552.007"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Abuse of Secret by Unusual User Name\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Abuse of Secret by Unusual User Name\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes Abuse of Secret by Unusual User Name so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.219",
              "n": "Kubernetes Anomalous Inbound Network Activity from Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies anomalous inbound network traffic volumes from processes within containerized workloads. It leverages Network Performance Monitoring metrics collected via an OTEL collector and pulled from Splunk Observability Cloud. The detection compares recent metrics (tcp.bytes, tcp.new_sockets, tcp.packets, udp.bytes, udp.packets) over the last hour with the average over the past 30 days. This activity is significant as it may indicate unauthorized data reception, potential…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats avg(tcp.*) as tcp.* avg(udp.*) as udp.* where `kubernetes_metrics` AND earliest=-1h by k8s.cluster.name dest.workload.name dest.process.name  span=10s | eval key='dest.workload.name' + \\\":\\\" + 'dest.process.name' | join type=left max=1 key [ mstats avg(tcp.*) as avg_tcp.* avg(udp.*) as avg_udp.* stdev(tcp.*) as stdev_tcp.* avg(udp.*) as stdev_udp.* where `kubernetes_metrics` AND earliest=-30d latest=-1h by dest.workload.name dest.process.name | eval key='dest.workload.name' + \\\":\\\" + 'dest.process.name' ] | eval anomalies = \\\"\\\" | foreach stdev_* [ eval anomalies =if( '<<MATCHSTR>>' > ('avg_<<MATCHSTR>>' + 3 * 'stdev_<<MATCHSTR>>'), anomalies + \\\"<<MATCHSTR>> higher than average by \\\" + tostring(round(('<<MATCHSTR>>' - 'avg_<<MATCHSTR>>')/'stdev_<<MATCHSTR>>' ,2)) + \\\" Standard Deviations. <<MATCHSTR>>=\\\" + tostring('<<MATCHSTR>>') + \\\" avg_<<MATCHSTR>>=\\\" + tostring('avg_<<MATCHSTR>>') + \\\" 'stdev_<<MATCHSTR>>'=\\\" + tostring('stdev_<<MATCHSTR>>') + \\\", \\\" , anomalies) ] | fillnull | eval anomalies = split(replace(anomalies, \\\",\\\\s$$$$\\\", \\\"\\\") ,\\\", \\\") | where anomalies!=\\\"\\\" | stats count(anomalies) as count values(anomalies) as anomalies by k8s.cluster.name dest.workload.name dest.process.name | where count > 5 | rename k8s.cluster.name as host | `kubernetes_anomalous_inbound_network_activity_from_process_filter`",
              "m": "To gather NPM metrics the Open Telemetry to the Kubernetes Cluster and enable Network Performance Monitoring according to instructions found in Splunk Docs https://help.splunk.com/en/splunk-observability-cloud/monitor-infrastructure/network-explorer/set-up-network-explorer-in-kubernetes#network-explorer-setup In order to access those metrics from within Splunk Enterprise and ES, the Splunk Infrast…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Anomalous Inbound Network Activity from Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Anomalous Inbound Network Activity from Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes Anomalous Inbound Network Activity from Process' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.220",
              "n": "Kubernetes Anomalous Inbound Outbound Network IO",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies high inbound or outbound network I/O anomalies in Kubernetes containers. It leverages process metrics from an OTEL collector and Kubelet Stats Receiver, along with data from Splunk Observability Cloud. A lookup table with average and standard deviation values for network I/O is used to detect anomalies persisting over a 1-hour period. This activity is significant as it may indicate data exfiltration, command and control communication, or unauthorized data transf…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats avg(k8s.pod.network.io) as io where `kubernetes_metrics` by k8s.cluster.name k8s.pod.name k8s.node.name direction span=10s | eval service = replace('k8s.pod.name', \\\"-\\\\w{5}$$|-[abcdef0-9]{8,10}-\\\\w{5}$$\\\", \\\"\\\") | stats avg(eval(if(direction=\\\"transmit\\\", io,null()))) as outbound_network_io avg(eval(if(direction=\\\"receive\\\", io,null()))) as inbound_network_io by k8s.cluster.name k8s.node.name k8s.pod.name service _time | eval key = 'k8s.cluster.name' + \\\":\\\" + 'service' | lookup k8s_container_network_io_baseline key | eval anomalies = \\\"\\\" | foreach stdev_* [ eval anomalies =if( '<<MATCHSTR>>' > ('avg_<<MATCHSTR>>' + 4 * 'stdev_<<MATCHSTR>>'), anomalies + \\\"<<MATCHSTR>> higher than average by \\\" + tostring(round(('<<MATCHSTR>>' - 'avg_<<MATCHSTR>>')/'stdev_<<MATCHSTR>>' ,2)) + \\\" Standard Deviations. <<MATCHSTR>>=\\\" + tostring('<<MATCHSTR>>') + \\\" avg_<<MATCHSTR>>=\\\" + tostring('avg_<<MATCHSTR>>') + \\\" 'stdev_<<MATCHSTR>>'=\\\" + tostring('stdev_<<MATCHSTR>>') + \\\", \\\" , anomalies) ] | eval anomalies = replace(anomalies, \\\",\\\\s$$\\\", \\\"\\\") | where anomalies!=\\\"\\\" | stats count values(anomalies) as anomalies by k8s.cluster.name k8s.node.name k8s.pod.name service | rename service as k8s.service | where count > 5 | rename k8s.node.name as host | `kubernetes_anomalous_inbound_outbound_network_io_filter`",
              "m": "To implement this detection, follow these steps:\\n* Deploy the OpenTelemetry Collector (OTEL) to your Kubernetes cluster.\\n* Enable the hostmetrics/process receiver in the OTEL configuration.\\n* Ensure that the process metrics, specifically Process.cpu.utilization and process.memory.utilization, are enabled.\\n* Install the Splunk Infrastructure Monitoring (SIM) add-on. (ref: https://splunkbase.spl…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Anomalous Inbound Outbound Network IO\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Anomalous Inbound Outbound Network IO\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes Anomalous Inbound Outbound Network IO' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.221",
              "n": "Kubernetes Anomalous Inbound to Outbound Network IO Ratio",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies significant changes in network communication behavior within Kubernetes containers by examining the inbound to outbound network IO ratios. It leverages process metrics from an OTEL collector and Kubelet Stats Receiver, along with data from Splunk Observability Cloud. Anomalies are detected using a lookup table containing average and standard deviation values for network IO, triggering an event if the anomaly persists for over an hour. This activity is significan…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats avg(k8s.pod.network.io) as io where `kubernetes_metrics` by k8s.cluster.name k8s.pod.name k8s.node.name direction span=10s | eval service = replace('k8s.pod.name', \\\"-\\\\w{5}$|-[abcdef0-9]{8,10}-\\\\w{5}$\\\", \\\"\\\") | eval key = 'k8s.cluster.name' + \\\":\\\" + 'service' | stats avg(eval(if(direction=\\\"transmit\\\", io,null()))) as outbound_network_io avg(eval(if(direction=\\\"receive\\\", io,null()))) as inbound_network_io by key service k8s.cluster.name k8s.pod.name k8s.node.name _time | eval inbound:outbound = inbound_network_io/outbound_network_io | eval outbound:inbound = outbound_network_io/inbound_network_io | fields - *network_io | lookup k8s_container_network_io_ratio_baseline key | eval anomalies = \\\"\\\" | foreach stdev_* [ eval anomalies =if( '<<MATCHSTR>>' > ('avg_<<MATCHSTR>>' + 4 * 'stdev_<<MATCHSTR>>'), anomalies + \\\"<<MATCHSTR>> ratio higher than average by \\\" + tostring(round(('<<MATCHSTR>>' - 'avg_<<MATCHSTR>>')/'stdev_<<MATCHSTR>>' ,2)) + \\\" Standard Deviations. <<MATCHSTR>>=\\\" + tostring('<<MATCHSTR>>') + \\\" avg_<<MATCHSTR>>=\\\" + tostring('avg_<<MATCHSTR>>') + \\\" 'stdev_<<MATCHSTR>>'=\\\" + tostring('stdev_<<MATCHSTR>>') + \\\", \\\" , anomalies) ] | eval anomalies = replace(anomalies, \\\",\\\\s$\\\", \\\"\\\") | where anomalies!=\\\"\\\" | stats count values(anomalies) as anomalies by k8s.cluster.name k8s.node.name k8s.pod.name service | rename service as k8s.service | where count > 5 | rename k8s.node.name as host | `kubernetes_anomalous_inbound_to_outbound_network_io_ratio_filter`",
              "m": "To implement this detection, follow these steps:\\n* Deploy the OpenTelemetry Collector (OTEL) to your Kubernetes cluster.\\n* Enable the hostmetrics/process receiver in the OTEL configuration.\\n* Ensure that the process metrics, specifically Process.cpu.utilization and process.memory.utilization, are enabled.\\n* Install the Splunk Infrastructure Monitoring (SIM) add-on. (ref: https://splunkbase.spl…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Anomalous Inbound to Outbound Network IO Ratio\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Anomalous Inbound to Outbound Network IO Ratio\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes Anomalous Inbound to Outbound Network IO Ratio' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.222",
              "n": "Kubernetes Anomalous Outbound Network Activity from Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies anomalously high outbound network activity from processes running within containerized workloads in a Kubernetes environment. It leverages Network Performance Monitoring metrics collected via an OTEL collector and pulled from Splunk Observability Cloud. The detection compares recent network metrics (tcp.bytes, tcp.new_sockets, tcp.packets, udp.bytes, udp.packets) over the last hour with the average metrics over the past 30 days. This activity is significant as i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats avg(tcp.*) as tcp.* avg(udp.*) as udp.* where `kubernetes_metrics` AND earliest=-1h by k8s.cluster.name source.workload.name source.process.name  span=10s | eval key='source.workload.name' + \\\":\\\" + 'source.process.name' | join type=left max=1 key [ mstats avg(tcp.*) as avg_tcp.* avg(udp.*) as avg_udp.* stdev(tcp.*) as stdev_tcp.* avg(udp.*) as stdev_udp.* where `kubernetes_metrics` AND earliest=-30d latest=-1h by source.workload.name source.process.name | eval key='source.workload.name' + \\\":\\\" + 'source.process.name' ] | eval anomalies = \\\"\\\" | foreach stdev_* [ eval anomalies =if( '<<MATCHSTR>>' > ('avg_<<MATCHSTR>>' + 3 * 'stdev_<<MATCHSTR>>'), anomalies + \\\"<<MATCHSTR>> higher than average by \\\" + tostring(round(('<<MATCHSTR>>' - 'avg_<<MATCHSTR>>')/'stdev_<<MATCHSTR>>' ,2)) + \\\" Standard Deviations. <<MATCHSTR>>=\\\" + tostring('<<MATCHSTR>>') + \\\" avg_<<MATCHSTR>>=\\\" + tostring('avg_<<MATCHSTR>>') + \\\" 'stdev_<<MATCHSTR>>'=\\\" + tostring('stdev_<<MATCHSTR>>') + \\\", \\\" , anomalies) ] | fillnull | eval anomalies = split(replace(anomalies, \\\",\\\\s$$$$\\\", \\\"\\\") ,\\\", \\\") | where anomalies!=\\\"\\\" | stats count(anomalies) as count values(anomalies) as anomalies by k8s.cluster.name source.workload.name source.process.name | where count > 5 | rename k8s.cluster.name as host | `kubernetes_anomalous_outbound_network_activity_from_process_filter`",
              "m": "To gather NPM metrics the Open Telemetry to the Kubernetes Cluster and enable Network Performance Monitoring according to instructions found in Splunk Docs https://help.splunk.com/en/splunk-observability-cloud/monitor-infrastructure/network-explorer/set-up-network-explorer-in-kubernetes#network-explorer-setup In order to access those metrics from within Splunk Enterprise and ES, the Splunk Infrast…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Anomalous Outbound Network Activity from Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Anomalous Outbound Network Activity from Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes Anomalous Outbound Network Activity from Process' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.223",
              "n": "Kubernetes Anomalous Traffic on Network Edge",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies anomalous network traffic volumes between Kubernetes workloads or between a workload and external sources. It leverages Network Performance Monitoring metrics collected via an OTEL collector and pulled from Splunk Observability Cloud. The detection compares recent network metrics (tcp.bytes, tcp.new_sockets, tcp.packets, udp.bytes, udp.packets) over the last hour with the average over the past 30 days to identify significant deviations. This activity is signific…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats avg(tcp.*) as tcp.* avg(udp.*) as udp.* where `kubernetes_metrics` AND earliest=-1h by k8s.cluster.name source.workload.name dest.workload.name span=10s | eval key='source.workload.name' + \\\":\\\" + 'dest.workload.name' | join type=left max=1 key [ mstats avg(tcp.*) as avg_tcp.* avg(udp.*) as avg_udp.* stdev(tcp.*) as stdev_tcp.* avg(udp.*) as stdev_udp.* where `kubernetes_metrics` AND earliest=-30d latest=-1h by source.workload.name dest.workload.name | eval key='source.workload.name' + \\\":\\\" + 'dest.workload.name' ] | eval anomalies = \\\"\\\" | foreach stdev_* [ eval anomalies =if( '<<MATCHSTR>>' > ('avg_<<MATCHSTR>>' + 3 * 'stdev_<<MATCHSTR>>'), anomalies + \\\"<<MATCHSTR>> higher than average by \\\" + tostring(round(('<<MATCHSTR>>' - 'avg_<<MATCHSTR>>')/'stdev_<<MATCHSTR>>' ,2)) + \\\" Standard Deviations. <<MATCHSTR>>=\\\" + tostring('<<MATCHSTR>>') + \\\" avg_<<MATCHSTR>>=\\\" + tostring('avg_<<MATCHSTR>>') + \\\" 'stdev_<<MATCHSTR>>'=\\\" + tostring('stdev_<<MATCHSTR>>') + \\\", \\\" , anomalies) ] | fillnull | eval anomalies = split(replace(anomalies, \\\",\\\\s$$$$\\\", \\\"\\\") ,\\\", \\\") | where anomalies!=\\\"\\\" | stats count(anomalies) as count values(anomalies) as anomalies by k8s.cluster.name source.workload.name dest.workload.name | rename service as k8s.service | where count > 5 | rename k8s.cluster.name as host | `kubernetes_anomalous_traffic_on_network_edge_filter`",
              "m": "To gather NPM metrics the Open Telemetry to the Kubernetes Cluster and enable Network Performance Monitoring according to instructions found in Splunk Docs https://help.splunk.com/en/splunk-observability-cloud/monitor-infrastructure/network-explorer/set-up-network-explorer-in-kubernetes#network-explorer-setup In order to access those metrics from within Splunk Enterprise and ES, the Splunk Infrast…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Anomalous Traffic on Network Edge\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Anomalous Traffic on Network Edge\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes Anomalous Traffic on Network Edge' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.224",
              "n": "Kubernetes AWS detect suspicious kubectl calls",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects anonymous and unauthenticated requests to a Kubernetes cluster. It identifies this behavior by monitoring API calls from users who have not provided any token or password in their request, using data from `kube_audit` logs. This activity is significant for a SOC as it indicates a severe misconfiguration, allowing unfettered access to the cluster with no traceability. If confirmed malicious, an attacker could gain access to sensitive data or control over the cluster…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "`kube_audit` user.username=\"system:anonymous\" user.groups{} IN (\"system:unauthenticated\")\n      | fillnull\n      | stats count\n        BY objectRef.name objectRef.namespace objectRef.resource\n           requestReceivedTimestamp requestURI responseStatus.code\n           sourceIPs{} stage user.groups{}\n           user.uid user.username userAgent\n           verb\n      | rename sourceIPs{} as src, user.username as user\n      | `kubernetes_aws_detect_suspicious_kubectl_calls_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Kubectl calls are not malicious by nature. However source IP, verb and Object can reveal potential malicious activity, specially anonymous suspicious IPs and sensitive objects such as configmaps or secrets",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "user",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes AWS detect suspicious kubectl calls\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes AWS detect suspicious kubectl calls\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Kubectl calls are not malicious by nature. However source IP, verb and Object can reveal potential malicious activity, specially anonymous suspicious IPs and sensitive objects such as configmaps or secrets\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes AWS detect suspicious kubectl calls' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.225",
              "n": "Kubernetes Create or Update Privileged Pod",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation or update of privileged pods in Kubernetes. It identifies this activity by monitoring Kubernetes Audit logs for pod configurations that include root privileges. This behavior is significant for a SOC as it could indicate an attempt to escalate privileges, exploit the kernel, and gain full access to the host's namespace and devices. If confirmed malicious, this activity could lead to unauthorized access to sensitive information, data breaches, and servi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1204"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Create or Update Privileged Pod\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Create or Update Privileged Pod\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes Create or Update Privileged Pod so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.226",
              "n": "Kubernetes Cron Job Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a Kubernetes cron job, which is a task scheduled to run automatically at specified intervals. It identifies this activity by monitoring Kubernetes Audit logs for the creation events of cron jobs. This behavior is significant for a SOC as it could allow an attacker to execute malicious tasks repeatedly and automatically, posing a threat to the Kubernetes infrastructure. If confirmed malicious, this activity could lead to persistent attacks, service d…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1053.007"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Cron Job Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Cron Job Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes Cron Job Creation so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.227",
              "n": "Kubernetes DaemonSet Deployed",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a DaemonSet in a Kubernetes cluster. This behavior is identified by monitoring Kubernetes Audit logs for the creation event of a DaemonSet. DaemonSets ensure a specific pod runs on every node, making them a potential vector for persistent access. This activity is significant for a SOC as it could indicate an attempt to maintain persistent access to the Kubernetes infrastructure. If confirmed malicious, it could lead to persistent attacks, service di…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1204"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes DaemonSet Deployed\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes DaemonSet Deployed\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes DaemonSet Deployed so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.228",
              "n": "Kubernetes Falco Shell Spawned",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where a shell is spawned within a Kubernetes container. Leveraging Falco, a cloud-native runtime security tool, this analytic monitors system calls within the Kubernetes environment and flags when a shell is spawned. This activity is significant for a SOC as it may indicate unauthorized access, allowing an attacker to execute arbitrary commands, manipulate container processes, or escalate privileges. If confirmed malicious, this could lead to data breache…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Falco",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Falco ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1204"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Falco Shell Spawned\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Falco. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Falco Shell Spawned\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes Falco Shell Spawned so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.229",
              "n": "Kubernetes newly seen TCP edge",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies newly seen TCP communication between source and destination workload pairs within a Kubernetes cluster. It leverages Network Performance Monitoring metrics collected via an OTEL collector and pulled from Splunk Observability Cloud. The detection compares network activity over the last hour with the past 30 days to spot new inter-workload communications. This is significant as new connections can indicate changes in application behavior or potential security thre…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats count(tcp.packets) as tcp.packets_count where `kubernetes_metrics` AND earliest=-1h by k8s.cluster.name source.workload.name dest.workload.name\n    | eval current=\"True\"\n    | append [ mstats count(tcp.packets) as tcp.packets_count where `kubernetes_metrics` AND earliest=-30d latest=-1h by source.workload.name dest.workload.name\n    | eval current=\"false\" ]\n    | eventstats values(current) as current\n      BY source.workload.name dest.workload.name\n    | search current=\"true\" current!=\"false\"\n    | rename k8s.cluster.name as host\n    | `kubernetes_newly_seen_tcp_edge_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes newly seen TCP edge\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes newly seen TCP edge\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes newly seen TCP edge' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.230",
              "n": "Kubernetes newly seen UDP edge",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects UDP communication between a newly seen source and destination workload pair within a Kubernetes cluster. It leverages Network Performance Monitoring metrics collected via an OTEL collector and pulled from Splunk Observability Cloud. This detection compares network activity over the last hour with the past 30 days to identify new inter-workload communication. Such changes in network behavior can indicate potential security threats or anomalies. If confirmed maliciou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats count(udp.packets) as udp.packets_count where `kubernetes_metrics` AND earliest=-1h by k8s.cluster.name source.workload.name dest.workload.name\n    | eval current=\"True\"\n    | append [ mstats count(udp.packets) as udp.packets_count where `kubernetes_metrics` AND earliest=-30d latest=-1h by source.workload.name dest.workload.name\n    | eval current=\"false\" ]\n    | eventstats values(current) as current\n      BY source.workload.name dest.workload.name\n    | search current=\"true\" current!=\"false\"\n    | rename k8s.cluster.name as host\n    | `kubernetes_newly_seen_udp_edge_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes newly seen UDP edge\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes newly seen UDP edge\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes newly seen UDP edge' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.231",
              "n": "Kubernetes Nginx Ingress LFI",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects local file inclusion (LFI) attacks targeting Kubernetes Nginx ingress controllers. It leverages Kubernetes logs, parsing fields such as `request` and `status` to identify suspicious patterns indicative of LFI attempts. This activity is significant because LFI attacks can allow attackers to read sensitive files from the server, potentially exposing critical information. If confirmed malicious, this could lead to unauthorized access to sensitive data, further exploit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$host$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must ingest Kubernetes logs through Splunk Connect for Kubernetes.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/splunk/splunk-connect-for-kubernetes, https://www.offensive-security.com/metasploit-unleashed/file-inclusion-vulnerabilities/",
              "mitre": [
                "T1212"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Nginx Ingress LFI\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1212. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Nginx Ingress LFI\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes Nginx Ingress LFI so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.232",
              "n": "Kubernetes Nginx Ingress RFI",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects remote file inclusion (RFI) attacks targeting Kubernetes Nginx ingress controllers. It leverages Kubernetes logs from the Nginx ingress controller, parsing fields such as `remote_addr`, `request`, and `url` to identify suspicious activity. This activity is significant because RFI attacks can allow attackers to execute arbitrary code or access sensitive files on the server. If confirmed malicious, this could lead to unauthorized access, data exfiltration, or further…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$host$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "You must ingest Kubernetes logs through Splunk Connect for Kubernetes.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/splunk/splunk-connect-for-kubernetes, https://www.invicti.com/blog/web-security/remote-file-inclusion-vulnerability/",
              "mitre": [
                "T1212"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Nginx Ingress RFI\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1212. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Nginx Ingress RFI\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes Nginx Ingress RFI so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.233",
              "n": "Kubernetes Node Port Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a Kubernetes NodePort service, which exposes a service to the external network. It identifies this activity by monitoring Kubernetes Audit logs for the creation of NodePort services. This behavior is significant for a SOC as it could allow an attacker to access internal services, posing a threat to the Kubernetes infrastructure's integrity and security. If confirmed malicious, this activity could lead to data breaches, service disruptions, or unauth…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1204"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Node Port Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Node Port Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes Node Port Creation so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.234",
              "n": "Kubernetes Pod Created in Default Namespace",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of Kubernetes pods in the default, kube-system, or kube-public namespaces. It leverages Kubernetes audit logs to identify pod creation events within these specific namespaces. This activity is significant for a SOC as it may indicate an attacker attempting to hide their presence or evade defenses. Unauthorized pod creation in these namespaces can suggest a successful cluster breach, potentially leading to privilege escalation, persistent access, or fur…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1204"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Pod Created in Default Namespace\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Pod Created in Default Namespace\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes Pod Created in Default Namespace so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.235",
              "n": "Kubernetes Previously Unseen Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects previously unseen processes within the Kubernetes environment on master or worker nodes. It leverages process metrics collected via an OTEL collector and hostmetrics receiver, and data is pulled from Splunk Observability Cloud. This detection compares processes observed in the last hour against those seen in the previous 30 days. Identifying new processes is crucial as they may indicate unauthorized activity or attempts to compromise the node. If confirmed maliciou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats  count(process.memory.utilization) as process.memory.utilization_count where `kubernetes_metrics` AND earliest=-1h by host.name k8s.cluster.name k8s.node.name process.executable.name\n    | eval current=\"True\"\n    | append [mstats  count(process.memory.utilization) as process.memory.utilization_count where `kubernetes_metrics` AND earliest=-30d latest=-1h by host.name k8s.cluster.name k8s.node.name process.executable.name ]\n    | stats count values(current) as current\n      BY host.name k8s.cluster.name k8s.node.name\n         process.executable.name\n    | where count=1 and current=\"True\"\n    | rename host.name as host\n    | `kubernetes_previously_unseen_process_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Previously Unseen Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Previously Unseen Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes Previously Unseen Process' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.236",
              "n": "Kubernetes Process Running From New Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies processes running from newly seen paths within a Kubernetes environment. It leverages process metrics collected via an OTEL collector and hostmetrics receiver, and data is pulled from Splunk Observability Cloud using the Splunk Infrastructure Monitoring Add-on. This detection compares processes observed in the last hour with those seen over the previous 30 days. This activity is significant as it may indicate unauthorized changes, compromised nodes, or the intro…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats count(process.memory.utilization) as process.memory.utilization_count where `kubernetes_metrics` AND earliest=-1h by host.name k8s.cluster.name k8s.node.name process.pid process.executable.path process.executable.name\n    | eval current=\"True\"\n    | append [ mstats count(process.memory.utilization) as process.memory.utilization_count where `kubernetes_metrics` AND earliest=-30d latest=-1h by host.name k8s.cluster.name k8s.node.name process.pid process.executable.path process.executable.name ]\n    | stats count values(current) as current\n      BY host.name k8s.cluster.name k8s.node.name\n         process.pid process.executable.name process.executable.path\n    | where count=1 and current=\"True\"\n    | rename host.name as host\n    | `kubernetes_process_running_from_new_path_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Process Running From New Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Process Running From New Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes Process Running From New Path' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.237",
              "n": "Kubernetes Process with Anomalous Resource Utilisation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies high resource utilization anomalies in Kubernetes processes. It leverages process metrics from an OTEL collector and hostmetrics receiver, fetched via the Splunk Infrastructure Monitoring Add-on. The detection uses a lookup table with average and standard deviation values to spot anomalies. This activity is significant as high resource utilization can indicate security threats like cryptojacking, unauthorized data exfiltration, or compromised containers. If conf…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats avg(process.*) as process.* where `kubernetes_metrics` by host.name k8s.cluster.name k8s.node.name process.executable.name span=10s | eval key = 'k8s.cluster.name' + \\\":\\\" + 'host.name' + \\\":\\\" + 'process.executable.name' | lookup k8s_process_resource_baseline key | fillnull | eval anomalies = \\\"\\\" | foreach stdev_* [ eval anomalies =if( '<<MATCHSTR>>' > ('avg_<<MATCHSTR>>' + 4 * 'stdev_<<MATCHSTR>>'), anomalies + \\\"<<MATCHSTR>> higher than average by \\\" + tostring(round(('<<MATCHSTR>>' - 'avg_<<MATCHSTR>>')/'stdev_<<MATCHSTR>>' ,2)) + \\\" Standard Deviations. <<MATCHSTR>>=\\\" + tostring('<<MATCHSTR>>') + \\\" avg_<<MATCHSTR>>=\\\" + tostring('avg_<<MATCHSTR>>') + \\\" 'stdev_<<MATCHSTR>>'=\\\" + tostring('stdev_<<MATCHSTR>>') + \\\", \\\" , anomalies) ] | eval anomalies = replace(anomalies, \\\",\\\\s$\\\", \\\"\\\") | where anomalies!=\\\"\\\" | stats count values(anomalies) as anomalies by host.name k8s.cluster.name k8s.node.name process.executable.name | sort - count | where count > 5 | rename host.name as host | `kubernetes_process_with_anomalous_resource_utilisation_filter`",
              "m": "To implement this detection, follow these steps:\\n* Deploy the OpenTelemetry Collector (OTEL) to your Kubernetes cluster.\\n* Enable the hostmetrics/process receiver in the OTEL configuration.\\n* Ensure that the process metrics, specifically Process.cpu.utilization and process.memory.utilization, are enabled.\\n* Install the Splunk Infrastructure Monitoring (SIM) add-on. (ref: https://splunkbase.spl…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Process with Anomalous Resource Utilisation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Process with Anomalous Resource Utilisation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes Process with Anomalous Resource Utilisation' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.238",
              "n": "Kubernetes Process with Resource Ratio Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects anomalous changes in resource utilization ratios for processes running on a Kubernetes node. It leverages process metrics collected via an OTEL collector and hostmetrics receiver, analyzed through Splunk Observability Cloud. The detection uses a lookup table containing average and standard deviation values for various resource ratios (e.g., CPU:memory, CPU:disk operations). Significant deviations from these baselines may indicate compromised processes, malicious ac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats avg(process.*) as process.* where `kubernetes_metrics` by host.name k8s.cluster.name k8s.node.name process.executable.name span=10s | eval cpu:mem = 'process.cpu.utilization'/'process.memory.utilization' | eval cpu:disk = 'process.cpu.utilization'/'process.disk.operations' | eval mem:disk = 'process.memory.utilization'/'process.disk.operations' | eval cpu:threads = 'process.cpu.utilization'/'process.threads' | eval disk:threads = 'process.disk.operations'/'process.threads' | eval key = 'k8s.cluster.name' + \\\":\\\" + 'host.name' + \\\":\\\" + 'process.executable.name' | lookup k8s_process_resource_ratio_baseline key | fillnull | eval anomalies = \\\"\\\" | foreach stdev_* [ eval anomalies =if( '<<MATCHSTR>>' > ('avg_<<MATCHSTR>>' + 4 * 'stdev_<<MATCHSTR>>'), anomalies + \\\"<<MATCHSTR>> ratio higher than average by \\\" + tostring(round(('<<MATCHSTR>>' - 'avg_<<MATCHSTR>>')/'stdev_<<MATCHSTR>>' ,2)) + \\\" Standard Deviations. <<MATCHSTR>>=\\\" + tostring('<<MATCHSTR>>') + \\\" avg_<<MATCHSTR>>=\\\" + tostring('avg_<<MATCHSTR>>') + \\\" 'stdev_<<MATCHSTR>>'=\\\" + tostring('stdev_<<MATCHSTR>>') + \\\", \\\" , anomalies) ] | eval anomalies = replace(anomalies, \\\",\\\\s$\\\", \\\"\\\") | where anomalies!=\\\"\\\" | stats count values(anomalies) as anomalies by host.name k8s.cluster.name k8s.node.name process.executable.name | where count > 5 | rename host.name as host | `kubernetes_process_with_resource_ratio_anomalies_filter`",
              "m": "To implement this detection, follow these steps:\\n* Deploy the OpenTelemetry Collector (OTEL) to your Kubernetes cluster.\\n* Enable the hostmetrics/process receiver in the OTEL configuration.\\n* Ensure that the process metrics, specifically Process.cpu.utilization and process.memory.utilization, are enabled.\\n* Install the Splunk Infrastructure Monitoring (SIM) add-on. (ref: https://splunkbase.spl…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Process with Resource Ratio Anomalies\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Process with Resource Ratio Anomalies\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes Process with Resource Ratio Anomalies' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.239",
              "n": "Kubernetes Shell Running on Worker Node",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies shell activity within the Kubernetes privilege scope on a worker node. It leverages process metrics from an OTEL collector hostmetrics receiver, specifically process.cpu.utilization and process.memory.utilization, pulled from Splunk Observability Cloud. This activity is significant as unauthorized shell processes can indicate potential security threats, providing attackers an entry point to compromise the node and the entire Kubernetes cluster. If confirmed mali…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats avg(process.cpu.utilization) as process.cpu.utilization avg(process.memory.utilization) as process.memory.utilization where `kubernetes_metrics` AND process.executable.name IN (\"sh\",\"bash\",\"csh\", \"tcsh\") by host.name k8s.cluster.name k8s.node.name process.pid process.executable.name span=10s\n    | search process.cpu.utilization>0 OR process.memory.utilization>0\n    | stats avg(process.cpu.utilization) as process.cpu.utilization avg(process.memory.utilization) as process.memory.utilization\n      BY host.name k8s.cluster.name k8s.node.name\n         process.pid process.executable.name\n    | rename host.name as host\n    | `kubernetes_shell_running_on_worker_node_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart/tree/main",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Shell Running on Worker Node\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Shell Running on Worker Node\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes Shell Running on Worker Node' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.240",
              "n": "Kubernetes Shell Running on Worker Node with CPU Activity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies shell activity within the Kubernetes privilege scope on a worker node, specifically when shell processes are consuming CPU resources. It leverages process metrics from an OTEL collector hostmetrics receiver, pulled from Splunk Observability Cloud via the Splunk Infrastructure Monitoring Add-on, focusing on process.cpu.utilization and process.memory.utilization. This activity is significant as unauthorized shell processes can indicate a security threat, potential…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| mstats avg(process.cpu.utilization) as process.cpu.utilization avg(process.memory.utilization) as process.memory.utilization where `kubernetes_metrics` AND process.executable.name IN (\"sh\",\"bash\",\"csh\", \"tcsh\") by host.name k8s.cluster.name k8s.node.name process.pid process.executable.name span=10s\n    | search process.cpu.utilization>0\n    | stats avg(process.cpu.utilization) as process.cpu.utilization avg(process.memory.utilization) as process.memory.utilization\n      BY host.name k8s.cluster.name k8s.node.name\n         process.pid process.executable.name\n    | rename host.name as host\n    | `kubernetes_shell_running_on_worker_node_with_cpu_activity_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/signalfx/splunk-otel-collector-chart/tree/main",
              "mitre": [
                "T1204"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Shell Running on Worker Node with CPU Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Shell Running on Worker Node with CPU Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Kubernetes Shell Running on Worker Node with CPU Activity' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.241",
              "n": "Kubernetes Suspicious Image Pulling",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious image pulling in Kubernetes environments. It identifies this activity by monitoring Kubernetes audit logs for image pull requests that do not match a predefined list of allowed images. This behavior is significant for a SOC as it may indicate an attacker attempting to deploy malicious software or infiltrate the system. If confirmed malicious, the impact could be severe, potentially leading to unauthorized access to sensitive systems or data, and enabling…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1526"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Suspicious Image Pulling\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1526. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Suspicious Image Pulling\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes Suspicious Image Pulling so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.242",
              "n": "Kubernetes Unauthorized Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects unauthorized access attempts to Kubernetes by analyzing Kubernetes audit logs. It identifies anomalies in access patterns by examining the source of requests and their response statuses. This activity is significant for a SOC as it may indicate an attacker attempting to infiltrate the Kubernetes environment. If confirmed malicious, such access could lead to unauthorized control over Kubernetes resources, potentially compromising sensitive systems or data within the…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Kubernetes Audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Kubernetes Audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/",
              "mitre": [
                "T1204"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kubernetes Unauthorized Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Kubernetes Audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kubernetes Unauthorized Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kubernetes Unauthorized Access so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.243",
              "n": "Microsoft Intune Device Health Scripts",
              "c": "high",
              "f": "intermediate",
              "v": "Detects creation or modification of Intune Device Health Scripts, which could be abused for lateral movement from Azure to on-premises Active Directory.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Monitor Activity",
              "q": "`azure_monitor_aad` operationName=\"Create deviceHealthScript\" OR operationName=\"Patch deviceHealthScript\"\n| rename properties.* AS *\n| stats count min(_time) AS firstTime max(_time) AS lastTime values(operationName) AS operationName by user, result, targetResources{}.displayName\n| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Enable Azure Monitor Activity logging for Microsoft Intune operations and forward to Splunk via the Splunk Add-on for Microsoft Cloud Services.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate adminstrative usage of this functionality will trigger this detection.",
              "refs": "https://posts.specterops.io/death-from-above-lateral-movement-from-azure-to-on-prem-ad-d18cb3959d4d, https://securityintelligence.com/x-force/detecting-intune-lateral-movement/, https://posts.specterops.io/maestro-9ed71d38d546",
              "mitre": [
                "T1072",
                "T1021.007",
                "T1202",
                "T1105"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Microsoft Intune Device Health Scripts\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Monitor Activity. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1072, T1021.007, T1202, T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Microsoft Intune Device Health Scripts\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate adminstrative usage of this functionality will trigger this detection.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Microsoft Intune Device Health Scripts' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.244",
              "n": "Microsoft Intune DeviceManagementConfigurationPolicies",
              "c": "high",
              "f": "intermediate",
              "v": "Detects modifications to Intune Device Management Configuration Policies, which attackers could abuse to disable security controls or push malicious configurations to managed endpoints.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Monitor Activity",
              "q": "`azure_monitor_aad` operationName=\"Create deviceManagementConfigurationPolicy\" OR operationName=\"Patch deviceManagementConfigurationPolicy\"\n| rename properties.* AS *\n| stats count min(_time) AS firstTime max(_time) AS lastTime values(operationName) AS operationName by user, result, targetResources{}.displayName\n| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Enable Azure Monitor Activity logging for Microsoft Intune operations and forward to Splunk via the Splunk Add-on for Microsoft Cloud Services.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate adminstrative usage of this functionality will trigger this detection.",
              "refs": "https://posts.specterops.io/death-from-above-lateral-movement-from-azure-to-on-prem-ad-d18cb3959d4d, https://securityintelligence.com/x-force/detecting-intune-lateral-movement/, https://posts.specterops.io/maestro-9ed71d38d546",
              "mitre": [
                "T1072",
                "T1484",
                "T1021.007",
                "T1562.001",
                "T1562.004"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Microsoft Intune DeviceManagementConfigurationPolicies\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Monitor Activity. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1072, T1484, T1021.007, T1562.001, T1562.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Microsoft Intune DeviceManagementConfigurationPolicies\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate adminstrative usage of this functionality will trigger this detection.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Microsoft Intune DeviceManagementConfigurationPolicies' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.245",
              "n": "Microsoft Intune Manual Device Management",
              "c": "high",
              "f": "intermediate",
              "v": "Detects manual device management actions in Microsoft Intune such as wipe, retire, or restart, which could indicate an attacker or insider abusing device management capabilities to disrupt endpoints.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Monitor Activity",
              "q": "`azure_monitor_aad` operationName=\"wipe managedDevice\" OR operationName=\"retire managedDevice\" OR operationName=\"shutdown managedDevice\" OR operationName=\"rebootNow managedDevice\"\n| rename properties.* AS *\n| stats count min(_time) AS firstTime max(_time) AS lastTime values(operationName) AS operationName by user, result, targetResources{}.displayName\n| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Enable Azure Monitor Activity logging for Microsoft Intune operations and forward to Splunk via the Splunk Add-on for Microsoft Cloud Services.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate adminstrative usage of this functionality will trigger this detection.",
              "refs": "https://posts.specterops.io/death-from-above-lateral-movement-from-azure-to-on-prem-ad-d18cb3959d4d, https://securityintelligence.com/x-force/detecting-intune-lateral-movement/, https://posts.specterops.io/maestro-9ed71d38d546",
              "mitre": [
                "T1021.007",
                "T1072",
                "T1529"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Microsoft Intune Manual Device Management\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Monitor Activity. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.007, T1072, T1529. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Microsoft Intune Manual Device Management\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate adminstrative usage of this functionality will trigger this detection.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Microsoft Intune Manual Device Management' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.246",
              "n": "Microsoft Intune Mobile Apps",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Microsoft Intune Mobile Apps. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Monitor Activity",
              "q": "`azure_monitor_activity` operationName=\"*MobileApp*\"\n    | rename identity as user, properties.TargetObjectIds{} as TargetObjectId, properties.TargetDisplayNames{} as TargetDisplayName, properties.Actor.IsDelegatedAdmin as user_isDelegatedAdmin\n    | rex field=\"operationName\" \"^(?P<action>\\w+)\\s\" | replace \"Patch\" with \"updated\", \"Create\" with \"created\", \"Delete\", with \"deleted\", \"assign\", with \"assigned\" IN action\n    | table _time operationName action user user_type user_isDelegatedAdmin TargetDisplayName TargetObjectId status tenantId correlationId\n    | `microsoft_intune_mobile_apps_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Azure Monitor Activity ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate adminstrative usage of this functionality will trigger this detection.",
              "refs": "https://posts.specterops.io/death-from-above-lateral-movement-from-azure-to-on-prem-ad-d18cb3959d4d, https://securityintelligence.com/x-force/detecting-intune-lateral-movement/, https://posts.specterops.io/maestro-9ed71d38d546",
              "mitre": [
                "T1072",
                "T1021.007",
                "T1202",
                "T1105"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Microsoft Intune Mobile Apps\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Monitor Activity. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1072, T1021.007, T1202, T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Microsoft Intune Mobile Apps\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate adminstrative usage of this functionality will trigger this detection.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Microsoft Intune Mobile Apps' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.247",
              "n": "Okta Non-Standard VPN Usage",
              "c": "high",
              "f": "intermediate",
              "v": "Remote Employment Fraud (REF) actors will often use virtual private networks (VPNs) to conceal their true physical location. Threat actors mask their originating IP address and instead appear to be situated in any location where the VPN service has a node.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Okta",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The analytic leverages Okta OktaIm2 logs to be ingested using the Splunk Add-on for Okta Identity Cloud (https://splunkbase.splunk.com/app/6553).",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited to no expected false positives once a baseline of common VPN software has been completed.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Okta Identity Cloud](https://splunkbase.splunk.com/app/6553)",
              "mitre": [
                "T1078",
                "T1572",
                "T1090"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Okta Non-Standard VPN Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Okta. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078, T1572, T1090. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Okta Non-Standard VPN Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited to no expected false positives once a baseline of common VPN software has been completed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Okta and IdP events that match this issue so we can see account takeover, token abuse, or credential problems before they spread.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.248",
              "n": "Risk Rule for Dev Sec Ops by Repository",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies high-risk activities within repositories by correlating repository data with risk scores. It leverages findings and intermediate findings created by detections from the Dev Sec Ops analytic stories, summing risk scores and capturing source and user information. The detection focuses on high-risk scores above 100 and sources with more than three occurrences. This activity is significant as it highlights repositories frequently targeted by threats, providing insig…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1204.003"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Risk Rule for Dev Sec Ops by Repository\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Risk Rule for Dev Sec Ops by Repository\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. 
Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Risk Rule for Dev Sec Ops by Repository so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.249",
              "n": "Access LSASS Memory for Dump Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to dump the LSASS process memory, a common technique in credential dumping attacks. It leverages Sysmon logs, specifically EventCode 10, to identify suspicious call traces to dbgcore.dll and dbghelp.dll associated with lsass.exe. This activity is significant as it often precedes the theft of sensitive login credentials, posing a high risk of unauthorized access to systems and data. If confirmed malicious, attackers could gain access to critical credentials…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 10 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators can create memory dumps for debugging purposes, but memory dumps of the LSASS process would be unusual.",
              "refs": "https://2017.zeronights.org/wp-content/uploads/materials/ZN17_Kheirkhabarov_Hunting_for_Credentials_Dumping_in_Windows_Environment.pdf",
              "mitre": [
                "T1003.001"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Access LSASS Memory for Dump Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Access LSASS Memory for Dump Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators can create memory dumps for debugging purposes, but memory dumps of the LSASS process would be unusual.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
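\n\nIllustrative Sketch: What the Underlying Detection Looks For\n\nThe packaged ESCU search is the source of truth (view it in Content Management), but conceptually it flags Sysmon EventCode 10 events whose call trace references dbgcore.dll or dbghelp.dll with lsass.exe as the target, as described above. A simplified ad-hoc approximation for spot-checking data quality (the `sysmon` macro is assumed to resolve to your Sysmon index and sourcetype):\n\n```spl\n`sysmon` EventCode=10 TargetImage=*lsass.exe (CallTrace=*dbgcore.dll* OR CallTrace=*dbghelp.dll*)\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, SourceImage, TargetImage\n```\n\nKeep the scheduled ESCU version as the production detection; use this sketch only to confirm the required events are present.",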
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Access LSASS Memory for Dump Creation so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.250",
              "n": "Active Directory Privilege Escalation Identified",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential privilege escalation activities within an organization's Active Directory (AD) environment. It detects this activity by correlating multiple analytics from the Active Directory Privilege Escalation analytic story within a specified time frame. This is significant for a SOC as it helps identify coordinated attempts to gain elevated privileges, which could indicate a serious security threat. If confirmed malicious, this activity could allow attackers to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will most likely be present based on risk scoring and how the organization handles system to system communication. Filter, or modify as needed. In addition to count by analytics, adding a risk score may be useful. In our testing, with 22 events over 30 days, the risk scores ranged from 500 to 80,000. Your organization will be different, monitor and modify as needed.",
              "refs": "https://attack.mitre.org/tactics/TA0004/, https://research.splunk.com/stories/active_directory_privilege_escalation/",
              "mitre": [
                "T1484"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Active Directory Privilege Escalation Identified\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1484. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Active Directory Privilege Escalation Identified\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will most likely be present based on risk scoring and how the organization handles system to system communication. Filter, or modify as needed. In addition to count by analytics, adding a risk score may be useful. In our testing, with 22 events over 30 days, the risk scores ranged from 500 to 80,000. Your organization will be different, monitor and modify as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
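\n\nWorked Example: Surveying Risk Score Distribution\n\nBecause observed risk scores vary widely between organizations (the upstream testing note above cites 500 to 80,000), survey your own distribution before settling on thresholds. A minimal sketch; the Analytic Story name is assumed to match the upstream story, and the score field follows standard ES conventions:\n\n```spl\n| from datamodel Risk.All_Risk\n| search analyticstories=\"Active Directory Privilege Escalation\"\n| stats sum(calculated_risk_score) as total_risk dc(search_name) as distinct_detections by normalized_risk_object\n| sort -total_risk\n```\n\nEntities combining high total_risk with multiple distinct detections are the strongest escalation candidates.",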
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Active Directory Privilege Escalation Identified so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.251",
              "n": "Add or Set Windows Defender Exclusion",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Add or Set Windows Defender Exclusion. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://tccontre.blogspot.com/2020/01/remcos-rat-evading-windows-defender-av.html, https://app.any.run/tasks/cf1245de-06a7-4366-8209-8e3006f2bfe5/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/, https://learn.microsoft.com/en-us/powershell/module/defender/add-mppreference?view=windowsserver2025-ps",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Add or Set Windows Defender Exclusion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Add or Set Windows Defender Exclusion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
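\n\nIllustrative Sketch: What the Underlying Detection Looks For\n\nThe authoritative logic ships in the ESCU Correlation Search, but conceptually it flags process executions whose command line invokes the Add-MpPreference or Set-MpPreference PowerShell cmdlets with exclusion parameters. A simplified approximation over the CIM Endpoint data model (the `security_content_summariesonly` macro ships with ESCU; the wildcards are assumptions to adapt):\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where (Processes.process=\"*Add-MpPreference*\" OR Processes.process=\"*Set-MpPreference*\") Processes.process=\"*exclusion*\"\n  by Processes.dest Processes.user Processes.process\n```\n\nUse this only to validate that exclusion activity is visible in your data; keep the scheduled ESCU version as the production detection.",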
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Add or Set Windows Defender Exclusion so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.252",
              "n": "AdsiSearcher Account Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the `[Adsisearcher]` type accelerator in PowerShell to query Active Directory for domain users. It leverages PowerShell Script Block Logging (EventCode=4104) to identify script blocks containing `[adsisearcher]`, `objectcategory=user`, and `.findAll()`. This activity is significant as it may indicate an attempt by adversaries or Red Teams to enumerate domain users for situational awareness and Active Directory discovery. If confirmed malicious, this coul…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1087/002/, https://www.blackhillsinfosec.com/red-blue-purple/, https://devblogs.microsoft.com/scripting/use-the-powershell-adsisearcher-type-accelerator-to-search-active-directory/",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AdsiSearcher Account Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AdsiSearcher Account Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
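\n\nWorked Example: Lookup-Based Suppression\n\nSince administrators and power users may legitimately run [adsisearcher] queries, a suppression lookup preserves coverage while silencing known-good sources. The lookup name and field (admin_hosts_allowlist, host_entity) are hypothetical; create and populate the lookup before use:\n\n```spl\n| from datamodel Risk.All_Risk\n| search source=\"*AdsiSearcher Account Discovery*\"\n| lookup admin_hosts_allowlist host_entity AS normalized_risk_object OUTPUT approved\n| where isnull(approved)\n```\n\nTo stop suppressed events from generating risk at all, apply the same lookup filter inside the detection’s ESCU filter macro instead of at investigation time.",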
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for AdsiSearcher Account Discovery so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.253",
              "n": "Cisco Isovalent - Cron Job Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a cron job within the Cisco Isovalent environment. It identifies this activity by monitoring process execution logs for cron job creation events. This behavior is significant for a SOC as it could allow an attacker to execute malicious tasks repeatedly and automatically, posing a threat to the Kubernetes infrastructure. If confirmed malicious, this activity could lead to persistent attacks, service disruptions, or unauthorized access to sensitive in…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Isovalent Process Exec",
              "q": "# Shared SPL: intentional — see UC-10.2.31\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$pod_name$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Isovalent Process Exec ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This activity may be triggered by legitimate administrative scripts, container images, or third-party operators that use cron for scheduled tasks, so please investigate the alert in context to rule out benign operations.",
              "refs": "https://attack.mitre.org/techniques/T1053/003/, https://medium.com/@bag0zathev2/cronjobs-for-hackers-bugbounty-article-7d51588d0fd5, https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/",
              "mitre": [
                "T1053.003",
                "T1053.007"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Isovalent - Cron Job Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Isovalent Process Exec. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.003, T1053.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Isovalent - Cron Job Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This activity may be triggered by legitimate administrative scripts, container images, or third-party operators that use cron for scheduled tasks, so please investigate the alert in context to rule out benign operations.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
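\n\nWorked Example: Per-Entity Risk Exception\n\nIf a specific pod (for example, a backup operator that legitimately creates cron jobs) keeps accruing risk from this detection, an exception lookup scoped to entity plus detection preserves coverage elsewhere. The lookup name and fields (risk_exceptions, entity, detection) are hypothetical:\n\n```spl\n| from datamodel Risk.All_Risk\n| search source=\"*Cisco Isovalent - Cron Job Creation*\"\n| lookup risk_exceptions entity AS normalized_risk_object detection AS source OUTPUT exception_reason\n| where isnull(exception_reason)\n```\n\nRecord each exception’s reason and owner in the lookup so exceptions stay auditable.",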
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco Isovalent - Cron Job Creation so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.254",
              "n": "Cisco Isovalent - Kprobe Spike",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Isovalent - Kprobe Spike. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Isovalent Process Kprobe",
              "q": "`cisco_isovalent` process_kprobe.action!=\"\"\n    | bin _time span=5m | rename process_kprobe.parent.pod.name as pod_name\n    | stats count as kprobe_count\n            values(process_kprobe.function_name) as functions\n            values(process_kprobe.process.binary) as binaries\n            values(process_kprobe.args{}.string_arg) as args\n      by pod_name _time\n    | where kprobe_count > 10 | `cisco_isovalent___kprobe_spike_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Cisco Isovalent Process Kprobe ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://docs.isovalent.com/user-guide/sec-ops-visibility/process-execution/index.html",
              "mitre": [
                "T1068"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Isovalent - Kprobe Spike\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Isovalent Process Kprobe. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Isovalent - Kprobe Spike\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Cisco Isovalent - Kprobe Spike' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.255",
              "n": "Cisco Isovalent - Late Process Execution",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Isovalent - Late Process Execution. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Isovalent Process Exec",
              "q": "# Shared SPL: intentional — see UC-10.2.31\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$pod_name$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Isovalent Process Exec ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This activity may be triggered by legitimate administrative scripts, container images, or third-party operators that use cron for scheduled tasks, so please investigate the alert in context to rule out benign operations.",
              "refs": "https://docs.isovalent.com/user-guide/sec-ops-visibility/process-execution/index.html",
              "mitre": [
                "T1543"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Isovalent - Late Process Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Isovalent Process Exec. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Isovalent - Late Process Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This activity may be triggered by legitimate administrative scripts, container images, or third-party operators that use cron for scheduled tasks, so please investigate the alert in context to rule out benign operations.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
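\n\nRisk-event spot check (illustrative)\n\nA minimal sketch for confirming this detection is attributing risk. It assumes ES's default `risk` index and the standard risk-event fields (`search_name`, `risk_object`, `risk_score`); adjust the names if your deployment differs:\n\n```spl\nindex=risk search_name=\"*Late Process Execution*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by search_name, risk_object, risk_score\n| sort -lastTime\n```",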
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco Isovalent - Late Process Execution so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.256",
              "n": "Cisco Isovalent - Non Allowlisted Image Use",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Isovalent - Non Allowlisted Image Use. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Isovalent Process Exec",
              "q": "# Shared SPL: intentional — see UC-10.2.31\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$pod_name$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Isovalent Process Exec ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "New legitimate images during rollouts or blue/green deployments may appear until the allowlist is updated. Coordinate with platform/DevOps teams to synchronize allowlist changes.",
              "refs": "https://dev.to/thenjdevopsguy/attacking-a-kubernetes-cluster-enter-red-team-mode-2onj, https://www.reddit.com/r/kubernetes/comments/l6e5yr/one_of_our_kubernetes_containers_was_compromised/",
              "mitre": [
                "T1204.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Isovalent - Non Allowlisted Image Use\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Isovalent Process Exec. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Isovalent - Non Allowlisted Image Use\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: New legitimate images during rollouts or blue/green deployments may appear until the allowlist is updated. Coordinate with platform/DevOps teams to synchronize allowlist changes.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
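\n\nRisk-event spot check (illustrative)\n\nAs a hedged sanity check (assuming the default `risk` index and standard risk-event fields), confirm risk events are being written for this detection:\n\n```spl\nindex=risk search_name=\"*Non Allowlisted Image Use*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by search_name, risk_object, risk_score\n| sort -lastTime\n```",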
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco Isovalent - Non Allowlisted Image Use so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.257",
              "n": "Cisco Isovalent - Nsenter Usage in Kubernetes Pod",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Isovalent - Nsenter Usage in Kubernetes Pod. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Isovalent Process Exec",
              "q": "# Shared SPL: intentional — see UC-10.2.31\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$pod_name$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Isovalent Process Exec ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is highly unlikely that nsenter will be used in a legitimate way, investigate the alert in context to rule out benign operations.",
              "refs": "https://isovalent.com/blog/post/2021-11-container-escape/, https://kubehound.io/reference/attacks/CE_NSENTER/",
              "mitre": [
                "T1543"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Isovalent - Nsenter Usage in Kubernetes Pod\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Isovalent Process Exec. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Isovalent - Nsenter Usage in Kubernetes Pod\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is highly unlikely that nsenter will be used in a legitimate way, investigate the alert in context to rule out benign operations.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
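\n\nRisk-event spot check (illustrative)\n\nA quick sketch, assuming ES's default `risk` index and standard risk-event fields, to verify risk attribution for this detection and see which entities are being flagged:\n\n```spl\nindex=risk search_name=\"*Nsenter Usage*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by search_name, risk_object, risk_score\n| sort -lastTime\n```",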
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco Isovalent - Nsenter Usage in Kubernetes Pod so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.258",
              "n": "Cisco Isovalent - Potential Escape to Host",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Isovalent - Potential Escape to Host. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Isovalent Process Exec",
              "q": "# Shared SPL: intentional — see UC-10.2.31\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$pod_name$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Isovalent Process Exec ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1611/, https://unit42.paloaltonetworks.com/hildegard-malware-teamtnt/",
              "mitre": [
                "T1611"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Isovalent - Potential Escape to Host\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Isovalent Process Exec. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1611. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Isovalent - Potential Escape to Host\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
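\n\nRisk-event spot check (illustrative)\n\nAn optional check, sketched under the assumption of the default `risk` index and standard risk-event fields, to confirm this detection is contributing risk:\n\n```spl\nindex=risk search_name=\"*Potential Escape to Host*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by search_name, risk_object, risk_score\n| sort -lastTime\n```",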
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco Isovalent - Potential Escape to Host so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.259",
              "n": "Cisco Isovalent - Shell Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a shell inside a container namespace within the Cisco Isovalent environment. It identifies this activity by monitoring process execution logs for the execution of a shell (sh or bash) inside a container namespace. This behavior is significant for a SOC as it could allow an attacker to gain shell access to the container, potentially leading to further compromise of the Kubernetes cluster. If confirmed malicious, this activity could lead to data thef…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Isovalent Process Exec",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$node_name$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Isovalent Process Exec ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This activity may be triggered by legitimate administrative scripts, container images, or third-party operators that use cron for scheduled tasks, so please investigate the alert in context to rule out benign operations.",
              "refs": "https://www.sysdig.com/blog/mitre-attck-framework-for-container-runtime-security-with-sysdig-falco",
              "mitre": [
                "T1543"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Isovalent - Shell Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Isovalent Process Exec. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Isovalent - Shell Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This activity may be triggered by legitimate administrative scripts, container images, or third-party operators that use cron for scheduled tasks, so please investigate the alert in context to rule out benign operations.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
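\n\nRisk-event spot check (illustrative)\n\nA short sketch (assuming the default `risk` index and standard risk-event fields) to confirm this detection is producing risk events:\n\n```spl\nindex=risk search_name=\"*Shell Execution*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by search_name, risk_object, risk_score\n| sort -lastTime\n```",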
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco Isovalent - Shell Execution so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.260",
              "n": "Cisco NVM - Curl Execution With Insecure Flags",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - Curl Execution With Insecure Flags. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://thedfirreport.com/2025/05/19/another-confluence-bites-the-dust-falling-to-elpaco-team-ransomware/",
              "mitre": [
                "T1197"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - Curl Execution With Insecure Flags\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1197. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - Curl Execution With Insecure Flags\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
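\n\nRisk-event spot check (illustrative)\n\nA minimal sketch, assuming the default `risk` index and standard risk-event fields, to confirm risk events are flowing for this detection:\n\n```spl\nindex=risk search_name=\"*Curl Execution With Insecure Flags*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by search_name, risk_object, risk_score\n| sort -lastTime\n```",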
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - Curl Execution With Insecure Flags so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.261",
              "n": "Cisco NVM - Installation of Typosquatted Python Package",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - Installation of Typosquatted Python Package. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://securelist.com/two-more-malicious-python-packages-in-the-pypi/107218/, https://blog.checkpoint.com/securing-the-cloud/pypi-inundated-by-malicious-typosquatting-campaign/, https://rhisac.org/threat-intelligence/typosquatting-campaign-targets-python-developers-with-hundreds-of-malicious-libraries/",
              "mitre": [
                "T1059"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - Installation of Typosquatted Python Package\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - Installation of Typosquatted Python Package\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
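\n\nRisk-event spot check (illustrative)\n\nA brief sanity check, assuming ES's default `risk` index and standard risk-event fields, to confirm this detection is attributing risk:\n\n```spl\nindex=risk search_name=\"*Typosquatted Python Package*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by search_name, risk_object, risk_score\n| sort -lastTime\n```",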
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - Installation of Typosquatted Python Package so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.262",
              "n": "Cisco NVM - MSHTML or MSHTA Network Execution Without URL in CLI",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - MSHTML or MSHTA Network Execution Without URL in CLI. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1218/005/, https://lolbas-project.github.io/lolbas/Binaries/Rundll32/, https://learn.microsoft.com/en-us/windows/win32/api/mshtml/nf-mshtml-mshtml_runhtmlapplication",
              "mitre": [
                "T1218.005",
                "T1059.005"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - MSHTML or MSHTA Network Execution Without URL in CLI\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.005, T1059.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - MSHTML or MSHTA Network Execution Without URL in CLI\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the wildcard narrows to this detection family):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Cisco NVM*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
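\n\nOptional Raw-Data Triage Sketch\n\nBeyond the Risk drilldown, an ad-hoc search against the raw NVM flow feed can help scope a firing. This is a minimal sketch under stated assumptions: the sourcetype `cisco:nvm:flowdata` and the fields `process_name`, `parent_process_name`, `dest`, and `dest_port` reflect typical Cisco NVM add-on output and must be verified in your deployment, and `index=*` is a placeholder to narrow. The authoritative logic remains the ESCU correlation search.\n\n```spl\nindex=* sourcetype=cisco:nvm:flowdata process_name=\"mshta.exe\"\n| stats count values(dest) as destinations values(dest_port) as dest_ports by host, parent_process_name\n| sort -count\n```\n\nA populated parent process column helps separate user-driven HTA launches from script-chain abuse.",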
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - MSHTML or MSHTA Network Execution Without URL in CLI so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.263",
              "n": "Cisco NVM - Non-Network Binary Making Network Connection",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - Non-Network Binary Making Network Connection. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://redcanary.com/threat-detection-report/techniques/process-injection/, https://www.blue-prints.blog/content/blog/posts/lolbin/addinutil-lolbas.html",
              "mitre": [
                "T1055",
                "T1036"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - Non-Network Binary Making Network Connection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055, T1036. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - Non-Network Binary Making Network Connection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the wildcard narrows to this detection family):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Cisco NVM*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
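\n\nOptional Raw-Data Triage Sketch\n\nBeyond the Risk drilldown, an ad-hoc search against the raw NVM flow feed can help scope a firing. This is a minimal sketch under stated assumptions: the sourcetype `cisco:nvm:flowdata` and the fields `process_name`, `dest`, and `dest_port` reflect typical Cisco NVM add-on output and must be verified in your deployment, and `index=*` is a placeholder to narrow. The authoritative logic remains the ESCU correlation search.\n\n```spl\nindex=* sourcetype=cisco:nvm:flowdata process_name IN (\"notepad.exe\", \"calc.exe\", \"mspaint.exe\", \"write.exe\")\n| stats count values(dest) as destinations values(dest_port) as dest_ports by host, process_name\n```\n\nThe process shortlist is illustrative only; the upstream detection maintains its own definition of non-network binaries.",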
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - Non-Network Binary Making Network Connection so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.264",
              "n": "Cisco NVM - Outbound Connection to Suspicious Port",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - Outbound Connection to Suspicious Port. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://mthcht.medium.com/hunting-for-suspicious-ports-activities-50ef56d5cef",
              "mitre": [
                "T1571"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - Outbound Connection to Suspicious Port\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1571. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - Outbound Connection to Suspicious Port\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the wildcard narrows to this detection family):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Cisco NVM*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
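\n\nOptional Raw-Data Triage Sketch\n\nBeyond the Risk drilldown, an ad-hoc search against the raw NVM flow feed can help scope a firing. This is a minimal sketch under stated assumptions: the sourcetype `cisco:nvm:flowdata` and the fields `process_name`, `dest`, and `dest_port` reflect typical Cisco NVM add-on output and must be verified in your deployment, and `index=*` is a placeholder to narrow. The authoritative logic remains the ESCU correlation search.\n\n```spl\nindex=* sourcetype=cisco:nvm:flowdata dest_port IN (1337, 4444, 5555, 6667, 8333)\n| stats count values(process_name) as processes by host, dest, dest_port\n| sort -count\n```\n\nThe port list is an example only; align it with the suspicious-ports reference linked above before relying on it.",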
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - Outbound Connection to Suspicious Port so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.265",
              "n": "Cisco NVM - Rclone Execution With Network Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - Rclone Execution With Network Activity. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://thedfirreport.com/2021/03/29/sodinokibi-aka-revil-ransomware/, https://thedfirreport.com/2021/10/04/bazarloader-and-the-conti-leaks/, https://thedfirreport.com/2021/11/29/continuing-the-bazar-ransomware-story/, https://redcanary.com/blog/threat-detection/rclone-mega-extortion/",
              "mitre": [
                "T1567.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - Rclone Execution With Network Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1567.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - Rclone Execution With Network Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the wildcard narrows to this detection family):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Cisco NVM*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
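\n\nOptional Raw-Data Triage Sketch\n\nBeyond the Risk drilldown, an ad-hoc search against the raw NVM flow feed can help scope a firing. This is a minimal sketch under stated assumptions: the sourcetype `cisco:nvm:flowdata` and the fields `process_name`, `dest`, and `bytes_out` reflect typical Cisco NVM add-on output and must be verified in your deployment, and `index=*` is a placeholder to narrow. The authoritative logic remains the ESCU correlation search.\n\n```spl\nindex=* sourcetype=cisco:nvm:flowdata process_name=\"rclone*\"\n| stats count sum(bytes_out) as total_bytes_out values(dest) as destinations by host, process_name\n| where total_bytes_out > 104857600\n```\n\nThe 100 MiB threshold (104857600 bytes) is a placeholder; tune it to normal transfer volumes in your environment.",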
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - Rclone Execution With Network Activity so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.266",
              "n": "Cisco NVM - Rundll32 Abuse of MSHTML.DLL for Payload Download",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - Rundll32 Abuse of MSHTML.DLL for Payload Download. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://lolbas-project.github.io/lolbas/Binaries/Rundll32/, https://redcanary.com/blog/threat-detection/threat-research-questions/, https://twitter.com/n1nj4sec/status/1421190238081277959, https://hyp3rlinx.altervista.org/advisories/MICROSOFT_WINDOWS_DEFENDER_TROJAN.WIN32.POWESSERE.G_MITIGATION_BYPASS_PART2.txt, http://hyp3rlinx.altervista.org/advisories/MICROSOFT_WINDOWS_DEFENDER_DETECTION_BYPASS.txt",
              "mitre": [
                "T1218.005"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - Rundll32 Abuse of MSHTML.DLL for Payload Download\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - Rundll32 Abuse of MSHTML.DLL for Payload Download\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the wildcard narrows to this detection family):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Cisco NVM*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
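\n\nOptional Raw-Data Triage Sketch\n\nBeyond the Risk drilldown, an ad-hoc search against the raw NVM flow feed can help scope a firing. This is a minimal sketch under stated assumptions: the sourcetype `cisco:nvm:flowdata` and the fields `process_name`, `parent_process_name`, `dest_hostname`, and `dest_port` reflect typical Cisco NVM add-on output and must be verified in your deployment, and `index=*` is a placeholder to narrow. The authoritative logic remains the ESCU correlation search.\n\n```spl\nindex=* sourcetype=cisco:nvm:flowdata process_name=\"rundll32.exe\"\n| stats count values(dest_hostname) as destinations values(dest_port) as dest_ports by host, parent_process_name\n| sort -count\n```\n\nAny rundll32 flow to an unfamiliar destination deserves a closer look; the loaded-module context the detection relies on is not visible in this sketch.",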
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - Rundll32 Abuse of MSHTML.DLL for Payload Download so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.267",
              "n": "Cisco NVM - Susp Script From Archive Triggering Network Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - Susp Script From Archive Triggering Network Activity. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://redcanary.com/threat-detection-report/threats/scarlet-goldfinch/",
              "mitre": [
                "T1059.005",
                "T1204.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - Susp Script From Archive Triggering Network Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.005, T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - Susp Script From Archive Triggering Network Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the wildcard narrows to this detection family):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Cisco NVM*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
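\n\nOptional Raw-Data Triage Sketch\n\nBeyond the Risk drilldown, an ad-hoc search against the raw NVM flow feed can help scope a firing. This is a minimal sketch under stated assumptions: the sourcetype `cisco:nvm:flowdata` and the fields `process_name`, `parent_process_name`, and `dest` reflect typical Cisco NVM add-on output and must be verified in your deployment, and `index=*` is a placeholder to narrow. The authoritative logic remains the ESCU correlation search.\n\n```spl\nindex=* sourcetype=cisco:nvm:flowdata process_name IN (\"wscript.exe\", \"cscript.exe\", \"powershell.exe\")\n| stats count values(dest) as destinations by host, process_name, parent_process_name\n```\n\nArchive lineage is not visible here; the upstream detection correlates the script's origin, while this sketch only surfaces script interpreters generating network traffic.",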
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - Susp Script From Archive Triggering Network Activity so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.268",
              "n": "Cisco NVM - Suspicious Download From File Sharing Website",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - Suspicious Download From File Sharing Website. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://twitter.com/jhencinski/status/1102695118455349248, https://isc.sans.edu/forums/diary/Investigating+Microsoft+BITS+Activity/23281/, https://www.virustotal.com/gui/domain/paste.ee/relations, https://www.cisa.gov/uscert/ncas/alerts/aa22-321a, https://www.microsoft.com/en-us/security/blog/2024/01/17/new-ttps-observed-in-mint-sandstorm-campaign-targeting-high-profile-individuals-at-universities-and-research-orgs/",
              "mitre": [
                "T1197"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - Suspicious Download From File Sharing Website\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1197. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - Suspicious Download From File Sharing Website\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the wildcard narrows to this detection family):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Cisco NVM*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
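\n\nOptional Raw-Data Triage Sketch\n\nBeyond the Risk drilldown, an ad-hoc search against the raw NVM flow feed can help scope a firing. This is a minimal sketch under stated assumptions: the sourcetype `cisco:nvm:flowdata` and the fields `process_name` and `dest_hostname` reflect typical Cisco NVM add-on output and must be verified in your deployment, and `index=*` is a placeholder to narrow. The authoritative logic remains the ESCU correlation search.\n\n```spl\nindex=* sourcetype=cisco:nvm:flowdata dest_hostname IN (\"*mega.nz\", \"*paste.ee*\", \"*anonfiles*\", \"*transfer.sh*\")\n| stats count values(process_name) as processes by host, dest_hostname\n| sort -count\n```\n\nThe domain list is illustrative; extend it with the file-sharing services relevant to your threat model.",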
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - Suspicious Download From File Sharing Website so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.269",
              "n": "Cisco NVM - Suspicious File Download via Headless Browser",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - Suspicious File Download via Headless Browser. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://labs.withsecure.com/content/dam/labs/docs/WithSecure_Research_DUCKTAIL.pdf, https://www.trendmicro.com/en_us/research/23/e/managed-xdr-investigation-of-ducktail-in-trend-micro-vision-one.html, https://x.com/mrd0x/status/1478234484881436672?s=12, https://developer.chrome.com/docs/chromium/headless",
              "mitre": [
                "T1105",
                "T1059"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - Suspicious File Download via Headless Browser\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105, T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - Suspicious File Download via Headless Browser\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - Suspicious File Download via Headless Browser so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.270",
              "n": "Cisco NVM - Suspicious Network Connection From Process With No Args",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - Suspicious Network Connection From Process With No Args. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://redcanary.com/threat-detection-report/techniques/process-injection/",
              "mitre": [
                "T1055",
                "T1218"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - Suspicious Network Connection From Process With No Args\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055, T1218. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - Suspicious Network Connection From Process With No Args\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - Suspicious Network Connection From Process With No Args so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.271",
              "n": "Cisco NVM - Suspicious Network Connection Initiated via MsXsl",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - Suspicious Network Connection Initiated via MsXsl. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://lolbas-project.github.io/lolbas/OtherMSBinaries/Msxsl/",
              "mitre": [
                "T1220"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - Suspicious Network Connection Initiated via MsXsl\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1220. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - Suspicious Network Connection Initiated via MsXsl\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - Suspicious Network Connection Initiated via MsXsl so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.272",
              "n": "Cisco NVM - Suspicious Network Connection to IP Lookup Service API",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - Suspicious Network Connection to IP Lookup Service API. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://github.com/SigmaHQ/sigma/blob/master/rules/windows/network_connection/net_connection_win_domain_external_ip_lookup.yml, https://www.cisa.gov/news-events/cybersecurity-advisories/aa20-302a",
              "mitre": [
                "T1590.005",
                "T1016"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - Suspicious Network Connection to IP Lookup Service API\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1590.005, T1016. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - Suspicious Network Connection to IP Lookup Service API\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - Suspicious Network Connection to IP Lookup Service API so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.273",
              "n": "Cisco NVM - Webserver Download From File Sharing Website",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco NVM - Webserver Download From File Sharing Website. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2024/01/17/new-ttps-observed-in-mint-sandstorm-campaign-targeting-high-profile-individuals-at-universities-and-research-orgs/, https://research.splunk.com/",
              "mitre": [
                "T1105",
                "T1190"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco NVM - Webserver Download From File Sharing Website\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105, T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco NVM - Webserver Download From File Sharing Website\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Cisco NVM - Webserver Download From File Sharing Website so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.274",
              "n": "Create Remote Thread into LSASS",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a remote thread in the Local Security Authority Subsystem Service (LSASS). This behavior is identified using Sysmon EventID 8 logs, focusing on processes that create remote threads in lsass.exe. This activity is significant because it is commonly associated with credential dumping, a tactic used by adversaries to steal user authentication credentials. If confirmed malicious, this could allow attackers to gain unauthorized access to sensitive informa…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 8",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 8 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Other tools can access LSASS for legitimate reasons and generate an event. In these cases, tweaking the search may help eliminate noise.",
              "refs": "https://2017.zeronights.org/wp-content/uploads/materials/ZN17_Kheirkhabarov_Hunting_for_Credentials_Dumping_in_Windows_Environment.pdf",
              "mitre": [
                "T1003.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Create Remote Thread into LSASS\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 8. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Create Remote Thread into LSASS\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Other tools can access LSASS for legitimate reasons and generate an event. In these cases, tweaking the search may help eliminate noise.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Create Remote Thread into LSASS so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.275",
              "n": "Creation of lsass Dump with Taskmgr",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of an lsass.exe process dump using Windows Task Manager. It leverages Sysmon EventID 11 to identify file creation events where the target filename matches *lsass*.dmp. This activity is significant because creating an lsass dump can be a precursor to credential theft, as the dump file contains sensitive information such as user passwords. If confirmed malicious, an attacker could use the lsass dump to extract credentials and escalate privileges, potenti…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators can create memory dumps for debugging purposes, but memory dumps of the LSASS process would be unusual.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1003.001/T1003.001.md#atomic-test-5---dump-lsassexe-memory-using-windows-task-manager, https://attack.mitre.org/techniques/T1003/001/, https://2017.zeronights.org/wp-content/uploads/materials/ZN17_Kheirkhabarov_Hunting_for_Credentials_Dumping_in_Windows_Environment.pdf",
              "mitre": [
                "T1003.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Creation of lsass Dump with Taskmgr\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Creation of lsass Dump with Taskmgr\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators can create memory dumps for debugging purposes, but memory dumps of the LSASS process would be unusual.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Creation of lsass Dump with Taskmgr so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.276",
              "n": "Crowdstrike Admin With Duplicate Password",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects CrowdStrike alerts for admin accounts with duplicate password risk, identifying instances where administrative users share the same password. This practice significantly increases the risk of unauthorized access and potential breaches. Addressing these alerts promptly is crucial for maintaining strong security protocols, ensuring each admin account uses a unique, secure password to protect critical systems and data.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.crowdstrike.com/wp-content/uploads/2022/12/CrowdStrike-Falcon-Event-Streams-Add-on-Guide-v3.pdf",
              "mitre": [
                "T1110"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Crowdstrike Admin With Duplicate Password\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Crowdstrike Admin With Duplicate Password\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Crowdstrike Admin With Duplicate Password so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.277",
              "n": "CrowdStrike Falcon Stream Alerts",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic is to leverage alerts from CrowdStrike Falcon Event Stream. This query aggregates and summarizes DetectionSummaryEvent and IdpDetectionSummaryEvent alerts from CrowdStrike Falcon Event Stream, providing details such as destination, user, severity, MITRE information, and Crowdstrike id and links. The evals in the search do multiple things to include align the severity, ensure the user, dest, title, description, MITRE fields are set properly, and the drilldowns are defined b…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "CrowdStrike Falcon Stream Alert",
              "q": "index=crowdstrike sourcetype=\"CrowdStrike:Event:Streams:JSON\" event_simpleName IN (\"DetectionSummaryEvent\", \"IdpDetectionSummaryEvent\")\n| eval severity=case(SeverityName==\"Critical\",1, SeverityName==\"High\",2, SeverityName==\"Medium\",3, SeverityName==\"Low\",4, true(),5)\n| stats count min(_time) AS firstTime max(_time) AS lastTime values(Tactic) AS mitre_tactic values(Technique) AS mitre_technique by ComputerName UserName DetectName SeverityName FalconHostLink\n| sort severity\n| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file hash entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires CrowdStrike Falcon Stream Alert ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may vary based on Crowdstrike configuration; monitor and filter out the alerts that are not relevant to your environment.",
              "refs": "https://www.crowdstrike.com/en-us/resources/guides/crowdstrike-falcon-event-streams-add-on-for-splunk-guide-v3/, https://splunkbase.splunk.com/app/5082",
              "mitre": [],
              "dtype": "file_hash",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"CrowdStrike Falcon Stream Alerts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file hash entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: CrowdStrike Falcon Stream Alert. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"CrowdStrike Falcon Stream Alerts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may vary based on Crowdstrike configuration; monitor and filter out the alerts that are not relevant to your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'CrowdStrike Falcon Stream Alerts' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.278",
              "n": "Crowdstrike Medium Severity Alert",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a CrowdStrike alert with MEDIUM severity indicates a potential threat that requires prompt attention. This alert level suggests suspicious activity that may compromise security but is not immediately critical. It typically involves detectable but non-imminent risks, such as unusual behavior or attempted policy violations, which should be investigated further and mitigated quickly to prevent escalation of attacks.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.crowdstrike.com/wp-content/uploads/2022/12/CrowdStrike-Falcon-Event-Streams-Add-on-Guide-v3.pdf",
              "mitre": [
                "T1110"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Crowdstrike Medium Severity Alert\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Crowdstrike Medium Severity Alert\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Crowdstrike Medium Severity Alert so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.279",
              "n": "Crowdstrike Multiple LOW Severity Alerts",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects multiple CrowdStrike LOW severity alerts, indicating a series of minor suspicious activities or policy violations. These alerts are not immediately critical but should be reviewed to prevent potential threats. They often highlight unusual behavior or low-level risks that, if left unchecked, could escalate into more significant security issues. Regular monitoring and analysis of these alerts are essential for maintaining robust security.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_host$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.crowdstrike.com/wp-content/uploads/2022/12/CrowdStrike-Falcon-Event-Streams-Add-on-Guide-v3.pdf",
              "mitre": [
                "T1110"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Crowdstrike Multiple LOW Severity Alerts\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Crowdstrike Multiple LOW Severity Alerts\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Crowdstrike Multiple LOW Severity Alerts so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.280",
              "n": "Crowdstrike Privilege Escalation For Non-Admin User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects CrowdStrike alerts for privilege escalation attempts by non-admin users. These alerts indicate unauthorized efforts by regular users to gain elevated permissions, posing a significant security risk. Detecting and addressing these attempts promptly helps prevent potential breaches and ensures that user privileges remain properly managed, maintaining the integrity of the organization's security protocols.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.crowdstrike.com/wp-content/uploads/2022/12/CrowdStrike-Falcon-Event-Streams-Add-on-Guide-v3.pdf",
              "mitre": [
                "T1110"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Crowdstrike Privilege Escalation For Non-Admin User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Crowdstrike Privilege Escalation For Non-Admin User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Crowdstrike Privilege Escalation For Non-Admin User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.281",
              "n": "Crowdstrike User with Duplicate Password",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects CrowdStrike alerts for non-admin accounts with duplicate password risk, identifying instances where multiple non-admin users share the same password. This practice weakens security and increases the potential for unauthorized access. Addressing these alerts is essential to ensure each user account has a unique, strong password, thereby enhancing overall security and protecting sensitive information.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.crowdstrike.com/wp-content/uploads/2022/12/CrowdStrike-Falcon-Event-Streams-Add-on-Guide-v3.pdf",
              "mitre": [
                "T1110"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Crowdstrike User with Duplicate Password\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Crowdstrike User with Duplicate Password\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Crowdstrike User with Duplicate Password so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.282",
              "n": "Curl Execution with Percent Encoded URL",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Curl Execution with Percent Encoded URL. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "CrowdStrike ProcessRollup2, Sysmon EventID 1, Sysmon for Linux EventID 1, Windows Event Log Security 4688",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires CrowdStrike ProcessRollup2, Sysmon EventID 1, Sysmon for Linux EventID 1, Windows Event Log Security 4688 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1027/, https://attack.mitre.org/techniques/T1105/, https://curl.se/docs/manpage.html",
              "mitre": [
                "T1027",
                "T1105"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Curl Execution with Percent Encoded URL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: CrowdStrike ProcessRollup2, Sysmon EventID 1, Sysmon for Linux EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027, T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Curl Execution with Percent Encoded URL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Curl Execution with Percent Encoded URL so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
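The entry above notes that the packaged detection logic lives in the ESCU Correlation Search rather than in the drilldown SPL. For ad-hoc prototyping of the behavior it targets (curl launched with a percent-encoded URL), a rough CIM-based equivalent is sketched below. This is a hedged approximation, not the ESCU source: the `%2*` URL pattern, the field list, and the ESCU macros `security_content_summariesonly` and `drop_dm_object_name` are assumptions to validate against the definition shown in Content Management. Run it over a short window first; percent signs also appear in legitimate URL-encoded requests.

```spl
# Illustrative approximation; the packaged ESCU Correlation Search is authoritative
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name IN ("curl", "curl.exe") Processes.process="*%2*"
  by Processes.dest Processes.user Processes.parent_process_name Processes.process
| `drop_dm_object_name(Processes)`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```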
            {
              "i": "10.7.283",
              "n": "Detect Computer Changed with Anonymous Account",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects changes to computer accounts using an anonymous logon. It leverages Windows Security Event Codes 4742 (Computer Change) and 4624 (Successful Logon) with the TargetUserName set to \"ANONYMOUS LOGON\" and LogonType 3. This activity is significant because anonymous logons should not typically be modifying computer accounts, indicating potential unauthorized access or misconfiguration. If confirmed malicious, this could allow an attacker to alter computer accounts, poten…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4624, Windows Event Log Security 4742",
              "q": "`wineventlog_security` EventCode=4624 OR EventCode=4742 TargetUserName=\"ANONYMOUS LOGON\" LogonType=3\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY action app authentication_method\n           dest dvc process\n           process_id process_name process_path\n           signature signature_id src\n           src_port status subject\n           user user_group vendor_product\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `detect_computer_changed_with_anonymous_account_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log Security 4624, Windows Event Log Security 4742 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.lares.com/blog/from-lares-labs-defensive-guidance-for-zerologon-cve-2020-1472/",
              "mitre": [
                "T1210"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Computer Changed with Anonymous Account\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4624, Windows Event Log Security 4742. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1210. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Computer Changed with Anonymous Account\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Detect Computer Changed with Anonymous Account' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
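Because this is a Hunting detection, analysts may want a macro-free variant of the documented search for environments where the ESCU macros (`wineventlog_security`, `security_content_ctime`, the filter macro) are not installed. A minimal sketch follows, assuming Security events land in an index named `wineventlog` and that the Windows TA has mapped `dest`/`src`; substitute your own index, sourcetype, and field names.

```spl
# Macro-free hunting variant; index and CIM field mappings are assumptions
index=wineventlog EventCode IN (4624, 4742) TargetUserName="ANONYMOUS LOGON" LogonType=3
| stats count min(_time) as firstTime max(_time) as lastTime values(EventCode) as event_codes
  by dest, src, TargetUserName
| convert ctime(firstTime) ctime(lastTime)
```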
            {
              "i": "10.7.284",
              "n": "Detect Copy of ShadowCopy with Script Block Logging",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of PowerShell commands to copy the SAM, SYSTEM, or SECURITY hives, which are critical for credential theft. It leverages PowerShell Script Block Logging (EventCode=4104) to capture and analyze the full command executed. This activity is significant as it indicates an attempt to exfiltrate sensitive registry hives for offline password cracking. If confirmed malicious, this could lead to unauthorized access to credentials, enabling further compromise of the s…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_id$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this analytic, you will need to enable PowerShell Script Block Logging on some or all endpoints. Additional setup here https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives as the scope is limited to SAM, SYSTEM and SECURITY hives.",
              "refs": "https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36934, https://github.com/GossiTheDog/HiveNightmare, https://github.com/JumpsecLabs/Guidance-Advice/tree/main/SAM_Permissions",
              "mitre": [
                "T1003.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Copy of ShadowCopy with Script Block Logging\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Copy of ShadowCopy with Script Block Logging\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives as the scope is limited to SAM, SYSTEM and SECURITY hives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Detect Copy of ShadowCopy with Script Block Logging so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
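To prototype the 4104 pattern this detection describes (PowerShell copying the SAM, SYSTEM, or SECURITY hives) before enabling the Correlation Search, a sketch along the following lines can help. The `powershell` macro, the wildcard patterns, and the field list are assumptions for illustration; the packaged ESCU search remains authoritative.

```spl
# Hedged sketch of the hive-copy pattern; not the packaged ESCU logic
`powershell` EventCode=4104
  ScriptBlockText IN ("*copy*", "*[System.IO.File]::Copy*")
  ScriptBlockText IN ("*\\config\\SAM*", "*\\config\\SYSTEM*", "*\\config\\SECURITY*")
| stats count min(_time) as firstTime max(_time) as lastTime by dest, UserID, ScriptBlockText
| convert ctime(firstTime) ctime(lastTime)
```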
            {
              "i": "10.7.285",
              "n": "Detect Credential Dumping through LSASS access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to read LSASS memory, indicative of credential dumping. It leverages Sysmon EventCode 10, filtering for specific access permissions (0x1010 and 0x1410) on the lsass.exe process. This activity is significant because it suggests an attacker is trying to extract credentials from LSASS memory, potentially leading to unauthorized access, data breaches, and compromise of sensitive information. If confirmed malicious, this could enable attackers to escalate privi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$TargetImage$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 10 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The activity may be legitimate. Other tools can access lsass for legitimate reasons, and it's possible this event could be generated in those cases. In these cases, false positives should be fairly obvious and you may need to tweak the search to eliminate noise.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1003.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Credential Dumping through LSASS access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Credential Dumping through LSASS access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The activity may be legitimate. Other tools can access lsass for legitimate reasons, and it's possible this event could be generated in those cases. In these cases, false positives should be fairly obvious and you may need to tweak the search to eliminate noise.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Detect Credential Dumping through LSASS access so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
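A hedged sketch of the underlying Sysmon pattern described above (EventCode 10 against lsass.exe with access masks 0x1010 or 0x1410) is useful for ad-hoc verification that the data source produces matches. The `sysmon` macro and the field names are assumptions; confirm against the detection definition in Content Management, and expect benign hits from EDR agents and monitoring tools that legitimately read LSASS.

```spl
# Hedged sketch; confirm field names and access masks against Content Management
`sysmon` EventCode=10 TargetImage="*\\lsass.exe" GrantedAccess IN ("0x1010", "0x1410")
| stats count min(_time) as firstTime max(_time) as lastTime
  by dest, SourceImage, TargetImage, GrantedAccess
| convert ctime(firstTime) ctime(lastTime)
```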
            {
              "i": "10.7.286",
              "n": "Detect Empire with PowerShell Script Block Logging",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious PowerShell execution indicative of PowerShell-Empire activity. It leverages PowerShell Script Block Logging (EventCode=4104) to capture and analyze commands sent to PowerShell, specifically looking for patterns involving `system.net.webclient` and base64 encoding. This behavior is significant as it often represents initial stagers used by PowerShell-Empire, a known post-exploitation framework. If confirmed malicious, this activity could allow attackers t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "# Shared SPL: intentional — see UC-10.2.33\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may only pertain to it not being related to Empire, but another framework. Filter as needed if any applications use the same pattern.",
              "refs": "https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/, https://github.com/BC-SECURITY/Empire, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Empire with PowerShell Script Block Logging\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Empire with PowerShell Script Block Logging\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may only pertain to it not being related to Empire, but another framework. Filter as needed if any applications use the same pattern.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Detect Empire with PowerShell Script Block Logging so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
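The stager pattern described above (`system.net.webclient` plus base64 decoding in script block text) can be prototyped directly against 4104 events. A minimal sketch, assuming the ESCU `powershell` macro is available; as the known-false-positive note says, expect hits from any framework using the same download-cradle idiom, not just Empire.

```spl
# Hedged sketch of the download-cradle pattern; non-Empire frameworks will also match
`powershell` EventCode=4104 ScriptBlockText="*system.net.webclient*" ScriptBlockText="*frombase64string*"
| stats count min(_time) as firstTime max(_time) as lastTime by dest, UserID, ScriptBlockText
| convert ctime(firstTime) ctime(lastTime)
```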
            {
              "i": "10.7.287",
              "n": "Detect New Local Admin account",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of new accounts elevated to local administrators. It uses Windows event logs, specifically EventCode 4720 (user account creation) and EventCode 4732 (user added to Administrators group). This activity is significant as it indicates potential unauthorized privilege escalation, which is critical for SOC monitoring. If confirmed malicious, this could allow attackers to gain administrative access, leading to unauthorized data access, system modifications, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4732, Windows Event Log Security 4720",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4732, Windows Event Log Security 4720 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The activity may be legitimate. For this reason, it's best to verify the account with an administrator and ask whether there was a valid service request for the account creation. If your local administrator group name is not \"Administrators\", this search may generate an excessive number of false positives",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1136.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect New Local Admin account\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4732, Windows Event Log Security 4720. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect New Local Admin account\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The activity may be legitimate. For this reason, it's best to verify the account with an administrator and ask whether there was a valid service request for the account creation. If your local administrator group name is not \"Administrators\", this search may generate an excessive number of false positives\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Detect New Local Admin account so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
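The documented logic correlates account creation (4720) with addition to the local Administrators group (4732). One common way to express that correlation is sketched below; the `transaction` grouping field, the `Group_Name` value, and the 180-minute window are assumptions for illustration, and as the known-false-positive note warns, a differently named administrators group will change the results. The packaged ESCU search remains authoritative.

```spl
# Hedged sketch; grouping field, group name, and window are assumptions
`wineventlog_security` EventCode=4720 OR (EventCode=4732 Group_Name="Administrators")
| transaction user connected=false maxspan=180m
| search EventCode=4720 EventCode=4732
| stats count min(_time) as firstTime max(_time) as lastTime by user, dest
| convert ctime(firstTime) ctime(lastTime)
```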
            {
              "i": "10.7.288",
              "n": "Detect Password Spray Attack Behavior From Source",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies one source failing to authenticate with 10 or more unique users. This behavior could represent an adversary performing a Password Spraying attack to obtain initial access or elevate privileges. This logic can be used for real time security monitoring as well as threat hunting exercises and works well against any number of data sources ingested into the CIM datamodel. Environments can be very different depending on the organization. Test and customize this detect…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4624, Windows Event Log Security 4625",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4624, Windows Event Log Security 4625 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Domain controllers, authentication chokepoints, and vulnerability scanners.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://www.microsoft.com/en-us/security/blog/2020/04/23/protecting-organization-password-spray-attacks/, https://github.com/MarkoH17/Spray365",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Password Spray Attack Behavior From Source\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4624, Windows Event Log Security 4625. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Password Spray Attack Behavior From Source\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Domain controllers, authentication chokepoints, and vulnerability scanners.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Detect Password Spray Attack Behavior From Source so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
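The description gives a concrete threshold: one source failing to authenticate against 10 or more unique users. A minimal CIM sketch of that logic over the Authentication data model follows; the one-hour bucket and the `security_content_summariesonly` macro (shipped with ESCU) are assumptions to test and tune for your environment, as the detection itself advises.

```spl
# Hedged sketch: one src failing against 10+ unique accounts per hour
| tstats `security_content_summariesonly` count dc(Authentication.user) as unique_accounts
  from datamodel=Authentication.Authentication
  where Authentication.action="failure"
  by Authentication.src _time span=1h
| rename Authentication.src as src
| where unique_accounts >= 10
```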
            {
              "i": "10.7.289",
              "n": "Detect Password Spray Attack Behavior On User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies any user failing to authenticate from 10 or more unique sources. This behavior could represent an adversary performing a Password Spraying attack to obtain initial access or elevate privileges. This logic can be used for real time security monitoring as well as threat hunting exercises. Environments can be very different depending on the organization. Test and customize this detections thresholds as needed",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4624, Windows Event Log Security 4625",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4624, Windows Event Log Security 4625 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Domain controllers, authentication chokepoints, and vulnerability scanners.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://www.microsoft.com/en-us/security/blog/2020/04/23/protecting-organization-password-spray-attacks/, https://github.com/MarkoH17/Spray365",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Password Spray Attack Behavior On User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4624, Windows Event Log Security 4625. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Password Spray Attack Behavior On User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Domain controllers, authentication chokepoints, and vulnerability scanners.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Detect Password Spray Attack Behavior On User so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.290",
              "n": "Detect Rare Executables",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Detect Rare Executables. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1204"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Rare Executables\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Rare Executables\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
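\n\nOptional: Aggregate Risk by Entity via the Risk Data Model\n\nA minimal sketch using the Risk data model rather than the raw risk index; field names follow the CIM Risk model used by the drilldown above, and the search_name filter assumes standard ESCU naming (adjust if your deployment renames searches):\n\n```spl\n| tstats summariesonly=false sum(All_Risk.calculated_risk_score) as total_risk count\n  from datamodel=Risk.All_Risk\n  where All_Risk.search_name=\"*Detect Rare Executables*\"\n  by All_Risk.risk_object, All_Risk.risk_object_type\n| sort -total_risk\n```",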
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Detect Rare Executables so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.291",
              "n": "Detect Regasm with Network Connection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of regasm.exe establishing a network connection to a public IP address, excluding private IP ranges. This detection leverages Sysmon EventID 3 logs to identify such behavior. This activity is significant as regasm.exe is a legitimate Microsoft-signed binary that can be exploited to bypass application control mechanisms. If confirmed malicious, this behavior could indicate an adversary's attempt to establish a remote Command and Control (C2) channel, p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 3 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, limited instances of regasm.exe with a network connection may cause a false positive. Filter based endpoint usage, command line arguments, or process lineage.",
              "refs": "https://attack.mitre.org/techniques/T1218/009/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.009/T1218.009.md, https://lolbas-project.github.io/lolbas/Binaries/Regasm/",
              "mitre": [
                "T1218.009"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Regasm with Network Connection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.009. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Regasm with Network Connection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, limited instances of regasm.exe with a network connection may cause a false positive. Filter based endpoint usage, command line arguments, or process lineage.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
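\n\nOptional: Inspect Raw Risk Attributions\n\nA minimal sketch for reviewing the risk events this detection writes, assuming the default ES risk index and standard ESCU naming (adjust the search_name pattern if your deployment renames searches):\n\n```spl\nindex=risk search_name=\"*Detect Regasm with Network Connection*\"\n| stats count sum(risk_score) as total_risk values(risk_message) as messages by risk_object, risk_object_type\n| sort -total_risk\n```",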
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Detect Regasm with Network Connection so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.292",
              "n": "Detect Regsvcs with Network Connection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances of Regsvcs.exe establishing a network connection to a public IP address, excluding private IP ranges. This detection leverages Sysmon EventID 3 logs to monitor network connections initiated by Regsvcs.exe. This activity is significant as Regsvcs.exe, a legitimate Microsoft-signed binary, can be exploited to bypass application control mechanisms and establish remote Command and Control (C2) channels. If confirmed malicious, this behavior could allow an …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 3 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, limited instances of regsvcs.exe may cause a false positive. Filter based endpoint usage, command line arguments, or process lineage.",
              "refs": "https://attack.mitre.org/techniques/T1218/009/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.009/T1218.009.md, https://lolbas-project.github.io/lolbas/Binaries/Regsvcs/",
              "mitre": [
                "T1218.009"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Regsvcs with Network Connection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.009. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Regsvcs with Network Connection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, limited instances of regsvcs.exe may cause a false positive. Filter based endpoint usage, command line arguments, or process lineage.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
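\n\nOptional: Inspect Raw Risk Attributions\n\nA minimal sketch mirroring the drilldown’s seven-day window against the default ES risk index; the search_name pattern assumes standard ESCU naming (adjust if your deployment renames searches):\n\n```spl\nindex=risk earliest=-168h search_name=\"*Detect Regsvcs with Network Connection*\"\n| stats count sum(risk_score) as total_risk values(risk_message) as messages by risk_object, risk_object_type\n| sort -total_risk\n```",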
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Detect Regsvcs with Network Connection so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.293",
              "n": "Detect Remote Access Software Usage File",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Detect Remote Access Software Usage File. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel:Endpoint.Filesystem | search dest=$dest$ file_name=$file_name$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Known or approved applications used by the organization or usage of built-in functions. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content",
              "refs": "https://attack.mitre.org/techniques/T1219/, https://thedfirreport.com/2022/08/08/bumblebee-roasts-its-way-to-domain-admin/, https://thedfirreport.com/2022/11/28/emotet-strikes-again-lnk-file-leads-to-domain-wide-ransomware/",
              "mitre": [
                "T1219"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Remote Access Software Usage File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1219. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Remote Access Software Usage File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Known or approved applications used by the organization or usage of built-in functions. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
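\n\nOptional: Review the Global Exception Lookup\n\nA minimal sketch for auditing the suppression lookup named in the known-false-positives guidance before adding entries; column names vary by ESCU version, so inspect the output first:\n\n```spl\n| inputlookup remote_access_software_usage_exception.csv\n```",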
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the endpoint file, process, or registry signals described in 'Detect Remote Access Software Usage File' so we catch abuse of common tools and persistence paths in time.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.294",
              "n": "Detect Remote Access Software Usage FileInfo",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of processes with file or code signing attributes from known remote access software within the environment. It leverages Sysmon EventCode 1 data and cross-references a lookup table of remote access utilities such as AnyDesk, GoToMyPC, LogMeIn, and TeamViewer. This activity is significant as adversaries often use these tools to maintain unauthorized remote access. If confirmed malicious, this could allow attackers to persist in the environment, potenti…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "# Shared SPL: intentional — see UC-10.3.74\n| from datamodel:Endpoint.Processes| search dest=$dest$ process_name=$process_name$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Known or approved applications used by the organization or usage of built-in functions. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content",
              "refs": "https://attack.mitre.org/techniques/T1219/, https://thedfirreport.com/2022/08/08/bumblebee-roasts-its-way-to-domain-admin/, https://thedfirreport.com/2022/11/28/emotet-strikes-again-lnk-file-leads-to-domain-wide-ransomware/",
              "mitre": [
                "T1219"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Remote Access Software Usage FileInfo\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1219. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Remote Access Software Usage FileInfo\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Known or approved applications used by the organization or usage of built-in functions. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
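\n\nOptional: Spot-Check CIM Normalization\n\nA minimal sketch to confirm Sysmon process events are populating the Endpoint.Processes data model with the fields the drilldown above relies on (dataset and field names are standard CIM; run over a short time range):\n\n```spl\n| tstats count from datamodel=Endpoint.Processes by Processes.dest, Processes.process_name\n| `drop_dm_object_name(Processes)`\n| sort -count\n```",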
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the endpoint file, process, or registry signals described in 'Detect Remote Access Software Usage FileInfo' so we catch abuse of common tools and persistence paths in time.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.295",
              "n": "Detect Remote Access Software Usage Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a known remote access software is added to common persistence locations on a device within the environment. Adversaries use these utilities to retain remote access capabilities to the environment. Utilities in the lookup include AnyDesk, GoToMyPC, LogMeIn, TeamViewer and much more. Review the lookup for the entire list and add any others.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel:Endpoint.Registry| search dest=$dest$ registry_path=$registry_path$",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the file path, file name, and the user that created the file. These logs must be processed using the appropriate Splunk Technolog…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Known or approved applications used by the organization or usage of built-in functions. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content",
              "refs": "https://attack.mitre.org/techniques/T1219/, https://thedfirreport.com/2022/08/08/bumblebee-roasts-its-way-to-domain-admin/, https://thedfirreport.com/2022/11/28/emotet-strikes-again-lnk-file-leads-to-domain-wide-ransomware/",
              "mitre": [
                "T1219"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Remote Access Software Usage Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1219. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Remote Access Software Usage Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Known or approved applications used by the organization or usage of built-in functions. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
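\n\nOptional: Spot-Check CIM Normalization\n\nA minimal sketch to confirm Sysmon registry events are populating the Endpoint.Registry data model with the fields the drilldown above relies on (dataset and field names are standard CIM; run over a short time range):\n\n```spl\n| tstats count from datamodel=Endpoint.Registry by Registry.dest, Registry.registry_path\n| `drop_dm_object_name(Registry)`\n| sort -count\n```",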
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the endpoint file, process, or registry signals described in 'Detect Remote Access Software Usage Registry' so we catch abuse of common tools and persistence paths in time.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.296",
              "n": "Detect Renamed 7-Zip",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the usage of a renamed 7-Zip executable using Sysmon data. It leverages the OriginalFileName field to identify instances where the 7-Zip process has been renamed. This activity is significant as attackers often rename legitimate tools to evade detection while staging or exfiltrating data. If confirmed malicious, this behavior could indicate data exfiltration attempts or other unauthorized data manipulation, potentially leading to significant data breaches or loss o…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.original_file_name=7z*.exe\n            AND\n            Processes.process_name!=7z*.exe\n        )\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `detect_renamed_7_zip_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives, however this analytic will need to be modified for each environment if Sysmon is not used.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1560.001/T1560.001.md",
              "mitre": [
                "T1560.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Renamed 7-Zip\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1560.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Renamed 7-Zip\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives, however this analytic will need to be modified for each environment if Sysmon is not used.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the behavior in “Detect Renamed 7-Zip” using the same log fields the detection already uses, so the team can act before impact spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.297",
              "n": "Detect RTLO In File Name",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Detect RTLO In File Name. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1036/002/, https://resources.infosecinstitute.com/topic/spoof-using-right-to-left-override-rtlo-technique-2/, https://www.trendmicro.com/en_us/research/17/f/following-trail-blacktech-cyber-espionage-campaigns.html",
              "mitre": [
                "T1036.002"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect RTLO In File Name\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect RTLO In File Name\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Detect RTLO In File Name so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.298",
              "n": "Detect WMI Event Subscription Persistence",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of WMI Event Subscriptions, which can be used to establish persistence or perform privilege escalation. It detects EventID 19 (EventFilter creation), EventID 20 (EventConsumer creation), and EventID 21 (FilterToConsumerBinding creation) from Sysmon logs. This activity is significant because WMI Event Subscriptions can execute code with elevated SYSTEM privileges, making it a powerful persistence mechanism. If confirmed malicious, an attacker could m…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 20",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 20 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible some applications will create a consumer and may be required to be filtered. For tuning, add any additional LOLBin's for further depth of coverage.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1546.003/T1546.003.md, https://www.eideon.com/2018-03-02-THL03-WMIBackdoors/, https://github.com/trustedsec/SysmonCommunityGuide/blob/master/chapters/WMI-events.md, https://in.security/2019/04/03/an-intro-into-abusing-and-identifying-wmi-event-subscriptions-for-persistence/",
              "mitre": [
                "T1546.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect WMI Event Subscription Persistence\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 20. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect WMI Event Subscription Persistence\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible some applications will create a consumer and may be required to be filtered. For tuning, add any additional LOLBin's for further depth of coverage.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Detect WMI Event Subscription Persistence so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.299",
              "n": "Disabled Kerberos Pre-Authentication Discovery With Get-ADUser",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-ADUser` PowerShell cmdlet with parameters indicating a search for domain accounts with Kerberos Pre-Authentication disabled. It leverages PowerShell Script Block Logging (EventCode=4104) to identify this specific activity. This behavior is significant because discovering accounts with Kerberos Pre-Authentication disabled can allow adversaries to perform offline password cracking. If confirmed malicious, this activity could lead to unauthor…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use search for accounts with Kerberos Pre Authentication disabled for legitimate purposes.",
              "refs": "https://attack.mitre.org/techniques/T1558/004/, https://m0chan.github.io/2019/07/31/How-To-Attack-Kerberos-101.html, https://stealthbits.com/blog/cracking-active-directory-passwords-with-as-rep-roasting/",
              "mitre": [
                "T1558.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disabled Kerberos Pre-Authentication Discovery With Get-ADUser\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disabled Kerberos Pre-Authentication Discovery With Get-ADUser\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use search for accounts with Kerberos Pre Authentication disabled for legitimate purposes.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Disabled Kerberos Pre-Authentication Discovery With Get-ADUser so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.300",
              "n": "Disabled Kerberos Pre-Authentication Discovery With PowerView",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-DomainUser` commandlet with the `-PreauthNotRequired` parameter using PowerShell Script Block Logging (EventCode=4104). This command is part of PowerView, a tool used for enumerating Windows Active Directory networks. Identifying domain accounts with Kerberos Pre-Authentication disabled is significant because adversaries can leverage this information to attempt offline password cracking. If confirmed malicious, this activity could lead to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use PowerView for troubleshooting",
              "refs": "https://attack.mitre.org/techniques/T1558/004/, https://m0chan.github.io/2019/07/31/How-To-Attack-Kerberos-101.html, https://stealthbits.com/blog/cracking-active-directory-passwords-with-as-rep-roasting/",
              "mitre": [
                "T1558.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disabled Kerberos Pre-Authentication Discovery With PowerView\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disabled Kerberos Pre-Authentication Discovery With PowerView\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use PowerView for troubleshooting\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Disabled Kerberos Pre-Authentication Discovery With PowerView so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.301",
              "n": "Disabling Remote User Account Control",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies modifications to the registry key that controls the enforcement of Windows User Account Control (UAC). It detects changes to the registry path `HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System\\EnableLUA` where the value is set to `0x00000000`. This activity is significant because disabling UAC can allow unauthorized changes to the system without user consent, potentially leading to privilege escalation. If confirmed malicious, an attacker could gai…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records registry activity from your hosts to populate the endpoint data model in the registry node. This is typically populated via endpoint detection-and-response product, such as Carbon Black, or via other endpoint data sources, such as Sysmon. The data used for this search is typically generated via logs that report registry…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This registry key may be modified via administrators to implement a change in system policy. This type of change should be a very rare occurrence.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Disabling Remote User Account Control\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Disabling Remote User Account Control\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This registry key may be modified via administrators to implement a change in system policy. This type of change should be a very rare occurrence.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Disabling Remote User Account Control so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.302",
              "n": "DLLHost with no Command Line Arguments with Network",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies DLLHost with no Command Line Arguments with Network. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 3 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://raw.githubusercontent.com/threatexpress/malleable-c2/c3385e481159a759f79b8acfe11acf240893b830/jquery-c2.4.2.profile, https://www.cobaltstrike.com/blog/learn-pipe-fitting-for-all-of-your-offense-projects/",
              "mitre": [
                "T1055"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"DLLHost with no Command Line Arguments with Network\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"DLLHost with no Command Line Arguments with Network\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for DLLHost with no Command Line Arguments with Network so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.303",
              "n": "Domain Group Discovery with Adsisearcher",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the `[Adsisearcher]` type accelerator in PowerShell to query Active Directory for domain groups. It leverages PowerShell Script Block Logging (EventCode=4104) to identify specific script blocks containing `[adsisearcher]` and group-related queries. This activity is significant as it may indicate an attempt by adversaries or Red Teams to enumerate domain groups for situational awareness and Active Directory discovery. If confirmed malicious, this behavior…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use Adsisearcher for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/, https://devblogs.microsoft.com/scripting/use-the-powershell-adsisearcher-type-accelerator-to-search-active-directory/",
              "mitre": [
                "T1069.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Domain Group Discovery with Adsisearcher\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Domain Group Discovery with Adsisearcher\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use Adsisearcher for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Domain Group Discovery with Adsisearcher so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
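Illustrative detection logic for "Domain Group Discovery with Adsisearcher": the SPL above is the RBA drilldown, while the packaged ESCU correlation search contains the actual detection. The sketch below is a minimal approximation over PowerShell Script Block Logging, not the packaged logic; the index and sourcetype names are assumptions for a typical Splunk_TA_windows deployment, and `convert ctime(...)` stands in for ESCU's `security_content_ctime` macros.

```spl
index=wineventlog sourcetype="XmlWinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4104
    ScriptBlockText="*[adsisearcher]*" ScriptBlockText="*objectcategory=group*"
| stats count min(_time) as firstTime max(_time) as lastTime by dest user_id ScriptBlockText
| convert ctime(firstTime) ctime(lastTime)
```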
            {
              "i": "10.7.304",
              "n": "Dump LSASS via procdump",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Dump LSASS via procdump. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1003/001/, https://learn.microsoft.com/en-us/sysinternals/downloads/procdump, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1003.001/T1003.001.md#atomic-test-2---dump-lsassexe-memory-using-procdump, https://thedfirreport.com/2022/08/08/bumblebee-roasts-its-way-to-domain-admin/, https://x.com/wietze/status/1958302556033065292?s=12",
              "mitre": [
                "T1003.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Dump LSASS via procdump\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Dump LSASS via procdump\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Dump LSASS via procdump so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
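Illustrative detection logic for "Dump LSASS via procdump": a minimal sketch over the CIM Endpoint.Processes data model, assuming the model is populated and accelerated. It is not the packaged ESCU search; the process-name list approximates the Sysinternals binaries named in the references, and `convert ctime(...)` stands in for ESCU's time-formatting macros.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where Processes.process_name IN ("procdump.exe", "procdump64.exe") Processes.process="*lsass*"
    by Processes.dest Processes.user Processes.process_name Processes.process
| rename Processes.* as *
| convert ctime(firstTime) ctime(lastTime)
```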
            {
              "i": "10.7.305",
              "n": "Elevated Group Discovery with PowerView",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-DomainGroupMember` cmdlet from PowerView, identified through PowerShell Script Block Logging (EventCode=4104). This cmdlet is used to enumerate members of elevated domain groups such as Domain Admins and Enterprise Admins. Monitoring this activity is crucial as it indicates potential reconnaissance efforts by adversaries to identify high-privileged users within the domain. If confirmed malicious, this activity could lead to targeted attack…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104 (ScriptBlockText = \"*Get-DomainGroupMember*\") AND ScriptBlockText IN (\"*Domain Admins*\",\"*Enterprise Admins*\", \"*Schema Admins*\", \"*Account Operators*\" , \"*Server Operators*\", \"*Protected Users*\",  \"*Dns Admins*\")\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `elevated_group_discovery_with_powerview_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerView for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/, https://powersploit.readthedocs.io/en/latest/Recon/Get-DomainGroupMember/, https://learn.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/appendix-b--privileged-accounts-and-groups-in-active-directory, https://attack.mitre.org/techniques/T1069/002/",
              "mitre": [
                "T1069.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Elevated Group Discovery with PowerView\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Elevated Group Discovery with PowerView\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerView for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Elevated Group Discovery with PowerView' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
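Before running the "Elevated Group Discovery with PowerView" hunt, it helps to confirm that PowerShell Script Block Logging (EventCode 4104) is actually arriving. A quick readiness check, assuming the standard Splunk_TA_windows sourcetype name (adjust the index and sourcetype to your deployment):

```spl
index=* sourcetype="XmlWinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4104 earliest=-24h
| stats count dc(host) as reporting_hosts by index, sourcetype
```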
            {
              "i": "10.7.306",
              "n": "Enumerate Users Local Group Using Telegram",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a Telegram process enumerating all network users in a local group. It leverages EventCode 4798, which is generated when a process enumerates a user's security-enabled local groups on a computer or device. This activity is significant as it may indicate an attempt to gather information on user accounts, a common precursor to further malicious actions. If confirmed malicious, this behavior could allow an attacker to map out user accounts, potentially leading to privi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4798",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the Task Schedule (Exa. Security Log EventCode 4798) endpoints. Tune and filter known instances of process like logonUI used in your environment.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4798",
              "mitre": [
                "T1087"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Enumerate Users Local Group Using Telegram\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4798. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Enumerate Users Local Group Using Telegram\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Enumerate Users Local Group Using Telegram so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
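Illustrative detection logic for "Enumerate Users Local Group Using Telegram": the underlying behavior is Security EventCode 4798 where the calling process is the Telegram binary. A minimal sketch follows, assuming raw Windows Security logs in an index named wineventlog and Microsoft's native field names (Computer, CallerProcessName, TargetUserName); CIM-normalized deployments may expose different field names, and this is not the packaged ESCU search.

```spl
index=wineventlog EventCode=4798 CallerProcessName="*\\telegram.exe"
| stats count min(_time) as firstTime max(_time) as lastTime
    by Computer SubjectUserName TargetUserName CallerProcessName
| convert ctime(firstTime) ctime(lastTime)
```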
            {
              "i": "10.7.307",
              "n": "Excessive Usage Of Cacls App",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Excessive Usage Of Cacls App. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or administrative scripts may use this application. Filter as needed.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1222"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Excessive Usage Of Cacls App\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Excessive Usage Of Cacls App\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or administrative scripts may use this application. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Excessive Usage Of Cacls App so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
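Illustrative detection logic for "Excessive Usage Of Cacls App": "excessive" implies a rate threshold, so the sketch below bins process executions per minute and flags bursts. It assumes an accelerated CIM Endpoint.Processes data model; the utility list and the threshold of 10 executions per minute are placeholders to tune, not the packaged ESCU values.

```spl
| tstats summariesonly=true count
    from datamodel=Endpoint.Processes
    where Processes.process_name IN ("cacls.exe", "icacls.exe", "xcacls.exe")
    by _time span=1m Processes.dest Processes.user Processes.process_name
| rename Processes.* as *
| where count >= 10
```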
            {
              "i": "10.7.308",
              "n": "Executables Or Script Creation In Suspicious Path",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Executables Or Script Creation In Suspicious Path. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file path entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/, https://twitter.com/pr0xylife/status/1590394227758104576, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1036"
              ],
              "dtype": "file_path",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Executables Or Script Creation In Suspicious Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file path entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Executables Or Script Creation In Suspicious Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Executables Or Script Creation In Suspicious Path so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
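Illustrative detection logic for "Executables Or Script Creation In Suspicious Path": a minimal sketch over the CIM Endpoint.Filesystem data model (populated by Sysmon EventID 11 via the Sysmon TA), not the packaged ESCU search. The extension and path lists are illustrative assumptions; extend them to whatever paths your organization considers suspicious.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Filesystem
    where Filesystem.file_name IN ("*.exe", "*.dll", "*.ps1", "*.bat", "*.vbs")
        Filesystem.file_path IN ("*\\windows\\temp\\*", "*\\users\\public\\*", "*\\perflogs\\*")
    by Filesystem.dest Filesystem.file_path Filesystem.file_name
| rename Filesystem.* as *
| convert ctime(firstTime) ctime(lastTime)
```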
            {
              "i": "10.7.309",
              "n": "Executables Or Script Creation In Temp Path",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Executables Or Script Creation In Temp Path. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the Filesystem responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/, https://twitter.com/pr0xylife/status/1590394227758104576, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1036"
              ],
              "dtype": "file_path",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Executables Or Script Creation In Temp Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file path entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Executables Or Script Creation In Temp Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Executables Or Script Creation In Temp Path so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
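For "Executables Or Script Creation In Temp Path", the tuning guidance above recommends lookup-based suppressions. A minimal sketch of that pattern: the detection portion mirrors the previous example scoped to temp directories, and `example_temp_path_allowlist` is a hypothetical lookup (columns: file_path, approved) that you would create and maintain with known-good writers.

```spl
| tstats summariesonly=true count
    from datamodel=Endpoint.Filesystem
    where Filesystem.file_name IN ("*.exe", "*.ps1") Filesystem.file_path IN ("*\\temp\\*", "*\\tmp\\*")
    by Filesystem.dest Filesystem.file_path Filesystem.file_name
| rename Filesystem.* as *
| lookup example_temp_path_allowlist file_path OUTPUT approved
| where isnull(approved)
```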
            {
              "i": "10.7.310",
              "n": "File Download or Read to Pipe Execution",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies File Download or Read to Pipe Execution. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Sysmon for Linux EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Sysmon for Linux EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://gist.github.com/nathanqthai/01808c569903f41a52e7e7b575caa890, https://github.com/MHaggis/notes/blob/master/utilities/warp_pipe_tester.py, https://www.huntress.com/blog/rapid-response-critical-rce-vulnerability-is-affecting-java, https://www.lunasec.io/docs/blog/log4j-zero-day/, https://securelist.com/bad-magic-apt/109087/",
              "mitre": [
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"File Download or Read to Pipe Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Sysmon for Linux EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"File Download or Read to Pipe Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for File Download or Read to Pipe Execution so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.311",
              "n": "First Time Seen Running Windows Service",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the first occurrence of a Windows service running in your environment. It leverages Windows system event logs, specifically EventCode 7036, to identify services entering the \"running\" state. This activity is significant because the appearance of a new or previously unseen service could indicate the installation of unauthorized or malicious software. If confirmed malicious, this activity could allow an attacker to execute arbitrary code, maintain persistence, or esc…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7036",
              "q": "`wineventlog_system` EventCode=7036\n      | rex field=Message \"The (?<service>[-\\(\\)\\s\\w]+) service entered the (?<state>\\w+) state\"\n      | where state=\"running\"\n      | lookup previously_seen_running_windows_services service as service OUTPUT firstTimeSeen\n      | where isnull(firstTimeSeen) OR firstTimeSeen > relative_time(now(), `previously_seen_windows_services_window`)\n      | table _time dest service\n      | `first_time_seen_running_windows_service_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log System 7036 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A previously unseen service is not necessarily malicious. Verify that the service is legitimate and that was installed by a legitimate process.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1569.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"First Time Seen Running Windows Service\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7036. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1569.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"First Time Seen Running Windows Service\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A previously unseen service is not necessarily malicious. Verify that the service is legitimate and that was installed by a legitimate process.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
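Supplementary sketch (assumption: ESCU ships its own support search to maintain this baseline; use a pattern like this only if you need to seed or rebuild the `previously_seen_running_windows_services` cache manually, keeping the column names the detection's lookup call expects, `service` and `firstTimeSeen`):

```spl
`wineventlog_system` EventCode=7036
| rex field=Message "The (?<service>[-\(\)\s\w]+) service entered the (?<state>\w+) state"
| where state="running"
| stats min(_time) as firstTimeSeen by service
| inputlookup append=t previously_seen_running_windows_services
| stats min(firstTimeSeen) as firstTimeSeen by service
| outputlookup previously_seen_running_windows_services
```

Run it over a long baseline window (for example 30 days) before enabling the detection; otherwise every service fires as first-seen.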
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'First Time Seen Running Windows Service' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.312",
              "n": "Get ADDefaultDomainPasswordPolicy with Powershell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-ADDefaultDomainPasswordPolicy` PowerShell cmdlet, which is used to retrieve the password policy in a Windows domain. This detection leverages PowerShell Script Block Logging (EventCode=4104) to identify the specific command execution. Monitoring this activity is significant as it can indicate an attempt to gather domain policy information, which is often a precursor to further malicious actions. If confirmed malicious, this activity could …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104 ScriptBlockText =\"*Get-ADDefaultDomainPasswordPolicy*\"\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `get_addefaultdomainpasswordpolicy_with_powershell_script_block_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://github.com/S1ckB0y1337/Active-Directory-Exploitation-Cheat-Sheet, https://attack.mitre.org/techniques/T1201/, https://learn.microsoft.com/en-us/powershell/module/activedirectory/get-addefaultdomainpasswordpolicy?view=windowsserver2019-ps",
              "mitre": [
                "T1201"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get ADDefaultDomainPasswordPolicy with Powershell Script Block\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1201. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get ADDefaultDomainPasswordPolicy with Powershell Script Block\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Get ADDefaultDomainPasswordPolicy with Powershell Script Block' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.313",
              "n": "Get ADUser with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-AdUser` PowerShell cmdlet, which is used to enumerate all domain users. It leverages PowerShell Script Block Logging (EventCode=4104) to identify instances where this command is executed with a filter. This activity is significant as it may indicate an attempt by adversaries or Red Teams to gather information about domain users for situational awareness and Active Directory discovery. If confirmed malicious, this behavior could lead to fur…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104 ScriptBlockText = \"*get-aduser*\" ScriptBlockText = \"*-filter*\"\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `get_aduser_with_powershell_script_block_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://www.blackhillsinfosec.com/red-blue-purple/, https://attack.mitre.org/techniques/T1087/002/, https://learn.microsoft.com/en-us/powershell/module/activedirectory/get-aduser?view=windowsserver2019-ps",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get ADUser with PowerShell Script Block\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get ADUser with PowerShell Script Block\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'Get ADUser with PowerShell Script Block' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.314",
              "n": "Get ADUserResultantPasswordPolicy with Powershell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-ADUserResultantPasswordPolicy` PowerShell cmdlet, which is used to obtain the password policy in a Windows domain. It leverages PowerShell Script Block Logging (EventCode=4104) to identify this activity. Monitoring this behavior is significant as it may indicate an attempt to enumerate domain policies, a common tactic used by adversaries for situational awareness and Active Directory discovery. If confirmed malicious, this activity could a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://github.com/S1ckB0y1337/Active-Directory-Exploitation-Cheat-Sheet, https://attack.mitre.org/techniques/T1201/, https://learn.microsoft.com/en-us/powershell/module/activedirectory/get-aduserresultantpasswordpolicy?view=windowsserver2019-ps",
              "mitre": [
                "T1201"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get ADUserResultantPasswordPolicy with Powershell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1201. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get ADUserResultantPasswordPolicy with Powershell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
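To run the drilldown above outside Incident Review, replace the `$dest$` / `$user_id$` tokens with a literal entity. A sketch using the placeholder value `jsmith` and the same seven-day window (`starthoursago=168`):

```spl
| from datamodel Risk.All_Risk
| search normalized_risk_object="jsmith" starthoursago=168
| stats count sum(risk_score) as total_risk values(search_name) as detections by normalized_risk_object
```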
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Get ADUserResultantPasswordPolicy with Powershell Script Block so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.315",
              "n": "Get DomainPolicy with Powershell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-DomainPolicy` cmdlet using PowerShell Script Block Logging (EventCode=4104). It leverages logs capturing script block text to identify attempts to obtain the password policy in a Windows domain. This activity is significant as it indicates potential reconnaissance efforts by adversaries or Red Teams to gather domain policy information, which is crucial for planning further attacks. If confirmed malicious, this behavior could lead to detail…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://github.com/S1ckB0y1337/Active-Directory-Exploitation-Cheat-Sheet, https://powersploit.readthedocs.io/en/latest/Recon/Get-DomainPolicy/, https://attack.mitre.org/techniques/T1201/",
              "mitre": [
                "T1201"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get DomainPolicy with Powershell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1201. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get DomainPolicy with Powershell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
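A quick post-deployment check that this detection is writing risk events. This sketch assumes the default ES risk index name `risk`; adjust if your deployment routes risk events elsewhere:

```spl
index=risk search_name="*Get DomainPolicy*"
| stats count sum(risk_score) as total_risk values(risk_object) as entities by search_name
```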
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Get DomainPolicy with Powershell Script Block so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.316",
              "n": "Get DomainUser with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-DomainUser` cmdlet using PowerShell Script Block Logging (EventCode=4104). This cmdlet is part of PowerView, a tool often used for domain enumeration. The detection leverages PowerShell operational logs to identify instances where this command is executed. Monitoring this activity is crucial as it may indicate an adversary's attempt to gather information about domain users, which is a common step in Active Directory Discovery. If confirmed…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://powersploit.readthedocs.io/en/latest/Recon/Get-DomainUser/",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Get DomainUser with PowerShell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Get DomainUser with PowerShell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
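Before adding per-entity risk exceptions, profile which entities this detection flags most often; entities that sit at the top of the list and are consistently benign are exception candidates. A sketch over roughly 30 days:

```spl
| from datamodel Risk.All_Risk
| search search_name="*Get DomainUser*" starthoursago=720
| stats count sparkline(count) as trend by normalized_risk_object
| sort - count
```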
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Get DomainUser with PowerShell Script Block so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.317",
              "n": "GetAdComputer with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-AdComputer` PowerShell commandlet using PowerShell Script Block Logging (EventCode=4104). This detection leverages script block text to identify when this commandlet is run. The `Get-AdComputer` commandlet is significant as it can be used by adversaries to enumerate all domain computers, aiding in situational awareness and Active Directory discovery. If confirmed malicious, this activity could allow attackers to map the network, identify t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104 (ScriptBlockText = \"*Get-AdComputer*\")\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `getadcomputer_with_powershell_script_block_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerShell commandlet for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/, https://learn.microsoft.com/en-us/powershell/module/activedirectory/get-adgroup?view=windowsserver2019-ps",
              "mitre": [
                "T1018"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetAdComputer with PowerShell Script Block\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetAdComputer with PowerShell Script Block\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerShell commandlet for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'GetAdComputer with PowerShell Script Block' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.318",
              "n": "GetDomainComputer with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-DomainComputer` commandlet using PowerShell Script Block Logging (EventCode=4104). This commandlet is part of PowerView, a tool often used for enumerating domain computers within Windows environments. The detection leverages script block text analysis to identify this specific command. Monitoring this activity is crucial as it can indicate an adversary's attempt to gather information about domain computers, which is a common step in Active…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use PowerView for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/, https://powersploit.readthedocs.io/en/latest/Recon/Get-DomainComputer/",
              "mitre": [
                "T1018"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetDomainComputer with PowerShell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetDomainComputer with PowerShell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use PowerView for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
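              "_spl_sketch": "Optional illustrative sketch (not part of the shipped ESCU detection; the `powershell` macro and field names are assumptions, so adjust index and sourcetype to your environment): preview the raw EventCode 4104 activity this detection keys on before enabling it.\n\n```spl\n`powershell` EventCode=4104 ScriptBlockText=\"*Get-DomainComputer*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest user_id\n| sort -count\n```\n\nA few hits from known admin workstations usually reflect legitimate PowerView use; broad or unfamiliar sources warrant investigation before tuning.",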
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for GetDomainComputer with PowerShell Script Block so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.319",
              "n": "GetDomainController with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-DomainController` commandlet using PowerShell Script Block Logging (EventCode=4104). This commandlet is part of PowerView, a tool often used for domain enumeration. The detection leverages script block text to identify this specific activity. Monitoring this behavior is crucial as it may indicate an adversary or Red Team performing reconnaissance to map out domain controllers. If confirmed malicious, this activity could lead to further dom…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerShell commandlet for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/, https://powersploit.readthedocs.io/en/latest/Recon/Get-DomainController/",
              "mitre": [
                "T1018"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetDomainController with PowerShell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetDomainController with PowerShell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerShell commandlet for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
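              "_spl_sketch": "Optional illustrative sketch (not ESCU content; assumes the `powershell` macro resolves to your 4104 feed): trend matching script-block events per host to spot enumeration bursts against a normally quiet baseline.\n\n```spl\n`powershell` EventCode=4104 ScriptBlockText=\"*Get-DomainController*\"\n| timechart span=1h count by dest\n```",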
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for GetDomainController with PowerShell Script Block so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.320",
              "n": "GetDomainGroup with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-DomainGroup` cmdlet using PowerShell Script Block Logging (EventCode=4104). This cmdlet, part of the PowerView tool, is used to enumerate domain groups within a Windows domain. The detection leverages script block text to identify this specific command. Monitoring this activity is crucial as it may indicate an adversary or Red Team performing reconnaissance to gain situational awareness and map out Active Directory structures. If confirmed…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerView functions for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/, https://powersploit.readthedocs.io/en/latest/Recon/Get-DomainGroup/",
              "mitre": [
                "T1069.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetDomainGroup with PowerShell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetDomainGroup with PowerShell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerView functions for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
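              "_spl_sketch": "Optional illustrative sketch (not ESCU content; the host_count > 1 threshold is an assumption to tune): surface accounts running Get-DomainGroup on more than one host, which is less likely to be one-off troubleshooting.\n\n```spl\n`powershell` EventCode=4104 ScriptBlockText=\"*Get-DomainGroup*\"\n| stats dc(dest) as host_count values(dest) as hosts by user_id\n| where host_count > 1\n```",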
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for GetDomainGroup with PowerShell Script Block so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.321",
              "n": "GetLocalUser with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-LocalUser` PowerShell commandlet using PowerShell Script Block Logging (EventCode=4104). This commandlet lists all local users on a system. The detection leverages script block text from PowerShell logs to identify this activity. Monitoring this behavior is significant as adversaries and Red Teams may use it to enumerate local users for situational awareness and Active Directory discovery. If confirmed malicious, this activity could lead t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104 (ScriptBlockText = \"*Get-LocalUser*\")\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `getlocaluser_with_powershell_script_block_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerShell commandlet for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1087/001/, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html",
              "mitre": [
                "T1059.001",
                "T1087.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetLocalUser with PowerShell Script Block\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1087.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetLocalUser with PowerShell Script Block\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerShell commandlet for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'GetLocalUser with PowerShell Script Block' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.322",
              "n": "GetNetTcpconnection with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-NetTcpconnection` PowerShell cmdlet using PowerShell Script Block Logging (EventCode=4104). This cmdlet lists network connections on a system, which adversaries may use for situational awareness and Active Directory discovery. Monitoring this activity is crucial as it can indicate reconnaissance efforts by an attacker. If confirmed malicious, this behavior could allow an attacker to map the network, identify critical systems, and plan furt…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104 (ScriptBlockText = \"*Get-NetTcpconnection*\")\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `getnettcpconnection_with_powershell_script_block_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerShell commandlet for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1049/, https://learn.microsoft.com/en-us/powershell/module/nettcpip/get-nettcpconnection?view=windowsserver2019-ps",
              "mitre": [
                "T1049"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetNetTcpconnection with PowerShell Script Block\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1049. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetNetTcpconnection with PowerShell Script Block\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerShell commandlet for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'GetNetTcpconnection with PowerShell Script Block' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.323",
              "n": "GetWmiObject Ds Computer with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-WmiObject` cmdlet with the `DS_Computer` class parameter via PowerShell Script Block Logging (EventCode=4104). This detection leverages script block text to identify queries targeting domain computers using WMI. Monitoring this activity is crucial as adversaries and Red Teams may use it for Active Directory Discovery and situational awareness. If confirmed malicious, this behavior could allow attackers to map out domain computers, facilita…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this analytic, you will need to enable PowerShell Script Block Logging on some or all endpoints. Additional setup here https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerShell commandlet for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/, https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/get-wmiobject?view=powershell-5.1",
              "mitre": [
                "T1018"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetWmiObject Ds Computer with PowerShell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetWmiObject Ds Computer with PowerShell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerShell commandlet for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
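              "_spl_sketch": "Optional illustrative sketch (not ESCU content; assumes the `powershell` macro and CIM field names): preview the WMI DS_Computer queries this detection keys on, with first/last seen times per host and user.\n\n```spl\n`powershell` EventCode=4104 (ScriptBlockText=\"*Get-WmiObject*\" AND ScriptBlockText=\"*DS_Computer*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by dest user_id\n| convert ctime(firstTime) ctime(lastTime)\n```",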
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for GetWmiObject Ds Computer with PowerShell Script Block so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.324",
              "n": "GetWmiObject Ds Group with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-WmiObject` commandlet with the `DS_Group` parameter via PowerShell Script Block Logging (EventCode=4104). This method leverages WMI to query all domain groups. Monitoring this activity is crucial as adversaries and Red Teams may use it for domain group enumeration, aiding in situational awareness and Active Directory discovery. If confirmed malicious, this activity could allow attackers to map out the domain structure, potentially leading …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this analytic, you will need to enable PowerShell Script Block Logging on some or all endpoints. Additional setup here https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerShell commandlet for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/, https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/get-wmiobject?view=powershell-5.1",
              "mitre": [
                "T1069.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetWmiObject Ds Group with PowerShell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetWmiObject Ds Group with PowerShell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerShell commandlet for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
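              "_spl_sketch": "Optional illustrative sketch (not ESCU content): list the full script blocks behind DS_Group queries so an analyst can separate one-off administrative checks from scripted enumeration.\n\n```spl\n`powershell` EventCode=4104 (ScriptBlockText=\"*Get-WmiObject*\" AND ScriptBlockText=\"*DS_Group*\")\n| table _time dest user_id ScriptBlockText\n```",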
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for GetWmiObject Ds Group with PowerShell Script Block so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.325",
              "n": "GetWmiObject DS User with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-WmiObject` cmdlet with the `DS_User` class parameter via PowerShell Script Block Logging (EventCode=4104). It leverages logs to identify attempts to query all domain users using WMI. This activity is significant as it may indicate an adversary or Red Team operation attempting to enumerate domain users for situational awareness and Active Directory discovery. If confirmed malicious, this behavior could lead to further reconnaissance, enabli…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The following Hunting analytic requires PowerShell operational logs to be imported. Modify the powershell macro as needed to match the sourcetype or add index. This analytic is specific to 4104, or PowerShell Script Block Logging.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://www.blackhillsinfosec.com/red-blue-purple/, https://learn.microsoft.com/en-us/windows/win32/wmisdk/describing-the-ldap-namespace",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetWmiObject DS User with PowerShell Script Block\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetWmiObject DS User with PowerShell Script Block\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
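              "_spl_sketch": "Optional illustrative sketch (not ESCU content; assumes the default ES risk index and standard risk fields): confirm that risk events from this detection are being attributed to user entities as expected.\n\n```spl\nindex=risk search_name=\"*GetWmiObject DS User*\"\n| stats count sum(risk_score) as total_risk by risk_object risk_object_type\n```",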
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for GetWmiObject DS User with PowerShell Script Block so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.326",
              "n": "GetWmiObject User Account with PowerShell Script Block",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-WmiObject` commandlet with the `Win32_UserAccount` parameter via PowerShell Script Block Logging (EventCode=4104). This method leverages script block text to identify when a list of all local users is being enumerated. This activity is significant as it may indicate an adversary or Red Team operation attempting to gather user information for situational awareness and Active Directory discovery. If confirmed malicious, this could lead to fu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104 (ScriptBlockText=\"*Get-WmiObject*\" AND ScriptBlockText=\"*Win32_UserAccount*\")\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `getwmiobject_user_account_with_powershell_script_block_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this PowerShell commandlet for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1087/001/, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html",
              "mitre": [
                "T1059.001",
                "T1087.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GetWmiObject User Account with PowerShell Script Block\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1087.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GetWmiObject User Account with PowerShell Script Block\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this PowerShell commandlet for troubleshooting.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this 'GetWmiObject User Account with PowerShell Script Block' search to catch the exact behavior in the data we already collect, so the team can confirm or clear it in the right management console or identity store.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.327",
              "n": "GitHub Workflow File Creation or Modification",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies GitHub Workflow File Creation or Modification. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11, Sysmon EventID 11",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon for Linux EventID 11, Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisa.gov/news-events/alerts/2025/09/23/widespread-supply-chain-compromise-impacting-npm-ecosystem, https://securelist.com/shai-hulud-worm-infects-500-npm-packages-in-a-supply-chain-attack/117547/, https://github.com/SigmaHQ/sigma/pull/5658/files, https://docs.github.com/en/actions/reference/workflows-and-actions/workflow-syntax",
              "mitre": [
                "T1574.006",
                "T1554",
                "T1195"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"GitHub Workflow File Creation or Modification\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11, Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.006, T1554, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"GitHub Workflow File Creation or Modification\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for unexpected add or change to GitHub Actions workflow files, because that is a common way to sneak a supply-chain or secret-theft step into the pipeline, and we want the team to catch it while it is still reviewable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.328",
              "n": "Headless Browser Mockbin or Mocky Request",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects headless browser activity accessing mockbin.org or mocky.io. It identifies processes with the \"--headless\" and \"--disable-gpu\" command line arguments, along with references to mockbin.org or mocky.io. This behavior is significant as headless browsers are often used for automated tasks, including malicious activities like web scraping or automated attacks. If confirmed malicious, this activity could indicate an attempt to bypass traditional browser security measures…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are not expected with this detection, unless within the organization there is a legitimate need for headless browsing accessing mockbin.org or mocky.io.",
              "refs": "https://mockbin.org/, https://www.mocky.io/",
              "mitre": [
                "T1564.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Headless Browser Mockbin or Mocky Request\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1564.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Headless Browser Mockbin or Mocky Request\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are not expected with this detection, unless within the organization there is a legitimate need for headless browsing accessing mockbin.org or mocky.io.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
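              "_exampleSpl": "A hedged approximation of the underlying correlation-search logic implied by the description (headless-browser command lines referencing mockbin.org or mocky.io); the authoritative version is the ESCU definition in Content Management.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process=\"*--headless*\" Processes.process=\"*--disable-gpu*\" (Processes.process=\"*mockbin.org*\" OR Processes.process=\"*mocky.io*\")\n  by Processes.dest Processes.user Processes.process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",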
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Headless Browser Mockbin or Mocky Request so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.329",
              "n": "High Frequency Copy Of Files In Network Share",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a high frequency of file copying or moving within network shares, which may indicate potential data sabotage or exfiltration attempts. It leverages Windows Security Event Logs (EventCode 5145) to monitor access to specific file types and network shares. This activity is significant as it can reveal insider threats attempting to transfer classified or internal files, potentially leading to data breaches or evidence tampering. If confirmed malicious, this behavior co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5145",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting Windows Security Event Logs with 5145 EventCode enabled. The Windows TA is also required. Also enable the object Audit access success/failure in your group policy.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This behavior may seen in normal transfer of file within network if network share is common place for sharing documents.",
              "refs": "https://attack.mitre.org/techniques/T1537/",
              "mitre": [
                "T1537"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"High Frequency Copy Of Files In Network Share\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5145. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1537. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"High Frequency Copy Of Files In Network Share\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This behavior may seen in normal transfer of file within network if network share is common place for sharing documents.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
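              "_exampleSpl": "A hedged raw-event sketch of the frequency logic described above, useful for baselining 5145 volume before enabling the correlation search. The index name, field names, and the 50-files-per-5-minutes threshold are placeholders; align them with your Windows TA extractions and your own baseline.\n\n```spl\nindex=wineventlog EventCode=5145 Object_Type=File\n| bin _time span=5m\n| stats dc(Relative_Target_Name) as distinct_files by _time Account_Name Share_Name\n| where distinct_files > 50\n```",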
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for High Frequency Copy Of Files In Network Share so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.330",
              "n": "Icacls Deny Command",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Icacls Deny Command. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. Filter as needed.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1222"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Icacls Deny Command\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Icacls Deny Command\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
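              "_exampleSpl": "A hedged approximation of the underlying detection logic (icacls-family binaries invoked with a deny option); the authoritative search is the ESCU definition in Content Management.\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name IN (\"icacls.exe\", \"cacls.exe\", \"xcacls.exe\") Processes.process=\"*/deny*\"\n  by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n```",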
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Icacls Deny Command so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.331",
              "n": "ICACLS Grant Command",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies ICACLS Grant Command. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://thedfirreport.com/2020/04/20/sqlserver-or-the-miner-in-the-basement/",
              "mitre": [
                "T1222"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ICACLS Grant Command\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ICACLS Grant Command\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
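              "_exampleSpl": "A hedged wiring check for Deployment step 4: confirm this detection is writing risk events. `index=risk` is the ES default risk index and the wildcarded search_name assumes the usual ESCU naming prefix; adjust if your deployment differs.\n\n```spl\nindex=risk search_name=\"*ICACLS Grant Command*\"\n| stats count values(risk_object) as risk_objects by search_name\n```",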
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for ICACLS Grant Command so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.332",
              "n": "IcedID Exfiltrated Archived File Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of suspicious files named passff.tar and cookie.tar, which are indicative of archived stolen browser information such as history and cookies on a machine compromised with IcedID. It leverages Sysmon EventCode 11 to identify these specific filenames. This activity is significant because it suggests that sensitive browser data has been exfiltrated, which could lead to further exploitation or data breaches. If confirmed malicious, this could allow attacke…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| tstats `security_content_summariesonly` count values(Filesystem.file_path) as file_path min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Filesystem where Filesystem.file_path=\"*\\\\passff.tar\" OR Filesystem.file_path=\"*\\\\cookie.tar\" by Filesystem.action Filesystem.dest Filesystem.file_access_time Filesystem.file_create_time Filesystem.file_hash Filesystem.file_modify_time Filesystem.file_name Filesystem.file_path Filesystem.file_acl Filesystem.file_size Filesystem.process_guid Filesystem.process_id Filesystem.user Filesystem.vendor_product | `drop_dm_object_name(Filesystem)` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `icedid_exfiltrated_archived_file_creation_filter`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.cisecurity.org/insights/white-papers/security-primer-icedid",
              "mitre": [
                "T1560.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"IcedID Exfiltrated Archived File Creation\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1560.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"IcedID Exfiltrated Archived File Creation\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the behavior in “IcedID Exfiltrated Archived File Creation” using the same log fields the detection already uses, so the team can act before impact spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.333",
              "n": "Kerberoasting spn request with RC4 encryption",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential Kerberoasting attacks by identifying Kerberos service ticket requests with RC4 encryption through Event ID 4769. It leverages specific Ticket_Options values commonly used by Kerberoasting tools. This activity is significant as Kerberoasting allows attackers to request service tickets for domain accounts, typically service accounts, and crack them offline to gain privileged access. If confirmed malicious, this could lead to unauthorized access, privilege e…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4769",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4769 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Older systems that support kerberos RC4 by default like NetApp may generate false positives. Filter as needed",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/4e3e9c8096dde00639a6b98845ec349135554ed5/atomics/T1208/T1208.md, https://www.hub.trimarcsecurity.com/post/trimarc-research-detecting-kerberoasting-activity",
              "mitre": [
                "T1558.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kerberoasting spn request with RC4 encryption\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4769. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kerberoasting spn request with RC4 encryption\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Older systems that support kerberos RC4 by default like NetApp may generate false positives. Filter as needed\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
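\n\nIllustrative Raw-Event Sketch\n\nFor orientation, a minimal sketch of the raw-event pattern this detection packages is shown below. This is not the shipped ESCU search: the index name (wineventlog) and the XML-style field names (TicketEncryptionType, ServiceName, TargetUserName, IpAddress) are assumptions to adapt to your environment; use the Content Management version in production.\n\n```spl\nindex=wineventlog EventCode=4769 TicketEncryptionType=0x17 ServiceName!=\"*$\"\n| stats count min(_time) as firstTime max(_time) as lastTime by TargetUserName, ServiceName, IpAddress\n```\n\nThe packaged ESCU search layers additional constraints (such as specific TicketOptions values) on top of this pattern to reduce noise from routine ticket requests.",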
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kerberoasting spn request with RC4 encryption so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.334",
              "n": "Kerberos Pre-Authentication Flag Disabled in UserAccountControl",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when the Kerberos Pre-Authentication flag is disabled in a user account, using Windows Security Event 4738. This event indicates a change in the UserAccountControl property of a domain user object. Disabling this flag allows adversaries to perform offline brute force attacks on the user's password using the AS-REP Roasting technique. This activity is significant as it can be used by attackers with existing privileges to escalate their access or maintain persistence…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4738",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting Domain Controller events. The Advanced Security Audit policy setting `User Account Management` within `Account Management` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/troubleshoot/windows-server/identity/useraccountcontrol-manipulate-account-properties, https://m0chan.github.io/2019/07/31/How-To-Attack-Kerberos-101.html, https://stealthbits.com/blog/cracking-active-directory-passwords-with-as-rep-roasting/",
              "mitre": [
                "T1558.004"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kerberos Pre-Authentication Flag Disabled in UserAccountControl\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4738. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kerberos Pre-Authentication Flag Disabled in UserAccountControl\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
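\n\nIllustrative Raw-Event Sketch\n\nFor orientation, a minimal sketch of the raw-event pattern this detection packages is shown below. This is not the shipped ESCU search: the index name (wineventlog), the MSADChangedAttributes field, and the matched string are assumptions based on XML-rendered Event 4738; verify how your Windows add-on renders the event and prefer the Content Management version in production.\n\n```spl\nindex=wineventlog EventCode=4738 MSADChangedAttributes=\"*Don't Require Preauth*Enabled*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by TargetUserName, SubjectUserName\n```",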
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kerberos Pre-Authentication Flag Disabled in UserAccountControl so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.335",
              "n": "Kerberos Pre-Authentication Flag Disabled with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the `Set-ADAccountControl` PowerShell cmdlet with parameters that disable Kerberos Pre-Authentication. It leverages PowerShell Script Block Logging (EventCode=4104) to identify this specific command execution. Disabling Kerberos Pre-Authentication is significant because it allows adversaries to perform offline brute force attacks against user passwords using the AS-REP Roasting technique. If confirmed malicious, this activity could enable attackers to es…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, Administrators may need to set this flag for legitimate purposes.",
              "refs": "https://learn.microsoft.com/en-us/troubleshoot/windows-server/identity/useraccountcontrol-manipulate-account-properties, https://m0chan.github.io/2019/07/31/How-To-Attack-Kerberos-101.html, https://stealthbits.com/blog/cracking-active-directory-passwords-with-as-rep-roasting/",
              "mitre": [
                "T1558.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kerberos Pre-Authentication Flag Disabled with PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kerberos Pre-Authentication Flag Disabled with PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, Administrators may need to set this flag for legitimate purposes.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
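\n\nIllustrative Raw-Event Sketch\n\nFor orientation, a minimal sketch of the raw-event pattern this detection packages is shown below. This is not the shipped ESCU search: the index name (wineventlog) and the 4104 field names (ScriptBlockText, Computer, UserID) are assumptions to verify in your environment, and PowerShell Script Block Logging must be enabled via Group Policy for these events to exist at all.\n\n```spl\nindex=wineventlog EventCode=4104 ScriptBlockText=\"*Set-ADAccountControl*\" ScriptBlockText=\"*DoesNotRequirePreAuth*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID\n```",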
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kerberos Pre-Authentication Flag Disabled with PowerShell so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.336",
              "n": "Kerberos Service Ticket Request Using RC4 Encryption",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects Kerberos service ticket requests using RC4 encryption, leveraging Kerberos Event 4769. This method identifies potential Golden Ticket attacks, where adversaries forge Kerberos Granting Tickets (TGT) using the Krbtgt account NTLM password hash to gain unrestricted access to an Active Directory environment. Monitoring for RC4 encryption usage is significant as it is rare in modern networks, indicating possible malicious activity. If confirmed malicious, attackers cou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4769",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4769 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Based on Microsoft documentation, legacy systems or applications will use RC4-HMAC as the default encryption for Kerberos Service Ticket requests. Specifically, systems before Windows Server 2008 and Windows Vista. Newer systems will use AES128 or AES256.",
              "refs": "https://attack.mitre.org/techniques/T1558/001/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4769, https://adsecurity.org/?p=1515, https://gist.github.com/TarlogicSecurity/2f221924fef8c14a1d8e29f3cb5c5c4a, https://en.hackndo.com/kerberos-silver-golden-tickets/",
              "mitre": [
                "T1558.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kerberos Service Ticket Request Using RC4 Encryption\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4769. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kerberos Service Ticket Request Using RC4 Encryption\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Based on Microsoft documentation, legacy systems or applications will use RC4-HMAC as the default encryption for Kerberos Service Ticket requests. Specifically, systems before Windows Server 2008 and Windows Vista. Newer systems will use AES128 or AES256.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
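\n\nIllustrative Raw-Event Sketch\n\nFor orientation, a minimal sketch of the raw-event pattern this detection packages is shown below. This is not the shipped ESCU search: the index name (wineventlog) and the XML-style field names are assumptions to adapt to your environment. Encryption type 0x17 is RC4-HMAC, and ServiceName!=\"*$\" excludes computer accounts, which routinely vary in ticket behavior.\n\n```spl\nindex=wineventlog EventCode=4769 TicketEncryptionType=0x17 ServiceName!=\"*$\"\n| stats count min(_time) as firstTime max(_time) as lastTime by ServiceName, TargetUserName, IpAddress\n```",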
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kerberos Service Ticket Request Using RC4 Encryption so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.337",
              "n": "Kerberos TGT Request Using RC4 Encryption",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a Kerberos Ticket Granting Ticket (TGT) request using RC4-HMAC encryption (type 0x17) by leveraging Event 4768. This encryption type is outdated and its presence may indicate an OverPass The Hash attack. Monitoring this activity is crucial as it can signify credential theft, allowing adversaries to authenticate to the Kerberos Distribution Center (KDC) using a stolen NTLM hash. If confirmed malicious, this could enable unauthorized access to systems and resources, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4768",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4768 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Based on Microsoft documentation, legacy systems or applications will use RC4-HMAC as the default encryption for TGT requests. Specifically, systems before Windows Server 2008 and Windows Vista. Newer systems will use AES128 or AES256.",
              "refs": "https://stealthbits.com/blog/how-to-detect-overpass-the-hash-attacks/, https://www.thehacker.recipes/ad/movement/kerberos/ptk, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4768",
              "mitre": [
                "T1550"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kerberos TGT Request Using RC4 Encryption\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4768. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1550. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kerberos TGT Request Using RC4 Encryption\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Based on Microsoft documentation, legacy systems or applications will use RC4-HMAC as the default encryption for TGT requests. Specifically, systems before Windows Server 2008 and Windows Vista. Newer systems will use AES128 or AES256.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
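\n\nIllustrative Raw-Event Sketch\n\nFor orientation, a minimal sketch of the raw-event pattern this detection packages is shown below. This is not the shipped ESCU search: the index name (wineventlog) and the XML-style field names are assumptions to adapt to your environment. Encryption type 0x17 is RC4-HMAC; excluding accounts ending in $ filters out machine accounts.\n\n```spl\nindex=wineventlog EventCode=4768 TicketEncryptionType=0x17 TargetUserName!=\"*$\"\n| stats count min(_time) as firstTime max(_time) as lastTime by TargetUserName, IpAddress\n```",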
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Kerberos TGT Request Using RC4 Encryption so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.338",
              "n": "Linux Auditd Add User Account Type",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious add user account type. This behavior is critical for a SOC to monitor because it may indicate attempts to gain unauthorized access or maintain control over a system. Such actions could be signs of malicious activity. If confirmed, this could lead to serious consequences, including a compromised system, unauthorized access to sensitive data, or even a wider breach affecting the entire network. Detecting and responding to these signs early is essential…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Add User",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Add User ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1136.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Add User Account Type\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Add User. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Add User Account Type\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
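\n\nIllustrative Raw-Event Sketch\n\nFor orientation, a minimal sketch of the raw-event pattern this detection packages is shown below. This is not the shipped ESCU search: the index and sourcetype names are assumptions that depend on how your auditd add-on ingests the logs; the exe and acct fields appear in auditd ADD_USER records.\n\n```spl\nindex=linux sourcetype=\"linux:audit\" type=ADD_USER\n| stats count min(_time) as firstTime max(_time) as lastTime by host, exe, acct\n```",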
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Add User Account Type so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.339",
              "n": "Linux Auditd Auditd Daemon Abort",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the abnormal termination of the Linux audit daemon (auditd) by identifying DAEMON_ABORT events in audit logs. These terminations suggest a serious failure of the auditing subsystem, potentially due to resource exhaustion, corruption, or malicious interference. Unlike a clean shutdown, DAEMON_ABORT implies that audit logging may have been disabled without system administrator intent. Alerts should be generated on detection and correlated with DAEMON_START, DAEMON_EN…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Daemon Abort",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Daemon Abort ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-audit_record_types",
              "mitre": [
                "T1562.012"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Auditd Daemon Abort\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Daemon Abort. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Auditd Daemon Abort\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
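\n\nIllustrative Raw-Event Sketch\n\nFor orientation, a minimal sketch of the raw-event pattern this detection packages is shown below. This is not the shipped ESCU search: the index and sourcetype names are assumptions that depend on how your auditd add-on ingests the logs.\n\n```spl\nindex=linux sourcetype=\"linux:audit\" type=DAEMON_ABORT\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n```\n\nCorrelate hits with nearby DAEMON_START events to distinguish crash-restart cycles from audit logging that stayed down.",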
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Auditd Daemon Abort so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.340",
              "n": "Linux Auditd Auditd Daemon Shutdown",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the unexpected termination of the Linux Audit daemon (auditd) by monitoring for log entries of type DAEMON_END. This event signifies that the audit logging service has stopped, either due to a legitimate system shutdown, manual administrative action, or potentially malicious tampering. Since auditd is responsible for recording critical security events, its sudden stoppage may indicate an attempt to disable security monitoring or evade detection during an attack. Th…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Daemon End",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Daemon End ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-audit_record_types",
              "mitre": [
                "T1562.012"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Auditd Daemon Shutdown\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Daemon End. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Auditd Daemon Shutdown\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
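\n\nOptional Raw Auditd Triage Sketch\n\nAs a complement to the Risk drilldown above, the following is a minimal, hypothetical triage search against raw auditd events. The index and sourcetype (index=os, sourcetype=linux:audit) and the type=DAEMON_END filter are assumptions based on standard auditd record types and common add-on conventions; rename them to match your ingestion:\n\n```spl\nindex=os sourcetype=linux:audit type=DAEMON_END\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n| sort -lastTime\n```\n\nAn auditd shutdown outside a known change window deserves prompt review, because it blinds all subsequent audit logging on that host.",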
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Auditd Daemon Shutdown so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.341",
              "n": "Linux Auditd Auditd Daemon Start",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the (re)initialization of the Linux audit daemon (auditd) by identifying log entries of type DAEMON_START. This event indicates that the audit subsystem has resumed logging after being stopped or has started during system boot. While DAEMON_START may be expected during reboots or legitimate configuration changes, it can also signal attempts to re-enable audit logging after evasion, or restarts with modified or reduced rule sets. Monitoring this event in correlation…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Daemon Start",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Daemon Start ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-audit_record_types",
              "mitre": [
                "T1562.012"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Auditd Daemon Start\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Daemon Start. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Auditd Daemon Start\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
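\n\nOptional Raw Auditd Triage Sketch\n\nAs a complement to the Risk drilldown above, this minimal, hypothetical sketch trends DAEMON_START events so they can be compared against known reboot and change windows. The index and sourcetype (index=os, sourcetype=linux:audit) are assumptions; rename them to match your ingestion:\n\n```spl\nindex=os sourcetype=linux:audit type=DAEMON_START\n| timechart span=1d count by host\n```\n\nStarts that do not line up with reboots or planned configuration changes may indicate audit logging being re-enabled after tampering, possibly with a reduced rule set.",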
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Auditd Daemon Start so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.342",
              "n": "Linux Auditd Auditd Service Stop",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious auditd service stop. This behavior is critical for a SOC to monitor because it may indicate attempts to gain unauthorized access or maintain control over a system. Such actions could be signs of malicious activity. If confirmed, this could lead to serious consequences, including a compromised system, unauthorized access to sensitive data, or even a wider breach affecting the entire network. Detecting and responding to these signs early is essential t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Service Stop",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Service Stop ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1489"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Auditd Service Stop\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Service Stop. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Auditd Service Stop\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
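\n\nOptional Raw Auditd Triage Sketch\n\nOn systemd-managed hosts, stopping a service typically emits a SERVICE_STOP audit record carrying a unit field. The sketch below is a hypothetical raw-data check built on that assumption; the index, sourcetype, and field names may differ in your deployment, so adjust before use:\n\n```spl\nindex=os sourcetype=linux:audit type=SERVICE_STOP unit=auditd\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n```",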
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Auditd Service Stop so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.343",
              "n": "Linux Auditd Base64 Decode Files",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious Base64 decode operations that may indicate malicious activity, such as data exfiltration or execution of encoded commands. Base64 is commonly used to encode data for safe transmission, but attackers may abuse it to conceal malicious payloads. This detection focuses on identifying unusual or unexpected Base64 decoding processes, particularly when associated with critical files or directories. By monitoring these activities, the analytic helps uncover pote…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html, https://gtfobins.github.io/gtfobins/dd/",
              "mitre": [
                "T1140"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Base64 Decode Files\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1140. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Base64 Decode Files\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
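\n\nOptional Raw Auditd Triage Sketch\n\nAs a complement to the Risk drilldown above, this hypothetical sketch looks for base64 decode invocations in EXECVE records. It assumes your add-on extracts the EXECVE argument fields (a0, a1, ...) and that auditd data lands in index=os with sourcetype=linux:audit; adjust both assumptions to your environment:\n\n```spl\nindex=os sourcetype=linux:audit type=EXECVE a0=base64 (a1=\"-d\" OR a1=\"--decode\")\n| stats count by host, a0, a1, a2\n```",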
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Base64 Decode Files so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.344",
              "n": "Linux Auditd Clipboard Data Copy",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the Linux 'xclip' command to copy data from the clipboard. It leverages Linux Auditd telemetry, focusing on process names and command-line arguments related to clipboard operations. This activity is significant because adversaries can exploit clipboard data to capture sensitive information such as passwords or IP addresses. If confirmed malicious, this technique could lead to unauthorized data exfiltration, compromising sensitive information and potentia…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present on Linux desktop as it may commonly be used by administrators or end users. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1115/, https://man.archlinux.org/man/xclip.1",
              "mitre": [
                "T1115"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Clipboard Data Copy\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1115. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Clipboard Data Copy\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present on Linux desktop as it may commonly be used by administrators or end users. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
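\n\nOptional Raw Auditd Triage Sketch\n\nThis hypothetical sketch surfaces xclip executions from EXECVE records so desktop hosts with legitimate clipboard use can be baselined and filtered, per the known false positives above. The index, sourcetype, and the a0 argument field are assumptions; adjust them to your environment:\n\n```spl\nindex=os sourcetype=linux:audit type=EXECVE a0=xclip\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n| sort -count\n```",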
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Clipboard Data Copy so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.345",
              "n": "Linux Auditd Data Transfer Size Limits Via Split",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious data transfer activities that involve the use of the `split` syscall, potentially indicating an attempt to evade detection by breaking large files into smaller parts. Attackers may use this technique to bypass size-based security controls, facilitating the covert exfiltration of sensitive data. By monitoring for unusual or unauthorized use of the `split` syscall, this analytic helps identify potential data exfiltration attempts, allowing security teams t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1030"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Data Transfer Size Limits Via Split\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1030. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Data Transfer Size Limits Via Split\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
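\n\nOptional Raw Auditd Triage Sketch\n\nThe split command chunks a file into fixed-size pieces (for example, split -b 10M archive.tar), which attackers can use to slip under size-based egress controls. This hypothetical sketch flags such invocations in EXECVE records, assuming index=os, sourcetype=linux:audit, and extracted argument fields (a0, a1, ...); adjust to your environment:\n\n```spl\nindex=os sourcetype=linux:audit type=EXECVE a0=split a1=\"-b\"\n| stats count by host, a0, a1, a2\n```",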
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Data Transfer Size Limits Via Split so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.346",
              "n": "Linux Auditd Data Transfer Size Limits Via Split Syscall",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious data transfer activities that involve the use of the `split` syscall, potentially indicating an attempt to evade detection by breaking large files into smaller parts. Attackers may use this technique to bypass size-based security controls, facilitating the covert exfiltration of sensitive data. By monitoring for unusual or unauthorized use of the `split` syscall, this analytic helps identify potential data exfiltration attempts, allowing security teams t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Syscall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Syscall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1030"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Data Transfer Size Limits Via Split Syscall\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Syscall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1030. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Data Transfer Size Limits Via Split Syscall\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
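\n\nOptional Raw Auditd Triage Sketch\n\nWhere syscall auditing is enabled, process activity of split also appears in SYSCALL records with standard auditd fields such as comm, exe, and uid. This hypothetical sketch assumes index=os and sourcetype=linux:audit; adjust both to your environment:\n\n```spl\nindex=os sourcetype=linux:audit type=SYSCALL comm=split\n| stats count by host, comm, exe, uid\n```",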
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Data Transfer Size Limits Via Split Syscall so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.347",
              "n": "Linux Auditd Dd File Overwrite",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the 'dd' command to overwrite files on a Linux system. It leverages data from Linux Auditd telemetry, focusing on process execution logs that include command-line details. This activity is significant because adversaries often use the 'dd' command to destroy or irreversibly overwrite files, disrupting system availability and services. If confirmed malicious, this behavior could lead to data destruction, making recovery difficult and potentially causing s…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Proctitle",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Proctitle ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://gtfobins.github.io/gtfobins/dd/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1485/T1485.md",
              "mitre": [
                "T1485"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Dd File Overwrite\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Proctitle. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Dd File Overwrite\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Dd File Overwrite so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.348",
              "n": "Linux Auditd Doas Conf File Creation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Linux Auditd Doas Conf File Creation. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Path, Linux Auditd Cwd",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Path, Linux Auditd Cwd ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://wiki.gentoo.org/wiki/Doas, https://www.makeuseof.com/how-to-install-and-use-doas/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Doas Conf File Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Path, Linux Auditd Cwd. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Doas Conf File Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Doas Conf File Creation so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.349",
              "n": "Linux Auditd Doas Tool Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'doas' tool on a Linux host. This tool allows standard users to perform tasks with root privileges, similar to 'sudo'. The detection leverages data from Linux Auditd, focusing on process names and command-line executions. This activity is significant as 'doas' can be exploited by adversaries to gain elevated privileges on a compromised host. If confirmed malicious, this could lead to unauthorized administrative access, potentially compromising …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Syscall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Syscall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://wiki.gentoo.org/wiki/Doas, https://www.makeuseof.com/how-to-install-and-use-doas/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Doas Tool Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Syscall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Doas Tool Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Doas Tool Execution so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.350",
              "n": "Linux Auditd Edit Cron Table Parameter",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious editing of cron jobs in Linux using the crontab command-line parameter (-e). It identifies this activity by monitoring command-line executions involving 'crontab' and the edit parameter. This behavior is significant for a SOC as cron job manipulations can indicate unauthorized persistence attempts or scheduled malicious actions. If confirmed malicious, this activity could lead to system compromise, unauthorized access, or broader network compromise.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Syscall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Syscall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://attack.mitre.org/techniques/T1053/003/",
              "mitre": [
                "T1053.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Edit Cron Table Parameter\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Syscall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Edit Cron Table Parameter\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Edit Cron Table Parameter so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.351",
              "n": "Linux Auditd File And Directory Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious file and directory discovery activities, which may indicate an attacker's effort to locate sensitive documents and files on a compromised system. This behavior often precedes data exfiltration, as adversaries seek to identify valuable or confidential information for theft. By identifying unusual or unauthorized attempts to browse or enumerate files and directories, this analytic helps security teams detect potential reconnaissance or preparatory actions …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html, https://github.com/peass-ng/PEASS-ng/tree/master/linPEAS",
              "mitre": [
                "T1083"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd File And Directory Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1083. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd File And Directory Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd File And Directory Discovery so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.352",
              "n": "Linux Auditd File Permission Modification Via Chmod",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious file permission modifications using the `chmod` command, which may indicate an attacker attempting to alter access controls on critical files or directories. Such modifications can be used to grant unauthorized users elevated privileges or to conceal malicious activities by restricting legitimate access. By monitoring for unusual or unauthorized `chmod` usage, this analytic helps identify potential security breaches, allowing security teams to respond pr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Proctitle",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Proctitle ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1222.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd File Permission Modification Via Chmod\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Proctitle. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd File Permission Modification Via Chmod\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd File Permission Modification Via Chmod so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.353",
              "n": "Linux Auditd File Permissions Modification Via Chattr",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious file permissions modifications using the chattr command, which may indicate an attacker attempting to manipulate file attributes to evade detection or prevent alteration. The chattr command can be used to make files immutable or restrict deletion, which can be leveraged to protect malicious files or disrupt system operations. By monitoring for unusual or unauthorized chattr usage, this analytic helps identify potential tampering with critical files, enab…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1222.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd File Permissions Modification Via Chattr\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd File Permissions Modification Via Chattr\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd File Permissions Modification Via Chattr so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.354",
              "n": "Linux Auditd Find Credentials From Password Stores",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious attempts to find credentials stored in password stores, indicating a potential attacker's effort to access sensitive login information. Password stores are critical repositories that contain valuable credentials, and unauthorized access to them can lead to significant security breaches. By monitoring for unusual or unauthorized activities related to password store access, this analytic helps identify potential credential theft attempts, allowing security…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html, https://github.com/peass-ng/PEASS-ng/tree/master/linPEAS",
              "mitre": [
                "T1555.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Find Credentials From Password Stores\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1555.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Find Credentials From Password Stores\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Find Credentials From Password Stores so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.355",
              "n": "Linux Auditd Find Ssh Private Keys",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious attempts to find SSH private keys, which may indicate an attacker's effort to compromise secure access to systems. SSH private keys are essential for secure authentication, and unauthorized access to these keys can enable attackers to gain unauthorized access to servers and other critical infrastructure. By monitoring for unusual or unauthorized searches for SSH private keys, this analytic helps identify potential threats to network security, allowing se…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html, https://github.com/peass-ng/PEASS-ng/tree/master/linPEAS",
              "mitre": [
                "T1552.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Find Ssh Private Keys\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Find Ssh Private Keys\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Find Ssh Private Keys so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.356",
              "n": "Linux Auditd Insert Kernel Module Using Insmod Utility",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the insertion of a Linux kernel module using the insmod utility. It leverages data from Linux Auditd, focusing on process execution logs that include process names and command-line details. This activity is significant as it may indicate the installation of a rootkit or malicious kernel module, potentially allowing an attacker to gain elevated privileges and bypass security detections. If confirmed malicious, this could lead to unauthorized code execution, persiste…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Syscall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Syscall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://docs.fedoraproject.org/en-US/fedora/rawhide/system-administrators-guide/kernel-module-driver-configuration/Working_with_Kernel_Modules/, https://security.stackexchange.com/questions/175953/how-to-load-a-malicious-lkm-at-startup, https://0x00sec.org/t/kernel-rootkits-getting-your-hands-dirty/1485",
              "mitre": [
                "T1547.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Insert Kernel Module Using Insmod Utility\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Syscall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Insert Kernel Module Using Insmod Utility\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Insert Kernel Module Using Insmod Utility so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.357",
              "n": "Linux Auditd Install Kernel Module Using Modprobe Utility",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the installation of a Linux kernel module using the modprobe utility. It leverages data from Linux Auditd, focusing on process names and command-line executions. This activity is significant because installing a kernel module can indicate an attempt to deploy a rootkit or other malicious kernel-level code, potentially leading to elevated privileges and bypassing security detections. If confirmed malicious, this could allow an attacker to gain persistent, high-level…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Syscall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Syscall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://docs.fedoraproject.org/en-US/fedora/rawhide/system-administrators-guide/kernel-module-driver-configuration/Working_with_Kernel_Modules/, https://security.stackexchange.com/questions/175953/how-to-load-a-malicious-lkm-at-startup, https://0x00sec.org/t/kernel-rootkits-getting-your-hands-dirty/1485",
              "mitre": [
                "T1547.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Install Kernel Module Using Modprobe Utility\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Syscall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Install Kernel Module Using Modprobe Utility\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Install Kernel Module Using Modprobe Utility so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.358",
              "n": "Linux Auditd Kernel Module Enumeration",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of the 'kmod' process to list kernel modules on a Linux system. This detection leverages data from Linux Auditd, focusing on process names and command-line executions. While listing kernel modules is not inherently malicious, it can be a precursor to loading unauthorized modules using 'insmod'. If confirmed malicious, this activity could allow an attacker to load kernel modules, potentially leading to privilege escalation, persistence, or other malicious…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Syscall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Syscall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are present based on automated tooling or system administrative usage. Filter as needed.",
              "refs": "https://man.archlinux.org/man/kmod.8",
              "mitre": [
                "T1082",
                "T1014"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Kernel Module Enumeration\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Syscall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082, T1014. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Kernel Module Enumeration\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are present based on automated tooling or system administrative usage. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Kernel Module Enumeration so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.359",
              "n": "Linux Auditd Kernel Module Using Rmmod Utility",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious use of the `rmmod` utility for kernel module removal, which may indicate an attacker attempt to unload critical or security-related kernel modules. The `rmmod` command is used to remove modules from the Linux kernel, and unauthorized use can be a tactic to disable security features, conceal malicious activities, or disrupt system operations. By monitoring for unusual or unauthorized `rmmod` activity, this analytic helps identify potential tampering with …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Syscall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Syscall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1547.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Kernel Module Using Rmmod Utility\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Syscall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Kernel Module Using Rmmod Utility\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Kernel Module Using Rmmod Utility so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.360",
              "n": "Linux Auditd Nopasswd Entry In Sudoers File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of NOPASSWD entries to the /etc/sudoers file on Linux systems. It leverages Linux Auditd data to identify command lines containing \"NOPASSWD:\". This activity is significant because it allows users to execute commands with elevated privileges without requiring a password, which can be exploited by adversaries to maintain persistent, privileged access. If confirmed malicious, this could lead to unauthorized privilege escalation, persistent access, and po…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Proctitle",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Proctitle ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://askubuntu.com/questions/334318/sudoers-file-enable-nopasswd-for-user-all-commands, https://help.ubuntu.com/community/Sudoers",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Nopasswd Entry In Sudoers File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Proctitle. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Nopasswd Entry In Sudoers File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for Linux Auditd Nopasswd Entry In Sudoers File so the security team can catch abuse while it is still small and controllable, before accounts or data are at real risk.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.361",
              "n": "Linux Auditd Osquery Service Stop",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious stopping of the `osquery` service, which may indicate an attempt to disable monitoring and evade detection. `Osquery` is a powerful tool used for querying system information and detecting anomalies, and stopping its service can be a sign that an attacker is trying to disrupt security monitoring or hide malicious activities. By monitoring for unusual or unauthorized stops of the `osquery` service, this analytic helps identify potential efforts to bypass s…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Service Stop",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Service Stop ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1489"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Osquery Service Stop\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Service Stop. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Osquery Service Stop\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Osquery Service Stop\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.362",
              "n": "Linux Auditd Possible Access Or Modification Of Sshd Config File",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Linux Auditd Possible Access Or Modification Of Sshd Config File. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Path, Linux Auditd Cwd",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Path, Linux Auditd Cwd ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.hackingarticles.in/ssh-penetration-testing-port-22/, https://attack.mitre.org/techniques/T1098/004/",
              "mitre": [
                "T1098.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Possible Access Or Modification Of Sshd Config File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Path, Linux Auditd Cwd. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Possible Access Or Modification Of Sshd Config File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Possible Access Or Modification Of Sshd Config \" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.363",
              "n": "Linux Auditd Possible Access To Credential Files",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to access or dump the contents of /etc/passwd and /etc/shadow files on Linux systems. It leverages data from Linux Auditd, focusing on processes like 'cat', 'nano', 'vim', and 'vi' accessing these files. This activity is significant as it may indicate credential dumping, a technique used by adversaries to gain persistence or escalate privileges. If confirmed malicious, privileges. If confirmed malicious, attackers could obtain hashed passwords for offline …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Proctitle",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Proctitle ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://askubuntu.com/questions/445361/what-is-difference-between-etc-shadow-and-etc-passwd, https://attack.mitre.org/techniques/T1003/008/",
              "mitre": [
                "T1003.008"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Possible Access To Credential Files\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Proctitle. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Possible Access To Credential Files\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Possible Access To Credential Files\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.364",
              "n": "Linux Auditd Possible Access To Sudoers File",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Linux Auditd Possible Access To Sudoers File. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Path, Linux Auditd Cwd",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Path, Linux Auditd Cwd ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1548/003/, https://web.archive.org/web/20210708035426/https://www.cobaltstrike.com/downloads/csmanual43.pdf",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Possible Access To Sudoers File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Path, Linux Auditd Cwd. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Possible Access To Sudoers File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Possible Access To Sudoers File\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.365",
              "n": "Linux Auditd Possible Append Cronjob Entry On Existing Cronjob File",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Linux Auditd Possible Append Cronjob Entry On Existing Cronjob File. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Path, Linux Auditd Cwd",
              "q": "`linux_auditd` (type=PATH OR type=CWD)\n    | rex \"msg=audit\\([^)]*:(?<audit_id>\\d+)\\)\"",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Linux Auditd Path, Linux Auditd Cwd ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1053/003/, https://blog.aquasec.com/threat-alert-kinsing-malware-container-vulnerability, https://www.intezer.com/blog/research/kaiji-new-chinese-linux-malware-turning-to-golang/",
              "mitre": [
                "T1053.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Possible Append Cronjob Entry On Existing Cronjob File\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Path, Linux Auditd Cwd. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Possible Append Cronjob Entry On Existing Cronjob File\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on Linux Auditd Possible Append Cronjob Entry On Existing Cronjob File so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.366",
              "n": "Linux Auditd Preload Hijack Via Preload File",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Linux Auditd Preload Hijack Via Preload File. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Path, Linux Auditd Cwd",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Path, Linux Auditd Cwd ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1574.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Preload Hijack Via Preload File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Path, Linux Auditd Cwd. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Preload Hijack Via Preload File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Preload Hijack Via Preload File\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.367",
              "n": "Linux Auditd Service Restarted",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the restarting or re-enabling of services on Linux systems using the `systemctl` or `service` commands. It leverages data from Linux Auditd, focusing on process and command-line execution logs. This activity is significant as adversaries may use it to maintain persistence or execute unauthorized actions. If confirmed malicious, this behavior could lead to repeated execution of malicious payloads, unauthorized access, or data destruction. Security analysts should in…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Proctitle",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Proctitle ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://attack.mitre.org/techniques/T1543/003/",
              "mitre": [
                "T1053.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Service Restarted\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Proctitle. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Service Restarted\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Service Restarted\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.368",
              "n": "Linux Auditd Service Started",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious service started. This behavior is critical for a SOC to monitor because it may indicate attempts to gain unauthorized access or maintain control over a system. Such actions could be signs of malicious activity. If confirmed, this could lead to serious consequences, including a compromised system, unauthorized access to sensitive data, or even a wider breach affecting the entire network. Detecting and responding to these signs early is essential to pr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Proctitle",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Proctitle ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1569.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Service Started\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Proctitle. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1569.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Service Started\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Service Started\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.369",
              "n": "Linux Auditd Setuid Using Chmod Utility",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the chmod utility to set the SUID or SGID bit on files, which can allow users to temporarily gain root or group-level access. This detection leverages data from Linux Auditd, focusing on process names and command-line arguments related to chmod. This activity is significant as it can indicate an attempt to escalate privileges or maintain persistence on a system. If confirmed malicious, an attacker could gain elevated access, potentially compromisin…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Proctitle",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Proctitle ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://www.hackingarticles.in/linux-privilege-escalation-using-capabilities/",
              "mitre": [
                "T1548.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Setuid Using Chmod Utility\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Proctitle. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Setuid Using Chmod Utility\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
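              "qs": "| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime sum(All_Risk.calculated_risk_score) as total_risk from datamodel=Risk.All_Risk where All_Risk.normalized_risk_object IN (\"$dest$\") by All_Risk.normalized_risk_object, source | rename source as \"Search Name\" | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` ```Hedged sketch, not shipped by upstream ESCU: an accelerated tstats variant of the Risk drilldown above. Assumes the Risk data model is accelerated and that calculated_risk_score / normalized_risk_object exist in your ES version; $dest$ is supplied by the Notable drilldown.```",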
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Setuid Using Chmod Utility\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.370",
              "n": "Linux Auditd Setuid Using Setcap Utility",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the 'setcap' utility to enable the SUID bit on Linux systems. It leverages Linux Auditd data, focusing on process names and command-line arguments that indicate the use of 'setcap' with specific capabilities. This activity is significant because setting the SUID bit allows a user to temporarily gain root access, posing a substantial security risk. If confirmed malicious, an attacker could escalate privileges, execute arbitrary commands with elevate…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://www.hackingarticles.in/linux-privilege-escalation-using-capabilities/",
              "mitre": [
                "T1548.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Setuid Using Setcap Utility\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Setuid Using Setcap Utility\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
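              "qs": "| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime sum(All_Risk.calculated_risk_score) as total_risk from datamodel=Risk.All_Risk where All_Risk.normalized_risk_object IN (\"$dest$\") by All_Risk.normalized_risk_object, source | rename source as \"Search Name\" | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` ```Hedged sketch, not shipped by upstream ESCU: an accelerated tstats variant of the Risk drilldown above. Assumes the Risk data model is accelerated and that calculated_risk_score / normalized_risk_object exist in your ES version; $dest$ is supplied by the Notable drilldown.```",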
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Setuid Using Setcap Utility\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.371",
              "n": "Linux Auditd Sudo Or Su Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the \"sudo\" or \"su\" command on a Linux operating system. It leverages data from Linux Auditd, focusing on process names and parent process names. This activity is significant because \"sudo\" and \"su\" commands are commonly used by adversaries to elevate privileges, potentially leading to unauthorized access or control over the system. If confirmed malicious, this activity could allow attackers to execute commands with root privileges, leading to sever…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Proctitle",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Proctitle ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://attack.mitre.org/techniques/T1548/003/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Sudo Or Su Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Proctitle. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Sudo Or Su Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
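              "qs": "| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime sum(All_Risk.calculated_risk_score) as total_risk from datamodel=Risk.All_Risk where All_Risk.normalized_risk_object IN (\"$dest$\") by All_Risk.normalized_risk_object, source | rename source as \"Search Name\" | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` ```Hedged sketch, not shipped by upstream ESCU: an accelerated tstats variant of the Risk drilldown above. Assumes the Risk data model is accelerated and that calculated_risk_score / normalized_risk_object exist in your ES version; $dest$ is supplied by the Notable drilldown.```",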
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Sudo Or Su Execution\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.372",
              "n": "Linux Auditd Sysmon Service Stop",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious sysmon service stop. This behavior is critical for a SOC to monitor because it may indicate attempts to gain unauthorized access or maintain control over a system. Such actions could be signs of malicious activity. If confirmed, this could lead to serious consequences, including a compromised system, unauthorized access to sensitive data, or even a wider breach affecting the entire network. Detecting and responding to these signs early is essential t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Service Stop",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Service Stop ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1489"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Sysmon Service Stop\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Service Stop. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1489. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Sysmon Service Stop\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
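              "qs": "| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime sum(All_Risk.calculated_risk_score) as total_risk from datamodel=Risk.All_Risk where All_Risk.normalized_risk_object IN (\"$dest$\") by All_Risk.normalized_risk_object, source | rename source as \"Search Name\" | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` ```Hedged sketch, not shipped by upstream ESCU: an accelerated tstats variant of the Risk drilldown above. Assumes the Risk data model is accelerated and that calculated_risk_score / normalized_risk_object exist in your ES version; $dest$ is supplied by the Notable drilldown.```",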
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Sysmon Service Stop\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.373",
              "n": "Linux Auditd Unix Shell Configuration Modification",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Linux Auditd Unix Shell Configuration Modification. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Path, Linux Auditd Cwd",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Path, Linux Auditd Cwd ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html, https://github.com/peass-ng/PEASS-ng/tree/master/linPEAS",
              "mitre": [
                "T1546.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Unix Shell Configuration Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Path, Linux Auditd Cwd. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Unix Shell Configuration Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
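              "qs": "| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime sum(All_Risk.calculated_risk_score) as total_risk from datamodel=Risk.All_Risk where All_Risk.normalized_risk_object IN (\"$dest$\") by All_Risk.normalized_risk_object, source | rename source as \"Search Name\" | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` ```Hedged sketch, not shipped by upstream ESCU: an accelerated tstats variant of the Risk drilldown above. Assumes the Risk data model is accelerated and that calculated_risk_score / normalized_risk_object exist in your ES version; $dest$ is supplied by the Notable drilldown.```",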
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Unix Shell Configuration Modification\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.374",
              "n": "Linux Auditd Unload Module Via Modprobe",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious use of the `modprobe` command to unload kernel modules, which may indicate an attempt to disable critical system components or evade detection. The `modprobe` utility manages kernel modules, and unauthorized unloading of modules can disrupt system security features, remove logging capabilities, or conceal malicious activities. By monitoring for unusual or unauthorized `modprobe` operations involving module unloading, this analytic helps identify potentia…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html",
              "mitre": [
                "T1547.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Unload Module Via Modprobe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Unload Module Via Modprobe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
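              "qs": "| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime sum(All_Risk.calculated_risk_score) as total_risk from datamodel=Risk.All_Risk where All_Risk.normalized_risk_object IN (\"$dest$\") by All_Risk.normalized_risk_object, source | rename source as \"Search Name\" | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` ```Hedged sketch, not shipped by upstream ESCU: an accelerated tstats variant of the Risk drilldown above. Assumes the Risk data model is accelerated and that calculated_risk_score / normalized_risk_object exist in your ES version; $dest$ is supplied by the Notable drilldown.```",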
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Unload Module Via Modprobe\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.375",
              "n": "Linux Auditd Virtual Disk File And Directory Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious discovery of virtual disk files and directories, which may indicate an attacker's attempt to locate and access virtualized storage environments. Virtual disks can contain sensitive data or critical system configurations, and unauthorized discovery attempts could signify preparatory actions for data exfiltration or further compromise. By monitoring for unusual or unauthorized searches for virtual disk files and directories, this analytic helps identify po…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html, https://github.com/peass-ng/PEASS-ng/tree/master/linPEAS",
              "mitre": [
                "T1083"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Virtual Disk File And Directory Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1083. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Virtual Disk File And Directory Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
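              "qs": "| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime sum(All_Risk.calculated_risk_score) as total_risk from datamodel=Risk.All_Risk where All_Risk.normalized_risk_object IN (\"$dest$\") by All_Risk.normalized_risk_object, source | rename source as \"Search Name\" | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` ```Hedged sketch, not shipped by upstream ESCU: an accelerated tstats variant of the Risk drilldown above. Assumes the Risk data model is accelerated and that calculated_risk_score / normalized_risk_object exist in your ES version; $dest$ is supplied by the Notable drilldown.```",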
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Virtual Disk File And Directory Discovery\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.376",
              "n": "Linux Auditd Whoami User Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious use of the whoami command, which may indicate an attacker trying to gather information about the current user account on a compromised system. The whoami command is commonly used to verify user privileges and identity, especially during initial stages of an attack to assess the level of access. By monitoring for unusual or unauthorized executions of whoami, this analytic helps in identifying potential reconnaissance activities, enabling security team…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Syscall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Syscall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html, https://github.com/peass-ng/PEASS-ng/tree/master/linPEAS",
              "mitre": [
                "T1033"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Whoami User Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Syscall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1033. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Whoami User Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Auditd Whoami User Discovery\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.377",
              "n": "Linux Deletion Of Init Daemon Script",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of init daemon scripts on a Linux machine. It leverages filesystem event logs to identify when files within the /etc/init.d/ directory are deleted. This activity is significant because init daemon scripts control the start and stop of critical services, and their deletion can indicate an attempt to impair security features or evade defenses. If confirmed malicious, this behavior could allow an attacker to disrupt essential services, execute destructive…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://www.sentinelone.com/labs/acidrain-a-modem-wiper-rains-down-on-europe/",
              "mitre": [
                "T1070.004",
                "T1485"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Deletion Of Init Daemon Script\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004, T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Deletion Of Init Daemon Script\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Deletion Of Init Daemon Script\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.378",
              "n": "Linux Deletion Of Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of services on a Linux machine. It leverages filesystem event logs to identify when service files within system directories (e.g., /etc/systemd/, /lib/systemd/, /run/systemd/) are deleted. This activity is significant because attackers may delete or modify services to disable security features or evade defenses. If confirmed malicious, this behavior could indicate an attempt to impair system functionality or execute a destructive payload, potentially l…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://www.sentinelone.com/labs/acidrain-a-modem-wiper-rains-down-on-europe/, https://unix.stackexchange.com/questions/224992/where-do-i-put-my-systemd-unit-file, https://cert.gov.ua/article/3718487",
              "mitre": [
                "T1070.004",
                "T1485"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Deletion Of Services\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004, T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Deletion Of Services\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Deletion Of Services\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.379",
              "n": "Linux Edit Cron Table Parameter",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the suspicious editing of cron jobs in Linux using the crontab command-line parameter (-e). It identifies this activity by monitoring command-line executions involving 'crontab' and the edit parameter. This behavior is significant for a SOC as cron job manipulations can indicate unauthorized persistence attempts or scheduled malicious actions. If confirmed malicious, this activity could lead to system compromise, unauthorized access, or broader network compromise.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name = crontab Processes.process = \"*crontab *\" Processes.process = \"* -e*\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `linux_edit_cron_table_parameter_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://attack.mitre.org/techniques/T1053/003/",
              "mitre": [
                "T1053.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Edit Cron Table Parameter\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Edit Cron Table Parameter\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the suspicious editing of cron jobs in Linux using the crontab command-line parameter (-e) so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.380",
              "n": "Linux File Created In Kernel Driver Directory",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of files in the Linux kernel/driver directory. It leverages filesystem data to identify new files in this critical directory. This activity is significant because the kernel/driver directory is typically reserved for kernel modules, and unauthorized file creation here can indicate a rootkit installation. If confirmed malicious, this could allow an attacker to gain high-level privileges, potentially compromising the entire system by executing code at th…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can create file in this folders for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://docs.fedoraproject.org/en-US/fedora/rawhide/system-administrators-guide/kernel-module-driver-configuration/Working_with_Kernel_Modules/, https://security.stackexchange.com/questions/175953/how-to-load-a-malicious-lkm-at-startup, https://0x00sec.org/t/kernel-rootkits-getting-your-hands-dirty/1485",
              "mitre": [
                "T1547.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux File Created In Kernel Driver Directory\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1547.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux File Created In Kernel Driver Directory\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can create file in this folders for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux File Created In Kernel Driver Directory\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.381",
              "n": "Linux File Creation In Profile Directory",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of files in the /etc/profile.d directory on Linux systems. It leverages filesystem data to identify new files in this directory, which is often used by adversaries for persistence by executing scripts upon system boot. This activity is significant as it may indicate an attempt to maintain long-term access to the compromised host. If confirmed malicious, this could allow attackers to execute arbitrary code with elevated privileges each time the system b…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can create file in profile.d folders for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://attack.mitre.org/techniques/T1546/004/, https://www.intezer.com/blog/research/kaiji-new-chinese-linux-malware-turning-to-golang/",
              "mitre": [
                "T1546.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux File Creation In Profile Directory\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux File Creation In Profile Directory\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can create file in profile.d folders for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux File Creation In Profile Directory\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.382",
              "n": "Linux Magic SysRq Key Abuse",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Linux Magic SysRq Key Abuse. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Path, Linux Auditd Cwd",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Path, Linux Auditd Cwd ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.kernel.org/doc/html/v4.10/_sources/admin-guide/sysrq.txt, https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-proc-sys-kernel, https://www.splunk.com/en_us/blog/security/threat-update-awfulshred-script-wiper.html",
              "mitre": [
                "T1059.004",
                "T1529",
                "T1489",
                "T1499"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Magic SysRq Key Abuse\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Path, Linux Auditd Cwd. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.004, T1529, T1489, T1499. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Magic SysRq Key Abuse\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Magic SysRq Key Abuse\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.383",
              "n": "Linux Medusa Rootkit",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies file creation events associated with the installation of the Medusa rootkit, a userland LD_PRELOAD-based rootkit known for deploying shared objects, loader binaries, and configuration files into specific system directories. These files typically facilitate process hiding, credential theft, and backdoor access. Monitoring for such file creation patterns enables early detection of rootkit deployment before full compromise.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Little to no false positives in most environments. Tune as needed.",
              "refs": "https://cloud.google.com/blog/topics/threat-intelligence/uncovering-unc3886-espionage-operations",
              "mitre": [
                "T1014",
                "T1589.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Medusa Rootkit\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1014, T1589.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Medusa Rootkit\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Little to no false positives in most environments. Tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Medusa Rootkit\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.384",
              "n": "Linux Persistence and Privilege Escalation Risk Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential Linux persistence and privilege escalation activities. It leverages risk scores and event counts from various Linux-related data sources, focusing on tactics associated with persistence and privilege escalation. This activity is significant for a SOC because it highlights behaviors that could allow an attacker to maintain access or gain elevated privileges on a Linux system. If confirmed malicious, this activity could enable an attacker to execute code…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present based on many factors. Tune the correlation as needed to reduce too many triggers.",
              "refs": "https://attack.mitre.org/tactics/TA0004/",
              "mitre": [
                "T1548"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Persistence and Privilege Escalation Risk Behavior\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Persistence and Privilege Escalation Risk Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present based on many factors. Tune the correlation as needed to reduce too many triggers.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Persistence and Privilege Escalation Risk Behavior\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.385",
              "n": "Linux Possible Cronjob Modification With Editor",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential unauthorized modifications to Linux cronjobs using text editors like \"nano,\" \"vi,\" or \"vim.\" It identifies this activity by monitoring command-line executions that interact with cronjob configuration paths. This behavior is significant for a SOC as it may indicate attempts at privilege escalation or establishing persistent access. If confirmed malicious, the impact could be severe, allowing attackers to execute damaging actions such as data theft, system …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE (\n            Processes.process_name IN(\"nano\",\"vim.basic\")\n            OR\n            Processes.process IN (\"*nano *\", \"*vi *\", \"*vim *\")\n        )\n        AND Processes.process IN(\"*/etc/cron*\", \"*/var/spool/cron/*\", \"*/etc/anacrontab*\")\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `linux_possible_cronjob_modification_with_editor_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://attack.mitre.org/techniques/T1053/003/",
              "mitre": [
                "T1053.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Possible Cronjob Modification With Editor\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Possible Cronjob Modification With Editor\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this commandline for automation purposes. Please update the filter macros to remove false positives.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on potential unauthorized modifications to Linux cronjobs using text editors like \"nano,\" \"vi,\" or \"vim.\" It identifies this activity by monitoring command-line executions that interact with cronjob conf so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.386",
              "n": "Linux Possible Ssh Key File Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of SSH key files in the ~/.ssh/ directory. It leverages filesystem data to identify new files in this specific path. This activity is significant because threat actors often create SSH keys to gain persistent access and escalate privileges on a compromised host. If confirmed malicious, this could allow attackers to remotely access the machine using the OpenSSH daemon service, leading to potential unauthorized control and data exfiltration.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can create file in ~/.ssh folders for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.hackingarticles.in/ssh-penetration-testing-port-22/, https://attack.mitre.org/techniques/T1098/004/",
              "mitre": [
                "T1098.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Possible Ssh Key File Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Possible Ssh Key File Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can create file in ~/.ssh folders for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Possible Ssh Key File Creation\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.387",
              "n": "Linux Sudoers Tmp File Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of the \"sudoers.tmp\" file, which occurs when editing the /etc/sudoers file using visudo or another editor on a Linux platform. This detection leverages filesystem data to identify the presence of \"sudoers.tmp\" files. Monitoring this activity is crucial as adversaries may exploit it to gain elevated privileges on a compromised host. If confirmed malicious, this activity could allow attackers to modify sudoers configurations, potentially granting them un…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://forum.ubuntuusers.de/topic/sudo-visudo-gibt-etc-sudoers-tmp/",
              "mitre": [
                "T1548.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Sudoers Tmp File Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Sudoers Tmp File Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Sudoers Tmp File Creation\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.388",
              "n": "Linux Suspicious React or Next.js Child Process",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Linux Suspicious React or Next.js Child Process. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://react.dev/blog/2025/12/03/critical-security-vulnerability-in-react-server-components, https://nextjs.org/blog/CVE-2025-66478, https://nvd.nist.gov/vuln/detail/CVE-2025-55182, https://gist.github.com/maple3142/48bc9393f45e068cf8c90ab865c0f5f3, https://www.wiz.io/blog/critical-vulnerability-in-react-cve-2025-55182",
              "mitre": [
                "T1190",
                "T1059.004"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Suspicious React or Next.js Child Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1059.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Suspicious React or Next.js Child Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Linux Suspicious React or Next.js Child Process\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.389",
              "n": "Living Off The Land Detection",
              "c": "high",
              "f": "intermediate",
              "v": "The following correlation identifies multiple risk events associated with the \"Living Off The Land\" analytic story, indicating potentially suspicious behavior. It leverages the Risk data model to aggregate and correlate events tagged under this story, focusing on systems with a high count of distinct sources. This activity is significant as it often involves the use of legitimate tools for malicious purposes, making detection challenging. If confirmed malicious, this behavior could allow attacke…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There are no known false positive for this search, but it could contain false positives as multiple detections can trigger and not have successful exploitation. Modify the static value distinct_detection_name to a higher value. It is also required to tune analytics that are also tagged to ensure volume is never too much.",
              "refs": "https://www.splunk.com/en_us/blog/security/living-off-the-land-threat-research-february-2022-release.html, https://research.splunk.com/stories/living_off_the_land/",
              "mitre": [
                "T1105",
                "T1190",
                "T1059",
                "T1133"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Living Off The Land Detection\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105, T1190, T1059, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Living Off The Land Detection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There are no known false positive for this search, but it could contain false positives as multiple detections can trigger and not have successful exploitation. Modify the static value distinct_detection_name to a higher value. It is also required to tune analytics that are also tagged to ensure volume is never too much.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Living Off The Land Detection\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.390",
              "n": "LLM Model File Creation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies LLM Model File Creation. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| tstats `security_content_summariesonly` count\n        min(_time) as firstTime\n        max(_time) as lastTime\n    from datamodel=Endpoint.Filesystem\n    where Filesystem.file_name IN (\n        \"*.gguf*\",\n        \"*ggml*\",\n        \"*Modelfile*\",\n        \"*safetensors*\"\n    )\n    by Filesystem.action Filesystem.dest Filesystem.file_access_time Filesystem.file_create_time\n       Filesystem.file_hash Filesystem.file_modify_time Filesystem.file_name Filesystem.file_path\n       Filesystem.file_acl Filesystem.file_size Filesystem.process_guid Filesystem.process_id\n       Filesystem.user Filesystem.vendor_product\n    | `drop_dm_object_name(Filesystem)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `llm_model_file_creation_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://learn.microsoft.com/en-us/sysinternals/downloads/sysmon, https://www.ibm.com/think/topics/shadow-ai, https://www.splunk.com/en_us/blog/artificial-intelligence/splunk-technology-add-on-for-ollama.html, https://blogs.cisco.com/security/detecting-exposed-llm-servers-shodan-case-study-on-ollama",
              "mitre": [
                "T1543"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"LLM Model File Creation\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"LLM Model File Creation\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on LLM Model File Creation so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.391",
              "n": "Loading Of Dynwrapx Module",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the loading of the dynwrapx.dll module, which is associated with the DynamicWrapperX ActiveX component. This detection leverages Sysmon EventCode 7 to identify processes that load or register dynwrapx.dll. This activity is significant because DynamicWrapperX can be used to call Windows API functions in scripts, making it a potential tool for malicious actions. If confirmed malicious, this could allow an attacker to execute arbitrary code, escalate privileges, or ma…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on processes that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` and `Filesystem` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited, however it is possible to filter by Processes.process_name and specific processes (ex. wscript.exe). Filter as needed. This may need modification based on EDR telemetry and how it brings in registry data. For example, removal of (Default).",
              "refs": "https://blog.f-secure.com/hunting-for-koadic-a-com-based-rootkit/, https://www.script-coding.com/dynwrapx_eng.html, https://bohops.com/2018/06/28/abusing-com-registry-structure-clsid-localserver32-inprocserver32/, https://tria.ge/210929-ap75vsddan, https://www.virustotal.com/gui/file/cb77b93150cb0f7fe65ce8a7e2a5781e727419451355a7736db84109fa215a89, https://malpedia.caad.fkie.fraunhofer.de/details/win.asyncrat",
              "mitre": [
                "T1055.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Loading Of Dynwrapx Module\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Loading Of Dynwrapx Module\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited, however it is possible to filter by Processes.process_name and specific processes (ex. wscript.exe). Filter as needed. This may need modification based on EDR telemetry and how it brings in registry data. For example, removal of (Default).\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Loading Of Dynwrapx Module\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.392",
              "n": "Local LLM Framework DNS Query",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Local LLM Framework DNS Query. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "`sysmon`\n    EventCode=22\n    QueryName IN (\n        \"*huggingface*\",\n        \"*ollama*\",\n        \"*jan.ai*\",\n        \"*gpt4all*\",\n        \"*nomic*\",\n        \"*koboldai*\",\n        \"*lmstudio*\",\n        \"*modelscope*\",\n        \"*civitai*\",\n        \"*oobabooga*\",\n        \"*replicate*\",\n        \"*anthropic*\",\n        \"*openai*\",\n        \"*openrouter*\",\n        \"*api.openrouter*\",\n        \"*aliyun*\",\n        \"*alibabacloud*\",\n        \"*dashscope.aliyuncs*\"\n    )\n    NOT Image IN (\n        \"*\\\\MsMpEng.exe\",\n        \"C:\\\\ProgramData\\\\*\",\n        \"C:\\\\Windows\\\\System32\\\\*\",\n        \"C:\\\\Windows\\\\SysWOW64\\\\*\"\n    )\n    | stats count\n        min(_time) as firstTime\n        max(_time) as lastTime\n        by src Image process_name QueryName query_count answer answer_count reply_code_id vendor_product\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `local_llm_framework_dns_query_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://learn.microsoft.com/en-us/sysinternals/downloads/sysmon, https://www.splunk.com/en_us/blog/artificial-intelligence/splunk-technology-add-on-for-ollama.html, https://blogs.cisco.com/security/detecting-exposed-llm-servers-shodan-case-study-on-ollama",
              "mitre": [
                "T1590"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Local LLM Framework DNS Query\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1590. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Local LLM Framework DNS Query\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on Local LLM Framework Query so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.393",
              "n": "MacOS AMOS Stealer - Virtual Machine Check Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies MacOS AMOS Stealer - Virtual Machine Check Activity. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "osquery",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires osquery ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://osquery.readthedocs.io/en/stable/deployment/process-auditing/, https://www.virustotal.com/gui/search/behaviour_processes%253A%2522osascript%2520-e%2520set%2522%2520AND%2520behaviour_processes%253A%2522system_profiler%2522%2520AND%2520(behaviour_processes%253A%2522VMware%2522%2520OR%2520behaviour_processes%253A%2522QEMU%2522)?type=files",
              "mitre": [
                "T1059.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MacOS AMOS Stealer - Virtual Machine Check Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: osquery. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MacOS AMOS Stealer - Virtual Machine Check Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"MacOS AMOS Stealer - Virtual Machine Check Activity\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.394",
              "n": "MacOS LOLbin",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects multiple executions of Living off the Land (LOLbin) binaries on macOS within a short period. It leverages osquery to monitor process events and identifies commands such as \"find\", \"crontab\", \"screencapture\", \"openssl\", \"curl\", \"wget\", \"killall\", and \"funzip\". This activity is significant as LOLbins are often used by attackers to perform malicious actions while evading detection. If confirmed malicious, this behavior could allow attackers to execute arbitrary code, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "osquery",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires osquery ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://osquery.readthedocs.io/en/stable/deployment/process-auditing/",
              "mitre": [
                "T1059.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MacOS LOLbin\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: osquery. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MacOS LOLbin\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"MacOS LOLbin\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.395",
              "n": "MacOS plutil",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the usage of the `plutil` command to modify plist files on macOS systems. It leverages osquery to monitor process events, specifically looking for executions of `/usr/bin/plutil`. This activity is significant because adversaries can use `plutil` to alter plist files, potentially adding malicious binaries or command-line arguments that execute upon user logon or system startup. If confirmed malicious, this could allow attackers to achieve persistence, execute arbitr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "osquery",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires osquery ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators using plutil to change plist files.",
              "refs": "https://osquery.readthedocs.io/en/stable/deployment/process-auditing/",
              "mitre": [
                "T1647"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MacOS plutil\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: osquery. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1647. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MacOS plutil\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators using plutil to change plist files.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"MacOS plutil\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.396",
              "n": "Malicious Powershell Executed As A Service",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of malicious PowerShell commands or payloads via the Windows SC.exe utility. It detects this activity by analyzing Windows System logs (EventCode 7045) and filtering for specific PowerShell-related patterns in the ImagePath field. This behavior is significant because it indicates potential abuse of the Windows Service Control Manager to run unauthorized or harmful scripts, which could lead to system compromise. If confirmed malicious, this activity…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7045",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log System 7045 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Creating a hidden powershell service is rare and could key off of those instances.",
              "refs": "https://www.fireeye.com/content/dam/fireeye-www/blog/pdfs/dosfuscation-report.pdf, http://az4n6.blogspot.com/2017/, https://www.danielbohannon.com/blog-1/2017/3/12/powershell-execution-argument-obfuscation-how-it-can-make-detection-easier",
              "mitre": [
                "T1569.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Malicious Powershell Executed As A Service\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7045. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1569.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Malicious Powershell Executed As A Service\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Creating a hidden powershell service is rare and could key off of those instances.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Malicious Powershell Executed As A Service\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.397",
              "n": "MS Scripting Process Loading Ldap Module",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of MS scripting processes (wscript.exe or cscript.exe) loading LDAP-related modules (Wldap32.dll, adsldp.dll, adsldpc.dll). It leverages Sysmon EventCode 7 to identify these specific DLL loads. This activity is significant as it may indicate an attempt to query LDAP for host information, a behavior observed in FIN7 implants. If confirmed malicious, this could allow attackers to gather detailed Active Directory information, potentially leading to furth…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA. Tune and filter known instances where renamed rundll32.exe may be used.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "automation scripting language may used by network operator to do ldap query.",
              "refs": "https://www.mandiant.com/resources/fin7-pursuing-an-enigmatic-and-evasive-global-criminal-operation, https://attack.mitre.org/groups/G0046/",
              "mitre": [
                "T1059.007"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MS Scripting Process Loading Ldap Module\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MS Scripting Process Loading Ldap Module\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: automation scripting language may used by network operator to do ldap query.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"MS Scripting Process Loading Ldap Module\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.398",
              "n": "Network Traffic to Active Directory Web Services Protocol",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies network traffic directed to the Active Directory Web Services Protocol (ADWS) on port 9389. It leverages network traffic logs, focusing on source and destination IP addresses, application names, and destination ports. This activity is significant as ADWS is used to manage Active Directory, and unauthorized access could indicate malicious intent. If confirmed malicious, an attacker could manipulate Active Directory, potentially leading to privilege escalation, un…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 3",
              "q": "| tstats count FROM datamodel=Network_Traffic\n      WHERE All_Traffic.dest_port=9389\n      BY All_Traffic.action All_Traffic.app All_Traffic.dest\n         All_Traffic.dest_port All_Traffic.direction\n         All_Traffic.dvc All_Traffic.protocol All_Traffic.protocol_version\n         All_Traffic.src All_Traffic.src_port\n         All_Traffic.transport All_Traffic.user All_Traffic.vendor_product\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `drop_dm_object_name(\"All_Traffic\")`\n    | `network_traffic_to_active_directory_web_services_protocol_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 3 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as the destination port is specific to Active Directory Web Services Protocol, however we recommend utilizing this analytic to hunt for non-standard processes querying the ADWS port. Filter by App or dest to AD servers and remove known processes querying ADWS.",
              "refs": "https://github.com/FalconForceTeam/SOAPHound",
              "mitre": [
                "T1069.001",
                "T1069.002",
                "T1087.001",
                "T1087.002",
                "T1482"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Network Traffic to Active Directory Web Services Protocol\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1069.001, T1069.002, T1087.001, T1087.002, T1482. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Network Traffic to Active Directory Web Services Protocol\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as the destination port is specific to Active Directory Web Services Protocol, however we recommend utilizing this analytic to hunt for non-standard processes querying the ADWS port. Filter by App or dest to AD servers and remove known processes querying ADWS.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on network traffic directed to the Active Directory Web Services Protocol (ADWS) on port 9389 so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.399",
              "n": "Non Chrome Process Accessing Chrome Default Dir",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a non-Chrome process accessing files in the Chrome user default folder. It leverages Windows Security Event logs, specifically event code 4663, to identify unauthorized access attempts. This activity is significant because the Chrome default folder contains sensitive user data such as login credentials, browsing history, and cookies. If confirmed malicious, this behavior could indicate an attempt to exfiltrate sensitive information, often associated with RATs, troj…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest Windows Security Event logs and track event code 4663. For 4663, enable \"Audit Object Access\" in Group Policy. Then check the two boxes listed for both \"Success\" and \"Failure.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "other browser not listed related to chrome may catch by this rule.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1555.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Non Chrome Process Accessing Chrome Default Dir\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1555.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Non Chrome Process Accessing Chrome Default Dir\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: other browser not listed related to chrome may catch by this rule.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Non Chrome Process Accessing Chrome Default Dir\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.400",
              "n": "PaperCut NG Suspicious Behavior Debug Log",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential exploitation attempts on a PaperCut NG server by analyzing its debug log data. It detects unauthorized or suspicious access attempts from public IP addresses and searches for specific URIs associated with known exploits. The detection leverages regex to parse unstructured log data, focusing on admin login activities. This activity is significant as it can indicate an active exploitation attempt on the server. If confirmed malicious, attackers could gai…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`papercutng`\n    (loginType=Admin OR userName=admin)",
              "m": "Debug logs must be enabled and shipped to Splunk in order to properly identify behavior with this analytic.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, as this is based on the admin user accessing the Papercut NG instance from a public IP address. Filter as needed.",
              "refs": "https://www.papercut.com/kb/Main/HowToCollectApplicationServerDebugLogs, https://github.com/inodee/threathunting-spl/blob/master/hunt-queries/HAFNIUM.md, https://www.cisa.gov/news-events/alerts/2023/05/11/cisa-and-fbi-release-joint-advisory-response-active-exploitation-papercut-vulnerability, https://www.papercut.com/kb/Main/PO-1216-and-PO-1219, https://www.horizon3.ai/papercut-cve-2023-27350-deep-dive-and-indicators-of-compromise/, https://www.bleepingcomputer.com/news/security/hackers-actively-exploit-critical-rce-bug-in-papercut-servers/, https://www.huntress.com/blog/critical-vulnerabilities-in-papercut-print-management-software",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PaperCut NG Suspicious Behavior Debug Log\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PaperCut NG Suspicious Behavior Debug Log\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, as this is based on the admin user accessing the Papercut NG instance from a public IP address. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on potential exploitation attempts on a PaperCut NG server by analyzing its debug log data so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.401",
              "n": "Ping Sleep Batch Command",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of ping sleep batch commands.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator may execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1497.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ping Sleep Batch Command\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1497.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ping Sleep Batch Command\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator may execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Ping Sleep Batch Command\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.402",
              "n": "Potential password in username",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where users may have mistakenly entered their passwords in the username field during authentication attempts. It detects this by analyzing failed authentication events with usernames longer than 7 characters and high Shannon entropy, followed by a successful authentication from the same source to the same destination. This activity is significant as it can indicate potential security risks, such as password exposure. If confirmed malicious, attackers c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Secure",
              "q": "| tstats `security_content_summariesonly` earliest(_time) AS starttime latest(_time) AS endtime latest(sourcetype) AS sourcetype values(Authentication.src) AS src values(Authentication.dest) AS dest count FROM datamodel=Authentication\n      WHERE nodename=Authentication.Failed_Authentication\n      BY \"Authentication.user\"\n    | `drop_dm_object_name(Authentication)`\n    | lookup ut_shannon_lookup word AS user\n    | where ut_shannon>3 AND len(user)>=8 AND mvcount(src) == 1\n    | sort count, - ut_shannon\n    | eval incorrect_cred=user\n    | eval endtime=endtime+1000\n    | map maxsearches=70 search=\"\n    | tstats `security_content_summariesonly` earliest(_time) AS starttime latest(_time) AS endtime latest(sourcetype) AS sourcetype values(Authentication.src) AS src values(Authentication.dest) AS dest count FROM datamodel=Authentication\n      WHERE nodename=Authentication.Successful_Authentication Authentication.src=\\\"$src$\\\" Authentication.dest=\\\"$dest$\\\" sourcetype IN (\\\"$sourcetype$\\\") earliest=\\\"$starttime$\\\" latest=\\\"$endtime$\\\" BY \\\"Authentication.user\\\"\n    | `drop_dm_object_name(\\\"Authentication\\\")`\n    | `potential_password_in_username_false_positive_reduction`\n    | eval incorrect_cred=\\\"$incorrect_cred$\\\"\n    | eval ut_shannon=\\\"$ut_shannon$\\\"\n    | sort count\"\n    | where user!=incorrect_cred\n    | outlier action=RM count\n    | `potential_password_in_username_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Linux Secure ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Valid usernames with high entropy or source/destination system pairs with multiple authenticating users will make it difficult to identify the real user authenticating.",
              "refs": "https://medium.com/@markmotig/search-for-passwords-accidentally-typed-into-the-username-field-975f1a389928",
              "mitre": [
                "T1078.003",
                "T1552.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Potential password in username\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Secure. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.003, T1552.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Potential password in username\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Valid usernames with high entropy or source/destination system pairs with multiple authenticating users will make it difficult to identify the real user authenticating.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on instances where users may have mistakenly entered their passwords in the username field during authentication attempts so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.403",
              "n": "PowerShell 4104 Hunting",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious PowerShell execution using Script Block Logging (EventCode 4104). It leverages specific patterns and keywords within the ScriptBlockText field to detect potentially malicious activities. This detection is significant for SOC analysts as PowerShell is commonly used by attackers for various malicious purposes, including code execution, privilege escalation, and persistence. If confirmed malicious, this activity could allow attackers to execute arbitrary…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104\n      | eval DoIt = if(match(ScriptBlockText,\"(?i)(\\$doit)\"), \"4\", 0)\n      | eval enccom=if(match(ScriptBlockText,\"[A-Za-z0-9+\\/]{44,}([A-Za-z0-9+\\/]{4}\n      | [A-Za-z0-9+\\/]{3}=\n      | [A-Za-z0-9+\\/]{2}==)\") OR match(ScriptBlockText, \"(?i)[-]e(nc*o*d*e*d*c*o*m*m*a*n*d*)*\\s+[^-]\"),4,0)\n      | eval suspcmdlet=if(match(ScriptBlockText, \"(?i)Add-Exfiltration\n      | Add-Persistence\n      | Add-RegBackdoor\n      | Add-ScrnSaveBackdoor\n      | Check-VM\n      | Do-Exfiltration\n      | Enabled-DuplicateToken\n      | Exploit-Jboss\n      | Find-Fruit\n      | Find-GPOLocation\n      | Find-TrustedDocuments\n      | Get-ApplicationHost\n      | Get-ChromeDump\n      | Get-ClipboardContents\n      | Get-FoxDump\n      | Get-GPPPassword\n      | Get-IndexedItem\n      | Get-Keystrokes\n      | LSASecret\n      | Get-PassHash\n      | Get-RegAlwaysInstallElevated\n      | Get-RegAutoLogon\n      | Get-RickAstley\n      | Get-Screenshot\n      | Get-SecurityPackages\n      | Get-ServiceFilePermission\n      | Get-ServicePermission\n      | Get-ServiceUnquoted\n      | Get-SiteListPassword\n      | Get-System\n      | Get-TimedScreenshot\n      | Get-UnattendedInstallFile\n      | Get-Unconstrained\n      | Get-VaultCredential\n      | Get-VulnAutoRun\n      | Get-VulnSchTask\n      | Gupt-Backdoor\n      | HTTP-Login\n      | Install-SSP\n      | Install-ServiceBinary\n      | Invoke-ACLScanner\n      | Invoke-ADSBackdoor\n      | Invoke-ARPScan\n      | Invoke-AllChecks\n      | Invoke-BackdoorLNK\n      | Invoke-BypassUAC\n      | Invoke-CredentialInjection\n      | Invoke-DCSync\n      | Invoke-DllInjection\n      | Invoke-DowngradeAccount\n      | Invoke-EgressCheck\n      | Invoke-Inveigh\n      | Invoke-InveighRelay\n      | Invoke-Mimikittenz\n      | Invoke-NetRipper\n      | Invoke-NinjaCopy\n      | Invoke-PSInject\n      | Invoke-Paranoia\n      | Invoke-PortScan\n      | Invoke-PoshRat\n      | Invoke-PostExfil\n      | Invoke-PowerDump\n      | Invoke-PowerShellTCP\n      | Invoke-PsExec\n      | Invoke-PsUaCme\n      | Invoke-ReflectivePEInjection\n      | Invoke-ReverseDNSLookup\n      | Invoke-RunAs\n      | Invoke-SMBScanner\n      | Invoke-SSHCommand\n      | Invoke-Service\n      | Invoke-Shellcode\n      | Invoke-Tater\n      | Invoke-ThunderStruck\n      | Invoke-Token\n      | Invoke-UserHunter\n      | Invoke-VoiceTroll\n      | Invoke-WScriptBypassUAC\n      | Invoke-WinEnum\n      | MailRaider\n      | New-HoneyHash\n      | Out-Minidump\n      | Port-Scan\n      | PowerBreach\n      | PowerUp\n      | PowerView\n      | Remove-Update\n      | Set-MacAttribute\n      | Set-Wallpaper\n      | Show-TargetScreen\n      | Start-CaptureServer\n      | VolumeShadowCopyTools\n      | NEEEEWWW\n      | (Computer\n      | User)Property\n      | CachedRDPConnection\n      | get-net\\S+\n      | invoke-\\S+hunter\n      | Install-Service\n      | get-\\S+(credent\n      | password)\n      | remoteps\n      | Kerberos.*(policy\n      | ticket)\n      | netfirewall\n      | Uninstall-Windows\n      | Verb\\s+Runas\n      | AmsiBypass\n      | nishang\n      | Invoke-Interceptor\n      | EXEonRemote\n      | NetworkRelay\n      | PowerShelludp\n      | PowerShellIcmp\n      | CreateShortcut\n      | copy-vss\n      | invoke-dll\n      | invoke-mass\n      | out-shortcut\n      | Invoke-ShellCommand\"),1,0)\n      | eval base64 = if(match(lower(ScriptBlockText),\"frombase64\"), \"4\", 0)\n      | eval 
empire=if(match(lower(ScriptBlockText),\"system.net.webclient\") AND match(lower(ScriptBlockText), \"frombase64string\") ,5,0)\n      | eval mimikatz=if(match(lower(ScriptBlockText),\"mimikatz\") OR match(lower(ScriptBlockText), \"-dumpcr\") OR match(lower(ScriptBlockText), \"SEKURLSA::Pth\") OR match(lower(ScriptBlockText), \"kerberos::ptt\") OR match(lower(ScriptBlockText), \"kerberos::golden\") ,5,0)\n      | eval iex=if(match(ScriptBlockText, \"(?i)iex\n      | invoke-expression\"),2,0)\n      | eval webclient=if(match(lower(ScriptBlockText),\"http\") OR match(lower(ScriptBlockText),\"web(client\n      | request)\") OR match(lower(ScriptBlockText),\"socket\") OR match(lower(ScriptBlockText),\"download(file\n      | string)\") OR match(lower(ScriptBlockText),\"bitstransfer\") OR match(lower(ScriptBlockText),\"internetexplorer.application\") OR match(lower(ScriptBlockText),\"xmlhttp\"),5,0)\n      | eval get = if(match(lower(ScriptBlockText),\"get-\"), \"1\", 0)\n      | eval rundll32 = if(match(lower(ScriptBlockText),\"rundll32\"), \"4\", 0)\n      | eval suspkeywrd=if(match(ScriptBlockText, \"(?i)(bitstransfer\n      | mimik\n      | metasp\n      | AssemblyBuilderAccess\n      | Reflection\\.Assembly\n      | shellcode\n      | injection\n      | cnvert\n      | shell\\.application\n      | start-process\n      | Rc4ByteStream\n      | System\\.Security\\.Cryptography\n      | lsass\\.exe\n      | localadmin\n      | LastLoggedOn\n      | hijack\n      | BackupPrivilege\n      | ngrok\n      | comsvcs\n      | backdoor\n      | brute.?force\n      | Port.?Scan\n      | Exfiltration\n      | exploit\n      | DisableRealtimeMonitoring\n      | beacon)\"),1,0)\n      | eval syswow64 = if(match(lower(ScriptBlockText),\"syswow64\"), \"3\", 0)\n      | eval httplocal = if(match(lower(ScriptBlockText),\"http://127.0.0.1\"), \"4\", 0)\n      | eval reflection = if(match(lower(ScriptBlockText),\"reflection\"), \"1\", 0)\n      | eval invokewmi=if(match(lower(ScriptBlockText), \"(?i)(wmiobject\n      | WMIMethod\n      | RemoteWMI\n      | PowerShellWmi\n      | wmicommand)\"),5,0)\n      | eval downgrade=if(match(ScriptBlockText, \"(?i)([-]ve*r*s*i*o*n*\\s+2)\") OR match(lower(ScriptBlockText),\"powershell -version\"),3,0)\n      | eval compressed=if(match(ScriptBlockText, \"(?i)GZipStream\n      | ::Decompress\n      | IO.Compression\n      | write-zip\n      | (expand\n      | compress)-Archive\"),5,0)\n      | eval invokecmd = if(match(lower(ScriptBlockText),\"invoke-command\"), \"4\", 0)\n      | addtotals fieldname=Score DoIt, enccom, suspcmdlet, suspkeywrd, compressed, downgrade, mimikatz, iex, empire, rundll32, webclient, syswow64, httplocal, reflection, invokewmi, invokecmd, base64, get\n      | stats values(Score)\n        BY UserID, Computer, DoIt,\n           enccom, compressed, downgrade,\n           iex, mimikatz, rundll32,\n           empire, webclient, syswow64,\n           httplocal, reflection, invokewmi,\n           invokecmd, base64, get,\n           suspcmdlet, suspkeywrd\n      | rename Computer as dest, UserID as user\n      | `powershell_4104_hunting_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives. May filter as needed.",
              "refs": "https://github.com/inodee/threathunting-spl/blob/master/hunt-queries/powershell_qualifiers.md, https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba, https://github.com/marcurdy/dfir-toolset/blob/master/Powershell%20Blueteam.txt, https://devblogs.microsoft.com/powershell/powershell-the-blue-team/, https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_logging?view=powershell-5.1, https://www.mandiant.com/resources/greater-visibilityt, https://hurricanelabs.com/splunk-tutorials/how-to-use-powershell-transcription-logs-in-splunk/, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html, https://adlumin.com/post/powerdrop-a-new-insidious-powershell-script-for-command-and-control-attacks-targets-u-s-aerospace-defense-industry/",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PowerShell 4104 Hunting\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PowerShell 4104 Hunting\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives. May filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on suspicious PowerShell execution using Script Block Logging (EventCode 4104) so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.404",
              "n": "Powershell COM Hijacking InprocServer32 Modification",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to modify or add a Component Object Model (COM) entry to the InProcServer32 path within the registry using PowerShell. It leverages PowerShell ScriptBlock Logging (EventCode 4104) to identify suspicious script blocks that target the InProcServer32 registry path. This activity is significant because modifying COM objects can be used for persistence or privilege escalation by attackers. If confirmed malicious, this could allow an attacker to execute arbitrar…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The following analytic requires PowerShell operational logs to be imported. Modify the PowerShell macro as needed to match the sourcetype or add index. This analytic is specific to 4104, or PowerShell Script Block Logging.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present if any scripts are adding to inprocserver32. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1546/015/, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1546.015/T1546.015.md",
              "mitre": [
                "T1059.001",
                "T1546.015"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell COM Hijacking InprocServer32 Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1546.015. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell COM Hijacking InprocServer32 Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present if any scripts are adding to inprocserver32. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
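To make the deployment concrete, the following is a rough sketch of the logic described above (PowerShell script blocks touching the InProcServer32 registry path), not the authoritative ESCU search, which you can view in Content Management. The `powershell` macro and the `ScriptBlockText`, `Computer`, and `UserID` field names assume the standard PowerShell Operational 4104 sourcetype:

```spl
`powershell` EventCode=4104 ScriptBlockText="*CLSID*" ScriptBlockText="*InProcServer32*"
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID, ScriptBlockText
| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`
```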
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Powershell COM Hijacking InprocServer32 Modification\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.405",
              "n": "Powershell Disable Security Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Powershell Disable Security Monitoring. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1562.001/T1562.001.md#atomic-test-15---tamper-with-windows-defender-atp-powershell, https://learn.microsoft.com/en-us/powershell/module/defender/set-mppreference?view=windowsserver2022-ps",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Disable Security Monitoring\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Disable Security Monitoring\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
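As a concrete reference, here is a hedged sketch of the behavior this detection targets (PowerShell tampering with Defender via Set-MpPreference), expressed as a `tstats` search over the CIM Endpoint.Processes data model to match the process-level data sources listed above. It approximates, and does not replace, the packaged ESCU logic:

```spl
| tstats `summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name IN ("powershell.exe", "pwsh.exe")
    AND Processes.process="*Set-MpPreference*" AND Processes.process="*Disable*"
  by Processes.dest Processes.user Processes.process
| `drop_dm_object_name(Processes)`
| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`
```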
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Powershell Disable Security Monitoring\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.406",
              "n": "PowerShell Domain Enumeration",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of PowerShell commands used for domain enumeration, such as `get-netdomaintrust` and `get-adgroupmember`. It leverages PowerShell Script Block Logging (EventCode=4104) to capture and analyze the full command sent to PowerShell. This activity is significant as it often indicates reconnaissance efforts by an attacker to map out the domain structure and identify key users and groups. If confirmed malicious, this behavior could lead to further targeted at…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible there will be false positives, filter as needed.",
              "refs": "https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PowerShell Domain Enumeration\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PowerShell Domain Enumeration\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible there will be false positives, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
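For reference, a minimal sketch of the enumeration pattern this detection looks for. The cmdlet list below is illustrative; the analytic description names `get-netdomaintrust` and `get-adgroupmember`, and the shipped ESCU search matches a longer list. The `powershell` macro is assumed to resolve to your 4104 sourcetype:

```spl
`powershell` EventCode=4104 ScriptBlockText IN ("*get-netdomaintrust*", "*get-adgroupmember*", "*get-netuser*", "*get-netcomputer*")
| stats count values(ScriptBlockText) as command min(_time) as firstTime max(_time) as lastTime by Computer, UserID
| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`
```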
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"PowerShell Domain Enumeration\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.407",
              "n": "PowerShell Enable PowerShell Remoting",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the Enable-PSRemoting cmdlet, which allows PowerShell remoting on a local or remote computer. This detection leverages PowerShell Script Block Logging (EventCode 4104) to identify when this cmdlet is executed. Monitoring this activity is crucial as it can indicate an attacker enabling remote command execution capabilities on a compromised system. If confirmed malicious, this activity could allow an attacker to take control of the system remotely, execute…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Note that false positives may occur due to the use of the Enable-PSRemoting cmdlet by legitimate users, such as system administrators. It is recommended to apply appropriate filters as needed to minimize the number of false positives.",
              "refs": "https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/enable-psremoting?view=powershell-7.3",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PowerShell Enable PowerShell Remoting\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PowerShell Enable PowerShell Remoting\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Note that false positives may occur due to the use of the Enable-PSRemoting cmdlet by legitimate users, such as system administrators. It is recommended to apply appropriate filters as needed to minimize the number of false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
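A minimal sketch of the underlying pattern: the Enable-PSRemoting cmdlet appearing in script-block text. It assumes the `powershell` macro points at your 4104 data; the packaged ESCU search in Content Management remains the authoritative version:

```spl
`powershell` EventCode=4104 ScriptBlockText="*Enable-PSRemoting*"
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID, ScriptBlockText
| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`
```

Expect hits from legitimate administrators; tune per the guidance above rather than disabling outright.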
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"PowerShell Enable PowerShell Remoting\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.408",
              "n": "Powershell Fileless Process Injection via GetProcAddress",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `GetProcAddress` in PowerShell script blocks, leveraging PowerShell Script Block Logging (EventCode=4104). This method captures the full command sent to PowerShell, which is then logged in Windows event logs. The presence of `GetProcAddress` is unusual for typical PowerShell scripts and often indicates malicious activity, as many attack toolkits use it to achieve code execution. If confirmed malicious, this activity could allow an attacker to execute arb…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives. Filter as needed.",
              "refs": "https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html",
              "mitre": [
                "T1055",
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Fileless Process Injection via GetProcAddress\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055, T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Fileless Process Injection via GetProcAddress\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
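For orientation, a hedged approximation of the detection logic: GetProcAddress referenced inside a script block, which ordinary PowerShell scripts rarely do. The macro and field names assume the standard 4104 sourcetype:

```spl
`powershell` EventCode=4104 ScriptBlockText="*GetProcAddress*"
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID, ScriptBlockText
| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`
```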
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Powershell Fileless Process Injection via GetProcAddress\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.409",
              "n": "Powershell Fileless Script Contains Base64 Encoded Content",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of PowerShell scripts containing Base64 encoded content, specifically identifying the use of `FromBase64String`. It leverages PowerShell Script Block Logging (EventCode=4104) to capture and analyze the full command sent to PowerShell. This activity is significant as Base64 encoding is often used by attackers to obfuscate malicious payloads, making it harder to detect. If confirmed malicious, this could lead to code execution, allowing attackers to run…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited. Filter as needed.",
              "refs": "https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/, https://thedfirreport.com/2023/05/22/icedid-macro-ends-in-nokoyawa-ransomware/",
              "mitre": [
                "T1027",
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Fileless Script Contains Base64 Encoded Content\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027, T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Fileless Script Contains Base64 Encoded Content\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
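A rough sketch of the pattern described above, `FromBase64String` appearing in script-block text. This approximates the packaged ESCU logic; adjust the assumed `powershell` macro (or swap in explicit index and sourcetype) to fit your deployment:

```spl
`powershell` EventCode=4104 ScriptBlockText="*FromBase64String*"
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID, ScriptBlockText
| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`
```

Base64 decoding also has legitimate uses (installers, signed vendor scripts), so route hits through the RBA tuning steps above.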
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Powershell Fileless Script Contains Base64 Encoded Content\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.410",
              "n": "PowerShell Invoke CIMMethod CIMSession",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a New-CIMSession cmdlet followed by the use of the Invoke-CIMMethod cmdlet within PowerShell. It leverages PowerShell Script Block Logging to identify these specific cmdlets in the ScriptBlockText field. This activity is significant because it mirrors the behavior of the Invoke-WMIMethod cmdlet, often used for remote code execution via NTLMv2 pass-the-hash authentication. If confirmed malicious, this could allow an attacker to execute commands remot…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on third-party applications or administrators using CIM. It is recommended to apply appropriate filters as needed to minimize the number of false positives.",
              "refs": "https://learn.microsoft.com/en-us/powershell/module/cimcmdlets/invoke-cimmethod?view=powershell-7.3",
              "mitre": [
                "T1047"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PowerShell Invoke CIMMethod CIMSession\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PowerShell Invoke CIMMethod CIMSession\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on third-party applications or administrators using CIM. It is recommended to apply appropriate filters as needed to minimize the number of false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
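To ground the description, a hedged sketch of the paired-cmdlet pattern this detection targets: New-CimSession followed by Invoke-CimMethod in the same script block. It approximates the shipped ESCU search and assumes the usual 4104 field names:

```spl
`powershell` EventCode=4104 ScriptBlockText="*New-CimSession*" ScriptBlockText="*Invoke-CimMethod*"
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID
| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`
```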
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for the detection that watches PowerShell opening a remote management session and running commands through it, so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.411",
              "n": "Powershell Load Module in Meterpreter",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of suspicious PowerShell commands associated with Meterpreter modules, such as \"MSF.Powershell\" and \"MSF.Powershell.Meterpreter\". It leverages PowerShell Script Block Logging (EventCode=4104) to capture and analyze the full command sent to PowerShell. This activity is significant as it indicates potential post-exploitation actions, including credential dumping and persistence mechanisms. If confirmed malicious, an attacker could gain extensive control…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_id$\", \"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be very limited as this is strict to MetaSploit behavior.",
              "refs": "https://github.com/OJ/metasploit-payloads/blob/master/powershell/MSF.Powershell/Scripts.cs",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Load Module in Meterpreter\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Load Module in Meterpreter\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be very limited as this is strict to MetaSploit behavior.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Powershell Load Module in Meterpreter\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.412",
              "n": "PowerShell Loading DotNET into Memory via Reflection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of PowerShell scripts to load .NET assemblies into memory via reflection, a technique often used in malicious activities such as those by Empire and Cobalt Strike. It leverages PowerShell Script Block Logging (EventCode=4104) to capture and analyze the full command executed. This behavior is significant as it can indicate advanced attack techniques aiming to execute code in memory, bypassing traditional defenses. If confirmed malicious, this activity could …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as day to day scripts do not use this method.",
              "refs": "https://learn.microsoft.com/en-us/dotnet/api/system.reflection.assembly?view=net-5.0, https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PowerShell Loading DotNET into Memory via Reflection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PowerShell Loading DotNET into Memory via Reflection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as day to day scripts do not use this method.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"PowerShell Loading DotNET into Memory via Reflection\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.413",
              "n": "Powershell Processing Stream Of Data",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious PowerShell script execution involving compressed stream data processing, identified via EventCode 4104. It leverages PowerShell Script Block Logging to flag scripts using `IO.Compression`, `IO.StreamReader`, or decompression methods. This activity is significant as it often indicates obfuscated PowerShell or embedded .NET/binary execution, which are common tactics for evading detection. If confirmed malicious, this behavior could allow attackers to execu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "powershell may used this function to process compressed data.",
              "refs": "https://medium.com/@ahmedjouini99/deobfuscating-emotets-powershell-payload-e39fb116f7b9, https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba, https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html, https://thedfirreport.com/2023/05/22/icedid-macro-ends-in-nokoyawa-ransomware/",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Processing Stream Of Data\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Processing Stream Of Data\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: powershell may used this function to process compressed data.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Powershell Processing Stream Of Data\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.414",
              "n": "Powershell Remote Services Add TrustedHost",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a PowerShell script that modifies the 'TrustedHosts' configuration via EventCode 4104. It leverages PowerShell Script Block Logging to identify commands targeting WSMan settings, specifically those altering or concatenating trusted hosts. This activity is significant as it can indicate attempts to manipulate remote connection settings, potentially allowing unauthorized remote access. If confirmed malicious, this could enable attackers to establish …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this analytic, you will need to enable PowerShell Script Block Logging on some or all endpoints. Additional setup here https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "user and network administrator may used this function to add trusted host.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.darkgate",
              "mitre": [
                "T1021.006"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Remote Services Add TrustedHost\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Remote Services Add TrustedHost\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: user and network administrator may used this function to add trusted host.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Powershell Remote Services Add TrustedHost\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.415",
              "n": "Powershell Using memory As Backing Store",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious PowerShell script execution using memory streams as a backing store, identified via EventCode 4104. It leverages PowerShell Script Block Logging to capture scripts that create new objects with memory streams, often used to decompress and execute payloads in memory. This activity is significant as it indicates potential in-memory execution of malicious code, bypassing traditional file-based detection. If confirmed malicious, this technique could allow att…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "powershell may used this function to store out object into memory.",
              "refs": "https://web.archive.org/web/20201112031711/https://www.carbonblack.com/blog/decoding-malicious-powershell-streams/, https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/, https://thedfirreport.com/2023/05/22/icedid-macro-ends-in-nokoyawa-ransomware/",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Using memory As Backing Store\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Using memory As Backing Store\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: powershell may used this function to store out object into memory.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Powershell Using memory As Backing Store\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.416",
              "n": "Powershell Windows Defender Exclusion Commands",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of PowerShell commands to add or set Windows Defender exclusions. It leverages EventCode 4104 to identify suspicious `Add-MpPreference` or `Set-MpPreference` commands with exclusion parameters. This activity is significant because adversaries often use it to bypass Windows Defender, allowing malicious code to execute without detection. If confirmed malicious, this behavior could enable attackers to evade antivirus defenses, maintain persistence, and execute…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_id$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "admin or user may choose to use this windows features.",
              "refs": "https://tccontre.blogspot.com/2020/01/remcos-rat-evading-windows-defender-av.html, https://app.any.run/tasks/cf1245de-06a7-4366-8209-8e3006f2bfe5/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Powershell Windows Defender Exclusion Commands\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Powershell Windows Defender Exclusion Commands\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: admin or user may choose to use this windows features.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Powershell Windows Defender Exclusion Commands\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.417",
              "n": "Process Creating LNK file in Suspicious Location",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Process Creating LNK file in Suspicious Location. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file path entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1566/001/, https://www.trendmicro.com/en_us/research/17/e/rising-trend-attackers-using-lnk-files-download-malware.html, https://twitter.com/pr0xylife/status/1590394227758104576",
              "mitre": [
                "T1566.002"
              ],
              "dtype": "file_path",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Process Creating LNK file in Suspicious Location\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file path entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1566.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Process Creating LNK file in Suspicious Location\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
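\n\nIllustrative Raw-Event Search\n\nBecause the detection logic itself ships inside the ESCU Correlation Search, the sketch below only approximates it against raw Sysmon file-creation events (EventCode 11) so you can preview candidate matches before enabling the rule. The index, sourcetype, and suspicious-path list are illustrative assumptions; rename them to match your environment and treat the definition in Content Management as authoritative.\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog EventCode=11 TargetFilename=\"*.lnk\"\n(TargetFilename=\"*\\\\Temp\\\\*\" OR TargetFilename=\"*\\\\Users\\\\Public\\\\*\" OR TargetFilename=\"*\\\\Startup\\\\*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, TargetFilename\n| sort -count\n```",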
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Process Creating LNK file in Suspicious Location\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.418",
              "n": "Processes Tapping Keyboard Events",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects processes on macOS systems that are tapping keyboard events, potentially monitoring all keystrokes made by a user. It leverages data from osquery results within the Alerts data model, focusing on specific process names and command lines. This activity is significant as it is a common technique used by Remote Access Trojans (RATs) to log keystrokes, posing a serious security risk. If confirmed malicious, this could lead to unauthorized access to sensitive informatio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "osquery",
              "q": "| from datamodel Alerts.Alerts\n    | search app=osquery:results name=pack_osx-attacks_Keyboard_Event_Taps\n    | rename columns.cmdline as cmd, columns.name as process_name, columns.pid as process_id\n    | dedup host,process_name\n    | table host,process_name, cmd, process_id\n    | `processes_tapping_keyboard_events_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires osquery ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There might be some false positives as keyboard event taps are used by processes like Siri and Zoom video chat, for some good examples of processes to exclude please see [this](https://github.com/facebook/osquery/pull/5345#issuecomment-454639161) comment.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "system",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Processes Tapping Keyboard Events\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: osquery. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Processes Tapping Keyboard Events\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There might be some false positives as keyboard event taps are used by processes like Siri and Zoom video chat, for some good examples of processes to exclude please see [this](https://github.com/facebook/osquery/pull/5345#issuecomment-454639161) comment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
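\n\nIllustrative Tuning Filter\n\nThe `processes_tapping_keyboard_events_filter` macro at the end of the detection SPL is the supported place to exclude known-good software. As a sketch only, using the process names called out under known false positives (drawn from the linked osquery discussion; extend the list for your fleet), the macro body could look like:\n\n```spl\n| search NOT process_name IN (\"Siri\", \"zoom.us\")\n```",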
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on processes on macOS systems that are tapping keyboard events, potentially monitoring all keystrokes made by a user so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.419",
              "n": "Recon Using WMI Class",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious PowerShell activity via EventCode 4104, where WMI performs event queries to gather information on running processes or services. This detection leverages PowerShell Script Block Logging to identify specific WMI queries targeting system information classes like Win32_Bios and Win32_OperatingSystem. This activity is significant as it often indicates reconnaissance efforts by an adversary to profile the compromised machine. If confirmed malicious, the attac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network administrator may used this command for checking purposes",
              "refs": "https://news.sophos.com/en-us/2020/05/12/maze-ransomware-1-year-counting/, https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/, https://www.splunk.com/en_us/blog/security/hunting-for-malicious-powershell-using-script-block-logging.html, https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/, https://blogs.vmware.com/security/2022/10/lockbit-3-0-also-known-as-lockbit-black.html",
              "mitre": [
                "T1592",
                "T1059.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Recon Using WMI Class\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1592, T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Recon Using WMI Class\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network administrator may used this command for checking purposes\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
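\n\nIllustrative Raw-Event Search\n\nAs noted above, the SPL shown for this use case is the risk drilldown, not the detection itself. The sketch below approximates the underlying logic against raw PowerShell Script Block Logging events (EventCode 4104) so you can preview matches; the index and sourcetype are assumptions for illustration, and the ESCU definition in Content Management remains authoritative.\n\n```spl\nindex=windows sourcetype=XmlWinEventLog EventCode=4104 ScriptBlockText=\"*SELECT*\"\n(ScriptBlockText=\"*Win32_Bios*\" OR ScriptBlockText=\"*Win32_OperatingSystem*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID\n| sort -count\n```",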
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Recon Using WMI Class\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.420",
              "n": "Remote System Discovery with Adsisearcher",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the `[Adsisearcher]` type accelerator in PowerShell scripts to query Active Directory for domain computers. It leverages PowerShell Script Block Logging (EventCode=4104) to identify specific script blocks containing `adsisearcher` and `objectcategory=computer` with methods like `findAll()` or `findOne()`. This activity is significant as it may indicate an attempt by adversaries or Red Teams to perform Active Directory discovery and gain situational aware…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use Adsisearcher for troubleshooting.",
              "refs": "https://attack.mitre.org/techniques/T1018/, https://devblogs.microsoft.com/scripting/use-the-powershell-adsisearcher-type-accelerator-to-search-active-directory/",
              "mitre": [
                "T1018"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Remote System Discovery with Adsisearcher\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Remote System Discovery with Adsisearcher\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use Adsisearcher for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
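\n\nIllustrative Raw-Event Search\n\nThe SPL shown above is the risk drilldown; the actual logic runs inside the ESCU Correlation Search. As an approximation only, assuming PowerShell Script Block Logging events land in index=windows with sourcetype=XmlWinEventLog (rename to match your environment), the underlying behavior can be previewed with:\n\n```spl\nindex=windows sourcetype=XmlWinEventLog EventCode=4104\nScriptBlockText=\"*[adsisearcher]*\" ScriptBlockText=\"*objectcategory=computer*\"\n(ScriptBlockText=\"*findall()*\" OR ScriptBlockText=\"*findone()*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID\n```",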
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Remote System Discovery with Adsisearcher\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.421",
              "n": "Rubeus Kerberos Ticket Exports Through Winlogon Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a process accessing the winlogon.exe system process, indicative of the Rubeus tool attempting to export Kerberos tickets from memory. This detection leverages Sysmon EventCode 10 logs, focusing on processes obtaining a handle to winlogon.exe with specific access rights. This activity is significant as it often precedes pass-the-ticket attacks, where adversaries use stolen Kerberos tickets to move laterally within an environment. If confirmed malicious, this could a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This search needs Sysmon Logs and a sysmon configuration, which includes EventCode 10. This search uses an input macro named `sysmon`. We strongly recommend that you specify your environment-specific configurations (index, source, sourcetype, etc.) for Windows Sysmon logs. Replace the macro definition with configurations for your Splunk environment.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may obtain a handle for winlogon.exe. Filter as needed",
              "refs": "https://github.com/GhostPack/Rubeus, https://web.archive.org/web/20210725005734/http://www.harmj0y.net/blog/redteaming/from-kekeo-to-rubeus/, https://attack.mitre.org/techniques/T1550/003/",
              "mitre": [
                "T1550.003"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Rubeus Kerberos Ticket Exports Through Winlogon Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1550.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Rubeus Kerberos Ticket Exports Through Winlogon Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may obtain a handle for winlogon.exe. Filter as needed\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
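\n\nIllustrative Raw-Event Search\n\nThe SPL shown above is the risk drilldown; per this use case's summary, the detection looks for processes obtaining a handle to winlogon.exe with specific access rights in Sysmon EventCode 10 data. The sketch below approximates that logic; the access mask shown is an assumption drawn from the Rubeus references, and the `sysmon` input macro described under Implementation must already be defined for your environment. The ESCU definition in Content Management remains authoritative.\n\n```spl\n`sysmon` EventCode=10 TargetImage=\"*\\\\winlogon.exe\" GrantedAccess=0x1f3fff\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, SourceImage, TargetImage, GrantedAccess\n```",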
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Rubeus Kerberos Ticket Exports Through Winlogon Access\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.422",
              "n": "ServicePrincipalNames Discovery with PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `powershell.exe` to query the domain for Service Principal Names (SPNs) using Script Block Logging EventCode 4104. It identifies the use of the KerberosRequestorSecurityToken class within the script block, which is equivalent to using setspn.exe. This activity is significant as it often precedes kerberoasting or silver ticket attacks, which can lead to credential theft. If confirmed malicious, attackers could leverage this information to escalate privile…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_id$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited, however filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/ad/service-principal-names, https://learn.microsoft.com/en-us/dotnet/api/system.identitymodel.tokens.kerberosrequestorsecuritytoken?view=netframework-4.8, https://www.ired.team/offensive-security-experiments/active-directory-kerberos-abuse/t1208-kerberoasting, https://strontic.github.io/xcyclopedia/library/setspn.exe-5C184D581524245DAD7A0A02B51FD2C2.html, https://attack.mitre.org/techniques/T1558/003/, https://social.technet.microsoft.com/wiki/contents/articles/717.service-principal-names-spn-setspn-syntax.aspx, https://web.archive.org/web/20220212163642/https://www.harmj0y.net/blog/powershell/kerberoasting-without-mimikatz/, https://blog.zsec.uk/paving-2-da-wholeset/, https://msitpros.com/?p=3113, https://adsecurity.org/?p=3466, https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba., https://blog.palantir.com/tampering-with-windows-event-tracing-background-offense-and-defense-4be7ac62ac63, https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/59c1814829f18782e24f1fe2/1505853768977/Windows+PowerShell+Logging+Cheat+Sheet+ver+Sept+2017+v2.1.pdf, https://www.crowdstrike.com/blog/investigating-powershell-command-and-script-logging/",
              "mitre": [
                "T1558.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ServicePrincipalNames Discovery with PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ServicePrincipalNames Discovery with PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited, however filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
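\n\nIllustrative Raw-Event Search\n\nThe SPL shown above is the risk drilldown; per this use case's summary, the underlying detection keys on the KerberosRequestorSecurityToken class appearing in PowerShell Script Block Logging. As a sketch only, assuming 4104 events in index=windows with sourcetype=XmlWinEventLog (rename to match your environment; the ESCU definition in Content Management remains authoritative):\n\n```spl\nindex=windows sourcetype=XmlWinEventLog EventCode=4104\nScriptBlockText=\"*KerberosRequestorSecurityToken*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID\n```",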
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"ServicePrincipalNames Discovery with PowerShell\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.423",
              "n": "Shai-Hulud 2 Exfiltration Artifact Files",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Shai-Hulud 2 Exfiltration Artifact Files. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11, Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11, Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.wiz.io/blog/shai-hulud-2-0-ongoing-supply-chain-attack, https://securelist.com/shai-hulud-worm-infects-500-npm-packages-in-a-supply-chain-attack/117547/",
              "mitre": [
                "T1074.001",
                "T1552.001",
                "T1195.002"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Shai-Hulud 2 Exfiltration Artifact Files\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11, Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1074.001, T1552.001, T1195.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Shai-Hulud 2 Exfiltration Artifact Files\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
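\n\nIllustrative Raw-Event Search\n\nThe SPL shown above is the risk drilldown. To preview candidate matches in raw Sysmon file-creation events (EventCode 11), the sketch below searches for artifact file names reported in the linked Wiz and Securelist write-ups; both the file-name list and the index/sourcetype scoping are illustrative assumptions, and the ESCU definition in Content Management remains authoritative.\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog EventCode=11\n(TargetFilename=\"*cloud.json\" OR TargetFilename=\"*contents.json\" OR TargetFilename=\"*environment.json\" OR TargetFilename=\"*truffleSecrets.json\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, TargetFilename\n```",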
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Shai-Hulud 2 Exfiltration Artifact Files\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.424",
              "n": "Shai-Hulud Workflow File Creation or Modification",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Shai-Hulud Workflow File Creation or Modification. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11, Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file path entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11, Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.wiz.io/blog/shai-hulud-2-0-ongoing-supply-chain-attack, https://securelist.com/shai-hulud-worm-infects-500-npm-packages-in-a-supply-chain-attack/117547/, https://github.com/SigmaHQ/sigma/pull/5658/files, https://www.cisa.gov/news-events/alerts/2025/09/23/widespread-supply-chain-compromise-impacting-npm-ecosystem",
              "mitre": [
                "T1574.006",
                "T1554",
                "T1195"
              ],
              "dtype": "file_path",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Shai-Hulud Workflow File Creation or Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file path entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11, Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.006, T1554, T1195. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Shai-Hulud Workflow File Creation or Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Shai-Hulud Workflow File Creation or Modification\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
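The validation searches in the `md` field above confirm ingestion only generically; a more targeted spot check for this use case is to hunt for workflow-file writes directly. A minimal sketch, assuming Sysmon EventID 11 (and Sysmon for Linux EventID 11) are CIM-normalized into the Endpoint.Filesystem data model with acceleration enabled; the `.github/workflows` path filter is illustrative, not the exact ESCU detection logic, and path separators differ between the Windows and Linux feeds:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Filesystem
  where Filesystem.file_path IN ("*/.github/workflows/*", "*\\.github\\workflows\\*")
  by Filesystem.dest Filesystem.file_path Filesystem.file_name
| `drop_dm_object_name(Filesystem)`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```

Non-empty results identify hosts writing CI workflow files outside expected developer tooling; treat each hit as a candidate for the per-entity review described in the tuning notes above.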
            {
              "i": "10.7.425",
              "n": "Suspicious Copy on System32",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Suspicious Copy on System32. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Copying files from System directories can happen for multiple admin reasons, allbeit rare without approval. Apply additional filters where needed.",
              "refs": "https://www.hybrid-analysis.com/sample/8da5b75b6380a41eee3a399c43dfe0d99eeefaa1fd21027a07b1ecaa4cd96fdd?environmentId=120, https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1036.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Copy on System32\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Copy on System32\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Copying files from System directories can happen for multiple admin reasons, allbeit rare without approval. Apply additional filters where needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Suspicious Copy on System32\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
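Because the `q` field holds the generic RBA drilldown, the underlying detection shape is easier to see as a data-model search. A minimal sketch, assuming Endpoint.Processes is populated from the Sysmon EventID 1 / Security 4688 / CrowdStrike ProcessRollup2 feeds listed above; the command-line filters are illustrative approximations, not the exact ESCU logic:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process IN ("*copy *\\Windows\\System32\\*", "*xcopy *\\Windows\\System32\\*")
  by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process
| `drop_dm_object_name(Processes)`
```

Reviewing these raw results before enabling the correlation search helps size the allowlist of approved administrative copy activity mentioned in the tuning notes.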
            {
              "i": "10.7.426",
              "n": "UAC Bypass MMC Load Unsigned Dll",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the loading of an unsigned DLL by the MMC.exe application, which is indicative of a potential UAC bypass or privilege escalation attempt. It leverages Sysmon EventCode 7 to identify instances where MMC.exe loads a non-Microsoft, unsigned DLL. This activity is significant because attackers often use this technique to modify CLSID registry entries, causing MMC.exe to load malicious DLLs, thereby bypassing User Account Control (UAC) and gaining elevated privileges. If…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and imageloaded executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. dll.",
              "refs": "https://offsec.almond.consulting/UAC-bypass-dotnet.html",
              "mitre": [
                "T1218.014",
                "T1548.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"UAC Bypass MMC Load Unsigned Dll\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.014, T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"UAC Bypass MMC Load Unsigned Dll\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. dll.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"UAC Bypass MMC Load Unsigned Dll\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
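For hands-on verification that the required telemetry supports this detection, a raw Sysmon search can surface mmc.exe loading unsigned modules. A minimal sketch, assuming the `sysmon` macro resolves to your Sysmon index/sourcetype and that EventID 7 (ImageLoad) logging with signature checks is enabled in your Sysmon configuration:

```spl
`sysmon` EventCode=7 Image="*\\mmc.exe" Signed=false
| stats count min(_time) as firstTime max(_time) as lastTime by dest Image ImageLoaded Signed Company
```

Expect near-zero results on healthy hosts; any hit warrants the CLSID registry review this analytic describes.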
            {
              "i": "10.7.427",
              "n": "Unusual Number of Kerberos Service Tickets Requested",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an unusual number of Kerberos service ticket requests, potentially indicating a kerberoasting attack. It leverages Kerberos Event 4769 and calculates the standard deviation for each host, using the 3-sigma rule to detect anomalies. This activity is significant as kerberoasting allows adversaries to request service tickets and crack them offline, potentially gaining privileged access to the domain. If confirmed malicious, this could lead to unauthorized access to…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4769",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4769 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "An single endpoint requesting a large number of kerberos service tickets is not common behavior. Possible false positive scenarios include but are not limited to vulnerability scanners, administration systems and missconfigured systems.",
              "refs": "https://attack.mitre.org/techniques/T1558/003/, https://www.ired.team/offensive-security-experiments/active-directory-kerberos-abuse/t1208-kerberoasting",
              "mitre": [
                "T1558.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Unusual Number of Kerberos Service Tickets Requested\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4769. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Unusual Number of Kerberos Service Tickets Requested\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: An single endpoint requesting a large number of kerberos service tickets is not common behavior. Possible false positive scenarios include but are not limited to vulnerability scanners, administration systems and missconfigured systems.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Unusual Number of Kerberos Service Tickets Requested\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
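The analytic's 3-sigma logic can be reproduced in plain SPL for ad-hoc verification before enabling the correlation search. A minimal sketch, assuming the `wineventlog_security` macro points at your Security event data; field names (ServiceName, src) vary by Windows TA version and by XML vs. classic rendering, and the 2-minute bucket and machine-account (`*$`) exclusion are illustrative choices rather than the exact ESCU parameters:

```spl
`wineventlog_security` EventCode=4769 ServiceName!="*$"
| bucket _time span=2m
| stats dc(ServiceName) as unique_services by _time, src
| eventstats avg(unique_services) as avg_svc, stdev(unique_services) as stdev_svc by src
| where unique_services > (avg_svc + (3 * stdev_svc))
```

Sources that exceed their own baseline by more than three standard deviations are the kerberoasting candidates; expect scanners and admin jump hosts to dominate first-pass results, per the false positive notes.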
            {
              "i": "10.7.428",
              "n": "Wbemprox COM Object Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious process loading a COM object from wbemprox.dll, fastprox.dll, or wbemcomn.dll. It leverages Sysmon EventCode 7 to identify instances where these DLLs are loaded by processes not typically associated with them, excluding known legitimate processes and directories. This activity is significant as it may indicate an attempt by threat actors to abuse COM objects for privilege escalation or evasion of detection mechanisms. If confirmed malicious, this could…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and imageloaded executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "legitimate process that are not in the exception list may trigger this event.",
              "refs": "https://krebsonsecurity.com/2021/05/a-closer-look-at-the-darkside-ransomware-gang/, https://www.mcafee.com/blogs/other-blogs/mcafee-labs/mcafee-atr-analyzes-sodinokibi-aka-revil-ransomware-as-a-service-what-the-code-tells-us/",
              "mitre": [
                "T1218.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Wbemprox COM Object Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Wbemprox COM Object Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: legitimate process that are not in the exception list may trigger this event.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Wbemprox COM Object Execution\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
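To baseline which processes load the WMI provider DLLs in your environment before enabling this correlation search, a raw Sysmon EventID 7 search works well. A minimal sketch, assuming the `sysmon` macro; the two-directory exclusion is illustrative, as the upstream ESCU detection maintains a longer allowlist of legitimate processes and paths:

```spl
`sysmon` EventCode=7
    ImageLoaded IN ("*\\wbemprox.dll", "*\\fastprox.dll", "*\\wbemcomn.dll")
    NOT Image IN ("*\\Windows\\System32\\*", "*\\Windows\\SysWOW64\\*")
| stats count min(_time) as firstTime max(_time) as lastTime by dest Image ImageLoaded
```

Recurring benign loaders found here are the entries to add to the exception list referenced in the false positive notes.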
            {
              "i": "10.7.429",
              "n": "Web or Application Server Spawning a Shell",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Web or Application Server Spawning a Shell. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1, Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 1, Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://blog.netlab.360.com/ten-families-of-malicious-samples-are-spreading-using-the-log4j2-vulnerability-now/, https://gist.github.com/olafhartong/916ebc673ba066537740164f7e7e1d72",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Web or Application Server Spawning a Shell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1, Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Web or Application Server Spawning a Shell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Web or Application Server Spawning a Shell\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
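The parent/child relationship this analytic keys on can be expressed directly against Endpoint.Processes for a quick environment check. A minimal sketch, assuming Sysmon and Sysmon for Linux EventID 1 are CIM-normalized into the data model; the parent and child process lists are illustrative and should be extended to match your actual web and application server fleet:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.parent_process_name IN ("w3wp.exe", "httpd", "nginx", "tomcat*", "java")
    AND Processes.process_name IN ("cmd.exe", "powershell.exe", "sh", "bash", "dash")
  by Processes.dest Processes.parent_process_name Processes.process_name Processes.process
| `drop_dm_object_name(Processes)`
```

On most fleets this pairing should be rare; recurring benign hits (health-check scripts, deployment hooks) are candidates for the lookup-based suppressions described above.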
            {
              "i": "10.7.430",
              "n": "Windows Access Token Manipulation SeDebugPrivilege",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a process enabling the \"SeDebugPrivilege\" privilege token. It leverages Windows Security Event Logs with EventCode 4703, filtering out common legitimate processes. This activity is significant because SeDebugPrivilege allows a process to inspect and modify the memory of other processes, potentially leading to credential dumping or code injection. If confirmed malicious, an attacker could gain extensive control over system processes, enabling them to escalate privil…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4703",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting Windows Security Event Logs with 4703 EventCode enabled. The Windows TA is also required.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some native binaries and browser applications may request SeDebugPrivilege. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4703, https://devblogs.microsoft.com/oldnewthing/20080314-00/?p=23113, https://blog.palantir.com/windows-privilege-abuse-auditing-detection-and-defense-3078a403d74e, https://atomicredteam.io/privilege-escalation/T1134.001/#atomic-test-2---%60sedebugprivilege%60-token-duplication, https://malpedia.caad.fkie.fraunhofer.de/details/win.asyncrat",
              "mitre": [
                "T1134.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Access Token Manipulation SeDebugPrivilege\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4703. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1134.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Access Token Manipulation SeDebugPrivilege\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some native binaries and browser applications may request SeDebugPrivilege. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Access Token Manipulation SeDebugPrivilege\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
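A quick raw-event check confirms that 4703 auditing is actually enabled and shows which processes request SeDebugPrivilege today. A minimal sketch, assuming the `wineventlog_security` macro; with XML logging the field is EnabledPrivilegeList (classic rendering and older TA versions may expose Enabled_Privilege_List instead), and the exclusions are illustrative stand-ins for the detection's built-in filters:

```spl
`wineventlog_security` EventCode=4703 EnabledPrivilegeList="*SeDebugPrivilege*"
    NOT ProcessName IN ("*\\Windows\\System32\\svchost.exe", "*\\Program Files\\*")
| stats count min(_time) as firstTime max(_time) as lastTime by dest ProcessName SubjectUserName EnabledPrivilegeList
```

If this returns nothing at all, verify the "Audit Token Right Adjusted" subcategory is enabled in your advanced audit policy before enabling the correlation search.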
            {
              "i": "10.7.431",
              "n": "Windows Access Token Manipulation Winlogon Duplicate Token Handle",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a process attempting to access winlogon.exe to duplicate its handle. This is identified using Sysmon EventCode 10, focusing on processes targeting winlogon.exe with specific access rights. This activity is significant because it is a common technique used by adversaries to escalate privileges by leveraging the high privileges and security tokens associated with winlogon.exe. If confirmed malicious, this could allow an attacker to gain elevated privileges, potential…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "`sysmon` EventCode=10 TargetImage IN(\"*\\\\system32\\\\winlogon.exe*\", \"*\\\\SysWOW64\\\\winlogon.exe*\") GrantedAccess = 0x1040 | stats count min(_time) as firstTime max(_time) as lastTime by CallTrace EventID GrantedAccess Guid Opcode ProcessID SecurityID SourceImage SourceProcessGUID SourceProcessId TargetImage TargetProcessGUID TargetProcessId UserID dest granted_access parent_process_exec parent_process_guid parent_process_id parent_process_name parent_process_path process_exec process_guid process_id process_name process_path signature signature_id user_id vendor_product | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_access_token_manipulation_winlogon_duplicate_token_handle_filter`",
              "m": "To successfully implement this search, you must be ingesting data that records process activity from your hosts to populate the endpoint data model in the processes node. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible legitimate applications will request access to winlogon, filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/api/handleapi/nf-handleapi-duplicatehandle, https://attack.mitre.org/techniques/T1134/001/",
              "mitre": [
                "T1134.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Access Token Manipulation Winlogon Duplicate Token Handle\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1134.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Access Token Manipulation Winlogon Duplicate Token Handle\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible legitimate applications will request access to winlogon, filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on a process attempting to access winlogon.exe to duplicate its handle so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.432",
              "n": "Windows Access Token Winlogon Duplicate Handle In Uncommon Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a process attempting to duplicate the handle of winlogon.exe from an uncommon or public source path. This is identified using Sysmon EventCode 10, focusing on processes targeting winlogon.exe with specific access rights and excluding common system paths. This activity is significant because it may indicate an adversary trying to escalate privileges by leveraging the high-privilege tokens associated with winlogon.exe. If confirmed malicious, this could allow the att…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records process activity from your hosts to populate the endpoint data model in the processes node. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible legitimate applications will request access to winlogon, filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/api/handleapi/nf-handleapi-duplicatehandle, https://attack.mitre.org/techniques/T1134/001/",
              "mitre": [
                "T1134.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Access Token Winlogon Duplicate Handle In Uncommon Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1134.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Access Token Winlogon Duplicate Handle In Uncommon Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible legitimate applications will request access to winlogon, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
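\n\nExample: after enabling, confirm the detection is writing risk events. A minimal sketch that assumes the default ES risk index named risk; adjust if your deployment routes risk events elsewhere:\n\n```spl\nindex=risk search_name=\"*Winlogon Duplicate Handle In Uncommon Path*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by risk_object, risk_object_type, risk_score\n```",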
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Access Token Winlogon Duplicate Handle In Uncommon P\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.433",
              "n": "Windows Account Access Removal via Logoff Exec",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the process of logging off a user through the use of the quser and logoff commands. By monitoring for these commands, the analytic identifies actions where a user session is forcibly terminated, which could be part of an administrative task or a potentially unauthorized access attempt. This detection helps identify potential misuse or malicious activity where a user’s access is revoked without proper authorization, providing insight into potential security incident…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command.",
              "refs": "https://devblogs.microsoft.com/scripting/automating-quser-through-powershell/",
              "mitre": [
                "T1059.001",
                "T1531"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Account Access Removal via Logoff Exec\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1531. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Account Access Removal via Logoff Exec\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
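\n\nExample: a CIM-normalized spot check for the underlying quser/logoff behavior. A sketch only; it assumes the Endpoint data model is populated by your Sysmon feed, and summariesonly=false lets it run even without acceleration:\n\n```spl\n| tstats summariesonly=false count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name IN (\"quser.exe\", \"logoff.exe\") by Processes.dest Processes.user Processes.process\n```\n\nExpect hits from administrators and session-management scripts; the RBA framework adds context by correlating these with other risk signals before a Notable fires.",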
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Account Access Removal via Logoff Exec\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.434",
              "n": "Windows Account Discovery for None Disable User Account",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the PowerView PowerShell cmdlet Get-NetUser with the UACFilter parameter set to NOT_ACCOUNTDISABLE, indicating an attempt to enumerate Active Directory user accounts that are not disabled. This detection leverages PowerShell Script Block Logging (EventCode 4104) to identify the specific script block text. Monitoring this activity is significant as it may indicate reconnaissance efforts by an attacker to identify active user accounts for further exp…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104  ScriptBlockText = \"*Get-NetUser*\" ScriptBlockText = \"*NOT_ACCOUNTDISABLE*\" ScriptBlockText = \"*-UACFilter*\"\n      | fillnull\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest signature signature_id\n           user_id vendor_product EventID\n           Guid Opcode Name\n           Path ProcessID ScriptBlockId\n           ScriptBlockText\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_account_discovery_for_none_disable_user_account_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage PowerView for legitimate purposes, filter as needed.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-347a, https://powersploit.readthedocs.io/en/stable/Recon/README/, https://book.hacktricks.xyz/windows-hardening/basic-powershell-for-pentesters/powerview, https://atomicredteam.io/discovery/T1087.001/",
              "mitre": [
                "T1087.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Account Discovery for None Disable User Account\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Account Discovery for None Disable User Account\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage PowerView for legitimate purposes, filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the execution of the PowerView PowerShell cmdlet Get-NetUser with the UACFilter parameter set to NOT_ACCOUNTDISABLE, indicating an attempt to enumerate Active Directory user accounts that are not disa so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.435",
              "n": "Windows AD Abnormal Object Access Activity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a statistically significant increase in access to Active Directory objects, which may indicate attacker enumeration. It leverages Windows Security Event Code 4662 to monitor and analyze access patterns, comparing them against historical averages to detect anomalies. This activity is significant for a SOC because abnormal access to AD objects can be an early indicator of reconnaissance efforts by an attacker. If confirmed malicious, this behavior could lead to un…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4662",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4662 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Service accounts or applications that routinely query Active Directory for information.",
              "refs": "https://medium.com/securonix-tech-blog/detecting-ldap-enumeration-and-bloodhound-s-sharphound-collector-using-active-directory-decoys-dfc840f2f644, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4662, https://attack.mitre.org/tactics/TA0007/",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Abnormal Object Access Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4662. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Abnormal Object Access Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Service accounts or applications that routinely query Active Directory for information.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
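\n\nExample: baseline raw 4662 volume per user before enabling, so noisy service accounts can be seeded into suppression lookups up front. A sketch that assumes Security events land in a hypothetical index named wineventlog:\n\n```spl\nindex=wineventlog EventCode=4662\n| timechart span=1h limit=10 count by user\n```",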
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Abnormal Object Access Activity\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.436",
              "n": "Windows AD add Self to Group",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects instances where a user adds themselves to an Active Directory (AD) group. This activity is a common indicator of privilege escalation, where a user attempts to gain unauthorized access to higher privileges or sensitive resources. By monitoring AD logs, this detection identifies such suspicious behavior, which could be part of a larger attack strategy aimed at compromising critical systems and data.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4728",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4728 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1098"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD add Self to Group\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4728. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD add Self to Group\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
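\n\nExample: spot-check raw 4728 events for users adding themselves to a group. A sketch using field names from the standard Splunk Add-on for Windows and a hypothetical index named wineventlog:\n\n```spl\nindex=wineventlog EventCode=4728\n| where like(MemberName, \"%\" . SubjectUserName . \"%\")\n| table _time SubjectUserName MemberName TargetUserName\n```\n\nIn 4728 events TargetUserName holds the group name and MemberName the account that was added.",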
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows add Self to Group\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.437",
              "n": "Windows AD AdminSDHolder ACL Modified",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Access Control List (ACL) of the AdminSDHolder object in a Windows domain, specifically the addition of new rules. It leverages EventCode 5136 from the Security Event Log, focusing on changes to the nTSecurityDescriptor attribute. This activity is significant because the AdminSDHolder object secures privileged group members, and unauthorized changes can allow attackers to establish persistence and escalate privileges. If confirmed malicious, th…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you ned to be ingesting eventcode `5136`. The Advanced Security Audit policy setting `Audit Directory Services Changes` within `DS Access` needs to be enabled. Additionally, a SACL needs to be created for the AdminSDHolder object in order to log modifications.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Adding new users or groups to the AdminSDHolder ACL is not usual. Filter as needed",
              "refs": "https://learn.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/appendix-c--protected-accounts-and-groups-in-active-directory, https://social.technet.microsoft.com/wiki/contents/articles/22331.adminsdholder-protected-groups-and-security-descriptor-propagator.aspx, https://adsecurity.org/?p=1906, https://pentestlab.blog/2022/01/04/domain-persistence-adminsdholder/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-5136, https://learn.microsoft.com/en-us/windows/win32/secauthz/access-control-lists, https://medium.com/@cryps1s/detecting-windows-endpoint-compromise-with-sacls-cd748e10950, https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1546"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD AdminSDHolder ACL Modified\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD AdminSDHolder ACL Modified\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Adding new users or groups to the AdminSDHolder ACL is not usual. Filter as needed\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
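\n\nExample: review raw 5136 modifications scoped to the AdminSDHolder object itself. A sketch that assumes Windows TA field extractions and a hypothetical index named wineventlog:\n\n```spl\nindex=wineventlog EventCode=5136 ObjectDN=\"CN=AdminSDHolder,CN=System,*\"\n| table _time SubjectUserName AttributeLDAPDisplayName AttributeValue OperationType\n```",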
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows AdminSDHolder ACL Modified\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.438",
              "n": "Windows AD Cross Domain SID History Addition",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects changes to the sIDHistory attribute of user or computer objects across different domains. It leverages Windows Security Event Codes 4738 and 4742 to identify when the sIDHistory attribute is modified. This activity is significant because the sIDHistory attribute allows users to inherit permissions from other AD accounts, which can be exploited by adversaries for inter-domain privilege escalation and persistence. If confirmed malicious, this could enable attackers t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4742, Windows Event Log Security 4738",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting eventcodes `4738` and `4742`. The Advanced Security Audit policy settings `Audit User Account Management` and  `Audit Computer Account Management` within `Account Management` all need to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Domain mergers and migrations may generate large volumes of false positives for this analytic.",
              "refs": "https://adsecurity.org/?p=1772, https://learn.microsoft.com/en-us/windows/win32/adschema/a-sidhistory?redirectedfrom=MSDN, https://learn.microsoft.com/en-us/defender-for-identity/security-assessment-unsecure-sid-history-attribute",
              "mitre": [
                "T1134.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Cross Domain SID History Addition\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4742, Windows Event Log Security 4738. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1134.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Cross Domain SID History Addition\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Domain mergers and migrations may generate large volumes of false positives for this analytic.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
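\n\nExample: hunt raw account-management events where the SID History attribute was populated. A sketch that assumes Windows TA field extractions and a hypothetical index named wineventlog:\n\n```spl\nindex=wineventlog (EventCode=4738 OR EventCode=4742) SidHistory=* NOT SidHistory=\"-\"\n| table _time EventCode SubjectUserName TargetUserName SidHistory\n```\n\nBursts of these events are expected during legitimate domain migrations; outside a migration window they warrant immediate review.",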
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Cross Domain SID History Addition\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.439",
              "n": "Windows AD Dangerous Deny ACL Modification",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies an Active Directory access-control list (ACL) modification event, which applies permissions that deny the ability to enumerate permissions of the object.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Ensure you are ingesting Active Directory audit logs - specifically event 5136. See lantern article in references for further on how to onboard AD audit data. Ensure the wineventlog_security macro is configured with the correct indexes and include lookups for SID resolution if evt_resolve_ad_obj is set to 0.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://happycamper84.medium.com/sneaky-persistence-via-hidden-objects-in-ad-1c91fc37bf54, https://www.youtube.com/watch?v=_nGpZ1ydzS8, https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1222.001",
                "T1484"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Dangerous Deny ACL Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001, T1484. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Dangerous Deny ACL Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Dangerous Deny ACL Modification\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.440",
              "n": "Windows AD Dangerous Group ACL Modification",
              "c": "high",
              "f": "intermediate",
              "v": "This detection monitors the addition of the following ACLs to an Active Directory group object: \"Full control\", \"All extended rights\", \"All validated writes\",  \"Create all child objects\", \"Delete all child objects\", \"Delete subtree\", \"Delete\", \"Modify permissions\", \"Modify owner\", and \"Write all properties\".  Such modifications can indicate potential privilege escalation or malicious activity. Immediate investigation is recommended upon alert.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Ensure you are ingesting Active Directory audit logs - specifically event 5136. See lantern article in references for further on how to onboard AD audit data. Ensure the wineventlog_security macro is configured with the correct indexes and include lookups for SID resolution if evt_resolve_ad_obj is set to 0.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/secauthz/ace-strings, https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/1522b774-6464-41a3-87a5-1e5633c3fbbb, https://trustedsec.com/blog/a-hitchhackers-guide-to-dacl-based-detections-part-1-a, https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1222.001",
                "T1484"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Dangerous Group ACL Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001, T1484. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Dangerous Group ACL Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Dangerous Group ACL Modification\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.441",
              "n": "Windows AD Dangerous User ACL Modification",
              "c": "high",
              "f": "intermediate",
              "v": "This detection monitors the addition of the following ACLs to an Active Directory user object: \"Full control\",\"All extended rights\",\"All validated writes\", \"Create all child objects\",\"Delete all child objects\",\"Delete subtree\",\"Delete\",\"Modify permissions\",\"Modify owner\",\"Write all properties\".  Such modifications can indicate potential privilege escalation or malicious activity. Immediate investigation is recommended upon alert.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Ensure you are ingesting Active Directory audit logs - specifically event 5136. See lantern article in references for further on how to onboard AD audit data. Ensure the wineventlog_security macro is configured with the correct indexes and include lookups for SID resolution if evt_resolve_ad_obj is set to 0.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/secauthz/ace-strings, https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/1522b774-6464-41a3-87a5-1e5633c3fbbb, https://trustedsec.com/blog/a-hitchhackers-guide-to-dacl-based-detections-part-1-a, https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1222.001",
                "T1484"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Dangerous User ACL Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001, T1484. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Dangerous User ACL Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Dangerous User ACL Modification\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.442",
              "n": "Windows AD DCShadow Privileges ACL Addition",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies an Active Directory access-control list (ACL) modification event, which applies the minimum required extended rights to perform the DCShadow attack.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5136 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.labofapenetrationtester.com/2018/04/dcshadow.html, https://github.com/samratashok/nishang/blob/master/ActiveDirectory/Set-DCShadowPermissions.ps1, https://trustedsec.com/blog/a-hitchhackers-guide-to-dacl-based-detections-part-1-a, https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1484",
                "T1207",
                "T1222.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD DCShadow Privileges ACL Addition\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1484, T1207, T1222.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD DCShadow Privileges ACL Addition\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows DCShadow Privileges ACL Addition\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.443",
              "n": "Windows AD Domain Controller Audit Policy Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the disabling of audit policies on a domain controller. It leverages EventCode 4719 from Windows Security Event Logs to identify changes where success or failure auditing is removed. This activity is significant as it suggests an attacker may have gained access to the domain controller and is attempting to evade detection by tampering with audit policies. If confirmed malicious, this could lead to severe consequences, including data theft, privilege escalation, and…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4719",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4719 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4719",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Domain Controller Audit Policy Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4719. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Domain Controller Audit Policy Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Domain Controller Audit Policy Disabled\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.444",
              "n": "Windows AD Domain Controller Promotion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a genuine Domain Controller (DC) promotion event by detecting when a computer assigns itself the necessary Service Principal Names (SPNs) to function as a domain controller. It leverages Windows Security Event Code 4742 to monitor existing domain controllers for these changes. This activity is significant as it can help identify rogue DCs added to the network, which could indicate a DCShadow attack. If confirmed malicious, this could allow an attacker to manipul…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4742",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4742 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1207/",
              "mitre": [
                "T1207"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Domain Controller Promotion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4742. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1207. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Domain Controller Promotion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Domain Controller Promotion\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.445",
              "n": "Windows AD Domain Replication ACL Addition",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of permissions required for a DCSync attack, specifically DS-Replication-Get-Changes, DS-Replication-Get-Changes-All, and DS-Replication-Get-Changes-In-Filtered-Set. It leverages EventCode 5136 from the Windows Security Event Log to identify when these permissions are granted. This activity is significant because it indicates potential preparation for a DCSync attack, which can be used to replicate AD objects and exfiltrate sensitive data. If confirmed…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5136 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "When there is a change to nTSecurityDescriptor, Windows logs the entire ACL with the newly added components. If existing accounts are present with this permission, they will raise an alert each time the nTSecurityDescriptor is updated unless whitelisted.",
              "refs": "https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/1522b774-6464-41a3-87a5-1e5633c3fbbb, https://github.com/SigmaHQ/sigma/blob/29a5c62784faf986dc03952ae3e90e3df3294284/rules/windows/builtin/security/win_security_account_backdoor_dcsync_rights.yml, https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1484"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Domain Replication ACL Addition\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1484. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Domain Replication ACL Addition\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: When there is a change to nTSecurityDescriptor, Windows logs the entire ACL with the newly added components. If existing accounts are present with this permission, they will raise an alert each time the nTSecurityDescriptor is updated unless whitelisted.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Domain Replication ACL Addition\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.446",
              "n": "Windows AD Domain Root ACL Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "ACL deletion performed on the domain root object, significant AD change with high impact. Following MS guidance all changes at this level should be reviewed. Drill into the logonID within EventCode 4624 for information on the source device during triage.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Ensure you are ingesting Active Directory audit logs - specifically event 5136. See lantern article in references for further on how to onboard AD audit data. Ensure the wineventlog_security macro is configured with the correct indexes and include lookups for SID resolution if evt_resolve_ad_obj is set to 0.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/secauthz/ace-strings, https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/1522b774-6464-41a3-87a5-1e5633c3fbbb, https://trustedsec.com/blog/a-hitchhackers-guide-to-dacl-based-detections-part-1-a, https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1222.001",
                "T1484"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Domain Root ACL Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001, T1484. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Domain Root ACL Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Domain Root ACL Deletion\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.447",
              "n": "Windows AD Domain Root ACL Modification",
              "c": "high",
              "f": "intermediate",
              "v": "ACL modification performed on the domain root object, significant AD change with high impact. Following MS guidance all changes at this level should be reviewed. Drill into the logonID within EventCode 4624 for information on the source device during triage.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Ensure you are ingesting Active Directory audit logs - specifically event 5136. See lantern article in references for further on how to onboard AD audit data. Ensure the wineventlog_security macro is configured with the correct indexes and include lookups for SID resolution if evt_resolve_ad_obj is set to 0.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/secauthz/ace-strings, https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/1522b774-6464-41a3-87a5-1e5633c3fbbb, https://trustedsec.com/blog/a-hitchhackers-guide-to-dacl-based-detections-part-1-a, https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1222.001",
                "T1484"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Domain Root ACL Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001, T1484. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Domain Root ACL Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Domain Root ACL Modification\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.448",
              "n": "Windows AD DSRM Account Changes",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies changes to the Directory Services Restore Mode (DSRM) account behavior via registry modifications. It detects alterations in the registry path \"*\\\\System\\\\CurrentControlSet\\\\Control\\\\Lsa\\\\DSRMAdminLogonBehavior\" with specific values indicating potential misuse. This activity is significant because the DSRM account, if misconfigured, can be exploited to persist within a domain, similar to a local administrator account. If confirmed malicious, an attacker could ga…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Disaster recovery events.",
              "refs": "https://adsecurity.org/?p=1714",
              "mitre": [
                "T1098"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD DSRM Account Changes\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD DSRM Account Changes\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Disaster recovery events.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows DSRM Account Changes\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.449",
              "n": "Windows AD DSRM Password Reset",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to reset the Directory Services Restore Mode (DSRM) administrator password on a Domain Controller. It leverages event code 4794 from the Windows Security Event Log, specifically looking for events where the DSRM password reset is attempted. This activity is significant because the DSRM account can be used similarly to a local administrator account, providing potential persistence for an attacker. If confirmed malicious, this could allow an attacker to main…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4794",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4794 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Resetting the DSRM password for legitamate reasons, i.e. forgot the password. Disaster recovery. Deploying AD backdoor deliberately.",
              "refs": "https://adsecurity.org/?p=1714",
              "mitre": [
                "T1098"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD DSRM Password Reset\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4794. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD DSRM Password Reset\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Resetting the DSRM password for legitamate reasons, i.e. forgot the password. Disaster recovery. Deploying AD backdoor deliberately.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows DSRM Password Reset\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.450",
              "n": "Windows AD GPO Deleted",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies when an Active Directory Group Policy is deleted using the Group Policy Management Console.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5136 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1562.001",
                "T1484.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD GPO Deleted\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1484.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD GPO Deleted\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows GPO Deleted\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.451",
              "n": "Windows AD GPO Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies when an Active Directory Group Policy is disabled using the Group Policy Management Console.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5136 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1562.001",
                "T1484.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD GPO Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1484.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD GPO Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows GPO Disabled\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.452",
              "n": "Windows AD GPO New CSE Addition",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies when a a new client side extension is added to an Active Directory Group Policy using the Group Policy Management Console.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5136 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "General usage of group policy will trigger this detection, also please not GPOs modified using tools such as SharpGPOAbuse will not generate the AD audit events which enable this detection.",
              "refs": "https://wald0.com/?p=179, https://learn.microsoft.com/en-gb/archive/blogs/mempson/group-policy-client-side-extension-list, https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory, https://github.com/FSecureLABS/SharpGPOAbuse",
              "mitre": [
                "T1222.001",
                "T1484.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD GPO New CSE Addition\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001, T1484.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD GPO New CSE Addition\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: General usage of group policy will trigger this detection, also please not GPOs modified using tools such as SharpGPOAbuse will not generate the AD audit events which enable this detection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
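For orientation, the following is a minimal sketch of the kind of raw-event search that surfaces new CSE GUIDs written to a GPO. It is not the packaged ESCU logic (view that in Content Management); it assumes the standard `wineventlog_security` ESCU macro, XML-rendered 5136 fields, and that operation type %%14674 denotes a value being added. The inline triple-backtick comments require Splunk 9 or later; drop them on older versions.

```spl
`wineventlog_security` EventCode=5136
    ObjectClass=groupPolicyContainer
    AttributeLDAPDisplayName IN ("gPCMachineExtensionNames", "gPCUserExtensionNames")
    OperationType="%%14674" ``` %%14674 resolves to "Value Added" ```
| stats min(_time) as firstTime max(_time) as lastTime
    values(AttributeValue) as cse_guid_list values(ObjectDN) as gpo_dn
    by SubjectUserName
| convert ctime(firstTime) ctime(lastTime)
```

Compare any GUIDs that appear against the CSEs already deployed in your environment before treating an addition as suspicious.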
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows GPO New CSE Addition\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.453",
              "n": "Windows AD Hidden OU Creation",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic is looking for when an ACL is applied to an OU which denies listing the objects residing in the OU. This activity combined with modifying the owner of the OU will hide AD objects even from domain administrators.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Ensure you are ingesting Active Directory audit logs - specifically event 5136. See lantern article in references for further on how to onboard AD audit data. Ensure the wineventlog_security macro is configured with the correct indexes and include lookups for SID resolution if evt_resolve_ad_obj is set to 0.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://happycamper84.medium.com/sneaky-persistence-via-hidden-objects-in-ad-1c91fc37bf54, https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1222.001",
                "T1484"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Hidden OU Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001, T1484. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Hidden OU Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
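As a rough illustration of the underlying idea (not the packaged ESCU logic), a search along these lines flags deny ACEs added to an OU's security descriptor. The SDDL regex, which looks for a deny ACE carrying the List Children (LC) right, is an approximation; confirm the exact pattern against the detection in Content Management. The standard `wineventlog_security` macro and XML-rendered 5136 fields are assumed.

```spl
`wineventlog_security` EventCode=5136
    ObjectClass=organizationalUnit
    AttributeLDAPDisplayName=nTSecurityDescriptor
| regex AttributeValue="\(D;[^;]*;[^;]*LC[^;]*;" ``` deny ACE containing the List Children right ```
| stats min(_time) as firstTime max(_time) as lastTime
    values(ObjectDN) as target_ou
    by SubjectUserName
| convert ctime(firstTime) ctime(lastTime)
```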
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Hidden OU Creation\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.454",
              "n": "Windows AD Object Owner Updated",
              "c": "high",
              "f": "intermediate",
              "v": "AD Object Owner Updated. The owner provides Full control level privileges over the target AD Object. This event has significant impact alone and is also a precursor activity for hiding an AD object.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5136 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/secauthz/ace-strings, https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/1522b774-6464-41a3-87a5-1e5633c3fbbb, https://trustedsec.com/blog/a-hitchhackers-guide-to-dacl-based-detections-part-1-a, https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1222.001",
                "T1484"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Object Owner Updated\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001, T1484. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Object Owner Updated\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
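A hypothetical sketch of the idea behind this detection: extract the owner component (the `O:` prefix) from the SDDL written to nTSecurityDescriptor and report who changed ownership of what. The rex pattern is an approximation of SDDL structure, and the standard `wineventlog_security` macro is assumed; the packaged ESCU search is authoritative.

```spl
`wineventlog_security` EventCode=5136
    AttributeLDAPDisplayName=nTSecurityDescriptor
    AttributeValue="O:*"
| rex field=AttributeValue "^O:(?<new_owner_sid>.+?)(?:G:|D:|S:|$)"
| stats min(_time) as firstTime max(_time) as lastTime
    values(ObjectDN) as target_object
    by SubjectUserName, new_owner_sid
| convert ctime(firstTime) ctime(lastTime)
```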
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Object Owner Updated\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.455",
              "n": "Windows AD Privileged Account SID History Addition",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies when the SID of a privileged user is added to the SID History attribute of another user. It leverages Windows Security Event Codes 4742 and 4738, combined with identity lookups, to detect this activity. This behavior is significant as it may indicate an attempt to abuse SID history for unauthorized access across multiple domains. If confirmed malicious, this activity could allow an attacker to escalate privileges or maintain persistent access within the environm…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4742, Windows Event Log Security 4738",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4742, Windows Event Log Security 4738 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Migration of privileged accounts.",
              "refs": "https://adsecurity.org/?p=1772",
              "mitre": [
                "T1134.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Privileged Account SID History Addition\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4742, Windows Event Log Security 4738. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1134.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Privileged Account SID History Addition\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Migration of privileged accounts.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
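Conceptually, the detection pairs 4738/4742 sIDHistory changes with a privileged-identity list. A minimal sketch, assuming XML-rendered fields and a hypothetical `admin_sids` lookup standing in for your privileged-account inventory (the real ESCU search uses identity lookups, as the description notes):

```spl
`wineventlog_security` EventCode IN (4738, 4742) SidHistory="*S-1-*"
| rex field=SidHistory "(?<added_sid>S-1-(?:\d+-)+\d+)"
| lookup admin_sids sid as added_sid OUTPUT is_privileged ``` admin_sids is a hypothetical lookup of privileged-account SIDs ```
| search is_privileged=true
| stats min(_time) as firstTime max(_time) as lastTime
    by TargetUserName, SubjectUserName, added_sid
| convert ctime(firstTime) ctime(lastTime)
```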
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Privileged Account SID History Addition\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.456",
              "n": "Windows AD Privileged Group Modification",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows AD Privileged Group Modification. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4728",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4728 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://splunkbase.splunk.com/app/6853",
              "mitre": [
                "T1098"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Privileged Group Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4728. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Privileged Group Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
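A minimal sketch of the raw-event form of this detection, assuming the standard `wineventlog_security` macro and XML-rendered 4728 fields (where the group appears as TargetUserName and the added member as MemberName; classic rendering uses different names). The group list is illustrative; extend it with your own Tier-0 groups.

```spl
`wineventlog_security` EventCode=4728
| rename TargetUserName as group_name
| search group_name IN ("Domain Admins", "Enterprise Admins", "Schema Admins", "Administrators")
| stats min(_time) as firstTime max(_time) as lastTime
    values(MemberName) as added_member
    by group_name, SubjectUserName
| convert ctime(firstTime) ctime(lastTime)
```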
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Privileged Group Modification\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.457",
              "n": "Windows AD Replication Request Initiated by User Account",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a user account initiating an Active Directory replication request, indicative of a DCSync attack. It leverages EventCode 4662 from the Windows Security Event Log, focusing on specific object types and replication permissions. This activity is significant because it can allow an attacker with sufficient privileges to request password hashes for any or all users within the domain. If confirmed malicious, this could lead to unauthorized access, privilege escalation, a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4662, Windows Event Log Security 4624",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4662, Windows Event Log Security 4624 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Azure AD Connect syncing operations and the dcdiag.exe /Test:Replications command.",
              "refs": "https://adsecurity.org/?p=1729, https://www.linkedin.com/pulse/mimikatz-dcsync-event-log-detections-john-dwyer, https://github.com/SigmaHQ/sigma/blob/0.22-699-g29a5c6278/rules/windows/builtin/security/win_security_dcsync.yml",
              "mitre": [
                "T1003.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Replication Request Initiated by User Account\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4662, Windows Event Log Security 4624. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Replication Request Initiated by User Account\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Azure AD Connect syncing operations and the dcdiag.exe /Test:Replications command.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
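The canonical DCSync pattern behind this detection is well documented: 4662 handles to the domainDNS object requesting the DS-Replication-Get-Changes or DS-Replication-Get-Changes-All control access rights, from a principal that is not a computer account. A sketch, assuming the standard `wineventlog_security` macro and XML-rendered fields (the packaged ESCU search is authoritative):

```spl
`wineventlog_security` EventCode=4662
    ObjectType IN ("*domainDNS*", "*19195a5b-6da0-11d0-afd3-00c04fd930c9*")
    (Properties="*1131f6aa-9c07-11d1-f79f-00c04fc2dcd2*" OR Properties="*1131f6ad-9c07-11d1-f79f-00c04fc2dcd2*")
    NOT SubjectUserName="*$" ``` exclude computer accounts; DCs replicate legitimately ```
| stats count min(_time) as firstTime max(_time) as lastTime
    by SubjectUserName, Computer
| convert ctime(firstTime) ctime(lastTime)
```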
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Replication Request Initiated by User Account\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.458",
              "n": "Windows AD Replication Request Initiated from Unsanctioned Location",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies unauthorized Active Directory replication requests initiated from non-domain controller locations. It leverages EventCode 4662 to detect when a computer account with replication permissions creates a handle to domainDNS, filtering out known domain controller IP addresses. This activity is significant as it may indicate a DCSync attack, where an attacker with privileged access can request password hashes for any or all users within the domain. If confirmed malici…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4662, Windows Event Log Security 4624",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4662, Windows Event Log Security 4624 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Genuine DC promotion may trigger this alert.",
              "refs": "https://adsecurity.org/?p=1729, https://www.linkedin.com/pulse/mimikatz-dcsync-event-log-detections-john-dwyer, https://github.com/SigmaHQ/sigma/blob/0.22-699-g29a5c6278/rules/windows/builtin/security/win_security_dcsync.yml",
              "mitre": [
                "T1003.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Replication Request Initiated from Unsanctioned Location\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4662, Windows Event Log Security 4624. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Replication Request Initiated from Unsanctioned Location\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Genuine DC promotion may trigger this alert.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
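Because 4662 carries no source address, the listed 4624 data source supplies it; that is why both event codes are required here. A minimal sketch of the join on logon ID, assuming the standard `wineventlog_security` macro, with `sanctioned_dc_ips` as a hypothetical lookup of known domain controller addresses (substitute your own DC inventory):

```spl
`wineventlog_security` ((EventCode=4662 ObjectType="*domainDNS*"
      Properties="*1131f6aa-9c07-11d1-f79f-00c04fc2dcd2*") OR EventCode=4624)
| eval logon_id=coalesce(SubjectLogonId, TargetLogonId)
| stats min(_time) as firstTime
    values(eval(if(EventCode==4624, IpAddress, null()))) as src_ip
    values(eval(if(EventCode==4662, SubjectUserName, null()))) as replicating_account
    by logon_id, Computer
| where isnotnull(src_ip) AND isnotnull(replicating_account)
| search NOT [| inputlookup sanctioned_dc_ips | fields src_ip] ``` sanctioned_dc_ips is a hypothetical lookup ```
| convert ctime(firstTime)
```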
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Replication Request Initiated from Unsanctioned L\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.459",
              "n": "Windows AD Same Domain SID History Addition",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects changes to the sIDHistory attribute of user or computer objects within the same domain. It leverages Windows Security Event Codes 4738 and 4742 to identify when the sIDHistory attribute is modified. This activity is significant because the sIDHistory attribute can be abused by adversaries to grant unauthorized access by inheriting permissions from another account. If confirmed malicious, this could allow attackers to maintain persistent access or escalate privilege…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4742, Windows Event Log Security 4738",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting eventcodes `4738` and `4742`. The Advanced Security Audit policy settings `Audit User Account Management` and  `Audit Computer Account Management` within `Account Management` all need to be enabled. SID resolution is not required..",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://adsecurity.org/?p=1772, https://learn.microsoft.com/en-us/windows/win32/adschema/a-sidhistory?redirectedfrom=MSDN, https://learn.microsoft.com/en-us/defender-for-identity/security-assessment-unsecure-sid-history-attribute, https://book.hacktricks.xyz/windows-hardening/active-directory-methodology/sid-history-injection",
              "mitre": [
                "T1134.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Same Domain SID History Addition\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4742, Windows Event Log Security 4738. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1134.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Same Domain SID History Addition\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing (scoped here to the usual Windows event log sourcetypes; widen if yours differ):\n\n```spl\n| tstats count where index=* (sourcetype=XmlWinEventLog OR sourcetype=WinEventLog) by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Windows AD Same Domain SID History Addition*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
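\n\nOptional raw-event hunting sketch. This is illustrative, not the packaged ESCU logic (view that in Content Management); it assumes the `wineventlog_security` macro resolves to your Windows Security index and that the Windows TA extracts the `SidHistory` field from events 4742/4738:\n\n```spl\n`wineventlog_security` EventCode IN (4742, 4738) NOT SidHistory IN (\"-\", \"%%1793\")\n| stats count min(_time) as firstTime max(_time) as lastTime values(SidHistory) as SidHistory by TargetUserName, SubjectUserName\n```",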
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Same Domain SID History Addition\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.460",
              "n": "Windows AD Self DACL Assignment",
              "c": "high",
              "f": "intermediate",
              "v": "Detect when a user creates a new DACL in AD for their own AD object.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Ensure you are ingesting Active Directory audit logs - specifically event 5136. See lantern article in references for further on how to onboard AD audit data. Ensure the wineventlog_security macro is configured with the correct indexes and include lookups for SID resolution if evt_resolve_ad_obj is set to 0.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1484",
                "T1098"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Self DACL Assignment\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1484, T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Self DACL Assignment\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing (scoped here to the usual Windows event log sourcetypes; widen if yours differ):\n\n```spl\n| tstats count where index=* (sourcetype=XmlWinEventLog OR sourcetype=WinEventLog) by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Windows AD Self DACL Assignment*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
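\n\nOptional raw-event hunting sketch. Illustrative only; the shipped ESCU search matches the subject to their own object DN more robustly. Assumes the `wineventlog_security` macro and the 5136 fields `ObjectDN`, `AttributeLDAPDisplayName`, and `SubjectUserName`:\n\n```spl\n`wineventlog_security` EventCode=5136 AttributeLDAPDisplayName=nTSecurityDescriptor\n| where like(ObjectDN, \"CN=\".SubjectUserName.\"%\")\n| stats count min(_time) as firstTime max(_time) as lastTime by SubjectUserName, ObjectDN\n```",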
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Self DACL Assignment\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.461",
              "n": "Windows AD ServicePrincipalName Added To Domain Account",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of a Service Principal Name (SPN) to a domain account. It leverages Windows Event Code 5136 and monitors changes to the servicePrincipalName attribute. This activity is significant because it may indicate an attempt to perform Kerberoasting, a technique where attackers extract and crack service account passwords offline. If confirmed malicious, this could allow an attacker to obtain cleartext passwords, leading to unauthorized access and potential late…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$ObjectDN$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you ned to be ingesting eventcode `5136`. The Advanced Security Audit policy setting `Audit Directory Services Changes` within `DS Access` needs to be enabled. Additionally, a SACL needs to be created for AD objects in order to ingest attribute modifications.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A Service Principal Name should only be added to an account when an application requires it. While infrequent, this detection may trigger on legitimate actions. Filter as needed.",
              "refs": "https://adsecurity.org/?p=3466, https://www.thehacker.recipes/ad/movement/dacl/targeted-kerberoasting, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-5136, https://www.ired.team/offensive-security-experiments/active-directory-kerberos-abuse/t1208-kerberoasting",
              "mitre": [
                "T1098"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD ServicePrincipalName Added To Domain Account\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD ServicePrincipalName Added To Domain Account\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A Service Principal Name should only be added to an account when an application requires it. While infrequent, this detection may trigger on legitimate actions. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing (scoped here to the usual Windows event log sourcetypes; widen if yours differ):\n\n```spl\n| tstats count where index=* (sourcetype=XmlWinEventLog OR sourcetype=WinEventLog) by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Windows AD ServicePrincipalName Added To Domain Account*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
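\n\nOptional raw-event hunting sketch (illustrative; the ESCU correlation search is authoritative). Assumes the `wineventlog_security` macro and that `OperationType=\"%%14674\"` denotes a value being added in event 5136:\n\n```spl\n`wineventlog_security` EventCode=5136 AttributeLDAPDisplayName=servicePrincipalName OperationType=\"%%14674\"\n| stats count min(_time) as firstTime max(_time) as lastTime values(AttributeValue) as added_spn by ObjectDN, SubjectUserName\n```",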
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows ServicePrincipalName Added To Domain Account\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.462",
              "n": "Windows AD Short Lived Domain Account ServicePrincipalName",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the addition and quick deletion of a Service Principal Name (SPN) to a domain account within 5 minutes. This detection leverages EventCode 5136 from the Windows Security Event Log, focusing on changes to the servicePrincipalName attribute. This activity is significant as it may indicate an attempt to perform Kerberoasting, a technique used to crack the cleartext password of a domain account offline. If confirmed malicious, this could allow an attacker to gain un…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5136 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A Service Principal Name should only be added to an account when an application requires it. Adding an SPN and quickly deleting it is less common but may be part of legitimate action. Filter as needed.",
              "refs": "https://adsecurity.org/?p=3466, https://www.thehacker.recipes/ad/movement/dacl/targeted-kerberoasting, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-5136, https://www.ired.team/offensive-security-experiments/active-directory-kerberos-abuse/t1208-kerberoasting",
              "mitre": [
                "T1098"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Short Lived Domain Account ServicePrincipalName\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Short Lived Domain Account ServicePrincipalName\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A Service Principal Name should only be added to an account when an application requires it. Adding an SPN and quickly deleting it is less common but may be part of legitimate action. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing (scoped here to the usual Windows event log sourcetypes; widen if yours differ):\n\n```spl\n| tstats count where index=* (sourcetype=XmlWinEventLog OR sourcetype=WinEventLog) by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Windows AD Short Lived Domain Account ServicePrincipalName*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
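\n\nOptional raw-event hunting sketch for the add-then-delete pattern (illustrative; the ESCU search defines the authoritative 5-minute window). Assumes `%%14674` marks a value added and `%%14675` a value deleted in the 5136 `OperationType` field:\n\n```spl\n`wineventlog_security` EventCode=5136 AttributeLDAPDisplayName=servicePrincipalName\n| stats min(eval(if(OperationType==\"%%14674\", _time, null()))) as add_time max(eval(if(OperationType==\"%%14675\", _time, null()))) as delete_time values(AttributeValue) as spn by ObjectDN\n| where isnotnull(add_time) AND isnotnull(delete_time) AND delete_time - add_time < 300\n```",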
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Short Lived Domain Account ServicePrincipalName\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.463",
              "n": "Windows AD Short Lived Domain Controller SPN Attribute",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the temporary addition of a global catalog SPN or a DRS RPC SPN to an Active Directory computer object, indicative of a potential DCShadow attack. This detection leverages EventCode 5136 from the `wineventlog_security` data source, focusing on specific SPN attribute changes. This activity is significant as DCShadow attacks allow attackers with privileged access to register rogue Domain Controllers, enabling unauthorized changes to the AD infrastructure. If confirme…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136, Windows Event Log Security 4624",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5136, Windows Event Log Security 4624 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.dcshadow.com/, https://blog.netwrix.com/2022/09/28/dcshadow_attack/, https://gist.github.com/gentilkiwi/dcc132457408cf11ad2061340dcb53c2, https://attack.mitre.org/techniques/T1207/, https://blog.alsid.eu/dcshadow-explained-4510f52fc19d",
              "mitre": [
                "T1207"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Short Lived Domain Controller SPN Attribute\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136, Windows Event Log Security 4624. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1207. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Short Lived Domain Controller SPN Attribute\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing (scoped here to the usual Windows event log sourcetypes; widen if yours differ):\n\n```spl\n| tstats count where index=* (sourcetype=XmlWinEventLog OR sourcetype=WinEventLog) by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Windows AD Short Lived Domain Controller SPN Attribute*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
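\n\nOptional raw-event hunting sketch (illustrative). It looks for the global catalog (`GC/`) and DRS replication SPN patterns described above; the GUID shown is the well-known DRS interface identifier cited in the DCShadow references and should be verified against the shipped detection:\n\n```spl\n`wineventlog_security` EventCode=5136 AttributeLDAPDisplayName=servicePrincipalName (AttributeValue=\"GC/*\" OR AttributeValue=\"E3514235-4B06-11D1-AB04-00C04FC2DCD2/*\")\n| stats count min(_time) as firstTime max(_time) as lastTime values(AttributeValue) as spn by ObjectDN, SubjectUserName, OperationType\n```",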
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Short Lived Domain Controller SPN Attribute\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.464",
              "n": "Windows AD Short Lived Server Object",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation and quick deletion of a Domain Controller (DC) object within 30 seconds in an Active Directory environment, indicative of a potential DCShadow attack. This detection leverages Windows Security Event Codes 5137 and 5141, analyzing the duration between these events. This activity is significant as DCShadow allows attackers with privileged access to register a rogue DC, enabling unauthorized changes to AD objects, including credentials. If confirmed ma…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5137, Windows Event Log Security 5141",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5137, Windows Event Log Security 5141 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Creating and deleting a server object within 30 seconds or less is unusual but not impossible in a production environment. Filter as needed.",
              "refs": "https://www.dcshadow.com/, https://attack.mitre.org/techniques/T1207/, https://stealthbits.com/blog/detecting-dcshadow-with-event-logs/, https://pentestlab.blog/2018/04/16/dcshadow/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-5137, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-5141",
              "mitre": [
                "T1207"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Short Lived Server Object\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5137, Windows Event Log Security 5141. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1207. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Short Lived Server Object\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Creating and deleting a server object within 30 seconds or less is unusual but not impossible in a production environment. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing (scoped here to the usual Windows event log sourcetypes; widen if yours differ):\n\n```spl\n| tstats count where index=* (sourcetype=XmlWinEventLog OR sourcetype=WinEventLog) by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Windows AD Short Lived Server Object*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
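\n\nOptional raw-event hunting sketch for the create-then-delete pattern (illustrative; the 30-second window mirrors the description above, but the ESCU search is authoritative). Assumes events 5137/5141 expose `ObjectDN` and `ObjectClass`:\n\n```spl\n`wineventlog_security` EventCode IN (5137, 5141) ObjectClass=server\n| stats min(eval(if(EventCode==\"5137\", _time, null()))) as createTime max(eval(if(EventCode==\"5141\", _time, null()))) as deleteTime by ObjectDN\n| where isnotnull(createTime) AND isnotnull(deleteTime) AND deleteTime - createTime <= 30\n```",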
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Short Lived Server Object\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.465",
              "n": "Windows AD SID History Attribute Modified",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the SID History attribute in Active Directory by leveraging event code 5136. This detection uses logs from the `wineventlog_security` data source to identify changes to the sIDHistory attribute. Monitoring this activity is crucial as the SID History attribute can be exploited by adversaries to inherit permissions from other accounts, potentially granting unauthorized access. If confirmed malicious, this activity could allow attackers to maintain pe…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5136 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Domain mergers and migrations may generate large volumes of false positives for this analytic.",
              "refs": "https://adsecurity.org/?p=1772, https://learn.microsoft.com/en-us/windows/win32/adschema/a-sidhistory?redirectedfrom=MSDN, https://learn.microsoft.com/en-us/defender-for-identity/security-assessment-unsecure-sid-history-attribute, https://book.hacktricks.xyz/windows-hardening/active-directory-methodology/sid-history-injection",
              "mitre": [
                "T1134.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD SID History Attribute Modified\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1134.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD SID History Attribute Modified\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Domain mergers and migrations may generate large volumes of false positives for this analytic.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing (scoped here to the usual Windows event log sourcetypes; widen if yours differ):\n\n```spl\n| tstats count where index=* (sourcetype=XmlWinEventLog OR sourcetype=WinEventLog) by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Windows AD SID History Attribute Modified*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
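\n\nOptional raw-event hunting sketch (illustrative). Watches event 5136 for changes to the `sIDHistory` attribute; field names assume the standard Windows TA extractions:\n\n```spl\n`wineventlog_security` EventCode=5136 AttributeLDAPDisplayName=sIDHistory\n| stats count min(_time) as firstTime max(_time) as lastTime values(AttributeValue) as sid_history by ObjectDN, SubjectUserName\n```",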
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows SID History Attribute Modified\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.466",
              "n": "Windows AD Suspicious Attribute Modification",
              "c": "high",
              "f": "intermediate",
              "v": "This detection monitors changes to the following Active Directory attributes: \"msDS-AllowedToDelegateTo\", \"msDS-AllowedToActOnBehalfOfOtherIdentity\", \"msDS-KeyCredentialLink\", \"scriptPath\", and \"msTSInitialProgram\".  Modifications to these attributes can indicate potential malicious activity or privilege escalation attempts. Immediate investigation is recommended upon alert.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136",
              "q": "# Shared SPL: intentional — see UC-10.6.107\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5136 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "If key credentials are regularly assigned to users, these events will need to be tuned out.",
              "refs": "https://trustedsec.com/blog/a-hitchhackers-guide-to-dacl-based-detections-part-1-a, https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Enabling_an_audit_trail_from_Active_Directory",
              "mitre": [
                "T1222.001",
                "T1550"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Suspicious Attribute Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001, T1550. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Suspicious Attribute Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: If key credentials are regularly assigned to users, these events will need to be tuned out.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Analyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Suspicious Attribute Modification\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.467",
              "n": "Windows AI Platform DNS Query",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows AI Platform DNS Query. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "researcher, engineering and administrator may create a automation that queries huggingface ai platform hub for accomplishing task.",
              "refs": "https://cert.gov.ua/article/6284730, https://www.microsoft.com/en-us/security/blog/2025/11/03/sesameop-novel-backdoor-uses-openai-assistants-api-for-command-and-control/",
              "mitre": [
                "T1071.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AI Platform DNS Query\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AI Platform DNS Query\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: researcher, engineering and administrator may create a automation that queries huggingface ai platform hub for accomplishing task.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Analyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows AI Platform Query\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.468",
              "n": "Windows Alternate DataStream - Executable Content",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the writing of data with an IMPHASH value to an Alternate Data Stream (ADS) in the NTFS file system. It leverages Sysmon Event ID 15 and regex to identify files with a Portable Executable (PE) structure. This activity is significant as it may indicate a threat actor staging malicious code in hidden areas for persistence or future execution. If confirmed malicious, this could allow attackers to execute hidden code, maintain persistence, or escalate privileges within…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 15",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Target environment must ingest Sysmon data, specifically Event ID 15, and import hashing/imphash must be enabled within Sysmon.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://car.mitre.org/analytics/CAR-2020-08-001/, https://blogs.juniper.net/en-us/threat-research/bitpaymer-ransomware-hides-behind-windows-alternate-data-streams, https://twitter.com/0xrawsec/status/1002478725605273600?s=21",
              "mitre": [
                "T1564.004"
              ],
              "dtype": "file_hash",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Alternate DataStream - Executable Content\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file hash entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 15. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1564.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Alternate DataStream - Executable Content\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Alternate DataStream - Executable Content\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.469",
              "n": "Windows Alternate DataStream - Process Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a process attempts to execute a file from within an NTFS file system alternate data stream. This detection leverages process execution data from sources like Windows process monitoring or Sysmon Event ID 1, focusing on specific processes known for such behavior. This activity is significant because alternate data streams can be used by threat actors to hide malicious code, making it difficult to detect. If confirmed malicious, this could allow an attacker to e…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4688, Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Target environment must ingest process execution data sources such as Windows process monitoring and/or Sysmon EventID 1.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be generated by process executions within the commandline, regex has been provided to minimize the possibilty.",
              "refs": "https://gist.github.com/api0cradle/cdd2d0d0ec9abb686f0e89306e277b8f, https://car.mitre.org/analytics/CAR-2020-08-001/, https://blogs.juniper.net/en-us/threat-research/bitpaymer-ransomware-hides-behind-windows-alternate-data-streams, https://blog.netwrix.com/2022/12/16/alternate_data_stream/",
              "mitre": [
                "T1564.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Alternate DataStream - Process Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4688, Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1564.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Alternate DataStream - Process Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be generated by process executions within the commandline, regex has been provided to minimize the possibilty.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
Analyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Alternate DataStream - Process Execution\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.470",
              "n": "Windows Anonymous Pipe Activity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation or connection of anonymous pipes for inter-process communication (IPC) within a Windows environment. Anonymous pipes are commonly used by legitimate system processes, services, and applications to transfer data between related processes. However, adversaries frequently abuse anonymous pipes to facilitate stealthy process injection, command-and-control (C2) communication, credential theft, or privilege escalation. This detection monitors for unusual ano…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 17, Sysmon EventID 18",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and pipename from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA. .",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Automation tool might use anonymous pipe for task orchestration or process communication.",
              "refs": "https://www.trendmicro.com/en_nl/research/24/k/earth-estries.html",
              "mitre": [
                "T1559"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Anonymous Pipe Activity\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 17, Sysmon EventID 18. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1559. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Anonymous Pipe Activity\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Automation tool might use anonymous pipe for task orchestration or process communication.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. 
To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nIf you chose to schedule the hunt, verify it is firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Anonymous Pipe Activity\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.471",
              "n": "Windows Application Whitelisting Bypass Attempt via Rundll32",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Application Whitelisting Bypass Attempt via Rundll32. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1218/011/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.011/T1218.011.md, https://lolbas-project.github.io/lolbas/Binaries/Rundll32/, https://bohops.com/2018/02/26/leveraging-inf-sct-fetch-execute-techniques-for-bypass-evasion-persistence/, https://lolbas-project.github.io/lolbas/Libraries/Advpack/, https://lolbas-project.github.io/lolbas/Libraries/Ieadvpack/, https://lolbas-project.github.io/lolbas/Libraries/Setupapi/, https://lolbas-project.github.io/lolbas/Libraries/Syssetup/",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Application Whitelisting Bypass Attempt via Rundll32\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Application Whitelisting Bypass Attempt via Rundll32\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Application Whitelisting Bypass Attempt via Rundll32\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.472",
              "n": "Windows AppLocker Block Events",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to bypass application restrictions by identifying Windows AppLocker policy violations. It leverages Windows AppLocker event logs, specifically EventCodes 8007, 8004, 8022, 8025, 8029, and 8040, to pinpoint blocked actions. This activity is significant for a SOC as it highlights potential unauthorized application executions, which could indicate malicious intent or policy circumvention. If confirmed malicious, this activity could allow an attacker to execut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may legitimately use AppLocker to allow applications.",
              "refs": "https://attack.mitre.org/techniques/T1218, https://learn.microsoft.com/en-us/windows/security/application-security/application-control/windows-defender-application-control/applocker/using-event-viewer-with-applocker, https://learn.microsoft.com/en-us/windows/security/application-security/application-control/windows-defender-application-control/operations/querying-application-control-events-centrally-using-advanced-hunting",
              "mitre": [
                "T1218"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AppLocker Block Events\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AppLocker Block Events\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may legitimately use AppLocker to allow applications.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows AppLocker Block Events\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.473",
              "n": "Windows AppLocker Privilege Escalation via Unauthorized Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic utilizes Windows AppLocker event logs to identify attempts to bypass application restrictions. AppLocker is a feature that allows administrators to specify which applications are permitted to run on a system. This analytic is designed to identify attempts to bypass these restrictions, which could be indicative of an attacker attempting to escalate privileges. The analytic uses EventCodes 8007, 8004, 8022, 8025, 8029, and 8040 to identify these attempts. The analytic will i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible if legitimate users are attempting to bypass application restrictions. This could occur if a user is attempting to run an application that is not permitted by AppLocker. It is recommended to investigate the context of the bypass attempt to determine if it is malicious or not. Modify the threshold as needed to reduce false positives.",
              "refs": "https://learn.microsoft.com/en-us/windows/security/application-security/application-control/windows-defender-application-control/operations/querying-application-control-events-centrally-using-advanced-hunting, https://learn.microsoft.com/en-us/windows/security/application-security/application-control/windows-defender-application-control/applocker/using-event-viewer-with-applocker",
              "mitre": [
                "T1218"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AppLocker Privilege Escalation via Unauthorized Bypass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AppLocker Privilege Escalation via Unauthorized Bypass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible if legitimate users are attempting to bypass application restrictions. This could occur if a user is attempting to run an application that is not permitted by AppLocker. It is recommended to investigate the context of the bypass attempt to determine if it is malicious or not. Modify the threshold as needed to reduce false positives.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows AppLocker Privilege Escalation via Unauthorized Bypa\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.474",
              "n": "Windows AppLocker Rare Application Launch Detection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the launch of rarely used applications within the environment, which may indicate the use of potentially malicious software or tools by attackers. It leverages Windows AppLocker event logs, aggregating application launch counts over time and flagging those that significantly deviate from the norm. This behavior is significant as it helps identify unusual application activity that could signal a security threat. If confirmed malicious, this activity could allow atta…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`applocker`\n      | spath input=UserData_Xml\n      | rename RuleAndFileData.* as *, Computer as dest, TargetUser AS user\n      | stats dc(_time) as days, count\n        BY FullFilePath dest user\n      | eventstats avg(count) as avg, stdev(count) as stdev\n      | eval upperBound=(avg+stdev*3), lowerBound=(avg-stdev*3)\n      | where count > upperBound OR count < lowerBound\n      | `windows_applocker_rare_application_launch_detection_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible if legitimate users are launching applications that are not permitted by AppLocker. It is recommended to investigate the context of the application launch to determine if it is malicious or not. Modify the threshold as needed to reduce false positives.",
              "refs": "https://learn.microsoft.com/en-us/windows/security/application-security/application-control/windows-defender-application-control/applocker/using-event-viewer-with-applocker, https://learn.microsoft.com/en-us/windows/security/application-security/application-control/windows-defender-application-control/operations/querying-application-control-events-centrally-using-advanced-hunting",
              "mitre": [
                "T1218"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AppLocker Rare Application Launch Detection\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AppLocker Rare Application Launch Detection\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible if legitimate users are launching applications that are not permitted by AppLocker. It is recommended to investigate the context of the application launch to determine if it is malicious or not. Modify the threshold as needed to reduce false positives.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the launch of rarely used applications within the environment, which may indicate the use of potentially malicious software or tools by attackers so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.475",
              "n": "Windows AppX Deployment Full Trust Package Installation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the installation of MSIX/AppX packages with full trust privileges. This detection leverages Windows event logs from the AppXDeployment-Server, specifically focusing on EventCode 400 which indicates a package deployment operation. Full trust packages are significant as they run with elevated privileges outside the normal AppX container restrictions, allowing them to access system resources that regular AppX packages cannot. Adversaries have been observed leveraging …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log AppXDeployment-Server 400",
              "q": "`powershell` EventCode=4104 dest=\"$dest$\" | stats count by ScriptBlockText",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log AppXDeployment-Server 400 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate applications may be deployed as full trust MSIX packages, especially line-of-business applications that require access to system resources. Microsoft Store applications, development tools, and enterprise applications may legitimately use full trust packages. Verify if the package is from a trusted source and signed by a trusted publisher before taking action. Review the package source URI and calling process to determine if the installation is expected in your environment.",
              "refs": "https://redcanary.com/blog/threat-intelligence/msix-installers/, https://redcanary.com/threat-detection-report/techniques/installer-packages/, https://learn.microsoft.com/en-us/windows/msix/desktop/desktop-to-uwp-behind-the-scenes, https://learn.microsoft.com/en-us/windows/msix/package/package-identity, https://attack.mitre.org/techniques/T1553/005/",
              "mitre": [
                "T1553.005",
                "T1204.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AppX Deployment Full Trust Package Installation\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log AppXDeployment-Server 400. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1553.005, T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AppX Deployment Full Trust Package Installation\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate applications may be deployed as full trust MSIX packages, especially line-of-business applications that require access to system resources. Microsoft Store applications, development tools, and enterprise applications may legitimately use full trust packages. Verify if the package is from a trusted source and signed by a trusted publisher before taking action. Review the package source URI and calling process to determine if the installation is expected in your environment.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. 
Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the installation of MSIX/AppX packages with full trust privileges so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.476",
              "n": "Windows AppX Deployment Package Installation Success",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects successful MSIX/AppX package installations on Windows systems by monitoring EventID 854 in the Microsoft-Windows-AppXDeployment-Server/Operational log. This event is generated when an MSIX/AppX package has been successfully installed on a system. While most package installations are legitimate, monitoring these events can help identify unauthorized or suspicious package installations, especially when correlated with other events such as unsigned package installations (Event…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log AppXDeployment-Server 854",
              "q": "source=\"XmlWinEventLog:Microsoft-Windows-AppXDeploymentServer/Operational\" EventCode=400 HasFullTrust=\"true\" host=\"$dest$\"",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file path entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log AppXDeployment-Server 854 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate MSIX/AppX package installations will trigger this detection. This is expected behavior and not necessarily indicative of malicious activity. This analytic is designed to provide visibility into package installations and should be used as part of a broader detection strategy. Consider correlating these events with other suspicious indicators such as unsigned packages or packages from unusual sources.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/appxpkg/troubleshooting, https://www.appdeploynews.com/packaging-types/msix/troubleshooting-an-msix-package/, https://www.advancedinstaller.com/msix-installation-or-launching-errors-and-fixes.html",
              "mitre": [
                "T1204.002"
              ],
              "dtype": "file_path",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AppX Deployment Package Installation Success\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file path entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log AppXDeployment-Server 854. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AppX Deployment Package Installation Success\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate MSIX/AppX package installations will trigger this detection. This is expected behavior and not necessarily indicative of malicious activity. This analytic is designed to provide visibility into package installations and should be used as part of a broader detection strategy. Consider correlating these events with other suspicious indicators such as unsigned packages or packages from unusual sources.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on this analytic detects successful MSIX/AppX package installations on Windows systems by monitoring EventID 854 in the Microsoft-Windows-AppXDeployment-Server/Operational log so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.477",
              "n": "Windows AppX Deployment Unsigned Package Installation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects attempts to install unsigned MSIX/AppX packages using the -AllowUnsigned parameter. This detection leverages Windows event logs from the AppXDeployment-Server, specifically focusing on EventID 603 which indicates the start of a deployment operation with specific deployment flags. The flag value 8388608 corresponds to the -AllowUnsigned option in PowerShell's Add-AppxPackage cmdlet. This activity is significant as adversaries have been observed leveraging unsigned M…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log AppXDeployment-Server 855",
              "q": "`powershell` EventCode=4104 dest=\"$dest$\" ScriptBlockText=\"*Add-AppxPackage*\" OR ScriptBlockText=\"*Add-AppPackage*\" | stats count by ScriptBlockText",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log AppXDeployment-Server 855 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate software development and testing activities may trigger this detection. Internal application development teams testing MSIX packages before signing or system administrators installing custom unsigned applications for business purposes may use the -AllowUnsigned parameter. Note that the -AllowUnsigned flag is only available on Windows 11 and later versions. Verify if the package installation is expected in your environment and if the calling process and user are authorized to install unsigned packages.",
              "refs": "https://learn.microsoft.com/en-us/powershell/module/appx/add-appxpackage, https://learn.microsoft.com/en-us/windows/msix/package/unsigned-package, https://redcanary.com/blog/threat-intelligence/msix-installers/, https://attack.mitre.org/techniques/T1553/005/",
              "mitre": [
                "T1553.005",
                "T1204.002"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AppX Deployment Unsigned Package Installation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log AppXDeployment-Server 855. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1553.005, T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AppX Deployment Unsigned Package Installation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate software development and testing activities may trigger this detection. Internal application development teams testing MSIX packages before signing or system administrators installing custom unsigned applications for business purposes may use the -AllowUnsigned parameter. Note that the -AllowUnsigned flag is only available on Windows 11 and later versions. Verify if the package installation is expected in your environment and if the calling process and user are authorized to install unsigned packages.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on attempts to install unsigned MSIX/AppX packages using the -AllowUnsigned parameter so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.478",
              "n": "Windows Archive Collected Data via Powershell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of PowerShell scripts to archive files into a temporary folder. It leverages PowerShell Script Block Logging, specifically monitoring for the `Compress-Archive` command targeting the `Temp` directory. This activity is significant as it may indicate an adversary's attempt to collect and compress data for exfiltration. If confirmed malicious, this behavior could lead to unauthorized data access and exfiltration, posing a severe risk to sensitive information a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this analytic, you will need to enable PowerShell Script Block Logging on some or all endpoints. Additional setup here https://help.splunk.com/en/security-offerings/splunk-user-behavior-analytics/get-data-in/5.4.1/add-other-data-to-splunk-uba/configure-powershell-logging-to-see-powershell-anomalies-in-splunk-uba.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "powershell may used this function to archive data.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-347a",
              "mitre": [
                "T1560"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Archive Collected Data via Powershell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1560. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Archive Collected Data via Powershell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: powershell may used this function to archive data.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Archive Collected Data via Powershell\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.479",
              "n": "Windows Archived Collected Data In TEMP Folder",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of archived files in a temporary folder, which may contain collected data. This behavior is often associated with malicious activity, where attackers compress sensitive information before exfiltration. The detection focuses on monitoring specific directories, such as temp folders, for the presence of newly created archive files (e.g., .zip, .rar, .tar). By identifying this pattern, security teams can quickly respond to potential data collection and exf…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://x.com/suyog41/status/1825869470323056748, https://g0njxa.medium.com/from-vietnam-to-united-states-malware-fraud-and-dropshipping-98b7a7b2c36d",
              "mitre": [
                "T1560"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Archived Collected Data In TEMP Folder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1560. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Archived Collected Data In TEMP Folder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Archived Collected Data In TEMP Folder\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.480",
              "n": "Windows AutoIt3 Execution",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows AutoIt3 Execution. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if the application is legitimately used, filter by user or endpoint as needed.",
              "refs": "https://github.com/PaloAltoNetworks/Unit42-timely-threat-intel/blob/main/2023-10-25-IOCs-from-DarkGate-activity.txt",
              "mitre": [
                "T1059"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AutoIt3 Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AutoIt3 Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if the application is legitimately used, filter by user or endpoint as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows AutoIt3 Execution\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.481",
              "n": "Windows Browser Process Launched with Unusual Flags",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of unusual browser flags, specifically --mute-audio and --do-not-elevate, which deviate from standard browser launch behavior. These flags may indicate automated scripts, testing environments, or attempts to modify browser functionality for silent operation or restricted privilege execution. Detection focuses on non-standard launch parameters, unexpected process behavior, or deviations from baseline configurations. Monitoring such flag usage helps identify …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible false positives will be present based on third party applications. Filtering may be needed.",
              "refs": "https://www.recordedfuture.com/research/from-castleloader-to-castlerat-tag-150-advances-operations, https://peter.sh/experiments/chromium-command-line-switches/",
              "mitre": [
                "T1185"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Browser Process Launched with Unusual Flags\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1185. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Browser Process Launched with Unusual Flags\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible false positives will be present based on third party applications. Filtering may be needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Browser Process Launched with Unusual Flags\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.482",
              "n": "Windows Cabinet File Extraction Via Expand",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Cabinet File Extraction Via Expand. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\",\"$dest$\") starthoursago=168\n        | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\"\n          values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\"\n          values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\"\n          by normalized_risk_object\n        | `security_content_ctime(firstTime)`\n        | `security_content_ctime(lastTime)`\n      earliest_offset: $info_min_time$\n      latest_offset: $info_max_time$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.zscaler.com/blogs/security-research/apt37-targets-windows-rust-backdoor-and-python-loader",
              "mitre": [
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Cabinet File Extraction Via Expand\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Cabinet File Extraction Via Expand\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Cabinet File Extraction Via Expand\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.483",
              "n": "Windows Change File Association Command To Notepad",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Change File Association Command To Notepad. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1546.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Change File Association Command To Notepad\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Change File Association Command To Notepad\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Change File Association Command To Notepad\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.484",
              "n": "Windows Chrome Auto-Update Disabled via Registry",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Chrome Auto-Update Disabled via Registry. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 13 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.gdatasoftware.com/blog/2025/11/38298-learning-about-browser-hijacking",
              "mitre": [
                "T1185"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Chrome Auto-Update Disabled via Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1185. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Chrome Auto-Update Disabled via Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Chrome Auto-Update Disabled via Registry\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.485",
              "n": "Windows Chrome Enable Extension Loading via Command-Line",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Chrome Enable Extension Loading via Command-Line. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Developers or IT admins loading unpacked extensions for testing or deployment purposes.",
              "refs": "https://www.gdatasoftware.com/blog/2025/11/38298-learning-about-browser-hijacking",
              "mitre": [
                "T1185"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Chrome Enable Extension Loading via Command-Line\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1185. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Chrome Enable Extension Loading via Command-Line\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Developers or IT admins loading unpacked extensions for testing or deployment purposes.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Chrome Enable Extension Loading via Command-Line\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.486",
              "n": "Windows Chrome Extension Allowed Registry Modification",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry keys that control the Chrome Extension Install Allowlist. Unauthorized changes to these keys may indicate attempts to bypass Chrome extension restrictions or install unapproved extensions. This detection helps identify potential security policy violations or malicious activity targeting Chrome extension settings.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate IT admin updates to Chrome extension allowlist via Group Policy or enterprise management tools. Filtering is needed.",
              "refs": "https://www.gdatasoftware.com/blog/2025/11/38298-learning-about-browser-hijacking",
              "mitre": [
                "T1185"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Chrome Extension Allowed Registry Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1185. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Chrome Extension Allowed Registry Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate IT admin updates to Chrome extension allowlist via Group Policy or enterprise management tools. Filtering is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Chrome Extension Allowed Registry Modification\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.487",
              "n": "Windows Chromium Browser No Security Sandbox Process",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Chromium Browser No Security Sandbox Process. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://unix.stackexchange.com/questions/68832/what-does-the-chromium-option-no-sandbox-mean",
              "mitre": [
                "T1497"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Chromium Browser No Security Sandbox Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1497. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Chromium Browser No Security Sandbox Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Chromium Browser No Security Sandbox Process\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.488",
              "n": "Windows Chromium Browser with Custom User Data Directory",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Chromium Browser with Custom User Data Directory. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://chromium.googlesource.com/chromium/src/+/main/docs/user_data_dir.md",
              "mitre": [
                "T1497"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Chromium Browser with Custom User Data Directory\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1497. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Chromium Browser with Custom User Data Directory\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scope ss_name to this detection rather than a bare wildcard):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Chromium Browser with Custom User Data Directory*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
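\n\nOptional raw-data spot-check\n\nA minimal sketch for previewing the underlying telemetry, not the packaged ESCU logic (view that in Content Management). It assumes an accelerated CIM Endpoint data model and a representative list of Chromium-based binaries; expect legitimate hits from test automation (for example Selenium) and kiosk setups that also pass this switch:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name IN (\"chrome.exe\", \"msedge.exe\", \"brave.exe\", \"opera.exe\")\n  AND Processes.process=\"*--user-data-dir=*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",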
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Chromium Browser with Custom User Data Directory\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.489",
              "n": "Windows Chromium Process Launched with Logging Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Chromium Process Launched with Logging Disabled. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.trendmicro.com/en_us/research/26/a/analysis-of-the-evelyn-stealer-campaign.html, https://peter.sh/experiments/chromium-command-line-switches/",
              "mitre": [
                "T1497"
              ],
              "dtype": "parent_process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Chromium Process Launched with Logging Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1497. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Chromium Process Launched with Logging Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scope ss_name to this detection rather than a bare wildcard):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Chromium Process Launched with Logging Disabled*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
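\n\nOptional raw-data spot-check\n\nA minimal sketch for previewing the underlying telemetry, not the packaged ESCU logic (view that in Content Management). It assumes an accelerated CIM Endpoint data model and keys on the documented `--disable-logging` switch; the upstream detection may match additional switch combinations, so consult the referenced Chromium switch list:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name IN (\"chrome.exe\", \"msedge.exe\", \"brave.exe\", \"opera.exe\")\n  AND Processes.process=\"*--disable-logging*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",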
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Chromium Process Launched with Logging Disabled\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.490",
              "n": "Windows Chromium Process Loaded Extension via Command-Line",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Chromium Process Loaded Extension via Command-Line. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Developers or IT admins loading unpacked extensions for testing or deployment purposes.",
              "refs": "https://www.gdatasoftware.com/blog/2025/11/38298-learning-about-browser-hijacking, https://peter.sh/experiments/chromium-command-line-switches/",
              "mitre": [
                "T1185"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Chromium Process Loaded Extension via Command-Line\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1185. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Chromium Process Loaded Extension via Command-Line\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Developers or IT admins loading unpacked extensions for testing or deployment purposes.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scope ss_name to this detection rather than a bare wildcard):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Chromium Process Loaded Extension via Command-Line*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
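\n\nOptional raw-data spot-check\n\nA minimal sketch for previewing the underlying telemetry, not the packaged ESCU logic (view that in Content Management). It assumes an accelerated CIM Endpoint data model; because developers loading unpacked extensions are the documented false positive, consider pairing this with a lookup of sanctioned developer hosts:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name IN (\"chrome.exe\", \"msedge.exe\", \"brave.exe\", \"opera.exe\")\n  AND Processes.process=\"*--load-extension=*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",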
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Chromium Process Loaded Extension via Command-Line\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.491",
              "n": "Windows ClipBoard Data via Get-ClipBoard",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the PowerShell command 'Get-Clipboard' to retrieve clipboard data. It leverages PowerShell Script Block Logging (EventCode 4104) to identify instances where this command is used. This activity is significant because it can indicate an attempt to steal sensitive information such as usernames, passwords, or other confidential data copied to the clipboard. If confirmed malicious, this behavior could lead to unauthorized access to sensitive information…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible there will be false positives, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1115/, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1115"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows ClipBoard Data via Get-ClipBoard\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1115. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows ClipBoard Data via Get-ClipBoard\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible there will be false positives, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scope ss_name to this detection rather than a bare wildcard):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*ClipBoard Data via Get-ClipBoard*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
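\n\nOptional raw-data spot-check\n\nA minimal sketch for previewing the raw Script Block Logging events, not the packaged ESCU logic (view that in Content Management). The index name is an assumption for a typical Windows deployment, and the fields shown are raw XML names before CIM mapping:\n\n```spl\nindex=wineventlog sourcetype=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\"\n  EventCode=4104 ScriptBlockText=\"*Get-Clipboard*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID, ScriptBlockText\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",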
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows ClipBoard Data via Get-ClipBoard\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.492",
              "n": "Windows Command and Scripting Interpreter Hunting Path Traversal",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Command and Scripting Interpreter Hunting Path Traversal. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time)\n    as lastTime FROM datamodel=Endpoint.Processes where\n    Processes.process IN (\"*\\\\..*\", \"*//..*\", \"*\\..*\", \"*/..*\")\n    by Processes.action Processes.dest Processes.original_file_name Processes.parent_process\n       Processes.parent_process_exec Processes.parent_process_guid Processes.parent_process_id\n       Processes.parent_process_name Processes.parent_process_path Processes.process\n       Processes.process_exec Processes.process_guid Processes.process_hash Processes.process_id\n       Processes.process_integrity_level Processes.process_name Processes.process_path\n       Processes.user Processes.user_id Processes.vendor_product",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://app.any.run/tasks/713f05d2-fe78-4b9d-a744-f7c133e3fafb/",
              "mitre": [
                "T1059"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Command and Scripting Interpreter Hunting Path Traversal\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Command and Scripting Interpreter Hunting Path Traversal\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on Windows Command and Scripting Interpreter Hunting Path Traversal so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.493",
              "n": "Windows Common Abused Cmd Shell Risk Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where four or more distinct detection analytics are associated with malicious command line behavior on a specific host. This detection leverages the Command Line Interface (CLI) data from various sources to identify suspicious activities. This behavior is significant as it often indicates attempts to execute malicious commands, access sensitive data, install backdoors, or perform other nefarious actions. If confirmed malicious, attackers could gain una…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present based on many factors. Tune the correlation as needed to reduce too many triggers.",
              "refs": "https://www.splunk.com/en_us/blog/security/from-macros-to-no-macros-continuous-malware-improvements-by-qakbot.html, https://www.splunk.com/en_us/blog/security/dark-crystal-rat-agent-deep-dive.html",
              "mitre": [
                "T1222",
                "T1049",
                "T1033",
                "T1529",
                "T1016",
                "T1059"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Common Abused Cmd Shell Risk Behavior\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222, T1049, T1033, T1529, T1016, T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Common Abused Cmd Shell Risk Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present based on many factors. Tune the correlation as needed to reduce too many triggers.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scope ss_name to this detection rather than a bare wildcard):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Common Abused Cmd Shell Risk Behavior*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
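\n\nPreviewing entities near the risk threshold\n\nTo see which hosts would already clear the four-distinct-analytics bar described above, a sketch in the same style as the drilldown search can be run ad hoc. It reuses the documented Risk data model fields; the packaged correlation applies its own source filtering on top of this:\n\n```spl\n| from datamodel Risk.All_Risk\n| stats dc(search_name) as distinct_detections values(search_name) as detections by normalized_risk_object\n| where distinct_detections >= 4\n```",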
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Common Abused Cmd Shell Risk Behavior\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.494",
              "n": "Windows Computer Account Requesting Kerberos Ticket",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a computer account requesting a Kerberos ticket, which is unusual as typically user accounts request these tickets. This detection leverages Windows Security Event Logs, specifically EventCode 4768, to identify instances where the TargetUserName ends with a dollar sign ($), indicating a computer account. This activity is significant because it may indicate the use of tools like KrbUpRelay or other Kerberos-based attacks. If confirmed malicious, this could allow att…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4768",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4768 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible false positives will be present based on third party applications. Filtering may be needed.",
              "refs": "https://github.com/Dec0ne/KrbRelayUp",
              "mitre": [
                "T1558"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Computer Account Requesting Kerberos Ticket\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4768. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Computer Account Requesting Kerberos Ticket\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible false positives will be present based on third party applications. Filtering may be needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (scope ss_name to this detection rather than a bare wildcard):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Computer Account Requesting Kerberos Ticket*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
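\n\nOptional raw-data spot-check\n\nA minimal sketch for confirming EventCode 4768 events are present, not the packaged ESCU logic (view that in Content Management). Index and sourcetype are assumptions for a typical Windows TA deployment, and the fields are raw XML names. Expect volume: computer accounts legitimately request tickets all the time, and the packaged search applies additional filtering before assigning risk, so treat this purely as a data-presence check:\n\n```spl\nindex=wineventlog sourcetype=\"XmlWinEventLog:Security\" EventCode=4768 TargetUserName=\"*$\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, TargetUserName, IpAddress\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",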
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Computer Account Requesting Kerberos Ticket\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.495",
              "n": "Windows ConsoleHost History File Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of the ConsoleHost_history.txt file, which stores command history for PowerShell sessions. Attackers may attempt to remove this file to cover their tracks and evade detection during post-exploitation activities. This detection focuses on file deletion commands executed via PowerShell, Command Prompt, or scripting languages that specifically target ConsoleHost_history.txt, typically located at %APPDATA%\\Microsoft\\Windows\\PowerShell\\PSReadline\\ConsoleHos…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 23, Sysmon EventID 26",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_id$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to ingest logs that include the deleted target file name, process name, and process ID from your endpoints. If you are using Sysmon, ensure you have at least version 2.0 of the Sysmon TA installed.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "An administrator may delete the ConsoleHost history file on a specific machine, potentially triggering this detection. However, this action is uncommon for regular users who are not accustomed to using the PowerShell command line",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-071a",
              "mitre": [
                "T1070.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows ConsoleHost History File Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 23, Sysmon EventID 26. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows ConsoleHost History File Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: An administrator may delete the ConsoleHost history file on a specific machine, potentially triggering this detection. However, this action is uncommon for regular users who are not accustomed to using the PowerShell command line\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows ConsoleHost History File Deletion\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.496",
              "n": "Windows Credential Access From Browser Password Store",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a possible non-common browser process accessing its browser user data profile. This tactic/technique has been observed in various Trojan Stealers, such as SnakeKeylogger, which attempt to gather sensitive browser information and credentials as part of their exfiltration strategy. Detecting this anomaly can serve as a valuable pivot for identifying processes that access lists of browser user data profiles unexpectedly. This detection uses a lookup file `browser_a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest Windows Security Event logs and track event code 4663. For 4663, enable \"Audit Object Access\" in Group Policy. Then check the two boxes listed for both \"Success\" and \"Failure.\" This search may trigger on a browser application that is not included in the browser_app_list lookup file.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The lookup file `browser_app_list` may not contain all the browser applications that are allowed to access the browser user data profiles. Consider updating the lookup files to add allowed object paths for the browser applications that are not included in the lookup file.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.404keylogger, https://www.checkpoint.com/cyber-hub/threat-prevention/what-is-malware/snake-keylogger-malware/",
              "mitre": [
                "T1012"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Credential Access From Browser Password Store\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Credential Access From Browser Password Store\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The lookup file `browser_app_list` may not contain all the browser applications that are allowed to access the browser user data profiles. Consider updating the lookup files to add allowed object paths for the browser applications that are not included in the lookup file.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Credential Access From Browser Password Store\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.497",
              "n": "Windows Credentials Access via VaultCli Module",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potentially abnormal interactions with VaultCLI.dll, particularly those initiated by processes located in publicly writable Windows folder paths. The VaultCLI.dll module allows processes to extract credentials from the Windows Credential Vault. It was seen being abused by information stealers such as Meduza. The analytic monitors suspicious API calls, unauthorized credential access patterns, and anomalous process behaviors indicative of malicious activity. By lever…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and imageloaded executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Third party software might leverage this DLL in order to make use of the Credential Manager feature via the provided exports. Typically the vaultcli.dll module is loaded by the vaultcmd.exe Windows Utility to interact with the Windows Credential Manager for secure storage and retrieval of credentials.",
              "refs": "https://hijacklibs.net/entries/microsoft/built-in/vaultcli.html, https://www.fortinet.com/blog/threat-research/exploiting-cve-2024-21412-stealer-campaign-unleashed, https://cert.gov.ua/article/6276652, https://cert.gov.ua/article/6281018, https://g0njxa.medium.com/approaching-stealers-devs-a-brief-interview-with-meduza-f1bbd2efb84f",
              "mitre": [
                "T1555.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Credentials Access via VaultCli Module\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1555.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Credentials Access via VaultCli Module\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Third party software might leverage this DLL in order to make use of the Credential Manager feature via the provided exports. Typically the vaultcli.dll module is loaded by the vaultcmd.exe Windows Utility to interact with the Windows Credential Manager for secure storage and retrieval of credentials.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Credentials Access via VaultCli Module\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.498",
              "n": "Windows Credentials from Password Stores Chrome Extension Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects non-Chrome processes attempting to access the Chrome extensions file. It leverages Windows Security Event logs, specifically event code 4663, to identify this behavior. This activity is significant because adversaries may exploit this file to extract sensitive information from the Chrome browser, posing a security risk. If confirmed malicious, this could lead to unauthorized access to stored credentials and other sensitive data, potentially compromising the securit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest Windows Security Event logs and track event code 4663. For 4663, enable \"Audit Object Access\" in Group Policy. Then check the two boxes listed for both \"Success\" and \"Failure.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Uninstall chrome browser extension application may access this file and folder path to removed chrome installation in the target host. Filter is needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.redline_stealer",
              "mitre": [
                "T1012"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Credentials from Password Stores Chrome Extension Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Credentials from Password Stores Chrome Extension Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Uninstall chrome browser extension application may access this file and folder path to removed chrome installation in the target host. Filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Credentials from Password Stores Chrome Extension Ac\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.499",
              "n": "Windows Credentials from Password Stores Chrome LocalState Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects non-Chrome processes accessing the Chrome \"Local State\" file, which contains critical settings and information. It leverages Windows Security Event logs, specifically event code 4663, to identify this behavior. This activity is significant because threat actors can exploit this file to extract the encrypted master key used for decrypting saved passwords in Chrome. If confirmed malicious, this could lead to unauthorized access to sensitive information, posing a seve…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest Windows Security Event logs and track event code 4663. For 4663, enable \"Audit Object Access\" in Group Policy. Then check the two boxes listed for both \"Success\" and \"Failure.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Uninstall chrome application may access this file and folder path to removed chrome installation in target host. Filter is needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.redline_stealer",
              "mitre": [
                "T1012"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Credentials from Password Stores Chrome LocalState Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Credentials from Password Stores Chrome LocalState Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Uninstall chrome application may access this file and folder path to removed chrome installation in target host. Filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Credentials from Password Stores Chrome LocalState A\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.500",
              "n": "Windows Credentials from Password Stores Chrome Login Data Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies non-Chrome processes accessing the Chrome user data file \"login data.\" This file is an SQLite database containing sensitive information, including saved passwords. The detection leverages Windows Security Event logs, specifically event code 4663, to monitor access attempts. This activity is significant as it may indicate attempts by threat actors to extract and decrypt stored passwords, posing a risk to user credentials. If confirmed malicious, attackers could g…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest Windows Security Event logs and track event code 4663. For 4663, enable \"Audit Object Access\" in Group Policy. Then check the two boxes listed for both \"Success\" and \"Failure.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Uninstall application may access this registry to remove the entry of the target application. filter is needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.redline_stealer",
              "mitre": [
                "T1012"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Credentials from Password Stores Chrome Login Data Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Credentials from Password Stores Chrome Login Data Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Uninstall application may access this registry to remove the entry of the target application. filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Credentials from Password Stores Chrome Login Data A\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.501",
              "n": "Windows Curl Download to Suspicious Path",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Curl Download to Suspicious Path. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/, https://attack.mitre.org/techniques/T1105/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1105/T1105.md",
              "mitre": [
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Curl Download to Suspicious Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Curl Download to Suspicious Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Curl Download to Suspicious Path\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.502",
              "n": "Windows Curl Upload to Remote Destination",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Windows Curl.exe to upload a file to a remote destination. It identifies command-line arguments such as `-T`, `--upload-file`, `-d`, `--data`, and `-F` in process execution logs. This activity is significant because adversaries may use Curl to exfiltrate data or upload malicious payloads. If confirmed malicious, this could lead to data breaches or further compromise of the system. Analysts should review parallel processes and network logs to determine if…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be limited to source control applications and may be required to be filtered out.",
              "refs": "https://everything.curl.dev/usingcurl/uploads, https://techcommunity.microsoft.com/t5/containers/tar-and-curl-come-to-windows/ba-p/382409, https://twitter.com/d1r4c/status/1279042657508081664?s=20",
              "mitre": [
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Curl Upload to Remote Destination\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Curl Upload to Remote Destination\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be limited to source control applications and may be required to be filtered out.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
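              "_spl_sketch": "Hypothetical companion field (not in the documented schema; added for illustration). The q field above holds only the RBA Risk drilldown, so this is a minimal raw-SPL sketch of the kind of logic the underlying ESCU correlation search applies: curl.exe launched with the upload switches named in the description. Index scope, CIM field names, and match patterns are assumptions, not the shipped search; open the Correlation Search in Content Management for the authoritative SPL.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=curl.exe\n    Processes.process IN (\"* -T *\", \"*--upload-file *\", \"* -d *\", \"*--data *\", \"* -F *\")\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nThe flag list mirrors the upload switches named in the description; padding the single-letter flags with spaces cuts accidental substring matches.",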
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Curl Upload to Remote Destination\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.503",
              "n": "Windows Default RDP File Creation By Non MSTSC Process",
              "c": "high",
              "f": "intermediate",
              "v": "This detection monitors the creation or modification of the Default.rdp file by non mstsc.exe process, typically found in the user's Documents folder. This file is automatically generated or updated by the Remote Desktop Connection client (mstsc.exe) when a user initiates an RDP session. It stores connection settings such as the last-used hostname, screen size, and other preferences. The presence or update of this file strongly suggests that an RDP session has been launched from the system. Sinc…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present, filter as needed or restrict to critical assets on the perimeter.",
              "refs": "https://medium.com/@bonguides25/how-to-clear-rdp-connections-history-in-windows-cf0ffb67f344, https://thelocalh0st.github.io/posts/rdp/, https://iam0xc4t.medium.com/rogue-rdp-via-spear-phishing-initial-access-tactic-d7be328a0b13",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Default RDP File Creation By Non MSTSC Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Default RDP File Creation By Non MSTSC Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present, filter as needed or restrict to critical assets on the perimeter.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
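              "_spl_sketch": "Hypothetical companion field (not in the documented schema; added for illustration). The q field above holds only the RBA Risk drilldown, so this is a minimal raw-SPL sketch of the kind of logic the underlying ESCU correlation search applies: Sysmon file-creation events for Default.rdp written by anything other than mstsc.exe. Index, source, and field names assume Sysmon XML events searchable with their native fields (Image, TargetFilename); adjust to your deployment, and open the Correlation Search in Content Management for the authoritative SPL.\n\n```spl\nindex=* source=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=11\n  TargetFilename=\"*Default.rdp\" NOT Image=\"*mstsc.exe\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, Image, TargetFilename\n```\n\nRestrict index/source to your Sysmon feed; EventCode 11 is FileCreate, and the NOT clause excludes the legitimate mstsc.exe writer.",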
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Default RDP File Creation By Non MSTSC Process\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.504",
              "n": "Windows Default Rdp File Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the deletion of the Default.rdp file from a user’s Documents folder. This file is automatically created or updated by the Remote Desktop Connection client (mstsc.exe) whenever a user initiates an RDP session. It contains session configuration data, such as the remote hostname and display settings. While the presence of this file is normal during legitimate RDP usage, its deletion may indicate an attempt to conceal evidence of remote access activity. Threat actors and re…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 23, Sysmon EventID 26",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to ingest logs that include the deleted target file name, process name, and process ID from your endpoints. If you are using Sysmon, ensure you have at least version 2.0 of the Sysmon TA installed.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://medium.com/@bonguides25/how-to-clear-rdp-connections-history-in-windows-cf0ffb67f344, https://thelocalh0st.github.io/posts/rdp/",
              "mitre": [
                "T1070.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Default Rdp File Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 23, Sysmon EventID 26. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Default Rdp File Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
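              "_spl_sketch": "Hypothetical companion field (not in the documented schema; added for illustration). The q field above holds only the RBA Risk drilldown, so this is a minimal raw-SPL sketch of the kind of logic the underlying ESCU correlation search applies: Sysmon file-deletion events targeting Default.rdp. Index, source, and field names assume Sysmon XML events searchable with their native fields; adjust to your deployment, and open the Correlation Search in Content Management for the authoritative SPL.\n\n```spl\nindex=* source=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode IN (23, 26)\n  TargetFilename=\"*Default.rdp\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, Image, TargetFilename, EventCode\n```\n\nEventCode 23/26 are Sysmon FileDelete/FileDeleteDetected; scope Computer or Image to perimeter and high-value assets to control noise.",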
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Default Rdp File Deletion\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.505",
              "n": "Windows Default Rdp File Unhidden",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the use of attrib.exe to remove hidden (-h) or system (-s) attributes from the Default.rdp file, which is automatically created in a user's Documents folder when a Remote Desktop Protocol (RDP) session is initiated using mstsc.exe. The Default.rdp file stores session configuration details such as the remote host address and screen settings. Unhiding this file is uncommon in normal user behavior and may indicate that an attacker or red team operator is attempting to acce…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://medium.com/@bonguides25/how-to-clear-rdp-connections-history-in-windows-cf0ffb67f344, https://thelocalh0st.github.io/posts/rdp/",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Default Rdp File Unhidden\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Default Rdp File Unhidden\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
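              "_spl_sketch": "Hypothetical companion field (not in the documented schema; added for illustration). The q field above holds only the RBA Risk drilldown, so this is a minimal raw-SPL sketch of the kind of logic the underlying ESCU correlation search applies: attrib.exe removing the hidden attribute from Default.rdp. Index scope, CIM field names, and match patterns are assumptions, not the shipped search; open the Correlation Search in Content Management for the authoritative SPL.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=attrib.exe\n    Processes.process=\"* -h *\" Processes.process=\"*Default.rdp*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nattrib.exe with -h (and often -s) on Default.rdp is rare in normal use; keep the pattern broad and review hits manually at first.",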
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Default Rdp File Unhidden\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.506",
              "n": "Windows Defender ASR or Threat Configuration Tamper",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Defender ASR or Threat Configuration Tamper. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://learn.microsoft.com/en-us/powershell/module/defender/set-mppreference?view=windowsserver2025-ps#-attacksurfacereductionrules-actions, https://www.virustotal.com/gui/file/7e805617c313ec2fb59d86719c827074cb7dfbf8f0aa18194ac1ffe6c21c8967/behavior",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Defender ASR or Threat Configuration Tamper\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Defender ASR or Threat Configuration Tamper\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
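              "_spl_sketch": "Hypothetical companion field (not in the documented schema; added for illustration). The q field above holds only the RBA Risk drilldown, so this is a minimal raw-SPL sketch of the kind of logic the underlying ESCU correlation search applies: command lines invoking Defender preference cmdlets with ASR or threat-action parameters. Cmdlet and parameter substrings come from the linked Set-MpPreference reference; index scope, CIM field names, and patterns are assumptions, not the shipped search.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process IN (\"*Set-MpPreference*\", \"*Add-MpPreference*\")\n    Processes.process IN (\"*AttackSurfaceReductionRules_Actions*\", \"*ThreatIDDefaultAction*\")\n  by Processes.dest Processes.user Processes.process_name Processes.process\n```\n\nExtend the second IN list with other Defender-weakening parameters you care about, then open the Correlation Search in Content Management for the authoritative SPL.",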
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Defender ASR or Threat Configuration Tamper\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.507",
              "n": "Windows Disable Internet Explorer Addons",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Disable Internet Explorer Addons. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://www.hybrid-analysis.com/sample/e285feeca968b3ca22017a64363eea5e69ccd519696671df523291b089597875/588175f1aac2edf92bbed32f",
              "mitre": [
                "T1176.001"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Disable Internet Explorer Addons\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1176.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Disable Internet Explorer Addons\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
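              "_spl_sketch": "Hypothetical companion field (not in the documented schema; added for illustration). The q field above holds only the RBA Risk drilldown, so this is a minimal raw-SPL sketch of one plausible angle for the underlying behavior: reg.exe tampering with the Internet Explorer 'Enable Browser Extensions' value. The shipped ESCU search may key on different indicators, and index scope, CIM field names, and patterns here are assumptions; open the Correlation Search in Content Management for the authoritative SPL.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=reg.exe\n    Processes.process=\"*Internet Explorer*\" Processes.process=\"*Enable Browser Extensions*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nVerify against the real detection before relying on this sketch, and allowlist administrative configuration tooling per the known false positives above.",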
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Disable Internet Explorer Addons\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.508",
              "n": "Windows DISM Install PowerShell Web Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the installation of PowerShell Web Access using the Deployment Image Servicing and Management (DISM) tool. It leverages Sysmon EventID 1 to identify the execution of `dism.exe` with specific parameters related to enabling the WindowsPowerShellWebAccess feature. This activity is significant because enabling PowerShell Web Access can facilitate remote execution of PowerShell commands, potentially allowing an attacker to gain unauthorized access to systems and network…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4688, Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4688, Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators using the DISM tool to update and install Windows features may cause false positives that can be filtered with `windows_dism_install_powershell_web_access_filter`.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa24-241a, https://gist.github.com/MHaggis/7e67b659af9148fa593cf2402edebb41",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DISM Install PowerShell Web Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4688, Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DISM Install PowerShell Web Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators using the DISM tool to update and install Windows features may cause false positives that can be filtered with `windows_dism_install_powershell_web_access_filter`.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
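              "_spl_sketch": "Illustrative sketch only, not the shipped ESCU logic (view that in Content Management). Assuming CIM-normalized Endpoint.Processes data, the underlying behavior could be approximated as:\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=\"dism.exe\" AND Processes.process=\"*WindowsPowerShellWebAccess*\" by Processes.dest Processes.user Processes.process",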
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows DISM Install PowerShell Web Access\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.509",
              "n": "Windows DLL Module Loaded in Temp Dir",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where a Dynamic Link Library (DLL) is loaded from a temporary directory on a Windows system. Loading DLLs from non-standard paths such as %TEMP% is uncommon for legitimate applications and is often associated with adversary tradecraft, including DLL search order hijacking, side-loading, or execution of malicious payloads staged in temporary folders. Adversaries frequently leverage these directories because they are writable by standard users and often ove…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "`sysmon` EventCode=7 NOT (ImageLoaded IN(\"C:\\\\Program Files*\")) AND ImageLoaded=\"*\\\\temp\\\\*\" AND ImageLoaded=\"*.dll\" | fillnull | stats count min(_time) as firstTime max(_time) as lastTime by Image ImageLoaded dest loaded_file loaded_file_path original_file_name process_exec process_guid process_hash process_id process_name process_path service_dll_signature_exists service_dll_signature_verified signature signature_id user_id vendor_product | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_dll_module_loaded_in_temp_dir_filter`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and imageloaded executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-203a, https://blog.sekoia.io/interlock-ransomware-evolving-under-the-radar/",
              "mitre": [
                "T1105"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DLL Module Loaded in Temp Dir\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DLL Module Loaded in Temp Dir\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on instances where a Dynamic Link Library (DLL) is loaded from a temporary directory on a Windows system so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.510",
              "n": "Windows DNS Query Request To TinyUrl",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows DNS Query Request To TinyUrl. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://x.com/Unit42_Intel/status/1919418143476199869",
              "mitre": [
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DNS Query Request To TinyUrl\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DNS Query Request To TinyUrl\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Query Request To TinyUrl\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.511",
              "n": "Windows DnsAdmins New Member Added",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of a new member to the DnsAdmins group in Active Directory by leveraging Event ID 4732. This detection uses security event logs to identify changes to this high-privilege group. Monitoring this activity is crucial because members of the DnsAdmins group can manage the DNS service, often running on Domain Controllers, and potentially execute malicious code with SYSTEM privileges. If confirmed malicious, this activity could allow an attacker to escalate p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4732",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4732 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "New members can be added to the DnsAdmins group as part of legitimate administrative tasks. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/, https://www.ired.team/offensive-security-experiments/active-directory-kerberos-abuse/from-dnsadmins-to-system-to-domain-compromise, https://www.hackingarticles.in/windows-privilege-escalation-dnsadmins-to-domainadmin/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4732",
              "mitre": [
                "T1098"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DnsAdmins New Member Added\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4732. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DnsAdmins New Member Added\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: New members can be added to the DnsAdmins group as part of legitimate administrative tasks. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows DnsAdmins New Member Added\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.512",
              "n": "Windows Domain Account Discovery Via Get-NetComputer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the PowerView PowerShell cmdlet Get-NetComputer, which is used to query Active Directory for user account details such as \"samaccountname,\" \"accountexpires,\" \"lastlogon,\" and more. It leverages Event ID 4104 from PowerShell Script Block Logging to identify this activity. This behavior is significant as it may indicate an attempt to gather user account information, which is often a precursor to further malicious actions. If confirmed malicious, this…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage PowerView for legitimate purposes, filter as needed.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-347a",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Domain Account Discovery Via Get-NetComputer\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Domain Account Discovery Via Get-NetComputer\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage PowerView for legitimate purposes, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Domain Account Discovery Via Get-NetComputer\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.513",
              "n": "Windows Domain Admin Impersonation Indicator",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential Kerberos ticket forging attacks, specifically the Diamond Ticket attack. This is detected when a user logs into a host and the GroupMembership field in event 4627 indicates a privileged group (e.g., Domain Admins), but the user does not actually belong to that group in the directory service. The detection leverages Windows Security Event Log 4627, which logs account logon events. The analytic cross-references the GroupMembership field from the event ag…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4627",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$TargetUserName$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4627 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may trigger the detections certain scenarios like directory service delays or out of date lookups. Filter as needed.",
              "refs": "https://trustedsec.com/blog/a-diamond-in-the-ruff, https://unit42.paloaltonetworks.com/next-gen-kerberos-attacks, https://github.com/GhostPack/Rubeus/pull/136, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4627",
              "mitre": [
                "T1558"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Domain Admin Impersonation Indicator\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4627. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Domain Admin Impersonation Indicator\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may trigger the detections certain scenarios like directory service delays or out of date lookups. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Domain Admin Impersonation Indicator\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.514",
              "n": "Windows DotNet Binary in Non Standard Path",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows DotNet Binary in Non Standard Path. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present and filtering may be required. Certain utilities will run from non-standard paths based on the third-party application in use.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1036.003/T1036.003.yaml, https://attack.mitre.org/techniques/T1036/003/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.004/T1218.004.md",
              "mitre": [
                "T1036.003",
                "T1218.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows DotNet Binary in Non Standard Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.003, T1218.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows DotNet Binary in Non Standard Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present and filtering may be required. Certain utilities will run from non-standard paths based on the third-party application in use.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows DotNet Binary in Non Standard Path\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.515",
              "n": "Windows Driver Inventory",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies drivers being loaded across the fleet. It leverages a PowerShell script input deployed to critical systems to capture driver data. This detection is significant as it helps monitor for unauthorized or malicious drivers that could compromise system integrity. If confirmed malicious, such drivers could allow attackers to execute arbitrary code, escalate privileges, or maintain persistence within the environment.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`driverinventory`\n      | stats values(Path) min(_time) as firstTime max(_time) as lastTime count\n        BY host DriverType\n      | rename host as dest\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_driver_inventory_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Filter and modify the analytic as you'd like. Filter based on path. Remove the system32\\drivers and look for non-standard paths.",
              "refs": "https://gist.github.com/MHaggis/3e4dc85c69b3f7a4595a06c8a692f244",
              "mitre": [
                "T1068"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Driver Inventory\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Driver Inventory\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Filter and modify the analytic as you'd like. Filter based on path. Remove the system32\\drivers and look for non-standard paths.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on drivers being loaded across the fleet so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.516",
              "n": "Windows Enable PowerShell Web Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the enabling of PowerShell Web Access via PowerShell commands. It leverages PowerShell script block logging (EventCode 4104) to identify the execution of the `Install-WindowsFeature` cmdlet with the `WindowsPowerShellWebAccess` parameter. This activity is significant because enabling PowerShell Web Access can facilitate remote execution of PowerShell commands, potentially allowing an attacker to gain unauthorized access to systems and networks.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that legitimate scripts or network administrators may enable PowerShell Web Access. Monitor and escalate as needed.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa24-241a, https://gist.github.com/MHaggis/7e67b659af9148fa593cf2402edebb41",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Enable PowerShell Web Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Enable PowerShell Web Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that legitimate scripts or network administrators may enable PowerShell Web Access. Monitor and escalate as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Enable PowerShell Web Access\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.517",
              "n": "Windows Event For Service Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a Windows service is modified from a start type to disabled. It leverages system event logs, specifically EventCode 7040, to identify this change. This activity is significant because adversaries often disable security or other critical services to evade detection and maintain control over a compromised host. If confirmed malicious, this action could allow attackers to bypass security defenses, leading to further exploitation and persistence within the environ…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7040",
              "q": "`wineventlog_system` EventCode=7040  EventData_Xml=\"*disabled*\"\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY Computer EventCode Name\n           UserID service ServiceName\n      | rename Computer as dest\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_event_for_service_disabled_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log System 7040 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Windows service update may cause this event. In that scenario, filtering is needed.",
              "refs": "https://blog.talosintelligence.com/2018/02/olympic-destroyer.html",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Event For Service Disabled\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7040. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Event For Service Disabled\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Windows service update may cause this event. In that scenario, filtering is needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on when a Windows service is modified from a start type to disabled so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.518",
              "n": "Windows Event Log Cleared",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the clearing of Windows event logs by identifying Windows Security Event ID 1102 or System log event 104. This detection leverages Windows event logs to monitor for log clearing activities. Such behavior is significant as it may indicate an attempt to cover tracks after malicious activities. If confirmed malicious, this action could hinder forensic investigations and allow attackers to persist undetected, making it crucial to investigate further and correlate with …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 1102, Windows Event Log System 104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 1102, Windows Event Log System 104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that these logs may be legitimately cleared by Administrators. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-1102, https://www.ired.team/offensive-security/defense-evasion/disabling-windows-event-logs-by-suspending-eventlog-service-threads, https://attack.mitre.org/techniques/T1070/001/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1070.001/T1070.001.md",
              "mitre": [
                "T1070.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Event Log Cleared\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 1102, Windows Event Log System 104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Event Log Cleared\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that these logs may be legitimately cleared by Administrators. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Event Log Cleared\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.519",
              "n": "Windows Event Logging Service Has Shutdown",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the shutdown of the Windows Event Log service by leveraging Windows Event ID 1100. This event is logged every time the service stops, including during normal system shutdowns. Monitoring this activity is crucial as it can indicate attempts to cover tracks or disable logging. If confirmed malicious, an attacker could hide their activities, making it difficult to trace their actions and investigate further incidents. Analysts should verify if the shutdown was planned…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 1100",
              "q": "`wineventlog_security` EventCode=1100\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY action app change_type\n           dest dvc name\n           object_attrs object_category service\n           service_name signature signature_id\n           status subject vendor_product\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_event_logging_service_has_shutdown_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log Security 1100 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible the Event Logging service gets shut down due to system errors or legitimate administration tasks. Investigate the cause of this issue and apply additional filters as needed.",
              "refs": "https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-1100, https://www.ired.team/offensive-security/defense-evasion/disabling-windows-event-logs-by-suspending-eventlog-service-threads, https://attack.mitre.org/techniques/T1070/001/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1070.001/T1070.001.md",
              "mitre": [
                "T1070.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Event Logging Service Has Shutdown\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 1100. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Event Logging Service Has Shutdown\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible the Event Logging service gets shut down due to system errors or legitimate administration tasks. Investigate the cause of this issue and apply additional filters as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the shutdown of the Windows Event Log service by leveraging Windows Event ID 1100 so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.520",
              "n": "Windows Event Triggered Image File Execution Options Injection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation or modification of Image File Execution Options (IFEO) registry keys, detected via EventCode 3000 in the Application channel. This detection leverages Windows Event Logs to monitor for process names added to IFEO under specific registry paths. This activity is significant as it can indicate attempts to set traps for process monitoring or debugging, often used by attackers for persistence or evasion. If confirmed malicious, this could allow an attack…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Application 3000",
              "q": "`wineventlog_application` EventCode=3000\n      | rename param1 AS \"Process\" param2 AS \"Exit_Code\"\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY Process Exit_Code dest\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_event_triggered_image_file_execution_options_injection_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log Application 3000 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present and tuning will be required before turning into a finding or intermediate finding.",
              "refs": "https://blog.thinkst.com/2022/09/sensitive-command-token-so-much-offense.html, https://learn.microsoft.com/en-us/windows-hardware/drivers/debugger/registry-entries-for-silent-process-exit",
              "mitre": [
                "T1546.012"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Event Triggered Image File Execution Options Injection\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Application 3000. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Event Triggered Image File Execution Options Injection\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present and tuning will be required before turning into a finding or intermediate finding.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the creation or modification of Image File Execution Options (IFEO) registry keys, detected via EventCode 3000 in the Application channel so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.521",
              "n": "Windows Eventlog Cleared Via Wevtutil",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Eventlog Cleared Via Wevtutil. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "The wevtutil.exe application is a legitimate Windows event log utility. Administrators may use it to manage Windows event logs.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1070.001/T1070.001.md",
              "mitre": [
                "T1070.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Eventlog Cleared Via Wevtutil\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Eventlog Cleared Via Wevtutil\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: The wevtutil.exe application is a legitimate Windows event log utility. Administrators may use it to manage Windows event logs.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Eventlog Cleared Via Wevtutil\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.522",
              "n": "Windows EventLog Recon Activity Using Log Query Utilities",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows EventLog Recon Activity Using Log Query Utilities. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "http://blog.talosintelligence.com/2022/09/lazarus-three-rats.html, https://thedfirreport.com/2023/10/30/netsupport-intrusion-results-in-domain-compromise/, https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-144a, https://www.group-ib.com/blog/apt41-world-tour-2021/, https://labs.withsecure.com/content/dam/labs/docs/f-secureLABS-tlp-white-lazarus-threat-intel-report2.pdf, https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.diagnostics/get-winevent, https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/get-eventlog, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/wevtutil",
              "mitre": [
                "T1654"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows EventLog Recon Activity Using Log Query Utilities\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1654. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows EventLog Recon Activity Using Log Query Utilities\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows EventLog Recon Activity Using Log Query Utilities\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.523",
              "n": "Windows Excel ActiveMicrosoftApp Child Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of the ActiveMicrosoftApp process as a child of Microsoft Excel. Under normal conditions, Excel primarily spawns internal Office-related processes, and the creation of ActiveMicrosoftApp is uncommon in day-to-day business workflows. Adversaries may abuse this behavior to blend malicious activity within trusted applications, execute unauthorized code, or bypass application control mechanisms. This technique aligns with common tradecraft where Office…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Microsoft Project has been discontinued since January 2010, so its presence is unlikely in modern environments. If a related child process is observed, verify its legitimacy to rule out potential misuse.",
              "refs": "https://specterops.io/blog/2023/10/30/lateral-movement-abuse-the-power-of-dcom-excel-application/, https://blog.talosintelligence.com/pathwiper-targets-ukraine/, https://www.trellix.com/blogs/research/dcom-abuse-and-network-erasure-with-trellix-ndr/",
              "mitre": [
                "T1021.003"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Excel ActiveMicrosoftApp Child Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Excel ActiveMicrosoftApp Child Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Microsoft Project has been discontinued since January 2010, so its presence is unlikely in modern environments. If a related child process is observed, verify its legitimacy to rule out potential misuse.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Excel ActiveMicrosoftApp Child Process\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.524",
              "n": "Windows Excessive Disabled Services Event",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies an excessive number of system events where services are modified from start to disabled. It leverages Windows Event Logs (EventCode 7040) to detect multiple service state changes on a single host. This activity is significant as it may indicate an adversary attempting to disable security applications or other critical services, potentially leading to defense evasion or destructive actions. If confirmed malicious, this behavior could allow attackers to disable se…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7040",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log System 7040 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://blog.talosintelligence.com/2018/02/olympic-destroyer.html",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Excessive Disabled Services Event\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7040. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Excessive Disabled Services Event\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Excessive Disabled Services Event\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.525",
              "n": "Windows Executable Masquerading as Benign File Types",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Executable Masquerading as Benign File Types. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 29",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file path entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 29 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.linkedin.com/posts/mauricefielenbach_cybersecurity-incidentresponse-dfir-activity-7394805779448418304-g0gZ?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAuFTjIB5weY_kcyu4qp3kHbI4v49tO0zEk, https://www.blackhillsinfosec.com/a-sysmon-event-id-breakdown/, https://thedfirreport.com/2023/10/30/netsupport-intrusion-results-in-domain-compromise/, https://www.esentire.com/blog/evalusion-campaign-delivers-amatera-stealer-and-netsupport-rat",
              "mitre": [
                "T1036.008"
              ],
              "dtype": "file_path",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Executable Masquerading as Benign File Types\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file path entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 29. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.008. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Executable Masquerading as Benign File Types\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Executable Masquerading as Benign File Types\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.526",
              "n": "Windows Execution of Microsoft MSC File In Suspicious Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a Microsoft Management Console (MMC) process executes an .msc file in a suspicious path on a Windows system. While .msc files are legitimate components used for system administration, unexpected execution of these files by non-administrative processes or in unusual contexts can indicate malicious activity, such as living-off-the-land attacks, persistence mechanisms, or automated administrative abuse. This detection monitors process creation events, command-lin…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A possible false positive (FP) for the execution of .msc files is legitimate administrative activity, since .msc files are standard Microsoft Management Console snap-ins used for system administration.",
              "refs": "https://www.securonix.com/blog/analyzing-fluxconsole-using-tax-themed-lures-threat-actors-exploit-windows-management-console-to-deliver-backdoor-payloads/, https://research.checkpoint.com/2019/microsoft-management-console-mmc-vulnerabilities/",
              "mitre": [
                "T1218.014"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Execution of Microsoft MSC File In Suspicious Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.014. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Execution of Microsoft MSC File In Suspicious Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A possible false positive (FP) for the execution of .msc files is legitimate administrative activity, since .msc files are standard Microsoft Management Console snap-ins used for system administration.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Execution of Microsoft MSC File In Suspicious Path\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.527",
              "n": "Windows Exfiltration Over C2 Via Invoke RestMethod",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential data exfiltration using PowerShell's Invoke-RestMethod. It leverages PowerShell Script Block Logging to identify scripts that attempt to upload files via HTTP POST requests. This activity is significant as it may indicate an attacker is exfiltrating sensitive data, such as desktop screenshots or files, to an external command and control (C2) server. If confirmed malicious, this could lead to data breaches, loss of sensitive information, and further compro…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited. Filter as needed.",
              "refs": "https://twitter.com/_CERT_UA/status/1620781684257091584, https://cert.gov.ua/article/3761104",
              "mitre": [
                "T1041"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Exfiltration Over C2 Via Invoke RestMethod\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1041. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Exfiltration Over C2 Via Invoke RestMethod\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Exfiltration Over C2 Via Invoke RestMethod\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.528",
              "n": "Windows File and Directory Enable ReadOnly Permissions",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where file or folder permissions are modified to grant read-only access. Such changes are characterized by the presence of read-related permissions (e.g., R, REA, RA, RD) and the absence of write (W) or execute (E) permissions. Monitoring these events is crucial for tracking access control changes that could be intentional for restricting access or indicative of malicious behavior. Alerts generated by this detection help ensure that legitimate security me…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or administrative scripts may use this application. Filter as needed.",
              "refs": "https://www.splunk.com/en_us/blog/security/-applocker-rules-as-defense-evasion-complete-analysis.html",
              "mitre": [
                "T1222.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows File and Directory Enable ReadOnly Permissions\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows File and Directory Enable ReadOnly Permissions\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or administrative scripts may use this application. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows File and Directory Enable ReadOnly Permissions\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.529",
              "n": "Windows File and Directory Permissions Enable Inheritance",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the enabling of permission inheritance using ICACLS. This analytic identifies instances where ICACLS commands are used to enable permission inheritance on files or directories. The /inheritance:e flag, which restores inherited permissions from a parent directory, is monitored to detect changes that might reapply broader access control settings. Enabling inheritance can indicate legitimate administrative actions but may also signal attempts to override restrictive c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or administrative scripts may use this application. Filter as needed.",
              "refs": "https://www.splunk.com/en_us/blog/security/-applocker-rules-as-defense-evasion-complete-analysis.html",
              "mitre": [
                "T1222.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows File and Directory Permissions Enable Inheritance\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows File and Directory Permissions Enable Inheritance\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or administrative scripts may use this application. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. 
To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nIf you have scheduled the hunting search, verify it is firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows File and Directory Permissions Enable Inheritance\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.530",
              "n": "Windows File Collection Via Copy Utilities",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Windows command-line copy utilities, such as xcopy, to systematically collect files from user directories and consolidate them into a centralized location on the system. This activity is often indicative of malicious behavior, as threat actors frequently use such commands to gather sensitive information, including documents with .doc, .docx, and .pdf extensions. The detection focuses on identifying recursive copy operations targeting user folders, such a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may execute this command for testing or auditing.",
              "refs": "https://cert.gov.ua/article/6284730",
              "mitre": [
                "T1119"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows File Collection Via Copy Utilities\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1119. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows File Collection Via Copy Utilities\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may execute this command for testing or auditing.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows File Collection Via Copy Utilities\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.531",
              "n": "Windows File Download Via PowerShell",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows File Download Via PowerShell. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://learn.microsoft.com/en-us/dotnet/api/system.net.webclient?view=net-9.0#methods, https://www.malwarebytes.com/blog/malwarebytes-news/2021/02/lazyscripter-from-empire-to-double-rat/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1059.001/T1059.001.md, https://thedfirreport.com/2023/05/22/icedid-macro-ends-in-nokoyawa-ransomware/",
              "mitre": [
                "T1059.001",
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows File Download Via PowerShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows File Download Via PowerShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows File Download Via PowerShell\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.532",
              "n": "Windows File Share Discovery With Powerview",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Invoke-ShareFinder PowerShell cmdlet from PowerView. This detection leverages PowerShell Script Block Logging to identify instances where this specific command is executed. Monitoring this activity is crucial as it indicates an attempt to enumerate network file shares, which may contain sensitive information such as backups, scripts, and credentials. If confirmed malicious, this activity could enable an attacker to escalate privileges or move l…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Security teams may leverage PowerView proactively to identify and remediate sensitive file shares. Filter as needed.",
              "refs": "https://github.com/PowerShellEmpire/PowerTools/blob/master/PowerView/powerview.ps1, https://thedfirreport.com/2023/01/23/sharefinder-how-threat-actors-discover-file-shares/, https://attack.mitre.org/techniques/T1135/",
              "mitre": [
                "T1135"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows File Share Discovery With Powerview\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1135. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows File Share Discovery With Powerview\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Security teams may leverage PowerView proactively to identify and remediate sensitive file shares. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows File Share Discovery With Powerview\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.533",
              "n": "Windows Files and Dirs Access Rights Modification Via Icacls",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Files and Dirs Access Rights Modification Via Icacls. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. Filter as needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.amadey",
              "mitre": [
                "T1222.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Files and Dirs Access Rights Modification Via Icacls\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Files and Dirs Access Rights Modification Via Icacls\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Files and Dirs Access Rights Modification Via Icacls\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.534",
              "n": "Windows Find Interesting ACL with FindInterestingDomainAcl",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Find-InterestingDomainAcl` cmdlet, part of the PowerView toolkit, using PowerShell Script Block Logging (EventCode=4104). This detection leverages logs to identify when this command is run, which is significant as adversaries may use it to find misconfigured or unusual Access Control Lists (ACLs) within a domain. If confirmed malicious, this activity could allow attackers to identify privilege escalation opportunities or weak security configur…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage PowerSploit tools for legitimate reasons, filter as needed.",
              "refs": "https://powersploit.readthedocs.io/en/latest/Recon/Find-InterestingDomainAcl/, https://attack.mitre.org/techniques/T1087/002/, https://book.hacktricks.xyz/windows-hardening/basic-powershell-for-pentesters/powerview",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Find Interesting ACL with FindInterestingDomainAcl\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Find Interesting ACL with FindInterestingDomainAcl\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage PowerSploit tools for legitimate reasons, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Find Interesting ACL with FindInterestingDomainAcl\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.535",
              "n": "Windows Forest Discovery with GetForestDomain",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-ForestDomain` cmdlet, a component of the PowerView toolkit used for Windows domain enumeration. It leverages PowerShell Script Block Logging (EventCode=4104) to identify this activity. Detecting `Get-ForestDomain` is significant because adversaries and Red Teams use it to gather detailed information about Active Directory forest and domain configurations. If confirmed malicious, this activity could enable attackers to understand the domain…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage PowerSploit tools for legitimate reasons, filter as needed.",
              "refs": "https://powersploit.readthedocs.io/en/latest/Recon/Get-ForestDomain/, https://attack.mitre.org/techniques/T1087/002/, https://book.hacktricks.xyz/windows-hardening/basic-powershell-for-pentesters/powerview",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Forest Discovery with GetForestDomain\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Forest Discovery with GetForestDomain\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage PowerSploit tools for legitimate reasons, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Forest Discovery with GetForestDomain\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.536",
              "n": "Windows Gather Victim Identity SAM Info",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects processes loading the samlib.dll or samcli.dll modules, which are often abused to access Security Account Manager (SAM) objects or credentials on domain controllers. This detection leverages Sysmon EventCode 7 to identify these DLLs being loaded outside typical system directories. Monitoring this activity is crucial as it may indicate attempts to gather sensitive identity information. If confirmed malicious, this behavior could allow attackers to obtain credentials…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "`sysmon` EventCode=7  (ImageLoaded = \"*\\\\samlib.dll\" AND OriginalFileName = \"samlib.dll\") OR (ImageLoaded = \"*\\\\samcli.dll\" AND OriginalFileName = \"SAMCLI.DLL\") AND NOT (Image IN(\"C:\\\\Windows\\\\*\", \"C:\\\\Program File*\", \"%systemroot%\\\\*\")) | fillnull | stats count min(_time) as firstTime max(_time) as lastTime by Image ImageLoaded dest loaded_file loaded_file_path original_file_name process_exec process_guid process_hash process_id process_name process_path service_dll_signature_exists service_dll_signature_verified signature signature_id user_id vendor_product | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_gather_victim_identity_sam_info_filter`",
              "m": "The latest Sysmon TA 3.0 https://splunkbase.splunk.com/app/5709 will add the ImageLoaded name to the process_name field, allowing this query to work. Use as an example and implement for other products.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "this module can be loaded by a third party application. Filter is needed.",
              "refs": "https://redcanary.com/blog/active-breach-evading-defenses/, https://strontic.github.io/xcyclopedia/library/samlib.dll-0BDF6351009F6EBA5BA7E886F23263B1.html",
              "mitre": [
                "T1589.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Gather Victim Identity SAM Info\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1589.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Gather Victim Identity SAM Info\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: this module can be loaded by a third party application. Filter is needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on processes loading the samlib.dll or samcli.dll modules, which are often abused to access Security Account Manager (SAM) objects or credentials on domain controllers so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.537",
              "n": "Windows Handle Duplication in Known UAC-Bypass Binaries",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious handle duplication activity targeting known Windows utilities such as ComputerDefaults.exe, Eventvwr.exe, and others. This technique is commonly used to escalate privileges or bypass UAC by inheriting or injecting elevated tokens or handles. The detection focuses on non-standard use of DuplicateHandle or token duplication where process, thread, or token handles are copied into the context of trusted, signed utilities. Such behavior may indicate attempts …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records process activity from your hosts to populate the endpoint data model in the processes node. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible legitimate applications will request access to list of know abused Windows UAC binaries process, filter as needed.",
              "refs": "https://www.recordedfuture.com/research/from-castleloader-to-castlerat-tag-150-advances-operations",
              "mitre": [
                "T1134.001"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Handle Duplication in Known UAC-Bypass Binaries\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1134.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Handle Duplication in Known UAC-Bypass Binaries\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible legitimate applications will request access to list of know abused Windows UAC binaries process, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Handle Duplication in Known UAC-Bypass Binaries\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.538",
              "n": "Windows Hunting System Account Targeting Lsass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies processes attempting to access Lsass.exe, which may indicate credential dumping or applications needing credential access. It leverages Sysmon EventCode 10 to detect such activities by analyzing fields like TargetImage, GrantedAccess, and SourceImage. This behavior is significant as unauthorized access to Lsass.exe can lead to credential theft, posing a severe security risk. If confirmed malicious, attackers could gain access to sensitive credentials, potentiall…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "`sysmon` EventCode=10 TargetImage=*lsass.exe\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY CallTrace EventID GrantedAccess\n           Guid Opcode ProcessID\n           SecurityID SourceImage SourceProcessGUID\n           SourceProcessId TargetImage TargetProcessGUID\n           TargetProcessId UserID dest\n           granted_access parent_process_exec parent_process_guid\n           parent_process_id parent_process_name parent_process_path\n           process_exec process_guid process_id\n           process_name process_path signature\n           signature_id user_id vendor_product\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_hunting_system_account_targeting_lsass_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 10 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will occur based on GrantedAccess and SourceUser, filter based on source image as needed. Utilize this hunting analytic to tune out false positives in TTP or anomaly analytics.",
              "refs": "https://en.wikipedia.org/wiki/Local_Security_Authority_Subsystem_Service, https://learn.microsoft.com/en-us/windows/win32/api/minidumpapiset/nf-minidumpapiset-minidumpwritedump, https://cyberwardog.blogspot.com/2017/03/chronicles-of-threat-hunter-hunting-for_22.html, https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/master/Exfiltration/Invoke-Mimikatz.ps1, https://learn.microsoft.com/en-us/windows/win32/procthread/process-security-and-access-rights?redirectedfrom=MSDN",
              "mitre": [
                "T1003.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Hunting System Account Targeting Lsass\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Hunting System Account Targeting Lsass\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will occur based on GrantedAccess and SourceUser, filter based on source image as needed. Utilize this hunting analytic to tune out false positives in TTP or anomaly analytics.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on processes attempting to access Lsass.exe, which may indicate credential dumping or applications needing credential access so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.539",
              "n": "Windows Identify PowerShell Web Access IIS Pool",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects and analyzes PowerShell Web Access (PSWA) usage in Windows environments. It tracks both connection attempts (EventID 4648) and successful logons (EventID 4624) associated with PSWA, providing a comprehensive view of access patterns. The analytic identifies PSWA's operational status, host servers, processes, and connection metrics. It highlights unique target accounts, domains accessed, and verifies logon types. This information is crucial for detecting potential misuse, suc…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4648",
              "q": "`wineventlog_security` (EventCode=4648 OR EventCode=4624 OR EventCode=4625) SubjectUserName=\"pswa_pool\" | fields EventCode, SubjectUserName, TargetUserName, Computer, TargetDomainName, ProcessName, LogonType | rename Computer as dest | stats count(eval(EventCode=4648)) as \"Connection Attempts\", count(eval(EventCode=4624)) as \"Successful Logons\", count(eval(EventCode=4625)) as \"Unsuccessful Logons\", dc(TargetUserName) as \"Unique Target Accounts\", values(dest) as \"PSWA Host\", dc(TargetDomainName) as \"Unique Target Domains\", values(ProcessName) as \"PSWA Process\", values(TargetUserName) as \"Target Users List\", values(TargetServerName) as \"Target Servers List\", values(LogonType) as \"Logon Types\" | eval PSWA_Running = \"Yes\", \"PSWA Process\" = mvindex(split(mvindex(\"PSWA Process\", 0), \"\\\\\"), -1) | fields PSWA_Running, \"PSWA Host\", \"PSWA Process\", \"Connection Attempts\", \"Successful Logons\",\"Unsuccessful Logons\", \"Unique Target Accounts\", \"Unique Target Domains\", \"Target Users List\",\"Target Servers List\", \"Logon Types\" | `security_content_ctime(firstTime)` |`security_content_ctime(lastTime)` | `windows_identify_powershell_web_access_iis_pool_filter`",
              "m": "To successfully implement this search, you need to be ingesting Windows Security Event logs, specifically Event ID 4648 (A logon was attempted using explicit credentials). Ensure that your Windows systems are configured to audit logon events and that these logs are being forwarded to your SIEM or log management solution. You may need to enable advanced audit policy settings in Windows to capture t…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur if legitimate PSWA processes are used for administrative tasks. Careful review of the logs is recommended to distinguish between legitimate and malicious activity.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa24-241a, https://gist.github.com/MHaggis/7e67b659af9148fa593cf2402edebb41",
              "mitre": [
                "T1190"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Identify PowerShell Web Access IIS Pool\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4648. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Identify PowerShell Web Access IIS Pool\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur if legitimate PSWA processes are used for administrative tasks. Careful review of the logs is recommended to distinguish between legitimate and malicious activity.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on this analytic detects and analyzes PowerShell Web Access (PSWA) usage in Windows environments so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.540",
              "n": "Windows IIS Components New Module Added",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of new IIS modules on a Windows IIS server. It leverages the Windows Event log - Microsoft-IIS-Configuration/Operational, specifically EventCode 29, to identify this activity. This behavior is significant because IIS modules are rarely added to production servers, and unauthorized modules could indicate malicious activity. If confirmed malicious, an attacker could use these modules to execute arbitrary code, escalate privileges, or maintain persistence…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows IIS 29",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows IIS 29 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present when updates or an administrator adds a new module to IIS. Monitor and filter as needed.",
              "refs": "https://gist.github.com/MHaggis/64396dfd9fc3734e1d1901a8f2f07040, https://www.microsoft.com/en-us/security/blog/2022/12/12/iis-modules-the-evolution-of-web-shells-and-how-to-detect-them/, https://www.crowdstrike.com/wp-content/uploads/2022/05/crowdstrike-iceapple-a-novel-internet-information-services-post-exploitation-framework-1.pdf, https://unit42.paloaltonetworks.com/unit42-oilrig-uses-rgdoor-iis-backdoor-targets-middle-east/, https://www.secureworks.com/research/bronze-union, https://github.com/redcanaryco/atomic-red-team/tree/master/atomics/T1505.004, https://strontic.github.io/xcyclopedia/library/appcmd.exe-055B2B09409F980BF9B5A3969D01E5B2.html",
              "mitre": [
                "T1505.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows IIS Components New Module Added\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows IIS 29. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows IIS Components New Module Added\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present when updates or an administrator adds a new module to IIS. Monitor and filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows IIS Components New Module Added\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.541",
              "n": "Windows Important Audit Policy Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the disabling of important audit policies. It leverages EventCode 4719 from Windows Security Event Logs to identify changes where success or failure auditing is removed. This activity is significant as it suggests an attacker may have gained access to the domain controller and is attempting to evade detection by tampering with audit policies. If confirmed malicious, this could lead to severe consequences, including data theft, privilege escalation, and full network…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4719",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4719 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4719",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Important Audit Policy Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4719. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Important Audit Policy Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Important Audit Policy Disabled\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.542",
              "n": "Windows Increase in Group or Object Modification Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects an increase in modifications to AD groups or objects. Frequent changes to AD groups or objects can indicate potential security risks, such as unauthorized access attempts, impairing defences or establishing persistence. By monitoring AD logs for unusual modification patterns, this detection helps identify suspicious behavior that could compromise the integrity and security of the AD environment.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4663 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1098",
                "T1562"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Increase in Group or Object Modification Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098, T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Increase in Group or Object Modification Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Increase in Group or Object Modification Activity\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.543",
              "n": "Windows Increase in User Modification Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects an increase in modifications to AD user objects. A large volume of changes to user objects can indicate potential security risks, such as unauthorized access attempts, impairing defences or establishing persistence. By monitoring AD logs for unusual modification patterns, this detection helps identify suspicious behavior that could compromise the integrity and security of the AD environment.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4720",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4720 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Genuine activity",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1098",
                "T1562"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Increase in User Modification Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4720. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098, T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Increase in User Modification Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Genuine activity\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Increase in User Modification Activity\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.544",
              "n": "Windows Information Discovery Fsutil",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Information Discovery Fsutil. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/fsutil, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/fsutil-volume, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/, https://www.microsoft.com/en-us/security/blog/2021/01/20/deep-dive-into-the-solorigate-second-stage-activation-from-sunburst-to-teardrop-and-raindrop/",
              "mitre": [
                "T1082"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Information Discovery Fsutil\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Information Discovery Fsutil\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Information Discovery Fsutil\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.545",
              "n": "Windows Input Capture Using Credential UI Dll",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a process loading the credui.dll or wincredui.dll module. This detection leverages Sysmon EventCode 7 to identify instances where these DLLs are loaded by processes outside typical system directories. This activity is significant because adversaries often abuse these modules to create fake credential prompts or dump credentials, posing a risk of credential theft. If confirmed malicious, this activity could allow attackers to harvest user credentials, leading to una…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "`sysmon` EventCode=7  (ImageLoaded = \"*\\\\credui.dll\" AND OriginalFileName = \"credui.dll\") OR (ImageLoaded = \"*\\\\wincredui.dll\" AND OriginalFileName = \"wincredui.dll\") AND NOT(Image IN(\"*\\\\windows\\\\explorer.exe\", \"*\\\\windows\\\\system32\\\\*\", \"*\\\\windows\\\\sysWow64\\\\*\", \"*:\\\\program files*\")) | fillnull | stats count min(_time) as firstTime max(_time) as lastTime by Image ImageLoaded dest loaded_file loaded_file_path original_file_name process_exec process_guid process_hash process_id process_name process_path service_dll_signature_exists service_dll_signature_verified signature signature_id user_id vendor_product | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_input_capture_using_credential_ui_dll_filter`",
              "m": "The latest Sysmon TA 3.0 https://splunkbase.splunk.com/app/5709 will add the ImageLoaded name to the process_name field, allowing this query to work. Use as an example and implement for other products.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "this module can be loaded by a third party application. Filter is needed.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/api/wincred/nf-wincred-creduipromptforcredentialsa, https://github.com/redcanaryco/atomic-red-team/blob/f339e7da7d05f6057fdfcdd3742bfcf365fee2a9/atomics/T1056.002/T1056.002.md#atomic-test-2---powershell---prompt-user-for-password",
              "mitre": [
                "T1056.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Input Capture Using Credential UI Dll\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1056.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Input Capture Using Credential UI Dll\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: this module can be loaded by a third party application. Filter is needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on a process loading the credui.dll or wincredui.dll module so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.546",
              "n": "Windows InstallUtil Credential Theft",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where the Windows InstallUtil.exe binary loads `vaultcli.dll` and `Samlib.dll`. This detection leverages Sysmon EventCode 7 to identify these specific DLL loads. This activity is significant because it can indicate an attempt to execute code that bypasses application control and captures credentials using tools like Mimikatz. If confirmed malicious, this behavior could allow an attacker to steal credentials, potentially leading to unauthorized access and …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and module loads from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Typically, this will not trigger because, by its very nature, InstallUtil does not require credentials. Filter as needed.",
              "refs": "https://gist.github.com/xorrior/bbac3919ca2aef8d924bdf3b16cce3d0",
              "mitre": [
                "T1218.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows InstallUtil Credential Theft\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows InstallUtil Credential Theft\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Typically, this will not trigger because, by its very nature, InstallUtil does not require credentials. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows InstallUtil Credential Theft\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.547",
              "n": "Windows Known Abused DLL Loaded Suspiciously",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when DLLs with known abuse history are loaded from an unusual location. This activity may represent an attacker performing a DLL search order or sideload hijacking technique. These techniques are used to gain persistence as well as elevate privileges on the target system. This detection relies on Sysmon EID7 and is compatible with all Officla Sysmon TA versions.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The following analytic requires Sysmon operational logs to be imported, with EID7 being mapped to the process_name field. Modify the sysmon macro as needed to match the sourcetype or add index.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "DLLs being loaded by user mode programs for legitimate reasons.",
              "refs": "https://attack.mitre.org/techniques/T1574/002/, https://hijacklibs.net/api/, https://wietze.github.io/blog/hijacking-dlls-in-windows, https://github.com/olafhartong/sysmon-modular/pull/195/files",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Known Abused DLL Loaded Suspiciously\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Known Abused DLL Loaded Suspiciously\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: DLLs being loaded by user mode programs for legitimate reasons.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Known Abused DLL Loaded Suspiciously\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.548",
              "n": "Windows Known GraphicalProton Loaded Modules",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the loading of DLL modules associated with the GraphicalProton backdoor implant, commonly used by SVR in targeted attacks. It leverages Sysmon EventCode 7 to identify specific DLLs loaded by processes. This activity is significant as it may indicate the presence of a sophisticated backdoor, warranting immediate investigation. If confirmed malicious, the attacker could gain persistent access to the compromised host, potentially leading to further exploitation and da…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and imageloaded executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-347a",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Known GraphicalProton Loaded Modules\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Known GraphicalProton Loaded Modules\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Known GraphicalProton Loaded Modules\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.549",
              "n": "Windows KrbRelayUp Service Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of a service with the default name \"KrbSCM\" associated with the KrbRelayUp tool. It leverages Windows System Event Logs, specifically EventCode 7045, to identify this activity. This behavior is significant as KrbRelayUp is a known tool used for privilege escalation attacks. If confirmed malicious, this activity could allow an attacker to escalate privileges, potentially gaining unauthorized access to sensitive systems and data.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log System 7045",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log System 7045 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives should be limited as this is specific to KrbRelayUp based attack. Filter as needed.",
              "refs": "https://github.com/Dec0ne/KrbRelayUp",
              "mitre": [
                "T1543.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows KrbRelayUp Service Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log System 7045. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows KrbRelayUp Service Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives should be limited as this is specific to KrbRelayUp based attack. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows KrbRelayUp Service Creation\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.550",
              "n": "Windows Linked Policies In ADSI Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the `[Adsisearcher]` type accelerator in PowerShell Script Block Logging (EventCode=4104) to query Active Directory for domain organizational units. This detection leverages PowerShell operational logs to identify script blocks containing `[adsisearcher]`, `objectcategory=organizationalunit`, and `findAll()`. This activity is significant as it indicates potential reconnaissance efforts by adversaries to gain situational awareness of the domain structure.…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/, https://medium.com/@pentesttas/discover-hidden-gpo-s-on-active-directory-using-ps-adsi-a284b6814c81",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Linked Policies In ADSI Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Linked Policies In ADSI Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Linked Policies In ADSI Discovery\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.551",
              "n": "Windows Local LLM Framework Execution",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Local LLM Framework Execution. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count\n        min(_time) as firstTime\n        max(_time) as lastTime\n    from datamodel=Endpoint.Processes\n    where\n        (\n          Processes.process_name IN (\n              \"gpt4all.exe\",\n              \"jan.exe\",\n              \"kobold.exe\",\n              \"koboldcpp.exe\",\n              \"llama-run.exe\",\n              \"llama.cpp.exe\",\n              \"lmstudio.exe\",\n              \"nutstudio.exe\",\n              \"ollama.exe\",\n              \"oobabooga.exe\",\n              \"text-generation-webui.exe\"\n          )\n          OR\n          Processes.original_file_name IN (\n              \"ollama.exe\",\n              \"lmstudio.exe\",\n              \"gpt4all.exe\",\n              \"jan.exe\",\n              \"llama-run.exe\",\n              \"koboldcpp.exe\",\n              \"nutstudio.exe\"\n          )\n          OR\n          Processes.process IN (\n              \"*\\\\gpt4all\\\\*\",\n              \"*\\\\jan\\\\*\",\n              \"*\\\\koboldcpp\\\\*\",\n              \"*\\\\llama.cpp\\\\*\",\n              \"*\\\\lmstudio\\\\*\",\n              \"*\\\\nutstudio\\\\*\",\n              \"*\\\\ollama\\\\*\",\n              \"*\\\\oobabooga\\\\*\",\n              \"*huggingface*\",\n              \"*langchain*\",\n              \"*llama-run*\",\n              \"*transformers*\"\n          )\n          OR\n          Processes.parent_process_name IN (\n              \"gpt4all.exe\",\n              \"jan.exe\",\n              \"kobold.exe\",\n              \"koboldcpp.exe\",\n              \"llama-run.exe\",\n              \"llama.cpp.exe\",\n              \"lmstudio.exe\",\n              \"nutstudio.exe\",\n              \"ollama.exe\",\n              \"oobabooga.exe\",\n              \"text-generation-webui.exe\"\n          )\n        )\n    by Processes.action Processes.dest Processes.original_file_name Processes.parent_process\n       Processes.parent_process_exec Processes.parent_process_guid Processes.parent_process_id\n       Processes.parent_process_name Processes.parent_process_path Processes.process\n       Processes.process_exec Processes.process_guid Processes.process_hash Processes.process_id\n       Processes.process_integrity_level Processes.process_name Processes.process_path Processes.user\n       Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | eval Framework=case(\n        match(process_name, \"(?i)ollama\") OR match(process, \"(?i)ollama\"), \"Ollama\",\n        match(process_name, \"(?i)lmstudio\") OR match(process, \"(?i)lmstudio\") OR match(process, \"(?i)lm-studio\"), \"LM Studio\",\n        match(process_name, \"(?i)gpt4all\") OR match(process, \"(?i)gpt4all\"), \"GPT4All\",\n        match(process_name, \"(?i)kobold\") OR match(process, \"(?i)kobold\"), \"KoboldCPP\",\n        match(process_name, \"(?i)jan\") OR match(process, \"(?i)jan\"), \"Jan AI\",\n        match(process_name, \"(?i)nutstudio\") OR match(process, \"(?i)nutstudio\"), \"NutStudio\",\n        match(process_name, \"(?i)llama\") OR match(process, \"(?i)llama\"), \"llama.cpp\",\n        match(process_name, \"(?i)oobabooga\") OR match(process, \"(?i)oobabooga\") OR match(process, \"(?i)text-generation-webui\"), \"Oobabooga\",\n        match(process, \"(?i)transformers\") OR match(process, \"(?i)huggingface\"), \"HuggingFace/Transformers\",\n        match(process, \"(?i)langchain\"), \"LangChain\",\n        1=1, \"Other\"\n    )\n    | 
`security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | table action dest Framework original_file_name parent_process parent_process_exec\n            parent_process_guid parent_process_id parent_process_name parent_process_path\n            process process_exec process_guid process_hash process_id process_integrity_level\n            process_name process_path user user_id vendor_product\n    | `windows_local_llm_framework_execution_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate development, data science, and AI/ML workflows where authorized developers, researchers, or engineers intentionally execute local LLM frameworks (Ollama, LM Studio, GPT4All, Jan, NutStudio) for model experimentation, fine-tuning, or prototyping. Python developers using HuggingFace Transformers or LangChain for legitimate AI/ML projects. Approved sandbox and lab environments where framework testing is authorized. Open-source contributors and hobbyists running frameworks for educational purposes. Third-party applications that bundle or invoke LLM frameworks as dependencies (e.g., IDE plugins, analytics tools, chatbot integrations). System administrators deploying frameworks as part of containerized services or orchestrated ML workloads. Process name keyword overlap with unrelated utilities (e.g., \"llama-backup\", \"janimation\"). Recommended tuning — baseline approved frameworks and users by role/department, exclude sanctioned dev/lab systems via the filter macro, correlate with user identity and peer group anomalies before escalating to incident response.",
              "refs": "https://splunkbase.splunk.com/app/8024, https://www.ibm.com/think/topics/shadow-ai, https://www.splunk.com/en_us/blog/artificial-intelligence/splunk-technology-add-on-for-ollama.html, https://blogs.cisco.com/security/detecting-exposed-llm-servers-shodan-case-study-on-ollama, https://learn.microsoft.com/en-us/sysinternals/downloads/sysmon",
              "mitre": [
                "T1543"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Local LLM Framework Execution\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Local LLM Framework Execution\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate development, data science, and AI/ML workflows where authorized developers, researchers, or engineers intentionally execute local LLM frameworks (Ollama, LM Studio, GPT4All, Jan, NutStudio) for model experimentation, fine-tuning, or prototyping. Python developers using HuggingFace Transformers or LangChain for legitimate AI/ML projects. Approved sandbox and lab environments where framework testing is authorized. Open-source contributors and hobbyists running frameworks for educational purposes. Third-party applications that bundle or invoke LLM frameworks as dependencies (e.g., IDE plugins, analytics tools, chatbot integrations). System administrators deploying frameworks as part of containerized services or orchestrated ML workloads. Process name keyword overlap with unrelated utilities (e.g., \"llama-backup\", \"janimation\"). Recommended tuning — baseline approved frameworks and users by role/department, exclude sanctioned dev/lab systems via the filter macro, correlate with user identity and peer group anomalies before escalating to incident response.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. 
If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on Windows Local LLM Framework Execution so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.552",
              "n": "Windows LOLBAS Executed As Renamed File",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a LOLBAS process being executed where it's process name does not match it's original file name attribute. Processes that have been renamed and executed may be an indicator that an adversary is attempting to evade defenses or execute malicious code. The LOLBAS project documents Windows native binaries that can be abused by threat actors to perform tasks like executing malicious code.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A certain amount of false positives are likely with this detection. MSI based installers often trigger for SETUPAPL.dll and vendors will often copy system exectables to a different path for application usage.",
              "refs": "https://attack.mitre.org/techniques/T1036/, https://attack.mitre.org/techniques/T1036/003/",
              "mitre": [
                "T1036.003",
                "T1218.011"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows LOLBAS Executed As Renamed File\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.003, T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows LOLBAS Executed As Renamed File\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A certain amount of false positives are likely with this detection. MSI based installers often trigger for SETUPAPL.dll and vendors will often copy system exectables to a different path for application usage.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows LOLBAS Executed As Renamed File\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.553",
              "n": "Windows LOLBAS Executed Outside Expected Path",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows LOLBAS Executed Outside Expected Path. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To implement this search, you must ingest logs that contain the process name and process path, such as with Sysmon EID 1.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1036/, https://attack.mitre.org/techniques/T1036/005/",
              "mitre": [
                "T1036.005",
                "T1218.011"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows LOLBAS Executed Outside Expected Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.005, T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows LOLBAS Executed Outside Expected Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows LOLBAS Executed Outside Expected Path\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.554",
              "n": "Windows Modify Registry on Smart Card Group Policy",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic is developed to detect suspicious registry modifications targeting the \"scforceoption\" key. Altering this key enforces smart card login for all users, potentially disrupting normal access methods. Unauthorized changes to this setting could indicate an attempt to restrict access or force a specific authentication method, possibly signifying malicious intent to manipulate system security protocols.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may enable or disable this feature that may cause some false positive.",
              "refs": "https://www.bleepingcomputer.com/news/security/new-shrinklocker-ransomware-uses-bitlocker-to-encrypt-your-files/",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry on Smart Card Group Policy\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry on Smart Card Group Policy\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may enable or disable this feature that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Modify Registry on Smart Card Group Policy\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.555",
              "n": "Windows Modify Registry Risk Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where three or more distinct registry modification events associated with MITRE ATT&CK Technique T1112 are detected. It leverages data from the Risk data model in Splunk, focusing on registry-related sources and MITRE technique annotations. This activity is significant because multiple registry modifications can indicate an attempt to persist, hide malicious configurations, or erase forensic evidence. If confirmed malicious, this behavior could allow a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present based on many factors. Tune the correlation as needed to reduce too many triggers.",
              "refs": "https://www.splunk.com/en_us/blog/security/do-not-cross-the-redline-stealer-detections-and-analysis.html, https://www.splunk.com/en_us/blog/security/asyncrat-crusade-detections-and-defense.html, https://www.splunk.com/en_us/blog/security/from-registry-with-love-malware-registry-abuses.html, https://www.splunk.com/en_us/blog/security/-applocker-rules-as-defense-evasion-complete-analysis.html",
              "mitre": [
                "T1112"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry Risk Behavior\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry Risk Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present based on many factors. Tune the correlation as needed to reduce too many triggers.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. 
Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Modify Registry Risk Behavior\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.556",
              "n": "Windows Modify Registry ValleyRAT C2 Config",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to theregistry related to ValleyRAT C2 configuration. Specifically,  it monitors changes in registry keys where ValleyRAT saves the IP address and port information of its command-and-control (C2) server. This activity is a key indicator of ValleyRAT attempting to establish persistent communication with its C2 infrastructure. By identifying these unauthorized registry modifications, security analysts can quickly detect malicious configurations and inve…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.proofpoint.com/us/blog/threat-insight/chinese-malware-appears-earnest-across-cybercrime-threat-landscape, https://www.fortinet.com/blog/threat-research/valleyrat-campaign-targeting-chinese-speakers",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Modify Registry ValleyRAT C2 Config\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Modify Registry ValleyRAT C2 Config\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Modify Registry ValleyRAT C2 Config\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.557",
              "n": "Windows MSHTA Writing to World Writable Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances of `mshta.exe` writing files to world-writable directories. It leverages Sysmon EventCode 11 logs to detect file write operations by `mshta.exe` to directories like `C:\\Windows\\Tasks` and `C:\\Windows\\Temp`. This activity is significant as it often indicates an attempt to establish persistence or execute malicious code, deviating from the utility's legitimate use. If confirmed malicious, this behavior could lead to the execution of multi-stage payloads,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur if legitimate processes are writing to world-writable directories. It is recommended to investigate the context of the file write operation to determine if it is malicious or not. Modify the search to include additional known good paths for `mshta.exe` to reduce false positives.",
              "refs": "https://www.mandiant.com/resources/blog/apt29-wineloader-german-political-parties, https://www.zscaler.com/blogs/security-research/european-diplomats-targeted-spikedwine-wineloader",
              "mitre": [
                "T1218.005"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MSHTA Writing to World Writable Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MSHTA Writing to World Writable Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur if legitimate processes are writing to world-writable directories. It is recommended to investigate the context of the file write operation to determine if it is malicious or not. Modify the search to include additional known good paths for `mshta.exe` to reduce false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows MSHTA Writing to World Writable Path\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.558",
              "n": "Windows MSIExec Remote Download",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows MSIExec Remote Download. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://thedfirreport.com/2022/06/06/will-the-real-msiexec-please-stand-up-exploit-leads-to-data-exfiltration/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1218.007/T1218.007.md",
              "mitre": [
                "T1218.007"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MSIExec Remote Download\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.007. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MSIExec Remote Download\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
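\n\nUnderlying Detection Logic (Sketch)\n\nThe packaged correlation logic is not reproduced in this catalog, but a minimal sketch of the documented behavior (msiexec.exe launched with a remote URL on its command line) could look like the following. The data model and field names are assumptions based on the CIM Endpoint model — verify against the shipped ESCU search before relying on it:\n\n```spl\n| tstats count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=msiexec.exe AND (Processes.process=\"*http://*\" OR Processes.process=\"*https://*\") by Processes.dest Processes.user Processes.process\n```",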
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows MSIExec Remote Download\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.559",
              "n": "Windows MSIX Package Interaction",
              "c": "high",
              "f": "intermediate",
              "v": "This hunting query detects user interactions with MSIX packages by monitoring EventCode 171 in the Microsoft-Windows-AppXPackaging/Operational logs. These events are generated when a user clicks on or attempts to interact with an MSIX package, even if the package is not fully installed. This information can be valuable for security teams to identify what MSIX packages users are attempting to open in their environment, which may help detect malicious MSIX packages before they're fully installed. …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log AppXPackaging 171",
              "q": "`wineventlog_appxpackaging` EventCode=171\n      | stats count min(_time) as firstTime max(_time) as lastTime values(packageFullName) as packageFullName values(user_id) as user_id\n        BY host EventCode\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_msix_package_interaction_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log AppXPackaging 171 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This hunting query will detect legitimate MSIX package interactions from normal users. It is not designed to specifically identify malicious activity but rather to provide visibility into all MSIX package interactions. Security teams should review the results and look for unusual patterns, unexpected packages, or suspicious file paths.",
              "refs": "https://www.appdeploynews.com/packaging-types/msix/troubleshooting-an-msix-package/, https://learn.microsoft.com/en-us/windows/win32/appxpkg/troubleshooting, https://www.advancedinstaller.com/msix-installation-or-launching-errors-and-fixes.html, https://redcanary.com/blog/msix-installers/",
              "mitre": [
                "T1204.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows MSIX Package Interaction\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log AppXPackaging 171. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows MSIX Package Interaction\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This hunting query will detect legitimate MSIX package interactions from normal users. It is not designed to specifically identify malicious activity but rather to provide visibility into all MSIX package interactions. Security teams should review the results and look for unusual patterns, unexpected packages, or suspicious file paths.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on this hunting query detects user interactions with MSIX packages by monitoring EventCode 171 in the Microsoft-Windows-AppXPackaging/Operational logs so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.560",
              "n": "Windows Multiple Account Passwords Changed",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where more than five unique Windows account passwords are changed within a 10-minute interval. It leverages Event Code 4724 from the Windows Security Event Log, using the wineventlog_security dataset to monitor and count distinct TargetUserName values. This behavior is significant as rapid password changes across multiple accounts are unusual and may indicate unauthorized access or internal compromise. If confirmed malicious, this activity could lead to w…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4724",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4724 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Service accounts may be responsible for the creation, deletion or modification of accounts for legitimate purposes. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/",
              "mitre": [
                "T1098",
                "T1078"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Multiple Account Passwords Changed\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4724. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Multiple Account Passwords Changed\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Service accounts may be responsible for the creation, deletion or modification of accounts for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
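\n\nUnderlying Detection Logic (Sketch)\n\nFor reference, a minimal sketch of the documented detection logic (more than five unique TargetUserName values in Event Code 4724 within a 10-minute window). The macro, grouping field, and threshold are assumptions — verify against the shipped ESCU search:\n\n```spl\n`wineventlog_security` EventCode=4724\n| bucket span=10m _time\n| stats count dc(TargetUserName) as unique_accounts values(TargetUserName) as changed_accounts by _time, src_user\n| where unique_accounts > 5\n```",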
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Multiple Account Passwords Changed\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.561",
              "n": "Windows Multiple Accounts Deleted",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of more than five unique Windows accounts within a 10-minute period, using Event Code 4726 from the Windows Security Event Log. It leverages the `wineventlog_security` dataset, segmenting data into 10-minute intervals to identify suspicious account deletions. This activity is significant as it may indicate an attacker attempting to erase traces of their actions. If confirmed malicious, this could lead to unauthorized access removal, hindering incident …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4726",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4726 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Service accounts may be responsible for the creation, deletion or modification of accounts for legitimate purposes. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/",
              "mitre": [
                "T1098",
                "T1078"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Multiple Accounts Deleted\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4726. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Multiple Accounts Deleted\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Service accounts may be responsible for the creation, deletion or modification of accounts for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
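\n\nUnderlying Detection Logic (Sketch)\n\nFor reference, a minimal sketch of the documented detection logic (more than five unique accounts deleted via Event Code 4726 within a 10-minute window). The macro, grouping field, and threshold are assumptions — verify against the shipped ESCU search:\n\n```spl\n`wineventlog_security` EventCode=4726\n| bucket span=10m _time\n| stats count dc(TargetUserName) as unique_accounts values(TargetUserName) as deleted_accounts by _time, src_user\n| where unique_accounts > 5\n```",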
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Multiple Accounts Deleted\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.562",
              "n": "Windows Multiple Accounts Disabled",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies instances where more than five unique Windows accounts are disabled within a 10-minute window, as indicated by Event Code 4725 in the Windows Security Event Log. It leverages the wineventlog_security dataset, grouping data into 10-minute segments and tracking the count and distinct count of TargetUserName. This behavior is significant as it may indicate internal policy breaches or an external attacker's attempt to disrupt operations. If confirmed malicious, this…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4725",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4725 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Service accounts may be responsible for the creation, deletion or modification of accounts for legitimate purposes. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/",
              "mitre": [
                "T1098",
                "T1078"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Multiple Accounts Disabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4725. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Multiple Accounts Disabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Service accounts may be responsible for the creation, deletion or modification of accounts for legitimate purposes. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
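\n\nUnderlying Detection Logic (Sketch)\n\nFor reference, a minimal sketch of the documented detection logic (more than five unique accounts disabled via Event Code 4725 within a 10-minute window). The macro, grouping field, and threshold are assumptions — verify against the shipped ESCU search:\n\n```spl\n`wineventlog_security` EventCode=4725\n| bucket span=10m _time\n| stats count dc(TargetUserName) as unique_accounts values(TargetUserName) as disabled_accounts by _time, src_user\n| where unique_accounts > 5\n```",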
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Multiple Accounts Disabled\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.563",
              "n": "Windows Multiple NTLM Null Domain Authentications",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a device is the target of numerous NTLM authentications using a null domain. This activity generally results when an attacker attempts to brute force, password spray, or otherwise authenticate to a domain joined Windows device from a non-domain device. This activity may also generate a large number of EventID 4776 events in tandem, however these events will not indicate the attacker or target device",
              "t": "Splunk Security Essentials / ESCU",
              "d": "NTLM Operational 8004, NTLM Operational 8005, NTLM Operational 8006",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The following analytic requires that NTLM Operational logs to be imported from the environment Domain Controllers. This requires configuration of specific auditing settings, see Microsoft references for further guidance. This analytic is specific to EventID 8004~8006.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Applications that deal with non-domain joined authentications. Recommend adjusting the upperBound_unique eval for tailoring the correlation to your environment, running with a 24hr search window will smooth out some statistical noise.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/ntlm-blocking-and-you-application-analysis-and-auditing/ba-p/397191, https://techcommunity.microsoft.com/t5/microsoft-defender-for-identity/enriched-ntlm-authentication-data-using-windows-event-8004/m-p/871827, https://www.varonis.com/blog/investigate-ntlm-brute-force, https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-nrpc/4d1235e3-2c96-4e9f-a147-3cb338a0d09f",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Multiple NTLM Null Domain Authentications\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: NTLM Operational 8004, NTLM Operational 8005, NTLM Operational 8006. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Multiple NTLM Null Domain Authentications\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Applications that deal with non-domain joined authentications. Recommend adjusting the upperBound_unique eval for tailoring the correlation to your environment, running with a 24hr search window will smooth out some statistical noise.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
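\n\nUnderlying Detection Logic (Sketch)\n\nFor reference, a simplified sketch of the documented detection logic (many null-domain NTLM authentications against a single target). The shipped ESCU search derives a statistical upperBound_unique threshold rather than the fixed cutoff shown here, and the macro and field names are assumptions — verify against your NTLM Operational log fields:\n\n```spl\n`ntlm_audit` EventCode=8004 DomainName=NULL\n| bucket span=10m _time\n| stats dc(UserName) as unique_users by _time, SChannelName\n| where unique_users > 30\n```",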
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Multiple NTLM Null Domain Authentications\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.564",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a source user failing to authenticate with 30 unique users using explicit credentials on a host. It leverages Windows Event 4648, which is generated when a process attempts an account logon by explicitly specifying account credentials. This detection is significant as it may indicate a Password Spraying attack, where an adversary attempts to gain initial access or elevate privileges within an Active Directory environment. If confirmed malicious, this activity co…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4648",
              "q": "`wineventlog_security` EventCode=4648 Caller_User_Name!=*$ Target_User_Name!=*$\n      | bucket span=5m _time\n      | stats dc(Target_User_Name) AS unique_accounts values(Target_User_Name) as tried_account values(dest) as dest values(src) as src values(user) as user\n        BY _time, Computer, Caller_User_Name\n      | where unique_accounts > 30\n      | `windows_multiple_users_fail_to_authenticate_wth_explicitcredentials_filter`",
              "m": "To successfully implement this search, you need to be ingesting Windows Event Logs from domain controllers as well as member servers and workstations. The Advanced Security Audit policy setting `Audit Logon` within `Logon/Logoff` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A source user failing attempting to authenticate multiple users on a host is not a common behavior for regular systems. Some applications, however, may exhibit this behavior in which case sets of users hosts can be added to an allow list. Possible false positive scenarios include systems where several users connect to like Mail servers, identity providers, remote desktop services, Citrix, etc.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4648, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/basic-audit-logon-events",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4648. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A source user failing attempting to authenticate multiple users on a host is not a common behavior for regular systems. Some applications, however, may exhibit this behavior in which case sets of users hosts can be added to an allow list. Possible false positive scenarios include systems where several users connect to like Mail servers, identity providers, remote desktop services, Citrix, etc.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
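\n\nTuning the Filter Macro\n\nThe trailing filter macro in the detection SPL is the intended tuning point for the allow-list guidance above. A sketch of a lookup-based definition (the lookup name is hypothetical; the fields must match the search output):\n\n```spl\nsearch NOT [| inputlookup explicit_credential_allowlist.csv | fields Caller_User_Name, Computer]\n```\n\nSet this as the definition of `windows_multiple_users_fail_to_authenticate_wth_explicitcredentials_filter` so known-good caller/host pairs are excluded without modifying the detection itself.",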
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on a source user failing to authenticate with 30 unique users using explicit credentials on a host so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.565",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a source process failing to authenticate with 30 unique users, indicating a potential Password Spraying attack. It leverages Windows Event 4625 with Logon Type 2, collected from domain controllers, member servers, and workstations. This activity is significant as it may represent an adversary attempting to gain initial access or elevate privileges within an Active Directory environment. If confirmed malicious, this could lead to unauthorized access, privilege escal…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4625",
              "q": "`wineventlog_security` EventCode=4625 Logon_Type=2 ProcessName!=\"-\"\n      | bucket span=5m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as tried_accounts values(dest) as dest values(src) as src values(user) as user\n        BY _time, ProcessName, SubjectUserName,\n           Computer, action, app,\n           authentication_method, signature, signature_id\n      | rename Computer as dest\n      | where unique_accounts > 30\n      | `windows_multiple_users_failed_to_authenticate_from_process_filter`",
              "m": "To successfully implement this search, you need to be ingesting Windows Event Logs from domain controllers aas well as member servers and workstations. The Advanced Security Audit policy setting `Audit Logon` within `Logon/Logoff` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A process failing to authenticate with multiple users is not a common behavior for legitimate user sessions. Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4625, https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=4625, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/basic-audit-logon-events",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4625. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A process failing to authenticate with multiple users is not a common behavior for legitimate user sessions. Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
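A tighter check than the generic `tstats` validation above can confirm the exact feed this detection consumes. This is a minimal sketch, assuming Windows Security logs arrive under a `WinEventLog`-style sourcetype; the index wildcard and the field name (`Logon_Type` may appear as `LogonType` with XML rendering) are assumptions to adapt to your deployment:

```spl
index=* sourcetype=*WinEventLog* EventCode=4625 Logon_Type=3
| stats count min(_time) as firstTime max(_time) as lastTime by host, sourcetype
| sort - count
```

If this returns nothing, revisit the `Audit Logon` advanced audit policy and the Windows TA inputs before enabling the detection.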
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on a source process failing to authenticate with 30 unique users, indicating a potential Password Spraying attack so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.566",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a source host failing to authenticate against a remote host with 30 unique users. It leverages Windows Event 4625 with Logon Type 3, indicating remote authentication attempts. This behavior is significant as it may indicate a Password Spraying attack, where an adversary attempts to gain initial access or elevate privileges in an Active Directory environment. If confirmed malicious, this activity could lead to unauthorized access, privilege escalation, and potent…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4625",
              "q": "`wineventlog_security` EventCode=4625 Logon_Type=3 IpAddress!=\"-\"\n      | bucket span=5m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as tried_accounts values(dest) as dest values(src) as src values(user) as user\n        BY _time, IpAddress, Computer,\n           action, app, authentication_method,\n           signature, signature_id\n      | rename Computer as dest\n      | where unique_accounts > 30\n      | `windows_multiple_users_remotely_failed_to_authenticate_from_host_filter`",
              "m": "To successfully implement this search, you need to be ingesting Windows Event Logs from domain controllers as as well as member servers and workstations. The Advanced Security Audit policy setting `Audit Logon` within `Logon/Logoff` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A host failing to authenticate with multiple valid users against a remote host is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, remote administration tools, missconfigyred systems, etc.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4625, https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=4625, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/basic-audit-logon-events",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4625. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A host failing to authenticate with multiple valid users against a remote host is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, remote administration tools, missconfigyred systems, etc.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
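Before committing to the documented `unique_accounts > 30` threshold, it can help to baseline how many distinct target accounts each source normally fails against. The sketch below is a tuning aid, not part of the packaged detection; it reuses the macro and fields from the documented SPL and assumes they resolve in your environment:

```spl
`wineventlog_security` EventCode=4625 Logon_Type=3 IpAddress!="-"
| bucket span=5m _time
| stats dc(TargetUserName) as unique_accounts by _time, IpAddress
| stats max(unique_accounts) as peak_unique_accounts by IpAddress
| sort - peak_unique_accounts
```

Sources whose peaks sit just below the threshold (scanners, jump hosts) are the likeliest candidates for entries in the `windows_multiple_users_remotely_failed_to_authenticate_from_host_filter` macro.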
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on a source host failing to authenticate against a remote host with 30 unique users so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.567",
              "n": "Windows NetSupport RMM DLL Loaded By Uncommon Process",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows NetSupport RMM DLL Loaded By Uncommon Process. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 7 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.linkedin.com/posts/mauricefielenbach_cybersecurity-incidentresponse-dfir-activity-7394805779448418304-g0gZ?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAuFTjIB5weY_kcyu4qp3kHbI4v49tO0zEk, https://thedfirreport.com/2023/10/30/netsupport-intrusion-results-in-domain-compromise/, https://www.esentire.com/blog/evalusion-campaign-delivers-amatera-stealer-and-netsupport-rat",
              "mitre": [
                "T1036"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows NetSupport RMM DLL Loaded By Uncommon Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows NetSupport RMM DLL Loaded By Uncommon Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
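Because the SPL shown for this use case is the drilldown rather than the detection logic, a direct look at the raw Sysmon EventID 7 telemetry can sanity-check the data feed. This is an illustrative sketch: the sourcetype assumes the Sysmon TA, and `pcicl32.dll` is a DLL commonly associated with NetSupport RMM in the linked DFIR reporting; confirm current indicators against those references before relying on the pattern:

```spl
index=* sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=7
  ImageLoaded="*\\pcicl32.dll"
| stats count min(_time) as firstTime max(_time) as lastTime by Image, ImageLoaded, Computer
```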
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows NetSupport RMM DLL Loaded By Uncommon Process\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.568",
              "n": "Windows Network Share Interaction Via Net",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies network share discovery and collection activities performed on Windows systems using the Net command. Attackers often use network share discovery to identify accessible shared resources within a network, which can be a precursor to privilege escalation or data exfiltration. By monitoring Windows Event Logs for the usage of the Net command to list and interact with network shares, this detection helps identify potential reconnaissance and collection activities.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command. Additional filters needs to be applied.",
              "refs": "https://attack.mitre.org/techniques/T1135/",
              "mitre": [
                "T1135",
                "T1039"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Network Share Interaction Via Net\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1135, T1039. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Network Share Interaction Via Net\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command. Additional filters needs to be applied.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
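The packaged detection logic lives in Content Management; for ad-hoc review, the sketch below approximates the underlying idea (Net command share interaction surfaced via the Endpoint data model). It is a simplified illustration, not the shipped ESCU search, and assumes the same `security_content_summariesonly` and `drop_dm_object_name` macros used elsewhere in this catalog:

```spl
| tstats `security_content_summariesonly` count from datamodel=Endpoint.Processes
    where Processes.process_name IN ("net.exe", "net1.exe")
      (Processes.process="* share*" OR Processes.process="* use *" OR Processes.process="* view*")
    by Processes.dest Processes.user Processes.process
| `drop_dm_object_name("Processes")`
```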
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Network Share Interaction Via Net\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.569",
              "n": "Windows NirSoft Tool Bundle File Created",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows NirSoft Tool Bundle File Created. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://thedfirreport.com/2020/04/04/gogoogle-ransomware/, https://asec.ahnlab.com/en/48940/, https://www.trendmicro.com/en_gb/research/23/c/emotet-returns-now-adopts-binary-padding-for-evasion.html, https://www.welivesecurity.com/2022/06/16/how-emotet-is-changing-tactics-microsoft-tightening-office-macro-security/",
              "mitre": [
                "T1588.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows NirSoft Tool Bundle File Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1588.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows NirSoft Tool Bundle File Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
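A quick look at the raw file-creation telemetry feeding this detection can confirm coverage before enabling it. A minimal sketch, assuming the Sysmon TA sourcetype and default field names; adjust the index scope to your environment:

```spl
index=* sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=11
| stats count min(_time) as firstTime values(TargetFilename) as sample_files by Image, Computer
| sort - count
```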
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows NirSoft Tool Bundle File Created\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.570",
              "n": "Windows NirSoft Utilities",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows NirSoft Utilities. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time)\n    as lastTime FROM datamodel=Endpoint.Processes\n    by Processes.action Processes.dest Processes.original_file_name Processes.parent_process\n       Processes.parent_process_exec Processes.parent_process_guid Processes.parent_process_id\n       Processes.parent_process_name Processes.parent_process_path Processes.process\n       Processes.process_exec Processes.process_guid Processes.process_hash Processes.process_id\n       Processes.process_integrity_level Processes.process_name Processes.process_path\n       Processes.user Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(\"Processes\")`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | lookup update=true is_nirsoft_software filename as process_name OUTPUT nirsoftFile\n    | search nirsoftFile=true\n    | `windows_nirsoft_utilities_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present. Filtering may be required before setting to alert.",
              "refs": "https://www.cisa.gov/uscert/ncas/alerts/TA18-201A, http://www.nirsoft.net/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1588.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows NirSoft Utilities\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1588.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows NirSoft Utilities\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present. Filtering may be required before setting to alert.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on Windows NirSoft Utilities so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.571",
              "n": "Windows Non Discord App Access Discord LevelDB",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects non-Discord applications accessing the Discord LevelDB database. It leverages Windows Security Event logs, specifically event code 4663, to identify file access attempts to the LevelDB directory by processes other than Discord. This activity is significant as it may indicate attempts to steal Discord credentials or access sensitive user data. If confirmed malicious, this could lead to unauthorized access to user profiles, messages, and other critical information, p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest Windows Security Event logs and track event code 4663. For 4663, enable \"Audit Object Access\" in Group Policy. Then check the two boxes listed for both \"Success\" and \"Failure.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.404keylogger",
              "mitre": [
                "T1012"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Non Discord App Access Discord LevelDB\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Non Discord App Access Discord LevelDB\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
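To see the raw 4663 activity behind the risk events, a hedged sketch follows. The path pattern and the field names (`Object_Name` and `Process_Name`, per typical Windows TA extractions; XML-rendered events may differ) are assumptions to verify against your data:

```spl
index=* sourcetype=*WinEventLog* EventCode=4663 Object_Name="*\\discord\\Local Storage\\leveldb*"
| stats count by Process_Name, Object_Name, host
| sort - count
```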
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Non Discord App Access Discord LevelDB\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.572",
              "n": "Windows Non-System Account Targeting Lsass",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies non-SYSTEM accounts requesting access to lsass.exe. This detection leverages Sysmon EventCode 10 logs to monitor access attempts to the Local Security Authority Subsystem Service (lsass.exe) by non-SYSTEM users. This activity is significant as it may indicate credential dumping attempts or unauthorized access to sensitive credentials. If confirmed malicious, an attacker could potentially extract credentials from memory, leading to privilege escalation or lateral…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_id$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA. Enabling EventCode 10 TargetProcess lsass.exe is required.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will occur based on legitimate application requests, filter based on source image as needed.",
              "refs": "https://en.wikipedia.org/wiki/Local_Security_Authority_Subsystem_Service, https://learn.microsoft.com/en-us/windows/win32/api/minidumpapiset/nf-minidumpapiset-minidumpwritedump, https://cyberwardog.blogspot.com/2017/03/chronicles-of-threat-hunter-hunting-for_22.html, https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/master/Exfiltration/Invoke-Mimikatz.ps1, https://learn.microsoft.com/en-us/windows/win32/procthread/process-security-and-access-rights?redirectedfrom=MSDN",
              "mitre": [
                "T1003.001"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Non-System Account Targeting Lsass\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Non-System Account Targeting Lsass\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will occur based on legitimate application requests, filter based on source image as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
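The known-false-positives note recommends filtering on source image; baselining which processes routinely touch lsass.exe makes that filter concrete. A minimal sketch, assuming the Sysmon TA sourcetype and default EventID 10 field names:

```spl
index=* sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=10
  TargetImage="*\\lsass.exe"
| stats count by SourceImage, GrantedAccess, Computer
| sort - count
```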
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Non-System Account Targeting Lsass\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.573",
              "n": "Windows Possible Credential Dumping",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential credential dumping by identifying specific GrantedAccess permission requests and CallTrace DLLs targeting the LSASS process. It leverages Sysmon EventCode 10 logs, focusing on access requests to lsass.exe and call traces involving debug and native API DLLs like dbgcore.dll, dbghelp.dll, and ntdll.dll. This activity is significant as credential dumping can lead to unauthorized access to sensitive credentials. If confirmed malicious, attackers could gain el…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user_id$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA. Enabling EventCode 10 TargetProcess lsass.exe is required.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will occur based on GrantedAccess 0x1010 and 0x1400, filter based on source image as needed or remove them. Concern is Cobalt Strike usage of Mimikatz will generate 0x1010 initially, but later be caught.",
              "refs": "https://en.wikipedia.org/wiki/Local_Security_Authority_Subsystem_Service, https://learn.microsoft.com/en-us/windows/win32/api/minidumpapiset/nf-minidumpapiset-minidumpwritedump, https://cyberwardog.blogspot.com/2017/03/chronicles-of-threat-hunter-hunting-for_22.html, https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/master/Exfiltration/Invoke-Mimikatz.ps1, https://learn.microsoft.com/en-us/windows/win32/procthread/process-security-and-access-rights?redirectedfrom=MSDN, https://github.com/redcanaryco/AtomicTestHarnesses/blob/master/Windows/TestHarnesses/T1003.001_DumpLSASS/DumpLSASS.ps1",
              "mitre": [
                "T1003.001"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Possible Credential Dumping\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Possible Credential Dumping\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will occur based on GrantedAccess 0x1010 and 0x1400, filter based on source image as needed or remove them. Concern is Cobalt Strike usage of Mimikatz will generate 0x1010 initially, but later be caught.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
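              "_spl_sketch": "A hedged tuning sketch for the GrantedAccess guidance above: profile which source images request the noisy access masks before writing suppressions. The index and sourcetype names are assumptions; substitute your Sysmon feed.\n\n```spl\nindex=endpoint sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=10 TargetImage=\"*lsass.exe\"\n| stats count by SourceImage, GrantedAccess\n| sort -count\n```",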
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Possible Credential Dumping\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.574",
              "n": "Windows Post Exploitation Risk Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies four or more distinct post-exploitation behaviors on a Windows system. It leverages data from the Risk data model in Splunk Enterprise Security, focusing on multiple risk events and their associated MITRE ATT&CK tactics and techniques. This activity is significant as it indicates potential malicious actions following an initial compromise, such as persistence, privilege escalation, or data exfiltration. If confirmed malicious, this behavior could allow attackers…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present based on many factors. Tune the correlation as needed to reduce too many triggers.",
              "refs": "https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS/winPEASbat",
              "mitre": [
                "T1012",
                "T1049",
                "T1069",
                "T1016",
                "T1003",
                "T1082",
                "T1115",
                "T1552"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Post Exploitation Risk Behavior\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1012, T1049, T1069, T1016, T1003, T1082, T1115, T1552. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Post Exploitation Risk Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present based on many factors. Tune the correlation as needed to reduce too many triggers.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
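              "_spl_sketch": "A sketch of the \"four or more distinct behaviors\" aggregation idea, assuming the ES Risk data model with MITRE annotations; this approximates, but is not, the packaged ESCU correlation logic.\n\n```spl\n| tstats `summariesonly` dc(All_Risk.annotations.mitre_attack.mitre_tactic_id) as tactic_count values(All_Risk.search_name) as detections from datamodel=Risk.All_Risk where All_Risk.risk_object_type=\"system\" by All_Risk.risk_object\n| where tactic_count >= 4\n```",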
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Post Exploitation Risk Behavior\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.575",
              "n": "Windows Potential AppDomainManager Hijack Artifacts Creation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of an .exe file along with its corresponding .exe.config and a .dll in the same directory, which is a common pattern indicative of potential AppDomain hijacking or CLR code injection attempts. This behavior may signal that a malicious actor is attempting to load a rogue assembly into a legitimate application's AppDomain, allowing code execution under the context of a trusted process.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Filesystem` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection may still produce false positives, so additional filtering is recommended. To validate potential alerts, verify that the executable’s original file name matches its current file name, and also review the associated .config file to confirm which DLLs are expected to load during execution. This helps distinguish legitimate activity from suspicious behavior.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2025/11/03/sesameop-novel-backdoor-uses-openai-assistants-api-for-command-and-control/, https://attack.mitre.org/techniques/T1574/014/, https://gist.github.com/djhohnstein/afb93a114b848e16facf0b98cd7cb57b, https://www.scworld.com/brief/appdomain-manager-injection-exploited-for-cobalt-strike-beacon-delivery, https://jp.security.ntt/insights_resources/tech_blog/appdomainmanager-injection-en/",
              "mitre": [
                "T1574.014"
              ],
              "dtype": "file_path",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Potential AppDomainManager Hijack Artifacts Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file path entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.014. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Potential AppDomainManager Hijack Artifacts Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection may still produce false positives, so additional filtering is recommended. To validate potential alerts, verify that the executable’s original file name matches its current file name, and also review the associated .config file to confirm which DLLs are expected to load during execution. This helps distinguish legitimate activity from suspicious behavior.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
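              "_spl_sketch": "A sketch of the lookup-based suppression pattern from the tuning bullets, applied to the .exe.config artifact idea; the lookup `known_good_file_creators` and its `approved` field are hypothetical placeholders for your own allowlist, and it assumes the process name is populated in Endpoint.Filesystem as the implementation note requires.\n\n```spl\n| tstats `summariesonly` count from datamodel=Endpoint.Filesystem where Filesystem.file_name=\"*.exe.config\" by Filesystem.dest Filesystem.process_name Filesystem.file_path\n| rename Filesystem.* as *\n| lookup known_good_file_creators process_name OUTPUT approved\n| where isnull(approved)\n```",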
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Potential AppDomainManager Hijack Artifacts Creation\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.576",
              "n": "Windows PowerShell Add Module to Global Assembly Cache",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of a DLL to the Windows Global Assembly Cache (GAC) using PowerShell. It leverages PowerShell Script Block Logging to identify commands containing \"system.enterpriseservices.internal.publish\". This activity is significant because adding a DLL to the GAC allows it to be shared across multiple applications, potentially enabling an adversary to execute malicious code system-wide. If confirmed malicious, this could lead to widespread code execution, privil…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on developers or third party utilities adding items to the GAC.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2022/12/12/iis-modules-the-evolution-of-web-shells-and-how-to-detect-them/, https://www.microsoft.com/en-us/security/blog/2022/07/26/malicious-iis-extensions-quietly-open-persistent-backdoors-into-servers/",
              "mitre": [
                "T1505.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell Add Module to Global Assembly Cache\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell Add Module to Global Assembly Cache\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on developers or third party utilities adding items to the GAC.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
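              "_spl_sketch": "A hunting sketch over raw 4104 script-block logs for the GAC publish pattern this analytic targets; the index name and source path are assumptions for your PowerShell logging feed.\n\n```spl\nindex=wineventlog source=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104 ScriptBlockText=\"*system.enterpriseservices.internal.publish*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, UserID\n```",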
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PowerShell Add Module to Global Assembly Cache\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.577",
              "n": "Windows PowerShell Disable HTTP Logging",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of `get-WebConfigurationProperty` and `Set-ItemProperty` commands in PowerShell to disable HTTP logging on Windows systems. This detection leverages PowerShell Script Block Logging, specifically looking for script blocks that reference HTTP logging properties and attempt to set them to \"false\" or \"dontLog\". Disabling HTTP logging is significant as it can be used by adversaries to cover their tracks and delete logs, hindering forensic investigations. If conf…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible administrators or scripts may run these commands, filtering may be required.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2022/12/12/iis-modules-the-evolution-of-web-shells-and-how-to-detect-them/, https://www.crowdstrike.com/wp-content/uploads/2022/05/crowdstrike-iceapple-a-novel-internet-information-services-post-exploitation-framework-1.pdf, https://unit42.paloaltonetworks.com/unit42-oilrig-uses-rgdoor-iis-backdoor-targets-middle-east/, https://www.secureworks.com/research/bronze-union",
              "mitre": [
                "T1505.004",
                "T1562.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell Disable HTTP Logging\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.004, T1562.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell Disable HTTP Logging\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible administrators or scripts may run these commands, filtering may be required.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PowerShell Disable HTTP Logging\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.578",
              "n": "Windows PowerShell FakeCAPTCHA Clipboard Execution",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows PowerShell FakeCAPTCHA Clipboard Execution. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate PowerShell commands that use hidden windows for automation tasks may trigger this detection. The search specifically looks for patterns typical of FakeCAPTCHA campaigns. You may need to add additional exclusions for legitimate administrative activities in your environment by modifying the filter macro.",
              "refs": "https://urlhaus.abuse.ch/, https://www.proofpoint.com/us/blog/threat-insight/security-brief-clickfix-social-engineering-technique-floods-threat-landscape, https://reliaquest.com/blog/using-captcha-for-compromise/, https://attack.mitre.org/techniques/T1204/001/, https://github.com/MHaggis/ClickGrab",
              "mitre": [
                "T1059.001",
                "T1204.001",
                "T1059.003"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell FakeCAPTCHA Clipboard Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1204.001, T1059.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell FakeCAPTCHA Clipboard Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate PowerShell commands that use hidden windows for automation tasks may trigger this detection. The search specifically looks for patterns typical of FakeCAPTCHA campaigns. You may need to add additional exclusions for legitimate administrative activities in your environment by modifying the filter macro.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
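              "_spl_sketch": "A hunting sketch for the clipboard-paste pattern this campaign uses, via the CIM Endpoint.Processes data model: PowerShell spawned from explorer.exe with a hidden window. The command-line pattern is illustrative only, not the ESCU filter macro logic.\n\n```spl\n| tstats `summariesonly` count min(_time) as firstTime from datamodel=Endpoint.Processes where Processes.parent_process_name=\"explorer.exe\" Processes.process_name=\"powershell.exe\" Processes.process=\"*-w hidden*\" by Processes.dest Processes.user Processes.process\n```",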
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PowerShell FakeCAPTCHA Clipboard Execution\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.579",
              "n": "Windows PowerShell Get CIMInstance Remote Computer",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the Get-CimInstance cmdlet with the -ComputerName parameter, indicating an attempt to retrieve information from a remote computer. It leverages PowerShell Script Block Logging to identify this specific command execution. This activity is significant as it may indicate unauthorized remote access or information gathering by an attacker. If confirmed malicious, this could allow the attacker to collect sensitive data from remote systems, potentially leading …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This is meant to be a low risk RBA anomaly analytic or to be used for hunting. Enable this with a low risk score and let it generate risk in the risk index.",
              "refs": "https://learn.microsoft.com/en-us/powershell/module/cimcmdlets/get-ciminstance?view=powershell-7.3",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell Get CIMInstance Remote Computer\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell Get CIMInstance Remote Computer\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This is meant to be a low risk RBA anomaly analytic or to be used for hunting. Enable this with a low risk score and let it generate risk in the risk index.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
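              "_splExample": "Illustrative sketch only, not the packaged ESCU detection logic (view that in Content Management). A minimal event search of this shape, assuming the ESCU `powershell` macro and CIM-style `dest`/`user_id` mappings from your Windows TA:\n\n```spl\n`powershell` EventCode=4104 ScriptBlockText=\"*get-ciminstance*\" ScriptBlockText=\"*-computername*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, user_id, ScriptBlockText\n```\n\nRun it over a short time range first to gauge local volume before enabling the correlation search.",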
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for the detection that watches PowerShell pulling system details from a remote computer, so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.580",
              "n": "Windows Powershell History File Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the usage of PowerShell to delete its command history file, which may indicate an attempt to evade detection by removing evidence of executed commands. PowerShell stores command history in ConsoleHost_history.txt under the user’s profile directory. Adversaries or malicious scripts may delete this file using Remove-Item, del, or similar commands. This detection focuses on file deletion events targeting the history file, correlating them with recent PowerShell activi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrators may execute this command that may cause some false positive.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-071a",
              "mitre": [
                "T1059.003",
                "T1070.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Powershell History File Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.003, T1070.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Powershell History File Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrators may execute this command that may cause some false positive.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
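              "_splExample": "Illustrative sketch only, not the packaged ESCU detection logic (view that in Content Management). A minimal event search for history-file deletion, assuming the ESCU `powershell` macro and CIM-style `dest`/`user_id` mappings; the deletion verbs shown are examples, extend as needed:\n\n```spl\n`powershell` EventCode=4104 ScriptBlockText=\"*consolehost_history.txt*\" ScriptBlockText IN (\"*remove-item*\", \"*del *\", \"*rm *\")\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, user_id, ScriptBlockText\n```\n\nRun it over a short time range first to gauge local volume before enabling the correlation search.",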
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Powershell History File Deletion\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.581",
              "n": "Windows PowerShell IIS Components WebGlobalModule Usage",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the usage of PowerShell Cmdlets - New-WebGlobalModule, Enable-WebGlobalModule, and Set-WebGlobalModule, which are used to create, enable, or modify IIS Modules. This detection leverages PowerShell Script Block Logging, specifically monitoring EventCode 4104 for these cmdlets. This activity is significant as adversaries may use these lesser-known cmdlets to manipulate IIS configurations, similar to AppCmd.exe, potentially bypassing traditional defenses. If confirmed…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible administrators or scripts may run these commands, filtering may be required.",
              "refs": "https://learn.microsoft.com/en-us/powershell/module/webadministration/new-webglobalmodule?view=windowsserver2022-ps, https://www.microsoft.com/en-us/security/blog/2022/12/12/iis-modules-the-evolution-of-web-shells-and-how-to-detect-them/, https://www.crowdstrike.com/wp-content/uploads/2022/05/crowdstrike-iceapple-a-novel-internet-information-services-post-exploitation-framework-1.pdf, https://unit42.paloaltonetworks.com/unit42-oilrig-uses-rgdoor-iis-backdoor-targets-middle-east/, https://www.secureworks.com/research/bronze-union, https://github.com/redcanaryco/atomic-red-team/tree/master/atomics/T1505.004",
              "mitre": [
                "T1505.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell IIS Components WebGlobalModule Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell IIS Components WebGlobalModule Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible administrators or scripts may run these commands, filtering may be required.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
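              "_splExample": "Illustrative sketch only, not the packaged ESCU detection logic (view that in Content Management). A minimal event search for the three WebGlobalModule cmdlets, assuming the ESCU `powershell` macro and CIM-style `dest`/`user_id` mappings:\n\n```spl\n`powershell` EventCode=4104 ScriptBlockText IN (\"*new-webglobalmodule*\", \"*enable-webglobalmodule*\", \"*set-webglobalmodule*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, user_id, ScriptBlockText\n```\n\nExpect very low volume outside IIS administration; investigate hits on servers with no scheduled IIS changes.",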
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PowerShell IIS Components WebGlobalModule Usage\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.582",
              "n": "Windows PowerShell Invoke-RestMethod IP Information Collection",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of PowerShell's Invoke-RestMethod cmdlet to collect geolocation data from ipinfo.io or IP address information from api.ipify.org. This behavior leverages PowerShell Script Block Logging to identify scripts that gather external IP information and potential geolocation data. This activity is significant as it may indicate reconnaissance efforts, where threat actors are attempting to determine the geographical location or network details of a compromised syste…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some legitimate applications or administrative scripts may use these services for IP validation or geolocation. Filter as needed for approved administrative tools.",
              "refs": "https://securityintelligence.com/posts/new-threat-actor-water-gamayun-targets-telecom-finance/, https://www.ncsc.gov.uk/report/weekly-threat-report-12th-april-2024",
              "mitre": [
                "T1082",
                "T1016",
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell Invoke-RestMethod IP Information Collection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082, T1016, T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell Invoke-RestMethod IP Information Collection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some legitimate applications or administrative scripts may use these services for IP validation or geolocation. Filter as needed for approved administrative tools.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
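              "_splExample": "Illustrative sketch only, not the packaged ESCU detection logic (view that in Content Management). A minimal event search pairing Invoke-RestMethod with the two documented IP services, assuming the ESCU `powershell` macro and CIM-style `dest`/`user_id` mappings:\n\n```spl\n`powershell` EventCode=4104 ScriptBlockText=\"*invoke-restmethod*\" ScriptBlockText IN (\"*ipinfo.io*\", \"*api.ipify.org*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, user_id, ScriptBlockText\n```\n\nBaseline first: some inventory or connectivity-check scripts legitimately query these services.",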
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PowerShell Invoke-RestMethod IP Information Collecti\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.583",
              "n": "Windows PowerShell Invoke-Sqlcmd Execution",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies potentially suspicious usage of Invoke-Sqlcmd PowerShell cmdlet, which can be used for database operations and potential data exfiltration. The detection looks for suspicious parameter combinations and query patterns that may indicate unauthorized database access, data theft, or malicious database operations. Threat actors may prefer using PowerShell Invoke-Sqlcmd over sqlcmd.exe as it provides a more flexible programmatic interface and can better evade detection.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "`powershell` EventCode=4104 ScriptBlockText=\"*invoke-sqlcmd*\" | eval script_lower=lower(ScriptBlockText) | eval has_query=case( match(script_lower, \"(?i)-query\\\\s+\"), 1, match(script_lower, \"(?i)-q\\\\s+\"), 1, true(), 0 ), has_input_file=case( match(script_lower, \"(?i)-inputfile\\\\s+\"), 1, match(script_lower, \"(?i)-i\\\\s+\"), 1, true(), 0 ), has_url_input=case( match(script_lower, \"(?i)-inputfile\\\\s+https?://\"), 1, match(script_lower, \"(?i)-i\\\\s+https?://\"), 1, match(script_lower, \"(?i)-inputfile\\\\s+ftp://\"), 1, match(script_lower, \"(?i)-i\\\\s+ftp://\"), 1, true(), 0 ), has_admin_conn=case( match(script_lower, \"(?i)-dedicatedadministratorconnection\"), 1, true(), 0 ), has_suspicious_auth=case( match(script_lower, \"(?i)-username\\\\s+sa\\\\b\"), 1, match(script_lower, \"(?i)-u\\\\s+sa\\\\b\"), 1, match(script_lower, \"(?i)-username\\\\s+admin\\\\b\"), 1, match(script_lower, \"(?i)-u\\\\s+admin\\\\b\"), 1, true(), 0 ), has_suspicious_query=case( match(script_lower, \"(?i)(xp_cmdshell|sp_oacreate|sp_execute_external|openrowset|bulk\\\\s+insert)\"), 1, match(script_lower, \"(?i)(master\\\\.\\\\.\\\\.sysdatabases|msdb\\\\.\\\\.\\\\.backuphistory|sysadmin|securityadmin)\"), 1, match(script_lower, \"(?i)(select.*from.*sys\\\\.|select.*password|dump\\\\s+database)\"), 1, match(script_lower, \"(?i)(sp_addextendedproc|sp_makewebtask|sp_addsrvrolemember)\"), 1, match(script_lower, \"(?i)(sp_configure.*show\\\\s+advanced|reconfigure|enable_xp_cmdshell)\"), 1, match(script_lower, \"(?i)(exec.*master\\\\.dbo\\\\.|exec.*msdb\\\\.dbo\\\\.)\"), 1, match(script_lower, \"(?i)(sp_password|sp_control_dbmasterkey_password|sp_dropextendedproc)\"), 1, match(script_lower, \"(?i)(powershell|cmd\\\\.exe|rundll32|regsvr32|certutil)\"), 1, true(), 0 ), has_data_exfil=case( match(script_lower, \"(?i)-outputas\\\\s+(dataset|datatables)\"), 1, match(script_lower, \"(?i)-as\\\\s+(dataset|datatables)\"), 1, match(script_lower, \"(?i)(for\\\\s+xml|for\\\\s+json)\"), 1, match(script_lower, \"(?i)(select.*into.*from|select.*into.*outfile)\"), 1, true(), 0 ), has_cert_bypass=case( match(script_lower, \"(?i)-trustservercertificate\"), 1, true(), 0 )",
              "m": "To successfully implement this detection, you need to be ingesting PowerShell logs with Script Block Logging and Module Logging enabled. The detection looks for Invoke-Sqlcmd usage in PowerShell scripts and evaluates the parameters and queries for suspicious patterns. Configure your PowerShell logging to capture script block execution and ensure the logs are mapped to the PowerShell node of the En…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Database administrators and developers frequently use Invoke-Sqlcmd as a legitimate tool for various database management tasks. This includes running automated database maintenance scripts, performing ETL (Extract, Transform, Load) processes, executing data migration jobs, implementing database deployment and configuration scripts, and running monitoring and reporting tasks. To effectively manage false positives in your environment, consider implementing several mitigation strategies. First, establish a whitelist of known administrator and service accounts that regularly perform these operations. Second, create exceptions for approved script paths where legitimate database operations typically occur. Additionally, it's important to baseline your environment's normal PowerShell database interaction patterns and implement monitoring for any deviations from these established patterns. Finally, consider adjusting the risk score thresholds based on your specific environment and security requirements to achieve an optimal balance between security and operational efficiency.",
              "refs": "https://learn.microsoft.com/en-us/powershell/module/sqlserver/invoke-sqlcmd, https://attack.mitre.org/techniques/T1059/001/, https://attack.mitre.org/techniques/T1059/003/",
              "mitre": [
                "T1059.001",
                "T1059.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell Invoke-Sqlcmd Execution\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1059.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell Invoke-Sqlcmd Execution\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Database administrators and developers frequently use Invoke-Sqlcmd as a legitimate tool for various database management tasks. This includes running automated database maintenance scripts, performing ETL (Extract, Transform, Load) processes, executing data migration jobs, implementing database deployment and configuration scripts, and running monitoring and reporting tasks. To effectively manage false positives in your environment, consider implementing several mitigation strategies. First, establish a whitelist of known administrator and service accounts that regularly perform these operations. Second, create exceptions for approved script paths where legitimate database operations typically occur. Additionally, it's important to baseline your environment's normal PowerShell database interaction patterns and implement monitoring for any deviations from these established patterns. Finally, consider adjusting the risk score thresholds based on your specific environment and security requirements to achieve an optimal balance between security and operational efficiency.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. 
If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
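              "_splExample": "Illustrative follow-on only, under the assumption that you run the documented hunting SPL unchanged: its `eval` stages emit 0/1 flag fields (has_url_input, has_admin_conn, and so on) but no final filter, so one hedged way to consume them is a scoring stage appended to that search. The selection of flags and the >= 2 threshold are arbitrary starting points, not ESCU defaults:\n\n```spl\n| eval suspicion_score = has_url_input + has_admin_conn + has_suspicious_auth + has_suspicious_query + has_data_exfil + has_cert_bypass\n| where suspicion_score >= 2\n| table _time, dest, user_id, suspicion_score, ScriptBlockText\n```\n\nTune the threshold against a baseline of known-good DBA activity before trusting the results.",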
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on potentially suspicious usage of Invoke-Sqlcmd PowerShell cmdlet, which can be used for database operations and potential data exfiltration so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.584",
              "n": "Windows Powershell Logoff User via Quser",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the process of logging off a user through the use of the quser and logoff commands. By monitoring for these commands, the analytic identifies actions where a user session is forcibly terminated, which could be part of an administrative task or a potentially unauthorized access attempt. This detection helps identify potential misuse or malicious activity where a user’s access is revoked without proper authorization, providing insight into potential security incident…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command.",
              "refs": "https://devblogs.microsoft.com/scripting/automating-quser-through-powershell/",
              "mitre": [
                "T1059.001",
                "T1531"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Powershell Logoff User via Quser\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1531. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Powershell Logoff User via Quser\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
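              "_splExample": "Illustrative sketch only, not the packaged ESCU detection logic (view that in Content Management). A minimal event search for quser-then-logoff activity within one script block, assuming the ESCU `powershell` macro and CIM-style `dest`/`user_id` mappings:\n\n```spl\n`powershell` EventCode=4104 ScriptBlockText=\"*quser*\" ScriptBlockText=\"*logoff*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, user_id, ScriptBlockText\n```\n\nCorrelate hits with change records, since helpdesk and RDS administrators use the same pattern legitimately.",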
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Powershell Logoff User via Quser\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.585",
              "n": "Windows PowerShell MSIX Package Installation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of PowerShell commands to install unsigned AppX packages using Add-AppxPackage or Add-AppPackage cmdlets with the -AllowUnsigned flag. This detection leverages PowerShell Script Block Logging (EventCode=4104) to capture the full command content. This activity is significant as adversaries may use unsigned AppX packages to install malicious applications, bypass security controls, or establish persistence. If confirmed malicious, this could allow attack…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to command entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://learn.microsoft.com/en-us/windows/msix/package/unsigned-package, https://learn.microsoft.com/en-us/windows/msix/desktop/powershell-msix-cmdlets, https://learn.microsoft.com/en-us/powershell/module/appx/add-appxpackage, https://twitter.com/WindowsDocs/status/1620078135080325122, https://attack.mitre.org/techniques/T1059/001/, https://attack.mitre.org/techniques/T1547/001/, https://learn.microsoft.com/en-us/powershell/module/appx/add-appxpackage?view=windowsserver2025-ps",
              "mitre": [
                "T1059.001",
                "T1547.001"
              ],
              "dtype": "command",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell MSIX Package Installation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to command entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1547.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell MSIX Package Installation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
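              "_splExample": "Illustrative sketch only, not the packaged ESCU detection logic (view that in Content Management). A minimal event search for unsigned AppX package installation, assuming the ESCU `powershell` macro and CIM-style `dest`/`user_id` mappings:\n\n```spl\n`powershell` EventCode=4104 ScriptBlockText IN (\"*add-appxpackage*\", \"*add-apppackage*\") ScriptBlockText=\"*-allowunsigned*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by dest, user_id, ScriptBlockText\n```\n\nHits outside development or packaging hosts deserve a closer look, since legitimate unsigned installs are rare on general-purpose systems.",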
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PowerShell MSIX Package Installation\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.586",
              "n": "Windows PowerShell Process Implementing Manual Base64 Decoder",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows PowerShell Process Implementing Manual Base64 Decoder. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.virustotal.com/gui/file/4b3ab4d9f2332da6b6cd8d9d0f4910a5eb85ac8c969108acb3ad49631812f998/behavior",
              "mitre": [
                "T1027.010",
                "T1059.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell Process Implementing Manual Base64 Decoder\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1027.010, T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell Process Implementing Manual Base64 Decoder\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
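\n\nOptionally, confirm risk events are being written for this detection (a minimal sketch, assuming the default ES `risk` index and a correlation search name containing \"Manual Base64 Decoder\"; rename both to match your deployment):\n\n```spl\nindex=risk search_name=\"*Manual Base64 Decoder*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by risk_object, risk_object_type\n```",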
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PowerShell Process Implementing Manual Base64 Decode\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.587",
              "n": "Windows PowerShell ScheduleTask",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects potential malicious activities involving PowerShell's task scheduling cmdlets. It leverages PowerShell Script Block Logging (EventCode 4104) to identify unusual or suspicious use of cmdlets like 'New-ScheduledTask' and 'Set-ScheduledTask'. This activity is significant as attackers often use these cmdlets for persistence and remote execution of malicious code. If confirmed malicious, this could allow attackers to maintain access, deliver additional payloads, or exec…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Benign administrative tasks can also trigger alerts, necessitating a firm understanding of the typical system behavior and precise tuning of the analytic to reduce false positives.",
              "refs": "https://learn.microsoft.com/en-us/powershell/module/scheduledtasks/?view=windowsserver2022-ps, https://thedfirreport.com/2023/06/12/a-truly-graceful-wipe-out/",
              "mitre": [
                "T1053.005",
                "T1059.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell ScheduleTask\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005, T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell ScheduleTask\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Benign administrative tasks can also trigger alerts, necessitating a firm understanding of the typical system behavior and precise tuning of the analytic to reduce false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
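\n\nOptionally, spot-check that the scheduling cmdlets this detection watches for appear in your script block logging at all (a minimal sketch, assuming Splunk_TA_windows-style `EventCode` and `ScriptBlockText` extractions; narrow `index=*` and the `source` filter to your PowerShell logging feed):\n\n```spl\nindex=* source=\"*PowerShell/Operational*\" EventCode=4104\n  ScriptBlockText IN (\"*New-ScheduledTask*\", \"*Set-ScheduledTask*\")\n| stats count by host\n```",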
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PowerShell ScheduleTask\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.588",
              "n": "Windows PowerShell WMI Win32 ScheduledJob",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the Win32_ScheduledJob WMI class via PowerShell script block logging. This class, which manages scheduled tasks, is disabled by default due to security concerns and must be explicitly enabled through registry modifications. The detection leverages PowerShell event code 4104 and script block text analysis. Monitoring this activity is crucial as it may indicate malicious intent, especially if the class was enabled by an attacker. If confirmed malicious, th…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on legacy applications or utilities. Win32_ScheduledJob uses the Remote Procedure Call (RPC) protocol to create scheduled tasks on remote computers. It uses the DCOM (Distributed Component Object Model) infrastructure to establish a connection with the remote computer and invoke the necessary methods. The RPC service needs to be running on both the local and remote computers for the communication to take place.",
              "refs": "https://learn.microsoft.com/en-us/windows/win32/cimwin32prov/win32-scheduledjob",
              "mitre": [
                "T1059.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell WMI Win32 ScheduledJob\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell WMI Win32 ScheduledJob\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on legacy applications or utilities. Win32_ScheduledJob uses the Remote Procedure Call (RPC) protocol to create scheduled tasks on remote computers. It uses the DCOM (Distributed Component Object Model) infrastructure to establish a connection with the remote computer and invoke the necessary methods. The RPC service needs to be running on both the local and remote computers for the communication to take place.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
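\n\nOptionally, spot-check for the WMI class this detection watches in your script block logging (a minimal sketch, assuming Splunk_TA_windows-style `EventCode` and `ScriptBlockText` extractions; narrow `index=*` and the `source` filter to your PowerShell logging feed):\n\n```spl\nindex=* source=\"*PowerShell/Operational*\" EventCode=4104 ScriptBlockText=\"*Win32_ScheduledJob*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n```",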
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PowerShell WMI Win32 ScheduledJob\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.589",
              "n": "Windows PowerSploit GPP Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the Get-GPPPassword PowerShell cmdlet, which is used to search for unsecured credentials in Group Policy Preferences (GPP). This detection leverages PowerShell Script Block Logging to identify specific script block text associated with this cmdlet. Monitoring this activity is crucial as it can indicate an attempt to retrieve and decrypt stored credentials from SYSVOL, potentially leading to unauthorized access. If confirmed malicious, this activity…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1552/006/, https://pentestlab.blog/2017/03/20/group-policy-preferences/, https://adsecurity.org/?p=2288, https://www.hackingarticles.in/credential-dumping-group-policy-preferences-gpp/, https://adsecurity.org/?p=2288, https://support.microsoft.com/en-us/topic/ms14-025-vulnerability-in-group-policy-preferences-could-allow-elevation-of-privilege-may-13-2014-60734e15-af79-26ca-ea53-8cd617073c30",
              "mitre": [
                "T1552.006"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerSploit GPP Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerSploit GPP Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
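\n\nOptionally, spot-check for the cmdlet this detection watches in your script block logging (a minimal sketch, assuming Splunk_TA_windows-style `EventCode` and `ScriptBlockText` extractions; narrow `index=*` and the `source` filter to your PowerShell logging feed):\n\n```spl\nindex=* source=\"*PowerShell/Operational*\" EventCode=4104 ScriptBlockText=\"*Get-GPPPassword*\"\n| stats count by host\n```",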
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PowerSploit GPP Discovery\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.590",
              "n": "Windows PowerView AD Access Control List Enumeration",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of PowerView PowerShell cmdlets `Get-ObjectAcl` or `Get-DomainObjectAcl`, which are used to enumerate Access Control List (ACL) permissions for Active Directory objects. It leverages Event ID 4104 from PowerShell Script Block Logging to identify this activity. This behavior is significant as it may indicate an attempt to discover weak permissions in Active Directory, potentially leading to privilege escalation. If confirmed malicious, attackers could …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may leverage PowerView for legitimate purposes, filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1078/002/, https://medium.com/r3d-buck3t/enumerating-access-controls-in-active-directory-c06e2efa8b89, https://www.ired.team/offensive-security-experiments/active-directory-kerberos-abuse/abusing-active-directory-acls-aces, https://powersploit.readthedocs.io/en/latest/Recon/Get-DomainObjectAcl/",
              "mitre": [
                "T1078.002",
                "T1069"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerView AD Access Control List Enumeration\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1078.002, T1069. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerView AD Access Control List Enumeration\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may leverage PowerView for legitimate purposes, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
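\n\nIf administrators legitimately run PowerView in your environment, a per-entity suppression can be layered onto the investigation search rather than disabling the detection (a minimal sketch; `approved_admin_hosts` is a hypothetical lookup with `host` and `approved` columns that you would need to create and populate):\n\n```spl\n| from datamodel Risk.All_Risk\n| search search_name=\"*PowerView AD Access Control List Enumeration*\"\n| lookup approved_admin_hosts host AS normalized_risk_object OUTPUT approved\n| where isnull(approved)\n```",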
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PowerView Access Control List Enumeration\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.591",
              "n": "Windows PowerView Kerberos Service Ticket Request",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-DomainSPNTicket` commandlet, part of the PowerView tool, by leveraging PowerShell Script Block Logging (EventCode=4104). This commandlet requests Kerberos service tickets for specified service principal names (SPNs). Monitoring this activity is crucial as it can indicate attempts to perform Kerberoasting, a technique used to extract SPN account passwords via cracking tools like hashcat. If confirmed malicious, this activity could allow att…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positive may include Administrators using PowerView for troubleshooting and management.",
              "refs": "https://powersploit.readthedocs.io/en/latest/Recon/Get-DomainSPNTicket/, https://book.hacktricks.xyz/windows-hardening/active-directory-methodology/kerberoast, https://github.com/PowerShellMafia/PowerSploit/blob/master/Recon/PowerView.ps1, https://www.ired.team/offensive-security-experiments/active-directory-kerberos-abuse/t1208-kerberoasting, https://attack.mitre.org/techniques/T1558/003",
              "mitre": [
                "T1558.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerView Kerberos Service Ticket Request\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerView Kerberos Service Ticket Request\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positive may include Administrators using PowerView for troubleshooting and management.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
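\n\nOptionally, spot-check for the cmdlet this detection watches in your script block logging (a minimal sketch, assuming Splunk_TA_windows-style `EventCode` and `ScriptBlockText` extractions; narrow `index=*` and the `source` filter to your PowerShell logging feed):\n\n```spl\nindex=* source=\"*PowerShell/Operational*\" EventCode=4104 ScriptBlockText=\"*Get-DomainSPNTicket*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host\n```",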
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PowerView Kerberos Service Ticket Request\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.592",
              "n": "Windows PowerView SPN Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of the `Get-DomainUser` or `Get-NetUser` PowerShell cmdlets with the `-SPN` parameter, indicating the use of PowerView for SPN discovery. It leverages PowerShell Script Block Logging (EventCode=4104) to identify these specific commands. This activity is significant as it suggests an attempt to enumerate domain accounts associated with Service Principal Names (SPNs), a common precursor to Kerberoasting attacks. If confirmed malicious, this could allow …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positive may include Administrators using PowerView for troubleshooting and management.",
              "refs": "https://book.hacktricks.xyz/windows-hardening/active-directory-methodology/kerberoast, https://github.com/PowerShellMafia/PowerSploit/blob/master/Recon/PowerView.ps1, https://www.ired.team/offensive-security-experiments/active-directory-kerberos-abuse/t1208-kerberoasting, https://attack.mitre.org/techniques/T1558/003",
              "mitre": [
                "T1558.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerView SPN Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1558.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerView SPN Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positive may include Administrators using PowerView for troubleshooting and management.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for an audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
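\n\nOptionally, spot-check for the cmdlet-plus-parameter combination this detection watches in your script block logging (a minimal sketch, assuming Splunk_TA_windows-style `EventCode` and `ScriptBlockText` extractions; narrow `index=*` and the `source` filter to your PowerShell logging feed):\n\n```spl\nindex=* source=\"*PowerShell/Operational*\" EventCode=4104\n  ScriptBlockText=\"*Get-DomainUser*\" ScriptBlockText=\"*-SPN*\"\n| stats count by host\n```",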
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PowerView SPN Discovery\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.593",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a process running with low or medium integrity from a user account spawns an elevated process with high or system integrity in suspicious locations. This behavior is identified using process execution data from Windows process monitoring or Sysmon EventID 1. This activity is significant as it may indicate a threat actor successfully elevating privileges, which is a common tactic in advanced attacks. If confirmed malicious, this could allow the attacker to exec…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Target environment must ingest process execution data sources such as Windows process monitoring and/or Sysmon EID 1.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be generated by administrators installing benign applications using run-as/elevation.",
              "refs": "https://attack.mitre.org/techniques/T1068/, https://redcanary.com/blog/getsystem-offsec/, https://atomicredteam.io/privilege-escalation/T1134.001/",
              "mitre": [
                "T1068",
                "T1548",
                "T1134"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068, T1548, T1134. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be generated by administrators installing benign applications using run-as/elevation.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
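\n\nRisk Accumulation Example\n\nTo see how this detection’s firings accumulate toward the Notable threshold, total the risk events for each entity (a sketch: index=risk is the default ES risk index, and the 24-hour window with a threshold of 100 mirrors the stock risk-threshold rule; check your Risk Notable rule for the values actually configured):\n\n```spl\nindex=risk earliest=-24h\n| stats sum(risk_score) as total_risk values(search_name) as contributing_detections by risk_object\n| where total_risk > 100\n| sort -total_risk\n```",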
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"True Positive Test\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.594",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects any system integrity level process spawned by a non-system account. It leverages Sysmon EventID 1, focusing on process integrity and parent user data. This behavior is significant as it often indicates successful privilege escalation to SYSTEM from a user-controlled process or service. If confirmed malicious, this activity could allow an attacker to gain full control over the system, execute arbitrary code, and potentially compromise the entire environment.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$src_user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Target environment must ingest sysmon data, specifically Event ID 1 with process integrity and parent user data.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1068/, https://redcanary.com/blog/getsystem-offsec/, https://atomicredteam.io/privilege-escalation/T1134.001/",
              "mitre": [
                "T1068",
                "T1548",
                "T1134"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068, T1548, T1134. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"True Positive Test\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.595",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a process with low, medium, or high integrity spawns a system integrity process from a user-controlled location. This behavior is indicative of privilege escalation attempts where attackers elevate their privileges to SYSTEM level from a user-controlled process or service. The detection leverages Sysmon data, specifically Event ID 15, to identify such transitions. Monitoring this activity is crucial as it can signify an attacker gaining SYSTEM-level access, po…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "Target environment must ingest sysmon data, specifically Event ID 15.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1068/, https://redcanary.com/blog/getsystem-offsec/, https://atomicredteam.io/privilege-escalation/T1134.001/",
              "mitre": [
                "T1068",
                "T1548",
                "T1134"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1068, T1548, T1134. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
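\n\nData Source Validation Example\n\nBefore enabling, confirm the Sysmon feed itself is present. A sketch, assuming the Splunk Add-on for Sysmon default sourcetype (adjust to your deployment) and checking for the process-related event IDs referenced by this use case:\n\n```spl\nindex=* sourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode IN (1, 15)\n| stats count by host, EventCode\n```",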
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"True Positive Test\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.596",
              "n": "Windows Process Executed From Removable Media",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic is used to identify when a removable media device is attached to a machine and then a process is executed from the same drive letter assigned to the removable media device. Adversaries and Insider Threats may use removable media devices for several malicious activities, including initial access, execution, and exfiltration.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 13",
              "q": "| from datamodel:Endpoint.Processes | search dest=$dest$ process_current_directory=$object_handle$*",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to registry value entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 13 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate USB activity will also be detected. Please verify and investigate as appropriate.",
              "refs": "https://attack.mitre.org/techniques/T1200/, https://www.cisa.gov/news-events/news/using-caution-usb-drives, https://www.bleepingcomputer.com/news/security/fbi-hackers-use-badusb-to-target-defense-firms-with-ransomware/",
              "mitre": [
                "T1200",
                "T1025",
                "T1091"
              ],
              "dtype": "registry_value_text",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process Executed From Removable Media\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to registry value entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1200, T1025, T1091. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process Executed From Removable Media\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate USB activity will also be detected. Please verify and investigate as appropriate.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on this analytic is used to identify when a removable media device is attached to a machine and then a process is executed from the same drive letter assigned to the removable media device so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.597",
              "n": "Windows Process Execution From ProgramData",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Process Execution From ProgramData. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count values(Processes.process_name)\n    as process_name values(Processes.process) as process min(_time) as firstTime max(_time)\n    as lastTime from datamodel=Endpoint.Processes where",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may allow execution of specific binaries in non-standard paths. Filter as needed.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2023/05/24/volt-typhoon-targets-us-critical-infrastructure-with-living-off-the-land-techniques/",
              "mitre": [
                "T1036.005"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process Execution From ProgramData\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process Execution From ProgramData\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may allow execution of specific binaries in non-standard paths. Filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on Windows Process Execution From ProgramData so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.598",
              "n": "Windows Process Execution From RDP Share",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Process Execution From RDP Share. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://jsac.jpcert.or.jp/archive/2024/pdf/JSAC2024_2_7_hara_shoji_higashi_vickie-su_nick-dai_en.pdf, https://thedfirreport.com/2020/04/04/gogoogle-ransomware/, https://www.welivesecurity.com/2022/09/30/amazon-themed-campaigns-lazarus-netherlands-belgium/",
              "mitre": [
                "T1021.001",
                "T1105",
                "T1059"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process Execution From RDP Share\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001, T1105, T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process Execution From RDP Share\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
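\n\nIllustrative Hunting Sketch\n\nFor orientation only, and not the packaged ESCU logic (that lives in the Correlation Search definition): a minimal CIM sketch of the pattern this detection targets, namely processes launched from an RDP-redirected tsclient share. The loose *tsclient* wildcard is an assumption to keep path escaping simple; tighten it to your environment’s conventions:\n\n```spl\n| tstats count min(_time) as firstTime max(_time) as lastTime\n    from datamodel=Endpoint.Processes\n    where Processes.process_path=\"*tsclient*\"\n    by Processes.dest, Processes.user, Processes.process_name\n```",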
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Process Execution From RDP Share\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.599",
              "n": "Windows Process Injection into Commonly Abused Processes",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects process injection into executables that are commonly abused using Sysmon EventCode 10. It identifies suspicious GrantedAccess requests (0x40 and 0x1fffff) to processes such as notepad.exe, wordpad.exe and calc.exe, excluding common system paths like System32, Syswow64, and Program Files. This behavior is often associated with the SliverC2 framework by BishopFox. Monitoring this activity is crucial as it may indicate an initial payload attempting to execute maliciou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on SourceImage paths, particularly those with a legitimate reason for accessing lsass.exe or regsvr32.exe. If removing the paths is important, realize svchost and many native binaries inject into processes consistently. Restrict or tune as needed.",
              "refs": "https://dominicbreuker.com/post/learning_sliver_c2_08_implant_basics/, https://www.cybereason.com/blog/sliver-c2-leveraged-by-many-threat-actors, https://redcanary.com/threat-detection-report/techniques/process-injection/",
              "mitre": [
                "T1055.002"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process Injection into Commonly Abused Processes\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process Injection into Commonly Abused Processes\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on SourceImage paths, particularly those with a legitimate reason for accessing lsass.exe or regsvr32.exe. If removing the paths is important, realize svchost and many native binaries inject into processes consistently. Restrict or tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Process Injection into Commonly Abused Processes\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.600",
              "n": "Windows Process Injection into Notepad",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects process injection into Notepad.exe using Sysmon EventCode 10. It identifies suspicious GrantedAccess requests (0x40 and 0x1fffff) to Notepad.exe, excluding common system paths like System32, Syswow64, and Program Files. This behavior is often associated with the SliverC2 framework by BishopFox. Monitoring this activity is crucial as it may indicate an initial payload attempting to execute malicious code within Notepad.exe. If confirmed malicious, this could allow a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name, parent process, and command-line executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on SourceImage paths. If removing the paths is important, realize svchost and many native binaries inject into notepad consistently. Restrict or tune as needed.",
              "refs": "https://dominicbreuker.com/post/learning_sliver_c2_08_implant_basics/, https://www.cybereason.com/blog/sliver-c2-leveraged-by-many-threat-actors",
              "mitre": [
                "T1055.002"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process Injection into Notepad\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process Injection into Notepad\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on SourceImage paths. If removing the paths is important, realize svchost and many native binaries inject into notepad consistently. Restrict or tune as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Process Injection into Notepad\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.601",
              "n": "Windows Process Injection Remote Thread",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Process Injection Remote Thread. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 8",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must be ingesting data that records process activity from your hosts like remote thread EventCode=8 of sysmon. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://twitter.com/pr0xylife/status/1585612370441031680?s=46&t=Dc3CJi4AnM-8rNoacLbScg, https://thedfirreport.com/2023/06/12/a-truly-graceful-wipe-out/",
              "mitre": [
                "T1055.002"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process Injection Remote Thread\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 8. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process Injection Remote Thread\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Process Injection Remote Thread\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.602",
              "n": "Windows Process Injection With Public Source Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a process from a non-standard file path on Windows attempting to create a remote thread in another process. This is identified using Sysmon EventCode 8, focusing on processes not originating from typical system directories. This behavior is significant as it often indicates process injection, a technique used by adversaries to evade detection or escalate privileges. If confirmed malicious, this activity could allow an attacker to execute arbitrary code within anoth…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 8",
              "q": "`sysmon` EventCode=8 TargetImage = \"*.exe\" AND NOT(SourceImage IN(\"C:\\\\Windows\\\\*\", \"C:\\\\Program File*\", \"%systemroot%\\\\*\")) | stats count min(_time) as firstTime max(_time) as lastTime by EventID Guid NewThreadId ProcessID SecurityID SourceImage SourceProcessGuid SourceProcessId StartAddress StartFunction StartModule TargetImage TargetProcessGuid TargetProcessId UserID dest parent_process_exec parent_process_guid parent_process_id parent_process_name parent_process_path process_exec process_guid process_id process_name process_path signature signature_id user_id vendor_product | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `windows_process_injection_with_public_source_path_filter`",
              "m": "To successfully implement this search, you must be ingesting data that records process activity from your hosts to populate the endpoint data model in the processes node. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some security products or third party applications may utilize CreateRemoteThread, filter as needed.",
              "refs": "https://unit42.paloaltonetworks.com/brute-ratel-c4-tool/",
              "mitre": [
                "T1055.002"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Process Injection With Public Source Path\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 8. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Process Injection With Public Source Path\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some security products or third party applications may utilize CreateRemoteThread, filter as needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on a process from a non-standard file path on Windows attempting to create a remote thread in another process so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.603",
              "n": "Windows PsTools Recon Usage",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows PsTools Recon Usage. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://learn.microsoft.com/sysinternals/downloads/pstools, https://github.com/CyberMonitor/APT_CyberCriminal_Campagin_Collections/raw/master/2015/2015.09.17.Operation_Iron_Tiger/wp-operation-iron-tiger.pdf, https://go.crowdstrike.com/rs/281-OBQ-266/images/Report2018OverwatchReport.pdf",
              "mitre": [
                "T1082",
                "T1046",
                "T1018"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PsTools Recon Usage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082, T1046, T1018. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PsTools Recon Usage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PsTools Recon Usage\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.604",
              "n": "Windows PUA Named Pipe",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows PUA Named Pipe. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 17, Sysmon EventID 18",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 17, Sysmon EventID 18 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1218/009/, https://learn.microsoft.com/en-us/windows/win32/ipc/named-pipes",
              "mitre": [
                "T1559",
                "T1021.002",
                "T1055"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PUA Named Pipe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 17, Sysmon EventID 18. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1559, T1021.002, T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PUA Named Pipe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows PUA Named Pipe\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.605",
              "n": "Windows Query Registry Browser List Application",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious process accessing the registry entries for default internet browsers. It leverages Windows Security Event logs, specifically event code 4663, to identify access attempts to these registry paths. This activity is significant because adversaries can exploit this registry key to gather information about installed browsers and their settings, potentially leading to the theft of sensitive data such as login credentials and browsing history. If confirmed mal…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest Windows Security Event logs and track event code 4663. For 4663, enable \"Audit Object Access\" in Group Policy. Then check the two boxes listed for both \"Success\" and \"Failure.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "uninstall application may access this registry to remove the entry of the target application. filter is needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.redline_stealer",
              "mitre": [
                "T1012"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Query Registry Browser List Application\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1012. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Query Registry Browser List Application\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: uninstall application may access this registry to remove the entry of the target application. filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Query Registry Browser List Application\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.606",
              "n": "Windows Rasautou DLL Execution",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of an arbitrary DLL by the Windows Remote Auto Dialer (rasautou.exe). This behavior is identified by analyzing process creation events where rasautou.exe is executed with specific command-line arguments. This activity is significant because it leverages a Living Off The Land Binary (LOLBin) to execute potentially malicious code, bypassing traditional security controls. If confirmed malicious, this technique could allow an attacker to execute arbitrary…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be limited to applications that require Rasautou.exe to load a DLL from disk. Filter as needed.",
              "refs": "https://github.com/mandiant/DueDLLigence, https://github.com/MHaggis/notes/blob/master/utilities/Invoke-SPLDLLigence.ps1, https://gist.github.com/NickTyrer/c6043e4b302d5424f701f15baf136513, https://www.mandiant.com/resources/staying-hidden-on-the-endpoint-evading-detection-with-shellcode",
              "mitre": [
                "T1055.001",
                "T1218"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Rasautou DLL Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1055.001, T1218. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Rasautou DLL Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be limited to applications that require Rasautou.exe to load a DLL from disk. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Rasautou DLL Execution\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.607",
              "n": "Windows Raw Access To Master Boot Record Drive",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious raw access reads to the drive containing the Master Boot Record (MBR). It leverages Sysmon EventCode 9 to identify processes attempting to read or write to the MBR sector, excluding legitimate system processes. This activity is significant because adversaries often target the MBR to wipe, encrypt, or overwrite it as part of their impact payload. If confirmed malicious, this could lead to system instability, data loss, or a complete system compromise, sev…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 9",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the raw access read event (like sysmon eventcode 9), process name and process guid from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There are som minimal number of normal applications from system32 folder like svchost.exe accessing the MBR. In this case we used 'system32' and 'syswow64' path as a filter for this detection.",
              "refs": "https://www.splunk.com/en_us/blog/security/threat-advisory-strt-ta02-destructive-software.html, https://www.crowdstrike.com/blog/technical-analysis-of-whispergate-malware/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1561.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Raw Access To Master Boot Record Drive\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 9. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1561.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Raw Access To Master Boot Record Drive\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There are som minimal number of normal applications from system32 folder like svchost.exe accessing the MBR. In this case we used 'system32' and 'syswow64' path as a filter for this detection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
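              "_spl_sketch": "Illustrative sketch only. The `q` field in this record is the Risk Investigation drilldown, not the detection logic, which ships inside the packaged ESCU correlation search. A minimal raw-event approximation of that logic, assuming Sysmon EventID 9 raw-access-read events in `index=endpoint` with `sourcetype=XmlWinEventLog` (the index, sourcetype, drive path, and exclusions are assumptions; treat the search in Content Management as authoritative):\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog EventCode=9 Device=\"\\\\Device\\\\Harddisk0\\\\DR0\"\n| search NOT Image IN (\"*\\\\system32\\\\*\", \"*\\\\syswow64\\\\*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host Image Device ProcessId\n```",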
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Raw Access To Master Boot Record Drive\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.608",
              "n": "Windows Rdp AutomaticDestinations Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the deletion of files within the AutomaticDestinations folder, located under a user’s AppData\\Roaming\\Microsoft\\Windows\\Recent directory. These files are part of the Windows Jump List feature, which records recently accessed files and folders tied to specific applications. Each .automaticDestinations-ms file corresponds to a program (e.g., Explorer, Word, Notepad) and can be valuable for forensic analysis of user activity. Adversaries may target this folder to erase evi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 23, Sysmon EventID 26",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest logs that include the process name, TargetFilename, and ProcessID executions from your endpoints. If you are utilizing Sysmon, ensure you have at least version 2.0 of the Sysmon TA installed.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present, filter as needed or restrict to critical assets on the perimeter.",
              "refs": "https://medium.com/@bonguides25/how-to-clear-rdp-connections-history-in-windows-cf0ffb67f344, https://thelocalh0st.github.io/posts/rdp/",
              "mitre": [
                "T1070.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Rdp AutomaticDestinations Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 23, Sysmon EventID 26. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Rdp AutomaticDestinations Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present, filter as needed or restrict to critical assets on the perimeter.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
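              "_spl_sketch": "Illustrative sketch only. The `q` field in this record is the Risk Investigation drilldown, not the detection logic, which ships inside the packaged ESCU correlation search. A minimal raw-event approximation, assuming Sysmon EventID 23/26 file-delete events in `index=endpoint` with `sourcetype=XmlWinEventLog` (the index, sourcetype, and path filter are assumptions; treat the search in Content Management as authoritative):\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog EventCode IN (23, 26) TargetFilename=\"*\\\\Recent\\\\AutomaticDestinations\\\\*.automaticDestinations-ms\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host User Image TargetFilename\n```",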
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Rdp AutomaticDestinations Deletion\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.609",
              "n": "Windows RDP Cache File Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the deletion of RDP bitmap cache files—specifically .bmc and .bin files—typically stored in the user profile under the Terminal Server Client\\Cache directory. These files are created by the native Windows Remote Desktop Client (mstsc.exe) and store graphical elements from remote sessions to improve performance. Deleting these files may indicate an attempt to remove forensic evidence of RDP usage. While rare in legitimate user behavior, this action is commonly associated…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 23, Sysmon EventID 26",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest logs that include the process name, TargetFilename, and ProcessID executions from your endpoints. If you are utilizing Sysmon, ensure you have at least version 2.0 of the Sysmon TA installed.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present, filter as needed or restrict to critical assets on the perimeter.",
              "refs": "https://medium.com/@bonguides25/how-to-clear-rdp-connections-history-in-windows-cf0ffb67f344, https://thelocalh0st.github.io/posts/rdp/",
              "mitre": [
                "T1070.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows RDP Cache File Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 23, Sysmon EventID 26. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows RDP Cache File Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present, filter as needed or restrict to critical assets on the perimeter.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
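              "_spl_sketch": "Illustrative sketch only. The `q` field in this record is the Risk Investigation drilldown, not the detection logic, which ships inside the packaged ESCU correlation search. A minimal raw-event approximation, assuming Sysmon EventID 23/26 file-delete events in `index=endpoint` with `sourcetype=XmlWinEventLog` (the index, sourcetype, and path/extension filters are assumptions; treat the search in Content Management as authoritative):\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog EventCode IN (23, 26) TargetFilename=\"*\\\\Terminal Server Client\\\\Cache\\\\*\" (TargetFilename=\"*.bmc\" OR TargetFilename=\"*.bin\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host User Image TargetFilename\n```",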
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows RDP Cache File Deletion\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.610",
              "n": "Windows RDP Client Launched with Admin Session",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the execution of the Windows Remote Desktop Client (mstsc.exe) with the \"/v\" and /admin command-line arguments. The \"/v\" flag specifies the remote host to connect to, while the /admin flag initiates a connection to the target system’s console session, often used for administrative purposes. This combination may indicate that a user or attacker is performing privileged remote access, potentially to manage a system without disrupting existing user sessions. While such usa…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present, filter as needed or restrict to critical assets on the perimeter.",
              "refs": "https://medium.com/@bonguides25/how-to-clear-rdp-connections-history-in-windows-cf0ffb67f344, https://thelocalh0st.github.io/posts/rdp/",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows RDP Client Launched with Admin Session\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows RDP Client Launched with Admin Session\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present, filter as needed or restrict to critical assets on the perimeter.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
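              "_spl_sketch": "Illustrative sketch only. The `q` field in this record is the Risk Investigation drilldown, not the detection logic, which ships inside the packaged ESCU correlation search. A minimal raw-event approximation, assuming Sysmon EventID 1 process events in `index=endpoint` with `sourcetype=XmlWinEventLog` (the index and sourcetype are assumptions; the `/v` and `/admin` flags come from this record's description; treat the search in Content Management as authoritative):\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog EventCode=1 Image=\"*\\\\mstsc.exe\" CommandLine=\"* /v*\" CommandLine=\"* /admin*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host User ParentImage CommandLine\n```",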
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows RDP Client Launched with Admin Session\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.611",
              "n": "Windows RDP Login Session Was Established",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where a successful Remote Desktop Protocol (RDP) login session was established, as indicated by Windows Security Event ID 4624 with Logon Type 10. This event confirms that a user has not only provided valid credentials but has also initiated a full interactive RDP session. It is a key indicator of successful remote access to a Windows system. When correlated with Event ID 1149, which logs RDP authentication success, this analytic helps distinguish between…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4624",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4624 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection can catch for third party application updates or installation. In this scenario false positive filter is needed.",
              "refs": "https://medium.com/@bonguides25/how-to-clear-rdp-connections-history-in-windows-cf0ffb67f344, https://thelocalh0st.github.io/posts/rdp/",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows RDP Login Session Was Established\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4624. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows RDP Login Session Was Established\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection can catch for third party application updates or installation. In this scenario false positive filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
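              "_spl_sketch": "Illustrative sketch only. The `q` field in this record is the Risk Investigation drilldown, not the detection logic, which ships inside the packaged ESCU correlation search. A minimal raw-event approximation, assuming Security events in `index=wineventlog` (the index and field names are assumptions; depending on your Windows TA and input format the logon-type field may be `Logon_Type` or `LogonType`, and the source field `Source_Network_Address` or `src_ip`):\n\n```spl\nindex=wineventlog EventCode=4624 Logon_Type=10\n| stats count min(_time) as firstTime max(_time) as lastTime by host user Source_Network_Address\n```",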
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows RDP Login Session Was Established\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.612",
              "n": "Windows RDP Server Registry Deletion",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the deletion of registry keys under HKEY_CURRENT_USER\\Software\\Microsoft\\Terminal Server Client\\Servers\\, which store records of previously connected remote systems via Remote Desktop Protocol (RDP). These keys are created automatically when a user connects to a remote host using the native Windows RDP client (mstsc.exe) and can be valuable forensic artifacts for tracking remote access activity. Malicious actors aware of this behavior may delete these keys after using R…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 12, Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This detection can catch for third party application updates or installation. In this scenario false positive filter is needed.",
              "refs": "https://medium.com/@bonguides25/how-to-clear-rdp-connections-history-in-windows-cf0ffb67f344, https://thelocalh0st.github.io/posts/rdp/",
              "mitre": [
                "T1070.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows RDP Server Registry Deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 12, Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows RDP Server Registry Deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This detection can catch for third party application updates or installation. In this scenario false positive filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
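              "_spl_sketch": "Illustrative sketch only. The `q` field in this record is the Risk Investigation drilldown, not the detection logic, which ships inside the packaged ESCU correlation search. A minimal raw-event approximation, assuming Sysmon EventID 12 registry object-delete events in `index=endpoint` with `sourcetype=XmlWinEventLog` (the index, sourcetype, and path filter are assumptions; treat the search in Content Management as authoritative):\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog EventCode=12 EventType=DeleteKey TargetObject=\"*\\\\Terminal Server Client\\\\Servers\\\\*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host User Image TargetObject\n```",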
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows RDP Server Registry Deletion\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.613",
              "n": "Windows RDP Server Registry Entry Created",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies the creation of registry keys under HKEY_CURRENT_USER\\Software\\Microsoft\\Terminal Server Client\\Servers\\, which occur when a user initiates a Remote Desktop Protocol (RDP) connection using the built-in Windows RDP client (mstsc.exe). These registry entries store information about previously connected remote hosts, including usernames and display settings. Their creation is a strong indicator that an outbound RDP session was initiated from the system. While the presence …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present, filter as needed or restrict to critical assets on the perimeter.",
              "refs": "https://medium.com/@bonguides25/how-to-clear-rdp-connections-history-in-windows-cf0ffb67f344, https://thelocalh0st.github.io/posts/rdp/",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows RDP Server Registry Entry Created\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows RDP Server Registry Entry Created\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present, filter as needed or restrict to critical assets on the perimeter.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
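              "_spl_sketch": "Illustrative sketch only. The `q` field in this record is the Risk Investigation drilldown, not the detection logic, which ships inside the packaged ESCU correlation search. A minimal raw-event approximation, assuming Sysmon EventID 13 registry value-set events in `index=endpoint` with `sourcetype=XmlWinEventLog` (the index, sourcetype, and path filter are assumptions; treat the search in Content Management as authoritative):\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog EventCode=13 TargetObject=\"*\\\\Terminal Server Client\\\\Servers\\\\*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host User Image TargetObject Details\n```",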
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows RDP Server Registry Entry Created\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.614",
              "n": "Windows Registry Delete Task SD",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Registry Delete Task SD. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 12",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.microsoft.com/security/blog/2022/04/12/tarrask-malware-uses-scheduled-tasks-for-defense-evasion/, https://gist.github.com/MHaggis/5f7fd6745915166fc6da863d685e2728, https://gist.github.com/MHaggis/b246e2fae6213e762a6e694cabaf0c17",
              "mitre": [
                "T1053.005",
                "T1562"
              ],
              "dtype": "registry_path",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Registry Delete Task SD\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to registry key entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 12. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005, T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Registry Delete Task SD\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Registry Delete Task SD\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.615",
              "n": "Windows Registry SIP Provider Modification",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows Registry SIP Provider. It leverages Sysmon EventID 7 to monitor registry changes in paths and values related to Cryptography Providers and OID Encoding Types. This activity is significant as it may indicate an attempt to subvert trust controls, a common tactic for bypassing security measures and maintaining persistence. If confirmed malicious, an attacker could manipulate the system's cryptographic functions, potentially leading to unau…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Be aware of potential false positives - legitimate applications may cause benign activities to be flagged.",
              "refs": "https://attack.mitre.org/techniques/T1553/003/, https://github.com/SigmaHQ/sigma/blob/master/rules/windows/registry/registry_set/registry_set_sip_persistence.yml, https://specterops.io/wp-content/uploads/sites/3/2022/06/SpecterOps_Subverting_Trust_in_Windows.pdf, https://github.com/gtworek/PSBits/tree/master/SIP, https://github.com/mattifestation/PoCSubjectInterfacePackage, https://pentestlab.blog/2017/11/06/hijacking-digital-signatures/",
              "mitre": [
                "T1553.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Registry SIP Provider Modification\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1553.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Registry SIP Provider Modification\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Be aware of potential false positives - legitimate applications may cause benign activities to be flagged.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Registry SIP Provider Modification\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.616",
              "n": "Windows Remote Access Software BRC4 Loaded Dll",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the loading of four specific Windows DLLs (credui.dll, dbghelp.dll, samcli.dll, winhttp.dll) by a non-standard process. This detection leverages Sysmon EventCode 7 to monitor DLL load events and flags when all four DLLs are loaded within a short time frame. This activity is significant as it may indicate the presence of Brute Ratel C4, a sophisticated remote access tool used for credential dumping and other malicious activities. If confirmed malicious, this beha…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 7 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This module can be loaded by a third party application. Filter is needed.",
              "refs": "https://unit42.paloaltonetworks.com/brute-ratel-c4-tool/, https://www.mdsec.co.uk/2022/08/part-3-how-i-met-your-beacon-brute-ratel/, https://strontic.github.io/xcyclopedia/library/logoncli.dll-138871DBE68D0696D3D7FA91BC2873B1.html, https://strontic.github.io/xcyclopedia/library/credui.dll-A5BD797BBC2DD55231B9DE99837E5461.html, https://learn.microsoft.com/en-us/windows/win32/secauthn/credential-manager, https://strontic.github.io/xcyclopedia/library/samcli.dll-522D6D616EF142CDE965BD3A450A9E4C.html, https://strontic.github.io/xcyclopedia/library/dbghelp.dll-15A55EAB307EF8C190FE6135C0A86F7C.html",
              "mitre": [
                "T1219",
                "T1003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Remote Access Software BRC4 Loaded Dll\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1219, T1003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Remote Access Software BRC4 Loaded Dll\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This module can be loaded by a third party application. Filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Remote Access Software BRC4 Loaded Dll\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.617",
              "n": "Windows Remote Host Computer Management Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of mmc.exe to launch Computer Management (compmgmt.msc) and connect to a remote machine. This technique allows administrators to access system management tools, including Event Viewer, Services, Shared Folders, and Local Users & Groups, without initiating a full remote desktop session. While commonly used for legitimate administrative purposes, adversaries may leverage this method for remote reconnaissance, privilege escalation, or persistence. Monitoring t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrator or power user can execute command shell or script to access Windows Remote Management.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-071a",
              "mitre": [
                "T1021.006"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Remote Host Computer Management Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Remote Host Computer Management Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrator or power user can execute command shell or script to access Windows Remote Management.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Remote Host Computer Management Access\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.618",
              "n": "Windows Remote Management Execute Shell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of winrshost.exe initiating CMD or PowerShell processes as part of a potential payload execution. winrshost.exe is associated with Windows Remote Management (WinRM) and is typically used for remote execution. By monitoring for this behavior, the detection identifies instances where winrshost.exe is leveraged to run potentially malicious commands or payloads via CMD or PowerShell. This behavior may indicate exploitation of remote management tools for u…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "administrator or power user can execute command shell or script remotely using WINRM.",
              "refs": "https://strontic.github.io/xcyclopedia/library/winrshost.exe-6790044CEB4BA5BE6AA8161460D990FD.html",
              "mitre": [
                "T1021.006"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Remote Management Execute Shell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.006. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Remote Management Execute Shell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: administrator or power user can execute command shell or script remotely using WINRM.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Remote Management Execute Shell\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.619",
              "n": "Windows RMM Named Pipe",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows RMM Named Pipe. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 17, Sysmon EventID 18",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 17, Sysmon EventID 18 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1218/009/, https://learn.microsoft.com/en-us/windows/win32/ipc/named-pipes",
              "mitre": [
                "T1559",
                "T1021.002",
                "T1055"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows RMM Named Pipe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 17, Sysmon EventID 18. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1559, T1021.002, T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows RMM Named Pipe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows RMM Named Pipe\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.620",
              "n": "Windows Root Domain linked policies Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the `[Adsisearcher]` type accelerator in PowerShell to query Active Directory for root domain linked policies. It leverages PowerShell Script Block Logging (EventCode=4104) to identify this activity. This behavior is significant as it may indicate an attempt by adversaries or Red Teams to gain situational awareness and perform Active Directory Discovery. If confirmed malicious, this activity could allow attackers to map out domain policies, potentially a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators or power users may use this command for troubleshooting.",
              "refs": "https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/, https://medium.com/@pentesttas/discover-hidden-gpo-s-on-active-directory-using-ps-adsi-a284b6814c81",
              "mitre": [
                "T1087.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Root Domain linked policies Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Root Domain linked policies Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators or power users may use this command for troubleshooting.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Root Domain linked policies Discovery\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.621",
              "n": "Windows Rundll32 Apply User Settings Changes",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Rundll32 Apply User Settings Changes. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-319a, https://www.cisa.gov/sites/default/files/publications/aa22-264a-iranian-cyber-actors-conduct-cyber-operations-against-the-government-of-albania.pdf, https://cdn.pathfactory.com/assets/10555/contents/400686/13f4424c-05b4-46db-bb9c-6bf9b5436ec4.pdf",
              "mitre": [
                "T1218.011"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Rundll32 Apply User Settings Changes\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Rundll32 Apply User Settings Changes\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Rundll32 Apply User Settings Changes\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.622",
              "n": "Windows RunMRU Registry Key or Value Deleted",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion or modification of Most Recently Used (MRU) command entries stored within the Windows Registry. Adversaries often clear these registry keys, such as HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\RunMRU, to remove forensic evidence of commands executed via the Windows Run dialog or other system utilities. This activity aims to obscure their actions, hinder incident response efforts, and evade detection. Detection focuses on monitoring for chan…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 12",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This event can be seen when administrator delete a history manually or uninstall/reinstall a software that creates MRU registry entry. It is recommended to check this alert with high priority.",
              "refs": "https://www.linkedin.com/posts/mauricefielenbach_cybersecurity-incidentresponse-dfir-activity-7394805779448418304-g0gZ?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAuFTjIB5weY_kcyu4qp3kHbI4v49tO0zEk, https://thedfirreport.com/2023/10/30/netsupport-intrusion-results-in-domain-compromise/, https://www.esentire.com/blog/evalusion-campaign-delivers-amatera-stealer-and-netsupport-rat",
              "mitre": [
                "T1112"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows RunMRU Registry Key or Value Deleted\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 12. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows RunMRU Registry Key or Value Deleted\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This event can be seen when administrator delete a history manually or uninstall/reinstall a software that creates MRU registry entry. It is recommended to check this alert with high priority.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows RunMRU Registry Key or Value Deleted\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.623",
              "n": "Windows Scheduled Task Created Via XML",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Scheduled Task Created Via XML. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Installers are known to create scheduled tasks via XML. Apply additional filters as needed.",
              "refs": "https://twitter.com/_CERT_UA/status/1620781684257091584, https://cert.gov.ua/article/3761104",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Scheduled Task Created Via XML\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Scheduled Task Created Via XML\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Installers are known to create scheduled tasks via XML. Apply additional filters as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Scheduled Task Created Via XML\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.624",
              "n": "Windows Scheduled Task with Suspicious Command",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of scheduled tasks designed to execute commands using native Windows shells like PowerShell, Cmd, Wscript, or Cscript or from public folders such as Users, Temp, or ProgramData. It leverages Windows Security EventCode 4698, 4700, and 4702 to identify when such tasks are registered, enabled, or modified. This activity is significant as it may indicate an attempt to establish persistence or execute malicious commands on a system. If confirmed malicious, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4698, Windows Event Log Security 4700, Windows Event Log Security 4702",
              "q": "`wineventlog_security` EventCode IN (4698,4700,4702) Computer=\"$dest$\" Caller_User_Name=\"$user$\"",
              "m": "To successfully implement this search, you need to be ingesting Windows Security Event Logs with 4698 EventCode enabled. The Windows TA is also required.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible if legitimate applications are allowed to register tasks that call a shell to be spawned. Filter as needed based on command-line or processes that are used legitimately. Windows Defender, Google Chrome, and MS Edge updates may trigger this detection.",
              "refs": "https://attack.mitre.org/techniques/T1053/005/, https://www.ic3.gov/CSA/2023/231213.pdf, https://news.sophos.com/en-us/2024/11/06/bengal-cat-lovers-in-australia-get-psspsspssd-in-google-driven-gootloader-campaign/, https://github.com/mthcht/awesome-lists/blob/main/Lists/suspicious_windows_tasks_list.csv",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Scheduled Task with Suspicious Command\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4698, Windows Event Log Security 4700, Windows Event Log Security 4702. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Scheduled Task with Suspicious Command\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible if legitimate applications are allowed to register tasks that call a shell to be spawned. Filter as needed based on command-line or processes that are used legitimately. Windows Defender, Google Chrome, and MS Edge updates may trigger this detection.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the creation of scheduled tasks designed to execute commands using native Windows shells like PowerShell, Cmd, Wscript, or Cscript or from public folders such as Users, Temp, or ProgramData so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.625",
              "n": "Windows Scheduled Task with Suspicious Name",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation, modification, or enabling of scheduled tasks with known suspicious or malicious task names. It leverages Windows Security EventCode 4698, 4700, and 4702 to identify when such tasks are registered, modified, or enabled. This activity is significant as it may indicate an attempt to establish persistence or execute malicious commands on a system. If confirmed malicious, this could allow an attacker to maintain access, execute arbitrary code, or escalate …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4698, Windows Event Log Security 4700, Windows Event Log Security 4702",
              "q": "`wineventlog_security` EventCode IN (4698,4700,4702) | xmlkv TaskContent | search dest=\"$dest$\" AND TaskName = \"$TaskName$\"",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4698, Windows Event Log Security 4700, Windows Event Log Security 4702 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible if legitimate applications are allowed to register tasks that call a shell to be spawned. Filter as needed based on command-line or processes that are used legitimately.",
              "refs": "https://attack.mitre.org/techniques/T1053/005/, https://www.ic3.gov/CSA/2023/231213.pdf, https://news.sophos.com/en-us/2024/11/06/bengal-cat-lovers-in-australia-get-psspsspssd-in-google-driven-gootloader-campaign/, https://github.com/mthcht/awesome-lists/blob/main/Lists/suspicious_windows_tasks_list.csv",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Scheduled Task with Suspicious Name\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4698, Windows Event Log Security 4700, Windows Event Log Security 4702. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Scheduled Task with Suspicious Name\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible if legitimate applications are allowed to register tasks that call a shell to be spawned. Filter as needed based on command-line or processes that are used legitimately.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the creation, modification, or enabling of scheduled tasks with known suspicious or malicious task names so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.626",
              "n": "Windows Scheduled Tasks for CompMgmtLauncher or Eventvwr",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation or modification of Windows Scheduled Tasks related to CompMgmtLauncher or Eventvwr. These legitimate system utilities, used for launching the Computer Management Console and Event Viewer, can be abused by attackers to execute malicious payloads under the guise of normal system processes. By leveraging these tasks, adversaries can establish persistence or elevate privileges without raising suspicion. This detection helps security analysts identify unusu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4698",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting Windows Security Event Logs with 4698 EventCode enabled. The Windows TA as well as the URL ToolBox application are also required.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.proofpoint.com/us/blog/threat-insight/chinese-malware-appears-earnest-across-cybercrime-threat-landscape, https://www.fortinet.com/blog/threat-research/valleyrat-campaign-targeting-chinese-speakers",
              "mitre": [
                "T1053"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Scheduled Tasks for CompMgmtLauncher or Eventvwr\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4698. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Scheduled Tasks for CompMgmtLauncher or Eventvwr\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Scheduled Tasks for CompMgmtLauncher or Eventvwr\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.627",
              "n": "Windows Screen Capture Via Powershell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of a PowerShell script designed to capture screen images on a host. It leverages PowerShell Script Block Logging to identify specific script block text patterns associated with screen capture activities. This behavior is significant as it may indicate an attempt to exfiltrate sensitive information by capturing desktop screenshots. If confirmed malicious, this activity could allow an attacker to gather visual data from the compromised system, potential…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$Computer$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://twitter.com/_CERT_UA/status/1620781684257091584, https://cert.gov.ua/article/3761104",
              "mitre": [
                "T1113"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Screen Capture Via Powershell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1113. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Screen Capture Via Powershell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Screen Capture Via Powershell\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.628",
              "n": "Windows Service Creation Using Registry Entry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the modification of registry keys that define Windows services using reg.exe. This detection leverages Splunk to search for specific keywords in the registry path, value name, and value data fields. This activity is significant because it indicates potential unauthorized changes to service configurations, a common persistence technique used by attackers. If confirmed malicious, this could allow an attacker to maintain access, escalate privileges, or move laterally …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the registry value name, registry path, and registry value data from your endpoints. If you are using Sysmon, you must have at least version 2.0 of the official Sysmon TA. https://splunkbase.splunk.com/app/5709",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Third party tools may used this technique to create services but not so common.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/36d49de4c8b00bf36054294b4a1fcbab3917d7c5/atomics/T1574.011/T1574.011.md",
              "mitre": [
                "T1574.011"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Service Creation Using Registry Entry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.011. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Service Creation Using Registry Entry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Third party tools may used this technique to create services but not so common.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Analyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Service Creation Using Registry Entry\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.629",
              "n": "Windows Short Lived DNS Record",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation and quick deletion of a DNS object within 300 seconds in an Active Directory environment, indicative of a potential attack abusing DNS. This detection leverages Windows Security Event Codes 5136 and 5137, analyzing the duration between these events. This activity is significant as temporary DNS entries allows attackers to cause unexpecting network trafficking, leading to potential compromise.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 5136, Windows Event Log Security 5137",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 5136, Windows Event Log Security 5137 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Creating and deleting a DNS server object within 30 seconds or less is unusual but not impossible in a production environment. Filter as needed.",
              "refs": "https://web.archive.org/web/20250617122747/https://www.synacktiv.com/publications/ntlm-reflection-is-dead-long-live-ntlm-reflection-an-in-depth-analysis-of-cve-2025, https://www.synacktiv.com/publications/relaying-kerberos-over-smb-using-krbrelayx, https://www.guidepointsecurity.com/blog/the-birth-and-death-of-loopyticket/",
              "mitre": [
                "T1071.004",
                "T1557.001",
                "T1187"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Short Lived DNS Record\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 5136, Windows Event Log Security 5137. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.004, T1557.001, T1187. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Short Lived DNS Record\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Creating and deleting a DNS server object within 30 seconds or less is unusual but not impossible in a production environment. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
Tuning and False Positive Management\n\nKnown false positives for this detection: Creating and deleting a DNS server object within 30 seconds or less is unusual but not impossible in a production environment. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Short Lived Record\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.630",
              "n": "Windows SIP Provider Inventory",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies all SIP (Subject Interface Package) providers on a Windows system using PowerShell scripted inputs. It detects SIP providers by capturing DLL paths from relevant events. This activity is significant because malicious SIP providers can be used to bypass trust controls, potentially allowing unauthorized code execution. If confirmed malicious, this activity could enable attackers to subvert system integrity, leading to unauthorized access or persistent threats with…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`subjectinterfacepackage` Dll=*\\\\*.dll | stats count min(_time) as firstTime max(_time) as lastTime values(Dll) by Path host| `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`| `windows_sip_provider_inventory_filter`",
              "m": "To implement this analytic, one must first perform inventory using a scripted inputs. Review the following Gist - https://gist.github.com/MHaggis/75dd5db546c143ea67703d0e86cdbbd1",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are limited as this is a hunting query for inventory.",
              "refs": "https://gist.github.com/MHaggis/75dd5db546c143ea67703d0e86cdbbd1",
              "mitre": [
                "T1553.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SIP Provider Inventory\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1553.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SIP Provider Inventory\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are limited as this is a hunting query for inventory.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on all SIP (Subject Interface Package) providers on a Windows system using PowerShell scripted inputs so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.631",
              "n": "Windows SnappyBee Create Test Registry",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects modifications to the Windows registry under `SOFTWARE\\Microsoft\\Test`, a location rarely used by legitimate applications in a production environment. Monitoring this key is crucial, as adversaries may create or alter values here for monitoring update of itself file path, updated configuration file, or system mark compromised. The detection leverages **Sysmon Event ID 13** (Registry Value Set) to identify unauthorized changes. Analysts should investigate processes a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Registry` node. Also make sure that this registry was included in your config files ex. sysmon config to be monitored.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators and third party software may create this registry entry.",
              "refs": "https://www.trendmicro.com/en_nl/research/24/k/earth-estries.html",
              "mitre": [
                "T1112"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SnappyBee Create Test Registry\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1112. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SnappyBee Create Test Registry\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators and third party software may create this registry entry.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows SnappyBee Create Test Registry\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.632",
              "n": "Windows SQL Server Configuration Option Hunt",
              "c": "high",
              "f": "intermediate",
              "v": "This detection helps hunt for changes to SQL Server configuration options that could indicate malicious activity. It monitors for modifications to any SQL Server configuration settings, allowing analysts to identify potentially suspicious changes that may be part of an attack, such as enabling dangerous features or modifying security-relevant settings.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Application 15457",
              "q": "`wineventlog_application` EventCode=15457\n      | rex field=EventData_Xml \"<Data>(?<config_name>[^<]+)</Data><Data>(?<old_value>[^<]+)</Data><Data>(?<new_value>[^<]+)</Data>\"\n      | rename host as dest\n      | eval change_type=case( old_value=\"0\" AND new_value=\"1\", \"enabled\", old_value=\"1\" AND new_value=\"0\", \"disabled\", true(), \"modified\" )\n      | eval risk_score=case( change_type=\"enabled\", 90, change_type=\"disabled\", 60, true(), 70 )\n      | eval risk_message=\"SQL Server \".config_name.\" was \".change_type.\" on host \".dest\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY dest EventCode config_name\n           change_type risk_message risk_score\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `windows_sql_server_configuration_option_hunt_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log Application 15457 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Database administrators frequently make legitimate configuration changes for maintenance, performance tuning, and security hardening. To reduce false positives, establish a baseline of normal configuration changes, document approved configuration modifications, implement change control procedures, and maintain an inventory of expected settings.",
              "refs": "https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/server-configuration-options-sql-server, https://attack.mitre.org/techniques/T1505/001/, https://www.netspi.com/blog/technical/network-penetration-testing/sql-server-persistence-part-1-startup-stored-procedures/",
              "mitre": [
                "T1505.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SQL Server Configuration Option Hunt\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Application 15457. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SQL Server Configuration Option Hunt\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Database administrators frequently make legitimate configuration changes for maintenance, performance tuning, and security hardening. To reduce false positives, establish a baseline of normal configuration changes, document approved configuration modifications, implement change control procedures, and maintain an inventory of expected settings.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on this detection helps hunt for changes to SQL Server configuration options that could indicate malicious activity so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.633",
              "n": "Windows SQL Server Critical Procedures Enabled",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies when critical SQL Server configuration options are modified, including \"Ad Hoc Distributed Queries\", \"external scripts enabled\", \"Ole Automation Procedures\", \"clr enabled\", and \"clr strict security\". These features can be abused by attackers for various malicious purposes - Ad Hoc Distributed Queries enables Active Directory reconnaissance through ADSI provider, external scripts and Ole Automation allow execution of arbitrary code, and CLR features can be used to run cu…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Application 15457",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to entity entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Application 15457 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Database administrators may legitimately enable these features for valid business purposes such as cross-database queries, custom CLR assemblies, automation scripts, or application requirements. To reduce false positives, document when these features are required, monitor for unauthorized changes, create change control procedures for configuration modifications, and consider alerting on the enabled state rather than configuration changes if preferred.",
              "refs": "https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/ad-hoc-distributed-queries-server-configuration-option, https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/external-scripts-enabled-server-configuration-option, https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/ole-automation-procedures-server-configuration-option, https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/clr-enabled-server-configuration-option, https://attack.mitre.org/techniques/T1505/001/, https://www.netspi.com/blog/technical-blog/adversary-simulation/attacking-sql-server-clr-assemblies/",
              "mitre": [
                "T1505.001"
              ],
              "dtype": "other",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SQL Server Critical Procedures Enabled\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to entity entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Application 15457. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SQL Server Critical Procedures Enabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Database administrators may legitimately enable these features for valid business purposes such as cross-database queries, custom CLR assemblies, automation scripts, or application requirements. To reduce false positives, document when these features are required, monitor for unauthorized changes, create change control procedures for configuration modifications, and consider alerting on the enabled state rather than configuration changes if preferred.\n\n• Adjust the risk score weight in Content Management. 
Deployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SQL Server Critical Procedures Enabled\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Database administrators may legitimately enable these features for valid business purposes such as cross-database queries, custom CLR assemblies, automation scripts, or application requirements. To reduce false positives, document when these features are required, monitor for unauthorized changes, create change control procedures for configuration modifications, and consider alerting on the enabled state rather than configuration changes if preferred.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows SQL Server Critical Procedures Enabled\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.634",
              "n": "Windows SQL Server Extended Procedure DLL Loading Hunt",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects when SQL Server loads DLLs to execute extended stored procedures. This is particularly important for security monitoring as it indicates the first-time use or version changes of potentially dangerous procedures like xp_cmdshell, sp_OACreate, and others. While this is a legitimate operation, adversaries may abuse these procedures for execution, discovery, or privilege escalation.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Application 8128",
              "q": "`wineventlog_application` EventCode=8128\n      | rex field=EventData_Xml \"<Data>(?<dll_name>[^<]+)</Data><Data>(?<dll_version>[^<]+)</Data><Data>(?<procedure_name>[^<]+)</Data>\"\n      | rename host as dest\n      | eval dll_category=case( dll_name==\"xpstar.dll\", \"Extended Procedures\", dll_name==\"odsole70.dll\", \"OLE Automation\", dll_name==\"xplog70.dll\", \"Logging Procedures\", true(), \"Other\")\n      | stats count as execution_count, values(procedure_name) as procedures_used, latest(_time) as last_seen\n        BY dest dll_name dll_category\n           dll_version\n      | sort - execution_count\n      | `windows_sql_server_extended_procedure_dll_loading_hunt_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows Event Log Application 8128 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate administrative activity and normal database operations may trigger this detection. Common false positives include initial database startup and configuration, patch deployment and version updates, regular administrative tasks using extended stored procedures, and application servers that legitimately use OLE automation.",
              "refs": "https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/general-extended-stored-procedures-transact-sql, https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms175543(v=sql.105), https://learn.microsoft.com/en-us/sql/relational-databases/extended-stored-procedures-programming/using-extended-stored-procedures",
              "mitre": [
                "T1505.001",
                "T1059.009"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SQL Server Extended Procedure DLL Loading Hunt\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Application 8128. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.001, T1059.009. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SQL Server Extended Procedure DLL Loading Hunt\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate administrative activity and normal database operations may trigger this detection. Common false positives include initial database startup and configuration, patch deployment and version updates, regular administrative tasks using extended stored procedures, and application servers that legitimately use OLE automation.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on this analytic detects when SQL Server loads DLLs to execute extended stored procedures so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.635",
              "n": "Windows SQL Server Startup Procedure",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies when a startup procedure is registered or executed in SQL Server. Startup procedures automatically execute when SQL Server starts, making them an attractive persistence mechanism for attackers. The detection monitors for suspicious stored procedure names and patterns that may indicate malicious activity, such as attempts to execute operating system commands or gain elevated privileges.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Application 17135",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this detection, you need to be ingesting Windows Application Event Logs from SQL Server instances. The detection specifically looks for EventID 17135 which indicates startup procedure execution. Ensure proper logging is enabled for SQL Server startup events and that the logs are being forwarded to your SIEM.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate startup procedures may be used by database administrators for maintenance, monitoring, or application functionality. Common legitimate uses include database maintenance and cleanup jobs, performance monitoring and statistics collection, application initialization procedures, and system health checks. To reduce false positives, organizations should document approved startup procedures, maintain an inventory of expected startup procedures, monitor for changes to startup procedure configurations, and create exceptions for known good procedures.",
              "refs": "https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-procoption-transact-sql, https://www.netspi.com/blog/technical-blog/network-penetration-testing/sql-server-persistence-part-1-startup-stored-procedures/, https://attack.mitre.org/techniques/T1505/001/",
              "mitre": [
                "T1505.001"
              ],
              "dtype": "other",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SQL Server Startup Procedure\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to entity entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Application 17135. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SQL Server Startup Procedure\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate startup procedures may be used by database administrators for maintenance, monitoring, or application functionality. Common legitimate uses include database maintenance and cleanup jobs, performance monitoring and statistics collection, application initialization procedures, and system health checks. To reduce false positives, organizations should document approved startup procedures, maintain an inventory of expected startup procedures, monitor for changes to startup procedure configurations, and create exceptions for known good procedures.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
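\n\nOptional check of the risk events this detection generates (a minimal sketch; it assumes the default ES risk index name `risk` and that the correlation search name contains the detection title, so adjust both to your deployment):\n\n```spl\nindex=risk search_name=\"*Windows SQL Server Startup Procedure*\"\n| stats count sum(risk_score) as total_risk values(risk_message) as messages by risk_object, risk_object_type\n| sort -total_risk\n```\n\nIf this returns nothing after the correlation search has run against test data, revisit the scheduling and RBA configuration in Content Management.",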
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows SQL Server Startup Procedure\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.636",
              "n": "Windows SQLCMD Execution",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies potentially suspicious usage of sqlcmd.exe, focusing on command patterns that may indicate data exfiltration, reconnaissance, or malicious database operations. The detection looks for both short-form (-X) and long-form (--flag) suspicious parameter combinations, which have been observed in APT campaigns targeting high-value organizations. For example, threat actors like CL-STA-0048 have been known to abuse sqlcmd.exe for data theft and exfiltration from compromised MSSQ…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where (Processes.process_name=sqlcmd.exe OR Processes.original_file_name=sqlcmd.exe) by Processes.dest Processes.user Processes.parent_process_name Processes.process_name Processes.process Processes.process_id Processes.parent_process_id | `drop_dm_object_name(Processes)` | eval process_lower=lower(process) | eval is_help_check=case( match(process, \"(?i)-[?]\"), 1, match(process_lower, \"(?i)--help\"), 1, match(process_lower, \"(?i)--version\"), 1, true(), 0 ), has_parameters=if(match(process, \"-[A-Za-z]\"), 1, 0), has_query=case( match(process, \"-[Qq]\\\\s+\"), 1, match(process_lower, \"--query\\\\s+\"), 1, match(process_lower, \"--initial-query\\\\s+\"), 1, true(), 0 ), has_output=case( match(process, \"-[oO]\\\\s+\"), 1, match(process_lower, \"--output-file\\\\s+\"), 1, true(), 0 ), has_input=case( match(process, \"-[iI]\\\\s+\"), 1, match(process_lower, \"--input-file\\\\s+\"), 1, true(), 0 ), has_url_input=case( match(process, \"-[iI]\\\\s+https?://\"), 1, match(process_lower, \"--input-file\\\\s+https?://\"), 1, match(process, \"-[iI]\\\\s+ftp://\"), 1, match(process_lower, \"--input-file\\\\s+ftp://\"), 1, true(), 0 ), has_admin_conn=case( match(process, \"-A\"), 1, match(process_lower, \"--dedicated-admin-connection\"), 1, true(), 0 ), has_suspicious_auth=case( match(process, \"-U\\\\s+sa\\\\b\"), 1, match(process_lower, \"--user-name\\\\s+sa\\\\b\"), 1, match(process, \"-U\\\\s+admin\\\\b\"), 1, match(process_lower, \"--user-name\\\\s+admin\\\\b\"), 1, match(process, \"-E\\\\b\"), 1, match(process_lower, \"--use-trusted-connection\"), 1, true(), 0 ), has_local_server=case( match(process, \"-S\\\\s+127\\\\.0\\\\.0\\\\.1\"), 1, match(process_lower, \"--server\\\\s+127\\\\.0\\\\.0\\\\.1\"), 1, match(process, \"-S\\\\s+localhost\"), 1, match(process_lower, \"--server\\\\s+localhost\"), 1, true(), 0 ), has_suspicious_output=case( match(process_lower, \"-o\\\\s+.*\\\\.(txt|csv|dat)\"), 1, match(process_lower, \"--output-file\\\\s+.*\\\\.(txt|csv|dat)\"), 1, true(), 0 ), has_cert_bypass=case( match(process, \"-C\"), 1, match(process_lower, \"--trust-server-certificate\"), 1, true(), 0 ), has_suspicious_query=case( match(process_lower, \"(xp_cmdshell|sp_oacreate|sp_execute_external|openrowset|bulk\\\\s+insert)\"), 1, match(process_lower, \"(master\\\\.\\\\.\\\\.sysdatabases|msdb\\\\.\\\\.\\\\.backuphistory|sysadmin|securityadmin)\"), 1, match(process_lower, \"(select.*from.*sys\\\\.|select.*password|dump\\\\s+database)\"), 1, match(process_lower, \"(sp_addextendedproc|sp_makewebtask|sp_addsrvrolemember)\"), 1, match(process_lower, \"(sp_configure.*show\\\\s+advanced|reconfigure|enable_xp_cmdshell)\"), 1, match(process_lower, \"(exec.*master\\\\.dbo\\\\.|exec.*msdb\\\\.dbo\\\\.)\"), 1, match(process_lower, \"(sp_password|sp_control_dbmasterkey_password|sp_dropextendedproc)\"), 1, match(process_lower, \"(powershell|cmd\\\\.exe|rundll32|regsvr32|certutil)\"), 1, true(), 0 ), has_suspicious_path=case( match(process_lower, \"(\\\\\\\\temp\\\\\\\\|\\\\\\\\windows\\\\\\\\|\\\\\\\\public\\\\\\\\|\\\\\\\\users\\\\\\\\public\\\\\\\\|\\\\\\\\programdata\\\\\\\\)\"), 1, match(process_lower, \"(\\\\\\\\desktop\\\\\\\\.*\\\\.(zip|rar|7z|tar|gz))\"), 1, match(process_lower, \"(\\\\\\\\downloads\\\\\\\\.*\\\\.(dat|bin|tmp))\"), 1, match(process_lower, \"(\\\\\\\\appdata\\\\\\\\local\\\\\\\\temp\\\\\\\\|\\\\\\\\windows\\\\\\\\tasks\\\\\\\\)\"), 1, 
match(process_lower, \"(\\\\\\\\recycler\\\\\\\\|\\\\\\\\system32\\\\\\\\|\\\\\\\\system volume information\\\\\\\\)\"), 1, match(process_lower, \"(\\\\.vbs|\\\\.ps1|\\\\.bat|\\\\.cmd|\\\\.exe)$\"), 1, true(), 0 ), has_suspicious_combo=case( match(process, \"-E\") AND match(process_lower, \"(?i)xp_cmdshell\"), 1, match(process, \"-Q\") AND match(process_lower, \"(?i)exec\\\\s+master\"), 1, has_local_server=1 AND has_suspicious_query=1, 1, true(), 0 ), has_obfuscation=case( match(process_lower, \"(char\\\\(|convert\\\\(|cast\\\\(|declare\\\\s+@)\"), 1, match(process_lower, \"(exec\\\\s+\\\\(|exec\\\\s+@|;\\\\s*exec)\"), 1, match(process, \"\\\\^|\\\\%|\\\\+\\\\+|\\\\-\\\\-\"), 1, len(process) > 500, 1, true(), 0 ), has_data_exfil=case( match(process_lower, \"(for\\\\s+xml|for\\\\s+json)\"), 1, match(process_lower, \"(bulk\\\\s+insert.*from)\"), 1, match(process_lower, \"(bcp.*queryout|bcp.*out)\"), 1, match(process_lower, \"(select.*into.*from|select.*into.*outfile)\"), 1, true(), 0 )",
              "m": "The analytic will need to be tuned based on organization specific data. Currently, set to hunting to allow for tuning. SQLCmd is a legitimate tool for database management and scripting tasks within enterprise environments. The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoi…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://learn.microsoft.com/en-us/sql/tools/sqlcmd-utility, https://attack.mitre.org/techniques/T1078/, https://attack.mitre.org/techniques/T1213/, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1105/T1105.md#atomic-test-32---file-download-with-sqlcmdexe, https://unit42.paloaltonetworks.com/espionage-campaign-targets-south-asian-entities/",
              "mitre": [
                "T1059.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SQLCMD Execution\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SQLCMD Execution\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on potentially suspicious usage of sqlcmd.exe, focusing on command patterns that may indicate data exfiltration, reconnaissance, or malicious database operations so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.637",
              "n": "Windows SqlWriter SQLDumper DLL Sideload",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the abuse of SqlWriter and SQLDumper executables to sideload the vcruntime140.dll library. It leverages Sysmon EventCode 7 logs, focusing on instances where SQLDumper.exe or SQLWriter.exe load vcruntime140.dll, excluding legitimate loads from the System32 directory. This activity is significant as it indicates potential DLL sideloading, a technique used by adversaries to execute malicious code within trusted processes. If confirmed malicious, this could allow attac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The analytic is designed to be run against Sysmon event logs collected from endpoints. The analytic requires the Sysmon event logs to be ingested into Splunk. The analytic searches for EventCode 7 where the Image is either SQLDumper.exe or SQLWriter.exe and the ImageLoaded is vcruntime140.dll. The search also filters out the legitimate loading of vcruntime140.dll from the System32 directory to red…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible if legitimate processes are loading vcruntime140.dll from non-standard directories. It is recommended to investigate the context of the process loading vcruntime140.dll to determine if it is malicious or not. Modify the search to include additional known good paths for vcruntime140.dll to reduce false positives.",
              "refs": "https://www.mandiant.com/resources/blog/apt29-wineloader-german-political-parties, https://www.zscaler.com/blogs/security-research/european-diplomats-targeted-spikedwine-wineloader",
              "mitre": [
                "T1574.001"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SqlWriter SQLDumper DLL Sideload\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1574.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SqlWriter SQLDumper DLL Sideload\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible if legitimate processes are loading vcruntime140.dll from non-standard directories. It is recommended to investigate the context of the process loading vcruntime140.dll to determine if it is malicious or not. Modify the search to include additional known good paths for vcruntime140.dll to reduce false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
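\n\nOptional raw Sysmon spot check (a minimal sketch mirroring the documented logic; the index name follows a common Sysmon TA convention and is an assumption to adjust):\n\n```spl\nindex=sysmon EventCode=7 Image IN (\"*SQLDumper.exe\", \"*SQLWriter.exe\") ImageLoaded=\"*vcruntime140.dll\" NOT ImageLoaded=\"*System32*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, ImageLoaded\n```\n\nAny hit outside System32 deserves a look at the directory the DLL was loaded from and how it arrived there.",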
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows SqlWriter SQLDumper DLL Sideload\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.638",
              "n": "Windows SSH Proxy Command",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows SSH Proxy Command. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.virustotal.com/gui/file/c33f82868dbbfc3ab03918f430b1a348499f5baf047b136ff0a4fc3e8addaa9b/detection, https://attack.mitre.org/techniques/T1572/, https://lolbas-project.github.io/lolbas/Binaries/Ssh/, https://man.openbsd.org/ssh_config#ProxyCommand, https://man.openbsd.org/ssh_config#LocalCommand",
              "mitre": [
                "T1572",
                "T1059.001",
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows SSH Proxy Command\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1572, T1059.001, T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows SSH Proxy Command\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
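\n\nOptional raw process check behind the risk events (a sketch over the CIM Endpoint data model; it assumes that model is populated from your Sysmon or 4688 feed):\n\n```spl\n| tstats summariesonly=false count from datamodel=Endpoint.Processes\n  where Processes.process_name=ssh.exe Processes.process=\"*ProxyCommand*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n```\n\nReview the command line on each hit; ProxyCommand values that launch shells or download utilities are the pattern this detection targets.",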
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows SSH Proxy Command\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.639",
              "n": "Windows Suspect Process With Authentication Traffic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects executables running from public or temporary locations that are communicating over Windows domain authentication ports/protocols such as LDAP (389), LDAPS (636), and Kerberos (88). It leverages network traffic data to identify processes originating from user-controlled directories. This activity is significant because legitimate applications rarely run from these locations and attempt domain authentication, making it a potential indicator of compromise. If confirme…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 3",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To implement this analytic, Sysmon should be installed in the environment and generating network events for  userland and/or known public writable locations.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Known applications running from these locations for legitimate purposes. Targeting only kerberos (port 88) may significantly reduce noise.",
              "refs": "https://attack.mitre.org/techniques/T1069/002/, https://book.hacktricks.xyz/network-services-pentesting/pentesting-kerberos-88",
              "mitre": [
                "T1087.002",
                "T1204.002"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Suspect Process With Authentication Traffic\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 3. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1087.002, T1204.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Suspect Process With Authentication Traffic\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Known applications running from these locations for legitimate purposes. Targeting only kerberos (port 88) may significantly reduce noise.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
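\n\nOptional raw Sysmon spot check (a sketch of the documented pattern; the index name and path list are assumptions to tailor to your environment):\n\n```spl\nindex=sysmon EventCode=3 DestinationPort IN (88, 389, 636)\n  Image IN (\"*\\\\users\\\\public\\\\*\", \"*\\\\programdata\\\\*\", \"*\\\\temp\\\\*\")\n| stats count by host, Image, DestinationPort, DestinationIp\n```\n\nPer the false-positive note above, restricting DestinationPort to 88 is a quick way to cut noise while you tune.",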
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Suspect Process With Authentication Traffic\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.640",
              "n": "Windows Suspicious C2 Named Pipe",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Suspicious C2 Named Pipe. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 17, Sysmon EventID 18",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and pipename from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1218/009/, https://learn.microsoft.com/en-us/windows/win32/ipc/named-pipes",
              "mitre": [
                "T1559",
                "T1021.002",
                "T1055"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Suspicious C2 Named Pipe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 17, Sysmon EventID 18. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1559, T1021.002, T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Suspicious C2 Named Pipe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
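\n\nOptional named-pipe inventory to review against known C2 pipe naming patterns (a minimal sketch; the index name is an assumption, and your Sysmon configuration must include pipe events):\n\n```spl\nindex=sysmon EventCode IN (17, 18)\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, PipeName\n| sort -count\n```\n\nRare pipe names created by unsigned or unexpected binaries are the ones worth pulling into the RBA investigation view above.",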
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Suspicious C2 Named Pipe\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.641",
              "n": "Windows Suspicious Driver Loaded Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the loading of drivers from suspicious paths, which is a technique often used by malicious software such as coin miners (e.g., xmrig). It leverages Sysmon EventCode 6 to identify drivers loaded from non-standard directories. This activity is significant because legitimate drivers typically reside in specific system directories, and deviations may indicate malicious activity. If confirmed malicious, this could allow an attacker to execute code at the kernel level, p…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 6",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the driver loaded and Signature from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Limited false positives will be present. Some applications do load drivers",
              "refs": "https://www.trendmicro.com/vinfo/hk/threat-encyclopedia/malware/trojan.ps1.powtran.a/, https://redcanary.com/blog/tracking-driver-inventory-to-expose-rootkits/, https://whiteknightlabs.com/2025/11/25/discreet-driver-loading-in-windows/",
              "mitre": [
                "T1543.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Suspicious Driver Loaded Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 6. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1543.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Suspicious Driver Loaded Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Limited false positives will be present. Some applications do load drivers\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
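              "_exampleSpl": "Illustrative sketch only, not the packaged ESCU logic (that lives in the Correlation Search): a raw-Sysmon search for drivers loaded from user-writable paths. The sourcetype and path list below are assumptions; adjust them to your ingest.\n\n```spl\nsourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=6\n| search ImageLoaded IN (\"*\\\\Temp\\\\*\", \"*\\\\Users\\\\*\", \"*\\\\ProgramData\\\\*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host, ImageLoaded, Signature\n```",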
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Suspicious Driver Loaded Path\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.642",
              "n": "Windows Suspicious Named Pipe",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Suspicious Named Pipe. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 17, Sysmon EventID 18",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 17, Sysmon EventID 18 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://attack.mitre.org/techniques/T1218/009/, https://learn.microsoft.com/en-us/windows/win32/ipc/named-pipes",
              "mitre": [
                "T1559",
                "T1021.002",
                "T1055"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Suspicious Named Pipe\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 17, Sysmon EventID 18. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1559, T1021.002, T1055. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Suspicious Named Pipe\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
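              "_exampleSpl": "Illustrative sketch only, not the packaged ESCU logic: a raw-Sysmon search that surfaces pipe creation (EventID 17) and connection (EventID 18) activity for baselining. The real detection matches PipeName against pipe names used by common attack frameworks; the sourcetype is an assumption.\n\n```spl\nsourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode IN (17, 18)\n| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, PipeName\n| sort - count\n```",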
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Suspicious Named Pipe\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.643",
              "n": "Windows Suspicious React or Next.js Child Process",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Suspicious React or Next.js Child Process. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://react.dev/blog/2025/12/03/critical-security-vulnerability-in-react-server-components, https://nextjs.org/blog/CVE-2025-66478, https://nvd.nist.gov/vuln/detail/CVE-2025-55182, https://gist.github.com/maple3142/48bc9393f45e068cf8c90ab865c0f5f3, https://www.wiz.io/blog/critical-vulnerability-in-react-cve-2025-55182",
              "mitre": [
                "T1190",
                "T1059.003",
                "T1059.001"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Suspicious React or Next.js Child Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1059.003, T1059.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Suspicious React or Next.js Child Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
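              "_exampleSpl": "Illustrative sketch only, not the packaged ESCU logic: a CIM tstats search for a Node.js web server spawning a command shell. The parent process name node.exe is an assumption (Next.js typically runs under Node on Windows); adjust to your deployment.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.parent_process_name=\"node.exe\" Processes.process_name IN (\"cmd.exe\", \"powershell.exe\", \"pwsh.exe\") by Processes.dest Processes.parent_process_name Processes.process_name Processes.process\n```",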
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Suspicious React or Next.js Child Process\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.644",
              "n": "Windows Svchost.exe Parent Process Anomaly",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an anomaly where an svchost.exe process is spawned by a parent process other than the standard services.exe. In a typical Windows environment, svchost.exe is a system process that hosts Windows service DLLs, and is expected to be a child of services.exe. A process deviation from this hierarchy may indicate suspicious behavior, such as malicious code attempting to masquerade as a legitimate system process or evade detection. It is essential to investigate the parent…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Windows Update or other Windows Installer processes may launch their own svchost.exe processes that are not directly spawned by services.exe in certain edge cases (e.g., during patches or updates).",
              "refs": "https://attack.mitre.org/techniques/T1036/009/, https://www.trendmicro.com/en_nl/research/24/k/earth-estries.html",
              "mitre": [
                "T1036.009"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Svchost.exe Parent Process Anomaly\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.009. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Svchost.exe Parent Process Anomaly\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Windows Update or other Windows Installer processes may launch their own svchost.exe processes that are not directly spawned by services.exe in certain edge cases (e.g., during patches or updates).\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
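              "_exampleSpl": "Illustrative sketch only, not the packaged ESCU logic: a CIM tstats search for svchost.exe whose parent is anything other than services.exe. Expect benign hits from Windows Update edge cases (see known false positives); the upstream detection applies additional filtering.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=\"svchost.exe\" NOT Processes.parent_process_name=\"services.exe\" by Processes.dest Processes.parent_process_name Processes.process\n```",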
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Svchost.exe Parent Process Anomaly\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.645",
              "n": "Windows Symlink Evaluation Change via Fsutil",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Symlink Evaluation Change via Fsutil. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://learn.microsoft.com/windows-server/administration/windows-commands/fsutil-behavior, https://www.group-ib.com/blog/blackcat/, https://www.intrinsec.com/alphv-ransomware-gang-analysis/",
              "mitre": [
                "T1222.001"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Symlink Evaluation Change via Fsutil\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1222.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Symlink Evaluation Change via Fsutil\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
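              "_exampleSpl": "Illustrative sketch only, not the packaged ESCU logic: a CIM tstats search for fsutil changing symbolic link evaluation (fsutil behavior set SymlinkEvaluation). Wildcard matching on the command line is an assumption; the upstream detection may constrain arguments more tightly.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=\"fsutil.exe\" Processes.process=\"*behavior*\" Processes.process=\"*symlinkevaluation*\" by Processes.dest Processes.user Processes.process\n```",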
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Symlink Evaluation Change via Fsutil\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.646",
              "n": "Windows Terminating Lsass Process",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious process attempting to terminate the Lsass.exe process. It leverages Sysmon EventCode 10 logs to identify processes granted PROCESS_TERMINATE access to Lsass.exe. This activity is significant because Lsass.exe is a critical process responsible for enforcing security policies and handling user credentials. If confirmed malicious, this behavior could indicate an attempt to perform credential dumping, privilege escalation, or evasion of security policies, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 10",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 10 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://blog.talosintelligence.com/2022/03/threat-advisory-doublezero.html",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Terminating Lsass Process\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 10. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Terminating Lsass Process\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
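              "_exampleSpl": "Illustrative sketch only, not the packaged ESCU logic: a raw-Sysmon EventID 10 search for processes granted PROCESS_TERMINATE (0x1) on lsass.exe. Exact-matching GrantedAccess is a simplification, since access masks can combine multiple rights; the sourcetype is an assumption.\n\n```spl\nsourcetype=\"XmlWinEventLog:Microsoft-Windows-Sysmon/Operational\" EventCode=10 TargetImage=\"*\\\\lsass.exe\" GrantedAccess=\"0x1\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host, SourceImage, TargetImage, GrantedAccess\n```",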
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Terminating Lsass Process\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.647",
              "n": "Windows TOR Client Execution",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows TOR Client Execution. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "CrowdStrike ProcessRollup2, Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires CrowdStrike ProcessRollup2, Sysmon EventID 1, Windows Event Log Security 4688 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://unit42.paloaltonetworks.com/tor-traffic-enterprise-networks/, https://attack.mitre.org/software/S0183/, https://attack.mitre.org/techniques/T1090/003/",
              "mitre": [
                "T1090.003"
              ],
              "dtype": "process",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows TOR Client Execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: CrowdStrike ProcessRollup2, Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1090.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows TOR Client Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
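              "_exampleSpl": "Illustrative sketch only, not the packaged ESCU logic: a CIM tstats search for tor.exe execution. Matching on process name alone misses renamed binaries, which is why richer telemetry (for example, original file name or file hash) is preferable where available.\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=\"tor.exe\" by Processes.dest Processes.user Processes.process_name Processes.process\n```",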
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows TOR Client Execution\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.648",
              "n": "Windows UAC Bypass Suspicious Escalation Behavior",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a process spawns an executable known for User Account Control (UAC) bypass exploitation and subsequently monitors for any child processes with a higher integrity level than the original process. This detection leverages Sysmon EventID 1 data, focusing on process integrity levels and known UAC bypass executables. This activity is significant as it may indicate an attacker has successfully used a UAC bypass exploit to escalate privileges. If confirmed malicious,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1 AND Sysmon EventID 1",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1 AND Sysmon EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Including Werfault.exe may cause some unintended false positives related to normal application faulting, but is used in a number of UAC bypass techniques.",
              "refs": "https://attack.mitre.org/techniques/T1548/002/, https://atomicredteam.io/defense-evasion/T1548.002/, https://hadess.io/user-account-control-uncontrol-mastering-the-art-of-bypassing-windows-uac/, https://enigma0x3.net/2016/08/15/fileless-uac-bypass-using-eventvwr-exe-and-registry-hijacking/",
              "mitre": [
                "T1548.002"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows UAC Bypass Suspicious Escalation Behavior\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1 AND Sysmon EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1548.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows UAC Bypass Suspicious Escalation Behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Including Werfault.exe may cause some unintended false positives related to normal application faulting, but is used in a number of UAC bypass techniques.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows UAC Bypass Suspicious Escalation Behavior\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.649",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a source user failing to authenticate with multiple users using explicit credentials on a host. It leverages Windows Event Code 4648 and calculates the standard deviation for each host, using the 3-sigma rule to detect anomalies. This behavior is significant as it may indicate a Password Spraying attack, where an adversary attempts to gain initial access or elevate privileges. If confirmed malicious, this activity could lead to unauthorized access, privilege esc…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4648",
              "q": "`wineventlog_security` EventCode=4648 Caller_User_Name!=*$ Target_User_Name!=*$\n      | bucket span=5m _time\n      | stats dc(Target_User_Name) AS unique_accounts values(Target_User_Name) as user values(dest) as dest values(src) as src\n        BY _time, Computer, Caller_User_Name\n      | eventstats avg(unique_accounts) as comp_avg , stdev(unique_accounts) as comp_std\n        BY Computer\n      | eval upperBound=(comp_avg+comp_std*3)\n      | eval isOutlier=if(unique_accounts > 10 and unique_accounts >= upperBound, 1, 0)\n      | search isOutlier=1\n      | `windows_unusual_count_of_users_fail_to_auth_wth_explicitcredentials_filter`",
              "m": "To successfully implement this search, you need to be ingesting Windows Event Logs from domain controllers as well as member servers and workstations. The Advanced Security Audit policy setting `Audit Logon` within `Logon/Logoff` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A source user failing attempting to authenticate multiple users on a host is not a common behavior for regular systems. Some applications, however, may exhibit this behavior in which case sets of users hosts can be added to an allow list. Possible false positive scenarios include systems where several users connect to like Mail servers, identity providers, remote desktop services, Citrix, etc.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4648, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/basic-audit-logon-events",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4648. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A source user failing attempting to authenticate multiple users on a host is not a common behavior for regular systems. Some applications, however, may exhibit this behavior in which case sets of users hosts can be added to an allow list. Possible false positive scenarios include systems where several users connect to like Mail servers, identity providers, remote desktop services, Citrix, etc.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on a source user failing to authenticate with multiple users using explicit credentials on a host so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.650",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a source process failing to authenticate multiple users, potentially indicating a Password Spraying attack. It leverages Windows Event 4625, which logs failed logon attempts, and uses statistical analysis to detect anomalies. This activity is significant as it may represent an adversary attempting to gain initial access or elevate privileges within an Active Directory environment. If confirmed malicious, the attacker could compromise multiple accounts, leading t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4625",
              "q": "`wineventlog_security`  EventCode=4625 Logon_Type=2 ProcessName!=\"-\"\n      | bucket span=2m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as user values(dest) as dest values(src) as src\n        BY _time, ProcessName, SubjectUserName,\n           Computer, action, app,\n           authentication_method, signature, signature_id\n      | eventstats avg(unique_accounts) as comp_avg , stdev(unique_accounts) as comp_std\n        BY ProcessName, SubjectUserName, Computer\n      | eval upperBound=(comp_avg+comp_std*3)\n      | eval isOutlier=if(unique_accounts > 10 and unique_accounts >= upperBound, 1, 0)\n      | search isOutlier=1\n      | `windows_unusual_count_of_users_failed_to_authenticate_from_process_filter`",
              "m": "To successfully implement this search, you need to be ingesting Windows Event Logs from domain controllers aas well as member servers and workstations. The Advanced Security Audit policy setting `Audit Logon` within `Logon/Logoff` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A process failing to authenticate with multiple users is not a common behavior for legitimate user sessions. Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4625, https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=4625, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/basic-audit-logon-events",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4625. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A process failing to authenticate with multiple users is not a common behavior for legitimate user sessions. Possible false positive scenarios include but are not limited to vulnerability scanners and missconfigured systems.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on a source process failing to authenticate multiple users, potentially indicating a Password Spraying attack so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.651",
              "n": "True Positive Test",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a source host failing to authenticate against a remote host with multiple users, potentially indicating a Password Spraying attack. It leverages Windows Event 4625 (failed logon attempts) and Logon Type 3 (remote authentication) to detect this behavior. This activity is significant as it may represent an adversary attempting to gain initial access or elevate privileges within an Active Directory environment. If confirmed malicious, this could lead to unauthorize…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4625",
              "q": "`wineventlog_security`  EventCode=4625 Logon_Type=3 IpAddress!=\"-\"\n      | bucket span=2m _time\n      | stats dc(TargetUserName) AS unique_accounts values(TargetUserName) as tried_accounts values(dest) as dest values(src) as src values(user) as user\n        BY _time, IpAddress, Computer,\n           action, app, authentication_method,\n           signature, signature_id\n      | eventstats avg(unique_accounts) as comp_avg , stdev(unique_accounts) as comp_std\n        BY IpAddress, Computer\n      | eval upperBound=(comp_avg+comp_std*3)\n      | eval isOutlier=if(unique_accounts > 10 and unique_accounts >= upperBound, 1, 0)\n      | search isOutlier=1\n      | `windows_unusual_count_of_users_remotely_failed_to_auth_from_host_filter`",
              "m": "To successfully implement this search, you need to be ingesting Windows Event Logs from domain controllers as as well as member servers and workstations. The Advanced Security Audit policy setting `Audit Logon` within `Logon/Logoff` needs to be enabled.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A host failing to authenticate with multiple valid users against a remote host is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, remote administration tools, missconfigyred systems, etc.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4625, https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=4625, https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/basic-audit-logon-events",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"True Positive Test\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4625. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"True Positive Test\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A host failing to authenticate with multiple valid users against a remote host is not a common behavior for legitimate systems. Possible false positive scenarios include but are not limited to vulnerability scanners, remote administration tools, missconfigyred systems, etc.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on a source host failing to authenticate against a remote host with multiple users, potentially indicating a Password Spraying attack so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.652",
              "n": "Windows Unusual FileZilla XML Config Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies processes accessing FileZilla XML config files such as recentservers.xml and sitemanager.xml. It leverages Windows Security Event logs, specifically monitoring EventCode 4663, which tracks object access events. This activity is significant because it can indicate unauthorized access or manipulation of sensitive configuration files used by FileZilla, a popular FTP client. If confirmed malicious, this could lead to data exfiltration, credential theft, or further c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest Windows Security Event logs and track event code 4663. For 4663, enable \"Audit Object Access\" in Group Policy. Then check the two boxes listed for both \"Success\" and \"Failure.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "a third party application can access the FileZilla XML config files. Filter is needed.",
              "refs": "https://www.trendmicro.com/en_us/research/18/k/trickbot-shows-off-new-trick-password-grabber-module.html",
              "mitre": [
                "T1552.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Unusual FileZilla XML Config Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Unusual FileZilla XML Config Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: a third party application can access the FileZilla XML config files. Filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Unusual FileZilla XML Config Access\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.653",
              "n": "Windows Unusual Intelliform Storage Registry Access",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies processes accessing Intelliform Storage Registry keys used by Internet Explorer. It leverages Windows Security Event logs, specifically monitoring EventCode 4663, which tracks object access events. This activity is significant because it can indicate unauthorized access or manipulation of sensitive registry keys used for storing form data in Internet Explorer. If confirmed malicious, this could lead to data exfiltration, credential theft, or further compromise o…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4663",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you must ingest Windows Security Event logs and track event code 4663. For 4663, enable \"Audit Object Access\" in Group Policy. Then check the two boxes listed for both \"Success\" and \"Failure.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "a third party application can access the FileZilla XML config files. Filter is needed.",
              "refs": "https://stackoverflow.com/questions/1276700/where-does-internet-explorer-stores-its-form-data-history-that-is-uses-for-auto",
              "mitre": [
                "T1552.001"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Unusual Intelliform Storage Registry Access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4663. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Unusual Intelliform Storage Registry Access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: a third party application can access the FileZilla XML config files. Filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Unusual Intelliform Storage Registry Access\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.654",
              "n": "Windows Unusual NTLM Authentication Destinations By Source",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when an unusual number NTLM authentications is attempted by the same source against multiple destinations. This activity generally results when an attacker attempts to brute force, password spray, or otherwise authenticate to a multiple domain joined Windows devices using an NTLM based process/attack. This same activity may also generate a large number of EventID 4776 events as well.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "NTLM Operational 8004, NTLM Operational 8005, NTLM Operational 8006",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The following analytic requires that NTLM Operational logs to be imported from the environment Domain Controllers. This requires configuration of specific auditing settings, see Microsoft references for further guidance. This analytic is specific to EventID 8004~8006.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Vulnerability scanners, print servers, and applications that deal with non-domain joined authentications. Recommend adjusting the upperBound_unique eval for tailoring the correlation to your environment, running with a 24hr search window will smooth out some statistical noise.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/ntlm-blocking-and-you-application-analysis-and-auditing/ba-p/397191, https://techcommunity.microsoft.com/t5/microsoft-defender-for-identity/enriched-ntlm-authentication-data-using-windows-event-8004/m-p/871827, https://www.varonis.com/blog/investigate-ntlm-brute-force, https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-nrpc/4d1235e3-2c96-4e9f-a147-3cb338a0d09f",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Unusual NTLM Authentication Destinations By Source\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: NTLM Operational 8004, NTLM Operational 8005, NTLM Operational 8006. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Unusual NTLM Authentication Destinations By Source\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Vulnerability scanners, print servers, and applications that deal with non-domain joined authentications. Recommend adjusting the upperBound_unique eval for tailoring the correlation to your environment, running with a 24hr search window will smooth out some statistical noise.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Unusual NTLM Authentication Destinations By Source\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.655",
              "n": "Windows Unusual NTLM Authentication Destinations By User",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when an unusual number of NTLM authentications is attempted by the same user account against multiple destinations. This activity generally results when an attacker attempts to brute force, password spray, or otherwise authenticate to numerous domain joined Windows devices using an NTLM based process/attack. This same activity may also generate a large number of EventID 4776 events as well.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "NTLM Operational 8004, NTLM Operational 8005, NTLM Operational 8006",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The following analytic requires that NTLM Operational logs to be imported from the environment Domain Controllers. This requires configuration of specific auditing settings, see Microsoft references for further guidance. This analytic is specific to EventID 8004~8006.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Vulnerability scanners, print servers, and applications that deal with non-domain joined authentications. Recommend adjusting the upperBound_unique eval for tailoring the correlation to your environment, running with a 24hr search window will smooth out some statistical noise.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/ntlm-blocking-and-you-application-analysis-and-auditing/ba-p/397191, https://techcommunity.microsoft.com/t5/microsoft-defender-for-identity/enriched-ntlm-authentication-data-using-windows-event-8004/m-p/871827, https://www.varonis.com/blog/investigate-ntlm-brute-force, https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-nrpc/4d1235e3-2c96-4e9f-a147-3cb338a0d09f",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Unusual NTLM Authentication Destinations By User\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: NTLM Operational 8004, NTLM Operational 8005, NTLM Operational 8006. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Unusual NTLM Authentication Destinations By User\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Vulnerability scanners, print servers, and applications that deal with non-domain joined authentications. Recommend adjusting the upperBound_unique eval for tailoring the correlation to your environment, running with a 24hr search window will smooth out some statistical noise.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
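              "_spl_sketch": "Not the packaged ESCU logic — a minimal raw-event sketch that approximates this detection by counting distinct NTLM destinations per user from NTLM Operational 8004 events. The index, source, field names (UserName, SChannelName), and the threshold of 10 are assumptions; adjust them to your Windows TA extractions and environment baseline.\n\n```spl\nindex=wineventlog source=\"XmlWinEventLog:Microsoft-Windows-NTLM/Operational\" EventCode=8004\n| stats dc(SChannelName) as unique_destinations values(SChannelName) as destinations by UserName\n| where unique_destinations > 10\n```",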
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Unusual NTLM Authentication Destinations By User\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.656",
              "n": "Windows Unusual NTLM Authentication Users By Destination",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a device is the target of numerous NTLM authentications using a null domain. This activity generally results when an attacker attempts to brute force, password spray, or otherwise authenticate to a domain joined Windows device from a non-domain device. This activity may also generate a large number of EventID 4776 events in tandem, however these events will not indicate the attacker or target device.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "NTLM Operational 8004, NTLM Operational 8005, NTLM Operational 8006",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The following analytic detects when an unusual number of NTLM authentications is attempted against the same destination. This activity generally results when an attacker attempts to brute force, password spray, or otherwise authenticate to a domain joined Windows device using an NTLM based process/attack. This same activity may also generate a large number of EventID 4776 events as well.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Vulnerability scanners, print servers, and applications that deal with non-domain joined authentications. Recommend adjusting the upperBound_unique eval for tailoring the correlation to your environment, running with a 24hr search window will smooth out some statistical noise.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/ntlm-blocking-and-you-application-analysis-and-auditing/ba-p/397191, https://techcommunity.microsoft.com/t5/microsoft-defender-for-identity/enriched-ntlm-authentication-data-using-windows-event-8004/m-p/871827, https://www.varonis.com/blog/investigate-ntlm-brute-force, https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-nrpc/4d1235e3-2c96-4e9f-a147-3cb338a0d09f",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Unusual NTLM Authentication Users By Destination\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: NTLM Operational 8004, NTLM Operational 8005, NTLM Operational 8006. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Unusual NTLM Authentication Users By Destination\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Vulnerability scanners, print servers, and applications that deal with non-domain joined authentications. Recommend adjusting the upperBound_unique eval for tailoring the correlation to your environment, running with a 24hr search window will smooth out some statistical noise.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
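              "_spl_sketch": "Not the packaged ESCU logic — a minimal raw-event sketch that approximates this detection by counting distinct users authenticating over NTLM to each destination from NTLM Operational 8004 events. The index, source, field names (UserName, SChannelName), and the threshold of 30 are assumptions; adjust them to your Windows TA extractions and environment baseline.\n\n```spl\nindex=wineventlog source=\"XmlWinEventLog:Microsoft-Windows-NTLM/Operational\" EventCode=8004\n| stats dc(UserName) as unique_users values(UserName) as users by SChannelName\n| where unique_users > 30\n```",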
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Unusual NTLM Authentication Users By Destination\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.657",
              "n": "Windows Unusual NTLM Authentication Users By Source",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when an unusual number of NTLM authentications is attempted by the same source. This activity generally results when an attacker attempts to brute force, password spray, or otherwise authenticate to a domain joined Windows device using an NTLM based process/attack. This same activity may also generate a large number of EventID 4776 events in as well.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "NTLM Operational 8004, NTLM Operational 8005, NTLM Operational 8006",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The following analytic requires that NTLM Operational logs to be imported from the environment Domain Controllers. This requires configuration of specific auditing settings, see Microsoft references for further guidance. This analytic is specific to EventID 8004~8006.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Vulnerability scanners, print servers, and applications that deal with non-domain joined authentications. Recommend adjusting the upperBound_unique eval for tailoring the correlation to your environment, running with a 24hr search window will smooth out some statistical noise.",
              "refs": "https://attack.mitre.org/techniques/T1110/003/, https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/ntlm-blocking-and-you-application-analysis-and-auditing/ba-p/397191, https://techcommunity.microsoft.com/t5/microsoft-defender-for-identity/enriched-ntlm-authentication-data-using-windows-event-8004/m-p/871827, https://www.varonis.com/blog/investigate-ntlm-brute-force, https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-nrpc/4d1235e3-2c96-4e9f-a147-3cb338a0d09f",
              "mitre": [
                "T1110.003"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Unusual NTLM Authentication Users By Source\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: NTLM Operational 8004, NTLM Operational 8005, NTLM Operational 8006. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1110.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Unusual NTLM Authentication Users By Source\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Vulnerability scanners, print servers, and applications that deal with non-domain joined authentications. Recommend adjusting the upperBound_unique eval for tailoring the correlation to your environment, running with a 24hr search window will smooth out some statistical noise.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
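              "_spl_sketch": "Not the packaged ESCU logic — a minimal raw-event sketch that approximates this detection by counting distinct users attempting NTLM authentication from each source workstation in NTLM Operational 8004 events. The index, source, field names (UserName, WorkstationName), and the threshold of 30 are assumptions; adjust them to your Windows TA extractions and environment baseline.\n\n```spl\nindex=wineventlog source=\"XmlWinEventLog:Microsoft-Windows-NTLM/Operational\" EventCode=8004\n| stats dc(UserName) as unique_users values(UserName) as users by WorkstationName\n| where unique_users > 30\n```",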
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Unusual NTLM Authentication Users By Source\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.658",
              "n": "Windows Unusual Process Load Mozilla NSS-Mozglue Module",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies processes loading Mozilla NSS-Mozglue libraries such as mozglue.dll and nss3.dll. It leverages Sysmon Event logs, specifically monitoring EventCode 7, which tracks image loaded events. This activity is significant because it can indicate unauthorized access or manipulation of these libraries, which are commonly used by Mozilla applications like Firefox and Thunderbird. If confirmed malicious, this could lead to data exfiltration, credential theft, or further com…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 7",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search, you need to be ingesting logs with the process name and imageloaded executions from your endpoints. If you are using Sysmon, you must have at least version 6.0.4 of the Sysmon TA.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate windows application that are not on the list loading this dll. Filter as needed.",
              "refs": "https://www.trendmicro.com/vinfo/nz/threat-encyclopedia/malware/trojanspy.win32.vidar.yxdftz",
              "mitre": [
                "T1218.003"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Unusual Process Load Mozilla NSS-Mozglue Module\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 7. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1218.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Unusual Process Load Mozilla NSS-Mozglue Module\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate windows application that are not on the list loading this dll. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
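              "_spl_sketch": "Not the packaged ESCU logic — a minimal raw-event sketch over Sysmon EventCode 7 (image loaded) that flags non-Mozilla processes loading mozglue.dll or nss3.dll. The index, sourcetype, and the small allow list of Mozilla binaries are assumptions; extend the exclusions to match legitimate software in your environment.\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog EventCode=7 (ImageLoaded=\"*\\\\mozglue.dll\" OR ImageLoaded=\"*\\\\nss3.dll\") NOT Image IN (\"*\\\\firefox.exe\", \"*\\\\thunderbird.exe\")\n| stats count min(_time) as firstTime max(_time) as lastTime by host Image ImageLoaded\n```",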
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Unusual Process Load Mozilla NSS-Mozglue Module\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.659",
              "n": "Windows Unusual SysWOW64 Process Run System32 Executable",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects an unusual process execution pattern where a process running from C:\\Windows\\SysWOW64\\ attempts to execute a binary from C:\\Windows\\System32\\. In a typical Windows environment, 32-bit processes under SysWOW64 should primarily interact with 32-bit binaries within the same directory. However, an execution flow where a 32-bit process spawns a 64-bit binary from System32 can indicate potential process injection, privilege escalation, evasion techniques, or unauthorized…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "The detection is based on data that originates from Endpoint Detection and Response (EDR) agents. These agents are designed to provide security-related telemetry from the endpoints where the agent is installed. To implement this search, you must ingest logs that contain the process GUID, process name, and parent process. Additionally, you must ingest complete command-line executions. These logs mu…",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "some legitimate system processes, software updaters, or compatibility tools may trigger this behavior, occurrences involving unknown, unsigned, or unusual parent processes should be investigated for potential malware activity, persistence mechanisms, or execution flow hijacking.",
              "refs": "https://www.trendmicro.com/en_nl/research/24/k/earth-estries.html",
              "mitre": [
                "T1036.009"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Unusual SysWOW64 Process Run System32 Executable\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1036.009. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Unusual SysWOW64 Process Run System32 Executable\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: some legitimate system processes, software updaters, or compatibility tools may trigger this behavior, occurrences involving unknown, unsigned, or unusual parent processes should be investigated for potential malware activity, persistence mechanisms, or execution flow hijacking.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
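              "_spl_sketch": "Not the packaged ESCU logic — a minimal raw-event sketch over Sysmon EventCode 1 (process creation) that surfaces 32-bit SysWOW64 parents spawning 64-bit System32 binaries. The index and sourcetype are assumptions, and known-good updaters or compatibility tools will need filtering per the false-positive notes above.\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog EventCode=1 ParentImage=\"C:\\\\Windows\\\\SysWOW64\\\\*\" Image=\"C:\\\\Windows\\\\System32\\\\*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host ParentImage Image CommandLine\n```",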
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Unusual SysWOW64 Process Run System32 Executable\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.660",
              "n": "Windows Visual Basic Commandline Compiler DNSQuery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where vbc.exe, the Visual Basic Command Line Compiler, initiates DNS queries. Normally, vbc.exe operates locally to compile Visual Basic code and does not require internet access or to perform DNS lookups. Therefore, any observed DNS activity originating from vbc.exe is highly suspicious and indicative of potential malicious activity. This behavior often suggests that a malicious payload is masquerading as the legitimate vbc.exe process to establish comma…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa20-266a",
              "mitre": [
                "T1071.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Visual Basic Commandline Compiler DNSQuery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Visual Basic Commandline Compiler DNSQuery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
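              "_spl_sketch": "Not the packaged ESCU logic — a minimal raw-event sketch over Sysmon EventCode 22 (DNS query) that surfaces any DNS lookups initiated by vbc.exe, which should normally make none. The index and sourcetype are assumptions; field names (Image, QueryName) follow the Sysmon TA extractions.\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog EventCode=22 Image=\"*\\\\vbc.exe\"\n| stats count values(QueryName) as queried_domains by host Image\n```",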
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Visual Basic Commandline Compiler DNSQuery\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.661",
              "n": "Windows WBAdmin File Recovery From Backup",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows WBAdmin File Recovery From Backup. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://lolbas-project.github.io/lolbas/Binaries/Wbadmin/, https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/wbadmin-start-recovery, https://redmondmag.com/articles/2025/07/18/restoring-a-file-from-a-windows-image-backup.aspx",
              "mitre": [
                "T1490",
                "T1565.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows WBAdmin File Recovery From Backup\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1490, T1565.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows WBAdmin File Recovery From Backup\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
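              "_spl_sketch": "Not the packaged ESCU logic — a minimal raw-event sketch over Sysmon EventCode 1 (process creation) that surfaces wbadmin.exe invoked with start recovery. The index, sourcetype, and command-line match are assumptions; Windows Security 4688 with command-line auditing can substitute if Sysmon is unavailable.\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog EventCode=1 Image=\"*\\\\wbadmin.exe\" CommandLine=\"*start*\" CommandLine=\"*recovery*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by host User Image CommandLine\n```",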
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows WBAdmin File Recovery From Backup\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.662",
              "n": "Windows Wmic CPU Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of WMIC (Windows Management Instrumentation Command-line) for CPU discovery, often executed with commands such as “wmic cpu get name” This behavior is commonly associated with reconnaissance, where adversaries seek to gather details about system hardware, assess processing power, or determine if the environment is virtualized. While WMIC is a legitimate administrative tool, its use for CPU queries outside of normal inventory or management scripts can indica…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may execute this command for testing or auditing.",
              "refs": "https://cert.gov.ua/article/6284730",
              "mitre": [
                "T1082"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Wmic CPU Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Wmic CPU Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may execute this command for testing or auditing.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
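Before enabling the detection, it can help to scope how much matching activity already exists. The following is a simplified sketch of the kind of process-level logic the packaged detection applies, not the shipped ESCU search; it assumes an accelerated CIM Endpoint data model populated by your Sysmon/4688/CrowdStrike feeds:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Endpoint.Processes
  where Processes.process_name="wmic.exe" Processes.process="* cpu *"
  by Processes.dest Processes.user Processes.process
| rename Processes.* as *
| convert ctime(firstTime) ctime(lastTime)
```

A high baseline count here suggests you will want suppressions or a lower risk weight before enabling.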
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Wmic CPU Discovery\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.663",
              "n": "Windows Wmic DiskDrive Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Windows Management Instrumentation Command-line (WMIC) for disk drive discovery activities on a Windows system. This process involves monitoring commands such as “wmic diskdrive” which are often used by administrators for inventory and diagnostics but can also be leveraged by attackers to enumerate hardware details for malicious purposes. Detecting these commands is essential for identifying potentially unauthorized asset reconnaissance or pre-attack map…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may execute this command for testing or auditing.",
              "refs": "https://cert.gov.ua/article/6284730",
              "mitre": [
                "T1082"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Wmic DiskDrive Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Wmic DiskDrive Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may execute this command for testing or auditing.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
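The `_audit` validation above matches every fired alert. To check this one detection specifically, narrow `ss_name` to the correlation search title. A small sketch; the wildcard assumes your saved-search name contains the detection title (ESCU typically prefixes it, e.g. with "ESCU - "):

```spl
index=_audit action="alert_fired" ss_name="*Wmic DiskDrive Discovery*"
| stats count max(trigger_time) as last_fired by ss_name
| convert ctime(last_fired)
```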
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Wmic DiskDrive Discovery\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.664",
              "n": "Windows Wmic Memory Chip Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of Windows Management Instrumentation Command-line (WMIC) commands related to memory chip discovery on a Windows system. Specifically, it monitors instances where commands such as “wmic memorychip” are used to retrieve detailed information about installed RAM modules. While these commands can serve legitimate administrative and troubleshooting purposes, they may also be employed by adversaries to gather system hardware specifications as part of their …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may execute this command for testing or auditing.",
              "refs": "https://cert.gov.ua/article/6284730",
              "mitre": [
                "T1082"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Wmic Memory Chip Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Wmic Memory Chip Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may execute this command for testing or auditing.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
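The tuning guidance above recommends lookup-based suppressions for known-good activity. A hedged sketch of that pattern follows; `wmic_discovery_allowlist` is a hypothetical lookup you would create yourself (a `dest` column plus an `is_approved` flag), not a shipped artifact, and the data model filter is a simplified stand-in for the packaged detection logic:

```spl
| tstats summariesonly=true count
  from datamodel=Endpoint.Processes
  where Processes.process_name="wmic.exe" Processes.process="* memorychip *"
  by Processes.dest Processes.user Processes.process
| rename Processes.* as *
| lookup wmic_discovery_allowlist dest OUTPUT is_approved
| where isnull(is_approved)
```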
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Wmic Memory Chip Discovery\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.665",
              "n": "Windows Wmic Network Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of Windows Management Instrumentation Command-line (WMIC) commands used for network interface discovery on a Windows system. Specifically, it identifies commands such as “wmic nic” that retrieve detailed information about the network adapters installed on the device. While these commands are commonly used by IT administrators for legitimate network inventory and diagnostics, they can also be leveraged by malicious actors for reconnaissance, enabling t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may execute this command for testing or auditing.",
              "refs": "https://cert.gov.ua/article/6284730",
              "mitre": [
                "T1082"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Wmic Network Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Wmic Network Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may execute this command for testing or auditing.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
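The Risk Analysis dashboard mentioned in the tuning section can also be approximated in SPL when you want the raw numbers. A sketch, assuming the CIM Risk data model is populated and that the saved-search name contains the detection title:

```spl
| tstats summariesonly=false count
  from datamodel=Risk.All_Risk
  where All_Risk.search_name="*Wmic Network Discovery*"
  by All_Risk.risk_object All_Risk.risk_object_type
| rename All_Risk.* as *
| sort - count
```

Entities that dominate this list are the first candidates for per-entity risk exceptions.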
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Wmic Network Discovery\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.666",
              "n": "Windows Wmic Systeminfo Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of Windows Management Instrumentation Command-line (WMIC) commands used for computer system discovery on a Windows system. Specifically, it monitors for commands such as “wmic computersystem” that retrieve detailed information about the computer’s model, manufacturer, name, domain, and other system attributes. While these commands are commonly used by administrators for inventory and troubleshooting, they may also be exploited by adversaries to gain i…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrators may execute this command for testing or auditing.",
              "refs": "https://cert.gov.ua/article/6284730",
              "mitre": [
                "T1082"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Wmic Systeminfo Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1082. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Wmic Systeminfo Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrators may execute this command for testing or auditing.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
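To preview which entities would cross the cumulative-risk threshold (the mechanism that promotes risk events to a Notable), sum calculated risk per entity over a window. A sketch only: the 24-hour window, the `system` entity type, and the threshold of 100 are illustrative, so substitute the values configured in your Risk Notable rule:

```spl
| tstats summariesonly=false sum(All_Risk.calculated_risk_score) as total_risk
  from datamodel=Risk.All_Risk
  where All_Risk.risk_object_type="system" earliest=-24h
  by All_Risk.risk_object
| where total_risk > 100
| sort - total_risk
```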
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Wmic Systeminfo Discovery\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.667",
              "n": "WinEvent Scheduled Task Created to Spawn Shell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of scheduled tasks designed to execute commands using native Windows shells like PowerShell, Cmd, Wscript, or Cscript. It leverages Windows Security EventCode 4698 to identify when such tasks are registered. This activity is significant as it may indicate an attempt to establish persistence or execute malicious commands on a system. If confirmed malicious, this could allow an attacker to maintain access, execute arbitrary code, or escalate privileges, …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4698",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4698 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible if legitimate applications are allowed to register tasks that call a shell to be spawned. Filter as needed based on command-line or processes that are used legitimately.",
              "refs": "https://research.checkpoint.com/2021/irans-apt34-returns-with-an-updated-arsenal/, https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=4698, https://redcanary.com/threat-detection-report/techniques/scheduled-task-job/, https://learn.microsoft.com/en-us/windows/win32/taskschd/time-trigger-example--scripting-?redirectedfrom=MSDN",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WinEvent Scheduled Task Created to Spawn Shell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4698. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WinEvent Scheduled Task Created to Spawn Shell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible if legitimate applications are allowed to register tasks that call a shell to be spawned. Filter as needed based on command-line or processes that are used legitimately.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
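A simplified sketch of the underlying EventCode 4698 logic, useful for scoping noise before enabling the detection; this is not the shipped ESCU search. The index, sourcetype, and field names (`TaskContent`, `TaskName`, `Computer`) assume the Splunk Add-on for Microsoft Windows with XML-rendered events, so adjust to your deployment:

```spl
index=wineventlog sourcetype=XmlWinEventLog EventCode=4698
| search TaskContent IN ("*powershell*", "*cmd.exe*", "*wscript*", "*cscript*")
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, TaskName
| convert ctime(firstTime) ctime(lastTime)
```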
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"WinEvent Scheduled Task Created to Spawn Shell\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.668",
              "n": "WinEvent Scheduled Task Created Within Public Path",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of scheduled tasks within user-writable paths using Windows Security EventCode 4698. It identifies tasks registered via schtasks.exe or TaskService that execute commands from directories like Public, ProgramData, Temp, and AppData. This behavior is significant as it may indicate an attempt to establish persistence or execute unauthorized commands. If confirmed malicious, an attacker could maintain long-term access, escalate privileges, or execute arbit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4698",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4698 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are possible if legitimate applications are allowed to register tasks in public paths. Filter as needed based on paths that are used legitimately.",
              "refs": "https://research.checkpoint.com/2021/irans-apt34-returns-with-an-updated-arsenal/, https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=4698, https://redcanary.com/threat-detection-report/techniques/scheduled-task-job/, https://learn.microsoft.com/en-us/windows/win32/taskschd/time-trigger-example--scripting-?redirectedfrom=MSDN, https://app.any.run/tasks/e26f1b2e-befa-483b-91d2-e18636e2faf3/",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WinEvent Scheduled Task Created Within Public Path\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4698. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WinEvent Scheduled Task Created Within Public Path\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are possible if legitimate applications are allowed to register tasks in public paths. Filter as needed based on paths that are used legitimately.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
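The same scoping approach works for the public-path variant; this hedged sketch carries the same index and field-name assumptions as the Spawn Shell example above, and the path list mirrors the directories named in the description:

```spl
index=wineventlog sourcetype=XmlWinEventLog EventCode=4698
| search TaskContent IN ("*\\users\\public\\*", "*\\programdata\\*", "*\\temp\\*", "*\\appdata\\*")
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, TaskName
| convert ctime(firstTime) ctime(lastTime)
```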
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"WinEvent Scheduled Task Created Within Public Path\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.669",
              "n": "WinEvent Windows Task Scheduler Event Action Started",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of tasks registered in Windows Task Scheduler by monitoring EventID 200 (action run) and 201 (action completed) from the Task Scheduler logs. This detection leverages Task Scheduler logs to identify potentially suspicious or unauthorized task executions. Monitoring these events is significant for a SOC as it helps uncover evasive techniques used for persistence, unauthorized code execution, or other malicious activities. If confirmed malicious, this a…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log TaskScheduler 200, Windows Event Log TaskScheduler 201",
              "q": "`wineventlog_task_scheduler` EventCode IN (\"200\",\"201\")  | stats count min(_time) as firstTime max(_time) as lastTime by TaskName dvc EventCode | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)` | `winevent_windows_task_scheduler_event_action_started_filter`",
              "m": "Task Scheduler logs are required to be collected. Enable logging with inputs.conf by adding a stanza for [WinEventLog://Microsoft-Windows-TaskScheduler/Operational] and renderXml=false. Note, not translating it in XML may require a proper extraction of specific items in the Message.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present. Filter based on ActionName paths or specify keywords of interest.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1053.005/T1053.005.md, https://thedfirreport.com/2021/10/18/icedid-to-xinglocker-ransomware-in-24-hours/",
              "mitre": [
                "T1053.005"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WinEvent Windows Task Scheduler Event Action Started\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log TaskScheduler 200, Windows Event Log TaskScheduler 201. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1053.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WinEvent Windows Task Scheduler Event Action Started\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present. Filter based on ActionName paths or specify keywords of interest.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the execution of tasks registered in Windows Task Scheduler by monitoring EventID 200 (action run) and 201 (action completed) from the Task Scheduler logs so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.670",
              "n": "WMI Permanent Event Subscription",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of permanent event subscriptions using Windows Management Instrumentation (WMI). It leverages Sysmon EventID 5 data to identify instances where the event consumers are not the expected \"NTEventLogEventConsumer.\" This activity is significant because it suggests an attacker is attempting to achieve persistence by running malicious scripts or binaries in response to specific system events. If confirmed malicious, this could lead to severe impacts such as …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`wmi` EventCode=5861 Binding\n      | rex field=Message \"Consumer =\\s+(?<consumer>[^;\n      | ^$]+)\"\n      | search consumer!=\"NTEventLogEventConsumer=\\\"SCM Event Log Consumer\\\"\"\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY ComputerName, consumer, Message\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | rename ComputerName as dest\n      | `wmi_permanent_event_subscription_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, administrators may use event subscriptions for legitimate purposes.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1047"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WMI Permanent Event Subscription\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WMI Permanent Event Subscription\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, administrators may use event subscriptions for legitimate purposes.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the creation of permanent event subscriptions using Windows Management Instrumentation (WMI) so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.671",
              "n": "WMI Permanent Event Subscription - Sysmon",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the creation of WMI permanent event subscriptions, which can be used to establish persistence or perform privilege escalation. It leverages Sysmon data, specifically EventCodes 19, 20, and 21, to detect the creation of WMI EventFilters, EventConsumers, and FilterToConsumerBindings. This activity is significant as it may indicate an attacker setting up mechanisms to execute code with elevated SYSTEM privileges when specific events occur. If confirmed malicious, t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 21",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 21 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Although unlikely, administrators may use event subscriptions for legitimate purposes.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1546.003/T1546.003.md, https://www.eideon.com/2018-03-02-THL03-WMIBackdoors/, https://github.com/trustedsec/SysmonCommunityGuide/blob/master/chapters/WMI-events.md, https://in.security/2019/04/03/an-intro-into-abusing-and-identifying-wmi-event-subscriptions-for-persistence/",
              "mitre": [
                "T1546.003"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WMI Permanent Event Subscription - Sysmon\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 21. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1546.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WMI Permanent Event Subscription - Sysmon\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Although unlikely, administrators may use event subscriptions for legitimate purposes.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"WMI Permanent Event Subscription - Sysmon\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.672",
              "n": "WMI Temporary Event Subscription",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of WMI temporary event subscriptions. It leverages Windows Event Logs, specifically EventCode 5860, to identify these activities. This detection is significant because attackers often use WMI to execute commands, gather information, or maintain persistence within a compromised system. If confirmed malicious, this activity could allow an attacker to execute arbitrary code, escalate privileges, or persist in the environment. Analysts should review the sp…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`wmi` EventCode=5860 Temporary\n      | rex field=Message \"NotificationQuery =\\s+(?<query>[^;\n      | ^$]+)\"\n      | search query!=\"SELECT * FROM Win32_ProcessStartTrace WHERE ProcessName = 'wsmprovhost.exe'\" AND query!=\"SELECT * FROM __InstanceOperationEvent WHERE TargetInstance ISA 'AntiVirusProduct' OR TargetInstance ISA 'FirewallProduct' OR TargetInstance ISA 'AntiSpywareProduct'\"\n      | stats count min(_time) as firstTime max(_time) as lastTime\n        BY ComputerName, query\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `wmi_temporary_event_subscription_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some software may create WMI temporary event subscriptions for various purposes. The included search contains an exception for two of these that occur by default on Windows 10 systems. You may need to modify the search to create exceptions for other legitimate events.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1047"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WMI Temporary Event Subscription\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1047. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WMI Temporary Event Subscription\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some software may create WMI temporary event subscriptions for various purposes. The included search contains an exception for two of these that occur by default on Windows 10 systems. You may need to modify the search to create exceptions for other legitimate events.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the creation of WMI temporary event subscriptions so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.673",
              "n": "WMIC XSL Execution via URL",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies WMIC XSL Execution via URL. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1220/T1220.md, https://web.archive.org/web/20190814201250/https://subt0x11.blogspot.com/2018/04/wmicexe-whitelisting-bypass-hacking.html, https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1220/T1220.md#atomic-test-4---wmic-bypass-using-remote-xsl-file, https://securitydatasets.com/notebooks/atomic/windows/defense_evasion/SDWIN-201017061100.html",
              "mitre": [
                "T1220"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"WMIC XSL Execution via URL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2, Cisco Network Visibility Module Flow Data. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1220. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"WMIC XSL Execution via URL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"WMIC XSL Execution via URL\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.674",
              "n": "3CX Supply Chain Attack Network Indicators",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies DNS queries to domains associated with the 3CX supply chain attack. It leverages the Network_Resolution datamodel to detect these suspicious domain indicators. This activity is significant because it can indicate a potential compromise stemming from the 3CX supply chain attack, which is known for distributing malicious software through trusted updates. If confirmed malicious, this activity could allow attackers to establish a foothold in the network, exfiltrate …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to domain entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present for accessing the 3cx[.]com website. Remove from the lookup as needed.",
              "refs": "https://www.sentinelone.com/blog/smoothoperator-ongoing-campaign-trojanizes-3cx-software-in-software-supply-chain-attack/, https://www.cisa.gov/news-events/alerts/2023/03/30/supply-chain-attack-against-3cxdesktopapp, https://www.reddit.com/r/crowdstrike/comments/125r3uu/20230329_situational_awareness_crowdstrike/, https://www.3cx.com/community/threads/crowdstrike-endpoint-security-detection-re-3cx-desktop-app.119934/page-2#post-558898, https://www.3cx.com/community/threads/3cx-desktopapp-security-alert.119951/",
              "mitre": [
                "T1195.002"
              ],
              "dtype": "domain",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"3CX Supply Chain Attack Network Indicators\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to domain entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1195.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"3CX Supply Chain Attack Network Indicators\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present for accessing the 3cx[.]com website. Remove from the lookup as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"3CX Supply Chain Attack Network Indicators\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.675",
              "n": "Cisco Configuration Archive Logging Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic provides comprehensive monitoring of configuration changes on Cisco devices by analyzing archive logs. Configuration archive logging captures all changes made to a device's configuration, providing a detailed audit trail that can be used to identify suspicious or malicious activities. This detection is particularly valuable for identifying patterns of malicious configuration changes that might indicate an attacker's presence, such as the creation of backdoor accounts, SNMP communit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco IOS Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Cisco IOS Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate configuration changes during routine maintenance or device setup may trigger this detection, especially when multiple related changes are made in a single session. Network administrators often make several configuration changes in sequence during maintenance windows. To reduce false positives, consider implementing a baseline of expected administrative activities, including approved administrative usernames and scheduled maintenance windows. The detection includes a threshold (count > 2) to filter out isolated configuration changes, but this threshold may need to be adjusted based on your environment's normal activity patterns.",
              "refs": "https://blog.talosintelligence.com/static-tundra/, https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180328-smi2, https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/config-mgmt/configuration/15-mt/config-mgmt-15-mt-book/cm-config-logger.html",
              "mitre": [
                "T1562.001",
                "T1098",
                "T1505.003"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Configuration Archive Logging Analysis\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco IOS Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1098, T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Configuration Archive Logging Analysis\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate configuration changes during routine maintenance or device setup may trigger this detection, especially when multiple related changes are made in a single session. Network administrators often make several configuration changes in sequence during maintenance windows. To reduce false positives, consider implementing a baseline of expected administrative activities, including approved administrative usernames and scheduled maintenance windows. The detection includes a threshold (count > 2) to filter out isolated configuration changes, but this threshold may need to be adjusted based on your environment's normal activity patterns.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. 
Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Cisco Configuration Archive Logging Analysis\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.676",
              "n": "Cisco IOS Suspicious Privileged Account Creation",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects the creation of privileged user accounts on Cisco IOS devices, which could indicate an attacker establishing backdoor access. The detection focuses on identifying when user accounts are created with privilege level 15 (the highest administrative privilege level in Cisco IOS) or when existing accounts have their privileges elevated. This type of activity is particularly concerning when performed by unauthorized users or during unusual hours, as it may represent a key step in…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco IOS Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to command entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco IOS Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate account creation and privilege elevation activities by authorized administrators will generate alerts with this detection. To reduce false positives, consider implementing a baseline of expected administrative activities, including approved administrative usernames, typical times for account management, and authorized administrators who regularly perform these actions. You may also want to create a lookup table of approved administrative accounts and filter out alerts for these accounts. Additionally, scheduled maintenance windows should be taken into account when evaluating alerts.",
              "refs": "https://blog.talosintelligence.com/static-tundra/, https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180328-smi2, https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/security/a1/sec-a1-cr-book/sec-cr-a2.html#wp3796044403",
              "mitre": [
                "T1136",
                "T1078"
              ],
              "dtype": "command",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco IOS Suspicious Privileged Account Creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to command entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco IOS Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1136, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco IOS Suspicious Privileged Account Creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate account creation and privilege elevation activities by authorized administrators will generate alerts with this detection. To reduce false positives, consider implementing a baseline of expected administrative activities, including approved administrative usernames, typical times for account management, and authorized administrators who regularly perform these actions. You may also want to create a lookup table of approved administrative accounts and filter out alerts for these accounts. Additionally, scheduled maintenance windows should be taken into account when evaluating alerts.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Cisco IOS Suspicious Privileged Account Creation\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.677",
              "n": "Cisco Privileged Account Creation with HTTP Command Execution",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Privileged Account Creation with HTTP Command Execution. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-239a",
              "mitre": [
                "T1021.004",
                "T1136",
                "T1078"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Privileged Account Creation with HTTP Command Execution\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.004, T1136, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Privileged Account Creation with HTTP Command Execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Cisco Privileged Account Creation with HTTP Command Executio\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.678",
              "n": "Cisco Privileged Account Creation with Suspicious SSH Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Privileged Account Creation with Suspicious SSH Activity. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$normalized_risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa25-239a",
              "mitre": [
                "T1021.004",
                "T1136",
                "T1078"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Privileged Account Creation with Suspicious SSH Activity\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.004, T1136, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Privileged Account Creation with Suspicious SSH Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Cisco Privileged Account Creation with Suspicious SSH Activi\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.679",
              "n": "Cisco SD-WAN - Low Frequency Rogue Peer",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco SD-WAN - Low Frequency Rogue Peer. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco SD-WAN NTCE 1000001",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco SD-WAN NTCE 1000001 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisa.gov/news-events/directives/supplemental-direction-ed-26-03-hunt-and-hardening-guidance-cisco-sd-wan-systems, https://blog.talosintelligence.com/uat-8616-sd-wan/, https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/Monitor-And-Maintain/monitor-maintain-book/m-alarms-events-logs.html, https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/system-interface/ios-xe-17/systems-interfaces-book-xe-sdwan/configure-logging.html, https://sec.cloudapps.cisco.com/security/center/resources/Cisco-Catalyst-SD-WAN-HardeningGuide, https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-sdwan-rpa-EHchtZk",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco SD-WAN - Low Frequency Rogue Peer\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco SD-WAN NTCE 1000001. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco SD-WAN - Low Frequency Rogue Peer\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Cisco SD-WAN - Low Frequency Rogue Peer\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.680",
              "n": "Cisco SD-WAN - Peering Activity",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco SD-WAN - Peering Activity. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco SD-WAN NTCE 1000001",
              "q": "`cisco_sd_wan_syslog`\n    TERM(\"*control-connection-state-change*\")\n    TERM(\"*peer-system-ip:*\")\n    TERM(\"*public-ip:*\")\n    TERM(\"*new-state:up*\")\n    | rex field=_raw \"^(?<event_timestamp>(?:[A-Z][a-z]{2}\\s+\\d{1,2}\\s+\\d{2}:\\d{2}:\\d{2}(?:\\.\\d{3})?|[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(?:\\.\\d{1,6})?(?:Z|[+-][0-9]{2}:[0-9]{2})))\\s*:?\"\n    | rex field=_raw \"^(?:[A-Z][a-z]{2}\\s+\\d{1,2}\\s+\\d{2}:\\d{2}:\\d{2}(?:\\.\\d{3})?\\s*:?\\s+)(?<prefix_host>[^\\s:]+)\\s+\\S+(?:\\[\\d+\\])?:\\s+%\"\n    | eval dest=coalesce(prefix_host, legacy_host, device_name, host)\n    | rex field=_raw \"new-state:(?<new_state>\\S+)\"\n    | rex field=_raw \"peer-type:(?<peer_type>\\S+)\"\n    | rex field=_raw \"peer-system-ip:(?<peer_system_ip>\\S+)\"\n    | rex field=_raw \"public-ip:(?<public_ip>\\S+)\"\n    | rex field=_raw \"public-port:(?<public_port>\\d+)\"",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Cisco SD-WAN NTCE 1000001 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.cisa.gov/news-events/directives/supplemental-direction-ed-26-03-hunt-and-hardening-guidance-cisco-sd-wan-systems, https://blog.talosintelligence.com/uat-8616-sd-wan/, https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/Monitor-And-Maintain/monitor-maintain-book/m-alarms-events-logs.html, https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/system-interface/ios-xe-17/systems-interfaces-book-xe-sdwan/configure-logging.html, https://sec.cloudapps.cisco.com/security/center/resources/Cisco-Catalyst-SD-WAN-HardeningGuide, https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-sdwan-rpa-EHchtZk",
              "mitre": [
                "T1190"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco SD-WAN - Peering Activity\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco SD-WAN NTCE 1000001. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco SD-WAN - Peering Activity\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on Cisco SD-WAN - Peering Activity so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.681",
              "n": "Cisco Smart Install Oversized Packet Detection",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Smart Install Oversized Packet Detection. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Splunk Stream TCP",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Splunk Stream TCP ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://blog.talosintelligence.com/static-tundra/, https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180328-smi2",
              "mitre": [
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Smart Install Oversized Packet Detection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Splunk Stream TCP. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Smart Install Oversized Packet Detection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Cisco Smart Install Oversized Packet Detection\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.682",
              "n": "Cisco SNMP Community String Configuration Changes",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects changes to SNMP community strings on Cisco devices, which could indicate an attacker establishing persistence or attempting to extract credentials. After gaining initial access to network devices, threat actors like Static Tundra often modify SNMP configurations to enable unauthorized monitoring and data collection. This detection specifically looks for the configuration of SNMP community strings with read-write (rw) or read-only (ro) permissions, as well as the configurati…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco IOS Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to command entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco IOS Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate SNMP configuration changes may trigger this detection during routine network maintenance or initial device setup. Network administrators often need to configure SNMP for monitoring and management purposes. To reduce false positives, consider implementing a baseline of expected administrative activities, including approved administrative usernames, typical times for SNMP configuration changes, and scheduled maintenance windows. You may also want to create a lookup table of approved SNMP hosts and filter out alerts for these destinations.",
              "refs": "https://blog.talosintelligence.com/static-tundra/, https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180328-smi2, https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/snmp/command/snmp-cr-book/snmp-s1.html#wp1307296356",
              "mitre": [
                "T1562.001",
                "T1040",
                "T1552"
              ],
              "dtype": "command",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco SNMP Community String Configuration Changes\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to command entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco IOS Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001, T1040, T1552. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco SNMP Community String Configuration Changes\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate SNMP configuration changes may trigger this detection during routine network maintenance or initial device setup. Network administrators often need to configure SNMP for monitoring and management purposes. To reduce false positives, consider implementing a baseline of expected administrative activities, including approved administrative usernames, typical times for SNMP configuration changes, and scheduled maintenance windows. You may also want to create a lookup table of approved SNMP hosts and filter out alerts for these destinations.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Cisco SNMP Community String Configuration Changes\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
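The tuning notes for the SNMP community string detection suggest a lookup of approved administrators. A minimal sketch of that suppression pattern, assuming a hypothetical `approved_snmp_admins.csv` lookup (one `user` column) and illustrative index/sourcetype names; adjust all three to your environment:

```spl
index=netops sourcetype=cisco:ios "snmp-server community"
| search NOT [| inputlookup approved_snmp_admins.csv | fields user]
| stats count min(_time) as firstTime max(_time) as lastTime by host user
```

The subsearch expands to an OR of approved usernames, so only unapproved actors survive the NOT. This assumes configuration-change logging or command accounting is enabled on the devices so the raw command text is searchable.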
            {
              "i": "10.7.683",
              "n": "Cisco TFTP Server Configuration for Data Exfiltration",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects the configuration of TFTP services on Cisco IOS devices that could be used to exfiltrate sensitive configuration files. Threat actors like Static Tundra have been observed configuring TFTP servers to make device configuration files accessible for exfiltration after gaining initial access. The detection specifically looks for commands that expose critical configuration files such as startup-config, running-config, and other sensitive system information through TFTP. This act…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco IOS Logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to command entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco IOS Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate TFTP server configurations may be detected by this analytic during authorized backup operations or device maintenance. Network administrators sometimes use TFTP for legitimate configuration backups, firmware updates, or during troubleshooting. To reduce false positives, consider implementing a baseline of expected administrative activities, including approved administrative usernames and scheduled maintenance windows.",
              "refs": "https://blog.talosintelligence.com/static-tundra/, https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180328-smi2, https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/fundamentals/command/cf_command_ref/T_through_X.html#wp3081407060",
              "mitre": [
                "T1567",
                "T1005"
              ],
              "dtype": "command",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco TFTP Server Configuration for Data Exfiltration\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to command entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco IOS Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1567, T1005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco TFTP Server Configuration for Data Exfiltration\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate TFTP server configurations may be detected by this analytic during authorized backup operations or device maintenance. Network administrators sometimes use TFTP for legitimate configuration backups, firmware updates, or during troubleshooting. To reduce false positives, consider implementing a baseline of expected administrative activities, including approved administrative usernames and scheduled maintenance windows.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Cisco TFTP Server Configuration for Data Exfiltration\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
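For ad-hoc scoping of the TFTP detection, a quick hunt for TFTP-related configuration commands can help separate routine backups from suspicious exposure. A sketch under assumed index/sourcetype names; the command strings are illustrative, not exhaustive:

```spl
index=netops sourcetype=cisco:ios ("tftp-server" OR "copy running-config tftp" OR "copy startup-config tftp")
| stats count min(_time) as firstTime max(_time) as lastTime by host user
| sort - count
```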
            {
              "i": "10.7.684",
              "n": "Detect ARP Poisoning",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects ARP Poisoning attacks by monitoring for Dynamic ARP Inspection (DAI) errors on Cisco network devices. It leverages logs from Cisco devices, specifically looking for events where the ARP inspection feature has disabled an interface due to suspicious activity. This activity is significant because ARP Poisoning can allow attackers to intercept, modify, or disrupt network traffic, leading to potential data breaches or denial of service. If confirmed malicious, this cou…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco IOS Logs",
              "q": "`cisco_networks` facility=\"PM\" mnemonic=\"ERR_DISABLE\" disable_cause=\"arp-inspection\"\n      | eval src_interface=src_int_prefix_long+src_int_suffix\n      | stats min(_time) AS firstTime max(_time) AS lastTime count\n        BY host src_interface\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `detect_arp_poisoning_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco IOS Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This search might be prone to high false positives if DHCP Snooping or ARP inspection has been incorrectly configured, or if a device normally sends many ARP packets (unlikely).",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1200",
                "T1498",
                "T1557.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect ARP Poisoning\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco IOS Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1200, T1498, T1557.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect ARP Poisoning\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This search might be prone to high false positives if DHCP Snooping or ARP inspection has been incorrectly configured, or if a device normally sends many ARP packets (unlikely).\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on aRP Poisoning attacks by monitoring for Dynamic ARP Inspection (DAI) errors on Cisco network devices so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
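Before enabling the ARP poisoning detection, confirm that err-disable events are actually arriving and see which causes dominate. This reuses the `cisco_networks` macro from the detection itself; only the breakdown is new:

```spl
`cisco_networks` facility="PM" mnemonic="ERR_DISABLE"
| stats count by disable_cause, host
| sort - count
```

If `disable_cause="arp-inspection"` never appears, verify that Dynamic ARP Inspection is configured on the switches before assuming the detection is healthy.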
            {
              "i": "10.7.685",
              "n": "Detect DGA domains using pretrained model in DSDL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies Domain Generation Algorithm (DGA) generated domains using a pre-trained deep learning model. It leverages the Network Resolution data model to analyze domain names and detect unusual character sequences indicative of DGA activity. This behavior is significant as adversaries often use DGAs to generate numerous domain names for command-and-control servers, making it harder to block malicious traffic. If confirmed malicious, this activity could enable attackers to …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` values(DNS.answer) as IPs min(_time) as firstTime  max(_time) as lastTime FROM datamodel=Network_Resolution\n      BY DNS.src, DNS.query\n    | `drop_dm_object_name(DNS)`\n    | rename query AS domain\n    | fields IPs, src, domain, firstTime, lastTime\n    | apply pretrained_dga_model_dsdl\n    | rename pred_dga_proba AS dga_score\n    | where dga_score>0.5\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | table src, domain, IPs, firstTime, lastTime, dga_score\n    | `detect_dga_domains_using_pretrained_model_in_dsdl_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if domain name is similar to dga generated domains.",
              "refs": "https://attack.mitre.org/techniques/T1568/002/, https://unit42.paloaltonetworks.com/threat-brief-understanding-domain-generation-algorithms-dga/, https://en.wikipedia.org/wiki/Domain_generation_algorithm",
              "mitre": [
                "T1568.002"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect DGA domains using pretrained model in DSDL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1568.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect DGA domains using pretrained model in DSDL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if domain name is similar to dga generated domains.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on domain Generation Algorithm (DGA) generated domains using a pre-trained deep learning model so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
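A quick way to smoke-test the DSDL model before scheduling the detection is to score a handful of hand-picked domains. This assumes the DSDL app and its container are running and the `pretrained_dga_model_dsdl` model is deployed; the sample domains are illustrative (the benign one should score low, the random-looking ones high):

```spl
| makeresults count=3
| streamstats count as n
| eval domain=case(n=1, "google.com", n=2, "xkqzjwvp.net", n=3, "u7f3k9q2.info")
| apply pretrained_dga_model_dsdl
| table domain, pred_dga_proba
```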
            {
              "i": "10.7.686",
              "n": "Detect DNS Data Exfiltration using pretrained model in DSDL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential DNS data exfiltration using a pre-trained deep learning model. It leverages DNS request data from the Network Resolution datamodel and computes features from past events between the same source and domain. The model generates a probability score (pred_is_exfiltration_proba) indicating the likelihood of data exfiltration. This activity is significant as DNS tunneling can be used by attackers to covertly exfiltrate sensitive data. If confirmed malicious,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Network_Resolution\n      BY DNS.src _time DNS.query\n    | `drop_dm_object_name(\"DNS\")`\n    | sort - _time,src, query\n    | streamstats count as rank\n      BY src query\n    | where rank < 10\n    | table src,query,rank,_time\n    | apply detect_dns_data_exfiltration_using_pretrained_model_in_dsdl\n    | table src,_time,query,rank,pred_is_dns_data_exfiltration_proba,pred_is_dns_data_exfiltration\n    | where rank == 1\n    | rename pred_is_dns_data_exfiltration_proba as is_exfiltration_score\n    | rename pred_is_dns_data_exfiltration as is_exfiltration\n    | where is_exfiltration_score > 0.5\n    | `security_content_ctime(_time)`\n    | table src, _time,query,is_exfiltration_score,is_exfiltration\n    | `detect_dns_data_exfiltration_using_pretrained_model_in_dsdl_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to domain entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if DNS data exfiltration request look very similar to benign DNS requests.",
              "refs": "https://attack.mitre.org/techniques/T1048/003/, https://unit42.paloaltonetworks.com/dns-tunneling-how-dns-can-be-abused-by-malicious-actors/, https://en.wikipedia.org/wiki/Data_exfiltration",
              "mitre": [
                "T1048.003"
              ],
              "dtype": "domain",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect DNS Data Exfiltration using pretrained model in DSDL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to domain entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1048.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect DNS Data Exfiltration using pretrained model in DSDL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if DNS data exfiltration request look very similar to benign DNS requests.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on potential data exfiltration using a pre-trained deep learning model so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
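Before alerting on the fixed 0.5 cut-off, it can help to inspect how exfiltration scores are distributed in your own DNS traffic. A sketch that reuses the detection's own pipeline but histograms the probability instead of thresholding it:

```spl
| tstats `security_content_summariesonly` count FROM datamodel=Network_Resolution BY DNS.src _time DNS.query
| `drop_dm_object_name("DNS")`
| sort - _time, src, query
| streamstats count as rank BY src query
| where rank < 10
| apply detect_dns_data_exfiltration_using_pretrained_model_in_dsdl
| where rank == 1
| bin span=0.1 pred_is_dns_data_exfiltration_proba as proba_bucket
| stats count by proba_bucket
| sort proba_bucket
```

A long tail above 0.5 suggests raising the threshold or adding suppressions before enabling the correlation search.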
            {
              "i": "10.7.687",
              "n": "Detect DNS Query to Decommissioned S3 Bucket",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies DNS queries to domains that match previously decommissioned S3 buckets. This activity is significant because attackers may attempt to recreate deleted S3 buckets that were previously public to hijack them for malicious purposes. If successful, this could allow attackers to host malicious content or exfiltrate data through compromised bucket names that may still be referenced by legitimate applications.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel:Network_Resolution | search src=\"$src$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to domain entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some applications or scripts may continue to reference old S3 bucket names after they have been decommissioned. These should be investigated and updated to prevent potential security risks.",
              "refs": "https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html, https://labs.watchtowr.com/8-million-requests-later-we-made-the-solarwinds-supply-chain-attack-look-amateur/",
              "mitre": [
                "T1485"
              ],
              "dtype": "domain",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect DNS Query to Decommissioned S3 Bucket\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to domain entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect DNS Query to Decommissioned S3 Bucket\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some applications or scripts may continue to reference old S3 bucket names after they have been decommissioned. These should be investigated and updated to prevent potential security risks.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on dNS queries to domains that match previously decommissioned S3 buckets so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
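Since this detection's drilldown only pivots on the Network_Resolution data model, a useful pre-check is whether S3-style hostnames appear in your DNS telemetry at all. A sketch; the wildcard patterns cover common S3 endpoint forms but are illustrative, not complete:

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Network_Resolution.DNS
    where (DNS.query="*.s3.amazonaws.com" OR DNS.query="*.s3.*.amazonaws.com")
    by DNS.src, DNS.query
| `drop_dm_object_name("DNS")`
| sort - count
```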
            {
              "i": "10.7.688",
              "n": "Detect hosts connecting to dynamic domain providers",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies DNS queries from internal hosts to dynamic domain providers. It leverages DNS query logs from the `Network_Resolution` data model and cross-references them with a lookup file containing known dynamic DNS providers. This activity is significant because attackers often use dynamic DNS services to host malicious payloads or command-and-control servers, making it crucial for security teams to monitor. If confirmed malicious, this activity could allow attackers to by…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some users and applications may leverage Dynamic DNS to reach out to some domains on the Internet since dynamic DNS by itself is not malicious, however this activity must be verified.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1189"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect hosts connecting to dynamic domain providers\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1189. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect hosts connecting to dynamic domain providers\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some users and applications may leverage Dynamic DNS to reach out to some domains on the Internet since dynamic DNS by itself is not malicious, however this activity must be verified.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Detect hosts connecting to dynamic domain providers\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
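The detection cross-references DNS queries against a lookup of dynamic DNS providers. A minimal sketch of that join for ad-hoc review; `dynamic_dns_providers`, its `dynamic_dns_domains` key, and the `isDynDNS` flag reflect the lookup shipped with ESCU, but verify the exact names in your installed content before relying on them:

```spl
| tstats summariesonly=true count from datamodel=Network_Resolution.DNS by DNS.src, DNS.query
| `drop_dm_object_name("DNS")`
| lookup dynamic_dns_providers dynamic_dns_domains as query OUTPUT isDynDNS
| search isDynDNS=True
| stats sum(count) as queries by src, query
```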
            {
              "i": "10.7.689",
              "n": "Detect IPv6 Network Infrastructure Threats",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects IPv6 network infrastructure threats by identifying suspicious activities such as IP and MAC address theft or packet drops. It leverages logs from Cisco network devices configured with First Hop Security measures like RA Guard and DHCP Guard. This activity is significant as it can indicate attempts to compromise network integrity and security. If confirmed malicious, attackers could manipulate network traffic, leading to potential data interception, unauthorized acc…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco IOS Logs",
              "q": "`cisco_networks` facility=\"SISF\" mnemonic IN (\"IP_THEFT\",\"MAC_THEFT\",\"MAC_AND_IP_THEFT\",\"PAK_DROP\")\n      | eval src_interface=src_int_prefix_long+src_int_suffix\n      | eval dest_interface=dest_int_prefix_long+dest_int_suffix\n      | stats min(_time) AS firstTime max(_time) AS lastTime values(src_mac) AS src_mac values(src_vlan) AS src_vlan values(mnemonic) AS mnemonic values(vendor_explanation) AS vendor_explanation values(src) AS src values(dest) AS dest values(dest_interface) AS dest_interface values(action) AS action count\n        BY host src_interface\n      | table host src_interface dest_interface src_mac src dest src_vlan mnemonic vendor_explanation action count\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `detect_ipv6_network_infrastructure_threats_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco IOS Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipv6_fhsec/configuration/xe-16-12/ip6f-xe-16-12-book/ip6-ra-guard.html, https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipv6_fhsec/configuration/xe-16-12/ip6f-xe-16-12-book/ip6-snooping.html, https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipv6_fhsec/configuration/xe-16-12/ip6f-xe-16-12-book/ip6-dad-proxy.html, https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipv6_fhsec/configuration/xe-16-12/ip6f-xe-16-12-book/ip6-nd-mcast-supp.html, https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipv6_fhsec/configuration/xe-16-12/ip6f-xe-16-12-book/ip6-dhcpv6-guard.html, https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipv6_fhsec/configuration/xe-16-12/ip6f-xe-16-12-book/ip6-src-guard.html, https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipv6_fhsec/configuration/xe-16-12/ip6f-xe-16-12-book/ipv6-dest-guard.html",
              "mitre": [
                "T1200",
                "T1498",
                "T1557.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect IPv6 Network Infrastructure Threats\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco IOS Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1200, T1498, T1557.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect IPv6 Network Infrastructure Threats\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on iPv6 network infrastructure threats by identifying suspicious activities such as IP and MAC address theft or packet drops so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.690",
              "n": "Detect Outbound LDAP Traffic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies outbound LDAP traffic to external IP addresses. It leverages the Network_Traffic data model to detect connections on ports 389 or 636 that are not directed to private IP ranges (RFC1918). This activity is significant because outbound LDAP traffic can indicate potential data exfiltration or unauthorized access attempts. If confirmed malicious, attackers could exploit this to access sensitive directory information, leading to data breaches or further network compr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Palo Alto Network Traffic, Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime values(All_Traffic.dest) as dest FROM datamodel=Network_Traffic.All_Traffic\n      WHERE (All_Traffic.dest_port = 389 OR All_Traffic.dest_port = 636)\n      BY All_Traffic.action All_Traffic.app All_Traffic.bytes\n         All_Traffic.bytes_in All_Traffic.bytes_out All_Traffic.dest All_Traffic.dest_port All_Traffic.dvc\n         All_Traffic.protocol All_Traffic.protocol_version All_Traffic.src All_Traffic.src_port All_Traffic.transport\n         All_Traffic.user All_Traffic.vendor_product All_Traffic.rule\n    | `drop_dm_object_name(\"All_Traffic\")`\n    | where src != dest\n    | where NOT (cidrmatch(dest,\"10.0.0.0/8\") OR cidrmatch(dest,\"192.168.0.0/16\") OR cidrmatch(dest,\"172.16.0.0/12\"))\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `detect_outbound_ldap_traffic_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Palo Alto Network Traffic, Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time. allowed outbound through your perimeter firewall. Please check those servers to verify if the activity is legitimate.",
              "refs": "[Splunk Lantern — search use cases](https://lantern.splunk.com/Splunk_Platform/Use_Cases)",
              "mitre": [
                "T1190",
                "T1059"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Outbound LDAP Traffic\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Palo Alto Network Traffic, Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Outbound LDAP Traffic\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time. allowed outbound through your perimeter firewall. Please check those servers to verify if the activity is legitimate.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on outbound LDAP traffic to external IP addresses so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.691",
              "n": "Detect Port Security Violation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects port security violations on Cisco switches. It leverages logs from Cisco network devices, specifically looking for events with mnemonics indicating port security violations. This activity is significant because it indicates an unauthorized device attempting to connect to a secured port, potentially bypassing network access controls. If confirmed malicious, this could allow an attacker to gain unauthorized access to the network, leading to data exfiltration, network…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco IOS Logs",
              "q": "`cisco_networks` (facility=\"PM\" mnemonic=\"ERR_DISABLE\" disable_cause=\"psecure-violation\") OR (facility=\"PORT_SECURITY\" mnemonic=\"PSECURE_VIOLATION\" OR mnemonic=\"PSECURE_VIOLATION_VLAN\")\n      | eval src_interface=src_int_prefix_long+src_int_suffix\n      | stats min(_time) AS firstTime max(_time) AS lastTime values(disable_cause) AS disable_cause values(src_mac) AS src_mac values(src_vlan) AS src_vlan values(action) AS action count\n        BY host src_interface\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `detect_port_security_violation_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco IOS Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This search might be prone to high false positives if you have malfunctioning devices connected to your ethernet ports or if end users periodically connect physical devices to the network.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1200",
                "T1498",
                "T1557.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Port Security Violation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco IOS Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1200, T1498, T1557.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Port Security Violation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This search might be prone to high false positives if you have malfunctioning devices connected to your ethernet ports or if end users periodically connect physical devices to the network.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on port security violations on Cisco switches so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.692",
              "n": "Detect Rogue DHCP Server",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the presence of unauthorized DHCP servers on the network. It leverages logs from Cisco network devices with DHCP Snooping enabled, specifically looking for events where DHCP leases are issued from untrusted ports. This activity is significant because rogue DHCP servers can facilitate Man-in-the-Middle attacks, leading to potential data interception and network disruption. If confirmed malicious, this could allow attackers to redirect network traffic, capture sen…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco IOS Logs",
              "q": "`cisco_networks` facility=\"DHCP_SNOOPING\" mnemonic=\"DHCP_SNOOPING_UNTRUSTED_PORT\"\n      | stats min(_time) AS firstTime max(_time) AS lastTime count values(message_type) AS message_type values(src_mac) AS src_mac\n        BY host\n      | `security_content_ctime(firstTime)`\n      | `security_content_ctime(lastTime)`\n      | `detect_rogue_dhcp_server_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco IOS Logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This search might be prone to high false positives if DHCP Snooping has been incorrectly configured or in the unlikely event that the DHCP server has been moved to another network interface.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1200",
                "T1498",
                "T1557"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Rogue DHCP Server\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco IOS Logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1200, T1498, T1557. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Rogue DHCP Server\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This search might be prone to high false positives if DHCP Snooping has been incorrectly configured or in the unlikely event that the DHCP server has been moved to another network interface.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the presence of unauthorized DHCP servers on the network so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.693",
              "n": "Detect Software Download To Network Device",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies unauthorized software downloads to network devices via TFTP, FTP, or SSH/SCP. It detects this activity by analyzing network traffic events on specific ports (69, 21, 22) from devices categorized as network, router, or switch. This activity is significant because adversaries may exploit netbooting to load unauthorized operating systems, potentially compromising network integrity. If confirmed malicious, this could lead to unauthorized control over network devices…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Network_Traffic\n      WHERE (\n            All_Traffic.transport=udp\n            AND\n            All_Traffic.dest_port=69\n        )\n        OR (All_Traffic.transport=tcp AND All_Traffic.dest_port=21) OR (All_Traffic.transport=tcp AND All_Traffic.dest_port=22) AND All_Traffic.dest_category!=common_software_repo_destination AND All_Traffic.src_category=network OR All_Traffic.src_category=router OR All_Traffic.src_category=switch\n      BY All_Traffic.src All_Traffic.dest All_Traffic.dest_port\n    | `drop_dm_object_name(\"All_Traffic\")`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `detect_software_download_to_network_device_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This search will also report any legitimate attempts of software downloads to network devices as well as outbound SSH sessions from network devices.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1542.005"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Software Download To Network Device\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1542.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Software Download To Network Device\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This search will also report any legitimate attempts of software downloads to network devices as well as outbound SSH sessions from network devices.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on unauthorized software downloads to network devices via TFTP, FTP, or SSH/SCP so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.694",
              "n": "Detect suspicious DNS TXT records using pretrained model in DSDL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies suspicious DNS TXT records using a pre-trained deep learning model. It leverages DNS response data from the Network Resolution data model, categorizing TXT records into known types via regular expressions. Records that do not match known patterns are flagged as suspicious. This activity is significant as DNS TXT records can be used for data exfiltration or command-and-control communication. If confirmed malicious, attackers could use these records to covertly tr…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Network_Resolution\n      WHERE DNS.message_type=response\n        AND\n        DNS.record_type=TXT\n      BY DNS.src DNS.dest DNS.answer\n         DNS.record_type\n    | `drop_dm_object_name(\"DNS\")`\n    | rename answer as text\n    | fields firstTime, lastTime, message_type,record_type,src,dest, text\n    | apply detect_suspicious_dns_txt_records_using_pretrained_model_in_dsdl\n    | rename predicted_is_unknown as is_suspicious_score\n    | where is_suspicious_score > 0.5\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | table src,dest,text,record_type, firstTime, lastTime,is_suspicious_score\n    | `detect_suspicious_dns_txt_records_using_pretrained_model_in_dsdl_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present if DNS TXT record contents are similar to benign DNS TXT record contents.",
              "refs": "https://attack.mitre.org/techniques/T1071/004/, https://unit42.paloaltonetworks.com/dns-tunneling-how-dns-can-be-abused-by-malicious-actors/, https://en.wikipedia.org/wiki/TXT_record",
              "mitre": [
                "T1568.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect suspicious DNS TXT records using pretrained model in DSDL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1568.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect suspicious DNS TXT records using pretrained model in DSDL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present if DNS TXT record contents are similar to benign DNS TXT record contents.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on suspicious TXT records using a pre-trained deep learning model so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.695",
              "n": "Detect Unauthorized Assets by MAC address",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies unauthorized devices attempting to connect to the organization's network by inspecting DHCP request packets. It detects this activity by comparing the MAC addresses in DHCP requests against a list of known authorized devices stored in the assets_by_str.csv file. This activity is significant for a SOC because unauthorized devices can pose security risks, including potential data breaches or network disruptions. If confirmed malicious, this activity could allow an…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Network_Sessions\n      WHERE nodename=All_Sessions.DHCP All_Sessions.tag=dhcp\n      BY All_Sessions.dest All_Sessions.dest_mac\n    | dedup All_Sessions.dest_mac\n    | `drop_dm_object_name(\"Network_Sessions\")`\n    | `drop_dm_object_name(\"All_Sessions\")`\n    | search NOT [\n    | inputlookup asset_lookup_by_str\n    | rename mac as dest_mac\n    | fields + dest_mac]\n    | `detect_unauthorized_assets_by_mac_address_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This search might be prone to high false positives. Please consider this when conducting analysis or investigations. Authorized devices may be detected as unauthorized. If this is the case, verify the MAC address of the system responsible for the false positive and add it to the Assets and Identity framework with the proper information.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "system",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Unauthorized Assets by MAC address\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Unauthorized Assets by MAC address\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This search might be prone to high false positives. Please consider this when conducting analysis or investigations. Authorized devices may be detected as unauthorized. If this is the case, verify the MAC address of the system responsible for the false positive and add it to the Assets and Identity framework with the proper information.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on unauthorized devices attempting to connect to the organization's network by inspecting DHCP request packets so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.696",
              "n": "DNS Query Length Outliers - MLTK",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies DNS requests with unusually large query lengths for the record type being requested. It leverages the Network_Resolution data model and applies a machine learning model to detect outliers in DNS query lengths. This activity is significant because unusually large DNS queries can indicate data exfiltration or command-and-control communication attempts. If confirmed malicious, this activity could allow attackers to exfiltrate sensitive data or maintain persistent c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count min(_time) as start_time max(_time) as end_time values(DNS.src) as src values(DNS.dest) as dest FROM datamodel=Network_Resolution\n      BY DNS.query DNS.record_type\n    | search DNS.record_type=*\n    | `drop_dm_object_name(DNS)`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | eval query_length = len(query)\n    | apply dns_query_pdfmodel threshold=0.01\n    | rename \"IsOutlier(query_length)\" as isOutlier\n    | search isOutlier > 0\n    | sort -query_length\n    | table start_time end_time query record_type count src dest query_length\n    | `dns_query_length_outliers___mltk_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "If you are seeing more results than desired, you may consider reducing the value for threshold in the search. You should also periodically re-run the support search to re-build the ML model on the latest data.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1071.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"DNS Query Length Outliers - MLTK\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"DNS Query Length Outliers - MLTK\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: If you are seeing more results than desired, you may consider reducing the value for threshold in the search. You should also periodically re-run the support search to re-build the ML model on the latest data.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on dNS requests with unusually large query lengths for the record type being requested so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.697",
              "n": "DNS Query Length With High Standard Deviation",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies DNS queries with unusually large lengths by computing the standard deviation of query lengths and filtering those exceeding two times the standard deviation. It leverages DNS query data from the Network_Resolution data model, focusing on the length of the domain names being resolved. This activity is significant as unusually long DNS queries can indicate data exfiltration or command-and-control communication attempts. If confirmed malicious, this activity could …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$host$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's possible there can be long domain names that are legitimate.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048.003"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"DNS Query Length With High Standard Deviation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1048.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"DNS Query Length With High Standard Deviation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's possible there can be long domain names that are legitimate.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \" Query Length With High Standard Deviation\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.698",
              "n": "HTTP C2 Framework User Agent",
              "c": "high",
              "f": "intermediate",
              "v": "This Splunk query analyzes web logs to identify and categorize user agents, detecting various types of c2 frameworks. This activity can signify malicious actors attempting to interact with hosts on the network using known default configurations of command and control tools.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Filtering may be required in some instances depending on legacy system usage, filter as needed.",
              "refs": "https://github.com/BC-SECURITY/Malleable-C2-Profiles, https://www.keysight.com/blogs/en/tech/nwvs/2021/07/28/koadic-c3-command-control-decoded, https://github.com/mthcht/awesome-lists/blob/main/Lists/suspicious_http_user_agents_list.csv",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"HTTP C2 Framework User Agent\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"HTTP C2 Framework User Agent\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Filtering may be required in some instances depending on legacy system usage, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"HTTP C2 Framework User Agent\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.699",
              "n": "HTTP PUA User Agent",
              "c": "high",
              "f": "intermediate",
              "v": "This Splunk query analyzes web logs to identify and categorize user agents, detecting various types of unwanted applications. This activity can signify possible compromised hosts on the network.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Noise and false positive can be seen if these programs are allowed to be used within corporate network. In this case, a filter is needed.",
              "refs": "https://github.com/mthcht/awesome-lists/blob/main/Lists/suspicious_http_user_agents_list.csv",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"HTTP PUA User Agent\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"HTTP PUA User Agent\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Noise and false positive can be seen if these programs are allowed to be used within corporate network. In this case, a filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"HTTP PUA User Agent\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.700",
              "n": "HTTP RMM User Agent",
              "c": "high",
              "f": "intermediate",
              "v": "This Splunk query analyzes web logs to identify and categorize user agents, detecting various types of Remote Monitoring and Mangement applications. This activity can signify possible compromised hosts on the network.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to HTTP user agent entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Noise and false positive can be seen if these programs are allowed to be used within corporate network. In this case, a filter is needed.",
              "refs": "https://github.com/mthcht/awesome-lists/blob/main/Lists/suspicious_http_user_agents_list.csv",
              "mitre": [
                "T1071.001",
                "T1219"
              ],
              "dtype": "http_user_agent",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"HTTP RMM User Agent\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to HTTP user agent entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.001, T1219. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"HTTP RMM User Agent\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Noise and false positive can be seen if these programs are allowed to be used within corporate network. In this case, a filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"HTTP RMM User Agent\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.701",
              "n": "Large Volume of DNS ANY Queries",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies a large volume of DNS ANY queries, which may indicate a DNS amplification attack. It leverages the Network_Resolution data model to count DNS queries of type \"ANY\" directed to specific destinations. This activity is significant because DNS amplification attacks can overwhelm network resources, leading to Denial of Service (DoS) conditions. If confirmed malicious, this activity could disrupt services, degrade network performance, and potentially be part of a larg…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Network_Resolution\n      WHERE nodename=DNS \"DNS.message_type\"=\"QUERY\" \"DNS.record_type\"=\"ANY\"\n      BY \"DNS.dest\"\n    | `drop_dm_object_name(\"DNS\")`\n    | where count>200\n    | `large_volume_of_dns_any_queries_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate ANY requests may trigger this search, however it is unusual to see a large volume of them under typical circumstances. You may modify the threshold in the search to better suit your environment.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1498.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Large Volume of DNS ANY Queries\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1498.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Large Volume of DNS ANY Queries\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate ANY requests may trigger this search, however it is unusual to see a large volume of them under typical circumstances. You may modify the threshold in the search to better suit your environment.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on a large volume of ANY queries, which may indicate a amplification attack so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.702",
              "n": "Ngrok Reverse Proxy on Network",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects DNS queries to common Ngrok domains, indicating potential use of the Ngrok reverse proxy tool. It leverages the Network Resolution datamodel to identify queries to domains such as \"*.ngrok.com\" and \"*.ngrok.io\". While Ngrok usage is not inherently malicious, it has been increasingly adopted by adversaries for covert communication and data exfiltration. If confirmed malicious, this activity could allow attackers to bypass network defenses, establish persistent conne…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be present based on organizations that allow the use of Ngrok. Filter or monitor as needed.",
              "refs": "https://www.cisa.gov/uscert/sites/default/files/publications/aa22-320a_joint_csa_iranian_government-sponsored_apt_actors_compromise_federal%20network_deploy_crypto%20miner_credential_harvester.pdf",
              "mitre": [
                "T1572",
                "T1090",
                "T1102"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ngrok Reverse Proxy on Network\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1572, T1090, T1102. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ngrok Reverse Proxy on Network\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be present based on organizations that allow the use of Ngrok. Filter or monitor as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Ngrok Reverse Proxy on Network\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.703",
              "n": "Prohibited Network Traffic Allowed",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects instances where network traffic, identified by port and transport layer protocol as prohibited in the \"lookup_interesting_ports\" table, is allowed. It uses the Network_Traffic data model to cross-reference traffic data against predefined security policies. This activity is significant for a SOC as it highlights potential misconfigurations or policy violations that could lead to unauthorized access or data exfiltration. If confirmed malicious, this could allow attac…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Prohibited Network Traffic Allowed\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1048. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Prohibited Network Traffic Allowed\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Prohibited Network Traffic Allowed\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.704",
              "n": "Protocols passing authentication in cleartext",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of cleartext protocols that risk leaking sensitive information. It detects network traffic on legacy protocols such as Telnet (port 23), POP3 (port 110), IMAP (port 143), and non-anonymous FTP (port 21). The detection leverages the Network_Traffic data model to identify TCP traffic on these ports. Monitoring this activity is crucial as it can expose credentials and other sensitive data to interception. If confirmed malicious, attackers could capture auth…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Some networks may use leverage clear text protocols such as kerberos, FTP or telnet servers. Apply the necessary exclusions where needed.",
              "refs": "https://www.rackaid.com/blog/secure-your-email-and-file-transfers/, https://www.infosecmatter.com/capture-passwords-using-wireshark/",
              "mitre": [],
              "dtype": "ip_address",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Protocols passing authentication in cleartext\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Protocols passing authentication in cleartext\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Some networks may use leverage clear text protocols such as kerberos, FTP or telnet servers. Apply the necessary exclusions where needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
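For orientation, the underlying ESCU detection logic resembles the following Network_Traffic data model query. This is a simplified sketch based on the port list in the description above, not the authoritative search (which lives in Content Management):

```spl
| tstats summariesonly=true count from datamodel=Network_Traffic
    where All_Traffic.transport=tcp
      AND (All_Traffic.dest_port=23 OR All_Traffic.dest_port=110
           OR All_Traffic.dest_port=143 OR All_Traffic.dest_port=21)
    by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port
| `drop_dm_object_name("All_Traffic")`
| sort -count
```

Running this ad hoc is also a quick way to gauge how many exclusions you will need before enabling the detection.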
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Protocols passing authentication in cleartext\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.705",
              "n": "SMB Traffic Spike",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects spikes in Server Message Block (SMB) traffic connections, which are used for sharing files and resources between computers. It leverages network traffic logs to monitor connections on ports 139 and 445, and SMB application usage. By calculating the average and standard deviation of SMB connections over the past 70 minutes, it identifies sources exceeding two standard deviations from the average. This activity is significant as it may indicate potential SMB-based at…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Network_Traffic\n      WHERE All_Traffic.dest_port=139\n        OR\n        All_Traffic.dest_port=445\n        OR\n        All_Traffic.app=smb\n      BY _time span=1h, All_Traffic.src\n    | `drop_dm_object_name(\"All_Traffic\")`\n    | eventstats max(_time) as maxtime\n    | stats count as num_data_samples max(eval(if(_time >= relative_time(maxtime, \"-70m@m\"), count, null))) as count avg(eval(if(_time<relative_time(maxtime, \"-70m@m\"), count, null))) as avg stdev(eval(if(_time<relative_time(maxtime, \"-70m@m\"), count, null))) as stdev\n      BY src\n    | eval upperBound=(avg+stdev*2), isOutlier=if(count > upperBound AND num_data_samples >=50, 1, 0)\n    | where isOutlier=1\n    | table src count\n    | `smb_traffic_spike_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "A file server may experience high-demand loads that could cause this analytic to trigger.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1021.002"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SMB Traffic Spike\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1021.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SMB Traffic Spike\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: A file server may experience high-demand loads that could cause this analytic to trigger.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
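The trailing `smb_traffic_spike_filter` macro in the detection SPL is the intended tuning point. A minimal sketch of a lookup-based exclusion for the macro body, assuming a hypothetical `known_file_servers.csv` lookup with a `src` column (use whatever asset list you maintain):

```spl
| search NOT [| inputlookup known_file_servers.csv | fields src]
```

Define this under Settings → Advanced search → Search macros so every scheduled run of the detection applies the exclusion consistently.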
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on spikes in Server Message Block (SMB) traffic connections, which are used for sharing files and resources between computers so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.706",
              "n": "Suspicious Process DNS Query Known Abuse Web Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious process making DNS queries to known, abused text-paste web services, VoIP, instant messaging, and digital distribution platforms. It leverages Sysmon EventID 22 logs to identify queries from processes like cmd.exe, powershell.exe, and others. This activity is significant as it may indicate an attempt to download malicious files, a common initial access technique. If confirmed malicious, this could lead to unauthorized code execution, data exfiltration,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "This detection relies on sysmon logs with the Event ID 22, DNS Query. We suggest you run this detection at least once a day over the last 14 days.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Noise and false positive can be seen if the following instant messaging is allowed to use within corporate network. In this case, a filter is needed.",
              "refs": "https://urlhaus.abuse.ch/url/1798923/, https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/",
              "mitre": [
                "T1059.005"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Process DNS Query Known Abuse Web Services\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.005. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Process DNS Query Known Abuse Web Services\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Noise and false positive can be seen if the following instant messaging is allowed to use within corporate network. In this case, a filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
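The detection consumes Sysmon Event ID 22 (DNS query) records. The sketch below shows the general shape of such a search; the sourcetype name, process list, and domain list are illustrative assumptions, not the shipped ESCU logic:

```spl
sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=22
    Image IN ("*\\cmd.exe", "*\\powershell.exe", "*\\wscript.exe")
    QueryName IN ("*pastebin.com", "*discord.com", "*telegram.org")
| stats count min(_time) as firstTime max(_time) as lastTime
  by Computer, Image, QueryName
```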
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Suspicious Process Query Known Abuse Web Services\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.707",
              "n": "Windows Abused Web Services",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects a suspicious process making DNS queries to known, abused web services such as text-paste sites, VoIP, secure tunneling, instant messaging, and digital distribution platforms. This detection leverages Sysmon logs with Event ID 22, focusing on specific query names. This activity is significant as it may indicate an adversary attempting to download malicious files, a common initial access technique. If confirmed malicious, this could lead to unauthorized code executio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 22",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 22 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Noise and false positive can be seen if the following instant messaging is allowed to use within corporate network. In this case, a filter is needed.",
              "refs": "https://malpedia.caad.fkie.fraunhofer.de/details/win.njrat",
              "mitre": [
                "T1102"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Abused Web Services\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 22. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1102. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Abused Web Services\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Noise and false positive can be seen if the following instant messaging is allowed to use within corporate network. In this case, a filter is needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
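Before enabling, confirm Sysmon Event ID 22 telemetry is actually arriving. A quick check, assuming the standard Sysmon sourcetype name (adjust to match your TA configuration):

```spl
sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=22 earliest=-24h
| stats count by host
| sort -count
```

Hosts missing from the output either lack the Sysmon DNS-query configuration or are not forwarding the channel.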
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Windows Abused Web Services\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.708",
              "n": "Windows AD Replication Service Traffic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies unexpected Active Directory replication traffic from non-domain controller sources. It leverages data from the Network Traffic datamodel, specifically looking for applications related to AD replication. This activity is significant because AD replication traffic should typically only occur between domain controllers. Detection of such traffic from other sources may indicate malicious activities like DCSync or DCShadow, which are used for credential dumping. If c…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count values(All_Traffic.transport) as transport values(All_Traffic.user) as user values(All_Traffic.src_category) as src_category values(All_Traffic.dest_category) as dest_category min(_time) as firstTime max(_time) as lastTime FROM datamodel=Network_Traffic\n      WHERE All_Traffic.app IN (\"ms-dc-replication\",\"*drsr*\",\"ad drs\")\n      BY All_Traffic.src All_Traffic.dest All_Traffic.app\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `drop_dm_object_name(\"All_Traffic\")`\n    | `windows_ad_replication_service_traffic_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "New domain controllers or certian scripts run by administrators.",
              "refs": "https://adsecurity.org/?p=1729, https://attack.mitre.org/techniques/T1003/006/, https://attack.mitre.org/techniques/T1207/",
              "mitre": [
                "T1003.006",
                "T1207"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Replication Service Traffic\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1003.006, T1207. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Replication Service Traffic\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: New domain controllers or certian scripts run by administrators.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
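Because legitimate AD replication should only occur between domain controllers, tuning typically reduces to excluding your DC inventory. A sketch of that exclusion, assuming a hypothetical `domain_controllers.csv` lookup with an `ip` column:

```spl
| tstats summariesonly=true count from datamodel=Network_Traffic
    where All_Traffic.app IN ("ms-dc-replication", "*drsr*", "ad drs")
    by All_Traffic.src, All_Traffic.dest, All_Traffic.app
| `drop_dm_object_name("All_Traffic")`
| search NOT [| inputlookup domain_controllers.csv | rename ip as src | fields src]
```

Any remaining results represent replication-style traffic from a source you do not recognize as a DC and deserve immediate investigation.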
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on unexpected Active Directory replication traffic from non-domain controller sources so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.709",
              "n": "Windows AD Rogue Domain Controller Network Activity",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies unauthorized replication RPC calls from non-domain controller devices. It leverages Zeek wire data to detect specific RPC operations like DrsReplicaAdd and DRSGetNCChanges, filtering out legitimate domain controllers. This activity is significant as it may indicate an attempt to introduce a rogue domain controller, which can compromise the integrity of the Active Directory environment. If confirmed malicious, this could allow attackers to manipulate directory da…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`zeek_rpc` DrsReplicaAdd OR DRSGetNCChanges\n      | where NOT (dest_category=\"Domain Controller\") OR NOT (src_category=\"Domain Controller\")\n      | fillnull value=\"Unknown\" src_category, dest_category\n      | table _time endpoint operation src src_category dest dest_category\n      | `windows_ad_rogue_domain_controller_network_activity_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://adsecurity.org/?p=1729",
              "mitre": [
                "T1207"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows AD Rogue Domain Controller Network Activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1207. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows AD Rogue Domain Controller Network Activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
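This detection depends on ES asset categorization: hosts tagged with the category "Domain Controller" are filtered out by the `src_category` / `dest_category` clauses in the SPL. Verify your asset framework actually carries that category before enabling; the lookup name below is the ES-managed default and may differ in your deployment:

```spl
| inputlookup asset_lookup_by_str
| search category="Domain Controller"
| table ip, nt_host, dns, category
```

If this returns nothing, populate asset categories first; otherwise every legitimate DC will appear as a rogue source.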
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on unauthorized replication RPC calls from non-domain controller devices so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.710",
              "n": "Citrix ADC and Gateway CitrixBleed 2 Memory Disclosure",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Citrix ADC and Gateway CitrixBleed 2 Memory Disclosure. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$dest$\") starthoursago=168 | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://support.citrix.com/support-home/kbsearch/article?articleNumber=CTX693420, https://www.netscaler.com/blog/news/critical-security-updates-for-netscaler-netscaler-gateway-and-netscaler-console/, https://github.com/mingshenhk/CitrixBleed-2-CVE-2025-5777-PoC-, https://horizon3.ai/attack-research/attack-blogs/cve-2025-5777-citrixbleed-2-write-up-maybe/, https://labs.watchtowr.com/how-much-more-must-we-bleed-citrix-netscaler-memory-disclosure-citrixbleed-2-cve-2025-5777/, https://github.com/projectdiscovery/nuclei-templates/blob/main/http/cves/2025/CVE-2025-5777.yaml",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Citrix ADC and Gateway CitrixBleed 2 Memory Disclosure\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Citrix ADC and Gateway CitrixBleed 2 Memory Disclosure\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
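The data source here is Suricata. To confirm that alert events (including any CVE-2025-5777 signatures you have deployed) are being indexed, a quick check follows; the index and sourcetype names are assumptions, so rename them to match your ingestion:

```spl
index=suricata sourcetype=suricata event_type=alert earliest=-24h
| stats count by alert.signature
| sort -count
```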
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Citrix ADC and Gateway CitrixBleed 2 Memory Disclosure\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.711",
              "n": "Detect Remote Access Software Usage URL",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of known remote access software within the environment. It leverages network logs mapped to the Web data model, identifying specific URLs and user agents associated with remote access tools like AnyDesk, GoToMyPC, LogMeIn, and TeamViewer. This activity is significant as adversaries often use these utilities to maintain unauthorized remote access. If confirmed malicious, this could allow attackers to control systems remotely, exfiltrate data, or furthe…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Palo Alto Network Threat",
              "q": "| from datamodel:Web | search src=$src$ url_domain=$url_domain$",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to detection signature entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Palo Alto Network Threat ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible that legitimate remote access software is used within the environment. Ensure that the lookup is reviewed and updated with any additional remote access software that is used within the environment. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content",
              "refs": "https://attack.mitre.org/techniques/T1219/, https://thedfirreport.com/2022/08/08/bumblebee-roasts-its-way-to-domain-admin/, https://thedfirreport.com/2022/11/28/emotet-strikes-again-lnk-file-leads-to-domain-wide-ransomware/",
              "mitre": [
                "T1219"
              ],
              "dtype": "signature",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Remote Access Software Usage URL\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to detection signature entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Palo Alto Network Threat. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1219. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Remote Access Software Usage URL\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible that legitimate remote access software is used within the environment. Ensure that the lookup is reviewed and updated with any additional remote access software that is used within the environment. Known false positives can be added to the remote_access_software_usage_exception.csv lookup to globally suppress these situations across all remote access content\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the execution of known remote access software within the environment so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
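Note that the `q` field above is the investigation drilldown, not the detection logic. For orientation, the sketch below shows the general shape of a Web data model search for remote access tooling. The URL substrings are illustrative placeholders only; the packaged ESCU detection drives its matching from a maintained lookup of remote access domains and user agents, so treat this strictly as a starting point.

```spl
| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime
  from datamodel=Web.Web
  where Web.url IN ("*anydesk*", "*teamviewer*", "*logmein*", "*gotomypc*")
  by Web.src Web.dest Web.url Web.http_user_agent
| `drop_dm_object_name("Web")`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
```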
            {
              "i": "10.7.712",
              "n": "HTTP Duplicated Header",
              "c": "high",
              "f": "intermediate",
              "v": "Detects when a request has more than one of the same header. This is commonly used in request smuggling and other web based attacks. HTTP Request Smuggling exploits inconsistencies in how front-end and back-end servers parse HTTP requests by using ambiguous or malformed headers to hide malicious requests within legitimate ones. Attackers leverage duplicate headers, particularly Content-Length and Transfer-Encoding, to cause different servers in the chain to disagree on where one request ends and…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are not expected, however, monitor, filter, and tune as needed based on organization log sources.",
              "refs": "https://portswigger.net/web-security/request-smuggling, https://portswigger.net/research/http1-must-die, https://www.vaadata.com/blog/what-is-http-request-smuggling-exploitations-and-security-best-practices/, https://www.securityweek.com/new-http-request-smuggling-attacks-impacted-cdns-major-orgs-millions-of-websites/",
              "mitre": [
                "T1071.001",
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"HTTP Duplicated Header\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.001, T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"HTTP Duplicated Header\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are not expected, however, monitor, filter, and tune as needed based on organization log sources.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"HTTP Duplicated Header\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
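The duplicated-header signal can also be approximated ad hoc against raw Suricata events, which is useful when validating the detection before enabling it. A minimal sketch, assuming EVE JSON HTTP logging with `dump-all-headers` enabled in `suricata.yaml`, and assuming the events land in `index=suricata`; adjust the index and field paths to match your TA's extractions.

```spl
index=suricata event_type=http
| spath path=http.request_headers{}.name output=header_names
| eval dup_headers=mvcount(header_names) - mvcount(mvdedup(header_names))
| where dup_headers > 0
| table _time src_ip dest_ip header_names dup_headers
```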
            {
              "i": "10.7.713",
              "n": "HTTP Request to Reserved Name on IIS Server",
              "c": "high",
              "f": "intermediate",
              "v": "Detects attempts to exploit a request smuggling technique against IIS that leverages a Windows quirk where requests for reserved Windows device names such as \"/con\" trigger an early server response before the request body is received. When combined with a Content-Length desynchronization, this behavior can lead to a parsing confusion between frontend and backend.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to IP address entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives are not expected on IIS servers, as the detection is based on the presence of web requests to reserved names, which is not a common page to be accessed by legitimate users. Modify the query as needed to reduce false positives or hunt for additional indicators of compromise.",
              "refs": "https://portswigger.net/web-security/request-smuggling, https://portswigger.net/research/http1-must-die, https://www.vaadata.com/blog/what-is-http-request-smuggling-exploitations-and-security-best-practices/, https://www.securityweek.com/new-http-request-smuggling-attacks-impacted-cdns-major-orgs-millions-of-websites/",
              "mitre": [
                "T1071.001",
                "T1190"
              ],
              "dtype": "ip_address",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"HTTP Request to Reserved Name on IIS Server\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to IP address entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1071.001, T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"HTTP Request to Reserved Name on IIS Server\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives are not expected on IIS servers, as the detection is based on the presence of web requests to reserved names, which is not a common page to be accessed by legitimate users. Modify the query as needed to reduce false positives or hunt for additional indicators of compromise.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"HTTP Request to Reserved Name on IIS Server\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
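For ad-hoc scoping outside the Correlation Search, the sketch below looks for requests to a few reserved Windows device names via the Web data model. The name list is an illustrative subset (Windows also reserves prn, com1 through com9, lpt1 through lpt9, and others) and is not the packaged detection's full logic.

```spl
| tstats `security_content_summariesonly` count
  from datamodel=Web.Web
  where Web.url IN ("*/con", "*/con/*", "*/aux", "*/aux/*", "*/nul", "*/nul/*")
  by Web.src Web.dest Web.url Web.http_method Web.status
| `drop_dm_object_name("Web")`
| sort - count
```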
            {
              "i": "10.7.714",
              "n": "Multiple Archive Files Http Post Traffic",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the high-frequency exfiltration of archive files via HTTP POST requests. It leverages HTTP stream logs to identify specific archive file headers within the request body. This activity is significant as it often indicates data exfiltration by APTs or trojan spyware after data collection. If confirmed malicious, this behavior could lead to the unauthorized transfer of sensitive data to an attacker’s command and control server, potentially resulting in severe data bre…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Splunk Stream HTTP",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Splunk Stream HTTP ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Normal archive transfer via HTTP protocol may trip this detection.",
              "refs": "https://attack.mitre.org/techniques/T1560/001/, https://www.mandiant.com/resources/apt39-iranian-cyber-espionage-group-focused-on-personal-information, https://www.microsoft.com/security/blog/2021/01/20/deep-dive-into-the-solorigate-second-stage-activation-from-sunburst-to-teardrop-and-raindrop/",
              "mitre": [
                "T1048.003"
              ],
              "dtype": "url",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Multiple Archive Files Http Post Traffic\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Splunk Stream HTTP. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1048.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Multiple Archive Files Http Post Traffic\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Normal archive transfer via HTTP protocol may trip this detection.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"Multiple Archive Files Http Post Traffic\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
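A minimal sketch loosely patterned on this detection's approach, assuming Splunk Stream is capturing HTTP request bodies into the `form_data` field of `stream:http` events (body capture is configurable in the Stream app). The magic-byte checks ("PK" for ZIP, "Rar!" for RAR) and the count threshold are illustrative and should be tuned locally.

```spl
sourcetype="stream:http" http_method=POST
| eval hdr2=substr(form_data,1,2), hdr4=substr(form_data,1,4)
| where hdr2="PK" OR hdr4="Rar!"
| stats count min(_time) as firstTime max(_time) as lastTime by src_ip dest_ip uri_path http_user_agent
| where count > 20
```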
            {
              "i": "10.7.715",
              "n": "PaperCut NG Remote Web Access Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies PaperCut NG Remote Web Access Attempt. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on Web traffic that include fields relevant for traffic into the `Web` datamodel.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present, filter as needed.",
              "refs": "https://www.cisa.gov/news-events/alerts/2023/05/11/cisa-and-fbi-release-joint-advisory-response-active-exploitation-papercut-vulnerability, https://www.papercut.com/kb/Main/PO-1216-and-PO-1219, https://www.horizon3.ai/papercut-cve-2023-27350-deep-dive-and-indicators-of-compromise/, https://www.bleepingcomputer.com/news/security/hackers-actively-exploit-critical-rce-bug-in-papercut-servers/, https://www.huntress.com/blog/critical-vulnerabilities-in-papercut-print-management-software",
              "mitre": [
                "T1190",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PaperCut NG Remote Web Access Attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PaperCut NG Remote Web Access Attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present, filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add these signals to the risk story for \"PaperCut NG Remote Web Access Attempt\" so the team can spot abuse early without getting lost in one-off alerts that lack context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
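Because this detection depends entirely on web traffic being normalized into the `Web` data model, confirm which feeds actually populate it before enabling the search. The check below uses only standard data model fields:

```spl
| tstats count from datamodel=Web.Web by index sourcetype
| sort - count
```

If your proxy or IDS sourcetypes do not appear here, fix the CIM mapping in the relevant Technology Add-on before troubleshooting the detection itself.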
            {
              "i": "10.7.716",
              "n": "SAP NetWeaver Visual Composer Exploitation Attempt",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies SAP NetWeaver Visual Composer Exploitation Attempt. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Suricata",
              "q": "| tstats count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Web.Web\n      WHERE (\n            Web.url IN (\"/CTCWebService/CTCWebServiceBean\", \"/VisualComposer/services/DesignTimeService\", \"/ctc/CTCWebService/CTCWebServiceBean\")\n        )\n        AND Web.http_method IN (\"HEAD\", \"POST\") AND Web.status=200\n      BY Web.src, Web.dest, Web.http_method,\n         Web.url, Web.http_user_agent, Web.url_length,\n         sourcetype\n    | `drop_dm_object_name(\"Web\")`\n    | eval action=case(http_method=\"HEAD\", \"Recon/Probe\", http_method=\"POST\", \"Possible Exploitation\")\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | table firstTime, lastTime, src, dest, http_method, action, url, user_agent, url_length, sourcetype\n    | `sap_netweaver_visual_composer_exploitation_attempt_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Suricata ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://onapsis.com/blog/active-exploitation-of-sap-vulnerability-cve-2025-31324/, https://reliaquest.com/blog/threat-spotlight-reliaquest-uncovers-vulnerability-behind-sap-netweaver-compromise/, https://www.rapid7.com/blog/post/2025/04/28/etr-active-exploitation-of-sap-netweaver-visual-composer-cve-2025-31324/",
              "mitre": [
                "T1190"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SAP NetWeaver Visual Composer Exploitation Attempt\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Suricata. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SAP NetWeaver Visual Composer Exploitation Attempt\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on SAP NetWeaver Visual Composer Exploitation Attempt so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
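When the hunting search surfaces a suspicious source, a natural next step is to pivot on that entity's full web activity. A minimal follow-on sketch, where `<suspicious_src>` is a placeholder for a value taken from the hunt results:

```spl
| tstats count from datamodel=Web.Web where Web.src="<suspicious_src>"
  by Web.url Web.http_method Web.status
| `drop_dm_object_name("Web")`
| sort - count
```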
            {
              "i": "10.7.717",
              "n": "SQL Injection with Long URLs",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects long URLs containing multiple SQL commands, indicating a potential SQL injection attack. This detection leverages web traffic data, specifically targeting web server destinations with URLs longer than 1024 characters or HTTP user agents longer than 200 characters. SQL injection is significant as it allows attackers to manipulate a web application's database, potentially leading to unauthorized data access or modification. If confirmed malicious, this activity could…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Web\n      WHERE Web.dest_category=web_server\n        AND\n        (Web.url_length > 1024\n        OR\n        Web.http_user_agent_length > 200)\n      BY Web.src Web.dest Web.url\n         Web.url_length Web.http_user_agent\n    | `drop_dm_object_name(\"Web\")`\n    | eval url=lower(url)\n    | eval num_sql_cmds=mvcount(split(url, \"alter%20table\")) + mvcount(split(url, \"between\")) + mvcount(split(url, \"create%20table\")) + mvcount(split(url, \"create%20database\")) + mvcount(split(url, \"create%20index\")) + mvcount(split(url, \"create%20view\")) + mvcount(split(url, \"delete\")) + mvcount(split(url, \"drop%20database\")) + mvcount(split(url, \"drop%20index\")) + mvcount(split(url, \"drop%20table\")) + mvcount(split(url, \"exists\")) + mvcount(split(url, \"exec\")) + mvcount(split(url, \"group%20by\")) + mvcount(split(url, \"having\")) + mvcount(split(url, \"insert%20into\")) + mvcount(split(url, \"inner%20join\")) + mvcount(split(url, \"left%20join\")) + mvcount(split(url, \"right%20join\")) + mvcount(split(url, \"full%20join\")) + mvcount(split(url, \"select\")) + mvcount(split(url, \"distinct\")) + mvcount(split(url, \"select%20top\")) + mvcount(split(url, \"union\")) + mvcount(split(url, \"xp_cmdshell\")) - 24\n    | where num_sql_cmds > 3\n    | `sql_injection_with_long_urls_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It's possible that legitimate traffic will have long URLs or long user agent strings and that common SQL commands may be found within the URL. Please investigate as appropriate.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SQL Injection with Long URLs\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SQL Injection with Long URLs\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It's possible that legitimate traffic will have long URLs or long user agent strings and that common SQL commands may be found within the URL. Please investigate as appropriate.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on long URLs containing multiple SQL commands, indicating a potential SQL injection attack so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
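The trailing `sql_injection_with_long_urls_filter` macro is the intended hook for local suppression; ESCU filter macros ship with a default definition of `search *`. A minimal replacement definition that excludes known scanner sources, assuming a hypothetical `approved_web_scanners.csv` lookup with a `src` column:

```spl
search NOT [| inputlookup approved_web_scanners.csv | fields src]
```

Set this as the macro definition under Settings → Advanced search → Search macros (or in macros.conf) so the exclusion applies without editing the detection itself.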
            {
              "i": "10.7.718",
              "n": "Supernova Webshell",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the presence of the Supernova webshell, used in the SUNBURST attack, by identifying specific patterns in web URLs. The detection leverages Splunk to search for URLs containing \"*logoimagehandler.ashx*codes*\", \"*logoimagehandler.ashx*clazz*\", \"*logoimagehandler.ashx*method*\", and \"*logoimagehandler.ashx*args*\". This activity is significant as it indicates potential unauthorized access and arbitrary code execution on a compromised system. If confirmed malicious, this…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Web.Web\n      WHERE web.url=*logoimagehandler.ashx*codes*\n        OR\n        Web.url=*logoimagehandler.ashx*clazz*\n        OR\n        Web.url=*logoimagehandler.ashx*method*\n        OR\n        Web.url=*logoimagehandler.ashx*args*\n      BY Web.src Web.dest Web.url\n         Web.vendor_product Web.user Web.http_user_agent\n         _time span=1s\n    | `supernova_webshell_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There might be false positives associted with this detection since items like args as a web argument is pretty generic.",
              "refs": "https://www.splunk.com/en_us/blog/security/detecting-supernova-malware-solarwinds-continued.html, https://www.guidepointsecurity.com/blog/supernova-solarwinds-net-webshell-analysis/",
              "mitre": [
                "T1505.003",
                "T1133"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Supernova Webshell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1505.003, T1133. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Supernova Webshell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There might be false positives associted with this detection since items like args as a web argument is pretty generic.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
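\n\nAppendix: Scoped Data Check (sketch)\n\nBeyond the generic validation above, a narrower check can confirm the Web data model actually contains the URL patterns this detection keys on. A non-authoritative sketch; it assumes your proxy or IIS feed is CIM-mapped into the Web data model:\n\n```spl\n| tstats count from datamodel=Web.Web where Web.url=\"*logoimagehandler.ashx*\" by Web.src Web.dest Web.url\n```\n\nZero results over a broad time range may mean no Supernova activity, but it may also mean a data onboarding gap; verify with a URL pattern you know exists before concluding the former.",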
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the presence of the Supernova webshell, used in the SUNBURST attack, by identifying specific patterns in web URLs so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.7.719",
              "n": "Windows IIS Server PSWA Console Access",
              "c": "high",
              "f": "intermediate",
              "v": "This analytic detects access attempts to the PowerShell Web Access (PSWA) console on Windows IIS servers. It monitors web traffic for requests to PSWA-related URIs, which could indicate legitimate administrative activity or potential unauthorized access attempts. By tracking source IP, HTTP status, URI path, and HTTP method, it helps identify suspicious patterns or brute-force attacks targeting PSWA. This detection is crucial for maintaining the security of remote PowerShell management interface…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows IIS",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Web\n      WHERE Web.dest IN (\"/pswa/*\")\n      BY Web.src Web.status Web.uri_path\n         Web.dest Web.http_method Web.uri_query\n    | `drop_dm_object_name(\"Web\")`\n    | `security_content_ctime(firstTime)`\n    | `security_content_ctime(lastTime)`\n    | `windows_iis_server_pswa_console_access_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Windows IIS ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may occur if legitimate PSWA processes are used for administrative tasks. Careful review of the logs is recommended to distinguish between legitimate and malicious activity.",
              "refs": "https://www.cisa.gov/news-events/cybersecurity-advisories/aa24-241a",
              "mitre": [
                "T1190"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows IIS Server PSWA Console Access\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows IIS. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows IIS Server PSWA Console Access\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may occur if legitimate PSWA processes are used for administrative tasks. Careful review of the logs is recommended to distinguish between legitimate and malicious activity.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on this analytic detects access attempts to the PowerShell Web Access (PSWA) console on Windows IIS servers so unusual activity is visible early — before it spreads or hides in normal noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 26.4,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 718,
            "none": 0
          }
        },
        {
          "i": "10.8",
          "n": "Certificate & PKI Management",
          "u": [
            {
              "i": "10.8.1",
              "n": "Certificate Expiry Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Expired certificates cause service outages, authentication failures, and security warnings. Proactive monitoring is the simplest prevention.",
              "t": "Custom scripted input",
              "d": "Certificate inventory scans (openssl, certutil, CT logs)",
              "q": "index=certificates sourcetype=\"cert_inventory\"\n| eval days_to_expiry=round((cert_not_after_epoch-now())/86400)\n| where days_to_expiry < 90\n| table cn, san, issuer, days_to_expiry, host, port\n| sort days_to_expiry",
              "m": "Deploy scripted input scanning all known endpoints (HTTPS, LDAPS, SMTPS, etc.) daily. Parse certificate metadata. Alert at 90/60/30/7 day thresholds with escalating severity. Maintain endpoint inventory for comprehensive coverage.",
              "z": "Table (certs with expiry countdown), Single value (certs expiring within 30d), Status grid (cert × expiry status), Bar chart (certs by expiry bucket).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input.\n• Ensure the following data sources are available: Certificate inventory scans (openssl, certutil, CT logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy scripted input scanning all known endpoints (HTTPS, LDAPS, SMTPS, etc.) daily. Parse certificate metadata. Alert at 90/60/30/7 day thresholds with escalating severity. Maintain endpoint inventory for comprehensive coverage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=certificates sourcetype=\"cert_inventory\"\n| eval days_to_expiry=round((cert_not_after_epoch-now())/86400)\n| where days_to_expiry < 90\n| table cn, san, issuer, days_to_expiry, host, port\n| sort days_to_expiry\n```\n\nUnderstanding this SPL\n\n**Certificate Expiry Monitoring** — Expired certificates cause service outages, authentication failures, and security warnings. Proactive monitoring is the simplest prevention.\n\nDocumented **Data sources**: Certificate inventory scans (openssl, certutil, CT logs). **App/TA** (typical add-on context): Custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: certificates; **sourcetype**: cert_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=certificates, sourcetype=\"cert_inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_to_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_expiry < 90` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Certificate Expiry Monitoring**): table cn, san, issuer, days_to_expiry, host, port\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (certs with expiry countdown), Single value (certs expiring within 30d), Status grid (cert × expiry status), Bar chart (certs by expiry bucket).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md
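\n\nOptional: escalating severity tiers (sketch)\n\nStep 1 calls for alerting at 90/60/30/7-day thresholds with escalating severity. A minimal sketch of that bucketing, assuming the same `cert_inventory` fields used above:\n\n```spl\nindex=certificates sourcetype=\"cert_inventory\"\n| eval days_to_expiry=round((cert_not_after_epoch-now())/86400)\n| where days_to_expiry < 90\n| eval severity=case(days_to_expiry<=7, \"critical\", days_to_expiry<=30, \"high\", days_to_expiry<=60, \"medium\", true(), \"low\")\n| table cn, san, issuer, days_to_expiry, severity, host, port\n| sort days_to_expiry\n```\n\nSave one alert per tier, or a single alert that carries the severity field into the notification, so renewal urgency is obvious at a glance.",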
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track certificate lifetimes in Splunk so we renew in time and avoid outages and trust warnings for users and customers.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.2",
              "n": "Certificate Issuance Audit",
              "c": "high",
              "f": "beginner",
              "v": "Unauthorized certificate issuance from internal CAs can enable man-in-the-middle attacks. Audit trail supports compliance.",
              "t": "CA server log forwarding",
              "d": "CA audit logs (Microsoft AD CS, EJBCA, HashiCorp Vault PKI)",
              "q": "index=pki sourcetype=\"adcs:audit\"\n| search EventCode=4887\n| table _time, RequesterName, CertificateTemplate, SerialNumber, SubjectCN\n| sort -_time",
              "m": "Forward CA server audit logs to Splunk (Event ID 4887 for AD CS). Track all certificate issuance events. Alert on issuance from non-standard templates or by unauthorized requesters. Report on issuance volume and template usage.",
              "z": "Table (issued certificates), Timeline (issuance events), Bar chart (by template), Line chart (issuance volume trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CA server log forwarding.\n• Ensure the following data sources are available: CA audit logs (Microsoft AD CS, EJBCA, HashiCorp Vault PKI).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward CA server audit logs to Splunk (Event ID 4887 for AD CS). Track all certificate issuance events. Alert on issuance from non-standard templates or by unauthorized requesters. Report on issuance volume and template usage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pki sourcetype=\"adcs:audit\"\n| search EventCode=4887\n| table _time, RequesterName, CertificateTemplate, SerialNumber, SubjectCN\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Certificate Issuance Audit** — Unauthorized certificate issuance from internal CAs can enable man-in-the-middle attacks. Audit trail supports compliance.\n\nDocumented **Data sources**: CA audit logs (Microsoft AD CS, EJBCA, HashiCorp Vault PKI). **App/TA** (typical add-on context): CA server log forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pki; **sourcetype**: adcs:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pki, sourcetype=\"adcs:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Certificate Issuance Audit**): table _time, RequesterName, CertificateTemplate, SerialNumber, SubjectCN\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (issued certificates), Timeline (issuance events), Bar chart (by template), Line chart (issuance volume trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We review who requested which certificates from our internal CAs so surprise or stolen issuance does not go unnoticed in Splunk or at audit time.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.3",
              "n": "Weak Cipher / Key Detection",
              "c": "high",
              "f": "beginner",
              "v": "Certificates using weak algorithms (SHA-1, RSA <2048-bit) are vulnerable to attack. Detection ensures cryptographic standards compliance.",
              "t": "Custom scripted input",
              "d": "Certificate scan results",
              "q": "index=certificates sourcetype=\"cert_inventory\"\n| where key_size < 2048 OR signature_algorithm LIKE \"%sha1%\" OR signature_algorithm LIKE \"%md5%\"\n| table cn, host, port, key_size, signature_algorithm, issuer",
              "m": "Include key size and signature algorithm in certificate scans. Flag certificates using SHA-1, MD5, or RSA <2048-bit. Alert on new weak certificates. Track remediation progress as a compliance metric.",
              "z": "Table (weak certificates), Pie chart (algorithm distribution), Single value (weak cert count), Bar chart (by weakness type).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input.\n• Ensure the following data sources are available: Certificate scan results.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInclude key size and signature algorithm in certificate scans. Flag certificates using SHA-1, MD5, or RSA <2048-bit. Alert on new weak certificates. Track remediation progress as a compliance metric.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=certificates sourcetype=\"cert_inventory\"\n| where key_size < 2048 OR signature_algorithm LIKE \"%sha1%\" OR signature_algorithm LIKE \"%md5%\"\n| table cn, host, port, key_size, signature_algorithm, issuer\n```\n\nUnderstanding this SPL\n\n**Weak Cipher / Key Detection** — Certificates using weak algorithms (SHA-1, RSA <2048-bit) are vulnerable to attack. Detection ensures cryptographic standards compliance.\n\nDocumented **Data sources**: Certificate scan results. **App/TA** (typical add-on context): Custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: certificates; **sourcetype**: cert_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=certificates, sourcetype=\"cert_inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where key_size < 2048 OR signature_algorithm LIKE \"%sha1%\" OR signature_algorithm LIKE \"%md5%\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Weak Cipher / Key Detection**): table cn, host, port, key_size, signature_algorithm, issuer\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (weak certificates), Pie chart (algorithm distribution), Single value (weak cert count), Bar chart (by weakness type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We scan for weak ciphers and keys so we can modernize crypto before it becomes an easy target or compliance finding.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.4",
              "n": "Certificate Revocation Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Revocation activity indicates compromised or misused certificates. Tracking ensures revocations are processed and CRLs distributed.",
              "t": "CA server logs",
              "d": "CA audit logs (revocation events), CRL distribution point monitoring",
              "q": "index=pki sourcetype=\"adcs:audit\" EventCode=4889\n| table _time, RequesterName, SerialNumber, RevokeReason, SubjectCN\n| sort -_time",
              "m": "Forward CA revocation events (Event ID 4889). Monitor CRL publication and OCSP responder health. Alert on revocations for investigation. Track revocation reasons for security program improvement.",
              "z": "Table (revoked certificates), Timeline (revocation events), Bar chart (revocation reasons).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CA server logs.\n• Ensure the following data sources are available: CA audit logs (revocation events), CRL distribution point monitoring.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward CA revocation events (Event ID 4889). Monitor CRL publication and OCSP responder health. Alert on revocations for investigation. Track revocation reasons for security program improvement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pki sourcetype=\"adcs:audit\" EventCode=4889\n| table _time, RequesterName, SerialNumber, RevokeReason, SubjectCN\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Certificate Revocation Tracking** — Revocation activity indicates compromised or misused certificates. Tracking ensures revocations are processed and CRLs distributed.\n\nDocumented **Data sources**: CA audit logs (revocation events), CRL distribution point monitoring. **App/TA** (typical add-on context): CA server logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pki; **sourcetype**: adcs:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pki, sourcetype=\"adcs:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Certificate Revocation Tracking**): table _time, RequesterName, SerialNumber, RevokeReason, SubjectCN\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (revoked certificates), Timeline (revocation events), Bar chart (revocation reasons).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow revocation and trust status so we are not still trusting certificates that the CA or our policy has retired.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.5",
              "n": "CT Log Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Certificate Transparency logs reveal all publicly-issued certificates for your domains. Detects unauthorized issuance by rogue or compromised CAs.",
              "t": "Custom API input (crt.sh, CT log APIs)",
              "d": "Certificate Transparency log API",
              "q": "index=certificates sourcetype=\"ct_log\"\n| search NOT issuer IN (\"DigiCert*\",\"Let's Encrypt*\",\"Sectigo*\")\n| table _time, cn, issuer, serial, not_before, not_after\n| sort -_time",
              "m": "Poll CT log aggregators (crt.sh) for your domains daily. Maintain whitelist of approved issuers. Alert on certificates from unexpected CAs. Track issuance patterns for certificate lifecycle management.",
              "z": "Table (CT log entries), Timeline (issuance events), Bar chart (certs by issuer), Single value (unauthorized issuances).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input (crt.sh, CT log APIs).\n• Ensure the following data sources are available: Certificate Transparency log API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll CT log aggregators (crt.sh) for your domains daily. Maintain whitelist of approved issuers. Alert on certificates from unexpected CAs. Track issuance patterns for certificate lifecycle management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=certificates sourcetype=\"ct_log\"\n| search NOT issuer IN (\"DigiCert*\",\"Let's Encrypt*\",\"Sectigo*\")\n| table _time, cn, issuer, serial, not_before, not_after\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**CT Log Monitoring** — Certificate Transparency logs reveal all publicly-issued certificates for your domains. Detects unauthorized issuance by rogue or compromised CAs.\n\nDocumented **Data sources**: Certificate Transparency log API. **App/TA** (typical add-on context): Custom API input (crt.sh, CT log APIs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: certificates; **sourcetype**: ct_log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=certificates, sourcetype=\"ct_log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **CT Log Monitoring**): table _time, cn, issuer, serial, not_before, not_after\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (CT log entries), Timeline (issuance events), Bar chart (certs by issuer), Single value (unauthorized issuances).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch certificate transparency and issuance patterns so unexpected or fraudulent certificates are noticed early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.6",
              "n": "Azure AD Service Principal New Client Credentials",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the addition of new credentials to Service Principals and Applications in Azure AD. It leverages Azure AD AuditLogs, specifically monitoring the \"Update application*Certificates and secrets management\" operation. This activity is significant as it may indicate an adversary attempting to maintain persistent access or escalate privileges within the Azure environment. If confirmed malicious, attackers could use these new credentials to log in as the service principal,…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure Active Directory",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure Active Directory ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Service Principal client credential modifications may be part of legitimate administrative operations. Filter as needed.",
              "refs": "https://attack.mitre.org/techniques/T1098/001/, https://microsoft.github.io/Azure-Threat-Research-Matrix/Persistence/AZT501/AZT501-2/, https://hausec.com/2021/10/26/attacking-azure-azure-ad-part-ii/, https://www.inversecos.com/2021/10/how-to-backdoor-azure-applications-and.html, https://www.mandiant.com/resources/blog/apt29-continues-targeting-microsoft, https://microsoft.github.io/Azure-Threat-Research-Matrix/PrivilegeEscalation/AZT405/AZT405-3/",
              "mitre": [
                "T1098.001"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Azure AD Service Principal New Client Credentials\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure Active Directory. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1098.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Azure AD Service Principal New Client Credentials\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Service Principal client credential modifications may be part of legitimate administrative operations. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
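\n\nAppendix: Underlying Detection Logic (sketch)\n\nFor orientation only: the packaged correlation search keys on Azure AD AuditLogs \"Update application*Certificates and secrets management\" operations. A simplified, non-authoritative sketch; the index and sourcetype are placeholders for however your Azure AD audit feed is onboarded:\n\n```spl\nindex=azure_ad sourcetype=\"azure:monitor:aad\" category=AuditLogs\n  operationName=\"Update application*Certificates and secrets management*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by user, operationName\n```\n\nUse this for validation and scoping; the shipped ESCU detection remains the authoritative logic.",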
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the addition of new credentials to Service Principals and Applications in Azure AD — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.7",
              "n": "Attempt To Add Certificate To Untrusted Store",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Attempt To Add Certificate To Untrusted Store. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "There may be legitimate reasons for administrators to add a certificate to the untrusted certificate store. In such cases, this will typically be done on a large number of systems.",
              "refs": "https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1553.004/T1553.004.md",
              "mitre": [
                "T1553.004"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Attempt To Add Certificate To Untrusted Store\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1553.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Attempt To Add Certificate To Untrusted Store\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: There may be legitimate reasons for administrators to add a certificate to the untrusted certificate store. In such cases, this will typically be done on a large number of systems.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
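\n\nAppendix: Underlying Detection Logic (sketch)\n\nThe packaged search keys on process telemetry showing certutil.exe writing to the untrusted certificate store. A simplified, non-authoritative sketch against the Endpoint data model (the command-line patterns in the shipped ESCU search may differ):\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=certutil.exe Processes.process=\"*addstore*\" Processes.process=\"*untrusted*\" by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(\"Processes\")`\n```",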
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Attempt To Add Certificate To Untrusted Store — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.8",
              "n": "Certutil exe certificate extraction",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of certutil.exe with arguments indicating the manipulation or extraction of certificates. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line arguments. This activity is significant because extracting certificates can allow attackers to sign new authentication tokens, particularly in federated environments like Windows ADFS. If confirmed malicious, this could enable attackers to forge authentica…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Unless there are specific use cases, manipulating or exporting certificates using certutil is uncommon. Extraction of certificate has been observed during attacks such as Golden SAML and other campaigns targeting Federated services.",
              "refs": "https://blog.sygnia.co/detection-and-hunting-of-golden-saml-attack, https://strontic.github.io/xcyclopedia/library/certutil.exe-09A8A29BAA3A451713FD3D07943B4A43.html",
              "mitre": [],
              "dtype": "process_name",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Certutil exe certificate extraction\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Certutil exe certificate extraction\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Unless there are specific use cases, manipulating or exporting certificates using certutil is uncommon. Extraction of certificate has been observed during attacks such as Golden SAML and other campaigns targeting Federated services.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
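\n\nIllustrative Detection Sketch\n\nThe packaged ESCU search is authoritative; the following is only a minimal sketch of a process-based check for certutil certificate-export activity, assuming the Endpoint data model is accelerated and the ESCU helper macros are installed. The `-exportPFX` / `-export` argument patterns are illustrative examples, not the exact ESCU match list:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"certutil.exe\" Processes.process IN (\"*-exportPFX*\", \"*-export*\")\n  by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",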
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies the use of certutil.exe with arguments indicating the manipulation or extraction of certificates — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.9",
              "n": "Cisco Isovalent - Curl Execution With Insecure Flags",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the execution of curl commands with insecure flags within the Cisco Isovalent environment. It identifies this activity by monitoring process execution logs for curl commands that use the -k or --insecure flags. This behavior is significant for a SOC as it could allow an attacker to bypass SSL/TLS verification, potentially exposing the Kubernetes infrastructure to man-in-the-middle attacks. If confirmed malicious, this activity could lead to data interception, servi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Isovalent Process Exec",
              "q": "# Shared SPL: intentional — see UC-10.2.31\n| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$pod_name$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Isovalent Process Exec ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "This activity may be triggered by legitimate administrative scripts, container images, or third-party operators that use cron for scheduled tasks, so please investigate the alert in context to rule out benign operations.",
              "refs": "https://rawcode7.medium.com/understanding-the-kubernetes-attack-surface-9a48ebcb6bc4, https://www.tutorialworks.com/kubernetes-curl/",
              "mitre": [
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Isovalent - Curl Execution With Insecure Flags\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Isovalent Process Exec. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Isovalent - Curl Execution With Insecure Flags\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: This activity may be triggered by legitimate administrative scripts, container images, or third-party operators that use cron for scheduled tasks, so please investigate the alert in context to rule out benign operations.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
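\n\nVerifying Risk Event Generation\n\nBeyond confirming the search is enabled, you can check that it is actually writing risk events. A minimal sketch, assuming the default ES risk index name `risk` (rename if your deployment differs):\n\n```spl\nindex=risk search_name=\"*Curl Execution With Insecure Flags*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by risk_object, risk_object_type\n| `security_content_ctime(firstTime)`\n| `security_content_ctime(lastTime)`\n```",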
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the execution of curl commands with insecure flags within the Cisco Isovalent environment — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.10",
              "n": "Detect Certify Command Line Arguments",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of Certify or Certipy tools to enumerate Active Directory Certificate Services (AD CS) environments. It leverages Endpoint Detection and Response (EDR) data, focusing on specific command-line arguments associated with these tools. This activity is significant because it indicates potential reconnaissance or exploitation attempts targeting AD CS, which could lead to unauthorized access or privilege escalation. If confirmed malicious, attackers could gain ins…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/GhostPack/Certify, https://github.com/ly4k/Certipy, https://specterops.io/wp-content/uploads/sites/3/2022/06/Certified_Pre-Owned.pdf",
              "mitre": [
                "T1649",
                "T1105"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Certify Command Line Arguments\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1649, T1105. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Certify Command Line Arguments\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
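\n\nExample: Lookup-Based Suppression\n\nThe tuning guidance above mentions lookup-based suppressions. A minimal sketch of the pattern applied to this drilldown, assuming a hypothetical lookup `approved_adcs_admin_hosts` with a `dest` column listing hosts where AD CS tooling is sanctioned (the `$dest$` token comes from the drilldown, as in the SPL above):\n\n```spl\n| from datamodel Risk.All_Risk\n| search normalized_risk_object IN (\"$dest$\") starthoursago=168\n| search NOT [| inputlookup approved_adcs_admin_hosts | rename dest as normalized_risk_object | fields normalized_risk_object]\n| stats count values(search_name) as \"Search Name\" by normalized_risk_object\n```\n\nBoth the lookup name and its schema are placeholders; adapt them to your own allow-list conventions.",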
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the use of Certify or Certipy tools to enumerate Active Directory Certificate Services (AD CS) environments — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.11",
              "n": "Detect Certify With PowerShell Script Block Logging",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the Certify tool via an in-memory PowerShell function to enumerate Active Directory Certificate Services (AD CS) environments. It leverages PowerShell Script Block Logging (EventCode 4104) to identify specific command patterns associated with Certify's enumeration and exploitation functions. This activity is significant as it indicates potential reconnaissance or exploitation attempts against AD CS, which could lead to unauthorized certificate issuance. …",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user_id$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to user account entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/GhostPack/Certify, https://specterops.io/wp-content/uploads/sites/3/2022/06/Certified_Pre-Owned.pdf",
              "mitre": [
                "T1059.001",
                "T1649"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Certify With PowerShell Script Block Logging\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1059.001, T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Certify With PowerShell Script Block Logging\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
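\n\nIllustrative Script Block Search\n\nThe packaged ESCU search is authoritative. As a hedged sketch of how Certify-style markers can be hunted directly in PowerShell Script Block Logging, assuming 4104 events are ingested via the Splunk Add-on for Windows; the `index=win` scope and the marker strings are assumptions to adjust for your environment:\n\n```spl\nindex=win source=\"XmlWinEventLog:Microsoft-Windows-PowerShell/Operational\" EventCode=4104\n  ScriptBlockText IN (\"*Invoke-Certify*\", \"*Certify.exe*\", \"*find /vulnerable*\")\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer, ScriptBlockText\n```",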
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the use of the Certify tool using-memory PowerShell function to enumerate Active Directory Certificate Services (AD CS) environments — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.12",
              "n": "Detect Certipy File Modifications",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the Certipy tool to enumerate Active Directory Certificate Services (AD CS) environments by identifying unique file modifications. It leverages endpoint process and filesystem data to spot the creation of files with specific names or extensions associated with Certipy's information gathering and exfiltration activities. This activity is significant as it indicates potential reconnaissance and data exfiltration efforts by an attacker. If confirmed malicio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://github.com/ly4k/Certipy",
              "mitre": [
                "T1649",
                "T1560"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect Certipy File Modifications\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1649, T1560. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect Certipy File Modifications\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
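\n\nIllustrative Filesystem Sketch\n\nThe packaged ESCU search is authoritative; the following minimal sketch shows the general shape of a Filesystem data model check for Certipy-style output files, assuming the Endpoint data model is accelerated and the ESCU helper macros are installed. The `*_certipy.*` name patterns are illustrative assumptions:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Filesystem\n  where Filesystem.file_name IN (\"*_certipy.zip\", \"*_certipy.txt\", \"*_certipy.json\")\n  by Filesystem.dest Filesystem.file_name Filesystem.file_path\n| `drop_dm_object_name(Filesystem)`\n| `security_content_ctime(firstTime)`\n```",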
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the use of the Certipy tool to enumerate Active Directory Certificate Services (AD CS) environments by identifying unique file modifications — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.13",
              "n": "Linux Auditd Private Keys and Certificate Enumeration",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects suspicious attempts to find private keys, which may indicate an attacker's effort to access sensitive cryptographic information. Private keys are crucial for securing encrypted communications and data, and unauthorized access to them can lead to severe security breaches, including data decryption and identity theft. By monitoring for unusual or unauthorized searches for private keys, this analytic helps identify potential threats to cryptographic security, enabling…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux Auditd Execve",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux Auditd Execve ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.",
              "refs": "https://www.splunk.com/en_us/blog/security/deep-dive-on-persistence-privilege-escalation-technique-and-detection-in-linux-platform.html, https://github.com/peass-ng/PEASS-ng/tree/master/linPEAS",
              "mitre": [
                "T1552.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Auditd Private Keys and Certificate Enumeration\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux Auditd Execve. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Auditd Private Keys and Certificate Enumeration\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can use this application for automation purposes. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
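\n\nIllustrative Process Sketch\n\nThe packaged ESCU search is authoritative. A minimal sketch of what an Endpoint.Processes check for private-key enumeration commands can look like, assuming CIM-normalized auditd data and the ESCU helper macros; the command and file-pattern lists are illustrative assumptions:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name IN (\"find\", \"grep\") Processes.process IN (\"*id_rsa*\", \"*.pem*\", \"*.key*\")\n  by Processes.dest Processes.user Processes.process\n| `drop_dm_object_name(Processes)`\n| `security_content_ctime(firstTime)`\n```",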
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects suspicious attempts to find private keys, which may indicate an attacker's effort to access sensitive cryptographic information — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.14",
              "n": "Linux Deletion of SSL Certificate",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the deletion of SSL certificates on a Linux machine. It leverages filesystem event logs to identify when files with extensions .pem or .crt are deleted from the /etc/ssl/certs/ directory. This activity is significant because attackers may delete or modify SSL certificates to disable security features or evade defenses on a compromised system. If confirmed malicious, this behavior could indicate an attempt to disrupt secure communications, evade detection, or execut…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to file entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon for Linux EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrator or network operator can execute this command. Please update the filter macros to remove false positives.",
              "refs": "https://www.sentinelone.com/labs/acidrain-a-modem-wiper-rains-down-on-europe/",
              "mitre": [
                "T1070.004",
                "T1485"
              ],
              "dtype": "file_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Deletion of SSL Certificate\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to file entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1070.004, T1485. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Deletion of SSL Certificate\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrator or network operator can execute this command. Please update the filter macros to remove false positives.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for the audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
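\n\nIllustrative Filesystem Sketch\n\nThe packaged ESCU search is authoritative. A minimal sketch of a Filesystem data model check for certificate deletions under /etc/ssl/certs/, assuming the Endpoint data model is accelerated and the ESCU helper macros are installed:\n\n```spl\n| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Filesystem\n  where Filesystem.action=\"deleted\" Filesystem.file_path=\"/etc/ssl/certs/*\" Filesystem.file_name IN (\"*.pem\", \"*.crt\")\n  by Filesystem.dest Filesystem.file_name Filesystem.file_path\n| `drop_dm_object_name(Filesystem)`\n| `security_content_ctime(firstTime)`\n```",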
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the deletion of secure connections certificates on a Linux machine — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.15",
              "n": "Linux Impair Defenses Process Kill",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the execution of the 'pkill' command, which is used to terminate processes on a Linux system. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on process names and command-line executions. This activity is significant because threat actors often use 'pkill' to disable security defenses or terminate critical processes, facilitating further malicious actions. If confirmed malicious, this behavior could lead to the disruption of securit…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon for Linux EventID 1",
              "q": "| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Processes\n      WHERE Processes.process_name IN ( \"pgrep\", \"pkill\") Processes.process = \"*pkill *\"\n      BY Processes.action Processes.dest Processes.original_file_name\n         Processes.parent_process Processes.parent_process_exec Processes.parent_process_guid\n         Processes.parent_process_id Processes.parent_process_name Processes.parent_process_path\n         Processes.process Processes.process_exec Processes.process_guid\n         Processes.process_hash Processes.process_id Processes.process_integrity_level\n         Processes.process_name Processes.process_path Processes.user\n         Processes.user_id Processes.vendor_product\n    | `drop_dm_object_name(Processes)`\n    | `security_content_ctime(firstTime)`\n    | `linux_impair_defenses_process_kill_filter`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Sysmon for Linux EventID 1 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "network admin can terminate a process using this linux command. Filter is needed.",
              "refs": "https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/overview-of-the-cyber-weapons-used-in-the-ukraine-russia-war/, https://cert.gov.ua/article/3718487",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "Hunting",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Linux Impair Defenses Process Kill\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon for Linux EventID 1. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1562.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Linux Impair Defenses Process Kill\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: network admin can terminate a process using this linux command. Filter is needed.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies the execution of the 'pkill' command, which is used to terminate processes on a Linux system — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": false,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.16",
              "n": "Steal or Forge Authentication Certificates Behavior Identified",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies potential threats related to the theft or forgery of authentication certificates. It detects when five or more analytics from the Windows Certificate Services story trigger within a specified timeframe. This detection leverages aggregated risk scores and event counts from the Risk data model. This activity is significant as it may indicate an ongoing attack aimed at compromising authentication mechanisms. If confirmed malicious, attackers could gain unauthorized…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$risk_object$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present based on automated tooling or system administrators. Filter as needed.",
              "refs": "https://research.splunk.com/stories/windows_certificate_services/, https://attack.mitre.org/techniques/T1649/",
              "mitre": [
                "T1649"
              ],
              "dtype": "Correlation",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Steal or Forge Authentication Certificates Behavior Identified\" is a Correlation detection that aggregates and correlates signals from multiple detection sources, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Steal or Forge Authentication Certificates Behavior Identified\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present based on automated tooling or system administrators. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the ss_name filter is narrowed to this detection):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Steal or Forge Authentication Certificates*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
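\n\nPreviewing the Risk Aggregation (Illustrative Sketch)\n\nBecause this correlation fires when multiple analytics from the Windows Certificate Services story accumulate against a single entity, you can preview candidate entities before enabling it. The story name and the five-detection threshold mirror the behavior described above; the field names assume the CIM Risk data model and may need adjusting:\n\n```spl\n| tstats summariesonly=true count dc(All_Risk.source) as distinct_detections\n  from datamodel=Risk.All_Risk\n  where All_Risk.analyticstories=\"Windows Certificate Services\"\n  by All_Risk.risk_object All_Risk.risk_object_type\n| where distinct_detections >= 5\n```",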
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies potential threats related to the theft or forgery of authentication certificates — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.17",
              "n": "Windows Certutil Root Certificate Addition",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Windows Certutil Root Certificate Addition. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to parent process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.",
              "refs": "https://www.microsoft.com/en-us/security/blog/2025/07/31/frozen-in-transit-secret-blizzards-aitm-campaign-against-diplomats/, https://www.deepinstinct.com/blog/iranian-threat-actor-continues-to-develop-mass-exploitation-tools, https://unit42.paloaltonetworks.com/retefe-banking-trojan-targets-sweden-switzerland-and-japan/",
              "mitre": [
                "T1587.003"
              ],
              "dtype": "parent_process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Certutil Root Certificate Addition\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to parent process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1587.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Certutil Root Certificate Addition\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: None explicitly documented by the upstream ESCU detection — review alerts in business context before suppressing, and consult the linked references for vendor guidance.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the ss_name filter is narrowed to this detection):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Windows Certutil Root Certificate Addition*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
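\n\nAd-Hoc Spot Check (Illustrative Sketch)\n\nThe packaged correlation search contains the authoritative logic, but a quick data-model search can confirm that candidate events exist at all. The process filters below approximate the behavior this detection targets (certutil adding a certificate to the root store) and are an assumption, not the exact ESCU logic:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=\"certutil.exe\" Processes.process=\"*addstore*\" Processes.process=\"*root*\"\n  by Processes.dest Processes.user Processes.process\n```",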
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Windows Certutil Root Certificate Addition — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.18",
              "n": "Windows Export Certificate",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the export of a certificate from the Windows Certificate Store. It leverages the Certificates Lifecycle log channel, specifically event ID 1007, to identify this activity. Monitoring certificate exports is crucial as certificates can be used for authentication to VPNs or private resources. If malicious actors export certificates, they could potentially gain unauthorized access to sensitive systems or data, leading to significant security breaches.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log CertificateServicesClient 1007",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log CertificateServicesClient 1007 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be generated based on an automated process or service that exports certificates on the regular. Review is required before setting to alert. Monitor for abnormal processes performing an export.",
              "refs": "https://atomicredteam.io/defense-evasion/T1553.004/#atomic-test-4---install-root-ca-on-windows",
              "mitre": [
                "T1552.004",
                "T1649"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Export Certificate\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log CertificateServicesClient 1007. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.004, T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Export Certificate\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be generated based on an automated process or service that exports certificates on the regular. Review is required before setting to alert. Monitor for abnormal processes performing an export.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the ss_name filter is narrowed to this detection):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Windows Export Certificate*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
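\n\nSpot-Checking the Source Events (Illustrative Sketch)\n\nBefore enabling, confirm that Certificates Lifecycle events are arriving. The source pattern below assumes the Microsoft-Windows-CertificateServicesClient-Lifecycle-System/Operational channel is being collected by your Windows inputs; adjust the index and source to your deployment:\n\n```spl\nindex=* source=\"*CertificateServicesClient-Lifecycle-System*\" EventCode=1007\n| table _time, host, EventCode, Message\n```",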
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the export of a certificate from the Windows Certificate Store — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.19",
              "n": "Windows Mimikatz Crypto Export File Extensions",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the creation of files with extensions commonly associated with the Mimikatz Crypto module. It leverages the Endpoint.Filesystem data model to identify specific file names indicative of certificate export activities. This behavior is significant as it may indicate the use of Mimikatz to export cryptographic keys, which is a common tactic for credential theft. If confirmed malicious, this activity could allow an attacker to exfiltrate sensitive cryptographic material…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 11",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 11 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present and may need to be reviewed before this can be turned into a TTP. In addition, remove .pfx (standalone) if it's too much volume.",
              "refs": "https://github.com/gentilkiwi/mimikatz/blob/master/mimikatz/modules/kuhl_m_crypto.c#L628-L645",
              "mitre": [
                "T1649"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Mimikatz Crypto Export File Extensions\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 11. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Mimikatz Crypto Export File Extensions\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present and may need to be reviewed before this can be turned into a TTP. In addition, remove .pfx (standalone) if it's too much volume.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the ss_name filter is narrowed to this detection):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Windows Mimikatz Crypto Export File Extensions*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
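\n\nAd-Hoc Spot Check (Illustrative Sketch)\n\nA quick search over the Endpoint.Filesystem data model can verify that file-creation telemetry would surface the export artifacts this detection looks for. The extension list below is a partial, illustrative subset of the Mimikatz crypto-module naming referenced above, not the full ESCU list:\n\n```spl\n| tstats summariesonly=true count from datamodel=Endpoint.Filesystem\n  where Filesystem.file_name IN (\"*.keyx.rsa.pvk\", \"*.sign.rsa.pvk\", \"*.pfx\")\n  by Filesystem.dest Filesystem.file_name Filesystem.file_path\n```",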
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the creation of files with extensions commonly associated with the Mimikatz Crypto module — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.20",
              "n": "Windows PowerShell Export Certificate",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the PowerShell Cmdlet `export-certificate` by leveraging Script Block Logging. This activity is significant as it may indicate an adversary attempting to exfiltrate certificates from the local Certificate Store on a Windows endpoint. Monitoring this behavior is crucial because stolen certificates can be used to impersonate users, decrypt sensitive data, or facilitate further attacks. If confirmed malicious, this activity could lead to unauthorized access…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible administrators or scripts may run these commands, filtering may be required.",
              "refs": "https://dev.to/iamthecarisma/managing-windows-pfx-certificates-through-powershell-3pj, https://learn.microsoft.com/en-us/powershell/module/pki/export-certificate?view=windowsserver2022-ps",
              "mitre": [
                "T1552.004",
                "T1649"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell Export Certificate\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.004, T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell Export Certificate\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible administrators or scripts may run these commands, filtering may be required.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the ss_name filter is narrowed to this detection):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Windows PowerShell Export Certificate*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
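\n\nSpot-Checking Script Block Logging (Illustrative Sketch)\n\nThis detection depends on PowerShell Script Block Logging (EventCode 4104). A raw-event search like the one below verifies both that 4104 events are flowing and that `Export-Certificate` invocations appear; the sourcetype shown is the common XmlWinEventLog form and field names may differ in your deployment:\n\n```spl\nindex=* sourcetype=\"XmlWinEventLog\" EventCode=4104 ScriptBlockText=\"*export-certificate*\"\n| stats count by host, ScriptBlockText\n```",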
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the use of the PowerShell Cmdlet by leveraging Script Block Logging — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.21",
              "n": "Windows PowerShell Export PfxCertificate",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the PowerShell cmdlet `export-pfxcertificate` by leveraging Script Block Logging. This activity is significant as it may indicate an adversary attempting to exfiltrate certificates from the Windows Certificate Store. Monitoring this behavior is crucial for identifying potential certificate theft, which can lead to unauthorized access and impersonation attacks. If confirmed malicious, this activity could allow attackers to compromise secure communications…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Powershell Script Block Logging 4104",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Powershell Script Block Logging 4104 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "It is possible administrators or scripts may run these commands, filtering may be required.",
              "refs": "https://dev.to/iamthecarisma/managing-windows-pfx-certificates-through-powershell-3pj, https://learn.microsoft.com/en-us/powershell/module/pki/export-pfxcertificate?view=windowsserver2022-ps",
              "mitre": [
                "T1552.004",
                "T1649"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows PowerShell Export PfxCertificate\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Powershell Script Block Logging 4104. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.004, T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows PowerShell Export PfxCertificate\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: It is possible administrators or scripts may run these commands, filtering may be required.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (the ss_name filter is narrowed to this detection):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Windows PowerShell Export PfxCertificate*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
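\n\nAlternative Scheduler Check (Illustrative Sketch)\n\nBesides the _audit search above, the scheduler log shows whether the correlation search is running and returning results. The savedsearch_name wildcard assumes default ESCU naming and may need adjusting:\n\n```spl\nindex=_internal sourcetype=scheduler savedsearch_name=\"*Export PfxCertificate*\"\n| stats count max(result_count) as max_results by savedsearch_name, status\n```",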
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the use of the PowerShell cmdlet by leveraging Script Block Logging — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.22",
              "n": "Windows Private Keys Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies processes that retrieve information related to private key files, often used by post-exploitation tools like winpeas. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on command-line executions that search for private key certificates. This activity is significant as it indicates potential attempts to locate insecurely stored credentials, which adversaries can exploit for privilege escalation, persistence, or remote servi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://attack.mitre.org/techniques/T1552/004/, https://github.com/carlospolop/PEASS-ng/tree/master/winPEAS, https://www.microsoft.com/en-us/security/blog/2022/10/14/new-prestige-ransomware-impacts-organizations-in-ukraine-and-poland/",
              "mitre": [
                "T1552.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Private Keys Discovery\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1552.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Private Keys Discovery\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
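              "_exampleSpl": "Illustrative validation sketch, not the shipped ESCU logic — a tstats query over the CIM Endpoint.Processes data model for command lines that reference private-key files. The file-extension wildcards are assumptions based on what tools like winPEAS search for:\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process IN (\"*.key*\", \"*.pfx*\", \"*.pem*\")\n  by Processes.dest, Processes.user, Processes.process\n```\n\nRun it over a short window first; broad wildcards can be noisy on developer and PKI-admin hosts.",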
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies processes that retrieve information related to private key files, often used by post-exploitation tools like winpeas — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.23",
              "n": "Windows Registry Certificate Added",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the installation of a root CA certificate by monitoring specific registry paths for SetValue events. It leverages data from the Endpoint datamodel, focusing on registry paths containing \"certificates\" and registry values named \"Blob.\" This activity is significant because unauthorized root CA certificates can compromise the integrity of encrypted communications and facilitate man-in-the-middle attacks. If confirmed malicious, this could allow an attacker to intercep…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 13",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To successfully implement this search you need to be ingesting information on process that include the name of the process responsible for the changes from your endpoints into the `Endpoint` datamodel in the `Processes` and `Registry` node. In addition, confirm the latest CIM App 4.20 or higher is installed and the latest TA for the endpoint product.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be limited to a legitimate business applicating consistently adding new root certificates to the endpoint. Filter by user, process, or thumbprint.",
              "refs": "https://posts.specterops.io/code-signing-certificate-cloning-attacks-and-defenses-6f98657fc6ec, https://github.com/redcanaryco/atomic-red-team/tree/master/atomics/T1553.004",
              "mitre": [
                "T1553.004"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Registry Certificate Added\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 13. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1553.004. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Registry Certificate Added\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be limited to a legitimate business applicating consistently adding new root certificates to the endpoint. Filter by user, process, or thumbprint.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
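              "_exampleSpl": "Illustrative validation sketch, not the shipped ESCU logic — a tstats query over the CIM Endpoint.Registry data model for the \"certificates\" path and \"Blob\" value behavior described above (summariesonly=true assumes the Endpoint data model is accelerated):\n\n```spl\n| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Registry\n  where Registry.registry_path=\"*certificates*\" Registry.registry_value_name=\"Blob\"\n  by Registry.dest, Registry.registry_path, Registry.process_guid\n```\n\nPivot on process_guid into Endpoint.Processes to identify which process wrote the certificate.",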
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the installation of a root CA certificate by monitoring specific registry paths for SetValue events — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.24",
              "n": "Windows Steal Authentication Certificates - ESC1 Abuse",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a new certificate is requested or granted against Active Directory Certificate Services (AD CS) using a Subject Alternative Name (SAN). It leverages Windows Security Event Codes 4886 and 4887 to identify these actions. This activity is significant because improperly configured certificate templates can be exploited for privilege escalation and environment compromise. If confirmed malicious, an attacker could gain elevated privileges or persist within the envir…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4886, Windows Event Log Security 4887",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To implement this analytic, enhanced Audit Logging must be enabled on AD CS and within Group Policy Management for CS server. See Page 115 of first reference. Recommend throttle correlation by RequestId/ssl_serial at minimum.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be generated in environments where administrative users or processes are allowed to generate certificates with Subject Alternative Names. Sources or templates used in these processes may need to be tuned out for accurate function.",
              "refs": "https://specterops.io/wp-content/uploads/sites/3/2022/06/Certified_Pre-Owned.pdf, https://github.com/ly4k/Certipy#esc1, https://pentestlaboratories.com/2021/11/08/threat-hunting-certificate-account-persistence/",
              "mitre": [
                "T1649"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Steal Authentication Certificates - ESC1 Abuse\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to user account entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4886, Windows Event Log Security 4887. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Steal Authentication Certificates - ESC1 Abuse\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be generated in environments where administrative users or processes are allowed to generate certificates with Subject Alternative Names. Sources or templates used in these processes may need to be tuned out for accurate function.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. 
Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
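              "_exampleSpl": "Illustrative hunting sketch, not the shipped ESCU logic — a raw-event search for AD CS request/issuance events (4886/4887) that carry a Subject Alternative Name. The index and the Requester/Attributes field names are assumptions; the \"*san:*\" match is a simplification of the attribute parsing the real detection performs:\n\n```spl\nindex=wineventlog EventCode IN (4886, 4887) \"*san:*\"\n| stats count min(_time) as firstTime max(_time) as lastTime by EventCode, Requester, Attributes\n```\n\nAs the implementation notes recommend, throttle any alert built on this by RequestId/ssl_serial.",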
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects when a new certificate is requested or granted against Active Directory Certificate Services (AD CS) using a Subject Alternative Name (SAN) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.25",
              "n": "Windows Steal Authentication Certificates - ESC1 Authentication",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a suspicious certificate with a Subject Alternative Name (SAN) is issued using Active Directory Certificate Services (AD CS) and then immediately used for authentication. This detection leverages Windows Security Event Logs, specifically EventCode 4887, to identify the issuance and subsequent use of the certificate. This activity is significant because improperly configured certificate templates can be exploited for privilege escalation and environment comprom…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4887, Windows Event Log Security 4768",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\", \"$dest$\", \"$src_user$\", \"$user$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "To implement this analytic, enhanced Audit Logging must be enabled on AD CS and within Group Policy Management for CS server. See Page 115 of first reference. Recommend throttle correlation by RequestId/ssl_serial at minimum.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be generated in environments where administrative users or processes are allowed to generate certificates with Subject Alternative Names for authentication. Sources or templates used in these processes may need to be tuned out for accurate function.",
              "refs": "https://specterops.io/wp-content/uploads/sites/3/2022/06/Certified_Pre-Owned.pdf, https://github.com/ly4k/Certipy#esc1, https://pentestlaboratories.com/2021/11/08/threat-hunting-certificate-account-persistence/",
              "mitre": [
                "T1649",
                "T1550"
              ],
              "dtype": "certificate_serial",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Steal Authentication Certificates - ESC1 Authentication\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to certificate entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4887, Windows Event Log Security 4768. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1649, T1550. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Steal Authentication Certificates - ESC1 Authentication\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be generated in environments where administrative users or processes are allowed to generate certificates with Subject Alternative Names for authentication. Sources or templates used in these processes may need to be tuned out for accurate function.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
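              "_exampleSpl": "Illustrative tuning sketch, not part of the shipped content — the lookup-based suppression pattern suggested above, applied to certificate issuance events. The lookup name, its template_name column, and the CertificateTemplate field are assumptions to adapt to your environment:\n\n```spl\nindex=wineventlog EventCode=4887\n| search NOT [ | inputlookup approved_cert_templates.csv | fields template_name | rename template_name as CertificateTemplate ]\n| stats count by Requester, CertificateTemplate\n```\n\nMaintaining the lookup through change control keeps suppressions auditable.",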
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects when a suspicious certificate with a Subject Alternative Name (SAN) is issued using Active Directory Certificate Services (AD CS) and then immediately used for authentication — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.26",
              "n": "Windows Steal Authentication Certificates Certificate Issued",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the issuance of a new certificate by Certificate Services - AD CS, detected via Event ID 4887. This event logs the requester user context, DNS hostname of the requesting machine, and the request time. Monitoring this activity is crucial as it can indicate potential misuse of authentication certificates. If confirmed malicious, an attacker could use the issued certificate to impersonate users, escalate privileges, or maintain persistence within the environment. T…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4887",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4887 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be generated based on normal certificates issued. Leave enabled to generate Risk, as this is meant to be an anomaly analytic.",
              "refs": "https://specterops.io/wp-content/uploads/sites/3/2022/06/Certified_Pre-Owned.pdf",
              "mitre": [
                "T1649"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Steal Authentication Certificates Certificate Issued\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4887. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Steal Authentication Certificates Certificate Issued\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be generated based on normal certificates issued. Leave enabled to generate Risk, as this is meant to be an anomaly analytic.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
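              "_exampleSpl": "Illustrative baseline sketch, not the shipped ESCU logic — because this analytic is expected to fire on normal issuance, trending 4887 volume per requester gives its risk events context. The index and Requester field name are assumptions:\n\n```spl\nindex=wineventlog EventCode=4887\n| timechart span=1d count by Requester limit=10\n```\n\nA sudden spike for a single requester is the pattern worth a closer look.",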
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies the issuance of a new certificate by Certificate Services - AD CS, detected using Event ID 4887 — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.27",
              "n": "Windows Steal Authentication Certificates Certificate Request",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects when a new certificate is requested from Certificate Services - AD CS. It leverages Event ID 4886, which indicates that a certificate request has been received. This activity is significant because unauthorized certificate requests can be part of credential theft or lateral movement tactics. If confirmed malicious, an attacker could use the certificate to impersonate users, gain unauthorized access to resources, or establish persistent access within the environment…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4886",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4886 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be generated based on normal certificate requests. Leave enabled to generate Risk, as this is meant to be an anomaly analytic.",
              "refs": "https://specterops.io/wp-content/uploads/sites/3/2022/06/Certified_Pre-Owned.pdf",
              "mitre": [
                "T1649"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Steal Authentication Certificates Certificate Request\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4886. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Steal Authentication Certificates Certificate Request\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be generated based on normal certificate requests. Leave enabled to generate Risk, as this is meant to be an anomaly analytic.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
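              "_exampleSpl": "Illustrative hunting sketch, not the shipped ESCU logic — surface requesters appearing for the first time in 4886 certificate-request events (the index and field names are assumptions):\n\n```spl\nindex=wineventlog EventCode=4886\n| stats earliest(_time) as firstTime latest(_time) as lastTime count by Requester\n| where firstTime >= relative_time(now(), \"-7d@d\")\n| convert ctime(firstTime) ctime(lastTime)\n```\n\nNew requesters are not inherently malicious, but they are a cheap triage filter for an anomaly analytic like this one.",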
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects when a new certificate is requested from Certificate Services - AD CS — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.28",
              "n": "Windows Steal Authentication Certificates CertUtil Backup",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects CertUtil.exe performing a backup of the Certificate Store. It leverages data from Endpoint Detection and Response (EDR) agents, focusing on specific command-line executions involving CertUtil with backup parameters. This activity is significant because it may indicate an attempt to steal authentication certificates, which are critical for secure communications. If confirmed malicious, an attacker could use the stolen certificates to impersonate users, decrypt sensi…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be generated based on normal certificate store backups. Leave enabled to generate Risk, as this is meant to be an anomaly analytic. If CS backups are not normal, enable as TTP.",
              "refs": "https://specterops.io/wp-content/uploads/sites/3/2022/06/Certified_Pre-Owned.pdf",
              "mitre": [
                "T1649"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Steal Authentication Certificates CertUtil Backup\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Steal Authentication Certificates CertUtil Backup\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be generated based on normal certificate store backups. Leave enabled to generate Risk, as this is meant to be an anomaly analytic. If CS backups are not normal, enable as TTP.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
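\n\nOptional Raw-Event Sketch\n\nAs a hedged starting point for hunting the underlying behavior directly (a minimal sketch, not the packaged ESCU detection logic), the following assumes your Sysmon/EDR feed populates the CIM Endpoint.Processes data model and that the ESCU/CIM macros `summariesonly` and `drop_dm_object_name` are available in your environment:\n\n```spl\n| tstats `summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process_name=certutil.exe Processes.process=\"*backup*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```\n\nCross-check any hits against backup schedules and change records, since routine certificate store backups are the documented false-positive source for this detection.",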
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects CertUtil.exe performing a backup of the Certificate Store — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.29",
              "n": "Windows Steal Authentication Certificates CryptoAPI",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the extraction of authentication certificates using Windows Event Log - CAPI2 (CryptoAPI 2). It leverages EventID 70, which is generated when a certificate's private key is acquired. This detection is significant because it can identify potential misuse of certificates, such as those extracted by tools like Mimikatz or Cobalt Strike. If confirmed malicious, this activity could allow attackers to impersonate users, escalate privileges, or access sensitive informatio…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log CAPI2 70",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log CAPI2 70 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives may be present in some instances of legitimate applications requiring to export certificates. Filter as needed.",
              "refs": "https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-vista/cc749296(v=ws.10)",
              "mitre": [
                "T1649"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Steal Authentication Certificates CryptoAPI\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log CAPI2 70. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Steal Authentication Certificates CryptoAPI\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives may be present in some instances of legitimate applications requiring to export certificates. Filter as needed.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
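\n\nOptional Raw-Event Sketch\n\nA minimal spot-check for the underlying CAPI2 events (a hedged sketch, not the packaged ESCU logic). The `source` value below is an assumption based on common Windows TA configurations, so adjust it to match your inputs; note also that the CAPI2 Operational log is disabled by default on Windows endpoints and must be enabled before events will flow:\n\n```spl\nsource=\"XmlWinEventLog:Microsoft-Windows-CAPI2/Operational\" EventCode=70\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer\n| sort -count\n```",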
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the extraction of authentication certificates using Windows Event Log - CAPI2 (CryptoAPI 2) — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.30",
              "n": "Windows Steal Authentication Certificates CS Backup",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the backup of the Active Directory Certificate Services (AD CS) store, detected via Event ID 4876. This event is logged when a backup is performed using the CertSrv.msc UI or the CertUtil.exe -BackupDB command. Monitoring this activity is crucial as unauthorized backups can indicate an attempt to steal authentication certificates, which are critical for secure communications. If confirmed malicious, this activity could allow an attacker to impersonate users, esc…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Windows Event Log Security 4876",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Windows Event Log Security 4876 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "False positives will be generated based on normal certificate store backups. Leave enabled to generate Risk, as this is meant to be an anomaly analytic. If CS backups are not normal, enable as TTP.",
              "refs": "https://specterops.io/wp-content/uploads/sites/3/2022/06/Certified_Pre-Owned.pdf",
              "mitre": [
                "T1649"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Steal Authentication Certificates CS Backup\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Windows Event Log Security 4876. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Steal Authentication Certificates CS Backup\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: False positives will be generated based on normal certificate store backups. Leave enabled to generate Risk, as this is meant to be an anomaly analytic. If CS backups are not normal, enable as TTP.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
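\n\nOptional Raw-Event Sketch\n\nA minimal spot-check for the underlying audit events (a hedged sketch, not the packaged ESCU logic). The index and sourcetype below are assumptions; substitute whatever your Windows TA actually writes to:\n\n```spl\nindex=wineventlog sourcetype=XmlWinEventLog EventCode=4876\n| stats count min(_time) as firstTime max(_time) as lastTime by Computer\n```\n\nCompare results against known AD CS backup schedules before treating any host as suspicious.",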
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies the backup of the Active Directory Certificate Services (AD CS) store, detected using Event ID 4876 — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.31",
              "n": "Windows Steal Authentication Certificates Export Certificate",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the PowerShell cmdlet 'export-certificate' executed via the command line, indicating an attempt to export a certificate from the local Windows Certificate Store. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. Exporting certificates is significant as it may indicate credential theft or preparation for man-in-the-middle attacks. If confirmed malicious, this act…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Filtering may be requried based on automated utilities and third party applications that may export certificates.",
              "refs": "https://dev.to/iamthecarisma/managing-windows-pfx-certificates-through-powershell-3pj, https://learn.microsoft.com/en-us/powershell/module/pki/export-certificate?view=windowsserver2022-ps",
              "mitre": [
                "T1649"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Steal Authentication Certificates Export Certificate\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Steal Authentication Certificates Export Certificate\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Filtering may be requried based on automated utilities and third party applications that may export certificates.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
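\n\nOptional Raw-Event Sketch\n\nA hedged sketch for hunting the cmdlet directly in process telemetry (not the packaged ESCU logic), assuming Endpoint.Processes is populated and the ESCU/CIM macros `summariesonly` and `drop_dm_object_name` are available:\n\n```spl\n| tstats `summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process=\"*export-certificate*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",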
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the use of the PowerShell cmdlet 'export-certificate' executed using the command line, indicating an attempt to export a certificate from the local Windows Certificate Store — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.32",
              "n": "Windows Steal Authentication Certificates Export PfxCertificate",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic detects the use of the PowerShell cmdlet `export-pfxcertificate` on the command line, indicating an attempt to export a certificate from the local Windows Certificate Store. This detection leverages data from Endpoint Detection and Response (EDR) agents, focusing on process execution logs and command-line arguments. This activity is significant as it may indicate an attempt to exfiltrate authentication certificates, which can be used to impersonate users or decrypt sensiti…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Filtering may be requried based on automated utilities and third party applications that may export certificates.",
              "refs": "https://dev.to/iamthecarisma/managing-windows-pfx-certificates-through-powershell-3pj, https://learn.microsoft.com/en-us/powershell/module/pki/export-pfxcertificate?view=windowsserver2022-ps",
              "mitre": [
                "T1649"
              ],
              "dtype": "process_name",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Windows Steal Authentication Certificates Export PfxCertificate\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Event Log Security 4688, CrowdStrike ProcessRollup2. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1649. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Windows Steal Authentication Certificates Export PfxCertificate\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Filtering may be requried based on automated utilities and third party applications that may export certificates.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
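\n\nOptional Raw-Event Sketch\n\nA hedged sketch for hunting the cmdlet directly in process telemetry (not the packaged ESCU logic), assuming Endpoint.Processes is populated and the ESCU/CIM macros `summariesonly` and `drop_dm_object_name` are available:\n\n```spl\n| tstats `summariesonly` count min(_time) as firstTime max(_time) as lastTime\n  from datamodel=Endpoint.Processes\n  where Processes.process=\"*export-pfxcertificate*\"\n  by Processes.dest Processes.user Processes.parent_process_name Processes.process\n| `drop_dm_object_name(Processes)`\n```",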
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic detects the use of the PowerShell cmdlet on the command line, indicating an attempt to export a certificate from the local Windows Certificate Store — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.33",
              "n": "Cisco Secure Firewall - Blacklisted SSL Certificate Fingerprint",
              "c": "high",
              "f": "intermediate",
              "v": "This detection identifies Cisco Secure Firewall - Blacklisted SSL Certificate Fingerprint. It supports security monitoring and incident response. Part of Splunk Security Essentials (ESCU) where applicable.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco Secure Firewall Threat Defense Connection Event",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$src$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to URL entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco Secure Firewall Threat Defense Connection Event ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Certain SSL certificates may be flagged in threat intelligence feeds due to historical misuse, yet still be used by legitimate services, particularly in content delivery or shared hosting environments. Internal or self-signed certificates used in testing or development environments may inadvertently match known blacklisted fingerprints. It is recommended to validate the connection context (destination IP, domain, ClientApplication) and correlate with other indicators before taking action.",
              "refs": "https://www.cisco.com/c/en/us/td/docs/security/firepower/741/api/FQE/secure_firewall_estreamer_fqe_guide_740.pdf",
              "mitre": [
                "T1587.002",
                "T1588.004",
                "T1071.001",
                "T1573.002"
              ],
              "dtype": "url",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco Secure Firewall - Blacklisted SSL Certificate Fingerprint\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to URL entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco Secure Firewall Threat Defense Connection Event. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1587.002, T1588.004, T1071.001, T1573.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco Secure Firewall - Blacklisted SSL Certificate Fingerprint\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Certain SSL certificates may be flagged in threat intelligence feeds due to historical misuse, yet still be used by legitimate services, particularly in content delivery or shared hosting environments. Internal or self-signed certificates used in testing or development environments may inadvertently match known blacklisted fingerprints. It is recommended to validate the connection context (destination IP, domain, ClientApplication) and correlate with other indicators before taking action.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
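\n\nOptional Suppression Sketch\n\nTo prototype a per-entity suppression before editing the detection itself, a hedged sketch over the Risk data model; the lookup name `ssl_fp_allowlist.csv` and its fields `risk_object` and `approved` are hypothetical placeholders for your own allowlist:\n\n```spl\n| from datamodel Risk.All_Risk\n| search search_name=\"*Blacklisted SSL Certificate Fingerprint*\"\n| lookup ssl_fp_allowlist.csv risk_object as normalized_risk_object OUTPUT approved\n| where isnull(approved)\n| stats count values(risk_message) as \"Risk Message\" by normalized_risk_object\n```",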
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "This detection identifies Cisco Secure Firewall - Blacklisted secure connections Certificate Fingerprint — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.34",
              "n": "Detect SNICat SNI Exfiltration",
              "c": "high",
              "f": "intermediate",
              "v": "The following analytic identifies the use of SNICat tool commands within the TLS SNI field, indicating potential data exfiltration attempts. It leverages Zeek SSL data to detect specific SNICat commands such as LIST, LS, SIZE, LD, CB, EX, ALIVE, EXIT, WHERE, and finito in the server_name field. This activity is significant as SNICat is a known tool for covert data exfiltration using TLS. If confirmed malicious, this could allow attackers to exfiltrate sensitive data undetected, posing a severe t…",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Various",
              "q": "`zeek_ssl`\n      | rex field=server_name \"(?<snicat>(LIST\n      | LS\n      | SIZE\n      | LD\n      | CB\n      | CD\n      | EX\n      | ALIVE\n      | EXIT\n      | WHERE\n      | finito)-[A-Za-z0-9]{16}\\.)\"\n      | stats count\n        BY src dest server_name\n           snicat\n      | where count>0\n      | table src dest server_name snicat\n      | `detect_snicat_sni_exfiltration_filter`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to host or system entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Various ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "No false positives have been identified at this time.",
              "refs": "https://www.mnemonic.io/resources/blog/introducing-snicat/, https://github.com/mnemonic-no/SNIcat, https://attack.mitre.org/techniques/T1041/",
              "mitre": [
                "T1041"
              ],
              "dtype": "system",
              "sdomain": "",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Detect SNICat SNI Exfiltration\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to host or system entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Various. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• MITRE ATT&CK: T1041. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Detect SNICat SNI Exfiltration\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: No false positives have been identified at this time.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. 
Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
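\n\nOptional Data Presence Sketch\n\nBefore enabling the detection, a hedged spot-check that Zeek SSL telemetry and the `server_name` (SNI) field are actually populated. The sourcetype below is an assumption (Zeek add-ons variously use `zeek:ssl`, `bro:ssl:json`, or similar), and the `zeek_ssl` macro in the detection should expand to whatever scope you configure:\n\n```spl\nsourcetype=zeek:ssl server_name=*\n| stats dc(server_name) as unique_sni count by index, sourcetype\n```",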
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "The following analytic identifies the use of SNICat tool commands within the secure connections SNI field, indicating potential data exfiltration attempts — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 25,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.35",
              "n": "Container Runtime Security Event Correlation",
              "c": "high",
              "f": "intermediate",
              "v": "Container escape or privileged container abuse can lead to host compromise. Correlating runtime events with image and orchestrator data improves detection of container-based attacks.",
              "t": "Container security TAs, Kubernetes audit logs",
              "d": "Container runtime events, K8s audit log, image registry",
              "q": "index=containers (sourcetype=\"kube:audit\" OR sourcetype=\"container:runtime\")\n| search (reason=\"PrivilegedContainers\" OR action=\"exec\" OR severity=\"high\")\n| stats count by pod, namespace, user, action\n| where count > 10\n| sort -count",
              "m": "Ingest container runtime and K8s audit logs. Alert on privileged pod creation, exec into production namespaces, or high-severity runtime events. Correlate with image provenance and network flows.",
              "z": "Table (security events by pod), Timeline (runtime events), Bar chart (events by namespace).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Container security TAs, Kubernetes audit logs.\n• Ensure the following data sources are available: Container runtime events, K8s audit log, image registry.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest container runtime and K8s audit logs. Alert on privileged pod creation, exec into production namespaces, or high-severity runtime events. Correlate with image provenance and network flows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers (sourcetype=\"kube:audit\" OR sourcetype=\"container:runtime\")\n| search (reason=\"PrivilegedContainers\" OR action=\"exec\" OR severity=\"high\")\n| stats count by pod, namespace, user, action\n| where count > 10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Container Runtime Security Event Correlation** — Container escape or privileged container abuse can lead to host compromise. Correlating runtime events with image and orchestrator data improves detection of container-based attacks.\n\nDocumented **Data sources**: Container runtime events, K8s audit log, image registry. **App/TA** (typical add-on context): Container security TAs, Kubernetes audit logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: kube:audit, container:runtime. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=\"kube:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by pod, namespace, user, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (security events by pod), Timeline (runtime events), Bar chart (events by namespace).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Container escape or privileged container abuse can lead to host compromise — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.36",
              "n": "Certificate Transparency Log Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Unauthorized or fraudulent certificate issuance for your domains can enable phishing or MITM. CT log monitoring detects new certs for your domains and supports rapid revocation.",
              "t": "CT log ingestion, custom scripted input",
              "d": "Certificate Transparency log streams, domain watchlist",
              "q": "index=certs sourcetype=\"ct:log\"\n| lookup domain_watchlist.csv domain OUTPUT domain as watched\n| where watched=*\n| stats latest(_time) as issued, values(issuer_cn) as issuers by domain, cn\n| table domain, cn, issued, issuers",
              "m": "Subscribe to CT log streams (e.g., crt.sh) for your domains. Maintain domain watchlist. Alert on new certificates for critical domains. Correlate with internal PKI and public CAs.",
              "z": "Table (new certs by domain), Timeline (issuance events), Single value (certs this week).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CT log ingestion, custom scripted input.\n• Ensure the following data sources are available: Certificate Transparency log streams, domain watchlist.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSubscribe to CT log streams (e.g., crt.sh) for your domains. Maintain domain watchlist. Alert on new certificates for critical domains. Correlate with internal PKI and public CAs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=certs sourcetype=\"ct:log\"\n| lookup domain_watchlist.csv domain OUTPUT domain as watched\n| where watched=*\n| stats latest(_time) as issued, values(issuer_cn) as issuers by domain, cn\n| table domain, cn, issued, issuers\n```\n\nUnderstanding this SPL\n\n**Certificate Transparency Log Monitoring** — Unauthorized or fraudulent certificate issuance for your domains can enable phishing or MITM. CT log monitoring detects new certs for your domains and supports rapid revocation.\n\nDocumented **Data sources**: Certificate Transparency log streams, domain watchlist. **App/TA** (typical add-on context): CT log ingestion, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: certs; **sourcetype**: ct:log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=certs, sourcetype=\"ct:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where watched=*` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by domain, cn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Certificate Transparency Log Monitoring**): table domain, cn, issued, issuers\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (new certs by domain), Timeline (issuance events), Single value (certs this week).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch certificate transparency and issuance patterns so unexpected or fraudulent certificates are noticed early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.37",
              "n": "EDR Tampering and Exclusion Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Attackers often disable or exclude EDR/AV to evade detection. Detecting tampering or new exclusions ensures security controls remain active on endpoints.",
              "t": "EDR/AV agent logs, Windows Security events",
              "d": "EDR agent status, exclusion lists, Defender/AV events",
              "q": "index=endpoint (sourcetype=\"edr:agent\" OR sourcetype=\"defender:event\")\n| search (event_type=\"tampering\" OR event_type=\"exclusion_added\" OR status=\"disabled\")\n| table _time, host, event_type, user, path_or_rule\n| sort -_time",
              "m": "Ingest EDR and AV agent events. Alert on tampering, new exclusions, or agent disabled. Require change approval for exclusions. Report on exclusion coverage by host.",
              "z": "Table (tampering/exclusion events), Timeline (events), Bar chart (by host).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EDR/AV agent logs, Windows Security events.\n• Ensure the following data sources are available: EDR agent status, exclusion lists, Defender/AV events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest EDR and AV agent events. Alert on tampering, new exclusions, or agent disabled. Require change approval for exclusions. Report on exclusion coverage by host.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint (sourcetype=\"edr:agent\" OR sourcetype=\"defender:event\")\n| search (event_type=\"tampering\" OR event_type=\"exclusion_added\" OR status=\"disabled\")\n| table _time, host, event_type, user, path_or_rule\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**EDR Tampering and Exclusion Detection** — Attackers often disable or exclude EDR/AV to evade detection. Detecting tampering or new exclusions ensures security controls remain active on endpoints.\n\nDocumented **Data sources**: EDR agent status, exclusion lists, Defender/AV events. **App/TA** (typical add-on context): EDR/AV agent logs, Windows Security events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: edr:agent, defender:event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"edr:agent\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **EDR Tampering and Exclusion Detection**): table _time, host, event_type, user, path_or_rule\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (tampering/exclusion events), Timeline (events), Bar chart (by host).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Attackers often disable or exclude /AV to evade detection — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.38",
              "n": "Data Loss Prevention Policy Violation Trending",
              "c": "high",
              "f": "beginner",
              "v": "DLP violations indicate policy gaps or insider risk. Trending by policy, user, and channel supports tuning and incident prioritization.",
              "t": "DLP product logs, email/cloud TAs",
              "d": "DLP alert logs, policy evaluation events",
              "q": "index=dlp sourcetype=\"dlp:alert\"\n| timechart span=1h count by policy_name\n| where count > 0",
              "m": "Ingest DLP alerts from email, endpoint, and cloud channels. Classify by policy and severity. Alert on spike in violations or repeated offenders. Report on top policies and channels.",
              "z": "Line chart (violations over time), Bar chart (by policy), Table (top users).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DLP product logs, email/cloud TAs.\n• Ensure the following data sources are available: DLP alert logs, policy evaluation events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest DLP alerts from email, endpoint, and cloud channels. Classify by policy and severity. Alert on spike in violations or repeated offenders. Report on top policies and channels.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dlp sourcetype=\"dlp:alert\"\n| timechart span=1h count by policy_name\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Data Loss Prevention Policy Violation Trending** — DLP violations indicate policy gaps or insider risk. Trending by policy, user, and channel supports tuning and incident prioritization.\n\nDocumented **Data sources**: DLP alert logs, policy evaluation events. **App/TA** (typical add-on context): DLP product logs, email/cloud TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dlp; **sourcetype**: dlp:alert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dlp, sourcetype=\"dlp:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by policy_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (violations over time), Bar chart (by policy), Table (top users).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend DLP policy hits so patterns of data misuse or misconfiguration are obvious before they turn into a major breach.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.8.39",
              "n": "Security Tool Availability and Heartbeat",
              "c": "critical",
              "f": "beginner",
              "v": "Security tools that stop reporting indicate outage or tampering. Heartbeat monitoring ensures continuous visibility and alerts on missing data.",
              "t": "All security data source TAs",
              "d": "Heartbeat or last-event timestamp per source type",
              "q": "index=_internal source=*heartbeat* OR index=main sourcetype=*\n| stats latest(_time) as last_seen by host, sourcetype\n| eval gap=now()-last_seen\n| where gap > 900\n| table host, sourcetype, last_seen, gap",
              "m": "Use Splunk heartbeat or last-seen per host/sourcetype. Alert when any critical source has not reported for >15 minutes. Maintain source inventory and expected reporting interval.",
              "z": "Table (sources with gap), Single value (sources missing), Status grid (host × sourcetype).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: All security data source TAs.\n• Ensure the following data sources are available: Heartbeat or last-event timestamp per source type.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk heartbeat or last-seen per host/sourcetype. Alert when any critical source has not reported for >15 minutes. Maintain source inventory and expected reporting interval.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal source=*heartbeat* OR index=main sourcetype=*\n| stats latest(_time) as last_seen by host, sourcetype\n| eval gap=now()-last_seen\n| where gap > 900\n| table host, sourcetype, last_seen, gap\n```\n\nUnderstanding this SPL\n\n**Security Tool Availability and Heartbeat** — Security tools that stop reporting indicate outage or tampering. Heartbeat monitoring ensures continuous visibility and alerts on missing data.\n\nDocumented **Data sources**: Heartbeat or last-event timestamp per source type. **App/TA** (typical add-on context): All security data source TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal, main; **sourcetype**: *. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, index=main, sourcetype=*. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap > 900` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Security Tool Availability and Heartbeat**): table host, sourcetype, last_seen, gap\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sources with gap), Single value (sources missing), Status grid (host × sourcetype).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Security tools that stop reporting indicate outage or tampering — so we know when protection is off, out of date, or bypassed.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 28.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 39,
            "none": 0
          }
        },
        {
          "i": "10.9",
          "n": "ESCU 2025-2026 Analytic Stories",
          "u": [
            {
              "i": "10.9.1",
              "n": "Suspicious Ollama process execution",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Suspicious Ollama process execution. Detects unexpected ollama or llama runtime processes outside approved paths, consistent with local LLM abuse or cryptomining.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Sysmon EventID 1, Windows Security 4688, Linux auditd, EDR process telemetry",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events attributed to process entities — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Sysmon EventID 1, Windows Security 4688, Linux auditd, EDR process telemetry ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1059",
                "T1204"
              ],
              "dtype": "process_name",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Suspicious Ollama process execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework, attributing risk to process entities. Rather than generating standalone alerts, each firing contributes a risk score to the identified entity — Notable Events are created only when cumulative risk exceeds the configured threshold, significantly reducing alert fatigue while preserving detection coverage.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Sysmon EventID 1, Windows Security 4688, Linux auditd, EDR process telemetry. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1059, T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Suspicious Ollama process execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.\n\n• Adjust the risk score weight in Content Management. 
Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Suspicious Ollama process execution” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.2",
              "n": "Ollama API abuse detection",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Ollama API abuse detection. Identifies anomalous API call volume, model switching, or authentication failures against Ollama endpoints.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Web proxy, API gateway logs, Ollama server access logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Web proxy, API gateway logs, Ollama server access logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1078"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ollama API abuse detection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Web proxy, API gateway logs, Ollama server access logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ollama API abuse detection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Ollama API abuse detection” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.3",
              "n": "LLM framework unauthorized model download",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: LLM framework unauthorized model download. Detects pull or download of models from non-approved registries or unexpected hosts.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, proxy, container image pull logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, proxy, container image pull logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1105",
                "T1566"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"LLM framework unauthorized model download\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, proxy, container image pull logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1105, T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"LLM framework unauthorized model download\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “LLM framework unauthorized model download” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.4",
              "n": "Prompt injection attempt via API",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Prompt injection attempt via API. Correlates API payloads containing injection patterns with downstream tool or retrieval actions.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "API gateway logs, LLM gateway audit, WAF",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires API gateway logs, LLM gateway audit, WAF ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1059.007",
                "T1190"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Prompt injection attempt via API\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: API gateway logs, LLM gateway audit, WAF. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1059.007, T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Prompt injection attempt via API\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Prompt injection attempt via API” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.5",
              "n": "Prompt extraction defense evasion",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Prompt extraction defense evasion. Detects attempts to extract system prompts, policies, or hidden instructions from chat interfaces.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "LLM gateway logs, application security logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires LLM gateway logs, application security logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1552.004",
                "T1562"
              ],
              "dtype": "Hunting",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Prompt extraction defense evasion\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: LLM gateway logs, application security logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1552.004, T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Prompt extraction defense evasion\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. 
To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
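\n\nIllustrative Hunting Logic (Hypothetical Sketch)\n\nThe drilldown SPL above shows risk context rather than the hunting logic itself. As a starting point for the hunt, the sketch below is a hypothetical example, not the packaged ESCU search: the index (`llm_gateway`), sourcetype (`llm:gateway:request`), and fields (`prompt`, `user`, `src_ip`) are assumptions; substitute whatever your LLM gateway add-on actually emits.\n\n```spl\nindex=llm_gateway sourcetype=llm:gateway:request\n| eval prompt_lc=lower(prompt)\n| where like(prompt_lc, \"%ignore previous instructions%\") OR like(prompt_lc, \"%reveal your system prompt%\") OR like(prompt_lc, \"%repeat the text above%\")\n| stats count min(_time) as firstTime max(_time) as lastTime by user, src_ip\n| where count > 3\n```\n\nThe phrase list is deliberately small; extend it from prompt-injection attempts observed in your own gateway traffic, and treat matches as hunting leads rather than verdicts.",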
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Prompt extraction defense evasion” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.6",
              "n": "MCP server unauthorized tool invocation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: MCP server unauthorized tool invocation. Flags tool calls from MCP sessions to disallowed tools or hosts.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "MCP server audit logs, application logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires MCP server audit logs, application logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1059",
                "T1078"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MCP server unauthorized tool invocation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: MCP server audit logs, application logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1059, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MCP server unauthorized tool invocation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
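\n\nIllustrative Detection Logic (Hypothetical Sketch)\n\nThe shipped detection logic lives inside the ESCU correlation search definition and is not reproduced in this catalog entry. The sketch below is a hypothetical illustration of an allow-list check for MCP tool calls, not the packaged search: the index (`mcp_audit`), sourcetype (`mcp:server:audit`), lookup (`approved_mcp_tools`), and fields (`action`, `tool_name`, `session_id`, `dest`) are assumptions; rename them to match what your MCP server add-on actually produces.\n\n```spl\nindex=mcp_audit sourcetype=mcp:server:audit action=tool_call\n| lookup approved_mcp_tools tool_name OUTPUT approved\n| where isnull(approved)\n| stats count min(_time) as firstTime max(_time) as lastTime values(dest) as targets by user, session_id, tool_name\n```\n\nKeeping the allow-list in a lookup keeps tuning out of the SPL, so the same search can serve multiple MCP deployments with different approved tool sets.",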
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “MCP server unauthorized tool invocation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.7",
              "n": "MCP session hijacking indicators",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: MCP session hijacking indicators. Detects token reuse, IP changes mid-session, or concurrent sessions for the same MCP identity.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "MCP server logs, session store, IdP sign-in logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires MCP server logs, session store, IdP sign-in logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1550",
                "T1078"
              ],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MCP session hijacking indicators\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: MCP server logs, session store, IdP sign-in logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1550, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MCP session hijacking indicators\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
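\n\nIllustrative Detection Logic (Hypothetical Sketch)\n\nThe shipped detection logic lives inside the ESCU correlation search definition and is not reproduced in this catalog entry. The sketch below is a hypothetical illustration of one documented indicator, a session served from more than one source IP: the index (`mcp_logs`), sourcetype (`mcp:server:session`), and fields (`session_id`, `src_ip`, `user`) are assumptions; rename them to match your MCP server add-on.\n\n```spl\nindex=mcp_logs sourcetype=mcp:server:session\n| stats dc(src_ip) as distinct_ips values(src_ip) as source_ips min(_time) as firstTime max(_time) as lastTime by session_id, user\n| where distinct_ips > 1\n```\n\nToken reuse and concurrent sessions for the same identity follow the same aggregation pattern, keyed on token ID or user instead of session ID. Mobile clients and VPN egress changes can legitimately shift source IPs, so correlate with IdP sign-in logs before escalating.",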
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “MCP session hijacking indicators” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.8",
              "n": "Microsoft 365 Copilot sensitive data access",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Microsoft 365 Copilot sensitive data access. Surfaces Copilot queries or actions touching highly sensitive labels or restricted SharePoint sites.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Microsoft 365 audit, Purview, Copilot activity logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Microsoft 365 audit, Purview, Copilot activity logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1530",
                "T1114"
              ],
              "dtype": "TTP",
              "sdomain": "cloud",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Microsoft 365 Copilot sensitive data access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Microsoft 365 audit, Purview, Copilot activity logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (cloud): Ensure data relevant to the cloud security domain is ingested and CIM-normalized.\n• MITRE ATT&CK: T1530, T1114. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Microsoft 365 Copilot sensitive data access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
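\n\nIllustrative Detection Logic (Hypothetical Sketch)\n\nThe shipped detection logic lives inside the ESCU correlation search definition and is not reproduced in this catalog entry. The sketch below is a hypothetical illustration of one possible approach against the Microsoft 365 unified audit log: the index (`m365`) and the lookup (`restricted_sites`) are assumptions, and the `CopilotInteraction` operation and field names (`SiteUrl`, `UserId`) vary by add-on and audit schema version, so verify every name against your own events before relying on it.\n\n```spl\nindex=m365 sourcetype=o365:management:activity Operation=CopilotInteraction\n| lookup restricted_sites site_url AS SiteUrl OUTPUT restricted\n| where restricted=\"true\"\n| stats count min(_time) as firstTime max(_time) as lastTime values(SiteUrl) as sites by UserId\n```\n\nDriving the restricted-site list from a lookup lets data governance teams update scope without editing the search.",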
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Microsoft 365 Copilot sensitive data access” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.9",
              "n": "Copilot plugin abuse",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Copilot plugin abuse. Detects installation or invocation of unapproved Copilot plugins or connectors.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Microsoft 365 audit, Entra ID app consent logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Microsoft 365 audit, Entra ID app consent logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1098",
                "T1562"
              ],
              "dtype": "TTP",
              "sdomain": "cloud",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Copilot plugin abuse\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Microsoft 365 audit, Entra ID app consent logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (cloud): Ensure data relevant to the cloud security domain is ingested and CIM-normalized.\n• MITRE ATT&CK: T1098, T1562. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Copilot plugin abuse\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
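\n\nIllustrative Detection Logic (Hypothetical Sketch)\n\nThe shipped detection logic lives inside the ESCU correlation search definition and is not reproduced in this catalog entry. The sketch below is a hypothetical illustration that flags app consents missing from an approved-plugin list: the index (`azuread`), lookup (`approved_copilot_plugins`), and the JSON path are assumptions; the `azure:monitor:aad` sourcetype and \"Consent to application\" operation come from the Microsoft cloud add-ons but should be verified against your own events.\n\n```spl\nindex=azuread sourcetype=azure:monitor:aad operationName=\"Consent to application\"\n| spath output=app_name path=properties.targetResources{0}.displayName\n| lookup approved_copilot_plugins app_name OUTPUT approved\n| where isnull(approved)\n| stats count min(_time) as firstTime max(_time) as lastTime by app_name\n```\n\nConsent events also carry the consenting identity; adding it to the by clause helps separate admin-approved rollouts from individual user consents.",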
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Copilot plugin abuse” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.10",
              "n": "AI model exfiltration attempt",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: AI model exfiltration attempt. Identifies large outbound transfers of model weights or checkpoints to external destinations.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "DLP, egress proxy, cloud storage audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires DLP, egress proxy, cloud storage audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048",
                "T1567"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AI model exfiltration attempt\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: DLP, egress proxy, cloud storage audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1048, T1567. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AI model exfiltration attempt\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
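\n\nIllustrative Detection Logic (Hypothetical Sketch)\n\nThe shipped detection logic lives inside the ESCU correlation search definition and is not reproduced in this catalog entry. The sketch below is a hypothetical illustration of the egress-proxy angle, looking for large outbound transfers of model-weight file types: the index (`proxy`), sourcetype (`proxy:access`), and fields (`url`, `bytes_out`, `src`, `dest`, `user`) are assumptions, and the 1 GB threshold is arbitrary; size it against your normal ML traffic.\n\n```spl\nindex=proxy sourcetype=proxy:access\n| regex url=\"\\.(pt|pth|ckpt|safetensors|gguf|onnx)($|\\?)\"\n| stats sum(bytes_out) as total_bytes values(dest) as destinations by src, user\n| where total_bytes > 1073741824\n```\n\nChunked or renamed uploads will evade an extension match, so pair this with DLP and cloud storage audit coverage rather than relying on the proxy view alone.",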
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “AI model exfiltration attempt” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.11",
              "n": "LLM API key exposure",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: LLM API key exposure. Correlates secrets scanners or repo events with LLM provider API key patterns.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Secret scanning, Git, CI/CD, cloud IAM",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Secret scanning, Git, CI/CD, cloud IAM ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1552.001",
                "T1528"
              ],
              "dtype": "Hunting",
              "sdomain": "cloud",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"LLM API key exposure\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Secret scanning, Git, CI/CD, cloud IAM. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (cloud): Ensure data relevant to the cloud security domain is ingested and CIM-normalized.\n• MITRE ATT&CK: T1552.001, T1528. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"LLM API key exposure\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. 
To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
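\n\nIllustrative Hunting Logic (Hypothetical Sketch)\n\nThe drilldown SPL above shows risk context rather than the hunting logic itself. As a starting point for the hunt, the sketch below is a hypothetical example, not the packaged ESCU search: the index (`cicd`), sourcetypes (`github:enterprise:audit`, `secrets:scanner`), and fields (`repository`, `user`) are assumptions, and the regex matches only `sk-` style keys used by some LLM providers; extend it for every provider you use.\n\n```spl\nindex=cicd (sourcetype=github:enterprise:audit OR sourcetype=secrets:scanner)\n| regex _raw=\"sk-(ant-)?[A-Za-z0-9_-]{20,}\"\n| stats count min(_time) as firstTime max(_time) as lastTime by sourcetype, repository, user\n```\n\nAny hit warrants immediate key rotation at the provider and a review of the provider-side usage logs for the exposed key.",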
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “LLM API key exposure” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.12",
              "n": "AI training data poisoning indicators",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: AI training data poisoning indicators. Detects anomalous training pipeline inputs or label changes from non-pipeline principals.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ML platform audit, data lake object versioning, CI for training",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ML platform audit, data lake object versioning, CI for training ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1565",
                "T1078"
              ],
              "dtype": "TTP",
              "sdomain": "cloud",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AI training data poisoning indicators\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ML platform audit, data lake object versioning, CI for training. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (cloud): Ensure data relevant to the cloud security domain is ingested and CIM-normalized.\n• MITRE ATT&CK: T1565, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AI training data poisoning indicators\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
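\n\nIllustrative Detection Logic (Hypothetical Sketch)\n\nThe shipped detection logic lives inside the ESCU correlation search definition and is not reproduced in this catalog entry. The sketch below is a hypothetical illustration that flags training-data writes by principals outside a pipeline allow-list: the index (`datalake`), lookup (`pipeline_service_accounts`), object prefix (`training-data/*`), and fields (`principal`, `key`, `src_ip`) are assumptions, and the `aws:s3:accesslogs` sourcetype with `REST.PUT.OBJECT` applies only if the training data lives in S3.\n\n```spl\nindex=datalake sourcetype=aws:s3:accesslogs operation=REST.PUT.OBJECT key=\"training-data/*\"\n| lookup pipeline_service_accounts principal OUTPUT is_pipeline\n| where isnull(is_pipeline)\n| stats count min(_time) as firstTime max(_time) as lastTime values(key) as modified_objects by principal, src_ip\n```\n\nObject-versioning events give the same signal for in-place label edits, which a pure write-volume check would miss.",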
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “AI training data poisoning indicators” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.13",
              "n": "AI service account privilege escalation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: AI service account privilege escalation. Tracks unusual role assignments or token scope expansion for AI workload identities.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cloud IAM, Kubernetes service account, Entra ID",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cloud IAM, Kubernetes service account, Entra ID ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "TTP",
              "sdomain": "cloud",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AI service account privilege escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cloud IAM, Kubernetes service account, Entra ID. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (cloud): Ensure data relevant to the cloud security domain is ingested and CIM-normalized.\n• MITRE ATT&CK: T1078, T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AI service account privilege escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
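              "_splSketch": "Hypothetical sketch, not the packaged ESCU logic: if these IAM sources are CIM-normalized to the Change data model, a per-identity baseline of role and permission changes could look like this (the svc-ai-* naming convention is an assumption).\n\n```spl\n| tstats summariesonly=true count from datamodel=Change where All_Changes.action=\"modified\" All_Changes.user=\"svc-ai-*\" by All_Changes.user, All_Changes.command, _time span=1h\n| rename All_Changes.* as *\n| eventstats avg(count) as avg_hourly stdev(count) as stdev_hourly by user\n| where count > avg_hourly + 3 * stdev_hourly\n```\n\nThe signal worth escalating is a spike in role or token-scope changes against a workload identity relative to its own history, not the absolute count.",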
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “AI service account privilege escalation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.14",
              "n": "Generative AI data leakage",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Generative AI data leakage. Monitors outputs or retrievals that include PII or regulated data in violation of policy.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "LLM gateway, DLP inline inspection",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires LLM gateway, DLP inline inspection ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048",
                "T1530"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Generative AI data leakage\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: LLM gateway, DLP inline inspection. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1048, T1530. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Generative AI data leakage\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
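              "_splSketch": "Hypothetical sketch, not the packaged ESCU logic. The index, sourcetype and DLP field names are placeholders for your LLM gateway / inline DLP feed; the idea is to separate repeated leakage from one-off matches.\n\n```spl\nindex=llm_gateway sourcetype=\"llm:proxy\" dlp_verdict=\"match\"\n| stats count as dlp_hits values(dlp_rule) as rules dc(session_id) as sessions by user, model\n| where dlp_hits > 5\n| sort -dlp_hits\n```\n\nRepeated matches on the same user and model pair, across several sessions, are what typically justify a risk event rather than a quiet log entry.",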
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Generative AI data leakage” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.15",
              "n": "Chatbot session manipulation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Chatbot session manipulation. Detects session fixation, cookie tampering, or role escalation in chatbot web sessions.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Web server logs, WAF, application auth logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Web server logs, WAF, application auth logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1556",
                "T1078"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Chatbot session manipulation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Web server logs, WAF, application auth logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1556, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Chatbot session manipulation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
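              "_splSketch": "Hypothetical sketch, not the packaged ESCU logic. Assuming your web or WAF logs expose a session identifier (the session_id field is a placeholder), a classic fixation or hijack signal is one session cookie used from multiple client IPs or user agents.\n\n```spl\nindex=web sourcetype=access_combined uri_path=\"/chatbot/*\"\n| stats dc(clientip) as distinct_ips dc(useragent) as distinct_uas values(clientip) as ips by session_id\n| where distinct_ips > 1 OR distinct_uas > 1\n```\n\nCorrelate flagged sessions with the application auth log to check whether the user or role bound to the session changed mid-conversation.",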
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Chatbot session manipulation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.16",
              "n": "LLM output content policy violation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: LLM output content policy violation. Flags policy blocks, moderation escalations, or jailbreak success signals from safety classifiers.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "LLM gateway, moderation API logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires LLM gateway, moderation API logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1562.003",
                "T1204"
              ],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"LLM output content policy violation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: LLM gateway, moderation API logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1562.003, T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"LLM output content policy violation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
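              "_splSketch": "Hypothetical sketch, not the packaged ESCU logic. Index, sourcetype and fields are placeholders for your moderation API feed; the goal is to distinguish sustained jailbreak attempts from a single accidental trigger.\n\n```spl\nindex=llm_gateway sourcetype=\"llm:moderation\" action=\"blocked\"\n| bin _time span=15m\n| stats count as blocks values(policy_category) as categories by user, _time\n| where blocks >= 3\n```\n\nThree or more moderation blocks from the same user inside fifteen minutes reads very differently from one stray prompt; tune the threshold to your traffic volume.",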
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “LLM output content policy violation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.17",
              "n": "AI model API unauthorized endpoint access",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: AI model API unauthorized endpoint access. Detects calls to admin or internal-only model endpoints from external networks or users.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "API gateway, Zero Trust access logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires API gateway, Zero Trust access logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1046"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AI model API unauthorized endpoint access\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: API gateway, Zero Trust access logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1046. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AI model API unauthorized endpoint access\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
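              "_splSketch": "Hypothetical sketch, not the packaged ESCU logic. The index, endpoint paths and internal address ranges below are assumptions for an API gateway feed.\n\n```spl\nindex=api_gateway (uri_path=\"/admin/*\" OR uri_path=\"/internal/*\")\n| where NOT (cidrmatch(\"10.0.0.0/8\", src_ip) OR cidrmatch(\"172.16.0.0/12\", src_ip) OR cidrmatch(\"192.168.0.0/16\", src_ip))\n| stats count values(uri_path) as paths values(status) as statuses by src_ip, user\n| sort -count\n```\n\nEven a 403 from an external address against an internal-only model endpoint is worth a risk event, since it shows the endpoint is being enumerated.",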
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “AI model API unauthorized endpoint access” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.18",
              "n": "AI pipeline integrity monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: AI pipeline integrity monitoring. Validates checksums, signatures, or pipeline stage approvals for model build and deploy.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "CI/CD, artifact registry, signing logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires CI/CD, artifact registry, signing logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1195",
                "T1552"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AI pipeline integrity monitoring\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: CI/CD, artifact registry, signing logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1195, T1552. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AI pipeline integrity monitoring\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
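              "_splSketch": "Hypothetical sketch, not the packaged ESCU logic. Sourcetypes and the artifact_digest field are placeholders; the pattern is pushed artifacts that have no matching signing event.\n\n```spl\nindex=cicd sourcetype=\"registry:push\"\n| join type=left artifact_digest [ search index=cicd sourcetype=\"signing:log\" | eval signed=\"yes\" | fields artifact_digest, signed ]\n| where isnull(signed)\n| table _time, repo, artifact_digest, pusher\n```\n\nSubsearch limits apply to join, so for high-volume registries a stats-based correlation on artifact_digest scales better than this join.",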
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “AI pipeline integrity monitoring” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.19",
              "n": "Shadow AI service detection",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Shadow AI service detection. Identifies unsanctioned SaaS AI tools or self-hosted models via DNS, TLS SNI, or expense signals.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "DNS, proxy, CASB, firewall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires DNS, proxy, CASB, firewall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1567.002",
                "T1584"
              ],
              "dtype": "Hunting",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Shadow AI service detection\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: DNS, proxy, CASB, firewall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1567.002, T1584. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Shadow AI service detection\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. 
To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
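              "_splSketch": "Hypothetical hunting sketch, not the packaged ESCU search. It assumes you maintain a lookup (here called ai_saas_domains, with domain and sanctioned columns) of known AI services and their approval status.\n\n```spl\nindex=proxy sourcetype=\"squid:access\"\n| eval domain=lower(dest_host)\n| lookup ai_saas_domains domain OUTPUT sanctioned\n| where isnotnull(sanctioned) AND sanctioned!=\"yes\"\n| stats dc(user) as users sum(bytes_out) as total_bytes_out values(user) as user_list by domain\n| sort -users\n```\n\nDNS or TLS SNI logs make a useful fallback where proxy coverage is partial; keeping the lookup current is the main maintenance burden of this hunt.",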
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Shadow AI service detection” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.20",
              "n": "AI compute resource abuse",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: AI compute resource abuse. Detects GPU or high-cost inference spikes inconsistent with baseline workloads.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cloud billing, K8s metrics, scheduler logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cloud billing, K8s metrics, scheduler logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1496",
                "T1498"
              ],
              "dtype": "TTP",
              "sdomain": "cloud",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"AI compute resource abuse\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cloud billing, K8s metrics, scheduler logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (cloud): Ensure data relevant to the cloud security domain is ingested and CIM-normalized.\n• MITRE ATT&CK: T1496, T1498. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"AI compute resource abuse\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “AI compute resource abuse” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.21",
              "n": "Hellcat ransomware encryption behavior",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Hellcat ransomware encryption behavior. Detects mass file extension changes and encryption tool process chains associated with Hellcat.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, Sysmon, Windows Security",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, Sysmon, Windows Security ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Hellcat ransomware encryption behavior\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, Sysmon, Windows Security. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Hellcat ransomware encryption behavior\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Hellcat ransomware encryption behavior” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.22",
              "n": "Storm-0501 lateral movement",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Storm-0501 lateral movement. Correlates known Storm-0501 TTPs including RMM abuse and credential staging.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, VPN, RMM product logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, VPN, RMM product logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1021",
                "T1078"
              ],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Storm-0501 lateral movement\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, VPN, RMM product logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1021, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Storm-0501 lateral movement\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Storm-0501 lateral movement” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.23",
              "n": "Interlock ransomware deployment",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Interlock ransomware deployment. Identifies Interlock loader and deployment patterns prior to encryption.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, email gateway, proxy",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, email gateway, proxy ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1204",
                "T1486"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Interlock ransomware deployment\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, email gateway, proxy. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1204, T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Interlock ransomware deployment\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Interlock ransomware deployment” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.24",
              "n": "Termite ransomware file staging",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Termite ransomware file staging. Detects Termite-related staging directories and pre-encryption behavior.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, Sysmon file events",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, Sysmon file events ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486",
                "T1070"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Termite ransomware file staging\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, Sysmon file events. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1486, T1070. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Termite ransomware file staging\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Termite ransomware file staging” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.25",
              "n": "NailaoLocker ransom note creation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: NailaoLocker ransom note creation. Surfaces ransom note filenames and content indicators for NailaoLocker.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, file integrity monitoring",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, file integrity monitoring ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"NailaoLocker ransom note creation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, file integrity monitoring. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"NailaoLocker ransom note creation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “NailaoLocker ransom note creation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.26",
              "n": "DynoWiper disk wipe activity",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: DynoWiper disk wipe activity. Detects destructive wiper patterns aligned with DynoWiper campaigns.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, disk/volume management logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, disk/volume management logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1485",
                "T1561"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"DynoWiper disk wipe activity\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, disk/volume management logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1485, T1561. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"DynoWiper disk wipe activity\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “DynoWiper disk wipe activity” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.27",
              "n": "ZOVWiper partition manipulation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: ZOVWiper partition manipulation. Identifies destructive partition or volume operations consistent with ZOVWiper.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, Windows System logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, Windows System logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1561.001"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ZOVWiper partition manipulation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, Windows System logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1561.001. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ZOVWiper partition manipulation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “ZOVWiper partition manipulation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.28",
              "n": "PathWiper MBR corruption",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: PathWiper MBR corruption. Detects boot sector or MBR overwrite activity.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, low-level disk telemetry",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, low-level disk telemetry ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1561.002"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PathWiper MBR corruption\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, low-level disk telemetry. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1561.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PathWiper MBR corruption\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “PathWiper MBR corruption” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.29",
              "n": "BlackBasta new variant indicators",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: BlackBasta new variant indicators. Tracks updated IOCs and behaviors for emerging BlackBasta variants.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, network IDS, threat intel",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires EDR, network IDS, threat intel ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486",
                "T1021"
              ],
              "dtype": "Hunting",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"BlackBasta new variant indicators\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, network IDS, threat intel. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1486, T1021. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"BlackBasta new variant indicators\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. 
For Hunting content such as this, however, the search is typically left unscheduled and run on demand. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nIf you schedule this hunting search, verify it is firing (narrow ss_name from \"*\" to the detection name to reduce noise):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*BlackBasta*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “BlackBasta new variant indicators” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.30",
              "n": "RansomHub affiliate tooling",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: RansomHub affiliate tooling. Correlates RansomHub affiliate toolchains and initial access brokers.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, email, proxy",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, email, proxy ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1204",
                "T1486"
              ],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"RansomHub affiliate tooling\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, email, proxy. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1204, T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"RansomHub affiliate tooling\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “RansomHub affiliate tooling” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.31",
              "n": "Play ransomware VMware targeting",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Play ransomware VMware targeting. Detects Play-related activity against vSphere and guest VMs.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMware vCenter logs, ESXi, EDR",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires VMware vCenter logs, ESXi, EDR ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486",
                "T1190"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Play ransomware VMware targeting\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMware vCenter logs, ESXi, EDR. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1486, T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Play ransomware VMware targeting\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Play ransomware VMware targeting” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.32",
              "n": "Akira ransomware Linux variant",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Akira ransomware Linux variant. Linux-specific Akira encryption and credential access behaviors.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Linux auditd, EDR for Linux",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Linux auditd, EDR for Linux ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486",
                "T1078"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Akira ransomware Linux variant\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Linux auditd, EDR for Linux. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1486, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Akira ransomware Linux variant\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Akira ransomware Linux variant” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.33",
              "n": "LockBit successor indicators",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: LockBit successor indicators. Hunting content for post-LockBit ecosystem behaviors and rebrands.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, intel feeds, dark web scrapes (where permitted)",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires EDR, intel feeds, dark web scrapes (where permitted) ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486"
              ],
              "dtype": "Hunting",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"LockBit successor indicators\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, intel feeds, dark web scrapes (where permitted). Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"LockBit successor indicators\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. 
For Hunting content such as this, however, the search is typically left unscheduled and run on demand. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nIf you schedule this hunting search, verify it is firing (narrow ss_name from \"*\" to the detection name to reduce noise):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*LockBit*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “LockBit successor indicators” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.34",
              "n": "Medusa ransomware extortion",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Medusa ransomware extortion. Detects Medusa encryption plus leak-site correlation signals from internal telemetry.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, DLP, email exfil patterns",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, DLP, email exfil patterns ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486",
                "T1048"
              ],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Medusa ransomware extortion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, DLP, email exfil patterns. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1486, T1048. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Medusa ransomware extortion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
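As one way to implement the lookup-based suppressions described above, the sketch below filters risk events for entities on an approved-maintenance list before review. The `maintenance_allowlist` lookup (with fields `risk_object` and `approved`) is hypothetical: define your own lookup and field names. `index=risk` is the ES default risk index.

```spl
index=risk search_name="*Medusa*"
| lookup maintenance_allowlist risk_object OUTPUT approved
| where isnull(approved)
| stats count values(risk_message) as risk_messages by risk_object, risk_object_type
```

Entities that repeatedly surface as benign here are candidates for the per-entity risk exceptions noted above, rather than for disabling the detection.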
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Medusa ransomware extortion” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.35",
              "n": "Qilin ransomware data staging",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Qilin ransomware data staging. Identifies Qilin-related staging and exfil prior to encryption.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, network DLP",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, network DLP ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486",
                "T1048"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Qilin ransomware data staging\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, network DLP. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1486, T1048. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Qilin ransomware data staging\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
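The packaged ESCU search remains the authoritative detection logic; as a complementary hunt for the staging behavior this use case targets, a minimal sketch over the Endpoint data model might look like the following. The archiver list is illustrative, not exhaustive.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where Processes.process_name IN ("7z.exe", "rar.exe", "winrar.exe", "tar.exe")
    by Processes.dest, Processes.user, Processes.process_name, Processes.process
| rename Processes.* as *
| convert ctime(firstTime) ctime(lastTime)
| sort -count
```

Hosts showing sudden bursts of archiver activity outside known backup jobs are worth correlating with the network DLP telemetry listed as a data source.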
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Qilin ransomware data staging” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.36",
              "n": "Ransomware via RMM tool abuse",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Ransomware via RMM tool abuse. Detects encryption processes spawned from remote management sessions.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "RMM logs, EDR parent-child",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires RMM logs, EDR parent-child ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1021.001",
                "T1486"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ransomware via RMM tool abuse\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: RMM logs, EDR parent-child. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1021.001, T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ransomware via RMM tool abuse\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
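To illustrate the parent-child pattern this detection targets, here is a minimal Endpoint data model sketch that flags impact-stage utilities spawned under remote-management agents. The RMM process names are assumptions; replace them with your organization's sanctioned RMM inventory.

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where Processes.parent_process_name IN ("screenconnect.exe", "anydesk.exe", "teamviewer.exe", "atera.exe")
      AND Processes.process_name IN ("vssadmin.exe", "wbadmin.exe", "bcdedit.exe", "cipher.exe")
    by Processes.dest, Processes.parent_process_name, Processes.process_name, Processes.process
| rename Processes.* as *
| convert ctime(firstTime) ctime(lastTime)
```

Any hit here pairs naturally with the RMM session logs listed as a data source: confirm whether the session was operator-initiated before escalating.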
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Ransomware via RMM tool abuse” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.37",
              "n": "Fileless ransomware execution",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Fileless ransomware execution. Surfaces in-memory or script-only ransomware chains without traditional binary drops.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, PowerShell Script Block, AMSI",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, PowerShell Script Block, AMSI ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1059",
                "T1486"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Fileless ransomware execution\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, PowerShell Script Block, AMSI. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1059, T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Fileless ransomware execution\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
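For the script-only chains this use case describes, PowerShell Script Block Logging (EventCode 4104) is often the richest raw source. A minimal sketch follows; the index name is hypothetical, the keyword list is a starting point rather than a complete indicator set, and field names (`ScriptBlockText`, `Computer`) depend on your Windows TA extractions and input format.

```spl
index=wineventlog source="WinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4104
| regex ScriptBlockText="(?i)(FromBase64String|Invoke-Expression|New-Object\s+Net\.WebClient)"
| stats count min(_time) as firstTime max(_time) as lastTime by Computer
| convert ctime(firstTime) ctime(lastTime)
| sort -count
```

Hits concentrated on one host within a short window align with the in-memory execution pattern this detection scores.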
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Fileless ransomware execution” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.38",
              "n": "Ransomware backup deletion",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Ransomware backup deletion. Detects wbadmin, vssadmin, or backup catalog tampering preceding encryption.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, Windows Security",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, Windows Security ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1490",
                "T1486"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ransomware backup deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, Windows Security. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1490, T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ransomware backup deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
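The behavior this detection scores, backup and catalog tampering ahead of encryption, can also be hunted directly over the Endpoint data model. A minimal sketch under those assumptions (the command-line strings are illustrative, not the packaged ESCU logic):

```spl
| tstats summariesonly=true count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where (Processes.process_name="vssadmin.exe" AND Processes.process="*delete shadows*")
       OR (Processes.process_name="wbadmin.exe" AND Processes.process="*delete catalog*")
    by Processes.dest, Processes.user, Processes.process
| rename Processes.* as *
| convert ctime(firstTime) ctime(lastTime)
```

These commands are rare in routine administration, so even a single hit outside a documented maintenance window deserves a look.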
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Ransomware backup deletion” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.39",
              "n": "Ransomware print bomb",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Ransomware print bomb. Identifies driver abuse or print spooler anomalies used for impact.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, Print Service operational logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, Print Service operational logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1491",
                "T1486"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ransomware print bomb\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, Print Service operational logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1491, T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ransomware print bomb\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
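Mass ransom-note printing tends to surface as a burst of job events (EventCode 307, "Printing a document") in the PrintService operational log. A minimal rate-based sketch follows; the index name, the 5-minute window, and the 50-job threshold are assumptions to tune against your normal print volume.

```spl
index=wineventlog source="WinEventLog:Microsoft-Windows-PrintService/Operational" EventCode=307
| bin _time span=5m
| stats count as print_jobs by _time, host
| where print_jobs > 50
```

Enable the PrintService/Operational channel on print servers first; it is not collected by default.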
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Ransomware print bomb” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.40",
              "n": "Ransomware network share encryption",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Ransomware network share encryption. High-rate file modifications on SMB shares from single hosts.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, file server audit, Zeek SMB",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, file server audit, Zeek SMB ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486",
                "T1021.002"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ransomware network share encryption\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, file server audit, Zeek SMB. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1486, T1021.002. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ransomware network share encryption\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
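The high-rate file-modification signal can be approximated from Windows file-server object-access auditing (EventCode 4663). A minimal sketch, assuming classic WinEventLog field extractions (`Object_Name`, `Object_Type`, `Accesses`, `Security_ID`) and an illustrative 500-file threshold:

```spl
index=wineventlog EventCode=4663 Object_Type=File Accesses="*WriteData*"
| bin _time span=5m
| stats dc(Object_Name) as files_modified by _time, host, Security_ID
| where files_modified > 500
```

If you ingest Zeek SMB telemetry instead, an equivalent sketch counts distinct files with write actions per source host in `smb_files` events over the same window.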
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Ransomware network share encryption” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.41",
              "n": "Ransomware shadow copy deletion",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Ransomware shadow copy deletion. Detects vssadmin/bcdedit/cipher abuse to remove recovery points.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, Sysmon",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, Sysmon ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1490"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ransomware shadow copy deletion\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, Sysmon. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1490. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ransomware shadow copy deletion\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
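If you want a raw-event equivalent to sanity-check coverage while the ESCU detection is being rolled out, the classic pattern is process-creation events where vssadmin, wbadmin, bcdedit, or wmic carry recovery-destruction arguments. A minimal sketch, assuming Sysmon EventCode 1 lands in `index=sysmon` with `sourcetype=XmlWinEventLog` (field names vary by TA):

```spl
index=sysmon sourcetype="XmlWinEventLog" EventCode=1
    (Image="*\\vssadmin.exe" OR Image="*\\wbadmin.exe" OR Image="*\\bcdedit.exe" OR Image="*\\wmic.exe")
    (CommandLine="*delete shadows*" OR CommandLine="*delete catalog*" OR CommandLine="*shadowcopy delete*" OR CommandLine="*recoveryenabled no*")
| stats count min(_time) as firstTime max(_time) as lastTime by host, User, Image, CommandLine
| convert ctime(firstTime) ctime(lastTime)
```

Legitimate backup maintenance can run the same binaries, so keep the false-positive guidance above in mind and suppress known backup service accounts.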
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Ransomware shadow copy deletion” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.42",
              "n": "Ransomware safe mode boot",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Ransomware safe mode boot. Surfaces reboot into safe mode followed by encryption tooling.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, Windows System 6008/6009",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, Windows System 6008/6009 ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486",
                "T1529"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ransomware safe mode boot\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, Windows System 6008/6009. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1486, T1529. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ransomware safe mode boot\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
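For local validation of this behavior, the most direct raw-event angle is bcdedit being used to force a safe-mode boot. A minimal sketch, assuming Sysmon process creation in `index=sysmon` (the index and field names are assumptions):

```spl
index=sysmon sourcetype="XmlWinEventLog" EventCode=1
    Image="*\\bcdedit.exe" CommandLine="*safeboot*"
| stats count min(_time) as firstTime by host, User, CommandLine
| convert ctime(firstTime)
```

Pairing these events with a subsequent Windows System 6008/6009 reboot record on the same host raises confidence considerably.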
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Ransomware safe mode boot” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.43",
              "n": "Ransomware ESXi targeting",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Ransomware ESXi targeting. ESXi-specific encryption and locker deployment behaviors.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VMware ESXi logs, storage, EDR",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires VMware ESXi logs, storage, EDR ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1486",
                "T1190"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ransomware ESXi targeting\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VMware ESXi logs, storage, EDR. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1486, T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ransomware ESXi targeting\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
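A local hunting sketch for the same behavior can watch for bursts of VM shutdown commands in ESXi shell/syslog output, since lockers typically power off guests before encrypting datastores. The index, sourcetype, and threshold below are assumptions; point them at wherever your ESXi syslog lands:

```spl
index=esxi sourcetype="vmw-syslog"
    ("esxcli vm process kill" OR "vim-cmd vmsvc/power.off")
| bin _time span=10m
| stats count as poweroff_cmds values(host) as esxi_hosts by _time
| where poweroff_cmds > 5
```

More than a handful of forced power-offs in ten minutes outside a change window is rarely benign.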
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Ransomware ESXi targeting” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.44",
              "n": "Ransomware vCenter exploitation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Ransomware vCenter exploitation. Detects exploitation or admin abuse leading to mass VM encryption.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "vCenter, ESXi, firewall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires vCenter, ESXi, firewall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1486"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ransomware vCenter exploitation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: vCenter, ESXi, firewall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ransomware vCenter exploitation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
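While the ESCU content covers the exploitation patterns, a cheap complementary check is flagging vCenter management-plane access from outside your admin network. The sketch below is illustrative only: `index=firewall`, the `vcenter.example.com` destination, and the `10.10.0.0/16` admin subnet are all placeholder assumptions:

```spl
index=firewall action=allowed dest="vcenter.example.com" dest_port IN (443, 902, 5480)
| where NOT cidrmatch("10.10.0.0/16", src_ip)
| stats count earliest(_time) as firstTime by src_ip, dest_port
| convert ctime(firstTime)
```

Any hit from a non-admin subnet deserves a look before it escalates into the mass-encryption stage this detection targets.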
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Ransomware vCenter exploitation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.45",
              "n": "Double extortion data staging",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Double extortion data staging. Correlates large archive exfiltration with subsequent encryption on same host.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "DLP, EDR, proxy",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires DLP, EDR, proxy ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048",
                "T1486"
              ],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Double extortion data staging\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: DLP, EDR, proxy. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1048, T1486. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Double extortion data staging\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
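To approximate the staging half of this correlation locally, you can baseline outbound upload volume per host from proxy data. A minimal sketch, assuming CIM-style proxy fields (`src`, `dest`, `bytes_out`, `http_method`) in `index=proxy`; the ~1 GB threshold is an assumption to tune against your own baseline:

```spl
index=proxy (http_method=POST OR http_method=PUT)
| stats sum(bytes_out) as upload_bytes dc(dest) as distinct_dests by src
| where upload_bytes > 1073741824
| sort -upload_bytes
```

A host that uploads gigabytes shortly before ransomware indicators appear on the same machine is exactly the double-extortion pattern this detection targets.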
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Double extortion data staging” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.46",
              "n": "MuddyWater PowerShell dropper",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: MuddyWater PowerShell dropper. Detects MuddyWater PowerShell staging and dropper patterns.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, PowerShell Script Block, proxy",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, PowerShell Script Block, proxy ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1059.001",
                "T1059.003"
              ],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MuddyWater PowerShell dropper\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, PowerShell Script Block, proxy. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1059.001, T1059.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MuddyWater PowerShell dropper\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
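PowerShell Script Block Logging (EventCode 4104) gives a direct raw-event view of the dropper patterns this story covers. A minimal sketch, assuming script block logs land in `index=wineventlog` under the standard PowerShell/Operational channel; the index and string patterns are assumptions, not the ESCU logic itself:

```spl
index=wineventlog source="XmlWinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4104
    (ScriptBlockText="*DownloadString*" OR ScriptBlockText="*FromBase64String*" OR ScriptBlockText="*Invoke-Expression*")
| stats count min(_time) as firstTime by Computer, ScriptBlockText
| convert ctime(firstTime)
```

Expect hits from legitimate automation; triage by host role and script provenance before escalating.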
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “MuddyWater PowerShell dropper” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.47",
              "n": "Scattered Spider social engineering TTPs",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Scattered Spider social engineering TTPs. Identifies help-desk and credential theft tactics associated with Scattered Spider.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "IdP, MFA, service desk logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires IdP, MFA, service desk logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1566",
                "T1078"
              ],
              "dtype": "Hunting",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Scattered Spider social engineering TTPs\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: IdP, MFA, service desk logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1566, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Scattered Spider social engineering TTPs\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. 
To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
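A useful companion hunt for this story is MFA-fatigue pressure in your IdP logs, one of the group's signature moves. The sketch below assumes Okta data via the Splunk Add-on for Okta (`sourcetype=OktaIM2:log`); the index, eventType, field names, and the threshold of five denials are assumptions, so translate them to your IdP:

```spl
index=okta sourcetype="OktaIM2:log" eventType="user.authentication.auth_via_mfa" "outcome.result"=FAILURE
| bin _time span=15m
| stats count as mfa_denials by _time, "actor.alternateId"
| where mfa_denials >= 5
```

Correlate flagged users with service-desk password or MFA reset tickets in the same window to complete the help-desk angle.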
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Scattered Spider social engineering tactics” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.48",
              "n": "China-Nexus supply chain compromise",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: China-Nexus supply chain compromise. Hunting for supply-chain indicators tied to China-nexus campaigns.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Software bill of materials, CI/CD, package repos",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Software bill of materials, CI/CD, package repos ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1195",
                "T1195.003"
              ],
              "dtype": "Hunting",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"China-Nexus supply chain compromise\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Software bill of materials, CI/CD, package repos. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1195, T1195.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"China-Nexus supply chain compromise\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. 
To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
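\n\nOptional Scoping Sketch\n\nTo narrow the hunt before a broad run, you can first profile which entities already carry risk events from related content. This is a minimal sketch that assumes the default `risk` index and a detection name containing the story title; adjust both to your environment:\n\n```spl\nindex=risk search_name=\"*China-Nexus*\" earliest=-30d\n| stats count min(_time) as firstTime max(_time) as lastTime by risk_object, risk_object_type\n| sort -count\n```",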
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “China-Nexus supply chain compromise” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.49",
              "n": "Secret Blizzard credential harvesting",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Secret Blizzard credential harvesting. Detects tradecraft aligned with Secret Blizzard operations.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, AD, Azure AD",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, AD, Azure AD ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1003",
                "T1078"
              ],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Secret Blizzard credential harvesting\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, AD, Azure AD. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1003, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Secret Blizzard credential harvesting\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (narrow `ss_name` to the detection’s name as it appears in Content Management):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Secret Blizzard*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
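\n\nOptional Tuning Sketch\n\nBefore adjusting the risk score weight, it helps to see how much risk this detection actually contributes per entity. This is a minimal sketch that assumes the default `risk` index and a detection name containing the story title; rename both if your deployment differs:\n\n```spl\nindex=risk search_name=\"*Secret Blizzard*\"\n| stats count sum(risk_score) as total_risk values(risk_message) as messages\n  by risk_object, risk_object_type\n| sort -total_risk\n```",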
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Secret Blizzard credential harvesting” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.50",
              "n": "Earth Alux zero-day exploitation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Earth Alux zero-day exploitation. Surfaces exploitation of zero-days in Earth Alux targeting chains.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, WAF, IPS/IDS",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, WAF, IPS/IDS ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1203"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Earth Alux zero-day exploitation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, WAF, IPS/IDS. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1203. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Earth Alux zero-day exploitation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (narrow `ss_name` to the detection’s name as it appears in Content Management):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Earth Alux*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
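\n\nOptional Trend Sketch\n\nA daily trend of this detection’s risk events helps distinguish steady background noise from a genuine exploitation burst. This is a minimal sketch that assumes the default `risk` index and a detection name containing the story title:\n\n```spl\nindex=risk search_name=\"*Earth Alux*\"\n| timechart span=1d sum(risk_score) as daily_risk\n```",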
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Earth Alux zero-day exploitation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.51",
              "n": "Lotus Blossom watering hole",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Lotus Blossom watering hole. Detects watering-hole web compromise and victim profiling.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Web server logs, WAF, proxy",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Web server logs, WAF, proxy ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1189",
                "T1566"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Lotus Blossom watering hole\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Web server logs, WAF, proxy. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1189, T1566. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Lotus Blossom watering hole\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (narrow `ss_name` to the detection’s name as it appears in Content Management):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Lotus Blossom*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
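\n\nOptional Suppression Sketch\n\nThe lookup-based suppression described above can be prototyped directly in search before you edit the correlation search. This is a minimal sketch; `known_good_scanners.csv` is a hypothetical lookup you would create with a `src` column listing approved scanner and maintenance IPs:\n\n```spl\nindex=risk search_name=\"*Lotus Blossom*\"\n| search NOT [| inputlookup known_good_scanners.csv | fields src | rename src as risk_object]\n| stats count by risk_object\n```",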
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Lotus Blossom watering hole” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.52",
              "n": "APT37 Rustonotto loader",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: APT37 Rustonotto loader. Identifies Rustonotto loader execution and C2 patterns.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, network IDS",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, network IDS ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1059",
                "T1071"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"APT37 Rustonotto loader\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, network IDS. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1059, T1071. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"APT37 Rustonotto loader\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (narrow `ss_name` to the detection’s name as it appears in Content Management):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Rustonotto*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
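\n\nOptional Coverage Sketch\n\nTo see which ATT&CK tactics this detection’s risk events are annotated with (useful when mapping coverage gaps), you can run a minimal sketch over the Risk data model. The field names follow the drilldown SPL above, but confirm they exist in your Risk data model:\n\n```spl\n| tstats summariesonly=false count from datamodel=Risk.All_Risk\n  where All_Risk.search_name=\"*Rustonotto*\"\n  by All_Risk.risk_object, All_Risk.annotations.mitre_attack.mitre_tactic\n| sort -count\n```",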
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “APT37 Rustonotto loader” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.53",
              "n": "Salt Typhoon telecom targeting",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Salt Typhoon telecom targeting. Telemetry aligned with Salt Typhoon targeting of telecom infrastructure.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Router, firewall, SNMP, telecom OSS logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Router, firewall, SNMP, telecom OSS logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1078"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Salt Typhoon telecom targeting\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Router, firewall, SNMP, telecom OSS logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Salt Typhoon telecom targeting\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (narrow `ss_name` to the detection’s name as it appears in Content Management):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Salt Typhoon*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
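\n\nOptional Exception Sketch\n\nTo identify entities that trip this detection persistently (candidates for the per-entity risk exceptions described above), use this minimal sketch. It assumes the default `risk` index; tune the threshold to your alert volume:\n\n```spl\nindex=risk search_name=\"*Salt Typhoon*\" earliest=-30d\n| stats count dc(risk_message) as distinct_messages by risk_object\n| where count > 10\n| sort -count\n```",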
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Salt Typhoon telecom targeting” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.54",
              "n": "Volt Typhoon living-off-the-land",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Volt Typhoon living-off-the-land. Detects LOTL commands and network pivoting consistent with Volt Typhoon.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, Zeek, firewall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, Zeek, firewall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1059",
                "T1021"
              ],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Volt Typhoon living-off-the-land\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, Zeek, firewall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1059, T1021. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Volt Typhoon living-off-the-land\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing (narrow `ss_name` to the detection’s name as it appears in Content Management):\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*Volt Typhoon*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```
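\n\nOptional Pivot Sketch\n\nDuring triage it helps to see every detection contributing risk to the same entity, not just this one. This is a minimal sketch that assumes the default `risk` index; `WIN-SRV-01` is a placeholder for the entity under investigation:\n\n```spl\nindex=risk risk_object=\"WIN-SRV-01\"\n| stats sum(risk_score) as total_risk values(search_name) as contributing_detections\n  by risk_object, risk_object_type\n```",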
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Volt Typhoon living-off-the-land” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.55",
              "n": "Sandworm ICS targeting",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Sandworm ICS targeting. ICS/OT anomalies correlated with Sandworm-related TTPs.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "OT protocols, IDS, historian",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires OT protocols, IDS, historian ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T0856",
                "T0806"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Sandworm ICS targeting\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: OT protocols, IDS, historian. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T0856, T0806. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Sandworm ICS targeting\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Sandworm ICS targeting” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.56",
              "n": "Midnight Blizzard OAuth abuse",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Midnight Blizzard OAuth abuse. Detects OAuth app abuse and token theft aligned with Midnight Blizzard.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Entra ID, OAuth audit, CASB",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Entra ID, OAuth audit, CASB ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1528",
                "T1098"
              ],
              "dtype": "TTP",
              "sdomain": "cloud",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Midnight Blizzard OAuth abuse\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Entra ID, OAuth audit, CASB. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (cloud): Ensure data relevant to the cloud security domain is ingested and CIM-normalized.\n• MITRE ATT&CK: T1528, T1098. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Midnight Blizzard OAuth abuse\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Midnight Blizzard OAuth abuse” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.57",
              "n": "APT29 cloud infrastructure abuse",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: APT29 cloud infrastructure abuse. Identifies APT29 cloud persistence and token replay.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Azure AD, AWS CloudTrail, GCP audit",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Azure AD, AWS CloudTrail, GCP audit ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078",
                "T1550"
              ],
              "dtype": "TTP",
              "sdomain": "cloud",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"APT29 cloud infrastructure abuse\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Azure AD, AWS CloudTrail, GCP audit. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (cloud): Ensure data relevant to the cloud security domain is ingested and CIM-normalized.\n• MITRE ATT&CK: T1078, T1550. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"APT29 cloud infrastructure abuse\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “APT29 cloud infrastructure abuse” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.58",
              "n": "Kimsuky credential phishing",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Kimsuky credential phishing. Phishing and macro tradecraft associated with Kimsuky.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Email gateway, EDR",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Email gateway, EDR ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1566.001",
                "T1204"
              ],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Kimsuky credential phishing\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Email gateway, EDR. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1566.001, T1204. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Kimsuky credential phishing\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator or Identity Investigator to review the entity’s full activity timeline and correlate with threat intelligence and other security events.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Kimsuky credential phishing” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.59",
              "n": "Lazarus cryptocurrency targeting",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Lazarus cryptocurrency targeting. Detects wallet interaction and infrastructure tied to Lazarus crypto operations.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Blockchain analytics exports, proxy, EDR",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Hunting detection — not designed for automated alerting. Run ad-hoc or on a low-frequency schedule from Splunk Enterprise Security for analyst-driven investigation. Requires Blockchain analytics exports, proxy, EDR ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1531",
                "T1048"
              ],
              "dtype": "Hunting",
              "sdomain": "threat",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Lazarus cryptocurrency targeting\" is a Hunting detection designed for analyst-driven threat hunting rather than automated alerting, sourced from the Splunk Enterprise Security Content Update (ESCU). Hunting detections are not intended for automated alerting. They are run on-demand or on a low-frequency schedule by security analysts investigating specific threat hypotheses.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Blockchain analytics exports, proxy, EDR. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (threat): Configure threat intelligence feeds and verify the Threat Intelligence data model is populated for IOC matching.\n• MITRE ATT&CK: T1531, T1048. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Lazarus cryptocurrency targeting\" or filter by Analytic Story.\n3. Hunting detections are typically left disabled for automated scheduling. Instead, run them on-demand from the Search bar or schedule at low frequency (daily or weekly).\n4. Review results manually — hunting detections cast a wider net and expect analyst judgment to separate signal from noise.\n5. When results warrant further investigation, create a Notable Event manually or initiate your incident response workflow.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.\n\n• Hunting detections are expected to produce broader result sets that require analyst interpretation. Focus on refining the search scope (time range, specific hosts or users) rather than eliminating all noise.\n• Maintain a hunting journal documenting hypotheses tested, results found, and detection improvements identified.\n• Share high-fidelity findings with the detection engineering team to convert recurring hunting patterns into automated TTP or Anomaly detections.\n\nAnalyst Response Workflow\n\nWhen reviewing Hunting results:\n\n1. Run the search manually or review scheduled results. Evaluate each returned event against your threat hypothesis — not every result indicates compromise.\n2. Cross-reference findings with current threat intelligence, recent security advisories, and outputs from other detection sources.\n3. If suspicious activity is confirmed, create a Notable Event or escalate directly to your incident response process with the supporting evidence.\n4. Document the hunting exercise: hypothesis, data sources queried, time range, findings, and any detection engineering improvements identified.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. 
To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nIf you have chosen to schedule the search, verify it is firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Lazarus cryptocurrency targeting” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.60",
              "n": "APT28 router exploitation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: APT28 router exploitation. Surfaces exploitation of edge routers and VPN concentrators consistent with APT28.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Router syslog, IPS, Zeek",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Router syslog, IPS, Zeek ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1021"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"APT28 router exploitation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Router syslog, IPS, Zeek. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1021. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"APT28 router exploitation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “APT28 router exploitation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.61",
              "n": "SAP NetWeaver CVE exploitation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: SAP NetWeaver CVE exploitation. Detects exploitation attempts against SAP NetWeaver known CVEs.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "SAP logs, WAF, IPS",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires SAP logs, WAF, IPS ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SAP NetWeaver CVE exploitation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: SAP logs, WAF, IPS. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SAP NetWeaver CVE exploitation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “SAP NetWeaver known flaws exploitation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.62",
              "n": "Oracle E-Business Suite RCE",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Oracle E-Business Suite RCE. Surfaces RCE patterns against Oracle EBS endpoints.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "WAF, app logs, Oracle HTTP",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires WAF, app logs, Oracle HTTP ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1059"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Oracle E-Business Suite RCE\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: WAF, app logs, Oracle HTTP. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Oracle E-Business Suite RCE\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
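A quick ad-hoc hunt can confirm whether exploit-style traffic is reaching Oracle E-Business Suite entry points before risk events accumulate. This is a minimal sketch, not the packaged ESCU logic: `index=web`, `sourcetype=waf`, and the `/OA_HTML/` and `/OA_CGI/` URI patterns are illustrative assumptions; substitute the index, sourcetype, and EBS paths actually present in your environment (field names follow the CIM Web data model).

```spl
index=web sourcetype=waf http_method=POST (uri_path="/OA_HTML/*" OR uri_path="/OA_CGI/*")
| stats count min(_time) as firstTime max(_time) as lastTime values(status) as status by src, dest, uri_path
| where count > 10
| sort -count
```

A low threshold (`count > 10`) favors recall; raise it if routine EBS traffic from application servers dominates the results.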
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Oracle E-Business Suite remote code flaws” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.63",
              "n": "SharePoint deserialization exploit",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: SharePoint deserialization exploit. Detects deserialization and gadget chain abuse in SharePoint.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "IIS, SharePoint ULS, WAF",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires IIS, SharePoint ULS, WAF ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1203"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"SharePoint deserialization exploit\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: IIS, SharePoint ULS, WAF. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1203. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"SharePoint deserialization exploit\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
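To hunt for this activity directly in web server telemetry, a simple IIS search can surface POST requests to one publicly documented deserialization endpoint. This is a minimal sketch under stated assumptions: `index=web` and `sourcetype=ms:iis:auto` are placeholders for your IIS feed, and `ToolPane.aspx` is just one publicized SharePoint target; the packaged ESCU content carries the authoritative pattern set.

```spl
index=web sourcetype=ms:iis:auto cs_method=POST cs_uri_stem="*/ToolPane.aspx"
| stats count min(_time) as firstTime max(_time) as lastTime values(sc_status) as status by c_ip, cs_uri_stem
| sort -count
```

Successful responses (HTTP 200) to such requests warrant immediate review of the SharePoint ULS logs for the same timeframe.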
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “SharePoint deserialization exploit” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.64",
              "n": "Apache Tomcat CVE detection",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Apache Tomcat CVE detection. Identifies known exploit patterns for Tomcat CVEs.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Tomcat access logs, WAF",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Tomcat access logs, WAF ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Apache Tomcat CVE detection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Tomcat access logs, WAF. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Apache Tomcat CVE detection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
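One historically common Tomcat exploitation pattern is uploading a JSP web shell via HTTP PUT (the CVE-2017-12617 class). The sketch below assumes CIM-normalized Tomcat access logs at a hypothetical `index=web` / `sourcetype=tomcat:access:log`; adjust both names to your deployment and treat this as a supplementary hunt, not the ESCU detection itself.

```spl
index=web sourcetype=tomcat:access:log http_method=PUT uri_path="*.jsp*"
| stats count min(_time) as firstTime max(_time) as lastTime values(status) as status by src, dest, uri_path
| sort -count
```

Any PUT of a .jsp resource that returns 201 or 204 is a strong signal; 403/405 responses still indicate probing worth correlating with other risk events.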
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Apache Tomcat known flaws detection” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.65",
              "n": "CLFS zero-day privilege escalation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: CLFS zero-day privilege escalation. Kernel and CLFS-related privilege escalation telemetry.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "EDR, Windows kernel events",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires EDR, Windows kernel events ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1068"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"CLFS zero-day privilege escalation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: EDR, Windows kernel events. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1068. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"CLFS zero-day privilege escalation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
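Because CLFS exploitation typically involves a user-land process staging a crafted base log file, Sysmon file-creation telemetry offers a cheap supplementary hunt. This sketch assumes Sysmon is onboarded at a hypothetical `index=endpoint` with `sourcetype=XmlWinEventLog`; Event ID 11 is FileCreate, and `.blf` is the CLFS base log file extension. It is illustrative only and noisier than the packaged ESCU analytics.

```spl
index=endpoint sourcetype=XmlWinEventLog EventCode=11 TargetFilename="*.blf"
| search NOT Image="C:\\Windows\\System32\\*"
| stats count min(_time) as firstTime max(_time) as lastTime by host, Image, TargetFilename
| sort -count
```

Expect legitimate `.blf` writes from Windows components; the exclusion above is a crude starting filter, so tune the `Image` allowlist to your fleet.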
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “CLFS zero-day privilege escalation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.66",
              "n": "Apache Struts RCE",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Apache Struts RCE. Detects OGNL and Struts RCE exploit attempts.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "WAF, reverse proxy, app logs",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires WAF, reverse proxy, app logs ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Apache Struts RCE\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: WAF, reverse proxy, app logs. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Apache Struts RCE\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
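Struts RCE payloads typically embed OGNL expressions such as `%{(#...)}` in the URI, request parameters, or Content-Type header, which makes a raw-pattern hunt across web telemetry a useful complement. This is a minimal sketch assuming a hypothetical `index=web` with WAF or reverse proxy sourcetypes; the regex is deliberately broad and will need tuning.

```spl
index=web (sourcetype=waf OR sourcetype=proxy)
| regex _raw="%\{[^}]+\}"
| stats count min(_time) as firstTime max(_time) as lastTime by src, dest, uri_path
| sort -count
```

Template engines and URL-encoded content can legitimately contain `%{`, so review matches before acting; the packaged ESCU searches encode the vetted patterns.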
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Apache Struts remote code flaws” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.67",
              "n": "Cisco IOS XE implant detection",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Cisco IOS XE implant detection. Surfaces IOS XE web implant and privilege escalation behaviors.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Cisco IOS XE telemetry, NetFlow",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Cisco IOS XE telemetry, NetFlow ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1505"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Cisco IOS XE implant detection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Cisco IOS XE telemetry, NetFlow. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1505. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Cisco IOS XE implant detection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
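On the device side, the 2023 IOS XE web UI implant campaign produced a well-publicized syslog artifact: configuration changes applied programmatically by the `SEP_webui_wsma_http` process. The sketch below hunts for that message, assuming IOS syslog is onboarded at a hypothetical `index=netops` with the Cisco Networks add-on's `cisco:ios` sourcetype; it complements, rather than replaces, the ESCU detection.

```spl
index=netops sourcetype=cisco:ios "%SYS-5-CONFIG_P" "SEP_webui_wsma_http"
| stats count min(_time) as firstTime max(_time) as lastTime by host
| sort -count
```

Any hit should trigger the vendor-recommended implant check against the device's web UI and a review of recently created local users.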
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Cisco IOS XE implant detection” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.68",
              "n": "Ivanti Connect Secure exploitation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Ivanti Connect Secure exploitation. Detects auth bypass and RCE chains against Ivanti Connect Secure.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "VPN logs, WAF, IDS",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires VPN logs, WAF, IDS ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Ivanti Connect Secure exploitation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: VPN logs, WAF, IDS. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Ivanti Connect Secure exploitation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
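For direct hunting, two URI patterns published during the 2024 Ivanti Connect Secure exploitation wave (the auth bypass and command-injection chain behind CVE-2023-46805/CVE-2024-21887) can be searched in WAF or proxy telemetry. This is a minimal sketch: `index=web`, the sourcetypes, and the CIM field names are assumptions to adapt, and the URIs are examples from public reporting, not an exhaustive set.

```spl
index=web (sourcetype=waf OR sourcetype=proxy) (uri_path="/api/v1/totp/user-backup-code*" OR uri_path="/api/v1/license/keys-status*")
| stats count min(_time) as firstTime max(_time) as lastTime values(status) as status by src, dest, uri_path
| sort -count
```

Requests embedding path traversal (`/../`) against these endpoints deserve the highest priority during triage.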
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Ivanti Connect Secure exploitation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.69",
              "n": "Citrix Bleed exploitation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Citrix Bleed exploitation. Session token theft patterns for Citrix Bleed-class issues.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Citrix ADC logs, WAF",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Citrix ADC logs, WAF ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1550",
                "T1190"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Citrix Bleed exploitation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Citrix ADC logs, WAF. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1550, T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Citrix Bleed exploitation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Citrix Bleed exploitation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.70",
              "n": "MOVEit Transfer CVE",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: MOVEit Transfer CVE. Detects MOVEit Transfer exploitation and unexpected bulk downloads.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "MOVEit logs, proxy, DLP",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires MOVEit logs, proxy, DLP ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1048"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"MOVEit Transfer CVE\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: MOVEit logs, proxy, DLP. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1048. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"MOVEit Transfer CVE\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “MOVEit Transfer known flaws” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.71",
              "n": "ScreenConnect exploitation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: ScreenConnect exploitation. Surfaces auth bypass and remote code execution against ScreenConnect.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ScreenConnect audit, firewall",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ScreenConnect audit, firewall ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1021"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ScreenConnect exploitation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ScreenConnect audit, firewall. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1021. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ScreenConnect exploitation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “ScreenConnect exploitation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.72",
              "n": "ConnectWise vulnerability abuse",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: ConnectWise vulnerability abuse. Detects exploitation of ConnectWise control plane issues.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "ConnectWise logs, EDR",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires ConnectWise logs, EDR ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ConnectWise vulnerability abuse\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: ConnectWise logs, EDR. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ConnectWise vulnerability abuse\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “ConnectWise vulnerability abuse” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.73",
              "n": "Fortinet FortiOS CVE",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Fortinet FortiOS CVE. Identifies SSL-VPN and FortiOS exploitation indicators.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "FortiOS syslog, IPS",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires FortiOS syslog, IPS ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Fortinet FortiOS CVE\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: FortiOS syslog, IPS. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Fortinet FortiOS CVE\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. 
The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Fortinet FortiOS known flaws” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.74",
              "n": "PaperCut exploitation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: PaperCut exploitation. Detects RCE and auth abuse in PaperCut MF.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "PaperCut application logs, WAF",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires PaperCut application logs, WAF ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"PaperCut exploitation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: PaperCut application logs, WAF. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"PaperCut exploitation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “PaperCut exploitation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.75",
              "n": "Atlassian Confluence RCE",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Atlassian Confluence RCE. Surfaces OGNL/template RCE attempts against Confluence.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Confluence access logs, WAF",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Confluence access logs, WAF ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Atlassian Confluence RCE\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Confluence access logs, WAF. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Atlassian Confluence RCE\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Atlassian Confluence remote code flaws” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.76",
              "n": "Log4Shell persistent exploitation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Log4Shell persistent exploitation. Ongoing Log4j JNDI callback and post-exploitation activity.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "WAF, Java app logs, EDR",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires WAF, Java app logs, EDR ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1059"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Log4Shell persistent exploitation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: WAF, Java app logs, EDR. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1059. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Log4Shell persistent exploitation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
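              "_splSketch": "Optional raw-log hunting companion (a minimal sketch, not the packaged ESCU detection logic): Log4Shell exploitation attempts typically carry JNDI lookup strings in request fields. The index shown and the stats fields are assumptions; point this at whatever web/WAF feed you actually ingest.\n\n```spl\nindex=web (\"${jndi:\" OR \"%24%7Bjndi\")\n| stats count min(_time) as firstTime max(_time) as lastTime by src, http_user_agent, uri\n| sort - count\n```",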
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Log4Shell persistent exploitation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.77",
              "n": "Spring4Shell detection",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Spring4Shell detection. Spring Core RCE exploitation patterns.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Tomcat/Spring access logs, WAF",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Tomcat/Spring access logs, WAF ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Spring4Shell detection\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Tomcat/Spring access logs, WAF. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Spring4Shell detection\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
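              "_splSketch": "Optional raw-log hunting companion (a minimal sketch, not the packaged ESCU detection logic): Spring4Shell (CVE-2022-22965) requests typically reference class.module.classLoader parameters. The index and stats fields are assumptions; adjust them to your Tomcat/WAF feed.\n\n```spl\nindex=web \"class.module.classLoader\"\n| stats count min(_time) as firstTime max(_time) as lastTime by src, uri, status\n| sort - count\n```",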
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Spring4Shell detection” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.78",
              "n": "ProxyShell/ProxyNotShell",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: ProxyShell/ProxyNotShell. Exchange ProxyShell and related SSRF/RCE chains.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "IIS, Exchange, WAF",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires IIS, Exchange, WAF ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1048"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"ProxyShell/ProxyNotShell\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: IIS, Exchange, WAF. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1048. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"ProxyShell/ProxyNotShell\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
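              "_splSketch": "Optional raw-log hunting companion (a minimal sketch, not the packaged ESCU detection logic): ProxyShell chains abuse the Exchange Autodiscover endpoint to reach the PowerShell backend. The index, sourcetype, and W3C field names are assumptions; match them to your IIS/Exchange ingestion.\n\n```spl\nindex=web sourcetype=*iis* cs_uri_stem=\"/autodiscover/autodiscover.json\" cs_uri_query=\"*powershell*\"\n| stats count by c_ip, cs_uri_query, sc_status\n| sort - count\n```",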
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “ProxyShell/ProxyNotShell” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.79",
              "n": "Exchange Server CVE chain",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: Exchange Server CVE chain. Multi-stage Exchange exploitation and web shell activity.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "Exchange, IIS, EDR",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires Exchange, IIS, EDR ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1505.003"
              ],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"Exchange Server CVE chain\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: Exchange, IIS, EDR. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (network): Ensure network traffic, DNS, and proxy logs are flowing and the Network Traffic and Network Resolution data models are populated.\n• MITRE ATT&CK: T1190, T1505.003. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"Exchange Server CVE chain\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the device’s network connections, DNS queries, and traffic volume patterns. Check for beaconing behavior, unusual destination IPs, or data exfiltration indicators.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
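              "_splSketch": "Optional raw-log hunting companion (a minimal sketch, not the packaged ESCU detection logic): Exchange web shells are commonly dropped under OWA auth paths and then driven via POST requests. The index, sourcetype, and W3C field names are assumptions.\n\n```spl\nindex=web sourcetype=*iis* cs_method=POST cs_uri_stem=\"/owa/auth/*.aspx\"\n| stats count min(_time) as firstTime by c_ip, cs_uri_stem, sc_status\n| sort - count\n```",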
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “Exchange Server known flaws chain” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.9.80",
              "n": "JetBrains TeamCity exploitation",
              "c": "high",
              "f": "intermediate",
              "v": "This detection aligns with ESCU 2025–2026 analytic story coverage for: JetBrains TeamCity exploitation. Detects auth bypass and pipeline abuse in TeamCity.",
              "t": "Splunk Security Essentials / ESCU",
              "d": "TeamCity audit logs, CI runners",
              "q": "| from datamodel Risk.All_Risk | search normalized_risk_object IN (\"$user$\", \"$dest$\") starthoursago=168  | stats count min(_time) as firstTime max(_time) as lastTime values(search_name) as \"Search Name\" values(risk_message) as \"Risk Message\" values(analyticstories) as \"Analytic Stories\" values(annotations._all) as \"Annotations\" values(annotations.mitre_attack.mitre_tactic) as \"ATT&CK Tactics\" by normalized_risk_object | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`",
              "m": "ESCU Risk-Based Alerting detection. Enable in ES Content Management as a Correlation Search. Generates risk events — Notable Events fire when an entity’s cumulative risk exceeds the configured threshold. Requires TeamCity audit logs, CI runners ingested and CIM-normalized.",
              "z": "Table, Timeline (from ESCU).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1190",
                "T1078"
              ],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Enterprise Security Detection Rule\n\n\"JetBrains TeamCity exploitation\" is a TTP (Tactics, Techniques, and Procedures) detection that identifies known adversary behaviors with high fidelity, sourced from the Splunk Enterprise Security Content Update (ESCU). It operates within the Risk-Based Alerting (RBA) framework. Rather than generating standalone alerts, it contributes risk scores to identified entities — Notable Events are created only when cumulative risk exceeds the threshold.\n\nPrerequisites\n\n• Splunk Enterprise Security 7.x or later with the ES Content Update (ESCU) app installed and up to date.\n• Data sources: TeamCity audit logs, CI runners. Must be ingested into Splunk and normalized to the Common Information Model (CIM) via the appropriate Technology Add-on.\n• Security domain (endpoint): Configure endpoint data collection (Sysmon, EDR agents, Windows Security Event Logs) and verify the Endpoint data model is populated and accelerated.\n• MITRE ATT&CK: T1190, T1078. Review the ATT&CK matrix for adjacent techniques to identify detection coverage gaps in your environment.\n\nDeployment\n\n1. In Enterprise Security, navigate to Configure → Content → Content Management.\n2. Search for \"JetBrains TeamCity exploitation\" or filter by Analytic Story to locate the detection.\n3. Review the detection’s configuration: scheduling interval, risk score weight, and risk message template. The default risk score reflects the detection’s relative severity — adjust based on your organization’s risk tolerance.\n4. Enable the detection. It will run as a scheduled Correlation Search and write risk events to the risk index when conditions are met.\n5. Verify the Risk Notable aggregation rule is enabled (Content Management → search for \"Risk Notable\"). This is the correlation search that fires when an entity’s cumulative risk score crosses the configured threshold, creating the Notable Event that analysts investigate.\n6. Optionally configure Adaptive Response Actions on this detection — for example, automatically enriching risk events with threat intelligence lookups, adding entities to a triage watchlist, or triggering a SOAR playbook for high-confidence detections.\n\nTuning and False Positive Management\n\nKnown false positives for this detection: Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.\n\n• Adjust the risk score weight in Content Management. Start with the ESCU default and increase for detections that consistently produce true positives in your environment; decrease for noisy detections.\n• Use the Risk Analysis dashboard in ES to identify which detections contribute the most risk events and which entities are most frequently flagged — this reveals both coverage strengths and tuning opportunities.\n• Create lookup-based suppressions for known-good activity: approved administrative tools, vulnerability scanner IPs, scheduled batch processes, and maintenance windows.\n• If the detection fires frequently on a specific entity that is consistently benign, consider adding a per-entity risk exception rather than disabling the detection entirely — this preserves coverage for other entities.\n\nAnalyst Response Workflow\n\nWhen a Risk Notable fires for an entity associated with this detection:\n\n1. Open the Notable Event in Incident Review. 
Examine the entity’s risk timeline — this detection is one of potentially many contributing risk signals. The composite risk score provides more context than any single detection alone.\n2. Review the Risk Message and Analytic Story annotations to understand what adversary behavior was detected and its position in the MITRE ATT&CK kill chain.\n3. Pivot to the Asset Investigator to review the host’s recent process executions, file modifications, registry changes, and network connections. Cross-reference with EDR telemetry for process trees and parent-child relationships.\n4. Assess the full scope: check for related risk events from the same Analytic Story and from other entities that may indicate lateral movement or a coordinated attack.\n5. Determine disposition: True Positive (initiate incident response), Benign True Positive (legitimate activity — document and consider tuning), or False Positive (add suppression and adjust detection).\n6. Update the Notable Event status, set the owner, and document findings in the investigation notes for audit trail and team visibility.\n\nAbout the SPL Query Shown Above\n\nThe SPL displayed for this use case is the Risk Investigation drilldown search — it queries the Risk data model to show all risk events associated with a specific entity. This is the search analysts use during investigation to understand what contributed to an entity’s risk score.\n\nThe actual detection logic is packaged within the ESCU Correlation Search definition and runs automatically on schedule. To view or modify the detection’s underlying search logic, navigate to Configure → Content → Content Management and click on the detection name.\n\nValidation\n\nConfirm the required data sources are flowing:\n\n```spl\n| tstats count where index=* by index, sourcetype | sort -count\n```\n\nVerify the detection is enabled and firing:\n\n```spl\nindex=_audit action=\"alert_fired\" ss_name=\"*\"\n| stats count by ss_name, trigger_time | sort -trigger_time\n```",
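              "_splSketch": "Optional raw-log hunting companion (a minimal sketch, not the packaged ESCU detection logic): the TeamCity auth-bypass chain (CVE-2023-42793) creates access tokens via the /app/rest/users/.../tokens/ REST path. The index and stats fields are assumptions; adjust them to however your TeamCity access or audit logs are ingested.\n\n```spl\nindex=web \"/app/rest/users/\" \"/tokens/\"\n| stats count by src, uri, status\n| sort - count\n```",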
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our security program’s detections to catch activity tied to “JetBrains TeamCity exploitation” when someone tries that attacker move in our environment, so we can respond before the damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "escu": true,
              "escu_rba": true,
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 32.1,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 80,
            "none": 0
          }
        },
        {
          "i": "10.10",
          "n": "Detection Efficacy & YARA Monitoring",
          "u": [
            {
              "i": "10.10.1",
              "n": "Detection rule false positive rate tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks the ratio of false positives to total hits per correlation search to prioritize tuning.",
              "t": "Splunk Enterprise Security / custom",
              "d": "Summary metrics from ES or custom TA populating detection_metrics",
              "q": "index=summary sourcetype=detection_metrics\n| bin _time span=1d\n| eval fp_rate=if(total_hits>0, false_positives/total_hits, 0)\n| timechart span=1d avg(fp_rate) by rule_name",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Single value, Time chart",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security / custom.\n• Ensure the following data sources are available: Summary metrics from ES or custom TA populating detection_metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=summary sourcetype=detection_metrics\n| bin _time span=1d\n| eval fp_rate=if(total_hits>0, false_positives/total_hits, 0)\n| timechart span=1d avg(fp_rate) by rule_name\n```\n\nUnderstanding this SPL\n\n**Detection rule false positive rate tracking** — Tracks the ratio of false positives to total hits per correlation search to prioritize tuning.\n\nDocumented **Data sources**: Summary metrics from ES or custom TA populating detection_metrics. **App/TA** (typical add-on context): Splunk Enterprise Security / custom. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: summary; **sourcetype**: detection_metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=summary, sourcetype=detection_metrics. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `eval` defines or adjusts **fp_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by rule_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value, Time chart\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how often our rules cry wolf versus hit real issues so we can tune detections and give leadership honest numbers on quality.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.2",
              "n": "Detection rule true positive attribution",
              "c": "medium",
              "f": "intermediate",
              "v": "Attributes confirmed incidents to originating detection rules for coverage reporting.",
              "t": "Splunk Enterprise Security / custom",
              "d": "Splunk ES notable events (`` `notable` `` macro), incident review fields",
              "q": "`notable` status=closed disposition=confirmed\n| stats count by rule_name, disposition\n| sort - count",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Single value, Time chart",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security / custom.\n• Ensure the following data sources are available: Splunk ES notable events (`` `notable` `` macro), incident review fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` status=closed disposition=confirmed\n| stats count by rule_name, disposition\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Detection rule true positive attribution** — Attributes confirmed incidents to originating detection rules for coverage reporting.\n\nDocumented **Data sources**: Splunk ES notable events (`` `notable` `` macro), incident review fields. **App/TA** (typical add-on context): Splunk Enterprise Security / custom. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by rule_name, disposition** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value, Time chart\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We line up which detections end in real incidents versus noise so we know where our program actually helps, not just what fires the most.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.3",
              "n": "Stale detection identification (rules never firing 90+ days)",
              "c": "medium",
              "f": "intermediate",
              "v": "Lists correlation searches with zero or near-zero hits over 90 days for retirement review.",
              "t": "Splunk Enterprise Security / custom",
              "d": "Summary index populated from savedsearch scheduler or ES introspection",
              "q": "index=summary sourcetype=detection_metrics earliest=-90d\n| stats sum(hit_count) as hits by rule_name\n| where hits=0 OR hits<1\n| sort rule_name",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Single value, Time chart",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security / custom.\n• Ensure the following data sources are available: Summary index populated from savedsearch scheduler or ES introspection.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=summary sourcetype=detection_metrics earliest=-90d\n| stats sum(hit_count) as hits by rule_name\n| where hits=0 OR hits<1\n| sort rule_name\n```\n\nUnderstanding this SPL\n\n**Stale detection identification (rules never firing 90+ days)** — Lists correlation searches with zero or near-zero hits over 90 days for retirement review.\n\nDocumented **Data sources**: Summary index populated from savedsearch scheduler or ES introspection. **App/TA** (typical add-on context): Splunk Enterprise Security / custom. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: summary; **sourcetype**: detection_metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=summary, sourcetype=detection_metrics, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rule_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where hits=0 OR hits<1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value, Time chart\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We find rules that have gone quiet so we are not holding broken glass while attackers walk past.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.4",
              "n": "Detection rule runtime performance",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures search duration and skipped runs to find expensive or failing detections.",
              "t": "Splunk Enterprise Security / custom",
              "d": "_internal scheduler logs",
              "q": "index=_internal source=*scheduler.log* savedsearch_name=*\n| stats avg(run_time) as avg_run_time max(run_time) as max_run_time count as run_count by savedsearch_name\n| where avg_run_time>60 OR run_count<1\n| sort - avg_run_time",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Single value, Time chart",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security / custom.\n• Ensure the following data sources are available: _internal scheduler logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal source=*scheduler.log* savedsearch_name=*\n| stats avg(run_time) as avg_run_time max(run_time) as max_run_time count as run_count by savedsearch_name\n| where avg_run_time>60 OR run_count<1\n| sort - avg_run_time\n```\n\nUnderstanding this SPL\n\n**Detection rule runtime performance** — Measures search duration and skipped runs to find expensive or failing detections.\n\nDocumented **Data sources**: _internal scheduler logs. **App/TA** (typical add-on context): Splunk Enterprise Security / custom. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by savedsearch_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_run_time>60 OR run_count<1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value, Time chart\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how long our searches run so heavy or fragile rules do not blind us when seconds matter.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.5",
              "n": "MITRE ATT&CK technique coverage gap analysis",
              "c": "medium",
              "f": "intermediate",
              "v": "Compares technique IDs from fired notables against desired coverage matrix.",
              "t": "Splunk Enterprise Security / custom",
              "d": "Notable events with MITRE annotations, lookup of desired coverage",
              "q": "| inputlookup desired_mitre_coverage.csv\n| fields technique_id\n| join type=left max=0 technique_id [\n    search `notable` earliest=-365d\n    | mvexpand annotations.mitre_attack.mitre_attack_id limit=500\n    | rename annotations.mitre_attack.mitre_attack_id as technique_id\n    | stats dc(rule_name) as rule_count by technique_id\n  ]\n| where isnull(rule_count)",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Single value, Time chart",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security / custom.\n• Ensure the following data sources are available: Notable events with MITRE annotations, lookup of desired coverage.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup desired_mitre_coverage.csv\n| fields technique_id\n| join type=left max=0 technique_id [\n    search `notable` earliest=-365d\n    | mvexpand annotations.mitre_attack.mitre_attack_id limit=500\n    | rename annotations.mitre_attack.mitre_attack_id as technique_id\n    | stats dc(rule_name) as rule_count by technique_id\n  ]\n| where isnull(rule_count)\n```\n\nUnderstanding this SPL\n\n**MITRE ATT&CK technique coverage gap analysis** — Compares technique IDs from fired notables against desired coverage matrix.\n\nDocumented **Data sources**: Notable events with MITRE annotations, lookup of desired coverage. **App/TA** (typical add-on context): Splunk Enterprise Security / custom. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(rule_count)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value, Time chart\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We map our coverage to attacker techniques so we see blind spots in the matrix before a reviewer does.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.6",
              "n": "Custom detection vs ESCU baseline comparison",
              "c": "medium",
              "f": "intermediate",
              "v": "Side-by-side hit volume and severity for custom rules vs ESCU-packaged searches.",
              "t": "Splunk Enterprise Security / custom",
              "d": "Summary metrics keyed by rule_name",
              "q": "index=summary sourcetype=detection_metrics\n| eval family=if(match(rule_name, \"(?i)escu\"), \"ESCU\", \"Custom\")\n| timechart span=1d sum(hit_count) by family",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Single value, Time chart",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security / custom.\n• Ensure the following data sources are available: Summary metrics keyed by rule_name.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=summary sourcetype=detection_metrics\n| eval family=if(match(rule_name, \"(?i)escu\"), \"ESCU\", \"Custom\")\n| timechart span=1d sum(hit_count) by family\n```\n\nUnderstanding this SPL\n\n**Custom detection vs ESCU baseline comparison** — Side-by-side hit volume and severity for custom rules vs ESCU-packaged searches.\n\nDocumented **Data sources**: Summary metrics keyed by rule_name. **App/TA** (typical add-on context): Splunk Enterprise Security / custom. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: summary; **sourcetype**: detection_metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=summary, sourcetype=detection_metrics. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **family** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by family** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value, Time chart\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare our home-built rules to the shipped baseline so we do not double-pay effort or miss what the community already fixed.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.7",
              "n": "Detection rule severity distribution balance",
              "c": "medium",
              "f": "intermediate",
              "v": "Shows concentration of CRITICAL/HIGH notables to balance analyst load.",
              "t": "Splunk Enterprise Security / custom",
              "d": "Splunk ES notable events (`` `notable` `` macro)",
              "q": "`notable`\n| stats count by urgency, rule_name\n| sort urgency, - count",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Single value, Time chart",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security / custom.\n• Ensure the following data sources are available: Splunk ES notable events (`` `notable` `` macro).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable`\n| stats count by urgency, rule_name\n| sort urgency, - count\n```\n\nUnderstanding this SPL\n\n**Detection rule severity distribution balance** — Shows concentration of CRITICAL/HIGH notables to balance analyst load.\n\nDocumented **Data sources**: Splunk ES notable events (`` `notable` `` macro). **App/TA** (typical add-on context): Splunk Enterprise Security / custom. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by urgency, rule_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value, Time chart\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We balance how many critical versus low alerts we generate so the queue stays usable and the right issues pop out.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.8",
              "n": "Notable event triage time by detection rule",
              "c": "medium",
              "f": "intermediate",
              "v": "Average time from notable creation to first owner assignment or close by rule.",
              "t": "Splunk Enterprise Security / custom",
              "d": "Notable event fields (custom field names may vary by ES version)",
              "q": "`notable`\n| eval triage_sec=if(isnotnull(first_owner_assignment_time), first_owner_assignment_time-_time, now()-_time)\n| stats avg(triage_sec) as avg_triage_sec perc95(triage_sec) as p95_triage_sec by rule_name\n| sort - avg_triage_sec",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Single value, Time chart",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security / custom.\n• Ensure the following data sources are available: Notable event fields (custom field names may vary by ES version).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable`\n| eval triage_sec=if(isnotnull(first_owner_assignment_time), first_owner_assignment_time-_time, now()-_time)\n| stats avg(triage_sec) as avg_triage_sec perc95(triage_sec) as p95_triage_sec by rule_name\n| sort - avg_triage_sec\n```\n\nUnderstanding this SPL\n\n**Notable event triage time by detection rule** — Average time from notable creation to first owner assignment or close by rule.\n\nDocumented **Data sources**: Notable event fields (custom field names may vary by ES version). **App/TA** (typical add-on context): Splunk Enterprise Security / custom. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **triage_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by rule_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value, Time chart\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We measure how long it takes a human to own a triaged item so we can staff and fix bottlenecks with data.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.9",
              "n": "YARA scan result ingestion and match alerting",
              "c": "high",
              "f": "intermediate",
              "v": "Ingests YARA scanner output and alerts on any match or severity threshold.",
              "t": "Custom TA / HEC",
              "d": "YARA scanner JSON/CEF forwarded to Splunk",
              "q": "index=yara sourcetype=yara:scan\n| search match=\"true\" OR match_count>0\n| table _time, host, rule_name, path, match_count, scanner_version\n| sort - _time",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Time chart, Single value",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom TA / HEC.\n• Ensure the following data sources are available: YARA scanner JSON/CEF forwarded to Splunk.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=yara sourcetype=yara:scan\n| search match=\"true\" OR match_count>0\n| table _time, host, rule_name, path, match_count, scanner_version\n| sort - _time\n```\n\nUnderstanding this SPL\n\n**YARA scan result ingestion and match alerting** — Ingests YARA scanner output and alerts on any match or severity threshold.\n\nDocumented **Data sources**: YARA scanner JSON/CEF forwarded to Splunk. **App/TA** (typical add-on context): Custom TA / HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: yara; **sourcetype**: yara:scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=yara, sourcetype=yara:scan. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **YARA scan result ingestion and match alerting**): table _time, host, rule_name, path, match_count, scanner_version\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart, Single value\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We wire YARA match lines into the SIEM and alert on real hits so file-level malware sweeps are not an offline spreadsheet.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.10",
              "n": "YARA rule match trending by rule name",
              "c": "high",
              "f": "intermediate",
              "v": "Time-series of matches per rule to spot outbreaks or noisy rules.",
              "t": "Custom TA / HEC",
              "d": "YARA scan sourcetype",
              "q": "index=yara sourcetype=yara:scan match=\"true\"\n| timechart span=1h count by rule_name",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Time chart, Single value",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom TA / HEC.\n• Ensure the following data sources are available: YARA scan sourcetype.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=yara sourcetype=yara:scan match=\"true\"\n| timechart span=1h count by rule_name\n```\n\nUnderstanding this SPL\n\n**YARA rule match trending by rule name** — Time-series of matches per rule to spot outbreaks or noisy rules.\n\nDocumented **Data sources**: YARA scan sourcetype. **App/TA** (typical add-on context): Custom TA / HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: yara; **sourcetype**: yara:scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=yara, sourcetype=yara:scan. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by rule_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart, Single value\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend which YARA names fire over time so noisy rules and new malware campaigns stand out clearly.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.11",
              "n": "YARA scan coverage tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Percentage of endpoints or paths scanned per interval vs inventory.",
              "t": "Custom TA / HEC",
              "d": "YARA scans plus CMDB lookup",
              "q": "index=yara sourcetype=yara:scan\n| stats dc(host) as hosts_scanned\n| append [ | inputlookup server_inventory.csv | stats dc(host) as hosts_total ]\n| eval coverage_pct=round(100*hosts_scanned/hosts_total,2)\n| table hosts_scanned, hosts_total, coverage_pct",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Time chart, Single value",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom TA / HEC.\n• Ensure the following data sources are available: YARA scans plus CMDB lookup.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=yara sourcetype=yara:scan\n| stats dc(host) as hosts_scanned\n| append [ | inputlookup server_inventory.csv | stats dc(host) as hosts_total ]\n| eval coverage_pct=round(100*hosts_scanned/hosts_total,2)\n| table hosts_scanned, hosts_total, coverage_pct\n```\n\nUnderstanding this SPL\n\n**YARA scan coverage tracking** — Percentage of endpoints or paths scanned per interval vs inventory.\n\nDocumented **Data sources**: YARA scans plus CMDB lookup. **App/TA** (typical add-on context): Custom TA / HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: yara; **sourcetype**: yara:scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=yara, sourcetype=yara:scan. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Appends rows from a subsearch with `append`.\n• `eval` defines or adjusts **coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **YARA scan coverage tracking**): table hosts_scanned, hosts_total, coverage_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart, Single value\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check that scans actually ran everywhere they should so we are not blind to a whole site or system group.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.12",
              "n": "YARA rule source update compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Verifies scanner pulled latest rule pack within SLA.",
              "t": "Custom TA / HEC",
              "d": "YARA scanner heartbeat with rule pack version",
              "q": "index=yara sourcetype=yara:scan\n| stats latest(rule_pack_version) as ver latest(_time) as last_pull by host\n| lookup yara_rule_pack_sla.csv rule_pack_version OUTPUT expected_max_age_sec\n| where now()-last_pull > expected_max_age_sec",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Time chart, Single value",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom TA / HEC.\n• Ensure the following data sources are available: YARA scanner heartbeat with rule pack version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=yara sourcetype=yara:scan\n| stats latest(rule_pack_version) as ver latest(_time) as last_pull by host\n| lookup yara_rule_pack_sla.csv rule_pack_version OUTPUT expected_max_age_sec\n| where now()-last_pull > expected_max_age_sec\n```\n\nUnderstanding this SPL\n\n**YARA rule source update compliance** — Verifies scanner pulled latest rule pack within SLA.\n\nDocumented **Data sources**: YARA scanner heartbeat with rule pack version. **App/TA** (typical add-on context): Custom TA / HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: yara; **sourcetype**: yara:scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=yara, sourcetype=yara:scan. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where now()-last_pull > expected_max_age_sec` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart, Single value\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We make sure our YARA content stayed fresh and approved so we are not running stale signatures by mistake.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.13",
              "n": "YARA false positive analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Correlates YARA matches with EDR verdicts to label likely FPs.",
              "t": "Custom TA / HEC",
              "d": "YARA + EDR reputation lookup",
              "q": "index=yara sourcetype=yara:scan match=\"true\"\n| lookup edr_file_reputation hash_sha256 OUTPUT verdict\n| stats count by rule_name, verdict\n| where verdict=\"clean\" OR verdict=\"unknown\"",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Time chart, Single value",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom TA / HEC.\n• Ensure the following data sources are available: YARA + EDR reputation lookup.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=yara sourcetype=yara:scan match=\"true\"\n| lookup edr_file_reputation hash_sha256 OUTPUT verdict\n| stats count by rule_name, verdict\n| where verdict=\"clean\" OR verdict=\"unknown\"\n```\n\nUnderstanding this SPL\n\n**YARA false positive analysis** — Correlates YARA matches with EDR verdicts to label likely FPs.\n\nDocumented **Data sources**: YARA + EDR reputation lookup. **App/TA** (typical add-on context): Custom TA / HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: yara; **sourcetype**: yara:scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=yara, sourcetype=yara:scan. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by rule_name, verdict** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where verdict=\"clean\" OR verdict=\"unknown\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart, Single value\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We study the false side of YARA so we can tune or retire noisy rules and stop burning analyst hours.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.14",
              "n": "YARA scan performance monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks scan duration and CPU to detect scanner health issues.",
              "t": "Custom TA / HEC",
              "d": "YARA scanner performance fields",
              "q": "index=yara sourcetype=yara:scan\n| stats avg(scan_duration_ms) as avg_ms perc95(scan_duration_ms) as p95_ms max(scan_duration_ms) as max_ms by host\n| where avg_ms>600000",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Time chart, Single value",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom TA / HEC.\n• Ensure the following data sources are available: YARA scanner performance fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=yara sourcetype=yara:scan\n| stats avg(scan_duration_ms) as avg_ms perc95(scan_duration_ms) as p95_ms max(scan_duration_ms) as max_ms by host\n| where avg_ms>600000\n```\n\nUnderstanding this SPL\n\n**YARA scan performance monitoring** — Tracks scan duration and CPU to detect scanner health issues.\n\nDocumented **Data sources**: YARA scanner performance fields. **App/TA** (typical add-on context): Custom TA / HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: yara; **sourcetype**: yara:scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=yara, sourcetype=yara:scan. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_ms>600000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart, Single value\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch scan run-time and success so a broken agent or full disk does not look like a clean bill of health.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.10.15",
              "n": "YARA rule version tracking and change audit",
              "c": "high",
              "f": "intermediate",
              "v": "Audit trail when rule files or hashes change on scanners.",
              "t": "Custom TA / HEC",
              "d": "File integrity or config management audit of YARA rules",
              "q": "index=audit sourcetype=yara:rule_change\n| stats latest(_time) as changed_at values(user) as changed_by by rule_file, sha256\n| sort - changed_at",
              "m": "Normalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.",
              "z": "Table, Time chart, Single value",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "Operational metrics",
              "sdomain": "soc_operations",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom TA / HEC.\n• Ensure the following data sources are available: File integrity or config management audit of YARA rules.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize field names to your Splunk ES version and index names. Use summary indexing or ES introspection where `detection_metrics` is not present. For YARA, deploy a scripted input or HEC to ingest scanner JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=yara:rule_change\n| stats latest(_time) as changed_at values(user) as changed_by by rule_file, sha256\n| sort - changed_at\n```\n\nUnderstanding this SPL\n\n**YARA rule version tracking and change audit** — Audit trail when rule files or hashes change on scanners.\n\nDocumented **Data sources**: File integrity or config management audit of YARA rules. **App/TA** (typical add-on context): Custom TA / HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: yara:rule_change. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Detection type** for this use case: Operational metrics — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=yara:rule_change. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rule_file, sha256** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart, Single value\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We audit YARA version drift so we know who changed what and can roll back when a bundle goes wrong.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 15,
            "none": 0
          }
        },
        {
          "i": "10.11",
          "n": "Vendor-Specific Security Detections",
          "u": [
            {
              "i": "10.11.1",
              "n": "FortiGate Firewall Policy Violations",
              "c": "high",
              "f": "intermediate",
              "v": "Denied sessions reveal policy gaps, misconfigurations, or active probing against protected assets.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_traffic`",
              "q": "index=firewall sourcetype=\"fortinet:fortigate_traffic\" (action=deny OR status=\"deny\")\n| timechart span=1h count by policyid\n| where count>0",
              "m": "Ingest FortiGate traffic logs via syslog or FortiAnalyzer forwarder. Map policyid to names in a lookup. Alert on spikes in denies for critical segments.",
              "z": "Line chart (denies over time), Bar chart (top policies), Table (recent denies).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_traffic`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest FortiGate traffic logs via syslog or FortiAnalyzer forwarder. Map policyid to names in a lookup. Alert on spikes in denies for critical segments.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"fortinet:fortigate_traffic\" (action=deny OR status=\"deny\")\n| timechart span=1h count by policyid\n| where count>0\n```\n\nUnderstanding this SPL\n\n**FortiGate Firewall Policy Violations** — Denied sessions reveal policy gaps, misconfigurations, or active probing against protected assets.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_traffic`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: fortinet:fortigate_traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"fortinet:fortigate_traffic\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by policyid** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where count>0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate Firewall Policy Violations** — Denied sessions reveal policy gaps, misconfigurations, or active probing against protected assets.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_traffic`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (denies over time), Bar chart (top policies), Table (recent denies).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate Firewall Policy Violations\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1h | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.2",
              "n": "FortiGate IPS Event Trending",
              "c": "high",
              "f": "intermediate",
              "v": "IPS trending highlights exploit attempts and helps tune profiles and exception lists.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_ips`",
              "q": "index=firewall sourcetype=\"fortinet:fortigate_ips\"\n| stats count by attack, severity, src\n| sort 100 -count",
              "m": "Enable IPS logging on relevant policies. Normalize severity and attack name fields from the TA. Trend top signatures and alert on critical severity bursts.",
              "z": "Bar chart (top attacks), Line chart (IPS events over time), Table (critical hits).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_ips`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable IPS logging on relevant policies. Normalize severity and attack name fields from the TA. Trend top signatures and alert on critical severity bursts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"fortinet:fortigate_ips\"\n| stats count by attack, severity, src\n| sort 100 -count\n```\n\nUnderstanding this SPL\n\n**FortiGate IPS Event Trending** — IPS trending highlights exploit attempts and helps tune profiles and exception lists.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_ips`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: fortinet:fortigate_ips. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"fortinet:fortigate_ips\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by attack, severity, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.severity, IDS_Attacks.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate IPS Event Trending** — IPS trending highlights exploit attempts and helps tune profiles and exception lists.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_ips`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top attacks), Line chart (IPS events over time), Table (critical hits).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate IPS Event Trending\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.severity, IDS_Attacks.src | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.3",
              "n": "FortiGate Anti-Virus Detection Rate",
              "c": "high",
              "f": "intermediate",
              "v": "AV rate analysis shows control effectiveness and whether the same threat is recurring across endpoints.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_virus`",
              "q": "index=firewall sourcetype=\"fortinet:fortigate_virus\"\n| bin span=1d _time\n| stats count as detections, dc(file_hash) as unique_samples by _time\n| eval rate=if(unique_samples>0, round(detections/unique_samples,2), 0)",
              "m": "Collect AV events from proxy and flow-based inspection. Track unique malware samples versus total events to spot mass repeats or polymorphic campaigns.",
              "z": "Line chart (detections vs unique samples), Single value (daily rate), Table (top malware names).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_virus`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect AV events from proxy and flow-based inspection. Track unique malware samples versus total events to spot mass repeats or polymorphic campaigns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"fortinet:fortigate_virus\"\n| bin span=1d _time\n| stats count as detections, dc(file_hash) as unique_samples by _time\n| eval rate=if(unique_samples>0, round(detections/unique_samples,2), 0)\n```\n\nUnderstanding this SPL\n\n**FortiGate Anti-Virus Detection Rate** — AV rate analysis shows control effectiveness and whether the same threat is recurring across endpoints.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_virus`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: fortinet:fortigate_virus. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"fortinet:fortigate_virus\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate Anti-Virus Detection Rate** — AV rate analysis shows control effectiveness and whether the same threat is recurring across endpoints.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_virus`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (detections vs unique samples), Single value (daily rate), Table (top malware names).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate Anti-Virus Detection Rate\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.4",
              "n": "FortiGate Web Filter Category Blocks",
              "c": "high",
              "f": "intermediate",
              "v": "Category blocks expose risky browsing, policy misuse, and potential malware download attempts.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_webfilter`",
              "q": "index=web sourcetype=\"fortinet:fortigate_webfilter\" action=blocked\n| stats count by category, user, src\n| sort -count",
              "m": "Ingest explicit-proxy or flow-mode web filter logs. Join category IDs to human-readable names. Report top blocked categories and repeat offenders.",
              "z": "Bar chart (categories), Table (users with high block counts), Pie chart (category mix).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_webfilter`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest explicit-proxy or flow-mode web filter logs. Join category IDs to human-readable names. Report top blocked categories and repeat offenders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"fortinet:fortigate_webfilter\" action=blocked\n| stats count by category, user, src\n| sort -count\n```\n\nUnderstanding this SPL\n\n**FortiGate Web Filter Category Blocks** — Category blocks expose risky browsing, policy misuse, and potential malware download attempts.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_webfilter`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: fortinet:fortigate_webfilter. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"fortinet:fortigate_webfilter\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by category, user, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate Web Filter Category Blocks** — Category blocks expose risky browsing, policy misuse, and potential malware download attempts.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_webfilter`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (categories), Table (users with high block counts), Pie chart (category mix).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate Web Filter Category Blocks\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.src | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.5",
              "n": "FortiGate Application Control Violations",
              "c": "high",
              "f": "intermediate",
              "v": "Application denials indicate shadow IT, policy violations, or evasive protocols.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_appctrl`",
              "q": "index=firewall sourcetype=\"fortinet:fortigate_appctrl\" action=deny\n| stats count by app_name, src, dst_ip\n| where count>5",
              "m": "Enable application control with deny logging. Correlate app IDs with business-approved lists. Alert on repeated denials from single hosts.",
              "z": "Bar chart (denied apps), Table (src, app, count), Timeline.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_appctrl`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable application control with deny logging. Correlate app IDs with business-approved lists. Alert on repeated denials from single hosts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"fortinet:fortigate_appctrl\" action=deny\n| stats count by app_name, src, dst_ip\n| where count>5\n```\n\nUnderstanding this SPL\n\n**FortiGate Application Control Violations** — Application denials indicate shadow IT, policy violations, or evasive protocols.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_appctrl`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: fortinet:fortigate_appctrl. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"fortinet:fortigate_appctrl\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_name, src, dst_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>5` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate Application Control Violations** — Application denials indicate shadow IT, policy violations, or evasive protocols.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_appctrl`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (denied apps), Table (src, app, count), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate Application Control Violations\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.6",
              "n": "FortiSandbox Analysis Results",
              "c": "high",
              "f": "intermediate",
              "v": "Sandbox verdicts surface zero-day and unknown files before they execute broadly in the environment.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortisandbox`",
              "q": "index=security sourcetype=\"fortinet:fortisandbox\"\n| stats count by verdict, file_type, sha256\n| search verdict IN (\"Malicious\",\"Suspicious\")",
              "m": "Forward FortiSandbox JSON or syslog to Splunk. Track verdict distribution and alert on malicious with destination context from related traffic logs.",
              "z": "Pie chart (verdicts), Table (malicious files with hash), Line chart (submissions).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortisandbox`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward FortiSandbox JSON or syslog to Splunk. Track verdict distribution and alert on malicious with destination context from related traffic logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"fortinet:fortisandbox\"\n| stats count by verdict, file_type, sha256\n| search verdict IN (\"Malicious\",\"Suspicious\")\n```\n\nUnderstanding this SPL\n\n**FortiSandbox Analysis Results** — Sandbox verdicts surface zero-day and unknown files before they execute broadly in the environment.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortisandbox`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: fortinet:fortisandbox. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"fortinet:fortisandbox\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by verdict, file_type, sha256** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiSandbox Analysis Results** — Sandbox verdicts surface zero-day and unknown files before they execute broadly in the environment.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortisandbox`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (verdicts), Table (malicious files with hash), Line chart (submissions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiSandbox Analysis Results\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.7",
              "n": "FortiGate SD-WAN Tunnel Health",
              "c": "high",
              "f": "intermediate",
              "v": "SD-WAN tunnel health ensures branch connectivity and path failover behave as designed.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_event`",
              "q": "index=network sourcetype=\"fortinet:fortigate_event\" (subtype=\"vpn\" OR logdesc=\"*SD-WAN*\")\n| stats latest(state) as state by interface, neighbor\n| search state IN (\"down\",\"dead\",\"inactive\")",
              "m": "Ingest event logs for SD-WAN and IPsec state changes. Build expected neighbor inventory. Page on tunnel down beyond SLA.",
              "z": "Status table (tunnels), Single value (down count), Timeline (state changes).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Network_Sessions](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Sessions)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_event`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest event logs for SD-WAN and IPsec state changes. Build expected neighbor inventory. Page on tunnel down beyond SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"fortinet:fortigate_event\" (subtype=\"vpn\" OR logdesc=\"*SD-WAN*\")\n| stats latest(state) as state by interface, neighbor\n| search state IN (\"down\",\"dead\",\"inactive\")\n```\n\nUnderstanding this SPL\n\n**FortiGate SD-WAN Tunnel Health** — SD-WAN tunnel health ensures branch connectivity and path failover behave as designed.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_event`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: fortinet:fortigate_event. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"fortinet:fortigate_event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by interface, neighbor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.action, All_Sessions.src, All_Sessions.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate SD-WAN Tunnel Health** — SD-WAN tunnel health ensures branch connectivity and path failover behave as designed.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_event`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status table (tunnels), Single value (down count), Timeline (state changes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate SD-WAN Tunnel Health\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Sessions"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.action, All_Sessions.src, All_Sessions.user | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.8",
              "n": "FortiGate HA Failover Events",
              "c": "high",
              "f": "intermediate",
              "v": "Unexpected HA events can indicate split-brain risk or hardware stress on clustered firewalls.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_event`",
              "q": "index=network sourcetype=\"fortinet:fortigate_event\" (logdesc=\"*HA*\" OR subtype=\"ha\")\n| stats count by action, group_id\n| sort -_time",
              "m": "Parse HA role change, sync loss, and failover messages. Alert on unplanned failovers and track frequency for stability reviews.",
              "z": "Timeline (HA events), Table (cluster, action), Alert on failover.",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_event`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse HA role change, sync loss, and failover messages. Alert on unplanned failovers and track frequency for stability reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"fortinet:fortigate_event\" (logdesc=\"*HA*\" OR subtype=\"ha\")\n| stats count by action, group_id\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**FortiGate HA Failover Events** — Unexpected HA events can indicate split-brain risk or hardware stress on clustered firewalls.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_event`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: fortinet:fortigate_event. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"fortinet:fortigate_event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by action, group_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate HA Failover Events** — Unexpected HA events can indicate split-brain risk or hardware stress on clustered firewalls.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_event`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (HA events), Table (cluster, action), Alert on failover.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate HA Failover Events\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.9",
              "n": "FortiGate SSL Inspection Bypass",
              "c": "high",
              "f": "intermediate",
              "v": "SSL bypass monitoring balances privacy with ensuring encrypted threats are still inspected where required.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_ssl`",
              "q": "index=firewall sourcetype=\"fortinet:fortigate_ssl\" bypass_reason=*\n| stats count by bypass_reason, policyid, user\n| sort -count",
              "m": "Log SSL inspection decisions including bypass reasons (user, category, tech). Review for excessive bypasses that reduce visibility.",
              "z": "Bar chart (bypass reasons), Table (policy, user), Line chart (trend).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_ssl`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog SSL inspection decisions including bypass reasons (user, category, tech). Review for excessive bypasses that reduce visibility.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"fortinet:fortigate_ssl\" bypass_reason=*\n| stats count by bypass_reason, policyid, user\n| sort -count\n```\n\nUnderstanding this SPL\n\n**FortiGate SSL Inspection Bypass** — SSL bypass monitoring balances privacy with ensuring encrypted threats are still inspected where required.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_ssl`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: fortinet:fortigate_ssl. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"fortinet:fortigate_ssl\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by bypass_reason, policyid, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate SSL Inspection Bypass** — SSL bypass monitoring balances privacy with ensuring encrypted threats are still inspected where required.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_ssl`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (bypass reasons), Table (policy, user), Line chart (trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate SSL Inspection Bypass\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.10",
              "n": "FortiManager Configuration Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Centralized compliance reduces misconfiguration-driven gaps in UTM and policy.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortimanager`",
              "q": "index=security sourcetype=\"fortinet:fortimanager\" fmgt_event=\"config\"\n| stats latest(revision) as rev by adom, device_name\n| join type=left max=1 device_name [| inputlookup fortigate_golden_config]\n| where rev!=expected_rev OR isnull(expected_rev)",
              "m": "Ingest FortiManager audit and revision logs. Compare device config revisions to approved baselines via lookup. Alert on drift.",
              "z": "Table (non-compliant devices), Bar chart (ADOM compliance %).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortimanager`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest FortiManager audit and revision logs. Compare device config revisions to approved baselines via lookup. Alert on drift.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"fortinet:fortimanager\" fmgt_event=\"config\"\n| stats latest(revision) as rev by adom, device_name\n| join type=left max=1 device_name [| inputlookup fortigate_golden_config]\n| where rev!=expected_rev OR isnull(expected_rev)\n```\n\nUnderstanding this SPL\n\n**FortiManager Configuration Compliance** — Centralized compliance reduces misconfiguration-driven gaps in UTM and policy.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortimanager`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: fortinet:fortimanager. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"fortinet:fortimanager\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by adom, device_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where rev!=expected_rev OR isnull(expected_rev)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiManager Configuration Compliance** — Centralized compliance reduces misconfiguration-driven gaps in UTM and policy.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortimanager`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant devices), Bar chart (ADOM compliance %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiManager Configuration Compliance\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.11",
              "n": "FortiGate UTM Threat Correlation",
              "c": "high",
              "f": "intermediate",
              "v": "Multi-module UTM hits on one session often indicate automated or staged attacks.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_utm`",
              "q": "index=firewall sourcetype=\"fortinet:fortigate_utm\"\n| stats values(subtype) as utm_types, count by src, dst_ip, session_id\n| where mvcount(utm_types)>2",
              "m": "Normalize UTM subtypes (av, ips, webfilter, appctrl) on shared session keys. Flag sessions hitting multiple UTM modules as higher fidelity incidents.",
              "z": "Table (correlated sessions), Multi-value field list.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_utm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize UTM subtypes (av, ips, webfilter, appctrl) on shared session keys. Flag sessions hitting multiple UTM modules as higher fidelity incidents.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"fortinet:fortigate_utm\"\n| stats values(subtype) as utm_types, count by src, dst_ip, session_id\n| where mvcount(utm_types)>2\n```\n\nUnderstanding this SPL\n\n**FortiGate UTM Threat Correlation** — Multi-module UTM hits on one session often indicate automated or staged attacks.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_utm`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: fortinet:fortigate_utm. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"fortinet:fortigate_utm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, dst_ip, session_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mvcount(utm_types)>2` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src, IDS_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate UTM Threat Correlation** — Multi-module UTM hits on one session often indicate automated or staged attacks.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_utm`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (correlated sessions), Multi-value field list.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate UTM Threat Correlation\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src, IDS_Attacks.dest | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.12",
              "n": "FortiGate Admin Authentication Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Admin auth auditing protects firewall management plane from takeover and insider misuse.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_admin`",
              "q": "index=auth sourcetype=\"fortinet:fortigate_admin\"\n| search (status=failed OR result=failed OR action=\"login_failed\")\n| stats count by user, src, interface\n| where count>5",
              "m": "Collect admin GUI, SSH, and API logins. Geofence or list approved admin sources. Alert on brute force or logins from new IPs.",
              "z": "Table (failed logins), Map (src), Timeline.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_admin`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect admin GUI, SSH, and API logins. Geofence or list approved admin sources. Alert on brute force or logins from new IPs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=auth sourcetype=\"fortinet:fortigate_admin\"\n| search (status=failed OR result=failed OR action=\"login_failed\")\n| stats count by user, src, interface\n| where count>5\n```\n\nUnderstanding this SPL\n\n**FortiGate Admin Authentication Audit** — Admin auth auditing protects firewall management plane from takeover and insider misuse.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_admin`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: auth; **sourcetype**: fortinet:fortigate_admin. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=auth, sourcetype=\"fortinet:fortigate_admin\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, src, interface** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>5` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate Admin Authentication Audit** — Admin auth auditing protects firewall management plane from takeover and insider misuse.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_admin`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed logins), Map (src), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate Admin Authentication Audit\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.13",
              "n": "FortiGate Resource Utilization Alerts",
              "c": "high",
              "f": "intermediate",
              "v": "Resource pressure can cause dropped logs, missed inspection, and HA instability.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_snmp`",
              "q": "index=metrics OR index=network sourcetype=\"fortinet:fortigate_snmp\"\n| stats avg(cpu_pct) as cpu, avg(mem_used_pct) as mem by host\n| where cpu>85 OR mem>90",
              "m": "Ingest SNMP or streaming metrics for CPU, memory, and session table. Threshold alert before performance degrades inspection.",
              "z": "Single value (CPU/mem), Line chart (utilization), Table (top busy devices).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_snmp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest SNMP or streaming metrics for CPU, memory, and session table. Threshold alert before performance degrades inspection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=metrics OR index=network sourcetype=\"fortinet:fortigate_snmp\"\n| stats avg(cpu_pct) as cpu, avg(mem_used_pct) as mem by host\n| where cpu>85 OR mem>90\n```\n\nUnderstanding this SPL\n\n**FortiGate Resource Utilization Alerts** — Resource pressure can cause dropped logs, missed inspection, and HA instability.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_snmp`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: metrics, network; **sourcetype**: fortinet:fortigate_snmp. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=metrics, index=network, sourcetype=\"fortinet:fortigate_snmp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cpu>85 OR mem>90` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Network by Performance.host | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate Resource Utilization Alerts** — Resource pressure can cause dropped logs, missed inspection, and HA instability.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_snmp`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance` (CPU dataset via `nodename=Performance.CPU`, which is where `cpu_load_percent` lives) — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (CPU/mem), Line chart (utilization), Table (top busy devices).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate Resource Utilization Alerts\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.Network by Performance.host | sort - agg_value",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.14",
              "n": "FortiGate VPN Tunnel Status",
              "c": "high",
              "f": "intermediate",
              "v": "VPN status monitoring secures remote access and site-to-site connectivity for users and automation.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_vpn`",
              "q": "index=network sourcetype=\"fortinet:fortigate_vpn\"\n| stats latest(status) as st by tunnel_name, phase\n| search st IN (\"down\",\"connecting\",\"error\")",
              "m": "Parse IKE/IPsec phase-1/phase-2 status from VPN logs. Maintain tunnel inventory with expected state=up. Alert on sustained down.",
              "z": "Table (down tunnels), Timeline (state flaps).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Network_Sessions](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Sessions)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_vpn`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse IKE/IPsec phase-1/phase-2 status from VPN logs. Maintain tunnel inventory with expected state=up. Alert on sustained down.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"fortinet:fortigate_vpn\"\n| stats latest(status) as st by tunnel_name, phase\n| search st IN (\"down\",\"connecting\",\"error\")\n```\n\nUnderstanding this SPL\n\n**FortiGate VPN Tunnel Status** — VPN status monitoring secures remote access and site-to-site connectivity for users and automation.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_vpn`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: fortinet:fortigate_vpn. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"fortinet:fortigate_vpn\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by tunnel_name, phase** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.action, All_Sessions.src, All_Sessions.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate VPN Tunnel Status** — VPN status monitoring secures remote access and site-to-site connectivity for users and automation.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_vpn`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (down tunnels), Timeline (state flaps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate VPN Tunnel Status\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Sessions"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.action, All_Sessions.src, All_Sessions.user | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.15",
              "n": "FortiGate Firmware Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Firmware compliance closes known CVEs affecting SSL VPN, IPS engine, and management interfaces.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_system`",
              "q": "index=network sourcetype=\"fortinet:fortigate_system\"\n| stats latest(version) as fw by host\n| lookup fortinet_supported_versions version OUTPUT minimum_secure\n| where fw<minimum_secure OR isnull(minimum_secure)",
              "m": "Onboard version strings from config backups or `get system status` syslog. Compare to vendor-supported releases via lookup. If your Fortinet TA does not define a separate inventory sourcetype, keep to `fortinet:fortigate_system` (or FortiManager events) rather than assuming `fortinet:fortigate_inventory`.",
              "z": "Table (non-compliant), Bar chart (version distribution).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Compute_Inventory](https://docs.splunk.com/Documentation/CIM/latest/User/Compute_Inventory)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_system`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nOnboard version strings from config backups or `get system status` syslog. Compare to vendor-supported releases via lookup. If your Fortinet TA does not define a separate inventory sourcetype, keep to `fortinet:fortigate_system` (or FortiManager events) rather than assuming `fortinet:fortigate_inventory`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"fortinet:fortigate_system\"\n| stats latest(version) as fw by host\n| lookup fortinet_supported_versions version OUTPUT minimum_secure\n| where fw<minimum_secure OR isnull(minimum_secure)\n```\n\nUnderstanding this SPL\n\n**FortiGate Firmware Compliance** — Firmware compliance closes known CVEs affecting SSL VPN, IPS engine, and management interfaces.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_system`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: fortinet:fortigate_system. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"fortinet:fortigate_system\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where fw<minimum_secure OR isnull(minimum_secure)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Compute_Inventory.Network by Network.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate Firmware Compliance** — Firmware compliance closes known CVEs affecting SSL VPN, IPS engine, and management interfaces.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_system`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Compute_Inventory.Network` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant), Bar chart (version distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate Firmware Compliance\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Compute_Inventory"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Compute_Inventory.Network by Network.dest | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.16",
              "n": "Palo Alto WildFire Malware Verdict Trending",
              "c": "high",
              "f": "intermediate",
              "v": "WildFire trending shows evolving malware volume and sandbox pipeline health.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:wildfire`",
              "q": "index=pan sourcetype=\"pan:wildfire\"\n| timechart span=1d count by verdict\n| fillnull value=0",
              "m": "Ensure WildFire submissions are fully parsed. Map verdict integers to labels in eval. Trend malicious versus benign over time.",
              "z": "Stacked area (verdicts), Line chart (malicious trend).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:wildfire`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure WildFire submissions are fully parsed. Map verdict integers to labels in eval. Trend malicious versus benign over time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:wildfire\"\n| timechart span=1d count by verdict\n| fillnull value=0\n```\n\nUnderstanding this SPL\n\n**Palo Alto WildFire Malware Verdict Trending** — WildFire trending shows evolving malware volume and sandbox pipeline health.\n\nDocumented **Data sources**: `sourcetype=pan:wildfire`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:wildfire. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:wildfire\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by verdict** — ideal for trending and alerting on this use case.\n• Fills null values with `fillnull`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto WildFire Malware Verdict Trending** — WildFire trending shows evolving malware volume and sandbox pipeline health.\n\nDocumented **Data sources**: `sourcetype=pan:wildfire`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area (verdicts), Line chart (malicious trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto WildFire Malware Verdict Trending\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest span=1d | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
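Step 1 of the use case above says to map WildFire verdict integers to labels in `eval`, but the catalog query trends the raw `verdict` field. A minimal sketch of that mapping, assuming numeric codes that follow the WildFire API convention (0=benign, 1=malware, 2=grayware, 4=phishing); verify the codes for your PAN-OS version, and drop the `eval` if the TA already emits string verdicts:

```spl
index=pan sourcetype="pan:wildfire"
| eval verdict_label=case(tonumber(verdict)==0,"benign", tonumber(verdict)==1,"malware",
    tonumber(verdict)==2,"grayware", tonumber(verdict)==4,"phishing",
    true(),"unknown-".tostring(verdict))
| timechart span=1d count by verdict_label
| fillnull value=0
```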
            {
              "i": "10.11.17",
              "n": "Palo Alto URL Filtering Policy Blocks",
              "c": "high",
              "f": "intermediate",
              "v": "URL blocks reduce web-borne malware and enforce acceptable use.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:url`",
              "q": "index=pan sourcetype=\"pan:url\" action=blocked\n| stats count by category, url_domain, user\n| sort 50 -count",
              "m": "Ingest URL filtering logs with category and user-ID. Report top blocked domains and tune categories for false positives.",
              "z": "Bar chart (categories), Table (top URLs).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:url`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest URL filtering logs with category and user-ID. Report top blocked domains and tune categories for false positives.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:url\" action=blocked\n| stats count by category, url_domain, user\n| sort 50 -count\n```\n\nUnderstanding this SPL\n\n**Palo Alto URL Filtering Policy Blocks** — URL blocks reduce web-borne malware and enforce acceptable use.\n\nDocumented **Data sources**: `sourcetype=pan:url`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:url. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:url\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by category, url_domain, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto URL Filtering Policy Blocks** — URL blocks reduce web-borne malware and enforce acceptable use.\n\nDocumented **Data sources**: `sourcetype=pan:url`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (categories), Table (top URLs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto URL Filtering Policy Blocks\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.dest | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
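To support the category-tuning step in the use case above, a hedged sketch that ranks blocked categories by share of total blocks and breadth of affected users; high-volume categories hitting many users are the usual false-positive tuning candidates (the `category` and `user` fields are the ones documented for `pan:url`):

```spl
index=pan sourcetype="pan:url" action=blocked
| stats count as blocks dc(user) as affected_users by category
| eventstats sum(blocks) as total_blocks
| eval pct_of_blocks=round(100*blocks/total_blocks,1)
| sort - blocks
```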
            {
              "i": "10.11.18",
              "n": "Palo Alto GlobalProtect VPN Health",
              "c": "high",
              "f": "intermediate",
              "v": "GlobalProtect health ensures remote workforce connectivity and posture enforcement.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:globalprotect`",
              "q": "index=pan sourcetype=\"pan:globalprotect\"\n| stats latest(status) as vpn_status by user, public_ip, gateway\n| search vpn_status IN (\"disconnect\",\"error\",\"login_failed\")",
              "m": "Collect GlobalProtect portal/gateway logs. Track login success, HIP result, and gateway assignment. Alert on auth failures and error reasons.",
              "z": "Table (problem users), Timeline (sessions).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Sessions](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Sessions)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:globalprotect`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect GlobalProtect portal/gateway logs. Track login success, HIP result, and gateway assignment. Alert on auth failures and error reasons.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:globalprotect\"\n| stats latest(status) as vpn_status by user, public_ip, gateway\n| search vpn_status IN (\"disconnect\",\"error\",\"login_failed\")\n```\n\nUnderstanding this SPL\n\n**Palo Alto GlobalProtect VPN Health** — GlobalProtect health ensures remote workforce connectivity and posture enforcement.\n\nDocumented **Data sources**: `sourcetype=pan:globalprotect`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:globalprotect. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:globalprotect\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, public_ip, gateway** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto GlobalProtect VPN Health** — GlobalProtect health ensures remote workforce connectivity and posture enforcement.\n\nDocumented **Data sources**: `sourcetype=pan:globalprotect`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (problem users), Timeline (sessions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto GlobalProtect VPN Health\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Sessions"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.user | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
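The GlobalProtect implementation notes above call for alerting on auth failures, while the catalog query only snapshots the latest status per user. A minimal alert-oriented sketch, assuming the documented `status` values; the thresholds are illustrative and should be tuned to your gateway population:

```spl
index=pan sourcetype="pan:globalprotect"
| eval failure=if(in(status,"error","login_failed"),1,0)
| stats sum(failure) as failures count as events by gateway
| eval failure_pct=round(100*failures/events,1)
| where failures>20 OR failure_pct>30
```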
            {
              "i": "10.11.19",
              "n": "Palo Alto Threat Log Severity Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Threat severity analysis prioritizes NGFW alerts for analyst queues.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:threat`",
              "q": "index=pan sourcetype=\"pan:threat\"\n| stats count by severity, threat_name, subtype\n| sort -count",
              "m": "Normalize threat subtype (vulnerability, spyware, virus). Weight critical/high for paging. Feed SOC dashboards by severity.",
              "z": "Bar chart (severity), Table (top threats).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:threat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize threat subtype (vulnerability, spyware, virus). Weight critical/high for paging. Feed SOC dashboards by severity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:threat\"\n| stats count by severity, threat_name, subtype\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Palo Alto Threat Log Severity Analysis** — Threat severity analysis prioritizes NGFW alerts for analyst queues.\n\nDocumented **Data sources**: `sourcetype=pan:threat`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:threat. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:threat\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by severity, threat_name, subtype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.severity, IDS_Attacks.signature | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto Threat Log Severity Analysis** — Threat severity analysis prioritizes NGFW alerts for analyst queues.\n\nDocumented **Data sources**: `sourcetype=pan:threat`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (severity), Table (top threats).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto Threat Log Severity Analysis\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.severity, IDS_Attacks.signature | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
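Step 1 of the threat-severity use case above says to weight critical/high for paging. One hedged way to express that as a score; the weights are illustrative, and the `severity` values follow the PAN-OS threat log:

```spl
index=pan sourcetype="pan:threat"
| eval sev_weight=case(severity=="critical",10, severity=="high",5, severity=="medium",2, true(),1)
| stats count as events sum(sev_weight) as weighted_score by threat_name, subtype
| sort - weighted_score
```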
            {
              "i": "10.11.20",
              "n": "Palo Alto Decryption Policy Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Decryption compliance ensures blind spots in TLS are minimized per policy.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:decryption`",
              "q": "index=pan sourcetype=\"pan:decryption\"\n| stats count by action, rule, category\n| where action IN (\"no-decrypt\",\"decrypt-error\")",
              "m": "Log SSL decryption success and failures. Compare decrypted traffic ratio to policy intent. Investigate no-decrypt spikes.",
              "z": "Pie chart (decrypt vs not), Table (rules with errors).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:decryption`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog SSL decryption success and failures. Compare decrypted traffic ratio to policy intent. Investigate no-decrypt spikes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:decryption\"\n| stats count by action, rule, category\n| where action IN (\"no-decrypt\",\"decrypt-error\")\n```\n\nUnderstanding this SPL\n\n**Palo Alto Decryption Policy Compliance** — Decryption compliance ensures blind spots in TLS are minimized per policy.\n\nDocumented **Data sources**: `sourcetype=pan:decryption`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:decryption. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:decryption\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by action, rule, category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where action IN (\"no-decrypt\",\"decrypt-error\")` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto Decryption Policy Compliance** — Decryption compliance ensures blind spots in TLS are minimized per policy.\n\nDocumented **Data sources**: `sourcetype=pan:decryption`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (decrypt vs not), Table (rules with errors).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto Decryption Policy Compliance\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
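The decryption use case above asks you to compare the decrypted-traffic ratio to policy intent. A sketch of that ratio per rule, assuming the `action` field distinguishes `decrypt` from `no-decrypt`/`decrypt-error`; confirm the exact action values your decryption logs emit:

```spl
index=pan sourcetype="pan:decryption"
| eval decrypted=if(action=="decrypt",1,0)
| stats count as sessions sum(decrypted) as decrypted_sessions by rule
| eval decrypt_pct=round(100*decrypted_sessions/sessions,1)
| sort decrypt_pct
```

Rules at the top of this table decrypt the smallest share of their traffic and are the first places to check against policy intent.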
            {
              "i": "10.11.21",
              "n": "Palo Alto Zone-Based Firewall Violations",
              "c": "high",
              "f": "intermediate",
              "v": "Zone violations reveal lateral movement attempts and policy gaps between segments.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:traffic`",
              "q": "index=pan sourcetype=\"pan:traffic\" action=deny\n| stats count by src_zone, dst_zone, rule\n| sort -count",
              "m": "Use traffic logs with zone fields. Baseline expected east-west patterns. Alert on denies involving sensitive zones.",
              "z": "Heatmap (zone pairs), Bar chart (rules).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:traffic`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse traffic logs with zone fields. Baseline expected east-west patterns. Alert on denies involving sensitive zones.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:traffic\" action=deny\n| stats count by src_zone, dst_zone, rule\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Palo Alto Zone-Based Firewall Violations** — Zone violations reveal lateral movement attempts and policy gaps between segments.\n\nDocumented **Data sources**: `sourcetype=pan:traffic`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:traffic\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_zone, dst_zone, rule** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto Zone-Based Firewall Violations** — Zone violations reveal lateral movement attempts and policy gaps between segments.\n\nDocumented **Data sources**: `sourcetype=pan:traffic`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (zone pairs), Bar chart (rules).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto Zone-Based Firewall Violations\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
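For the "alert on denies involving sensitive zones" step in the zone-violation use case above, a hedged variant scoped to named zones; `pci` and `dmz` are placeholders for your own sensitive segments, and the count threshold is illustrative:

```spl
index=pan sourcetype="pan:traffic" action=deny (src_zone IN ("pci","dmz") OR dst_zone IN ("pci","dmz"))
| stats count earliest(_time) as first_seen latest(_time) as last_seen by src_zone, dst_zone, src, dest, rule
| where count>10
| convert ctime(first_seen) ctime(last_seen)
```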
            {
              "i": "10.11.22",
              "n": "Palo Alto Cortex XDR Incident Trending",
              "c": "high",
              "f": "intermediate",
              "v": "XDR incident trending measures detection pipeline load and emerging attack types.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=cortex:xdr_incident`",
              "q": "index=endpoint sourcetype=\"cortex:xdr_incident\"\n| timechart span=1h count by severity\n| where count>0",
              "m": "Ingest Cortex XDR incidents via API or SIEM export. Normalize severity and MITRE tactics. Trend open versus closed backlog.",
              "z": "Line chart (incidents), Stacked bar (severity).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=cortex:xdr_incident`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Cortex XDR incidents via API or SIEM export. Normalize severity and MITRE tactics. Trend open versus closed backlog.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"cortex:xdr_incident\"\n| timechart span=1h count by severity\n| where count>0\n```\n\nUnderstanding this SPL\n\n**Palo Alto Cortex XDR Incident Trending** — XDR incident trending measures detection pipeline load and emerging attack types.\n\nDocumented **Data sources**: `sourcetype=cortex:xdr_incident`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: cortex:xdr_incident. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"cortex:xdr_incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by severity** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where count>0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto Cortex XDR Incident Trending** — XDR incident trending measures detection pipeline load and emerging attack types.\n\nDocumented **Data sources**: `sourcetype=cortex:xdr_incident`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (incidents), Stacked bar (severity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Cortex XDR data and alerts to stay ahead of the risks described in \"Palo Alto Cortex XDR Incident Trending\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity span=1h | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
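Step 1 of the XDR use case above also mentions trending the open-versus-closed backlog, which the hourly severity timechart does not show. A minimal sketch, assuming each incident in the export carries an `incident_id` and a lifecycle `status` field; both names are assumptions, so map them to your Cortex XDR feed:

```spl
index=endpoint sourcetype="cortex:xdr_incident"
| stats latest(status) as current_status latest(severity) as severity by incident_id
| stats count as incidents by current_status
```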
            {
              "i": "10.11.23",
              "n": "Palo Alto Data Filtering Policy Violations",
              "c": "high",
              "f": "intermediate",
              "v": "Data filtering catches sensitive file egress at the NGFW edge.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:data`",
              "q": "index=pan sourcetype=\"pan:data\" (action=alert OR action=block)\n| stats count by filter, file_type, user\n| sort -count",
              "m": "Enable data filtering profiles on sensitive rules. Log file transfers with patterns matched. Alert on blocks to critical destinations.",
              "z": "Table (violations), Bar chart (by filter name).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: DLP](https://docs.splunk.com/Documentation/CIM/latest/User/DLP)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:data`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable data filtering profiles on sensitive rules. Log file transfers with patterns matched. Alert on blocks to critical destinations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:data\" (action=alert OR action=block)\n| stats count by filter, file_type, user\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Palo Alto Data Filtering Policy Violations** — Data filtering catches sensitive file egress at the NGFW edge.\n\nDocumented **Data sources**: `sourcetype=pan:data`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:data. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:data\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by filter, file_type, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto Data Filtering Policy Violations** — Data filtering catches sensitive file egress at the NGFW edge.\n\nDocumented **Data sources**: `sourcetype=pan:data`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Data_Loss_Prevention.DLP_Incidents` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Bar chart (by filter name).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto Data Filtering Policy Violations\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "DLP"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.user | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
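To realize the "alert on blocks to critical destinations" step in the data-filtering use case above, one hedged approach joins a lookup of destinations you maintain yourself; the `critical_destinations` lookup name and its `criticality` output field are hypothetical:

```spl
index=pan sourcetype="pan:data" action=block
| lookup critical_destinations dest OUTPUT criticality
| where criticality=="high"
| stats count values(filter) as matched_filters by user, dest
| sort - count
```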
            {
              "i": "10.11.24",
              "n": "Palo Alto DNS Sinkhole Hits",
              "c": "high",
              "f": "intermediate",
              "v": "Sinkhole hits are high-fidelity indicators of compromised clients resolving malicious domains.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:threat`",
              "q": "index=pan sourcetype=\"pan:threat\" (category=\"sinkhole\" OR dest=\"sinkhole_vip\")\n| stats count by src, fqdn\n| sort -count",
              "m": "Correlate threat or DNS logs where resolution points to sinkhole IP. Confirms C2 or malware domain blocks fired.",
              "z": "Table (hosts hitting sinkhole), Map (src).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:threat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate threat or DNS logs where resolution points to sinkhole IP. Confirms C2 or malware domain blocks fired.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:threat\" (category=\"sinkhole\" OR dest=\"sinkhole_vip\")\n| stats count by src, fqdn\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Palo Alto DNS Sinkhole Hits** — Sinkhole hits are high-fidelity indicators of compromised clients resolving malicious domains.\n\nDocumented **Data sources**: `sourcetype=pan:threat`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:threat. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:threat\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, fqdn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Resolution.DNS by DNS.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto DNS Sinkhole Hits** — Sinkhole hits are high-fidelity indicators of compromised clients resolving malicious domains.\n\nDocumented **Data sources**: `sourcetype=pan:threat`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hosts hitting sinkhole), Map (src).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto DNS Sinkhole Hits\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Resolution.DNS by DNS.src | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.25",
              "n": "Palo Alto App-ID Unknown Application Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Unknown applications may hide covert channels or misclassified business traffic.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:traffic`",
              "q": "index=pan sourcetype=\"pan:traffic\" (app=\"unknown-tcp\" OR app=\"unknown-udp\" OR app=\"not-applicable\")\n| stats count by src, dst_ip, dest_port\n| where count>100",
              "m": "Trend unknown App-ID volume. Investigate with packet capture policy; may be custom or encrypted evasion.",
              "z": "Line chart (unknown apps), Table (top flows).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:traffic`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrend unknown App-ID volume. Investigate with packet capture policy; may be custom or encrypted evasion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:traffic\" (app=\"unknown-tcp\" OR app=\"unknown-udp\" OR app=\"not-applicable\")\n| stats count by src, dst_ip, dest_port\n| where count>100\n```\n\nUnderstanding this SPL\n\n**Palo Alto App-ID Unknown Application Detection** — Unknown applications may hide covert channels or misclassified business traffic.\n\nDocumented **Data sources**: `sourcetype=pan:traffic`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:traffic\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, dst_ip, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>100` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto App-ID Unknown Application Detection** — Unknown applications may hide covert channels or misclassified business traffic.\n\nDocumented **Data sources**: `sourcetype=pan:traffic`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (unknown apps), Table (top flows).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto App-ID Unknown Application Detection\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.26",
              "n": "Palo Alto User-ID Mapping Failures",
              "c": "high",
              "f": "intermediate",
              "v": "User-ID gaps weaken policy enforcement and logging attribution for investigations.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:useridd`",
              "q": "index=pan sourcetype=\"pan:useridd\" (status=failed OR error=*)\n| stats count by datasource, ip, message\n| sort -count",
              "m": "Ingest User-ID agent, syslog, and XML API mapping logs. Alert when IP-to-user mapping fails for critical subnets.",
              "z": "Table (failures), Timeline.",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:useridd`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest User-ID agent, syslog, and XML API mapping logs. Alert when IP-to-user mapping fails for critical subnets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:useridd\" (status=failed OR error=*)\n| stats count by datasource, ip, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Palo Alto User-ID Mapping Failures** — User-ID gaps weaken policy enforcement and logging attribution for investigations.\n\nDocumented **Data sources**: `sourcetype=pan:useridd`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:useridd. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:useridd\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by datasource, ip, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto User-ID Mapping Failures** — User-ID gaps weaken policy enforcement and logging attribution for investigations.\n\nDocumented **Data sources**: `sourcetype=pan:useridd`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failures), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto User-ID Mapping Failures\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.27",
              "n": "Palo Alto GlobalProtect HIP Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "HIP compliance enforces security baselines before granting full network access.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:hipmatch`",
              "q": "index=pan sourcetype=\"pan:hipmatch\" hip_status!=pass\n| stats count by hip_profile, reason, user\n| sort -count",
              "m": "Track host information profile failures: disk encryption, AV, patch level. Block or remediate non-compliant endpoints.",
              "z": "Bar chart (HIP failures), Table (users).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:hipmatch`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack host information profile failures: disk encryption, AV, patch level. Block or remediate non-compliant endpoints.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:hipmatch\" hip_status!=pass\n| stats count by hip_profile, reason, user\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Palo Alto GlobalProtect HIP Compliance** — HIP compliance enforces security baselines before granting full network access.\n\nDocumented **Data sources**: `sourcetype=pan:hipmatch`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:hipmatch. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:hipmatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hip_profile, reason, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto GlobalProtect HIP Compliance** — HIP compliance enforces security baselines before granting full network access.\n\nDocumented **Data sources**: `sourcetype=pan:hipmatch`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (HIP failures), Table (users).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto GlobalProtect HIP Compliance\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.28",
              "n": "Palo Alto Panorama Push Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Push failures risk inconsistent security posture across managed firewalls.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:panorama`",
              "q": "index=pan sourcetype=\"pan:panorama\" (status=failed OR result=error)\n| stats count by device, job_id, admin\n| sort -_time",
              "m": "Forward Panorama commit and push job logs. Alert on failed pushes that leave devices out of sync with policy.",
              "z": "Table (failed jobs), Timeline.",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:panorama`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Panorama commit and push job logs. Alert on failed pushes that leave devices out of sync with policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:panorama\" (status=failed OR result=error)\n| stats count by device, job_id, admin\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Palo Alto Panorama Push Failures** — Push failures risk inconsistent security posture across managed firewalls.\n\nDocumented **Data sources**: `sourcetype=pan:panorama`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:panorama. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:panorama\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device, job_id, admin** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto Panorama Push Failures** — Push failures risk inconsistent security posture across managed firewalls.\n\nDocumented **Data sources**: `sourcetype=pan:panorama`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed jobs), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto Panorama Push Failures\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.29",
              "n": "Palo Alto Anti-Spyware Detection Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Anti-spyware trending tracks infostealers, callbacks, and grayware activity.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:threat`",
              "q": "index=pan sourcetype=\"pan:threat\" subtype=\"spyware\"\n| timechart span=1h count by threat_id\n| where count>0",
              "m": "Filter threat subtype spyware. Correlate with DNS and URL logs. Tune signatures generating noise.",
              "z": "Line chart (spyware events), Table (top IDs).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:threat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFilter threat subtype spyware. Correlate with DNS and URL logs. Tune signatures generating noise.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:threat\" subtype=\"spyware\"\n| timechart span=1h count by threat_id\n| where count>0\n```\n\nUnderstanding this SPL\n\n**Palo Alto Anti-Spyware Detection Trending** — Anti-spyware trending tracks infostealers, callbacks, and grayware activity.\n\nDocumented **Data sources**: `sourcetype=pan:threat`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:threat. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:threat\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by threat_id** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where count>0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto Anti-Spyware Detection Trending** — Anti-spyware trending tracks infostealers, callbacks, and grayware activity.\n\nDocumented **Data sources**: `sourcetype=pan:threat`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (spyware events), Table (top IDs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto Anti-Spyware Detection Trending\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest span=1h | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.30",
              "n": "Palo Alto Credential Phishing Detections",
              "c": "high",
              "f": "intermediate",
              "v": "Credential phishing detections at the firewall complement email gateway controls.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:threat`",
              "q": "index=pan sourcetype=\"pan:threat\" (threat_name=\"*phish*\" OR category=\"phishing\")\n| stats count by src_user, url_domain, dst_ip\n| sort -count",
              "m": "Use threat names and URL categories for phishing. Send high hits to SOAR for user education workflows.",
              "z": "Table (users targeted), Bar chart (domains).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:threat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse threat names and URL categories for phishing. Send high hits to SOAR for user education workflows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:threat\" (threat_name=\"*phish*\" OR category=\"phishing\")\n| stats count by src_user, url_domain, dst_ip\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Palo Alto Credential Phishing Detections** — Credential phishing detections at the firewall complement email gateway controls.\n\nDocumented **Data sources**: `sourcetype=pan:threat`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:threat. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:threat\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_user, url_domain, dst_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.action, All_Email.src_user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto Credential Phishing Detections** — Credential phishing detections at the firewall complement email gateway controls.\n\nDocumented **Data sources**: `sourcetype=pan:threat`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users targeted), Bar chart (domains).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto Credential Phishing Detections\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.action, All_Email.src_user | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.31",
              "n": "Check Point SandBlast Threat Emulation Results",
              "c": "high",
              "f": "intermediate",
              "v": "SandBlast emulation catches evasive malware before execution on endpoints.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:sandblast`",
              "q": "index=security sourcetype=\"cp:sandblast\"\n| stats count by verdict, file_type, md5\n| search verdict IN (\"Malicious\",\"Infected\")",
              "m": "Ingest SandBlast TE API or syslog. Join file hashes to email and firewall events. Alert on malicious emulation.",
              "z": "Pie chart (verdicts), Table (malicious hashes).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:sandblast`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest SandBlast TE API or syslog. Join file hashes to email and firewall events. Alert on malicious emulation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"cp:sandblast\"\n| stats count by verdict, file_type, md5\n| search verdict IN (\"Malicious\",\"Infected\")\n```\n\nUnderstanding this SPL\n\n**Check Point SandBlast Threat Emulation Results** — SandBlast emulation catches evasive malware before execution on endpoints.\n\nDocumented **Data sources**: `sourcetype=cp:sandblast`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: cp:sandblast. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"cp:sandblast\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by verdict, file_type, md5** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point SandBlast Threat Emulation Results** — SandBlast emulation catches evasive malware before execution on endpoints.\n\nDocumented **Data sources**: `sourcetype=cp:sandblast`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (verdicts), Table (malicious hashes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point SandBlast Threat Emulation Results\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.32",
              "n": "Check Point Anti-Bot Detection Events",
              "c": "high",
              "f": "intermediate",
              "v": "Anti-bot events reveal C2 and automated malware behavior.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:antibot`",
              "q": "index=security sourcetype=\"cp:antibot\"\n| stats count by bot_name, action, src\n| sort -count",
              "m": "Enable anti-bot blade logging. Map bot families to MITRE. Block or monitor per policy.",
              "z": "Bar chart (bot families), Table (sources).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:antibot`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable anti-bot blade logging. Map bot families to MITRE. Block or monitor per policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"cp:antibot\"\n| stats count by bot_name, action, src\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Check Point Anti-Bot Detection Events** — Anti-bot events reveal C2 and automated malware behavior.\n\nDocumented **Data sources**: `sourcetype=cp:antibot`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: cp:antibot. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"cp:antibot\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by bot_name, action, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Anti-Bot Detection Events** — Anti-bot events reveal C2 and automated malware behavior.\n\nDocumented **Data sources**: `sourcetype=cp:antibot`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (bot families), Table (sources).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Anti-Bot Detection Events\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.src | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.33",
              "n": "Check Point SmartEvent Correlation Quality",
              "c": "high",
              "f": "intermediate",
              "v": "Correlation quality metrics keep SmartEvent noise manageable for SOC staffing.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:smartevent`",
              "q": "index=security sourcetype=\"cp:smartevent\"\n| stats count by correlation_name, confidence\n| where confidence IN (\"low\",\"medium\")",
              "m": "Ingest SmartEvent correlation units. Track analyst disposition and false positive rate per correlation. Tune thresholds.",
              "z": "Table (low-confidence rules), Line chart (events per correlation).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:smartevent`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest SmartEvent correlation units. Track analyst disposition and false positive rate per correlation. Tune thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"cp:smartevent\"\n| stats count by correlation_name, confidence\n| where confidence IN (\"low\",\"medium\")\n```\n\nUnderstanding this SPL\n\n**Check Point SmartEvent Correlation Quality** — Correlation quality metrics keep SmartEvent noise manageable for SOC staffing.\n\nDocumented **Data sources**: `sourcetype=cp:smartevent`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: cp:smartevent. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"cp:smartevent\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by correlation_name, confidence** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where confidence IN (\"low\",\"medium\")` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point SmartEvent Correlation Quality** — Correlation quality metrics keep SmartEvent noise manageable for SOC staffing.\n\nDocumented **Data sources**: `sourcetype=cp:smartevent`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (low-confidence rules), Line chart (events per correlation).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point SmartEvent Correlation Quality\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.34",
              "n": "Check Point Compliance Blade Violations",
              "c": "high",
              "f": "intermediate",
              "v": "Compliance blade output proves NGFW configuration meets regulatory baselines.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:compliance`",
              "q": "index=security sourcetype=\"cp:compliance\"\n| stats count by check_name, severity, gateway\n| sort -severity",
              "m": "Onboard compliance blade scans (PCI, STIG-style). Report failing checks by gateway and remediate.",
              "z": "Bar chart (failed checks), Table (gateways).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:compliance`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nOnboard compliance blade scans (PCI, STIG-style). Report failing checks by gateway and remediate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"cp:compliance\"\n| stats count by check_name, severity, gateway\n| sort -severity\n```\n\nUnderstanding this SPL\n\n**Check Point Compliance Blade Violations** — Compliance blade output proves NGFW configuration meets regulatory baselines.\n\nDocumented **Data sources**: `sourcetype=cp:compliance`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: cp:compliance. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"cp:compliance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by check_name, severity, gateway** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Compliance Blade Violations** — Compliance blade output proves NGFW configuration meets regulatory baselines.\n\nDocumented **Data sources**: `sourcetype=cp:compliance`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failed checks), Table (gateways).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Compliance Blade Violations\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.35",
              "n": "Check Point Gateway CPU and Memory Health",
              "c": "high",
              "f": "intermediate",
              "v": "Gateway health prevents inspection bypass under load and connection drops.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:perf`",
              "q": "index=metrics (sourcetype=\"cp:perf\" OR sourcetype=\"cp:snmp\")\n| stats avg(cpu_util) as cpu, avg(mem_pct) as mem by gw_name\n| where cpu>90 OR mem>92",
              "m": "Stream gateway performance metrics. Correlate with inspection load and connection spikes.",
              "z": "Single value, Line chart (CPU/mem), Table (top gateways).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:perf`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStream gateway performance metrics. Correlate with inspection load and connection spikes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=metrics (sourcetype=\"cp:perf\" OR sourcetype=\"cp:snmp\")\n| stats avg(cpu_util) as cpu, avg(mem_pct) as mem by gw_name\n| where cpu>90 OR mem>92\n```\n\nUnderstanding this SPL\n\n**Check Point Gateway CPU and Memory Health** — Gateway health prevents inspection bypass under load and connection drops.\n\nDocumented **Data sources**: `sourcetype=cp:perf`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: metrics; **sourcetype**: cp:perf, cp:snmp. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=metrics, sourcetype=\"cp:perf\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by gw_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cpu>90 OR mem>92` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Gateway CPU and Memory Health** — Gateway health prevents inspection bypass under load and connection drops.\n\nDocumented **Data sources**: `sourcetype=cp:perf`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value, Line chart (CPU/mem), Table (top gateways).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Gateway CPU and Memory Health\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host | sort - agg_value",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.36",
              "n": "Check Point VPN Tunnel Status",
              "c": "high",
              "f": "intermediate",
              "v": "VPN status protects site-to-site and remote user connectivity.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:vpn`",
              "q": "index=network sourcetype=\"cp:vpn\"\n| stats latest(tunnel_state) as st by peer, community\n| search st IN (\"Down\",\"Init\",\"Problem\")",
              "m": "Parse VPN tunnel monitoring from gateway logs. Alert on tunnels down past grace period.",
              "z": "Table (down tunnels), Timeline.",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Network_Sessions](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Sessions)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:vpn`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse VPN tunnel monitoring from gateway logs. Alert on tunnels down past grace period.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cp:vpn\"\n| stats latest(tunnel_state) as st by peer, community\n| search st IN (\"Down\",\"Init\",\"Problem\")\n```\n\nUnderstanding this SPL\n\n**Check Point VPN Tunnel Status** — VPN status protects site-to-site and remote user connectivity.\n\nDocumented **Data sources**: `sourcetype=cp:vpn`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cp:vpn. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cp:vpn\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by peer, community** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.action, All_Sessions.src, All_Sessions.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point VPN Tunnel Status** — VPN status protects site-to-site and remote user connectivity.\n\nDocumented **Data sources**: `sourcetype=cp:vpn`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (down tunnels), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point VPN Tunnel Status\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Sessions"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.action, All_Sessions.src, All_Sessions.user | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.37",
              "n": "Check Point URL Filtering Blocks",
              "c": "high",
              "f": "intermediate",
              "v": "URL filtering blocks malicious and prohibited web content at the gateway.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:urlf`",
              "q": "index=web sourcetype=\"cp:urlf\" action=Block\n| stats count by category, user, url\n| sort 100 -count",
              "m": "Enable URLF with full logging. Map categories to policy intent. Report abuse and malware categories.",
              "z": "Bar chart (categories), Table (top URLs).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:urlf`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable URLF with full logging. Map categories to policy intent. Report abuse and malware categories.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"cp:urlf\" action=Block\n| stats count by category, user, url\n| sort 100 -count\n```\n\nUnderstanding this SPL\n\n**Check Point URL Filtering Blocks** — URL filtering blocks malicious and prohibited web content at the gateway.\n\nDocumented **Data sources**: `sourcetype=cp:urlf`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: cp:urlf. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"cp:urlf\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by category, user, url** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.url | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point URL Filtering Blocks** — URL filtering blocks malicious and prohibited web content at the gateway.\n\nDocumented **Data sources**: `sourcetype=cp:urlf`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (categories), Table (top URLs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point URL Filtering Blocks\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.url | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.38",
              "n": "Check Point Identity Awareness Events",
              "c": "high",
              "f": "intermediate",
              "v": "Identity Awareness ensures user and machine context for firewall policy.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:identity`",
              "q": "index=auth sourcetype=\"cp:identity\"\n| stats count by event, user, src\n| search event IN (\"mapping_failed\",\"ad_query_error\")",
              "m": "Collect Identity Awareness AD/LDAP and captive portal logs. Alert on mapping failures affecting user-based rules.",
              "z": "Table (failures), Timeline.",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:identity`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Identity Awareness AD/LDAP and captive portal logs. Alert on mapping failures affecting user-based rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=auth sourcetype=\"cp:identity\"\n| stats count by event, user, src\n| search event IN (\"mapping_failed\",\"ad_query_error\")\n```\n\nUnderstanding this SPL\n\n**Check Point Identity Awareness Events** — Identity Awareness ensures user and machine context for firewall policy.\n\nDocumented **Data sources**: `sourcetype=cp:identity`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: auth; **sourcetype**: cp:identity. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=auth, sourcetype=\"cp:identity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by event, user, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Identity Awareness Events** — Identity Awareness ensures user and machine context for firewall policy.\n\nDocumented **Data sources**: `sourcetype=cp:identity`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failures), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Identity Awareness Events\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.39",
              "n": "Check Point IPS Signature Coverage",
              "c": "high",
              "f": "intermediate",
              "v": "IPS coverage validation ensures protections are active after upgrades.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:ips`",
              "q": "index=security sourcetype=\"cp:ips\"\n| stats dc(signature_id) as sigs_seen by gateway\n| join max=1 gateway [| inputlookup checkpoint_ips_expected_coverage]\n| where sigs_seen < expected_minimum",
              "m": "Compare active IPS signatures seen in logs to profile expectations. Detect disabled or outdated IPS packages.",
              "z": "Table (coverage gap), Bar chart (per gateway).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:ips`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare active IPS signatures seen in logs to profile expectations. Detect disabled or outdated IPS packages.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"cp:ips\"\n| stats dc(signature_id) as sigs_seen by gateway\n| join max=1 gateway [| inputlookup checkpoint_ips_expected_coverage]\n| where sigs_seen < expected_minimum\n```\n\nUnderstanding this SPL\n\n**Check Point IPS Signature Coverage** — IPS coverage validation ensures protections are active after upgrades.\n\nDocumented **Data sources**: `sourcetype=cp:ips`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: cp:ips. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"cp:ips\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by gateway** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where sigs_seen < expected_minimum` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(IDS_Attacks.signature) as agg_value from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point IPS Signature Coverage** — IPS coverage validation ensures protections are active after upgrades.\n\nDocumented **Data sources**: `sourcetype=cp:ips`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (coverage gap), Bar chart (per gateway).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point IPS Signature Coverage\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t dc(IDS_Attacks.signature) as agg_value from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - agg_value",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.40",
              "n": "Check Point Threat Extraction Results",
              "c": "high",
              "f": "intermediate",
              "v": "Threat extraction neutralizes weaponized office documents and scripts in transit.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:threatemulation`",
              "q": "index=security sourcetype=\"cp:threatemulation\"\n| stats count by extract_action, file_name\n| search extract_action IN (\"blocked\",\"quarantined\")",
              "m": "Log threat extraction (remove active content). Review blocked deliveries for business impact.",
              "z": "Table (blocked files), Line chart (blocks over time).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:threatemulation`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog threat extraction (remove active content). Review blocked deliveries for business impact.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"cp:threatemulation\"\n| stats count by extract_action, file_name\n| search extract_action IN (\"blocked\",\"quarantined\")\n```\n\nUnderstanding this SPL\n\n**Check Point Threat Extraction Results** — Threat extraction neutralizes weaponized office documents and scripts in transit.\n\nDocumented **Data sources**: `sourcetype=cp:threatemulation`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: cp:threatemulation. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"cp:threatemulation\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by extract_action, file_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.file_name | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Threat Extraction Results** — Threat extraction neutralizes weaponized office documents and scripts in transit.\n\nDocumented **Data sources**: `sourcetype=cp:threatemulation`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (blocked files), Line chart (blocks over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Threat Extraction Results\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.file_name | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.41",
              "n": "CrowdStrike Detection Classification Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Classification analysis shows which ATT&CK techniques dominate your environment.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:detection`",
              "q": "index=endpoint sourcetype=\"crowdstrike:detection\"\n| stats count by tactic, technique, objective\n| sort -count",
              "m": "Normalize Falcon detection JSON. Pivot by MITRE mapping and objective. Feed executive and hunt dashboards.",
              "z": "Bar chart (tactics), Table (techniques).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:detection`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize Falcon detection JSON. Pivot by MITRE mapping and objective. Feed executive and hunt dashboards.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"crowdstrike:detection\"\n| stats count by tactic, technique, objective\n| sort -count\n```\n\nUnderstanding this SPL\n\n**CrowdStrike Detection Classification Analysis** — Classification analysis shows which ATT&CK techniques dominate your environment.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:detection`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: crowdstrike:detection. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"crowdstrike:detection\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by tactic, technique, objective** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike Detection Classification Analysis** — Classification analysis shows which ATT&CK techniques dominate your environment.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:detection`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (tactics), Table (techniques).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike Detection Classification Analysis\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.42",
              "n": "CrowdStrike IOA Rule Trigger Trending",
              "c": "high",
              "f": "intermediate",
              "v": "IOA trending balances custom behavioral coverage with analyst workload.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:ioa`",
              "q": "index=endpoint sourcetype=\"crowdstrike:ioa\"\n| timechart span=1h count by rule_id\n| where count>0",
              "m": "Track IOA rule frequency. Retire or tune noisy rules. Escalate rare IOAs quickly.",
              "z": "Line chart (IOA volume), Table (rule_id, description).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:ioa`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack IOA rule frequency. Retire or tune noisy rules. Escalate rare IOAs quickly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"crowdstrike:ioa\"\n| timechart span=1h count by rule_id\n| where count>0\n```\n\nUnderstanding this SPL\n\n**CrowdStrike IOA Rule Trigger Trending** — IOA trending balances custom behavioral coverage with analyst workload.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:ioa`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: crowdstrike:ioa. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"crowdstrike:ioa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by rule_id** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where count>0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike IOA Rule Trigger Trending** — IOA trending balances custom behavioral coverage with analyst workload.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:ioa`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (IOA volume), Table (rule_id, description).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike IOA Rule Trigger Trending\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest span=1h | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.43",
              "n": "CrowdStrike Falcon Discover Asset Inventory Drift",
              "c": "high",
              "f": "intermediate",
              "v": "Asset drift finds shadow systems missing EDR or patching.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:discover`",
              "q": "index=endpoint sourcetype=\"crowdstrike:discover\"\n| stats latest(last_seen) as ls by hostname, aid\n| eval days_since=(now()-ls)/86400\n| where days_since>30",
              "m": "Ingest Discover unmanaged assets. Compare to CMDB. Alert on stale or unknown devices.",
              "z": "Table (stale assets), Single value (drift count).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[CIM: Compute_Inventory](https://docs.splunk.com/Documentation/CIM/latest/User/Compute_Inventory)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:discover`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Discover unmanaged assets. Compare to CMDB. Alert on stale or unknown devices.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"crowdstrike:discover\"\n| stats latest(last_seen) as ls by hostname, aid\n| eval days_since=(now()-ls)/86400\n| where days_since>30\n```\n\nUnderstanding this SPL\n\n**CrowdStrike Falcon Discover Asset Inventory Drift** — Asset drift finds shadow systems missing EDR or patching.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:discover`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: crowdstrike:discover. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"crowdstrike:discover\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname, aid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since>30` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Compute_Inventory.Virtual_OS by Virtual_OS.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike Falcon Discover Asset Inventory Drift** — Asset drift finds shadow systems missing EDR or patching.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:discover`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Compute_Inventory.Virtual_OS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale assets), Single value (drift count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike Falcon Discover Asset Inventory Drift\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Compute_Inventory"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Compute_Inventory.Virtual_OS by Virtual_OS.dest | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.44",
              "n": "CrowdStrike Real-Time Response Audit Trail",
              "c": "high",
              "f": "intermediate",
              "v": "RTR auditing is critical because live response can access sensitive hosts.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:rr`",
              "q": "index=audit sourcetype=\"crowdstrike:rr\"\n| stats count by command, user, target_host\n| search command IN (\"put\",\"get\",\"runscript\",\"map\")",
              "m": "Log every RTR command with analyst identity. Review privileged actions for insider and compromise scenarios.",
              "z": "Table (commands), Timeline.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Splunk_Audit](https://docs.splunk.com/Documentation/CIM/latest/User/Splunk_Audit)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:rr`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog every RTR command with analyst identity. Review privileged actions for insider and compromise scenarios.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"crowdstrike:rr\"\n| stats count by command, user, target_host\n| search command IN (\"put\",\"get\",\"runscript\",\"map\")\n```\n\nUnderstanding this SPL\n\n**CrowdStrike Real-Time Response Audit Trail** — RTR auditing is critical because live response can access sensitive hosts.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:rr`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: crowdstrike:rr. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"crowdstrike:rr\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by command, user, target_host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Splunk_Audit by _time span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike Real-Time Response Audit Trail** — RTR auditing is critical because live response can access sensitive hosts.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:rr`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Splunk_Audit` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (commands), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike Real-Time Response Audit Trail\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Splunk_Audit"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Splunk_Audit by _time span=1h | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.45",
              "n": "CrowdStrike Sensor Health and Connectivity",
              "c": "high",
              "f": "intermediate",
              "v": "Sensor health ensures detections and prevention remain active on every endpoint.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:status`",
              "q": "index=endpoint sourcetype=\"crowdstrike:status\"\n| stats latest(connection_status) as cs by aid, hostname\n| search cs IN (\"offline\",\"unknown\",\"error\")",
              "m": "Use host status API or streaming. Alert when sensors lose cloud connectivity beyond SLA.",
              "z": "Table (offline hosts), Donut (healthy vs not).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse host status API or streaming. Alert when sensors lose cloud connectivity beyond SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"crowdstrike:status\"\n| stats latest(connection_status) as cs by aid, hostname\n| search cs IN (\"offline\",\"unknown\",\"error\")\n```\n\nUnderstanding this SPL\n\n**CrowdStrike Sensor Health and Connectivity** — Sensor health ensures detections and prevention remain active on every endpoint.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:status`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: crowdstrike:status. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"crowdstrike:status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by aid, hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike Sensor Health and Connectivity** — Sensor health ensures detections and prevention remain active on every endpoint.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:status`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (offline hosts), Donut (healthy vs not).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike Sensor Health and Connectivity\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.46",
              "n": "CrowdStrike Incident Severity Distribution",
              "c": "high",
              "f": "intermediate",
              "v": "Severity distribution informs staffing and escalation policy.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:incident`",
              "q": "index=endpoint sourcetype=\"crowdstrike:incident\"\n| stats count by severity, status\n| sort -severity",
              "m": "Aggregate Falcon incident severities. Track mean time to resolve by tier.",
              "z": "Bar chart (severity), Pie chart (open vs closed).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:incident`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate Falcon incident severities. Track mean time to resolve by tier.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"crowdstrike:incident\"\n| stats count by severity, status\n| sort -severity\n```\n\nUnderstanding this SPL\n\n**CrowdStrike Incident Severity Distribution** — Severity distribution informs staffing and escalation policy.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:incident`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: crowdstrike:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"crowdstrike:incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by severity, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike Incident Severity Distribution** — Severity distribution informs staffing and escalation policy.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:incident`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (severity), Pie chart (open vs closed).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike Incident Severity Distribution\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.47",
              "n": "CrowdStrike Quarantine Action Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Quarantine audits prove containment actions and support recovery.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:quarantine`",
              "q": "index=endpoint sourcetype=\"crowdstrike:quarantine\"\n| stats count by action, file_path, user\n| sort -_time",
              "m": "Log quarantine and restore events. Correlate with detection IDs for chain of custody.",
              "z": "Table (actions), Timeline.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:quarantine`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog quarantine and restore events. Correlate with detection IDs for chain of custody.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"crowdstrike:quarantine\"\n| stats count by action, file_path, user\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**CrowdStrike Quarantine Action Audit** — Quarantine audits prove containment actions and support recovery.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:quarantine`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: crowdstrike:quarantine. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"crowdstrike:quarantine\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by action, file_path, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.file_name, Malware_Attacks.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike Quarantine Action Audit** — Quarantine audits prove containment actions and support recovery.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:quarantine`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (actions), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike Quarantine Action Audit\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.file_name, Malware_Attacks.user | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.48",
              "n": "CrowdStrike Prevention Policy Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Policy compliance ensures prevention modes (e.g. extra verbose) apply uniformly.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:policy`",
              "q": "index=endpoint sourcetype=\"crowdstrike:policy\"\n| stats values(policy_id) as policy_id by hostname\n| lookup crowdstrike_golden_policies hostname OUTPUT expected_policy\n| where policy_id!=expected_policy",
              "m": "Compare applied prevention policies to golden templates per OU. Alert on drift after bulk edits.",
              "z": "Table (non-compliant hosts), Bar chart (policy mix).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:policy`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare applied prevention policies to golden templates per OU. Alert on drift after bulk edits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"crowdstrike:policy\"\n| stats values(policy_id) as policy_id by hostname\n| lookup crowdstrike_golden_policies hostname OUTPUT expected_policy\n| where policy_id!=expected_policy\n```\n\nUnderstanding this SPL\n\n**CrowdStrike Prevention Policy Compliance** — Policy compliance ensures prevention modes (e.g. extra verbose) apply uniformly.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:policy`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: crowdstrike:policy. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"crowdstrike:policy\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where policy_id!=expected_policy` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike Prevention Policy Compliance** — Policy compliance ensures prevention modes (e.g. extra verbose) apply uniformly.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:policy`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant hosts), Bar chart (policy mix).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike Prevention Policy Compliance\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.49",
              "n": "CrowdStrike Behavioral IOC Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Behavioral IOC trending measures hunting hypothesis validation over time.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:ioc`",
              "q": "index=endpoint sourcetype=\"crowdstrike:ioc\"\n| timechart span=1d count by ioc_type\n| where count>0",
              "m": "Track custom IOC hits: hash, domain, IP. Retire stale IOCs to reduce noise.",
              "z": "Line chart (IOC hits), Table (top values).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:ioc`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack custom IOC hits: hash, domain, IP. Retire stale IOCs to reduce noise.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"crowdstrike:ioc\"\n| timechart span=1d count by ioc_type\n| where count>0\n```\n\nUnderstanding this SPL\n\n**CrowdStrike Behavioral IOC Trending** — Behavioral IOC trending measures hunting hypothesis validation over time.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:ioc`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: crowdstrike:ioc. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"crowdstrike:ioc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by ioc_type** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where count>0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (IOC hits), Table (top values).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike Behavioral IOC Trending\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
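              "_spl_example": "Stale-IOC sketch for the 'retire stale IOCs' step (assumptions: an `ioc_value` field exists — hypothetical name — and the 30-day cutoff is illustrative):\n\n```spl\nindex=endpoint sourcetype=\"crowdstrike:ioc\" earliest=-90d\n| stats latest(_time) as last_hit by ioc_type, ioc_value\n| where last_hit < relative_time(now(), \"-30d\")\n```",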
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.50",
              "n": "CrowdStrike Spotlight Vulnerability Assessment",
              "c": "high",
              "f": "intermediate",
              "v": "Spotlight ties EDR visibility to prioritized patching and exposure reduction.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:vuln`",
              "q": "index=endpoint sourcetype=\"crowdstrike:vuln\"\n| stats count by cve_id, severity, status\n| where status=\"open\" AND severity IN (\"CRITICAL\",\"HIGH\")",
              "m": "Ingest Spotlight CVE data. Prioritize open criticals with exploit intel. Join to patch tickets.",
              "z": "Table (top CVEs), Bar chart (severity).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:vuln`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Spotlight CVE data. Prioritize open criticals with exploit intel. Join to patch tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"crowdstrike:vuln\"\n| stats count by cve_id, severity, status\n| where status=\"open\" AND severity IN (\"CRITICAL\",\"HIGH\")\n```\n\nUnderstanding this SPL\n\n**CrowdStrike Spotlight Vulnerability Assessment** — Spotlight ties EDR visibility to prioritized patching and exposure reduction.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:vuln`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: crowdstrike:vuln. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"crowdstrike:vuln\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cve_id, severity, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status=\"open\" AND severity IN (\"CRITICAL\",\"HIGH\")` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike Spotlight Vulnerability Assessment** — Spotlight ties EDR visibility to prioritized patching and exposure reduction.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:vuln`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top CVEs), Bar chart (severity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike Spotlight Vulnerability Assessment\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.51",
              "n": "ZIA Web Policy Violation Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Web policy trending shows how often controls block risky or non-compliant traffic.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:web`",
              "q": "index=proxy sourcetype=\"zscaler:web\" action=BLOCKED\n| timechart span=1h count by rule_name\n| where count>0",
              "m": "Ingest ZIA NSS or log streaming. Normalize rule and URL category. Tune noisy categories.",
              "z": "Line chart (blocks), Bar chart (rules).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:web`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest ZIA NSS or log streaming. Normalize rule and URL category. Tune noisy categories.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"zscaler:web\" action=BLOCKED\n| timechart span=1h count by rule_name\n| where count>0\n```\n\nUnderstanding this SPL\n\n**ZIA Web Policy Violation Trending** — Web policy trending shows how often controls block risky or non-compliant traffic.\n\nDocumented **Data sources**: `sourcetype=zscaler:web`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: zscaler:web. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"zscaler:web\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by rule_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where count>0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ZIA Web Policy Violation Trending** — Web policy trending shows how often controls block risky or non-compliant traffic.\n\nDocumented **Data sources**: `sourcetype=zscaler:web`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (blocks), Bar chart (rules).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"ZIA Web Policy Violation Trending\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=1h | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.52",
              "n": "ZIA DLP Incident Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "ZIA DLP analysis reduces data exfiltration via web and SaaS paths.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:dlp`",
              "q": "index=dlp sourcetype=\"zscaler:dlp\" dlp_action=block\n| stats count by policy, channel, user\n| sort -count",
              "m": "Collect DLP events from web and cloud channels. Classify by data identifier severity.",
              "z": "Table (incidents), Bar chart (policies).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: DLP](https://docs.splunk.com/Documentation/CIM/latest/User/DLP)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:dlp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect DLP events from web and cloud channels. Classify by data identifier severity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dlp sourcetype=\"zscaler:dlp\" dlp_action=block\n| stats count by policy, channel, user\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ZIA DLP Incident Analysis** — ZIA DLP analysis reduces data exfiltration via web and SaaS paths.\n\nDocumented **Data sources**: `sourcetype=zscaler:dlp`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dlp; **sourcetype**: zscaler:dlp. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dlp, sourcetype=\"zscaler:dlp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by policy, channel, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ZIA DLP Incident Analysis** — ZIA DLP analysis reduces data exfiltration via web and SaaS paths.\n\nDocumented **Data sources**: `sourcetype=zscaler:dlp`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Data_Loss_Prevention.DLP_Incidents` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (incidents), Bar chart (policies).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"ZIA DLP Incident Analysis\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "DLP"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.user | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.53",
              "n": "Zscaler Cloud Application Discovery",
              "c": "high",
              "f": "intermediate",
              "v": "Cloud app discovery informs CASB and firewall policy for sanctioned tools.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:cloudapp`",
              "q": "index=cloud sourcetype=\"zscaler:cloudapp\"\n| stats dc(app_name) as apps by department\n| sort -apps",
              "m": "Use cloud app visibility feeds. Flag unsanctioned SaaS with high upload volume.",
              "z": "Bar chart (apps per dept), Table (shadow IT).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:cloudapp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse cloud app visibility feeds. Flag unsanctioned SaaS with high upload volume.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"zscaler:cloudapp\"\n| stats dc(app_name) as apps by department\n| sort -apps\n```\n\nUnderstanding this SPL\n\n**Zscaler Cloud Application Discovery** — Cloud app discovery informs CASB and firewall policy for sanctioned tools.\n\nDocumented **Data sources**: `sourcetype=zscaler:cloudapp`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: zscaler:cloudapp. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"zscaler:cloudapp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by department** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Zscaler Cloud Application Discovery** — Cloud app discovery informs CASB and firewall policy for sanctioned tools.\n\nDocumented **Data sources**: `sourcetype=zscaler:cloudapp`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (apps per dept), Table (shadow IT).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"Zscaler Cloud Application Discovery\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.54",
              "n": "ZPA Application Segment Health",
              "c": "high",
              "f": "intermediate",
              "v": "Segment health ensures zero trust access to private apps remains available.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:zpa`",
              "q": "index=network sourcetype=\"zscaler:zpa\"\n| stats latest(segment_status) as st by app_segment\n| search st!=\"HEALTHY\"",
              "m": "Ingest ZPA admin and connector telemetry. Alert when application segments degrade or disconnect.",
              "z": "Table (unhealthy segments), Timeline.",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Network_Sessions](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Sessions)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:zpa`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest ZPA admin and connector telemetry. Alert when application segments degrade or disconnect.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"zscaler:zpa\"\n| stats latest(segment_status) as st by app_segment\n| search st!=\"HEALTHY\"\n```\n\nUnderstanding this SPL\n\n**ZPA Application Segment Health** — Segment health ensures zero trust access to private apps remains available.\n\nDocumented **Data sources**: `sourcetype=zscaler:zpa`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: zscaler:zpa. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"zscaler:zpa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_segment** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.action, All_Sessions.src, All_Sessions.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ZPA Application Segment Health** — Segment health ensures zero trust access to private apps remains available.\n\nDocumented **Data sources**: `sourcetype=zscaler:zpa`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unhealthy segments), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"ZPA Application Segment Health\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Sessions"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.action, All_Sessions.src, All_Sessions.user | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.55",
              "n": "ZDX User Experience Scores",
              "c": "high",
              "f": "intermediate",
              "v": "ZDX scores tie security service performance to user productivity and help desk tickets.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:zdx`",
              "q": "index=ux sourcetype=\"zscaler:zdx\"\n| stats avg(score) as avg_score by user, app\n| where avg_score<70",
              "m": "Onboard ZDX synthetic and real user metrics. Correlate poor scores with ISP or app outages.",
              "z": "Table (low scores), Line chart (score trend).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:zdx`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nOnboard ZDX synthetic and real user metrics. Correlate poor scores with ISP or app outages.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ux sourcetype=\"zscaler:zdx\"\n| stats avg(score) as avg_score by user, app\n| where avg_score<70\n```\n\nUnderstanding this SPL\n\n**ZDX User Experience Scores** — ZDX scores tie security service performance to user productivity and help desk tickets.\n\nDocumented **Data sources**: `sourcetype=zscaler:zdx`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ux; **sourcetype**: zscaler:zdx. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ux, sourcetype=\"zscaler:zdx\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_score<70` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ZDX User Experience Scores** — ZDX scores tie security service performance to user productivity and help desk tickets.\n\nDocumented **Data sources**: `sourcetype=zscaler:zdx`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (low scores), Line chart (score trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"ZDX User Experience Scores\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host | sort - agg_value",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.56",
              "n": "ZIA SSL Inspection Coverage",
              "c": "high",
              "f": "intermediate",
              "v": "SSL coverage ensures encrypted threats are visible where policy allows inspection.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:ssl`",
              "q": "index=proxy sourcetype=\"zscaler:ssl\"\n| stats count as total, sum(eval(if(inspected==1,1,0))) as insp\n| eval pct=round(100*insp/total,2)\n| where pct<85",
              "m": "Calculate inspected versus bypassed SSL sessions. Investigate certificate pinning or legal bypass lists.",
              "z": "Single value (inspection %), Pie chart (inspected vs bypass).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:ssl`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCalculate inspected versus bypassed SSL sessions. Investigate certificate pinning or legal bypass lists.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"zscaler:ssl\"\n| stats count as total, sum(eval(if(inspected==1,1,0))) as insp\n| eval pct=round(100*insp/total,2)\n| where pct<85\n```\n\nUnderstanding this SPL\n\n**ZIA SSL Inspection Coverage** — SSL coverage ensures encrypted threats are visible where policy allows inspection.\n\nDocumented **Data sources**: `sourcetype=zscaler:ssl`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: zscaler:ssl. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"zscaler:ssl\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct<85` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ZIA SSL Inspection Coverage** — SSL coverage ensures encrypted threats are visible where policy allows inspection.\n\nDocumented **Data sources**: `sourcetype=zscaler:ssl`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (inspection %), Pie chart (inspected vs bypass).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"ZIA SSL Inspection Coverage\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.57",
              "n": "ZIA Sandbox Analysis Results",
              "c": "high",
              "f": "intermediate",
              "v": "ZIA sandbox catches unknown files before they reach the browser or endpoint.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:sandbox`",
              "q": "index=security sourcetype=\"zscaler:sandbox\"\n| stats count by verdict, file_type\n| search verdict=\"Malicious\"",
              "m": "Forward sandbox detonation results from ZIA. Chain to user and destination for incident creation.",
              "z": "Table (malicious files), Bar chart (verdicts).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:sandbox`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward sandbox detonation results from ZIA. Chain to user and destination for incident creation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"zscaler:sandbox\"\n| stats count by verdict, file_type\n| search verdict=\"Malicious\"\n```\n\nUnderstanding this SPL\n\n**ZIA Sandbox Analysis Results** — ZIA sandbox catches unknown files before they reach the browser or endpoint.\n\nDocumented **Data sources**: `sourcetype=zscaler:sandbox`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: zscaler:sandbox. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"zscaler:sandbox\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by verdict, file_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ZIA Sandbox Analysis Results** — ZIA sandbox catches unknown files before they reach the browser or endpoint.\n\nDocumented **Data sources**: `sourcetype=zscaler:sandbox`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (malicious files), Bar chart (verdicts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"ZIA Sandbox Analysis Results\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.58",
              "n": "Zscaler Cloud Firewall Rule Hit Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Cloud firewall hits extend NGFW-style control to roaming and branch users.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:fw`",
              "q": "index=firewall sourcetype=\"zscaler:fw\"\n| stats count by rule_label, action, dest_country\n| sort -count",
              "m": "Use cloud firewall logs for non-web TCP/UDP. Identify noisy rules and geo anomalies.",
              "z": "Bar chart (rules), Map (dest country).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:fw`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse cloud firewall logs for non-web TCP/UDP. Identify noisy rules and geo anomalies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"zscaler:fw\"\n| stats count by rule_label, action, dest_country\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Zscaler Cloud Firewall Rule Hit Analysis** — Cloud firewall hits extend NGFW-style control to roaming and branch users.\n\nDocumented **Data sources**: `sourcetype=zscaler:fw`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: zscaler:fw. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"zscaler:fw\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rule_label, action, dest_country** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Zscaler Cloud Firewall Rule Hit Analysis** — Cloud firewall hits extend NGFW-style control to roaming and branch users.\n\nDocumented **Data sources**: `sourcetype=zscaler:fw`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (rules), Map (dest country).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"Zscaler Cloud Firewall Rule Hit Analysis\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.59",
              "n": "ZPA Connector Health",
              "c": "high",
              "f": "intermediate",
              "v": "Connector health is prerequisite for ZPA private application reachability.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:connector`",
              "q": "index=network sourcetype=\"zscaler:connector\"\n| stats latest(status) as st by connector_name\n| search st!=\"ONLINE\"",
              "m": "Monitor private service edge and app connector heartbeats. Page on offline connectors affecting access.",
              "z": "Table (connectors), Single value (offline count).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Network_Sessions](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Sessions)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:connector`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor private service edge and app connector heartbeats. Page on offline connectors affecting access.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"zscaler:connector\"\n| stats latest(status) as st by connector_name\n| search st!=\"ONLINE\"\n```\n\nUnderstanding this SPL\n\n**ZPA Connector Health** — Connector health is prerequisite for ZPA private application reachability.\n\nDocumented **Data sources**: `sourcetype=zscaler:connector`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: zscaler:connector. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"zscaler:connector\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by connector_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.action, All_Sessions.src, All_Sessions.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ZPA Connector Health** — Connector health is prerequisite for ZPA private application reachability.\n\nDocumented **Data sources**: `sourcetype=zscaler:connector`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (connectors), Single value (offline count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"ZPA Connector Health\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Sessions"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.action, All_Sessions.src, All_Sessions.user | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.60",
              "n": "Zscaler Advanced Threat Protection Blocks",
              "c": "high",
              "f": "intermediate",
              "v": "ATP blocks stop multi-stage attacks in the secure web path.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:atp`",
              "q": "index=security sourcetype=\"zscaler:atp\" threat_class=*\n| stats count by threat_name, action\n| sort -count",
              "m": "Ingest ATP blocks for C2, phishing, and malware. Correlate with sandbox and DLP.",
              "z": "Table (threats), Line chart (ATP volume).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:atp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest ATP blocks for C2, phishing, and malware. Correlate with sandbox and DLP.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"zscaler:atp\" threat_class=*\n| stats count by threat_name, action\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Zscaler Advanced Threat Protection Blocks** — ATP blocks stop multi-stage attacks in the secure web path.\n\nDocumented **Data sources**: `sourcetype=zscaler:atp`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: zscaler:atp. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"zscaler:atp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by threat_name, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.signature, IDS_Attacks.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Zscaler Advanced Threat Protection Blocks** — ATP blocks stop multi-stage attacks in the secure web path.\n\nDocumented **Data sources**: `sourcetype=zscaler:atp`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (threats), Line chart (ATP volume).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"Zscaler Advanced Threat Protection Blocks\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.signature, IDS_Attacks.action | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.61",
              "n": "Zscaler Browser Isolation Usage",
              "c": "high",
              "f": "intermediate",
              "v": "Browser isolation usage shows adoption of zero-trust browsing for untrusted sites.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:isolation`",
              "q": "index=proxy sourcetype=\"zscaler:isolation\"\n| stats count by user, url_category\n| sort -count",
              "m": "Track isolation session starts for risky categories. Validate policy covers targeted users.",
              "z": "Bar chart (categories), Table (users).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:isolation`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack isolation session starts for risky categories. Validate policy covers targeted users.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"zscaler:isolation\"\n| stats count by user, url_category\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Zscaler Browser Isolation Usage** — Browser isolation usage shows adoption of zero-trust browsing for untrusted sites.\n\nDocumented **Data sources**: `sourcetype=zscaler:isolation`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: zscaler:isolation. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"zscaler:isolation\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, url_category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Zscaler Browser Isolation Usage** — Browser isolation usage shows adoption of zero-trust browsing for untrusted sites.\n\nDocumented **Data sources**: `sourcetype=zscaler:isolation`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (categories), Table (users).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"Zscaler Browser Isolation Usage\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.62",
              "n": "Zscaler Data Protection Policy Effectiveness",
              "c": "high",
              "f": "intermediate",
              "v": "Effectiveness metrics prove DLP policies catch sensitive data without blocking business.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:dlp`",
              "q": "index=dlp sourcetype=\"zscaler:dlp\"\n| timechart span=1d sum(blocked) as b, sum(allowed) as a\n| eval effectiveness=round(100*b/(b+a),2)",
              "m": "Compare blocked versus allowed DLP events over time. Tune identifiers that rarely fire.",
              "z": "Line chart (effectiveness), Single value (block rate).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[CIM: DLP](https://docs.splunk.com/Documentation/CIM/latest/User/DLP)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:dlp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare blocked versus allowed DLP events over time. Tune identifiers that rarely fire.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dlp sourcetype=\"zscaler:dlp\"\n| timechart span=1d sum(blocked) as b, sum(allowed) as a\n| eval effectiveness=round(100*b/(b+a),2)\n```\n\nUnderstanding this SPL\n\n**Zscaler Data Protection Policy Effectiveness** — Effectiveness metrics prove DLP policies catch sensitive data without blocking business.\n\nDocumented **Data sources**: `sourcetype=zscaler:dlp`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dlp; **sourcetype**: zscaler:dlp. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dlp, sourcetype=\"zscaler:dlp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **effectiveness** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.action, DLP_Incidents.src, DLP_Incidents.user span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Zscaler Data Protection Policy Effectiveness** — Effectiveness metrics prove DLP policies catch sensitive data without blocking business.\n\nDocumented **Data sources**: `sourcetype=zscaler:dlp`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Data_Loss_Prevention.DLP_Incidents` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (effectiveness), Single value (block rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"Zscaler Data Protection Policy Effectiveness\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "DLP"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.action, DLP_Incidents.src, DLP_Incidents.user span=1d | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.100",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA unknown is enforced — Splunk UC-10.11.62: Zscaler Data Protection Policy Effectiveness.",
                  "ea": "Saved search 'UC-10.11.62' running on sourcetype=zscaler:dlp, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-10.11.62: Zscaler Data Protection Policy Effectiveness.",
                  "ea": "Saved search 'UC-10.11.62' running on sourcetype=zscaler:dlp, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.63",
              "n": "VMware Carbon Black Binary Reputation Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Binary reputation prioritizes investigation toward untrusted code execution.",
              "t": "`Splunk_TA_vmware_carbonblack`",
              "d": "`sourcetype=carbonblack:binary`",
              "q": "index=endpoint sourcetype=\"carbonblack:binary\"\n| stats count by reputation, signed_status, publisher\n| sort -count",
              "m": "Ingest reputation events from CB Enterprise. Flag unsigned or banned publishers at scale.",
              "z": "Bar chart (reputation), Table (unsigned binaries).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware_carbonblack`.\n• Ensure the following data sources are available: `sourcetype=carbonblack:binary`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest reputation events from CB Enterprise. Flag unsigned or banned publishers at scale.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"carbonblack:binary\"\n| stats count by reputation, signed_status, publisher\n| sort -count\n```\n\nUnderstanding this SPL\n\n**VMware Carbon Black Binary Reputation Analysis** — Binary reputation prioritizes investigation toward untrusted code execution.\n\nDocumented **Data sources**: `sourcetype=carbonblack:binary`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: carbonblack:binary. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"carbonblack:binary\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by reputation, signed_status, publisher** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VMware Carbon Black Binary Reputation Analysis** — Binary reputation prioritizes investigation toward untrusted code execution.\n\nDocumented **Data sources**: `sourcetype=carbonblack:binary`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (reputation), Table (unsigned binaries).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Carbon Black data and alerts to stay ahead of the risks described in \"VMware Carbon Black Binary Reputation Analysis\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.64",
              "n": "Carbon Black Live Response Session Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Live Response audits satisfy privileged access reviews for EDR consoles.",
              "t": "`Splunk_TA_vmware_carbonblack`",
              "d": "`sourcetype=carbonblack:liveresponse`",
              "q": "index=audit sourcetype=\"carbonblack:liveresponse\"\n| stats count by command, analyst, sensor_id\n| sort -_time",
              "m": "Log every Live Response command with session ID. Review for lateral live response abuse.",
              "z": "Table (sessions), Timeline.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Splunk_Audit](https://docs.splunk.com/Documentation/CIM/latest/User/Splunk_Audit)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware_carbonblack`.\n• Ensure the following data sources are available: `sourcetype=carbonblack:liveresponse`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog every Live Response command with session ID. Review for lateral live response abuse.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"carbonblack:liveresponse\"\n| stats count by command, analyst, sensor_id\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Carbon Black Live Response Session Audit** — Live Response audits satisfy privileged access reviews for EDR consoles.\n\nDocumented **Data sources**: `sourcetype=carbonblack:liveresponse`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: carbonblack:liveresponse. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"carbonblack:liveresponse\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by command, analyst, sensor_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Splunk_Audit by _time span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Carbon Black Live Response Session Audit** — Live Response audits satisfy privileged access reviews for EDR consoles.\n\nDocumented **Data sources**: `sourcetype=carbonblack:liveresponse`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Splunk_Audit` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sessions), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Carbon Black data and alerts to stay ahead of the risks described in \"Carbon Black Live Response Session Audit\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Splunk_Audit"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Splunk_Audit by _time span=1h | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.65",
              "n": "Carbon Black Watchlist Hit Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Watchlist trending validates threat intel relevance to your estate.",
              "t": "`Splunk_TA_vmware_carbonblack`",
              "d": "`sourcetype=carbonblack:watchlist`",
              "q": "index=endpoint sourcetype=\"carbonblack:watchlist\"\n| timechart span=1h count by watchlist_id\n| where count>0",
              "m": "Track IOC watchlist hits from intel feeds. Expire stale lists to reduce volume.",
              "z": "Line chart (hits), Table (watchlist, IOC).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware_carbonblack`.\n• Ensure the following data sources are available: `sourcetype=carbonblack:watchlist`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack IOC watchlist hits from intel feeds. Expire stale lists to reduce volume.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"carbonblack:watchlist\"\n| timechart span=1h count by watchlist_id\n| where count>0\n```\n\nUnderstanding this SPL\n\n**Carbon Black Watchlist Hit Trending** — Watchlist trending validates threat intel relevance to your estate.\n\nDocumented **Data sources**: `sourcetype=carbonblack:watchlist`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: carbonblack:watchlist. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"carbonblack:watchlist\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by watchlist_id** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where count>0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (hits), Table (watchlist, IOC).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Carbon Black data and alerts to stay ahead of the risks described in \"Carbon Black Watchlist Hit Trending\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.66",
              "n": "Carbon Black Alert Severity Distribution",
              "c": "high",
              "f": "intermediate",
              "v": "Severity distribution balances SOC workload and SLA design.",
              "t": "`Splunk_TA_vmware_carbonblack`",
              "d": "`sourcetype=carbonblack:alert`",
              "q": "index=endpoint sourcetype=\"carbonblack:alert\"\n| stats count by severity, type\n| sort -severity",
              "m": "Normalize CB alert schema. Route criticals to on-call. Report mean time to triage.",
              "z": "Pie chart (severity), Bar chart (alert types).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware_carbonblack`.\n• Ensure the following data sources are available: `sourcetype=carbonblack:alert`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize CB alert schema. Route criticals to on-call. Report mean time to triage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"carbonblack:alert\"\n| stats count by severity, type\n| sort -severity\n```\n\nUnderstanding this SPL\n\n**Carbon Black Alert Severity Distribution** — Severity distribution balances SOC workload and SLA design.\n\nDocumented **Data sources**: `sourcetype=carbonblack:alert`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: carbonblack:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"carbonblack:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by severity, type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.type | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Carbon Black Alert Severity Distribution** — Severity distribution balances SOC workload and SLA design.\n\nDocumented **Data sources**: `sourcetype=carbonblack:alert`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (severity), Bar chart (alert types).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Carbon Black data and alerts to stay ahead of the risks described in \"Carbon Black Alert Severity Distribution\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.type | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.67",
              "n": "Carbon Black Sensor Communication Health",
              "c": "high",
              "f": "intermediate",
              "v": "Sensor communication health preserves visibility and response capability.",
              "t": "`Splunk_TA_vmware_carbonblack`",
              "d": "`sourcetype=carbonblack:sensor`",
              "q": "index=endpoint sourcetype=\"carbonblack:sensor\"\n| stats latest(comm_status) as cs by hostname\n| search cs!=\"active\"",
              "m": "Monitor sensor check-in and upgrade state. Alert on sensors not communicating >24h.",
              "z": "Table (offline sensors), Donut chart (health).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware_carbonblack`.\n• Ensure the following data sources are available: `sourcetype=carbonblack:sensor`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor sensor check-in and upgrade state. Alert on sensors not communicating >24h.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"carbonblack:sensor\"\n| stats latest(comm_status) as cs by hostname\n| search cs!=\"active\"\n```\n\nUnderstanding this SPL\n\n**Carbon Black Sensor Communication Health** — Sensor communication health preserves visibility and response capability.\n\nDocumented **Data sources**: `sourcetype=carbonblack:sensor`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: carbonblack:sensor. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"carbonblack:sensor\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Carbon Black Sensor Communication Health** — Sensor communication health preserves visibility and response capability.\n\nDocumented **Data sources**: `sourcetype=carbonblack:sensor`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (offline sensors), Donut chart (health).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Carbon Black data and alerts to stay ahead of the risks described in \"Carbon Black Sensor Communication Health\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.68",
              "n": "Carbon Black Device Control Policy Violations",
              "c": "high",
              "f": "intermediate",
              "v": "Device control stops data theft and malware via removable media.",
              "t": "`Splunk_TA_vmware_carbonblack`",
              "d": "`sourcetype=carbonblack:device`",
              "q": "index=endpoint sourcetype=\"carbonblack:device\" action=block\n| stats count by device_type, vendor_id, user\n| sort -count",
              "m": "Log USB and peripheral blocks. Investigate repeated blocks for policy exceptions.",
              "z": "Bar chart (device types), Table (users).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware_carbonblack`.\n• Ensure the following data sources are available: `sourcetype=carbonblack:device`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog USB and peripheral blocks. Investigate repeated blocks for policy exceptions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"carbonblack:device\" action=block\n| stats count by device_type, vendor_id, user\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Carbon Black Device Control Policy Violations** — Device control stops data theft and malware via removable media.\n\nDocumented **Data sources**: `sourcetype=carbonblack:device`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: carbonblack:device. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"carbonblack:device\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_type, vendor_id, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Carbon Black Device Control Policy Violations** — Device control stops data theft and malware via removable media.\n\nDocumented **Data sources**: `sourcetype=carbonblack:device`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (device types), Table (users).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Carbon Black data and alerts to stay ahead of the risks described in \"Carbon Black Device Control Policy Violations\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.69",
              "n": "Carbon Black Network Isolation Events",
              "c": "high",
              "f": "intermediate",
              "v": "Isolation events contain active threats while preserving forensic options.",
              "t": "`Splunk_TA_vmware_carbonblack`",
              "d": "`sourcetype=carbonblack:isolation`",
              "q": "index=endpoint sourcetype=\"carbonblack:isolation\"\n| stats count by action, reason, hostname\n| sort -_time",
              "m": "Track isolate and release events with analyst or automated source. Document for IR.",
              "z": "Timeline (isolations), Table (hosts).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware_carbonblack`.\n• Ensure the following data sources are available: `sourcetype=carbonblack:isolation`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack isolate and release events with analyst or automated source. Document for IR.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"carbonblack:isolation\"\n| stats count by action, reason, hostname\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Carbon Black Network Isolation Events** — Isolation events contain active threats while preserving forensic options.\n\nDocumented **Data sources**: `sourcetype=carbonblack:isolation`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: carbonblack:isolation. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"carbonblack:isolation\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by action, reason, hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Carbon Black Network Isolation Events** — Isolation events contain active threats while preserving forensic options.\n\nDocumented **Data sources**: `sourcetype=carbonblack:isolation`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (isolations), Table (hosts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Carbon Black data and alerts to stay ahead of the risks described in \"Carbon Black Network Isolation Events\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.dest | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.70",
              "n": "Carbon Black Audit Log Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Audit log analysis protects the EDR management plane from unauthorized access.",
              "t": "`Splunk_TA_vmware_carbonblack`",
              "d": "`sourcetype=carbonblack:audit`",
              "q": "index=audit sourcetype=\"carbonblack:audit\"\n| search (action=login_failed OR action=permission_denied)\n| stats count by user, src\n| where count>10",
              "m": "Ingest console and API audit logs. Alert on failed admin logins and API key abuse patterns.",
              "z": "Table (failed access), Timeline.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware_carbonblack`.\n• Ensure the following data sources are available: `sourcetype=carbonblack:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest console and API audit logs. Alert on failed admin logins and API key abuse patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"carbonblack:audit\"\n| search (action=login_failed OR action=permission_denied)\n| stats count by user, src\n| where count>10\n```\n\nUnderstanding this SPL\n\n**Carbon Black Audit Log Analysis** — Audit log analysis protects the EDR management plane from unauthorized access.\n\nDocumented **Data sources**: `sourcetype=carbonblack:audit`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: carbonblack:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"carbonblack:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>10` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Carbon Black Audit Log Analysis** — Audit log analysis protects the EDR management plane from unauthorized access.\n\nDocumented **Data sources**: `sourcetype=carbonblack:audit`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed access), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Carbon Black data and alerts to stay ahead of the risks described in \"Carbon Black Audit Log Analysis\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.71",
              "n": "Tenable Nessus Vulnerability Scan Coverage Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Scan coverage ensures vulnerability data stays current for risk scoring.",
              "t": "`Splunk_TA_nessus`",
              "d": "`sourcetype=nessus:scan`",
              "q": "index=vuln sourcetype=\"nessus:scan\"\n| stats latest(finish_time) as last_scan by asset_uuid, scan_name\n| eval overdue=if(now()-last_scan > 2592000,1,0)\n| where overdue=1",
              "m": "Track last successful authenticated scan per asset group. Alert when assets exceed 30-day window.",
              "z": "Table (overdue assets), Single value (coverage %).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nessus`.\n• Ensure the following data sources are available: `sourcetype=nessus:scan`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack last successful authenticated scan per asset group. Alert when assets exceed 30-day window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=\"nessus:scan\"\n| stats latest(finish_time) as last_scan by asset_uuid, scan_name\n| eval overdue=if(now()-last_scan > 2592000,1,0)\n| where overdue=1\n```\n\nUnderstanding this SPL\n\n**Tenable Nessus Vulnerability Scan Coverage Tracking** — Scan coverage ensures vulnerability data stays current for risk scoring.\n\nDocumented **Data sources**: `sourcetype=nessus:scan`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: nessus:scan. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=\"nessus:scan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by asset_uuid, scan_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overdue=1` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tenable Nessus Vulnerability Scan Coverage Tracking** — Scan coverage ensures vulnerability data stays current for risk scoring.\n\nDocumented **Data sources**: `sourcetype=nessus:scan`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue assets), Single value (coverage %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tenable data and alerts to stay ahead of the risks described in \"Tenable Nessus Vulnerability Scan Coverage Tracking\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Nessus",
                "id": 2804,
                "url": "https://splunkbase.splunk.com/app/2804"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.72",
              "n": "Tenable Compliance Scan Results",
              "c": "high",
              "f": "intermediate",
              "v": "Compliance scans prove configuration alignment to CIS and regulatory frameworks.",
              "t": "`Splunk_TA_nessus`",
              "d": "`sourcetype=nessus:compliance`",
              "q": "index=vuln sourcetype=\"nessus:compliance\"\n| stats count by benchmark, status\n| where status=\"FAILED\"",
              "m": "Ingest .nessus compliance exports or API. Map failures to owners for GRC reporting.",
              "z": "Bar chart (failed controls), Table (assets).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nessus`.\n• Ensure the following data sources are available: `sourcetype=nessus:compliance`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest .nessus compliance exports or API. Map failures to owners for GRC reporting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=\"nessus:compliance\"\n| stats count by benchmark, status\n| where status=\"FAILED\"\n```\n\nUnderstanding this SPL\n\n**Tenable Compliance Scan Results** — Compliance scans prove configuration alignment to CIS and regulatory frameworks.\n\nDocumented **Data sources**: `sourcetype=nessus:compliance`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: nessus:compliance. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=\"nessus:compliance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by benchmark, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status=\"FAILED\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.status | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tenable Compliance Scan Results** — Compliance scans prove configuration alignment to CIS and regulatory frameworks.\n\nDocumented **Data sources**: `sourcetype=nessus:compliance`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failed controls), Table (assets).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tenable data and alerts to stay ahead of the risks described in \"Tenable Compliance Scan Results\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.status | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Nessus",
                "id": 2804,
                "url": "https://splunkbase.splunk.com/app/2804"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.73",
              "n": "Tenable Web Application Scan Findings",
              "c": "high",
              "f": "intermediate",
              "v": "WAS findings reduce exploitable web flaws before external discovery.",
              "t": "`Splunk_TA_nessus`",
              "d": "`sourcetype=nessus:was`",
              "q": "index=vuln sourcetype=\"nessus:was\"\n| stats count by severity, owasp_category, uri\n| where severity IN (\"Critical\",\"High\")",
              "m": "Onboard WAS plugin output. Prioritize OWASP top 10 categories with open criticals.",
              "z": "Table (findings), Bar chart (OWASP).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nessus`.\n• Ensure the following data sources are available: `sourcetype=nessus:was`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nOnboard WAS plugin output. Prioritize OWASP top 10 categories with open criticals.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=\"nessus:was\"\n| stats count by severity, owasp_category, uri\n| where severity IN (\"Critical\",\"High\")\n```\n\nUnderstanding this SPL\n\n**Tenable Web Application Scan Findings** — WAS findings reduce exploitable web flaws before external discovery.\n\nDocumented **Data sources**: `sourcetype=nessus:was`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: nessus:was. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=\"nessus:was\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by severity, owasp_category, uri** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where severity IN (\"Critical\",\"High\")` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tenable Web Application Scan Findings** — WAS findings reduce exploitable web flaws before external discovery.\n\nDocumented **Data sources**: `sourcetype=nessus:was`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (findings), Bar chart (OWASP).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tenable data and alerts to stay ahead of the risks described in \"Tenable Web Application Scan Findings\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Nessus",
                "id": 2804,
                "url": "https://splunkbase.splunk.com/app/2804"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.74",
              "n": "Tenable Remediation SLA Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "SLA monitoring drives accountability for patch and mitigation timelines.",
              "t": "`Splunk_TA_nessus`",
              "d": "`sourcetype=tenable:remediation`",
              "q": "index=vuln sourcetype=\"tenable:remediation\"\n| eval breach=if((now()-detected_time) > 604800,1,0)\n| stats sum(breach) as breaches by owner\n| where breaches>0",
              "m": "Join vulnerability records to SLA lookup by severity. Escalate overdue remediations.",
              "z": "Table (SLA breaches), Line chart (age distribution).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nessus`.\n• Ensure the following data sources are available: `sourcetype=tenable:remediation`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nJoin vulnerability records to SLA lookup by severity. Escalate overdue remediations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=\"tenable:remediation\"\n| eval breach=if((now()-detected_time) > 604800,1,0)\n| stats sum(breach) as breaches by owner\n| where breaches>0\n```\n\nUnderstanding this SPL\n\n**Tenable Remediation SLA Monitoring** — SLA monitoring drives accountability for patch and mitigation timelines.\n\nDocumented **Data sources**: `sourcetype=tenable:remediation`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: tenable:remediation. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=\"tenable:remediation\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by owner** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where breaches>0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tenable Remediation SLA Monitoring** — SLA monitoring drives accountability for patch and mitigation timelines.\n\nDocumented **Data sources**: `sourcetype=tenable:remediation`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (SLA breaches), Line chart (age distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tenable data and alerts to stay ahead of the risks described in \"Tenable Remediation SLA Monitoring\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - agg_value",
              "e": [
                "tenable"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Nessus",
                "id": 2804,
                "url": "https://splunkbase.splunk.com/app/2804"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.75",
              "n": "Tenable CVSS Score Distribution Trending",
              "c": "high",
              "f": "intermediate",
              "v": "CVSS trending contextualizes scanner noise versus real risk concentration.",
              "t": "`Splunk_TA_nessus`",
              "d": "`sourcetype=tenable:vm`",
              "q": "index=vuln sourcetype=\"tenable:vm\"\n| bin cvss_base_score span=1\n| timechart span=1w count by cvss_base_score",
              "m": "Snapshot CVSS histogram weekly. Watch for shift toward higher scores after new plugins.",
              "z": "Histogram (CVSS), Line chart (critical count).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nessus`.\n• Ensure the following data sources are available: `sourcetype=tenable:vm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSnapshot CVSS histogram weekly. Watch for shift toward higher scores after new plugins.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=\"tenable:vm\"\n| bin cvss_base_score span=1\n| timechart span=1w count by cvss_base_score\n```\n\nUnderstanding this SPL\n\n**Tenable CVSS Score Distribution Trending** — CVSS trending contextualizes scanner noise versus real risk concentration.\n\nDocumented **Data sources**: `sourcetype=tenable:vm`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: tenable:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=\"tenable:vm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `timechart` plots the metric over time using **span=1w** buckets with a separate series **by cvss_base_score** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest span=1w | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tenable CVSS Score Distribution Trending** — CVSS trending contextualizes scanner noise versus real risk concentration.\n\nDocumented **Data sources**: `sourcetype=tenable:vm`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (CVSS), Line chart (critical count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tenable data and alerts to stay ahead of the risks described in \"Tenable CVSS Score Distribution Trending\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest span=1w | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Nessus",
                "id": 2804,
                "url": "https://splunkbase.splunk.com/app/2804"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.76",
              "n": "Tenable Scan Schedule Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Schedule compliance prevents blind spots between assessment cycles.",
              "t": "`Splunk_TA_nessus`",
              "d": "`sourcetype=nessus:scan_job`",
              "q": "index=vuln sourcetype=\"nessus:scan_job\"\n| stats latest(run_status) as st, latest(scheduled_time) as sched by job_id\n| search st!=\"completed\" OR now()-sched > 604800",
              "m": "Monitor scheduled scan jobs for missed runs or failures. Fix cred or network issues.",
              "z": "Table (failed jobs), Alert panel.",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nessus`.\n• Ensure the following data sources are available: `sourcetype=nessus:scan_job`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor scheduled scan jobs for missed runs or failures. Fix cred or network issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=\"nessus:scan_job\"\n| stats latest(run_status) as st, latest(scheduled_time) as sched by job_id\n| search st!=\"completed\" OR now()-sched > 604800\n```\n\nUnderstanding this SPL\n\n**Tenable Scan Schedule Compliance** — Schedule compliance prevents blind spots between assessment cycles.\n\nDocumented **Data sources**: `sourcetype=nessus:scan_job`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: nessus:scan_job. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=\"nessus:scan_job\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by job_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tenable Scan Schedule Compliance** — Schedule compliance prevents blind spots between assessment cycles.\n\nDocumented **Data sources**: `sourcetype=nessus:scan_job`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed jobs), Alert panel.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tenable data and alerts to stay ahead of the risks described in \"Tenable Scan Schedule Compliance\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Nessus",
                "id": 2804,
                "url": "https://splunkbase.splunk.com/app/2804"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.77",
              "n": "Tenable Plugin Update Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Plugin currency ensures new CVE checks run in every assessment.",
              "t": "`Splunk_TA_nessus`",
              "d": "`sourcetype=tenable:sc`",
              "q": "index=main sourcetype=\"tenable:sc\"\n| stats latest(plugin_set_version) as ver by scanner\n| lookup tenable_latest_plugins scanner OUTPUT expected\n| where ver!=expected",
              "m": "Compare Security Center plugin feed versions to current. Alert when scanners lag.",
              "z": "Table (outdated scanners), Single value.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nessus`.\n• Ensure the following data sources are available: `sourcetype=tenable:sc`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare Security Center plugin feed versions to current. Alert when scanners lag.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=main sourcetype=\"tenable:sc\"\n| stats latest(plugin_set_version) as ver by scanner\n| lookup tenable_latest_plugins scanner OUTPUT expected\n| where ver!=expected\n```\n\nUnderstanding this SPL\n\n**Tenable Plugin Update Compliance** — Plugin currency ensures new CVE checks run in every assessment.\n\nDocumented **Data sources**: `sourcetype=tenable:sc`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: main; **sourcetype**: tenable:sc. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=main, sourcetype=\"tenable:sc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by scanner** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where ver!=expected` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tenable Plugin Update Compliance** — Plugin currency ensures new CVE checks run in every assessment.\n\nDocumented **Data sources**: `sourcetype=tenable:sc`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (outdated scanners), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tenable data and alerts to stay ahead of the risks described in \"Tenable Plugin Update Compliance\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Nessus",
                "id": 2804,
                "url": "https://splunkbase.splunk.com/app/2804"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.78",
              "n": "Tenable Asset Discovery Reconciliation",
              "c": "high",
              "f": "intermediate",
              "v": "Asset reconciliation aligns VM scope with actual infrastructure for accurate risk.",
              "t": "`Splunk_TA_nessus`",
              "d": "`sourcetype=tenable:asset`",
              "q": "index=vuln sourcetype=\"tenable:asset\"\n| stats values(ip) as ips by hostname\n| lookup cmdb_hosts hostname OUTPUT expected_ip\n| where isnull(expected_ip) OR mvfind(ips, expected_ip)<0",
              "m": "Compare Tenable-discovered assets to CMDB. Find missing or duplicate records.",
              "z": "Table (reconciliation exceptions), Bar chart (by site).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[CIM: Compute_Inventory](https://docs.splunk.com/Documentation/CIM/latest/User/Compute_Inventory)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nessus`.\n• Ensure the following data sources are available: `sourcetype=tenable:asset`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare Tenable-discovered assets to CMDB. Find missing or duplicate records.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=\"tenable:asset\"\n| stats values(ip) as ips by hostname\n| lookup cmdb_hosts hostname OUTPUT expected_ip\n| where isnull(expected_ip) OR mvfind(ips, expected_ip)<0\n```\n\nUnderstanding this SPL\n\n**Tenable Asset Discovery Reconciliation** — Asset reconciliation aligns VM scope with actual infrastructure for accurate risk.\n\nDocumented **Data sources**: `sourcetype=tenable:asset`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: tenable:asset. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=\"tenable:asset\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(expected_ip) OR mvfind(ips, expected_ip)<0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Compute_Inventory.Virtual_OS by Virtual_OS.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tenable Asset Discovery Reconciliation** — Asset reconciliation aligns VM scope with actual infrastructure for accurate risk.\n\nDocumented **Data sources**: `sourcetype=tenable:asset`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Compute_Inventory.Virtual_OS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (reconciliation exceptions), Bar chart (by site).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tenable data and alerts to stay ahead of the risks described in \"Tenable Asset Discovery Reconciliation\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Compute_Inventory"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Compute_Inventory.Virtual_OS by Virtual_OS.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Nessus",
                "id": 2804,
                "url": "https://splunkbase.splunk.com/app/2804"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.79",
              "n": "Tanium Endpoint Compliance Assessment Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Compliance trending shows drift from security baselines across the fleet.",
              "t": "`Splunk_TA_tanium`",
              "d": "`sourcetype=tanium:compliance`",
              "q": "index=endpoint sourcetype=\"tanium:compliance\"\n| bin span=1d _time\n| stats avg(compliance_pct) as compliance_pct by _time, profile\n| where compliance_pct < 95",
              "m": "Ingest Tanium compliance question results. Trend profile scores and alert on regression.",
              "z": "Line chart (compliance), Table (low endpoints).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_tanium`.\n• Ensure the following data sources are available: `sourcetype=tanium:compliance`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Tanium compliance question results. Trend profile scores and alert on regression.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"tanium:compliance\"\n| bin span=1d _time\n| stats avg(compliance_pct) as compliance_pct by _time, profile\n| where compliance_pct < 95\n```\n\nUnderstanding this SPL\n\n**Tanium Endpoint Compliance Assessment Trending** — Compliance trending shows drift from security baselines across the fleet.\n\nDocumented **Data sources**: `sourcetype=tanium:compliance`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: tanium:compliance. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"tanium:compliance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, profile** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where compliance_pct < 95` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tanium Endpoint Compliance Assessment Trending** — Compliance trending shows drift from security baselines across the fleet.\n\nDocumented **Data sources**: `sourcetype=tanium:compliance`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (compliance), Table (low endpoints).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tanium data and alerts to stay ahead of the risks described in \"Tanium Endpoint Compliance Assessment Trending\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Tanium",
                "id": 6076,
                "url": "https://splunkbase.splunk.com/app/6076"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.80",
              "n": "Tanium Patch Deployment Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Patch tracking closes vulnerabilities faster with operational visibility.",
              "t": "`Splunk_TA_tanium`",
              "d": "`sourcetype=tanium:patch`",
              "q": "index=endpoint sourcetype=\"tanium:patch\"\n| stats count by kb_id, status, os\n| where status=\"failed\" OR status=\"pending\"",
              "m": "Track patch deployment packages and reboot requirements. Escalate failed critical KBs.",
              "z": "Bar chart (failures), Table (endpoints pending).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_tanium`.\n• Ensure the following data sources are available: `sourcetype=tanium:patch`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack patch deployment packages and reboot requirements. Escalate failed critical KBs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"tanium:patch\"\n| stats count by kb_id, status, os\n| where status=\"failed\" OR status=\"pending\"\n```\n\nUnderstanding this SPL\n\n**Tanium Patch Deployment Tracking** — Patch tracking closes vulnerabilities faster with operational visibility.\n\nDocumented **Data sources**: `sourcetype=tanium:patch`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: tanium:patch. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"tanium:patch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by kb_id, status, os** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status=\"failed\" OR status=\"pending\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tanium Patch Deployment Tracking** — Patch tracking closes vulnerabilities faster with operational visibility.\n\nDocumented **Data sources**: `sourcetype=tanium:patch`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failures), Table (endpoints pending).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tanium data and alerts to stay ahead of the risks described in \"Tanium Patch Deployment Tracking\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Tanium",
                "id": 6076,
                "url": "https://splunkbase.splunk.com/app/6076"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.81",
              "n": "Tanium Asset Discovery Reconciliation",
              "c": "high",
              "f": "intermediate",
              "v": "Reconciliation ensures every managed asset appears in authoritative inventories.",
              "t": "`Splunk_TA_tanium`",
              "d": "`sourcetype=tanium:asset`",
              "q": "index=endpoint sourcetype=\"tanium:asset\"\n| stats latest(last_seen) as ls by computer_id\n| join max=1 computer_id [| inputlookup cmdb_assets]\n| where isnull(cmdb_id)",
              "m": "Compare Tanium inventory to CMDB. Investigate unmanaged IDs as rogue or stale.",
              "z": "Table (unreconciled), Single value (gap count).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[CIM: Compute_Inventory](https://docs.splunk.com/Documentation/CIM/latest/User/Compute_Inventory)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_tanium`.\n• Ensure the following data sources are available: `sourcetype=tanium:asset`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare Tanium inventory to CMDB. Investigate unmanaged IDs as rogue or stale.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"tanium:asset\"\n| stats latest(last_seen) as ls by computer_id\n| join max=1 computer_id [| inputlookup cmdb_assets]\n| where isnull(cmdb_id)\n```\n\nUnderstanding this SPL\n\n**Tanium Asset Discovery Reconciliation** — Reconciliation ensures every managed asset appears in authoritative inventories.\n\nDocumented **Data sources**: `sourcetype=tanium:asset`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: tanium:asset. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"tanium:asset\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by computer_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(cmdb_id)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Compute_Inventory.Virtual_OS by Virtual_OS.dest, Virtual_OS.status | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tanium Asset Discovery Reconciliation** — Reconciliation ensures every managed asset appears in authoritative inventories.\n\nDocumented **Data sources**: `sourcetype=tanium:asset`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Compute_Inventory.Virtual_OS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unreconciled), Single value (gap count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tanium data and alerts to stay ahead of the risks described in \"Tanium Asset Discovery Reconciliation\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Compute_Inventory"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Compute_Inventory.Virtual_OS by Virtual_OS.dest, Virtual_OS.status | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Tanium",
                "id": 6076,
                "url": "https://splunkbase.splunk.com/app/6076"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.82",
              "n": "Tanium Software Usage Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Software audits reduce attack surface from unused or risky applications.",
              "t": "`Splunk_TA_tanium`",
              "d": "`sourcetype=tanium:software`",
              "q": "index=endpoint sourcetype=\"tanium:software\"\n| stats dc(endpoint) as installs by software_name, version\n| sort -installs",
              "m": "Use software inventory questions. Find prohibited or vulnerable legacy apps.",
              "z": "Bar chart (top software), Table (unapproved).",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_tanium`.\n• Ensure the following data sources are available: `sourcetype=tanium:software`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse software inventory questions. Find prohibited or vulnerable legacy apps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"tanium:software\"\n| stats dc(endpoint) as installs by software_name, version\n| sort -installs\n```\n\nUnderstanding this SPL\n\n**Tanium Software Usage Audit** — Software audits reduce attack surface from unused or risky applications.\n\nDocumented **Data sources**: `sourcetype=tanium:software`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: tanium:software. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"tanium:software\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by software_name, version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tanium Software Usage Audit** — Software audits reduce attack surface from unused or risky applications.\n\nDocumented **Data sources**: `sourcetype=tanium:software`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top software), Table (unapproved).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tanium data and alerts to stay ahead of the risks described in \"Tanium Software Usage Audit\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Tanium",
                "id": 6076,
                "url": "https://splunkbase.splunk.com/app/6076"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.83",
              "n": "Tanium Real-Time Question Response Times",
              "c": "high",
              "f": "intermediate",
              "v": "Response times matter when Tanium is used for incident containment questions.",
              "t": "`Splunk_TA_tanium`",
              "d": "`sourcetype=tanium:interact`",
              "q": "index=endpoint sourcetype=\"tanium:interact\"\n| stats perc95(response_ms) as p95 by question_name\n| where p95>5000",
              "m": "Monitor Interact latency for live investigations. Alert when platform or network degrades.",
              "z": "Line chart (p95 latency), Table (slow questions).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_tanium`.\n• Ensure the following data sources are available: `sourcetype=tanium:interact`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Interact latency for live investigations. Alert when platform or network degrades.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"tanium:interact\"\n| stats perc95(response_ms) as p95 by question_name\n| where p95>5000\n```\n\nUnderstanding this SPL\n\n**Tanium Real-Time Question Response Times** — Response times matter when Tanium is used for incident containment questions.\n\nDocumented **Data sources**: `sourcetype=tanium:interact`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: tanium:interact. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"tanium:interact\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by question_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95>5000` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tanium Real-Time Question Response Times** — Response times matter when Tanium is used for incident containment questions.\n\nDocumented **Data sources**: `sourcetype=tanium:interact`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95 latency), Table (slow questions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tanium data and alerts to stay ahead of the risks described in \"Tanium Real-Time Question Response Times\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Tanium",
                "id": 6076,
                "url": "https://splunkbase.splunk.com/app/6076"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.84",
              "n": "Tanium Module Deployment Health",
              "c": "high",
              "f": "intermediate",
              "v": "Module health guarantees sensors can run latest compliance and threat content.",
              "t": "`Splunk_TA_tanium`",
              "d": "`sourcetype=tanium:module`",
              "q": "index=endpoint sourcetype=\"tanium:module\"\n| stats latest(version) as ver by module_name, endpoint\n| lookup tanium_expected_module_version module_name OUTPUT expected\n| where ver!=expected",
              "m": "Track client tools and content versions. Ensure modules match server expectations.",
              "z": "Table (outdated), Bar chart (module coverage).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_tanium`.\n• Ensure the following data sources are available: `sourcetype=tanium:module`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack client tools and content versions. Ensure modules match server expectations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"tanium:module\"\n| stats latest(version) as ver by module_name, endpoint\n| lookup tanium_expected_module_version module_name OUTPUT expected\n| where ver!=expected\n```\n\nUnderstanding this SPL\n\n**Tanium Module Deployment Health** — Module health guarantees sensors can run latest compliance and threat content.\n\nDocumented **Data sources**: `sourcetype=tanium:module`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: tanium:module. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"tanium:module\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by module_name, endpoint** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where ver!=expected` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tanium Module Deployment Health** — Module health guarantees sensors can run latest compliance and threat content.\n\nDocumented **Data sources**: `sourcetype=tanium:module`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (outdated), Bar chart (module coverage).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tanium data and alerts to stay ahead of the risks described in \"Tanium Module Deployment Health\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Tanium",
                "id": 6076,
                "url": "https://splunkbase.splunk.com/app/6076"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.85",
              "n": "Tanium Network Quarantine Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Quarantine audits document containment for regulators and IR retrospectives.",
              "t": "`Splunk_TA_tanium`",
              "d": "`sourcetype=tanium:quarantine`",
              "q": "index=audit sourcetype=\"tanium:quarantine\"\n| stats count by action, operator, endpoint\n| sort -_time",
              "m": "Log quarantine and unquarantine with user attribution. Review for misuse.",
              "z": "Timeline, Table (actions).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_tanium`.\n• Ensure the following data sources are available: `sourcetype=tanium:quarantine`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog quarantine and unquarantine with user attribution. Review for misuse.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"tanium:quarantine\"\n| stats count by action, operator, endpoint\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Tanium Network Quarantine Audit** — Quarantine audits document containment for regulators and IR retrospectives.\n\nDocumented **Data sources**: `sourcetype=tanium:quarantine`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: tanium:quarantine. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"tanium:quarantine\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by action, operator, endpoint** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tanium Network Quarantine Audit** — Quarantine audits document containment for regulators and IR retrospectives.\n\nDocumented **Data sources**: `sourcetype=tanium:quarantine`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline, Table (actions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tanium data and alerts to stay ahead of the risks described in \"Tanium Network Quarantine Audit\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Tanium",
                "id": 6076,
                "url": "https://splunkbase.splunk.com/app/6076"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.86",
              "n": "Tanium Threat Response Action Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Threat response audits ensure automated remediation executes safely at scale.",
              "t": "`Splunk_TA_tanium`",
              "d": "`sourcetype=tanium:threatresponse`",
              "q": "index=audit sourcetype=\"tanium:threatresponse\"\n| stats count by playbook, outcome, hash\n| search outcome IN (\"failed\",\"partial\")",
              "m": "Capture automated Threat Response actions from Tanium. Investigate failures to restore trust.",
              "z": "Table (failed actions), Bar chart (playbooks).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_tanium`.\n• Ensure the following data sources are available: `sourcetype=tanium:threatresponse`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture automated Threat Response actions from Tanium. Investigate failures to restore trust.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"tanium:threatresponse\"\n| stats count by playbook, outcome, hash\n| search outcome IN (\"failed\",\"partial\")\n```\n\nUnderstanding this SPL\n\n**Tanium Threat Response Action Audit** — Threat response audits ensure automated remediation executes safely at scale.\n\nDocumented **Data sources**: `sourcetype=tanium:threatresponse`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: tanium:threatresponse. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"tanium:threatresponse\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by playbook, outcome, hash** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tanium Threat Response Action Audit** — Threat response audits ensure automated remediation executes safely at scale.\n\nDocumented **Data sources**: `sourcetype=tanium:threatresponse`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed actions), Bar chart (playbooks).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tanium data and alerts to stay ahead of the risks described in \"Tanium Threat Response Action Audit\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Tanium",
                "id": 6076,
                "url": "https://splunkbase.splunk.com/app/6076"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.87",
              "n": "FortiGate DNS Filter Security Profile Blocks",
              "c": "high",
              "f": "intermediate",
              "v": "DNS filter blocks stop resolution of malicious domains before TCP connections form.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_dns`",
              "q": "index=network sourcetype=\"fortinet:fortigate_dns\" action=blocked\n| stats count by threat_name, src, domain\n| sort -count",
              "m": "Enable DNS filter logging for malware and C2 categories. Chain to client remediation.",
              "z": "Bar chart (threat types), Table (top domains).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_dns`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable DNS filter logging for malware and C2 categories. Chain to client remediation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"fortinet:fortigate_dns\" action=blocked\n| stats count by threat_name, src, domain\n| sort -count\n```\n\nUnderstanding this SPL\n\n**FortiGate DNS Filter Security Profile Blocks** — DNS filter blocks stop resolution of malicious domains before TCP connections form.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_dns`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: fortinet:fortigate_dns. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"fortinet:fortigate_dns\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by threat_name, src, domain** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Resolution.DNS by DNS.src, DNS.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate DNS Filter Security Profile Blocks** — DNS filter blocks stop resolution of malicious domains before TCP connections form.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_dns`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (threat types), Table (top domains).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate DNS Filter Security Profile Blocks\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Resolution.DNS by DNS.src, DNS.dest | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.88",
              "n": "FortiGate Botnet C2 Command Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Botnet signature hits often indicate compromised hosts participating in C2.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortigate_ips`",
              "q": "index=firewall sourcetype=\"fortinet:fortigate_ips\" attack=\"*botnet*\"\n| stats count by src, attack\n| sort -count",
              "m": "Tune IPS botnet signatures. Correlate with endpoint EDR for host isolation.",
              "z": "Table (sources), Line chart (events).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortigate_ips`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune IPS botnet signatures. Correlate with endpoint EDR for host isolation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"fortinet:fortigate_ips\" attack=\"*botnet*\"\n| stats count by src, attack\n| sort -count\n```\n\nUnderstanding this SPL\n\n**FortiGate Botnet C2 Command Detection** — Botnet signature hits often indicate compromised hosts participating in C2.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_ips`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: fortinet:fortigate_ips. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"fortinet:fortigate_ips\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, attack** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiGate Botnet C2 Command Detection** — Botnet signature hits often indicate compromised hosts participating in C2.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortigate_ips`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sources), Line chart (events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiGate Botnet C2 Command Detection\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.89",
              "n": "FortiAuthenticator RADIUS and MFA Audit",
              "c": "high",
              "f": "intermediate",
              "v": "FortiAuthenticator auditing secures step-up auth for remote and wireless access.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:fortiauth`",
              "q": "index=auth sourcetype=\"fortinet:fortiauth\"\n| search (result=failure OR event=\"mfa_fail\")\n| stats count by user, nas_ip\n| where count>5",
              "m": "Forward FortiAuthenticator logs for VPN and Wi-Fi. Alert on MFA brute force.",
              "z": "Table (failures), Timeline.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:fortiauth`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward FortiAuthenticator logs for VPN and Wi-Fi. Alert on MFA brute force.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=auth sourcetype=\"fortinet:fortiauth\"\n| search (result=failure OR event=\"mfa_fail\")\n| stats count by user, nas_ip\n| where count>5\n```\n\nUnderstanding this SPL\n\n**FortiAuthenticator RADIUS and MFA Audit** — FortiAuthenticator auditing secures step-up auth for remote and wireless access.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortiauth`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: auth; **sourcetype**: fortinet:fortiauth. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=auth, sourcetype=\"fortinet:fortiauth\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, nas_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>5` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiAuthenticator RADIUS and MFA Audit** — FortiAuthenticator auditing secures step-up auth for remote and wireless access.\n\nDocumented **Data sources**: `sourcetype=fortinet:fortiauth`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failures), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiAuthenticator RADIUS and MFA Audit\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.90",
              "n": "FortiClient EMS Compliance Posture",
              "c": "high",
              "f": "intermediate",
              "v": "FortiClient posture extends firewall policy to the endpoint before network admission.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:forticlient`",
              "q": "index=endpoint sourcetype=\"fortinet:forticlient\" compliance!=compliant\n| stats count by profile, hostname\n| sort -count",
              "m": "Ingest FortiClient telemetry for AV, patch, and fabric compliance. Block non-compliant VPN.",
              "z": "Bar chart (profiles), Table (hosts).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:forticlient`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest FortiClient telemetry for AV, patch, and fabric compliance. Block non-compliant VPN.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"fortinet:forticlient\" compliance!=compliant\n| stats count by profile, hostname\n| sort -count\n```\n\nUnderstanding this SPL\n\n**FortiClient EMS Compliance Posture** — FortiClient posture extends firewall policy to the endpoint before network admission.\n\nDocumented **Data sources**: `sourcetype=fortinet:forticlient`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: fortinet:forticlient. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"fortinet:forticlient\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by profile, hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiClient EMS Compliance Posture** — FortiClient posture extends firewall policy to the endpoint before network admission.\n\nDocumented **Data sources**: `sourcetype=fortinet:forticlient`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (profiles), Table (hosts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiClient EMS Compliance Posture\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.91",
              "n": "FortiAnalyzer Log Forwarding Gap Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Log forwarding gaps mean SOC loses visibility until collectors resume.",
              "t": "`Splunk_TA_fortinet`",
              "d": "`sourcetype=fortinet:faz`",
              "q": "index=network sourcetype=\"fortinet:faz\"\n| stats latest(last_recv) as lr by device\n| where now()-lr > 3600",
              "m": "Monitor FortiAnalyzer receive timestamps per device. Alert when expected firewalls stop forwarding.",
              "z": "Table (stale devices), Single value (gap count).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Splunk_Audit](https://docs.splunk.com/Documentation/CIM/latest/User/Splunk_Audit)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_fortinet`.\n• Ensure the following data sources are available: `sourcetype=fortinet:faz`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor FortiAnalyzer receive timestamps per device. Alert when expected firewalls stop forwarding.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"fortinet:faz\"\n| stats latest(last_recv) as lr by device\n| where now()-lr > 3600\n```\n\nUnderstanding this SPL\n\n**FortiAnalyzer Log Forwarding Gap Detection** — Log forwarding gaps mean SOC loses visibility until collectors resume.\n\nDocumented **Data sources**: `sourcetype=fortinet:faz`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: fortinet:faz. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"fortinet:faz\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where now()-lr > 3600` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Splunk_Audit by _time span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiAnalyzer Log Forwarding Gap Detection** — Log forwarding gaps mean SOC loses visibility until collectors resume.\n\nDocumented **Data sources**: `sourcetype=fortinet:faz`. **App/TA** (typical add-on context): `Splunk_TA_fortinet`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Splunk_Audit` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale devices), Single value (gap count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use FortiGate data and alerts to stay ahead of the risks described in \"FortiAnalyzer Log Forwarding Gap Detection\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Splunk_Audit"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Splunk_Audit by _time span=1h | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.92",
              "n": "Palo Alto Prisma Access Tunnel and Portal Health",
              "c": "high",
              "f": "intermediate",
              "v": "Prisma Access health secures cloud-delivered firewall and ZTNA for distributed users.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=prisma:access`",
              "q": "index=cloud sourcetype=\"prisma:access\"\n| stats latest(tunnel_status) as ts by location, user\n| search ts!=\"up\"",
              "m": "Ingest Prisma Access telemetry. Correlate with ISP issues for SASE troubleshooting.",
              "z": "Table (down users), Map (locations).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Sessions](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Sessions)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=prisma:access`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Prisma Access telemetry. Correlate with ISP issues for SASE troubleshooting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"prisma:access\"\n| stats latest(tunnel_status) as ts by location, user\n| search ts!=\"up\"\n```\n\nUnderstanding this SPL\n\n**Palo Alto Prisma Access Tunnel and Portal Health** — Prisma Access health secures cloud-delivered firewall and ZTNA for distributed users.\n\nDocumented **Data sources**: `sourcetype=prisma:access`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: prisma:access. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"prisma:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by location, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto Prisma Access Tunnel and Portal Health** — Prisma Access health secures cloud-delivered firewall and ZTNA for distributed users.\n\nDocumented **Data sources**: `sourcetype=prisma:access`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (down users), Map (locations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Prisma Access data and alerts to stay ahead of the risks described in \"Palo Alto Prisma Access Tunnel and Portal Health\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Sessions"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.user | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.93",
              "n": "Palo Alto Certificate Inspection and Untrusted Issuer Alerts",
              "c": "high",
              "f": "intermediate",
              "v": "Certificate anomalies highlight decryption failures and potential adversary-in-the-middle.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:decryption`",
              "q": "index=pan sourcetype=\"pan:decryption\" cert_status!=\"trusted\"\n| stats count by issuer, dest\n| sort -count",
              "m": "Review untrusted or expired certificates in decryption logs. May indicate MITM or misconfig.",
              "z": "Table (issuers), Bar chart.",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:decryption`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nReview untrusted or expired certificates in decryption logs. May indicate MITM or misconfig.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:decryption\" cert_status!=\"trusted\"\n| stats count by issuer, dest\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Palo Alto Certificate Inspection and Untrusted Issuer Alerts** — Certificate anomalies highlight decryption failures and potential adversary-in-the-middle.\n\nDocumented **Data sources**: `sourcetype=pan:decryption`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:decryption. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:decryption\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by issuer, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto Certificate Inspection and Untrusted Issuer Alerts** — Certificate anomalies highlight decryption failures and potential adversary-in-the-middle.\n\nDocumented **Data sources**: `sourcetype=pan:decryption`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (issuers), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto Certificate Inspection and Untrusted Issuer Alerts\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.94",
              "n": "Palo Alto Log Forwarding and SIEM Connectivity",
              "c": "high",
              "f": "intermediate",
              "v": "Log forwarding health ensures NGFW events reach Splunk without silent drops.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:system`",
              "q": "index=pan sourcetype=\"pan:system\" (logdesc=\"*log*forward*\" OR subsystem=\"logfwd\")\n| stats count by profile, destination\n| where count>0",
              "m": "Monitor system logs for syslog and HTTP log forwarding errors. Fix before log loss.",
              "z": "Table (destinations), Timeline.",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Splunk_Audit](https://docs.splunk.com/Documentation/CIM/latest/User/Splunk_Audit)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:system`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor system logs for syslog and HTTP log forwarding errors. Fix before log loss.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:system\" (logdesc=\"*log*forward*\" OR subsystem=\"logfwd\")\n| stats count by profile, destination\n| where count>0\n```\n\nUnderstanding this SPL\n\n**Palo Alto Log Forwarding and SIEM Connectivity** — Log forwarding health ensures NGFW events reach Splunk without silent drops.\n\nDocumented **Data sources**: `sourcetype=pan:system`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:system. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:system\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by profile, destination** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Splunk_Audit by _time span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Palo Alto Log Forwarding and SIEM Connectivity** — Log forwarding health ensures NGFW events reach Splunk without silent drops.\n\nDocumented **Data sources**: `sourcetype=pan:system`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Splunk_Audit` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (destinations), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto Log Forwarding and SIEM Connectivity\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Splunk_Audit"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Splunk_Audit by _time span=1h | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.95",
              "n": "Palo Alto Threat Intelligence Feed Freshness",
              "c": "high",
              "f": "intermediate",
              "v": "Stale threat feeds reduce blocking efficacy for known malicious indicators.",
              "t": "`Splunk_TA_paloalto`",
              "d": "`sourcetype=pan:system`",
              "q": "index=pan sourcetype=\"pan:system\" logdesc=\"*threat*feed*\"\n| stats latest(update_time) as ut by feed_name\n| where now()-ut > 86400",
              "m": "Track dynamic update and EDL refresh times. Alert when feeds fall behind vendor SLA.",
              "z": "Table (stale feeds), Single value.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: `sourcetype=pan:system`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack dynamic update and EDL refresh times. Alert when feeds fall behind vendor SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pan sourcetype=\"pan:system\" logdesc=\"*threat*feed*\"\n| stats latest(update_time) as ut by feed_name\n| where now()-ut > 86400\n```\n\nUnderstanding this SPL\n\n**Palo Alto Threat Intelligence Feed Freshness** — Stale threat feeds reduce blocking efficacy for known malicious indicators.\n\nDocumented **Data sources**: `sourcetype=pan:system`. **App/TA** (typical add-on context): `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pan; **sourcetype**: pan:system. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pan, sourcetype=\"pan:system\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by feed_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where now()-ut > 86400` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale feeds), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Palo Alto Networks firewall data and alerts to stay ahead of the risks described in \"Palo Alto Threat Intelligence Feed Freshness\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.96",
              "n": "Check Point Access Policy Rule Hit Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Rule hit analysis supports least-privilege firewall redesign.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:traffic`",
              "q": "index=firewall sourcetype=\"cp:traffic\"\n| stats count by rule_name, action\n| sort 50 -count",
              "m": "Use traffic logs with rule UID resolved to names. Identify overly permissive top rules.",
              "z": "Bar chart (rules), Table.",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:traffic`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse traffic logs with rule UID resolved to names. Identify overly permissive top rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"cp:traffic\"\n| stats count by rule_name, action\n| sort 50 -count\n```\n\nUnderstanding this SPL\n\n**Check Point Access Policy Rule Hit Analysis** — Rule hit analysis supports least-privilege firewall redesign.\n\nDocumented **Data sources**: `sourcetype=cp:traffic`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: cp:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"cp:traffic\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rule_name, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Access Policy Rule Hit Analysis** — Rule hit analysis supports least-privilege firewall redesign.\n\nDocumented **Data sources**: `sourcetype=cp:traffic`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (rules), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Access Policy Rule Hit Analysis\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.97",
              "n": "Check Point Anti-Spam and Email Security Events",
              "c": "high",
              "f": "intermediate",
              "v": "Email security events complement perimeter and gateway detections.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:email`",
              "q": "index=email sourcetype=\"cp:email\" action=block\n| stats count by threat_type, sender\n| sort -count",
              "m": "If using Check Point email security, trend malware and spam blocks. Feed phishing awareness.",
              "z": "Bar chart (threat types), Table.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:email`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIf using Check Point email security, trend malware and spam blocks. Feed phishing awareness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=email sourcetype=\"cp:email\" action=block\n| stats count by threat_type, sender\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Check Point Anti-Spam and Email Security Events** — Email security events complement perimeter and gateway detections.\n\nDocumented **Data sources**: `sourcetype=cp:email`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: email; **sourcetype**: cp:email. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=email, sourcetype=\"cp:email\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by threat_type, sender** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.src_user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Anti-Spam and Email Security Events** — Email security events complement perimeter and gateway detections.\n\nDocumented **Data sources**: `sourcetype=cp:email`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (threat types), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Anti-Spam and Email Security Events\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.src_user | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.98",
              "n": "Check Point Identity Awareness Captive Portal Abuse",
              "c": "high",
              "f": "intermediate",
              "v": "Captive portal abuse can indicate credential stuffing on guest networks.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:identity`",
              "q": "index=auth sourcetype=\"cp:identity\" portal_event=failed\n| stats count by src, reason\n| where count>20",
              "m": "Detect brute force against captive portal auth. Rate-limit and alert SOC.",
              "z": "Table (sources), Timeline.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:identity`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDetect brute force against captive portal auth. Rate-limit and alert SOC.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=auth sourcetype=\"cp:identity\" portal_event=failed\n| stats count by src, reason\n| where count>20\n```\n\nUnderstanding this SPL\n\n**Check Point Identity Awareness Captive Portal Abuse** — Captive portal abuse can indicate credential stuffing on guest networks.\n\nDocumented **Data sources**: `sourcetype=cp:identity`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: auth; **sourcetype**: cp:identity. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=auth, sourcetype=\"cp:identity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>20` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Identity Awareness Captive Portal Abuse** — Captive portal abuse can indicate credential stuffing on guest networks.\n\nDocumented **Data sources**: `sourcetype=cp:identity`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sources), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Identity Awareness Captive Portal Abuse\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.99",
              "n": "Check Point Threat Prevention Extraction Blocks",
              "c": "high",
              "f": "intermediate",
              "v": "Threat prevention blocks show inline protection value across blades.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:threatprevention`",
              "q": "index=security sourcetype=\"cp:threatprevention\" action=prevent\n| stats count by protection_type, severity\n| sort -count",
              "m": "Aggregate IPS, AV, and AB combined preventions. Prioritize by severity and asset.",
              "z": "Stacked bar (types), Table.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:threatprevention`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate IPS, AV, and AB combined preventions. Prioritize by severity and asset.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"cp:threatprevention\" action=prevent\n| stats count by protection_type, severity\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Check Point Threat Prevention Extraction Blocks** — Threat prevention blocks show inline protection value across blades.\n\nDocumented **Data sources**: `sourcetype=cp:threatprevention`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: cp:threatprevention. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"cp:threatprevention\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by protection_type, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Threat Prevention Extraction Blocks** — Threat prevention blocks show inline protection value across blades.\n\nDocumented **Data sources**: `sourcetype=cp:threatprevention`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (types), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Threat Prevention Extraction Blocks\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.severity | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.100",
              "n": "Check Point Mobile Access VPN Session Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "Mobile VPN anomalies may indicate account sharing or credential theft.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp:mobile`",
              "q": "index=vpn sourcetype=\"cp:mobile\"\n| stats dc(public_ip) as locs by user\n| where locs>3",
              "m": "Detect impossible travel for mobile VPN users. Correlate with IdP signals.",
              "z": "Table (users), Map.",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Network_Sessions](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Sessions)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp:mobile`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDetect impossible travel for mobile VPN users. Correlate with IdP signals.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cp:mobile\"\n| stats dc(public_ip) as locs by user\n| where locs>3\n```\n\nUnderstanding this SPL\n\n**Check Point Mobile Access VPN Session Anomalies** — Mobile VPN anomalies may indicate account sharing or credential theft.\n\nDocumented **Data sources**: `sourcetype=cp:mobile`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cp:mobile. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cp:mobile\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where locs>3` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Mobile Access VPN Session Anomalies** — Mobile VPN anomalies may indicate account sharing or credential theft.\n\nDocumented **Data sources**: `sourcetype=cp:mobile`. **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users), Map.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Mobile Access VPN Session Anomalies\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Sessions"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.user | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.101",
              "n": "CrowdStrike Identity Protection Risk Scores",
              "c": "high",
              "f": "intermediate",
              "v": "Identity risk scores prioritize account takeover investigations.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:identity`",
              "q": "index=identity sourcetype=\"crowdstrike:identity\"\n| stats max(risk_score) as r by user_principal\n| where r>70",
              "m": "Ingest Falcon Identity Protection events. Escalate high-risk users to IdP step-up.",
              "z": "Table (risky users), Bar chart.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:identity`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Falcon Identity Protection events. Escalate high-risk users to IdP step-up.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=identity sourcetype=\"crowdstrike:identity\"\n| stats max(risk_score) as r by user_principal\n| where r>70\n```\n\nUnderstanding this SPL\n\n**CrowdStrike Identity Protection Risk Scores** — Identity risk scores prioritize account takeover investigations.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:identity`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: identity; **sourcetype**: crowdstrike:identity. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=identity, sourcetype=\"crowdstrike:identity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user_principal** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where r>70` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike Identity Protection Risk Scores** — Identity risk scores prioritize account takeover investigations.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:identity`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (risky users), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike Identity Protection Risk Scores\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.102",
              "n": "CrowdStrike File Integrity Monitoring Alerts",
              "c": "high",
              "f": "intermediate",
              "v": "FIM alerts catch persistence and tampering on servers and domain controllers.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:fim`",
              "q": "index=endpoint sourcetype=\"crowdstrike:fim\"\n| search (file_op IN (\"write\",\"delete\",\"chmod\")) AND (match(path, \"/etc/.*\") OR match(path, \".*\\\\System32\\\\.*\"))\n| stats count by path, hostname\n| sort -count",
              "m": "Enable FIM on critical paths. Tune noisy installers. Alert on off-hours system changes.",
              "z": "Table (changes), Timeline.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:fim`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable FIM on critical paths. Tune noisy installers. Alert on off-hours system changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"crowdstrike:fim\"\n| search (file_op IN (\"write\",\"delete\",\"chmod\")) AND (match(path, \"/etc/.*\") OR match(path, \".*\\\\System32\\\\.*\"))\n| stats count by path, hostname\n| sort -count\n```\n\nUnderstanding this SPL\n\n**CrowdStrike File Integrity Monitoring Alerts** — FIM alerts catch persistence and tampering on servers and domain controllers.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:fim`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: crowdstrike:fim. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"crowdstrike:fim\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by path, hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike File Integrity Monitoring Alerts** — FIM alerts catch persistence and tampering on servers and domain controllers.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:fim`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (changes), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike File Integrity Monitoring Alerts\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.103",
              "n": "CrowdStrike Cloud Workload Protection Detections",
              "c": "high",
              "f": "intermediate",
              "v": "CWP extends Falcon detection to cloud-native workloads.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:cwp`",
              "q": "index=cloud sourcetype=\"crowdstrike:cwp\"\n| stats count by cluster, image, technique\n| sort -count",
              "m": "Ingest CWP for containers and VMs. Map to Kubernetes namespace ownership.",
              "z": "Bar chart (clusters), Table.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:cwp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CWP for containers and VMs. Map to Kubernetes namespace ownership.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"crowdstrike:cwp\"\n| stats count by cluster, image, technique\n| sort -count\n```\n\nUnderstanding this SPL\n\n**CrowdStrike Cloud Workload Protection Detections** — CWP extends Falcon detection to cloud-native workloads.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:cwp`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: crowdstrike:cwp. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"crowdstrike:cwp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster, image, technique** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike Cloud Workload Protection Detections** — CWP extends Falcon detection to cloud-native workloads.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:cwp`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (clusters), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike Cloud Workload Protection Detections\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.104",
              "n": "CrowdStrike Host Containment Events",
              "c": "high",
              "f": "intermediate",
              "v": "Containment events document network isolation during active incidents.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:containment`",
              "q": "index=audit sourcetype=\"crowdstrike:containment\"\n| stats count by state, initiated_by\n| sort -_time",
              "m": "Audit network containment toggles. Correlate with incident IDs for IR metrics.",
              "z": "Timeline, Table.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:containment`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAudit network containment toggles. Correlate with incident IDs for IR metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"crowdstrike:containment\"\n| stats count by state, initiated_by\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**CrowdStrike Host Containment Events** — Containment events document network isolation during active incidents.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:containment`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: crowdstrike:containment. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"crowdstrike:containment\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by state, initiated_by** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike Host Containment Events** — Containment events document network isolation during active incidents.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:containment`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline, Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike Host Containment Events\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.105",
              "n": "CrowdStrike Custom IOA Tuning Metrics",
              "c": "high",
              "f": "intermediate",
              "v": "IOA tuning keeps behavioral detection accurate for your environment.",
              "t": "`TA-crowdstrike-falcon`",
              "d": "`sourcetype=crowdstrike:ioa`",
              "q": "index=endpoint sourcetype=\"crowdstrike:ioa\"\n| stats count by rule_id, disposition\n| where disposition=\"false_positive\"\n| sort -count",
              "m": "Measure false positive rate per custom IOA. Retire or refine noisy rules.",
              "z": "Bar chart (FP rate), Table.",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `sourcetype=crowdstrike:ioa`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMeasure false positive rate per custom IOA. Retire or refine noisy rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"crowdstrike:ioa\"\n| stats count by rule_id, disposition\n| where disposition=\"false_positive\"\n| sort -count\n```\n\nUnderstanding this SPL\n\n**CrowdStrike Custom IOA Tuning Metrics** — IOA tuning keeps behavioral detection accurate for your environment.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:ioa`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: crowdstrike:ioa. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"crowdstrike:ioa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rule_id, disposition** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where disposition=\"false_positive\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CrowdStrike Custom IOA Tuning Metrics** — IOA tuning keeps behavioral detection accurate for your environment.\n\nDocumented **Data sources**: `sourcetype=crowdstrike:ioa`. **App/TA** (typical add-on context): `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (FP rate), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use CrowdStrike data and alerts to stay ahead of the risks described in \"CrowdStrike Custom IOA Tuning Metrics\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "ta_link": {
                "name": "CrowdStrike Falcon Event Streams TA",
                "id": 5082,
                "url": "https://splunkbase.splunk.com/app/5082"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.106",
              "n": "Zscaler ZTNA Application Access Denials",
              "c": "high",
              "f": "intermediate",
              "v": "ZTNA denials reveal unauthorized private application access attempts.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:ztna`",
              "q": "index=network sourcetype=\"zscaler:ztna\" result=deny\n| stats count by app_name, user, reason\n| sort -count",
              "m": "Track ZTNA denials for policy versus user error. Investigate repeated denials as potential probing.",
              "z": "Table (denials), Bar chart.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Network_Sessions](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Sessions)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:ztna`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack ZTNA denials for policy versus user error. Investigate repeated denials as potential probing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"zscaler:ztna\" result=deny\n| stats count by app_name, user, reason\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Zscaler ZTNA Application Access Denials** — ZTNA denials reveal unauthorized private application access attempts.\n\nDocumented **Data sources**: `sourcetype=zscaler:ztna`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: zscaler:ztna. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"zscaler:ztna\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_name, user, reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Zscaler ZTNA Application Access Denials** — ZTNA denials reveal unauthorized private application access attempts.\n\nDocumented **Data sources**: `sourcetype=zscaler:ztna`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (denials), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"Zscaler ZTNA Application Access Denials\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Sessions"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.user | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.107",
              "n": "Zscaler CASB Shadow IT Upload Volume",
              "c": "high",
              "f": "intermediate",
              "v": "CASB upload analytics prioritizes data loss risk in sanctioned and shadow SaaS.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:casb`",
              "q": "index=cloud sourcetype=\"zscaler:casb\"\n| stats sum(bytes_up) as up by app, user\n| where up>1073741824\n| sort -up",
              "m": "Flag unsanctioned apps with large uploads. DLP correlation for exfiltration.",
              "z": "Table (apps), Bar chart.",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[CIM: DLP](https://docs.splunk.com/Documentation/CIM/latest/User/DLP)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:casb`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFlag unsanctioned apps with large uploads. DLP correlation for exfiltration.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"zscaler:casb\"\n| stats sum(bytes_up) as up by app, user\n| where up>1073741824\n| sort -up\n```\n\nUnderstanding this SPL\n\n**Zscaler CASB Shadow IT Upload Volume** — CASB upload analytics prioritizes data loss risk in sanctioned and shadow SaaS.\n\nDocumented **Data sources**: `sourcetype=zscaler:casb`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: zscaler:casb. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"zscaler:casb\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where up>1073741824` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Zscaler CASB Shadow IT Upload Volume** — CASB upload analytics prioritizes data loss risk in sanctioned and shadow SaaS.\n\nDocumented **Data sources**: `sourcetype=zscaler:casb`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Data_Loss_Prevention.DLP_Incidents` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (apps), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"Zscaler CASB Shadow IT Upload Volume\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "DLP"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.user | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.108",
              "n": "Zscaler DNS Security Filtering",
              "c": "high",
              "f": "intermediate",
              "v": "DNS filtering stops malicious resolutions before HTTP or TLS sessions start.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:dns`",
              "q": "index=dns sourcetype=\"zscaler:dns\" action=blocked\n| stats count by threat_category, domain\n| sort -count",
              "m": "Use ZIA DNS security logs. Chain to endpoint for malware families using DNS tunneling.",
              "z": "Bar chart (categories), Table.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:dns`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse ZIA DNS security logs. Chain to endpoint for malware families using DNS tunneling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns sourcetype=\"zscaler:dns\" action=blocked\n| stats count by threat_category, domain\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Zscaler DNS Security Filtering** — DNS filtering stops malicious resolutions before HTTP or TLS sessions start.\n\nDocumented **Data sources**: `sourcetype=zscaler:dns`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns; **sourcetype**: zscaler:dns. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns, sourcetype=\"zscaler:dns\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by threat_category, domain** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Resolution.DNS by DNS.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Zscaler DNS Security Filtering** — DNS filtering stops malicious resolutions before HTTP or TLS sessions start.\n\nDocumented **Data sources**: `sourcetype=zscaler:dns`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (categories), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"Zscaler DNS Security Filtering\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Resolution.DNS by DNS.dest | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.109",
              "n": "Zscaler Bandwidth Control and QoS Events",
              "c": "high",
              "f": "intermediate",
              "v": "QoS events explain user complaints and detect bandwidth abuse.",
              "t": "`Splunk_TA_zscaler`",
              "d": "`sourcetype=zscaler:qos`",
              "q": "index=network sourcetype=\"zscaler:qos\" event=throttle\n| stats count by user, app\n| sort -count",
              "m": "Review throttle events for fairness and abuse. Distinguish policy from congestion.",
              "z": "Table (throttled flows), Line chart.",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`.\n• Ensure the following data sources are available: `sourcetype=zscaler:qos`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nReview throttle events for fairness and abuse. Distinguish policy from congestion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"zscaler:qos\" event=throttle\n| stats count by user, app\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Zscaler Bandwidth Control and QoS Events** — QoS events explain user complaints and detect bandwidth abuse.\n\nDocumented **Data sources**: `sourcetype=zscaler:qos`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: zscaler:qos. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"zscaler:qos\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.Network by Performance.host | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Zscaler Bandwidth Control and QoS Events** — QoS events explain user complaints and detect bandwidth abuse.\n\nDocumented **Data sources**: `sourcetype=zscaler:qos`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.Network` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (throttled flows), Line chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Zscaler data and alerts to stay ahead of the risks described in \"Zscaler Bandwidth Control and QoS Events\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.Network by Performance.host | sort - count",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.110",
              "n": "VMware Carbon Black Enterprise EDR Search Audit",
              "c": "high",
              "f": "intermediate",
              "v": "EDR search auditing protects sensitive hunt activity from insider misuse.",
              "t": "`Splunk_TA_vmware_carbonblack`",
              "d": "`sourcetype=carbonblack:edrsearch`",
              "q": "index=audit sourcetype=\"carbonblack:edrsearch\"\n| stats count by query, analyst\n| where count>50",
              "m": "Log heavy EDR federated searches. Review for data exfiltration via search export.",
              "z": "Table (queries), Timeline.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Splunk_Audit](https://docs.splunk.com/Documentation/CIM/latest/User/Splunk_Audit)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware_carbonblack`.\n• Ensure the following data sources are available: `sourcetype=carbonblack:edrsearch`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog heavy EDR federated searches. Review for data exfiltration via search export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"carbonblack:edrsearch\"\n| stats count by query, analyst\n| where count>50\n```\n\nUnderstanding this SPL\n\n**VMware Carbon Black Enterprise EDR Search Audit** — EDR search auditing protects sensitive hunt activity from insider misuse.\n\nDocumented **Data sources**: `sourcetype=carbonblack:edrsearch`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: carbonblack:edrsearch. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"carbonblack:edrsearch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by query, analyst** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>50` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Splunk_Audit by _time span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VMware Carbon Black Enterprise EDR Search Audit** — EDR search auditing protects sensitive hunt activity from insider misuse.\n\nDocumented **Data sources**: `sourcetype=carbonblack:edrsearch`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Splunk_Audit` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (queries), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Carbon Black data and alerts to stay ahead of the risks described in \"VMware Carbon Black Enterprise EDR Search Audit\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Splunk_Audit"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Splunk_Audit by _time span=1h | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.111",
              "n": "VMware Carbon Black Vulnerability Assessment Integration",
              "c": "high",
              "f": "intermediate",
              "v": "Integrated vuln view reduces tool sprawl between EDR and VM teams.",
              "t": "`Splunk_TA_vmware_carbonblack`",
              "d": "`sourcetype=carbonblack:vuln`",
              "q": "index=endpoint sourcetype=\"carbonblack:vuln\"\n| stats count by cve_id, severity\n| where patch_status=\"missing\" AND severity IN (\"CRITICAL\",\"HIGH\")\n| sort -severity",
              "m": "Join CB vuln data with patch tickets. Prioritize criticals on internet-facing hosts.",
              "z": "Table (CVEs), Bar chart.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware_carbonblack`.\n• Ensure the following data sources are available: `sourcetype=carbonblack:vuln`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nJoin CB vuln data with patch tickets. Prioritize criticals on internet-facing hosts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"carbonblack:vuln\"\n| stats count by cve_id, severity\n| where patch_status=\"missing\" AND severity IN (\"CRITICAL\",\"HIGH\")\n| sort -severity\n```\n\nUnderstanding this SPL\n\n**VMware Carbon Black Vulnerability Assessment Integration** — Integrated vuln view reduces tool sprawl between EDR and VM teams.\n\nDocumented **Data sources**: `sourcetype=carbonblack:vuln`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: carbonblack:vuln. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"carbonblack:vuln\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cve_id, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where patch_status=\"missing\" AND severity IN (\"CRITICAL\",\"HIGH\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VMware Carbon Black Vulnerability Assessment Integration** — Integrated vuln view reduces tool sprawl between EDR and VM teams.\n\nDocumented **Data sources**: `sourcetype=carbonblack:vuln`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (CVEs), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Carbon Black data and alerts to stay ahead of the risks described in \"VMware Carbon Black Vulnerability Assessment Integration\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.112",
              "n": "Carbon Black Prevention Exclusion Review",
              "c": "high",
              "f": "intermediate",
              "v": "Exclusion review prevents attackers or admins from weakening prevention silently.",
              "t": "`Splunk_TA_vmware_carbonblack`",
              "d": "`sourcetype=carbonblack:exclusion`",
              "q": "index=audit sourcetype=\"carbonblack:exclusion\"\n| stats latest(_time) as t by path_pattern, added_by\n| sort -t",
              "m": "Track new prevention exclusions with approver. Alert on wildcards to sensitive paths.",
              "z": "Table (exclusions), Timeline.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware_carbonblack`.\n• Ensure the following data sources are available: `sourcetype=carbonblack:exclusion`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack new prevention exclusions with approver. Alert on wildcards to sensitive paths.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"carbonblack:exclusion\"\n| stats latest(_time) as t by path_pattern, added_by\n| sort -t\n```\n\nUnderstanding this SPL\n\n**Carbon Black Prevention Exclusion Review** — Exclusion review prevents attackers or admins from weakening prevention silently.\n\nDocumented **Data sources**: `sourcetype=carbonblack:exclusion`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: carbonblack:exclusion. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"carbonblack:exclusion\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by path_pattern, added_by** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Carbon Black Prevention Exclusion Review** — Exclusion review prevents attackers or admins from weakening prevention silently.\n\nDocumented **Data sources**: `sourcetype=carbonblack:exclusion`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (exclusions), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Carbon Black data and alerts to stay ahead of the risks described in \"Carbon Black Prevention Exclusion Review\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.113",
              "n": "Carbon Black Network Connection Anomaly Baseline",
              "c": "high",
              "f": "intermediate",
              "v": "Connection anomalies highlight C2 and data staging without static IOCs.",
              "t": "`Splunk_TA_vmware_carbonblack`",
              "d": "`sourcetype=carbonblack:netconn`",
              "q": "index=endpoint sourcetype=\"carbonblack:netconn\"\n| rename remote_ip as dest, remote_port as dest_port\n| stats count by dest, dest_port, hostname\n| sort 500 -count",
              "m": "Baseline rare outbound ports per business unit. Hunt beaconing patterns with statistical overlays.",
              "z": "Scatter (volume vs rarity), Table.",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware_carbonblack`.\n• Ensure the following data sources are available: `sourcetype=carbonblack:netconn`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline rare outbound ports per business unit. Hunt beaconing patterns with statistical overlays.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"carbonblack:netconn\"\n| rename remote_ip as dest, remote_port as dest_port\n| stats count by dest, dest_port, hostname\n| sort 500 -count\n```\n\nUnderstanding this SPL\n\n**Carbon Black Network Connection Anomaly Baseline** — Connection anomalies highlight C2 and data staging without static IOCs.\n\nDocumented **Data sources**: `sourcetype=carbonblack:netconn`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: carbonblack:netconn. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"carbonblack:netconn\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Renames fields with `rename` for clarity or joins.\n• `stats` rolls up events into metrics; results are split **by dest, dest_port, hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Carbon Black Network Connection Anomaly Baseline** — Connection anomalies highlight C2 and data staging without static IOCs.\n\nDocumented **Data sources**: `sourcetype=carbonblack:netconn`. **App/TA** (typical add-on context): `Splunk_TA_vmware_carbonblack`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter (volume vs rarity), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Carbon Black data and alerts to stay ahead of the risks described in \"Carbon Black Network Connection Anomaly Baseline\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "ta_link": {
                "name": "Splunk Add-on for VMware",
                "id": 3258,
                "url": "https://splunkbase.splunk.com/app/3258"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.114",
              "n": "Tenable SC Scan Zone and Scanner Assignment Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Scanner assignment audits prevent unscannable segments in large enterprises.",
              "t": "`Splunk_TA_nessus`",
              "d": "`sourcetype=tenable:sc`",
              "q": "index=main sourcetype=\"tenable:sc\"\n| stats values(scanner_id) as scanner_id by asset_list\n| join max=1 asset_list [| inputlookup tenable_scan_zones]\n| where isnull(scanner_id)",
              "m": "Verify every scan zone has active scanner coverage. Fix assignment gaps.",
              "z": "Table (zones), Bar chart.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nessus`.\n• Ensure the following data sources are available: `sourcetype=tenable:sc`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nVerify every scan zone has active scanner coverage. Fix assignment gaps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=main sourcetype=\"tenable:sc\"\n| stats values(scanner_id) as scanner_id by asset_list\n| join max=1 asset_list [| inputlookup tenable_scan_zones]\n| where isnull(scanner_id)\n```\n\nUnderstanding this SPL\n\n**Tenable SC Scan Zone and Scanner Assignment Audit** — Scanner assignment audits prevent unscannable segments in large enterprises.\n\nDocumented **Data sources**: `sourcetype=tenable:sc`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: main; **sourcetype**: tenable:sc. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=main, sourcetype=\"tenable:sc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by asset_list** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(scanner_id)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tenable SC Scan Zone and Scanner Assignment Audit** — Scanner assignment audits prevent unscannable segments in large enterprises.\n\nDocumented **Data sources**: `sourcetype=tenable:sc`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (zones), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tenable data and alerts to stay ahead of the risks described in \"Tenable SC Scan Zone and Scanner Assignment Audit\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Nessus",
                "id": 2804,
                "url": "https://splunkbase.splunk.com/app/2804"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.115",
              "n": "Tenable.io Agent and Linked Scanner Health",
              "c": "high",
              "f": "intermediate",
              "v": "Agent health maintains continuous assessment without network-only blind spots.",
              "t": "`Splunk_TA_nessus`",
              "d": "`sourcetype=tenable:io`",
              "q": "index=vuln sourcetype=\"tenable:io\"\n| stats latest(last_scan) as ls by agent_uuid\n| where now()-ls > 1209600",
              "m": "Monitor Tenable.io linked agents for two-week silence. Redeploy or fix connectivity.",
              "z": "Table (stale agents), Single value.",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nessus`.\n• Ensure the following data sources are available: `sourcetype=tenable:io`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Tenable.io linked agents for two-week silence. Redeploy or fix connectivity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=\"tenable:io\"\n| stats latest(last_scan) as ls by agent_uuid\n| where now()-ls > 1209600\n```\n\nUnderstanding this SPL\n\n**Tenable.io Agent and Linked Scanner Health** — Agent health maintains continuous assessment without network-only blind spots.\n\nDocumented **Data sources**: `sourcetype=tenable:io`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: tenable:io. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=\"tenable:io\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by agent_uuid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where now()-ls > 1209600` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tenable.io Agent and Linked Scanner Health** — Agent health maintains continuous assessment without network-only blind spots.\n\nDocumented **Data sources**: `sourcetype=tenable:io`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale agents), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tenable data and alerts to stay ahead of the risks described in \"Tenable.io Agent and Linked Scanner Health\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Nessus",
                "id": 2804,
                "url": "https://splunkbase.splunk.com/app/2804"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.116",
              "n": "Tenable WAS Remediation Verification Scans",
              "c": "high",
              "f": "intermediate",
              "v": "Verification scans prove web fixes before production promotion.",
              "t": "`Splunk_TA_nessus`",
              "d": "`sourcetype=nessus:was`",
              "q": "index=vuln sourcetype=\"nessus:was\" scan_type=verification\n| stats count by finding_id, status\n| where status!=\"fixed\"",
              "m": "Run verification scans after developer fixes. Track reopen rates.",
              "z": "Table (open findings), Line chart.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nessus`.\n• Ensure the following data sources are available: `sourcetype=nessus:was`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun verification scans after developer fixes. Track reopen rates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=\"nessus:was\" scan_type=verification\n| stats count by finding_id, status\n| where status!=\"fixed\"\n```\n\nUnderstanding this SPL\n\n**Tenable WAS Remediation Verification Scans** — Verification scans prove web fixes before production promotion.\n\nDocumented **Data sources**: `sourcetype=nessus:was`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: nessus:was. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=\"nessus:was\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by finding_id, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status!=\"fixed\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tenable WAS Remediation Verification Scans** — Verification scans prove web fixes before production promotion.\n\nDocumented **Data sources**: `sourcetype=nessus:was`. **App/TA** (typical add-on context): `Splunk_TA_nessus`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (open findings), Line chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tenable data and alerts to stay ahead of the risks described in \"Tenable WAS Remediation Verification Scans\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Nessus",
                "id": 2804,
                "url": "https://splunkbase.splunk.com/app/2804"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.117",
              "n": "Tanium Interact Saved Question Drift",
              "c": "high",
              "f": "intermediate",
              "v": "Saved question drift avoids acting on outdated inventory logic.",
              "t": "`Splunk_TA_tanium`",
              "d": "`sourcetype=tanium:saved`",
              "q": "index=endpoint sourcetype=\"tanium:saved\"\n| stats latest(mod_time) as m by question_name\n| where now()-m > 15552000",
              "m": "Find saved questions not updated in 180 days. Retire or refresh for accuracy.",
              "z": "Table (stale questions), Bar chart.",
              "kfp": "Broad scope: expect noise. Triage using asset criticality, user role and recent change-management activity; promote to a detection only after stable tuning.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "Hunting",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_tanium`.\n• Ensure the following data sources are available: `sourcetype=tanium:saved`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFind saved questions not updated in 180 days. Retire or refresh for accuracy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"tanium:saved\"\n| stats latest(mod_time) as m by question_name\n| where now()-m > 15552000\n```\n\nUnderstanding this SPL\n\n**Tanium Interact Saved Question Drift** — Saved question drift avoids acting on outdated inventory logic.\n\nDocumented **Data sources**: `sourcetype=tanium:saved`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: tanium:saved. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"tanium:saved\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by question_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where now()-m > 15552000` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tanium Interact Saved Question Drift** — Saved question drift avoids acting on outdated inventory logic.\n\nDocumented **Data sources**: `sourcetype=tanium:saved`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Hunting — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale questions), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tanium data and alerts to stay ahead of the risks described in \"Tanium Interact Saved Question Drift\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Tanium",
                "id": 6076,
                "url": "https://splunkbase.splunk.com/app/6076"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.118",
              "n": "Tanium Connect SOAR Forwarding Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Connect audits ensure automated responses and enrichment do not silently fail.",
              "t": "`Splunk_TA_tanium`",
              "d": "`sourcetype=tanium:connect`",
              "q": "index=audit sourcetype=\"tanium:connect\"\n| stats count by destination, status\n| where status!=\"success\"",
              "m": "Monitor Connect outbound integrations to Splunk and ticketing. Fix auth or schema errors.",
              "z": "Table (failures), Timeline.",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[CIM: Splunk_Audit](https://docs.splunk.com/Documentation/CIM/latest/User/Splunk_Audit)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_tanium`.\n• Ensure the following data sources are available: `sourcetype=tanium:connect`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Connect outbound integrations to Splunk and ticketing. Fix auth or schema errors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"tanium:connect\"\n| stats count by destination, status\n| where status!=\"success\"\n```\n\nUnderstanding this SPL\n\n**Tanium Connect SOAR Forwarding Audit** — Connect audits ensure automated responses and enrichment do not silently fail.\n\nDocumented **Data sources**: `sourcetype=tanium:connect`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: tanium:connect. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"tanium:connect\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by destination, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status!=\"success\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Splunk_Audit by _time span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tanium Connect SOAR Forwarding Audit** — Connect audits ensure automated responses and enrichment do not silently fail.\n\nDocumented **Data sources**: `sourcetype=tanium:connect`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Splunk_Audit` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failures), Timeline.",
              "script": "",
              "premium": "Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tanium data and alerts to stay ahead of the risks described in \"Tanium Connect SOAR Forwarding Audit\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Splunk_Audit"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Splunk_Audit by _time span=1h | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Tanium",
                "id": 6076,
                "url": "https://splunkbase.splunk.com/app/6076"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.119",
              "n": "Tanium Threat Intel Feed Application",
              "c": "high",
              "f": "intermediate",
              "v": "Intel feed freshness keeps IOC hunts and blocks aligned with current campaigns.",
              "t": "`Splunk_TA_tanium`",
              "d": "`sourcetype=tanium:intel`",
              "q": "index=endpoint sourcetype=\"tanium:intel\"\n| stats latest(import_time) as it by feed_name\n| where now()-it > 172800",
              "m": "Track intel feed imports into Tanium. Alert when feeds are >48h stale.",
              "z": "Table (stale feeds), Single value.",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_tanium`.\n• Ensure the following data sources are available: `sourcetype=tanium:intel`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack intel feed imports into Tanium. Alert when feeds are >48h stale.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"tanium:intel\"\n| stats latest(import_time) as it by feed_name\n| where now()-it > 172800\n```\n\nUnderstanding this SPL\n\n**Tanium Threat Intel Feed Application** — Intel feed freshness keeps IOC hunts and blocks aligned with current campaigns.\n\nDocumented **Data sources**: `sourcetype=tanium:intel`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: tanium:intel. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"tanium:intel\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by feed_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where now()-it > 172800` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale feeds), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tanium data and alerts to stay ahead of the risks described in \"Tanium Threat Intel Feed Application\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Tanium",
                "id": 6076,
                "url": "https://splunkbase.splunk.com/app/6076"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.120",
              "n": "Tanium Benchmark and CIS Report Pack Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Benchmark packs operationalize configuration compliance beyond ad-hoc scripts.",
              "t": "`Splunk_TA_tanium`",
              "d": "`sourcetype=tanium:benchmark`",
              "q": "index=endpoint sourcetype=\"tanium:benchmark\"\n| stats avg(score) as s by benchmark_pack\n| where s<90",
              "m": "Use CIS and DISA benchmark packs. Executive summary for audit readiness.",
              "z": "Bar chart (packs), Table (low scores).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "endpoint",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_tanium`.\n• Ensure the following data sources are available: `sourcetype=tanium:benchmark`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse CIS and DISA benchmark packs. Executive summary for audit readiness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=\"tanium:benchmark\"\n| stats avg(score) as s by benchmark_pack\n| where s<90\n```\n\nUnderstanding this SPL\n\n**Tanium Benchmark and CIS Report Pack Compliance** — Benchmark packs operationalize configuration compliance beyond ad-hoc scripts.\n\nDocumented **Data sources**: `sourcetype=tanium:benchmark`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: tanium:benchmark. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=\"tanium:benchmark\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by benchmark_pack** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where s<90` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Tanium Benchmark and CIS Report Pack Compliance** — Benchmark packs operationalize configuration compliance beyond ad-hoc scripts.\n\nDocumented **Data sources**: `sourcetype=tanium:benchmark`. **App/TA** (typical add-on context): `Splunk_TA_tanium`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (packs), Table (low scores).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Tanium data and alerts to stay ahead of the risks described in \"Tanium Benchmark and CIS Report Pack Compliance\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Tanium",
                "id": 6076,
                "url": "https://splunkbase.splunk.com/app/6076"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.121",
              "n": "Check Point Zero-Day Phishing Detection via Zero Phishing",
              "c": "critical",
              "f": "intermediate",
              "v": "Zero Phishing uses AI to detect unknown phishing sites in real time at the gateway, blocking users before credentials are submitted — even for sites not yet in any threat feed.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp_log` (Threat Prevention / Zero Phishing events)",
              "q": "index=security sourcetype=\"cp_log\"\n| where match(lower(product),\"(?i)zero.?phishing|anti.?phishing\") AND match(lower(action),\"(?i)prevent|block|detect\")\n| stats count by protection_name, resource, src, dst, action\n| sort -count",
              "m": "Enable Zero Phishing blade on Quantum gateways. Forward prevention logs. Alert on blocks targeting executive or finance user segments. Correlate blocked URLs with email gateway logs for campaign attribution.",
              "z": "Table (phishing sites blocked), Bar chart (by category), Timeline (detections).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp_log` (Threat Prevention / Zero Phishing events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Zero Phishing blade on Quantum gateways. Forward prevention logs. Alert on blocks targeting executive or finance user segments. Correlate blocked URLs with email gateway logs for campaign attribution.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"cp_log\"\n| where match(lower(product),\"(?i)zero.?phishing|anti.?phishing\") AND match(lower(action),\"(?i)prevent|block|detect\")\n| stats count by protection_name, resource, src, dst, action\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Check Point Zero-Day Phishing Detection via Zero Phishing** — Zero Phishing uses AI to detect unknown phishing sites in real time at the gateway, blocking users before credentials are submitted — even for sites not yet in any threat feed.\n\nDocumented **Data sources**: `sourcetype=cp_log` (Threat Prevention / Zero Phishing events). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"cp_log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)zero.?phishing|anti.?phishing\") AND match(lower(action),\"(?i)prevent|block|detect\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by protection_name, resource, src, dst, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.src, Web.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Zero-Day Phishing Detection via Zero Phishing** — Zero Phishing uses AI to detect unknown phishing sites in real time at the gateway, blocking users before credentials are submitted — even for sites not yet in any threat feed.\n\nDocumented **Data sources**: `sourcetype=cp_log` (Threat Prevention / Zero Phishing events). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (phishing sites blocked), Bar chart (by category), Timeline (detections).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Zero-Day Phishing Detection via Zero Phishing\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web",
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.src, Web.action | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.122",
              "n": "Check Point ThreatCloud IOC Match Rate",
              "c": "high",
              "f": "beginner",
              "v": "ThreatCloud is Check Point's global threat intelligence feed powering all prevention blades. Match rate trending validates that protections are active and that feed updates are being applied. A drop in match rate may indicate feed delivery failure or policy misconfiguration.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp_log` (IPS/AV/Anti-Bot logs with ThreatCloud indicators)",
              "q": "index=security sourcetype=\"cp_log\" earliest=-7d\n| where isnotnull(protection_name) AND match(lower(action),\"(?i)prevent|block|detect|drop\")\n| timechart span=1d count by product",
              "m": "Baseline daily detection count per blade (IPS, Anti-Virus, Anti-Bot, Threat Emulation). Alert when daily count drops >50% vs 7-day average — indicates possible feed or blade issue. Report on detection mix for security posture assessment.",
              "z": "Line chart (detections/day by blade), Single value (today vs baseline), Bar chart (top protections).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp_log` (IPS/AV/Anti-Bot logs with ThreatCloud indicators).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline daily detection count per blade (IPS, Anti-Virus, Anti-Bot, Threat Emulation). Alert when daily count drops >50% vs 7-day average — indicates possible feed or blade issue. Report on detection mix for security posture assessment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"cp_log\" earliest=-7d\n| where isnotnull(protection_name) AND match(lower(action),\"(?i)prevent|block|detect|drop\")\n| timechart span=1d count by product\n```\n\nUnderstanding this SPL\n\n**Check Point ThreatCloud IOC Match Rate** — ThreatCloud is Check Point's global threat intelligence feed powering all prevention blades. Match rate trending validates that protections are active and that feed updates are being applied. A drop in match rate may indicate feed delivery failure or policy misconfiguration.\n\nDocumented **Data sources**: `sourcetype=cp_log` (IPS/AV/Anti-Bot logs with ThreatCloud indicators). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"cp_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(protection_name) AND match(lower(action),\"(?i)prevent|block|detect|drop\")` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by product** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point ThreatCloud IOC Match Rate** — ThreatCloud is Check Point's global threat intelligence feed powering all prevention blades. Match rate trending validates that protections are active and that feed updates are being applied. A drop in match rate may indicate feed delivery failure or policy misconfiguration.\n\nDocumented **Data sources**: `sourcetype=cp_log` (IPS/AV/Anti-Bot logs with ThreatCloud indicators). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (detections/day by blade), Single value (today vs baseline), Bar chart (top protections).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point ThreatCloud IOC Match Rate\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest span=1d | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.123",
              "n": "Check Point Quantum IoT Protect Device Discovery",
              "c": "medium",
              "f": "intermediate",
              "v": "IoT Protect uses network fingerprinting to automatically discover and classify IoT/OT devices on the network and apply micro-segmentation policies. Tracking discovered device types and policy assignments validates that IoT assets are protected and not bypassing security.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp_log` (IoT Protect discovery/policy events)",
              "q": "index=security sourcetype=\"cp_log\"\n| where match(lower(product),\"(?i)iot.?protect|iot.?discovery\")\n| stats dc(device_id) as devices values(device_type) as types by policy_name, action\n| sort -devices",
              "m": "Enable IoT Protect on Quantum gateways. Monitor discovered device counts and types. Alert on new unclassified devices. Validate that IoT-specific policies are being enforced. Report on IoT device inventory growth for asset management.",
              "z": "Table (discovered devices), Pie chart (device types), Bar chart (policies applied), Line chart (discovery trend).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp_log` (IoT Protect discovery/policy events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable IoT Protect on Quantum gateways. Monitor discovered device counts and types. Alert on new unclassified devices. Validate that IoT-specific policies are being enforced. Report on IoT device inventory growth for asset management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"cp_log\"\n| where match(lower(product),\"(?i)iot.?protect|iot.?discovery\")\n| stats dc(device_id) as devices values(device_type) as types by policy_name, action\n| sort -devices\n```\n\nUnderstanding this SPL\n\n**Check Point Quantum IoT Protect Device Discovery** — IoT Protect uses network fingerprinting to automatically discover and classify IoT/OT devices on the network and apply micro-segmentation policies. Tracking discovered device types and policy assignments validates that IoT assets are protected and not bypassing security.\n\nDocumented **Data sources**: `sourcetype=cp_log` (IoT Protect discovery/policy events). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"cp_log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)iot.?protect|iot.?discovery\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by policy_name, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Quantum IoT Protect Device Discovery** — IoT Protect uses network fingerprinting to automatically discover and classify IoT/OT devices on the network and apply micro-segmentation policies. Tracking discovered device types and policy assignments validates that IoT assets are protected and not bypassing security.\n\nDocumented **Data sources**: `sourcetype=cp_log` (IoT Protect discovery/policy events). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (discovered devices), Pie chart (device types), Bar chart (policies applied), Line chart (discovery trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Quantum IoT Protect Device Discovery\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.124",
              "n": "Check Point Quantum Maestro Orchestrator Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Maestro scales Check Point gateways into a hyperscale cluster using an Orchestrator to distribute traffic across Security Gateway Modules (SGMs). Orchestrator health determines whether traffic distribution, failover, and scaling work correctly.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp_log` (Maestro/system logs), SNMP (Orchestrator MIBs)",
              "q": "index=firewall sourcetype=\"cp_log\"\n| where match(lower(product),\"(?i)maestro|orchestrator|sgm\")\n| eval sgm=coalesce(sgm_id, member_id, module_name)\n| stats count values(logdesc) as events by sgm, severity\n| sort -severity -count",
              "m": "Forward Maestro system and orchestrator logs. Track SGM membership changes (join/leave), traffic distribution balance, and orchestrator failover events. Alert on any SGM leaving the cluster or orchestrator redundancy loss. Report on scaling events and capacity utilization.",
              "z": "Table (SGM status), Timeline (membership changes), Bar chart (events by SGM), Single value (active SGMs).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp_log` (Maestro/system logs), SNMP (Orchestrator MIBs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Maestro system and orchestrator logs. Track SGM membership changes (join/leave), traffic distribution balance, and orchestrator failover events. Alert on any SGM leaving the cluster or orchestrator redundancy loss. Report on scaling events and capacity utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"cp_log\"\n| where match(lower(product),\"(?i)maestro|orchestrator|sgm\")\n| eval sgm=coalesce(sgm_id, member_id, module_name)\n| stats count values(logdesc) as events by sgm, severity\n| sort -severity -count\n```\n\nUnderstanding this SPL\n\n**Check Point Quantum Maestro Orchestrator Health** — Maestro scales Check Point gateways into a hyperscale cluster using an Orchestrator to distribute traffic across Security Gateway Modules (SGMs). Orchestrator health determines whether traffic distribution, failover, and scaling work correctly.\n\nDocumented **Data sources**: `sourcetype=cp_log` (Maestro/system logs), SNMP (Orchestrator MIBs). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"cp_log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)maestro|orchestrator|sgm\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **sgm** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by sgm, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (SGM status), Timeline (membership changes), Bar chart (events by SGM), Single value (active SGMs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Quantum Maestro Orchestrator Health\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.125",
              "n": "Check Point CloudGuard Network Security Events",
              "c": "high",
              "f": "intermediate",
              "v": "CloudGuard Network extends Check Point Quantum prevention to cloud workloads in AWS, Azure, and GCP. Security events from cloud-deployed gateways provide the same IPS, threat prevention, and access control visibility as on-prem — critical for hybrid environments.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp_log` (CloudGuard logs via Log Exporter or Smart-1 Cloud)",
              "q": "index=cloud sourcetype=\"cp_log\" earliest=-24h\n| where match(lower(product),\"(?i)cloudguard|cloud.?network\") OR match(lower(origin_sic_name),\"(?i)aws|azure|gcp\")\n| stats count by product, action, protection_name, cloud_provider\n| sort -count",
              "m": "Deploy CloudGuard Network in AWS VPC, Azure VNET, or GCP VPC and forward logs to Smart-1 Cloud or on-prem Log Server. Normalize `cloud_provider` from SIC name or tags. Alert on critical threats in cloud environments. Report on cloud vs on-prem detection ratios for security posture.",
              "z": "Bar chart (events by cloud provider), Table (top protections), Pie chart (action distribution), Line chart (event trend).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp_log` (CloudGuard logs via Log Exporter or Smart-1 Cloud).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy CloudGuard Network in AWS VPC, Azure VNET, or GCP VPC and forward logs to Smart-1 Cloud or on-prem Log Server. Normalize `cloud_provider` from SIC name or tags. Alert on critical threats in cloud environments. Report on cloud vs on-prem detection ratios for security posture.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"cp_log\" earliest=-24h\n| where match(lower(product),\"(?i)cloudguard|cloud.?network\") OR match(lower(origin_sic_name),\"(?i)aws|azure|gcp\")\n| stats count by product, action, protection_name, cloud_provider\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Check Point CloudGuard Network Security Events** — CloudGuard Network extends Check Point Quantum prevention to cloud workloads in AWS, Azure, and GCP. Security events from cloud-deployed gateways provide the same IPS, threat prevention, and access control visibility as on-prem — critical for hybrid environments.\n\nDocumented **Data sources**: `sourcetype=cp_log` (CloudGuard logs via Log Exporter or Smart-1 Cloud). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"cp_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)cloudguard|cloud.?network\") OR match(lower(origin_sic_name),\"(?i)aws|azure|gcp\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by product, action, protection_name, cloud_provider** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point CloudGuard Network Security Events** — CloudGuard Network extends Check Point Quantum prevention to cloud workloads in AWS, Azure, and GCP. Security events from cloud-deployed gateways provide the same IPS, threat prevention, and access control visibility as on-prem — critical for hybrid environments.\n\nDocumented **Data sources**: `sourcetype=cp_log` (CloudGuard logs via Log Exporter or Smart-1 Cloud). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by cloud provider), Table (top protections), Pie chart (action distribution), Line chart (event trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point CloudGuard Network Security Events\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection",
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.126",
              "n": "Check Point Threat Prevention Policy Layer Effectiveness",
              "c": "high",
              "f": "intermediate",
              "v": "Check Point Threat Prevention operates through policy layers (IPS, Anti-Virus, Anti-Bot, Threat Emulation, Threat Extraction). Measuring detection and prevention rates per layer validates that each security function is contributing and identifies layers that may need tuning.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp_log` (all Threat Prevention blade logs)",
              "q": "index=security sourcetype=\"cp_log\" earliest=-30d\n| where match(lower(product),\"(?i)ips|anti.?virus|anti.?bot|threat.?emulation|threat.?extraction\")\n| eval blade=lower(product)\n| eval prevented=if(match(lower(action),\"(?i)prevent|block|drop\"),1,0)\n| stats count sum(prevented) as prevents by blade\n| eval prevention_rate=round(100*prevents/count,1)\n| sort -count",
              "m": "Ensure all Threat Prevention blades log to Splunk. Compare prevention rates across blades. Alert when any blade's detection count drops to zero (possibly disabled). Report on blade effectiveness for security investment justification and compliance.",
              "z": "Bar chart (detections by blade), Stacked bar (detect vs prevent), Table (blade effectiveness), Line chart (detection trend by blade).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp_log` (all Threat Prevention blade logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure all Threat Prevention blades log to Splunk. Compare prevention rates across blades. Alert when any blade's detection count drops to zero (possibly disabled). Report on blade effectiveness for security investment justification and compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"cp_log\" earliest=-30d\n| where match(lower(product),\"(?i)ips|anti.?virus|anti.?bot|threat.?emulation|threat.?extraction\")\n| eval blade=lower(product)\n| eval prevented=if(match(lower(action),\"(?i)prevent|block|drop\"),1,0)\n| stats count sum(prevented) as prevents by blade\n| eval prevention_rate=round(100*prevents/count,1)\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Check Point Threat Prevention Policy Layer Effectiveness** — Check Point Threat Prevention operates through policy layers (IPS, Anti-Virus, Anti-Bot, Threat Emulation, Threat Extraction). Measuring detection and prevention rates per layer validates that each security function is contributing and identifies layers that may need tuning.\n\nDocumented **Data sources**: `sourcetype=cp_log` (all Threat Prevention blade logs). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"cp_log\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)ips|anti.?virus|anti.?bot|threat.?emulation|threat.?extraction\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **blade** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **prevented** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by blade** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **prevention_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Threat Prevention Policy Layer Effectiveness** — Check Point Threat Prevention operates through policy layers (IPS, Anti-Virus, Anti-Bot, Threat Emulation, Threat Extraction). Measuring detection and prevention rates per layer validates that each security function is contributing and identifies layers that may need tuning.\n\nDocumented **Data sources**: `sourcetype=cp_log` (all Threat Prevention blade logs). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (detections by blade), Stacked bar (detect vs prevent), Table (blade effectiveness), Line chart (detection trend by blade).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Threat Prevention Policy Layer Effectiveness\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection",
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.127",
              "n": "Check Point Admin Session and Login Audit",
              "c": "high",
              "f": "beginner",
              "v": "SmartConsole administrator logins and session activity form the audit trail for all firewall management actions. Unauthorized or anomalous admin access could lead to policy tampering, backdoor rules, or data exfiltration via policy changes.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp_log` (audit/admin logs)",
              "q": "index=checkpoint sourcetype=\"cp_log\" earliest=-30d\n| where match(lower(product),\"(?i)smartconsole|management\") AND match(lower(operation),\"(?i)login|logout|session|lock|unlock\")\n| stats count earliest(_time) as first latest(_time) as last by administrator, client_ip, operation\n| sort -last",
              "m": "Forward management audit logs. Alert on logins from new IPs, after-hours access, and concurrent sessions from different locations. Track locked/failed sessions. Correlate with MFA status from IdP if integrated.",
              "z": "Table (admin sessions), Timeline (login/logout events), Bar chart (logins by admin), Single value (failed logins today).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "access",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp_log` (audit/admin logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward management audit logs. Alert on logins from new IPs, after-hours access, and concurrent sessions from different locations. Track locked/failed sessions. Correlate with MFA status from IdP if integrated.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=checkpoint sourcetype=\"cp_log\" earliest=-30d\n| where match(lower(product),\"(?i)smartconsole|management\") AND match(lower(operation),\"(?i)login|logout|session|lock|unlock\")\n| stats count earliest(_time) as first latest(_time) as last by administrator, client_ip, operation\n| sort -last\n```\n\nUnderstanding this SPL\n\n**Check Point Admin Session and Login Audit** — SmartConsole administrator logins and session activity form the audit trail for all firewall management actions. Unauthorized or anomalous admin access could lead to policy tampering, backdoor rules, or data exfiltration via policy changes.\n\nDocumented **Data sources**: `sourcetype=cp_log` (audit/admin logs). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: checkpoint; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=checkpoint, sourcetype=\"cp_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)smartconsole|management\") AND match(lower(operation),\"(?i)login|logout|session|lock|unlock\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by administrator, client_ip, operation** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Admin Session and Login Audit** — SmartConsole administrator logins and session activity form the audit trail for all firewall management actions. Unauthorized or anomalous admin access could lead to policy tampering, backdoor rules, or data exfiltration via policy changes.\n\nDocumented **Data sources**: `sourcetype=cp_log` (audit/admin logs). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (admin sessions), Timeline (login/logout events), Bar chart (logins by admin), Single value (failed logins today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Admin Session and Login Audit\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.128",
              "n": "Check Point DDoS Protector Integration Events",
              "c": "critical",
              "f": "intermediate",
              "v": "Check Point DDoS Protector (Quantum DDoS) sits in front of the gateway and mitigates volumetric and application-layer DDoS attacks. Detection and mitigation events validate protection effectiveness and provide attack attribution.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp_log` (DDoS Protector logs, if forwarded through Log Exporter)",
              "q": "index=security sourcetype=\"cp_log\" earliest=-24h\n| where match(lower(product),\"(?i)ddos|dos.?protect|quantum.?ddos\")\n| stats count by attack_type, src, action, severity\n| sort -count",
              "m": "Forward DDoS Protector logs to Splunk via Log Exporter or syslog. Classify attack types (SYN flood, HTTP flood, DNS amplification). Alert on active mitigations. Report on attack frequency and volume for ISP coordination and capacity planning.",
              "z": "Bar chart (attacks by type), Table (active mitigations), Timeline (attack events), Single value (attacks mitigated today).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp_log` (DDoS Protector logs, if forwarded through Log Exporter).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward DDoS Protector logs to Splunk via Log Exporter or syslog. Classify attack types (SYN flood, HTTP flood, DNS amplification). Alert on active mitigations. Report on attack frequency and volume for ISP coordination and capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"cp_log\" earliest=-24h\n| where match(lower(product),\"(?i)ddos|dos.?protect|quantum.?ddos\")\n| stats count by attack_type, src, action, severity\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Check Point DDoS Protector Integration Events** — Check Point DDoS Protector (Quantum DDoS) sits in front of the gateway and mitigates volumetric and application-layer DDoS attacks. Detection and mitigation events validate protection effectiveness and provide attack attribution.\n\nDocumented **Data sources**: `sourcetype=cp_log` (DDoS Protector logs, if forwarded through Log Exporter). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"cp_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)ddos|dos.?protect|quantum.?ddos\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by attack_type, src, action, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src, IDS_Attacks.action, IDS_Attacks.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point DDoS Protector Integration Events** — Check Point DDoS Protector (Quantum DDoS) sits in front of the gateway and mitigates volumetric and application-layer DDoS attacks. Detection and mitigation events validate protection effectiveness and provide attack attribution.\n\nDocumented **Data sources**: `sourcetype=cp_log` (DDoS Protector logs, if forwarded through Log Exporter). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (attacks by type), Table (active mitigations), Timeline (attack events), Single value (attacks mitigated today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point DDoS Protector Integration Events\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src, IDS_Attacks.action, IDS_Attacks.severity | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.129",
              "n": "Check Point Infinity ThreatCloud Managed Security Service Events",
              "c": "high",
              "f": "beginner",
              "v": "Check Point Infinity combines prevention, detection, and managed response across network, cloud, mobile, and endpoint. Managed service events (from Check Point SOC) represent expert-validated detections requiring customer action.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp_log` (Infinity/managed service events)",
              "q": "index=security sourcetype=\"cp_log\" earliest=-7d\n| where match(lower(product),\"(?i)infinity|managed|threatcloud.?managed\") OR match(lower(logdesc),\"(?i)managed.?detect|recommended.?action\")\n| stats count by severity, protection_name, recommended_action\n| sort -severity -count",
              "m": "Ingest Infinity managed service alerts. These are pre-triaged by Check Point's SOC and include recommended actions. Escalate critical-severity findings immediately. Track remediation time for SLA reporting.",
              "z": "Table (managed alerts), Bar chart (by severity), Timeline (events), Single value (open critical findings).",
              "kfp": "Legitimate use of dual-purpose administrative tools (e.g. PowerShell, SSH, cloud CLIs) during maintenance or deployment can match this pattern — correlate with change tickets, asset criticality and user role before escalating.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "TTP",
              "sdomain": "threat",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp_log` (Infinity/managed service events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Infinity managed service alerts. These are pre-triaged by Check Point's SOC and include recommended actions. Escalate critical-severity findings immediately. Track remediation time for SLA reporting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"cp_log\" earliest=-7d\n| where match(lower(product),\"(?i)infinity|managed|threatcloud.?managed\") OR match(lower(logdesc),\"(?i)managed.?detect|recommended.?action\")\n| stats count by severity, protection_name, recommended_action\n| sort -severity -count\n```\n\nUnderstanding this SPL\n\n**Check Point Infinity ThreatCloud Managed Security Service Events** — Check Point Infinity combines prevention, detection, and managed response across network, cloud, mobile, and endpoint. Managed service events (from Check Point SOC) represent expert-validated detections requiring customer action.\n\nDocumented **Data sources**: `sourcetype=cp_log` (Infinity/managed service events). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"cp_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)infinity|managed|threatcloud.?managed\") OR match(lower(logdesc),\"(?i)managed.?detect|recomm…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by severity, protection_name, recommended_action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Infinity ThreatCloud Managed Security Service Events** — Check Point Infinity combines prevention, detection, and managed response across network, cloud, mobile, and endpoint. Managed service events (from Check Point SOC) represent expert-validated detections requiring customer action.\n\nDocumented **Data sources**: `sourcetype=cp_log` (Infinity/managed service events). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: TTP — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (managed alerts), Bar chart (by severity), Timeline (events), Single value (open critical findings).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point Infinity ThreatCloud Managed Security Service Events\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.11.130",
              "n": "Check Point HTTPS Inspection Certificate Errors",
              "c": "high",
              "f": "intermediate",
              "v": "HTTPS inspection replaces server certificates with gateway-signed versions. Certificate errors — expired CA, untrusted root, or client certificate validation failures — break inspection and may cause user-facing errors or silent bypass of security inspection.",
              "t": "`Splunk_TA_checkpoint`",
              "d": "`sourcetype=cp_log` (HTTPS inspection logs)",
              "q": "index=firewall sourcetype=\"cp_log\" earliest=-7d\n| where match(lower(product),\"(?i)https.?inspection|ssl.?inspection\") AND match(lower(logdesc),\"(?i)cert.*error|untrusted|expired|revoked|mismatch|validation.*fail\")\n| stats count by logdesc, src, dst, category\n| sort -count",
              "m": "Enable detailed HTTPS inspection logging. Track certificate error types and destinations. Alert on errors involving the inspection CA certificate (affects all users). Report on bypass-causing certificate issues. Correlate with help desk tickets for SSL errors.",
              "z": "Bar chart (errors by type), Table (destinations with errors), Line chart (error trend), Single value (errors today).",
              "kfp": "Seasonal traffic, marketing campaigns, backups or project rollouts can shift the baseline — tune per-entity baselines and widen the training window if the signal becomes noisy.",
              "refs": "[Splunk_TA_checkpoint](https://splunkbase.splunk.com/app/2646), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "Anomaly",
              "sdomain": "network",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_checkpoint`.\n• Ensure the following data sources are available: `sourcetype=cp_log` (HTTPS inspection logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable detailed HTTPS inspection logging. Track certificate error types and destinations. Alert on errors involving the inspection CA certificate (affects all users). Report on bypass-causing certificate issues. Correlate with help desk tickets for SSL errors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=firewall sourcetype=\"cp_log\" earliest=-7d\n| where match(lower(product),\"(?i)https.?inspection|ssl.?inspection\") AND match(lower(logdesc),\"(?i)cert.*error|untrusted|expired|revoked|mismatch|validation.*fail\")\n| stats count by logdesc, src, dst, category\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Check Point HTTPS Inspection Certificate Errors** — HTTPS inspection replaces server certificates with gateway-signed versions. Certificate errors — expired CA, untrusted root, or client certificate validation failures — break inspection and may cause user-facing errors or silent bypass of security inspection.\n\nDocumented **Data sources**: `sourcetype=cp_log` (HTTPS inspection logs). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: firewall; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=firewall, sourcetype=\"cp_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)https.?inspection|ssl.?inspection\") AND match(lower(logdesc),\"(?i)cert.*error|untrusted|exp…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by logdesc, src, dst, category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point HTTPS Inspection Certificate Errors** — HTTPS inspection replaces server certificates with gateway-signed versions. Certificate errors — expired CA, untrusted root, or client certificate validation failures — break inspection and may cause user-facing errors or silent bypass of security inspection.\n\nDocumented **Data sources**: `sourcetype=cp_log` (HTTPS inspection logs). **App/TA** (typical add-on context): `Splunk_TA_checkpoint`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: Anomaly — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (errors by type), Table (destinations with errors), Line chart (error trend), Single value (errors today).",
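              "exampleSketch": "Added illustrative sketch (not part of the original entry): isolates certificate errors with a wide blast radius, approximating the 'inspection CA affects all users' case from Step 1. The 50-client threshold is a placeholder; `logdesc` and `src` follow the fields documented above.\n\n```spl\nindex=firewall sourcetype=\"cp_log\" earliest=-24h\n| where match(lower(logdesc),\"(?i)cert\") AND match(lower(logdesc),\"(?i)expired|untrusted|revoked|validation\")\n| stats dc(src) as affected_clients, count by logdesc\n| where affected_clients>50\n| sort -affected_clients\n```\n\nA spike in affected_clients for a single logdesc usually points at the gateway's inspection CA rather than one destination site.",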
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use Check Point data and alerts to stay ahead of the risks described in \"Check Point HTTPS Inspection Certificate Errors\". We help the team notice trouble early, while a situation is still small and containable.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Checkpoint",
                "id": 3435,
                "url": "https://splunkbase.splunk.com/app/3435"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.2,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 130,
            "none": 0
          }
        },
        {
          "i": "10.12",
          "n": "Industry-Specific Compliance & Fraud Detection",
          "u": [
            {
              "i": "10.12.1",
              "n": "ATM Fraud Pattern Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Correlates card-present withdrawals with geographic velocity and device tamper signals to catch skimming and cash-out campaigns before losses spread.",
              "t": "ATM switch logs, core banking TA, ISO 8583 parsers",
              "d": "`sourcetype=\"atm:transaction\"`, `sourcetype=\"iso8583:auth\"`",
              "q": "index=payments sourcetype IN (\"atm:transaction\",\"iso8583:auth\") txn_type=\"withdrawal\"\n| eval hour=strftime(_time,\"%H\")\n| stats count, dc(atm_id) as atms, values(card_hash) as cards by src_merchant_id, hour\n| where count>20 AND atms>5\n| sort -count",
              "m": "Ingest ATM host and authorization logs with terminal ID, amount, response code, and geo. Baseline per-terminal throughput; alert on velocity across many ATMs or impossible travel vs. last chip read.",
              "z": "Table (suspicious terminals), Map (withdrawal clusters), Line chart (ATM fraud attempts over time).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ATM switch logs, core banking TA, ISO 8583 parsers.\n• Ensure the following data sources are available: `sourcetype=\"atm:transaction\"`, `sourcetype=\"iso8583:auth\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest ATM host and authorization logs with terminal ID, amount, response code, and geo. Baseline per-terminal throughput; alert on velocity across many ATMs or impossible travel vs. last chip read.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype IN (\"atm:transaction\",\"iso8583:auth\") txn_type=\"withdrawal\"\n| eval hour=strftime(_time,\"%H\")\n| stats count, dc(atm_id) as atms, values(card_hash) as cards by src_merchant_id, hour\n| where count>20 AND atms>5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ATM Fraud Pattern Detection** — Correlates card-present withdrawals with geographic velocity and device tamper signals to catch skimming and cash-out campaigns before losses spread.\n\nDocumented **Data sources**: `sourcetype=\"atm:transaction\"`, `sourcetype=\"iso8583:auth\"`. **App/TA** (typical add-on context): ATM switch logs, core banking TA, ISO 8583 parsers. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by src_merchant_id, hour** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>20 AND atms>5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ATM Fraud Pattern Detection** — Correlates card-present withdrawals with geographic velocity and device tamper signals to catch skimming and cash-out campaigns before losses spread.\n\nDocumented **Data sources**: `sourcetype=\"atm:transaction\"`, `sourcetype=\"iso8583:auth\"`. **App/TA** (typical add-on context): ATM switch logs, core banking TA, ISO 8583 parsers. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious terminals), Map (withdrawal clusters), Line chart (ATM fraud attempts over time).",
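              "exampleSketch": "Added illustrative sketch (not part of the original entry): flags a single card withdrawing across many terminals in one hour, the multi-ATM velocity signal described in Step 1. The thresholds and the `amount` field name are assumptions to tune against your feed.\n\n```spl\nindex=payments sourcetype=\"atm:transaction\" txn_type=\"withdrawal\" earliest=-60m\n| stats dc(atm_id) as atms, count as withdrawals, sum(amount) as total by card_hash\n| where atms>=4 OR withdrawals>=10\n| sort -atms\n```\n\nRun on a 15-minute schedule over a sliding 60-minute window so cash-out runs are caught mid-campaign.",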
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"ATM Fraud Pattern Detection\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Security",
                "Fraud"
              ],
              "ind": "Financial Services",
              "a": [
                "Authentication (card PIN attempts)",
                "Network_Traffic (if IP backhaul available)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.2",
              "n": "Wire Transfer Anomaly Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Flags unusual outbound wires (amount, beneficiary, corridor) for BEC and mule-account schemes.",
              "t": "Wire host / SWIFT TA, core banking",
              "d": "`sourcetype=\"wire:outbound\"`, `sourcetype=\"swift:fin\"`",
              "q": "index=banking sourcetype=\"wire:outbound\" status=\"posted\"\n| eventstats median(amount_usd) as med by customer_id\n| eval ratio=if(med>0, amount_usd/med, 0)\n| where ratio>5 OR amount_usd>250000\n| stats values(beneficiary_id) as beneficiaries, sum(amount_usd) as total by customer_id, _time\n| sort -total",
              "m": "Normalize amounts to USD; maintain beneficiary risk lists. Alert on first-time high-value beneficiaries, round-dollar spikes, and wires initiated after new payee setup.",
              "z": "Table (high-risk wires), Bar chart (amount by corridor), Timeline (wire events per customer).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Wire host / SWIFT TA, core banking.\n• Ensure the following data sources are available: `sourcetype=\"wire:outbound\"`, `sourcetype=\"swift:fin\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize amounts to USD; maintain beneficiary risk lists. Alert on first-time high-value beneficiaries, round-dollar spikes, and wires initiated after new payee setup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=banking sourcetype=\"wire:outbound\" status=\"posted\"\n| eventstats median(amount_usd) as med by customer_id\n| eval ratio=if(med>0, amount_usd/med, 0)\n| where ratio>5 OR amount_usd>250000\n| stats values(beneficiary_id) as beneficiaries, sum(amount_usd) as total by customer_id, _time\n| sort -total\n```\n\nUnderstanding this SPL\n\n**Wire Transfer Anomaly Detection** — Flags unusual outbound wires (amount, beneficiary, corridor) for BEC and mule-account schemes.\n\nDocumented **Data sources**: `sourcetype=\"wire:outbound\"`, `sourcetype=\"swift:fin\"`. **App/TA** (typical add-on context): Wire host / SWIFT TA, core banking. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: banking; **sourcetype**: wire:outbound. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=banking, sourcetype=\"wire:outbound\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eventstats` rolls up events into metrics; results are split **by customer_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ratio>5 OR amount_usd>250000` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by customer_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (high-risk wires), Bar chart (amount by corridor), Timeline (wire events per customer).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"Wire Transfer Anomaly Detection\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Fraud",
                "Compliance"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.3",
              "n": "Credit Card Velocity Checks",
              "c": "high",
              "f": "intermediate",
              "v": "Detects card testing and enumeration via rapid auth attempts across merchants or regions.",
              "t": "Payment processor TA, gateway logs",
              "d": "`sourcetype=\"card:authorization\"`",
              "q": "index=payments sourcetype=\"card:authorization\"\n| bin _time span=15m\n| stats count, dc(merchant_id) as merchants, dc(auth_result) as outcomes by card_token, _time\n| where count>=8 AND merchants>=4\n| sort -count",
              "m": "Tokenize PANs; never log full track data. Thresholds tuned per BIN and channel. Feed declines into case management for issuer contact.",
              "z": "Table (hot cards), Line chart (auth attempts per BIN), Single value (unique cards in alert state).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Payment processor TA, gateway logs.\n• Ensure the following data sources are available: `sourcetype=\"card:authorization\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTokenize PANs; never log full track data. Thresholds tuned per BIN and channel. Feed declines into case management for issuer contact.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"card:authorization\"\n| bin _time span=15m\n| stats count, dc(merchant_id) as merchants, dc(auth_result) as outcomes by card_token, _time\n| where count>=8 AND merchants>=4\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Credit Card Velocity Checks** — Detects card testing and enumeration via rapid auth attempts across merchants or regions.\n\nDocumented **Data sources**: `sourcetype=\"card:authorization\"`. **App/TA** (typical add-on context): Payment processor TA, gateway logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: card:authorization. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"card:authorization\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by card_token, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>=8 AND merchants>=4` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hot cards), Line chart (auth attempts per BIN), Single value (unique cards in alert state).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"Credit Card Velocity Checks\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Fraud"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.4",
              "n": "Account Takeover Indicators",
              "c": "critical",
              "f": "advanced",
              "v": "Combines login, device, and session signals to catch ATO before fraudulent transfers.",
              "t": "IdP logs, mobile banking SDK, SIEM correlation",
              "d": "`sourcetype=\"okta:systemlog\"`, `sourcetype=\"banking:session\"`",
              "q": "index=identity OR index=banking (sourcetype=\"okta:systemlog\" OR sourcetype=\"banking:session\")\n| search (eventType=\"user.session.start\" OR event=\"new_device\") OR (risk_score>70)\n| transaction user_id maxspan=1h maxpause=10m\n| where mvcount(device_id)>2 OR mvcount(geo_country)>2\n| table _time, user_id, device_id, geo_country, risk_score",
              "m": "Require device fingerprint and step-up auth on risk score. Correlate password reset + wire initiation within short windows.",
              "z": "Timeline (user journey), Table (risky sessions), Sankey (device → action).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IdP logs, mobile banking SDK, SIEM correlation.\n• Ensure the following data sources are available: `sourcetype=\"okta:systemlog\"`, `sourcetype=\"banking:session\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequire device fingerprint and step-up auth on risk score. Correlate password reset + wire initiation within short windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=identity OR index=banking (sourcetype=\"okta:systemlog\" OR sourcetype=\"banking:session\")\n| search (eventType=\"user.session.start\" OR event=\"new_device\") OR (risk_score>70)\n| transaction user_id maxspan=1h maxpause=10m\n| where mvcount(device_id)>2 OR mvcount(geo_country)>2\n| table _time, user_id, device_id, geo_country, risk_score\n```\n\nUnderstanding this SPL\n\n**Account Takeover Indicators** — Combines login, device, and session signals to catch ATO before fraudulent transfers.\n\nDocumented **Data sources**: `sourcetype=\"okta:systemlog\"`, `sourcetype=\"banking:session\"`. **App/TA** (typical add-on context): IdP logs, mobile banking SDK, SIEM correlation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: identity, banking; **sourcetype**: okta:systemlog, banking:session. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=identity, index=banking, sourcetype=\"okta:systemlog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• Filters the current rows with `where mvcount(device_id)>2 OR mvcount(geo_country)>2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Account Takeover Indicators**): table _time, user_id, device_id, geo_country, risk_score\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (user journey), Table (risky sessions), Sankey (device → action).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"Account Takeover Indicators\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Security"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.5",
              "n": "ACH Origination Anomalies",
              "c": "critical",
              "f": "intermediate",
              "v": "Surfaces unusual ACH batches (debit concentration, new originators) that may indicate fraud or operational error.",
              "t": "ACH/NACHA ingestion, treasury workstation",
              "d": "`sourcetype=\"ach:origination\"`",
              "q": "index=payments sourcetype=\"ach:origination\" sec_code=\"WEB\"\n| stats sum(amount) as batch_total, dc(individual_id) as distinct_receivers by company_id, file_id\n| eventstats avg(batch_total) as avg_batch by company_id\n| where batch_total > 3*avg_batch OR distinct_receivers > 500\n| sort -batch_total",
              "m": "Validate company_id against registered originators; alert on new SEC codes or return spikes after origination.",
              "z": "Table (suspicious batches), Line chart (ACH volume by company), Bar chart (returns vs. originations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ACH/NACHA ingestion, treasury workstation.\n• Ensure the following data sources are available: `sourcetype=\"ach:origination\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nValidate company_id against registered originators; alert on new SEC codes or return spikes after origination.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"ach:origination\" sec_code=\"WEB\"\n| stats sum(amount) as batch_total, dc(individual_id) as distinct_receivers by company_id, file_id\n| eventstats avg(batch_total) as avg_batch by company_id\n| where batch_total > 3*avg_batch OR distinct_receivers > 500\n| sort -batch_total\n```\n\nUnderstanding this SPL\n\n**ACH Origination Anomalies** — Surfaces unusual ACH batches (debit concentration, new originators) that may indicate fraud or operational error.\n\nDocumented **Data sources**: `sourcetype=\"ach:origination\"`. **App/TA** (typical add-on context): ACH/NACHA ingestion, treasury workstation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: ach:origination. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"ach:origination\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by company_id, file_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by company_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where batch_total > 3*avg_batch OR distinct_receivers > 500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious batches), Line chart (ACH volume by company), Bar chart (returns vs. originations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"ACH Origination Anomalies\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Fraud",
                "Operations"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.6",
              "n": "Card-Not-Present Transaction Spikes",
              "c": "high",
              "f": "beginner",
              "v": "E-commerce CNP spikes often precede credential stuffing or BIN attacks.",
              "t": "Gateway TA, e-commerce platform",
              "d": "`sourcetype=\"gateway:txn\"`, `card_present=\"false\"`",
              "q": "index=payments sourcetype=\"gateway:txn\" card_present=\"false\"\n| timechart span=1h sum(amount) as gmv, count as txns\n| eventstats avg(txns) as baseline, stdev(txns) as s\n| eval z=(txns-baseline)/s\n| where abs(z)>3",
              "m": "Segment by merchant and MCC; seasonal baselines for retail peaks. Correlate with 3DS failures and chargeback velocity.",
              "z": "Line chart (CNP volume vs. baseline), Single value (Z-score), Table (merchants over threshold).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Gateway TA, e-commerce platform.\n• Ensure the following data sources are available: `sourcetype=\"gateway:txn\"`, `card_present=\"false\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSegment by merchant and MCC; seasonal baselines for retail peaks. Correlate with 3DS failures and chargeback velocity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"gateway:txn\" card_present=\"false\"\n| timechart span=1h sum(amount) as gmv, count as txns\n| eventstats avg(txns) as baseline, stdev(txns) as s\n| eval z=(txns-baseline)/s\n| where abs(z)>3\n```\n\nUnderstanding this SPL\n\n**Card-Not-Present Transaction Spikes** — E-commerce CNP spikes often precede credential stuffing or BIN attacks.\n\nDocumented **Data sources**: `sourcetype=\"gateway:txn\"`, `card_present=\"false\"`. **App/TA** (typical add-on context): Gateway TA, e-commerce platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: gateway:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"gateway:txn\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **z** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where abs(z)>3` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CNP volume vs. baseline), Single value (Z-score), Table (merchants over threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"Card-Not-Present Transaction Spikes\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Fraud"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.7",
              "n": "PCI DSS Log Review Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Demonstrates ongoing review of security events for cardholder data environments (CDE) per PCI DSS Req. 10–12.",
              "t": "Splunk Enterprise Security, file integrity / FIM TA",
              "d": "`sourcetype=\"linux:audit\"`, firewall, IDS in `tag=pci`",
              "q": "index=pci (sourcetype=\"linux:audit\" OR sourcetype=\"pan:traffic\") tag=pci\n| stats count by sourcetype, host, severity\n| join type=left max=1 host [| inputlookup pci_asset_inventory.csv | fields host reviewer last_review]\n| where isnull(last_review) OR last_review < relative_time(now(),\"-7d@d\")",
              "m": "Tag CDE assets; weekly attested review searches with saved reports and sign-off in GRC tooling.",
              "z": "Table (assets pending review), Pie chart (events by severity), Status (compliance %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security, file integrity / FIM TA.\n• Ensure the following data sources are available: `sourcetype=\"linux:audit\"`, firewall, IDS in `tag=pci`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag CDE assets; weekly attested review searches with saved reports and sign-off in GRC tooling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pci (sourcetype=\"linux:audit\" OR sourcetype=\"pan:traffic\") tag=pci\n| stats count by sourcetype, host, severity\n| join type=left max=1 host [| inputlookup pci_asset_inventory.csv | fields host reviewer last_review]\n| where isnull(last_review) OR last_review < relative_time(now(),\"-7d@d\")\n```\n\nUnderstanding this SPL\n\n**PCI DSS Log Review Compliance** — Demonstrates ongoing review of security events for cardholder data environments (CDE) per PCI DSS Req. 10–12.\n\nDocumented **Data sources**: `sourcetype=\"linux:audit\"`, firewall, IDS in `tag=pci`. **App/TA** (typical add-on context): Splunk Enterprise Security, file integrity / FIM TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pci; **sourcetype**: linux:audit, pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pci, sourcetype=\"linux:audit\", tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sourcetype, host, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(last_review) OR last_review < relative_time(now(),\"-7d@d\")` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (assets pending review), Pie chart (events by severity), Status (compliance %).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"PCI DSS Log Review Compliance\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.4.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS unknown is enforced — Splunk UC-10.12.7: PCI DSS Log Review Compliance.",
                  "ea": "Saved search 'UC-10.12.7' running on sourcetype=\"linux:audit\", firewall, IDS in tag=pci, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.8",
              "n": "SOX Access Control Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Evidence of periodic access reviews and segregation of duties for financially relevant systems.",
              "t": "Active Directory, ERP (SAP/Oracle) TA",
              "d": "`sourcetype=\"WinEventLog:Security\"` (AD group changes), `sourcetype=\"sap:audit\"` / `sourcetype=\"oracle:audit\"` (ERP journal postings)",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4720,4728,4732)\n| lookup sox_sensitive_roles SubjectUserName as user OUTPUT role owner\n| stats latest(_time) as last_use, values(EventCode) as ad_events by user, role\n| where isnotnull(role)",
              "m": "Map SoD rules; quarterly certification of role membership; alert on emergency access account usage without ticket.",
              "z": "Table (SoD violations), Bar chart (access by application), Timeline (privileged use).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Active Directory, ERP (SAP/Oracle) TA.\n• Ensure the following data sources are available: `sourcetype=\"WinEventLog:Security\"` (AD group changes), `sourcetype=\"sap:audit\"` / `sourcetype=\"oracle:audit\"` (ERP journal postings).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap SoD rules; quarterly certification of role membership; alert on emergency access account usage without ticket.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4720,4728,4732)\n| lookup sox_sensitive_roles SubjectUserName as user OUTPUT role owner\n| stats latest(_time) as last_use, values(EventCode) as ad_events by user, role\n| where isnotnull(role)\n```\n\nUnderstanding this SPL\n\n**SOX Access Control Audit** — Evidence of periodic access reviews and segregation of duties for financially relevant systems.\n\nDocumented **Data sources**: `sourcetype=\"WinEventLog:Security\"` (AD group changes), `sourcetype=\"sap:audit\"` / `sourcetype=\"oracle:audit\"` (ERP journal postings). **App/TA** (typical add-on context): Active Directory, ERP (SAP/Oracle) TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by user, role** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where isnotnull(role)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (SoD violations), Bar chart (access by application), Timeline (privileged use).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"SOX Access Control Audit\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [
                "oracle"
              ],
              "em": [
                "oracle_oracle_db"
              ],
              "cmp": [
                {
                  "r": "SOX",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Access.Review",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX unknown is enforced — Splunk UC-10.12.8: SOX Access Control Audit.",
                  "ea": "Saved search 'UC-10.12.8' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "SOX"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.9",
              "n": "KYC Continuous Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Detects adverse media, sanctions hits, or identity churn post-onboarding.",
              "t": "KYC vendor API logs, case management",
              "d": "`sourcetype=\"kyc:screening\"`",
              "q": "index=compliance sourcetype=\"kyc:screening\"\n| where match(alert_type,\"(sanctions|pep|adverse)\") AND disposition=\"open\"\n| stats latest(_time) as last_alert, values(match_score) as scores by customer_id, case_id\n| where last_alert < relative_time(now(),\"-90d@d\")\n| sort last_alert",
              "m": "Refresh high-risk customers on schedule; integrate watchlist updates; escalate unresolved alerts > SLA.",
              "z": "Table (aging KYC alerts), Line chart (screening volume), Single value (open PEP cases).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: KYC vendor API logs, case management.\n• Ensure the following data sources are available: `sourcetype=\"kyc:screening\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRefresh high-risk customers on schedule; integrate watchlist updates; escalate unresolved alerts > SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=compliance sourcetype=\"kyc:screening\"\n| where match(alert_type,\"(sanctions|pep|adverse)\") AND disposition=\"open\"\n| stats latest(_time) as last_alert, values(match_score) as scores by customer_id, case_id\n| where last_alert < relative_time(now(),\"-90d@d\")\n| sort last_alert\n```\n\nUnderstanding this SPL\n\n**KYC Continuous Monitoring** — Detects adverse media, sanctions hits, or identity churn post-onboarding.\n\nDocumented **Data sources**: `sourcetype=\"kyc:screening\"`. **App/TA** (typical add-on context): KYC vendor API logs, case management. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: compliance; **sourcetype**: kyc:screening. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=compliance, sourcetype=\"kyc:screening\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(alert_type,\"(sanctions|pep|adverse)\") AND disposition=\"open\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by customer_id, case_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where last_alert < relative_time(now(),\"-90d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (aging KYC alerts), Line chart (screening volume), Single value (open PEP cases).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"KYC Continuous Monitoring\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.10",
              "n": "Trade Settlement Failure Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Failed or partial settlements create counterparty and liquidity risk; early detection supports regulatory reporting.",
              "t": "OMS/EMS, clearing TA",
              "d": "`sourcetype=\"fix:exec\"`, `sourcetype=\"settlement:status\"`",
              "q": "index=trading (sourcetype=\"settlement:status\" OR sourcetype=\"fix:exec\")\n| search settlement_status IN (\"FAIL\",\"PARTIAL\") OR ExecType IN (\"8\",\"9\")\n| stats count, sum(quantity*price) as notional by symbol, contra_broker, trade_date\n| sort -notional",
              "m": "Correlate FIX ExecType with back-office settlement files; alert on fails above threshold or concentrated by broker.",
              "z": "Table (failed trades), Bar chart (fails by symbol), Timeline (settlement chain).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OMS/EMS, clearing TA.\n• Ensure the following data sources are available: `sourcetype=\"fix:exec\"`, `sourcetype=\"settlement:status\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate FIX ExecType with back-office settlement files; alert on fails above threshold or concentrated by broker.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading (sourcetype=\"settlement:status\" OR sourcetype=\"fix:exec\")\n| search settlement_status IN (\"FAIL\",\"PARTIAL\") OR ExecType IN (\"8\",\"9\")\n| stats count, sum(quantity*price) as notional by symbol, contra_broker, trade_date\n| sort -notional\n```\n\nUnderstanding this SPL\n\n**Trade Settlement Failure Monitoring** — Failed or partial settlements create counterparty and liquidity risk; early detection supports regulatory reporting.\n\nDocumented **Data sources**: `sourcetype=\"fix:exec\"`, `sourcetype=\"settlement:status\"`. **App/TA** (typical add-on context): OMS/EMS, clearing TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: settlement:status, fix:exec. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"settlement:status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by symbol, contra_broker, trade_date** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed trades), Bar chart (fails by symbol), Timeline (settlement chain).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"Trade Settlement Failure Monitoring\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Operations",
                "Risk"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.11",
              "n": "Market Data Feed Latency",
              "c": "critical",
              "f": "advanced",
              "v": "Stale ticks distort pricing and execution; required for exchange member obligations and best-ex policies.",
              "t": "Feed handler logs, hardware timestamping",
              "d": "`sourcetype=\"mktdata:itch\"`, `sourcetype=\"mktdata:bbo\"`",
              "q": "index=marketdata sourcetype=\"mktdata:bbo\"\n| eval latency_ms=(ingest_time - exchange_send_time)*1000\n| timechart span=1m perc95(latency_ms) as p95_lat by feed\n| where p95_lat > 5",
              "m": "Use PTP-synced clocks; alert on gap messages, sequence breaks, or latency SLO breach per venue.",
              "z": "Line chart (p95 latency by feed), Single value (max gap), Heatmap (symbol × latency).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Feed handler logs, hardware timestamping.\n• Ensure the following data sources are available: `sourcetype=\"mktdata:itch\"`, `sourcetype=\"mktdata:bbo\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse PTP-synced clocks; alert on gap messages, sequence breaks, or latency SLO breach per venue.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=marketdata sourcetype=\"mktdata:bbo\"\n| eval latency_ms=(ingest_time - exchange_send_time)*1000\n| timechart span=1m perc95(latency_ms) as p95_lat by feed\n| where p95_lat > 5\n```\n\nUnderstanding this SPL\n\n**Market Data Feed Latency** — Stale ticks distort pricing and execution; required for exchange member obligations and best-ex policies.\n\nDocumented **Data sources**: `sourcetype=\"mktdata:itch\"`, `sourcetype=\"mktdata:bbo\"`. **App/TA** (typical add-on context): Feed handler logs, hardware timestamping. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: marketdata; **sourcetype**: mktdata:bbo. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=marketdata, sourcetype=\"mktdata:bbo\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1m** buckets with a separate series **by feed** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where p95_lat > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95 latency by feed), Single value (max gap), Heatmap (symbol × latency).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"Market Data Feed Latency\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Performance"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.12",
              "n": "Order Execution Anomaly Detection",
              "c": "high",
              "f": "advanced",
              "v": "Flags unusual order patterns (layering, spoofing precursors) for market abuse surveillance.",
              "t": "OMS logs, exchange drop copies",
              "d": "`sourcetype=\"oms:order\"`",
              "q": "index=trading sourcetype=\"oms:order\"\n| transaction client_order_id maxspan=2s\n| eval cancels=mvcount(if(action=\"cancel\",1,null()))\n| eval submits=mvcount(if(action=\"new\",1,null()))\n| where submits>10 AND cancels/submits>0.8\n| table _time, symbol, client_order_id, submits, cancels",
              "m": "Tune for legitimate market making; integrate with surveillance team playbooks and MAR/SEC retention.",
              "z": "Table (suspicious clusters), Timeline (order lifecycle), Bar chart (cancel ratio by desk).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OMS logs, exchange drop copies.\n• Ensure the following data sources are available: `sourcetype=\"oms:order\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune for legitimate market making; integrate with surveillance team playbooks and MAR/SEC retention.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"oms:order\"\n| transaction client_order_id maxspan=2s\n| eval cancels=mvcount(if(action=\"cancel\",1,null()))\n| eval submits=mvcount(if(action=\"new\",1,null()))\n| where submits>10 AND cancels/submits>0.8\n| table _time, symbol, client_order_id, submits, cancels\n```\n\nUnderstanding this SPL\n\n**Order Execution Anomaly Detection** — Flags unusual order patterns (layering, spoofing precursors) for market abuse surveillance.\n\nDocumented **Data sources**: `sourcetype=\"oms:order\"`. **App/TA** (typical add-on context): OMS logs, exchange drop copies. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: oms:order. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"oms:order\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• `eval` defines or adjusts **cancels** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **submits** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where submits>10 AND cancels/submits>0.8` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Order Execution Anomaly Detection**): table _time, symbol, client_order_id, submits, cancels\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious clusters), Timeline (order lifecycle), Bar chart (cancel ratio by desk).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"Order Execution Anomaly Detection\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance",
                "Trading"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [
                "exchange"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.13",
              "n": "Algorithmic Trading Circuit Breaker",
              "c": "critical",
              "f": "advanced",
              "v": "Ensures volatility/gateway breakers trip correctly and strategies are flattened when limits hit.",
              "t": "Risk engine, exchange gateway",
              "d": "`sourcetype=\"risk:circuit\"`, `sourcetype=\"fix:session\"`",
              "q": "index=trading sourcetype=\"risk:circuit\"\n| search state IN (\"HALT\",\"THROTTLE\",\"KILL\")\n| stats latest(_time) as tripped, values(strategy_id) as strategies by gateway, state\n| join max=1 gateway [| search index=trading sourcetype=\"fix:session\" | stats latest(heartbeat_gap) as gap by gateway]\n| where gap>30 OR isnotnull(tripped)",
              "m": "Log all manual overrides; post-mortem each halt with P&L and message counts.",
              "z": "Timeline (breaker events), Table (gateway state), Single value (strategies disabled).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Risk engine, exchange gateway.\n• Ensure the following data sources are available: `sourcetype=\"risk:circuit\"`, `sourcetype=\"fix:session\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog all manual overrides; post-mortem each halt with P&L and message counts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"risk:circuit\"\n| search state IN (\"HALT\",\"THROTTLE\",\"KILL\")\n| stats latest(_time) as tripped, values(strategy_id) as strategies by gateway, state\n| join max=1 gateway [| search index=trading sourcetype=\"fix:session\" | stats latest(heartbeat_gap) as gap by gateway]\n| where gap>30 OR isnotnull(tripped)\n```\n\nUnderstanding this SPL\n\n**Algorithmic Trading Circuit Breaker** — Ensures volatility/gateway breakers trip correctly and strategies are flattened when limits hit.\n\nDocumented **Data sources**: `sourcetype=\"risk:circuit\"`, `sourcetype=\"fix:session\"`. **App/TA** (typical add-on context): Risk engine, exchange gateway. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: risk:circuit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"risk:circuit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by gateway, state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where gap>30 OR isnotnull(tripped)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (breaker events), Table (gateway state), Single value (strategies disabled).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"Algorithmic Trading Circuit Breaker\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Risk",
                "Availability"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [
                "exchange"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.14",
              "n": "FIX Protocol Session Health",
              "c": "critical",
              "f": "intermediate",
              "v": "FIX session drops or sequence gaps stop trading and breach venue SLAs.",
              "t": "QuickFIX / vendor bridge logs",
              "d": "`sourcetype=\"fix:session\"`",
              "q": "index=trading sourcetype=\"fix:session\"\n| stats latest(MsgSeqNum) as last_in, latest(ExpectedSeqNum) as expected by SenderCompID, TargetCompID\n| eval seq_gap=abs(last_in-expected)\n| where seq_gap>0 OR LogonStatus!=\"LOGGED_ON\"\n| table SenderCompID, TargetCompID, LogonStatus, seq_gap",
              "m": "Alert on Logout, ResendRequest storms, or heartbeat miss (MsgType=0). Track test vs. prod sessions separately.",
              "z": "Status grid (sessions), Line chart (heartbeat RTT), Table (sequence gaps).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: QuickFIX / vendor bridge logs.\n• Ensure the following data sources are available: `sourcetype=\"fix:session\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on Logout, ResendRequest storms, or heartbeat miss (MsgType=0). Track test vs. prod sessions separately.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"fix:session\"\n| stats latest(MsgSeqNum) as last_in, latest(ExpectedSeqNum) as expected by SenderCompID, TargetCompID\n| eval seq_gap=abs(last_in-expected)\n| where seq_gap>0 OR LogonStatus!=\"LOGGED_ON\"\n| table SenderCompID, TargetCompID, LogonStatus, seq_gap\n```\n\nUnderstanding this SPL\n\n**FIX Protocol Session Health** — FIX session drops or sequence gaps stop trading and breach venue SLAs.\n\nDocumented **Data sources**: `sourcetype=\"fix:session\"`. **App/TA** (typical add-on context): QuickFIX / vendor bridge logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: fix:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"fix:session\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by SenderCompID, TargetCompID** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **seq_gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where seq_gap>0 OR LogonStatus!=\"LOGGED_ON\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **FIX Protocol Session Health**): table SenderCompID, TargetCompID, LogonStatus, seq_gap\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (sessions), Line chart (heartbeat RTT), Table (sequence gaps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"FIX Protocol Session Health\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Availability"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.15",
              "n": "PCI Scope Validation",
              "c": "high",
              "f": "intermediate",
              "v": "Confirms only in-scope networks and flows touch CHD; catches scope creep from misrouted traffic.",
              "t": "NetFlow, firewall, CMDB",
              "d": "`sourcetype=\"pan:traffic\"`, `sourcetype=\"flow:netflow\"`",
              "q": "index=network (sourcetype=\"pan:traffic\" OR sourcetype=\"flow:netflow\")\n| lookup pci_in_scope_subnets subnet OUTPUT scope_zone as in_pci\n| where isnull(in_pci) AND (dest_port=443 OR dest_port=8080) AND match(dest,\"^10\\.\")\n| stats count by src, dest, dest_port, app\n| sort -count",
              "m": "Maintain authoritative CDE subnet lookup; alert on new flows into CHD from non-scoped zones; quarterly firewall rule review.",
              "z": "Sankey (zone flows), Table (out-of-scope hits), Map (external sources).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NetFlow, firewall, CMDB.\n• Ensure the following data sources are available: `sourcetype=\"pan:traffic\"`, `sourcetype=\"flow:netflow\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain authoritative CDE subnet lookup; alert on new flows into CHD from non-scoped zones; quarterly firewall rule review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"pan:traffic\" OR sourcetype=\"flow:netflow\")\n| lookup pci_in_scope_subnets subnet OUTPUT scope_zone as in_pci\n| where isnull(in_pci) AND (dest_port=443 OR dest_port=8080) AND match(dest,\"^10\\.\")\n| stats count by src, dest, dest_port, app\n| sort -count\n```\n\nUnderstanding this SPL\n\n**PCI Scope Validation** — Confirms only in-scope networks and flows touch CHD; catches scope creep from misrouted traffic.\n\nDocumented **Data sources**: `sourcetype=\"pan:traffic\"`, `sourcetype=\"flow:netflow\"`. **App/TA** (typical add-on context): NetFlow, firewall, CMDB. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic, flow:netflow. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:traffic\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(in_pci) AND (dest_port=443 OR dest_port=8080) AND match(dest,\"^10\\.\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey (zone flows), Table (out-of-scope hits), Map (external sources).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Financial Services, we pay close attention to \"PCI Scope Validation\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "a": [
                "N/A"
              ],
              "e": [
                "netflow"
              ],
              "em": [
                "netflow_netflow"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.5.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS unknown is enforced — Splunk UC-10.12.15: PCI Scope Validation.",
                  "ea": "Saved search 'UC-10.12.15' running on sourcetype=\"pan:traffic\", sourcetype=\"flow:netflow\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.16",
              "n": "ePHI Access Audit",
              "c": "critical",
              "f": "beginner",
              "v": "HIPAA Security Rule requires accounting of disclosures; supports minimum necessary and investigation workflows.",
              "t": "EHR audit TA, Active Directory",
              "d": "`sourcetype=\"epic:audit\"`, `sourcetype=\"cerner:audit\"`",
              "q": "index=healthcare sourcetype IN (\"epic:audit\",\"cerner:audit\") object_type=\"chart\"\n| search action IN (\"VIEW\",\"EXPORT\",\"PRINT\")\n| stats count, dc(patient_mrn) as patients by user_id, department\n| where patients>50 OR count>200\n| sort -patients",
              "m": "De-identify MRNs in dashboards where possible; alert on bulk export and after-hours access to VIP records.",
              "z": "Table (high-volume accessors), Heatmap (user × department), Timeline (access bursts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EHR audit TA, Active Directory.\n• Ensure the following data sources are available: `sourcetype=\"epic:audit\"`, `sourcetype=\"cerner:audit\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDe-identify MRNs in dashboards where possible; alert on bulk export and after-hours access to VIP records.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype IN (\"epic:audit\",\"cerner:audit\") object_type=\"chart\"\n| search action IN (\"VIEW\",\"EXPORT\",\"PRINT\")\n| stats count, dc(patient_mrn) as patients by user_id, department\n| where patients>50 OR count>200\n| sort -patients\n```\n\nUnderstanding this SPL\n\n**ePHI Access Audit** — HIPAA Security Rule requires accounting of disclosures; supports minimum necessary and investigation workflows.\n\nDocumented **Data sources**: `sourcetype=\"epic:audit\"`, `sourcetype=\"cerner:audit\"`. **App/TA** (typical add-on context): EHR audit TA, Active Directory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user_id, department** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where patients>50 OR count>200` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (high-volume accessors), Heatmap (user × department), Timeline (access bursts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"ePHI Access Audit\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA",
                  "v": "2013-final",
                  "cl": "§164.312(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA unknown is enforced — Splunk UC-10.12.16: ePHI Access Audit.",
                  "ea": "Saved search 'UC-10.12.16' running on sourcetype=\"epic:audit\", sourcetype=\"cerner:audit\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.17",
              "n": "HIPAA Transmission Security",
              "c": "high",
              "f": "intermediate",
              "v": "Detects cleartext PHI on protocols or TLS versions below policy (e.g., TLS 1.0).",
              "t": "Load balancer, TLS inspection (where lawful)",
              "d": "`sourcetype=\"lb:access\"`, `sourcetype=\"zeek:ssl\"`",
              "q": "index=network sourcetype=\"zeek:ssl\" (server_name=\"*fhir*\" OR server_name=\"*ccd*\")\n| where version IN (\"TLSv1\",\"TLSv1.1\") OR match(cipher, \"RC4|DES\")\n| stats count by src, dest, version, cipher\n| sort -count",
              "m": "Block weak ciphers at edge; exception process for legacy devices with compensating controls.",
              "z": "Table (weak sessions), Pie chart (TLS version mix), Trend (remediation progress).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Load balancer, TLS inspection (where lawful).\n• Ensure the following data sources are available: `sourcetype=\"lb:access\"`, `sourcetype=\"zeek:ssl\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBlock weak ciphers at edge; exception process for legacy devices with compensating controls.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"zeek:ssl\" (server_name=\"*fhir*\" OR server_name=\"*ccd*\")\n| where version IN (\"TLSv1\",\"TLSv1.1\") OR match(cipher, \"RC4|DES\")\n| stats count by src, dest, version, cipher\n| sort -count\n```\n\nUnderstanding this SPL\n\n**HIPAA Transmission Security** — Detects cleartext PHI on protocols or TLS versions below policy (e.g., TLS 1.0).\n\nDocumented **Data sources**: `sourcetype=\"lb:access\"`, `sourcetype=\"zeek:ssl\"`. **App/TA** (typical add-on context): Load balancer, TLS inspection (where lawful). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: zeek:ssl. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"zeek:ssl\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where version IN (\"TLSv1\",\"TLSv1.1\") OR match(cipher, \"RC4|DES\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, version, cipher** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (weak sessions), Pie chart (TLS version mix), Trend (remediation progress).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"HIPAA Transmission Security\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA",
                  "v": "2013-final",
                  "cl": "§164.312(e)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA unknown is enforced — Splunk UC-10.12.17: HIPAA Transmission Security.",
                  "ea": "Saved search 'UC-10.12.17' running on sourcetype=\"lb:access\", sourcetype=\"zeek:ssl\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.18",
              "n": "Audit Log Retention Compliance",
              "c": "medium",
              "f": "beginner",
              "v": "Verifies clinical and security logs meet retention policy and are not silently dropped.",
              "t": "Splunk internal, `_audit` index",
              "d": "`index=_internal`, `index=_audit`",
              "q": "index=_internal source=*metrics.log* group=per_index_thruput series=healthcare_epic\n| timechart span=1d sum(kb) as volume\n| appendcols [ search index=_audit action=delete NOT user=system | timechart span=1d count as deletes ]",
              "m": "Legal hold overrides; document cold storage for long retention; alert on forwarder silence > 1h for PHI indexes.",
              "z": "Line chart (ingest volume), Table (delete operations), Single value (days of coverage).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk internal, `_audit` index.\n• Ensure the following data sources are available: `index=_internal`, `index=_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLegal hold overrides; document cold storage for long retention; alert on forwarder silence > 1h for PHI indexes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal source=*metrics.log* group=per_index_thruput series=healthcare_epic\n| timechart span=1d sum(kb) as volume\n| appendcols [ search index=_audit action=delete NOT user=system | timechart span=1d count as deletes ]\n```\n\nUnderstanding this SPL\n\n**Audit Log Retention Compliance** — Verifies clinical and security logs meet retention policy and are not silently dropped.\n\nDocumented **Data sources**: `index=_internal`, `index=_audit`. **App/TA** (typical add-on context): Splunk internal, `_audit` index. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• Adds columns from a subsearch with `appendcols`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (ingest volume), Table (delete operations), Single value (days of coverage).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"Audit Log Retention Compliance\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.19",
              "n": "Minimum Necessary Access Validation",
              "c": "high",
              "f": "advanced",
              "v": "Identifies users with broader EHR rights than their role requires (privilege creep).",
              "t": "IAM, EHR role extracts",
              "d": "`sourcetype=\"epic:security\"`, HR feed",
              "q": "index=healthcare sourcetype=\"epic:security\"\n| lookup hr_job_role user_id OUTPUT job_family expected_epic_role\n| where epic_role!=expected_epic_role AND NOT match(epic_role,\"(train|breakglass)\")\n| stats values(epic_role) as roles by user_id, job_family",
              "m": "Quarterly recertification; break-glass accounts heavily monitored; integrate with provisioning.",
              "z": "Table (misaligned users), Bar chart (excess roles by department), Funnel (remediation).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IAM, EHR role extracts.\n• Ensure the following data sources are available: `sourcetype=\"epic:security\"`, HR feed.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuarterly recertification; break-glass accounts heavily monitored; integrate with provisioning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"epic:security\"\n| lookup hr_job_role user_id OUTPUT job_family expected_epic_role\n| where epic_role!=expected_epic_role AND NOT match(epic_role,\"(train|breakglass)\")\n| stats values(epic_role) as roles by user_id, job_family\n```\n\nUnderstanding this SPL\n\n**Minimum Necessary Access Validation** — Identifies users with broader EHR rights than their role requires (privilege creep).\n\nDocumented **Data sources**: `sourcetype=\"epic:security\"`, HR feed. **App/TA** (typical add-on context): IAM, EHR role extracts. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: epic:security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"epic:security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where epic_role!=expected_epic_role AND NOT match(epic_role,\"(train|breakglass)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user_id, job_family** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (misaligned users), Bar chart (excess roles by department), Funnel (remediation).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"Minimum Necessary Access Validation\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.20",
              "n": "Breach Notification Readiness Assessment",
              "c": "critical",
              "f": "intermediate",
              "v": "Measures ability to identify affected individuals within HIPAA breach notification timelines.",
              "t": "DLP, EHR, email gateway",
              "d": "`sourcetype=\"dlp:phi\"`, `sourcetype=\"o365:mail\"`",
              "q": "index=security (sourcetype=\"dlp:phi\" OR (sourcetype=\"o365:mail\" severity=\"high\"))\n| search match(body,\"(?i)(ssn|mrn|diagnosis)\")\n| stats earliest(_time) as first_seen, latest(_time) as last_seen, dc(recipient) as recipients by incident_id\n| eval dwell_hours=(last_seen-first_seen)/3600\n| where dwell_hours>24",
              "m": "Run tabletop data queries; pre-build patient cohort lists from message IDs and chart exports.",
              "z": "Timeline (incident dwell), Table (open PHI incidents), Single value (mean time to contain).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DLP, EHR, email gateway.\n• Ensure the following data sources are available: `sourcetype=\"dlp:phi\"`, `sourcetype=\"o365:mail\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun tabletop data queries; pre-build patient cohort lists from message IDs and chart exports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security (sourcetype=\"dlp:phi\" OR (sourcetype=\"o365:mail\" severity=\"high\"))\n| search match(body,\"(?i)(ssn|mrn|diagnosis)\")\n| stats earliest(_time) as first_seen, latest(_time) as last_seen, dc(recipient) as recipients by incident_id\n| eval dwell_hours=(last_seen-first_seen)/3600\n| where dwell_hours>24\n```\n\nUnderstanding this SPL\n\n**Breach Notification Readiness Assessment** — Measures ability to identify affected individuals within HIPAA breach notification timelines.\n\nDocumented **Data sources**: `sourcetype=\"dlp:phi\"`, `sourcetype=\"o365:mail\"`. **App/TA** (typical add-on context): DLP, EHR, email gateway. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: dlp:phi, o365:mail. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"dlp:phi\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by incident_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dwell_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where dwell_hours>24` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (incident dwell), Table (open PHI incidents), Single value (mean time to contain).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"Breach Notification Readiness Assessment\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.21",
              "n": "BAA Compliance Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Ensures business associates' systems sending PHI are authorized and logged per contract.",
              "t": "Cloud app security, firewall",
              "d": "`sourcetype=\"casb:activity\"`, `sourcetype=\"pan:traffic\"`",
              "q": "index=cloud sourcetype=\"casb:activity\" tag=phi\n| lookup baa_vendors vendor OUTPUT baa_signed subprocessor_ok\n| where isnull(baa_signed) OR baa_signed=\"false\"\n| stats count by vendor, app, user\n| sort -count",
              "m": "Maintain vendor inventory with BAA dates; block or proxy unsanctioned apps with PHI.",
              "z": "Table (non-BAA usage), Bar chart (vendors), Status (BAA coverage %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud app security, firewall.\n• Ensure the following data sources are available: `sourcetype=\"casb:activity\"`, `sourcetype=\"pan:traffic\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain vendor inventory with BAA dates; block or proxy unsanctioned apps with PHI.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"casb:activity\" tag=phi\n| lookup baa_vendors vendor OUTPUT baa_signed subprocessor_ok\n| where isnull(baa_signed) OR baa_signed=\"false\"\n| stats count by vendor, app, user\n| sort -count\n```\n\nUnderstanding this SPL\n\n**BAA Compliance Monitoring** — Ensures business associates' systems sending PHI are authorized and logged per contract.\n\nDocumented **Data sources**: `sourcetype=\"casb:activity\"`, `sourcetype=\"pan:traffic\"`. **App/TA** (typical add-on context): Cloud app security, firewall. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: casb:activity. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"casb:activity\", tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(baa_signed) OR baa_signed=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by vendor, app, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-BAA usage), Bar chart (vendors), Status (BAA coverage %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"BAA Compliance Monitoring\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.22",
              "n": "Infusion Pump Connectivity Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Pump disconnects can delay therapy; supports Joint Commission device management expectations.",
              "t": "Medical device integration (MDI), RTLS",
              "d": "`sourcetype=\"mdi:pump\"`, `sourcetype=\"hl7:mdt\"`",
              "q": "index=clinical sourcetype=\"mdi:pump\"\n| stats latest(_time) as last_seen, latest(alarm_state) as alarm by pump_id, unit\n| eval gap_sec=now()-last_seen\n| where gap_sec>300 OR alarm=\"AIR_IN_LINE\"\n| table pump_id, unit, gap_sec, alarm",
              "m": "Integrate with CMMS for maintenance; alert biomed on extended silence from critical care beds.",
              "z": "Status map (units), Table (stale pumps), Timeline (alarms).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Medical device integration (MDI), RTLS.\n• Ensure the following data sources are available: `sourcetype=\"mdi:pump\"`, `sourcetype=\"hl7:mdt\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate with CMMS for maintenance; alert biomed on extended silence from critical care beds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=clinical sourcetype=\"mdi:pump\"\n| stats latest(_time) as last_seen, latest(alarm_state) as alarm by pump_id, unit\n| eval gap_sec=now()-last_seen\n| where gap_sec>300 OR alarm=\"AIR_IN_LINE\"\n| table pump_id, unit, gap_sec, alarm\n```\n\nUnderstanding this SPL\n\n**Infusion Pump Connectivity Monitoring** — Pump disconnects can delay therapy; supports Joint Commission device management expectations.\n\nDocumented **Data sources**: `sourcetype=\"mdi:pump\"`, `sourcetype=\"hl7:mdt\"`. **App/TA** (typical add-on context): Medical device integration (MDI), RTLS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: clinical; **sourcetype**: mdi:pump. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=clinical, sourcetype=\"mdi:pump\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by pump_id, unit** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gap_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap_sec>300 OR alarm=\"AIR_IN_LINE\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Infusion Pump Connectivity Monitoring**): table pump_id, unit, gap_sec, alarm\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status map (units), Table (stale pumps), Timeline (alarms).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"Infusion Pump Connectivity Monitoring\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Availability",
                "Patient Safety"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.23",
              "n": "Medical Device Network Segmentation",
              "c": "critical",
              "f": "advanced",
              "v": "Detects east-west traffic from device VLANs to corporate or internet against segmentation policy.",
              "t": "Firewall, NAC",
              "d": "`sourcetype=\"pan:traffic\"`, `sourcetype=\"ise:radius\"`",
              "q": "index=network sourcetype=\"pan:traffic\" src_zone=\"MED_DEVICE\"\n| lookup allowed_med_flows src dest_zone OUTPUT approved\n| where approved!=\"true\" AND dest_zone!=\"PACS\"\n| stats count by src, dest, dest_port, app\n| sort -count",
              "m": "Document clinical exceptions (vendor VPN); use dynamic lists for imaging modalities.",
              "z": "Sankey (zone traffic), Table (violations), Single value (unique offending devices).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Firewall, NAC.\n• Ensure the following data sources are available: `sourcetype=\"pan:traffic\"`, `sourcetype=\"ise:radius\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDocument clinical exceptions (vendor VPN); use dynamic lists for imaging modalities.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:traffic\" src_zone=\"MED_DEVICE\"\n| lookup allowed_med_flows src dest_zone OUTPUT approved\n| where approved!=\"true\" AND dest_zone!=\"PACS\"\n| stats count by src, dest, dest_port, app\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Medical Device Network Segmentation** — Detects east-west traffic from device VLANs to corporate or internet against segmentation policy.\n\nDocumented **Data sources**: `sourcetype=\"pan:traffic\"`, `sourcetype=\"ise:radius\"`. **App/TA** (typical add-on context): Firewall, NAC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:traffic\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where approved!=\"true\" AND dest_zone!=\"PACS\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey (zone traffic), Table (violations), Single value (unique offending devices).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"Medical Device Network Segmentation\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Security"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.24",
              "n": "HL7 Message Delivery Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "ADT/ORM gaps break registration, billing, and care coordination workflows.",
              "t": "Interface engine (Mirth, Cloverleaf)",
              "d": "`sourcetype=\"hl7:v2\"`",
              "q": "index=interfaces sourcetype=\"hl7:v2\"\n| eval ack_ok=if(ACK_Code=\"AA\",1,0)\n| timechart span=15m sum(ack_ok) as ok, count as total\n| eval ack_rate=round(ok/total*100,2)\n| where ack_rate < 99",
              "m": "Track message control ID retries; alert on NAK storms or queue depth from engine metrics.",
              "z": "Line chart (ACK rate), Table (top failing interfaces), Single value (queue depth).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Interface engine (Mirth, Cloverleaf).\n• Ensure the following data sources are available: `sourcetype=\"hl7:v2\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack message control ID retries; alert on NAK storms or queue depth from engine metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=interfaces sourcetype=\"hl7:v2\"\n| eval ack_ok=if(ACK_Code=\"AA\",1,0)\n| timechart span=15m sum(ack_ok) as ok, count as total\n| eval ack_rate=round(ok/total*100,2)\n| where ack_rate < 99\n```\n\nUnderstanding this SPL\n\n**HL7 Message Delivery Monitoring** — ADT/ORM gaps break registration, billing, and care coordination workflows.\n\nDocumented **Data sources**: `sourcetype=\"hl7:v2\"`. **App/TA** (typical add-on context): Interface engine (Mirth, Cloverleaf). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: interfaces; **sourcetype**: hl7:v2. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=interfaces, sourcetype=\"hl7:v2\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ack_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=15m** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **ack_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ack_rate < 99` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (ACK rate), Table (top failing interfaces), Single value (queue depth).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"HL7 Message Delivery Monitoring\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Operations"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.25",
              "n": "FHIR API Error Rate",
              "c": "high",
              "f": "beginner",
              "v": "High 4xx/5xx on FHIR hurts patient apps and partner integrations (ONC interoperability).",
              "t": "API gateway, EHR FHIR logs",
              "d": "`sourcetype=\"apigee:fhir\"`, `sourcetype=\"hapi:fhir\"`",
              "q": "index=healthcare sourcetype IN (\"apigee:fhir\",\"hapi:fhir\")\n| eval err=if(status>=400,1,0)\n| timechart span=5m sum(err) as errors, count as reqs\n| eval err_pct=round(errors/reqs*100,2)\n| where err_pct > 2",
              "m": "Break out by client_id and resource; rate-limit abusive partners; SLO dashboards for Patient/$everything bulk exports.",
              "z": "Line chart (error %), Bar chart (errors by route), Table (top clients).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: API gateway, EHR FHIR logs.\n• Ensure the following data sources are available: `sourcetype=\"apigee:fhir\"`, `sourcetype=\"hapi:fhir\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBreak out by client_id and resource; rate-limit abusive partners; SLO dashboards for Patient/$everything bulk exports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype IN (\"apigee:fhir\",\"hapi:fhir\")\n| eval err=if(status>=400,1,0)\n| timechart span=5m sum(err) as errors, count as reqs\n| eval err_pct=round(errors/reqs*100,2)\n| where err_pct > 2\n```\n\nUnderstanding this SPL\n\n**FHIR API Error Rate** — High 4xx/5xx on FHIR hurts patient apps and partner integrations (ONC interoperability).\n\nDocumented **Data sources**: `sourcetype=\"apigee:fhir\"`, `sourcetype=\"hapi:fhir\"`. **App/TA** (typical add-on context): API gateway, EHR FHIR logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **err** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **err_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_pct > 2` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (error %), Bar chart (errors by route), Table (top clients).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"FHIR API Error Rate\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.26",
              "n": "EHR Login Anomaly Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Unusual geography, time, or volume of EHR logins may indicate credential theft or insider snooping.",
              "t": "EHR SSO logs, IdP",
              "d": "`sourcetype=\"epic:login\"`, `sourcetype=\"okta:systemlog\"`",
              "q": "index=healthcare sourcetype=\"epic:login\" outcome=\"success\"\n| iplocation src\n| stats dc(Country) as countries, count by user_id\n| where countries>2 OR count>100\n| sort -count",
              "m": "Step-up auth from new countries; peer-group baseline per department; VIP patient list access rules.",
              "z": "Map (logins), Table (risky users), Timeline (session starts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EHR SSO logs, IdP.\n• Ensure the following data sources are available: `sourcetype=\"epic:login\"`, `sourcetype=\"okta:systemlog\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStep-up auth from new countries; peer-group baseline per department; VIP patient list access rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"epic:login\" outcome=\"success\"\n| iplocation src\n| stats dc(Country) as countries, count by user_id\n| where countries>2 OR count>100\n| sort -count\n```\n\nUnderstanding this SPL\n\n**EHR Login Anomaly Detection** — Unusual geography, time, or volume of EHR logins may indicate credential theft or insider snooping.\n\nDocumented **Data sources**: `sourcetype=\"epic:login\"`, `sourcetype=\"okta:systemlog\"`. **App/TA** (typical add-on context): EHR SSO logs, IdP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: epic:login. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"epic:login\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **EHR Login Anomaly Detection**): iplocation src\n• `stats` rolls up events into metrics; results are split **by user_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where countries>2 OR count>100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (logins), Table (risky users), Timeline (session starts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"EHR Login Anomaly Detection\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Security"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.27",
              "n": "PACS Image Transfer Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Failed C-STORE or Q/R delays impact radiology reads and surgical planning.",
              "t": "DICOM router / PACS logs",
              "d": "`sourcetype=\"dicom:dimse\"`",
              "q": "index=imaging sourcetype=\"dicom:dimse\" sop_class=\"1.2.840.10008.5.1.4.1.1.2\"\n| where status!=0 OR move_status!=\"SUCCESS\"\n| stats count, values(study_uid) as studies by calling_ae, called_ae\n| sort -count",
              "m": "Monitor association release and reject reasons; correlate with network MTU issues for large MR series.",
              "z": "Table (failing AE pairs), Line chart (failures over time), Bar chart (modalities).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DICOM router / PACS logs.\n• Ensure the following data sources are available: `sourcetype=\"dicom:dimse\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor association release and reject reasons; correlate with network MTU issues for large MR series.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=imaging sourcetype=\"dicom:dimse\" sop_class=\"1.2.840.10008.5.1.4.1.1.2\"\n| where status!=0 OR move_status!=\"SUCCESS\"\n| stats count, values(study_uid) as studies by calling_ae, called_ae\n| sort -count\n```\n\nUnderstanding this SPL\n\n**PACS Image Transfer Failures** — Failed C-STORE or Q/R delays impact radiology reads and surgical planning.\n\nDocumented **Data sources**: `sourcetype=\"dicom:dimse\"`. **App/TA** (typical add-on context): DICOM router / PACS logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: imaging; **sourcetype**: dicom:dimse. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=imaging, sourcetype=\"dicom:dimse\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status!=0 OR move_status!=\"SUCCESS\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by calling_ae, called_ae** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failing AE pairs), Line chart (failures over time), Bar chart (modalities).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"PACS Image Transfer Failures\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Operations"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.28",
              "n": "Pharmacy Dispensing Audit",
              "c": "high",
              "f": "beginner",
              "v": "Tracks controlled substance dispensing overrides and after-hours fills for diversion programs.",
              "t": "Pharmacy system (Pyxis, Epic Willow)",
              "d": "`sourcetype=\"pharmacy:dispense\"`",
              "q": "index=clinical sourcetype=\"pharmacy:dispense\" dea_schedule IN (\"2\",\"3\",\"4\")\n| search override_flag=\"true\" OR fill_hour<6 OR fill_hour>22\n| stats count, sum(quantity) as units by drug_name, pharmacist_id, patient_hash\n| sort -units",
              "m": "Align with state PDMP reporting; peer review for high override rates.",
              "z": "Table (overrides), Bar chart (by drug), Heatmap (pharmacist × shift).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Pharmacy system (Pyxis, Epic Willow).\n• Ensure the following data sources are available: `sourcetype=\"pharmacy:dispense\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign with state PDMP reporting; peer review for high override rates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=clinical sourcetype=\"pharmacy:dispense\" dea_schedule IN (\"2\",\"3\",\"4\")\n| search override_flag=\"true\" OR fill_hour<6 OR fill_hour>22\n| stats count, sum(quantity) as units by drug_name, pharmacist_id, patient_hash\n| sort -units\n```\n\nUnderstanding this SPL\n\n**Pharmacy Dispensing Audit** — Tracks controlled substance dispensing overrides and after-hours fills for diversion programs.\n\nDocumented **Data sources**: `sourcetype=\"pharmacy:dispense\"`. **App/TA** (typical add-on context): Pharmacy system (Pyxis, Epic Willow). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: clinical; **sourcetype**: pharmacy:dispense. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=clinical, sourcetype=\"pharmacy:dispense\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by drug_name, pharmacist_id, patient_hash** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overrides), Bar chart (by drug), Heatmap (pharmacist × shift).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"Pharmacy Dispensing Audit\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.29",
              "n": "Clinical Alert Fatigue Analysis",
              "c": "medium",
              "f": "advanced",
              "v": "Quantifies alert acknowledgement times and duplicate fires to guide CDS tuning and Joint Commission alarm safety.",
              "t": "EHR alerting, middleware",
              "d": "`sourcetype=\"epic:alert\"`",
              "q": "index=clinical sourcetype=\"epic:alert\"\n| eval ack_sec=ack_time-_time\n| stats median(ack_sec) as med_ack, count as fires, dc(alert_id) as rules by unit\n| eval fires_per_bed=fires/bed_count\n| sort -fires",
              "m": "Deprecate low-value rules; correlate with nurse staffing ratios from census feeds.",
              "z": "Bar chart (fires per bed), Box plot (ack times), Table (top noisy rules).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EHR alerting, middleware.\n• Ensure the following data sources are available: `sourcetype=\"epic:alert\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeprecate low-value rules; correlate with nurse staffing ratios from census feeds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=clinical sourcetype=\"epic:alert\"\n| eval ack_sec=ack_time-_time\n| stats median(ack_sec) as med_ack, count as fires, dc(alert_id) as rules by unit\n| eval fires_per_bed=fires/bed_count\n| sort -fires\n```\n\nUnderstanding this SPL\n\n**Clinical Alert Fatigue Analysis** — Quantifies alert acknowledgement times and duplicate fires to guide CDS tuning and Joint Commission alarm safety.\n\nDocumented **Data sources**: `sourcetype=\"epic:alert\"`. **App/TA** (typical add-on context): EHR alerting, middleware. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: clinical; **sourcetype**: epic:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=clinical, sourcetype=\"epic:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ack_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by unit** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fires_per_bed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (fires per bed), Box plot (ack times), Table (top noisy rules).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"Clinical Alert Fatigue Analysis\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Operations"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.30",
              "n": "ADT Message Routing Failures",
              "c": "high",
              "f": "intermediate",
              "v": "ADT misroutes leave wrong census in downstream systems (lab, HIE), affecting care and billing.",
              "t": "Interface engine",
              "d": "`sourcetype=\"hl7:v2\"`, `msg_type=\"ADT\"`",
              "q": "index=interfaces sourcetype=\"hl7:v2\" msg_type=\"ADT*\"\n| search routing_status=\"FAIL\" OR ErrorCode!=\"0\"\n| stats count, values(pid) as patients by sending_facility, receiving_system\n| sort -count",
              "m": "Reconciliation job for ADT A03 without matching A01; monitor duplicate PV1 segments.",
              "z": "Table (failed routes), Sankey (facility → system), Line chart (failure trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Interface engine.\n• Ensure the following data sources are available: `sourcetype=\"hl7:v2\"`, `msg_type=\"ADT\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nReconciliation job for ADT A03 without matching A01; monitor duplicate PV1 segments.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=interfaces sourcetype=\"hl7:v2\" msg_type=\"ADT*\"\n| search routing_status=\"FAIL\" OR ErrorCode!=\"0\"\n| stats count, values(pid) as patients by sending_facility, receiving_system\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ADT Message Routing Failures** — ADT misroutes leave wrong census in downstream systems (lab, HIE), affecting care and billing.\n\nDocumented **Data sources**: `sourcetype=\"hl7:v2\"`, `msg_type=\"ADT\"`. **App/TA** (typical add-on context): Interface engine. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: interfaces; **sourcetype**: hl7:v2. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=interfaces, sourcetype=\"hl7:v2\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by sending_facility, receiving_system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed routes), Sankey (facility → system), Line chart (failure trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Healthcare, we pay close attention to \"ADT Message Routing Failures\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Operations"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.31",
              "n": "POS Transaction Anomaly Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Spikes in voids, returns, or manager overrides may indicate insider theft or terminal compromise.",
              "t": "POS gateway, store systems",
              "d": "`sourcetype=\"pos:txn\"`",
              "q": "index=retail sourcetype=\"pos:txn\"\n| eval is_void=if(txn_type IN (\"VOID\",\"RETURN\"),1,0)\n| stats sum(is_void) as voids, count as txns, sum(amount) as sales by store_id, terminal_id\n| eval void_rate=voids/txns\n| where void_rate>0.15 OR voids>30\n| sort -void_rate",
              "m": "Compare peer stores; video correlate with exception transactions; PCI-safe tokenization at ingest.",
              "z": "Table (hot terminals), Map (stores), Line chart (void rate trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: POS gateway, store systems.\n• Ensure the following data sources are available: `sourcetype=\"pos:txn\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare peer stores; video correlate with exception transactions; PCI-safe tokenization at ingest.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"pos:txn\"\n| eval is_void=if(txn_type IN (\"VOID\",\"RETURN\"),1,0)\n| stats sum(is_void) as voids, count as txns, sum(amount) as sales by store_id, terminal_id\n| eval void_rate=voids/txns\n| where void_rate>0.15 OR voids>30\n| sort -void_rate\n```\n\nUnderstanding this SPL\n\n**POS Transaction Anomaly Detection** — Spikes in voids, returns, or manager overrides may indicate insider theft or terminal compromise.\n\nDocumented **Data sources**: `sourcetype=\"pos:txn\"`. **App/TA** (typical add-on context): POS gateway, store systems. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: pos:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"pos:txn\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_void** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by store_id, terminal_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **void_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where void_rate>0.15 OR voids>30` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hot terminals), Map (stores), Line chart (void rate trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Retail and E-Commerce, we pay close attention to \"POS Transaction Anomaly Detection\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Fraud"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.32",
              "n": "Gift Card Fraud Patterns",
              "c": "high",
              "f": "intermediate",
              "v": "Detects bulk activation, balance draining, and reseller abuse patterns.",
              "t": "Gift card processor",
              "d": "`sourcetype=\"giftcard:event\"`",
              "q": "index=retail sourcetype=\"giftcard:event\"\n| search event IN (\"ACTIVATE\",\"REDEEM\",\"TRANSFER\")\n| stats sum(amount) as flow, dc(card_id) as cards by customer_hash, _time\n| bin _time span=1h\n| where flow>5000 AND cards>10\n| sort -flow",
              "m": "Velocity limits per device; block serial activations from single payment instrument across stores.",
              "z": "Table (suspicious bursts), Timeline (events), Bar chart (flow by hour).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Gift card processor.\n• Ensure the following data sources are available: `sourcetype=\"giftcard:event\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nVelocity limits per device; block serial activations from single payment instrument across stores.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"giftcard:event\"\n| search event IN (\"ACTIVATE\",\"REDEEM\",\"TRANSFER\")\n| stats sum(amount) as flow, dc(card_id) as cards by customer_hash, _time\n| bin _time span=1h\n| where flow>5000 AND cards>10\n| sort -flow\n```\n\nUnderstanding this SPL\n\n**Gift Card Fraud Patterns** — Detects bulk activation, balance draining, and reseller abuse patterns.\n\nDocumented **Data sources**: `sourcetype=\"giftcard:event\"`. **App/TA** (typical add-on context): Gift card processor. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: giftcard:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"giftcard:event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by customer_hash, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• Filters the current rows with `where flow>5000 AND cards>10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious bursts), Timeline (events), Bar chart (flow by hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Retail and E-Commerce, we pay close attention to \"Gift Card Fraud Patterns\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Fraud"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.33",
              "n": "E-Commerce Bot Detection",
              "c": "high",
              "f": "advanced",
              "v": "Protects inventory, loyalty points, and checkout from scalping and credential stuffing.",
              "t": "CDN/WAF, application logs",
              "d": "`sourcetype=\"nginx:access\"`, `sourcetype=\"akamai:siem\"`",
              "q": "index=web sourcetype=\"nginx:access\" uri=\"/checkout*\"\n| eval bot_score=if(match(http_user_agent,\"(?i)(bot|crawler|python)\"),1,0)\n| stats count, dc(session_id) as sess, avg(request_time_ms) as avg_rt by src\n| where count>200 AND sess<5 AND avg_rt<50\n| sort -count",
              "m": "Layer bot management (JS challenge, CAPTCHA); rate limit by ASN and fingerprint.",
              "z": "Table (suspicious IPs), Line chart (bot traffic share), Map (ASN concentration).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CDN/WAF, application logs.\n• Ensure the following data sources are available: `sourcetype=\"nginx:access\"`, `sourcetype=\"akamai:siem\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLayer bot management (JS challenge, CAPTCHA); rate limit by ASN and fingerprint.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"nginx:access\" uri=\"/checkout*\"\n| eval bot_score=if(match(http_user_agent,\"(?i)(bot|crawler|python)\"),1,0)\n| stats count, dc(session_id) as sess, avg(request_time_ms) as avg_rt by src\n| where count>200 AND sess<5 AND avg_rt<50\n| sort -count\n```\n\nUnderstanding this SPL\n\n**E-Commerce Bot Detection** — Protects inventory, loyalty points, and checkout from scalping and credential stuffing.\n\nDocumented **Data sources**: `sourcetype=\"nginx:access\"`, `sourcetype=\"akamai:siem\"`. **App/TA** (typical add-on context): CDN/WAF, application logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: nginx:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"nginx:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **bot_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>200 AND sess<5 AND avg_rt<50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious IPs), Line chart (bot traffic share), Map (ASN concentration).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Retail and E-Commerce, we pay close attention to \"E-Commerce Bot Detection\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Security"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.34",
              "n": "Payment Gateway Health",
              "c": "critical",
              "f": "beginner",
              "v": "Gateway latency and error rates directly impact cart conversion and SLA with PSPs.",
              "t": "Stripe/Adyen webhooks, gateway logs",
              "d": "`sourcetype=\"stripe:webhook\"`, `sourcetype=\"gateway:response\"`",
              "q": "index=payments sourcetype=\"gateway:response\"\n| eval ok=if(status=\"approved\",1,0)\n| timechart span=5m avg(latency_ms) as p50, perc95(latency_ms) as p95, sum(ok)/count*100 as success_pct\n| where success_pct < 95 OR p95>2000",
              "m": "Circuit-break to backup MID; correlate with PSP status pages; alert NOC on regional degradation.",
              "z": "Line chart (success % and latency), Single value (decline spike), Table (error codes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Stripe/Adyen webhooks, gateway logs.\n• Ensure the following data sources are available: `sourcetype=\"stripe:webhook\"`, `sourcetype=\"gateway:response\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCircuit-break to backup MID; correlate with PSP status pages; alert NOC on regional degradation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"gateway:response\"\n| eval ok=if(status=\"approved\",1,0)\n| timechart span=5m avg(latency_ms) as p50, perc95(latency_ms) as p95, sum(ok)/count*100 as success_pct\n| where success_pct < 95 OR p95>2000\n```\n\nUnderstanding this SPL\n\n**Payment Gateway Health** — Gateway latency and error rates directly impact cart conversion and SLA with PSPs.\n\nDocumented **Data sources**: `sourcetype=\"stripe:webhook\"`, `sourcetype=\"gateway:response\"`. **App/TA** (typical add-on context): Stripe/Adyen webhooks, gateway logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: gateway:response. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"gateway:response\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where success_pct < 95 OR p95>2000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (success % and latency), Single value (decline spike), Table (error codes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Retail and E-Commerce, we pay close attention to \"Payment Gateway Health\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Availability"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.35",
              "n": "Promotional Pricing Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Ensures discounts apply only during approved windows and price books match marketing filings.",
              "t": "ERP, e-commerce pricing engine",
              "d": "`sourcetype=\"price:audit\"`",
              "q": "index=retail sourcetype=\"price:audit\"\n| lookup promo_calendar sku OUTPUT promo_start promo_end\n| where _time < promo_start OR _time > promo_end\n| stats values(shelf_price) as prices by sku, store_id, _time\n| sort -_time",
              "m": "SOX-relevant for material promotions; feed exceptions to merchandising before financial close.",
              "z": "Table (out-of-window pricing), Calendar heatmap (promos), Bar chart (variance $).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ERP, e-commerce pricing engine.\n• Ensure the following data sources are available: `sourcetype=\"price:audit\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSOX-relevant for material promotions; feed exceptions to merchandising before financial close.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"price:audit\"\n| lookup promo_calendar sku OUTPUT promo_start promo_end\n| where _time < promo_start OR _time > promo_end\n| stats values(shelf_price) as prices by sku, store_id, _time\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Promotional Pricing Compliance** — Ensures discounts apply only during approved windows and price books match marketing filings.\n\nDocumented **Data sources**: `sourcetype=\"price:audit\"`. **App/TA** (typical add-on context): ERP, e-commerce pricing engine. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: price:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"price:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where _time < promo_start OR _time > promo_end` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by sku, store_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (out-of-window pricing), Calendar heatmap (promos), Bar chart (variance $).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Retail and E-Commerce, we pay close attention to \"Promotional Pricing Compliance\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.36",
              "n": "Inventory System Sync Failures",
              "c": "high",
              "f": "intermediate",
              "v": "OMS/IMS drift causes oversell, BOPIS failures, and poor customer experience.",
              "t": "WMS, e-commerce OMS",
              "d": "`sourcetype=\"ims:sync\"`",
              "q": "index=retail sourcetype=\"ims:sync\"\n| search status IN (\"FAIL\",\"TIMEOUT\",\"PARTIAL\")\n| stats count, latest(error_code) as err by sku, channel, dc_id\n| sort -count",
              "m": "Dead-letter queue depth alerts; reconcile counts nightly; prioritize high-velocity SKUs.",
              "z": "Table (failed SKUs), Line chart (sync lag), Single value (unresolved errors).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: WMS, e-commerce OMS.\n• Ensure the following data sources are available: `sourcetype=\"ims:sync\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDead-letter queue depth alerts; reconcile counts nightly; prioritize high-velocity SKUs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"ims:sync\"\n| search status IN (\"FAIL\",\"TIMEOUT\",\"PARTIAL\")\n| stats count, latest(error_code) as err by sku, channel, dc_id\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Inventory System Sync Failures** — OMS/IMS drift causes oversell, BOPIS failures, and poor customer experience.\n\nDocumented **Data sources**: `sourcetype=\"ims:sync\"`. **App/TA** (typical add-on context): WMS, e-commerce OMS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: ims:sync. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"ims:sync\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by sku, channel, dc_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed SKUs), Line chart (sync lag), Single value (unresolved errors).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Retail and E-Commerce, we pay close attention to \"Inventory System Sync Failures\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Operations"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.37",
              "n": "Omnichannel Order Routing Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Failed ship-from-store or DC allocation breaks promise dates and loyalty SLAs.",
              "t": "OMS, DOM",
              "d": "`sourcetype=\"oms:route\"`",
              "q": "index=retail sourcetype=\"oms:route\"\n| where route_status=\"FAIL\" OR promise_date=0\n| stats count, values(order_id) as orders by failure_reason, orig_store\n| sort -count",
              "m": "Fallback routing rules; alert when failure rate exceeds 2% in any region for 15m.",
              "z": "Bar chart (reasons), Table (stores with failures), Map (regional heat).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OMS, DOM.\n• Ensure the following data sources are available: `sourcetype=\"oms:route\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFallback routing rules; alert when failure rate exceeds 2% in any region for 15m.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"oms:route\"\n| where route_status=\"FAIL\" OR promise_date=0\n| stats count, values(order_id) as orders by failure_reason, orig_store\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Omnichannel Order Routing Failures** — Failed ship-from-store or DC allocation breaks promise dates and loyalty SLAs.\n\nDocumented **Data sources**: `sourcetype=\"oms:route\"`. **App/TA** (typical add-on context): OMS, DOM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: oms:route. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"oms:route\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where route_status=\"FAIL\" OR promise_date=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by failure_reason, orig_store** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (reasons), Table (stores with failures), Map (regional heat).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Retail and E-Commerce, we pay close attention to \"Omnichannel Order Routing Failures\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Operations"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.38",
              "n": "Cart Abandonment Correlation",
              "c": "medium",
              "f": "advanced",
              "v": "Links payment errors, latency, and coupon issues to abandonment for revenue recovery.",
              "t": "Web analytics, app logs",
              "d": "`sourcetype=\"segment:track\"`, `sourcetype=\"app:checkout\"`",
              "q": "index=web sourcetype=\"segment:track\"\n| transaction session_id maxspan=30m\n| eval has_checkout=if(match(mvjoin(event,\"|\"),\"Checkout\"),1,0)\n| eval has_order=if(match(mvjoin(event,\"|\"),\"Order Completed\"),1,0)\n| where has_checkout=1 AND has_order=0\n| stats count by checkout_error\n| sort -count",
              "m": "Privacy-compliant session stitching; A/B test fixes on top error codes.",
              "z": "Funnel (checkout steps), Bar chart (errors vs. abandon), Line chart (recovery campaigns).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Web analytics, app logs.\n• Ensure the following data sources are available: `sourcetype=\"segment:track\"`, `sourcetype=\"app:checkout\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPrivacy-compliant session stitching; A/B test fixes on top error codes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"segment:track\"\n| transaction session_id maxspan=30m\n| eval has_checkout=if(match(mvjoin(event,\"|\"),\"Checkout\"),1,0)\n| eval has_order=if(match(mvjoin(event,\"|\"),\"Order Completed\"),1,0)\n| where has_checkout=1 AND has_order=0\n| stats count by checkout_error\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cart Abandonment Correlation** — Links payment errors, latency, and coupon issues to abandonment for revenue recovery.\n\nDocumented **Data sources**: `sourcetype=\"segment:track\"`, `sourcetype=\"app:checkout\"`. **App/TA** (typical add-on context): Web analytics, app logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: segment:track. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"segment:track\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• `eval` defines or adjusts **has_checkout** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **has_order** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where has_checkout=1 AND has_order=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by checkout_error** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Funnel (checkout steps), Bar chart (errors vs. abandon), Line chart (recovery campaigns).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In Retail and E-Commerce, we pay close attention to \"Cart Abandonment Correlation\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Performance"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.39",
              "n": "FedRAMP Continuous Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Automates ConMon evidence: vulnerability scanning, logging, and boundary defenses for FedRAMP Moderate/High.",
              "t": "SCAP scanners, cloud TAs (AWS/Azure/GCP)",
              "d": "`sourcetype=\"aws:cloudtrail\"`, `sourcetype=\"tenable:scan\"`",
              "q": "index=govcloud (sourcetype=\"aws:cloudtrail\" OR sourcetype=\"tenable:scan\")\n| search (eventName=\"DeleteTrail\" OR severity=\"CRITICAL\") AND tag=fedramp_boundary\n| stats latest(_time) as last_event by resource_id, control_id\n| where last_event < relative_time(now(),\"-30d@d\")",
              "m": "Map controls to saved searches; export to POA&M workflow; 3PAO evidence packs from scheduled PDFs.",
              "z": "Gantt (control testing), Table (stale evidence), Scorecard (ConMon % green).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SCAP scanners, cloud TAs (AWS/Azure/GCP).\n• Ensure the following data sources are available: `sourcetype=\"aws:cloudtrail\"`, `sourcetype=\"tenable:scan\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap controls to saved searches; export to POA&M workflow; 3PAO evidence packs from scheduled PDFs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=govcloud (sourcetype=\"aws:cloudtrail\" OR sourcetype=\"tenable:scan\")\n| search (eventName=\"DeleteTrail\" OR severity=\"CRITICAL\") AND tag=fedramp_boundary\n| stats latest(_time) as last_event by resource_id, control_id\n| where last_event < relative_time(now(),\"-30d@d\")\n```\n\nUnderstanding this SPL\n\n**FedRAMP Continuous Monitoring** — Automates ConMon evidence: vulnerability scanning, logging, and boundary defenses for FedRAMP Moderate/High.\n\nDocumented **Data sources**: `sourcetype=\"aws:cloudtrail\"`, `sourcetype=\"tenable:scan\"`. **App/TA** (typical add-on context): SCAP scanners, cloud TAs (AWS/Azure/GCP). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: govcloud; **sourcetype**: aws:cloudtrail, tenable:scan. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=govcloud, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by resource_id, control_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where last_event < relative_time(now(),\"-30d@d\")` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gantt (control testing), Table (stale evidence), Scorecard (ConMon % green).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In our line of business, we pay close attention to \"FedRAMP Continuous Monitoring\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "CA-7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that FedRAMP unknown is enforced — Splunk UC-10.12.39: FedRAMP Continuous Monitoring.",
                  "ea": "Saved search 'UC-10.12.39' running on sourcetype=\"aws:cloudtrail\", sourcetype=\"tenable:scan\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "FedRAMP"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.40",
              "n": "CMMC Compliance Assessment",
              "c": "critical",
              "f": "advanced",
              "v": "Tracks DFARS 7012 logging, FCI/CUI handling, and practice maturity for CMMC L2/L3.",
              "t": "Microsoft 365 GCC High, endpoint DLP",
              "d": "`sourcetype=\"o365:audit\"`, `sourcetype=\"dlp:cui\"`",
              "q": "index=cui sourcetype=\"o365:audit\" Workload=SharePoint\n| search CUI_label=\"true\" AND SharingScope=\"Anyone\"\n| stats count by UserId, SiteUrl, ObjectId\n| sort -count",
              "m": "Encrypt CUI at rest; block personal email domains; SSP alignment to CMMC domains.",
              "z": "Table (policy violations), Bar chart (practices), Heatmap (domain × status).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Microsoft 365 GCC High, endpoint DLP.\n• Ensure the following data sources are available: `sourcetype=\"o365:audit\"`, `sourcetype=\"dlp:cui\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEncrypt CUI at rest; block personal email domains; SSP alignment to CMMC domains.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cui sourcetype=\"o365:audit\" Workload=SharePoint\n| search CUI_label=\"true\" AND SharingScope=\"Anyone\"\n| stats count by UserId, SiteUrl, ObjectId\n| sort -count\n```\n\nUnderstanding this SPL\n\n**CMMC Compliance Assessment** — Tracks DFARS 7012 logging, FCI/CUI handling, and practice maturity for CMMC L2/L3.\n\nDocumented **Data sources**: `sourcetype=\"o365:audit\"`, `sourcetype=\"dlp:cui\"`. **App/TA** (typical add-on context): Microsoft 365 GCC High, endpoint DLP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cui; **sourcetype**: o365:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cui, sourcetype=\"o365:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by UserId, SiteUrl, ObjectId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (policy violations), Bar chart (practices), Heatmap (domain × status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In our line of business, we pay close attention to \"CMMC Compliance Assessment\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "AU.L2-3.3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CMMC unknown is enforced — Splunk UC-10.12.40: CMMC Compliance Assessment.",
                  "ea": "Saved search 'UC-10.12.40' running on sourcetype=\"o365:audit\", sourcetype=\"dlp:cui\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "CMMC"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.41",
              "n": "NIST 800-53 Control Validation",
              "c": "high",
              "f": "intermediate",
              "v": "Operationalizes AU, AC, SI, and IR families via automated checks against ingested security telemetry.",
              "t": "Splunk ES, compliance frameworks",
              "d": "Mixed — auth, change, vuln indexes tagged `control_id`",
              "q": "index=security tag=nist_800_53\n| lookup nist_controls control_id OUTPUT family automated\n| where automated=\"true\" AND result=\"fail\"\n| stats latest(_time) as last_fail by control_id, host\n| sort control_id",
              "m": "Map each control to correlation searches; feed failures into GRC; annual assessor exports.",
              "z": "Matrix (control × system), Trend (fail rate), Table (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ES, compliance frameworks.\n• Ensure the following data sources are available: Mixed — auth, change, vuln indexes tagged `control_id`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap each control to correlation searches; feed failures into GRC; annual assessor exports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=nist_800_53\n| lookup nist_controls control_id OUTPUT family automated\n| where automated=\"true\" AND result=\"fail\"\n| stats latest(_time) as last_fail by control_id, host\n| sort control_id\n```\n\nUnderstanding this SPL\n\n**NIST 800-53 Control Validation** — Operationalizes AU, AC, SI, and IR families via automated checks against ingested security telemetry.\n\nDocumented **Data sources**: Mixed — auth, change, vuln indexes tagged `control_id`. **App/TA** (typical add-on context): Splunk ES, compliance frameworks. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where automated=\"true\" AND result=\"fail\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by control_id, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (control × system), Trend (fail rate), Table (open findings).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In our line of business, we pay close attention to \"NIST 800-53 Control Validation\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CA-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 unknown is enforced — Splunk UC-10.12.41: NIST 800-53 Control Validation.",
                  "ea": "Saved search 'UC-10.12.41' running on Mixed — auth, change, vuln indexes tagged control_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.42",
              "n": "CAC Authentication Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks smart card logon failures, PIN lockouts, and certificate issues for DOD/IC access.",
              "t": "Windows Security, PKI",
              "d": "`sourcetype=\"WinEventLog:Security\"` EventCode=4768,4769",
              "q": "index=ad sourcetype=\"WinEventLog:Security\" EventCode IN (4768,4769,4776)\n| search Certificate_Information=\"*\"\n| eval fail=if(match(Status,\"0x[1-9a-f]\"),1,0)\n| timechart span=15m sum(fail) as card_failures, count as attempts\n| where card_failures/attempts > 0.1",
              "m": "Correlate with HSM and CRL availability; help desk runbooks for YubiKey/CAC readers.",
              "z": "Line chart (failure rate), Table (users with lockouts), Timeline (PKI events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows Security, PKI.\n• Ensure the following data sources are available: `sourcetype=\"WinEventLog:Security\"` EventCode=4768,4769.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate with HSM and CRL availability; help desk runbooks for YubiKey/CAC readers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ad sourcetype=\"WinEventLog:Security\" EventCode IN (4768,4769,4776)\n| search Certificate_Information=\"*\"\n| eval fail=if(match(Status,\"0x[1-9a-f]\"),1,0)\n| timechart span=15m sum(fail) as card_failures, count as attempts\n| where card_failures/attempts > 0.1\n```\n\nUnderstanding this SPL\n\n**CAC Authentication Monitoring** — Tracks smart card logon failures, PIN lockouts, and certificate issues for DOD/IC access.\n\nDocumented **Data sources**: `sourcetype=\"WinEventLog:Security\"` EventCode=4768,4769. **App/TA** (typical add-on context): Windows Security, PKI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ad; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ad, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **fail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=15m** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where card_failures/attempts > 0.1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failure rate), Table (users with lockouts), Timeline (PKI events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In our line of business, we pay close attention to \"CAC Authentication Monitoring\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.43",
              "n": "FISMA Reporting Automation",
              "c": "high",
              "f": "intermediate",
              "v": "Aggregates incident, POA&M, and control metrics for annual FISMA/CyberScope submissions.",
              "t": "Splunk ES, ticketing",
              "d": "`index=notable`, `sourcetype=\"jira:fisma\"`",
              "q": "index=notable OR index=itsm sourcetype IN (\"notable:incident\",\"jira:fisma\")\n| eval fisma_system=coalesce(system_id, bureausystem)\n| stats count(eval(severity=\"critical\")) as crit, count as total, avg(days_open) as avg_age by fisma_system\n| where crit>0 OR avg_age>30\n| sort -crit",
              "m": "Align with NIST RMF system inventory; automate CSV for OMB dashboards; CIO attestation workflow.",
              "z": "Table (system risk), Bar chart (POA&M age), Single value (contained incidents %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ES, ticketing.\n• Ensure the following data sources are available: `index=notable`, `sourcetype=\"jira:fisma\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign with NIST RMF system inventory; automate CSV for OMB dashboards; CIO attestation workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=notable OR index=itsm sourcetype IN (\"notable:incident\",\"jira:fisma\")\n| eval fisma_system=coalesce(system_id, bureausystem)\n| stats count(eval(severity=\"critical\")) as crit, count as total, avg(days_open) as avg_age by fisma_system\n| where crit>0 OR avg_age>30\n| sort -crit\n```\n\nUnderstanding this SPL\n\n**FISMA Reporting Automation** — Aggregates incident, POA&M, and control metrics for annual FISMA/CyberScope submissions.\n\nDocumented **Data sources**: `index=notable`, `sourcetype=\"jira:fisma\"`. **App/TA** (typical add-on context): Splunk ES, ticketing. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: notable, itsm.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=notable, index=itsm. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fisma_system** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by fisma_system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where crit>0 OR avg_age>30` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (system risk), Bar chart (POA&M age), Single value (contained incidents %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In our line of business, we pay close attention to \"FISMA Reporting Automation\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that FISMA unknown is enforced — Splunk UC-10.12.43: FISMA Reporting Automation.",
                  "ea": "Saved search 'UC-10.12.43' running on index=notable, sourcetype=\"jira:fisma\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "FISMA"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.44",
              "n": "CJIS Audit Log Compliance",
              "c": "critical",
              "f": "intermediate",
              "v": "Ensures CJIS systems capture user ID, timestamp, device, and action for NCIC/III access as required.",
              "t": "RMS/CAD, CJIS WAN",
              "d": "`sourcetype=\"cjis:query\"`",
              "q": "index=cjis sourcetype=\"cjis:query\"\n| where isnull(officer_id) OR isnull(ncic_transaction_id) OR len(device_id)<4\n| stats count by agency, src, query_type\n| sort -count",
              "m": "Monthly completeness report; tamper-evident forwarding; background check status on accounts.",
              "z": "Table (incomplete records), Line chart (query volume), Compliance gauge (field completeness %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: RMS/CAD, CJIS WAN.\n• Ensure the following data sources are available: `sourcetype=\"cjis:query\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonthly completeness report; tamper-evident forwarding; background check status on accounts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cjis sourcetype=\"cjis:query\"\n| where isnull(officer_id) OR isnull(ncic_transaction_id) OR len(device_id)<4\n| stats count by agency, src, query_type\n| sort -count\n```\n\nUnderstanding this SPL\n\n**CJIS Audit Log Compliance** — Ensures CJIS systems capture user ID, timestamp, device, and action for NCIC/III access as required.\n\nDocumented **Data sources**: `sourcetype=\"cjis:query\"`. **App/TA** (typical add-on context): RMS/CAD, CJIS WAN. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cjis; **sourcetype**: cjis:query. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cjis, sourcetype=\"cjis:query\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(officer_id) OR isnull(ncic_transaction_id) OR len(device_id)<4` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by agency, src, query_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (incomplete records), Line chart (query volume), Compliance gauge (field completeness %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In our line of business, we pay close attention to \"CJIS Audit Log Compliance\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CJIS",
                  "v": "v5.9.4",
                  "cl": "5.4.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CJIS unknown is enforced — Splunk UC-10.12.44: CJIS Audit Log Compliance.",
                  "ea": "Saved search 'UC-10.12.44' running on sourcetype=\"cjis:query\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "regs": [
                "CJIS"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.12.45",
              "n": "Government Cloud Authorization Boundary",
              "c": "critical",
              "f": "advanced",
              "v": "Detects traffic or APIs crossing the FedRAMP authorization boundary without SC monitoring.",
              "t": "Cloud network flow, API gateway",
              "d": "`sourcetype=\"aws:vpcflow\"`, `sourcetype=\"apigw:exec\"`",
              "q": "index=govcloud sourcetype=\"aws:vpcflow\"\n| lookup auth_boundary_prefix dest OUTPUT in_scope\n| where in_scope=\"true\" AND action=\"ACCEPT\" AND NOT cidrmatch(src,\"10.0.0.0/8\")\n| lookup splunk_egress_allowlist src OUTPUT approved\n| where isnull(approved)\n| stats count by src, dest, dest_port\n| sort -count",
              "m": "Diagram boundary in SSP; alert on new peering or PrivateLink; quarterly penetration test findings.",
              "z": "Map (flows), Table (unapproved egress), Sankey (trust zones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud network flow, API gateway.\n• Ensure the following data sources are available: `sourcetype=\"aws:vpcflow\"`, `sourcetype=\"apigw:exec\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDiagram boundary in SSP; alert on new peering or PrivateLink; quarterly penetration test findings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=govcloud sourcetype=\"aws:vpcflow\"\n| lookup auth_boundary_prefix dest OUTPUT in_scope\n| where in_scope=\"true\" AND action=\"ACCEPT\" AND NOT cidrmatch(src,\"10.0.0.0/8\")\n| lookup splunk_egress_allowlist src OUTPUT approved\n| where isnull(approved)\n| stats count by src, dest, dest_port\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Government Cloud Authorization Boundary** — Detects traffic or APIs crossing the FedRAMP authorization boundary without SC monitoring.\n\nDocumented **Data sources**: `sourcetype=\"aws:vpcflow\"`, `sourcetype=\"apigw:exec\"`. **App/TA** (typical add-on context): Cloud network flow, API gateway. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: govcloud; **sourcetype**: aws:vpcflow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=govcloud, sourcetype=\"aws:vpcflow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"true\" AND action=\"ACCEPT\" AND NOT cidrmatch(src,\"10.0.0.0/8\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (flows), Table (unapproved egress), Sankey (trust zones).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "In our line of business, we pay close attention to \"Government Cloud Authorization Boundary\" in our day-to-day work so that fraud, waste, and compliance issues do not slip by quietly. We help teams focus on the few situations that need a person, not every routine blip in the data.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.1,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 45,
            "none": 0
          }
        },
        {
          "i": "10.13",
          "n": "CIM Data Model Standard Monitoring Patterns",
          "u": [
            {
              "i": "10.13.1",
              "n": "Failed Authentication Ratio Trending",
              "c": "high",
              "f": "beginner",
              "v": "Rising failure ratio vs. successes indicates credential attacks, misconfigured apps, or account lockout waves — normalized via CIM across vendors.",
              "t": "Any TA mapping to `Authentication`",
              "d": "`sourcetype` values tagged for CIM Authentication (e.g. `WinEventLog:Security`, `o365:audit`, `vpn:syslog`)",
              "q": "index=security tag=authentication action=failure OR action=success\n| timechart span=1h count(eval(action=\"failure\")) as fail count(eval(action=\"success\")) as ok\n| eval fail_ratio=round(fail/(fail+ok)*100,2)",
              "m": "Accelerate the Authentication data model; alert when `fail_ratio` exceeds baseline + 3σ for the same hour-of-week.",
              "z": "Line chart (fail vs. ok), Single value (fail ratio %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Any TA mapping to `Authentication`.\n• Ensure the following data sources are available: `sourcetype` values tagged for CIM Authentication (e.g. `WinEventLog:Security`, `o365:audit`, `vpn:syslog`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAccelerate the Authentication data model; alert when `fail_ratio` exceeds baseline + 3σ for the same hour-of-week.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=authentication action=failure OR action=success\n| timechart span=1h count(eval(action=\"failure\")) as fail count(eval(action=\"success\")) as ok\n| eval fail_ratio=round(fail/(fail+ok)*100,2)\n```\n\nUnderstanding this SPL\n\n**Failed Authentication Ratio Trending** — Rising failure ratio vs. successes indicates credential attacks, misconfigured apps, or account lockout waves — normalized via CIM across vendors.\n\nDocumented **Data sources**: `sourcetype` values tagged for CIM Authentication (e.g. `WinEventLog:Security`, `o365:audit`, `vpn:syslog`). **App/TA** (typical add-on context): Any TA mapping to `Authentication`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **fail_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Authentication.Authentication by Authentication.action, _time span=1h\n| xyseries _time Authentication.action count\n| fillnull value=0 failure success\n| eval fail_ratio=if(failure+success>0, round(failure/(failure+success)*100,2), null())\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Failed Authentication Ratio Trending** — Rising failure ratio vs. successes indicates credential attacks, misconfigured apps, or account lockout waves — normalized via CIM across vendors.\n\nDocumented **Data sources**: `sourcetype` values tagged for CIM Authentication (e.g. `WinEventLog:Security`, `o365:audit`, `vpn:syslog`). **App/TA** (typical add-on context): Any TA mapping to `Authentication`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Pivots fields for charting with `xyseries`.\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **fail_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (fail vs. ok), Single value (fail ratio %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Failed Authentication Ratio Trending\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Authentication.Authentication by Authentication.action, _time span=1h\n| xyseries _time Authentication.action count\n| fillnull value=0 failure success\n| eval fail_ratio=if(failure+success>0, round(failure/(failure+success)*100,2), null())",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.2",
              "n": "Authentication Type Distribution",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks Kerberos vs. NTLM vs. SAML mix to spot downgrade attacks and legacy protocol use.",
              "t": "AD, IdP TAs → CIM Authentication",
              "d": "Domain controller security, IdP logs with `signature` or `app` denoting mechanism",
              "q": "index=security tag=authentication\n| stats count by signature, app\n| sort -count",
              "m": "Baseline per business unit; alert on new `signature` values or surges in `NTLM` where Kerberos is expected.",
              "z": "Pie chart (mechanism), Bar chart (over time via `timechart`).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AD, IdP TAs → CIM Authentication.\n• Ensure the following data sources are available: Domain controller security, IdP logs with `signature` or `app` denoting mechanism.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline per business unit; alert on new `signature` values or surges in `NTLM` where Kerberos is expected.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=authentication\n| stats count by signature, app\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Authentication Type Distribution** — Tracks Kerberos vs. NTLM vs. SAML mix to spot downgrade attacks and legacy protocol use.\n\nDocumented **Data sources**: Domain controller security, IdP logs with `signature` or `app` denoting mechanism. **App/TA** (typical add-on context): AD, IdP TAs → CIM Authentication. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by signature, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Authentication.Authentication by Authentication.signature, Authentication.app\n| sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Authentication Type Distribution** — Tracks Kerberos vs. NTLM vs. SAML mix to spot downgrade attacks and legacy protocol use.\n\nDocumented **Data sources**: Domain controller security, IdP logs with `signature` or `app` denoting mechanism. **App/TA** (typical add-on context): AD, IdP TAs → CIM Authentication. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (mechanism), Bar chart (over time via `timechart`).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Authentication Type Distribution\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Authentication.Authentication by Authentication.signature, Authentication.app\n| sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.3",
              "n": "Impossible Travel Detection (CIM)",
              "c": "critical",
              "f": "advanced",
              "v": "Two successful auths from distant `src` locations within a window that precludes travel — vendor-agnostic when geo is mapped to CIM.",
              "t": "IdP + iplocation enrichment in TA",
              "d": "CIM Authentication with `src` populated",
              "q": "index=security tag=authentication action=success user=*\n| iplocation src\n| streamstats window=2 global=f current(Country) as cur_c prev(Country) as prev_c current(_time) as ts prev(_time) as pts by user\n| where isnotnull(prev_c) AND cur_c!=prev_c AND (ts-pts)<3600\n| table _time, user, src, cur_c, prev_c, ts, pts",
              "m": "Enrich with `src_lat`/`src_lon` for haversine distance when available; whitelist VPN egress ASNs; tune time window per org.",
              "z": "Map (user path), Table (violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IdP + iplocation enrichment in TA.\n• Ensure the following data sources are available: CIM Authentication with `src` populated.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnrich with `src_lat`/`src_lon` for haversine distance when available; whitelist VPN egress ASNs; tune time window per org.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=authentication action=success user=*\n| iplocation src\n| streamstats window=2 global=f current(Country) as cur_c prev(Country) as prev_c current(_time) as ts prev(_time) as pts by user\n| where isnotnull(prev_c) AND cur_c!=prev_c AND (ts-pts)<3600\n| table _time, user, src, cur_c, prev_c, ts, pts\n```\n\nUnderstanding this SPL\n\n**Impossible Travel Detection (CIM)** — Two successful auths from distant `src` locations within a window that precludes travel — vendor-agnostic when geo is mapped to CIM.\n\nDocumented **Data sources**: CIM Authentication with `src` populated. **App/TA** (typical add-on context): IdP + iplocation enrichment in TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Impossible Travel Detection (CIM)**): iplocation src\n• `streamstats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where isnotnull(prev_c) AND cur_c!=prev_c AND (ts-pts)<3600` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Impossible Travel Detection (CIM)**): table _time, user, src, cur_c, prev_c, ts, pts\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` dc(Authentication.src) as srcs from datamodel=Authentication.Authentication where Authentication.action=\"success\" by Authentication.user, _time span=15m\n| where srcs>1\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Impossible Travel Detection (CIM)** — Two successful auths from distant `src` locations within a window that precludes travel — vendor-agnostic when geo is mapped to CIM.\n\nDocumented **Data sources**: CIM Authentication with `src` populated. **App/TA** (typical add-on context): IdP + iplocation enrichment in TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where srcs>1` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (user path), Table (violations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Impossible Travel Detection\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` dc(Authentication.src) as srcs from datamodel=Authentication.Authentication where Authentication.action=\"success\" by Authentication.user, _time span=15m\n| where srcs>1",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.4",
              "n": "Service Account Interactive Login (CIM)",
              "c": "critical",
              "f": "intermediate",
              "v": "Service accounts should not have interactive / remote-interactive logons; indicates credential theft or misconfiguration.",
              "t": "Windows TA → Authentication",
              "d": "Event 4624 with logon type mapped to `signature` or `user`",
              "q": "index=security tag=authentication user=\"svc_*\" OR user=\"*-svc\"\n| search signature=\"*Interactive*\" OR app=\"WinLogon:Interactive\"\n| stats count by user, src, host",
              "m": "Maintain `lookup service_accounts_noninteractive.csv`; exclude break-glass; alert on first-seen interactive.",
              "z": "Table (violations), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows TA → Authentication.\n• Ensure the following data sources are available: Event 4624 with logon type mapped to `signature` or `user`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain `lookup service_accounts_noninteractive.csv`; exclude break-glass; alert on first-seen interactive.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=authentication user=\"svc_*\" OR user=\"*-svc\"\n| search signature=\"*Interactive*\" OR app=\"WinLogon:Interactive\"\n| stats count by user, src, host\n```\n\nUnderstanding this SPL\n\n**Service Account Interactive Login (CIM)** — Service accounts should not have interactive / remote-interactive logons; indicates credential theft or misconfiguration.\n\nDocumented **Data sources**: Event 4624 with logon type mapped to `signature` or `user`. **App/TA** (typical add-on context): Windows TA → Authentication. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, src, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Authentication.Authentication where (Authentication.user=\"svc_*\" OR match(Authentication.user,\".*-svc\")) AND match(Authentication.signature,\"(?i)interactive|remote.?interactive\") by Authentication.user, Authentication.src, Authentication.dest\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Service Account Interactive Login (CIM)** — Service accounts should not have interactive / remote-interactive logons; indicates credential theft or misconfiguration.\n\nDocumented **Data sources**: Event 4624 with logon type mapped to `signature` or `user`. **App/TA** (typical add-on context): Windows TA → Authentication. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Service Account Interactive Login\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Authentication.Authentication where (Authentication.user=\"svc_*\" OR match(Authentication.user,\".*-svc\")) AND match(Authentication.signature,\"(?i)interactive|remote.?interactive\") by Authentication.user, Authentication.src, Authentication.dest",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.5",
              "n": "Authentication Source Diversity",
              "c": "medium",
              "f": "beginner",
              "v": "Many distinct `src` addresses for one `user` in a short window suggests shared credentials or stuffing.",
              "t": "Any Authentication TA",
              "d": "CIM-authenticated VPN, cloud, or SSO logs",
              "q": "index=security tag=authentication action=failure OR action=success\n| bin _time span=1h\n| stats dc(src) as src_diversity by user, _time\n| where src_diversity>15",
              "m": "Lower threshold for privileged users; combine with threat intel on `src`.",
              "z": "Bar chart (src diversity), Table (users).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Any Authentication TA.\n• Ensure the following data sources are available: CIM-authenticated VPN, cloud, or SSO logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLower threshold for privileged users; combine with threat intel on `src`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=authentication action=failure OR action=success\n| bin _time span=1h\n| stats dc(src) as src_diversity by user, _time\n| where src_diversity>15\n```\n\nUnderstanding this SPL\n\n**Authentication Source Diversity** — Many distinct `src` addresses for one `user` in a short window suggests shared credentials or stuffing.\n\nDocumented **Data sources**: CIM-authenticated VPN, cloud, or SSO logs. **App/TA** (typical add-on context): Any Authentication TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by user, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where src_diversity>15` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` dc(Authentication.src) as n from datamodel=Authentication.Authentication by Authentication.user, _time span=1h\n| where n>15\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Authentication Source Diversity** — Many distinct `src` addresses for one `user` in a short window suggests shared credentials or stuffing.\n\nDocumented **Data sources**: CIM-authenticated VPN, cloud, or SSO logs. **App/TA** (typical add-on context): Any Authentication TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where n>15` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (src diversity), Table (users).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Authentication Source Diversity\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` dc(Authentication.src) as n from datamodel=Authentication.Authentication by Authentication.user, _time span=1h\n| where n>15",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.6",
              "n": "MFA Failure Rate Trending (CIM)",
              "c": "high",
              "f": "beginner",
              "v": "MFA fatigue attacks and token issues both raise failure rates; CIM unifies IdP vendors.",
              "t": "Okta, Azure AD, Duo TAs → Authentication",
              "d": "MFA challenge events mapped with `action=failure` and `reason` or `signature`",
              "q": "index=security tag=authentication \"mfa\" OR signature=\"*MFA*\"\n| timechart span=1h count(eval(action=\"failure\")) as mfa_fail count as mfa_total\n| eval mfa_fail_pct=round(mfa_fail/mfa_total*100,2)",
              "m": "Alert when `mfa_fail_pct` doubles vs. same hour prior week.",
              "z": "Line chart (MFA fail %), Single value.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Okta, Azure AD, Duo TAs → Authentication.\n• Ensure the following data sources are available: MFA challenge events mapped with `action=failure` and `reason` or `signature`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert when `mfa_fail_pct` doubles vs. same hour prior week.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=authentication \"mfa\" OR signature=\"*MFA*\"\n| timechart span=1h count(eval(action=\"failure\")) as mfa_fail count as mfa_total\n| eval mfa_fail_pct=round(mfa_fail/mfa_total*100,2)\n```\n\nUnderstanding this SPL\n\n**MFA Failure Rate Trending (CIM)** — MFA fatigue attacks and token issues both raise failure rates; CIM unifies IdP vendors.\n\nDocumented **Data sources**: MFA challenge events mapped with `action=failure` and `reason` or `signature`. **App/TA** (typical add-on context): Okta, Azure AD, Duo TAs → Authentication. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **mfa_fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Authentication.Authentication where Authentication.action=\"failure\" AND Authentication.signature=\"*MFA*\" by _time span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**MFA Failure Rate Trending (CIM)** — MFA fatigue attacks and token issues both raise failure rates; CIM unifies IdP vendors.\n\nDocumented **Data sources**: MFA challenge events mapped with `action=failure` and `reason` or `signature`. **App/TA** (typical add-on context): Okta, Azure AD, Duo TAs → Authentication. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (MFA fail %), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"MFA Failure Rate Trending\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Authentication.Authentication where Authentication.action=\"failure\" AND Authentication.signature=\"*MFA*\" by _time span=1h",
              "e": [
                "azure",
                "okta"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.7",
              "n": "Off-Hours Authentication (CIM)",
              "c": "high",
              "f": "beginner",
              "v": "Privileged or bulk success auths outside business hours warrant review.",
              "t": "CIM Authentication",
              "d": "Normalized `_time` with user timezone optional",
              "q": "index=security tag=authentication action=success\n| eval hr=strftime(_time,\"%H\")\n| where (hr<6 OR hr>22) AND (match(user,\"^admin\") OR match(user,\"^svc_\"))\n| stats count by user, app, src",
              "m": "Join HR schedule lookup; reduce noise for global teams.",
              "z": "Table (off-hours auths), Heatmap (hour × day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CIM Authentication.\n• Ensure the following data sources are available: Normalized `_time` with user timezone optional.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nJoin HR schedule lookup; reduce noise for global teams.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=authentication action=success\n| eval hr=strftime(_time,\"%H\")\n| where (hr<6 OR hr>22) AND (match(user,\"^admin\") OR match(user,\"^svc_\"))\n| stats count by user, app, src\n```\n\nUnderstanding this SPL\n\n**Off-Hours Authentication (CIM)** — Privileged or bulk success auths outside business hours warrant review.\n\nDocumented **Data sources**: Normalized `_time` with user timezone optional. **App/TA** (typical add-on context): CIM Authentication. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (hr<6 OR hr>22) AND (match(user,\"^admin\") OR match(user,\"^svc_\"))` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, app, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Authentication.Authentication where Authentication.action=\"success\" by Authentication.user, Authentication.app, _time span=1h\n| eval hr=strftime(_time,\"%H\")\n| where hr<6 OR hr>22\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Off-Hours Authentication (CIM)** — Privileged or bulk success auths outside business hours warrant review.\n\nDocumented **Data sources**: Normalized `_time` with user timezone optional. **App/TA** (typical add-on context): CIM Authentication. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• `eval` defines or adjusts **hr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hr<6 OR hr>22` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (off-hours auths), Heatmap (hour × day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Off-Hours Authentication\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Authentication.Authentication where Authentication.action=\"success\" by Authentication.user, Authentication.app, _time span=1h\n| eval hr=strftime(_time,\"%H\")\n| where hr<6 OR hr>22",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.8",
              "n": "Password Spray Detection via CIM",
              "c": "critical",
              "f": "intermediate",
              "v": "Many users with single failure from same `src` — classic spray pattern visible after CIM normalization.",
              "t": "AD, IdP → Authentication",
              "d": "`action=failure` with `signature` password-related",
              "q": "index=security tag=authentication action=failure\n| stats dc(user) as ucount by src\n| where ucount>20",
              "m": "Require minimum event volume per `src`; correlate with geo ASN blocklist.",
              "z": "Table (spray sources), Map.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AD, IdP → Authentication.\n• Ensure the following data sources are available: `action=failure` with `signature` password-related.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequire minimum event volume per `src`; correlate with geo ASN blocklist.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=authentication action=failure\n| stats dc(user) as ucount by src\n| where ucount>20\n```\n\nUnderstanding this SPL\n\n**Password Spray Detection via CIM** — Many users with single failure from same `src` — classic spray pattern visible after CIM normalization.\n\nDocumented **Data sources**: `action=failure` with `signature` password-related. **App/TA** (typical add-on context): AD, IdP → Authentication. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ucount>20` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` dc(Authentication.user) as u from datamodel=Authentication.Authentication where Authentication.action=\"failure\" by Authentication.src\n| where u>20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Password Spray Detection via CIM** — Many users with single failure from same `src` — classic spray pattern visible after CIM normalization.\n\nDocumented **Data sources**: `action=failure` with `signature` password-related. **App/TA** (typical add-on context): AD, IdP → Authentication. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where u>20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (spray sources), Map.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Password Spray Detection\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` dc(Authentication.user) as u from datamodel=Authentication.Authentication where Authentication.action=\"failure\" by Authentication.src\n| where u>20",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.9",
              "n": "Top Talkers Analysis (CIM)",
              "c": "medium",
              "f": "beginner",
              "v": "Baselines bytes and sessions by `src`/`dest` for capacity and exfiltration hunting.",
              "t": "NetFlow, firewall TAs → Network_Traffic",
              "d": "`All_Traffic` CIM-tagged events",
              "q": "index=security tag=network tag=communicate\n| stats sum(bytes) as b by src dest\n| sort -b\n| head 100",
              "m": "Use summaries; split internal vs. external via `dest_zone` or RFC1918 lookups.",
              "z": "Table (top pairs), Sankey.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NetFlow, firewall TAs → Network_Traffic.\n• Ensure the following data sources are available: `All_Traffic` CIM-tagged events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse summaries; split internal vs. external via `dest_zone` or RFC1918 lookups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=network tag=communicate\n| stats sum(bytes) as b by src dest\n| sort -b\n| head 100\n```\n\nUnderstanding this SPL\n\n**Top Talkers Analysis (CIM)** — Baselines bytes and sessions by `src`/`dest` for capacity and exfiltration hunting.\n\nDocumented **Data sources**: `All_Traffic` CIM-tagged events. **App/TA** (typical add-on context): NetFlow, firewall TAs → Network_Traffic. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in, sum(All_Traffic.bytes_out) as bytes_out from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n| head 100\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Top Talkers Analysis (CIM)** — Baselines bytes and sessions by `src`/`dest` for capacity and exfiltration hunting.\n\nDocumented **Data sources**: `All_Traffic` CIM-tagged events. **App/TA** (typical add-on context): NetFlow, firewall TAs → Network_Traffic. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top pairs), Sankey.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Top Talkers Analysis\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in, sum(All_Traffic.bytes_out) as bytes_out from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest\n| eval bytes=bytes_in+bytes_out\n| sort -bytes\n| head 100",
              "e": [
                "netflow"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.10",
              "n": "Protocol Anomaly Detection (CIM)",
              "c": "high",
              "f": "intermediate",
              "v": "Unexpected `app`/`transport` on a segment (e.g., SMB from workstation VLAN to internet).",
              "t": "Zeek, firewall → All_Traffic",
              "d": "`app`, `transport`, `dest_port` in CIM",
              "q": "index=security tag=network app=smb OR dest_port=445\n| lookup internal_subnets dest OUTPUT zone\n| where zone=\"user_lan\" AND NOT cidrmatch(dest,\"10.0.0.0/8\")\n| stats count by src, dest",
              "m": "Maintain allowlists per zone; alert on new `app` per subnet.",
              "z": "Table (anomalies), Treemap (app mix).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Zeek, firewall → All_Traffic.\n• Ensure the following data sources are available: `app`, `transport`, `dest_port` in CIM.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain allowlists per zone; alert on new `app` per subnet.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=network app=smb OR dest_port=445\n| lookup internal_subnets dest OUTPUT zone\n| where zone=\"user_lan\" AND NOT cidrmatch(dest,\"10.0.0.0/8\")\n| stats count by src, dest\n```\n\nUnderstanding this SPL\n\n**Protocol Anomaly Detection (CIM)** — Unexpected `app`/`transport` on a segment (e.g., SMB from workstation VLAN to internet).\n\nDocumented **Data sources**: `app`, `transport`, `dest_port` in CIM. **App/TA** (typical add-on context): Zeek, firewall → All_Traffic. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where zone=\"user_lan\" AND NOT cidrmatch(dest,\"10.0.0.0/8\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Network_Traffic.All_Traffic where All_Traffic.app=\"smb\" OR All_Traffic.dest_port=445 by All_Traffic.src, All_Traffic.dest, All_Traffic.app\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Protocol Anomaly Detection (CIM)** — Unexpected `app`/`transport` on a segment (e.g., SMB from workstation VLAN to internet).\n\nDocumented **Data sources**: `app`, `transport`, `dest_port` in CIM. **App/TA** (typical add-on context): Zeek, firewall → All_Traffic. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalies), Treemap (app mix).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Protocol Anomaly Detection\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Network_Traffic.All_Traffic where All_Traffic.app=\"smb\" OR All_Traffic.dest_port=445 by All_Traffic.src, All_Traffic.dest, All_Traffic.app",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.11",
              "n": "Connection Duration Outliers (CIM)",
              "c": "medium",
              "f": "intermediate",
              "v": "Very long flows may indicate tunneling; very short bursts may indicate scanning — CIM `duration` enables cross-vendor analytics.",
              "t": "Flow exporters → All_Traffic",
              "d": "Fields `duration` or computed from start/stop",
              "q": "index=security tag=network isnotnull(duration)\n| eventstats perc99(duration) as p99 by dest_port\n| where duration > 10*p99 AND duration>3600\n| stats count by src, dest, dest_port",
              "m": "Per-`app` baselines; exclude backup and streaming CDNs via AS lookup.",
              "z": "Histogram (duration), Table (outliers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Flow exporters → All_Traffic.\n• Ensure the following data sources are available: Fields `duration` or computed from start/stop.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPer-`app` baselines; exclude backup and streaming CDNs via AS lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=network isnotnull(duration)\n| eventstats perc99(duration) as p99 by dest_port\n| where duration > 10*p99 AND duration>3600\n| stats count by src, dest, dest_port\n```\n\nUnderstanding this SPL\n\n**Connection Duration Outliers (CIM)** — Very long flows may indicate tunneling; very short bursts may indicate scanning — CIM `duration` enables cross-vendor analytics.\n\nDocumented **Data sources**: Fields `duration` or computed from start/stop. **App/TA** (typical add-on context): Flow exporters → All_Traffic. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eventstats` rolls up events into metrics; results are split **by dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where duration > 10*p99 AND duration>3600` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(All_Traffic.duration) as avg_dur, max(All_Traffic.duration) as max_dur from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest span=1h\n| where max_dur>86400\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Connection Duration Outliers (CIM)** — Very long flows may indicate tunneling; very short bursts may indicate scanning — CIM `duration` enables cross-vendor analytics.\n\nDocumented **Data sources**: Fields `duration` or computed from start/stop. **App/TA** (typical add-on context): Flow exporters → All_Traffic. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Filters the current rows with `where max_dur>86400` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (duration), Table (outliers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Connection Duration Outliers\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` avg(All_Traffic.duration) as avg_dur, max(All_Traffic.duration) as max_dur from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest span=1h\n| where max_dur>86400",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.12",
              "n": "Internal-to-External Traffic Ratio (CIM)",
              "c": "high",
              "f": "intermediate",
              "v": "Sudden shift in egress ratio can signal C2 or misrouted backups.",
              "t": "NetFlow / firewall CIM",
              "d": "`src`/`dest` with RFC1918 classification",
              "q": "index=security tag=network\n| eval src_int=if(cidrmatch(src,\"10.0.0.0/8\") OR cidrmatch(src,\"192.168.0.0/16\"),1,0)\n| eval dest_ext=if(NOT cidrmatch(dest,\"10.0.0.0/8\"),1,0)\n| timechart span=15m sum(eval(src_int=1 AND dest_ext=1)) as egress sum(bytes) as total\n| eval ratio=egress/total",
              "m": "Alert on ratio change >30% vs. weekly baseline.",
              "z": "Area chart (egress ratio), Line chart (bytes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NetFlow / firewall CIM.\n• Ensure the following data sources are available: `src`/`dest` with RFC1918 classification.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on ratio change >30% vs. weekly baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=network\n| eval src_int=if(cidrmatch(src,\"10.0.0.0/8\") OR cidrmatch(src,\"192.168.0.0/16\"),1,0)\n| eval dest_ext=if(NOT cidrmatch(dest,\"10.0.0.0/8\"),1,0)\n| timechart span=15m sum(eval(src_int=1 AND dest_ext=1)) as egress sum(bytes) as total\n| eval ratio=egress/total\n```\n\nUnderstanding this SPL\n\n**Internal-to-External Traffic Ratio (CIM)** — Sudden shift in egress ratio can signal C2 or misrouted backups.\n\nDocumented **Data sources**: `src`/`dest` with RFC1918 classification. **App/TA** (typical add-on context): NetFlow / firewall CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **src_int** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dest_ext** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=15m** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in, sum(All_Traffic.bytes_out) as bytes_out from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, _time span=15m\n| `drop_dm_object_name(\"All_Traffic\")`\n| eval bytes=bytes_in+bytes_out\n| eval src_int=if(match(src,\"^(10\\.|192\\.168\\.|172\\.(1[6-9]|2[0-9]|3[01])\\.)\"),1,0)\n| eval dest_int=if(match(dest,\"^(10\\.|192\\.168\\.|172\\.(1[6-9]|2[0-9]|3[01])\\.)\"),1,0)\n| where src_int=1 AND dest_int=0\n| timechart span=15m sum(bytes)\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Internal-to-External Traffic Ratio (CIM)** — Sudden shift in egress ratio can signal C2 or misrouted backups.\n\nDocumented **Data sources**: `src`/`dest` with RFC1918 classification. **App/TA** (typical add-on context): NetFlow / firewall CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Invokes macro `drop_dm_object_name(\"All_Traffic\")` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **src_int** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dest_int** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where src_int=1 AND dest_int=0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets — ideal for trending and alerting on this use case.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (egress ratio), Line chart (bytes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Internal-to-External Traffic Ratio\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in, sum(All_Traffic.bytes_out) as bytes_out from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, _time span=15m\n| `drop_dm_object_name(\"All_Traffic\")`\n| eval bytes=bytes_in+bytes_out\n| eval src_int=if(match(src,\"^(10\\.|192\\.168\\.|172\\.(1[6-9]|2[0-9]|3[01])\\.)\"),1,0)\n| eval dest_int=if(match(dest,\"^(10\\.|192\\.168\\.|172\\.(1[6-9]|2[0-9]|3[01])\\.)\"),1,0)\n| where src_int=1 AND dest_int=0\n| timechart span=15m sum(bytes)",
              "e": [
                "netflow"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.13",
              "n": "DNS Query Volume (CIM)",
              "c": "high",
              "f": "beginner",
              "v": "DNS tunneling and malware generate high `query` counts per `src` — normalized in `DNS` CIM model.",
              "t": "Infoblox, BIND, Zeek dns → DNS",
              "d": "`sourcetype=\"bind:query\"` etc., CIM-tagged",
              "q": "index=security tag=dns tag=network\n| timechart span=5m count by src\n| eventstats avg(count) as mu, stdev(count) as s\n| where count > mu+4*s",
              "m": "Exclude resolver IPs; focus on endpoint subnets.",
              "z": "Line chart (queries by src), Single value (peak factor).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Infoblox, BIND, Zeek dns → DNS.\n• Ensure the following data sources are available: `sourcetype=\"bind:query\"` etc., CIM-tagged.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExclude resolver IPs; focus on endpoint subnets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=dns tag=network\n| timechart span=5m count by src\n| eventstats avg(count) as mu, stdev(count) as s\n| where count > mu+4*s\n```\n\nUnderstanding this SPL\n\n**DNS Query Volume (CIM)** — DNS tunneling and malware generate high `query` counts per `src` — normalized in `DNS` CIM model.\n\nDocumented **Data sources**: `sourcetype=\"bind:query\"` etc., CIM-tagged. **App/TA** (typical add-on context): Infoblox, BIND, Zeek dns → DNS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by src** — ideal for trending and alerting on this use case.\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where count > mu+4*s` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DNS Query Volume (CIM)** — DNS tunneling and malware generate high `query` counts per `src` — normalized in `DNS` CIM model.\n\nDocumented **Data sources**: `sourcetype=\"bind:query\"` etc., CIM-tagged. **App/TA** (typical add-on context): Infoblox, BIND, Zeek dns → DNS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (queries by src), Single value (peak factor).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"DNS Query Volume\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic (DNS)"
              ],
              "qs": "| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src span=5m | sort - agg_value",
              "e": [
                "infoblox"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.14",
              "n": "Network Session Anomaly (CIM)",
              "c": "high",
              "f": "intermediate",
              "v": "Session count per host vs. baseline — lateral movement and port scans increase discrete sessions.",
              "t": "Firewall session logs → All_Traffic",
              "d": "`action` allowed/blocked with session identifiers",
              "q": "index=security tag=network tag=session\n| bin _time span=10m\n| stats dc(dest_port) as ports, dc(dest) as dsts by src, _time\n| where ports>100 OR dsts>50",
              "m": "Tune per server role; datacenter hosts expected higher `dsts`.",
              "z": "Table (noisy hosts), Scatter (ports vs. dsts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Firewall session logs → All_Traffic.\n• Ensure the following data sources are available: `action` allowed/blocked with session identifiers.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune per server role; datacenter hosts expected higher `dsts`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=network tag=session\n| bin _time span=10m\n| stats dc(dest_port) as ports, dc(dest) as dsts by src, _time\n| where ports>100 OR dsts>50\n```\n\nUnderstanding this SPL\n\n**Network Session Anomaly (CIM)** — Session count per host vs. baseline — lateral movement and port scans increase discrete sessions.\n\nDocumented **Data sources**: `action` allowed/blocked with session identifiers. **App/TA** (typical add-on context): Firewall session logs → All_Traffic. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by src, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ports>100 OR dsts>50` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` dc(All_Traffic.dest_port) as ports, dc(All_Traffic.dest) as dsts from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, _time span=10m\n| where ports>100 OR dsts>50\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network Session Anomaly (CIM)** — Session count per host vs. baseline — lateral movement and port scans increase discrete sessions.\n\nDocumented **Data sources**: `action` allowed/blocked with session identifiers. **App/TA** (typical add-on context): Firewall session logs → All_Traffic. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Filters the current rows with `where ports>100 OR dsts>50` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (noisy hosts), Scatter (ports vs. dsts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Network Session Anomaly\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` dc(All_Traffic.dest_port) as ports, dc(All_Traffic.dest) as dsts from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, _time span=10m\n| where ports>100 OR dsts>50",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.15",
              "n": "Malware Detection Rate Trending",
              "c": "critical",
              "f": "beginner",
              "v": "Tracks EDR/AV detection volume normalized as `Malware` CIM for executive and SOC trending.",
              "t": "Defender, CrowdStrike, etc. → Malware",
              "d": "`Malware_Attacks` objects",
              "q": "index=security tag=malware\n| timechart span=1d count by signature",
              "m": "Whitelist known PUPs; alert on new `signature` or 3× baseline for same family.",
              "z": "Stacked area (families), Line chart (total).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Defender, CrowdStrike, etc. → Malware.\n• Ensure the following data sources are available: `Malware_Attacks` objects.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nWhitelist known PUPs; alert on new `signature` or 3× baseline for same family.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=malware\n| timechart span=1d count by signature\n```\n\nUnderstanding this SPL\n\n**Malware Detection Rate Trending** — Tracks EDR/AV detection volume normalized as `Malware` CIM for executive and SOC trending.\n\nDocumented **Data sources**: `Malware_Attacks` objects. **App/TA** (typical add-on context): Defender, CrowdStrike, etc. → Malware. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by signature** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Malware.Malware_Attacks by Malware_Attacks.signature, _time span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Malware Detection Rate Trending** — Tracks EDR/AV detection volume normalized as `Malware` CIM for executive and SOC trending.\n\nDocumented **Data sources**: `Malware_Attacks` objects. **App/TA** (typical add-on context): Defender, CrowdStrike, etc. → Malware. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area (families), Line chart (total).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Malware Detection Rate Trending\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Malware.Malware_Attacks by Malware_Attacks.signature, _time span=1d",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.16",
              "n": "Quarantine Success Rate",
              "c": "high",
              "f": "beginner",
              "v": "Ratio of `action=blocked` or vendor quarantine to total malware events — failed containment drives risk.",
              "t": "EDR TA → Malware",
              "d": "`action`, `vendor_product`",
              "q": "index=security tag=malware\n| stats count(eval(match(action,\"blocked|quarantined|deleted\"))) as ok count as tot\n| eval rate=round(ok/tot*100,1)\n| where rate < 85",
              "m": "Break out by OS; investigate agents in \"detect only\" mode.",
              "z": "Single value (quarantine %), Pie chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EDR TA → Malware.\n• Ensure the following data sources are available: `action`, `vendor_product`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBreak out by OS; investigate agents in \"detect only\" mode.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=malware\n| stats count(eval(match(action,\"blocked|quarantined|deleted\"))) as ok count as tot\n| eval rate=round(ok/tot*100,1)\n| where rate < 85\n```\n\nUnderstanding this SPL\n\n**Quarantine Success Rate** — Ratio of `action=blocked` or vendor quarantine to total malware events — failed containment drives risk.\n\nDocumented **Data sources**: `action`, `vendor_product`. **App/TA** (typical add-on context): EDR TA → Malware. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where rate < 85` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count as contained from datamodel=Malware.Malware_Attacks where Malware_Attacks.action IN (\"blocked\",\"quarantined\",\"deleted\") by Malware_Attacks.dest\n| join type=left max=1 Malware_Attacks.dest [\n    | tstats `summariesonly` count as total from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest\n  ]\n| eval quarantine_rate=round(if(total>0, 100*contained/total, 0),1)\n| where quarantine_rate < 85\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Quarantine Success Rate** — Ratio of `action=blocked` or vendor quarantine to total malware events — failed containment drives risk.\n\nDocumented **Data sources**: `action`, `vendor_product`. **App/TA** (typical add-on context): EDR TA → Malware. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• `eval` defines or adjusts **quarantine_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where quarantine_rate < 85` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (quarantine %), Pie chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Quarantine Success Rate\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats `summariesonly` count as contained from datamodel=Malware.Malware_Attacks where Malware_Attacks.action IN (\"blocked\",\"quarantined\",\"deleted\") by Malware_Attacks.dest\n| join type=left max=1 Malware_Attacks.dest [\n    | tstats `summariesonly` count as total from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest\n  ]\n| eval quarantine_rate=round(if(total>0, 100*contained/total, 0),1)\n| where quarantine_rate < 85",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.17",
              "n": "Recurring Infection (Same Host)",
              "c": "critical",
              "f": "intermediate",
              "v": "Repeat malware on same `dest` indicates failed remediation or persistent foothold.",
              "t": "Malware CIM",
              "d": "`Malware_Attacks.dest`, `signature`",
              "q": "index=security tag=malware\n| stats count by dest, signature\n| where count>3",
              "m": "Auto-ticket reimage workflow when count > threshold in 7d.",
              "z": "Table (repeat offenders), Bar chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Malware CIM.\n• Ensure the following data sources are available: `Malware_Attacks.dest`, `signature`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAuto-ticket reimage workflow when count > threshold in 7d.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=malware\n| stats count by dest, signature\n| where count>3\n```\n\nUnderstanding this SPL\n\n**Recurring Infection (Same Host)** — Repeat malware on same `dest` indicates failed remediation or persistent foothold.\n\nDocumented **Data sources**: `Malware_Attacks.dest`, `signature`. **App/TA** (typical add-on context): Malware CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest, signature** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>3` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest, Malware_Attacks.signature\n| where count>3\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Recurring Infection (Same Host)** — Repeat malware on same `dest` indicates failed remediation or persistent foothold.\n\nDocumented **Data sources**: `Malware_Attacks.dest`, `signature`. **App/TA** (typical add-on context): Malware CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Filters the current rows with `where count>3` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (repeat offenders), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Recurring Infection (Same Host)\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest, Malware_Attacks.signature\n| where count>3",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.18",
              "n": "Malware Family Distribution",
              "c": "medium",
              "f": "beginner",
              "v": "Shifts toward ransomware or infostealer families prioritize IR playbooks.",
              "t": "Malware CIM",
              "d": "`signature` or `user` field for family tags",
              "q": "index=security tag=malware\n| rex field=signature \"(?<family>[A-Za-z0-9]+)\"\n| stats count by family\n| sort -count",
              "m": "Maintain `malware_family_lookup` for vendor-specific strings.",
              "z": "Pie chart (families), Treemap.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Malware CIM.\n• Ensure the following data sources are available: `signature` or `user` field for family tags.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain `malware_family_lookup` for vendor-specific strings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=malware\n| rex field=signature \"(?<family>[A-Za-z0-9]+)\"\n| stats count by family\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Malware Family Distribution** — Shifts toward ransomware or infostealer families prioritize IR playbooks.\n\nDocumented **Data sources**: `signature` or `user` field for family tags. **App/TA** (typical add-on context): Malware CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by family** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Malware.Malware_Attacks by Malware_Attacks.signature\n| sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Malware Family Distribution** — Shifts toward ransomware or infostealer families prioritize IR playbooks.\n\nDocumented **Data sources**: `signature` or `user` field for family tags. **App/TA** (typical add-on context): Malware CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (families), Treemap.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Malware Family Distribution\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Malware.Malware_Attacks by Malware_Attacks.signature\n| sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.19",
              "n": "Endpoint Malware Coverage",
              "c": "high",
              "f": "intermediate",
              "v": "Hosts with no malware events in 30d may be offline, mis-deployed agents, or blind spots — compare to CMDB.",
              "t": "Malware + Endpoint inventory",
              "d": "`Malware_Attacks.dest` distinct vs. `endpoint:inventory`",
              "q": "index=security tag=malware earliest=-30d\n| stats dc(dest) as reporting_hosts\n| appendcols [| inputlookup cmdb_servers.csv | stats dc(hostname) as total_managed ]\n| eval coverage_pct=round(reporting_hosts/total_managed*100,1)",
              "m": "Left join finds gaps; alert on critical servers missing telemetry.",
              "z": "Single value (coverage %), Table (missing hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Malware + Endpoint inventory.\n• Ensure the following data sources are available: `Malware_Attacks.dest` distinct vs. `endpoint:inventory`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLeft join finds gaps; alert on critical servers missing telemetry.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=malware earliest=-30d\n| stats dc(dest) as reporting_hosts\n| appendcols [| inputlookup cmdb_servers.csv | stats dc(hostname) as total_managed ]\n| eval coverage_pct=round(reporting_hosts/total_managed*100,1)\n```\n\nUnderstanding this SPL\n\n**Endpoint Malware Coverage** — Hosts with no malware events in 30d may be offline, mis-deployed agents, or blind spots — compare to CMDB.\n\nDocumented **Data sources**: `Malware_Attacks.dest` distinct vs. `endpoint:inventory`. **App/TA** (typical add-on context): Malware + Endpoint inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, time bounds, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Adds columns from a subsearch with `appendcols`.\n• `eval` defines or adjusts **coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` dc(Malware_Attacks.dest) as h from datamodel=Malware.Malware_Attacks\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Endpoint Malware Coverage** — Hosts with no malware events in 30d may be offline, mis-deployed agents, or blind spots — compare to CMDB.\n\nDocumented **Data sources**: `Malware_Attacks.dest` distinct vs. `endpoint:inventory`. **App/TA** (typical add-on context): Malware + Endpoint inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Single value (coverage %), Table (missing hosts).",
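\n\nOptional missing-hosts sketch (illustrative; assumes the same `cmdb_servers.csv` lookup with its `hostname` column): to produce the missing-hosts table rather than only the percentage, anti-join the CMDB against hosts that reported malware telemetry in the window.\n\n```spl\n| inputlookup cmdb_servers.csv\n| fields hostname\n| search NOT [ search index=security tag=malware earliest=-30d\n    | stats count by dest\n    | rename dest as hostname\n    | fields hostname ]\n```\n\nEach remaining row is a managed host with no malware telemetry in 30 days; these are the candidates to chase as offline or mis-deployed agents.",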
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Endpoint Malware Coverage\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats `summariesonly` dc(Malware_Attacks.dest) as h from datamodel=Malware.Malware_Attacks",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.20",
              "n": "IDS Alert Severity Trending",
              "c": "high",
              "f": "beginner",
              "v": "Tracks IDS/IPS `severity` mix over time for tuning and attack campaigns.",
              "t": "Snort, Suricata, NGFW IPS → Intrusion_Detection",
              "d": "`IDS_Attacks`",
              "q": "index=security tag=ids tag=attack\n| timechart span=1h count by severity",
              "m": "Correlate with change windows; drop noisy signatures via ES adaptive response.",
              "z": "Stacked area (severity), Line chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Snort, Suricata, NGFW IPS → Intrusion_Detection.\n• Ensure the following data sources are available: `IDS_Attacks`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate with change windows; drop noisy signatures via ES adaptive response.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=ids tag=attack\n| timechart span=1h count by severity\n```\n\nUnderstanding this SPL\n\n**IDS Alert Severity Trending** — Tracks IDS/IPS `severity` mix over time for tuning and attack campaigns.\n\nDocumented **Data sources**: `IDS_Attacks`. **App/TA** (typical add-on context): Snort, Suricata, NGFW IPS → Intrusion_Detection. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by severity** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.severity, _time span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IDS Alert Severity Trending** — Tracks IDS/IPS `severity` mix over time for tuning and attack campaigns.\n\nDocumented **Data sources**: `IDS_Attacks`. **App/TA** (typical add-on context): Snort, Suricata, NGFW IPS → Intrusion_Detection. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area (severity), Line chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"IDS Alert Severity Trending\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.severity, _time span=1h",
              "e": [
                "suricata"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.21",
              "n": "Signature Hit Analysis",
              "c": "medium",
              "f": "beginner",
              "v": "Top `signature` hits drive tuning priority and threat hunting focus.",
              "t": "IDS CIM",
              "d": "`IDS_Attacks.signature`",
              "q": "index=security tag=ids\n| stats count by signature, category\n| sort 50 -count",
              "m": "Join CVE reference lookup for prioritization.",
              "z": "Bar chart (top signatures), Table.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IDS CIM.\n• Ensure the following data sources are available: `IDS_Attacks.signature`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nJoin CVE reference lookup for prioritization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=ids\n| stats count by signature, category\n| sort 50 -count\n```\n\nUnderstanding this SPL\n\n**Signature Hit Analysis** — Top `signature` hits drive tuning priority and threat hunting focus.\n\nDocumented **Data sources**: `IDS_Attacks.signature`. **App/TA** (typical add-on context): IDS CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by signature, category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.signature, IDS_Attacks.category\n| sort 50 -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Signature Hit Analysis** — Top `signature` hits drive tuning priority and threat hunting focus.\n\nDocumented **Data sources**: `IDS_Attacks.signature`. **App/TA** (typical add-on context): IDS CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top signatures), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Signature Hit Analysis\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.signature, IDS_Attacks.category\n| sort 50 -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.22",
              "n": "False Positive Ratio (IDS CIM)",
              "c": "medium",
              "f": "advanced",
              "v": "When analysts tag disposition, or vendor provides `action`/`priority`, measure noise ratio per `signature`.",
              "t": "IDS + SOAR disposition fields mapped to CIM",
              "d": "Optional `vendor_info` or custom field `disposition`",
              "q": "index=security tag=ids disposition=*\n| stats count(eval(disposition=\"false_positive\")) as fp count as tot by signature\n| eval fp_ratio=round(fp/tot*100,1)\n| where fp_ratio>40",
              "m": "Requires workflow — without disposition, proxy with low-severity bulk.",
              "z": "Table (noisy rules), Line chart (fp_ratio trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IDS + SOAR disposition fields mapped to CIM.\n• Ensure the following data sources are available: Optional `vendor_info` or custom field `disposition`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequires workflow — without disposition, proxy with low-severity bulk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=ids disposition=*\n| stats count(eval(disposition=\"false_positive\")) as fp count as tot by signature\n| eval fp_ratio=round(fp/tot*100,1)\n| where fp_ratio>40\n```\n\nUnderstanding this SPL\n\n**False Positive Ratio (IDS CIM)** — When analysts tag disposition, or vendor provides `action`/`priority`, measure noise ratio per `signature`.\n\nDocumented **Data sources**: Optional `vendor_info` or custom field `disposition`. **App/TA** (typical add-on context): IDS + SOAR disposition fields mapped to CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by signature** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fp_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fp_ratio>40` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Intrusion_Detection.IDS_Attacks where IDS_Attacks.severity IN (\"low\",\"informational\") by IDS_Attacks.signature\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**False Positive Ratio (IDS CIM)** — When analysts tag disposition, or vendor provides `action`/`priority`, measure noise ratio per `signature`.\n\nDocumented **Data sources**: Optional `vendor_info` or custom field `disposition`. **App/TA** (typical add-on context): IDS + SOAR disposition fields mapped to CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (noisy rules), Line chart (fp_ratio trend).
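\n\nOptional trend sketch (illustrative; assumes the same custom `disposition` field is populated): to drive the recommended fp_ratio trend line chart, compute the overall false-positive ratio per day.\n\n```spl\nindex=security tag=ids disposition=*\n| timechart span=1d count(eval(disposition=\"false_positive\")) as fp count as tot\n| eval fp_ratio=round(100*fp/tot,1)\n| fields _time fp_ratio\n```\n\nSave it alongside the per-signature table so tuning work shows up as a falling ratio over time.",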
              "script": "",
              "premium": "Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same idea as our false-positive ratio for intrusion alerts: once data from different products is lined up the same way, we help the team see noise early, compare tools fairly, and avoid a custom report for every vendor.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Intrusion_Detection.IDS_Attacks where IDS_Attacks.severity IN (\"low\",\"informational\") by IDS_Attacks.signature",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.23",
              "n": "Attack Vector Distribution",
              "c": "high",
              "f": "beginner",
              "v": "`category` or `ids_type` field shows web vs. email vs. network vector concentration.",
              "t": "IDS CIM",
              "d": "`IDS_Attacks.category`",
              "q": "index=security tag=ids\n| stats count by category\n| sort -count",
              "m": "Map to MITRE tactics for reporting.",
              "z": "Pie chart (vectors), Sunburst.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IDS CIM.\n• Ensure the following data sources are available: `IDS_Attacks.category`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap to MITRE tactics for reporting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=ids\n| stats count by category\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Attack Vector Distribution** — `category` or `ids_type` field shows web vs. email vs. network vector concentration.\n\nDocumented **Data sources**: `IDS_Attacks.category`. **App/TA** (typical add-on context): IDS CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.category\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Attack Vector Distribution** — `category` or `ids_type` field shows web vs. email vs. network vector concentration.\n\nDocumented **Data sources**: `IDS_Attacks.category`. **App/TA** (typical add-on context): IDS CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (vectors), Sunburst.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Attack Vector Distribution\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.category",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.24",
              "n": "IDS Signature Update Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Sensors reporting stale `vendor_severity` or outdated rule versions — map vendor feed metadata to custom index if not in CIM.",
              "t": "IDS management plane logs (parallel to CIM for detections)",
              "d": "Non-CIM vendor logs + CIM for validation",
              "q": "index=ids_mgmt sourcetype=\"suricata:engine\"\n| where rules_age_days>7 OR engine_version!=expected_version\n| stats count by sensor_id",
              "m": "Pair with `tstats` on recent `IDS_Attacks` volume drop.",
              "z": "Table (stale sensors), Single value (max rules_age).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IDS management plane logs (parallel to CIM for detections).\n• Ensure the following data sources are available: Non-CIM vendor logs + CIM for validation.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPair with `tstats` on recent `IDS_Attacks` volume drop.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ids_mgmt sourcetype=\"suricata:engine\"\n| where rules_age_days>7 OR engine_version!=expected_version\n| stats count by sensor_id\n```\n\nUnderstanding this SPL\n\n**IDS Signature Update Compliance** — Sensors reporting stale `vendor_severity` or outdated rule versions — map vendor feed metadata to custom index if not in CIM.\n\nDocumented **Data sources**: Non-CIM vendor logs + CIM for validation. **App/TA** (typical add-on context): IDS management plane logs (parallel to CIM for detections). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ids_mgmt; **sourcetype**: suricata:engine. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ids_mgmt, sourcetype=\"suricata:engine\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where rules_age_days>7 OR engine_version!=expected_version` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by sensor_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` latest(_time) as last_hit from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src\n| eval days_since_last=(now()-last_hit)/86400\n| where days_since_last>7\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IDS Signature Update Compliance** — Sensors reporting stale `vendor_severity` or outdated rule versions — map vendor feed metadata to custom index if not in CIM.\n\nDocumented **Data sources**: Non-CIM vendor logs + CIM for validation. **App/TA** (typical add-on context): IDS management plane logs (parallel to CIM for detections). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• `eval` defines or adjusts **days_since_last** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since_last>7` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale sensors), Single value (max rules_age_days).
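\n\nOptional volume-drop sketch (illustrative, following the implementation note to pair this with a `tstats` check on recent `IDS_Attacks` volume): flag sensors whose detection volume has collapsed, a common symptom of a stale or broken rule set. The 24-hour window and the threshold of 10 events are assumptions; tune both to your sensors' normal baseline.\n\n```spl\n| tstats `summariesonly` count as hits_24h from datamodel=Intrusion_Detection.IDS_Attacks where earliest=-24h by IDS_Attacks.src\n| rename IDS_Attacks.src as sensor\n| where hits_24h < 10\n```\n\nCorrelate flagged sensors with the management-plane search above before treating a quiet sensor as broken.",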
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"IDS Signature Update Compliance\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Intrusion_Detection (detections) + custom"
              ],
              "qs": "| tstats `summariesonly` latest(_time) as last_hit from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src\n| eval days_since_last=(now()-last_hit)/86400\n| where days_since_last>7",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.25",
              "n": "Vulnerability Age Tracking",
              "c": "high",
              "f": "beginner",
              "v": "Time open since `first_found` drives SLA and board reporting.",
              "t": "Tenable, Qualys → Vulnerabilities",
              "d": "`Vulnerabilities` CIM with `status`, `cve`",
              "q": "index=security tag=vulnerability status=open\n| eval age_days=(now()-strptime(first_found,\"%Y-%m-%d\"))/86400\n| where age_days>90\n| stats sum(age_days) as total_age by dest, cve",
              "m": "Join CVSS for prioritization; exclude accepted risks with GRC ID.",
              "z": "Histogram (age), Table (oldest 20).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tenable, Qualys → Vulnerabilities.\n• Ensure the following data sources are available: `Vulnerabilities` CIM with `status`, `cve`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nJoin CVSS for prioritization; exclude accepted risks with GRC ID.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=vulnerability status=open\n| eval age_days=(now()-strptime(first_found,\"%Y-%m-%d\"))/86400\n| where age_days>90\n| stats sum(age_days) as total_age by dest, cve\n```\n\nUnderstanding this SPL\n\n**Vulnerability Age Tracking** — Time open since `first_found` drives SLA and board reporting.\n\nDocumented **Data sources**: `Vulnerabilities` CIM with `status`, `cve`. **App/TA** (typical add-on context): Tenable, Qualys → Vulnerabilities. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>90` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by dest, cve** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` latest(Vulnerabilities.first_found) as ff from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest, Vulnerabilities.cve\n| eval age_days=(now()-ff)/86400\n| where age_days>90\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Vulnerability Age Tracking** — Time open since `first_found` drives SLA and board reporting.\n\nDocumented **Data sources**: `Vulnerabilities` CIM with `status`, `cve`. **App/TA** (typical add-on context): Tenable, Qualys → Vulnerabilities. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>90` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (age), Table (oldest 20).
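\n\nOptional SLA-band sketch (illustrative; the 30/60/90-day band edges are assumptions, so align them with your remediation SLAs): bucket the open backlog by age to feed the histogram view.\n\n```spl\nindex=security tag=vulnerability status=open\n| eval age_days=round((now()-strptime(first_found,\"%Y-%m-%d\"))/86400,0)\n| eval sla_band=case(age_days<=30,\"0-30\", age_days<=60,\"31-60\", age_days<=90,\"61-90\", true(),\"90+\")\n| stats count by sla_band\n```\n\nPair the bands with the oldest-20 table so leadership sees both the distribution and the worst offenders.",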
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Vulnerability Age Tracking\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Risk"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats `summariesonly` latest(Vulnerabilities.first_found) as ff from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest, Vulnerabilities.cve\n| eval age_days=(now()-ff)/86400\n| where age_days>90",
              "e": [
                "qualys",
                "tenable"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.26",
              "n": "Remediation Velocity",
              "c": "high",
              "f": "intermediate",
              "v": "Median time from `first_found` to `status=closed` by team — CIM-normalized across scanners.",
              "t": "Vulnerabilities CIM",
              "d": "State change events or periodic snapshots",
              "q": "index=security tag=vulnerability status=closed\n| eval mttr=(closed_time-first_found_epoch)/3600\n| stats median(mttr) as med_hrs by owner_team",
              "m": "Requires closed timestamp in data; else use ticket linkage.",
              "z": "Bar chart (median MTTR by team), Trend.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vulnerabilities CIM.\n• Ensure the following data sources are available: State change events or periodic snapshots.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequires closed timestamp in data; else use ticket linkage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=vulnerability status=closed\n| eval mttr=(closed_time-first_found_epoch)/3600\n| stats median(mttr) as med_hrs by owner_team\n```\n\nUnderstanding this SPL\n\n**Remediation Velocity** — Median time from `first_found` to `status=closed` by team — CIM-normalized across scanners.\n\nDocumented **Data sources**: State change events or periodic snapshots. **App/TA** (typical add-on context): Vulnerabilities CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mttr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by owner_team** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Vulnerabilities.Vulnerabilities where Vulnerabilities.status=\"closed\" by Vulnerabilities.dest\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Remediation Velocity** — Median time from `first_found` to `status=closed` by team — CIM-normalized across scanners.\n\nDocumented **Data sources**: State change events or periodic snapshots. **App/TA** (typical add-on context): Vulnerabilities CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (median MTTR by team), Trend.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Remediation Velocity\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Risk"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Vulnerabilities.Vulnerabilities where Vulnerabilities.status=\"closed\" by Vulnerabilities.dest",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.27",
              "n": "CVSS Distribution",
              "c": "medium",
              "f": "beginner",
              "v": "Histogram of `severity` / CVSS for backlog risk profile.",
              "t": "Vulnerabilities CIM",
              "d": "`severity` field (critical/high/medium/low) or numeric `cvss`",
              "q": "index=security tag=vulnerability status=open\n| stats count by severity\n| sort severity",
              "m": "Filter internet-exposed assets via lookup for criticality weighting.",
              "z": "Bar chart (severity counts), Pie chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vulnerabilities CIM.\n• Ensure the following data sources are available: `severity` field (critical/high/medium/low) or numeric `cvss`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFilter internet-exposed assets via lookup for criticality weighting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=vulnerability status=open\n| stats count by severity\n| sort severity\n```\n\nUnderstanding this SPL\n\n**CVSS Distribution** — Histogram of `severity` / CVSS for backlog risk profile.\n\nDocumented **Data sources**: `severity` field (critical/high/medium/low) or numeric `cvss`. **App/TA** (typical add-on context): Vulnerabilities CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Vulnerabilities.Vulnerabilities where Vulnerabilities.status=\"open\" by Vulnerabilities.severity\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CVSS Distribution** — Histogram of `severity` / CVSS for backlog risk profile.\n\nDocumented **Data sources**: `severity` field (critical/high/medium/low) or numeric `cvss`. **App/TA** (typical add-on context): Vulnerabilities CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (severity counts), Pie chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"CVSS Distribution\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Risk"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Vulnerabilities.Vulnerabilities where Vulnerabilities.status=\"open\" by Vulnerabilities.severity",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.28",
              "n": "Asset Risk Scoring (CIM)",
              "c": "high",
              "f": "advanced",
              "v": "Aggregate open vulns per `dest` weighted by CVSS for top-risk hosts.",
              "t": "Vulnerabilities CIM",
              "d": "`dest`, `severity`, `cve`",
              "q": "index=security tag=vulnerability status=open\n| eval w=case(severity=\"critical\",10, severity=\"high\",7, severity=\"medium\",4, 1)\n| stats sum(w) as risk_score, dc(cve) as cve_count by dest\n| sort -risk_score\n| head 100",
              "m": "Blend with threat intel for exploited-in-wild CVEs (2× weight).",
              "z": "Table (top risk assets), Bubble chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vulnerabilities CIM.\n• Ensure the following data sources are available: `dest`, `severity`, `cve`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBlend with threat intel for exploited-in-wild CVEs (2× weight).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=vulnerability status=open\n| eval w=case(severity=\"critical\",10, severity=\"high\",7, severity=\"medium\",4, 1)\n| stats sum(w) as risk_score, dc(cve) as cve_count by dest\n| sort -risk_score\n| head 100\n```\n\nUnderstanding this SPL\n\n**Asset Risk Scoring (CIM)** — Aggregate open vulns per `dest` weighted by CVSS for top-risk hosts.\n\nDocumented **Data sources**: `dest`, `severity`, `cve`. **App/TA** (typical add-on context): Vulnerabilities CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **w** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` dc(Vulnerabilities.cve) as cves from datamodel=Vulnerabilities.Vulnerabilities where Vulnerabilities.status=\"open\" by Vulnerabilities.dest, Vulnerabilities.severity\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Asset Risk Scoring (CIM)** — Aggregate open vulns per `dest` weighted by CVSS for top-risk hosts.\n\nDocumented **Data sources**: `dest`, `severity`, `cve`. **App/TA** (typical add-on context): Vulnerabilities CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (top risk assets), Bubble chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Asset Risk Scoring\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Risk"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats `summariesonly` dc(Vulnerabilities.cve) as cves from datamodel=Vulnerabilities.Vulnerabilities where Vulnerabilities.status=\"open\" by Vulnerabilities.dest, Vulnerabilities.severity",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.29",
              "n": "Vulnerability Scan Gap Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Assets in CMDB without recent vuln assessment event in CIM — scanning blind spot.",
              "t": "Vulnerabilities CIM + inventory",
              "d": "Last seen scan per `dest`",
              "q": "index=security tag=vulnerability\n| stats latest(_time) as last_scan by dest\n| eval gap_days=(now()-last_scan)/86400\n| where gap_days>30\n| lookup cmdb_servers hostname as dest OUTPUT owner",
              "m": "Exclude decommissioned; alert on prod tier >14d gap.",
              "z": "Table (gap list), Gauge (coverage).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vulnerabilities CIM + inventory.\n• Ensure the following data sources are available: Last seen scan per `dest`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExclude decommissioned; alert on prod tier >14d gap.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=vulnerability\n| stats latest(_time) as last_scan by dest\n| eval gap_days=(now()-last_scan)/86400\n| where gap_days>30\n| lookup cmdb_servers hostname as dest OUTPUT owner\n```\n\nUnderstanding this SPL\n\n**Vulnerability Scan Gap Detection** — Assets in CMDB without recent vuln assessment event in CIM — scanning blind spot.\n\nDocumented **Data sources**: Last seen scan per `dest`. **App/TA** (typical add-on context): Vulnerabilities CIM + inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gap_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap_days>30` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` latest(_time) as last_scan from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest\n| eval gap_days=(now()-last_scan)/86400\n| where gap_days>30\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Vulnerability Scan Gap Detection** — Assets in CMDB without recent vuln assessment event in CIM — scanning blind spot.\n\nDocumented **Data sources**: Last seen scan per `dest`. **App/TA** (typical add-on context): Vulnerabilities CIM + inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• `eval` defines or adjusts **gap_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap_days>30` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (gap list), Gauge (coverage).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Vulnerability Scan Gap Detection\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats `summariesonly` latest(_time) as last_scan from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest\n| eval gap_days=(now()-last_scan)/86400\n| where gap_days>30",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.30",
              "n": "Email Volume Anomaly",
              "c": "high",
              "f": "beginner",
              "v": "Spike in `Email` model message count may indicate spam flood, phish campaign, or misconfigured relay.",
              "t": "Exchange, Proofpoint → Email",
              "d": "`Email` CIM",
              "q": "index=security tag=email\n| timechart span=1h count\n| predict count as prediction algorithm=LLP future_timespan=0\n| where count > 2*prediction",
              "m": "Seasonal business hours in `predict`; whitelist marketing bursts via subject lookup.",
              "z": "Line chart (actual vs. predicted), Single value (ratio).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Exchange, Proofpoint → Email.\n• Ensure the following data sources are available: `Email` CIM.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSeasonal business hours in `predict`; whitelist marketing bursts via subject lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=email\n| timechart span=1h count\n| predict count as prediction algorithm=LLP future_timespan=0\n| where count > 2*prediction\n```\n\nUnderstanding this SPL\n\n**Email Volume Anomaly** — Spike in `Email` model message count may indicate spam flood, phish campaign, or misconfigured relay.\n\nDocumented **Data sources**: `Email` CIM. **App/TA** (typical add-on context): Exchange, Proofpoint → Email. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Email Volume Anomaly**): predict count as prediction algorithm=LLP future_timespan=0\n• Filters the current rows with `where count > 2*prediction` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.action, All_Email.src_user span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Email Volume Anomaly** — Spike in `Email` model message count may indicate spam flood, phish campaign, or misconfigured relay.\n\nDocumented **Data sources**: `Email` CIM. **App/TA** (typical add-on context): Exchange, Proofpoint → Email. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (actual vs. predicted), Single value (ratio).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Email Volume Anomaly\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.action, All_Email.src_user span=1h | sort - count",
              "e": [
                "exchange",
                "proofpoint"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.31",
              "n": "Attachment Type Analysis",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks `file_name` extensions / MIME for macro and archive risk.",
              "t": "Email CIM",
              "d": "`file_name`, `signature`",
              "q": "index=security tag=email\n| rex field=file_name \"(?<ext>\\.[a-zA-Z0-9]+)$\"\n| stats count by ext\n| sort -count",
              "m": "Alert on `.iso`, `.lnk` spikes from external senders.",
              "z": "Bar chart (extensions), Pie chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Email CIM.\n• Ensure the following data sources are available: `file_name`, `signature`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on `.iso`, `.lnk` spikes from external senders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=email\n| rex field=file_name \"(?<ext>\\.[a-zA-Z0-9]+)$\"\n| stats count by ext\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Attachment Type Analysis** — Tracks `file_name` extensions / MIME for macro and archive risk.\n\nDocumented **Data sources**: `file_name`, `signature`. **App/TA** (typical add-on context): Email CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by ext** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.action, All_Email.src_user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Attachment Type Analysis** — Tracks `file_name` extensions / MIME for macro and archive risk.\n\nDocumented **Data sources**: `file_name`, `signature`. **App/TA** (typical add-on context): Email CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (extensions), Pie chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Attachment Type Analysis\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.action, All_Email.src_user | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.32",
              "n": "Internal-External Mail Ratio",
              "c": "medium",
              "f": "intermediate",
              "v": "Sudden flip to mostly external recipients may indicate data exfil via email or open relay abuse.",
              "t": "Email CIM",
              "d": "`recipient`, `src_user` with domain classification",
              "q": "index=security tag=email\n| eval ext=if(match(recipient,\"@ourdomain\\.com$\"),0,1)\n| timechart span=1h sum(ext) as ext_count, count as tot\n| eval ext_ratio=ext_count/tot",
              "m": "Baseline per department; alert when `ext_ratio` >2× normal.",
              "z": "Area chart (external ratio), Line chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Email CIM.\n• Ensure the following data sources are available: `recipient`, `src_user` with domain classification.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline per department; alert when `ext_ratio` >2× normal.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=email\n| eval ext=if(match(recipient,\"@ourdomain\\.com$\"),0,1)\n| timechart span=1h sum(ext) as ext_count, count as tot\n| eval ext_ratio=ext_count/tot\n```\n\nUnderstanding this SPL\n\n**Internal-External Mail Ratio** — Sudden flip to mostly external recipients may indicate data exfil via email or open relay abuse.\n\nDocumented **Data sources**: `recipient`, `src_user` with domain classification. **App/TA** (typical add-on context): Email CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ext** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **ext_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.action, All_Email.src_user span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Internal-External Mail Ratio** — Sudden flip to mostly external recipients may indicate data exfil via email or open relay abuse.\n\nDocumented **Data sources**: `recipient`, `src_user` with domain classification. **App/TA** (typical add-on context): Email CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Area chart (external ratio), Line chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Internal-External Mail Ratio\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.action, All_Email.src_user span=1h | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.33",
              "n": "Delivery Failure Trending",
              "c": "high",
              "f": "beginner",
              "v": "Rises in `action` bounce/drop indicate reputation issues, DNS problems, or attacks.",
              "t": "Email CIM with `action` mapped",
              "d": "DSN / MTA logs",
              "q": "index=security tag=email action=bounced OR status=deferred\n| timechart span=15m count by signature",
              "m": "Correlate with SPF/DKIM failures in same window.",
              "z": "Line chart (failures), Table (top recipient domains).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Email CIM with `action` mapped.\n• Ensure the following data sources are available: DSN / MTA logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate with SPF/DKIM failures in same window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=email action=bounced OR status=deferred\n| timechart span=15m count by signature\n```\n\nUnderstanding this SPL\n\n**Delivery Failure Trending** — Rises in `action` bounce/drop indicate reputation issues, DNS problems, or attacks.\n\nDocumented **Data sources**: DSN / MTA logs. **App/TA** (typical add-on context): Email CIM with `action` mapped. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by signature** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.action, All_Email.src_user span=15m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Delivery Failure Trending** — Rises in `action` bounce/drop indicate reputation issues, DNS problems, or attacks.\n\nDocumented **Data sources**: DSN / MTA logs. **App/TA** (typical add-on context): Email CIM with `action` mapped. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failures), Table (top recipient domains).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Delivery Failure Trending\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.action, All_Email.src_user span=15m | sort - count",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.34",
              "n": "HTTP Method Distribution",
              "c": "medium",
              "f": "beginner",
              "v": "Surge in `PUT`/`DELETE` or rare methods may indicate API abuse.",
              "t": "WAF, proxy → Web",
              "d": "`http_method` in `Web` model",
              "q": "index=security tag=web\n| stats count by http_method\n| sort -count",
              "m": "Per-`url` allowlists for admin APIs.",
              "z": "Bar chart (methods), Pie chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: WAF, proxy → Web.\n• Ensure the following data sources are available: `http_method` in `Web` model.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPer-`url` allowlists for admin APIs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=web\n| stats count by http_method\n| sort -count\n```\n\nUnderstanding this SPL\n\n**HTTP Method Distribution** — Surge in `PUT`/`DELETE` or rare methods may indicate API abuse.\n\nDocumented **Data sources**: `http_method` in `Web` model. **App/TA** (typical add-on context): WAF, proxy → Web. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by http_method** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Web.Web by Web.http_method\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**HTTP Method Distribution** — Surge in `PUT`/`DELETE` or rare methods may indicate API abuse.\n\nDocumented **Data sources**: `http_method` in `Web` model. **App/TA** (typical add-on context): WAF, proxy → Web. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (methods), Pie chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"HTTP Method Distribution\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Web.Web by Web.http_method",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.35",
              "n": "HTTP Response Code Trending",
              "c": "high",
              "f": "beginner",
              "v": "5xx and 4xx rates from CIM `status` for SLO dashboards.",
              "t": "Web CIM",
              "d": "`status`, `url`",
              "q": "index=security tag=web\n| timechart span=5m count(eval(status>=500)) as s5 count(eval(status>=400 AND status<500)) as s4",
              "m": "Split by `app` or `url` top paths; alert on s5 >1% of traffic.",
              "z": "Line chart (5xx/4xx), Single value (error rate).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Web CIM.\n• Ensure the following data sources are available: `status`, `url`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSplit by `app` or `url` top paths; alert on s5 >1% of traffic.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=web\n| timechart span=5m count(eval(status>=500)) as s5 count(eval(status>=400 AND status<500)) as s4\n```\n\nUnderstanding this SPL\n\n**HTTP Response Code Trending** — 5xx and 4xx rates from CIM `status` for SLO dashboards.\n\nDocumented **Data sources**: `status`, `url`. **App/TA** (typical add-on context): Web CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Web.Web where Web.status>=400 by Web.status, _time span=5m\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**HTTP Response Code Trending** — 5xx and 4xx rates from CIM `status` for SLO dashboards.\n\nDocumented **Data sources**: `status`, `url`. **App/TA** (typical add-on context): Web CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (5xx/4xx), Single value (error rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"HTTP Response Code Trending\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Web.Web where Web.status>=400 by Web.status, _time span=5m",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.36",
              "n": "User Agent Analysis",
              "c": "medium",
              "f": "beginner",
              "v": "Rare or known-bad `http_user_agent` strings — scraping and legacy clients.",
              "t": "Web CIM",
              "d": "`http_user_agent`",
              "q": "index=security tag=web\n| stats count by http_user_agent\n| sort 100 -count",
              "m": "Lookup `ua_badlist`; alert on empty or SQLi-like UAs.",
              "z": "Table (top UAs), Word cloud.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Web CIM.\n• Ensure the following data sources are available: `http_user_agent`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLookup `ua_badlist`; alert on empty or SQLi-like UAs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=web\n| stats count by http_user_agent\n| sort 100 -count\n```\n\nUnderstanding this SPL\n\n**User Agent Analysis** — Rare or known-bad `http_user_agent` strings — scraping and legacy clients.\n\nDocumented **Data sources**: `http_user_agent`. **App/TA** (typical add-on context): Web CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by http_user_agent** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Web.Web by Web.http_user_agent\n| sort 100 -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**User Agent Analysis** — Rare or known-bad `http_user_agent` strings — scraping and legacy clients.\n\nDocumented **Data sources**: `http_user_agent`. **App/TA** (typical add-on context): Web CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top UAs), Word cloud.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"User Agent Analysis\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Web.Web by Web.http_user_agent\n| sort 100 -count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.37",
              "n": "Bandwidth by Category",
              "c": "medium",
              "f": "intermediate",
              "v": "When `app`/`url` category exists in CIM or via lookup — top GB by category.",
              "t": "Proxy → Web / Network_Traffic",
              "d": "`bytes_in`, `bytes_out` on Web events",
              "q": "index=security tag=web\n| eval bytes=bytes_in+bytes_out\n| lookup proxy_category url OUTPUT category\n| stats sum(bytes) as b by category\n| sort -b",
              "m": "Normalize CDN cache hits; exclude software update domains.",
              "z": "Bar chart (GB by category), Treemap.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Proxy → Web / Network_Traffic.\n• Ensure the following data sources are available: `bytes_in`, `bytes_out` on Web events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize CDN cache hits; exclude software update domains.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=web\n| eval bytes=bytes_in+bytes_out\n| lookup proxy_category url OUTPUT category\n| stats sum(bytes) as b by category\n| sort -b\n```\n\nUnderstanding this SPL\n\n**Bandwidth by Category** — When `app`/`url` category exists in CIM or via lookup — top GB by category.\n\nDocumented **Data sources**: `bytes_in`, `bytes_out` on Web events. **App/TA** (typical add-on context): Proxy → Web / Network_Traffic. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(Web.bytes_in) as bi, sum(Web.bytes_out) as bo from datamodel=Web.Web by Web.url\n| eval bytes=bi+bo\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Bandwidth by Category** — When `app`/`url` category exists in CIM or via lookup — top GB by category.\n\nDocumented **Data sources**: `bytes_in`, `bytes_out` on Web events. **App/TA** (typical add-on context): Proxy → Web / Network_Traffic. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• `eval` defines or adjusts **bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) 
as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (GB by category), Treemap.",
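              "_m_example": "Illustrative sketch for the Step 1 hint about excluding software update domains; the domain list here is an assumption to adapt to your environment:\n\n```spl\nindex=security tag=web\n| search NOT url IN (\"*windowsupdate.com*\", \"*swcdn.apple.com*\", \"*update.googleapis.com*\")\n| eval bytes=bytes_in+bytes_out\n| lookup proxy_category url OUTPUT category\n| stats sum(bytes) as b by category\n| sort -b\n```\nSwap in whatever CDN and update domains dominate your own top-talker list.",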
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Bandwidth by Category\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` sum(Web.bytes_in) as bi, sum(Web.bytes_out) as bo from datamodel=Web.Web by Web.url\n| eval bytes=bi+bo",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.38",
              "n": "Web Application Error Rate",
              "c": "high",
              "f": "beginner",
              "v": "Application-level errors in `status` 500 series or custom `error` field in CIM extensions.",
              "t": "Web CIM",
              "d": "`status=500`, `url`",
              "q": "index=security tag=web status>=500\n| timechart span=5m count by url\n| where count>100",
              "m": "Group by `app` token in URL path; tie to deployment markers.",
              "z": "Line chart (5xx by app), Table (top paths).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Web CIM.\n• Ensure the following data sources are available: `status=500`, `url`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nGroup by `app` token in URL path; tie to deployment markers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=web status>=500\n| timechart span=5m count by url\n| where count>100\n```\n\nUnderstanding this SPL\n\n**Web Application Error Rate** — Application-level errors in `status` 500 series or custom `error` field in CIM extensions.\n\nDocumented **Data sources**: `status=500`, `url`. **App/TA** (typical add-on context): Web CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by url** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where count>100` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Web.Web where Web.status>=500 by Web.url, _time span=5m\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Web Application Error Rate** — Application-level errors in `status` 500 series or custom `error` field in CIM extensions.\n\nDocumented **Data sources**: `status=500`, `url`. **App/TA** (typical add-on context): Web CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (5xx by app), Table (top paths).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Web Application Error Rate\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Web.Web where Web.status>=500 by Web.url, _time span=5m",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.39",
              "n": "Change Velocity Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Count of `All_Changes` per day — spikes suggest emergency change storms or automation gone wrong.",
              "t": "ServiceNow, Ansible → Change",
              "d": "`All_Changes` with `change_type`",
              "q": "index=security tag=change\n| timechart span=1d count by change_type",
              "m": "Compare to CAB-approved schedule; flag off-window bursts.",
              "z": "Line chart (changes/day), Stacked bar by type.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ServiceNow, Ansible → Change.\n• Ensure the following data sources are available: `All_Changes` with `change_type`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare to CAB-approved schedule; flag off-window bursts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=change\n| timechart span=1d count by change_type\n```\n\nUnderstanding this SPL\n\n**Change Velocity Tracking** — Count of `All_Changes` per day — spikes suggest emergency change storms or automation gone wrong.\n\nDocumented **Data sources**: `All_Changes` with `change_type`. **App/TA** (typical add-on context): ServiceNow, Ansible → Change. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by change_type** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Change.All_Changes by All_Changes.change_type, _time span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Change Velocity Tracking** — Count of `All_Changes` per day — spikes suggest emergency change storms or automation gone wrong.\n\nDocumented **Data sources**: `All_Changes` with `change_type`. **App/TA** (typical add-on context): ServiceNow, Ansible → Change. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (changes/day), Stacked bar by type.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Change Velocity Tracking\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Change.All_Changes by All_Changes.change_type, _time span=1d",
              "e": [
                "ansible",
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.40",
              "n": "Unauthorized Change Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Changes without ticket ID or with `user` not in CAB roster — map `object_attrs` to CIM.",
              "t": "Change CIM + ITSM",
              "d": "`object_id`, `user`, `change_type`",
              "q": "index=security tag=change\n| lookup cab_authorized_users user OUTPUT authorized\n| where authorized!=\"true\" OR isnull(change_ticket)\n| stats count by user, object_id",
              "m": "Auto-severity for production `object_category`.",
              "z": "Table (unauthorized), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Change CIM + ITSM.\n• Ensure the following data sources are available: `object_id`, `user`, `change_type`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAuto-severity for production `object_category`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=change\n| lookup cab_authorized_users user OUTPUT authorized\n| where authorized!=\"true\" OR isnull(change_ticket)\n| stats count by user, object_id\n```\n\nUnderstanding this SPL\n\n**Unauthorized Change Detection** — Changes without ticket ID or with `user` not in CAB roster — map `object_attrs` to CIM.\n\nDocumented **Data sources**: `object_id`, `user`, `change_type`. **App/TA** (typical add-on context): Change CIM + ITSM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where authorized!=\"true\" OR isnull(change_ticket)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, object_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Change.All_Changes where All_Changes.user!=\"cicd_svc\" by All_Changes.user, All_Changes.object_id\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Unauthorized Change Detection** — Changes without ticket ID or with `user` not in CAB roster — map `object_attrs` to CIM.\n\nDocumented **Data sources**: `object_id`, `user`, `change_type`. **App/TA** (typical add-on context): Change CIM + ITSM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (unauthorized), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Unauthorized Change Detection\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Change.All_Changes where All_Changes.user!=\"cicd_svc\" by All_Changes.user, All_Changes.object_id",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.41",
              "n": "Change-to-Incident Correlation",
              "c": "high",
              "f": "advanced",
              "v": "Incidents within 4h of `result` success on same `dest` — causal analysis for CAB feedback.",
              "t": "Change + Incident indexes",
              "d": "CIM Change timestamps vs. ITSM incidents",
              "q": "index=security tag=change result=success OR index=itsm sourcetype=\"incident:event\"\n| eval key=dest\n| transaction key maxspan=4h startswith=\"change\" endswith=\"incident\"\n| where closed_txn=0 OR duration<14400\n| stats count by change_ticket, incident_id",
              "m": "Simpler: `join` change `dest` with incident `configuration_item` within time window.",
              "z": "Table (pairs), Sankey (change → incident).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Change + Incident indexes.\n• Ensure the following data sources are available: CIM Change timestamps vs. ITSM incidents.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSimpler: `join` change `dest` with incident `configuration_item` within time window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=change result=success OR index=itsm sourcetype=\"incident:event\"\n| eval key=dest\n| transaction key maxspan=4h startswith=\"change\" endswith=\"incident\"\n| where closed_txn=0 OR duration<14400\n| stats count by change_ticket, incident_id\n```\n\nUnderstanding this SPL\n\n**Change-to-Incident Correlation** — Incidents within 4h of `result` success on same `dest` — causal analysis for CAB feedback.\n\nDocumented **Data sources**: CIM Change timestamps vs. ITSM incidents. **App/TA** (typical add-on context): Change + Incident indexes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security, itsm; **sourcetype**: incident:event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, index=itsm, sourcetype=\"incident:event\", tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• Filters the current rows with `where closed_txn=0 OR duration<14400` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by change_ticket, incident_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` latest(_time) as change_time from datamodel=Change.All_Changes where All_Changes.result=\"success\" by All_Changes.dest\n| rename All_Changes.dest as dest\n| join max=1 dest type=inner [\n    search index=itsm sourcetype=\"incident:event\" earliest=-48h\n    | stats min(_time) as inc_time by configuration_item\n    | rename configuration_item as dest\n  ]\n| eval delta_sec=inc_time-change_time\n| where delta_sec>=0 AND delta_sec<=14400\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Change-to-Incident Correlation** — Incidents within 4h of `result` success on same `dest` — causal analysis for CAB feedback.\n\nDocumented **Data sources**: CIM Change timestamps vs. ITSM incidents. **App/TA** (typical add-on context): Change + Incident indexes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **delta_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delta_sec>=0 AND delta_sec<=14400` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (pairs), Sankey (change → incident).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Change-to-Incident Correlation\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` latest(_time) as change_time from datamodel=Change.All_Changes where All_Changes.result=\"success\" by All_Changes.dest\n| rename All_Changes.dest as dest\n| join max=1 dest type=inner [\n    search index=itsm sourcetype=\"incident:event\" earliest=-48h\n    | stats min(_time) as inc_time by configuration_item\n    | rename configuration_item as dest\n  ]\n| eval delta_sec=inc_time-change_time\n| where delta_sec>=0 AND delta_sec<=14400",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.42",
              "n": "Configuration Drift Analysis",
              "c": "high",
              "f": "advanced",
              "v": "Repeated `change_type=update` on same `object_id` without matching change record — drift vs. process.",
              "t": "Change CIM, config mgmt",
              "d": "`object_id`, `object_attrs`",
              "q": "index=security tag=change change_type=update\n| stats count by object_id, user\n| where count>5\n| lookup snow_changes object_id OUTPUT ticket\n| where isnull(ticket)",
              "m": "Integrate Ansible Tower job IDs as authorized change keys.",
              "z": "Table (drift objects), Bar chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Change CIM, config mgmt.\n• Ensure the following data sources are available: `object_id`, `object_attrs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate Ansible Tower job IDs as authorized change keys.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=change change_type=update\n| stats count by object_id, user\n| where count>5\n| lookup snow_changes object_id OUTPUT ticket\n| where isnull(ticket)\n```\n\nUnderstanding this SPL\n\n**Configuration Drift Analysis** — Repeated `change_type=update` on same `object_id` without matching change record — drift vs. process.\n\nDocumented **Data sources**: `object_id`, `object_attrs`. **App/TA** (typical add-on context): Change CIM, config mgmt. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by object_id, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>5` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(ticket)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Change.All_Changes where All_Changes.change_type=\"update\" by All_Changes.object_id, All_Changes.user\n| where count>5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Configuration Drift Analysis** — Repeated `change_type=update` on same `object_id` without matching change record — drift vs. process.\n\nDocumented **Data sources**: `object_id`, `object_attrs`. **App/TA** (typical add-on context): Change CIM, config mgmt. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Filters the current rows with `where count>5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (drift objects), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Configuration Drift Analysis\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Change.All_Changes where All_Changes.change_type=\"update\" by All_Changes.object_id, All_Changes.user\n| where count>5",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.43",
              "n": "Change Window Violations",
              "c": "high",
              "f": "intermediate",
              "v": "Production changes outside approved windows — `_time` vs. lookup `allowed_window`.",
              "t": "Change CIM",
              "d": "`result`, `_time`",
              "q": "index=security tag=change dest_zone=\"prod\"\n| eval dow=strftime(_time,\"%u\"), hr=strftime(_time,\"%H\")\n| lookup change_windows dow hr OUTPUT allowed\n| where allowed!=\"true\"\n| stats count by user, object_id",
              "m": "Emergency flag in ticket bypasses alert.",
              "z": "Table (violations), Calendar heatmap.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Change CIM.\n• Ensure the following data sources are available: `result`, `_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEmergency flag in ticket bypasses alert.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=change dest_zone=\"prod\"\n| eval dow=strftime(_time,\"%u\"), hr=strftime(_time,\"%H\")\n| lookup change_windows dow hr OUTPUT allowed\n| where allowed!=\"true\"\n| stats count by user, object_id\n```\n\nUnderstanding this SPL\n\n**Change Window Violations** — Production changes outside approved windows — `_time` vs. lookup `allowed_window`.\n\nDocumented **Data sources**: `result`, `_time`. **App/TA** (typical add-on context): Change CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **dow** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where allowed!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, object_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Change.All_Changes by All_Changes.user, _time span=1h\n| eval hr=strftime(_time,\"%H\")\n| where hr<5 OR hr>23\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Change Window Violations** — Production changes outside approved windows — `_time` vs. lookup `allowed_window`.\n\nDocumented **Data sources**: `result`, `_time`. **App/TA** (typical add-on context): Change CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• `eval` defines or adjusts **hr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hr<5 OR hr>23` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Calendar heatmap.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Change Window Violations\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Change.All_Changes by All_Changes.user, _time span=1h\n| eval hr=strftime(_time,\"%H\")\n| where hr<5 OR hr>23",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.44",
              "n": "Process Execution Anomaly",
              "c": "critical",
              "f": "intermediate",
              "v": "Rare `process_name` on endpoint fleet — living-off-the-land and malware binaries.",
              "t": "Sysmon, EDR → Endpoint.Processes",
              "d": "`Processes` in Endpoint model",
              "q": "index=security tag=endpoint tag=process\n| stats count by process_name, host\n| eventstats median(count) as med by process_name\n| where count < med*0.01 AND count>0",
              "m": "Focus on `process_path` under user-writable dirs; elevate if `parent_process_name` is office apps.",
              "z": "Table (rare processes), Histogram.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Sysmon, EDR → Endpoint.Processes.\n• Ensure the following data sources are available: `Processes` in Endpoint model.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFocus on `process_path` under user-writable dirs; elevate if `parent_process_name` is office apps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=endpoint tag=process\n| stats count by process_name, host\n| eventstats median(count) as med by process_name\n| where count < med*0.01 AND count>0\n```\n\nUnderstanding this SPL\n\n**Process Execution Anomaly** — Rare `process_name` on endpoint fleet — living-off-the-land and malware binaries.\n\nDocumented **Data sources**: `Processes` in Endpoint model. **App/TA** (typical add-on context): Sysmon, EDR → Endpoint.Processes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by process_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by process_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count < med*0.01 AND count>0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Endpoint.Processes by Processes.process_name, Processes.dest\n| eventstats sum(count) as tot by Processes.process_name\n| eval pct=count/tot\n| where pct<0.001\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Process Execution Anomaly** — Rare `process_name` on endpoint fleet — living-off-the-land and malware binaries.\n\nDocumented **Data sources**: `Processes` in Endpoint model. **App/TA** (typical add-on context): Sysmon, EDR → Endpoint.Processes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• `eventstats` rolls up events into metrics; results are split **by Processes.process_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct<0.001` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rare processes), Histogram.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Process Execution Anomaly\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Endpoint.Processes by Processes.process_name, Processes.dest\n| eventstats sum(count) as tot by Processes.process_name\n| eval pct=count/tot\n| where pct<0.001",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.45",
              "n": "Service State Changes",
              "c": "high",
              "f": "beginner",
              "v": "Stopping security services (AV, EDR) or critical agents — `Endpoint.Services` state changes.",
              "t": "Windows service control, EDR → Endpoint",
              "d": "`service_name`, `action`",
              "q": "index=security tag=endpoint service_name=\"*defender*\" OR service_name=\"*edr*\"\n| search action=stopped OR action=disabled\n| stats count by host, service_name, user",
              "m": "Correlate with approved maintenance windows.",
              "z": "Timeline (stops), Table.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows service control, EDR → Endpoint.\n• Ensure the following data sources are available: `service_name`, `action`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate with approved maintenance windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=endpoint service_name=\"*defender*\" OR service_name=\"*edr*\"\n| search action=stopped OR action=disabled\n| stats count by host, service_name, user\n```\n\nUnderstanding this SPL\n\n**Service State Changes** — Stopping security services (AV, EDR) or critical agents — `Endpoint.Services` state changes.\n\nDocumented **Data sources**: `service_name`, `action`. **App/TA** (typical add-on context): Windows service control, EDR → Endpoint. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, service_name, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Endpoint.Services where match(Services.service_name,\"(?i)(WinDefend|MsMpSvc|edr|defender)\") AND match(Services.status,\"(?i)stopped|disabled\") by Services.dest, Services.user\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Service State Changes** — Stopping security services (AV, EDR) or critical agents — `Endpoint.Services` state changes.\n\nDocumented **Data sources**: `service_name`, `action`. **App/TA** (typical add-on context): Windows service control, EDR → Endpoint. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Services` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (stops), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Service State Changes\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Endpoint.Services where match(Services.service_name,\"(?i)(WinDefend|MsMpSvc|edr|defender)\") AND match(Services.status,\"(?i)stopped|disabled\") by Services.dest, Services.user",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.46",
              "n": "Filesystem Integrity",
              "c": "high",
              "f": "intermediate",
              "v": "FIM events for critical paths (`/etc`, `C:\\Windows\\System32`) — map vendor FIM to Endpoint `Processes`/`Files` tags or parallel sourcetypes.",
              "t": "Tripwire, OSSEC, WEF → CIM-tagged filesystem events",
              "d": "`file_path`, `action=modify`",
              "q": "index=security tag=endpoint tag=filesystem action=modify\n| lookup fim_critical_paths file_path OUTPUT criticality\n| where criticality=\"high\"\n| stats count by host, file_path",
              "m": "Hash diff for high-value files; suppress during patch Tuesday.",
              "z": "Table (changes), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tripwire, OSSEC, WEF → CIM-tagged filesystem events.\n• Ensure the following data sources are available: `file_path`, `action=modify`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nHash diff for high-value files; suppress during patch Tuesday.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=endpoint tag=filesystem action=modify\n| lookup fim_critical_paths file_path OUTPUT criticality\n| where criticality=\"high\"\n| stats count by host, file_path\n```\n\nUnderstanding this SPL\n\n**Filesystem Integrity** — FIM events for critical paths (`/etc`, `C:\\Windows\\System32`) — map vendor FIM to Endpoint `Processes`/`Files` tags or parallel sourcetypes.\n\nDocumented **Data sources**: `file_path`, `action=modify`. **App/TA** (typical add-on context): Tripwire, OSSEC, WEF → CIM-tagged filesystem events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where criticality=\"high\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, file_path** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Endpoint.Processes where match(Processes.process_path,\"(?i)(etc|System32|SysWOW64)\") by Processes.dest, Processes.process_path, _time span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Filesystem Integrity** — FIM events for critical paths (`/etc`, `C:\\Windows\\System32`) — map vendor FIM to Endpoint `Processes`/`Files` tags or parallel sourcetypes.\n\nDocumented **Data sources**: `file_path`, `action=modify`. **App/TA** (typical add-on context): Tripwire, OSSEC, WEF → CIM-tagged filesystem events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (changes), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Filesystem Integrity\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Endpoint.Processes where match(Processes.process_path,\"(?i)(etc|System32|SysWOW64)\") by Processes.dest, Processes.process_path, _time span=1h",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.47",
              "n": "Registry Modification Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Persistence via `Registry` keys — `action=set` on Run keys.",
              "t": "Sysmon → Endpoint.Registry",
              "d": "`registry_path`, `process_name`",
              "q": "index=security tag=endpoint registry_path=\"*\\\\CurrentVersion\\\\Run*\"\n| stats count by host, registry_path, process_name\n| sort -count",
              "m": "Allowlist IT deployment tools; alert on `process_name` = unusual parents.",
              "z": "Table (modifications), Sunburst (path).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Sysmon → Endpoint.Registry.\n• Ensure the following data sources are available: `registry_path`, `process_name`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAllowlist IT deployment tools; alert on `process_name` = unusual parents.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=endpoint registry_path=\"*\\\\CurrentVersion\\\\Run*\"\n| stats count by host, registry_path, process_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Registry Modification Tracking** — Persistence via `Registry` keys — `action=set` on Run keys.\n\nDocumented **Data sources**: `registry_path`, `process_name`. **App/TA** (typical add-on context): Sysmon → Endpoint.Registry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, registry_path, process_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count from datamodel=Endpoint.Registry where match(Registry.registry_path,\"CurrentVersion\\\\\\\\Run\") by Registry.dest, Registry.registry_path, Registry.process_name\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Registry Modification Tracking** — Persistence via `Registry` keys — `action=set` on Run keys.\n\nDocumented **Data sources**: `registry_path`, `process_name`. **App/TA** (typical add-on context): Sysmon → Endpoint.Registry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Registry` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (modifications), Sunburst (path).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Registry Modification Tracking\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` count from datamodel=Endpoint.Registry where match(Registry.registry_path,\"CurrentVersion\\\\\\\\Run\") by Registry.dest, Registry.registry_path, Registry.process_name",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.48",
              "n": "Installed Software Changes",
              "c": "medium",
              "f": "beginner",
              "v": "New or rare `process_path` / binaries on servers — inventory or package logs mapped to Endpoint.",
              "t": "SCCM, Jamf inventory",
              "d": "Software inventory events",
              "q": "index=security tag=endpoint tag=package action=install\n| where match(package_name,\"(?i)(miner|remote|vpn)\")\n| stats count by host, package_name, user",
              "m": "Prod servers should use golden images; alert on non-approved titles.",
              "z": "Table (new software), Bar chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SCCM, Jamf inventory.\n• Ensure the following data sources are available: Software inventory events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nProd servers should use golden images; alert on non-approved titles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security tag=endpoint tag=package action=install\n| where match(package_name,\"(?i)(miner|remote|vpn)\")\n| stats count by host, package_name, user\n```\n\nUnderstanding this SPL\n\n**Installed Software Changes** — New or rare `process_path` / binaries on servers — inventory or package logs mapped to Endpoint.\n\nDocumented **Data sources**: Software inventory events. **App/TA** (typical add-on context): SCCM, Jamf inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(package_name,\"(?i)(miner|remote|vpn)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, package_name, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` dc(Processes.process_path) as paths from datamodel=Endpoint.Processes by Processes.dest, _time span=1d\n| where paths>30\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Installed Software Changes** — New or rare `process_path` / binaries on servers — inventory or package logs mapped to Endpoint.\n\nDocumented **Data sources**: Software inventory events. **App/TA** (typical add-on context): SCCM, Jamf inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Filters the current rows with `where paths>30` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (new software), Bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Installed Software Changes\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats `summariesonly` dc(Processes.process_path) as paths from datamodel=Endpoint.Processes by Processes.dest, _time span=1d\n| where paths>30",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.13.49",
              "n": "Endpoint Resource Utilization (CIM)",
              "c": "medium",
              "f": "intermediate",
              "v": "Sustained high CPU on many endpoints may indicate cryptomining or ransomware — use `Performance` DM when TAs map `cpu_load_percent` (field name varies by TA).",
              "t": "Perfmon, Telegraf → Performance CIM",
              "d": "Normalized performance metrics per `host`",
              "q": "index=perf tag=endpoint metric_name=\"cpu_pct\"\n| bin _time span=5m\n| stats perc95(value) as cpu_p95 by _time, host\n| where cpu_p95>90",
              "m": "Prefer `mstats` for metrics indexes; join CIM `Performance.host` to CMDB tier.",
              "z": "Line chart (CPU p95), Heatmap (host × time).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Perfmon, Telegraf → Performance CIM.\n• Ensure the following data sources are available: Normalized performance metrics per `host`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPrefer `mstats` for metrics indexes; join CIM `Performance.host` to CMDB tier.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=perf tag=endpoint metric_name=\"cpu_pct\"\n| bin _time span=5m\n| stats perc95(value) as cpu_p95 by _time, host\n| where cpu_p95>90\n```\n\nUnderstanding this SPL\n\n**Endpoint Resource Utilization (CIM)** — Sustained high CPU on many endpoints may indicate cryptomining or ransomware — use `Performance` DM when TAs map `cpu_load_percent` (field name varies by TA).\n\nDocumented **Data sources**: Normalized performance metrics per `host`. **App/TA** (typical add-on context): Perfmon, Telegraf → Performance CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perf.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perf, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cpu_p95>90` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Endpoint Resource Utilization (CIM)** — Sustained high CPU on many endpoints may indicate cryptomining or ransomware — use `Performance` DM when TAs map `cpu_load_percent` (field name varies by TA).\n\nDocumented **Data sources**: Normalized performance metrics per `host`. **App/TA** (typical add-on context): Perfmon, Telegraf → Performance CIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU p95), Heatmap (host × time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use the same kind of check described for \"Endpoint Resource Utilization\" once our data is shaped in a common, repeatable way across products. We help the team see trouble early, compare different tools fairly, and avoid building a one-off report for every vendor.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host span=5m | sort - count",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.1,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 49,
            "none": 0
          }
        },
        {
          "i": "10.14",
          "n": "OT Security and MITRE ATT&CK for ICS",
          "u": [
            {
              "i": "10.14.1",
              "n": "OT Security Add-on Health and Configuration Status",
              "c": "medium",
              "f": "intermediate",
              "v": "Ensures OT Security Add-on correlation searches, lookups, and MITRE mappings stay healthy so process-safety detections do not silently fail.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=_internal` scheduler/search logs, `index=ot_security` `sourcetype=\"ot:security:health\"`",
              "q": "index=_internal source=*scheduler.log status=failed savedsearch_name=\"OT*\"\n| stats count by savedsearch_name, status\n| append [\n    | rest /servicesNS/-/-/saved/searches search=\"OT*\"\n    | where disabled=1\n    | table title disabled\n  ]\n| table savedsearch_name, title, status, disabled, count",
              "m": "Deploy Splunk Enterprise Security and the OT Security Add-on. Schedule a daily check for disabled or failing correlation searches. Alert when any OT-prefixed saved search has failed or been disabled.",
              "z": "Table (search name, status, last failure), Single value (failed OT searches count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=_internal` scheduler/search logs, `index=ot_security` `sourcetype=\"ot:security:health\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Splunk Enterprise Security and the OT Security Add-on. Schedule a daily check for disabled or failing correlation searches. Alert when any OT-prefixed saved search has failed or been disabled.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal source=*scheduler.log status=failed savedsearch_name=\"OT*\"\n| stats count by savedsearch_name, status\n| append [\n    | rest /servicesNS/-/-/saved/searches search=\"OT*\"\n    | where disabled=1\n    | table title disabled\n  ]\n| table savedsearch_name, title, status, disabled, count\n```\n\nUnderstanding this SPL\n\n**OT Security Add-on Health and Configuration Status** — Ensures OT Security Add-on correlation searches, lookups, and MITRE mappings stay healthy so process-safety detections do not silently fail.\n\nDocumented **Data sources**: `index=_internal` scheduler/search logs, `index=ot_security` `sourcetype=\"ot:security:health\"`. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by savedsearch_name, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Appends rows from a subsearch with `append`.\n• Pipeline stage (see **OT Security Add-on Health and Configuration Status**): table savedsearch_name, title, status, disabled, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (search name, status, last failure), Single value (failed OT searches count).",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the tools that watch the plant so our safety and attack detections in OT do not quietly stop running.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.2",
              "n": "ICS Protocol Allow-List Violation",
              "c": "critical",
              "f": "advanced",
              "v": "Detects protocols outside the approved OT baseline (MITRE ATT&CK for ICS T0886), limiting lateral movement and unsafe cross-zone traffic.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot` `sourcetype=\"ics:firewall\"` fields `src`, `dest`, `dest_port`, `protocol`, `zone`",
              "q": "index=ot sourcetype=\"ics:firewall\"\n| eval proto=upper(coalesce(protocol, app_proto))\n| lookup ot_protocol_allowlist proto OUTPUT allowed\n| where isnull(allowed) OR allowed=0\n| stats count earliest(_time) as first_seen latest(_time) as last_seen values(dest) as dest_ips by src, proto, dest_port, zone\n| where count>0",
              "m": "Requires Splunk ES and OT Security Add-on. Maintain an allow-list lookup aligned with OT network design. Tune false positives per site. Route notables into ES incident review with MITRE T0886 tagging.",
              "z": "Sankey (zone to protocol), Stacked bar by protocol, Table of violating endpoints.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"ics:firewall\"` fields `src`, `dest`, `dest_port`, `protocol`, `zone`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequires Splunk ES and OT Security Add-on. Maintain an allow-list lookup aligned with OT network design. Tune false positives per site. Route notables into ES incident review with MITRE T0886 tagging.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"ics:firewall\"\n| eval proto=upper(coalesce(protocol, app_proto))\n| lookup ot_protocol_allowlist proto OUTPUT allowed\n| where isnull(allowed) OR allowed=0\n| stats count earliest(_time) as first_seen latest(_time) as last_seen values(dest) as dest_ips by src, proto, dest_port, zone\n| where count>0\n```\n\nUnderstanding this SPL\n\n**ICS Protocol Allow-List Violation** — Detects protocols outside the approved OT baseline (MITRE ATT&CK for ICS T0886), limiting lateral movement and unsafe cross-zone traffic.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"ics:firewall\"` fields `src`, `dest`, `dest_port`, `protocol`, `zone`. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: ics:firewall. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"ics:firewall\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **proto** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(allowed) OR allowed=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, proto, dest_port, zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ICS Protocol Allow-List Violation** — Detects protocols outside the approved OT baseline (MITRE ATT&CK for ICS T0886), limiting lateral movement and unsafe cross-zone traffic.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"ics:firewall\"` fields `src`, `dest`, `dest_port`, `protocol`, `zone`. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey (zone to protocol), Stacked bar by protocol, Table of violating endpoints.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We make sure only the industrial protocols and sessions we expect run across our OT wire so a stranger cannot steer real equipment.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest_port | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.3",
              "n": "Default or Shared Credentials on OT Devices",
              "c": "critical",
              "f": "advanced",
              "v": "Surfaces authentication using vendor default or shared accounts on PLCs, RTUs, and HMIs (T0812), enabling trivial takeover and process manipulation.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot_security` `sourcetype=\"ics:auth\"` RADIUS/TACACS+, HMI/SCADA login logs",
              "q": "index=ot_security sourcetype=\"ics:auth\"\n| eval user=lower(trim(user))\n| lookup ot_default_creds user OUTPUT is_default\n| where is_default=1 OR match(user, \"admin|root|operator|guest\")\n| stats count by src, dest_host, user, vendor\n| sort - count",
              "m": "Integrate OT authentication logs with ES. Pair with password policy and account governance. Escalate repeated default-credential use referencing T0812.",
              "z": "Bar chart by user/asset, Timeline of default logins, Table with vendor and site.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot_security` `sourcetype=\"ics:auth\"` RADIUS/TACACS+, HMI/SCADA login logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate OT authentication logs with ES. Pair with password policy and account governance. Escalate repeated default-credential use referencing T0812.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_security sourcetype=\"ics:auth\"\n| eval user=lower(trim(user))\n| lookup ot_default_creds user OUTPUT is_default\n| where is_default=1 OR match(user, \"admin|root|operator|guest\")\n| stats count by src, dest_host, user, vendor\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Default or Shared Credentials on OT Devices** — Surfaces authentication using vendor default or shared accounts on PLCs, RTUs, and HMIs (T0812), enabling trivial takeover and process manipulation.\n\nDocumented **Data sources**: `index=ot_security` `sourcetype=\"ics:auth\"` RADIUS/TACACS+, HMI/SCADA login logs. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_security; **sourcetype**: ics:auth. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_security, sourcetype=\"ics:auth\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where is_default=1 OR match(user, \"admin|root|operator|guest\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest_host, user, vendor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src, Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Default or Shared Credentials on OT Devices** — Surfaces authentication using vendor default or shared accounts on PLCs, RTUs, and HMIs (T0812), enabling trivial takeover and process manipulation.\n\nDocumented **Data sources**: `index=ot_security` `sourcetype=\"ics:auth\"` RADIUS/TACACS+, HMI/SCADA login logs. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart by user/asset, Timeline of default logins, Table with vendor and site.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for default or shared logins on control gear so a generic password is not a skeleton key to the process.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src, Authentication.user | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.4",
              "n": "PLC Program Change Outside Maintenance Window",
              "c": "critical",
              "f": "advanced",
              "v": "Flags logic downloads or program changes to PLCs outside approved windows (T0845), protecting process integrity and safety interlocks.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot` `sourcetype=\"ics:plc\"` or engineering tool audit logs",
              "q": "index=ot sourcetype=\"ics:plc\"\n| where match(action, \"(?i)download|upload|program|transfer\")\n| lookup ot_maintenance_windows plc_id OUTPUT window_start window_end\n| eval in_window=if(_time>=window_start AND _time<=window_end, 1, 0)\n| where in_window=0 OR isnull(window_start)\n| stats count values(action) as actions by plc_id, src_host, user",
              "m": "Requires Splunk ES and OT Security Add-on tied to CMMS or change tickets. Validate engineering station identity. Feed ES notables for T0845 when no matching maintenance record exists.",
              "z": "Timeline of program events, Table of PLCs with/without tickets, Single value (out-of-window changes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"ics:plc\"` or engineering tool audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequires Splunk ES and OT Security Add-on tied to CMMS or change tickets. Validate engineering station identity. Feed ES notables for T0845 when no matching maintenance record exists.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"ics:plc\"\n| where match(action, \"(?i)download|upload|program|transfer\")\n| lookup ot_maintenance_windows plc_id OUTPUT window_start window_end\n| eval in_window=if(_time>=window_start AND _time<=window_end, 1, 0)\n| where in_window=0 OR isnull(window_start)\n| stats count values(action) as actions by plc_id, src_host, user\n```\n\nUnderstanding this SPL\n\n**PLC Program Change Outside Maintenance Window** — Flags logic downloads or program changes to PLCs outside approved windows (T0845), protecting process integrity and safety interlocks.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"ics:plc\"` or engineering tool audit logs. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: ics:plc. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"ics:plc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(action, \"(?i)download|upload|program|transfer\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **in_window** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where in_window=0 OR isnull(window_start)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by plc_id, src_host, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline of program events, Table of PLCs with/without tickets, Single value (out-of-window changes).",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch logic downloads to controllers outside agreed maintenance windows so nobody reprograms a line in secret.",
              "mtype": [
                "Security",
                "Change",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.5",
              "n": "Data Historian Compromise Indicators",
              "c": "critical",
              "f": "advanced",
              "v": "Detects anomalous queries, bulk exports, or new clients against historian servers (T0815), which can hide sabotage or exfiltrate operational data.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot_security` `sourcetype=\"ics:historian\"` DB/OS logs for historian hosts",
              "q": "index=ot_security sourcetype=\"ics:historian\"\n| bin _time span=1h\n| stats count sum(rows_returned) as rows dc(client_ip) as clients by _time, historian_host\n| eventstats avg(count) as avg_count by historian_host\n| where count>avg_count*3 OR rows>10000 OR clients>5\n| table _time, historian_host, count, rows, clients, avg_count",
              "m": "Whitelist backup and reporting hosts. Create thresholds for off-hours bulk reads and new ODBC/JDBC clients. Tag with T0815 for threat intel alignment.",
              "z": "Timechart of query volume vs baseline, Scatter of rows vs clients, Table of anomalous source IPs.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot_security` `sourcetype=\"ics:historian\"` DB/OS logs for historian hosts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nWhitelist backup and reporting hosts. Create thresholds for off-hours bulk reads and new ODBC/JDBC clients. Tag with T0815 for threat intel alignment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_security sourcetype=\"ics:historian\"\n| bin _time span=1h\n| stats count sum(rows_returned) as rows dc(client_ip) as clients by _time, historian_host\n| eventstats avg(count) as avg_count by historian_host\n| where count>avg_count*3 OR rows>10000 OR clients>5\n| table _time, historian_host, count, rows, clients, avg_count\n```\n\nUnderstanding this SPL\n\n**Data Historian Compromise Indicators** — Detects anomalous queries, bulk exports, or new clients against historian servers (T0815), which can hide sabotage or exfiltrate operational data.\n\nDocumented **Data sources**: `index=ot_security` `sourcetype=\"ics:historian\"` DB/OS logs for historian hosts. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_security; **sourcetype**: ics:historian. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_security, sourcetype=\"ics:historian\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, historian_host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by historian_host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>avg_count*3 OR rows>10000 OR clients>5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Data Historian Compromise Indicators**): table _time, historian_host, count, rows, clients, avg_count\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart of query volume vs baseline, Scatter of rows vs clients, Table of anomalous source IPs.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the historian for odd bulk reads or changes so someone is not siphoning process secrets or history they should not take.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.6",
              "n": "Internet-Accessible Device in OT Zone",
              "c": "critical",
              "f": "advanced",
              "v": "Identifies OT assets with unexpected paths to the public internet (T0822), increasing ransomware and remote attack surface.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot` OT firewall/NAT logs `sourcetype=\"ics:firewall\"`",
              "q": "index=ot sourcetype=\"ics:firewall\"\n| eval dest_is_public=if(NOT match(dest, \"^(10\\.|172\\.(1[6-9]|2[0-9]|3[01])\\.|192\\.168\\.)\"), 1, 0)\n| lookup ot_asset_inventory ip AS src OUTPUT zone asset_id\n| where zone=\"OT\" AND dest_is_public=1\n| stats earliest(_time) as first_seen latest(_time) as last_seen values(dest) as public_peers by src, asset_id",
              "m": "Deploy with ES and OT Security Add-on asset enrichment. Remediate via network segmentation. Use for T0822 mapping in ES risk dashboards.",
              "z": "Table of OT hosts reaching public IPs, Network diagram export.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot` OT firewall/NAT logs `sourcetype=\"ics:firewall\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy with ES and OT Security Add-on asset enrichment. Remediate via network segmentation. Use for T0822 mapping in ES risk dashboards.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"ics:firewall\"\n| eval dest_is_public=if(NOT match(dest, \"^(10\\.|172\\.(1[6-9]|2[0-9]|3[01])\\.|192\\.168\\.)\"), 1, 0)\n| lookup ot_asset_inventory ip AS src OUTPUT zone asset_id\n| where zone=\"OT\" AND dest_is_public=1\n| stats earliest(_time) as first_seen latest(_time) as last_seen values(dest) as public_peers by src, asset_id\n```\n\nUnderstanding this SPL\n\n**Internet-Accessible Device in OT Zone** — Identifies OT assets with unexpected paths to the public internet (T0822), increasing ransomware and remote attack surface.\n\nDocumented **Data sources**: `index=ot` OT firewall/NAT logs `sourcetype=\"ics:firewall\"`. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: ics:firewall. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"ics:firewall\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **dest_is_public** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where zone=\"OT\" AND dest_is_public=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, asset_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Internet-Accessible Device in OT Zone** — Identifies OT assets with unexpected paths to the public internet (T0822), increasing ransomware and remote attack surface.\n\nDocumented **Data sources**: `index=ot` OT firewall/NAT logs `sourcetype=\"ics:firewall\"`. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of OT hosts reaching public IPs, Network diagram export.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We flag control-zone gear that should not have a path to the open internet, because that is an easy on-ramp for bad actors.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.7",
              "n": "Rogue Engineering Tool Execution",
              "c": "critical",
              "f": "expert",
              "v": "Spots unauthorized engineering software sessions to field devices (T0843), reducing risk of malicious reprogramming.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot_security` host-based logs from engineering workstations, EDR forwarded to OT index",
              "q": "index=ot_security sourcetype=\"ot:security:process\"\n| lookup ot_approved_eng_tools process_name OUTPUT approved_tool\n| where isnull(approved_tool) AND match(process_name, \"(?i)studio|logix|step7|tia|codesys|unity\")\n| stats earliest(_time) as first_seen values(user) as users dc(host) as hosts by process_name\n| lookup ot_eng_workstations host OUTPUT approved_role\n| where isnull(approved_role)",
              "m": "Requires Splunk ES, OT Security Add-on, and workstation inventory. Tune for sanctioned tools and vendor remote support. Correlate with T0843 and change records.",
              "z": "Process execution timeline, Stacked bar by binary, Table linking host to user.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot_security` host-based logs from engineering workstations, EDR forwarded to OT index.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequires Splunk ES, OT Security Add-on, and workstation inventory. Tune for sanctioned tools and vendor remote support. Correlate with T0843 and change records.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_security sourcetype=\"ot:security:process\"\n| lookup ot_approved_eng_tools process_name OUTPUT approved_tool\n| where isnull(approved_tool) AND match(process_name, \"(?i)studio|logix|step7|tia|codesys|unity\")\n| stats earliest(_time) as first_seen values(user) as users dc(host) as hosts by process_name\n| lookup ot_eng_workstations host OUTPUT approved_role\n| where isnull(approved_role)\n```\n\nUnderstanding this SPL\n\n**Rogue Engineering Tool Execution** — Spots unauthorized engineering software sessions to field devices (T0843), reducing risk of malicious reprogramming.\n\nDocumented **Data sources**: `index=ot_security` host-based logs from engineering workstations, EDR forwarded to OT index. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_security; **sourcetype**: ot:security:process. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_security, sourcetype=\"ot:security:process\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved_tool) AND match(process_name, \"(?i)studio|logix|step7|tia|codesys|unity\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by process_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved_role)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Process execution timeline, Stacked bar by binary, Table linking host to user.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot engineering software running where and when it should not, so only trusted techs reflash or reconfigure the plant.",
              "mtype": [
                "Security",
                "Change"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.8",
              "n": "OT Network Reconnaissance and Scanning",
              "c": "high",
              "f": "intermediate",
              "v": "Detects port and service scanning inside ICS VLANs (T0846), often precursors to exploitation or lateral movement toward PLCs and HMIs.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot` IDS/IPS `sourcetype=\"ics:ids\"`, firewall deny logs",
              "q": "index=ot (sourcetype=\"ics:ids\" OR sourcetype=\"ics:firewall\" action=\"deny\")\n| bin _time span=5m\n| stats dc(dest) as targets dc(dest_port) as ports count by _time, src\n| where targets>=10 OR ports>=20\n| lookup ot_asset_inventory src OUTPUT asset_owner zone",
              "m": "Use OT Security Add-on with ES. Distinguish vulnerability scanners from adversary behavior via source allow-lists. Map to T0846 in ES MITRE views.",
              "z": "Heatmap (time × src), Bubble chart (targets vs ports).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot` IDS/IPS `sourcetype=\"ics:ids\"`, firewall deny logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse OT Security Add-on with ES. Distinguish vulnerability scanners from adversary behavior via source allow-lists. Map to T0846 in ES MITRE views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot (sourcetype=\"ics:ids\" OR sourcetype=\"ics:firewall\" action=\"deny\")\n| bin _time span=5m\n| stats dc(dest) as targets dc(dest_port) as ports count by _time, src\n| where targets>=10 OR ports>=20\n| lookup ot_asset_inventory src OUTPUT asset_owner zone\n```\n\nUnderstanding this SPL\n\n**OT Network Reconnaissance and Scanning** — Detects port and service scanning inside ICS VLANs (T0846), often precursors to exploitation or lateral movement toward PLCs and HMIs.\n\nDocumented **Data sources**: `index=ot` IDS/IPS `sourcetype=\"ics:ids\"`, firewall deny logs. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: ics:ids, ics:firewall. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"ics:ids\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where targets>=10 OR ports>=20` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OT Network Reconnaissance and Scanning** — Detects port and service scanning inside ICS VLANs (T0846), often precursors to exploitation or lateral movement toward PLCs and HMIs.\n\nDocumented **Data sources**: `index=ot` IDS/IPS `sourcetype=\"ics:ids\"`, firewall deny logs. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (time × src), Bubble chart (targets vs ports).",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We notice when someone is mapping or poking the OT network the way a thief cases a building before a break-in.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection",
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src span=5m | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.9",
              "n": "Unauthorized Firmware Upload to Field Device",
              "c": "critical",
              "f": "advanced",
              "v": "Surfaces firmware or OS image uploads to controllers and I/O modules (T0839), protecting integrity of safety and control logic.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot` device syslog `sourcetype=\"ics:device\"`, vendor-specific firmware events",
              "q": "index=ot sourcetype=\"ics:device\"\n| where match(event_type, \"(?i)firmware|image|flash|upgrade\")\n| lookup ot_firmware_change_auth device_id user OUTPUT change_ticket\n| where isnull(change_ticket)\n| stats earliest(_time) as first_seen values(old_version) as from_v values(new_version) as to_v by device_id, user, src",
              "m": "Integrate with Splunk ES and OT Security Add-on change systems. Require ticket correlation for production. Escalate unsigned or off-shift uploads as T0839 incidents.",
              "z": "Timeline of firmware events, Table of devices with version deltas, Single value (unapproved uploads).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot` device syslog `sourcetype=\"ics:device\"`, vendor-specific firmware events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate with Splunk ES and OT Security Add-on change systems. Require ticket correlation for production. Escalate unsigned or off-shift uploads as T0839 incidents.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"ics:device\"\n| where match(event_type, \"(?i)firmware|image|flash|upgrade\")\n| lookup ot_firmware_change_auth device_id user OUTPUT change_ticket\n| where isnull(change_ticket)\n| stats earliest(_time) as first_seen values(old_version) as from_v values(new_version) as to_v by device_id, user, src\n```\n\nUnderstanding this SPL\n\n**Unauthorized Firmware Upload to Field Device** — Surfaces firmware or OS image uploads to controllers and I/O modules (T0839), protecting integrity of safety and control logic.\n\nDocumented **Data sources**: `index=ot` device syslog `sourcetype=\"ics:device\"`, vendor-specific firmware events. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: ics:device. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"ics:device\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(event_type, \"(?i)firmware|image|flash|upgrade\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(change_ticket)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by device_id, user, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline of firmware events, Table of devices with version deltas, Single value (unapproved uploads).",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We alert on surprise firmware or program uploads to field devices so malware or mis-uploads do not land on real gear.",
              "mtype": [
                "Security",
                "Change",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.10",
              "n": "SCADA HMI Tampering Indicators",
              "c": "critical",
              "f": "advanced",
              "v": "Detects unauthorized screen, tag, or recipe edits on HMIs (T0831), which can mislead operators or suppress alarms during an attack.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot_security` `sourcetype=\"ics:hmi\"` audit logs",
              "q": "index=ot_security sourcetype=\"ics:hmi\"\n| where match(action, \"(?i)edit|write|delete|import|export|recipe|screen\")\n| lookup ot_hmi_maintenance hmi_id OUTPUT maint_window\n| eval in_maint=if(_time>=mvindex(maint_window,0) AND _time<=mvindex(maint_window,1), 1, 0)\n| where in_maint=0 OR isnull(maint_window)\n| stats count values(object) as objects by hmi_id, user, src, action",
              "m": "Requires Splunk ES with OT Security Add-on parsing for vendor HMI audit trails. Align with operator procedures and shift schedules. Map findings to T0831.",
              "z": "Timeline of HMI changes, Table with user and station.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot_security` `sourcetype=\"ics:hmi\"` audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequires Splunk ES with OT Security Add-on parsing for vendor HMI audit trails. Align with operator procedures and shift schedules. Map findings to T0831.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_security sourcetype=\"ics:hmi\"\n| where match(action, \"(?i)edit|write|delete|import|export|recipe|screen\")\n| lookup ot_hmi_maintenance hmi_id OUTPUT maint_window\n| eval in_maint=if(_time>=mvindex(maint_window,0) AND _time<=mvindex(maint_window,1), 1, 0)\n| where in_maint=0 OR isnull(maint_window)\n| stats count values(object) as objects by hmi_id, user, src, action\n```\n\nUnderstanding this SPL\n\n**SCADA HMI Tampering Indicators** — Detects unauthorized screen, tag, or recipe edits on HMIs (T0831), which can mislead operators or suppress alarms during an attack.\n\nDocumented **Data sources**: `index=ot_security` `sourcetype=\"ics:hmi\"` audit logs. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_security; **sourcetype**: ics:hmi. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_security, sourcetype=\"ics:hmi\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(action, \"(?i)edit|write|delete|import|export|recipe|screen\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **in_maint** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where in_maint=0 OR isnull(maint_window)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by hmi_id, user, src, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline of HMI changes, Table with user and station.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for HMI or operator-screen tampering that could hide a bad process state or trick operators during an incident.",
              "mtype": [
                "Security",
                "Change"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.11",
              "n": "Safety Instrumented System (SIS) Bypass",
              "c": "critical",
              "f": "expert",
              "v": "Monitors mode changes, jumpering, or overrides on SIS (T0800), directly tied to process safety and regulatory expectations.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot` `sourcetype=\"ics:sis\"` safety PLC diagnostics, alarm historians",
              "q": "index=ot sourcetype=\"ics:sis\"\n| where match(status_message, \"(?i)bypass|override|maintenance|manual|inhibit\")\n| lookup ot_sis_authorized_changes loop_id user OUTPUT permit_id valid_until\n| eval permit_ok=if(!isnull(permit_id) AND _time<=valid_until, 1, 0)\n| where permit_ok=0\n| stats earliest(_time) as first_seen values(status_message) as states by loop_id, asset, user",
              "m": "Use Splunk ES and OT Safety data integrations. Require MOC/permit correlation. Escalate immediately to process safety and T0800 workflows.",
              "z": "State timeline per SIS loop, Alert list with permit linkage, Gauge of open bypasses.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"ics:sis\"` safety PLC diagnostics, alarm historians.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk ES and OT Safety data integrations. Require MOC/permit correlation. Escalate immediately to process safety and T0800 workflows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"ics:sis\"\n| where match(status_message, \"(?i)bypass|override|maintenance|manual|inhibit\")\n| lookup ot_sis_authorized_changes loop_id user OUTPUT permit_id valid_until\n| eval permit_ok=if(!isnull(permit_id) AND _time<=valid_until, 1, 0)\n| where permit_ok=0\n| stats earliest(_time) as first_seen values(status_message) as states by loop_id, asset, user\n```\n\nUnderstanding this SPL\n\n**Safety Instrumented System (SIS) Bypass** — Monitors mode changes, jumpering, or overrides on SIS (T0800), directly tied to process safety and regulatory expectations.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"ics:sis\"` safety PLC diagnostics, alarm historians. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: ics:sis. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"ics:sis\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(status_message, \"(?i)bypass|override|maintenance|manual|inhibit\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **permit_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where permit_ok=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by loop_id, asset, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: State timeline per SIS loop, Alert list with permit linkage, Gauge of open bypasses.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for safety or interlock systems being sidestepped in ways that only happen when someone tries to work around the guardrails.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.12",
              "n": "Unauthorized VPN Tunnel into OT Network",
              "c": "critical",
              "f": "advanced",
              "v": "Detects VPN or encrypted sessions into OT that violate perimeter policy (T0886), supporting regulatory and segmentation requirements.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot_security` VPN concentrator logs, firewall sessions",
              "q": "index=ot_security sourcetype=\"ot:security:vpn\" OR (index=ot sourcetype=\"ics:firewall\" dest_port IN (443, 1194, 4500))\n| lookup ot_approved_vpn_peers remote_ip user OUTPUT approved_site\n| where isnull(approved_site)\n| stats earliest(_time) as start latest(_time) as end by remote_ip, user, assigned_ip, gateway",
              "m": "Maintain approved vendor and employee VPN profiles. Alert on split tunnel to OT, new device IDs. Map to T0886.",
              "z": "Timeline of VPN sessions, Table of unauthorized peers.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot_security` VPN concentrator logs, firewall sessions.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain approved vendor and employee VPN profiles. Alert on split tunnel to OT, new device IDs. Map to T0886.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_security sourcetype=\"ot:security:vpn\" OR (index=ot sourcetype=\"ics:firewall\" dest_port IN (443, 1194, 4500))\n| lookup ot_approved_vpn_peers remote_ip user OUTPUT approved_site\n| where isnull(approved_site)\n| stats earliest(_time) as start latest(_time) as end by remote_ip, user, assigned_ip, gateway\n```\n\nUnderstanding this SPL\n\n**Unauthorized VPN Tunnel into OT Network** — Detects VPN or encrypted sessions into OT that violate perimeter policy (T0886), supporting regulatory and segmentation requirements.\n\nDocumented **Data sources**: `index=ot_security` VPN concentrator logs, firewall sessions. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_security, ot; **sourcetype**: ot:security:vpn, ics:firewall. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_security, index=ot, sourcetype=\"ot:security:vpn\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved_site)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by remote_ip, user, assigned_ip, gateway** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Unauthorized VPN Tunnel into OT Network** — Detects VPN or encrypted sessions into OT that violate perimeter policy (T0886), supporting regulatory and segmentation requirements.\n\nDocumented **Data sources**: `index=ot_security` VPN concentrator logs, firewall sessions. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline of VPN sessions, Table of unauthorized peers.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for hidden VPNs or tunnels into the plant that bypass the path we want every remote session to use.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Network_Traffic",
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.13",
              "n": "OT Asset Risk Score Threshold Breach",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks cumulative OT asset risk from vulnerabilities and behavioral signals so operations can prioritize patching and isolation before compliance gaps widen.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot_security` `sourcetype=\"ot:security:risk\"` ES risk framework outputs",
              "q": "index=ot_security sourcetype=\"ot:security:risk\"\n| stats latest(risk_score) as risk by asset_id\n| lookup ot_risk_thresholds asset_class OUTPUT max_risk\n| where risk>max_risk\n| sort - risk\n| table asset_id, risk, max_risk",
              "m": "Configure Splunk ES risk notables with OT Security Add-on scoring models. Align thresholds with corporate risk appetite and NERC CIP-relevant assets.",
              "z": "Bubble chart (risk vs asset class), Prioritized table, Single value (breach count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot_security` `sourcetype=\"ot:security:risk\"` ES risk framework outputs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk ES risk notables with OT Security Add-on scoring models. Align thresholds with corporate risk appetite and NERC CIP-relevant assets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_security sourcetype=\"ot:security:risk\"\n| stats latest(risk_score) as risk by asset_id\n| lookup ot_risk_thresholds asset_class OUTPUT max_risk\n| where risk>max_risk\n| sort - risk\n| table asset_id, risk, max_risk\n```\n\nUnderstanding this SPL\n\n**OT Asset Risk Score Threshold Breach** — Tracks cumulative OT asset risk from vulnerabilities and behavioral signals so operations can prioritize patching and isolation before compliance gaps widen.\n\nDocumented **Data sources**: `index=ot_security` `sourcetype=\"ot:security:risk\"` ES risk framework outputs. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_security; **sourcetype**: ot:security:risk. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_security, sourcetype=\"ot:security:risk\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by asset_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where risk>max_risk` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **OT Asset Risk Score Threshold Breach**): table asset_id, risk, max_risk\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bubble chart (risk vs asset class), Prioritized table, Single value (breach count).",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use asset risk figures so we know which pieces of the OT fleet look most dangerous when scores jump above what we expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.14",
              "n": "OT User Risk Score Accumulation",
              "c": "high",
              "f": "intermediate",
              "v": "Aggregates user-level risk across OT logins, engineering actions, and policy violations to catch compromised or insider accounts before process impact.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot_security` ES UBA or custom risk rollup",
              "q": "index=ot_security sourcetype=\"ot:security:risk\"\n| where isnotnull(user)\n| timechart span=1d max(risk_score) as daily_peak by user limit=20\n| untable _time user daily_peak\n| stats max(daily_peak) as peak_risk sum(daily_peak) as rolling_sum by user\n| where peak_risk>70 OR rolling_sum>200",
              "m": "Use Splunk ES identity and OT Security Add-on correlation outputs. Tune for shared service accounts separately. Trigger step-up authentication when user risk crosses tiers.",
              "z": "User risk trend lines, Heatmap by user and day, Leaderboard table.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot_security` ES UBA or custom risk rollup.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Splunk ES identity and OT Security Add-on correlation outputs. Tune for shared service accounts separately. Trigger step-up authentication when user risk crosses tiers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_security sourcetype=\"ot:security:risk\"\n| where isnotnull(user)\n| timechart span=1d max(risk_score) as daily_peak by user limit=20\n| untable _time user daily_peak\n| stats max(daily_peak) as peak_risk sum(daily_peak) as rolling_sum by user\n| where peak_risk>70 OR rolling_sum>200\n```\n\nUnderstanding this SPL\n\n**OT User Risk Score Accumulation** — Aggregates user-level risk across OT logins, engineering actions, and policy violations to catch compromised or insider accounts before process impact.\n\nDocumented **Data sources**: `index=ot_security` ES UBA or custom risk rollup. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_security; **sourcetype**: ot:security:risk. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_security, sourcetype=\"ot:security:risk\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(user)` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by user limit=20** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **OT User Risk Score Accumulation**): untable _time user daily_peak\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where peak_risk>70 OR rolling_sum>200` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: User risk trend lines, Heatmap by user and day, Leaderboard table.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We build a picture of risky user actions around OT so shared or abused accounts in sensitive roles stand out early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.15",
              "n": "IT-to-OT Perimeter Firewall Rule Violation",
              "c": "critical",
              "f": "advanced",
              "v": "Detects traffic crossing the IT/OT boundary that violates approved rules, preserving segmentation and reducing ransomware lateral movement into control systems.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot` `sourcetype=\"ics:firewall\"` with `rule_id`, `action`, `src_zone`, `dst_zone`",
              "q": "index=ot sourcetype=\"ics:firewall\"\n| where (src_zone=\"IT\" AND dst_zone=\"OT\")\n| lookup ot_perimeter_allow_rules rule_id OUTPUT allowed_description\n| where isnull(allowed_description) AND action=\"permit\"\n| stats count earliest(_time) as first_seen values(dest_port) as ports by src, dest, rule_id\n| sort - count",
              "m": "Consume normalized perimeter logs in Splunk ES. Reconcile rule_id with change-managed policies. Use for continuous compliance evidence.",
              "z": "Sankey IT→OT flows, Table of violating rules, Trend of unique new flows.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"ics:firewall\"` with `rule_id`, `action`, `src_zone`, `dst_zone`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConsume normalized perimeter logs in Splunk ES. Reconcile rule_id with change-managed policies. Use for continuous compliance evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"ics:firewall\"\n| where (src_zone=\"IT\" AND dst_zone=\"OT\")\n| lookup ot_perimeter_allow_rules rule_id OUTPUT allowed_description\n| where isnull(allowed_description) AND action=\"permit\"\n| stats count earliest(_time) as first_seen values(dest_port) as ports by src, dest, rule_id\n| sort - count\n```\n\nUnderstanding this SPL\n\n**IT-to-OT Perimeter Firewall Rule Violation** — Detects traffic crossing the IT/OT boundary that violates approved rules, preserving segmentation and reducing ransomware lateral movement into control systems.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"ics:firewall\"` with `rule_id`, `action`, `src_zone`, `dst_zone`. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: ics:firewall. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"ics:firewall\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where (src_zone=\"IT\" AND dst_zone=\"OT\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(allowed_description) AND action=\"permit\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, rule_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IT-to-OT Perimeter Firewall Rule Violation** — Detects traffic crossing the IT/OT boundary that violates approved rules, preserving segmentation and reducing ransomware lateral movement into control systems.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"ics:firewall\"` with `rule_id`, `action`, `src_zone`, `dst_zone`. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey IT→OT flows, Table of violating rules, Trend of unique new flows.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We enforce the rule that IT-style traffic is not allowed straight into the plant without the right hop and approval.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.16",
              "n": "NERC CIP Electronic Access Point Monitoring",
              "c": "critical",
              "f": "advanced",
              "v": "Supports NERC CIP-005 by monitoring access to BES Cyber Systems at controlled Electronic Security Perimeter entry points for audit readiness and breach detection.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot_security` perimeter firewalls, VPN, jump hosts tagged `bes_asset=1`",
              "q": "index=ot_security sourcetype=\"ics:firewall\" bes_asset=1\n| lookup cip_electronic_access_points dest OUTPUT eap_name esp_id\n| where isnotnull(eap_name)\n| stats count earliest(_time) as first latest(_time) as last by eap_name, esp_id, src, user, action\n| eval cip_standard=\"CIP-005\"\n| table cip_standard, eap_name, esp_id, src, user, action, count, first, last",
              "m": "Map Splunk ES and OT Security Add-on NERC CIP dashboards to registered EAPs. Retain evidence per regional entity policy. Correlate with access authorization lists per CIP-005.",
              "z": "ESP topology summary, Timeline of EAP sessions, Compliance summary by site.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot_security` perimeter firewalls, VPN, jump hosts tagged `bes_asset=1`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap Splunk ES and OT Security Add-on NERC CIP dashboards to registered EAPs. Retain evidence per regional entity policy. Correlate with access authorization lists per CIP-005.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_security sourcetype=\"ics:firewall\" bes_asset=1\n| lookup cip_electronic_access_points dest OUTPUT eap_name esp_id\n| where isnotnull(eap_name)\n| stats count earliest(_time) as first latest(_time) as last by eap_name, esp_id, src, user, action\n| eval cip_standard=\"CIP-005\"\n| table cip_standard, eap_name, esp_id, src, user, action, count, first, last\n```\n\nUnderstanding this SPL\n\n**NERC CIP Electronic Access Point Monitoring** — Supports NERC CIP-005 by monitoring access to BES Cyber Systems at controlled Electronic Security Perimeter entry points for audit readiness and breach detection.\n\nDocumented **Data sources**: `index=ot_security` perimeter firewalls, VPN, jump hosts tagged `bes_asset=1`. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_security; **sourcetype**: ics:firewall. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_security, sourcetype=\"ics:firewall\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(eap_name)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by eap_name, esp_id, src, user, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cip_standard** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NERC CIP Electronic Access Point Monitoring**): table cip_standard, eap_name, esp_id, src, user, action, count, first, last\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NERC CIP Electronic Access Point Monitoring** — Supports NERC CIP-005 by monitoring access to BES Cyber Systems at controlled Electronic Security Perimeter entry points for audit readiness and breach detection.\n\nDocumented **Data sources**: `index=ot_security` perimeter firewalls, VPN, jump hosts tagged `bes_asset=1`. 
**App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: ESP topology summary, Timeline of EAP sessions, Compliance summary by site.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help prove we only let remote access the way the grid security standards say we should, in one place to audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Network_Traffic",
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.action | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-7 R1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NERC CIP unknown is enforced — Splunk UC-10.14.16: NERC CIP Electronic Access Point Monitoring.",
                  "ea": "Saved search 'UC-10.14.16' running on index=ot_security perimeter firewalls, VPN, jump hosts tagged bes_asset=1, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.17",
              "n": "NERC CIP Physical Security Perimeter Log Review",
              "c": "high",
              "f": "intermediate",
              "v": "Aligns with NERC CIP-006 by auditing physical access events to critical facilities housing BES cyber assets.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot_security` PACS `sourcetype=\"physical:access\"` with `facility_id`, `reader_id`, `result`",
              "q": "index=ot_security sourcetype=\"physical:access\"\n| lookup cip_physical_perimeter facility_id OUTPUT psp_name bes_site\n| where isnotnull(psp_name)\n| where match(result, \"(?i)deny|forced|tailgate|unknown\") OR (hour(_time)<6 OR hour(_time)>22)\n| stats count earliest(_time) as first_seen by badge_id, facility_id, reader_id, result\n| eval cip_standard=\"CIP-006\"\n| table cip_standard, psp_name, badge_id, reader_id, result, count, first_seen",
              "m": "Ingest PACS into Splunk with OT Security Add-on. Align facilities with CIP-006 PSP inventories. Support periodic log review workflows.",
              "z": "Calendar heatmap of access anomalies, Facility-level summary bars, Detail table.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot_security` PACS `sourcetype=\"physical:access\"` with `facility_id`, `reader_id`, `result`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest PACS into Splunk with OT Security Add-on. Align facilities with CIP-006 PSP inventories. Support periodic log review workflows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_security sourcetype=\"physical:access\"\n| lookup cip_physical_perimeter facility_id OUTPUT psp_name bes_site\n| where isnotnull(psp_name)\n| where match(result, \"(?i)deny|forced|tailgate|unknown\") OR (hour(_time)<6 OR hour(_time)>22)\n| stats count earliest(_time) as first_seen by badge_id, facility_id, reader_id, result\n| eval cip_standard=\"CIP-006\"\n| table cip_standard, psp_name, badge_id, reader_id, result, count, first_seen\n```\n\nUnderstanding this SPL\n\n**NERC CIP Physical Security Perimeter Log Review** — Aligns with NERC CIP-006 by auditing physical access events to critical facilities housing BES cyber assets.\n\nDocumented **Data sources**: `index=ot_security` PACS `sourcetype=\"physical:access\"` with `facility_id`, `reader_id`, `result`. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_security; **sourcetype**: physical:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_security, sourcetype=\"physical:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(psp_name)` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(result, \"(?i)deny|forced|tailgate|unknown\") OR (hour(_time)<6 OR hour(_time)>22)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by badge_id, facility_id, reader_id, result** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cip_standard** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NERC CIP Physical Security Perimeter Log Review**): table cip_standard, psp_name, badge_id, reader_id, result, count, first_seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Calendar heatmap of access anomalies, Facility-level summary bars, Detail table.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We make sure the physical site logs and evidence line up for regulated sites so a door or badge event is not lost.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-006-6 R1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NERC CIP unknown is enforced — Splunk UC-10.14.17: NERC CIP Physical Security Perimeter Log Review.",
                  "ea": "Saved search 'UC-10.14.17' running on index=ot_security PACS sourcetype=\"physical:access\" with facility_id, reader_id, result, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.18",
              "n": "NERC CIP System Security Management Patch Tracking",
              "c": "high",
              "f": "advanced",
              "v": "Supports NERC CIP-007 by tracking patch status and timelines for BES Cyber Assets, reducing exploitable vulnerabilities on cyber systems.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot_security` vulnerability/patch feeds with `bes_asset=1`",
              "q": "index=ot_security sourcetype=\"ot:security:patch\" bes_asset=1\n| eval overdue=if(patch_available=1 AND now()>deadline AND applied=0, 1, 0)\n| stats values(cve) as cves max(severity_score) as max_sev by asset_id, hostname, deadline, applied\n| where overdue=1 OR (max_sev>=7 AND applied=0)\n| eval cip_standard=\"CIP-007\"\n| table cip_standard, asset_id, hostname, cves, max_sev, deadline, applied",
              "m": "Integrate patch and vulnerability scanners with Splunk ES and OT Security Add-on for BES asset groups. Document evaluation intervals per CIP-007. Drive remediation tickets from overdue findings.",
              "z": "Gauge of patch compliance %, Timeline to deadlines, Treemap by severity.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot_security` vulnerability/patch feeds with `bes_asset=1`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate patch and vulnerability scanners with Splunk ES and OT Security Add-on for BES asset groups. Document evaluation intervals per CIP-007. Drive remediation tickets from overdue findings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_security sourcetype=\"ot:security:patch\" bes_asset=1\n| eval overdue=if(patch_available=1 AND now()>deadline AND applied=0, 1, 0)\n| stats values(cve) as cves max(severity_score) as max_sev by asset_id, hostname, deadline, applied\n| where overdue=1 OR (max_sev>=7 AND applied=0)\n| eval cip_standard=\"CIP-007\"\n| table cip_standard, asset_id, hostname, cves, max_sev, deadline, applied\n```\n\nUnderstanding this SPL\n\n**NERC CIP System Security Management Patch Tracking** — Supports NERC CIP-007 by tracking patch status and timelines for BES Cyber Assets, reducing exploitable vulnerabilities on cyber systems.\n\nDocumented **Data sources**: `index=ot_security` vulnerability/patch feeds with `bes_asset=1`. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_security; **sourcetype**: ot:security:patch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_security, sourcetype=\"ot:security:patch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by asset_id, hostname, deadline, applied** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where overdue=1 OR (max_sev>=7 AND applied=0)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **cip_standard** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NERC CIP System Security Management Patch Tracking**): table cip_standard, asset_id, hostname, cves, max_sev, deadline, applied\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge of patch compliance %, Timeline to deadlines, Treemap by severity.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track patch and hardening work on critical systems so we can show the board and the regulator the real backlog, not a guess.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NERC CIP unknown is enforced — Splunk UC-10.14.18: NERC CIP System Security Management Patch Tracking.",
                  "ea": "Saved search 'UC-10.14.18' running on index=ot_security vulnerability/patch feeds with bes_asset=1, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.19",
              "n": "NERC CIP Incident Reporting Timeline Compliance",
              "c": "critical",
              "f": "advanced",
              "v": "Enforces NERC CIP-008 by verifying cyber incidents involving BES Cyber Systems are detected and escalated within required reporting timelines.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot_security` ES notable events `sourcetype=\"notable\"`, incident tickets",
              "q": "index=ot_security sourcetype=\"notable\"\n| lookup bes_cyber_assets asset OUTPUT bes_impact\n| where bes_impact=1\n| eval elapsed_min=round((reported_time - detected_time)/60,2)\n| lookup cip008_reporting_thresholds incident_category OUTPUT max_report_minutes\n| where elapsed_min>max_report_minutes OR isnull(reported_time)\n| eval cip_standard=\"CIP-008\"\n| table cip_standard, incident_id, incident_category, detected_time, reported_time, elapsed_min, max_report_minutes",
              "m": "Configure Splunk ES incident management with OT Security Add-on BES tagging. Automate SLA dashboards. Integrate with legal/compliance for official CIP-008 filings.",
              "z": "SLA funnel (detect→report), Timeline of incidents, Single value (breaches count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot_security` ES notable events `sourcetype=\"notable\"`, incident tickets.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk ES incident management with OT Security Add-on BES tagging. Automate SLA dashboards. Integrate with legal/compliance for official CIP-008 filings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_security sourcetype=\"notable\"\n| lookup bes_cyber_assets asset OUTPUT bes_impact\n| where bes_impact=1\n| eval elapsed_min=round((reported_time - detected_time)/60,2)\n| lookup cip008_reporting_thresholds incident_category OUTPUT max_report_minutes\n| where elapsed_min>max_report_minutes OR isnull(reported_time)\n| eval cip_standard=\"CIP-008\"\n| table cip_standard, incident_id, incident_category, detected_time, reported_time, elapsed_min, max_report_minutes\n```\n\nUnderstanding this SPL\n\n**NERC CIP Incident Reporting Timeline Compliance** — Enforces NERC CIP-008 by verifying cyber incidents involving BES Cyber Systems are detected and escalated within required reporting timelines.\n\nDocumented **Data sources**: `index=ot_security` ES notable events `sourcetype=\"notable\"`, incident tickets. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_security; **sourcetype**: notable. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_security, sourcetype=\"notable\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where bes_impact=1` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **elapsed_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where elapsed_min>max_report_minutes OR isnull(reported_time)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **cip_standard** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NERC CIP Incident Reporting Timeline Compliance**): table cip_standard, incident_id, incident_category, detected_time, reported_time, elapsed_min, max_report_minutes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: SLA funnel (detect→report), Timeline of incidents, Single value (breaches count).",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch reporting timelines for OT incidents so if something real happens, the clock for official notice does not get missed by accident.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-008-6 R1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NERC CIP unknown is enforced — Splunk UC-10.14.19: NERC CIP Incident Reporting Timeline Compliance.",
                  "ea": "Saved search 'UC-10.14.19' running on index=ot_security ES notable events sourcetype=\"notable\", incident tickets, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.14.20",
              "n": "OT Network Baseline Deviation Detection",
              "c": "high",
              "f": "advanced",
              "v": "Detects new protocols, ports, or connections relative to learned OT baselines (T0840), highlighting shadow connectivity and preparatory attack activity.",
              "t": "OT Security Add-on for Splunk (Splunkbase 5151)",
              "d": "`index=ot` NetFlow or Zeek `sourcetype=\"ics:flow\"`, baseline KV store",
              "q": "index=ot sourcetype=\"ics:flow\"\n| eval sig=src.\"|\".dest.\"|\".dest_port.\"|\".proto\n| lookup ot_traffic_baseline sig OUTPUT first_seen_baseline\n| where isnull(first_seen_baseline)\n| bin _time span=1d\n| stats earliest(_time) as first_seen dc(dest_port) as new_ports by src, dest, proto\n| lookup ot_asset_inventory src OUTPUT zone\n| where zone=\"OT\"",
              "m": "Populate baselines via OT Security Add-on batch jobs after stable learning windows. Use Splunk ES for notable generation on first-seen edges. Pair with T0840 in threat hunting.",
              "z": "Chord graph of new edges, Timechart of first-seen flows, Table with MITRE tag.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[OT Security Add-on for Splunk](https://splunkbase.splunk.com/app/5151), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT Security Add-on for Splunk (Splunkbase 5151).\n• Ensure the following data sources are available: `index=ot` NetFlow or Zeek `sourcetype=\"ics:flow\"`, baseline KV store.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPopulate baselines via OT Security Add-on batch jobs after stable learning windows. Use Splunk ES for notable generation on first-seen edges. Pair with T0840 in threat hunting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"ics:flow\"\n| eval sig=src.\"|\".dest.\"|\".dest_port.\"|\".proto\n| lookup ot_traffic_baseline sig OUTPUT first_seen_baseline\n| where isnull(first_seen_baseline)\n| bin _time span=1d\n| stats earliest(_time) as first_seen dc(dest_port) as new_ports by src, dest, proto\n| lookup ot_asset_inventory src OUTPUT zone\n| where zone=\"OT\"\n```\n\nUnderstanding this SPL\n\n**OT Network Baseline Deviation Detection** — Detects new protocols, ports, or connections relative to learned OT baselines (T0840), highlighting shadow connectivity and preparatory attack activity.\n\nDocumented **Data sources**: `index=ot` NetFlow or Zeek `sourcetype=\"ics:flow\"`, baseline KV store. **App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: ics:flow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"ics:flow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sig** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(first_seen_baseline)` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by src, dest, proto** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where zone=\"OT\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(All_Traffic.dest_port) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest span=1d | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OT Network Baseline Deviation Detection** — Detects new protocols, ports, or connections relative to learned OT baselines (T0840), highlighting shadow connectivity and preparatory attack activity.\n\nDocumented **Data sources**: `index=ot` NetFlow or Zeek `sourcetype=\"ics:flow\"`, baseline KV store. 
**App/TA** (typical add-on context): OT Security Add-on for Splunk (Splunkbase 5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Chord graph of new edges, Timechart of first-seen flows, Table with MITRE tag.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare how the network usually behaves in the plant to today so a creeping change in traffic or talkers rings a bell early.",
              "mtype": [
                "Security",
                "Change"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t dc(All_Traffic.dest_port) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest span=1d | sort - agg_value",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 20,
            "none": 0
          }
        },
        {
          "i": "10.15",
          "n": "Machine Learning & Behavioral Analytics",
          "u": [
            {
              "i": "10.15.1",
              "n": "User Peer-Group Logon Volume Anomaly (MLTK)",
              "c": "critical",
              "f": "expert",
              "v": "Comparing each user's authentication volume to their peer group (same department, role, or location) surfaces compromised accounts and insider threats that per-user baselines alone cannot catch — a user whose logon count is normal for themselves but triple the peer average is a strong anomaly signal.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES), SA-IdentityCenter",
              "d": "`index=main sourcetype=WinEventLog:Security` (EventCode=4624), `identity_lookup` (from SA-IdentityCenter)",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Authentication\n    WHERE Authentication.action=\"success\"\n    BY _time span=1d, Authentication.user\n| `drop_dm_object_name(Authentication)`\n| lookup identity_lookup identity AS user OUTPUT bunit\n| eventstats avg(count) as peer_avg stdev(count) as peer_std by bunit, _time\n| eval z_score=round((count - peer_avg) / nullif(peer_std, 0), 2)\n| where z_score > 3\n| fit DensityFunction count by bunit into peer_logon_model\n| where isOutlier > 0\n| table _time, user, bunit, count, peer_avg, z_score\n| sort -z_score",
              "m": "Build peer groups from the ES identity framework using `bunit` (business unit) or custom identity attributes such as role and location. Train the DensityFunction model on 30 days of authentication data, then schedule the search hourly. Z-scores above 3 feed Risk-Based Alerting (RBA) as risk events on the user entity. Retrain the model weekly via a scheduled search using `fit` to adapt to seasonal patterns such as quarter-end. Exclude service accounts by filtering on `category!=service` in the identity lookup.",
              "z": "Table (anomalous users with z-score), Line chart (user count vs peer average over time), Heatmap (users × days).",
              "kfp": "Users legitimately performing bulk operations (auditors, IT admins during rollouts). Mitigate by excluding known service accounts and adjusting the z-score threshold per business unit.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1021"
              ],
              "dtype": "user",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES), SA-IdentityCenter.\n• Ensure the following data sources are available: `index=main sourcetype=WinEventLog:Security` (EventCode=4624), `identity_lookup` (from SA-IdentityCenter).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild peer groups from the ES identity framework using `bunit` (business unit) or custom identity attributes such as role and location. Train the DensityFunction model on 30 days of authentication data, then schedule the search hourly. Z-scores above 3 feed Risk-Based Alerting (RBA) as risk events on the user entity. Retrain the model weekly via a scheduled search using `fit` to adapt to seasonal patterns such as quarter-end. Exclude service accounts by filtering on `category!=service` in the id…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `security_content_summariesonly` count FROM datamodel=Authentication\n    WHERE Authentication.action=\"success\"\n    BY _time span=1d, Authentication.user\n| `drop_dm_object_name(Authentication)`\n| lookup identity_lookup identity AS user OUTPUT bunit\n| eventstats avg(count) as peer_avg stdev(count) as peer_std by bunit, _time\n| eval z_score=round((count - peer_avg) / nullif(peer_std, 0), 2)\n| where z_score > 3\n| fit DensityFunction count by bunit into peer_logon_model\n| where isOutlier > 0\n| table _time, user, bunit, count, peer_avg, z_score\n| sort -z_score\n```\n\nUnderstanding this SPL\n\n**User Peer-Group Logon Volume Anomaly (MLTK)** — Comparing each user's authentication volume to their peer group (same department, role, or location) surfaces compromised accounts and insider threats that per-user baselines alone cannot catch — a user whose logon count is normal for themselves but triple the peer average is a strong anomaly signal.\n\nDocumented **Data sources**: `index=main sourcetype=WinEventLog:Security` (EventCode=4624), `identity_lookup` (from SA-IdentityCenter). **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES), SA-IdentityCenter. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Detection type** for this use case: user — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication` — enable acceleration for that model.\n• Invokes macro `drop_dm_object_name(Authentication)` — in Search, use the UI or expand to inspect the underlying SPL.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eventstats` rolls up events into metrics; results are split **by bunit, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where z_score > 3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **User Peer-Group Logon Volume Anomaly (MLTK)**): fit DensityFunction count by bunit into peer_logon_model\n• Filters the current rows with `where isOutlier > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **User Peer-Group Logon Volume Anomaly (MLTK)**): table _time, user, bunit, count, peer_avg, z_score\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**User Peer-Group Logon Volume Anomaly (MLTK)** — Comparing each user's authentication volume to their peer group (same department, role, or location) surfaces compromised accounts and insider threats that per-user baselines alone cannot catch — a user whose logon count is normal for themselves but triple the peer average is a strong anomaly signal.\n\nDocumented **Data sources**: `index=main sourcetype=WinEventLog:Security` (EventCode=4624), `identity_lookup` (from SA-IdentityCenter). **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES), SA-IdentityCenter. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: user — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalous users with z-score), Line chart (user count vs peer average over time), Heatmap (users × days).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use learning models to see whose sign-on pattern suddenly breaks from their workmates’ so stolen or misused accounts stand out earlier than a flat rule allows.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.15.2",
              "n": "Lateral Movement via Rare Destination Hosts (MLTK)",
              "c": "critical",
              "f": "expert",
              "v": "Attackers moving laterally connect to hosts they have never accessed before. By modeling each user's typical destination set and flagging first-seen or statistically rare destinations, this detection catches post-compromise pivoting that static ACLs and signature rules cannot see.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES)",
              "d": "`index=main sourcetype=WinEventLog:Security` (EventCode=4624 Type 3/10), `index=network sourcetype=pan:traffic`",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Authentication\n    WHERE Authentication.action=\"success\" AND (Authentication.signature_id=4624)\n    BY _time span=1h, Authentication.user, Authentication.dest\n| `drop_dm_object_name(Authentication)`\n| stats earliest(_time) as first_seen latest(_time) as last_seen count by user, dest\n| join type=left user\n    [| tstats count FROM datamodel=Authentication WHERE Authentication.action=\"success\"\n        BY Authentication.user, Authentication.dest\n        | `drop_dm_object_name(Authentication)`\n        | stats dc(dest) as historical_dest_count values(dest) as historical_dests by user]\n| where NOT match(historical_dests, dest)\n| fit LogisticRegression fit_intercept=true \"is_lateral\" from count, historical_dest_count into lateral_movement_model\n| where predicted(is_lateral) > 0.7\n| table first_seen, user, dest, count, historical_dest_count\n| sort -first_seen",
              "m": "Maintain a rolling 90-day lookup of `user → dest` pairs using `outputlookup`. Compare each new authentication event against this baseline. First-seen pairs are candidate lateral movement. Enrich with ES asset data to prioritize connections to high-value targets (domain controllers, jump servers, database servers). Score with MLTK LogisticRegression or DensityFunction trained on known-good movement patterns. Feed results into RBA as risk events on both user and dest entities. Pair with T1021 and T1076 MITRE annotations for SOC triage context.",
              "z": "Network graph (user → dest with edge weight), Table (first-seen connections), Timeline (lateral movement sequence).",
              "kfp": "IT staff rotating through hosts for maintenance, new employees onboarding. Suppress by role-based exceptions via the identity framework.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1021",
                "T1076",
                "T1550"
              ],
              "dtype": "user, dest",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES).\n• Ensure the following data sources are available: `index=main sourcetype=WinEventLog:Security` (EventCode=4624 Type 3/10), `index=network sourcetype=pan:traffic`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain a rolling 90-day lookup of `user → dest` pairs using `outputlookup`. Compare each new authentication event against this baseline. First-seen pairs are candidate lateral movement. Enrich with ES asset data to prioritize connections to high-value targets (domain controllers, jump servers, database servers). Score with MLTK LogisticRegression or DensityFunction trained on known-good movement patterns. Feed results into RBA as risk events on both user and dest entities. Pair with T1021 and …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `security_content_summariesonly` count FROM datamodel=Authentication\n    WHERE Authentication.action=\"success\" AND (Authentication.signature_id=4624)\n    BY _time span=1h, Authentication.user, Authentication.dest\n| `drop_dm_object_name(Authentication)`\n| stats earliest(_time) as first_seen latest(_time) as last_seen count by user, dest\n| join type=left user\n    [| tstats count FROM datamodel=Authentication WHERE Authentication.action=\"success\"\n        BY Authentication.user, Authentication.dest\n        | `drop_dm_object_name(Authentication)`\n        | stats dc(dest) as historical_dest_count values(dest) as historical_dests by user]\n| where NOT match(historical_dests, dest)\n| fit LogisticRegression fit_intercept=true \"is_lateral\" from count, historical_dest_count into lateral_movement_model\n| where predicted(is_lateral) > 0.7\n| table first_seen, user, dest, count, historical_dest_count\n| sort -first_seen\n```\n\nUnderstanding this SPL\n\n**Lateral Movement via Rare Destination Hosts (MLTK)** — Attackers moving laterally connect to hosts they have never accessed before. By modeling each user's typical destination set and flagging first-seen or statistically rare destinations, this detection catches post-compromise pivoting that static ACLs and signature rules cannot see.\n\nDocumented **Data sources**: `index=main sourcetype=WinEventLog:Security` (EventCode=4624 Type 3/10), `index=network sourcetype=pan:traffic`. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Detection type** for this use case: user, dest — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication` — enable acceleration for that model.\n• Invokes macro `drop_dm_object_name(Authentication)` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by user, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Uses `tstats` against accelerated summaries for data model `Authentication` — enable acceleration for that model.\n• Filters the current rows with `where NOT match(historical_dests, dest)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Lateral Movement via Rare Destination Hosts (MLTK)**): fit LogisticRegression fit_intercept=true \"is_lateral\" from count, historical_dest_count into lateral_movement_model\n• Filters the current rows with `where predicted(is_lateral) > 0.7` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Lateral Movement via Rare Destination Hosts (MLTK)**): table first_seen, user, dest, count, historical_dest_count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Lateral Movement via Rare Destination Hosts (MLTK)** — Attackers moving laterally connect to hosts they have never accessed before. By modeling each user's typical destination set and flagging first-seen or statistically rare destinations, this detection catches post-compromise pivoting that static ACLs and signature rules cannot see.\n\nDocumented **Data sources**: `index=main sourcetype=WinEventLog:Security` (EventCode=4624 Type 3/10), `index=network sourcetype=pan:traffic`. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: user, dest — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Network graph (user → dest with edge weight), Table (first-seen connections), Timeline (lateral movement sequence).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use baselines to flag rare host-to-host paths that can show lateral movement when someone hops outside their usual neighborhood.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.dest | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.15.3",
              "n": "C2 Beaconing Detection via Time-Series Regularity (MLTK)",
              "c": "critical",
              "f": "advanced",
              "v": "Command-and-control (C2) implants call home at regular intervals. By measuring inter-arrival time variance for each src-dest-port tuple, this detection identifies periodic beaconing that blends into normal traffic volumes — catching Cobalt Strike, Sliver, and custom implants that frequency-based rules miss.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES)",
              "d": "`index=network sourcetype=pan:traffic` or any proxy/firewall log with timestamps",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Network_Traffic\n    BY _time span=1s, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port\n| `drop_dm_object_name(All_Traffic)`\n| sort src, dest, dest_port, _time\n| streamstats current=f window=1 last(_time) as prev_time by src, dest, dest_port\n| eval iat=_time - prev_time\n| where isnotnull(iat) AND iat > 0 AND iat < 86400\n| stats count, avg(iat) as avg_iat, stdev(iat) as std_iat, min(iat) as min_iat, max(iat) as max_iat by src, dest, dest_port\n| eval cv=round(std_iat / nullif(avg_iat, 0), 4)\n| where cv < 0.15 AND count > 50 AND avg_iat > 10 AND avg_iat < 3600\n| apply beacon_density_model threshold=0.01\n| rename \"IsOutlier(cv)\" as isBeacon\n| where isBeacon > 0\n| table src, dest, dest_port, count, avg_iat, cv\n| sort cv",
              "m": "Calculate inter-arrival times (IAT) for each network flow tuple. Low coefficient of variation (CV < 0.15) with consistent intervals over 50+ connections is a strong beaconing signal. Pre-train a DensityFunction model (`beacon_density_model`) on the CV distribution across the environment to isolate the lowest-variance tuples. Exclude known infrastructure (NTP, heartbeats, health checks) via a lookup. Generate ES notables with risk attribution on the source host. Tune `avg_iat` bounds to catch both fast (10s) and slow (1h) beacons. Investigate hits with full PCAP or proxy logs for payload confirmation.",
              "z": "Scatter plot (avg_iat vs CV, colored by src), Table (beacon candidates), Timeline (connection pattern for selected tuple).",
              "kfp": "Legitimate polling services (monitoring agents, NTP clients, software update checks). Maintain an exclusion lookup of known polling pairs.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1071",
                "T1573",
                "T1095"
              ],
              "dtype": "network",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES).\n• Ensure the following data sources are available: `index=network sourcetype=pan:traffic` or any proxy/firewall log with timestamps.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCalculate inter-arrival times (IAT) for each network flow tuple. Low coefficient of variation (CV < 0.15) with consistent intervals over 50+ connections is a strong beaconing signal. Pre-train a DensityFunction model (`beacon_density_model`) on the CV distribution across the environment to isolate the lowest-variance tuples. Exclude known infrastructure (NTP, heartbeats, health checks) via a lookup. Generate ES notables with risk attribution on the source host. Tune `avg_iat` bounds to catch bot…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `security_content_summariesonly` count FROM datamodel=Network_Traffic\n    BY _time span=1s, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port\n| `drop_dm_object_name(All_Traffic)`\n| sort src, dest, dest_port, _time\n| streamstats current=f window=1 last(_time) as prev_time by src, dest, dest_port\n| eval iat=_time - prev_time\n| where isnotnull(iat) AND iat > 0 AND iat < 86400\n| stats count, avg(iat) as avg_iat, stdev(iat) as std_iat, min(iat) as min_iat, max(iat) as max_iat by src, dest, dest_port\n| eval cv=round(std_iat / nullif(avg_iat, 0), 4)\n| where cv < 0.15 AND count > 50 AND avg_iat > 10 AND avg_iat < 3600\n| apply beacon_density_model threshold=0.01\n| rename \"IsOutlier(cv)\" as isBeacon\n| where isBeacon > 0\n| table src, dest, dest_port, count, avg_iat, cv\n| sort cv\n```\n\nUnderstanding this SPL\n\n**C2 Beaconing Detection via Time-Series Regularity (MLTK)** — Command-and-control (C2) implants call home at regular intervals. By measuring inter-arrival time variance for each src-dest-port tuple, this detection identifies periodic beaconing that blends into normal traffic volumes — catching Cobalt Strike, Sliver, and custom implants that frequency-based rules miss.\n\nDocumented **Data sources**: `index=network sourcetype=pan:traffic` or any proxy/firewall log with timestamps. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Detection type** for this use case: network — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic` — enable acceleration for that model.\n• Invokes macro `drop_dm_object_name(All_Traffic)` — in Search, use the UI or expand to inspect the underlying SPL.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by src, dest, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **iat** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(iat) AND iat > 0 AND iat < 86400` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cv** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cv < 0.15 AND count > 50 AND avg_iat > 10 AND avg_iat < 3600` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **C2 Beaconing Detection via Time-Series Regularity (MLTK)**): apply beacon_density_model threshold=0.01\n• Renames fields with `rename` for clarity or joins.\n• Filters the current rows with `where isBeacon > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **C2 Beaconing Detection via Time-Series Regularity (MLTK)**): table src, dest, dest_port, count, avg_iat, cv\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**C2 Beaconing Detection via Time-Series Regularity (MLTK)** — Command-and-control (C2) implants call home at regular intervals. By measuring inter-arrival time variance for each src-dest-port tuple, this detection identifies periodic beaconing that blends into normal traffic volumes — catching Cobalt Strike, Sliver, and custom implants that frequency-based rules miss.\n\nDocumented **Data sources**: `index=network sourcetype=pan:traffic` or any proxy/firewall log with timestamps. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: network — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter plot (avg_iat vs CV, colored by src), Table (beacon candidates), Timeline (connection pattern for selected tuple).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use machine learning to notice connections that are too regular in time, a hint of bot-like command and control when human browsing is messier.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.15.4",
              "n": "Credential Stuffing Burst Detection vs Baseline (MLTK)",
              "c": "critical",
              "f": "advanced",
              "v": "Credential stuffing attacks generate sudden spikes in failed authentication from distributed sources. By baselining normal failure rates per application endpoint and flagging statistical outliers, this detection catches automated credential replay before account lockouts cascade.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES)",
              "d": "`index=main sourcetype=WinEventLog:Security` (EventCode=4625), web application auth logs, `index=web sourcetype=access_combined`",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Authentication\n    WHERE Authentication.action=\"failure\"\n    BY _time span=5m, Authentication.app, Authentication.src\n| `drop_dm_object_name(Authentication)`\n| eventstats avg(count) as baseline_avg stdev(count) as baseline_std by app\n| eval z_score=round((count - baseline_avg) / nullif(baseline_std, 0), 2)\n| where z_score > 4\n| stats sum(count) as total_failures dc(src) as unique_sources max(z_score) as peak_z by _time, app\n| where unique_sources > 10 AND total_failures > 100\n| fit DensityFunction total_failures unique_sources into cred_stuffing_model\n| where isOutlier > 0\n| table _time, app, total_failures, unique_sources, peak_z\n| sort -total_failures",
              "m": "Aggregate authentication failures in 5-minute windows per application. Compare to 30-day rolling baselines built with MLTK DensityFunction. Credential stuffing signatures include: high failure count, many distinct source IPs, and rapid succession. Enrich with threat intelligence on source IPs via ES. Trigger adaptive response actions (IP block via firewall API, CAPTCHA enforcement) when z-score exceeds threshold. Correlate with successful logins from the same source pool to detect compromised accounts. Feed risk events to RBA on both source IPs and targeted accounts.",
              "z": "Line chart (failure rate vs baseline), Scatter plot (failures × unique sources), Table (burst events with z-scores).",
              "kfp": "Password reset campaigns, SSO provider outages causing mass re-authentication. Correlate with change management records.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1110.004"
              ],
              "dtype": "source, app",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES).\n• Ensure the following data sources are available: `index=main sourcetype=WinEventLog:Security` (EventCode=4625), web application auth logs, `index=web sourcetype=access_combined`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate authentication failures in 5-minute windows per application. Compare to 30-day rolling baselines built with MLTK DensityFunction. Credential stuffing signatures include: high failure count, many distinct source IPs, and rapid succession. Enrich with threat intelligence on source IPs via ES. Trigger adaptive response actions (IP block via firewall API, CAPTCHA enforcement) when z-score exceeds threshold. Correlate with successful logins from the same source pool to detect compromised ac…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `security_content_summariesonly` count FROM datamodel=Authentication\n    WHERE Authentication.action=\"failure\"\n    BY _time span=5m, Authentication.app, Authentication.src\n| `drop_dm_object_name(Authentication)`\n| eventstats avg(count) as baseline_avg stdev(count) as baseline_std by app\n| eval z_score=round((count - baseline_avg) / nullif(baseline_std, 0), 2)\n| where z_score > 4\n| stats sum(count) as total_failures dc(src) as unique_sources max(z_score) as peak_z by _time, app\n| where unique_sources > 10 AND total_failures > 100\n| fit DensityFunction total_failures unique_sources into cred_stuffing_model\n| where isOutlier > 0\n| table _time, app, total_failures, unique_sources, peak_z\n| sort -total_failures\n```\n\nUnderstanding this SPL\n\n**Credential Stuffing Burst Detection vs Baseline (MLTK)** — Credential stuffing attacks generate sudden spikes in failed authentication from distributed sources. By baselining normal failure rates per application endpoint and flagging statistical outliers, this detection catches automated credential replay before account lockouts cascade.\n\nDocumented **Data sources**: `index=main sourcetype=WinEventLog:Security` (EventCode=4625), web application auth logs, `index=web sourcetype=access_combined`. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Detection type** for this use case: source, app — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication` — enable acceleration for that model.\n• Invokes macro `drop_dm_object_name(Authentication)` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eventstats` rolls up events into metrics; results are split **by app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where z_score > 4` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by _time, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_sources > 10 AND total_failures > 100` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Credential Stuffing Burst Detection vs Baseline (MLTK)**): fit DensityFunction total_failures into cred_stuffing_model\n• Filters the current rows with `where 'IsOutlier(total_failures)' > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Credential Stuffing Burst Detection vs Baseline (MLTK)**): table _time, app, total_failures, unique_sources, peak_z\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.app, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Credential Stuffing Burst Detection vs Baseline (MLTK)** — Credential stuffing attacks generate sudden spikes in failed authentication from distributed sources. By baselining normal failure rates per application endpoint and flagging statistical outliers, this detection catches automated credential replay before account lockouts cascade.\n\nDocumented **Data sources**: `index=main sourcetype=WinEventLog:Security` (EventCode=4625), web application auth logs, `index=web sourcetype=access_combined`. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
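\n\nStep 1 also suggests enriching burst sources with threat intelligence via ES. One hedged option is to list recent matches from the ES threat-matching framework (the Threat_Intelligence data model; field names below follow its Threat_Activity dataset) and intersect them with your flagged source pool:\n\n```spl\n| tstats summariesonly=t count from datamodel=Threat_Intelligence.Threat_Activity\n    where earliest=-24h\n    by Threat_Activity.threat_match_value, Threat_Activity.threat_collection\n| `drop_dm_object_name(Threat_Activity)`\n| rename threat_match_value as src\n| sort - count\n```\n\nFilter the output to the sources your burst detection flagged (for example via a subsearch over the saved burst results) before raising urgency.\n\n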
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: source, app — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failure rate vs baseline), Scatter plot (failures × unique sources), Table (burst events with z-scores).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare a burst of failed logins to a normal day so coordinated guessing against our apps is obvious before accounts start to fall over.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.app, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.15.5",
              "n": "Risk Score Calibration with Supervised ML (MLTK)",
              "c": "high",
              "f": "expert",
              "v": "Risk-Based Alerting assigns scores heuristically, but not all risk events are equal. Training a supervised model on analyst-confirmed true/false positives re-weights risk contributions so the aggregate score better predicts real incidents — reducing alert fatigue and focusing SOC attention on the highest-fidelity signals.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES)",
              "d": "`index=risk`, `index=notable` (with analyst disposition), ES identity and asset lookups",
              "q": "| tstats `security_content_summariesonly` count sum(All_Risk.calculated_risk_score) as total_risk\n    FROM datamodel=Risk BY All_Risk.risk_object, All_Risk.risk_object_type, All_Risk.source\n| `drop_dm_object_name(All_Risk)`\n| join type=left risk_object\n    [search index=notable status IN (\"Closed - True Positive\",\"Closed - False Positive\")\n    | eval label=if(status=\"Closed - True Positive\", 1, 0)\n    | stats max(label) as is_true_positive by risk_object]\n| where isnotnull(is_true_positive)\n| fit RandomForestClassifier is_true_positive from total_risk, count into risk_calibration_model\n| table risk_object, total_risk, count, predicted(is_true_positive), probability\n| where 'predicted(is_true_positive)' = 1\n| sort -probability",
              "m": "Export 90+ days of closed notables with analyst dispositions (true positive, false positive) from ES. Join to the Risk datamodel to build feature vectors per risk object: total risk score, contributing source count, risk event types, time spread, and asset criticality. Train a RandomForestClassifier or GradientBoostedTrees model monthly. Use the model to re-score incoming risk aggregations, replacing or supplementing the raw sum. Publish re-calibrated scores as a lookup that ES correlation searches reference. Track model precision/recall in a dashboard to measure SOC efficiency gains over time.",
              "z": "ROC curve (model performance), Table (recalibrated risk objects), Bar chart (feature importance from model).",
              "kfp": "Model may overfit to historical analyst patterns if disposition quality is inconsistent. Regularly audit a sample of dispositions and retrain.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "risk_object",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES).\n• Ensure the following data sources are available: `index=risk`, `index=notable` (with analyst disposition), ES identity and asset lookups.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport 90+ days of closed notables with analyst dispositions (true positive, false positive) from ES. Join to the Risk datamodel to build feature vectors per risk object: total risk score, contributing source count, risk event types, time spread, and asset criticality. Train a RandomForestClassifier or GradientBoostedTrees model monthly. Use the model to re-score incoming risk aggregations, replacing or supplementing the raw sum. Publish re-calibrated scores as a lookup that ES correlation searc…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `security_content_summariesonly` count sum(All_Risk.calculated_risk_score) as total_risk\n    FROM datamodel=Risk BY All_Risk.risk_object, All_Risk.risk_object_type, All_Risk.source\n| `drop_dm_object_name(All_Risk)`\n| join type=left risk_object\n    [search index=notable status IN (\"Closed - True Positive\",\"Closed - False Positive\")\n    | eval label=if(status=\"Closed - True Positive\", 1, 0)\n    | stats max(label) as is_true_positive by risk_object]\n| where isnotnull(is_true_positive)\n| fit RandomForestClassifier is_true_positive from total_risk, count into risk_calibration_model\n| table risk_object, total_risk, count, predicted(is_true_positive), probability\n| where 'predicted(is_true_positive)' = 1\n| sort -probability\n```\n\nUnderstanding this SPL\n\n**Risk Score Calibration with Supervised ML (MLTK)** — Risk-Based Alerting assigns scores heuristically, but not all risk events are equal. Training a supervised model on analyst-confirmed true/false positives re-weights risk contributions so the aggregate score better predicts real incidents — reducing alert fatigue and focusing SOC attention on the highest-fidelity signals.\n\nDocumented **Data sources**: `index=risk`, `index=notable` (with analyst disposition), ES identity and asset lookups. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Detection type** for this use case: risk_object — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Risk` — enable acceleration for that model.\n• Invokes macro `drop_dm_object_name(All_Risk)` — in Search, use the UI or expand to inspect the underlying SPL.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnotnull(is_true_positive)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Risk Score Calibration with Supervised ML (MLTK)**): fit RandomForestClassifier is_true_positive from total_risk, count into risk_calibration_model\n• Pipeline stage (see **Risk Score Calibration with Supervised ML (MLTK)**): table risk_object, total_risk, count, predicted(is_true_positive)\n• Filters the current rows with `where 'predicted(is_true_positive)' = 1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: ROC curve (model performance), Table (recalibrated risk objects), Bar chart (feature importance from model).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We re-score risk with supervised models so the numbers line up with how analysts *actually* closed past cases, not just raw sums.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.15.6",
              "n": "Phishing Email NLP Classification (DSDL)",
              "c": "critical",
              "f": "expert",
              "v": "Rule-based phishing filters miss novel lure themes and zero-day phishing kits. A pre-trained NLP classifier scores email subjects and body snippets for phishing intent — catching social engineering campaigns, BEC attempts, and spear-phishing that bypass gateway reputation checks.",
              "t": "Splunk Deep Learning Toolkit (DSDL), Splunk Enterprise Security (ES), Splunk Add-on for Microsoft Office 365",
              "d": "`index=email sourcetype=o365:management:activity` or `sourcetype=ms:o365:reporting:messagetrace`",
              "q": "index=email sourcetype IN (\"o365:management:activity\",\"ms:o365:reporting:messagetrace\")\n| search Direction=\"Inbound\"\n| eval text_features=lower(Subject . \" \" . coalesce(BodyPreview,\"\"))\n| apply pretrained_phishing_classifier_dsdl\n| rename pred_phishing_proba as phishing_score\n| where phishing_score > 0.75\n| lookup identity_lookup identity AS RecipientAddress OUTPUT bunit, priority\n| eval risk=case(phishing_score>0.95,\"critical\", phishing_score>0.85,\"high\", true(),\"medium\")\n| table _time, SenderAddress, RecipientAddress, Subject, phishing_score, risk, bunit\n| sort -phishing_score",
              "m": "Deploy a pre-trained BERT or DistilBERT text classification model via DSDL's containerized Python environment. The model ingests concatenated subject + body preview and outputs a phishing probability. Fine-tune the model on your organization's reported phishing samples (use the phishing mailbox as labeled data) for domain-specific accuracy. Score inbound emails in near-real-time (5-minute batches). High-confidence hits (>0.95) trigger automatic quarantine via adaptive response; medium scores (0.75–0.95) create risk events for SOC review. Pair with URL reputation enrichment and attachment sandboxing for layered defense. Retrain quarterly as attacker lure themes evolve.",
              "z": "Table (scored emails), Pie chart (risk distribution), Line chart (phishing score trend), Bar chart (top sender domains).",
              "kfp": "Marketing emails with urgency language, internal security awareness training simulations. Maintain a sender domain allowlist for known senders.",
              "refs": "https://huggingface.co/models?search=phishing",
              "mitre": [
                "T1566.001",
                "T1566.002",
                "T1534"
              ],
              "dtype": "email",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Deep Learning Toolkit (DSDL), Splunk Enterprise Security (ES), Splunk Add-on for Microsoft Office 365.\n• Ensure the following data sources are available: `index=email sourcetype=o365:management:activity` or `sourcetype=ms:o365:reporting:messagetrace`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy a pre-trained BERT or DistilBERT text classification model via DSDL's containerized Python environment. The model ingests concatenated subject + body preview and outputs a phishing probability. Fine-tune the model on your organization's reported phishing samples (use the phishing mailbox as labeled data) for domain-specific accuracy. Score inbound emails in near-real-time (5-minute batches). High-confidence hits (>0.95) trigger automatic quarantine via adaptive response; medium scores (0.…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=email sourcetype IN (\"o365:management:activity\",\"ms:o365:reporting:messagetrace\")\n| search Direction=\"Inbound\"\n| eval text_features=lower(Subject . \" \" . coalesce(BodyPreview,\"\"))\n| apply pretrained_phishing_classifier_dsdl\n| rename pred_phishing_proba as phishing_score\n| where phishing_score > 0.75\n| lookup identity_lookup identity AS RecipientAddress OUTPUT bunit, priority\n| eval risk=case(phishing_score>0.95,\"critical\", phishing_score>0.85,\"high\", true(),\"medium\")\n| table _time, SenderAddress, RecipientAddress, Subject, phishing_score, risk, bunit\n| sort -phishing_score\n```\n\nUnderstanding this SPL\n\n**Phishing Email NLP Classification (DSDL)** — Rule-based phishing filters miss novel lure themes and zero-day phishing kits. A pre-trained NLP classifier scores email subjects and body snippets for phishing intent — catching social engineering campaigns, BEC attempts, and spear-phishing that bypass gateway reputation checks.\n\nDocumented **Data sources**: `index=email sourcetype=o365:management:activity` or `sourcetype=ms:o365:reporting:messagetrace`. **App/TA** (typical add-on context): Splunk Deep Learning Toolkit (DSDL), Splunk Enterprise Security (ES), Splunk Add-on for Microsoft Office 365. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: email.\n\n**Detection type** for this use case: email — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=email. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **text_features** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Phishing Email NLP Classification (DSDL)**): apply pretrained_phishing_classifier_dsdl\n• Renames fields with `rename` for clarity or joins.\n• Filters the current rows with `where phishing_score > 0.75` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Phishing Email NLP Classification (DSDL)**): table _time, SenderAddress, RecipientAddress, Subject, phishing_score, risk, bunit\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.action, All_Email.src_user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Phishing Email NLP Classification (DSDL)** — Rule-based phishing filters miss novel lure themes and zero-day phishing kits. A pre-trained NLP classifier scores email subjects and body snippets for phishing intent — catching social engineering campaigns, BEC attempts, and spear-phishing that bypass gateway reputation checks.\n\nDocumented **Data sources**: `index=email sourcetype=o365:management:activity` or `sourcetype=ms:o365:reporting:messagetrace`. **App/TA** (typical add-on context): Splunk Deep Learning Toolkit (DSDL), Splunk Enterprise Security (ES), Splunk Add-on for Microsoft Office 365. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: email — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (scored emails), Pie chart (risk distribution), Line chart (phishing score trend), Bar chart (top sender domains).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use language models to triage likely phishing in the mail stream so the front of the human queue is worth reading first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Email.All_Email by All_Email.action, All_Email.src_user | sort - count",
              "e": [
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.15.7",
              "n": "Notable Event Prioritization Model (MLTK)",
              "c": "high",
              "f": "expert",
              "v": "SOC analysts face hundreds of notables daily. A machine learning ranking model that predicts which notables are most likely true positives — based on historical disposition, asset criticality, risk score, time-of-day, and correlated events — lets the queue self-sort so analysts investigate the highest-value alerts first.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES)",
              "d": "`index=notable`, `index=risk`, ES asset and identity lookups",
              "q": "index=notable\n| eval label=case(status=\"Closed - True Positive\", 1, status=\"Closed - False Positive\", 0, true(), null())\n| where isnotnull(label)\n| lookup asset_lookup dest OUTPUT priority as asset_priority, bunit\n| eval hour=strftime(_time, \"%H\"), dow=strftime(_time, \"%w\")\n| join type=left rule_name\n    [search index=notable | stats avg(urgency_score) as avg_urgency, stdev(urgency_score) as std_urgency by rule_name]\n| eval features=mvappend(urgency, asset_priority, hour, dow, rule_name)\n| fit GradientBoostingClassifier label from urgency, asset_priority, hour, dow into notable_priority_model\n| where 'predicted(label)' = 1\n| sort -probability\n| table _time, rule_name, dest, user, urgency, probability, asset_priority",
              "m": "Collect at least 6 months of closed notables with consistent dispositions from SOC analysts. Build feature vectors including: rule name, urgency, asset priority, business unit, time features (hour, day-of-week), correlated risk event count, and historical true-positive rate for that rule. Train a GradientBoostingClassifier in MLTK. Deploy as a saved search that enriches the notable queue with a `priority_score` field. SOC leads review the model's feature importance report to identify poorly-performing correlation searches. Retrain monthly and track precision/recall. Consider A/B testing the model against the default urgency ranking.",
              "z": "Table (prioritized notable queue), Bar chart (feature importance), Line chart (model precision over time), Single value (daily TP prediction accuracy).",
              "kfp": "The model may deprioritize novel attack types not seen in training data. Maintain a random sample review of low-scored notables weekly.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "notable",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES).\n• Ensure the following data sources are available: `index=notable`, `index=risk`, ES asset and identity lookups.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect at least 6 months of closed notables with consistent dispositions from SOC analysts. Build feature vectors including: rule name, urgency, asset priority, business unit, time features (hour, day-of-week), correlated risk event count, and historical true-positive rate for that rule. Train a GradientBoostingClassifier in MLTK. Deploy as a saved search that enriches the notable queue with a `priority_score` field. SOC leads review the model's feature importance report to identify poorly-perf…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=notable\n| eval label=case(status=\"Closed - True Positive\", 1, status=\"Closed - False Positive\", 0, true(), null())\n| where isnotnull(label)\n| lookup asset_lookup dest OUTPUT priority as asset_priority, bunit\n| eval hour=strftime(_time, \"%H\"), dow=strftime(_time, \"%w\")\n| join type=left rule_name\n    [search index=notable | stats avg(urgency_score) as avg_urgency, stdev(urgency_score) as std_urgency by rule_name]\n| eval features=mvappend(urgency, asset_priority, hour, dow, rule_name)\n| fit GradientBoostingClassifier label from urgency, asset_priority, hour, dow into notable_priority_model\n| where 'predicted(label)' = 1\n| sort -probability\n| table _time, rule_name, dest, user, urgency, probability, asset_priority\n```\n\nUnderstanding this SPL\n\n**Notable Event Prioritization Model (MLTK)** — SOC analysts face hundreds of notables daily. A machine learning ranking model that predicts which notables are most likely true positives — based on historical disposition, asset criticality, risk score, time-of-day, and correlated events — lets the queue self-sort so analysts investigate the highest-value alerts first.\n\nDocumented **Data sources**: `index=notable`, `index=risk`, ES asset and identity lookups. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: notable.\n\n**Detection type** for this use case: notable — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=notable. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **label** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(label)` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation; here it computes **rule_tp_rate**, the historical true-positive rate per rule.\n• Pipeline stage (see **Notable Event Prioritization Model (MLTK)**): fit GradientBoostingClassifier label from urgency, asset_priority, hour, dow, rule_tp_rate into notable_priority_model\n• Filters the current rows with `where 'predicted(label)' = 1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Notable Event Prioritization Model (MLTK)**): table _time, rule_name, dest, user, urgency, rule_tp_rate, asset_priority\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (prioritized notable queue), Bar chart (feature importance), Line chart (model precision over time), Single value (daily TP prediction accuracy).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use models to order notables by true urgency so the team works the right ticket next when everything seems loud.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.15.8",
              "n": "User and Entity Behavior Analytics — Anomalous Process Execution (MLTK)",
              "c": "critical",
              "f": "advanced",
              "v": "Establishing per-host baselines of normal process execution and flagging rare or never-before-seen processes catches fileless malware, living-off-the-land binaries (LOLBins), and novel toolkits that signature-based EDR misses.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES), Sysmon TA",
              "d": "`index=endpoint sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode=1)",
              "q": "| tstats `security_content_summariesonly` count FROM datamodel=Endpoint.Processes\n    BY _time span=1d, Processes.dest, Processes.process_name, Processes.user\n| `drop_dm_object_name(Processes)`\n| eventstats dc(dest) as host_prevalence by process_name\n| eventstats avg(count) as user_avg stdev(count) as user_std by dest, process_name\n| eval rarity_score=1 - (host_prevalence / max_hosts)\n| eval z_score=round((count - user_avg) / nullif(user_std, 0), 2)\n| where (rarity_score > 0.95 OR z_score > 3) AND count < 5\n| fit DensityFunction count by dest into process_anomaly_model\n| where isOutlier > 0\n| lookup asset_lookup dest OUTPUT priority, bunit, category\n| table _time, dest, user, process_name, count, rarity_score, z_score, priority\n| sort -rarity_score",
              "m": "Collect Sysmon Event ID 1 (process creation) and build a 30-day baseline of process-to-host mappings. Calculate host prevalence (how many hosts run this process) and per-host execution frequency. Processes that appear on fewer than 5% of hosts and have unusually high or first-time execution counts are flagged. Use DensityFunction for multivariate outlier detection across count and prevalence. Enrich with asset criticality from ES. Feed as risk events with MITRE T1059 annotation. Exclude known deployment tools (SCCM, Puppet, Chef) via a managed allowlist. Review and update the allowlist monthly.",
              "z": "Table (rare processes with rarity score), Scatter plot (rarity vs z-score), Bar chart (top anomalous hosts).",
              "kfp": "Software deployments, developer tools, one-time admin utilities. Maintain per-host and per-role exception lists.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [
                "T1059",
                "T1218",
                "T1036"
              ],
              "dtype": "endpoint",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES), Sysmon TA.\n• Ensure the following data sources are available: `index=endpoint sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode=1).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Sysmon Event ID 1 (process creation) and build a 30-day baseline of process-to-host mappings. Calculate host prevalence (how many hosts run this process) and per-host execution frequency. Processes that appear on fewer than 5% of hosts and have unusually high or first-time execution counts are flagged. Use DensityFunction for multivariate outlier detection across count and prevalence. Enrich with asset criticality from ES. Feed as risk events with MITRE T1059 annotation. Exclude known de…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `security_content_summariesonly` count FROM datamodel=Endpoint.Processes\n    BY _time span=1d, Processes.dest, Processes.process_name, Processes.user\n| `drop_dm_object_name(Processes)`\n| eventstats dc(dest) as host_prevalence by process_name\n| eventstats avg(count) as user_avg stdev(count) as user_std by dest, process_name\n| eval rarity_score=1 - (host_prevalence / max_hosts)\n| eval z_score=round((count - user_avg) / nullif(user_std, 0), 2)\n| where (rarity_score > 0.95 OR z_score > 3) AND count < 5\n| fit DensityFunction count by dest into process_anomaly_model\n| where isOutlier > 0\n| lookup asset_lookup dest OUTPUT priority, bunit, category\n| table _time, dest, user, process_name, count, rarity_score, z_score, priority\n| sort -rarity_score\n```\n\nUnderstanding this SPL\n\n**User and Entity Behavior Analytics — Anomalous Process Execution (MLTK)** — Establishing per-host baselines of normal process execution and flagging rare or never-before-seen processes catches fileless malware, living-off-the-land binaries (LOLBins), and novel toolkits that signature-based EDR misses.\n\nDocumented **Data sources**: `index=endpoint sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode=1). **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES), Sysmon TA. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Detection type** for this use case: endpoint — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Invokes macro `drop_dm_object_name(Processes)` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eventstats` computes **max_hosts**, the distinct host count across all rows, so rarity can be expressed as a fraction of the fleet.\n• `eventstats` rolls up events into metrics; results are split **by process_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by dest, process_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **rarity_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **z_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (rarity_score > 0.95 OR z_score > 3) AND count < 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **User and Entity Behavior Analytics — Anomalous Process Execution (MLTK)**): fit DensityFunction count by \"dest\" into process_anomaly_model\n• Filters the current rows with `where 'IsOutlier(count)' > 0` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Pipeline stage (see **User and Entity Behavior Analytics — Anomalous Process Execution (MLTK)**): table _time, dest, user, process_name, count, rarity_score, z_score, priority\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest, Processes.process_name, Processes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**User and Entity Behavior Analytics — Anomalous Process Execution (MLTK)** — Establishing per-host baselines of normal process execution and flagging rare or never-before-seen processes catches fileless malware, living-off-the-land binaries (LOLBins), and novel toolkits that signature-based EDR misses.\n\nDocumented **Data sources**: `index=endpoint sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` (EventCode=1). **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Enterprise Security (ES), Sysmon TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
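\n\nThe 30-day baseline from Step 1 also supports a simple first-seen check: run over the full window and keep host/process pairs whose earliest appearance falls inside the last day. A hedged sketch:\n\n```spl\n| tstats summariesonly=t count min(_time) as first_seen\n    from datamodel=Endpoint.Processes\n    where earliest=-30d@d\n    by Processes.dest, Processes.process_name\n| `drop_dm_object_name(Processes)`\n| where first_seen >= relative_time(now(), \"-1d@d\")\n| sort - first_seen\n```\n\nPairs surfacing here feed the same allowlist and risk-event flow as the main detection.\n\n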
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Detection type** for this use case: endpoint — interpret thresholds and fields in that context.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rare processes with rarity score), Scatter plot (rarity vs z-score), Bar chart (top anomalous hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use behavior baselines to catch odd program launches and parent chains that single-event rules would miss in the noise.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest, Processes.process_name, Processes.user | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 8,
            "none": 0
          }
        },
        {
          "i": "10.16",
          "n": "Security Operations Trending",
          "u": [
            {
              "i": "10.16.1",
              "n": "Attack Surface Change Trending",
              "c": "high",
              "f": "advanced",
              "v": "Tracking newly exposed services and listening ports over time reveals shadow IT, misconfigurations, and scan gaps before attackers find them. Sudden increases in open ports or services often precede breaches or follow failed change control.",
              "t": "Splunk Enterprise Security, Qualys/Nessus/Rapid7 TAs, Splunk Add-on for Tenable",
              "d": "`index=main` vuln scanner assets (`sourcetype=qualys*`, `nessus*`, `vuln:*`), optional `sourcetype=stash` nightly exposure summaries; `Intrusion_Detection` for correlated context",
              "q": "index=main (sourcetype=\"qualys:asset\" OR sourcetype=\"tenable:nessus\" OR sourcetype=\"vuln:asset\")\n| eval port_key=coalesce(dest_port, port, listening_port)\n| eval svc_key=coalesce(protocol, \"tcp\") . \":\" . port_key\n| bin _time span=1d\n| stats dc(svc_key) as distinct_services dc(host) as hosts_affected by _time\n| timechart span=1w max(distinct_services) as peak_exposed_services max(hosts_affected) as hosts_with_exposure",
              "m": "Normalize host and port fields per your scanner TA; backfill a KV store of first-seen `(host, port, protocol)` tuples and flag deltas as “new exposure.” Run weekly after patch cycles. Exclude approved scanner ranges from “new” logic. Join high-risk ports to a lookup for prioritization. Schedule the search after nightly import jobs complete.",
              "z": "Line chart (distinct services and hosts over time), single value (week-over-week delta), table (top new port/protocol pairs).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security, Qualys/Nessus/Rapid7 TAs, Splunk Add-on for Tenable.\n• Ensure the following data sources are available: `index=main` vuln scanner assets (`sourcetype=qualys*`, `nessus*`, `vuln:*`), optional `sourcetype=stash` nightly exposure summaries; `Intrusion_Detection` for correlated context.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize host and port fields per your scanner TA; backfill a KV store of first-seen `(host, port, protocol)` tuples and flag deltas as “new exposure.” Run weekly after patch cycles. Exclude approved scanner ranges from “new” logic. Join high-risk ports to a lookup for prioritization. Schedule the search after nightly import jobs complete.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=main (sourcetype=\"qualys:asset\" OR sourcetype=\"tenable:nessus\" OR sourcetype=\"vuln:asset\")\n| eval port_key=coalesce(dest_port, port, listening_port)\n| eval svc_key=coalesce(protocol, \"tcp\") . \":\" . port_key\n| bin _time span=1d\n| stats dc(svc_key) as distinct_services dc(host) as hosts_affected by _time\n| timechart span=1w max(distinct_services) as peak_exposed_services max(hosts_affected) as hosts_with_exposure\n```\n\nUnderstanding this SPL\n\n**Attack Surface Change Trending** — Tracking newly exposed services and listening ports over time reveals shadow IT, misconfigurations, and scan gaps before attackers find them. Sudden increases in open ports or services often precede breaches or follow failed change control.\n\nDocumented **Data sources**: `index=main` vuln scanner assets (`sourcetype=qualys*`, `nessus*`, `vuln:*`), optional `sourcetype=stash` nightly exposure summaries; `Intrusion_Detection` for correlated context. **App/TA** (typical add-on context): Splunk Enterprise Security, Qualys/Nessus/Rapid7 TAs, Splunk Add-on for Tenable. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: main; **sourcetype**: qualys:asset, tenable:nessus, vuln:asset. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=main, sourcetype=\"qualys:asset\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **port_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **svc_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1w** buckets — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by _time span=1w, IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Attack Surface Change Trending** — Tracking newly exposed services and listening ports over time reveals shadow IT, misconfigurations, and scan gaps before attackers find them. Sudden increases in open ports or services often precede breaches or follow failed change control.\n\nDocumented **Data sources**: `index=main` vuln scanner assets (`sourcetype=qualys*`, `nessus*`, `vuln:*`), optional `sourcetype=stash` nightly exposure summaries; `Intrusion_Detection` for correlated context. **App/TA** (typical add-on context): Splunk Enterprise Security, Qualys/Nessus/Rapid7 TAs, Splunk Add-on for Tenable. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (distinct services and hosts over time), single value (week-over-week delta), table (top new port/protocol pairs).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how exposed services and open paths change over time so surprise growth in our footprint is visible before an attacker maps it for us.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection (context)",
                "N/A for raw scanner (use normalized asset tags if mapped)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest span=1w | sort - count",
              "e": [
                "qualys",
                "tenable"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.16.2",
              "n": "SIEM Alert-to-Incident Ratio Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Comparing ES notables to analyst-confirmed incidents shows whether the SOC is drowning in noise or under-escalating real cases. A falling ratio over 90 days may mean tuning drift; a rising ratio can indicate improved correlation or staffing effects worth validating.",
              "t": "Splunk Enterprise Security",
              "d": "`index=notable`, disposition fields (`status`, `urgency`, `owner`), optional `index=itsi` if incidents sync externally",
              "q": "index=notable earliest=-90d\n| eval confirmed=if(match(status,\"Closed - True Positive\"),1,0)\n| eval triaged_incident=if(match(status,\"(New|In Progress|Pending)\") OR match(owner,\"SOC|Incident\"),1,0)\n| bin _time span=1w\n| stats count as alerts sum(confirmed) as true_positives sum(triaged_incident) as in_flight by _time\n| eval tp_rate_pct=round(100*true_positives/nullif(alerts,0),2)\n| timechart span=1w first(alerts) as weekly_alerts first(tp_rate_pct) as true_positive_rate_pct",
              "m": "Align `status` values with your ES workflow (Closed - True Positive / False Positive). If you use formal incident tickets, join notables to `incident_id` from a lookup populated by SOAR or ITSM export. Review weekly with detection engineering to separate tuning opportunities from genuine threat volume changes. Exclude test rules via `rule_name` lookup.",
              "z": "Dual-axis line chart (alerts vs true positives), column chart (weekly alert volume), single value (90-day TP rate trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security.\n• Ensure the following data sources are available: `index=notable`, disposition fields (`status`, `urgency`, `owner`), optional `index=itsi` if incidents sync externally.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign `status` values with your ES workflow (Closed - True Positive / False Positive). If you use formal incident tickets, join notables to `incident_id` from a lookup populated by SOAR or ITSM export. Review weekly with detection engineering to separate tuning opportunities from genuine threat volume changes. Exclude test rules via `rule_name` lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=notable earliest=-90d\n| eval confirmed=if(match(status,\"Closed - True Positive\"),1,0)\n| eval triaged_incident=if(match(status,\"(New|In Progress|Pending)\") OR match(owner,\"SOC|Incident\"),1,0)\n| bin _time span=1w\n| stats count as alerts sum(confirmed) as true_positives sum(triaged_incident) as in_flight by _time\n| eval tp_rate_pct=round(100*true_positives/nullif(alerts,0),2)\n| timechart span=1w first(alerts) as weekly_alerts first(tp_rate_pct) as true_positive_rate_pct\n```\n\nUnderstanding this SPL\n\n**SIEM Alert-to-Incident Ratio Trending** — Comparing ES notables to analyst-confirmed incidents shows whether the SOC is drowning in noise or under-escalating real cases. A falling ratio over 90 days may mean tuning drift; a rising ratio can indicate improved correlation or staffing effects worth validating.\n\nDocumented **Data sources**: `index=notable`, disposition fields (`status`, `urgency`, `owner`), optional `index=itsi` if incidents sync externally. **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: notable.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=notable, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **confirmed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **triaged_incident** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **tp_rate_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1w** buckets — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app span=1w | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SIEM Alert-to-Incident Ratio Trending** — Comparing ES notables to analyst-confirmed incidents shows whether the SOC is drowning in noise or under-escalating real cases. A falling ratio over 90 days may mean tuning drift; a rising ratio can indicate improved correlation or staffing effects worth validating.\n\nDocumented **Data sources**: `index=notable`, disposition fields (`status`, `urgency`, `owner`), optional `index=itsi` if incidents sync externally. **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Dual-axis line chart (alerts vs true positives), column chart (weekly alert volume), single value (90-day TP rate trend).",
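              "_spl_sketch": "A minimal sketch of the notable-to-ticket join described in Step 1, assuming a lookup named `incident_export` (fields: event_id, incident_id) populated by your SOAR or ITSM export; both names are illustrative assumptions:\n\n```spl\nindex=notable earliest=-90d\n| lookup incident_export event_id OUTPUT incident_id\n| eval ticketed=if(isnotnull(incident_id),1,0)\n| bin _time span=1w\n| stats count as alerts sum(ticketed) as ticketed by _time\n| eval ticketed_rate_pct=round(100*ticketed/nullif(alerts,0),2)\n```",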
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many raw alerts turn into real tracked incidents so we know whether we are just noisy or actually closing the loop.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Alerts (if mirrored into CIM)",
                "N/A for notable index"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app span=1w | sort - count",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.16.3",
              "n": "Mean Time to Detect (MTTD) Trending",
              "c": "critical",
              "f": "advanced",
              "v": "MTTD measures how quickly the organization notices malicious activity after it starts. Quarter-over-quarter trending supports leadership reporting, purple-team exercises, and investment in detection content versus response tooling.",
              "t": "Splunk Enterprise Security, optional SOAR for dwell-time fields",
              "d": "`index=notable` with `time_to_detect` / dwell-time enrichment, `index=risk` for first-risk attribution time, `sourcetype=stash` MTTD summaries from correlation searches",
              "q": "index=notable earliest=-400d\n| eval mttd_sec=coalesce(time_to_detect, dwell_time_seconds, first_detection_lag)\n| where isnotnull(mttd_sec) AND mttd_sec>=0 AND mttd_sec<8640000\n| eval quarter=strftime(_time,\"%Y\") . \"-Q\" . ceil(tonumber(strftime(_time,\"%m\"))/3)\n| stats median(mttd_sec) as median_mttd_sec p95(mttd_sec) as p95_mttd_sec count as cases by quarter\n| eval median_mttd_hrs=round(median_mttd_sec/3600,2)\n| table quarter, cases, median_mttd_hrs, p95_mttd_sec",
              "m": "Populate `time_to_detect` using ES notable workflow (first event time vs detection time) or a scripted lookup from EDR dwell analytics. Cap outliers at plausible maxima to avoid bad timestamps skewing medians. Reconcile calendar quarters with board reporting. Document methodology once so year-over-year comparisons stay valid when fields change.",
              "z": "Line chart (median MTTD hours by quarter), table (case counts and percentiles), box plot if exporting sample-level data to a summary index.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security, optional SOAR for dwell-time fields.\n• Ensure the following data sources are available: `index=notable` with `time_to_detect` / dwell-time enrichment, `index=risk` for first-risk attribution time, `sourcetype=stash` MTTD summaries from correlation searches.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPopulate `time_to_detect` using ES notable workflow (first event time vs detection time) or a scripted lookup from EDR dwell analytics. Cap outliers at plausible maxima to avoid bad timestamps skewing medians. Reconcile calendar quarters with board reporting. Document methodology once so year-over-year comparisons stay valid when fields change.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=notable earliest=-400d\n| eval mttd_sec=coalesce(time_to_detect, dwell_time_seconds, first_detection_lag)\n| where isnotnull(mttd_sec) AND mttd_sec>=0 AND mttd_sec<8640000\n| eval quarter=strftime(_time,\"%Y\") . \"-Q\" . ceil(tonumber(strftime(_time,\"%m\"))/3)\n| stats median(mttd_sec) as median_mttd_sec p95(mttd_sec) as p95_mttd_sec count as cases by quarter\n| eval median_mttd_hrs=round(median_mttd_sec/3600,2)\n| table quarter, cases, median_mttd_hrs, p95_mttd_sec\n```\n\nUnderstanding this SPL\n\n**Mean Time to Detect (MTTD) Trending** — MTTD measures how quickly the organization notices malicious activity after it starts. Quarter-over-quarter trending supports leadership reporting, purple-team exercises, and investment in detection content versus response tooling.\n\nDocumented **Data sources**: `index=notable` with `time_to_detect` / dwell-time enrichment, `index=risk` for first-risk attribution time, `sourcetype=stash` MTTD summaries from correlation searches. **App/TA** (typical add-on context): Splunk Enterprise Security, optional SOAR for dwell-time fields. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: notable.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=notable, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mttd_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(mttd_sec) AND mttd_sec>=0 AND mttd_sec<8640000` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **quarter** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by quarter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **median_mttd_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Mean Time to Detect (MTTD) Trending**): table quarter, cases, median_mttd_hrs, p95_mttd_sec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (median MTTD hours by quarter), table (case counts and percentiles), box plot if exporting sample-level data to a summary index.\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
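              "_spl_sketch": "A minimal sketch of deriving detection lag when only a first-event enrichment exists, assuming a field `first_event_time` (epoch seconds) added by your notable workflow or EDR dwell lookup; the field name is an assumption for illustration:\n\n```spl\nindex=notable earliest=-400d\n| eval mttd_sec=if(isnotnull(first_event_time), _time - first_event_time, null())\n| where isnotnull(mttd_sec) AND mttd_sec>=0 AND mttd_sec<8640000\n| stats median(mttd_sec) as median_mttd_sec count as cases\n```",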
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how long it takes to notice a problem from first evidence so we can show leadership whether we are getting faster or slower over the quarter.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.16.4",
              "n": "Mean Time to Respond (MTTR) Trending",
              "c": "critical",
              "f": "advanced",
              "v": "MTTR reflects containment and remediation speed once a detection exists. Sustained increases may signal playbook gaps, staffing shortages, or complex multi-domain incidents; decreases validate SOAR and runbook maturity.",
              "t": "Splunk Enterprise Security, Splunk SOAR (optional)",
              "d": "`index=notable` closed events, `sourcetype=stash` SOAR playbook duration summaries",
              "q": "index=notable earliest=-180d status=\"Closed*\"\n| eval mttr_sec=coalesce(duration, if(isnotnull(closed_time), closed_time - _time, null()), time_to_close)\n| where isnotnull(mttr_sec) AND mttr_sec>=0\n| bin _time span=1mon\n| stats median(mttr_sec) as median_mttr_sec p95(mttr_sec) as p95_mttr_sec count by _time\n| eval median_mttr_hrs=round(median_mttr_sec/3600,2)\n| timechart span=1mon first(median_mttr_hrs) as median_mttr_hrs first(p95_mttr_sec) as p95_mttr_sec",
              "m": "Map `closed_time` and open times from ES fields your deployment actually fills; use `transaction` or `stats` range if only event pairs exist. Exclude pending vendor response cases via a tag. Segment by `severity` or `security_domain` for actionable drilldowns. Align MTTR definitions with SLA contracts to avoid mismatched executive expectations.",
              "z": "Line chart (median and P95 MTTR), stacked bar (case volume by closure month), table (top rules by MTTR).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security, Splunk SOAR (optional).\n• Ensure the following data sources are available: `index=notable` closed events, `sourcetype=stash` SOAR playbook duration summaries.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `closed_time` and open times from ES fields your deployment actually fills; use `transaction` or `stats` range if only event pairs exist. Exclude pending vendor response cases via a tag. Segment by `severity` or `security_domain` for actionable drilldowns. Align MTTR definitions with SLA contracts to avoid mismatched executive expectations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=notable earliest=-180d status=\"Closed*\"\n| eval mttr_sec=coalesce(duration, if(isnotnull(closed_time), closed_time - _time, null()), time_to_close)\n| where isnotnull(mttr_sec) AND mttr_sec>=0\n| bin _time span=1mon\n| stats median(mttr_sec) as median_mttr_sec p95(mttr_sec) as p95_mttr_sec count by _time\n| eval median_mttr_hrs=round(median_mttr_sec/3600,2)\n| timechart span=1mon first(median_mttr_hrs) as median_mttr_hrs first(p95_mttr_sec) as p95_mttr_sec\n```\n\nUnderstanding this SPL\n\n**Mean Time to Respond (MTTR) Trending** — MTTR reflects containment and remediation speed once a detection exists. Sustained increases may signal playbook gaps, staffing shortages, or complex multi-domain incidents; decreases validate SOAR and runbook maturity.\n\nDocumented **Data sources**: `index=notable` closed events, `sourcetype=stash` SOAR playbook duration summaries. **App/TA** (typical add-on context): Splunk Enterprise Security, Splunk SOAR (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: notable.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=notable, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mttr_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(mttr_sec) AND mttr_sec>=0` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **median_mttr_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1mon** buckets — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Line chart (median and P95 MTTR), stacked bar (case volume by closure month), table (top rules by MTTR).",
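              "_spl_sketch": "A minimal sketch of the stats-range fallback mentioned in Step 1 for deployments that only log open/close event pairs, assuming `event_id` is a stable pairing key (an assumption; substitute your notable identifier):\n\n```spl\nindex=notable earliest=-180d (status=\"New\" OR status=\"Closed*\")\n| stats range(_time) as mttr_sec earliest(_time) as _time by event_id\n| where mttr_sec>0\n| bin _time span=1mon\n| stats median(mttr_sec) as median_mttr_sec count by _time\n```",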
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how long it takes to contain and clean up an issue from first alert so we can prove response time and where the handoffs hurt.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.16.5",
              "n": "Phishing Attempt Volume Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Weekly volume of blocked phishing attempts indicates campaign intensity and user targeting. Spikes justify awareness programs, mailbox rule reviews, and gateway tuning without waiting for a successful compromise.",
              "t": "Splunk Add-on for Microsoft Office 365, Proofpoint TA, Cisco ESA TA, Splunk Enterprise Security",
              "d": "`index=email` or secure email gateway sourcetypes, `Alerts` data model–eligible email threat events",
              "q": "index=email OR index=main (sourcetype=\"ms:o365:reporting:messagetrace\" OR sourcetype=\"proofpoint:ppt:msg\" OR sourcetype=\"cisco:esa:mail\")\n| search threat_type=\"*phish*\" OR category=\"phishing\" OR verdict=\"malicious\"\n| bin _time span=1w\n| stats count as blocked_phish by _time\n| timechart span=1w sum(blocked_phish) as phishing_blocked",
              "m": "Normalize vendor-specific threat classifications into a `threat_type` field at ingest. Deduplicate by message ID where possible. Correlate with `index=notable` for user-reported phishing that bypassed automation. Refresh weekly in SOC metrics reviews alongside click-through simulations.",
              "z": "Column chart (blocked phishing per week), line chart (trend vs 12-week baseline), pie chart (subcategories if available).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365, Proofpoint TA, Cisco ESA TA, Splunk Enterprise Security.\n• Ensure the following data sources are available: `index=email` or secure email gateway sourcetypes, `Alerts` data model–eligible email threat events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize vendor-specific threat classifications into a `threat_type` field at ingest. Deduplicate by message ID where possible. Correlate with `index=notable` for user-reported phishing that bypassed automation. Refresh weekly in SOC metrics reviews alongside click-through simulations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=email OR index=main (sourcetype=\"ms:o365:reporting:messagetrace\" OR sourcetype=\"proofpoint:ppt:msg\" OR sourcetype=\"cisco:esa:mail\")\n| search threat_type=\"*phish*\" OR category=\"phishing\" OR verdict=\"malicious\"\n| bin _time span=1w\n| stats count as blocked_phish by _time\n| timechart span=1w sum(blocked_phish) as phishing_blocked\n```\n\nUnderstanding this SPL\n\n**Phishing Attempt Volume Trending** — Weekly volume of blocked phishing attempts indicates campaign intensity and user targeting. Spikes justify awareness programs, mailbox rule reviews, and gateway tuning without waiting for a successful compromise.\n\nDocumented **Data sources**: `index=email` or secure email gateway sourcetypes, `Alerts` data model–eligible email threat events. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365, Proofpoint TA, Cisco ESA TA, Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: email, main; **sourcetype**: ms:o365:reporting:messagetrace, proofpoint:ppt:msg, cisco:esa:mail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=email, index=main, sourcetype=\"ms:o365:reporting:messagetrace\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1w** buckets — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app span=1w | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Phishing Attempt Volume Trending** — Weekly volume of blocked phishing attempts indicates campaign intensity and user targeting. 
Spikes justify awareness programs, mailbox rule reviews, and gateway tuning without waiting for a successful compromise.\n\nDocumented **Data sources**: `index=email` or secure email gateway sourcetypes, `Alerts` data model–eligible email threat events. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365, Proofpoint TA, Cisco ESA TA, Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (blocked phishing per week), line chart (trend vs 12-week baseline), pie chart (subcategories if available).",
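              "_spl_sketch": "A minimal sketch of the message-ID deduplication mentioned in Step 1; `message_id` is an assumed field name, since each vendor TA extracts it differently:\n\n```spl\n(index=email OR index=main) (sourcetype=\"ms:o365:reporting:messagetrace\" OR sourcetype=\"proofpoint:ppt:msg\" OR sourcetype=\"cisco:esa:mail\")\n| search threat_type=\"*phish*\" OR category=\"phishing\"\n| dedup message_id\n| timechart span=1w count as phishing_blocked\n```",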
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend phishing and similar mail so we can see a campaign season or a new bypass path without guessing from anecdotes.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app span=1w | sort - count",
              "e": [
                "cisco",
                "m365",
                "proofpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.16.6",
              "n": "Firewall Rule Hit Rate Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Top rules by hit count over 30 days expose noisy policies, overly broad permits, and candidates for segmentation review. Flat or rising hit rates on “catch-all” rules often precede policy hygiene projects.",
              "t": "`Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), Splunk Common Information Model",
              "d": "`Network_Traffic` data model (firewall allow/deny), `index=network` `sourcetype=pan:traffic` or equivalent",
              "q": "| tstats `summariesonly` count FROM datamodel=Network_Traffic WHERE nodename=All_Traffic All_Traffic.action IN (\"allowed\",\"accept\") earliest=-30d\n    BY _time span=1d All_Traffic.rule\n| `drop_dm_object_name(All_Traffic)`\n| timechart span=1d sum(count) as hits by rule useother=f limit=10",
              "m": "If `rule` is missing, derive from `policy_id` or `rule_uid` via vendor TA. Exclude health-check and internal monitoring sources with a subnet lookup. Use accelerated DM or summary `sourcetype=stash` for very high volume. Pair with change tickets when hit rates jump after policy pushes.",
              "z": "Bar chart (top rules by hits), line chart (daily hits for top five rules), treemap (rule share of total hits).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), Splunk Common Information Model.\n• Ensure the following data sources are available: `Network_Traffic` data model (firewall allow/deny), `index=network` `sourcetype=pan:traffic` or equivalent.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIf `rule` is missing, derive from `policy_id` or `rule_uid` via vendor TA. Exclude health-check and internal monitoring sources with a subnet lookup. Use accelerated DM or summary `sourcetype=stash` for very high volume. Pair with change tickets when hit rates jump after policy pushes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` count FROM datamodel=Network_Traffic WHERE nodename=All_Traffic All_Traffic.action IN (\"allowed\",\"accept\") earliest=-30d\n    BY _time span=1d All_Traffic.rule\n| `drop_dm_object_name(All_Traffic)`\n| timechart span=1d sum(count) as hits by rule useother=f limit=10\n```\n\nUnderstanding this SPL\n\n**Firewall Rule Hit Rate Trending** — Top rules by hit count over 30 days expose noisy policies, overly broad permits, and candidates for segmentation review. Flat or rising hit rates on “catch-all” rules often precede policy hygiene projects.\n\nDocumented **Data sources**: `Network_Traffic` data model (firewall allow/deny), `index=network` `sourcetype=pan:traffic` or equivalent. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), Splunk Common Information Model. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic` — enable acceleration for that model.\n• Invokes macro `drop_dm_object_name(All_Traffic)` — in Search, use the UI or expand to inspect the underlying SPL.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by rule useother=f limit=10** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1d | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Firewall Rule Hit Rate Trending** — Top rules by hit count over 30 days expose noisy policies, overly broad permits, and candidates for segmentation review. Flat or rising hit rates on “catch-all” rules often precede policy hygiene projects.\n\nDocumented **Data sources**: `Network_Traffic` data model (firewall allow/deny), `index=network` `sourcetype=pan:traffic` or equivalent. **App/TA** (typical add-on context): `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, Cisco Secure Firewall Add-on, `Splunk_TA_juniper` (SRX), Splunk Common Information Model. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top rules by hits), line chart (daily hits for top five rules), treemap (rule share of total hits).",
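              "_spl_sketch": "A minimal sketch of the subnet exclusion mentioned in Step 1, assuming a CIDR-matched lookup named `monitoring_subnets` (input field src, output field is_monitor) that you define; the lookup name, fields, and CIDR match type are illustrative assumptions:\n\n```spl\n| tstats `summariesonly` count FROM datamodel=Network_Traffic WHERE nodename=All_Traffic earliest=-30d BY _time span=1d All_Traffic.rule All_Traffic.src\n| `drop_dm_object_name(All_Traffic)`\n| lookup monitoring_subnets src OUTPUT is_monitor\n| where isnull(is_monitor)\n| timechart span=1d sum(count) as hits by rule useother=f limit=10\n```",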
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how often our firewall rules actually fire so policy pushes and new apps do not silently break or get bypassed in one region.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1d | sort - agg_value",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_firepower",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.16.7",
              "n": "Risk Score Distribution Trending",
              "c": "high",
              "f": "advanced",
              "v": "A shifting histogram of Enterprise Security risk scores shows whether the environment is accumulating unmanaged risk objects or whether tuning and closure activities are working. Persistent high-score tails warrant entity prioritization and threat-hunting focus.",
              "t": "Splunk Enterprise Security",
              "d": "`index=risk`, Risk data model",
              "q": "index=risk earliest=-90d\n| eval band=case(risk_score<20,\"0-19\", risk_score<40,\"20-39\", risk_score<60,\"40-59\", risk_score<80,\"60-79\", true(),\"80-100\")\n| bin _time span=1w\n| stats count by _time, band\n| xyseries _time band count\n| fillnull value=0",
              "m": "Ensure `risk_score` reflects the consolidated object score post-RBA aggregation. Filter to active risk objects if you archive stale entries. Compare week-over-week using `timewrap` on bucketed counts. Feed executive dashboards with simplified bands; keep raw drilldown for analysts in ES Risk Analysis.",
              "z": "Stacked column chart (count by band over weeks), heatmap (band × week), line chart (80–100 band trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security.\n• Ensure the following data sources are available: `index=risk`, Risk data model.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure `risk_score` reflects the consolidated object score post-RBA aggregation. Filter to active risk objects if you archive stale entries. Compare week-over-week using `timewrap` on bucketed counts. Feed executive dashboards with simplified bands; keep raw drilldown for analysts in ES Risk Analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=risk earliest=-90d\n| eval band=case(risk_score<20,\"0-19\", risk_score<40,\"20-39\", risk_score<60,\"40-59\", risk_score<80,\"60-79\", true(),\"80-100\")\n| bin _time span=1w\n| stats count by _time, band\n| xyseries _time band count\n| fillnull value=0\n```\n\nUnderstanding this SPL\n\n**Risk Score Distribution Trending** — A shifting histogram of Enterprise Security risk scores shows whether the environment is accumulating unmanaged risk objects or whether tuning and closure activities are working. Persistent high-score tails warrant entity prioritization and threat-hunting focus.\n\nDocumented **Data sources**: `index=risk`, Risk data model. **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: risk.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=risk, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **band** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, band** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pivots fields for charting with `xyseries`.\n• Fills null values with `fillnull`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked column chart (count by band over weeks), heatmap (band × week), line chart (80–100 band trend).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at the spread of risk scores so we are not piling “critical” on everything and diluting the word when something truly bad happens.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.16.8",
              "n": "Endpoint Protection Coverage Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Percentage of endpoints with current antivirus or EDR definitions and healthy agent status demonstrates control effectiveness for audits and cyber insurance. Downward trends often track OS rebuilds, mergers, or agent failures rather than policy alone.",
              "t": "Splunk Add-on for CrowdStrike, Microsoft Defender TA, Tanium, Splunk Enterprise Security (asset framework)",
              "d": "`index=main` or `index=endpoint` EDR inventory, `sourcetype=stash` weekly compliance rollups",
              "q": "index=main (sourcetype=\"crowdstrike:hosts\" OR sourcetype=\"ms:defender:machineinfo\" OR sourcetype=\"tanium:endpoint\")\n| eval defs_age_hours=(now()-strptime(last_av_update,\"%Y-%m-%dT%H:%M:%S\"))/3600\n| eval current=if(isnotnull(defs_age_hours) AND defs_age_hours<=48 AND match(status,\"normal|active|enabled\"),1,0)\n| bin _time span=1w\n| stats dc(host) as fleet, sum(current) as compliant by _time\n| eval coverage_pct=round(100*compliant/nullif(fleet,0),2)\n| timechart span=1w first(coverage_pct) as av_edr_coverage_pct",
              "m": "Map vendor-specific “last update” and protection-state fields; some use epoch seconds. Join to CMDB or ES `assets` to exclude decommissioned hosts from the denominator. Alert when coverage drops more than a few points week over week. Document exceptions (OT, lab) via a lookup to avoid false remediation tickets.",
              "z": "Line chart (coverage percentage over time), single value (current vs target), pie chart (compliant vs noncompliant).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CrowdStrike, Microsoft Defender TA, Tanium, Splunk Enterprise Security (asset framework).\n• Ensure the following data sources are available: `index=main` or `index=endpoint` EDR inventory, `sourcetype=stash` weekly compliance rollups.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap vendor-specific “last update” and protection-state fields; some use epoch seconds. Join to CMDB or ES `assets` to exclude decommissioned hosts from the denominator. Alert when coverage drops more than a few points week over week. Document exceptions (OT, lab) via a lookup to avoid false remediation tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=main (sourcetype=\"crowdstrike:hosts\" OR sourcetype=\"ms:defender:machineinfo\" OR sourcetype=\"tanium:endpoint\")\n| eval defs_age_hours=(now()-strptime(last_av_update,\"%Y-%m-%dT%H:%M:%S\"))/3600\n| eval current=if(isnotnull(defs_age_hours) AND defs_age_hours<=48 AND match(status,\"normal|active|enabled\"),1,0)\n| bin _time span=1w\n| stats dc(host) as fleet, sum(current) as compliant by _time\n| eval coverage_pct=round(100*compliant/nullif(fleet,0),2)\n| timechart span=1w first(coverage_pct) as av_edr_coverage_pct\n```\n\nUnderstanding this SPL\n\n**Endpoint Protection Coverage Trending** — Percentage of endpoints with current antivirus or EDR definitions and healthy agent status demonstrates control effectiveness for audits and cyber insurance. Downward trends often track OS rebuilds, mergers, or agent failures rather than policy alone.\n\nDocumented **Data sources**: `index=main` or `index=endpoint` EDR inventory, `sourcetype=stash` weekly compliance rollups. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike, Microsoft Defender TA, Tanium, Splunk Enterprise Security (asset framework). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: main; **sourcetype**: crowdstrike:hosts, ms:defender:machineinfo, tanium:endpoint. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=main, sourcetype=\"crowdstrike:hosts\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **defs_age_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **current** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1w** buckets — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest span=1w | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Endpoint Protection Coverage Trending** — Percentage of endpoints with current antivirus or EDR definitions and healthy agent status demonstrates control effectiveness for audits and cyber insurance. Downward trends often track OS rebuilds, mergers, or agent failures rather than policy alone.\n\nDocumented **Data sources**: `index=main` or `index=endpoint` EDR inventory, `sourcetype=stash` weekly compliance rollups. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike, Microsoft Defender TA, Tanium, Splunk Enterprise Security (asset framework). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (coverage percentage over time), single value (current vs target), pie chart (compliant vs noncompliant).",
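              "_spl_sketch": "A minimal sketch of the week-over-week drop alert mentioned in Step 1, using `streamstats` to fetch the prior week's coverage; the 3-point threshold is an example value, not a recommendation:\n\n```spl\nindex=main (sourcetype=\"crowdstrike:hosts\" OR sourcetype=\"ms:defender:machineinfo\" OR sourcetype=\"tanium:endpoint\")\n| eval defs_age_hours=(now()-strptime(last_av_update,\"%Y-%m-%dT%H:%M:%S\"))/3600\n| eval current=if(isnotnull(defs_age_hours) AND defs_age_hours<=48,1,0)\n| bin _time span=1w\n| stats dc(host) as fleet sum(current) as compliant by _time\n| eval coverage_pct=round(100*compliant/nullif(fleet,0),2)\n| streamstats current=f window=1 last(coverage_pct) as prev_pct\n| eval drop_pts=prev_pct-coverage_pct\n| where drop_pts>3\n```",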
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check that most endpoints and servers still show healthy agent or log presence so a silent gap is not hiding the next incident.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Endpoint (if normalized)",
                "N/A for raw vendor inventory"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest span=1w | sort - count",
              "e": [
                "crowdstrike",
                "defender"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.194",
              "n": "Citrix SmartAccess Policy Match Failures",
              "c": "high",
              "f": "advanced",
              "v": "SmartAccess and contextual policies decide what users see after logon. When policy match fails, users can authenticate but receive no virtual apps, wrong filters, or an unexpected post-logon state. Correlating gateway authentication context with storefront and broker results surfaces misconfiguration, attribute mapping errors, and attacks that try to evade the intended entitlements view.",
              "t": "Splunk Add-on for Citrix NetScaler, Citrix DaaS / CVAD broker event forwarders, and optional FAS / identity integration logs for correlation",
              "d": "`index=xd` `sourcetype=\"citrix:netscaler:aaalog\"` (authentication, SmartAccess / EPA context); `sourcetype=\"citrix:storefront:launch\"` (post-auth resource enumeration); `sourcetype=\"citrix:broker:events\"` (session and assignment); fields `access_policy`, `post_logon_state`, `virtual_apps_visible` (if extracted)",
              "q": "index=xd (sourcetype=\"citrix:netscaler:aaalog\" OR sourcetype=\"citrix:storefront:launch\" OR sourcetype=\"citrix:broker:events\") earliest=-24h\n| eval fail=if(match(lower(coalesce(post_logon_state, \"\")),\"(?i)deny|no.?resource|unassigned|mismatch|filter\") OR match(_raw, \"(?i)smartaccess|policy.?(deny|miss|fail)\"),1,0)\n| eval usr=coalesce(user, user_name, sAMAccountName)\n| where isnotnull(usr) AND (fail=1 OR isnotnull(action) AND like(action, \"%Error%\"))\n| bin _time span=1h\n| stats count as ev, values(access_policy) as policies by _time, usr, client_ip\n| where ev>3\n| table _time, usr, client_ip, ev, policies",
              "m": "Ensure consistent user keys across netscaler, StoreFront, and broker logs. Add lookups for user-to-group and expected virtual app set per role. Tune false positives: mass failures after a policy push point to a bad rule, not an attacker. For incidents, join to change records and the policy export from the day before. Escalate repeated failure patterns from one subnet as possible scripted probing.",
              "z": "Timechart: failed policy or empty launch stack per hour; table: top users and subnets; link to a sample raw event in the case queue.",
              "kfp": "SmartAccess and attribute renames after an AD restructure, GPO, or a pilot group addition can make many users see empty app lists the same day. A mass failure after a policy publish points to a bad rule, not a single user; use change and AD group correlation first.",
              "refs": "[Citrix — NetScaler Gateway session policies and SmartAccess](https://docs.citrix.com/en-us/citrix-gateway/current-release/planning-for-smartcontrol-smartaccess.html)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix NetScaler, Citrix DaaS / CVAD broker event forwarders, and optional FAS / identity integration logs for correlation.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:netscaler:aaalog\"` (authentication, SmartAccess / EPA context); `sourcetype=\"citrix:storefront:launch\"` (post-auth resource enumeration); `sourcetype=\"citrix:broker:events\"` (session and assignment); fields `access_policy`, `post_logon_state`, `virtual_apps_visible` (if extracted).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure consistent user keys across netscaler, StoreFront, and broker logs. Add lookups for user-to-group and expected virtual app set per role. Tune false positives: mass failures after a policy push point to a bad rule, not an attacker. For incidents, join to change records and the policy export from the day before. Escalate repeated failure patterns from one subnet as possible scripted probing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:netscaler:aaalog\" OR sourcetype=\"citrix:storefront:launch\" OR sourcetype=\"citrix:broker:events\") earliest=-24h\n| eval fail=if(match(lower(coalesce(post_logon_state, \"\")),\"(?i)deny|no.?resource|unassigned|mismatch|filter\") OR match(_raw, \"(?i)smartaccess|policy.?(deny|miss|fail)\"),1,0)\n| eval usr=coalesce(user, user_name, sAMAccountName)\n| where isnotnull(usr) AND (fail=1 OR isnotnull(action) AND like(action, \"%Error%\"))\n| bin _time span=1h\n| stats count as ev, values(access_policy) as policies by _time, usr, client_ip\n| where ev>3\n| table _time, usr, client_ip, ev, policies\n```\n\nUnderstanding this SPL\n\n**Citrix SmartAccess Policy Match Failures** — SmartAccess and contextual policies decide what users see after logon. When policy match fails, users can authenticate but receive no virtual apps, wrong filters, or an unexpected post-logon state. Correlating gateway authentication context with storefront and broker results surfaces misconfiguration, attribute mapping errors, and attacks that try to evade the intended entitlements view.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:netscaler:aaalog\"` (authentication, SmartAccess / EPA context); `sourcetype=\"citrix:storefront:launch\"` (post-auth resource enumeration); `sourcetype=\"citrix:broker:events\"` (session and assignment); fields `access_policy`, `post_logon_state`, `virtual_apps_visible` (if extracted). **App/TA** (typical add-on context): Splunk Add-on for Citrix NetScaler, Citrix DaaS / CVAD broker event forwarders, and optional FAS / identity integration logs for correlation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:netscaler:aaalog, citrix:storefront:launch, citrix:broker:events. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:netscaler:aaalog\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **usr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(usr) AND (fail=1 OR isnotnull(action) AND like(action, \"%Error%\"))` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, usr, client_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ev>3` — typically the threshold or rule expression for this monitoring goal.\n• `table` selects the final columns: _time, usr, client_ip, ev, policies\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart: failed policy or empty launch stack per hour; table: top users and subnets; link to a sample raw event in the case queue.\n\nScripted input (generic example)\nThese Citrix sources normally arrive via native log forwarding; if you bridge any of them with a custom collector instead, a scripted input can carry the output. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md\n\n
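Entitlement lookup enrichment (hedged example)\n\nStep 1 suggests lookups for user-to-group mapping and the expected virtual app set per role. A minimal sketch, assuming a hypothetical `user_role_map.csv` lookup with `user`, `role`, and `expected_app_count` columns, plus an extracted `app_name` field on StoreFront launch events; adjust all names to your parsing configuration:\n\n```spl\nindex=xd sourcetype=\"citrix:storefront:launch\" earliest=-24h\n| eval usr=coalesce(user, user_name, sAMAccountName)\n| lookup user_role_map.csv user as usr OUTPUT role expected_app_count\n| stats dc(app_name) as apps_visible by usr, role, expected_app_count\n| where apps_visible < tonumber(expected_app_count)\n```\n\nUsers whose visible app count falls below their role's expected set are candidates for the SmartAccess failure triage above.",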
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We look for the “you got in, but the door to your apps stayed shut” case — so you find broken rules, missing groups, and odd logins before a whole team sits idle.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad",
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.195",
              "n": "Citrix Clipboard and File Transfer Data Exfiltration",
              "c": "critical",
              "f": "advanced",
              "v": "Clipboard and client drive file transfer are common exfiltration paths from virtual apps and desktops. Tracking bulk bytes, blocked versus allowed actions, and repeated policy denials highlights both policy enforcement health and potential abuse of allowed channels. Large allowed transfers on sensitive groups warrant follow-up even when not blocked.",
              "t": "No official Splunk TA for HDX policy telemetry. Ingest via Windows Event Logs from VDAs (Session Recording event detection policies log clipboard and file transfer actions), or use Citrix Analytics for Security which surfaces DLP-related risk indicators. Splunk Add-on for Microsoft Windows for event log collection.",
              "d": "`index=xd` `sourcetype=\"citrix:hdx:clipboard\"`, `sourcetype=\"citrix:hdx:filetransfer\"` (copy, paste, upload, download policy results); fields `direction`, `policy_result` (allowed|blocked|warned), `bytes`, `session_id`, `user` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd (sourcetype=\"citrix:hdx:clipboard\" OR sourcetype=\"citrix:hdx:filetransfer\") earliest=-24h\n| eval ok=if(match(lower(policy_result),\"(?i)allow|permit|pass\"),1,0), blk=if(match(lower(policy_result),\"(?i)block|deny\"),1,0), b=tonumber(bytes)\n| bin _time span=1h\n| stats sum(b) as tot_bytes, sum(blk) as blocks, sum(ok) as allows, count as ops by _time, user, direction\n| where tot_bytes>50000000 OR blocks>10 OR (allows>100 AND tot_bytes>100000000)\n| table _time, user, direction, tot_bytes, blocks, allows, ops",
              "m": "Ingest HDX session agent or endpoint management logs with policy outcome. Map users to data-sensitivity tiers. Alert on blocked surge (misconfig or attack), or allowed bytes over threshold for protected groups. Pair with DLP from the guest OS if available. Never log full clipboard text; use size and direction only where required by policy.",
              "z": "Timechart: blocked vs allowed by direction; table: top users by bytes; pie: policy outcomes.",
              "kfp": "Help desk remote support, vendor assists, and power users with clipboard and drive mapping in policy for troubleshooting can look like exfil. Restrict high alerts to DLP-tagged data classes, sensitive file extensions, and destinations outside the approved exfil list.",
              "refs": "[Citrix — HDX client drive and clipboard redirection policy](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/policies/reference/citrix-policy-settings/ica-client-settings.html)",
              "mitre": [
                "T1115",
                "T1039"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for HDX policy telemetry. Ingest via Windows Event Logs from VDAs (Session Recording event detection policies log clipboard and file transfer actions), or use Citrix Analytics for Security which surfaces DLP-related risk indicators. Splunk Add-on for Microsoft Windows for event log collection..\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:hdx:clipboard\"`, `sourcetype=\"citrix:hdx:filetransfer\"` (copy, paste, upload, download policy results); fields `direction`, `policy_result` (allowed|blocked|warned), `bytes`, `session_id`, `user` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest HDX session agent or endpoint management logs with policy outcome. Map users to data-sensitivity tiers. Alert on blocked surge (misconfig or attack), or allowed bytes over threshold for protected groups. Pair with DLP from the guest OS if available. Never log full clipboard text; use size and direction only where required by policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:hdx:clipboard\" OR sourcetype=\"citrix:hdx:filetransfer\") earliest=-24h\n| eval ok=if(match(lower(policy_result),\"(?i)allow|permit|pass\"),1,0), blk=if(match(lower(policy_result),\"(?i)block|deny\"),1,0), b=tonumber(bytes)\n| bin _time span=1h\n| stats sum(b) as tot_bytes, sum(blk) as blocks, sum(ok) as allows, count as ops by _time, user, direction\n| where tot_bytes>50000000 OR blocks>10 OR (allows>100 AND tot_bytes>100000000)\n| table _time, user, direction, tot_bytes, blocks, allows, ops\n```\n\nUnderstanding this SPL\n\n**Citrix Clipboard and File Transfer Data Exfiltration** — Clipboard and client drive file transfer are common exfiltration paths from virtual apps and desktops. Tracking bulk bytes, blocked versus allowed actions, and repeated policy denials highlights both policy enforcement health and potential abuse of allowed channels. Large allowed transfers on sensitive groups warrant follow-up even when not blocked.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:hdx:clipboard\"`, `sourcetype=\"citrix:hdx:filetransfer\"` (copy, paste, upload, download policy results); fields `direction`, `policy_result` (allowed|blocked|warned), `bytes`, `session_id`, `user` Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for HDX policy telemetry. Ingest via Windows Event Logs from VDAs (Session Recording event detection policies log clipboard and file transfer actions), or use Citrix Analytics for Security which surfaces DLP-related risk indicators. Splunk Add-on for Microsoft Windows for event log collection. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:hdx:clipboard, citrix:hdx:filetransfer. 
Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:hdx:clipboard\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok**, **blk**, and **b** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, user, direction** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where tot_bytes>50000000 OR blocks>10 OR (allows>100 AND tot_bytes>100000000)` — typically the threshold or rule expression for this monitoring goal.\n• `table` selects the final columns: _time, user, direction, tot_bytes, blocks, allows, ops\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart: blocked vs allowed by direction; table: top users by bytes; pie: policy outcomes.\n\n
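Sensitivity-tier thresholds (hedged example)\n\nStep 1 suggests mapping users to data-sensitivity tiers. A minimal sketch that layers tiered byte thresholds onto the aggregation above; the `user_sensitivity.csv` lookup and its `tier` values are hypothetical, so substitute your own classification source:\n\n```spl\nindex=xd (sourcetype=\"citrix:hdx:clipboard\" OR sourcetype=\"citrix:hdx:filetransfer\") earliest=-24h\n| eval b=tonumber(bytes)\n| stats sum(b) as tot_bytes, count as ops by user, direction\n| lookup user_sensitivity.csv user OUTPUT tier\n| eval threshold=case(tier==\"restricted\", 10000000, tier==\"confidential\", 50000000, true(), 100000000)\n| where tot_bytes > threshold\n| table user, direction, tier, tot_bytes, ops\n```\n\nLower thresholds for restricted-data users turn large allowed transfers into review items even when policy never blocked them.",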
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We watch copy-paste and file moves between the office desktop in the cloud and the local machine — so you see when someone tries to sneak data out or when the safety rules trip a lot at once.",
              "mtype": [
                "Security"
              ],
              "a": [
                "DLP"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.196",
              "n": "Citrix Session Hijack Detection (Impossible Travel)",
              "c": "critical",
              "f": "advanced",
              "v": "Credential theft or session token reuse can show up as the same account active from two distant geographies within a short window, or as a client IP change mid-session without a clean re-authentication. Joining session and gateway events with geo data flags impossible travel and concurrent locations that warrant step-up auth and session termination.",
              "t": "Citrix broker and gateway logs, MaxMind or internal geo lookup in Splunk, optional Splunk User Behavior Analytics",
              "d": "`index=xd` `sourcetype=\"citrix:broker:session\"` or `sourcetype=\"citrix:netscaler:aaalog\"` with `user`, `client_ip`, `session_id`, `geo_country` (from lookup), `logon_time`",
              "q": "index=xd (sourcetype=\"citrix:broker:session\" OR sourcetype=\"citrix:netscaler:aaalog\") earliest=-24h\n| iplocation client_ip\n| eval country=coalesce(Country, geo_country, \"unknown\")\n| sort 0 user, _time\n| streamstats window=2 global=f current=f first(_time) as prev_t first(client_ip) as prev_ip first(country) as prev_c by user\n| eval km_s=if(isnotnull(prev_t) and (_time-prev_t)<3600 and prev_c!=country and country!=\"unknown\" and prev_c!=\"unknown\",1,0)\n| where km_s=1\n| table _time, user, prev_ip, client_ip, prev_c, country",
              "m": "Keep the geo lookup current. Use `streamstats` only as a starting point; refine with distance or travel-time models for fewer false positives from VPN exit changes. Pair with identity provider sign-in history. Auto-isolate high-value accounts. Document corporate VPN that shows a single country even when users are mobile.",
              "z": "Notable list: user, previous vs new country; map: client IP points; timeline: session events per user.",
              "kfp": "VPN home regions, fast executive travel, mobile NAT churn, and shared egress IPs can trigger impossible travel without compromise. Add MFA success, new device, and PAM context; or require velocity plus second-factor challenge failure before an account takeover call.",
              "refs": "[Citrix — Secure logon and session management](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/secure.html)",
              "mitre": [
                "T1550",
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix broker and gateway logs, MaxMind or internal geo lookup in Splunk, optional Splunk User Behavior Analytics.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:session\"` or `sourcetype=\"citrix:netscaler:aaalog\"` with `user`, `client_ip`, `session_id`, `geo_country` (from lookup), `logon_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nKeep the geo lookup current. Use `streamstats` only as a starting point; refine with distance or travel-time models for fewer false positives from VPN exit changes. Pair with identity provider sign-in history. Auto-isolate high-value accounts. Document corporate VPN that shows a single country even when users are mobile.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:broker:session\" OR sourcetype=\"citrix:netscaler:aaalog\") earliest=-24h\n| iplocation client_ip\n| eval country=coalesce(Country, geo_country, \"unknown\")\n| sort 0 user, _time\n| streamstats window=2 global=f current=f first(_time) as prev_t first(client_ip) as prev_ip first(country) as prev_c by user\n| eval km_s=if(isnotnull(prev_t) and (_time-prev_t)<3600 and prev_c!=country and country!=\"unknown\" and prev_c!=\"unknown\",1,0)\n| where km_s=1\n| table _time, user, prev_ip, client_ip, prev_c, country\n```\n\nUnderstanding this SPL\n\n**Citrix Session Hijack Detection (Impossible Travel)** — Credential theft or session token reuse can show up as the same account active from two distant geographies within a short window, or as a client IP change mid-session without a clean re-authentication. Joining session and gateway events with geo data flags impossible travel and concurrent locations that warrant step-up auth and session termination.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:session\"` or `sourcetype=\"citrix:netscaler:aaalog\"` with `user`, `client_ip`, `session_id`, `geo_country` (from lookup), `logon_time`. **App/TA** (typical add-on context): Citrix broker and gateway logs, MaxMind or internal geo lookup in Splunk, optional Splunk User Behavior Analytics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:session, citrix:netscaler:aaalog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:session\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `iplocation` enriches **client_ip** with geo fields such as Country, lat, and lon used for the country comparison.\n• `eval` defines or adjusts **country** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` carries each user's previous event values (_time, client_ip, country) forward **by user**, so every row can be compared with that user's prior session.\n• `eval` defines or adjusts **km_s** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where km_s=1` — typically the threshold or rule expression for this monitoring goal.\n• `table` selects the final columns: _time, user, prev_ip, client_ip, prev_c, country\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Notable list: user, previous vs new country; map: client IP points; timeline: session events per user.\n\n
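Travel-speed refinement (hedged example)\n\nStep 1 suggests refining with distance or travel-time models. A minimal sketch using the `lat`/`lon` fields that `iplocation` already returns, with an equirectangular distance approximation; the 900 km/h cutoff is an assumption (roughly airliner speed), so tune it and the field handling to your data:\n\n```spl\nindex=xd (sourcetype=\"citrix:broker:session\" OR sourcetype=\"citrix:netscaler:aaalog\") earliest=-24h\n| iplocation client_ip\n| sort 0 user, _time\n| streamstats window=1 global=f current=f first(_time) as prev_t first(lat) as prev_lat first(lon) as prev_lon by user\n| eval dlat=(lat-prev_lat)*111, dlon=(lon-prev_lon)*111*cos(lat*pi()/180)\n| eval km=sqrt(dlat*dlat+dlon*dlon), hrs=(_time-prev_t)/3600\n| eval kmh=if(hrs>0, km/hrs, null())\n| where kmh>900\n| table _time, user, client_ip, km, kmh\n```\n\nVelocity over distance is less sensitive to VPN exit changes inside one country than the country-change test alone.",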
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We notice when the same sign-in shows up in two far-apart places faster than a person could travel — so you catch stolen or shared sign-ins before they quietly keep using your systems.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.197",
              "n": "Citrix Admin Console Privilege Escalation Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Elevated configuration work in Citrix environments can change who gets what desktop, what image runs, and what the gateway allows. Monitoring administrative APIs and consoles for bulk or unusual changes from non-service accounts reveals privilege misuse, compromised admin creds, and shadow IT changes that bypass change control.",
              "t": "Citrix Cloud / CVAD administrative logging, NetScaler NITRO access logs, optional Splunk Enterprise Security for privileged access analytics",
              "d": "`index=xd` `sourcetype=\"citrix:broker:admin\"` (Studio and delivery controller admin actions); `sourcetype=\"citrix:pvs:mcli\"` (PVS command line); `sourcetype=\"citrix:netscaler:nitro\"` (NITRO API with `user`, `operation`, `resource`); Windows security / PowerShell logs for remote admin hosts",
              "q": "index=xd (sourcetype=\"citrix:broker:admin\" OR sourcetype=\"citrix:pvs:mcli\" OR sourcetype=\"citrix:netscaler:nitro\") earliest=-24h\n| eval priv=if(match(lower(operation),\"(?i)create|delete|assign|full|admin|role|owner|publish|merge|override|masterimage|privilege\") OR match(lower(resource),\"(?i)catalog|desktop|machine|site|administrator|rbac\"),1,0), off=if(match(lower(user),\"(?i)svc|script|automation\") OR match(client_ip, \"^10\\\\.\"),0,1)\n| where priv=1 AND off=1\n| bin _time span=1h\n| stats count as acts, values(operation) as ops, dc(resource) as rescount by _time, user, client_ip\n| where acts>5 OR rescount>10\n| table _time, user, client_ip, acts, rescount, ops",
              "m": "Send all admin paths to a protected index with immutable storage if required. Baseline service accounts and break-glass IDs. Alert on first-time admin from a new workstation, burst of publish or catalog changes, or NITRO calls from non-manager networks. Correlate with ticketing; require a change ID in a lookup for expected work. For PVS, track vDisk merge and mode changes as high severity.",
              "z": "Table: top admin users by action count; timeline: operations; single-value: admin events with no ticket match in 15m (from lookup join).",
              "kfp": "Break-glass, PAM JIT, monthly access reviews, and service accounts used for runbooks can look like 'new' Citrix admin privilege. A PAM ticket join and exclusion of known PIM product identities keep governed elevation out of a pure anomaly queue.",
              "refs": "[Citrix — Delegated administration and logging](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/install-configure/delegated-administration.html)",
              "mitre": [
                "T1078",
                "T1548"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Cloud / CVAD administrative logging, NetScaler NITRO access logs, optional Splunk Enterprise Security for privileged access analytics.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:admin\"` (Studio and delivery controller admin actions); `sourcetype=\"citrix:pvs:mcli\"` (PVS command line); `sourcetype=\"citrix:netscaler:nitro\"` (NITRO API with `user`, `operation`, `resource`); Windows security / PowerShell logs for remote admin hosts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSend all admin paths to a protected index with immutable storage if required. Baseline service accounts and break-glass IDs. Alert on first-time admin from a new workstation, burst of publish or catalog changes, or NITRO calls from non-manager networks. Correlate with ticketing; require a change ID in a lookup for expected work. For PVS, track vDisk merge and mode changes as high severity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:broker:admin\" OR sourcetype=\"citrix:pvs:mcli\" OR sourcetype=\"citrix:netscaler:nitro\") earliest=-24h\n| eval priv=if(match(lower(operation),\"(?i)create|delete|assign|full|admin|role|owner|publish|merge|override|masterimage|privilege\") OR match(lower(resource),\"(?i)catalog|desktop|machine|site|administrator|rbac\"),1,0), off=if(match(lower(user),\"(?i)svc|script|automation\") OR match(client_ip, \"^10\\\\.\"),0,1)\n| where priv=1 AND off=1\n| bin _time span=1h\n| stats count as acts, values(operation) as ops, dc(resource) as rescount by _time, user, client_ip\n| where acts>5 OR rescount>10\n| table _time, user, client_ip, acts, rescount, ops\n```\n\nUnderstanding this SPL\n\n**Citrix Admin Console Privilege Escalation Detection** — Elevated configuration work in Citrix environments can change who gets what desktop, what image runs, and what the gateway allows. Monitoring administrative APIs and consoles for bulk or unusual changes from non-service accounts reveals privilege misuse, compromised admin creds, and shadow IT changes that bypass change control.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:admin\"` (Studio and delivery controller admin actions); `sourcetype=\"citrix:pvs:mcli\"` (PVS command line); `sourcetype=\"citrix:netscaler:nitro\"` (NITRO API with `user`, `operation`, `resource`); Windows security / PowerShell logs for remote admin hosts. **App/TA** (typical add-on context): Citrix Cloud / CVAD administrative logging, NetScaler NITRO access logs, optional Splunk Enterprise Security for privileged access analytics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:admin, citrix:pvs:mcli, citrix:netscaler:nitro. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:admin\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **priv** and **off** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where priv=1 AND off=1` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, user, client_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where acts>5 OR rescount>10` — typically the threshold or rule expression for this monitoring goal.\n• `table` selects the final columns: _time, user, client_ip, acts, rescount, ops\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table: top admin users by action count; timeline: operations; single-value: admin events with no ticket match in 15m (from lookup join).\n\n
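Change-ticket correlation (hedged example)\n\nStep 1 suggests requiring a change ID in a lookup for expected work. A minimal sketch, assuming a hypothetical `approved_changes.csv` lookup with `user`, `change_id`, and `valid_until` (epoch) columns exported from your ticketing system:\n\n```spl\nindex=xd (sourcetype=\"citrix:broker:admin\" OR sourcetype=\"citrix:pvs:mcli\" OR sourcetype=\"citrix:netscaler:nitro\") earliest=-24h\n| lookup approved_changes.csv user OUTPUT change_id valid_until\n| where isnull(change_id) OR _time > tonumber(valid_until)\n| stats count as acts, values(operation) as ops by user, client_ip\n```\n\nAdmin activity with no matching ticket, or outside its approved window, feeds the no-ticket single-value panel described under visualizations.",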
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We track the powerful “control room” moves that reshape who can log in and what they see — so you catch misuse or a stolen admin account before the whole farm is quietly reshaped.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad",
                "citrix_netscaler"
              ],
              "ta_link": {
                "name": "Splunk Add-on for Citrix NetScaler",
                "id": 2770,
                "url": "https://splunkbase.splunk.com/app/2770"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.198",
              "n": "Citrix Session Watermarking Enforcement Errors",
              "c": "high",
              "f": "intermediate",
              "v": "Watermarking and screen-capture policy protect regulated data in virtual sessions. When the policy stack fails to apply, sessions may run without visible traceability, violating policy intent. Spikes in enforcement errors after image updates, agent upgrades, or client releases warrant rollback or targeted patching before auditors find a gap in technical controls.",
              "t": "No official Splunk TA for HDX policy telemetry. Ingest via Windows Event Logs from VDAs (Session Recording event detection policies log clipboard and file transfer actions), or use Citrix Analytics for Security which surfaces DLP-related risk indicators. Splunk Add-on for Microsoft Windows for event log collection.",
              "d": "`index=xd` `sourcetype=\"citrix:hdx:policy\"` (watermark, screen capture, recording policy apply); `sourcetype=\"citrix:session:recording:control\"` (recording and screenshot rules); VDA and broker events indicating policy not applied or client mismatch Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf.",
              "q": "index=xd (sourcetype=\"citrix:hdx:policy\" OR sourcetype=\"citrix:session:recording:control\") earliest=-24h\n| eval wm_fail=if(match(_raw, \"(?i)watermark.?(fail|error|not.?(applied|enforced))\"),1,0), cap_fail=if(match(_raw, \"(?i)screen(shot|.?capture).*(deny|block|violation|not.?(allowed|permitted))\"),1,0), pol=coalesce(policy_name, policy_set, \"unknown\")\n| where wm_fail=1 OR cap_fail=1\n| bin _time span=1h\n| stats count as err, values(host) as vda, values(pol) as policies by _time, user, session_id\n| where err>0\n| table _time, user, session_id, vda, err, policies",
              "m": "Correlate errors with a version lookup for Workspace app and VDA. Prioritize high-sensitivity delivery groups. For Session Recording, watch storage and service health in a related availability use case. If errors reference unknown clients, treat as a compliance gap and block out-of-revision clients at the gateway where policy allows.",
              "z": "Timechart: enforcement errors; table: by `policy_name` and VDA build; list: example raw messages for the support team.",
              "kfp": "Workspaces in a pilot or older build that do not support watermarking yet, or a policy in report-only, will log 'not enforced' style errors. A minimum supported Workspace build matrix and feature flag per ring avoid a false compliance gap report.",
              "refs": "[Citrix — Session Watermarking](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/session-recording.html)",
              "mitre": [
                "T1113"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: No official Splunk TA for HDX policy telemetry. Ingest via Windows Event Logs from VDAs (Session Recording event detection policies log clipboard and file transfer actions), or use Citrix Analytics for Security which surfaces DLP-related risk indicators. Splunk Add-on for Microsoft Windows for event log collection..\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:hdx:policy\"` (watermark, screen capture, recording policy apply); `sourcetype=\"citrix:session:recording:control\"` (recording and screenshot rules); VDA and broker events indicating policy not applied or client mismatch Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate errors with a version lookup for Workspace app and VDA. Prioritize high-sensitivity delivery groups. For Session Recording, watch storage and service health in a related availability use case. If errors reference unknown clients, treat as a compliance gap and block out-of-revision clients at the gateway where policy allows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:hdx:policy\" OR sourcetype=\"citrix:session:recording:control\") earliest=-24h\n| eval wm_fail=if(match(_raw, \"(?i)watermark.?(fail|error|not.?(applied|enforced))\"),1,0), cap_fail=if(match(_raw, \"(?i)screen(shot|.?capture).*(deny|block|violation|not.?(allowed|permitted))\"),1,0), pol=coalesce(policy_name, policy_set, \"unknown\")\n| where wm_fail=1 OR cap_fail=1\n| bin _time span=1h\n| stats count as err, values(host) as vda, values(pol) as policies by _time, user, session_id\n| where err>0\n| table _time, user, session_id, vda, err, policies\n```\n\nUnderstanding this SPL\n\n**Citrix Session Watermarking Enforcement Errors** — Watermarking and screen-capture policy protect regulated data in virtual sessions. When the policy stack fails to apply, sessions may run without visible traceability, violating policy intent. Spikes in enforcement errors after image updates, agent upgrades, or client releases warrant rollback or targeted patching before auditors find a gap in technical controls.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:hdx:policy\"` (watermark, screen capture, recording policy apply); `sourcetype=\"citrix:session:recording:control\"` (recording and screenshot rules); VDA and broker events indicating policy not applied or client mismatch Note: field names in SPL are suggested conventions for custom ingestion; actual field names depend on your parsing configuration in props.conf/transforms.conf. **App/TA** (typical add-on context): No official Splunk TA for HDX policy telemetry. Ingest via Windows Event Logs from VDAs (Session Recording event detection policies log clipboard and file transfer actions), or use Citrix Analytics for Security which surfaces DLP-related risk indicators. Splunk Add-on for Microsoft Windows for event log collection. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:hdx:policy, citrix:session:recording:control. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:hdx:policy\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **wm_fail**, **cap_fail**, and **pol** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where wm_fail=1 OR cap_fail=1` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, user, session_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where err>0` — typically the threshold or rule expression for this monitoring goal.\n• `table` selects the final columns: _time, user, session_id, vda, err, policies\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart: enforcement errors; table: by `policy_name` and VDA build; list: example raw messages for the support team.\n\n
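Build-matrix correlation (hedged example)\n\nStep 1 suggests correlating errors with a version lookup for Workspace app and VDA. A minimal sketch, assuming a hypothetical `vda_build_matrix.csv` lookup keyed on VDA host with `vda_build` and `watermark_supported` columns maintained from your ring/rollout inventory:\n\n```spl\nindex=xd (sourcetype=\"citrix:hdx:policy\" OR sourcetype=\"citrix:session:recording:control\") earliest=-24h\n| eval wm_fail=if(match(_raw, \"(?i)watermark.?(fail|error|not.?(applied|enforced))\"),1,0)\n| where wm_fail=1\n| lookup vda_build_matrix.csv host OUTPUT vda_build watermark_supported\n| eval known_gap=if(watermark_supported=\"false\",1,0)\n| stats count as err by host, vda_build, known_gap\n```\n\nSplitting errors by `known_gap` separates builds that cannot enforce watermarking yet (pilot rings) from true enforcement failures on supported builds.",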
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We look for the moment the “name-stamped screen” and “no sneaky screen shots” rules fail to take hold — so you are not left telling auditors the protection was on when it quietly was not.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Network_Sessions"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.199",
              "n": "Citrix Unauthorized Published Application Launch Attempts",
              "c": "high",
              "f": "intermediate",
              "v": "Users should only start applications and desktops they are assigned. Elevated or repeated denials for resources outside entitlement can show probing for exposed icons, policy gaps, or automation testing wrong shortcuts. A burst from one account to many unassigned apps is higher risk than a single typo in self-service.",
              "t": "Citrix DaaS / CVAD broker event collection, StoreFront or Workspace app logs, optional identity group lookup for expected entitlements",
              "d": "`index=xd` `sourcetype=\"citrix:broker:events\"` (launch, enumeration, error); `sourcetype=\"citrix:storefront:resource\"` (requested resource, deny); fields `app_name`, `user`, `result`, `deny_reason`",
              "q": "index=xd (sourcetype=\"citrix:broker:events\" OR sourcetype=\"citrix:storefront:resource\") earliest=-24h\n| eval denied=if(match(lower(result),\"(?i)deny|unauthorized|forbidden|not.?(allowed|entitled|assigned)\") OR match(lower(deny_reason),\"(?i)deny|policy|not.?(found|entitled)\"),1,0)\n| where denied=1\n| bin _time span=1h\n| stats count as hits, values(app_name) as apps, values(deny_reason) as reason by _time, user, client_ip\n| where hits>3\n| table _time, user, client_ip, hits, apps, reason",
              "m": "Distinguish user-driven mistakes from service accounts. Join `app_name` to a catalog of all published resources and role assignments. Alert on more than N distinct denied apps per user per day. If storefront exposes sensitive names to the wrong group, also fix catalog visibility, not just logging. For Citrix DaaS, include cloud connector–visible events in scope.",
              "z": "Bar chart: denied launches by app; table: top users; sparkline: hits over time for flagged accounts.",
              "kfp": "Pen-test, red-team, and negative UAT on unauthorized icons are supposed to show blocked launches. A purple-team window, test usernames, and a non-prod store scope stop harmless blocked attempts in test from a production insider-threat page.",
              "refs": "[Citrix — Publish applications and entitlements](https://docs.citrix.com/en-us/citrix-daas-service/technical-overview/technical-overview.html)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix DaaS / CVAD broker event collection, StoreFront or Workspace app logs, optional identity group lookup for expected entitlements.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:events\"` (launch, enumeration, error); `sourcetype=\"citrix:storefront:resource\"` (requested resource, deny); fields `app_name`, `user`, `result`, `deny_reason`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDistinguish user-driven mistakes from service accounts. Join `app_name` to a catalog of all published resources and role assignments. Alert on more than N distinct denied apps per user per day. If storefront exposes sensitive names to the wrong group, also fix catalog visibility, not just logging. For Citrix DaaS, include cloud connector–visible events in scope.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:broker:events\" OR sourcetype=\"citrix:storefront:resource\") earliest=-24h\n| eval denied=if(match(lower(result),\"(?i)deny|unauthorized|forbidden|not.?(allowed|entitled|assigned)\") OR match(lower(deny_reason),\"(?i)deny|policy|not.?(found|entitled)\"),1,0)\n| where denied=1\n| bin _time span=1h\n| stats count as hits, values(app_name) as apps, values(deny_reason) as reason by _time, user, client_ip\n| where hits>3\n| table _time, user, client_ip, hits, apps, reason\n```\n\nUnderstanding this SPL\n\n**Citrix Unauthorized Published Application Launch Attempts** — Users should only start applications and desktops they are assigned. Elevated or repeated denials for resources outside entitlement can show probing for exposed icons, policy gaps, or automation testing wrong shortcuts. A burst from one account to many unassigned apps is higher risk than a single typo in self-service.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:events\"` (launch, enumeration, error); `sourcetype=\"citrix:storefront:resource\"` (requested resource, deny); fields `app_name`, `user`, `result`, `deny_reason`. **App/TA** (typical add-on context): Citrix DaaS / CVAD broker event collection, StoreFront or Workspace app logs, optional identity group lookup for expected entitlements. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:events, citrix:storefront:resource. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:events\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **denied** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where denied=1` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, user, client_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where hits>3` — typically the threshold or rule expression for this monitoring goal.\n• `table` selects the final columns: _time, user, client_ip, hits, apps, reason\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart: denied launches by app; table: top users; sparkline: hits over time for flagged accounts.\n\n
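Distinct-app probing threshold (hedged example)\n\nStep 1 suggests alerting on more than N distinct denied apps per user per day. A minimal sketch; N=5 is an assumption to tune, and `app_name` must be extracted as documented under Data sources:\n\n```spl\nindex=xd (sourcetype=\"citrix:broker:events\" OR sourcetype=\"citrix:storefront:resource\") earliest=-24h\n| eval denied=if(match(lower(result),\"(?i)deny|unauthorized|forbidden|not.?(allowed|entitled|assigned)\"),1,0)\n| where denied=1\n| bin _time span=1d\n| stats dc(app_name) as distinct_apps, values(app_name) as apps by _time, user\n| where distinct_apps > 5\n```\n\nBreadth across distinct apps separates one mistyped shortcut from an account walking the catalog.",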
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We count tries to open apps that were never meant for that person’s badge — so you see probing or a broken catalog before a lucky click finds something that should have stayed hidden.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "10.6.200",
              "n": "Citrix Anomalous Session Behavior (Off-Hours, Unusual Apps)",
              "c": "high",
              "f": "advanced",
              "v": "Baseline user behavior in virtual apps: typical hours, common titles, and session length. Off-hours or weekend spikes and unusual first-time applications (especially data-heavy or admin tools) can signal account compromise, side work by insiders, or automation errors. This use case combines time-of-day signals with per-app duration compared to a rolling median to reduce noise for routine shift work.",
              "t": "Citrix Monitor Service or broker-derived usage via OData collectors, custom forwarders, optional Splunk User Behavior Analytics",
              "d": "`index=xd` `sourcetype=\"citrix:broker:session\"` and `sourcetype=\"citrix:broker:app_usage\"` (app title, `session_start`, `session_end`, `user`, `day_of_week`, `client_ip`); optional baseline from summary indexing",
              "q": "index=xd (sourcetype=\"citrix:broker:session\" OR sourcetype=\"citrix:broker:app_usage\") earliest=-7d\n| eval hr=tonumber(strftime(_time, \"%H\")), wday=tonumber(strftime(_time, \"%w\")), du=tonumber(session_duration_sec)\n| eval off=if(hr<5 OR hr>22 OR wday==0 OR wday==6,1,0)\n| eventstats median(du) as medu by app_name\n| eval longsess=if(du>(3*coalesce(medu,0)) AND coalesce(medu,0)>0,1,0)\n| where off=1 OR longsess=1\n| bin _time span=1d\n| stats count as ev, dc(app_name) as appc, max(longsess) as has_long by _time, user\n| where ev>5 OR appc>15\n| table _time, user, ev, appc, has_long",
              "m": "Feed shift schedules from HR or facilities into a lookup to refine off-hours. Maintain an allow list for on-call and batch accounts. `eventstats` medians are a starting point; consider ML assistants or user-level baselines for large estates. Triage with the impossible-travel and privilege use cases. Respect privacy: aggregate before HR notification policies.",
              "z": "Heatmap: events by user hour; table: top users for off-hours; scatter: `session_duration_sec` by app (spot long tails).",
              "kfp": "Global BPO, weekend SRE, and batch job operators naturally use 'off hours' and niche published apps. OU- or country-based baselines, holiday and on-call calendars, and a minimum deviation from a 30-day user habit reduce one-off 'weird' from critical insider.",
              "refs": "[Citrix — Monitor and Director session analytics](https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/technical-overview/director-monitoring-solutions.html)",
              "mitre": [
                "T1078",
                "T1074"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Monitor Service or broker-derived usage via OData collectors, custom forwarders, optional Splunk User Behavior Analytics.\n• Ensure the following data sources are available: `index=xd` `sourcetype=\"citrix:broker:session\"` and `sourcetype=\"citrix:broker:app_usage\"` (app title, `session_start`, `session_end`, `user`, `day_of_week`, `client_ip`); optional baseline from summary indexing.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFeed shift schedules from HR or facilities into a lookup to refine off-hours. Maintain an allow list for on-call and batch accounts. `eventstats` medians are a starting point; consider ML assistants or user-level baselines for large estates. Triage with the impossible-travel and privilege use cases. Respect privacy: aggregate before HR notification policies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=xd (sourcetype=\"citrix:broker:session\" OR sourcetype=\"citrix:broker:app_usage\") earliest=-7d\n| eval hr=tonumber(strftime(_time, \"%H\")), wday=tonumber(strftime(_time, \"%w\")), du=tonumber(session_duration_sec)\n| eval off=if(hr<5 OR hr>22 OR wday==0 OR wday==6,1,0)\n| eventstats median(du) as medu by app_name\n| eval longsess=if(du>(3*coalesce(medu,0)) AND coalesce(medu,0)>0,1,0)\n| where off=1 OR longsess=1\n| bin _time span=1d\n| stats count as ev, dc(app_name) as appc, max(longsess) as has_long by _time, user\n| where ev>5 OR appc>15\n| table _time, user, ev, appc, has_long\n```\n\nUnderstanding this SPL\n\n**Citrix Anomalous Session Behavior (Off-Hours, Unusual Apps)** — Baseline user behavior in virtual apps: typical hours, common titles, and session length. Off-hours or weekend spikes and unusual first-time applications (especially data-heavy or admin tools) can signal account compromise, side work by insiders, or automation errors. This use case combines time-of-day signals with per-app duration compared to a rolling median to reduce noise for routine shift work.\n\nDocumented **Data sources**: `index=xd` `sourcetype=\"citrix:broker:session\"` and `sourcetype=\"citrix:broker:app_usage\"` (app title, `session_start`, `session_end`, `user`, `day_of_week`, `client_ip`); optional baseline from summary indexing. **App/TA** (typical add-on context): Citrix Monitor Service or broker-derived usage via OData collectors, custom forwarders, optional Splunk User Behavior Analytics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: xd; **sourcetype**: citrix:broker:session, citrix:broker:app_usage. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=xd, sourcetype=\"citrix:broker:session\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **off** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by app_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **longsess** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where off=1 OR longsess=1` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ev>5 OR appc>15` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Anomalous Session Behavior (Off-Hours, Unusual Apps)**): table _time, user, ev, appc, has_long\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap: events by user hour; table: top users for off-hours; scatter: `session_duration_sec` by app (spot long tails).",
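              "_example": "Hedged sketch of the shift-schedule refinement described in Step 1. The `shift_schedule` lookup (columns `user`, `shift_start`, `shift_end` on a 24h clock) is an assumed CSV you would build from HR data, not something the feed provides, and the aggregation is a starting point to tune:\n\n```spl\nindex=xd sourcetype=\"citrix:broker:session\" earliest=-24h\n| eval hr=tonumber(strftime(_time, \"%H\"))\n| lookup shift_schedule user OUTPUT shift_start, shift_end\n| eval ss=tonumber(shift_start), se=tonumber(shift_end)\n| where isnotnull(ss) AND (hr<ss OR hr>se)\n| stats count as off_shift_events, dc(app_name) as apps by user\n```",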
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-24",
              "sver": "",
              "rby": "",
              "ge": "We learn what a normal work pattern looks like in your app farm, then flag the odd late nights, odd weekends, and strangely long stints in unusual tools — so you see odd use before a loss happens.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "e": [
                "citrix"
              ],
              "em": [
                "citrix_cvad"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 15,
            "none": 0
          }
        }
      ],
      "i": 10,
      "n": "Security Infrastructure",
      "src": "cat-10-security-infrastructure.md"
    },
    {
      "s": [
        {
          "i": "11.1",
          "n": "Microsoft 365 / Exchange",
          "u": [
            {
              "i": "11.1.1",
              "n": "Mail Flow Health Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Deferred or failed message traces directly hit revenue-dependent communications and support SLAs. Sustained delivery failure spikes should drive incident severity classification and trigger customer communication templates before users report the issue.",
              "t": "`Splunk_TA_MS_O365`, Exchange message tracking",
              "d": "Exchange message tracking logs, O365 message trace",
              "q": "index=o365 sourcetype=\"ms:o365:messageTrace\"\n| timechart span=1h count by Status",
              "m": "Ingest O365 message trace data via the Splunk Add-on for Microsoft Cloud Services (`sourcetype=ms:o365:messageTrace`). Key fields: `Status`, `RecipientStatus`, `SenderAddress`. Alert when the Failed/Deferred percentage exceeds the 14-day same-hour median, split by connector vs DNS vs policy rejection for targeted triage.",
              "z": "Line chart (message volume by status), Single value (delivery success rate), Bar chart (top NDR reasons).",
              "kfp": "Delivery failures during recipient migrations, distribution list changes, or upstream MTA maintenance.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_MS_O365`, Exchange message tracking.\n• Ensure the following data sources are available: Exchange message tracking logs, O365 message trace.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest O365 message trace data via the Splunk Add-on for Microsoft Cloud Services (`sourcetype=ms:o365:messageTrace`). Key fields: `Status`, `RecipientStatus`, `SenderAddress`. Alert when the Failed/Deferred percentage exceeds the 14-day same-hour median, split by connector vs DNS vs policy rejection for targeted triage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:messageTrace\"\n| timechart span=1h count by Status\n```\n\nUnderstanding this SPL\n\n**Mail Flow Health Monitoring** — Deferred or failed message traces directly hit revenue-dependent communications and support SLAs. Sustained delivery failure spikes should drive incident severity classification and trigger customer communication templates before users report the issue.\n\nDocumented **Data sources**: Exchange message tracking logs, O365 message trace. **App/TA** (typical add-on context): `Splunk_TA_MS_O365`, Exchange message tracking. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:messageTrace. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:messageTrace\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by Status** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  where All_Email.action IN (\"deferred\", \"failed\", \"bounced\", \"rejected\")\n  by All_Email.src_user All_Email.recipient span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Mail Flow Health Monitoring** — Deferred or failed message traces directly hit revenue-dependent communications and support SLAs. Sustained delivery failure spikes should drive incident severity classification and trigger customer communication templates before users report the issue.\n\nDocumented **Data sources**: Exchange message tracking logs, O365 message trace. **App/TA** (typical add-on context): `Splunk_TA_MS_O365`, Exchange message tracking. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (message volume by status), Single value (delivery success rate), Bar chart (top NDR reasons).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch whether our mail is delivering or piling up as deferred and failed, so we can act before people notice the outage.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  where All_Email.action IN (\"deferred\", \"failed\", \"bounced\", \"rejected\")\n  by All_Email.src_user All_Email.recipient span=1h\n| sort -count",
              "e": [
                "exchange",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "observability",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "11.1.2",
              "n": "Mailbox Audit Logging",
              "c": "high",
              "f": "beginner",
              "v": "Tracks who accesses what mailboxes, including delegate and admin access. Essential for insider threat detection and compliance.",
              "t": "`Splunk_TA_MS_O365`",
              "d": "O365 unified audit log (ExchangeItem events)",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Workload=\"Exchange\" Operation IN (\"MailItemsAccessed\",\"Send\",\"SendAs\")\n| stats count by UserId, Operation, MailboxOwnerUPN\n| where UserId!=MailboxOwnerUPN",
              "m": "Enable mailbox audit logging in Exchange Online. Ingest via O365 Management Activity API. Alert on non-owner access to sensitive mailboxes. Track delegate activity. Monitor SendAs events for potential impersonation.",
              "z": "Table (non-owner mailbox access), Bar chart (access by user), Timeline (audit events).",
              "kfp": "Legitimate delegate access, shared mailboxes, and service account activity in busy support teams.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_MS_O365`.\n• Ensure the following data sources are available: O365 unified audit log (ExchangeItem events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable mailbox audit logging in Exchange Online. Ingest via O365 Management Activity API. Alert on non-owner access to sensitive mailboxes. Track delegate activity. Monitor SendAs events for potential impersonation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Workload=\"Exchange\" Operation IN (\"MailItemsAccessed\",\"Send\",\"SendAs\")\n| stats count by UserId, Operation, MailboxOwnerUPN\n| where UserId!=MailboxOwnerUPN\n```\n\nUnderstanding this SPL\n\n**Mailbox Audit Logging** — Tracks who accesses what mailboxes, including delegate and admin access. Essential for insider threat detection and compliance.\n\nDocumented **Data sources**: O365 unified audit log (ExchangeItem events). **App/TA** (typical add-on context): `Splunk_TA_MS_O365`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by UserId, Operation, MailboxOwnerUPN** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where UserId!=MailboxOwnerUPN` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-owner mailbox access), Bar chart (access by user), Timeline (audit events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see who opened or changed mailboxes, so we can tell normal support access from snooping or takeover.",
              "wv": "crawl",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "11.1.3",
              "n": "Exchange Online Protection Events",
              "c": "high",
              "f": "intermediate",
              "v": "EOP filtering metrics show email threat landscape and security control effectiveness.",
              "t": "`Splunk_TA_MS_O365`",
              "d": "EOP message trace, threat protection status",
              "q": "index=o365 sourcetype=\"ms:o365:messageTrace\"\n| eval threat_type=case(match(FilteringResult,\"Spam\"),\"Spam\", match(FilteringResult,\"Phish\"),\"Phishing\", match(FilteringResult,\"Malware\"),\"Malware\", 1=1,\"Clean\")\n| stats count by threat_type",
              "m": "Ingest O365 message trace data. Classify messages by EOP verdict. Track filtering rates over time. Report on threat types and volumes. Alert on phishing/malware volume spikes.",
              "z": "Pie chart (message classification), Line chart (threat volume trend), Bar chart (top blocked senders).",
              "kfp": "Aggressive false positives from EOP; bulk marketing waves can resemble spam surges during holidays.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_MS_O365`.\n• Ensure the following data sources are available: EOP message trace, threat protection status.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest O365 message trace data. Classify messages by EOP verdict. Track filtering rates over time. Report on threat types and volumes. Alert on phishing/malware volume spikes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:messageTrace\"\n| eval threat_type=case(match(FilteringResult,\"Spam\"),\"Spam\", match(FilteringResult,\"Phish\"),\"Phishing\", match(FilteringResult,\"Malware\"),\"Malware\", 1=1,\"Clean\")\n| stats count by threat_type\n```\n\nUnderstanding this SPL\n\n**Exchange Online Protection Events** — EOP filtering metrics show email threat landscape and security control effectiveness.\n\nDocumented **Data sources**: EOP message trace, threat protection status. **App/TA** (typical add-on context): `Splunk_TA_MS_O365`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:messageTrace. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:messageTrace\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **threat_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by threat_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (message classification), Line chart (threat volume trend), Bar chart (top blocked senders).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track what Exchange Online Protection does with bad mail, so we catch filtering gaps before junk or malware spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.1.4",
              "n": "Teams Usage Analytics",
              "c": "medium",
              "f": "beginner",
              "v": "Teams adoption and quality metrics inform collaboration strategy and help identify user experience issues.",
              "t": "`Splunk_TA_MS_O365`",
              "d": "M365 Teams activity reports, Teams call quality data",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Workload=\"MicrosoftTeams\"\n| stats count by Operation\n| sort -count",
              "m": "Ingest Teams activity reports via Graph API. Track meetings, messages, calls, and file sharing volumes. Monitor call quality metrics (jitter, packet loss). Report on adoption trends per department.",
              "z": "Line chart (Teams activity trend), Bar chart (activity by type), Table (call quality issues).",
              "kfp": "Seasonal or project-based spikes in Teams usage from training, town halls, or all-hands events.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_MS_O365`.\n• Ensure the following data sources are available: M365 Teams activity reports, Teams call quality data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Teams activity reports via Graph API. Track meetings, messages, calls, and file sharing volumes. Monitor call quality metrics (jitter, packet loss). Report on adoption trends per department.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Workload=\"MicrosoftTeams\"\n| stats count by Operation\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Teams Usage Analytics** — Teams adoption and quality metrics inform collaboration strategy and help identify user experience issues.\n\nDocumented **Data sources**: M365 Teams activity reports, Teams call quality data. **App/TA** (typical add-on context): `Splunk_TA_MS_O365`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Operation** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (Teams activity trend), Bar chart (activity by type), Table (call quality issues).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see how people use Teams over time, so we can spot real adoption dips versus one-off project swings.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.1.5",
              "n": "SharePoint/OneDrive Sharing Audit",
              "c": "high",
              "f": "beginner",
              "v": "External sharing can expose sensitive data. Audit trail ensures data protection and compliance.",
              "t": "`Splunk_TA_MS_O365`",
              "d": "O365 audit log (SharingSet, AnonymousLinkCreated)",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Workload=\"SharePoint\" Operation IN (\"SharingSet\",\"AnonymousLinkCreated\",\"CompanyLinkCreated\")\n| where TargetUserOrGroupType=\"Guest\" OR Operation=\"AnonymousLinkCreated\"\n| table _time, UserId, Operation, ObjectId, TargetUserOrGroupName",
              "m": "Ingest SharePoint/OneDrive audit events. Alert on external sharing (guest users, anonymous links). Track sharing activity per user. Flag sharing of sensitive files or sites. Report for data governance reviews.",
              "z": "Table (external sharing events), Bar chart (sharing by user), Line chart (sharing trend), Pie chart (sharing type distribution).",
              "kfp": "Anonymous links for guest collaboration, public document sharing, or temporary file transfers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_MS_O365`.\n• Ensure the following data sources are available: O365 audit log (SharingSet, AnonymousLinkCreated).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest SharePoint/OneDrive audit events. Alert on external sharing (guest users, anonymous links). Track sharing activity per user. Flag sharing of sensitive files or sites. Report for data governance reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Workload=\"SharePoint\" Operation IN (\"SharingSet\",\"AnonymousLinkCreated\",\"CompanyLinkCreated\")\n| where TargetUserOrGroupType=\"Guest\" OR Operation=\"AnonymousLinkCreated\"\n| table _time, UserId, Operation, ObjectId, TargetUserOrGroupName\n```\n\nUnderstanding this SPL\n\n**SharePoint/OneDrive Sharing Audit** — External sharing can expose sensitive data. Audit trail ensures data protection and compliance.\n\nDocumented **Data sources**: O365 audit log (SharingSet, AnonymousLinkCreated). **App/TA** (typical add-on context): `Splunk_TA_MS_O365`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where TargetUserOrGroupType=\"Guest\" OR Operation=\"AnonymousLinkCreated\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SharePoint/OneDrive Sharing Audit**): table _time, UserId, Operation, ObjectId, TargetUserOrGroupName\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (external sharing events), Bar chart (sharing by user), Line chart (sharing trend), Pie chart (sharing type distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch file sharing and guest links, so we know when data leaves the company more than we expect.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Privacy",
                  "v": "current",
                  "cl": "§164.502(a)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Privacy §164.502(a) (Uses and disclosures of PHI — general rules) is enforced — Splunk UC-11.1.5: SharePoint/OneDrive Sharing Audit.",
                  "ea": "Saved search 'UC-11.1.5' running on O365 audit log (SharingSet, AnonymousLinkCreated), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E"
                },
                {
                  "r": "HIPAA Privacy",
                  "v": "current",
                  "cl": "§164.514(a)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Privacy §164.514(a) (De-identification of PHI) is enforced — Splunk UC-11.1.5: SharePoint/OneDrive Sharing Audit.",
                  "ea": "Saved search 'UC-11.1.5' running on O365 audit log (SharingSet, AnonymousLinkCreated), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E"
                },
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SG PDPA §24 (Protection of personal data obligation) is enforced — Splunk UC-11.1.5: SharePoint/OneDrive Sharing Audit.",
                  "ea": "Saved search 'UC-11.1.5' running on O365 audit log (SharingSet, AnonymousLinkCreated), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://sso.agc.gov.sg/Act/PDPA2012"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.1.6",
              "n": "DLP Policy Events",
              "c": "high",
              "f": "beginner",
              "v": "M365 DLP policy matches across email, Teams, SharePoint identify sensitive data exposure. Centralized tracking supports compliance.",
              "t": "`Splunk_TA_MS_O365`",
              "d": "O365 DLP logs",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Workload=\"Dlp\"\n| stats count by PolicyName, UserPrincipalName, SensitiveInfoType\n| sort -count",
              "m": "Configure M365 DLP policies. Ingest DLP events. Track violations by policy, user, and data type. Alert on high-severity matches. Produce compliance reports for regulated data (PII, PCI, HIPAA).",
              "z": "Bar chart (violations by policy), Table (top violators), Line chart (violation trend).",
              "kfp": "Test DLP rules, training simulations, and broad “monitor only” policies that create noisy matches.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_MS_O365`.\n• Ensure the following data sources are available: O365 DLP logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure M365 DLP policies. Ingest DLP events. Track violations by policy, user, and data type. Alert on high-severity matches. Produce compliance reports for regulated data (PII, PCI, HIPAA).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Workload=\"Dlp\"\n| stats count by PolicyName, UserPrincipalName, SensitiveInfoType\n| sort -count\n```\n\nUnderstanding this SPL\n\n**DLP Policy Events** — M365 DLP policy matches across email, Teams, SharePoint identify sensitive data exposure. Centralized tracking supports compliance.\n\nDocumented **Data sources**: O365 DLP logs. **App/TA** (typical add-on context): `Splunk_TA_MS_O365`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by PolicyName, UserPrincipalName, SensitiveInfoType** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (violations by policy), Table (top violators), Line chart (violation trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see when your rules block or flag sensitive data, so compliance can tune policies without guesswork.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Privacy",
                  "v": "current",
                  "cl": "§164.502(a)",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of HIPAA Privacy §164.502(a) (Uses and disclosures of PHI — general rules) — Splunk UC-11.1.6: DLP Policy Events.",
                  "ea": "Saved search 'UC-11.1.6' running on O365 DLP logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E"
                },
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§24",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of SG PDPA §24 (Protection of personal data obligation) — Splunk UC-11.1.6: DLP Policy Events.",
                  "ea": "Saved search 'UC-11.1.6' running on O365 DLP logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://sso.agc.gov.sg/Act/PDPA2012"
                },
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§26A",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of SG PDPA §26A (Data breach notification) — Splunk UC-11.1.6: DLP Policy Events.",
                  "ea": "Saved search 'UC-11.1.6' running on O365 DLP logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://sso.agc.gov.sg/Act/PDPA2012"
                },
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§26B",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of SG PDPA §26B (Criteria for notifiability) — Splunk UC-11.1.6: DLP Policy Events.",
                  "ea": "Saved search 'UC-11.1.6' running on O365 DLP logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://sso.agc.gov.sg/Act/PDPA2012"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.1.7",
              "n": "Admin Activity Audit",
              "c": "high",
              "f": "beginner",
              "v": "M365 admin actions (user creation, license changes, policy modifications) need audit trails for compliance and security.",
              "t": "`Splunk_TA_MS_O365`",
              "d": "O365 audit log (admin operations)",
              "q": "index=o365 sourcetype=\"ms:o365:management\" RecordType=1\n| table _time, UserId, Operation, ObjectId, ResultStatus\n| sort -_time",
              "m": "Ingest O365 admin audit log. Track admin operations by administrator. Alert on sensitive operations (user creation, role changes, policy modifications). Correlate with change management tickets.",
              "z": "Table (admin activities), Timeline (admin events), Bar chart (actions by admin).",
              "kfp": "Planned break-glass access, helpdesk changes, and vendor maintenance accounts during migrations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_MS_O365`.\n• Ensure the following data sources are available: O365 audit log (admin operations).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest O365 admin audit log. Track admin operations by administrator. Alert on sensitive operations (user creation, role changes, policy modifications). Correlate with change management tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" RecordType=1\n| table _time, UserId, Operation, ObjectId, ResultStatus\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Admin Activity Audit** — M365 admin actions (user creation, license changes, policy modifications) need audit trails for compliance and security.\n\nDocumented **Data sources**: O365 audit log (admin operations). **App/TA** (typical add-on context): `Splunk_TA_MS_O365`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Admin Activity Audit**): table _time, UserId, Operation, ObjectId, ResultStatus\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (admin activities), Timeline (admin events), Bar chart (actions by admin).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log important Microsoft 365 admin changes, so we can prove who changed what and catch surprise privilege moves.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.1.8",
              "n": "Inbox Rule Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Malicious inbox rules (auto-forward to external, auto-delete) are a key post-compromise technique for data exfiltration.",
              "t": "`Splunk_TA_MS_O365`",
              "d": "O365 audit log (New-InboxRule, Set-InboxRule)",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Operation IN (\"New-InboxRule\",\"Set-InboxRule\")\n| spath output=forward Parameters{}.Value\n| search forward=\"*@*\" NOT forward=\"*@yourdomain.com\"\n| table _time, UserId, Operation, forward",
              "m": "Monitor inbox rule creation events. Alert on rules that forward to external addresses, delete messages, or move to uncommon folders. These are high-confidence indicators of account compromise. Trigger immediate investigation.",
              "z": "Table (suspicious inbox rules), Single value (external forwarding rules — target: 0), Timeline (rule creation events).",
              "kfp": "Legitimate forwarding rules for out-of-office automation, helpdesk routing, or executive assistant access.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_MS_O365`.\n• Ensure the following data sources are available: O365 audit log (New-InboxRule, Set-InboxRule).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor inbox rule creation events. Alert on rules that forward to external addresses, delete messages, or move to uncommon folders. These are high-confidence indicators of account compromise. Trigger immediate investigation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Operation IN (\"New-InboxRule\",\"Set-InboxRule\")\n| spath output=forward Parameters{}.Value\n| search forward=\"*@*\" NOT forward=\"*@yourdomain.com\"\n| table _time, UserId, Operation, forward\n```\n\nUnderstanding this SPL\n\n**Inbox Rule Monitoring** — Malicious inbox rules (auto-forward to external, auto-delete) are a key post-compromise technique for data exfiltration.\n\nDocumented **Data sources**: O365 audit log (New-InboxRule, Set-InboxRule). **App/TA** (typical add-on context): `Splunk_TA_MS_O365`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Inbox Rule Monitoring**): table _time, UserId, Operation, forward\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious inbox rules), Single value (external forwarding rules — target: 0), Timeline (rule creation events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch sneaky auto-forward and inbox rules, so a stolen account cannot silently ship mail somewhere else.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "11.1.9",
              "n": "Service Health Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "M365 service incidents affect all users. Early awareness from API enables proactive communication and workaround planning.",
              "t": "Custom API input (M365 Service Health API)",
              "d": "M365 Service Health API",
              "q": "index=m365 sourcetype=\"m365:servicehealth\"\n| where status!=\"ServiceOperational\"\n| table _time, service, status, title, classification",
              "m": "Poll M365 Service Health API every 5 minutes. Alert on service degradations and incidents. Track incident duration and frequency. Correlate with internal ticket volumes to measure user impact.",
              "z": "Status grid (service × health), Table (active incidents), Timeline (incident history).",
              "kfp": "Microsoft-announced service incidents that already have an advisory; do not double-page on those windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input (M365 Service Health API).\n• Ensure the following data sources are available: M365 Service Health API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll M365 Service Health API every 5 minutes. Alert on service degradations and incidents. Track incident duration and frequency. Correlate with internal ticket volumes to measure user impact.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=m365 sourcetype=\"m365:servicehealth\"\n| where status!=\"ServiceOperational\"\n| table _time, service, status, title, classification\n```\n\nUnderstanding this SPL\n\n**Service Health Monitoring** — M365 service incidents affect all users. Early awareness from API enables proactive communication and workaround planning.\n\nDocumented **Data sources**: M365 Service Health API. **App/TA** (typical add-on context): Custom API input (M365 Service Health API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: m365; **sourcetype**: m365:servicehealth. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=m365, sourcetype=\"m365:servicehealth\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status!=\"ServiceOperational\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Service Health Monitoring**): table _time, service, status, title, classification\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (service × health), Table (active incidents), Timeline (incident history).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow Microsoft 365 service incidents, so we know a cloud blip is the cause and not a mystery on our side.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.1.10",
              "n": "License Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "M365 license costs are significant. Tracking utilization identifies unused licenses for reallocation and cost savings.",
              "t": "Custom API input (M365 Reports API)",
              "d": "M365 license assignment and usage reports",
              "q": "index=m365 sourcetype=\"m365:licenses\"\n| stats sum(assigned) as assigned, sum(consumed) as consumed by sku_name\n| eval utilization_pct=round(consumed/assigned*100,1)\n| table sku_name, assigned, consumed, utilization_pct",
              "m": "Poll M365 license reports via Graph API weekly. Track assigned vs consumed licenses per SKU. Identify inactive users (no activity in 90 days with assigned license). Report on cost optimization opportunities.",
              "z": "Table (license utilization), Gauge (% utilized per SKU), Bar chart (unused licenses by SKU).",
              "kfp": "True-up and license true-down windows; co-existence of trial or dev tenants can shift utilization.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input (M365 Reports API).\n• Ensure the following data sources are available: M365 license assignment and usage reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll M365 license reports via Graph API weekly. Track assigned vs consumed licenses per SKU. Identify inactive users (no activity in 90 days with assigned license). Report on cost optimization opportunities.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=m365 sourcetype=\"m365:licenses\"\n| stats sum(assigned) as assigned, sum(consumed) as consumed by sku_name\n| eval utilization_pct=round(consumed/assigned*100,1)\n| table sku_name, assigned, consumed, utilization_pct\n```\n\nUnderstanding this SPL\n\n**License Utilization** — M365 license costs are significant. Tracking utilization identifies unused licenses for reallocation and cost savings.\n\nDocumented **Data sources**: M365 license assignment and usage reports. **App/TA** (typical add-on context): Custom API input (M365 Reports API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: m365; **sourcetype**: m365:licenses. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=m365, sourcetype=\"m365:licenses\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sku_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **License Utilization**): table sku_name, assigned, consumed, utilization_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (license utilization), Gauge (% utilized per SKU), Bar chart (unused licenses by SKU).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track license burn against seats, so we are not overbuying or hitting a hard block during a peak.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.1.11",
              "n": "Exchange Message Queue Depth",
              "c": "high",
              "f": "intermediate",
              "v": "Hub transport queue length indicates mail flow issues. Messages stuck in queue cause delivery delays, NDRs, and user complaints. Growing queues often precede transport service failures.",
              "t": "`Splunk_TA_windows` (perfmon), custom scripted input",
              "d": "Exchange Transport Queue counters (MSExchangeTransport Queues), Get-Queue PowerShell output",
              "q": "(index=perfmon object=\"MSExchangeTransport Queues\") OR (index=exchange sourcetype=\"exchange:queue_depth\")\n| eval queue_depth=coalesce(QueueLength, MessageCount, 0)\n| where queue_depth > 0\n| bin _time span=15m\n| stats latest(queue_depth) as current_depth, max(queue_depth) as peak_depth, avg(queue_depth) as avg_depth by _time, host, QueueName\n| where current_depth > 100 OR peak_depth > 500\n| sort -current_depth\n| table _time, host, QueueName, current_depth, peak_depth, avg_depth",
              "m": "Configure Splunk_TA_windows to collect MSExchangeTransport Queues perfmon counters (QueueLength, MessageCount) from Exchange servers every 60 seconds. Alternatively, run a scheduled script that executes `Get-Queue` and outputs JSON to Splunk via HEC or scripted input. Map QueueName to delivery type (Submission, Poison, Unreachable). Alert when any queue exceeds 100 messages (warning) or 500 (critical). Correlate queue spikes with transport service restarts and network issues.",
              "z": "Line chart (queue depth over time by queue), Single value (max queue depth), Table (queues exceeding threshold), Bar chart (peak depth by server).",
              "kfp": "Backlog during planned transport maintenance, storm patches, or large mailing bursts.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows` (perfmon), custom scripted input.\n• Ensure the following data sources are available: Exchange Transport Queue counters (MSExchangeTransport Queues), Get-Queue PowerShell output.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk_TA_windows to collect MSExchangeTransport Queues perfmon counters (QueueLength, MessageCount) from Exchange servers every 60 seconds. Alternatively, run a scheduled script that executes `Get-Queue` and outputs JSON to Splunk via HEC or scripted input. Map QueueName to delivery type (Submission, Poison, Unreachable). Alert when any queue exceeds 100 messages (warning) or 500 (critical). Correlate queue spikes with transport service restarts and network issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=perfmon object=\"MSExchangeTransport Queues\") OR (index=exchange sourcetype=\"exchange:queue_depth\")\n| eval queue_depth=coalesce(QueueLength, MessageCount, 0)\n| where queue_depth > 0\n| bin _time span=15m\n| stats latest(queue_depth) as current_depth, max(queue_depth) as peak_depth, avg(queue_depth) as avg_depth by _time, host, QueueName\n| where current_depth > 100 OR peak_depth > 500\n| sort -current_depth\n| table _time, host, QueueName, current_depth, peak_depth, avg_depth\n```\n\nUnderstanding this SPL\n\n**Exchange Message Queue Depth** — Hub transport queue length indicates mail flow issues. Messages stuck in queue cause delivery delays, NDRs, and user complaints. Growing queues often precede transport service failures.\n\nDocumented **Data sources**: Exchange Transport Queue counters (MSExchangeTransport Queues), Get-Queue PowerShell output. **App/TA** (typical add-on context): `Splunk_TA_windows` (perfmon), custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: perfmon, exchange; **sourcetype**: exchange:queue_depth. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=perfmon, index=exchange, sourcetype=\"exchange:queue_depth\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **queue_depth** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where queue_depth > 0` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, QueueName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where current_depth > 100 OR peak_depth > 500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Exchange Message Queue Depth**): table _time, host, QueueName, current_depth, peak_depth, avg_depth\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (queue depth over time by queue), Single value (max queue depth), Table (queues exceeding threshold), Bar chart (peak depth by server).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch transport queues, so a stuck pile of mail is not the first time we hear from angry users.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.1.12",
              "n": "Exchange Database Copy Queue Length",
              "c": "high",
              "f": "intermediate",
              "v": "DAG replication lag measured in number of log files. Growing copy queue risks data loss on failover and indicates replication or disk I/O issues. Replay queue lag affects failover time.",
              "t": "`Splunk_TA_windows`, custom scripted input",
              "d": "Get-MailboxDatabaseCopyStatus (CopyQueueLength, ReplayQueueLength)",
              "q": "index=exchange sourcetype=\"exchange:dag_status\"\n| eval copy_queue=coalesce(CopyQueueLength, copyQueueLength, 0), replay_queue=coalesce(ReplayQueueLength, replayQueueLength, 0)\n| where copy_queue > 10 OR replay_queue > 50\n| bin _time span=15m\n| stats latest(copy_queue) as copy_queue_len, latest(replay_queue) as replay_queue_len, latest(Status) as status by _time, host, DatabaseName, MailboxServer\n| where copy_queue_len > 10 OR replay_queue_len > 50\n| sort -copy_queue_len\n| table _time, host, DatabaseName, MailboxServer, copy_queue_len, replay_queue_len, status",
              "m": "Run a PowerShell script every 5–15 minutes that executes `Get-MailboxDatabaseCopyStatus | Select-Object DatabaseName, MailboxServer, CopyQueueLength, ReplayQueueLength, Status` and forwards results to Splunk via HEC. Normalize field names (CopyQueueLength vs copyQueueLength). Alert when CopyQueueLength > 10 (warning) or > 50 (critical), or ReplayQueueLength > 50. Track per-database and per-server. Correlate with disk latency and network metrics.",
              "z": "Line chart (copy/replay queue over time by database), Table (lagging copies), Single value (max copy queue), Heat map (database × server queue status).",
              "kfp": "Database copy backlog during index rebuilds, storage maintenance, or datacenter failovers.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_windows`, custom scripted input.\n• Ensure the following data sources are available: Get-MailboxDatabaseCopyStatus (CopyQueueLength, ReplayQueueLength).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun a PowerShell script every 5–15 minutes that executes `Get-MailboxDatabaseCopyStatus | Select-Object DatabaseName, MailboxServer, CopyQueueLength, ReplayQueueLength, Status` and forwards results to Splunk via HEC. Normalize field names (CopyQueueLength vs copyQueueLength). Alert when CopyQueueLength > 10 (warning) or > 50 (critical), or ReplayQueueLength > 50. Track per-database and per-server. Correlate with disk latency and network metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=exchange sourcetype=\"exchange:dag_status\"\n| eval copy_queue=coalesce(CopyQueueLength, copyQueueLength, 0), replay_queue=coalesce(ReplayQueueLength, replayQueueLength, 0)\n| where copy_queue > 10 OR replay_queue > 50\n| bin _time span=15m\n| stats latest(copy_queue) as copy_queue_len, latest(replay_queue) as replay_queue_len, latest(Status) as status by _time, host, DatabaseName, MailboxServer\n| where copy_queue_len > 10 OR replay_queue_len > 50\n| sort -copy_queue_len\n| table _time, host, DatabaseName, MailboxServer, copy_queue_len, replay_queue_len, status\n```\n\nUnderstanding this SPL\n\n**Exchange Database Copy Queue Length** — DAG replication lag measured in number of log files. Growing copy queue risks data loss on failover and indicates replication or disk I/O issues. Replay queue lag affects failover time.\n\nDocumented **Data sources**: Get-MailboxDatabaseCopyStatus (CopyQueueLength, ReplayQueueLength). **App/TA** (typical add-on context): `Splunk_TA_windows`, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: exchange; **sourcetype**: exchange:dag_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=exchange, sourcetype=\"exchange:dag_status\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **copy_queue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where copy_queue > 10 OR replay_queue > 50` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, DatabaseName, MailboxServer** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where copy_queue_len > 10 OR replay_queue_len > 50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Exchange Database Copy Queue Length**): table _time, host, DatabaseName, MailboxServer, copy_queue_len, replay_queue_len, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (copy/replay queue over time by database), Table (lagging copies), Single value (max copy queue), Heat map (database × server queue status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch transport queues, so a stuck pile of mail is not the first time we hear from angry users.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 43.8,
          "qd": {
            "gold": 1,
            "silver": 2,
            "bronze": 9,
            "none": 0
          }
        },
        {
          "i": "11.2",
          "n": "Google Workspace",
          "u": [
            {
              "i": "11.2.1",
              "n": "Admin Console Audit",
              "c": "high",
              "f": "beginner",
              "v": "Admin actions in Google Workspace affect all users. Audit trail supports compliance and detects unauthorized changes.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Google Workspace Admin audit log",
              "q": "index=gws sourcetype=\"gws:admin\" event_name IN (\"CREATE_USER\",\"DELETE_USER\",\"CHANGE_ADMIN_ROLE\")\n| table _time, actor.email, event_name, target_user",
              "m": "Configure Google Workspace TA to ingest admin audit logs via Reports API. Track user management, policy changes, and configuration modifications. Alert on sensitive operations (role changes, 2FA disablement).",
              "z": "Table (admin events), Timeline (admin activity), Bar chart (events by admin).",
              "kfp": "Planned org changes, helpdesk account lifecycle, and contractor onboarding in Google Admin console.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Google Workspace Admin audit log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Google Workspace TA to ingest admin audit logs via Reports API. Track user management, policy changes, and configuration modifications. Alert on sensitive operations (role changes, 2FA disablement).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:admin\" event_name IN (\"CREATE_USER\",\"DELETE_USER\",\"CHANGE_ADMIN_ROLE\")\n| table _time, actor.email, event_name, target_user\n```\n\nUnderstanding this SPL\n\n**Admin Console Audit** — Admin actions in Google Workspace affect all users. Audit trail supports compliance and detects unauthorized changes.\n\nDocumented **Data sources**: Google Workspace Admin audit log. **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:admin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:admin\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Admin Console Audit**): table _time, actor.email, event_name, target_user\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (admin events), Timeline (admin activity), Bar chart (events by admin).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track big Google admin actions, so we know when accounts, policies, or roles change outside normal work.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.2",
              "n": "Gmail Message Flow",
              "c": "high",
              "f": "beginner",
              "v": "Email delivery monitoring and DLP enforcement protects sensitive data and ensures business communication reliability.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Gmail logs via BigQuery export or Reports API",
              "q": "index=gws sourcetype=\"gws:gmail\"\n| stats count by message_info.disposition\n| eval pct=round(count/sum(count)*100,1)",
              "m": "Ingest Gmail logs. Track message delivery rates, spam filtering effectiveness, and DLP triggers. Alert on delivery failures or increased spam rates. Report on email security posture.",
              "z": "Pie chart (message disposition), Line chart (message volume), Table (DLP triggers).",
              "kfp": "Newsletter spikes, campaign sends, and bulk internal notices that change disposition mix.",
              "refs": "[CIM: Email](https://docs.splunk.com/Documentation/CIM/latest/User/Email)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Gmail logs via BigQuery export or Reports API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Gmail logs. Track message delivery rates, spam filtering effectiveness, and DLP triggers. Alert on delivery failures or increased spam rates. Report on email security posture.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:gmail\"\n| stats count by message_info.disposition\n| eval pct=round(count/sum(count)*100,1)\n```\n\nUnderstanding this SPL\n\n**Gmail Message Flow** — Email delivery monitoring and DLP enforcement protects sensitive data and ensures business communication reliability.\n\nDocumented **Data sources**: Gmail logs via BigQuery export or Reports API. **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:gmail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:gmail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by message_info.disposition** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  by All_Email.src_user All_Email.action All_Email.recipient span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Gmail Message Flow** — Email delivery monitoring and DLP enforcement protects sensitive data and ensures business communication reliability.\n\nDocumented **Data sources**: Gmail logs via BigQuery export or Reports API. **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Email.All_Email` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (message disposition), Line chart (message volume), Table (DLP triggers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch whether Gmail is delivering, deferring, or flagging messages, so mail problems show up in one place.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Email"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Email.All_Email\n  by All_Email.src_user All_Email.action All_Email.recipient span=1h\n| sort -count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.3",
              "n": "Drive Sharing Anomalies",
              "c": "high",
              "f": "beginner",
              "v": "Unusual file sharing patterns may indicate data exfiltration or accidental exposure of sensitive documents.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Google Drive audit log",
              "q": "index=gws sourcetype=\"gws:drive\" event_name=\"change_user_access\"\n| where new_value=\"people_with_link\" OR target_user_email NOT LIKE \"%@yourdomain.com%\"\n| table _time, actor.email, doc_title, target_user_email, new_value",
              "m": "Ingest Drive audit logs. Alert on external sharing, \"anyone with link\" sharing, and bulk sharing events. Track sharing patterns per user. Flag sharing of sensitive folders or documents.",
              "z": "Table (sharing events), Bar chart (external sharing by user), Line chart (sharing activity trend).",
              "kfp": "Partner sharing during projects; large files moved by design teams or creative agencies.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Google Drive audit log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Drive audit logs. Alert on external sharing, \"anyone with link\" sharing, and bulk sharing events. Track sharing patterns per user. Flag sharing of sensitive folders or documents.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:drive\" event_name=\"change_user_access\"\n| where new_value=\"people_with_link\" OR target_user_email NOT LIKE \"%@yourdomain.com%\"\n| table _time, actor.email, doc_title, target_user_email, new_value\n```\n\nUnderstanding this SPL\n\n**Drive Sharing Anomalies** — Unusual file sharing patterns may indicate data exfiltration or accidental exposure of sensitive documents.\n\nDocumented **Data sources**: Google Drive audit log. **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:drive. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:drive\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where new_value=\"people_with_link\" OR target_user_email NOT LIKE \"%@yourdomain.com%\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Drive Sharing Anomalies**): table _time, actor.email, doc_title, target_user_email, new_value\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sharing events), Bar chart (external sharing by user), Line chart (sharing activity trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch unusual file sharing, so a burst of outside links is not the first time we learn about a leak.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.4",
              "n": "Login Anomaly Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Suspicious login activity (new device, unusual location, failed MFA) indicates potential account compromise.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Google Workspace login audit log",
              "q": "index=gws sourcetype=\"gws:login\" event_name=\"login_failure\"\n| stats count by actor.email, ip_address\n| where count > 5",
              "m": "Ingest login audit logs. Track failed logins, new device registrations, and unusual locations. Alert on multiple failures, suspicious activity events, and login from new countries. Correlate with Google's built-in risk signals.",
              "z": "Table (suspicious logins), Geo map (login locations), Line chart (failure rate), Bar chart (failures by user).",
              "kfp": "Travel, VPN, and new device sign-ins that look new but are routine for mobile staff.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Google Workspace login audit log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest login audit logs. Track failed logins, new device registrations, and unusual locations. Alert on multiple failures, suspicious activity events, and login from new countries. Correlate with Google's built-in risk signals.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:login\" event_name=\"login_failure\"\n| stats count by actor.email, ip_address\n| where count > 5\n```\n\nUnderstanding this SPL\n\n**Login Anomaly Detection** — Suspicious login activity (new device, unusual location, failed MFA) indicates potential account compromise.\n\nDocumented **Data sources**: Google Workspace login audit log. **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:login. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:login\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by actor.email, ip_address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious logins), Geo map (login locations), Line chart (failure rate), Bar chart (failures by user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch odd sign-in patterns, so travel and new devices are not confused with someone stealing a password.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "11.2.5",
              "n": "Meet Quality Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "Poor meeting quality impacts productivity and user satisfaction. Monitoring enables network/infrastructure optimization.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Google Meet quality logs",
              "q": "index=gws sourcetype=\"gws:meet\"\n| where video_recv_jitter_ms > 30 OR audio_recv_jitter_ms > 30\n| stats count, avg(video_recv_jitter_ms) as avg_jitter by organizer_email, meeting_code",
              "m": "Ingest Meet quality data. Track jitter, latency, and packet loss per meeting. Alert on recurring poor quality for specific users or locations. Correlate with network performance data.",
              "z": "Table (poor quality meetings), Line chart (quality metrics trend), Bar chart (issues by location).",
              "kfp": "Regional ISP issues and home Wi-Fi problems that mirror Meet quality dips without a Google outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Google Meet quality logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Meet quality data. Track jitter, latency, and packet loss per meeting. Alert on recurring poor quality for specific users or locations. Correlate with network performance data.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:meet\"\n| where video_recv_jitter_ms > 30 OR audio_recv_jitter_ms > 30\n| stats count, avg(video_recv_jitter_ms) as avg_jitter by organizer_email, meeting_code\n```\n\nUnderstanding this SPL\n\n**Meet Quality Monitoring** — Poor meeting quality impacts productivity and user satisfaction. Monitoring enables network/infrastructure optimization.\n\nDocumented **Data sources**: Google Meet quality logs. **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:meet. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:meet\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where video_recv_jitter_ms > 30 OR audio_recv_jitter_ms > 30` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by organizer_email, meeting_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (poor quality meetings), Line chart (quality metrics trend), Bar chart (issues by location).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Meet call quality, so we can tell a bad call from a bad day on the public Internet.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.6",
              "n": "Third-Party App Access",
              "c": "high",
              "f": "beginner",
              "v": "OAuth app grants to third-party applications create data access risks. Monitoring enables governance and risk assessment.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Google Workspace token audit log",
              "q": "index=gws sourcetype=\"gws:token\" event_name=\"authorize\"\n| stats dc(actor.email) as unique_users by app_name, scope\n| sort -unique_users",
              "m": "Ingest token audit logs. Track OAuth grants by application and scope. Identify high-risk scopes (full Drive access, Gmail read). Alert on new third-party apps accessing sensitive scopes. Report for governance review.",
              "z": "Table (third-party apps with scope), Bar chart (apps by user count), Pie chart (scope distribution).",
              "kfp": "OAuth consents for approved SaaS; verify against your app-allow list before revoking.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Google Workspace token audit log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest token audit logs. Track OAuth grants by application and scope. Identify high-risk scopes (full Drive access, Gmail read). Alert on new third-party apps accessing sensitive scopes. Report for governance review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:token\" event_name=\"authorize\"\n| stats dc(actor.email) as unique_users by app_name, scope\n| sort -unique_users\n```\n\nUnderstanding this SPL\n\n**Third-Party App Access** — OAuth app grants to third-party applications create data access risks. Monitoring enables governance and risk assessment.\n\nDocumented **Data sources**: Google Workspace token audit log. **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:token. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:token\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_name, scope** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (third-party apps with scope), Bar chart (apps by user count), Pie chart (scope distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.7",
              "n": "Drive External Sharing Alerts",
              "c": "high",
              "f": "beginner",
              "v": "Alerts on files and folders shared outside the primary domain or with `anyone with link`—complements anomaly baselines (UC-11.2.3).",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Drive audit log (`change_user_access`, `shared_drive_settings_change`)",
              "q": "index=gws sourcetype=\"gws:drive\" event_name=\"change_user_access\"\n| where new_value=\"people_with_link\" OR NOT match(target_user_email, \".*@yourdomain\\\\.com\")\n| table _time, actor.email, doc_title, target_user_email, new_value\n| sort -_time",
              "m": "Tune domain pattern via lookup. Alert on shares to competitor domains or public links on confidential folders (path regex).",
              "z": "Table (external shares), Bar chart (domains), Timeline.",
              "kfp": "Customer extranet folders, partner portals, and shared research drives with expected external access.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Drive audit log (`change_user_access`, `shared_drive_settings_change`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune domain pattern via lookup. Alert on shares to competitor domains or public links on confidential folders (path regex).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:drive\" event_name=\"change_user_access\"\n| where new_value=\"people_with_link\" OR NOT match(target_user_email, \".*@yourdomain\\\\.com\")\n| table _time, actor.email, doc_title, target_user_email, new_value\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Drive External Sharing Alerts** — Alerts on files and folders shared outside the primary domain or with `anyone with link`—complements anomaly baselines (UC-11.2.3).\n\nDocumented **Data sources**: Drive audit log (`change_user_access`, `shared_drive_settings_change`). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:drive. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:drive\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where new_value=\"people_with_link\" OR NOT match(target_user_email, \".*@yourdomain\\\\.com\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Drive External Sharing Alerts**): table _time, actor.email, doc_title, target_user_email, new_value\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (external shares), Bar chart (domains), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch unusual file sharing, so a burst of outside links is not the first time we learn about a leak.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.8",
              "n": "Admin Console Audit (Security Settings)",
              "c": "critical",
              "f": "beginner",
              "v": "Focused on security-sensitive admin events (2SV, SSO, API keys)—extends general admin audit (UC-11.2.1).",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Admin audit (`CHANGE_APPLICATION_SETTING`, `CHANGE_TWO_STEP_VERIFICATION_ENROLLMENT`, `CREATE_ROLE`, `ASSIGN_ROLE`)",
              "q": "index=gws sourcetype=\"gws:admin\"\n| search event_name IN (\"CHANGE_APPLICATION_SETTING\",\"CHANGE_TWO_STEP_VERIFICATION_ENROLLMENT\",\"CREATE_ROLE\",\"ASSIGN_ROLE\",\"AUTHORIZE_API_CLIENT_ACCESS\")\n| table _time, actor.email, event_name, target_user, setting_name\n| sort -_time",
              "m": "Alert on SSO IdP changes, 2SV exemptions, and new OAuth client authorizations. Correlate actor with known break-glass accounts.",
              "z": "Timeline (security admin events), Table (actor, event), Single value (critical events 24h).",
              "kfp": "Security setting changes from scheduled hardening, pen-test follow-ups, or compliance projects.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Admin audit (`CHANGE_APPLICATION_SETTING`, `CHANGE_TWO_STEP_VERIFICATION_ENROLLMENT`, `CREATE_ROLE`, `ASSIGN_ROLE`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on SSO IdP changes, 2SV exemptions, and new OAuth client authorizations. Correlate actor with known break-glass accounts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:admin\"\n| search event_name IN (\"CHANGE_APPLICATION_SETTING\",\"CHANGE_TWO_STEP_VERIFICATION_ENROLLMENT\",\"CREATE_ROLE\",\"ASSIGN_ROLE\",\"AUTHORIZE_API_CLIENT_ACCESS\")\n| table _time, actor.email, event_name, target_user, setting_name\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Admin Console Audit (Security Settings)** — Focused on security-sensitive admin events (2SV, SSO, API keys)—extends general admin audit (UC-11.2.1).\n\nDocumented **Data sources**: Admin audit (`CHANGE_APPLICATION_SETTING`, `CHANGE_TWO_STEP_VERIFICATION_ENROLLMENT`, `CREATE_ROLE`, `ASSIGN_ROLE`). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:admin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:admin\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Admin Console Audit (Security Settings)**): table _time, actor.email, event_name, target_user, setting_name\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Admin Console Audit (Security Settings)** — Focused on security-sensitive admin events (2SV, SSO, API keys)—extends general admin audit (UC-11.2.1).\n\nDocumented **Data sources**: Admin audit (`CHANGE_APPLICATION_SETTING`, `CHANGE_TWO_STEP_VERIFICATION_ENROLLMENT`, `CREATE_ROLE`, `ASSIGN_ROLE`). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (security admin events), Table (actor, event), Single value (critical events 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track big Google admin actions, so we know when accounts, policies, or roles change outside normal work.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.9",
              "n": "Gmail Suspicious Forwarding",
              "c": "critical",
              "f": "beginner",
              "v": "Detects auto-forwarding and delegation to external addresses—common post-compromise behavior.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Email settings audit (`CHANGE_EMAIL_SETTING`, Gmail routing / forwarding flags in Reports API export)",
              "q": "index=gws sourcetype=\"gws:admin\" event_name=\"CHANGE_EMAIL_SETTING\"\n| search forward OR delegate OR filter\n| where NOT match(setting_value, \".*@yourdomain\\\\.com\")\n| table _time, actor.email, target_user, setting_name, setting_value",
              "m": "Ingest email settings changes. Alert on new forwarding to non-corporate domains. Weekly report of active forwards from `gmail:user_settings` export.",
              "z": "Table (forwarding changes), Single value (external forwards), Timeline.",
              "kfp": "Travel forwarding, mailbox merges, and temporary coverage rules during parental leave or rotations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Email settings audit (`CHANGE_EMAIL_SETTING`, Gmail routing / forwarding flags in Reports API export).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest email settings changes. Alert on new forwarding to non-corporate domains. Weekly report of active forwards from `gmail:user_settings` export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:admin\" event_name=\"CHANGE_EMAIL_SETTING\"\n| search forward OR delegate OR filter\n| where NOT match(setting_value, \".*@yourdomain\\\\.com\")\n| table _time, actor.email, target_user, setting_name, setting_value\n```\n\nUnderstanding this SPL\n\n**Gmail Suspicious Forwarding** — Detects auto-forwarding and delegation to external addresses—common post-compromise behavior.\n\nDocumented **Data sources**: Email settings audit (`CHANGE_EMAIL_SETTING`, Gmail routing / forwarding flags in Reports API export). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:admin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:admin\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Filters the current rows with `where NOT match(setting_value, \".*@yourdomain\\\\.com\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Gmail Suspicious Forwarding**): table _time, actor.email, target_user, setting_name, setting_value\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (forwarding changes), Single value (external forwards), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.10",
              "n": "Chrome Management Policy Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Managed Chrome browsers should enforce updates, extensions allowlists, and safe browsing; drift indicates unmanaged or tampered clients.",
              "t": "`Splunk_TA_GoogleWorkspace`, Chrome Browser Cloud Management events",
              "d": "Chrome policy audit events, device reports (`chromeosdevices` / `managed_browser`)",
              "q": "index=gws sourcetype=\"gws:chrome_management\" OR sourcetype=\"gws:admin\" event_name=\"CHROME_DEVICES_POLICY_UPDATE\"\n| stats latest(policy_version) as ver, latest(compliance_state) as state by device_id, org_unit\n| where state!=\"COMPLIANT\" OR isnull(ver)\n| table device_id, org_unit, state, ver",
              "m": "Ingest Chrome Enterprise Connector or admin audit for policy pushes. Alert on devices not checking in >7 days. Map OUs to sensitivity.",
              "z": "Table (non-compliant browsers), Bar chart (by OU), Line chart (compliance %).",
              "kfp": "Staged rollouts, delayed check-ins, and lab devices that remain non-compliant for a known window.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`, Chrome Browser Cloud Management events.\n• Ensure the following data sources are available: Chrome policy audit events, device reports (`chromeosdevices` / `managed_browser`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Chrome Enterprise Connector or admin audit for policy pushes. Alert on devices not checking in >7 days. Map OUs to sensitivity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:chrome_management\" OR sourcetype=\"gws:admin\" event_name=\"CHROME_DEVICES_POLICY_UPDATE\"\n| stats latest(policy_version) as ver, latest(compliance_state) as state by device_id, org_unit\n| where state!=\"COMPLIANT\" OR isnull(ver)\n| table device_id, org_unit, state, ver\n```\n\nUnderstanding this SPL\n\n**Chrome Management Policy Compliance** — Managed Chrome browsers should enforce updates, extensions allowlists, and safe browsing; drift indicates unmanaged or tampered clients.\n\nDocumented **Data sources**: Chrome policy audit events, device reports (`chromeosdevices` / `managed_browser`). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`, Chrome Browser Cloud Management events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:chrome_management, gws:admin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:chrome_management\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_id, org_unit** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where state!=\"COMPLIANT\" OR isnull(ver)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Chrome Management Policy Compliance**): table device_id, org_unit, state, ver\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Chrome Management Policy Compliance** — Managed Chrome browsers should enforce updates, extensions allowlists, and safe browsing; drift indicates unmanaged or tampered clients.\n\nDocumented **Data sources**: Chrome policy audit events, device reports (`chromeosdevices` / `managed_browser`). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`, Chrome Browser Cloud Management events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant browsers), Bar chart (by OU), Line chart (compliance %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see which managed browsers are out of policy, so unpatched or non-compliant clients do not sit there for weeks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.11",
              "n": "Google Vault Hold Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Hold creation, release, and matter changes affect legal preservation; errors risk spoliation findings.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Vault audit (`CREATE_HOLD`, `DELETE_HOLD`, `UPDATE_HOLD`, `CREATE_MATTER`)",
              "q": "index=gws sourcetype=\"gws:vault\" OR (sourcetype=\"gws:admin\" event_name=\"VAULT_*\")\n| table _time, actor.email, event_name, matter_id, hold_name\n| sort -_time",
              "m": "Forward Vault audit to Splunk. Restrict alerts to legal-team actors; flag hold deletions without ticket ID in custom field.",
              "z": "Timeline (hold changes), Table (matters affected), Single value (hold deletions 90d).",
              "kfp": "Legal hold expansions, HR matters, and ediscovery projects that add many items at once.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Vault audit (`CREATE_HOLD`, `DELETE_HOLD`, `UPDATE_HOLD`, `CREATE_MATTER`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Vault audit to Splunk. Restrict alerts to legal-team actors; flag hold deletions without ticket ID in custom field.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:vault\" OR (sourcetype=\"gws:admin\" event_name=\"VAULT_*\")\n| table _time, actor.email, event_name, matter_id, hold_name\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Google Vault Hold Compliance** — Hold creation, release, and matter changes affect legal preservation; errors risk spoliation findings.\n\nDocumented **Data sources**: Vault audit (`CREATE_HOLD`, `DELETE_HOLD`, `UPDATE_HOLD`, `CREATE_MATTER`). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:vault, gws:admin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:vault\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Google Vault Hold Compliance**): table _time, actor.email, event_name, matter_id, hold_name\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (hold changes), Table (matters affected), Single value (hold deletions 90d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.12",
              "n": "Workspace Marketplace App Review",
              "c": "high",
              "f": "intermediate",
              "v": "New marketplace app installs can expose Drive/Gmail data; tracks installs and OAuth scopes.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Token audit (`authorize`), admin audit (`ADD_APPLICATION`, `INSTALL_MARKETPLACE_APP`)",
              "q": "index=gws sourcetype=\"gws:token\" event_name=\"authorize\"\n| stats values(scope) as scopes by app_name, actor.email\n| mvexpand scopes limit=500\n| search scopes=\"https://www.googleapis.com/auth/drive\" OR scopes=\"*gmail*\"\n| sort app_name",
              "m": "Maintain allowlist of approved apps. Alert on new apps with sensitive scopes. Quarterly access review with app owners.",
              "z": "Table (high-risk grants), Bar chart (apps by user count), Pie chart (scope categories).",
              "kfp": "Marketplace trials for vetted tools; time-bound installs during hackathons or pilot weeks.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Token audit (`authorize`), admin audit (`ADD_APPLICATION`, `INSTALL_MARKETPLACE_APP`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain allowlist of approved apps. Alert on new apps with sensitive scopes. Quarterly access review with app owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:token\" event_name=\"authorize\"\n| stats values(scope) as scopes by app_name, actor.email\n| mvexpand scopes limit=500\n| search scopes=\"https://www.googleapis.com/auth/drive\" OR scopes=\"*gmail*\"\n| sort app_name\n```\n\nUnderstanding this SPL\n\n**Workspace Marketplace App Review** — New marketplace app installs can expose Drive/Gmail data; tracks installs and OAuth scopes.\n\nDocumented **Data sources**: Token audit (`authorize`), admin audit (`ADD_APPLICATION`, `INSTALL_MARKETPLACE_APP`). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:token. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:token\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_name, actor.email** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (high-risk grants), Bar chart (apps by user count), Pie chart (scope categories).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.13",
              "n": "Groups Membership Changes",
              "c": "high",
              "f": "beginner",
              "v": "Sensitive Google Groups (all-staff, external partners) membership changes can broadly expose mail and Drive.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Groups audit (`ADD_GROUP_MEMBER`, `REMOVE_GROUP_MEMBER`, `CREATE_GROUP`)",
              "q": "index=gws sourcetype=\"gws:groups\" OR (sourcetype=\"gws:admin\" event_name=\"GROUP_*\")\n| search event_name IN (\"ADD_GROUP_MEMBER\",\"REMOVE_GROUP_MEMBER\",\"CREATE_GROUP\")\n| lookup sensitive_groups.csv group_email OUTPUT tier\n| where tier=\"high\" OR like(group_email,\"*external*\")\n| table _time, actor.email, event_name, group_email, member_email",
              "m": "Tag high-impact groups in lookup. Alert on adds to groups with external posting or shared drives.",
              "z": "Table (membership changes), Timeline, Bar chart (changes by group).",
              "kfp": "Re-orgs, role changes, and automated group sync from your identity system.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Groups audit (`ADD_GROUP_MEMBER`, `REMOVE_GROUP_MEMBER`, `CREATE_GROUP`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag high-impact groups in lookup. Alert on adds to groups with external posting or shared drives.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:groups\" OR (sourcetype=\"gws:admin\" event_name=\"GROUP_*\")\n| search event_name IN (\"ADD_GROUP_MEMBER\",\"REMOVE_GROUP_MEMBER\",\"CREATE_GROUP\")\n| lookup sensitive_groups.csv group_email OUTPUT tier\n| where tier=\"high\" OR like(group_email,\"*external*\")\n| table _time, actor.email, event_name, group_email, member_email\n```\n\nUnderstanding this SPL\n\n**Groups Membership Changes** — Sensitive Google Groups (all-staff, external partners) membership changes can broadly expose mail and Drive.\n\nDocumented **Data sources**: Groups audit (`ADD_GROUP_MEMBER`, `REMOVE_GROUP_MEMBER`, `CREATE_GROUP`). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:groups, gws:admin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:groups\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where tier=\"high\" OR like(group_email,\"*external*\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Groups Membership Changes**): table _time, actor.email, event_name, group_email, member_email\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Groups Membership Changes** — Sensitive Google Groups (all-staff, external partners) membership changes can broadly expose mail and Drive.\n\nDocumented **Data sources**: Groups audit (`ADD_GROUP_MEMBER`, `REMOVE_GROUP_MEMBER`, `CREATE_GROUP`). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (membership changes), Timeline, Bar chart (changes by group).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch SharePoint sharing and file pulls, so odd permission changes or big downloads are visible before they spread.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.14",
              "n": "Cloud Identity Device Management",
              "c": "high",
              "f": "intermediate",
              "v": "Endpoint visibility for mobile and desktop enrolled in Endpoint Verification / MDM—lost devices and OS drift.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Endpoint audit (`REGISTER_DEVICE`, `SYNC_DEVICE`), Reports API mobile devices",
              "q": "index=gws sourcetype=\"gws:endpoint\" OR sourcetype=\"gws:mobile\"\n| where compliance_state!=\"COMPLIANT\" OR device_compromised=\"true\" OR status=\"LOST\"\n| stats count by device_id, user_email, compliance_state\n| sort -count",
              "m": "Ingest device inventory daily. Alert on lost/stolen, rooted/jailbroken, or encryption-off. Integrate with Chrome management (UC-11.2.10).",
              "z": "Table (non-compliant devices), Map (last sync location if available), Line chart (fleet compliance %).",
              "kfp": "Recently enrolled devices, depot repairs, and beta-channel Chrome builds during pilot.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Endpoint audit (`REGISTER_DEVICE`, `SYNC_DEVICE`), Reports API mobile devices.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest device inventory daily. Alert on lost/stolen, rooted/jailbroken, or encryption-off. Integrate with Chrome management (UC-11.2.10).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:endpoint\" OR sourcetype=\"gws:mobile\"\n| where compliance_state!=\"COMPLIANT\" OR device_compromised=\"true\" OR status=\"LOST\"\n| stats count by device_id, user_email, compliance_state\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cloud Identity Device Management** — Endpoint visibility for mobile and desktop enrolled in Endpoint Verification / MDM—lost devices and OS drift.\n\nDocumented **Data sources**: Endpoint audit (`REGISTER_DEVICE`, `SYNC_DEVICE`), Reports API mobile devices. **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:endpoint, gws:mobile. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:endpoint\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where compliance_state!=\"COMPLIANT\" OR device_compromised=\"true\" OR status=\"LOST\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by device_id, user_email, compliance_state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cloud Identity Device Management** — Endpoint visibility for mobile and desktop enrolled in Endpoint Verification / MDM—lost devices and OS drift.\n\nDocumented **Data sources**: Endpoint audit (`REGISTER_DEVICE`, `SYNC_DEVICE`), Reports API mobile devices. **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant devices), Map (last sync location if available), Line chart (fleet compliance %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.15",
              "n": "Google Meet Quality Metrics",
              "c": "medium",
              "f": "beginner",
              "v": "Meet quality telemetry (packet loss, jitter, RTT) for troubleshooting conferencing issues—extends UC-11.2.5 with org-level rollups.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Meet quality tool export / BigQuery (`conference_records`, participant `score` metrics)",
              "q": "index=gws sourcetype=\"gws:meet_quality\"\n| where packet_loss_pct > 2 OR round_trip_time_ms > 300 OR jitter_ms > 35\n| stats avg(packet_loss_pct) as avg_loss, avg(round_trip_time_ms) as avg_rtt by conference_id, organizer_email\n| sort -avg_loss",
              "m": "Enable Meet quality logging in Admin. Ingest via BigQuery export or Reports API. Baseline per office ASN. Alert when median MOS proxy metrics exceed SLA.",
              "z": "Table (worst conferences), Line chart (loss/jitter trend), Bar chart (by network location).",
              "kfp": "ISP and Wi-Fi issues driving Meet scores down without any change in admin policy.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Meet quality tool export / BigQuery (`conference_records`, participant `score` metrics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Meet quality logging in Admin. Ingest via BigQuery export or Reports API. Baseline per office ASN. Alert when median MOS proxy metrics exceed SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:meet_quality\"\n| where packet_loss_pct > 2 OR round_trip_time_ms > 300 OR jitter_ms > 35\n| stats avg(packet_loss_pct) as avg_loss, avg(round_trip_time_ms) as avg_rtt by conference_id, organizer_email\n| sort -avg_loss\n```\n\nUnderstanding this SPL\n\n**Google Meet Quality Metrics** — Meet quality telemetry (packet loss, jitter, RTT) for troubleshooting conferencing issues—extends UC-11.2.5 with org-level rollups.\n\nDocumented **Data sources**: Meet quality tool export / BigQuery (`conference_records`, participant `score` metrics). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:meet_quality. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:meet_quality\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where packet_loss_pct > 2 OR round_trip_time_ms > 300 OR jitter_ms > 35` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by conference_id, organizer_email** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (worst conferences), Line chart (loss/jitter trend), Bar chart (by network location).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Meet call quality, so we can tell a bad call from a bad day on the public Internet.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.16",
              "n": "Gmail Phishing Report Analysis",
              "c": "high",
              "f": "beginner",
              "v": "User-reported phishing provides ground-truth for tuning secure links and awareness; volume spikes indicate campaigns.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Email log (`PHISHING_REPORT`, user report button events in audit)",
              "q": "index=gws sourcetype=\"gws:gmail\" event_name=\"PHISHING_REPORT\"\n| bin _time span=1h\n| stats count by reporter, _time\n| where count > 5\n| sort -count",
              "m": "Ingest reported-message metadata (no body in Splunk if policy requires). Correlate with Postini/Workspace security dashboards. Feed SOC for IOC extraction.",
              "z": "Line chart (reports per hour), Table (top reporters), Bar chart (campaign hash if extracted).",
              "kfp": "User training exercises and simulated phish that spike reported-message volume.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Email log (`PHISHING_REPORT`, user report button events in audit).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest reported-message metadata (no body in Splunk if policy requires). Correlate with Postini/Workspace security dashboards. Feed SOC for IOC extraction.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:gmail\" event_name=\"PHISHING_REPORT\"\n| bin _time span=1h\n| stats count by reporter, _time\n| where count > 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Gmail Phishing Report Analysis** — User-reported phishing provides ground-truth for tuning secure links and awareness; volume spikes indicate campaigns.\n\nDocumented **Data sources**: Email log (`PHISHING_REPORT`, user report button events in audit). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:gmail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:gmail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by reporter, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (reports per hour), Table (top reporters), Bar chart (campaign hash if extracted).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.17",
              "n": "Workspace DLP Rule Violations",
              "c": "high",
              "f": "beginner",
              "v": "Google Workspace DLP incidents for Drive, Gmail, Chat—centralized for compliance trending.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "DLP incident export (`rule_name`, `triggered_action`, `resource_type`)",
              "q": "index=gws sourcetype=\"gws:dlp\"\n| stats count by rule_name, severity, actor_email, data_source\n| where severity IN (\"HIGH\",\"CRITICAL\") OR count > 3\n| sort -count",
              "m": "Schedule DLP API or BigQuery export of incidents. Map rules to data classes. Alert on exfiltration-blocked events to external domains.",
              "z": "Bar chart (violations by rule), Table (repeat offenders), Line chart (incident trend).",
              "kfp": "Rule tuning, new sensitive types, and monitor-only policy phases that create extra incidents.",
              "refs": "[CIM: DLP](https://docs.splunk.com/Documentation/CIM/latest/User/DLP)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: DLP incident export (`rule_name`, `triggered_action`, `resource_type`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule DLP API or BigQuery export of incidents. Map rules to data classes. Alert on exfiltration-blocked events to external domains.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:dlp\"\n| stats count by rule_name, severity, actor_email, data_source\n| where severity IN (\"HIGH\",\"CRITICAL\") OR count > 3\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Workspace DLP Rule Violations** — Google Workspace DLP incidents for Drive, Gmail, Chat—centralized for compliance trending.\n\nDocumented **Data sources**: DLP incident export (`rule_name`, `triggered_action`, `resource_type`). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:dlp. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:dlp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rule_name, severity, actor_email, data_source** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where severity IN (\"HIGH\",\"CRITICAL\") OR count > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.signature | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Workspace DLP Rule Violations** — Google Workspace DLP incidents for Drive, Gmail, Chat—centralized for compliance trending.\n\nDocumented **Data sources**: DLP incident export (`rule_name`, `triggered_action`, `resource_type`). **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Data_Loss_Prevention.DLP_Incidents` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (violations by rule), Table (repeat offenders), Line chart (incident trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see when Google’s rules block or flag sensitive data, so teams can tune what is really sensitive versus noisy.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "DLP"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.signature | sort - count",
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.2.18",
              "n": "Google Takeout Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Takeout requests export large user datasets—strong indicator of insider risk or compromised account.",
              "t": "`Splunk_TA_GoogleWorkspace`",
              "d": "Admin audit (`REQUEST_DATA_EXPORT`, `DOWNLOAD_DATA_EXPORT`), login audit for takeout.google.com",
              "q": "index=gws sourcetype=\"gws:admin\" event_name=\"REQUEST_DATA_EXPORT\"\n| table _time, actor.email, target_user, export_size_bytes, services_included\n| sort -_time",
              "m": "Alert on any Takeout request for privileged users. Require HR/legal approval lookup for departing employees. Block self-service takeout for high-risk OUs via policy.",
              "z": "Table (export requests), Single value (exports 7d), Timeline.",
              "kfp": "Legal exports, employee data requests, and offboarding that legitimately use Takeout at volume.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_GoogleWorkspace`.\n• Ensure the following data sources are available: Admin audit (`REQUEST_DATA_EXPORT`, `DOWNLOAD_DATA_EXPORT`), login audit for takeout.google.com.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on any Takeout request for privileged users. Require HR/legal approval lookup for departing employees. Block self-service takeout for high-risk OUs via policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gws sourcetype=\"gws:admin\" event_name=\"REQUEST_DATA_EXPORT\"\n| table _time, actor.email, target_user, export_size_bytes, services_included\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Google Takeout Monitoring** — Takeout requests export large user datasets—strong indicator of insider risk or compromised account.\n\nDocumented **Data sources**: Admin audit (`REQUEST_DATA_EXPORT`, `DOWNLOAD_DATA_EXPORT`), login audit for takeout.google.com. **App/TA** (typical add-on context): `Splunk_TA_GoogleWorkspace`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gws; **sourcetype**: gws:admin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gws, sourcetype=\"gws:admin\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Google Takeout Monitoring**): table _time, actor.email, target_user, export_size_bytes, services_included\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (export requests), Single value (exports 7d), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch large data exports, so a mass download lines up with a person leaving or a real need instead of a surprise.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "Splunk Add-on for Google Workspace",
                "id": 5556,
                "url": "https://splunkbase.splunk.com/app/5556"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.7,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 17,
            "none": 0
          }
        },
        {
          "i": "11.3",
          "n": "Unified Communications",
          "u": [
            {
              "i": "11.3.1",
              "n": "Call Quality Monitoring — MOS (Cisco CUCM)",
              "c": "high",
              "f": "intermediate",
              "v": "MOS scores directly measure voice quality experience. Degradation impacts business communication and customer service.",
              "t": "Cisco UCM CDR/CMR, Webex API",
              "d": "Call Detail Records (CDR), Call Management Records (CMR)",
              "q": "index=voip sourcetype=\"cisco:ucm:cmr\"\n| where MOS < 3.5\n| stats count, avg(MOS) as avg_mos by origDeviceName, destDeviceName\n| sort avg_mos",
              "m": "Ingest CDR/CMR from UCM or cloud UC platform. Parse MOS, jitter, latency, and packet loss. Alert when MOS drops below 3.5 (fair quality). Correlate with network metrics to identify root cause. Track per-site quality.",
              "z": "Gauge (average MOS), Line chart (MOS trend), Table (poor quality calls), Heatmap (site × quality).",
              "kfp": "Transcoding, codec negotiation, and remote worker Wi-Fi; short dips during handoffs are often benign.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco UCM CDR/CMR, Webex API.\n• Ensure the following data sources are available: Call Detail Records (CDR), Call Management Records (CMR).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CDR/CMR from UCM or cloud UC platform. Parse MOS, jitter, latency, and packet loss. Alert when MOS drops below 3.5 (fair quality). Correlate with network metrics to identify root cause. Track per-site quality.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:cmr\"\n| where MOS < 3.5\n| stats count, avg(MOS) as avg_mos by origDeviceName, destDeviceName\n| sort avg_mos\n```\n\nUnderstanding this SPL\n\n**Call Quality Monitoring — MOS (Cisco CUCM)** — MOS scores directly measure voice quality experience. Degradation impacts business communication and customer service.\n\nDocumented **Data sources**: Call Detail Records (CDR), Call Management Records (CMR). **App/TA** (typical add-on context): Cisco UCM CDR/CMR, Webex API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:cmr. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:cmr\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where MOS < 3.5` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by origDeviceName, destDeviceName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (average MOS), Line chart (MOS trend), Table (poor quality calls), Heatmap (site × quality).",
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Communications Manager (CUCM), Unity Connection, IP Phone 7800 series, IP Phone 8800 series, Cisco Webex Calling, Webex Meetings, Webex Room Kit, Webex Board, Webex Desk",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucm",
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.2",
              "n": "Call Volume Trending (Cisco CUCM)",
              "c": "medium",
              "f": "intermediate",
              "v": "Call volume patterns support capacity planning and detect anomalies (toll fraud, system issues).",
              "t": "UCM CDR input",
              "d": "CDR records",
              "q": "index=voip sourcetype=\"cisco:ucm:cdr\"\n| timechart span=1h count as calls\n| predict calls as predicted",
              "m": "Ingest CDR data. Track call volumes by hour, day, site. Baseline normal patterns. Alert on significant drops (possible outage) or spikes (possible toll fraud). Report on peak hour utilization.",
              "z": "Line chart (call volume with prediction), Bar chart (calls by site), Area chart (hourly distribution).",
              "kfp": "Business-hour peaks, marketing campaigns, and seasonal call volume; compare to a same-week baseline.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: UCM CDR input.\n• Ensure the following data sources are available: CDR records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CDR data. Track call volumes by hour, day, site. Baseline normal patterns. Alert on significant drops (possible outage) or spikes (possible toll fraud). Report on peak hour utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:cdr\"\n| timechart span=1h count as calls\n| predict calls as predicted\n```\n\nUnderstanding this SPL\n\n**Call Volume Trending (Cisco CUCM)** — Call volume patterns support capacity planning and detect anomalies (toll fraud, system issues).\n\nDocumented **Data sources**: CDR records. **App/TA** (typical add-on context): UCM CDR input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:cdr. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:cdr\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Call Volume Trending (Cisco CUCM)**): predict calls as predicted\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (call volume with prediction), Bar chart (calls by site), Area chart (hourly distribution).",
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Communications Manager (CUCM), Unity Connection, IP Phone 7800 series, IP Phone 8800 series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "cisco_ucm"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.3",
              "n": "VoIP Jitter/Latency/Packet Loss (Cisco CUCM)",
              "c": "high",
              "f": "beginner",
              "v": "Transport quality metrics identify network issues affecting voice quality before users report problems.",
              "t": "UCM CMR, RTCP data",
              "d": "CMR records (jitter, latency, packet loss), RTCP reports",
              "q": "index=voip sourcetype=\"cisco:ucm:cmr\"\n| where jitter > 30 OR latency > 150 OR packet_loss_pct > 1\n| stats count by origDeviceName, destDeviceName, jitter, latency, packet_loss_pct",
              "m": "Parse transport quality metrics from CMR. Alert on jitter >30ms, latency >150ms, or packet loss >1%. Correlate with WAN/LAN performance metrics. Track per-site to identify network segments needing attention.",
              "z": "Multi-metric chart (jitter, latency, packet loss), Table (calls with poor transport), Heatmap (site × metric).",
              "kfp": "Congestion on WAN or VPN paths; not every jitter spike is a voice platform fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: UCM CMR, RTCP data.\n• Ensure the following data sources are available: CMR records (jitter, latency, packet loss), RTCP reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse transport quality metrics from CMR. Alert on jitter >30ms, latency >150ms, or packet loss >1%. Correlate with WAN/LAN performance metrics. Track per-site to identify network segments needing attention.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:cmr\"\n| where jitter > 30 OR latency > 150 OR packet_loss_pct > 1\n| stats count by origDeviceName, destDeviceName, jitter, latency, packet_loss_pct\n```\n\nUnderstanding this SPL\n\n**VoIP Jitter/Latency/Packet Loss (Cisco CUCM)** — Transport quality metrics identify network issues affecting voice quality before users report problems.\n\nDocumented **Data sources**: CMR records (jitter, latency, packet loss), RTCP reports. **App/TA** (typical add-on context): UCM CMR, RTCP data. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:cmr. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:cmr\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where jitter > 30 OR latency > 150 OR packet_loss_pct > 1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by origDeviceName, destDeviceName, jitter, latency, packet_loss_pct** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Multi-metric chart (jitter, latency, packet loss), Table (calls with poor transport), Heatmap (site × metric).",
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Communications Manager (CUCM), Unity Connection, IP Phone 7800 series, IP Phone 8800 series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.4",
              "n": "Trunk Utilization (Cisco CUCM)",
              "c": "high",
              "f": "beginner",
              "v": "Trunk capacity limits cause busy signals and missed calls. Monitoring prevents capacity-related service degradation.",
              "t": "UCM CDR, gateway logs",
              "d": "CDR records, gateway/trunk metrics",
              "q": "index=voip sourcetype=\"cisco:ucm:cdr\"\n| bin _time span=15m\n| stats dc(globalCallID_callId) as concurrent_calls by _time, trunk_group\n| where concurrent_calls > 20",
              "m": "Track concurrent calls per trunk group from CDR data. Alert when utilization exceeds 80% of capacity. Monitor for trunk failures and failover events. Report on peak utilization for capacity planning.",
              "z": "Line chart (trunk utilization), Gauge (% capacity per trunk), Table (trunk utilization summary).",
              "kfp": "Trunk overruns during expected peaks (e.g. retail hours) on properly sized trunks; validate capacity plans.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: UCM CDR, gateway logs.\n• Ensure the following data sources are available: CDR records, gateway/trunk metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack concurrent calls per trunk group from CDR data. Alert when utilization exceeds 80% of capacity. Monitor for trunk failures and failover events. Report on peak utilization for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:cdr\"\n| bin _time span=15m\n| stats dc(globalCallID_callId) as concurrent_calls by _time, trunk_group\n| where concurrent_calls > 20\n```\n\nUnderstanding this SPL\n\n**Trunk Utilization (Cisco CUCM)** — Trunk capacity limits cause busy signals and missed calls. Monitoring prevents capacity-related service degradation.\n\nDocumented **Data sources**: CDR records, gateway/trunk metrics. **App/TA** (typical add-on context): UCM CDR, gateway logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:cdr. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:cdr\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, trunk_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where concurrent_calls > 20` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (trunk utilization), Gauge (% capacity per trunk), Table (trunk utilization summary).",
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Communications Manager (CUCM), Unity Connection, IP Phone 7800 series, IP Phone 8800 series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "cisco_ucm"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.5",
              "n": "Conference Bridge Capacity (Cisco CUCM)",
              "c": "medium",
              "f": "beginner",
              "v": "Conference bridge resource exhaustion prevents users from joining meetings. Monitoring ensures adequate capacity.",
              "t": "Webex API, UCM conference bridge metrics",
              "d": "Conference bridge utilization, Webex meeting data",
              "q": "index=voip sourcetype=\"webex:meetings\"\n| timechart span=1h max(concurrent_participants) as max_participants\n| where max_participants > 500",
              "m": "Track conference bridge resource utilization and concurrent participant counts. Alert when approaching capacity limits. Monitor meeting quality metrics at scale. Report on peak usage patterns for capacity planning.",
              "z": "Line chart (concurrent participants), Single value (peak participants today), Bar chart (meetings by size).",
              "kfp": "Large scheduled conferences, earnings calls, and town halls that fill bridges briefly.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Webex API, UCM conference bridge metrics.\n• Ensure the following data sources are available: Conference bridge utilization, Webex meeting data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack conference bridge resource utilization and concurrent participant counts. Alert when approaching capacity limits. Monitor meeting quality metrics at scale. Report on peak usage patterns for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"webex:meetings\"\n| timechart span=1h max(concurrent_participants) as max_participants\n| where max_participants > 500\n```\n\nUnderstanding this SPL\n\n**Conference Bridge Capacity (Cisco CUCM)** — Conference bridge resource exhaustion prevents users from joining meetings. Monitoring ensures adequate capacity.\n\nDocumented **Data sources**: Conference bridge utilization, Webex meeting data. **App/TA** (typical add-on context): Webex API, UCM conference bridge metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: webex:meetings. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"webex:meetings\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where max_participants > 500` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (concurrent participants), Single value (peak participants today), Bar chart (meetings by size).",
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Communications Manager (CUCM), Unity Connection, IP Phone 7800 series, IP Phone 8800 series, Cisco Webex Calling, Webex Meetings, Webex Room Kit, Webex Board, Webex Desk",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.6",
              "n": "Toll Fraud Detection (Cisco CUCM)",
              "c": "critical",
              "f": "intermediate",
              "v": "Toll fraud causes significant financial loss. International premium-rate calls from compromised systems can cost thousands per hour.",
              "t": "UCM CDR analysis",
              "d": "CDR records (called party number, duration, time of day)",
              "q": "index=voip sourcetype=\"cisco:ucm:cdr\"\n| where match(calledPartyNumber,\"^011|^00\") AND duration > 60\n| stats count, sum(duration) as total_min by callingPartyNumber, calledPartyNumber\n| where count > 10\n| sort -total_min",
              "m": "Monitor CDR for international calls, premium-rate numbers (900, 976), and calls outside business hours. Baseline normal international calling patterns. Alert on anomalous patterns. Block suspicious numbers in real-time.",
              "z": "Table (suspicious calls), Bar chart (international calls by destination), Timeline (unusual calling activity), Geo map (call destinations).",
              "kfp": "Misdialed international numbers, new routes, and marketing callbacks that look like fraud until verified.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: UCM CDR analysis.\n• Ensure the following data sources are available: CDR records (called party number, duration, time of day).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor CDR for international calls, premium-rate numbers (900, 976), and calls outside business hours. Baseline normal international calling patterns. Alert on anomalous patterns. Block suspicious numbers in real-time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:cdr\"\n| where match(calledPartyNumber,\"^011|^00\") AND duration > 60\n| stats count, sum(duration) as total_min by callingPartyNumber, calledPartyNumber\n| where count > 10\n| sort -total_min\n```\n\nUnderstanding this SPL\n\n**Toll Fraud Detection (Cisco CUCM)** — Toll fraud causes significant financial loss. International premium-rate calls from compromised systems can cost thousands per hour.\n\nDocumented **Data sources**: CDR records (called party number, duration, time of day). **App/TA** (typical add-on context): UCM CDR analysis. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:cdr. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:cdr\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(calledPartyNumber,\"^011|^00\") AND duration > 60` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by callingPartyNumber, calledPartyNumber** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious calls), Bar chart (international calls by destination), Timeline (unusual calling activity), Geo map (call destinations).",
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Communications Manager (CUCM), Unity Connection, IP Phone 7800 series, IP Phone 8800 series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch unusual or expensive call patterns, so fraud or a hacked trunk does not run up the phone bill in silence.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "cisco_ucm"
              ],
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "11.3.7",
              "n": "Phone Registration Status (Cisco CUCM)",
              "c": "high",
              "f": "beginner",
              "v": "Mass phone de-registration indicates network or UCM issues affecting the entire communications infrastructure.",
              "t": "UCM syslog, RISPORT API",
              "d": "UCM device status, RISPORT real-time data",
              "q": "index=voip sourcetype=\"cisco:ucm:syslog\"\n| search \"DeviceUnregistered\" OR \"StationDeregister\"\n| timechart span=5m count as deregistrations\n| where deregistrations > 10",
              "m": "Poll UCM RISPORT API for device registration status or forward UCM syslog. Alert on mass de-registrations (>10 devices in 5 minutes). Track registration counts per site. Monitor SRST fallback activations.",
              "z": "Single value (registered phones), Line chart (registration count trend), Table (recently de-registered devices).",
              "kfp": "Phone reboots, firmware updates, and DHCP renewals that drop registration for seconds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: UCM syslog, RISPORT API.\n• Ensure the following data sources are available: UCM device status, RISPORT real-time data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll UCM RISPORT API for device registration status or forward UCM syslog. Alert on mass de-registrations (>10 devices in 5 minutes). Track registration counts per site. Monitor SRST fallback activations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:syslog\"\n| search \"DeviceUnregistered\" OR \"StationDeregister\"\n| timechart span=5m count as deregistrations\n| where deregistrations > 10\n```\n\nUnderstanding this SPL\n\n**Phone Registration Status (Cisco CUCM)** — Mass phone de-registration indicates network or UCM issues affecting the entire communications infrastructure.\n\nDocumented **Data sources**: UCM device status, RISPORT real-time data. **App/TA** (typical add-on context): UCM syslog, RISPORT API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:syslog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where deregistrations > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (registered phones), Line chart (registration count trend), Table (recently de-registered devices).",
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Communications Manager (CUCM), Unity Connection, IP Phone 7800 series, IP Phone 8800 series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.8",
              "n": "Webex Meeting Analytics",
              "c": "medium",
              "f": "beginner",
              "v": "Meeting analytics support collaboration optimization, license management, and quality improvement initiatives.",
              "t": "`ta_cisco_webex_add_on_for_splunk` (GitHub)",
              "d": "Webex meeting/participant data via API",
              "q": "index=webex sourcetype=\"webex:meetings\"\n| stats count as meetings, avg(participant_count) as avg_participants, avg(duration_min) as avg_duration by organizerEmail\n| sort -meetings",
              "m": "Poll Webex API for meeting data. Track meeting counts, participants, duration, and quality. Report on adoption metrics per department. Identify power users and underutilized licenses.",
              "z": "Bar chart (meetings by department), Line chart (meeting volume trend), Table (usage summary), Pie chart (meeting types).",
              "kfp": "Executive briefings, webinars, and all-hands that spike meeting analytics without a fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `ta_cisco_webex_add_on_for_splunk` (GitHub).\n• Ensure the following data sources are available: Webex meeting/participant data via API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Webex API for meeting data. Track meeting counts, participants, duration, and quality. Report on adoption metrics per department. Identify power users and underutilized licenses.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=webex sourcetype=\"webex:meetings\"\n| stats count as meetings, avg(participant_count) as avg_participants, avg(duration_min) as avg_duration by organizerEmail\n| sort -meetings\n```\n\nUnderstanding this SPL\n\n**Webex Meeting Analytics** — Meeting analytics support collaboration optimization, license management, and quality improvement initiatives.\n\nDocumented **Data sources**: Webex meeting/participant data via API. **App/TA** (typical add-on context): `ta_cisco_webex_add_on_for_splunk` (GitHub). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: webex; **sourcetype**: webex:meetings. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=webex, sourcetype=\"webex:meetings\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by organizerEmail** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (meetings by department), Line chart (meeting volume trend), Table (usage summary), Pie chart (meeting types).",
              "script": "",
              "premium": "",
              "hw": "Cisco Webex Calling, Webex Meetings, Webex Room Kit, Webex Board, Webex Desk",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "github"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.9",
              "n": "Mailbox Size and Quota Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Mailbox growth and quota usage help plan storage and avoid user lockouts. Trending supports capacity planning and proactive quota increases.",
              "t": "`Splunk_TA_MS_O365`, Exchange Online / M365 reporting, mailbox stats API",
              "d": "Mailbox size and quota metrics",
              "q": "index=o365 sourcetype=\"exchange:mailbox_stats\"\n| stats latest(mailbox_size_mb) as size_mb, latest(quota_mb) as quota_mb by user, mailbox_type\n| eval used_pct=round((size_mb/quota_mb)*100, 1)\n| where used_pct >= 80\n| sort -used_pct",
              "m": "Ingest mailbox size and quota from Graph API or Exchange reporting. Alert when usage exceeds 80% of quota. Report on top consumers and growth rate by department.",
              "z": "Table (mailboxes near quota), Bar chart (size by user), Line chart (growth trend).",
              "kfp": "Archive projects, large mailbox migrations, and journal mailboxes that sit near quota by design.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_MS_O365`, Exchange Online / M365 reporting, mailbox stats API.\n• Ensure the following data sources are available: Mailbox size and quota metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest mailbox size and quota from Graph API or Exchange reporting. Alert when usage exceeds 80% of quota. Report on top consumers and growth rate by department.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"exchange:mailbox_stats\"\n| stats latest(mailbox_size_mb) as size_mb, latest(quota_mb) as quota_mb by user, mailbox_type\n| eval used_pct=round((size_mb/quota_mb)*100, 1)\n| where used_pct >= 80\n| sort -used_pct\n```\n\nUnderstanding this SPL\n\n**Mailbox Size and Quota Trending** — Mailbox growth and quota usage help plan storage and avoid user lockouts. Trending supports capacity planning and proactive quota increases.\n\nDocumented **Data sources**: Mailbox size and quota metrics. **App/TA** (typical add-on context): `Splunk_TA_MS_O365`, Exchange Online / M365 reporting, mailbox stats API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: exchange:mailbox_stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"exchange:mailbox_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, mailbox_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct >= 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (mailboxes near quota), Bar chart (size by user), Line chart (growth trend).",
              "script": "",
              "premium": "",
              "hw": "Microsoft Exchange Online, Microsoft 365",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "exchange",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.10",
              "n": "Email Forwarding Rule and Auto-Reply Audit",
              "c": "high",
              "f": "beginner",
              "v": "Unauthorized forwarding rules can exfiltrate mail; auto-replies may leak sensitive info. Auditing rule changes supports security and compliance.",
              "t": "`Splunk_TA_microsoft-cloudservices`, `Splunk_TA_MS_O365`, Exchange audit",
              "d": "Exchange/M365 mailbox rule and forwarding audit logs",
              "q": "index=o365 sourcetype=\"o365:audit\"\n| search (Operation=\"New-InboxRule\" OR Operation=\"Set-Mailbox\" OR Message=\"ForwardingRule\")\n| table _time, UserId, Operation, Parameters, ClientIP\n| sort -_time",
              "m": "Ingest mailbox rule and forwarding change events from M365/Exchange. Alert on new forwarding rules or external redirects. Report on rule changes by user and time.",
              "z": "Table (rule changes), Timeline (events), Bar chart (changes by user).",
              "kfp": "Legitimate auto-reply and forward rules for coverage, PTO, and shared functional mailboxes.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_MS_O365`, Exchange audit.\n• Ensure the following data sources are available: Exchange/M365 mailbox rule and forwarding audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest mailbox rule and forwarding change events from M365/Exchange. Alert on new forwarding rules or external redirects. Report on rule changes by user and time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"o365:audit\"\n| search (Operation=\"New-InboxRule\" OR Operation=\"Set-Mailbox\" OR Message=\"ForwardingRule\")\n| table _time, UserId, Operation, Parameters, ClientIP\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Email Forwarding Rule and Auto-Reply Audit** — Unauthorized forwarding rules can exfiltrate mail; auto-replies may leak sensitive info. Auditing rule changes supports security and compliance.\n\nDocumented **Data sources**: Exchange/M365 mailbox rule and forwarding audit logs. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices`, `Splunk_TA_MS_O365`, Exchange audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: o365:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"o365:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Email Forwarding Rule and Auto-Reply Audit**): table _time, UserId, Operation, Parameters, ClientIP\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rule changes), Timeline (events), Bar chart (changes by user).",
              "script": "",
              "premium": "",
              "hw": "Microsoft Exchange Online, Microsoft 365",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "exchange",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                },
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.11",
              "n": "Collaboration App Permission and Consent Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Third-party app consent and OAuth grants can expose data. Auditing consent and permission changes reduces shadow IT and abuse risk.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Entra ID), `Splunk_TA_MS_O365`, M365 audit",
              "d": "App consent, OAuth grant, and permission change events",
              "q": "index=o365 sourcetype=\"azure:audit\"\n| search (ActivityType=\"Consent to application\" OR ActivityType=\"Add app role assignment\")\n| stats count by ApplicationId, ResourceDisplayName, UserPrincipalName\n| sort -count",
              "m": "Ingest Entra ID consent and app role assignment events. Alert on new high-privilege consents or consent by non-admin. Report on app usage and permission scope.",
              "z": "Table (consent events), Bar chart (apps by consent count), Timeline (consent over time).",
              "kfp": "OAuth re-consent during app updates; verify against allow-listed collaboration apps before revoke.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Entra ID), `Splunk_TA_MS_O365`, M365 audit.\n• Ensure the following data sources are available: App consent, OAuth grant, and permission change events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Entra ID consent and app role assignment events. Alert on new high-privilege consents or consent by non-admin. Report on app usage and permission scope.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"azure:audit\"\n| search (ActivityType=\"Consent to application\" OR ActivityType=\"Add app role assignment\")\n| stats count by ApplicationId, ResourceDisplayName, UserPrincipalName\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Collaboration App Permission and Consent Audit** — Third-party app consent and OAuth grants can expose data. Auditing consent and permission changes reduces shadow IT and abuse risk.\n\nDocumented **Data sources**: App consent, OAuth grant, and permission change events. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Entra ID), `Splunk_TA_MS_O365`, M365 audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: azure:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"azure:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by ApplicationId, ResourceDisplayName, UserPrincipalName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (consent events), Bar chart (apps by consent count), Timeline (consent over time).",
              "script": "",
              "premium": "",
              "hw": "Microsoft Entra ID, Microsoft 365",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR unknown is enforced — Splunk UC-11.3.11: Collaboration App Permission and Consent Audit.",
                  "ea": "Saved search 'UC-11.3.11' running on App consent, OAuth grant, and permission change events, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                },
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.12",
              "n": "Voicemail and Call Recording Retention Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Voicemail and call recordings may be subject to retention and deletion policies. Monitoring supports compliance and storage management.",
              "t": "UCM/Teams Call Quality, voicemail system logs",
              "d": "Voicemail and recording metadata, retention policy events",
              "q": "index=ucm sourcetype=\"voicemail:metadata\"\n| eval age_days=floor((now()-_time)/86400)\n| where age_days > 365\n| stats count by mailbox_id, retention_policy\n| sort -count",
              "m": "Ingest voicemail and recording metadata. Compare against retention policy (e.g., 1 year). Alert on items past retention. Report on storage by policy and cleanup actions.",
              "z": "Table (items past retention), Single value (total over-retained), Bar chart (by mailbox).",
              "kfp": "Legal retention extensions, call-center recorder upgrades, and regulator-driven hold policies.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: UCM/Teams Call Quality, voicemail system logs.\n• Ensure the following data sources are available: Voicemail and recording metadata, retention policy events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest voicemail and recording metadata. Compare against retention policy (e.g., 1 year). Alert on items past retention. Report on storage by policy and cleanup actions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ucm sourcetype=\"voicemail:metadata\"\n| eval age_days=floor((now()-_time)/86400)\n| where age_days > 365\n| stats count by mailbox_id, retention_policy\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Voicemail and Call Recording Retention Compliance** — Voicemail and call recordings may be subject to retention and deletion policies. Monitoring supports compliance and storage management.\n\nDocumented **Data sources**: Voicemail and recording metadata, retention policy events. **App/TA** (typical add-on context): UCM/Teams Call Quality, voicemail system logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ucm; **sourcetype**: voicemail:metadata. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ucm, sourcetype=\"voicemail:metadata\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days > 365` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by mailbox_id, retention_policy** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (items past retention), Single value (total over-retained), Bar chart (by mailbox).",
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Communications Manager (CUCM), Unity Connection, IP Phone 7800 series, IP Phone 8800 series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.13",
              "n": "Outbound Email Volume and Domain Anomaly",
              "c": "high",
              "f": "intermediate",
              "v": "Spike in outbound volume or new recipient domains can indicate compromise or data exfiltration. Baseline comparison supports early detection.",
              "t": "`Splunk_TA_MS_O365`, Exchange/M365 message trace, email gateway logs",
              "d": "Outbound message counts, recipient domains",
              "q": "index=mail sourcetype=\"exchange:message_trace\" direction=outbound\n| bin _time span=1h\n| stats count by user, recipient_domain, _time\n| eventstats avg(count) as avg_count by user\n| where count > (avg_count * 3)\n| sort -count",
              "m": "Ingest message trace or gateway logs for outbound mail. Baseline volume per user and domain. Alert on volume spike or high volume to new domains. Correlate with DLP and sign-in data.",
              "z": "Line chart (outbound volume by user), Table (anomalous senders), Bar chart (recipient domains).",
              "kfp": "Marketing sends, system notifications, and migration batches that change outbound mix.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_MS_O365`, Exchange/M365 message trace, email gateway logs.\n• Ensure the following data sources are available: Outbound message counts, recipient domains.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest message trace or gateway logs for outbound mail. Baseline volume per user and domain. Alert on volume spike or high volume to new domains. Correlate with DLP and sign-in data.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mail sourcetype=\"exchange:message_trace\" direction=outbound\n| bin _time span=1h\n| stats count by user, recipient_domain, _time\n| eventstats avg(count) as avg_count by user\n| where count > (avg_count * 3)\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Outbound Email Volume and Domain Anomaly** — Spike in outbound volume or new recipient domains can indicate compromise or data exfiltration. Baseline comparison supports early detection.\n\nDocumented **Data sources**: Outbound message counts, recipient domains. **App/TA** (typical add-on context): `Splunk_TA_MS_O365`, Exchange/M365 message trace, email gateway logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mail; **sourcetype**: exchange:message_trace. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mail, sourcetype=\"exchange:message_trace\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by user, recipient_domain, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > (avg_count * 3)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (outbound volume by user), Table (anomalous senders), Bar chart (recipient domains).",
              "script": "",
              "premium": "",
              "hw": "Microsoft Exchange Online, Microsoft 365",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "exchange",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Office 365",
                "id": 4055,
                "url": "https://splunkbase.splunk.com/app/4055"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.14",
              "n": "Webex Meeting Quality Degradation Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Proactively detects poor meeting experiences by monitoring MOS scores, packet loss, jitter, and latency per participant. Enables IT to identify network segments, ISPs, or locations that consistently deliver degraded quality — before users start complaining. Supports SLA reporting and vendor accountability for Webex Calling and Meetings.",
              "t": "`ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Meeting Qualities API (Pro Pack required)",
              "d": "Webex Meeting Qualities API (audio/video/sharing quality per participant)",
              "q": "index=webex sourcetype=\"webex:meeting_quality\"\n| eval quality_issue=case(audioMOS<3.5, \"Poor Audio MOS\", videoPacketLoss>5, \"High Video Packet Loss\", audioJitter>30, \"High Audio Jitter\", audioLatency>300, \"High Audio Latency\", 1==1, null())\n| where isnotnull(quality_issue)\n| stats count as incidents, avg(audioMOS) as avg_mos, avg(videoPacketLoss) as avg_pkt_loss, avg(audioJitter) as avg_jitter by meetingId, participantEmail, quality_issue\n| sort -incidents\n| table meetingId, participantEmail, quality_issue, incidents, avg_mos, avg_pkt_loss, avg_jitter",
              "m": "Configure a modular input to poll the Webex Meeting Qualities API every 10 minutes (the minimum interval allowed). Requires Pro Pack licensing and an integration with the `analytics:read_all` scope. Quality data is available 10 minutes after a meeting starts and up to 7 days after. Set alert thresholds: MOS < 3.5, packet loss > 5%, jitter > 30 ms, latency > 300 ms. Correlate with participant location and network data to pinpoint root causes.",
              "z": "Line chart (MOS trend over time), Heat map (quality by location/building), Table (worst participants by avg MOS), Single value panels (current avg MOS, packet loss).",
              "kfp": "Client-side Wi-Fi, VPN, and home ISP issues; correlate before blaming Webex cloud.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Meeting Qualities API (Pro Pack required).\n• Ensure the following data sources are available: Webex Meeting Qualities API (audio/video/sharing quality per participant).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure a modular input to poll the Webex Meeting Qualities API every 10 minutes (the minimum interval allowed). Requires Pro Pack licensing and an integration with the `analytics:read_all` scope. Quality data is available 10 minutes after a meeting starts and up to 7 days after. Set alert thresholds: MOS < 3.5, packet loss > 5%, jitter > 30 ms, latency > 300 ms. Correlate with participant location and network data to pinpoint root causes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=webex sourcetype=\"webex:meeting_quality\"\n| eval quality_issue=case(audioMOS<3.5, \"Poor Audio MOS\", videoPacketLoss>5, \"High Video Packet Loss\", audioJitter>30, \"High Audio Jitter\", audioLatency>300, \"High Audio Latency\", 1==1, null())\n| where isnotnull(quality_issue)\n| stats count as incidents, avg(audioMOS) as avg_mos, avg(videoPacketLoss) as avg_pkt_loss, avg(audioJitter) as avg_jitter by meetingId, participantEmail, quality_issue\n| sort -incidents\n| table meetingId, participantEmail, quality_issue, incidents, avg_mos, avg_pkt_loss, avg_jitter\n```\n\nUnderstanding this SPL\n\n**Webex Meeting Quality Degradation Detection** — Proactively detects poor meeting experiences by monitoring MOS scores, packet loss, jitter, and latency per participant. Enables IT to identify network segments, ISPs, or locations that consistently deliver degraded quality — before users start complaining. Supports SLA reporting and vendor accountability for Webex Calling and Meetings.\n\nDocumented **Data sources**: Webex Meeting Qualities API (audio/video/sharing quality per participant). **App/TA** (typical add-on context): `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Meeting Qualities API (Pro Pack required). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: webex; **sourcetype**: webex:meeting_quality. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=webex, sourcetype=\"webex:meeting_quality\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **quality_issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(quality_issue)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by meetingId, participantEmail, quality_issue** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Webex Meeting Quality Degradation Detection**): table meetingId, participantEmail, quality_issue, incidents, avg_mos, avg_pkt_loss, avg_jitter\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (MOS trend over time), Heat map (quality by location/building), Table (worst participants by avg MOS), Single value panels (current avg MOS, packet loss).",
              "script": "",
              "premium": "",
              "hw": "Cisco Webex Meetings, Webex Calling, Webex Room Kit, Webex Room Kit Pro, Webex Board 55/70/85, Webex Desk, Webex Desk Pro",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "github"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.15",
              "n": "Webex Calling CDR and Call Flow Analysis",
              "c": "medium",
              "f": "intermediate",
              "v": "Provides full call detail records for Webex Calling, enabling cost analysis, trunk utilization tracking, call flow optimization, and identification of failed or abandoned calls. Unlike on-premise UCM CDRs, this covers cloud-native Webex Calling deployments. Supports chargebacks, capacity planning, and troubleshooting call routing issues through auto-attendants and call queues.",
              "t": "`ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Detailed Call History API (Pro Pack required)",
              "d": "Webex Detailed Call History API (caller, callee, duration, type, disposition, call legs)",
              "q": "index=webex sourcetype=\"webex:calling_cdr\"\n| eval call_duration_min=round(duration/60, 1)\n| stats count as total_calls, avg(call_duration_min) as avg_duration, sum(call_duration_min) as total_minutes, count(eval(disposition=\"FAILURE\")) as failed_calls by callingLineId, direction\n| eval failure_rate=round(failed_calls/total_calls*100, 1)\n| where failure_rate > 5 OR total_calls > 100\n| sort -total_calls\n| table callingLineId, direction, total_calls, avg_duration, total_minutes, failed_calls, failure_rate",
              "m": "Configure the Webex Detailed Call History API integration with Pro Pack licensing. Poll at hourly intervals (data available within 24 hours of call completion). Map calling line IDs to users and departments via lookup tables. Create separate dashboards for inbound vs. outbound analysis, PSTN vs. on-net call distribution, and per-location breakdowns.",
              "z": "Timechart (call volume by hour/day), Sankey diagram (call routing flow), Table (CDR detail with filters), Bar chart (calls by location/department).",
              "kfp": "After-hours on-call, callback queues, and test calls in lab dial plans.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Detailed Call History API (Pro Pack required).\n• Ensure the following data sources are available: Webex Detailed Call History API (caller, callee, duration, type, disposition, call legs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure the Webex Detailed Call History API integration with Pro Pack licensing. Poll at hourly intervals (data available within 24 hours of call completion). Map calling line IDs to users and departments via lookup tables. Create separate dashboards for inbound vs. outbound analysis, PSTN vs. on-net call distribution, and per-location breakdowns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=webex sourcetype=\"webex:calling_cdr\"\n| eval call_duration_min=round(duration/60, 1)\n| stats count as total_calls, avg(call_duration_min) as avg_duration, sum(call_duration_min) as total_minutes, count(eval(disposition=\"FAILURE\")) as failed_calls by callingLineId, direction\n| eval failure_rate=round(failed_calls/total_calls*100, 1)\n| where failure_rate > 5 OR total_calls > 100\n| sort -total_calls\n| table callingLineId, direction, total_calls, avg_duration, total_minutes, failed_calls, failure_rate\n```\n\nUnderstanding this SPL\n\n**Webex Calling CDR and Call Flow Analysis** — Provides full call detail records for Webex Calling, enabling cost analysis, trunk utilization tracking, call flow optimization, and identification of failed or abandoned calls. Unlike on-premise UCM CDRs, this covers cloud-native Webex Calling deployments. Supports chargebacks, capacity planning, and troubleshooting call routing issues through auto-attendants and call queues.\n\nDocumented **Data sources**: Webex Detailed Call History API (caller, callee, duration, type, disposition, call legs). **App/TA** (typical add-on context): `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Detailed Call History API (Pro Pack required). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: webex; **sourcetype**: webex:calling_cdr. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=webex, sourcetype=\"webex:calling_cdr\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **call_duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by callingLineId, direction** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **failure_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failure_rate > 5 OR total_calls > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Webex Calling CDR and Call Flow Analysis**): table callingLineId, direction, total_calls, avg_duration, total_minutes, failed_calls, failure_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (call volume by hour/day), Sankey diagram (call routing flow), Table (CDR detail with filters), Bar chart (calls by location/department).",
              "script": "",
              "premium": "",
              "hw": "Cisco Webex Calling, Webex Desk Phone, IP Phone 8800 series (MPP), IP Phone 6800 series (MPP)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "github"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.16",
              "n": "Webex Calling Queue Performance and SLA",
              "c": "critical",
              "f": "intermediate",
              "v": "Monitors call queue health metrics that directly impact customer experience: average wait time, abandon rate, SLA compliance, and agent availability. Enables real-time staffing decisions and long-term workforce planning. High abandon rates or long wait times indicate understaffing or routing problems that need immediate attention.",
              "t": "`ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Calling Analytics API",
              "d": "Webex Calling Queue Analytics API (queue stats, agent stats, wait times)",
              "q": "index=webex sourcetype=\"webex:calling_queue\"\n| bin _time span=15m\n| stats avg(avgWaitTimeSec) as avg_wait, sum(abandonedCalls) as abandoned, sum(answeredCalls) as answered, sum(totalCalls) as total by _time, queueName\n| eval abandon_rate=if(total>0, round(abandoned/total*100, 1), 0)\n| eval sla_met=if(avg_wait<=30, \"Yes\", \"No\")\n| where abandon_rate > 10 OR avg_wait > 60\n| table _time, queueName, total, answered, abandoned, abandon_rate, avg_wait, sla_met",
              "m": "Integrate with the Webex Calling Analytics API to retrieve queue statistics. Poll every 15 minutes for near-real-time monitoring. Define SLA thresholds per queue (e.g., 80% of calls answered within 30 seconds). Alert when abandon rate exceeds 10% or average wait time exceeds 60 seconds. Create wallboard-style dashboards for supervisors showing live queue status.",
              "z": "Real-time single value panels (current queue depth, avg wait), Line chart (abandon rate over time), Column chart (calls by queue), Table (SLA compliance by queue and time).",
              "kfp": "Staffing surges, seasonal campaigns, and training days that change queue times.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Calling Analytics API.\n• Ensure the following data sources are available: Webex Calling Queue Analytics API (queue stats, agent stats, wait times).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate with the Webex Calling Analytics API to retrieve queue statistics. Poll every 15 minutes for near-real-time monitoring. Define SLA thresholds per queue (e.g., 80% of calls answered within 30 seconds). Alert when abandon rate exceeds 10% or average wait time exceeds 60 seconds. Create wallboard-style dashboards for supervisors showing live queue status.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=webex sourcetype=\"webex:calling_queue\"\n| bin _time span=15m\n| stats avg(avgWaitTimeSec) as avg_wait, sum(abandonedCalls) as abandoned, sum(answeredCalls) as answered, sum(totalCalls) as total by _time, queueName\n| eval abandon_rate=if(total>0, round(abandoned/total*100, 1), 0)\n| eval sla_met=if(avg_wait<=30, \"Yes\", \"No\")\n| where abandon_rate > 10 OR avg_wait > 60\n| table _time, queueName, total, answered, abandoned, abandon_rate, avg_wait, sla_met\n```\n\nUnderstanding this SPL\n\n**Webex Calling Queue Performance and SLA** — Monitors call queue health metrics that directly impact customer experience: average wait time, abandon rate, SLA compliance, and agent availability. Enables real-time staffing decisions and long-term workforce planning. High abandon rates or long wait times indicate understaffing or routing problems that need immediate attention.\n\nDocumented **Data sources**: Webex Calling Queue Analytics API (queue stats, agent stats, wait times). **App/TA** (typical add-on context): `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Calling Analytics API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: webex; **sourcetype**: webex:calling_queue. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=webex, sourcetype=\"webex:calling_queue\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, queueName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **abandon_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where abandon_rate > 10 OR avg_wait > 60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Webex Calling Queue Performance and SLA**): table _time, queueName, total, answered, abandoned, abandon_rate, avg_wait, sla_met\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Real-time single value panels (current queue depth, avg wait), Line chart (abandon rate over time), Column chart (calls by queue), Table (SLA compliance by queue and time).",
              "script": "",
              "premium": "",
              "hw": "Cisco Webex Calling, Webex Contact Center",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "github"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.17",
              "n": "Webex Admin Audit Trail",
              "c": "high",
              "f": "beginner",
              "v": "Provides full accountability for administrative changes in Webex Control Hub — user provisioning, policy changes, license assignments, security settings, and integration modifications. Required for compliance audits (SOX, HIPAA, ISO 27001) and security investigations. Detects unauthorized admin actions or compromised admin accounts through anomaly detection on change volume and timing.",
              "t": "`ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Admin Audit Events API via HEC webhook",
              "d": "Webex Admin Audit Events API (admin actions, actor, target, timestamp)",
              "q": "index=webex sourcetype=\"webex:admin_audit\"\n| stats count as changes by actorEmail, actionType, targetType\n| eventstats sum(changes) as total_by_admin by actorEmail\n| sort -total_by_admin\n| table actorEmail, actionType, targetType, changes, total_by_admin",
              "m": "Register a Webex integration with the `audit:events_read` scope and full admin credentials. Poll the Admin Audit Events API every 5 minutes. Enrich events with admin role information from the People API. Alert on: changes outside business hours, bulk user deletions (>10 in 1 hour), security policy modifications, and admin actions from new IP addresses or locations.",
              "z": "Timeline (admin changes over time), Table (audit log with filters), Bar chart (changes by admin), Pie chart (change types).",
              "kfp": "Planned break-glass, partner support logins, and read-only admin roles during maintenance.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Admin Audit Events API via HEC webhook.\n• Ensure the following data sources are available: Webex Admin Audit Events API (admin actions, actor, target, timestamp).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRegister a Webex integration with the `audit:events_read` scope and full admin credentials. Poll the Admin Audit Events API every 5 minutes. Enrich events with admin role information from the People API. Alert on: changes outside business hours, bulk user deletions (>10 in 1 hour), security policy modifications, and admin actions from new IP addresses or locations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=webex sourcetype=\"webex:admin_audit\"\n| stats count as changes by actorEmail, actionType, targetType\n| eventstats sum(changes) as total_by_admin by actorEmail\n| sort -total_by_admin\n| table actorEmail, actionType, targetType, changes, total_by_admin\n```\n\nUnderstanding this SPL\n\n**Webex Admin Audit Trail** — Provides full accountability for administrative changes in Webex Control Hub — user provisioning, policy changes, license assignments, security settings, and integration modifications. Required for compliance audits (SOX, HIPAA, ISO 27001) and security investigations. Detects unauthorized admin actions or compromised admin accounts through anomaly detection on change volume and timing.\n\nDocumented **Data sources**: Webex Admin Audit Events API (admin actions, actor, target, timestamp). **App/TA** (typical add-on context): `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Admin Audit Events API via HEC webhook. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: webex; **sourcetype**: webex:admin_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=webex, sourcetype=\"webex:admin_audit\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by actorEmail, actionType, targetType** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by actorEmail** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Webex Admin Audit Trail**): table actorEmail, actionType, targetType, changes, total_by_admin\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Webex Admin Audit Trail** — Provides full accountability for administrative changes in Webex Control Hub — user provisioning, policy changes, license assignments, security settings, and integration modifications. Required for compliance audits (SOX, HIPAA, ISO 27001) and security investigations. Detects unauthorized admin actions or compromised admin accounts through anomaly detection on change volume and timing.\n\nDocumented **Data sources**: Webex Admin Audit Events API (admin actions, actor, target, timestamp). **App/TA** (typical add-on context): `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Admin Audit Events API via HEC webhook. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (admin changes over time), Table (audit log with filters), Bar chart (changes by admin), Pie chart (change types).",
              "script": "",
              "premium": "",
              "hw": "Cisco Webex Control Hub",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "cisco",
                "github"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.18",
              "n": "Webex DLP and File Compliance Monitoring",
              "c": "critical",
              "f": "advanced",
              "v": "Monitors real-time file sharing across Webex spaces for data loss prevention violations. Files are scanned before becoming accessible to other members, preventing sensitive data exposure. Tracks DLP approval/rejection decisions, identifies repeat offenders, and provides evidence for compliance investigations. Critical for organizations handling PII, PHI, financial data, or intellectual property in Webex.",
              "t": "`ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Compliance webhooks via HEC (Pro Pack required)",
              "d": "Webex Compliance Events API, Real-time File DLP webhook events",
              "q": "index=webex sourcetype=\"webex:compliance\"\n| search eventType=\"file_*\" OR eventType=\"dlp_*\"\n| eval outcome=case(dlpAction=\"approved\", \"Approved\", dlpAction=\"rejected\", \"Blocked\", dlpAction=\"default_approved\", \"Default Approved\", 1==1, \"Unknown\")\n| stats count as total_files, count(eval(outcome=\"Blocked\")) as blocked, count(eval(outcome=\"Approved\")) as approved by actorEmail, spaceName\n| where blocked > 0\n| eval block_rate=round(blocked/total_files*100, 1)\n| sort -blocked\n| table actorEmail, spaceName, total_files, approved, blocked, block_rate",
              "m": "Enable real-time file DLP in Webex Control Hub (requires Pro Pack and Compliance Officer role). Configure organization-level webhooks with `ownedBy=org` to receive file events. Integrate with your DLP provider (Cisco Cloudlock, Microsoft DLP, Symantec, etc.) for policy enforcement. Forward webhook events to Splunk via HEC. Alert on: any blocked file, users with >3 violations per day, and DLP bypasses (default approved when DLP scanner is unreachable).",
              "z": "Table (DLP violations with file type and actor), Bar chart (violations by user), Timechart (violation trend), Single value (total blocks today).",
              "kfp": "DLP false positives, default-approved paths, and large meetings with many file shares at once.",
              "refs": "[CIM: DLP](https://docs.splunk.com/Documentation/CIM/latest/User/DLP)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Compliance webhooks via HEC (Pro Pack required).\n• Ensure the following data sources are available: Webex Compliance Events API, Real-time File DLP webhook events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable real-time file DLP in Webex Control Hub (requires Pro Pack and Compliance Officer role). Configure organization-level webhooks with `ownedBy=org` to receive file events. Integrate with your DLP provider (Cisco Cloudlock, Microsoft DLP, Symantec, etc.) for policy enforcement. Forward webhook events to Splunk via HEC. Alert on: any blocked file, users with >3 violations per day, and DLP bypasses (default approved when DLP scanner is unreachable).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=webex sourcetype=\"webex:compliance\"\n| search eventType=\"file_*\" OR eventType=\"dlp_*\"\n| eval outcome=case(dlpAction=\"approved\", \"Approved\", dlpAction=\"rejected\", \"Blocked\", dlpAction=\"default_approved\", \"Default Approved\", 1==1, \"Unknown\")\n| stats count as total_files, count(eval(outcome=\"Blocked\")) as blocked, count(eval(outcome=\"Approved\")) as approved by actorEmail, spaceName\n| where blocked > 0\n| eval block_rate=round(blocked/total_files*100, 1)\n| sort -blocked\n| table actorEmail, spaceName, total_files, approved, blocked, block_rate\n```\n\nUnderstanding this SPL\n\n**Webex DLP and File Compliance Monitoring** — Monitors real-time file sharing across Webex spaces for data loss prevention violations. Files are scanned before becoming accessible to other members, preventing sensitive data exposure. Tracks DLP approval/rejection decisions, identifies repeat offenders, and provides evidence for compliance investigations. Critical for organizations handling PII, PHI, financial data, or intellectual property in Webex.\n\nDocumented **Data sources**: Webex Compliance Events API, Real-time File DLP webhook events. **App/TA** (typical add-on context): `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Compliance webhooks via HEC (Pro Pack required). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: webex; **sourcetype**: webex:compliance. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=webex, sourcetype=\"webex:compliance\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **outcome** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by actorEmail, spaceName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where blocked > 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **block_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Webex DLP and File Compliance Monitoring**): table actorEmail, spaceName, total_files, approved, blocked, block_rate\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.action, DLP_Incidents.src, DLP_Incidents.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Webex DLP and File Compliance Monitoring** — Monitors real-time file sharing across Webex spaces for data loss prevention violations. Files are scanned before becoming accessible to other members, preventing sensitive data exposure. Tracks DLP approval/rejection decisions, identifies repeat offenders, and provides evidence for compliance investigations. Critical for organizations handling PII, PHI, financial data, or intellectual property in Webex.\n\nDocumented **Data sources**: Webex Compliance Events API, Real-time File DLP webhook events. **App/TA** (typical add-on context): `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Compliance webhooks via HEC (Pro Pack required). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Data_Loss_Prevention.DLP_Incidents` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (DLP violations with file type and actor), Bar chart (violations by user), Timechart (violation trend), Single value (total blocks today).",
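              "_spl_sketch": "Optional companion alert — a minimal sketch for the repeat-offender (>3 blocks/day) and default-approved-bypass rules in Step 1, assuming the same sourcetype and `dlpAction` values as the main search:\n\n```spl\nindex=webex sourcetype=\"webex:compliance\" (eventType=\"file_*\" OR eventType=\"dlp_*\")\n| bin _time span=1d\n| stats count(eval(dlpAction=\"rejected\")) as blocked, count(eval(dlpAction=\"default_approved\")) as bypassed by _time, actorEmail\n| where blocked > 3 OR bypassed > 0\n| sort -blocked\n```\n\nSchedule it daily; `bypassed > 0` surfaces files that slipped through while the DLP scanner was unreachable.",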
              "script": "",
              "premium": "",
              "hw": "Cisco Webex Messaging, Webex Control Hub",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch file checks in Webex spaces, so blocked or approved files match what the business expects for sensitive data.",
              "mtype": [
                "Security"
              ],
              "a": [
                "DLP"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.action, DLP_Incidents.src, DLP_Incidents.user | sort - count",
              "e": [
                "cisco",
                "github"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.19",
              "n": "Webex Device Health and Environmental Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors the health and environmental conditions of Webex room devices (Room Kit, Board, Desk series). RoomOS devices report temperature, humidity, ambient noise, air quality, and occupancy data via RoomAnalytics. Detecting devices that go offline, run outdated firmware, or operate in poor environmental conditions prevents meeting disruptions and protects hardware investments. Occupancy data from room sensors supports real estate optimization.",
              "t": "`ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Devices API via HEC or custom modular input",
              "d": "Webex Devices API (device status, software), RoomAnalytics xAPI (temperature, humidity, noise, air quality, people count)",
              "q": "index=webex sourcetype=\"webex:devices\"\n| eval status_issue=case(connectionStatus=\"disconnected\", \"Offline\", softwareCurrent!=softwareAvailable, \"Firmware Outdated\", temperature>35, \"High Temperature\", humidity>70, \"High Humidity\", ambientNoise>65, \"Noisy Environment\", 1==1, null())\n| where isnotnull(status_issue)\n| stats count by displayName, workspaceName, product, connectionStatus, status_issue, softwareCurrent\n| sort -count\n| table displayName, workspaceName, product, status_issue, connectionStatus, softwareCurrent",
              "m": "Register a Webex integration with `spark-admin:devices_read` and `spark-admin:workspaces_read` scopes. Poll device status every 5 minutes. Enable RoomAnalytics on devices via Control Hub or xAPI (`xConfiguration RoomAnalytics PeoplePresenceDetector: On`). For environmental data, enable `xConfiguration RoomAnalytics AmbientNoise` and temperature/humidity reporting. Historical environmental data is available for up to 30 days. Alert on devices offline >15 minutes, firmware more than one version behind, or temperature/humidity outside safe ranges.",
              "z": "Map (device locations with status), Table (offline/unhealthy devices), Line chart (environmental trends by room), Single value panels (devices online, firmware compliance %).",
              "kfp": "Planned power work, HVAC maintenance, and room equipment swaps in meeting spaces.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Devices API via HEC or custom modular input.\n• Ensure the following data sources are available: Webex Devices API (device status, software), RoomAnalytics xAPI (temperature, humidity, noise, air quality, people count).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRegister a Webex integration with `spark-admin:devices_read` and `spark-admin:workspaces_read` scopes. Poll device status every 5 minutes. Enable RoomAnalytics on devices via Control Hub or xAPI (`xConfiguration RoomAnalytics PeoplePresenceDetector: On`). For environmental data, enable `xConfiguration RoomAnalytics AmbientNoise` and temperature/humidity reporting. Historical environmental data is available for up to 30 days. Alert on devices offline >15 minutes, firmware more than one version be…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=webex sourcetype=\"webex:devices\"\n| eval status_issue=case(connectionStatus=\"disconnected\", \"Offline\", softwareCurrent!=softwareAvailable, \"Firmware Outdated\", temperature>35, \"High Temperature\", humidity>70, \"High Humidity\", ambientNoise>65, \"Noisy Environment\", 1==1, null())\n| where isnotnull(status_issue)\n| stats count by displayName, workspaceName, product, connectionStatus, status_issue, softwareCurrent\n| sort -count\n| table displayName, workspaceName, product, status_issue, connectionStatus, softwareCurrent\n```\n\nUnderstanding this SPL\n\n**Webex Device Health and Environmental Monitoring** — Monitors the health and environmental conditions of Webex room devices (Room Kit, Board, Desk series). RoomOS devices report temperature, humidity, ambient noise, air quality, and occupancy data via RoomAnalytics. Detecting devices that go offline, run outdated firmware, or operate in poor environmental conditions prevents meeting disruptions and protects hardware investments. Occupancy data from room sensors supports real estate optimization.\n\nDocumented **Data sources**: Webex Devices API (device status, software), RoomAnalytics xAPI (temperature, humidity, noise, air quality, people count). **App/TA** (typical add-on context): `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Devices API via HEC or custom modular input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: webex; **sourcetype**: webex:devices. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=webex, sourcetype=\"webex:devices\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status_issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(status_issue)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by displayName, workspaceName, product, connectionStatus, status_issue, softwareCurrent** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Webex Device Health and Environmental Monitoring**): table displayName, workspaceName, product, status_issue, connectionStatus, softwareCurrent\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (device locations with status), Table (offline/unhealthy devices), Line chart (environmental trends by room), Single value panels (devices online, firmware compliance %).",
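              "_spl_sketch": "A possible offline-device alert, assuming the 5-minute polling cadence from Step 1 — a device reporting `disconnected` at both ends of a 20-minute window has been offline for more than 15 minutes:\n\n```spl\nindex=webex sourcetype=\"webex:devices\" earliest=-20m\n| stats earliest(connectionStatus) as first_status, latest(connectionStatus) as last_status, latest(_time) as last_seen by displayName, workspaceName\n| where first_status=\"disconnected\" AND last_status=\"disconnected\"\n| eval last_seen=strftime(last_seen, \"%F %T\")\n| table displayName, workspaceName, last_status, last_seen\n```",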
              "script": "",
              "premium": "",
              "hw": "Cisco Webex Room Kit, Room Kit Mini, Room Kit Pro, Room Kit EQ, Webex Board 55/70/85, Webex Desk, Webex Desk Pro, Webex Room Bar, Room Navigator",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "github"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.20",
              "n": "Webex License Utilization and Adoption Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Compares assigned Webex licenses against actual usage to identify wasted spend. Organizations often over-provision Meetings, Calling, and Messaging licenses. Tracking adoption rates by department and user helps right-size license allocations, reclaim unused seats, and justify renewals. Combines license inventory data with meeting/calling activity to calculate true utilization rates.",
              "t": "`ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Licenses API via HEC or custom modular input",
              "d": "Webex Licenses API (assigned/consumed counts), People API (user status), Meeting Summary Reports (active users)",
              "q": "index=webex sourcetype=\"webex:licenses\"\n| stats latest(totalUnits) as total, latest(consumedUnits) as consumed by licenseName\n| eval available=total-consumed\n| eval utilization_pct=round(consumed/total*100, 1)\n| sort -utilization_pct\n| table licenseName, total, consumed, available, utilization_pct",
              "m": "Poll the Webex Licenses API daily to track assigned vs. consumed license counts. Cross-reference with meeting and calling activity data to identify users who have licenses but haven't used them in 30/60/90 days. Create a lookup mapping users to departments for group-level reporting. Flag users inactive for >60 days as reclamation candidates. Report license utilization weekly to finance and IT leadership.",
              "z": "Bar chart (license utilization by type), Table (inactive users with assigned licenses), Timechart (license consumption trend), Single value panels (total cost, waste estimate).",
              "kfp": "True-up, trial SKUs, and site consolidations; adoption swings after mergers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Licenses API via HEC or custom modular input.\n• Ensure the following data sources are available: Webex Licenses API (assigned/consumed counts), People API (user status), Meeting Summary Reports (active users).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll the Webex Licenses API daily to track assigned vs. consumed license counts. Cross-reference with meeting and calling activity data to identify users who have licenses but haven't used them in 30/60/90 days. Create a lookup mapping users to departments for group-level reporting. Flag users inactive for >60 days as reclamation candidates. Report license utilization weekly to finance and IT leadership.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=webex sourcetype=\"webex:licenses\"\n| stats latest(totalUnits) as total, latest(consumedUnits) as consumed by licenseName\n| eval available=total-consumed\n| eval utilization_pct=round(consumed/total*100, 1)\n| sort -utilization_pct\n| table licenseName, total, consumed, available, utilization_pct\n```\n\nUnderstanding this SPL\n\n**Webex License Utilization and Adoption Tracking** — Compares assigned Webex licenses against actual usage to identify wasted spend. Organizations often over-provision Meetings, Calling, and Messaging licenses. Tracking adoption rates by department and user helps right-size license allocations, reclaim unused seats, and justify renewals. Combines license inventory data with meeting/calling activity to calculate true utilization rates.\n\nDocumented **Data sources**: Webex Licenses API (assigned/consumed counts), People API (user status), Meeting Summary Reports (active users). **App/TA** (typical add-on context): `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Licenses API via HEC or custom modular input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: webex; **sourcetype**: webex:licenses. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=webex, sourcetype=\"webex:licenses\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by licenseName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **available** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Webex License Utilization and Adoption Tracking**): table licenseName, total, consumed, available, utilization_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (license utilization by type), Table (inactive users with assigned licenses), Timechart (license consumption trend), Single value panels (total cost, waste estimate).",
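              "_spl_sketch": "A reclamation-candidate sketch for the 60-day inactivity rule in Step 1. The `webex:people` sourcetype and the `personEmail`/`licenses` fields are placeholders for your People API feed; users with no events in the window are treated as 60+ days inactive:\n\n```spl\nindex=webex sourcetype=\"webex:people\"\n| stats latest(licenses) as licenses by personEmail\n| join type=left personEmail\n    [ search index=webex sourcetype=\"webex:events\" earliest=-60d\n      | stats latest(_time) as last_active by actorEmail\n      | rename actorEmail as personEmail ]\n| eval days_inactive=if(isnull(last_active), 60, round((now()-last_active)/86400, 0))\n| where days_inactive >= 60\n| table personEmail, licenses, days_inactive\n```",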
              "script": "",
              "premium": "",
              "hw": "Cisco Webex Meetings, Webex Calling, Webex Messaging",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track collaboration license use, so we buy only what we need and still have seats when everyone jumps on a call.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "github"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.21",
              "n": "Webex Messaging Activity and Anomaly Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks messaging volume, file sharing, and space activity across Webex Messaging. Establishes baselines per user to detect anomalous bulk messaging that could indicate a compromised account, bot abuse, or policy violation. Also measures collaboration adoption and engagement trends across departments. Identifies inactive spaces consuming storage and active spaces that may need governance controls.",
              "t": "`ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Events/Compliance API via HEC",
              "d": "Webex Events API (messages created/deleted, files shared, memberships changed)",
              "q": "index=webex sourcetype=\"webex:events\" resource=\"messages\"\n| bin _time span=1h\n| stats count as msg_count, dc(roomId) as active_spaces by actorEmail, _time\n| eventstats avg(msg_count) as avg_msgs, stdev(msg_count) as stdev_msgs by actorEmail\n| eval upper_bound=avg_msgs+(3*stdev_msgs)\n| where msg_count > upper_bound AND msg_count > 50\n| table _time, actorEmail, msg_count, active_spaces, avg_msgs, upper_bound",
              "m": "Register a Webex Compliance integration with the `spark-compliance:events_read` scope to access organization-wide message events. Poll the Events API every 5 minutes for near-real-time visibility. Build user activity baselines over 30 days before enabling anomaly alerting. Alert on: message volume exceeding 3 standard deviations from baseline, bulk file uploads (>20 files in 1 hour), and messages containing URLs to known-bad domains (cross-reference with threat intel). Combine with DLP data for comprehensive messaging security.",
              "z": "Timechart (message volume by department), Table (anomalous users), Bar chart (top active spaces), Line chart (adoption trend over 90 days).",
              "kfp": "Bots, integrations, and off-hours red-eye teams that can look anomalous without being malicious.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Events/Compliance API via HEC.\n• Ensure the following data sources are available: Webex Events API (messages created/deleted, files shared, memberships changed).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRegister a Webex Compliance integration with the `spark-compliance:events_read` scope to access organization-wide message events. Poll the Events API every 5 minutes for near-real-time visibility. Build user activity baselines over 30 days before enabling anomaly alerting. Alert on: message volume exceeding 3 standard deviations from baseline, bulk file uploads (>20 files in 1 hour), and messages containing URLs to known-bad domains (cross-reference with threat intel). Combine with DLP data for …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=webex sourcetype=\"webex:events\" resource=\"messages\"\n| bin _time span=1h\n| stats count as msg_count, dc(roomId) as active_spaces by actorEmail, _time\n| eventstats avg(msg_count) as avg_msgs, stdev(msg_count) as stdev_msgs by actorEmail\n| eval upper_bound=avg_msgs+(3*stdev_msgs)\n| where msg_count > upper_bound AND msg_count > 50\n| table _time, actorEmail, msg_count, active_spaces, avg_msgs, upper_bound\n```\n\nUnderstanding this SPL\n\n**Webex Messaging Activity and Anomaly Detection** — Tracks messaging volume, file sharing, and space activity across Webex Messaging. Establishes baselines per user to detect anomalous bulk messaging that could indicate a compromised account, bot abuse, or policy violation. Also measures collaboration adoption and engagement trends across departments. Identifies inactive spaces consuming storage and active spaces that may need governance controls.\n\nDocumented **Data sources**: Webex Events API (messages created/deleted, files shared, memberships changed). **App/TA** (typical add-on context): `ta_cisco_webex_add_on_for_splunk` (GitHub), Webex Events/Compliance API via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: webex; **sourcetype**: webex:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=webex, sourcetype=\"webex:events\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by actorEmail, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` computes the same kind of aggregates but appends them to every row instead of collapsing rows; here it derives each user's baseline **avg_msgs** and **stdev_msgs** **by actorEmail**.\n• `eval` defines or adjusts **upper_bound** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where msg_count > upper_bound AND msg_count > 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Webex Messaging Activity and Anomaly Detection**): table _time, actorEmail, msg_count, active_spaces, avg_msgs, upper_bound\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (message volume by department), Table (anomalous users), Bar chart (top active spaces), Line chart (adoption trend over 90 days).",
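              "_spl_sketch": "A bulk-upload companion alert for the >20-files-per-hour rule in Step 1 — a sketch that assumes message events expose a multivalue `files` field; rename to match your extraction:\n\n```spl\nindex=webex sourcetype=\"webex:events\" resource=\"messages\"\n| where isnotnull(files)\n| bin _time span=1h\n| stats sum(mvcount(files)) as files_shared by _time, actorEmail\n| where files_shared > 20\n| sort -files_shared\n```",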
              "script": "",
              "premium": "",
              "hw": "Cisco Webex Messaging, Webex App",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "github"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.22",
              "n": "SharePoint Site Storage Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "Site collections approaching quota block uploads and disrupt teams. Proactive monitoring enables quota increases or content archival before users hit limits.",
              "t": "Custom (SharePoint REST API, PowerShell)",
              "d": "SharePoint REST API (/_api/site/usage), Get-SPOSite (SharePoint Online)",
              "q": "index=sharepoint sourcetype=\"sharepoint:site_usage\"\n| eval used_mb=coalesce(StorageUsageCurrent/1024, StorageUsageCurrentMB, 0), quota_mb=coalesce(StorageQuota/1024, StorageQuotaMB, 0)\n| eval used_pct=if(quota_mb>0, round(used_mb/quota_mb*100, 1), 0)\n| where used_pct >= 70\n| bin _time span=1d\n| stats latest(used_mb) as used_mb, latest(quota_mb) as quota_mb, latest(used_pct) as used_pct by _time, SiteUrl, SiteName\n| where used_pct >= 70\n| sort -used_pct\n| table SiteUrl, SiteName, used_mb, quota_mb, used_pct",
              "m": "For SharePoint Online, use `Get-SPOSite -IncludePersonalSite $false | Select-Object Url, StorageUsageCurrent, StorageQuota` via a scheduled PowerShell script; for on-prem, call `/_api/site/usage` with app-only or user credentials. Forward JSON to Splunk via HEC. Run daily or every 6 hours. Alert when used_pct >= 80 (warning) or >= 95 (critical). Maintain a lookup of site owners for notification. Report on growth rate and projected quota exhaustion.",
              "z": "Table (sites near quota), Bar chart (usage by site), Gauge (overall tenant usage %), Line chart (storage growth trend).",
              "kfp": "Large sites and file-heavy libraries; storage growth from migration and archive projects.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (SharePoint REST API, PowerShell).\n• Ensure the following data sources are available: SharePoint REST API (/_api/site/usage), Get-SPOSite (SharePoint Online).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFor SharePoint Online, use `Get-SPOSite -IncludePersonalSite $false | Select-Object Url, StorageUsageCurrent, StorageQuota` via a scheduled PowerShell script; for on-prem, call `/_api/site/usage` with app-only or user credentials. Forward JSON to Splunk via HEC. Run daily or every 6 hours. Alert when used_pct >= 80 (warning) or >= 95 (critical). Maintain a lookup of site owners for notification. Report on growth rate and projected quota exhaustion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sharepoint sourcetype=\"sharepoint:site_usage\"\n| eval used_mb=coalesce(StorageUsageCurrent/1024, StorageUsageCurrentMB, 0), quota_mb=coalesce(StorageQuota/1024, StorageQuotaMB, 0)\n| eval used_pct=if(quota_mb>0, round(used_mb/quota_mb*100, 1), 0)\n| where used_pct >= 70\n| bin _time span=1d\n| stats latest(used_mb) as used_mb, latest(quota_mb) as quota_mb, latest(used_pct) as used_pct by _time, SiteUrl, SiteName\n| where used_pct >= 70\n| sort -used_pct\n| table SiteUrl, SiteName, used_mb, quota_mb, used_pct\n```\n\nUnderstanding this SPL\n\n**SharePoint Site Storage Utilization** — Site collections approaching quota block uploads and disrupt teams. Proactive monitoring enables quota increases or content archival before users hit limits.\n\nDocumented **Data sources**: SharePoint REST API (/_api/site/usage), Get-SPOSite (SharePoint Online). **App/TA** (typical add-on context): Custom (SharePoint REST API, PowerShell). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sharepoint; **sourcetype**: sharepoint:site_usage. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sharepoint, sourcetype=\"sharepoint:site_usage\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_mb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct >= 70` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, SiteUrl, SiteName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where used_pct >= 70` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **SharePoint Site Storage Utilization**): table SiteUrl, SiteName, used_mb, quota_mb, used_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sites near quota), Bar chart (usage by site), Gauge (overall tenant usage %), Line chart (storage growth trend).",
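              "_spl_sketch": "For the growth-rate and projected-exhaustion reporting in Step 1, a sketch reusing the same unit normalization as the main search (the 90-day horizon is illustrative):\n\n```spl\nindex=sharepoint sourcetype=\"sharepoint:site_usage\" earliest=-30d\n| eval used_mb=coalesce(StorageUsageCurrent/1024, StorageUsageCurrentMB, 0), quota_mb=coalesce(StorageQuota/1024, StorageQuotaMB, 0)\n| stats earliest(used_mb) as used_30d_ago, latest(used_mb) as used_now, latest(quota_mb) as quota_mb by SiteUrl\n| eval daily_growth_mb=round((used_now-used_30d_ago)/30, 1)\n| eval days_to_full=if(daily_growth_mb>0, round((quota_mb-used_now)/daily_growth_mb, 0), null())\n| where days_to_full <= 90\n| sort days_to_full\n```",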
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "sharepoint"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.23",
              "n": "SharePoint Search Crawl Health",
              "c": "medium",
              "f": "intermediate",
              "v": "Crawl errors, stale content, and index freshness affect findability. Failed crawls leave content undiscoverable; slow crawls delay new content visibility.",
              "t": "Custom (SharePoint Search Admin API)",
              "d": "Search Administration crawl logs, Get-SPEnterpriseSearchCrawlContentSource (on-prem), Search & Intelligence admin reports (M365)",
              "q": "index=sharepoint sourcetype=\"sharepoint:search_crawl\"\n| eval error_type=case(match(ErrorLevel, \"Error|Critical|Failure\"), \"Error\", match(ErrorLevel, \"Warning\"), \"Warning\", 1==1, \"Info\")\n| where error_type=\"Error\" OR error_type=\"Warning\"\n| bin _time span=1h\n| stats count as crawl_errors, dc(ItemId) as unique_items, latest(LastCrawlTime) as last_crawl by _time, ContentSource, error_type\n| where crawl_errors > 0\n| sort -crawl_errors\n| table _time, ContentSource, error_type, crawl_errors, unique_items, last_crawl",
              "m": "For on-prem SharePoint, query crawl logs via Search Admin API or `Get-SPEnterpriseSearchCrawlLog` and stream to Splunk. For M365, use Search & Intelligence admin center APIs or export crawl health reports. Ingest crawl start/end, error count, item count, and last successful crawl per content source. Alert on error count spike (>10 errors in 1 hour) or last successful crawl >24 hours ago for critical sources. Track crawl duration and index freshness.",
              "z": "Line chart (crawl errors over time), Table (content sources with errors), Single value (sources with stale index), Bar chart (errors by content source).",
              "kfp": "Full crawls after index resets, new content sources, and search topology changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (SharePoint Search Admin API).\n• Ensure the following data sources are available: Search Administration crawl logs, Get-SPEnterpriseSearchCrawlContentSource (on-prem), Search & Intelligence admin reports (M365).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFor on-prem SharePoint, query crawl logs via Search Admin API or `Get-SPEnterpriseSearchCrawlLog` and stream to Splunk. For M365, use Search & Intelligence admin center APIs or export crawl health reports. Ingest crawl start/end, error count, item count, and last successful crawl per content source. Alert on error count spike (>10 errors in 1 hour) or last successful crawl >24 hours ago for critical sources. Track crawl duration and index freshness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sharepoint sourcetype=\"sharepoint:search_crawl\"\n| eval error_type=case(match(ErrorLevel, \"Error|Critical|Failure\"), \"Error\", match(ErrorLevel, \"Warning\"), \"Warning\", 1==1, \"Info\")\n| where error_type=\"Error\" OR error_type=\"Warning\"\n| bin _time span=1h\n| stats count as crawl_errors, dc(ItemId) as unique_items, latest(LastCrawlTime) as last_crawl by _time, ContentSource, error_type\n| where crawl_errors > 0\n| sort -crawl_errors\n| table _time, ContentSource, error_type, crawl_errors, unique_items, last_crawl\n```\n\nUnderstanding this SPL\n\n**SharePoint Search Crawl Health** — Crawl errors, stale content, and index freshness affect findability. Failed crawls leave content undiscoverable; slow crawls delay new content visibility.\n\nDocumented **Data sources**: Search Administration crawl logs, Get-SPEnterpriseSearchCrawlContentSource (on-prem), Search & Intelligence admin reports (M365). **App/TA** (typical add-on context): Custom (SharePoint Search Admin API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sharepoint; **sourcetype**: sharepoint:search_crawl. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sharepoint, sourcetype=\"sharepoint:search_crawl\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **error_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_type=\"Error\" OR error_type=\"Warning\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, ContentSource, error_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where crawl_errors > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **SharePoint Search Crawl Health**): table _time, ContentSource, error_type, crawl_errors, unique_items, last_crawl\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (crawl errors over time), Table (content sources with errors), Single value (sources with stale index), Bar chart (errors by content source).",
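              "_spl_sketch": "A stale-crawl companion alert for the >24-hours rule in Step 1 — the `strptime` format is an assumption about how `LastCrawlTime` is serialized; adjust to your feed:\n\n```spl\nindex=sharepoint sourcetype=\"sharepoint:search_crawl\"\n| stats latest(LastCrawlTime) as last_crawl by ContentSource\n| eval last_crawl_epoch=strptime(last_crawl, \"%Y-%m-%dT%H:%M:%S\")\n| eval hours_since=round((now()-last_crawl_epoch)/3600, 1)\n| where hours_since > 24\n| table ContentSource, last_crawl, hours_since\n```",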
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "sharepoint"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.25",
              "n": "SIP Server Availability Monitoring (ThousandEyes)",
              "c": "critical",
              "f": "beginner",
              "v": "Monitors SIP server reachability from ThousandEyes agents, ensuring voice and unified communications infrastructure is responsive from the network path perspective. SIP server failures directly impact call setup for all users.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (SIP Server tests)",
              "q": "`stream_index` thousandeyes.test.type=\"sip-server\"\n| stats avg(sip.server.request.availability) as avg_availability by thousandeyes.test.name, server.address, thousandeyes.source.agent.name\n| where avg_availability < 100\n| sort avg_availability",
              "m": "Create SIP Server tests in ThousandEyes targeting your SIP proxy, session border controllers, or CUCM servers. The OTel metric `sip.server.request.availability` reports 100% when the SIP OPTIONS request succeeds. The `sip.response.status_code` attribute provides the SIP response code. The Splunk App Voice dashboard includes a \"SIP Availability (%)\" panel.",
              "z": "Line chart (availability % over time), Single value (current availability), Table (test, server, agent, availability).",
              "kfp": "ThousandEyes test false positives, scheduled ISP maintenance, and DNS anycast differences.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (SIP Server tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate SIP Server tests in ThousandEyes targeting your SIP proxy, session border controllers, or CUCM servers. The OTel metric `sip.server.request.availability` reports 100% when the SIP OPTIONS request succeeds. The `sip.response.status_code` attribute provides the SIP response code. The Splunk App Voice dashboard includes a \"SIP Availability (%)\" panel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"sip-server\"\n| stats avg(sip.server.request.availability) as avg_availability by thousandeyes.test.name, server.address, thousandeyes.source.agent.name\n| where avg_availability < 100\n| sort avg_availability\n```\n\nUnderstanding this SPL\n\n**SIP Server Availability Monitoring (ThousandEyes)** — Monitors SIP server reachability from ThousandEyes agents, ensuring voice and unified communications infrastructure is responsive from the network path perspective. SIP server failures directly impact call setup for all users.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (SIP Server tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name, server.address, thousandeyes.source.agent.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_availability < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (availability % over time), Single value (current availability), Table (test, server, agent, availability).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use path tests to see where voice and meeting traffic get ugly, so we can tell cloud issues from a bad home connection.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.26",
              "n": "SIP Registration Time Tracking (ThousandEyes)",
              "c": "high",
              "f": "beginner",
              "v": "Slow SIP registration or response times indicate server overload, network congestion, or infrastructure issues that delay call setup and affect voice quality perception.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (SIP Server tests)",
              "q": "`stream_index` thousandeyes.test.type=\"sip-server\"\n| bin _time span=5m\n| stats avg(sip.client.request.duration) as avg_ttfb_s, avg(sip.client.request.total_time) as avg_total_s by _time, thousandeyes.test.name\n| eval avg_ttfb_ms=round(avg_ttfb_s*1000,1), avg_total_ms=round(avg_total_s*1000,1)\n| table _time, thousandeyes.test.name, avg_ttfb_ms, avg_total_ms",
              "m": "The OTel metric `sip.client.request.duration` reports TTFB (time to first SIP response) in seconds, and `sip.client.request.total_time` reports total SIP transaction time. The Splunk App Voice dashboard includes a \"SIP Request Duration (s)\" line chart. Alert when SIP response time consistently exceeds 500 ms — this adds noticeable delay to call setup.",
              "z": "Line chart (TTFB and total time over time), Table (test, TTFB, total time), Single value.",
              "kfp": "Reregistration on SIP expiry, client mobility, and Wi-Fi handoff between access points.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (SIP Server tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe OTel metric `sip.client.request.duration` reports TTFB (time to first SIP response) in seconds, and `sip.client.request.total_time` reports total SIP transaction time. The Splunk App Voice dashboard includes a \"SIP Request Duration (s)\" line chart. Alert when SIP response time consistently exceeds 500 ms — this adds noticeable delay to call setup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"sip-server\"\n| bin _time span=5m\n| stats avg(sip.client.request.duration) as avg_ttfb_s, avg(sip.client.request.total_time) as avg_total_s by _time, thousandeyes.test.name\n| eval avg_ttfb_ms=round(avg_ttfb_s*1000,1), avg_total_ms=round(avg_total_s*1000,1)\n| table _time, thousandeyes.test.name, avg_ttfb_ms, avg_total_ms\n```\n\nUnderstanding this SPL\n\n**SIP Registration Time Tracking (ThousandEyes)** — Slow SIP registration or response times indicate server overload, network congestion, or infrastructure issues that delay call setup and affect voice quality perception.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (SIP Server tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, thousandeyes.test.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_ttfb_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **SIP Registration Time Tracking (ThousandEyes)**): table _time, thousandeyes.test.name, avg_ttfb_ms, avg_total_ms\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (TTFB and total time over time), Table (test, TTFB, total time), Single value.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use path tests to see where voice and meeting traffic get ugly, so we can tell cloud issues from a bad home connection.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.27",
              "n": "RTP MOS Score Monitoring (ThousandEyes)",
              "c": "critical",
              "f": "beginner",
              "v": "Mean Opinion Score (MOS) is the standard measure of voice call quality on a scale of 1 to 5. ThousandEyes RTP tests provide MOS alongside packet loss, discards, and delay variation, enabling continuous voice quality assurance from the network perspective.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (RTP tests)",
              "q": "`stream_index` thousandeyes.test.type=\"rtp\"\n| stats avg(rtp.client.request.mos) as avg_mos avg(rtp.client.request.loss) as avg_loss avg(rtp.client.request.pdv) as avg_pdv_s avg(rtp.client.request.discards) as avg_discards by thousandeyes.test.name, thousandeyes.source.agent.name\n| eval avg_pdv_ms=round(avg_pdv_s*1000,1)\n| where avg_mos < 3.5\n| sort avg_mos",
              "m": "Create RTP (Voice Layer) tests in ThousandEyes targeting voice infrastructure. RTP tests are paired with SIP Server tests. The OTel metric `rtp.client.request.mos` reports MOS (1-5), `rtp.client.request.loss` reports packet loss percentage, `rtp.client.request.pdv` reports Packet Delay Variation in seconds, and `rtp.client.request.discards` reports discarded packets percentage. The Splunk App Voice dashboard includes \"RTP MOS\" and \"RTP Loss (%)\" panels. MOS below 3.5 indicates poor call quality.",
              "z": "Line chart (MOS over time), Single value (current MOS), Table (test, agent, MOS, loss, PDV, discards).",
              "kfp": "Wi-Fi, VPN, and last-mile quality issues that affect MOS but are outside your PBX.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (RTP tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate RTP (Voice Layer) tests in ThousandEyes targeting voice infrastructure. RTP tests are paired with SIP Server tests. The OTel metric `rtp.client.request.mos` reports MOS (1-5), `rtp.client.request.loss` reports packet loss percentage, `rtp.client.request.pdv` reports Packet Delay Variation in seconds, and `rtp.client.request.discards` reports discarded packets percentage. The Splunk App Voice dashboard includes \"RTP MOS\" and \"RTP Loss (%)\" panels. MOS below 3.5 indicates poor call quality.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"rtp\"\n| stats avg(rtp.client.request.mos) as avg_mos avg(rtp.client.request.loss) as avg_loss avg(rtp.client.request.pdv) as avg_pdv_s avg(rtp.client.request.discards) as avg_discards by thousandeyes.test.name, thousandeyes.source.agent.name\n| eval avg_pdv_ms=round(avg_pdv_s*1000,1)\n| where avg_mos < 3.5\n| sort avg_mos\n```\n\nUnderstanding this SPL\n\n**RTP MOS Score Monitoring (ThousandEyes)** — Mean Opinion Score (MOS) is the standard measure of voice call quality on a scale of 1 to 5. ThousandEyes RTP tests provide MOS alongside packet loss, discards, and delay variation, enabling continuous voice quality assurance from the network perspective.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (RTP tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.test.name, thousandeyes.source.agent.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_pdv_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_mos < 3.5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (MOS over time), Single value (current MOS), Table (test, agent, MOS, loss, PDV, discards).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use path tests to see where voice and meeting traffic get ugly, so we can tell cloud issues from a bad home connection.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.28",
              "n": "Webex Meeting Quality Assurance via ThousandEyes",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors the network path from offices to Webex data centers using ThousandEyes agents, providing proactive visibility into network conditions that degrade meeting quality — before users file tickets. Correlate with Webex quality data (UC-11.3.6) for end-to-end troubleshooting.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Agent-to-Server and HTTP Server tests)",
              "q": "`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*Webex*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter by thousandeyes.source.agent.name, server.address\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| where avg_latency_ms > 150 OR avg_loss > 1 OR avg_jitter > 30\n| sort -avg_latency_ms",
              "m": "Create Agent-to-Server tests in ThousandEyes targeting Webex media and signaling endpoints (e.g., *.webex.com, Webex data center IPs). Name tests with \"Webex\" for filtering. ThousandEyes provides Webex-specific monitoring guides with recommended test configurations. Correlate network path quality with Webex meeting quality metrics from the Webex APIs for end-to-end root cause analysis.",
              "z": "Line chart (latency to Webex over time), Table (agent, latency, loss, jitter), Dashboard combining TE network data with Webex meeting quality.",
              "kfp": "Synthetic tests versus real user traffic; both matter but explain different baselines.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Agent-to-Server and HTTP Server tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate Agent-to-Server tests in ThousandEyes targeting Webex media and signaling endpoints (e.g., *.webex.com, Webex data center IPs). Name tests with \"Webex\" for filtering. ThousandEyes provides Webex-specific monitoring guides with recommended test configurations. Correlate network path quality with Webex meeting quality metrics from the Webex APIs for end-to-end root cause analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*Webex*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter by thousandeyes.source.agent.name, server.address\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| where avg_latency_ms > 150 OR avg_loss > 1 OR avg_jitter > 30\n| sort -avg_latency_ms\n```\n\nUnderstanding this SPL\n\n**Webex Meeting Quality Assurance via ThousandEyes** — Monitors the network path from offices to Webex data centers using ThousandEyes agents, providing proactive visibility into network conditions that degrade meeting quality — before users file tickets. Correlate with Webex quality data (UC-11.3.6) for end-to-end troubleshooting.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Agent-to-Server and HTTP Server tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_latency_ms > 150 OR avg_loss > 1 OR avg_jitter > 30` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency to Webex over time), Table (agent, latency, loss, jitter), Dashboard combining TE network data with Webex meeting quality.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use path tests to see where voice and meeting traffic get ugly, so we can tell cloud issues from a bad home connection.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.29",
              "n": "Microsoft Teams Network Readiness (ThousandEyes)",
              "c": "high",
              "f": "intermediate",
              "v": "Validates that each office location meets Microsoft's network quality requirements for Teams calls and meetings (latency <50ms, loss <1%, jitter <30ms). ThousandEyes tests the actual network path to Microsoft 365 endpoints.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Agent-to-Server tests)",
              "q": "`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*Teams*\" OR thousandeyes.test.name=\"*M365*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter by thousandeyes.source.agent.name, server.address\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| eval teams_ready=if(avg_latency_ms<50 AND avg_loss<1 AND avg_jitter<30, \"Ready\", \"Not Ready\")\n| sort teams_ready, -avg_latency_ms",
              "m": "Create Agent-to-Server tests targeting Microsoft Teams media relay IPs and Microsoft 365 front-door endpoints from each office Enterprise Agent. Microsoft publishes recommended network requirements: latency <50ms, loss <1%, jitter <30ms for optimal Teams quality. ThousandEyes provides Microsoft 365 monitoring best practices. Name tests with \"Teams\" or \"M365\" for easy filtering.",
              "z": "Table (agent, latency, loss, jitter, readiness status), Single value (sites meeting requirements), Map (readiness by location).",
              "kfp": "Site Wi-Fi, split tunneling, and home broadband skewing path quality metrics.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Agent-to-Server tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate Agent-to-Server tests targeting Microsoft Teams media relay IPs and Microsoft 365 front-door endpoints from each office Enterprise Agent. Microsoft publishes recommended network requirements: latency <50ms, loss <1%, jitter <30ms for optimal Teams quality. ThousandEyes provides Microsoft 365 monitoring best practices. Name tests with \"Teams\" or \"M365\" for easy filtering.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*Teams*\" OR thousandeyes.test.name=\"*M365*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter by thousandeyes.source.agent.name, server.address\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| eval teams_ready=if(avg_latency_ms<50 AND avg_loss<1 AND avg_jitter<30, \"Ready\", \"Not Ready\")\n| sort teams_ready, -avg_latency_ms\n```\n\nUnderstanding this SPL\n\n**Microsoft Teams Network Readiness (ThousandEyes)** — Validates that each office location meets Microsoft's network quality requirements for Teams calls and meetings (latency <50ms, loss <1%, jitter <30ms). ThousandEyes tests the actual network path to Microsoft 365 endpoints.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Agent-to-Server tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **teams_ready** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (agent, latency, loss, jitter, readiness status), Single value (sites meeting requirements), Map (readiness by location).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use path tests to see where voice and meeting traffic get ugly, so we can tell cloud issues from a bad home connection.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.30",
              "n": "Zoom Collaboration Performance (ThousandEyes)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors network path quality to Zoom data centers from office locations, ensuring the network supports high-quality video conferencing. Helps distinguish between Zoom platform issues and local network problems.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Agent-to-Server tests)",
              "q": "`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*Zoom*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter by thousandeyes.source.agent.name, server.address\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| where avg_latency_ms > 150 OR avg_loss > 1 OR avg_jitter > 30\n| sort -avg_latency_ms",
              "m": "Create Agent-to-Server tests targeting Zoom data center endpoints from each office Enterprise Agent. Zoom publishes recommended network requirements similar to Microsoft Teams. Name tests with \"Zoom\" for filtering. Correlate with Zoom Dashboard API data (if available) for end-to-end quality analysis.",
              "z": "Line chart (latency to Zoom over time), Table (agent, latency, loss, jitter), Comparison dashboard across collaboration platforms.",
              "kfp": "Zoom POP or ISP blips; short spikes during large webinars may still meet SLAs on average.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Agent-to-Server tests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate Agent-to-Server tests targeting Zoom data center endpoints from each office Enterprise Agent. Zoom publishes recommended network requirements similar to Microsoft Teams. Name tests with \"Zoom\" for filtering. Correlate with Zoom Dashboard API data (if available) for end-to-end quality analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.test.name=\"*Zoom*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter by thousandeyes.source.agent.name, server.address\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| where avg_latency_ms > 150 OR avg_loss > 1 OR avg_jitter > 30\n| sort -avg_latency_ms\n```\n\nUnderstanding this SPL\n\n**Zoom Collaboration Performance (ThousandEyes)** — Monitors network path quality to Zoom data centers from office locations, ensuring the network supports high-quality video conferencing. Helps distinguish between Zoom platform issues and local network problems.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Agent-to-Server tests). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_latency_ms > 150 OR avg_loss > 1 OR avg_jitter > 30` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency to Zoom over time), Table (agent, latency, loss, jitter), Comparison dashboard across collaboration platforms.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use path tests to see where voice and meeting traffic get ugly, so we can tell cloud issues from a bad home connection.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.31",
              "n": "RoomOS Device Network Health via ThousandEyes",
              "c": "high",
              "f": "intermediate",
              "v": "ThousandEyes agents can be enabled on Cisco RoomOS devices (Room Kit, Board, Desk), monitoring the network path from the conference room to cloud meeting services. This provides per-room visibility into network conditions affecting meeting quality.",
              "t": "`Cisco ThousandEyes App for Splunk` (Splunkbase 7719)",
              "d": "`index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Enterprise Agent tests from RoomOS)",
              "q": "`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.source.agent.name=\"RoomOS*\" OR thousandeyes.source.agent.name=\"Room-*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter by thousandeyes.source.agent.name, server.address\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| where avg_latency_ms > 100 OR avg_loss > 0.5\n| sort -avg_latency_ms",
              "m": "Enable ThousandEyes agent on Cisco RoomOS devices via Webex Control Hub or xAPI. The agent runs integrated tests from the room device itself, providing true end-to-end network quality measurement from the meeting room. Name agents with a \"RoomOS-\" or \"Room-\" prefix for filtering. Tests run from RoomOS devices provide the exact network perspective of the video endpoint.",
              "z": "Table (room device, target, latency, loss, jitter), Map (room locations with quality indicators), Dashboard per building/floor.",
              "kfp": "Room on standby, offline maintenance, and scheduled codec refresh windows.",
              "refs": "[Splunkbase app 7719](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco ThousandEyes App for Splunk` (Splunkbase 7719).\n• Ensure the following data sources are available: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Enterprise Agent tests from RoomOS).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable ThousandEyes agent on Cisco RoomOS devices via Webex Control Hub or xAPI. The agent runs integrated tests from the room device itself, providing true end-to-end network quality measurement from the meeting room. Name agents with a \"RoomOS-\" or \"Room-\" prefix for filtering. Tests run from RoomOS devices provide the exact network perspective of the video endpoint.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`stream_index` thousandeyes.test.type=\"agent-to-server\"\n| search thousandeyes.source.agent.name=\"RoomOS*\" OR thousandeyes.source.agent.name=\"Room-*\"\n| stats avg(network.latency) as avg_latency_s avg(network.loss) as avg_loss avg(network.jitter) as avg_jitter by thousandeyes.source.agent.name, server.address\n| eval avg_latency_ms=round(avg_latency_s*1000,1)\n| where avg_latency_ms > 100 OR avg_loss > 0.5\n| sort -avg_latency_ms\n```\n\nUnderstanding this SPL\n\n**RoomOS Device Network Health via ThousandEyes** — ThousandEyes agents can be enabled on Cisco RoomOS devices (Room Kit, Board, Desk), monitoring the network path from the conference room to cloud meeting services. This provides per-room visibility into network conditions affecting meeting quality.\n\nDocumented **Data sources**: `index=thousandeyes`, ThousandEyes OTel Tests Stream — Metrics (Enterprise Agent tests from RoomOS). **App/TA** (typical add-on context): `Cisco ThousandEyes App for Splunk` (Splunkbase 7719). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `stream_index` — in Search, use the UI or expand to inspect the underlying SPL.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by thousandeyes.source.agent.name, server.address** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_latency_ms > 100 OR avg_loss > 0.5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (room device, target, latency, loss, jitter), Map (room locations with quality indicators), Dashboard per building/floor.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use path tests to see where voice and meeting traffic get ugly, so we can tell cloud issues from a bad home connection.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.32",
              "n": "Wire-Level VoIP Quality (MOS from RTP Stream)",
              "c": "critical",
              "f": "intermediate",
              "v": "Captures Mean Opinion Score (MOS) and R-Factor directly from RTP packets on the wire, providing platform-independent voice quality measurement. Unlike UC-11.3.1 which uses Cisco CUCM CMR data (application-level), this use case captures quality at the network level regardless of the call platform — covering third-party PBX, SIP trunking providers, and carrier interconnects.",
              "t": "`Splunk App for Stream` (Splunkbase #1809)",
              "d": "`sourcetype=stream:rtp`",
              "q": "sourcetype=\"stream:rtp\"\n| stats avg(mos_session) as avg_mos, avg(rfactor) as avg_rfactor, avg(lost) as avg_loss_pct, avg(unseq) as avg_unseq, count as streams by codec_name\n| eval quality=case(avg_mos>=4.0, \"Good\", avg_mos>=3.5, \"Acceptable\", avg_mos>=3.0, \"Poor\", 1==1, \"Unacceptable\")\n| sort avg_mos",
              "m": "Install Splunk App for Stream and configure it to capture RTP traffic on voice network segments. Enable the RTP protocol for full field extraction including `mos_session`, `rfactor`, `lost`, `unseq`, and `codec_name`. The MOS is calculated by Stream from RTP statistics (jitter, loss, codec type) per session. Deploy Stream forwarders on network taps or SPAN ports that see voice traffic. Alert when average MOS drops below 3.5 (ITU-T G.107 threshold for acceptable quality). Segment analysis by `codec_name` to identify if codec choice affects quality.",
              "z": "Gauge (average MOS score with thresholds: green >4.0, yellow 3.5-4.0, red <3.5), Line chart (MOS trend over 24h in 15-min buckets), Table (codec_name, avg_mos, avg_rfactor, avg_loss_pct, streams — sortable), Bar chart (stream count by quality category).",
              "kfp": "Packet capture on mirror ports can miss or duplicate streams; confirm tap placement.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk App for Stream` (Splunkbase #1809).\n• Ensure the following data sources are available: `sourcetype=stream:rtp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall Splunk App for Stream and configure it to capture RTP traffic on voice network segments. Enable the RTP protocol for full field extraction including `mos_session`, `rfactor`, `lost`, `unseq`, and `codec_name`. The MOS is calculated by Stream from RTP statistics (jitter, loss, codec type) per session. Deploy Stream forwarders on network taps or SPAN ports that see voice traffic. Alert when average MOS drops below 3.5 (ITU-T G.107 threshold for acceptable quality). Segment analysis by `cod…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsourcetype=\"stream:rtp\"\n| stats avg(mos_session) as avg_mos, avg(rfactor) as avg_rfactor, avg(lost) as avg_loss_pct, avg(unseq) as avg_unseq, count as streams by codec_name\n| eval quality=case(avg_mos>=4.0, \"Good\", avg_mos>=3.5, \"Acceptable\", avg_mos>=3.0, \"Poor\", 1==1, \"Unacceptable\")\n| sort avg_mos\n```\n\nUnderstanding this SPL\n\n**Wire-Level VoIP Quality (MOS from RTP Stream)** — Captures Mean Opinion Score (MOS) and R-Factor directly from RTP packets on the wire, providing platform-independent voice quality measurement. Unlike UC-11.3.1 which uses Cisco CUCM CMR data (application-level), this use case captures quality at the network level regardless of the call platform — covering third-party PBX, SIP trunking providers, and carrier interconnects.\n\nDocumented **Data sources**: `sourcetype=stream:rtp`. **App/TA** (typical add-on context): `Splunk App for Stream` (Splunkbase #1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: stream:rtp. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: sourcetype=\"stream:rtp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by codec_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **quality** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (average MOS score with thresholds: green >4.0, yellow 3.5-4.0, red <3.5), Line chart (MOS trend over 24h in 15-min buckets), Table (codec_name, avg_mos, avg_rfactor, avg_loss_pct, streams — sortable), Bar chart (stream count by quality category).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance"
              ],
              "ind": "Telecommunications",
              "tuc": "Contact Center Analytics (50 Ways #7)",
              "a": [
                "N/A"
              ],
              "e": [
                "stream"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.33",
              "n": "Emergency Call (E911/E112) Tracking",
              "c": "critical",
              "f": "advanced",
              "v": "Tracks all emergency calls (911, 933 test, 112) through the telephony system to ensure regulatory compliance and rapid response. Monitors call completion rate, answer time, and any failed emergency calls — a regulatory requirement in many jurisdictions.",
              "t": "`TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), or `Splunk App for Stream` (Splunkbase #1809)",
              "d": "`sourcetype=cisco:ucm:cdr` or `sourcetype=stream:sip`",
              "q": "sourcetype=\"cisco:ucm:cdr\"\n| where match(calledPartyNumber, \"^(911|933|112)$\")\n| eval answer_time=dateTimeConnect-dateTimeOrigination\n| eval completed=if(destCause_value==16, \"Yes\", \"No\")\n| stats count as total_calls, sum(eval(if(completed==\"Yes\", 1, 0))) as answered, avg(answer_time) as avg_answer_sec, avg(duration) as avg_duration_sec by calledPartyNumber\n| eval answer_rate=round(answered*100/total_calls, 1)\n| table calledPartyNumber, total_calls, answered, answer_rate, avg_answer_sec, avg_duration_sec",
              "m": "Ingest Cisco UCM CDR data using the TA for Cisco CDR Reporting and Analytics. Emergency numbers are identified by `calledPartyNumber` matching 911 (US), 933 (US test), or 112 (EU). The `destCause_value=16` indicates normal call clearing (answered and completed). Calculate answer rate as the percentage of calls that were connected. For SIP-based tracking via Stream, filter `sourcetype=\"stream:sip\"` where `callee_e164` matches emergency numbers and check `reply_code=200`. Create real-time alerts for ANY failed emergency call (non-16 cause value). Generate compliance reports monthly.",
              "z": "Single value (emergency call answer rate — target: 100%, red if <99%), Table (calledPartyNumber, total_calls, answered, answer_rate, avg_answer_sec), Timeline (emergency calls over 30 days), Bar chart (emergency calls by hour of day).",
              "kfp": "PSAP test calls, number portability delays, and campus moves with updated location data.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), or `Splunk App for Stream` (Splunkbase #1809).\n• Ensure the following data sources are available: `sourcetype=cisco:ucm:cdr` or `sourcetype=stream:sip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Cisco UCM CDR data using the TA for Cisco CDR Reporting and Analytics. Emergency numbers are identified by `calledPartyNumber` matching 911 (US), 933 (US test), or 112 (EU). The `destCause_value=16` indicates normal call clearing (answered and completed). Calculate answer rate as the percentage of calls that were connected. For SIP-based tracking via Stream, filter `sourcetype=\"stream:sip\"` where `callee_e164` matches emergency numbers and check `reply_code=200`. Create real-time alerts f…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsourcetype=\"cisco:ucm:cdr\"\n| where match(calledPartyNumber, \"^(911|933|112)$\")\n| eval answer_time=dateTimeConnect-dateTimeOrigination\n| eval completed=if(destCause_value==16, \"Yes\", \"No\")\n| stats count as total_calls, sum(eval(if(completed==\"Yes\", 1, 0))) as answered, avg(answer_time) as avg_answer_sec, avg(duration) as avg_duration_sec by calledPartyNumber\n| eval answer_rate=round(answered*100/total_calls, 1)\n| table calledPartyNumber, total_calls, answered, answer_rate, avg_answer_sec, avg_duration_sec\n```\n\nUnderstanding this SPL\n\n**Emergency Call (E911/E112) Tracking** — Tracks all emergency calls (911, 933 test, 112) through the telephony system to ensure regulatory compliance and rapid response. Monitors call completion rate, answer time, and any failed emergency calls — a regulatory requirement in many jurisdictions.\n\nDocumented **Data sources**: `sourcetype=cisco:ucm:cdr` or `sourcetype=stream:sip`. **App/TA** (typical add-on context): `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), or `Splunk App for Stream` (Splunkbase #1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: cisco:ucm:cdr. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: sourcetype=\"cisco:ucm:cdr\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(calledPartyNumber, \"^(911|933|112)$\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **answer_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **completed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by calledPartyNumber** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **answer_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Emergency Call (E911/E112) Tracking**): table calledPartyNumber, total_calls, answered, answer_rate, avg_answer_sec, avg_duration_sec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (emergency call answer rate — target: 100%, red if <99%), Table (calledPartyNumber, total_calls, answered, answer_rate, avg_answer_sec), Timeline (emergency calls over 30 days), Bar chart (emergency calls by hour of day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Availability",
                "Security"
              ],
              "ind": "Telecommunications",
              "tuc": "Emergency Services Monitoring (50 Ways #11)",
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "stream"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.34",
              "n": "Answer Seizure Ratio (ASR) by Route Group",
              "c": "high",
              "f": "intermediate",
              "v": "Calculates Answer Seizure Ratio — the percentage of call attempts that are successfully answered — by route group or trunk. ASR is the primary KPI for voice service quality in carrier networks. Low ASR indicates trunk failures, destination unreachable, or capacity exhaustion.",
              "t": "`TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), or `Splunk App for Stream` (Splunkbase #1809)",
              "d": "`sourcetype=cisco:ucm:cdr` or `sourcetype=stream:sip`",
              "q": "sourcetype=\"cisco:ucm:cdr\"\n| eval answered=if(destCause_value==16, 1, 0)\n| stats count as total_attempts, sum(answered) as answered_calls by destDeviceName\n| eval ASR=round(answered_calls*100/total_attempts, 2)\n| where total_attempts>10\n| sort ASR",
              "m": "Ingest Cisco UCM CDR data. The `destCause_value=16` (Normal Call Clearing) indicates a successfully answered call. Group by `destDeviceName` (which represents the route group or gateway) to calculate ASR per trunk. Industry standard ASR benchmarks: >50% is acceptable for international routes, >70% is good for domestic routes. For SIP-based tracking via Stream, use `sourcetype=\"stream:sip\"` with `method=INVITE` and calculate the ratio of `reply_code=200` to total INVITEs per `dest`. Alert when ASR drops below historical baseline by more than 10 percentage points.",
              "z": "Gauge (overall ASR with thresholds: green >70%, yellow 50-70%, red <50%), Column chart (ASR by destDeviceName/trunk), Line chart (ASR trend over 7 days), Table (destDeviceName, total_attempts, answered_calls, ASR — sortable, highlighted red below 50%).",
              "kfp": "Expected dips when carriers rate-limit or when campaigns end; long trends matter more than one hour.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), or `Splunk App for Stream` (Splunkbase #1809).\n• Ensure the following data sources are available: `sourcetype=cisco:ucm:cdr` or `sourcetype=stream:sip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Cisco UCM CDR data. The `destCause_value=16` (Normal Call Clearing) indicates a successfully answered call. Group by `destDeviceName` (which represents the route group or gateway) to calculate ASR per trunk. Industry standard ASR benchmarks: >50% is acceptable for international routes, >70% is good for domestic routes. For SIP-based tracking via Stream, use `sourcetype=\"stream:sip\"` with `method=INVITE` and calculate the ratio of `reply_code=200` to total INVITEs per `dest`. Alert when AS…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsourcetype=\"cisco:ucm:cdr\"\n| eval answered=if(destCause_value==16, 1, 0)\n| stats count as total_attempts, sum(answered) as answered_calls by destDeviceName\n| eval ASR=round(answered_calls*100/total_attempts, 2)\n| where total_attempts>10\n| sort ASR\n```\n\nUnderstanding this SPL\n\n**Answer Seizure Ratio (ASR) by Route Group** — Calculates Answer Seizure Ratio — the percentage of call attempts that are successfully answered — by route group or trunk. ASR is the primary KPI for voice service quality in carrier networks. Low ASR indicates trunk failures, destination unreachable, or capacity exhaustion.\n\nDocumented **Data sources**: `sourcetype=cisco:ucm:cdr` or `sourcetype=stream:sip`. **App/TA** (typical add-on context): `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), or `Splunk App for Stream` (Splunkbase #1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **sourcetype**: cisco:ucm:cdr. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: sourcetype=\"cisco:ucm:cdr\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **answered** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by destDeviceName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ASR** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where total_attempts>10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Gauge (overall ASR with thresholds: green >70%, yellow 50-70%, red <50%), Column chart (ASR by destDeviceName/trunk), Line chart (ASR trend over 7 days), Table (destDeviceName, total_attempts, answered_calls, ASR — sortable, highlighted red below 50%).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "ind": "Telecommunications",
              "tuc": "Voice/VoIP Revenue Assurance (50 Ways #30), Real-Time Service Reporting (50 Ways #33)",
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "stream"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.35",
              "n": "CUCM CDR Call Path Analysis",
              "c": "high",
              "f": "advanced",
              "v": "End-to-end call routing visibility across gateways, trunks, route patterns, and route lists exposes misconfigured route plans that silently send calls through unintended paths — causing unexpected PSTN charges, degraded codec quality, or failed calls that users report as \"the phone just doesn't work.\" CDR path analysis turns cryptic cause codes into actionable routing intelligence.",
              "t": "`TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), `Cisco CDR Reporting and Analytics` (Splunkbase #669)",
              "d": "`sourcetype=cisco:ucm:cdr`",
              "q": "index=voip sourcetype=\"cisco:ucm:cdr\"\n| eval call_path=origDeviceName.\" → \".lastRedirectDn.\" → \".destDeviceName\n| eval failed=if(destCause_value!=16 AND destCause_value!=0, 1, 0)\n| stats count as total_calls, sum(failed) as failed_calls, values(origCause_value) as orig_causes, values(destCause_value) as dest_causes by call_path, origCallingPartyNumber, finalCalledPartyNumber\n| eval fail_pct=round(failed_calls*100/total_calls, 1)\n| where fail_pct > 10 OR failed_calls > 5\n| sort -fail_pct\n| table call_path, origCallingPartyNumber, finalCalledPartyNumber, total_calls, failed_calls, fail_pct, dest_causes",
              "m": "Ingest CUCM CDR data via the Cisco CDR Reporting TA. The `origDeviceName`, `lastRedirectDn`, and `destDeviceName` fields trace the call path through the CUCM dial plan. `destCause_value=16` (Normal Call Clearing) indicates a successful call; any other value signals routing failure, congestion, or configuration error. Common cause codes to watch: 1 (Unallocated Number), 34 (No Circuit), 47 (Resource Unavailable), 127 (Interworking). Build a lookup for cause code descriptions. Group by route pattern or called party transform pattern to identify which dial plan rules produce the most failures. Alert when a previously healthy path exceeds 10% failure rate within 1 hour. Correlate with gateway utilization (UC-11.3.38) to distinguish capacity-related failures from configuration errors.",
              "z": "Sankey diagram (call flow from origin → redirect → destination), Table (failed paths with cause codes), Bar chart (failures by cause code), Line chart (path failure rate over 24 hours).",
              "kfp": "Complex transfers, call forwarding, and hunt overflow paths that are valid but hard to read.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), `Cisco CDR Reporting and Analytics` (Splunkbase #669).\n• Ensure the following data sources are available: `sourcetype=cisco:ucm:cdr`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CUCM CDR data via the Cisco CDR Reporting TA. The `origDeviceName`, `lastRedirectDn`, and `destDeviceName` fields trace the call path through the CUCM dial plan. `destCause_value=16` (Normal Call Clearing) indicates a successful call; any other value signals routing failure, congestion, or configuration error. Common cause codes to watch: 1 (Unallocated Number), 34 (No Circuit), 47 (Resource Unavailable), 127 (Interworking). Build a lookup for cause code descriptions. Group by route patte…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:cdr\"\n| eval call_path=origDeviceName.\" → \".lastRedirectDn.\" → \".destDeviceName\n| eval failed=if(destCause_value!=16 AND destCause_value!=0, 1, 0)\n| stats count as total_calls, sum(failed) as failed_calls, values(origCause_value) as orig_causes, values(destCause_value) as dest_causes by call_path, origCallingPartyNumber, finalCalledPartyNumber\n| eval fail_pct=round(failed_calls*100/total_calls, 1)\n| where fail_pct > 10 OR failed_calls > 5\n| sort -fail_pct\n| table call_path, origCallingPartyNumber, finalCalledPartyNumber, total_calls, failed_calls, fail_pct, dest_causes\n```\n\nUnderstanding this SPL\n\n**CUCM CDR Call Path Analysis** — End-to-end call routing visibility across gateways, trunks, route patterns, and route lists exposes misconfigured route plans that silently send calls through unintended paths — causing unexpected PSTN charges, degraded codec quality, or failed calls that users report as \"the phone just doesn't work.\" CDR path analysis turns cryptic cause codes into actionable routing intelligence.\n\nDocumented **Data sources**: `sourcetype=cisco:ucm:cdr`. **App/TA** (typical add-on context): `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), `Cisco CDR Reporting and Analytics` (Splunkbase #669). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:cdr. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:cdr\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **call_path** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **failed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by call_path, origCallingPartyNumber, finalCalledPartyNumber** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_pct > 10 OR failed_calls > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **CUCM CDR Call Path Analysis**): table call_path, origCallingPartyNumber, finalCalledPartyNumber, total_calls, failed_calls, fail_pct, dest_causes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey diagram (call flow from origin → redirect → destination), Table (failed paths with cause codes), Bar chart (failures by cause code), Line chart (path failure rate over 24 hours).",
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Communications Manager (CUCM), CUBE, ISDN Gateways, Cisco VG series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.36",
              "n": "CUCM CMR Call Quality Heatmap",
              "c": "high",
              "f": "advanced",
              "v": "Beyond per-call MOS monitoring (UC-11.3.1), mapping CMR metrics — MOS, jitter, concealed seconds, severely concealed seconds, latency — by site-pair reveals which network segments consistently degrade voice quality. A site-to-site heatmap transforms thousands of individual call quality records into an instant visual that network engineers can use to prioritize WAN/SD-WAN optimization.",
              "t": "`TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), `Cisco CDR Reporting and Analytics` (Splunkbase #669)",
              "d": "`sourcetype=cisco:ucm:cmr`, `sourcetype=cisco:ucm:cdr` (for location join)",
              "q": "index=voip sourcetype=\"cisco:ucm:cmr\"\n| join globalCallID_callId [search index=voip sourcetype=\"cisco:ucm:cdr\" | fields globalCallID_callId, origDeviceName, destDeviceName, callingPartyNumber_uri]\n| lookup cucm_device_location origDeviceName as origDeviceName OUTPUT location as orig_site\n| lookup cucm_device_location destDeviceName as destDeviceName OUTPUT location as dest_site\n| eval site_pair=orig_site.\" ↔ \".dest_site\n| stats avg(MOS) as avg_mos, avg(jitter) as avg_jitter, avg(latency) as avg_latency, sum(severelyConcealedSeconds) as total_scs, count as call_count by site_pair\n| eval quality_score=case(avg_mos>=4.0, \"Good\", avg_mos>=3.5, \"Fair\", avg_mos>=3.0, \"Poor\", 1==1, \"Critical\")\n| sort avg_mos",
              "m": "Join CMR records with CDR data on `globalCallID_callId` to obtain device names and caller information. Build a `cucm_device_location` lookup mapping device names to site/location codes from CUCM device pools or locations configuration. Aggregate quality metrics by site-pair to produce the heatmap matrix. Track `severelyConcealedSeconds` as a leading indicator — it measures seconds where >5% of audio frames were interpolated, indicating packet loss that may not yet impact MOS. Schedule hourly during business hours. Alert when any site-pair avg MOS drops below 3.5 for more than 2 consecutive hours. Feed results into SD-WAN QoS policy reviews.",
              "z": "Heatmap (origin site × destination site, colored by avg MOS), Table (site-pairs with worst quality), Line chart (avg MOS per site-pair over 7 days), Gauge (overall fleet MOS).",
              "kfp": "Site-specific issues during local outages; heatmaps need geographic context before escalation.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), `Cisco CDR Reporting and Analytics` (Splunkbase #669).\n• Ensure the following data sources are available: `sourcetype=cisco:ucm:cmr`, `sourcetype=cisco:ucm:cdr` (for location join).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nJoin CMR records with CDR data on `globalCallID_callId` to obtain device names and caller information. Build a `cucm_device_location` lookup mapping device names to site/location codes from CUCM device pools or locations configuration. Aggregate quality metrics by site-pair to produce the heatmap matrix. Track `severelyConcealedSeconds` as a leading indicator — it measures seconds where >5% of audio frames were interpolated, indicating packet loss that may not yet impact MOS. Schedule hourly dur…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:cmr\"\n| join globalCallID_callId [search index=voip sourcetype=\"cisco:ucm:cdr\" | fields globalCallID_callId, origDeviceName, destDeviceName, callingPartyNumber_uri]\n| lookup cucm_device_location origDeviceName as origDeviceName OUTPUT location as orig_site\n| lookup cucm_device_location destDeviceName as destDeviceName OUTPUT location as dest_site\n| eval site_pair=orig_site.\" ↔ \".dest_site\n| stats avg(MOS) as avg_mos, avg(jitter) as avg_jitter, avg(latency) as avg_latency, sum(severelyConcealedSeconds) as total_scs, count as call_count by site_pair\n| eval quality_score=case(avg_mos>=4.0, \"Good\", avg_mos>=3.5, \"Fair\", avg_mos>=3.0, \"Poor\", 1==1, \"Critical\")\n| sort avg_mos\n```\n\nUnderstanding this SPL\n\n**CUCM CMR Call Quality Heatmap** — Beyond per-call MOS monitoring (UC-11.3.1), mapping CMR metrics — MOS, jitter, concealed seconds, severely concealed seconds, latency — by site-pair reveals which network segments consistently degrade voice quality. A site-to-site heatmap transforms thousands of individual call quality records into an instant visual that network engineers can use to prioritize WAN/SD-WAN optimization.\n\nDocumented **Data sources**: `sourcetype=cisco:ucm:cmr`, `sourcetype=cisco:ucm:cdr` (for location join). **App/TA** (typical add-on context): `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), `Cisco CDR Reporting and Analytics` (Splunkbase #669). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:cmr. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:cmr\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **site_pair** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by site_pair** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **quality_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (origin site × destination site, colored by avg MOS), Table (site-pairs with worst quality), Line chart (avg MOS per site-pair over 7 days), Gauge (overall fleet MOS).",
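              "_sketch": "A hedged sketch of the 'MOS below 3.5 for 2 consecutive hours' alert condition described above, assuming the hourly per-site-pair averages from the main search have been written to a summary index (the `index=summary_voip` name and `search_name` value are hypothetical):\n\n```spl\nindex=summary_voip search_name=\"cmr_site_pair_hourly\"\n| sort 0 site_pair, _time\n| streamstats window=2 global=f max(avg_mos) as best_of_2h count as hrs by site_pair\n| where hrs >= 2 AND best_of_2h < 3.5\n```\n\nIf the better of two consecutive hourly averages is still under 3.5, both hours breached the threshold — exactly the sustained-degradation condition to alert on.",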
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Communications Manager (CUCM), IP Phone 7800 series, IP Phone 8800 series, Cisco Webex Calling",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.37",
              "n": "CUCM Phone Firmware Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "IP phone firmware versions determine security posture and feature availability. Phones running end-of-support firmware are vulnerable to known exploits and may lack critical features like encrypted RTP. Tracking firmware across a fleet of thousands of phones via CUCM syslog registration events provides automated compliance reporting that replaces manual CUCM Admin page audits.",
              "t": "`Splunk Connect for Syslog`, `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434)",
              "d": "`sourcetype=cisco:ucm:syslog` (registration events), CUCM AXL/RIS API via scripted input",
              "q": "index=voip sourcetype=\"cisco:ucm:syslog\" \"%CCM_CALLMANAGER-CALLMANAGER-7-DeviceRegistered%\"\n| rex field=_raw \"DeviceName=(?<device_name>\\S+).*ActiveLoadID=(?<firmware>\\S+).*IPAddress=(?<ip>\\S+)\"\n| rex field=device_name \"^(?<model>SEP|ATA|CIPC|CSF|BOT|TCT|TAB)\"\n| stats latest(firmware) as current_fw, latest(ip) as ip, latest(_time) as last_seen by device_name, model\n| lookup phone_firmware_baseline model OUTPUT recommended_fw, eol_fw\n| eval compliant=if(current_fw==recommended_fw, \"Yes\", \"No\")\n| eval eol_risk=if(current_fw==eol_fw, \"EOL\", \"Supported\")\n| stats count as total, sum(eval(if(compliant==\"No\",1,0))) as non_compliant, sum(eval(if(eol_risk==\"EOL\",1,0))) as eol_count by model\n| eval compliance_pct=round((total-non_compliant)*100/total, 1)",
              "m": "CUCM generates `DeviceRegistered` syslog messages each time a phone registers or re-registers, containing the device name, firmware version (ActiveLoadID), and IP address. Forward CUCM syslog via Splunk Connect for Syslog. Build a `phone_firmware_baseline` lookup with columns: model, recommended_fw, eol_fw (populated from Cisco's firmware recommendations). Schedule daily to track fleet compliance. Alert when compliance percentage drops below 90% or any EOL firmware is detected. For more complete inventory, add a scripted input querying CUCM RIS API for real-time registered device data. Track firmware rollout progress during upgrade campaigns with a timechart of compliant vs non-compliant counts.",
              "z": "Single value (fleet compliance %), Bar chart (compliance by model), Table (non-compliant devices with firmware and IP), Pie chart (firmware version distribution).",
              "kfp": "Staged phone refresh programs and different firmware channels per site.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Connect for Syslog`, `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434).\n• Ensure the following data sources are available: `sourcetype=cisco:ucm:syslog` (registration events), CUCM AXL/RIS API via scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCUCM generates `DeviceRegistered` syslog messages each time a phone registers or re-registers, containing the device name, firmware version (ActiveLoadID), and IP address. Forward CUCM syslog via Splunk Connect for Syslog. Build a `phone_firmware_baseline` lookup with columns: model, recommended_fw, eol_fw (populated from Cisco's firmware recommendations). Schedule daily to track fleet compliance. Alert when compliance percentage drops below 90% or any EOL firmware is detected. For more complete…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:syslog\" \"%CCM_CALLMANAGER-CALLMANAGER-7-DeviceRegistered%\"\n| rex field=_raw \"DeviceName=(?<device_name>\\S+).*ActiveLoadID=(?<firmware>\\S+).*IPAddress=(?<ip>\\S+)\"\n| rex field=device_name \"^(?<model>SEP|ATA|CIPC|CSF|BOT|TCT|TAB)\"\n| stats latest(firmware) as current_fw, latest(ip) as ip, latest(_time) as last_seen by device_name, model\n| lookup phone_firmware_baseline model OUTPUT recommended_fw, eol_fw\n| eval compliant=if(current_fw==recommended_fw, \"Yes\", \"No\")\n| eval eol_risk=if(current_fw==eol_fw, \"EOL\", \"Supported\")\n| stats count as total, sum(eval(if(compliant==\"No\",1,0))) as non_compliant, sum(eval(if(eol_risk==\"EOL\",1,0))) as eol_count by model\n| eval compliance_pct=round((total-non_compliant)*100/total, 1)\n```\n\nUnderstanding this SPL\n\n**CUCM Phone Firmware Compliance** — IP phone firmware versions determine security posture and feature availability. Phones running end-of-support firmware are vulnerable to known exploits and may lack critical features like encrypted RTP. Tracking firmware across a fleet of thousands of phones via CUCM syslog registration events provides automated compliance reporting that replaces manual CUCM Admin page audits.\n\nDocumented **Data sources**: `sourcetype=cisco:ucm:syslog` (registration events), CUCM AXL/RIS API via scripted input. **App/TA** (typical add-on context): `Splunk Connect for Syslog`, `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by device_name, model** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **eol_risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by model** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **compliance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (fleet compliance %), Bar chart (compliance by model), Table (non-compliant devices with firmware and IP), Pie chart (firmware version distribution).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
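              "_sketch": "A hedged sketch of the rollout-progress trend mentioned above — daily compliant vs non-compliant device counts, reusing the extractions and baseline lookup from the main search:\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:syslog\" \"%CCM_CALLMANAGER-CALLMANAGER-7-DeviceRegistered%\"\n| rex field=_raw \"DeviceName=(?<device_name>\\S+).*ActiveLoadID=(?<firmware>\\S+)\"\n| rex field=device_name \"^(?<model>SEP|ATA|CIPC|CSF|BOT|TCT|TAB)\"\n| lookup phone_firmware_baseline model OUTPUT recommended_fw\n| eval status=if(firmware==recommended_fw, \"compliant\", \"non_compliant\")\n| timechart span=1d dc(device_name) as phones by status\n```\n\nDuring an upgrade campaign the two series should cross as the compliant count overtakes the non-compliant one.",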
              "script": "",
              "premium": "",
              "hw": "Cisco IP Phone 7800 series, IP Phone 8800 series, IP Phone 6800 series, Cisco ATA 190, Cisco Webex Room Kit",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.38",
              "n": "CUCM Gateway and CUBE Utilization",
              "c": "critical",
              "f": "intermediate",
              "v": "PSTN gateways and CUBE (Cisco Unified Border Element) have finite channel capacity. When all channels are in use during peak hours, additional calls receive busy signals or route to overflow destinations that may incur higher PSTN costs. Monitoring channel utilization against capacity prevents revenue-impacting call failures and supports trunk procurement decisions.",
              "t": "`TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), `Cisco Networks Add-on for Splunk` (Splunkbase #1352)",
              "d": "`sourcetype=cisco:ucm:cdr`, `sourcetype=syslog` (gateway voice counters), SNMP (CISCO-VOICE-DIAL-CONTROL-MIB)",
              "q": "index=voip sourcetype=\"cisco:ucm:cdr\"\n| eval gw=coalesce(destDeviceName, origDeviceName)\n| where like(gw, \"CUBE%\") OR like(gw, \"GW%\") OR like(gw, \"MGCP%\")\n| bin _time span=15m\n| stats dc(globalCallID_callId) as concurrent_calls by _time, gw\n| lookup gateway_capacity gw OUTPUT max_channels\n| eval utilization_pct=round(concurrent_calls*100/max_channels, 1)\n| where utilization_pct > 80\n| table _time, gw, concurrent_calls, max_channels, utilization_pct\n| sort -utilization_pct",
              "m": "Ingest CDR data and identify gateway devices by naming convention (CUBE*, GW*, MGCP*) or by maintaining a gateway device lookup. Build a `gateway_capacity` lookup mapping gateway names to their maximum channel count (PRI=23 channels per T1, SIP trunk=configured max sessions). Calculate concurrent call count per 15-minute bin as a proxy for channel utilization. Alert at 80% utilization to allow proactive capacity addition. For real-time monitoring, supplement CDR analysis with SNMP polling of CISCO-VOICE-DIAL-CONTROL-MIB for active call legs. Track codec negotiation: G.711 uses 1 channel, G.729 uses 1 channel but lower bandwidth — codec distribution affects WAN planning but not channel capacity.",
              "z": "Line chart (utilization % per gateway over 24 hours), Gauge (peak utilization per gateway), Table (gateways above 80%), Bar chart (concurrent calls by gateway at peak hour).",
              "kfp": "Conference peaks and after-hours backhaul that fill gateways before capacity upgrades ship.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), `Cisco Networks Add-on for Splunk` (Splunkbase #1352).\n• Ensure the following data sources are available: `sourcetype=cisco:ucm:cdr`, `sourcetype=syslog` (gateway voice counters), SNMP (CISCO-VOICE-DIAL-CONTROL-MIB).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CDR data and identify gateway devices by naming convention (CUBE*, GW*, MGCP*) or by maintaining a gateway device lookup. Build a `gateway_capacity` lookup mapping gateway names to their maximum channel count (PRI=23 channels per T1, SIP trunk=configured max sessions). Calculate concurrent call count per 15-minute bin as a proxy for channel utilization. Alert at 80% utilization to allow proactive capacity addition. For real-time monitoring, supplement CDR analysis with SNMP polling of CIS…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:cdr\"\n| eval gw=coalesce(destDeviceName, origDeviceName)\n| where like(gw, \"CUBE%\") OR like(gw, \"GW%\") OR like(gw, \"MGCP%\")\n| bin _time span=15m\n| stats dc(globalCallID_callId) as concurrent_calls by _time, gw\n| lookup gateway_capacity gw OUTPUT max_channels\n| eval utilization_pct=round(concurrent_calls*100/max_channels, 1)\n| where utilization_pct > 80\n| table _time, gw, concurrent_calls, max_channels, utilization_pct\n| sort -utilization_pct\n```\n\nUnderstanding this SPL\n\n**CUCM Gateway and CUBE Utilization** — PSTN gateways and CUBE (Cisco Unified Border Element) have finite channel capacity. When all channels are in use during peak hours, additional calls receive busy signals or route to overflow destinations that may incur higher PSTN costs. Monitoring channel utilization against capacity prevents revenue-impacting call failures and supports trunk procurement decisions.\n\nDocumented **Data sources**: `sourcetype=cisco:ucm:cdr`, `sourcetype=syslog` (gateway voice counters), SNMP (CISCO-VOICE-DIAL-CONTROL-MIB). **App/TA** (typical add-on context): `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434), `Cisco Networks Add-on for Splunk` (Splunkbase #1352). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:cdr. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:cdr\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **gw** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where like(gw, \"CUBE%\") OR like(gw, \"GW%\") OR like(gw, \"MGCP%\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, gw** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where utilization_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CUCM Gateway and CUBE Utilization**): table _time, gw, concurrent_calls, max_channels, utilization_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (utilization % per gateway over 24 hours), Gauge (peak utilization per gateway), Table (gateways above 80%), Bar chart (concurrent calls by gateway at peak hour).",
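              "_sketch": "A minimal sketch for seeding the `gateway_capacity` lookup the search expects (gateway names and channel counts are illustrative; `makeresults format=csv` requires Splunk 8.2+ — on older versions build the CSV by hand and upload it as a lookup file):\n\n```spl\n| makeresults format=csv data=\"gw,max_channels\nCUBE-HQ-01,500\nGW-BR1-PRI,23\nMGCP-BR2-PRI,46\"\n| outputlookup gateway_capacity.csv\n```\n\nRemember that one PRI on a T1 contributes 23 bearer channels, so a dual-PRI gateway gets 46.",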
              "script": "",
              "premium": "",
              "hw": "Cisco ISR 4000 series (CUBE), Cisco VG310/350, Cisco CUBE Enterprise, ISDN PRI Gateways",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.39",
              "n": "CUCM Cluster Database Replication Health",
              "c": "critical",
              "f": "advanced",
              "v": "CUCM relies on Informix database replication between publisher and subscriber nodes to synchronize configuration and runtime data. Replication failures cause configuration drift — changes made on the publisher don't propagate, causing inconsistent dial plans, missing device registrations, and failover failures. Detecting replication lag or broken replication before it impacts call processing prevents hard-to-diagnose intermittent call failures.",
              "t": "`Splunk Connect for Syslog`, CUCM RTMT log forwarding",
              "d": "`sourcetype=cisco:ucm:syslog` (DBReplication alerts), CUCM CLI `utils dbreplication runtimestate` via scripted input",
              "q": "index=voip sourcetype=\"cisco:ucm:syslog\" (\"DBReplication\" OR \"Replication\" OR \"%CCM_DB-DB-3%\")\n| eval severity=case(\n    like(_raw, \"%CRITICAL%\") OR like(_raw, \"%-3-%\"), \"Critical\",\n    like(_raw, \"%WARNING%\") OR like(_raw, \"%-4-%\"), \"Warning\",\n    1==1, \"Info\")\n| stats count as event_count, latest(_time) as last_event, values(severity) as severities by host\n| eval repl_status=if(mvfind(severities,\"Critical\")>=0, \"BROKEN\", if(mvfind(severities,\"Warning\")>=0, \"DEGRADED\", \"OK\"))\n| table host, repl_status, event_count, last_event, severities\n| sort -event_count",
              "m": "CUCM generates syslog messages with facility `%CCM_DB-DB` for replication events. Severity level 3 (Error) indicates replication failure; level 4 (Warning) indicates replication lag. Forward all CUCM node syslog via Splunk Connect for Syslog. For deeper monitoring, deploy a scripted input that runs `utils dbreplication runtimestate` via SSH/expect script on the CUCM publisher CLI every 30 minutes, parsing the output to extract replication status per subscriber (values: 0=Init, 1=Bad, 2=Good, 3=Setup, 4=Uncertain). Alert immediately on status values other than 2. After replication breaks, CUCM requires `utils dbreplication repair` or `reset` which may cause service disruption — early detection is critical. Correlate with network connectivity between CUCM nodes.",
              "z": "Single value (nodes with replication OK vs broken), Table (node replication status), Timeline (replication events), Line chart (replication event rate over 7 days).",
              "kfp": "Brief replication lag during UCM cluster switchovers and publisher maintenance.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Connect for Syslog`, CUCM RTMT log forwarding.\n• Ensure the following data sources are available: `sourcetype=cisco:ucm:syslog` (DBReplication alerts), CUCM CLI `utils dbreplication runtimestate` via scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCUCM generates syslog messages with facility `%CCM_DB-DB` for replication events. Severity level 3 (Error) indicates replication failure; level 4 (Warning) indicates replication lag. Forward all CUCM node syslog via Splunk Connect for Syslog. For deeper monitoring, deploy a scripted input that runs `utils dbreplication runtimestate` via SSH/expect script on the CUCM publisher CLI every 30 minutes, parsing the output to extract replication status per subscriber (values: 0=Init, 1=Bad, 2=Good, 3=S…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:syslog\" (\"DBReplication\" OR \"Replication\" OR \"%CCM_DB-DB-3%\")\n| eval severity=case(\n    like(_raw, \"%CRITICAL%\") OR like(_raw, \"%-3-%\"), \"Critical\",\n    like(_raw, \"%WARNING%\") OR like(_raw, \"%-4-%\"), \"Warning\",\n    1==1, \"Info\")\n| stats count as event_count, latest(_time) as last_event, values(severity) as severities by host\n| eval repl_status=if(mvfind(severities,\"Critical\")>=0, \"BROKEN\", if(mvfind(severities,\"Warning\")>=0, \"DEGRADED\", \"OK\"))\n| table host, repl_status, event_count, last_event, severities\n| sort -event_count\n```\n\nUnderstanding this SPL\n\n**CUCM Cluster Database Replication Health** — CUCM relies on Informix database replication between publisher and subscriber nodes to synchronize configuration and runtime data. Replication failures cause configuration drift — changes made on the publisher don't propagate, causing inconsistent dial plans, missing device registrations, and failover failures. Detecting replication lag or broken replication before it impacts call processing prevents hard-to-diagnose intermittent call failures.\n\nDocumented **Data sources**: `sourcetype=cisco:ucm:syslog` (DBReplication alerts), CUCM CLI `utils dbreplication runtimestate` via scripted input. **App/TA** (typical add-on context): `Splunk Connect for Syslog`, CUCM RTMT log forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **repl_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **CUCM Cluster Database Replication Health**): table host, repl_status, event_count, last_event, severities\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (nodes with replication OK vs broken), Table (node replication status), Timeline (replication events), Line chart (replication event rate over 7 days).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
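              "_sketch": "A hedged bash sketch of the CLI collector described above (hostname is a placeholder; key-based login must be pre-staged, and some CUCM versions only expose the admin CLI interactively, in which case an expect wrapper is needed instead of a one-shot ssh command):\n\n```bash\n#!/usr/bin/env bash\n# Poll the CUCM publisher CLI and emit the replication state as an event (sketch only)\nPUB=cucm-pub01.example.com   # hypothetical publisher hostname\necho \"timestamp=$(date +%s) host=$PUB\"\nssh -o BatchMode=yes admin@\"$PUB\" \"utils dbreplication runtimestate\" 2>/dev/null\n```\n\nParse the per-subscriber status values (0=Init, 1=Bad, 2=Good, 3=Setup, 4=Uncertain) with a field extraction on the resulting sourcetype and alert on anything other than 2.",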
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Communications Manager (CUCM) — Publisher and Subscriber nodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [
                "cisco_ucm"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.40",
              "n": "CUCM Call Admission Control (CAC) Rejection Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Call Admission Control prevents WAN link saturation by rejecting calls when bandwidth allocation is exhausted for a location pair. CAC rejections mean users hear reorder tone or get rerouted to PSTN (higher cost). Trending CAC rejections by location directly supports SD-WAN, MPLS, and QoS capacity planning by quantifying where and when voice bandwidth demand exceeds allocation.",
              "t": "`Splunk Connect for Syslog`, `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434)",
              "d": "`sourcetype=cisco:ucm:syslog` (CAC events), `sourcetype=cisco:ucm:cdr` (cause code 47)",
              "q": "index=voip sourcetype=\"cisco:ucm:cdr\" destCause_value=47\n| eval location_pair=origNodeId.\" → \".destNodeId\n| bin _time span=1h\n| stats count as cac_rejections by _time, location_pair\n| eventstats avg(cac_rejections) as avg_rej, stdev(cac_rejections) as std_rej by location_pair\n| eval z_score=round((cac_rejections - avg_rej)/nullif(std_rej, 0), 2)\n| where cac_rejections > 5\n| table _time, location_pair, cac_rejections, avg_rej, z_score\n| sort -cac_rejections",
              "m": "CUCM CDR cause code 47 (Resource Unavailable) indicates CAC rejection. Map `origNodeId` and `destNodeId` to location names via a CUCM location lookup extracted from CUCM Admin. Trend rejections by hour and location pair to identify peak congestion windows. Correlate with WAN utilization data from SD-WAN (cat-05) to validate whether the location bandwidth configuration matches actual link capacity. Alert when any location pair exceeds 5 rejections in an hour — this indicates active user impact. Use this data to justify bandwidth upgrades or QoS policy changes. Track week-over-week trends to measure whether capacity additions reduce rejections.",
              "z": "Line chart (CAC rejections per location pair over 7 days), Heatmap (hour of day × location pair), Table (top rejected location pairs), Single value (total rejections today vs yesterday).",
              "kfp": "CAC during correct admission control; some rejects are the system protecting quality.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Connect for Syslog`, `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434).\n• Ensure the following data sources are available: `sourcetype=cisco:ucm:syslog` (CAC events), `sourcetype=cisco:ucm:cdr` (cause code 47).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCUCM CDR cause code 47 (Resource Unavailable) indicates CAC rejection. Map `origNodeId` and `destNodeId` to location names via a CUCM location lookup extracted from CUCM Admin. Trend rejections by hour and location pair to identify peak congestion windows. Correlate with WAN utilization data from SD-WAN (cat-05) to validate whether the location bandwidth configuration matches actual link capacity. Alert when any location pair exceeds 5 rejections in an hour — this indicates active user impact. U…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:cdr\" destCause_value=47\n| eval location_pair=origNodeId.\" → \".destNodeId\n| bin _time span=1h\n| stats count as cac_rejections by _time, location_pair\n| eventstats avg(cac_rejections) as avg_rej, stdev(cac_rejections) as std_rej by location_pair\n| eval z_score=round((cac_rejections - avg_rej)/nullif(std_rej, 0), 2)\n| where cac_rejections > 5\n| table _time, location_pair, cac_rejections, avg_rej, z_score\n| sort -cac_rejections\n```\n\nUnderstanding this SPL\n\n**CUCM Call Admission Control (CAC) Rejection Trending** — Call Admission Control prevents WAN link saturation by rejecting calls when bandwidth allocation is exhausted for a location pair. CAC rejections mean users hear reorder tone or get rerouted to PSTN (higher cost). Trending CAC rejections by location directly supports SD-WAN, MPLS, and QoS capacity planning by quantifying where and when voice bandwidth demand exceeds allocation.\n\nDocumented **Data sources**: `sourcetype=cisco:ucm:syslog` (CAC events), `sourcetype=cisco:ucm:cdr` (cause code 47). **App/TA** (typical add-on context): `Splunk Connect for Syslog`, `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:cdr. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:cdr\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **location_pair** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, location_pair** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` computes the same aggregates (here avg and stdev **by location_pair**) but appends them to every row instead of collapsing events — used here to baseline each location pair for the z-score.\n• `eval` defines or adjusts **z_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cac_rejections > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CUCM Call Admission Control (CAC) Rejection Trending**): table _time, location_pair, cac_rejections, avg_rej, z_score\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CAC rejections per location pair over 7 days), Heatmap (hour of day × location pair), Table (top rejected location pairs), Single value (total rejections today vs yesterday).",
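              "_sketch": "A minimal sketch of the week-over-week comparison mentioned above, using `timewrap` to overlay this week's hourly rejection counts on prior weeks:\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:cdr\" destCause_value=47\n| timechart span=1h count as cac_rejections\n| timewrap 1week\n```\n\nFlat or falling lines after a bandwidth change confirm the capacity addition actually reduced rejections.",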
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Communications Manager (CUCM)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.41",
              "n": "CUCM Hunt Group and Line Group Overflow",
              "c": "medium",
              "f": "intermediate",
              "v": "Hunt pilots distribute incoming calls across line groups (e.g., helpdesk, sales, reception). When all members of a line group are busy, calls overflow to the next group or voicemail. Excessive overflow indicates understaffing, misconfigured hunt lists, or members not logging into their phones. Tracking overflow rates per hunt pilot directly supports workforce management and ensures callers reach a live agent rather than voicemail during business hours.",
              "t": "`TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434)",
              "d": "`sourcetype=cisco:ucm:cdr`",
              "q": "index=voip sourcetype=\"cisco:ucm:cdr\"\n| where isnotnull(huntPilotDN)\n| eval answered=if(destCause_value==16, 1, 0)\n| eval overflowed=if(lastRedirectDn!=huntPilotDN AND isnotnull(lastRedirectDn), 1, 0)\n| eval to_voicemail=if(like(destDeviceName, \"VM%\") OR like(destDeviceName, \"Unity%\"), 1, 0)\n| bin _time span=1h\n| stats count as total_calls, sum(answered) as answered, sum(overflowed) as overflowed, sum(to_voicemail) as to_vm by _time, huntPilotDN\n| eval answer_pct=round(answered*100/total_calls, 1)\n| eval overflow_pct=round(overflowed*100/total_calls, 1)\n| eval vm_pct=round(to_vm*100/total_calls, 1)\n| where overflow_pct > 20 OR vm_pct > 30\n| table _time, huntPilotDN, total_calls, answer_pct, overflow_pct, vm_pct\n| sort -overflow_pct",
              "m": "Ingest CUCM CDR data. The `huntPilotDN` field identifies calls that entered a hunt pilot. `lastRedirectDn` shows where the call was ultimately redirected — if it differs from the hunt pilot, the call overflowed. Calls landing on devices named VM* or Unity* went to voicemail. Calculate answer rate, overflow rate, and voicemail rate per hunt pilot per hour. Alert when overflow exceeds 20% or voicemail exceeds 30% during business hours (8am-6pm). Provide daily reports to department managers showing their hunt group performance. Correlate with agent availability data if integrated with contact center (UC-11.3.42). Use trends to recommend hunt group membership changes or additional line group members during peak periods.",
              "z": "Bar chart (answer/overflow/voicemail split per hunt pilot), Line chart (overflow rate trend over 5 business days), Table (hunt pilots with highest overflow), Single value (fleet-wide answer rate).",
              "kfp": "Overflows during campaigns and seasonal peaks when hunt settings are working as designed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434).\n• Ensure the following data sources are available: `sourcetype=cisco:ucm:cdr`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CUCM CDR data. The `huntPilotDN` field identifies calls that entered a hunt pilot. `lastRedirectDn` shows where the call was ultimately redirected — if it differs from the hunt pilot, the call overflowed. Calls landing on devices named VM* or Unity* went to voicemail. Calculate answer rate, overflow rate, and voicemail rate per hunt pilot per hour. Alert when overflow exceeds 20% or voicemail exceeds 30% during business hours (8am-6pm). Provide daily reports to department managers showing…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:cdr\"\n| where isnotnull(huntPilotDN)\n| eval answered=if(destCause_value==16, 1, 0)\n| eval overflowed=if(lastRedirectDn!=huntPilotDN AND isnotnull(lastRedirectDn), 1, 0)\n| eval to_voicemail=if(like(destDeviceName, \"VM%\") OR like(destDeviceName, \"Unity%\"), 1, 0)\n| bin _time span=1h\n| stats count as total_calls, sum(answered) as answered, sum(overflowed) as overflowed, sum(to_voicemail) as to_vm by _time, huntPilotDN\n| eval answer_pct=round(answered*100/total_calls, 1)\n| eval overflow_pct=round(overflowed*100/total_calls, 1)\n| eval vm_pct=round(to_vm*100/total_calls, 1)\n| where overflow_pct > 20 OR vm_pct > 30\n| table _time, huntPilotDN, total_calls, answer_pct, overflow_pct, vm_pct\n| sort -overflow_pct\n```\n\nUnderstanding this SPL\n\n**CUCM Hunt Group and Line Group Overflow** — Hunt pilots distribute incoming calls across line groups (e.g., helpdesk, sales, reception). When all members of a line group are busy, calls overflow to the next group or voicemail. Excessive overflow indicates understaffing, misconfigured hunt lists, or members not logging into their phones. Tracking overflow rates per hunt pilot directly supports workforce management and ensures callers reach a live agent rather than voicemail during business hours.\n\nDocumented **Data sources**: `sourcetype=cisco:ucm:cdr`. **App/TA** (typical add-on context): `TA for Cisco CDR Reporting and Analytics` (Splunkbase #4434). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:cdr. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:cdr\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(huntPilotDN)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **answered** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overflowed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **to_voicemail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, huntPilotDN** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **answer_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overflow_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **vm_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overflow_pct > 20 OR vm_pct > 30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CUCM Hunt Group and Line Group Overflow**): table _time, huntPilotDN, total_calls, answer_pct, overflow_pct, vm_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (answer/overflow/voicemail split per hunt pilot), Line chart (overflow rate trend over 5 business days), Table (hunt pilots with highest overflow), Single value (fleet-wide answer rate).",
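              "_sketch": "A minimal sketch of the business-hours (8am-6pm) restriction mentioned above — prepend it to the main search's aggregation stages; the hour is derived from `_time`, which assumes your indexers and phones agree on timezone:\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:cdr\"\n| where isnotnull(huntPilotDN)\n| eval hour=tonumber(strftime(_time, \"%H\"))\n| where hour >= 8 AND hour < 18\n```\n\nScoping to business hours keeps overnight voicemail routing from inflating the overflow and voicemail percentages.",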
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Communications Manager (CUCM)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see when hunt groups overflow or abandon calls pile up, so staffing and call routing can be fixed before service suffers.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.42",
              "n": "Webex Contact Center Agent State and Occupancy",
              "c": "high",
              "f": "advanced",
              "v": "Agent state distribution directly determines customer wait times and contact center throughput. Agents stuck in \"Not Ready\" or spending excessive time in \"Wrap-Up\" reduce effective capacity without appearing as staffing shortages. Real-time and trended agent state analytics expose hidden productivity issues, validate workforce management schedules, and provide evidence for staffing adjustments that reduce customer wait times.",
              "t": "Webex Contact Center GraphQL API via HEC or scripted input, `Cisco Webex Add-on` (Splunkbase #5781)",
              "d": "`sourcetype=wxcc:agent_activity` (custom via API), `sourcetype=cisco:webex:events`",
              "q": "index=contact_center sourcetype=\"wxcc:agent_activity\"\n| eval state_duration=if(isnotnull(duration_sec), duration_sec, 0)\n| stats sum(eval(if(state==\"Available\", state_duration, 0))) as avail_sec,\n        sum(eval(if(state==\"Talking\", state_duration, 0))) as talk_sec,\n        sum(eval(if(state==\"WrapUp\", state_duration, 0))) as wrap_sec,\n        sum(eval(if(state==\"NotReady\", state_duration, 0))) as notready_sec,\n        sum(state_duration) as total_sec\n        by agent_id, agent_name, team\n| eval occupancy_pct=round((talk_sec+wrap_sec)*100/total_sec, 1)\n| eval notready_pct=round(notready_sec*100/total_sec, 1)\n| eval avg_wrap_min=round(wrap_sec/60, 1)\n| where occupancy_pct < 50 OR notready_pct > 40\n| sort -notready_pct\n| table agent_name, team, occupancy_pct, notready_pct, avg_wrap_min, talk_sec, total_sec",
              "m": "Ingest Webex Contact Center agent activity data via the WxCC GraphQL API (Agent Activity endpoint) using a scripted input or HEC integration. Each record contains agent ID, state (Available, Talking, Hold, WrapUp, NotReady, RONA), state duration, and timestamp. Calculate occupancy (time in Talking+WrapUp as percentage of logged-in time) and Not Ready percentage per agent per shift. Industry benchmarks: occupancy 75-85% is healthy; below 50% indicates overstaffing or excessive breaks; NotReady above 40% requires investigation. Alert supervisors when agents exceed configured thresholds. Provide team-level aggregation for workforce management. Track daily/weekly trends to validate schedule adherence and identify coaching opportunities.",
              "z": "Stacked bar chart (state distribution per agent), Gauge (team occupancy), Table (agents with low occupancy or high NotReady), Line chart (team occupancy trend over 30 days).",
              "kfp": "Breaks, coaching, and after-call work that look like not-ready for benign reasons.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Webex Contact Center GraphQL API via HEC or scripted input, `Cisco Webex Add-on` (Splunkbase #5781).\n• Ensure the following data sources are available: `sourcetype=wxcc:agent_activity` (custom via API), `sourcetype=cisco:webex:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Webex Contact Center agent activity data via the WxCC GraphQL API (Agent Activity endpoint) using a scripted input or HEC integration. Each record contains agent ID, state (Available, Talking, Hold, WrapUp, NotReady, RONA), state duration, and timestamp. Calculate occupancy (time in Talking+WrapUp as percentage of logged-in time) and Not Ready percentage per agent per shift. Industry benchmarks: occupancy 75-85% is healthy; below 50% indicates overstaffing or excessive breaks; NotReady ab…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=contact_center sourcetype=\"wxcc:agent_activity\"\n| eval state_duration=if(isnotnull(duration_sec), duration_sec, 0)\n| stats sum(eval(if(state==\"Available\", state_duration, 0))) as avail_sec,\n        sum(eval(if(state==\"Talking\", state_duration, 0))) as talk_sec,\n        sum(eval(if(state==\"WrapUp\", state_duration, 0))) as wrap_sec,\n        sum(eval(if(state==\"NotReady\", state_duration, 0))) as notready_sec,\n        sum(state_duration) as total_sec\n        by agent_id, agent_name, team\n| eval occupancy_pct=round((talk_sec+wrap_sec)*100/total_sec, 1)\n| eval notready_pct=round(notready_sec*100/total_sec, 1)\n| eval avg_wrap_min=round(wrap_sec/60, 1)\n| where occupancy_pct < 50 OR notready_pct > 40\n| sort -notready_pct\n| table agent_name, team, occupancy_pct, notready_pct, avg_wrap_min, talk_sec, total_sec\n```\n\nUnderstanding this SPL\n\n**Webex Contact Center Agent State and Occupancy** — Agent state distribution directly determines customer wait times and contact center throughput. Agents stuck in \"Not Ready\" or spending excessive time in \"Wrap-Up\" reduce effective capacity without appearing as staffing shortages. Real-time and trended agent state analytics expose hidden productivity issues, validate workforce management schedules, and provide evidence for staffing adjustments that reduce customer wait times.\n\nDocumented **Data sources**: `sourcetype=wxcc:agent_activity` (custom via API), `sourcetype=cisco:webex:events`. **App/TA** (typical add-on context): Webex Contact Center GraphQL API via HEC or scripted input, `Cisco Webex Add-on` (Splunkbase #5781). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: contact_center; **sourcetype**: wxcc:agent_activity. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=contact_center, sourcetype=\"wxcc:agent_activity\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **state_duration** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by agent_id, agent_name, team** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **occupancy_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **notready_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **avg_wrap_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where occupancy_pct < 50 OR notready_pct > 40` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Webex Contact Center Agent State and Occupancy**): table agent_name, team, occupancy_pct, notready_pct, avg_wrap_min, talk_sec, total_sec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar chart (state distribution per agent), Gauge (team occupancy), Table (agents with low occupancy or high NotReady), Line chart (team occupancy trend over 30 days).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Webex Contact Center (WxCC), Webex Contact Center Enterprise (WxCCE)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.43",
              "n": "Webex Contact Center IVR Containment Rate",
              "c": "high",
              "f": "intermediate",
              "v": "IVR containment rate measures the percentage of callers who complete their task within the IVR self-service system without speaking to a live agent. High containment reduces agent workload and cost per contact. Declining containment signals IVR menu confusion, new customer issues not covered by self-service, or technical failures in IVR integrations (database lookup failures, speech recognition errors) — all of which increase agent queue volume and customer frustration.",
              "t": "Webex Contact Center GraphQL API via HEC, `Cisco Webex Add-on` (Splunkbase #5781)",
              "d": "`sourcetype=wxcc:ivr_activity` (custom via API), `sourcetype=wxcc:call_legs`",
              "q": "index=contact_center sourcetype=\"wxcc:call_legs\"\n| eval reached_agent=if(isnotnull(agent_id) AND agent_id!=\"\", 1, 0)\n| eval self_served=if(reached_agent==0 AND disposition==\"Completed\", 1, 0)\n| eval abandoned_ivr=if(reached_agent==0 AND disposition==\"Abandoned\", 1, 0)\n| bin _time span=1d\n| stats count as total_calls, sum(self_served) as contained, sum(reached_agent) as to_agent, sum(abandoned_ivr) as abandoned by _time, entry_point\n| eval containment_pct=round(contained*100/total_calls, 1)\n| eval abandon_pct=round(abandoned*100/total_calls, 1)\n| table _time, entry_point, total_calls, contained, to_agent, abandoned, containment_pct, abandon_pct\n| sort -_time, -total_calls",
              "m": "Ingest WxCC call leg data which tracks each call's journey through the IVR flow. A call is \"contained\" if it completes with a successful disposition without ever being connected to an agent. Track containment rate daily by entry point (phone number or IVR menu). Industry benchmarks vary: 30-50% for complex support, 60-80% for billing/account inquiries. Alert when containment drops more than 10 percentage points from the 7-day average — this usually indicates an IVR integration failure (e.g., backend API timeout causing the \"try again later\" path). Correlate IVR path data with specific menu choices to identify where callers bail out. Report weekly to operations leadership with trend and top escalation reasons.",
              "z": "Line chart (containment rate trend over 30 days), Funnel chart (IVR path flow from entry to exit), Bar chart (containment by entry point), Single value (today's containment vs target).",
              "kfp": "Simple call flows, callbacks, and high callback-success designs that change containment rate.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Webex Contact Center GraphQL API via HEC, `Cisco Webex Add-on` (Splunkbase #5781).\n• Ensure the following data sources are available: `sourcetype=wxcc:ivr_activity` (custom via API), `sourcetype=wxcc:call_legs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest WxCC call leg data which tracks each call's journey through the IVR flow. A call is \"contained\" if it completes with a successful disposition without ever being connected to an agent. Track containment rate daily by entry point (phone number or IVR menu). Industry benchmarks vary: 30-50% for complex support, 60-80% for billing/account inquiries. Alert when containment drops more than 10 percentage points from the 7-day average — this usually indicates an IVR integration failure (e.g., bac…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=contact_center sourcetype=\"wxcc:call_legs\"\n| eval reached_agent=if(isnotnull(agent_id) AND agent_id!=\"\", 1, 0)\n| eval self_served=if(reached_agent==0 AND disposition==\"Completed\", 1, 0)\n| eval abandoned_ivr=if(reached_agent==0 AND disposition==\"Abandoned\", 1, 0)\n| bin _time span=1d\n| stats count as total_calls, sum(self_served) as contained, sum(reached_agent) as to_agent, sum(abandoned_ivr) as abandoned by _time, entry_point\n| eval containment_pct=round(contained*100/total_calls, 1)\n| eval abandon_pct=round(abandoned*100/total_calls, 1)\n| table _time, entry_point, total_calls, contained, to_agent, abandoned, containment_pct, abandon_pct\n| sort -_time, -total_calls\n```\n\nUnderstanding this SPL\n\n**Webex Contact Center IVR Containment Rate** — IVR containment rate measures the percentage of callers who complete their task within the IVR self-service system without speaking to a live agent. High containment reduces agent workload and cost per contact. Declining containment signals IVR menu confusion, new customer issues not covered by self-service, or technical failures in IVR integrations (database lookup failures, speech recognition errors) — all of which increase agent queue volume and customer frustration.\n\nDocumented **Data sources**: `sourcetype=wxcc:ivr_activity` (custom via API), `sourcetype=wxcc:call_legs`. **App/TA** (typical add-on context): Webex Contact Center GraphQL API via HEC, `Cisco Webex Add-on` (Splunkbase #5781). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: contact_center; **sourcetype**: wxcc:call_legs. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=contact_center, sourcetype=\"wxcc:call_legs\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **reached_agent** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **self_served** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **abandoned_ivr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, entry_point** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **containment_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **abandon_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Webex Contact Center IVR Containment Rate**): table _time, entry_point, total_calls, contained, to_agent, abandoned, containment_pct, abandon_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (containment rate trend over 30 days), Funnel chart (IVR path flow from entry to exit), Bar chart (containment by entry point), Single value (today's containment vs target).",
              "script": "",
              "premium": "",
              "hw": "Webex Contact Center (WxCC), Cisco UCCX IVR",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.44",
              "n": "Webex Contact Center Customer Wait Time SLA by Skill Group",
              "c": "critical",
              "f": "intermediate",
              "v": "Queue-level SLA metrics hide performance disparities across skill groups. A blended 80% service level may mask that billing support hits 95% while technical support languishes at 55%. Granular skill-group SLA tracking exposes which specializations need staffing adjustments, schedule optimization, or skills-based routing tuning — directly preventing customer churn in the skill groups that matter most to revenue.",
              "t": "Webex Contact Center GraphQL API via HEC, `Cisco Webex Add-on` (Splunkbase #5781)",
              "d": "`sourcetype=wxcc:queue_stats` (custom via API), `sourcetype=wxcc:call_legs`",
              "q": "index=contact_center sourcetype=\"wxcc:call_legs\" queue_name=*\n| eval answered_in_sla=if(queue_wait_sec<=30 AND isnotnull(agent_id), 1, 0)\n| eval answered=if(isnotnull(agent_id), 1, 0)\n| eval abandoned=if(isnull(agent_id) AND disposition==\"Abandoned\", 1, 0)\n| bin _time span=30m\n| stats count as offered, sum(answered) as answered, sum(answered_in_sla) as in_sla, sum(abandoned) as abandoned, avg(queue_wait_sec) as avg_wait, perc95(queue_wait_sec) as p95_wait by _time, queue_name, skill_group\n| eval sla_pct=round(in_sla*100/offered, 1)\n| eval abandon_pct=round(abandoned*100/offered, 1)\n| table _time, queue_name, skill_group, offered, answered, in_sla, sla_pct, abandoned, abandon_pct, avg_wait, p95_wait\n| sort _time, -offered",
              "m": "Ingest WxCC queue and call leg data. Define SLA threshold per skill group (commonly 80% of calls answered within 30 seconds, but varies: sales may target 20 seconds, tier-2 support may allow 60 seconds). Build a `skill_group_sla_target` lookup with per-group thresholds. Compare actual performance against target every 30 minutes. Alert when any skill group drops below its SLA target for 2 consecutive periods. Track p95 wait time as a customer experience indicator — even if average wait is acceptable, long tail waits destroy satisfaction. Provide real-time wallboard data and daily reports for workforce managers. Correlate with agent state data (UC-11.3.42) to determine if poor SLA is caused by insufficient staffing or high NotReady time.",
              "z": "Table (skill groups with SLA status — green/red), Gauge (SLA % per skill group), Line chart (SLA trend per skill group over 30 days), Bar chart (p95 wait time by skill group).",
              "kfp": "Training queues, new hire cohorts, and understaffed intervals during flu season or holidays.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Webex Contact Center GraphQL API via HEC, `Cisco Webex Add-on` (Splunkbase #5781).\n• Ensure the following data sources are available: `sourcetype=wxcc:queue_stats` (custom via API), `sourcetype=wxcc:call_legs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest WxCC queue and call leg data. Define SLA threshold per skill group (commonly 80% of calls answered within 30 seconds, but varies: sales may target 20 seconds, tier-2 support may allow 60 seconds). Build a `skill_group_sla_target` lookup with per-group thresholds. Compare actual performance against target every 30 minutes. Alert when any skill group drops below its SLA target for 2 consecutive periods. Track p95 wait time as a customer experience indicator — even if average wait is accepta…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=contact_center sourcetype=\"wxcc:call_legs\" queue_name=*\n| eval answered_in_sla=if(queue_wait_sec<=30 AND isnotnull(agent_id), 1, 0)\n| eval answered=if(isnotnull(agent_id), 1, 0)\n| eval abandoned=if(isnull(agent_id) AND disposition==\"Abandoned\", 1, 0)\n| bin _time span=30m\n| stats count as offered, sum(answered) as answered, sum(answered_in_sla) as in_sla, sum(abandoned) as abandoned, avg(queue_wait_sec) as avg_wait, perc95(queue_wait_sec) as p95_wait by _time, queue_name, skill_group\n| eval sla_pct=round(in_sla*100/offered, 1)\n| eval abandon_pct=round(abandoned*100/offered, 1)\n| table _time, queue_name, skill_group, offered, answered, in_sla, sla_pct, abandoned, abandon_pct, avg_wait, p95_wait\n| sort _time, -offered\n```\n\nUnderstanding this SPL\n\n**Webex Contact Center Customer Wait Time SLA by Skill Group** — Queue-level SLA metrics hide performance disparities across skill groups. A blended 80% service level may mask that billing support hits 95% while technical support languishes at 55%. Granular skill-group SLA tracking exposes which specializations need staffing adjustments, schedule optimization, or skills-based routing tuning — directly preventing customer churn in the skill groups that matter most to revenue.\n\nDocumented **Data sources**: `sourcetype=wxcc:queue_stats` (custom via API), `sourcetype=wxcc:call_legs`. **App/TA** (typical add-on context): Webex Contact Center GraphQL API via HEC, `Cisco Webex Add-on` (Splunkbase #5781). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: contact_center; **sourcetype**: wxcc:call_legs. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=contact_center, sourcetype=\"wxcc:call_legs\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **answered_in_sla** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **answered** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **abandoned** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, queue_name, skill_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **sla_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **abandon_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Webex Contact Center Customer Wait Time SLA by Skill Group**): table _time, queue_name, skill_group, offered, answered, in_sla, sla_pct, abandoned, abandon_pct, avg_wait, p95_wait\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (skill groups with SLA status — green/red), Gauge (SLA % per skill group), Line chart (SLA trend per skill group over 30 days), Bar chart (p95 wait time by skill group).",
              "script": "",
              "premium": "",
              "hw": "Webex Contact Center (WxCC)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.45",
              "n": "UCCX Real-Time Queue and Agent Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Cisco Unified Contact Center Express (UCCX) remains widely deployed for small-to-medium contact centers. Native UCCX reporting is limited to historical views, and Finesse supervisor gadgets show only a single queue at a time. Splunk aggregation of UCCX queue statistics provides a unified real-time and historical view across all Contact Service Queues (CSQs), enabling supervisors to spot developing queue emergencies and workforce planners to validate staffing models with actual data.",
              "t": "Custom scripted input (UCCX REST API / Finesse API), `Splunk Connect for Syslog`",
              "d": "`sourcetype=uccx:csq_stats` (custom via REST API), `sourcetype=uccx:agent_stats`, UCCX wallboard XML feed",
              "q": "index=contact_center sourcetype=\"uccx:csq_stats\"\n| stats latest(calls_waiting) as waiting, latest(calls_handled) as handled, latest(calls_abandoned) as abandoned, latest(longest_wait_sec) as longest_wait, latest(agents_available) as avail_agents by csq_name\n| eval calls_per_agent=if(avail_agents>0, round(waiting/avail_agents, 1), \"N/A - No Agents\")\n| eval alert_level=case(\n    waiting>10 AND avail_agents==0, \"CRITICAL\",\n    waiting>5 OR longest_wait>120, \"WARNING\",\n    1==1, \"OK\")\n| table csq_name, waiting, handled, abandoned, longest_wait, avail_agents, calls_per_agent, alert_level\n| sort -waiting",
              "m": "Deploy a scripted input that polls the UCCX REST API (available on port 9443) for CSQ statistics every 60 seconds. The API returns calls waiting, calls handled, calls abandoned, average/max wait times, and available agents per CSQ. Parse into structured events and index. For agent-level data, poll the Finesse REST API for agent state and call details. Alert when any CSQ has calls waiting with zero available agents (immediate customer impact). Provide a wallboard-style dashboard with auto-refresh for supervisors. Track historical queue performance trends to validate workforce management forecasts. Combine with UCCX Historical Reporting data for end-of-day analytics.",
              "z": "Single value tiles (calls waiting, longest wait, available agents — per CSQ), Table (all CSQs with status), Line chart (calls waiting trend over shift), Bar chart (handled vs abandoned by CSQ).",
              "kfp": "Agent login delays, after-hours skilling, and maintenance on UCCX that pauses ready state.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (UCCX REST API / Finesse API), `Splunk Connect for Syslog`.\n• Ensure the following data sources are available: `sourcetype=uccx:csq_stats` (custom via REST API), `sourcetype=uccx:agent_stats`, UCCX wallboard XML feed.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy a scripted input that polls the UCCX REST API (available on port 9443) for CSQ statistics every 60 seconds. The API returns calls waiting, calls handled, calls abandoned, average/max wait times, and available agents per CSQ. Parse into structured events and index. For agent-level data, poll the Finesse REST API for agent state and call details. Alert when any CSQ has calls waiting with zero available agents (immediate customer impact). Provide a wallboard-style dashboard with auto-refresh…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=contact_center sourcetype=\"uccx:csq_stats\"\n| stats latest(calls_waiting) as waiting, latest(calls_handled) as handled, latest(calls_abandoned) as abandoned, latest(longest_wait_sec) as longest_wait, latest(agents_available) as avail_agents by csq_name\n| eval calls_per_agent=if(avail_agents>0, round(waiting/avail_agents, 1), \"N/A - No Agents\")\n| eval alert_level=case(\n    waiting>10 AND avail_agents==0, \"CRITICAL\",\n    waiting>5 OR longest_wait>120, \"WARNING\",\n    1==1, \"OK\")\n| table csq_name, waiting, handled, abandoned, longest_wait, avail_agents, calls_per_agent, alert_level\n| sort -waiting\n```\n\nUnderstanding this SPL\n\n**UCCX Real-Time Queue and Agent Monitoring** — Cisco Unified Contact Center Express (UCCX) remains widely deployed for small-to-medium contact centers. Native UCCX reporting is limited to historical views, and Finesse supervisor gadgets show only a single queue at a time. Splunk aggregation of UCCX queue statistics provides a unified real-time and historical view across all Contact Service Queues (CSQs), enabling supervisors to spot developing queue emergencies and workforce planners to validate staffing models with…\n\nDocumented **Data sources**: `sourcetype=uccx:csq_stats` (custom via REST API), `sourcetype=uccx:agent_stats`, UCCX wallboard XML feed. **App/TA** (typical add-on context): Custom scripted input (UCCX REST API / Finesse API), `Splunk Connect for Syslog`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: contact_center; **sourcetype**: uccx:csq_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=contact_center, sourcetype=\"uccx:csq_stats\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by csq_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **calls_per_agent** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **alert_level** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **UCCX Real-Time Queue and Agent Monitoring**): table csq_name, waiting, handled, abandoned, longest_wait, avail_agents, calls_per_agent, alert_level\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value tiles (calls waiting, longest wait, available agents — per CSQ), Table (all CSQs with status), Line chart (calls waiting trend over shift), Bar chart (handled vs abandoned by CSQ).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Cisco Unified Contact Center Express (UCCX), Cisco Finesse",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.46",
              "n": "Contact Center Abandon Rate Correlation with Network Quality",
              "c": "high",
              "f": "advanced",
              "v": "Contact center abandons have two fundamentally different root causes: callers hanging up because of long wait times (staffing issue) vs callers disconnected due to network/voice quality failures (infrastructure issue). Distinguishing these requires correlating abandon events with network quality metrics. Misdiagnosing network-caused abandons as staffing issues wastes workforce budget; misdiagnosing wait-time abandons as network issues wastes engineering effort.",
              "t": "`Cisco Webex Add-on` (Splunkbase #5781), `Cisco ThousandEyes App for Splunk` (Splunkbase #7719), WxCC API",
              "d": "`sourcetype=wxcc:call_legs`, `sourcetype=thousandeyes:tests`, `sourcetype=cisco:ucm:cmr`",
              "q": "index=contact_center sourcetype=\"wxcc:call_legs\" disposition=\"Abandoned\"\n| eval abandon_after_sec=queue_wait_sec\n| eval time_bucket=case(abandon_after_sec<10, \"0-10s (likely drop)\", abandon_after_sec<30, \"10-30s\", abandon_after_sec<60, \"30-60s\", abandon_after_sec<120, \"1-2min\", 1==1, \"2min+ (likely frustration)\")\n| bin _time span=1h\n| stats count as abandons by _time, time_bucket, entry_point\n| append [search index=network sourcetype=\"thousandeyes:tests\" test_type=\"voice\"\n    | bin _time span=1h\n    | stats avg(mos) as avg_mos, avg(packet_loss_pct) as avg_loss by _time]\n| stats sum(abandons) as total_abandons, first(avg_mos) as network_mos, first(avg_loss) as network_loss by _time\n| eval likely_cause=case(network_mos<3.5 OR network_loss>2, \"Network Quality\", total_abandons>0 AND (isnull(network_mos) OR network_mos>=3.5), \"Wait Time\", 1==1, \"Unknown\")\n| table _time, total_abandons, network_mos, network_loss, likely_cause",
              "m": "Ingest both contact center abandon events and network quality metrics (ThousandEyes voice tests, CUCM CMR data, or SD-WAN quality metrics) into Splunk. Classify abandons by timing: calls abandoned within 10 seconds likely experienced a technical failure (no ring, one-way audio, poor quality); calls abandoned after 2+ minutes are likely frustrated by wait time. Correlate with concurrent ThousandEyes MOS scores and packet loss on the voice path. When a cluster of short-duration abandons coincides with network quality degradation, classify as network-caused and alert the network team rather than the workforce management team. Build a daily report showing abandon root cause distribution to drive targeted improvements.",
              "z": "Stacked bar chart (abandons by time bucket), Line chart (abandon count overlaid with MOS score), Table (hourly breakdown with likely cause), Pie chart (root cause distribution).",
              "kfp": "Correlated network blips; abandoned calls are not always security incidents.",
              "refs": "[Cisco ThousandEyes App for Splunk](https://splunkbase.splunk.com/app/7719)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Webex Add-on` (Splunkbase #5781), `Cisco ThousandEyes App for Splunk` (Splunkbase #7719), WxCC API.\n• Ensure the following data sources are available: `sourcetype=wxcc:call_legs`, `sourcetype=thousandeyes:tests`, `sourcetype=cisco:ucm:cmr`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest both contact center abandon events and network quality metrics (ThousandEyes voice tests, CUCM CMR data, or SD-WAN quality metrics) into Splunk. Classify abandons by timing: calls abandoned within 10 seconds likely experienced a technical failure (no ring, one-way audio, poor quality); calls abandoned after 2+ minutes are likely frustrated by wait time. Correlate with concurrent ThousandEyes MOS scores and packet loss on the voice path. When a cluster of short-duration abandons coincides …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=contact_center sourcetype=\"wxcc:call_legs\" disposition=\"Abandoned\"\n| eval abandon_after_sec=queue_wait_sec\n| eval time_bucket=case(abandon_after_sec<10, \"0-10s (likely drop)\", abandon_after_sec<30, \"10-30s\", abandon_after_sec<60, \"30-60s\", abandon_after_sec<120, \"1-2min\", 1==1, \"2min+ (likely frustration)\")\n| bin _time span=1h\n| stats count as abandons by _time, time_bucket, entry_point\n| append [search index=network sourcetype=\"thousandeyes:tests\" test_type=\"voice\"\n    | bin _time span=1h\n    | stats avg(mos) as avg_mos, avg(packet_loss_pct) as avg_loss by _time]\n| stats sum(abandons) as total_abandons, first(avg_mos) as network_mos, first(avg_loss) as network_loss by _time\n| eval likely_cause=case(network_mos<3.5 OR network_loss>2, \"Network Quality\", total_abandons>0 AND (isnull(network_mos) OR network_mos>=3.5), \"Wait Time\", 1==1, \"Unknown\")\n| table _time, total_abandons, network_mos, network_loss, likely_cause\n```\n\nUnderstanding this SPL\n\n**Contact Center Abandon Rate Correlation with Network Quality** — Contact center abandons have two fundamentally different root causes: callers hanging up because of long wait times (staffing issue) vs callers disconnected due to network/voice quality failures (infrastructure issue). Distinguishing these requires correlating abandon events with network quality metrics. Misdiagnosing network-caused abandons as staffing issues wastes workforce budget; misdiagnosing wait-time abandons as network issues wastes engineering effort.\n\nDocumented **Data sources**: `sourcetype=wxcc:call_legs`, `sourcetype=thousandeyes:tests`, `sourcetype=cisco:ucm:cmr`. **App/TA** (typical add-on context): `Cisco Webex Add-on` (Splunkbase #5781), `Cisco ThousandEyes App for Splunk` (Splunkbase #7719), WxCC API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: contact_center; **sourcetype**: wxcc:call_legs. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=contact_center, sourcetype=\"wxcc:call_legs\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **abandon_after_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **time_bucket** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, time_bucket, entry_point** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **likely_cause** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Contact Center Abandon Rate Correlation with Network Quality**): table _time, total_abandons, network_mos, network_loss, likely_cause\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar chart (abandons by time bucket), Line chart (abandon count overlaid with MOS score), Table (hourly breakdown with likely cause), Pie chart (root cause distribution).",
              "script": "",
              "premium": "",
              "hw": "Webex Contact Center, Cisco ThousandEyes, CUCM",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_thousandeyes",
                "cisco_webex"
              ],
              "sapp": [
                {
                  "name": "Cisco ThousandEyes App for Splunk",
                  "id": 7719,
                  "url": "https://splunkbase.splunk.com/app/7719",
                  "desc": "Pre-built dashboards for ThousandEyes agent tests, events, and activity logs",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/1e95cd84-404f-438c-8446-0426643c49a2.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/8e5ba70f-d63f-48af-9d03-cb837f61db45.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.47",
              "n": "Jabber Client Version Compliance and Health",
              "c": "medium",
              "f": "intermediate",
              "v": "Cisco Jabber clients run on Windows, macOS, iOS, and Android with different version lifecycles and vulnerability profiles. Outdated Jabber versions may have known security vulnerabilities (CVEs), lack support for current SRTP/TLS standards, or miss critical bug fixes. Fleet-wide version tracking replaces manual inventory audits and supports security compliance reporting by quantifying the attack surface from legacy communication clients.",
              "t": "`Splunk Connect for Syslog`, CUCM AXL API scripted input",
              "d": "`sourcetype=cisco:ucm:syslog` (CSF/BOT/TCT/TAB registration events), `sourcetype=jabber:problemreport` (Jabber PRT logs)",
              "q": "index=voip sourcetype=\"cisco:ucm:syslog\" \"%CCM_CALLMANAGER-CALLMANAGER-7-DeviceRegistered%\"\n    (DeviceName=CSF* OR DeviceName=BOT* OR DeviceName=TCT* OR DeviceName=TAB*)\n| rex field=_raw \"DeviceName=(?<device_name>\\S+).*ActiveLoadID=(?<version>\\S+).*IPAddress=(?<ip>\\S+)\"\n| eval client_type=case(\n    like(device_name, \"CSF%\"), \"Jabber Windows/Mac\",\n    like(device_name, \"BOT%\"), \"Jabber Bot\",\n    like(device_name, \"TCT%\"), \"Jabber Mobile (Phone)\",\n    like(device_name, \"TAB%\"), \"Jabber Tablet\")\n| stats latest(version) as current_version, latest(ip) as last_ip, latest(_time) as last_seen, count as registrations by device_name, client_type\n| lookup jabber_version_baseline client_type OUTPUT min_version, eol_version\n| eval compliant=if(current_version>=min_version, \"Yes\", \"No\")\n| stats count as total, sum(eval(if(compliant==\"No\",1,0))) as non_compliant by client_type\n| eval compliance_pct=round((total-non_compliant)*100/total, 1)",
              "m": "CUCM logs device registration events for Jabber clients using device name prefixes: CSF (desktop softphone), BOT (Jabber bot/integration), TCT (mobile phone mode), TAB (tablet). The `ActiveLoadID` contains the Jabber version. Build a `jabber_version_baseline` lookup mapping client type to minimum acceptable version (from Cisco's Jabber release matrix). Schedule daily to track compliance. For crash analysis, collect Jabber Problem Report Tool (PRT) logs submitted to CUCM — these contain stack traces, network diagnostics, and configuration snapshots. Track crash frequency per version to prioritize upgrade campaigns. Alert when any client type's compliance drops below 80%.",
              "z": "Pie chart (version distribution per client type), Single value (fleet compliance %), Table (non-compliant devices with version and last seen), Bar chart (compliance by client type).",
              "kfp": "Beta clients in pilot groups, VDI, and mobile Jabber on unstable networks.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Connect for Syslog`, CUCM AXL API scripted input.\n• Ensure the following data sources are available: `sourcetype=cisco:ucm:syslog` (CSF/BOT/TCT/TAB registration events), `sourcetype=jabber:problemreport` (Jabber PRT logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCUCM logs device registration events for Jabber clients using device name prefixes: CSF (desktop softphone), BOT (Jabber bot/integration), TCT (mobile phone mode), TAB (tablet). The `ActiveLoadID` contains the Jabber version. Build a `jabber_version_baseline` lookup mapping client type to minimum acceptable version (from Cisco's Jabber release matrix). Schedule daily to track compliance. For crash analysis, collect Jabber Problem Report Tool (PRT) logs submitted to CUCM — these contain stack tra…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:syslog\" \"%CCM_CALLMANAGER-CALLMANAGER-7-DeviceRegistered%\"\n    (DeviceName=CSF* OR DeviceName=BOT* OR DeviceName=TCT* OR DeviceName=TAB*)\n| rex field=_raw \"DeviceName=(?<device_name>\\S+).*ActiveLoadID=(?<version>\\S+).*IPAddress=(?<ip>\\S+)\"\n| eval client_type=case(\n    like(device_name, \"CSF%\"), \"Jabber Windows/Mac\",\n    like(device_name, \"BOT%\"), \"Jabber Bot\",\n    like(device_name, \"TCT%\"), \"Jabber Mobile (Phone)\",\n    like(device_name, \"TAB%\"), \"Jabber Tablet\")\n| stats latest(version) as current_version, latest(ip) as last_ip, latest(_time) as last_seen, count as registrations by device_name, client_type\n| lookup jabber_version_baseline client_type OUTPUT min_version, eol_version\n| eval compliant=if(current_version>=min_version, \"Yes\", \"No\")\n| stats count as total, sum(eval(if(compliant==\"No\",1,0))) as non_compliant by client_type\n| eval compliance_pct=round((total-non_compliant)*100/total, 1)\n```\n\nUnderstanding this SPL\n\n**Jabber Client Version Compliance and Health** — Cisco Jabber clients run on Windows, macOS, iOS, and Android with different version lifecycles and vulnerability profiles. Outdated Jabber versions may have known security vulnerabilities (CVEs), lack support for current SRTP/TLS standards, or miss critical bug fixes. Fleet-wide version tracking replaces manual inventory audits and supports security compliance reporting by quantifying the attack surface from legacy communication clients.\n\nDocumented **Data sources**: `sourcetype=cisco:ucm:syslog` (CSF/BOT/TCT/TAB registration events), `sourcetype=jabber:problemreport` (Jabber PRT logs). **App/TA** (typical add-on context): `Splunk Connect for Syslog`, CUCM AXL API scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **client_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by device_name, client_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by client_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **compliance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (version distribution per client type), Single value (fleet compliance %), Table (non-compliant devices with version and last seen), Bar chart (compliance by client type).",
              "script": "",
              "premium": "",
              "hw": "Cisco Jabber for Windows, Cisco Jabber for Mac, Cisco Jabber for iOS, Cisco Jabber for Android",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [
                "cisco_ucm"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.48",
              "n": "IM and Presence Service Availability",
              "c": "high",
              "f": "intermediate",
              "v": "Cisco IM and Presence (IM&P) provides XMPP-based instant messaging, presence status, and federation with external systems. IM&P node failures cause presence to show all users as \"Unknown,\" messages to queue indefinitely, and inter-cluster federation to break. Unlike voice call failures which produce immediate user complaints, IM&P degradation often goes unreported for hours while quietly impacting team coordination and collaboration workflows.",
              "t": "`Splunk Connect for Syslog`, CUCM/IM&P RTMT log forwarding",
              "d": "`sourcetype=cisco:imp:syslog` (IM&P syslog), RTMT perfmon counters via scripted input",
              "q": "index=voip sourcetype=\"cisco:imp:syslog\"\n| eval service_impact=case(\n    like(_raw, \"%XMPPConnectionFailed%\") OR like(_raw, \"%XCPConnectionClosed%\"), \"XMPP\",\n    like(_raw, \"%SIPSubscriptionFailed%\") OR like(_raw, \"%PresenceSubscription%\"), \"Presence\",\n    like(_raw, \"%PeGroupNode%\") OR like(_raw, \"%InterCluster%\"), \"Federation\",\n    like(_raw, \"%DBReplication%\") OR like(_raw, \"%SchemaUpdate%\"), \"Database\",\n    1==1, \"Other\")\n| where service_impact!=\"Other\"\n| bin _time span=5m\n| stats count as events, dc(host) as affected_nodes, values(service_impact) as impacted_services by _time\n| where events > 3\n| table _time, affected_nodes, events, impacted_services\n| sort -_time",
              "m": "Forward IM&P node syslog via Splunk Connect for Syslog. Key events to monitor: XMPPConnectionFailed (client-facing messaging down), SIPSubscriptionFailed (presence status not updating), PeGroupNode errors (inter-cluster peering broken), and DBReplication issues (configuration sync failures). Classify events by service impact area. Alert when XMPP or Presence events spike above 3 per 5-minute window — this indicates active service degradation. For capacity monitoring, deploy a scripted input collecting IM&P RTMT perfmon counters: active XMPP sessions, SIP subscriptions, message rate, and PE node status. Track session counts against licensed capacity. Correlate IM&P health with CUCM cluster health (UC-11.3.39) as they share infrastructure.",
              "z": "Single value (IM&P service status — green/yellow/red), Timeline (service impact events), Bar chart (events by impact area), Line chart (XMPP session count over 24 hours).",
              "kfp": "Cluster maintenance, XMPP resubscription storms, and client upgrades that look like outages briefly.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Connect for Syslog`, CUCM/IM&P RTMT log forwarding.\n• Ensure the following data sources are available: `sourcetype=cisco:imp:syslog` (IM&P syslog), RTMT perfmon counters via scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward IM&P node syslog via Splunk Connect for Syslog. Key events to monitor: XMPPConnectionFailed (client-facing messaging down), SIPSubscriptionFailed (presence status not updating), PeGroupNode errors (inter-cluster peering broken), and DBReplication issues (configuration sync failures). Classify events by service impact area. Alert when XMPP or Presence events spike above 3 per 5-minute window — this indicates active service degradation. For capacity monitoring, deploy a scripted input coll…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:imp:syslog\"\n| eval service_impact=case(\n    like(_raw, \"%XMPPConnectionFailed%\") OR like(_raw, \"%XCPConnectionClosed%\"), \"XMPP\",\n    like(_raw, \"%SIPSubscriptionFailed%\") OR like(_raw, \"%PresenceSubscription%\"), \"Presence\",\n    like(_raw, \"%PeGroupNode%\") OR like(_raw, \"%InterCluster%\"), \"Federation\",\n    like(_raw, \"%DBReplication%\") OR like(_raw, \"%SchemaUpdate%\"), \"Database\",\n    1==1, \"Other\")\n| where service_impact!=\"Other\"\n| bin _time span=5m\n| stats count as events, dc(host) as affected_nodes, values(service_impact) as impacted_services by _time\n| where events > 3\n| table _time, affected_nodes, events, impacted_services\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**IM and Presence Service Availability** — Cisco IM and Presence (IM&P) provides XMPP-based instant messaging, presence status, and federation with external systems. IM&P node failures cause presence to show all users as \"Unknown,\" messages to queue indefinitely, and inter-cluster federation to break. Unlike voice call failures which produce immediate user complaints, IM&P degradation often goes unreported for hours while quietly impacting team coordination and collaboration workflows.\n\nDocumented **Data sources**: `sourcetype=cisco:imp:syslog` (IM&P syslog), RTMT perfmon counters via scripted input. **App/TA** (typical add-on context): `Splunk Connect for Syslog`, CUCM/IM&P RTMT log forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:imp:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:imp:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **service_impact** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where service_impact!=\"Other\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where events > 3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **IM and Presence Service Availability**): table _time, affected_nodes, events, impacted_services\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (IM&P service status — green/yellow/red), Timeline (service impact events), Bar chart (events by impact area), Line chart (XMPP session count over 24 hours).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Cisco IM and Presence Service (IM&P), Cisco Unified Presence Server",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [
                "cisco_ucm"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.49",
              "n": "Unity Connection Voicemail System Health",
              "c": "high",
              "f": "intermediate",
              "v": "Cisco Unity Connection handles voicemail, auto-attendant, and Interactive Voice Response functions. Port exhaustion during peak hours causes callers to hear busy signals instead of reaching voicemail. Message store capacity issues cause new messages to be rejected. MWI (Message Waiting Indicator) delivery failures leave users unaware of waiting messages. Monitoring these components prevents the silent voicemail failures that users only discover when someone says \"didn't you get my message?\"",
              "t": "`Splunk Connect for Syslog`, Unity Connection RTMT / Serviceability API scripted input",
              "d": "`sourcetype=cisco:unity:syslog`, `sourcetype=cisco:unity:perf` (custom via API)",
              "q": "index=voip sourcetype=\"cisco:unity:syslog\"\n| eval component=case(\n    like(_raw, \"%Port%\") OR like(_raw, \"%VoiceMail%port%\"), \"Ports\",\n    like(_raw, \"%MessageStore%\") OR like(_raw, \"%Mailbox%quota%\"), \"Storage\",\n    like(_raw, \"%MWI%\") OR like(_raw, \"%MessageWaiting%\"), \"MWI\",\n    like(_raw, \"%SMTP%\") OR like(_raw, \"%Notification%\"), \"Notifications\",\n    1==1, \"Other\")\n| where component!=\"Other\"\n| bin _time span=15m\n| stats count as events, values(component) as affected_components by _time, host\n| eval severity=case(\n    mvfind(affected_components, \"Ports\")>=0, \"High\",\n    mvfind(affected_components, \"Storage\")>=0, \"High\",\n    mvfind(affected_components, \"MWI\")>=0, \"Medium\",\n    1==1, \"Low\")\n| table _time, host, affected_components, events, severity\n| sort -_time",
              "m": "Forward Unity Connection syslog via Splunk Connect for Syslog. Monitor four key areas: (1) Port utilization — Unity has a fixed number of voice ports; when all are in use, callers get busy signals. Deploy a scripted input polling the CUPI REST API for port status every 2 minutes. Alert at 80% port utilization. (2) Message store capacity — track UnityDynSvc mailbox storage against configured quotas. Alert at 90% capacity. (3) MWI delivery — track MWI on/off notifications; failures mean the phone light stays off when messages are waiting. (4) SMTP notification queue — email notifications of voicemail messages queue when Exchange/O365 connectivity fails. Alert when queue depth exceeds 100. Correlate port exhaustion with CUCM call volume (UC-11.3.2) to validate port-to-call ratio.",
              "z": "Gauge (port utilization %), Single value (message store capacity %), Timeline (component events), Table (affected components with severity), Line chart (port utilization trend over 24 hours).",
              "kfp": "Unity failover tests, message move jobs, and storage expansion under planned change.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Connect for Syslog`, Unity Connection RTMT / Serviceability API scripted input.\n• Ensure the following data sources are available: `sourcetype=cisco:unity:syslog`, `sourcetype=cisco:unity:perf` (custom via API).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Unity Connection syslog via Splunk Connect for Syslog. Monitor four key areas: (1) Port utilization — Unity has a fixed number of voice ports; when all are in use, callers get busy signals. Deploy a scripted input polling the CUPI REST API for port status every 2 minutes. Alert at 80% port utilization. (2) Message store capacity — track UnityDynSvc mailbox storage against configured quotas. Alert at 90% capacity. (3) MWI delivery — track MWI on/off notifications; failures mean the phone …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:unity:syslog\"\n| eval component=case(\n    like(_raw, \"%Port%\") OR like(_raw, \"%VoiceMail%port%\"), \"Ports\",\n    like(_raw, \"%MessageStore%\") OR like(_raw, \"%Mailbox%quota%\"), \"Storage\",\n    like(_raw, \"%MWI%\") OR like(_raw, \"%MessageWaiting%\"), \"MWI\",\n    like(_raw, \"%SMTP%\") OR like(_raw, \"%Notification%\"), \"Notifications\",\n    1==1, \"Other\")\n| where component!=\"Other\"\n| bin _time span=15m\n| stats count as events, values(component) as affected_components by _time, host\n| eval severity=case(\n    mvfind(affected_components, \"Ports\")>=0, \"High\",\n    mvfind(affected_components, \"Storage\")>=0, \"High\",\n    mvfind(affected_components, \"MWI\")>=0, \"Medium\",\n    1==1, \"Low\")\n| table _time, host, affected_components, events, severity\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Unity Connection Voicemail System Health** — Cisco Unity Connection handles voicemail, auto-attendant, and Interactive Voice Response functions. Port exhaustion during peak hours causes callers to hear busy signals instead of reaching voicemail. Message store capacity issues cause new messages to be rejected. MWI (Message Waiting Indicator) delivery failures leave users unaware of waiting messages. Monitoring these components prevents the silent voicemail failures that users only discover when someone says \"didn't you…\n\nDocumented **Data sources**: `sourcetype=cisco:unity:syslog`, `sourcetype=cisco:unity:perf` (custom via API). **App/TA** (typical add-on context): `Splunk Connect for Syslog`, Unity Connection RTMT / Serviceability API scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:unity:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:unity:syslog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **component** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where component!=\"Other\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Unity Connection Voicemail System Health**): table _time, host, affected_components, events, severity\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (port utilization %), Single value (message store capacity %), Timeline (component events), Table (affected components with severity), Line chart (port utilization trend over 24 hours).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
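              "script": "#!/usr/bin/env bash\n# Hedged sketch, not a verified integration: polls the Unity Connection CUPI REST API\n# for voice-port records and emits one key=value event per poll. The endpoint path,\n# the JSON Accept-header behavior, and the @total field are assumptions — verify\n# against the CUPI documentation for your Unity Connection release. Requires jq.\nHOST=\"cuc01.example.com\"; USER=\"splunk-ro\"; PASS=\"changeme\"\nRESP=$(curl -sk -u \"$USER:$PASS\" -H \"Accept: application/json\" \"https://$HOST/vmrest/ports\")\nTOTAL=$(echo \"$RESP\" | jq -r '.\"@total\" // 0')\necho \"timestamp=$(date +%s) host=$HOST configured_ports=$TOTAL\"",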
              "script": "",
              "premium": "",
              "hw": "Cisco Unity Connection",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "dell_emc",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.50",
              "n": "Unity Connection Mailbox Usage and Retention Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Voicemail mailboxes that grow unbounded consume storage and may violate data retention policies (PCI, HIPAA, legal hold requirements). Users who never check voicemail accumulate messages that represent both a storage cost and a compliance risk. Tracking mailbox sizes, message aging, and auto-deletion policy compliance ensures the voicemail system operates within governance boundaries and storage capacity is allocated to active users rather than abandoned mailboxes.",
              "t": "Unity Connection CUPI REST API via scripted input, `Splunk Connect for Syslog`",
              "d": "`sourcetype=cisco:unity:mailbox` (custom via CUPI API), `sourcetype=cisco:unity:syslog`",
              "q": "index=voip sourcetype=\"cisco:unity:mailbox\"\n| stats latest(mailbox_size_mb) as size_mb, latest(message_count) as msg_count, latest(oldest_msg_days) as oldest_msg, latest(unread_count) as unread, latest(quota_mb) as quota_mb by user_alias, display_name, cos_name\n| eval quota_pct=round(size_mb*100/quota_mb, 1)\n| eval retention_violation=if(oldest_msg > 90, \"Yes\", \"No\")\n| eval inactive=if(unread==msg_count AND msg_count>5, \"Likely Inactive\", \"Active\")\n| where quota_pct > 80 OR retention_violation==\"Yes\" OR inactive==\"Likely Inactive\"\n| table display_name, user_alias, cos_name, size_mb, quota_mb, quota_pct, msg_count, unread, oldest_msg, retention_violation, inactive\n| sort -quota_pct",
              "m": "Deploy a scripted input that queries the Unity Connection CUPI REST API (`/vmrest/users` and `/vmrest/mailbox`) daily to extract per-user mailbox statistics: size, message count, unread count, oldest message date, and quota allocation. Store in a dedicated sourcetype. Build compliance rules: (1) Messages older than 90 days violate standard retention (adjust threshold per organizational policy). (2) Mailboxes above 80% quota need notification. (3) Users where all messages are unread and count exceeds 5 are likely inactive — flag for deprovisioning review. Provide monthly compliance reports to IT governance. Track storage growth trends to forecast Unity Connection storage capacity needs.",
              "z": "Table (users with compliance issues), Pie chart (quota utilization distribution), Bar chart (top 20 mailboxes by size), Single value (total retention violations), Line chart (storage growth trend over 90 days).",
              "kfp": "Shared voicemail boxes, high-turnover sites, and legal holds that keep messages longer.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Unity Connection CUPI REST API via scripted input, `Splunk Connect for Syslog`.\n• Ensure the following data sources are available: `sourcetype=cisco:unity:mailbox` (custom via CUPI API), `sourcetype=cisco:unity:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy a scripted input that queries the Unity Connection CUPI REST API (`/vmrest/users` and `/vmrest/mailbox`) daily to extract per-user mailbox statistics: size, message count, unread count, oldest message date, and quota allocation. Store in a dedicated sourcetype. Build compliance rules: (1) Messages older than 90 days violate standard retention (adjust threshold per organizational policy). (2) Mailboxes above 80% quota need notification. (3) Users where all messages are unread and count exc…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:unity:mailbox\"\n| stats latest(mailbox_size_mb) as size_mb, latest(message_count) as msg_count, latest(oldest_msg_days) as oldest_msg, latest(unread_count) as unread, latest(quota_mb) as quota_mb by user_alias, display_name, cos_name\n| eval quota_pct=round(size_mb*100/quota_mb, 1)\n| eval retention_violation=if(oldest_msg > 90, \"Yes\", \"No\")\n| eval inactive=if(unread==msg_count AND msg_count>5, \"Likely Inactive\", \"Active\")\n| where quota_pct > 80 OR retention_violation==\"Yes\" OR inactive==\"Likely Inactive\"\n| table display_name, user_alias, cos_name, size_mb, quota_mb, quota_pct, msg_count, unread, oldest_msg, retention_violation, inactive\n| sort -quota_pct\n```\n\nUnderstanding this SPL\n\n**Unity Connection Mailbox Usage and Retention Compliance** — Voicemail mailboxes that grow unbounded consume storage and may violate data retention policies (PCI, HIPAA, legal hold requirements). Users who never check voicemail accumulate messages that represent both a storage cost and a compliance risk. Tracking mailbox sizes, message aging, and auto-deletion policy compliance ensures the voicemail system operates within governance boundaries and storage capacity is allocated to active users rather than abandoned mailboxes.\n\nDocumented **Data sources**: `sourcetype=cisco:unity:mailbox` (custom via CUPI API), `sourcetype=cisco:unity:syslog`. **App/TA** (typical add-on context): Unity Connection CUPI REST API via scripted input, `Splunk Connect for Syslog`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:unity:mailbox. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:unity:mailbox\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user_alias, display_name, cos_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **quota_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **retention_violation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **inactive** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where quota_pct > 80 OR retention_violation==\"Yes\" OR inactive==\"Likely Inactive\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Unity Connection Mailbox Usage and Retention Compliance**): table display_name, user_alias, cos_name, size_mb, quota_mb, quota_pct, msg_count, unread, oldest_msg, retention_violation, inactive\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users with compliance issues), Pie chart (quota utilization distribution), Bar chart (top 20 mailboxes by size), Single value (total retention violations), Line chart (storage growth trend over 90 days).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
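              "script": "#!/usr/bin/env bash\n# Hedged sketch, not a verified integration: walks Unity Connection users via the CUPI\n# REST API and emits one key=value event per mailbox. Endpoint paths, pagination, and\n# field names (ObjectId, Alias, CurrentSizeInBytes) are assumptions from this use\n# case's description — verify against the CUPI documentation. Requires jq.\nHOST=\"cuc01.example.com\"; USER=\"splunk-ro\"; PASS=\"changeme\"\nAPI=\"https://$HOST/vmrest\"\ncurl -sk -u \"$USER:$PASS\" -H \"Accept: application/json\" \"$API/users?rowsPerPage=2000\" | jq -r '.User[]? | .ObjectId + \" \" + .Alias' | while read -r oid alias; do\n  mb=$(curl -sk -u \"$USER:$PASS\" -H \"Accept: application/json\" \"$API/mailbox?userobjectid=$oid\")\n  size=$(echo \"$mb\" | jq -r '.CurrentSizeInBytes // \"\"')\n  echo \"timestamp=$(date +%s) user_alias=$alias mailbox_size_bytes=$size\"\ndone",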
              "script": "",
              "premium": "",
              "hw": "Cisco Unity Connection",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Compliance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "dell_emc",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.51",
              "n": "Pexip Conference Volume and Concurrency Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Pexip Infinity clusters have finite conferencing node capacity. Understanding conference volume patterns — peak hours, concurrent participant counts, and growth trends — is essential for capacity planning and license procurement. Unexpected spikes can exhaust node resources, causing new conferences to fail or existing ones to degrade. Tracking concurrency against licensed capacity prevents brownouts during company all-hands or large training events.",
              "t": "Custom Pexip TA (scripted input polling Management REST API)",
              "d": "`sourcetype=pexip:conference_history` (polled from `/api/admin/history/v1/conference/`)",
              "q": "index=pexip sourcetype=\"pexip:conference_history\"\n| eval start=strptime(start_time, \"%Y-%m-%dT%H:%M:%S\")\n| eval end=strptime(end_time, \"%Y-%m-%dT%H:%M:%S\")\n| bin _time span=15m\n| stats dc(name) as active_conferences, sum(participant_count) as total_participants, max(participant_count) as peak_per_conf by _time\n| lookup pexip_capacity_lookup node_pool OUTPUT max_capacity\n| eval utilization_pct=round(total_participants*100/max_capacity, 1)\n| timechart span=1h avg(total_participants) as avg_participants, max(total_participants) as peak_participants\n| predict peak_participants as predicted algorithm=LLP5 future_timespan=24",
              "m": "Deploy a scripted input that polls the Pexip Management REST API at `/api/admin/history/v1/conference/` every 5 minutes. The API returns up to 10,000 conference instances with participant counts and timestamps. Parse the JSON response and index with sourcetype `pexip:conference_history`. Build a `pexip_capacity_lookup` CSV mapping node pools to their maximum participant capacity. Calculate utilization as total concurrent participants divided by capacity. Use `predict` to forecast peak demand 24 hours ahead. Alert when utilization exceeds 80% or when predicted peaks will exceed capacity. Track weekly and monthly growth to support procurement cycles. Correlate with calendar events (all-hands, training) using a `corporate_events` lookup for context.",
              "z": "Line chart (concurrent participants over 24 hours with prediction band), Gauge (current utilization %), Column chart (conference count by hour of day), Single value (peak concurrency today vs capacity).",
              "kfp": "Pexip peaks during large customer briefings, training weeks, and seasonal town halls.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom Pexip TA (scripted input polling Management REST API).\n• Ensure the following data sources are available: `sourcetype=pexip:conference_history` (polled from `/api/admin/history/v1/conference/`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy a scripted input that polls the Pexip Management REST API at `/api/admin/history/v1/conference/` every 5 minutes. The API returns up to 10,000 conference instances with participant counts and timestamps. Parse the JSON response and index with sourcetype `pexip:conference_history`. Build a `pexip_capacity_lookup` CSV mapping node pools to their maximum participant capacity. Calculate utilization as total concurrent participants divided by capacity. Use `predict` to forecast peak demand 24 …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pexip sourcetype=\"pexip:conference_history\"\n| eval start=strptime(start_time, \"%Y-%m-%dT%H:%M:%S\")\n| eval end=strptime(end_time, \"%Y-%m-%dT%H:%M:%S\")\n| bin _time span=15m\n| stats dc(name) as active_conferences, sum(participant_count) as total_participants, max(participant_count) as peak_per_conf by _time\n| lookup pexip_capacity_lookup node_pool OUTPUT max_capacity\n| eval utilization_pct=round(total_participants*100/max_capacity, 1)\n| timechart span=1h avg(total_participants) as avg_participants, max(total_participants) as peak_participants\n| predict peak_participants as predicted algorithm=LLP5 future_timespan=24\n```\n\nUnderstanding this SPL\n\n**Pexip Conference Volume and Concurrency Trending** — Pexip Infinity clusters have finite conferencing node capacity. Understanding conference volume patterns — peak hours, concurrent participant counts, and growth trends — is essential for capacity planning and license procurement. Unexpected spikes can exhaust node resources, causing new conferences to fail or existing ones to degrade. Tracking concurrency against licensed capacity prevents brownouts during company all-hands or large training events.\n\nDocumented **Data sources**: `sourcetype=pexip:conference_history` (polled from `/api/admin/history/v1/conference/`). **App/TA** (typical add-on context): Custom Pexip TA (scripted input polling Management REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pexip; **sourcetype**: pexip:conference_history. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pexip, sourcetype=\"pexip:conference_history\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **start** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **end** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Pexip Conference Volume and Concurrency Trending**): predict peak_participants as predicted algorithm=LLP5 future_timespan=24\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (concurrent participants over 24 hours with prediction band), Gauge (current utilization %), Column chart (conference count by hour of day), Single value (peak concurrency today vs capacity).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
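              "script": "#!/usr/bin/env bash\n# Hedged sketch, not a verified integration: polls the Pexip Management REST API for\n# recent conference history and emits one JSON record per line for indexing as\n# sourcetype=pexip:conference_history. Credentials, the 5-minute lookback, and the\n# Django-style start_time__gte filter are assumptions; the endpoint path comes from\n# this use case. Requires jq and GNU date.\nHOST=\"pexip-mgr.example.com\"; USER=\"splunk-ro\"; PASS=\"changeme\"\nSINCE=$(date -u -d \"-5 minutes\" +%Y-%m-%dT%H:%M:%S)\ncurl -sk -u \"$USER:$PASS\" \"https://$HOST/api/admin/history/v1/conference/?start_time__gte=$SINCE&limit=1000\" | jq -c '.objects[]?'",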
              "script": "",
              "premium": "",
              "hw": "Pexip Infinity Management Node, Pexip Conferencing Nodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.52",
              "n": "Pexip Participant Call Quality Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Pexip calculates call quality based on packet loss: Good (<1%), OK (<3%), Bad (<10%), Terrible (>=10%). Unlike platforms that only report aggregate quality, Pexip media stream statistics provide per-participant, per-stream metrics including jitter and packet loss for both audio and video. Identifying quality degradation by participant, location, or protocol allows targeted remediation — a site with consistently bad quality likely has a WAN problem, while a single participant with issues may have a local network or endpoint problem.",
              "t": "Custom Pexip TA (scripted input polling Management REST API)",
              "d": "`sourcetype=pexip:media_stream` (polled from `/api/admin/history/v1/participant/media_stream/`)",
              "q": "index=pexip sourcetype=\"pexip:media_stream\"\n| eval quality_rating=case(\n    packet_loss < 1, \"Good\",\n    packet_loss < 3, \"OK\",\n    packet_loss < 10, \"Bad\",\n    packet_loss >= 10, \"Terrible\",\n    1==1, \"Unknown\")\n| stats count as streams,\n    avg(packet_loss) as avg_loss,\n    avg(jitter) as avg_jitter,\n    perc95(packet_loss) as p95_loss,\n    sum(eval(case(quality_rating=\"Bad\" OR quality_rating=\"Terrible\", 1, 0))) as poor_streams\n    by participant_alias, conference_name, stream_type\n| eval poor_pct=round(poor_streams*100/streams, 1)\n| where poor_pct > 5\n| sort -poor_pct\n| table participant_alias, conference_name, stream_type, streams, avg_loss, avg_jitter, p95_loss, poor_pct",
              "m": "Poll `/api/admin/history/v1/participant/media_stream/` every 5 minutes via the custom Pexip TA. Each media stream record contains packet loss, jitter, and the quality rating that Pexip itself assigns. Index as `pexip:media_stream`. Classify streams using Pexip's own thresholds (Good/OK/Bad/Terrible based on packet loss percentages). Calculate the percentage of streams rated Bad or Terrible per participant and conference. Alert when more than 5% of streams in a 15-minute window are Bad/Terrible. Build a site-pair quality lookup mapping participant IP subnets to office locations to identify geographic patterns. Compare quality by protocol (SIP, H.323, WebRTC, Teams connector) to identify protocol-specific issues. Correlate with network monitoring data (UC-5.x) for root cause analysis when quality degrades.",
              "z": "Pie chart (quality distribution: Good/OK/Bad/Terrible), Heatmap (quality by location pair), Table (worst participants with quality metrics), Line chart (poor stream percentage trend over 7 days).",
              "kfp": "Client-side quality issues, VPN hairpin, and home Wi-Fi rather than the conference node.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom Pexip TA (scripted input polling Management REST API).\n• Ensure the following data sources are available: `sourcetype=pexip:media_stream` (polled from `/api/admin/history/v1/participant/media_stream/`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `/api/admin/history/v1/participant/media_stream/` every 5 minutes via the custom Pexip TA. Each media stream record contains packet loss, jitter, and the quality rating that Pexip itself assigns. Index as `pexip:media_stream`. Classify streams using Pexip's own thresholds (Good/OK/Bad/Terrible based on packet loss percentages). Calculate the percentage of streams rated Bad or Terrible per participant and conference. Alert when more than 5% of streams in a 15-minute window are Bad/Terrible. …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pexip sourcetype=\"pexip:media_stream\"\n| eval quality_rating=case(\n    packet_loss < 1, \"Good\",\n    packet_loss < 3, \"OK\",\n    packet_loss < 10, \"Bad\",\n    packet_loss >= 10, \"Terrible\",\n    1==1, \"Unknown\")\n| stats count as streams,\n    avg(packet_loss) as avg_loss,\n    avg(jitter) as avg_jitter,\n    perc95(packet_loss) as p95_loss,\n    sum(eval(case(quality_rating=\"Bad\" OR quality_rating=\"Terrible\", 1, 0))) as poor_streams\n    by participant_alias, conference_name, stream_type\n| eval poor_pct=round(poor_streams*100/streams, 1)\n| where poor_pct > 5\n| sort -poor_pct\n| table participant_alias, conference_name, stream_type, streams, avg_loss, avg_jitter, p95_loss, poor_pct\n```\n\nUnderstanding this SPL\n\n**Pexip Participant Call Quality Monitoring** — Pexip calculates call quality based on packet loss: Good (<1%), OK (<3%), Bad (<10%), Terrible (>=10%). Unlike platforms that only report aggregate quality, Pexip media stream statistics provide per-participant, per-stream metrics including jitter and packet loss for both audio and video. Identifying quality degradation by participant, location, or protocol allows targeted remediation — a site with consistently bad quality likely has a WAN problem, while a single…\n\nDocumented **Data sources**: `sourcetype=pexip:media_stream` (polled from `/api/admin/history/v1/participant/media_stream/`). **App/TA** (typical add-on context): Custom Pexip TA (scripted input polling Management REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pexip; **sourcetype**: pexip:media_stream. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pexip, sourcetype=\"pexip:media_stream\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **quality_rating** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by participant_alias, conference_name, stream_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **poor_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where poor_pct > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Pexip Participant Call Quality Monitoring**): table participant_alias, conference_name, stream_type, streams, avg_loss, avg_jitter, p95_loss, poor_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (quality distribution: Good/OK/Bad/Terrible), Heatmap (quality by location pair), Table (worst participants with quality metrics), Line chart (poor stream percentage trend over 7 days).",
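              "script": "#!/usr/bin/env bash\n# Hedged sketch, not a verified integration: polls Pexip participant media-stream\n# history and emits one JSON record per line for sourcetype=pexip:media_stream.\n# Credentials, lookback, and the end_time__gte filter name are assumptions; the\n# endpoint path comes from this use case. Requires jq and GNU date.\nHOST=\"pexip-mgr.example.com\"; USER=\"splunk-ro\"; PASS=\"changeme\"\nSINCE=$(date -u -d \"-5 minutes\" +%Y-%m-%dT%H:%M:%S)\ncurl -sk -u \"$USER:$PASS\" \"https://$HOST/api/admin/history/v1/participant/media_stream/?end_time__gte=$SINCE&limit=5000\" | jq -c '.objects[]?'",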
              "script": "",
              "premium": "",
              "hw": "Pexip Infinity Management Node, Pexip Conferencing Nodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.53",
              "n": "Pexip Conferencing Node Capacity and Load",
              "c": "critical",
              "f": "intermediate",
              "v": "Pexip Conferencing Nodes process all media transcoding and conference hosting. Each node has a finite capacity measured in HD call equivalents. When a node reaches capacity, new calls are either routed to other nodes (if available) or rejected. Uneven load distribution across nodes wastes capacity — one overloaded node may reject calls while others sit idle. Monitoring per-node load ensures efficient resource usage and prevents capacity-related call failures.",
              "t": "Custom Pexip TA (Event Sink API via HEC), `Splunk Connect for Syslog`",
              "d": "`sourcetype=pexip:event_sink` (HTTP POST from Conferencing Nodes to HEC), `sourcetype=pexip:syslog`",
              "q": "index=pexip sourcetype=\"pexip:event_sink\" event_type=\"conference*\" OR event_type=\"participant*\"\n| eval node=coalesce(conferencing_node, node_name)\n| bin _time span=5m\n| stats dc(conference_id) as active_conferences, dc(participant_id) as active_participants by _time, node\n| lookup pexip_node_capacity node OUTPUT max_hd_calls\n| eval load_pct=round(active_participants*100/max_hd_calls, 1)\n| eval status=case(load_pct >= 90, \"Critical\", load_pct >= 75, \"Warning\", 1==1, \"OK\")\n| table _time, node, active_conferences, active_participants, max_hd_calls, load_pct, status\n| sort -load_pct",
              "m": "Configure Pexip's Event Sink API to POST events to a Splunk HEC endpoint. The Event Sink delivers conference and participant lifecycle events (start, join, leave, end) in JSON format from each Conferencing Node. Enable bulk delivery mode for efficiency. Also forward Conferencing Node syslog for resource-level messages (CPU, memory, media processing warnings). Build a `pexip_node_capacity` lookup mapping each node hostname to its rated HD call capacity (varies by VM size and license). Calculate per-node load as active participants divided by capacity. Alert at 75% (warning) and 90% (critical) utilization. Track load imbalance: if max_node_load minus min_node_load exceeds 30 percentage points, the overflow/distribution policy may need tuning. Monitor node availability — if a node stops sending events for more than 5 minutes, it may be down.",
              "z": "Column chart (load % per node, colored by status), Line chart (per-node participant count over 24 hours), Single value (cluster-wide utilization %), Table (node status with load metrics).",
              "kfp": "Elastic scaling events and regional spikes that stay within node headroom; compare to license tier.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom Pexip TA (Event Sink API via HEC), `Splunk Connect for Syslog`.\n• Ensure the following data sources are available: `sourcetype=pexip:event_sink` (HTTP POST from Conferencing Nodes to HEC), `sourcetype=pexip:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Pexip's Event Sink API to POST events to a Splunk HEC endpoint. The Event Sink delivers conference and participant lifecycle events (start, join, leave, end) in JSON format from each Conferencing Node. Enable bulk delivery mode for efficiency. Also forward Conferencing Node syslog for resource-level messages (CPU, memory, media processing warnings). Build a `pexip_node_capacity` lookup mapping each node hostname to its rated HD call capacity (varies by VM size and license). Calculate p…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pexip sourcetype=\"pexip:event_sink\" event_type=\"conference*\" OR event_type=\"participant*\"\n| eval node=coalesce(conferencing_node, node_name)\n| bin _time span=5m\n| stats dc(conference_id) as active_conferences, dc(participant_id) as active_participants by _time, node\n| lookup pexip_node_capacity node OUTPUT max_hd_calls\n| eval load_pct=round(active_participants*100/max_hd_calls, 1)\n| eval status=case(load_pct >= 90, \"Critical\", load_pct >= 75, \"Warning\", 1==1, \"OK\")\n| table _time, node, active_conferences, active_participants, max_hd_calls, load_pct, status\n| sort -load_pct\n```\n\nUnderstanding this SPL\n\n**Pexip Conferencing Node Capacity and Load** — Pexip Conferencing Nodes process all media transcoding and conference hosting. Each node has a finite capacity measured in HD call equivalents. When a node reaches capacity, new calls are either routed to other nodes (if available) or rejected. Uneven load distribution across nodes wastes capacity — one overloaded node may reject calls while others sit idle. Monitoring per-node load ensures efficient resource usage and prevents capacity-related call failures.\n\nDocumented **Data sources**: `sourcetype=pexip:event_sink` (HTTP POST from Conferencing Nodes to HEC), `sourcetype=pexip:syslog`. **App/TA** (typical add-on context): Custom Pexip TA (Event Sink API via HEC), `Splunk Connect for Syslog`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pexip; **sourcetype**: pexip:event_sink. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pexip, sourcetype=\"pexip:event_sink\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **node** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, node** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **load_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Pexip Conferencing Node Capacity and Load**): table _time, node, active_conferences, active_participants, max_hd_calls, load_pct, status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (load % per node, colored by status), Line chart (per-node participant count over 24 hours), Single value (cluster-wide utilization %), Table (node status with load metrics).",
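              "script": "#!/usr/bin/env bash\n# Hedged sketch: smoke-tests the Splunk HEC endpoint that Pexip's Event Sink will POST\n# to by sending one synthetic event. The URL and token are placeholders; the\n# /services/collector/event endpoint and 'Authorization: Splunk <token>' header are\n# standard Splunk HEC. Verify the Event Sink's own payload format in Pexip's docs.\nHEC_URL=\"https://splunk-hec.example.com:8088/services/collector/event\"\nHEC_TOKEN=\"00000000-0000-0000-0000-000000000000\"\ncurl -sk -H \"Authorization: Splunk $HEC_TOKEN\" -d '{\"sourcetype\":\"pexip:event_sink\",\"event\":{\"event_type\":\"alarm_test\",\"node\":\"smoke-test\"}}' \"$HEC_URL\"",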
              "script": "",
              "premium": "",
              "hw": "Pexip Infinity Conferencing Nodes (VM or hardware)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Capacity",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.54",
              "n": "Pexip License Consumption Tracking",
              "c": "high",
              "f": "beginner",
              "v": "Pexip licensing is typically based on concurrent ports or participants. Exceeding licensed capacity prevents new participants from joining conferences. Under-utilization means wasted license spend. Tracking actual consumption against purchased licenses supports renewal negotiations with real data, identifies peak usage patterns for right-sizing, and provides early warning when growth trends approach license limits.",
              "t": "Custom Pexip TA (scripted input polling Management REST API)",
              "d": "`sourcetype=pexip:conference_history` (license_count field from conference records)",
              "q": "index=pexip sourcetype=\"pexip:conference_history\"\n| bin _time span=15m\n| stats sum(participant_count) as concurrent_licenses by _time\n| lookup pexip_license_info type OUTPUT total_licenses\n| eval usage_pct=round(concurrent_licenses*100/total_licenses, 1)\n| eval license_status=case(usage_pct >= 95, \"Critical\", usage_pct >= 80, \"Warning\", 1==1, \"OK\")\n| stats max(concurrent_licenses) as peak_licenses, avg(concurrent_licenses) as avg_licenses, max(usage_pct) as peak_usage_pct by _time\n| timechart span=1d max(peak_licenses) as daily_peak, avg(avg_licenses) as daily_avg",
              "m": "Reuse the conference history data collected for UC-11.3.51. Each conference record includes participant count and license consumption. Build a `pexip_license_info` lookup with the purchased license count by type (audio, video, VMR). Calculate concurrent license usage per 15-minute bin. Alert at 80% (plan for expansion) and 95% (imminent risk). Generate a monthly license utilization report showing daily peak, daily average, and peak-to-average ratio — a high ratio means licenses are sized for spikes that rarely occur, while a low ratio suggests consistent usage. Track month-over-month growth to predict when current licenses will be exhausted.",
              "z": "Line chart (daily peak vs average license usage with license limit line), Single value (current usage % with traffic-light color), Gauge (peak usage vs total licenses), Table (monthly summary: peak, avg, growth %).",
              "kfp": "License overage during short bursts; true-up and burst licensing may be planned.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom Pexip TA (scripted input polling Management REST API).\n• Ensure the following data sources are available: `sourcetype=pexip:conference_history` (license_count field from conference records).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nReuse the conference history data collected for UC-11.3.51. Each conference record includes participant count and license consumption. Build a `pexip_license_info` lookup with the purchased license count by type (audio, video, VMR). Calculate concurrent license usage per 15-minute bin. Alert at 80% (plan for expansion) and 95% (imminent risk). Generate a monthly license utilization report showing daily peak, daily average, and peak-to-average ratio — a high ratio means licenses are sized for spi…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pexip sourcetype=\"pexip:conference_history\"\n| bin _time span=15m\n| stats sum(participant_count) as concurrent_licenses by _time\n| lookup pexip_license_info type OUTPUT total_licenses\n| eval usage_pct=round(concurrent_licenses*100/total_licenses, 1)\n| eval license_status=case(usage_pct >= 95, \"Critical\", usage_pct >= 80, \"Warning\", 1==1, \"OK\")\n| stats max(concurrent_licenses) as peak_licenses, avg(concurrent_licenses) as avg_licenses, max(usage_pct) as peak_usage_pct by _time\n| timechart span=1d max(peak_licenses) as daily_peak, avg(avg_licenses) as daily_avg\n```\n\nUnderstanding this SPL\n\n**Pexip License Consumption Tracking** — Pexip licensing is typically based on concurrent ports or participants. Exceeding licensed capacity prevents new participants from joining conferences. Under-utilization means wasted license spend. Tracking actual consumption against purchased licenses supports renewal negotiations with real data, identifies peak usage patterns for right-sizing, and provides early warning when growth trends approach license limits.\n\nDocumented **Data sources**: `sourcetype=pexip:conference_history` (license_count field from conference records). **App/TA** (typical add-on context): Custom Pexip TA (scripted input polling Management REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pexip; **sourcetype**: pexip:conference_history. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pexip, sourcetype=\"pexip:conference_history\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **usage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **license_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily peak vs average license usage with license limit line), Single value (current usage % with traffic-light color), Gauge (peak usage vs total licenses), Table (monthly summary: peak, avg, growth %).",
              "script": "",
              "premium": "",
              "hw": "Pexip Infinity Management Node",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Capacity",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.55",
              "n": "Pexip Alarm and Service Health Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Pexip Infinity generates alarms for infrastructure problems: database connectivity loss, licensing errors, node unreachability, certificate expiry, and media resource exhaustion. These alarms represent active service risks that can escalate to conference failures. Centralizing alarm monitoring in Splunk provides correlation with other infrastructure events and enables faster incident response than checking the Pexip management console manually.",
              "t": "Custom Pexip TA (Event Sink API via HEC), `Splunk Connect for Syslog`",
              "d": "`sourcetype=pexip:event_sink` (alarm events), `sourcetype=pexip:syslog` (facility local0, local2)",
              "q": "index=pexip (sourcetype=\"pexip:event_sink\" event_type=\"alarm*\") OR (sourcetype=\"pexip:syslog\" severity<=3)\n| eval alarm_category=case(\n    like(_raw, \"%license%\") OR like(_raw, \"%License%\"), \"Licensing\",\n    like(_raw, \"%database%\") OR like(_raw, \"%Database%\"), \"Database\",\n    like(_raw, \"%certificate%\") OR like(_raw, \"%TLS%\"), \"Certificate\",\n    like(_raw, \"%node%unreachable%\") OR like(_raw, \"%NodeUnreachable%\"), \"Node Health\",\n    like(_raw, \"%media%\") OR like(_raw, \"%resource%\"), \"Media Resources\",\n    1==1, \"Other\")\n| stats count as occurrences, latest(_time) as last_seen, values(host) as affected_hosts by alarm_category, alarm_id\n| eval age_hours=round((now()-last_seen)/3600, 1)\n| where age_hours < 24\n| sort -occurrences\n| table alarm_category, alarm_id, occurrences, affected_hosts, age_hours",
              "m": "Pexip Event Sink API delivers alarm events to HEC. Also forward Management Node and Conferencing Node syslog via Splunk Connect for Syslog using facility codes local0 (admin) and local2 (support). Classify alarms into categories: Licensing (approaching or exceeding limits), Database (replication or connectivity), Certificate (expiry within 30 days), Node Health (unreachable or degraded nodes), and Media Resources (port or transcoding exhaustion). Alert immediately on Node Health and Media Resources alarms (service-affecting). Alert within 1 hour on Licensing and Database alarms. Certificate alarms should trigger 30 days before expiry. Track alarm frequency over time — increasing alarm rates indicate systemic degradation.",
              "z": "Single value (active alarm count by severity), Timeline (alarm events over 24 hours), Table (active alarms with category and affected hosts), Column chart (alarm count by category over 7 days).",
              "kfp": "Known benign alarms after upgrades; some alerts self-clear on service restart during maintenance.",
              "refs": "[CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom Pexip TA (Event Sink API via HEC), `Splunk Connect for Syslog`.\n• Ensure the following data sources are available: `sourcetype=pexip:event_sink` (alarm events), `sourcetype=pexip:syslog` (facility local0, local2).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPexip Event Sink API delivers alarm events to HEC. Also forward Management Node and Conferencing Node syslog via Splunk Connect for Syslog using facility codes local0 (admin) and local2 (support). Classify alarms into categories: Licensing (approaching or exceeding limits), Database (replication or connectivity), Certificate (expiry within 30 days), Node Health (unreachable or degraded nodes), and Media Resources (port or transcoding exhaustion). Alert immediately on Node Health and Media Resour…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pexip (sourcetype=\"pexip:event_sink\" event_type=\"alarm*\") OR (sourcetype=\"pexip:syslog\" severity<=3)\n| eval alarm_category=case(\n    like(_raw, \"%license%\") OR like(_raw, \"%License%\"), \"Licensing\",\n    like(_raw, \"%database%\") OR like(_raw, \"%Database%\"), \"Database\",\n    like(_raw, \"%certificate%\") OR like(_raw, \"%TLS%\"), \"Certificate\",\n    like(_raw, \"%node%unreachable%\") OR like(_raw, \"%NodeUnreachable%\"), \"Node Health\",\n    like(_raw, \"%media%\") OR like(_raw, \"%resource%\"), \"Media Resources\",\n    1==1, \"Other\")\n| stats count as occurrences, latest(_time) as last_seen, values(host) as affected_hosts by alarm_category, alarm_id\n| eval age_hours=round((now()-last_seen)/3600, 1)\n| where age_hours < 24\n| sort -occurrences\n| table alarm_category, alarm_id, occurrences, affected_hosts, age_hours\n```\n\nUnderstanding this SPL\n\n**Pexip Alarm and Service Health Monitoring** — Pexip Infinity generates alarms for infrastructure problems: database connectivity loss, licensing errors, node unreachability, certificate expiry, and media resource exhaustion. These alarms represent active service risks that can escalate to conference failures. Centralizing alarm monitoring in Splunk provides correlation with other infrastructure events and enables faster incident response than checking the Pexip management console manually.\n\nDocumented **Data sources**: `sourcetype=pexip:event_sink` (alarm events), `sourcetype=pexip:syslog` (facility local0, local2). **App/TA** (typical add-on context): Custom Pexip TA (Event Sink API via HEC), `Splunk Connect for Syslog`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pexip; **sourcetype**: pexip:event_sink, pexip:syslog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pexip, sourcetype=\"pexip:event_sink\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **alarm_category** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by alarm_category, alarm_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **age_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_hours < 24` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Pexip Alarm and Service Health Monitoring**): table alarm_category, alarm_id, occurrences, affected_hosts, age_hours\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Pexip Alarm and Service Health Monitoring** — Pexip Infinity generates alarms for infrastructure problems: database connectivity loss, licensing errors, node unreachability, certificate expiry, and media resource exhaustion. These alarms represent active service risks that can escalate to conference failures. Centralizing alarm monitoring in Splunk provides correlation with other infrastructure events and enables faster incident response than checking the Pexip management console manually.\n\nDocumented **Data sources**: `sourcetype=pexip:event_sink` (alarm events), `sourcetype=pexip:syslog` (facility local0, local2). **App/TA** (typical add-on context): Custom Pexip TA (Event Sink API via HEC), `Splunk Connect for Syslog`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (active alarm count by severity), Timeline (alarm events over 24 hours), Table (active alarms with category and affected hosts), Column chart (alarm count by category over 7 days).",
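\n\nOptional immediate-alert sketch (an illustrative variant, not a Pexip-documented search; the category keywords are assumptions carried over from Step 1, so tune them to the alarm text your deployment emits):\n\n```spl\nindex=pexip (sourcetype=\"pexip:event_sink\" event_type=\"alarm*\") OR (sourcetype=\"pexip:syslog\" severity<=3)\n| where like(_raw, \"%node%unreachable%\") OR like(_raw, \"%NodeUnreachable%\") OR like(_raw, \"%media%\") OR like(_raw, \"%resource%\")\n| stats count as occurrences, values(host) as affected_hosts by alarm_id\n```\n\nSchedule it every 5 minutes over the last 5 minutes and trigger when results > 0, so Node Health and Media Resources alarms page immediately while the main search stays the 24-hour overview.",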
              "script": "",
              "premium": "",
              "hw": "Pexip Infinity Management Node, Pexip Conferencing Nodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count",
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.56",
              "n": "Pexip Registration and Gateway Call Routing",
              "c": "medium",
              "f": "intermediate",
              "v": "Pexip acts as a gateway between different video conferencing protocols and platforms — SIP, H.323, WebRTC, and Microsoft Teams via the Teams Connector. Call routing rules determine how incoming calls reach their destination VMR (Virtual Meeting Room) or are forwarded to external systems. Routing failures cause participants to reach wrong destinations or fail to connect entirely. Monitoring registration status and gateway routing ensures interoperability between the Pexip infrastructure and external systems like CUCM, Teams, or third-party SIP endpoints.",
              "t": "Custom Pexip TA (scripted input polling Management REST API), `Splunk Connect for Syslog`",
              "d": "`sourcetype=pexip:conference_history`, `sourcetype=pexip:syslog`",
              "q": "index=pexip sourcetype=\"pexip:conference_history\"\n| eval call_direction=case(\n    like(source_alias, \"%@teams%\") OR protocol=\"mssip\", \"Teams Inbound\",\n    like(destination_alias, \"%@teams%\"), \"Teams Outbound\",\n    protocol=\"sip\", \"SIP\",\n    protocol=\"h323\", \"H.323\",\n    protocol=\"webrtc\", \"WebRTC\",\n    1==1, \"Other\")\n| stats count as calls, avg(duration) as avg_duration_sec,\n    sum(eval(case(disconnect_reason!=\"OK\" AND disconnect_reason!=\"Otherendclearedcall\", 1, 0))) as failed_calls\n    by call_direction, destination_alias\n| eval failure_pct=round(failed_calls*100/calls, 1)\n| where calls > 5\n| sort -failure_pct\n| table call_direction, destination_alias, calls, avg_duration_sec, failed_calls, failure_pct",
              "m": "Analyze conference history records to classify calls by protocol and direction. The `protocol` field identifies the signaling protocol (SIP, H.323, WebRTC, mssip for Teams). Track call success rates by routing path — high failure rates on a specific destination alias or protocol indicate misconfigured dial plans or network connectivity issues. Monitor Teams Connector specifically: failures here affect the growing population of Teams-native users trying to join Pexip conferences (or vice versa). Track registration status via syslog for SIP and H.323 trunk registrations. Alert when failure rate exceeds 5% on any routing path with more than 5 calls. Correlate routing failures with network events and Conferencing Node health (UC-11.3.53).",
              "z": "Pie chart (call volume by protocol), Table (routing paths with failure rates), Line chart (calls per protocol over 7 days), Column chart (failure rate by call direction).",
              "kfp": "Interop tests, gateway migrations, and legacy H.323 islands during transition.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom Pexip TA (scripted input polling Management REST API), `Splunk Connect for Syslog`.\n• Ensure the following data sources are available: `sourcetype=pexip:conference_history`, `sourcetype=pexip:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAnalyze conference history records to classify calls by protocol and direction. The `protocol` field identifies the signaling protocol (SIP, H.323, WebRTC, mssip for Teams). Track call success rates by routing path — high failure rates on a specific destination alias or protocol indicate misconfigured dial plans or network connectivity issues. Monitor Teams Connector specifically: failures here affect the growing population of Teams-native users trying to join Pexip conferences (or vice versa). …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pexip sourcetype=\"pexip:conference_history\"\n| eval call_direction=case(\n    like(source_alias, \"%@teams%\") OR protocol=\"mssip\", \"Teams Inbound\",\n    like(destination_alias, \"%@teams%\"), \"Teams Outbound\",\n    protocol=\"sip\", \"SIP\",\n    protocol=\"h323\", \"H.323\",\n    protocol=\"webrtc\", \"WebRTC\",\n    1==1, \"Other\")\n| stats count as calls, avg(duration) as avg_duration_sec,\n    sum(eval(case(disconnect_reason!=\"OK\" AND disconnect_reason!=\"Otherendclearedcall\", 1, 0))) as failed_calls\n    by call_direction, destination_alias\n| eval failure_pct=round(failed_calls*100/calls, 1)\n| where calls > 5\n| sort -failure_pct\n| table call_direction, destination_alias, calls, avg_duration_sec, failed_calls, failure_pct\n```\n\nUnderstanding this SPL\n\n**Pexip Registration and Gateway Call Routing** — Pexip acts as a gateway between different video conferencing protocols and platforms — SIP, H.323, WebRTC, and Microsoft Teams via the Teams Connector. Call routing rules determine how incoming calls reach their destination VMR (Virtual Meeting Room) or are forwarded to external systems. Routing failures cause participants to reach wrong destinations or fail to connect entirely. Monitoring registration status and gateway routing ensures interoperability between the Pexip…\n\nDocumented **Data sources**: `sourcetype=pexip:conference_history`, `sourcetype=pexip:syslog`. **App/TA** (typical add-on context): Custom Pexip TA (scripted input polling Management REST API), `Splunk Connect for Syslog`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pexip; **sourcetype**: pexip:conference_history. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pexip, sourcetype=\"pexip:conference_history\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **call_direction** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by call_direction, destination_alias** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **failure_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where calls > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Pexip Registration and Gateway Call Routing**): table call_direction, destination_alias, calls, avg_duration_sec, failed_calls, failure_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (call volume by protocol), Table (routing paths with failure rates), Line chart (calls per protocol over 7 days), Column chart (failure rate by call direction).",
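\n\nA minimal scheduled-alert sketch for the 5% threshold from Step 1 (assumes the same `disconnect_reason` values as the main search; the thresholds are the documented starting points, not fixed constants):\n\n```spl\nindex=pexip sourcetype=\"pexip:conference_history\"\n| stats count as calls, sum(eval(if(disconnect_reason!=\"OK\" AND disconnect_reason!=\"Otherendclearedcall\", 1, 0))) as failed_calls by destination_alias\n| eval failure_pct=round(failed_calls*100/calls, 1)\n| where calls > 5 AND failure_pct > 5\n```\n\nSave it as an alert over a rolling window (for example the last 4 hours) and trigger when results > 0.",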
              "script": "",
              "premium": "",
              "hw": "Pexip Infinity Management Node, Pexip Conferencing Nodes, Pexip Teams Connector",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.57",
              "n": "Pexip Participant Join Failure Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "When participants fail to join a Pexip conference, the disconnect reason reveals the root cause: capacity exhaustion, authentication failure, codec mismatch, network timeout, or policy rejection. Understanding the distribution and trend of join failures by reason enables targeted remediation. A spike in \"capacity\" failures means nodes need scaling; \"codec mismatch\" failures mean endpoint configuration needs updating; \"timeout\" failures point to network problems between participant and conferencing node.",
              "t": "Custom Pexip TA (Event Sink API via HEC)",
              "d": "`sourcetype=pexip:event_sink` (participant disconnect events), `sourcetype=pexip:participant`",
              "q": "index=pexip (sourcetype=\"pexip:event_sink\" event_type=\"participant_disconnected\") OR sourcetype=\"pexip:participant\"\n| where disconnect_reason!=\"OK\" AND disconnect_reason!=\"Otherendclearedcall\" AND disconnect_reason!=\"Otherenddisconnected\"\n| eval failure_category=case(\n    like(disconnect_reason, \"%capacity%\") OR like(disconnect_reason, \"%resource%\"), \"Capacity Exhaustion\",\n    like(disconnect_reason, \"%authen%\") OR like(disconnect_reason, \"%pin%\") OR like(disconnect_reason, \"%denied%\"), \"Authentication\",\n    like(disconnect_reason, \"%codec%\") OR like(disconnect_reason, \"%media%\"), \"Media/Codec\",\n    like(disconnect_reason, \"%timeout%\") OR like(disconnect_reason, \"%unreachable%\"), \"Network Timeout\",\n    like(disconnect_reason, \"%policy%\") OR like(disconnect_reason, \"%rejected%\"), \"Policy Rejection\",\n    1==1, \"Other\")\n| bin _time span=1h\n| stats count as failures, dc(participant_alias) as unique_participants, values(conference_name) as affected_conferences by _time, failure_category\n| sort -failures",
              "m": "The Event Sink API sends participant disconnect events with a `disconnect_reason` field. Filter out normal disconnects (OK, user-initiated hangups) to isolate genuine failures. Categorize failures by root cause. Alert on any Capacity Exhaustion failures (immediate service impact). Track Authentication failures — a spike may indicate a PIN policy change that users are unaware of. Codec/Media failures suggest endpoint incompatibility — correlate with the protocol field from UC-11.3.58 to identify which endpoint types are affected. Network Timeout failures should trigger cross-referencing with network monitoring (latency, packet loss on the WAN path between the participant's site and the Conferencing Node). Build a weekly failure trend report for the UC operations team.",
              "z": "Column chart (failures by category over 24 hours), Single value (total failures last hour), Table (recent failures with participant, conference, reason), Line chart (failure trend by category over 7 days).",
              "kfp": "Client firewalls, strict UDP, and guest networks blocking media while signaling succeeds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom Pexip TA (Event Sink API via HEC).\n• Ensure the following data sources are available: `sourcetype=pexip:event_sink` (participant disconnect events), `sourcetype=pexip:participant`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThe Event Sink API sends participant disconnect events with a `disconnect_reason` field. Filter out normal disconnects (OK, user-initiated hangups) to isolate genuine failures. Categorize failures by root cause. Alert on any Capacity Exhaustion failures (immediate service impact). Track Authentication failures — a spike may indicate a PIN policy change that users are unaware of. Codec/Media failures suggest endpoint incompatibility — correlate with the protocol field from UC-11.3.58 to identify …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pexip (sourcetype=\"pexip:event_sink\" event_type=\"participant_disconnected\") OR sourcetype=\"pexip:participant\"\n| where disconnect_reason!=\"OK\" AND disconnect_reason!=\"Otherendclearedcall\" AND disconnect_reason!=\"Otherenddisconnected\"\n| eval failure_category=case(\n    like(disconnect_reason, \"%capacity%\") OR like(disconnect_reason, \"%resource%\"), \"Capacity Exhaustion\",\n    like(disconnect_reason, \"%authen%\") OR like(disconnect_reason, \"%pin%\") OR like(disconnect_reason, \"%denied%\"), \"Authentication\",\n    like(disconnect_reason, \"%codec%\") OR like(disconnect_reason, \"%media%\"), \"Media/Codec\",\n    like(disconnect_reason, \"%timeout%\") OR like(disconnect_reason, \"%unreachable%\"), \"Network Timeout\",\n    like(disconnect_reason, \"%policy%\") OR like(disconnect_reason, \"%rejected%\"), \"Policy Rejection\",\n    1==1, \"Other\")\n| bin _time span=1h\n| stats count as failures, dc(participant_alias) as unique_participants, values(conference_name) as affected_conferences by _time, failure_category\n| sort -failures\n```\n\nUnderstanding this SPL\n\n**Pexip Participant Join Failure Analysis** — When participants fail to join a Pexip conference, the disconnect reason reveals the root cause: capacity exhaustion, authentication failure, codec mismatch, network timeout, or policy rejection. Understanding the distribution and trend of join failures by reason enables targeted remediation. A spike in \"capacity\" failures means nodes need scaling; \"codec mismatch\" failures mean endpoint configuration needs updating; \"timeout\" failures point to network problems between…\n\nDocumented **Data sources**: `sourcetype=pexip:event_sink` (participant disconnect events), `sourcetype=pexip:participant`. **App/TA** (typical add-on context): Custom Pexip TA (Event Sink API via HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pexip; **sourcetype**: pexip:event_sink, pexip:participant. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pexip, sourcetype=\"pexip:event_sink\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where disconnect_reason!=\"OK\" AND disconnect_reason!=\"Otherendclearedcall\" AND disconnect_reason!=\"Otherenddisconnected\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **failure_category** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, failure_category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (failures by category over 24 hours), Single value (total failures last hour), Table (recent failures with participant, conference, reason), Line chart (failure trend by category over 7 days).",
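\n\nHedged alert sketch for the Capacity Exhaustion case called out in Step 1 (the keyword matches are assumptions; verify them against the `disconnect_reason` strings your deployment actually emits):\n\n```spl\nindex=pexip (sourcetype=\"pexip:event_sink\" event_type=\"participant_disconnected\") OR sourcetype=\"pexip:participant\"\n| where like(disconnect_reason, \"%capacity%\") OR like(disconnect_reason, \"%resource%\")\n| stats count as failures, dc(participant_alias) as unique_participants, values(conference_name) as affected_conferences\n```\n\nRun it every 5 minutes and trigger when failures > 0, since any capacity failure is immediate service impact.",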
              "script": "",
              "premium": "",
              "hw": "Pexip Infinity Management Node, Pexip Conferencing Nodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.3.58",
              "n": "Pexip Interoperability and Protocol Mix",
              "c": "medium",
              "f": "beginner",
              "v": "Pexip's core value proposition is interoperability — connecting participants across SIP, H.323, WebRTC, Microsoft Teams, and Google Meet. Understanding the protocol mix reveals how the platform is actually being used versus its intended deployment. A shift from H.323 to WebRTC signals endpoint modernization. Growing Teams Connector traffic validates the hybrid meeting room investment. Protocol distribution also affects capacity planning because different protocols consume different amounts of transcoding resources on Conferencing Nodes.",
              "t": "Custom Pexip TA (scripted input polling Management REST API)",
              "d": "`sourcetype=pexip:participant` (polled from `/api/admin/history/v1/participant/`)",
              "q": "index=pexip sourcetype=\"pexip:participant\"\n| eval protocol=coalesce(protocol, \"Unknown\")\n| eval vendor=case(\n    like(user_agent, \"%Teams%\") OR protocol=\"mssip\", \"Microsoft Teams\",\n    like(user_agent, \"%Oand%\") OR like(user_agent, \"%webrtc%\") OR protocol=\"webrtc\", \"WebRTC Browser\",\n    like(user_agent, \"%Poly%\") OR like(user_agent, \"%OALTP%\"), \"Poly Endpoint\",\n    like(user_agent, \"%Cisco%\") OR like(user_agent, \"%CE%\"), \"Cisco Endpoint\",\n    protocol=\"sip\", \"SIP (Other)\",\n    protocol=\"h323\", \"H.323 (Other)\",\n    1==1, \"Other\")\n| bin _time span=1d\n| stats count as participants, avg(call_quality) as avg_quality, dc(conference_name) as conferences by _time, protocol, vendor\n| sort _time, -participants",
              "m": "Poll `/api/admin/history/v1/participant/` via the custom Pexip TA. Each participant record includes protocol, user_agent, call quality, and duration. Map user_agent strings to vendor categories using eval case logic. Track protocol distribution over time to identify migration trends (e.g., H.323 declining, WebRTC growing). Compare call quality by protocol and vendor — if Cisco endpoints consistently show higher quality than Poly endpoints, this informs procurement decisions. Generate monthly interoperability reports showing protocol mix, vendor distribution, and quality-by-protocol. This data also supports Pexip license negotiations by showing which features (Teams Connector, WebRTC gateway) drive the most value.",
              "z": "Pie chart (participant count by protocol), Stacked area chart (protocol distribution trend over 30 days), Table (vendor breakdown with quality metrics), Column chart (average quality by protocol).",
              "kfp": "Mixed SIP/H.323 environments where protocol share shifts without an incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom Pexip TA (scripted input polling Management REST API).\n• Ensure the following data sources are available: `sourcetype=pexip:participant` (polled from `/api/admin/history/v1/participant/`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `/api/admin/history/v1/participant/` via the custom Pexip TA. Each participant record includes protocol, user_agent, call quality, and duration. Map user_agent strings to vendor categories using eval case logic. Track protocol distribution over time to identify migration trends (e.g., H.323 declining, WebRTC growing). Compare call quality by protocol and vendor — if Cisco endpoints consistently show higher quality than Poly endpoints, this informs procurement decisions. Generate monthly int…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pexip sourcetype=\"pexip:participant\"\n| eval protocol=coalesce(protocol, \"Unknown\")\n| eval vendor=case(\n    like(user_agent, \"%Teams%\") OR protocol=\"mssip\", \"Microsoft Teams\",\n    like(user_agent, \"%Oand%\") OR like(user_agent, \"%webrtc%\") OR protocol=\"webrtc\", \"WebRTC Browser\",\n    like(user_agent, \"%Poly%\") OR like(user_agent, \"%OALTP%\"), \"Poly Endpoint\",\n    like(user_agent, \"%Cisco%\") OR like(user_agent, \"%CE%\"), \"Cisco Endpoint\",\n    protocol=\"sip\", \"SIP (Other)\",\n    protocol=\"h323\", \"H.323 (Other)\",\n    1==1, \"Other\")\n| bin _time span=1d\n| stats count as participants, avg(call_quality) as avg_quality, dc(conference_name) as conferences by _time, protocol, vendor\n| sort _time, -participants\n```\n\nUnderstanding this SPL\n\n**Pexip Interoperability and Protocol Mix** — Pexip's core value proposition is interoperability — connecting participants across SIP, H.323, WebRTC, Microsoft Teams, and Google Meet. Understanding the protocol mix reveals how the platform is actually being used versus its intended deployment. A shift from H.323 to WebRTC signals endpoint modernization. Growing Teams Connector traffic validates the hybrid meeting room investment. Protocol distribution also affects capacity planning because different protocols consume…\n\nDocumented **Data sources**: `sourcetype=pexip:participant` (polled from `/api/admin/history/v1/participant/`). **App/TA** (typical add-on context): Custom Pexip TA (scripted input polling Management REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pexip; **sourcetype**: pexip:participant. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pexip, sourcetype=\"pexip:participant\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **protocol** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **vendor** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, protocol, vendor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (participant count by protocol), Stacked area chart (protocol distribution trend over 30 days), Table (vendor breakdown with quality metrics), Column chart (average quality by protocol).",
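\n\nA simple trend sketch for the stacked-area visualization (same assumed sourcetype as the main search; `limit=0` keeps every protocol as its own series instead of collapsing the tail into OTHER):\n\n```spl\nindex=pexip sourcetype=\"pexip:participant\" earliest=-30d\n| eval protocol=coalesce(protocol, \"Unknown\")\n| timechart span=1d limit=0 count by protocol\n```",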
              "script": "",
              "premium": "",
              "hw": "Pexip Infinity Management Node, Pexip Conferencing Nodes, Pexip Teams Connector",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.1,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 56,
            "none": 0
          }
        },
        {
          "i": "11.4",
          "n": "Mail Transport & Relay Infrastructure",
          "u": [
            {
              "i": "11.4.1",
              "n": "SMTP Service Availability",
              "c": "high",
              "f": "beginner",
              "v": "Distinct from mail queue depth monitoring — this checks whether the SMTP daemon is accepting TCP connections and responding to EHLO. A crashed postfix or sendmail process stops inbound/outbound mail entirely without generating queue entries. Nagios `check_smtp` verifies this at the connection layer; Splunk replicates it via daemon-level log monitoring.",
              "t": "`Splunk_TA_syslog`, `Splunk_TA_postfix` (community)",
              "d": "`sourcetype=syslog` (postfix, sendmail, exim logs), `sourcetype=postfix:syslog`",
              "q": "index=mail (sourcetype=syslog process=postfix* OR sourcetype=\"postfix:syslog\")\n| bucket _time span=5m\n| stats count as smtp_events by host, _time\n| streamstats window=3 min(smtp_events) as min_events by host\n| where min_events=0\n| eval status=\"SMTP_DOWN\"\n| table _time, host, status",
              "m": "Ingest Postfix/Sendmail syslog output via Universal Forwarder. Under normal operation, an active MTA generates constant log activity (queue manager, cleanup, smtp/smtpd). Absence of events for 5–10 minutes on an expected mail host indicates SMTP process death or service failure. Alert after 2 consecutive empty windows. Complement with a scripted input: `echo QUIT | nc -w5 host 25` — log exit code as synthetic probe result. Monitor separately for TLS handshake failures (port 587/465) as distinct service checks.",
              "z": "Single value (SMTP hosts down), Timeline (downtime events), Line chart (event rate per mail host), Table (host, MTA type, last event timestamp).",
              "kfp": "Planned MTA restarts, quiet nights on low-traffic servers, and synthetic probe gaps to tune windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_syslog`, `Splunk_TA_postfix` (community).\n• Ensure the following data sources are available: `sourcetype=syslog` (postfix, sendmail, exim logs), `sourcetype=postfix:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Postfix/Sendmail syslog output via Universal Forwarder. Under normal operation, an active MTA generates constant log activity (queue manager, cleanup, smtp/smtpd). Absence of events for 5–10 minutes on an expected mail host indicates SMTP process death or service failure. Alert after 2 consecutive empty windows. Complement with a scripted input: `echo QUIT | nc -w5 host 25` — log exit code as synthetic probe result. Monitor separately for TLS handshake failures (port 587/465) as distinct …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mail (sourcetype=syslog process=postfix* OR sourcetype=\"postfix:syslog\")\n| bucket _time span=5m\n| stats count as smtp_events by host, _time\n| streamstats window=3 min(smtp_events) as min_events by host\n| where min_events=0\n| eval status=\"SMTP_DOWN\"\n| table _time, host, status\n```\n\nUnderstanding this SPL\n\n**SMTP Service Availability** — Distinct from mail queue depth monitoring — this checks whether the SMTP daemon is accepting TCP connections and responding to EHLO. A crashed postfix or sendmail process stops inbound/outbound mail entirely without generating queue entries. Nagios `check_smtp` verifies this at the connection layer; Splunk replicates it via daemon-level log monitoring.\n\nDocumented **Data sources**: `sourcetype=syslog` (postfix, sendmail, exim logs), `sourcetype=postfix:syslog`. **App/TA** (typical add-on context): `Splunk_TA_syslog`, `Splunk_TA_postfix` (community). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mail; **sourcetype**: syslog, postfix:syslog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mail, sourcetype=syslog. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `streamstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where min_events=0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **SMTP Service Availability**): table _time, host, status\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (SMTP hosts down), Timeline (downtime events), Line chart (event rate per mail host), Table (host, MTA type, last event timestamp).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
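\n\nProbe-specific sketch (illustrative only): the `nc` check from Step 1, wrapped as a scripted input. The hostnames are placeholders for your MX hosts, and `nc` availability on the forwarder is an assumption:\n\n```bash\n#!/usr/bin/env bash\n# Synthetic SMTP probe: connect to port 25, send QUIT, emit one key=value event per host.\nfor h in mx1.example.com mx2.example.com; do\n  if echo QUIT | nc -w5 \"$h\" 25 > /dev/null 2>&1; then\n    echo \"probe=smtp host=$h port=25 status=up timestamp=$(date +%s)\"\n  else\n    echo \"probe=smtp host=$h port=25 status=down timestamp=$(date +%s)\"\n  fi\ndone\n```",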
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your mail servers and queues, so delivery, auth, and encryption problems show up in time to fix them.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "dell_emc",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.4.2",
              "n": "POP3 / IMAP Mail Retrieval Service Availability",
              "c": "medium",
              "f": "beginner",
              "v": "POP3 and IMAP services allow mail clients to retrieve messages. Even when delivery works correctly, a crashed Dovecot or Cyrus daemon prevents users from reading email, appearing as a total mail outage. Nagios `check_pop` and `check_imap` monitor these ports directly; Splunk replicates availability detection through daemon log analysis.",
              "t": "`Splunk_TA_syslog`",
              "d": "`sourcetype=syslog` (dovecot, cyrus-imapd logs), Dovecot authentication log",
              "q": "index=mail sourcetype=syslog (process=dovecot OR process=imap OR process=pop3)\n| bucket _time span=5m\n| stats count as imap_events by host, _time\n| where isnull(imap_events) OR imap_events=0\n| eval status=\"IMAP_POP3_DOWN\"\n| table _time, host, status",
              "m": "Forward Dovecot or Cyrus IMAP logs via Universal Forwarder. Dovecot logs login events, failed auth, and daemon lifecycle events continuously during normal operation. Zero events for >10 minutes on a mail host indicates a process crash or service failure. Alert after 2 consecutive empty windows. Cross-correlate with auth failures (could indicate process restart loops). For comprehensive coverage, deploy a scripted TCP probe on ports 143 (IMAP), 993 (IMAPS), 110 (POP3), 995 (POP3S).",
              "z": "Table (host, protocol, port, status), Timeline (downtime events), Single value (services down count), Line chart (login event rate as proxy for service health).",
              "kfp": "Mobile fetch intervals, IMAP idle differences, and clients sleeping on battery with no poller traffic.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_syslog`.\n• Ensure the following data sources are available: `sourcetype=syslog` (dovecot, cyrus-imapd logs), Dovecot authentication log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Dovecot or Cyrus IMAP logs via Universal Forwarder. Dovecot logs login events, failed auth, and daemon lifecycle events continuously during normal operation. Zero events for >10 minutes on a mail host indicates a process crash or service failure. Alert after 2 consecutive empty windows. Cross-correlate with auth failures (could indicate process restart loops). For comprehensive coverage, deploy a scripted TCP probe on ports 143 (IMAP), 993 (IMAPS), 110 (POP3), 995 (POP3S).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mail sourcetype=syslog (process=dovecot OR process=imap OR process=pop3)\n| bucket _time span=5m\n| stats count as imap_events by host, _time\n| where isnull(imap_events) OR imap_events=0\n| eval status=\"IMAP_POP3_DOWN\"\n| table _time, host, status\n```\n\nUnderstanding this SPL\n\n**POP3 / IMAP Mail Retrieval Service Availability** — POP3 and IMAP services allow mail clients to retrieve messages. Even when delivery works correctly, a crashed Dovecot or Cyrus daemon prevents users from reading email, appearing as a total mail outage. Nagios `check_pop` and `check_imap` monitor these ports directly; Splunk replicates availability detection through daemon log analysis.\n\nDocumented **Data sources**: `sourcetype=syslog` (dovecot, cyrus-imapd logs), Dovecot authentication log. **App/TA** (typical add-on context): `Splunk_TA_syslog`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mail; **sourcetype**: syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mail, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where isnull(imap_events) OR imap_events=0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **POP3 / IMAP Mail Retrieval Service Availability**): table _time, host, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (host, protocol, port, status), Timeline (downtime events), Single value (services down count), Line chart (login event rate as proxy for service health).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
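\n\nPort-probe sketch for the TCP checks mentioned in Step 1 (a hedged example; the hostname is a placeholder, and `nc -z` availability varies by distribution):\n\n```bash\n#!/usr/bin/env bash\n# Synthetic probe for IMAP (143), IMAPS (993), POP3 (110), POP3S (995).\nHOST=\"mail.example.com\"\nfor p in 143 993 110 995; do\n  if nc -z -w5 \"$HOST\" \"$p\" > /dev/null 2>&1; then\n    echo \"probe=mail_retrieval host=$HOST port=$p status=up timestamp=$(date +%s)\"\n  else\n    echo \"probe=mail_retrieval host=$HOST port=$p status=down timestamp=$(date +%s)\"\n  fi\ndone\n```",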
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your mail servers and queues, so delivery, auth, and encryption problems show up in time to fix them.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.4.3",
              "n": "Mail Queue Depth and Deferred Message Backlog",
              "c": "high",
              "f": "intermediate",
              "v": "Growing mail queue (deferred, hold) indicates delivery failures, recipient issues, or abuse. Detecting backlog early prevents bounce storms and blacklisting.",
              "t": "`Splunk_TA_syslog`, custom scripted input (mailq, postqueue)",
              "d": "Postfix `mailq`, Sendmail queue, Exchange queue length",
              "q": "index=mail sourcetype=mail_queue host=*\n| stats latest(queue_depth) as depth, latest(deferred_count) as deferred, latest(_time) as last_seen by host\n| where depth > 100 OR deferred > 50\n| table host depth deferred last_seen",
              "m": "Run `mailq` or equivalent every 5 minutes. Parse queue depth and deferred count. Alert when queue exceeds 100 or deferred exceeds 50. Correlate with rejection logs and recipient domains.",
              "z": "Line chart (queue depth over time), Table (host, queue, deferred), Single value (max queue).",
              "kfp": "Deferred queues during list migrations, directory sync jobs, and greylisting.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_syslog`, custom scripted input (mailq, postqueue).\n• Ensure the following data sources are available: Postfix `mailq`, Sendmail queue, Exchange queue length.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun `mailq` or equivalent every 5 minutes. Parse queue depth and deferred count. Alert when queue exceeds 100 or deferred exceeds 50. Correlate with rejection logs and recipient domains.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mail sourcetype=mail_queue host=*\n| stats latest(queue_depth) as depth, latest(deferred_count) as deferred, latest(_time) as last_seen by host\n| where depth > 100 OR deferred > 50\n| table host depth deferred last_seen\n```\n\nUnderstanding this SPL\n\n**Mail Queue Depth and Deferred Message Backlog** — Growing mail queue (deferred, hold) indicates delivery failures, recipient issues, or abuse. Detecting backlog early prevents bounce storms and blacklisting.\n\nDocumented **Data sources**: Postfix `mailq`, Sendmail queue, Exchange queue length. **App/TA** (typical add-on context): `Splunk_TA_syslog`, custom scripted input (mailq, postqueue). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mail; **sourcetype**: mail_queue. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mail, sourcetype=mail_queue. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where depth > 100 OR deferred > 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Mail Queue Depth and Deferred Message Backlog**): table host depth deferred last_seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (queue depth over time), Table (host, queue, deferred), Single value (max queue).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your mail servers and queues, so delivery, auth, and encryption problems show up in time to fix them.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.4.4",
              "n": "SMTP Authentication and Relay Policy Violations",
              "c": "high",
              "f": "beginner",
              "v": "Failed SMTP auth or unauthorized relay attempts may indicate credential stuffing or abuse. Monitoring supports security and ensures relay policy is enforced.",
              "t": "`Splunk_TA_syslog`, mail server logs",
              "d": "Postfix maillog, Sendmail logs, Exchange SMTP receive connector logs",
              "q": "index=mail sourcetype=syslog (process=postfix OR process=sendmail) (\"authentication failed\" OR \"relay denied\" OR \"reject\")\n| rex \"user=(?<sasl_user>\\S+)\"\n| stats count by src, sasl_user, action\n| where count > 10\n| sort -count",
              "m": "Forward mail server logs. Extract auth and relay outcomes. Alert on high volume of auth failures from single IP or relay denied for internal IPs (possible misconfiguration).",
              "z": "Table (IP, user, action, count), Timechart of failures, Map (GeoIP).",
              "kfp": "Roaming users and legacy apps with AUTH after password rotation; some alerts are helpdesk follow-up.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_syslog`, mail server logs.\n• Ensure the following data sources are available: Postfix maillog, Sendmail logs, Exchange SMTP receive connector logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward mail server logs. Extract auth and relay outcomes. Alert on high volume of auth failures from single IP or relay denied for internal IPs (possible misconfiguration).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mail sourcetype=syslog (process=postfix OR process=sendmail) (\"authentication failed\" OR \"relay denied\" OR \"reject\")\n| rex \"user=(?<sasl_user>\\S+)\"\n| stats count by src, sasl_user, action\n| where count > 10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**SMTP Authentication and Relay Policy Violations** — Failed SMTP auth or unauthorized relay attempts may indicate credential stuffing or abuse. Monitoring supports security and ensures relay policy is enforced.\n\nDocumented **Data sources**: Postfix maillog, Sendmail logs, Exchange SMTP receive connector logs. **App/TA** (typical add-on context): `Splunk_TA_syslog`, mail server logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mail; **sourcetype**: syslog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mail, sourcetype=syslog. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by src, sasl_user, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src, Authentication.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SMTP Authentication and Relay Policy Violations** — Failed SMTP auth or unauthorized relay attempts may indicate credential stuffing or abuse. Monitoring supports security and ensures relay policy is enforced.\n\nDocumented **Data sources**: Postfix maillog, Sendmail logs, Exchange SMTP receive connector logs. **App/TA** (typical add-on context): `Splunk_TA_syslog`, mail server logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (IP, user, action, count), Timechart of failures, Map (GeoIP).",
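\n\nMap sketch for the GeoIP visualization (assumes your TA already extracts `src` as the client IP; `iplocation` uses Splunk's bundled geo database, so locations are approximate):\n\n```spl\nindex=mail sourcetype=syslog (process=postfix OR process=sendmail) (\"authentication failed\" OR \"relay denied\" OR \"reject\")\n| iplocation src\n| geostats count\n```\n\nRender the result with the cluster map visualization.",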
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your mail servers and queues, so delivery, auth, and encryption problems show up in time to fix them.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src, Authentication.action | sort - count",
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.4.5",
              "n": "Mail Delivery Rate and Bounce Rate by Domain",
              "c": "medium",
              "f": "intermediate",
              "v": "Sudden drop in delivery rate or spike in bounces for a domain indicates reputation or configuration issues. Trending supports deliverability and capacity planning.",
              "t": "Mail server logs, bounce logs",
              "d": "Postfix/Sendmail delivery status, bounce messages, Exchange tracking logs",
              "q": "index=mail sourcetype=mail_delivery\n| bin _time span=1h\n| stats count(eval(status=\"delivered\")) as delivered, count(eval(status=\"bounce\")) as bounces by domain, _time\n| eval bounce_rate=round(bounces/(delivered+bounces)*100, 2)\n| where bounce_rate > 5 OR delivered < 10\n| table domain delivered bounces bounce_rate",
              "m": "Parse delivery and bounce events by recipient domain. Compute hourly delivery and bounce rate. Alert when bounce rate exceeds 5% or delivery volume drops significantly for critical domains.",
              "z": "Line chart (delivery and bounce rate by domain), Table (domain, delivered, bounces, %), Bar chart (bounce rate by domain).",
              "kfp": "Major sender campaigns, DNS changes, and recipient domain moves that change bounce rate.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Mail server logs, bounce logs.\n• Ensure the following data sources are available: Postfix/Sendmail delivery status, bounce messages, Exchange tracking logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse delivery and bounce events by recipient domain. Compute hourly delivery and bounce rate. Alert when bounce rate exceeds 5% or delivery volume drops significantly for critical domains.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mail sourcetype=mail_delivery\n| bin _time span=1h\n| stats count(eval(status=\"delivered\")) as delivered, count(eval(status=\"bounce\")) as bounces by domain, _time\n| eval bounce_rate=round(bounces/(delivered+bounces)*100, 2)\n| where bounce_rate > 5 OR delivered < 10\n| table domain delivered bounces bounce_rate\n```\n\nUnderstanding this SPL\n\n**Mail Delivery Rate and Bounce Rate by Domain** — Sudden drop in delivery rate or spike in bounces for a domain indicates reputation or configuration issues. Trending supports deliverability and capacity planning.\n\nDocumented **Data sources**: Postfix/Sendmail delivery status, bounce messages, Exchange tracking logs. **App/TA** (typical add-on context): Mail server logs, bounce logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mail; **sourcetype**: mail_delivery. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mail, sourcetype=mail_delivery. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by domain, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **bounce_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where bounce_rate > 5 OR delivered < 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Mail Delivery Rate and Bounce Rate by Domain**): table domain delivered bounces bounce_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (delivery and bounce rate by domain), Table (domain, delivered, bounces, %), Bar chart (bounce rate by domain).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your mail servers and queues, so delivery, auth, and encryption problems show up in time to fix them.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.4.6",
              "n": "Outbound Mail Volume and Recipient Anomaly",
              "c": "high",
              "f": "intermediate",
              "v": "Unusual outbound volume or new bulk recipients may indicate compromised account or phishing campaign. Baseline and anomaly detection support incident response.",
              "t": "Mail server logs",
              "d": "Postfix/Sendmail/Exchange outbound logs",
              "q": "index=mail sourcetype=mail_send\n| bin _time span=1h\n| stats dc(recipient) as recipients, count as msg_count by sender, _time\n| eventstats avg(msg_count) as avg_count, stdev(msg_count) as std_count by sender\n| eval z_score=if(std_count>0, (msg_count-avg_count)/std_count, 0)\n| where z_score > 3 OR recipients > 100\n| table _time sender msg_count recipients z_score",
              "m": "Ingest outbound send events. Baseline message count and unique recipients per sender (hourly). Alert when volume or recipient count exceeds 3 standard deviations or recipient count >100 in one hour.",
              "z": "Table (sender, count, recipients, z-score), Line chart (volume by sender), Bar chart (top senders).",
              "kfp": "Newsletters, finance batch runs, and year-end mail bursts from approved systems.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Mail server logs.\n• Ensure the following data sources are available: Postfix/Sendmail/Exchange outbound logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest outbound send events. Baseline message count and unique recipients per sender (hourly). Alert when volume or recipient count exceeds 3 standard deviations or recipient count >100 in one hour.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mail sourcetype=mail_send\n| bin _time span=1h\n| stats dc(recipient) as recipients, count as msg_count by sender, _time\n| eventstats avg(msg_count) as avg_count, stdev(msg_count) as std_count by sender\n| eval z_score=if(std_count>0, (msg_count-avg_count)/std_count, 0)\n| where z_score > 3 OR recipients > 100\n| table _time sender msg_count recipients z_score\n```\n\nUnderstanding this SPL\n\n**Outbound Mail Volume and Recipient Anomaly** — Unusual outbound volume or new bulk recipients may indicate compromised account or phishing campaign. Baseline and anomaly detection support incident response.\n\nDocumented **Data sources**: Postfix/Sendmail/Exchange outbound logs. **App/TA** (typical add-on context): Mail server logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mail; **sourcetype**: mail_send. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mail, sourcetype=mail_send. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by sender, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by sender** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where z_score > 3 OR recipients > 100` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Outbound Mail Volume and Recipient Anomaly**): table _time sender msg_count recipients z_score\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (sender, count, recipients, z-score), Line chart (volume by sender), Bar chart (top senders).",
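              "_spl_sketch": "Optional illustrative variant (a hedged sketch) for the 'new bulk recipients' angle: surfaces recipient domains a sender first contacted in the last day. `recipient` is the documented field; the derived `recipient_domain` is illustrative.\n\n```spl\nindex=mail sourcetype=mail_send\n| eval recipient_domain=mvindex(split(recipient,\"@\"),1)\n| stats earliest(_time) as first_seen, count by sender, recipient_domain\n| where first_seen >= relative_time(now(), \"-1d@d\")\n| sort - count\n```",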
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.4.7",
              "n": "Mail Server TLS and Certificate Expiration",
              "c": "critical",
              "f": "beginner",
              "v": "Expired or expiring TLS certificates on SMTP/IMAP/POP break encryption and can cause delivery failures. Proactive monitoring prevents outages.",
              "t": "Custom scripted input (openssl s_client)",
              "d": "TLS handshake to mail server ports (25, 465, 587, 993, 995)",
              "q": "index=mail sourcetype=mail_tls host=*\n| eval days_left=round((expiry_epoch-now())/86400, 0)\n| where days_left < 30\n| table host port days_left subject\n| sort days_left",
              "m": "Script that connects to mail server ports and extracts certificate expiry (e.g. `openssl s_client -connect host:25 -starttls smtp`). Ingest daily. Alert when expiry is within 30 days.",
              "z": "Table (host, port, days left), Single value (soonest expiry), Gauge (days remaining).",
              "kfp": "Certificate renewals on balancers, staged cutovers, and test mailboxes hitting TLS to edge hosts.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (openssl s_client).\n• Ensure the following data sources are available: TLS handshake to mail server ports (25, 465, 587, 993, 995).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScript that connects to mail server ports and extracts certificate expiry (e.g. `openssl s_client -connect host:25 -starttls smtp`). Ingest daily. Alert when expiry is within 30 days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mail sourcetype=mail_tls host=*\n| eval days_left=round((expiry_epoch-now())/86400, 0)\n| where days_left < 30\n| table host port days_left subject\n| sort days_left\n```\n\nUnderstanding this SPL\n\n**Mail Server TLS and Certificate Expiration** — Expired or expiring TLS certificates on SMTP/IMAP/POP break encryption and can cause delivery failures. Proactive monitoring prevents outages.\n\nDocumented **Data sources**: TLS handshake to mail server ports (25, 465, 587, 993, 995). **App/TA** (typical add-on context): Custom scripted input (openssl s_client). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mail; **sourcetype**: mail_tls. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mail, sourcetype=mail_tls. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left < 30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Mail Server TLS and Certificate Expiration**): table host port days_left subject\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, port, days left), Single value (soonest expiry), Gauge (days remaining).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your mail servers and queues, so delivery, auth, and encryption problems show up in time to fix them.",
              "mtype": [
                "Availability",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.4.8",
              "n": "SMTP Relay Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Tracks messages relayed through internal SMTP gateways vs policy — unexpected relay volume or open relay abuse paths.",
              "t": "`Splunk_TA_syslog`, Postfix/Exchange logs",
              "d": "`postfix:syslog` `relay=`, `status=sent`, `reject` relay attempts",
              "q": "index=mail sourcetype=\"postfix:syslog\" OR sourcetype=syslog process=postfix\n| search relay=* OR \"relay access denied\"\n| stats count by relay_domain, action, src\n| where count > 500",
              "m": "Parse relay lines for authorized vs denied. Alert on high relay denied from single IP (scanning) or high accepted relay to external domains (misconfiguration).",
              "z": "Table (relay domain, count), Line chart (relay attempts), Single value (relay denied rate).",
              "kfp": "Intentional smarthost relay via approved providers, scanner mail, and line-of-business app relays.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_syslog`, Postfix/Exchange logs.\n• Ensure the following data sources are available: `postfix:syslog` `relay=`, `status=sent`, `reject` relay attempts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse relay lines for authorized vs denied. Alert on high relay denied from single IP (scanning) or high accepted relay to external domains (misconfiguration).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mail sourcetype=\"postfix:syslog\" OR sourcetype=syslog process=postfix\n| search relay=* OR \"relay access denied\"\n| stats count by relay_domain, action, src\n| where count > 500\n```\n\nUnderstanding this SPL\n\n**SMTP Relay Monitoring** — Tracks messages relayed through internal SMTP gateways vs policy — unexpected relay volume or open relay abuse paths.\n\nDocumented **Data sources**: `postfix:syslog` `relay=`, `status=sent`, `reject` relay attempts. **App/TA** (typical add-on context): `Splunk_TA_syslog`, Postfix/Exchange logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mail; **sourcetype**: postfix:syslog, syslog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mail, sourcetype=\"postfix:syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by relay_domain, action, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 500` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (relay domain, count), Line chart (relay attempts), Single value (relay denied rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your mail servers and queues, so delivery, auth, and encryption problems show up in time to fix them.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "exchange",
                "syslog"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.2,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 8,
            "none": 0
          }
        },
        {
          "i": "11.5",
          "n": "Video Conferencing & Collaboration Analytics",
          "u": [
            {
              "i": "11.5.1",
              "n": "Zoom Meeting Quality Metrics (Jitter, Packet Loss, Latency)",
              "c": "high",
              "f": "intermediate",
              "v": "Poor jitter, loss, or latency directly degrades audio/video MOS and drives support tickets; trending these metrics isolates client, ISP, or Zoom POP issues before executive calls fail.",
              "t": "Splunk Connect for Zoom",
              "d": "`sourcetype=zoom:metrics`, Zoom dashboard quality API / meeting QoS events",
              "q": "index=zoom sourcetype=\"zoom:metrics\"\n| where avg_jitter_ms > 30 OR packet_loss_pct > 2 OR avg_rtt_ms > 300\n| timechart span=5m avg(avg_jitter_ms) as jitter_ms, avg(packet_loss_pct) as loss_pct, avg(avg_rtt_ms) as rtt_ms by meeting_id",
              "m": "Ingest Zoom meeting quality or participant QoS feeds via the official connector. Normalize per-participant jitter, loss, and RTT. Baseline by region and device type. Alert when thresholds exceed SLA for sustained intervals. Correlate with ISP and VPN indicators.",
              "z": "Line chart (jitter, loss, RTT over time), Heatmap (participant × metric), Table (worst meetings in window).",
              "kfp": "Home Wi-Fi, shared broadband, and VPN hairpin; not every poor metric is a Zoom defect.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Zoom.\n• Ensure the following data sources are available: `sourcetype=zoom:metrics`, Zoom dashboard quality API / meeting QoS events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Zoom meeting quality or participant QoS feeds via the official connector. Normalize per-participant jitter, loss, and RTT. Baseline by region and device type. Alert when thresholds exceed SLA for sustained intervals. Correlate with ISP and VPN indicators.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zoom sourcetype=\"zoom:metrics\"\n| where avg_jitter_ms > 30 OR packet_loss_pct > 2 OR avg_rtt_ms > 300\n| timechart span=5m avg(avg_jitter_ms) as jitter_ms, avg(packet_loss_pct) as loss_pct, avg(avg_rtt_ms) as rtt_ms by meeting_id\n```\n\nUnderstanding this SPL\n\n**Zoom Meeting Quality Metrics (Jitter, Packet Loss, Latency)** — Poor jitter, loss, or latency directly degrades audio/video MOS and drives support tickets; trending these metrics isolates client, ISP, or Zoom POP issues before executive calls fail.\n\nDocumented **Data sources**: `sourcetype=zoom:metrics`, Zoom dashboard quality API / meeting QoS events. **App/TA** (typical add-on context): Splunk Connect for Zoom. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zoom; **sourcetype**: zoom:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zoom, sourcetype=\"zoom:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where avg_jitter_ms > 30 OR packet_loss_pct > 2 OR avg_rtt_ms > 300` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by meeting_id** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (jitter, loss, RTT over time), Heatmap (participant × metric), Table (worst meetings in window).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track Zoom call quality, so choppy video or bad audio is something we can trace instead of a vague complaint.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.5.2",
              "n": "Zoom Call Drop Rate Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Elevated drop rates signal network instability, client bugs, or capacity limits; tracking drops by geography and client version prioritizes fixes.",
              "t": "Splunk Connect for Zoom",
              "d": "`sourcetype=zoom:meetings`, meeting end / participant disconnect events",
              "q": "index=zoom sourcetype=\"zoom:meetings\"\n| eval er=lower(end_reason)\n| eval dropped=if(like(er,\"%drop%\") OR like(er,\"%disconnect%\") OR like(er,\"%lost%\"),1,0)\n| bin _time span=1h\n| stats sum(dropped) as drops, count as meetings by _time\n| eval drop_rate_pct=if(meetings>0, round(drops/meetings*100,2), 0)\n| where drop_rate_pct > 5",
              "m": "Ingest meeting lifecycle events with end reason and duration. Compute hourly drop rate = meetings ended abnormally / total meetings. Segment by account, data center, or client version. Alert when drop rate exceeds baseline (e.g., >5%).",
              "z": "Line chart (drop rate % over time), Bar chart (drops by region), Single value (drop rate last hour).",
              "kfp": "Client updates, local PC sleep, and carrier glitches; correlate before blaming the cloud bridge.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Zoom.\n• Ensure the following data sources are available: `sourcetype=zoom:meetings`, meeting end / participant disconnect events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest meeting lifecycle events with end reason and duration. Compute hourly drop rate = meetings ended abnormally / total meetings. Segment by account, data center, or client version. Alert when drop rate exceeds baseline (e.g., >5%).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zoom sourcetype=\"zoom:meetings\"\n| eval er=lower(end_reason)\n| eval dropped=if(like(er,\"%drop%\") OR like(er,\"%disconnect%\") OR like(er,\"%lost%\"),1,0)\n| bin _time span=1h\n| stats sum(dropped) as drops, count as meetings by _time\n| eval drop_rate_pct=if(meetings>0, round(drops/meetings*100,2), 0)\n| where drop_rate_pct > 5\n```\n\nUnderstanding this SPL\n\n**Zoom Call Drop Rate Monitoring** — Elevated drop rates signal network instability, client bugs, or capacity limits; tracking drops by geography and client version prioritizes fixes.\n\nDocumented **Data sources**: `sourcetype=zoom:meetings`, meeting end / participant disconnect events. **App/TA** (typical add-on context): Splunk Connect for Zoom. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zoom; **sourcetype**: zoom:meetings. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zoom, sourcetype=\"zoom:meetings\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **er** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dropped** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **drop_rate_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drop_rate_pct > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (drop rate % over time), Bar chart (drops by region), Single value (drop rate last hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.5.3",
              "n": "Zoom Participant Join Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Join failures block users from hearings and classes; clustering failures by error code reveals SSO, licensing, or capacity misconfigurations.",
              "t": "Splunk Connect for Zoom",
              "d": "`sourcetype=zoom:participant`, join attempt logs",
              "q": "index=zoom sourcetype=\"zoom:participant\" join_result!=\"success\"\n| stats count by error_code, join_result, client_type\n| sort -count",
              "m": "Capture join attempts with result, error code, meeting type, and IdP correlation if SAML. Alert on spikes in specific codes (e.g., 3000-series). Compare with Okta/Azure AD sign-in success for the same window.",
              "z": "Table (error_code, count), Line chart (failed joins over time), Bar chart (failures by client type).",
              "kfp": "Capacity limits, invites to wrong time zones, and large webinars with join throttling as designed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Zoom.\n• Ensure the following data sources are available: `sourcetype=zoom:participant`, join attempt logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture join attempts with result, error code, meeting type, and IdP correlation if SAML. Alert on spikes in specific codes (e.g., 3000-series). Compare with Okta/Azure AD sign-in success for the same window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zoom sourcetype=\"zoom:participant\" join_result!=\"success\"\n| stats count by error_code, join_result, client_type\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Zoom Participant Join Failures** — Join failures block users from hearings and classes; clustering failures by error code reveals SSO, licensing, or capacity misconfigurations.\n\nDocumented **Data sources**: `sourcetype=zoom:participant`, join attempt logs. **App/TA** (typical add-on context): Splunk Connect for Zoom. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zoom; **sourcetype**: zoom:participant. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zoom, sourcetype=\"zoom:participant\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by error_code, join_result, client_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (error_code, count), Line chart (failed joins over time), Bar chart (failures by client type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.5.4",
              "n": "Webex Device Health",
              "c": "high",
              "f": "intermediate",
              "v": "Room devices with high CPU, thermal, or firmware errors degrade meetings; proactive health reduces onsite truck rolls and VIP room incidents.",
              "t": "`ta_cisco_webex_add_on_for_splunk`",
              "d": "`sourcetype=webex:device`, Webex Control Hub device telemetry",
              "q": "index=webex sourcetype=\"webex:device\"\n| where health_state!=\"ok\" OR cpu_pct > 85 OR temperature_c > 45\n| stats latest(health_state) as state, max(cpu_pct) as max_cpu, max(temperature_c) as max_temp by device_id, product\n| sort -max_cpu",
              "m": "Ingest Control Hub device inventory and health APIs. Poll or stream alerts for offline, warning, or error states. Track firmware version drift. Alert on sustained high CPU or temperature before automatic thermal throttling.",
              "z": "Status grid (device × health), Table (devices over threshold), Line chart (CPU/temperature trend).",
              "kfp": "Planned room maintenance, power cycles, and vendor swaps on devices under warranty.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `ta_cisco_webex_add_on_for_splunk`.\n• Ensure the following data sources are available: `sourcetype=webex:device`, Webex Control Hub device telemetry.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Control Hub device inventory and health APIs. Poll or stream alerts for offline, warning, or error states. Track firmware version drift. Alert on sustained high CPU or temperature before automatic thermal throttling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=webex sourcetype=\"webex:device\"\n| where health_state!=\"ok\" OR cpu_pct > 85 OR temperature_c > 45\n| stats latest(health_state) as state, max(cpu_pct) as max_cpu, max(temperature_c) as max_temp by device_id, product\n| sort -max_cpu\n```\n\nUnderstanding this SPL\n\n**Webex Device Health** — Room devices with high CPU, thermal, or firmware errors degrade meetings; proactive health reduces onsite truck rolls and VIP room incidents.\n\nDocumented **Data sources**: `sourcetype=webex:device`, Webex Control Hub device telemetry. **App/TA** (typical add-on context): `ta_cisco_webex_add_on_for_splunk`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: webex; **sourcetype**: webex:device. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=webex, sourcetype=\"webex:device\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where health_state!=\"ok\" OR cpu_pct > 85 OR temperature_c > 45` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by device_id, product** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (device × health), Table (devices over threshold), Line chart (CPU/temperature trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use this to stay ahead of email and collaboration problems, so the team is not the last to know when something drifts or breaks.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.5.5",
              "n": "Webex Room System Uptime",
              "c": "medium",
              "f": "beginner",
              "v": "Room system availability underpins executive and boardroom SLAs; uptime trending supports hardware refresh and network path decisions.",
              "t": "`ta_cisco_webex_add_on_for_splunk`",
              "d": "`sourcetype=webex:device`, device online/offline events",
              "q": "index=webex sourcetype=\"webex:device\"\n| eval up=if(connection_state=\"connected\",1,0)\n| bin _time span=1h\n| stats avg(up) as uptime_ratio by _time, device_id\n| where uptime_ratio < 0.99\n| table _time, device_id, uptime_ratio",
              "m": "Derive online state from heartbeat or Control Hub connectivity. Compute rolling uptime per room vs. expected business hours. Alert on devices below 99% weekly uptime or prolonged offline spans.",
              "z": "Line chart (uptime ratio by room), Single value (fleet uptime %), Table (rooms below SLA).",
              "kfp": "Nights and weekends with powered-down rooms; compare against occupancy expectations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `ta_cisco_webex_add_on_for_splunk`.\n• Ensure the following data sources are available: `sourcetype=webex:device`, device online/offline events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDerive online state from heartbeat or Control Hub connectivity. Compute rolling uptime per room vs. expected business hours. Alert on devices below 99% weekly uptime or prolonged offline spans.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=webex sourcetype=\"webex:device\"\n| eval up=if(connection_state=\"connected\",1,0)\n| bin _time span=1h\n| stats avg(up) as uptime_ratio by _time, device_id\n| where uptime_ratio < 0.99\n| table _time, device_id, uptime_ratio\n```\n\nUnderstanding this SPL\n\n**Webex Room System Uptime** — Room system availability underpins executive and boardroom SLAs; uptime trending supports hardware refresh and network path decisions.\n\nDocumented **Data sources**: `sourcetype=webex:device`, device online/offline events. **App/TA** (typical add-on context): `ta_cisco_webex_add_on_for_splunk`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: webex; **sourcetype**: webex:device. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=webex, sourcetype=\"webex:device\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **up** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, device_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where uptime_ratio < 0.99` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Webex Room System Uptime**): table _time, device_id, uptime_ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (uptime ratio by room), Single value (fleet uptime %), Table (rooms below SLA).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch meeting rooms and devices, so empty bookings, bad gear, and license waste are visible before a big event fails.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.5.6",
              "n": "Video Conferencing License Utilization",
              "c": "medium",
              "f": "beginner",
              "v": "Underused licenses waste budget; near-capacity entitlements risk meeting blocks during peaks—utilization guides true-up and consolidation across Zoom/Webex/Teams.",
              "t": "Splunk Connect for Zoom, Webex TA, Microsoft 365 licensing inputs",
              "d": "`sourcetype=zoom:account`, `sourcetype=webex:license`, `sourcetype=m365:license`",
              "q": "index=saas (sourcetype=\"zoom:account\" OR sourcetype=\"webex:license\" OR sourcetype=\"m365:license\")\n| eval used_pct=round(assigned_licenses/nullif(total_licenses,0)*100,1)\n| where used_pct > 90 OR used_pct < 60\n| table platform, sku, assigned_licenses, total_licenses, used_pct",
              "m": "Ingest license counts and active assignments from each vendor’s admin API on a daily schedule. Map SKUs to collaboration products. Alert above 90% utilization and report under-60% for reclamation. Normalize multi-platform duplicates where possible via email identity.",
              "z": "Bar chart (utilization % by platform), Table (SKU detail), Line chart (assigned licenses over time).",
              "kfp": "True-up timing, co-term renewals, and short peaks during all-hands that look like a shortage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Zoom, Webex TA, Microsoft 365 licensing inputs.\n• Ensure the following data sources are available: `sourcetype=zoom:account`, `sourcetype=webex:license`, `sourcetype=m365:license`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest license counts and active assignments from each vendor’s admin API on a daily schedule. Map SKUs to collaboration products. Alert above 90% utilization and report under-60% for reclamation. Normalize multi-platform duplicates where possible via email identity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=saas (sourcetype=\"zoom:account\" OR sourcetype=\"webex:license\" OR sourcetype=\"m365:license\")\n| eval used_pct=round(assigned_licenses/nullif(total_licenses,0)*100,1)\n| where used_pct > 90 OR used_pct < 60\n| table platform, sku, assigned_licenses, total_licenses, used_pct\n```\n\nUnderstanding this SPL\n\n**Video Conferencing License Utilization** — Underused licenses waste budget; near-capacity entitlements risk meeting blocks during peaks—utilization guides true-up and consolidation across Zoom/Webex/Teams.\n\nDocumented **Data sources**: `sourcetype=zoom:account`, `sourcetype=webex:license`, `sourcetype=m365:license`. **App/TA** (typical add-on context): Splunk Connect for Zoom, Webex TA, Microsoft 365 licensing inputs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: saas; **sourcetype**: zoom:account, webex:license, m365:license. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=saas, sourcetype=\"zoom:account\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct > 90 OR used_pct < 60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Video Conferencing License Utilization**): table platform, sku, assigned_licenses, total_licenses, used_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (utilization % by platform), Table (SKU detail), Line chart (assigned licenses over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track collaboration license use, so we buy only what we need and still have seats when everyone jumps on a call.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.5.7",
              "n": "Meeting Recording Storage Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Cloud recording storage grows with retention policies; trending consumption avoids surprise overages and informs lifecycle rules for compliance vs. cost.",
              "t": "Splunk Connect for Zoom, Webex TA, Microsoft Graph / Teams recording metadata",
              "d": "`sourcetype=zoom:recording`, `sourcetype=webex:recording`, `sourcetype=m365:teams_recording`",
              "q": "index=saas (sourcetype=\"zoom:recording\" OR sourcetype=\"webex:recording\" OR sourcetype=\"m365:teams_recording\")\n| eval size_gb=round(storage_bytes/1073741824,2)\n| timechart span=1d sum(size_gb) as daily_gb by platform",
              "m": "Ingest recording completion events with byte size and retention class. Sum daily growth per platform. Project growth with linear regression or `predict` on a single series for 30-day forecast. Alert when projected storage crosses budget tiers. Pair with legal hold tags where applicable.",
              "z": "Area chart (storage growth by platform), Line chart (daily_gb trend), Table (largest tenants or sites).",
              "kfp": "Retention policies, large webinar recordings, and compliance archives growing storage in bursts.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Connect for Zoom, Webex TA, Microsoft Graph / Teams recording metadata.\n• Ensure the following data sources are available: `sourcetype=zoom:recording`, `sourcetype=webex:recording`, `sourcetype=m365:teams_recording`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest recording completion events with byte size and retention class. Sum daily growth per platform. Project growth with linear regression or `predict` on a single series for 30-day forecast. Alert when projected storage crosses budget tiers. Pair with legal hold tags where applicable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=saas (sourcetype=\"zoom:recording\" OR sourcetype=\"webex:recording\" OR sourcetype=\"m365:teams_recording\")\n| eval size_gb=round(storage_bytes/1073741824,2)\n| timechart span=1d sum(size_gb) as daily_gb by platform\n```\n\nUnderstanding this SPL\n\n**Meeting Recording Storage Trending** — Cloud recording storage grows with retention policies; trending consumption avoids surprise overages and informs lifecycle rules for compliance vs. cost.\n\nDocumented **Data sources**: `sourcetype=zoom:recording`, `sourcetype=webex:recording`, `sourcetype=m365:teams_recording`. **App/TA** (typical add-on context): Splunk Connect for Zoom, Webex TA, Microsoft Graph / Teams recording metadata. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: saas; **sourcetype**: zoom:recording, webex:recording, m365:teams_recording. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=saas, sourcetype=\"zoom:recording\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **size_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by platform** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (storage growth by platform), Line chart (daily_gb trend), Table (largest tenants or sites).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how fast meeting recording storage grows, so legal hold and cost stay under control as people record everything.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.5.8",
              "n": "Teams Meeting Quality Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Teams Call Quality Dashboard (CQD) data exposes poor Wi‑Fi, VPN, or PSTN legs; Splunk rollups unify CQD with org context for targeted network fixes.",
              "t": "Splunk Add-on for Microsoft Cloud Services, Microsoft 365 Add-on",
              "d": "`sourcetype=m365:teams_cqd`, Call Records / CQD feed",
              "q": "index=m365 sourcetype=\"m365:teams_cqd\"\n| where avg_video_frame_loss_pct > 5 OR avg_round_trip_time_ms > 300 OR poor_stream_pct > 10\n| stats avg(avg_video_frame_loss_pct) as avg_loss, avg(avg_round_trip_time_ms) as avg_rtt, avg(poor_stream_pct) as poor_pct by user_principal_name, building_name\n| sort -poor_pct",
              "m": "Ingest CQD or Call Records via Graph / data export. Join subnet or building names from network inventory. Baseline per site. Alert when poor stream percentage or packet loss exceeds SLA. Feed top offenders to network ops.",
              "z": "Table (users/sites with worst quality), Line chart (poor stream % trend), Map or bar chart (quality by building).",
              "kfp": "Wi-Fi, VPN, and headset drivers; some poor streams are on the end user path, not the Teams service alone.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services, Microsoft 365 Add-on.\n• Ensure the following data sources are available: `sourcetype=m365:teams_cqd`, Call Records / CQD feed.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CQD or Call Records via Graph / data export. Join subnet or building names from network inventory. Baseline per site. Alert when poor stream percentage or packet loss exceeds SLA. Feed top offenders to network ops.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=m365 sourcetype=\"m365:teams_cqd\"\n| where avg_video_frame_loss_pct > 5 OR avg_round_trip_time_ms > 300 OR poor_stream_pct > 10\n| stats avg(avg_video_frame_loss_pct) as avg_loss, avg(avg_round_trip_time_ms) as avg_rtt, avg(poor_stream_pct) as poor_pct by user_principal_name, building_name\n| sort -poor_pct\n```\n\nUnderstanding this SPL\n\n**Teams Meeting Quality Analysis** — Teams Call Quality Dashboard (CQD) data exposes poor Wi‑Fi, VPN, or PSTN legs; Splunk rollups unify CQD with org context for targeted network fixes.\n\nDocumented **Data sources**: `sourcetype=m365:teams_cqd`, Call Records / CQD feed. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services, Microsoft 365 Add-on. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: m365; **sourcetype**: m365:teams_cqd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=m365, sourcetype=\"m365:teams_cqd\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where avg_video_frame_loss_pct > 5 OR avg_round_trip_time_ms > 300 OR poor_stream_pct > 10` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user_principal_name, building_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users/sites with worst quality), Line chart (poor stream % trend), Map or bar chart (quality by building).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at Teams call quality, so we can fix Wi-Fi and network paths that ruin meetings more than a bad laptop mic.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.5.9",
              "n": "Meeting Room No-Show and Early Release Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Meeting rooms are expensive corporate assets. Rooms booked but never occupied (no-shows) and meetings that end well before the booked time waste capacity that other teams need. Quantifying no-show rates and early release patterns provides facilities and IT leadership with evidence to implement auto-release policies, shorten default booking durations, and right-size room inventory — directly improving room availability without adding physical space.",
              "t": "`Cisco Webex Add-on` (Splunkbase #5781), Cisco Spaces Add-On (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), calendar API integration",
              "d": "`sourcetype=webex:room_analytics` (RoomAnalytics PeoplePresence), `sourcetype=cisco:spaces:occupancy`, calendar booking data",
              "q": "index=collaboration sourcetype=\"webex:room_analytics\"\n| eval booked=if(isnotnull(booking_id), 1, 0)\n| eval occupied=if(people_presence==\"Yes\" OR people_count>0, 1, 0)\n| eval no_show=if(booked==1 AND occupied==0, 1, 0)\n| eval early_release_min=if(booked==1 AND occupied==1, round((booking_end_epoch - actual_end_epoch)/60, 0), null())\n| eval early_release=if(isnotnull(early_release_min) AND early_release_min > round(booking_duration_min*0.5, 0), 1, 0)\n| bin _time span=1d\n| stats count(eval(booked==1)) as total_bookings, sum(no_show) as no_shows, sum(early_release) as early_releases, avg(early_release_min) as avg_early_min by _time, room_name, building\n| eval no_show_pct=round(no_shows*100/total_bookings, 1)\n| eval early_pct=round(early_releases*100/total_bookings, 1)\n| eval wasted_pct=round((no_shows+early_releases)*100/total_bookings, 1)\n| table _time, building, room_name, total_bookings, no_show_pct, early_pct, wasted_pct, avg_early_min\n| sort -wasted_pct",
              "m": "Combine Webex RoomOS room analytics data (PeoplePresence and PeopleCount sensors) with calendar booking data (Exchange/O365 room resource calendar or Webex calendar integration). A room is a \"no-show\" if it was booked but PeoplePresence remained \"No\" for the entire booking duration (allow 10-minute grace period). A meeting is an \"early release\" if it ended more than 50% before the booked end time. Track daily trends per room and building. Identify chronically wasted rooms (>30% no-show rate) for policy intervention. Feed data to Cisco Spaces for automated room release workflows. Provide monthly reports to facilities management with cost-per-wasted-hour calculations based on floor space cost allocation.",
              "z": "Bar chart (no-show % by room), Line chart (fleet-wide no-show rate trend over 90 days), Heatmap (room × day-of-week waste), Table (worst rooms with waste percentage), Single value (weekly wasted hours).",
              "kfp": "Calendar hygiene issues, overbooking of rooms, and hybrid meetings that no-show a physical room.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Webex Add-on` (Splunkbase #5781), Cisco Spaces Add-On (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), calendar API integration.\n• Ensure the following data sources are available: `sourcetype=webex:room_analytics` (RoomAnalytics PeoplePresence), `sourcetype=cisco:spaces:occupancy`, calendar booking data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCombine Webex RoomOS room analytics data (PeoplePresence and PeopleCount sensors) with calendar booking data (Exchange/O365 room resource calendar or Webex calendar integration). A room is a \"no-show\" if it was booked but PeoplePresence remained \"No\" for the entire booking duration (allow 10-minute grace period). A meeting is an \"early release\" if it ended more than 50% before the booked end time. Track daily trends per room and building. Identify chronically wasted rooms (>30% no-show rate) for…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=collaboration sourcetype=\"webex:room_analytics\"\n| eval booked=if(isnotnull(booking_id), 1, 0)\n| eval occupied=if(people_presence==\"Yes\" OR people_count>0, 1, 0)\n| eval no_show=if(booked==1 AND occupied==0, 1, 0)\n| eval early_release_min=if(booked==1 AND occupied==1, round((booking_end_epoch - actual_end_epoch)/60, 0), null())\n| eval early_release=if(isnotnull(early_release_min) AND early_release_min > round(booking_duration_min*0.5, 0), 1, 0)\n| bin _time span=1d\n| stats count(eval(booked==1)) as total_bookings, sum(no_show) as no_shows, sum(early_release) as early_releases, avg(early_release_min) as avg_early_min by _time, room_name, building\n| eval no_show_pct=round(no_shows*100/total_bookings, 1)\n| eval early_pct=round(early_releases*100/total_bookings, 1)\n| eval wasted_pct=round((no_shows+early_releases)*100/total_bookings, 1)\n| table _time, building, room_name, total_bookings, no_show_pct, early_pct, wasted_pct, avg_early_min\n| sort -wasted_pct\n```\n\nUnderstanding this SPL\n\n**Meeting Room No-Show and Early Release Trending** — Meeting rooms are expensive corporate assets. Rooms booked but never occupied (no-shows) and meetings that end well before the booked time waste capacity that other teams need. Quantifying no-show rates and early release patterns provides facilities and IT leadership with evidence to implement auto-release policies, shorten default booking durations, and right-size room inventory — directly improving room availability without adding physical space.\n\nDocumented **Data sources**: `sourcetype=webex:room_analytics` (RoomAnalytics PeoplePresence), `sourcetype=cisco:spaces:occupancy`, calendar booking data. **App/TA** (typical add-on context): `Cisco Webex Add-on` (Splunkbase #5781), Cisco Spaces Add-On (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), calendar API integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: collaboration; **sourcetype**: webex:room_analytics. 
That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=collaboration, sourcetype=\"webex:room_analytics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **booked** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **occupied** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **no_show** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **early_release_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **early_release** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, room_name, building** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **no_show_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **early_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **wasted_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Meeting Room No-Show and Early Release Trending**): table _time, building, room_name, total_bookings, no_show_pct, early_pct, wasted_pct, avg_early_min\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (no-show % by room), Line chart (fleet-wide no-show rate trend over 90 days), Heatmap (room × day-of-week waste), Table (worst rooms with waste percentage), Single value (weekly wasted hours).",
              "script": "",
              "premium": "",
              "hw": "Cisco Webex Room Kit, Webex Board, Webex Desk Pro, Cisco Room Navigator, Cisco Meraki MV Smart Cameras",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch meeting rooms and devices, so empty bookings, bad gear, and license waste are visible before a big event fails.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki",
                "cisco_spaces",
                "cisco_webex"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.5.10",
              "n": "Meeting Room People Count vs Capacity Optimization",
              "c": "medium",
              "f": "intermediate",
              "v": "A 20-person boardroom consistently used by 2-person meetings represents a massive space efficiency failure. Conversely, 4-person huddle rooms packed with 8 people violate fire codes and degrade meeting quality. RoomOS people count data matched against room capacity enables evidence-based space optimization — converting underutilized large rooms into multiple smaller spaces, or adding capacity where demand exceeds supply — decisions worth millions in real estate savings.",
              "t": "`Cisco Webex Add-on` (Splunkbase #5781), Cisco Spaces Add-On (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=webex:room_analytics` (RoomAnalytics PeopleCount), `sourcetype=cisco:spaces:occupancy`",
              "q": "index=collaboration sourcetype=\"webex:room_analytics\" people_count=*\n| where people_count > 0\n| lookup room_inventory room_id OUTPUT room_name, capacity, room_type, building, floor\n| eval utilization_pct=round(people_count*100/capacity, 1)\n| eval size_match=case(\n    utilization_pct <= 25, \"Oversized (≤25%)\",\n    utilization_pct <= 50, \"Underutilized (25-50%)\",\n    utilization_pct <= 100, \"Right-sized (50-100%)\",\n    utilization_pct > 100, \"Overcrowded (>100%)\")\n| bin _time span=1d\n| stats avg(people_count) as avg_attendees, avg(utilization_pct) as avg_util, max(people_count) as peak_attendees, count as meeting_count by room_name, capacity, room_type, building, size_match\n| eval avg_attendees=round(avg_attendees, 1)\n| eval avg_util=round(avg_util, 1)\n| table building, room_name, room_type, capacity, avg_attendees, peak_attendees, avg_util, size_match, meeting_count\n| sort size_match, -meeting_count",
              "m": "Ingest RoomOS PeopleCount data via Webex device telemetry or Cisco Spaces API. Build a `room_inventory` lookup with room ID, name, capacity, type (huddle, conference, boardroom, training), building, and floor. Calculate utilization as people count divided by room capacity. Classify each meeting as oversized, underutilized, right-sized, or overcrowded. Aggregate over 30-90 days to identify persistent patterns (not single outliers). Generate monthly right-sizing recommendations: rooms consistently below 25% utilization are candidates for subdivision or repurposing. Rooms consistently overcrowded need capacity upgrades or booking restrictions. Feed findings into corporate real estate planning with cost per square foot context.",
              "z": "Scatter plot (avg attendees vs room capacity), Bar chart (room utilization by category), Table (rooms with optimization recommendations), Heatmap (building × floor utilization), Single value (fleet-wide average utilization %).",
              "kfp": "Low occupancy in secure rooms, executive floors, and highly booked rooms with hybrid attendance.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Webex Add-on` (Splunkbase #5781), Cisco Spaces Add-On (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=webex:room_analytics` (RoomAnalytics PeopleCount), `sourcetype=cisco:spaces:occupancy`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest RoomOS PeopleCount data via Webex device telemetry or Cisco Spaces API. Build a `room_inventory` lookup with room ID, name, capacity, type (huddle, conference, boardroom, training), building, and floor. Calculate utilization as people count divided by room capacity. Classify each meeting as oversized, underutilized, right-sized, or overcrowded. Aggregate over 30-90 days to identify persistent patterns (not single outliers). Generate monthly right-sizing recommendations: rooms consistently…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=collaboration sourcetype=\"webex:room_analytics\" people_count=*\n| where people_count > 0\n| lookup room_inventory room_id OUTPUT room_name, capacity, room_type, building, floor\n| eval utilization_pct=round(people_count*100/capacity, 1)\n| eval size_match=case(\n    utilization_pct <= 25, \"Oversized (≤25%)\",\n    utilization_pct <= 50, \"Underutilized (25-50%)\",\n    utilization_pct <= 100, \"Right-sized (50-100%)\",\n    utilization_pct > 100, \"Overcrowded (>100%)\")\n| bin _time span=1d\n| stats avg(people_count) as avg_attendees, avg(utilization_pct) as avg_util, max(people_count) as peak_attendees, count as meeting_count by room_name, capacity, room_type, building, size_match\n| eval avg_attendees=round(avg_attendees, 1)\n| eval avg_util=round(avg_util, 1)\n| table building, room_name, room_type, capacity, avg_attendees, peak_attendees, avg_util, size_match, meeting_count\n| sort size_match, -meeting_count\n```\n\nUnderstanding this SPL\n\n**Meeting Room People Count vs Capacity Optimization** — A 20-person boardroom consistently used by 2-person meetings represents a massive space efficiency failure. Conversely, 4-person huddle rooms packed with 8 people violate fire codes and degrade meeting quality. RoomOS people count data matched against room capacity enables evidence-based space optimization — converting underutilized large rooms into multiple smaller spaces, or adding capacity where demand exceeds supply — decisions worth millions in real estate savings.\n\nDocumented **Data sources**: `sourcetype=webex:room_analytics` (RoomAnalytics PeopleCount), `sourcetype=cisco:spaces:occupancy`. **App/TA** (typical add-on context): `Cisco Webex Add-on` (Splunkbase #5781), Cisco Spaces Add-On (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: collaboration; **sourcetype**: webex:room_analytics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=collaboration, sourcetype=\"webex:room_analytics\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where people_count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **size_match** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by room_name, capacity, room_type, building, size_match** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_attendees** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **avg_util** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Meeting Room People Count vs Capacity Optimization**): table building, room_name, room_type, capacity, avg_attendees, peak_attendees, avg_util, size_match, meeting_count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter plot (avg attendees vs room capacity), Bar chart (room utilization by category), Table (rooms with optimization recommendations), Heatmap (building × floor utilization), Single value (fleet-wide average utilization %).",
              "script": "",
              "premium": "",
              "hw": "Cisco Webex Room Kit, Webex Board, Webex Room Kit Mini, Webex Desk Pro, Cisco Meraki MV Smart Cameras",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch meeting rooms and devices, so empty bookings, bad gear, and license waste are visible before a big event fails.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki",
                "cisco_spaces",
                "cisco_webex"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.5.11",
              "n": "Meeting Room AV Equipment Health",
              "c": "high",
              "f": "intermediate",
              "v": "\"The screen doesn't work\" is the most common meeting room complaint. By monitoring display/projector power state, camera connectivity, microphone mute state at meeting start, speaker health, and peripheral connectivity via RoomOS xAPI status events, IT can detect and fix equipment failures before users encounter them. Proactive AV monitoring transforms reactive room support into preventive maintenance, reducing meeting disruptions and executive frustration.",
              "t": "`Cisco Webex Add-on` (Splunkbase #5781), RoomOS xAPI via Webex cloud telemetry",
              "d": "`sourcetype=webex:device` (status events), `sourcetype=webex:room_analytics` (peripheral status)",
              "q": "index=webex sourcetype=\"webex:device\"\n| eval issue=case(\n    like(display_status, \"%NotDetected%\") OR like(display_status, \"%Error%\"), \"Display Disconnected\",\n    like(camera_status, \"%NotConnected%\") OR camera_status==\"Error\", \"Camera Failure\",\n    like(microphone_status, \"%NotConnected%\"), \"Microphone Disconnected\",\n    like(speaker_status, \"%Error%\") OR like(speaker_status, \"%NotConnected%\"), \"Speaker Failure\",\n    like(usb_status, \"%Error%\"), \"USB Peripheral Error\",\n    hdmi_input_status==\"NoSignal\" AND display_status==\"Connected\", \"HDMI Input Lost\",\n    1==1, null())\n| where isnotnull(issue)\n| stats latest(_time) as last_reported, count as occurrences, values(issue) as issues by device_id, product, room_name, building\n| eval hours_since=round((now()-last_reported)/3600, 1)\n| eval priority=case(\n    mvcount(issues)>2, \"Critical - Multiple Failures\",\n    mvfind(issues, \"Display\")>=0 OR mvfind(issues, \"Camera\")>=0, \"High\",\n    1==1, \"Medium\")\n| sort priority, -occurrences\n| table room_name, building, product, issues, priority, occurrences, hours_since",
              "m": "Webex cloud telemetry provides device status updates including display connection state (via CEC/HDMI), camera availability, microphone mute state, speaker test results, and USB peripheral status through the RoomOS xAPI. Ingest via the Webex Add-on. Build a room equipment baseline that maps each room to its expected peripherals (e.g., Room Kit + 2 displays + ceiling mic + touch controller). Compare current status against baseline to detect missing or failed components. Alert facilities/AV team when any room has equipment issues, prioritized by room importance (executive rooms first). Schedule automated daily health checks during non-business hours. Track equipment failure patterns by product model to inform procurement decisions and warranty claims.",
              "z": "Status grid (room × equipment status — green/red), Table (rooms with active issues), Bar chart (failures by equipment type), Line chart (daily failure count trend), Single value (rooms with issues vs total rooms).",
              "kfp": "Firmware updates, cable swaps, and known vendor incidents with a published advisory.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Webex Add-on` (Splunkbase #5781), RoomOS xAPI via Webex cloud telemetry.\n• Ensure the following data sources are available: `sourcetype=webex:device` (status events), `sourcetype=webex:room_analytics` (peripheral status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nWebex cloud telemetry provides device status updates including display connection state (via CEC/HDMI), camera availability, microphone mute state, speaker test results, and USB peripheral status through the RoomOS xAPI. Ingest via the Webex Add-on. Build a room equipment baseline that maps each room to its expected peripherals (e.g., Room Kit + 2 displays + ceiling mic + touch controller). Compare current status against baseline to detect missing or failed components. Alert facilities/AV team w…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=webex sourcetype=\"webex:device\"\n| eval issue=case(\n    like(display_status, \"%NotDetected%\") OR like(display_status, \"%Error%\"), \"Display Disconnected\",\n    like(camera_status, \"%NotConnected%\") OR camera_status==\"Error\", \"Camera Failure\",\n    like(microphone_status, \"%NotConnected%\"), \"Microphone Disconnected\",\n    like(speaker_status, \"%Error%\") OR like(speaker_status, \"%NotConnected%\"), \"Speaker Failure\",\n    like(usb_status, \"%Error%\"), \"USB Peripheral Error\",\n    hdmi_input_status==\"NoSignal\" AND display_status==\"Connected\", \"HDMI Input Lost\",\n    1==1, null())\n| where isnotnull(issue)\n| stats latest(_time) as last_reported, count as occurrences, values(issue) as issues by device_id, product, room_name, building\n| eval hours_since=round((now()-last_reported)/3600, 1)\n| eval priority=case(\n    mvcount(issues)>2, \"Critical - Multiple Failures\",\n    mvfind(issues, \"Display\")>=0 OR mvfind(issues, \"Camera\")>=0, \"High\",\n    1==1, \"Medium\")\n| sort priority, -occurrences\n| table room_name, building, product, issues, priority, occurrences, hours_since\n```\n\nUnderstanding this SPL\n\n**Meeting Room AV Equipment Health** — \"The screen doesn't work\" is the most common meeting room complaint. By monitoring display/projector power state, camera connectivity, microphone mute state at meeting start, speaker health, and peripheral connectivity via RoomOS xAPI status events, IT can detect and fix equipment failures before users encounter them. Proactive AV monitoring transforms reactive room support into preventive maintenance, reducing meeting disruptions and executive frustration.\n\nDocumented **Data sources**: `sourcetype=webex:device` (status events), `sourcetype=webex:room_analytics` (peripheral status). **App/TA** (typical add-on context): `Cisco Webex Add-on` (Splunkbase #5781), RoomOS xAPI via Webex cloud telemetry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: webex; **sourcetype**: webex:device. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=webex, sourcetype=\"webex:device\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(issue)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by device_id, product, room_name, building** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **priority** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Meeting Room AV Equipment Health**): table room_name, building, product, issues, priority, occurrences, hours_since\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (room × equipment status — green/red), Table (rooms with active issues), Bar chart (failures by equipment type), Line chart (daily failure count trend), Single value (rooms with issues vs total rooms).",
              "script": "",
              "premium": "",
              "hw": "Cisco Webex Room Kit, Webex Room Kit Plus, Webex Board, Webex Desk Pro, Cisco Room Navigator, Cisco Quad Camera, Cisco SpeakerTrack 60",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch meeting rooms and devices, so empty bookings, bad gear, and license waste are visible before a big event fails.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "11.5.12",
              "n": "Digital Signage and Room Scheduler Device Health",
              "c": "medium",
              "f": "beginner",
              "v": "Webex-powered digital signage displays and room schedulers (Room Navigator panels mounted outside meeting rooms) are visible indicators of IT reliability. A blank Room Navigator outside a boardroom or a frozen digital signage screen in the lobby creates a poor impression for visitors and employees. Monitoring these devices for connectivity, content delivery, and responsiveness prevents embarrassing failures in high-visibility locations.",
              "t": "`Cisco Webex Add-on` (Splunkbase #5781)",
              "d": "`sourcetype=webex:device` (device status), `sourcetype=webex:room_analytics`",
              "q": "index=webex sourcetype=\"webex:device\" (product=\"Room Navigator\" OR mode=\"Signage\" OR mode=\"RoomScheduler\")\n| eval device_type=case(\n    mode==\"Signage\", \"Digital Signage\",\n    mode==\"RoomScheduler\" OR product==\"Room Navigator\", \"Room Scheduler\",\n    1==1, \"Other\")\n| eval healthy=if(connection_status==\"Connected\" AND health_state==\"ok\", 1, 0)\n| stats latest(connection_status) as status, latest(health_state) as health, latest(firmware_version) as firmware, latest(_time) as last_checkin by device_id, device_type, room_name, building\n| eval hours_since_checkin=round((now()-last_checkin)/3600, 1)\n| eval alert=case(\n    status!=\"Connected\", \"Offline\",\n    hours_since_checkin > 4, \"Stale - No Recent Check-in\",\n    health!=\"ok\", \"Health Warning\",\n    1==1, \"OK\")\n| where alert!=\"OK\"\n| table building, room_name, device_type, device_id, alert, status, health, hours_since_checkin, firmware\n| sort alert, building",
              "m": "Ingest Webex device telemetry for Room Navigator and signage-mode devices via the Webex Add-on. Room Navigators operate in RoomScheduler mode mounted outside meeting rooms, displaying availability and allowing booking via touch. Digital signage devices display content on lobby screens, wayfinding displays, or cafeteria menus. Monitor connection status (Connected/Disconnected), health state, and firmware version. Alert when any device goes offline for more than 30 minutes during business hours. Track stale devices that haven't checked in recently — these may have power issues or network disconnects that don't generate explicit offline events. Provide a daily health report grouped by building for facilities teams. Track firmware version compliance across the signage fleet.",
              "z": "Status grid (device × status), Table (offline/unhealthy devices), Pie chart (device type distribution), Single value (fleet online percentage), Bar chart (issues by building).",
              "kfp": "Offline signage for planned lobby refits, power work, and content pushes during store hours changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Webex Add-on` (Splunkbase #5781).\n• Ensure the following data sources are available: `sourcetype=webex:device` (device status), `sourcetype=webex:room_analytics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Webex device telemetry for Room Navigator and signage-mode devices via the Webex Add-on. Room Navigators operate in RoomScheduler mode mounted outside meeting rooms, displaying availability and allowing booking via touch. Digital signage devices display content on lobby screens, wayfinding displays, or cafeteria menus. Monitor connection status (Connected/Disconnected), health state, and firmware version. Alert when any device goes offline for more than 30 minutes during business hours. T…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=webex sourcetype=\"webex:device\" (product=\"Room Navigator\" OR mode=\"Signage\" OR mode=\"RoomScheduler\")\n| eval device_type=case(\n    mode==\"Signage\", \"Digital Signage\",\n    mode==\"RoomScheduler\" OR product==\"Room Navigator\", \"Room Scheduler\",\n    1==1, \"Other\")\n| eval healthy=if(connection_status==\"Connected\" AND health_state==\"ok\", 1, 0)\n| stats latest(connection_status) as status, latest(health_state) as health, latest(firmware_version) as firmware, latest(_time) as last_checkin by device_id, device_type, room_name, building\n| eval hours_since_checkin=round((now()-last_checkin)/3600, 1)\n| eval alert=case(\n    status!=\"Connected\", \"Offline\",\n    hours_since_checkin > 4, \"Stale - No Recent Check-in\",\n    health!=\"ok\", \"Health Warning\",\n    1==1, \"OK\")\n| where alert!=\"OK\"\n| table building, room_name, device_type, device_id, alert, status, health, hours_since_checkin, firmware\n| sort alert, building\n```\n\nUnderstanding this SPL\n\n**Digital Signage and Room Scheduler Device Health** — Webex-powered digital signage displays and room schedulers (Room Navigator panels mounted outside meeting rooms) are visible indicators of IT reliability. A blank Room Navigator outside a boardroom or a frozen digital signage screen in the lobby creates a poor impression for visitors and employees. Monitoring these devices for connectivity, content delivery, and responsiveness prevents embarrassing failures in high-visibility locations.\n\nDocumented **Data sources**: `sourcetype=webex:device` (device status), `sourcetype=webex:room_analytics`. **App/TA** (typical add-on context): `Cisco Webex Add-on` (Splunkbase #5781). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: webex; **sourcetype**: webex:device. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=webex, sourcetype=\"webex:device\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **device_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **healthy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by device_id, device_type, room_name, building** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since_checkin** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **alert** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where alert!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Digital Signage and Room Scheduler Device Health**): table building, room_name, device_type, device_id, alert, status, health, hours_since_checkin, firmware\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (device × status), Table (offline/unhealthy devices), Pie chart (device type distribution), Single value (fleet online percentage), Bar chart (issues by building).",
              "script": "",
              "premium": "",
              "hw": "Cisco Room Navigator (room scheduling mode), Cisco Webex Board (signage mode), third-party Webex-compatible displays",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch meeting rooms and devices, so empty bookings, bad gear, and license waste are visible before a big event fails.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 12,
            "none": 0
          }
        }
      ],
      "i": 11,
      "n": "Email & Collaboration",
      "src": "cat-11-email-collaboration.md"
    },
    {
      "s": [
        {
          "i": "12.1",
          "n": "Source Control",
          "u": [
            {
              "i": "12.1.1",
              "n": "Commit Activity Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Commit velocity indicates team productivity and project health. Drops may signal blockers; spikes may precede release issues.",
              "t": "GitHub webhook, custom API input",
              "d": "Git webhook events (push), GitHub/GitLab API",
              "q": "index=devops sourcetype=\"github:webhook\" event=\"push\"\n| timechart span=1d count as commits by repository",
              "m": "Configure GitHub/GitLab webhooks to send push events to Splunk HEC. Parse repository, author, branch, and commit count. Track trends per team and repository. Report on developer activity metrics.",
              "z": "Line chart (commits over time), Bar chart (commits by repo), Stacked area (commits by team).",
              "kfp": "Spikes in commit volume during crunch weeks, new team onboarding, or monorepo migrations; dips during holidays or org freezes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub webhook, custom API input.\n• Ensure the following data sources are available: Git webhook events (push), GitHub/GitLab API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure GitHub/GitLab webhooks to send push events to Splunk HEC. Parse repository, author, branch, and commit count. Track trends per team and repository. Report on developer activity metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:webhook\" event=\"push\"\n| timechart span=1d count as commits by repository\n```\n\nUnderstanding this SPL\n\n**Commit Activity Trending** — Commit velocity indicates team productivity and project health. Drops may signal blockers; spikes may precede release issues.\n\nDocumented **Data sources**: Git webhook events (push), GitHub/GitLab API. **App/TA** (typical add-on context): GitHub webhook, custom API input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:webhook. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:webhook\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by repository** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (commits over time), Bar chart (commits by repo), Stacked area (commits by team).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how often code lands in your repositories so we notice unusual slows or surges that can affect release risk and planning.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.2",
              "n": "Branch Protection Bypasses",
              "c": "critical",
              "f": "intermediate",
              "v": "Direct pushes to protected branches bypass code review, introducing unreviewed code to production. Detection ensures process compliance.",
              "t": "GitHub audit log, GitLab API",
              "d": "GitHub/GitLab audit log, push events to protected branches",
              "q": "index=devops sourcetype=\"github:audit\" action=\"protected_branch.policy_override\"\n| table _time, actor, repo, branch, action",
              "m": "Ingest GitHub/GitLab audit logs. Alert on any push to protected branches (main, release) without PR merge. Alert on branch protection rule changes. Correlate with deployment events.",
              "z": "Table (bypass events), Timeline (protection violations), Single value (bypasses this month — target: 0).",
              "kfp": "Bypasses from emergency hotfixes, planned admin pushes during release freezes, or repo migration tasks.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub audit log, GitLab API.\n• Ensure the following data sources are available: GitHub/GitLab audit log, push events to protected branches.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest GitHub/GitLab audit logs. Alert on any push to protected branches (main, release) without PR merge. Alert on branch protection rule changes. Correlate with deployment events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:audit\" action=\"protected_branch.policy_override\"\n| table _time, actor, repo, branch, action\n```\n\nUnderstanding this SPL\n\n**Branch Protection Bypasses** — Direct pushes to protected branches bypass code review, introducing unreviewed code to production. Detection ensures process compliance.\n\nDocumented **Data sources**: GitHub/GitLab audit log, push events to protected branches. **App/TA** (typical add-on context): GitHub audit log, GitLab API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Branch Protection Bypasses**): table _time, actor, repo, branch, action\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (bypass events), Timeline (protection violations), Single value (bypasses this month — target: 0).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for changes that skip the normal review path on important branches, because that is when risky or unreviewed code can reach production.",
              "wv": "crawl",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github",
                "gitlab"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "12.1.3",
              "n": "Pull Request Metrics",
              "c": "medium",
              "f": "beginner",
              "v": "PR cycle time affects development velocity. Long review times indicate bottlenecks; abandoned PRs indicate scope or alignment issues.",
              "t": "GitHub API input",
              "d": "PR events (opened, reviewed, merged, closed)",
              "q": "index=devops sourcetype=\"github:pull_request\" action=\"closed\" merged=\"true\"\n| eval cycle_hours=round((merged_at_epoch-created_at_epoch)/3600,1)\n| stats avg(cycle_hours) as avg_cycle, median(cycle_hours) as median_cycle by repository\n| sort -avg_cycle",
              "m": "Ingest PR lifecycle events. Calculate open-to-merge time, review cycles, and abandonment rates. Track per repository and team. Report on engineering efficiency metrics. Identify bottleneck reviewers.",
              "z": "Bar chart (avg cycle time by repo), Line chart (PR metrics trend), Table (PR summary), Histogram (cycle time distribution).",
              "kfp": "PR cycle times vary with review policy, team load, and scope changes; holiday freezes and end-of-quarter pushes shift baselines.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub API input.\n• Ensure the following data sources are available: PR events (opened, reviewed, merged, closed).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest PR lifecycle events. Calculate open-to-merge time, review cycles, and abandonment rates. Track per repository and team. Report on engineering efficiency metrics. Identify bottleneck reviewers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:pull_request\" action=\"closed\" merged=\"true\"\n| eval cycle_hours=round((merged_at_epoch-created_at_epoch)/3600,1)\n| stats avg(cycle_hours) as avg_cycle, median(cycle_hours) as median_cycle by repository\n| sort -avg_cycle\n```\n\nUnderstanding this SPL\n\n**Pull Request Metrics** — PR cycle time affects development velocity. Long review times indicate bottlenecks; abandoned PRs indicate scope or alignment issues.\n\nDocumented **Data sources**: PR events (opened, reviewed, merged, closed). **App/TA** (typical add-on context): GitHub API input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:pull_request. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:pull_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cycle_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by repository** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (avg cycle time by repo), Line chart (PR metrics trend), Table (PR summary), Histogram (cycle time distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how pull requests flow so we see review health, queue time, and where work might be stuck.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.4",
              "n": "Secret Exposure Detection",
              "c": "critical",
              "f": "beginner",
              "v": "Secrets committed to source control are immediately compromised. Detection within minutes enables rapid rotation before exploitation.",
              "t": "GitGuardian webhook, GitHub secret scanning",
              "d": "Pre-commit hook results, GitGuardian/GitHub secret scanning alerts",
              "q": "index=devops sourcetype=\"github:secret_scanning\" OR sourcetype=\"gitguardian:alert\"\n| table _time, repository, secret_type, file_path, author, status\n| sort -_time",
              "m": "Enable GitHub secret scanning or deploy GitGuardian. Forward alerts to Splunk. Alert at critical priority on any secret detection. Track remediation time (rotation). Report on secret types and recurrence.",
              "z": "Table (exposed secrets), Single value (unresolved secrets — target: 0), Bar chart (secrets by type), Timeline (detection events).",
              "kfp": "Secrets found may be test fixtures, mock data, or rotated credentials in old commits being remediated.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitGuardian webhook, GitHub secret scanning.\n• Ensure the following data sources are available: Pre-commit hook results, GitGuardian/GitHub secret scanning alerts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable GitHub secret scanning or deploy GitGuardian. Forward alerts to Splunk. Alert at critical priority on any secret detection. Track remediation time (rotation). Report on secret types and recurrence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:secret_scanning\" OR sourcetype=\"gitguardian:alert\"\n| table _time, repository, secret_type, file_path, author, status\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Secret Exposure Detection** — Secrets committed to source control are immediately compromised. Detection within minutes enables rapid rotation before exploitation.\n\nDocumented **Data sources**: Pre-commit hook results, GitGuardian/GitHub secret scanning alerts. **App/TA** (typical add-on context): GitGuardian webhook, GitHub secret scanning. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:secret_scanning, gitguardian:alert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:secret_scanning\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Secret Exposure Detection**): table _time, repository, secret_type, file_path, author, status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (exposed secrets), Single value (unresolved secrets — target: 0), Bar chart (secrets by type), Timeline (detection events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the code and scan results for leaked keys and passwords so we can rotate them before they are abused.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.13",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of EU CRA Art.13 (Obligations of manufacturers) — Splunk UC-12.1.4: Secret Exposure Detection.",
                  "ea": "Saved search 'UC-12.1.4' running on Pre-commit hook results, GitGuardian/GitHub secret scanning alerts, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2024/2847/oj"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "09.aa",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of HITRUST 09.aa (Audit logging) — Splunk UC-12.1.4: Secret Exposure Detection.",
                  "ea": "Saved search 'UC-12.1.4' running on Pre-commit hook results, GitGuardian/GitHub secret scanning alerts, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                },
                {
                  "r": "SAMA CSF",
                  "v": "v1.0 (2017)",
                  "cl": "3.3.5",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of SAMA CSF 3.3.5 (Security monitoring) — Splunk UC-12.1.4: Secret Exposure Detection.",
                  "ea": "Saved search 'UC-12.1.4' running on Pre-commit hook results, GitGuardian/GitHub secret scanning alerts, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.sama.gov.sa/en-US/Laws/BankingRules/SAMA%20Cyber%20Security%20Framework.pdf"
                }
              ],
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "12.1.5",
              "n": "Repository Access Audit",
              "c": "high",
              "f": "beginner",
              "v": "Repository permission changes can expose source code to unauthorized users. Audit trail supports IP protection and compliance.",
              "t": "GitHub audit log",
              "d": "GitHub/GitLab audit log (permission events)",
              "q": "index=devops sourcetype=\"github:audit\" action IN (\"repo.add_member\",\"repo.remove_member\",\"repo.update_member\")\n| table _time, actor, repo, user, permission, action",
              "m": "Ingest organization audit log. Track member additions, removals, and permission changes. Alert on permission escalation to admin. Report on repository access patterns for periodic access review.",
              "z": "Table (access changes), Timeline (permission events), Bar chart (changes by actor).",
              "kfp": "Activity shifts during holidays, release freezes, or vendor maintenance; confirm against your change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub audit log.\n• Ensure the following data sources are available: GitHub/GitLab audit log (permission events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest organization audit log. Track member additions, removals, and permission changes. Alert on permission escalation to admin. Report on repository access patterns for periodic access review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:audit\" action IN (\"repo.add_member\",\"repo.remove_member\",\"repo.update_member\")\n| table _time, actor, repo, user, permission, action\n```\n\nUnderstanding this SPL\n\n**Repository Access Audit** — Repository permission changes can expose source code to unauthorized users. Audit trail supports IP protection and compliance.\n\nDocumented **Data sources**: GitHub/GitLab audit log (permission events). **App/TA** (typical add-on context): GitHub audit log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Repository Access Audit**): table _time, actor, repo, user, permission, action\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (access changes), Timeline (permission events), Bar chart (changes by actor).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who can change repository settings and membership so privilege drift does not slip past your team.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.6",
              "n": "Force Push Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Force pushes overwrite git history, potentially destroying code and audit trails. Detection ensures accountability.",
              "t": "GitHub webhook",
              "d": "Git push events (forced flag)",
              "q": "index=devops sourcetype=\"github:webhook\" event=\"push\" forced=\"true\"\n| table _time, repository, ref, pusher, forced",
              "m": "Parse force push flag from webhook events. Alert on any force push to shared branches. Whitelist expected force pushes (e.g., squash-merge workflows on feature branches). Track frequency per developer.",
              "z": "Table (force push events), Timeline (force pushes), Single value (force pushes this week).",
              "kfp": "Force pushes from approved repo cleanup, history rewrites for license remediation, or scheduled rebases on long-running branches.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub webhook.\n• Ensure the following data sources are available: Git push events (forced flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse force push flag from webhook events. Alert on any force push to shared branches. Whitelist expected force pushes (e.g., squash-merge workflows on feature branches). Track frequency per developer.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:webhook\" event=\"push\" forced=\"true\"\n| table _time, repository, ref, pusher, forced\n```\n\nUnderstanding this SPL\n\n**Force Push Detection** — Force pushes overwrite git history, potentially destroying code and audit trails. Detection ensures accountability.\n\nDocumented **Data sources**: Git push events (forced flag). **App/TA** (typical add-on context): GitHub webhook. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:webhook. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:webhook\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Force Push Detection**): table _time, repository, ref, pusher, forced\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (force push events), Timeline (force pushes), Single value (force pushes this week).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for history rewrites on protected branches so we know when someone may have removed audit history or forced code in without the usual checks.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.7",
              "n": "GitHub Actions Workflow Run Time Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Workflow duration growth over time indicates growing tech debt, resource constraints, or inefficient pipeline design. Early detection enables optimization before velocity degrades.",
              "t": "Custom (GitHub API)",
              "d": "GitHub /repos/:owner/:repo/actions/runs",
              "q": "index=devops sourcetype=\"github:actions_run\"\n| eval duration_min=round((updated_at_epoch-run_started_at_epoch)/60,1)\n| timechart span=1d avg(duration_min) as avg_duration, median(duration_min) as median_duration by workflow_name\n| where avg_duration > 0",
              "m": "Poll GitHub Actions API for workflow runs. Ingest run metadata (workflow_name, status, run_started_at, updated_at) to Splunk HEC. Calculate duration per run. Track 7-day and 30-day rolling averages. Alert when avg duration increases >20% week-over-week. Correlate with runner capacity and job concurrency.",
              "z": "Line chart (workflow duration trend by workflow), Bar chart (avg duration by workflow), Table (slowest workflows this week), Single value (p95 duration).",
              "kfp": "Activity shifts during holidays, release freezes, or vendor maintenance; confirm against your change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (GitHub API).\n• Ensure the following data sources are available: GitHub /repos/:owner/:repo/actions/runs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll GitHub Actions API for workflow runs. Ingest run metadata (workflow_name, status, run_started_at, updated_at) to Splunk HEC. Calculate duration per run. Track 7-day and 30-day rolling averages. Alert when avg duration increases >20% week-over-week. Correlate with runner capacity and job concurrency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:actions_run\"\n| eval duration_min=round((updated_at_epoch-run_started_at_epoch)/60,1)\n| timechart span=1d avg(duration_min) as avg_duration, median(duration_min) as median_duration by workflow_name\n| where avg_duration > 0\n```\n\nUnderstanding this SPL\n\n**GitHub Actions Workflow Run Time Trending** — Workflow duration growth over time indicates growing tech debt, resource constraints, or inefficient pipeline design. Early detection enables optimization before velocity degrades.\n\nDocumented **Data sources**: GitHub /repos/:owner/:repo/actions/runs. **App/TA** (typical add-on context): Custom (GitHub API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:actions_run. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:actions_run\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by workflow_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_duration > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (workflow duration trend by workflow), Bar chart (avg duration by workflow), Table (slowest workflows this week), Single value (p95 duration).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch pipeline time and spend so surprise cost or slowdowns do not derail delivery.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.8",
              "n": "GitHub Actions Billing Usage",
              "c": "medium",
              "f": "beginner",
              "v": "Approaching included minutes quota risks unexpected overages or workflow throttling. Proactive monitoring prevents billing surprises and supports capacity planning.",
              "t": "Custom (GitHub API)",
              "d": "GitHub /orgs/:org/settings/billing/actions",
              "q": "index=devops sourcetype=\"github:actions_billing\"\n| eval pct_used=round(total_minutes_used/total_minutes_included*100,1)\n| where pct_used > 70\n| table _time, org, total_minutes_used, total_minutes_included, pct_used, total_paid_minutes_used",
              "m": "Poll GitHub billing API (requires org admin scope). Ingest minutes used vs. included per billing cycle. Alert at 70%, 85%, and 95% of included minutes. Track paid minutes consumption. Report on usage by repository and workflow for optimization.",
              "z": "Gauge (% of included minutes used), Single value (minutes remaining), Line chart (usage trend over billing cycle), Table (top consuming repos).",
              "kfp": "Activity shifts during holidays, release freezes, or vendor maintenance; confirm against your change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (GitHub API).\n• Ensure the following data sources are available: GitHub /orgs/:org/settings/billing/actions.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll GitHub billing API (requires org admin scope). Ingest minutes used vs. included per billing cycle. Alert at 70%, 85%, and 95% of included minutes. Track paid minutes consumption. Report on usage by repository and workflow for optimization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:actions_billing\"\n| eval pct_used=round(total_minutes_used/total_minutes_included*100,1)\n| where pct_used > 70\n| table _time, org, total_minutes_used, total_minutes_included, pct_used, total_paid_minutes_used\n```\n\nUnderstanding this SPL\n\n**GitHub Actions Billing Usage** — Approaching included minutes quota risks unexpected overages or workflow throttling. Proactive monitoring prevents billing surprises and supports capacity planning.\n\nDocumented **Data sources**: GitHub /orgs/:org/settings/billing/actions. **App/TA** (typical add-on context): Custom (GitHub API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:actions_billing. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:actions_billing\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pct_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct_used > 70` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GitHub Actions Billing Usage**): table _time, org, total_minutes_used, total_minutes_included, pct_used, total_paid_minutes_used\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (% of included minutes used), Single value (minutes remaining), Line chart (usage trend over billing cycle), Table (top consuming repos).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch pipeline time and spend so surprise cost or slowdowns do not derail delivery.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.9",
              "n": "Branch Protection Bypass Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Complements UC-12.1.2 by correlating audit `protected_branch.policy_override` with push events for investigation packs.",
              "t": "GitHub audit log, webhook HEC",
              "d": "`github:audit` policy_override, `github:webhook` push",
              "q": "index=devops sourcetype=\"github:audit\" action=\"protected_branch.policy_override\"\n| join type=left max=1 repo [search index=devops sourcetype=\"github:webhook\" event=\"push\" | fields repository, ref, pusher, _time]\n| table _time, actor, repo, branch, action, pusher",
              "m": "Require GitHub Advanced Security audit stream. Alert on any override; require VP approval ticket in lookup. Weekly report of overrides vs zero target.",
              "z": "Table (overrides), Timeline, Single value (count — target 0).",
              "kfp": "Bypasses from emergency hotfixes, planned admin pushes during release freezes, or repo migration tasks.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub audit log, webhook HEC.\n• Ensure the following data sources are available: `github:audit` policy_override, `github:webhook` push.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequire GitHub Advanced Security audit stream. Alert on any override; require VP approval ticket in lookup. Weekly report of overrides vs zero target.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:audit\" action=\"protected_branch.policy_override\"\n| join type=left max=1 repo [search index=devops sourcetype=\"github:webhook\" event=\"push\" | fields repository, ref, pusher, _time]\n| table _time, actor, repo, branch, action, pusher\n```\n\nUnderstanding this SPL\n\n**Branch Protection Bypass Detection** — Complements UC-12.1.2 by correlating audit `protected_branch.policy_override` with push events for investigation packs.\n\nDocumented **Data sources**: `github:audit` policy_override, `github:webhook` push. **App/TA** (typical add-on context): GitHub audit log, webhook HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Branch Protection Bypass Detection**): table _time, actor, repo, branch, action, pusher\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overrides), Timeline, Single value (count — target 0).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for changes that skip the normal review path on important branches, because that is when risky or unreviewed code can reach production.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.10",
              "n": "Force Push to Protected Branches",
              "c": "critical",
              "f": "beginner",
              "v": "Narrows UC-12.1.6 to default/release branches where history rewrite has highest impact.",
              "t": "GitHub/GitLab webhooks",
              "d": "Push events with `forced=true` and `ref` matching protected branch patterns",
              "q": "index=devops sourcetype=\"github:webhook\" event=\"push\" forced=\"true\"\n| where match(ref,\"refs/heads/(main|master|release/.*|production)\")\n| table _time, repository, ref, pusher, forced\n| sort -_time",
              "m": "Maintain lookup of protected ref regex per org. Page on-call for production branch force pushes. Exclude documented release bot service accounts.",
              "z": "Table (force pushes), Timeline, Bar chart (by branch).",
              "kfp": "Force pushes from approved repo cleanup, history rewrites for license remediation, or scheduled rebases on long-running branches.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub/GitLab webhooks.\n• Ensure the following data sources are available: Push events with `forced=true` and `ref` matching protected branch patterns.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain lookup of protected ref regex per org. Page on-call for production branch force pushes. Exclude documented release bot service accounts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:webhook\" event=\"push\" forced=\"true\"\n| where match(ref,\"refs/heads/(main|master|release/.*|production)\")\n| table _time, repository, ref, pusher, forced\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Force Push to Protected Branches** — Narrows UC-12.1.6 to default/release branches where history rewrite has highest impact.\n\nDocumented **Data sources**: Push events with `forced=true` and `ref` matching protected branch patterns. **App/TA** (typical add-on context): GitHub/GitLab webhooks. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:webhook. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:webhook\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(ref,\"refs/heads/(main|master|release/.*|production)\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Force Push to Protected Branches**): table _time, repository, ref, pusher, forced\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (force pushes), Timeline, Bar chart (by branch).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for history rewrites on protected branches so we know when someone may have removed audit history or forced code in without the usual checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github",
                "gitlab"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.11",
              "n": "Sensitive File Commit Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Paths such as `.env`, `id_rsa`, `kubeconfig`, and `*.pem` in commits indicate credential sprawl even before secret scanning fires.",
              "t": "GitHub `push` webhook payload (commits[].modified/added)",
              "d": "Push webhook with file path arrays, or GitHub compare API",
              "q": "index=devops sourcetype=\"github:webhook\" event=\"push\"\n| mvexpand commits{}.modified limit=500\n| where match(commits{}.modified,\"(\\.env|id_rsa|kubeconfig|\\.pem|credentials\\.xml)$\")\n| table _time, repository, commits{}.author.username, commits{}.modified",
              "m": "Expand commit file lists in ingestion. Alert on first match; auto-open rotation ticket. Pair with secret scanning (UC-12.1.4).",
              "z": "Table (sensitive paths), Bar chart (repos), Timeline.",
              "kfp": "Activity shifts during holidays, release freezes, or vendor maintenance; confirm against your change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub `push` webhook payload (commits[].modified/added).\n• Ensure the following data sources are available: Push webhook with file path arrays, or GitHub compare API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExpand commit file lists in ingestion. Alert on first match; auto-open rotation ticket. Pair with secret scanning (UC-12.1.4).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:webhook\" event=\"push\"\n| mvexpand commits{}.modified limit=500\n| where match(commits{}.modified,\"(\\.env|id_rsa|kubeconfig|\\.pem|credentials\\.xml)$\")\n| table _time, repository, commits{}.author.username, commits{}.modified\n```\n\nUnderstanding this SPL\n\n**Sensitive File Commit Detection** — Paths such as `.env`, `id_rsa`, `kubeconfig`, and `*.pem` in commits indicate credential sprawl even before secret scanning fires.\n\nDocumented **Data sources**: Push webhook with file path arrays, or GitHub compare API. **App/TA** (typical add-on context): GitHub `push` webhook payload (commits[].modified/added). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:webhook. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:webhook\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Filters the current rows with `where match(commits{}.modified,\"(\\.env|id_rsa|kubeconfig|\\.pem|credentials\\.xml)$\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Sensitive File Commit Detection**): table _time, repository, commits{}.author.username, commits{}.modified\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sensitive paths), Bar chart (repos), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.12",
              "n": "Repository Permission Changes",
              "c": "high",
              "f": "beginner",
              "v": "Team `maintain`/`admin` grants and visibility changes to `public` are higher risk than member adds (UC-12.1.5).",
              "t": "GitHub audit log",
              "d": "`repo.update`, `team.add_repository`, `org.update_member`",
              "q": "index=devops sourcetype=\"github:audit\" action IN (\"repo.update\",\"team.add_repository\",\"repo.access\")\n| where visibility=\"public\" OR permission IN (\"admin\",\"maintain\")\n| table _time, actor, repo, action, permission, visibility",
              "m": "Alert on public visibility toggles and admin grants. Quarterly access review export.",
              "z": "Table (high-risk changes), Timeline, Single value (public repos).",
              "kfp": "Activity shifts during holidays, release freezes, or vendor maintenance; confirm against your change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub audit log.\n• Ensure the following data sources are available: `repo.update`, `team.add_repository`, `org.update_member`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on public visibility toggles and admin grants. Quarterly access review export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:audit\" action IN (\"repo.update\",\"team.add_repository\",\"repo.access\")\n| where visibility=\"public\" OR permission IN (\"admin\",\"maintain\")\n| table _time, actor, repo, action, permission, visibility\n```\n\nUnderstanding this SPL\n\n**Repository Permission Changes** — Team `maintain`/`admin` grants and visibility changes to `public` are higher risk than member adds (UC-12.1.5).\n\nDocumented **Data sources**: `repo.update`, `team.add_repository`, `org.update_member`. **App/TA** (typical add-on context): GitHub audit log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where visibility=\"public\" OR permission IN (\"admin\",\"maintain\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Repository Permission Changes**): table _time, actor, repo, action, permission, visibility\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (high-risk changes), Timeline, Single value (public repos).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch who can change repository settings and membership so privilege drift does not slip past your team.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.13",
              "n": "PR Review Bypass Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Merges with zero approvals or with only self-approval violate four-eyes policies.",
              "t": "GitHub GraphQL / pull_request webhook",
              "d": "`pull_request` `closed` merged=true with `review_count`, `merge_method`",
              "q": "index=devops sourcetype=\"github:pull_request\" action=\"closed\" merged=\"true\"\n| where review_count=0 OR (review_count=1 AND merger=author)\n| search base_ref IN (\"main\",\"master\",\"production\")\n| table _time, repository, author, merger, review_count, base_ref",
              "m": "Ingest PR merged payload with review tally from API enrichment. Exclude bots via label. Alert in Splunk as backstop to branch protection.",
              "z": "Table (bypass merges), Bar chart (by author), Timeline.",
              "kfp": "Activity shifts during holidays, release freezes, or vendor maintenance; confirm against your change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub GraphQL / pull_request webhook.\n• Ensure the following data sources are available: `pull_request` `closed` merged=true with `review_count`, `merge_method`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest PR merged payload with review tally from API enrichment. Exclude bots via label. Alert in Splunk as backstop to branch protection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:pull_request\" action=\"closed\" merged=\"true\"\n| where review_count=0 OR (review_count=1 AND merger=author)\n| search base_ref IN (\"main\",\"master\",\"production\")\n| table _time, repository, author, merger, review_count, base_ref\n```\n\nUnderstanding this SPL\n\n**PR Review Bypass Detection** — Merges with zero approvals or with only self-approval violate four-eyes policies.\n\nDocumented **Data sources**: `pull_request` `closed` merged=true with `review_count`, `merge_method`. **App/TA** (typical add-on context): GitHub GraphQL / pull_request webhook. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:pull_request. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:pull_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where review_count=0 OR (review_count=1 AND merger=author)` — typically the threshold or rule expression for this monitoring goal.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **PR Review Bypass Detection**): table _time, repository, author, merger, review_count, base_ref\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (bypass merges), Bar chart (by author), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.14",
              "n": "Fork Network Suspicious Activity",
              "c": "high",
              "f": "advanced",
              "v": "Sudden stars/forks from new accounts or geo clusters can precede supply-chain attacks or leaked-token cloning.",
              "t": "GitHub audit + Events API",
              "d": "`fork`, `create` events; `WatchEvent` bursts",
              "q": "index=devops sourcetype=\"github:meta\" event_type IN (\"ForkEvent\",\"WatchEvent\")\n| bin _time span=1h\n| stats dc(actor_id) as unique_actors, count by repo_name, _time\n| where unique_actors > 50 OR count > 200\n| sort -count",
              "m": "Ingest public repo events for crown-jewel repositories. Correlate with token scope changes. Feed threat intel on abusive ASNs.",
              "z": "Line chart (fork/star rate), Table (spikes), Geo map (actor country).",
              "kfp": "Activity shifts during holidays, release freezes, or vendor maintenance; confirm against your change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub audit + Events API.\n• Ensure the following data sources are available: `fork`, `create` events; `WatchEvent` bursts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest public repo events for crown-jewel repositories. Correlate with token scope changes. Feed threat intel on abusive ASNs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:meta\" event_type IN (\"ForkEvent\",\"WatchEvent\")\n| bin _time span=1h\n| stats dc(actor_id) as unique_actors, count by repo_name, _time\n| where unique_actors > 50 OR count > 200\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Fork Network Suspicious Activity** — Sudden stars/forks from new accounts or geo clusters can precede supply-chain attacks or leaked-token cloning.\n\nDocumented **Data sources**: `fork`, `create` events; `WatchEvent` bursts. **App/TA** (typical add-on context): GitHub audit + Events API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:meta. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:meta\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by repo_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_actors > 50 OR count > 200` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (fork/star rate), Table (spikes), Geo map (actor country).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.15",
              "n": "CODEOWNERS File Modification Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Attackers may weaken review requirements by editing `CODEOWNERS` or `.github/CODEOWNERS`.",
              "t": "GitHub pull_request webhook",
              "d": "PR `files[]` paths",
              "q": "index=devops sourcetype=\"github:pull_request\" action IN (\"opened\",\"synchronize\")\n| where mvjoin(commit_files{},\",\") LIKE \"%CODEOWNERS%\"\n| table _time, repository, author, title, commit_files",
              "m": "Parse file lists from PR payloads. Require CODEOWNER approval for CODEOWNERS changes via branch rules; alert on any edit pending rule rollout.",
              "z": "Table (PRs touching CODEOWNERS), Timeline, Single value (changes per quarter).",
              "kfp": "Activity shifts during holidays, release freezes, or vendor maintenance; confirm against your change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub pull_request webhook.\n• Ensure the following data sources are available: PR `files[]` paths.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse file lists from PR payloads. Require CODEOWNER approval for CODEOWNERS changes via branch rules; alert on any edit pending rule rollout.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:pull_request\" action IN (\"opened\",\"synchronize\")\n| where mvjoin(commit_files{},\",\") LIKE \"%CODEOWNERS%\"\n| table _time, repository, author, title, commit_files\n```\n\nUnderstanding this SPL\n\n**CODEOWNERS File Modification Monitoring** — Attackers may weaken review requirements by editing `CODEOWNERS` or `.github/CODEOWNERS`.\n\nDocumented **Data sources**: PR `files[]` paths. **App/TA** (typical add-on context): GitHub pull_request webhook. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:pull_request. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:pull_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where mvjoin(commit_files{},\",\") LIKE \"%CODEOWNERS%\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CODEOWNERS File Modification Monitoring**): table _time, repository, author, title, commit_files\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (PRs touching CODEOWNERS), Timeline, Single value (changes per quarter).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.16",
              "n": "Large File Commit Detection",
              "c": "medium",
              "f": "beginner",
              "v": "Large blobs bloat repos, bypass review in diff viewers, and may hide embedded data.",
              "t": "GitHub `push` hook with size, Git LFS audit",
              "d": "Commit statistics (`size`, `distinct_size`), LFS upload events",
              "q": "index=devops sourcetype=\"github:webhook\" event=\"push\"\n| where commits{}.size > 5242880 OR commits{}.distinct_size > 5242880\n| table _time, repository, commits{}.id, commits{}.size, pusher",
              "m": "Threshold 5MB default; tune per repo. Require LFS for binaries. Alert on repeated violations.",
              "z": "Table (large commits), Bar chart (by repo), Timeline.",
              "kfp": "Activity shifts during holidays, release freezes, or vendor maintenance; confirm against your change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub `push` hook with size, Git LFS audit.\n• Ensure the following data sources are available: Commit statistics (`size`, `distinct_size`), LFS upload events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nThreshold 5MB default; tune per repo. Require LFS for binaries. Alert on repeated violations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:webhook\" event=\"push\"\n| where commits{}.size > 5242880 OR commits{}.distinct_size > 5242880\n| table _time, repository, commits{}.id, commits{}.size, pusher\n```\n\nUnderstanding this SPL\n\n**Large File Commit Detection** — Large blobs bloat repos, bypass review in diff viewers, and may hide embedded data.\n\nDocumented **Data sources**: Commit statistics (`size`, `distinct_size`), LFS upload events. **App/TA** (typical add-on context): GitHub `push` hook with size, Git LFS audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:webhook. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:webhook\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where commits{}.size > 5242880 OR commits{}.distinct_size > 5242880` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Large File Commit Detection**): table _time, repository, commits{}.id, commits{}.size, pusher\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (large commits), Bar chart (by repo), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.17",
              "n": "Signed Commit Enforcement",
              "c": "high",
              "f": "intermediate",
              "v": "Unsigned commits on protected branches weaken non-repudiation; verify GPG/SSH sig status.",
              "t": "GitHub commit signature API, push webhook enrichment",
              "d": "Commit `verification.status` != `verified`",
              "q": "index=devops sourcetype=\"github:commit_status\"\n| where verification_status!=\"verified\" AND branch IN (\"main\",\"master\",\"production\")\n| stats count by repository, author, verification_status\n| sort -count",
              "m": "Enrich push SHAs via API in pipeline. Alert on unverified commits after enforcement date. Whitelist bots with documented keys.",
              "z": "Table (unsigned commits), Line chart (compliance %), Pie chart (verified vs not).",
              "kfp": "Activity shifts during holidays, release freezes, or vendor maintenance; confirm against your change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub commit signature API, push webhook enrichment.\n• Ensure the following data sources are available: Commit `verification.status` != `verified`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnrich push SHAs via API in pipeline. Alert on unverified commits after enforcement date. Whitelist bots with documented keys.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:commit_status\"\n| where verification_status!=\"verified\" AND branch IN (\"main\",\"master\",\"production\")\n| stats count by repository, author, verification_status\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Signed Commit Enforcement** — Unsigned commits on protected branches weaken non-repudiation; verify GPG/SSH sig status.\n\nDocumented **Data sources**: Commit `verification.status` != `verified`. **App/TA** (typical add-on context): GitHub commit signature API, push webhook enrichment. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:commit_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:commit_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where verification_status!=\"verified\" AND branch IN (\"main\",\"master\",\"production\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by repository, author, verification_status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unsigned commits), Line chart (compliance %), Pie chart (verified vs not).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.18",
              "n": "Stale Branch Cleanup Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Long-lived branches accumulate merge debt and stale code; tracking supports automated deletion policies.",
              "t": "GitHub GraphQL / repos API",
              "d": "Branch `updated_at`, open PR linkage",
              "q": "index=devops sourcetype=\"github:branch_inventory\"\n| eval age_days=round((now()-strptime(updated_at,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,1)\n| where age_days > 180 AND protected=\"false\"\n| stats max(age_days) as max_age by repository, ref\n| sort -max_age",
              "m": "Nightly job lists branches. Join with Jira for linked tickets. Auto-PR stale branch report to teams channel.",
              "z": "Table (stale branches), Bar chart (count by repo), Single value (branches >180d).",
              "kfp": "Activity shifts during holidays, release freezes, or vendor maintenance; confirm against your change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub GraphQL / repos API.\n• Ensure the following data sources are available: Branch `updated_at`, open PR linkage.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNightly job lists branches. Join with Jira for linked tickets. Auto-PR stale branch report to teams channel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:branch_inventory\"\n| eval age_days=round((now()-strptime(updated_at,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,1)\n| where age_days > 180 AND protected=\"false\"\n| stats max(age_days) as max_age by repository, ref\n| sort -max_age\n```\n\nUnderstanding this SPL\n\n**Stale Branch Cleanup Tracking** — Long-lived branches accumulate merge debt and stale code; tracking supports automated deletion policies.\n\nDocumented **Data sources**: Branch `updated_at`, open PR linkage. **App/TA** (typical add-on context): GitHub GraphQL / repos API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:branch_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:branch_inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days > 180 AND protected=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by repository, ref** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale branches), Bar chart (count by repo), Single value (branches >180d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.19",
              "n": "Repository Webhook Health",
              "c": "high",
              "f": "beginner",
              "v": "Failed webhook deliveries break CI, security scanning, and Splunk ingestion—monitor delivery logs and HTTP status.",
              "t": "GitHub Webhook deliveries API, Splunk HEC receiver logs",
              "d": "`delivery` records with `status_code`, `error_message`",
              "q": "index=devops sourcetype=\"github:webhook_delivery\"\n| where status_code>=400 OR delivered=\"false\"\n| stats count by repository, hook_id, status_code\n| where count > 5\n| sort -count",
              "m": "Poll recent deliveries or ingest GitHub Enterprise audit. Alert on sustained 4xx/5xx to Splunk HEC. Verify TLS cert on endpoint.",
              "z": "Table (failing hooks), Line chart (failure rate), Single value (open incidents).",
              "kfp": "Activity shifts during holidays, release freezes, or vendor maintenance; confirm against your change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub Webhook deliveries API, Splunk HEC receiver logs.\n• Ensure the following data sources are available: `delivery` records with `status_code`, `error_message`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll recent deliveries or ingest GitHub Enterprise audit. Alert on sustained 4xx/5xx to Splunk HEC. Verify TLS cert on endpoint.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:webhook_delivery\"\n| where status_code>=400 OR delivered=\"false\"\n| stats count by repository, hook_id, status_code\n| where count > 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Repository Webhook Health** — Failed webhook deliveries break CI, security scanning, and Splunk ingestion—monitor delivery logs and HTTP status.\n\nDocumented **Data sources**: `delivery` records with `status_code`, `error_message`. **App/TA** (typical add-on context): GitHub Webhook deliveries API, Splunk HEC receiver logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:webhook_delivery. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:webhook_delivery\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status_code>=400 OR delivered=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by repository, hook_id, status_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failing hooks), Line chart (failure rate), Single value (open incidents).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.1.20",
              "n": "Code Scanning Alert Trends",
              "c": "high",
              "f": "beginner",
              "v": "GitHub Advanced Security CodeQL/code scanning alert open/close rates show remediation velocity and new debt.",
              "t": "GitHub `code_scanning_alert` webhooks, API",
              "d": "`alert` created, fixed, reopened events",
              "q": "index=devops sourcetype=\"github:code_scanning\"\n| timechart span=1w sum(eval(action=\"created\")) as opened, sum(eval(action=\"fixed\")) as fixed by rule_severity\n| eval net_debt=opened-fixed",
              "m": "Ingest SARIF-related alert events. Track MTTR for Critical. Executive dashboard: net debt per language.",
              "z": "Line chart (opened vs fixed), Bar chart (by severity), Single value (open Critical alerts).",
              "kfp": "Activity shifts during holidays, release freezes, or vendor maintenance; confirm against your change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub `code_scanning_alert` webhooks, API.\n• Ensure the following data sources are available: `alert` created, fixed, reopened events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest SARIF-related alert events. Track MTTR for Critical. Executive dashboard: net debt per language.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:code_scanning\"\n| timechart span=1w sum(eval(action=\"created\")) as opened, sum(eval(action=\"fixed\")) as fixed by rule_severity\n| eval net_debt=opened-fixed\n```\n\nUnderstanding this SPL\n\n**Code Scanning Alert Trends** — GitHub Advanced Security CodeQL/code scanning alert open/close rates show remediation velocity and new debt.\n\nDocumented **Data sources**: `alert` created, fixed, reopened events. **App/TA** (typical add-on context): GitHub `code_scanning_alert` webhooks, API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:code_scanning. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:code_scanning\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1w** buckets with a separate series **by rule_severity** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **net_debt** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (opened vs fixed), Bar chart (by severity), Single value (open Critical alerts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.8,
          "qd": {
            "gold": 0,
            "silver": 2,
            "bronze": 18,
            "none": 0
          }
        },
        {
          "i": "12.2",
          "n": "CI/CD Pipelines",
          "u": [
            {
              "i": "12.2.1",
              "n": "Build Success Rate Trending",
              "c": "high",
              "f": "beginner",
              "v": "Declining build success rates indicate code quality issues, flaky tests, or infrastructure problems. Trending drives improvement.",
              "t": "Splunk App for Jenkins, webhook input",
              "d": "CI/CD build results (Jenkins, GitHub Actions, GitLab CI)",
              "q": "index=cicd sourcetype=\"jenkins:build\"\n| stats count(eval(result=\"SUCCESS\")) as success, count(eval(result=\"FAILURE\")) as failed, count as total by job_name\n| eval success_rate=round(success/total*100,1)\n| sort success_rate",
              "m": "Ingest CI/CD build events via TA or webhook. Track success/failure rates per pipeline. Alert when success rate drops below threshold (e.g., <90%). Identify most-failing pipelines for developer attention.",
              "z": "Bar chart (success rate by pipeline), Line chart (success rate trend), Table (failing builds), Single value (overall success rate).",
              "kfp": "Build failures spike during CI infrastructure issues, dependency registry outages, flaky test runs, or major framework upgrades.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk App for Jenkins, webhook input.\n• Ensure the following data sources are available: CI/CD build results (Jenkins, GitHub Actions, GitLab CI).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CI/CD build events via TA or webhook. Track success/failure rates per pipeline. Alert when success rate drops below threshold (e.g., <90%). Identify most-failing pipelines for developer attention.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"jenkins:build\"\n| stats count(eval(result=\"SUCCESS\")) as success, count(eval(result=\"FAILURE\")) as failed, count as total by job_name\n| eval success_rate=round(success/total*100,1)\n| sort success_rate\n```\n\nUnderstanding this SPL\n\n**Build Success Rate Trending** — Declining build success rates indicate code quality issues, flaky tests, or infrastructure problems. Trending drives improvement.\n\nDocumented **Data sources**: CI/CD build results (Jenkins, GitHub Actions, GitLab CI). **App/TA** (typical add-on context): Splunk App for Jenkins, webhook input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: jenkins:build. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"jenkins:build\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by job_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **success_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (success rate by pipeline), Line chart (success rate trend), Table (failing builds), Single value (overall success rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch build outcomes so we spot quality or infrastructure problems before they become a release blocker.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "jenkins"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.2",
              "n": "Build Duration Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Increasing build times slow development velocity. Detection enables build optimization and infrastructure right-sizing.",
              "t": "Splunk App for Jenkins, CI/CD metrics",
              "d": "Build start/end timestamps",
              "q": "index=cicd sourcetype=\"jenkins:build\" result=\"SUCCESS\"\n| eval duration_min=round(duration/60000,1)\n| timechart span=1d avg(duration_min) as avg_build_time by job_name",
              "m": "Track build duration for all pipelines. Alert when duration exceeds historical average by >50%. Identify slow build steps. Correlate with infrastructure metrics (runner CPU, disk I/O) to find bottlenecks.",
              "z": "Line chart (build duration trend), Bar chart (avg duration by pipeline), Table (slowest builds today).",
              "kfp": "Build times vary with cache state, runner availability, or test suite expansion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk App for Jenkins, CI/CD metrics.\n• Ensure the following data sources are available: Build start/end timestamps.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack build duration for all pipelines. Alert when duration exceeds historical average by >50%. Identify slow build steps. Correlate with infrastructure metrics (runner CPU, disk I/O) to find bottlenecks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"jenkins:build\" result=\"SUCCESS\"\n| eval duration_min=round(duration/60000,1)\n| timechart span=1d avg(duration_min) as avg_build_time by job_name\n```\n\nUnderstanding this SPL\n\n**Build Duration Monitoring** — Increasing build times slow development velocity. Detection enables build optimization and infrastructure right-sizing.\n\nDocumented **Data sources**: Build start/end timestamps. **App/TA** (typical add-on context): Splunk App for Jenkins, CI/CD metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: jenkins:build. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"jenkins:build\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by job_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (build duration trend), Bar chart (avg duration by pipeline), Table (slowest builds today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how long builds take so slow pipelines and flaky steps are visible before deadlines slip.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "jenkins"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.3",
              "n": "Deployment Frequency (DORA)",
              "c": "medium",
              "f": "beginner",
              "v": "Deployment frequency is a key DORA metric indicating engineering capability maturity. Higher frequency correlates with better outcomes.",
              "t": "Deployment event webhook",
              "d": "Deployment events from CI/CD pipelines",
              "q": "index=cicd sourcetype=\"deployment_event\" environment=\"production\"\n| timechart span=1w count as deployments by application",
              "m": "Emit deployment events from CI/CD pipelines to Splunk HEC. Track production deployments per team/application per week. Calculate DORA deployment frequency category (daily, weekly, monthly). Report to engineering leadership.",
              "z": "Line chart (deployment frequency trend), Bar chart (deployments by team), Single value (deployments this week).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Deployment event webhook.\n• Ensure the following data sources are available: Deployment events from CI/CD pipelines.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEmit deployment events from CI/CD pipelines to Splunk HEC. Track production deployments per team/application per week. Calculate DORA deployment frequency category (daily, weekly, monthly). Report to engineering leadership.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"deployment_event\" environment=\"production\"\n| timechart span=1w count as deployments by application\n```\n\nUnderstanding this SPL\n\n**Deployment Frequency (DORA)** — Deployment frequency is a key DORA metric indicating engineering capability maturity. Higher frequency correlates with better outcomes.\n\nDocumented **Data sources**: Deployment events from CI/CD pipelines. **App/TA** (typical add-on context): Deployment event webhook. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: deployment_event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"deployment_event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1w** buckets with a separate series **by application** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (deployment frequency trend), Bar chart (deployments by team), Single value (deployments this week).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track delivery tempo and stability so leaders see whether we are shipping safely and often enough.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.4",
              "n": "Lead Time for Changes (DORA)",
              "c": "medium",
              "f": "beginner",
              "v": "Lead time measures the commit-to-production pipeline efficiency. Shorter lead times enable faster value delivery and incident response.",
              "t": "Git + deployment correlation",
              "d": "Git commit timestamps + production deployment timestamps",
              "q": "index=cicd sourcetype=\"deployment_event\" environment=\"production\"\n| eval lead_time_hours=round((deploy_time_epoch-commit_time_epoch)/3600,1)\n| stats avg(lead_time_hours) as avg_lead_time, median(lead_time_hours) as median_lead_time by application",
              "m": "Correlate commit timestamps with deployment events. Calculate time from first commit to production deployment. Track per application and team. Classify per DORA categories (under 1 hour, under 1 day, under 1 week, over 1 month).",
              "z": "Bar chart (lead time by application), Line chart (lead time trend), Histogram (lead time distribution).",
              "kfp": "Build times vary with cache state, runner availability, or test suite expansion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Git + deployment correlation.\n• Ensure the following data sources are available: Git commit timestamps + production deployment timestamps.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate commit timestamps with deployment events. Calculate time from first commit to production deployment. Track per application and team. Classify per DORA categories (under 1 hour, under 1 day, under 1 week, over 1 month).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"deployment_event\" environment=\"production\"\n| eval lead_time_hours=round((deploy_time_epoch-commit_time_epoch)/3600,1)\n| stats avg(lead_time_hours) as avg_lead_time, median(lead_time_hours) as median_lead_time by application\n```\n\nUnderstanding this SPL\n\n**Lead Time for Changes (DORA)** — Lead time measures the commit-to-production pipeline efficiency. Shorter lead times enable faster value delivery and incident response.\n\nDocumented **Data sources**: Git commit timestamps + production deployment timestamps. **App/TA** (typical add-on context): Git + deployment correlation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: deployment_event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"deployment_event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **lead_time_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by application** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (lead time by application), Line chart (lead time trend), Histogram (lead time distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track delivery tempo and stability so leaders see whether we are shipping safely and often enough.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.5",
              "n": "Failed Deployment Tracking",
              "c": "critical",
              "f": "beginner",
              "v": "Failed deployments cause service disruption. Rapid detection enables rollback decisions. Change failure rate is a DORA metric.",
              "t": "Deployment event webhook",
              "d": "Deployment events with status, rollback events",
              "q": "index=cicd sourcetype=\"deployment_event\" status=\"failed\"\n| table _time, application, environment, version, deployer, error_message\n| sort -_time",
              "m": "Track all deployment outcomes including failures and rollbacks. Alert immediately on production deployment failures. Calculate change failure rate (DORA metric). Correlate with application error rate to measure deployment impact.",
              "z": "Table (failed deployments), Single value (change failure rate %), Line chart (failure rate trend), Timeline (deployment events).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Deployment event webhook.\n• Ensure the following data sources are available: Deployment events with status, rollback events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack all deployment outcomes including failures and rollbacks. Alert immediately on production deployment failures. Calculate change failure rate (DORA metric). Correlate with application error rate to measure deployment impact.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"deployment_event\" status=\"failed\"\n| table _time, application, environment, version, deployer, error_message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Failed Deployment Tracking** — Failed deployments cause service disruption. Rapid detection enables rollback decisions. Change failure rate is a DORA metric.\n\nDocumented **Data sources**: Deployment events with status, rollback events. **App/TA** (typical add-on context): Deployment event webhook. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: deployment_event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"deployment_event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Failed Deployment Tracking**): table _time, application, environment, version, deployer, error_message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed deployments), Single value (change failure rate %), Line chart (failure rate trend), Timeline (deployment events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch failed or rolled-back releases so we can tighten the path from commit to production.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "12.2.6",
              "n": "Pipeline Queue Time",
              "c": "medium",
              "f": "beginner",
              "v": "Long queue times indicate insufficient CI/CD runner capacity, slowing developer feedback loops and delivery velocity.",
              "t": "Splunk App for Jenkins, CI/CD system metrics",
              "d": "CI/CD queue metrics (time in queue, pending jobs)",
              "q": "index=cicd sourcetype=\"jenkins:queue\"\n| timechart span=15m avg(wait_time_sec) as avg_wait, max(queue_length) as max_queue\n| where avg_wait > 300",
              "m": "Track job queue wait times and queue lengths. Alert when average wait exceeds 5 minutes. Monitor runner/agent utilization. Report on peak hours to guide scaling decisions.",
              "z": "Line chart (queue time trend), Bar chart (queue time by pipeline), Single value (current queue length).",
              "kfp": "Build times vary with cache state, runner availability, or test suite expansion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk App for Jenkins, CI/CD system metrics.\n• Ensure the following data sources are available: CI/CD queue metrics (time in queue, pending jobs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack job queue wait times and queue lengths. Alert when average wait exceeds 5 minutes. Monitor runner/agent utilization. Report on peak hours to guide scaling decisions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"jenkins:queue\"\n| timechart span=15m avg(wait_time_sec) as avg_wait, max(queue_length) as max_queue\n| where avg_wait > 300\n```\n\nUnderstanding this SPL\n\n**Pipeline Queue Time** — Long queue times indicate insufficient CI/CD runner capacity, slowing developer feedback loops and delivery velocity.\n\nDocumented **Data sources**: CI/CD queue metrics (time in queue, pending jobs). **App/TA** (typical add-on context): Splunk App for Jenkins, CI/CD system metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: jenkins:queue. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"jenkins:queue\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_wait > 300` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (queue time trend), Bar chart (queue time by pipeline), Single value (current queue length).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch queueing and wait time so developers are not stuck behind broken runners or clogged pipelines.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "jenkins"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.7",
              "n": "Test Coverage Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Declining test coverage increases risk of undetected bugs. Trending ensures quality standards are maintained.",
              "t": "Custom test report input",
              "d": "Test result reports (JUnit XML, coverage reports)",
              "q": "index=cicd sourcetype=\"test_coverage\"\n| timechart span=1d latest(coverage_pct) as coverage by project\n| where coverage < 80",
              "m": "Parse test coverage reports from CI/CD pipelines. Send to Splunk via HEC. Track coverage per project. Alert when coverage drops below minimum (e.g., <80%). Block merges when coverage decreases (enforce in CI).",
              "z": "Line chart (coverage trend per project), Bar chart (coverage by project), Single value (avg coverage %).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom test report input.\n• Ensure the following data sources are available: Test result reports (JUnit XML, coverage reports).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse test coverage reports from CI/CD pipelines. Send to Splunk via HEC. Track coverage per project. Alert when coverage drops below minimum (e.g., <80%). Block merges when coverage decreases (enforce in CI).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"test_coverage\"\n| timechart span=1d latest(coverage_pct) as coverage by project\n| where coverage < 80\n```\n\nUnderstanding this SPL\n\n**Test Coverage Trending** — Declining test coverage increases risk of undetected bugs. Trending ensures quality standards are maintained.\n\nDocumented **Data sources**: Test result reports (JUnit XML, coverage reports). **App/TA** (typical add-on context): Custom test report input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: test_coverage. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"test_coverage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by project** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where coverage < 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (coverage trend per project), Bar chart (coverage by project), Single value (avg coverage %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch test signals so coverage drops and flaky tests do not hide real defects.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.8",
              "n": "Security Scan Results in Pipeline",
              "c": "critical",
              "f": "intermediate",
              "v": "SAST/DAST/SCA findings in CI/CD pipelines catch vulnerabilities before they reach production. Tracking ensures security gates work.",
              "t": "Custom scan result input",
              "d": "SAST (SonarQube, Checkmarx), DAST (ZAP, Burp), SCA (Snyk, Dependabot) results",
              "q": "index=cicd sourcetype=\"security_scan\"\n| stats count by severity, scan_type, project\n| where severity IN (\"Critical\",\"High\")\n| sort -count",
              "m": "Ingest security scan results from CI/CD pipelines. Track findings by severity, type, and project. Alert on critical findings blocking deployment. Report on vulnerability introduction rate and fix rate.",
              "z": "Bar chart (findings by severity), Table (critical findings), Line chart (findings trend), Stacked bar (by scan type).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scan result input.\n• Ensure the following data sources are available: SAST (SonarQube, Checkmarx), DAST (ZAP, Burp), SCA (Snyk, Dependabot) results.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest security scan results from CI/CD pipelines. Track findings by severity, type, and project. Alert on critical findings blocking deployment. Report on vulnerability introduction rate and fix rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"security_scan\"\n| stats count by severity, scan_type, project\n| where severity IN (\"Critical\",\"High\")\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Security Scan Results in Pipeline** — SAST/DAST/SCA findings in CI/CD pipelines catch vulnerabilities before they reach production. Tracking ensures security gates work.\n\nDocumented **Data sources**: SAST (SonarQube, Checkmarx), DAST (ZAP, Burp), SCA (Snyk, Dependabot) results. **App/TA** (typical add-on context): Custom scan result input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: security_scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"security_scan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by severity, scan_type, project** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where severity IN (\"Critical\",\"High\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (findings by severity), Table (critical findings), Line chart (findings trend), Stacked bar (by scan type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch security scan results in the pipeline so findings are handled before code ships.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "12.2.9",
              "n": "Jenkins Executor Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "Busy executors as % of total and queue wait time indicate CI capacity. High utilization with long queue times signals need for additional agents or executor scaling.",
              "t": "Custom (Jenkins API, Prometheus metrics endpoint)",
              "d": "Jenkins /metrics, /api/json?tree=computer[displayName,busyExecutors,totalExecutors]",
              "q": "index=cicd sourcetype=\"jenkins:metrics\"\n| eval utilization_pct=round(busy_executors/total_executors*100,1)\n| timechart span=15m avg(utilization_pct) as avg_util, avg(queue_wait_sec) as avg_wait by computer\n| where avg_util > 80 OR avg_wait > 300",
              "m": "Poll Jenkins /metrics (Prometheus format) or /computer/api/json for executor counts. Ingest busyExecutors, totalExecutors, and queue wait time. Calculate utilization per node. Alert when utilization >85% sustained for 15 min or queue wait >5 min. Correlate with build duration to right-size capacity.",
              "z": "Gauge (current utilization %), Line chart (utilization and queue wait trend), Bar chart (utilization by node), Single value (avg queue wait sec).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Jenkins API, Prometheus metrics endpoint).\n• Ensure the following data sources are available: Jenkins /metrics, /api/json?tree=computer[displayName,busyExecutors,totalExecutors].\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Jenkins /metrics (Prometheus format) or /computer/api/json for executor counts. Ingest busyExecutors, totalExecutors, and queue wait time. Calculate utilization per node. Alert when utilization >85% sustained for 15 min or queue wait >5 min. Correlate with build duration to right-size capacity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"jenkins:metrics\"\n| eval utilization_pct=round(busy_executors/total_executors*100,1)\n| timechart span=15m avg(utilization_pct) as avg_util, avg(queue_wait_sec) as avg_wait by computer\n| where avg_util > 80 OR avg_wait > 300\n```\n\nUnderstanding this SPL\n\n**Jenkins Executor Utilization** — Busy executors as % of total and queue wait time indicate CI capacity. High utilization with long queue times signals need for additional agents or executor scaling.\n\nDocumented **Data sources**: Jenkins /metrics, /api/json?tree=computer[displayName,busyExecutors,totalExecutors]. **App/TA** (typical add-on context): Custom (Jenkins API, Prometheus metrics endpoint). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: jenkins:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"jenkins:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by computer** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_util > 80 OR avg_wait > 300` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (current utilization %), Line chart (utilization and queue wait trend), Bar chart (utilization by node), Single value (avg queue wait sec).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Jenkins health and jobs so executors, agents, and failures do not stall delivery.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "jenkins",
                "prometheus"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.10",
              "n": "Jenkins Node Offline Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Build agents going offline impacts CI capacity and causes job failures. Rapid detection enables agent recovery or failover before queue backlog grows.",
              "t": "Custom (Jenkins API)",
              "d": "Jenkins /computer/api/json",
              "q": "index=cicd sourcetype=\"jenkins:computer\"\n| where offline=\"true\" OR offline=true\n| table _time, displayName, offline, temporarilyOffline, numExecutors, idleExecutors\n| sort -_time",
              "m": "Poll Jenkins /computer/api/json periodically (e.g., every 5 min). Ingest offline, temporarilyOffline, displayName per node. Alert immediately when any node goes offline. Exclude master if desired. Track offline duration and recurrence for capacity planning.",
              "z": "Table (offline nodes), Single value (offline node count — target: 0), Status grid (node × online/offline), Timeline (offline events).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Jenkins API).\n• Ensure the following data sources are available: Jenkins /computer/api/json.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Jenkins /computer/api/json periodically (e.g., every 5 min). Ingest offline, temporarilyOffline, displayName per node. Alert immediately when any node goes offline. Exclude master if desired. Track offline duration and recurrence for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"jenkins:computer\"\n| where offline=\"true\" OR offline=true\n| table _time, displayName, offline, temporarilyOffline, numExecutors, idleExecutors\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Jenkins Node Offline Detection** — Build agents going offline impacts CI capacity and causes job failures. Rapid detection enables agent recovery or failover before queue backlog grows.\n\nDocumented **Data sources**: Jenkins /computer/api/json. **App/TA** (typical add-on context): Custom (Jenkins API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: jenkins:computer. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"jenkins:computer\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where offline=\"true\" OR offline=true` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Jenkins Node Offline Detection**): table _time, displayName, offline, temporarilyOffline, numExecutors, idleExecutors\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (offline nodes), Single value (offline node count — target: 0), Status grid (node × online/offline), Timeline (offline events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Jenkins health and jobs so executors, agents, and failures do not stall delivery.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "jenkins"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.11",
              "n": "GitLab CI Runner Availability",
              "c": "high",
              "f": "intermediate",
              "v": "Runner registration status and job queue time affect CI throughput. Offline or paused runners cause jobs to wait or fail; monitoring ensures runner fleet health.",
              "t": "Custom (GitLab API)",
              "d": "GitLab /api/v4/runners, runner logs",
              "q": "index=cicd sourcetype=\"gitlab:runners\"\n| where active=\"false\" OR paused=\"true\" OR (status!=\"online\" AND status!=\"idle\")\n| table _time, runner_id, description, active, paused, status, contacted_at\n| sort -_time",
              "m": "Poll GitLab /api/v4/runners for runner list and status. Ingest active, paused, contacted_at. Optionally parse runner logs for connectivity errors. Alert when runner goes inactive or paused. Track job queue time from pipeline events. Report on runner utilization and availability SLA.",
              "z": "Table (inactive/paused runners), Single value (available runners), Status grid (runner × status), Line chart (job queue time trend).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (GitLab API).\n• Ensure the following data sources are available: GitLab /api/v4/runners, runner logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll GitLab /api/v4/runners for runner list and status. Ingest active, paused, contacted_at. Optionally parse runner logs for connectivity errors. Alert when runner goes inactive or paused. Track job queue time from pipeline events. Report on runner utilization and availability SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"gitlab:runners\"\n| where active=\"false\" OR paused=\"true\" OR (status!=\"online\" AND status!=\"idle\")\n| table _time, runner_id, description, active, paused, status, contacted_at\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**GitLab CI Runner Availability** — Runner registration status and job queue time affect CI throughput. Offline or paused runners cause jobs to wait or fail; monitoring ensures runner fleet health.\n\nDocumented **Data sources**: GitLab /api/v4/runners, runner logs. **App/TA** (typical add-on context): Custom (GitLab API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: gitlab:runners. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"gitlab:runners\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where active=\"false\" OR paused=\"true\" OR (status!=\"online\" AND status!=\"idle\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GitLab CI Runner Availability**): table _time, runner_id, description, active, paused, status, contacted_at\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (inactive/paused runners), Single value (available runners), Status grid (runner × status), Line chart (job queue time trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitLab runner and pipeline signals so broken jobs, storage, or runners do not stall your teams.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gitlab"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.12",
              "n": "GitLab Pipeline Duration Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Pipeline getting slower over time indicates growing tech debt or resource constraints. Trending enables proactive optimization before developer velocity degrades.",
              "t": "Custom (GitLab API)",
              "d": "GitLab /api/v4/projects/:id/pipelines",
              "q": "index=cicd sourcetype=\"gitlab:pipeline\"\n| eval duration_sec=coalesce(duration, 0)\n| timechart span=1d avg(duration_sec) as avg_duration, percentile(duration_sec, 95) as p95_duration by ref\n| where avg_duration > 0",
              "m": "Poll GitLab pipelines API per project. Ingest id, ref, status, duration, created_at. Calculate duration for completed pipelines. Track 7-day rolling average. Alert when avg duration increases >25% week-over-week. Correlate with runner capacity and stage-level timings.",
              "z": "Line chart (pipeline duration trend by branch), Bar chart (avg duration by project), Table (slowest pipelines this week), Single value (p95 duration).",
              "kfp": "Build times vary with cache state, runner availability, or test suite expansion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (GitLab API).\n• Ensure the following data sources are available: GitLab /api/v4/projects/:id/pipelines.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll GitLab pipelines API per project. Ingest id, ref, status, duration, created_at. Calculate duration for completed pipelines. Track 7-day rolling average. Alert when avg duration increases >25% week-over-week. Correlate with runner capacity and stage-level timings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"gitlab:pipeline\"\n| eval duration_sec=coalesce(duration, 0)\n| timechart span=1d avg(duration_sec) as avg_duration, percentile(duration_sec, 95) as p95_duration by ref\n| where avg_duration > 0\n```\n\nUnderstanding this SPL\n\n**GitLab Pipeline Duration Trending** — Pipeline getting slower over time indicates growing tech debt or resource constraints. Trending enables proactive optimization before developer velocity degrades.\n\nDocumented **Data sources**: GitLab /api/v4/projects/:id/pipelines. **App/TA** (typical add-on context): Custom (GitLab API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: gitlab:pipeline. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"gitlab:pipeline\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by ref** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_duration > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (pipeline duration trend by branch), Bar chart (avg duration by project), Table (slowest pipelines this week), Single value (p95 duration).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitLab runner and pipeline signals so broken jobs, storage, or runners do not stall your teams.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gitlab"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.14",
              "n": "Jenkins Agent Offline Alerts",
              "c": "critical",
              "f": "beginner",
              "v": "Operational paging when executors disappear—complements UC-12.2.10 with severity tiers and agent labels (prod vs dev).",
              "t": "Splunk App for Jenkins, Jenkins API",
              "d": "`/computer/api/json` `offline=true`, agent heartbeat",
              "q": "index=cicd sourcetype=\"jenkins:computer\" (offline=\"true\" OR offline=true)\n| lookup jenkins_agent_tier displayName OUTPUT tier\n| where tier=\"production\"\n| table _time, displayName, offline, labels, numExecutors\n| sort -_time",
              "m": "Tag agents in lookup. Page immediately for production-labeled offline agents. Auto-create incident if >30% of pool offline.",
              "z": "Table (offline prod agents), Single value (count), Status grid (agent × tier).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk App for Jenkins, Jenkins API.\n• Ensure the following data sources are available: `/computer/api/json` `offline=true`, agent heartbeat.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag agents in lookup. Page immediately for production-labeled offline agents. Auto-create incident if >30% of pool offline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"jenkins:computer\" (offline=\"true\" OR offline=true)\n| lookup jenkins_agent_tier displayName OUTPUT tier\n| where tier=\"production\"\n| table _time, displayName, offline, labels, numExecutors\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Jenkins Agent Offline Alerts** — Operational paging when executors disappear—complements UC-12.2.10 with severity tiers and agent labels (prod vs dev).\n\nDocumented **Data sources**: `/computer/api/json` `offline=true`, agent heartbeat. **App/TA** (typical add-on context): Splunk App for Jenkins, Jenkins API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: jenkins:computer. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"jenkins:computer\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where tier=\"production\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Jenkins Agent Offline Alerts**): table _time, displayName, offline, labels, numExecutors\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (offline prod agents), Single value (count), Status grid (agent × tier).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Jenkins health and jobs so executors, agents, and failures do not stall delivery.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "jenkins"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.15",
              "n": "Pipeline Stage Duration Regression",
              "c": "medium",
              "f": "intermediate",
              "v": "Stage-level regression (e.g., `test` stage) surfaces before overall job duration crosses SLA (UC-12.2.2).",
              "t": "Jenkins Blue Ocean / GitLab job trace, GitHub Actions job logs",
              "d": "Stage start/end timestamps per pipeline run",
              "q": "index=cicd sourcetype=\"cicd:stage\"\n| eval stage_duration_sec=duration_ms/1000\n| eventstats median(stage_duration_sec) as med by stage_name, pipeline\n| where stage_duration_sec > med*1.5 AND stage_duration_sec > 60\n| table _time, pipeline, stage_name, stage_duration_sec, med",
              "m": "Emit structured stage events from CI. Baseline weekly medians. Alert on regression >50% vs median.",
              "z": "Line chart (stage duration trend), Table (regressions), Heatmap (stage × pipeline).",
              "kfp": "Build times vary with cache state, runner availability, or test suite expansion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Jenkins Blue Ocean / GitLab job trace, GitHub Actions job logs.\n• Ensure the following data sources are available: Stage start/end timestamps per pipeline run.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEmit structured stage events from CI. Baseline weekly medians. Alert on regression >50% vs median.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"cicd:stage\"\n| eval stage_duration_sec=duration_ms/1000\n| eventstats median(stage_duration_sec) as med by stage_name, pipeline\n| where stage_duration_sec > med*1.5 AND stage_duration_sec > 60\n| table _time, pipeline, stage_name, stage_duration_sec, med\n```\n\nUnderstanding this SPL\n\n**Pipeline Stage Duration Regression** — Stage-level regression (e.g., `test` stage) surfaces before overall job duration crosses SLA (UC-12.2.2).\n\nDocumented **Data sources**: Stage start/end timestamps per pipeline run. **App/TA** (typical add-on context): Jenkins Blue Ocean / GitLab job trace, GitHub Actions job logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: cicd:stage. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"cicd:stage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **stage_duration_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by stage_name, pipeline** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where stage_duration_sec > med*1.5 AND stage_duration_sec > 60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Pipeline Stage Duration Regression**): table _time, pipeline, stage_name, stage_duration_sec, med\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (stage duration trend), Table (regressions), Heatmap (stage × pipeline).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github",
                "gitlab",
                "jenkins"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.16",
              "n": "Build Artifact Integrity Verification",
              "c": "critical",
              "f": "intermediate",
              "v": "Compares published artifact checksums/Sigstore signatures against CI-recorded values to detect tampering at rest.",
              "t": "Cosign, Jenkins archive, S3/GCS object metadata",
              "d": "Build record with `sha256`, registry manifest digest",
              "q": "index=cicd sourcetype=\"artifact:integrity\"\n| where expected_sha256!=actual_sha256 OR signature_valid=\"false\"\n| table _time, artifact_name, pipeline, expected_sha256, actual_sha256",
              "m": "CI stores expected hash in Splunk; registry poll compares. Alert on any mismatch before prod promotion.",
              "z": "Table (mismatches — target empty), Timeline, Single value (failed verifications).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cosign, Jenkins archive, S3/GCS object metadata.\n• Ensure the following data sources are available: Build record with `sha256`, registry manifest digest.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCI stores expected hash in Splunk; registry poll compares. Alert on any mismatch before prod promotion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"artifact:integrity\"\n| where expected_sha256!=actual_sha256 OR signature_valid=\"false\"\n| table _time, artifact_name, pipeline, expected_sha256, actual_sha256\n```\n\nUnderstanding this SPL\n\n**Build Artifact Integrity Verification** — Compares published artifact checksums/Sigstore signatures against CI-recorded values to detect tampering at rest.\n\nDocumented **Data sources**: Build record with `sha256`, registry manifest digest. **App/TA** (typical add-on context): Cosign, Jenkins archive, S3/GCS object metadata. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: artifact:integrity. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"artifact:integrity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where expected_sha256!=actual_sha256 OR signature_valid=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Build Artifact Integrity Verification**): table _time, artifact_name, pipeline, expected_sha256, actual_sha256\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (mismatches — target empty), Timeline, Single value (failed verifications).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch packages and artifacts so supply-chain risk, storage, and policy checks stay under control.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "jenkins"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.17",
              "n": "Deploy Approval Bypass Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Production deploys without change ticket or approval group in CD system (Spinnaker, Argo Rollouts, Harness).",
              "t": "CD platform audit logs",
              "d": "Deployment events with `approval_id`, `change_ticket`",
              "q": "index=cicd sourcetype=\"deployment_event\" environment=\"production\"\n| where isnull(approval_id) OR isnull(change_ticket)\n| search user!=\"svc_cd_bot\"\n| table _time, application, user, version, environment",
              "m": "Enforce required fields in CD templates. Alert on nulls. Correlate with Entra/Okta for human actors.",
              "z": "Table (unapproved deploys), Timeline, Single value (violations 30d).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CD platform audit logs.\n• Ensure the following data sources are available: Deployment events with `approval_id`, `change_ticket`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnforce required fields in CD templates. Alert on nulls. Correlate with Entra/Okta for human actors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"deployment_event\" environment=\"production\"\n| where isnull(approval_id) OR isnull(change_ticket)\n| search user!=\"svc_cd_bot\"\n| table _time, application, user, version, environment\n```\n\nUnderstanding this SPL\n\n**Deploy Approval Bypass Detection** — Production deploys without change ticket or approval group in CD system (Spinnaker, Argo Rollouts, Harness).\n\nDocumented **Data sources**: Deployment events with `approval_id`, `change_ticket`. **App/TA** (typical add-on context): CD platform audit logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: deployment_event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"deployment_event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(approval_id) OR isnull(change_ticket)` — typically the threshold or rule expression for this monitoring goal.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Deploy Approval Bypass Detection**): table _time, application, user, version, environment\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unapproved deploys), Timeline, Single value (violations 30d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "BAIT/KAIT",
                  "v": "Aug 2021",
                  "cl": "§9",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of BAIT/KAIT §9 (ICT operations management) — Splunk UC-12.2.17: Deploy Approval Bypass Detection.",
                  "ea": "Saved search 'UC-12.2.17' running on Deployment events with approval_id, change_ticket, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bafin.de/SharedDocs/Veroeffentlichungen/EN/Rundschreiben/2021/rs_1021_BAIT_en.html"
                },
                {
                  "r": "IT-Grundschutz",
                  "v": "2023 Edition",
                  "cl": "OPS.1.1.2",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of IT-Grundschutz OPS.1.1.2 (Ordered ICT operation) — Splunk UC-12.2.17: Deploy Approval Bypass Detection.",
                  "ea": "Saved search 'UC-12.2.17' running on Deployment events with approval_id, change_ticket, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Standards-und-Zertifizierung/IT-Grundschutz/it-grundschutz_node.html"
                },
                {
                  "r": "MAS TRM",
                  "v": "2021",
                  "cl": "§4.1.1",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of MAS TRM §4.1.1 (Technology risk governance) — Splunk UC-12.2.17: Deploy Approval Bypass Detection.",
                  "ea": "Saved search 'UC-12.2.17' running on Deployment events with approval_id, change_ticket, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.mas.gov.sg/-/media/mas/regulations-and-financial-stability/regulatory-and-supervisory-framework/risk-management/trm-guidelines-18-january-2021.pdf"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Approval",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of SOX-ITGC ITGC.ChangeMgmt.Approval (Change approved) — Splunk UC-12.2.17: Deploy Approval Bypass Detection.",
                  "ea": "Saved search 'UC-12.2.17' running on Deployment events with approval_id, change_ticket, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.18",
              "n": "Parallel Build Resource Contention",
              "c": "medium",
              "f": "intermediate",
              "v": "Runner saturation lengthens CI pipelines and causes flaky tests that block releases. Correlating saturation metrics with P95 job duration justifies autoscaling investment and prevents false 'code regression' blame when infrastructure is the root cause.",
              "t": "Jenkins metrics, Kubernetes node metrics",
              "d": "Active executor count, node CPU utilization, overlapping job IDs",
              "q": "index=cicd sourcetype=\"jenkins:metrics\"\n| eval utilization_pct=round(busy_executors/total_executors*100,1)\n| join type=left max=1 _time [search index=infra sourcetype=\"kube:node\" | stats avg(cpu_usage) as node_cpu by _time]\n| where utilization_pct>90 AND node_cpu>85\n| table _time, computer, utilization_pct, node_cpu",
              "m": "Align timestamps between CI and Kubernetes. Alert when high utilization coincides with p95 build time spike. Scale runner pool.",
              "z": "Line chart (utilization vs build time), Table (contention windows), Bar chart (by node pool).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Jenkins metrics, Kubernetes node metrics.\n• Ensure the following data sources are available: Active executor count, node CPU utilization, overlapping job IDs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign timestamps between CI and Kubernetes. Alert when high utilization coincides with p95 build time spike. Scale runner pool.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"jenkins:metrics\"\n| eval utilization_pct=round(busy_executors/total_executors*100,1)\n| join type=left max=1 _time [search index=infra sourcetype=\"kube:node\" | stats avg(cpu_usage) as node_cpu by _time]\n| where utilization_pct>90 AND node_cpu>85\n| table _time, computer, utilization_pct, node_cpu\n```\n\nUnderstanding this SPL\n\n**Parallel Build Resource Contention** — Runner saturation lengthens CI pipelines and causes flaky tests that block releases. Correlating saturation metrics with P95 job duration justifies autoscaling investment and prevents false 'code regression' blame when infrastructure is the root cause.\n\nDocumented **Data sources**: Active executor count, node CPU utilization, overlapping job IDs. **App/TA** (typical add-on context): Jenkins metrics, Kubernetes node metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: jenkins:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"jenkins:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where utilization_pct>90 AND node_cpu>85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Parallel Build Resource Contention**): table _time, computer, utilization_pct, node_cpu\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (utilization vs build time), Table (contention windows), Bar chart (by node pool).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "jenkins",
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.19",
              "n": "Flaky Test Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Tests that pass/fail without code changes waste time and hide real failures; quarantine candidates identified by pass rate variance.",
              "t": "JUnit XML ingest, Buildkite/GitHub Actions annotations",
              "d": "Test case name, suite, result per run",
              "q": "index=cicd sourcetype=\"junit:result\"\n| stats count(eval(result=\"SUCCESS\")) as pass, count(eval(result=\"FAILURE\")) as fail, count as runs by test_case, class_name\n| eval flake_rate=round(fail/runs*100,1)\n| where runs>10 AND flake_rate>10 AND flake_rate<90\n| sort -flake_rate",
              "m": "Minimum 10 runs for statistics. File Jira for quarantine when flake_rate >25%. Track fix SLA.",
              "z": "Table (flaky tests), Bar chart (flake rate), Line chart (trend after fix).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: JUnit XML ingest, Buildkite/GitHub Actions annotations.\n• Ensure the following data sources are available: Test case name, suite, result per run.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMinimum 10 runs for statistics. File Jira for quarantine when flake_rate >25%. Track fix SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"junit:result\"\n| stats count(eval(result=\"SUCCESS\")) as pass, count(eval(result=\"FAILURE\")) as fail, count as runs by test_case, class_name\n| eval flake_rate=round(fail/runs*100,1)\n| where runs>10 AND flake_rate>10 AND flake_rate<90\n| sort -flake_rate\n```\n\nUnderstanding this SPL\n\n**Flaky Test Detection** — Tests that pass/fail without code changes waste time and hide real failures; quarantine candidates identified by pass rate variance.\n\nDocumented **Data sources**: Test case name, suite, result per run. **App/TA** (typical add-on context): JUnit XML ingest, Buildkite/GitHub Actions annotations. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: junit:result. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"junit:result\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by test_case, class_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **flake_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where runs>10 AND flake_rate>10 AND flake_rate<90` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (flaky tests), Bar chart (flake rate), Line chart (trend after fix).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch test signals so coverage drops and flaky tests do not hide real defects.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.20",
              "n": "Deployment Frequency Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "DORA dashboard slice—deployments per service per day with team and application tags (extends UC-12.2.3).",
              "t": "`deployment_event` HEC",
              "d": "Normalized deploy events with `service`, `team`",
              "q": "index=cicd sourcetype=\"deployment_event\" environment=\"production\"\n| bin _time span=1d\n| stats count as deploys by _time, team, service\n| sort -deploys",
              "m": "Tag all deploy pipelines. Weekly report to leadership. Compare to DORA elite thresholds (on-demand per day).",
              "z": "Line chart (deploys per day by team), Bar chart (by service), Single value (deploys 7d).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `deployment_event` HEC.\n• Ensure the following data sources are available: Normalized deploy events with `service`, `team`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag all deploy pipelines. Weekly report to leadership. Compare to DORA elite thresholds (on-demand per day).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"deployment_event\" environment=\"production\"\n| bin _time span=1d\n| stats count as deploys by _time, team, service\n| sort -deploys\n```\n\nUnderstanding this SPL\n\n**Deployment Frequency Tracking** — DORA dashboard slice—deployments per service per day with team and application tags (extends UC-12.2.3).\n\nDocumented **Data sources**: Normalized deploy events with `service`, `team`. **App/TA** (typical add-on context): `deployment_event` HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: deployment_event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"deployment_event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, team, service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (deploys per day by team), Bar chart (by service), Single value (deploys 7d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track delivery tempo and stability so leaders see whether we are shipping safely and often enough.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.21",
              "n": "Lead Time for Changes (Percentile)",
              "c": "medium",
              "f": "intermediate",
              "v": "p95 lead time exposes tail latency hiding in averages (UC-12.2.4).",
              "t": "Git + deployment correlation",
              "d": "First commit SHA timestamp → prod deploy time",
              "q": "index=cicd sourcetype=\"deployment_event\" environment=\"production\"\n| eval lead_hours=(deploy_time_epoch-commit_time_epoch)/3600\n| stats avg(lead_hours) as avg_lt, p95(lead_hours) as p95_lt by application\n| where p95_lt > 168\n| sort -p95_lt",
              "m": "Require deploy events to carry commit SHA. Alert when p95 exceeds one week. Segment by monorepo vs microservice.",
              "z": "Histogram (lead time), Table (p95 by app), Line chart (p95 trend).",
              "kfp": "Build times vary with cache state, runner availability, or test suite expansion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Git + deployment correlation.\n• Ensure the following data sources are available: First commit SHA timestamp → prod deploy time.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequire deploy events to carry commit SHA. Alert when p95 exceeds one week. Segment by monorepo vs microservice.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"deployment_event\" environment=\"production\"\n| eval lead_hours=(deploy_time_epoch-commit_time_epoch)/3600\n| stats avg(lead_hours) as avg_lt, p95(lead_hours) as p95_lt by application\n| where p95_lt > 168\n| sort -p95_lt\n```\n\nUnderstanding this SPL\n\n**Lead Time for Changes (Percentile)** — p95 lead time exposes tail latency hiding in averages (UC-12.2.4).\n\nDocumented **Data sources**: First commit SHA timestamp → prod deploy time. **App/TA** (typical add-on context): Git + deployment correlation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: deployment_event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"deployment_event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **lead_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by application** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_lt > 168` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (lead time), Table (p95 by app), Line chart (p95 trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track delivery tempo and stability so leaders see whether we are shipping safely and often enough.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.22",
              "n": "Mean Time to Recovery (MTTR)",
              "c": "high",
              "f": "intermediate",
              "v": "Time from incident detection or deploy failure to successful restore—core DORA metric.",
              "t": "PagerDuty/Opsgenie, deployment events",
              "d": "Incident `created_at`, `resolved_at`; rollback `deploy_time`",
              "q": "index=itsm sourcetype=\"pagerduty:incident\"\n| eval mttr_min=(resolved_epoch-ack_epoch)/60\n| stats avg(mttr_min) as avg_mttr, p95(mttr_min) as p95_mttr by service\n| where avg_mttr > 60\n| sort -avg_mttr",
              "m": "Correlate PD incidents with service. For deploy failures, measure time to healthy deploy. Executive review monthly.",
              "z": "Table (MTTR by service), Line chart (avg MTTR trend), Gauge (vs SLA).",
              "kfp": "Build times vary with cache state, runner availability, or test suite expansion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PagerDuty/Opsgenie, deployment events.\n• Ensure the following data sources are available: Incident `created_at`, `resolved_at`; rollback `deploy_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate PD incidents with service. For deploy failures, measure time to healthy deploy. Executive review monthly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"pagerduty:incident\"\n| eval mttr_min=(resolved_epoch-ack_epoch)/60\n| stats avg(mttr_min) as avg_mttr, p95(mttr_min) as p95_mttr by service\n| where avg_mttr > 60\n| sort -avg_mttr\n```\n\nUnderstanding this SPL\n\n**Mean Time to Recovery (MTTR)** — Time from incident detection or deploy failure to successful restore—core DORA metric.\n\nDocumented **Data sources**: Incident `created_at`, `resolved_at`; rollback `deploy_time`. **App/TA** (typical add-on context): PagerDuty/Opsgenie, deployment events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: pagerduty:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"pagerduty:incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mttr_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_mttr > 60` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (MTTR by service), Line chart (avg MTTR trend), Gauge (vs SLA).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "pagerduty"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.23",
              "n": "Change Failure Rate",
              "c": "high",
              "f": "beginner",
              "v": "Ratio of failed deployments or hotfix-required releases to total deployments—DORA metric.",
              "t": "Deployment + incident linkage",
              "d": "`deployment_event` outcome, post-deploy incident within 1h",
              "q": "index=cicd sourcetype=\"deployment_event\" environment=\"production\"\n| eval failed=if(status=\"failed\" OR incident_within_1h=\"true\",1,0)\n| stats sum(failed) as fails, count as total by application\n| eval cfr=round(fails/total*100,2)\n| where cfr > 15\n| sort -cfr",
              "m": "Flag incidents linked to version within 1h window. Target CFR <15% for mature teams. Review outliers in postmortems.",
              "z": "Bar chart (CFR by app), Line chart (CFR trend), Single value (org CFR).",
              "kfp": "Build failures spike during CI infrastructure issues, dependency registry outages, flaky test runs, or major framework upgrades.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Deployment + incident linkage.\n• Ensure the following data sources are available: `deployment_event` outcome, post-deploy incident within 1h.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFlag incidents linked to version within 1h window. Target CFR <15% for mature teams. Review outliers in postmortems.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"deployment_event\" environment=\"production\"\n| eval failed=if(status=\"failed\" OR incident_within_1h=\"true\",1,0)\n| stats sum(failed) as fails, count as total by application\n| eval cfr=round(fails/total*100,2)\n| where cfr > 15\n| sort -cfr\n```\n\nUnderstanding this SPL\n\n**Change Failure Rate** — Ratio of failed deployments or hotfix-required releases to total deployments—DORA metric.\n\nDocumented **Data sources**: `deployment_event` outcome, post-deploy incident within 1h. **App/TA** (typical add-on context): Deployment + incident linkage. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: deployment_event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"deployment_event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **failed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by application** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cfr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cfr > 15` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (CFR by app), Line chart (CFR trend), Single value (org CFR).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.24",
              "n": "Pipeline Secret Rotation Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "CI/CD credentials (vault tokens, cloud API keys) approaching expiry without rotation break builds and create risk.",
              "t": "Vault audit, cloud IAM credential reports",
              "d": "Secret `expires_at`, `last_rotated`",
              "q": "index=secrets sourcetype=\"vault:audit\" OR sourcetype=\"ci:credential_inventory\"\n| eval days_left=(expiry_epoch-now())/86400\n| where days_left < 14 AND days_left > 0\n| table secret_name, pipeline, days_left, owner\n| sort days_left",
              "m": "Export CI secret inventory from Vault or sealed secrets metadata. Alert at 14/7/1 days. Block pipeline if expired.",
              "z": "Table (expiring secrets), Gauge (compliance %), Timeline.",
              "kfp": "Spikes from new app deployments, infrastructure rebuilds (DR drills), or scheduled secret rotation.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vault audit, cloud IAM credential reports.\n• Ensure the following data sources are available: Secret `expires_at`, `last_rotated`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport CI secret inventory from Vault or sealed secrets metadata. Alert at 14/7/1 days. Block pipeline if expired.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=secrets sourcetype=\"vault:audit\" OR sourcetype=\"ci:credential_inventory\"\n| eval days_left=(expiry_epoch-now())/86400\n| where days_left < 14 AND days_left > 0\n| table secret_name, pipeline, days_left, owner\n| sort days_left\n```\n\nUnderstanding this SPL\n\n**Pipeline Secret Rotation Compliance** — CI/CD credentials (vault tokens, cloud API keys) approaching expiry without rotation break builds and create risk.\n\nDocumented **Data sources**: Secret `expires_at`, `last_rotated`. **App/TA** (typical add-on context): Vault audit, cloud IAM credential reports. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: secrets; **sourcetype**: vault:audit, ci:credential_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=secrets, sourcetype=\"vault:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left < 14 AND days_left > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Pipeline Secret Rotation Compliance**): table secret_name, pipeline, days_left, owner\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (expiring secrets), Gauge (compliance %), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch secrets and rotation in the delivery path so expiring credentials do not break pipelines quietly.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hashicorp"
              ],
              "em": [
                "hashicorp_vault"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.25",
              "n": "Build Queue Wait Time SLA",
              "c": "high",
              "f": "beginner",
              "v": "SLA-focused view of queue delay—% of jobs waiting >5 min (extends UC-12.2.6).",
              "t": "Jenkins queue API, GitLab pending jobs",
              "d": "`queue_wait_ms`, `queued_at`, `started_at`",
              "q": "index=cicd sourcetype=\"jenkins:build\"\n| eval wait_min=queue_wait_ms/60000\n| stats count(eval(wait_min>5)) as slow_queued, count as total\n| eval pct_slow=round(slow_queued/total*100,2)\n| where pct_slow > 10",
              "m": "Emit wait time per build. Alert if >10% of builds exceed 5 min queue in 1h window. Scale agents.",
              "z": "Line chart (p95 wait time), Histogram (wait distribution), Single value (% >5min).",
              "kfp": "Build times vary with cache state, runner availability, or test suite expansion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Jenkins queue API, GitLab pending jobs.\n• Ensure the following data sources are available: `queue_wait_ms`, `queued_at`, `started_at`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEmit wait time per build. Alert if >10% of builds exceed 5 min queue in 1h window. Scale agents.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"jenkins:build\"\n| eval wait_min=queue_wait_ms/60000\n| stats count(eval(wait_min>5)) as slow_queued, count as total\n| eval pct_slow=round(slow_queued/total*100,2)\n| where pct_slow > 10\n```\n\nUnderstanding this SPL\n\n**Build Queue Wait Time SLA** — SLA-focused view of queue delay—% of jobs waiting >5 min (extends UC-12.2.6).\n\nDocumented **Data sources**: `queue_wait_ms`, `queued_at`, `started_at`. **App/TA** (typical add-on context): Jenkins queue API, GitLab pending jobs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: jenkins:build. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"jenkins:build\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **wait_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **pct_slow** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct_slow > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95 wait time), Histogram (wait distribution), Single value (% >5min).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch queueing and wait time so developers are not stuck behind broken runners or clogged pipelines.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gitlab",
                "jenkins"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.26",
              "n": "Test Coverage Regression",
              "c": "high",
              "f": "beginner",
              "v": "PRs that reduce line coverage vs main branch—visibility for governance (pair with CI gates).",
              "t": "Cobertura/JaCoCo XML to HEC",
              "d": "`coverage_pct` per branch, PR number",
              "q": "index=cicd sourcetype=\"test_coverage\" branch=main\n| eventstats latest(coverage_pct) as main_cov by project\n| join max=1 project [search index=cicd sourcetype=\"test_coverage\" branch!=main | fields project, coverage_pct, pr]\n| eval delta=coverage_pct-main_cov\n| where delta < -1\n| table project, pr, coverage_pct, main_cov, delta",
              "m": "Compare PR coverage to main on each build. Alert on >1% drop. Block merge in CI when integrated.",
              "z": "Table (regressions), Bar chart (delta by project), Line chart (main coverage trend).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cobertura/JaCoCo XML to HEC.\n• Ensure the following data sources are available: `coverage_pct` per branch, PR number.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare PR coverage to main on each build. Alert on >1% drop. Block merge in CI when integrated.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"test_coverage\" branch=main\n| eventstats latest(coverage_pct) as main_cov by project\n| join max=1 project [search index=cicd sourcetype=\"test_coverage\" branch!=main | fields project, coverage_pct, pr]\n| eval delta=coverage_pct-main_cov\n| where delta < -1\n| table project, pr, coverage_pct, main_cov, delta\n```\n\nUnderstanding this SPL\n\n**Test Coverage Regression** — PRs that reduce line coverage vs main branch—visibility for governance (pair with CI gates).\n\nDocumented **Data sources**: `coverage_pct` per branch, PR number. **App/TA** (typical add-on context): Cobertura/JaCoCo XML to HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: test_coverage. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"test_coverage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eventstats` rolls up events into metrics; results are split **by project** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delta < -1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Test Coverage Regression**): table project, pr, coverage_pct, main_cov, delta\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (regressions), Bar chart (delta by project), Line chart (main coverage trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch test signals so coverage drops and flaky tests do not hide real defects.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.2.27",
              "n": "Pipeline Resource Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "CPU/memory per runner job—right-sizing and spot vs on-demand mix.",
              "t": "Kubernetes metrics, Jenkins Prometheus plugin",
              "d": "`container_cpu_usage_seconds`, `job_duration`, resource requests",
              "q": "index=infra sourcetype=\"kube:pod_metrics\" label_app=\"ci-runner\"\n| bin _time span=5m\n| stats avg(cpu_cores) as avg_cpu by pod_name\n| join max=1 pod_name [search index=cicd sourcetype=\"jenkins:build\" | stats avg(duration) as job_dur by executor_pod]\n| table pod_name, avg_cpu, job_dur",
              "m": "Correlate runner pods with jobs. Identify over-provisioned runners. Recommend requests/limits tuning.",
              "z": "Table (utilization by runner), Bar chart (avg CPU per job type), Line chart (efficiency trend).",
              "kfp": "Pipeline noise during blue/green cutovers, dependency cache cold-starts, or scheduled maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kubernetes metrics, Jenkins Prometheus plugin.\n• Ensure the following data sources are available: `container_cpu_usage_seconds`, `job_duration`, resource requests.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate runner pods with jobs. Identify over-provisioned runners. Recommend requests/limits tuning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=infra sourcetype=\"kube:pod_metrics\" label_app=\"ci-runner\"\n| bin _time span=5m\n| stats avg(cpu_cores) as avg_cpu by pod_name\n| join max=1 pod_name [search index=cicd sourcetype=\"jenkins:build\" | stats avg(duration) as job_dur by executor_pod]\n| table pod_name, avg_cpu, job_dur\n```\n\nUnderstanding this SPL\n\n**Pipeline Resource Utilization** — CPU/memory per runner job—right-sizing and spot vs on-demand mix.\n\nDocumented **Data sources**: `container_cpu_usage_seconds`, `job_duration`, resource requests. **App/TA** (typical add-on context): Kubernetes metrics, Jenkins Prometheus plugin. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: infra; **sourcetype**: kube:pod_metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=infra, sourcetype=\"kube:pod_metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by pod_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Pipeline Resource Utilization**): table pod_name, avg_cpu, job_dur\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (utilization by runner), Bar chart (avg CPU per job type), Line chart (efficiency trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "jenkins",
                "kubernetes",
                "prometheus"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.9,
          "qd": {
            "gold": 0,
            "silver": 2,
            "bronze": 24,
            "none": 0
          }
        },
        {
          "i": "12.3",
          "n": "Artifact & Package Management",
          "u": [
            {
              "i": "12.3.1",
              "n": "Artifact Repository Health",
              "c": "medium",
              "f": "intermediate",
              "v": "Full artifact repositories prevent builds from publishing. Storage monitoring and cleanup policy verification ensures CI/CD continuity.",
              "t": "Custom API input (Artifactory/Nexus)",
              "d": "Repository storage metrics, cleanup policy logs",
              "q": "index=devops sourcetype=\"artifactory:storage\"\n| eval pct_used=round(used_space/total_space*100,1)\n| where pct_used > 80\n| table repository, used_space_gb, total_space_gb, pct_used",
              "m": "Poll Artifactory/Nexus storage API daily. Track storage per repository. Alert at 80% capacity. Verify cleanup policies are running and effective. Report on artifact growth rate.",
              "z": "Gauge (% capacity used), Bar chart (storage by repository), Line chart (storage trend).",
              "kfp": "Alerts from scheduled scans, pilot repositories, or vendor API throttling during bulk operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input (Artifactory/Nexus).\n• Ensure the following data sources are available: Repository storage metrics, cleanup policy logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Artifactory/Nexus storage API daily. Track storage per repository. Alert at 80% capacity. Verify cleanup policies are running and effective. Report on artifact growth rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"artifactory:storage\"\n| eval pct_used=round(used_space/total_space*100,1)\n| where pct_used > 80\n| table repository, used_space_gb, total_space_gb, pct_used\n```\n\nUnderstanding this SPL\n\n**Artifact Repository Health** — Full artifact repositories prevent builds from publishing. Storage monitoring and cleanup policy verification ensures CI/CD continuity.\n\nDocumented **Data sources**: Repository storage metrics, cleanup policy logs. **App/TA** (typical add-on context): Custom API input (Artifactory/Nexus). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: artifactory:storage. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"artifactory:storage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pct_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct_used > 80` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Artifact Repository Health**): table repository, used_space_gb, total_space_gb, pct_used\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (% capacity used), Bar chart (storage by repository), Line chart (storage trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch packages and artifacts so supply-chain risk, storage, and policy checks stay under control.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.3.2",
              "n": "Dependency Vulnerability Alerts",
              "c": "critical",
              "f": "beginner",
              "v": "Vulnerable dependencies in the software supply chain are a primary attack vector. Tracking ensures timely patching.",
              "t": "Snyk/Dependabot webhook",
              "d": "SCA tool output (Snyk, Dependabot, GitHub Advisory)",
              "q": "index=devops sourcetype=\"snyk:vulnerability\"\n| where severity IN (\"critical\",\"high\")\n| stats count by project, package_name, cve_id, severity\n| sort -severity, -count",
              "m": "Ingest SCA scan results. Track vulnerable dependencies by project, severity, and package. Alert on new critical/high findings. Track remediation time. Report on dependency health per team.",
              "z": "Table (vulnerable dependencies), Bar chart (vulns by project), Line chart (vulnerability trend), Single value (critical vulns count).",
              "kfp": "Alerts from scheduled scans, pilot repositories, or vendor API throttling during bulk operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Snyk/Dependabot webhook.\n• Ensure the following data sources are available: SCA tool output (Snyk, Dependabot, GitHub Advisory).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest SCA scan results. Track vulnerable dependencies by project, severity, and package. Alert on new critical/high findings. Track remediation time. Report on dependency health per team.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"snyk:vulnerability\"\n| where severity IN (\"critical\",\"high\")\n| stats count by project, package_name, cve_id, severity\n| sort -severity, -count\n```\n\nUnderstanding this SPL\n\n**Dependency Vulnerability Alerts** — Vulnerable dependencies in the software supply chain are a primary attack vector. Tracking ensures timely patching.\n\nDocumented **Data sources**: SCA tool output (Snyk, Dependabot, GitHub Advisory). **App/TA** (typical add-on context): Snyk/Dependabot webhook. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: snyk:vulnerability. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"snyk:vulnerability\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where severity IN (\"critical\",\"high\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by project, package_name, cve_id, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (vulnerable dependencies), Bar chart (vulns by project), Line chart (vulnerability trend), Single value (critical vulns count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.06",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.06 (Patch operating systems) is enforced — Splunk UC-12.3.2: Dependency Vulnerability Alerts.",
                  "ea": "Saved search 'UC-12.3.2' running on SCA tool output (Snyk, Dependabot, GitHub Advisory), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/essential-eight/essential-eight-maturity-model"
                },
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU CRA Art.13 (Obligations of manufacturers) is enforced — Splunk UC-12.3.2: Dependency Vulnerability Alerts.",
                  "ea": "Saved search 'UC-12.3.2' running on SCA tool output (Snyk, Dependabot, GitHub Advisory), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2024/2847/oj"
                },
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-12.3.2: Dependency Vulnerability Alerts.",
                  "ea": "Saved search 'UC-12.3.2' running on SCA tool output (Snyk, Dependabot, GitHub Advisory), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2024/2847/oj"
                },
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.95",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PSD2 Art.95 (Management of operational and security risks) is enforced — Splunk UC-12.3.2: Dependency Vulnerability Alerts.",
                  "ea": "Saved search 'UC-12.3.2' running on SCA tool output (Snyk, Dependabot, GitHub Advisory), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2015/2366/oj"
                }
              ],
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "12.3.3",
              "n": "Package Download Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "Unusual package download patterns may indicate dependency confusion attacks or compromised internal packages.",
              "t": "Artifactory/Nexus access logs",
              "d": "Repository access logs (download events)",
              "q": "index=devops sourcetype=\"artifactory:access\"\n| stats count by package_name, client_ip\n| eventstats avg(count) as avg_downloads, stdev(count) as stdev_downloads by package_name\n| where count > avg_downloads + 3*stdev_downloads",
              "m": "Monitor package download patterns. Baseline normal download volumes per package. Alert on statistical outliers. Watch for downloads of internal packages from external IPs. Track new/unknown packages being introduced.",
              "z": "Table (anomalous downloads), Bar chart (top downloaded packages), Line chart (download volume trend).",
              "kfp": "Alerts from scheduled scans, pilot repositories, or vendor API throttling during bulk operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Artifactory/Nexus access logs.\n• Ensure the following data sources are available: Repository access logs (download events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor package download patterns. Baseline normal download volumes per package. Alert on statistical outliers. Watch for downloads of internal packages from external IPs. Track new/unknown packages being introduced.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"artifactory:access\"\n| stats count by package_name, client_ip\n| eventstats avg(count) as avg_downloads, stdev(count) as stdev_downloads by package_name\n| where count > avg_downloads + 3*stdev_downloads\n```\n\nUnderstanding this SPL\n\n**Package Download Anomalies** — Unusual package download patterns may indicate dependency confusion attacks or compromised internal packages.\n\nDocumented **Data sources**: Repository access logs (download events). **App/TA** (typical add-on context): Artifactory/Nexus access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: artifactory:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"artifactory:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by package_name, client_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by package_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > avg_downloads + 3*stdev_downloads` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalous downloads), Bar chart (top downloaded packages), Line chart (download volume trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch packages and artifacts so supply-chain risk, storage, and policy checks stay under control.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.3.4",
              "n": "License Compliance Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Open-source license violations create legal risk. Automated tracking ensures compliance before code reaches production.",
              "t": "SCA tool output",
              "d": "SCA license scan results (Snyk, FOSSA, WhiteSource)",
              "q": "index=devops sourcetype=\"sca:license\"\n| where license_risk IN (\"high\",\"critical\") OR license IN (\"GPL-3.0\",\"AGPL-3.0\")\n| stats count by project, package_name, license, license_risk\n| sort -license_risk",
              "m": "Ingest SCA license scan results. Track license types across all projects. Alert on copyleft licenses in commercial products. Report on license distribution for legal review. Block deployments with policy violations.",
              "z": "Table (license risks), Pie chart (license distribution), Bar chart (risks by project).",
              "kfp": "Alerts from scheduled scans, pilot repositories, or vendor API throttling during bulk operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SCA tool output.\n• Ensure the following data sources are available: SCA license scan results (Snyk, FOSSA, WhiteSource).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest SCA license scan results. Track license types across all projects. Alert on copyleft licenses in commercial products. Report on license distribution for legal review. Block deployments with policy violations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"sca:license\"\n| where license_risk IN (\"high\",\"critical\") OR license IN (\"GPL-3.0\",\"AGPL-3.0\")\n| stats count by project, package_name, license, license_risk\n| sort -license_risk\n```\n\nUnderstanding this SPL\n\n**License Compliance Tracking** — Open-source license violations create legal risk. Automated tracking ensures compliance before code reaches production.\n\nDocumented **Data sources**: SCA license scan results (Snyk, FOSSA, WhiteSource). **App/TA** (typical add-on context): SCA tool output. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: sca:license. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"sca:license\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where license_risk IN (\"high\",\"critical\") OR license IN (\"GPL-3.0\",\"AGPL-3.0\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by project, package_name, license, license_risk** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (license risks), Pie chart (license distribution), Bar chart (risks by project).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.3.5",
              "n": "Terraform State Drift Detection",
              "c": "medium",
              "f": "advanced",
              "v": "Planned vs. applied resource count and drift between declared and actual infrastructure indicate manual changes or state corruption. Detection ensures IaC remains source of truth and supports compliance audits.",
              "t": "Custom (terraform plan output)",
              "d": "terraform plan JSON output (-json flag)",
              "q": "index=iac sourcetype=\"terraform:plan_json\"\n| eval add_count=coalesce(resource_changes_add, 0), change_count=coalesce(resource_changes_change, 0), destroy_count=coalesce(resource_changes_destroy, 0)\n| where add_count > 0 OR change_count > 0 OR destroy_count > 0\n| table _time, workspace, plan_mode, add_count, change_count, destroy_count, resource_changes\n| sort -_time",
              "m": "Run `terraform plan -json` in CI or on schedule. Parse JSON output for resource_changes (add, change, destroy). Ingest to Splunk via HEC. Alert on any unexpected changes (drift) in detect-only runs. Track planned vs. applied resource counts per workspace. Correlate drift events with cloud provider change logs. Enforce drift remediation SLA and report on compliance.",
              "z": "Table (drift events with resource details), Single value (workspaces with drift), Bar chart (add/change/destroy by workspace), Timeline (drift detection events).",
              "kfp": "Drift from manual console fixes during incidents, automated tool changes, or planned config updates outside IaC.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (terraform plan output).\n• Ensure the following data sources are available: terraform plan JSON output (-json flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun `terraform plan -json` in CI or on schedule. Parse JSON output for resource_changes (add, change, destroy). Ingest to Splunk via HEC. Alert on any unexpected changes (drift) in detect-only runs. Track planned vs. applied resource counts per workspace. Correlate drift events with cloud provider change logs. Enforce drift remediation SLA and report on compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iac sourcetype=\"terraform:plan_json\"\n| eval add_count=coalesce(resource_changes_add, 0), change_count=coalesce(resource_changes_change, 0), destroy_count=coalesce(resource_changes_destroy, 0)\n| where add_count > 0 OR change_count > 0 OR destroy_count > 0\n| table _time, workspace, plan_mode, add_count, change_count, destroy_count, resource_changes\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Terraform State Drift Detection** — Planned vs. applied resource count and drift between declared and actual infrastructure indicate manual changes or state corruption. Detection ensures IaC remains source of truth and supports compliance audits.\n\nDocumented **Data sources**: terraform plan JSON output (-json flag). **App/TA** (typical add-on context): Custom (terraform plan output). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iac; **sourcetype**: terraform:plan_json. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iac, sourcetype=\"terraform:plan_json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **add_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where add_count > 0 OR change_count > 0 OR destroy_count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Terraform State Drift Detection**): table _time, workspace, plan_mode, add_count, change_count, destroy_count, resource_changes\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (drift events with resource details), Single value (workspaces with drift), Bar chart (add/change/destroy by workspace), Timeline (drift detection events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch infrastructure-as-code runs and policy so changes to systems stay visible and governed.",
              "mtype": [
                "Configuration",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hashicorp"
              ],
              "em": [
                "hashicorp_terraform"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.3.6",
              "n": "Container Image Vulnerability Scan Failures",
              "c": "critical",
              "f": "beginner",
              "v": "CI gate failures when Trivy/Grype/ECR scanning cannot pull image or scanner errors—distinct from found CVEs (UC-12.3.2).",
              "t": "Trivy JSON output, Harbor webhook",
              "d": "Scan exit code, `Status: ERROR` in SARIF",
              "q": "index=devops sourcetype=\"container:scan\"\n| where scan_status!=\"SUCCESS\" OR match(_raw,\"(?i)(timeout|failed to pull|manifest unknown)\")\n| stats count by image_ref, scanner, error_message\n| sort -count",
              "m": "Alert on scanner infrastructure failures separately from policy violations. Retry with backoff; page platform team on registry outages.",
              "z": "Table (failed scans), Line chart (failure rate), Single value (open scan errors).",
              "kfp": "Alerts from scheduled scans, pilot repositories, or vendor API throttling during bulk operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Trivy JSON output, Harbor webhook.\n• Ensure the following data sources are available: Scan exit code, `Status: ERROR` in SARIF.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on scanner infrastructure failures separately from policy violations. Retry with backoff; page platform team on registry outages.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"container:scan\"\n| where scan_status!=\"SUCCESS\" OR match(_raw,\"(?i)(timeout|failed to pull|manifest unknown)\")\n| stats count by image_ref, scanner, error_message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Container Image Vulnerability Scan Failures** — CI gate failures when Trivy/Grype/ECR scanning cannot pull image or scanner errors—distinct from found CVEs (UC-12.3.2).\n\nDocumented **Data sources**: Scan exit code, `Status: ERROR` in SARIF. **App/TA** (typical add-on context): Trivy JSON output, Harbor webhook. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: container:scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"container:scan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where scan_status!=\"SUCCESS\" OR match(_raw,\"(?i)(timeout|failed to pull|manifest unknown)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by image_ref, scanner, error_message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed scans), Line chart (failure rate), Single value (open scan errors).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.3.7",
              "n": "Package Dependency Audit Alerts",
              "c": "high",
              "f": "beginner",
              "v": "New AGPL/GPL or typosquat package names in lockfiles—policy engine output beyond CVE severity.",
              "t": "FOSSA, `npm audit` JSON, OSV",
              "d": "Policy violation events `license_policy`, `dependency_policy`",
              "q": "index=devops sourcetype=\"sca:policy\"\n| where policy_result=\"violation\" OR risk=\"blocked\"\n| stats count by project, package_name, policy_name\n| sort -count",
              "m": "Map policies to Splunk alerts. Weekly license review for new copyleft in commercial products.",
              "z": "Table (violations), Bar chart (by policy), Timeline.",
              "kfp": "Alerts from scheduled scans, pilot repositories, or vendor API throttling during bulk operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: FOSSA, `npm audit` JSON, OSV.\n• Ensure the following data sources are available: Policy violation events `license_policy`, `dependency_policy`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap policies to Splunk alerts. Weekly license review for new copyleft in commercial products.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"sca:policy\"\n| where policy_result=\"violation\" OR risk=\"blocked\"\n| stats count by project, package_name, policy_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Package Dependency Audit Alerts** — New AGPL/GPL or typosquat package names in lockfiles—policy engine output beyond CVE severity.\n\nDocumented **Data sources**: Policy violation events `license_policy`, `dependency_policy`. **App/TA** (typical add-on context): FOSSA, `npm audit` JSON, OSV. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: sca:policy. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"sca:policy\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where policy_result=\"violation\" OR risk=\"blocked\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by project, package_name, policy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Bar chart (by policy), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch packages and artifacts so supply-chain risk, storage, and policy checks stay under control.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.3.8",
              "n": "Artifact Retention Policy Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Verifies cleanup jobs deleted snapshots per policy; stale artifacts past retention indicate failed garbage collection.",
              "t": "Artifactory/Nexus repository metadata API",
              "d": "Artifact `last_downloaded`, `created`, retention rule ID",
              "q": "index=devops sourcetype=\"artifactory:artifact_age\"\n| eval age_days=(now()-created_epoch)/86400\n| where age_days > retention_days + 7\n| stats count by repository, path\n| sort -count",
              "m": "Compare artifact age to configured retention. Alert on drift >7 days past policy. Audit quarterly for legal hold exceptions.",
              "z": "Table (over-retained artifacts), Bar chart (by repo), Single value (non-compliant count).",
              "kfp": "Alerts from scheduled scans, pilot repositories, or vendor API throttling during bulk operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Artifactory/Nexus repository metadata API.\n• Ensure the following data sources are available: Artifact `last_downloaded`, `created`, retention rule ID.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare artifact age to configured retention. Alert on drift >7 days past policy. Audit quarterly for legal hold exceptions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"artifactory:artifact_age\"\n| eval age_days=(now()-created_epoch)/86400\n| where age_days > retention_days + 7\n| stats count by repository, path\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Artifact Retention Policy Compliance** — Verifies cleanup jobs deleted snapshots per policy; stale artifacts past retention indicate failed garbage collection.\n\nDocumented **Data sources**: Artifact `last_downloaded`, `created`, retention rule ID. **App/TA** (typical add-on context): Artifactory/Nexus repository metadata API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: artifactory:artifact_age. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"artifactory:artifact_age\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days > retention_days + 7` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by repository, path** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (over-retained artifacts), Bar chart (by repo), Single value (non-compliant count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch packages and artifacts so supply-chain risk, storage, and policy checks stay under control.",
              "mtype": [
                "Compliance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.3.9",
              "n": "SBOM Generation Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "CycloneDX/SPDX missing for release builds breaks supply-chain attestation requirements (EO 14028).",
              "t": "Syft, build pipeline attestations",
              "d": "`sbom_generated=true`, artifact `sbom_path`",
              "q": "index=cicd sourcetype=\"release:build\"\n| where sbom_present=\"false\" AND environment=\"release\"\n| table _time, application, version, build_id\n| sort -_time",
              "m": "Fail release stage if SBOM not uploaded to blob store. Track SBOM format version (CycloneDX 1.5).",
              "z": "Table (missing SBOM), Single value (compliance %), Line chart (trend).",
              "kfp": "Alerts from scheduled scans, pilot repositories, or vendor API throttling during bulk operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Syft, build pipeline attestations.\n• Ensure the following data sources are available: `sbom_generated=true`, artifact `sbom_path`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFail release stage if SBOM not uploaded to blob store. Track SBOM format version (CycloneDX 1.5).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"release:build\"\n| where sbom_present=\"false\" AND environment=\"release\"\n| table _time, application, version, build_id\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**SBOM Generation Compliance** — CycloneDX/SPDX missing for release builds breaks supply-chain attestation requirements (EO 14028).\n\nDocumented **Data sources**: `sbom_generated=true`, artifact `sbom_path`. **App/TA** (typical add-on context): Syft, build pipeline attestations. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: release:build. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"release:build\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where sbom_present=\"false\" AND environment=\"release\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SBOM Generation Compliance**): table _time, application, version, build_id\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing SBOM), Single value (compliance %), Line chart (trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch packages and artifacts so supply-chain risk, storage, and policy checks stay under control.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.3.10",
              "n": "Artifact Signing Verification",
              "c": "critical",
              "f": "intermediate",
              "v": "Cosign/Notary verification results at deploy time—signature missing or wrong key.",
              "t": "Cosign, Sigstore Rekor",
              "d": "Verify command JSON `verified`, `issuer`; lookup `trusted_signing_issuers.csv` maintained by the platform team with `issuer` → `trusted` (true/false)",
              "q": "index=cicd sourcetype=\"cosign:verify\"\n| lookup trusted_signing_issuers.csv issuer OUTPUT trusted\n| where verified=\"false\" OR trusted!=\"true\"\n| table _time, image, issuer, reason",
              "m": "Ingest verification from CD pipeline. Block deploy on false. Rotate keys per runbook.",
              "z": "Table (failed verifications), Timeline, Single value (failed count).",
              "kfp": "Alerts from scheduled scans, pilot repositories, or vendor API throttling during bulk operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cosign, Sigstore Rekor.\n• Ensure the following data sources are available: Verify command JSON `verified`, `issuer`; lookup `trusted_signing_issuers.csv` maintained by the platform team with `issuer` → `trusted` (true/false).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest verification from CD pipeline. Block deploy on false. Rotate keys per runbook.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"cosign:verify\"\n| lookup trusted_signing_issuers.csv issuer OUTPUT trusted\n| where verified=\"false\" OR trusted!=\"true\"\n| table _time, image, issuer, reason\n```\n\nUnderstanding this SPL\n\n**Artifact Signing Verification** — Cosign/Notary verification results at deploy time—signature missing or wrong key.\n\nDocumented **Data sources**: Verify command JSON `verified`, `issuer`; lookup `trusted_signing_issuers.csv` maintained by the platform team with `issuer` → `trusted` (true/false). **App/TA** (typical add-on context): Cosign, Sigstore Rekor. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: cosign:verify. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"cosign:verify\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where verified=\"false\" OR trusted!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Artifact Signing Verification**): table _time, image, issuer, reason\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed verifications), Timeline, Single value (failed count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch packages and artifacts so supply-chain risk, storage, and policy checks stay under control.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.01",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.01 (Application control) is enforced — Splunk UC-12.3.10: Artifact Signing Verification.",
                  "ea": "Saved search 'UC-12.3.10' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/essential-eight/essential-eight-maturity-model"
                },
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU CRA Art.13 (Obligations of manufacturers) is enforced — Splunk UC-12.3.10: Artifact Signing Verification.",
                  "ea": "Saved search 'UC-12.3.10' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2024/2847/oj"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.3.11",
              "n": "Package Provenance Tracking",
              "c": "high",
              "f": "advanced",
              "v": "SLSA provenance predicate links artifact digest to git source SHA and builder ID—detects builds not from trusted pipelines.",
              "t": "SLSA provenance JSON, GitHub attestations",
              "d": "`predicate.buildDefinition`, `subject.digest`",
              "q": "index=cicd sourcetype=\"slsa:provenance\"\n| where builder_id!=\"https://github.com/org/trusted-workflow\" OR commit_ref!=expected_git_sha\n| table _time, artifact_digest, builder_id, commit_ref, expected_git_sha",
              "m": "Store expected builder allowlist. Alert on provenance mismatch for prod images.",
              "z": "Table (mismatches), Bar chart (by builder), Timeline.",
              "kfp": "Alerts from scheduled scans, pilot repositories, or vendor API throttling during bulk operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SLSA provenance JSON, GitHub attestations.\n• Ensure the following data sources are available: `predicate.buildDefinition`, `subject.digest`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStore expected builder allowlist. Alert on provenance mismatch for prod images.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"slsa:provenance\"\n| where builder_id!=\"https://github.com/org/trusted-workflow\" OR commit_ref!=expected_git_sha\n| table _time, artifact_digest, builder_id, commit_ref, expected_git_sha\n```\n\nUnderstanding this SPL\n\n**Package Provenance Tracking** — SLSA provenance predicate links artifact digest to git source SHA and builder ID—detects builds not from trusted pipelines.\n\nDocumented **Data sources**: `predicate.buildDefinition`, `subject.digest`. **App/TA** (typical add-on context): SLSA provenance JSON, GitHub attestations. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: slsa:provenance. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"slsa:provenance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where builder_id!=\"https://github.com/org/trusted-workflow\" OR commit_ref!=expected_git_sha` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Package Provenance Tracking**): table _time, artifact_digest, builder_id, commit_ref, expected_git_sha\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (mismatches), Bar chart (by builder), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch packages and artifacts so supply-chain risk, storage, and policy checks stay under control.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.3.12",
              "n": "Registry Storage Growth",
              "c": "medium",
              "f": "beginner",
              "v": "Daily growth rate of container/NPM registry—forecast disk exhaustion (extends UC-12.3.1).",
              "t": "Harbor/ECR/Artifactory storage API",
              "d": "Total bytes, per-repo breakdown",
              "q": "index=devops sourcetype=\"registry:storage\"\n| timechart span=1d sum(size_bytes) as total by registry_name\n| predict total as forecast",
              "m": "Alert when weekly growth >20% or forecast crosses 85% capacity in <90 days. Recommend GC tuning.",
              "z": "Line chart (storage growth), Area chart (by project), Single value (days to full).",
              "kfp": "Alerts from scheduled scans, pilot repositories, or vendor API throttling during bulk operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Harbor/ECR/Artifactory storage API.\n• Ensure the following data sources are available: Total bytes, per-repo breakdown.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert when weekly growth >20% or forecast crosses 85% capacity in <90 days. Recommend GC tuning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"registry:storage\"\n| timechart span=1d sum(size_bytes) as total by registry_name\n| predict total as forecast\n```\n\nUnderstanding this SPL\n\n**Registry Storage Growth** — Daily growth rate of container/NPM registry—forecast disk exhaustion (extends UC-12.3.1).\n\nDocumented **Data sources**: Total bytes, per-repo breakdown. **App/TA** (typical add-on context): Harbor/ECR/Artifactory storage API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: registry:storage. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"registry:storage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by registry_name** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Registry Storage Growth**): predict total as forecast\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (storage growth), Area chart (by project), Single value (days to full).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch packages and artifacts so supply-chain risk, storage, and policy checks stay under control.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.5,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 11,
            "none": 0
          }
        },
        {
          "i": "12.4",
          "n": "Infrastructure as Code",
          "u": [
            {
              "i": "12.4.1",
              "n": "Terraform Plan/Apply Tracking",
              "c": "high",
              "f": "beginner",
              "v": "Every Terraform apply changes infrastructure. Full audit trail enables change management, impact analysis, and rollback decisions.",
              "t": "Terraform Cloud API, CI/CD output parsing",
              "d": "Terraform CLI output (plan/apply), Terraform Cloud run events",
              "q": "index=iac sourcetype=\"terraform:run\"\n| table _time, workspace, user, action, resources_added, resources_changed, resources_destroyed, status\n| sort -_time",
              "m": "Send Terraform run events to Splunk via HEC (from CI/CD pipeline or Terraform Cloud webhooks). Track resource changes per workspace. Alert on destroy operations. Correlate infrastructure changes with monitoring alerts.",
              "z": "Table (recent Terraform runs), Timeline (apply events), Bar chart (resource changes by workspace), Single value (applies today).",
              "kfp": "Signals that reflect approved hotfixes, provider upgrades, or brownout tests rather than misuse.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Terraform Cloud API, CI/CD output parsing.\n• Ensure the following data sources are available: Terraform CLI output (plan/apply), Terraform Cloud run events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSend Terraform run events to Splunk via HEC (from CI/CD pipeline or Terraform Cloud webhooks). Track resource changes per workspace. Alert on destroy operations. Correlate infrastructure changes with monitoring alerts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iac sourcetype=\"terraform:run\"\n| table _time, workspace, user, action, resources_added, resources_changed, resources_destroyed, status\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Terraform Plan/Apply Tracking** — Every Terraform apply changes infrastructure. Full audit trail enables change management, impact analysis, and rollback decisions.\n\nDocumented **Data sources**: Terraform CLI output (plan/apply), Terraform Cloud run events. **App/TA** (typical add-on context): Terraform Cloud API, CI/CD output parsing. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iac; **sourcetype**: terraform:run. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iac, sourcetype=\"terraform:run\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Terraform Plan/Apply Tracking**): table _time, workspace, user, action, resources_added, resources_changed, resources_destroyed, status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recent Terraform runs), Timeline (apply events), Bar chart (resource changes by workspace), Single value (applies today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch infrastructure-as-code runs and policy so changes to systems stay visible and governed.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hashicorp"
              ],
              "em": [
                "hashicorp_terraform"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.2",
              "n": "Configuration Drift Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Drift from declared IaC state indicates manual changes that bypass change control, creating inconsistency and security risks.",
              "t": "Terraform plan output, cloud config monitoring",
              "d": "Terraform plan output (no-change runs showing drift), AWS Config",
              "q": "index=iac sourcetype=\"terraform:plan\"\n| where drift_detected=\"true\"\n| table _time, workspace, resource_type, resource_name, drift_detail\n| sort -_time",
              "m": "Schedule periodic `terraform plan` runs (detect-only). Parse output for unexpected changes. Alert on any drift detected. Correlate with cloud provider change logs to identify who made manual changes. Enforce drift remediation SLA.",
              "z": "Table (drifted resources), Single value (resources with drift), Bar chart (drift by workspace), Timeline (drift events).",
              "kfp": "Drift from manual console fixes during incidents, automated tool changes, or planned config updates outside IaC.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Terraform plan output, cloud config monitoring.\n• Ensure the following data sources are available: Terraform plan output (no-change runs showing drift), AWS Config.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule periodic `terraform plan` runs (detect-only). Parse output for unexpected changes. Alert on any drift detected. Correlate with cloud provider change logs to identify who made manual changes. Enforce drift remediation SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iac sourcetype=\"terraform:plan\"\n| where drift_detected=\"true\"\n| table _time, workspace, resource_type, resource_name, drift_detail\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Configuration Drift Detection** — Drift from declared IaC state indicates manual changes that bypass change control, creating inconsistency and security risks.\n\nDocumented **Data sources**: Terraform plan output (no-change runs showing drift), AWS Config. **App/TA** (typical add-on context): Terraform plan output, cloud config monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iac; **sourcetype**: terraform:plan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iac, sourcetype=\"terraform:plan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where drift_detected=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Configuration Drift Detection**): table _time, workspace, resource_type, resource_name, drift_detail\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (drifted resources), Single value (resources with drift), Bar chart (drift by workspace), Timeline (drift events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for configuration drift so live systems do not quietly diverge from what we declared in code.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hashicorp"
              ],
              "em": [
                "hashicorp_terraform"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.3",
              "n": "Ansible Playbook Outcomes",
              "c": "medium",
              "f": "beginner",
              "v": "Tracking Ansible run results ensures configuration management is working. Failed tasks indicate systems in unknown state.",
              "t": "Ansible callback plugin (Splunk HEC callback)",
              "d": "Ansible callback output (play results, task results)",
              "q": "index=iac sourcetype=\"ansible:result\"\n| stats sum(ok) as ok, sum(changed) as changed, sum(failed) as failed, sum(unreachable) as unreachable by playbook, host\n| where failed > 0 OR unreachable > 0",
              "m": "Configure Ansible Splunk callback plugin to send results to HEC. Track ok/changed/failed/unreachable counts per playbook and host. Alert on failed or unreachable hosts. Report on configuration management coverage.",
              "z": "Table (playbook results), Status grid (host × playbook status), Bar chart (failures by playbook), Single value (success rate).",
              "kfp": "Signals that reflect approved hotfixes, provider upgrades, or brownout tests rather than misuse.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Ansible callback plugin (Splunk HEC callback).\n• Ensure the following data sources are available: Ansible callback output (play results, task results).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Ansible Splunk callback plugin to send results to HEC. Track ok/changed/failed/unreachable counts per playbook and host. Alert on failed or unreachable hosts. Report on configuration management coverage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iac sourcetype=\"ansible:result\"\n| stats sum(ok) as ok, sum(changed) as changed, sum(failed) as failed, sum(unreachable) as unreachable by playbook, host\n| where failed > 0 OR unreachable > 0\n```\n\nUnderstanding this SPL\n\n**Ansible Playbook Outcomes** — Tracking Ansible run results ensures configuration management is working. Failed tasks indicate systems in unknown state.\n\nDocumented **Data sources**: Ansible callback output (play results, task results). **App/TA** (typical add-on context): Ansible callback plugin (Splunk HEC callback). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iac; **sourcetype**: ansible:result. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iac, sourcetype=\"ansible:result\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by playbook, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failed > 0 OR unreachable > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (playbook results), Status grid (host × playbook status), Bar chart (failures by playbook), Single value (success rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch infrastructure-as-code runs and policy so changes to systems stay visible and governed.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "ansible"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.4",
              "n": "Puppet/Chef Compliance Reports",
              "c": "medium",
              "f": "beginner",
              "v": "Configuration management compliance ensures systems match desired state. Non-compliance indicates security or operational risk.",
              "t": "Puppet/Chef report forwarding",
              "d": "Puppet agent reports, Chef client run reports",
              "q": "index=iac sourcetype=\"puppet:report\"\n| stats latest(status) as status, latest(corrective_changes) as corrective by certname\n| where status=\"failed\" OR corrective > 0",
              "m": "Forward Puppet/Chef reports to Splunk. Track agent compliance rates. Alert on failed runs (nodes in non-compliant state). Monitor corrective changes (Puppet remediated drift). Report on fleet compliance percentage.",
              "z": "Single value (compliance %), Table (non-compliant nodes), Pie chart (status distribution), Line chart (compliance trend).",
              "kfp": "Signals that reflect approved hotfixes, provider upgrades, or brownout tests rather than misuse.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Puppet/Chef report forwarding.\n• Ensure the following data sources are available: Puppet agent reports, Chef client run reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Puppet/Chef reports to Splunk. Track agent compliance rates. Alert on failed runs (nodes in non-compliant state). Monitor corrective changes (Puppet remediated drift). Report on fleet compliance percentage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iac sourcetype=\"puppet:report\"\n| stats latest(status) as status, latest(corrective_changes) as corrective by certname\n| where status=\"failed\" OR corrective > 0\n```\n\nUnderstanding this SPL\n\n**Puppet/Chef Compliance Reports** — Configuration management compliance ensures systems match desired state. Non-compliance indicates security or operational risk.\n\nDocumented **Data sources**: Puppet agent reports, Chef client run reports. **App/TA** (typical add-on context): Puppet/Chef report forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iac; **sourcetype**: puppet:report. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iac, sourcetype=\"puppet:report\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by certname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status=\"failed\" OR corrective > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (compliance %), Table (non-compliant nodes), Pie chart (status distribution), Line chart (compliance trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch infrastructure-as-code runs and policy so changes to systems stay visible and governed.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.5",
              "n": "IaC Policy Violations",
              "c": "high",
              "f": "beginner",
              "v": "Policy-as-code (OPA/Sentinel) prevents non-compliant infrastructure from being provisioned. Tracking blocked deployments validates governance.",
              "t": "Policy engine output (CI/CD integration)",
              "d": "OPA/Sentinel policy check results, CI/CD pipeline logs",
              "q": "index=iac sourcetype=\"policy_check\"\n| where result=\"DENY\"\n| stats count by policy_name, workspace, resource_type\n| sort -count",
              "m": "Ingest policy check results from CI/CD pipelines. Track denied provisions by policy and team. Alert on repeated violations (may indicate training need). Report on policy effectiveness and most-violated rules.",
              "z": "Bar chart (violations by policy), Table (denied provisions), Line chart (violation trend), Pie chart (by resource type).",
              "kfp": "Signals that reflect approved hotfixes, provider upgrades, or brownout tests rather than misuse.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Policy engine output (CI/CD integration).\n• Ensure the following data sources are available: OPA/Sentinel policy check results, CI/CD pipeline logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest policy check results from CI/CD pipelines. Track denied provisions by policy and team. Alert on repeated violations (may indicate training need). Report on policy effectiveness and most-violated rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iac sourcetype=\"policy_check\"\n| where result=\"DENY\"\n| stats count by policy_name, workspace, resource_type\n| sort -count\n```\n\nUnderstanding this SPL\n\n**IaC Policy Violations** — Policy-as-code (OPA/Sentinel) prevents non-compliant infrastructure from being provisioned. Tracking blocked deployments validates governance.\n\nDocumented **Data sources**: OPA/Sentinel policy check results, CI/CD pipeline logs. **App/TA** (typical add-on context): Policy engine output (CI/CD integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iac; **sourcetype**: policy_check. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iac, sourcetype=\"policy_check\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where result=\"DENY\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by policy_name, workspace, resource_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (violations by policy), Table (denied provisions), Line chart (violation trend), Pie chart (by resource type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch infrastructure-as-code runs and policy so changes to systems stay visible and governed.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.6",
              "n": "Pipeline Failure Root Cause Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Recurring failure causes (e.g., flaky tests, env issues) slow delivery. Trending root causes supports targeted remediation and stability.",
              "t": "Jenkins/GitHub Actions/Azure DevOps TAs",
              "d": "Pipeline run logs, failure reasons, stage outcomes",
              "q": "index=cicd sourcetype=\"jenkins:build\"\n| where result=\"FAILURE\"\n| rex field=message \"(?<cause>Timeout|OOM|Connection refused|AssertionError|dependency)\"\n| stats count by cause, job_name\n| sort -count",
              "m": "Parse failure messages and stack traces from CI logs. Classify by cause (test, env, dependency, timeout). Alert on spike in a specific cause. Report on top failure reasons by job and week.",
              "z": "Bar chart (failures by cause), Table (job × cause), Line chart (failure rate trend).",
              "kfp": "Signals that reflect approved hotfixes, provider upgrades, or brownout tests rather than misuse.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Jenkins/GitHub Actions/Azure DevOps TAs.\n• Ensure the following data sources are available: Pipeline run logs, failure reasons, stage outcomes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse failure messages and stack traces from CI logs. Classify by cause (test, env, dependency, timeout). Alert on spike in a specific cause. Report on top failure reasons by job and week.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"jenkins:build\"\n| where result=\"FAILURE\"\n| rex field=message \"(?<cause>Timeout|OOM|Connection refused|AssertionError|dependency)\"\n| stats count by cause, job_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Pipeline Failure Root Cause Trending** — Recurring failure causes (e.g., flaky tests, env issues) slow delivery. Trending root causes supports targeted remediation and stability.\n\nDocumented **Data sources**: Pipeline run logs, failure reasons, stage outcomes. **App/TA** (typical add-on context): Jenkins/GitHub Actions/Azure DevOps TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: jenkins:build. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"jenkins:build\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where result=\"FAILURE\"` — typically the threshold or rule expression for this monitoring goal.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by cause, job_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failures by cause), Table (job × cause), Line chart (failure rate trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "github",
                "jenkins"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.7",
              "n": "Container Image Build and Push Audit",
              "c": "high",
              "f": "beginner",
              "v": "Unauthorized or untagged image pushes can introduce risk. Auditing build and push events supports supply chain security and compliance.",
              "t": "Registry and CI logs (ECR, ACR, Harbor, Docker)",
              "d": "Image push events, build logs, registry audit",
              "q": "index=registry sourcetype=\"registry:audit\"\n| search action=push\n| stats latest(_time) as last_push, count by image_name, actor, tag\n| table image_name, tag, actor, last_push, count",
              "m": "Ingest registry audit and CI build events. Alert on push from unexpected identity or to production repo without tag policy. Report on image provenance and push frequency.",
              "z": "Table (push events), Timeline (pushes by image), Bar chart (pushes by actor).",
              "kfp": "Signals that reflect approved hotfixes, provider upgrades, or brownout tests rather than misuse.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Registry and CI logs (ECR, ACR, Harbor, Docker).\n• Ensure the following data sources are available: Image push events, build logs, registry audit.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest registry audit and CI build events. Alert on push from unexpected identity or to production repo without tag policy. Report on image provenance and push frequency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=registry sourcetype=\"registry:audit\"\n| search action=push\n| stats latest(_time) as last_push, count by image_name, actor, tag\n| table image_name, tag, actor, last_push, count\n```\n\nUnderstanding this SPL\n\n**Container Image Build and Push Audit** — Unauthorized or untagged image pushes can introduce risk. Auditing build and push events supports supply chain security and compliance.\n\nDocumented **Data sources**: Image push events, build logs, registry audit. **App/TA** (typical add-on context): Registry and CI logs (ECR, ACR, Harbor, Docker). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: registry; **sourcetype**: registry:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=registry, sourcetype=\"registry:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by image_name, actor, tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Container Image Build and Push Audit**): table image_name, tag, actor, last_push, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (push events), Timeline (pushes by image), Bar chart (pushes by actor).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "docker"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.8",
              "n": "Release Gate and Approval Lag",
              "c": "medium",
              "f": "intermediate",
              "v": "Long approval or gate wait times delay releases. Monitoring gate duration and approval latency supports process improvement.",
              "t": "Release management / pipeline TAs",
              "d": "Gate and approval timestamps from release pipelines",
              "q": "index=cicd sourcetype=\"release:gate\"\n| eval wait_sec=approved_time - submitted_time\n| stats avg(wait_sec) as avg_wait, max(wait_sec) as max_wait by stage_name, environment\n| where avg_wait > 3600",
              "m": "Ingest gate and approval events from Azure DevOps, Spinnaker, or similar. Compute wait time per stage. Alert when average wait exceeds threshold. Report on approval latency by stage and approver.",
              "z": "Bar chart (wait time by stage), Table (slow gates), Line chart (approval latency trend).",
              "kfp": "Signals that reflect approved hotfixes, provider upgrades, or brownout tests rather than misuse.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Release management / pipeline TAs.\n• Ensure the following data sources are available: Gate and approval timestamps from release pipelines.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest gate and approval events from Azure DevOps, Spinnaker, or similar. Compute wait time per stage. Alert when average wait exceeds threshold. Report on approval latency by stage and approver.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"release:gate\"\n| eval wait_sec=approved_time - submitted_time\n| stats avg(wait_sec) as avg_wait, max(wait_sec) as max_wait by stage_name, environment\n| where avg_wait > 3600\n```\n\nUnderstanding this SPL\n\n**Release Gate and Approval Lag** — Long approval or gate wait times delay releases. Monitoring gate duration and approval latency supports process improvement.\n\nDocumented **Data sources**: Gate and approval timestamps from release pipelines. **App/TA** (typical add-on context): Release management / pipeline TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: release:gate. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"release:gate\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **wait_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by stage_name, environment** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_wait > 3600` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (wait time by stage), Table (slow gates), Line chart (approval latency trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.9",
              "n": "Feature Flag and Experiment Rollout Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Feature flags and experiments affect user experience. Monitoring rollout percentage and error rate per flag supports safe rollouts and rollback.",
              "t": "Feature flag provider logs, app telemetry",
              "d": "Flag evaluation logs, rollout events, error rates by flag",
              "q": "index=app sourcetype=\"feature_flag:eval\"\n| bin _time span=1h\n| stats count, sum(eval(if(error=\"true\",1,0))) as errors by flag_name, variant, _time\n| eval error_rate=round((errors/count)*100, 2)\n| where error_rate > 5",
              "m": "Ingest flag evaluation and error data. Track rollout % and error rate per flag/variant. Alert on error rate spike after rollout. Report on flag adoption and performance by variant.",
              "z": "Line chart (error rate by flag), Table (flags with high errors), Bar chart (rollout % by variant).",
              "kfp": "Signals that reflect approved hotfixes, provider upgrades, or brownout tests rather than misuse.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Feature flag provider logs, app telemetry.\n• Ensure the following data sources are available: Flag evaluation logs, rollout events, error rates by flag.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest flag evaluation and error data. Track rollout % and error rate per flag/variant. Alert on error rate spike after rollout. Report on flag adoption and performance by variant.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"feature_flag:eval\"\n| bin _time span=1h\n| stats count, sum(eval(if(error=\"true\",1,0))) as errors by flag_name, variant, _time\n| eval error_rate=round((errors/count)*100, 2)\n| where error_rate > 5\n```\n\nUnderstanding this SPL\n\n**Feature Flag and Experiment Rollout Monitoring** — Feature flags and experiments affect user experience. Monitoring rollout percentage and error rate per flag supports safe rollouts and rollback.\n\nDocumented **Data sources**: Flag evaluation logs, rollout events, error rates by flag. **App/TA** (typical add-on context): Feature flag provider logs, app telemetry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: feature_flag:eval. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"feature_flag:eval\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by flag_name, variant, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (error rate by flag), Table (flags with high errors), Bar chart (rollout % by variant).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.10",
              "n": "Deployment Rollback and Canary Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Failed canaries or automatic rollbacks indicate bad releases. Tracking rollback rate and canary metrics ensures safe deployments.",
              "t": "Kubernetes/Argo/Spinnaker TAs, app metrics",
              "d": "Deployment events, canary success/failure, rollback triggers",
              "q": "index=k8s sourcetype=\"kube:deployment\"\n| search (reason=\"Rollback\" OR reason=\"CanaryFailed\" OR type=\"Rollback\")\n| stats count by namespace, deployment, reason\n| sort -count",
              "m": "Ingest deployment and canary outcome events. Alert on any rollback or canary failure. Correlate with change and error metrics. Report on rollback rate by service and time.",
              "z": "Table (rollback events), Single value (rollbacks this week), Line chart (canary success rate).",
              "kfp": "Signals that reflect approved hotfixes, provider upgrades, or brownout tests rather than misuse.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kubernetes/Argo/Spinnaker TAs, app metrics.\n• Ensure the following data sources are available: Deployment events, canary success/failure, rollback triggers.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest deployment and canary outcome events. Alert on any rollback or canary failure. Correlate with change and error metrics. Report on rollback rate by service and time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:deployment\"\n| search (reason=\"Rollback\" OR reason=\"CanaryFailed\" OR type=\"Rollback\")\n| stats count by namespace, deployment, reason\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Deployment Rollback and Canary Health** — Failed canaries or automatic rollbacks indicate bad releases. Tracking rollback rate and canary metrics ensures safe deployments.\n\nDocumented **Data sources**: Deployment events, canary success/failure, rollback triggers. **App/TA** (typical add-on context): Kubernetes/Argo/Spinnaker TAs, app metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:deployment. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:deployment\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by namespace, deployment, reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rollback events), Single value (rollbacks this week), Line chart (canary success rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch failed or rolled-back releases so we can tighten the path from commit to production.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.11",
              "n": "ArgoCD Application Sync Status",
              "c": "high",
              "f": "intermediate",
              "v": "Out-of-sync or degraded applications in ArgoCD indicate GitOps drift or deployment failures. Detection ensures desired state matches cluster state and enables rapid remediation.",
              "t": "Custom (ArgoCD API)",
              "d": "ArgoCD /api/v1/applications",
              "q": "index=devops sourcetype=\"argocd:application\"\n| where sync_status!=\"Synced\" OR health_status!=\"Healthy\" OR health_status=\"Degraded\"\n| table _time, name, namespace, sync_status, health_status, revision, message\n| sort -_time",
              "m": "Poll ArgoCD API /api/v1/applications for application list. Ingest sync.status, health.status, revision, message. Alert when sync_status is OutOfSync or health_status is Degraded/Progressing for >5 min. Track sync and health history. Correlate with Git commits and cluster events. Report on application sync health and remediation time.",
              "z": "Table (out-of-sync/degraded apps), Single value (synced apps %), Status grid (app × sync/health), Timeline (sync status changes).",
              "kfp": "Signals that reflect approved hotfixes, provider upgrades, or brownout tests rather than misuse.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (ArgoCD API).\n• Ensure the following data sources are available: ArgoCD /api/v1/applications.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll ArgoCD API /api/v1/applications for application list. Ingest sync.status, health.status, revision, message. Alert when sync_status is OutOfSync or health_status is Degraded/Progressing for >5 min. Track sync and health history. Correlate with Git commits and cluster events. Report on application sync health and remediation time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"argocd:application\"\n| where sync_status!=\"Synced\" OR health_status!=\"Healthy\" OR health_status=\"Degraded\"\n| table _time, name, namespace, sync_status, health_status, revision, message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**ArgoCD Application Sync Status** — Out-of-sync or degraded applications in ArgoCD indicate GitOps drift or deployment failures. Detection ensures desired state matches cluster state and enables rapid remediation.\n\nDocumented **Data sources**: ArgoCD /api/v1/applications. **App/TA** (typical add-on context): Custom (ArgoCD API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: argocd:application. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"argocd:application\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where sync_status!=\"Synced\" OR health_status!=\"Healthy\" OR health_status=\"Degraded\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ArgoCD Application Sync Status**): table _time, name, namespace, sync_status, health_status, revision, message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (out-of-sync/degraded apps), Single value (synced apps %), Status grid (app × sync/health), Timeline (sync status changes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitOps sync and Argo signals so clusters stay aligned with the desired configuration in Git.",
              "mtype": [
                "Configuration",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "argocd"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.12",
              "n": "Terraform Plan Drift Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Scheduled CI `terraform plan` runs that show changes when no pipeline apply occurred—focused operational view vs UC-12.4.2.",
              "t": "Terraform CLI in CI, Terraform Cloud",
              "d": "Plan JSON `resource_changes`, `plan_mode=scheduled`",
              "q": "index=iac sourcetype=\"terraform:plan_ci\"\n| where plan_mode=\"scheduled\" AND (changes_add>0 OR changes_change>0 OR changes_destroy>0)\n| table _time, workspace, changes_add, changes_change, changes_destroy, run_url\n| sort -_time",
              "m": "Nightly plan-only workflow for prod workspaces. Alert on any change. Auto-create drift remediation ticket.",
              "z": "Table (drift plans), Timeline, Bar chart (by workspace).",
              "kfp": "Drift from manual console fixes during incidents, automated tool changes, or planned config updates outside IaC.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Terraform CLI in CI, Terraform Cloud.\n• Ensure the following data sources are available: Plan JSON `resource_changes`, `plan_mode=scheduled`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNightly plan-only workflow for prod workspaces. Alert on any change. Auto-create drift remediation ticket.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iac sourcetype=\"terraform:plan_ci\"\n| where plan_mode=\"scheduled\" AND (changes_add>0 OR changes_change>0 OR changes_destroy>0)\n| table _time, workspace, changes_add, changes_change, changes_destroy, run_url\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Terraform Plan Drift Detection** — Scheduled CI `terraform plan` runs that show changes when no pipeline apply occurred—focused operational view vs UC-12.4.2.\n\nDocumented **Data sources**: Plan JSON `resource_changes`, `plan_mode=scheduled`. **App/TA** (typical add-on context): Terraform CLI in CI, Terraform Cloud. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iac; **sourcetype**: terraform:plan_ci. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iac, sourcetype=\"terraform:plan_ci\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where plan_mode=\"scheduled\" AND (changes_add>0 OR changes_change>0 OR changes_destroy>0)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Terraform Plan Drift Detection**): table _time, workspace, changes_add, changes_change, changes_destroy, run_url\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (drift plans), Timeline, Bar chart (by workspace).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch infrastructure-as-code runs and policy so changes to systems stay visible and governed.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hashicorp"
              ],
              "em": [
                "hashicorp_terraform"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.13",
              "n": "CloudFormation Stack Drift",
              "c": "high",
              "f": "intermediate",
              "v": "AWS `DetectStackDrift` results show live resources diverging from template—essential for CloudFormation-centric teams.",
              "t": "AWS CloudFormation API, CloudTrail",
              "d": "`StackDriftStatus`, `StackResourceDriftStatus`",
              "q": "index=aws sourcetype=\"aws:cloudformation:drift\"\n| where StackDriftStatus=\"DRIFTED\" OR DetectionStatus=\"DETECTION_FAILED\"\n| stats latest(_time) as last_check, values(LogicalResourceId) as drifted_resources by StackName, region\n| sort -last_check",
              "m": "Schedule drift detection after stack updates. Alert on DRIFTED. Optionally ingest full drift detail JSON.",
              "z": "Table (drifted stacks), Bar chart (by account), Timeline.",
              "kfp": "Drift from manual console fixes during incidents, automated tool changes, or planned config updates outside IaC.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AWS CloudFormation API, CloudTrail.\n• Ensure the following data sources are available: `StackDriftStatus`, `StackResourceDriftStatus`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule drift detection after stack updates. Alert on DRIFTED. Optionally ingest full drift detail JSON.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudformation:drift\"\n| where StackDriftStatus=\"DRIFTED\" OR DetectionStatus=\"DETECTION_FAILED\"\n| stats latest(_time) as last_check, values(LogicalResourceId) as drifted_resources by StackName, region\n| sort -last_check\n```\n\nUnderstanding this SPL\n\n**CloudFormation Stack Drift** — AWS `DetectStackDrift` results show live resources diverging from template—essential for CloudFormation-centric teams.\n\nDocumented **Data sources**: `StackDriftStatus`, `StackResourceDriftStatus`. **App/TA** (typical add-on context): AWS CloudFormation API, CloudTrail. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudformation:drift. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudformation:drift\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where StackDriftStatus=\"DRIFTED\" OR DetectionStatus=\"DETECTION_FAILED\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by StackName, region** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CloudFormation Stack Drift** — AWS `DetectStackDrift` results show live resources diverging from template—essential for CloudFormation-centric teams.\n\nDocumented **Data sources**: `StackDriftStatus`, `StackResourceDriftStatus`. **App/TA** (typical add-on context): AWS CloudFormation API, CloudTrail. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (drifted stacks), Bar chart (by account), Timeline.",
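              "_example": "Optional freshness check (sketch, assuming CloudTrail management events are ingested as sourcetype `aws:cloudtrail`; the `requestParameters.stackName` field name may differ in your feed): flag stacks updated in the last 24h with no later drift detection call.\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventName IN (\"UpdateStack\",\"DetectStackDrift\") earliest=-24h\n| stats latest(eval(if(eventName=\"UpdateStack\",_time,null()))) as last_update, latest(eval(if(eventName=\"DetectStackDrift\",_time,null()))) as last_check by requestParameters.stackName\n| where isnotnull(last_update) AND (isnull(last_check) OR last_check < last_update)\n```",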
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch for configuration drift so live systems do not quietly diverge from what we declared in code.",
              "mtype": [
                "Configuration",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.14",
              "n": "Ansible Playbook Failure Tracking",
              "c": "high",
              "f": "beginner",
              "v": "Failure rate per playbook and run—trending layer on top of UC-12.4.3 outcomes.",
              "t": "Ansible Splunk callback, ARA",
              "d": "Per-play `failed`, `unreachable`, `playbook`",
              "q": "index=iac sourcetype=\"ansible:result\"\n| stats sum(failed) as fails, sum(unreachable) as unreach, count as runs by playbook\n| eval fail_rate=round((fails+unreach)/runs*100,2)\n| where fail_rate > 5\n| sort -fail_rate",
              "m": "Alert when fail_rate >5% over 24h. Page for security baseline playbooks. Host-level drilldown from same data.",
              "z": "Line chart (fail rate trend), Table (worst playbooks), Bar chart (by team).",
              "kfp": "Failures from drift fixes, provider version mismatches, or controlled rollback operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Ansible Splunk callback, ARA.\n• Ensure the following data sources are available: Per-play `failed`, `unreachable`, `playbook`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert when fail_rate >5% over 24h. Page for security baseline playbooks. Host-level drilldown from same data.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iac sourcetype=\"ansible:result\"\n| stats sum(failed) as fails, sum(unreachable) as unreach, count as runs by playbook\n| eval fail_rate=round((fails+unreach)/runs*100,2)\n| where fail_rate > 5\n| sort -fail_rate\n```\n\nUnderstanding this SPL\n\n**Ansible Playbook Failure Tracking** — Failure rate per playbook and run—trending layer on top of UC-12.4.3 outcomes.\n\nDocumented **Data sources**: Per-play `failed`, `unreachable`, `playbook`. **App/TA** (typical add-on context): Ansible Splunk callback, ARA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iac; **sourcetype**: ansible:result. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iac, sourcetype=\"ansible:result\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by playbook** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_rate > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (fail rate trend), Table (worst playbooks), Bar chart (by team).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch infrastructure-as-code runs and policy so changes to systems stay visible and governed.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "ansible"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.15",
              "n": "Policy-as-Code Violation Trending",
              "c": "high",
              "f": "intermediate",
              "v": "OPA, Sentinel, or Conftest denials over time—spikes after new policy rollout (extends UC-12.4.5).",
              "t": "OPA decision logs, Terraform Cloud policy sets",
              "d": "`result=\"fail\"`, `policy_path`, `namespace`",
              "q": "index=iac sourcetype=\"opa:decision\"\n| where result=\"fail\"\n| timechart span=1d count by policy_name",
              "m": "Baseline failures per policy. Alert on 3× week-over-week spike. Run education before switching to hard-fail.",
              "z": "Line chart (violations over time), Bar chart (by policy), Table (top namespaces).",
              "kfp": "Signals that reflect approved hotfixes, provider upgrades, or brownout tests rather than misuse.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OPA decision logs, Terraform Cloud policy sets.\n• Ensure the following data sources are available: `result=\"fail\"`, `policy_path`, `namespace`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline failures per policy. Alert on 3× week-over-week spike. Run education before switching to hard-fail.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iac sourcetype=\"opa:decision\"\n| where result=\"fail\"\n| timechart span=1d count by policy_name\n```\n\nUnderstanding this SPL\n\n**Policy-as-Code Violation Trending** — OPA, Sentinel, or Conftest denials over time—spikes after new policy rollout (extends UC-12.4.5).\n\nDocumented **Data sources**: `result=\"fail\"`, `policy_path`, `namespace`. **App/TA** (typical add-on context): OPA decision logs, Terraform Cloud policy sets. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iac; **sourcetype**: opa:decision. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iac, sourcetype=\"opa:decision\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where result=\"fail\"` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by policy_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (violations over time), Bar chart (by policy), Table (top namespaces).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hashicorp"
              ],
              "em": [
                "hashicorp_terraform"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.4.16",
              "n": "IaC Module Version Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Terraform module sources below approved minimum semver—reduces stale or vulnerable module usage.",
              "t": "`terraform-config-inspect`, CI parse of resolved modules",
              "d": "Module `name`, `version` from `terraform init -json`",
              "q": "index=iac sourcetype=\"terraform:modules\"\n| lookup terraform_module_allowed module_name OUTPUT min_version\n| where semver_compare(module_version, min_version) < 0\n| table workspace, module_name, module_version, min_version",
              "m": "Weekly compliance report. Enforce minimum via Sentinel/OPA in pipeline. Pair with private registry pinning.",
              "z": "Table (non-compliant modules), Bar chart (version lag), Line chart (compliance %).",
              "kfp": "Signals that reflect approved hotfixes, provider upgrades, or brownout tests rather than misuse.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `terraform-config-inspect`, CI parse of resolved modules.\n• Ensure the following data sources are available: Module `name`, `version` from `terraform init -json`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nWeekly compliance report. Enforce minimum via Sentinel/OPA in pipeline. Pair with private registry pinning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iac sourcetype=\"terraform:modules\"\n| lookup terraform_module_allowed module_name OUTPUT min_version\n| where semver_compare(module_version, min_version) < 0\n| table workspace, module_name, module_version, min_version\n```\n\nUnderstanding this SPL\n\n**IaC Module Version Compliance** — Terraform module sources below approved minimum semver—reduces stale or vulnerable module usage.\n\nDocumented **Data sources**: Module `name`, `version` from `terraform init -json`. **App/TA** (typical add-on context): `terraform-config-inspect`, CI parse of resolved modules. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iac; **sourcetype**: terraform:modules. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iac, sourcetype=\"terraform:modules\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where semver_compare(module_version, min_version) < 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **IaC Module Version Compliance**): table workspace, module_name, module_version, min_version\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant modules), Bar chart (version lag), Line chart (compliance %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch infrastructure-as-code runs and policy so changes to systems stay visible and governed.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hashicorp"
              ],
              "em": [
                "hashicorp_terraform"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.2,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 16,
            "none": 0
          }
        },
        {
          "i": "12.5",
          "n": "GitOps & Deployment Automation",
          "u": [
            {
              "i": "12.5.1",
              "n": "ArgoCD Sync Status Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Failed or stuck sync operations leave clusters diverging from Git and block releases; surfacing them quickly limits blast radius and restores desired state.",
              "t": "Splunk Add-on for Argo CD, custom ArgoCD API/audit input",
              "d": "`sourcetype=argocd:application`, `sourcetype=argocd:audit`",
              "q": "index=devops (sourcetype=\"argocd:application\" OR sourcetype=\"argocd:audit\")\n| search sync_status IN (\"OutOfSync\",\"Unknown\") OR operation_state=\"Error\" OR phase=\"Failed\"\n| stats latest(_time) as last_seen, values(message) as messages by name, namespace, project\n| sort -last_seen",
              "m": "Ingest Argo CD application CR status and controller/audit logs via add-on or HEC. Normalize `sync_status`, `operation_state`, and error messages. Alert when sync fails or remains in Error/Failed beyond a short window. Correlate with Git commits and cluster events.",
              "z": "Table (failed apps), Single value (apps in failed sync), Timeline (sync operations), Status grid by project.",
              "kfp": "Short-lived sync issues during cluster upgrades, credential rotations, or Git provider incidents.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Argo CD, custom ArgoCD API/audit input.\n• Ensure the following data sources are available: `sourcetype=argocd:application`, `sourcetype=argocd:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Argo CD application CR status and controller/audit logs via add-on or HEC. Normalize `sync_status`, `operation_state`, and error messages. Alert when sync fails or remains in Error/Failed beyond a short window. Correlate with Git commits and cluster events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops (sourcetype=\"argocd:application\" OR sourcetype=\"argocd:audit\")\n| search sync_status IN (\"OutOfSync\",\"Unknown\") OR operation_state=\"Error\" OR phase=\"Failed\"\n| stats latest(_time) as last_seen, values(message) as messages by name, namespace, project\n| sort -last_seen\n```\n\nUnderstanding this SPL\n\n**ArgoCD Sync Status Failures** — Failed or stuck sync operations leave clusters diverging from Git and block releases; surfacing them quickly limits blast radius and restores desired state.\n\nDocumented **Data sources**: `sourcetype=argocd:application`, `sourcetype=argocd:audit`. **App/TA** (typical add-on context): Splunk Add-on for Argo CD, custom ArgoCD API/audit input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: argocd:application, argocd:audit. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"argocd:application\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by name, namespace, project** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed apps), Single value (apps in failed sync), Timeline (sync operations), Status grid by project.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitOps sync and Argo signals so clusters stay aligned with the desired configuration in Git.",
              "mtype": [
                "Fault",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "argocd"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.5.2",
              "n": "ArgoCD Drift Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Live cluster drift from Git-defined manifests risks untracked changes and audit gaps; detecting drift prioritizes reconciliation before incidents or compliance findings.",
              "t": "Splunk Add-on for Argo CD",
              "d": "`sourcetype=argocd:application`",
              "q": "index=devops sourcetype=\"argocd:application\"\n| where sync_status=\"OutOfSync\" OR health_status=\"Degraded\"\n| eval drift_indicator=if(sync_status=\"OutOfSync\",\"manifest_drift\",\"health_degraded\")\n| stats count by name, namespace, sync_status, health_status, drift_indicator\n| sort -count",
              "m": "Poll or stream Argo CD application objects so `sync_status` and `health_status` are current. Treat sustained `OutOfSync` as drift unless an approved sync window applies. Alert with application, revision, and diff summary fields when available.",
              "z": "Table (drifted apps), Bar chart (drift by cluster/namespace), Line chart (drift count over time).",
              "kfp": "Drift from manual console fixes during incidents, automated tool changes, or planned config updates outside IaC.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Argo CD.\n• Ensure the following data sources are available: `sourcetype=argocd:application`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll or stream Argo CD application objects so `sync_status` and `health_status` are current. Treat sustained `OutOfSync` as drift unless an approved sync window applies. Alert with application, revision, and diff summary fields when available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"argocd:application\"\n| where sync_status=\"OutOfSync\" OR health_status=\"Degraded\"\n| eval drift_indicator=if(sync_status=\"OutOfSync\",\"manifest_drift\",\"health_degraded\")\n| stats count by name, namespace, sync_status, health_status, drift_indicator\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ArgoCD Drift Detection** — Live cluster drift from Git-defined manifests risks untracked changes and audit gaps; detecting drift prioritizes reconciliation before incidents or compliance findings.\n\nDocumented **Data sources**: `sourcetype=argocd:application`. **App/TA** (typical add-on context): Splunk Add-on for Argo CD. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: argocd:application. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"argocd:application\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where sync_status=\"OutOfSync\" OR health_status=\"Degraded\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **drift_indicator** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by name, namespace, sync_status, health_status, drift_indicator** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (drifted apps), Bar chart (drift by cluster/namespace), Line chart (drift count over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitOps sync and Argo signals so clusters stay aligned with the desired configuration in Git.",
              "mtype": [
                "Configuration",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "argocd"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.5.3",
              "n": "Flux Reconciliation Health",
              "c": "high",
              "f": "intermediate",
              "v": "Unhealthy Flux `Kustomization`/`HelmRelease` resources stop automated delivery; monitoring reconciliation ensures continuous GitOps and catches controller or source errors early.",
              "t": "Custom (Flux logs/metrics), Splunk OTel Collector for Kubernetes",
              "d": "`sourcetype=fluxcd:controller`, `sourcetype=kube:container_flux`",
              "q": "index=devops (sourcetype=\"fluxcd:controller\" OR sourcetype=\"kube:container_flux\")\n| search (status=\"False\" AND type=\"Ready\") OR level=\"error\" OR msg=\"*reconciliation*failed*\"\n| stats count by namespace, name, kind, message\n| sort -count",
              "m": "Forward Flux source-controller, kustomize-controller, and helm-controller logs (or scrape status conditions from CRDs) into Splunk. Parse Ready=False conditions and error strings. Alert on reconciliation failures or backlog growth. Group by cluster and tenant.",
              "z": "Table (failed resources), Single value (failed reconciliations), Timeline (controller errors), Bar chart (failures by kind).",
              "kfp": "Short-lived sync issues during cluster upgrades, credential rotations, or Git provider incidents.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Flux logs/metrics), Splunk OTel Collector for Kubernetes.\n• Ensure the following data sources are available: `sourcetype=fluxcd:controller`, `sourcetype=kube:container_flux`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Flux source-controller, kustomize-controller, and helm-controller logs (or scrape status conditions from CRDs) into Splunk. Parse Ready=False conditions and error strings. Alert on reconciliation failures or backlog growth. Group by cluster and tenant.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops (sourcetype=\"fluxcd:controller\" OR sourcetype=\"kube:container_flux\")\n| search (status=\"False\" AND type=\"Ready\") OR level=\"error\" OR msg=\"*reconciliation*failed*\"\n| stats count by namespace, name, kind, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Flux Reconciliation Health** — Unhealthy Flux `Kustomization`/`HelmRelease` resources stop automated delivery; monitoring reconciliation ensures continuous GitOps and catches controller or source errors early.\n\nDocumented **Data sources**: `sourcetype=fluxcd:controller`, `sourcetype=kube:container_flux`. **App/TA** (typical add-on context): Custom (Flux logs/metrics), Splunk OTel Collector for Kubernetes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: fluxcd:controller, kube:container_flux. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"fluxcd:controller\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by namespace, name, kind, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed resources), Single value (failed reconciliations), Timeline (controller errors), Bar chart (failures by kind).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Flux reconciliation so clusters do not drift silently from the desired state in Git.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.5.4",
              "n": "GitHub Actions Workflow Failure Rate",
              "c": "medium",
              "f": "beginner",
              "v": "A rising workflow failure rate signals flaky pipelines, bad merges, or infra issues that slow delivery and can block production paths.",
              "t": "GitHub Audit Log / webhook forwarder, Splunk HTTP Event Collector",
              "d": "`sourcetype=github:workflow_run`, `sourcetype=github:webhook`",
              "q": "index=devops sourcetype=\"github:workflow_run\"\n| eval failed=if(conclusion IN (\"failure\",\"cancelled\",\"timed_out\"),1,0)\n| timechart span=1h sum(failed) as failures, count as runs\n| eval failure_rate=round(100*failures/runs,2)\n| fields _time, failure_rate, failures, runs",
              "m": "Ingest workflow_run events with conclusion, workflow, branch, and repository. Compute failure rate over sliding windows per repo or default branch. Alert when failure_rate exceeds baseline or a fixed threshold. Exclude expected flaky jobs via labels when possible.",
              "z": "Line chart (failure rate), Stacked bar (conclusions), Single value (last 24h failure %), Table (top failing workflows).",
              "kfp": "Build failures spike during CI infrastructure issues, dependency registry outages, flaky test runs, or major framework upgrades.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub Audit Log / webhook forwarder, Splunk HTTP Event Collector.\n• Ensure the following data sources are available: `sourcetype=github:workflow_run`, `sourcetype=github:webhook`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest workflow_run events with conclusion, workflow, branch, and repository. Compute failure rate over sliding windows per repo or default branch. Alert when failure_rate exceeds baseline or a fixed threshold. Exclude expected flaky jobs via labels when possible.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:workflow_run\"\n| eval failed=if(conclusion IN (\"failure\",\"cancelled\",\"timed_out\"),1,0)\n| timechart span=1h sum(failed) as failures, count as runs\n| eval failure_rate=round(100*failures/runs,2)\n| fields _time, failure_rate, failures, runs\n```\n\nUnderstanding this SPL\n\n**GitHub Actions Workflow Failure Rate** — A rising workflow failure rate signals flaky pipelines, bad merges, or infra issues that slow delivery and can block production paths.\n\nDocumented **Data sources**: `sourcetype=github:workflow_run`, `sourcetype=github:webhook`. **App/TA** (typical add-on context): GitHub Audit Log / webhook forwarder, Splunk HTTP Event Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:workflow_run. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:workflow_run\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **failed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **failure_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Keeps or drops fields with `fields` to shape columns and size.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failure rate), Stacked bar (conclusions), Single value (last 24h failure %), Table (top failing workflows).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch build outcomes so we spot quality or infrastructure problems before they become a release blocker.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.5.5",
              "n": "GitHub Actions Runner Queue Depth",
              "c": "high",
              "f": "intermediate",
              "v": "Deep job queues delay CI feedback and releases; tracking queue depth distinguishes runner capacity problems from workflow volume spikes.",
              "t": "GitHub Enterprise Server / self-hosted runner scripts, custom metrics via Actions API",
              "d": "`sourcetype=github:runner_metrics`, `sourcetype=github:workflow_job`",
              "q": "index=devops (sourcetype=\"github:workflow_job\" OR sourcetype=\"github:runner_metrics\")\n| eval queued=if(status=\"queued\",1,0)\n| bin _time span=5m\n| stats sum(queued) as queued_jobs, dc(runner_name) as active_runners by _time, organization\n| sort _time",
              "m": "Emit periodic queue depth from self-hosted runner APIs or poll workflow jobs in `queued` state. For hosted runners, approximate backlog using queued job counts and wait times. Alert when queued_jobs or wait time p95 exceeds SLO. Plan runner pool scaling from trends.",
              "z": "Area chart (queued jobs), Line chart (queue wait p95), Single value (current queue depth), Table (repos with longest waits).",
              "kfp": "Build times vary with cache state, runner availability, or test suite expansion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub Enterprise Server / self-hosted runner scripts, custom metrics via Actions API.\n• Ensure the following data sources are available: `sourcetype=github:runner_metrics`, `sourcetype=github:workflow_job`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEmit periodic queue depth from self-hosted runner APIs or poll workflow jobs in `queued` state. For hosted runners, approximate backlog using queued job counts and wait times. Alert when queued_jobs or wait time p95 exceeds SLO. Plan runner pool scaling from trends.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops (sourcetype=\"github:workflow_job\" OR sourcetype=\"github:runner_metrics\")\n| eval queued=if(status=\"queued\",1,0)\n| bin _time span=5m\n| stats sum(queued) as queued_jobs, dc(runner_name) as active_runners by _time, organization\n| sort _time\n```\n\nUnderstanding this SPL\n\n**GitHub Actions Runner Queue Depth** — Deep job queues delay CI feedback and releases; tracking queue depth distinguishes runner capacity problems from workflow volume spikes.\n\nDocumented **Data sources**: `sourcetype=github:runner_metrics`, `sourcetype=github:workflow_job`. **App/TA** (typical add-on context): GitHub Enterprise Server / self-hosted runner scripts, custom metrics via Actions API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:workflow_job, github:runner_metrics. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:workflow_job\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **queued** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, organization** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (queued jobs), Line chart (queue wait p95), Single value (current queue depth), Table (repos with longest waits).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch queueing and wait time so developers are not stuck behind broken runners or clogged pipelines.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.5.6",
              "n": "GitLab CI Pipeline Duration Regression",
              "c": "medium",
              "f": "intermediate",
              "v": "Sudden pipeline duration increases waste compute budgets and slow merges; regression detection isolates stages, runners, or dependencies that changed.",
              "t": "GitLab webhook / API integration, Splunk Add-on for GitLab (custom)",
              "d": "`sourcetype=gitlab:pipeline`",
              "q": "index=devops sourcetype=\"gitlab:pipeline\" status=\"success\"\n| eval duration_min=round(duration_sec/60,2)\n| eventstats median(duration_min) as baseline_med by project_id, ref\n| eval regression=if(duration_min > baseline_med * 1.5, 1, 0)\n| where regression=1\n| table _time, project, ref, duration_min, baseline_med, pipeline_id\n| sort -_time",
              "m": "Ingest pipeline completion events with duration, project, ref, and stage timings if available. Establish rolling median baseline per project/branch. Alert when duration exceeds threshold multiplier or absolute cap. Drill into job-level logs for the slow stage.",
              "z": "Line chart (median duration trend), Table (regression events), Box plot (duration distribution by pipeline).",
              "kfp": "Build times vary with cache state, runner availability, or test suite expansion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitLab webhook / API integration, Splunk Add-on for GitLab (custom).\n• Ensure the following data sources are available: `sourcetype=gitlab:pipeline`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest pipeline completion events with duration, project, ref, and stage timings if available. Establish rolling median baseline per project/branch. Alert when duration exceeds threshold multiplier or absolute cap. Drill into job-level logs for the slow stage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"gitlab:pipeline\" status=\"success\"\n| eval duration_min=round(duration_sec/60,2)\n| eventstats median(duration_min) as baseline_med by project_id, ref\n| eval regression=if(duration_min > baseline_med * 1.5, 1, 0)\n| where regression=1\n| table _time, project, ref, duration_min, baseline_med, pipeline_id\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**GitLab CI Pipeline Duration Regression** — Sudden pipeline duration increases waste compute budgets and slow merges; regression detection isolates stages, runners, or dependencies that changed.\n\nDocumented **Data sources**: `sourcetype=gitlab:pipeline`. **App/TA** (typical add-on context): GitLab webhook / API integration, Splunk Add-on for GitLab (custom). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: gitlab:pipeline. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"gitlab:pipeline\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by project_id, ref** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **regression** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where regression=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GitLab CI Pipeline Duration Regression**): table _time, project, ref, duration_min, baseline_med, pipeline_id\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Line chart (median duration trend), Table (regression events), Box plot (duration distribution by pipeline).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GitLab runner and pipeline signals so broken jobs, storage, or runners do not stall your teams.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gitlab"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.5.7",
              "n": "Deployment Rollback Frequency Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Frequent rollbacks indicate release quality or progressive-delivery issues; tracking frequency supports blameless review and release process tuning.",
              "t": "Argo Rollouts, Flagger, Kubernetes audit, CI/CD webhook",
              "d": "`sourcetype=kube:events`, `sourcetype=argocd:application`",
              "q": "index=devops (sourcetype=\"kube:events\" OR sourcetype=\"argocd:application\")\n| search rollback=\"true\" OR reason=\"Rollback\" OR message=\"*rollback*\"\n| timechart span=1d count by namespace",
              "m": "Tag rollback events from Argo Rollouts/Flagger, deployment controllers, or GitOps sync history. Deduplicate by deployment and revision. Report rollbacks per service and environment. Correlate with failed health checks or error spikes.",
              "z": "Line chart (rollbacks per day), Bar chart (rollbacks by service), Table (recent rollbacks with revision).",
              "kfp": "Short-lived sync issues during cluster upgrades, credential rotations, or Git provider incidents.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Argo Rollouts, Flagger, Kubernetes audit, CI/CD webhook.\n• Ensure the following data sources are available: `sourcetype=kube:events`, `sourcetype=argocd:application`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag rollback events from Argo Rollouts/Flagger, deployment controllers, or GitOps sync history. Deduplicate by deployment and revision. Report rollbacks per service and environment. Correlate with failed health checks or error spikes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops (sourcetype=\"kube:events\" OR sourcetype=\"argocd:application\")\n| search rollback=\"true\" OR reason=\"Rollback\" OR message=\"*rollback*\"\n| timechart span=1d count by namespace\n```\n\nUnderstanding this SPL\n\n**Deployment Rollback Frequency Tracking** — Frequent rollbacks indicate release quality or progressive-delivery issues; tracking frequency supports blameless review and release process tuning.\n\nDocumented **Data sources**: `sourcetype=kube:events`, `sourcetype=argocd:application`. **App/TA** (typical add-on context): Argo Rollouts, Flagger, Kubernetes audit, CI/CD webhook. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: kube:events, argocd:application. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"kube:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by namespace** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (rollbacks per day), Bar chart (rollbacks by service), Table (recent rollbacks with revision).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch failed or rolled-back releases so we can tighten the path from commit to production.",
              "mtype": [
                "Fault",
                "Change"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.5.8",
              "n": "Helm Release Health Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Failed or pending Helm releases leave workloads partially updated or broken; monitoring release status prevents silent partial deploys.",
              "t": "Helm CLI / Flux helm-controller logs, `splunk-otel-collector` for cluster metrics",
              "d": "`sourcetype=helm:release`, `sourcetype=fluxcd:controller`",
              "q": "index=devops (sourcetype=\"helm:release\" OR (sourcetype=\"fluxcd:controller\" AND kind=\"HelmRelease\"))\n| where status IN (\"failed\",\"pending-upgrade\",\"pending-rollback\") OR info_status!=\"deployed\"\n| stats latest(_time) as last_event, values(message) as notes by release, namespace, chart, status\n| sort -last_event",
              "m": "Ingest Helm release status from `helm list -o json` jobs, Flux HelmRelease conditions, or controller logs. Map statuses to deployed/failed/pending. Alert on non-deployed steady states. Include chart version and values hash for change correlation.",
              "z": "Table (unhealthy releases), Single value (non-deployed count), Timeline (release operations).",
              "kfp": "Short-lived sync issues during cluster upgrades, credential rotations, or Git provider incidents.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Helm CLI / Flux helm-controller logs, `splunk-otel-collector` for cluster metrics.\n• Ensure the following data sources are available: `sourcetype=helm:release`, `sourcetype=fluxcd:controller`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Helm release status from `helm list -o json` jobs, Flux HelmRelease conditions, or controller logs. Map statuses to deployed/failed/pending. Alert on non-deployed steady states. Include chart version and values hash for change correlation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops (sourcetype=\"helm:release\" OR (sourcetype=\"fluxcd:controller\" AND kind=\"HelmRelease\"))\n| where status IN (\"failed\",\"pending-upgrade\",\"pending-rollback\") OR info_status!=\"deployed\"\n| stats latest(_time) as last_event, values(message) as notes by release, namespace, chart, status\n| sort -last_event\n```\n\nUnderstanding this SPL\n\n**Helm Release Health Monitoring** — Failed or pending Helm releases leave workloads partially updated or broken; monitoring release status prevents silent partial deploys.\n\nDocumented **Data sources**: `sourcetype=helm:release`, `sourcetype=fluxcd:controller`. **App/TA** (typical add-on context): Helm CLI / Flux helm-controller logs, `splunk-otel-collector` for cluster metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: helm:release, fluxcd:controller. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"helm:release\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status IN (\"failed\",\"pending-upgrade\",\"pending-rollback\") OR info_status!=\"deployed\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by release, namespace, chart, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unhealthy releases), Single value (non-deployed count), Timeline (release operations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Kubernetes release tooling so manifest or chart problems surface before users do.",
              "mtype": [
                "Fault",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "kubernetes_helm"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.5.9",
              "n": "Kustomize Build Error Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Kustomize build failures block manifests from applying; early detection shortens fix time for overlay, patch, or base reference mistakes.",
              "t": "CI pipeline logs, Flux kustomize-controller",
              "d": "`sourcetype=gitlab:job`, `sourcetype=github:workflow_job`, `sourcetype=fluxcd:controller`",
              "q": "index=devops (sourcetype=\"fluxcd:controller\" OR sourcetype=\"gitlab:job\" OR sourcetype=\"github:workflow_job\")\n| search kustomize_build OR \"kustomize build\" OR \"error building kustomize\"\n| rex field=_raw \"(?<err_msg>kustomize:.*|error:.*)\"\n| stats count by project, pipeline_id, err_msg\n| sort -count",
              "m": "Capture stderr from CI jobs and Flux kustomize-controller when `kustomize build` runs. Extract file paths and duplicate key errors. Alert on any build failure on protected branches or for production overlays. Feed counts back to repo owners.",
              "z": "Table (build errors), Bar chart (errors by repo), Timeline (failure events).",
              "kfp": "Short-lived sync issues during cluster upgrades, credential rotations, or Git provider incidents.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CI pipeline logs, Flux kustomize-controller.\n• Ensure the following data sources are available: `sourcetype=gitlab:job`, `sourcetype=github:workflow_job`, `sourcetype=fluxcd:controller`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture stderr from CI jobs and Flux kustomize-controller when `kustomize build` runs. Extract file paths and duplicate key errors. Alert on any build failure on protected branches or for production overlays. Feed counts back to repo owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops (sourcetype=\"fluxcd:controller\" OR sourcetype=\"gitlab:job\" OR sourcetype=\"github:workflow_job\")\n| search kustomize_build OR \"kustomize build\" OR \"error building kustomize\"\n| rex field=_raw \"(?<err_msg>kustomize:.*|error:.*)\"\n| stats count by project, pipeline_id, err_msg\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Kustomize Build Error Tracking** — Kustomize build failures block manifests from applying; early detection shortens fix time for overlay, patch, or base reference mistakes.\n\nDocumented **Data sources**: `sourcetype=gitlab:job`, `sourcetype=github:workflow_job`, `sourcetype=fluxcd:controller`. **App/TA** (typical add-on context): CI pipeline logs, Flux kustomize-controller. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: fluxcd:controller, gitlab:job, github:workflow_job. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"fluxcd:controller\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by project, pipeline_id, err_msg** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (build errors), Bar chart (errors by repo), Timeline (failure events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch Kubernetes release tooling so manifest or chart problems surface before users do.",
              "mtype": [
                "Fault",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.5.10",
              "n": "GitOps Deployment Lead Time",
              "c": "medium",
              "f": "advanced",
              "v": "Measuring Git commit-to-production lead time exposes bottlenecks in review, CI, and sync so teams can optimize end-to-end delivery speed.",
              "t": "Git + ArgoCD/Flux correlation (custom), DORA metrics scripts",
              "d": "`sourcetype=github:webhook`, `sourcetype=argocd:application`",
              "q": "index=devops (sourcetype=\"github:webhook\" OR sourcetype=\"argocd:application\")\n| eval commit_ts=if(sourcetype=\"github:webhook\", strptime(commit_time,\"%Y-%m-%dT%H:%M:%SZ\"), null())\n| eval sync_ts=if(sourcetype=\"argocd:application\", _time, null())\n| stats earliest(commit_ts) as first_commit, latest(sync_ts) as last_sync by repository, revision\n| eval lead_time_sec=last_sync-first_commit\n| where isnotnull(lead_time_sec) AND lead_time_sec > 0\n| eval lead_time_min=round(lead_time_sec/60,1)\n| table repository, revision, lead_time_min\n| sort -lead_time_min",
              "m": "Correlate Git merge or push timestamps with Argo CD successful sync or Flux `LastAppliedRevision` time for the same revision. Use lookup or transaction across indexes if needed. Report p50/p95 lead time by team and service. Exclude hotfix channels with tags if required.",
              "z": "Histogram (lead time distribution), Line chart (p95 lead time trend), Bar chart (lead time by service).",
              "kfp": "Build times vary with cache state, runner availability, or test suite expansion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Git + ArgoCD/Flux correlation (custom), DORA metrics scripts.\n• Ensure the following data sources are available: `sourcetype=github:webhook`, `sourcetype=argocd:application`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate Git merge or push timestamps with Argo CD successful sync or Flux `LastAppliedRevision` time for the same revision. Use lookup or transaction across indexes if needed. Report p50/p95 lead time by team and service. Exclude hotfix channels with tags if required.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops (sourcetype=\"github:webhook\" OR sourcetype=\"argocd:application\")\n| eval commit_ts=if(sourcetype=\"github:webhook\", strptime(commit_time,\"%Y-%m-%dT%H:%M:%SZ\"), null())\n| eval sync_ts=if(sourcetype=\"argocd:application\", _time, null())\n| stats earliest(commit_ts) as first_commit, latest(sync_ts) as last_sync by repository, revision\n| eval lead_time_sec=last_sync-first_commit\n| where isnotnull(lead_time_sec) AND lead_time_sec > 0\n| eval lead_time_min=round(lead_time_sec/60,1)\n| table repository, revision, lead_time_min\n| sort -lead_time_min\n```\n\nUnderstanding this SPL\n\n**GitOps Deployment Lead Time** — Measuring Git commit-to-production lead time exposes bottlenecks in review, CI, and sync so teams can optimize end-to-end delivery speed.\n\nDocumented **Data sources**: `sourcetype=github:webhook`, `sourcetype=argocd:application`. **App/TA** (typical add-on context): Git + ArgoCD/Flux correlation (custom), DORA metrics scripts. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:webhook, argocd:application. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:webhook\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **commit_ts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sync_ts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by repository, revision** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **lead_time_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(lead_time_sec) AND lead_time_sec > 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **lead_time_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **GitOps Deployment Lead Time**): table repository, revision, lead_time_min\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (lead time distribution), Line chart (p95 lead time trend), Bar chart (lead time by service).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track delivery tempo and stability so leaders see whether we are shipping safely and often enough.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "argocd"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 10,
            "none": 0
          }
        },
        {
          "i": "12.6",
          "n": "DevOps Trending",
          "u": [
            {
              "i": "12.6.1",
              "n": "DORA Metrics Trending Dashboard",
              "c": "high",
              "f": "advanced",
              "v": "Deployment frequency, lead time, change failure rate, and restore time summarize delivery health; month-over-month trends show whether engineering investments actually improved flow or reliability.",
              "t": "GitHub/GitLab/Jenkins integrations, optional Splunk DORA or custom summary searches",
              "d": "`index=devops` `sourcetype IN (\"github:workflow_run\",\"gitlab:pipeline\",\"jenkins:build\")`; production deploy tags on `environment` or `branch`",
              "q": "index=devops sourcetype=\"github:issues\" earliest=-180d@d (label=\"incident\" OR priority=\"P1\")\n| eval created_ts=strptime(created_at,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval closed_ts=strptime(closed_at,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval restore_hr=round((closed_ts-created_ts)/3600,2)\n| where restore_hr>=0\n| eval _time=closed_ts\n| timechart span=1mon median(restore_hr) as mttr_restore_hours\n| trendline sma3(mttr_restore_hours) as mttr_trend",
              "m": "Rarely one query fits all four DORA metrics—implement four saved searches (or a data model) that align on calendar months and team tags. Map GitHub Actions, GitLab pipelines, and Jenkins jobs to “production deploy” using branch/environment rules. For change failure rate, count failed production workflows over successful attempts to prod in the same window. Restore time often comes from incident tooling (`index=itsm`) if not in GitHub issues—join on service name. Validate timestamps are UTC-consistent.",
              "z": "Four-panel executive dashboard (line charts per metric), Optional radar chart for normalized month scores, Table (monthly KPI table export).",
              "kfp": "Trend moves with team size, release cadence, or company holidays; baseline per service line.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub/GitLab/Jenkins integrations, optional Splunk DORA or custom summary searches.\n• Ensure the following data sources are available: `index=devops` `sourcetype IN (\"github:workflow_run\",\"gitlab:pipeline\",\"jenkins:build\")`; production deploy tags on `environment` or `branch`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRarely one query fits all four DORA metrics—implement four saved searches (or a data model) that align on calendar months and team tags. Map GitHub Actions, GitLab pipelines, and Jenkins jobs to “production deploy” using branch/environment rules. For change failure rate, count failed production workflows over successful attempts to prod in the same window. Restore time often comes from incident tooling (`index=itsm`) if not in GitHub issues—join on service name. Validate timestamps are UTC-consi…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:issues\" earliest=-180d@d (label=\"incident\" OR priority=\"P1\")\n| eval created_ts=strptime(created_at,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval closed_ts=strptime(closed_at,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval restore_hr=round((closed_ts-created_ts)/3600,2)\n| where restore_hr>=0\n| eval _time=closed_ts\n| timechart span=1mon median(restore_hr) as mttr_restore_hours\n| trendline sma3(mttr_restore_hours) as mttr_trend\n```\n\nUnderstanding this SPL\n\n**DORA Metrics Trending Dashboard** — Deployment frequency, lead time, change failure rate, and restore time summarize delivery health; month-over-month trends show whether engineering investments actually improved flow or reliability.\n\nDocumented **Data sources**: `index=devops` `sourcetype IN (\"github:workflow_run\",\"gitlab:pipeline\",\"jenkins:build\")`; production deploy tags on `environment` or `branch`. **App/TA** (typical add-on context): GitHub/GitLab/Jenkins integrations, optional Splunk DORA or custom summary searches. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:issues. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:issues\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **created_ts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **closed_ts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **restore_hr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where restore_hr>=0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1mon** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **DORA Metrics Trending Dashboard**): trendline sma3(mttr_restore_hours) as mttr_trend\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Four-panel executive dashboard (line charts per metric), Optional radar chart for normalized month scores, Table (monthly KPI table export).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track delivery tempo and stability so leaders see whether we are shipping safely and often enough.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github",
                "gitlab",
                "jenkins"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.6.2",
              "n": "Security Scan Finding Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Seeing new, open, and closed findings per sprint proves whether shift-left scanning and remediation keep pace with development; spikes in new findings after dependency upgrades are expected—flat open counts are not.",
              "t": "GitHub Advanced Security (Dependabot, code scanning), GitLab SAST/DAST, Jenkins security stage plugins",
              "d": "`index=devops` `sourcetype IN (\"github:dependabot_alert\",\"github:code_scanning\",\"gitlab:vulnerability\",\"jenkins:security_scan\")`",
              "q": "index=devops sourcetype IN (\"github:dependabot_alert\",\"gitlab:vulnerability\") earliest=-90d@d\n| eval is_open=if(lower(coalesce(state,status,\"open\"))=\"open\",1,0)\n| timechart span=7d sum(is_open) as open_findings count as total_alerts\n| eval open_pct=round(100*open_findings/nullif(total_alerts,0),1)\n| trendline sma2(open_pct) as open_trend\n| eventstats avg(open_pct) as baseline_open",
              "m": "Ingest Dependabot and SARIF or code-scanning webhooks with stable `alert_id`. For GitLab, map `state` transitions over time or snapshot daily open counts via API. Tag by `repository` and `severity`. Exclude informational severities if policy dictates. Review sprints where `open_findings` rises while merges are flat—often a supply-chain or license scan change.",
              "z": "Stacked bar (new versus closed per sprint), Line chart (open backlog trend), Treemap (findings by repo).",
              "kfp": "Trend moves with team size, release cadence, or company holidays; baseline per service line.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub Advanced Security (Dependabot, code scanning), GitLab SAST/DAST, Jenkins security stage plugins.\n• Ensure the following data sources are available: `index=devops` `sourcetype IN (\"github:dependabot_alert\",\"github:code_scanning\",\"gitlab:vulnerability\",\"jenkins:security_scan\")`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Dependabot and SARIF or code-scanning webhooks with stable `alert_id`. For GitLab, map `state` transitions over time or snapshot daily open counts via API. Tag by `repository` and `severity`. Exclude informational severities if policy dictates. Review sprints where `open_findings` rises while merges are flat—often a supply-chain or license scan change.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype IN (\"github:dependabot_alert\",\"gitlab:vulnerability\") earliest=-90d@d\n| eval is_open=if(lower(coalesce(state,status,\"open\"))=\"open\",1,0)\n| timechart span=7d sum(is_open) as open_findings count as total_alerts\n| eval open_pct=round(100*open_findings/nullif(total_alerts,0),1)\n| trendline sma2(open_pct) as open_trend\n| eventstats avg(open_pct) as baseline_open\n```\n\nUnderstanding this SPL\n\n**Security Scan Finding Trending** — Seeing new, open, and closed findings per sprint proves whether shift-left scanning and remediation keep pace with development; spikes in new findings after dependency upgrades are expected—flat open counts are not.\n\nDocumented **Data sources**: `index=devops` `sourcetype IN (\"github:dependabot_alert\",\"github:code_scanning\",\"gitlab:vulnerability\",\"jenkins:security_scan\")`. **App/TA** (typical add-on context): GitHub Advanced Security (Dependabot, code scanning), GitLab SAST/DAST, Jenkins security stage plugins. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_open** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=7d** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **open_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Security Scan Finding Trending**): trendline sma2(open_pct) as open_trend\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (new versus closed per sprint), Line chart (open backlog trend), Treemap (findings by repo).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github",
                "gitlab",
                "jenkins"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.6.3",
              "n": "Build Queue Wait Time Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Queue wait reflects runner capacity and pipeline fan-in; rising waits delay feedback and release trains even when job success rates look fine.",
              "t": "Jenkins metrics (`metrics` plugin), GitLab Runner metrics, GitHub Actions (job queue via API-derived events)",
              "d": "`index=devops` `sourcetype=\"jenkins:queue\"` or `sourcetype=\"jenkins:build\"` with `queue_wait_ms`; `sourcetype=\"gitlab:job\"` with `queued_duration`",
              "q": "index=devops (sourcetype=\"jenkins:build\" OR sourcetype=\"gitlab:job\") earliest=-30d@d\n| eval wait_sec=coalesce(queue_wait_ms/1000, queued_duration, queue_time_sec)\n| where isnotnull(wait_sec)\n| timechart span=1d avg(wait_sec) as avg_wait_sec p95(wait_sec) as p95_wait_sec\n| eval avg_wait_min=round(avg_wait_sec/60,2)\n| trendline sma7(avg_wait_min) as wait_trend\n| eventstats median(avg_wait_min) as med_wait\n| eval backlog_pressure=if(avg_wait_min > med_wait*1.5,1,0)",
              "m": "Ensure `wait_sec` excludes container provisioning if that is tracked separately. For GitHub-only shops, ingest workflow_job events with `queued_at` and `started_at` to derive wait. Split by `label` or `runner_group` to see constrained pools. Alert when p95 wait exceeds SLA for two consecutive days.",
              "z": "Line chart (average and p95 wait in minutes), Area chart (wait distribution bands), Single value (current p95 wait).",
              "kfp": "Build times vary with cache state, runner availability, or test suite expansion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Jenkins metrics (`metrics` plugin), GitLab Runner metrics, GitHub Actions (job queue via API-derived events).\n• Ensure the following data sources are available: `index=devops` `sourcetype=\"jenkins:queue\"` or `sourcetype=\"jenkins:build\"` with `queue_wait_ms`; `sourcetype=\"gitlab:job\"` with `queued_duration`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure `wait_sec` excludes container provisioning if that is tracked separately. For GitHub-only shops, ingest workflow_job events with `queued_at` and `started_at` to derive wait. Split by `label` or `runner_group` to see constrained pools. Alert when p95 wait exceeds SLA for two consecutive days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops (sourcetype=\"jenkins:build\" OR sourcetype=\"gitlab:job\") earliest=-30d@d\n| eval wait_sec=coalesce(queue_wait_ms/1000, queued_duration, queue_time_sec)\n| where isnotnull(wait_sec)\n| timechart span=1d avg(wait_sec) as avg_wait_sec p95(wait_sec) as p95_wait_sec\n| eval avg_wait_min=round(avg_wait_sec/60,2)\n| trendline sma7(avg_wait_min) as wait_trend\n| eventstats median(avg_wait_min) as med_wait\n| eval backlog_pressure=if(avg_wait_min > med_wait*1.5,1,0)\n```\n\nUnderstanding this SPL\n\n**Build Queue Wait Time Trending** — Queue wait reflects runner capacity and pipeline fan-in; rising waits delay feedback and release trains even when job success rates look fine.\n\nDocumented **Data sources**: `index=devops` `sourcetype=\"jenkins:queue\"` or `sourcetype=\"jenkins:build\"` with `queue_wait_ms`; `sourcetype=\"gitlab:job\"` with `queued_duration`. **App/TA** (typical add-on context): Jenkins metrics (`metrics` plugin), GitLab Runner metrics, GitHub Actions (job queue via API-derived events). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: jenkins:build, gitlab:job. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"jenkins:build\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **wait_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(wait_sec)` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **avg_wait_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Build Queue Wait Time Trending**): trendline sma7(avg_wait_min) as wait_trend\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• `eval` defines or adjusts **backlog_pressure** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (average and p95 wait in minutes), Area chart (wait distribution bands), Single value (current p95 wait).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch queueing and wait time so developers are not stuck behind broken runners or clogged pipelines.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github",
                "gitlab",
                "jenkins"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "12.6.4",
              "n": "Container Image Build Time Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Longer image builds slow every downstream deploy; sprint-level trends catch Dockerfile regressions, bloated layers, or registry latency before they dominate CI budgets.",
              "t": "GitLab CI, GitHub Actions, Jenkins Pipelines with Kaniko/buildkit logging",
              "d": "`index=devops` `sourcetype IN (\"gitlab:job\",\"github:workflow_job\",\"jenkins:build\")` filtered to `image`/`container`/`docker` job or stage names",
              "q": "index=devops sourcetype IN (\"gitlab:job\",\"github:workflow_job\") earliest=-90d@d\n| eval job_lower=lower(coalesce(name,job_name,workflow_name))\n| where match(job_lower,\"(?i)image|container|docker|build.*push|kaniko|buildkit\")\n| eval dur_min=round(coalesce(duration_sec,duration)/60,2)\n| eval sprint=strftime(_time,\"%Y-W%V\")\n| stats avg(dur_min) as avg_build_min median(dur_min) as med_build_min by sprint, project\n| eventstats median(avg_build_min) as fleet_med by sprint\n| eval regression=if(avg_build_min > fleet_med*1.35,1,0)\n| sort sprint project",
              "m": "Standardize job naming so filters stay reliable; alternatively maintain a lookup of pipeline IDs that produce images. Strip cache-hit jobs if duration is near-zero noise. Compare medians per repo against its own 8-sprint baseline to reduce cross-team skew. Pair with container registry pull latency dashboards when build times spike only in certain regions.",
              "z": "Line chart (average image build minutes by sprint), Bar chart (top regressing projects), Table (sprint, project, avg, median).",
              "kfp": "Trend moves with team size, release cadence, or company holidays; baseline per service line.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitLab CI, GitHub Actions, Jenkins Pipelines with Kaniko/buildkit logging.\n• Ensure the following data sources are available: `index=devops` `sourcetype IN (\"gitlab:job\",\"github:workflow_job\",\"jenkins:build\")` filtered to `image`/`container`/`docker` job or stage names.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStandardize job naming so filters stay reliable; alternatively maintain a lookup of pipeline IDs that produce images. Strip cache-hit jobs if duration is near-zero noise. Compare medians per repo against its own 8-sprint baseline to reduce cross-team skew. Pair with container registry pull latency dashboards when build times spike only in certain regions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype IN (\"gitlab:job\",\"github:workflow_job\") earliest=-90d@d\n| eval job_lower=lower(coalesce(name,job_name,workflow_name))\n| where match(job_lower,\"(?i)image|container|docker|build.*push|kaniko|buildkit\")\n| eval dur_min=round(coalesce(duration_sec,duration)/60,2)\n| eval sprint=strftime(_time,\"%Y-W%V\")\n| stats avg(dur_min) as avg_build_min median(dur_min) as med_build_min by sprint, project\n| eventstats median(avg_build_min) as fleet_med by sprint\n| eval regression=if(avg_build_min > fleet_med*1.35,1,0)\n| sort sprint project\n```\n\nUnderstanding this SPL\n\n**Container Image Build Time Trending** — Longer image builds slow every downstream deploy; sprint-level trends catch Dockerfile regressions, bloated layers, or registry latency before they dominate CI budgets.\n\nDocumented **Data sources**: `index=devops` `sourcetype IN (\"gitlab:job\",\"github:workflow_job\",\"jenkins:build\")` filtered to `image`/`container`/`docker` job or stage names. **App/TA** (typical add-on context): GitLab CI, GitHub Actions, Jenkins Pipelines with Kaniko/buildkit logging. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **job_lower** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(job_lower,\"(?i)image|container|docker|build.*push|kaniko|buildkit\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **dur_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sprint** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by sprint, project** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by sprint** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **regression** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (average image build minutes by sprint), Bar chart (top regressing projects), Table (sprint, project, avg, median).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch this part of your delivery chain for risks and reliability so issues show up before they hit customers.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github",
                "gitlab",
                "jenkins"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 4,
            "none": 0
          }
        }
      ],
      "i": 12,
      "n": "DevOps & CI/CD",
      "src": "cat-12-devops-ci-cd.md"
    },
    {
      "s": [
        {
          "i": "13.1",
          "n": "Splunk Platform Health",
          "u": [
            {
              "i": "13.1.1",
              "n": "Indexer Queue Fill Ratio",
              "c": "critical",
              "f": "beginner",
              "v": "Backed-up indexing queues cause data loss or delay. Detection enables immediate investigation of ingestion bottlenecks.",
              "t": "Monitoring Console (built-in)",
              "d": "`_internal` (metrics.log, queue metrics)",
              "q": "index=_internal sourcetype=splunkd group=queue\n| eval fill_pct=round(current_size/max_size*100,1)\n| where fill_pct > 70\n| timechart span=5m max(fill_pct) as queue_pct by name",
              "m": "Monitor parsing, merging, and typing queues via `_internal`. Alert when any queue exceeds 70% fill ratio. Investigate source of data surge (new data source, burst events). Consider parallel pipelines or additional indexers.",
              "z": "Gauge (queue fill % per pipeline), Line chart (queue fill over time), Table (queues above threshold).",
              "kfp": "Planned ingestion surges during marketing events or batch replays can park bytes in parsingQueue and typingQueue without a broken disk; pair peaks with change calendars and forwarder fan-in dashboards before you treat the cluster as sick. Summary-index and report-acceleration bursts inflate aggQueue because the indexer is doing exactly what you asked on a tight schedule; shift heavy summaries or widen search windows instead of paging like it is a mystery outage. Replication catch-up after a peer rejoin often deepens indexQueue while buckets reconcile; when tcpin is calm and disk latency is normal, let the cluster breathe and watch dwell decay rather than opening a sev-1. Mid-restart windows frequently show indexQueue backlog while pipelines reconnect; the restarts arm exists precisely so those minutes do not become false tragedies. Hot-bucket rolls and cold-path transitions can flicker queue depth for seconds; require sustained dwell beyond roll timing or correlate with bucket path errors. Search-head bundle pushes can transiently starve typing throughput even though ingestion volume is unchanged; cross-check shcluster artifact traffic before you rip out network gear. Compression or SSL renegotiation storms on tcpin can look like saturation in the connection lane while queues remain modest; validate universal forwarder settings and load balancer idle timeouts. Executive dashboards that only glance at fill ratio miss the story when pipelineinputchannel imbalance hides a single hot parallel lane; keep the channel lane attached so false calm does not survive a leadership review.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console (built-in).\n• Ensure the following data sources are available: `_internal` (metrics.log, queue metrics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor parsing, merging, and typing queues via `_internal`. Alert when any queue exceeds 70% fill ratio. Investigate source of data surge (new data source, burst events). Consider parallel pipelines or additional indexers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd group=queue\n| eval fill_pct=round(current_size/max_size*100,1)\n| where fill_pct > 70\n| timechart span=5m max(fill_pct) as queue_pct by name\n```\n\nUnderstanding this SPL\n\n**Indexer Queue Fill Ratio** — Backed-up indexing queues cause data loss or delay. Detection enables immediate investigation of ingestion bottlenecks.\n\nDocumented **Data sources**: `_internal` (metrics.log, queue metrics). **App/TA** (typical add-on context): Monitoring Console (built-in). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fill_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fill_pct > 70` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (queue fill % per pipeline), Line chart (queue fill over time), Table (queues above threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch how full the inner processing lines get on the machines that accept and store logs, and we measure how long they stay stuck that way after big restarts or traffic spikes. When something is truly backed up—not just busy for a minute—we raise a clear signal so the right team fixes it.",
              "wv": "crawl",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "13.1.2",
              "n": "Search Concurrency Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Exceeding search concurrency limits causes search skipping and degraded user experience. Monitoring guides capacity decisions.",
              "t": "Monitoring Console",
              "d": "`_internal` (scheduler logs, search dispatch)",
              "q": "index=_internal sourcetype=splunkd group=search_concurrency\n| timechart span=5m max(active_hist_searches) as historical, max(active_rt_searches) as realtime",
              "m": "Track concurrent searches vs configured limits. Alert when approaching concurrency limits. Identify resource-intensive searches consuming disproportionate capacity. Report on search workload distribution.",
              "z": "Line chart (concurrent searches over time), Gauge (% of limit), Table (top resource consumers).",
              "kfp": "Planned load and disaster exercises can pin concurrency without malfunction when change tickets document the window; require ticket correlation before executive escalation. Captain maintenance and bundle replication often create short peaks in PerProcess counts that self-resolve; pair with UC-13.1.10 timelines rather than permanent limit raises. Penetration tests and purple-team search storms inflate distinct_users and sched_events while the platform is behaving correctly; downgrade only with security calendar proof. Splunk Cloud pool adjustments may change effective throughput without local file edits; provider emails belong in the evidence pack to explain benign shifts. Metrics duplication from misconfigured forwarding can exaggerate peaks until host keys are deduplicated. Rare parser truncations can hide reaper tokens while raw files still show them; fix TRUNCATE instead of muting the alert. DST boundaries can split bins oddly; compare wall-clock histograms when incidents cluster at exactly the clock shift hour.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `_internal` (scheduler logs, search dispatch).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack concurrent searches vs configured limits. Alert when approaching concurrency limits. Identify resource-intensive searches consuming disproportionate capacity. Report on search workload distribution.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd group=search_concurrency\n| timechart span=5m max(active_hist_searches) as historical, max(active_rt_searches) as realtime\n```\n\nUnderstanding this SPL\n\n**Search Concurrency Monitoring** — Exceeding search concurrency limits causes search skipping and degraded user experience. Monitoring guides capacity decisions.\n\nDocumented **Data sources**: `_internal` (scheduler logs, search dispatch). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (concurrent searches over time), Gauge (% of limit), Table (top resource consumers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We watch how many searches run at once versus the safe limits on the reporting computers, and we notice when work gets canceled or trimmed because time or space rules were hit. When traffic truly overwhelms the gates, we raise a clear signal so owners fix schedules, roles, or capacity before everyday reporting quietly falls behind.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.3",
              "n": "Forwarder Connectivity",
              "c": "critical",
              "f": "intermediate",
              "v": "Silent forwarder failures mean data gaps that may not be noticed until an investigation fails. Detection ensures data completeness.",
              "t": "Monitoring Console, Deployment Monitor app",
              "d": "`_internal` (metrics.log — tcpin_connections)",
              "q": "index=_internal sourcetype=splunkd group=tcpin_connections\n| stats latest(_time) as last_seen by hostname, sourceIp\n| eval hours_since=round((now()-last_seen)/3600,1)\n| where hours_since > 1\n| table hostname, sourceIp, hours_since\n| sort -hours_since",
              "m": "Track last-seen timestamp per forwarder from `_internal`. Alert when any forwarder hasn't reported in >1 hour. Maintain forwarder inventory for coverage analysis. Cross-reference with host downtime events.",
              "z": "Table (silent forwarders), Single value (forwarders reporting), Status grid (forwarder × health), Bar chart (silent by location).",
              "kfp": "Planned indexer rolling restarts during approved maintenance windows elevate tcpin and tcpout churn without implying forwarder misconfiguration; require change_ticket_id correlation and dwell thresholds before paging application owners. Splunk Cloud blue-green style migrations can shift receiving endpoints while forwarders negotiate new peers; treat short autoLB rotations as benign when cloud operations publish maintenance banners and inventory env tags match the migration wave. Deployment-server reload events during phased rollouts can interrupt phone-home while TCP ingestion continues; downgrade routing when Forwarders: Instance shows healthy data receipt despite transient management gaps. Low-volume sources that legitimately go quiet overnight can mimic idle_seconds spikes unless you pair inventory criticality tags with business-hour expectations; tune severity for batch-only hosts. Monitoring Console service restarts can create brief gaps in distilled views while raw _internal remains complete; confirm Job Inspector scan coverage before muting based on MC tiles alone. Load balancer health check flaps may force rapid tcpout reconnects that look like storms; validate balancer logs alongside Splunk metrics. Certificate transparency tests in lab can raise medium severity without customer impact; gate lab hosts with env tags. Corporate proxy or TLS inspection pilots occasionally break handshakes for a subset of subnets; correlate with network change tickets. Backup WAN failovers may lengthen paths without dropping sessions; require sustained queue pressure before executive escalation.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console, Deployment Monitor app.\n• Ensure the following data sources are available: `_internal` (metrics.log — tcpin_connections).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack last-seen timestamp per forwarder from `_internal`. Alert when any forwarder hasn't reported in >1 hour. Maintain forwarder inventory for coverage analysis. Cross-reference with host downtime events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd group=tcpin_connections\n| stats latest(_time) as last_seen by hostname, sourceIp\n| eval hours_since=round((now()-last_seen)/3600,1)\n| where hours_since > 1\n| table hostname, sourceIp, hours_since\n| sort -hours_since\n```\n\nUnderstanding this SPL\n\n**Forwarder Connectivity** — Silent forwarder failures mean data gaps that may not be noticed until an investigation fails. Detection ensures data completeness.\n\nDocumented **Data sources**: `_internal` (metrics.log — tcpin_connections). **App/TA** (typical add-on context): Monitoring Console, Deployment Monitor app. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname, sourceIp** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours_since > 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Forwarder Connectivity**): table hostname, sourceIp, hours_since\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (silent forwarders), Single value (forwarders reporting), Status grid (forwarder × health), Bar chart (silent by location).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch your log couriers the way a harbor master watches tugboats: a rope can look tight while nothing moves, and we raise the alarm when the boat stops delivering cargo even if it is still tied to the dock, so crews fix the channel before the whole harbor goes quiet.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 65,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "13.1.4",
              "n": "License Usage Trending",
              "c": "high",
              "f": "intermediate",
              "v": "License overages cause enforcement (search blocking). Trending enables proactive management and capacity planning.",
              "t": "Monitoring Console",
              "d": "`_internal` (license_usage.log)",
              "q": "index=_internal sourcetype=splunkd group=license_usage\n| timechart span=1d sum(b) as bytes_indexed\n| eval gb=round(bytes_indexed/1024/1024/1024,2)\n| predict gb as predicted future_timespan=30",
              "m": "Track daily license usage against entitled volume. Alert at 80% and 90% of daily limit. Use `predict` for 30-day forecast. Identify top sourcetypes contributing to growth. Report on usage trends.",
              "z": "Line chart (daily usage with license limit line), Single value (today's usage %), Bar chart (usage by sourcetype), Gauge (% of limit).",
              "kfp": "Scheduled disaster-recovery backfill windows replay historical bytes and can push projected end-of-day percentages above polite thresholds even when live traffic is calm; require change-calendar correlation before finance escalations. Planned bulk imports and migration projects inflate Usage slopes for hours; tag those hosts in inventory or route alerts to the data-onboarding queue instead of the platform incident bridge. License stack tier changes and entitlement renewals move daily quota goalposts mid-month; refresh splunk_license_inventory.csv the same day contracts change or suffer false criticals. Weekly maintenance windows that restart license masters or peers create short reporting gaps and suppressed counters; pair rollover summaries with Maintenance metadata before paging application owners. Development and test instances that share a stack name with production in poorly curated lookups can absorb synthetic load and look like production saturation; enforce environment labels and separate pool keys. Executive summaries that only glance at absolute gigabytes miss pool reassignment noise; keep percentage and slope columns attached so false calm does not survive leadership review.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `_internal` (license_usage.log).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack daily license usage against entitled volume. Alert at 80% and 90% of daily limit. Use `predict` for 30-day forecast. Identify top sourcetypes contributing to growth. Report on usage trends.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd group=license_usage\n| timechart span=1d sum(b) as bytes_indexed\n| eval gb=round(bytes_indexed/1024/1024/1024,2)\n| predict gb as predicted future_timespan=30\n```\n\nUnderstanding this SPL\n\n**License Usage Trending** — License overages cause enforcement (search blocking). Trending enables proactive management and capacity planning.\n\nDocumented **Data sources**: `_internal` (license_usage.log). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **License Usage Trending**): predict gb as predicted future_timespan=30\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily usage with license limit line), Single value (today's usage %), Bar chart (usage by sourcetype), Gauge (% of limit).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch shared usage like an apartment tower with one main fuse and smaller daytime allowances per flat on top of a bigger monthly bill. When one kitchen runs every appliance during a heat wave, the shared meter spins faster than hallway warnings can blink; we show which line is likely to trip first and what to unplug before dinner so neighbors keep power for the basics.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.5",
              "n": "Skipped Search Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Skipped searches mean scheduled reports, alerts, and data enrichments aren't running. This creates blind spots in monitoring and compliance.",
              "t": "Monitoring Console",
              "d": "`_internal` (scheduler.log)",
              "q": "index=_internal sourcetype=scheduler status=\"skipped\"\n| stats count by savedsearch_name, reason\n| sort -count",
              "m": "Monitor scheduler logs for skipped searches. Alert when critical searches are skipped. Track skip reasons (concurrency, disabled, cron). Optimize skipped searches or increase search concurrency limits.",
              "z": "Table (skipped searches with reasons), Bar chart (top skipped searches), Line chart (skip rate trend).",
              "kfp": "Planned mass-disable of scheduled searches during a sev-1 ingestion incident legitimately spikes skipped counts with disabled reasons; require change-ticket correlation before paging application owners. Deliberate canary searches that intentionally skip to prove alerting paths can look like quota failure until inventory marks them as test objects. A search head peer outage concentrates cron on survivors; pair skip bursts with UC-13.1.10 timelines rather than assuming misconfiguration. Big quarterly roll-up jobs that run once per quarter create a single loud bin; compare against finance-approved calendars before opening capacity projects. Scheduled patch windows that restart many members in sequence can depress success denominators briefly; suppress using maintenance metadata stamps. Indexer-side slowdowns can indirectly starve search heads when bundle replication or distributed search fan-out stalls; cross-check distributed search health before blaming scheduler limits alone. Stale inventory lookups make enrichment sparse while raw skips remain real; route low-confidence rows to hygiene dashboards instead of executive pages. Executive read-only walk-throughs that trigger many interactive searches can elevate _audit volume without implying scheduled quota exhaustion; distinguish action=search audit noise from scheduler skip rates.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `_internal` (scheduler.log).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor scheduler logs for skipped searches. Alert when critical searches are skipped. Track skip reasons (concurrency, disabled, cron). Optimize skipped searches or increase search concurrency limits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=scheduler status=\"skipped\"\n| stats count by savedsearch_name, reason\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Skipped Search Detection** — Skipped searches mean scheduled reports, alerts, and data enrichments aren't running. This creates blind spots in monitoring and compliance.\n\nDocumented **Data sources**: `_internal` (scheduler.log). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: scheduler. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=scheduler. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by savedsearch_name, reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (skipped searches with reasons), Bar chart (top skipped searches), Line chart (skip rate trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We run a busy kitchen with a fixed number of stoves. When too many tickets demand the same second, some orders get skipped—not because the recipe vanished, but because every burner was busy. We catch that early, separate a true capacity crunch from unfair line rules or a badly timed rush, and steer teams to the right fix before anyone goes hungry.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.6",
              "n": "Index Size Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Index size growth affects storage costs and search performance. Trending enables proactive storage planning.",
              "t": "Monitoring Console",
              "d": "`_internal` (indexes.conf, REST API)",
              "q": "| rest /services/data/indexes\n| table title, currentDBSizeMB, maxTotalDataSizeMB, frozenTimePeriodInSecs\n| eval pct_used=round(currentDBSizeMB/maxTotalDataSizeMB*100,1)\n| sort -pct_used",
              "m": "Poll index sizes via REST API daily. Track growth rates per index. Alert when indexes approach maxTotalDataSizeMB (data will roll to frozen). Use `predict` to forecast when limits will be reached.",
              "z": "Table (index sizes with % used), Bar chart (top indexes by size), Line chart (growth trend), Gauge (% of max per index).",
              "kfp": "Planned historical backfill projects temporarily inflate cold footprints and mover chatter until buckets settle; require change calendar correlation before executive escalation. Deliberate frozenTimePeriodInSecs extensions during legal hold widen searchable windows and increase bytes resident by design; annotate CMDB legal_hold_flag so severity downgrades appropriately. Scheduled cold path migrations or volume rebases create honest spikes in splunk_disk_objects without implying runaway ingest; pair with storage team tickets. Snapshot restore staging lands thawed buckets that balloon local bytes until re-freeze completes; treat as expected during disaster recovery drills. Vendor-driven retention extensions negotiated with compliance can lag configuration updates; avoid paging the platform team for approved policy drift when documentation exists. Rolling indexer upgrades and rolling restarts increase transient BucketMover volume; compare against maintenance metadata and UC-13.1.11 replication context before treating as saturation. SmartStore cache churn during approved large historical searches can mimic distress; validate search concurrency and object retrieval dashboards. Replication factor increases and site-aware policy changes stepwise multiply resident copies; update inventory expected_max_bytes rather than assuming stealth growth. Executive dashboards that only glance at total bytes miss bucket count explosions; keep bucket_dc in the narrative to prevent false calm. Stale splunk_index_inventory.csv rows with tiny expected_max_bytes create artificial critical severities until refreshed weekly automation lands.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `_internal` (indexes.conf, REST API).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll index sizes via REST API daily. Track growth rates per index. Alert when indexes approach maxTotalDataSizeMB (data will roll to frozen). Use `predict` to forecast when limits will be reached.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/data/indexes\n| table title, currentDBSizeMB, maxTotalDataSizeMB, frozenTimePeriodInSecs\n| eval pct_used=round(currentDBSizeMB/maxTotalDataSizeMB*100,1)\n| sort -pct_used\n```\n\nUnderstanding this SPL\n\n**Index Size Trending** — Index size growth affects storage costs and search performance. Trending enables proactive storage planning.\n\nDocumented **Data sources**: `_internal` (indexes.conf, REST API). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Pipeline stage (see **Index Size Trending**): table title, currentDBSizeMB, maxTotalDataSizeMB, frozenTimePeriodInSecs\n• `eval` defines or adjusts **pct_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (index sizes with % used), Bar chart (top indexes by size), Line chart (growth trend), Gauge (% of max per index).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We treat each data store like a warehouse with a finite floor and a lease end date. We measure how fast pallets stack, how often crates shuffle to the back room, and when remote overflow keeps getting shuffled in and out, so leaders see a calm countdown instead of a locked-door surprise.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.7",
              "n": "KV Store Health",
              "c": "high",
              "f": "beginner",
              "v": "KV Store failures break lookups, app functionality, and ES correlation. Health monitoring prevents cascading application issues.",
              "t": "Monitoring Console",
              "d": "`_internal` (kvstore logs)",
              "q": "index=_internal sourcetype=splunkd component=KVStoreServlet OR component=KvStore\n| search log_level=ERROR OR log_level=WARN\n| stats count by host, log_level, message\n| sort -count",
              "m": "Monitor KV Store logs for errors and replication issues. Track replication lag between SHC members. Alert on KV Store service unavailability. Monitor collection sizes for capacity planning.",
              "z": "Status grid (SHC member × KV Store health), Table (KV Store errors), Line chart (replication lag).",
              "kfp": "Approved maintenance that rolls KV members in sequence will temporarily raise oplog_lag_seconds and flip replication verbs until secondaries catch up; require change windows and UC-13.1.10 captain stability context before paging rule authors. Nightly backup or resync jobs can inflate db_size_bytes and IOPS without user-visible errors; compare spikes to backup schedules and snapshot tooling. Lab clusters that import production bundles for testing may show huge notable_audit or incident_review footprints that are expected in non-production; use inventory criticality tags to downgrade routing. Sparse mongod sourcetype adoption on some hosts can make the internal_mongod arm quiet even when introspection is healthy; validate forwarding policy instead of muting the alert. REST /services/kvstore/status from search heads in strict lockdown may be blocked while introspection still flows from peers; prefer indexed REST pollers or service accounts rather than silencing replication logic. Version upgrades sometimes rename introspection keys until coalesce ladders are refreshed; expect short false warn bursts after upgrades if parsers lag documentation. Penetration or vulnerability scanners that hammer management ports can elevate connections_active without genuine lookup storms; pair with job inspector before blaming correlation searches. Duplicate forwarders indexing the same introspection stream can skew growth math until dedupe summaries land. Executive dashboards that only watch replication_status green may miss eviction_pressure warnings that still merit tuning during batch outputlookup seasons.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `_internal` (kvstore logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor KV Store logs for errors and replication issues. Track replication lag between SHC members. Alert on KV Store service unavailability. Monitor collection sizes for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd component=KVStoreServlet OR component=KvStore\n| search log_level=ERROR OR log_level=WARN\n| stats count by host, log_level, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**KV Store Health** — KV Store failures break lookups, app functionality, and ES correlation. Health monitoring prevents cascading application issues.\n\nDocumented **Data sources**: `_internal` (kvstore logs). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, log_level, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (SHC member × KV Store health), Table (KV Store errors), Line chart (replication lag).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We watch the small database that keeps apps, lookups, and investigations in sync across cluster members. When it falls behind, runs out of breathing room, or cannot copy changes cleanly, we raise a clear signal before screens go blank and jobs that write tables start piling up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.8",
              "n": "Deployment Server Status",
              "c": "medium",
              "f": "beginner",
              "v": "Deployment server issues prevent app/config distribution to forwarders, leaving them with stale or incorrect configurations.",
              "t": "Monitoring Console",
              "d": "`_internal` (deployment server logs)",
              "q": "index=_internal sourcetype=splunkd component=DeploymentServer\n| search log_level=ERROR\n| stats count by message, host\n| sort -count",
              "m": "Monitor deployment server logs for errors. Track successful vs failed deployments to clients. Alert on deployment failures. Verify client phone-home intervals are within expected ranges.",
              "z": "Table (deployment errors), Single value (clients checking in), Bar chart (failures by server class).",
              "kfp": "Approved deployment server reloads during enterprise change windows spike DeploymentServer WARN lines and temporarily skew phone-home metrics without implying data loss; require change_ticket_id correlation and dwell thresholds before paging application owners. Load-balanced deployment server pairs may concentrate REST poll snapshots on whichever peer answered the management VIP during that interval; compare multisearch lanes across peers before treating partial client lists as truth. Universal forwarders that intentionally disable deploymentclient for air-gapped enclave roles will surface absent REST rows until inventory tier_tag documents the exception; do not route those hosts to the same severity ladder as production bulk forwarders. Large Splunk Cloud management plane migrations can reorder serverclass precedence while TCP forwarding remains healthy; pair this UC with platform communications rather than muting based solely on tcpout dashboards. REST collectors that parse timestamps in local time zones can miscompute ph_drift_sec until you normalize to UTC in props.conf. Certificate transparency tests in lab can raise checksum drift hints when administrators stage apps without promoting repository_checksum to production classes. Duplicate HTTP Event Collector submissions from redundant pollers can double client rows until dedupe logic lands in summary indexes. Coverage_gap_pct math depends on accurate subscribedCount fields from your REST exporter; a bug that zeroes subscribers creates false calm rather than false alarms, so validate exporter unit tests quarterly.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `_internal` (deployment server logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor deployment server logs for errors. Track successful vs failed deployments to clients. Alert on deployment failures. Verify client phone-home intervals are within expected ranges.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd component=DeploymentServer\n| search log_level=ERROR\n| stats count by message, host\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Deployment Server Status** — Deployment server issues prevent app/config distribution to forwarders, leaving them with stale or incorrect configurations.\n\nDocumented **Data sources**: `_internal` (deployment server logs). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by message, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (deployment errors), Single value (clients checking in), Bar chart (failures by server class).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We watch the central machine that hands out the rulebook to every log collector. When that machine stumbles, collectors keep an old rulebook and the wrong destinations or wrong files slip by quietly, so we raise a clear signal for the people who keep the platform honest.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.9",
              "n": "Data Ingestion Latency",
              "c": "high",
              "f": "intermediate",
              "v": "High indexing latency (difference between event time and index time) means stale data for searches. Detection enables root cause analysis.",
              "t": "Monitoring Console",
              "d": "Any index (sampling `_time` vs `_indextime`)",
              "q": "(index=main OR index=security OR index=os OR index=windows) earliest=-15m\n| eval latency=_indextime-_time\n| stats avg(latency) as avg_latency, perc95(latency) as p95_latency by index, sourcetype\n| where p95_latency > 300\n| sort -p95_latency",
              "m": "Sample events periodically and calculate `_indextime` minus `_time`. Alert when p95 latency exceeds 5 minutes for critical sourcetypes. Investigate queue buildup, network latency, or time parsing issues.",
              "z": "Table (sourcetypes with high latency), Line chart (latency trend), Bar chart (latency by sourcetype).",
              "kfp": "Batch-mode and object-store pull add-ons such as AWS S3-based modular inputs legitimately deliver large historical slices where event timestamps trail wall clock by hours even though indexing is healthy; gate those sourcetypes with lookup batch_mode flags before paging application teams. Intentional historical re-ingest and backfill projects inflate p95 lag without implying a live outage; require change_ticket_id correlation and temporary SLA overrides in splunk_sourcetype_sla_targets.csv. Legacy hosts with wrong time zones or manual clock drift can show enormous stale_onboard_sec while universal forwarders remain connected; pair with NTP checks rather than assuming indexer sickness. Scripted inputs that sleep between polls and container sidecars that batch stdout lines can create smooth lag distributions that look like indexing pathology. INDEXED_EXTRACTIONS=json and other late parsing stages can widen p95 lag during surge windows even when mean lag looks fine; validate on a canary index before muting. Cross-site replication and searchable copy lag change how quickly peers see identical buckets but are not the primary signal here; defer bucket narratives to UC-13.1.11 when copies, not event-time-to-index-time drift, dominate. Monitoring Console maintenance or search affinity moves can make raw sampling arms sparse; confirm Job Inspector scan counts before trusting quiet severities. Development clusters that replay PCAP or synthetic traces hourly will trip stale thresholds unless non-production sourcetypes carry explicit SLA relaxations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: Any index (sampling `_time` vs `_indextime`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSample events periodically and calculate `_indextime` minus `_time`. Alert when p95 latency exceeds 5 minutes for critical sourcetypes. Investigate queue buildup, network latency, or time parsing issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=main OR index=security OR index=os OR index=windows) earliest=-15m\n| eval latency=_indextime-_time\n| stats avg(latency) as avg_latency, perc95(latency) as p95_latency by index, sourcetype\n| where p95_latency > 300\n| sort -p95_latency\n```\n\nUnderstanding this SPL\n\n**Data Ingestion Latency** — High indexing latency (difference between event time and index time) means stale data for searches. Detection enables root cause analysis.\n\nDocumented **Data sources**: Any index (sampling `_time` vs `_indextime`). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: main, security, os, windows.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=main, index=security, index=os, index=windows…. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **latency** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by index, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_latency > 300` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sourcetypes with high latency), Line chart (latency trend), Bar chart (latency by sourcetype).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We measure how far behind the timestamps on your events are compared to when they actually became searchable, and we flag sources that look stuck in the past so teams can fix clocks, batch feeds, or heavy parsing before leaders think the data is fresh when it is not.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.10",
              "n": "Search Head Cluster Status",
              "c": "critical",
              "f": "intermediate",
              "v": "SHC member failures affect user access and search capacity. Captain election issues can cause complete SHC outage.",
              "t": "Monitoring Console",
              "d": "`_internal` (SHC logs, REST endpoints)",
              "q": "| rest /services/shcluster/member/members\n| table label, status, last_heartbeat, replication_count\n| eval heartbeat_age=now()-last_heartbeat\n| where status!=\"Up\" OR heartbeat_age > 300",
              "m": "Monitor SHC member health via REST API. Track captain status and election events. Alert on member disconnection or replication failures. Monitor artifact replication lag between members.",
              "z": "Status grid (SHC member × status), Table (member health), Timeline (captain election events).",
              "kfp": "Captain handover during a planned upgrade or dynamic captain rebuild can look like an election storm until you correlate change_ticket_id metadata and maintenance banners; require dwell and inventory tags before paging application owners. Deployer push exercises on a canary cluster intentionally generate REST errors in lower environments; gate non-production clusters with env labels so synthetic failures never wake production on-call. Blue-green style migrations across stacks can elevate captain churn and KV catch-up lag for hours while operations validate parity; treat elevated medium severity as expected until the migration runbook declares cutover complete. Planned member maintenance removes peers from service by design; member_offline_count equals one in those windows is often benign when the change calendar shows a rolling restart and remaining members stay healthy. RBAC user-load tests and executive drill sessions can spike CPU on members without implying consensus failure; pair member_p95_cpu_pct with concurrent search counts from introspection before declaring cluster instability. Cold replica catch-up after backup restores or storage remediation inflates kv_store_oplog_lag_ms without data loss; validate lag decay slopes and disk latency rather than opening a sev-1 on the first high sample. Brief conf-replication spikes during large knowledge object deployments can exceed quiet baselines until peers converge; compare ConfReplicationThread density against deployer completion before muting. Monitoring Console service restarts can create short gaps in distilled tiles while raw _internal remains complete; confirm Job Inspector scan coverage before trusting an empty MC tile as truth. Certificate transparency or vulnerability scanning tools that open TLS probes against peer ports may log alarming REST narratives without user impact; correlate scanner schedules with SHCRESTAPI noise. Search affinity misconfiguration can run this saved search on a host that does not forward all member logs, producing false calm; validate distributed search peer list and index access roles quarterly.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `_internal` (SHC logs, REST endpoints).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor SHC member health via REST API. Track captain status and election events. Alert on member disconnection or replication failures. Monitor artifact replication lag between members.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/shcluster/member/members\n| table label, status, last_heartbeat, replication_count\n| eval heartbeat_age=now()-last_heartbeat\n| where status!=\"Up\" OR heartbeat_age > 300\n```\n\nUnderstanding this SPL\n\n**Search Head Cluster Status** — SHC member failures affect user access and search capacity. Captain election issues can cause complete SHC outage.\n\nDocumented **Data sources**: `_internal` (SHC logs, REST endpoints). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Pipeline stage (see **Search Head Cluster Status**): table label, status, last_heartbeat, replication_count\n• `eval` defines or adjusts **heartbeat_age** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where status!=\"Up\" OR heartbeat_age > 300` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (SHC member × status), Table (member health), Timeline (captain election events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch the small council that runs interactive search like judges sharing a single gavel: when the chair changes too often, the shared binder stops matching, or the backstage clerk falls behind, we call the crew before the room hears silence.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 55,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "13.1.11",
              "n": "Indexer Cluster Bucket Replication",
              "c": "critical",
              "f": "intermediate",
              "v": "Under-replicated buckets mean data is at risk of loss. Monitoring ensures the replication factor is maintained.",
              "t": "Monitoring Console",
              "d": "`_internal` (CM logs, REST endpoints)",
              "q": "| rest /services/cluster/master/buckets\n| where search_factor_met=0 OR replication_factor_met=0\n| stats count as non_compliant_buckets",
              "m": "Monitor cluster master/manager REST endpoints. Track replication and search factor compliance. Alert on any buckets not meeting the configured factor. Investigate cause (indexer down, disk full, network issues).",
              "z": "Single value (non-compliant buckets — target: 0), Table (non-compliant bucket details), Line chart (compliance trend).",
              "kfp": "Peer rejoin storms after maintenance windows routinely spike CMReplicationManager chatter until searchable copies settle; require correlation with peer_down_cnt and change metadata before paging application owners. Intentional rolling restarts of the indexer cluster increase CMRepJob churn without implying corruption; compare against approved trains and baseline fixup half-life. Deliberate de-replication of low-tier indexes or sandbox paths can look like replication factor noncompliance until inventory allowlists exclude those indexes. Capacity-tuned bucket fixup throttles intentionally slow fixup to protect disk and network; pair with fair-queue documentation before declaring an incident. Bundle push windows during brief peer offline periods can amplify generation language in CMMaster without data loss; cross-check UC-13.1.25 timelines. Planned multisite failover exercises shift searchable copies across sites and can elevate audit counts and fixup density for hours. Summary buckets and summary processors sometimes follow different searchable expectations than primary hot buckets; tune severity for summary indexes with architecture sign-off. Lab clusters that constantly add and remove peers for automation tests will fire audit churn thresholds unless non-production labels downgrade routing. Stale REST credentials can empty the peers arm while `_internal` remains rich; fix token rotation before muting. Duplicate forwarders that double-submit `_internal` can inflate counts until dedupe logic lands in summary indexes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `_internal` (CM logs, REST endpoints).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor cluster master/manager REST endpoints. Track replication and search factor compliance. Alert on any buckets not meeting the configured factor. Investigate cause (indexer down, disk full, network issues).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/cluster/master/buckets\n| where search_factor_met=0 OR replication_factor_met=0\n| stats count as non_compliant_buckets\n```\n\nUnderstanding this SPL\n\n**Indexer Cluster Bucket Replication** — Under-replicated buckets mean data is at risk of loss. Monitoring ensures the replication factor is maintained.\n\nDocumented **Data sources**: `_internal` (CM logs, REST endpoints). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Filters the current rows with `where search_factor_met=0 OR replication_factor_met=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (non-compliant buckets — target: 0), Table (non-compliant bucket details), Line chart (compliance trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the copying and safety nets that keep search data consistent across machines. When that work piles up, peers flap, or copies fall out of policy, we raise a clear signal so teams fix the foundation before people notice missing or slow results.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.12",
              "n": "HEC Endpoint Health",
              "c": "high",
              "f": "beginner",
              "v": "HEC is a primary data ingestion path. Failures silently drop data from applications, containers, and cloud services.",
              "t": "Monitoring Console",
              "d": "`_internal` (http_event_collector logs)",
              "q": "index=_internal sourcetype=splunkd component=HttpEventCollector\n| stats count(eval(log_level=\"ERROR\")) as errors, count as total by host\n| eval error_rate=round(errors/total*100,2)\n| where error_rate > 1",
              "m": "Monitor HEC endpoint health and error rates. Track HTTP status codes returned to clients. Alert on elevated error rates (4xx, 5xx). Monitor HEC token usage for capacity planning and security.",
              "z": "Single value (HEC error rate), Line chart (HEC throughput), Table (errors by token), Bar chart (status codes).",
              "kfp": "Legitimate token deprecation cleanup windows can still show traffic on retired tokens until all clients restart; require inventory status equals deprecated with an explicit sunset timestamp before muting unknown-token narratives. Planned load balancer rebalance or target group edits shift splunk-hec-instance cardinality and clientip paths without implying abuse; correlate with networking change tickets. Edge Processor or Observability Cloud canary onboarding from new source addresses inflates clientip_richness until baselines refresh; annotate hec_token_inventory.csv with canary CIDR notes. Scheduled compression toggles on Firehose or ingress proxies change gzip versus identity ratios and can trip gzip_hint heuristics during the transition hour. Indexer cluster rolling restarts disturb acknowledgment state for clients using ack equals one; expect transient tcpin proxy spikes paired with maintenance calendars rather than silent data loss. AWS Firehose backfill replay or S3 re-drive jobs can saturate tokens with high 429 rates without misconfiguration; require data engineering announcements. Marketing bursts and batch analytics replays inflate quota_pressure on known-good tokens; use expected_daily_volume_gb notes and change_ticket_id fields. Certificate rotations on ingress may move failures entirely into proxy logs while splunkd_access looks quiet; pair with UC-13.1.15 style checks outside this search. Penetration tests may hammer collector paths; require SOC calendar correlation. Splunk Cloud tenants with redacted internal fields may show sparse metrics arms while Monitoring Console remains authoritative; treat absent metrics as collection debt. Developer laptops posting experimental JSON to production tokens can mimic handler errors; validate IP ownership against CMDB.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `_internal` (http_event_collector logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor HEC endpoint health and error rates. Track HTTP status codes returned to clients. Alert on elevated error rates (4xx, 5xx). Monitor HEC token usage for capacity planning and security.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd component=HttpEventCollector\n| stats count(eval(log_level=\"ERROR\")) as errors, count as total by host\n| eval error_rate=round(errors/total*100,2)\n| where error_rate > 1\n```\n\nUnderstanding this SPL\n\n**HEC Endpoint Health** — HEC is a primary data ingestion path. Failures silently drop data from applications, containers, and cloud services.\n\nDocumented **Data sources**: `_internal` (http_event_collector logs). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate > 1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (HEC error rate), Line chart (HEC throughput), Table (errors by token), Bar chart (status codes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We watch the secure web entry points where cloud pipelines and agents hand off machine data. When those entry points reject credentials, throttle traffic, or fall behind confirming deliveries, we raise a clear signal so platform teams fix routing and capacity before dashboards go quiet.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.13",
              "n": "Sourcetype Breakdown Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Understanding data volume per sourcetype enables cost optimization, retention tuning, and unexpected growth detection.",
              "t": "Monitoring Console",
              "d": "`_internal` (license_usage.log)",
              "q": "index=_internal sourcetype=splunkd group=license_usage\n| stats sum(b) as bytes by st\n| eval gb=round(bytes/1024/1024/1024,2)\n| sort -gb\n| head 20",
              "m": "Track daily volume per sourcetype. Identify top consumers. Alert on sourcetypes with unexpected growth (>20% week-over-week). Use for license optimization and retention policy tuning.",
              "z": "Bar chart (top sourcetypes by volume), Pie chart (volume distribution), Line chart (growth trend for top sourcetypes).",
              "kfp": "Capacity moves with archive work, new data, or a short burst. We compare with growth plans before a spend jump.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `_internal` (license_usage.log).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack daily volume per sourcetype. Identify top consumers. Alert on sourcetypes with unexpected growth (>20% week-over-week). Use for license optimization and retention policy tuning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd group=license_usage\n| stats sum(b) as bytes by st\n| eval gb=round(bytes/1024/1024/1024,2)\n| sort -gb\n| head 20\n```\n\nUnderstanding this SPL\n\n**Sourcetype Breakdown Trending** — Understanding data volume per sourcetype enables cost optimization, retention tuning, and unexpected growth detection.\n\nDocumented **Data sources**: `_internal` (license_usage.log). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by st** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top sourcetypes by volume), Pie chart (volume distribution), Line chart (growth trend for top sourcetypes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.14",
              "n": "Long-Running Search Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Long-running searches consume shared resources and may indicate poorly written SPL or excessive time ranges.",
              "t": "Monitoring Console",
              "d": "`_internal` (scheduler, search audit log)",
              "q": "index=_audit action=search info=completed\n| where total_run_time > 600\n| table _time, user, savedsearch_name, total_run_time, scan_count, event_count\n| sort -total_run_time",
              "m": "Monitor search audit log for long-running searches (>10 minutes). Alert on searches consuming excessive resources. Identify optimization opportunities. Report on top resource-consuming searches weekly.",
              "z": "Table (long-running searches), Bar chart (top consumers by run time), Line chart (long search count trend).",
              "kfp": "Admin spikes from automation, break-glass, and app pushes. We check change records before a single run looks like a takeover.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `_internal` (scheduler, search audit log).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor search audit log for long-running searches (>10 minutes). Alert on searches consuming excessive resources. Identify optimization opportunities. Report on top resource-consuming searches weekly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit action=search info=completed\n| where total_run_time > 600\n| table _time, user, savedsearch_name, total_run_time, scan_count, event_count\n| sort -total_run_time\n```\n\nUnderstanding this SPL\n\n**Long-Running Search Detection** — Long-running searches consume shared resources and may indicate poorly written SPL or excessive time ranges.\n\nDocumented **Data sources**: `_internal` (scheduler, search audit log). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where total_run_time > 600` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Long-Running Search Detection**): table _time, user, savedsearch_name, total_run_time, scan_count, event_count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (long-running searches), Bar chart (top consumers by run time), Line chart (long search count trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.15",
              "n": "Splunk Certificate Expiration",
              "c": "critical",
              "f": "intermediate",
              "v": "Expired Splunk internal certificates break inter-component communication (forwarder→indexer, SHC replication, etc.).",
              "t": "Monitoring Console, scripted input",
              "d": "`_internal` (splunkd certificate warnings), certificate check script",
              "q": "index=_internal sourcetype=splunkd \"certificate\" (\"expire\" OR \"expiration\" OR \"not yet valid\")\n| stats count by host, message",
              "m": "Monitor `_internal` for certificate-related warnings. Deploy scripted input to check Splunk certificate files directly. Alert at 30, 14, and 7 days before expiry. Document certificate renewal procedure.",
              "z": "Table (certificates with expiry), Single value (days until nearest expiry), Status grid (component × cert status).",
              "kfp": "Planned grace-window rotations with overlapping validity can look critical until modular inputs catch the new notAfter; require change_ticket correlation before paging application owners. Vendor-issued short-lived certificates that roll every twenty-four hours in integration sandboxes will breach thirty-day thresholds by design; tag those environments as lab in lookups. Test stacks that intentionally use self-signed material will fire self_signed_in_prod unless environment labels downgrade routing. Certificate introspection logs go quiet on idle peers until reload; pair silent PEM lanes with scheduled inventory to avoid false confidence. Penetration scanners sometimes report TLS negotiation quirks on management ports that do not reflect Splunk application failure modes; triage with openssl s_client before treating as outage. Duplicate forwarders double-submit infra_certs events and can skew streamstats until dedupe summaries land. Stale CMDB splunk_role_inventory rows may null owner_team even when PKI lookup is correct; refresh cmdb feeds before muting. Load balancer certificates can renew while origin splunkd certs remain old; suppress edge-only rows only when origin inventory explicitly tracks both layers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console, scripted input.\n• Ensure the following data sources are available: `_internal` (splunkd certificate warnings), certificate check script.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor `_internal` for certificate-related warnings. Deploy scripted input to check Splunk certificate files directly. Alert at 30, 14, and 7 days before expiry. Document certificate renewal procedure.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd \"certificate\" (\"expire\" OR \"expiration\" OR \"not yet valid\")\n| stats count by host, message\n```\n\nUnderstanding this SPL\n\n**Splunk Certificate Expiration** — Expired Splunk internal certificates break inter-component communication (forwarder→indexer, SHC replication, etc.).\n\nDocumented **Data sources**: `_internal` (splunkd certificate warnings), certificate check script. **App/TA** (typical add-on context): Monitoring Console, scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (certificates with expiry), Single value (days until nearest expiry), Status grid (component × cert status).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We treat machine certificates like passports at every embassy desk in your estate: when a stamp goes stale, the wrong issuer signs the page, or the name on the ID does not match the door label, trusted hallways suddenly lock. We catch those mismatches early so teams renew and reseal everything before travelers notice the silence.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.16",
              "n": "Parsing Queue Health (_internal)",
              "c": "high",
              "f": "beginner",
              "v": "A saturated parsing queue delays ingestion and increases `_indextime` lag. Tracking fill ratio and blocked state isolates pipeline bottlenecks before data loss.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunkd` `group=queue` (parsingqueue)",
              "q": "index=_internal sourcetype=splunkd group=queue name=*parsing*\n| eval fill_pct=if(max_size>0, round(current_size/max_size*100,1), null())\n| where fill_pct > 70 OR is_blocked=1\n| timechart span=5m max(fill_pct) as max_fill by host, name",
              "m": "Filter `metrics.log` queue metrics for parsing queue names. Alert on sustained fill >70% or `is_blocked`. Correlate with new sourcetypes, regex-heavy props, or indexer CPU.",
              "z": "Gauge (parsing fill %), Line chart (fill by host), Table (queues above threshold).",
              "kfp": "Queues fill during indexing surges, search peaks, or after restart. We rule out maintenance, bundle work, and traffic spikes before we chase a single node.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` `group=queue` (parsingqueue).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFilter `metrics.log` queue metrics for parsing queue names. Alert on sustained fill >70% or `is_blocked`. Correlate with new sourcetypes, regex-heavy props, or indexer CPU.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd group=queue name=*parsing*\n| eval fill_pct=if(max_size>0, round(current_size/max_size*100,1), null())\n| where fill_pct > 70 OR is_blocked=1\n| timechart span=5m max(fill_pct) as max_fill by host, name\n```\n\nUnderstanding this SPL\n\n**Parsing Queue Health (_internal)** — A saturated parsing queue delays ingestion and increases `_indextime` lag. Tracking fill ratio and blocked state isolates pipeline bottlenecks before data loss.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` `group=queue` (parsingqueue). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fill_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fill_pct > 70 OR is_blocked=1` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host, name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (parsing fill %), Line chart (fill by host), Table (queues above threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full our indexing and processing queues are so we can act before data stacks up, slows, or is dropped.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.17",
              "n": "Merging Queue Health (_internal)",
              "c": "high",
              "f": "beginner",
              "v": "Merging queue backlog delays structured indexing and can block hot buckets. Monitoring prevents silent pipeline stall on heavy merge workloads.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunkd` `group=queue` (merging / agg queues)",
              "q": "index=_internal sourcetype=splunkd group=queue name=*merge*\n| eval fill_pct=if(max_size>0, round(current_size/max_size*100,1), null())\n| where fill_pct > 70\n| stats latest(fill_pct) as fill_pct, latest(current_size) as depth by host, name\n| sort -fill_pct",
              "m": "Track merging-related queue names per indexer. Alert on high fill or rapid growth. Correlate with index volume, replication, and disk I/O.",
              "z": "Line chart (merge queue depth over time), Table (top merging queues).",
              "kfp": "Queues fill during indexing surges, search peaks, or after restart. We rule out maintenance, bundle work, and traffic spikes before we chase a single node.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` `group=queue` (merging / agg queues).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack merging-related queue names per indexer. Alert on high fill or rapid growth. Correlate with index volume, replication, and disk I/O.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd group=queue name=*merge*\n| eval fill_pct=if(max_size>0, round(current_size/max_size*100,1), null())\n| where fill_pct > 70\n| stats latest(fill_pct) as fill_pct, latest(current_size) as depth by host, name\n| sort -fill_pct\n```\n\nUnderstanding this SPL\n\n**Merging Queue Health (_internal)** — Merging queue backlog delays structured indexing and can block hot buckets. Monitoring prevents silent pipeline stall on heavy merge workloads.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` `group=queue` (merging / agg queues). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fill_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fill_pct > 70` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (merge queue depth over time), Table (top merging queues).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full our indexing and processing queues are so we can act before data stacks up, slows, or is dropped.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.18",
              "n": "Typing Queue Health (_internal)",
              "c": "high",
              "f": "beginner",
              "v": "The typing queue applies rules to parsed events; backlog here delays field extraction and routing to indexes.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunkd` `group=queue` (typingqueue)",
              "q": "index=_internal sourcetype=splunkd group=queue name=*typing*\n| eval fill_pct=if(max_size>0, round(current_size/max_size*100,1), null())\n| timechart span=5m max(fill_pct) as max_fill by host\n| where max_fill > 65",
              "m": "Monitor typing queue fill and blocked state. Tune props/transforms if chronic backlog. Correlate with high-cardinality lookups or expensive `EVAL` in transforms.",
              "z": "Area chart (typing queue fill), Single value (worst host).",
              "kfp": "Queues fill during indexing surges, search peaks, or after restart. We rule out maintenance, bundle work, and traffic spikes before we chase a single node.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` `group=queue` (typingqueue).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor typing queue fill and blocked state. Tune props/transforms if chronic backlog. Correlate with high-cardinality lookups or expensive `EVAL` in transforms.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd group=queue name=*typing*\n| eval fill_pct=if(max_size>0, round(current_size/max_size*100,1), null())\n| timechart span=5m max(fill_pct) as max_fill by host\n| where max_fill > 65\n```\n\nUnderstanding this SPL\n\n**Typing Queue Health (_internal)** — The typing queue applies rules to parsed events; backlog here delays field extraction and routing to indexes.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` `group=queue` (typingqueue). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fill_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where max_fill > 65` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (typing queue fill), Single value (worst host).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full our indexing and processing queues are so we can act before data stacks up, slows, or is dropped.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.19",
              "n": "TCP Output Connection Failures (_internal)",
              "c": "critical",
              "f": "intermediate",
              "v": "Forwarders and intermediate tiers rely on TCP out to indexers. Connection failures mean data is queued or dropped at the source.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunkd` (TcpOutputProc, `group=tcpout_connections`)",
              "q": "index=_internal sourcetype=splunkd (TcpOutputProc OR group=tcpout_connections)\n| search \"connection\" AND (\"failed\" OR \"refused\" OR \"timed out\" OR \"broken pipe\")\n| stats count by host, destIp, destPort\n| sort -count",
              "m": "Forward `splunkd` logs with TCP output connection events. Alert on rising failure counts per destination. Verify indexer reachability, certificates, and firewall paths.",
              "z": "Table (failures by destination), Line chart (failure rate), Status grid (forwarder × output group).",
              "kfp": "Scheduled indexer rolling restarts during approved change windows produce short TcpOutputProc bursts while autoLB drains peers and forwarders reconnect; require sustained fifteen minute counts outside inventory maintenance and Monitoring Console correlation before paging. Planned cluster maintenance with explicit scheduled_maint_window values should downgrade severity to LOW when lookup hygiene is correct. autoLBFrequency misconfigured far too low can create benign reconnect churn that resembles outages; tune against baseline guidance and compare to fleet median before incidents open. Intermediate heavy forwarders moving between indexer peers via DNS rotation generate connection closes that are operationally normal during capacity moves. SOCKS proxies or TLS inspection appliances sometimes reset long-lived splunktcp sessions during their own certificate rotations; pair Splunk timelines with proxy change records. Apple silicon or other forwarder hosts occasionally exhibit kernel networking quirks that produce intermittent TcpOutputProc warnings without indexer faults; validate with host OS patches and controlled pcaps. Queue-full conditions driven by indexer CPU saturation can look like network loss until metrics_tcpout_queue shows depth growth while firewall logs stay quiet; treat as capacity not ACL. Load balancer affinity or drain policies may reset sessions benignly when backends leave pools; correlate with load balancer events. Duplicate _internal ingestion from mis-rotated heavy forwarders can double-count failure rows until dedupe summaries land. Lab forwarders pointed at dark addresses as part of bootstrap testing create refused connections that should never page production bridges. Penetration tests that port-scan indexers may generate firewall deny rows in Network_Traffic without Splunk functional loss; triage with change tickets.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` (TcpOutputProc, `group=tcpout_connections`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward `splunkd` logs with TCP output connection events. Alert on rising failure counts per destination. Verify indexer reachability, certificates, and firewall paths.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd (TcpOutputProc OR group=tcpout_connections)\n| search \"connection\" AND (\"failed\" OR \"refused\" OR \"timed out\" OR \"broken pipe\")\n| stats count by host, destIp, destPort\n| sort -count\n```\n\nUnderstanding this SPL\n\n**TCP Output Connection Failures (_internal)** — Forwarders and intermediate tiers rely on TCP out to indexers. Connection failures mean data is queued or dropped at the source.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` (TcpOutputProc, `group=tcpout_connections`). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, destIp, destPort** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failures by destination), Line chart (failure rate), Status grid (forwarder × output group).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We help you know when the private roads your machines use to deliver their journals to the central library are what failed. When those roads get fussy, pages pile up or vanish quietly, and we ring a bell so crews fix the path before anyone assumes the library stayed healthy on its own.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.20",
              "n": "Modular Input Errors (_internal)",
              "c": "high",
              "f": "intermediate",
              "v": "Scripted and modular inputs drive critical ingestion; errors stop or corrupt data collection without obvious UI failure.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunkd` (ModularInputs, ExecProcessor)",
              "q": "index=_internal sourcetype=splunkd (component=ModularInputs OR component=ExecProcessor)\n| search log_level=ERROR OR log_level=FATAL\n| stats count by host, stanza, message\n| sort -count",
              "m": "Alert on ERROR/FATAL from modular inputs. Map `stanza` to `inputs.conf`. Verify script paths, credentials, and API rate limits.",
              "z": "Table (modular input errors), Timeline (error bursts).",
              "kfp": "Signals move after deploys, config edits, and traffic shifts. We compare with the right product view and a short history before an outage page.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` (ModularInputs, ExecProcessor).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on ERROR/FATAL from modular inputs. Map `stanza` to `inputs.conf`. Verify script paths, credentials, and API rate limits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd (component=ModularInputs OR component=ExecProcessor)\n| search log_level=ERROR OR log_level=FATAL\n| stats count by host, stanza, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Modular Input Errors (_internal)** — Scripted and modular inputs drive critical ingestion; errors stop or corrupt data collection without obvious UI failure.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` (ModularInputs, ExecProcessor). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, stanza, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (modular input errors), Timeline (error bursts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.21",
              "n": "Data Model Acceleration Status (_internal)",
              "c": "medium",
              "f": "intermediate",
              "v": "Failed or lagging accelerations break pivot searches, ES views, and app dashboards that depend on `tstats`.",
              "t": "Monitoring Console, Data Model Editor",
              "d": "`index=_internal` `sourcetype=splunkd` (AccelerationManager, DataModel)",
              "q": "index=_internal sourcetype=splunkd (AccelerationManager OR \"Data Model\")\n| search (\"failed\" OR \"rebuild\" OR \"lag\" OR log_level=ERROR)\n| stats count by host, object, message\n| sort -count",
              "m": "Monitor acceleration build/rebuild failures and backlogs. Alert when acceleration is disabled unexpectedly or rebuild exceeds SLA. Review high-cardinality datasets.",
              "z": "Table (data models with issues), Single value (models not accelerated).",
              "kfp": "Signals move after deploys, config edits, and traffic shifts. We compare with the right product view and a short history before an outage page.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console, Data Model Editor.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` (AccelerationManager, DataModel).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor acceleration build/rebuild failures and backlogs. Alert when acceleration is disabled unexpectedly or rebuild exceeds SLA. Review high-cardinality datasets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd (AccelerationManager OR \"Data Model\")\n| search (\"failed\" OR \"rebuild\" OR \"lag\" OR log_level=ERROR)\n| stats count by host, object, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Data Model Acceleration Status (_internal)** — Failed or lagging accelerations break pivot searches, ES views, and app dashboards that depend on `tstats`.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` (AccelerationManager, DataModel). **App/TA** (typical add-on context): Monitoring Console, Data Model Editor. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, object, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (data models with issues), Single value (models not accelerated).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.22",
              "n": "Summary Indexing Failures (_internal)",
              "c": "high",
              "f": "intermediate",
              "v": "Summary index population jobs feed exec and compliance reports; silent failures skew metrics and dashboards.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=scheduler` (summary-index saved searches)",
              "q": "index=_internal sourcetype=scheduler status IN (\"failed\",\"skipped\")\n| search savedsearch_name=\"*summary*\" OR savedsearch_name=\"*si_*\"\n| stats count by savedsearch_name, reason, status\n| sort -count",
              "m": "Tag SI-populating searches and alert on failed/skipped runs. Verify disk space on summary indexers and search concurrency.",
              "z": "Table (failed summary searches), Line chart (SI job success rate).",
              "kfp": "Signals move after deploys, config edits, and traffic shifts. We compare with the right product view and a short history before an outage page.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=scheduler` (summary-index saved searches).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag SI-populating searches and alert on failed/skipped runs. Verify disk space on summary indexers and search concurrency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=scheduler status IN (\"failed\",\"skipped\")\n| search savedsearch_name=\"*summary*\" OR savedsearch_name=\"*si_*\"\n| stats count by savedsearch_name, reason, status\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Summary Indexing Failures (_internal)** — Summary index population jobs feed exec and compliance reports; silent failures skew metrics and dashboards.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=scheduler` (summary-index saved searches). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: scheduler. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=scheduler. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by savedsearch_name, reason, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed summary searches), Line chart (SI job success rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see which views and stored searches people actually use so we can drop clutter and keep the right work maintained.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.23",
              "n": "Indexer Disk Space Utilization (_internal)",
              "c": "critical",
              "f": "beginner",
              "v": "Full indexer volumes stop writes and break replication; proactive disk monitoring avoids cascading cluster failure.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunk_disk_objects` / `splunkd` disk metrics",
              "q": "index=_internal sourcetype=splunk_disk_objects OR (sourcetype=splunkd \"disk usage\")\n| eval pct_used=if(total_kb>0, round(used_kb/total_kb*100,1), null())\n| where pct_used > 85\n| stats latest(pct_used) as pct_used, latest(used_kb) as used_kb by host, mount_point\n| sort -pct_used",
              "m": "Normalize mount paths for hot/warm/cold. Alert at 85% and 90%. Include frozen path and SmartStore cache volumes where applicable.",
              "z": "Gauge (disk % per indexer), Table (mounts at risk), Heatmap (host × volume).",
              "kfp": "Capacity moves with archive work, new data, or a short burst. We compare with growth plans before a spend jump.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunk_disk_objects` / `splunkd` disk metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize mount paths for hot/warm/cold. Alert at 85% and 90%. Include frozen path and SmartStore cache volumes where applicable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunk_disk_objects OR (sourcetype=splunkd \"disk usage\")\n| eval pct_used=if(total_kb>0, round(used_kb/total_kb*100,1), null())\n| where pct_used > 85\n| stats latest(pct_used) as pct_used, latest(used_kb) as used_kb by host, mount_point\n| sort -pct_used\n```\n\nUnderstanding this SPL\n\n**Indexer Disk Space Utilization (_internal)** — Full indexer volumes stop writes and break replication; proactive disk monitoring avoids cascading cluster failure.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunk_disk_objects` / `splunkd` disk metrics. **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunk_disk_objects, splunkd. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunk_disk_objects. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pct_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct_used > 85` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, mount_point** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (disk % per indexer), Table (mounts at risk), Heatmap (host × volume).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.24",
              "n": "SmartStore Cache Hit/Miss Ratio (_internal)",
              "c": "high",
              "f": "expert",
              "v": "Low cache hit rates increase S3/API latency and search cost. Trending hit/miss guides cache sizing and bucket locality.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunkd` (SmartStore, remote storage metrics)",
              "q": "index=_internal sourcetype=splunkd (SmartStore OR \"remote_storage\" OR S2Bucket)\n| search (\"cache\" OR \"download\" OR \"hit\" OR \"miss\")\n| rex \"(?<metric>hit|miss|download)_count=(?<cnt>\\d+)\"\n| stats sum(eval(if(match(metric,\"hit\"),cnt,0))) as hits, sum(eval(if(match(metric,\"miss\"),cnt,0))) as misses by host\n| eval hit_ratio=if((hits+misses)>0, round(100*hits/(hits+misses),2), null())\n| where hit_ratio < 70 OR isnull(hit_ratio)",
              "m": "Parse vendor-specific SmartStore metrics or use `metrics.log` patterns for remote fetch vs cache serve. Baseline hit ratio per indexer. Alert on sustained drops after upgrades or index changes.",
              "z": "Line chart (hit ratio over time), Single value (cluster avg hit %).",
              "kfp": "Capacity moves with archive work, new data, or a short burst. We compare with growth plans before a spend jump.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` (SmartStore, remote storage metrics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse vendor-specific SmartStore metrics or use `metrics.log` patterns for remote fetch vs cache serve. Baseline hit ratio per indexer. Alert on sustained drops after upgrades or index changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd (SmartStore OR \"remote_storage\" OR S2Bucket)\n| search (\"cache\" OR \"download\" OR \"hit\" OR \"miss\")\n| rex \"(?<metric>hit|miss|download)_count=(?<cnt>\\d+)\"\n| stats sum(eval(if(match(metric,\"hit\"),cnt,0))) as hits, sum(eval(if(match(metric,\"miss\"),cnt,0))) as misses by host\n| eval hit_ratio=if((hits+misses)>0, round(100*hits/(hits+misses),2), null())\n| where hit_ratio < 70 OR isnull(hit_ratio)\n```\n\nUnderstanding this SPL\n\n**SmartStore Cache Hit/Miss Ratio (_internal)** — Low cache hit rates increase S3/API latency and search cost. Trending hit/miss guides cache sizing and bucket locality.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` (SmartStore, remote storage metrics). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hit_ratio < 70 OR isnull(hit_ratio)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (hit ratio over time), Single value (cluster avg hit %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.25",
              "n": "Cluster Bundle Push Failures (_internal)",
              "c": "critical",
              "f": "intermediate",
              "v": "Failed bundle distribution leaves indexers with stale `indexes.conf`, apps, or peer configs, causing search/replication skew.",
              "t": "Monitoring Console, Cluster Manager",
              "d": "`index=_internal` `sourcetype=splunkd` (CM, bundle replication)",
              "q": "index=_internal sourcetype=splunkd (bundle OR BundleReplication)\n| search (log_level=ERROR OR \"bundle.*fail\" OR \"Failed to apply\")\n| stats count by host, peer, message\n| sort -count",
              "m": "Monitor CM and peer logs for bundle apply failures. Alert immediately. Verify disk space on peers and CM connectivity.",
              "z": "Table (peers with bundle errors), Timeline (bundle events).",
              "kfp": "Capacity moves with archive work, new data, or a short burst. We compare with growth plans before a spend jump.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console, Cluster Manager.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` (CM, bundle replication).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor CM and peer logs for bundle apply failures. Alert immediately. Verify disk space on peers and CM connectivity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd (bundle OR BundleReplication)\n| search (log_level=ERROR OR \"bundle.*fail\" OR \"Failed to apply\")\n| stats count by host, peer, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cluster Bundle Push Failures (_internal)** — Failed bundle distribution leaves indexers with stale `indexes.conf`, apps, or peer configs, causing search/replication skew.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` (CM, bundle replication). **App/TA** (typical add-on context): Monitoring Console, Cluster Manager. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, peer, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (peers with bundle errors), Timeline (bundle events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many app copies are running as traffic and policy change so a surprise scale event does not go unnoticed.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.26",
              "n": "splunkd Unexpected Restart Detection (_internal)",
              "c": "critical",
              "f": "beginner",
              "v": "Unplanned `splunkd` restarts interrupt searches, replication, and ingestion. Correlating restarts speeds root cause (OOM, crash, admin).",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunkd` (startup, shutdown, watchdog)",
              "q": "index=_internal sourcetype=splunkd\n| search (\"Splunkd starting\" OR \"Shutting down\" OR \"splunkd restarted\" OR \"detected unexpected\")\n| bin _time span=1h\n| stats count by host, _time\n| where count > 3",
              "m": "Tune for crash loops; join with OOM killer logs on the OS if forwarded. Alert when hourly restart count exceeds threshold.",
              "z": "Table (restart count by host), Timeline (restart events).",
              "kfp": "Signals move after deploys, config edits, and traffic shifts. We compare with the right product view and a short history before an outage page.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` (startup, shutdown, watchdog).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune for crash loops; join with OOM killer logs on the OS if forwarded. Alert when hourly restart count exceeds threshold.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd\n| search (\"Splunkd starting\" OR \"Shutting down\" OR \"splunkd restarted\" OR \"detected unexpected\")\n| bin _time span=1h\n| stats count by host, _time\n| where count > 3\n```\n\nUnderstanding this SPL\n\n**splunkd Unexpected Restart Detection (_internal)** — Unplanned `splunkd` restarts interrupt searches, replication, and ingestion. Correlating restarts speeds root cause (OOM, crash, admin).\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` (startup, shutdown, watchdog). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 3` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (restart count by host), Timeline (restart events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.27",
              "n": "Splunk Web UI Errors (_internal)",
              "c": "high",
              "f": "beginner",
              "v": "Web UI 5xx/timeout errors block analysts during incidents. Tracking errors isolates SHC vs load balancer vs app issues.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunk_web_access` / `splunkd_ui` / `splunkd` Web UI",
              "q": "index=_internal sourcetype IN (\"splunk_web_access\",\"splunkd_ui\",\"splunkd\") uri_path=\"*\"\n| search status>=500 OR match(_raw,\"(?i)(error|exception|timeout)\")\n| stats count by status, uri_path, clientip\n| sort -count",
              "m": "Ensure access logs include HTTP status. Alert on 5xx rate above baseline. Correlate with KV Store and SHC captain during UI-wide failures.",
              "z": "Line chart (5xx rate), Table (top failing URIs).",
              "kfp": "Signals move after deploys, config edits, and traffic shifts. We compare with the right product view and a short history before an outage page.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunk_web_access` / `splunkd_ui` / `splunkd` Web UI.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure access logs include HTTP status. Alert on 5xx rate above baseline. Correlate with KV Store and SHC captain during UI-wide failures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype IN (\"splunk_web_access\",\"splunkd_ui\",\"splunkd\") uri_path=\"*\"\n| search status>=500 OR match(_raw,\"(?i)(error|exception|timeout)\")\n| stats count by status, uri_path, clientip\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Splunk Web UI Errors (_internal)** — Web UI 5xx/timeout errors block analysts during incidents. Tracking errors isolates SHC vs load balancer vs app issues.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunk_web_access` / `splunkd_ui` / `splunkd` Web UI. **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by status, uri_path, clientip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (5xx rate), Table (top failing URIs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.28",
              "n": "SHC Configuration Replication Lag (_internal)",
              "c": "critical",
              "f": "intermediate",
              "v": "Lagging conf replication causes inconsistent searches and failed lookups across members.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunkd` (SHC, replication, `apply_bundle`)",
              "q": "index=_internal sourcetype=splunkd component=*SHC* OR component=*shcluster*\n| search (\"replication\" OR \"bundle\" OR \"lag\") AND (log_level=WARN OR log_level=ERROR)\n| stats count by host, message\n| sort -count",
              "m": "Prefer REST `| rest /services/shcluster/member/members` for `last_conf_replication` where available. Alert on WARN/ERROR about replication lag or failed bundles.",
              "z": "Table (members with replication issues), Single value (max lag seconds).",
              "kfp": "Capacity moves with archive work, new data, or a short burst. We compare with growth plans before a spend jump.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` (SHC, replication, `apply_bundle`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPrefer REST `| rest /services/shcluster/member/members` for `last_conf_replication` where available. Alert on WARN/ERROR about replication lag or failed bundles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd component=*SHC* OR component=*shcluster*\n| search (\"replication\" OR \"bundle\" OR \"lag\") AND (log_level=WARN OR log_level=ERROR)\n| stats count by host, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**SHC Configuration Replication Lag (_internal)** — Lagging conf replication causes inconsistent searches and failed lookups across members.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` (SHC, replication, `apply_bundle`). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (members with replication issues), Single value (max lag seconds).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many app copies are running as traffic and policy change so a surprise scale event does not go unnoticed.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.29",
              "n": "Ingest Actions Pipeline Status (_internal)",
              "c": "high",
              "f": "intermediate",
              "v": "Ingest actions (routing, filtering, streaming) failures drop or misroute security and operational data.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunkd` (IngestActions, `ingest_actions`)",
              "q": "index=_internal sourcetype=splunkd (IngestActions OR \"ingest.action\")\n| search log_level=ERROR OR \"failed\" OR \"dropped\"\n| stats count by host, pipeline, rule_id, message\n| sort -count",
              "m": "Map errors to `props`/`transforms` ingest action stanzas. Alert on any sustained error rate. Verify HEC and indexer tier compatibility.",
              "z": "Table (failing ingest actions), Line chart (error trend).",
              "kfp": "Signals move after deploys, config edits, and traffic shifts. We compare with the right product view and a short history before an outage page.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` (IngestActions, `ingest_actions`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap errors to `props`/`transforms` ingest action stanzas. Alert on any sustained error rate. Verify HEC and indexer tier compatibility.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd (IngestActions OR \"ingest.action\")\n| search log_level=ERROR OR \"failed\" OR \"dropped\"\n| stats count by host, pipeline, rule_id, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Ingest Actions Pipeline Status (_internal)** — Ingest actions (routing, filtering, streaming) failures drop or misroute security and operational data.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` (IngestActions, `ingest_actions`). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, pipeline, rule_id, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failing ingest actions), Line chart (error trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.30",
              "n": "Timestamp Parsing Accuracy (_internal)",
              "c": "high",
              "f": "intermediate",
              "v": "Mis-parsed timestamps break RT searches, compliance, and `_indextime` lag metrics.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunkd` (DateParser, `could not use strptime`)",
              "q": "index=_internal sourcetype=splunkd (DateParser OR \"strptime\" OR \"could not\")\n| search (log_level=WARN OR log_level=ERROR)\n| stats count by host, sourcetype_extracted, message\n| sort -count",
              "m": "Track DATE_CONFIG failures and warnings. Join with sample events from the same sourcetype in data tier. Fix `TIME_FORMAT`/`MAX_TIMESTAMP_LOOKAHEAD`.",
              "z": "Table (sourcetypes with parse warnings), Line chart (warning rate).",
              "kfp": "Queues fill during indexing surges, search peaks, or after restart. We rule out maintenance, bundle work, and traffic spikes before we chase a single node.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` (DateParser, `could not use strptime`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack DATE_CONFIG failures and warnings. Join with sample events from the same sourcetype in data tier. Fix `TIME_FORMAT`/`MAX_TIMESTAMP_LOOKAHEAD`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd (DateParser OR \"strptime\" OR \"could not\")\n| search (log_level=WARN OR log_level=ERROR)\n| stats count by host, sourcetype_extracted, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Timestamp Parsing Accuracy (_internal)** — Mis-parsed timestamps break RT searches, compliance, and `_indextime` lag metrics.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` (DateParser, `could not use strptime`). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, sourcetype_extracted, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sourcetypes with parse warnings), Line chart (warning rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full our indexing and processing queues are so we can act before data stacks up, slows, or is dropped.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.31",
              "n": "Workload Management Pool Saturation (_internal)",
              "c": "high",
              "f": "expert",
              "v": "WLM pools prevent search starvation; saturation delays critical searches and alerts.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunkd` (WorkloadManager, `workload_pool`)",
              "q": "index=_internal sourcetype=splunkd (WorkloadManager OR \"workload_pool\")\n| search (\"saturated\" OR \"rejected\" OR \"queue\" OR log_level=ERROR)\n| stats count by host, pool_name, message\n| sort -count",
              "m": "Define pool limits per SLA. Alert on rejections or sustained queue depth. Correlate with ad-hoc search storms.",
              "z": "Table (pools with rejections), Gauge (pool utilization %).",
              "kfp": "Signals move after deploys, config edits, and traffic shifts. We compare with the right product view and a short history before an outage page.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` (WorkloadManager, `workload_pool`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine pool limits per SLA. Alert on rejections or sustained queue depth. Correlate with ad-hoc search storms.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd (WorkloadManager OR \"workload_pool\")\n| search (\"saturated\" OR \"rejected\" OR \"queue\" OR log_level=ERROR)\n| stats count by host, pool_name, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Workload Management Pool Saturation (_internal)** — WLM pools prevent search starvation; saturation delays critical searches and alerts.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` (WorkloadManager, `workload_pool`). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, pool_name, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (pools with rejections), Gauge (pool utilization %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track machine and scheduler stress on the search side so a heavy job or a bad pattern does not starve the rest of the work.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.32",
              "n": "Search Scheduler Fill Ratio (_internal)",
              "c": "high",
              "f": "intermediate",
              "v": "Scheduler at capacity skips searches—blind spots for alerts and summaries.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=scheduler` / `splunkd` `group=scheduler`",
              "q": "index=_internal sourcetype=scheduler\n| stats count(eval(status=\"skipped\")) as skipped, count as total\n| eval fill_skip_pct=round(100*skipped/total,2)\n| where fill_skip_pct > 5",
              "m": "Track skipped vs completed over sliding windows. Break down by app and user. Add concurrency or split heavy searches when fill ratio grows.",
              "z": "Line chart (scheduler skip %), Table (top skipped searches).",
              "kfp": "Skips during scheduler load spikes, after restart catch-up, or during upgrade windows. We check activity and change windows before we call it a fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=scheduler` / `splunkd` `group=scheduler`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack skipped vs completed over sliding windows. Break down by app and user. Add concurrency or split heavy searches when fill ratio grows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=scheduler\n| stats count(eval(status=\"skipped\")) as skipped, count as total\n| eval fill_skip_pct=round(100*skipped/total,2)\n| where fill_skip_pct > 5\n```\n\nUnderstanding this SPL\n\n**Search Scheduler Fill Ratio (_internal)** — Scheduler at capacity skips searches—blind spots for alerts and summaries.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=scheduler` / `splunkd` `group=scheduler`. **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: scheduler. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=scheduler. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **fill_skip_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fill_skip_pct > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (scheduler skip %), Table (top skipped searches).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch when scheduled work is skipped or delayed so we are not blind to what alerts and reports were supposed to do.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.33",
              "n": "Knowledge Bundle Size Monitoring (_internal)",
              "c": "medium",
              "f": "intermediate",
              "v": "Oversized knowledge bundles slow search distribution and SHC replication.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunkd` (bundles, `KnowledgeBundle`)",
              "q": "index=_internal sourcetype=splunkd \"bundle\" (\"MB\" OR \"KB\" OR \"size\")\n| rex \"(?<bundle_mb>\\d+(\\.\\d+)?)\\s*MB\"\n| where bundle_mb > 200\n| stats count by host, user, app\n| sort -bundle_mb",
              "m": "Prefer REST or scripted checks of bundle sizes on search heads. Alert when bundle size exceeds policy. Audit large lookups and unused apps.",
              "z": "Table (largest bundles), Bar chart (size by app).",
              "kfp": "Capacity moves with archive work, new data, or a short burst. We compare with growth plans before a spend jump.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` (bundles, `KnowledgeBundle`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPrefer REST or scripted checks of bundle sizes on search heads. Alert when bundle size exceeds policy. Audit large lookups and unused apps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd \"bundle\" (\"MB\" OR \"KB\" OR \"size\")\n| rex \"(?<bundle_mb>\\d+(\\.\\d+)?)\\s*MB\"\n| where bundle_mb > 200\n| stats count by host, user, app\n| sort -bundle_mb\n```\n\nUnderstanding this SPL\n\n**Knowledge Bundle Size Monitoring (_internal)** — Oversized knowledge bundles slow search distribution and SHC replication.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` (bundles, `KnowledgeBundle`). **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where bundle_mb > 200` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, user, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (largest bundles), Bar chart (size by app).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.34",
              "n": "Real-Time Search Resource Consumption (_internal)",
              "c": "medium",
              "f": "intermediate",
              "v": "Real-time searches reserve cores and memory; runaway RT jobs degrade interactive and scheduled workload.",
              "t": "Monitoring Console",
              "d": "`index=_internal` `sourcetype=splunkd` `group=search_concurrency` / `search_process`",
              "q": "index=_internal sourcetype=splunkd group=search_concurrency\n| stats max(active_rt_searches) as rt_active, max(active_hist_searches) as hist by host\n| eval rt_ratio=if((rt_active+hist)>0, round(100*rt_active/(rt_active+hist),1), null())\n| where rt_active > 20 OR rt_ratio > 40",
              "m": "Baseline RT search counts per SH. Alert on unusual RT concurrency. Identify dashboards or users with many concurrent RT panels.",
              "z": "Line chart (RT vs historical searches), Table (hosts with high RT).",
              "kfp": "Cluster and resource noise from rolling work, restarts, and heavy jobs. We use health rollups and change windows to separate flapping from a break.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` `group=search_concurrency` / `search_process`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline RT search counts per SH. Alert on unusual RT concurrency. Identify dashboards or users with many concurrent RT panels.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd group=search_concurrency\n| stats max(active_rt_searches) as rt_active, max(active_hist_searches) as hist by host\n| eval rt_ratio=if((rt_active+hist)>0, round(100*rt_active/(rt_active+hist),1), null())\n| where rt_active > 20 OR rt_ratio > 40\n```\n\nUnderstanding this SPL\n\n**Real-Time Search Resource Consumption (_internal)** — Real-time searches reserve cores and memory; runaway RT jobs degrade interactive and scheduled workload.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` `group=search_concurrency` / `search_process`. **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **rt_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where rt_active > 20 OR rt_ratio > 40` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (RT vs historical searches), Table (hosts with high RT).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track machine and scheduler stress on the search side so a heavy job or a bad pattern does not starve the rest of the work.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.35",
              "n": "User Search Activity Audit (_audit)",
              "c": "medium",
              "f": "beginner",
              "v": "Auditing who ran which searches supports insider-threat investigations and segregation of duties.",
              "t": "Audit trail (built-in)",
              "d": "`index=_audit` `action=search`",
              "q": "index=_audit action=search info=started\n| stats count, dc(search) as distinct_searches by user, app\n| sort -count",
              "m": "Retain per policy. Report on after-hours or high-volume search users. Exclude known service accounts via lookup.",
              "z": "Table (users by search volume), Heatmap (hour × user).",
              "kfp": "Admin spikes from automation, break-glass, and app pushes. We check change records before a single run looks like a takeover.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Audit trail (built-in).\n• Ensure the following data sources are available: `index=_audit` `action=search`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRetain per policy. Report on after-hours or high-volume search users. Exclude known service accounts via lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit action=search info=started\n| stats count, dc(search) as distinct_searches by user, app\n| sort -count\n```\n\nUnderstanding this SPL\n\n**User Search Activity Audit (_audit)** — Auditing who ran which searches supports insider-threat investigations and segregation of duties.\n\nDocumented **Data sources**: `index=_audit` `action=search`. **App/TA** (typical add-on context): Audit trail (built-in). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users by search volume), Heatmap (hour × user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on who changed what in the control plane so we can review sensitive actions and learn from near misses.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "AU-6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FedRAMP AU-6 (Audit review, analysis, reporting) is enforced — Splunk UC-13.1.35: User Search Activity Audit (_audit).",
                  "ea": "Saved search 'UC-13.1.35' running on index=_audit action=search, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "09.aa",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HITRUST 09.aa (Audit logging) is enforced — Splunk UC-13.1.35: User Search Activity Audit (_audit).",
                  "ea": "Saved search 'UC-13.1.35' running on index=_audit action=search, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                },
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 6.4 (Logging and monitoring) is enforced — Splunk UC-13.1.35: User Search Activity Audit (_audit).",
                  "ea": "Saved search 'UC-13.1.35' running on index=_audit action=search, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.swift.com/myswift/customer-security-programme-csp/security-controls"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.36",
              "n": "Configuration File Change Tracking (_audit)",
              "c": "high",
              "f": "intermediate",
              "v": "Unauthorized `.conf` edits change data access and ingestion behavior; rapid detection limits blast radius.",
              "t": "Audit trail",
              "d": "`index=_audit` (`action` for config changes)",
              "q": "index=_audit action IN (\"edit_*\",\"update\")\n| search file_path=\"*.conf\" OR object_path=\"*local*\"\n| table _time, user, action, file_path, object_path\n| sort -_time",
              "m": "Map `action` types to your Splunk version. Alert on changes outside change windows. Route to SecOps for prod SH/CM.",
              "z": "Timeline (config changes), Table (recent edits by user).",
              "kfp": "Admin spikes from automation, break-glass, and app pushes. We check change records before a single run looks like a takeover.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Audit trail.\n• Ensure the following data sources are available: `index=_audit` (`action` for config changes).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `action` types to your Splunk version. Alert on changes outside change windows. Route to SecOps for prod SH/CM.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit action IN (\"edit_*\",\"update\")\n| search file_path=\"*.conf\" OR object_path=\"*local*\"\n| table _time, user, action, file_path, object_path\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Configuration File Change Tracking (_audit)** — Unauthorized `.conf` edits change data access and ingestion behavior; rapid detection limits blast radius.\n\nDocumented **Data sources**: `index=_audit` (`action` for config changes). **App/TA** (typical add-on context): Audit trail. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Configuration File Change Tracking (_audit)**): table _time, user, action, file_path, object_path\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (config changes), Table (recent edits by user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on who changed what in the control plane so we can review sensitive actions and learn from near misses.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "09.aa",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HITRUST 09.aa (Audit logging) is enforced — Splunk UC-13.1.36: Configuration File Change Tracking (_audit).",
                  "ea": "Saved search 'UC-13.1.36' running on index=_audit (action for config changes), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                },
                {
                  "r": "IT-Grundschutz",
                  "v": "2023 Edition",
                  "cl": "OPS.1.1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that IT-Grundschutz OPS.1.1.2 (Ordered ICT operation) is enforced — Splunk UC-13.1.36: Configuration File Change Tracking (_audit).",
                  "ea": "Saved search 'UC-13.1.36' running on index=_audit (action for config changes), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Standards-und-Zertifizierung/IT-Grundschutz/it-grundschutz_node.html"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-010-4 R1 (Configuration change management) is enforced — Splunk UC-13.1.36: Configuration File Change Tracking (_audit).",
                  "ea": "Saved search 'UC-13.1.36' running on index=_audit (action for config changes), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                },
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 6.4 (Logging and monitoring) is enforced — Splunk UC-13.1.36: Configuration File Change Tracking (_audit).",
                  "ea": "Saved search 'UC-13.1.36' running on index=_audit (action for config changes), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.swift.com/myswift/customer-security-programme-csp/security-controls"
                }
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.37",
              "n": "Knowledge Object Modification Audit (_audit)",
              "c": "high",
              "f": "intermediate",
              "v": "Dashboard and alert tampering can hide attacks or exfiltration; object-level audit supports SOC and GRC.",
              "t": "Audit trail",
              "d": "`index=_audit` (knowledge object CRUD)",
              "q": "index=_audit object_type IN (\"savedsearch\",\"dashboard\",\"lookup\",\"macro\")\n| stats count by user, object_type, action\n| sort -count",
              "m": "Tune `object_type` values for your version. Alert on delete or ACL change for critical objects. Use lookups for approved admins.",
              "z": "Table (changes by object type), Bar chart (actions by user).",
              "kfp": "Admin spikes from automation, break-glass, and app pushes. We check change records before a single run looks like a takeover.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Audit trail.\n• Ensure the following data sources are available: `index=_audit` (knowledge object CRUD).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune `object_type` values for your version. Alert on delete or ACL change for critical objects. Use lookups for approved admins.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit object_type IN (\"savedsearch\",\"dashboard\",\"lookup\",\"macro\")\n| stats count by user, object_type, action\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Knowledge Object Modification Audit (_audit)** — Dashboard and alert tampering can hide attacks or exfiltration; object-level audit supports SOC and GRC.\n\nDocumented **Data sources**: `index=_audit` (knowledge object CRUD). **App/TA** (typical add-on context): Audit trail. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, object_type, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (changes by object type), Bar chart (actions by user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on who changed what in the control plane so we can review sensitive actions and learn from near misses.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "AU-6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that FedRAMP AU-6 (Audit review, analysis, reporting) is enforced — Splunk UC-13.1.37: Knowledge Object Modification Audit (_audit).",
                  "ea": "Saved search 'UC-13.1.37' running on index=_audit (knowledge object CRUD), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "09.aa",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HITRUST 09.aa (Audit logging) is enforced — Splunk UC-13.1.37: Knowledge Object Modification Audit (_audit).",
                  "ea": "Saved search 'UC-13.1.37' running on index=_audit (knowledge object CRUD), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.38",
              "n": "REST API Access Pattern Analysis (_audit)",
              "c": "high",
              "f": "intermediate",
              "v": "Automated token abuse and reconnaissance often appear as unusual REST paths or volumes.",
              "t": "Audit trail / `splunkd_access`",
              "d": "`index=_audit` `action=rest` OR `index=_internal` `sourcetype=splunkd_access`",
              "q": "(index=_audit action=rest) OR (index=_internal sourcetype=splunkd_access)\n| stats count by user, uri_path, status\n| where count > 500 OR status>=400\n| sort -count",
              "m": "Baseline normal automation. Alert on new `uri_path` clusters or HTTP 401/403 bursts. Correlate with token ID if logged.",
              "z": "Table (top REST paths), Line chart (4xx/5xx rate).",
              "kfp": "Admin spikes from automation, break-glass, and app pushes. We check change records before a single run looks like a takeover.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Audit trail / `splunkd_access`.\n• Ensure the following data sources are available: `index=_audit` `action=rest` OR `index=_internal` `sourcetype=splunkd_access`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline normal automation. Alert on new `uri_path` clusters or HTTP 401/403 bursts. Correlate with token ID if logged.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=_audit action=rest) OR (index=_internal sourcetype=splunkd_access)\n| stats count by user, uri_path, status\n| where count > 500 OR status>=400\n| sort -count\n```\n\nUnderstanding this SPL\n\n**REST API Access Pattern Analysis (_audit)** — Automated token abuse and reconnaissance often appear as unusual REST paths or volumes.\n\nDocumented **Data sources**: `index=_audit` `action=rest` OR `index=_internal` `sourcetype=splunkd_access`. **App/TA** (typical add-on context): Audit trail / `splunkd_access`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit, _internal; **sourcetype**: splunkd_access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit, index=_internal, sourcetype=splunkd_access. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, uri_path, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 500 OR status>=400` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top REST paths), Line chart (4xx/5xx rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on who changed what in the control plane so we can review sensitive actions and learn from near misses.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.39",
              "n": "Role and Capability Change Detection (_audit)",
              "c": "critical",
              "f": "beginner",
              "v": "Privilege escalation via new roles/capabilities must be detected in minutes, not days.",
              "t": "Audit trail",
              "d": "`index=_audit` (authentication/authorization changes)",
              "q": "index=_audit object_type IN (\"user\",\"role\",\"capabilities\")\n| search action IN (\"create\",\"update\",\"delete\",\"edit_user\",\"edit_role\")\n| table _time, user, action, object, target_user, roles_added\n| sort -_time",
              "m": "Field names vary by version—verify with `| metadata type=sourcetypes index=_audit`. Alert on any `admin` role grant or `edit_user` outside IT hours.",
              "z": "Timeline (privilege changes), Table (recent role mappings).",
              "kfp": "Admin spikes from automation, break-glass, and app pushes. We check change records before a single run looks like a takeover.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Audit trail.\n• Ensure the following data sources are available: `index=_audit` (authentication/authorization changes).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nField names vary by version—verify with `| metadata type=sourcetypes index=_audit`. Alert on any `admin` role grant or `edit_user` outside IT hours.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit object_type IN (\"user\",\"role\",\"capabilities\")\n| search action IN (\"create\",\"update\",\"delete\",\"edit_user\",\"edit_role\")\n| table _time, user, action, object, target_user, roles_added\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Role and Capability Change Detection (_audit)** — Privilege escalation via new roles/capabilities must be detected in minutes, not days.\n\nDocumented **Data sources**: `index=_audit` (authentication/authorization changes). **App/TA** (typical add-on context): Audit trail. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Role and Capability Change Detection (_audit)**): table _time, user, action, object, target_user, roles_added\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (privilege changes), Table (recent role mappings).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on who changed what in the control plane so we can review sensitive actions and learn from near misses.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "BAIT/KAIT",
                  "v": "Aug 2021",
                  "cl": "§5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that BAIT/KAIT §5 (Identity & access management) is enforced — Splunk UC-13.1.39: Role and Capability Change Detection (_audit).",
                  "ea": "Saved search 'UC-13.1.39' running on index=_audit (authentication/authorization changes), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bafin.de/SharedDocs/Veroeffentlichungen/EN/Rundschreiben/2021/rs_1021_BAIT_en.html"
                },
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "AC-2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FedRAMP AC-2 (Account management) is enforced — Splunk UC-13.1.39: Role and Capability Change Detection (_audit).",
                  "ea": "Saved search 'UC-13.1.39' running on index=_audit (authentication/authorization changes), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "HITRUST",
                  "v": "v11",
                  "cl": "01.b",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HITRUST 01.b (User access management) is enforced — Splunk UC-13.1.39: Role and Capability Change Detection (_audit).",
                  "ea": "Saved search 'UC-13.1.39' running on index=_audit (authentication/authorization changes), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://hitrustalliance.net/csf-overview/"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.40",
              "n": "Per-Process CPU and Memory Trending (_introspection)",
              "c": "high",
              "f": "intermediate",
              "v": "`splunkd` child processes (search, indexer, pipeline) resource trends predict OOM and slowdown before user impact.",
              "t": "Monitoring Console",
              "d": "`index=_introspection` `sourcetype=splunk_resource_usage` / `splunk_disk_objects`",
              "q": "index=_introspection sourcetype=splunk_resource_usage\n| timechart span=5m avg(data.cpu_pct) as cpu, avg(data.mem_used) as mem_kb by data.process_type, host",
              "m": "Enable introspection generators on all tiers. Alert when `cpu_pct` or memory for `search`/`indexing` exceeds baseline. Use `predict` for week-over-week growth.",
              "z": "Line chart (CPU/memory by process class), Heatmap (host × process).",
              "kfp": "Capacity moves with archive work, new data, or a short burst. We compare with growth plans before a spend jump.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console.\n• Ensure the following data sources are available: `index=_introspection` `sourcetype=splunk_resource_usage` / `splunk_disk_objects`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable introspection generators on all tiers. Alert when `cpu_pct` or memory for `search`/`indexing` exceeds baseline. Use `predict` for week-over-week growth.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_introspection sourcetype=splunk_resource_usage\n| timechart span=5m avg(data.cpu_pct) as cpu, avg(data.mem_used) as mem_kb by data.process_type, host\n```\n\nUnderstanding this SPL\n\n**Per-Process CPU and Memory Trending (_introspection)** — `splunkd` child processes (search, indexer, pipeline) resource trends predict OOM and slowdown before user impact.\n\nDocumented **Data sources**: `index=_introspection` `sourcetype=splunk_resource_usage` / `splunk_disk_objects`. **App/TA** (typical add-on context): Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _introspection; **sourcetype**: splunk_resource_usage. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_introspection, sourcetype=splunk_resource_usage. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by data.process_type, host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU/memory by process class), Heatmap (host × process).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track machine and scheduler stress on the search side so a heavy job or a bad pattern does not starve the rest of the work.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.41",
              "n": "Dispatch Directory Size (_introspection)",
              "c": "medium",
              "f": "intermediate",
              "v": "Large dispatch directories fill disk and slow search teardown—common after runaway searches or stuck jobs.",
              "t": "Monitoring Console, scripted disk check",
              "d": "Scripted input / HEC: `sourcetype=splunk:dispatch_stats` on `index=main` (dispatch dir size MB per SH)",
              "q": "index=main sourcetype=\"splunk:dispatch_stats\"\n| eval size_gb=round(dispatch_dir_size_mb/1024,2)\n| where size_gb > 50\n| stats max(size_gb) as max_gb by host",
              "m": "Nightly scripted input: `du -sm` on `$SPLUNK_HOME/var/run/splunk/dispatch`, emit JSON. Alert on rapid growth. Automate cleanup of orphaned jobs per support guidance.",
              "z": "Gauge (dispatch GB per SH), Line chart (growth trend).",
              "kfp": "Intake blips can follow token or load-balancer work and client bursts. We align with app owners before a platform-wide incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console, scripted disk check.\n• Ensure the following data sources are available: Scripted input / HEC: `sourcetype=splunk:dispatch_stats` on `index=main` (dispatch dir size MB per SH).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNightly scripted input: `du -sm` on `$SPLUNK_HOME/var/run/splunk/dispatch`, emit JSON. Alert on rapid growth. Automate cleanup of orphaned jobs per support guidance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=main sourcetype=\"splunk:dispatch_stats\"\n| eval size_gb=round(dispatch_dir_size_mb/1024,2)\n| where size_gb > 50\n| stats max(size_gb) as max_gb by host\n```\n\nUnderstanding this SPL\n\n**Dispatch Directory Size (_introspection)** — Large dispatch directories fill disk and slow search teardown—common after runaway searches or stuck jobs.\n\nDocumented **Data sources**: Scripted input / HEC: `sourcetype=splunk:dispatch_stats` on `index=main` (dispatch dir size MB per SH). **App/TA** (typical add-on context): Monitoring Console, scripted disk check. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: main; **sourcetype**: splunk:dispatch_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=main, sourcetype=\"splunk:dispatch_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **size_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where size_gb > 50` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (dispatch GB per SH), Line chart (growth trend).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track machine and scheduler stress on the search side so a heavy job or a bad pattern does not starve the rest of the work.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.42",
              "n": "I/O Wait Bottleneck Detection (_introspection)",
              "c": "high",
              "f": "intermediate",
              "v": "High I/O wait on indexers correlates with slow searches, bucket replication, and ingestion lag.",
              "t": "Monitoring Console, OTel/node_exporter",
              "d": "`index=_introspection` `sourcetype=splunk_resource_usage` (disk I/O fields) / host metrics",
              "q": "index=_introspection sourcetype=splunk_resource_usage\n| where data.io_wait_pct > 25 OR data.disk_busy_pct > 80\n| timechart span=5m avg(data.io_wait_pct) by host",
              "m": "Field names depend on platform; normalize in `props`. Correlate with storage latency metrics from SAN/NVMe. Alert when sustained `io_wait` exceeds threshold.",
              "z": "Line chart (iowait %), Table (hosts with disk saturation).",
              "kfp": "Collector errors during config changes, target restarts, or transport and credential issues. We read collector logs and compare with the OTel pipeline view.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console, OTel/node_exporter.\n• Ensure the following data sources are available: `index=_introspection` `sourcetype=splunk_resource_usage` (disk I/O fields) / host metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nField names depend on platform; normalize in `props`. Correlate with storage latency metrics from SAN/NVMe. Alert when sustained `io_wait` exceeds threshold.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_introspection sourcetype=splunk_resource_usage\n| where data.io_wait_pct > 25 OR data.disk_busy_pct > 80\n| timechart span=5m avg(data.io_wait_pct) by host\n```\n\nUnderstanding this SPL\n\n**I/O Wait Bottleneck Detection (_introspection)** — High I/O wait on indexers correlates with slow searches, bucket replication, and ingestion lag.\n\nDocumented **Data sources**: `index=_introspection` `sourcetype=splunk_resource_usage` (disk I/O fields) / host metrics. **App/TA** (typical add-on context): Monitoring Console, OTel/node_exporter. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _introspection; **sourcetype**: splunk_resource_usage. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_introspection, sourcetype=splunk_resource_usage. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where data.io_wait_pct > 25 OR data.disk_busy_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (iowait %), Table (hosts with disk saturation).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track machine and scheduler stress on the search side so a heavy job or a bad pattern does not starve the rest of the work.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.43",
              "n": "Splunk Version Compliance (operational inventory)",
              "c": "high",
              "f": "intermediate",
              "v": "Drift across Splunk Enterprise versions breaks feature parity and support eligibility.",
              "t": "Monitoring Console, REST inventory",
              "d": "`| rest /services/server/info` (scripted aggregate), `sourcetype=splunk:version_inventory`",
              "q": "index=inventory sourcetype=\"splunk:version_inventory\"\n| stats dc(version) as version_count, values(version) as versions by group\n| where version_count > 1\n| table group, versions",
              "m": "Nightly scheduled search hits `server/info` on all peers via SH with credentials or forwarder-side scripted input. Compare to approved matrix. Report non-compliant hosts.",
              "z": "Table (hosts × version), Pie chart (version distribution).",
              "kfp": "Signals move after deploys, config edits, and traffic shifts. We compare with the right product view and a short history before an outage page.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring Console, REST inventory.\n• Ensure the following data sources are available: `| rest /services/server/info` (scripted aggregate), `sourcetype=splunk:version_inventory`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNightly scheduled search hits `server/info` on all peers via SH with credentials or forwarder-side scripted input. Compare to approved matrix. Report non-compliant hosts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=inventory sourcetype=\"splunk:version_inventory\"\n| stats dc(version) as version_count, values(version) as versions by group\n| where version_count > 1\n| table group, versions\n```\n\nUnderstanding this SPL\n\n**Splunk Version Compliance (operational inventory)** — Drift across Splunk Enterprise versions breaks feature parity and support eligibility.\n\nDocumented **Data sources**: `| rest /services/server/info` (scripted aggregate), `sourcetype=splunk:version_inventory`. **App/TA** (typical add-on context): Monitoring Console, REST inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: inventory; **sourcetype**: splunk:version_inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=inventory, sourcetype=\"splunk:version_inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where version_count > 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Splunk Version Compliance (operational inventory)**): table group, versions\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hosts × version), Pie chart (version distribution).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.44",
              "n": "App Version Consistency Across SHC (operational inventory)",
              "c": "high",
              "f": "intermediate",
              "v": "Mixed app versions on SHC members cause bundle skew and intermittent UI or search errors.",
              "t": "SHC, REST",
              "d": "`| rest /services/apps/local` per member, `sourcetype=splunk:shc_app_inventory`",
              "q": "index=inventory sourcetype=\"splunk:shc_app_inventory\"\n| stats values(app_version) as ver by app_name, member\n| eventstats dc(ver) as ver_count by app_name\n| where ver_count > 1\n| table app_name, member, ver",
              "m": "Push inventory script via `runshellscript` or external job. Alert on mismatch for production apps. Exclude dev-only apps via lookup.",
              "z": "Matrix (app × member version), Table (mismatched apps).",
              "kfp": "Cluster and resource noise from rolling work, restarts, and heavy jobs. We use health rollups and change windows to separate flapping from a break.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SHC, REST.\n• Ensure the following data sources are available: `| rest /services/apps/local` per member, `sourcetype=splunk:shc_app_inventory`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPush inventory script via `runshellscript` or external job. Alert on mismatch for production apps. Exclude dev-only apps via lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=inventory sourcetype=\"splunk:shc_app_inventory\"\n| stats values(app_version) as ver by app_name, member\n| eventstats dc(ver) as ver_count by app_name\n| where ver_count > 1\n| table app_name, member, ver\n```\n\nUnderstanding this SPL\n\n**App Version Consistency Across SHC (operational inventory)** — Mixed app versions on SHC members cause bundle skew and intermittent UI or search errors.\n\nDocumented **Data sources**: `| rest /services/apps/local` per member, `sourcetype=splunk:shc_app_inventory`. **App/TA** (typical add-on context): SHC, REST. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: inventory; **sourcetype**: splunk:shc_app_inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=inventory, sourcetype=\"splunk:shc_app_inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_name, member** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by app_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ver_count > 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **App Version Consistency Across SHC (operational inventory)**): table app_name, member, ver\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (app × member version), Table (mismatched apps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.45",
              "n": "Forwarder Version Compliance (operational inventory)",
              "c": "high",
              "f": "beginner",
              "v": "Old universal forwarders miss TLS fixes and acknowledgment behaviors required by security policy.",
              "t": "Deployment Server, `splunkd` phone-home",
              "d": "`index=_internal` `group=deploymentclient` / `sourcetype=splunk:forwarder_inventory`",
              "q": "index=_internal sourcetype=splunkd group=deploymentclient\n| stats latest(version) as uf_version by hostname\n| lookup approved_uf_versions.csv version OUTPUT approved\n| where isnull(approved)\n| table hostname, uf_version",
              "m": "Maintain CSV of approved forwarder builds. Supplement with DS client list. Drive upgrades via DS server classes.",
              "z": "Bar chart (forwarders by version), Single value (% compliant).",
              "kfp": "Lag during indexer maintenance, network throttling, or scheduled config syncs. We check deployment health before we blame one agent.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Deployment Server, `splunkd` phone-home.\n• Ensure the following data sources are available: `index=_internal` `group=deploymentclient` / `sourcetype=splunk:forwarder_inventory`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain CSV of approved forwarder builds. Supplement with DS client list. Drive upgrades via DS server classes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd group=deploymentclient\n| stats latest(version) as uf_version by hostname\n| lookup approved_uf_versions.csv version OUTPUT approved\n| where isnull(approved)\n| table hostname, uf_version\n```\n\nUnderstanding this SPL\n\n**Forwarder Version Compliance (operational inventory)** — Old universal forwarders miss TLS fixes and acknowledgment behaviors required by security policy.\n\nDocumented **Data sources**: `index=_internal` `group=deploymentclient` / `sourcetype=splunk:forwarder_inventory`. **App/TA** (typical add-on context): Deployment Server, `splunkd` phone-home. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Forwarder Version Compliance (operational inventory)**): table hostname, uf_version\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (forwarders by version), Single value (% compliant).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.46",
              "n": "Log Volume and Error Rate Anomaly per Sourcetype (MLTK)",
              "c": "critical",
              "f": "advanced",
              "v": "Silent data pipeline breaks — a forwarder stops sending, a sourcetype drops to zero, or error rates spike — are invisible to threshold alerts that only fire on presence. By modeling expected log volume and error rate per sourcetype with MLTK, this detection catches ingestion failures within minutes instead of hours, preventing blind spots in security and operational monitoring.",
              "t": "Splunk Machine Learning Toolkit (MLTK)",
              "d": "`index=_internal sourcetype=splunkd` (metrics.log), `index=_internal sourcetype=splunkd` (component=Metrics)",
              "q": "| tstats count WHERE index=* by _time span=15m, sourcetype\n| xyseries _time sourcetype count\n| fillnull value=0\n| untable _time sourcetype count\n| fit DensityFunction count by sourcetype into sourcetype_volume_model\n| rename \"IsOutlier(count)\" as isOutlier\n| where isOutlier > 0 OR count=0\n| eval anomaly_type=if(count=0, \"silent_drop\", \"volume_anomaly\")\n| table _time, sourcetype, count, anomaly_type\n| sort anomaly_type, -_time",
              "m": "Schedule every 15 minutes against a 30-day trained DensityFunction model per sourcetype. Zero-count windows flag silent pipeline drops immediately. Volume spikes or dips that deviate from the learned distribution trigger volume anomaly alerts. Enrich with forwarder host metadata to pinpoint the broken pipeline segment. Integrate with PagerDuty or Splunk On-Call for infrastructure on-call routing. Retrain the model weekly via a separate scheduled search. Exclude maintenance windows using a KV store lookup.",
              "z": "Line chart (volume per sourcetype with anomaly markers), Table (anomalous sourcetypes), Single value (active silent drops).",
              "kfp": "Error spikes during deployments, downstream issues, or after schema or contract changes. We line up releases and dependencies with the time window first.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK).\n• Ensure the following data sources are available: `index=_internal sourcetype=splunkd` (metrics.log), `index=_internal sourcetype=splunkd` (component=Metrics).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule every 15 minutes against a 30-day trained DensityFunction model per sourcetype. Zero-count windows flag silent pipeline drops immediately. Volume spikes or dips that deviate from the learned distribution trigger volume anomaly alerts. Enrich with forwarder host metadata to pinpoint the broken pipeline segment. Integrate with PagerDuty or Splunk On-Call for infrastructure on-call routing. Retrain the model weekly via a separate scheduled search. Exclude maintenance windows using a KV sto…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats count WHERE index=* by _time span=15m, sourcetype\n| xyseries _time sourcetype count\n| fillnull value=0\n| untable _time sourcetype count\n| fit DensityFunction count by sourcetype into sourcetype_volume_model\n| rename \"IsOutlier(count)\" as isOutlier\n| where isOutlier > 0 OR count=0\n| eval anomaly_type=if(count=0, \"silent_drop\", \"volume_anomaly\")\n| table _time, sourcetype, count, anomaly_type\n| sort anomaly_type, -_time\n```\n\nUnderstanding this SPL\n\n**Log Volume and Error Rate Anomaly per Sourcetype (MLTK)** — Silent data pipeline breaks — a forwarder stops sending, a sourcetype drops to zero, or error rates spike — are invisible to threshold alerts that only fire on presence. By modeling expected log volume and error rate per sourcetype with MLTK, this detection catches ingestion failures within minutes instead of hours, preventing blind spots in security and operational monitoring.\n\nDocumented **Data sources**: `index=_internal sourcetype=splunkd` (metrics.log), `index=_internal sourcetype=splunkd` (component=Metrics). **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: *.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against precomputed summaries; ensure the referenced data model is accelerated.\n• Pivots fields for charting with `xyseries`.\n• Fills null values with `fillnull`.\n• Pipeline stage (see **Log Volume and Error Rate Anomaly per Sourcetype (MLTK)**): untable _time sourcetype count\n• Pipeline stage (see **Log Volume and Error Rate Anomaly per Sourcetype (MLTK)**): fit DensityFunction count by sourcetype into sourcetype_volume_model\n• Renames fields with `rename` for clarity or joins.\n• Filters the current rows with `where isOutlier > 0 OR count=0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **anomaly_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Log Volume and Error Rate Anomaly per Sourcetype (MLTK)**): table _time, sourcetype, count, anomaly_type\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (volume per sourcetype with anomaly markers), Table (anomalous sourcetypes), Single value (active silent drops).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch platform health and capacity for search, indexing, and data delivery so the system we use for answers does not go dark or slow first.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.47",
              "n": "License Usage Forecast with Seasonality (MLTK)",
              "c": "high",
              "f": "advanced",
              "v": "Splunk license overages trigger warnings and eventually block indexing. Forecasting daily license consumption with seasonal decomposition (weekly and monthly patterns) gives administrators 7–30 day advance warning to act — whether by reducing noisy sourcetypes, requesting license expansion, or shifting workloads.",
              "t": "Splunk Machine Learning Toolkit (MLTK)",
              "d": "`index=_internal sourcetype=splunk_resource_usage` OR License Usage Report view",
              "q": "index=_internal source=*license_usage.log type=Usage\n| bin _time span=1d\n| stats sum(b) as bytes_used by _time\n| eval gb_used=round(bytes_used/1073741824, 2)\n| fit StateSpaceForecast gb_used holdback=0 forecast_k=30 conf_interval=95 into license_forecast_model\n| eval over_license=if('predicted(gb_used)' > license_limit_gb, 1, 0)\n| table _time, gb_used, \"predicted(gb_used)\", \"lower95(predicted(gb_used))\", \"upper95(predicted(gb_used))\", over_license",
              "m": "Pull daily license usage from `license_usage.log` and train a StateSpaceForecast model that captures weekly cycles (lower weekend volumes) and monthly trends (end-of-month batch jobs). Forecast 30 days ahead with 95% confidence intervals. Alert when the upper confidence bound crosses the licensed capacity threshold. Display the forecast in a capacity planning dashboard alongside current daily consumption. Retrain monthly. Supplement with `predict` command for simpler deployments that lack MLTK.",
              "z": "Area chart (actual vs forecast with confidence band), Single value (days until projected overage), Table (daily forecast).",
              "kfp": "Capacity moves with archive work, new data, or a short burst. We compare with growth plans before a spend jump.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK).\n• Ensure the following data sources are available: `index=_internal sourcetype=splunk_resource_usage` OR License Usage Report view.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPull daily license usage from `license_usage.log` and train a StateSpaceForecast model that captures weekly cycles (lower weekend volumes) and monthly trends (end-of-month batch jobs). Forecast 30 days ahead with 95% confidence intervals. Alert when the upper confidence bound crosses the licensed capacity threshold. Display the forecast in a capacity planning dashboard alongside current daily consumption. Retrain monthly. Supplement with `predict` command for simpler deployments that lack MLTK.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal source=*license_usage.log type=Usage\n| bin _time span=1d\n| stats sum(b) as bytes_used by _time\n| eval gb_used=round(bytes_used/1073741824, 2)\n| fit StateSpaceForecast gb_used holdback=0 forecast_k=30 conf_interval=95 into license_forecast_model\n| eval over_license=if('predicted(gb_used)' > license_limit_gb, 1, 0)\n| table _time, gb_used, \"predicted(gb_used)\", \"lower95(predicted(gb_used))\", \"upper95(predicted(gb_used))\", over_license\n```\n\nUnderstanding this SPL\n\n**License Usage Forecast with Seasonality (MLTK)** — Splunk license overages trigger warnings and eventually block indexing. Forecasting daily license consumption with seasonal decomposition (weekly and monthly patterns) gives administrators 7–30 day advance warning to act — whether by reducing noisy sourcetypes, requesting license expansion, or shifting workloads.\n\nDocumented **Data sources**: `index=_internal sourcetype=splunk_resource_usage` OR License Usage Report view. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gb_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **License Usage Forecast with Seasonality (MLTK)**): fit StateSpaceForecast gb_used holdback=0 forecast_k=30 conf_interval=95 into license_forecast_model\n• `eval` defines or adjusts **over_license** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **License Usage Forecast with Seasonality (MLTK)**): table _time, gb_used, \"predicted(gb_used)\", \"lower95(predicted(gb_used))\", \"upper95(predicted(gb_used))\", over_license\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (actual vs forecast with confidence band), Single value (days until projected overage), Table (daily forecast).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how license and data use move over time so we can plan and avoid a hard stop to visibility.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.48",
              "n": "Splunk Internal Queue Depth Multivariate Anomaly (MLTK)",
              "c": "high",
              "f": "advanced",
              "v": "Individual queue metrics (parsing, merging, typing, indexing) may fluctuate normally, but simultaneous pressure across multiple queues indicates a systemic bottleneck. Multivariate anomaly detection catches correlated queue saturation that per-queue thresholds miss, enabling proactive capacity intervention before data loss occurs.",
              "t": "Splunk Machine Learning Toolkit (MLTK)",
              "d": "`index=_internal sourcetype=splunkd` (component=Metrics, group=queue)",
              "q": "index=_internal sourcetype=splunkd component=Metrics group=queue\n| bin _time span=5m\n| stats avg(current_size_kb) as avg_size by _time, name\n| xyseries _time name avg_size\n| fillnull value=0\n| fit DensityFunction parsingQueue indexQueue typingQueue mergingQueue tcpin_queue into queue_multivariate_model\n| where isOutlier > 0\n| eval severity=case(\n    parsingQueue > 500 AND indexQueue > 500, \"critical\",\n    parsingQueue > 500 OR indexQueue > 500, \"high\",\n    true(), \"medium\")\n| table _time, parsingQueue, indexQueue, typingQueue, mergingQueue, severity\n| sort -_time",
              "m": "Collect queue metrics from `metrics.log` across all indexers. Pivot into a wide table (one column per queue) for multivariate DensityFunction modeling. The model learns the joint distribution of queue depths, catching correlated saturation that single-queue thresholds miss. Alert on critical (multiple large queues) and high (single large queue) severity. Correlate with ingestion volume spikes and forwarder connection counts. Route alerts to the Splunk platform team. Retrain the model weekly.",
              "z": "Multi-line chart (all queue depths over time), Heatmap (queue × indexer), Single value (current anomaly status).",
              "kfp": "Queues fill during indexing surges, search peaks, or after restart. We rule out maintenance, bundle work, and traffic spikes before we chase a single node.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK).\n• Ensure the following data sources are available: `index=_internal sourcetype=splunkd` (component=Metrics, group=queue).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect queue metrics from `metrics.log` across all indexers. Pivot into a wide table (one column per queue) for multivariate DensityFunction modeling. The model learns the joint distribution of queue depths, catching correlated saturation that single-queue thresholds miss. Alert on critical (multiple large queues) and high (single large queue) severity. Correlate with ingestion volume spikes and forwarder connection counts. Route alerts to the Splunk platform team. Retrain the model weekly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd component=Metrics group=queue\n| bin _time span=5m\n| stats avg(current_size_kb) as avg_size by _time, name\n| xyseries _time name avg_size\n| fillnull value=0\n| fit DensityFunction parsingQueue indexQueue typingQueue mergingQueue tcpin_queue into queue_multivariate_model\n| where isOutlier > 0\n| eval severity=case(\n    parsingQueue > 500 AND indexQueue > 500, \"critical\",\n    parsingQueue > 500 OR indexQueue > 500, \"high\",\n    true(), \"medium\")\n| table _time, parsingQueue, indexQueue, typingQueue, mergingQueue, severity\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Splunk Internal Queue Depth Multivariate Anomaly (MLTK)** — Individual queue metrics (parsing, merging, typing, indexing) may fluctuate normally, but simultaneous pressure across multiple queues indicates a systemic bottleneck. Multivariate anomaly detection catches correlated queue saturation that per-queue thresholds miss, enabling proactive capacity intervention before data loss occurs.\n\nDocumented **Data sources**: `index=_internal sourcetype=splunkd` (component=Metrics, group=queue). **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pivots fields for charting with `xyseries`.\n• Fills null values with `fillnull`.\n• Pipeline stage (see **Splunk Internal Queue Depth Multivariate Anomaly (MLTK)**): fit DensityFunction parsingQueue indexQueue typingQueue mergingQueue tcpin_queue into queue_multivariate_model\n• Filters the current rows with `where isOutlier > 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Splunk Internal Queue Depth Multivariate Anomaly (MLTK)**): table _time, parsingQueue, indexQueue, typingQueue, mergingQueue, severity\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Multi-line chart (all queue depths over time), Heatmap (queue × indexer), Single value (current anomaly status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how full our indexing and processing queues are so we can act before data stacks up, slows, or is dropped.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.49",
              "n": "Service Latency Seasonality and Anomaly (MLTK)",
              "c": "high",
              "f": "advanced",
              "v": "Application response times follow predictable patterns — higher during business hours, lower at night. By decomposing latency into seasonal (hour-of-week) and residual components, this detection flags abnormal slowdowns that coincide with deployments, infrastructure changes, or emerging incidents — even when raw latency stays within static thresholds.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk Observability Cloud (optional)",
              "d": "`index=main sourcetype=access_combined` or APM traces, `index=o11y sourcetype=otel:metrics`",
              "q": "index=main sourcetype=access_combined\n| eval response_ms=response_time*1000\n| bin _time span=5m\n| stats p95(response_ms) as p95_latency, p99(response_ms) as p99_latency, count by _time, uri_path\n| eval hour_of_week=(tonumber(strftime(_time,\"%w\"))*24) + tonumber(strftime(_time,\"%H\"))\n| fit DensityFunction p95_latency p99_latency by uri_path into latency_seasonal_model\n| rename \"IsOutlier(p95_latency)\" as is_p95_outlier, \"IsOutlier(p99_latency)\" as is_p99_outlier\n| where is_p95_outlier > 0 OR is_p99_outlier > 0\n| table _time, uri_path, p95_latency, p99_latency, hour_of_week, count\n| sort -p95_latency",
              "m": "Collect p95 and p99 latency per endpoint in 5-minute bins. Train DensityFunction models per `uri_path` that learn hour-of-week seasonality. Anomalies represent latency that is unusual for that specific time window, not just above a flat threshold. Correlate with deployment events from CI/CD pipelines (cat-12) and infrastructure changes. Create ITSI KPIs from the anomaly output for service health scoring. Alert application owners via Splunk On-Call with endpoint-specific context. Retrain models weekly.",
              "z": "Line chart (p95 latency with seasonal overlay), Heatmap (endpoint × hour-of-week), Table (anomalous endpoints).",
              "kfp": "Latency spikes during pool exhaustion, garbage collection, or upstream slowness. We compare with the APM UI and deploy history before a code-only theory.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk Observability Cloud (optional).\n• Ensure the following data sources are available: `index=main sourcetype=access_combined` or APM traces, `index=o11y sourcetype=otel:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect p95 and p99 latency per endpoint in 5-minute bins. Train DensityFunction models per `uri_path` that learn hour-of-week seasonality. Anomalies represent latency that is unusual for that specific time window, not just above a flat threshold. Correlate with deployment events from CI/CD pipelines (cat-12) and infrastructure changes. Create ITSI KPIs from the anomaly output for service health scoring. Alert application owners via Splunk On-Call with endpoint-specific context. Retrain models w…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=main sourcetype=access_combined\n| eval response_ms=response_time*1000\n| bin _time span=5m\n| stats p95(response_ms) as p95_latency, p99(response_ms) as p99_latency, count by _time, uri_path\n| eval hour_of_week=(tonumber(strftime(_time,\"%w\"))*24) + tonumber(strftime(_time,\"%H\"))\n| fit DensityFunction p95_latency p99_latency by uri_path into latency_seasonal_model\n| rename \"IsOutlier(p95_latency)\" as is_p95_outlier, \"IsOutlier(p99_latency)\" as is_p99_outlier\n| where is_p95_outlier > 0 OR is_p99_outlier > 0\n| table _time, uri_path, p95_latency, p99_latency, hour_of_week, count\n| sort -p95_latency\n```\n\nUnderstanding this SPL\n\n**Service Latency Seasonality and Anomaly (MLTK)** — Application response times follow predictable patterns — higher during business hours, lower at night. By decomposing latency into seasonal (hour-of-week) and residual components, this detection flags abnormal slowdowns that coincide with deployments, infrastructure changes, or emerging incidents — even when raw latency stays within static thresholds.\n\nDocumented **Data sources**: `index=main sourcetype=access_combined` or APM traces, `index=o11y sourcetype=otel:metrics`. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Observability Cloud (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: main; **sourcetype**: access_combined. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=main, sourcetype=access_combined. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **response_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, uri_path** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hour_of_week** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Service Latency Seasonality and Anomaly (MLTK)**): fit DensityFunction p95_latency p99_latency by uri_path into latency_seasonal_model\n• Renames fields with `rename` for clarity or joins.\n• Filters the current rows with `where is_p95_outlier > 0 OR is_p99_outlier > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Service Latency Seasonality and Anomaly (MLTK)**): table _time, uri_path, p95_latency, p99_latency, hour_of_week, count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Service Latency Seasonality and Anomaly (MLTK)** — Application response times follow predictable patterns — higher during business hours, lower at night. By decomposing latency into seasonal (hour-of-week) and residual components, this detection flags abnormal slowdowns that coincide with deployments, infrastructure changes, or emerging incidents — even when raw latency stays within static thresholds.\n\nDocumented **Data sources**: `index=main sourcetype=access_combined` or APM traces, `index=o11y sourcetype=otel:metrics`. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Observability Cloud (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95 latency with seasonal overlay), Heatmap (endpoint × hour-of-week), Table (anomalous endpoints).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how long work takes across services so slowdowns in our code or a dependency show up while they are still small.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=5m | sort - count",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.50",
              "n": "Kubernetes HPA Replica Count Anomaly (MLTK)",
              "c": "high",
              "f": "advanced",
              "v": "Horizontal Pod Autoscaler (HPA) replica counts reflect traffic load. Unexpected spikes in replicas without corresponding traffic increases may indicate resource leaks, crash loops, or misconfigured scaling policies. Anomaly detection on replica counts relative to traffic volume catches autoscaler misbehavior before it exhausts cluster capacity.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk Connect for Kubernetes (Helm chart, github.com/splunk/splunk-connect-for-kubernetes)",
              "d": "`index=k8s sourcetype=kube:objects:hpa`, `index=k8s sourcetype=kube:metrics`",
              "q": "index=k8s sourcetype=\"kube:objects:hpa\"\n| bin _time span=5m\n| stats latest(status.currentReplicas) as replicas, latest(status.currentMetrics{}.resource.current.averageValue) as avg_cpu by _time, metadata.name, metadata.namespace\n| eval replicas=tonumber(replicas), avg_cpu=tonumber(avg_cpu)\n| fit DensityFunction replicas avg_cpu by \"metadata.name\" into hpa_anomaly_model\n| rename \"IsOutlier(replicas)\" as replica_outlier\n| where replica_outlier > 0\n| table _time, metadata.namespace, metadata.name, replicas, avg_cpu\n| sort -replicas",
              "m": "Collect HPA status objects from the Kubernetes API via Splunk Connect for Kubernetes. Model the joint distribution of replica count and CPU utilization per HPA target. Outliers where replica count spikes without proportional CPU increase indicate scaling misbehavior. Correlate with pod restart events and OOMKill signals from `kube:events`. Alert the platform engineering team and include the HPA configuration (min/max replicas, target utilization) for rapid triage. Retrain the model weekly.",
              "z": "Dual-axis line chart (replicas vs CPU), Table (anomalous HPAs), Bar chart (replica count distribution by namespace).",
              "kfp": "Models need clean baselines; holidays, deploys, and new sources can look like a mystery. We retrain and check known events before a one-off score rules the day.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk Connect for Kubernetes (Helm chart, github.com/splunk/splunk-connect-for-kubernetes).\n• Ensure the following data sources are available: `index=k8s sourcetype=kube:objects:hpa`, `index=k8s sourcetype=kube:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect HPA status objects from the Kubernetes API via Splunk Connect for Kubernetes. Model the joint distribution of replica count and CPU utilization per HPA target. Outliers where replica count spikes without proportional CPU increase indicate scaling misbehavior. Correlate with pod restart events and OOMKill signals from `kube:events`. Alert the platform engineering team and include the HPA configuration (min/max replicas, target utilization) for rapid triage. Retrain the model weekly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=k8s sourcetype=\"kube:objects:hpa\"\n| bin _time span=5m\n| stats latest(status.currentReplicas) as replicas, latest(status.currentMetrics{}.resource.current.averageValue) as avg_cpu by _time, metadata.name, metadata.namespace\n| eval replicas=tonumber(replicas), avg_cpu=tonumber(avg_cpu)\n| fit DensityFunction replicas avg_cpu by \"metadata.name\" into hpa_anomaly_model\n| rename \"IsOutlier(replicas)\" as replica_outlier\n| where replica_outlier > 0\n| table _time, metadata.namespace, metadata.name, replicas, avg_cpu\n| sort -replicas\n```\n\nUnderstanding this SPL\n\n**Kubernetes HPA Replica Count Anomaly (MLTK)** — Horizontal Pod Autoscaler (HPA) replica counts reflect traffic load. Unexpected spikes in replicas without corresponding traffic increases may indicate resource leaks, crash loops, or misconfigured scaling policies. Anomaly detection on replica counts relative to traffic volume catches autoscaler misbehavior before it exhausts cluster capacity.\n\nDocumented **Data sources**: `index=k8s sourcetype=kube:objects:hpa`, `index=k8s sourcetype=kube:metrics`. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Connect for Kubernetes (Helm chart, github.com/splunk/splunk-connect-for-kubernetes). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: k8s; **sourcetype**: kube:objects:hpa. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=k8s, sourcetype=\"kube:objects:hpa\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, metadata.name, metadata.namespace** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **replicas** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Kubernetes HPA Replica Count Anomaly (MLTK)**): fit DensityFunction replicas avg_cpu by \"metadata.name\" into hpa_anomaly_model\n• Renames fields with `rename` for clarity or joins.\n• Filters the current rows with `where replica_outlier > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Kubernetes HPA Replica Count Anomaly (MLTK)**): table _time, metadata.namespace, metadata.name, replicas, avg_cpu\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Dual-axis line chart (replicas vs CPU), Table (anomalous HPAs), Bar chart (replica count distribution by namespace).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many app copies are running as traffic and policy change so a surprise scale event does not go unnoticed.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github",
                "kubernetes"
              ],
              "em": [
                "kubernetes_helm",
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.1.51",
              "n": "SLO Burn-Rate Multivariate Anomaly (MLTK)",
              "c": "critical",
              "f": "expert",
              "v": "Single-dimensional SLO burn-rate alerts trigger too late or too often. By combining error budget burn rates across availability, latency, and throughput SLOs into a multivariate model, this detection identifies services heading for SLO breach across multiple dimensions simultaneously — a stronger signal than any individual budget alarm.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk ITSI (optional)",
              "d": "`index=o11y sourcetype=otel:metrics`, SLO definitions in KV store or ITSI",
              "q": "index=o11y sourcetype=\"otel:metrics\" metric_name IN (\"slo.error_budget.remaining_pct\",\"slo.latency_budget.remaining_pct\",\"slo.throughput_budget.remaining_pct\")\n| bin _time span=1h\n| stats latest(metric_value) as budget_pct by _time, service.name, metric_name\n| xyseries _time+\"|\"+service.name metric_name budget_pct\n| fillnull value=100\n| eval burn_avail=100-'slo.error_budget.remaining_pct', burn_latency=100-'slo.latency_budget.remaining_pct', burn_throughput=100-'slo.throughput_budget.remaining_pct'\n| fit DensityFunction burn_avail burn_latency burn_throughput into slo_burnrate_model\n| where isOutlier > 0\n| eval composite_burn=burn_avail + burn_latency + burn_throughput\n| sort -composite_burn\n| table _time, service.name, burn_avail, burn_latency, burn_throughput, composite_burn",
              "m": "Define SLOs for each service across three dimensions: availability (error rate), latency (p99 target), and throughput (requests/sec floor). Calculate hourly burn rates as the percentage of monthly error budget consumed. Train a DensityFunction model on the joint burn-rate distribution across all three dimensions per service. Services where multiple budgets burn simultaneously are flagged as multivariate outliers. Integrate with ITSI service models to propagate SLO risk into service health scores. Alert service owners at 50% budget consumed (warning) and 80% consumed (critical). Use the model to provide SRE teams with a predicted breach timeline.",
              "z": "Radar chart (three SLO dimensions per service), Line chart (burn rates over time), Table (services at risk), Single value (services projected to breach this month).",
              "kfp": "Collector errors during config changes, target restarts, or transport and credential issues. We read collector logs and compare with the OTel pipeline view.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk ITSI (optional).\n• Ensure the following data sources are available: `index=o11y sourcetype=otel:metrics`, SLO definitions in KV store or ITSI.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine SLOs for each service across three dimensions: availability (error rate), latency (p99 target), and throughput (requests/sec floor). Calculate hourly burn rates as the percentage of monthly error budget consumed. Train a DensityFunction model on the joint burn-rate distribution across all three dimensions per service. Services where multiple budgets burn simultaneously are flagged as multivariate outliers. Integrate with ITSI service models to propagate SLO risk into service health scores…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o11y sourcetype=\"otel:metrics\" metric_name IN (\"slo.error_budget.remaining_pct\",\"slo.latency_budget.remaining_pct\",\"slo.throughput_budget.remaining_pct\")\n| bin _time span=1h\n| stats latest(metric_value) as budget_pct by _time, service.name, metric_name\n| xyseries _time+\"|\"+service.name metric_name budget_pct\n| fillnull value=100\n| eval burn_avail=100-'slo.error_budget.remaining_pct', burn_latency=100-'slo.latency_budget.remaining_pct', burn_throughput=100-'slo.throughput_budget.remaining_pct'\n| fit DensityFunction burn_avail burn_latency burn_throughput into slo_burnrate_model\n| where isOutlier > 0\n| eval composite_burn=burn_avail + burn_latency + burn_throughput\n| sort -composite_burn\n| table _time, service.name, burn_avail, burn_latency, burn_throughput, composite_burn\n```\n\nUnderstanding this SPL\n\n**SLO Burn-Rate Multivariate Anomaly (MLTK)** — Single-dimensional SLO burn-rate alerts trigger too late or too often. By combining error budget burn rates across availability, latency, and throughput SLOs into a multivariate model, this detection identifies services heading for SLO breach across multiple dimensions simultaneously — a stronger signal than any individual budget alarm.\n\nDocumented **Data sources**: `index=o11y sourcetype=otel:metrics`, SLO definitions in KV store or ITSI. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk ITSI (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o11y; **sourcetype**: otel:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o11y, sourcetype=\"otel:metrics\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service.name, metric_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pivots fields for charting with `xyseries`.\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **burn_avail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **SLO Burn-Rate Multivariate Anomaly (MLTK)**): fit DensityFunction burn_avail burn_latency burn_throughput into slo_burnrate_model\n• Filters the current rows with `where isOutlier > 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **composite_burn** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **SLO Burn-Rate Multivariate Anomaly (MLTK)**): table _time, service.name, burn_avail, burn_latency, burn_throughput, composite_burn\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Radar chart (three SLO dimensions per service), Line chart (burn rates over time), Table (services at risk), Single value (services projected to breach this month).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see how fast we are spending room for mistakes so we can act before a small issue eats the month.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.6,
          "qd": {
            "gold": 0,
            "silver": 3,
            "bronze": 48,
            "none": 0
          }
        },
        {
          "i": "13.2",
          "n": "Splunk ITSI (Premium)",
          "u": [
            {
              "i": "13.2.1",
              "n": "Service Health Score Trending",
              "c": "critical",
              "f": "beginner",
              "v": "Service health scores provide a single-pane view of business service status. Trending enables SLA reporting and proactive management.",
              "t": "Splunk ITSI",
              "d": "`itsi_summary` index",
              "q": "index=itsi_summary is_service_in_maintenance=0\n| timechart span=1h avg(health_score) by service_name",
              "m": "Configure ITSI services with KPIs mapped to business services. Track health scores over time. Alert on score degradation. Use for SLA reporting and executive dashboards. Configure Glass Tables for NOC display.",
              "z": "Service Analyzer (ITSI native), Glass Table, Line chart (health trend), Status grid.",
              "kfp": "Planned KPI threshold tightening during incident response can flash warn rows even when the service is stabilizing because old NEAP episodes remain open until analysts close them. Rolling search-head upgrades for ITSI reorder scheduler slots and can create short skip bursts in itsi_searchlog that resemble degradation in health_score averages until queues drain. Scheduled ML adaptive thresholding retrain windows widen bands temporarily and may suppress individual KPI severities while composite health_score still decays, so pair machine learning maintenance metadata with this alert. NEAP throttling and suppression rules that close episodes aggressively during storms can hide lingering fifty-point health floors unless you keep severity-blind detection enabled. KPI base-search skip events during nightly backup windows can create apparent recovery in averages when data simply stops arriving; always verify event continuity. Dependency-tree refactors that move parent_service_id values without refreshing lookups cause misrouted owner_team tags until configuration management catches up. Service-template re-links that drop Service Health Score KPI membership make health rows disappear entirely, which looks healthy but is actually observability debt. Glass-table rebuilds that change color thresholds without changing KPI truth create executive false alarms unrelated to Splunk alert_severity. Dashboard clones that apply ad hoc entity filters can disagree with NEAP grouping and cause apparent contradictions between Splunk tables and Episode Review. Sourcetype or compression changes on forwarding paths can hide KPI events while index volume looks stable, producing false calm until you inspect null KPI rates. Finally, itsi_summary ingestion lag during indexer maintenance delays health_score updates and can mask real improvement or real failure for minutes at a time.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: `itsi_summary` index.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure ITSI services with KPIs mapped to business services. Track health scores over time. Alert on score degradation. Use for SLA reporting and executive dashboards. Configure Glass Tables for NOC display.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary is_service_in_maintenance=0\n| timechart span=1h avg(health_score) by service_name\n```\n\nUnderstanding this SPL\n\n**Service Health Score Trending** — Service health scores provide a single-pane view of business service status. Trending enables SLA reporting and proactive management.\n\nDocumented **Data sources**: `itsi_summary` index. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by service_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Service Analyzer (ITSI native), Glass Table, Line chart (health trend), Status grid.",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the rolled-up health number for important services across several time windows and compare it to open incident groups so we catch slow slides and blind spots where trouble hides just under the alarm line.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "13.2.2",
              "n": "KPI Degradation Alerting",
              "c": "critical",
              "f": "intermediate",
              "v": "KPI threshold breaches provide early warning of service degradation. Adaptive thresholds reduce false positives vs static thresholds.",
              "t": "Splunk ITSI",
              "d": "ITSI correlation searches, KPI data",
              "q": "index=itsi_summary severity_value>3\n| stats count by service_name, kpi_name, severity_label\n| sort -count",
              "m": "Configure KPIs with adaptive thresholds (ITSI machine learning). Set up correlation searches for threshold breach alerting. Route alerts to Episode Review for analyst triage. Tune thresholds based on feedback.",
              "z": "ITSI Deep Dive, Service Analyzer, Line chart with threshold bands.",
              "kfp": "Legitimate end-of-day batch jobs can depress latency KPIs for twenty minutes without customer impact; tag those windows in change metadata before interpreting burn_60m. ITSI version upgrades sometimes reset adaptive threshold baselines and can flash high severity for an hour; compare release notes and freeze paging until retrain completes. Content-pack import cycles relink templates and may duplicate kpi_id rows until lookups refresh; dedupe inventory before blaming applications. Quarterly on-call drills intentionally breach sandbox KPIs; exclude drill environments via service_tier filters. Planned maintenance windows suppress episodes but not raw summary indexes; cross-check maintenance tickets before treating missing episode_count as governance failure. Synthetic monitoring accounts can drive alert_value spikes when scripts retry aggressively; segregate synthetic entities in entity rules. Clock skew between search heads and indexers misaligns minute bins; repair time synchronization before tuning burn constants. Shared service templates across regions can route urgency_weight incorrectly when lookups lack region keys; partition CSV rows. Backfill replays after indexer recovery may stack duplicate alert_severity transitions; use deduped source keys upstream. Executive rehearse days can inflate tracked_sev_hits without production risk; annotate rehearsal labels in correlation rules.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: ITSI correlation searches, KPI data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure KPIs with adaptive thresholds (ITSI machine learning). Set up correlation searches for threshold breach alerting. Route alerts to Episode Review for analyst triage. Tune thresholds based on feedback.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary severity_value>3\n| stats count by service_name, kpi_name, severity_label\n| sort -count\n```\n\nUnderstanding this SPL\n\n**KPI Degradation Alerting** — KPI threshold breaches provide early warning of service degradation. Adaptive thresholds reduce false positives vs static thresholds.\n\nDocumented **Data sources**: ITSI correlation searches, KPI data. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by service_name, kpi_name, severity_label** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: ITSI Deep Dive, Service Analyzer, Line chart with threshold bands.",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We treat each vital business service like a bakery oven with many thermometers: one hot reading is ignored, but when the same thermometer stays too hot through short and long timers while the shop gets busier, we call the baker before the whole batch burns.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.3",
              "n": "Episode Volume and MTTR",
              "c": "high",
              "f": "intermediate",
              "v": "Episode volume and resolution time measure IT operations effectiveness. Trending drives process improvement.",
              "t": "Splunk ITSI",
              "d": "`itsi_grouped_alerts` index",
              "q": "index=itsi_grouped_alerts\n| stats count as episodes, avg(duration) as avg_duration_sec by severity\n| eval avg_mttr_min=round(avg_duration_sec/60,1)",
              "m": "Track episode creation, severity distribution, and time-to-resolution. Monitor episode assignment and owner workload. Alert on episode volume spikes. Report on MTTR by severity for management.",
              "z": "Bar chart (episodes by severity), Line chart (episode volume trend), Single value (avg MTTR), Table (open episodes).",
              "kfp": "Planned chaos engineering drills and game-day simulations can spike episode creation and acknowledgement latency without customer risk; tag and filter rehearsal metadata before executive reviews. Holiday on-call hand-offs legitimately slow acknowledgement when rotations are thin; compare against on-call schedule coverage rather than application regressions alone. Change-management freeze windows sometimes intentionally suppress auto-close actions; expect manual-close dominance and document the freeze ticket beside metrics. After-hours batch operations may generate benign notables that auto-close quickly; pair noise_ratio with business context before accusing teams of rubber-stamping closures. Training cohort onboarding sessions in shared tenants can inflate episode volume; use sandbox partitions or training labels to keep outcome metrics honest. Quarterly daylight-saving shifts can skew hourly bins for one day; widen tolerance windows when validating acknowledgement math. Vendor maintenance on ticketing APIs can delay ServiceNow timestamps while ITSI shows resolution; correlate external vendor status before blaming internal squads. Content-pack imports may temporarily widen NEAP auto-close windows; delay severity upgrades until policies stabilize. Executive dashboards that average MTTR across all severities can hide critical-tier outliers; insist on subgroup views when investigating anomalies. Mis-synchronized laptop clocks on analyst workstations can cause misleading manual timestamps in notes even when server-side audit times are correct.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: `itsi_grouped_alerts` index.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack episode creation, severity distribution, and time-to-resolution. Monitor episode assignment and owner workload. Alert on episode volume spikes. Report on MTTR by severity for management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_grouped_alerts\n| stats count as episodes, avg(duration) as avg_duration_sec by severity\n| eval avg_mttr_min=round(avg_duration_sec/60,1)\n```\n\nUnderstanding this SPL\n\n**Episode Volume and MTTR** — Episode volume and resolution time measure IT operations effectiveness. Trending drives process improvement.\n\nDocumented **Data sources**: `itsi_grouped_alerts` index. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_grouped_alerts.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_grouped_alerts. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_mttr_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (episodes by severity), Line chart (episode volume trend), Single value (avg MTTR), Table (open episodes).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We time how fast a busy emergency department admits, labels urgency, starts treatment, and discharges people, compared with how long patients still look sick in the waiting room. When the paperwork says everyone was helped quickly but the waiting room tells a different story, we surface that gap so leaders fix triage and handoffs instead of trusting the headline alone.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.4",
              "n": "Entity Status Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Entity health provides granular visibility into individual infrastructure components feeding services. Unstable entities degrade service health.",
              "t": "Splunk ITSI",
              "d": "ITSI entity overview, entity health scores",
              "q": "| inputlookup itsi_entities\n| where entity_status!=\"active\"\n| table title, entity_type, entity_status, last_seen",
              "m": "Configure entity discovery (AD, CMDB, cloud APIs). Monitor entity states (active, inactive, unstable). Alert when critical entities become inactive. Track entity population for coverage analysis.",
              "z": "Status grid (entities by type × status), Table (inactive entities), Single value (active entity count).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: ITSI entity overview, entity health scores.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure entity discovery (AD, CMDB, cloud APIs). Monitor entity states (active, inactive, unstable). Alert when critical entities become inactive. Track entity population for coverage analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup itsi_entities\n| where entity_status!=\"active\"\n| table title, entity_type, entity_status, last_seen\n```\n\nUnderstanding this SPL\n\n**Entity Status Monitoring** — Entity health provides granular visibility into individual infrastructure components feeding services. Unstable entities degrade service health.\n\nDocumented **Data sources**: ITSI entity overview, entity health scores. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Filters the current rows with `where entity_status!=\"active\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Entity Status Monitoring**): table title, entity_type, entity_status, last_seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (entities by type × status), Table (inactive entities), Single value (active entity count).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how healthy our services look over time so we can report, prioritize, and catch a slide before it becomes a major incident.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.5",
              "n": "Base Search Performance",
              "c": "medium",
              "f": "beginner",
              "v": "ITSI base searches feed all KPIs. Slow or skipped base searches cause stale or missing KPI data across multiple services.",
              "t": "Splunk ITSI",
              "d": "`_internal` (scheduler logs for ITSI searches)",
              "q": "index=_internal sourcetype=scheduler savedsearch_name=\"ITSI*Base*\"\n| stats avg(run_time) as avg_runtime, count(eval(status=\"skipped\")) as skipped by savedsearch_name\n| where avg_runtime > 120 OR skipped > 0",
              "m": "Monitor ITSI base search run times and skip rates. Alert when any base search is skipped or exceeds its schedule interval. Optimize slow base searches (reduce scope, improve SPL). Consider splitting overloaded base searches.",
              "z": "Table (base search performance), Bar chart (runtime by search), Single value (skipped searches).",
              "kfp": "Planned base-search redesign windows intentionally widen earliest bounds and can inflate freshness_lag_minutes until backfill completes; annotate maintenance metadata before paging. Mass entity import projects and rediscovery batches temporarily elevate pseudo_entity_pct while lookups catch up; downgrade severity when change tickets document the import. Scheduled lookup refresh jobs that rewrite entity tables during business hours can mimic discovery failure for one or two intervals; validate job success before opening incidents. Throttled disaster recovery sites that stretch search latency can push run_time_p95_sec high without primary-region logic errors; compare environment labels in inventory. Change-window pauses that disable KPI bases under approval should suppress alerts via suppression rules keyed on change_ticket_id rather than muting the control globally. Nightly backup intervals that pause indexer responsiveness can drive continued scheduler statuses; pair with indexer health UCs. Content-pack staging clusters with tiny data volumes produce unstable percentiles; gate production alerts by business_priority. REST credential rotation empties the saved_search_query_metrics arm briefly; expect single-window info severity only. Executive drilldown filters that hide entity rows can look like pseudo-entity surges when analysts compare unlike scopes. Finally, clock skew between search heads and indexers creates artificial freshness_lag_minutes until time hygiene UCs clear the drift.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: `_internal` (scheduler logs for ITSI searches).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor ITSI base search run times and skip rates. Alert when any base search is skipped or exceeds its schedule interval. Optimize slow base searches (reduce scope, improve SPL). Consider splitting overloaded base searches.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=scheduler savedsearch_name=\"ITSI*Base*\"\n| stats avg(run_time) as avg_runtime, count(eval(status=\"skipped\")) as skipped by savedsearch_name\n| where avg_runtime > 120 OR skipped > 0\n```\n\nUnderstanding this SPL\n\n**Base Search Performance** — ITSI base searches feed all KPIs. Slow or skipped base searches cause stale or missing KPI data across multiple services.\n\nDocumented **Data sources**: `_internal` (scheduler logs for ITSI searches). **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: scheduler. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=scheduler. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by savedsearch_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_runtime > 120 OR skipped > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (base search performance), Bar chart (runtime by search), Single value (skipped searches).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch the big prep station that chops ingredients for many meals at once. When that station runs late, loses track of who ordered what, or starts labeling mystery bowls, every downstream dish looks wrong even if the ovens still glow green. We rank the slowest prep lanes so crews fix the foundation before diners notice cold plates.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.6",
              "n": "Rules Engine Health",
              "c": "critical",
              "f": "beginner",
              "v": "The ITSI Rules Engine processes events into episodes. Failure means alerts are not grouped or routed, breaking Event Analytics.",
              "t": "Splunk ITSI",
              "d": "`_internal` (itsi_internal_log)",
              "q": "index=_internal sourcetype=itsi_internal_log component=RulesEngine\n| search log_level=ERROR OR log_level=WARN\n| stats count by log_level, message",
              "m": "Monitor Rules Engine logs for errors and warnings. Alert on Rules Engine restarts or processing failures. Track event-to-episode latency. Verify aggregation policies are functioning correctly.",
              "z": "Single value (Rules Engine status), Table (recent errors), Line chart (processing latency).",
              "kfp": "Planned content-pack imports temporarily reorder correlation searches and can spike scheduler skips until bundles settle; require change metadata before paging application owners. Scheduled NEAP backfill jobs replay historical notables and can inflate notable_per_min without live incident pressure; gate medium severity during documented backfill windows. IIS or mid-tier outages for ticketing webhooks can raise dispatch_failure_rate while the rules engine itself is healthy; confirm external service health before blaming aggregation policies. ITSI version upgrades restart modular inputs and can flash high restart counts for minutes; compare against release notes and delay pages until rolling restarts finish. Search head cluster rolling restarts reorder correlation schedules and can elevate corr_search_skip_rate briefly; validate captain stability and suppress duplicates using search_head_id throttles. Quarterly disaster exercises may intentionally throttle REST inputs; annotate drill labels in inventory lookups so severity downgrades are auditable. Mis-tuned introspection sampling can emit sparse resource lanes; do not infer CPU truth from empty introspection during maintenance. Stale itsi_correlation_search_inventory.csv rows route pages to the wrong owner_team until configuration management refreshes lookups. False calm appears when _internal retention is shorter than the alert window; extend retention or lower earliest bounds after risk review. Duplicate cmdb rows despite dedup indicate upstream feed errors; fix cmdb pipelines before muting join failures.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: `_internal` (itsi_internal_log).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Rules Engine logs for errors and warnings. Alert on Rules Engine restarts or processing failures. Track event-to-episode latency. Verify aggregation policies are functioning correctly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=itsi_internal_log component=RulesEngine\n| search log_level=ERROR OR log_level=WARN\n| stats count by log_level, message\n```\n\nUnderstanding this SPL\n\n**Rules Engine Health** — The ITSI Rules Engine processes events into episodes. Failure means alerts are not grouped or routed, breaking Event Analytics.\n\nDocumented **Data sources**: `_internal` (itsi_internal_log). **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: itsi_internal_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=itsi_internal_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by log_level, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (Rules Engine status), Table (recent errors), Line chart (processing latency).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch a busy sorting hall where notices are grouped into batches before anyone acts. When belts skip, doors jam, or the room keeps rebooting, we raise a clear signal so crews fix the machinery before piles land in the wrong bins and customers wait.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "13.2.7",
              "n": "Predictive Service Degradation",
              "c": "high",
              "f": "advanced",
              "v": "Predicting service health degradation before it happens enables proactive remediation, reducing incident impact.",
              "t": "Splunk ITSI + MLTK",
              "d": "`itsi_summary` + ML models",
              "q": "index=itsi_summary service_name=\"Production Web\"\n| timechart span=15m avg(health_score) as health\n| predict health as predicted_health future_timespan=24 algorithm=LLP5\n| where predicted_health < 50",
              "m": "Train ML models on service health history using MLTK. Predict health scores 4-24 hours ahead. Alert when predicted health falls below threshold. Investigate contributing KPIs proactively. This is an advanced ITSI capability.",
              "z": "Line chart (actual vs predicted health), Single value (predicted health in 4h), Alert timeline.",
              "kfp": "Planned predictive retrain freezes during content-pack imports can inflate coverage_gap rows until services relink; require change metadata before paging application owners. Holiday quiet periods shrink training variance so residuals look perfect while sample mass is thin; compare observation counts before trusting low residuals. Sandbox services that intentionally lack models should be excluded via environment or cluster_tier filters so coverage alerts target production only. Clock skew between summary indexers and search heads can misalign hourly bins for one interval after daylight-saving adjustments; widen validation tolerance for twenty-four hours. KPI maintenance suppressions pause features the model memorized; forecasts may flatline without error until maintenance ends, which resembles health rather than model death. Duplicate serviceid labels from migration cutovers can merge unrelated residuals; reconcile CMDB keys before blaming algorithms. Performance datamodel acceleration gaps can empty the tstats arm while predictions remain healthy; treat that arm as supplementary signal. Finance-led reporting windows that slice only business hours can make lead_time_bucket look pessimistic when overnight incidents dominate; annotate reporting calendars. MLTK scheduler distinct counts include unrelated experiments from citizen data scientists; narrow savedsearch_id patterns when noise appears. Temporary uplift in noise_swing_no_incident after UI upgrades can reflect visualization thresholds rather than model volatility; confirm with raw stash numerics.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI + MLTK.\n• Ensure the following data sources are available: `itsi_summary` + ML models.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrain ML models on service health history using MLTK. Predict health scores 4-24 hours ahead. Alert when predicted health falls below threshold. Investigate contributing KPIs proactively. This is an advanced ITSI capability.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary service_name=\"Production Web\"\n| timechart span=15m avg(health_score) as health\n| predict health as predicted_health future_timespan=24 algorithm=LLP5\n| where predicted_health < 50\n```\n\nUnderstanding this SPL\n\n**Predictive Service Degradation** — Predicting service health degradation before it happens enables proactive remediation, reducing incident impact.\n\nDocumented **Data sources**: `itsi_summary` + ML models. **App/TA** (typical add-on context): Splunk ITSI + MLTK. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Predictive Service Degradation**): predict health as predicted_health future_timespan=24 algorithm=LLP5\n• Filters the current rows with `where predicted_health < 50` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (actual vs predicted health), Single value (predicted health in 4h), Alert timeline.",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We treat the health forecast like a storm siren that should sound before the rain hits your picnic. We check whether the siren was early enough, whether it cried wolf on sunny days, whether the machinery behind it was serviced on time, and which parks never got a siren at all.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.8",
              "n": "Glass Table NOC Display",
              "c": "medium",
              "f": "beginner",
              "v": "Real-time service visualization for operations centers provides at-a-glance awareness of infrastructure and service health.",
              "t": "Splunk ITSI Glass Tables",
              "d": "ITSI service/KPI data",
              "q": "| rest /servicesNS/-/-/data/ui/views\n| search label=\"Glass*\" OR label=\"NOC*\"\n| table title label author updated\n| sort -updated",
              "m": "Design Glass Tables representing logical infrastructure views (network topology, service dependency map, data center layout). Map ITSI services and KPIs to visual elements. Deploy on NOC screens with auto-refresh.",
              "z": "ITSI Glass Table (custom visual layout with service health indicators, KPI widgets, and status icons).",
              "kfp": "Approved content-pack migrations temporarily duplicate glass_table_id rows until bookmarks and inventory reconcile; require change metadata before paging wall teams. Synthetic load tests and screenshot diff bots inflate web_hits without representing human operators; exclude their useragent tokens via lookup. Rolling search head restarts create benign live_drop_pct bursts for one or two bins; pair with captain stability UCs before treating as proxy failure. KPI maintenance freezes raise widget_fail_hits while the canvas remains healthy; correlate with UC-13.2.5 scheduler outcomes. Kiosk browsers that auto-clear cookies nightly mimic authentication churn; document facilities policy before extending session timeouts. Performance datamodel acceleration gaps empty the tstats arm while glass tables still render; repair acceleration rather than muting the entire control. Reverse-proxy health checks that hit glass table URLs without session cookies can skew dc_viewers; segregate probe identities in inventory. Executive drilldowns that open archived glass tables from cold storage snapshots can show orphan hits; gate is_noc_wall flags on current REST exports only. Holiday quiet periods shrink concurrent viewers and can make percentiles noisy on low sample counts; widen tolerance or require minimum web_hits. Finally, manual saves during template edits spike audit_views without security incidents; validate unusual editor accounts against change calendars.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI Glass Tables.\n• Ensure the following data sources are available: ITSI service/KPI data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDesign Glass Tables representing logical infrastructure views (network topology, service dependency map, data center layout). Map ITSI services and KPIs to visual elements. Deploy on NOC screens with auto-refresh.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /servicesNS/-/-/data/ui/views\n| search label=\"Glass*\" OR label=\"NOC*\"\n| table title label author updated\n| sort -updated\n```\n\nUnderstanding this SPL\n\n**Glass Table NOC Display** — Real-time service visualization for operations centers provides at-a-glance awareness of infrastructure and service health.\n\nDocumented **Data sources**: ITSI service/KPI data. **App/TA** (typical add-on context): Splunk ITSI Glass Tables. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Glass Table NOC Display**): table title label author updated\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: ITSI Glass Table (custom visual layout with service health indicators, KPI widgets, and status icons).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We watch the big status boards on the operations center wall the same way you would watch a train departures screen. When the board takes too long to refresh, drops its live feed, shows blank tiles, or forces a login during an emergency, we surface which board misbehaved and who should fix it.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.9",
              "n": "Elasticsearch Ingest Pipeline Errors",
              "c": "high",
              "f": "intermediate",
              "v": "Log pipeline processing failures causing data loss or corruption.",
              "t": "Custom (ES REST API)",
              "d": "Elasticsearch _nodes/stats/ingest, pipeline error counts",
              "q": "index=elasticsearch sourcetype=\"elasticsearch:ingest\"\n| where pipeline_failures > 0 OR pipeline_current > 100\n| stats sum(pipeline_failures) as total_failures, sum(pipeline_current) as current_in_flight by node_name, pipeline_id\n| sort -total_failures",
              "m": "Poll Elasticsearch `GET _nodes/stats/ingest` via scripted input or scheduled REST call. Parse `ingest.total.pipeline_failures`, `ingest.total.pipeline_current`, and per-pipeline stats. Ingest as events with node, pipeline ID, and counters. Alert when pipeline_failures increases or when pipeline_current exceeds threshold (backlog). Correlate with index rate and cluster health. Investigate pipeline processor errors (script failures, date parse errors, field mapping conflicts).",
              "z": "Table (pipelines with failures), Line chart (pipeline failures over time), Bar chart (failures by pipeline), Single value (total pipeline failures).",
              "kfp": "Legitimate parsing edge-case bursts arrive with every major application rollout when new log lines do not yet match grok or dissect patterns; hold a short dwell window and correlate with change tickets before paging product teams. Some pipelines intentionally route grok failures into a DLQ index for forensic sampling; read on_failure_policy in elasticsearch_pipeline_inventory.csv before treating DLQ growth as an incident. GeoIP lookup misses are common for internal RFC1918 addresses and expected when processors guard public-ip-only enrichment; pair failures with network address plans. Planned schema migrations and reindex jobs can temporarily inflate documents.failed counters or DLQ documents during controlled replays; compare cluster maintenance calendars and ILM phases. Amazon OpenSearch Service blue-green maintenance can shift ingest threadpools without application changes; validate service events before blaming parsers. OpenTelemetry exporter counters can exceed mirror counts when Splunk applies stronger filtering or sampling at the HEC tier while Elasticsearch ingests raw volume; tune cross_stack_gap_pct thresholds after documenting intentional differences. Coordinating node log regexes for HTTP status can false-positive on unrelated access logs that mention pipeline keywords; scope sourcetype and logger filters tightly. Duplicate tenant_id labels across dev and prod test hosts can skew tstats joins until lookups segregate environments. Metricbeat upgrades sometimes rename dotted fields; coalesce ladders may mask partial collection until FIELDALIAS rules catch up.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (ES REST API).\n• Ensure the following data sources are available: Elasticsearch _nodes/stats/ingest, pipeline error counts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Elasticsearch `GET _nodes/stats/ingest` via scripted input or scheduled REST call. Parse `ingest.total.pipeline_failures`, `ingest.total.pipeline_current`, and per-pipeline stats. Ingest as events with node, pipeline ID, and counters. Alert when pipeline_failures increases or when pipeline_current exceeds threshold (backlog). Correlate with index rate and cluster health. Investigate pipeline processor errors (script failures, date parse errors, field mapping conflicts).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=elasticsearch sourcetype=\"elasticsearch:ingest\"\n| where pipeline_failures > 0 OR pipeline_current > 100\n| stats sum(pipeline_failures) as total_failures, sum(pipeline_current) as current_in_flight by node_name, pipeline_id\n| sort -total_failures\n```\n\nUnderstanding this SPL\n\n**Elasticsearch Ingest Pipeline Errors** — Log pipeline processing failures causing data loss or corruption.\n\nDocumented **Data sources**: Elasticsearch _nodes/stats/ingest, pipeline error counts. **App/TA** (typical add-on context): Custom (ES REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: elasticsearch; **sourcetype**: elasticsearch:ingest. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=elasticsearch, sourcetype=\"elasticsearch:ingest\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where pipeline_failures > 0 OR pipeline_current > 100` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by node_name, pipeline_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (pipelines with failures), Line chart (pipeline failures over time), Bar chart (failures by pipeline), Single value (total pipeline failures).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. 
key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We watch the machinery that prepares log records before they are stored in the search system. When parsing steps fail, dead-letter piles grow, or the same traffic no longer matches between two observability paths, we show which customer area and which step needs attention before records quietly vanish.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "elasticsearch"
              ],
              "em": [
                "elasticsearch_es"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.10",
              "n": "Fluentd / Fluent Bit Buffer Overflow",
              "c": "high",
              "f": "intermediate",
              "v": "Log forwarding buffer full, data at risk of being dropped.",
              "t": "Custom (Fluentd monitoring agent, Fluent Bit metrics)",
              "d": "Fluentd /api/plugins.json, Fluent Bit /api/v1/metrics",
              "q": "index=fluent sourcetype IN (\"fluentd:plugins\", \"fluentbit:metrics\")\n| eval buffer_usage_pct=if(isnum(buffer_queue_length) AND buffer_total_limit>0, round(buffer_queue_length/buffer_total_limit*100,1), null())\n| where buffer_queue_length > 0 AND (buffer_usage_pct > 80 OR buffer_total_limit - buffer_queue_length < 1000)\n| stats latest(buffer_queue_length) as queue_depth, latest(buffer_total_limit) as limit, latest(buffer_usage_pct) as pct by host, plugin_id, output_plugin\n| sort -pct",
              "m": "For Fluentd, enable monitoring agent and poll `/api/plugins.json` (or use `in_monitor_agent`). For Fluent Bit, enable HTTP server and poll `/api/v1/metrics`. Ingest buffer_queue_length, buffer_total_limit, retry_count, and emit_count. Alert when buffer usage exceeds 80% or when retries spike. Correlate with downstream (Elasticsearch, Splunk) ingestion latency. Tune buffer size, flush interval, or add more workers.",
              "z": "Table (plugins with high buffer usage), Gauge (buffer fill % per output), Line chart (buffer depth over time), Bar chart (retries by plugin).",
              "kfp": "Buffer pain from downstream saturation, bad TLS, or local disk. We check agent and destination before blind buffer tuning.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Fluentd monitoring agent, Fluent Bit metrics).\n• Ensure the following data sources are available: Fluentd /api/plugins.json, Fluent Bit /api/v1/metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFor Fluentd, enable monitoring agent and poll `/api/plugins.json` (or use `in_monitor_agent`). For Fluent Bit, enable HTTP server and poll `/api/v1/metrics`. Ingest buffer_queue_length, buffer_total_limit, retry_count, and emit_count. Alert when buffer usage exceeds 80% or when retries spike. Correlate with downstream (Elasticsearch, Splunk) ingestion latency. Tune buffer size, flush interval, or add more workers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=fluent sourcetype IN (\"fluentd:plugins\", \"fluentbit:metrics\")\n| eval buffer_usage_pct=if(isnum(buffer_queue_length) AND buffer_total_limit>0, round(buffer_queue_length/buffer_total_limit*100,1), null())\n| where buffer_queue_length > 0 AND (buffer_usage_pct > 80 OR buffer_total_limit - buffer_queue_length < 1000)\n| stats latest(buffer_queue_length) as queue_depth, latest(buffer_total_limit) as limit, latest(buffer_usage_pct) as pct by host, plugin_id, output_plugin\n| sort -pct\n```\n\nUnderstanding this SPL\n\n**Fluentd / Fluent Bit Buffer Overflow** — Log forwarding buffer full, data at risk of being dropped.\n\nDocumented **Data sources**: Fluentd /api/plugins.json, Fluent Bit /api/v1/metrics. **App/TA** (typical add-on context): Custom (Fluentd monitoring agent, Fluent Bit metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: fluent.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=fluent. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **buffer_usage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where buffer_queue_length > 0 AND (buffer_usage_pct > 80 OR buffer_total_limit - buffer_queue_length < 1000)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, plugin_id, output_plugin** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (plugins with high buffer usage), Gauge (buffer fill % per output), Line chart (buffer depth over time), Bar chart (retries by plugin).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We read the forwarder and buffer logs for stress and give-ups so we see log loss risk before disks or patience run out.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "log_pipeline"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.11",
              "n": "KPI Threshold Violation Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Trending KPI breaches over time shows chronic vs transient service issues and validates threshold tuning.",
              "t": "Splunk ITSI",
              "d": "`index=itsi_summary`, `itsi_notable:audit`",
              "q": "index=itsi_summary severity_value>=3\n| timechart span=1h count by service_name, kpi_name\n| streamstats window=24 avg(count) as baseline by kpi_name\n| where count > baseline * 2",
              "m": "Baseline breach counts per KPI with `streamstats` or `predict`. Alert on sustained elevation vs one-off spikes. Feed results into Episode Review for service owners.",
              "z": "Line chart (breaches per KPI), Heatmap (service × hour), Table (KPIs above baseline).",
              "kfp": "Chronic hints misfire when streamstats cold-starts after indexer maintenance because the first day of data invents a fake smooth baseline; wait for backfill before acting. Audit arms miss edits when sourcetype routes change or when admins use emergency REST paths that bypass notable auditing; compare with UI change logs manually. Maintenance splits lie if entities lack correct maintenance flags during freeze windows; validate CMDB bridges. Efficacy labels confuse readers when tracked alerts deduplicate aggressively while summary counts every evaluation; reconcile correlation throttle policies before blaming thresholds. Performance joins show zero when host fields in inventory omit domain suffixes that Performance.host still carries; normalize host strings in the CSV. Quiet-series noisy-threshold alerts fire on KPIs that legitimately hold flat metrics with tight static limits during steady state; review business context before widening bands. Transient spikes after deploys resemble chronic rows for a few hours if the deploy window is long; extend human review beyond automated hints. Template inheritance can duplicate breach counts across parent and child services; deduplicate in dashboards by service ownership rules outside this SPL. Cloud latency may delay itsi_summary relative to audit hours, weakening th_corr; widen the correlation window in ad hoc searches when investigating. Seasonal businesses produce misleading roll168 averages around holidays; overlay external calendars in dashboards.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: `index=itsi_summary`, `itsi_notable:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline breach counts per KPI with `streamstats` or `predict`. Alert on sustained elevation vs one-off spikes. Feed results into Episode Review for service owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary severity_value>=3\n| timechart span=1h count by service_name, kpi_name\n| streamstats window=24 avg(count) as baseline by kpi_name\n| where count > baseline * 2\n```\n\nUnderstanding this SPL\n\n**KPI Threshold Violation Trending** — Trending KPI breaches over time shows chronic vs transient service issues and validates threshold tuning.\n\nDocumented **Data sources**: `index=itsi_summary`, `itsi_notable:audit`. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by service_name, kpi_name** — ideal for trending and alerting on this use case.\n• `streamstats` rolls up events into metrics; results are split **by kpi_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > baseline * 2` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (breaches per KPI), Heatmap (service × hour), Table (KPIs above baseline).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-01",
              "sver": "",
              "rby": "",
              "ge": "We count how often each alert ladder step trips, whether the pattern is a steady grind or a quick spike, and whether recent knob turns line up with the noise. That way we tune thresholds with evidence instead of guessing who is crying wolf.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.12",
              "n": "Episode Correlation Accuracy",
              "c": "medium",
              "f": "expert",
              "v": "Measuring false merges and missed splits improves aggregation policies and reduces analyst rework.",
              "t": "Splunk ITSI Event Analytics",
              "d": "`index=itsi_grouped_alerts`, analyst disposition (ServiceNow/Splunk On-Call)",
              "q": "index=itsi_grouped_alerts\n| lookup episode_feedback episode_id OUTPUT disposition\n| stats count by disposition, severity\n| eval pct=round(100*count/sum(count),2)",
              "m": "Ingest manual episode disposition (false positive, wrong merge, should split) from ticketing or a KV store. Monthly review of `pct` by policy. Tune aggregation and similarity thresholds.",
              "z": "Pie chart (disposition mix), Bar chart (accuracy by policy), Table (episodes with poor feedback).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI Event Analytics.\n• Ensure the following data sources are available: `index=itsi_grouped_alerts`, analyst disposition (ServiceNow/Splunk On-Call).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest manual episode disposition (false positive, wrong merge, should split) from ticketing or a KV store. Monthly review of `pct` by policy. Tune aggregation and similarity thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_grouped_alerts\n| lookup episode_feedback episode_id OUTPUT disposition\n| stats count by disposition, severity\n| eval pct=round(100*count/sum(count),2)\n```\n\nUnderstanding this SPL\n\n**Episode Correlation Accuracy** — Measuring false merges and missed splits improves aggregation policies and reduces analyst rework.\n\nDocumented **Data sources**: `index=itsi_grouped_alerts`, analyst disposition (ServiceNow/Splunk On-Call). **App/TA** (typical add-on context): Splunk ITSI Event Analytics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_grouped_alerts.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_grouped_alerts. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by disposition, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (disposition mix), Bar chart (accuracy by policy), Table (episodes with poor feedback).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.13",
              "n": "Maintenance Window Compliance",
              "c": "high",
              "f": "beginner",
              "v": "Incorrect maintenance flags hide real outages during change windows, while noisy KPI alerts during true maintenance erode analyst trust. This use case validates that ITSI `is_service_in_maintenance` flags align with ITSM change windows, catching both missing and stale flags.",
              "t": "Splunk ITSI",
              "d": "`index=itsi_summary`, maintenance windows via REST",
              "q": "index=itsi_summary is_service_in_maintenance=0\n| join type=left max=1 service_name [\n  | rest /servicesNS/nobody/SA-ITOA/maintenance_services\n  | table title, service_name\n]\n| where severity_value>=4\n| stats count by service_name",
              "m": "Compare active alerts against scheduled maintenance windows. Alert when KPIs fire outside declared windows for critical services (possible misconfiguration). Report on % of alerts during maintenance windows.",
              "z": "Table (services alerting outside window), Single value (non-compliant alert %).",
              "kfp": "Deliberately long maintenance windows for known multi-day migrations can present as overdue_active until operators split work into sequential tickets; require human review against the implementation plan before paging. Emergency change records opened post-hoc may lack cmdb_change_window_inventory.csv rows at the moment maintenance starts; honor change_type_risk and defer rogue_scope until the CMDB snapshot catches up. Maintenance calendar time zones versus Splunk indexer UTC can create false overdue impressions when end_time is stored as local civil time without offset; reconcile in the ITSI UI. Demoted services intentionally parked in maintenance for cost-saving observability reductions should be documented in risk registers so maint_abuse findings route to the correct executive sponsor rather than the on-call engineer. Scheduled holiday change freezes sometimes stack overlapping maintenance_id values that look like collisions but reflect intentional quiet periods; compare to the enterprise freeze calendar. Intentional overrun for documented risk acceptance should carry a linked problem record so auditors understand why end_time exceeded planned_end. Partner-managed services may legitimately suppress alerts without internal change numbers; use lookup notes to avoid false rogue_scope. Validation notables fired immediately after maintenance can resemble leak_notable until suppression rules intentionally allow test traffic; read change tasks before escalating. Content-pack upgrades that bulk-refresh maintenance objects may spike internal_maint_calendar_lane volume without misconduct; filter known automation actors. tstats baseline drift after index migrations can empty summary_hist_pts; repair acceleration before interpreting historical context. Jira-sourced changes mapped into ServiceNow mirrors can duplicate planned_start; deduplicate change_request_id keys in cmdb_change_window_inventory.csv to prevent noisy missing_maint_hint rows.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: `index=itsi_summary`, maintenance windows via REST.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare active alerts against scheduled maintenance windows. Alert when KPIs fire outside declared windows for critical services (possible misconfiguration). Report on % of alerts during maintenance windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary is_service_in_maintenance=0\n| join type=left max=1 service_name [\n  | rest /servicesNS/nobody/SA-ITOA/maintenance_services\n  | table title, service_name\n]\n| where severity_value>=4\n| stats count by service_name\n```\n\nUnderstanding this SPL\n\n**Maintenance Window Compliance** — Incorrect maintenance flags hide real outages during change windows, while noisy KPI alerts during true maintenance erode analyst trust. This use case validates that ITSI `is_service_in_maintenance` flags align with ITSM change windows, catching both missing and stale flags.\n\nDocumented **Data sources**: `index=itsi_summary`, maintenance windows via REST. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where severity_value>=4` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (services alerting outside window), Single value (non-compliant alert %).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We check that planned quiet periods for changes match the official schedule, catch when urgent notices slip through while work is supposed to be hushed, and flag when quiet shields stay open too long or hide services without proper approval.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.14",
              "n": "Glass Table SLA Breaches",
              "c": "high",
              "f": "intermediate",
              "v": "Glass Tables for NOC must reflect SLA-backed KPIs; breaches on the wallboard drive incident prioritization.",
              "t": "Splunk ITSI Glass Tables",
              "d": "ITSI KPIs, `itsi_summary`, SLA lookup",
              "q": "index=itsi_summary\n| lookup sla_targets service_name OUTPUT kpi_name, sla_target\n| where health_score < sla_target OR severity_value>=4\n| stats count by service_name, kpi_name\n| sort -count",
              "m": "Maintain `sla_targets` lookup with minimum health score or max severity per service. Drive Glass Table color thresholds from the same search. Alert when executive-facing services breach SLA for >15 minutes.",
              "z": "Glass Table (SLA status), KPI ticker (breached services), Table (breach duration).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI Glass Tables.\n• Ensure the following data sources are available: ITSI KPIs, `itsi_summary`, SLA lookup.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain `sla_targets` lookup with minimum health score or max severity per service. Drive Glass Table color thresholds from the same search. Alert when executive-facing services breach SLA for >15 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary\n| lookup sla_targets service_name OUTPUT kpi_name, sla_target\n| where health_score < sla_target OR severity_value>=4\n| stats count by service_name, kpi_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Glass Table SLA Breaches** — Glass Tables for NOC must reflect SLA-backed KPIs; breaches on the wallboard drive incident prioritization.\n\nDocumented **Data sources**: ITSI KPIs, `itsi_summary`, SLA lookup. **App/TA** (typical add-on context): Splunk ITSI Glass Tables. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where health_score < sla_target OR severity_value>=4` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by service_name, kpi_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Glass Table (SLA status), KPI ticker (breached services), Table (breach duration).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.15",
              "n": "Service Dependency Health Propagation",
              "c": "critical",
              "f": "intermediate",
              "v": "Upstream dependency failure should roll up to dependent services; missing links cause wrong prioritization.",
              "t": "Splunk ITSI Service Analyzer",
              "d": "ITSI service topology, `itsi_summary`",
              "q": "| inputlookup itsi_services\n| search is_enabled=1\n| join type=left max=1 service_name [\n  search index=itsi_summary is_service_in_maintenance=0\n  | stats latest(health_score) as health by service_name\n]\n| where health < 50\n| table service_name, health, dependent_services",
              "m": "Validate service dependencies in ITSI. When a dependency drops below threshold, confirm dependent service health reflects impact (or use entity rules). Run weekly health of dependency graph completeness.",
              "z": "Service Analyzer tree, Sankey (dependency impact), Table (dependency × health).",
              "kfp": "Template re-import after a service rename often clears `services_depends_on` until the nightly DAG export runs, which looks like a propagation failure even when runtime health is fine. Drift between development, test, and production ITSI deployments produces mismatched parent keys while Observability Cloud still shows a single global service name, so RED telemetry disagrees with score math until inventories reconcile. Parent health-score recalculation legitimately lags during bulk service updates or backup-restore windows, creating minutes where parents look healthy while child KPIs already reflect outages. Deliberate dampening weights during blue-green cutovers suppress propagation on purpose; treat those windows as governed change, not defects. KPI base searches on a stale cron cadence can leave child snapshots fresher than parent snapshots, inventing false lag until schedules align. Maintenance windows in ITSI suppress severity while Observability Cloud keeps firing, so the joint search may show INFO_maintenance_window_active beside noisy external alerts. Sub-service entity merges collapse duplicate nodes and temporarily orphan historical edges until analysts remap dependencies. ITSI 4.x inventories exported through legacy REST shapes disagree with 4.18 and newer DAG validation helpers, so automated cycle detection may flag benign differences across versions. Federated ownership sometimes marks `enabled=0` in one tenant while Universal Forwarders still emit entity telemetry tied to retired keys, which mimics orphan services until ingest filters catch up.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI Service Analyzer.\n• Ensure the following data sources are available: ITSI service topology, `itsi_summary`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nValidate service dependencies in ITSI. When a dependency drops below threshold, confirm dependent service health reflects impact (or use entity rules). Run weekly health of dependency graph completeness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup itsi_services\n| search is_enabled=1\n| join type=left max=1 service_name [\n  search index=itsi_summary is_service_in_maintenance=0\n  | stats latest(health_score) as health by service_name\n]\n| where health < 50\n| table service_name, health, dependent_services\n```\n\nUnderstanding this SPL\n\n**Service Dependency Health Propagation** — Upstream dependency failure should roll up to dependent services; missing links cause wrong prioritization.\n\nDocumented **Data sources**: ITSI service topology, `itsi_summary`. **App/TA** (typical add-on context): Splunk ITSI Service Analyzer. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Applies an explicit `search` filter to narrow the current result set.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where health < 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Service Dependency Health Propagation**): table service_name, health, dependent_services\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Service Analyzer tree, Sankey (dependency impact), Table (dependency × health).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We check that when one important service leans on another, trouble really shows up in the right order and strength across the chain, instead of hiding because a link was missing or the math waited too long.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.16",
              "n": "ITSI Backup Set Integrity",
              "c": "critical",
              "f": "intermediate",
              "v": "Corrupt or incomplete KV Store / ITSI backup objects prevent disaster recovery after SH loss.",
              "t": "Splunk ITSI, backup automation",
              "d": "Backup job logs, `sourcetype=itsi:backup`",
              "q": "index=_internal OR index=main sourcetype=\"itsi:backup\"\n| search status IN (\"failed\",\"partial\",\"corrupt\") OR match(_raw,\"(?i)(checksum|verify failed)\")\n| stats count by backup_job, host, message\n| sort -count",
              "m": "Log ITSI backup jobs (scheduled exports, `kvstore` backup). Verify checksum after write. Alert on any non-success. Test restore quarterly to a lab SH.",
              "z": "Table (failed backups), Timeline (backup jobs), Single value (last successful backup age).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI, backup automation.\n• Ensure the following data sources are available: Backup job logs, `sourcetype=itsi:backup`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog ITSI backup jobs (scheduled exports, `kvstore` backup). Verify checksum after write. Alert on any non-success. Test restore quarterly to a lab SH.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal OR index=main sourcetype=\"itsi:backup\"\n| search status IN (\"failed\",\"partial\",\"corrupt\") OR match(_raw,\"(?i)(checksum|verify failed)\")\n| stats count by backup_job, host, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ITSI Backup Set Integrity** — Corrupt or incomplete KV Store / ITSI backup objects prevent disaster recovery after SH loss.\n\nDocumented **Data sources**: Backup job logs, `sourcetype=itsi:backup`. **App/TA** (typical add-on context): Splunk ITSI, backup automation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal, main; **sourcetype**: itsi:backup. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, index=main, sourcetype=\"itsi:backup\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by backup_job, host, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed backups), Timeline (backup jobs), Single value (last successful backup age).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.17",
              "n": "Notable Event Suppression Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Over-suppression hides incidents; audit ensures suppress rules and analyst actions are justified.",
              "t": "Splunk ITSI, ES correlation (if linked)",
              "d": "`index=itsi_notable:audit` / notable audit logs",
              "q": "index=itsi_notable:audit OR index=notable sourcetype=\"itsi:notable_audit\"\n| search action IN (\"suppress\",\"close\",\"suppress_episode\")\n| stats count by user, rule_id, reason\n| sort -count",
              "m": "Ingest notable audit events with user, rule, and reason. Alert on high-volume suppression by single user or new rule. Review monthly for policy compliance.",
              "z": "Table (top suppressors), Bar chart (suppressions by rule), Timeline (suppression events).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI, ES correlation (if linked).\n• Ensure the following data sources are available: `index=itsi_notable:audit` / notable audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest notable audit events with user, rule, and reason. Alert on high-volume suppression by single user or new rule. Review monthly for policy compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_notable:audit OR index=notable sourcetype=\"itsi:notable_audit\"\n| search action IN (\"suppress\",\"close\",\"suppress_episode\")\n| stats count by user, rule_id, reason\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Notable Event Suppression Audit** — Over-suppression hides incidents; audit ensures suppress rules and analyst actions are justified.\n\nDocumented **Data sources**: `index=itsi_notable:audit` / notable audit logs. **App/TA** (typical add-on context): Splunk ITSI, ES correlation (if linked). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_notable:audit, notable; **sourcetype**: itsi:notable_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_notable:audit, index=notable, sourcetype=\"itsi:notable_audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, rule_id, reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top suppressors), Bar chart (suppressions by rule), Timeline (suppression events).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We review when alerts are hushed on purpose so we can balance a quieter channel with not hiding a real problem.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.18",
              "n": "Adaptive Thresholding Effectiveness",
              "c": "medium",
              "f": "advanced",
              "v": "Adaptive thresholds reduce false positives; tracking effectiveness shows when to retrain or fall back to static limits.",
              "t": "Splunk ITSI (adaptive thresholds)",
              "d": "`index=itsi_summary`, KPI threshold history",
              "q": "index=itsi_summary is_service_in_maintenance=0\n| timechart span=1d count(eval(severity_value>=3)) as breaches by kpi_name\n| join max=1 kpi_name [\n  search index=itsi_summary kpi_threshold_type=\"adaptive\"\n  | stats dc(kpi_name) as adaptive_kpis by kpi_name\n]\n| where breaches > 10",
              "m": "Tag KPIs using adaptive vs static thresholds. Compare breach rate and analyst disposition before/after ML enablement. Retrain when seasonal drift causes misses.",
              "z": "Line chart (breaches per adaptive KPI), Table (KPIs needing threshold review).",
              "kfp": "Planned training-window resets after major architecture migrations temporarily inflate psi_drift and fp_rate until the new distribution stabilizes; require change tickets before paging service owners. Intentional manual overrides during major incidents can make template inheritance appear broken until sync windows complete; annotate override tickets in the CMDB feed. KPI source data backfill after indexer lag replays history and can skew short-window dispersion metrics; gate alerts during documented backfill intervals. Holiday calendar excluded periods legitimately remove traffic slices and can narrow bands for seasonally quiet KPIs; compare against holiday metadata before declaring mis-calibration. Seasonality boundaries at fiscal quarter close produce abrupt but expected level shifts; downgrade severity when finance calendars explain the jump. Content-pack template imports may flash high fp_rate for an hour while entities remap; validate inventory freshness. Canary KPIs with artificially low cardinality trigger brittle quantile estimates; exclude them from executive rollups. Duplicate entity_id labels from discovery bugs merge unrelated traffic; fix entity rules before trusting fn_rate. Service_KPI_Summary acceleration gaps can zero the tstats arm and drop psi_drift to defaults; treat missing acceleration as a deployment gap, not a threshold failure.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (adaptive thresholds).\n• Ensure the following data sources are available: `index=itsi_summary`, KPI threshold history.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag KPIs using adaptive vs static thresholds. Compare breach rate and analyst disposition before/after ML enablement. Retrain when seasonal drift causes misses.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary is_service_in_maintenance=0\n| timechart span=1d count(eval(severity_value>=3)) as breaches by kpi_name\n| join max=1 kpi_name [\n  search index=itsi_summary kpi_threshold_type=\"adaptive\"\n  | stats dc(kpi_name) as adaptive_kpis by kpi_name\n]\n| where breaches > 10\n```\n\nUnderstanding this SPL\n\n**Adaptive Thresholding Effectiveness** — Adaptive thresholds reduce false positives; tracking effectiveness shows when to retrain or fall back to static limits.\n\nDocumented **Data sources**: `index=itsi_summary`, KPI threshold history. **App/TA** (typical add-on context): Splunk ITSI (adaptive thresholds). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by kpi_name** — ideal for trending and alerting on this use case.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where breaches > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (breaches per adaptive KPI), Table (KPIs needing threshold review).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch the comfort band on a thermostat that learned the house only during a cold month, then summer arrives and the system thinks huge swings are fine until people are miserable. We measure whether those automatic comfort ranges are too loose, too tight, drifting sideways, or fighting a calendar nobody updated, so teams fix the tuning before real trouble hides inside quiet charts.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.19",
              "n": "Multi-Tier Application Service Tree Modeling",
              "c": "critical",
              "f": "advanced",
              "v": "Service trees link infrastructure KPIs to business outcomes, enabling impact analysis that shows which component failure affects which customer-facing service.",
              "t": "Splunk ITSI",
              "d": "`itsi_summary` index, entity discovery from infrastructure TAs",
              "q": "| rest /servicesNS/nobody/SA-ITOA/itoa_interface/service\n| rename title as service_name\n| eval kpi_count=mvcount(kpis), dep_count=mvcount(services_depends_on)\n| table service_name kpi_count dep_count\n| sort -dep_count",
              "m": "Model services top-down: business service → application tier → middleware → infrastructure. Use entity rules with host/IP aliases to dynamically bind entities. Define dependency relationships so parent health reflects child degradation. Use service templates for repeatable patterns across environments.",
              "z": "Service Analyzer (dependency tree), Glass Table (business service map).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: `itsi_summary` index, entity discovery from infrastructure TAs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nModel services top-down: business service → application tier → middleware → infrastructure. Use entity rules with host/IP aliases to dynamically bind entities. Define dependency relationships so parent health reflects child degradation. Use service templates for repeatable patterns across environments.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /servicesNS/nobody/SA-ITOA/itoa_interface/service\n| rename title as service_name\n| eval kpi_count=mvcount(kpis), dep_count=mvcount(services_depends_on)\n| table service_name kpi_count dep_count\n| sort -dep_count\n```\n\nUnderstanding this SPL\n\n**Multi-Tier Application Service Tree Modeling** — Service trees link infrastructure KPIs to business outcomes, enabling impact analysis that shows which component failure affects which customer-facing service.\n\nDocumented **Data sources**: `itsi_summary` index, entity discovery from infrastructure TAs. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Renames fields with `rename` for clarity or joins.\n• `eval` defines or adjusts **kpi_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Multi-Tier Application Service Tree Modeling**): table service_name kpi_count dep_count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Service Analyzer (dependency tree), Glass Table (business service map).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.20",
              "n": "Entity Discovery Completeness Audit",
              "c": "medium",
              "f": "advanced",
              "v": "Undiscovered entities create monitoring blind spots. Auditing entity coverage against infrastructure inventories ensures every critical asset is monitored by ITSI services.",
              "t": "Splunk ITSI, infrastructure TAs",
              "d": "`itsi_entities` lookup, CMDB/asset inventory, `index=_internal`",
              "q": "| inputlookup itsi_entities\n| stats dc(_key) as itsi_entities values(entity_type_ids) as types\n| appendcols [\n  | tstats dc(host) as infra_hosts where (index=main OR index=security OR index=os OR index=windows) by index\n  | stats sum(infra_hosts) as total_infra_hosts\n]\n| eval coverage_pct=round(itsi_entities/total_infra_hosts*100,1)\n| table itsi_entities total_infra_hosts coverage_pct types",
              "m": "Compare ITSI entity inventory against CMDB, cloud provider APIs, or Splunk host metadata. Identify unmonitored hosts. Use entity discovery searches or CSV imports to close gaps. Schedule weekly coverage audits. Track entity types to ensure classification is consistent.",
              "z": "Single value (coverage %), Table (unmatched hosts), Column chart (entity count by type).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI, infrastructure TAs.\n• Ensure the following data sources are available: `itsi_entities` lookup, CMDB/asset inventory, `index=_internal`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare ITSI entity inventory against CMDB, cloud provider APIs, or Splunk host metadata. Identify unmonitored hosts. Use entity discovery searches or CSV imports to close gaps. Schedule weekly coverage audits. Track entity types to ensure classification is consistent.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup itsi_entities\n| stats dc(_key) as itsi_entities values(entity_type_ids) as types\n| appendcols [\n  | tstats dc(host) as infra_hosts where (index=main OR index=security OR index=os OR index=windows) by index\n  | stats sum(infra_hosts) as total_infra_hosts\n]\n| eval coverage_pct=round(itsi_entities/total_infra_hosts*100,1)\n| table itsi_entities total_infra_hosts coverage_pct types\n```\n\nUnderstanding this SPL\n\n**Entity Discovery Completeness Audit** — Undiscovered entities create monitoring blind spots. Auditing entity coverage against infrastructure inventories ensures every critical asset is monitored by ITSI services.\n\nDocumented **Data sources**: `itsi_entities` lookup, CMDB/asset inventory, `index=_internal`. **App/TA** (typical add-on context): Splunk ITSI, infrastructure TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Adds columns from a subsearch with `appendcols`.\n• `eval` defines or adjusts **coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Entity Discovery Completeness Audit**): table itsi_entities total_infra_hosts coverage_pct types\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (coverage %), Table (unmatched hosts), Column chart (entity count by type).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.21",
              "n": "Content Pack Deployment Health (Monitoring and Alerting)",
              "c": "high",
              "f": "intermediate",
              "v": "The Monitoring and Alerting content pack provides pre-built correlation searches and aggregation policies. Tracking deployment health ensures these critical components remain functional.",
              "t": "Splunk ITSI, DA-ITSI-CP-Monitoring-and-Alerting",
              "d": "`index=_internal sourcetype=scheduler`, `itsi_tracked_alerts`, `itsi_grouped_alerts`",
              "q": "index=_internal sourcetype=scheduler app=\"DA-ITSI-CP-Monitoring-and-Alerting\"\n| stats count(eval(status=\"success\")) as success count(eval(status=\"skipped\")) as skipped count(eval(status!=\"success\" AND status!=\"skipped\")) as failed by savedsearch_name\n| eval health=if(failed>0 OR skipped>success, \"degraded\", \"healthy\")\n| sort -failed -skipped",
              "m": "Install the Monitoring and Alerting content pack via ITSI UI. Enable correlation searches incrementally. Monitor the lookup generator reports (schedule daily). Track notable event flow rates to confirm the pipeline is working. Alert on correlation search failures or skipped executions.",
              "z": "Table (search name, status, skip rate), Single value (healthy/degraded count).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI, DA-ITSI-CP-Monitoring-and-Alerting.\n• Ensure the following data sources are available: `index=_internal sourcetype=scheduler`, `itsi_tracked_alerts`, `itsi_grouped_alerts`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall the Monitoring and Alerting content pack via ITSI UI. Enable correlation searches incrementally. Monitor the lookup generator reports (schedule daily). Track notable event flow rates to confirm the pipeline is working. Alert on correlation search failures or skipped executions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=scheduler app=\"DA-ITSI-CP-Monitoring-and-Alerting\"\n| stats count(eval(status=\"success\")) as success count(eval(status=\"skipped\")) as skipped count(eval(status!=\"success\" AND status!=\"skipped\")) as failed by savedsearch_name\n| eval health=if(failed>0 OR skipped>success, \"degraded\", \"healthy\")\n| sort -failed -skipped\n```\n\nUnderstanding this SPL\n\n**Content Pack Deployment Health (Monitoring and Alerting)** — The Monitoring and Alerting content pack provides pre-built correlation searches and aggregation policies. Tracking deployment health ensures these critical components remain functional.\n\nDocumented **Data sources**: `index=_internal sourcetype=scheduler`, `itsi_tracked_alerts`, `itsi_grouped_alerts`. **App/TA** (typical add-on context): Splunk ITSI, DA-ITSI-CP-Monitoring-and-Alerting. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: scheduler. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=scheduler. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by savedsearch_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (search name, status, skip rate), Single value (healthy/degraded count).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.23",
              "n": "Notable Event Volume Trending by Source",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracking notable event volume by source correlation search identifies noisy rules, misconfigured thresholds, and alert fatigue risks before they overwhelm analysts.",
              "t": "Splunk ITSI",
              "d": "`itsi_tracked_alerts`",
              "q": "index=itsi_tracked_alerts\n| timechart span=1h count by source limit=20\n| addtotals\n| where Total > 50",
              "m": "Monitor notable event ingest rates per correlation search source. Identify sudden spikes (alert storms) and sustained high-volume sources (noisy rules). Set thresholds: >100 notables/hour from a single source warrants investigation. Tune or disable noisy correlation searches. Feed into Episode Review capacity planning.",
              "z": "Stacked area chart (events by source over time), Table (top 10 noisiest sources).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: `itsi_tracked_alerts`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor notable event ingest rates per correlation search source. Identify sudden spikes (alert storms) and sustained high-volume sources (noisy rules). Set thresholds: >100 notables/hour from a single source warrants investigation. Tune or disable noisy correlation searches. Feed into Episode Review capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_tracked_alerts\n| timechart span=1h count by source limit=20\n| addtotals\n| where Total > 50\n```\n\nUnderstanding this SPL\n\n**Notable Event Volume Trending by Source** — Tracking notable event volume by source correlation search identifies noisy rules, misconfigured thresholds, and alert fatigue risks before they overwhelm analysts.\n\nDocumented **Data sources**: `itsi_tracked_alerts`. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_tracked_alerts.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_tracked_alerts. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by source limit=20** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Notable Event Volume Trending by Source**): addtotals\n• Filters the current rows with `where Total > 50` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area chart (events by source over time), Table (top 10 noisiest sources).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.24",
              "n": "KPI Drift Detection for Gradual Degradation",
              "c": "high",
              "f": "advanced",
              "v": "Drift detection identifies gradual KPI value changes (e.g., slow memory leak, increasing latency) that stay within thresholds but indicate an emerging problem. Catches issues before threshold breach.",
              "t": "Splunk ITSI 4.20+",
              "d": "`itsi_summary`, `itsi_summary_metrics`",
              "q": "index=itsi_summary is_service_in_maintenance=0 is_entity_in_maintenance=0\n| timechart span=1d avg(alert_value) as daily_avg by kpi_name\n| foreach * [\n  | eval <<FIELD>>_trend=if(<<FIELD>> > 0, round((<<FIELD>> - exact(<<FIELD>>))/exact(<<FIELD>>)*100, 2), 0)\n]\n| untable _time kpi_name daily_avg\n| eventstats avg(daily_avg) as baseline stdev(daily_avg) as sigma by kpi_name\n| eval drift_score=round(abs(daily_avg - baseline) / sigma, 2)\n| where drift_score > 2",
              "m": "Enable drift detection in ITSI 4.20+ Configuration Assistant. For earlier versions, use MLTK regression models on KPI time series. Compare rolling 7-day averages against 30-day baselines. Alert when drift exceeds 2 sigma. Common drift patterns: memory leaks, disk fill, queue depth growth, connection pool exhaustion.",
              "z": "Line chart (KPI value with baseline band), Table (drifting KPIs ranked by score).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI 4.20+.\n• Ensure the following data sources are available: `itsi_summary`, `itsi_summary_metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable drift detection in ITSI 4.20+ Configuration Assistant. For earlier versions, use MLTK regression models on KPI time series. Compare rolling 7-day averages against 30-day baselines. Alert when drift exceeds 2 sigma. Common drift patterns: memory leaks, disk fill, queue depth growth, connection pool exhaustion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary is_service_in_maintenance=0 is_entity_in_maintenance=0\n| timechart span=1d avg(alert_value) as daily_avg by kpi_name\n| foreach * [\n  | eval <<FIELD>>_trend=if(<<FIELD>> > 0, round((<<FIELD>> - exact(<<FIELD>>))/exact(<<FIELD>>)*100, 2), 0)\n]\n| untable _time kpi_name daily_avg\n| eventstats avg(daily_avg) as baseline stdev(daily_avg) as sigma by kpi_name\n| eval drift_score=round(abs(daily_avg - baseline) / sigma, 2)\n| where drift_score > 2\n```\n\nUnderstanding this SPL\n\n**KPI Drift Detection for Gradual Degradation** — Drift detection identifies gradual KPI value changes (e.g., slow memory leak, increasing latency) that stay within thresholds but indicate an emerging problem. Catches issues before threshold breach.\n\nDocumented **Data sources**: `itsi_summary`, `itsi_summary_metrics`. **App/TA** (typical add-on context): Splunk ITSI 4.20+. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by kpi_name** — ideal for trending and alerting on this use case.\n• Iterates over multivalue fields with `foreach`.\n• Pipeline stage (see **KPI Drift Detection for Gradual Degradation**): untable _time kpi_name daily_avg\n• `eventstats` rolls up events into metrics; results are split **by kpi_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **drift_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drift_score > 2` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Line chart (KPI value with baseline band), Table (drifting KPIs ranked by score).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.25",
              "n": "MLTK Custom Anomaly Detection on KPI Data",
              "c": "medium",
              "f": "expert",
              "v": "Combining MLTK with ITSI KPI data enables custom anomaly models that detect multi-dimensional patterns (e.g., CPU+memory+latency correlation) impossible with single-KPI thresholds.",
              "t": "Splunk ITSI, Splunk Machine Learning Toolkit (MLTK)",
              "d": "`itsi_summary`, MLTK models",
              "q": "index=itsi_summary is_service_aggregate=0 kpi_name IN (\"CPU Utilization\", \"Memory Usage\", \"Response Time\")\n| timechart span=5m avg(alert_value) by kpi_name\n| fit DensityFunction \"CPU Utilization\" \"Memory Usage\" \"Response Time\" into itsi_multivariate_model\n| where isOutlier > 0",
              "m": "Extract KPI data from `itsi_summary`. Build MLTK models (DensityFunction for outlier detection, RandomForestRegressor for prediction). Create residual KPIs: predicted vs actual values. Feed MLTK output back as ITSI KPIs for service health scoring. Retrain models monthly or on significant infrastructure changes.",
              "z": "Scatter plot (multi-dimensional KPI space with outliers highlighted), Line chart (residual KPI over time).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI, Splunk Machine Learning Toolkit (MLTK).\n• Ensure the following data sources are available: `itsi_summary`, MLTK models.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExtract KPI data from `itsi_summary`. Build MLTK models (DensityFunction for outlier detection, RandomForestRegressor for prediction). Create residual KPIs: predicted vs actual values. Feed MLTK output back as ITSI KPIs for service health scoring. Retrain models monthly or on significant infrastructure changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary is_service_aggregate=0 kpi_name IN (\"CPU Utilization\", \"Memory Usage\", \"Response Time\")\n| timechart span=5m avg(alert_value) by kpi_name\n| fit DensityFunction \"CPU Utilization\" \"Memory Usage\" \"Response Time\" into itsi_multivariate_model\n| where isOutlier > 0\n```\n\nUnderstanding this SPL\n\n**MLTK Custom Anomaly Detection on KPI Data** — Combining MLTK with ITSI KPI data enables custom anomaly models that detect multi-dimensional patterns (e.g., CPU+memory+latency correlation) impossible with single-KPI thresholds.\n\nDocumented **Data sources**: `itsi_summary`, MLTK models. **App/TA** (typical add-on context): Splunk ITSI, Splunk Machine Learning Toolkit (MLTK). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by kpi_name** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **MLTK Custom Anomaly Detection on KPI Data**): fit DensityFunction \"CPU Utilization\" \"Memory Usage\" \"Response Time\" into itsi_multivariate_model\n• Filters the current rows with `where isOutlier > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter plot (multi-dimensional KPI space with outliers highlighted), Line chart (residual KPI over time).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.26",
              "n": "Splunk On-Call (VictorOps) Alert Routing",
              "c": "high",
              "f": "intermediate",
              "v": "Routing ITSI episode alerts to Splunk On-Call ensures the right on-call engineer is paged with full service context, reducing MTTA by eliminating manual triage.",
              "t": "Splunk ITSI, Splunk On-Call",
              "d": "`itsi_grouped_alerts`, On-Call incident logs",
              "q": "index=itsi_grouped_alerts status=1 severity>=4\n| eval routing_key=case(\n    match(service_name, \"(?i)payment|checkout\"), \"payment-team\",\n    match(service_name, \"(?i)database|sql\"), \"dba-team\",\n    1=1, \"general-ops\"\n)\n| stats count by routing_key severity\n| sort -severity",
              "m": "Configure Splunk On-Call integration in ITSI notable event actions. Map episode severity to On-Call urgency levels. Define routing keys per service or service tree branch. Enable auto-acknowledgment when analysts claim episodes in Episode Review. Track MTTA and MTTR per routing key.",
              "z": "Table (routing key, severity, count), Single value (unacknowledged critical episodes).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI, Splunk On-Call.\n• Ensure the following data sources are available: `itsi_grouped_alerts`, On-Call incident logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk On-Call integration in ITSI notable event actions. Map episode severity to On-Call urgency levels. Define routing keys per service or service tree branch. Enable auto-acknowledgment when analysts claim episodes in Episode Review. Track MTTA and MTTR per routing key.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_grouped_alerts status=1 severity>=4\n| eval routing_key=case(\n    match(service_name, \"(?i)payment|checkout\"), \"payment-team\",\n    match(service_name, \"(?i)database|sql\"), \"dba-team\",\n    1=1, \"general-ops\"\n)\n| stats count by routing_key severity\n| sort -severity\n```\n\nUnderstanding this SPL\n\n**Splunk On-Call (VictorOps) Alert Routing** — Routing ITSI episode alerts to Splunk On-Call ensures the right on-call engineer is paged with full service context, reducing MTTA by eliminating manual triage.\n\nDocumented **Data sources**: `itsi_grouped_alerts`, On-Call incident logs. **App/TA** (typical add-on context): Splunk ITSI, Splunk On-Call. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_grouped_alerts.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_grouped_alerts. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **routing_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by routing_key severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (routing key, severity, count), Single value (unacknowledged critical episodes).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.27",
              "n": "Observability Cloud Alert Ingestion",
              "c": "medium",
              "f": "advanced",
              "v": "Ingesting Splunk Observability Cloud alerts into ITSI unifies cloud-native and on-prem monitoring into a single episode management workflow, eliminating tool-switching.",
              "t": "Splunk ITSI, Splunk Observability Cloud",
              "d": "Observability Cloud webhooks, `itsi_tracked_alerts`",
              "q": "index=itsi_tracked_alerts source=\"*observability*\" OR source=\"*o11y*\"\n| stats count by service_name severity source\n| sort -count",
              "m": "Configure Observability Cloud to forward alerts via webhook to Splunk HEC. Normalize alert payloads into the ITSI Universal Alerting schema. Create a Universal Correlation Search to convert incoming alerts into notable events. Configure NEAPs to group O11y alerts with infrastructure alerts into unified episodes. Track alert volume and false positive rate.",
              "z": "Table (O11y alert source, count, severity), Time chart (alert volume over time).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI, Splunk Observability Cloud.\n• Ensure the following data sources are available: Observability Cloud webhooks, `itsi_tracked_alerts`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Observability Cloud to forward alerts via webhook to Splunk HEC. Normalize alert payloads into the ITSI Universal Alerting schema. Create a Universal Correlation Search to convert incoming alerts into notable events. Configure NEAPs to group O11y alerts with infrastructure alerts into unified episodes. Track alert volume and false positive rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_tracked_alerts source=\"*observability*\" OR source=\"*o11y*\"\n| stats count by service_name severity source\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Observability Cloud Alert Ingestion** — Ingesting Splunk Observability Cloud alerts into ITSI unifies cloud-native and on-prem monitoring into a single episode management workflow, eliminating tool-switching.\n\nDocumented **Data sources**: Observability Cloud webhooks, `itsi_tracked_alerts`. **App/TA** (typical add-on context): Splunk ITSI, Splunk Observability Cloud. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_tracked_alerts.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_tracked_alerts. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by service_name severity source** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Observability Cloud Alert Ingestion** — Ingesting Splunk Observability Cloud alerts into ITSI unifies cloud-native and on-prem monitoring into a single episode management workflow, eliminating tool-switching.\n\nDocumented **Data sources**: Observability Cloud webhooks, `itsi_tracked_alerts`. **App/TA** (typical add-on context): Splunk ITSI, Splunk Observability Cloud. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (O11y alert source, count, severity), Time chart (alert volume over time).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We land cloud monitoring alerts into the same service and episode workflow as the rest of our stack so the team is not switch-hitting tools when time is short.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.28",
              "n": "Service Template Adoption and Consistency",
              "c": "medium",
              "f": "intermediate",
              "v": "Service templates ensure consistent KPI definitions, thresholds, and entity rules across environments (dev/staging/prod). Tracking adoption prevents configuration drift.",
              "t": "Splunk ITSI",
              "d": "ITSI REST API, `itsi_services` lookup",
              "q": "| rest /servicesNS/nobody/SA-ITOA/itoa_interface/service\n| rename title as service_name\n| eval has_template=if(isnotnull(base_service_template_id), \"yes\", \"no\")\n| stats count by has_template\n| eval adoption_pct=round(count/sum(count)*100, 1)",
              "m": "Create service templates for standard service types (web app, database, message queue). Link services to templates for consistent KPI inheritance. Monitor template adherence — services that diverge from templates should be reviewed. Use ITSI REST API to programmatically create services from templates during CI/CD deployments.",
              "z": "Pie chart (templated vs non-templated), Table (services diverging from template).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: ITSI REST API, `itsi_services` lookup.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate service templates for standard service types (web app, database, message queue). Link services to templates for consistent KPI inheritance. Monitor template adherence — services that diverge from templates should be reviewed. Use ITSI REST API to programmatically create services from templates during CI/CD deployments.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /servicesNS/nobody/SA-ITOA/itoa_interface/service\n| rename title as service_name\n| eval has_template=if(isnotnull(base_service_template_id), \"yes\", \"no\")\n| stats count by has_template\n| eval adoption_pct=round(count/sum(count)*100, 1)\n```\n\nUnderstanding this SPL\n\n**Service Template Adoption and Consistency** — Service templates ensure consistent KPI definitions, thresholds, and entity rules across environments (dev/staging/prod). Tracking adoption prevents configuration drift.\n\nDocumented **Data sources**: ITSI REST API, `itsi_services` lookup. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Renames fields with `rename` for clarity or joins.\n• `eval` defines or adjusts **has_template** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by has_template** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **adoption_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (templated vs non-templated), Table (services diverging from template).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.29",
              "n": "Entity-Level Adaptive Threshold Tuning",
              "c": "medium",
              "f": "expert",
              "v": "Entity-level adaptive thresholds (ITSI 4.20+) provide per-host baselines instead of aggregate, drastically reducing false positives in heterogeneous environments where host behavior varies.",
              "t": "Splunk ITSI 4.20+",
              "d": "`itsi_summary`, per-entity KPI data",
              "q": "index=itsi_summary is_entity_in_maintenance=0 is_service_in_maintenance=0\n| stats avg(alert_value) as avg_val stdev(alert_value) as stdev_val count by entity_title kpi_name\n| where stdev_val/avg_val > 0.5 AND count > 100\n| sort -stdev_val\n| head 20",
              "m": "Enable entity-level adaptive thresholds for KPIs with high per-entity variance (e.g., CPU on mixed workload hosts). Review the coefficient of variation (stdev/mean) — values > 0.5 indicate entity-level thresholds will significantly outperform aggregate. Monitor false positive reduction after enablement. Fall back to static thresholds for entities with insufficient data.",
              "z": "Table (entity, KPI, variance, threshold type), Line chart (per-entity KPI with threshold bands).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI 4.20+.\n• Ensure the following data sources are available: `itsi_summary`, per-entity KPI data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable entity-level adaptive thresholds for KPIs with high per-entity variance (e.g., CPU on mixed workload hosts). Review the coefficient of variation (stdev/mean) — values > 0.5 indicate entity-level thresholds will significantly outperform aggregate. Monitor false positive reduction after enablement. Fall back to static thresholds for entities with insufficient data.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary is_entity_in_maintenance=0 is_service_in_maintenance=0\n| stats avg(alert_value) as avg_val stdev(alert_value) as stdev_val count by entity_title kpi_name\n| where stdev_val/avg_val > 0.5 AND count > 100\n| sort -stdev_val\n| head 20\n```\n\nUnderstanding this SPL\n\n**Entity-Level Adaptive Threshold Tuning** — Entity-level adaptive thresholds (ITSI 4.20+) provide per-host baselines instead of aggregate, drastically reducing false positives in heterogeneous environments where host behavior varies.\n\nDocumented **Data sources**: `itsi_summary`, per-entity KPI data. **App/TA** (typical add-on context): Splunk ITSI 4.20+. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by entity_title kpi_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where stdev_val/avg_val > 0.5 AND count > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (entity, KPI, variance, threshold type), Line chart (per-entity KPI with threshold bands).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.30",
              "n": "Configuration Assistant Recommendations Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "The Configuration Assistant (ITSI 4.20+) provides AI-powered optimization recommendations. Tracking which recommendations are implemented vs ignored ensures continuous ITSI health improvement.",
              "t": "Splunk ITSI 4.20+",
              "d": "`index=_internal sourcetype=itsi_internal_log`, Configuration Assistant UI",
              "q": "index=_internal sourcetype=itsi_internal_log component=ConfigurationAssistant\n| stats count by recommendation_type action_taken\n| eval implementation_rate=round(count/sum(count)*100, 1)",
              "m": "Review Configuration Assistant recommendations weekly. Categorize by type: threshold tuning, KPI consolidation, entity rule optimization, base search performance. Track implementation rate and measure impact (reduced skipped searches, fewer false positives, improved health score stability). Prioritize recommendations that affect critical services.",
              "z": "Table (recommendation type, count, implementation status), Single value (implementation rate %).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI 4.20+.\n• Ensure the following data sources are available: `index=_internal sourcetype=itsi_internal_log`, Configuration Assistant UI.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nReview Configuration Assistant recommendations weekly. Categorize by type: threshold tuning, KPI consolidation, entity rule optimization, base search performance. Track implementation rate and measure impact (reduced skipped searches, fewer false positives, improved health score stability). Prioritize recommendations that affect critical services.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=itsi_internal_log component=ConfigurationAssistant\n| stats count by recommendation_type action_taken\n| eval implementation_rate=round(count/sum(count)*100, 1)\n```\n\nUnderstanding this SPL\n\n**Configuration Assistant Recommendations Tracking** — The Configuration Assistant (ITSI 4.20+) provides AI-powered optimization recommendations. Tracking which recommendations are implemented vs ignored ensures continuous ITSI health improvement.\n\nDocumented **Data sources**: `index=_internal sourcetype=itsi_internal_log`, Configuration Assistant UI. **App/TA** (typical add-on context): Splunk ITSI 4.20+. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: itsi_internal_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=itsi_internal_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by recommendation_type action_taken** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **implementation_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recommendation type, count, implementation status), Single value (implementation rate %).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.31",
              "n": "Deep Dive Utilization and Performance",
              "c": "medium",
              "f": "intermediate",
              "v": "Deep Dives are ITSI's primary investigation tool. Tracking utilization reveals which KPIs analysts actually use for troubleshooting and identifies slow-rendering dives that need optimization.",
              "t": "Splunk ITSI",
              "d": "`index=_internal`, ITSI access logs",
              "q": "index=_internal sourcetype=splunkd_ui_access uri_path=\"*deep_dive*\"\n| stats count avg(spent) as avg_load_time_ms by user uri_path\n| sort -count\n| eval avg_load_time_s=round(avg_load_time_ms/1000, 2)",
              "m": "Monitor Deep Dive access patterns to understand analyst workflows. Identify unused deep dives for cleanup. Track load times — dives exceeding 10s typically have too many KPIs or overly broad time ranges. Optimize by reducing KPI count per lane, enabling backfill, or narrowing default time ranges.",
              "z": "Table (deep dive name, user, access count, avg load time), Bar chart (top 10 most-used dives).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: `index=_internal`, ITSI access logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor Deep Dive access patterns to understand analyst workflows. Identify unused deep dives for cleanup. Track load times — dives exceeding 10s typically have too many KPIs or overly broad time ranges. Optimize by reducing KPI count per lane, enabling backfill, or narrowing default time ranges.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd_ui_access uri_path=\"*deep_dive*\"\n| stats count avg(spent) as avg_load_time_ms by user uri_path\n| sort -count\n| eval avg_load_time_s=round(avg_load_time_ms/1000, 2)\n```\n\nUnderstanding this SPL\n\n**Deep Dive Utilization and Performance** — Deep Dives are ITSI's primary investigation tool. Tracking utilization reveals which KPIs analysts actually use for troubleshooting and identifies slow-rendering dives that need optimization.\n\nDocumented **Data sources**: `index=_internal`, ITSI access logs. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd_ui_access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd_ui_access. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user uri_path** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `eval` defines or adjusts **avg_load_time_s** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (deep dive name, user, access count, avg load time), Bar chart (top 10 most-used dives).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.32",
              "n": "ITSI Team Permission and RBAC Audit",
              "c": "high",
              "f": "intermediate",
              "v": "ITSI team assignments control service visibility and episode access. Auditing permissions ensures least-privilege access and prevents unauthorized service modifications.",
              "t": "Splunk ITSI",
              "d": "ITSI REST API, `authorize.conf`",
              "q": "| rest /servicesNS/nobody/SA-ITOA/itoa_interface/team\n| rename title as team_name\n| eval member_count=mvcount(members)\n| eval service_count=mvcount(services)\n| table team_name member_count service_count\n| sort -service_count",
              "m": "Define ITSI teams aligned to organizational structure. Assign services to teams for scoped visibility. Audit team membership quarterly — remove departed users, verify role assignments (itoa_admin, itoa_team_admin, itoa_analyst, itoa_user). Ensure admin role inherits itoa_admin in authorize.conf. Monitor for users with excessive permissions.",
              "z": "Table (team, members, services, role distribution), Single value (users with admin access).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: ITSI REST API, `authorize.conf`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine ITSI teams aligned to organizational structure. Assign services to teams for scoped visibility. Audit team membership quarterly — remove departed users, verify role assignments (itoa_admin, itoa_team_admin, itoa_analyst, itoa_user). Ensure admin role inherits itoa_admin in authorize.conf. Monitor for users with excessive permissions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /servicesNS/nobody/SA-ITOA/itoa_interface/team\n| rename title as team_name\n| eval member_count=mvcount(members)\n| eval service_count=mvcount(services)\n| table team_name member_count service_count\n| sort -service_count\n```\n\nUnderstanding this SPL\n\n**ITSI Team Permission and RBAC Audit** — ITSI team assignments control service visibility and episode access. Auditing permissions ensures least-privilege access and prevents unauthorized service modifications.\n\nDocumented **Data sources**: ITSI REST API, `authorize.conf`. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Renames fields with `rename` for clarity or joins.\n• `eval` defines or adjusts **member_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **service_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **ITSI Team Permission and RBAC Audit**): table team_name member_count service_count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (team, members, services, role distribution), Single value (users with admin access).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Security"
              ],
              "pillar": "security",
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.33",
              "n": "Business Service SLA Composite Scoring",
              "c": "critical",
              "f": "advanced",
              "v": "Composite SLA scores aggregate ITSI health data across services to produce contractual SLA metrics (e.g., 99.9% availability), directly supporting customer and executive reporting.",
              "t": "Splunk ITSI",
              "d": "`itsi_summary`, SLA definitions",
              "q": "index=itsi_summary is_service_aggregate=1\n  service_name IN (\"Payment Gateway\", \"Customer Portal\", \"API Platform\")\n| bin _time span=1d\n| stats avg(health_score) as daily_health by _time service_name\n| eval sla_met=if(daily_health >= 70, 1, 0)\n| stats sum(sla_met) as days_met count as total_days by service_name\n| eval sla_pct=round(days_met/total_days*100, 3)\n| eval sla_target=99.9\n| eval sla_status=if(sla_pct >= sla_target, \"MET\", \"BREACHED\")",
              "m": "Define SLA targets per business service (e.g., 99.9% availability). Map ITSI health score thresholds to SLA compliance (health >= 70 = available). Calculate daily/monthly/quarterly SLA metrics. Use Glass Tables for executive dashboards showing SLA status. Alert on projected SLA breach based on error budget burn rate. Integrate with ITSM for SLA violation reporting.",
              "z": "Glass Table (SLA dashboard), Single value (current SLA %), Gauge (error budget remaining), Table (service SLA history).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: `itsi_summary`, SLA definitions.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine SLA targets per business service (e.g., 99.9% availability). Map ITSI health score thresholds to SLA compliance (health >= 70 = available). Calculate daily/monthly/quarterly SLA metrics. Use Glass Tables for executive dashboards showing SLA status. Alert on projected SLA breach based on error budget burn rate. Integrate with ITSM for SLA violation reporting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary is_service_aggregate=1\n  service_name IN (\"Payment Gateway\", \"Customer Portal\", \"API Platform\")\n| bin _time span=1d\n| stats avg(health_score) as daily_health by _time service_name\n| eval sla_met=if(daily_health >= 70, 1, 0)\n| stats sum(sla_met) as days_met count as total_days by service_name\n| eval sla_pct=round(days_met/total_days*100, 3)\n| eval sla_target=99.9\n| eval sla_status=if(sla_pct >= sla_target, \"MET\", \"BREACHED\")\n```\n\nUnderstanding this SPL\n\n**Business Service SLA Composite Scoring** — Composite SLA scores aggregate ITSI health data across services to produce contractual SLA metrics (e.g., 99.9% availability), directly supporting customer and executive reporting.\n\nDocumented **Data sources**: `itsi_summary`, SLA definitions. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **sla_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_target** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Glass Table (SLA dashboard), Single value (current SLA %), Gauge (error budget remaining), Table (service SLA history).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.34",
              "n": "Episode MTTR Analysis by Service Tier",
              "c": "high",
              "f": "intermediate",
              "v": "Breaking MTTR down by service tier (Tier 1 critical, Tier 2 important, Tier 3 internal) reveals whether high-priority services get faster resolution and identifies process bottlenecks.",
              "t": "Splunk ITSI",
              "d": "`itsi_grouped_alerts`",
              "q": "index=itsi_grouped_alerts status=5\n| eval create_time=strptime(create_time, \"%Y-%m-%dT%H:%M:%S\")\n| eval close_time=strptime(mod_time, \"%Y-%m-%dT%H:%M:%S\")\n| eval mttr_minutes=round((close_time - create_time) / 60, 1)\n| eval tier=case(\n    severity>=6, \"Tier 1 - Critical\",\n    severity>=4, \"Tier 2 - Important\",\n    1=1, \"Tier 3 - Internal\"\n)\n| stats avg(mttr_minutes) as avg_mttr median(mttr_minutes) as median_mttr count by tier\n| sort tier",
              "m": "Define service tiers based on business impact (severity mapping). Track MTTR per tier over time. Set targets: Tier 1 < 15 min, Tier 2 < 60 min, Tier 3 < 4 hours. Analyze outliers to identify process gaps. Correlate MTTR with time-of-day and team assignment for resource optimization.",
              "z": "Bar chart (avg MTTR by tier), Line chart (MTTR trend over weeks), Table (slowest-resolved episodes).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: `itsi_grouped_alerts`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine service tiers based on business impact (severity mapping). Track MTTR per tier over time. Set targets: Tier 1 < 15 min, Tier 2 < 60 min, Tier 3 < 4 hours. Analyze outliers to identify process gaps. Correlate MTTR with time-of-day and team assignment for resource optimization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_grouped_alerts status=5\n| eval create_time=strptime(create_time, \"%Y-%m-%dT%H:%M:%S\")\n| eval close_time=strptime(mod_time, \"%Y-%m-%dT%H:%M:%S\")\n| eval mttr_minutes=round((close_time - create_time) / 60, 1)\n| eval tier=case(\n    severity>=6, \"Tier 1 - Critical\",\n    severity>=4, \"Tier 2 - Important\",\n    1=1, \"Tier 3 - Internal\"\n)\n| stats avg(mttr_minutes) as avg_mttr median(mttr_minutes) as median_mttr count by tier\n| sort tier\n```\n\nUnderstanding this SPL\n\n**Episode MTTR Analysis by Service Tier** — Breaking MTTR down by service tier (Tier 1 critical, Tier 2 important, Tier 3 internal) reveals whether high-priority services get faster resolution and identifies process bottlenecks.\n\nDocumented **Data sources**: `itsi_grouped_alerts`. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_grouped_alerts.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_grouped_alerts. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **create_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **close_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mttr_minutes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **tier** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by tier** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (avg MTTR by tier), Line chart (MTTR trend over weeks), Table (slowest-resolved episodes).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see how long it takes to calm trouble by how important a service is so we can tell whether priority really drives speed.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.35",
              "n": "ITSI License and Capacity Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "ITSI licensing is based on entity count and KPI volume. Tracking utilization prevents license overages and supports capacity planning for service expansion.",
              "t": "Splunk ITSI",
              "d": "`itsi_entities` lookup, `itsi_summary`, `index=_internal`",
              "q": "| inputlookup itsi_entities\n| stats dc(_key) as total_entities\n| appendcols [\n  | rest /servicesNS/nobody/SA-ITOA/itoa_interface/service\n  | stats count as total_services\n]\n| appendcols [\n  | rest /servicesNS/nobody/SA-ITOA/itoa_interface/kpi_base_search\n  | stats count as total_base_searches\n]\n| table total_entities total_services total_base_searches",
              "m": "Monitor entity count against license tier. Track KPI count growth over time. Project when the next license tier will be needed. Identify unused or orphaned entities for cleanup. Monitor base search count and execution time as a proxy for ITSI compute load.",
              "z": "Single value (entity count vs license limit), Line chart (entity growth trend), Table (entity count by type).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI.\n• Ensure the following data sources are available: `itsi_entities` lookup, `itsi_summary`, `index=_internal`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor entity count against license tier. Track KPI count growth over time. Project when the next license tier will be needed. Identify unused or orphaned entities for cleanup. Monitor base search count and execution time as a proxy for ITSI compute load.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup itsi_entities\n| stats dc(_key) as total_entities\n| appendcols [\n  | rest /servicesNS/nobody/SA-ITOA/itoa_interface/service\n  | stats count as total_services\n]\n| appendcols [\n  | rest /servicesNS/nobody/SA-ITOA/itoa_interface/kpi_base_search\n  | stats count as total_base_searches\n]\n| table total_entities total_services total_base_searches\n```\n\nUnderstanding this SPL\n\n**ITSI License and Capacity Utilization** — ITSI licensing is based on entity count and KPI volume. Tracking utilization prevents license overages and supports capacity planning for service expansion.\n\nDocumented **Data sources**: `itsi_entities` lookup, `itsi_summary`, `index=_internal`. **App/TA** (typical add-on context): Splunk ITSI. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Adds columns from a subsearch with `appendcols`.\n• Adds columns from a subsearch with `appendcols`.\n• Pipeline stage (see **ITSI License and Capacity Utilization**): table total_entities total_services total_base_searches\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (entity count vs license limit), Line chart (entity growth trend), Table (entity count by type).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.36",
              "n": "Azure Log Analytics Workspace Ingestion Health",
              "c": "high",
              "f": "intermediate",
              "v": "Azure Log Analytics Workspace is the central logging destination for Azure Monitor, Defender for Cloud, and Sentinel. Ingestion lag or data cap throttling silently breaks alerting and investigation workflows across the entire Azure monitoring stack.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics)",
              "d": "`sourcetype=azure:monitor:metric` (Microsoft.OperationalInsights/workspaces)",
              "q": "index=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.operationalinsights/workspaces\"\n| where metric_name IN (\"IngestionRate\",\"IngestionLatencyInSeconds\",\"DataBytes\",\"BillableDataGB\")\n| timechart span=5m avg(average) as value by metric_name, resource_name",
              "m": "Collect Azure Monitor metrics for Log Analytics workspaces. Key metrics: `IngestionLatencyInSeconds` (alert >300s — indicates data delay for all downstream analytics), `IngestionRate` (sudden drops mean data sources stopped sending), and `BillableDataGB` versus daily cap (when cap is hit, ingestion stops until reset). Track per-table ingestion volume using workspace diagnostic settings to identify data spikes.",
              "z": "Line chart (ingestion latency and rate), Gauge (daily volume vs. cap), Table (top tables by volume).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics).\n• Ensure the following data sources are available: `sourcetype=azure:monitor:metric` (Microsoft.OperationalInsights/workspaces).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Azure Monitor metrics for Log Analytics workspaces. Key metrics: `IngestionLatencyInSeconds` (alert >300s — indicates data delay for all downstream analytics), `IngestionRate` (sudden drops mean data sources stopped sending), and `BillableDataGB` versus daily cap (when cap is hit, ingestion stops until reset). Track per-table ingestion volume using workspace diagnostic settings to identify data spikes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:monitor:metric\" resource_type=\"microsoft.operationalinsights/workspaces\"\n| where metric_name IN (\"IngestionRate\",\"IngestionLatencyInSeconds\",\"DataBytes\",\"BillableDataGB\")\n| timechart span=5m avg(average) as value by metric_name, resource_name\n```\n\nUnderstanding this SPL\n\n**Azure Log Analytics Workspace Ingestion Health** — Azure Log Analytics Workspace is the central logging destination for Azure Monitor, Defender for Cloud, and Sentinel. Ingestion lag or data cap throttling silently breaks alerting and investigation workflows across the entire Azure monitoring stack.\n\nDocumented **Data sources**: `sourcetype=azure:monitor:metric` (Microsoft.OperationalInsights/workspaces). **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Azure Monitor metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:monitor:metric. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:monitor:metric\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where metric_name IN (\"IngestionRate\",\"IngestionLatencyInSeconds\",\"DataBytes\",\"BillableDataGB\")` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by metric_name, resource_name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (ingestion latency and rate), Gauge (daily volume vs. cap), Table (top tables by volume).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.37",
              "n": "Entity-Level Multivariate Anomaly Detection (MLTK + ITSI)",
              "c": "critical",
              "f": "expert",
              "v": "ITSI's adaptive thresholds evaluate KPIs individually per service. But many real incidents manifest as subtle, simultaneous deviations across multiple KPIs for a single entity — CPU slightly elevated, memory climbing, response time drifting up. Per-entity multivariate anomaly detection via MLTK catches these correlated degradation patterns before any single KPI breaches its threshold, providing minutes of early warning for cascading failures.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk IT Service Intelligence (ITSI)",
              "d": "`itsi_summary` (per-entity KPI values)",
              "q": "index=itsi_summary is_service_aggregate=0 is_entity_in_maintenance=0\n    kpi_name IN (\"CPU Utilization\",\"Memory Usage\",\"Response Time\",\"Error Rate\",\"Disk IO Wait\")\n| bin _time span=5m\n| stats avg(alert_value) as val by _time, entity_key, entity_title, kpi_name\n| xyseries _time+\"|\"+entity_key kpi_name val\n| fillnull value=0\n| fit DensityFunction \"CPU Utilization\" \"Memory Usage\" \"Response Time\" \"Error Rate\" \"Disk IO Wait\" by entity_key into entity_multivariate_model\n| where isOutlier > 0\n| eval composite_score=round('CPU Utilization' + 'Memory Usage' + 'Response Time' + 'Error Rate', 2)\n| sort -composite_score\n| table _time, entity_key, \"CPU Utilization\", \"Memory Usage\", \"Response Time\", \"Error Rate\", \"Disk IO Wait\", composite_score",
              "m": "Extract entity-level KPI data from `itsi_summary` for all monitored KPIs within a service. Pivot into wide format (one column per KPI) per entity per time bin. Train DensityFunction models per entity that learn the joint distribution of their KPI values. Schedule the detection search every 5 minutes. Outliers represent entities where the combination of KPI values is unusual, even if each individual KPI is within its threshold. Feed the anomaly score back into ITSI as a synthetic \"Entity Health Anomaly\" KPI that contributes to the service health score. Alert service owners via ITSI notable event rules when the composite anomaly persists for 3+ consecutive windows. Retrain models weekly; use entity groups (by service or tier) if per-entity training data is sparse.",
              "z": "Radar chart (KPI values for anomalous entity), Line chart (composite anomaly score over time), Table (top anomalous entities with KPI breakdown).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk IT Service Intelligence](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk IT Service Intelligence (ITSI).\n• Ensure the following data sources are available: `itsi_summary` (per-entity KPI values).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExtract entity-level KPI data from `itsi_summary` for all monitored KPIs within a service. Pivot into wide format (one column per KPI) per entity per time bin. Train DensityFunction models per entity that learn the joint distribution of their KPI values. Schedule the detection search every 5 minutes. Outliers represent entities where the combination of KPI values is unusual, even if each individual KPI is within its threshold. Feed the anomaly score back into ITSI as a synthetic \"Entity Health A…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary is_service_aggregate=0 is_entity_in_maintenance=0\n    kpi_name IN (\"CPU Utilization\",\"Memory Usage\",\"Response Time\",\"Error Rate\",\"Disk IO Wait\")\n| bin _time span=5m\n| stats avg(alert_value) as val by _time, entity_key, entity_title, kpi_name\n| xyseries _time+\"|\"+entity_key kpi_name val\n| fillnull value=0\n| fit DensityFunction \"CPU Utilization\" \"Memory Usage\" \"Response Time\" \"Error Rate\" \"Disk IO Wait\" by entity_key into entity_multivariate_model\n| where isOutlier > 0\n| eval composite_score=round('CPU Utilization' + 'Memory Usage' + 'Response Time' + 'Error Rate', 2)\n| sort -composite_score\n| table _time, entity_key, \"CPU Utilization\", \"Memory Usage\", \"Response Time\", \"Error Rate\", \"Disk IO Wait\", composite_score\n```\n\nUnderstanding this SPL\n\n**Entity-Level Multivariate Anomaly Detection (MLTK + ITSI)** — ITSI's adaptive thresholds evaluate KPIs individually per service. But many real incidents manifest as subtle, simultaneous deviations across multiple KPIs for a single entity — CPU slightly elevated, memory climbing, response time drifting up. Per-entity multivariate anomaly detection via MLTK catches these correlated degradation patterns before any single KPI breaches its threshold, providing minutes of early warning for cascading failures.\n\nDocumented **Data sources**: `itsi_summary` (per-entity KPI values). **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk IT Service Intelligence (ITSI). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, entity_key, entity_title, kpi_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pivots fields for charting with `xyseries`.\n• Fills null values with `fillnull`.\n• Pipeline stage (see **Entity-Level Multivariate Anomaly Detection (MLTK + ITSI)**): fit DensityFunction \"CPU Utilization\" \"Memory Usage\" \"Response Time\" \"Error Rate\" \"Disk IO Wait\" by entity_key into entity_multivariate_m…\n• Filters the current rows with `where isOutlier > 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **composite_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Entity-Level Multivariate Anomaly Detection (MLTK + ITSI)**): table _time, entity_key, \"CPU Utilization\", \"Memory Usage\", \"Response Time\", \"Error Rate\", \"Disk IO Wait\", composite_score\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Radar chart (KPI values for anomalous entity), Line chart (composite anomaly score over time), Table (top anomalous entities with KPI breakdown).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track service health, episodes, and related signals in one workspace so the team can see what broke and who owns it without juggling every console.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.2.38",
              "n": "Causal KPI Ranking — Root-Cause Acceleration (MLTK + ITSI)",
              "c": "high",
              "f": "expert",
              "v": "When a parent service health score drops, operators must manually investigate child KPIs to find the root cause. A trained model that ranks which child KPIs best explain parent health changes accelerates root-cause analysis from minutes to seconds — showing \"memory pressure on the database tier explains 68% of the service degradation\" instead of requiring manual drill-down through dozens of KPIs.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk IT Service Intelligence (ITSI)",
              "d": "`itsi_summary` (service-level and KPI-level health scores)",
              "q": "index=itsi_summary is_service_aggregate=1\n| eval parent_health=alert_level\n| join type=left service_id _time\n    [search index=itsi_summary is_service_aggregate=0\n    | stats avg(alert_value) as kpi_val by _time, service_id, kpi_name]\n| xyseries _time+\"|\"+service_id kpi_name kpi_val\n| fillnull value=0\n| fit RandomForestRegressor parent_health from * into causal_kpi_model\n| summary causal_kpi_model\n| sort -importance\n| head 10\n| table feature, importance",
              "m": "Collect time-aligned parent service health scores and all child KPI values from `itsi_summary`. Train a RandomForestRegressor or GradientBoostedTrees model where the target variable is the parent health score and features are individual KPI values. Extract feature importance rankings to identify which KPIs most strongly influence parent health. Publish the ranked KPI list as a lookup that Glass Tables and Deep Dives reference for \"top contributing KPIs\" context. Retrain monthly or after service tree changes. For real-time use, apply the pre-trained model to current KPI snapshots and display the top-3 contributing KPIs alongside each degraded service on the NOC Glass Table. Consider using Shapley values (via DSDL) for more accurate per-incident causal attribution.",
              "z": "Bar chart (KPI feature importance), Table (top causal KPIs per service), Sankey (parent health → child KPI contributions).",
              "kfp": "Episode and score noise from content changes, seasonality, and who is in scope. We check maintenance and drill paths before we escalate.",
              "refs": "[Splunk IT Service Intelligence](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk IT Service Intelligence (ITSI).\n• Ensure the following data sources are available: `itsi_summary` (service-level and KPI-level health scores).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect time-aligned parent service health scores and all child KPI values from `itsi_summary`. Train a RandomForestRegressor or GradientBoostedTrees model where the target variable is the parent health score and features are individual KPI values. Extract feature importance rankings to identify which KPIs most strongly influence parent health. Publish the ranked KPI list as a lookup that Glass Tables and Deep Dives reference for \"top contributing KPIs\" context. Retrain monthly or after service …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary is_service_aggregate=1\n| eval parent_health=alert_level\n| join type=left service_id _time\n    [search index=itsi_summary is_service_aggregate=0\n    | stats avg(alert_value) as kpi_val by _time, service_id, kpi_name]\n| xyseries _time+\"|\"+service_id kpi_name kpi_val\n| fillnull value=0\n| fit RandomForestRegressor parent_health from * into causal_kpi_model\n| summary causal_kpi_model\n| sort -importance\n| head 10\n| table feature, importance\n```\n\nUnderstanding this SPL\n\n**Causal KPI Ranking — Root-Cause Acceleration (MLTK + ITSI)** — When a parent service health score drops, operators must manually investigate child KPIs to find the root cause. A trained model that ranks which child KPIs best explain parent health changes accelerates root-cause analysis from minutes to seconds — showing \"memory pressure on the database tier explains 68% of the service degradation\" instead of requiring manual drill-down through dozens of KPIs.\n\nDocumented **Data sources**: `itsi_summary` (service-level and KPI-level health scores). **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk IT Service Intelligence (ITSI). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **parent_health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pivots fields for charting with `xyseries`.\n• Fills null values with `fillnull`.\n• Pipeline stage (see **Causal KPI Ranking — Root-Cause Acceleration (MLTK + ITSI)**): fit RandomForestRegressor parent_health from * into causal_kpi_model\n• Pipeline stage (see **Causal KPI Ranking — Root-Cause Acceleration (MLTK + ITSI)**): summary causal_kpi_model\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Pipeline stage (see **Causal KPI Ranking — Root-Cause Acceleration (MLTK + ITSI)**): table feature, importance\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (KPI feature importance), Table (top causal KPIs per service), Sankey (parent health → child KPI contributions).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how healthy our services look over time so we can report, prioritize, and catch a slide before it becomes a major incident.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.9,
          "qd": {
            "gold": 0,
            "silver": 2,
            "bronze": 35,
            "none": 0
          }
        },
        {
          "i": "13.3",
          "n": "Third-Party Monitoring Integration",
          "u": [
            {
              "i": "13.3.1",
              "n": "Nagios/Zabbix Alert Ingestion",
              "c": "medium",
              "f": "beginner",
              "v": "Consolidating legacy monitoring alerts into Splunk enables cross-tool correlation and single-pane-of-glass operations.",
              "t": "Custom webhook input, syslog",
              "d": "Nagios/Zabbix webhook exports, syslog notifications",
              "q": "index=monitoring sourcetype=\"nagios:notification\" OR sourcetype=\"zabbix:webhook\"\n| stats count by host, service, state, severity\n| sort -count",
              "m": "Configure Nagios/Zabbix to send alerts to Splunk via webhook or syslog. Normalize alert fields (host, service, severity, state) using CIM. Correlate with Splunk-native monitoring. Phase out legacy tools over time.",
              "z": "Table (third-party alerts), Bar chart (alerts by source tool), Status grid (host × service).",
              "kfp": "Planned Nagios or Icinga maintenance windows suppress checks legitimately and can look like ingestion failure until maintenance_calendar metadata aligns; require ticket tags before muting. Zabbix flapping triggers that auto-resolve within seconds may create artificial transition_conflict rows; widen correlation windows or filter ultra-short problems in props. SolarWinds NetFlow gaps during exporter reconfiguration are common while SNMP remains up; do not treat NetFlow silence alone as host down without NPM context. SCOM alert auto-resolution storms after management pack updates can min_sev to cleared while operators still see console noise; pair with ResolutionState timelines. Nimsoft probe restarts emit brief clear-critical oscillations that resemble conflict; ignore single-minute spikes without second-source confirmation. CMDB inventory drift where canonical_host stops matching robot or MonitoringObjectDisplayName produces false dedupe keys; refresh lookups nightly. Duplicate syslog forwarding through redundant relays doubles evt_count without doubling physical faults; dedupe at the edge. Lab chaos tests that pause Universal Forwarders will trigger silent_tool_risk; isolate lab indexes. Certificate rotations on Zabbix or Icinga APIs can stall pollers temporarily; overlap validity windows. Time zone changes around daylight saving boundaries can skew heartbeat_age_min until sources stabilize.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom webhook input, syslog.\n• Ensure the following data sources are available: Nagios/Zabbix webhook exports, syslog notifications.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Nagios/Zabbix to send alerts to Splunk via webhook or syslog. Normalize alert fields (host, service, severity, state) using CIM. Correlate with Splunk-native monitoring. Phase out legacy tools over time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=monitoring sourcetype=\"nagios:notification\" OR sourcetype=\"zabbix:webhook\"\n| stats count by host, service, state, severity\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Nagios/Zabbix Alert Ingestion** — Consolidating legacy monitoring alerts into Splunk enables cross-tool correlation and single-pane-of-glass operations.\n\nDocumented **Data sources**: Nagios/Zabbix webhook exports, syslog notifications. **App/TA** (typical add-on context): Custom webhook input, syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: monitoring; **sourcetype**: nagios:notification, zabbix:webhook. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=monitoring, sourcetype=\"nagios:notification\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, service, state, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (third-party alerts), Bar chart (alerts by source tool), Status grid (host × service).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We pull bells from several old control rooms into one timeline so crews can see when two rooms disagree about the same machine, when a room falls quiet without permission, or when the same fault rings twice through different doors.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.2",
              "n": "Prometheus Metric Ingestion",
              "c": "medium",
              "f": "intermediate",
              "v": "Ingesting Prometheus metrics into Splunk enables long-term storage, cross-domain correlation, and unified dashboarding.",
              "t": "OpenTelemetry Collector, Prometheus remote write",
              "d": "Prometheus remote write endpoint, OpenTelemetry metrics",
              "q": "| mstats avg(_value) WHERE index=prometheus metric_name=\"node_cpu_seconds_total\" by host span=5m",
              "m": "Configure Prometheus remote_write to Splunk's metrics endpoint or use OpenTelemetry Collector as intermediary. Ingest as Splunk metrics. Use `mstats` for efficient querying. Create unified dashboards combining Prometheus and Splunk data.",
              "z": "Line chart (metric trends), Multi-metric dashboard, Table (metric summaries).",
              "kfp": "Scheduled relabel or `push` style jobs that intentionally remove targets during autoscaling can drive `up` gaps that clear within one interval; pair with change metadata before paging. Exporter rolling restarts and canary pod churn produce short `up` flaps that the three-interval dwell should absorb, yet very aggressive scrape intervals can still false-negative dwell logic if only two failures occur. Splunk Metrics index hot-bucket rolls and indexer restart waves create brief tcpin lag signatures in `_internal` logs that resemble back-pressure even when remote-write clients are healthy; correlate across peers before declaring receiver saturation. Federation mesh DNS re-resolution after service mesh upgrades can stall meta scrapes until caches warm; treat warn-tier lag as informational when leaf clusters report fresh timestamps. Kubernetes ServiceMonitor or PodMonitor churn during GitOps apply storms emits noisy events that reconcile cleanly; require message keywords tied to failures, not merely object updates. OpenTelemetry Collector pipeline restarts emit exporter retry lines that self-resolve; cross-check zPages counters for sustained growth. HTTP Event Collector token rotation without dual-write overlap can create auth error bursts that look like sustained ingest failure until secrets converge. Downstream PagerDuty or third-party observability backend maintenance windows may pause alert delivery even while Splunk rows show warn severities; route proofs through a secondary channel during vendor maintenance. Dual Prometheus receivers scraping the same targets can duplicate samples and confuse cardinality math until architecture owners dedupe scrape ownership.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OpenTelemetry Collector, Prometheus remote write.\n• Ensure the following data sources are available: Prometheus remote write endpoint, OpenTelemetry metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Prometheus remote_write to Splunk's metrics endpoint or use OpenTelemetry Collector as intermediary. Ingest as Splunk metrics. Use `mstats` for efficient querying. Create unified dashboards combining Prometheus and Splunk data.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(_value) WHERE index=prometheus metric_name=\"node_cpu_seconds_total\" by host span=5m\n```\n\nUnderstanding this SPL\n\n**Prometheus Metric Ingestion** — Ingesting Prometheus metrics into Splunk enables long-term storage, cross-domain correlation, and unified dashboarding.\n\nDocumented **Data sources**: Prometheus remote write endpoint, OpenTelemetry metrics. **App/TA** (typical add-on context): OpenTelemetry Collector, Prometheus remote write. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: prometheus.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (metric trends), Multi-metric dashboard, Table (metric summaries).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the plumbing that moves numeric measurements from your applications into the central store so breaks in collection, delays in forwarding, or sudden floods of labels show up before charts go blank or costs spike without a clear reason.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry",
                "prometheus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.3",
              "n": "PagerDuty/Opsgenie Integration",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracking alert lifecycle and on-call response metrics ensures incident response SLAs are met and identifies process improvements.",
              "t": "PagerDuty API input",
              "d": "PagerDuty incidents API, Opsgenie alerts API",
              "q": "index=pagerduty sourcetype=\"pagerduty:incident\"\n| eval ack_time_min=round((acknowledged_at_epoch-created_at_epoch)/60,1)\n| stats avg(ack_time_min) as avg_ack, avg(resolved_at_epoch-created_at_epoch)/60 as avg_resolve by service",
              "m": "Poll PagerDuty/Opsgenie API for incident data. Track acknowledgment time, resolution time, and escalation rates. Report on on-call workload distribution. Alert when acknowledgment SLA is breached.",
              "z": "Bar chart (MTTA by service), Line chart (incident volume trend), Table (open incidents), Single value (avg MTTA).",
              "kfp": "Planned game-day drills and controlled failover exercises intentionally stretch acknowledgement intervals; tag drill metadata in vendor custom fields or lookup tables before paging people for SLA breaches. Vendor API pagination or rate-limit backoff can delay incident rows until minutes after the UI shows action, inflating acknowledgement math until pollers catch up; compare vendor epoch fields to Splunk _time. Webhook retry storms can duplicate incident_id rows and inflate incident counts until dedupe keys land at ingest. Shift-boundary handoffs near midnight local time split acknowledgement ownership between roster rows; widen join keys with shift_bucket overlap rules rather than blaming individuals. Opsgenie notification-only sourcetypes without core alert bodies can lack resolved_at until the alert arm arrives; require paired sourcetypes before declaring auto-resolve surges. Splunk CIM Alerts rows without paging counterparts are analytical upstream signals, not proof that paging failed; treat splunk_alert_upstreams as reconciliation context. Maintenance annotations that suppress pages but leave Splunk alerts firing can make Splunk appear ahead of paging vendors without a real disagreement. Executive dashboards that average across globally distributed teams without region splits can misread fairness; segment by region or follow-the-sun labels. Policy exports that lag reorganizations make expected_primary_minutes stale; refresh escalation_policy_ground_truth.csv within one business day of policy edits.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PagerDuty API input.\n• Ensure the following data sources are available: PagerDuty incidents API, Opsgenie alerts API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll PagerDuty/Opsgenie API for incident data. Track acknowledgment time, resolution time, and escalation rates. Report on on-call workload distribution. Alert when acknowledgment SLA is breached.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pagerduty sourcetype=\"pagerduty:incident\"\n| eval ack_time_min=round((acknowledged_at_epoch-created_at_epoch)/60,1)\n| stats avg(ack_time_min) as avg_ack, avg(resolved_at_epoch-created_at_epoch)/60 as avg_resolve by service\n```\n\nUnderstanding this SPL\n\n**PagerDuty/Opsgenie Integration** — Tracking alert lifecycle and on-call response metrics ensures incident response SLAs are met and identifies process improvements.\n\nDocumented **Data sources**: PagerDuty incidents API, Opsgenie alerts API. **App/TA** (typical add-on context): PagerDuty API input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pagerduty; **sourcetype**: pagerduty:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pagerduty, sourcetype=\"pagerduty:incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ack_time_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (MTTA by service), Line chart (incident volume trend), Table (open incidents), Single value (avg MTTA).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We track urgent calls for help from the tools that page people at night: how fast someone answers, how long fixes take, which services ring the phone most, and when problems heal on their own, so managers can fix unfair shifts and noisy services without guessing who was slow.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "pagerduty"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.4",
              "n": "Monitoring Coverage Gap Detection",
              "c": "high",
              "f": "expert",
              "v": "Hosts not covered by any monitoring tool are blind spots. Detection ensures comprehensive infrastructure visibility.",
              "t": "Cross-tool asset correlation",
              "d": "CMDB + all monitoring tool inventories",
              "q": "| inputlookup cmdb_hosts.csv\n| join type=left max=1 hostname [search index=_internal group=tcpin_connections | stats latest(_time) as splunk_last by hostname]\n| join type=left max=1 hostname [search index=edr sourcetype=\"*sensor*\" | stats latest(_time) as edr_last by hostname]\n| where isnull(splunk_last) AND isnull(edr_last)\n| table hostname, os, department",
              "m": "Cross-reference CMDB with all monitoring tool inventories (Splunk forwarders, EDR agents, SNMP targets). Identify assets not monitored by any tool. Alert on new unmonitored assets. Track coverage percentage as a KPI.",
              "z": "Table (unmonitored hosts), Single value (coverage %), Pie chart (monitored vs unmonitored), Bar chart (gaps by department).",
              "kfp": "Decommissioned servers often remain marked in_service in CMDB for weeks, producing blind_spot rows until lifecycle automation updates cmdb_status. Development and sandbox hosts are frequently under-instrumented by design; environment-tiered thresholds and min_prod_coverage_pct in the expected_streams lookup prevent false escalations when ratios dip outside production. Stale CMDB owners and business_unit labels route pages to the wrong team until identity refresh jobs run. IP address reuse and DHCP churn make rogue_asset signals ambiguous without IPAM correlation. Hostname normalization mismatches between FQDN and short names split a single machine into two keys until transforms align primary_dns. Containerized workloads may emit telemetry under pod names while CMDB lists nodes; without node_role and orchestration tags, partial_stream detections confuse platform and application owners. Cloud ephemeral instances can appear in telemetry before billing snapshots land, temporarily skewing account-level narratives. Maintenance windows without annotated suppressions still look like outages unless change_ticket_id flows into a suppression lookup. Duplicate forwarders behind NAT may collapse tcpin counts, overstating fleet health. Finally, overly aggressive expected_streams templates flag nearly every host as partial_stream until architects publish realistic baselines per role.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cross-tool asset correlation.\n• Ensure the following data sources are available: CMDB + all monitoring tool inventories.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCross-reference CMDB with all monitoring tool inventories (Splunk forwarders, EDR agents, SNMP targets). Identify assets not monitored by any tool. Alert on new unmonitored assets. Track coverage percentage as a KPI.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup cmdb_hosts.csv\n| join type=left max=1 hostname [search index=_internal group=tcpin_connections | stats latest(_time) as splunk_last by hostname]\n| join type=left max=1 hostname [search index=edr sourcetype=\"*sensor*\" | stats latest(_time) as edr_last by hostname]\n| where isnull(splunk_last) AND isnull(edr_last)\n| table hostname, os, department\n```\n\nUnderstanding this SPL\n\n**Monitoring Coverage Gap Detection** — Hosts not covered by any monitoring tool are blind spots. Detection ensures comprehensive infrastructure visibility.\n\nDocumented **Data sources**: CMDB + all monitoring tool inventories. **App/TA** (typical add-on context): Cross-tool asset correlation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(splunk_last) AND isnull(edr_last)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Monitoring Coverage Gap Detection**): table hostname, os, department\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unmonitored hosts), Single value (coverage %), Pie chart (monitored vs unmonitored), Bar chart (gaps by department).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We compare the official machine list from your inventory systems with the signals we actually receive so we can spot missing collectors, unexpected senders, and half-finished setups before audits or outages surprise leadership.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.5",
              "n": "Alert Storm Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Correlated alert storms across monitoring tools indicate major incidents. Detection enables rapid escalation and noise reduction.",
              "t": "Multi-source alert correlation",
              "d": "All monitoring tool alerts ingested into Splunk",
              "q": "index=alerts sourcetype=*\n| timechart span=5m count as alert_count\n| where alert_count > 50\n| eval storm=\"Alert storm detected\"",
              "m": "Ingest alerts from all monitoring tools into a common index. Track alert rate across all sources. Alert when rate exceeds normal baseline by >5× (indicates correlated event). Use ITSI Event Analytics for intelligent grouping.",
              "z": "Line chart (alert rate across all sources), Single value (current alert rate), Timeline (alert storm events), Table (contributing alerts).",
              "kfp": "Legitimate major incidents naturally spike alert_count across many services; mitigate by confirming duplicate_ratio and correlating with customer-facing error budgets before dismissing the storm as noise. Planned change windows may intentionally raise alert volume during canaries; require change_calendar tags and temporary threshold offsets rather than permanent mutes. Intentional load and chaos tests can fire synthetic monitors; gate test environments or label events with drill metadata so severity downgrades automatically. Alert policy onboarding bursts when teams import dozens of monitors at once; schedule imports and use warn-tier routing until baselines stabilize. Observability vendor retries during partial API outages can inflate duplicate_ratio with identical fingerprints; coordinate with vendor status pages and backoff configuration. Episode maintenance during content pack upgrades can pause grouping while vendor lanes remain hot; avoid blaming infrastructure until NEAP policies finish rebalancing.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Multi-source alert correlation.\n• Ensure the following data sources are available: All monitoring tool alerts ingested into Splunk.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest alerts from all monitoring tools into a common index. Track alert rate across all sources. Alert when rate exceeds normal baseline by >5× (indicates correlated event). Use ITSI Event Analytics for intelligent grouping.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=alerts sourcetype=*\n| timechart span=5m count as alert_count\n| where alert_count > 50\n| eval storm=\"Alert storm detected\"\n```\n\nUnderstanding this SPL\n\n**Alert Storm Detection** — Correlated alert storms across monitoring tools indicate major incidents. Detection enables rapid escalation and noise reduction.\n\nDocumented **Data sources**: All monitoring tool alerts ingested into Splunk. **App/TA** (typical add-on context): Multi-source alert correlation. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: alerts; **sourcetype**: *. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=alerts, sourcetype=*. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where alert_count > 50` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **storm** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (alert rate across all sources), Single value (current alert rate), Timeline (alert storm events), Table (contributing alerts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "Imagine a concert hall where every exit sign has its own siren on one flaky switch. When the switch twitches, identical screams fill the room and nobody hears the usher who knows which door is safe. We tally sirens per minute, separate real distinct alarms from echoes, and read posted notices before waking the neighborhood.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.6",
              "n": "SLO Burn Rate and Error Budget Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Burn rate and error budget show how fast SLO is consumed. Tracking supports proactive action before SLA breach and prioritization of reliability work.",
              "t": "ITSI, custom SLO metrics, APM data",
              "d": "SLO compliance metrics, error budget calculations",
              "q": "index=slos sourcetype=\"slo:compliance\"\n| eval burn_rate=1-(success_count/(success_count+failure_count))\n| bin _time span=1h\n| stats avg(burn_rate) as avg_burn, sum(error_budget_consumed) as consumed by service, slo_name, _time\n| where avg_burn > 0.1",
              "m": "Compute SLO compliance and error budget from availability/latency data. Ingest into Splunk. Alert on burn rate above threshold or error budget exhaustion. Report on remaining budget by service.",
              "z": "Gauge (error budget remaining), Line chart (burn rate), Table (services by budget consumed).",
              "kfp": "Mid-quarter target lifts such as ninety-nine point five to ninety-nine point nine without rebaselining make retroactive ledger replays look like massive burn even when customer outcomes were steady; snapshot targets with effective_from metadata before trusting historical charts. SLI calculator swaps, for example moving from raw HTTP five xx tallies to an error_class taxonomy, change numerators overnight and can mimic outages until dual-run calculators converge. Seasonal retail or batch windows shrink denominators so rare errors look like huge burn spikes; require minimum request floors from Performance or APM lanes before paging. Approved maintenance periods should consume budget narratively but not wake people; validate maintenance_until parsing and time zones so false CRITICAL pages do not recur. Child objectives that depend on a parent_slo already under declared incident credit may show sympathetic burn that is intellectually honest yet redundant for paging; suppress duplicates using incident correlation lookups. Dual-stack cutovers where the new mesh emits http.route while legacy emits uri_path split multisearch arms until semantic convention alignment completes. Hyperscaler incident credits and SLA rebates can recalibrate budgets hours later; annotate FinOps adjustments rather than treating vendor dashboards as lies. Rolling twenty-eight day boundaries that cross daylight saving transitions on legacy cron hosts can shift bucket edges one hour and exaggerate burn_3d unless everything runs UTC. Prometheus prometheus_query_log_total spikes may reflect ad hoc analyst queries rather than customer traffic; never substitute query logs for business denominators without guardrails.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ITSI, custom SLO metrics, APM data.\n• Ensure the following data sources are available: SLO compliance metrics, error budget calculations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompute SLO compliance and error budget from availability/latency data. Ingest into Splunk. Alert on burn rate above threshold or error budget exhaustion. Report on remaining budget by service.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=slos sourcetype=\"slo:compliance\"\n| eval burn_rate=1-(success_count/(success_count+failure_count))\n| bin _time span=1h\n| stats avg(burn_rate) as avg_burn, sum(error_budget_consumed) as consumed by service, slo_name, _time\n| where avg_burn > 0.1\n```\n\nUnderstanding this SPL\n\n**SLO Burn Rate and Error Budget Tracking** — Burn rate and error budget show how fast SLO is consumed. Tracking supports proactive action before SLA breach and prioritization of reliability work.\n\nDocumented **Data sources**: SLO compliance metrics, error budget calculations. **App/TA** (typical add-on context): ITSI, custom SLO metrics, APM data. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: slos; **sourcetype**: slo:compliance. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=slos, sourcetype=\"slo:compliance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **burn_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by service, slo_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_burn > 0.1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (error budget remaining), Line chart (burn rate), Table (services by budget consumed).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We track the shared reliability allowance for our most important services like a household budget notebook. When the notebook, the bank app, and the calendar disagree, or when a planned quiet hour should not count as a crisis, we sort it out calmly and only ring the serious alarm when real customers are losing out.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.7",
              "n": "Distributed Trace Sampling and Coverage",
              "c": "medium",
              "f": "intermediate",
              "v": "Trace sampling rate and coverage affect observability. Monitoring sampling decisions and trace completeness supports tuning and gap detection.",
              "t": "APM/tracing TAs (Jaeger, Tempo, OTLP)",
              "d": "Trace metadata, sampling flags, span counts per trace",
              "q": "index=traces sourcetype=\"trace:span\"\n| bin _time span=1h\n| stats count as spans, dc(trace_id) as traces, avg(sample_rate) as avg_sample by service, _time\n| eval spans_per_trace=spans/traces\n| where spans_per_trace < 5 OR avg_sample < 0.01",
              "m": "Ingest trace metadata and sampling rates. Alert when sampling drops below target or trace completeness (spans per trace) is low for critical services. Report on coverage by service and env.",
              "z": "Line chart (sampling rate by service), Table (low-coverage services), Bar chart (spans per trace).",
              "kfp": "Planned cost-capping campaigns that lower global sample rates are legitimate when documented in change calendars; downgrade severity using business_priority and finance ticket metadata before paging service owners. Weekend low-traffic services produce noisy denominators; require apm_req_sum floors so quiet services do not trigger false criticals. Regional traffic shifts after marketing launches inflate log counts without proportional trace volume; compare against env_median_yield before declaring incidents. A/B test cohorts intentionally under-sampled relative to control per experiment design; join experiment labels and exclude cohort keys from default alerting. Infra-side benchmark routing that bypasses application gateways yields log denominators without matching APM service keys; filter benchmark paths in lookup tables. Blue-green cutovers duplicate traces briefly across mirrored indexes; deduplicate trace_id in investigations before muting. Chaos drills that strip Kafka headers will trip yield detectors; annotate drill namespaces. Vendor maintenance windows that pause ingestion mimic drift; check status pages before rollback. Stale apm_sampling_policy.csv rows after service renames misstate targets; refresh lookups weekly. Dual HEC mirrors that double-count access logs inflate denominators; normalize source types per environment.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: APM/tracing TAs (Jaeger, Tempo, OTLP).\n• Ensure the following data sources are available: Trace metadata, sampling flags, span counts per trace.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest trace metadata and sampling rates. Alert when sampling drops below target or trace completeness (spans per trace) is low for critical services. Report on coverage by service and env.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=traces sourcetype=\"trace:span\"\n| bin _time span=1h\n| stats count as spans, dc(trace_id) as traces, avg(sample_rate) as avg_sample by service, _time\n| eval spans_per_trace=spans/traces\n| where spans_per_trace < 5 OR avg_sample < 0.01\n```\n\nUnderstanding this SPL\n\n**Distributed Trace Sampling and Coverage** — Trace sampling rate and coverage affect observability. Monitoring sampling decisions and trace completeness supports tuning and gap detection.\n\nDocumented **Data sources**: Trace metadata, sampling flags, span counts per trace. **App/TA** (typical add-on context): APM/tracing TAs (Jaeger, Tempo, OTLP). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: traces; **sourcetype**: trace:span. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=traces, sourcetype=\"trace:span\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by service, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **spans_per_trace** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where spans_per_trace < 5 OR avg_sample < 0.01` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (sampling rate by service), Table (low-coverage services), Bar chart (spans per trace).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We run every customer request through a checkpoint like inspectors at a factory door. When the gate opens too rarely or skips the cracked boxes, we think we are safe while broken items slip past; when the rules ignore errors and slow paths, the worst shipments never get inspected.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "cisco_aci"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.8",
              "n": "Log Ingestion Backlog and Lag",
              "c": "critical",
              "f": "beginner",
              "v": "Ingestion backlog or lag causes delayed alerting and search. Monitoring lag per source ensures data freshness and pipeline health.",
              "t": "Splunk _internal, forwarder metrics",
              "d": "Indexer acknowledgment, forwarder queue depth, event timestamps",
              "q": "index=_internal source=*metrics* group=queue\n| stats latest(current_size) as queue_depth by host, name\n| where queue_depth > 1000\n| table host, name, queue_depth",
              "m": "Monitor forwarder and indexer queue metrics. Alert when queue depth or event lag (now - event time) exceeds threshold. Report on lag by source and index.",
              "z": "Table (hosts with backlog), Single value (max lag minutes), Line chart (queue depth trend).",
              "kfp": "Legitimate batch-archival log ingestion often lands object-heavy files during off-peak windows, producing a burst in `_indextime` skew that clears after replay without customer impact; require correlation with active change tickets and object-ingest dashboards before paging. Historical replay of cold archives after retention extensions replays years of `_time` while `_indextime` clusters in the present, which looks like catastrophe in naive lag math until you filter replay tags. Scheduled bulk-import jobs from security or finance warehouses can saturate parsing queues briefly yet stay inside written SLAs; join `pipeline_inventory.csv` deploy_freeze_until and tier metadata to suppress known imports. Daily universal forwarder restarts or scripted input reloads create five-minute queue humps that self-heal; use `streamstats` baselines to ignore single-sample spikes. Indexer cluster rolling restarts move queue depth from peer to peer; verify host-level blockage against cluster-wide ingest rather than paging every node. Kafka consumer group rebalances during partition scale-out inflate lag readings for seconds to minutes; compare to broker metrics and consumer group generation counters. Network partitions between a region and the indexer fabric look like severe lag but reflect connectivity, not broken parsers; pair with network path dashboards and cross-region health checks. HTTP Event Collector token rate-limit cooldowns after marketing bursts throttle intentionally; confirm token and load-balancer policies before treating as incident. Splunk SmartStore cold bucket fetch latency stretches search-time retrieval but may not indicate ingest backlog unless combined with local staging growth; separate search assistant delays from ingest queues. Network Time Protocol drift between sources and indexers widens `_time` versus wall clock without implying indexer backlog; validate stratum and offset monitors. Daylight saving time transitions create apparent timestamp jumps in human-readable charts even when monotonic ingest continues; annotate locale policies and use epoch math in investigations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk _internal, forwarder metrics.\n• Ensure the following data sources are available: Indexer acknowledgment, forwarder queue depth, event timestamps.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor forwarder and indexer queue metrics. Alert when queue depth or event lag (now - event time) exceeds threshold. Report on lag by source and index.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal source=*metrics* group=queue\n| stats latest(current_size) as queue_depth by host, name\n| where queue_depth > 1000\n| table host, name, queue_depth\n```\n\nUnderstanding this SPL\n\n**Log Ingestion Backlog and Lag** — Ingestion backlog or lag causes delayed alerting and search. Monitoring lag per source ensures data freshness and pipeline health.\n\nDocumented **Data sources**: Indexer acknowledgment, forwarder queue depth, event timestamps. **App/TA** (typical add-on context): Splunk _internal, forwarder metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where queue_depth > 1000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Log Ingestion Backlog and Lag**): table host, name, queue_depth\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hosts with backlog), Single value (max lag minutes), Line chart (queue depth trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We measure how long it takes for diary entries from your systems to show up in the library where everyone reads them. When the mail trucks, sorting rooms, or librarians fall behind, we raise a clear signal before anyone assumes the story simply stopped happening.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.9",
              "n": "Dashboard and Saved Search Usage Analytics",
              "c": "medium",
              "f": "beginner",
              "v": "Usage of dashboards and saved searches guides optimization and retirement. Analytics support adoption and reduce stale or unused content.",
              "t": "Splunk audit logs, usage metadata",
              "d": "Dashboard view and search run audit logs",
              "q": "index=_audit action=view OR action=run\n| search (resource_type=\"dashboard\" OR resource_type=\"saved_search\")\n| stats count by user, resource_name, resource_type\n| sort -count",
              "m": "Ingest Splunk audit or usage logs for dashboard and search runs. Report on most/least used dashboards and searches. Identify unused content for archival. Track adoption by team.",
              "z": "Bar chart (views by dashboard), Table (search run count by name), Pie chart (usage by user).",
              "kfp": "Admin spikes from automation, break-glass, and app pushes. We check change records before a single run looks like a takeover.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk audit logs, usage metadata.\n• Ensure the following data sources are available: Dashboard view and search run audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Splunk audit or usage logs for dashboard and search runs. Report on most/least used dashboards and searches. Identify unused content for archival. Track adoption by team.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit action=view OR action=run\n| search (resource_type=\"dashboard\" OR resource_type=\"saved_search\")\n| stats count by user, resource_name, resource_type\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Dashboard and Saved Search Usage Analytics** — Usage of dashboards and saved searches guides optimization and retirement. Analytics support adoption and reduce stale or unused content.\n\nDocumented **Data sources**: Dashboard view and search run audit logs. **App/TA** (typical add-on context): Splunk audit logs, usage metadata. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, resource_name, resource_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (views by dashboard), Table (search run count by name), Pie chart (usage by user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see which views and stored searches people actually use so we can drop clutter and keep the right work maintained.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.10",
              "n": "Synthetic Check Failure and Geographic Variance",
              "c": "high",
              "f": "intermediate",
              "v": "Synthetic checks validate user-facing availability. Failure or variance by geography highlights regional issues and CDN/edge health.",
              "t": "Synthetic monitoring product logs, Splunk HTTP Event Collector",
              "d": "Synthetic check results, response time, status by location",
              "q": "index=synthetic sourcetype=\"synthetic:check\"\n| where success=\"false\" OR response_time_ms > 5000\n| bin _time span=15m\n| stats count, avg(response_time_ms) as avg_ms by check_name, location, _time\n| sort -count",
              "m": "Ingest synthetic check results from Datadog, Pingdom, or custom scripts. Alert on failure or latency above threshold. Compare success rate and latency by region. Report on SLA by check and location.",
              "z": "Table (failed checks by location), Geo map (failure by region), Line chart (latency by location).",
              "kfp": "Intake blips can follow token or load-balancer work and client bursts. We align with app owners before a platform-wide incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Synthetic monitoring product logs, Splunk HTTP Event Collector.\n• Ensure the following data sources are available: Synthetic check results, response time, status by location.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest synthetic check results from Datadog, Pingdom, or custom scripts. Alert on failure or latency above threshold. Compare success rate and latency by region. Report on SLA by check and location.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=synthetic sourcetype=\"synthetic:check\"\n| where success=\"false\" OR response_time_ms > 5000\n| bin _time span=15m\n| stats count, avg(response_time_ms) as avg_ms by check_name, location, _time\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Synthetic Check Failure and Geographic Variance** — Synthetic checks validate user-facing availability. Failure or variance by geography highlights regional issues and CDN/edge health.\n\nDocumented **Data sources**: Synthetic check results, response time, status by location. **App/TA** (typical add-on context): Synthetic monitoring product logs, Splunk HTTP Event Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: synthetic; **sourcetype**: synthetic:check. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=synthetic, sourcetype=\"synthetic:check\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where success=\"false\" OR response_time_ms > 5000` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by check_name, location, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed checks by location), Geo map (failure by region), Line chart (latency by location).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We run outside-in checks of important flows so we know when a path breaks before a large part of the user base does.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.11",
              "n": "Prometheus Target Scrape Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Targets down or unreachable; gaps in metrics collection.",
              "t": "Custom (Prometheus /api/v1/targets)",
              "d": "Prometheus targets API, up metric",
              "q": "index=prometheus sourcetype=\"prometheus:targets\" health=\"down\"\n| stats latest(_time) as last_check, values(job) as job, values(instance) as instance by scrapeUrl\n| eval down_since=round((now()-last_check)/60,0)\n| table scrapeUrl, job, instance, down_since\n| sort -down_since",
              "m": "Poll Prometheus `/api/v1/targets` via scripted input or HTTP Event Collector. Parse JSON response and index target health (up/down), last scrape time, and last error. Alternatively, ingest `up` metric (value 0 = down) from Prometheus remote write. Alert when any target has been down >5 minutes. Track scrape duration and failure reasons (connection refused, timeout, DNS) for root cause analysis.",
              "z": "Table (down targets with duration), Status grid (job × instance health), Single value (targets down count), Line chart (scrape failure rate over time).",
              "kfp": "Planned target churn during Kubernetes rollouts, Consul-template reloads, and GitOps node taints routinely produces short scrape gaps that clear within minutes; require dwell and change metadata before paging service owners. Intentional scrape_timeout increases during noisy neighbors can elevate scrape_duration_seconds without exporter failure; pair with rule_evaluation latency before treating as incidents. Consul or Kubernetes DNS rolling restarts cause intermittent resolve failures that look like TLS or HTTP errors until resolvers stabilize; suppress duplicate pages when infra tickets already cover the same window. Federation pauses during Prometheus replica failovers or meta-tier config edits can mimic remote_write backlog; distinguish leaf up series freshness from queue gauges. Blackbox exporter synthetic checks in distant regions may fail while local scrape paths remain healthy; correlate probe labels with Prometheus relabel rules before merging incidents. Deliberate sample_limit or body_size_limit tightening to control cardinality can trigger scrape_samples_post_metric_relabeling cliffs that match policy rather than broken instrumentation; confirm change tickets. ServiceMonitor and PodMonitor reconciliation lag after operator upgrades can temporarily desync target lists from workload reality; compare operator logs with inventory baselines. OTel Collector refused metric points may reflect downstream HEC pressure rather than bad scrape targets; triage with UC-13.3.15 before blaming applications. Cross-datacenter ratio alerts can fire when one sister cluster is mid-migration with fewer targets; normalize by active target counts from inventory. Stale CSV lookups that omit decommissioned jobs create false drift signals until weekly refresh jobs run.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Prometheus /api/v1/targets).\n• Ensure the following data sources are available: Prometheus targets API, up metric.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Prometheus `/api/v1/targets` via scripted input or HTTP Event Collector. Parse JSON response and index target health (up/down), last scrape time, and last error. Alternatively, ingest `up` metric (value 0 = down) from Prometheus remote write. Alert when any target has been down >5 minutes. Track scrape duration and failure reasons (connection refused, timeout, DNS) for root cause analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=prometheus sourcetype=\"prometheus:targets\" health=\"down\"\n| stats latest(_time) as last_check, values(job) as job, values(instance) as instance by scrapeUrl\n| eval down_since=round((now()-last_check)/60,0)\n| table scrapeUrl, job, instance, down_since\n| sort -down_since\n```\n\nUnderstanding this SPL\n\n**Prometheus Target Scrape Failures** — Targets down or unreachable; gaps in metrics collection.\n\nDocumented **Data sources**: Prometheus targets API, up metric. **App/TA** (typical add-on context): Custom (Prometheus /api/v1/targets). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: prometheus; **sourcetype**: prometheus:targets. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=prometheus, sourcetype=\"prometheus:targets\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by scrapeUrl** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **down_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Prometheus Target Scrape Failures**): table scrapeUrl, job, instance, down_since\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (down targets with duration), Status grid (job × instance health), Single value (targets down count), Line chart (scrape failure rate over time).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We watch the robots that visit each application meter on a schedule and copy numbers into a central ledger. When visits fail, run too long, return far fewer lines than usual, or the visit list changes oddly, we raise a clear signal before charts go quiet or alerts judge the wrong story.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "prometheus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.12",
              "n": "Prometheus TSDB Compaction Failures",
              "c": "high",
              "f": "advanced",
              "v": "Storage engine issues impacting metric retention.",
              "t": "Custom (Prometheus /api/v1/status/tsdb, prometheus logs)",
              "d": "Prometheus TSDB stats, server logs",
              "q": "index=prometheus (sourcetype=\"prometheus:tsdb\" OR sourcetype=\"prometheus:log\")\n| search \"compaction\" AND (\"failed\" OR \"error\" OR \"panic\")\n| stats count, latest(_time) as last_occurrence by host, message\n| eval last_occurrence_human=strftime(last_occurrence,\"%Y-%m-%d %H:%M:%S\")\n| table host, message, count, last_occurrence_human\n| sort -count",
              "m": "Poll Prometheus `/api/v1/status/tsdb` for head stats, block count, and retention info. Ingest Prometheus server logs (stderr) for compaction-related errors. Parse TSDB head chunk count and series count for anomaly detection. Alert on compaction failure messages or when head series count grows abnormally (potential compaction backlog). Correlate with disk I/O and storage capacity metrics.",
              "z": "Table (compaction errors by host), Single value (TSDB health status), Line chart (head series count trend), Bar chart (block count by retention).",
              "kfp": "A single logged compaction warning during a graceful `SIGTERM` shutdown can coincide with benign WAL truncation while the head block flushes; treat isolated lines as noise unless `prometheus_tsdb_compactions_failed_total` increases across multiple scrapes or TSDB status shows persistent head growth. Operators who intentionally delete blocks using `promtool tsdb` repair workflows during forensic exercises will generate scary log fragments that match this search even when the cluster is healthy post-restoration; require inventory change tickets and maint_window metadata before paging. After tightening `--storage.tsdb.retention.time` or `--storage.tsdb.retention.size`, compaction duration histograms often spike for several hours while the engine rewrites overlapping ranges; that burst is expected capacity work rather than a fault if free space remains above policy and failure counters stay flat. Head series counts routinely jump during large scrape config rollouts that add many new targets; compare against a seven-day baseline per job label set rather than absolute thresholds alone. WAL truncation events logged shortly after retention-size edits can look like corruption until sequence numbers stabilize; pair log text with `prometheus_tsdb_wal_corruptions_total` and reload failure metrics. Snapshot tools and backup agents that hold file descriptors open across compaction windows may stall merges briefly; correlate with vendor snapshot schedules before declaring engine damage. Kernel page cache pressure on busy nodes can lengthen `prometheus_tsdb_compaction_duration_seconds` without raising failure counters; compare against `linux:disk` queue depth and host steal time. WAL corruption counters sometimes reflect recoverable segments that Prometheus replays after a clean stop, whereas true block corruption often implies missing chunks and requires `promtool tsdb analyze` plus restore-from-object-store procedures; do not conflate the two in executive summaries. Thanos sidecars uploading partially compacted blocks during store-gateway cutovers can create transient TSDB status skew between leaf and global views; validate object-store consistency before muting. During HA failover events where `remote_write` endpoints are repointed, backlog gauges can mask local compaction health; examine leaf self-metrics independently of remote queue depth. False alarms may cluster when a passive replica is promoted and scrapes duplicate self-metrics until service discovery settles; dedupe by `instance` label and enforce maint_window matches from `prometheus_tsdb_inventory.csv`.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Prometheus /api/v1/status/tsdb, prometheus logs).\n• Ensure the following data sources are available: Prometheus TSDB stats, server logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Prometheus `/api/v1/status/tsdb` for head stats, block count, and retention info. Ingest Prometheus server logs (stderr) for compaction-related errors. Parse TSDB head chunk count and series count for anomaly detection. Alert on compaction failure messages or when head series count grows abnormally (potential compaction backlog). Correlate with disk I/O and storage capacity metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=prometheus (sourcetype=\"prometheus:tsdb\" OR sourcetype=\"prometheus:log\")\n| search \"compaction\" AND (\"failed\" OR \"error\" OR \"panic\")\n| stats count, latest(_time) as last_occurrence by host, message\n| eval last_occurrence_human=strftime(last_occurrence,\"%Y-%m-%d %H:%M:%S\")\n| table host, message, count, last_occurrence_human\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Prometheus TSDB Compaction Failures** — Storage engine issues impacting metric retention.\n\nDocumented **Data sources**: Prometheus TSDB stats, server logs. **App/TA** (typical add-on context): Custom (Prometheus /api/v1/status/tsdb, prometheus logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: prometheus; **sourcetype**: prometheus:tsdb, prometheus:log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=prometheus, sourcetype=\"prometheus:tsdb\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **last_occurrence_human** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Prometheus TSDB Compaction Failures**): table host, message, count, last_occurrence_human\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (compaction errors by host), Single value (TSDB health status), Line chart (head series count trend), Bar chart (block count by retention).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "Prometheus stores mountains of tiny measurement slips on disk like a librarian merging folders so old papers stay tidy. When that merging jams because the disk is full, files are damaged, or the pile of labels explodes, graphs can go spotty and alarms may hush; Splunk spots those jams early and points engineers to the right shelf.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "prometheus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.13",
              "n": "Grafana Datasource Health",
              "c": "medium",
              "f": "intermediate",
              "v": "Datasource connectivity and query errors; broken dashboards.",
              "t": "Custom (Grafana API)",
              "d": "Grafana /api/datasources/proxy/:id, health check endpoint",
              "q": "index=grafana sourcetype=\"grafana:datasource\"\n| where status!=\"success\" OR response_time_ms > 3000\n| stats count, avg(response_time_ms) as avg_ms, values(error_message) as errors by datasource_name, datasource_type, host\n| sort -count",
              "m": "Use scripted input or scheduled search to call Grafana `/api/datasources` and `/api/datasources/proxy/:id/health` (or datasource-specific health endpoints). Ingest response status, latency, and error messages. Alternatively, parse Grafana server logs for datasource query errors. Alert when any datasource fails health check or when error rate exceeds threshold. Track which dashboards use each datasource for impact assessment.",
              "z": "Table (unhealthy datasources with errors), Status grid (datasource × status), Single value (healthy datasource count), Line chart (datasource latency trend).",
              "kfp": "Noisy periods after retention, target, or scrape changes. We compare with the Grafana or Prometheus view for the same minutes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Grafana API).\n• Ensure the following data sources are available: Grafana /api/datasources/proxy/:id, health check endpoint.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse scripted input or scheduled search to call Grafana `/api/datasources` and `/api/datasources/proxy/:id/health` (or datasource-specific health endpoints). Ingest response status, latency, and error messages. Alternatively, parse Grafana server logs for datasource query errors. Alert when any datasource fails health check or when error rate exceeds threshold. Track which dashboards use each datasource for impact assessment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grafana sourcetype=\"grafana:datasource\"\n| where status!=\"success\" OR response_time_ms > 3000\n| stats count, avg(response_time_ms) as avg_ms, values(error_message) as errors by datasource_name, datasource_type, host\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Grafana Datasource Health** — Datasource connectivity and query errors; broken dashboards.\n\nDocumented **Data sources**: Grafana /api/datasources/proxy/:id, health check endpoint. **App/TA** (typical add-on context): Custom (Grafana API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grafana; **sourcetype**: grafana:datasource. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grafana, sourcetype=\"grafana:datasource\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status!=\"success\" OR response_time_ms > 3000` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by datasource_name, datasource_type, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unhealthy datasources with errors), Status grid (datasource × status), Single value (healthy datasource count), Line chart (datasource latency trend).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
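              "_splSketch": "A minimal companion sketch, assuming the same index=grafana / sourcetype=grafana:datasource feed and a status field on health-check events: flag datasources whose last successful health check is older than 15 minutes, which also catches silent collection gaps (datasources that never succeeded in the lookback need a separate zero-result check). The 15-minute threshold is illustrative.\n\n```spl\nindex=grafana sourcetype=\"grafana:datasource\" status=\"success\" earliest=-4h\n| stats latest(_time) as last_ok by datasource_name, datasource_type\n| eval minutes_since_ok=round((now()-last_ok)/60, 1)\n| where minutes_since_ok > 15\n| sort -minutes_since_ok\n```",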
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch HTTP data intake so broken or busy endpoints do not starve the rest of our monitoring.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "grafana"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.14",
              "n": "OpenTelemetry Collector Dropped Spans and Metrics",
              "c": "high",
              "f": "intermediate",
              "v": "Collector queue pressure causing data loss.",
              "t": "Custom (OTel Collector metrics)",
              "d": "OTel Collector internal metrics (otelcol_exporter_send_failed_spans, otelcol_processor_dropped_metric_points)",
              "q": "| mstats avg(_value) WHERE index=otel_collector metric_name IN (\"otelcol_exporter_send_failed_spans\", \"otelcol_processor_dropped_metric_points\", \"otelcol_exporter_send_failed_metric_points\") BY metric_name, exporter, processor span=5m\n| where _value > 0\n| timechart span=5m sum(_value) as dropped by metric_name",
              "m": "Scrape OpenTelemetry Collector's internal metrics endpoint (default :8888/metrics) via Prometheus or OTLP. Ingest `otelcol_exporter_send_failed_spans`, `otelcol_processor_dropped_metric_points`, `otelcol_exporter_send_failed_metric_points`, and `otelcol_processor_dropped_spans`. Alert when any dropped/failed count >0 for critical pipelines. Correlate with `otelcol_receiver_accepted_spans` and queue depth metrics to identify backpressure. Tune batch size, retries, or add more collector replicas.",
              "z": "Line chart (dropped spans/metrics over time), Table (dropped by exporter/processor), Single value (total dropped in last hour), Bar chart (dropped by pipeline).",
              "kfp": "Rolling upgrades can reorder DaemonSets so a minority of replicas temporarily run alternate YAML while metrics cardinality shifts; wait for full rollout before paging unless send_failed persists past stabilization. Intentional filter processors raise otelcol_processor_filter_filtered_* during policy changes; join change calendars and filter names before treating filters as incidents. Lab chaos tests that inject 429 responses will trip this control on purpose; tag those windows in a suppression lookup. Scrape loss of the internal metrics endpoint mimics innocence while logs still show pain; monitor scrape targets independently. Some distributions omit otelcol_exporter_sent_* for certain exporters, producing null gap ratios without proving health; rely on send_failed, enqueue_failed, and dead-letter lanes in those builds. Dead-letter sourcetype misconfiguration can under-count failures when file rotation clears evidence quickly; validate retention and permissions. HEC corridor counts are coarse aggregates and may move for reasons unrelated to collectors, such as unrelated applications sharing the same token; treat that arm as corroboration, not sole proof. Partial backend outages may heal mid-window, creating short spikes that clear automatically; require dwell time aligned to error budget policy before waking executives.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (OTel Collector metrics).\n• Ensure the following data sources are available: OTel Collector internal metrics (otelcol_exporter_send_failed_spans, otelcol_processor_dropped_metric_points).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScrape OpenTelemetry Collector's internal metrics endpoint (default :8888/metrics) via Prometheus or OTLP. Ingest `otelcol_exporter_send_failed_spans`, `otelcol_processor_dropped_metric_points`, `otelcol_exporter_send_failed_metric_points`, and `otelcol_processor_dropped_spans`. Alert when any dropped/failed count >0 for critical pipelines. Correlate with `otelcol_receiver_accepted_spans` and queue depth metrics to identify backpressure. Tune batch size, retries, or add more collector replicas.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(_value) WHERE index=otel_collector metric_name IN (\"otelcol_exporter_send_failed_spans\", \"otelcol_processor_dropped_metric_points\", \"otelcol_exporter_send_failed_metric_points\") BY metric_name, exporter, processor span=5m\n| where _value > 0\n| timechart span=5m sum(_value) as dropped by metric_name\n```\n\nUnderstanding this SPL\n\n**OpenTelemetry Collector Dropped Spans and Metrics** — Collector queue pressure causing data loss.\n\nDocumented **Data sources**: OTel Collector internal metrics (otelcol_exporter_send_failed_spans, otelcol_processor_dropped_metric_points). **App/TA** (typical add-on context): Custom (OTel Collector metrics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: otel_collector.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Filters the current rows with `where _value > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by metric_name** — ideal for trending and alerting on this use case.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (dropped spans/metrics over time), Table (dropped by exporter/processor), Single value (total dropped in last hour), Bar chart (dropped by pipeline).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-01",
              "sver": "",
              "rby": "",
              "ge": "We watch the paths that carry traces, metrics, and logs from programs into storage. When a checkpoint quietly throws packages away instead of slowing the line politely, we show which gate and which shipping lane broke so teams fix it before charts look fine while reality is thin.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.15",
              "n": "OpenTelemetry Collector Pipeline Throughput and Backpressure",
              "c": "critical",
              "f": "intermediate",
              "v": "The OTel Collector is a single point of failure for observability data. When exporters can't keep up with receiver intake, the batch processor queue fills and backpressure propagates upstream — first dropping low-priority data, then refusing new data entirely. By the time you notice missing traces in your backend, thousands of spans are already lost. Monitoring pipeline throughput and queue saturation in real-time catches backpressure within minutes, before it becomes data loss.",
              "t": "Splunk Distribution of OpenTelemetry Collector, Prometheus remote write, custom HEC",
              "d": "OTel Collector internal metrics (`otelcol_receiver_accepted_*`, `otelcol_exporter_queue_size`, `otelcol_exporter_queue_capacity`, `otelcol_processor_batch_*`)",
              "q": "| mstats latest(_value) as val WHERE index=otel_metrics metric_name IN (\n    \"otelcol_receiver_accepted_spans\",\n    \"otelcol_receiver_accepted_metric_points\",\n    \"otelcol_receiver_accepted_log_records\",\n    \"otelcol_exporter_queue_size\",\n    \"otelcol_exporter_queue_capacity\",\n    \"otelcol_exporter_sent_spans\",\n    \"otelcol_exporter_sent_metric_points\"\n  ) BY metric_name, service_instance_id, exporter, receiver span=1m\n| eval signal_type=case(\n    match(metric_name, \"spans\"), \"traces\",\n    match(metric_name, \"metric\"), \"metrics\",\n    match(metric_name, \"log\"), \"logs\")\n| eval component=case(\n    match(metric_name, \"receiver\"), \"receiver\",\n    match(metric_name, \"exporter_queue\"), \"queue\",\n    match(metric_name, \"exporter_sent\"), \"exporter\")\n| xyseries _time, metric_name, val\n| eval queue_pct=round('otelcol_exporter_queue_size'*100/'otelcol_exporter_queue_capacity', 1)\n| where queue_pct > 70\n| table _time, service_instance_id, exporter, queue_pct, otelcol_exporter_queue_size, otelcol_exporter_queue_capacity\n| sort -queue_pct",
              "m": "Every OTel Collector instance exposes internal metrics on `:8888/metrics` by default. Scrape these metrics using a second collector or Prometheus federation, then forward to Splunk via OTLP or Prometheus remote write. Key metrics: `otelcol_receiver_accepted_*` (input throughput by signal type), `otelcol_exporter_queue_size` / `queue_capacity` (export queue saturation), `otelcol_exporter_sent_*` (output throughput), and `otelcol_processor_batch_batch_send_size` (batch efficiency). Alert at 70% queue saturation (warning) and 90% (critical). Track the ratio of accepted to sent — divergence indicates data accumulation or loss. Label each collector instance by `service_instance_id` to identify which replica is under pressure. Common remediation: increase queue size, add collector replicas, tune batch processor `send_batch_size` and `timeout`, or reduce exporter concurrency.",
              "z": "Line chart (queue saturation % per collector), Area chart (received vs sent throughput by signal), Gauge (peak queue saturation), Table (collectors above 70% queue).",
              "kfp": "Chaos load tests and fault injection drills that deliberately throttle exporters can saturate queues and raise critical severities without customer impact; annotate drills in change calendars before muting. Planned Splunk Cloud or Observability Cloud ingest tier upgrades may shift latency envelopes; freeze thresholds during vendor maintenance windows and compare against published status pages. Intentional gateway throttling during cost-governance events can elevate refused counters even when applications are healthy; require finance-approved runbook links before paging service owners. Development clusters with idle traffic may produce noisy low-rate refused spikes when collectors recycle connections; gate non-production envs with stricter filters. Canary deployments that sample traces aggressively can drop trace fractions that resemble processor drops until sampling policies stabilize; correlate with deployment metadata. Short bursts during certificate rotations may elevate exporter_failed_rate for minutes; overlap certificate validity windows to avoid false critical pages. Kubernetes node drains produce ephemeral CPU spikes on collector DaemonSets; pair with node cordon timelines before declaring incidents. Mis-labeled pods that match otel substrings but are not collectors can skew kube_pod lanes; tighten pod name filters when false positives recur. Stale otel_collector_inventory.csv rows that omit decommissioned collectors can mis-route ownership; refresh lookups weekly. Duplicate scraping of internal metrics from two Prometheus jobs can double counts until dedupe rules land; watch for sudden step changes after config edits.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector, Prometheus remote write, custom HEC.\n• Ensure the following data sources are available: OTel Collector internal metrics (`otelcol_receiver_accepted_*`, `otelcol_exporter_queue_size`, `otelcol_exporter_queue_capacity`, `otelcol_processor_batch_*`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEvery OTel Collector instance exposes internal metrics on `:8888/metrics` by default. Scrape these metrics using a second collector or Prometheus federation, then forward to Splunk via OTLP or Prometheus remote write. Key metrics: `otelcol_receiver_accepted_*` (input throughput by signal type), `otelcol_exporter_queue_size` / `queue_capacity` (export queue saturation), `otelcol_exporter_sent_*` (output throughput), and `otelcol_processor_batch_batch_send_size` (batch efficiency). Alert at 70% qu…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats latest(_value) as val WHERE index=otel_metrics metric_name IN (\n    \"otelcol_receiver_accepted_spans\",\n    \"otelcol_receiver_accepted_metric_points\",\n    \"otelcol_receiver_accepted_log_records\",\n    \"otelcol_exporter_queue_size\",\n    \"otelcol_exporter_queue_capacity\",\n    \"otelcol_exporter_sent_spans\",\n    \"otelcol_exporter_sent_metric_points\"\n  ) BY metric_name, service_instance_id, exporter, receiver span=1m\n| eval signal_type=case(\n    match(metric_name, \"spans\"), \"traces\",\n    match(metric_name, \"metric\"), \"metrics\",\n    match(metric_name, \"log\"), \"logs\")\n| eval component=case(\n    match(metric_name, \"receiver\"), \"receiver\",\n    match(metric_name, \"exporter_queue\"), \"queue\",\n    match(metric_name, \"exporter_sent\"), \"exporter\")\n| xyseries _time, metric_name, val\n| eval queue_pct=round('otelcol_exporter_queue_size'*100/'otelcol_exporter_queue_capacity', 1)\n| where queue_pct > 70\n| table _time, service_instance_id, exporter, queue_pct, otelcol_exporter_queue_size, otelcol_exporter_queue_capacity\n| sort -queue_pct\n```\n\nUnderstanding this SPL\n\n**OpenTelemetry Collector Pipeline Throughput and Backpressure** — The OTel Collector is a single point of failure for observability data. When exporters can't keep up with receiver intake, the batch processor queue fills and backpressure propagates upstream — first dropping low-priority data, then refusing new data entirely. By the time you notice missing traces in your backend, thousands of spans are already lost. Monitoring pipeline throughput and queue saturation in real-time catches backpressure within minutes, before it becomes data…\n\nDocumented **Data sources**: OTel Collector internal metrics (`otelcol_receiver_accepted_*`, `otelcol_exporter_queue_size`, `otelcol_exporter_queue_capacity`, `otelcol_processor_batch_*`). **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector, Prometheus remote write, custom HEC. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: otel_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **signal_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **component** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pivots fields for charting with `xyseries`.\n• `eval` defines or adjusts **queue_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where queue_pct > 70` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OpenTelemetry Collector Pipeline Throughput and Backpressure**): table _time, service_instance_id, exporter, queue_pct, otelcol_exporter_queue_size, otelcol_exporter_queue_capacity\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (queue saturation % per collector), Area chart (received vs sent throughput by signal), Gauge (peak queue saturation), Table (collectors above 70% queue).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch factory belts that package measurements into boxes, move them through labeling stations, and load trucks to a warehouse. When a truck door rejects loads or a sorter jams, belts back up, sensors refuse new boxes, and foremen throttle intake so the hall does not overflow. We catch those jams early so crews open lanes, resize batches, or add gateways before anything gets quietly thrown away.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry",
                "prometheus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.16",
              "n": "OpenTelemetry Collector Memory and CPU Utilization",
              "c": "high",
              "f": "intermediate",
              "v": "OTel Collectors are Go processes that can OOM-kill under sustained load or memory leaks, especially with processors like `groupbytrace` or `tail_sampling` that hold data in memory. A killed collector creates a gap in observability data exactly when you need it most — during incidents. Tracking heap allocation growth patterns and CPU utilization catches runaway collectors before the OOM killer does.",
              "t": "Splunk Distribution of OpenTelemetry Collector",
              "d": "OTel Collector process metrics (`process_runtime_*`, `runtime.uptime`)",
              "q": "| mstats latest(_value) as val WHERE index=otel_metrics metric_name IN (\n    \"process_runtime_total_alloc_bytes\",\n    \"process_runtime_heap_alloc_bytes\",\n    \"process_cpu_seconds\",\n    \"runtime.uptime\"\n  ) BY metric_name, service_instance_id span=5m\n| eval metric_short=replace(metric_name, \"process_runtime_|process_\", \"\")\n| xyseries _time, metric_short, val\n| eval heap_mb=round(heap_alloc_bytes/1048576, 1)\n| streamstats window=6 avg(heap_mb) as avg_heap_mb by service_instance_id\n| eval heap_growth_pct=round((heap_mb - avg_heap_mb)*100/avg_heap_mb, 1)\n| where heap_mb > 512 OR heap_growth_pct > 30\n| table _time, service_instance_id, heap_mb, avg_heap_mb, heap_growth_pct\n| sort -heap_mb",
              "m": "OTel Collectors emit Go runtime metrics by default: `process_runtime_heap_alloc_bytes` (current heap usage), `process_runtime_total_alloc_bytes` (cumulative allocations), `process_cpu_seconds` (CPU time), and `runtime.uptime` (time since start). Ingest via the collector's internal metrics pipeline. Set memory limits using `memory_limiter` processor in collector config (recommended: 80% of container memory limit). Alert when heap exceeds 512 MB (adjust based on deployment sizing) or shows >30% growth over 30-minute rolling average. Short uptimes (<5 minutes) combined with high allocation rates indicate crash-restart loops. Correlate with Kubernetes pod restarts (UC-3.2.1) if collectors run as DaemonSets. Track CPU utilization against collector pod resource requests to identify under-provisioned instances.",
              "z": "Line chart (heap MB per collector over 24 hours), Area chart (CPU seconds rate), Table (collectors exceeding memory threshold), Single value (collector with highest heap).",
              "kfp": "Collector errors during config changes, target restarts, or transport and credential issues. We read collector logs and compare with the OTel pipeline view.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector.\n• Ensure the following data sources are available: OTel Collector process metrics (`process_runtime_*`, `runtime.uptime`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nOTel Collectors emit Go runtime metrics by default: `process_runtime_heap_alloc_bytes` (current heap usage), `process_runtime_total_alloc_bytes` (cumulative allocations), `process_cpu_seconds` (CPU time), and `runtime.uptime` (time since start). Ingest via the collector's internal metrics pipeline. Set memory limits using `memory_limiter` processor in collector config (recommended: 80% of container memory limit). Alert when heap exceeds 512 MB (adjust based on deployment sizing) or shows >30% gr…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats latest(_value) as val WHERE index=otel_metrics metric_name IN (\n    \"process_runtime_total_alloc_bytes\",\n    \"process_runtime_heap_alloc_bytes\",\n    \"process_cpu_seconds\",\n    \"runtime.uptime\"\n  ) BY metric_name, service_instance_id span=5m\n| eval metric_short=replace(metric_name, \"process_runtime_|process_\", \"\")\n| xyseries _time, metric_short, val\n| eval heap_mb=round(heap_alloc_bytes/1048576, 1)\n| streamstats window=6 avg(heap_mb) as avg_heap_mb by service_instance_id\n| eval heap_growth_pct=round((heap_mb - avg_heap_mb)*100/avg_heap_mb, 1)\n| where heap_mb > 512 OR heap_growth_pct > 30\n| table _time, service_instance_id, heap_mb, avg_heap_mb, heap_growth_pct\n| sort -heap_mb\n```\n\nUnderstanding this SPL\n\n**OpenTelemetry Collector Memory and CPU Utilization** — OTel Collectors are Go processes that can OOM-kill under sustained load or memory leaks, especially with processors like `groupbytrace` or `tail_sampling` that hold data in memory. A killed collector creates a gap in observability data exactly when you need it most — during incidents. Tracking heap allocation growth patterns and CPU utilization catches runaway collectors before the OOM killer does.\n\nDocumented **Data sources**: OTel Collector process metrics (`process_runtime_*`, `runtime.uptime`). **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: otel_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **metric_short** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pivots fields for charting with `xyseries`.\n• `eval` defines or adjusts **heap_mb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `streamstats` rolls up events into metrics; results are split **by service_instance_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **heap_growth_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where heap_mb > 512 OR heap_growth_pct > 30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OpenTelemetry Collector Memory and CPU Utilization**): table _time, service_instance_id, heap_mb, avg_heap_mb, heap_growth_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (heap MB per collector over 24 hours), Area chart (CPU seconds rate), Table (collectors exceeding memory threshold), Single value (collector with highest heap).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We connect metrics, alerts, and external collectors into our analysis so we can see coverage, backlog, and how work flows from probe to on-call in one place.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.17",
              "n": "OpenTelemetry Collector Configuration Drift Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "In a fleet of dozens or hundreds of OTel Collector instances (DaemonSets, gateway deployments, agent sidecars), configuration inconsistency causes silent observability failures. One collector running a stale config may be missing a processor, dropping a signal type, or exporting to a decommissioned endpoint — while appearing healthy. Configuration drift detection ensures every collector runs the intended config after rollouts, preventing partial observability gaps.",
              "t": "Splunk Distribution of OpenTelemetry Collector, Kubernetes ConfigMap tracking",
              "d": "OTel Collector config hash (custom metric or log), `sourcetype=kube:events` (ConfigMap updates)",
              "q": "index=otel_metrics sourcetype=\"otel:collector:info\"\n| stats latest(config_hash) as config_hash, latest(collector_version) as version, latest(_time) as last_seen by service_instance_id, host, k8s_namespace\n| eventstats dc(config_hash) as unique_configs, mode(config_hash) as expected_hash\n| eval drifted=if(config_hash!=expected_hash, \"Yes\", \"No\")\n| eval stale_hours=round((now()-last_seen)/3600, 1)\n| where drifted=\"Yes\" OR stale_hours > 1\n| table service_instance_id, host, k8s_namespace, config_hash, expected_hash, drifted, version, stale_hours\n| sort drifted, -stale_hours",
              "m": "Add a custom processor or extension to each collector that computes a SHA-256 hash of the active configuration and emits it as a metric attribute or log event on startup and at regular intervals (every 5 minutes). Alternatively, use the `zpages` extension to expose config and scrape it. Store the expected config hash in a KV store lookup, updated when deployments roll out. Compare each collector's reported hash against the expected value. Alert when any collector reports a different hash after a rollout window (30 minutes). Also detect stale collectors that haven't reported recently — these may have crashed without restarting. For Kubernetes deployments, correlate with ConfigMap update events to verify that DaemonSet pods restarted after config changes.",
              "z": "Single value (collectors with drift), Table (drifted instances with config hash comparison), Pie chart (config version distribution), Timeline (config rollout events).",
              "kfp": "Collector errors during config changes, target restarts, or transport and credential issues. We read collector logs and compare with the OTel pipeline view.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector, Kubernetes ConfigMap tracking.\n• Ensure the following data sources are available: OTel Collector config hash (custom metric or log), `sourcetype=kube:events` (ConfigMap updates).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAdd a custom processor or extension to each collector that computes a SHA-256 hash of the active configuration and emits it as a metric attribute or log event on startup and at regular intervals (every 5 minutes). Alternatively, use the `zpages` extension to expose config and scrape it. Store the expected config hash in a KV store lookup, updated when deployments roll out. Compare each collector's reported hash against the expected value. Alert when any collector reports a different hash after a…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=otel_metrics sourcetype=\"otel:collector:info\"\n| stats latest(config_hash) as config_hash, latest(collector_version) as version, latest(_time) as last_seen by service_instance_id, host, k8s_namespace\n| eventstats dc(config_hash) as unique_configs, mode(config_hash) as expected_hash\n| eval drifted=if(config_hash!=expected_hash, \"Yes\", \"No\")\n| eval stale_hours=round((now()-last_seen)/3600, 1)\n| where drifted=\"Yes\" OR stale_hours > 1\n| table service_instance_id, host, k8s_namespace, config_hash, expected_hash, drifted, version, stale_hours\n| sort drifted, -stale_hours\n```\n\nUnderstanding this SPL\n\n**OpenTelemetry Collector Configuration Drift Detection** — In a fleet of dozens or hundreds of OTel Collector instances (DaemonSets, gateway deployments, agent sidecars), configuration inconsistency causes silent observability failures. One collector running a stale config may be missing a processor, dropping a signal type, or exporting to a decommissioned endpoint — while appearing healthy. Configuration drift detection ensures every collector runs the intended config after rollouts, preventing partial observability gaps.\n\nDocumented **Data sources**: OTel Collector config hash (custom metric or log), `sourcetype=kube:events` (ConfigMap updates). **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector, Kubernetes ConfigMap tracking. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: otel_metrics; **sourcetype**: otel:collector:info. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=otel_metrics, sourcetype=\"otel:collector:info\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by service_instance_id, host, k8s_namespace** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• `eval` defines or adjusts **drifted** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **stale_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drifted=\"Yes\" OR stale_hours > 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OpenTelemetry Collector Configuration Drift Detection**): table service_instance_id, host, k8s_namespace, config_hash, expected_hash, drifted, version, stale_hours\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (collectors with drift), Table (drifted instances with config hash comparison), Pie chart (config version distribution), Timeline (config rollout events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many app copies are running as traffic and policy change so a surprise scale event does not go unnoticed.",
              "mtype": [
                "Compliance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.18",
              "n": "OpenTelemetry Receiver Health by Signal Type",
              "c": "high",
              "f": "intermediate",
              "v": "Each OTel Collector receiver (OTLP, Jaeger, Prometheus, filelog, hostmetrics) independently accepts or refuses data by signal type (traces, metrics, logs). A receiver that starts refusing spans while accepting metrics indicates endpoint-specific authentication failures, protocol mismatches, or resource exhaustion on a single pipeline. Per-receiver, per-signal health monitoring pinpoints exactly which instrumentation endpoint is broken, reducing MTTR from \"something is wrong with observability\" to \"the Jaeger receiver on collector-5 is refusing spans due to gRPC auth errors.\"",
              "t": "Splunk Distribution of OpenTelemetry Collector",
              "d": "OTel Collector internal metrics (`otelcol_receiver_accepted_*`, `otelcol_receiver_refused_*`)",
              "q": "| mstats latest(_value) as val WHERE index=otel_metrics metric_name IN (\n    \"otelcol_receiver_accepted_spans\",\n    \"otelcol_receiver_refused_spans\",\n    \"otelcol_receiver_accepted_metric_points\",\n    \"otelcol_receiver_refused_metric_points\",\n    \"otelcol_receiver_accepted_log_records\",\n    \"otelcol_receiver_refused_log_records\"\n  ) BY metric_name, receiver, transport, service_instance_id span=5m\n| eval signal=case(match(metric_name,\"spans\"),\"traces\", match(metric_name,\"metric\"),\"metrics\", match(metric_name,\"log\"),\"logs\")\n| eval status=if(match(metric_name,\"refused\"), \"refused\", \"accepted\")\n| stats sum(val) as total by _time, receiver, signal, status, service_instance_id\n| xyseries _time, status, total\n| eval refuse_pct=round(refused*100/(accepted+refused), 2)\n| where refused > 0\n| table _time, receiver, signal, service_instance_id, accepted, refused, refuse_pct\n| sort -refuse_pct",
              "m": "OTel Collector emits `otelcol_receiver_accepted_*` and `otelcol_receiver_refused_*` metrics per receiver and transport (grpc, http). Refused data indicates the collector rejected incoming telemetry — causes include: authentication failure, payload too large, receiver shutting down, or unsupported format. Alert when refuse rate exceeds 0% for any receiver/signal combination (any refusal is abnormal). Correlate with collector logs for the specific error reason. Track accepted counts to detect instrumentation gaps: if a receiver that normally accepts 10K spans/minute suddenly drops to zero, the upstream service likely lost connectivity. Build a receiver health matrix showing each receiver × signal type × collector instance status for fleet-wide visibility.",
              "z": "Heatmap (receiver × signal health status), Line chart (accepted vs refused per receiver), Table (receivers with refusals), Single value (total active receivers).",
              "kfp": "Collector errors during config changes, target restarts, or transport and credential issues. We read collector logs and compare with the OTel pipeline view.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector.\n• Ensure the following data sources are available: OTel Collector internal metrics (`otelcol_receiver_accepted_*`, `otelcol_receiver_refused_*`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nOTel Collector emits `otelcol_receiver_accepted_*` and `otelcol_receiver_refused_*` metrics per receiver and transport (grpc, http). Refused data indicates the collector rejected incoming telemetry — causes include: authentication failure, payload too large, receiver shutting down, or unsupported format. Alert when refuse rate exceeds 0% for any receiver/signal combination (any refusal is abnormal). Correlate with collector logs for the specific error reason. Track accepted counts to detect inst…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats latest(_value) as val WHERE index=otel_metrics metric_name IN (\n    \"otelcol_receiver_accepted_spans\",\n    \"otelcol_receiver_refused_spans\",\n    \"otelcol_receiver_accepted_metric_points\",\n    \"otelcol_receiver_refused_metric_points\",\n    \"otelcol_receiver_accepted_log_records\",\n    \"otelcol_receiver_refused_log_records\"\n  ) BY metric_name, receiver, transport, service_instance_id span=5m\n| eval signal=case(match(metric_name,\"spans\"),\"traces\", match(metric_name,\"metric\"),\"metrics\", match(metric_name,\"log\"),\"logs\")\n| eval status=if(match(metric_name,\"refused\"), \"refused\", \"accepted\")\n| stats sum(val) as total by _time, receiver, signal, status, service_instance_id\n| xyseries _time, status, total\n| eval refuse_pct=round(refused*100/(accepted+refused), 2)\n| where refused > 0\n| table _time, receiver, signal, service_instance_id, accepted, refused, refuse_pct\n| sort -refuse_pct\n```\n\nUnderstanding this SPL\n\n**OpenTelemetry Receiver Health by Signal Type** — Each OTel Collector receiver (OTLP, Jaeger, Prometheus, filelog, hostmetrics) independently accepts or refuses data by signal type (traces, metrics, logs). A receiver that starts refusing spans while accepting metrics indicates endpoint-specific authentication failures, protocol mismatches, or resource exhaustion on a single pipeline. Per-receiver, per-signal health monitoring pinpoints exactly which instrumentation endpoint is broken, reducing MTTR from \"something is wrong…\n\nDocumented **Data sources**: OTel Collector internal metrics (`otelcol_receiver_accepted_*`, `otelcol_receiver_refused_*`). **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: otel_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **signal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by _time, receiver, signal, status, service_instance_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pivots fields for charting with `xyseries`.\n• `eval` defines or adjusts **refuse_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where refused > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OpenTelemetry Receiver Health by Signal Type**): table _time, receiver, signal, service_instance_id, accepted, refused, refuse_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (receiver × signal health status), Line chart (accepted vs refused per receiver), Table (receivers with refusals), Single value (total active receivers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We connect metrics, alerts, and external collectors into our analysis so we can see coverage, backlog, and how work flows from probe to on-call in one place.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.3.19",
              "n": "OpenTelemetry Exporter Retry and Timeout Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "OTel Collector exporters retry failed sends with exponential backoff. Persistent retry failures indicate backend unavailability, authentication expiry, or network partitions. Timeouts indicate backend performance degradation that slows the entire pipeline. Monitoring retry counts and timeout rates per exporter and destination prevents the cascade where a slow backend fills queues, triggers backpressure, and ultimately causes data loss across all signals — not just the one with the failing backend.",
              "t": "Splunk Distribution of OpenTelemetry Collector",
              "d": "OTel Collector internal metrics (`otelcol_exporter_send_failed_*`, `otelcol_exporter_sent_*`), collector logs",
              "q": "| mstats sum(_value) as val WHERE index=otel_metrics metric_name IN (\n    \"otelcol_exporter_send_failed_spans\",\n    \"otelcol_exporter_send_failed_metric_points\",\n    \"otelcol_exporter_send_failed_log_records\",\n    \"otelcol_exporter_sent_spans\",\n    \"otelcol_exporter_sent_metric_points\",\n    \"otelcol_exporter_sent_log_records\"\n  ) BY metric_name, exporter, service_instance_id span=5m\n| eval signal=case(match(metric_name,\"spans\"),\"traces\", match(metric_name,\"metric\"),\"metrics\", match(metric_name,\"log\"),\"logs\")\n| eval status=if(match(metric_name,\"failed\"), \"failed\", \"sent\")\n| stats sum(val) as total by _time, exporter, signal, status, service_instance_id\n| xyseries _time, status, total\n| eval failure_pct=round(failed*100/nullif(sent+failed, 0), 2)\n| where failed > 0\n| table _time, exporter, signal, service_instance_id, sent, failed, failure_pct\n| sort -failure_pct",
              "m": "OTel Collector exporters emit `otelcol_exporter_send_failed_*` counters per signal type and exporter name. These increment on each failed send attempt (including retries). Track failure percentage per exporter: sustained failures above 1% indicate a backend problem requiring investigation. Check collector logs for specific error codes: 429 (rate limited), 503 (backend overloaded), context deadline exceeded (timeout), and TLS handshake failures. For Splunk HEC exporters, correlate with HEC endpoint health (UC-13.1.12). For OTLP exporters, verify the receiving collector or backend is healthy. Alert when any exporter shows failures for more than 10 consecutive minutes. Monitor the retry queue: exporters with `retry_on_failure` enabled accumulate data during transient failures, but persistent failures eventually hit `max_elapsed_time` and data is dropped permanently.",
              "z": "Line chart (failure rate per exporter over 24 hours), Table (exporters with active failures), Bar chart (failures by signal type and exporter), Single value (exporters currently healthy vs failing).",
              "kfp": "Collector errors during config changes, target restarts, or transport and credential issues. We read collector logs and compare with the OTel pipeline view.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector.\n• Ensure the following data sources are available: OTel Collector internal metrics (`otelcol_exporter_send_failed_*`, `otelcol_exporter_sent_*`), collector logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nOTel Collector exporters emit `otelcol_exporter_send_failed_*` counters per signal type and exporter name. These increment on each failed send attempt (including retries). Track failure percentage per exporter: sustained failures above 1% indicate a backend problem requiring investigation. Check collector logs for specific error codes: 429 (rate limited), 503 (backend overloaded), context deadline exceeded (timeout), and TLS handshake failures. For Splunk HEC exporters, correlate with HEC endpoi…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats sum(_value) as val WHERE index=otel_metrics metric_name IN (\n    \"otelcol_exporter_send_failed_spans\",\n    \"otelcol_exporter_send_failed_metric_points\",\n    \"otelcol_exporter_send_failed_log_records\",\n    \"otelcol_exporter_sent_spans\",\n    \"otelcol_exporter_sent_metric_points\",\n    \"otelcol_exporter_sent_log_records\"\n  ) BY metric_name, exporter, service_instance_id span=5m\n| eval signal=case(match(metric_name,\"spans\"),\"traces\", match(metric_name,\"metric\"),\"metrics\", match(metric_name,\"log\"),\"logs\")\n| eval status=if(match(metric_name,\"failed\"), \"failed\", \"sent\")\n| stats sum(val) as total by _time, exporter, signal, status, service_instance_id\n| xyseries _time, status, total\n| eval failure_pct=round(failed*100/nullif(sent+failed, 0), 2)\n| where failed > 0\n| table _time, exporter, signal, service_instance_id, sent, failed, failure_pct\n| sort -failure_pct\n```\n\nUnderstanding this SPL\n\n**OpenTelemetry Exporter Retry and Timeout Monitoring** — OTel Collector exporters retry failed sends with exponential backoff. Persistent retry failures indicate backend unavailability, authentication expiry, or network partitions. Timeouts indicate backend performance degradation that slows the entire pipeline. Monitoring retry counts and timeout rates per exporter and destination prevents the cascade where a slow backend fills queues, triggers backpressure, and ultimately causes data loss across all signals — not just the one…\n\nDocumented **Data sources**: OTel Collector internal metrics (`otelcol_exporter_send_failed_*`, `otelcol_exporter_sent_*`), collector logs. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: otel_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **signal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by _time, exporter, signal, status, service_instance_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pivots fields for charting with `xyseries`.\n• `eval` defines or adjusts **failure_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failed > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OpenTelemetry Exporter Retry and Timeout Monitoring**): table _time, exporter, signal, service_instance_id, sent, failed, failure_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failure rate per exporter over 24 hours), Table (exporters with active failures), Bar chart (failures by signal type and exporter), Single value (exporters currently healthy vs failing).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We connect metrics, alerts, and external collectors into our analysis so we can see coverage, backlog, and how work flows from probe to on-call in one place.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.8,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 19,
            "none": 0
          }
        },
        {
          "i": "13.4",
          "n": "AI & LLM Observability",
          "u": [
            {
              "i": "13.4.1",
              "n": "LLM API Latency and Error Rate (OpenAI, Azure OpenAI)",
              "c": "high",
              "f": "intermediate",
              "v": "High latency or error rates on managed LLM endpoints directly impact user experience and SLOs; tracking both by model and region isolates provider issues versus client misuse.",
              "t": "Azure Monitor Add-on, custom OpenAI proxy logs, HEC",
              "d": "`sourcetype=openai:api`, `sourcetype=azure:openai`",
              "q": "index=ai_ops (sourcetype=\"openai:api\" OR sourcetype=\"azure:openai\")\n| eval failed=if(status>=400 OR isnotnull(error_code),1,0)\n| timechart span=5m avg(latency_ms) as avg_latency p99(latency_ms) as p99_latency sum(failed) as errors count as calls\n| eval error_rate_pct=round(100*errors/calls,2)",
              "m": "Ingest REST proxy or provider diagnostic logs with HTTP status, `latency_ms`, model id, deployment, and region. Normalize field names across OpenAI and Azure OpenAI. Alert when p99 latency or error rate exceeds baselines. Track 429 separately if you manage quota.",
              "z": "Line chart (latency p99, error rate), Single value (SLO burn), Table (top failing models/regions).",
              "kfp": "Canary traffic and synthetic probes routinely produce micro-bursts of 429 responses during quota calibration; require correlation with real tenant traffic before paging executives. Approved blue-green gateway swaps can duplicate requests for a short interval, inflating request_count without implying user-visible errors; compare trace IDs and dedupe keys. Azure diagnostic ingestion lag during Event Hub throttling can delay 5xx evidence minutes behind live provider dashboards; verify pipeline health before declaring false negatives. Apigee or API Management sampling on analytics exports may under-represent rare 5xx tails; tighten sampling for LLM routes or supplement with full-fidelity invoke logs for SLO-bound paths. Content moderation blocks that return vendor-specific 4xx codes can look like application bugs until message text is reviewed under policy. FinOps price lookup staleness in llm_pricing can mislabel cost regressions when list prices change mid-month; refresh lookups weekly. Multi-region active-active setups may show elevated latency on secondary regions by design; join region to routing policy lookups before severity escalation. Broken clock skew between collectors and providers distorts bucket alignment; monitor NTP separately. Agent frameworks that retry aggressively on 408 timeouts can spike error_rate without provider faults; inspect retry budgets. Embedding batch jobs that share deployments with interactive chat can raise p95_ms during nightly ETL; schedule segregation or downgrade non-production severity using environment tags.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Azure Monitor Add-on, custom OpenAI proxy logs, HEC.\n• Ensure the following data sources are available: `sourcetype=openai:api`, `sourcetype=azure:openai`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest REST proxy or provider diagnostic logs with HTTP status, `latency_ms`, model id, deployment, and region. Normalize field names across OpenAI and Azure OpenAI. Alert when p99 latency or error rate exceeds baselines. Track 429 separately if you manage quota.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_ops (sourcetype=\"openai:api\" OR sourcetype=\"azure:openai\")\n| eval failed=if(status>=400 OR isnotnull(error_code),1,0)\n| timechart span=5m avg(latency_ms) as avg_latency p99(latency_ms) as p99_latency sum(failed) as errors count as calls\n| eval error_rate_pct=round(100*errors/calls,2)\n```\n\nUnderstanding this SPL\n\n**LLM API Latency and Error Rate (OpenAI, Azure OpenAI)** — High latency or error rates on managed LLM endpoints directly impact user experience and SLOs; tracking both by model and region isolates provider issues versus client misuse.\n\nDocumented **Data sources**: `sourcetype=openai:api`, `sourcetype=azure:openai`. **App/TA** (typical add-on context): Azure Monitor Add-on, custom OpenAI proxy logs, HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_ops; **sourcetype**: openai:api, azure:openai. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_ops, sourcetype=\"openai:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **failed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=5m** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **error_rate_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency p99, error rate), Single value (SLO burn), Table (top failing models/regions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "Think of each AI model call like a courier route to a busy translation office. We measure how long deliveries take, how often they bounce back refused, and whether a whole neighborhood suddenly waits longer, so leaders see trouble before customers feel it.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.2",
              "n": "Token Usage and Cost per Model and Application",
              "c": "medium",
              "f": "intermediate",
              "v": "Token spend ties directly to budget and chargeback; per-application and per-model views prevent surprise bills and highlight inefficient prompts or runaway automation.",
              "t": "Custom billing export, OpenAI Usage API → HEC",
              "d": "`sourcetype=openai:api`, `sourcetype=azure:openai`",
              "q": "index=ai_ops (sourcetype=\"openai:api\" OR sourcetype=\"azure:openai\")\n| eval prompt_tokens=coalesce(prompt_tokens,0), completion_tokens=coalesce(completion_tokens,0)\n| eval total_tokens=prompt_tokens+completion_tokens\n| eval est_cost_usd=round((total_tokens/1000)*coalesce(price_per_1k_tokens,0.002),4)\n| stats sum(total_tokens) as tokens sum(est_cost_usd) as cost_usd by app_id, model\n| sort -cost_usd",
              "m": "Ensure each request carries `app_id` or API key alias. Ingest usage records with token counts; join a lookup table for price per model per 1k tokens (refresh monthly). Schedule daily cost reports and threshold alerts for tenants or apps.",
              "z": "Bar chart (cost by app), Treemap (cost by model), Table (tokens and cost detail).",
              "kfp": "Scheduled batch backfill jobs that replay historical chats through a gateway can spike token totals for a day without implying live customer abuse; require change_ticket_id correlation before paging product owners. Fine-tuning evaluation passes that sweep many prompts against a teacher model inflate completion tokens predictably; join registry flags that mark eval suites as non-production spend. Planned migration onto a larger model class with written budget approval will raise est_cost_usd_1h while dq_pressure stays controlled; verify CAB records and FinOps approval artifacts. Model-deprecation re-encoding campaigns that rewrite corpora into new tokenizer families can multiply prompt tokens briefly; treat as benign when catalog notes a deprecation window. Embeddings re-index after a documentation refresh can dominate token volume while chat traffic is quiet; separate embedding routes in gateway metadata to avoid misrouting on-call. Reserved capacity or provisioned throughput purchases can change effective marginal price without changing token counters; refresh llm_model_pricing.csv when contracts renew. Multimodal attachments counted under provider-specific image or audio token rules may look like prompt bloat until you align tokenizer documentation with the emitted usage fields. Synthetic canary traffic that intentionally exercises premium models for latency checks will move cost needles; tag canary tenants and downgrade non-production severities. Azure OpenAI diagnostic lag during Event Hub throttling can delay billing arms minutes behind live consoles; confirm pipeline health before declaring false negatives. Duplicate HEC submissions from blue-green forwarders during cutovers can double-count tokens until dedupe keys land; compare with vendor usage dashboards.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom billing export, OpenAI Usage API → HEC.\n• Ensure the following data sources are available: `sourcetype=openai:api`, `sourcetype=azure:openai`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure each request carries `app_id` or API key alias. Ingest usage records with token counts; join a lookup table for price per model per 1k tokens (refresh monthly). Schedule daily cost reports and threshold alerts for tenants or apps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_ops (sourcetype=\"openai:api\" OR sourcetype=\"azure:openai\")\n| eval prompt_tokens=coalesce(prompt_tokens,0), completion_tokens=coalesce(completion_tokens,0)\n| eval total_tokens=prompt_tokens+completion_tokens\n| eval est_cost_usd=round((total_tokens/1000)*coalesce(price_per_1k_tokens,0.002),4)\n| stats sum(total_tokens) as tokens sum(est_cost_usd) as cost_usd by app_id, model\n| sort -cost_usd\n```\n\nUnderstanding this SPL\n\n**Token Usage and Cost per Model and Application** — Token spend ties directly to budget and chargeback; per-application and per-model views prevent surprise bills and highlight inefficient prompts or runaway automation.\n\nDocumented **Data sources**: `sourcetype=openai:api`, `sourcetype=azure:openai`. **App/TA** (typical add-on context): Custom billing export, OpenAI Usage API → HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_ops; **sourcetype**: openai:api, azure:openai. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_ops, sourcetype=\"openai:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **prompt_tokens** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **total_tokens** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **est_cost_usd** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by app_id, model** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (cost by app), Treemap (cost by model), Table (tokens and cost detail).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch one big apartment tower with a shared electric meter and a shared water main. Every flat runs lights, kettles, washers, and hoses, and we meter each room so one renter cannot hide an industrial pump while everyone else pays the surprise bill. When a tap goes wild, we see which flat and which hourly spike caused it before the super has to guess who to visit.",
              "mtype": [
                "Capacity",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.3",
              "n": "GPU and TPU Utilization for Inference Workloads",
              "c": "high",
              "f": "intermediate",
              "v": "Underutilized accelerators waste capex; saturated GPUs increase queue time and latency for inference—utilization guides autoscaling and instance right-sizing.",
              "t": "Splunk OTel Collector, NVIDIA DCGM exporter, Kubernetes metrics",
              "d": "`sourcetype=otel:metrics`",
              "q": "index=infra sourcetype=\"otel:metrics\" (metric_name=\"gpu.utilization\" OR metric_name=\"dcgm.gpu.utilization\")\n| bin _time span=1m\n| stats avg(value) as gpu_util by _time, host, gpu_id\n| where gpu_util > 90 OR gpu_util < 15\n| timechart span=15m avg(gpu_util) by host",
              "m": "Scrape DCGM or cloud TPU metrics via OTel Prometheus receiver. Tag by cluster, pool, and model deployment. Alert on sustained high utilization (queue risk) or chronic low utilization (oversized nodes). Correlate with inference request rate from gateway logs if present.",
              "z": "Timechart (GPU %), Heatmap (GPU × host), Single value (cluster avg utilization).",
              "kfp": "Chaos engineering or game-day exercises that drain inference traffic without scaling nodes down can show sustained low sm_busy_ma while finance still books hourly nodes; require change metadata and experiment tags before paging owners. Scheduled batch warming jobs that preload weights into VRAM may spike mem_used_pct while sm_busy_ma stays moderate until requests arrive; treat as benign when catalog marks the workload as warm-cache. Development and sandbox clusters intentionally park idle accelerators for researcher notebooks; downgrade severity using env labels in gpu_inventory rather than muting production logic. Model bake-off A/B routes that pin small canary fractions to large GPU pools can look under-utilised until traffic weight shifts; correlate with gateway weight tables. Evaluation harnesses that run intentionally slow single-batch precision tests will drag latency_p95_ms without implying customer traffic regression; join model_name to registry eval_suite flags when available. MIG reconfiguration windows can temporarily double-count UUID labels across old and new profiles until scrapes settle; verify mig_uuid stability across two buckets. Cloud monitoring ingestion lag during regional incidents can delay duty-cycle arms minutes behind live consoles; confirm pipeline health before declaring false negatives. ROCm or DCGM exporter upgrades restart sidecars and produce empty mstats slices for one interval; use streamstats dwell before critical pages. Neuron compiler version skew can emit misleading vCPU usage spikes during graph captures that self-resolve; inspect Neuron logs alongside metrics.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OTel Collector, NVIDIA DCGM exporter, Kubernetes metrics.\n• Ensure the following data sources are available: `sourcetype=otel:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScrape DCGM or cloud TPU metrics via OTel Prometheus receiver. Tag by cluster, pool, and model deployment. Alert on sustained high utilization (queue risk) or chronic low utilization (oversized nodes). Correlate with inference request rate from gateway logs if present.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=infra sourcetype=\"otel:metrics\" (metric_name=\"gpu.utilization\" OR metric_name=\"dcgm.gpu.utilization\")\n| bin _time span=1m\n| stats avg(value) as gpu_util by _time, host, gpu_id\n| where gpu_util > 90 OR gpu_util < 15\n| timechart span=15m avg(gpu_util) by host\n```\n\nUnderstanding this SPL\n\n**GPU and TPU Utilization for Inference Workloads** — Underutilized accelerators waste capex; saturated GPUs increase queue time and latency for inference—utilization guides autoscaling and instance right-sizing.\n\nDocumented **Data sources**: `sourcetype=otel:metrics`. **App/TA** (typical add-on context): Splunk OTel Collector, NVIDIA DCGM exporter, Kubernetes metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: infra; **sourcetype**: otel:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=infra, sourcetype=\"otel:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, gpu_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where gpu_util > 90 OR gpu_util < 15` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (GPU %), Heatmap (GPU × host), Single value (cluster avg utilization).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "Picture a taxi rank full of pricey sports cars rented by the hour. Sometimes every engine idles while riders wait in line; sometimes one car redlines while the others nap. We watch the rank so you neither waste money on parked horsepower nor let sudden crowds stall because nobody saw the fleet was lopsided.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.4",
              "n": "Model Version Deployment Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Knowing which model revision serves production supports rollback, audit, and safety reviews when behavior or cost shifts after a rollout.",
              "t": "CI/CD webhooks, Kubernetes labels, LLM gateway config audit",
              "d": "`sourcetype=openai:api`, `sourcetype=k8s:deployment`",
              "q": "(index=ai_ops sourcetype=\"openai:api\") OR (index=platform sourcetype=\"k8s:deployment\")\n| eval model_id=coalesce(model, model_name, image_tag)\n| stats latest(_time) as last_seen, latest(model_id) as current_model by deployment, namespace, environment\n| sort environment, deployment",
              "m": "Log model id from inference gateway on each request; for self-hosted models, ingest deployment events with image tag or `MODEL_ID` env. Maintain a lookup of approved model versions per environment. Alert on requests referencing undeployed or deprecated model strings.",
              "z": "Table (environment × model version), Timeline (version changes), Single value (unapproved model calls).",
              "kfp": "Planned multi-snapshot evaluations that intentionally alternate between two approved board baselines for offline scoring can elevate ver_dc and flip_flop without implying production drift; tag tenant_id with eval_suite metadata and downgrade non-production severities. Provider-announced maintenance that rotates system_fingerprint while model_alias stays pinned may trigger informational flip_flop until notes in llm_approved_model_versions.csv acknowledge the window. Azure deployment swaps executed under change control can move ver_max legitimately; require CAB correlation before paging application owners. Gemini regional failovers that change publisher strings without altering approved capability tiers may look like mismatches until regex patterns include alternation groups for regional publishers. Stale allowlist rows that omit newly blessed snapshots cause mismatch_allowlist spikes that reflect spreadsheet lag, not abuse; automate lookup pulls from Git. Brief LiteLLM cache warm-up phases can skew traffic_share_pct until steady state; compare against five trailing buckets. Duplicate HEC submissions can inflate req_count; dedupe on request_id. Cost_step_ratio swings from currency normalization or tax line changes belong in UC-13.4.2 reconciliation, not emergency rollback here. Canary tenants that deliberately float aliases for research should set alias_pin_required=no in the lookup. tstats Change summaries absent acceleration return empty change_touchpoints; repair datamodel acceleration rather than muting detections.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CI/CD webhooks, Kubernetes labels, LLM gateway config audit.\n• Ensure the following data sources are available: `sourcetype=openai:api`, `sourcetype=k8s:deployment`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog model id from inference gateway on each request; for self-hosted models, ingest deployment events with image tag or `MODEL_ID` env. Maintain a lookup of approved model versions per environment. Alert on requests referencing undeployed or deprecated model strings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=ai_ops sourcetype=\"openai:api\") OR (index=platform sourcetype=\"k8s:deployment\")\n| eval model_id=coalesce(model, model_name, image_tag)\n| stats latest(_time) as last_seen, latest(model_id) as current_model by deployment, namespace, environment\n| sort environment, deployment\n```\n\nUnderstanding this SPL\n\n**Model Version Deployment Tracking** — Knowing which model revision serves production supports rollback, audit, and safety reviews when behavior or cost shifts after a rollout.\n\nDocumented **Data sources**: `sourcetype=openai:api`, `sourcetype=k8s:deployment`. **App/TA** (typical add-on context): CI/CD webhooks, Kubernetes labels, LLM gateway config audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_ops, platform; **sourcetype**: openai:api, k8s:deployment. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_ops, index=platform, sourcetype=\"openai:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **model_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by deployment, namespace, environment** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (environment × model version), Timeline (version changes), Single value (unapproved model calls).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We keep a dated name tag on the exact recipe version answering your automated helpers, like noting which edition of a cookbook a chef used. When the publisher swaps pages under the same cover, or a taste test grows bigger than agreed, we raise a clear flag so stewards can undo it calmly.",
              "mtype": [
                "Change",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.5",
              "n": "AI Gateway Rate Limiting and Quota Management",
              "c": "high",
              "f": "intermediate",
              "v": "Gateway limits protect backends and budgets; monitoring 429s and quota headers prevents one client from starving others and validates limit tuning.",
              "t": "Kong, Azure API Management, Envoy access logs → Splunk",
              "d": "`sourcetype=openai:api`, `sourcetype=haproxy:access`",
              "q": "index=ai_ops sourcetype=\"openai:api\"\n| eval throttled=if(status=429 OR match(_raw,\"(?i)rate.?limit\"),1,0)\n| stats sum(throttled) as throttled_reqs count as total by client_id, route\n| eval throttle_pct=round(100*throttled_reqs/total,2)\n| where throttle_pct > 5\n| sort -throttle_pct",
              "m": "Centralize LLM traffic through an API gateway and log client identity, route, status, and optional `X-RateLimit-*` headers. Alert on rising 429 share per key or app. Feed quota resets into a small KV store for dashboards if headers are present.",
              "z": "Bar chart (429% by client), Line chart (throttled requests over time), Table (top limited routes).",
              "kfp": "Planned load tests and partner certification windows routinely generate intentional 429 storms; require change_calendar correlation and test headers before paging executives. Vendor-announced regional throttling or maintenance advisories can elevate provider_rl_hints across tenants without implying internal misconfiguration; cross-check public status pages. Scheduled batch evaluation jobs that fan out thousands of short requests may trip RPM ceilings while TPM remains comfortable; annotate batch lanes and compare against contracted_rpm from inventory. Partner integration tests reusing production routes without isolated keys can look like noisy neighbors until routes are segregated. Controlled chaos engineering exercises that validate backoff behavior may temporarily raise rate_limit_pct; gate alerts with experiment metadata. Marketing splash events with pre-approved traffic spikes can exceed historical baselines; FinOps should pre-raise quotas and tag events. Certificate or mesh upgrades that force client retries can inflate gateway volumes without true quota exhaustion; inspect retry-after patterns and TLS error adjacency. Stale inventory rows that under-report contracted_tpm can make time_to_quota_exhaustion_min look artificially dire; refresh lookups weekly. Duplicate log shipping from active-active gateways can double-count requests until dedupe summaries land; monitor source host cardinality. Brief Apigee spike-arrest dampening after configuration pushes can look like starvation until policies stabilize; compare against deployment timelines.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kong, Azure API Management, Envoy access logs → Splunk.\n• Ensure the following data sources are available: `sourcetype=openai:api`, `sourcetype=haproxy:access`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCentralize LLM traffic through an API gateway and log client identity, route, status, and optional `X-RateLimit-*` headers. Alert on rising 429 share per key or app. Feed quota resets into a small KV store for dashboards if headers are present.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_ops sourcetype=\"openai:api\"\n| eval throttled=if(status=429 OR match(_raw,\"(?i)rate.?limit\"),1,0)\n| stats sum(throttled) as throttled_reqs count as total by client_id, route\n| eval throttle_pct=round(100*throttled_reqs/total,2)\n| where throttle_pct > 5\n| sort -throttle_pct\n```\n\nUnderstanding this SPL\n\n**AI Gateway Rate Limiting and Quota Management** — Gateway limits protect backends and budgets; monitoring 429s and quota headers prevents one client from starving others and validates limit tuning.\n\nDocumented **Data sources**: `sourcetype=openai:api`, `sourcetype=haproxy:access`. **App/TA** (typical add-on context): Kong, Azure API Management, Envoy access logs → Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_ops; **sourcetype**: openai:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_ops, sourcetype=\"openai:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **throttled** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by client_id, route** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **throttle_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where throttle_pct > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (429% by client), Line chart (throttled requests over time), Table (top limited routes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch the doorway to the shared smart helpers like a busy bar with a fair bouncer counting drinks per guest each hour. When one crowd orders enormous pitchers back-to-back, pours pause for them so the tap does not run dry for everyone else, and we see who hit the limit before regulars wait all night.",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure",
                "envoy"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.6",
              "n": "Ollama Local LLM Abuse Detection (ESCU)",
              "c": "critical",
              "f": "advanced",
              "v": "Self-hosted Ollama endpoints can be probed or misused for bulk generation; correlating ESCU-style analytics with local logs catches abuse before resource exhaustion or data leakage.",
              "t": "ESCU (Analytic Story: Local LLM abuse patterns), Ollama HTTP logs",
              "d": "`sourcetype=ollama:logs`",
              "q": "index=ai_ops sourcetype=\"ollama:logs\"\n| search path IN (\"/api/generate\",\"/api/chat\")\n| eventstats count as reqs_per_src by src\n| where status>=400 OR duration_ms>60000 OR reqs_per_src>100\n| eval suspicious=if(reqs_per_src>100,\"high_volume\",if(match(user_agent,\"curl|python-requests\"),\"scripted\",\"normal\"))\n| stats count, dc(path) as paths, values(user_agent) as ua by src, host, suspicious\n| where count>50 OR match(ua,\"(?i)(scanner|masscan)\")",
              "m": "Forward Ollama access logs with client IP, path, model, duration, and status. Tune ESCU detections for unusual volume, off-hours spikes, and known scanner user agents. Block or rate-limit at the network edge based on Splunk alerts. Enrich with asset and identity lookups where available.",
              "z": "Map (source IPs), Table (suspicious sessions), Timeline (request bursts).",
              "kfp": "Large-context retrieval jobs that legitimately embed tens of thousands of tokens for summarization can trip token_burn_outlier until you annotate tenant lanes and document expected maximum context per model. Approved adversarial research and purple-team drills inject DAN-style strings on purpose; require change_ticket_id and lab index routing before paging executives. Batch code-generation workers using headless HTTP clients may show ua_risk and high volume without malicious intent; join owner metadata from allowlists and CMDB before blocking subnets. Scientific computing notebooks behind shared NAT can concentrate many principals behind one client_ip; enrich with Authentication.user and session correlation instead of IP-only bans. Model warm-up scripts that replay long system prompts during deploys can spike prompt_tokens briefly; compare release timelines. Misconfigured reverse proxies that strip X-Forwarded-For leave misleading client_ip values until parsers trust only vetted headers; fix forwarding before tuning thresholds. Duplicate log streams during blue-green cutovers can double-count totals until dedupe keys land. Brief GPU recovery after driver upgrades may stretch latency_ms for honest users; gate latency_outlier with injection density. Penetration testers using residential VPN egress may resemble threat actors; validate against vendor ROE calendars. Stale ESCU context from lagging tstats summaries can empty auth_user; repair data model acceleration rather than muting detections entirely.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ESCU (Analytic Story: Local LLM abuse patterns), Ollama HTTP logs.\n• Ensure the following data sources are available: `sourcetype=ollama:logs`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Ollama access logs with client IP, path, model, duration, and status. Tune ESCU detections for unusual volume, off-hours spikes, and known scanner user agents. Block or rate-limit at the network edge based on Splunk alerts. Enrich with asset and identity lookups where available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_ops sourcetype=\"ollama:logs\"\n| search path IN (\"/api/generate\",\"/api/chat\")\n| eventstats count as reqs_per_src by src\n| where status>=400 OR duration_ms>60000 OR reqs_per_src>100\n| eval suspicious=if(reqs_per_src>100,\"high_volume\",if(match(user_agent,\"curl|python-requests\"),\"scripted\",\"normal\"))\n| stats count, dc(path) as paths, values(user_agent) as ua by src, host, suspicious\n| where count>50 OR match(ua,\"(?i)(scanner|masscan)\")\n```\n\nUnderstanding this SPL\n\n**Ollama Local LLM Abuse Detection (ESCU)** — Self-hosted Ollama endpoints can be probed or misused for bulk generation; correlating ESCU-style analytics with local logs catches abuse before resource exhaustion or data leakage.\n\nDocumented **Data sources**: `sourcetype=ollama:logs`. **App/TA** (typical add-on context): ESCU (Analytic Story: Local LLM abuse patterns), Ollama HTTP logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_ops; **sourcetype**: ollama:logs. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_ops, sourcetype=\"ollama:logs\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eventstats` rolls up events into metrics; results are split **by src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status>=400 OR duration_ms>60000 OR reqs_per_src>100` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **suspicious** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by src, host, suspicious** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>50 OR match(ua,\"(?i)(scanner|masscan)\")` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (source IPs), Table (suspicious sessions), Timeline (request bursts).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We watch the private copy of the smart writing helper running inside your walls. When someone hammers it with sneaky instructions, burns through enormous prompts, or appears from corners of the network that usually do not belong there, we raise a clear alert so your team can act before secrets slip or the service is milked dry.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.7",
              "n": "MCP Server Suspicious Activity Detection (ESCU)",
              "c": "critical",
              "f": "advanced",
              "v": "Model Context Protocol servers expose tools and data to agents; anomalous tool invocation or auth patterns may indicate compromise or prompt-driven misuse.",
              "t": "ESCU (custom correlation searches), MCP server audit logs",
              "d": "`sourcetype=mcp:audit`",
              "q": "index=security sourcetype=\"mcp:audit\"\n| search tool_name IN (\"filesystem.write\",\"shell.exec\",\"secrets.read\") OR result=\"denied\"\n| eval risk=case(match(tool_name,\"shell|filesystem|secrets\"),\"high\",result=\"denied\",\"medium\",true(),\"low\")\n| stats count by session_id, principal, tool_name, risk\n| where risk=\"high\" AND count>10\n| sort -count",
              "m": "Emit structured MCP events: session, tool, args hash (not raw secrets), allow/deny, latency. Map to ESCU-compatible data models and run threshold and rare-process detections. Alert on denied high-risk tools, impossible travel for sessions, or burst tool calls from a single agent identity.",
              "z": "Table (high-risk tool calls), Sankey (tool flow), Timeline (session anomalies).",
              "kfp": "Legitimate batch indexing agents can emit hundreds of read_file or list_directory calls per minute during repository scans; annotate client_tier internal_batch and raise tpm thresholds rather than paging on volume alone. Scheduled IT automation that fans out delete or exec tools after change windows may match sensitive_fanout until you require preceding read phases in the same session or stamp approved_change_id on events. Large data-import sessions can trigger payload_limit_exceeded and timeout error_code clusters that mirror attacks; correlate with known migration tickets and network maintenance. AI-assisted code review tools may issue dense read bursts across many paths; allowlist repository agent identities in mcp_authorized_clients.csv. Planned data migrations with bulk delete operations should carry change metadata so rapid_escalation heuristics defer to maintenance calendars. Developer laptops running local stdio MCP servers can show disconnect noise from sleep and resume; filter by production mcp_server_id classes. Penetration tests and purple-team exercises that deliberately inject jailbreak strings should route to lab indexes. Stale inventory CSV rows after rapid CI rollouts can mislabel tool_risk_class until pipelines refresh; treat inventory drift as an operational gap not a silent mute. Duplicate log streams during blue-green cutovers can double-count streamstats windows until dedupe keys land. Brief Authentication tstats lag can empty auth_user enrichment; repair acceleration before muting the detection.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1199",
                "T1078",
                "T1059",
                "T1190"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ESCU (custom correlation searches), MCP server audit logs.\n• Ensure the following data sources are available: `sourcetype=mcp:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEmit structured MCP events: session, tool, args hash (not raw secrets), allow/deny, latency. Map to ESCU-compatible data models and run threshold and rare-process detections. Alert on denied high-risk tools, impossible travel for sessions, or burst tool calls from a single agent identity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"mcp:audit\"\n| search tool_name IN (\"filesystem.write\",\"shell.exec\",\"secrets.read\") OR result=\"denied\"\n| eval risk=case(match(tool_name,\"shell|filesystem|secrets\"),\"high\",result=\"denied\",\"medium\",true(),\"low\")\n| stats count by session_id, principal, tool_name, risk\n| where risk=\"high\" AND count>10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**MCP Server Suspicious Activity Detection (ESCU)** — Model Context Protocol servers expose tools and data to agents; anomalous tool invocation or auth patterns may indicate compromise or prompt-driven misuse.\n\nDocumented **Data sources**: `sourcetype=mcp:audit`. **App/TA** (typical add-on context): ESCU (custom correlation searches), MCP server audit logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: mcp:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"mcp:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by session_id, principal, tool_name, risk** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where risk=\"high\" AND count>10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (high-risk tool calls), Sankey (tool flow), Timeline (session anomalies).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We watch the helper programs that let smart assistants use your company tools. When one helper starts hammering dangerous actions, sneaks across customer boundaries, or signs in strangely, we raise a clear alert so people can stop it before real damage spreads.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.8",
              "n": "Microsoft 365 Copilot Data Exfiltration Risk (ESCU)",
              "c": "critical",
              "f": "advanced",
              "v": "Copilot can surface sensitive content; monitoring risky prompts and large exports supports insider-threat and DLP alignment with Microsoft’s recommended logging.",
              "t": "Microsoft 365 Add-on, Microsoft Graph audit, ESCU (Microsoft Cloud content)",
              "d": "`sourcetype=o365:audit`",
              "q": "index=o365 sourcetype=\"o365:audit\" Workload=\"Copilot\"\n| search (Operation=\"CopilotInteraction\" OR Operation=\"Search\") AND (match(SensitivityLabel,\"Highly Confidential\") OR match(ObjectId,\"(?i)export|download\"))\n| stats count by UserId, Operation, ObjectId\n| where count>20\n| sort -count",
              "m": "Ingest Copilot-related audit events and sensitivity labels from Purview where available. Tune for bulk retrieval, unusual Copilot sessions after privilege changes, and interactions with restricted sites. Align alerts with ESCU Microsoft 365 analytic stories and incident response playbooks.",
              "z": "Table (users and operations), Bar chart (events by label), Timeline (Copilot activity spikes).",
              "kfp": "Legitimate legal and finance research workflows often summarize many sensitivity-labeled contracts in short windows; require review_window allow lists and department-based suppressions before paging line managers. Copilot agent flows and Microsoft Search can produce surges of Search operations that resemble scraping; pair ObjectId diversity with MCAS mass-download heuristics rather than volume alone. eDiscovery officers with eyes-on review may intentionally drive high CopilotInteraction counts across hold sets; document matter IDs in the inventory CSV. Microsoft 365 Service Health incidents can duplicate or delay audit events, creating bursty backfills that look like exfiltration until _indextime latency is checked. License type changes can suddenly enable Copilot for populations that were previously ineligible, elevating event volume without malice. Mergers and acquisitions due diligence rooms may authorize bulk access to Highly Confidential libraries; align classification column in the inventory CSV with transaction calendars. Purview label drift after content migrations can mis-tag files until rescans finish, causing false HIGH tiers until labels stabilize. Copilot Pages and Loop auto-summarization features can inflate per-user counts during legitimate collaboration sprints; tune baseline_daily_events upward for product and marketing cohorts after enablement waves.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1530",
                "T1213.002",
                "T1567"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Microsoft 365 Add-on, Microsoft Graph audit, ESCU (Microsoft Cloud content).\n• Ensure the following data sources are available: `sourcetype=o365:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Copilot-related audit events and sensitivity labels from Purview where available. Tune for bulk retrieval, unusual Copilot sessions after privilege changes, and interactions with restricted sites. Align alerts with ESCU Microsoft 365 analytic stories and incident response playbooks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"o365:audit\" Workload=\"Copilot\"\n| search (Operation=\"CopilotInteraction\" OR Operation=\"Search\") AND (match(SensitivityLabel,\"Highly Confidential\") OR match(ObjectId,\"(?i)export|download\"))\n| stats count by UserId, Operation, ObjectId\n| where count>20\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Microsoft 365 Copilot Data Exfiltration Risk (ESCU)** — Copilot can surface sensitive content; monitoring risky prompts and large exports supports insider-threat and DLP alignment with Microsoft’s recommended logging.\n\nDocumented **Data sources**: `sourcetype=o365:audit`. **App/TA** (typical add-on context): Microsoft 365 Add-on, Microsoft Graph audit, ESCU (Microsoft Cloud content). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: o365:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"o365:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by UserId, Operation, ObjectId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users and operations), Bar chart (events by label), Timeline (Copilot activity spikes).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the workplace assistant that reads and summarizes your company documents. When someone who can open many locations suddenly pulls together a large pile of restricted material in a short time, we raise a clear signal so reviewers can step in before sensitive content spreads in ways policy did not intend.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "security_essentials"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.9",
              "n": "LLM Prompt Injection Attempt Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Injection attempts try to override system instructions or exfiltrate context; logging and alerting limits abuse before secrets or policies are bypassed.",
              "t": "LLM gateway with prompt logging (redacted), DLP on egress",
              "d": "`sourcetype=openai:api`, `sourcetype=azure:openai`",
              "q": "index=ai_ops sourcetype IN (\"openai:api\",\"azure:openai\")\n| search match(lower(prompt_preview),\"(ignore|disregard|system prompt|jailbreak|sudo mode|base64)\")\n| eval severity=if(match(prompt_preview,\"(?i)(password|secret|api[_-]?key)\"),\"critical\",\"high\")\n| stats count by user_id, app_id, severity\n| where count>=3\n| sort -count",
              "m": "Log truncated or hashed prompts server-side only (privacy review required). Use regex and optional ML classifiers for injection patterns. Route critical hits to SOC. Do not index full PII-heavy prompts without policy. Pair with response policy blocks.",
              "z": "Table (injection attempts), Single value (daily blocked prompts), Timeline (repeat offenders).",
              "kfp": "Authorized red-team campaigns and purple-team exercises produce high classifier scores that must be tagged through red_team_window in llm_app_inventory.csv to avoid overnight pages. Security awareness training corpora and internal evaluation suites that replay HarmBench, JailbreakBench, or AdvBench style strings will trip regex or score thresholds even when runs are benign. Linguistics or policy teams discussing ignore previous instructions as a citation example can resemble attacks until classifiers incorporate document context signals. Code-review copilots that paste enterprise security policy excerpts into prompts may match sensitive-pattern rules without malicious intent. Multilingual prompts that paraphrase English jailbreaks using transliterated characters can false-negative on English-only regex while still being benign homework or translation tasks. Tool-call traces may carry injection-like arguments that tools reject harmlessly, producing scary telemetry without impact. Academic datasets that include adversarial strings for benchmarking should run only on isolated sourcetypes excluded from production paging. Newly deployed retrieval indexes can spike retrieval_fanout_count during backfills, mimicking indirect injection until baselines stabilize. Provider-side content_filter stops sometimes reflect vendor threshold drift rather than user malice, so treat provider arms as corroboration not sole proof. Service accounts sharing one principal identifier can make repeat_10m look like a brute-force human when automation batches retries. Clock skew across Kubernetes nodes can distort streamstats windows until NTP is repaired.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1041",
                "T1059"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: LLM gateway with prompt logging (redacted), DLP on egress.\n• Ensure the following data sources are available: `sourcetype=openai:api`, `sourcetype=azure:openai`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog truncated or hashed prompts server-side only (privacy review required). Use regex and optional ML classifiers for injection patterns. Route critical hits to SOC. Do not index full PII-heavy prompts without policy. Pair with response policy blocks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_ops sourcetype IN (\"openai:api\",\"azure:openai\")\n| search match(lower(prompt_preview),\"(ignore|disregard|system prompt|jailbreak|sudo mode|base64)\")\n| eval severity=if(match(prompt_preview,\"(?i)(password|secret|api[_-]?key)\"),\"critical\",\"high\")\n| stats count by user_id, app_id, severity\n| where count>=3\n| sort -count\n```\n\nUnderstanding this SPL\n\n**LLM Prompt Injection Attempt Detection** — Injection attempts try to override system instructions or exfiltrate context; logging and alerting limits abuse before secrets or policies are bypassed.\n\nDocumented **Data sources**: `sourcetype=openai:api`, `sourcetype=azure:openai`. **App/TA** (typical add-on context): LLM gateway with prompt logging (redacted), DLP on egress. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_ops.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_ops. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user_id, app_id, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>=3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (injection attempts), Single value (daily blocked prompts), Timeline (repeat offenders).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We quietly read the intent of questions headed for smart assistants before answers go out, watching for tricky wording that tries to override rules, smuggle hidden instructions, or steal secrets, and we raise a louder alarm when someone keeps hammering a public-facing helper but stay calmer during scheduled security drills.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "hardware_bmc_edac"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.10",
              "n": "AI Model API Key Rotation Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Stale API keys increase breach impact; proving rotation cadence supports security policy and vendor audits.",
              "t": "Secret manager audit (Vault, AWS Secrets Manager), key metadata sync",
              "d": "`sourcetype=vault:audit`, `sourcetype=aws:cloudtrail`",
              "q": "index=security (sourcetype=\"vault:audit\" OR sourcetype=\"aws:cloudtrail\")\n| search (operation=\"rotate\" OR eventName=\"RotateSecret\") AND match(_raw,\"(?i)openai|azure.?openai|llm\")\n| stats latest(_time) as last_rotate by secret_path\n| eval key_age_days=round((now()-last_rotate)/86400,0)\n| where key_age_days > 90\n| table secret_path, key_age_days",
              "m": "Track last rotation timestamp per logical key from your secret store. Join usage logs to key id if you issue per-app keys. Alert when age exceeds policy (e.g., 90 days since last rotation) or rotation job fails. Dashboard compliance percentage by business unit.",
              "z": "Table (keys past due), Single value (% compliant), Bar chart (age distribution).",
              "kfp": "Lab environments that share logical_key_id strings with production inflate gateway_last_used and can mask cutover gaps until inventory splits environments. Long-running batch jobs may legitimately delay gateway timestamp movement for hours; correlate job schedules before paging. Azure diagnostic OperationName strings vary by subscription policy; an overly tight match filter drops real SecretSet events and creates false stale-key readings. Vault audit noise from health checks and policy listing can satisfy coarse filters; tighten operation lists once you observe false renew signals. Manual emergency PutSecretValue rotations without RotateSecret still secure the credential but may look unusual in evidence_actions until you annotate the incident ticket. Clock skew between cloud control plane and on-prem gateways produces artificial gateway_cutover_gap positives; monitor NTP offsets. Break-glass keys tagged in inventory still appear as violations if exception workflows forget to extend rotation_threshold_days; governance must update CSV fields. Summary index lag for GCP Secret Manager optional lane can make Vertex rotations look tardy when only multisearch arms run; reconcile with native cloud consoles. Change.object mapping mismatches yield zero change_recent despite real Helm churn; validate CIM transforms for ConfigMap objects tied to logical_key_id. Business units with only a handful of keys show volatile bu_compliance_pct; use rolling monthly panels for executive reporting.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Secret manager audit (Vault, AWS Secrets Manager), key metadata sync.\n• Ensure the following data sources are available: `sourcetype=vault:audit`, `sourcetype=aws:cloudtrail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack last rotation timestamp per logical key from your secret store. Join usage logs to key id if you issue per-app keys. Alert when age exceeds policy (e.g., 90 days since last rotation) or rotation job fails. Dashboard compliance percentage by business unit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security (sourcetype=\"vault:audit\" OR sourcetype=\"aws:cloudtrail\")\n| search (operation=\"rotate\" OR eventName=\"RotateSecret\") AND match(_raw,\"(?i)openai|azure.?openai|llm\")\n| stats latest(_time) as last_rotate by secret_path\n| eval key_age_days=round((now()-last_rotate)/86400,0)\n| where key_age_days > 90\n| table secret_path, key_age_days\n```\n\nUnderstanding this SPL\n\n**AI Model API Key Rotation Compliance** — Stale API keys increase breach impact; proving rotation cadence supports security policy and vendor audits.\n\nDocumented **Data sources**: `sourcetype=vault:audit`, `sourcetype=aws:cloudtrail`. **App/TA** (typical add-on context): Secret manager audit (Vault, AWS Secrets Manager), key metadata sync. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: vault:audit, aws:cloudtrail. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"vault:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by secret_path** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **key_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where key_age_days > 90` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **AI Model API Key Rotation Compliance**): table secret_path, key_age_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (keys past due), Single value (% compliant), Bar chart (age distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-01",
              "sver": "",
              "rby": "",
              "ge": "We check that the passwords your AI services use get refreshed on time and that the live gateways actually pick up the new ones. When a team only updates a vault record but the app keeps an old key, we flag it so auditors see real protection—not paperwork alone.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "hashicorp"
              ],
              "em": [
                "hashicorp_vault"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.11",
              "n": "LLM Output Content Policy Violation Logging",
              "c": "high",
              "f": "intermediate",
              "v": "Provider and organizational safety filters block harmful content; logging violations supports red-teaming, policy tuning, and audit trails for regulated use cases.",
              "t": "Azure OpenAI content filter logs, OpenAI moderation API results",
              "d": "`sourcetype=azure:openai`, `sourcetype=openai:api`",
              "q": "index=ai_ops (sourcetype=\"azure:openai\" OR sourcetype=\"openai:api\")\n| search content_filter_result=\"blocked\" OR finish_reason=\"content_filter\" OR moderation_flagged=\"true\"\n| stats count by filter_category, model, app_id\n| sort -count",
              "m": "Capture moderation and content-filter outcomes from API responses (categories, severity). Avoid storing blocked text; store hashes or length only if needed. Review spikes by app or model after prompt changes. Feed executive summary dashboards for AI governance.",
              "z": "Bar chart (violations by category), Line chart (trend), Table (top apps).",
              "kfp": "Annual red-team campaigns and purple-team replay weeks intentionally spike jailbreak and injection categories; without red_team_window tagging, severity math pages executives during sanctioned drills. ML retraining sets and evaluation harnesses emit synthetic harmful strings that light up filters even though no customer saw output; route those app_id rows to non-production indexes or enrich with dataset labels. Locale-specific false positives arise when medical charting mentions self-harm assessment language, legal contracts quote violent statutes, or trauma counseling bots use clinical vocabulary providers flag broadly. Providers may retune thresholds between API releases without a semver bump you notice immediately; pair category drift with model_id and deployment_id dimensions. Model version migrations can remap category enums so historical baselines look noisy until lookup tables translate legacy codes. Multi-provider fan-out for resilience duplicates the same logical request_id across Azure and Bedrock arms, double-counting viol_roll_1h unless deduped. Tenants indexing regulated medical or legal corpora into retrieval may see elevated PII categories from grounding snippets rather than user malice. Benign multilingual news summarization can surface violence tokens in headlines and trigger sexual or violence scores until translation-aware policies ship. Gateway middleware that truncates headers may drop category fields while leaving act_block true, confusing triage. Scheduled provider maintenance windows can zero out one multisearch arm while others continue, mimicking a bypass until health monitors fire.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Azure OpenAI content filter logs, OpenAI moderation API results.\n• Ensure the following data sources are available: `sourcetype=azure:openai`, `sourcetype=openai:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture moderation and content-filter outcomes from API responses (categories, severity). Avoid storing blocked text; store hashes or length only if needed. Review spikes by app or model after prompt changes. Feed executive summary dashboards for AI governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_ops (sourcetype=\"azure:openai\" OR sourcetype=\"openai:api\")\n| search content_filter_result=\"blocked\" OR finish_reason=\"content_filter\" OR moderation_flagged=\"true\"\n| stats count by filter_category, model, app_id\n| sort -count\n```\n\nUnderstanding this SPL\n\n**LLM Output Content Policy Violation Logging** — Provider and organizational safety filters block harmful content; logging violations supports red-teaming, policy tuning, and audit trails for regulated use cases.\n\nDocumented **Data sources**: `sourcetype=azure:openai`, `sourcetype=openai:api`. **App/TA** (typical add-on context): Azure OpenAI content filter logs, OpenAI moderation API results. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_ops; **sourcetype**: azure:openai, openai:api. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_ops, sourcetype=\"azure:openai\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by filter_category, model, app_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (violations by category), Line chart (trend), Table (top apps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "Picture the automatic referee flags on AI answers: we record which rule tripped, how serious it was, and which customer-facing app was involved—without saving the embarrassing sentence itself. That way safety, legal, and product teams can fix policies calmly and show auditors the controls worked.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.12",
              "n": "AI Inference Pipeline Error Rate",
              "c": "high",
              "f": "intermediate",
              "v": "End-to-end inference includes preprocess, model call, and postprocess; pipeline-level error rates catch failures that raw HTTP 200s miss (e.g., empty generations, schema errors).",
              "t": "Application logs, OpenTelemetry traces",
              "d": "`sourcetype=otel:metrics`",
              "q": "index=ai_ops sourcetype=\"otel:metrics\" metric_name IN (\"inference.pipeline.errors\",\"inference.pipeline.requests\")\n| bin _time span=5m\n| stats sum(eval(if(metric_name=\"inference.pipeline.errors\",value,0))) as errors,\n        sum(eval(if(metric_name=\"inference.pipeline.requests\",value,0))) as reqs by _time, service.name\n| eval err_rate=round(100*errors/nullif(reqs,0),3)\n| where err_rate > 1\n| timechart span=5m avg(err_rate) by service.name",
              "m": "Instrument each pipeline stage with OTel counters or structured logs (`stage`, `error_class`). Emit `inference.pipeline.errors` and `inference.pipeline.requests` counters per service. Alert on SLO burn for error rate. Correlate with deployments and model version changes.",
              "z": "Line chart (pipeline error rate), Table (errors by stage), Single value (SLO status).",
              "kfp": "Legitimate transient errors appear during model-server hot-swap when rolling deployments briefly return HTTP 503 while queue proxies wait for new predictor pods; require dwell thresholds and change_ticket_id correlation before paging customer success. Expected HTTP 429 responses from intentional concurrency limiters at peak queries per second are protective, not predictor failures; exclude those routes or mark them in gateway parsers. GPU node drains initiated by cluster autoscaler can surface short error bursts that clear when workloads reschedule; pair with Kubernetes node cordon metadata. Schema validators rejecting malformed client payloads produce 4xx outcomes that should not inflate server-side error_rate if your governance treats them as client faults; tune status-class filters accordingly. Triton lazy model loading raises failures on the first requests until weights finish mapping into GPU memory; treat early-window spikes as warmup when model_version transitions from zero to a positive revision. KServe Knative cold-start delays can masquerade as inference timeouts during scale-from-zero; differentiate with revision age and pod ready conditions. A/B canary releases may deliberately route a small percentage to a buggy build; severity should reference canary_percent metadata before executives escalate. Dynamic batching latency varies when instantaneous batch size sits below configured max_batch_size; oscillation alone is not failure without error counters. SageMaker Multi-Model Server evicts cold models under memory pressure and can return model loading errors that resemble outages; confirm endpoint configuration and memory headroom. vLLM continuous batching may report healthy admin probes while stalled generations accumulate; corroborate with in-flight batch gauges. Baseline math can mis-fire when streamstats windows include maintenance zeros; freeze windows in model_inventory.csv exist precisely to suppress those intervals. Cloud monitoring ingest lag for Vertex AI or Azure ML can delay failure signals versus on-cluster logs; align timestamps before deduping duplicates. Duplicate Prometheus remote-write submissions can double mstats arms until deduplication lands on the collector. Inventory rows missing sla_error_rate_pct default to conservative numeric placeholders that may over-alert until owners populate CSV fields.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Application logs, OpenTelemetry traces.\n• Ensure the following data sources are available: `sourcetype=otel:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstrument each pipeline stage with OTel counters or structured logs (`stage`, `error_class`). Emit `inference.pipeline.errors` and `inference.pipeline.requests` counters per service. Alert on SLO burn for error rate. Correlate with deployments and model version changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_ops sourcetype=\"otel:metrics\" metric_name IN (\"inference.pipeline.errors\",\"inference.pipeline.requests\")\n| bin _time span=5m\n| stats sum(eval(if(metric_name=\"inference.pipeline.errors\",value,0))) as errors,\n        sum(eval(if(metric_name=\"inference.pipeline.requests\",value,0))) as reqs by _time, service.name\n| eval err_rate=round(100*errors/nullif(reqs,0),3)\n| where err_rate > 1\n| timechart span=5m avg(err_rate) by service.name\n```\n\nUnderstanding this SPL\n\n**AI Inference Pipeline Error Rate** — End-to-end inference includes preprocess, model call, and postprocess; pipeline-level error rates catch failures that raw HTTP 200s miss (e.g., empty generations, schema errors).\n\nDocumented **Data sources**: `sourcetype=otel:metrics`. **App/TA** (typical add-on context): Application logs, OpenTelemetry traces. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_ops; **sourcetype**: otel:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_ops, sourcetype=\"otel:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service.name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_rate > 1` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by service.name** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (pipeline error rate), Table (errors by stage), Single value (SLO status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the machines that actually run the finished model so we know when answers fail because those machines choke, run out of memory, or trip internal safety timers—not because someone typed a silly question.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.13",
              "n": "Seq2Seq Log Anomaly Detection via Reconstruction Error (DSDL)",
              "c": "critical",
              "f": "expert",
              "v": "Traditional log monitoring relies on known patterns — regex, keywords, error codes. But novel failures, zero-day exploits, and subtle misconfigurations produce log lines that have never been seen before. An LSTM or Transformer autoencoder trained on normal log sequences learns the \"grammar\" of healthy log output and flags lines with high reconstruction error — catching anomalies that no predefined rule could anticipate.",
              "t": "Splunk Deep Learning Toolkit (DSDL), custom Python container",
              "d": "Any structured log index (`index=main`, `index=os`, `index=web`, `index=security`)",
              "q": "index=main sourcetype=syslog earliest=-1h\n| eval log_token=lower(replace(_raw, \"\\d+\", \"N\"))\n| eval log_token=replace(log_token, \"[0-9a-f]{8,}\", \"HEX\")\n| streamstats count as seq_pos by host\n| apply pretrained_log_autoencoder_dsdl\n| rename reconstruction_error as recon_err\n| where recon_err > 0.85\n| eval anomaly_severity=case(recon_err>0.95, \"critical\", recon_err>0.90, \"high\", true(), \"medium\")\n| table _time, host, sourcetype, _raw, recon_err, anomaly_severity\n| sort -recon_err",
              "m": "Tokenize log lines by replacing numeric values with placeholders (N) and hex strings with HEX to reduce vocabulary size. Train an LSTM autoencoder (or Transformer encoder-decoder) in the DSDL container on 30 days of normal-state logs per sourcetype. The model learns to reconstruct typical log line sequences; lines it cannot reconstruct well (high reconstruction error) are anomalous. Deploy the model via `apply` in a scheduled search running every 15 minutes. Tune the threshold per sourcetype — security logs may have higher natural variance than infrastructure logs. Route critical anomalies (>0.95 error) to the SOC and high anomalies (>0.90) to operations. Track model performance weekly by reviewing false positive rates and adjusting the threshold. Retrain quarterly or after major infrastructure changes. Consider training separate models for high-volume sourcetypes (Windows Security, syslog, application logs) for better precision.",
              "z": "Table (anomalous log lines with reconstruction error), Line chart (reconstruction error distribution over time), Histogram (error score distribution), Single value (anomalies detected in last hour).",
              "kfp": "Legitimate new application releases that introduce genuinely fresh log strings routinely spike reconstruction_error until the champion model retrains on the expanded corpus. Scheduled batch jobs that execute once per quarter produce rare templates that look anomalous despite being healthy business logic. Planned log-format migrations and field renames shift token distributions without security impact. Very low-volume hosts inflate variance on rolling percentile thresholds and create singleton alerts that density logic intentionally suppresses. Intentional debug logging during incident triage floods out-of-vocabulary tokens and raises benign surprise scores. Holiday or promotional traffic patterns absent from training windows mimic drift until seasonal cohorts are labeled. Freshly promoted models still warming up after retrain emit elevated errors for hours until calibration completes. Manual template miner regressions can mis-assign template_id and concentrate false positives in one family until parsers heal. Staging versus production A/B tests that intentionally diverge formats should carry change metadata so validators downgrade severity. Container restarts during node drains can create latency spikes unrelated to model quality. Analyst mis-clicks on feedback labels can temporarily bias thresholds until confusion-matrix audits correct the training export.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Deep Learning Toolkit (DSDL), custom Python container.\n• Ensure the following data sources are available: Any structured log index (`index=main`, `index=os`, `index=web`, `index=security`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTokenize log lines by replacing numeric values with placeholders (N) and hex strings with HEX to reduce vocabulary size. Train an LSTM autoencoder (or Transformer encoder-decoder) in the DSDL container on 30 days of normal-state logs per sourcetype. The model learns to reconstruct typical log line sequences; lines it cannot reconstruct well (high reconstruction error) are anomalous. Deploy the model via `apply` in a scheduled search running every 15 minutes. Tune the threshold per sourcetype — s…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=main sourcetype=syslog earliest=-1h\n| eval log_token=lower(replace(_raw, \"\\d+\", \"N\"))\n| eval log_token=replace(log_token, \"[0-9a-f]{8,}\", \"HEX\")\n| streamstats count as seq_pos by host\n| apply pretrained_log_autoencoder_dsdl\n| rename reconstruction_error as recon_err\n| where recon_err > 0.85\n| eval anomaly_severity=case(recon_err>0.95, \"critical\", recon_err>0.90, \"high\", true(), \"medium\")\n| table _time, host, sourcetype, _raw, recon_err, anomaly_severity\n| sort -recon_err\n```\n\nUnderstanding this SPL\n\n**Seq2Seq Log Anomaly Detection via Reconstruction Error (DSDL)** — Traditional log monitoring relies on known patterns — regex, keywords, error codes. But novel failures, zero-day exploits, and subtle misconfigurations produce log lines that have never been seen before. An LSTM or Transformer autoencoder trained on normal log sequences learns the \"grammar\" of healthy log output and flags lines with high reconstruction error — catching anomalies that no predefined rule could anticipate.\n\nDocumented **Data sources**: Any structured log index (`index=main`, `index=os`, `index=web`, `index=security`). **App/TA** (typical add-on context): Splunk Deep Learning Toolkit (DSDL), custom Python container. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: main; **sourcetype**: syslog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=main, sourcetype=syslog, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **log_token** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **log_token** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `streamstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Seq2Seq Log Anomaly Detection via Reconstruction Error (DSDL)**): apply pretrained_log_autoencoder_dsdl\n• Renames fields with `rename` for clarity or joins.\n• Filters the current rows with `where recon_err > 0.85` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **anomaly_severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Seq2Seq Log Anomaly Detection via Reconstruction Error (DSDL)**): table _time, host, sourcetype, _raw, recon_err, anomaly_severity\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalous log lines with reconstruction error), Line chart (reconstruction error distribution over time), Histogram (error score distribution), Single value (anomalies detected in last hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We watch your application diary lines for moments when the pattern they usually follow suddenly breaks. We bunch lonely odd lines differently from a real burst on one machine, and we ease urgent pages when rare words flood in so teams chase true trouble instead of harmless novelty.",
              "mtype": [
                "Fault",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.14",
              "n": "Host-Metric Heatmap Anomaly via CNN (DSDL)",
              "c": "high",
              "f": "expert",
              "v": "Infrastructure metrics (CPU, memory, disk I/O, network throughput) per host form a time × metric matrix that looks like an image. A Convolutional Neural Network trained on these \"metric heatmaps\" detects complex, multi-metric degradation patterns — such as the specific combination of rising CPU, flat memory, and oscillating disk I/O that precedes a particular failure mode — that univariate thresholds and even multivariate statistical models cannot capture.",
              "t": "Splunk Deep Learning Toolkit (DSDL), custom Python container with TensorFlow/PyTorch",
              "d": "`index=infra sourcetype=collectd_http` or `sourcetype=otel:metrics` or `sourcetype=vmware:perf:*`",
              "q": "index=infra sourcetype IN (\"collectd_http\",\"otel:metrics\",\"vmware:perf:cpu\",\"vmware:perf:mem\",\"vmware:perf:disk\")\n| bin _time span=5m\n| stats avg(metric_value) as val by _time, host, metric_name\n| xyseries _time host+\"|\"+metric_name val\n| fillnull value=0\n| apply pretrained_metric_cnn_dsdl\n| rename anomaly_score as cnn_score\n| where cnn_score > 0.80\n| eval severity=case(cnn_score>0.95, \"critical\", cnn_score>0.90, \"high\", true(), \"medium\")\n| table _time, host, cnn_score, severity\n| sort -cnn_score",
              "m": "Construct metric heatmaps: for each host, create a 2D matrix where rows are metrics (CPU user, CPU system, memory used, disk read IOPS, disk write IOPS, network bytes in/out) and columns are time bins (e.g., 288 bins for 24 hours at 5-minute intervals). Normalize each metric row to [0,1]. Train a CNN autoencoder in the DSDL container on healthy-state heatmaps. At inference time, compute reconstruction error per host-day; high error indicates an anomalous metric pattern. The CNN captures spatial correlations across metrics that linear models miss — for example, the co-occurrence of CPU saturation with specific disk I/O patterns that precede storage controller failures. Schedule daily scoring with a 24-hour sliding window. Alert infrastructure teams on critical patterns and correlate with recent change events. Retrain monthly with new healthy baselines.",
              "z": "Heatmap (metric × time for anomalous hosts), Line chart (CNN anomaly score over time), Table (top anomalous hosts), Image (reconstructed vs actual heatmap for investigation).",
              "kfp": "Collector errors during config changes, target restarts, or transport and credential issues. We read collector logs and compare with the OTel pipeline view.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Deep Learning Toolkit (DSDL), custom Python container with TensorFlow/PyTorch.\n• Ensure the following data sources are available: `index=infra sourcetype=collectd_http` or `sourcetype=otel:metrics` or `sourcetype=vmware:perf:*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConstruct metric heatmaps: for each host, create a 2D matrix where rows are metrics (CPU user, CPU system, memory used, disk read IOPS, disk write IOPS, network bytes in/out) and columns are time bins (e.g., 288 bins for 24 hours at 5-minute intervals). Normalize each metric row to [0,1]. Train a CNN autoencoder in the DSDL container on healthy-state heatmaps. At inference time, compute reconstruction error per host-day; high error indicates an anomalous metric pattern. The CNN captures spatial …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=infra sourcetype IN (\"collectd_http\",\"otel:metrics\",\"vmware:perf:cpu\",\"vmware:perf:mem\",\"vmware:perf:disk\")\n| bin _time span=5m\n| stats avg(metric_value) as val by _time, host, metric_name\n| xyseries _time host+\"|\"+metric_name val\n| fillnull value=0\n| apply pretrained_metric_cnn_dsdl\n| rename anomaly_score as cnn_score\n| where cnn_score > 0.80\n| eval severity=case(cnn_score>0.95, \"critical\", cnn_score>0.90, \"high\", true(), \"medium\")\n| table _time, host, cnn_score, severity\n| sort -cnn_score\n```\n\nUnderstanding this SPL\n\n**Host-Metric Heatmap Anomaly via CNN (DSDL)** — Infrastructure metrics (CPU, memory, disk I/O, network throughput) per host form a time × metric matrix that looks like an image. A Convolutional Neural Network trained on these \"metric heatmaps\" detects complex, multi-metric degradation patterns — such as the specific combination of rising CPU, flat memory, and oscillating disk I/O that precedes a particular failure mode — that univariate thresholds and even multivariate statistical models cannot capture.\n\nDocumented **Data sources**: `index=infra sourcetype=collectd_http` or `sourcetype=otel:metrics` or `sourcetype=vmware:perf:*`. **App/TA** (typical add-on context): Splunk Deep Learning Toolkit (DSDL), custom Python container with TensorFlow/PyTorch. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: infra.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=infra. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, metric_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pivots fields for charting with `xyseries`.\n• Fills null values with `fillnull`.\n• Pipeline stage (see **Host-Metric Heatmap Anomaly via CNN (DSDL)**): apply pretrained_metric_cnn_dsdl\n• Renames fields with `rename` for clarity or joins.\n• Filters the current rows with `where cnn_score > 0.80` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Host-Metric Heatmap Anomaly via CNN (DSDL)**): table _time, host, cnn_score, severity\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (metric × time for anomalous hosts), Line chart (CNN anomaly score over time), Table (top anomalous hosts), Image (reconstructed vs actual heatmap for investigation).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how we use and protect smart assistant and model calls so cost, quality, and misuse stay in a range we are willing to own.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.4.15",
              "n": "MLTK Model Drift and Performance Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "All ML models deployed in production (security detections, capacity forecasts, anomaly detectors) degrade as data distributions shift. Without drift monitoring, a model that was 95% accurate at deployment may silently drop to 60% accuracy over months. This meta-use-case tracks model health metrics so teams know when to retrain before detection quality degrades.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk Deep Learning Toolkit (DSDL)",
              "d": "MLTK model artifacts, custom model performance logs (`sourcetype=mltk:model:metrics`)",
              "q": "index=ml_ops sourcetype=\"mltk:model:metrics\"\n| bin _time span=1d\n| stats avg(precision) as precision, avg(recall) as recall, avg(f1_score) as f1, latest(training_date) as last_trained by model_name, _time\n| eval model_age_days=round((now() - strptime(last_trained, \"%Y-%m-%d\")) / 86400, 0)\n| eval drift_alert=case(\n    f1 < 0.70, \"critical_drift\",\n    f1 < 0.80, \"moderate_drift\",\n    model_age_days > 90, \"stale_model\",\n    true(), \"healthy\")\n| where drift_alert != \"healthy\"\n| table _time, model_name, precision, recall, f1, model_age_days, drift_alert\n| sort drift_alert, -model_age_days",
              "m": "Instrument all deployed MLTK and DSDL models to emit performance metrics (precision, recall, F1 score, reconstruction error mean/std, prediction distribution) to a dedicated `ml_ops` index. For supervised models, compare predictions against ground-truth labels (analyst dispositions, confirmed incidents). For unsupervised models, track anomaly rate stability and reconstruction error distribution. Alert data science teams when F1 drops below 0.80 or model age exceeds 90 days. Maintain a model registry KV store with model name, version, training date, data hash, and performance baseline. Automate retraining pipelines for models that drift past thresholds. Dashboard the health of all production ML models for ML platform governance.",
              "z": "Line chart (F1 score over time per model), Table (model registry with drift status), Bar chart (model age distribution), Single value (models requiring retraining).",
              "kfp": "Planned quarterly retrain windows routinely elevate last_trained_age_days until the new champion lands; require correlation with change calendars before paging executives. A/B holdout cohorts engineered to see harder cases can show lower active accuracy by design; join experiment metadata before treating drift as production failure. Holiday seasonality and fiscal close cycles shift legitimate priors; annotate known seasonal regimes so severity downgrades follow policy rather than ad hoc muting. Marketing campaigns may induce expected population shifts stakeholders already approved; treat alerts as confirmation, not surprises, when campaign tags are present. Dataset migrations and backfills can temporarily inflate cardinality and PSI until baselines refresh; gate alerts using migration tickets and post-migration stabilization hours. Shadow models that score a biased slice of traffic can look artificially strong; validate cohort parity before promotion narratives. Short outages in label ingest mimic long label_lag_days until queues catch up; verify pipeline lag separately from model quality. REST snapshot delays after upgrades can inflate staleness metrics until the first successful poll; compare against bundle versions. Canary search heads running experimental Python pins may emit divergent accuracy telemetry; segregate environments in inventory. Internal penetration tests that replay historical features can spike KS statistics without threatening production; correlate with security calendars.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk Deep Learning Toolkit (DSDL).\n• Ensure the following data sources are available: MLTK model artifacts, custom model performance logs (`sourcetype=mltk:model:metrics`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstrument all deployed MLTK and DSDL models to emit performance metrics (precision, recall, F1 score, reconstruction error mean/std, prediction distribution) to a dedicated `ml_ops` index. For supervised models, compare predictions against ground-truth labels (analyst dispositions, confirmed incidents). For unsupervised models, track anomaly rate stability and reconstruction error distribution. Alert data science teams when F1 drops below 0.80 or model age exceeds 90 days. Maintain a model regi…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ml_ops sourcetype=\"mltk:model:metrics\"\n| bin _time span=1d\n| stats avg(precision) as precision, avg(recall) as recall, avg(f1_score) as f1, latest(training_date) as last_trained by model_name, _time\n| eval model_age_days=round((now() - strptime(last_trained, \"%Y-%m-%d\")) / 86400, 0)\n| eval drift_alert=case(\n    f1 < 0.70, \"critical_drift\",\n    f1 < 0.80, \"moderate_drift\",\n    model_age_days > 90, \"stale_model\",\n    true(), \"healthy\")\n| where drift_alert != \"healthy\"\n| table _time, model_name, precision, recall, f1, model_age_days, drift_alert\n| sort drift_alert, -model_age_days\n```\n\nUnderstanding this SPL\n\n**MLTK Model Drift and Performance Monitoring** — All ML models deployed in production (security detections, capacity forecasts, anomaly detectors) degrade as data distributions shift. Without drift monitoring, a model that was 95% accurate at deployment may silently drop to 60% accuracy over months. This meta-use-case tracks model health metrics so teams know when to retrain before detection quality degrades.\n\nDocumented **Data sources**: MLTK model artifacts, custom model performance logs (`sourcetype=mltk:model:metrics`). **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Deep Learning Toolkit (DSDL). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ml_ops; **sourcetype**: mltk:model:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ml_ops, sourcetype=\"mltk:model:metrics\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by model_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **model_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **drift_alert** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drift_alert != \"healthy\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MLTK Model Drift and Performance Monitoring**): table _time, model_name, precision, recall, f1, model_age_days, drift_alert\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (F1 score over time per model), Table (model registry with drift status), Bar chart (model age distribution), Single value (models requiring retraining).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch the kitchen that turns everyday signals into predictions, like a chef whose suppliers quietly swap grains without updating the recipe card. When the mix shifts, the soup still simmers, but the flavor wanders and guests stop coming back. We raise a clear flag when the blend drifts so the recipe gets retested before anyone serves a bad bowl.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 15,
            "none": 0
          }
        },
        {
          "i": "13.5",
          "n": "OpenTelemetry, Observability Pipelines & SRE Patterns",
          "u": [
            {
              "i": "13.5.1",
              "n": "Trace Duration Anomaly and Slow Transaction Detection",
              "c": "critical",
              "f": "advanced",
              "v": "A deployment that adds 200ms to a critical checkout flow costs revenue every second it runs. Static latency thresholds generate noise during normal traffic variation and miss gradual regressions. By baselining p50/p95/p99 trace duration per service and operation, this detection identifies statistically significant latency regressions within minutes of a deployment — before enough users complain to reach support.",
              "t": "Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector, Jaeger",
              "d": "`sourcetype=otel:traces` or APM span data, `index=traces`",
              "q": "index=traces sourcetype=\"otel:traces\"\n| eval duration_ms=duration_nano/1000000\n| bin _time span=15m\n| stats p50(duration_ms) as p50, p95(duration_ms) as p95, p99(duration_ms) as p99, count as span_count by _time, service_name, span_name\n| eventstats avg(p99) as baseline_p99, stdev(p99) as std_p99 by service_name, span_name\n| eval z_score=round((p99 - baseline_p99) / nullif(std_p99, 0), 2)\n| where z_score > 2 AND span_count > 50\n| table _time, service_name, span_name, p50, p95, p99, baseline_p99, z_score, span_count\n| sort -z_score",
              "m": "Ingest OTel trace data via OTLP exporter to Splunk (HEC or Observability Cloud). Calculate duration percentiles per service and operation in 15-minute windows. Baseline using 7-day rolling statistics. Flag operations where p99 exceeds 2 standard deviations above the baseline with sufficient sample size (>50 spans). Correlate with deployment events (cat-12) to identify which release caused the regression. For Splunk APM users, the APM service map provides built-in latency comparison; this UC replicates the pattern for platform-only deployments. Track regressions over time to measure release quality trends.",
              "z": "Line chart (p50/p95/p99 duration per operation over 24 hours), Table (operations with latency regressions), Heatmap (service × operation p99), Bar chart (top 10 slowest operations).",
              "kfp": "Cold-start spikes after a deploy or horizontal scale-out routinely inflate P99 while P50 remains calm because a thin tail of fresh instances serves early requests; pair pod age, readiness success rates, and steady-state windows before paging application owners. Scheduled batch jobs and ETL workflows often emit legitimately long internal spans that are expected under service level objectives tuned for interactive traffic; segregate those workflows with attributes or inventory tiers so they do not page checkout on-call. Real user measurement page-load tails on mobile networks can look like server regressions when the browser or device network is the true bottleneck; compare regional and connection-type slices before blaming microservices. Downstream gRPC clients that enable aggressive retries can stretch client span duration while the root trace still meets external latency promises; review retry budgets and hedging policies alongside trace waterfalls. Distributed locking and leader election paths can show long internal spans during contention that reflect coordination design rather than a surprise outage; validate lock metrics and holder health before deep code rollbacks. Continuous integration canary pipelines that synthesize load can disturb percentile baselines if their spans land in production-like indexes; isolate canary environments. Scheduled load tests and game-day exercises will move P95 and P99 in obvious patterns; suppress alerts using change calendars and labeled environments. Tail-sampling policy edits and collector drops change which slow traces persist, creating apparent latency shifts without any code change; reconcile otel:collector:log signals with trace volume before declaring regressions.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector, Jaeger.\n• Ensure the following data sources are available: `sourcetype=otel:traces` or APM span data, `index=traces`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest OTel trace data via OTLP exporter to Splunk (HEC or Observability Cloud). Calculate duration percentiles per service and operation in 15-minute windows. Baseline using 7-day rolling statistics. Flag operations where p99 exceeds 2 standard deviations above the baseline with sufficient sample size (>50 spans). Correlate with deployment events (cat-12) to identify which release caused the regression. For Splunk APM users, the APM service map provides built-in latency comparison; this UC repl…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=traces sourcetype=\"otel:traces\"\n| eval duration_ms=duration_nano/1000000\n| bin _time span=15m\n| stats p50(duration_ms) as p50, p95(duration_ms) as p95, p99(duration_ms) as p99, count as span_count by _time, service_name, span_name\n| eventstats avg(p99) as baseline_p99, stdev(p99) as std_p99 by service_name, span_name\n| eval z_score=round((p99 - baseline_p99) / nullif(std_p99, 0), 2)\n| where z_score > 2 AND span_count > 50\n| table _time, service_name, span_name, p50, p95, p99, baseline_p99, z_score, span_count\n| sort -z_score\n```\n\nUnderstanding this SPL\n\n**Trace Duration Anomaly and Slow Transaction Detection** — A deployment that adds 200ms to a critical checkout flow costs revenue every second it runs. Static latency thresholds generate noise during normal traffic variation and miss gradual regressions. By baselining p50/p95/p99 trace duration per service and operation, this detection identifies statistically significant latency regressions within minutes of a deployment — before enough users complain to reach support.\n\nDocumented **Data sources**: `sourcetype=otel:traces` or APM span data, `index=traces`. **App/TA** (typical add-on context): Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector, Jaeger. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: traces; **sourcetype**: otel:traces. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=traces, sourcetype=\"otel:traces\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service_name, span_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by service_name, span_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where z_score > 2 AND span_count > 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Trace Duration Anomaly and Slow Transaction Detection**): table _time, service_name, span_name, p50, p95, p99, baseline_p99, z_score, span_count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nThis search runs on raw events rather than `tstats`/`mstats` summaries, so no Data Model Acceleration is required for it to return results.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p50/p95/p99 duration per operation over 24 hours), Table (operations with latency regressions), Heatmap (service × operation p99), Bar chart (top 10 slowest operations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch how long important work takes as it moves between systems. When something becomes much slower than its usual pattern for long enough to matter, we raise a clear signal and point teams toward the slow step so they can fix it before customers feel widespread pain.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.2",
              "n": "Trace Error Rate by Service and Operation",
              "c": "critical",
              "f": "intermediate",
              "v": "Error rate is the most immediate signal of service degradation. Tracking error spans (status_code=ERROR) by service, operation, and error type creates accountability: each team sees their service's error contribution. When the checkout service suddenly jumps from 0.1% to 5% errors on the `processPayment` operation, the payment team gets alerted immediately rather than waiting for downstream impact to cascade.",
              "t": "Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector",
              "d": "`sourcetype=otel:traces`, `index=traces`",
              "q": "index=traces sourcetype=\"otel:traces\"\n| eval is_error=if(status_code==\"ERROR\" OR status_code==2, 1, 0)\n| bin _time span=5m\n| stats count as total_spans, sum(is_error) as error_spans by _time, service_name, span_name\n| eval error_rate_pct=round(error_spans*100/total_spans, 2)\n| where error_rate_pct > 1 AND total_spans > 20\n| eventstats avg(error_rate_pct) as baseline_error by service_name, span_name\n| eval error_spike=round(error_rate_pct / nullif(baseline_error, 0), 1)\n| where error_spike > 3 OR error_rate_pct > 5\n| table _time, service_name, span_name, total_spans, error_spans, error_rate_pct, baseline_error, error_spike\n| sort -error_rate_pct",
              "m": "Ingest OTel trace data. Map status codes: OTel status `ERROR` (code=2) and HTTP status codes >=500 in span attributes indicate errors. Calculate error rate per service/operation in 5-minute windows. Alert when error rate exceeds 3x the baseline or crosses an absolute 5% threshold. Enrich with error messages from span events (exception.type, exception.message) to group errors by root cause. Build service ownership lookups to route alerts to the responsible team. Track error rate trends per service over 30 days to measure reliability improvements.",
              "z": "Line chart (error rate per service over 24 hours), Table (services with elevated errors), Bar chart (top error types by volume), Single value (fleet-wide error rate).",
              "kfp": "Synthetic test traffic that deliberately exercises failure paths can elevate error_rate without customer impact; segregate environments with deployment.environment filters and inventory tiers before paging. Chaos engineering drills and game days inject controlled faults that look like production regressions until you read experiment metadata; tie alerts to change calendars and labeled namespaces. Third-party API maintenance windows that return503 responses can spike HTTP-oriented lanes while your code is healthy; track vendor status pages and map dependency names to suppressible routes. Unsampled high-cardinality endpoints such as ad-hoc debug routes may emit noisy exception types that dominate top_exception_type while volume is tiny; enforce minimum span_count floors and cardinality caps at the collector. Kubernetes NodeNotReady events during pod evictions can interrupt sidecar exports and create bursts of ERROR spans that self-heal when the node drains; correlate with kube events before rollback. Canary deployments that send a sliver of traffic through experimental code paths routinely lift error_share for narrow operations; compare rollouts and feature flags. Client-side retries after benign timeouts can duplicate error spans when idempotency is imperfect; examine messaging and HTTP client policies. Load balancer misconfiguration that returns502 for health checks might flood Web lanes without matching server spans; validate probe paths separately. Partial trace ingestion gaps can make denominators shrink and error_rate appear inflated; cross-check raw event volume and forwarder health. Security scanners hammering edge URLs may generate 5xx rows that should not page application teams; maintain scanner allowlists and edge-only routing.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector.\n• Ensure the following data sources are available: `sourcetype=otel:traces`, `index=traces`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest OTel trace data. Map status codes: OTel status `ERROR` (code=2) and HTTP status codes >=500 in span attributes indicate errors. Calculate error rate per service/operation in 5-minute windows. Alert when error rate exceeds 3x the baseline or crosses an absolute 5% threshold. Enrich with error messages from span events (exception.type, exception.message) to group errors by root cause. Build service ownership lookups to route alerts to the responsible team. Track error rate trends per servic…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=traces sourcetype=\"otel:traces\"\n| eval is_error=if(status_code==\"ERROR\" OR status_code==2, 1, 0)\n| bin _time span=5m\n| stats count as total_spans, sum(is_error) as error_spans by _time, service_name, span_name\n| eval error_rate_pct=round(error_spans*100/total_spans, 2)\n| where error_rate_pct > 1 AND total_spans > 20\n| eventstats avg(error_rate_pct) as baseline_error by service_name, span_name\n| eval error_spike=round(error_rate_pct / nullif(baseline_error, 0), 1)\n| where error_spike > 3 OR error_rate_pct > 5\n| table _time, service_name, span_name, total_spans, error_spans, error_rate_pct, baseline_error, error_spike\n| sort -error_rate_pct\n```\n\nUnderstanding this SPL\n\n**Trace Error Rate by Service and Operation** — Error rate is the most immediate signal of service degradation. Tracking error spans (status_code=ERROR) by service, operation, and error type creates accountability: each team sees their service's error contribution. When the checkout service suddenly jumps from 0.1% to 5% errors on the `processPayment` operation, the payment team gets alerted immediately rather than waiting for downstream impact to cascade.\n\nDocumented **Data sources**: `sourcetype=otel:traces`, `index=traces`. **App/TA** (typical add-on context): Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: traces; **sourcetype**: otel:traces. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=traces, sourcetype=\"otel:traces\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_error** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service_name, span_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_rate_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate_pct > 1 AND total_spans > 20` — typically the threshold or rule expression for this monitoring goal.\n• `eventstats` rolls up events into metrics; results are split **by service_name, span_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_spike** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_spike > 3 OR error_rate_pct > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Trace Error Rate by Service and Operation**): table _time, service_name, span_name, total_spans, error_spans, error_rate_pct, baseline_error, error_spike\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nThis search runs on raw events rather than `tstats`/`mstats` summaries, so no Data Model Acceleration is required for it to return results.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (error rate per service over 24 hours), Table (services with elevated errors), Bar chart (top error types by volume), Single value (fleet-wide error rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We treat each online request like a package crossing many conveyor belts. When too many boxes get stamped rejected at one station, we spot which belt and which worker step caused the pile-up, compare it with the usual rejection rate, and raise the alarm before shoppers wonder why deliveries slowed.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.3",
              "n": "Trace Completeness and Orphan Span Detection",
              "c": "high",
              "f": "advanced",
              "v": "Incomplete traces — missing parent spans, single-span traces from multi-service flows, or orphaned spans with no root — indicate broken context propagation, missing instrumentation, or sampling inconsistencies. These gaps make distributed debugging impossible exactly when it matters most. Measuring trace completeness quantifies instrumentation quality and identifies which services need propagation fixes.",
              "t": "Splunk Distribution of OpenTelemetry Collector, Splunk Observability Cloud (APM)",
              "d": "`sourcetype=otel:traces`, `index=traces`",
              "q": "index=traces sourcetype=\"otel:traces\"\n| bin _time span=1h\n| stats dc(span_id) as span_count, dc(service_name) as service_count,\n    sum(eval(if(parent_span_id=\"\" OR isnull(parent_span_id), 1, 0))) as root_spans\n    by _time, trace_id\n| eval completeness=case(\n    service_count==1 AND span_count==1, \"single_span\",\n    root_spans==0, \"orphan_no_root\",\n    root_spans>1, \"multiple_roots\",\n    1==1, \"complete\")\n| stats count as trace_count,\n    sum(eval(if(completeness==\"single_span\",1,0))) as single_spans,\n    sum(eval(if(completeness==\"orphan_no_root\",1,0))) as orphans,\n    sum(eval(if(completeness==\"multiple_roots\",1,0))) as multi_root\n    by _time\n| eval completeness_pct=round((trace_count - single_spans - orphans - multi_root)*100/trace_count, 1)\n| table _time, trace_count, completeness_pct, single_spans, orphans, multi_root",
              "m": "Analyze trace structure by examining parent-child span relationships within each trace_id. Classify traces: \"complete\" (single root, proper parent chain), \"single_span\" (only one span — missing downstream instrumentation), \"orphan_no_root\" (no span has an empty parent_span_id — broken propagation), \"multiple_roots\" (more than one root span — context fragmentation). Track completeness percentage over time. Alert when completeness drops below 90%. Identify which services produce the most orphan spans by joining back to service_name. For tail-sampling deployments, validate that sampling decisions are consistent across services (UC-13.3.7 covers sampling rate; this UC covers the resulting trace quality).",
              "z": "Line chart (completeness % over 7 days), Pie chart (trace classification breakdown), Table (services producing most orphan spans), Single value (current completeness %).",
              "kfp": "Intentionally unsampled traces in lower environments can look catastrophically incomplete compared with production; segregate deployment.environment and inventory tiers before paging application owners. Third-party SaaS integrations that never export a root span you can read will inflate root-missing style counts even when customer journeys succeed; model vendor hops as external roots with documentation rather than auto-rollback. Batch jobs that legitimately emit leaf spans without a durable root by design should carry workflow attributes and inventory exclusions so nightly work does not page checkout on-call. Partial instrumentation rollouts during canary weeks routinely elevate orphan_ratio until every service adopts the same propagation library; track rollout metadata and downgrade severity for labeled canary fleets. OpenTelemetry collector autoscale events can briefly spike drops during cold start; require sustained dropped_span_count dwell and cross-lane collector metrics before declaring incidents. Rare always-on sampling tests that fabricate broken parent links for chaos drills will trip structural detectors; tie experiments to labeled namespaces. Dual-write bridges that duplicate spans with different identifier casings can create false orphan math until normalization lands; reconcile field transforms. Security proxies that scrub trace headers for privacy can look like application bugs; validate policy intent before rewriting microservice code. Index maintenance windows that shrink searchable history can make traces look suddenly fragmented when older parents age out; align retention communications with completeness dashboards.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector, Splunk Observability Cloud (APM).\n• Ensure the following data sources are available: `sourcetype=otel:traces`, `index=traces`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAnalyze trace structure by examining parent-child span relationships within each trace_id. Classify traces: \"complete\" (single root, proper parent chain), \"single_span\" (only one span — missing downstream instrumentation), \"orphan_no_root\" (no span has an empty parent_span_id — broken propagation), \"multiple_roots\" (more than one root span — context fragmentation). Track completeness percentage over time. Alert when completeness drops below 90%. Identify which services produce the most orphan sp…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=traces sourcetype=\"otel:traces\"\n| bin _time span=1h\n| stats dc(span_id) as span_count, dc(service_name) as service_count,\n    sum(eval(if(parent_span_id=\"\" OR isnull(parent_span_id), 1, 0))) as root_spans\n    by _time, trace_id\n| eval completeness=case(\n    service_count==1 AND span_count==1, \"single_span\",\n    root_spans==0, \"orphan_no_root\",\n    root_spans>1, \"multiple_roots\",\n    1==1, \"complete\")\n| stats count as trace_count,\n    sum(eval(if(completeness==\"single_span\",1,0))) as single_spans,\n    sum(eval(if(completeness==\"orphan_no_root\",1,0))) as orphans,\n    sum(eval(if(completeness==\"multiple_roots\",1,0))) as multi_root\n    by _time\n| eval completeness_pct=round((trace_count - single_spans - orphans - multi_root)*100/trace_count, 1)\n| table _time, trace_count, completeness_pct, single_spans, orphans, multi_root\n```\n\nUnderstanding this SPL\n\n**Trace Completeness and Orphan Span Detection** — Incomplete traces — missing parent spans, single-span traces from multi-service flows, or orphaned spans with no root — indicate broken context propagation, missing instrumentation, or sampling inconsistencies. These gaps make distributed debugging impossible exactly when it matters most. Measuring trace completeness quantifies instrumentation quality and identifies which services need propagation fixes.\n\nDocumented **Data sources**: `sourcetype=otel:traces`, `index=traces`. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector, Splunk Observability Cloud (APM). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: traces; **sourcetype**: otel:traces. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=traces, sourcetype=\"otel:traces\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, trace_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **completeness** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **completeness_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Trace Completeness and Orphan Span Detection**): table _time, trace_count, completeness_pct, single_spans, orphans, multi_root\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (completeness % over 7 days), Pie chart (trace classification breakdown), Table (services producing most orphan spans), Single value (current completeness %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We treat a distributed trace like a relay race where every runner must hand off the same baton; when someone grabs a stick with no teammate attached, or the official record loses the team number at a handoff, the race photo stops telling the truth, so we spotlight those broken handoffs before leaders argue about the wrong cause.",
              "mtype": [
                "Fault",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.4",
              "n": "Cross-Service Dependency Map from Traces",
              "c": "high",
              "f": "advanced",
              "v": "Service-to-service dependencies are often undocumented, especially in microservice architectures where teams add new API calls without updating architecture diagrams. Auto-discovering the dependency graph from trace data reveals the actual topology — including unexpected edges that represent security risks (why is the frontend calling the billing database directly?) or change risks (this \"isolated\" service actually has 12 downstream dependents). New edges appearing after deployments are an immediate change management signal.",
              "t": "Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector",
              "d": "`sourcetype=otel:traces`, `index=traces`",
              "q": "index=traces sourcetype=\"otel:traces\" parent_span_id=* parent_span_id!=\"\"\n| join type=left parent_span_id [search index=traces sourcetype=\"otel:traces\"\n    | rename span_id as parent_span_id, service_name as parent_service\n    | fields parent_span_id, parent_service]\n| where isnotnull(parent_service) AND parent_service!=service_name\n| eval edge=parent_service.\" → \".service_name\n| bin _time span=1d\n| stats count as call_count, avg(duration_nano)/1000000 as avg_latency_ms, dc(trace_id) as trace_count by _time, parent_service, service_name, edge\n| eventstats earliest(_time) as first_seen by edge\n| eval is_new_edge=if(first_seen > relative_time(now(), \"-7d\"), \"NEW\", \"known\")\n| where is_new_edge=\"NEW\" OR call_count > 100\n| sort is_new_edge, -call_count\n| table _time, edge, parent_service, service_name, call_count, avg_latency_ms, is_new_edge, first_seen",
              "m": "Extract parent-child service relationships from traces by joining each span's `parent_span_id` to its parent span's `service_name`. Filter cross-service edges (parent_service != service_name). Aggregate daily to build the dependency graph. Track `first_seen` per edge to detect new dependencies appearing after deployments. Alert on new edges (services communicating for the first time) as a change/security signal. For Splunk APM users, the Service Map provides this visualization natively; this UC replicates the detection for platform-only deployments. Export the edge list to a network graph visualization or integrate with Splunk ITSI service dependency trees.",
              "z": "Force-directed graph (service dependency map), Table (new edges detected this week), Bar chart (top dependencies by call volume), Line chart (edge count trend — growing complexity indicator).",
              "kfp": "Legitimate canary deployments and blue-green cutovers often introduce temporary edges that resemble shadow connectivity until traffic fully shifts; require sustained dwell and release correlation before paging executives. Planned strangler-fig migrations intentionally duplicate old and new edges for weeks; baseline CSV annotations should mark those tuples as expected_duplicate with an expiry date. Scheduled nightly batch workloads create burst fan-out that looks like an incident if you ignore cron context; compare to the same clock hour on prior days. On-demand partner APIs that fire only during monthly closes can look like fresh edges when they wake up; enrich with contract calendars. Vendor IP rotation and DNS load balancing can change peer.service strings while the logical dependency is unchanged; collapse DNS noise with normalization rules before alerting. Ephemeral integration tests in shared staging clusters may fabricate rare edges; segregate environments in lookup tables. Service mesh debug endpoints occasionally emit diagnostic client spans that resemble new externals; filter known mesh housekeeping operations. Regional failover drills can swing edges across peering hops without any application code deploy; treat drills as planned when change calendars include them.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector.\n• Ensure the following data sources are available: `sourcetype=otel:traces`, `index=traces`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExtract parent-child service relationships from traces by joining each span's `parent_span_id` to its parent span's `service_name`. Filter cross-service edges (parent_service != service_name). Aggregate daily to build the dependency graph. Track `first_seen` per edge to detect new dependencies appearing after deployments. Alert on new edges (services communicating for the first time) as a change/security signal. For Splunk APM users, the Service Map provides this visualization natively; this UC …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=traces sourcetype=\"otel:traces\" parent_span_id=* parent_span_id!=\"\"\n| join type=left parent_span_id [search index=traces sourcetype=\"otel:traces\"\n    | rename span_id as parent_span_id, service_name as parent_service\n    | fields parent_span_id, parent_service]\n| where isnotnull(parent_service) AND parent_service!=service_name\n| eval edge=parent_service.\" → \".service_name\n| bin _time span=1d\n| stats count as call_count, avg(duration_nano)/1000000 as avg_latency_ms, dc(trace_id) as trace_count by _time, parent_service, service_name, edge\n| eventstats earliest(_time) as first_seen by edge\n| eval is_new_edge=if(first_seen > relative_time(now(), \"-7d\"), \"NEW\", \"known\")\n| where is_new_edge=\"NEW\" OR call_count > 100\n| sort is_new_edge, -call_count\n| table _time, edge, parent_service, service_name, call_count, avg_latency_ms, is_new_edge, first_seen\n```\n\nUnderstanding this SPL\n\n**Cross-Service Dependency Map from Traces** — Service-to-service dependencies are often undocumented, especially in microservice architectures where teams add new API calls without updating architecture diagrams. Auto-discovering the dependency graph from trace data reveals the actual topology — including unexpected edges that represent security risks (why is the frontend calling the billing database directly?) or change risks (this \"isolated\" service actually has 12 downstream dependents). New edges appearing after…\n\nDocumented **Data sources**: `sourcetype=otel:traces`, `index=traces`. **App/TA** (typical add-on context): Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: traces; **sourcetype**: otel:traces. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=traces, sourcetype=\"otel:traces\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnotnull(parent_service) AND parent_service!=service_name` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **edge** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, parent_service, service_name, edge** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by edge** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **is_new_edge** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_new_edge=\"NEW\" OR call_count > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Cross-Service Dependency Map from Traces**): table _time, edge, parent_service, service_name, call_count, avg_latency_ms, is_new_edge, first_seen\n\nThis search runs on raw events rather than `tstats`/`mstats` summaries, so no Data Model Acceleration is required for it to return results.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Force-directed graph (service dependency map), Table (new edges detected this week), Bar chart (top dependencies by call volume), Line chart (edge count trend — growing complexity indicator).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "Picture a city map where new alleys appear overnight and familiar avenues vanish. We watch how software neighborhoods connect, flag surprise roads to sensitive places, noisy intersections with dozens of new exits, and missing bridges that used to carry rush-hour traffic, then we ask who approved the change before the whole city grid locks up.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.5",
              "n": "Log-to-Trace Correlation Coverage",
              "c": "medium",
              "f": "intermediate",
              "v": "Logs without trace context are isolated events that can't be correlated to specific user requests. When only 30% of log events carry `trace_id` and `span_id`, debugging a failed request requires manual timestamp-based guesswork across services. Measuring log-trace correlation coverage per service quantifies instrumentation maturity and identifies which teams need to add OTel context propagation to their logging frameworks.",
              "t": "Splunk Distribution of OpenTelemetry Collector, any log framework with OTel integration",
              "d": "Application logs with optional `trace_id` and `span_id` fields",
              "q": "index=app_logs\n| eval has_trace=if(isnotnull(trace_id) AND trace_id!=\"\" AND trace_id!=\"0000000000000000\", 1, 0)\n| eval has_span=if(isnotnull(span_id) AND span_id!=\"\" AND span_id!=\"0000000000000000\", 1, 0)\n| bin _time span=1d\n| stats count as total_logs, sum(has_trace) as with_trace, sum(has_span) as with_span by _time, service_name, sourcetype\n| eval trace_coverage_pct=round(with_trace*100/total_logs, 1)\n| eval span_coverage_pct=round(with_span*100/total_logs, 1)\n| table _time, service_name, sourcetype, total_logs, trace_coverage_pct, span_coverage_pct\n| sort trace_coverage_pct",
              "m": "Modern logging frameworks (Log4j2, Logback, Python logging, Serilog) support automatic injection of OTel trace context (`trace_id`, `span_id`, `trace_flags`) into log events via MDC/context propagation. The OTel SDK logging bridge also carries this context. Measure what percentage of log events per service contain valid trace IDs (not null, not zero-padded). Target: 80%+ for instrumented services. Services below 50% likely haven't configured their logging framework's OTel integration. Provide a weekly instrumentation scorecard by team. Exclude infrastructure logs (syslog, container runtime) from the calculation as they're not expected to carry trace context. Track improvement over time to measure observability maturity program progress.",
              "z": "Bar chart (trace coverage % by service — sorted ascending), Line chart (fleet-wide coverage trend over 90 days), Table (services with lowest coverage), Single value (fleet average coverage %).",
              "kfp": "Bootstrap and shutdown log storms often lack active spans by design; require warn-level subsets before paging product teams on whole-log coverage. Infrastructure agents, kernel ring buffers, and CI runner chatter legitimately omit trace context; segregate sourcetypes or service keys in the lookup. Short HEC outages can zero one lane while APM continues; pair with inputstatus dashboards before blaming instrumentation. Canary clusters running reduced instrumentation inflate apparent gaps; annotate deployment_environment rows. Header proxy fields extracted only on a fraction of services make header_propagation_proxy look worse than actual mesh behavior; align FIELDALIAS. RUM ad blockers and privacy browsers strip trace headers; exclude known kiosk fleets. Log sampling policies targeting INFO volume may accidentally undersample WARN lines in some frameworks; validate severity-aware sampling. Duplicate indexing of the same log line across mirrored clusters doubles denominators; deduplicate before SLO reviews. Third-party SaaS webhooks indexed as app_logs without trace contracts will never meet ninety percent targets; mark them out-of-scope in inventory notes. Performance datamodel tstats rows map hosts to processes, not always microservice names; treat perf_rows as supplemental context. Clock skew across regions misaligns fifteen-minute buckets; monitor NTP. Manual ad hoc trace_id searches during incidents can skew rolling streamstats; isolate investigative sessions into separate indexes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector, any log framework with OTel integration.\n• Ensure the following data sources are available: Application logs with optional `trace_id` and `span_id` fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nModern logging frameworks (Log4j2, Logback, Python logging, Serilog) support automatic injection of OTel trace context (`trace_id`, `span_id`, `trace_flags`) into log events via MDC/context propagation. The OTel SDK logging bridge also carries this context. Measure what percentage of log events per service contain valid trace IDs (not null, not zero-padded). Target: 80%+ for instrumented services. Services below 50% likely haven't configured their logging framework's OTel integration. Provide a …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app_logs\n| eval has_trace=if(isnotnull(trace_id) AND trace_id!=\"\" AND trace_id!=\"0000000000000000\", 1, 0)\n| eval has_span=if(isnotnull(span_id) AND span_id!=\"\" AND span_id!=\"0000000000000000\", 1, 0)\n| bin _time span=1d\n| stats count as total_logs, sum(has_trace) as with_trace, sum(has_span) as with_span by _time, service_name, sourcetype\n| eval trace_coverage_pct=round(with_trace*100/total_logs, 1)\n| eval span_coverage_pct=round(with_span*100/total_logs, 1)\n| table _time, service_name, sourcetype, total_logs, trace_coverage_pct, span_coverage_pct\n| sort trace_coverage_pct\n```\n\nUnderstanding this SPL\n\n**Log-to-Trace Correlation Coverage** — Logs without trace context are isolated events that can't be correlated to specific user requests. When only 30% of log events carry `trace_id` and `span_id`, debugging a failed request requires manual timestamp-based guesswork across services. Measuring log-trace correlation coverage per service quantifies instrumentation maturity and identifies which teams need to add OTel context propagation to their logging frameworks.\n\nDocumented **Data sources**: Application logs with optional `trace_id` and `span_id` fields. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector, any log framework with OTel integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app_logs.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app_logs. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **has_trace** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **has_span** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service_name, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **trace_coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **span_coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Log-to-Trace Correlation Coverage**): table _time, service_name, sourcetype, total_logs, trace_coverage_pct, span_coverage_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (trace coverage % by service — sorted ascending), Line chart (fleet-wide coverage trend over 90 days), Table (services with lowest coverage), Single value (fleet average coverage %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We check that your error logs still carry the same secret thread ID as your traces so engineers can jump from a scary line straight into the full story in Splunk APM instead of guessing times across systems.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.6",
              "n": "Trace Fanout and Depth Anomaly",
              "c": "high",
              "f": "advanced",
              "v": "Traces with unusually high span counts or deep nesting reveal N+1 query patterns, recursive service calls, and runaway microservice chains that consume disproportionate resources. A single API call generating 10,000 spans indicates a loop or unbounded fan-out that impacts both application performance and observability infrastructure cost. Detecting these \"mega-traces\" prevents both performance degradation and observability platform overload.",
              "t": "Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector",
              "d": "`sourcetype=otel:traces`, `index=traces`",
              "q": "index=traces sourcetype=\"otel:traces\"\n| stats count as span_count, dc(service_name) as service_count, sum(duration_nano)/1000000 as total_duration_ms by trace_id\n| eventstats avg(span_count) as avg_spans, stdev(span_count) as std_spans\n| eval span_z_score=round((span_count - avg_spans) / if(std_spans==0,null(),std_spans), 2)\n| eval anomaly_type=case(\n    span_count > 1000, \"mega_trace\",\n    span_z_score > 3, \"high_fanout\",\n    service_count > 15, \"wide_fanout\",\n    1==1, null())\n| where isnotnull(anomaly_type)\n| sort -span_count\n| head 100\n| table trace_id, anomaly_type, span_count, service_count, total_duration_ms, span_z_score",
              "m": "Aggregate span counts per `trace_id` to identify traces with unusually high fan-out. Traces exceeding 1,000 spans are \"mega-traces\" that likely indicate N+1 queries or unbounded pagination loops. Traces touching more than 15 services signal wide fan-out across the architecture. Calculate z-scores against the population to detect statistically anomalous traces. For each anomalous trace, identify the service and operation that generates the most child spans — this is the fan-out origin to investigate. Common root causes: ORM lazy loading in loops, recursive microservice calls without depth limits, batch jobs that create per-item spans. Alert when mega-traces exceed 5 per hour. Feed findings to development teams with specific trace IDs for investigation.",
              "z": "Histogram (span count distribution with anomaly threshold), Table (anomalous traces with details), Bar chart (top services producing high-fanout traces), Single value (mega-traces in last hour).",
              "kfp": "Legitimate batch importers and nightly reconcilers often emit wide traces on purpose; segregate them with deployment.environment, cluster_tier, or endpoint_key notes in apm_endpoint_span_slo_inventory.csv before paging interactive on-call. GraphQL and gRPC streaming workloads can show high span_count without relational N plus one; require db.system correlation before assuming ORM guilt. Aggressive head sampling and tail sampling policy edits change which mega-traces survive; reconcile sampling dashboards before muting this control. Double instrumentation during canary SDK pairs can flag dup_span_cluster until versions converge; annotate releases and downgrade duplicate severity when expected. Partial parent_span_id loss after mesh upgrades inflates apparent depth; pair with UC-13.5.3 completeness analytics before deep recursion rollbacks. Metric rollup lag during indexer maintenance can make span_count disagree across lanes; verify index=_internal forwarder health. Load tests and chaos drills without labels can mimic production anti-patterns; require labeled environments. ITSI catalog rows are advisory; stale pattern_service keys cause false confidence—refresh GitOps-owned CSVs quarterly. Performance datamodel tstats rows map processes to services imperfectly; treat perf_service_rows as supplemental context only.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector.\n• Ensure the following data sources are available: `sourcetype=otel:traces`, `index=traces`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate span counts per `trace_id` to identify traces with unusually high fan-out. Traces exceeding 1,000 spans are \"mega-traces\" that likely indicate N+1 queries or unbounded pagination loops. Traces touching more than 15 services signal wide fan-out across the architecture. Calculate z-scores against the population to detect statistically anomalous traces. For each anomalous trace, identify the service and operation that generates the most child spans — this is the fan-out origin to investig…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=traces sourcetype=\"otel:traces\"\n| stats count as span_count, dc(service_name) as service_count, sum(duration_nano)/1000000 as total_duration_ms by trace_id\n| eventstats avg(span_count) as avg_spans, stdev(span_count) as std_spans\n| eval span_z_score=round((span_count - avg_spans) / if(std_spans==0,null(),std_spans), 2)\n| eval anomaly_type=case(\n    span_count > 1000, \"mega_trace\",\n    span_z_score > 3, \"high_fanout\",\n    service_count > 15, \"wide_fanout\",\n    1==1, null())\n| where isnotnull(anomaly_type)\n| sort -span_count\n| head 100\n| table trace_id, anomaly_type, span_count, service_count, total_duration_ms, span_z_score\n```\n\nUnderstanding this SPL\n\n**Trace Fanout and Depth Anomaly** — Traces with unusually high span counts or deep nesting reveal N+1 query patterns, recursive service calls, and runaway microservice chains that consume disproportionate resources. A single API call generating 10,000 spans indicates a loop or unbounded fan-out that impacts both application performance and observability infrastructure cost. Detecting these \"mega-traces\" prevents both performance degradation and observability platform overload.\n\nDocumented **Data sources**: `sourcetype=otel:traces`, `index=traces`. **App/TA** (typical add-on context): Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: traces; **sourcetype**: otel:traces. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=traces, sourcetype=\"otel:traces\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by trace_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• `eval` defines or adjusts **span_z_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **anomaly_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(anomaly_type)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Pipeline stage (see **Trace Fanout and Depth Anomaly**): table trace_id, anomaly_type, span_count, service_count, total_duration_ms, span_z_score\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (span count distribution with anomaly threshold), Table (anomalous traces with details), Bar chart (top services producing high-fanout traces), Single value (mega-traces in last hour).",
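\n\nOptional follow-up search (illustrative)\nA minimal sketch that is not part of the original entry: it locates the fan-out origin inside each mega-trace. It assumes spans carry an `operation_name` attribute alongside `service_name`; rename both to whatever your span schema actually uses.\n\n```spl\nindex=traces sourcetype=\"otel:traces\"\n| stats count as span_count by trace_id, service_name, operation_name\n| eventstats sum(span_count) as trace_spans by trace_id\n| where trace_spans > 1000\n| sort 0 trace_id, -span_count\n| dedup trace_id\n| table trace_id, trace_spans, service_name, operation_name, span_count\n```\n\nThe `dedup trace_id` after the sort keeps only the heaviest service/operation pair per trace, which is the candidate fan-out origin described in Step 1.",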
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We count how many steps one request fans out across your systems and how deep the call stack grows. When a single click quietly spawns hundreds of tiny database calls, an overly long chain of services, or suspicious repeats, we raise a clear signal so teams fix the design before bills and slowdowns grow.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.7",
              "n": "Splunk APM Service Map Health (RED Metrics)",
              "c": "critical",
              "f": "intermediate",
              "v": "Splunk APM's service map provides real-time Request rate, Error rate, and Duration (RED) metrics per service and endpoint. Ingesting these metrics into Splunk Enterprise enables correlation with infrastructure data, security events, and business metrics that live outside Observability Cloud — creating a unified view that neither platform provides alone. Degrading RED metrics in APM can trigger Splunk Enterprise workflows, populate ITSI service trees, or enrich ES risk scores.",
              "t": "Splunk Observability Cloud (APM), Splunk Add-on for Splunk Observability Cloud",
              "d": "Splunk APM service metrics via API or OTel Collector relay, `sourcetype=signalfx:apm:metrics`",
              "q": "index=observability sourcetype=\"signalfx:apm:metrics\"\n| bin _time span=5m\n| stats avg(request_rate) as req_rate, avg(error_rate) as err_rate, avg(p99_duration_ms) as p99_ms by _time, service_name, environment\n| eventstats avg(err_rate) as baseline_err, avg(p99_ms) as baseline_p99 by service_name\n| eval err_spike=round(err_rate / nullif(baseline_err, 0), 1)\n| eval latency_spike=round(p99_ms / nullif(baseline_p99, 0), 1)\n| where err_spike > 3 OR latency_spike > 2 OR err_rate > 5\n| table _time, service_name, environment, req_rate, err_rate, err_spike, p99_ms, latency_spike\n| sort -err_spike",
              "m": "Export Splunk APM metrics to Splunk Enterprise via the OTel Collector (using the SignalFx exporter → Splunk HEC pipeline) or via the Observability Cloud API with a scripted input. Key metrics: `service.request.count` (rate), `service.request.duration.ns.p99` (latency), `service.error.count` (errors). Calculate RED metrics per service in 5-minute windows. Compare against rolling baselines to detect spikes. For ITSI integration, map APM services to ITSI service entities and feed RED metrics as KPIs. For ES integration, generate risk events when critical services show sustained error spikes. Track RED metrics trend over 30 days to measure service reliability improvement.",
              "z": "Table (service health matrix — green/yellow/red by RED metric), Line chart (RED metrics per service over 24 hours), Gauge (error rate per critical service), Heatmap (service × time error rate).",
              "kfp": "Chaos engineering tests that inject coordinated failure, latency, and traffic throttling can trip simultaneous RED gates without customer impact; segregate labeled namespaces and change calendars before paging service owners. Scheduled batch windows that legitimately collapse request rate can look like a rate crisis while error share is meaningless on tiny denominators; require roll_rate_rps floors and inventory tier filters for interactive paths. Machine-learning or A/B model rollouts sometimes shift traffic between operations and split metrics across span names, creating apparent rate drops on one operation while another absorbs load; validate experiment metadata before rollback. Sample-rate or tail-sampling edits change histogram populations and can inflate p99_ms or deflate rate without code changes; reconcile collector policies and compare raw span2.metrics with MonitorMetricSets. Weekend or holiday traffic shapes can diverge from weekday baselines and move p99_ratio without defects; compare like-for-like seasons or use environment-specific baselines. Canary analysis windows with partial fleet traffic routinely depress rate_rps while errors concentrate on canary instances; pair with progressive delivery dashboards. Third-party maintenance that returns synthetic health errors on integration edges can raise error_rate without core service bugs; map vendor routes and suppressible dependencies. Autoscaling cold-start minutes can depress measured throughput while latency tails spike; correlate kube events before simultaneous RED pages. Mis-joined inventory rows that duplicate service keys can distort slo_burn_rate context; audit apm_service_inventory.csv keys quarterly.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Observability Cloud (APM), Splunk Add-on for Splunk Observability Cloud.\n• Ensure the following data sources are available: Splunk APM service metrics via API or OTel Collector relay, `sourcetype=signalfx:apm:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport Splunk APM metrics to Splunk Enterprise via the OTel Collector (using the SignalFx exporter → Splunk HEC pipeline) or via the Observability Cloud API with a scripted input. Key metrics: `service.request.count` (rate), `service.request.duration.ns.p99` (latency), `service.error.count` (errors). Calculate RED metrics per service in 5-minute windows. Compare against rolling baselines to detect spikes. For ITSI integration, map APM services to ITSI service entities and feed RED metrics as KPI…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=observability sourcetype=\"signalfx:apm:metrics\"\n| bin _time span=5m\n| stats avg(request_rate) as req_rate, avg(error_rate) as err_rate, avg(p99_duration_ms) as p99_ms by _time, service_name, environment\n| eventstats avg(err_rate) as baseline_err, avg(p99_ms) as baseline_p99 by service_name\n| eval err_spike=round(err_rate / nullif(baseline_err, 0), 1)\n| eval latency_spike=round(p99_ms / nullif(baseline_p99, 0), 1)\n| where err_spike > 3 OR latency_spike > 2 OR err_rate > 5\n| table _time, service_name, environment, req_rate, err_rate, err_spike, p99_ms, latency_spike\n| sort -err_spike\n```\n\nUnderstanding this SPL\n\n**Splunk APM Service Map Health (RED Metrics)** — Splunk APM's service map provides real-time Request rate, Error rate, and Duration (RED) metrics per service and endpoint. Ingesting these metrics into Splunk Enterprise enables correlation with infrastructure data, security events, and business metrics that live outside Observability Cloud — creating a unified view that neither platform provides alone. Degrading RED metrics in APM can trigger Splunk Enterprise workflows, populate ITSI service trees, or enrich ES risk scores.\n\nDocumented **Data sources**: Splunk APM service metrics via API or OTel Collector relay, `sourcetype=signalfx:apm:metrics`. **App/TA** (typical add-on context): Splunk Observability Cloud (APM), Splunk Add-on for Splunk Observability Cloud. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: observability; **sourcetype**: signalfx:apm:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=observability, sourcetype=\"signalfx:apm:metrics\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service_name, environment** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **err_spike** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **latency_spike** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_spike > 3 OR latency_spike > 2 OR err_rate > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Splunk APM Service Map Health (RED Metrics)**): table _time, service_name, environment, req_rate, err_rate, err_spike, p99_ms, latency_spike\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (service health matrix — green/yellow/red by RED metric), Line chart (RED metrics per service over 24 hours), Gauge (error rate per critical service), Heatmap (service × time error rate).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
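\n\nOptional sustained-breach variant (illustrative)\nA minimal sketch that is not part of the original entry: it pages only when the error rate stays above threshold for three consecutive 5-minute buckets. The window length and the threshold of 5 are assumptions to tune per service tier.\n\n```spl\nindex=observability sourcetype=\"signalfx:apm:metrics\"\n| bin _time span=5m\n| stats avg(error_rate) as err_rate by _time, service_name\n| streamstats window=3 min(err_rate) as sustained_err by service_name\n| where sustained_err > 5\n| table _time, service_name, err_rate, sustained_err\n```\n\nTaking `min` over the rolling window means every one of the last three buckets breached, which suppresses single-bucket blips before anyone is paged.",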
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "Picture a busy restaurant kitchen counting tickets finished, plates sent back, and how long the slowest tables wait compared to a normal night. When finished tickets suddenly fall while send-backs jump and the slowest wait doubles together, we sound the alarm and point crews to the station and which neighbor sent the bad orders, before guests give up and leave.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.8",
              "n": "Splunk APM Database Query Performance",
              "c": "high",
              "f": "intermediate",
              "v": "Database spans in APM traces reveal which SQL queries contribute most to service latency. A single unoptimized query hiding behind 50ms average response time can drive p99 to 2 seconds. APM database query analysis identifies the specific queries and calling services responsible for database-driven latency, providing actionable evidence for query optimization and index tuning — bridging the gap between application and database teams.",
              "t": "Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector",
              "d": "APM database span data, `sourcetype=otel:traces` (db.* attributes)",
              "q": "index=traces sourcetype=\"otel:traces\" span_kind=\"CLIENT\" db_system=*\n| eval query_duration_ms=duration_nano/1000000\n| eval db_statement_short=substr(db_statement, 1, 200)\n| bin _time span=15m\n| stats avg(query_duration_ms) as avg_ms, p95(query_duration_ms) as p95_ms, p99(query_duration_ms) as p99_ms, count as query_count, sum(eval(if(status_code==\"ERROR\",1,0))) as errors by _time, service_name, db_system, db_name, db_statement_short\n| where p99_ms > 500 OR errors > 0\n| eval impact_score=round(query_count * p99_ms / 1000, 1)\n| sort -impact_score\n| table _time, service_name, db_system, db_name, db_statement_short, query_count, avg_ms, p95_ms, p99_ms, errors, impact_score",
              "m": "OTel auto-instrumentation captures database spans with semantic conventions: `db.system` (mysql, postgresql, redis), `db.name`, `db.statement` (sanitized query), and `db.operation` (SELECT, INSERT, etc.). Ingest these spans and analyze query performance per service. The `impact_score` (query_count × p99_ms) prioritizes the queries that contribute most to total service latency — a query running 10,000 times at 100ms p99 has higher impact than one running 10 times at 5,000ms. Alert when any query's p99 exceeds 500ms or when errors appear. Correlate with database monitoring (cat-07) to validate that database-side metrics confirm the latency seen in traces. Truncate `db.statement` for display while preserving enough for identification.",
              "z": "Table (top queries by impact score), Line chart (query p99 trend per service), Bar chart (query count by database system), Scatter plot (query count vs p99 latency — bubble size = impact).",
              "kfp": "Legitimate long-running analytical queries against read replicas or Snowflake warehouses often exceed P99 gates while still being approved in db_query_inventory.csv with approved_flag true and elevated expected_p95_ms; tune lookup rows instead of muting the control. Intentional database warm-up after cold cache events, including restart of shared buffers or buffer pool, produces transient tail latency that clears without code changes; correlate with maintenance tickets. Schema migrations that execute DDL and backfills create expected spikes; suppress using change_calendar metadata or environment tags on non-production. Overnight batch windows may dominate rows_hit and duration_ms slopes without implying interactive regressions; split inventories by workload class. Development sandboxes with tiny datasets can make anti-pattern regexes noisy when engineers paste ad hoc SQL; route dev traffic to separate indexes or downgrade severity using environment. Multi-region replica lag can bimodalize read latency on the same statement shape when traffic flips regions; validate replica health before index rebuilds. ORM-generated SQL that appears as select star may still project internally; confirm with application owners before paging. Redis and MongoDB command shapes differ from SQL; rely on db_system exceptions already encoded and extend inventory for engine-specific approved patterns.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector.\n• Ensure the following data sources are available: APM database span data, `sourcetype=otel:traces` (db.* attributes).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nOTel auto-instrumentation captures database spans with semantic conventions: `db.system` (mysql, postgresql, redis), `db.name`, `db.statement` (sanitized query), and `db.operation` (SELECT, INSERT, etc.). Ingest these spans and analyze query performance per service. The `impact_score` (query_count × p99_ms) prioritizes the queries that contribute most to total service latency — a query running 10,000 times at 100ms p99 has higher impact than one running 10 times at 5,000ms. Alert when any query'…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=traces sourcetype=\"otel:traces\" span_kind=\"CLIENT\" db_system=*\n| eval query_duration_ms=duration_nano/1000000\n| eval db_statement_short=substr(db_statement, 1, 200)\n| bin _time span=15m\n| stats avg(query_duration_ms) as avg_ms, p95(query_duration_ms) as p95_ms, p99(query_duration_ms) as p99_ms, count as query_count, sum(eval(if(status_code==\"ERROR\",1,0))) as errors by _time, service_name, db_system, db_name, db_statement_short\n| where p99_ms > 500 OR errors > 0\n| eval impact_score=round(query_count * p99_ms / 1000, 1)\n| sort -impact_score\n| table _time, service_name, db_system, db_name, db_statement_short, query_count, avg_ms, p95_ms, p99_ms, errors, impact_score\n```\n\nUnderstanding this SPL\n\n**Splunk APM Database Query Performance** — Database spans in APM traces reveal which SQL queries contribute most to service latency. A single unoptimized query hiding behind 50ms average response time can drive p99 to 2 seconds. APM database query analysis identifies the specific queries and calling services responsible for database-driven latency, providing actionable evidence for query optimization and index tuning — bridging the gap between application and database teams.\n\nDocumented **Data sources**: APM database span data, `sourcetype=otel:traces` (db.* attributes). **App/TA** (typical add-on context): Splunk Observability Cloud (APM), Splunk Distribution of OpenTelemetry Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: traces; **sourcetype**: otel:traces. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=traces, sourcetype=\"otel:traces\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **query_duration_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **db_statement_short** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service_name, db_system, db_name, db_statement_short** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p99_ms > 500 OR errors > 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **impact_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Splunk APM Database Query Performance**): table _time, service_name, db_system, db_name, db_statement_short, query_count, avg_ms, p95_ms, p99_ms, errors, impact_score\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top queries by impact score), Line chart (query p99 trend per service), Bar chart (query count by database system), Scatter plot (query count vs p99 latency — bubble size = impact).",
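\n\nOptional query-shape normalization (illustrative)\nA minimal sketch that is not part of the original entry: it collapses literals so the same statement with different values groups into one row. The two `replace` regexes are rough assumptions and will not fit every SQL dialect or engine.\n\n```spl\nindex=traces sourcetype=\"otel:traces\" span_kind=\"CLIENT\" db_system=*\n| eval query_duration_ms=duration_nano/1000000\n| eval query_shape=replace(db_statement, \"'[^']*'\", \"'?'\")\n| eval query_shape=replace(query_shape, \"[0-9]+\", \"?\")\n| stats count as query_count, p99(query_duration_ms) as p99_ms by service_name, db_system, query_shape\n| eval impact_score=round(query_count * p99_ms / 1000, 1)\n| sort -impact_score\n| head 20\n```\n\nGrouping by shape instead of raw text keeps cardinality manageable when ORMs interpolate literals into otherwise identical statements.",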
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We study the questions your applications ask databases and how long those round-trips take. When a few hidden slow queries, risky SQL shapes, or connection waits threaten customer experience, we surface a clear signal and point teams to the exact statement pattern and database engine involved.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.9",
              "n": "Splunk RUM Core Web Vitals Tracking",
              "c": "critical",
              "f": "intermediate",
              "v": "Google's Core Web Vitals — Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS) — directly impact search ranking and user experience. Splunk RUM captures these metrics per page, browser, device type, and geographic location. Tracking CWV trends detects regressions after frontend deployments before they affect SEO ranking or user conversion rates. A 100ms LCP regression on the product page can reduce conversion by 1-2%.",
              "t": "Splunk Observability Cloud (RUM), Splunk RUM agent",
              "d": "Splunk RUM telemetry, `sourcetype=signalfx:rum:metrics`",
              "q": "index=observability sourcetype=\"signalfx:rum:metrics\"\n| bin _time span=1h\n| stats avg(lcp_ms) as avg_lcp, p75(lcp_ms) as p75_lcp, avg(inp_ms) as avg_inp, p75(inp_ms) as p75_inp, avg(cls) as avg_cls, p75(cls) as p75_cls, count as page_views by _time, page_url, browser_name, device_type\n| eval lcp_rating=case(p75_lcp<=2500, \"Good\", p75_lcp<=4000, \"Needs Improvement\", 1==1, \"Poor\")\n| eval inp_rating=case(p75_inp<=200, \"Good\", p75_inp<=500, \"Needs Improvement\", 1==1, \"Poor\")\n| eval cls_rating=case(p75_cls<=0.1, \"Good\", p75_cls<=0.25, \"Needs Improvement\", 1==1, \"Poor\")\n| where lcp_rating!=\"Good\" OR inp_rating!=\"Good\" OR cls_rating!=\"Good\"\n| sort -p75_lcp\n| table _time, page_url, browser_name, device_type, p75_lcp, lcp_rating, p75_inp, inp_rating, p75_cls, cls_rating, page_views",
              "m": "Deploy Splunk RUM agent on frontend pages. RUM automatically captures CWV metrics using the web-vitals library. Ingest RUM data into Splunk via the Observability Cloud API or OTel Collector relay. Google measures CWV at the 75th percentile: LCP ≤2.5s (good), INP ≤200ms (good), CLS ≤0.1 (good). Track p75 values per page URL, browser, and device type (mobile vs desktop — mobile often has worse LCP). Alert frontend teams when any high-traffic page drops from \"Good\" to \"Needs Improvement.\" Compare CWV before and after deployments using deployment markers. Provide weekly CWV reports to product owners with page-level detail and trend direction.",
              "z": "Scorecard (CWV status per top page — green/yellow/red), Line chart (LCP/INP/CLS p75 trend over 30 days), Table (pages with poor CWV), Bar chart (CWV by device type).",
              "kfp": "Browser version rollout can shift seventy-fifth percentile INP and CLS without any server deploy as JIT compilers and scheduling change; segment by major version before paging. Slow ISP geography can elevate TTFB and LCP for honest networks during regional congestion; pair with external network status and downgrade when errors are flat. Scheduled image-quality A/B tests intentionally trade bytes for fidelity and can move LCP for winning cohorts; require experiment tags in RUM attributes to explain shifts. Third-party widget rollouts inject new long tasks and layout shifts that look like your regression until vendor ownership is visible in metadata. Beta route preview cohorts mix internal testers with aggressive extensions and devtools open, inflating JavaScript error rate; exclude labeled preview traffic from executive alerts. Corporate proxies and TLS inspection sometimes delay first byte for desktop only, producing geo-like clusters that are actually enterprise network policy. Aggressive sampling changes move percentile estimates without user experience changing; correlate with configuration change tickets before blaming developers. Single-page application soft navigations can duplicate session counts if instrumentation double-fires, skewing error rate denominators until dedupe rules land.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Observability Cloud (RUM), Splunk RUM agent.\n• Ensure the following data sources are available: Splunk RUM telemetry, `sourcetype=signalfx:rum:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Splunk RUM agent on frontend pages. RUM automatically captures CWV metrics using the web-vitals library. Ingest RUM data into Splunk via the Observability Cloud API or OTel Collector relay. Google measures CWV at the 75th percentile: LCP ≤2.5s (good), INP ≤200ms (good), CLS ≤0.1 (good). Track p75 values per page URL, browser, and device type (mobile vs desktop — mobile often has worse LCP). Alert frontend teams when any high-traffic page drops from \"Good\" to \"Needs Improvement.\" Compare C…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=observability sourcetype=\"signalfx:rum:metrics\"\n| bin _time span=1h\n| stats avg(lcp_ms) as avg_lcp, p75(lcp_ms) as p75_lcp, avg(inp_ms) as avg_inp, p75(inp_ms) as p75_inp, avg(cls) as avg_cls, p75(cls) as p75_cls, count as page_views by _time, page_url, browser_name, device_type\n| eval lcp_rating=case(p75_lcp<=2500, \"Good\", p75_lcp<=4000, \"Needs Improvement\", 1==1, \"Poor\")\n| eval inp_rating=case(p75_inp<=200, \"Good\", p75_inp<=500, \"Needs Improvement\", 1==1, \"Poor\")\n| eval cls_rating=case(p75_cls<=0.1, \"Good\", p75_cls<=0.25, \"Needs Improvement\", 1==1, \"Poor\")\n| where lcp_rating!=\"Good\" OR inp_rating!=\"Good\" OR cls_rating!=\"Good\"\n| sort -p75_lcp\n| table _time, page_url, browser_name, device_type, p75_lcp, lcp_rating, p75_inp, inp_rating, p75_cls, cls_rating, page_views\n```\n\nUnderstanding this SPL\n\n**Splunk RUM Core Web Vitals Tracking** — Google's Core Web Vitals — Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS) — directly impact search ranking and user experience. Splunk RUM captures these metrics per page, browser, device type, and geographic location. Tracking CWV trends detects regressions after frontend deployments before they affect SEO ranking or user conversion rates. A 100ms LCP regression on the product page can reduce conversion by 1-2%.\n\nDocumented **Data sources**: Splunk RUM telemetry, `sourcetype=signalfx:rum:metrics`. **App/TA** (typical add-on context): Splunk Observability Cloud (RUM), Splunk RUM agent. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: observability; **sourcetype**: signalfx:rum:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=observability, sourcetype=\"signalfx:rum:metrics\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, page_url, browser_name, device_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **lcp_rating** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **inp_rating** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cls_rating** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lcp_rating!=\"Good\" OR inp_rating!=\"Good\" OR cls_rating!=\"Good\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Splunk RUM Core Web Vitals Tracking**): table _time, page_url, browser_name, device_type, p75_lcp, lcp_rating, p75_inp, inp_rating, p75_cls, cls_rating, page_views\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scorecard (CWV status per top page — green/yellow/red), Line chart (LCP/INP/CLS p75 trend over 30 days), Table (pages with poor CWV), Bar chart (CWV by device type).",
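\n\nOptional regression-watch variant (illustrative)\nA minimal sketch that is not part of the original entry: it compares each hour's p75 LCP for one page against a trailing 24-hour baseline. The `page_url` filter and the 20% threshold are assumptions; point them at your own high-traffic pages.\n\n```spl\nindex=observability sourcetype=\"signalfx:rum:metrics\" page_url=\"/checkout*\"\n| bin _time span=1h\n| stats p75(lcp_ms) as p75_lcp by _time\n| streamstats window=24 current=f avg(p75_lcp) as baseline_lcp\n| eval lcp_delta_pct=round((p75_lcp - baseline_lcp) * 100 / nullif(baseline_lcp, 0), 1)\n| where lcp_delta_pct > 20\n| table _time, p75_lcp, baseline_lcp, lcp_delta_pct\n```\n\nPair the flagged hours with deployment markers so a regression can be tied to the release that preceded it.",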
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "Like grading delivery trucks on time-to-doorbell across neighborhoods, we score how often pages feel ready, stay steady when you tap, and avoid jarring jumps. When real visitors in one city or on phones see repeated late arrivals, sluggish taps, or shaky layouts, we raise a clear signal before most complaints land.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.10",
              "n": "Splunk RUM JavaScript Error Rate by Page",
              "c": "high",
              "f": "intermediate",
              "v": "Frontend JavaScript errors — unhandled exceptions, failed API calls, resource loading failures — directly degrade user experience but are invisible to backend monitoring. RUM captures these errors with full stack traces, page URL, browser, and user session context. Tracking JS error rate by page detects regressions after deployments, identifies browser-specific bugs, and quantifies the user impact of frontend failures that backend health checks miss entirely.",
              "t": "Splunk Observability Cloud (RUM), Splunk RUM agent",
              "d": "Splunk RUM error events, `sourcetype=signalfx:rum:errors`",
              "q": "index=observability sourcetype=\"signalfx:rum:errors\"\n| eval error_type=coalesce(exception_type, \"UnknownError\")\n| bin _time span=1h\n| stats count as error_count, dc(session_id) as affected_sessions, values(error_type) as error_types by _time, page_url, browser_name\n| join type=left _time page_url [search index=observability sourcetype=\"signalfx:rum:metrics\"\n    | bin _time span=1h\n    | stats dc(session_id) as total_sessions by _time, page_url]\n| eval error_session_pct=round(affected_sessions*100/nullif(total_sessions, 0), 1)\n| where error_count > 10 OR error_session_pct > 5\n| table _time, page_url, browser_name, error_count, affected_sessions, error_session_pct, error_types\n| sort -error_session_pct",
              "m": "Splunk RUM captures JavaScript errors including: uncaught exceptions, unhandled promise rejections, resource loading failures (img, script, CSS 404s), and fetch/XHR errors. Each error event includes stack trace, page URL, browser, OS, and session ID. Calculate the percentage of user sessions experiencing errors per page — this measures user impact more accurately than raw error count (one broken page viewed by 100 users = 100 errors but 100% session impact). Alert when error session percentage exceeds 5% for any page with >100 views. Compare error rates by browser to identify browser-specific regressions. Link RUM errors to backend traces via trace_id to identify full-stack root causes.",
              "z": "Line chart (error session % per page over 7 days), Table (pages with highest error impact), Bar chart (errors by type), Pie chart (errors by browser).",
              "kfp": "Marketing tag and advertising frameworks routinely emit ResizeObserver loop limit exceeded and opaque Script error strings that originate from cross-origin frames; those patterns look severe in raw counts but often require vendor tickets or filter rules instead of emergency rollbacks. Browser extensions that block analytics scripts can raise CORS and network failure noise for small cohorts, especially on internal tools where employees run aggressive blockers; segment by organization networks before paging product teams. Corporate TLS inspection and captive portals occasionally break script loads in ways that resemble application defects; correlate with network operations signals. Deliberately thrown validation errors that include user-typed email addresses can trip crude regular-expression privacy heuristics; refine rum_pii_redaction_rules.csv allowlists so legitimate validation messages route to medium severity instead of critical when counsel agrees. Single-page application soft navigations and double-firing history listeners can duplicate error events; dedupe at ingest when possible. Beta browser releases and automated accessibility scanners can spike console-driven noise; exclude labeled bot traffic when policy allows. Intermittent CDN script 404s after a partial deploy can concentrate in one region; pair geo.country with release_version before assuming logic bugs. Load tests and synthetic generators that reuse one session id will skew denominators unless labeled out of production alerts.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Observability Cloud (RUM), Splunk RUM agent.\n• Ensure the following data sources are available: Splunk RUM error events, `sourcetype=signalfx:rum:errors`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSplunk RUM captures JavaScript errors including: uncaught exceptions, unhandled promise rejections, resource loading failures (img, script, CSS 404s), and fetch/XHR errors. Each error event includes stack trace, page URL, browser, OS, and session ID. Calculate the percentage of user sessions experiencing errors per page — this measures user impact more accurately than raw error count (one broken page viewed by 100 users = 100 errors but 100% session impact). Alert when error session percentage e…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=observability sourcetype=\"signalfx:rum:errors\"\n| eval error_type=coalesce(exception_type, \"UnknownError\")\n| bin _time span=1h\n| stats count as error_count, dc(session_id) as affected_sessions, values(error_type) as error_types by _time, page_url, browser_name\n| join type=left _time page_url [search index=observability sourcetype=\"signalfx:rum:metrics\"\n    | bin _time span=1h\n    | stats dc(session_id) as total_sessions by _time, page_url]\n| eval error_session_pct=round(affected_sessions*100/nullif(total_sessions, 0), 1)\n| where error_count > 10 OR error_session_pct > 5\n| table _time, page_url, browser_name, error_count, affected_sessions, error_session_pct, error_types\n| sort -error_session_pct\n```\n\nUnderstanding this SPL\n\n**Splunk RUM JavaScript Error Rate by Page** — Frontend JavaScript errors — unhandled exceptions, failed API calls, resource loading failures — directly degrade user experience but are invisible to backend monitoring. RUM captures these errors with full stack traces, page URL, browser, and user session context. Tracking JS error rate by page detects regressions after deployments, identifies browser-specific bugs, and quantifies the user impact of frontend failures that backend health checks miss entirely.\n\nDocumented **Data sources**: Splunk RUM error events, `sourcetype=signalfx:rum:errors`. **App/TA** (typical add-on context): Splunk Observability Cloud (RUM), Splunk RUM agent. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: observability; **sourcetype**: signalfx:rum:errors. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=observability, sourcetype=\"signalfx:rum:errors\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **error_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, page_url, browser_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **error_session_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_count > 10 OR error_session_pct > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Splunk RUM JavaScript Error Rate by Page**): table _time, page_url, browser_name, error_count, affected_sessions, error_session_pct, error_types\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (error session % per page over 7 days), Table (pages with highest error impact), Bar chart (errors by type), Pie chart (errors by browser).",
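\n\nOptional join-free variant (illustrative)\nA minimal sketch that is not part of the original entry: it computes the same error-session percentage in a single search over both sourcetypes, sidestepping `join` subsearch limits. It assumes both feeds share `session_id` and `page_url`; `total_sessions` here counts sessions seen in either feed, which is an approximation.\n\n```spl\nindex=observability sourcetype=\"signalfx:rum:errors\" OR sourcetype=\"signalfx:rum:metrics\"\n| bin _time span=1h\n| stats dc(eval(if(sourcetype==\"signalfx:rum:errors\", session_id, null()))) as affected_sessions, dc(session_id) as total_sessions by _time, page_url\n| eval error_session_pct=round(affected_sessions * 100 / nullif(total_sessions, 0), 1)\n| where error_session_pct > 5\n| table _time, page_url, affected_sessions, total_sessions, error_session_pct\n```\n\nBecause everything happens in one `stats` pass, there is no subsearch row cap to silently drop pages.",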
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2025-11-15",
              "sver": "",
              "rby": "",
              "ge": "We watch the scripts running in real visitors' browsers for sudden jumps in hidden crashes and broken promises, tie those jumps to the version you just shipped when that is the pattern, and flag messages that accidentally carry private details so teams fix the leak fast.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.11",
              "n": "Splunk Synthetic Monitoring Multi-Step Transaction SLA",
              "c": "high",
              "f": "intermediate",
              "v": "Beyond simple pass/fail synthetic checks (UC-13.3.10), multi-step browser transactions test complete user workflows — login, search, add-to-cart, checkout. Step-level timing trends reveal which step is degrading: if the login step takes 500ms longer in one geography, that points to a regional identity provider issue. Transaction SLA tracking quantifies end-user experience commitments and provides evidence for SLA breach discussions with internal service owners or external vendors.",
              "t": "Splunk Observability Cloud (Synthetics), Splunk Synthetic Monitoring",
              "d": "Splunk Synthetic test results, `sourcetype=signalfx:synthetics:results`",
              "q": "index=observability sourcetype=\"signalfx:synthetics:results\" test_type=\"browser\"\n| eval step_duration_ms=step_end_ms - step_start_ms\n| bin _time span=1h\n| stats avg(step_duration_ms) as avg_step_ms, p95(step_duration_ms) as p95_step_ms, sum(eval(if(step_status==\"FAIL\",1,0))) as step_failures, count as step_runs by _time, test_name, step_name, location\n| eval step_success_pct=round((step_runs-step_failures)*100/step_runs, 1)\n| eval sla_met=if(step_success_pct >= 99.5 AND p95_step_ms < 3000, \"Yes\", \"No\")\n| stats avg(step_success_pct) as avg_success, avg(p95_step_ms) as avg_p95, min(step_success_pct) as worst_success by test_name, step_name, location\n| where avg_success < 99.5 OR avg_p95 > 3000\n| table test_name, step_name, location, avg_success, avg_p95, worst_success\n| sort avg_success",
              "m": "Configure Splunk Synthetic browser tests for critical user journeys (login flow, checkout, search, account management) running from multiple geographic locations every 5-15 minutes. Ingest results with per-step timing and status. Define SLA targets per transaction (e.g., 99.5% success, p95 < 3 seconds). Track step-level performance to identify which step in the journey degrades. Compare performance across locations to detect regional infrastructure issues. Alert when any transaction drops below SLA for 2 consecutive hours. Provide weekly SLA reports to service owners showing uptime, performance, and geographic variance. Correlate synthetic failures with infrastructure events (cat-01, cat-05) to distinguish application from infrastructure issues.",
              "z": "Table (transaction SLA status by location — green/red), Line chart (step duration trend per test), Bar chart (success rate by geography), Heatmap (test × location performance).",
              "kfp": "Cosmetic front-end refactors can break selectors even when customers still succeed through resilient client code, which spikes synthetic failures without true unavailability; triage by diffing DOM snapshots and attach HAR files before paging payment processors. Planned maintenance windows may be absent from synthetic_test_owners.csv during the first hours of a new process, so alerts look like breaches when operations intentionally paused probes; enforce lookup updates as part of the change calendar. Third-party identity or payment vendors sometimes throttle synthetic credentials differently from production customer tokens, producing exaggerated login or checkout latency that does not match real-user charts; correlate with vendor status pages and separate canary accounts. Corporate proxies or TLS inspection on specific probe networks add handshake noise that concentrates in one geography; validate against regional_div_ms before blaming application code. RUM corroboration can be null or stale when sampling is aggressive or when page_url_hint mapping drifts, so low rum_lcp_p95_corr must not silence a genuine synthetic breach. CDN cache purges after marketing pushes can temporarily inflate asset fetch times for probes hitting cold edges while most users still see warm caches; pair cdn_vendor signals with provider incident feeds. Flaky automation timing on shared runner infrastructure occasionally yields one-off slow steps; require bad_bucket_streak greater than one before executive escalation unless the step is explicitly tier zero. Browser engine upgrades in Observability-managed runners can change performance baselines overnight; refresh baselines after vendor release notes rather than treating the shift as an application regression.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Observability Cloud (Synthetics), Splunk Synthetic Monitoring.\n• Ensure the following data sources are available: Splunk Synthetic test results, `sourcetype=signalfx:synthetics:results`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Splunk Synthetic browser tests for critical user journeys (login flow, checkout, search, account management) running from multiple geographic locations every 5-15 minutes. Ingest results with per-step timing and status. Define SLA targets per transaction (e.g., 99.5% success, p95 < 3 seconds). Track step-level performance to identify which step in the journey degrades. Compare performance across locations to detect regional infrastructure issues. Alert when any transaction drops below …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=observability sourcetype=\"signalfx:synthetics:results\" test_type=\"browser\"\n| eval step_duration_ms=step_end_ms - step_start_ms\n| bin _time span=1h\n| stats avg(step_duration_ms) as avg_step_ms, p95(step_duration_ms) as p95_step_ms, sum(eval(if(step_status==\"FAIL\",1,0))) as step_failures, count as step_runs by _time, test_name, step_name, location\n| eval step_success_pct=round((step_runs-step_failures)*100/step_runs, 1)\n| eval sla_met=if(step_success_pct >= 99.5 AND p95_step_ms < 3000, \"Yes\", \"No\")\n| stats avg(step_success_pct) as avg_success, avg(p95_step_ms) as avg_p95, min(step_success_pct) as worst_success by test_name, step_name, location\n| where avg_success < 99.5 OR avg_p95 > 3000\n| table test_name, step_name, location, avg_success, avg_p95, worst_success\n| sort avg_success\n```\n\nUnderstanding this SPL\n\n**Splunk Synthetic Monitoring Multi-Step Transaction SLA** — Beyond simple pass/fail synthetic checks (UC-13.3.10), multi-step browser transactions test complete user workflows — login, search, add-to-cart, checkout. Step-level timing trends reveal which step is degrading: if the login step takes 500ms longer in one geography, that points to a regional identity provider issue. Transaction SLA tracking quantifies end-user experience commitments and provides evidence for SLA breach discussions with internal service owners or external…\n\nDocumented **Data sources**: Splunk Synthetic test results, `sourcetype=signalfx:synthetics:results`. **App/TA** (typical add-on context): Splunk Observability Cloud (Synthetics), Splunk Synthetic Monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: observability; **sourcetype**: signalfx:synthetics:results. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=observability, sourcetype=\"signalfx:synthetics:results\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **step_duration_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, test_name, step_name, location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **step_success_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by test_name, step_name, location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_success < 99.5 OR avg_p95 > 3000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Splunk Synthetic Monitoring Multi-Step Transaction SLA**): table test_name, step_name, location, avg_success, avg_p95, worst_success\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (transaction SLA status by location — green/red), Line chart (step duration trend per test), Bar chart (success rate by geography), Heatmap (test × location performance).",
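\n\nOptional consecutive-breach alert (illustrative)\nA minimal sketch that is not part of the original entry: it implements the \"below SLA for 2 consecutive hours\" rule at the transaction level. The window size and targets are assumptions to tune per test tier.\n\n```spl\nindex=observability sourcetype=\"signalfx:synthetics:results\" test_type=\"browser\"\n| bin _time span=1h\n| stats sum(eval(if(step_status==\"FAIL\",1,0))) as fails, count as runs by _time, test_name\n| eval success_pct=round((runs - fails) * 100 / runs, 1)\n| streamstats window=2 max(success_pct) as best_2h by test_name\n| where best_2h < 99.5\n| table _time, test_name, success_pct, best_2h\n```\n\nTaking `max` over the two-hour window means even the better hour missed the target, so both consecutive hours breached before the alert fires.",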
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-01",
              "sver": "",
              "rby": "",
              "ge": "We run scheduled robot browsers through your most important checkout-style paths from several cities, and we raise a clear hand when one step slows down, fails over and over, or breaks the success promise you made to the business, before a huge wave of real shoppers hits the same wall.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.12",
              "n": "Splunk Observability Cloud Detector Health",
              "c": "high",
              "f": "intermediate",
              "v": "Observability Cloud detectors (alerts) degrade silently: a detector may be permanently muted, have no recent data feeding its signal, fire so frequently it's ignored (alert fatigue), or have a condition that can never trigger due to metric name changes. Monitoring detector health ensures the alerting layer that protects production services is itself healthy — preventing the dangerous situation where teams believe they're covered by alerts that haven't actually fired or evaluated in months.",
              "t": "Splunk Observability Cloud API, custom scripted input",
              "d": "Observability Cloud Detector API (`sourcetype=signalfx:detectors`), alert event history",
              "q": "index=observability sourcetype=\"signalfx:detectors\"\n| stats latest(is_muted) as muted, latest(last_triggered) as last_trigger, latest(last_updated) as last_update, count(eval(severity==\"Critical\")) as critical_fires, count(eval(severity==\"Warning\")) as warning_fires by detector_name, detector_id, creator\n| eval days_since_trigger=if(isnotnull(last_trigger), round((now()-last_trigger)/86400, 0), \"Never\")\n| eval days_since_update=round((now()-last_update)/86400, 0)\n| eval health=case(\n    muted==\"true\", \"MUTED\",\n    days_since_trigger==\"Never\" OR days_since_trigger > 90, \"STALE - Never/Rarely Fires\",\n    critical_fires > 100, \"NOISY - Excessive Alerts\",\n    days_since_update > 365, \"ABANDONED - Not Updated\",\n    1==1, \"Healthy\")\n| where health!=\"Healthy\"\n| table detector_name, creator, health, muted, days_since_trigger, days_since_update, critical_fires\n| sort health",
              "m": "Deploy a scripted input that polls the Observability Cloud Detector API daily, extracting detector metadata (name, creator, mute status, last update), and alert event history (last triggered, firing frequency). Classify detector health: \"MUTED\" (permanently silenced — may be intentional or forgotten), \"STALE\" (never fired or hasn't fired in 90 days — may have no data or an impossible condition), \"NOISY\" (fires more than 100 times — creates alert fatigue), \"ABANDONED\" (not updated in 1 year — may reference deprecated metrics). Provide a quarterly detector hygiene report to platform teams. Alert when critical-severity detectors are muted for more than 7 days. Track detector count over time to measure observability sprawl.",
              "z": "Pie chart (detector health distribution), Table (unhealthy detectors with details), Bar chart (detectors by health category), Single value (% of healthy detectors).",
              "kfp": "Approved maintenance windows legitimately mute detectors for hours or days; the scheduled_quiet_window column exists precisely so intentional silence does not auto-page platform leadership. Training and sandbox tenants often host deliberately quiet detectors used only during classes; exclude them with environment tags or separate indexes before treating silence as production risk. Seasonal retail detectors may fire only during peak commerce weeks; without expected_trigger_freq_per_30d hints, off-season STALE classifications look like defects even when the business expects quiet months. Creator-of-record frequently shows automation service accounts; never infer human abandonment from that field alone—pair with owner_team and Terraform audit actors. On-call rotations can make a healthy detector look ownerless when pagerduty_service strings lag a week behind HR changes; refresh roster lookups on the same cadence as people moves. Terraform-managed detectors intentionally show no UI edit history; treat infrastructure-as-code metadata as first-class evidence instead of flagging ABANDONED purely from last UI touch. Multi-environment scopes may reuse logical names across staging and production; missing org or realm keys can duplicate detector_id collisions that confuse joins—always partition by tenant. A detector that never fired may be stale, while a detector that fired once and was acknowledged can still be perfectly healthy; require incident outcome fields or manual notes before auto-closing tickets on first-fire patterns. Org-wide Observability lag metrics sometimes precede incident gaps; correlate stream lag signals before blaming individual SignalFlow programs. Enterprise Security Alerts acceleration gaps can zero out the tstats arm while raw incidents still flow; repair acceleration rather than muting the governance search. Lookup staleness makes service_criticality look empty; treat blank CSV rows as blocking defects for tier-one services. Quiet detectors bound to low-traffic canary services may be legitimately idle; use service tier metadata to avoid comparing them to checkout-scale expectations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Observability Cloud API, custom scripted input.\n• Ensure the following data sources are available: Observability Cloud Detector API (`sourcetype=signalfx:detectors`), alert event history.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy a scripted input that polls the Observability Cloud Detector API daily, extracting detector metadata (name, creator, mute status, last update), and alert event history (last triggered, firing frequency). Classify detector health: \"MUTED\" (permanently silenced — may be intentional or forgotten), \"STALE\" (never fired or hasn't fired in 90 days — may have no data or an impossible condition), \"NOISY\" (fires more than 100 times — creates alert fatigue), \"ABANDONED\" (not updated in 1 year — may…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=observability sourcetype=\"signalfx:detectors\"\n| stats latest(is_muted) as muted, latest(last_triggered) as last_trigger, latest(last_updated) as last_update, count(eval(severity==\"Critical\")) as critical_fires, count(eval(severity==\"Warning\")) as warning_fires by detector_name, detector_id, creator\n| eval days_since_trigger=if(isnotnull(last_trigger), round((now()-last_trigger)/86400, 0), \"Never\")\n| eval days_since_update=round((now()-last_update)/86400, 0)\n| eval health=case(\n    muted==\"true\", \"MUTED\",\n    days_since_trigger==\"Never\" OR days_since_trigger > 90, \"STALE - Never/Rarely Fires\",\n    critical_fires > 100, \"NOISY - Excessive Alerts\",\n    days_since_update > 365, \"ABANDONED - Not Updated\",\n    1==1, \"Healthy\")\n| where health!=\"Healthy\"\n| table detector_name, creator, health, muted, days_since_trigger, days_since_update, critical_fires\n| sort health\n```\n\nUnderstanding this SPL\n\n**Splunk Observability Cloud Detector Health** — Observability Cloud detectors (alerts) degrade silently: a detector may be permanently muted, have no recent data feeding its signal, fire so frequently it's ignored (alert fatigue), or have a condition that can never trigger due to metric name changes. Monitoring detector health ensures the alerting layer that protects production services is itself healthy — preventing the dangerous situation where teams believe they're covered by alerts that haven't actually fired or…\n\nDocumented **Data sources**: Observability Cloud Detector API (`sourcetype=signalfx:detectors`), alert event history. **App/TA** (typical add-on context): Splunk Observability Cloud API, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: observability; **sourcetype**: signalfx:detectors. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=observability, sourcetype=\"signalfx:detectors\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by detector_name, detector_id, creator** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since_trigger** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_since_update** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where health!=\"Healthy\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Splunk Observability Cloud Detector Health**): table detector_name, creator, health, muted, days_since_trigger, days_since_update, critical_fires\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (detector health distribution), Table (unhealthy detectors with details), Bar chart (detectors by health category), Single value (% of healthy detectors).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We keep a master list of the automated alarms that watch your cloud software, and we raise a clear flag when those alarms go quiet, scream too much, or drift from the teams and services they are supposed to protect.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.13",
              "n": "RED Metrics Dashboard Template (Rate, Errors, Duration)",
              "c": "high",
              "f": "intermediate",
              "v": "The RED method (Rate, Errors, Duration) is the standard SRE pattern for monitoring request-driven services. This template UC provides a reusable SPL pattern applicable to any HTTP/gRPC service instrumented with OTel — producing the three essential metrics that answer \"is this service healthy right now?\" Having a standardized RED pattern ensures every team monitors their services consistently, enabling fleet-wide comparison and prioritization.",
              "t": "Splunk Distribution of OpenTelemetry Collector, any HTTP/gRPC access log",
              "d": "`sourcetype=otel:traces` (spans), `sourcetype=access_combined` (HTTP logs), OTel metrics",
              "q": "index=traces sourcetype=\"otel:traces\" span_kind=\"SERVER\"\n| eval duration_ms=duration_nano/1000000\n| eval is_error=if(status_code==\"ERROR\" OR http_status_code>=500, 1, 0)\n| bin _time span=5m\n| stats count as requests, sum(is_error) as errors, avg(duration_ms) as avg_duration, p50(duration_ms) as p50, p95(duration_ms) as p95, p99(duration_ms) as p99 by _time, service_name\n| eval error_rate_pct=round(errors*100/requests, 2)\n| eval req_per_sec=round(requests/300, 1)\n| table _time, service_name, req_per_sec, error_rate_pct, p50, p95, p99",
              "m": "Filter for SERVER spans (inbound requests to the service) from OTel trace data. Calculate three metrics per 5-minute window: Rate (requests per second), Errors (percentage of requests with error status), Duration (latency percentiles). This template works with any OTel-instrumented service. Alternatively, compute RED from HTTP access logs using `status>=500` for errors and response time fields for duration. Deploy as a saved search macro `red_metrics(service_name)` for reusability across dashboards. Each team clones the template for their services. Combine with deployment markers to immediately visualize RED impact of releases. Set standard thresholds: error rate >1% (warning), >5% (critical); p99 >2x baseline (warning).",
              "z": "Three-panel row per service: Single value (request rate with sparkline), Gauge (error rate with green/yellow/red), Line chart (p50/p95/p99 duration). Repeatable per service.",
              "kfp": "Legitimate background batch jobs sometimes log benign 404 responses while discovering shard placement through work-stealing probes; do not treat those as customer errors without route context. Warmup minutes after pod replacement routinely spike p99 while JIT and connection pools settle; require sustained breach across two windows before paging latency. Canary traffic ramps from one percent to five percent can look like a sudden rate spike when denominators are misunderstood; compare against baseline_qps and progressive delivery weights. Enabling host.id style resource detectors without normalization can fan duplicate metric series that inflate rate or split duration; dedupe labels at the collector. Service mesh rolling injections delay sidecar readiness so Envoy-only latency rises while application code is innocent; correlate rollout timestamps. HTTP 499 or client-cancel codes are not server faults yet may be marked errors in naive rules; align policy per API surface. gRPC numeric status codes can disagree with OpenTelemetry span status after SDK upgrades; regression-test mapping tables after bumps. Intentional 429 rate-limit responses are errors for some clients and success for governance; encode product policy in is_err logic. Asynchronous publish semantics may count producer requests while consumers silently drop; pair RED with queue depth monitors. A/B cohort splits can shift measured duration without defects when populations differ; join experiment metadata before rollback.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector, any HTTP/gRPC access log.\n• Ensure the following data sources are available: `sourcetype=otel:traces` (spans), `sourcetype=access_combined` (HTTP logs), OTel metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFilter for SERVER spans (inbound requests to the service) from OTel trace data. Calculate three metrics per 5-minute window: Rate (requests per second), Errors (percentage of requests with error status), Duration (latency percentiles). This template works with any OTel-instrumented service. Alternatively, compute RED from HTTP access logs using `status>=500` for errors and response time fields for duration. Deploy as a saved search macro `red_metrics(service_name)` for reusability across dashboa…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=traces sourcetype=\"otel:traces\" span_kind=\"SERVER\"\n| eval duration_ms=duration_nano/1000000\n| eval is_error=if(status_code==\"ERROR\" OR http_status_code>=500, 1, 0)\n| bin _time span=5m\n| stats count as requests, sum(is_error) as errors, avg(duration_ms) as avg_duration, p50(duration_ms) as p50, p95(duration_ms) as p95, p99(duration_ms) as p99 by _time, service_name\n| eval error_rate_pct=round(errors*100/requests, 2)\n| eval req_per_sec=round(requests/300, 1)\n| table _time, service_name, req_per_sec, error_rate_pct, p50, p95, p99\n```\n\nUnderstanding this SPL\n\n**RED Metrics Dashboard Template (Rate, Errors, Duration)** — The RED method (Rate, Errors, Duration) is the standard SRE pattern for monitoring request-driven services. This template UC provides a reusable SPL pattern applicable to any HTTP/gRPC service instrumented with OTel — producing the three essential metrics that answer \"is this service healthy right now?\" Having a standardized RED pattern ensures every team monitors their services consistently, enabling fleet-wide comparison and prioritization.\n\nDocumented **Data sources**: `sourcetype=otel:traces` (spans), `sourcetype=access_combined` (HTTP logs), OTel metrics. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector, any HTTP/gRPC access log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: traces; **sourcetype**: otel:traces. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=traces, sourcetype=\"otel:traces\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_error** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_rate_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **req_per_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **RED Metrics Dashboard Template (Rate, Errors, Duration)**): table _time, service_name, req_per_sec, error_rate_pct, p50, p95, p99\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Three-panel row per service: Single value (request rate with sparkline), Gauge (error rate with green/yellow/red), Line chart (p50/p95/p99 duration). Repeatable per service.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We count how much work arrives, how often it fails, and how long the slowest responses take. When those three jump past the written guardrails for a service, we sound the alarm so the owning team fixes it before most customers notice.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.14",
              "n": "USE Method for Infrastructure (Utilization, Saturation, Errors)",
              "c": "high",
              "f": "intermediate",
              "v": "The USE method (Utilization, Saturation, Errors) is the standard SRE pattern for monitoring resource-driven systems (CPU, memory, disk, network). While individual infrastructure UCs exist across cat-01 and cat-05, this template consolidates all three signals per resource into a unified view that answers \"is this resource the bottleneck?\" Having the USE framework as a structured pattern ensures systematic coverage — teams often monitor utilization but miss saturation (the queue) and errors (the failures).",
              "t": "Splunk Distribution of OpenTelemetry Collector (hostmetrics receiver), Splunk Infrastructure Monitoring",
              "d": "OTel host metrics (`system.cpu.*`, `system.memory.*`, `system.disk.*`, `system.network.*`)",
              "q": "| mstats avg(_value) as val WHERE index=infra_metrics metric_name IN (\n    \"system.cpu.utilization\",\n    \"system.cpu.load_average.5m\",\n    \"system.memory.utilization\",\n    \"system.memory.usage\",\n    \"system.disk.utilization\",\n    \"system.disk.io_time\",\n    \"system.network.dropped\",\n    \"system.network.errors\"\n  ) BY metric_name, host span=5m\n| eval resource=case(\n    match(metric_name, \"cpu\"), \"CPU\",\n    match(metric_name, \"memory\"), \"Memory\",\n    match(metric_name, \"disk\"), \"Disk\",\n    match(metric_name, \"network\"), \"Network\")\n| eval signal=case(\n    match(metric_name, \"utilization\"), \"Utilization\",\n    match(metric_name, \"load_average|io_time|dropped\"), \"Saturation\",\n    match(metric_name, \"errors\"), \"Errors\")\n| stats avg(val) as avg_val, max(val) as max_val by host, resource, signal\n| eval status=case(\n    signal==\"Utilization\" AND max_val > 0.9, \"Critical\",\n    signal==\"Utilization\" AND max_val > 0.7, \"Warning\",\n    signal==\"Saturation\" AND max_val > 0, \"Warning\",\n    signal==\"Errors\" AND max_val > 0, \"Critical\",\n    1==1, \"OK\")\n| where status!=\"OK\"\n| table host, resource, signal, avg_val, max_val, status\n| sort status, resource",
              "m": "Deploy OTel Collector with the `hostmetrics` receiver on all infrastructure hosts. Map OTel host metrics to the USE framework: Utilization (% of resource capacity in use), Saturation (work queued or waiting — load average, disk I/O wait, network drops), Errors (hardware/software errors — disk errors, network errors). For each resource type, define USE metric mappings: CPU (utilization=cpu.utilization, saturation=load_average, errors=N/A), Memory (utilization=memory.utilization, saturation=swap usage, errors=ECC errors), Disk (utilization=disk.utilization, saturation=io_time, errors=disk errors), Network (utilization=bandwidth%, saturation=dropped packets, errors=errors). Alert when any resource shows high utilization (>90%) combined with saturation. This pattern complements per-host monitoring by providing a methodological framework for bottleneck identification.",
              "z": "Matrix table (host × resource with USE status coloring), Gauge (utilization per resource), Bar chart (hosts with saturation), Single value (resources in critical state).",
              "kfp": "Legitimate overnight batch analytics deliberately pins CPU utilization at one hundred percent for wall-clock efficiency; saturation stays bounded because the job owns the socket; require saturation multiples before paging batch role hosts. Hypervisor memory balloon drivers induce swap and guest page cache churn that resemble memory leaks; correlate balloon target metrics and vendor knowledge-base articles before opening application defects. A frayed access-layer cable can raise NIC RX errors while netops already tracks the span in a plant ticket; dedupe with facilities status lookups. Disk read latency spikes during vendor-scheduled SMART extended self-tests look like impending failures yet clear when the self-test ends; align maintenance calendars. Kernel numa_balancing migrations can inflate runqueue depth for seconds on large NUMA servers without customer impact; short-lived spikes below policy dwell should warn not page. Seasonal thermal throttling at a documented warehouse ambient ceiling is expected until mechanical cooling restores margin; compare against CRAC setpoints. GPU nodes dedicated to model training often show near-maximum GPU utilization for hours; inventory gpu_required and role tags must downgrade util_hi to avoid false crises. Planned VMware DRS or Kubernetes drain evacuations concentrate VMs on fewer hosts; utilization and saturation rise during approved windows; honor change tickets. Containers configured with intentionally low memory limits may OOM as a noisy-neighbor containment feature rather than an outage; verify cgroup max against playbook limits. Parity RAID rebuild elevates disk queue depth and IO saturation briefly; pair with array event logs before drive replacements. Corrected ECC rates below field-replace thresholds still increment error counters in naive parsers; gate on uncorrected or policy-specific rates only.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector (hostmetrics receiver), Splunk Infrastructure Monitoring.\n• Ensure the following data sources are available: OTel host metrics (`system.cpu.*`, `system.memory.*`, `system.disk.*`, `system.network.*`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy OTel Collector with the `hostmetrics` receiver on all infrastructure hosts. Map OTel host metrics to the USE framework: Utilization (% of resource capacity in use), Saturation (work queued or waiting — load average, disk I/O wait, network drops), Errors (hardware/software errors — disk errors, network errors). For each resource type, define USE metric mappings: CPU (utilization=cpu.utilization, saturation=load_average, errors=N/A), Memory (utilization=memory.utilization, saturation=swap u…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(_value) as val WHERE index=infra_metrics metric_name IN (\n    \"system.cpu.utilization\",\n    \"system.cpu.load_average.5m\",\n    \"system.memory.utilization\",\n    \"system.memory.usage\",\n    \"system.disk.utilization\",\n    \"system.disk.io_time\",\n    \"system.network.dropped\",\n    \"system.network.errors\"\n  ) BY metric_name, host span=5m\n| eval resource=case(\n    match(metric_name, \"cpu\"), \"CPU\",\n    match(metric_name, \"memory\"), \"Memory\",\n    match(metric_name, \"disk\"), \"Disk\",\n    match(metric_name, \"network\"), \"Network\")\n| eval signal=case(\n    match(metric_name, \"utilization\"), \"Utilization\",\n    match(metric_name, \"load_average|io_time|dropped\"), \"Saturation\",\n    match(metric_name, \"errors\"), \"Errors\")\n| stats avg(val) as avg_val, max(val) as max_val by host, resource, signal\n| eval status=case(\n    signal==\"Utilization\" AND max_val > 0.9, \"Critical\",\n    signal==\"Utilization\" AND max_val > 0.7, \"Warning\",\n    signal==\"Saturation\" AND max_val > 0, \"Warning\",\n    signal==\"Errors\" AND max_val > 0, \"Critical\",\n    1==1, \"OK\")\n| where status!=\"OK\"\n| table host, resource, signal, avg_val, max_val, status\n| sort status, resource\n```\n\nUnderstanding this SPL\n\n**USE Method for Infrastructure (Utilization, Saturation, Errors)** — The USE method (Utilization, Saturation, Errors) is the standard SRE pattern for monitoring resource-driven systems (CPU, memory, disk, network). While individual infrastructure UCs exist across cat-01 and cat-05, this template consolidates all three signals per resource into a unified view that answers \"is this resource the bottleneck?\" Having the USE framework as a structured pattern ensures systematic coverage — teams often monitor utilization but miss saturation (the…\n\nDocumented **Data sources**: OTel host metrics (`system.cpu.*`, `system.memory.*`, `system.disk.*`, `system.network.*`). **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector (hostmetrics receiver), Splunk Infrastructure Monitoring. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: infra_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **resource** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **signal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, resource, signal** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where status!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **USE Method for Infrastructure (Utilization, Saturation, Errors)**): table host, resource, signal, avg_val, max_val, status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix table (host × resource with USE status coloring), Gauge (utilization per resource), Bar chart (hosts with saturation), Single value (resources in critical state).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We check whether machines are actually full, backed up with waiting work, or reporting hardware faults—not just whether a website feels slow. That way we fix the real choke point instead of guessing.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.15",
              "n": "Golden Signals Composite Health per Service",
              "c": "high",
              "f": "advanced",
              "v": "Google SRE's four Golden Signals — Latency, Traffic, Errors, Saturation — provide a complete view of service health when combined into a composite score. Individual signal monitoring exists across various UCs, but a composite score enables service-level comparison and prioritization: when 20 services degrade simultaneously during an incident, the composite score instantly ranks which services are worst-affected. This is particularly valuable for non-ITSI deployments that lack ITSI's built-in service health scoring.",
              "t": "Splunk Distribution of OpenTelemetry Collector, Splunk Observability Cloud",
              "d": "`sourcetype=otel:traces` (spans), OTel metrics, application logs",
              "q": "index=traces sourcetype=\"otel:traces\" span_kind=\"SERVER\"\n| eval duration_ms=duration_nano/1000000\n| eval is_error=if(status_code==\"ERROR\", 1, 0)\n| bin _time span=5m\n| stats count as traffic, sum(is_error) as errors, p99(duration_ms) as latency_p99 by _time, service_name\n| eval error_rate=round(errors*100/traffic, 2)\n| join type=left _time service_name [\n    | mstats avg(_value) as saturation WHERE index=infra_metrics metric_name=\"system.cpu.utilization\" BY service_name span=5m]\n| eval latency_score=case(latency_p99<200, 100, latency_p99<500, 80, latency_p99<1000, 60, latency_p99<2000, 40, 1==1, 20)\n| eval error_score=case(error_rate<0.1, 100, error_rate<1, 80, error_rate<5, 60, error_rate<10, 40, 1==1, 20)\n| eval traffic_score=if(traffic>0, 100, 0)\n| eval sat_score=case(saturation<0.5, 100, saturation<0.7, 80, saturation<0.85, 60, saturation<0.95, 40, 1==1, 20)\n| eval composite_health=round((latency_score*0.3 + error_score*0.3 + traffic_score*0.2 + sat_score*0.2), 0)\n| table _time, service_name, composite_health, latency_p99, error_rate, traffic, saturation\n| sort composite_health",
              "m": "Combine the four Golden Signals per service into a weighted composite score (0-100). Weights: Latency 30%, Errors 30%, Traffic 20% (presence/absence), Saturation 20%. Score each signal on a 0-100 scale based on configurable thresholds. The composite score enables instant service ranking during incidents. For services with ITSI coverage, this complements rather than replaces ITSI health scores — ITSI provides richer KPI modeling while this provides a lightweight alternative for services not yet onboarded to ITSI. Store service-level scores in a summary index for historical trending. Build a fleet-wide service health leaderboard sorted by composite score.",
              "z": "Table (service leaderboard sorted by composite health — color coded), Gauge (composite health per critical service), Line chart (composite health trend per service over 7 days), Treemap (services sized by traffic, colored by health).",
              "kfp": "Latency spikes during pool exhaustion, garbage collection, or upstream slowness. We compare with the APM UI and deploy history before a code-only theory.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector, Splunk Observability Cloud.\n• Ensure the following data sources are available: `sourcetype=otel:traces` (spans), OTel metrics, application logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCombine the four Golden Signals per service into a weighted composite score (0-100). Weights: Latency 30%, Errors 30%, Traffic 20% (presence/absence), Saturation 20%. Score each signal on a 0-100 scale based on configurable thresholds. The composite score enables instant service ranking during incidents. For services with ITSI coverage, this complements rather than replaces ITSI health scores — ITSI provides richer KPI modeling while this provides a lightweight alternative for services not yet o…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=traces sourcetype=\"otel:traces\" span_kind=\"SERVER\"\n| eval duration_ms=duration_nano/1000000\n| eval is_error=if(status_code==\"ERROR\", 1, 0)\n| bin _time span=5m\n| stats count as traffic, sum(is_error) as errors, p99(duration_ms) as latency_p99 by _time, service_name\n| eval error_rate=round(errors*100/traffic, 2)\n| join type=left _time service_name [\n    | mstats avg(_value) as saturation WHERE index=infra_metrics metric_name=\"system.cpu.utilization\" BY service_name span=5m]\n| eval latency_score=case(latency_p99<200, 100, latency_p99<500, 80, latency_p99<1000, 60, latency_p99<2000, 40, 1==1, 20)\n| eval error_score=case(error_rate<0.1, 100, error_rate<1, 80, error_rate<5, 60, error_rate<10, 40, 1==1, 20)\n| eval traffic_score=if(traffic>0, 100, 0)\n| eval sat_score=case(saturation<0.5, 100, saturation<0.7, 80, saturation<0.85, 60, saturation<0.95, 40, 1==1, 20)\n| eval composite_health=round((latency_score*0.3 + error_score*0.3 + traffic_score*0.2 + sat_score*0.2), 0)\n| table _time, service_name, composite_health, latency_p99, error_rate, traffic, saturation\n| sort composite_health\n```\n\nUnderstanding this SPL\n\n**Golden Signals Composite Health per Service** — Google SRE's four Golden Signals — Latency, Traffic, Errors, Saturation — provide a complete view of service health when combined into a composite score. Individual signal monitoring exists across various UCs, but a composite score enables service-level comparison and prioritization: when 20 services degrade simultaneously during an incident, the composite score instantly ranks which services are worst-affected. This is particularly valuable for non-ITSI deployments that…\n\nDocumented **Data sources**: `sourcetype=otel:traces` (spans), OTel metrics, application logs. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector, Splunk Observability Cloud. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: traces; **sourcetype**: otel:traces. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=traces, sourcetype=\"otel:traces\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_error** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **latency_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **error_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **traffic_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sat_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **composite_health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Golden Signals Composite Health per Service**): table _time, service_name, composite_health, latency_p99, error_rate, traffic, saturation\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (service leaderboard sorted by composite health — color coded), Gauge (composite health per critical service), Line chart (composite health trend per service over 7 days), Treemap (services sized by traffic, colored by health).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch application traces, real users, and service-level health so we see slow, wrong, and costly behavior in the same story our teams ship to.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.16",
              "n": "SLO Definition and Multi-Window Burn Rate Alerting",
              "c": "critical",
              "f": "advanced",
              "v": "Simple error rate thresholds create two failure modes: too sensitive (alert on every transient error) or too lenient (miss sustained degradation). Google SRE's multi-window burn rate alerting solves this by measuring how fast you're consuming your error budget across multiple time windows. A fast burn (5min/1hr window) catches severe outages immediately; a slow burn (30min/6hr window) catches gradual degradation that compounds. This structured approach replaces ad-hoc threshold alerting with mathematically rigorous SLO-based alerting.",
              "t": "Splunk Distribution of OpenTelemetry Collector, Splunk Observability Cloud",
              "d": "`sourcetype=otel:traces`, service metrics, `sourcetype=access_combined`",
              "q": "index=traces sourcetype=\"otel:traces\" span_kind=\"SERVER\" service_name=\"$service$\"\n| eval good=if(status_code!=\"ERROR\" AND (isnull(http_status_code) OR http_status_code<500), 1, 0)\n| bin _time span=1m\n| stats count as total, sum(good) as good_count by _time\n| eval error_ratio=1-(good_count/total)\n| eval slo_target=0.999\n| eval budget_total=1-slo_target\n| sort _time\n| streamstats sum(total) as window_total, sum(eval(total-good_count)) as window_errors window=5\n| eval burn_rate_5m=round((window_errors/window_total)/budget_total, 2)\n| streamstats sum(total) as window_total_1h, sum(eval(total-good_count)) as window_errors_1h window=60\n| eval burn_rate_1h=round((window_errors_1h/window_total_1h)/budget_total, 2)\n| eval fast_burn_alert=if(burn_rate_5m > 14.4 AND burn_rate_1h > 14.4, 1, 0)\n| eval slow_burn_alert=if(burn_rate_1h > 6 AND burn_rate_5m > 6, 1, 0)\n| where fast_burn_alert=1 OR slow_burn_alert=1\n| eval alert_type=case(fast_burn_alert=1, \"FAST BURN\", slow_burn_alert=1, \"SLOW BURN\")\n| table _time, alert_type, burn_rate_5m, burn_rate_1h, error_ratio, window_total",
              "m": "Define SLOs per service as availability targets (e.g., 99.9% = 0.1% error budget over 30 days). Calculate burn rate as (observed error rate / error budget rate). Google SRE recommends multi-window alerting: Fast burn (14.4x burn rate over 5min AND 1hr windows) catches severe incidents — at this rate, the entire monthly budget is consumed in 2 hours. Slow burn (6x burn rate over 30min AND 6hr windows) catches gradual degradation — budget consumed in 5 days. Store SLO definitions in a KV store lookup (service, slo_target, budget_period_days). Calculate remaining error budget percentage per service and display on an SLO dashboard. Pair with UC-13.5.17 for error budget policy enforcement when budget is exhausted.",
              "z": "Gauge (error budget remaining % per service), Line chart (burn rate over 24 hours with threshold lines), Table (services with active burn rate alerts), Single value (services currently burning budget).",
              "kfp": "Approved maintenance windows recorded only in email can look like surprise burn until the change calendar is authoritative; align CMDB and change tooling before paging. Low-volume midnight batch jobs can create jagged denominators that trip burn math without user impact; require minimum request totals or split batch objectives. Planned chaos engineering fault injection will consume budget on purpose; label environments and suppress vendor-native detectors during signed games. Region failover drills shift traffic and can spike synthetic failures in the origin region while customers remain healthy; annotate drill metadata and compare regional slices. Customer-side throttling tests and partner rate-limit drills can raise error shares without platform defects; confirm traffic attribution before freezing deploys. Log-only lanes miscount when parsers drop fields during parser deploys; monitor ingest health alongside SLI searches. Synthetic geo blocks and captcha walls present as total downtime in one city while others stay green; diversify probes and validate edge DNS. Quota resets and Observability Cloud billing month boundaries may desynchronize internal spreadsheets from detector budgets; reconcile monthly with finance observers. Kubernetes cron skew and double-counted replay topics can inflate totals; dedupe event IDs when log lanes are primary. Grafana Cloud recording rules may lag Prometheus scrape intervals, stretching apparent windows; verify rule group intervals match Splunk bins.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector, Splunk Observability Cloud.\n• Ensure the following data sources are available: `sourcetype=otel:traces`, service metrics, `sourcetype=access_combined`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine SLOs per service as availability targets (e.g., 99.9% = 0.1% error budget over 30 days). Calculate burn rate as (observed error rate / error budget rate). Google SRE recommends multi-window alerting: Fast burn (14.4x burn rate over 5min AND 1hr windows) catches severe incidents — at this rate, the entire monthly budget is consumed in 2 hours. Slow burn (6x burn rate over 30min AND 6hr windows) catches gradual degradation — budget consumed in 5 days. Store SLO definitions in a KV store loo…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=traces sourcetype=\"otel:traces\" span_kind=\"SERVER\" service_name=\"$service$\"\n| eval good=if(status_code!=\"ERROR\" AND (isnull(http_status_code) OR http_status_code<500), 1, 0)\n| bin _time span=1m\n| stats count as total, sum(good) as good_count by _time\n| eval error_ratio=1-(good_count/total)\n| eval slo_target=0.999\n| eval budget_total=1-slo_target\n| sort _time\n| streamstats sum(total) as window_total, sum(eval(total-good_count)) as window_errors window=5\n| eval burn_rate_5m=round((window_errors/window_total)/budget_total, 2)\n| streamstats sum(total) as window_total_1h, sum(eval(total-good_count)) as window_errors_1h window=60\n| eval burn_rate_1h=round((window_errors_1h/window_total_1h)/budget_total, 2)\n| eval fast_burn_alert=if(burn_rate_5m > 14.4 AND burn_rate_1h > 14.4, 1, 0)\n| eval slow_burn_alert=if(burn_rate_1h > 6 AND burn_rate_5m > 6, 1, 0)\n| where fast_burn_alert=1 OR slow_burn_alert=1\n| eval alert_type=case(fast_burn_alert=1, \"FAST BURN\", slow_burn_alert=1, \"SLOW BURN\")\n| table _time, alert_type, burn_rate_5m, burn_rate_1h, error_ratio, window_total\n```\n\nUnderstanding this SPL\n\n**SLO Definition and Multi-Window Burn Rate Alerting** — Simple error rate thresholds create two failure modes: too sensitive (alert on every transient error) or too lenient (miss sustained degradation). Google SRE's multi-window burn rate alerting solves this by measuring how fast you're consuming your error budget across multiple time windows. A fast burn (5min/1hr window) catches severe outages immediately; a slow burn (30min/6hr window) catches gradual degradation that compounds. This structured approach replaces ad-hoc…\n\nDocumented **Data sources**: `sourcetype=otel:traces`, service metrics, `sourcetype=access_combined`. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector, Splunk Observability Cloud. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: traces; **sourcetype**: otel:traces. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=traces, sourcetype=\"otel:traces\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **good** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **slo_target** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **budget_total** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` computes a rolling 5-bucket (5-minute) sum of totals and errors, appending **window_total** and **window_errors** to each row without reducing the row count.\n• `eval` defines or adjusts **burn_rate_5m** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `streamstats` computes a rolling 60-bucket (1-hour) sum, appending **window_total_1h** and **window_errors_1h** to each row.\n• `eval` defines or adjusts **burn_rate_1h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **fast_burn_alert** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **slow_burn_alert** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fast_burn_alert=1 OR slow_burn_alert=1` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **alert_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `table` selects the output columns: _time, alert_type, burn_rate_5m, burn_rate_1h, error_ratio, window_total\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (error budget remaining % per service), Line chart (burn rate over 24 hours with threshold lines), Table (services with active burn rate alerts), Single value (services currently burning budget).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We treat reliability like minutes on a monthly phone plan: we watch both a frantic ten-minute spending spree and a slow weekly leak. When the burn pattern matches real customer pain, we call the right leader with a clear next step before the allowance is gone.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.17",
              "n": "Error Budget Policy Enforcement",
              "c": "high",
              "f": "intermediate",
              "v": "An SLO without enforcement is just a dashboard. Error budget policies define what happens when a service exhausts its error budget: feature freeze, mandatory reliability work, postmortem requirement, or escalation to engineering leadership. Tracking error budget consumption per service per period and automatically flagging services that have exhausted their budget creates accountability — transforming SLOs from aspirational targets into governance mechanisms that balance feature velocity with reliability investment.",
              "t": "Splunk Distribution of OpenTelemetry Collector, Splunk Observability Cloud",
              "d": "`sourcetype=otel:traces`, SLO definition lookup (KV store)",
              "q": "index=traces sourcetype=\"otel:traces\" span_kind=\"SERVER\"\n| eval good=if(status_code!=\"ERROR\" AND (isnull(http_status_code) OR http_status_code<500), 1, 0)\n| bin _time span=1d\n| stats count as total, sum(good) as good_count by _time, service_name\n| lookup slo_definitions service_name OUTPUT slo_target, budget_period_days, service_tier, owning_team\n| eval daily_error_rate=round((total-good_count)*100/total, 3)\n| eval allowed_error_pct=round((1-slo_target)*100, 3)\n| streamstats sum(total) as period_total, sum(good_count) as period_good window=30 by service_name\n| eval period_availability=round(period_good*100/period_total, 3)\n| eval budget_consumed_pct=round((100-period_availability)*100/(100-slo_target*100), 1)\n| eval policy_action=case(\n    budget_consumed_pct >= 100, \"BUDGET EXHAUSTED - Feature Freeze\",\n    budget_consumed_pct >= 80, \"WARNING - Prioritize Reliability\",\n    budget_consumed_pct >= 50, \"CAUTION - Monitor Closely\",\n    1==1, \"OK - Budget Available\")\n| where budget_consumed_pct >= 50\n| table service_name, owning_team, service_tier, slo_target, period_availability, budget_consumed_pct, policy_action\n| sort -budget_consumed_pct",
              "m": "Create a `slo_definitions` KV store lookup with columns: service_name, slo_target (e.g., 0.999), budget_period_days (typically 30), service_tier (critical/standard/best-effort), and owning_team. Calculate rolling 30-day availability per service from trace data. Compute error budget consumption as the ratio of actual errors to allowed errors. Define policy actions at thresholds: 50% consumed (caution — team should be aware), 80% consumed (warning — shift priorities to reliability), 100% consumed (feature freeze — only reliability and security work until budget replenishes). Generate weekly error budget reports for engineering leadership. Integrate with JIRA/ServiceNow to automatically create reliability work items when budget hits 80%. Track budget consumption trend to forecast when budget will be exhausted.",
              "z": "Table (service error budget status with policy action), Gauge (budget remaining % per critical service), Line chart (budget consumption trend over 30 days), Bar chart (services by budget consumption — sorted descending).",
              "kfp": "Latency spikes during pool exhaustion, garbage collection, or upstream slowness. We compare with the APM UI and deploy history before a code-only theory.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector, Splunk Observability Cloud.\n• Ensure the following data sources are available: `sourcetype=otel:traces`, SLO definition lookup (KV store).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a `slo_definitions` KV store lookup with columns: service_name, slo_target (e.g., 0.999), budget_period_days (typically 30), service_tier (critical/standard/best-effort), and owning_team. Calculate rolling 30-day availability per service from trace data. Compute error budget consumption as the ratio of actual errors to allowed errors. Define policy actions at thresholds: 50% consumed (caution — team should be aware), 80% consumed (warning — shift priorities to reliability), 100% consumed …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=traces sourcetype=\"otel:traces\" span_kind=\"SERVER\"\n| eval good=if(status_code!=\"ERROR\" AND (isnull(http_status_code) OR http_status_code<500), 1, 0)\n| bin _time span=1d\n| stats count as total, sum(good) as good_count by _time, service_name\n| lookup slo_definitions service_name OUTPUT slo_target, budget_period_days, service_tier, owning_team\n| eval daily_error_rate=round((total-good_count)*100/total, 3)\n| eval allowed_error_pct=round((1-slo_target)*100, 3)\n| streamstats sum(total) as period_total, sum(good_count) as period_good window=30 by service_name\n| eval period_availability=round(period_good*100/period_total, 3)\n| eval budget_consumed_pct=round((100-period_availability)*100/(100-slo_target*100), 1)\n| eval policy_action=case(\n    budget_consumed_pct >= 100, \"BUDGET EXHAUSTED - Feature Freeze\",\n    budget_consumed_pct >= 80, \"WARNING - Prioritize Reliability\",\n    budget_consumed_pct >= 50, \"CAUTION - Monitor Closely\",\n    1==1, \"OK - Budget Available\")\n| where budget_consumed_pct >= 50\n| table service_name, owning_team, service_tier, slo_target, period_availability, budget_consumed_pct, policy_action\n| sort -budget_consumed_pct\n```\n\nUnderstanding this SPL\n\n**Error Budget Policy Enforcement** — An SLO without enforcement is just a dashboard. Error budget policies define what happens when a service exhausts its error budget: feature freeze, mandatory reliability work, postmortem requirement, or escalation to engineering leadership. Tracking error budget consumption per service per period and automatically flagging services that have exhausted their budget creates accountability — transforming SLOs from aspirational targets into governance mechanisms that balance…\n\nDocumented **Data sources**: `sourcetype=otel:traces`, SLO definition lookup (KV store). **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector, Splunk Observability Cloud. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: traces; **sourcetype**: otel:traces. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=traces, sourcetype=\"otel:traces\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **good** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **daily_error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **allowed_error_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `streamstats` computes a rolling 30-bucket (30-day) sum of totals and good counts per **service_name**, appending **period_total** and **period_good** to each daily row without reducing the row count.\n• `eval` defines or adjusts **period_availability** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **budget_consumed_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **policy_action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where budget_consumed_pct >= 50` — typically the threshold or rule expression for this monitoring goal.\n• `table` selects the output columns: service_name, owning_team, service_tier, slo_target, period_availability, budget_consumed_pct, policy_action\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (service error budget status with policy action), Gauge (budget remaining % per critical service), Line chart (budget consumption trend over 30 days), Bar chart (services by budget consumption — sorted descending).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see how fast we are spending room for mistakes so we can act before a small issue eats the month.",
              "mtype": [
                "Compliance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.18",
              "n": "Observability Data Volume and Cost Attribution",
              "c": "high",
              "f": "intermediate",
              "v": "Observability platforms charge by data volume (metrics data points, trace spans, log GB). Without attribution, a single team's misconfigured debug logging or high-cardinality metrics can spike the monthly bill by 40% while no one knows who caused it. Attributing observability data volume to services, teams, and environments enables FinOps governance — teams that generate the most telemetry bear the cost, creating incentive to instrument efficiently rather than indiscriminately.",
              "t": "Splunk Distribution of OpenTelemetry Collector, Splunk License Manager",
              "d": "OTel Collector throughput metrics, Splunk license usage (`_internal`), `sourcetype=otel:traces`",
              "q": "| mstats sum(_value) as val WHERE index=otel_metrics metric_name IN (\n    \"otelcol_receiver_accepted_spans\",\n    \"otelcol_receiver_accepted_metric_points\",\n    \"otelcol_receiver_accepted_log_records\"\n  ) BY metric_name, service_name span=1d\n| eval signal=case(match(metric_name,\"spans\"),\"traces\", match(metric_name,\"metric\"),\"metrics\", match(metric_name,\"log\"),\"logs\")\n| lookup service_ownership service_name OUTPUT owning_team, cost_center, environment\n| stats sum(val) as volume by owning_team, cost_center, signal\n| eval estimated_cost=case(\n    signal==\"traces\", round(volume * 0.000005, 2),\n    signal==\"metrics\", round(volume * 0.000001, 2),\n    signal==\"logs\", round(volume * 0.0000008, 2))\n| stats sum(volume) as total_volume, sum(estimated_cost) as total_cost by owning_team, cost_center\n| sort -total_cost\n| table owning_team, cost_center, total_volume, total_cost",
              "m": "Track OTel Collector throughput metrics attributed to `service_name` (extracted from span/metric/log resource attributes). Build a `service_ownership` lookup mapping services to teams and cost centers. Aggregate daily data volume by signal type (traces, metrics, logs) and team. Apply cost-per-unit estimates based on your observability platform pricing (Splunk Cloud, Observability Cloud, or self-hosted). Generate monthly chargeback or showback reports. Identify the top 10 services by volume for optimization review. Common volume reduction strategies: reduce trace sampling for low-risk services, aggregate metrics at the collector (use `metricstransform` processor), filter debug-level logs before export.",
              "z": "Bar chart (cost by team), Pie chart (volume by signal type), Table (top services by volume), Line chart (total volume trend over 90 days).",
              "kfp": "Latency spikes during pool exhaustion, garbage collection, or upstream slowness. We compare with the APM UI and deploy history before a code-only theory.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector, Splunk License Manager.\n• Ensure the following data sources are available: OTel Collector throughput metrics, Splunk license usage (`_internal`), `sourcetype=otel:traces`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack OTel Collector throughput metrics attributed to `service_name` (extracted from span/metric/log resource attributes). Build a `service_ownership` lookup mapping services to teams and cost centers. Aggregate daily data volume by signal type (traces, metrics, logs) and team. Apply cost-per-unit estimates based on your observability platform pricing (Splunk Cloud, Observability Cloud, or self-hosted). Generate monthly chargeback or showback reports. Identify the top 10 services by volume for o…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats sum(_value) as val WHERE index=otel_metrics metric_name IN (\n    \"otelcol_receiver_accepted_spans\",\n    \"otelcol_receiver_accepted_metric_points\",\n    \"otelcol_receiver_accepted_log_records\"\n  ) BY metric_name, service_name span=1d\n| eval signal=case(match(metric_name,\"spans\"),\"traces\", match(metric_name,\"metric\"),\"metrics\", match(metric_name,\"log\"),\"logs\")\n| lookup service_ownership service_name OUTPUT owning_team, cost_center, environment\n| stats sum(val) as volume by owning_team, cost_center, signal\n| eval estimated_cost=case(\n    signal==\"traces\", round(volume * 0.000005, 2),\n    signal==\"metrics\", round(volume * 0.000001, 2),\n    signal==\"logs\", round(volume * 0.0000008, 2))\n| stats sum(volume) as total_volume, sum(estimated_cost) as total_cost by owning_team, cost_center\n| sort -total_cost\n| table owning_team, cost_center, total_volume, total_cost\n```\n\nUnderstanding this SPL\n\n**Observability Data Volume and Cost Attribution** — Observability platforms charge by data volume (metrics data points, trace spans, log GB). Without attribution, a single team's misconfigured debug logging or high-cardinality metrics can spike the monthly bill by 40% while no one knows who caused it. Attributing observability data volume to services, teams, and environments enables FinOps governance — teams that generate the most telemetry bear the cost, creating incentive to instrument efficiently rather than…\n\nDocumented **Data sources**: OTel Collector throughput metrics, Splunk license usage (`_internal`), `sourcetype=otel:traces`. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector, Splunk License Manager. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: otel_metrics.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **signal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by owning_team, cost_center, signal** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **estimated_cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by owning_team, cost_center** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Observability Data Volume and Cost Attribution**): table owning_team, cost_center, total_volume, total_cost\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (cost by team), Pie chart (volume by signal type), Table (top services by volume), Line chart (total volume trend over 90 days).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch traces, real users, and service-level health so we see slow, wrong, and expensive behavior before the business feels it first.",
              "mtype": [
                "Cost",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.19",
              "n": "Observability Cardinality Explosion Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Metric cardinality — the number of unique time series — is the hidden cost driver of observability platforms. Adding a high-cardinality label like `user_id` or `request_id` to a metric can create millions of unique time series, overwhelming TSDB backends and causing query timeouts, memory exhaustion, and unexpected cost spikes. Detecting cardinality explosions before they impact platform stability prevents outages in the observability infrastructure itself.",
              "t": "Splunk Distribution of OpenTelemetry Collector, Prometheus, Splunk Observability Cloud",
              "d": "OTel Collector metrics, TSDB cardinality endpoints, `sourcetype=otel:metrics`",
              "q": "| mcatalog values(metric_name) WHERE index=otel_metrics by metric_name\n| map maxsearches=500 search=\"| mcatalog values(_dims) as dimensions WHERE index=otel_metrics metric_name=\\\"$metric_name$\\\" | eval metric_name=\\\"$metric_name$\\\" | eval cardinality=mvcount(dimensions)\"\n| sort -cardinality\n| head 50\n| eventstats sum(cardinality) as total_cardinality\n| eval pct_of_total=round(cardinality*100/total_cardinality, 1)\n| where cardinality > 10000 OR pct_of_total > 5\n| table metric_name, cardinality, pct_of_total",
              "m": "Periodically audit metric cardinality by counting unique label combinations (time series) per metric name. Metrics with cardinality >10,000 are candidates for label reduction. Common offenders: HTTP metrics with `path` labels containing IDs (`/users/12345`), metrics with `pod_name` labels in auto-scaling environments, and custom metrics with unbounded label values. Use the OTel Collector's `metricstransform` processor to aggregate or drop high-cardinality labels before export. For Splunk Observability Cloud, monitor the `sf.org.numCustomMetrics` org metric. Alert when any single metric exceeds 10,000 time series or when total cardinality grows more than 20% week-over-week. Build a cardinality budget per team aligned with cost allocation.",
              "z": "Bar chart (top 20 metrics by cardinality), Line chart (total cardinality trend over 30 days), Table (metrics exceeding threshold with label analysis), Single value (total active time series).",
              "kfp": "Legitimate cardinality growth appears during rolling deploys when new pod template hashes and canary names multiply series for an hour; require correlation with change tickets before paging owners. Seasonal retail bursts sometimes leak pseudo user identifiers into cohort labels during Black Friday experiments; finance-approved cohort keys may still be high-cardinality and should be whitelisted via emergency_drop only after legal review. Retroactive resampling or backfill pipelines can re-fan-out historical series and look like a live explosion until you segment _time. OpenTelemetry resource detector changes that add host.id then later remove it create short-lived duplicate identities; treat as migration noise bounded to upgrade windows. Prometheus federation from paired high-availability scrapers can double-count identical targets if dedupe labels differ; compare replica labels before blaming applications. Scheduled load tests that stamp synthetic user_id labels are expected; annotate test windows in metrics_inventory or suppress via protected flags. Histogram exemplars that intentionally attach trace identifiers are cardinal by design when misconfigured; distinguish exemplar policies from counter labels. ITSI KPI recalculations can temporarily inflate derived metric fan-out; cross-check ITSI maintenance modes. Edge Processor shadow rules that duplicate events during canary cutovers may inflate counts; validate processor metrics. New service launches with honest additional low-cardinality labels can cross eighty percent utilization without malice; MEDIUM severity should open a ticket, not a wake-up call, unless paired with growth_rate_pct_h spikes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector, Prometheus, Splunk Observability Cloud.\n• Ensure the following data sources are available: OTel Collector metrics, TSDB cardinality endpoints, `sourcetype=otel:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPeriodically audit metric cardinality by counting unique label combinations (time series) per metric name. Metrics with cardinality >10,000 are candidates for label reduction. Common offenders: HTTP metrics with `path` labels containing IDs (`/users/12345`), metrics with `pod_name` labels in auto-scaling environments, and custom metrics with unbounded label values. Use the OTel Collector's `metricstransform` processor to aggregate or drop high-cardinality labels before export. For Splunk Observa…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mcatalog values(metric_name) WHERE index=otel_metrics by metric_name\n| map maxsearches=500 search=\"| mcatalog values(_dims) as dimensions WHERE index=otel_metrics metric_name=\\\"$metric_name$\\\" | eval metric_name=\\\"$metric_name$\\\" | eval cardinality=mvcount(dimensions)\"\n| sort -cardinality\n| head 50\n| eventstats sum(cardinality) as total_cardinality\n| eval pct_of_total=round(cardinality*100/total_cardinality, 1)\n| where cardinality > 10000 OR pct_of_total > 5\n| table metric_name, cardinality, pct_of_total\n```\n\nUnderstanding this SPL\n\n**Observability Cardinality Explosion Detection** — Metric cardinality — the number of unique time series — is the hidden cost driver of observability platforms. Adding a high-cardinality label like `user_id` or `request_id` to a metric can create millions of unique time series, overwhelming TSDB backends and causing query timeouts, memory exhaustion, and unexpected cost spikes. Detecting cardinality explosions before they impact platform stability prevents outages in the observability infrastructure itself.\n\nDocumented **Data sources**: OTel Collector metrics, TSDB cardinality endpoints, `sourcetype=otel:metrics`. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector, Prometheus, Splunk Observability Cloud. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: otel_metrics.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=otel_metrics. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Runs a templated search per row with `map`.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
and appends them to every row without collapsing the result set; here it derives **total_cardinality** so each metric's share can be computed.\n• `eval` defines or adjusts **pct_of_total** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cardinality > 10000 OR pct_of_total > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Observability Cardinality Explosion Detection**): table metric_name, cardinality, pct_of_total\n\nThis search uses `mcatalog` and `map` over a metrics index rather than data model acceleration; confirm `otel_metrics` is a metrics index and raise `maxsearches` if you track more than 500 metric names. Note that `mvcount(dimensions)` counts dimension *names*, which is only a rough proxy for series cardinality; exact time-series counts come from the TSDB cardinality endpoints listed under Data sources.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top 20 metrics by cardinality), Line chart (total cardinality trend over 30 days), Table (metrics exceeding threshold with label analysis), Single value (total active time series).
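\n\nQuick triage (sketch)\nBefore the heavier `map`-based audit, a cheap first pass ranks metrics by raw data-point volume; high datapoint counts often accompany high cardinality, though the two are not the same thing:\n\n```spl\n| mstats count(_value) as datapoints WHERE index=otel_metrics metric_name=* BY metric_name\n| sort -datapoints\n| head 20\n```",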
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch whether your dashboards are remembering every tiny unique label like a receipt number on each measurement, because that habit makes charts crawl, bills balloon, and alarms cry wolf before anything your customers feel breaks. We help teams roll up the noise and keep the picture honest.",
              "mtype": [
                "Performance",
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry",
                "prometheus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.20",
              "n": "Instrumentation Coverage Audit",
              "c": "medium",
              "f": "intermediate",
              "v": "You can't debug what you can't see. Services without OTel instrumentation are \"dark\" — when they fail, you diagnose from the outside using downstream error messages and infrastructure metrics, adding 30-60 minutes to MTTR. Measuring instrumentation coverage per team (what percentage of their services emit traces, metrics, and logs with trace context) drives observability maturity programs with data rather than mandates. A coverage target of 90% for critical services creates measurable accountability.",
              "t": "Splunk Distribution of OpenTelemetry Collector, service registry",
              "d": "OTel Collector receiver metrics, service registry/CMDB, `sourcetype=otel:traces`",
              "q": "| inputlookup service_registry where status=\"active\"\n| fields service_name, owning_team, service_tier, expected_signals\n| join type=left service_name [\n    search index=traces sourcetype=\"otel:traces\" earliest=-7d\n    | stats dc(trace_id) as trace_count, dc(span_name) as operations by service_name\n    | eval has_traces=\"Yes\"]\n| join type=left service_name [\n    | mstats count(_value) as count WHERE index=otel_metrics metric_name=* BY service_name span=7d\n    | where count > 0\n    | eval has_metrics=\"Yes\"\n    | fields service_name, has_metrics]\n| fillnull has_traces has_metrics value=\"No\"\n| eval coverage=case(\n    has_traces==\"Yes\" AND has_metrics==\"Yes\", \"Full\",\n    has_traces==\"Yes\" OR has_metrics==\"Yes\", \"Partial\",\n    1==1, \"Dark\")\n| stats count as total, sum(eval(if(coverage==\"Full\",1,0))) as full, sum(eval(if(coverage==\"Partial\",1,0))) as partial, sum(eval(if(coverage==\"Dark\",1,0))) as dark by owning_team\n| eval coverage_pct=round(full*100/total, 1)\n| table owning_team, total, full, partial, dark, coverage_pct\n| sort coverage_pct",
              "m": "Maintain a `service_registry` lookup (from CMDB, Kubernetes service discovery, or manual inventory) listing all active services with their owning team, tier, and expected telemetry signals. Compare the registry against actual telemetry received in the last 7 days: services emitting traces are \"instrumented for tracing,\" services emitting metrics are \"instrumented for metrics.\" Classify each service as Full (both signals), Partial (one signal), or Dark (no telemetry). Calculate coverage percentage per team. Target: 90% full coverage for Tier-1 services, 70% for Tier-2. Generate weekly instrumentation scorecards for engineering leadership. Track coverage improvement over quarters to measure observability maturity program progress.",
              "z": "Bar chart (coverage % by team), Table (dark services by team), Pie chart (fleet-wide coverage distribution), Line chart (coverage trend over quarters), Single value (fleet coverage %).",
              "kfp": "Latency spikes during pool exhaustion, garbage collection, or upstream slowness. We compare with the APM UI and deploy history before a code-only theory.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector, service registry.\n• Ensure the following data sources are available: OTel Collector receiver metrics, service registry/CMDB, `sourcetype=otel:traces`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain a `service_registry` lookup (from CMDB, Kubernetes service discovery, or manual inventory) listing all active services with their owning team, tier, and expected telemetry signals. Compare the registry against actual telemetry received in the last 7 days: services emitting traces are \"instrumented for tracing,\" services emitting metrics are \"instrumented for metrics.\" Classify each service as Full (both signals), Partial (one signal), or Dark (no telemetry). Calculate coverage percentag…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup service_registry where status=\"active\"\n| fields service_name, owning_team, service_tier, expected_signals\n| join type=left service_name [\n    search index=traces sourcetype=\"otel:traces\" earliest=-7d\n    | stats dc(trace_id) as trace_count, dc(span_name) as operations by service_name\n    | eval has_traces=\"Yes\"]\n| join type=left service_name [\n    | mstats count(_value) as count WHERE index=otel_metrics metric_name=* BY service_name span=7d\n    | where count > 0\n    | eval has_metrics=\"Yes\"\n    | fields service_name, has_metrics]\n| fillnull has_traces has_metrics value=\"No\"\n| eval coverage=case(\n    has_traces==\"Yes\" AND has_metrics==\"Yes\", \"Full\",\n    has_traces==\"Yes\" OR has_metrics==\"Yes\", \"Partial\",\n    1==1, \"Dark\")\n| stats count as total, sum(eval(if(coverage==\"Full\",1,0))) as full, sum(eval(if(coverage==\"Partial\",1,0))) as partial, sum(eval(if(coverage==\"Dark\",1,0))) as dark by owning_team\n| eval coverage_pct=round(full*100/total, 1)\n| table owning_team, total, full, partial, dark, coverage_pct\n| sort coverage_pct\n```\n\nUnderstanding this SPL\n\n**Instrumentation Coverage Audit** — You can't debug what you can't see. Services without OTel instrumentation are \"dark\" — when they fail, you diagnose from the outside using downstream error messages and infrastructure metrics, adding 30-60 minutes to MTTR. Measuring instrumentation coverage per team (what percentage of their services emit traces, metrics, and logs with trace context) drives observability maturity programs with data rather than mandates. A coverage target of 90% for critical services creates…\n\nDocumented **Data sources**: OTel Collector receiver metrics, service registry/CMDB, `sourcetype=otel:traces`. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector, service registry. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **coverage** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by owning_team** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Instrumentation Coverage Audit**): table owning_team, total, full, partial, dark, coverage_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (coverage % by team), Table (dark services by team), Pie chart (fleet-wide coverage distribution), Line chart (coverage trend over quarters), Single value (fleet coverage %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch real user experience in the browser so slow pages, errors, and rough interactions are not invisible in production.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "13.5.21",
              "n": "Telemetry Signal Freshness and Staleness",
              "c": "high",
              "f": "intermediate",
              "v": "A service that stops emitting metrics or traces could mean two very different things: the service is down (infrastructure problem requiring immediate response) or the instrumentation broke (observability gap requiring engineering fix). Monitoring signal freshness — how recently each service last emitted each signal type — distinguishes these cases. If infrastructure monitoring shows the service is running but traces stopped, the instrumentation broke. If both stop, the service is likely down. Without freshness monitoring, instrumentation failures go unnoticed until the next incident when debugging tools fail.",
              "t": "Splunk Distribution of OpenTelemetry Collector",
              "d": "`sourcetype=otel:traces`, OTel metrics, application logs",
              "q": "| tstats latest(_time) as last_trace WHERE index=traces sourcetype=\"otel:traces\" BY service_name\n| join type=left service_name [\n    | mstats max(_time) as last_metric WHERE index=otel_metrics metric_name=* BY service_name]\n| join type=left service_name [\n    | tstats latest(_time) as last_log WHERE index=app_logs BY service_name]\n| eval trace_age_min=round((now()-last_trace)/60, 0)\n| eval metric_age_min=round((now()-last_metric)/60, 0)\n| eval log_age_min=round((now()-last_log)/60, 0)\n| eval trace_status=case(isnull(trace_age_min), \"Never\", trace_age_min>60, \"STALE\", trace_age_min>15, \"Warning\", 1==1, \"Fresh\")\n| eval metric_status=case(isnull(metric_age_min), \"Never\", metric_age_min>30, \"STALE\", metric_age_min>10, \"Warning\", 1==1, \"Fresh\")\n| eval log_status=case(isnull(log_age_min), \"Never\", log_age_min>30, \"STALE\", log_age_min>10, \"Warning\", 1==1, \"Fresh\")\n| where trace_status!=\"Fresh\" OR metric_status!=\"Fresh\" OR log_status!=\"Fresh\"\n| table service_name, trace_status, trace_age_min, metric_status, metric_age_min, log_status, log_age_min\n| sort trace_status, metric_status",
              "m": "Track the latest timestamp per service for each signal type (traces, metrics, logs). Calculate the age of each signal in minutes. Define freshness thresholds: traces stale after 60 minutes (services typically generate spans continuously), metrics stale after 30 minutes (collection interval is usually 10-60 seconds), logs stale after 30 minutes. Alert when any service's signal goes stale. Cross-reference with infrastructure health: if the host/pod is running (CPU/memory metrics flowing via node-level collection) but application signals stopped, the instrumentation broke, not the service. Distinguish between services that should be continuously active versus batch/scheduled services that naturally have quiet periods. Maintain an expected-schedule lookup for batch services.",
              "z": "Status matrix (service × signal type — green/yellow/red), Table (services with stale signals), Single value (services with all signals fresh), Line chart (stale service count trend over 7 days).",
              "kfp": "Latency spikes during pool exhaustion, garbage collection, or upstream slowness. We compare with the APM UI and deploy history before a code-only theory.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Distribution of OpenTelemetry Collector.\n• Ensure the following data sources are available: `sourcetype=otel:traces`, OTel metrics, application logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack the latest timestamp per service for each signal type (traces, metrics, logs). Calculate the age of each signal in minutes. Define freshness thresholds: traces stale after 60 minutes (services typically generate spans continuously), metrics stale after 30 minutes (collection interval is usually 10-60 seconds), logs stale after 30 minutes. Alert when any service's signal goes stale. Cross-reference with infrastructure health: if the host/pod is running (CPU/memory metrics flowing via node-l…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats latest(_time) as last_trace WHERE index=traces sourcetype=\"otel:traces\" BY service_name\n| join type=left service_name [\n    | mstats max(_time) as last_metric WHERE index=otel_metrics metric_name=* BY service_name]\n| join type=left service_name [\n    | tstats latest(_time) as last_log WHERE index=app_logs BY service_name]\n| eval trace_age_min=round((now()-last_trace)/60, 0)\n| eval metric_age_min=round((now()-last_metric)/60, 0)\n| eval log_age_min=round((now()-last_log)/60, 0)\n| eval trace_status=case(isnull(trace_age_min), \"Never\", trace_age_min>60, \"STALE\", trace_age_min>15, \"Warning\", 1==1, \"Fresh\")\n| eval metric_status=case(isnull(metric_age_min), \"Never\", metric_age_min>30, \"STALE\", metric_age_min>10, \"Warning\", 1==1, \"Fresh\")\n| eval log_status=case(isnull(log_age_min), \"Never\", log_age_min>30, \"STALE\", log_age_min>10, \"Warning\", 1==1, \"Fresh\")\n| where trace_status!=\"Fresh\" OR metric_status!=\"Fresh\" OR log_status!=\"Fresh\"\n| table service_name, trace_status, trace_age_min, metric_status, metric_age_min, log_status, log_age_min\n| sort trace_status, metric_status\n```\n\nUnderstanding this SPL\n\n**Telemetry Signal Freshness and Staleness** — A service that stops emitting metrics or traces could mean two very different things: the service is down (infrastructure problem requiring immediate response) or the instrumentation broke (observability gap requiring engineering fix). Monitoring signal freshness — how recently each service last emitted each signal type — distinguishes these cases. If infrastructure monitoring shows the service is running but traces stopped, the instrumentation broke. If both stop, the…\n\nDocumented **Data sources**: `sourcetype=otel:traces`, OTel metrics, application logs. **App/TA** (typical add-on context): Splunk Distribution of OpenTelemetry Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: traces; **sourcetype**: otel:traces. 
That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` directly against the event indexes (no data model). Because `tstats` reads only index-time fields, `BY service_name` requires `service_name` to be an indexed field; otherwise switch to an accelerated data model or a raw-event `stats` search.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` derives **trace_age_min**, **metric_age_min**, and **log_age_min**: the minutes elapsed since each signal last arrived.\n• Three `case()` expressions map each age to **Fresh**, **Warning**, **STALE**, or **Never**, using the thresholds defined in Step 1.\n• Filters the current rows with `where trace_status!=\"Fresh\" OR metric_status!=\"Fresh\" OR log_status!=\"Fresh\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Telemetry Signal Freshness and Staleness**): table service_name, trace_status, trace_age_min, metric_status, metric_age_min, log_status, log_age_min\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n`tstats` and `mstats` here read only indexed fields and metric summaries; if either returns no results, verify that `service_name` is extracted at index time and that `otel_metrics` is a metrics index.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status matrix (service × signal type — green/yellow/red), Table (services with stale signals), Single value (services with all signals fresh), Line chart (stale service count trend over 7 days).
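\n\nSuppressing batch services (sketch)\nStep 1 recommends an expected-schedule lookup so batch and scheduled services are not flagged during normal quiet periods. One hedged way to apply it, with a lookup name and field that are assumptions rather than part of this catalog, is to append an enrichment and filter to the search above:\n\n```spl\n| lookup batch_service_schedule service_name OUTPUT expected_quiet\n| where isnull(expected_quiet) OR expected_quiet!=\"yes\"\n```",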
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch application traces, real users, and service-level health so we see slow, wrong, and costly behavior in the same story our teams ship to.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 21,
            "none": 0
          }
        }
      ],
      "i": 13,
      "n": "Observability & Monitoring Stack",
      "src": "cat-13-observability-monitoring-stack.md"
    },
    {
      "s": [
        {
          "i": "14.1",
          "n": "Building Management Systems (BMS)",
          "u": [
            {
              "i": "14.1.1",
              "n": "HVAC Performance Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "HVAC issues in data centers risk equipment damage; in buildings they affect occupant comfort and energy costs.",
              "t": "Modbus TA, MQTT input, BMS API",
              "d": "BACnet/Modbus sensors (temperature setpoint, actual, supply/return air)",
              "q": "index=bms sourcetype=\"modbus:hvac\"\n| eval deviation=abs(actual_temp-setpoint_temp)\n| where deviation > 3\n| table _time, zone, setpoint_temp, actual_temp, deviation",
              "m": "Connect BMS to Splunk via MQTT broker or Modbus gateway. Ingest setpoints and actuals per zone. Alert when deviation exceeds 3°F/2°C for sustained period. Track energy consumption per HVAC unit.",
              "z": "Line chart (setpoint vs actual per zone), Heatmap (zone × temperature), Single value (zones out of spec).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Modbus TA, MQTT input, BMS API.\n• Ensure the following data sources are available: BACnet/Modbus sensors (temperature setpoint, actual, supply/return air).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConnect BMS to Splunk via MQTT broker or Modbus gateway. Ingest setpoints and actuals per zone. Alert when deviation exceeds 3°F/2°C for sustained period. Track energy consumption per HVAC unit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=bms sourcetype=\"modbus:hvac\"\n| eval deviation=abs(actual_temp-setpoint_temp)\n| where deviation > 3\n| table _time, zone, setpoint_temp, actual_temp, deviation\n```\n\nUnderstanding this SPL\n\n**HVAC Performance Monitoring** — HVAC issues in data centers risk equipment damage; in buildings they affect occupant comfort and energy costs.\n\nDocumented **Data sources**: BACnet/Modbus sensors (temperature setpoint, actual, supply/return air). **App/TA** (typical add-on context): Modbus TA, MQTT input, BMS API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: bms; **sourcetype**: modbus:hvac. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=bms, sourcetype=\"modbus:hvac\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **deviation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where deviation > 3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HVAC Performance Monitoring**): table _time, zone, setpoint_temp, actual_temp, deviation\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (setpoint vs actual per zone), Heatmap (zone × temperature), Single value (zones out of spec).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with heating, cooling, and room comfort before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "modbus",
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.5",
              "n": "Elevator/Equipment Health",
              "c": "medium",
              "f": "beginner",
              "v": "Equipment fault codes enable predictive maintenance, reducing downtime and extending equipment life.",
              "t": "BMS integration, MQTT",
              "d": "BMS event logs, equipment fault codes",
              "q": "index=bms sourcetype=\"bms:faults\"\n| stats count by equipment_id, fault_code, description\n| sort -count",
              "m": "Forward BMS fault events to Splunk. Map fault codes to descriptions via lookup. Track fault frequency per equipment. Alert on critical faults. Report on recurring issues for maintenance planning.",
              "z": "Table (equipment faults), Bar chart (faults by equipment), Timeline (fault events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS integration, MQTT.\n• Ensure the following data sources are available: BMS event logs, equipment fault codes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward BMS fault events to Splunk. Map fault codes to descriptions via lookup. Track fault frequency per equipment. Alert on critical faults. Report on recurring issues for maintenance planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=bms sourcetype=\"bms:faults\"\n| stats count by equipment_id, fault_code, description\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Elevator/Equipment Health** — Equipment fault codes enable predictive maintenance, reducing downtime and extending equipment life.\n\nDocumented **Data sources**: BMS event logs, equipment fault codes. **App/TA** (typical add-on context): BMS integration, MQTT. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: bms; **sourcetype**: bms:faults. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=bms, sourcetype=\"bms:faults\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by equipment_id, fault_code, description** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (equipment faults), Bar chart (faults by equipment), Timeline (fault events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with elevators and other moving equipment that people rely on before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.6",
              "n": "Environmental Compliance",
              "c": "critical",
              "f": "beginner",
              "v": "Temperature/humidity exceedances in data centers risk equipment damage; in labs they invalidate experiments. Compliance monitoring is mandatory.",
              "t": "Environmental sensor inputs (SNMP, MQTT)",
              "d": "Environmental sensors (temperature, humidity, differential pressure)",
              "q": "index=environment sourcetype=\"sensor:environmental\"\n| where temp_f > 80 OR temp_f < 64 OR humidity_pct > 60 OR humidity_pct < 40\n| table _time, zone, sensor, temp_f, humidity_pct",
              "m": "Deploy environmental sensors per ASHRAE guidelines. Ingest via SNMP or MQTT. Alert immediately on out-of-range conditions. Log compliance data for audit. Track seasonal patterns for cooling optimization.",
              "z": "Heatmap (zone × temperature), Line chart (temp/humidity trend), Single value (zones in compliance %), Gauge (current temp per zone).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Environmental sensor inputs (SNMP, MQTT).\n• Ensure the following data sources are available: Environmental sensors (temperature, humidity, differential pressure).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy environmental sensors per ASHRAE guidelines. Ingest via SNMP or MQTT. Alert immediately on out-of-range conditions. Log compliance data for audit. Track seasonal patterns for cooling optimization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=environment sourcetype=\"sensor:environmental\"\n| where temp_f > 80 OR temp_f < 64 OR humidity_pct > 60 OR humidity_pct < 40\n| table _time, zone, sensor, temp_f, humidity_pct\n```\n\nUnderstanding this SPL\n\n**Environmental Compliance** — Temperature/humidity exceedances in data centers risk equipment damage; in labs they invalidate experiments. Compliance monitoring is mandatory.\n\nDocumented **Data sources**: Environmental sensors (temperature, humidity, differential pressure). **App/TA** (typical add-on context): Environmental sensor inputs (SNMP, MQTT). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: environment; **sourcetype**: sensor:environmental. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=environment, sourcetype=\"sensor:environmental\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where temp_f > 80 OR temp_f < 64 OR humidity_pct > 60 OR humidity_pct < 40` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Environmental Compliance**): table _time, zone, sensor, temp_f, humidity_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (zone × temperature), Line chart (temp/humidity trend), Single value (zones in compliance %), Gauge (current temp per zone).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with room conditions your rules and equipment need before harm, damage, or wasted effort piles up.",
              "wv": "crawl",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt",
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "14.1.7",
              "n": "LoRaWAN Gateway Health",
              "c": "medium",
              "f": "advanced",
              "v": "Gateway uplink/downlink success rate and RSSI trending indicate network coverage and reliability. Degraded gateways cause packet loss and affect IoT application availability.",
              "t": "Custom (LoRaWAN Network Server API, e.g. ChirpStack)",
              "d": "LoRaWAN NS API (gateway stats, rx/tx packets)",
              "q": "index=iot sourcetype=\"lorawan:gateway_stats\"\n| eval uplink_success_rate=if(uplink_total>0, (uplink_ok/uplink_total)*100, null), downlink_success_rate=if(downlink_total>0, (downlink_ok/downlink_total)*100, null)\n| bin _time span=1h\n| stats avg(rssi) as avg_rssi, avg(uplink_success_rate) as uplink_pct, avg(downlink_success_rate) as downlink_pct by gateway_id, _time\n| where uplink_pct < 95 OR downlink_pct < 95 OR avg_rssi < -120\n| table gateway_id, avg_rssi, uplink_pct, downlink_pct",
              "m": "Poll LoRaWAN Network Server API (ChirpStack, TTN, etc.) for gateway statistics. Ingest rx/tx packet counts, success/failure, and RSSI per gateway. Configure HEC or scripted input to forward JSON to Splunk. Alert when uplink or downlink success rate drops below 95% or RSSI trends below -120 dBm. Track gateway health for capacity planning.",
              "z": "Table (gateways with degraded success rate), Line chart (RSSI trend by gateway), Gauge (uplink/downlink success %), Status grid (gateway × health).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (LoRaWAN Network Server API, e.g. ChirpStack).\n• Ensure the following data sources are available: LoRaWAN NS API (gateway stats, rx/tx packets).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll LoRaWAN Network Server API (ChirpStack, TTN, etc.) for gateway statistics. Ingest rx/tx packet counts, success/failure, and RSSI per gateway. Configure HEC or scripted input to forward JSON to Splunk. Alert when uplink or downlink success rate drops below 95% or RSSI trends below -120 dBm. Track gateway health for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"lorawan:gateway_stats\"\n| eval uplink_success_rate=if(uplink_total>0, (uplink_ok/uplink_total)*100, null), downlink_success_rate=if(downlink_total>0, (downlink_ok/downlink_total)*100, null)\n| bin _time span=1h\n| stats avg(rssi) as avg_rssi, avg(uplink_success_rate) as uplink_pct, avg(downlink_success_rate) as downlink_pct by gateway_id, _time\n| where uplink_pct < 95 OR downlink_pct < 95 OR avg_rssi < -120\n| table gateway_id, avg_rssi, uplink_pct, downlink_pct\n```\n\nUnderstanding this SPL\n\n**LoRaWAN Gateway Health** — Gateway uplink/downlink success rate and RSSI trending indicate network coverage and reliability. Degraded gateways cause packet loss and affect IoT application availability.\n\nDocumented **Data sources**: LoRaWAN NS API (gateway stats, rx/tx packets). **App/TA** (typical add-on context): Custom (LoRaWAN Network Server API, e.g. ChirpStack). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: lorawan:gateway_stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"lorawan:gateway_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **uplink_success_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by gateway_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where uplink_pct < 95 OR downlink_pct < 95 OR avg_rssi < -120` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **LoRaWAN Gateway Health**): table gateway_id, avg_rssi, uplink_pct, downlink_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (gateways with degraded success rate), Line chart (RSSI trend by gateway), Gauge (uplink/downlink success %), Status grid (gateway × health).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with wireless field gateways and the devices that count on them before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.8",
              "n": "Modbus Device Communication Failure Rate",
              "c": "high",
              "f": "intermediate",
              "v": "Poll timeout tracking across Modbus TCP/RTU slaves identifies communication failures before they impact process control. High failure rates indicate network issues, slave overload, or misconfiguration.",
              "t": "Splunk Edge Hub, custom (Modbus polling logs)",
              "d": "Modbus gateway/master logs (poll success/failure per slave address)",
              "q": "index=ot sourcetype=\"modbus:poll_log\" OR sourcetype=\"modbus:gateway\"\n| rex \"slave=(?<slave_addr>\\d+)|address=(?<slave_addr>\\d+)|(?<status>success|timeout|failure|error)\"\n| eval poll_ok=if(lower(status)=\"success\", 1, 0), poll_fail=if(lower(status)!=\"success\" AND status!=\"\", 1, 0)\n| bin _time span=15m\n| stats sum(poll_ok) as ok, sum(poll_fail) as fail by slave_addr, host, _time\n| eval total=ok+fail, failure_rate_pct=if(total>0, (fail/total)*100, 0)\n| where failure_rate_pct > 10 OR fail > 5\n| table slave_addr, host, ok, fail, failure_rate_pct",
              "m": "Configure Modbus gateway or Edge Hub Modbus connector to log poll success/failure per slave address. Parse slave address and status from logs. Ingest via syslog or file monitor. Alert when failure rate exceeds 10% over 15 minutes or more than 5 consecutive failures for a critical slave. Correlate with network and PLC health.",
              "z": "Table (slaves with high failure rate), Line chart (failure rate trend by slave), Bar chart (top failing slaves), Single value (slaves in spec %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub, custom (Modbus polling logs).\n• Ensure the following data sources are available: Modbus gateway/master logs (poll success/failure per slave address).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Modbus gateway or Edge Hub Modbus connector to log poll success/failure per slave address. Parse slave address and status from logs. Ingest via syslog or file monitor. Alert when failure rate exceeds 10% over 15 minutes or more than 5 consecutive failures for a critical slave. Correlate with network and PLC health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"modbus:poll_log\" OR sourcetype=\"modbus:gateway\"\n| rex \"slave=(?<slave_addr>\\d+)|address=(?<slave_addr>\\d+)|(?<status>success|timeout|failure|error)\"\n| eval poll_ok=if(lower(status)=\"success\", 1, 0), poll_fail=if(lower(status)!=\"success\" AND status!=\"\", 1, 0)\n| bin _time span=15m\n| stats sum(poll_ok) as ok, sum(poll_fail) as fail by slave_addr, host, _time\n| eval total=ok+fail, failure_rate_pct=if(total>0, (fail/total)*100, 0)\n| where failure_rate_pct > 10 OR fail > 5\n| table slave_addr, host, ok, fail, failure_rate_pct\n```\n\nUnderstanding this SPL\n\n**Modbus Device Communication Failure Rate** — Poll timeout tracking across Modbus TCP/RTU slaves identifies communication failures before they impact process control. High failure rates indicate network issues, slave overload, or misconfiguration.\n\nDocumented **Data sources**: Modbus gateway/master logs (poll success/failure per slave address). **App/TA** (typical add-on context): Splunk Edge Hub, custom (Modbus polling logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: modbus:poll_log, modbus:gateway. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"modbus:poll_log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **poll_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by slave_addr, host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failure_rate_pct > 10 OR fail > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Modbus Device Communication Failure Rate**): table slave_addr, host, ok, fail, failure_rate_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (slaves with high failure rate), Line chart (failure rate trend by slave), Bar chart (top failing slaves), Single value (slaves in spec %).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with controllers and the data links the building team relies on before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.9",
              "n": "OPC-UA Server Session Count and Subscription Health",
              "c": "high",
              "f": "advanced",
              "v": "Session limits and subscription keep-alive failures indicate OPC-UA server capacity and client connectivity. Exceeding session limits or subscription failures cause data gaps and break real-time monitoring.",
              "t": "Splunk Edge Hub, custom (OPC-UA server diagnostics)",
              "d": "OPC-UA server diagnostics node (ServerDiagnosticsSummary)",
              "q": "index=ot sourcetype=\"opcua:diagnostics\" OR sourcetype=\"opcua:server\"\n| eval session_pct=if(session_limit>0, (current_sessions/session_limit)*100, null), subscription_ok=if(subscription_failures==0 OR isnull(subscription_failures), 1, 0)\n| where session_pct > 85 OR current_sessions >= session_limit OR subscription_failures > 0\n| table _time, server_endpoint, current_sessions, session_limit, session_pct, subscription_count, subscription_failures, rejected_session_count",
              "m": "Read OPC-UA ServerDiagnosticsSummary node (standard diagnostics object) via Edge Hub OPC-UA connector or custom client. Ingest current session count, session limit, subscription count, and subscription failure metrics. Poll every 1–5 minutes. Alert when session count exceeds 85% of limit, subscription failures occur, or rejected session count increases. Track trends for capacity planning.",
              "z": "Gauge (session utilization %), Table (servers with subscription failures), Line chart (session count and subscription health trend), Single value (OPC-UA servers healthy).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub, custom (OPC-UA server diagnostics).\n• Ensure the following data sources are available: OPC-UA server diagnostics node (ServerDiagnosticsSummary).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRead OPC-UA ServerDiagnosticsSummary node (standard diagnostics object) via Edge Hub OPC-UA connector or custom client. Ingest current session count, session limit, subscription count, and subscription failure metrics. Poll every 1–5 minutes. Alert when session count exceeds 85% of limit, subscription failures occur, or rejected session count increases. Track trends for capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"opcua:diagnostics\" OR sourcetype=\"opcua:server\"\n| eval session_pct=if(session_limit>0, (current_sessions/session_limit)*100, null), subscription_ok=if(subscription_failures==0 OR isnull(subscription_failures), 1, 0)\n| where session_pct > 85 OR current_sessions >= session_limit OR subscription_failures > 0\n| table _time, server_endpoint, current_sessions, session_limit, session_pct, subscription_count, subscription_failures, rejected_session_count\n```\n\nUnderstanding this SPL\n\n**OPC-UA Server Session Count and Subscription Health** — Session limits and subscription keep-alive failures indicate OPC-UA server capacity and client connectivity. Exceeding session limits or subscription failures cause data gaps and break real-time monitoring.\n\nDocumented **Data sources**: OPC-UA server diagnostics node (ServerDiagnosticsSummary). **App/TA** (typical add-on context): Splunk Edge Hub, custom (OPC-UA server diagnostics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: opcua:diagnostics, opcua:server. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"opcua:diagnostics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **session_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where session_pct > 85 OR current_sessions >= session_limit OR subscription_failures > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OPC-UA Server Session Count and Subscription Health**): table _time, server_endpoint, current_sessions, session_limit, session_pct, subscription_count, subscription_failures, rejected_session_c…\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Gauge (session utilization %), Table (servers with subscription failures), Line chart (session count and subscription health trend), Single value (OPC-UA servers healthy).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with controllers and the data links the building team relies on before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.10",
              "n": "SNMP Trap Storm Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Trap floods overwhelm collectors and obscure real faults; rapid detection enables rate limiting and upstream device triage.",
              "t": "SNMP trap receiver, `snmptrapd` → syslog",
              "d": "`index=network` `sourcetype=\"snmp:trap\"` or `snmptrapd:syslog`",
              "q": "index=network sourcetype IN (\"snmp:trap\",\"snmptrapd:syslog\")\n| timechart span=1m count as trap_rate by device_ip\n| where trap_rate > 500",
              "m": "Baseline traps/min per agent IP. Alert when rate exceeds 5× baseline or absolute threshold. Correlate with link flaps or misconfigured threshold on managed device.",
              "z": "Line chart (trap rate by device), Single value (peak traps/min), Table (top storm sources).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP trap receiver, `snmptrapd` → syslog.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"snmp:trap\"` or `snmptrapd:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline traps/min per agent IP. Alert when rate exceeds 5× baseline or absolute threshold. Correlate with link flaps or misconfigured threshold on managed device.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype IN (\"snmp:trap\",\"snmptrapd:syslog\")\n| timechart span=1m count as trap_rate by device_ip\n| where trap_rate > 500\n```\n\nUnderstanding this SPL\n\n**SNMP Trap Storm Detection** — Trap floods overwhelm collectors and obscure real faults; rapid detection enables rate limiting and upstream device triage.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"snmp:trap\"` or `snmptrapd:syslog`. **App/TA** (typical add-on context): SNMP trap receiver, `snmptrapd` → syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1m** buckets with a separate series **by device_ip** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where trap_rate > 500` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (trap rate by device), Single value (peak traps/min), Table (top storm sources).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with device health the way a network-style poll sees it before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp",
                "syslog"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.11",
              "n": "Device MIB Polling Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Failed GET/GETNEXT/WALK cycles mean stale metrics and blind spots in capacity management.",
              "t": "SNMP TA (modular input), polling audit logs",
              "d": "`sourcetype=\"snmp:poll_status\"` or `sourcetype=\"snmp:ta:log\"`",
              "q": "index=network sourcetype=\"snmp:poll_status\"\n| where status!=\"success\" OR timeout_ms > 3000\n| stats count by host, device_ip, oid_tree, error_code\n| sort -count",
              "m": "Emit structured poll result per target (success, timeout, auth error). Alert on sustained failure rate >5% or SNMP timeout storms. Verify SNMP community/v3 creds and ACLs on device.",
              "z": "Table (devices with poll failures), Line chart (failure % trend), Status grid (device × OID family).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA (modular input), polling audit logs.\n• Ensure the following data sources are available: `sourcetype=\"snmp:poll_status\"` or `sourcetype=\"snmp:ta:log\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEmit structured poll result per target (success, timeout, auth error). Alert on sustained failure rate >5% or SNMP timeout storms. Verify SNMP community/v3 creds and ACLs on device.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:poll_status\"\n| where status!=\"success\" OR timeout_ms > 3000\n| stats count by host, device_ip, oid_tree, error_code\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Device MIB Polling Failures** — Failed GET/GETNEXT/WALK cycles mean stale metrics and blind spots in capacity management.\n\nDocumented **Data sources**: `sourcetype=\"snmp:poll_status\"` or `sourcetype=\"snmp:ta:log\"`. **App/TA** (typical add-on context): SNMP TA (modular input), polling audit logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:poll_status. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:poll_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status!=\"success\" OR timeout_ms > 3000` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, device_ip, oid_tree, error_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (devices with poll failures), Line chart (failure % trend), Status grid (device × OID family).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with this area: device mib polling failures before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.12",
              "n": "Firmware Version Compliance Across Fleet",
              "c": "high",
              "f": "intermediate",
              "v": "Known-bad IOS/NX-OS/IOS-XE builds expose the network to CVEs; compliance reporting supports change windows.",
              "t": "SNMP TA (ENTITY-MIB, vendor OIDs), SolarWinds/Prime export",
              "d": "`sourcetype=\"snmp:inventory\"` (sysDescr, entPhysicalFirmwareRev)",
              "q": "index=network sourcetype=\"snmp:inventory\"\n| stats latest(firmware_version) as fw by device_name, model\n| lookup approved_network_firmware.csv model OUTPUT approved_fw\n| where fw!=approved_fw\n| table device_name, model, fw, approved_fw",
              "m": "Poll ENTITY-MIB / vendor firmware revision OIDs on a weekly schedule. Maintain CSV of approved builds per platform. Drive remediation tickets for non-compliant devices.",
              "z": "Table (non-compliant devices), Pie chart (compliance %), Bar chart (by site).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA (ENTITY-MIB, vendor OIDs), SolarWinds/Prime export.\n• Ensure the following data sources are available: `sourcetype=\"snmp:inventory\"` (sysDescr, entPhysicalFirmwareRev).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll ENTITY-MIB / vendor firmware revision OIDs on a weekly schedule. Maintain CSV of approved builds per platform. Drive remediation tickets for non-compliant devices.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:inventory\"\n| stats latest(firmware_version) as fw by device_name, model\n| lookup approved_network_firmware.csv model OUTPUT approved_fw\n| where fw!=approved_fw\n| table device_name, model, fw, approved_fw\n```\n\nUnderstanding this SPL\n\n**Firmware Version Compliance Across Fleet** — Known-bad IOS/NX-OS/IOS-XE builds expose the network to CVEs; compliance reporting supports change windows.\n\nDocumented **Data sources**: `sourcetype=\"snmp:inventory\"` (sysDescr, entPhysicalFirmwareRev). **App/TA** (typical add-on context): SNMP TA (ENTITY-MIB, vendor OIDs), SolarWinds/Prime export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_name, model** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where fw!=approved_fw` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Firmware Version Compliance Across Fleet**): table device_name, model, fw, approved_fw\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant devices), Pie chart (compliance %), Bar chart (by site).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with room conditions your rules and equipment need before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.13",
              "n": "Environmental Sensor Threshold Alerts",
              "c": "high",
              "f": "beginner",
              "v": "Rack intake temperature and humidity from SNMP sensors protect IT and edge equipment from thermal damage.",
              "t": "SNMP TA (UPS-MIB, custom sensor MIBs)",
              "d": "`sourcetype=\"snmp:env_sensor\"` (tempC, humidityPct)",
              "q": "index=environment sourcetype=\"snmp:env_sensor\"\n| where temp_c > 30 OR temp_c < 10 OR humidity_pct > 70 OR humidity_pct < 20\n| table _time, device_ip, sensor_id, temp_c, humidity_pct, location",
              "m": "Map OIDs to sensor labels. Alert per ASHRAE/site policy. Correlate with HVAC/BMS where available.",
              "z": "Heatmap (rack × temp), Line chart (sensor trend), Table (exceedances).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA (UPS-MIB, custom sensor MIBs).\n• Ensure the following data sources are available: `sourcetype=\"snmp:env_sensor\"` (tempC, humidityPct).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap OIDs to sensor labels. Alert per ASHRAE/site policy. Correlate with HVAC/BMS where available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=environment sourcetype=\"snmp:env_sensor\"\n| where temp_c > 30 OR temp_c < 10 OR humidity_pct > 70 OR humidity_pct < 20\n| table _time, device_ip, sensor_id, temp_c, humidity_pct, location\n```\n\nUnderstanding this SPL\n\n**Environmental Sensor Threshold Alerts** — Rack intake temperature and humidity from SNMP sensors protect IT and edge equipment from thermal damage.\n\nDocumented **Data sources**: `sourcetype=\"snmp:env_sensor\"` (tempC, humidityPct). **App/TA** (typical add-on context): SNMP TA (UPS-MIB, custom sensor MIBs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: environment; **sourcetype**: snmp:env_sensor. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=environment, sourcetype=\"snmp:env_sensor\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where temp_c > 30 OR temp_c < 10 OR humidity_pct > 70 OR humidity_pct < 20` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Environmental Sensor Threshold Alerts**): table _time, device_ip, sensor_id, temp_c, humidity_pct, location\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (rack × temp), Line chart (sensor trend), Table (exceedances).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with room conditions your rules and equipment need before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic",
                "snmp_ups"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.14",
              "n": "SNMPv3 Authentication Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Auth failures indicate credential rotation gaps, brute force, or misconfigured collectors.",
              "t": "Device syslog, SNMP engine logs",
              "d": "`sourcetype=\"snmp:auth\"` or `sourcetype=\"cisco:ios\"` (SNMPv3 usmStats)",
              "q": "index=network sourcetype IN (\"snmp:auth\",\"cisco:ios\")\n| search \"authentication failure\" OR \"Unknown user name\" OR \"usmStatsUnknownUserNames\"\n| eval user=coalesce(user, user_name)\n| stats count by src, device_name, user\n| where count > 10\n| sort -count",
              "m": "Forward device-side SNMPv3 error counters to Splunk. Alert on burst from single IP or new engine ID. Correlate with NetOps change tickets.",
              "z": "Table (top sources of auth failures), Timeline (failure bursts), Map (geo of source IPs if routed).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Device syslog, SNMP engine logs.\n• Ensure the following data sources are available: `sourcetype=\"snmp:auth\"` or `sourcetype=\"cisco:ios\"` (SNMPv3 usmStats).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward device-side SNMPv3 error counters to Splunk. Alert on burst from single IP or new engine ID. Correlate with NetOps change tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype IN (\"snmp:auth\",\"cisco:ios\")\n| search \"authentication failure\" OR \"Unknown user name\" OR \"usmStatsUnknownUserNames\"\n| eval user=coalesce(user, user_name)\n| stats count by src, device_name, user\n| where count > 10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**SNMPv3 Authentication Failures** — Auth failures indicate credential rotation gaps, brute force, or misconfigured collectors.\n\nDocumented **Data sources**: `sourcetype=\"snmp:auth\"` or `sourcetype=\"cisco:ios\"` (SNMPv3 usmStats). **App/TA** (typical add-on context): Device syslog, SNMP engine logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by src, device_name, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top sources of auth failures), Timeline (failure bursts), Map (geo of source IPs if routed).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with device health the way a network-style poll sees it before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp",
                "syslog"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.15",
              "n": "Temperature Sensor Threshold Alerts (Meraki MT)",
              "c": "high",
              "f": "beginner",
              "v": "Alerts when environmental temperatures exceed safe thresholds to prevent equipment damage.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*temperature*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*temperature*\"\n| stats latest(temperature) as current_temp, min(temperature) as min_temp, max(temperature) as max_temp by sensor_location\n| where current_temp > 30 OR current_temp < 5",
              "m": "Monitor temperature sensor threshold alerts from syslog. Alert on exceedance.",
              "z": "Temperature gauge per location; trend timeline; alert dashboard.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*temperature*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor temperature sensor threshold alerts from syslog. Alert on exceedance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*temperature*\"\n| stats latest(temperature) as current_temp, min(temperature) as min_temp, max(temperature) as max_temp by sensor_location\n| where current_temp > 30 OR current_temp < 5\n```\n\nUnderstanding this SPL\n\n**Temperature Sensor Threshold Alerts (Meraki MT)** — Alerts when environmental temperatures exceed safe thresholds to prevent equipment damage.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*temperature*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sensor_location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where current_temp > 30 OR current_temp < 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Temperature gauge per location; trend timeline; alert dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with heating, cooling, and room comfort before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.16",
              "n": "Humidity Monitoring and Dew Point Tracking (Meraki MT)",
              "c": "medium",
              "f": "beginner",
              "v": "Monitors humidity levels to ensure optimal conditions for equipment and prevent moisture damage.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*humidity*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" (metric=\"humidity\" OR metric=\"temperature\")\n| stats latest(eval(if(metric=\"humidity\", value, null()))) as humidity_pct,\n        latest(eval(if(metric=\"temperature\", value, null()))) as temp_c\n        by sensor_location\n| eval a=17.625, b=243.04\n| eval alpha=ln(humidity_pct/100) + (a*temp_c)/(b+temp_c)\n| eval dew_point_c=round((b*alpha)/(a-alpha), 2)\n| fields - a, b, alpha",
              "m": "Monitor humidity sensor data. Calculate dew point for condensation risk.",
              "z": "Humidity gauge per location; humidity vs temperature correlation; trend chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*humidity*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor humidity sensor data. Calculate dew point for condensation risk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" (metric=\"humidity\" OR metric=\"temperature\")\n| stats latest(eval(if(metric=\"humidity\", value, null()))) as humidity_pct,\n        latest(eval(if(metric=\"temperature\", value, null()))) as temp_c\n        by sensor_location\n| eval a=17.625, b=243.04\n| eval alpha=ln(humidity_pct/100) + (a*temp_c)/(b+temp_c)\n| eval dew_point_c=round((b*alpha)/(a-alpha), 2)\n| fields - a, b, alpha\n```\n\nUnderstanding this SPL\n\n**Humidity Monitoring and Dew Point Tracking (Meraki MT)** — Monitors humidity levels to ensure optimal conditions for equipment and prevent moisture damage.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*humidity*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sensor_location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **a** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **alpha** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dew_point_c** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Keeps or drops fields with `fields` to shape columns and size.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Humidity gauge per location; humidity vs temperature correlation; trend chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with this area: humidity monitoring and dew point tracking before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.17",
              "n": "Door Open/Close Event Detection and Alerts (Meraki MT)",
              "c": "high",
              "f": "beginner",
              "v": "Tracks door access events for security and facility monitoring.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*door*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*door*\" (action=\"open\" OR action=\"close\")\n| stats count as door_events, latest(timestamp) as last_event by door_location, action",
              "m": "Monitor door sensor events. Alert on unusual access patterns.",
              "z": "Door event timeline; access pattern analysis; alert table.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*door*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor door sensor events. Alert on unusual access patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*door*\" (action=\"open\" OR action=\"close\")\n| stats count as door_events, latest(timestamp) as last_event by door_location, action\n```\n\nUnderstanding this SPL\n\n**Door Open/Close Event Detection and Alerts (Meraki MT)** — Tracks door access events for security and facility monitoring.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*door*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by door_location, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Door event timeline; access pattern analysis; alert table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with this area: door open and close event detection and alerts before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.18",
              "n": "Water Leak Detection and Flood Alerts (Meraki MT)",
              "c": "critical",
              "f": "intermediate",
              "v": "Immediately detects water leaks to prevent equipment damage and business interruption.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*water*\" OR signature=\"*leak*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*water*\" OR signature=\"*leak*\")\n| stats count as leak_events, latest(timestamp) as last_detection by sensor_location\n| where leak_events > 0",
              "m": "Monitor water/leak detection sensors. Create critical alert.",
              "z": "Leak alert dashboard; sensor location map; event timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*water*\" OR signature=\"*leak*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor water/leak detection sensors. Create critical alert.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*water*\" OR signature=\"*leak*\")\n| stats count as leak_events, latest(timestamp) as last_detection by sensor_location\n| where leak_events > 0\n```\n\nUnderstanding this SPL\n\n**Water Leak Detection and Flood Alerts (Meraki MT)** — Immediately detects water leaks to prevent equipment damage and business interruption.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*water*\" OR signature=\"*leak*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sensor_location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where leak_events > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Leak alert dashboard; sensor location map; event timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with water use that might mean leaks or waste before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.19",
              "n": "Power Monitoring and Electrical Load Analysis (Meraki MT)",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks electrical power consumption and load to identify anomalies and plan upgrades.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" sensor_type=\"power\" power_watts=*\n| stats avg(power_watts) as avg_power, max(power_watts) as peak_power by location\n| eval power_capacity_pct=round(peak_power*100/15000, 2)",
              "m": "Query sensor API for power metrics. Track consumption and peaks.",
              "z": "Power consumption gauge; peak load timeline; capacity planning chart.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery sensor API for power metrics. Track consumption and peaks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" sensor_type=\"power\" power_watts=*\n| stats avg(power_watts) as avg_power, max(power_watts) as peak_power by location\n| eval power_capacity_pct=round(peak_power*100/15000, 2)\n```\n\nUnderstanding this SPL\n\n**Power Monitoring and Electrical Load Analysis (Meraki MT)** — Tracks electrical power consumption and load to identify anomalies and plan upgrades.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **power_capacity_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Power consumption gauge; peak load timeline; capacity planning chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with this area: power monitoring and electrical load analysis before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.20",
              "n": "Air Quality and CO2 Monitoring (Meraki MT)",
              "c": "medium",
              "f": "beginner",
              "v": "Monitors indoor air quality to ensure safe working conditions.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api sensor_type=\"air_quality\"`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" sensor_type=\"air_quality\" co2_ppm=*\n| stats latest(co2_ppm) as current_co2, avg(co2_ppm) as avg_co2 by location\n| where current_co2 > 1000",
              "m": "Monitor CO2 and air quality sensor data. Alert on high levels.",
              "z": "CO2 level gauge per location; trend timeline; air quality status chart.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api sensor_type=\"air_quality\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor CO2 and air quality sensor data. Alert on high levels.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" sensor_type=\"air_quality\" co2_ppm=*\n| stats latest(co2_ppm) as current_co2, avg(co2_ppm) as avg_co2 by location\n| where current_co2 > 1000\n```\n\nUnderstanding this SPL\n\n**Air Quality and CO2 Monitoring (Meraki MT)** — Monitors indoor air quality to ensure safe working conditions.\n\nDocumented **Data sources**: `sourcetype=meraki:api sensor_type=\"air_quality\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where current_co2 > 1000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: CO2 level gauge per location; trend timeline; air quality status chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with stale air, dust, and indoor comfort before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.21",
              "n": "Ambient Noise Level Monitoring and Trend Analysis (Meraki MT)",
              "c": "low",
              "f": "beginner",
              "v": "Tracks noise levels to ensure comfortable working environment and detect anomalies.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api sensor_type=\"noise\"`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" sensor_type=\"noise\" noise_db=*\n| stats avg(noise_db) as avg_noise, max(noise_db) as peak_noise by location\n| timechart avg(noise_db) by location",
              "m": "Ingest noise sensor data. Track by location and time of day.",
              "z": "Noise level gauge; time-of-day heat map; location comparison chart.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api sensor_type=\"noise\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest noise sensor data. Track by location and time of day.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" sensor_type=\"noise\" noise_db=*\n| stats avg(noise_db) as avg_noise, max(noise_db) as peak_noise by location\n| timechart avg(noise_db) by location\n```\n\nUnderstanding this SPL\n\n**Ambient Noise Level Monitoring and Trend Analysis (Meraki MT)** — Tracks noise levels to ensure comfortable working environment and detect anomalies.\n\nDocumented **Data sources**: `sourcetype=meraki:api sensor_type=\"noise\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time with a separate series **by location** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Noise level gauge; time-of-day heat map; location comparison chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with this area: ambient noise level monitoring and trend analysis before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.22",
              "n": "Indoor Climate Trending and HVAC Optimization (Meraki MT)",
              "c": "low",
              "f": "beginner",
              "v": "Analyzes temperature and humidity trends to optimize HVAC system efficiency.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api sensor_type IN (\"temperature\", \"humidity\")`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" sensor_type IN (\"temperature\", \"humidity\")\n| stats avg(value) as avg_value by sensor_type, location\n| timechart avg(value) by sensor_type",
              "m": "Correlate temperature and humidity data. Identify optimization opportunities.",
              "z": "Climate trend line chart; comfort zone indicator; energy efficiency analysis.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api sensor_type IN (\"temperature\", \"humidity\")`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate temperature and humidity data. Identify optimization opportunities.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" sensor_type IN (\"temperature\", \"humidity\")\n| stats avg(value) as avg_value by sensor_type, location\n| timechart avg(value) by sensor_type\n```\n\nUnderstanding this SPL\n\n**Indoor Climate Trending and HVAC Optimization (Meraki MT)** — Analyzes temperature and humidity trends to optimize HVAC system efficiency.\n\nDocumented **Data sources**: `sourcetype=meraki:api sensor_type IN (\"temperature\", \"humidity\")`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sensor_type, location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time with a separate series **by sensor_type** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Climate trend line chart; comfort zone indicator; energy efficiency analysis.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with heating, cooling, and room comfort before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.23",
              "n": "Environmental Sensor Battery Health and Replacement Alerts (Meraki MT)",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks sensor battery levels to ensure sensors remain operational and schedule replacements.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" battery_level=*\n| stats latest(battery_level) as battery_pct by sensor_id, location\n| where battery_pct < 20\n| sort battery_pct",
              "m": "Query sensor API for battery metrics. Alert on <20% battery.",
              "z": "Battery health table; battery trend timeline; replacement alert dashboard.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery sensor API for battery metrics. Alert on <20% battery.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" battery_level=*\n| stats latest(battery_level) as battery_pct by sensor_id, location\n| where battery_pct < 20\n| sort battery_pct\n```\n\nUnderstanding this SPL\n\n**Environmental Sensor Battery Health and Replacement Alerts (Meraki MT)** — Tracks sensor battery levels to ensure sensors remain operational and schedule replacements.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sensor_id, location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where battery_pct < 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Battery health table; battery trend timeline; replacement alert dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with room conditions your rules and equipment need before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.24",
              "n": "Sensor Connectivity and Heartbeat Monitoring (Meraki MT)",
              "c": "high",
              "f": "intermediate",
              "v": "Ensures all sensors maintain connectivity and operational status.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\"\n| stats latest(last_report) as last_checkin by sensor_id\n| eval hours_since_checkin=round((now()-strptime(last_report, \"%Y-%m-%dT%H:%M:%S\"))/3600, 1)\n| where hours_since_checkin > 2",
              "m": "Query sensor API for last report time. Alert on missing heartbeats.",
              "z": "Sensor status table; last heartbeat timeline; offline sensor list.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery sensor API for last report time. Alert on missing heartbeats.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\"\n| stats latest(last_report) as last_checkin by sensor_id\n| eval hours_since_checkin=round((now()-strptime(last_report, \"%Y-%m-%dT%H:%M:%S\"))/3600, 1)\n| where hours_since_checkin > 2\n```\n\nUnderstanding this SPL\n\n**Sensor Connectivity and Heartbeat Monitoring (Meraki MT)** — Ensures all sensors maintain connectivity and operational status.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sensor_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since_checkin** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours_since_checkin > 2` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sensor status table; last heartbeat timeline; offline sensor list.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with this area: sensor connectivity and heartbeat monitoring before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.25",
              "n": "AHU Supply Air Temperature Deviation",
              "c": "high",
              "f": "intermediate",
              "v": "Air Handling Unit supply air temperature drifting from setpoint indicates coil fouling, valve failures, or economizer faults — wasting energy and degrading occupant comfort. Catching deviations early prevents complaints and energy waste.",
              "t": "BACnet gateway (e.g. Cimetrics BACstac, Contemporary Controls), Splunk Edge Hub, custom scripted input",
              "d": "`sourcetype=bms:bacnet`, `sourcetype=bms:hvac`",
              "q": "index=building sourcetype=\"bms:hvac\" object_type=\"AHU\"\n| eval delta=abs(supply_air_temp - supply_air_setpoint)\n| bin _time span=15m\n| stats avg(delta) as avg_deviation, max(delta) as max_deviation, avg(supply_air_temp) as avg_supply, latest(supply_air_setpoint) as setpoint by ahu_name, _time\n| where avg_deviation > 2\n| table _time, ahu_name, setpoint, avg_supply, avg_deviation, max_deviation",
              "m": "Ingest AHU BACnet points via Edge Hub or BACnet gateway. Compare supply air temperature to setpoint. Alert on sustained deviation >2°F/1°C.",
              "z": "Line chart (supply temp vs setpoint per AHU); deviation heatmap by AHU and time of day; single value (worst deviation).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BACnet gateway (e.g. Cimetrics BACstac, Contemporary Controls), Splunk Edge Hub, custom scripted input.\n• Ensure the following data sources are available: `sourcetype=bms:bacnet`, `sourcetype=bms:hvac`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest AHU BACnet points via Edge Hub or BACnet gateway. Compare supply air temperature to setpoint. Alert on sustained deviation >2°F/1°C.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:hvac\" object_type=\"AHU\"\n| eval delta=abs(supply_air_temp - supply_air_setpoint)\n| bin _time span=15m\n| stats avg(delta) as avg_deviation, max(delta) as max_deviation, avg(supply_air_temp) as avg_supply, latest(supply_air_setpoint) as setpoint by ahu_name, _time\n| where avg_deviation > 2\n| table _time, ahu_name, setpoint, avg_supply, avg_deviation, max_deviation\n```\n\nUnderstanding this SPL\n\n**AHU Supply Air Temperature Deviation** — Air Handling Unit supply air temperature drifting from setpoint indicates coil fouling, valve failures, or economizer faults — wasting energy and degrading occupant comfort. Catching deviations early prevents complaints and energy waste.\n\nDocumented **Data sources**: `sourcetype=bms:bacnet`, `sourcetype=bms:hvac`. **App/TA** (typical add-on context): BACnet gateway (e.g. Cimetrics BACstac, Contemporary Controls), Splunk Edge Hub, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:hvac. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:hvac\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by ahu_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_deviation > 2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **AHU Supply Air Temperature Deviation**): table _time, ahu_name, setpoint, avg_supply, avg_deviation, max_deviation\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (supply temp vs setpoint per AHU); deviation heatmap by AHU and time of day; single value (worst deviation).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with heating, cooling, and room comfort before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.26",
              "n": "VAV Box Damper Position Stuck Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Approximately 30% of VAV boxes operate with faults in typical buildings. A stuck damper actuator causes a zone to receive constant airflow regardless of demand — wasting 15-25% of HVAC energy per affected zone and creating hot/cold complaints.",
              "t": "BACnet gateway, Splunk Edge Hub, BMS vendor API (Honeywell/Siemens/JCI)",
              "d": "`sourcetype=bms:bacnet`, `sourcetype=bms:hvac`",
              "q": "index=building sourcetype=\"bms:hvac\" object_type=\"VAV\"\n| bin _time span=1h\n| stats stdev(damper_position_pct) as damper_variance, avg(damper_position_pct) as avg_position, avg(zone_temp) as avg_zone_temp, avg(zone_setpoint) as avg_setpoint by vav_id, zone_name, _time\n| where damper_variance < 2 AND abs(avg_zone_temp - avg_setpoint) > 2\n| table _time, vav_id, zone_name, avg_position, damper_variance, avg_zone_temp, avg_setpoint",
              "m": "Monitor damper position variance over 1-hour windows. If position barely changes while zone temp drifts from setpoint, flag as stuck actuator. Dispatch HVAC technician.",
              "z": "Table of stuck VAV boxes; zone comfort heatmap; damper position timeline per VAV.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BACnet gateway, Splunk Edge Hub, BMS vendor API (Honeywell/Siemens/JCI).\n• Ensure the following data sources are available: `sourcetype=bms:bacnet`, `sourcetype=bms:hvac`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor damper position variance over 1-hour windows. If position barely changes while zone temp drifts from setpoint, flag as stuck actuator. Dispatch HVAC technician.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:hvac\" object_type=\"VAV\"\n| bin _time span=1h\n| stats stdev(damper_position_pct) as damper_variance, avg(damper_position_pct) as avg_position, avg(zone_temp) as avg_zone_temp, avg(zone_setpoint) as avg_setpoint by vav_id, zone_name, _time\n| where damper_variance < 2 AND abs(avg_zone_temp - avg_setpoint) > 2\n| table _time, vav_id, zone_name, avg_position, damper_variance, avg_zone_temp, avg_setpoint\n```\n\nUnderstanding this SPL\n\n**VAV Box Damper Position Stuck Detection** — Approximately 30% of VAV boxes operate with faults in typical buildings. A stuck damper actuator causes a zone to receive constant airflow regardless of demand — wasting 15-25% of HVAC energy per affected zone and creating hot/cold complaints.\n\nDocumented **Data sources**: `sourcetype=bms:bacnet`, `sourcetype=bms:hvac`. **App/TA** (typical add-on context): BACnet gateway, Splunk Edge Hub, BMS vendor API (Honeywell/Siemens/JCI). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:hvac. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:hvac\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by vav_id, zone_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where damper_variance < 2 AND abs(avg_zone_temp - avg_setpoint) > 2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VAV Box Damper Position Stuck Detection**): table _time, vav_id, zone_name, avg_position, damper_variance, avg_zone_temp, avg_setpoint\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of stuck VAV boxes; zone comfort heatmap; damper position timeline per VAV.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with heating, cooling, and room comfort before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.27",
              "n": "Chiller Plant COP Efficiency Trending",
              "c": "high",
              "f": "advanced",
              "v": "Chiller plants consume 30-50% of total building energy. Tracking Coefficient of Performance (COP) identifies degradation from condenser fouling, refrigerant loss, or suboptimal sequencing — enabling 15-30% cooling energy savings.",
              "t": "BACnet gateway, Modbus TCP (for power meters), BMS vendor API",
              "d": "`sourcetype=bms:hvac`, `sourcetype=bms:energy`",
              "q": "index=building sourcetype=\"bms:hvac\" object_type=\"chiller\"\n| bin _time span=30m\n| stats avg(cooling_output_kw) as avg_cooling, avg(power_input_kw) as avg_power, avg(chilled_water_supply_temp) as avg_chws, avg(condenser_water_return_temp) as avg_cwr by chiller_id, _time\n| eval cop=round(avg_cooling/avg_power, 2)\n| where avg_power > 10\n| eval efficiency_status=case(cop>=5, \"Excellent\", cop>=4, \"Good\", cop>=3, \"Degraded\", cop<3, \"Poor\")\n| table _time, chiller_id, cop, efficiency_status, avg_cooling, avg_power, avg_chws, avg_cwr",
              "m": "Collect chiller cooling output (tons or kW) and electrical input from BACnet or power meters. Calculate COP. Trend over time and compare to manufacturer baseline. Alert on sustained COP below threshold.",
              "z": "COP trend line per chiller; efficiency gauge vs baseline; chiller comparison chart; kW/ton dashboard.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BACnet gateway, Modbus TCP (for power meters), BMS vendor API.\n• Ensure the following data sources are available: `sourcetype=bms:hvac`, `sourcetype=bms:energy`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect chiller cooling output (tons or kW) and electrical input from BACnet or power meters. Calculate COP. Trend over time and compare to manufacturer baseline. Alert on sustained COP below threshold.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:hvac\" object_type=\"chiller\"\n| bin _time span=30m\n| stats avg(cooling_output_kw) as avg_cooling, avg(power_input_kw) as avg_power, avg(chilled_water_supply_temp) as avg_chws, avg(condenser_water_return_temp) as avg_cwr by chiller_id, _time\n| eval cop=round(avg_cooling/avg_power, 2)\n| where avg_power > 10\n| eval efficiency_status=case(cop>=5, \"Excellent\", cop>=4, \"Good\", cop>=3, \"Degraded\", cop<3, \"Poor\")\n| table _time, chiller_id, cop, efficiency_status, avg_cooling, avg_power, avg_chws, avg_cwr\n```\n\nUnderstanding this SPL\n\n**Chiller Plant COP Efficiency Trending** — Chiller plants consume 30-50% of total building energy. Tracking Coefficient of Performance (COP) identifies degradation from condenser fouling, refrigerant loss, or suboptimal sequencing — enabling 15-30% cooling energy savings.\n\nDocumented **Data sources**: `sourcetype=bms:hvac`, `sourcetype=bms:energy`. **App/TA** (typical add-on context): BACnet gateway, Modbus TCP (for power meters), BMS vendor API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:hvac. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:hvac\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by chiller_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cop** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_power > 10` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **efficiency_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Chiller Plant COP Efficiency Trending**): table _time, chiller_id, cop, efficiency_status, avg_cooling, avg_power, avg_chws, avg_cwr\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: COP trend line per chiller; efficiency gauge vs baseline; chiller comparison chart; kW/ton dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with heating, cooling, and room comfort before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.28",
              "n": "Cooling Tower Approach Temperature Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Approach temperature (difference between condenser water leaving the tower and outdoor wet bulb) reveals cooling tower effectiveness. Rising approach indicates fill media fouling, fan issues, or scale buildup — directly impacting chiller efficiency.",
              "t": "BACnet gateway, outdoor weather station integration",
              "d": "`sourcetype=bms:hvac`, `sourcetype=weather:station`",
              "q": "index=building sourcetype=\"bms:hvac\" object_type=\"cooling_tower\"\n| bin _time span=30m\n| stats avg(leaving_water_temp) as avg_lwt, avg(wet_bulb_temp) as avg_wb by tower_id, _time\n| eval approach=round(avg_lwt - avg_wb, 1)\n| where approach > 0\n| eval status=case(approach<=5, \"Normal\", approach<=8, \"Watch\", approach>8, \"Degraded\")\n| table _time, tower_id, approach, status, avg_lwt, avg_wb",
              "m": "Ingest cooling tower leaving water temperature and outdoor wet bulb from weather station or BMS. Calculate approach. Alert when approach exceeds design value by more than 3°F.",
              "z": "Approach temperature trend per tower; status indicator; comparison to design baseline.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BACnet gateway, outdoor weather station integration.\n• Ensure the following data sources are available: `sourcetype=bms:hvac`, `sourcetype=weather:station`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest cooling tower leaving water temperature and outdoor wet bulb from weather station or BMS. Calculate approach. Alert when approach exceeds design value by more than 3°F.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:hvac\" object_type=\"cooling_tower\"\n| bin _time span=30m\n| stats avg(leaving_water_temp) as avg_lwt, avg(wet_bulb_temp) as avg_wb by tower_id, _time\n| eval approach=round(avg_lwt - avg_wb, 1)\n| where approach > 0\n| eval status=case(approach<=5, \"Normal\", approach<=8, \"Watch\", approach>8, \"Degraded\")\n| table _time, tower_id, approach, status, avg_lwt, avg_wb\n```\n\nUnderstanding this SPL\n\n**Cooling Tower Approach Temperature Trending** — Approach temperature (difference between condenser water leaving the tower and outdoor wet bulb) reveals cooling tower effectiveness. Rising approach indicates fill media fouling, fan issues, or scale buildup — directly impacting chiller efficiency.\n\nDocumented **Data sources**: `sourcetype=bms:hvac`, `sourcetype=weather:station`. **App/TA** (typical add-on context): BACnet gateway, outdoor weather station integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:hvac. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:hvac\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by tower_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **approach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where approach > 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Cooling Tower Approach Temperature Trending**): table _time, tower_id, approach, status, avg_lwt, avg_wb\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Approach temperature trend per tower; status indicator; comparison to design baseline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with heating, cooling, and room comfort before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.29",
              "n": "HVAC Setpoint Override Frequency and Duration",
              "c": "medium",
              "f": "beginner",
              "v": "Frequent manual setpoint overrides by occupants or operators indicate either comfort issues or poor control tuning. Tracking override patterns helps identify problem zones and quantifies energy waste from overrides that persist beyond work hours.",
              "t": "BMS vendor API (Honeywell EBI, Siemens Desigo, Johnson Controls Metasys), BACnet gateway",
              "d": "`sourcetype=bms:hvac`, `sourcetype=bms:bacnet`",
              "q": "index=building sourcetype=\"bms:hvac\" event_type=\"setpoint_override\"\n| eval override_duration_hrs=round(override_duration_min/60, 1)\n| bin _time span=1d\n| stats count as override_count, avg(override_duration_hrs) as avg_duration, sum(override_duration_hrs) as total_override_hrs, dc(zone_name) as zones_affected by building, _time\n| where override_count > 5\n| table _time, building, override_count, zones_affected, avg_duration, total_override_hrs",
              "m": "Capture BMS override events with zone, user, original setpoint, override value, and duration. Track daily override frequency. Alert on zones with excessive overrides.",
              "z": "Override count heatmap by zone and time; daily override trending; top override zones table.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS vendor API (Honeywell EBI, Siemens Desigo, Johnson Controls Metasys), BACnet gateway.\n• Ensure the following data sources are available: `sourcetype=bms:hvac`, `sourcetype=bms:bacnet`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture BMS override events with zone, user, original setpoint, override value, and duration. Track daily override frequency. Alert on zones with excessive overrides.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:hvac\" event_type=\"setpoint_override\"\n| eval override_duration_hrs=round(override_duration_min/60, 1)\n| bin _time span=1d\n| stats count as override_count, avg(override_duration_hrs) as avg_duration, sum(override_duration_hrs) as total_override_hrs, dc(zone_name) as zones_affected by building, _time\n| where override_count > 5\n| table _time, building, override_count, zones_affected, avg_duration, total_override_hrs\n```\n\nUnderstanding this SPL\n\n**HVAC Setpoint Override Frequency and Duration** — Frequent manual setpoint overrides by occupants or operators indicate either comfort issues or poor control tuning. Tracking override patterns helps identify problem zones and quantifies energy waste from overrides that persist beyond work hours.\n\nDocumented **Data sources**: `sourcetype=bms:hvac`, `sourcetype=bms:bacnet`. **App/TA** (typical add-on context): BMS vendor API (Honeywell EBI, Siemens Desigo, Johnson Controls Metasys), BACnet gateway. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:hvac. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:hvac\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **override_duration_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by building, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where override_count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HVAC Setpoint Override Frequency and Duration**): table _time, building, override_count, zones_affected, avg_duration, total_override_hrs\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Override count heatmap by zone and time; daily override trending; top override zones table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with heating, cooling, and room comfort before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.30",
              "n": "Economizer Free Cooling Hours Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Economizer mode uses outside air for free cooling when conditions permit, saving significant compressor energy. Tracking actual free cooling hours vs available hours reveals malfunctioning dampers, faulty enthalpy sensors, or misconfigured switchover points.",
              "t": "BACnet gateway, BMS vendor API",
              "d": "`sourcetype=bms:hvac`, `sourcetype=weather:station`",
              "q": "index=building sourcetype=\"bms:hvac\" object_type=\"AHU\"\n| eval is_economizer=if(economizer_mode=\"active\", 1, 0)\n| eval could_economize=if(outside_air_temp < return_air_temp AND outside_air_temp < 65, 1, 0)\n| bin _time span=1d\n| stats sum(is_economizer) as economizer_hours, sum(could_economize) as available_hours by ahu_name, _time\n| eval utilization_pct=round(economizer_hours*100/available_hours, 1)\n| where available_hours > 2 AND utilization_pct < 50\n| table _time, ahu_name, economizer_hours, available_hours, utilization_pct",
              "m": "Track AHU economizer mode status alongside outdoor conditions. Calculate available vs actual free cooling hours. Alert when utilization drops below 50% of available hours.",
              "z": "Free cooling utilization gauge per AHU; seasonal trend; missed opportunity cost estimate.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BACnet gateway, BMS vendor API.\n• Ensure the following data sources are available: `sourcetype=bms:hvac`, `sourcetype=weather:station`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack AHU economizer mode status alongside outdoor conditions. Calculate available vs actual free cooling hours. Alert when utilization drops below 50% of available hours.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:hvac\" object_type=\"AHU\"\n| eval is_economizer=if(economizer_mode=\"active\", 1, 0)\n| eval could_economize=if(outside_air_temp < return_air_temp AND outside_air_temp < 65, 1, 0)\n| bin _time span=1d\n| stats sum(is_economizer) as economizer_hours, sum(could_economize) as available_hours by ahu_name, _time\n| eval utilization_pct=round(economizer_hours*100/available_hours, 1)\n| where available_hours > 2 AND utilization_pct < 50\n| table _time, ahu_name, economizer_hours, available_hours, utilization_pct\n```\n\nUnderstanding this SPL\n\n**Economizer Free Cooling Hours Tracking** — Economizer mode uses outside air for free cooling when conditions permit, saving significant compressor energy. Tracking actual free cooling hours vs available hours reveals malfunctioning dampers, faulty enthalpy sensors, or misconfigured switchover points.\n\nDocumented **Data sources**: `sourcetype=bms:hvac`, `sourcetype=weather:station`. **App/TA** (typical add-on context): BACnet gateway, BMS vendor API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:hvac. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:hvac\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_economizer** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **could_economize** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by ahu_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where available_hours > 2 AND utilization_pct < 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Economizer Free Cooling Hours Tracking**): table _time, ahu_name, economizer_hours, available_hours, utilization_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Free cooling utilization gauge per AHU; seasonal trend; missed opportunity cost estimate.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with this area: economizer free cooling hours before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.31",
              "n": "Building Energy Consumption Intensity (kWh/m²)",
              "c": "high",
              "f": "intermediate",
              "v": "Energy Use Intensity (EUI) in kWh per square meter is the standard benchmark for building energy performance. Tracking EUI trending and comparing across buildings reveals efficiency opportunities and supports ENERGY STAR, LEED, and ESG reporting.",
              "t": "Smart utility meters via Modbus TCP or pulse counting, BMS energy meters, scripted input from utility APIs",
              "d": "`sourcetype=bms:energy`, `sourcetype=bms:meter`",
              "q": "index=building sourcetype=\"bms:energy\" meter_type=\"electricity\"\n| bin _time span=1d\n| stats sum(kwh_consumed) as daily_kwh by building_name, _time\n| lookup building_info.csv building_name OUTPUTNEW floor_area_sqm, building_type\n| eval eui=round(daily_kwh/floor_area_sqm, 2)\n| eval annual_eui_projected=eui*365\n| eval benchmark_status=case(annual_eui_projected<150, \"Excellent\", annual_eui_projected<250, \"Good\", annual_eui_projected<350, \"Average\", annual_eui_projected>=350, \"Poor\")\n| table _time, building_name, building_type, daily_kwh, floor_area_sqm, eui, annual_eui_projected, benchmark_status",
              "m": "Install smart energy meters on main electrical feeds. Ingest kWh readings via Modbus or pulse counter. Join with building floor area lookup. Calculate daily and projected annual EUI. Compare to industry benchmarks (ENERGY STAR, CIBSE).",
              "z": "EUI trend per building; benchmark comparison bar chart; building ranking table; year-over-year comparison.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Smart utility meters via Modbus TCP or pulse counting, BMS energy meters, scripted input from utility APIs.\n• Ensure the following data sources are available: `sourcetype=bms:energy`, `sourcetype=bms:meter`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall smart energy meters on main electrical feeds. Ingest kWh readings via Modbus or pulse counter. Join with building floor area lookup. Calculate daily and projected annual EUI. Compare to industry benchmarks (ENERGY STAR, CIBSE).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:energy\" meter_type=\"electricity\"\n| bin _time span=1d\n| stats sum(kwh_consumed) as daily_kwh by building_name, _time\n| lookup building_info.csv building_name OUTPUTNEW floor_area_sqm, building_type\n| eval eui=round(daily_kwh/floor_area_sqm, 2)\n| eval annual_eui_projected=eui*365\n| eval benchmark_status=case(annual_eui_projected<150, \"Excellent\", annual_eui_projected<250, \"Good\", annual_eui_projected<350, \"Average\", annual_eui_projected>=350, \"Poor\")\n| table _time, building_name, building_type, daily_kwh, floor_area_sqm, eui, annual_eui_projected, benchmark_status\n```\n\nUnderstanding this SPL\n\n**Building Energy Consumption Intensity (kWh/m²)** — Energy Use Intensity (EUI) in kWh per square meter is the standard benchmark for building energy performance. Tracking EUI trending and comparing across buildings reveals efficiency opportunities and supports ENERGY STAR, LEED, and ESG reporting.\n\nDocumented **Data sources**: `sourcetype=bms:energy`, `sourcetype=bms:meter`. **App/TA** (typical add-on context): Smart utility meters via Modbus TCP or pulse counting, BMS energy meters, scripted input from utility APIs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:energy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:energy\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by building_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **eui** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **annual_eui_projected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **benchmark_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Building Energy Consumption Intensity (kWh/m²)**): table _time, building_name, building_type, daily_kwh, floor_area_sqm, eui, annual_eui_projected, benchmark_status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: EUI trend per building; benchmark comparison bar chart; building ranking table; year-over-year comparison.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with how hard the building is working against its power budget before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.32",
              "n": "Sub-Meter Energy Distribution by System",
              "c": "medium",
              "f": "intermediate",
              "v": "Sub-metering breaks total energy into HVAC, lighting, plug loads, and other systems. Knowing which system consumes the most energy directs efficiency investments where they'll have the biggest impact — typically HVAC at 40-60%.",
              "t": "Smart sub-meters via Modbus TCP, BMS energy management module",
              "d": "`sourcetype=bms:energy`, `sourcetype=bms:meter`",
              "q": "index=building sourcetype=\"bms:energy\" meter_type=\"sub_meter\"\n| bin _time span=1d\n| stats sum(kwh_consumed) as daily_kwh by building_name, system_type, _time\n| eventstats sum(daily_kwh) as total_kwh by building_name, _time\n| eval pct_of_total=round(daily_kwh*100/total_kwh, 1)\n| table _time, building_name, system_type, daily_kwh, pct_of_total\n| sort _time, building_name, -pct_of_total",
              "m": "Deploy sub-meters on HVAC, lighting, plug load, and elevator panels. Ingest readings. Calculate percentage distribution. Trend over time. Identify systems with growing share.",
              "z": "Pie chart of energy by system; stacked area chart over time; system comparison across buildings.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Smart sub-meters via Modbus TCP, BMS energy management module.\n• Ensure the following data sources are available: `sourcetype=bms:energy`, `sourcetype=bms:meter`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy sub-meters on HVAC, lighting, plug load, and elevator panels. Ingest readings. Calculate percentage distribution. Trend over time. Identify systems with growing share.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:energy\" meter_type=\"sub_meter\"\n| bin _time span=1d\n| stats sum(kwh_consumed) as daily_kwh by building_name, system_type, _time\n| eventstats sum(daily_kwh) as total_kwh by building_name, _time\n| eval pct_of_total=round(daily_kwh*100/total_kwh, 1)\n| table _time, building_name, system_type, daily_kwh, pct_of_total\n| sort _time, building_name, -pct_of_total\n```\n\nUnderstanding this SPL\n\n**Sub-Meter Energy Distribution by System** — Sub-metering breaks total energy into HVAC, lighting, plug loads, and other systems. Knowing which system consumes the most energy directs efficiency investments where they'll have the biggest impact — typically HVAC at 40-60%.\n\nDocumented **Data sources**: `sourcetype=bms:energy`, `sourcetype=bms:meter`. **App/TA** (typical add-on context): Smart sub-meters via Modbus TCP, BMS energy management module. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:energy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:energy\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by building_name, system_type, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by building_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct_of_total** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Sub-Meter Energy Distribution by System**): table _time, building_name, system_type, daily_kwh, pct_of_total\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart of energy by system; stacked area chart over time; system comparison across buildings.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with how hard the building is working against its power budget before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.33",
              "n": "After-Hours Energy Waste Detection",
              "c": "high",
              "f": "beginner",
              "v": "Buildings typically waste 20-30% of energy outside occupied hours from HVAC systems running on override, lights left on, or equipment not in unoccupied mode. Detecting after-hours energy above baseline enables quick wins with minimal investment.",
              "t": "Smart utility meters via Modbus, BMS schedule integration",
              "d": "`sourcetype=bms:energy`, `sourcetype=bms:hvac`",
              "q": "index=building sourcetype=\"bms:energy\" meter_type=\"electricity\"\n| eval hour=strftime(_time, \"%H\"), dow=strftime(_time, \"%u\")\n| eval is_after_hours=if((hour<7 OR hour>19) OR dow>5, 1, 0)\n| bin _time span=1h\n| stats sum(kwh_consumed) as hourly_kwh by building_name, is_after_hours, _time\n| where is_after_hours=1 AND hourly_kwh > 0\n| eventstats perc25(hourly_kwh) as baseline_kwh by building_name\n| eval waste_kwh=if(hourly_kwh > baseline_kwh*1.5, hourly_kwh - baseline_kwh, 0)\n| where waste_kwh > 0\n| table _time, building_name, hourly_kwh, baseline_kwh, waste_kwh",
              "m": "Compare after-hours energy consumption to baseline (minimum observed usage). Flag hours where consumption exceeds 150% of baseline. Cross-reference with BMS schedules and override events to identify root cause.",
              "z": "After-hours vs occupied hours comparison chart; waste energy trending; building ranking by waste percentage.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Smart utility meters via Modbus, BMS schedule integration.\n• Ensure the following data sources are available: `sourcetype=bms:energy`, `sourcetype=bms:hvac`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare after-hours energy consumption to baseline (minimum observed usage). Flag hours where consumption exceeds 150% of baseline. Cross-reference with BMS schedules and override events to identify root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:energy\" meter_type=\"electricity\"\n| eval hour=strftime(_time, \"%H\"), dow=strftime(_time, \"%u\")\n| eval is_after_hours=if((hour<7 OR hour>19) OR dow>5, 1, 0)\n| bin _time span=1h\n| stats sum(kwh_consumed) as hourly_kwh by building_name, is_after_hours, _time\n| where is_after_hours=1 AND hourly_kwh > 0\n| eventstats perc25(hourly_kwh) as baseline_kwh by building_name\n| eval waste_kwh=if(hourly_kwh > baseline_kwh*1.5, hourly_kwh - baseline_kwh, 0)\n| where waste_kwh > 0\n| table _time, building_name, hourly_kwh, baseline_kwh, waste_kwh\n```\n\nUnderstanding this SPL\n\n**After-Hours Energy Waste Detection** — Buildings typically waste 20-30% of energy outside occupied hours from HVAC systems running on override, lights left on, or equipment not in unoccupied mode. Detecting after-hours energy above baseline enables quick wins with minimal investment.\n\nDocumented **Data sources**: `sourcetype=bms:energy`, `sourcetype=bms:hvac`. **App/TA** (typical add-on context): Smart utility meters via Modbus, BMS schedule integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:energy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:energy\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_after_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by building_name, is_after_hours, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where is_after_hours=1 AND hourly_kwh > 0` — typically the threshold or rule expression for this monitoring goal.\n• `eventstats` rolls up events into metrics; results are split **by building_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **waste_kwh** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where waste_kwh > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **After-Hours Energy Waste Detection**): table _time, building_name, hourly_kwh, baseline_kwh, waste_kwh\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: After-hours vs occupied hours comparison chart; waste energy trending; building ranking by waste percentage.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with how hard the building is working against its power budget before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.34",
              "n": "Peak Demand Shaving Effectiveness",
              "c": "medium",
              "f": "advanced",
              "v": "Utility demand charges based on peak 15-minute kW draw can represent 30-50% of a commercial building's electric bill. Monitoring peak demand and evaluating load-shedding effectiveness helps control costs and supports demand response programs.",
              "t": "Smart utility meters, BMS demand limiting module",
              "d": "`sourcetype=bms:energy`, `sourcetype=bms:meter`",
              "q": "index=building sourcetype=\"bms:energy\" meter_type=\"electricity\"\n| bin _time span=15m\n| stats avg(kw_demand) as demand_kw by building_name, _time\n| eventstats max(demand_kw) as peak_demand by building_name\n| eval pct_of_peak=round(demand_kw*100/peak_demand, 1)\n| eval month=strftime(_time, \"%Y-%m\")\n| stats max(demand_kw) as monthly_peak_kw, avg(demand_kw) as avg_demand_kw by building_name, month\n| eval load_factor=round(avg_demand_kw*100/monthly_peak_kw, 1)\n| table month, building_name, monthly_peak_kw, avg_demand_kw, load_factor",
              "m": "Monitor 15-minute demand intervals from utility meters. Track monthly peak demand. Calculate load factor (average/peak). Higher load factor means better demand management. Alert when approaching contract demand limit.",
              "z": "Demand profile chart (15-min intervals); monthly peak trending; load factor gauge; demand limit threshold line.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Smart utility meters, BMS demand limiting module.\n• Ensure the following data sources are available: `sourcetype=bms:energy`, `sourcetype=bms:meter`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor 15-minute demand intervals from utility meters. Track monthly peak demand. Calculate load factor (average/peak). Higher load factor means better demand management. Alert when approaching contract demand limit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:energy\" meter_type=\"electricity\"\n| bin _time span=15m\n| stats avg(kw_demand) as demand_kw by building_name, _time\n| eventstats max(demand_kw) as peak_demand by building_name\n| eval pct_of_peak=round(demand_kw*100/peak_demand, 1)\n| eval month=strftime(_time, \"%Y-%m\")\n| stats max(demand_kw) as monthly_peak_kw, avg(demand_kw) as avg_demand_kw by building_name, month\n| eval load_factor=round(avg_demand_kw*100/monthly_peak_kw, 1)\n| table month, building_name, monthly_peak_kw, avg_demand_kw, load_factor\n```\n\nUnderstanding this SPL\n\n**Peak Demand Shaving Effectiveness** — Utility demand charges based on peak 15-minute kW draw can represent 30-50% of a commercial building's electric bill. Monitoring peak demand and evaluating load-shedding effectiveness helps control costs and supports demand response programs.\n\nDocumented **Data sources**: `sourcetype=bms:energy`, `sourcetype=bms:meter`. **App/TA** (typical add-on context): Smart utility meters, BMS demand limiting module. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:energy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:energy\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by building_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by building_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct_of_peak** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **month** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by building_name, month** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **load_factor** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Peak Demand Shaving Effectiveness**): table month, building_name, monthly_peak_kw, avg_demand_kw, load_factor\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Demand profile chart (15-min intervals); monthly peak trending; load factor gauge; demand limit threshold line.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with how hard the building is working against its power budget before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.35",
              "n": "Lighting Schedule Compliance and Override Tracking",
              "c": "low",
              "f": "beginner",
              "v": "Lighting left on outside scheduled hours wastes 10-20% of lighting energy. Tracking schedule compliance and manual override patterns identifies zones where schedules need adjustment or occupancy sensors should be added.",
              "t": "DALI/KNX gateway, BMS lighting control module, smart relay monitoring",
              "d": "`sourcetype=bms:lighting`, `sourcetype=bms:bacnet`",
              "q": "index=building sourcetype=\"bms:lighting\" event_type IN (\"on\", \"off\", \"override\")\n| eval hour=strftime(_time, \"%H\"), dow=strftime(_time, \"%u\")\n| eval scheduled_off=if((hour>=21 OR hour<6) AND dow<=5, 1, if(dow>5, 1, 0))\n| where event_type=\"on\" AND scheduled_off=1\n| stats count as after_hours_on, dc(zone) as zones_affected by building, floor\n| sort -after_hours_on\n| table building, floor, zones_affected, after_hours_on",
              "m": "Monitor lighting circuit status from BMS or DALI/KNX gateway. Compare on/off events to published schedule. Track overrides with timestamp and duration. Alert on sustained after-hours lighting without override justification.",
              "z": "Schedule compliance percentage gauge; after-hours lighting heatmap by floor; override frequency chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DALI/KNX gateway, BMS lighting control module, smart relay monitoring.\n• Ensure the following data sources are available: `sourcetype=bms:lighting`, `sourcetype=bms:bacnet`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor lighting circuit status from BMS or DALI/KNX gateway. Compare on/off events to published schedule. Track overrides with timestamp and duration. Alert on sustained after-hours lighting without override justification.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:lighting\" event_type IN (\"on\", \"off\", \"override\")\n| eval hour=strftime(_time, \"%H\"), dow=strftime(_time, \"%u\")\n| eval scheduled_off=if((hour>=21 OR hour<6) AND dow<=5, 1, if(dow>5, 1, 0))\n| where event_type=\"on\" AND scheduled_off=1\n| stats count as after_hours_on, dc(zone) as zones_affected by building, floor\n| sort -after_hours_on\n| table building, floor, zones_affected, after_hours_on\n```\n\nUnderstanding this SPL\n\n**Lighting Schedule Compliance and Override Tracking** — Lighting left on outside scheduled hours wastes 10-20% of lighting energy. Tracking schedule compliance and manual override patterns identifies zones where schedules need adjustment or occupancy sensors should be added.\n\nDocumented **Data sources**: `sourcetype=bms:lighting`, `sourcetype=bms:bacnet`. **App/TA** (typical add-on context): DALI/KNX gateway, BMS lighting control module, smart relay monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:lighting. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:lighting\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **scheduled_off** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where event_type=\"on\" AND scheduled_off=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by building, floor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Lighting Schedule Compliance and Override Tracking**): table building, floor, zones_affected, after_hours_on\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Schedule compliance percentage gauge; after-hours lighting heatmap by floor; override frequency chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with lighting and daylight the way the schedule and savings plan expect before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.36",
              "n": "Elevator Trip Count and Usage Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Elevator trip counts, floor-by-floor usage patterns, and duty cycles drive maintenance scheduling and modernization planning. High-traffic elevators need more frequent service. Usage data also informs tenant space planning and lobby redesign.",
              "t": "Elevator controller API (KONE, Otis, Schindler, ThyssenKrupp), IoT gateway, custom scripted input",
              "d": "`sourcetype=bms:elevator`",
              "q": "index=building sourcetype=\"bms:elevator\"\n| bin _time span=1h\n| stats count as trips, sum(distance_traveled_m) as total_distance, avg(travel_time_sec) as avg_travel_time, dc(floor_stop) as floors_served by elevator_id, _time\n| eval trips_per_hour=trips\n| eventstats avg(trips_per_hour) as baseline_trips by elevator_id\n| eval utilization=round(trips_per_hour*100/baseline_trips, 1)\n| table _time, elevator_id, trips, floors_served, avg_travel_time, utilization",
              "m": "Integrate with elevator controller via vendor API or IoT gateway. Collect trip events with timestamp, origin floor, destination floor, and travel time. Aggregate into hourly/daily patterns.",
              "z": "Trip count timeline per elevator; floor usage heatmap; peak hour analysis; elevator comparison chart.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Elevator controller API (KONE, Otis, Schindler, ThyssenKrupp), IoT gateway, custom scripted input.\n• Ensure the following data sources are available: `sourcetype=bms:elevator`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate with elevator controller via vendor API or IoT gateway. Collect trip events with timestamp, origin floor, destination floor, and travel time. Aggregate into hourly/daily patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:elevator\"\n| bin _time span=1h\n| stats count as trips, sum(distance_traveled_m) as total_distance, avg(travel_time_sec) as avg_travel_time, dc(floor_stop) as floors_served by elevator_id, _time\n| eval trips_per_hour=trips\n| eventstats avg(trips_per_hour) as baseline_trips by elevator_id\n| eval utilization=round(trips_per_hour*100/baseline_trips, 1)\n| table _time, elevator_id, trips, floors_served, avg_travel_time, utilization\n```\n\nUnderstanding this SPL\n\n**Elevator Trip Count and Usage Trending** — Elevator trip counts, floor-by-floor usage patterns, and duty cycles drive maintenance scheduling and modernization planning. High-traffic elevators need more frequent service. Usage data also informs tenant space planning and lobby redesign.\n\nDocumented **Data sources**: `sourcetype=bms:elevator`. **App/TA** (typical add-on context): Elevator controller API (KONE, Otis, Schindler, ThyssenKrupp), IoT gateway, custom scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:elevator. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:elevator\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by elevator_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **trips_per_hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by elevator_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **utilization** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Elevator Trip Count and Usage Trending**): table _time, elevator_id, trips, floors_served, avg_travel_time, utilization\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Trip count timeline per elevator; floor usage heatmap; peak hour analysis; elevator comparison chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with elevators and other moving equipment that people rely on before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.37",
              "n": "Elevator Door Fault Frequency and Prediction",
              "c": "critical",
              "f": "advanced",
              "v": "Door faults are the most common elevator malfunction and the leading cause of entrapments. Monitoring door close force, cycle time, and motor amperage detects deteriorating door systems 2-6 weeks before failure — preventing passenger entrapments and service calls.",
              "t": "Elevator controller API, vibration/current sensors via IoT gateway",
              "d": "`sourcetype=bms:elevator`",
              "q": "index=building sourcetype=\"bms:elevator\" event_type=\"door_*\"\n| eval is_fault=if(event_type=\"door_fault\" OR door_close_time_ms > 5000 OR door_reversal=1, 1, 0)\n| bin _time span=1d\n| stats sum(is_fault) as daily_faults, count as total_door_cycles, avg(door_close_time_ms) as avg_close_time, sum(door_reversal) as reversals by elevator_id, _time\n| eval fault_rate_pct=round(daily_faults*100/total_door_cycles, 2)\n| where fault_rate_pct > 1 OR daily_faults > 5\n| table _time, elevator_id, daily_faults, total_door_cycles, fault_rate_pct, avg_close_time, reversals",
              "m": "Collect door cycle events including close force, cycle time, and reversal events. Trend door close time — increasing values indicate worn rollers, dirty tracks, or actuator degradation. Alert when fault rate exceeds 1% or close time exceeds 5 seconds.",
              "z": "Door fault trend per elevator; close time degradation chart; reversal frequency; fault prediction timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Elevator controller API, vibration/current sensors via IoT gateway.\n• Ensure the following data sources are available: `sourcetype=bms:elevator`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect door cycle events including close force, cycle time, and reversal events. Trend door close time — increasing values indicate worn rollers, dirty tracks, or actuator degradation. Alert when fault rate exceeds 1% or close time exceeds 5 seconds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:elevator\" event_type=\"door_*\"\n| eval is_fault=if(event_type=\"door_fault\" OR door_close_time_ms > 5000 OR door_reversal=1, 1, 0)\n| bin _time span=1d\n| stats sum(is_fault) as daily_faults, count as total_door_cycles, avg(door_close_time_ms) as avg_close_time, sum(door_reversal) as reversals by elevator_id, _time\n| eval fault_rate_pct=round(daily_faults*100/total_door_cycles, 2)\n| where fault_rate_pct > 1 OR daily_faults > 5\n| table _time, elevator_id, daily_faults, total_door_cycles, fault_rate_pct, avg_close_time, reversals\n```\n\nUnderstanding this SPL\n\n**Elevator Door Fault Frequency and Prediction** — Door faults are the most common elevator malfunction and the leading cause of entrapments. Monitoring door close force, cycle time, and motor amperage detects deteriorating door systems 2-6 weeks before failure — preventing passenger entrapments and service calls.\n\nDocumented **Data sources**: `sourcetype=bms:elevator`. **App/TA** (typical add-on context): Elevator controller API, vibration/current sensors via IoT gateway. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:elevator. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:elevator\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_fault** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by elevator_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fault_rate_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fault_rate_pct > 1 OR daily_faults > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Elevator Door Fault Frequency and Prediction**): table _time, elevator_id, daily_faults, total_door_cycles, fault_rate_pct, avg_close_time, reversals\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Door fault trend per elevator; close time degradation chart; reversal frequency; fault prediction timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with elevators and other moving equipment that people rely on before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.38",
              "n": "Elevator Wait Time and Service Quality",
              "c": "medium",
              "f": "intermediate",
              "v": "Average wait time is the primary measure of elevator service quality visible to occupants. Industry target is under 30 seconds for office buildings. Excessive wait times indicate traffic imbalance, out-of-service cars, or poor dispatching algorithms.",
              "t": "Elevator controller API, hall call registration sensors",
              "d": "`sourcetype=bms:elevator`",
              "q": "index=building sourcetype=\"bms:elevator\" event_type=\"hall_call\"\n| eval wait_time_sec=(response_time - call_time)\n| bin _time span=1h\n| stats avg(wait_time_sec) as avg_wait, perc95(wait_time_sec) as p95_wait, count as total_calls by building, elevator_group, _time\n| eval sla_status=if(avg_wait<=30, \"Meeting SLA\", \"SLA Breach\")\n| where avg_wait > 30 OR p95_wait > 60\n| table _time, building, elevator_group, avg_wait, p95_wait, total_calls, sla_status",
              "m": "Capture hall call registration and car arrival timestamps from elevator controller. Calculate wait time per call. Aggregate into hourly metrics. Compare to industry SLA (30s average, 60s 95th percentile).",
              "z": "Wait time gauge per elevator group; hourly pattern chart; SLA compliance timeline; floor-level analysis.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Elevator controller API, hall call registration sensors.\n• Ensure the following data sources are available: `sourcetype=bms:elevator`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture hall call registration and car arrival timestamps from elevator controller. Calculate wait time per call. Aggregate into hourly metrics. Compare to industry SLA (30s average, 60s 95th percentile).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:elevator\" event_type=\"hall_call\"\n| eval wait_time_sec=(response_time - call_time)\n| bin _time span=1h\n| stats avg(wait_time_sec) as avg_wait, perc95(wait_time_sec) as p95_wait, count as total_calls by building, elevator_group, _time\n| eval sla_status=if(avg_wait<=30, \"Meeting SLA\", \"SLA Breach\")\n| where avg_wait > 30 OR p95_wait > 60\n| table _time, building, elevator_group, avg_wait, p95_wait, total_calls, sla_status\n```\n\nUnderstanding this SPL\n\n**Elevator Wait Time and Service Quality** — Average wait time is the primary measure of elevator service quality visible to occupants. Industry target is under 30 seconds for office buildings. Excessive wait times indicate traffic imbalance, out-of-service cars, or poor dispatching algorithms.\n\nDocumented **Data sources**: `sourcetype=bms:elevator`. **App/TA** (typical add-on context): Elevator controller API, hall call registration sensors. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:elevator. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:elevator\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **wait_time_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by building, elevator_group, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **sla_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_wait > 30 OR p95_wait > 60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Elevator Wait Time and Service Quality**): table _time, building, elevator_group, avg_wait, p95_wait, total_calls, sla_status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Wait time gauge per elevator group; hourly pattern chart; SLA compliance timeline; floor-level analysis.",
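              "_exSpl": "Optional sketch for the hourly pattern chart listed under visualizations (assumes the same `response_time`/`call_time` fields as the main search; adjust index and sourcetype to your feed):\n\n```spl\nindex=building sourcetype=\"bms:elevator\" event_type=\"hall_call\"\n| eval wait_time_sec=(response_time - call_time)\n| eval hour=strftime(_time, \"%H\")\n| chart avg(wait_time_sec) as avg_wait over hour by elevator_group\n```",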
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with elevators and other moving equipment that people rely on before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.39",
              "n": "Water Consumption Trending and Anomaly Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Undetected water leaks can waste thousands of liters daily and cause structural damage. Continuous flow monitoring detects leaks, running toilets, and irrigation faults within hours rather than the weeks it takes for a utility bill to reveal problems.",
              "t": "Smart water meters (Modbus, pulse), IoT flow sensors, building management API",
              "d": "`sourcetype=bms:water`",
              "q": "index=building sourcetype=\"bms:water\" meter_type=\"main_feed\"\n| bin _time span=1h\n| stats sum(liters) as hourly_liters by building_name, _time\n| eval hour=strftime(_time, \"%H\"), dow=strftime(_time, \"%u\")\n| eval is_occupied=if(hour>=7 AND hour<=19 AND dow<=5, 1, 0)\n| where is_occupied=0 AND hourly_liters > 50\n| eventstats avg(hourly_liters) as baseline_unoccupied by building_name\n| eval anomaly_ratio=round(hourly_liters/baseline_unoccupied, 1)\n| where anomaly_ratio > 3\n| table _time, building_name, hourly_liters, baseline_unoccupied, anomaly_ratio",
              "m": "Install smart water meters on main feeds and key branches. Monitor flow during unoccupied hours — sustained flow indicates leaks or running fixtures. Alert when unoccupied consumption exceeds 3x baseline.",
              "z": "Water consumption timeline; occupied vs unoccupied comparison; anomaly alert dashboard; leak detection map.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Smart water meters (Modbus, pulse), IoT flow sensors, building management API.\n• Ensure the following data sources are available: `sourcetype=bms:water`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall smart water meters on main feeds and key branches. Monitor flow during unoccupied hours — sustained flow indicates leaks or running fixtures. Alert when unoccupied consumption exceeds 3x baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:water\" meter_type=\"main_feed\"\n| bin _time span=1h\n| stats sum(liters) as hourly_liters by building_name, _time\n| eval hour=strftime(_time, \"%H\"), dow=strftime(_time, \"%u\")\n| eval is_occupied=if(hour>=7 AND hour<=19 AND dow<=5, 1, 0)\n| where is_occupied=0 AND hourly_liters > 50\n| eventstats avg(hourly_liters) as baseline_unoccupied by building_name\n| eval anomaly_ratio=round(hourly_liters/baseline_unoccupied, 1)\n| where anomaly_ratio > 3\n| table _time, building_name, hourly_liters, baseline_unoccupied, anomaly_ratio\n```\n\nUnderstanding this SPL\n\n**Water Consumption Trending and Anomaly Detection** — Undetected water leaks can waste thousands of liters daily and cause structural damage. Continuous flow monitoring detects leaks, running toilets, and irrigation faults within hours rather than the weeks it takes for a utility bill to reveal problems.\n\nDocumented **Data sources**: `sourcetype=bms:water`. **App/TA** (typical add-on context): Smart water meters (Modbus, pulse), IoT flow sensors, building management API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:water. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:water\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by building_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_occupied** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_occupied=0 AND hourly_liters > 50` — typically the threshold or rule expression for this monitoring goal.\n• `eventstats` rolls up events into metrics; results are split **by building_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **anomaly_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where anomaly_ratio > 3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Water Consumption Trending and Anomaly Detection**): table _time, building_name, hourly_liters, baseline_unoccupied, anomaly_ratio\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Water consumption timeline; occupied vs unoccupied comparison; anomaly alert dashboard; leak detection map.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with water use that might mean leaks or waste before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.40",
              "n": "Domestic Hot Water Temperature Compliance (Legionella Prevention)",
              "c": "critical",
              "f": "beginner",
              "v": "Legionella bacteria thrive in water between 20-45°C. Building codes require domestic hot water storage above 60°C and distribution above 50°C. Temperature drops below these thresholds create serious health risks and regulatory violations.",
              "t": "BACnet gateway, temperature sensors via Modbus or IoT",
              "d": "`sourcetype=bms:water`, `sourcetype=bms:bacnet`",
              "q": "index=building sourcetype=\"bms:water\" system=\"domestic_hot_water\"\n| bin _time span=15m\n| stats avg(temperature_c) as avg_temp, min(temperature_c) as min_temp by location, sensor_type, _time\n| eval compliance=case(\n    sensor_type=\"storage\" AND min_temp>=60, \"Compliant\",\n    sensor_type=\"storage\" AND min_temp<60, \"NON-COMPLIANT\",\n    sensor_type=\"return\" AND min_temp>=50, \"Compliant\",\n    sensor_type=\"return\" AND min_temp<50, \"NON-COMPLIANT\")\n| where compliance=\"NON-COMPLIANT\"\n| table _time, location, sensor_type, avg_temp, min_temp, compliance",
              "m": "Monitor hot water storage tank and return line temperatures continuously. Alert immediately on any reading below compliance threshold. Log all temperature data for regulatory audit trail.",
              "z": "Temperature compliance gauge per system; temperature timeline; non-compliance alert log; regulatory compliance report.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BACnet gateway, temperature sensors via Modbus or IoT.\n• Ensure the following data sources are available: `sourcetype=bms:water`, `sourcetype=bms:bacnet`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor hot water storage tank and return line temperatures continuously. Alert immediately on any reading below compliance threshold. Log all temperature data for regulatory audit trail.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:water\" system=\"domestic_hot_water\"\n| bin _time span=15m\n| stats avg(temperature_c) as avg_temp, min(temperature_c) as min_temp by location, sensor_type, _time\n| eval compliance=case(\n    sensor_type=\"storage\" AND min_temp>=60, \"Compliant\",\n    sensor_type=\"storage\" AND min_temp<60, \"NON-COMPLIANT\",\n    sensor_type=\"return\" AND min_temp>=50, \"Compliant\",\n    sensor_type=\"return\" AND min_temp<50, \"NON-COMPLIANT\")\n| where compliance=\"NON-COMPLIANT\"\n| table _time, location, sensor_type, avg_temp, min_temp, compliance\n```\n\nUnderstanding this SPL\n\n**Domestic Hot Water Temperature Compliance (Legionella Prevention)** — Legionella bacteria thrive in water between 20-45°C. Building codes require domestic hot water storage above 60°C and distribution above 50°C. Temperature drops below these thresholds create serious health risks and regulatory violations.\n\nDocumented **Data sources**: `sourcetype=bms:water`, `sourcetype=bms:bacnet`. **App/TA** (typical add-on context): BACnet gateway, temperature sensors via Modbus or IoT. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:water. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:water\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by location, sensor_type, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **compliance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where compliance=\"NON-COMPLIANT\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Domestic Hot Water Temperature Compliance (Legionella Prevention)**): table _time, location, sensor_type, avg_temp, min_temp, compliance\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Temperature compliance gauge per system; temperature timeline; non-compliance alert log; regulatory compliance report.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with heating, cooling, and room comfort before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.41",
              "n": "Cooling Tower Water Chemistry Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Cooling tower water chemistry (conductivity, pH, biocide levels) directly impacts heat transfer efficiency and equipment life. Scale buildup from poor chemistry increases chiller energy by 2-5% per mm of scale and accelerates corrosion.",
              "t": "Water chemistry controller (Nalco, Chemtrol) via Modbus or API, IoT sensors",
              "d": "`sourcetype=bms:water`",
              "q": "index=building sourcetype=\"bms:water\" system=\"cooling_tower\"\n| bin _time span=1h\n| stats avg(conductivity_us) as avg_conductivity, avg(ph) as avg_ph, avg(cycles_of_concentration) as avg_coc, latest(biocide_ppm) as biocide_level by tower_id, _time\n| eval chem_status=case(\n    avg_conductivity > 3000, \"High Conductivity - Blowdown Needed\",\n    avg_ph < 7.0 OR avg_ph > 9.0, \"pH Out of Range\",\n    biocide_level < 0.5, \"Low Biocide\",\n    avg_coc > 6, \"High Concentration\",\n    1=1, \"Normal\")\n| where chem_status!=\"Normal\"\n| table _time, tower_id, chem_status, avg_conductivity, avg_ph, avg_coc, biocide_level",
              "m": "Integrate with water treatment controller via Modbus or API. Monitor conductivity, pH, cycles of concentration, and biocide levels. Alert on out-of-range values. Track chemical consumption trends.",
              "z": "Chemistry parameter trend lines; status indicator per tower; chemical consumption dashboard.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Water chemistry controller (Nalco, Chemtrol) via Modbus or API, IoT sensors.\n• Ensure the following data sources are available: `sourcetype=bms:water`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate with water treatment controller via Modbus or API. Monitor conductivity, pH, cycles of concentration, and biocide levels. Alert on out-of-range values. Track chemical consumption trends.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:water\" system=\"cooling_tower\"\n| bin _time span=1h\n| stats avg(conductivity_us) as avg_conductivity, avg(ph) as avg_ph, avg(cycles_of_concentration) as avg_coc, latest(biocide_ppm) as biocide_level by tower_id, _time\n| eval chem_status=case(\n    avg_conductivity > 3000, \"High Conductivity - Blowdown Needed\",\n    avg_ph < 7.0 OR avg_ph > 9.0, \"pH Out of Range\",\n    biocide_level < 0.5, \"Low Biocide\",\n    avg_coc > 6, \"High Concentration\",\n    1=1, \"Normal\")\n| where chem_status!=\"Normal\"\n| table _time, tower_id, chem_status, avg_conductivity, avg_ph, avg_coc, biocide_level\n```\n\nUnderstanding this SPL\n\n**Cooling Tower Water Chemistry Monitoring** — Cooling tower water chemistry (conductivity, pH, biocide levels) directly impacts heat transfer efficiency and equipment life. Scale buildup from poor chemistry increases chiller energy by 2-5% per mm of scale and accelerates corrosion.\n\nDocumented **Data sources**: `sourcetype=bms:water`. **App/TA** (typical add-on context): Water chemistry controller (Nalco, Chemtrol) via Modbus or API, IoT sensors. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:water. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:water\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by tower_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **chem_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where chem_status!=\"Normal\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cooling Tower Water Chemistry Monitoring**): table _time, tower_id, chem_status, avg_conductivity, avg_ph, avg_coc, biocide_level\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Chemistry parameter trend lines; status indicator per tower; chemical consumption dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with water use that might mean leaks or waste before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.42",
              "n": "Fire Alarm Panel Zone Health and Event Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Fire alarm panel trouble conditions (ground faults, open circuits, dirty detectors) must be resolved promptly to maintain life safety protection. Centralizing fire panel events enables multi-building oversight and faster trouble resolution.",
              "t": "Fire alarm panel integration via syslog, serial gateway, or Notifier/Edwards/Simplex API",
              "d": "`sourcetype=bms:fire`",
              "q": "index=building sourcetype=\"bms:fire\" event_type IN (\"trouble\", \"alarm\", \"supervisory\")\n| bin _time span=1h\n| stats count as events, dc(zone) as zones_affected, values(event_type) as event_types, latest(description) as latest_event by panel_id, building, _time\n| eval severity=case(\n    mvfind(event_types, \"alarm\")>=0, \"ALARM\",\n    mvfind(event_types, \"supervisory\")>=0, \"Supervisory\",\n    mvfind(event_types, \"trouble\")>=0, \"Trouble\")\n| table _time, building, panel_id, severity, zones_affected, events, latest_event\n| sort -severity, -events",
              "m": "Connect fire alarm panels via syslog receiver or serial-to-IP converter. Normalize events into alarm, trouble, and supervisory categories. Alert on any alarm immediately. Track trouble conditions for resolution within code-required timeframes.",
              "z": "Fire panel status dashboard; event timeline; trouble condition age tracking; multi-building overview map.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Fire alarm panel integration via syslog, serial gateway, or Notifier/Edwards/Simplex API.\n• Ensure the following data sources are available: `sourcetype=bms:fire`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConnect fire alarm panels via syslog receiver or serial-to-IP converter. Normalize events into alarm, trouble, and supervisory categories. Alert on any alarm immediately. Track trouble conditions for resolution within code-required timeframes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:fire\" event_type IN (\"trouble\", \"alarm\", \"supervisory\")\n| bin _time span=1h\n| stats count as events, dc(zone) as zones_affected, values(event_type) as event_types, latest(description) as latest_event by panel_id, building, _time\n| eval severity=case(\n    mvfind(event_types, \"alarm\")>=0, \"ALARM\",\n    mvfind(event_types, \"supervisory\")>=0, \"Supervisory\",\n    mvfind(event_types, \"trouble\")>=0, \"Trouble\")\n| table _time, building, panel_id, severity, zones_affected, events, latest_event\n| sort -severity, -events\n```\n\nUnderstanding this SPL\n\n**Fire Alarm Panel Zone Health and Event Monitoring** — Fire alarm panel trouble conditions (ground faults, open circuits, dirty detectors) must be resolved promptly to maintain life safety protection. Centralizing fire panel events enables multi-building oversight and faster trouble resolution.\n\nDocumented **Data sources**: `sourcetype=bms:fire`. **App/TA** (typical add-on context): Fire alarm panel integration via syslog, serial gateway, or Notifier/Edwards/Simplex API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:fire. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:fire\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by panel_id, building, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Fire Alarm Panel Zone Health and Event Monitoring**): table _time, building, panel_id, severity, zones_affected, events, latest_event\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Fire panel status dashboard; event timeline; trouble condition age tracking; multi-building overview map.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with life-safety water and fire equipment before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.43",
              "n": "Sprinkler System Valve Tamper and Supervisory Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Closed or partially closed sprinkler valves are the number one reason sprinkler systems fail to operate during a fire. Valve tamper switches provide immediate notification when a valve position changes — NFPA 25 requires monitoring.",
              "t": "Fire alarm panel integration (valve tamper switches connected to panel), syslog",
              "d": "`sourcetype=bms:fire`",
              "q": "index=building sourcetype=\"bms:fire\" event_type=\"supervisory\" device_type IN (\"valve_tamper\", \"waterflow\", \"low_pressure\")\n| eval urgency=case(\n    device_type=\"valve_tamper\" AND state=\"closed\", \"CRITICAL\",\n    device_type=\"waterflow\", \"ALARM\",\n    device_type=\"low_pressure\", \"HIGH\",\n    1=1, \"Medium\")\n| stats count as events, latest(state) as current_state, latest(_time) as last_event by building, zone, device_type, device_id\n| where current_state IN (\"closed\", \"active\", \"low\")\n| table building, zone, device_type, device_id, current_state, urgency, last_event, events\n| sort urgency",
              "m": "Ensure all valve tamper switches, waterflow switches, and pressure switches report to fire panel. Forward panel events to Splunk. Alert immediately on valve tamper events. Track restoration time for compliance.",
              "z": "Valve status map by building; tamper event timeline; open trouble list; NFPA 25 compliance dashboard.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Fire alarm panel integration (valve tamper switches connected to panel), syslog.\n• Ensure the following data sources are available: `sourcetype=bms:fire`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure all valve tamper switches, waterflow switches, and pressure switches report to fire panel. Forward panel events to Splunk. Alert immediately on valve tamper events. Track restoration time for compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:fire\" event_type=\"supervisory\" device_type IN (\"valve_tamper\", \"waterflow\", \"low_pressure\")\n| eval urgency=case(\n    device_type=\"valve_tamper\" AND state=\"closed\", \"CRITICAL\",\n    device_type=\"waterflow\", \"ALARM\",\n    device_type=\"low_pressure\", \"HIGH\",\n    1=1, \"Medium\")\n| stats count as events, latest(state) as current_state, latest(_time) as last_event by building, zone, device_type, device_id\n| where current_state IN (\"closed\", \"active\", \"low\")\n| table building, zone, device_type, device_id, current_state, urgency, last_event, events\n| sort urgency\n```\n\nUnderstanding this SPL\n\n**Sprinkler System Valve Tamper and Supervisory Monitoring** — Closed or partially closed sprinkler valves are the number one reason sprinkler systems fail to operate during a fire. Valve tamper switches provide immediate notification when a valve position changes — NFPA 25 requires monitoring.\n\nDocumented **Data sources**: `sourcetype=bms:fire`. **App/TA** (typical add-on context): Fire alarm panel integration (valve tamper switches connected to panel), syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:fire. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:fire\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **urgency** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by building, zone, device_type, device_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where current_state IN (\"closed\", \"active\", \"low\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Sprinkler System Valve Tamper and Supervisory Monitoring**): table building, zone, device_type, device_id, current_state, urgency, last_event, events\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Valve status map by building; tamper event timeline; open trouble list; NFPA 25 compliance dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with life-safety water and fire equipment before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.44",
              "n": "Fire Pump Controller Status and Run Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Fire pumps must be ready to operate instantly during a fire. Controller monitoring tracks pump starts, run duration, phase voltage, and jockey pump cycling. Excessive jockey pump cycling often indicates system leaks that degrade firefighting readiness.",
              "t": "Fire pump controller (via syslog or serial gateway), BMS integration",
              "d": "`sourcetype=bms:fire`",
              "q": "index=building sourcetype=\"bms:fire\" device_type=\"fire_pump\"\n| bin _time span=1d\n| stats count(eval(event_type=\"pump_start\")) as starts, sum(run_duration_min) as total_run_min, avg(phase_a_voltage) as avg_voltage, count(eval(pump_type=\"jockey\" AND event_type=\"pump_start\")) as jockey_starts by building, pump_id, _time\n| eval jockey_concern=if(jockey_starts > 20, \"Excessive Cycling - Check for Leaks\", \"Normal\")\n| table _time, building, pump_id, starts, total_run_min, avg_voltage, jockey_starts, jockey_concern",
              "m": "Monitor fire pump controller via syslog or serial gateway. Track all pump starts (test and demand), run duration, and electrical parameters. Flag excessive jockey pump cycling (>20 starts/day). Log for NFPA 25 weekly/monthly test compliance.",
              "z": "Pump run history timeline; jockey pump cycling chart; weekly test compliance tracker; voltage quality gauge.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Fire pump controller (via syslog or serial gateway), BMS integration.\n• Ensure the following data sources are available: `sourcetype=bms:fire`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor fire pump controller via syslog or serial gateway. Track all pump starts (test and demand), run duration, and electrical parameters. Flag excessive jockey pump cycling (>20 starts/day). Log for NFPA 25 weekly/monthly test compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:fire\" device_type=\"fire_pump\"\n| bin _time span=1d\n| stats count(eval(event_type=\"pump_start\")) as starts, sum(run_duration_min) as total_run_min, avg(phase_a_voltage) as avg_voltage, count(eval(pump_type=\"jockey\" AND event_type=\"pump_start\")) as jockey_starts by building, pump_id, _time\n| eval jockey_concern=if(jockey_starts > 20, \"Excessive Cycling - Check for Leaks\", \"Normal\")\n| table _time, building, pump_id, starts, total_run_min, avg_voltage, jockey_starts, jockey_concern\n```\n\nUnderstanding this SPL\n\n**Fire Pump Controller Status and Run Monitoring** — Fire pumps must be ready to operate instantly during a fire. Controller monitoring tracks pump starts, run duration, phase voltage, and jockey pump cycling. Excessive jockey pump cycling often indicates system leaks that degrade firefighting readiness.\n\nDocumented **Data sources**: `sourcetype=bms:fire`. **App/TA** (typical add-on context): Fire pump controller (via syslog or serial gateway), BMS integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:fire. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:fire\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by building, pump_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **jockey_concern** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Fire Pump Controller Status and Run Monitoring**): table _time, building, pump_id, starts, total_run_min, avg_voltage, jockey_starts, jockey_concern\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pump run history timeline; jockey pump cycling chart; weekly test compliance tracker; voltage quality gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with life-safety water and fire equipment before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.45",
              "n": "BACnet Controller Communication Health",
              "c": "high",
              "f": "intermediate",
              "v": "BACnet controllers are the brains of building automation — each manages a zone, floor, or system. Communication loss means affected areas run in fallback mode with no remote visibility or control. Monitoring network health catches issues before occupants notice.",
              "t": "BACnet network scanner, BMS supervisory controller, SNMP",
              "d": "`sourcetype=bms:bacnet`",
              "q": "index=building sourcetype=\"bms:bacnet\" event_type=\"device_status\"\n| stats latest(status) as current_status, latest(_time) as last_seen, count(eval(status=\"offline\")) as offline_events by device_id, device_name, network_id, location\n| eval hours_since_seen=round((now()-last_seen)/3600, 1)\n| eval health=case(\n    current_status=\"offline\" OR hours_since_seen > 1, \"OFFLINE\",\n    offline_events > 3, \"Unstable\",\n    1=1, \"Online\")\n| where health!=\"Online\"\n| table device_id, device_name, location, network_id, health, hours_since_seen, offline_events\n| sort health, -hours_since_seen",
              "m": "Periodically scan BACnet network for device presence using BACnet Who-Is/I-Am or poll device status from supervisory controller. Track response times and offline events. Alert on any controller going offline or showing unstable communication.",
              "z": "Controller status map; network health dashboard; offline device list; communication stability trend.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BACnet network scanner, BMS supervisory controller, SNMP.\n• Ensure the following data sources are available: `sourcetype=bms:bacnet`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPeriodically scan BACnet network for device presence using BACnet Who-Is/I-Am or poll device status from supervisory controller. Track response times and offline events. Alert on any controller going offline or showing unstable communication.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:bacnet\" event_type=\"device_status\"\n| stats latest(status) as current_status, latest(_time) as last_seen, count(eval(status=\"offline\")) as offline_events by device_id, device_name, network_id, location\n| eval hours_since_seen=round((now()-last_seen)/3600, 1)\n| eval health=case(\n    current_status=\"offline\" OR hours_since_seen > 1, \"OFFLINE\",\n    offline_events > 3, \"Unstable\",\n    1=1, \"Online\")\n| where health!=\"Online\"\n| table device_id, device_name, location, network_id, health, hours_since_seen, offline_events\n| sort health, -hours_since_seen\n```\n\nUnderstanding this SPL\n\n**BACnet Controller Communication Health** — BACnet controllers are the brains of building automation — each manages a zone, floor, or system. Communication loss means affected areas run in fallback mode with no remote visibility or control. Monitoring network health catches issues before occupants notice.\n\nDocumented **Data sources**: `sourcetype=bms:bacnet`. **App/TA** (typical add-on context): BACnet network scanner, BMS supervisory controller, SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:bacnet. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:bacnet\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_id, device_name, network_id, location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since_seen** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where health!=\"Online\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **BACnet Controller Communication Health**): table device_id, device_name, location, network_id, health, hours_since_seen, offline_events\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Controller status map; network health dashboard; offline device list; communication stability trend.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with controllers and the data links the building team relies on before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.46",
              "n": "BMS Alarm Flood Detection and Suppression",
              "c": "medium",
              "f": "intermediate",
              "v": "A single equipment failure (e.g., chiller trip) can generate hundreds of cascading BMS alarms across downstream systems — overwhelming operators and burying the root cause. Detecting alarm floods and identifying the source event reduces mean time to resolve.",
              "t": "BMS alarm feed via syslog, BACnet alarm notifications, vendor API",
              "d": "`sourcetype=bms:bacnet`, `sourcetype=bms:hvac`",
              "q": "index=building sourcetype=\"bms:*\" event_type=\"alarm\"\n| bin _time span=5m\n| stats count as alarm_count, dc(device_id) as devices_affected, dc(alarm_type) as alarm_types, earliest(alarm_description) as first_alarm, earliest(device_name) as first_device by building, _time\n| where alarm_count > 20\n| eval flood_status=\"ALARM FLOOD\", probable_root_cause=first_device.\" - \".first_alarm\n| table _time, building, alarm_count, devices_affected, alarm_types, probable_root_cause",
              "m": "Aggregate all BMS alarms. Detect bursts exceeding 20 alarms in 5 minutes. Identify the earliest alarm as probable root cause. Suppress downstream alarms in operator dashboard. Track alarm flood frequency for system reliability improvement.",
              "z": "Alarm volume timeline; flood event timeline; root cause table; alarm type distribution during floods.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS alarm feed via syslog, BACnet alarm notifications, vendor API.\n• Ensure the following data sources are available: `sourcetype=bms:bacnet`, `sourcetype=bms:hvac`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate all BMS alarms. Detect bursts exceeding 20 alarms in 5 minutes. Identify the earliest alarm as probable root cause. Suppress downstream alarms in operator dashboard. Track alarm flood frequency for system reliability improvement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:*\" event_type=\"alarm\"\n| bin _time span=5m\n| stats count as alarm_count, dc(device_id) as devices_affected, dc(alarm_type) as alarm_types, earliest(alarm_description) as first_alarm, earliest(device_name) as first_device by building, _time\n| where alarm_count > 20\n| eval flood_status=\"ALARM FLOOD\", probable_root_cause=first_device.\" - \".first_alarm\n| table _time, building, alarm_count, devices_affected, alarm_types, probable_root_cause\n```\n\nUnderstanding this SPL\n\n**BMS Alarm Flood Detection and Suppression** — A single equipment failure (e.g., chiller trip) can generate hundreds of cascading BMS alarms across downstream systems — overwhelming operators and burying the root cause. Detecting alarm floods and identifying the source event reduces mean time to resolve.\n\nDocumented **Data sources**: `sourcetype=bms:bacnet`, `sourcetype=bms:hvac`. **App/TA** (typical add-on context): BMS alarm feed via syslog, BACnet alarm notifications, vendor API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:*. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:*\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by building, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where alarm_count > 20` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **flood_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **BMS Alarm Flood Detection and Suppression**): table _time, building, alarm_count, devices_affected, alarm_types, probable_root_cause\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alarm volume timeline; flood event timeline; root cause table; alarm type distribution during floods.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with a sudden rush of control alarms in the building before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.47",
              "n": "Parking Occupancy Trending and Capacity Planning",
              "c": "low",
              "f": "beginner",
              "v": "Real-time parking occupancy data optimizes space utilization, reduces circling time, and informs expansion or reduction decisions. Trending occupancy by day-of-week and time helps set pricing tiers and plan EV charging capacity.",
              "t": "Parking guidance system API (SKIDATA, SWARCO, ParkAssist), gate controller, IoT occupancy sensors",
              "d": "`sourcetype=bms:parking`",
              "q": "index=building sourcetype=\"bms:parking\"\n| bin _time span=30m\n| stats latest(occupied_spaces) as occupied, latest(total_spaces) as total by parking_facility, level, _time\n| eval available=total-occupied, occupancy_pct=round(occupied*100/total, 1)\n| eval status=case(occupancy_pct>=95, \"Full\", occupancy_pct>=80, \"Busy\", occupancy_pct>=50, \"Moderate\", occupancy_pct<50, \"Available\")\n| table _time, parking_facility, level, occupied, available, total, occupancy_pct, status",
              "m": "Integrate with parking guidance system API or gate entry/exit counters. Track occupancy per level. Trend by time of day and day of week. Alert when approaching capacity.",
              "z": "Occupancy gauge per level; daily pattern heatmap; available spaces single value; trend chart.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Parking guidance system API (SKIDATA, SWARCO, ParkAssist), gate controller, IoT occupancy sensors.\n• Ensure the following data sources are available: `sourcetype=bms:parking`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate with parking guidance system API or gate entry/exit counters. Track occupancy per level. Trend by time of day and day of week. Alert when approaching capacity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:parking\"\n| bin _time span=30m\n| stats latest(occupied_spaces) as occupied, latest(total_spaces) as total by parking_facility, level, _time\n| eval available=total-occupied, occupancy_pct=round(occupied*100/total, 1)\n| eval status=case(occupancy_pct>=95, \"Full\", occupancy_pct>=80, \"Busy\", occupancy_pct>=50, \"Moderate\", occupancy_pct<50, \"Available\")\n| table _time, parking_facility, level, occupied, available, total, occupancy_pct, status\n```\n\nUnderstanding this SPL\n\n**Parking Occupancy Trending and Capacity Planning** — Real-time parking occupancy data optimizes space utilization, reduces circling time, and informs expansion or reduction decisions. Trending occupancy by day-of-week and time helps set pricing tiers and plan EV charging capacity.\n\nDocumented **Data sources**: `sourcetype=bms:parking`. **App/TA** (typical add-on context): Parking guidance system API (SKIDATA, SWARCO, ParkAssist), gate controller, IoT occupancy sensors. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:parking. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:parking\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by parking_facility, level, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **available** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Parking Occupancy Trending and Capacity Planning**): table _time, parking_facility, level, occupied, available, total, occupancy_pct, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Occupancy gauge per level; daily pattern heatmap; available spaces single value; trend chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with how full the garage is and how people move through it before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.48",
              "n": "EV Charging Station Availability and Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "EV charging infrastructure is rapidly growing in commercial buildings. Monitoring station availability, session duration, and energy delivery helps right-size charging capacity, identify broken stations, and plan electrical load management.",
              "t": "OCPP-compatible charging management system (ChargePoint, Blink, ABB), custom API input",
              "d": "`sourcetype=bms:ev_charging`",
              "q": "index=building sourcetype=\"bms:ev_charging\"\n| eval session_duration_hrs=round(session_duration_min/60, 1), energy_kwh=round(energy_delivered_wh/1000, 1)\n| bin _time span=1d\n| stats count as sessions, avg(session_duration_hrs) as avg_duration, sum(energy_kwh) as total_energy_kwh, dc(station_id) as active_stations, count(eval(status=\"faulted\")) as faulted_count by facility, _time\n| lookup ev_station_inventory.csv facility OUTPUTNEW total_stations\n| eval utilization_pct=round(active_stations*100/total_stations, 1)\n| table _time, facility, sessions, avg_duration, total_energy_kwh, active_stations, total_stations, utilization_pct, faulted_count",
              "m": "Connect to charging management platform via OCPP or vendor API. Track session starts, energy delivered, and station faults. Monitor utilization rates. Alert on faulted stations. Feed energy data to demand management.",
              "z": "Station availability map; energy delivery trend; utilization by time of day; faulted station alerts; session duration histogram.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OCPP-compatible charging management system (ChargePoint, Blink, ABB), custom API input.\n• Ensure the following data sources are available: `sourcetype=bms:ev_charging`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConnect to charging management platform via OCPP or vendor API. Track session starts, energy delivered, and station faults. Monitor utilization rates. Alert on faulted stations. Feed energy data to demand management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:ev_charging\"\n| eval session_duration_hrs=round(session_duration_min/60, 1), energy_kwh=round(energy_delivered_wh/1000, 1)\n| bin _time span=1d\n| stats count as sessions, avg(session_duration_hrs) as avg_duration, sum(energy_kwh) as total_energy_kwh, dc(station_id) as active_stations, count(eval(status=\"faulted\")) as faulted_count by facility, _time\n| lookup ev_station_inventory.csv facility OUTPUTNEW total_stations\n| eval utilization_pct=round(active_stations*100/total_stations, 1)\n| table _time, facility, sessions, avg_duration, total_energy_kwh, active_stations, total_stations, utilization_pct, faulted_count\n```\n\nUnderstanding this SPL\n\n**EV Charging Station Availability and Utilization** — EV charging infrastructure is rapidly growing in commercial buildings. Monitoring station availability, session duration, and energy delivery helps right-size charging capacity, identify broken stations, and plan electrical load management.\n\nDocumented **Data sources**: `sourcetype=bms:ev_charging`. **App/TA** (typical add-on context): OCPP-compatible charging management system (ChargePoint, Blink, ABB), custom API input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:ev_charging. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:ev_charging\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **session_duration_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by facility, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **EV Charging Station Availability and Utilization**): table _time, facility, sessions, avg_duration, total_energy_kwh, active_stations, total_stations, utilization_pct, faulted_count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Station availability map; energy delivery trend; utilization by time of day; faulted station alerts; session duration histogram.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with who can charge and whether stations are working before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.49",
              "n": "Indoor Air Quality (IAQ) Index Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Post-pandemic, indoor air quality is a top tenant concern. CO2 concentration is a direct proxy for ventilation effectiveness and occupant density. PM2.5, VOCs, and CO2 together create a composite IAQ index that supports WELL Building and RESET Air certification.",
              "t": "IAQ sensors (Airthings, Awair, Kaiterra) via API, BACnet IAQ points, Meraki MT",
              "d": "`sourcetype=bms:iaq`, `sourcetype=bms:bacnet`",
              "q": "index=building sourcetype=\"bms:iaq\"\n| bin _time span=15m\n| stats avg(co2_ppm) as avg_co2, avg(pm25_ugm3) as avg_pm25, avg(tvoc_ppb) as avg_tvoc, avg(temperature_c) as avg_temp, avg(humidity_pct) as avg_rh by zone, floor, building, _time\n| eval co2_score=case(avg_co2<600, 100, avg_co2<800, 80, avg_co2<1000, 60, avg_co2<1500, 40, avg_co2>=1500, 20)\n| eval pm25_score=case(avg_pm25<12, 100, avg_pm25<25, 80, avg_pm25<35, 60, avg_pm25<55, 40, avg_pm25>=55, 20)\n| eval iaq_index=round((co2_score*0.5 + pm25_score*0.3 + if(avg_tvoc<300, 100, 50)*0.2), 0)\n| eval iaq_status=case(iaq_index>=80, \"Good\", iaq_index>=60, \"Moderate\", iaq_index>=40, \"Poor\", iaq_index<40, \"Unhealthy\")\n| where iaq_index < 60\n| table _time, building, floor, zone, avg_co2, avg_pm25, avg_tvoc, iaq_index, iaq_status",
              "m": "Deploy IAQ sensors measuring CO2, PM2.5, TVOC, temperature, and humidity. Calculate composite IAQ index weighted by parameter importance. Alert on poor zones. Correlate with HVAC ventilation rates for root cause. Support WELL/RESET certification data logging.",
              "z": "IAQ index gauge per zone; floor plan heatmap; CO2 trend with occupancy overlay; certification compliance dashboard.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IAQ sensors (Airthings, Awair, Kaiterra) via API, BACnet IAQ points, Meraki MT.\n• Ensure the following data sources are available: `sourcetype=bms:iaq`, `sourcetype=bms:bacnet`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy IAQ sensors measuring CO2, PM2.5, TVOC, temperature, and humidity. Calculate composite IAQ index weighted by parameter importance. Alert on poor zones. Correlate with HVAC ventilation rates for root cause. Support WELL/RESET certification data logging.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:iaq\"\n| bin _time span=15m\n| stats avg(co2_ppm) as avg_co2, avg(pm25_ugm3) as avg_pm25, avg(tvoc_ppb) as avg_tvoc, avg(temperature_c) as avg_temp, avg(humidity_pct) as avg_rh by zone, floor, building, _time\n| eval co2_score=case(avg_co2<600, 100, avg_co2<800, 80, avg_co2<1000, 60, avg_co2<1500, 40, avg_co2>=1500, 20)\n| eval pm25_score=case(avg_pm25<12, 100, avg_pm25<25, 80, avg_pm25<35, 60, avg_pm25<55, 40, avg_pm25>=55, 20)\n| eval iaq_index=round((co2_score*0.5 + pm25_score*0.3 + if(avg_tvoc<300, 100, 50)*0.2), 0)\n| eval iaq_status=case(iaq_index>=80, \"Good\", iaq_index>=60, \"Moderate\", iaq_index>=40, \"Poor\", iaq_index<40, \"Unhealthy\")\n| where iaq_index < 60\n| table _time, building, floor, zone, avg_co2, avg_pm25, avg_tvoc, iaq_index, iaq_status\n```\n\nUnderstanding this SPL\n\n**Indoor Air Quality (IAQ) Index Monitoring** — Post-pandemic, indoor air quality is a top tenant concern. CO2 concentration is a direct proxy for ventilation effectiveness and occupant density. PM2.5, VOCs, and CO2 together create a composite IAQ index that supports WELL Building and RESET Air certification.\n\nDocumented **Data sources**: `sourcetype=bms:iaq`, `sourcetype=bms:bacnet`. **App/TA** (typical add-on context): IAQ sensors (Airthings, Awair, Kaiterra) via API, BACnet IAQ points, Meraki MT. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:iaq. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:iaq\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by zone, floor, building, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **co2_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **pm25_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **iaq_index** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **iaq_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where iaq_index < 60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Indoor Air Quality (IAQ) Index Monitoring**): table _time, building, floor, zone, avg_co2, avg_pm25, avg_tvoc, iaq_index, iaq_status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: IAQ index gauge per zone; floor plan heatmap; CO2 trend with occupancy overlay; certification compliance dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with stale air, dust, and indoor comfort before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.1.50",
              "n": "Carbon Emissions Tracking from Building Operations (Scope 1+2)",
              "c": "medium",
              "f": "advanced",
              "v": "ESG reporting increasingly requires Scope 1 (on-site combustion) and Scope 2 (purchased electricity) carbon emissions tracking. Automating carbon calculations from utility meter data provides real-time emissions dashboards and supports Science Based Targets.",
              "t": "Utility meters via Modbus, gas meters, BMS energy data, grid emissions factor lookup",
              "d": "`sourcetype=bms:energy`, `sourcetype=bms:meter`",
              "q": "index=building sourcetype=\"bms:energy\"\n| bin _time span=1d\n| stats sum(eval(if(meter_type=\"electricity\", kwh_consumed, 0))) as electricity_kwh, sum(eval(if(meter_type=\"gas\", therms_consumed, 0))) as gas_therms by building_name, _time\n| lookup grid_emissions_factors.csv region OUTPUTNEW kg_co2_per_kwh\n| eval scope2_kg=round(electricity_kwh * kg_co2_per_kwh, 1)\n| eval scope1_kg=round(gas_therms * 5.3, 1)\n| eval total_co2_kg=scope1_kg + scope2_kg\n| eval total_co2_tonnes=round(total_co2_kg/1000, 2)\n| table _time, building_name, electricity_kwh, gas_therms, scope1_kg, scope2_kg, total_co2_kg, total_co2_tonnes",
              "m": "Collect electricity (kWh) and gas (therms/m³) consumption from utility meters. Apply grid emissions factors (varying by region and time) for Scope 2. Apply standard combustion factors for Scope 1. Aggregate for ESG reporting. Track against reduction targets.",
              "z": "Carbon emissions trend; Scope 1 vs 2 breakdown; building comparison; progress toward reduction target; annual emissions summary for ESG report.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Utility meters via Modbus, gas meters, BMS energy data, grid emissions factor lookup.\n• Ensure the following data sources are available: `sourcetype=bms:energy`, `sourcetype=bms:meter`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect electricity (kWh) and gas (therms/m³) consumption from utility meters. Apply grid emissions factors (varying by region and time) for Scope 2. Apply standard combustion factors for Scope 1. Aggregate for ESG reporting. Track against reduction targets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=building sourcetype=\"bms:energy\"\n| bin _time span=1d\n| stats sum(eval(if(meter_type=\"electricity\", kwh_consumed, 0))) as electricity_kwh, sum(eval(if(meter_type=\"gas\", therms_consumed, 0))) as gas_therms by building_name, _time\n| lookup grid_emissions_factors.csv region OUTPUTNEW kg_co2_per_kwh\n| eval scope2_kg=round(electricity_kwh * kg_co2_per_kwh, 1)\n| eval scope1_kg=round(gas_therms * 5.3, 1)\n| eval total_co2_kg=scope1_kg + scope2_kg\n| eval total_co2_tonnes=round(total_co2_kg/1000, 2)\n| table _time, building_name, electricity_kwh, gas_therms, scope1_kg, scope2_kg, total_co2_kg, total_co2_tonnes\n```\n\nUnderstanding this SPL\n\n**Carbon Emissions Tracking from Building Operations (Scope 1+2)** — ESG reporting increasingly requires Scope 1 (on-site combustion) and Scope 2 (purchased electricity) carbon emissions tracking. Automating carbon calculations from utility meter data provides real-time emissions dashboards and supports Science Based Targets.\n\nDocumented **Data sources**: `sourcetype=bms:energy`, `sourcetype=bms:meter`. **App/TA** (typical add-on context): Utility meters via Modbus, gas meters, BMS energy data, grid emissions factor lookup. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: building; **sourcetype**: bms:energy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=building, sourcetype=\"bms:energy\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by building_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **scope2_kg** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **scope1_kg** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **total_co2_kg** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **total_co2_tonnes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Carbon Emissions Tracking from Building Operations (Scope 1+2)**): table _time, building_name, electricity_kwh, gas_therms, scope1_kg, scope2_kg, total_co2_kg, total_co2_tonnes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Carbon emissions trend; Scope 1 vs 2 breakdown; building comparison; progress toward reduction target; annual emissions summary for ESG report.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how your building systems behave. We help you know if something is off with how much energy the building is using in carbon terms before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.9,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 46,
            "none": 0
          }
        },
        {
          "i": "14.2",
          "n": "Industrial Control Systems (ICS/SCADA)",
          "u": [
            {
              "i": "14.2.1",
              "n": "PLC/RTU Health Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Controller failures halt industrial processes. Monitoring CPU, memory, and communication status prevents unplanned downtime.",
              "t": "OPC-UA input, Modbus TA",
              "d": "OPC-UA metrics (CPU, memory, I/O status), Modbus register data",
              "q": "index=ot sourcetype=\"opcua:metrics\"\n| where plc_cpu_pct > 80 OR plc_memory_pct > 90 OR comm_status!=\"OK\"\n| table _time, plc_name, plc_cpu_pct, plc_memory_pct, comm_status",
              "m": "Connect to PLCs via OPC-UA server or Modbus gateway through Splunk Edge Hub. Poll health metrics every 30 seconds. Alert on CPU >80%, memory >90%, or communication loss. Track uptime per controller.",
              "z": "Status grid (PLC × health), Gauge (CPU/memory per PLC), Line chart (health trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OPC-UA input, Modbus TA.\n• Ensure the following data sources are available: OPC-UA metrics (CPU, memory, I/O status), Modbus register data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConnect to PLCs via OPC-UA server or Modbus gateway through Splunk Edge Hub. Poll health metrics every 30 seconds. Alert on CPU >80%, memory >90%, or communication loss. Track uptime per controller.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"opcua:metrics\"\n| where plc_cpu_pct > 80 OR plc_memory_pct > 90 OR comm_status!=\"OK\"\n| table _time, plc_name, plc_cpu_pct, plc_memory_pct, comm_status\n```\n\nUnderstanding this SPL\n\n**PLC/RTU Health Monitoring** — Controller failures halt industrial processes. Monitoring CPU, memory, and communication status prevents unplanned downtime.\n\nDocumented **Data sources**: OPC-UA metrics (CPU, memory, I/O status), Modbus register data. **App/TA** (typical add-on context): OPC-UA input, Modbus TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: opcua:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"opcua:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where plc_cpu_pct > 80 OR plc_memory_pct > 90 OR comm_status!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PLC/RTU Health Monitoring**): table _time, plc_name, plc_cpu_pct, plc_memory_pct, comm_status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (PLC × health), Gauge (CPU/memory per PLC), Line chart (health trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: plc and rtu before harm, damage, or wasted effort piles up.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "modbus",
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "14.2.2",
              "n": "Process Variable Anomalies",
              "c": "critical",
              "f": "intermediate",
              "v": "Process variables (pressure, flow, temperature) outside normal ranges indicate equipment failure or process upset. Early detection prevents safety incidents.",
              "t": "OPC-UA input, Edge Hub anomaly detection",
              "d": "OPC-UA/Modbus process data (analog values)",
              "q": "index=ot sourcetype=\"opcua:process\"\n| where value > high_limit OR value < low_limit\n| table _time, tag_name, value, low_limit, high_limit, unit",
              "m": "Ingest process variables via OPC-UA. Define normal ranges per tag. Use Edge Hub kNN anomaly detection for ML-based alerting. Alert on limit exceedances. Track process stability over time.",
              "z": "Line chart (process variable with limit bands), Table (out-of-range events), Single value (current value with status color).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OPC-UA input, Edge Hub anomaly detection.\n• Ensure the following data sources are available: OPC-UA/Modbus process data (analog values).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest process variables via OPC-UA. Define normal ranges per tag. Use Edge Hub kNN anomaly detection for ML-based alerting. Alert on limit exceedances. Track process stability over time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"opcua:process\"\n| where value > high_limit OR value < low_limit\n| table _time, tag_name, value, low_limit, high_limit, unit\n```\n\nUnderstanding this SPL\n\n**Process Variable Anomalies** — Process variables (pressure, flow, temperature) outside normal ranges indicate equipment failure or process upset. Early detection prevents safety incidents.\n\nDocumented **Data sources**: OPC-UA/Modbus process data (analog values). **App/TA** (typical add-on context): OPC-UA input, Edge Hub anomaly detection. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: opcua:process. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"opcua:process\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where value > high_limit OR value < low_limit` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Process Variable Anomalies**): table _time, tag_name, value, low_limit, high_limit, unit\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (process variable with limit bands), Table (out-of-range events), Single value (current value with status color).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: process variable anomalies before harm, damage, or wasted effort piles up.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "14.2.3",
              "n": "Safety System Activation",
              "c": "critical",
              "f": "beginner",
              "v": "Safety system activations (ESD, interlocks) indicate dangerous conditions. Each activation requires investigation and documentation.",
              "t": "Safety PLC logs, OPC-UA events",
              "d": "Safety PLC event logs, emergency shutdown events",
              "q": "index=ot sourcetype=\"safety_plc\"\n| search event_type IN (\"ESD\",\"interlock_trip\",\"safety_shutdown\")\n| table _time, system, event_type, cause, action_taken",
              "m": "Forward safety PLC events to Splunk (isolated network — use data diode or Edge Hub). Alert at critical priority on any safety activation. Maintain incident log for regulatory compliance. Track activation frequency per system.",
              "z": "Single value (safety activations — target: 0), Table (activation history), Timeline (safety events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Safety PLC logs, OPC-UA events.\n• Ensure the following data sources are available: Safety PLC event logs, emergency shutdown events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward safety PLC events to Splunk (isolated network — use data diode or Edge Hub). Alert at critical priority on any safety activation. Maintain incident log for regulatory compliance. Track activation frequency per system.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"safety_plc\"\n| search event_type IN (\"ESD\",\"interlock_trip\",\"safety_shutdown\")\n| table _time, system, event_type, cause, action_taken\n```\n\nUnderstanding this SPL\n\n**Safety System Activation** — Safety system activations (ESD, interlocks) indicate dangerous conditions. Each activation requires investigation and documentation.\n\nDocumented **Data sources**: Safety PLC event logs, emergency shutdown events. **App/TA** (typical add-on context): Safety PLC logs, OPC-UA events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: safety_plc. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"safety_plc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Safety System Activation**): table _time, system, event_type, cause, action_taken\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (safety activations — target: 0), Table (activation history), Timeline (safety events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: safety system activation before harm, damage, or wasted effort piles up.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "14.2.4",
              "n": "Network Segmentation Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "IT/OT network boundary violations create cybersecurity risk to critical infrastructure. Continuous monitoring validates segmentation.",
              "t": "Firewall TAs, network flow data",
              "d": "Industrial firewall logs, network flow data at IT/OT boundary",
              "q": "index=network sourcetype=\"pan:traffic\" zone_pair=\"IT-to-OT\"\n| where action=\"allow\"\n| stats count by src, dest, dest_port, app\n| sort -count",
              "m": "Forward IT/OT boundary firewall logs. Monitor all traffic crossing the boundary. Alert on unexpected protocols or connections. Validate against whitelist of approved communications. Report for ICS security audits.",
              "z": "Table (cross-boundary traffic), Sankey diagram (IT→OT flows), Bar chart (by protocol), Single value (unauthorized connections).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Firewall TAs, network flow data.\n• Ensure the following data sources are available: Industrial firewall logs, network flow data at IT/OT boundary.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward IT/OT boundary firewall logs. Monitor all traffic crossing the boundary. Alert on unexpected protocols or connections. Validate against whitelist of approved communications. Report for ICS security audits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:traffic\" zone_pair=\"IT-to-OT\"\n| where action=\"allow\"\n| stats count by src, dest, dest_port, app\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Network Segmentation Monitoring** — IT/OT network boundary violations create cybersecurity risk to critical infrastructure. Continuous monitoring validates segmentation.\n\nDocumented **Data sources**: Industrial firewall logs, network flow data at IT/OT boundary. **App/TA** (typical add-on context): Firewall TAs, network flow data. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:traffic\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"allow\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cross-boundary traffic), Sankey diagram (IT→OT flows), Bar chart (by protocol), Single value (unauthorized connections).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: network segmentation before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-14.2.4: Network Segmentation Monitoring.",
                  "ea": "Saved search 'UC-14.2.4' running on Industrial firewall logs, network flow data at IT/OT boundary, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.api.org/products-and-services/standards/important-standards-announcements/standard-1164"
                },
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "FR 6.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 FR 6.2 (Continuous monitoring) is enforced — Splunk UC-14.2.4: Network Segmentation Monitoring.",
                  "ea": "Saved search 'UC-14.2.4' running on Industrial firewall logs, network flow data at IT/OT boundary, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.isa.org/standards-and-publications/isa-standards/isa-iec-62443-series-of-standards"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-7 R1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NERC CIP CIP-005-7 R1 (Electronic security perimeter) is enforced — Splunk UC-14.2.4: Network Segmentation Monitoring.",
                  "ea": "Saved search 'UC-14.2.4' running on Industrial firewall logs, network flow data at IT/OT boundary, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                },
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-14.2.4: Network Segmentation Monitoring.",
                  "ea": "Saved search 'UC-14.2.4' running on Industrial firewall logs, network flow data at IT/OT boundary, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.tsa.gov/sd02c"
                }
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.5",
              "n": "Firmware Version Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "OT devices with outdated firmware are vulnerable to exploitation. Inventory tracking supports patching during maintenance windows.",
              "t": "Scripted inventory input, OPC-UA",
              "d": "Asset inventory scans, OPC-UA system attributes",
              "q": "index=ot sourcetype=\"ics_inventory\"\n| stats latest(firmware_version) as current by device_name, vendor, model\n| lookup approved_firmware.csv vendor, model OUTPUT approved_version\n| where current!=approved_version\n| table device_name, vendor, model, current, approved_version",
              "m": "Conduct periodic OT asset inventory scans (during maintenance windows). Ingest firmware versions. Maintain approved firmware lookup. Report on compliance. Prioritize based on CISA ICS advisories.",
              "z": "Table (devices with outdated firmware), Pie chart (compliance distribution), Single value (% compliant).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Scripted inventory input, OPC-UA.\n• Ensure the following data sources are available: Asset inventory scans, OPC-UA system attributes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConduct periodic OT asset inventory scans (during maintenance windows). Ingest firmware versions. Maintain approved firmware lookup. Report on compliance. Prioritize based on CISA ICS advisories.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"ics_inventory\"\n| stats latest(firmware_version) as current by device_name, vendor, model\n| lookup approved_firmware.csv vendor, model OUTPUT approved_version\n| where current!=approved_version\n| table device_name, vendor, model, current, approved_version\n```\n\nUnderstanding this SPL\n\n**Firmware Version Tracking** — OT devices with outdated firmware are vulnerable to exploitation. Inventory tracking supports patching during maintenance windows.\n\nDocumented **Data sources**: Asset inventory scans, OPC-UA system attributes. **App/TA** (typical add-on context): Scripted inventory input, OPC-UA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: ics_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"ics_inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_name, vendor, model** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where current!=approved_version` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Firmware Version Tracking**): table device_name, vendor, model, current, approved_version\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (devices with outdated firmware), Pie chart (compliance distribution), Single value (% compliant).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with the version of code running on a controller before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opcua"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.6",
              "n": "Unauthorized Access Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Unauthorized access to ICS systems could lead to physical damage or safety incidents. Detection is critical for industrial cybersecurity.",
              "t": "Firewall TAs, ICS network monitoring",
              "d": "ICS network logs, industrial firewalls, IDS alerts",
              "q": "index=ot sourcetype=\"ics_firewall\"\n| search action=\"deny\" OR src_zone=\"untrusted\"\n| stats count by src, dest, dest_port\n| sort -count",
              "m": "Monitor access to ICS networks from all sources. Alert on connections from non-whitelisted IPs. Track engineering workstation access sessions. Correlate with physical access to control rooms. Report for ICS cybersecurity compliance.",
              "z": "Table (access events), Timeline (unauthorized attempts), Bar chart (blocked connections by source).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Firewall TAs, ICS network monitoring.\n• Ensure the following data sources are available: ICS network logs, industrial firewalls, IDS alerts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor access to ICS networks from all sources. Alert on connections from non-whitelisted IPs. Track engineering workstation access sessions. Correlate with physical access to control rooms. Report for ICS cybersecurity compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"ics_firewall\"\n| search action=\"deny\" OR src_zone=\"untrusted\"\n| stats count by src, dest, dest_port\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Unauthorized Access Detection** — Unauthorized access to ICS systems could lead to physical damage or safety incidents. Detection is critical for industrial cybersecurity.\n\nDocumented **Data sources**: ICS network logs, industrial firewalls, IDS alerts. **App/TA** (typical add-on context): Firewall TAs, ICS network monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: ics_firewall. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"ics_firewall\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (access events), Timeline (unauthorized attempts), Bar chart (blocked connections by source).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: unauthorized access before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.7",
              "n": "Modbus TCP Anomaly Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Unusual read/write rates or register ranges can indicate process upset or malicious manipulation.",
              "t": "Splunk OT Intelligence, Modbus TA",
              "d": "`sourcetype=\"modbus:traffic\"` or `modbus:gateway`",
              "q": "index=ot sourcetype=\"modbus:traffic\"\n| bin _time span=5m\n| stats count by _time, unit_id, function_code, src\n| eventstats median(count) as med by unit_id, function_code\n| where count > med * 5 AND count > 100",
              "m": "Baseline requests per 5m per unit and function code. Use MLTK or `eventstats` for median. Alert on spikes without corresponding maintenance window.",
              "z": "Line chart (Modbus req rate), Table (anomalous unit × function), Heatmap (time × unit).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Intelligence, Modbus TA.\n• Ensure the following data sources are available: `sourcetype=\"modbus:traffic\"` or `modbus:gateway`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline requests per 5m per unit and function code. Use MLTK or `eventstats` for median. Alert on spikes without corresponding maintenance window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"modbus:traffic\"\n| bin _time span=5m\n| stats count by _time, unit_id, function_code, src\n| eventstats median(count) as med by unit_id, function_code\n| where count > med * 5 AND count > 100\n```\n\nUnderstanding this SPL\n\n**Modbus TCP Anomaly Detection** — Unusual read/write rates or register ranges can indicate process upset or malicious manipulation.\n\nDocumented **Data sources**: `sourcetype=\"modbus:traffic\"` or `modbus:gateway`. **App/TA** (typical add-on context): Splunk OT Intelligence, Modbus TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: modbus:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"modbus:traffic\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, unit_id, function_code, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by unit_id, function_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > med * 5 AND count > 100` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (Modbus req rate), Table (anomalous unit × function), Heatmap (time × unit).",
              "script": "",
              "premium": "Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with read and write traffic on the plant data link before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.8",
              "n": "OPC-UA Session Abuse",
              "c": "critical",
              "f": "advanced",
              "v": "Excessive sessions or anonymous binds from new clients can indicate scanning or unauthorized access.",
              "t": "OPC-UA server audit logs, Edge Hub",
              "d": "`sourcetype=\"opcua:session\"` (server audit events)",
              "q": "index=ot sourcetype=\"opcua:session\"\n| where event_type IN (\"CreateSession\",\"ActivateSession\") AND (is_anonymous=1 OR rejected=1)\n| bin _time span=1h\n| stats dc(session_id) as sessions, dc(client_ip) as clients by server_endpoint, _time\n| where sessions > 50 OR clients > 10",
              "m": "Ingest OPC-UA audit events. Whitelist known engineering hosts. Alert on anonymous session creation or high rejection rate.",
              "z": "Table (servers with suspicious sessions), Bar chart (sessions by client IP).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OPC-UA server audit logs, Edge Hub.\n• Ensure the following data sources are available: `sourcetype=\"opcua:session\"` (server audit events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest OPC-UA audit events. Whitelist known engineering hosts. Alert on anonymous session creation or high rejection rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"opcua:session\"\n| where event_type IN (\"CreateSession\",\"ActivateSession\") AND (is_anonymous=1 OR rejected=1)\n| bin _time span=1h\n| stats dc(session_id) as sessions, dc(client_ip) as clients by server_endpoint, _time\n| where sessions > 50 OR clients > 10\n```\n\nUnderstanding this SPL\n\n**OPC-UA Session Abuse** — Excessive sessions or anonymous binds from new clients can indicate scanning or unauthorized access.\n\nDocumented **Data sources**: `sourcetype=\"opcua:session\"` (server audit events). **App/TA** (typical add-on context): OPC-UA server audit logs, Edge Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: opcua:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"opcua:session\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where event_type IN (\"CreateSession\",\"ActivateSession\") AND (is_anonymous=1 OR rejected=1)` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by server_endpoint, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where sessions > 50 OR clients > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (servers with suspicious sessions), Bar chart (sessions by client IP).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: opc-ua session abuse before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "opcua"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.9",
              "n": "PLC Firmware Change Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Unexpected firmware changes on PLCs can be maintenance errors or malicious reprogramming.",
              "t": "PLC vendor export, OPC-UA device metadata",
              "d": "`sourcetype=\"plc:inventory\"` (firmware, serial)",
              "q": "index=ot sourcetype=\"plc:inventory\"\n| streamstats current=firmware_version window=2 global=f by plc_name\n| where firmware_version!=f\n| table _time, plc_name, f, firmware_version, user",
              "m": "Snapshot firmware nightly or on change event from vendor tool. Correlate with change tickets. Alert on any drift without CMDB match.",
              "z": "Timeline (firmware changes), Table (PLCs with unexpected version).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PLC vendor export, OPC-UA device metadata.\n• Ensure the following data sources are available: `sourcetype=\"plc:inventory\"` (firmware, serial).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSnapshot firmware nightly or on change event from vendor tool. Correlate with change tickets. Alert on any drift without CMDB match.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"plc:inventory\"\n| streamstats current=firmware_version window=2 global=f by plc_name\n| where firmware_version!=f\n| table _time, plc_name, f, firmware_version, user\n```\n\nUnderstanding this SPL\n\n**PLC Firmware Change Detection** — Unexpected firmware changes on PLCs can be maintenance errors or malicious reprogramming.\n\nDocumented **Data sources**: `sourcetype=\"plc:inventory\"` (firmware, serial). **App/TA** (typical add-on context): PLC vendor export, OPC-UA device metadata. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: plc:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"plc:inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `streamstats` rolls up events into metrics; results are split **by plc_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where firmware_version!=f` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PLC Firmware Change Detection**): table _time, plc_name, f, firmware_version, user\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (firmware changes), Table (PLCs with unexpected version).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with the version of code running on a controller before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Change"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-14.2.9: PLC Firmware Change Detection.",
                  "ea": "Saved search 'UC-14.2.9' running on sourcetype=\"plc:inventory\" (firmware, serial), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.api.org/products-and-services/standards/important-standards-announcements/standard-1164"
                },
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "FR 6.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 FR 6.2 (Continuous monitoring) is enforced — Splunk UC-14.2.9: PLC Firmware Change Detection.",
                  "ea": "Saved search 'UC-14.2.9' running on sourcetype=\"plc:inventory\" (firmware, serial), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.isa.org/standards-and-publications/isa-standards/isa-iec-62443-series-of-standards"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NERC CIP CIP-010-4 R1 (Configuration change management) is enforced — Splunk UC-14.2.9: PLC Firmware Change Detection.",
                  "ea": "Saved search 'UC-14.2.9' running on sourcetype=\"plc:inventory\" (firmware, serial), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.10",
              "n": "ICS Protocol Violation Alerts",
              "c": "critical",
              "f": "advanced",
              "v": "Malformed DNP3/Modbus/Profinet frames or wrong L4 ports indicate misconfiguration or attacks.",
              "t": "Industrial IDS, Zeek ICS parsers",
              "d": "`sourcetype=\"ics:protocol\"` IDS alerts",
              "q": "index=ot sourcetype=\"ics:protocol\"\n| where severity IN (\"high\",\"critical\") OR match(message,\"(?i)(malformed|illegal|out.of.range)\")\n| stats count by protocol, src, dest, signature\n| sort -count",
              "m": "Normalize IDS fields into Splunk. Tune for false positives on legacy equipment. Route critical to SOC and OT jointly.",
              "z": "Table (violations), Timeline (events), Sankey (src → dest).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Industrial IDS, Zeek ICS parsers.\n• Ensure the following data sources are available: `sourcetype=\"ics:protocol\"` IDS alerts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize IDS fields into Splunk. Tune for false positives on legacy equipment. Route critical to SOC and OT jointly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"ics:protocol\"\n| where severity IN (\"high\",\"critical\") OR match(message,\"(?i)(malformed|illegal|out.of.range)\")\n| stats count by protocol, src, dest, signature\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ICS Protocol Violation Alerts** — Malformed DNP3/Modbus/Profinet frames or wrong L4 ports indicate misconfiguration or attacks.\n\nDocumented **Data sources**: `sourcetype=\"ics:protocol\"` IDS alerts. **App/TA** (typical add-on context): Industrial IDS, Zeek ICS parsers. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: ics:protocol. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"ics:protocol\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where severity IN (\"high\",\"critical\") OR match(message,\"(?i)(malformed|illegal|out.of.range)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by protocol, src, dest, signature** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Timeline (events), Sankey (src → dest).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: ics protocol violation alerts before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.11",
              "n": "NERC CIP Compliance Checks",
              "c": "critical",
              "f": "expert",
              "v": "Evidence of electronic access controls, logging, and change management for bulk electric systems.",
              "t": "Custom (CIP evidence packs), Splunk Enterprise Security",
              "d": "Firewall, VPN, AD, asset logs tagged `nerc_cip`",
              "q": "index=security sourcetype IN (\"vpn:log\",\"firewall:traffic\") nerc_cip=1\n| where action=\"deny\" OR match(user,\"(?i)orphan\")\n| stats count by asset_id, control_id, evidence_type",
              "m": "Tag in-scope assets and controls. Use saved searches per CIP requirement (e.g., access logging, 30-day log retention). Document in Splunk as authoritative evidence store.",
              "z": "Compliance dashboard (control × status), Table (gaps by site).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (CIP evidence packs), Splunk Enterprise Security.\n• Ensure the following data sources are available: Firewall, VPN, AD, asset logs tagged `nerc_cip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag in-scope assets and controls. Use saved searches per CIP requirement (e.g., access logging, 30-day log retention). Document in Splunk as authoritative evidence store.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype IN (\"vpn:log\",\"firewall:traffic\") nerc_cip=1\n| where action=\"deny\" OR match(user,\"(?i)orphan\")\n| stats count by asset_id, control_id, evidence_type\n```\n\nUnderstanding this SPL\n\n**NERC CIP Compliance Checks** — Evidence of electronic access controls, logging, and change management for bulk electric systems.\n\nDocumented **Data sources**: Firewall, VPN, AD, asset logs tagged `nerc_cip`. **App/TA** (typical add-on context): Custom (CIP evidence packs), Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"deny\" OR match(user,\"(?i)orphan\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by asset_id, control_id, evidence_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Compliance dashboard (control × status), Table (gaps by site).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: nerc cip compliance checks before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-002-5.1a R1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-002-5.1a R1 (BES cyber system identification) is enforced — Splunk UC-14.2.11: NERC CIP Compliance Checks.",
                  "ea": "Saved search 'UC-14.2.11' running on Firewall, VPN, AD, asset logs tagged nerc_cip, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-007-6 R4 (Security event monitoring) is enforced — Splunk UC-14.2.11: NERC CIP Compliance Checks.",
                  "ea": "Saved search 'UC-14.2.11' running on Firewall, VPN, AD, asset logs tagged nerc_cip, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                },
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-14.2.11: NERC CIP Compliance Checks.",
                  "ea": "Saved search 'UC-14.2.11' running on Firewall, VPN, AD, asset logs tagged nerc_cip, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.tsa.gov/sd02c"
                },
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.D",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.D (Cybersecurity assessment) is enforced — Splunk UC-14.2.11: NERC CIP Compliance Checks.",
                  "ea": "Saved search 'UC-14.2.11' running on Firewall, VPN, AD, asset logs tagged nerc_cip, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.tsa.gov/sd02c"
                }
              ],
              "pillar": "observability",
              "regs": [
                "NERC CIP"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.12",
              "n": "Historian Data Integrity",
              "c": "high",
              "f": "intermediate",
              "v": "Gaps or duplicated timestamps in historian feeds break batch quality and regulatory reporting.",
              "t": "PI / OPC-UA historian export",
              "d": "`sourcetype=\"historian:point\"` (value, quality, ts)",
              "q": "index=ot sourcetype=\"historian:point\"\n| sort 0 + _time\n| streamstats window=2 global=prev\n| eval gap_sec=_time-prev_ts\n| where gap_sec > 300 OR quality!=\"Good\"\n| stats count by tag_name, gap_sec",
              "m": "Ingest quality codes and source timestamps. Alert on gap > SLA or bad quality >10% of samples per tag group.",
              "z": "Line chart (insert rate), Table (tags with gaps), Single value (data quality %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PI / OPC-UA historian export.\n• Ensure the following data sources are available: `sourcetype=\"historian:point\"` (value, quality, ts).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest quality codes and source timestamps. Alert on gap > SLA or bad quality >10% of samples per tag group.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"historian:point\"\n| sort 0 + _time\n| streamstats window=2 global=prev\n| eval gap_sec=_time-prev_ts\n| where gap_sec > 300 OR quality!=\"Good\"\n| stats count by tag_name, gap_sec\n```\n\nUnderstanding this SPL\n\n**Historian Data Integrity** — Gaps or duplicated timestamps in historian feeds break batch quality and regulatory reporting.\n\nDocumented **Data sources**: `sourcetype=\"historian:point\"` (value, quality, ts). **App/TA** (typical add-on context): PI / OPC-UA historian export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: historian:point. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"historian:point\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **gap_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap_sec > 300 OR quality!=\"Good\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by tag_name, gap_sec** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (insert rate), Table (tags with gaps), Single value (data quality %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with the history of process values the plant trusts before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.13",
              "n": "Safety Instrumented System Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "SIS trips, bypasses, and diagnostics demand immediate visibility and audit trails.",
              "t": "SIS vendor DCS export, OPC-UA alarms",
              "d": "`sourcetype=\"sis:event\"` (trip, bypass, fault)",
              "q": "index=ot sourcetype=\"sis:event\"\n| where event_type IN (\"trip\",\"bypass\",\"fault\",\"demand\")\n| table _time, sis_tag, event_type, cause, sil_level\n| sort -_time",
              "m": "Segregate SIS data per safety policy. Never route writes from IT networks. Alert on any bypass or fault.",
              "z": "Timeline (SIS events), Single value (active bypasses), Table (trip history).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SIS vendor DCS export, OPC-UA alarms.\n• Ensure the following data sources are available: `sourcetype=\"sis:event\"` (trip, bypass, fault).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSegregate SIS data per safety policy. Never route writes from IT networks. Alert on any bypass or fault.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"sis:event\"\n| where event_type IN (\"trip\",\"bypass\",\"fault\",\"demand\")\n| table _time, sis_tag, event_type, cause, sil_level\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Safety Instrumented System Monitoring** — SIS trips, bypasses, and diagnostics demand immediate visibility and audit trails.\n\nDocumented **Data sources**: `sourcetype=\"sis:event\"` (trip, bypass, fault). **App/TA** (typical add-on context): SIS vendor DCS export, OPC-UA alarms. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: sis:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"sis:event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where event_type IN (\"trip\",\"bypass\",\"fault\",\"demand\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Safety Instrumented System Monitoring**): table _time, sis_tag, event_type, cause, sil_level\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (SIS events), Single value (active bypasses), Table (trip history).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: safety instrumented system before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.14",
              "n": "HMI Unauthorized Access",
              "c": "critical",
              "f": "intermediate",
              "v": "HMIs should only be reachable from jump hosts; direct access from corporate networks is a red flag.",
              "t": "HMI audit logs, RDP/VNC syslog",
              "d": "`sourcetype=\"hmi:audit\"` (login, operator action)",
              "q": "index=ot sourcetype=\"hmi:audit\"\n| where action=\"login\" AND NOT cidrmatch(\"10.50.0.0/16\",src)\n| stats count by src, hmi_name, user\n| sort -count",
              "m": "Define allowed HMI source subnets. Alert on logins from outside. Correlate with physical access.",
              "z": "Table (non-compliant logins), Map (source IP).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HMI audit logs, RDP/VNC syslog.\n• Ensure the following data sources are available: `sourcetype=\"hmi:audit\"` (login, operator action).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine allowed HMI source subnets. Alert on logins from outside. Correlate with physical access.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"hmi:audit\"\n| where action=\"login\" AND NOT cidrmatch(\"10.50.0.0/16\",src)\n| stats count by src, hmi_name, user\n| sort -count\n```\n\nUnderstanding this SPL\n\n**HMI Unauthorized Access** — HMIs should only be reachable from jump hosts; direct access from corporate networks is a red flag.\n\nDocumented **Data sources**: `sourcetype=\"hmi:audit\"` (login, operator action). **App/TA** (typical add-on context): HMI audit logs, RDP/VNC syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: hmi:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"hmi:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"login\" AND NOT cidrmatch(\"10.50.0.0/16\",src)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, hmi_name, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant logins), Map (source IP).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with the operator screen and the story it tells the shift team before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of API RP 1164 5.3 (Access control) — Splunk UC-14.2.14: HMI Unauthorized Access.",
                  "ea": "Saved search 'UC-14.2.14' running on sourcetype=\"hmi:audit\" (login, operator action), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.api.org/products-and-services/standards/important-standards-announcements/standard-1164"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-7 R1",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of NERC CIP CIP-005-7 R1 (Electronic security perimeter) — Splunk UC-14.2.14: HMI Unauthorized Access.",
                  "ea": "Saved search 'UC-14.2.14' running on sourcetype=\"hmi:audit\" (login, operator action), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R4",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of NERC CIP CIP-007-6 R4 (Security event monitoring) — Splunk UC-14.2.14: HMI Unauthorized Access.",
                  "ea": "Saved search 'UC-14.2.14' running on sourcetype=\"hmi:audit\" (login, operator action), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                }
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.15",
              "n": "Control Loop Deviation",
              "c": "high",
              "f": "intermediate",
              "v": "PV diverging from SP with saturated output indicates tuning issues or actuator faults.",
              "t": "OPC-UA process tags",
              "d": "`sourcetype=\"opcua:control\"` (pv, sp, out)",
              "q": "index=ot sourcetype=\"opcua:control\"\n| eval err=abs(pv-sp)\n| where err > deadband*5 OR (abs(out)>95 AND err>deadband)\n| timechart span=1m avg(err) by loop_id",
              "m": "Define deadband per loop. Alert when sustained error exceeds threshold or output pegs. Integrate with maintenance CMMS.",
              "z": "Line chart (PV vs SP), Gauge (loop error), Table (loops in alarm).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OPC-UA process tags.\n• Ensure the following data sources are available: `sourcetype=\"opcua:control\"` (pv, sp, out).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine deadband per loop. Alert when sustained error exceeds threshold or output pegs. Integrate with maintenance CMMS.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"opcua:control\"\n| eval err=abs(pv-sp)\n| where err > deadband*5 OR (abs(out)>95 AND err>deadband)\n| timechart span=1m avg(err) by loop_id\n```\n\nUnderstanding this SPL\n\n**Control Loop Deviation** — PV diverging from SP with saturated output indicates tuning issues or actuator faults.\n\nDocumented **Data sources**: `sourcetype=\"opcua:control\"` (pv, sp, out). **App/TA** (typical add-on context): OPC-UA process tags. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: opcua:control. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"opcua:control\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **err** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err > deadband*5 OR (abs(out)>95 AND err>deadband)` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1m** buckets with a separate series **by loop_id** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (PV vs SP), Gauge (loop error), Table (loops in alarm).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: control loop deviation before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.16",
              "n": "Process Variable Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Slow drift before alarm limits supports predictive maintenance and energy optimization.",
              "t": "OPC-UA historian",
              "d": "`sourcetype=\"opcua:process\"`",
              "q": "index=ot sourcetype=\"opcua:process\"\n| timechart span=1h avg(value) as avg_val by tag_name\n| predict avg_val as pred future_timespan=24\n| lookup ot_tag_limits.csv tag_name OUTPUT high_limit, low_limit\n| where pred > high_limit OR pred < low_limit",
              "m": "Use `predict` for critical tags. Alert when forecast crosses limits before physical alarm. Tune per process area.",
              "z": "Line chart (actual vs predicted), Table (tags trending to limit).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OPC-UA historian.\n• Ensure the following data sources are available: `sourcetype=\"opcua:process\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse `predict` for critical tags. Alert when forecast crosses limits before physical alarm. Tune per process area.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"opcua:process\"\n| timechart span=1h avg(value) as avg_val by tag_name\n| predict avg_val as pred future_timespan=24\n| lookup ot_tag_limits.csv tag_name OUTPUT high_limit, low_limit\n| where pred > high_limit OR pred < low_limit\n```\n\nUnderstanding this SPL\n\n**Process Variable Trending** — Slow drift before alarm limits supports predictive maintenance and energy optimization.\n\nDocumented **Data sources**: `sourcetype=\"opcua:process\"`. **App/TA** (typical add-on context): OPC-UA historian. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: opcua:process. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"opcua:process\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by tag_name** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Process Variable Trending**): predict avg_val as pred future_timespan=24\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where pred > high_limit OR pred < low_limit` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (actual vs predicted), Table (tags trending to limit).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: process variable trending before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.17",
              "n": "ICS Network Segmentation Violations",
              "c": "critical",
              "f": "intermediate",
              "v": "Hosts in wrong VLANs or east-west traffic across zones violates Purdue model and IEC 62443.",
              "t": "Firewall, switch NetFlow, ARP tables",
              "d": "`sourcetype=\"pan:traffic\"` `zone_pair` or `sourcetype=\"flow:ics\"`",
              "q": "index=network sourcetype=\"flow:ics\"\n| where src_zone!=dest_zone AND allowed_pair=\"false\"\n| stats count by src, dest, dest_port, app\n| sort -count",
              "m": "Maintain `allowed_pair` lookup for zone-to-zone. Alert on any deny or unexpected allow. Quarterly review.",
              "z": "Sankey (zones), Table (violations), Single value (open violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Firewall, switch NetFlow, ARP tables.\n• Ensure the following data sources are available: `sourcetype=\"pan:traffic\"` `zone_pair` or `sourcetype=\"flow:ics\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain `allowed_pair` lookup for zone-to-zone. Alert on any deny or unexpected allow. Quarterly review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"flow:ics\"\n| where src_zone!=dest_zone AND allowed_pair=\"false\"\n| stats count by src, dest, dest_port, app\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ICS Network Segmentation Violations** — Hosts in wrong VLANs or east-west traffic across zones violates Purdue model and IEC 62443.\n\nDocumented **Data sources**: `sourcetype=\"pan:traffic\"` `zone_pair` or `sourcetype=\"flow:ics\"`. **App/TA** (typical add-on context): Firewall, switch NetFlow, ARP tables. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: flow:ics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"flow:ics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where src_zone!=dest_zone AND allowed_pair=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey (zones), Table (violations), Single value (open violations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: ics network segmentation violations before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "netflow"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.18",
              "n": "Engineering Workstation Anomaly",
              "c": "critical",
              "f": "advanced",
              "v": "USB inserts, new binaries, or remote sessions on EWS workstations are high-risk in OT.",
              "t": "EDR on EWS, Windows Security / Sysmon",
              "d": "`sourcetype=\"sysmon:windows\"` `tag=ews`",
              "q": "index=ot sourcetype=\"sysmon:windows\" host_tag=\"EWS\"\n| search EventCode=1 OR EventCode=11\n| where NOT match(Image,\"(?i)(approved\\\\\\\\path)\")\n| stats count by Computer, Image, ParentImage\n| sort -count",
              "m": "Lock down EWS to approved paths. Alert on new process or driver load. Correlate with maintenance windows.",
              "z": "Table (suspicious processes), Timeline (EWS events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EDR on EWS, Windows Security / Sysmon.\n• Ensure the following data sources are available: `sourcetype=\"sysmon:windows\"` `tag=ews`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLock down EWS to approved paths. Alert on new process or driver load. Correlate with maintenance windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"sysmon:windows\" host_tag=\"EWS\"\n| search EventCode=1 OR EventCode=11\n| where NOT match(Image,\"(?i)(approved\\\\\\\\path)\")\n| stats count by Computer, Image, ParentImage\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Engineering Workstation Anomaly** — USB inserts, new binaries, or remote sessions on EWS workstations are high-risk in OT.\n\nDocumented **Data sources**: `sourcetype=\"sysmon:windows\"` `tag=ews`. **App/TA** (typical add-on context): EDR on EWS, Windows Security / Sysmon. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: sysmon:windows. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"sysmon:windows\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Filters the current rows with `where NOT match(Image,\"(?i)(approved\\\\\\\\path)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by Computer, Image, ParentImage** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious processes), Timeline (EWS events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with the engineering laptop or desktop in the control zone before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.19",
              "n": "OT Asset External Communication Detection",
              "c": "critical",
              "f": "advanced",
              "v": "PLCs, HMIs, or sensors initiating outbound sessions to the internet or untrusted zones indicate misconfiguration, malware, or pivot from IT — a top IEC 62443 / NERC CIP concern.",
              "t": "Firewall, OT NetFlow, passive tap (Zeek OT)",
              "d": "`sourcetype=\"flow:ics\"`, `sourcetype=\"pan:traffic\"` with OT zone tags",
              "q": "index=ot sourcetype=\"flow:ics\" src_zone=\"OT_L3\"\n| lookup ot_asset_inventory ip as src OUTPUT asset_class\n| lookup vendor_update_nets network as dest OUTPUT network as vendor_net\n| where NOT cidrmatch(\"10.0.0.0/8\",dest) AND isnull(vendor_net)\n| stats count, values(dest_port) as ports by src, dest, app\n| sort -count",
              "m": "Maintain allowlisted update CDNs and remote-support jump hosts; default-deny egress from L3 devices. Alert on first-seen external `dest`.",
              "z": "Map (flows), Table (assets), Sankey (zone → egress).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Firewall, OT NetFlow, passive tap (Zeek OT).\n• Ensure the following data sources are available: `sourcetype=\"flow:ics\"`, `sourcetype=\"pan:traffic\"` with OT zone tags.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain allowlisted update CDNs and remote-support jump hosts; default-deny egress from L3 devices. Alert on first-seen external `dest`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"flow:ics\" src_zone=\"OT_L3\"\n| lookup ot_asset_inventory ip as src OUTPUT asset_class\n| lookup vendor_update_nets network as dest OUTPUT network as vendor_net\n| where NOT cidrmatch(\"10.0.0.0/8\",dest) AND isnull(vendor_net)\n| stats count, values(dest_port) as ports by src, dest, app\n| sort -count\n```\n\nUnderstanding this SPL\n\n**OT Asset External Communication Detection** — PLCs, HMIs, or sensors initiating outbound sessions to the internet or untrusted zones indicate misconfiguration, malware, or pivot from IT — a top IEC 62443 / NERC CIP concern.\n\nDocumented **Data sources**: `sourcetype=\"flow:ics\"`, `sourcetype=\"pan:traffic\"` with OT zone tags. **App/TA** (typical add-on context): Firewall, OT NetFlow, passive tap (Zeek OT). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: flow:ics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"flow:ics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where NOT cidrmatch(\"10.0.0.0/8\",dest) AND isnull(vendor_net)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (flows), Table (assets), Sankey (zone → egress).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: ot asset external communication before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "netflow"
              ],
              "em": [
                "netflow_netflow"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.20",
              "n": "OT Protocol Port Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Unexpected Modbus, DNP3, or EtherNet/IP ports on non-asset IPs reveal rogue devices or scanning.",
              "t": "SPAN / Zeek OT, industrial firewall",
              "d": "`sourcetype=\"zeek:conn\"` with OT VLANs, `sourcetype=\"ics:protocol\"`",
              "q": "index=ot (sourcetype=\"zeek:conn\" OR sourcetype=\"ics:protocol\")\n| where dest_port IN (502,20000,44818,2404) OR service IN (\"modbus\",\"dnp3\",\"enip\")\n| lookup ot_authorized_peers src dest OUTPUT approved\n| where approved!=\"true\"\n| stats count by src, dest, dest_port, service\n| sort -count",
              "m": "Pair with asset inventory; tune for engineering laptops in maintenance windows.",
              "z": "Table (unexpected sessions), Bar chart (port mix), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SPAN / Zeek OT, industrial firewall.\n• Ensure the following data sources are available: `sourcetype=\"zeek:conn\"` with OT VLANs, `sourcetype=\"ics:protocol\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPair with asset inventory; tune for engineering laptops in maintenance windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot (sourcetype=\"zeek:conn\" OR sourcetype=\"ics:protocol\")\n| where dest_port IN (502,20000,44818,2404) OR service IN (\"modbus\",\"dnp3\",\"enip\")\n| lookup ot_authorized_peers src dest OUTPUT approved\n| where approved!=\"true\"\n| stats count by src, dest, dest_port, service\n| sort -count\n```\n\nUnderstanding this SPL\n\n**OT Protocol Port Monitoring** — Unexpected Modbus, DNP3, or EtherNet/IP ports on non-asset IPs reveal rogue devices or scanning.\n\nDocumented **Data sources**: `sourcetype=\"zeek:conn\"` with OT VLANs, `sourcetype=\"ics:protocol\"`. **App/TA** (typical add-on context): SPAN / Zeek OT, industrial firewall. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:conn, ics:protocol. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:conn\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where dest_port IN (502,20000,44818,2404) OR service IN (\"modbus\",\"dnp3\",\"enip\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where approved!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port, service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unexpected sessions), Bar chart (port mix), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: ot protocol port before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.21",
              "n": "Removable Media in OT Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "USB events on HMIs or engineering stations violate many site policies and are common malware vectors.",
              "t": "Windows Security, EDR (where allowed on EWS), USB control agents",
              "d": "`sourcetype=\"WinEventLog:Security\"` EventCode=6416, `sourcetype=\"sysmon:windows\"` EventCode=11",
              "q": "index=ot (sourcetype=\"WinEventLog:Security\" EventCode=6416) OR (sourcetype=\"sysmon:windows\" EventCode=11)\n| search Computer IN (\"*HMI*\",\"*EWS*\") OR tag=ot_workstation\n| stats count by Computer, DeviceDescription, User, Image\n| sort -_time",
              "m": "Physical port block by default; break-glass USB with logged approval ticket.",
              "z": "Table (USB events), Timeline, Single value (events per shift).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows Security, EDR (where allowed on EWS), USB control agents.\n• Ensure the following data sources are available: `sourcetype=\"WinEventLog:Security\"` EventCode=6416, `sourcetype=\"sysmon:windows\"` EventCode=11.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPhysical port block by default; break-glass USB with logged approval ticket.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot (sourcetype=\"WinEventLog:Security\" EventCode=6416) OR (sourcetype=\"sysmon:windows\" EventCode=11)\n| search Computer IN (\"*HMI*\",\"*EWS*\") OR tag=ot_workstation\n| stats count by Computer, DeviceDescription, User, Image\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Removable Media in OT Detection** — USB events on HMIs or engineering stations violate many site policies and are common malware vectors.\n\nDocumented **Data sources**: `sourcetype=\"WinEventLog:Security\"` EventCode=6416, `sourcetype=\"sysmon:windows\"` EventCode=11. **App/TA** (typical add-on context): Windows Security, EDR (where allowed on EWS), USB control agents. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: WinEventLog:Security, sysmon:windows. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"WinEventLog:Security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by Computer, DeviceDescription, User, Image** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (USB events), Timeline, Single value (events per shift).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: removable media in ot before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.22",
              "n": "OT/IT Boundary Traffic Analysis",
              "c": "high",
              "f": "advanced",
              "v": "Baselines permitted jump-server, patch, and historian replication across the Purdue boundary; flags new apps or volume spikes.",
              "t": "Firewall, data diode logs (if bidirectional return path exists)",
              "d": "`sourcetype=\"pan:traffic\"` `zone_pair`, `sourcetype=\"flow:ics\"`",
              "q": "index=network sourcetype=\"pan:traffic\" zone_pair=\"IT_DMZ_to_OT_L3\"\n| bin _time span=1h\n| stats sum(bytes) as b, dc(dest_port) as ports by app, _time\n| eventstats avg(b) as baseline by app\n| where b > 3*baseline AND ports>5\n| sort -b",
              "m": "Document each allowed application; use application-aware rules; quarterly rule review with OT owners.",
              "z": "Line chart (bytes by app), Table (anomalies), Heatmap (hour × app).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Firewall, data diode logs (if bidirectional return path exists).\n• Ensure the following data sources are available: `sourcetype=\"pan:traffic\"` `zone_pair`, `sourcetype=\"flow:ics\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDocument each allowed application; use application-aware rules; quarterly rule review with OT owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:traffic\" zone_pair=\"IT_DMZ_to_OT_L3\"\n| bin _time span=1h\n| stats sum(bytes) as b, dc(dest_port) as ports by app, _time\n| eventstats avg(b) as baseline by app\n| where b > 3*baseline AND ports>5\n| sort -b\n```\n\nUnderstanding this SPL\n\n**OT/IT Boundary Traffic Analysis** — Baselines permitted jump-server, patch, and historian replication across the Purdue boundary; flags new apps or volume spikes.\n\nDocumented **Data sources**: `sourcetype=\"pan:traffic\"` `zone_pair`, `sourcetype=\"flow:ics\"`. **App/TA** (typical add-on context): Firewall, data diode logs (if bidirectional return path exists). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:traffic\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by app, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where b > 3*baseline AND ports>5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (bytes by app), Table (anomalies), Heatmap (hour × app).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with unexpected traffic at the line between the office and the plant before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.23",
              "n": "ICS Change Management Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Logic downloads, firmware pushes, or HMI project changes without a linked work order break IEC 62443 SR-7 / internal change policy.",
              "t": "PLC programming tools audit, engineering station logs",
              "d": "`sourcetype=\"plc:download\"`, `sourcetype=\"tia:audit\"`",
              "q": "index=ot sourcetype=\"plc:download\"\n| lookup cmms_work_orders change_id OUTPUT wo_status requester\n| where isnull(wo_status) OR wo_status!=\"approved\"\n| stats count by plc_name, engineer_id, project_version\n| sort -count",
              "m": "Require pre-approved WO for all downloads; correlate with maintenance windows from MES.",
              "z": "Table (unauthorized downloads), Bar chart (by line), Gantt (WO vs. event time).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PLC programming tools audit, engineering station logs.\n• Ensure the following data sources are available: `sourcetype=\"plc:download\"`, `sourcetype=\"tia:audit\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequire pre-approved WO for all downloads; correlate with maintenance windows from MES.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"plc:download\"\n| lookup cmms_work_orders change_id OUTPUT wo_status requester\n| where isnull(wo_status) OR wo_status!=\"approved\"\n| stats count by plc_name, engineer_id, project_version\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ICS Change Management Compliance** — Logic downloads, firmware pushes, or HMI project changes without a linked work order break IEC 62443 SR-7 / internal change policy.\n\nDocumented **Data sources**: `sourcetype=\"plc:download\"`, `sourcetype=\"tia:audit\"`. **App/TA** (typical add-on context): PLC programming tools audit, engineering station logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: plc:download. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"plc:download\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(wo_status) OR wo_status!=\"approved\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by plc_name, engineer_id, project_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unauthorized downloads), Bar chart (by line), Gantt (WO vs. event time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: ics change management compliance before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.24",
              "n": "Production Line Downtime Tracking",
              "c": "critical",
              "f": "beginner",
              "v": "Correlates controller faults, jam sensors, and Andon events for OEE and root-cause — feeds continuous improvement.",
              "t": "MES, PLC fault bits, SCADA alarms",
              "d": "`sourcetype=\"mes:line_status\"`, `sourcetype=\"opcua:alarm\"`",
              "q": "index=ot sourcetype=\"mes:line_status\"\n| where state=\"DOWN\" OR fault_code!=0\n| transaction line_id maxspan=24h startswith=\"state=UP\" endswith=\"state=DOWN\" keepevicted=true\n| eval downtime_min=duration/60\n| stats sum(downtime_min) as total_down, count as stops by line_id, shift\n| sort -total_down",
              "m": "Normalize fault codes to reason trees; exclude planned changeovers via MES schedule lookup.",
              "z": "Bar chart (downtime by line), Timeline (stops), Single value (MTBF).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: MES, PLC fault bits, SCADA alarms.\n• Ensure the following data sources are available: `sourcetype=\"mes:line_status\"`, `sourcetype=\"opcua:alarm\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize fault codes to reason trees; exclude planned changeovers via MES schedule lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"mes:line_status\"\n| where state=\"DOWN\" OR fault_code!=0\n| transaction line_id maxspan=24h startswith=\"state=UP\" endswith=\"state=DOWN\" keepevicted=true\n| eval downtime_min=duration/60\n| stats sum(downtime_min) as total_down, count as stops by line_id, shift\n| sort -total_down\n```\n\nUnderstanding this SPL\n\n**Production Line Downtime Tracking** — Correlates controller faults, jam sensors, and Andon events for OEE and root-cause — feeds continuous improvement.\n\nDocumented **Data sources**: `sourcetype=\"mes:line_status\"`, `sourcetype=\"opcua:alarm\"`. **App/TA** (typical add-on context): MES, PLC fault bits, SCADA alarms. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: mes:line_status. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"mes:line_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where state=\"DOWN\" OR fault_code!=0` — typically the threshold or rule expression for this monitoring goal.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• `eval` defines or adjusts **downtime_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by line_id, shift** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (downtime by line), Timeline (stops), Single value (MTBF).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: production line downtime before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.25",
              "n": "OEE Metrics Collection",
              "c": "high",
              "f": "intermediate",
              "v": "Availability × Performance × Quality from SCADA/MES tags — executive and plant KPIs.",
              "t": "Historian, OPC-UA, MES",
              "d": "`sourcetype=\"opcua:tag\"`, `sourcetype=\"historian:sample\"`",
              "q": "index=ot sourcetype=\"opcua:tag\" tag=oee\n| bin _time span=1h\n| stats avg(availability_pct) as A, avg(performance_pct) as P, avg(quality_pct) as Q by line_id, _time\n| eval oee=round((A*P*Q)/10000,2)\n| where oee < 0.75\n| sort _time",
              "m": "Align tag naming per ISA-95; validate against manual OEE samples monthly.",
              "z": "Line chart (OEE trend), Gauge (current OEE), Bar chart (loss buckets).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Historian, OPC-UA, MES.\n• Ensure the following data sources are available: `sourcetype=\"opcua:tag\"`, `sourcetype=\"historian:sample\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign tag naming per ISA-95; validate against manual OEE samples monthly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"opcua:tag\" tag=oee\n| bin _time span=1h\n| stats avg(availability_pct) as A, avg(performance_pct) as P, avg(quality_pct) as Q by line_id, _time\n| eval oee=round((A*P*Q)/10000,2)\n| where oee < 0.75\n| sort _time\n```\n\nUnderstanding this SPL\n\n**OEE Metrics Collection** — Availability × Performance × Quality from SCADA/MES tags — executive and plant KPIs.\n\nDocumented **Data sources**: `sourcetype=\"opcua:tag\"`, `sourcetype=\"historian:sample\"`. **App/TA** (typical add-on context): Historian, OPC-UA, MES. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: opcua:tag. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"opcua:tag\", tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by line_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **oee** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where oee < 0.75` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (OEE trend), Gauge (current OEE), Bar chart (loss buckets).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: oee metrics collection before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.26",
              "n": "Batch Process Deviation Alerting",
              "c": "critical",
              "f": "advanced",
              "v": "Recipe phase duration, temperature, or agitator speed outside limits risks scrap or unsafe reactions.",
              "t": "Batch executive (S88), historian",
              "d": "`sourcetype=\"batch:phase\"`, `sourcetype=\"historian:sample\"`",
              "q": "index=ot sourcetype=\"batch:phase\" batch_id=*\n| lookup recipe_limits recipe phase OUTPUT min_temp max_temp max_duration_sec\n| where temp_c < min_temp OR temp_c > max_temp OR phase_duration_sec > max_duration_sec\n| stats latest(batch_id) as batch, values(phase) as phases by reactor_id\n| sort -_time",
              "m": "Integrate with quality hold workflow; electronic signatures for parameter overrides.",
              "z": "Table (deviations), Control chart (temp), Timeline (phases).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Batch executive (S88), historian.\n• Ensure the following data sources are available: `sourcetype=\"batch:phase\"`, `sourcetype=\"historian:sample\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate with quality hold workflow; electronic signatures for parameter overrides.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"batch:phase\" batch_id=*\n| lookup recipe_limits recipe phase OUTPUT min_temp max_temp max_duration_sec\n| where temp_c < min_temp OR temp_c > max_temp OR phase_duration_sec > max_duration_sec\n| stats latest(batch_id) as batch, values(phase) as phases by reactor_id\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Batch Process Deviation Alerting** — Recipe phase duration, temperature, or agitator speed outside limits risks scrap or unsafe reactions.\n\nDocumented **Data sources**: `sourcetype=\"batch:phase\"`, `sourcetype=\"historian:sample\"`. **App/TA** (typical add-on context): Batch executive (S88), historian. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: batch:phase. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"batch:phase\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where temp_c < min_temp OR temp_c > max_temp OR phase_duration_sec > max_duration_sec` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by reactor_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (deviations), Control chart (temp), Timeline (phases).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: batch process deviation alerting before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Quality",
                "Safety"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.27",
              "n": "EDI Acknowledgement Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Missing 997/999/CONTRL acks for ASNs or orders disrupt supply-chain and inventory — common in automotive / discrete manufacturing.",
              "t": "B2B gateway (IBM/Seeburger), VAN logs",
              "d": "`sourcetype=\"edi:control\"`, `sourcetype=\"as2:mdn\"`",
              "q": "index=edi sourcetype=\"edi:control\"\n| stats earliest(_time) as sent latest(ack_time) as ack by interchange_id, doc_type, partner_id\n| eval ack_latency_sec=ack-sent\n| where isnull(ack) OR ack_latency_sec>3600\n| sort -sent",
              "m": "SLA per trading partner; auto-retry and partner escalation on NAK codes.",
              "z": "Table (late/missing acks), Line chart (ack latency trend), Bar chart (by partner).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: B2B gateway (IBM/Seeburger), VAN logs.\n• Ensure the following data sources are available: `sourcetype=\"edi:control\"`, `sourcetype=\"as2:mdn\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSLA per trading partner; auto-retry and partner escalation on NAK codes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edi sourcetype=\"edi:control\"\n| stats earliest(_time) as sent latest(ack_time) as ack by interchange_id, doc_type, partner_id\n| eval ack_latency_sec=ack-sent\n| where isnull(ack) OR ack_latency_sec>3600\n| sort -sent\n```\n\nUnderstanding this SPL\n\n**EDI Acknowledgement Monitoring** — Missing 997/999/CONTRL acks for ASNs or orders disrupt supply-chain and inventory — common in automotive / discrete manufacturing.\n\nDocumented **Data sources**: `sourcetype=\"edi:control\"`, `sourcetype=\"as2:mdn\"`. **App/TA** (typical add-on context): B2B gateway (IBM/Seeburger), VAN logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edi; **sourcetype**: edi:control. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edi, sourcetype=\"edi:control\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by interchange_id, doc_type, partner_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ack_latency_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(ack) OR ack_latency_sec>3600` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (late/missing acks), Line chart (ack latency trend), Bar chart (by partner).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: edi acknowledgement before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Operations"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.2.28",
              "n": "Supplier Delivery Performance",
              "c": "medium",
              "f": "beginner",
              "v": "On-time delivery % and lead-time variance from ASN/OTIF feeds support supplier scorecards and production planning.",
              "t": "TMS, EDI 856/855, MES receipts",
              "d": "`sourcetype=\"edi:asn\"`, `sourcetype=\"wms:receipt\"`",
              "q": "index=supply sourcetype=\"edi:asn\"\n| eval on_time=if(actual_arrival <= promised_date,1,0)\n| stats sum(on_time) as ot, count as total, avg((actual_arrival-promised_date)/86400) as avg_late_days by supplier_id\n| eval otif_pct=round(100*ot/total,1)\n| sort otif_pct\n| head 20",
              "m": "Join to GRN for quantity accuracy; exclude force majeure with reason codes.",
              "z": "Bar chart (OTIF % by supplier), Table (worst performers), Trend (rolling 13 weeks).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TMS, EDI 856/855, MES receipts.\n• Ensure the following data sources are available: `sourcetype=\"edi:asn\"`, `sourcetype=\"wms:receipt\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nJoin to GRN for quantity accuracy; exclude force majeure with reason codes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=supply sourcetype=\"edi:asn\"\n| eval on_time=if(actual_arrival <= promised_date,1,0)\n| stats sum(on_time) as ot, count as total, avg((actual_arrival-promised_date)/86400) as avg_late_days by supplier_id\n| eval otif_pct=round(100*ot/total,1)\n| sort otif_pct\n| head 20\n```\n\nUnderstanding this SPL\n\n**Supplier Delivery Performance** — On-time delivery % and lead-time variance from ASN/OTIF feeds support supplier scorecards and production planning.\n\nDocumented **Data sources**: `sourcetype=\"edi:asn\"`, `sourcetype=\"wms:receipt\"`. **App/TA** (typical add-on context): TMS, EDI 856/855, MES receipts. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: supply; **sourcetype**: edi:asn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=supply, sourcetype=\"edi:asn\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **on_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by supplier_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **otif_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (OTIF % by supplier), Table (worst performers), Trend (rolling 13 weeks).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your plant control systems and field data. We help you know if something is off with this area: supplier delivery performance before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 38.6,
          "qd": {
            "gold": 0,
            "silver": 3,
            "bronze": 25,
            "none": 0
          }
        },
        {
          "i": "14.3",
          "n": "Splunk Edge Hub",
          "u": [
            {
              "i": "14.3.1",
              "n": "Temperature Anomaly Detection",
              "c": "high",
              "f": "advanced",
              "v": "Edge-based kNN anomaly detection provides faster response than cloud-based processing for critical temperature monitoring in data centers and industrial environments.",
              "t": "Splunk Edge Hub (built-in kNN model), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` (metrics index), `index=edge-hub-anomalies` (anomaly metrics), `sourcetype=edge_hub`",
              "q": "| mstats count where index=edge-hub-anomalies AND metric_name=temperature AND type=\"anomaly-detector\"\n  span=1h by extracted_host\n| where count > 0",
              "m": "Deploy Edge Hub device (IP66-rated, built-in temperature sensor ±0.2°C accuracy). Enable kNN anomaly detection via the Edge Hub mobile app — toggle \"Anomaly Detection\" on the temperature sensor tile. Sensor data streams as metrics to `edge-hub-data` index; anomalies to `edge-hub-anomalies` index via HEC. Create alerts on anomaly count spikes. Optional: attach external I²C temperature probes via the 3.5mm jack for additional measurement points.",
              "z": "Line chart (mstats temperature trend by device), Single value (current temperature), Timeline (anomaly events from edge-hub-anomalies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (built-in kNN model), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` (metrics index), `index=edge-hub-anomalies` (anomaly metrics), `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub device (IP66-rated, built-in temperature sensor ±0.2°C accuracy). Enable kNN anomaly detection via the Edge Hub mobile app — toggle \"Anomaly Detection\" on the temperature sensor tile. Sensor data streams as metrics to `edge-hub-data` index; anomalies to `edge-hub-anomalies` index via HEC. Create alerts on anomaly count spikes. Optional: attach external I²C temperature probes via the 3.5mm jack for additional measurement points.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats count where index=edge-hub-anomalies AND metric_name=temperature AND type=\"anomaly-detector\"\n  span=1h by extracted_host\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Temperature Anomaly Detection** — Edge-based kNN anomaly detection provides faster response than cloud-based processing for critical temperature monitoring in data centers and industrial environments.\n\nDocumented **Data sources**: `index=edge-hub-data` (metrics index), `index=edge-hub-anomalies` (anomaly metrics), `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (built-in kNN model), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-anomalies.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (mstats temperature trend by device), Single value (current temperature), Timeline (anomaly events from edge-hub-anomalies).",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with uncomfortable or unsafe hot and cold spots before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.2",
              "n": "Vibration & Motion Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Equipment vibration changes indicate bearing wear, misalignment, or imbalance. Edge Hub's built-in 3-axis accelerometer and gyroscope enable predictive maintenance without external sensors.",
              "t": "Splunk Edge Hub (built-in sensors), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` (metrics index), `sourcetype=edge_hub` — built-in 3-axis accelerometer + 6-axis gyroscope",
              "q": "| mstats count where index=edge-hub-anomalies AND metric_name=\"accelerometer*\" AND type=\"anomaly-detector\"\n  span=1h by extracted_host\n| where count > 0",
              "m": "Mount Edge Hub near rotating equipment (IP66 enclosure suits industrial environments, operating -40°C to 80°C). The built-in accelerometer and gyroscope stream metrics to `edge-hub-data`. Enable kNN anomaly detection via the mobile app for each axis. Deploy MLTK Smart Outlier Detection model for more advanced analysis (requires OT Intelligence 4.8.0+ and Edge Hub OS 2.0+). Alert on anomaly detections. Note: one ML model per sensor; performance degrades with 2+ concurrent models.",
              "z": "Line chart (accelerometer axes over time), Single value (current RMS), Timeline (anomaly events).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (built-in sensors), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` (metrics index), `sourcetype=edge_hub` — built-in 3-axis accelerometer + 6-axis gyroscope.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMount Edge Hub near rotating equipment (IP66 enclosure suits industrial environments, operating -40°C to 80°C). The built-in accelerometer and gyroscope stream metrics to `edge-hub-data`. Enable kNN anomaly detection via the mobile app for each axis. Deploy MLTK Smart Outlier Detection model for more advanced analysis (requires OT Intelligence 4.8.0+ and Edge Hub OS 2.0+). Alert on anomaly detections. Note: one ML model per sensor; performance degrades with 2+ concurrent models.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats count where index=edge-hub-anomalies AND metric_name=\"accelerometer*\" AND type=\"anomaly-detector\"\n  span=1h by extracted_host\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Vibration & Motion Monitoring** — Equipment vibration changes indicate bearing wear, misalignment, or imbalance. Edge Hub's built-in 3-axis accelerometer and gyroscope enable predictive maintenance without external sensors.\n\nDocumented **Data sources**: `index=edge-hub-data` (metrics index), `sourcetype=edge_hub` — built-in 3-axis accelerometer + 6-axis gyroscope. **App/TA** (typical add-on context): Splunk Edge Hub (built-in sensors), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-anomalies.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (accelerometer axes over time), Single value (current RMS), Timeline (anomaly events).",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with shaking that might mean a bearing, balance, or mount problem before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.3",
              "n": "Air Quality & VOC Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Indoor air quality affects occupant health and productivity. Edge Hub's optional VOC sensor provides IAQ scoring for workplace wellness monitoring.",
              "t": "Splunk Edge Hub (optional air quality sensor), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` (metrics index), `sourcetype=edge_hub` — built-in VOC sensor (optional, <1s response, IAQ score)",
              "q": "| mstats avg(_value) as value\n  where index=edge-hub-data AND metric_name IN (voc, humidity, temperature)\n  span=15m by metric_name, extracted_host",
              "m": "Deploy Edge Hub with optional VOC/air quality sensor module. The sensor provides IAQ (Indoor Air Quality) score with <1 second response time. Data streams as metrics to `edge-hub-data` index. Note: Edge Hub measures VOC and IAQ score — it does not have a CO2 or PM2.5 sensor natively. For CO2/PM2.5, connect external sensors via MQTT or I²C. Alert when IAQ score exceeds thresholds. Correlate with humidity sensor data for comfort indexing.",
              "z": "Line chart (IAQ score over time), Gauge (current IAQ), Multi-metric dashboard (VOC + humidity + temperature).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (optional air quality sensor), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` (metrics index), `sourcetype=edge_hub` — built-in VOC sensor (optional, <1s response, IAQ score).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub with optional VOC/air quality sensor module. The sensor provides IAQ (Indoor Air Quality) score with <1 second response time. Data streams as metrics to `edge-hub-data` index. Note: Edge Hub measures VOC and IAQ score — it does not have a CO2 or PM2.5 sensor natively. For CO2/PM2.5, connect external sensors via MQTT or I²C. Alert when IAQ score exceeds thresholds. Correlate with humidity sensor data for comfort indexing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(_value) as value\n  where index=edge-hub-data AND metric_name IN (voc, humidity, temperature)\n  span=15m by metric_name, extracted_host\n```\n\nUnderstanding this SPL\n\n**Air Quality & VOC Monitoring** — Indoor air quality affects occupant health and productivity. Edge Hub's optional VOC sensor provides IAQ scoring for workplace wellness monitoring.\n\nDocumented **Data sources**: `index=edge-hub-data` (metrics index), `sourcetype=edge_hub` — built-in VOC sensor (optional, <1s response, IAQ score). **App/TA** (typical add-on context): Splunk Edge Hub (optional air quality sensor), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-data.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (IAQ score over time), Gauge (current IAQ), Multi-metric dashboard (VOC + humidity + temperature).",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with fumes, dust, and other air quality issues near people before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.4",
              "n": "Sound Level Anomalies",
              "c": "medium",
              "f": "intermediate",
              "v": "Unusual sound patterns near equipment indicate mechanical issues. Edge Hub's stereo microphone enables acoustic monitoring without external sensors.",
              "t": "Splunk Edge Hub (built-in stereo microphone), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` (metrics index), `sourcetype=edge_hub` — built-in stereo microphone",
              "q": "| mstats count where index=edge-hub-anomalies AND metric_name=sound_level AND type=\"anomaly-detector\"\n  span=1h by extracted_host\n| where count > 0",
              "m": "Deploy Edge Hub near critical equipment. The built-in stereo microphone captures ambient sound levels. Enable kNN anomaly detection to baseline normal patterns and detect deviations. Alert on sustained high levels (OSHA >85dB threshold) and sudden changes (potential equipment failure). Sound data streams as metrics to `edge-hub-data`; anomalies to `edge-hub-anomalies`.",
              "z": "Line chart (sound level trend), Single value (current dB), Timeline (anomaly events).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (built-in stereo microphone), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` (metrics index), `sourcetype=edge_hub` — built-in stereo microphone.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub near critical equipment. The built-in stereo microphone captures ambient sound levels. Enable kNN anomaly detection to baseline normal patterns and detect deviations. Alert on sustained high levels (OSHA >85dB threshold) and sudden changes (potential equipment failure). Sound data streams as metrics to `edge-hub-data`; anomalies to `edge-hub-anomalies`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats count where index=edge-hub-anomalies AND metric_name=sound_level AND type=\"anomaly-detector\"\n  span=1h by extracted_host\n| where count > 0\n```\n\nUnderstanding this SPL\n\n**Sound Level Anomalies** — Unusual sound patterns near equipment indicate mechanical issues. Edge Hub's stereo microphone enables acoustic monitoring without external sensors.\n\nDocumented **Data sources**: `index=edge-hub-data` (metrics index), `sourcetype=edge_hub` — built-in stereo microphone. **App/TA** (typical add-on context): Splunk Edge Hub (built-in stereo microphone), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-anomalies.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (sound level trend), Single value (current dB), Timeline (anomaly events).",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with loud or unsafe noise in the work area before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.5",
              "n": "MQTT Device Integration Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Edge Hub's built-in MQTT broker aggregates IoT sensor data from external devices. Monitoring broker health ensures data pipeline reliability.",
              "t": "Splunk Edge Hub (built-in MQTT broker), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` (metrics from MQTT topics), `index=edge-hub-logs sourcetype=splunk_edge_hub_log` (broker logs)",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log \"mqtt\" OR \"broker\"\n| stats count by log_level, message",
              "m": "Configure MQTT topics via Edge Hub Advanced Settings → MQTT tab. Create metric or event topic subscriptions with transformations (metric name, dimensions, timestamps). External IoT devices publish to Edge Hub's built-in MQTT broker (port 1883). Data is transformed and forwarded to Splunk via HEC. For TLS-secured external brokers, upload certificates via Advanced Settings → MQTT → TLS Configuration. Monitor for disconnected publishers and message rate drops.",
              "z": "Line chart (MQTT metric trends), Table (connected device inventory), Single value (active MQTT topics).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (built-in MQTT broker), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` (metrics from MQTT topics), `index=edge-hub-logs sourcetype=splunk_edge_hub_log` (broker logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure MQTT topics via Edge Hub Advanced Settings → MQTT tab. Create metric or event topic subscriptions with transformations (metric name, dimensions, timestamps). External IoT devices publish to Edge Hub's built-in MQTT broker (port 1883). Data is transformed and forwarded to Splunk via HEC. For TLS-secured external brokers, upload certificates via Advanced Settings → MQTT → TLS Configuration. Monitor for disconnected publishers and message rate drops.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log \"mqtt\" OR \"broker\"\n| stats count by log_level, message\n```\n\nUnderstanding this SPL\n\n**MQTT Device Integration Monitoring** — Edge Hub's built-in MQTT broker aggregates IoT sensor data from external devices. Monitoring broker health ensures data pipeline reliability.\n\nDocumented **Data sources**: `index=edge-hub-data` (metrics from MQTT topics), `index=edge-hub-logs sourcetype=splunk_edge_hub_log` (broker logs). **App/TA** (typical add-on context): Splunk Edge Hub (built-in MQTT broker), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by log_level, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (MQTT metric trends), Table (connected device inventory), Single value (active MQTT topics).",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: mqtt device integration before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.6",
              "n": "SNMP Device Polling from Edge",
              "c": "medium",
              "f": "beginner",
              "v": "Edge Hub bridges OT/IT network segmentation by polling SNMP-enabled devices on isolated networks and forwarding data to Splunk Cloud.",
              "t": "Splunk Edge Hub (SNMP integration), Splunk OT Intelligence",
              "d": "`index=edge-hub-snmp sourcetype=edge_hub` — SNMP polls via Edge Hub to local devices",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log \"snmp\" (\"timeout\" OR \"unreachable\")\n| stats count by host, message",
              "m": "Configure SNMP polling via Edge Hub Advanced Settings → SNMP tab. Add devices by IP, set SNMP version (v1/v2c/v3), community string or v3 credentials, and define OIDs with aliases. Set polling interval (default 60s). Edge Hub polls local OT devices and forwards results to `edge-hub-snmp` index via HEC. This bridges the air-gap — enterprise Splunk never touches the OT network directly. Alert on device unreachability or metric threshold violations.",
              "z": "Table (device OID values), Status grid (device × poll status), Line chart (metric trends).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (SNMP integration), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-snmp sourcetype=edge_hub` — SNMP polls via Edge Hub to local devices.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure SNMP polling via Edge Hub Advanced Settings → SNMP tab. Add devices by IP, set SNMP version (v1/v2c/v3), community string or v3 credentials, and define OIDs with aliases. Set polling interval (default 60s). Edge Hub polls local OT devices and forwards results to `edge-hub-snmp` index via HEC. This bridges the air-gap — enterprise Splunk never touches the OT network directly. Alert on device unreachability or metric threshold violations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log \"snmp\" (\"timeout\" OR \"unreachable\")\n| stats count by host, message\n```\n\nUnderstanding this SPL\n\n**SNMP Device Polling from Edge** — Edge Hub bridges OT/IT network segmentation by polling SNMP-enabled devices on isolated networks and forwarding data to Splunk Cloud.\n\nDocumented **Data sources**: `index=edge-hub-snmp sourcetype=edge_hub` — SNMP polls via Edge Hub to local devices. **App/TA** (typical add-on context): Splunk Edge Hub (SNMP integration), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device OID values), Status grid (device × poll status), Line chart (metric trends).",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: snmp device polling from edge before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.7",
              "n": "Edge-to-Cloud Data Pipeline Health",
              "c": "high",
              "f": "intermediate",
              "v": "Edge Hub pipeline health ensures IoT/OT data reaches Splunk. A disconnected Edge Hub creates blind spots — the device backlogs up to 3M sensor data points locally in SQLite.",
              "t": "Splunk Edge Hub (system health), Splunk OT Intelligence",
              "d": "`index=edge-hub-health sourcetype=edge_hub` (device health metrics), `index=edge-hub-logs sourcetype=splunk_edge_hub_log` (system logs)",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log\n  (\"connection\" OR \"unreachable\" OR \"timeout\")\n| timechart count by log_level",
              "m": "Edge Hub streams device health data to `edge-hub-health` index and system logs to `edge-hub-logs` index. The device checks Splunk reachability every 15 seconds — LED ring shows green (connected) or red (disconnected). When disconnected, data backlogs locally: 3M sensor data points and 100K health/logs/anomalies/SNMP entries each (FIFO, batches of 100 via HEC on reconnect). Monitor CPU, memory, disk utilization on the device. Alert on connectivity loss or sustained high resource usage.",
              "z": "Single value (connectivity status with LED color mapping), Gauge (CPU/memory/disk), Line chart (forwarding rate over time).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (system health), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-health sourcetype=edge_hub` (device health metrics), `index=edge-hub-logs sourcetype=splunk_edge_hub_log` (system logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEdge Hub streams device health data to `edge-hub-health` index and system logs to `edge-hub-logs` index. The device checks Splunk reachability every 15 seconds — LED ring shows green (connected) or red (disconnected). When disconnected, data backlogs locally: 3M sensor data points and 100K health/logs/anomalies/SNMP entries each (FIFO, batches of 100 via HEC on reconnect). Monitor CPU, memory, disk utilization on the device. Alert on connectivity loss or sustained high resource usage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log\n  (\"connection\" OR \"unreachable\" OR \"timeout\")\n| timechart count by log_level\n```\n\nUnderstanding this SPL\n\n**Edge-to-Cloud Data Pipeline Health** — Edge Hub pipeline health ensures IoT/OT data reaches Splunk. A disconnected Edge Hub creates blind spots — the device backlogs up to 3M sensor data points locally in SQLite.\n\nDocumented **Data sources**: `index=edge-hub-health sourcetype=edge_hub` (device health metrics), `index=edge-hub-logs sourcetype=splunk_edge_hub_log` (system logs). **App/TA** (typical add-on context): Splunk Edge Hub (system health), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time with a separate series **by log_level** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (connectivity status with LED color mapping), Gauge (CPU/memory/disk), Line chart (forwarding rate over time).",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with what the on-site box is feeling and whether it is online before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.9",
              "n": "Cold Storage Room Temperature Excursion Alert",
              "c": "critical",
              "f": "advanced",
              "v": "Ensures pharmaceutical, food, or vaccine storage integrity by alerting within minutes of unplanned temperature rise.",
              "t": "Splunk Edge Hub (temperature sensor ±0.2°C), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` metric_name=temperature, `sourcetype=edge_hub`",
              "q": "| mstats avg(temperature) as temp by host, _time span=5m\n| eval expected_range=\"[-20,-15]\"\n| where temp > -15 OR temp < -20\n| eval deviation=if(temp > -15, temp - (-15), (-20) - temp)\n| stats count as excursion_count, max(deviation) as max_deviation by host\n| where excursion_count >= 3",
              "m": "Configure temperature sensor with Advanced Settings → Alerts enabled at -15°C upper threshold. Store locally for 30 minutes via SQLite backlog. For sub-zero operation, verify Edge Hub -40°C to 80°C operating range covers your environment. MQTT topic subscription can include external low-cost temp probes via I²C port (3.5mm jack).",
              "z": "Single-value alert indicator, time-series trend, deviation log.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (temperature sensor ±0.2°C), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=temperature, `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure temperature sensor with Advanced Settings → Alerts enabled at -15°C upper threshold. Store locally for 30 minutes via SQLite backlog. For sub-zero operation, verify Edge Hub -40°C to 80°C operating range covers your environment. MQTT topic subscription can include external low-cost temp probes via I²C port (3.5mm jack).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(temperature) as temp by host, _time span=5m\n| eval expected_range=\"[-20,-15]\"\n| where temp > -15 OR temp < -20\n| eval deviation=if(temp > -15, temp - (-15), (-20) - temp)\n| stats count as excursion_count, max(deviation) as max_deviation by host\n| where excursion_count >= 3\n```\n\nUnderstanding this SPL\n\n**Cold Storage Room Temperature Excursion Alert** — Ensures pharmaceutical, food, or vaccine storage integrity by alerting within minutes of unplanned temperature rise.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=temperature, `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (temperature sensor ±0.2°C), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **expected_range** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where temp > -15 OR temp < -20` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **deviation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where excursion_count >= 3` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single-value alert indicator, time-series trend, deviation log.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with uncomfortable or unsafe hot and cold spots before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.10",
              "n": "Museum & Archive Climate Control Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Documents preservation requirements (typically 18-21°C, 35-45% RH) for regulatory compliance and insurance.",
              "t": "Splunk Edge Hub (temperature + humidity dual sensor), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` metric_name=temperature OR metric_name=humidity, `sourcetype=edge_hub`",
              "q": "| mstats avg(temperature) as temp, avg(humidity) as rh by host, _time span=10m\n| eval temp_compliant=if(temp>=18 AND temp<=21, 1, 0), rh_compliant=if(rh>=35 AND rh<=45, 1, 0)\n| eval compliance_score=((temp_compliant + rh_compliant) / 2) * 100\n| stats avg(compliance_score) as avg_compliance, count as hours by host\n| where avg_compliance < 95",
              "m": "Mount Edge Hub in archival vault with sensors in passive airflow zone. Configure 10-minute polling intervals via Advanced Settings → Sensor Polling for daily compliance reporting. Use edge-hub-health index to track sensor drift (humidity can drift ±5% annually). Maintain audit trail in edge-hub-logs for regulatory documentation.",
              "z": "Compliance scorecard, historical trend, excursion timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (temperature + humidity dual sensor), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=temperature OR metric_name=humidity, `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMount Edge Hub in archival vault with sensors in passive airflow zone. Configure 10-minute polling intervals via Advanced Settings → Sensor Polling for daily compliance reporting. Use edge-hub-health index to track sensor drift (humidity can drift ±5% annually). Maintain audit trail in edge-hub-logs for regulatory documentation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(temperature) as temp, avg(humidity) as rh by host, _time span=10m\n| eval temp_compliant=if(temp>=18 AND temp<=21, 1, 0), rh_compliant=if(rh>=35 AND rh<=45, 1, 0)\n| eval compliance_score=((temp_compliant + rh_compliant) / 2) * 100\n| stats avg(compliance_score) as avg_compliance, count as hours by host\n| where avg_compliance < 95\n```\n\nUnderstanding this SPL\n\n**Museum & Archive Climate Control Compliance** — Documents preservation requirements (typically 18-21°C, 35-45% RH) for regulatory compliance and insurance.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=temperature OR metric_name=humidity, `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (temperature + humidity dual sensor), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **temp_compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **compliance_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_compliance < 95` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Compliance scorecard, historical trend, excursion timeline.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: museum & archive climate control compliance before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.11",
              "n": "Greenhouse Humidity & Growth Optimization",
              "c": "medium",
              "f": "advanced",
              "v": "Optimizes plant growth rates by maintaining ideal VPD (vapor pressure deficit) and reducing fungal disease risk.",
              "t": "Splunk Edge Hub (humidity + temperature + optional light sensor), custom edge.json container",
              "d": "`index=edge-hub-data` metric_name=humidity OR metric_name=temperature OR metric_name=light, `sourcetype=edge_hub`",
              "q": "| mstats avg(temperature) as temp, avg(humidity) as rh, max(light_level) as lux by host, _time span=1h\n| eval sat_pressure=610.5*exp((17.27*temp)/(temp+237.7))\n| eval vpd=(sat_pressure*(100-rh)/100)/1000\n| eval growth_optimal=if(vpd>=0.8 AND vpd<=1.5 AND temp>=20 AND temp<=28, \"YES\", \"NO\")\n| stats count(eval(growth_optimal=\"YES\")) as optimal_hours, count as total_hours by host\n| eval growth_score=(optimal_hours/total_hours)*100",
              "m": "Deploy Edge Hub with external humidity/temp probe via I²C (3.5mm jack) placed in plant canopy zone. Optional light sensor integration measures lux for photosynthesis optimization. Build custom ARM64 container to interface with greenhouse HVAC controller via Modbus TCP (port 502) for automated adjustment. Store 3M data points locally for real-time analytics without cloud latency.",
              "z": "VPD gauge, growth score trend, hourly optimization heatmap.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (humidity + temperature + optional light sensor), custom edge.json container.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=humidity OR metric_name=temperature OR metric_name=light, `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub with external humidity/temp probe via I²C (3.5mm jack) placed in plant canopy zone. Optional light sensor integration measures lux for photosynthesis optimization. Build custom ARM64 container to interface with greenhouse HVAC controller via Modbus TCP (port 502) for automated adjustment. Store 3M data points locally for real-time analytics without cloud latency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(temperature) as temp, avg(humidity) as rh, max(light_level) as lux by host, _time span=1h\n| eval sat_pressure=610.5*exp((17.27*temp)/(temp+237.7))\n| eval vpd=(sat_pressure*(100-rh)/100)/1000\n| eval growth_optimal=if(vpd>=0.8 AND vpd<=1.5 AND temp>=20 AND temp<=28, \"YES\", \"NO\")\n| stats count(eval(growth_optimal=\"YES\")) as optimal_hours, count as total_hours by host\n| eval growth_score=(optimal_hours/total_hours)*100\n```\n\nUnderstanding this SPL\n\n**Greenhouse Humidity & Growth Optimization** — Optimizes plant growth rates by maintaining ideal VPD (vapor pressure deficit) and reducing fungal disease risk.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=humidity OR metric_name=temperature OR metric_name=light, `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (humidity + temperature + optional light sensor), custom edge.json container. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **sat_pressure** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **vpd** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **growth_optimal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **growth_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: VPD gauge, growth score trend, hourly optimization heatmap.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with sticky air, dry air, and condensation risk before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.12",
              "n": "Security Camera Motion Detection with Light Level Correlation",
              "c": "high",
              "f": "expert",
              "v": "Reduces false motion alerts by correlating camera motion events with ambient light levels and eliminating day/night false positives.",
              "t": "Splunk Edge Hub (light sensor + USB camera container with NPU), v2.1+",
              "d": "`index=edge-hub-data` metric_name=light, `index=edge-hub-logs` sourcetype=splunk_edge_hub_log, camera motion event",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log motion_detected=true\n| join max=1 host [| mstats avg(light_level) as lux by host, _time span=5m | where lux < 10]\n| stats count as false_positives by host\n| eval false_positive_rate=(false_positives / (false_positives + true_detections)) * 100",
              "m": "Deploy Edge Hub with USB camera attached (requires USB device passthrough v2.1+). Build custom ARM64 container with OpenCV + NPU inference for motion detection. Filter detections with built-in ambient light sensor: suppress alerts when lux < 10 (night) or > 50000 (direct sun glare). Configure edge.json manifest with resource limits (memory: 256MB, CPU: 1 core) to avoid impacting sensor polling.",
              "z": "Motion vs light correlation scatter, false positive trend, alert effectiveness dashboard.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (light sensor + USB camera container with NPU), v2.1+.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=light, `index=edge-hub-logs` sourcetype=splunk_edge_hub_log, camera motion event.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub with USB camera attached (requires USB device passthrough v2.1+). Build custom ARM64 container with OpenCV + NPU inference for motion detection. Filter detections with built-in ambient light sensor: suppress alerts when lux < 10 (night) or > 50000 (direct sun glare). Configure edge.json manifest with resource limits (memory: 256MB, CPU: 1 core) to avoid impacting sensor polling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log motion_detected=true\n| join max=1 host [| mstats avg(light_level) as lux by host, _time span=5m | where lux < 10]\n| stats count as false_positives by host\n| eval false_positive_rate=(false_positives / (false_positives + true_detections)) * 100\n```\n\nUnderstanding this SPL\n\n**Security Camera Motion Detection with Light Level Correlation** — Reduces false motion alerts by correlating camera motion events with ambient light levels and eliminating day/night false positives.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=light, `index=edge-hub-logs` sourcetype=splunk_edge_hub_log, camera motion event. **App/TA** (typical add-on context): Splunk Edge Hub (light sensor + USB camera container with NPU), v2.1+. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **false_positive_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Motion vs light correlation scatter, false positive trend, alert effectiveness dashboard.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with light levels and sudden darkness when you would expect lights on before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.13",
              "n": "Energy Management & HVAC Occupancy-Based Control",
              "c": "medium",
              "f": "expert",
              "v": "Reduces HVAC energy consumption 15-30% by correlating occupancy detection with temperature setpoints.",
              "t": "Splunk Edge Hub (light + USB camera + custom container), Modbus TCP actuator control",
              "d": "`index=edge-hub-data` metric_name=light, `index=edge-hub-logs` camera_occupancy_count, `index=edge-hub-logs` sourcetype=edge_hub modbus_register",
              "q": "| mstats avg(light_level) as lux by host, _time span=15m\n| join max=1 host [index=edge-hub-logs camera_occupancy_count > 0 | bin _time span=15m | stats count as people_detected by host, _time]\n| eval hvac_mode=case(people_detected > 0 AND lux < 500, \"COMFORT\", people_detected = 0 AND lux > 500, \"ECO\", 1=1, \"TRANSITION\")\n| stats count by hvac_mode, host",
              "m": "Deploy custom ARM64 container with TensorFlow Lite occupancy counting model (CNN) running on NPU. Integrate Modbus TCP gateway to read/write HVAC controller setpoint registers (port 502). Use light sensor as secondary occupancy indicator. Configure container resource limits to ensure 30-second sensor polling remains unaffected. Implement local alerting logic in container to adjust setpoint without cloud round-trip latency.",
              "z": "Occupancy vs light scatter, energy savings trend, HVAC mode timeline.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (light + USB camera + custom container), Modbus TCP actuator control.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=light, `index=edge-hub-logs` camera_occupancy_count, `index=edge-hub-logs` sourcetype=edge_hub modbus_register.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy custom ARM64 container with TensorFlow Lite occupancy counting model (CNN) running on NPU. Integrate Modbus TCP gateway to read/write HVAC controller setpoint registers (port 502). Use light sensor as secondary occupancy indicator. Configure container resource limits to ensure 30-second sensor polling remains unaffected. Implement local alerting logic in container to adjust setpoint without cloud round-trip latency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(light_level) as lux by host, _time span=15m\n| join max=1 host [index=edge-hub-logs camera_occupancy_count > 0 | bin _time span=15m | stats count as people_detected by host, _time]\n| eval hvac_mode=case(people_detected > 0 AND lux < 500, \"COMFORT\", people_detected = 0 AND lux > 500, \"ECO\", 1=1, \"TRANSITION\")\n| stats count by hvac_mode, host\n```\n\nUnderstanding this SPL\n\n**Energy Management & HVAC Occupancy-Based Control** — Reduces HVAC energy consumption 15-30% by correlating occupancy detection with temperature setpoints.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=light, `index=edge-hub-logs` camera_occupancy_count, `index=edge-hub-logs` sourcetype=edge_hub modbus_register. **App/TA** (typical add-on context): Splunk Edge Hub (light + USB camera + custom container), Modbus TCP actuator control. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **hvac_mode** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by hvac_mode, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Occupancy vs light scatter, energy savings trend, HVAC mode timeline.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with when a space looks empty or busy in an odd way before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.14",
              "n": "Warehouse Inventory Light-Based Shelf Monitoring",
              "c": "medium",
              "f": "expert",
              "v": "Detects empty or partially depleted shelves in real-time by monitoring light pattern changes in high-bay storage.",
              "t": "Splunk Edge Hub (light sensor array), custom container for pattern recognition",
              "d": "`index=edge-hub-data` metric_name=light, `sourcetype=edge_hub`",
              "q": "| mstats avg(light_level) as lux by host, rack_id, shelf_position, _time span=5m\n| delta lux as lux_change\n| eval significant_change=if(abs(lux_change) > 20, \"YES\", \"NO\")\n| stats count(eval(significant_change=\"YES\")) as change_events, avg(lux) as avg_lux by host, rack_id, shelf_position\n| where change_events > 5\n| eval inventory_status=case(avg_lux > 1000, \"EMPTY\", avg_lux > 500, \"LOW\", 1=1, \"STOCKED\")",
              "m": "Mount Edge Hub light sensor facing shelving unit. Deploy custom Python container that learns baseline light patterns for each shelf over 1-week baseline period. Use machine learning to detect sustained light increases (empty shelf) vs brief shadows (restocking activity). Reference Advanced Settings → Containers tab to set container polling interval to 5 minutes. Store 3M light data points locally for historical baseline calculation.",
              "z": "Shelf occupancy heatmap, light level trend by shelf, inventory status dashboard.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (light sensor array), custom container for pattern recognition.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=light, `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMount Edge Hub light sensor facing shelving unit. Deploy custom Python container that learns baseline light patterns for each shelf over 1-week baseline period. Use machine learning to detect sustained light increases (empty shelf) vs brief shadows (restocking activity). Reference Advanced Settings → Containers tab to set container polling interval to 5 minutes. Store 3M light data points locally for historical baseline calculation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(light_level) as lux by host, rack_id, shelf_position, _time span=5m\n| delta lux as lux_change\n| eval significant_change=if(abs(lux_change) > 20, \"YES\", \"NO\")\n| stats count(eval(significant_change=\"YES\")) as change_events, avg(lux) as avg_lux by host, rack_id, shelf_position\n| where change_events > 5\n| eval inventory_status=case(avg_lux > 1000, \"EMPTY\", avg_lux > 500, \"LOW\", 1=1, \"STOCKED\")\n```\n\nUnderstanding this SPL\n\n**Warehouse Inventory Light-Based Shelf Monitoring** — Detects empty or partially depleted shelves in real-time by monitoring light pattern changes in high-bay storage.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=light, `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (light sensor array), custom container for pattern recognition. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Pipeline stage (see **Warehouse Inventory Light-Based Shelf Monitoring**): delta lux as lux_change\n• `eval` defines or adjusts **significant_change** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, rack_id, shelf_position** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where change_events > 5` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **inventory_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Shelf occupancy heatmap, light level trend by shelf, inventory status dashboard.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: warehouse inventory light-based shelf before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.15",
              "n": "Structural Health Monitoring via Vibration Baseline Drift",
              "c": "high",
              "f": "expert",
              "v": "Detects early-stage structural degradation (loose bolts, bearing wear) before catastrophic failure by monitoring vibration signature drift.",
              "t": "Splunk Edge Hub (3-axis accelerometer + 6-axis gyroscope), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` metric_name=acceleration_x OR acceleration_y OR acceleration_z, `sourcetype=edge_hub`",
              "q": "| mstats avg(acceleration_x) as ax, avg(acceleration_y) as ay, avg(acceleration_z) as az by host, _time span=10m\n| eval vibration_magnitude=sqrt((ax^2 + ay^2 + az^2))\n| eval baseline=avg(vibration_magnitude)\n| relative_entropy baseline, vibration_magnitude\n| where vibration_magnitude > (baseline * 1.5)\n| stats count as anomalies, max(vibration_magnitude) as peak_mag by host",
              "m": "Mount Edge Hub on bridge structure, machinery frame, or building floor with accelerometer facing primary load direction. Collect 7-day baseline using kNN built-in anomaly detection (one model per sensor). Enable MLTK Smart Outlier Detection v4.8.0+ for drift tracking over months. Store 3M data points locally for baseline comparison. Note: MQTT sensors only support MLTK; if using built-in accelerometer, use built-in kNN algorithm.",
              "z": "Vibration magnitude trend, baseline drift scatter, anomaly frequency timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (3-axis accelerometer + 6-axis gyroscope), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=acceleration_x OR acceleration_y OR acceleration_z, `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMount Edge Hub on bridge structure, machinery frame, or building floor with accelerometer facing primary load direction. Collect 7-day baseline using kNN built-in anomaly detection (one model per sensor). Enable MLTK Smart Outlier Detection v4.8.0+ for drift tracking over months. Store 3M data points locally for baseline comparison. Note: MQTT sensors only support MLTK; if using built-in accelerometer, use built-in kNN algorithm.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(acceleration_x) as ax, avg(acceleration_y) as ay, avg(acceleration_z) as az by host, _time span=10m\n| eval vibration_magnitude=sqrt((ax^2 + ay^2 + az^2))\n| eval baseline=avg(vibration_magnitude)\n| relative_entropy baseline, vibration_magnitude\n| where vibration_magnitude > (baseline * 1.5)\n| stats count as anomalies, max(vibration_magnitude) as peak_mag by host\n```\n\nUnderstanding this SPL\n\n**Structural Health Monitoring via Vibration Baseline Drift** — Detects early-stage structural degradation (loose bolts, bearing wear) before catastrophic failure by monitoring vibration signature drift.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=acceleration_x OR acceleration_y OR acceleration_z, `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (3-axis accelerometer + 6-axis gyroscope), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **vibration_magnitude** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **baseline** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Structural Health Monitoring via Vibration Baseline Drift**): relative_entropy baseline, vibration_magnitude\n• Filters the current rows with `where vibration_magnitude > (baseline * 1.5)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Vibration magnitude trend, baseline drift scatter, anomaly frequency timeline.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with shaking that might mean a bearing, balance, or mount problem before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.16",
              "n": "Door Open/Close Detection via Accelerometer Tilt",
              "c": "medium",
              "f": "advanced",
              "v": "Monitors facility access by detecting door swing events without motion sensors or contact switches.",
              "t": "Splunk Edge Hub (3-axis accelerometer with gravity component), custom edge.json container",
              "d": "`index=edge-hub-data` metric_name=acceleration_x OR acceleration_y OR acceleration_z, `sourcetype=edge_hub`",
              "q": "| mstats avg(acceleration_z) as az, avg(acceleration_y) as ay by host, _time span=100ms\n| eval tilt_angle=atan2(ay, az) * (180 / pi())\n| delta tilt_angle as tilt_change\n| eval door_event=if(abs(tilt_change) > 15 AND (tilt_change > 0 OR tilt_change < 0), \"SWING\", \"STATIC\")\n| stats count as swings by host, door_id\n| where swings > 0",
              "m": "Mount Edge Hub vertically on doorframe with accelerometer Z-axis aligned to gravity. Configure 100ms sampling interval (Advanced Settings → Sensor Polling) to capture door swing signatures (typically 0.5-2 second transit). Build custom ARM64 container that implements state machine for distinguishing between single swing (door passing) vs sustained tilt (propped open). Store local SQLite events for 24+ hours via 100K event backlog.",
              "z": "Door swing timeline, access frequency histogram, anomalous access alert.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (3-axis accelerometer with gravity component), custom edge.json container.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=acceleration_x OR acceleration_y OR acceleration_z, `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMount Edge Hub vertically on doorframe with accelerometer Z-axis aligned to gravity. Configure 100ms sampling interval (Advanced Settings → Sensor Polling) to capture door swing signatures (typically 0.5-2 second transit). Build custom ARM64 container that implements state machine for distinguishing between single swing (door passing) vs sustained tilt (propped open). Store local SQLite events for 24+ hours via 100K event backlog.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(acceleration_z) as az, avg(acceleration_y) as ay by host, _time span=100ms\n| eval tilt_angle=atan2(ay, az) * (180 / pi())\n| delta tilt_angle as tilt_change\n| eval door_event=if(abs(tilt_change) > 15 AND (tilt_change > 0 OR tilt_change < 0), \"SWING\", \"STATIC\")\n| stats count as swings by host, door_id\n| where swings > 0\n```\n\nUnderstanding this SPL\n\n**Door Open/Close Detection via Accelerometer Tilt** — Monitors facility access by detecting door swing events without motion sensors or contact switches.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=acceleration_x OR acceleration_y OR acceleration_z, `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (3-axis accelerometer with gravity component), custom edge.json container. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **tilt_angle** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Door Open/Close Detection via Accelerometer Tilt**): delta tilt_angle as tilt_change\n• `eval` defines or adjusts **door_event** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, door_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where swings > 0` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Door swing timeline, access frequency histogram, anomalous access alert.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with shaking that might mean a bearing, balance, or mount problem before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.17",
              "n": "Equipment Alignment & Vibration Analysis via Gyroscope",
              "c": "medium",
              "f": "advanced",
              "v": "Monitors rotational alignment of rotating equipment to predict misalignment-induced failures.",
              "t": "Splunk Edge Hub (6-axis gyroscope), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` metric_name=gyro_x OR gyro_y OR gyro_z, `sourcetype=edge_hub`",
              "q": "| mstats avg(gyro_x) as gx, avg(gyro_y) as gy, avg(gyro_z) as gz by host, equipment_id, _time span=1m\n| eval rotation_magnitude=sqrt((gx^2 + gy^2 + gz^2))\n| eval z_axis_dominant=if(abs(gz) > abs(gx) AND abs(gz) > abs(gy), \"YES\", \"NO\")\n| stats avg(rotation_magnitude) as avg_rot, stdev(rotation_magnitude) as std_rot by equipment_id\n| where (avg_rot > 50) AND (std_rot > 10)",
              "m": "Mount Edge Hub at equipment bearing or motor coupling with gyroscope Z-axis aligned to equipment rotation axis. Collect 30-day baseline for expected rotation rate and variation. Use built-in kNN anomaly detection to flag unexpected rotational patterns (e.g., gyroscopic precession from misalignment). For precision industrial environments, integrate with OPC-UA PLC (port 4840) to read encoder data for ground-truth validation. Local 3M backlog ensures all rotation events are captured.",
              "z": "Rotation rate trend, z-axis dominance heatmap, misalignment risk gauge.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (6-axis gyroscope), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=gyro_x OR gyro_y OR gyro_z, `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMount Edge Hub at equipment bearing or motor coupling with gyroscope Z-axis aligned to equipment rotation axis. Collect 30-day baseline for expected rotation rate and variation. Use built-in kNN anomaly detection to flag unexpected rotational patterns (e.g., gyroscopic precession from misalignment). For precision industrial environments, integrate with OPC-UA PLC (port 4840) to read encoder data for ground-truth validation. Local 3M backlog ensures all rotation events are captured.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(gyro_x) as gx, avg(gyro_y) as gy, avg(gyro_z) as gz by host, equipment_id, _time span=1m\n| eval rotation_magnitude=sqrt((gx^2 + gy^2 + gz^2))\n| eval z_axis_dominant=if(abs(gz) > abs(gx) AND abs(gz) > abs(gy), \"YES\", \"NO\")\n| stats avg(rotation_magnitude) as avg_rot, stdev(rotation_magnitude) as std_rot by equipment_id\n| where (avg_rot > 50) AND (std_rot > 10)\n```\n\nUnderstanding this SPL\n\n**Equipment Alignment & Vibration Analysis via Gyroscope** — Monitors rotational alignment of rotating equipment to predict misalignment-induced failures.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=gyro_x OR gyro_y OR gyro_z, `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (6-axis gyroscope), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **rotation_magnitude** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **z_axis_dominant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by equipment_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (avg_rot > 50) AND (std_rot > 10)` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Rotation rate trend, z-axis dominance heatmap, misalignment risk gauge.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with shaking that might mean a bearing, balance, or mount problem before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.18",
              "n": "Sound Frequency Analysis for Equipment Signatures",
              "c": "medium",
              "f": "advanced",
              "v": "Identifies equipment degradation by detecting shifts in characteristic sound frequencies (bearing wear, compressor blade damage).",
              "t": "Splunk Edge Hub (stereo microphone + custom NPU container), v2.1+",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_log audio_frequency_analysis, `index=edge-hub-data` metric_name=sound_level",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log audio_signature_extracted=true\n| bin _time span=5m\n| stats avg(peak_frequency_hz) as avg_peak, stdev(peak_frequency_hz) as freq_std,\n        max(frequency_band_2k_4k_db) as mid_high_power by equipment_id, _time\n| eval freq_shift=abs(avg_peak - 3000)\n| where freq_shift > 500\n| eval signature_change=\"DEGRADATION_RISK\"",
              "m": "Position Edge Hub stereo microphone 0.5-2m from equipment (not in direct high-velocity air). Build custom ARM64 container using FFT (Fast Fourier Transform) library to extract peak frequencies and power spectral density. Deploy on NPU (v2.1+) for real-time FFT computation without cloud round-trip. Reference frequency baseline from first 7 days of operation. Store sound level metric data locally for pattern matching without streaming audio to cloud (privacy + bandwidth).",
              "z": "Frequency spectrum waterfall, peak frequency trend, degradation risk timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (stereo microphone + custom NPU container), v2.1+.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log audio_frequency_analysis, `index=edge-hub-data` metric_name=sound_level.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPosition Edge Hub stereo microphone 0.5-2m from equipment (not in direct high-velocity air). Build custom ARM64 container using FFT (Fast Fourier Transform) library to extract peak frequencies and power spectral density. Deploy on NPU (v2.1+) for real-time FFT computation without cloud round-trip. Reference frequency baseline from first 7 days of operation. Store sound level metric data locally for pattern matching without streaming audio to cloud (privacy + bandwidth).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log audio_signature_extracted=true\n| bin _time span=5m\n| stats avg(peak_frequency_hz) as avg_peak, stdev(peak_frequency_hz) as freq_std,\n        max(frequency_band_2k_4k_db) as mid_high_power by equipment_id, _time\n| eval freq_shift=abs(avg_peak - 3000)\n| where freq_shift > 500\n| eval signature_change=\"DEGRADATION_RISK\"\n```\n\nUnderstanding this SPL\n\n**Sound Frequency Analysis for Equipment Signatures** — Identifies equipment degradation by detecting shifts in characteristic sound frequencies (bearing wear, compressor blade damage).\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log audio_frequency_analysis, `index=edge-hub-data` metric_name=sound_level. **App/TA** (typical add-on context): Splunk Edge Hub (stereo microphone + custom NPU container), v2.1+. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by equipment_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **freq_shift** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where freq_shift > 500` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **signature_change** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Frequency spectrum waterfall, peak frequency trend, degradation risk timeline.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with loud or unsafe noise in the work area before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.19",
              "n": "Multi-Sensor Environmental Baseline & Drift Detection",
              "c": "medium",
              "f": "expert",
              "v": "Detects sensor failures, calibration drift, or environmental changes by correlating expected relationships between temperature, humidity, pressure, and light.",
              "t": "Splunk Edge Hub (multi-sensor fusion), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` metric_name=temperature OR metric_name=humidity OR metric_name=pressure OR metric_name=light, `sourcetype=edge_hub`",
              "q": "| mstats avg(temperature) as temp, avg(humidity) as rh, avg(pressure) as press, avg(light_level) as lux by host, _time span=1h\n| stats avg(temp) as avg_temp, stdev(temp) as std_temp,\n        avg(rh) as avg_rh, stdev(rh) as std_rh,\n        avg(press) as avg_press, stdev(press) as std_press,\n        avg(lux) as avg_lux, stdev(lux) as std_lux by host\n| eval temp_anomaly=if(std_temp > 5, \"DRIFT\", \"NORMAL\"),\n        rh_anomaly=if(std_rh > 15, \"DRIFT\", \"NORMAL\"),\n        press_anomaly=if(std_press > 10, \"DRIFT\", \"NORMAL\"),\n        lux_anomaly=if(std_lux > 5000, \"DRIFT\", \"NORMAL\")\n| where temp_anomaly=\"DRIFT\" OR rh_anomaly=\"DRIFT\" OR press_anomaly=\"DRIFT\" OR lux_anomaly=\"DRIFT\"",
              "m": "Enable all available sensors on Edge Hub (temperature, humidity, optional pressure, optional light). Configure 1-hour aggregation interval (Advanced Settings → Sensor Polling). Establish 30-day baseline for expected correlation between sensors (e.g., temp and humidity should not fluctuate independently in sealed rooms). Use MLTK Smart Outlier Detection to detect when sensor relationships break down (indicator of sensor failure or environmental change). Store baseline profiles in edge-hub-data index for historical comparison.",
              "z": "Multi-sensor correlation matrix, drift detection alerts, baseline comparison chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (multi-sensor fusion), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=temperature OR metric_name=humidity OR metric_name=pressure OR metric_name=light, `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable all available sensors on Edge Hub (temperature, humidity, optional pressure, optional light). Configure 1-hour aggregation interval (Advanced Settings → Sensor Polling). Establish 30-day baseline for expected correlation between sensors (e.g., temp and humidity should not fluctuate independently in sealed rooms). Use MLTK Smart Outlier Detection to detect when sensor relationships break down (indicator of sensor failure or environmental change). Store baseline profiles in edge-hub-data in…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(temperature) as temp, avg(humidity) as rh, avg(pressure) as press, avg(light_level) as lux by host, _time span=1h\n| stats avg(temp) as avg_temp, stdev(temp) as std_temp,\n        avg(rh) as avg_rh, stdev(rh) as std_rh,\n        avg(press) as avg_press, stdev(press) as std_press,\n        avg(lux) as avg_lux, stdev(lux) as std_lux by host\n| eval temp_anomaly=if(std_temp > 5, \"DRIFT\", \"NORMAL\"),\n        rh_anomaly=if(std_rh > 15, \"DRIFT\", \"NORMAL\"),\n        press_anomaly=if(std_press > 10, \"DRIFT\", \"NORMAL\"),\n        lux_anomaly=if(std_lux > 5000, \"DRIFT\", \"NORMAL\")\n| where temp_anomaly=\"DRIFT\" OR rh_anomaly=\"DRIFT\" OR press_anomaly=\"DRIFT\" OR lux_anomaly=\"DRIFT\"\n```\n\nUnderstanding this SPL\n\n**Multi-Sensor Environmental Baseline & Drift Detection** — Detects sensor failures, calibration drift, or environmental changes by correlating expected relationships between temperature, humidity, pressure, and light.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=temperature OR metric_name=humidity OR metric_name=pressure OR metric_name=light, `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (multi-sensor fusion), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **temp_anomaly** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where temp_anomaly=\"DRIFT\" OR rh_anomaly=\"DRIFT\" OR press_anomaly=\"DRIFT\" OR lux_anomaly=\"DRIFT\"` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Multi-sensor correlation matrix, drift detection alerts, baseline comparison chart.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: multi-sensor environmental baseline & drift before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.20",
              "n": "Pressure Monitoring for Cleanroom Compliance",
              "c": "critical",
              "f": "intermediate",
              "v": "Ensures pharmaceutical and semiconductor cleanroom integrity by verifying positive pressure differentials between zones.",
              "t": "Splunk Edge Hub (optional pressure sensor ±0.12 hPa), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` metric_name=pressure, `sourcetype=edge_hub`",
              "q": "index=edge-hub-data metric_name=pressure\n| bin _time span=5m\n| stats avg(pressure) as avg_press by room, zone, _time\n| eval zone_pair=room + \"_\" + zone\n| eventstats avg(avg_press) as zone_avg by zone_pair\n| eval pressure_diff=avg_press - zone_avg\n| where pressure_diff < 0.5\n| eval compliance=\"FAIL\"",
              "m": "Deploy Edge Hub with optional pressure sensor in each cleanroom zone. Configure 5-minute polling interval (Advanced Settings → Sensor Polling) for real-time compliance monitoring. Cleanrooms require 0.5-2.0 hPa positive pressure differential from adjacent areas. Set threshold alerts at 0.5 hPa minimum. Enable continuous local logging (edge-hub-logs index) for regulatory audit trail. Pressure sensor range 300-1100 hPa covers sea-level and altitude variations.",
              "z": "Pressure differential gauge, zone comparison heatmap, compliance timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (optional pressure sensor ±0.12 hPa), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=pressure, `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub with optional pressure sensor in each cleanroom zone. Configure 5-minute polling interval (Advanced Settings → Sensor Polling) for real-time compliance monitoring. Cleanrooms require 0.5-2.0 hPa positive pressure differential from adjacent areas. Set threshold alerts at 0.5 hPa minimum. Enable continuous local logging (edge-hub-logs index) for regulatory audit trail. Pressure sensor range 300-1100 hPa covers sea-level and altitude variations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-data metric_name=pressure\n| bin _time span=5m\n| stats avg(pressure) as avg_press by room, zone, _time\n| eval zone_pair=room + \"_\" + zone\n| eventstats avg(avg_press) as zone_avg by zone_pair\n| eval pressure_diff=avg_press - zone_avg\n| where pressure_diff < 0.5\n| eval compliance=\"FAIL\"\n```\n\nUnderstanding this SPL\n\n**Pressure Monitoring for Cleanroom Compliance** — Ensures pharmaceutical and semiconductor cleanroom integrity by verifying positive pressure differentials between zones.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=pressure, `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (optional pressure sensor ±0.12 hPa), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-data.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-data. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by room, zone, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **zone_pair** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by zone_pair** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pressure_diff** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pressure_diff < 0.5` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **compliance** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pressure differential gauge, zone comparison heatmap, compliance timeline.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: pressure monitoring for cleanroom compliance before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.21",
              "n": "HVAC Duct Pressure & Velocity Monitoring",
              "c": "medium",
              "f": "advanced",
              "v": "Monitors HVAC filter clogging and airflow efficiency by tracking duct static pressure trends.",
              "t": "Splunk Edge Hub (optional pressure sensor), Modbus TCP integration",
              "d": "`index=edge-hub-data` metric_name=pressure, `index=edge-hub-logs` sourcetype=edge_hub modbus_register",
              "q": "| mstats avg(pressure) as static_press by duct_zone, _time span=10m\n| delta static_press as press_delta\n| eval filter_condition=case(static_press > 2.5, \"CLOGGED\", static_press > 1.5, \"RESTRICTED\", 1=1, \"NORMAL\")\n| stats avg(filter_condition) as predominant_condition, avg(static_press) as avg_press by duct_zone\n| where predominant_condition!=\"NORMAL\"",
              "m": "Install Edge Hub pressure sensor in return air duct upstream of main filter. Configure 10-minute sampling. Correlate with Modbus TCP fan speed register reads (port 502) from HVAC controller: increasing pressure + constant fan speed = clogged filter. Typical clogged filter threshold: > 2.5 in H2O (84.7 hPa). Store local SQLite data for 7-day history to track pressure rise rate (rate of clogging). Integrate with OPC-UA SCADA (port 4840) for automated filter change alerts.",
              "z": "Duct pressure trend, filter condition gauge, maintenance alert timeline.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (optional pressure sensor), Modbus TCP integration.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=pressure, `index=edge-hub-logs` sourcetype=edge_hub modbus_register.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall Edge Hub pressure sensor in return air duct upstream of main filter. Configure 10-minute sampling. Correlate with Modbus TCP fan speed register reads (port 502) from HVAC controller: increasing pressure + constant fan speed = clogged filter. Typical clogged filter threshold: > 2.5 in H2O (84.7 hPa). Store local SQLite data for 7-day history to track pressure rise rate (rate of clogging). Integrate with OPC-UA SCADA (port 4840) for automated filter change alerts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(pressure) as static_press by duct_zone, _time span=10m\n| delta static_press as press_delta\n| eval filter_condition=case(static_press > 2.5, \"CLOGGED\", static_press > 1.5, \"RESTRICTED\", 1=1, \"NORMAL\")\n| stats avg(filter_condition) as predominant_condition, avg(static_press) as avg_press by duct_zone\n| where predominant_condition!=\"NORMAL\"\n```\n\nUnderstanding this SPL\n\n**HVAC Duct Pressure & Velocity Monitoring** — Monitors HVAC filter clogging and airflow efficiency by tracking duct static pressure trends.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=pressure, `index=edge-hub-logs` sourcetype=edge_hub modbus_register. **App/TA** (typical add-on context): Splunk Edge Hub (optional pressure sensor), Modbus TCP integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Pipeline stage (see **HVAC Duct Pressure & Velocity Monitoring**): delta static_press as press_delta\n• `eval` defines or adjusts **filter_condition** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by duct_zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where predominant_condition!=\"NORMAL\"` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Duct pressure trend, filter condition gauge, maintenance alert timeline.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: hvac duct pressure & velocity before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.22",
              "n": "Weather Station Data Integration & Altitude Compensation",
              "c": "medium",
              "f": "advanced",
              "v": "Provides pressure-altitude data for facility environmental baselines and corrects sensor readings for elevation changes.",
              "t": "Splunk Edge Hub (optional pressure sensor), MQTT integration",
              "d": "`index=edge-hub-data` metric_name=pressure, `index=edge-hub-logs` sourcetype=splunk_edge_hub_log external_weather_device",
              "q": "| mstats avg(pressure) as edge_press by host, _time span=1h\n| join max=1 host [| mstats avg(external_pressure) as ext_press by host, _time span=1h]\n| eval altitude_diff = (44330 * (1.0 - ((edge_press / ext_press)^(1/5.255))))\n| where altitude_diff != 0\n| eval altitude_compensated_reading = edge_press - (altitude_diff * 0.0001198)",
              "m": "Deploy Edge Hub with optional pressure sensor at facility location. Subscribe to external MQTT weather station (Advanced Settings → MQTT Subscriptions) publishing atmospheric pressure. Use barometric formula to compute altitude or detect pressure sensor drift. Store readings in edge-hub-data metric index. Pressure range 300-1100 hPa covers sea-level to 3,000m elevation. Use local SQLite backlog for real-time compensation without cloud latency.",
              "z": "Altitude vs time, pressure correction factor trend, weather correlation chart.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (optional pressure sensor), MQTT integration.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=pressure, `index=edge-hub-logs` sourcetype=splunk_edge_hub_log external_weather_device.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub with optional pressure sensor at facility location. Subscribe to external MQTT weather station (Advanced Settings → MQTT Subscriptions) publishing atmospheric pressure. Use barometric formula to compute altitude or detect pressure sensor drift. Store readings in edge-hub-data metric index. Pressure range 300-1100 hPa covers sea-level to 3,000m elevation. Use local SQLite backlog for real-time compensation without cloud latency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(pressure) as edge_press by host, _time span=1h\n| join max=1 host [| mstats avg(external_pressure) as ext_press by host, _time span=1h]\n| eval altitude_diff = (44330 * (1.0 - ((edge_press / ext_press)^(1/5.255))))\n| where altitude_diff != 0\n| eval altitude_compensated_reading = edge_press - (altitude_diff * 0.0001198)\n```\n\nUnderstanding this SPL\n\n**Weather Station Data Integration & Altitude Compensation** — Provides pressure-altitude data for facility environmental baselines and corrects sensor readings for elevation changes.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=pressure, `index=edge-hub-logs` sourcetype=splunk_edge_hub_log external_weather_device. **App/TA** (typical add-on context): Splunk Edge Hub (optional pressure sensor), MQTT integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **altitude_diff** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where altitude_diff != 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **altitude_compensated_reading** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Altitude vs time, pressure correction factor trend, weather correlation chart.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with the situation around: weather station data integration & altitude compensation before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.23",
              "n": "Custom Python Container for Data Transformation & Enrichment",
              "c": "medium",
              "f": "advanced",
              "v": "Pre-processes edge sensor data locally before forwarding to Splunk, reducing bandwidth and enabling offline analytics.",
              "t": "Splunk Edge Hub (custom container), gRPC SDK",
              "d": "`index=edge-hub-data` all metrics post-transformation, `index=edge-hub-logs` container_event_log",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log container_name=transform_enrichment\n| stats count as successful_transforms, count(eval(error_code!=0)) as failed_transforms by host\n| eval transform_success_rate = (successful_transforms / (successful_transforms + failed_transforms)) * 100",
              "m": "Build custom ARM64 Python container (requires Dockerfile with Python 3.9+ and gRPC client library) to read sensor data via Edge Hub gRPC API. Implement custom enrichment logic (e.g., add facility ID, shift code, operator ID). Redact PII or sensitive fields before forwarding to cloud. Configure edge.json manifest with resource limits (memory: 512MB, CPU: 2 cores). Container runs as non-root (v2.0+). Deploy via Advanced Settings → Containers tab. Local SQLite backlog absorbs data if container crashes.",
              "z": "Transform success rate trend, processing latency histogram, error frequency chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (custom container), gRPC SDK.\n• Ensure the following data sources are available: `index=edge-hub-data` all metrics post-transformation, `index=edge-hub-logs` container_event_log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild custom ARM64 Python container (requires Dockerfile with Python 3.9+ and gRPC client library) to read sensor data via Edge Hub gRPC API. Implement custom enrichment logic (e.g., add facility ID, shift code, operator ID). Redact PII or sensitive fields before forwarding to cloud. Configure edge.json manifest with resource limits (memory: 512MB, CPU: 2 cores). Container runs as non-root (v2.0+). Deploy via Advanced Settings → Containers tab. Local SQLite backlog absorbs data if container cras…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log container_name=transform_enrichment\n| stats count as successful_transforms, count(eval(error_code!=0)) as failed_transforms by host\n| eval transform_success_rate = (successful_transforms / (successful_transforms + failed_transforms)) * 100\n```\n\nUnderstanding this SPL\n\n**Custom Python Container for Data Transformation & Enrichment** — Pre-processes edge sensor data locally before forwarding to Splunk, reducing bandwidth and enabling offline analytics.\n\nDocumented **Data sources**: `index=edge-hub-data` all metrics post-transformation, `index=edge-hub-logs` container_event_log. **App/TA** (typical add-on context): Splunk Edge Hub (custom container), gRPC SDK. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **transform_success_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Transform success rate trend, processing latency histogram, error frequency chart.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with the situation around: custom python container for data transformation & enrichment before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.24",
              "n": "BACnet-to-MQTT Protocol Gateway Container",
              "c": "high",
              "f": "advanced",
              "v": "Bridges legacy BACnet-based building control systems with modern MQTT/Splunk pipeline without expensive protocol gateway hardware.",
              "t": "Splunk Edge Hub (custom container), MQTT broker",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_log bacnet_translation_event, MQTT subscribed topics",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log bacnet_translation_event\n| stats count as bacnet_objects_polled, count(eval(translation_status=\"SUCCESS\")) as successful by host\n| eval gateway_health = (successful / bacnet_objects_polled) * 100\n| where gateway_health < 95",
              "m": "Build custom ARM64 container using python-bacnet or BACnet4J library. Container reads BACnet object properties from legacy controllers (IP broadcast network) and translates to MQTT messages (publishes to Edge Hub MQTT broker on port 1883). Configure container resource limits (memory: 256MB, CPU: 1 core) in edge.json manifest. Enable USB device passthrough (v2.1+) if BACnet gateway requires serial/USB interface. Store translation event logs locally for audit trail.",
              "z": "BACnet object discovery count, translation success rate, latency histogram.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (custom container), MQTT broker.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log bacnet_translation_event, MQTT subscribed topics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild custom ARM64 container using python-bacnet or BACnet4J library. Container reads BACnet object properties from legacy controllers (IP broadcast network) and translates to MQTT messages (publishes to Edge Hub MQTT broker on port 1883). Configure container resource limits (memory: 256MB, CPU: 1 core) in edge.json manifest. Enable USB device passthrough (v2.1+) if BACnet gateway requires serial/USB interface. Store translation event logs locally for audit trail.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log bacnet_translation_event\n| stats count as bacnet_objects_polled, count(eval(translation_status=\"SUCCESS\")) as successful by host\n| eval gateway_health = (successful / bacnet_objects_polled) * 100\n| where gateway_health < 95\n```\n\nUnderstanding this SPL\n\n**BACnet-to-MQTT Protocol Gateway Container** — Bridges legacy BACnet-based building control systems with modern MQTT/Splunk pipeline without expensive protocol gateway hardware.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log bacnet_translation_event, MQTT subscribed topics. **App/TA** (typical add-on context): Splunk Edge Hub (custom container), MQTT broker. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gateway_health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gateway_health < 95` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: BACnet object discovery count, translation success rate, latency histogram.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: bacnet-to-mqtt protocol gateway container before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.25",
              "n": "Local Alerting & GPIO Relay Control Container",
              "c": "high",
              "f": "advanced",
              "v": "Enables immediate equipment shutdown or alarm triggering at the edge without cloud latency, critical for safety-critical systems.",
              "t": "Splunk Edge Hub (custom container), gRPC SDK, GPIO control",
              "d": "`index=edge-hub-data` all sensor metrics, `index=edge-hub-logs` container_alert_log",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log container_name=local_alerting alert_triggered=true\n| stats count as alerts_triggered, count(eval(relay_state=\"ENERGIZED\")) as equipment_stopped by host\n| eval safety_response_rate = (equipment_stopped / alerts_triggered) * 100",
              "m": "Build custom ARM64 container with gRPC client library and GPIO library (RPi.GPIO or gpiod). Container subscribes to Edge Hub gRPC sensor stream, implements local thresholds (e.g., temperature > 90°C), and directly controls GPIO pins to energize/de-energize relays (e.g., kill power to pump, trigger siren). No cloud round-trip latency—decisions made in <100ms. Configure edge.json with resource limits (memory: 128MB, CPU: 0.5 core). Store alert events in local edge-hub-logs for compliance.",
              "z": "Alert frequency timeline, relay activation log, response latency histogram.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (custom container), gRPC SDK, GPIO control.\n• Ensure the following data sources are available: `index=edge-hub-data` all sensor metrics, `index=edge-hub-logs` container_alert_log.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild custom ARM64 container with gRPC client library and GPIO library (RPi.GPIO or gpiod). Container subscribes to Edge Hub gRPC sensor stream, implements local thresholds (e.g., temperature > 90°C), and directly controls GPIO pins to energize/de-energize relays (e.g., kill power to pump, trigger siren). No cloud round-trip latency—decisions made in <100ms. Configure edge.json with resource limits (memory: 128MB, CPU: 0.5 core). Store alert events in local edge-hub-logs for compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log container_name=local_alerting alert_triggered=true\n| stats count as alerts_triggered, count(eval(relay_state=\"ENERGIZED\")) as equipment_stopped by host\n| eval safety_response_rate = (equipment_stopped / alerts_triggered) * 100\n```\n\nUnderstanding this SPL\n\n**Local Alerting & GPIO Relay Control Container** — Enables immediate equipment shutdown or alarm triggering at the edge without cloud latency, critical for safety-critical systems.\n\nDocumented **Data sources**: `index=edge-hub-data` all sensor metrics, `index=edge-hub-logs` container_alert_log. **App/TA** (typical add-on context): Splunk Edge Hub (custom container), gRPC SDK, GPIO control. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **safety_response_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Alert frequency timeline, relay activation log, response latency histogram.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: local alerting & gpio relay control container before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.26",
              "n": "Edge Analytics Container for Rolling Statistics & Threshold Logic",
              "c": "medium",
              "f": "advanced",
              "v": "Computes advanced analytics (moving averages, percentiles, trend detection) locally, reducing cloud computation burden.",
              "t": "Splunk Edge Hub (custom container), gRPC SDK",
              "d": "`index=edge-hub-data` computed_statistics, `index=edge-hub-logs` container_analytics_event",
              "q": "index=edge-hub-data metric_name=temperature\n| timechart avg(temperature) as temp_avg by host\n| delta temp_avg as temp_trend\n| stats avg(temp_trend) as avg_trend, stdev(temp_trend) as trend_std by host\n| eval trend_anomaly=if(abs(temp_trend) > (avg_trend + (2*trend_std)), \"YES\", \"NO\")",
              "m": "Build custom ARM64 container with NumPy/Pandas libraries (may require multi-stage build to reduce image size). Container implements rolling window statistics (5/15/60-minute moving averages) via gRPC sensor stream. Compute percentiles, trend lines, and detect threshold crossings locally. Publish results as new metrics to MQTT (Advanced Settings → MQTT Publish) or directly to Splunk via gRPC SDK. Store raw + computed metrics locally (3M backlog) for redundancy. Configure resource limits: memory 512MB, CPU 1.5 cores.",
              "z": "Rolling average trend, threshold crossing frequency, anomaly detection timeline.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (custom container), gRPC SDK.\n• Ensure the following data sources are available: `index=edge-hub-data` computed_statistics, `index=edge-hub-logs` container_analytics_event.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild custom ARM64 container with NumPy/Pandas libraries (may require multi-stage build to reduce image size). Container implements rolling window statistics (5/15/60-minute moving averages) via gRPC sensor stream. Compute percentiles, trend lines, and detect threshold crossings locally. Publish results as new metrics to MQTT (Advanced Settings → MQTT Publish) or directly to Splunk via gRPC SDK. Store raw + computed metrics locally (3M backlog) for redundancy. Configure resource limits: memory 5…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-data metric_name=temperature\n| timechart avg(temperature) as temp_avg by host\n| delta temp_avg as temp_trend\n| stats avg(temp_trend) as avg_trend, stdev(temp_trend) as trend_std by host\n| eval trend_anomaly=if(abs(temp_trend) > (avg_trend + (2*trend_std)), \"YES\", \"NO\")\n```\n\nUnderstanding this SPL\n\n**Edge Analytics Container for Rolling Statistics & Threshold Logic** — Computes advanced analytics (moving averages, percentiles, trend detection) locally, reducing cloud computation burden.\n\nDocumented **Data sources**: `index=edge-hub-data` computed_statistics, `index=edge-hub-logs` container_analytics_event. **App/TA** (typical add-on context): Splunk Edge Hub (custom container), gRPC SDK. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-data.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-data. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time with a separate series **by host** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Edge Analytics Container for Rolling Statistics & Threshold Logic**): delta temp_avg as temp_trend\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **trend_anomaly** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Rolling average trend, threshold crossing frequency, anomaly detection timeline.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with what the on-site box is feeling and whether it is online before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.27",
              "n": "BLE Beacon Asset Tracking & Presence Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks valuable equipment or personnel location within facility using low-cost BLE tags without requiring dedicated asset management infrastructure.",
              "t": "Splunk Edge Hub (Bluetooth connectivity), custom container",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_log bluetooth_beacon_event, `index=edge-hub-data` metric_name=rssi_strength",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log bluetooth_beacon_event beacon_uuid=* beacon_major=* beacon_minor=*\n| stats latest(_time) as last_seen, avg(rssi_strength) as avg_rssi by beacon_id, host\n| eval presence_status=if((now() - last_seen) < 300, \"PRESENT\", \"ABSENT\")\n| stats count(eval(presence_status=\"PRESENT\")) as present_assets by host, location",
              "m": "Enable Bluetooth scanning on Edge Hub. Build custom ARM64 container that listens for iBeacon or AltBeacon advertisements, parses UUID/major/minor identifiers, and logs beacon_id + RSSI (signal strength). Use RSSI to estimate distance (typically 1-10m range for Edge Hub antenna). Store beacon events locally via 100K event backlog. Implement trilateration logic in container or Splunk downstream to estimate asset location across 3+ Edge Hubs. MQTT publish beacon sightings to central location service.",
              "z": "Asset presence map, RSSI range heatmap, movement timeline.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (Bluetooth connectivity), custom container.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log bluetooth_beacon_event, `index=edge-hub-data` metric_name=rssi_strength.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Bluetooth scanning on Edge Hub. Build custom ARM64 container that listens for iBeacon or AltBeacon advertisements, parses UUID/major/minor identifiers, and logs beacon_id + RSSI (signal strength). Use RSSI to estimate distance (typically 1-10m range for Edge Hub antenna). Store beacon events locally via 100K event backlog. Implement trilateration logic in container or Splunk downstream to estimate asset location across 3+ Edge Hubs. MQTT publish beacon sightings to central location servic…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log bluetooth_beacon_event beacon_uuid=* beacon_major=* beacon_minor=*\n| stats latest(_time) as last_seen, avg(rssi_strength) as avg_rssi by beacon_id, host\n| eval presence_status=if((now() - last_seen) < 300, \"PRESENT\", \"ABSENT\")\n| stats count(eval(presence_status=\"PRESENT\")) as present_assets by host, location\n```\n\nUnderstanding this SPL\n\n**BLE Beacon Asset Tracking & Presence Detection** — Tracks valuable equipment or personnel location within facility using low-cost BLE tags without requiring dedicated asset management infrastructure.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log bluetooth_beacon_event, `index=edge-hub-data` metric_name=rssi_strength. **App/TA** (typical add-on context): Splunk Edge Hub (Bluetooth connectivity), custom container. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by beacon_id, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **presence_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Asset presence map, RSSI range heatmap, movement timeline.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: ble beacon asset tracking & presence before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.28",
              "n": "USB Camera Barcode & QR Code Scanning Container",
              "c": "medium",
              "f": "advanced",
              "v": "Automates material tracking and inventory verification by scanning barcodes/QR codes at the edge without manual entry.",
              "t": "Splunk Edge Hub (USB camera + custom container), v2.1+ (USB passthrough)",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_log barcode_scan_event, `index=edge-hub-data` scan_metadata",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log barcode_scan_event\n| regex barcode_value=\"^[0-9]{12,14}$\"\n| stats count as successful_scans, count(eval(barcode_valid=\"NO\")) as invalid_scans by host, scan_location\n| eval scan_accuracy = (successful_scans / (successful_scans + invalid_scans)) * 100\n| where scan_accuracy < 95",
              "m": "Connect USB camera to Edge Hub USB port (requires v2.1+ for USB device passthrough). Build custom ARM64 container using OpenCV + pyzbar/python-qrcode libraries for barcode detection. Container captures video frames, decodes barcodes/QR codes, and logs scan_id + barcode_value to edge-hub-logs. Implement local SQLite database (in container) to store scanned inventory and prevent duplicate entries. Publish scan events to MQTT (Advanced Settings → MQTT Publish) for downstream processing. Configure edge.json resource limits: memory 512MB, CPU 2 cores (video processing is CPU-intensive).",
              "z": "Scan success rate trend, invalid barcode timeline, inventory reconciliation report.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (USB camera + custom container), v2.1+ (USB passthrough).\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log barcode_scan_event, `index=edge-hub-data` scan_metadata.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConnect USB camera to Edge Hub USB port (requires v2.1+ for USB device passthrough). Build custom ARM64 container using OpenCV + pyzbar/python-qrcode libraries for barcode detection. Container captures video frames, decodes barcodes/QR codes, and logs scan_id + barcode_value to edge-hub-logs. Implement local SQLite database (in container) to store scanned inventory and prevent duplicate entries. Publish scan events to MQTT (Advanced Settings → MQTT Publish) for downstream processing. Configure e…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log barcode_scan_event\n| regex barcode_value=\"^[0-9]{12,14}$\"\n| stats count as successful_scans, count(eval(barcode_valid=\"NO\")) as invalid_scans by host, scan_location\n| eval scan_accuracy = (successful_scans / (successful_scans + invalid_scans)) * 100\n| where scan_accuracy < 95\n```\n\nUnderstanding this SPL\n\n**USB Camera Barcode & QR Code Scanning Container** — Automates material tracking and inventory verification by scanning barcodes/QR codes at the edge without manual entry.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log barcode_scan_event, `index=edge-hub-data` scan_metadata. **App/TA** (typical add-on context): Splunk Edge Hub (USB camera + custom container), v2.1+ (USB passthrough). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters rows matching a pattern with `regex`.\n• `stats` rolls up events into metrics; results are split **by host, scan_location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **scan_accuracy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where scan_accuracy < 95` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scan success rate trend, invalid barcode timeline, inventory reconciliation report.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: usb camera barcode & qr code scanning container before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.29",
              "n": "Audio Classification for Anomalous Sound Detection",
              "c": "medium",
              "f": "expert",
              "v": "Detects equipment distress (compressor cavitation, bearing squeal, motor whine) by classifying sound types without FFT spectral analysis.",
              "t": "Splunk Edge Hub (stereo microphone + NPU container), v2.1+",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_log audio_classification_event, `index=edge-hub-data` audio_class_confidence",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log audio_classification_event\n| stats count as classification_attempts, count(eval(sound_class=\"ABNORMAL\")) as anomalies by host, equipment_type\n| eval anomaly_rate = (anomalies / classification_attempts) * 100\n| where anomaly_rate > 5",
              "m": "Deploy TensorFlow Lite audio classification model (v2.1+ NPU support) in custom ARM64 container. Train model on normal equipment sounds (baseline) and abnormal sounds (target classes: cavitation, squeal, whine, vibration). Container processes 1-second audio chunks from stereo microphone at 16kHz, runs inference on NPU, publishes classification result (sound_class + confidence) to MQTT. Store classification logs locally for retraining. Configure edge.json: memory 512MB, CPU 1 core. Note: Do not stream raw audio to cloud (privacy); only log classification results.",
              "z": "Anomaly classification frequency, confidence score distribution, sound type timeline.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (stereo microphone + NPU container), v2.1+.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log audio_classification_event, `index=edge-hub-data` audio_class_confidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy TensorFlow Lite audio classification model (v2.1+ NPU support) in custom ARM64 container. Train model on normal equipment sounds (baseline) and abnormal sounds (target classes: cavitation, squeal, whine, vibration). Container processes 1-second audio chunks from stereo microphone at 16kHz, runs inference on NPU, publishes classification result (sound_class + confidence) to MQTT. Store classification logs locally for retraining. Configure edge.json: memory 512MB, CPU 1 core. Note: Do not s…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log audio_classification_event\n| stats count as classification_attempts, count(eval(sound_class=\"ABNORMAL\")) as anomalies by host, equipment_type\n| eval anomaly_rate = (anomalies / classification_attempts) * 100\n| where anomaly_rate > 5\n```\n\nUnderstanding this SPL\n\n**Audio Classification for Anomalous Sound Detection** — Detects equipment distress (compressor cavitation, bearing squeal, motor whine) by classifying sound types without FFT spectral analysis.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log audio_classification_event, `index=edge-hub-data` audio_class_confidence. **App/TA** (typical add-on context): Splunk Edge Hub (stereo microphone + NPU container), v2.1+. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, equipment_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **anomaly_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where anomaly_rate > 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Anomaly classification frequency, confidence score distribution, sound type timeline.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with loud or unsafe noise in the work area before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.30",
              "n": "Predictive Maintenance via NPU-Based Model Inference",
              "c": "high",
              "f": "advanced",
              "v": "Predicts equipment failures (bearing degradation, motor insulation breakdown) 7-30 days in advance using on-device ML inference.",
              "t": "Splunk Edge Hub (NPU + custom container), v2.1+, OT Intelligence",
              "d": "`index=edge-hub-data` raw sensor metrics, `index=edge-hub-logs` predictive_maintenance_inference",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log predictive_model_inference failure_risk_score>0.7\n| stats count as high_risk_predictions, avg(failure_risk_score) as avg_risk by equipment_id, host\n| eval maintenance_urgency=case(avg_risk > 0.85, \"CRITICAL\", avg_risk > 0.7, \"HIGH\", 1=1, \"MEDIUM\")",
              "m": "Train XGBoost or TensorFlow Lite model offline using historical sensor data (temperature, vibration, power consumption trends). Quantize model to INT8 for NPU deployment. Build custom ARM64 container that streams sensor features (via gRPC API) into model inference pipeline running on NPU. Model outputs failure_risk_score (0-1 scale). If score > 0.7, trigger alert and log predictive maintenance event. Store raw feature vectors locally (3M backlog) for continuous model retraining. Configure edge.json: memory 512MB, CPU 2 cores, NPU enabled.",
              "z": "Failure risk score trend, maintenance urgency gauge, prediction accuracy (post-hoc) scatter.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (NPU + custom container), v2.1+, OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` raw sensor metrics, `index=edge-hub-logs` predictive_maintenance_inference.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrain XGBoost or TensorFlow Lite model offline using historical sensor data (temperature, vibration, power consumption trends). Quantize model to INT8 for NPU deployment. Build custom ARM64 container that streams sensor features (via gRPC API) into model inference pipeline running on NPU. Model outputs failure_risk_score (0-1 scale). If score > 0.7, trigger alert and log predictive maintenance event. Store raw feature vectors locally (3M backlog) for continuous model retraining. Configure edge.j…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log predictive_model_inference failure_risk_score>0.7\n| stats count as high_risk_predictions, avg(failure_risk_score) as avg_risk by equipment_id, host\n| eval maintenance_urgency=case(avg_risk > 0.85, \"CRITICAL\", avg_risk > 0.7, \"HIGH\", 1=1, \"MEDIUM\")\n```\n\nUnderstanding this SPL\n\n**Predictive Maintenance via NPU-Based Model Inference** — Predicts equipment failures (bearing degradation, motor insulation breakdown) 7-30 days in advance using on-device ML inference.\n\nDocumented **Data sources**: `index=edge-hub-data` raw sensor metrics, `index=edge-hub-logs` predictive_maintenance_inference. **App/TA** (typical add-on context): Splunk Edge Hub (NPU + custom container), v2.1+, OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by equipment_id, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **maintenance_urgency** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Failure risk score trend, maintenance urgency gauge, prediction accuracy (post-hoc) scatter.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with the situation around: predictive maintenance via npu-based model inference before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.31",
              "n": "OPC-UA Tag Browsing & Change Detection",
              "c": "high",
              "f": "advanced",
              "v": "Monitors PLC tag changes in real-time and alerts on unexpected data type or value changes indicating program modification or malfunction.",
              "t": "Splunk Edge Hub (OPC-UA client), Splunk OT Intelligence",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua, `index=edge-hub-health` sourcetype=edge_hub",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_opcua opcua_tag=* opcua_value=*\n| stats latest(opcua_value) as latest_val, latest(opcua_data_type) as latest_type by opcua_tag, host\n| join max=1 opcua_tag [| rest /services/saved/data-model/indexes/OT_Industry_Process_Assets | fields asset_id, tag_name, expected_data_type]\n| where latest_type != expected_data_type\n| eval change_alert=\"DATA_TYPE_MISMATCH\"",
              "m": "Configure OPC-UA connection in Advanced Settings → OPC-UA tab with PLC/SCADA server hostname (port 4840), username/password or anonymous authentication. Browse PLC namespace to discover tags. Enable continuous polling of selected tags at 5-second intervals. Configure threshold alerts on value changes (delta > 20% or absolute > threshold). Store tag values in edge-hub-logs index with sourcetype=splunk_edge_hub_opcua. Detect unexpected data type changes (INT to FLOAT) or tag disappearance (PLC program change). Use local SQLite backlog (100K event capacity) for connectivity loss resilience.",
              "z": "Tag value trend, data type change alert, PLC program integrity dashboard.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (OPC-UA client), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua, `index=edge-hub-health` sourcetype=edge_hub.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure OPC-UA connection in Advanced Settings → OPC-UA tab with PLC/SCADA server hostname (port 4840), username/password or anonymous authentication. Browse PLC namespace to discover tags. Enable continuous polling of selected tags at 5-second intervals. Configure threshold alerts on value changes (delta > 20% or absolute > threshold). Store tag values in edge-hub-logs index with sourcetype=splunk_edge_hub_opcua. Detect unexpected data type changes (INT to FLOAT) or tag disappearance (PLC pro…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_opcua opcua_tag=* opcua_value=*\n| stats latest(opcua_value) as latest_val, latest(opcua_data_type) as latest_type by opcua_tag, host\n| join max=1 opcua_tag [| rest /services/saved/data-model/indexes/OT_Industry_Process_Assets | fields asset_id, tag_name, expected_data_type]\n| where latest_type != expected_data_type\n| eval change_alert=\"DATA_TYPE_MISMATCH\"\n```\n\nUnderstanding this SPL\n\n**OPC-UA Tag Browsing & Change Detection** — Monitors PLC tag changes in real-time and alerts on unexpected data type or value changes indicating program modification or malfunction.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua, `index=edge-hub-health` sourcetype=edge_hub. **App/TA** (typical add-on context): Splunk Edge Hub (OPC-UA client), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_opcua. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_opcua. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by opcua_tag, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where latest_type != expected_data_type` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **change_alert** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Tag value trend, data type change alert, PLC program integrity dashboard.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: opc-ua tag browsing & change before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.32",
              "n": "Modbus TCP Register Monitoring for Industrial Equipment",
              "c": "high",
              "f": "advanced",
              "v": "Monitors equipment operational parameters via Modbus registers without requiring specialized data collection agents.",
              "t": "Splunk Edge Hub (Modbus TCP client), Splunk OT Intelligence",
              "d": "`index=edge-hub-logs` sourcetype=edge_hub modbus_register, `index=edge-hub-data` modbus_metric",
              "q": "index=edge-hub-logs sourcetype=edge_hub modbus_register\n| regex modbus_register_name=\"^(voltage|current|frequency)\"\n| stats latest(register_value) as latest_val by modbus_register_name, modbus_device_ip\n| eval register_healthy=case(\n    modbus_register_name=\"voltage\" AND latest_val >= 210 AND latest_val <= 250, \"YES\",\n    modbus_register_name=\"current\" AND latest_val >= 0 AND latest_val <= 100, \"YES\",\n    modbus_register_name=\"frequency\" AND latest_val >= 49 AND latest_val <= 51, \"YES\",\n    1=1, \"NO\")\n| where register_healthy=\"NO\"",
              "m": "Configure Modbus TCP in Advanced Settings → Modbus tab with equipment IP/port (default 502). Define register map (coils, discrete inputs, holding registers, input registers) with OID aliases for readability. Configure polling interval (10-30 seconds typical) and register read strategy (optimized batching). Store register values in edge-hub-logs (events) or as metrics in edge-hub-data. Map register indices to human-readable tags (e.g., 0x1234→\"VFD_Speed_Hz\"). Local SQLite backlog stores 100K Modbus events for offline resilience.",
              "z": "Register value trend, equipment health gauge, Modbus gateway connection status.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (Modbus TCP client), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=edge_hub modbus_register, `index=edge-hub-data` modbus_metric.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Modbus TCP in Advanced Settings → Modbus tab with equipment IP/port (default 502). Define register map (coils, discrete inputs, holding registers, input registers) with OID aliases for readability. Configure polling interval (10-30 seconds typical) and register read strategy (optimized batching). Store register values in edge-hub-logs (events) or as metrics in edge-hub-data. Map register indices to human-readable tags (e.g., 0x1234→\"VFD_Speed_Hz\"). Local SQLite backlog stores 100K Modb…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=edge_hub modbus_register\n| regex modbus_register_name=\"^(voltage|current|frequency)\"\n| stats latest(register_value) as latest_val by modbus_register_name, modbus_device_ip\n| eval register_healthy=case(\n    modbus_register_name=\"voltage\" AND latest_val >= 210 AND latest_val <= 250, \"YES\",\n    modbus_register_name=\"current\" AND latest_val >= 0 AND latest_val <= 100, \"YES\",\n    modbus_register_name=\"frequency\" AND latest_val >= 49 AND latest_val <= 51, \"YES\",\n    1=1, \"NO\")\n| where register_healthy=\"NO\"\n```\n\nUnderstanding this SPL\n\n**Modbus TCP Register Monitoring for Industrial Equipment** — Monitors equipment operational parameters via Modbus registers without requiring specialized data collection agents.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=edge_hub modbus_register, `index=edge-hub-data` modbus_metric. **App/TA** (typical add-on context): Splunk Edge Hub (Modbus TCP client), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: edge_hub. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=edge_hub. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters rows matching a pattern with `regex`.\n• `stats` rolls up events into metrics; results are split **by modbus_register_name, modbus_device_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **register_healthy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where register_healthy=\"NO\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Register value trend, equipment health gauge, Modbus gateway connection status.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with dust in the air that can bother lungs before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.33",
              "n": "Multi-Protocol Sensor Fusion (OPC-UA + MQTT + Built-in)",
              "c": "high",
              "f": "intermediate",
              "v": "Correlates data from heterogeneous sources (PLC via OPC-UA, IoT devices via MQTT, internal sensors) to identify root causes of anomalies.",
              "t": "Splunk Edge Hub (multi-protocol aggregation), Splunk OT Intelligence",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua OR splunk_edge_hub_log (MQTT), `index=edge-hub-data` all metric types",
              "q": "(index=edge-hub-logs sourcetype=splunk_edge_hub_opcua OR index=edge-hub-logs sourcetype=splunk_edge_hub_log mqtt_topic=*)\nOR index=edge-hub-data metric_name=temperature\n| bin _time span=5m\n| stats avg(temperature) as temp, avg(opc_ua_motor_current) as motor_current,\n        avg(mqtt_load_percent) as load by equipment_id, _time\n| eval correlation=correlation(temp, motor_current)\n| where correlation > 0.8\n| eval root_cause=case(\n    temp > 80 AND motor_current > 15, \"THERMAL_OVERLOAD\",\n    temp > 80 AND motor_current < 5, \"SENSOR_FAILURE\",\n    1=1, \"UNKNOWN\")",
              "m": "Configure all three connectivity modes simultaneously: (1) OPC-UA to PLC (Advanced Settings → OPC-UA), (2) MQTT subscriptions to IoT devices (Advanced Settings → MQTT Subscriptions), (3) Enable built-in sensors (temperature, humidity, etc.). Set each protocol's polling interval (OPC-UA 5s, MQTT 10s, sensors 30s) to minimize latency skew. Ingest all data streams with consistent timestamps. Local SQLite backlog (3M data points) ensures data fusion doesn't lose events. Use downstream Splunk correlation SPL for multi-modal root cause analysis.",
              "z": "Multi-protocol correlation heatmap, root cause attribution waterfall, equipment health scorecard.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (multi-protocol aggregation), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua OR splunk_edge_hub_log (MQTT), `index=edge-hub-data` all metric types.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure all three connectivity modes simultaneously: (1) OPC-UA to PLC (Advanced Settings → OPC-UA), (2) MQTT subscriptions to IoT devices (Advanced Settings → MQTT Subscriptions), (3) Enable built-in sensors (temperature, humidity, etc.). Set each protocol's polling interval (OPC-UA 5s, MQTT 10s, sensors 30s) to minimize latency skew. Ingest all data streams with consistent timestamps. Local SQLite backlog (3M data points) ensures data fusion doesn't lose events. Use downstream Splunk correla…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=edge-hub-logs sourcetype=splunk_edge_hub_opcua OR index=edge-hub-logs sourcetype=splunk_edge_hub_log mqtt_topic=*)\nOR index=edge-hub-data metric_name=temperature\n| bin _time span=5m\n| stats avg(temperature) as temp, avg(opc_ua_motor_current) as motor_current,\n        avg(mqtt_load_percent) as load by equipment_id, _time\n| eval correlation=correlation(temp, motor_current)\n| where correlation > 0.8\n| eval root_cause=case(\n    temp > 80 AND motor_current > 15, \"THERMAL_OVERLOAD\",\n    temp > 80 AND motor_current < 5, \"SENSOR_FAILURE\",\n    1=1, \"UNKNOWN\")\n```\n\nUnderstanding this SPL\n\n**Multi-Protocol Sensor Fusion (OPC-UA + MQTT + Built-in)** — Correlates data from heterogeneous sources (PLC via OPC-UA, IoT devices via MQTT, internal sensors) to identify root causes of anomalies.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua OR splunk_edge_hub_log (MQTT), `index=edge-hub-data` all metric types. **App/TA** (typical add-on context): Splunk Edge Hub (multi-protocol aggregation), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs, edge-hub-data; **sourcetype**: splunk_edge_hub_opcua, splunk_edge_hub_log. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, index=edge-hub-logs, index=edge-hub-data, sourcetype=splunk_edge_hub_opcua. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by equipment_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **correlation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where correlation > 0.8` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **root_cause** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Multi-protocol correlation heatmap, root cause attribution waterfall, equipment health scorecard.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: multi-protocol sensor fusion before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.34",
              "n": "Protocol Gateway Health & Connectivity Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks OPC-UA/Modbus/MQTT gateway uptime and connection quality to prevent silent data loss.",
              "t": "Splunk Edge Hub (health monitoring), Splunk OT Intelligence",
              "d": "`index=edge-hub-health` sourcetype=edge_hub, `index=edge-hub-logs` sourcetype=splunk_edge_hub_log connection_event",
              "q": "index=edge-hub-health sourcetype=edge_hub gateway_name=*\n| stats latest(connection_status) as status, latest(response_time_ms) as response_time,\n        count(eval(error_code!=0)) as error_count by gateway_name, host\n| eval gateway_health=case(\n    status=\"CONNECTED\" AND response_time < 1000 AND error_count < 5, \"HEALTHY\",\n    status=\"CONNECTED\" AND response_time >= 1000, \"SLOW\",\n    status=\"DISCONNECTED\" OR error_count > 10, \"DEGRADED\",\n    1=1, \"UNKNOWN\")",
              "m": "Edge Hub continuously monitors OPC-UA (port 4840), Modbus TCP (port 502), and MQTT broker (port 1883) connectivity every 15 seconds. Log connection attempts + response times to edge-hub-health index (sourcetype=edge_hub). Track error codes (authentication failures, timeouts, handshake errors). Store 100K health events locally. If gateway disconnects, LED ring turns red. Resume transmission via local SQLite backlog (FIFO) when connectivity restored. Configure alert thresholds: downtime > 1 minute = critical, response time > 2s = warning.",
              "z": "Gateway uptime timeline, response time histogram, error rate trend.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (health monitoring), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-health` sourcetype=edge_hub, `index=edge-hub-logs` sourcetype=splunk_edge_hub_log connection_event.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEdge Hub continuously monitors OPC-UA (port 4840), Modbus TCP (port 502), and MQTT broker (port 1883) connectivity every 15 seconds. Log connection attempts + response times to edge-hub-health index (sourcetype=edge_hub). Track error codes (authentication failures, timeouts, handshake errors). Store 100K health events locally. If gateway disconnects, LED ring turns red. Resume transmission via local SQLite backlog (FIFO) when connectivity restored. Configure alert thresholds: downtime > 1 minute…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-health sourcetype=edge_hub gateway_name=*\n| stats latest(connection_status) as status, latest(response_time_ms) as response_time,\n        count(eval(error_code!=0)) as error_count by gateway_name, host\n| eval gateway_health=case(\n    status=\"CONNECTED\" AND response_time < 1000 AND error_count < 5, \"HEALTHY\",\n    status=\"CONNECTED\" AND response_time >= 1000, \"SLOW\",\n    status=\"DISCONNECTED\" OR error_count > 10, \"DEGRADED\",\n    1=1, \"UNKNOWN\")\n```\n\nUnderstanding this SPL\n\n**Protocol Gateway Health & Connectivity Monitoring** — Tracks OPC-UA/Modbus/MQTT gateway uptime and connection quality to prevent silent data loss.\n\nDocumented **Data sources**: `index=edge-hub-health` sourcetype=edge_hub, `index=edge-hub-logs` sourcetype=splunk_edge_hub_log connection_event. **App/TA** (typical add-on context): Splunk Edge Hub (health monitoring), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-health; **sourcetype**: edge_hub. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-health, sourcetype=edge_hub. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by gateway_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gateway_health** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gateway uptime timeline, response time histogram, error rate trend.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: protocol gateway health & connectivity before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.35",
              "n": "Industrial Alarm Management via OPC-UA",
              "c": "critical",
              "f": "intermediate",
              "v": "Centralizes alarm processing from multiple PLCs via OPC-UA Alarms & Events service, preventing missed critical alerts.",
              "t": "Splunk Edge Hub (OPC-UA A&E client), Splunk OT Intelligence",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua alarm_event, `index=edge-hub-health` sourcetype=edge_hub",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_opcua alarm_event=true\n| bin _time span=1m\n| stats count as alarm_count, latest(alarm_severity) as severity, latest(alarm_message) as msg by source_node_id, _time\n| where severity=\"HIGH\" OR severity=\"CRITICAL\"\n| eval acknowledgment_status=if(isnotnull(acknowledged_time), \"ACK\", \"UNACK\")\n| where acknowledgment_status=\"UNACK\"",
              "m": "Configure OPC-UA in Advanced Settings → OPC-UA tab with Alarms & Events subscription enabled. Define event filters for alarm severity levels (High, Critical). Edge Hub subscribes to server's Alarms & Events namespace and logs all alarm state changes (triggered, acknowledged, cleared) to edge-hub-logs (sourcetype=splunk_edge_hub_opcua). Store alarm events locally via 100K backlog for resilience. Implement alarm acknowledgment workflow: operator ack in Splunk → webhook → OPC-UA Acknowledge operation. Color LED ring based on highest unacknowledged severity (red=critical, orange=high).",
              "z": "Alarm frequency timeline, severity distribution pie, acknowledgment status list.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (OPC-UA A&E client), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua alarm_event, `index=edge-hub-health` sourcetype=edge_hub.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure OPC-UA in Advanced Settings → OPC-UA tab with Alarms & Events subscription enabled. Define event filters for alarm severity levels (High, Critical). Edge Hub subscribes to server's Alarms & Events namespace and logs all alarm state changes (triggered, acknowledged, cleared) to edge-hub-logs (sourcetype=splunk_edge_hub_opcua). Store alarm events locally via 100K backlog for resilience. Implement alarm acknowledgment workflow: operator ack in Splunk → webhook → OPC-UA Acknowledge operati…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_opcua alarm_event=true\n| bin _time span=1m\n| stats count as alarm_count, latest(alarm_severity) as severity, latest(alarm_message) as msg by source_node_id, _time\n| where severity=\"HIGH\" OR severity=\"CRITICAL\"\n| eval acknowledgment_status=if(isnotnull(acknowledged_time), \"ACK\", \"UNACK\")\n| where acknowledgment_status=\"UNACK\"\n```\n\nUnderstanding this SPL\n\n**Industrial Alarm Management via OPC-UA** — Centralizes alarm processing from multiple PLCs via OPC-UA Alarms & Events service, preventing missed critical alerts.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua alarm_event, `index=edge-hub-health` sourcetype=edge_hub. **App/TA** (typical add-on context): Splunk Edge Hub (OPC-UA A&E client), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_opcua. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_opcua. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by source_node_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where severity=\"HIGH\" OR severity=\"CRITICAL\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **acknowledgment_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where acknowledgment_status=\"UNACK\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Alarm frequency timeline, severity distribution pie, acknowledgment status list.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: industrial alarm management via opc-ua before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.36",
              "n": "Energy Meter Integration via Modbus TCP",
              "c": "medium",
              "f": "advanced",
              "v": "Monitors power consumption and demand charges in real-time to identify energy waste and optimize utility costs.",
              "t": "Splunk Edge Hub (Modbus TCP client), Splunk OT Intelligence",
              "d": "`index=edge-hub-logs` sourcetype=edge_hub modbus_register meter_type=energy, `index=edge-hub-data` metric_name=power",
              "q": "index=edge-hub-logs sourcetype=edge_hub modbus_register meter_type=energy modbus_register_name=total_energy_kwh\n| stats latest(register_value) as total_kwh, latest(_time) as latest_time by meter_id\n| join max=1 meter_id [| rest /services/saved/data-model/indexes/energy_meter_cost_model | fields meter_id, cost_per_kwh]\n| eval daily_cost=(total_kwh * cost_per_kwh)\n| stats sum(daily_cost) as total_daily_cost by meter_id\n| where total_daily_cost > threshold",
              "m": "Deploy Edge Hub with Modbus TCP connectivity to energy meter (Schneider, Siemens, ABB models typical support). Configure register map: 0x0000=voltage, 0x0002=current, 0x0004=power_factor, 0x000C=total_energy_kwh. Set polling interval to 1-5 minutes for demand tracking. Store register values in edge-hub-logs as events or convert to metrics (kW, kVAR) in edge-hub-data for time-series analysis. Local SQLite backlog ensures no consumption data is lost. Implement demand charge alerts (cost spike detection) via SPL.",
              "z": "Energy consumption trend, cost breakdown by zone, demand charge projection.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (Modbus TCP client), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=edge_hub modbus_register meter_type=energy, `index=edge-hub-data` metric_name=power.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub with Modbus TCP connectivity to energy meter (Schneider, Siemens, ABB models typical support). Configure register map: 0x0000=voltage, 0x0002=current, 0x0004=power_factor, 0x000C=total_energy_kwh. Set polling interval to 1-5 minutes for demand tracking. Store register values in edge-hub-logs as events or convert to metrics (kW, kVAR) in edge-hub-data for time-series analysis. Local SQLite backlog ensures no consumption data is lost. Implement demand charge alerts (cost spike dete…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=edge_hub modbus_register meter_type=energy modbus_register_name=total_energy_kwh\n| stats latest(register_value) as total_kwh, latest(_time) as latest_time by meter_id\n| join max=1 meter_id [| rest /services/saved/data-model/indexes/energy_meter_cost_model | fields meter_id, cost_per_kwh]\n| eval daily_cost=(total_kwh * cost_per_kwh)\n| stats sum(daily_cost) as total_daily_cost by meter_id\n| where total_daily_cost > threshold\n```\n\nUnderstanding this SPL\n\n**Energy Meter Integration via Modbus TCP** — Monitors power consumption and demand charges in real-time to identify energy waste and optimize utility costs.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=edge_hub modbus_register meter_type=energy, `index=edge-hub-data` metric_name=power. **App/TA** (typical add-on context): Splunk Edge Hub (Modbus TCP client), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: edge_hub. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=edge_hub. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by meter_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **daily_cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by meter_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where total_daily_cost > threshold` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Energy consumption trend, cost breakdown by zone, demand charge projection.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: energy meter integration via modbus tcp before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "modbus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.37",
              "n": "PLC Program Change Detection via OPC-UA Timestamp Monitoring",
              "c": "critical",
              "f": "expert",
              "v": "Detects unauthorized or accidental PLC program modifications by tracking program last-edit timestamp changes.",
              "t": "Splunk Edge Hub (OPC-UA client), Splunk OT Intelligence",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua program_timestamp_event",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_opcua program_timestamp_event\n| stats latest(program_last_modified) as current_timestamp by program_id, plc_ip\n| join max=1 program_id [| rest /services/saved/data-model/indexes/plc_program_baseline | fields program_id, last_known_timestamp, last_known_user]\n| where current_timestamp != last_known_timestamp\n| eval program_change=\"DETECTED\"\n| eval time_since_change_hours=((now() - current_timestamp) / 3600)",
              "m": "Configure OPC-UA subscriptions to PLC program metadata tags (if available) or implement custom OPC-UA node reads for program timestamp info. Some PLC vendors expose system time for last program write. Query these tags every 5-10 minutes. Store baseline program timestamp on first run. Alert if current timestamp differs from baseline (indicates program reload or modification). Log change details to edge-hub-logs. Correlate with PLC user login logs (if available via separate data source) to identify who modified program. This is security-critical for industrial environments.",
              "z": "Program timestamp timeline, change detection alert, modification history.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (OPC-UA client), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua program_timestamp_event.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure OPC-UA subscriptions to PLC program metadata tags (if available) or implement custom OPC-UA node reads for program timestamp info. Some PLC vendors expose system time for last program write. Query these tags every 5-10 minutes. Store baseline program timestamp on first run. Alert if current timestamp differs from baseline (indicates program reload or modification). Log change details to edge-hub-logs. Correlate with PLC user login logs (if available via separate data source) to identif…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_opcua program_timestamp_event\n| stats latest(program_last_modified) as current_timestamp by program_id, plc_ip\n| join max=1 program_id [| rest /services/saved/data-model/indexes/plc_program_baseline | fields program_id, last_known_timestamp, last_known_user]\n| where current_timestamp != last_known_timestamp\n| eval program_change=\"DETECTED\"\n| eval time_since_change_hours=((now() - current_timestamp) / 3600)\n```\n\nUnderstanding this SPL\n\n**PLC Program Change Detection via OPC-UA Timestamp Monitoring** — Detects unauthorized or accidental PLC program modifications by tracking program last-edit timestamp changes.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua program_timestamp_event. **App/TA** (typical add-on context): Splunk Edge Hub (OPC-UA client), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_opcua. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_opcua. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by program_id, plc_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where current_timestamp != last_known_timestamp` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **program_change** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **time_since_change_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Program timestamp timeline, change detection alert, modification history.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: plc program change detection via opc-ua timestamp before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "opcua"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.38",
              "n": "SCADA HMI Event Capture & Operator Action Logging",
              "c": "medium",
              "f": "intermediate",
              "v": "Audits all HMI operator actions (setpoint changes, equipment starts/stops) for compliance and root cause analysis.",
              "t": "Splunk Edge Hub (OPC-UA tag subscription), Splunk OT Intelligence",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua hmi_event, `index=edge-hub-health` sourcetype=edge_hub",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_opcua hmi_event=true\n| regex field_name=\"setpoint|start|stop|mode\"\n| bin _time span=1h\n| stats count as action_count, latest(field_value) as value by operator_id, _time\n| eval action_frequency=(action_count / 60)\n| where action_frequency > 5\n| eval operator_behavior=\"UNUSUAL\"",
              "m": "Configure OPC-UA subscriptions to HMI write tags (setpoints, control commands). Enable change notification for tags with ValueWrite attributes. Log tag writes with operator context (user ID from HMI session) to edge-hub-logs (sourcetype=splunk_edge_hub_opcua). Store events locally (100K backlog) for audit trail continuity. Implement audit report: operator ID, timestamp, tag name, old value, new value, status. Alert on unusual operator behavior patterns (too many commands in short time window).",
              "z": "Operator action timeline, command frequency histogram, unusual behavior alert.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (OPC-UA tag subscription), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua hmi_event, `index=edge-hub-health` sourcetype=edge_hub.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure OPC-UA subscriptions to HMI write tags (setpoints, control commands). Enable change notification for tags with ValueWrite attributes. Log tag writes with operator context (user ID from HMI session) to edge-hub-logs (sourcetype=splunk_edge_hub_opcua). Store events locally (100K backlog) for audit trail continuity. Implement audit report: operator ID, timestamp, tag name, old value, new value, status. Alert on unusual operator behavior patterns (too many commands in short time window).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_opcua hmi_event=true\n| regex field_name=\"setpoint|start|stop|mode\"\n| bin _time span=1h\n| stats count as action_count, latest(field_value) as value by operator_id, _time\n| eval action_frequency=(action_count / 60)\n| where action_frequency > 5\n| eval operator_behavior=\"UNUSUAL\"\n```\n\nUnderstanding this SPL\n\n**SCADA HMI Event Capture & Operator Action Logging** — Audits all HMI operator actions (setpoint changes, equipment starts/stops) for compliance and root cause analysis.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_opcua hmi_event, `index=edge-hub-health` sourcetype=edge_hub. **App/TA** (typical add-on context): Splunk Edge Hub (OPC-UA tag subscription), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_opcua. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_opcua. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters rows matching a pattern with `regex`.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by operator_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **action_frequency** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where action_frequency > 5` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **operator_behavior** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Operator action timeline, command frequency histogram, unusual behavior alert.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: scada hmi event capture & operator action logging before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "opcua"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.39",
              "n": "Multi-Device Fleet Firmware Version Compliance",
              "c": "high",
              "f": "expert",
              "v": "Ensures all Edge Hubs in a fleet run current firmware versions to maintain security and feature parity.",
              "t": "Splunk Edge Hub (multiple devices), Splunk OT Intelligence",
              "d": "`index=edge-hub-health` sourcetype=edge_hub firmware_version, `index=edge-hub-logs` firmware_update_event",
              "q": "index=edge-hub-health sourcetype=edge_hub\n| stats latest(firmware_version) as fw_version, latest(os_build_number) as build by host, device_id\n| stats dc(fw_version) as unique_versions, count as total_devices by location\n| where unique_versions > 1\n| eval compliance=\"DRIFTED\"\n| join max=1 location [| rest /services/data-model/indexes/edge_hub_fleet_baseline | fields location, target_firmware_version]\n| where fw_version != target_firmware_version",
              "m": "Central Splunk instance receives health data from all Edge Hubs via HEC (HTTP Event Collector). Health heartbeat includes firmware_version + build_number every 5 minutes (stored in edge-hub-health index). Create baseline search for target firmware per location/site. Alert when devices drift from baseline (old firmware detected). Implement scheduled search that flags out-of-compliance devices for manual firmware update. Store update history in edge-hub-logs for audit trail. For multi-region deployments, allow per-region firmware versions.",
              "z": "Firmware version distribution pie, device compliance status list, update history timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (multiple devices), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-health` sourcetype=edge_hub firmware_version, `index=edge-hub-logs` firmware_update_event.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCentral Splunk instance receives health data from all Edge Hubs via HEC (HTTP Event Collector). Health heartbeat includes firmware_version + build_number every 5 minutes (stored in edge-hub-health index). Create baseline search for target firmware per location/site. Alert when devices drift from baseline (old firmware detected). Implement scheduled search that flags out-of-compliance devices for manual firmware update. Store update history in edge-hub-logs for audit trail. For multi-region deplo…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-health sourcetype=edge_hub\n| stats latest(firmware_version) as fw_version, latest(os_build_number) as build by host, device_id\n| stats dc(fw_version) as unique_versions, count as total_devices by location\n| where unique_versions > 1\n| eval compliance=\"DRIFTED\"\n| join max=1 location [| rest /services/data-model/indexes/edge_hub_fleet_baseline | fields location, target_firmware_version]\n| where fw_version != target_firmware_version\n```\n\nUnderstanding this SPL\n\n**Multi-Device Fleet Firmware Version Compliance** — Ensures all Edge Hubs in a fleet run current firmware versions to maintain security and feature parity.\n\nDocumented **Data sources**: `index=edge-hub-health` sourcetype=edge_hub firmware_version, `index=edge-hub-logs` firmware_update_event. **App/TA** (typical add-on context): Splunk Edge Hub (multiple devices), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-health; **sourcetype**: edge_hub. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-health, sourcetype=edge_hub. 
\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Firmware version distribution pie, device compliance status list, update history timeline.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: multi-device fleet firmware version compliance before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.40",
              "n": "Device Location Tracking via GNSS",
              "c": "medium",
              "f": "advanced",
              "v": "Monitors Edge Hub physical location for mobile/outdoor deployments to verify proper coverage and detect theft/unauthorized movement.",
              "t": "Splunk Edge Hub (cellular + GNSS), Splunk OT Intelligence",
              "d": "`index=edge-hub-logs` sourcetype=edge_hub gnss_position, `index=edge-hub-health` sourcetype=edge_hub location_heartbeat",
              "q": "index=edge-hub-logs sourcetype=edge_hub gnss_position=true\n| stats latest(latitude) as lat, latest(longitude) as lon, latest(accuracy_meters) as accuracy by device_id\n| join max=1 device_id [| rest /services/data-model/indexes/edge_hub_location_baseline | fields device_id, expected_latitude, expected_longitude, geofence_radius_meters]\n| eval distance=sqrt(((lat - expected_latitude)*111111)^2 + ((lon - expected_longitude)*111111*cos(expected_latitude*pi()/180))^2)\n| where distance > geofence_radius_meters\n| eval location_drift=\"ALERT\"",
              "m": "Edge Hub with cellular module (LTE/4G) includes integrated GNSS receiver. Enable GNSS in Advanced Settings (requires clear sky line-of-sight). Edge Hub logs GPS position (latitude, longitude, accuracy_meters) to edge-hub-logs every 15 minutes. Store expected location + geofence radius per device. Alert if device moves outside geofence (e.g., trailer theft detection, equipment relocation). For outdoor industrial sites, track GNSS acquisition time and accuracy metrics (typically 5-20m accuracy in open sky). Local SQLite stores 30+ days of position history.",
              "z": "Device location map, geofence status indicator, movement timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (cellular + GNSS), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=edge_hub gnss_position, `index=edge-hub-health` sourcetype=edge_hub location_heartbeat.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEdge Hub with cellular module (LTE/4G) includes integrated GNSS receiver. Enable GNSS in Advanced Settings (requires clear sky line-of-sight). Edge Hub logs GPS position (latitude, longitude, accuracy_meters) to edge-hub-logs every 15 minutes. Store expected location + geofence radius per device. Alert if device moves outside geofence (e.g., trailer theft detection, equipment relocation). For outdoor industrial sites, track GNSS acquisition time and accuracy metrics (typically 5-20m accuracy in …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=edge_hub gnss_position=true\n| stats latest(latitude) as lat, latest(longitude) as lon, latest(accuracy_meters) as accuracy by device_id\n| join max=1 device_id [| rest /services/data-model/indexes/edge_hub_location_baseline | fields device_id, expected_latitude, expected_longitude, geofence_radius_meters]\n| eval distance=sqrt(((lat - expected_latitude)*111111)^2 + ((lon - expected_longitude)*111111*cos(expected_latitude*pi()/180))^2)\n| where distance > geofence_radius_meters\n| eval location_drift=\"ALERT\"\n```\n\nUnderstanding this SPL\n\n**Device Location Tracking via GNSS** — Monitors Edge Hub physical location for mobile/outdoor deployments to verify proper coverage and detect theft/unauthorized movement.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=edge_hub gnss_position, `index=edge-hub-health` sourcetype=edge_hub location_heartbeat. **App/TA** (typical add-on context): Splunk Edge Hub (cellular + GNSS), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: edge_hub. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=edge_hub. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **distance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where distance > geofence_radius_meters` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **location_drift** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Device location map, geofence status indicator, movement timeline.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: device location tracking via gnss before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.41",
              "n": "Cellular Connectivity Quality & Signal Strength Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks LTE/4G signal strength and network latency to predict connectivity issues and plan network upgrades.",
              "t": "Splunk Edge Hub (cellular module), Splunk OT Intelligence",
              "d": "`index=edge-hub-health` sourcetype=edge_hub cellular_signal, `index=edge-hub-logs` cellular_connect_event",
              "q": "index=edge-hub-health sourcetype=edge_hub\n| stats latest(rssi_dbm) as signal_dbm, latest(sinr_db) as sinr, latest(latency_ms) as latency,\n        latest(network_type) as net_type by host, cell_id\n| eval signal_quality=case(\n    signal_dbm > -80 AND sinr > 15, \"EXCELLENT\",\n    signal_dbm > -90 AND sinr > 5, \"GOOD\",\n    signal_dbm > -100, \"FAIR\",\n    1=1, \"POOR\")\n| stats avg(latency) as avg_latency by signal_quality, host",
              "m": "Edge Hub cellular module reports RSSI (signal strength -140 to 0 dBm), SINR (signal-to-interference noise ratio dB), network latency (ms), and network type (LTE Band, 4G, etc.) to edge-hub-health index every 5 minutes. Strong signal: RSSI > -80 dBm. Acceptable signal: RSSI -80 to -100 dBm. Poor signal: RSSI < -100 dBm. Track carrier (AT&T, Verizon, etc.) and band for capacity planning. Alert if signal drops below -100 dBm or latency exceeds 500ms (indicates backhaul congestion or dead zone).",
              "z": "Signal strength heatmap, latency trend, network type distribution.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (cellular module), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-health` sourcetype=edge_hub cellular_signal, `index=edge-hub-logs` cellular_connect_event.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEdge Hub cellular module reports RSSI (signal strength -140 to 0 dBm), SINR (signal-to-interference noise ratio dB), network latency (ms), and network type (LTE Band, 4G, etc.) to edge-hub-health index every 5 minutes. Strong signal: RSSI > -80 dBm. Acceptable signal: RSSI -80 to -100 dBm. Poor signal: RSSI < -100 dBm. Track carrier (AT&T, Verizon, etc.) and band for capacity planning. Alert if signal drops below -100 dBm or latency exceeds 500ms (indicates backhaul congestion or dead zone).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-health sourcetype=edge_hub\n| stats latest(rssi_dbm) as signal_dbm, latest(sinr_db) as sinr, latest(latency_ms) as latency,\n        latest(network_type) as net_type by host, cell_id\n| eval signal_quality=case(\n    signal_dbm > -80 AND sinr > 15, \"EXCELLENT\",\n    signal_dbm > -90 AND sinr > 5, \"GOOD\",\n    signal_dbm > -100, \"FAIR\",\n    1=1, \"POOR\")\n| stats avg(latency) as avg_latency by signal_quality, host\n```\n\nUnderstanding this SPL\n\n**Cellular Connectivity Quality & Signal Strength Monitoring** — Tracks LTE/4G signal strength and network latency to predict connectivity issues and plan network upgrades.\n\nDocumented **Data sources**: `index=edge-hub-health` sourcetype=edge_hub cellular_signal, `index=edge-hub-logs` cellular_connect_event. **App/TA** (typical add-on context): Splunk Edge Hub (cellular module), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-health; **sourcetype**: edge_hub. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-health, sourcetype=edge_hub. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, cell_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **signal_quality** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by signal_quality, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Signal strength heatmap, latency trend, network type distribution.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: cellular connectivity quality & signal strength before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.42",
              "n": "Edge Hub Resource Capacity Planning & CPU/Memory Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "Prevents Edge Hub performance degradation and data loss by tracking resource utilization and planning for container resource allocation.",
              "t": "Splunk Edge Hub (system monitoring), Splunk OT Intelligence",
              "d": "`index=edge-hub-health` sourcetype=edge_hub cpu_percent, memory_percent, disk_used_mb",
              "q": "index=edge-hub-health sourcetype=edge_hub\n| bin _time span=1h\n| stats avg(cpu_percent) as avg_cpu, max(cpu_percent) as peak_cpu,\n        avg(memory_percent) as avg_mem, max(memory_percent) as peak_mem,\n        latest(disk_used_mb) as disk_used by host, _time\n| eval cpu_headroom=(100 - peak_cpu), mem_headroom=(100 - peak_mem), disk_available_mb=(32000 - disk_used)\n| where cpu_headroom < 10 OR mem_headroom < 5 OR disk_available_mb < 1000\n| eval resource_alert=\"CAPACITY_WARNING\"",
              "m": "Edge Hub OS reports CPU %, memory %, disk %, and container-level resource stats to edge-hub-health index every 5 minutes. NXP IMX8M has 8GB RAM total: allocate 4GB for OS/system, 4GB for containers. Each container configured with memory limits in edge.json (e.g., 512MB, 256MB). Monitor peak CPU during data bursts (e.g., video processing, FFT computation). Alert when peak CPU > 80% (insufficient headroom for spikes) or memory > 95% (OOM risk). Plan container consolidation or upgrade if resources consistently constrained.",
              "z": "CPU usage trend, memory usage gauge, container resource breakdown pie, capacity projection.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (system monitoring), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-health` sourcetype=edge_hub cpu_percent, memory_percent, disk_used_mb.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEdge Hub OS reports CPU %, memory %, disk %, and container-level resource stats to edge-hub-health index every 5 minutes. NXP IMX8M has 8GB RAM total: allocate 4GB for OS/system, 4GB for containers. Each container configured with memory limits in edge.json (e.g., 512MB, 256MB). Monitor peak CPU during data bursts (e.g., video processing, FFT computation). Alert when peak CPU > 80% (insufficient headroom for spikes) or memory > 95% (OOM risk). Plan container consolidation or upgrade if resources …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-health sourcetype=edge_hub\n| bin _time span=1h\n| stats avg(cpu_percent) as avg_cpu, max(cpu_percent) as peak_cpu,\n        avg(memory_percent) as avg_mem, max(memory_percent) as peak_mem,\n        latest(disk_used_mb) as disk_used by host, _time\n| eval cpu_headroom=(100 - peak_cpu), mem_headroom=(100 - peak_mem), disk_available_mb=(32000 - disk_used)\n| where cpu_headroom < 10 OR mem_headroom < 5 OR disk_available_mb < 1000\n| eval resource_alert=\"CAPACITY_WARNING\"\n```\n\nUnderstanding this SPL\n\n**Edge Hub Resource Capacity Planning & CPU/Memory Utilization** — Prevents Edge Hub performance degradation and data loss by tracking resource utilization and planning for container resource allocation.\n\nDocumented **Data sources**: `index=edge-hub-health` sourcetype=edge_hub cpu_percent, memory_percent, disk_used_mb. **App/TA** (typical add-on context): Splunk Edge Hub (system monitoring), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-health; **sourcetype**: edge_hub. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-health, sourcetype=edge_hub. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cpu_headroom** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cpu_headroom < 10 OR mem_headroom < 5 OR disk_available_mb < 1000` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **resource_alert** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: CPU usage trend, memory usage gauge, container resource breakdown pie, capacity projection.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with what the on-site box is feeling and whether it is online before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.43",
              "n": "Configuration Drift Detection Across Fleet",
              "c": "high",
              "f": "expert",
              "v": "Ensures all Edge Hubs in a fleet maintain consistent configuration (MQTT topics, OPC-UA endpoints, polling intervals) to prevent data inconsistencies.",
              "t": "Splunk Edge Hub (fleet management), Splunk OT Intelligence",
              "d": "`index=edge-hub-logs` sourcetype=edge_hub config_hash, `index=edge-hub-health` configuration_snapshot",
              "q": "index=edge-hub-health sourcetype=edge_hub configuration_snapshot=true\n| stats latest(config_hash) as current_hash, latest(config_timestamp) as timestamp by host, location\n| stats dc(current_hash) as unique_configs, count as total_devices by location\n| where unique_configs > 1\n| eval config_drift=\"DETECTED\"\n| join max=1 location [| rest /services/data-model/indexes/approved_fleet_configs | fields location, approved_config_hash, approved_timestamp]\n| where current_hash != approved_config_hash",
              "m": "Edge Hub computes MD5 hash of entire configuration (MQTT subscriptions, OPC-UA endpoints, Modbus registers, container definitions, sensor polling intervals) and reports to edge-hub-health index weekly. Central Splunk instance generates baseline config hash per location/site. Alert if device config hash differs (indicates manual configuration, failed deployment, or malicious modification). Implement remediation workflow: flag device for manual inspection or trigger automated config re-deployment via edge.json manifest update. Store configuration snapshots locally for historical comparison.",
              "z": "Config drift alert, baseline hash variance, deployment history timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (fleet management), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=edge_hub config_hash, `index=edge-hub-health` configuration_snapshot.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEdge Hub computes MD5 hash of entire configuration (MQTT subscriptions, OPC-UA endpoints, Modbus registers, container definitions, sensor polling intervals) and reports to edge-hub-health index weekly. Central Splunk instance generates baseline config hash per location/site. Alert if device config hash differs (indicates manual configuration, failed deployment, or malicious modification). Implement remediation workflow: flag device for manual inspection or trigger automated config re-deployment …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-health sourcetype=edge_hub configuration_snapshot=true\n| stats latest(config_hash) as current_hash, latest(config_timestamp) as timestamp by host, location\n| stats dc(current_hash) as unique_configs, count as total_devices by location\n| where unique_configs > 1\n| eval config_drift=\"DETECTED\"\n| join max=1 location [| rest /services/data-model/indexes/approved_fleet_configs | fields location, approved_config_hash, approved_timestamp]\n| where current_hash != approved_config_hash\n```\n\nUnderstanding this SPL\n\n**Configuration Drift Detection Across Fleet** — Ensures all Edge Hubs in a fleet maintain consistent configuration (MQTT topics, OPC-UA endpoints, polling intervals) to prevent data inconsistencies.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=edge_hub config_hash, `index=edge-hub-health` configuration_snapshot. **App/TA** (typical add-on context): Splunk Edge Hub (fleet management), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-health; **sourcetype**: edge_hub. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-health, sourcetype=edge_hub. 
\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Config drift alert, baseline hash variance, deployment history timeline.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: configuration drift detection across fleet before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.44",
              "n": "Local Backlog Monitoring & Data Loss Prevention",
              "c": "critical",
              "f": "intermediate",
              "v": "Prevents silent data loss by monitoring local SQLite backlog capacity and alerting before data is discarded.",
              "t": "Splunk Edge Hub (local storage), Splunk OT Intelligence",
              "d": "`index=edge-hub-health` sourcetype=edge_hub backlog_status, `index=edge-hub-logs` backlog_overflow_event",
              "q": "index=edge-hub-health sourcetype=edge_hub\n| stats latest(backlog_sensor_data_count) as sensor_backlog, latest(backlog_max_capacity) as capacity,\n        latest(backlog_events_lost) as lost_count by host\n| eval backlog_utilization=(sensor_backlog / capacity) * 100\n| eval data_loss_risk=case(\n    backlog_utilization > 95, \"CRITICAL\",\n    backlog_utilization > 80, \"HIGH\",\n    backlog_utilization > 60, \"MEDIUM\",\n    1=1, \"LOW\")\n| where data_loss_risk!=\"LOW\"",
              "m": "Edge Hub tracks SQLite backlog capacity: 3M sensor data points, 100K events (logs/health/anomalies), 100K SNMP data points. Report current backlog size + utilization % to edge-hub-health index every 5 minutes. Alert if utilization exceeds 80% (indicates connectivity outage or ingestion backlog). During Splunk cloud outage, Edge Hub continues logging to local SQLite; upon reconnection, HEC batch processor sends oldest 10K entries in batches of 100 until caught up. Implement alert: if backlog at 95% capacity for >30 minutes, oldest data will be lost (FIFO). Set escalating alert thresholds to trigger remediation.",
              "z": "Backlog utilization gauge, lost data counter, recovery timeline.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (local storage), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-health` sourcetype=edge_hub backlog_status, `index=edge-hub-logs` backlog_overflow_event.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEdge Hub tracks SQLite backlog capacity: 3M sensor data points, 100K events (logs/health/anomalies), 100K SNMP data points. Report current backlog size + utilization % to edge-hub-health index every 5 minutes. Alert if utilization exceeds 80% (indicates connectivity outage or ingestion backlog). During Splunk cloud outage, Edge Hub continues logging to local SQLite; upon reconnection, HEC batch processor sends oldest 10K entries in batches of 100 until caught up. Implement alert: if backlog at 9…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-health sourcetype=edge_hub\n| stats latest(backlog_sensor_data_count) as sensor_backlog, latest(backlog_max_capacity) as capacity,\n        latest(backlog_events_lost) as lost_count by host\n| eval backlog_utilization=(sensor_backlog / capacity) * 100\n| eval data_loss_risk=case(\n    backlog_utilization > 95, \"CRITICAL\",\n    backlog_utilization > 80, \"HIGH\",\n    backlog_utilization > 60, \"MEDIUM\",\n    1=1, \"LOW\")\n| where data_loss_risk!=\"LOW\"\n```\n\nUnderstanding this SPL\n\n**Local Backlog Monitoring & Data Loss Prevention** — Prevents silent data loss by monitoring local SQLite backlog capacity and alerting before data is discarded.\n\nDocumented **Data sources**: `index=edge-hub-health` sourcetype=edge_hub backlog_status, `index=edge-hub-logs` backlog_overflow_event. **App/TA** (typical add-on context): Splunk Edge Hub (local storage), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-health; **sourcetype**: edge_hub. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-health, sourcetype=edge_hub. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **backlog_utilization** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **data_loss_risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where data_loss_risk!=\"LOW\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Backlog utilization gauge, lost data counter, recovery timeline.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: local backlog monitoring & data loss prevention before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.45",
              "n": "USB Camera People Counting for Occupancy & Capacity Management",
              "c": "medium",
              "f": "expert",
              "v": "Enables real-time facility occupancy tracking and automatic alerts when spaces exceed safe capacity thresholds.",
              "t": "Splunk Edge Hub (USB camera + NPU container), v2.1+",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_log people_count_event, `index=edge-hub-data` metric_name=occupancy_count",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log people_count_event\n| stats latest(people_detected) as occupancy, latest(_time) as timestamp, max(people_detected) as peak_occupancy by location, camera_id\n| join max=1 location [| rest /services/data-model/indexes/facility_capacity_limits | fields location, max_occupancy_safe, max_occupancy_emergency]\n| eval capacity_status=case(\n    occupancy > max_occupancy_emergency, \"EMERGENCY_EXCEEDED\",\n    occupancy > max_occupancy_safe, \"OVERCROWDED\",\n    1=1, \"NORMAL\")\n| stats latest(capacity_status) as status, avg(occupancy) as avg_occ by location",
              "m": "Deploy Edge Hub with USB camera (v2.1+ USB passthrough required). Build custom ARM64 container using TensorFlow Lite + OpenCV for person detection + counting (YOLO or MobileNet models work well). Container processes video frames at 1-2 fps, outputs people_count metric to MQTT and event logs. Run inference on NPU (v2.1+) for real-time performance. Configure container resource limits: memory 512MB, CPU 2 cores. Set safe + emergency capacity thresholds per location. Alert if occupancy exceeds safe threshold; trigger intercom/visual alerts if emergency threshold exceeded.",
              "z": "Occupancy trend by location, capacity status heatmap, peak occupancy histogram.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (USB camera + NPU container), v2.1+.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log people_count_event, `index=edge-hub-data` metric_name=occupancy_count.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub with USB camera (v2.1+ USB passthrough required). Build custom ARM64 container using TensorFlow Lite + OpenCV for person detection + counting (YOLO or MobileNet models work well). Container processes video frames at 1-2 fps, outputs people_count metric to MQTT and event logs. Run inference on NPU (v2.1+) for real-time performance. Configure container resource limits: memory 512MB, CPU 2 cores. Set safe + emergency capacity thresholds per location. Alert if occupancy exceeds safe …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log people_count_event\n| stats latest(people_detected) as occupancy, latest(_time) as timestamp, max(people_detected) as peak_occupancy by location, camera_id\n| join max=1 location [| rest /services/data-model/indexes/facility_capacity_limits | fields location, max_occupancy_safe, max_occupancy_emergency]\n| eval capacity_status=case(\n    occupancy > max_occupancy_emergency, \"EMERGENCY_EXCEEDED\",\n    occupancy > max_occupancy_safe, \"OVERCROWDED\",\n    1=1, \"NORMAL\")\n| stats latest(capacity_status) as status, avg(occupancy) as avg_occ by location\n```\n\nUnderstanding this SPL\n\n**USB Camera People Counting for Occupancy & Capacity Management** — Enables real-time facility occupancy tracking and automatic alerts when spaces exceed safe capacity thresholds.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log people_count_event, `index=edge-hub-data` metric_name=occupancy_count. **App/TA** (typical add-on context): Splunk Edge Hub (USB camera + NPU container), v2.1+. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by location, camera_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **capacity_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Occupancy trend by location, capacity status heatmap, peak occupancy histogram.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with when a space looks empty or busy in an odd way before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.46",
              "n": "USB Camera Visual Inspection for Manufacturing Defects",
              "c": "high",
              "f": "advanced",
              "v": "Automates defect detection on assembly lines by running visual inspection models on captured images without human intervention.",
              "t": "Splunk Edge Hub (USB camera + NPU container), v2.1+, OT Intelligence",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_log visual_inspection_event, `index=edge-hub-data` inspection_metadata",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log visual_inspection_event defect_detected=true\n| stats count as defect_count, count(eval(defect_severity=\"CRITICAL\")) as critical_defects by location, product_line\n| eval defect_rate=(defect_count / total_parts_inspected) * 100\n| where defect_rate > acceptable_defect_rate\n| eval quality_alert=\"DEFECT_RATE_EXCEEDED\"",
              "m": "Deploy Edge Hub with USB camera pointing at assembly line. Train TensorFlow Lite object detection model (e.g., SSD MobileNet) on product images with annotated defects (scratches, dents, misalignment, discoloration). Build custom ARM64 container that captures images at takt time (e.g., 1 image per part), runs inference on NPU, logs result (defect_detected=true/false, defect_class, confidence) to edge-hub-logs. Store images locally only if defect detected (privacy + storage). Implement local alerting: if defect severity=CRITICAL, trigger relay to stop conveyor belt. Configure edge.json resource limits: memory 512MB, CPU 2 cores, GPU (NPU) enabled.",
              "z": "Defect detection timeline, defect type distribution pie, quality trend chart.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (USB camera + NPU container), v2.1+, OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log visual_inspection_event, `index=edge-hub-data` inspection_metadata.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub with USB camera pointing at assembly line. Train TensorFlow Lite object detection model (e.g., SSD MobileNet) on product images with annotated defects (scratches, dents, misalignment, discoloration). Build custom ARM64 container that captures images at takt time (e.g., 1 image per part), runs inference on NPU, logs result (defect_detected=true/false, defect_class, confidence) to edge-hub-logs. Store images locally only if defect detected (privacy + storage). Implement local alert…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log visual_inspection_event defect_detected=true\n| stats count as defect_count, count(eval(defect_severity=\"CRITICAL\")) as critical_defects by location, product_line\n| eval defect_rate=(defect_count / total_parts_inspected) * 100\n| where defect_rate > acceptable_defect_rate\n| eval quality_alert=\"DEFECT_RATE_EXCEEDED\"\n```\n\nUnderstanding this SPL\n\n**USB Camera Visual Inspection for Manufacturing Defects** — Automates defect detection on assembly lines by running visual inspection models on captured images without human intervention.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log visual_inspection_event, `index=edge-hub-data` inspection_metadata. **App/TA** (typical add-on context): Splunk Edge Hub (USB camera + NPU container), v2.1+, OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by location, product_line** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **defect_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where defect_rate > acceptable_defect_rate` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **quality_alert** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Defect detection timeline, defect type distribution pie, quality trend chart.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with the situation around: usb camera visual inspection for manufacturing defects before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.47",
              "n": "Custom Python Container for API Integration & Data Enrichment",
              "c": "medium",
              "f": "expert",
              "v": "Integrates Edge Hub data with external APIs (weather, commodity prices, inventory systems) to enrich sensor context without cloud latency.",
              "t": "Splunk Edge Hub (custom container), gRPC SDK, HTTP client",
              "d": "`index=edge-hub-data` enriched_sensor_metrics, `index=edge-hub-logs` enrichment_event",
              "q": "index=edge-hub-data metric_name=temperature\n| join max=1 host [| mstats avg(temperature) as avg_temp by host | eval weather_context_available=1]\n| stats avg(avg_temp) as sensor_temp, latest(external_air_temp_c) as api_air_temp by host\n| eval correlation=correlation(sensor_temp, api_air_temp)\n| where correlation < 0.5\n| eval enrichment_anomaly=\"LOW_CORRELATION\"",
              "m": "Build custom ARM64 container with Python requests library. Container subscribes to sensor data via gRPC API, periodically fetches external data (weather API, stock prices, etc.) via HTTPS, correlates with sensor data, and publishes enriched metrics back to MQTT. Example: fetch external air temperature from weather API every 30 minutes, correlate with Edge Hub inside temperature to detect HVAC failures. Implement caching layer to minimize API calls. Store enrichment logs locally. Configure edge.json: memory 256MB, CPU 1 core. Container runs as non-root (v2.0+).",
              "z": "Sensor vs API correlation scatter, enrichment success rate trend, external data staleness timeline.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (custom container), gRPC SDK, HTTP client.\n• Ensure the following data sources are available: `index=edge-hub-data` enriched_sensor_metrics, `index=edge-hub-logs` enrichment_event.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild custom ARM64 container with Python requests library. Container subscribes to sensor data via gRPC API, periodically fetches external data (weather API, stock prices, etc.) via HTTPS, correlates with sensor data, and publishes enriched metrics back to MQTT. Example: fetch external air temperature from weather API every 30 minutes, correlate with Edge Hub inside temperature to detect HVAC failures. Implement caching layer to minimize API calls. Store enrichment logs locally. Configure edge.j…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-data metric_name=temperature\n| join max=1 host [| mstats avg(temperature) as avg_temp by host | eval weather_context_available=1]\n| stats avg(avg_temp) as sensor_temp, latest(external_air_temp_c) as api_air_temp by host\n| eval correlation=correlation(sensor_temp, api_air_temp)\n| where correlation < 0.5\n| eval enrichment_anomaly=\"LOW_CORRELATION\"\n```\n\nUnderstanding this SPL\n\n**Custom Python Container for API Integration & Data Enrichment** — Integrates Edge Hub data with external APIs (weather, commodity prices, inventory systems) to enrich sensor context without cloud latency.\n\nDocumented **Data sources**: `index=edge-hub-data` enriched_sensor_metrics, `index=edge-hub-logs` enrichment_event. **App/TA** (typical add-on context): Splunk Edge Hub (custom container), gRPC SDK, HTTP client. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-data.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-data. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **correlation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where correlation < 0.5` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **enrichment_anomaly** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Sensor vs API correlation scatter, enrichment success rate trend, external data staleness timeline.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with the situation around: custom python container for api integration & data enrichment before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.48",
              "n": "Pressure & Humidity Sensor Correlation for Leakage Detection",
              "c": "high",
              "f": "expert",
              "v": "Detects water leaks or condensation damage early by correlating pressure drop with humidity rise in sealed enclosures.",
              "t": "Splunk Edge Hub (pressure + humidity sensors), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` metric_name=pressure OR metric_name=humidity, `sourcetype=edge_hub`",
              "q": "| mstats avg(pressure) as press, avg(humidity) as rh by host, _time span=5m\n| delta press as press_change\n| stats avg(press_change) as avg_press_delta, stdev(rh) as rh_volatility by host\n| eval leak_risk=case(\n    avg_press_delta < -0.5 AND rh_volatility > 10, \"CRITICAL_LEAK\",\n    avg_press_delta < -0.2 AND rh_volatility > 5, \"POTENTIAL_LEAK\",\n    1=1, \"NORMAL\")\n| where leak_risk!=\"NORMAL\"",
              "m": "Deploy Edge Hub in sealed enclosure (electrical room, equipment cabinet) with optional pressure sensor (±0.12 hPa accuracy) and built-in humidity sensor exposed to enclosure air. Configure 5-minute polling. Monitor pressure trend: sealed enclosure pressure should remain stable (±1 hPa). Pressure drop + humidity rise = leakage from outside or failed seal. Humidity rise alone = internal moisture generation (faulty equipment). Implement baseline: first week = normal enclosure profile. Alert if pressure drops >1 hPa/hour (rapid leak). Store local SQLite data for 30+ days to track seasonal humidity variations. Trigger maintenance ticket on leak detection.",
              "z": "Pressure vs humidity scatter plot, leak risk gauge, enclosure seal integrity trend.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (pressure + humidity sensors), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=pressure OR metric_name=humidity, `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub in sealed enclosure (electrical room, equipment cabinet) with optional pressure sensor (±0.12 hPa accuracy) and built-in humidity sensor exposed to enclosure air. Configure 5-minute polling. Monitor pressure trend: sealed enclosure pressure should remain stable (±1 hPa). Pressure drop + humidity rise = leakage from outside or failed seal. Humidity rise alone = internal moisture generation (faulty equipment). Implement baseline: first week = normal enclosure profile. Alert if pres…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(pressure) as press, avg(humidity) as rh by host, _time span=5m\n| delta press as press_change\n| stats avg(press_change) as avg_press_delta, stdev(rh) as rh_volatility by host\n| eval leak_risk=case(\n    avg_press_delta < -0.5 AND rh_volatility > 10, \"CRITICAL_LEAK\",\n    avg_press_delta < -0.2 AND rh_volatility > 5, \"POTENTIAL_LEAK\",\n    1=1, \"NORMAL\")\n| where leak_risk!=\"NORMAL\"\n```\n\nUnderstanding this SPL\n\n**Pressure & Humidity Sensor Correlation for Leakage Detection** — Detects water leaks or condensation damage early by correlating pressure drop with humidity rise in sealed enclosures.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=pressure OR metric_name=humidity, `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (pressure + humidity sensors), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Pipeline stage (see **Pressure & Humidity Sensor Correlation for Leakage Detection**): delta press as press_change\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **leak_risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where leak_risk!=\"NORMAL\"` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pressure vs humidity scatter plot, leak risk gauge, enclosure seal integrity trend.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with sticky air, dry air, and condensation risk before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.49",
              "n": "Sound Level & Frequency Band Monitoring for Regulatory Compliance",
              "c": "medium",
              "f": "advanced",
              "v": "Monitors workplace noise levels to ensure OSHA compliance (90 dB over 8 hours) and tracks frequency bands for hearing loss risk.",
              "t": "Splunk Edge Hub (stereo microphone + custom container), gRPC SDK",
              "d": "`index=edge-hub-data` metric_name=sound_level_db OR metric_name=frequency_band_power, `sourcetype=edge_hub`",
              "q": "| mstats avg(sound_level_db) as avg_db, max(sound_level_db) as peak_db by location, _time span=1h\n| stats avg(avg_db) as hourly_avg_db, max(peak_db) as hourly_peak_db by location\n| eval osha_exposure_rating=case(\n    hourly_avg_db >= 90, \"NO_PROTECTION_REQUIRED\",\n    hourly_avg_db >= 85, \"HEARING_PROTECTION_REQUIRED\",\n    1=1, \"SAFE\")\n| where osha_exposure_rating=\"HEARING_PROTECTION_REQUIRED\"",
              "m": "Deploy Edge Hub with stereo microphone in warehouse/factory/airport locations. Configure 1-hour aggregation for OSHA 8-hour TWA (time-weighted average). Build custom container that computes: (1) dB(A) sound pressure level (apply A-weighting curve), (2) frequency band powers (125Hz, 250Hz, 500Hz, 1kHz, 2kHz, 4kHz, 8kHz octave bands). Store hourly averages in edge-hub-data metrics. Alert if hourly average exceeds 85 dB (OSHA hearing protection threshold). Log high-frequency band power (4-8kHz) for hearing loss risk assessment. Note: Do not stream raw audio; only log processed metrics for privacy.",
              "z": "Noise level trend by location, frequency band heatmap, OSHA compliance status.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (stereo microphone + custom container), gRPC SDK.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=sound_level_db OR metric_name=frequency_band_power, `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub with stereo microphone in warehouse/factory/airport locations. Configure 1-hour aggregation for OSHA 8-hour TWA (time-weighted average). Build custom container that computes: (1) dB(A) sound pressure level (apply A-weighting curve), (2) frequency band powers (125Hz, 250Hz, 500Hz, 1kHz, 2kHz, 4kHz, 8kHz octave bands). Store hourly averages in edge-hub-data metrics. Alert if hourly average exceeds 85 dB (OSHA hearing protection threshold). Log high-frequency band power (4-8kHz) for…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(sound_level_db) as avg_db, max(sound_level_db) as peak_db by location, _time span=1h\n| stats avg(avg_db) as hourly_avg_db, max(peak_db) as hourly_peak_db by location\n| eval osha_exposure_rating=case(\n    hourly_avg_db >= 90, \"NO_PROTECTION_REQUIRED\",\n    hourly_avg_db >= 85, \"HEARING_PROTECTION_REQUIRED\",\n    1=1, \"SAFE\")\n| where osha_exposure_rating=\"HEARING_PROTECTION_REQUIRED\"\n```\n\nUnderstanding this SPL\n\n**Sound Level & Frequency Band Monitoring for Regulatory Compliance** — Monitors workplace noise levels to ensure OSHA compliance (90 dB over 8 hours) and tracks frequency bands for hearing loss risk.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=sound_level_db OR metric_name=frequency_band_power, `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (stereo microphone + custom container), gRPC SDK. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `stats` rolls up events into metrics; results are split **by location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **osha_exposure_rating** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where osha_exposure_rating=\"HEARING_PROTECTION_REQUIRED\"` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Noise level trend by location, frequency band heatmap, OSHA compliance status.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with loud or unsafe noise in the work area before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.50",
              "n": "Accelerometer-Based Fall Detection & Impact Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Detects equipment falls or impacts (e.g., dropped sensors, dropped parts on conveyor) to trigger automatic alerts and prevent asset loss.",
              "t": "Splunk Edge Hub (3-axis accelerometer + custom container)",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_log impact_event, `index=edge-hub-data` metric_name=acceleration_magnitude",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log impact_event=true\n| stats count as impact_count, max(peak_acceleration_g) as max_impact_g by device_id, location\n| eval impact_severity=case(\n    max_impact_g > 15, \"CRITICAL_DAMAGE_RISK\",\n    max_impact_g > 10, \"SEVERE_IMPACT\",\n    max_impact_g > 5, \"MODERATE_IMPACT\",\n    1=1, \"LIGHT_IMPACT\")\n| where impact_severity!=\"LIGHT_IMPACT\"",
              "m": "Build custom ARM64 container that monitors 3-axis accelerometer data via gRPC API in real-time (100Hz sampling). Implement impact detection: compute magnitude sqrt(ax^2 + ay^2 + az^2), apply high-pass filter to remove gravity component, detect transient spikes > 5g lasting < 500ms (characteristic of impacts). Log impact events with peak acceleration and timestamp. Configure local alerting: if peak > 15g, trigger relay to activate warning LED/buzzer. Store impact history locally (100K backlog) for root cause analysis. Use for monitoring fragile sensor deployments or tracking dropped parts on assembly lines.",
              "z": "Impact event timeline, severity distribution histogram, peak acceleration trend.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (3-axis accelerometer + custom container).\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log impact_event, `index=edge-hub-data` metric_name=acceleration_magnitude.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild custom ARM64 container that monitors 3-axis accelerometer data via gRPC API in real-time (100Hz sampling). Implement impact detection: compute magnitude sqrt(ax^2 + ay^2 + az^2), apply high-pass filter to remove gravity component, detect transient spikes > 5g lasting < 500ms (characteristic of impacts). Log impact events with peak acceleration and timestamp. Configure local alerting: if peak > 15g, trigger relay to activate warning LED/buzzer. Store impact history locally (100K backlog) fo…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log impact_event=true\n| stats count as impact_count, max(peak_acceleration_g) as max_impact_g by device_id, location\n| eval impact_severity=case(\n    max_impact_g > 15, \"CRITICAL_DAMAGE_RISK\",\n    max_impact_g > 10, \"SEVERE_IMPACT\",\n    max_impact_g > 5, \"MODERATE_IMPACT\",\n    1=1, \"LIGHT_IMPACT\")\n| where impact_severity!=\"LIGHT_IMPACT\"\n```\n\nUnderstanding this SPL\n\n**Accelerometer-Based Fall Detection & Impact Monitoring** — Detects equipment falls or impacts (e.g., dropped sensors, dropped parts on conveyor) to trigger automatic alerts and prevent asset loss.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log impact_event, `index=edge-hub-data` metric_name=acceleration_magnitude. **App/TA** (typical add-on context): Splunk Edge Hub (3-axis accelerometer + custom container). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_id, location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **impact_severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where impact_severity!=\"LIGHT_IMPACT\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Impact event timeline, severity distribution histogram, peak acceleration trend.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with shaking that might mean a bearing, balance, or mount problem before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.51",
              "n": "Temperature & Humidity Sensor Calibration Drift Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Detects when sensors exceed acceptable calibration drift to trigger preventive recalibration and ensure measurement accuracy.",
              "t": "Splunk Edge Hub (temperature + humidity sensors), Splunk OT Intelligence",
              "d": "`index=edge-hub-health` sourcetype=edge_hub sensor_calibration_status, `index=edge-hub-data` sensor_drift_metric",
              "q": "index=edge-hub-health sourcetype=edge_hub sensor_type=temperature OR sensor_type=humidity\n| stats latest(last_calibration_date) as last_cal, latest(sensor_drift_percent) as drift by sensor_type, host\n| eval days_since_calibration=(now() - strptime(last_cal, \"%Y-%m-%d\")) / 86400\n| eval calibration_status=case(\n    drift > 5 OR days_since_calibration > 365, \"OUT_OF_SPEC\",\n    drift > 2 OR days_since_calibration > 180, \"MARGINAL\",\n    1=1, \"GOOD\")\n| where calibration_status!=\"GOOD\"",
              "m": "Edge Hub firmware tracks sensor calibration date and calculates drift estimate (comparison to stable reference or statistical baseline). Temperature sensor nominal accuracy ±0.2°C; alert if drift exceeds ±0.5°C (±2.5x drift). Humidity sensor nominal accuracy ±2%; alert if drift exceeds ±5% RH (±2.5x drift). Report calibration status to edge-hub-health every week. Recommend recalibration every 12 months or after >2% drift detected. Store calibration history in edge-hub-logs for audit trail. For critical environments (pharmaceutical, food), set more aggressive drift thresholds (±1% per year).",
              "z": "Sensor drift gauge, calibration status list, recalibration due timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (temperature + humidity sensors), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-health` sourcetype=edge_hub sensor_calibration_status, `index=edge-hub-data` sensor_drift_metric.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEdge Hub firmware tracks sensor calibration date and calculates drift estimate (comparison to stable reference or statistical baseline). Temperature sensor nominal accuracy ±0.2°C; alert if drift exceeds ±0.5°C (±2.5x drift). Humidity sensor nominal accuracy ±2%; alert if drift exceeds ±5% RH (±2.5x drift). Report calibration status to edge-hub-health every week. Recommend recalibration every 12 months or after >2% drift detected. Store calibration history in edge-hub-logs for audit trail. For c…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-health sourcetype=edge_hub sensor_type=temperature OR sensor_type=humidity\n| stats latest(last_calibration_date) as last_cal, latest(sensor_drift_percent) as drift by sensor_type, host\n| eval days_since_calibration=(now() - strptime(last_cal, \"%Y-%m-%d\")) / 86400\n| eval calibration_status=case(\n    drift > 5 OR days_since_calibration > 365, \"OUT_OF_SPEC\",\n    drift > 2 OR days_since_calibration > 180, \"MARGINAL\",\n    1=1, \"GOOD\")\n| where calibration_status!=\"GOOD\"\n```\n\nUnderstanding this SPL\n\n**Temperature & Humidity Sensor Calibration Drift Detection** — Detects when sensors exceed acceptable calibration drift to trigger preventive recalibration and ensure measurement accuracy.\n\nDocumented **Data sources**: `index=edge-hub-health` sourcetype=edge_hub sensor_calibration_status, `index=edge-hub-data` sensor_drift_metric. **App/TA** (typical add-on context): Splunk Edge Hub (temperature + humidity sensors), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-health; **sourcetype**: edge_hub. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-health, sourcetype=edge_hub. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sensor_type, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since_calibration** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **calibration_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where calibration_status!=\"GOOD\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sensor drift gauge, calibration status list, recalibration due timeline.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with uncomfortable or unsafe hot and cold spots before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.52",
              "n": "Light Sensor Ambient Light Level Anomaly Detection",
              "c": "medium",
              "f": "advanced",
              "v": "Detects sudden lighting failures or unauthorized facility access by monitoring ambient light level anomalies.",
              "t": "Splunk Edge Hub (light sensor), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` metric_name=light_level, `sourcetype=edge_hub`",
              "q": "| mstats avg(light_level) as lux by location, _time span=5m\n| stats avg(lux) as baseline_lux, stdev(lux) as lux_std by location\n| relative_entropy baseline_lux, lux\n| where lux < (baseline_lux - 3*lux_std)\n| eval light_anomaly=case(\n    lux < 10, \"LIGHTS_OFF_OR_DARKNESS\",\n    lux < (baseline_lux / 2), \"SEVERE_DIMMING\",\n    1=1, \"MODERATE_DIMMING\")",
              "m": "Deploy Edge Hub light sensor in areas with regular light schedule (e.g., office hours 8am-6pm, lights expected 200-500 lux). Collect 7-day baseline to learn normal lighting schedule. Use built-in kNN anomaly detection to flag sudden light level changes (e.g., lights switched off during business hours = facility access anomaly). Alert if lux drops below 10 for extended period (darkness = potential theft/intrusion). Configure 5-minute polling interval. Light sensor high sensitivity range: 0-65535 lux. Store local SQLite data for 30+ days to track seasonal lighting changes.",
              "z": "Light level trend by location, anomaly detection timeline, darkness event log.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (light sensor), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=light_level, `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub light sensor in areas with regular light schedule (e.g., office hours 8am-6pm, lights expected 200-500 lux). Collect 7-day baseline to learn normal lighting schedule. Use built-in kNN anomaly detection to flag sudden light level changes (e.g., lights switched off during business hours = facility access anomaly). Alert if lux drops below 10 for extended period (darkness = potential theft/intrusion). Configure 5-minute polling interval. Light sensor high sensitivity range: 0-65535 …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(light_level) as lux by location, _time span=5m\n| stats avg(lux) as baseline_lux, stdev(lux) as lux_std by location\n| relative_entropy baseline_lux, lux\n| where lux < (baseline_lux - 3*lux_std)\n| eval light_anomaly=case(\n    lux < 10, \"LIGHTS_OFF_OR_DARKNESS\",\n    lux < (baseline_lux / 2), \"SEVERE_DIMMING\",\n    1=1, \"MODERATE_DIMMING\")\n```\n\nUnderstanding this SPL\n\n**Light Sensor Ambient Light Level Anomaly Detection** — Detects sudden lighting failures or unauthorized facility access by monitoring ambient light level anomalies.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=light_level, `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (light sensor), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `stats` rolls up events into metrics; results are split **by location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Light Sensor Ambient Light Level Anomaly Detection**): relative_entropy baseline_lux, lux\n• Filters the current rows with `where lux < (baseline_lux - 3*lux_std)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **light_anomaly** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Light level trend by location, anomaly detection timeline, darkness event log.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with light levels and sudden darkness when you would expect lights on before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.53",
              "n": "Vibration Magnitude Threshold Monitoring for Equipment Protection",
              "c": "critical",
              "f": "expert",
              "v": "Protects precision equipment from damage by triggering automatic shutdowns when vibration exceeds safe operating thresholds.",
              "t": "Splunk Edge Hub (3-axis accelerometer + custom container), gRPC SDK",
              "d": "`index=edge-hub-data` metric_name=vibration_magnitude, `index=edge-hub-logs` vibration_threshold_event",
              "q": "| mstats max(vibration_magnitude) as peak_vib by equipment_id, _time span=10s\n| eval equipment_class=\"PRECISION_MACHINERY\"\n| join max=1 equipment_class [| rest /services/data-model/indexes/equipment_vibration_limits | fields equipment_class, vibration_max_safe_g, vibration_alarm_g]\n| where peak_vib > vibration_alarm_g\n| eval shutdown_required=\"YES\"\n| stats count as alarm_count, max(peak_vib) as max_vibration by equipment_id",
              "m": "Deploy Edge Hub accelerometer on precision equipment (CNC machine, semiconductor wafer scanner, optical alignment tool). Configure 10-second rolling window for vibration magnitude calculation. Set alarm threshold based on equipment manufacturer specs (typical: 3-5g for precision machinery). Build custom container that monitors vibration in real-time and triggers GPIO relay to cut equipment power if threshold exceeded (safety interlock). Store vibration magnitude in edge-hub-data metrics. Implement hierarchical alerts: 80% threshold = warning, 100% threshold = equipment shutdown. Local alert response avoids cloud latency (critical for safety).",
              "z": "Vibration magnitude trend, threshold exceedance timeline, equipment protection status.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (3-axis accelerometer + custom container), gRPC SDK.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=vibration_magnitude, `index=edge-hub-logs` vibration_threshold_event.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub accelerometer on precision equipment (CNC machine, semiconductor wafer scanner, optical alignment tool). Configure 10-second rolling window for vibration magnitude calculation. Set alarm threshold based on equipment manufacturer specs (typical: 3-5g for precision machinery). Build custom container that monitors vibration in real-time and triggers GPIO relay to cut equipment power if threshold exceeded (safety interlock). Store vibration magnitude in edge-hub-data metrics. Impleme…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats max(vibration_magnitude) as peak_vib by equipment_id, _time span=10s\n| eval equipment_class=\"PRECISION_MACHINERY\"\n| join max=1 equipment_class [| rest /services/data-model/indexes/equipment_vibration_limits | fields equipment_class, vibration_max_safe_g, vibration_alarm_g]\n| where peak_vib > vibration_alarm_g\n| eval shutdown_required=\"YES\"\n| stats count as alarm_count, max(peak_vib) as max_vibration by equipment_id\n```\n\nUnderstanding this SPL\n\n**Vibration Magnitude Threshold Monitoring for Equipment Protection** — Protects precision equipment from damage by triggering automatic shutdowns when vibration exceeds safe operating thresholds.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=vibration_magnitude, `index=edge-hub-logs` vibration_threshold_event. **App/TA** (typical add-on context): Splunk Edge Hub (3-axis accelerometer + custom container), gRPC SDK. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **equipment_class** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where peak_vib > vibration_alarm_g` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **shutdown_required** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by equipment_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Vibration magnitude trend, threshold exceedance timeline, equipment protection status.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with shaking that might mean a bearing, balance, or mount problem before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.54",
              "n": "Multi-Zone Temperature Gradient Monitoring for Optimal Environment",
              "c": "medium",
              "f": "advanced",
              "v": "Monitors temperature gradients across facility zones to optimize HVAC distribution and detect unequal cooling/heating.",
              "t": "Splunk Edge Hub (multiple temperature sensors via MQTT or external probes), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` metric_name=temperature, `sourcetype=edge_hub` zone=*",
              "q": "| mstats avg(temperature) as zone_temp by zone, _time span=5m\n| stats avg(zone_temp) as avg_zone_temp by zone\n| eventstats avg(avg_zone_temp) as facility_avg_temp\n| eval temp_offset=(avg_zone_temp - facility_avg_temp)\n| stats max(abs(temp_offset)) as max_gradient by zone\n| where max_gradient > 3\n| eval gradient_alert=\"HVAC_IMBALANCE\"",
              "m": "Deploy Edge Hub in central location with MQTT subscriptions to external temperature sensors in multiple zones (Advanced Settings → MQTT Subscriptions). Or use external probes connected to I²C port (3.5mm jack). Configure 5-minute polling to capture HVAC response dynamics. Acceptable temperature gradient: ±1-2°C across zones. Gradient > 3°C indicates HVAC distribution issue (blocked duct, stuck valve). Store zone temperatures in edge-hub-data metrics. Implement trend analysis: if gradient increasing over days = duct blockage. If gradient constant but offset = thermostat miscalibration. Alert HVAC maintenance if gradient exceeds threshold.",
              "z": "Zone temperature heatmap, gradient trend, HVAC balance status.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (multiple temperature sensors via MQTT or external probes), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` metric_name=temperature, `sourcetype=edge_hub` zone=*.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub in central location with MQTT subscriptions to external temperature sensors in multiple zones (Advanced Settings → MQTT Subscriptions). Or use external probes connected to I²C port (3.5mm jack). Configure 5-minute polling to capture HVAC response dynamics. Acceptable temperature gradient: ±1-2°C across zones. Gradient > 3°C indicates HVAC distribution issue (blocked duct, stuck valve). Store zone temperatures in edge-hub-data metrics. Implement trend analysis: if gradient increas…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(temperature) as zone_temp by zone, _time span=5m\n| stats avg(zone_temp) as avg_zone_temp by zone\n| eventstats avg(avg_zone_temp) as facility_avg_temp\n| eval temp_offset=(avg_zone_temp - facility_avg_temp)\n| stats max(abs(temp_offset)) as max_gradient by zone\n| where max_gradient > 3\n| eval gradient_alert=\"HVAC_IMBALANCE\"\n```\n\nUnderstanding this SPL\n\n**Multi-Zone Temperature Gradient Monitoring for Optimal Environment** — Monitors temperature gradients across facility zones to optimize HVAC distribution and detect unequal cooling/heating.\n\nDocumented **Data sources**: `index=edge-hub-data` metric_name=temperature, `sourcetype=edge_hub` zone=*. **App/TA** (typical add-on context): Splunk Edge Hub (multiple temperature sensors via MQTT or external probes), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `stats` rolls up events into metrics; results are split **by zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **temp_offset** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where max_gradient > 3` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **gradient_alert** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Zone temperature heatmap, gradient trend, HVAC balance status.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with uncomfortable or unsafe hot and cold spots before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.55",
              "n": "Acoustic Anomaly Detection for Equipment Health Assessment",
              "c": "medium",
              "f": "expert",
              "v": "Identifies subtle equipment changes (bearing looseness, gearbox wear) by detecting acoustic signature shifts without manual FFT analysis.",
              "t": "Splunk Edge Hub (stereo microphone + ML container), v2.1+",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_log acoustic_anomaly_event, `index=edge-hub-data` acoustic_baseline_deviation",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log acoustic_classification_event\n| stats latest(acoustic_anomaly_score) as anomaly_score, count as detections by equipment_id\n| where anomaly_score > 0.7\n| eval equipment_health=\"DEGRADED\"\n| stats count(eval(equipment_health=\"DEGRADED\")) as degraded_count by facility\n| where degraded_count > 0",
              "m": "Build custom ARM64 container with TensorFlow Lite audio anomaly detection model (autoencoder or isolation forest on MFCC spectral features). Container captures 5-second audio windows at 1-minute intervals, extracts MFCC features, computes reconstruction error vs baseline model (trained on normal equipment sounds), outputs anomaly_score (0-1). Score > 0.7 = significant acoustic change. Deploy NPU inference (v2.1+) for real-time processing. Store anomaly events locally (100K backlog). Useful for early detection of bearing wear, compressor cavitation, motor bearing looseness before catastrophic failure. Do not stream raw audio to cloud (privacy).",
              "z": "Anomaly score timeline, detection frequency histogram, equipment health trend.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (stereo microphone + ML container), v2.1+.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log acoustic_anomaly_event, `index=edge-hub-data` acoustic_baseline_deviation.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild custom ARM64 container with TensorFlow Lite audio anomaly detection model (autoencoder or isolation forest on MFCC spectral features). Container captures 5-second audio windows at 1-minute intervals, extracts MFCC features, computes reconstruction error vs baseline model (trained on normal equipment sounds), outputs anomaly_score (0-1). Score > 0.7 = significant acoustic change. Deploy NPU inference (v2.1+) for real-time processing. Store anomaly events locally (100K backlog). Useful for e…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log acoustic_classification_event\n| stats latest(acoustic_anomaly_score) as anomaly_score, count as detections by equipment_id\n| where anomaly_score > 0.7\n| eval equipment_health=\"DEGRADED\"\n| stats count(eval(equipment_health=\"DEGRADED\")) as degraded_count by facility\n| where degraded_count > 0\n```\n\nUnderstanding this SPL\n\n**Acoustic Anomaly Detection for Equipment Health Assessment** — Identifies subtle equipment changes (bearing looseness, gearbox wear) by detecting acoustic signature shifts without manual FFT analysis.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log acoustic_anomaly_event, `index=edge-hub-data` acoustic_baseline_deviation. **App/TA** (typical add-on context): Splunk Edge Hub (stereo microphone + ML container), v2.1+. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by equipment_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where anomaly_score > 0.7` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **equipment_health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by facility** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where degraded_count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Anomaly score timeline, detection frequency histogram, equipment health trend.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with dust in the air that can bother lungs before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.56",
              "n": "MQTT Topic Latency & Message Loss Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Ensures MQTT message delivery reliability by tracking topic latency and detecting lost or delayed messages.",
              "t": "Splunk Edge Hub (MQTT broker + client), Splunk OT Intelligence",
              "d": "`index=edge-hub-logs` sourcetype=splunk_edge_hub_log mqtt_latency_event, `index=edge-hub-health` sourcetype=edge_hub mqtt_broker_health",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log mqtt_latency_event\n| stats avg(publish_to_receive_latency_ms) as avg_latency, max(publish_to_receive_latency_ms) as peak_latency,\n        count(eval(latency_ms > 5000)) as slow_messages by mqtt_topic, host\n| eval latency_status=case(\n    avg_latency > 1000, \"SEVERE_DELAY\",\n    avg_latency > 500, \"SLOW\",\n    1=1, \"NORMAL\")\n| where latency_status!=\"NORMAL\"",
              "m": "Configure MQTT subscriptions with latency tracking enabled (Advanced Settings → MQTT Subscriptions). Edge Hub MQTT client publishes test messages with timestamp to topics, subscribes to responses, measures round-trip latency. Monitor message sequence numbers to detect loss (gap in sequence = lost message). Store latency metrics in edge-hub-logs. Typical acceptable latency: < 500ms for sensor data (< 1s for anomaly alerts). Alert if average latency exceeds 1s (indicates broker congestion or network saturation). Check MQTT broker resource usage (CPU, memory, subscriber count) if latency degrades. Local SQLite backlog ensures no latency data is lost.",
              "z": "MQTT latency trend by topic, message loss rate, broker health timeline.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (MQTT broker + client), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log mqtt_latency_event, `index=edge-hub-health` sourcetype=edge_hub mqtt_broker_health.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure MQTT subscriptions with latency tracking enabled (Advanced Settings → MQTT Subscriptions). Edge Hub MQTT client publishes test messages with timestamp to topics, subscribes to responses, measures round-trip latency. Monitor message sequence numbers to detect loss (gap in sequence = lost message). Store latency metrics in edge-hub-logs. Typical acceptable latency: < 500ms for sensor data (< 1s for anomaly alerts). Alert if average latency exceeds 1s (indicates broker congestion or netwo…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log mqtt_latency_event\n| stats avg(publish_to_receive_latency_ms) as avg_latency, max(publish_to_receive_latency_ms) as peak_latency,\n        count(eval(latency_ms > 5000)) as slow_messages by mqtt_topic, host\n| eval latency_status=case(\n    avg_latency > 1000, \"SEVERE_DELAY\",\n    avg_latency > 500, \"SLOW\",\n    1=1, \"NORMAL\")\n| where latency_status!=\"NORMAL\"\n```\n\nUnderstanding this SPL\n\n**MQTT Topic Latency & Message Loss Monitoring** — Ensures MQTT message delivery reliability by tracking topic latency and detecting lost or delayed messages.\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=splunk_edge_hub_log mqtt_latency_event, `index=edge-hub-health` sourcetype=edge_hub mqtt_broker_health. **App/TA** (typical add-on context): Splunk Edge Hub (MQTT broker + client), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by mqtt_topic, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **latency_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where latency_status!=\"NORMAL\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: MQTT latency trend by topic, message loss rate, broker health timeline.",
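              "_splSketch": "Optional companion search for the message-loss half of this use case: a minimal sketch, assuming a sequence_number field is present on each MQTT latency event (verify against your ingested data before alerting on it).\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log mqtt_latency_event\n| sort 0 mqtt_topic, _time\n| streamstats current=f last(sequence_number) as prev_seq by mqtt_topic\n| eval gap = sequence_number - prev_seq - 1\n| where gap > 0\n| stats sum(gap) as lost_messages, count as observed_messages by mqtt_topic\n```",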
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with this area: mqtt topic latency & message loss before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.3.57",
              "n": "Temperature Sensor Response Time Validation & Lag Detection",
              "c": "medium",
              "f": "advanced",
              "v": "Validates temperature sensor response time to ensure rapid detection of thermal events (e.g., fire detection latency < 30 seconds).",
              "t": "Splunk Edge Hub (temperature sensor), Splunk OT Intelligence",
              "d": "`index=edge-hub-logs` sourcetype=edge_hub temperature_response_test, `index=edge-hub-data` sensor_response_metrics",
              "q": "index=edge-hub-logs sourcetype=edge_hub temperature_response_test=true stimulus_type=heat_pulse\n| stats latest(stimulus_start_time) as heat_start, latest(temperature_rise_detected_time) as detection_time by sensor_id\n| eval response_latency_seconds=round((detection_time - heat_start) / 1000, 1)\n| stats avg(response_latency_seconds) as avg_response_time, max(response_latency_seconds) as worst_case by sensor_id\n| where avg_response_time > 30 OR worst_case > 60\n| eval sensor_status=\"SLOW_RESPONSE\"",
              "m": "Implement quarterly temperature sensor response test: apply controlled heat source (heat lamp, hot water bath) near sensor, record time from stimulus application to temperature rise detection (configurable threshold: +5°C from baseline). Temperature sensor response time (Edge Hub spec): ~1-5 seconds in air, ~10-30 seconds in slow-moving air. Response time > 60 seconds indicates sensor degradation (fouled sensing element, thermal insulation issue). Store test results in edge-hub-logs. Alert if average response time exceeds equipment-specific safety limit (e.g., fire detection requires < 30 second response). Use test data for recalibration/replacement decisions.",
              "z": "Sensor response time trend, test results timeline, response time validation pass/fail status.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (temperature sensor), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-logs` sourcetype=edge_hub temperature_response_test, `index=edge-hub-data` sensor_response_metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nImplement quarterly temperature sensor response test: apply controlled heat source (heat lamp, hot water bath) near sensor, record time from stimulus application to temperature rise detection (configurable threshold: +5°C from baseline). Temperature sensor response time (Edge Hub spec): ~1-5 seconds in air, ~10-30 seconds in slow-moving air. Response time > 60 seconds indicates sensor degradation (fouled sensing element, thermal insulation issue). Store test results in edge-hub-logs. Alert if av…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=edge_hub temperature_response_test=true stimulus_type=heat_pulse\n| stats latest(stimulus_start_time) as heat_start, latest(temperature_rise_detected_time) as detection_time by sensor_id\n| eval response_latency_seconds=round((detection_time - heat_start) / 1000, 1)\n| stats avg(response_latency_seconds) as avg_response_time, max(response_latency_seconds) as worst_case by sensor_id\n| where avg_response_time > 30 OR worst_case > 60\n| eval sensor_status=\"SLOW_RESPONSE\"\n```\n\nUnderstanding this SPL\n\n**Temperature Sensor Response Time Validation & Lag Detection** — Validates temperature sensor response time to ensure rapid detection of thermal events (e.g., fire detection latency < 30 seconds).\n\nDocumented **Data sources**: `index=edge-hub-logs` sourcetype=edge_hub temperature_response_test, `index=edge-hub-data` sensor_response_metrics. **App/TA** (typical add-on context): Splunk Edge Hub (temperature sensor), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: edge_hub. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=edge_hub. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sensor_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **response_latency_seconds** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by sensor_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_response_time > 30 OR worst_case > 60` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **sensor_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sensor response time trend, test results timeline, response time validation pass/fail status.",
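              "_splSketch": "Optional companion search for the pass/fail status called out under Visualization: a minimal sketch reusing the same fields as the documented SPL, with a 30-second pass threshold (adjust to the equipment-specific safety limit).\n\n```spl\nindex=edge-hub-logs sourcetype=edge_hub temperature_response_test=true stimulus_type=heat_pulse\n| stats latest(stimulus_start_time) as heat_start, latest(temperature_rise_detected_time) as detection_time by sensor_id\n| eval response_latency_seconds=round((detection_time - heat_start) / 1000, 1)\n| eval test_result=if(response_latency_seconds <= 30, \"PASS\", \"FAIL\")\n| table sensor_id, response_latency_seconds, test_result\n```",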
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what edge sensors report from the physical world. We help you know if something is off with uncomfortable or unsafe hot and cold spots before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 56,
            "none": 0
          }
        },
        {
          "i": "14.4",
          "n": "IoT Platforms & Sensors",
          "u": [
            {
              "i": "14.4.1",
              "n": "Smart Sensor Fleet Health",
              "c": "high",
              "f": "beginner",
              "v": "IoT sensors with low batteries or offline status create monitoring gaps. Fleet management ensures comprehensive coverage.",
              "t": "IoT platform API input",
              "d": "IoT platform device management API",
              "q": "index=iot sourcetype=\"iot_platform:devices\"\n| where battery_pct < 20 OR status!=\"online\" OR last_seen < relative_time(now(),\"-4h\")\n| table device_id, device_type, location, battery_pct, status, last_seen",
              "m": "Poll IoT platform API for device status. Track battery levels, connectivity, and data freshness. Alert on low battery (<20%) and offline devices (>4 hours). Report on fleet health for maintenance planning.",
              "z": "Table (devices needing attention), Gauge (fleet health %), Pie chart (device status distribution), Map (device locations with status).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IoT platform API input.\n• Ensure the following data sources are available: IoT platform device management API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll IoT platform API for device status. Track battery levels, connectivity, and data freshness. Alert on low battery (<20%) and offline devices (>4 hours). Report on fleet health for maintenance planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"iot_platform:devices\"\n| where battery_pct < 20 OR status!=\"online\" OR last_seen < relative_time(now(),\"-4h\")\n| table device_id, device_type, location, battery_pct, status, last_seen\n```\n\nUnderstanding this SPL\n\n**Smart Sensor Fleet Health** — IoT sensors with low batteries or offline status create monitoring gaps. Fleet management ensures comprehensive coverage.\n\nDocumented **Data sources**: IoT platform device management API. **App/TA** (typical add-on context): IoT platform API input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: iot_platform:devices. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"iot_platform:devices\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where battery_pct < 20 OR status!=\"online\" OR last_seen < relative_time(now(),\"-4h\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Smart Sensor Fleet Health**): table device_id, device_type, location, battery_pct, status, last_seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (devices needing attention), Gauge (fleet health %), Pie chart (device status distribution), Map (device locations with status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with batteries, signal, and which sensors have gone quiet before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.2",
              "n": "Environmental Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Distributed environmental sensors provide early warning of conditions that could damage equipment or inventory.",
              "t": "MQTT input, IoT platform API",
              "d": "Environmental sensor data (temperature, humidity, water leak, smoke)",
              "q": "index=iot sourcetype=\"sensor:environmental\"\n| where water_detected=\"true\" OR smoke_detected=\"true\" OR temp_f > 90\n| table _time, sensor_id, location, alert_type, value",
              "m": "Deploy environmental sensors in server rooms, warehouses, and facilities. Ingest via MQTT or API. Alert immediately on water leak or smoke detection. Track temperature/humidity trends per location.",
              "z": "Floor plan (sensors with status), Line chart (environmental trends), Table (alerts), Single value (active environmental alerts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: MQTT input, IoT platform API.\n• Ensure the following data sources are available: Environmental sensor data (temperature, humidity, water leak, smoke).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy environmental sensors in server rooms, warehouses, and facilities. Ingest via MQTT or API. Alert immediately on water leak or smoke detection. Track temperature/humidity trends per location.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"sensor:environmental\"\n| where water_detected=\"true\" OR smoke_detected=\"true\" OR temp_f > 90\n| table _time, sensor_id, location, alert_type, value\n```\n\nUnderstanding this SPL\n\n**Environmental Monitoring** — Distributed environmental sensors provide early warning of conditions that could damage equipment or inventory.\n\nDocumented **Data sources**: Environmental sensor data (temperature, humidity, water leak, smoke). **App/TA** (typical add-on context): MQTT input, IoT platform API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: sensor:environmental. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"sensor:environmental\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where water_detected=\"true\" OR smoke_detected=\"true\" OR temp_f > 90` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Environmental Monitoring**): table _time, sensor_id, location, alert_type, value\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Floor plan (sensors with status), Line chart (environmental trends), Table (alerts), Single value (active environmental alerts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: environmental before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.3",
              "n": "Asset Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Real-time asset location reduces search time, prevents loss, and enables utilization optimization.",
              "t": "Custom API input, BLE/GPS data",
              "d": "GPS/BLE beacon data, RFID events",
              "q": "index=iot sourcetype=\"asset_tracking\"\n| stats latest(location) as current_location, latest(_time) as last_seen by asset_id, asset_type\n| eval hours_since=round((now()-last_seen)/3600,1)\n| where hours_since > 24",
              "m": "Ingest asset tracking data from GPS/BLE/RFID systems. Track asset locations and movement patterns. Alert when high-value assets leave designated zones. Report on asset utilization by location.",
              "z": "Map (asset locations), Table (asset inventory with location), Timeline (asset movement), Single value (assets not reporting).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input, BLE/GPS data.\n• Ensure the following data sources are available: GPS/BLE beacon data, RFID events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest asset tracking data from GPS/BLE/RFID systems. Track asset locations and movement patterns. Alert when high-value assets leave designated zones. Report on asset utilization by location.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"asset_tracking\"\n| stats latest(location) as current_location, latest(_time) as last_seen by asset_id, asset_type\n| eval hours_since=round((now()-last_seen)/3600,1)\n| where hours_since > 24\n```\n\nUnderstanding this SPL\n\n**Asset Tracking** — Real-time asset location reduces search time, prevents loss, and enables utilization optimization.\n\nDocumented **Data sources**: GPS/BLE beacon data, RFID events. **App/TA** (typical add-on context): Custom API input, BLE/GPS data. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: asset_tracking. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"asset_tracking\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by asset_id, asset_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours_since > 24` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (asset locations), Table (asset inventory with location), Timeline (asset movement), Single value (assets not reporting).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: asset before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.4",
              "n": "Home Automation Monitoring",
              "c": "low",
              "f": "beginner",
              "v": "Smart home monitoring provides energy usage insights, security awareness, and automation troubleshooting.",
              "t": "Custom API input (Homey, Home Assistant)",
              "d": "Homey/Home Assistant API (device events, energy data)",
              "q": "index=smarthome sourcetype=\"homey:events\"\n| stats count by device_name, capability, event_type\n| sort -count",
              "m": "Configure Homey/Home Assistant webhook or API to send events to Splunk HEC. Track device states, energy consumption, and automation triggers. Create dashboards for home energy management and security.",
              "z": "Line chart (energy usage), Table (device events), Status grid (device × state), Single value (energy today).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom API input (Homey, Home Assistant).\n• Ensure the following data sources are available: Homey/Home Assistant API (device events, energy data).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Homey/Home Assistant webhook or API to send events to Splunk HEC. Track device states, energy consumption, and automation triggers. Create dashboards for home energy management and security.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=smarthome sourcetype=\"homey:events\"\n| stats count by device_name, capability, event_type\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Home Automation Monitoring** — Smart home monitoring provides energy usage insights, security awareness, and automation troubleshooting.\n\nDocumented **Data sources**: Homey/Home Assistant API (device events, energy data). **App/TA** (typical add-on context): Custom API input (Homey, Home Assistant). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: smarthome; **sourcetype**: homey:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=smarthome, sourcetype=\"homey:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_name, capability, event_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (energy usage), Table (device events), Status grid (device × state), Single value (energy today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: home automation before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.5",
              "n": "IoT Device Firmware Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "IoT devices are frequently targeted for botnets. Outdated firmware creates network security risks.",
              "t": "IoT platform API",
              "d": "Device inventory with firmware versions",
              "q": "index=iot sourcetype=\"iot_platform:inventory\"\n| stats latest(firmware_version) as current by device_type, model\n| lookup iot_approved_firmware.csv device_type, model OUTPUT approved_version\n| where current!=approved_version\n| table device_type, model, count, current, approved_version",
              "m": "Export IoT device inventory with firmware versions periodically. Maintain approved firmware lookup. Report on compliance percentage. Prioritize updates for internet-connected devices. Track firmware update campaigns.",
              "z": "Table (non-compliant devices), Pie chart (compliant vs non-compliant), Bar chart (by device type), Single value (compliance %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IoT platform API.\n• Ensure the following data sources are available: Device inventory with firmware versions.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport IoT device inventory with firmware versions periodically. Maintain approved firmware lookup. Report on compliance percentage. Prioritize updates for internet-connected devices. Track firmware update campaigns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"iot_platform:inventory\"\n| stats latest(firmware_version) as current by device_type, model\n| lookup iot_approved_firmware.csv device_type, model OUTPUT approved_version\n| where current!=approved_version\n| table device_type, model, count, current, approved_version\n```\n\nUnderstanding this SPL\n\n**IoT Device Firmware Compliance** — IoT devices are frequently targeted for botnets. Outdated firmware creates network security risks.\n\nDocumented **Data sources**: Device inventory with firmware versions. **App/TA** (typical add-on context): IoT platform API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: iot_platform:inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"iot_platform:inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_type, model** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where current!=approved_version` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **IoT Device Firmware Compliance**): table device_type, model, count, current, approved_version\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant devices), Pie chart (compliant vs non-compliant), Bar chart (by device type), Single value (compliance %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: iot device firmware compliance before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.6",
              "n": "IoT Device Connectivity and Last-Seen Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Devices that stop reporting indicate failure, network issues, or tampering. Last-seen monitoring ensures fleet visibility and rapid response to outages.",
              "t": "IoT platform, MQTT/CoAP gateway logs",
              "d": "Device heartbeat, last telemetry timestamp per device",
              "q": "index=iot sourcetype=\"iot:telemetry\"\n| stats latest(_time) as last_seen by device_id, gateway\n| eval gap_sec=now()-last_seen\n| where gap_sec > 3600\n| table device_id, gateway, last_seen, gap_sec",
              "m": "Track last-seen per device from telemetry or heartbeat. Alert when device has not reported for >1 hour (tune per use case). Report on connectivity rate and devices with longest gap. Correlate with gateway health.",
              "z": "Table (devices with gap), Single value (devices offline), Line chart (connectivity rate).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IoT platform, MQTT/CoAP gateway logs.\n• Ensure the following data sources are available: Device heartbeat, last telemetry timestamp per device.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack last-seen per device from telemetry or heartbeat. Alert when device has not reported for >1 hour (tune per use case). Report on connectivity rate and devices with longest gap. Correlate with gateway health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"iot:telemetry\"\n| stats latest(_time) as last_seen by device_id, gateway\n| eval gap_sec=now()-last_seen\n| where gap_sec > 3600\n| table device_id, gateway, last_seen, gap_sec\n```\n\nUnderstanding this SPL\n\n**IoT Device Connectivity and Last-Seen Monitoring** — Devices that stop reporting indicate failure, network issues, or tampering. Last-seen monitoring ensures fleet visibility and rapid response to outages.\n\nDocumented **Data sources**: Device heartbeat, last telemetry timestamp per device. **App/TA** (typical add-on context): IoT platform, MQTT/CoAP gateway logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: iot:telemetry. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"iot:telemetry\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_id, gateway** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gap_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap_sec > 3600` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **IoT Device Connectivity and Last-Seen Monitoring**): table device_id, gateway, last_seen, gap_sec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (devices with gap), Single value (devices offline), Line chart (connectivity rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: iot device connectivity and last-seen before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.7",
              "n": "OT Protocol Anomaly and Unauthorized Command Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Unusual Modbus/OPC-UA/DNP3 commands or write operations can indicate attack or misconfiguration. Detection supports OT security and safety.",
              "t": "OT protocol decoders, industrial IDS",
              "d": "Modbus/OPC-UA/DNP3 traffic, write and function code logs",
              "q": "index=ot sourcetype=\"modbus:traffic\"\n| search (function_code=6 OR function_code=16 OR function_code IN (\"0x10\",\"0x06\"))\n| stats count by src, unit_id, function_code, register\n| where count > 100\n| sort -count",
              "m": "Ingest OT protocol traffic from sensors or IDS. Baseline normal read/write patterns. Alert on write to critical registers, unknown source, or high command rate. Report on command distribution by source and function.",
              "z": "Table (write events), Timeline (commands by source), Bar chart (function codes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OT protocol decoders, industrial IDS.\n• Ensure the following data sources are available: Modbus/OPC-UA/DNP3 traffic, write and function code logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest OT protocol traffic from sensors or IDS. Baseline normal read/write patterns. Alert on write to critical registers, unknown source, or high command rate. Report on command distribution by source and function.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"modbus:traffic\"\n| search (function_code=6 OR function_code=16 OR function_code IN (\"0x10\",\"0x06\"))\n| stats count by src, unit_id, function_code, register\n| where count > 100\n| sort -count\n```\n\nUnderstanding this SPL\n\n**OT Protocol Anomaly and Unauthorized Command Detection** — Unusual Modbus/OPC-UA/DNP3 commands or write operations can indicate attack or misconfiguration. Detection supports OT security and safety.\n\nDocumented **Data sources**: Modbus/OPC-UA/DNP3 traffic, write and function code logs. **App/TA** (typical add-on context): OT protocol decoders, industrial IDS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: modbus:traffic. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"modbus:traffic\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by src, unit_id, function_code, register** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (write events), Timeline (commands by source), Bar chart (function codes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: ot protocol anomaly and unauthorized command before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.8",
              "n": "Sensor Calibration and Drift Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Sensor drift causes incorrect control and compliance issues. Detecting drift against reference or peer sensors supports maintenance and data quality.",
              "t": "Sensor telemetry, reference sensor or baseline data",
              "d": "Sensor readings, calibration records",
              "q": "index=iot sourcetype=\"sensor:reading\"\n| bin _time span=1d\n| stats avg(value) as avg_val, stdev(value) as stdev_val by sensor_id, metric, _time\n| eventstats avg(avg_val) as fleet_avg by metric\n| eval drift=abs(avg_val-fleet_avg)\n| where drift > (stdev_val * 3)",
              "m": "Ingest sensor readings and optional reference values. Compute baseline or peer average. Alert when a sensor deviates beyond threshold. Track calibration history and flag sensors due for recalibration.",
              "z": "Line chart (sensor vs baseline), Table (sensors with drift), Bar chart (drift magnitude).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Sensor telemetry, reference sensor or baseline data.\n• Ensure the following data sources are available: Sensor readings, calibration records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest sensor readings and optional reference values. Compute baseline or peer average. Alert when a sensor deviates beyond threshold. Track calibration history and flag sensors due for recalibration.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"sensor:reading\"\n| bin _time span=1d\n| stats avg(value) as avg_val, stdev(value) as stdev_val by sensor_id, metric, _time\n| eventstats avg(avg_val) as fleet_avg by metric\n| eval drift=abs(avg_val-fleet_avg)\n| where drift > (stdev_val * 3)\n```\n\nUnderstanding this SPL\n\n**Sensor Calibration and Drift Detection** — Sensor drift causes incorrect control and compliance issues. Detecting drift against reference or peer sensors supports maintenance and data quality.\n\nDocumented **Data sources**: Sensor readings, calibration records. **App/TA** (typical add-on context): Sensor telemetry, reference sensor or baseline data. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: sensor:reading. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"sensor:reading\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by sensor_id, metric, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by metric** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **drift** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drift > (stdev_val * 3)` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (sensor vs baseline), Table (sensors with drift), Bar chart (drift magnitude).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: sensor calibration and drift before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.9",
              "n": "Gateway and Edge Node Resource Utilization",
              "c": "high",
              "f": "intermediate",
              "v": "Overloaded gateways drop telemetry or delay processing. Monitoring CPU, memory, and queue depth ensures edge reliability and capacity planning.",
              "t": "Edge/gateway metrics, SNMP or agent",
              "d": "Gateway CPU, memory, disk, message queue depth",
              "q": "index=iot sourcetype=\"gateway:metrics\"\n| stats latest(cpu_pct) as cpu, latest(mem_pct) as mem, latest(queue_depth) as queue by gateway_id\n| where cpu > 80 OR mem > 85 OR queue > 1000\n| table gateway_id, cpu, mem, queue",
              "m": "Collect gateway and edge node metrics via agent or SNMP. Alert when CPU/memory or queue exceeds threshold. Report on utilization trend and top-loaded gateways. Plan scale-out before saturation.",
              "z": "Table (gateways over threshold), Gauge (queue depth), Line chart (utilization trend).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Edge/gateway metrics, SNMP or agent.\n• Ensure the following data sources are available: Gateway CPU, memory, disk, message queue depth.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect gateway and edge node metrics via agent or SNMP. Alert when CPU/memory or queue exceeds threshold. Report on utilization trend and top-loaded gateways. Plan scale-out before saturation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"gateway:metrics\"\n| stats latest(cpu_pct) as cpu, latest(mem_pct) as mem, latest(queue_depth) as queue by gateway_id\n| where cpu > 80 OR mem > 85 OR queue > 1000\n| table gateway_id, cpu, mem, queue\n```\n\nUnderstanding this SPL\n\n**Gateway and Edge Node Resource Utilization** — Overloaded gateways drop telemetry or delay processing. Monitoring CPU, memory, and queue depth ensures edge reliability and capacity planning.\n\nDocumented **Data sources**: Gateway CPU, memory, disk, message queue depth. **App/TA** (typical add-on context): Edge/gateway metrics, SNMP or agent. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: gateway:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"gateway:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by gateway_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cpu > 80 OR mem > 85 OR queue > 1000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Gateway and Edge Node Resource Utilization**): table gateway_id, cpu, mem, queue\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (gateways over threshold), Gauge (queue depth), Line chart (utilization trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: gateway and edge node resource utilization before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.10",
              "n": "IoT Data Pipeline Throughput and Latency",
              "c": "high",
              "f": "intermediate",
              "v": "Pipeline latency or drop in throughput affects real-time dashboards and alerts. Monitoring supports SLA and troubleshooting of ingestion and processing.",
              "t": "Pipeline metrics, message broker logs",
              "d": "Ingestion rate, end-to-end latency, backlog size",
              "q": "index=iot sourcetype=\"pipeline:metrics\"\n| bin _time span=5m\n| stats avg(ingestion_rate) as rate, avg(latency_ms) as latency, max(backlog) as backlog by pipeline_stage, _time\n| where rate < 100 OR latency > 5000 OR backlog > 50000\n| table pipeline_stage, rate, latency, backlog",
              "m": "Ingest pipeline stage metrics (rate, latency, backlog). Alert when rate drops or latency/backlog exceeds threshold. Report on throughput by stage and trend. Correlate with gateway and cloud health.",
              "z": "Line chart (throughput and latency), Table (stages with issues), Single value (pipeline health).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Pipeline metrics, message broker logs.\n• Ensure the following data sources are available: Ingestion rate, end-to-end latency, backlog size.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest pipeline stage metrics (rate, latency, backlog). Alert when rate drops or latency/backlog exceeds threshold. Report on throughput by stage and trend. Correlate with gateway and cloud health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"pipeline:metrics\"\n| bin _time span=5m\n| stats avg(ingestion_rate) as rate, avg(latency_ms) as latency, max(backlog) as backlog by pipeline_stage, _time\n| where rate < 100 OR latency > 5000 OR backlog > 50000\n| table pipeline_stage, rate, latency, backlog\n```\n\nUnderstanding this SPL\n\n**IoT Data Pipeline Throughput and Latency** — Pipeline latency or drop in throughput affects real-time dashboards and alerts. Monitoring supports SLA and troubleshooting of ingestion and processing.\n\nDocumented **Data sources**: Ingestion rate, end-to-end latency, backlog size. **App/TA** (typical add-on context): Pipeline metrics, message broker logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: pipeline:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"pipeline:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by pipeline_stage, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where rate < 100 OR latency > 5000 OR backlog > 50000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **IoT Data Pipeline Throughput and Latency**): table pipeline_stage, rate, latency, backlog\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (throughput and latency), Table (stages with issues), Single value (pipeline health).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: iot data pipeline throughput and latency before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.11",
              "n": "Aranet Environmental Sensor Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "CO2, temperature, humidity, and atmospheric pressure from Aranet4/Aranet PRO sensors support workplace air quality and occupant comfort. Exceedances indicate ventilation issues and may violate ASHRAE 62.1 or local workplace standards.",
              "t": "Custom (Aranet Cloud API or local gateway)",
              "d": "Aranet API (sensor readings JSON)",
              "q": "index=environment sourcetype=\"aranet:sensor\"\n| where co2_ppm > 1000 OR temp_c < 18 OR temp_c > 26 OR humidity_pct < 30 OR humidity_pct > 60 OR pressure_hpa < 980 OR pressure_hpa > 1050\n| eval exceedance=case(co2_ppm>1000, \"CO2\", temp_c<18 OR temp_c>26, \"Temperature\", humidity_pct<30 OR humidity_pct>60, \"Humidity\", pressure_hpa<980 OR pressure_hpa>1050, \"Pressure\", 1=1, \"OK\")\n| table _time, sensor_id, location, co2_ppm, temp_c, humidity_pct, pressure_hpa, exceedance",
              "m": "Integrate Aranet Cloud API or local Aranet gateway (e.g. Aranet Cloud Bridge) to fetch sensor readings. Schedule scripted input or HEC to ingest JSON (CO2 ppm, temperature °C, humidity %, pressure hPa) every 5–15 minutes. Alert when CO2 exceeds 1000 ppm (ASHRAE 62.1 recommends <1000 ppm for occupied spaces), temperature outside 18–26°C, or humidity outside 30–60%. Track trends for ventilation and HVAC tuning.",
              "z": "Gauge (CO2 ppm per zone), Line chart (CO2, temp, humidity trend), Heatmap (zone × CO2 level), Table (sensors with exceedances), Single value (zones in compliance %).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Aranet Cloud API or local gateway).\n• Ensure the following data sources are available: Aranet API (sensor readings JSON).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate Aranet Cloud API or local Aranet gateway (e.g. Aranet Cloud Bridge) to fetch sensor readings. Schedule scripted input or HEC to ingest JSON (CO2 ppm, temperature °C, humidity %, pressure hPa) every 5–15 minutes. Alert when CO2 exceeds 1000 ppm (ASHRAE 62.1 recommends <1000 ppm for occupied spaces), temperature outside 18–26°C, or humidity outside 30–60%. Track trends for ventilation and HVAC tuning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=environment sourcetype=\"aranet:sensor\"\n| where co2_ppm > 1000 OR temp_c < 18 OR temp_c > 26 OR humidity_pct < 30 OR humidity_pct > 60 OR pressure_hpa < 980 OR pressure_hpa > 1050\n| eval exceedance=case(co2_ppm>1000, \"CO2\", temp_c<18 OR temp_c>26, \"Temperature\", humidity_pct<30 OR humidity_pct>60, \"Humidity\", pressure_hpa<980 OR pressure_hpa>1050, \"Pressure\", 1=1, \"OK\")\n| table _time, sensor_id, location, co2_ppm, temp_c, humidity_pct, pressure_hpa, exceedance\n```\n\nUnderstanding this SPL\n\n**Aranet Environmental Sensor Monitoring** — CO2, temperature, humidity, and atmospheric pressure from Aranet4/Aranet PRO sensors support workplace air quality and occupant comfort. Exceedances indicate ventilation issues and may violate ASHRAE 62.1 or local workplace standards.\n\nDocumented **Data sources**: Aranet API (sensor readings JSON). **App/TA** (typical add-on context): Custom (Aranet Cloud API or local gateway). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: environment; **sourcetype**: aranet:sensor. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=environment, sourcetype=\"aranet:sensor\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where co2_ppm > 1000 OR temp_c < 18 OR temp_c > 26 OR humidity_pct < 30 OR humidity_pct > 60 OR pressure_hpa < 980 OR press…` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **exceedance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Aranet Environmental Sensor Monitoring**): table _time, sensor_id, location, co2_ppm, temp_c, humidity_pct, pressure_hpa, exceedance\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Gauge (CO2 ppm per zone), Line chart (CO2, temp, humidity trend), Heatmap (zone × CO2 level), Table (sensors with exceedances), Single value (zones in compliance %).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
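              "script": "#!/usr/bin/env bash\n# Hypothetical poller sketch. The endpoint path and JSON field names below are\n# placeholders (assumptions), not the documented Aranet Cloud API; check your\n# Aranet Cloud or gateway documentation for the real URL and response layout.\n# Idea: fetch current readings as JSON and flatten them to key=value events for\n# the aranet:sensor sourcetype. Requires curl and jq.\nAPI_BASE=\"${ARANET_API_BASE:-https://aranet.cloud/api}\"   # placeholder base URL\nAPI_KEY=\"${ARANET_API_KEY:?set ARANET_API_KEY}\"\ncurl -s -H \"Authorization: Bearer $API_KEY\" \"$API_BASE/sensors/readings\" \\\n  | jq -r '.readings[] | \"sensor_id=\\(.sensor_id) co2_ppm=\\(.co2) temp_c=\\(.temperature) humidity_pct=\\(.humidity) pressure_hpa=\\(.pressure)\"'",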
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: aranet environmental sensor before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance",
                "Safety"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aranet"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.12",
              "n": "IoT Device Fleet Health Dashboard",
              "c": "high",
              "f": "beginner",
              "v": "Single pane for battery, RSSI, last-seen, and firmware supports NOC and field operations.",
              "t": "IoT platform API, MQTT broker metrics",
              "d": "`sourcetype=\"iot_platform:devices\"`",
              "q": "index=iot sourcetype=\"iot_platform:devices\"\n| eval health=case(status!=\"online\",\"offline\", battery_pct<15,\"low_battery\", rssi< -110,\"poor_rssi\", 1=1,\"ok\")\n| stats count by health, product_family\n| sort health",
              "m": "Normalize vendor fields. Refresh dashboard every 5 min. Drill to device detail.",
              "z": "Status grid (region × health), Treemap (fleet by family), Single value (% healthy).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IoT platform API, MQTT broker metrics.\n• Ensure the following data sources are available: `sourcetype=\"iot_platform:devices\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize vendor fields. Refresh dashboard every 5 min. Drill to device detail.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"iot_platform:devices\"\n| eval health=case(status!=\"online\",\"offline\", battery_pct<15,\"low_battery\", rssi< -110,\"poor_rssi\", 1=1,\"ok\")\n| stats count by health, product_family\n| sort health\n```\n\nUnderstanding this SPL\n\n**IoT Device Fleet Health Dashboard** — Single pane for battery, RSSI, last-seen, and firmware supports NOC and field operations.\n\nDocumented **Data sources**: `sourcetype=\"iot_platform:devices\"`. **App/TA** (typical add-on context): IoT platform API, MQTT broker metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: iot_platform:devices. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"iot_platform:devices\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by health, product_family** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (region × health), Treemap (fleet by family), Single value (% healthy).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with batteries, signal, and which sensors have gone quiet before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.13",
              "n": "Firmware Update Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "OTA campaigns must complete; stragglers remain vulnerable to known exploits.",
              "t": "IoT DM, OTA job logs",
              "d": "`sourcetype=\"iot:ota\"` (job_id, status, version)",
              "q": "index=iot sourcetype=\"iot:ota\"\n| stats latest(target_version) as tgt by device_id\n| join max=1 device_id [\n  search index=iot sourcetype=\"iot_platform:devices\"\n  | stats latest(firmware_version) as cur by device_id\n]\n| where cur!=tgt\n| table device_id, cur, tgt",
              "m": "Compare device inventory to OTA target per wave. Escalate devices >7 days behind.",
              "z": "Bar chart (compliance by wave), Table (lagging devices).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IoT DM, OTA job logs.\n• Ensure the following data sources are available: `sourcetype=\"iot:ota\"` (job_id, status, version).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare device inventory to OTA target per wave. Escalate devices >7 days behind.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"iot:ota\"\n| stats latest(target_version) as tgt by device_id\n| join max=1 device_id [\n  search index=iot sourcetype=\"iot_platform:devices\"\n  | stats latest(firmware_version) as cur by device_id\n]\n| where cur!=tgt\n| table device_id, cur, tgt\n```\n\nUnderstanding this SPL\n\n**Firmware Update Compliance** — OTA campaigns must complete; stragglers remain vulnerable to known exploits.\n\nDocumented **Data sources**: `sourcetype=\"iot:ota\"` (job_id, status, version). **App/TA** (typical add-on context): IoT DM, OTA job logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: iot:ota. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"iot:ota\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where cur!=tgt` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Firmware Update Compliance**): table device_id, cur, tgt\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (compliance by wave), Table (lagging devices).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: firmware update compliance before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.14",
              "n": "Sensor Data Gap Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Missing telemetry breaks automation and safety analytics; gaps often precede device failure.",
              "t": "MQTT/CoAP gateway logs",
              "d": "`sourcetype=\"iot:telemetry\"`",
              "q": "index=iot sourcetype=\"iot:telemetry\"\n| stats latest(_time) as last_seen by device_id, sensor_id\n| eval gap_min=round((now()-last_seen)/60,1)\n| where gap_min > 30\n| table device_id, sensor_id, last_seen, gap_min",
              "m": "Tune gap threshold per sensor class (critical vs ambient). Alert on gap or stepped decrease in message rate.",
              "z": "Table (gaps), Heatmap (device × hour), Line chart (messages/min).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: MQTT/CoAP gateway logs.\n• Ensure the following data sources are available: `sourcetype=\"iot:telemetry\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune gap threshold per sensor class (critical vs ambient). Alert on gap or stepped decrease in message rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"iot:telemetry\"\n| stats latest(_time) as last_seen by device_id, sensor_id\n| eval gap_min=round((now()-last_seen)/60,1)\n| where gap_min > 30\n| table device_id, sensor_id, last_seen, gap_min\n```\n\nUnderstanding this SPL\n\n**Sensor Data Gap Detection** — Missing telemetry breaks automation and safety analytics; gaps often precede device failure.\n\nDocumented **Data sources**: `sourcetype=\"iot:telemetry\"`. **App/TA** (typical add-on context): MQTT/CoAP gateway logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: iot:telemetry. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"iot:telemetry\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_id, sensor_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gap_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap_min > 30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Sensor Data Gap Detection**): table device_id, sensor_id, last_seen, gap_min\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (gaps), Heatmap (device × hour), Line chart (messages/min).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: sensor data gap before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.15",
              "n": "MQTT Broker Overload",
              "c": "critical",
              "f": "intermediate",
              "v": "Broker CPU, connection count, and retained message backlog indicate need to shard or scale.",
              "t": "Mosquitto/HiveMQ/AWS IoT metrics",
              "d": "`sourcetype=\"mqtt:broker_metrics\"`",
              "q": "index=iot sourcetype=\"mqtt:broker_metrics\"\n| where connections > max_connections*0.9 OR cpu_pct > 85 OR dropped_messages > 0\n| timechart span=1m avg(connections) as conn, avg(cpu_pct) as cpu, sum(dropped_messages) as drops",
              "m": "Scrape Prometheus or vendor API. Alert on sustained high utilization or any dropped messages. Correlate with misbehaving clients publishing at high QoS0 rate.",
              "z": "Line chart (connections and CPU), Table (brokers with drops), Gauge (connection %).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Mosquitto/HiveMQ/AWS IoT metrics.\n• Ensure the following data sources are available: `sourcetype=\"mqtt:broker_metrics\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nScrape Prometheus or vendor API. Alert on sustained high utilization or any dropped messages. Correlate with misbehaving clients publishing at high QoS0 rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"mqtt:broker_metrics\"\n| where connections > max_connections*0.9 OR cpu_pct > 85 OR dropped_messages > 0\n| timechart span=1m avg(connections) as conn, avg(cpu_pct) as cpu, sum(dropped_messages) as drops\n```\n\nUnderstanding this SPL\n\n**MQTT Broker Overload** — Broker CPU, connection count, and retained message backlog indicate need to shard or scale.\n\nDocumented **Data sources**: `sourcetype=\"mqtt:broker_metrics\"`. **App/TA** (typical add-on context): Mosquitto/HiveMQ/AWS IoT metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: mqtt:broker_metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"mqtt:broker_metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where connections > max_connections*0.9 OR cpu_pct > 85 OR dropped_messages > 0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1m** buckets — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (connections and CPU), Table (brokers with drops), Gauge (connection %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with data moving on the right message paths before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.16",
              "n": "IoT Device Certificate Expiry",
              "c": "critical",
              "f": "intermediate",
              "v": "Expired device certs break TLS to cloud and brick OTA; proactive rotation avoids mass outages.",
              "t": "PKI portal, device shadow attributes",
              "d": "`sourcetype=\"iot:cert_inventory\"` (device_id, not_after)",
              "q": "index=iot sourcetype=\"iot:cert_inventory\"\n| eval days_left=round((strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")-now())/86400,0)\n| where days_left < 45\n| table device_id, not_after, days_left\n| sort days_left",
              "m": "Ingest cert metadata from AWS IoT / Azure DPS / custom PKI. Alert at 45, 14, 7 days. Automate renewal jobs.",
              "z": "Table (certs expiring), Timeline (renewal window), Single value (devices <30d).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PKI portal, device shadow attributes.\n• Ensure the following data sources are available: `sourcetype=\"iot:cert_inventory\"` (device_id, not_after).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest cert metadata from AWS IoT / Azure DPS / custom PKI. Alert at 45, 14, 7 days. Automate renewal jobs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"iot:cert_inventory\"\n| eval days_left=round((strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")-now())/86400,0)\n| where days_left < 45\n| table device_id, not_after, days_left\n| sort days_left\n```\n\nUnderstanding this SPL\n\n**IoT Device Certificate Expiry** — Expired device certs break TLS to cloud and brick OTA; proactive rotation avoids mass outages.\n\nDocumented **Data sources**: `sourcetype=\"iot:cert_inventory\"` (device_id, not_after). **App/TA** (typical add-on context): PKI portal, device shadow attributes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: iot:cert_inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"iot:cert_inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left < 45` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **IoT Device Certificate Expiry**): table device_id, not_after, days_left\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (certs expiring), Timeline (renewal window), Single value (devices <30d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: iot device certificate expiry before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.17",
              "n": "Edge-to-Cloud Sync Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Store-and-forward gaps mean stale dashboards and missed alerts for remote sites.",
              "t": "Edge agent logs (Azure IoT Edge, Greengrass)",
              "d": "`sourcetype=\"edge:sync\"`",
              "q": "index=iot sourcetype=\"edge:sync\"\n| where status=\"failed\" OR backlog_mb > 100 OR last_success_age_sec > 600\n| stats latest(backlog_mb) as backlog, latest(status) as st by edge_id, cloud_endpoint\n| sort -backlog",
              "m": "Parse sync success, backoff, and queue depth. Alert on failure or growing backlog. Correlate with WAN outages.",
              "z": "Table (edges with backlog), Line chart (backlog MB), Single value (edges in sync).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Edge agent logs (Azure IoT Edge, Greengrass).\n• Ensure the following data sources are available: `sourcetype=\"edge:sync\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse sync success, backoff, and queue depth. Alert on failure or growing backlog. Correlate with WAN outages.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype=\"edge:sync\"\n| where status=\"failed\" OR backlog_mb > 100 OR last_success_age_sec > 600\n| stats latest(backlog_mb) as backlog, latest(status) as st by edge_id, cloud_endpoint\n| sort -backlog\n```\n\nUnderstanding this SPL\n\n**Edge-to-Cloud Sync Failures** — Store-and-forward gaps mean stale dashboards and missed alerts for remote sites.\n\nDocumented **Data sources**: `sourcetype=\"edge:sync\"`. **App/TA** (typical add-on context): Edge agent logs (Azure IoT Edge, Greengrass). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot; **sourcetype**: edge:sync. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot, sourcetype=\"edge:sync\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status=\"failed\" OR backlog_mb > 100 OR last_success_age_sec > 600` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by edge_id, cloud_endpoint** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (edges with backlog), Line chart (backlog MB), Single value (edges in sync).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: edge-to-cloud sync failures before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.18",
              "n": "IoT Device Provisioning Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Unauthorized provisioning events create shadow devices and billing abuse.",
              "t": "AWS IoT Fleet / Azure DPS audit",
              "d": "AWS CloudTrail (`sourcetype=\"aws:cloudtrail\"`; filter `eventSource` / `eventName` for IoT control-plane APIs such as `iot.amazonaws.com`), or a custom `iot:provisioning` sourcetype from your pipeline.",
              "q": "index=audit sourcetype=\"iot:provisioning\"\n| where action IN (\"RegisterThing\",\"CreateCertificate\",\"AttachPolicy\")\n| stats count by actor, device_template, src\n| where count > 50\n| sort -count",
              "m": "Compare provisioning rate to approved baseline. Alert on new template or IAM principal. Cross-check with HR for contractor access. For AWS, prefer CloudTrail (`sourcetype=\"aws:cloudtrail\"`) with `eventSource=\"iot.amazonaws.com\"` (and relevant `eventName` values) instead of a non-standard `cloudtrail:iot` sourcetype; keep `iot:provisioning` only if that is your normalized pipeline.",
              "z": "Table (provisioning events), Timeline (bursts), Bar chart (by actor).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AWS IoT Fleet / Azure DPS audit.\n• Ensure the following data sources are available: AWS CloudTrail (`sourcetype=\"aws:cloudtrail\"`; filter `eventSource` / `eventName` for IoT control-plane APIs such as `iot.amazonaws.com`), or a custom `iot:provisioning` sourcetype from your pipeline..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare provisioning rate to approved baseline. Alert on new template or IAM principal. Cross-check with HR for contractor access. For AWS, prefer CloudTrail (`sourcetype=\"aws:cloudtrail\"`) with `eventSource=\"iot.amazonaws.com\"` (and relevant `eventName` values) instead of a non-standard `cloudtrail:iot` sourcetype; keep `iot:provisioning` only if that is your normalized pipeline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"iot:provisioning\"\n| where action IN (\"RegisterThing\",\"CreateCertificate\",\"AttachPolicy\")\n| stats count by actor, device_template, src\n| where count > 50\n| sort -count\n```\n\nUnderstanding this SPL\n\n**IoT Device Provisioning Audit** — Unauthorized provisioning events create shadow devices and billing abuse.\n\nDocumented **Data sources**: AWS CloudTrail (`sourcetype=\"aws:cloudtrail\"`; filter `eventSource` / `eventName` for IoT control-plane APIs such as `iot.amazonaws.com`), or a custom `iot:provisioning` sourcetype from your pipeline. **App/TA** (typical add-on context): AWS IoT Fleet / Azure DPS audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: iot:provisioning. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"iot:provisioning\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action IN (\"RegisterThing\",\"CreateCertificate\",\"AttachPolicy\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by actor, device_template, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (provisioning events), Timeline (bursts), Bar chart (by actor).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: iot device provisioning audit before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.4.19",
              "n": "BLE/Zigbee Gateway Health",
              "c": "high",
              "f": "intermediate",
              "v": "Mesh coordinators and gateways aggregate low-power sensors; their health affects entire buildings.",
              "t": "Zigbee2MQTT, Home Assistant, vendor hub",
              "d": "`sourcetype=\"zigbee:gateway\"` OR `sourcetype=\"ble:gateway\"`",
              "q": "index=iot sourcetype IN (\"zigbee:gateway\",\"ble:gateway\")\n| where coordinator_status!=\"OK\" OR offline_devices > 5 OR mqtt_connected=0\n| table _time, gateway_id, coordinator_status, offline_devices, mqtt_connected",
              "m": "Poll hub API for mesh depth, neighbor loss, and MQTT uplink. Alert on coordinator down or offline child count spike.",
              "z": "Status grid (gateway × health), Line chart (offline device count), Table (gateways degraded).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Zigbee2MQTT, Home Assistant, vendor hub.\n• Ensure the following data sources are available: `sourcetype=\"zigbee:gateway\"` OR `sourcetype=\"ble:gateway\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll hub API for mesh depth, neighbor loss, and MQTT uplink. Alert on coordinator down or offline child count spike.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iot sourcetype IN (\"zigbee:gateway\",\"ble:gateway\")\n| where coordinator_status!=\"OK\" OR offline_devices > 5 OR mqtt_connected=0\n| table _time, gateway_id, coordinator_status, offline_devices, mqtt_connected\n```\n\nUnderstanding this SPL\n\n**BLE/Zigbee Gateway Health** — Mesh coordinators and gateways aggregate low-power sensors; their health affects entire buildings.\n\nDocumented **Data sources**: `sourcetype=\"zigbee:gateway\"` OR `sourcetype=\"ble:gateway\"`. **App/TA** (typical add-on context): Zigbee2MQTT, Home Assistant, vendor hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iot.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iot. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where coordinator_status!=\"OK\" OR offline_devices > 5 OR mqtt_connected=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **BLE/Zigbee Gateway Health**): table _time, gateway_id, coordinator_status, offline_devices, mqtt_connected\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (gateway × health), Line chart (offline device count), Table (gateways degraded).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch your connected devices and the messages they send. We help you know if something is off with this area: ble and zigbee gateway before harm, damage, or wasted effort piles up.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 19,
            "none": 0
          }
        },
        {
          "i": "14.5",
          "n": "MQTT and OPC-UA (Edge Hub and Gateways)",
          "u": [
            {
              "i": "14.5.1",
              "n": "MQTT Topic Message Rate and Subscription Health",
              "c": "high",
              "f": "intermediate",
              "v": "Per-topic message rate and subscriber count indicate whether sensors and downstream consumers are healthy. Sudden drops mean a publisher or subscription failed; spikes can indicate a runaway device or replay.",
              "t": "Splunk Edge Hub (MQTT broker), MQTT broker metrics (Mosquitto, HiveMQ)",
              "d": "`index=edge-hub-data` (metrics by topic), broker stats API or log-derived metrics",
              "q": "index=edge-hub-data OR index=ot sourcetype=mqtt OR sourcetype=edge_hub\n| bin _time span=5m\n| stats count as msg_count, latest(_time) as last_seen by topic, host, _time\n| eval age_sec = now() - last_seen\n| where age_sec > 600 OR msg_count < 1\n| table topic host msg_count last_seen age_sec",
              "m": "Use Edge Hub MQTT topic subscriptions with metric extraction (topic as dimension). Or ingest broker metrics (e.g. Mosquitto stats, HiveMQ REST API) for messages per topic. Alert when a critical topic has no messages for >10 minutes or rate drops below baseline. Dashboard message rate by topic and subscriber count.",
              "z": "Line chart (message rate by topic), Table (topics with no recent data), Single value (topics healthy %).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (MQTT broker), MQTT broker metrics (Mosquitto, HiveMQ).\n• Ensure the following data sources are available: `index=edge-hub-data` (metrics by topic), broker stats API or log-derived metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse Edge Hub MQTT topic subscriptions with metric extraction (topic as dimension). Or ingest broker metrics (e.g. Mosquitto stats, HiveMQ REST API) for messages per topic. Alert when a critical topic has no messages for >10 minutes or rate drops below baseline. Dashboard message rate by topic and subscriber count.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-data OR index=ot sourcetype=mqtt OR sourcetype=edge_hub\n| bin _time span=5m\n| stats count as msg_count, latest(_time) as last_seen by topic, host, _time\n| eval age_sec = now() - last_seen\n| where age_sec > 600 OR msg_count < 1\n| table topic host msg_count last_seen age_sec\n```\n\nUnderstanding this SPL\n\n**MQTT Topic Message Rate and Subscription Health** — Per-topic message rate and subscriber count indicate whether sensors and downstream consumers are healthy. Sudden drops mean a publisher or subscription failed; spikes can indicate a runaway device or replay.\n\nDocumented **Data sources**: `index=edge-hub-data` (metrics by topic), broker stats API or log-derived metrics. **App/TA** (typical add-on context): Splunk Edge Hub (MQTT broker), MQTT broker metrics (Mosquitto, HiveMQ). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-data, ot; **sourcetype**: mqtt, edge_hub. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-data, index=ot, sourcetype=mqtt. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by topic, host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **age_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_sec > 600 OR msg_count < 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MQTT Topic Message Rate and Subscription Health**): table topic host msg_count last_seen age_sec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (message rate by topic), Table (topics with no recent data), Single value (topics healthy %).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how often each topic gets messages so we can tell when a sensor or our broker goes quiet or starts flooding before operators lose trust in the data.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.5.2",
              "n": "OPC-UA Server Connection and Session Count",
              "c": "high",
              "f": "intermediate",
              "v": "OPC-UA session count and connection state indicate whether Edge Hub or gateways are successfully bound to PLCs/servers. Lost sessions create data gaps and break real-time monitoring.",
              "t": "Splunk Edge Hub (OPC-UA connector), OPC-UA gateway (KEPServerEX, Prosys)",
              "d": "`index=edge-hub-logs` (connector logs), gateway connection status API or logs",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log \"opcua\" OR \"opc-ua\"\n| rex \"session|connection|endpoint\"\n| stats count, latest(_time) as last_seen by host, connection_state\n| where connection_state=\"disconnected\" OR connection_state=\"failed\"\n| table host connection_state count last_seen",
              "m": "Configure Edge Hub OPC-UA connector with endpoint URL and security. Forward connector logs to edge-hub-logs. Parse connection/session state from log messages or gateway API. Alert when connection state is disconnected or session count drops to zero for a critical server.",
              "z": "Table (server, state, session count), Status grid (endpoint × state), Single value (OPC-UA connections healthy).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (OPC-UA connector), OPC-UA gateway (KEPServerEX, Prosys).\n• Ensure the following data sources are available: `index=edge-hub-logs` (connector logs), gateway connection status API or logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Edge Hub OPC-UA connector with endpoint URL and security. Forward connector logs to edge-hub-logs. Parse connection/session state from log messages or gateway API. Alert when connection state is disconnected or session count drops to zero for a critical server.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log \"opcua\" OR \"opc-ua\"\n| rex \"session|connection|endpoint\"\n| stats count, latest(_time) as last_seen by host, connection_state\n| where connection_state=\"disconnected\" OR connection_state=\"failed\"\n| table host connection_state count last_seen\n```\n\nUnderstanding this SPL\n\n**OPC-UA Server Connection and Session Count** — OPC-UA session count and connection state indicate whether Edge Hub or gateways are successfully bound to PLCs/servers. Lost sessions create data gaps and break real-time monitoring.\n\nDocumented **Data sources**: `index=edge-hub-logs` (connector logs), gateway connection status API or logs. **App/TA** (typical add-on context): Splunk Edge Hub (OPC-UA connector), OPC-UA gateway (KEPServerEX, Prosys). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, connection_state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where connection_state=\"disconnected\" OR connection_state=\"failed\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OPC-UA Server Connection and Session Count**): table host connection_state count last_seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (server, state, session count), Status grid (endpoint × state), Single value (OPC-UA connections healthy).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count industrial server sessions and connections so we see overload or client churn early and keep plant data flowing steadily.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
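Supplementary sketch for 14.5.2 (OPC-UA Server Connection and Session Count): the implementation text also asks for an alert when the session count of a critical server drops to zero, which the documented query does not cover. A minimal sketch, assuming the connector log carries a `session_count=<n>` token and that you maintain a lookup definition named `opcua_critical_servers` with `host` and `criticality` columns; both names are illustrative, not part of the documented feed:

```spl
index=edge-hub-logs sourcetype=splunk_edge_hub_log ("opcua" OR "opc-ua") "session"
| rex "session_count=(?<session_count>\d+)"
| stats latest(session_count) as sessions by host
| lookup opcua_critical_servers host OUTPUT criticality
| where criticality="critical" AND tonumber(sessions)=0
```

Schedule it over a short window (for example 15 minutes) so a zero-session condition on a critical endpoint alerts quickly.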
            {
              "i": "14.5.3",
              "n": "Edge Hub MQTT Broker Client Disconnections",
              "c": "high",
              "f": "intermediate",
              "v": "Frequent client disconnections or reconnect storms indicate network issues, broker overload, or misconfigured keep-alive. Monitoring supports stability and capacity planning.",
              "t": "Splunk Edge Hub (built-in MQTT broker), broker logs",
              "d": "`index=edge-hub-logs sourcetype=splunk_edge_hub_log` (broker events), MQTT broker log (disconnect, connect)",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log \"mqtt\" (\"disconnect\" OR \"connection closed\" OR \"client\")\n| rex \"client_id=(?<client_id>\\S+)|client (?<client_id>\\S+)\"\n| bin _time span=15m\n| stats count as disconnect_count by client_id, host, _time\n| where disconnect_count > 5\n| sort -disconnect_count",
              "m": "Enable MQTT broker logging on Edge Hub or external broker. Ingest disconnect and connection events. Count disconnects per client per 15 minutes. Alert on disconnect storms (>5 in 15 min) or when a critical client (e.g. PLC gateway) disconnects.",
              "z": "Table (client, disconnect count), Line chart (disconnects over time), Timeline (connection events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (built-in MQTT broker), broker logs.\n• Ensure the following data sources are available: `index=edge-hub-logs sourcetype=splunk_edge_hub_log` (broker events), MQTT broker log (disconnect, connect).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable MQTT broker logging on Edge Hub or external broker. Ingest disconnect and connection events. Count disconnects per client per 15 minutes. Alert on disconnect storms (>5 in 15 min) or when a critical client (e.g. PLC gateway) disconnects.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log \"mqtt\" (\"disconnect\" OR \"connection closed\" OR \"client\")\n| rex \"client_id=(?<client_id>\\S+)|client (?<client_id>\\S+)\"\n| bin _time span=15m\n| stats count as disconnect_count by client_id, host, _time\n| where disconnect_count > 5\n| sort -disconnect_count\n```\n\nUnderstanding this SPL\n\n**Edge Hub MQTT Broker Client Disconnections** — Frequent client disconnections or reconnect storms indicate network issues, broker overload, or misconfigured keep-alive. Monitoring supports stability and capacity planning.\n\nDocumented **Data sources**: `index=edge-hub-logs sourcetype=splunk_edge_hub_log` (broker events), MQTT broker log (disconnect, connect). **App/TA** (typical add-on context): Splunk Edge Hub (built-in MQTT broker), broker logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by client_id, host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where disconnect_count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (client, disconnect count), Line chart (disconnects over time), Timeline (connection events).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track when clients drop off the message broker so we can spot unstable networks, bad credentials, or maintenance gaps before data stops arriving.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
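A complementary example for 14.5.3 (Edge Hub MQTT Broker Client Disconnections): beyond disconnect storms, the implementation text calls out alerting when a critical client such as a PLC gateway disconnects at all. One hedged way to do that, assuming you maintain a lookup definition named `mqtt_critical_clients` with `client_id` and `role` columns; the lookup name and columns are assumptions:

```spl
index=edge-hub-logs sourcetype=splunk_edge_hub_log "mqtt" ("disconnect" OR "connection closed")
| rex "client(?:_id=|\s+)(?<client_id>\S+)"
| lookup mqtt_critical_clients client_id OUTPUT role
| where isnotnull(role)
| stats count as disconnects, latest(_time) as last_seen by client_id, role
| convert ctime(last_seen)
```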
            {
              "i": "14.5.4",
              "n": "OPC-UA Node Value Change Rate and Anomaly",
              "c": "medium",
              "f": "advanced",
              "v": "PLC tag value change rate and distribution indicate normal process behavior. Sudden change in rate or value distribution can signal sensor fault, process upset, or cyber event.",
              "t": "Splunk Edge Hub (OPC-UA connector), Splunk OT Intelligence",
              "d": "`index=edge-hub-data sourcetype=splunk_edge_hub_opcua` (node values)",
              "q": "index=edge-hub-data sourcetype=splunk_edge_hub_opcua\n| bin _time span=5m\n| stats count as sample_count, dc(node_id) as nodes_seen by host, _time\n| eventstats avg(sample_count) as avg_count, stdev(sample_count) as std_count by host\n| eval z = if(std_count>0, (sample_count-avg_count)/std_count, 0)\n| where abs(z) > 3\n| table host _time sample_count avg_count z",
              "m": "Ingest OPC-UA node samples from Edge Hub. Compute per-host message rate (samples per 5 min). Baseline mean and stdev; alert when rate exceeds 3 standard deviations. Optionally run MLTK anomaly detection on critical tags.",
              "z": "Line chart (sample rate by host), Table (anomalous intervals), Single value (current rate vs baseline).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (OPC-UA connector), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data sourcetype=splunk_edge_hub_opcua` (node values).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest OPC-UA node samples from Edge Hub. Compute per-host message rate (samples per 5 min). Baseline mean and stdev; alert when rate exceeds 3 standard deviations. Optionally run MLTK anomaly detection on critical tags.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-data sourcetype=splunk_edge_hub_opcua\n| bin _time span=5m\n| stats count as sample_count, dc(node_id) as nodes_seen by host, _time\n| eventstats avg(sample_count) as avg_count, stdev(sample_count) as std_count by host\n| eval z = if(std_count>0, (sample_count-avg_count)/std_count, 0)\n| where abs(z) > 3\n| table host _time sample_count avg_count z\n```\n\nUnderstanding this SPL\n\n**OPC-UA Node Value Change Rate and Anomaly** — PLC tag value change rate and distribution indicate normal process behavior. Sudden change in rate or value distribution can signal sensor fault, process upset, or cyber event.\n\nDocumented **Data sources**: `index=edge-hub-data sourcetype=splunk_edge_hub_opcua` (node values). **App/TA** (typical add-on context): Splunk Edge Hub (OPC-UA connector), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-data; **sourcetype**: splunk_edge_hub_opcua. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-data, sourcetype=splunk_edge_hub_opcua. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where abs(z) > 3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OPC-UA Node Value Change Rate and Anomaly**): table host _time sample_count avg_count z\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (sample rate by host), Table (anomalous intervals), Single value (current rate vs baseline).",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how fast tag values change so unusual spikes or flatlines stand out from normal production swings.",
              "mtype": [
                "Performance",
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
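For 14.5.4 (OPC-UA Node Value Change Rate and Anomaly), the implementation text mentions optional MLTK anomaly detection. A minimal sketch of the training search, assuming the Machine Learning Toolkit is installed; the model name `opcua_sample_rate_model` and the 0.005 threshold are illustrative:

```spl
index=edge-hub-data sourcetype=splunk_edge_hub_opcua
| bin _time span=5m
| stats count as sample_count by host, _time
| fit DensityFunction sample_count by host threshold=0.005 into opcua_sample_rate_model
```

A separate scheduled search can then run `| apply opcua_sample_rate_model` over recent data and alert where `'IsOutlier(sample_count)'=1`, complementing the z-score approach in the documented query.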
            {
              "i": "14.5.5",
              "n": "Edge Hub to Cloud HEC Forwarding Backlog",
              "c": "critical",
              "f": "intermediate",
              "v": "When Edge Hub loses connectivity to Splunk, data backs up locally (e.g. 3M sensor points in SQLite). Backlog growth and drain rate indicate risk of data loss and recovery time.",
              "t": "Splunk Edge Hub (system health)",
              "d": "`index=edge-hub-health sourcetype=edge_hub` (backlog, queue depth)",
              "q": "index=edge-hub-health sourcetype=edge_hub\n| stats latest(backlog_count) as backlog, latest(hec_connected) as connected by host\n| where backlog > 10000 OR connected != 1\n| table host backlog connected _time",
              "m": "Edge Hub reports backlog and HEC connection state to edge-hub-health. Ingest and alert when backlog exceeds threshold (e.g. 10K events) or connected=0. Track backlog drain rate after reconnect to estimate catch-up time.",
              "z": "Gauge (backlog per device), Single value (HEC connected), Line chart (backlog over time).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (system health).\n• Ensure the following data sources are available: `index=edge-hub-health sourcetype=edge_hub` (backlog, queue depth).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEdge Hub reports backlog and HEC connection state to edge-hub-health. Ingest and alert when backlog exceeds threshold (e.g. 10K events) or connected=0. Track backlog drain rate after reconnect to estimate catch-up time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-health sourcetype=edge_hub\n| stats latest(backlog_count) as backlog, latest(hec_connected) as connected by host\n| where backlog > 10000 OR connected != 1\n| table host backlog connected _time\n```\n\nUnderstanding this SPL\n\n**Edge Hub to Cloud HEC Forwarding Backlog** — When Edge Hub loses connectivity to Splunk, data backs up locally (e.g. 3M sensor points in SQLite). Backlog growth and drain rate indicate risk of data loss and recovery time.\n\nDocumented **Data sources**: `index=edge-hub-health sourcetype=edge_hub` (backlog, queue depth). **App/TA** (typical add-on context): Splunk Edge Hub (system health). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-health; **sourcetype**: edge_hub. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-health, sourcetype=edge_hub. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where backlog > 10000 OR connected != 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Edge Hub to Cloud HEC Forwarding Backlog**): table host backlog connected _time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (backlog per device), Single value (HEC connected), Line chart (backlog over time).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We monitor the edge forwarder backlog so a cloud or link hiccup does not quietly pile up work and age out the data we need.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
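For 14.5.5 (Edge Hub to Cloud HEC Forwarding Backlog), the implementation text asks to track drain rate after reconnect and estimate catch-up time. A sketch under the same `edge-hub-health` feed assumptions; `backlog_count` follows the documented field, while `catchup_min` is a derived estimate:

```spl
index=edge-hub-health sourcetype=edge_hub
| bin _time span=5m
| stats latest(backlog_count) as backlog by host, _time
| streamstats current=f window=1 last(backlog) as prev_backlog by host
| eval drained_per_5m = prev_backlog - backlog
| eval catchup_min = if(drained_per_5m > 0, round(backlog / drained_per_5m * 5, 0), null())
```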
            {
              "i": "14.5.6",
              "n": "MQTT Retain and Last Will Message Audit",
              "c": "medium",
              "f": "intermediate",
              "v": "Retained messages and Last Will payloads can contain sensitive state. Auditing who published retain/LWT and when supports change control and security review.",
              "t": "MQTT broker (access log, audit log)",
              "d": "Broker audit log (publish with retain=1, will payload)",
              "q": "index=ot sourcetype=mqtt_broker_audit (retain=1 OR \"last_will\" OR \"will_message\")\n| bin _time span=1h\n| stats count by client_id, topic, action, _time\n| where count > 0\n| table _time client_id topic action count",
              "m": "Enable broker audit or access logging for MQTT publish (include retain flag and will payload if logged). Forward to Splunk. Alert on new retain on sensitive topics or LWT changes. Dashboard retain/LWT events by client and topic.",
              "z": "Table (client, topic, retain/LWT events), Timeline (audit events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: MQTT broker (access log, audit log).\n• Ensure the following data sources are available: Broker audit log (publish with retain=1, will payload).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable broker audit or access logging for MQTT publish (include retain flag and will payload if logged). Forward to Splunk. Alert on new retain on sensitive topics or LWT changes. Dashboard retain/LWT events by client and topic.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=mqtt_broker_audit (retain=1 OR \"last_will\" OR \"will_message\")\n| bin _time span=1h\n| stats count by client_id, topic, action, _time\n| where count > 0\n| table _time client_id topic action count\n```\n\nUnderstanding this SPL\n\n**MQTT Retain and Last Will Message Audit** — Retained messages and Last Will payloads can contain sensitive state. Auditing who published retain/LWT and when supports change control and security review.\n\nDocumented **Data sources**: Broker audit log (publish with retain=1, will payload). **App/TA** (typical add-on context): MQTT broker (access log, audit log). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: mqtt_broker_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=mqtt_broker_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by client_id, topic, action, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MQTT Retain and Last Will Message Audit**): table _time client_id topic action count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (client, topic, retain/LWT events), Timeline (audit events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log retained messages and last-will events so surprise broker-side state does not break downstream logic we rely on.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
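For 14.5.6 (MQTT Retain and Last Will Message Audit), one way to alert specifically on retained publishes to sensitive topics is to match topic prefixes. The prefixes below (`scada/`, `plc/`, `safety/`) are placeholders, and a `topic` field is assumed to be extracted from the broker audit log:

```spl
index=ot sourcetype=mqtt_broker_audit retain=1
| where match(topic, "^(scada/|plc/|safety/)")
| stats count as retained_publishes, values(client_id) as clients, latest(_time) as last_seen by topic
| convert ctime(last_seen)
```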
            {
              "i": "14.5.7",
              "n": "OPC-UA Alarms and Events Queue Depth",
              "c": "high",
              "f": "intermediate",
              "v": "OPC-UA alarm/event queues that grow indicate slow consumption or overflow risk. Overflow can drop critical alarms; depth trending supports connector and gateway sizing.",
              "t": "OPC-UA gateway, Edge Hub OPC-UA connector",
              "d": "Gateway or connector metrics (alarm queue depth, event count)",
              "q": "index=edge-hub-data OR index=ot sourcetype=opcua_metrics\n| stats latest(alarm_queue_depth) as queue, latest(events_pending) as pending by host, endpoint\n| where queue > 100 OR pending > 500\n| table host endpoint queue pending",
              "m": "Expose alarm/event queue depth from OPC-UA gateway or Edge Hub connector (if available). Ingest as metric. Alert when queue depth exceeds 100 or pending events >500. Tune subscription and sampling rate if queue grows persistently.",
              "z": "Gauge (queue depth), Line chart (queue over time), Table (endpoints over threshold).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OPC-UA gateway, Edge Hub OPC-UA connector.\n• Ensure the following data sources are available: Gateway or connector metrics (alarm queue depth, event count).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExpose alarm/event queue depth from OPC-UA gateway or Edge Hub connector (if available). Ingest as metric. Alert when queue depth exceeds 100 or pending events >500. Tune subscription and sampling rate if queue grows persistently.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-data OR index=ot sourcetype=opcua_metrics\n| stats latest(alarm_queue_depth) as queue, latest(events_pending) as pending by host, endpoint\n| where queue > 100 OR pending > 500\n| table host endpoint queue pending\n```\n\nUnderstanding this SPL\n\n**OPC-UA Alarms and Events Queue Depth** — OPC-UA alarm/event queues that grow indicate slow consumption or overflow risk. Overflow can drop critical alarms; depth trending supports connector and gateway sizing.\n\nDocumented **Data sources**: Gateway or connector metrics (alarm queue depth, event count). **App/TA** (typical add-on context): OPC-UA gateway, Edge Hub OPC-UA connector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-data, ot; **sourcetype**: opcua_metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-data, index=ot, sourcetype=opcua_metrics. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, endpoint** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where queue > 100 OR pending > 500` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OPC-UA Alarms and Events Queue Depth**): table host endpoint queue pending\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (queue depth), Line chart (queue over time), Table (endpoints over threshold).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch alarm and event queues on the industrial server so a backlog does not mean we miss a safety or process alert.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
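For 14.5.7 (OPC-UA Alarms and Events Queue Depth), persistent growth is worth alerting on in addition to the absolute thresholds. A sketch comparing current depth to the value roughly 30 minutes earlier; the growth threshold of 50 is illustrative:

```spl
(index=edge-hub-data OR index=ot) sourcetype=opcua_metrics
| bin _time span=5m
| stats latest(alarm_queue_depth) as queue by host, endpoint, _time
| sort 0 host endpoint _time
| streamstats window=6 first(queue) as queue_30m_ago by host, endpoint
| eval growth = queue - queue_30m_ago
| where growth > 50
```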
            {
              "i": "14.5.8",
              "n": "MQTT QoS 0/1/2 Delivery and Drops",
              "c": "high",
              "f": "intermediate",
              "v": "QoS 0 messages can be dropped under load; QoS 1/2 add overhead. Tracking delivery by QoS and drop rate supports SLA and broker tuning.",
              "t": "MQTT broker metrics (messages in/out, dropped)",
              "d": "Broker stats (e.g. Mosquitto messages received/sent, dropped)",
              "q": "index=ot sourcetype=mqtt_broker_metrics\n| bin _time span=5m\n| stats sum(messages_in) as in, sum(messages_out) as out, sum(messages_dropped) as dropped by qos, _time\n| eval drop_rate=if(in>0, round(dropped/in*100, 2), 0)\n| where drop_rate > 1 OR dropped > 0\n| table _time qos in out dropped drop_rate",
              "m": "Collect broker metrics (SNMP, REST, or log parsing) for messages in/out/dropped by QoS. Ingest to Splunk. Alert when drop rate exceeds 1% or absolute drops exceed threshold. Correlate with broker CPU and connection count.",
              "z": "Line chart (in/out/dropped by QoS), Table (drop rate by QoS), Single value (total drops).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: MQTT broker metrics (messages in/out, dropped).\n• Ensure the following data sources are available: Broker stats (e.g. Mosquitto messages received/sent, dropped).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect broker metrics (SNMP, REST, or log parsing) for messages in/out/dropped by QoS. Ingest to Splunk. Alert when drop rate exceeds 1% or absolute drops exceed threshold. Correlate with broker CPU and connection count.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=mqtt_broker_metrics\n| bin _time span=5m\n| stats sum(messages_in) as in, sum(messages_out) as out, sum(messages_dropped) as dropped by qos, _time\n| eval drop_rate=if(in>0, round(dropped/in*100, 2), 0)\n| where drop_rate > 1 OR dropped > 0\n| table _time qos in out dropped drop_rate\n```\n\nUnderstanding this SPL\n\n**MQTT QoS 0/1/2 Delivery and Drops** — QoS 0 messages can be dropped under load; QoS 1/2 add overhead. Tracking delivery by QoS and drop rate supports SLA and broker tuning.\n\nDocumented **Data sources**: Broker stats (e.g. Mosquitto messages received/sent, dropped). **App/TA** (typical add-on context): MQTT broker metrics (messages in/out, dropped). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: mqtt_broker_metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=mqtt_broker_metrics. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by qos, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **drop_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drop_rate > 1 OR dropped > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MQTT QoS 0/1/2 Delivery and Drops**): table _time qos in out dropped drop_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (in/out/dropped by QoS), Table (drop rate by QoS), Single value (total drops).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track delivery and drops by quality-of-service so we know when messages are at risk in tough network conditions.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
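For 14.5.8 (MQTT QoS 0/1/2 Delivery and Drops), the implementation text suggests correlating drops with broker CPU. The sketch below assumes the broker host also reports `sourcetype=cpu` (for example via Splunk_TA_nix) into the same index, which may not match your deployment; adjust index and sourcetype accordingly:

```spl
index=ot (sourcetype=mqtt_broker_metrics OR sourcetype=cpu)
| bin _time span=5m
| eval cpu_used = if(sourcetype=="cpu", 100 - pctIdle, null())
| stats sum(messages_dropped) as dropped, avg(cpu_used) as broker_cpu by host, _time
| where dropped > 0 AND broker_cpu > 80
```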
            {
              "i": "14.5.9",
              "n": "OPC-UA Certificate Expiration and Trust",
              "c": "critical",
              "f": "intermediate",
              "v": "Expired or untrusted OPC-UA certificates break secure connections and prevent data collection. Proactive monitoring avoids blind spots after certificate rollover failures.",
              "t": "Edge Hub OPC-UA connector, OPC-UA gateway",
              "d": "Connector/gateway logs (certificate validation, trust list)",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log \"opcua\" (\"certificate\" OR \"trust\" OR \"expir\" OR \"reject\")\n| rex \"expir|reject|invalid|cert\"\n| table _time host message\n| sort -_time",
              "m": "Forward OPC-UA connector and gateway logs. Parse certificate and trust-related messages. Maintain a script or lookup of cert expiry dates; alert when expiry is within 30 days or when log shows validation failure.",
              "z": "Table (cert expiry by endpoint), Timeline (cert events), Single value (certs expiring in 30d).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Edge Hub OPC-UA connector, OPC-UA gateway.\n• Ensure the following data sources are available: Connector/gateway logs (certificate validation, trust list).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward OPC-UA connector and gateway logs. Parse certificate and trust-related messages. Maintain a script or lookup of cert expiry dates; alert when expiry is within 30 days or when log shows validation failure.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log \"opcua\" (\"certificate\" OR \"trust\" OR \"expir\" OR \"reject\")\n| rex \"expir|reject|invalid|cert\"\n| table _time host message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**OPC-UA Certificate Expiration and Trust** — Expired or untrusted OPC-UA certificates break secure connections and prevent data collection. Proactive monitoring avoids blind spots after certificate rollover failures.\n\nDocumented **Data sources**: Connector/gateway logs (certificate validation, trust list). **App/TA** (typical add-on context): Edge Hub OPC-UA connector, OPC-UA gateway. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Pipeline stage (see **OPC-UA Certificate Expiration and Trust**): table _time host message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cert expiry by endpoint), Timeline (cert events), Single value (certs expiring in 30d).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch certificates and trust on the industrial link so an expiry or trust break does not silently cut off secure plant data.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "opcua"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
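For 14.5.9 (OPC-UA Certificate Expiration and Trust), the implementation text suggests maintaining a lookup of certificate expiry dates. Assuming a CSV lookup named `opcua_cert_inventory.csv` with `endpoint`, `subject`, and `valid_to` columns (ISO dates), populated by whatever inventory script you run; all of these names are assumptions:

```spl
| inputlookup opcua_cert_inventory.csv
| eval days_left = round((strptime(valid_to, "%Y-%m-%d") - now()) / 86400, 0)
| where days_left <= 30
| sort days_left
| table endpoint subject valid_to days_left
```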
            {
              "i": "14.5.10",
              "n": "Edge Hub Local Storage and SQLite Backlog",
              "c": "high",
              "f": "intermediate",
              "v": "Edge Hub stores backlog in SQLite when disconnected. Disk usage and backlog size must stay within limits (e.g. 3M sensor points) to avoid data loss and device lockup.",
              "t": "Splunk Edge Hub (device health)",
              "d": "`index=edge-hub-health` (disk_usage, backlog_count)",
              "q": "index=edge-hub-health sourcetype=edge_hub\n| stats latest(disk_usage) as disk_pct, latest(backlog_count) as backlog by host\n| where disk_pct > 85 OR backlog > 2000000\n| table host disk_pct backlog _time",
              "m": "Edge Hub reports disk and backlog to edge-hub-health. Alert when disk exceeds 85% or backlog approaches device limit (e.g. 2M). Plan connectivity and storage upgrades before saturation.",
              "z": "Gauge (disk %), Line chart (backlog over time), Table (devices near limit).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (device health).\n• Ensure the following data sources are available: `index=edge-hub-health` (disk_usage, backlog_count).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEdge Hub reports disk and backlog to edge-hub-health. Alert when disk exceeds 85% or backlog approaches device limit (e.g. 2M). Plan connectivity and storage upgrades before saturation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-health sourcetype=edge_hub\n| stats latest(disk_usage) as disk_pct, latest(backlog_count) as backlog by host\n| where disk_pct > 85 OR backlog > 2000000\n| table host disk_pct backlog _time\n```\n\nUnderstanding this SPL\n\n**Edge Hub Local Storage and SQLite Backlog** — Edge Hub stores backlog in SQLite when disconnected. Disk usage and backlog size must stay within limits (e.g. 3M sensor points) to avoid data loss and device lockup.\n\nDocumented **Data sources**: `index=edge-hub-health` (disk_usage, backlog_count). **App/TA** (typical add-on context): Splunk Edge Hub (device health). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-health; **sourcetype**: edge_hub. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-health, sourcetype=edge_hub. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where disk_pct > 85 OR backlog > 2000000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Edge Hub Local Storage and SQLite Backlog**): table host disk_pct backlog _time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (disk %), Line chart (backlog over time), Table (devices near limit).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch local storage and backlogs on the edge box so a full disk or stuck queue does not block messaging into our platform.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
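For 14.5.10 (Edge Hub Local Storage and SQLite Backlog), a simple forecast of disk usage can support upgrade planning before saturation. The host value is a placeholder; run this per device, for example from a dashboard token:

```spl
index=edge-hub-health sourcetype=edge_hub host=edge-hub-01
| timechart span=1h latest(disk_usage) as disk_pct
| predict disk_pct as forecast future_timespan=24
```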
            {
              "i": "14.5.11",
              "n": "MQTT Authentication Failure and ACL Denials",
              "c": "high",
              "f": "beginner",
              "v": "Failed MQTT logins and publish/subscribe denials indicate credential abuse, misconfiguration, or attack. Monitoring supports access control and incident response.",
              "t": "MQTT broker (auth and ACL logs)",
              "d": "Broker access log (auth failure, ACL deny)",
              "q": "index=ot sourcetype=mqtt_broker_log (\"auth failed\" OR \"ACL deny\" OR \"access denied\" OR \"unauthorized\")\n| bin _time span=15m\n| stats count by client_id, src, reason, _time\n| where count > 5\n| sort -count",
              "m": "Enable broker authentication and ACL logging. Forward to Splunk. Alert when failure count from a single client or IP exceeds threshold. Dashboard failures by client and topic.",
              "z": "Table (client, IP, reason, count), Timeline (denials), Bar chart (top denied clients).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: MQTT broker (auth and ACL logs).\n• Ensure the following data sources are available: Broker access log (auth failure, ACL deny).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable broker authentication and ACL logging. Forward to Splunk. Alert when failure count from a single client or IP exceeds threshold. Dashboard failures by client and topic.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=mqtt_broker_log (\"auth failed\" OR \"ACL deny\" OR \"access denied\" OR \"unauthorized\")\n| bin _time span=15m\n| stats count by client_id, src, reason, _time\n| where count > 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**MQTT Authentication Failure and ACL Denials** — Failed MQTT logins and publish/subscribe denials indicate credential abuse, misconfiguration, or attack. Monitoring supports access control and incident response.\n\nDocumented **Data sources**: Broker access log (auth failure, ACL deny). **App/TA** (typical add-on context): MQTT broker (auth and ACL logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: mqtt_broker_log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=mqtt_broker_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by client_id, src, reason, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src span=15m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**MQTT Authentication Failure and ACL Denials** — Failed MQTT logins and publish/subscribe denials indicate credential abuse, misconfiguration, or attack. Monitoring supports access control and incident response.\n\nDocumented **Data sources**: Broker access log (auth failure, ACL deny). **App/TA** (typical add-on context): MQTT broker (auth and ACL logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (client, IP, reason, count), Timeline (denials), Bar chart (top denied clients).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log failed logins and access denials on the message broker so we can spot bad credentials, policy mistakes, or attacks early.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src span=15m | sort - count",
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
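For 14.5.11 (MQTT Authentication Failure and ACL Denials), one source address failing against many different client IDs often indicates credential spraying rather than a single misconfigured device. A sketch over the same broker log feed; the threshold of 5 distinct clients is illustrative:

```spl
index=ot sourcetype=mqtt_broker_log ("auth failed" OR "access denied" OR "unauthorized")
| bin _time span=15m
| stats dc(client_id) as distinct_clients, count as failures by src, _time
| where distinct_clients >= 5
| sort -failures
```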
            {
              "i": "14.5.12",
              "n": "OPC-UA Subscription Latency and Sampling Overrun",
              "c": "medium",
              "f": "advanced",
              "v": "High subscription latency or sampling overrun (server missing sample interval) degrades real-time visibility. Tracking supports connector and server tuning.",
              "t": "OPC-UA gateway, Edge Hub OPC-UA connector",
              "d": "Connector metrics (subscription latency, overrun count)",
              "q": "index=edge-hub-data OR index=ot sourcetype=opcua_metrics\n| stats avg(subscription_latency_ms) as avg_latency, sum(sampling_overrun_count) as overruns by host, subscription_id\n| where avg_latency > 500 OR overruns > 0\n| table host subscription_id avg_latency overruns",
              "m": "If gateway or Edge Hub exposes subscription latency and overrun metrics, ingest them. Alert when latency exceeds 500 ms or overrun count is non-zero. Reduce sampling rate or add resources if persistent.",
              "z": "Line chart (latency by subscription), Table (overruns by subscription), Single value (max latency).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OPC-UA gateway, Edge Hub OPC-UA connector.\n• Ensure the following data sources are available: Connector metrics (subscription latency, overrun count).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIf gateway or Edge Hub exposes subscription latency and overrun metrics, ingest them. Alert when latency exceeds 500 ms or overrun count is non-zero. Reduce sampling rate or add resources if persistent.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-data OR index=ot sourcetype=opcua_metrics\n| stats avg(subscription_latency_ms) as avg_latency, sum(sampling_overrun_count) as overruns by host, subscription_id\n| where avg_latency > 500 OR overruns > 0\n| table host subscription_id avg_latency overruns\n```\n\nUnderstanding this SPL\n\n**OPC-UA Subscription Latency and Sampling Overrun** — High subscription latency or sampling overrun (server missing sample interval) degrades real-time visibility. Tracking supports connector and server tuning.\n\nDocumented **Data sources**: Connector metrics (subscription latency, overrun count). **App/TA** (typical add-on context): OPC-UA gateway, Edge Hub OPC-UA connector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-data, ot; **sourcetype**: opcua_metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-data, index=ot, sourcetype=opcua_metrics. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, subscription_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_latency > 500 OR overruns > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OPC-UA Subscription Latency and Sampling Overrun**): table host subscription_id avg_latency overruns\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency by subscription), Table (overruns by subscription), Single value (max latency).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We time subscriptions and sampling on the industrial server so late or overrun polls do not erode the picture of the process.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
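For 14.5.12 (OPC-UA Subscription Latency and Sampling Overrun), it can be more robust to alert when a subscription breaches the latency target in a sizeable share of intervals rather than on a single bad interval. The 10% threshold is illustrative:

```spl
(index=edge-hub-data OR index=ot) sourcetype=opcua_metrics
| bin _time span=15m
| stats avg(subscription_latency_ms) as avg_latency by host, subscription_id, _time
| eval breach = if(avg_latency > 500, 1, 0)
| stats sum(breach) as breached_intervals, count as total_intervals by host, subscription_id
| eval breach_pct = round(breached_intervals / total_intervals * 100, 1)
| where breach_pct > 10
```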
            {
              "i": "14.5.13",
              "n": "Edge Hub Container Health (MQTT/OPC-UA Modules)",
              "c": "high",
              "f": "intermediate",
              "v": "Edge Hub runs MQTT broker and OPC-UA connector as modules. Container restarts or OOM kills cause data gaps and require investigation.",
              "t": "Splunk Edge Hub (system logs)",
              "d": "`index=edge-hub-logs` (container lifecycle, OOM)",
              "q": "index=edge-hub-logs sourcetype=splunk_edge_hub_log (\"container\" OR \"module\" OR \"oom\" OR \"restart\")\n| search \"mqtt\" OR \"opcua\" OR \"opc-ua\"\n| bin _time span=1h\n| stats count by log_level, message, _time\n| where count > 0\n| table _time log_level message count",
              "m": "Forward Edge Hub system logs. Parse container/module start, stop, and OOM events. Alert on any restart or OOM for MQTT or OPC-UA modules. Correlate with device memory and CPU from edge-hub-health.",
              "z": "Timeline (container events), Table (restart/OOM by module), Single value (modules healthy).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (system logs).\n• Ensure the following data sources are available: `index=edge-hub-logs` (container lifecycle, OOM).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Edge Hub system logs. Parse container/module start, stop, and OOM events. Alert on any restart or OOM for MQTT or OPC-UA modules. Correlate with device memory and CPU from edge-hub-health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edge-hub-logs sourcetype=splunk_edge_hub_log (\"container\" OR \"module\" OR \"oom\" OR \"restart\")\n| search \"mqtt\" OR \"opcua\" OR \"opc-ua\"\n| bin _time span=1h\n| stats count by log_level, message, _time\n| where count > 0\n| table _time log_level message count\n```\n\nUnderstanding this SPL\n\n**Edge Hub Container Health (MQTT/OPC-UA Modules)** — Edge Hub runs MQTT broker and OPC-UA connector as modules. Container restarts or OOM kills cause data gaps and require investigation.\n\nDocumented **Data sources**: `index=edge-hub-logs` (container lifecycle, OOM). **App/TA** (typical add-on context): Splunk Edge Hub (system logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edge-hub-logs; **sourcetype**: splunk_edge_hub_log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edge-hub-logs, sourcetype=splunk_edge_hub_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by log_level, message, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Edge Hub Container Health (MQTT/OPC-UA Modules)**): table _time log_level message count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (container events), Table (restart/OOM by module), Single value (modules healthy).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check the health of edge modules that handle messaging so a bad container does not take down collection on the line.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.5.14",
              "n": "MQTT TLS Handshake and Cipher Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "TLS handshake failures or weak ciphers indicate misconfiguration or downgrade attacks. Monitoring ensures encrypted MQTT and policy compliance.",
              "t": "MQTT broker (TLS log), reverse proxy logs",
              "d": "Broker or proxy TLS logs (handshake, cipher)",
              "q": "index=ot sourcetype=mqtt_tls_log (\"handshake failed\" OR \"certificate verify\" OR \"TLS\")\n| rex \"cipher=(?<cipher>\\S+)|protocol=(?<protocol>\\S+)\"\n| eval modern_tls=if(protocol IN (\"TLSv1.2\",\"TLSv1.3\"), \"yes\", \"no\"),\n       strong_cipher=if(match(cipher, \"^(TLS_AES_|TLS_CHACHA20_|ECDHE-.*-GCM|ECDHE-.*-CHACHA20)\"), \"yes\", \"no\")\n| stats count by cipher, protocol, reason, modern_tls, strong_cipher\n| where modern_tls!=\"yes\" OR strong_cipher!=\"yes\" OR reason=\"failed\"\n| table cipher protocol reason count",
              "m": "Enable TLS logging on MQTT broker or proxy. Ingest handshake success/failure and negotiated cipher/protocol. Alert on handshake failure or use of non-approved ciphers (e.g. block TLS 1.0/1.1 and weak ciphers).",
              "z": "Table (cipher, protocol, failures), Timeline (handshake events), Single value (TLS failures).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: MQTT broker (TLS log), reverse proxy logs.\n• Ensure the following data sources are available: Broker or proxy TLS logs (handshake, cipher).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable TLS logging on MQTT broker or proxy. Ingest handshake success/failure and negotiated cipher/protocol. Alert on handshake failure or use of non-approved ciphers (e.g. block TLS 1.0/1.1 and weak ciphers).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=mqtt_tls_log (\"handshake failed\" OR \"certificate verify\" OR \"TLS\")\n| rex \"cipher=(?<cipher>\\S+)|protocol=(?<protocol>\\S+)\"\n| eval modern_tls=if(protocol IN (\"TLSv1.2\",\"TLSv1.3\"), \"yes\", \"no\"),\n       strong_cipher=if(match(cipher, \"^(TLS_AES_|TLS_CHACHA20_|ECDHE-.*-GCM|ECDHE-.*-CHACHA20)\"), \"yes\", \"no\")\n| stats count by cipher, protocol, reason, modern_tls, strong_cipher\n| where modern_tls!=\"yes\" OR strong_cipher!=\"yes\" OR reason=\"failed\"\n| table cipher protocol reason count\n```\n\nUnderstanding this SPL\n\n**MQTT TLS Handshake and Cipher Compliance** — TLS handshake failures or weak ciphers indicate misconfiguration or downgrade attacks. Monitoring ensures encrypted MQTT and policy compliance.\n\nDocumented **Data sources**: Broker or proxy TLS logs (handshake, cipher). **App/TA** (typical add-on context): MQTT broker (TLS log), reverse proxy logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: mqtt_tls_log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=mqtt_tls_log. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **modern_tls** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by cipher, protocol, reason, modern_tls, strong_cipher** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where modern_tls!=\"yes\" OR strong_cipher!=\"yes\" OR reason=\"failed\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MQTT TLS Handshake and Cipher Compliance**): table cipher protocol reason count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cipher, protocol, failures), Timeline (handshake events), Single value (TLS failures).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track secure handshakes and ciphers on the broker so weak or wrong setups do not pass unnoticed.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.5.15",
              "n": "OPC-UA Write and Permission Denials",
              "c": "high",
              "f": "intermediate",
              "v": "Unauthorized write attempts or permission denials can indicate misconfiguration, abuse, or attack. Auditing supports least-privilege and security review.",
              "t": "OPC-UA server/gateway (audit or security log)",
              "d": "OPC-UA server audit log (write request, status code)",
              "q": "index=ot sourcetype=opcua_audit action=write\n| search status_code!=\"Good\" OR permission_denied\n| bin _time span=1h\n| stats count by client_id, node_id, status_code, _time\n| where count > 0\n| table _time client_id node_id status_code count",
              "m": "Enable OPC-UA server audit or security logging for write requests. Forward to Splunk. Alert on write denials for critical nodes or high volume of denials from a single client. Dashboard writes by client and node.",
              "z": "Table (client, node, status, count), Timeline (write denials), Bar chart (denials by node).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OPC-UA server/gateway (audit or security log).\n• Ensure the following data sources are available: OPC-UA server audit log (write request, status code).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable OPC-UA server audit or security logging for write requests. Forward to Splunk. Alert on write denials for critical nodes or high volume of denials from a single client. Dashboard writes by client and node.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=opcua_audit action=write\n| search status_code!=\"Good\" OR permission_denied\n| bin _time span=1h\n| stats count by client_id, node_id, status_code, _time\n| where count > 0\n| table _time client_id node_id status_code count\n```\n\nUnderstanding this SPL\n\n**OPC-UA Write and Permission Denials** — Unauthorized write attempts or permission denials can indicate misconfiguration, abuse, or attack. Auditing supports least-privilege and security review.\n\nDocumented **Data sources**: OPC-UA server audit log (write request, status code). **App/TA** (typical add-on context): OPC-UA server/gateway (audit or security log). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: opcua_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=opcua_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by client_id, node_id, status_code, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OPC-UA Write and Permission Denials**): table _time client_id node_id status_code count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (client, node, status, count), Timeline (write denials), Bar chart (denials by node).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log denied writes and permission faults so we can tell normal operator limits from something worth investigating.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opcua"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.5.16",
              "n": "HiveMQ Cluster Node Health and Split-Brain Detection",
              "c": "critical",
              "f": "advanced",
              "v": "MQTT broker cluster faults and split-brain conditions fragment subscriptions and retained state; early detection avoids cross-site messaging black holes.",
              "t": "HiveMQ Splunk Extension (SVA), HiveMQ Enterprise logging",
              "d": "`index=ot` `sourcetype=\"hivemq:log\"` cluster/quorum log lines; optional `sourcetype=\"hivemq:metrics\"`",
              "q": "index=ot sourcetype=\"hivemq:log\"\n| rex field=_raw \"(?i)(?<cluster_event>split.?brain|quorum|not enough members|cluster view|partition|lost majority)\"\n| where isnotnull(cluster_event)\n| rex field=_raw \"(?i)node[=\\s]+(?<node_id>[^\\s,;]+)\"\n| stats count by host, node_id, cluster_event\n| sort - count",
              "m": "Forward HiveMQ broker logs with cluster logger categories enabled to Splunk. Normalize host to broker hostname. Alert on any match of split-brain/quorum strings or sudden role flaps.",
              "z": "Timeline (cluster events), Table (event counts by broker), Single value (split-brain indicators in last 24h).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HiveMQ Splunk Extension (SVA), HiveMQ Enterprise logging.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"hivemq:log\"` cluster/quorum log lines; optional `sourcetype=\"hivemq:metrics\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward HiveMQ broker logs with cluster logger categories enabled to Splunk. Normalize host to broker hostname. Alert on any match of split-brain/quorum strings or sudden role flaps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"hivemq:log\"\n| rex field=_raw \"(?i)(?<cluster_event>split.?brain|quorum|not enough members|cluster view|partition|lost majority)\"\n| where isnotnull(cluster_event)\n| rex field=_raw \"(?i)node[=\\s]+(?<node_id>[^\\s,;]+)\"\n| stats count by host, node_id, cluster_event\n| sort - count\n```\n\nUnderstanding this SPL\n\n**HiveMQ Cluster Node Health and Split-Brain Detection** — MQTT broker cluster faults and split-brain conditions fragment subscriptions and retained state; early detection avoids cross-site messaging black holes.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"hivemq:log\"` cluster/quorum log lines; optional `sourcetype=\"hivemq:metrics\"`. **App/TA** (typical add-on context): HiveMQ Splunk Extension (SVA), HiveMQ Enterprise logging. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: hivemq:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"hivemq:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where isnotnull(cluster_event)` — typically the threshold or rule expression for this monitoring goal.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, node_id, cluster_event** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (cluster events), Table (event counts by broker), Single value (split-brain indicators in last 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch cluster health on the managed broker so split brain or a sick node does not cost us availability.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.5.17",
              "n": "MQTT Shared Subscription Load Distribution",
              "c": "medium",
              "f": "advanced",
              "v": "Shared subscriptions should balance load across consumers; skew indicates stuck consumers or broker-side dispatch issues that inflate latency.",
              "t": "HiveMQ Splunk Extension, MQTT Modular Input (Splunkbase 1890)",
              "d": "`index=ot` `sourcetype=\"mqtt:message\"` fields `topic`, optional `subscription_group`",
              "q": "index=ot sourcetype=\"mqtt:message\"\n| eval t=lower(topic)\n| where like(t,\"$share/%\")\n| rex field=topic \"^\\$share\\/(?<share_group>[^/]+)\\/(?<base_topic>.+)\"\n| stats count as msgs by share_group, base_topic\n| eventstats sum(msgs) as total_by_topic by base_topic\n| eval share_pct=round(100*msgs/total_by_topic,2)\n| sort base_topic, -msgs",
              "m": "If the modular input does not preserve `$share/...` in topic, enable broker metrics for shared subscriptions or ingest dispatch logs. For high volume, sample at the broker or pre-aggregate.",
              "z": "Bar chart (messages per share group), Heatmap (group × time), Table (skew: max/min share_pct per base topic).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[MQTT Modular Input](https://splunkbase.splunk.com/app/1890)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HiveMQ Splunk Extension, MQTT Modular Input (Splunkbase 1890).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"mqtt:message\"` fields `topic`, optional `subscription_group`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIf the modular input does not preserve `$share/...` in topic, enable broker metrics for shared subscriptions or ingest dispatch logs. For high volume, sample at the broker or pre-aggregate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"mqtt:message\"\n| eval t=lower(topic)\n| where like(t,\"$share/%\")\n| rex field=topic \"^\\$share\\/(?<share_group>[^/]+)\\/(?<base_topic>.+)\"\n| stats count as msgs by share_group, base_topic\n| eventstats sum(msgs) as total_by_topic by base_topic\n| eval share_pct=round(100*msgs/total_by_topic,2)\n| sort base_topic, -msgs\n```\n\nUnderstanding this SPL\n\n**MQTT Shared Subscription Load Distribution** — Shared subscriptions should balance load across consumers; skew indicates stuck consumers or broker-side dispatch issues that inflate latency.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"mqtt:message\"` fields `topic`, optional `subscription_group`. **App/TA** (typical add-on context): HiveMQ Splunk Extension, MQTT Modular Input (Splunkbase 1890). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: mqtt:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"mqtt:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **t** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where like(t,\"$share/%\")` — typically the threshold or rule expression for this monitoring goal.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by share_group, base_topic** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by base_topic** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **share_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (messages per share group), Heatmap (group × time), Table (skew: max/min share_pct per base topic).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check shared subscription balance so work spreads fairly and one consumer does not starve the rest.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.5.18",
              "n": "HiveMQ Retained Message Store Growth",
              "c": "high",
              "f": "intermediate",
              "v": "Retained messages accumulate with misconfigured publishers; growth risks disk exhaustion and slower broker startup, especially at OT edge with constrained storage.",
              "t": "HiveMQ Splunk Extension (metrics to Splunk)",
              "d": "`index=ot` `sourcetype=\"hivemq:metrics\"` retained message counters",
              "q": "index=ot sourcetype=\"hivemq:metrics\"\n| eval m=lower(coalesce(metric_name, metric, name))\n| where match(m, \"retain\")\n| eval v=coalesce(value, metric_value, _value)\n| bin _time span=1h\n| stats max(v) as retained_max by host, _time\n| timechart span=1h max(retained_max) by host",
              "m": "Map the exact HiveMQ metric name from your Prometheus/SVA mapping. Alert on week-over-week growth or crossing a capacity threshold. Correlate spikes with new devices publishing retained messages on unique topics.",
              "z": "Line chart (retained count over time), Area chart (growth rate), Single value (current max), Table (brokers over threshold).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HiveMQ Splunk Extension (metrics to Splunk).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"hivemq:metrics\"` retained message counters.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap the exact HiveMQ metric name from your Prometheus/SVA mapping. Alert on week-over-week growth or crossing a capacity threshold. Correlate spikes with new devices publishing retained messages on unique topics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"hivemq:metrics\"\n| eval m=lower(coalesce(metric_name, metric, name))\n| where match(m, \"retain\")\n| eval v=coalesce(value, metric_value, _value)\n| bin _time span=1h\n| stats max(v) as retained_max by host, _time\n| timechart span=1h max(retained_max) by host\n```\n\nUnderstanding this SPL\n\n**HiveMQ Retained Message Store Growth** — Retained messages accumulate with misconfigured publishers; growth risks disk exhaustion and slower broker startup, especially at OT edge with constrained storage.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"hivemq:metrics\"` retained message counters. **App/TA** (typical add-on context): HiveMQ Splunk Extension (metrics to Splunk). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: hivemq:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"hivemq:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **m** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(m, \"retain\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **v** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (retained count over time), Area chart (growth rate), Single value (current max), Table (brokers over threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track retained store growth so runaway topics do not fill disk or slow the broker for everyone.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.5.19",
              "n": "MQTT Client Disconnect Reason Analysis",
              "c": "medium",
              "f": "intermediate",
              "v": "Categorized disconnect reasons separate clean shutdowns from timeouts, kicks, and protocol errors — shortening MTTR for unstable OT clients.",
              "t": "HiveMQ Splunk Extension, HiveMQ broker logs",
              "d": "`index=ot` `sourcetype=\"hivemq:log\"` client disconnect lines",
              "q": "index=ot sourcetype=\"hivemq:log\"\n| search disconnect OR DISCONNECT OR \"Client ID\"\n| rex field=_raw \"(?i)client[_ ]?id[:=]\\s*(?<client_id>[^\\s,;]+)\"\n| eval category=case(\n    match(_raw,\"(?i)timeout|idle|keep.?alive\"), \"timeout\",\n    match(_raw,\"(?i)admin|kick|forced\"), \"admin_kick\",\n    match(_raw,\"(?i)reset|eof|closed\"), \"network_error\",\n    match(_raw,\"(?i)not.?authorized|bad.?user\"), \"auth_failure\",\n    true(), \"other\"\n  )\n| stats count by category, client_id, host\n| sort - count",
              "m": "Align rex patterns with HiveMQ log format for your version. If reason codes are numeric, maintain a lookup mapping code to category. Filter out expected maintenance windows.",
              "z": "Bar chart (disconnects by category), Pie chart (category mix), Table (top client_ids), Timeline (disconnect bursts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HiveMQ Splunk Extension, HiveMQ broker logs.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"hivemq:log\"` client disconnect lines.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign rex patterns with HiveMQ log format for your version. If reason codes are numeric, maintain a lookup mapping code to category. Filter out expected maintenance windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"hivemq:log\"\n| search disconnect OR DISCONNECT OR \"Client ID\"\n| rex field=_raw \"(?i)client[_ ]?id[:=]\\s*(?<client_id>[^\\s,;]+)\"\n| eval category=case(\n    match(_raw,\"(?i)timeout|idle|keep.?alive\"), \"timeout\",\n    match(_raw,\"(?i)admin|kick|forced\"), \"admin_kick\",\n    match(_raw,\"(?i)reset|eof|closed\"), \"network_error\",\n    match(_raw,\"(?i)not.?authorized|bad.?user\"), \"auth_failure\",\n    true(), \"other\"\n  )\n| stats count by category, client_id, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**MQTT Client Disconnect Reason Analysis** — Categorized disconnect reasons separate clean shutdowns from timeouts, kicks, and protocol errors — shortening MTTR for unstable OT clients.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"hivemq:log\"` client disconnect lines. **App/TA** (typical add-on context): HiveMQ Splunk Extension, HiveMQ broker logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: hivemq:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"hivemq:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **category** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by category, client_id, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (disconnects by category), Pie chart (category mix), Table (top client_ids), Timeline (disconnect bursts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We read client disconnect reasons so we can tell a clean shutdown from a network or keep-alive problem.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.5.20",
              "n": "HiveMQ Extension Execution Errors",
              "c": "high",
              "f": "intermediate",
              "v": "HiveMQ extensions (enterprise integrations, custom interceptors) can fail independently; tracking exceptions prevents silent drops in policy enforcement and data enrichment.",
              "t": "HiveMQ Splunk Extension, HiveMQ broker logs",
              "d": "`index=ot` `sourcetype=\"hivemq:log\"` extension error lines",
              "q": "index=ot sourcetype=\"hivemq:log\"\n| where match(_raw, \"(?i)extension\") AND (match(_raw, \"(?i)error|exception|failed\") OR match(_raw,\" ERROR \"))\n| rex field=_raw \"(?i)extension[:\\s]+(?<extension_id>[^\\s\\]\\[]+)\"\n| rex field=_raw \"(?i)(?<ex_type>[A-Za-z0-9_.]+Exception)\"\n| stats count by host, extension_id, ex_type\n| sort - count",
              "m": "Ensure HiveMQ log level captures extension exceptions. Create suppressions for known benign stack signatures via a lookup table. Consider separate alerts for WARN vs ERROR thresholds.",
              "z": "Table (top extensions by errors), Line chart (error rate over time), Bar chart (errors by host).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HiveMQ Splunk Extension, HiveMQ broker logs.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"hivemq:log\"` extension error lines.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure HiveMQ log level captures extension exceptions. Create suppressions for known benign stack signatures via a lookup table. Consider separate alerts for WARN vs ERROR thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"hivemq:log\"\n| where match(_raw, \"(?i)extension\") AND (match(_raw, \"(?i)error|exception|failed\") OR match(_raw,\" ERROR \"))\n| rex field=_raw \"(?i)extension[:\\s]+(?<extension_id>[^\\s\\]\\[]+)\"\n| rex field=_raw \"(?i)(?<ex_type>[A-Za-z0-9_.]+Exception)\"\n| stats count by host, extension_id, ex_type\n| sort - count\n```\n\nUnderstanding this SPL\n\n**HiveMQ Extension Execution Errors** — HiveMQ extensions (enterprise integrations, custom interceptors) can fail independently; tracking exceptions prevents silent drops in policy enforcement and data enrichment.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"hivemq:log\"` extension error lines. **App/TA** (typical add-on context): HiveMQ Splunk Extension, HiveMQ broker logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: hivemq:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"hivemq:log\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(_raw, \"(?i)extension\") AND (match(_raw, \"(?i)error|exception|failed\") OR match(_raw,\" ERROR \"))` — typically the threshold or rule expression for this monitoring goal.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, extension_id, ex_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top extensions by errors), Line chart (error rate over time), Bar chart (errors by host).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log extension and plugin errors on the broker so third-party add-ons do not break messaging quietly.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.5.21",
              "n": "MQTT Topic Tree Depth and Fan-Out Analysis",
              "c": "low",
              "f": "intermediate",
              "v": "Deep topic hierarchies and high fan-out increase broker memory and ACL evaluation costs; trending complexity helps right-size clusters and topic design reviews.",
              "t": "MQTT Modular Input (Splunkbase 1890), HiveMQ Splunk Extension",
              "d": "`index=ot` `sourcetype=\"mqtt:message\"` fields `topic`",
              "q": "index=ot sourcetype=\"mqtt:message\"\n| eval depth=mvcount(split(topic,\"/\"))\n| eval parts=split(topic,\"/\")\n| eval root=mvindex(parts,0)\n| stats count as msgs, dc(topic) as unique_topics, max(depth) as max_depth, avg(depth) as avg_depth by root, host\n| sort - unique_topics",
              "m": "For very high message rates, sample or pre-aggregate in HiveMQ metrics. Exclude test topics. Pair with ACL audit if unauthorized deep topics appear.",
              "z": "Bar chart (unique topics by root prefix), Histogram (depth distribution), Table (top fan-out roots).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[MQTT Modular Input](https://splunkbase.splunk.com/app/1890)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: MQTT Modular Input (Splunkbase 1890), HiveMQ Splunk Extension.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"mqtt:message\"` fields `topic`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFor very high message rates, sample or pre-aggregate in HiveMQ metrics. Exclude test topics. Pair with ACL audit if unauthorized deep topics appear.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"mqtt:message\"\n| eval depth=mvcount(split(topic,\"/\"))\n| eval parts=split(topic,\"/\")\n| eval root=mvindex(parts,0)\n| stats count as msgs, dc(topic) as unique_topics, max(depth) as max_depth, avg(depth) as avg_depth by root, host\n| sort - unique_topics\n```\n\nUnderstanding this SPL\n\n**MQTT Topic Tree Depth and Fan-Out Analysis** — Deep topic hierarchies and high fan-out increase broker memory and ACL evaluation costs; trending complexity helps right-size clusters and topic design reviews.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"mqtt:message\"` fields `topic`. **App/TA** (typical add-on context): MQTT Modular Input (Splunkbase 1890), HiveMQ Splunk Extension. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: mqtt:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"mqtt:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **depth** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **parts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **root** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by root, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (unique topics by root prefix), Histogram (depth distribution), Table (top fan-out roots).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We size topic tree depth and fan-out so a messy namespace or runaway auto-topics do not get impossible to run.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.5.22",
              "n": "HiveMQ License Utilization Trending",
              "c": "high",
              "f": "beginner",
              "v": "Connection growth toward license limits causes hard rejections during peaks; trending utilization supports procurement decisions before production brownouts.",
              "t": "HiveMQ Splunk Extension (metrics), HiveMQ license reporting",
              "d": "`index=ot` `sourcetype=\"hivemq:metrics\"` connection count gauges",
              "q": "index=ot sourcetype=\"hivemq:metrics\"\n| eval m=lower(coalesce(metric_name, metric, name))\n| eval v=coalesce(value, metric_value, _value)\n| where match(m, \"connection\") AND match(m,\"current|active|open|established\")\n| bin _time span=5m\n| stats max(v) as connections by host, _time\n| eval license_limit=10000\n| eval utilization_pct=round(100*connections/license_limit,2)\n| where utilization_pct>85",
              "m": "Replace the static `license_limit` with a lookup or environment-specific value. Alert at 85%/95% thresholds with different severities.",
              "z": "Line chart (connections vs limit), Area chart (utilization %), Gauge (current utilization), Table (hosts approaching limit).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884), [TA for Corelight](https://splunkbase.splunk.com/app/3885)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HiveMQ Splunk Extension (metrics), HiveMQ license reporting.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"hivemq:metrics\"` connection count gauges.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nReplace the static `license_limit` with a lookup or environment-specific value. Alert at 85%/95% thresholds with different severities.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"hivemq:metrics\"\n| eval m=lower(coalesce(metric_name, metric, name))\n| eval v=coalesce(value, metric_value, _value)\n| where match(m, \"connection\") AND match(m,\"current|active|open|established\")\n| bin _time span=5m\n| stats max(v) as connections by host, _time\n| eval license_limit=10000\n| eval utilization_pct=round(100*connections/license_limit,2)\n| where utilization_pct>85\n```\n\nUnderstanding this SPL\n\n**HiveMQ License Utilization Trending** — Connection growth toward license limits causes hard rejections during peaks; trending utilization supports procurement decisions before production brownouts.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"hivemq:metrics\"` connection count gauges. **App/TA** (typical add-on context): HiveMQ Splunk Extension (metrics), HiveMQ license reporting. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: hivemq:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"hivemq:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **m** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **v** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(m, \"connection\") AND match(m,\"current|active|open|established\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **license_limit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where utilization_pct>85` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Line chart (connections vs limit), Area chart (utilization %), Gauge (current utilization), Table (hosts approaching limit).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend license and capacity use on the managed broker so growth or limits do not surprise us in production.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.9,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 22,
            "none": 0
          }
        },
        {
          "i": "14.6",
          "n": "Zeek ICS Deep Protocol Inspection",
          "u": [
            {
              "i": "14.6.1",
              "n": "S7comm PLC Read/Write Operation Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Unexpected write-heavy traffic to Siemens PLCs can indicate tampering or mis-engineered automation; tracking read versus write ratios supports least-privilege engineering and early detection of process-impacting changes.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:s7comm:json\"` (ICSNPP `s7comm.log` fields such as `function_name`, `rosctr_name`, `source_h`, `destination_h`)",
              "q": "index=ot sourcetype=\"zeek:s7comm:json\"\n| eval op=case(match(function_name,\"(?i)read\"),\"read\",match(function_name,\"(?i)write\"),\"write\",\"other\")\n| stats count(eval(op==\"read\")) as reads count(eval(op==\"write\")) as writes by source_h destination_h\n| eval write_ratio=if(reads+writes>0, round(100*writes/(reads+writes),2), null())\n| where writes>0 AND (reads=0 OR write_ratio>25)\n| table source_h destination_h reads writes write_ratio",
              "m": "Deploy Zeek with ICSNPP-S7COMM on passive taps or SPAN ports on OT VLANs carrying PLC traffic; forward JSON logs to Splunk with TA for Zeek field extractions. Baseline read/write ratios per engineering workstation pair; tune the write-ratio threshold per zone and alert on off-hours spikes.",
              "z": "Bar chart (writes vs reads by source/destination pair), Single value (max write_ratio), Table (top writers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:s7comm:json\"` (ICSNPP `s7comm.log` fields such as `function_name`, `rosctr_name`, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Zeek with ICSNPP-S7COMM on passive taps or SPAN ports on OT VLANs carrying PLC traffic; forward JSON logs to Splunk with TA for Zeek field extractions. Baseline read/write ratios per engineering workstation pair; tune the write-ratio threshold per zone and alert on off-hours spikes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:s7comm:json\"\n| eval op=case(match(function_name,\"(?i)read\"),\"read\",match(function_name,\"(?i)write\"),\"write\",\"other\")\n| stats count(eval(op==\"read\")) as reads count(eval(op==\"write\")) as writes by source_h destination_h\n| eval write_ratio=if(reads+writes>0, round(100*writes/(reads+writes),2), null())\n| where writes>0 AND (reads=0 OR write_ratio>25)\n| table source_h destination_h reads writes write_ratio\n```\n\nUnderstanding this SPL\n\n**S7comm PLC Read/Write Operation Monitoring** — Unexpected write-heavy traffic to Siemens PLCs can indicate tampering or mis-engineered automation; tracking read versus write ratios supports least-privilege engineering and early detection of process-impacting changes.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:s7comm:json\"` (ICSNPP `s7comm.log` fields such as `function_name`, `rosctr_name`, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:s7comm:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:s7comm:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **op** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by source_h destination_h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **write_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where writes>0 AND (reads=0 OR write_ratio>25)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **S7comm PLC Read/Write Operation Monitoring**): table source_h destination_h reads writes write_ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (writes vs reads by source/destination pair), Single value (max write_ratio), Table (top writers).",
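              "_tuningExample": "Hedged sketch of the off-hours angle in the implementation notes: S7 write operations outside a working window are counted so night-time spikes stand out. The 06:00 to 18:00 window is an assumption; align it with the site's shift pattern before alerting.\n\n```spl\nindex=ot sourcetype=\"zeek:s7comm:json\"\n| eval op=case(match(function_name,\"(?i)read\"),\"read\",match(function_name,\"(?i)write\"),\"write\",\"other\")\n| eval hour=tonumber(strftime(_time,\"%H\"))\n| where op==\"write\" AND NOT (hour>=6 AND hour<18)\n| stats count as offhours_writes by source_h destination_h\n| where offhours_writes>0\n| sort - offhours_writes\n```",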
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection of industrial control traffic to compare reads and writes to our controllers so unexpected write-heavy patterns stand out.",
              "mtype": [
                "Security",
                "Performance",
                "Change"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.6.2",
              "n": "S7comm Program Upload/Download Detection",
              "c": "critical",
              "f": "advanced",
              "v": "PLC program upload/download changes process logic and safety envelopes; correlating these events with change tickets prevents unauthorized logic swaps that could disrupt operations or bypass interlocks.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:s7comm:json\"` (ICSNPP `s7comm_upload_download.log`: `function_code`, `filename`, `block_type`, `block_number`, `source_h`, `destination_h`)",
              "q": "index=ot sourcetype=\"zeek:s7comm:json\"\n| eval fc_hex=replace(function_code,\"0x\",\"\")\n| eval fc=coalesce(tonumber(fc_hex,16), tonumber(function_code))\n| where fc IN (26,27) OR function_code IN (\"0x1a\",\"0x1b\",\"26\",\"27\") OR isnotnull(filename)\n| stats earliest(_time) as first_seen latest(_time) as last_seen values(filename) as filenames values(block_type) as block_types values(block_number) as block_numbers by source_h destination_h uid\n| table first_seen last_seen source_h destination_h filenames block_types block_numbers",
              "m": "Enable ICSNPP upload/download logging on Zeek sensors at OT taps; ingest into `index=ot`. Map `function_code` 0x1a/0x1b (decimal 26/27) to engineering change workflows; require ticket IDs in SOAR for acknowledged maintenance windows.",
              "z": "Timeline (upload/download events), Table (filename, block, endpoints), Single value (events in last 24h).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:s7comm:json\"` (ICSNPP `s7comm_upload_download.log`: `function_code`, `filename`, `block_type`, `block_number`, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable ICSNPP upload/download logging on Zeek sensors at OT taps; ingest into `index=ot`. Map `function_code` 0x1a/0x1b (decimal 26/27) to engineering change workflows; require ticket IDs in SOAR for acknowledged maintenance windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:s7comm:json\"\n| eval fc_hex=replace(function_code,\"0x\",\"\")\n| eval fc=coalesce(tonumber(fc_hex,16), tonumber(function_code))\n| where fc IN (26,27) OR function_code IN (\"0x1a\",\"0x1b\",\"26\",\"27\") OR isnotnull(filename)\n| stats earliest(_time) as first_seen latest(_time) as last_seen values(filename) as filenames values(block_type) as block_types values(block_number) as block_numbers by source_h destination_h uid\n| table first_seen last_seen source_h destination_h filenames block_types block_numbers\n```\n\nUnderstanding this SPL\n\n**S7comm Program Upload/Download Detection** — PLC program upload/download changes process logic and safety envelopes; correlating these events with change tickets prevents unauthorized logic swaps that could disrupt operations or bypass interlocks.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:s7comm:json\"` (ICSNPP `s7comm_upload_download.log`: `function_code`, `filename`, `block_type`, `block_number`, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:s7comm:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:s7comm:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fc_hex** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **fc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fc IN (26,27) OR function_code IN (\"0x1a\",\"0x1b\",\"26\",\"27\") OR isnotnull(filename)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by source_h destination_h uid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **S7comm Program Upload/Download Detection**): table first_seen last_seen source_h destination_h filenames block_types block_numbers\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (upload/download events), Table (filename, block, endpoints), Single value (events in last 24h).",
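              "_tuningExample": "Hedged sketch of the change-correlation idea in the implementation notes: transfer events are dropped when they fall inside an approved window. It assumes a simplified lookup `ot_change_windows` keyed by `destination_h` with epoch columns `window_start` and `window_end` (one window per asset); adapt the join to however your change data is actually stored.\n\n```spl\nindex=ot sourcetype=\"zeek:s7comm:json\"\n| where function_code IN (\"0x1a\",\"0x1b\",\"26\",\"27\") OR isnotnull(filename)\n| lookup ot_change_windows destination_h OUTPUT window_start window_end\n| where isnull(window_start) OR _time<window_start OR _time>window_end\n| stats earliest(_time) as first_seen values(filename) as filenames by source_h destination_h\n```",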
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to flag program transfer sessions so we can tie uploads and downloads to approved changes.",
              "mtype": [
                "Security",
                "Change",
                "Compliance"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.6.3",
              "n": "S7comm CPU State Change Detection",
              "c": "high",
              "f": "advanced",
              "v": "CPU stop/start or mode transitions can halt a line or leave a PLC in an unsafe state; detecting them from the wire supports both troubleshooting and detection of malicious or accidental operational disruption.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:s7comm:json\"` (`rosctr_name`, `subfunction_name`, `function_name`, `error_class`)",
              "q": "index=ot sourcetype=\"zeek:s7comm:json\"\n| where match(subfunction_name,\"(?i)STOP|START|RESUME\") OR match(rosctr_name,\"(?i)STOP|RUN|HOLD\") OR match(function_name,\"(?i)PLC|mode|cpu\")\n| stats count by source_h destination_h rosctr_name subfunction_name function_name\n| sort - count\n| head 100",
              "m": "Place Zeek on taps facing S7 controllers and HMIs; normalize `subfunction_name`/`rosctr_name` strings from production captures. Alert on stop/start patterns outside approved maintenance; correlate with MES/SCADA alarms for the same asset.",
              "z": "Timeline (state-related messages), Bar chart (count by subfunction_name), Table (source, destination, fields).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:s7comm:json\"` (`rosctr_name`, `subfunction_name`, `function_name`, `error_class`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPlace Zeek on taps facing S7 controllers and HMIs; normalize `subfunction_name`/`rosctr_name` strings from production captures. Alert on stop/start patterns outside approved maintenance; correlate with MES/SCADA alarms for the same asset.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:s7comm:json\"\n| where match(subfunction_name,\"(?i)STOP|START|RESUME\") OR match(rosctr_name,\"(?i)STOP|RUN|HOLD\") OR match(function_name,\"(?i)PLC|mode|cpu\")\n| stats count by source_h destination_h rosctr_name subfunction_name function_name\n| sort - count\n| head 100\n```\n\nUnderstanding this SPL\n\n**S7comm CPU State Change Detection** — CPU stop/start or mode transitions can halt a line or leave a PLC in an unsafe state; detecting them from the wire supports both troubleshooting and detection of malicious or accidental operational disruption.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:s7comm:json\"` (`rosctr_name`, `subfunction_name`, `function_name`, `error_class`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:s7comm:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:s7comm:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(subfunction_name,\"(?i)STOP|START|RESUME\") OR match(rosctr_name,\"(?i)STOP|RUN|HOLD\") OR match(function_name,\"(?i…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by source_h destination_h rosctr_name subfunction_name function_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (state-related messages), Bar chart (count by subfunction_name), Table (source, destination, fields).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to see run-state and mode changes on our controllers so we can separate routine stops from something odd.",
              "mtype": [
                "Fault",
                "Security",
                "Availability"
              ],
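              "_tuningExample": "Hedged sketch for narrowing alerting to unexpected sources, per the implementation notes: stop/start related traffic from hosts missing from an engineering allowlist. The lookup `s7_engineering_hosts` (single column `source_h`) is an assumption; seed it from the asset inventory.\n\n```spl\nindex=ot sourcetype=\"zeek:s7comm:json\"\n| where match(subfunction_name,\"(?i)STOP|START|RESUME\") OR match(rosctr_name,\"(?i)STOP|RUN|HOLD\")\n| search NOT [| inputlookup s7_engineering_hosts | fields source_h]\n| stats count earliest(_time) as first_seen by source_h destination_h subfunction_name rosctr_name\n```",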
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.6.4",
              "n": "S7comm Unauthorized Function Block Access",
              "c": "high",
              "f": "advanced",
              "v": "Access attempts to protected OB/FB/FC blocks may indicate credential abuse or ladder tampering; monitoring errors and targeted function names supports defense-in-depth around safety-related code.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:s7comm:json\"` (`function_name`, `subfunction_name`, `error_class`, `error_code`)",
              "q": "index=ot sourcetype=\"zeek:s7comm:json\"\n| where (match(function_name,\"(?i)block|OB|FB|FC|SFB\") OR match(subfunction_name,\"(?i)block\"))\n| where (isnotnull(error_code) AND error_code!=\"0x0000\") OR (isnotnull(error_class) AND NOT error_class IN (\"NONE\",\"-\",\"\"))\n| stats count by source_h destination_h function_name subfunction_name error_class error_code\n| where count>=3\n| sort - count",
              "m": "Deploy Zeek ICSNPP on segments with S7 controllers; build an allowlist of engineering hosts permitted to access safety-related blocks. Tune minimum event counts to suppress single-bit noise; integrate with asset inventory for PLC roles.",
              "z": "Table (source, destination, function, error), Bar chart (errors by source_h), Timeline (clusters of denied access).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:s7comm:json\"` (`function_name`, `subfunction_name`, `error_class`, `error_code`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Zeek ICSNPP on segments with S7 controllers; build an allowlist of engineering hosts permitted to access safety-related blocks. Tune minimum event counts to suppress single-bit noise; integrate with asset inventory for PLC roles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:s7comm:json\"\n| where (match(function_name,\"(?i)block|OB|FB|FC|SFB\") OR match(subfunction_name,\"(?i)block\"))\n| where (isnotnull(error_code) AND error_code!=\"0x0000\") OR (isnotnull(error_class) AND NOT error_class IN (\"NONE\",\"-\",\"\"))\n| stats count by source_h destination_h function_name subfunction_name error_class error_code\n| where count>=3\n| sort - count\n```\n\nUnderstanding this SPL\n\n**S7comm Unauthorized Function Block Access** — Access attempts to protected OB/FB/FC blocks may indicate credential abuse or ladder tampering; monitoring errors and targeted function names supports defense-in-depth around safety-related code.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:s7comm:json\"` (`function_name`, `subfunction_name`, `error_class`, `error_code`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:s7comm:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:s7comm:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where (match(function_name,\"(?i)block|OB|FB|FC|SFB\") OR match(subfunction_name,\"(?i)block\"))` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where (isnotnull(error_code) AND error_code!=\"0x0000\") OR (isnotnull(error_class) AND NOT error_class IN (\"NONE\",\"-\",\"\"))` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by source_h destination_h function_name subfunction_name error_class error_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>=3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (source, destination, function, error), Bar chart (errors by source_h), Timeline (clusters of denied access).",
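              "_tuningExample": "Hedged sketch that turns the audit into a burst alert, following the implementation notes: block-access errors are bucketed into 15-minute windows and allowlisted engineering hosts are excluded. The `s7_engineering_hosts` lookup, the span, and the threshold are assumptions to tune.\n\n```spl\nindex=ot sourcetype=\"zeek:s7comm:json\"\n| where match(function_name,\"(?i)block|OB|FB|FC|SFB\") AND isnotnull(error_code) AND error_code!=\"0x0000\"\n| search NOT [| inputlookup s7_engineering_hosts | fields source_h]\n| bin _time span=15m\n| stats count by _time source_h destination_h function_name\n| where count>=3\n```",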
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to watch which program blocks are touched so unexpected access to logic surfaces quickly.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.6.5",
              "n": "Modbus Function Code Distribution Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Diagnostics and coil forcing are high-impact Modbus operations; a sudden shift in function-code mix can signal scanning, misuse, or a compromised HMI.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:modbus_detailed:json\"` (`func`, `unit`, `source_h`, `destination_h`)",
              "q": "index=ot sourcetype=\"zeek:modbus_detailed:json\"\n| eval fc=upper(trim(func))\n| stats count by fc source_h destination_h\n| eval risky=if(fc IN (\"0x08\",\"8\",\"0x05\",\"5\",\"0x0F\",\"15\"),1,0)\n| where risky=1 OR count>1000\n| sort - count",
              "m": "Ingest ICSNPP `modbus_detailed` logs from Zeek sensors on Modbus TCP segments. Establish weekly baselines per RTU; alert when diagnostics (0x08) or force/write function codes spike versus baseline or appear from new masters.",
              "z": "Pie or bar chart (function code distribution), Table (risky FC by master), Timeline (spikes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:modbus_detailed:json\"` (`func`, `unit`, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest ICSNPP `modbus_detailed` logs from Zeek sensors on Modbus TCP segments. Establish weekly baselines per RTU; alert when diagnostics (0x08) or force/write function codes spike versus baseline or appear from new masters.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:modbus_detailed:json\"\n| eval fc=upper(trim(func))\n| stats count by fc source_h destination_h\n| eval risky=if(fc IN (\"0x08\",\"8\",\"0x05\",\"5\",\"0x0F\",\"15\"),1,0)\n| where risky=1 OR count>1000\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Modbus Function Code Distribution Audit** — Diagnostics and coil forcing are high-impact Modbus operations; a sudden shift in function-code mix can signal scanning, misuse, or a compromised HMI.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:modbus_detailed:json\"` (`func`, `unit`, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:modbus_detailed:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:modbus_detailed:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by fc source_h destination_h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **risky** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where risky=1 OR count>1000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie or bar chart (function code distribution), Table (risky FC by master), Timeline (spikes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to chart function codes on Modbus so odd commands or new devices do not blend into background noise.",
              "mtype": [
                "Security",
                "Compliance"
              ],
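              "_tuningExample": "Hedged sketch of the baseline comparison in the implementation notes: yesterday's function-code counts per master are compared against the average of the prior seven days, which also surfaces codes appearing from brand-new sources. Time boundaries and the 2x multiplier are placeholders.\n\n```spl\nindex=ot sourcetype=\"zeek:modbus_detailed:json\" earliest=-8d@d latest=@d\n| eval fc=upper(trim(func))\n| eval period=if(_time>=relative_time(now(),\"-1d@d\"),\"recent\",\"baseline\")\n| stats count by period fc source_h\n| eventstats sum(eval(if(period==\"baseline\",count,0))) as baseline_total by fc source_h\n| where period==\"recent\" AND count > 2*(baseline_total/7)\n```",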
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.6.6",
              "n": "Modbus Register Value Change Tracking",
              "c": "critical",
              "f": "advanced",
              "v": "Covert changes to holding registers can alter setpoints or interlocks; comparing matched request/response values highlights tampering distinct from normal operator writes.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:modbus_detailed:json\"` (`func`, `address`, `unit`, `request_values`, `response_values`, `matched`, `network_direction`)",
              "q": "index=ot sourcetype=\"zeek:modbus_detailed:json\" matched=true\n| where func IN (\"0x06\",\"6\",\"0x10\",\"16\")\n| eval reg_key=destination_h.\"|\".unit.\"|\".address\n| sort 0 reg_key _time\n| streamstats window=2 global=f earliest(response_values) as earlier_resp latest(response_values) as later_resp by reg_key\n| where isnotnull(earlier_resp) AND mvjoin(earlier_resp,\",\")!=mvjoin(later_resp,\",\")\n| table _time source_h destination_h unit address func earlier_resp later_resp request_values",
              "m": "Deploy Zeek with ICSNPP-Modbus on taps near RTUs; ensure `request_values`/`response_values` are indexed. Focus on critical register ranges from asset documentation; schedule correlation with historian trends for validation.",
              "z": "Table (register_key, old vs new values), Timeline (changes), Line chart (change rate per hour).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:modbus_detailed:json\"` (`func`, `address`, `unit`, `request_values`, `response_values`, `matched`, `network_direction`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Zeek with ICSNPP-Modbus on taps near RTUs; ensure `request_values`/`response_values` are indexed. Focus on critical register ranges from asset documentation; schedule correlation with historian trends for validation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:modbus_detailed:json\" matched=true\n| where func IN (\"0x06\",\"6\",\"0x10\",\"16\")\n| eval reg_key=destination_h.\"|\".unit.\"|\".address\n| sort 0 reg_key _time\n| streamstats window=2 global=f earliest(response_values) as earlier_resp latest(response_values) as later_resp by reg_key\n| where isnotnull(earlier_resp) AND mvjoin(earlier_resp,\",\")!=mvjoin(later_resp,\",\")\n| table _time source_h destination_h unit address func earlier_resp later_resp request_values\n```\n\nUnderstanding this SPL\n\n**Modbus Register Value Change Tracking** — Covert changes to holding registers can alter setpoints or interlocks; comparing matched request/response values highlights tampering distinct from normal operator writes.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:modbus_detailed:json\"` (`func`, `address`, `unit`, `request_values`, `response_values`, `matched`, `network_direction`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:modbus_detailed:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:modbus_detailed:json\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where func IN (\"0x06\",\"6\",\"0x10\",\"16\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **reg_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` computes running statistics over the sorted events; here a two-event window **by reg_key** keeps the previous and latest `response_values` for each register so consecutive writes can be compared.\n• Filters the current rows with `where isnotnull(earlier_resp) AND mvjoin(earlier_resp,\",\")!=mvjoin(later_resp,\",\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Modbus Register Value Change Tracking**): table _time source_h destination_h unit address func earlier_resp later_resp request_values\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (register_key, old vs new values), Timeline (changes), Line chart (change rate per hour).",
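              "_tuningExample": "Hedged sketch of the 'critical register ranges' note in the implementation guidance: write events are kept only when the register is tagged in an asset lookup. The lookup `modbus_critical_registers` and its columns (`destination_h`, `unit`, `address`, `description`) are assumptions drawn from asset documentation.\n\n```spl\nindex=ot sourcetype=\"zeek:modbus_detailed:json\" matched=true\n| where func IN (\"0x06\",\"6\",\"0x10\",\"16\")\n| lookup modbus_critical_registers destination_h unit address OUTPUT description\n| where isnotnull(description)\n| table _time source_h destination_h unit address func request_values response_values description\n```",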
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to follow register value changes so we can audit what moved and when in the process.",
              "mtype": [
                "Security",
                "Change",
                "Fault"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-14.6.6: Modbus Register Value Change Tracking.",
                  "ea": "Saved search 'UC-14.6.6' running on sourcetype zeek:modbus_detailed:json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.api.org/products-and-services/standards/important-standards-announcements/standard-1164"
                },
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "FR 6.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 FR 6.2 (Continuous monitoring) is enforced — Splunk UC-14.6.6: Modbus Register Value Change Tracking.",
                  "ea": "Saved search 'UC-14.6.6' running on sourcetype zeek:modbus_detailed:json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.isa.org/standards-and-publications/isa-standards/isa-iec-62443-series-of-standards"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-010-4 R1 (Configuration change management) is enforced — Splunk UC-14.6.6: Modbus Register Value Change Tracking.",
                  "ea": "Saved search 'UC-14.6.6' running on sourcetype zeek:modbus_detailed:json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                }
              ],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.6.7",
              "n": "Modbus Device Identification Enumeration (FC 43 / 0x2B)",
              "c": "high",
              "f": "intermediate",
              "v": "Read Device Identification (FC 0x2B) is a common reconnaissance step; bursts from non-inventory hosts often precede targeted attacks or rogue integration attempts.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:modbus_detailed:json\"` or `sourcetype=\"zeek:modbus_read_device_identification:json\"` (`func`, `mei_type`, `object_id`, `source_h`, `destination_h`)",
              "q": "index=ot (sourcetype=\"zeek:modbus_detailed:json\" OR sourcetype=\"zeek:modbus_read_device_identification:json\")\n| where func IN (\"0x2B\",\"43\",\"0x2b\") OR mei_type=\"READ-DEVICE-IDENTIFICATION\" OR isnotnull(object_id)\n| stats count dc(destination_h) as targets by source_h\n| where count>20 OR targets>5\n| sort - count",
              "m": "Forward ICSNPP Modbus detailed and read-device-identification logs from OT VLAN taps. Allowlist asset-management scanners; alert on new sources or high fan-out to many slaves in a short window.",
              "z": "Bar chart (enumeration events by source), Table (source, target count), Map or table (distinct targets).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:modbus_detailed:json\"` or `sourcetype=\"zeek:modbus_read_device_identification:json\"` (`func`, `mei_type`, `object_id`, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward ICSNPP Modbus detailed and read-device-identification logs from OT VLAN taps. Allowlist asset-management scanners; alert on new sources or high fan-out to many slaves in a short window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot (sourcetype=\"zeek:modbus_detailed:json\" OR sourcetype=\"zeek:modbus_read_device_identification:json\")\n| where func IN (\"0x2B\",\"43\",\"0x2b\") OR mei_type=\"READ-DEVICE-IDENTIFICATION\" OR isnotnull(object_id)\n| stats count dc(destination_h) as targets by source_h\n| where count>20 OR targets>5\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Modbus Device Identification Enumeration (FC 43 / 0x2B)** — Read Device Identification (FC 0x2B) is a common reconnaissance step; bursts from non-inventory hosts often precede targeted attacks or rogue integration attempts.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:modbus_detailed:json\"` or `sourcetype=\"zeek:modbus_read_device_identification:json\"` (`func`, `mei_type`, `object_id`, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:modbus_detailed:json, zeek:modbus_read_device_identification:json. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:modbus_detailed:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where func IN (\"0x2B\",\"43\",\"0x2b\") OR mei_type=\"READ-DEVICE-IDENTIFICATION\" OR isnotnull(object_id)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by source_h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>20 OR targets>5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Bar chart (enumeration events by source), Table (source, target count), Map or table (distinct targets).",
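              "_tuningExample": "Hedged sketch of the 'new sources' alert in the implementation notes: a host is flagged when its first Read Device Identification request falls inside the last day of a 30-day search window. The 30d/1d windows are placeholders, and known asset-management scanners should be allowlisted first.\n\n```spl\nindex=ot (sourcetype=\"zeek:modbus_detailed:json\" OR sourcetype=\"zeek:modbus_read_device_identification:json\") earliest=-30d\n| where func IN (\"0x2B\",\"43\",\"0x2b\") OR mei_type=\"READ-DEVICE-IDENTIFICATION\" OR isnotnull(object_id)\n| stats earliest(_time) as first_seen dc(destination_h) as targets by source_h\n| where first_seen>=relative_time(now(),\"-1d\")\n```",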
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to see identification and enumeration on Modbus so asset sweeps and odd scanners show up.",
              "mtype": [
                "Security",
                "Fault"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.6.8",
              "n": "DNP3 Unsolicited Response Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Unsolicited responses carry event-driven telemetry; abnormal volume or timing can indicate flooding, misconfiguration, or spoofed outstations affecting SCADA visibility.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:dnp3:json\"` (Zeek `dnp3.log` / application function text, `source_h`, `destination_h`)",
              "q": "index=ot sourcetype=\"zeek:dnp3:json\"\n| where match(_raw,\"UNSOLICITED\") OR function=\"UNSOLICITED_RESPONSE\"\n| bin _time span=1m\n| stats count by _time source_h destination_h\n| eventstats median(count) as med by destination_h\n| where count > med*3 AND count > 10\n| table _time source_h destination_h count med",
              "m": "Deploy Zeek with DNP3 on serial-Ethernet gateways’ segments; verify `function` or raw tokens for unsolicited responses in your build. Baseline per-master/outstation pair; alert on bursts that exceed rolling median.",
              "z": "Timeline (unsolicited rate), Line chart (count vs median), Table (spikes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:dnp3:json\"` (Zeek `dnp3.log` / application function text, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Zeek with DNP3 on serial-Ethernet gateways’ segments; verify `function` or raw tokens for unsolicited responses in your build. Baseline per-master/outstation pair; alert on bursts that exceed rolling median.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:dnp3:json\"\n| where match(_raw,\"UNSOLICITED\") OR function=\"UNSOLICITED_RESPONSE\"\n| bin _time span=1m\n| stats count by _time source_h destination_h\n| eventstats median(count) as med by destination_h\n| where count > med*3 AND count > 10\n| table _time source_h destination_h count med\n```\n\nUnderstanding this SPL\n\n**DNP3 Unsolicited Response Monitoring** — Unsolicited responses carry event-driven telemetry; abnormal volume or timing can indicate flooding, misconfiguration, or spoofed outstations affecting SCADA visibility.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:dnp3:json\"` (Zeek `dnp3.log` / application function text, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:dnp3:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:dnp3:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(_raw,\"UNSOLICITED\") OR function=\"UNSOLICITED_RESPONSE\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time source_h destination_h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by destination_h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > med*3 AND count > 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DNP3 Unsolicited Response Monitoring**): table _time source_h destination_h count med\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (unsolicited rate), Line chart (count vs median), Table (spikes).",
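              "_tuningExample": "Hedged sketch of the rolling-baseline idea in the implementation notes, replacing the whole-window median with a per-pair moving average over roughly the previous hour. window=60 assumes the 1-minute bins are mostly contiguous; widen the span or window for sparse traffic.\n\n```spl\nindex=ot sourcetype=\"zeek:dnp3:json\"\n| where match(_raw,\"UNSOLICITED\") OR function=\"UNSOLICITED_RESPONSE\"\n| bin _time span=1m\n| stats count by _time source_h destination_h\n| sort 0 source_h destination_h _time\n| streamstats window=60 current=f avg(count) as rolling_avg by source_h destination_h\n| where count > rolling_avg*3 AND count > 10\n```",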
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to monitor unsolicited and exception traffic on DNP3 so a failing link or hot RTU is obvious.",
              "mtype": [
                "Performance",
                "Fault",
                "Security"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.6.9",
              "n": "DNP3 Control Relay Output Block (CROB) Tracking",
              "c": "critical",
              "f": "advanced",
              "v": "CROB select/operate sequences directly actuate breakers and outputs; a full audit trail is required for NERC CIP-style reviews and post-incident forensics.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:dnp3:json\"` (ICSNPP `dnp3_control.log`: `block_type`, `function_code`, `operation_type`, `index_number`, `trip_control_code`, `status_code`)",
              "q": "index=ot sourcetype=\"zeek:dnp3:json\" block_type=\"Control_Relay_Output_Block\"\n| stats values(function_code) as phases values(operation_type) as ops values(trip_control_code) as trips latest(status_code) as last_status by _time source_h destination_h index_number uid\n| table _time source_h destination_h index_number phases ops trips last_status",
              "m": "Enable ICSNPP-DNP3 control logging on Zeek sensors facing RTU/MTU paths. Ingest `dnp3_control` fields; map `index_number` to one-line diagrams. Require change correlation for OPERATE phases outside maintenance.",
              "z": "Table (full CROB audit), Timeline (SELECT vs OPERATE), Bar chart (operates by index_number).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:dnp3:json\"` (ICSNPP `dnp3_control.log`: `block_type`, `function_code`, `operation_type`, `index_number`, `trip_control_code`, `status_code`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable ICSNPP-DNP3 control logging on Zeek sensors facing RTU/MTU paths. Ingest `dnp3_control` fields; map `index_number` to one-line diagrams. Require change correlation for OPERATE phases outside maintenance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:dnp3:json\" block_type=\"Control_Relay_Output_Block\"\n| stats values(function_code) as phases values(operation_type) as ops values(trip_control_code) as trips latest(status_code) as last_status by _time source_h destination_h index_number uid\n| table _time source_h destination_h index_number phases ops trips last_status\n```\n\nUnderstanding this SPL\n\n**DNP3 Control Relay Output Block (CROB) Tracking** — CROB select/operate sequences directly actuate breakers and outputs; a full audit trail is required for NERC CIP-style reviews and post-incident forensics.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:dnp3:json\"` (ICSNPP `dnp3_control.log`: `block_type`, `function_code`, `operation_type`, `index_number`, `trip_control_code`, `status_code`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:dnp3:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:dnp3:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by _time source_h destination_h index_number uid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **DNP3 Control Relay Output Block (CROB) Tracking**): table _time source_h destination_h index_number phases ops trips last_status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (full CROB audit), Timeline (SELECT vs OPERATE), Bar chart (operates by index_number).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to track control relay and binary output actions so remote commands line up with what we expect.",
              "mtype": [
                "Security",
                "Compliance",
                "Change"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
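Optional companion sketch for the CROB tracking use case above (editor addition, not part of the catalog entry): it narrows the documented search to CROB traffic from masters that are not on an allow-list, supporting the change-correlation guidance in the implementation notes. The lookup name `approved_dnp3_masters.csv` is a placeholder you would maintain yourself.

```spl
index=ot sourcetype="zeek:dnp3:json" block_type="Control_Relay_Output_Block"
| search NOT [| inputlookup approved_dnp3_masters.csv | fields source_h]
| stats count values(function_code) as phases values(operation_type) as ops by source_h destination_h index_number
| sort - count
```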
            {
              "i": "14.6.10",
              "n": "DNP3 Cold/Warm Restart Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Restart commands to outstations reset application context and can interrupt protection; unexpected restarts may follow malware or operator error.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:dnp3:json\"` (`function`, `object_type`, `source_h`, `destination_h`)",
              "q": "index=ot sourcetype=\"zeek:dnp3:json\"\n| where match(_raw,\"(?i)COLD_RESTART|WARM_RESTART\") OR match(function,\"(?i)COLD_RESTART|WARM_RESTART\") OR match(object_type,\"(?i)COLD_RESTART|WARM_RESTART\")\n| stats earliest(_time) as evt_time values(function) as fn values(object_type) as ot by source_h destination_h\n| table evt_time source_h destination_h fn ot",
              "m": "Capture DNP3 on links to critical RTUs via network tap; confirm field names (`function` vs `object_type`) against a sample capture. Alert any restart from non-master IPs or outside approved windows.",
              "z": "Timeline (restart events), Table (source, destination, function), Single value (restarts per day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:dnp3:json\"` (`function`, `object_type`, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture DNP3 on links to critical RTUs via network tap; confirm field names (`function` vs `object_type`) against a sample capture. Alert any restart from non-master IPs or outside approved windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:dnp3:json\"\n| where match(_raw,\"(?i)COLD_RESTART|WARM_RESTART\") OR match(function,\"(?i)COLD_RESTART|WARM_RESTART\") OR match(object_type,\"(?i)COLD_RESTART|WARM_RESTART\")\n| stats earliest(_time) as evt_time values(function) as fn values(object_type) as ot by source_h destination_h\n| table evt_time source_h destination_h fn ot\n```\n\nUnderstanding this SPL\n\n**DNP3 Cold/Warm Restart Detection** — Restart commands to outstations reset application context and can interrupt protection; unexpected restarts may follow malware or operator error.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:dnp3:json\"` (`function`, `object_type`, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:dnp3:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:dnp3:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(_raw,\"(?i)COLD_RESTART|WARM_RESTART\") OR match(function,\"(?i)COLD_RESTART|WARM_RESTART\") OR match(object_type,\"…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by source_h destination_h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **DNP3 Cold/Warm Restart Detection**): table evt_time source_h destination_h fn ot\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (restart events), Table (source, destination, function), Single value (restarts per day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to see cold and warm restarts in substation telecontrol so we can line up process events with changes.",
              "mtype": [
                "Fault",
                "Security",
                "Availability"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
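Optional companion sketch for the restart-detection use case above (editor addition): a daily trend that feeds the documented "restarts per day" single-value panel, assuming the same `function` and `object_type` fields as the main search.

```spl
index=ot sourcetype="zeek:dnp3:json"
| where match(function,"(?i)COLD_RESTART|WARM_RESTART") OR match(object_type,"(?i)COLD_RESTART|WARM_RESTART")
| timechart span=1d count as restarts
```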
            {
              "i": "14.6.11",
              "n": "EtherNet/IP CIP Service Request Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Unusual CIP services (e.g., configuration writes) against controllers can precede firmware or logic manipulation; service baselines highlight drift from normal automation behavior.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:enip:json\"` (`cip_service`, `cip_service_code`, `class_name`, `direction`, `source_h`, `destination_h`)",
              "q": "index=ot sourcetype=\"zeek:enip:json\" direction=\"Request\"\n| stats count by cip_service class_name source_h destination_h\n| eventstats sum(count) as svc_total by destination_h\n| eval pct=round(100*count/svc_total,3)\n| where cip_service IN (\"Set_Attribute_Single\",\"Reset\",\"Delete\") OR pct<0.1\n| sort destination_h - pct\n| table destination_h cip_service class_name count pct source_h",
              "m": "Deploy ICSNPP-ENIP on Zeek at CIP/EtherNet/IP taps (ports 2222/44818 per policy). Build per-controller service profiles; alert on rare services or configuration-class access from non-engineering hosts.",
              "z": "Bar chart (CIP service mix), Table (rare services), Heatmap (service by source).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:enip:json\"` (`cip_service`, `cip_service_code`, `class_name`, `direction`, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy ICSNPP-ENIP on Zeek at CIP/EtherNet/IP taps (ports 2222/44818 per policy). Build per-controller service profiles; alert on rare services or configuration-class access from non-engineering hosts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:enip:json\" direction=\"Request\"\n| stats count by cip_service class_name source_h destination_h\n| eventstats sum(count) as svc_total by destination_h\n| eval pct=round(100*count/svc_total,3)\n| where cip_service IN (\"Set_Attribute_Single\",\"Reset\",\"Delete\") OR pct<0.1\n| sort destination_h - pct\n| table destination_h cip_service class_name count pct source_h\n```\n\nUnderstanding this SPL\n\n**EtherNet/IP CIP Service Request Audit** — Unusual CIP services (e.g., configuration writes) against controllers can precede firmware or logic manipulation; service baselines highlight drift from normal automation behavior.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:enip:json\"` (`cip_service`, `cip_service_code`, `class_name`, `direction`, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:enip:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:enip:json\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cip_service class_name source_h destination_h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by destination_h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cip_service IN (\"Set_Attribute_Single\",\"Reset\",\"Delete\") OR pct<0.1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **EtherNet/IP CIP Service Request Audit**): table destination_h cip_service class_name count pct source_h\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (CIP service mix), Table (rare services), Heatmap (service by source).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to audit CIP and EtherNet/IP service use so non-routine service calls stand out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
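Optional companion sketch for the CIP service audit above (editor addition): one way to build the per-controller service profile mentioned in the implementation notes. The 30-day window and the lookup name `cip_service_baseline.csv` are placeholders.

```spl
index=ot sourcetype="zeek:enip:json" direction="Request" earliest=-30d
| stats count by destination_h cip_service class_name
| outputlookup cip_service_baseline.csv
```

Later searches can compare live service mixes against this baseline instead of relying only on the fixed `pct<0.1` rarity threshold.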
            {
              "i": "14.6.12",
              "n": "EtherNet/IP Unregistered Session Detection",
              "c": "high",
              "f": "advanced",
              "v": "EtherNet/IP sessions are normally established with explicit registration; traffic that skips expected session setup may indicate tooling errors, bypass attempts, or non-compliant devices on the plant floor.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:enip:json\"` (`enip_command`, `session_handle`, `uid`, `source_h`, `destination_h`)",
              "q": "index=ot sourcetype=\"zeek:enip:json\"\n| stats earliest(enip_command) as first_cmd values(enip_command) as cmds dc(enip_command) as cmd_variety by uid source_h destination_h\n| where first_cmd!=\"Register_Session\" AND (like(enip_command,\"%SendRRData%\") OR like(enip_command,\"%Unit%\") OR mvindex(cmds,0)!=first_cmd)\n| table uid source_h destination_h first_cmd cmds",
              "m": "Ingest `enip.log` from Zeek on EtherNet/IP segments. Validate ordering with known-good PLC/HMI captures; tune exclusions for vendor-specific handshake quirks. Combine with asset roles so HMIs are not false-positive flagged incorrectly.",
              "z": "Table (sessions with anomalous first command), Timeline (connection uid), Bar chart (count by destination_h).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:enip:json\"` (`enip_command`, `session_handle`, `uid`, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest `enip.log` from Zeek on EtherNet/IP segments. Validate ordering with known-good PLC/HMI captures; tune exclusions for vendor-specific handshake quirks. Combine with asset roles so HMIs are not false-positive flagged incorrectly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:enip:json\"\n| stats earliest(enip_command) as first_cmd values(enip_command) as cmds dc(enip_command) as cmd_variety by uid source_h destination_h\n| where first_cmd!=\"Register_Session\" AND (like(enip_command,\"%SendRRData%\") OR like(enip_command,\"%Unit%\") OR mvindex(cmds,0)!=first_cmd)\n| table uid source_h destination_h first_cmd cmds\n```\n\nUnderstanding this SPL\n\n**EtherNet/IP Unregistered Session Detection** — EtherNet/IP sessions are normally established with explicit registration; traffic that skips expected session setup may indicate tooling errors, bypass attempts, or non-compliant devices on the plant floor.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:enip:json\"` (`enip_command`, `session_handle`, `uid`, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:enip:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:enip:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by uid source_h destination_h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where first_cmd!=\"Register_Session\" AND (like(enip_command,\"%SendRRData%\") OR like(enip_command,\"%Unit%\") OR mvindex(cmds,0…` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **EtherNet/IP Unregistered Session Detection**): table uid source_h destination_h first_cmd cmds\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sessions with anomalous first command), Timeline (connection uid), Bar chart (count by destination_h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to spot unregistered CIP sessions so short-lived or stray clients on the line do not go unseen.",
              "mtype": [
                "Security",
                "Fault"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
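Optional companion sketch for the unregistered-session use case above (editor addition): enriches sources with an asset role before alerting, so known HMIs can be suppressed as the implementation notes suggest. The `ot_assets` lookup (fields `ip`, `role`) and the `hmi` role value are assumptions.

```spl
index=ot sourcetype="zeek:enip:json"
| stats earliest(enip_command) as first_cmd values(enip_command) as cmds by uid source_h destination_h
| lookup ot_assets ip as source_h OUTPUT role as source_role
| where first_cmd!="Register_Session" AND (isnull(source_role) OR source_role!="hmi")
| table uid source_h source_role destination_h first_cmd cmds
```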
            {
              "i": "14.6.13",
              "n": "EtherNet/IP I/O Implicit Messaging Anomaly",
              "c": "high",
              "f": "advanced",
              "v": "Implicit I/O carries real-time control data; sudden changes in payload size or timing can signal cable issues, configuration drift, or injection attempts affecting deterministic control.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:enip:json\"` (ICSNPP `cip_io.log` merged or `sourcetype=\"zeek:cip_io:json\"`: `connection_id`, `data_length`, `sequence_number`, `io_data`)",
              "q": "index=ot (sourcetype=\"zeek:cip_io:json\" OR (sourcetype=\"zeek:enip:json\" io_data=*))\n| bin _time span=1s\n| stats avg(data_length) as avg_len stdev(data_length) as sd_len count by _time connection_id source_h destination_h\n| eventstats avg(avg_len) as baseline by connection_id\n| where sd_len>0 OR abs(avg_len-baseline)>64\n| table _time connection_id source_h destination_h avg_len sd_len baseline",
              "m": "Place Zeek on I/O scanner–adapter paths; prefer dedicated `cip_io` sourcetype when the TA splits logs. Learn normal `data_length` and sequence cadence per `connection_id`; alert on variance tied to control outages.",
              "z": "Line chart (data_length over time), Timeline (anomaly markers), Table (connection_id stats).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:enip:json\"` (ICSNPP `cip_io.log` merged or `sourcetype=\"zeek:cip_io:json\"`: `connection_id`, `data_length`, `sequence_number`, `io_data`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPlace Zeek on I/O scanner–adapter paths; prefer dedicated `cip_io` sourcetype when the TA splits logs. Learn normal `data_length` and sequence cadence per `connection_id`; alert on variance tied to control outages.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot (sourcetype=\"zeek:cip_io:json\" OR (sourcetype=\"zeek:enip:json\" io_data=*))\n| bin _time span=1s\n| stats avg(data_length) as avg_len stdev(data_length) as sd_len count by _time connection_id source_h destination_h\n| eventstats avg(avg_len) as baseline by connection_id\n| where sd_len>0 OR abs(avg_len-baseline)>64\n| table _time connection_id source_h destination_h avg_len sd_len baseline\n```\n\nUnderstanding this SPL\n\n**EtherNet/IP I/O Implicit Messaging Anomaly** — Implicit I/O carries real-time control data; sudden changes in payload size or timing can signal cable issues, configuration drift, or injection attempts affecting deterministic control.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:enip:json\"` (ICSNPP `cip_io.log` merged or `sourcetype=\"zeek:cip_io:json\"`: `connection_id`, `data_length`, `sequence_number`, `io_data`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:cip_io:json, zeek:enip:json. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:cip_io:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time connection_id source_h destination_h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by connection_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where sd_len>0 OR abs(avg_len-baseline)>64` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **EtherNet/IP I/O Implicit Messaging Anomaly**): table _time connection_id source_h destination_h avg_len sd_len baseline\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (data_length over time), Timeline (anomaly markers), Table (connection_id stats).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to watch implicit I/O messaging on EtherNet/IP so changes to cyclic traffic break from baseline.",
              "mtype": [
                "Performance",
                "Fault",
                "Security"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
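Optional companion sketch for the implicit I/O anomaly use case above (editor addition): checks message cadence per `connection_id` alongside the documented payload-size check. The three-sigma threshold is a placeholder to tune per connection.

```spl
index=ot (sourcetype="zeek:cip_io:json" OR (sourcetype="zeek:enip:json" io_data=*))
| bin _time span=1s
| stats count as msgs_per_sec by _time connection_id
| eventstats avg(msgs_per_sec) as avg_rate stdev(msgs_per_sec) as sd_rate by connection_id
| where sd_rate>0 AND abs(msgs_per_sec-avg_rate)>3*sd_rate
| table _time connection_id msgs_per_sec avg_rate sd_rate
```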
            {
              "i": "14.6.14",
              "n": "IEC 104 Interrogation Command Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "General interrogation and clock synchronization (types 100 and 103) can be abused for reconnaissance or time skew; auditing masters against expected scan behavior supports grid and plant operational integrity.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:iec104:json\"` (`asdu_type`, `cot`, `stationinterrogation`, `cp56_*` clock fields, `source_h`, `destination_h`)",
              "q": "index=ot sourcetype=\"zeek:iec104:json\"\n| where asdu_type IN (100,103) OR stationinterrogation=\"T\" OR match(_raw,\"C_IC_NA_1|C_CS_NA_1\")\n| stats count by source_h destination_h asdu_type cot\n| sort - count",
              "m": "Deploy Zeek IEC 60870-5-104 parser on 2404/tcp SCADA paths via tap. Map `asdu_type` 100 to general interrogation and 103 to clock sync per asset documentation; whitelist primary SCADA masters.",
              "z": "Timeline (interrogation and clock sync), Bar chart (count by asdu_type), Table (master, outstation, ASDU type).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:iec104:json\"` (`asdu_type`, `cot`, `stationinterrogation`, `cp56_*` clock fields, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Zeek IEC 60870-5-104 parser on 2404/tcp SCADA paths via tap. Map `asdu_type` 100 to general interrogation and 103 to clock sync per asset documentation; whitelist primary SCADA masters.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:iec104:json\"\n| where asdu_type IN (100,103) OR stationinterrogation=\"T\" OR match(_raw,\"C_IC_NA_1|C_CS_NA_1\")\n| stats count by source_h destination_h asdu_type cot\n| sort - count\n```\n\nUnderstanding this SPL\n\n**IEC 104 Interrogation Command Monitoring** — General interrogation and clock synchronization (types 100 and 103) can be abused for reconnaissance or time skew; auditing masters against expected scan behavior supports grid and plant operational integrity.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:iec104:json\"` (`asdu_type`, `cot`, `stationinterrogation`, `cp56_*` clock fields, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:iec104:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:iec104:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where asdu_type IN (100,103) OR stationinterrogation=\"T\" OR match(_raw,\"C_IC_NA_1|C_CS_NA_1\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by source_h destination_h asdu_type cot** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (interrogation and clock sync), Bar chart (count by asdu_type), Table (master, outstation, ASDU type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to track IEC-104 interrogations so a master is not over-polling or probing in a new way.",
              "mtype": [
                "Security",
                "Performance",
                "Compliance"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
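Optional companion sketch for the IEC 104 interrogation use case above (editor addition): restricts the documented search to masters that are not on the whitelist the implementation notes call for. The lookup name `approved_scada_masters.csv` is a placeholder.

```spl
index=ot sourcetype="zeek:iec104:json"
| where asdu_type IN (100,103)
| search NOT [| inputlookup approved_scada_masters.csv | fields source_h]
| stats count min(_time) as first_seen max(_time) as last_seen by source_h destination_h asdu_type
| convert ctime(first_seen) ctime(last_seen)
```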
            {
              "i": "14.6.15",
              "n": "IEC 104 Spontaneous Value Change Tracking",
              "c": "high",
              "f": "advanced",
              "v": "Monitoring spontaneous updates helps distinguish normal process swings from stale data or spoofed telemetry that could mask a physical fault.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:iec104:json\"` (`cot`, `asdu_type`, `ioa`, `shortfloat`, `nva`, `sva`, `source_h`, `destination_h`)",
              "q": "index=ot sourcetype=\"zeek:iec104:json\" cot=3\n| eval ioa_key=mvindex(ioa,0)\n| bin _time span=5m\n| stats latest(shortfloat) as sf by ioa_key destination_h _time\n| sort 0 ioa_key destination_h _time\n| streamstats window=2 global=f earliest(sf) as prev_sf latest(sf) as curr_sf by ioa_key destination_h\n| where isnotnull(prev_sf) AND isnotnull(curr_sf) AND abs(curr_sf-prev_sf)>0.0001\n| table _time destination_h ioa_key prev_sf curr_sf",
              "m": "Ingest IEC 104 JSON with vector fields expanded by TA; validate `cot` value for spontaneous (commonly 3). Tune magnitude thresholds per analog point class; integrate with EMS/Historian for cross-checks.",
              "z": "Line chart (value over time by IOA), Table (IOA, delta), Timeline (large deltas).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:iec104:json\"` (`cot`, `asdu_type`, `ioa`, `shortfloat`, `nva`, `sva`, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest IEC 104 JSON with vector fields expanded by TA; validate `cot` value for spontaneous (commonly 3). Tune magnitude thresholds per analog point class; integrate with EMS/Historian for cross-checks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:iec104:json\" cot=3\n| eval ioa_key=mvindex(ioa,0)\n| bin _time span=5m\n| stats latest(shortfloat) as sf by ioa_key destination_h _time\n| sort 0 ioa_key destination_h _time\n| streamstats window=2 global=f earliest(sf) as prev_sf latest(sf) as curr_sf by ioa_key destination_h\n| where isnotnull(prev_sf) AND isnotnull(curr_sf) AND abs(curr_sf-prev_sf)>0.0001\n| table _time destination_h ioa_key prev_sf curr_sf\n```\n\nUnderstanding this SPL\n\n**IEC 104 Spontaneous Value Change Tracking** — Monitoring spontaneous updates helps distinguish normal process swings from stale data or spoofed telemetry that could mask a physical fault.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:iec104:json\"` (`cot`, `asdu_type`, `ioa`, `shortfloat`, `nva`, `sva`, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:iec104:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:iec104:json\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ioa_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by ioa_key destination_h _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by ioa_key destination_h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where isnotnull(prev_sf) AND isnotnull(curr_sf) AND abs(curr_sf-prev_sf)>0.0001` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **IEC 104 Spontaneous Value Change Tracking**): table _time destination_h ioa_key prev_sf curr_sf\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (value over time by IOA), Table (IOA, delta), Timeline (large deltas).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to follow spontaneous IEC-104 data so bursts from the field are visible to operations.",
              "mtype": [
                "Performance",
                "Fault",
                "Security"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
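Optional companion sketch for the spontaneous-value use case above (editor addition): flags points that have stopped reporting, which complements the delta check when hunting stale or masked telemetry. The 60-minute silence threshold is a placeholder per point class.

```spl
index=ot sourcetype="zeek:iec104:json" cot=3
| eval ioa_key=mvindex(ioa,0)
| stats latest(_time) as last_update by destination_h ioa_key
| eval minutes_silent=round((now()-last_update)/60,1)
| where minutes_silent>60
| sort - minutes_silent
```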
            {
              "i": "14.6.16",
              "n": "IEC 104 Clock Synchronization Deviation",
              "c": "high",
              "f": "advanced",
              "v": "Time drift between master and outstation complicates event ordering and SOE correlation; detecting skew supports NERC-style evidence of synchronized operations.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:iec104:json\"` (`asdu_type`, `cp56_minutes`, `cp56_hours`, `cp56_day`, `cp56_month`, `cp56_year`, `source_h`, `destination_h`)",
              "q": "index=ot sourcetype=\"zeek:iec104:json\" asdu_type=103\n| eval wall=strptime(cp56_year.\"-\".cp56_month.\"-\".cp56_day.\" \".cp56_hours.\":\".cp56_minutes.\":00\",\"%Y-%m-%d %H:%M:%S\")\n| eval skew_sec=abs(_time-wall)\n| where skew_sec>2 AND isnotnull(wall)\n| stats max(skew_sec) as max_skew avg(skew_sec) as avg_skew by source_h destination_h\n| where max_skew>5\n| table source_h destination_h max_skew avg_skew",
              "m": "Capture C_CS_NA_1 clock sync ASDUs on OT taps; confirm `cp56_*` field population in your Zeek build. Alert when wire-time `_time` diverges from embedded CP56 time beyond policy (e.g., 2–5 seconds).",
              "z": "Single value (max skew), Line chart (skew over time), Table (endpoints, skew stats).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:iec104:json\"` (`asdu_type`, `cp56_minutes`, `cp56_hours`, `cp56_day`, `cp56_month`, `cp56_year`, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture C_CS_NA_1 clock sync ASDUs on OT taps; confirm `cp56_*` field population in your Zeek build. Alert when wire-time `_time` diverges from embedded CP56 time beyond policy (e.g., 2–5 seconds).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:iec104:json\" asdu_type=103\n| eval wall=strptime(cp56_year.\"-\".cp56_month.\"-\".cp56_day.\" \".cp56_hours.\":\".cp56_minutes.\":00\",\"%Y-%m-%d %H:%M:%S\")\n| eval skew_sec=abs(_time-wall)\n| where skew_sec>2 AND isnotnull(wall)\n| stats max(skew_sec) as max_skew avg(skew_sec) as avg_skew by source_h destination_h\n| where max_skew>5\n| table source_h destination_h max_skew avg_skew\n```\n\nUnderstanding this SPL\n\n**IEC 104 Clock Synchronization Deviation** — Time drift between master and outstation complicates event ordering and SOE correlation; detecting skew supports NERC-style evidence of synchronized operations.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:iec104:json\"` (`asdu_type`, `cp56_minutes`, `cp56_hours`, `cp56_day`, `cp56_month`, `cp56_year`, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:iec104:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:iec104:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **wall** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **skew_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where skew_sec>2 AND isnotnull(wall)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by source_h destination_h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where max_skew>5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **IEC 104 Clock Synchronization Deviation**): table source_h destination_h max_skew avg_skew\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Single value (max skew), Line chart (skew over time), Table (endpoints, skew stats).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to compare time sync on IEC-104 so clock drift does not break sequence and event trust.",
              "mtype": [
                "Compliance",
                "Fault",
                "Security"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
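Optional companion sketch for the clock-skew use case above (editor addition): trends skew per outstation for the documented line-chart panel, reusing the same CP56 reconstruction (which still needs the `cp56_*` field check described in the implementation notes).

```spl
index=ot sourcetype="zeek:iec104:json" asdu_type=103
| eval wall=strptime(cp56_year."-".cp56_month."-".cp56_day." ".cp56_hours.":".cp56_minutes.":00","%Y-%m-%d %H:%M:%S")
| eval skew_sec=abs(_time-wall)
| timechart span=15m max(skew_sec) by destination_h
```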
            {
              "i": "14.6.17",
              "n": "BACnet Object Access Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Writes to analog outputs, schedules, or life-safety objects can change building or process environmental limits; auditing ReadProperty/WriteProperty supports both cyber and operational accountability.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:bacnet:json\"` (`bacnet_property.log` via `pdu_service`, `object_type`, `property`, `value`, `source_h`, `destination_h`)",
              "q": "index=ot sourcetype=\"zeek:bacnet:json\"\n| where match(pdu_service,\"(?i)write.*property\") OR pdu_service=\"Write_Property_Request\"\n| where object_type IN (\"analog-output\",\"binary-output\",\"schedule\",\"life-safety-point\") OR match(property,\"(?i)present.value|priority\")\n| stats count by source_h destination_h object_type instance_number property\n| sort - count",
              "m": "Deploy ICSNPP-BACnet on UDP/47808 taps; use `bacnet_property` fields when the TA routes them into the same sourcetype or a dedicated property sourcetype. Define sensitive object lists per site; alert on writes from non-BMS servers.",
              "z": "Table (writes by object), Bar chart (writes by source_h), Timeline (write bursts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:bacnet:json\"` (`bacnet_property.log` via `pdu_service`, `object_type`, `property`, `value`, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy ICSNPP-BACnet on UDP/47808 taps; use `bacnet_property` fields when the TA routes them into the same sourcetype or a dedicated property sourcetype. Define sensitive object lists per site; alert on writes from non-BMS servers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:bacnet:json\"\n| where match(pdu_service,\"(?i)write.*property\") OR pdu_service=\"Write_Property_Request\"\n| where object_type IN (\"analog-output\",\"binary-output\",\"schedule\",\"life-safety-point\") OR match(property,\"(?i)present.value|priority\")\n| stats count by source_h destination_h object_type instance_number property\n| sort - count\n```\n\nUnderstanding this SPL\n\n**BACnet Object Access Audit** — Writes to analog outputs, schedules, or life-safety objects can change building or process environmental limits; auditing ReadProperty/WriteProperty supports both cyber and operational accountability.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:bacnet:json\"` (`bacnet_property.log` via `pdu_service`, `object_type`, `property`, `value`, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:bacnet:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:bacnet:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(pdu_service,\"(?i)write.*property\") OR pdu_service=\"Write_Property_Request\"` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where object_type IN (\"analog-output\",\"binary-output\",\"schedule\",\"life-safety-point\") OR match(property,\"(?i)present.value|…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by source_h destination_h object_type instance_number property** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (writes by object), Bar chart (writes by source_h), Timeline (write bursts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to audit BACnet object access so we know who is reading and writing what in the building or plant.",
              "mtype": [
                "Security",
                "Compliance",
                "Change"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
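Optional companion sketch for the BACnet write audit above (editor addition): limits the documented write filter to sources outside an approved BMS list, per the implementation notes. The lookup name `bms_write_sources.csv` is a placeholder.

```spl
index=ot sourcetype="zeek:bacnet:json"
| where match(pdu_service,"(?i)write.*property") OR pdu_service="Write_Property_Request"
| search NOT [| inputlookup bms_write_sources.csv | fields source_h]
| stats count values(object_type) as objects values(property) as properties by source_h destination_h
| sort - count
```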
            {
              "i": "14.6.18",
              "n": "BACnet Who-Is Broadcast Storm Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Excessive Who-Is/I-Am discovery floods MS/TP-to-IP bridges and can indicate misconfigured devices, loops, or active scanning that degrades BMS responsiveness.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:bacnet:json\"` (`bacnet_discovery.log`: `pdu_service`, `device_id_number`, `source_h`, `destination_h`)",
              "q": "index=ot sourcetype=\"zeek:bacnet:json\" pdu_service IN (\"who-is\",\"i-am\",\"who_is\",\"i_am\")\n| bin _time span=1m\n| stats count dc(source_h) as distinct_sources by _time destination_h\n| where count>200 OR distinct_sources>20\n| table _time destination_h count distinct_sources",
              "m": "Ingest discovery logs from Zeek on BACnet/IP VLANs. Set thresholds per campus; investigate sources with high Who-Is rates and verify router/broadcast management settings.",
              "z": "Area chart (Who-Is rate per minute), Table (spikes), Pie chart (share by pdu_service).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:bacnet:json\"` (`bacnet_discovery.log`: `pdu_service`, `device_id_number`, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest discovery logs from Zeek on BACnet/IP VLANs. Set thresholds per campus; investigate sources with high Who-Is rates and verify router/broadcast management settings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:bacnet:json\" pdu_service IN (\"who-is\",\"i-am\",\"who_is\",\"i_am\")\n| bin _time span=1m\n| stats count dc(source_h) as distinct_sources by _time destination_h\n| where count>200 OR distinct_sources>20\n| table _time destination_h count distinct_sources\n```\n\nUnderstanding this SPL\n\n**BACnet Who-Is Broadcast Storm Detection** — Excessive Who-Is/I-Am discovery floods MS/TP-to-IP bridges and can indicate misconfigured devices, loops, or active scanning that degrades BMS responsiveness.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:bacnet:json\"` (`bacnet_discovery.log`: `pdu_service`, `device_id_number`, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:bacnet:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:bacnet:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time destination_h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>200 OR distinct_sources>20` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **BACnet Who-Is Broadcast Storm Detection**): table _time destination_h count distinct_sources\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (Who-Is rate per minute), Table (spikes), Pie chart (share by pdu_service).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to catch BACnet discovery storms so broadcast noise does not hide real trouble.",
              "mtype": [
                "Security",
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
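A per-source companion sketch for the BACnet Who-Is storm use case above: the documented search thresholds the aggregate rate per destination, while this variant, under the same `zeek:bacnet:json` assumptions, points at which device is generating the storm. The 3-sigma test and the 50-events-per-minute floor are placeholder values to tune per campus, as the implementation note suggests.

```spl
index=ot sourcetype="zeek:bacnet:json" pdu_service IN ("who-is","i-am","who_is","i_am")
| bin _time span=1m
| stats count as whois_per_min by _time, source_h
| eventstats avg(whois_per_min) as avg_rate, stdev(whois_per_min) as sd_rate by source_h
| where whois_per_min > avg_rate + 3*sd_rate AND whois_per_min > 50
| sort - whois_per_min
| table _time, source_h, whois_per_min, avg_rate
```

Sources that repeatedly top this list are the first candidates for misconfigured discovery intervals, loops, or active scanning.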
            {
              "i": "14.6.19",
              "n": "HART-IP Command 48 Additional Status Monitoring",
              "c": "medium",
              "f": "advanced",
              "v": "HART command 48 returns extended device status; tracking additional status fields helps catch sensor faults or configuration issues before they affect closed-loop control.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:hartip:json\"` (`command`, `status`, `additional_status`, `source_h`, `destination_h`)",
              "q": "index=ot sourcetype=\"zeek:hartip:json\"\n| where command=48 OR command=\"0x30\" OR match(_raw,\"\\\"command\\\"\\\\s*:\\\\s*48\")\n| bin _time span=15m\n| stats latest(status) as dev_status values(additional_status) as addl by destination_h _time\n| where isnotnull(addl) OR (isnotnull(dev_status) AND dev_status!=\"0\")\n| table _time destination_h dev_status addl",
              "m": "Deploy ICSNPP HART-IP on segments with smart instruments; confirm JSON keys (`command`, `additional_status`) against a sample. Baseline healthy additional-status patterns; alert on new fault bits correlated with maintenance.",
              "z": "Timeline (command 48 events), Table (device, status, additional_status), Single value (devices reporting faults).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:hartip:json\"` (`command`, `status`, `additional_status`, `source_h`, `destination_h`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy ICSNPP HART-IP on segments with smart instruments; confirm JSON keys (`command`, `additional_status`) against a sample. Baseline healthy additional-status patterns; alert on new fault bits correlated with maintenance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:hartip:json\"\n| where command=48 OR command=\"0x30\" OR match(_raw,\"\\\"command\\\"\\\\s*:\\\\s*48\")\n| bin _time span=15m\n| stats latest(status) as dev_status values(additional_status) as addl by destination_h _time\n| where isnotnull(addl) OR (isnotnull(dev_status) AND dev_status!=\"0\")\n| table _time destination_h dev_status addl\n```\n\nUnderstanding this SPL\n\n**HART-IP Command 48 Additional Status Monitoring** — HART command 48 returns extended device status; tracking additional status fields helps catch sensor faults or configuration issues before they affect closed-loop control.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:hartip:json\"` (`command`, `status`, `additional_status`, `source_h`, `destination_h`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:hartip:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:hartip:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where command=48 OR command=\"0x30\" OR match(_raw,\"\\\"command\\\"\\\\s*:\\\\s*48\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by destination_h _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where isnotnull(addl) OR (isnotnull(dev_status) AND dev_status!=\"0\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HART-IP Command 48 Additional Status Monitoring**): table _time destination_h dev_status addl\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Timeline (command 48 events), Table (device, status, additional_status), Single value (devices reporting faults).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to read HART-IP device status so instrument problems surface before they hit the process.",
              "mtype": [
                "Fault",
                "Performance",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
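A first-seen sketch for **HART-IP Command 48 Additional Status Monitoring** (14.6.19): its implementation step asks for a baseline of healthy additional-status patterns with alerts on new fault bits. One way to approximate that, assuming the same `zeek:hartip:json` fields, is to run over a longer window (for example 30 days) so that a `first_seen` inside the last 24 hours really means a status value not observed before. Treat this as a sketch rather than the catalog's canonical query.

```spl
index=ot sourcetype="zeek:hartip:json" (command=48 OR command="0x30")
| where isnotnull(additional_status)
| stats earliest(_time) as first_seen, latest(_time) as last_seen, count as occurrences by destination_h, additional_status
| where first_seen >= relative_time(now(), "-24h@h")
| convert ctime(first_seen) ctime(last_seen)
| sort - occurrences
```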
            {
              "i": "14.6.20",
              "n": "Unknown Protocol on OT VLAN Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Non-whitelisted protocols on ICS segments often indicate shadow IT, dual-homed misconfigurations, or tunneling that bypasses zone policies—each can bridge IT threats into OT.",
              "t": "TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884)",
              "d": "`index=ot` `sourcetype=\"zeek:conn:json\"` (`service`, `proto`, `dest_port`, `vlan_id` or `vlan_name`)",
              "q": "index=ot sourcetype=\"zeek:conn:json\" (like(vlan_name,\"OT-%\") OR cidrmatch(\"10.0.0.0/8\",id.orig_h))\n| eval svc=lower(service)\n| where bytes_orig>0 AND bytes_resp>0\n| where NOT (svc IN (\"modbus\",\"dnp3\",\"bacnet\",\"enip\",\"s7comm\",\"iec60870-5-104\",\"hart-ip\",\"dns\",\"ntp\") OR dest_port IN (502,20000,44818,47808,2222,2404,102,5094))\n| stats sum(bytes_orig) as ob sum(bytes_resp) as rb dc(uid) as flows by \"id.orig_h\" \"id.resp_h\" dest_port proto service\n| sort - flows\n| head 200",
              "m": "Forward `conn.log` from Zeek on OT core taps with VLAN tags preserved. Maintain a Splunk lookup of approved services/ports per site; schedule nightly review of new triples (origin, destination, service). Tune DNS/NTP allowances.",
              "z": "Table (unexpected proto/port/service), Treemap (bytes by service), Timeline (first-seen connections).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[TA for Zeek](https://splunkbase.splunk.com/app/5466), [Corelight App for Splunk](https://splunkbase.splunk.com/app/3884)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"zeek:conn:json\"` (`service`, `proto`, `dest_port`, `vlan_id` or `vlan_name`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward `conn.log` from Zeek on OT core taps with VLAN tags preserved. Maintain a Splunk lookup of approved services/ports per site; schedule nightly review of new triples (origin, destination, service). Tune DNS/NTP allowances.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"zeek:conn:json\" (like(vlan_name,\"OT-%\") OR cidrmatch(\"10.0.0.0/8\",id.orig_h))\n| eval svc=lower(service)\n| where bytes_orig>0 AND bytes_resp>0\n| where NOT (svc IN (\"modbus\",\"dnp3\",\"bacnet\",\"enip\",\"s7comm\",\"iec60870-5-104\",\"hart-ip\",\"dns\",\"ntp\") OR dest_port IN (502,20000,44818,47808,2222,2404,102,5094))\n| stats sum(bytes_orig) as ob sum(bytes_resp) as rb dc(uid) as flows by \"id.orig_h\" \"id.resp_h\" dest_port proto service\n| sort - flows\n| head 200\n```\n\nUnderstanding this SPL\n\n**Unknown Protocol on OT VLAN Detection** — Non-whitelisted protocols on ICS segments often indicate shadow IT, dual-homed misconfigurations, or tunneling that bypasses zone policies—each can bridge IT threats into OT.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"zeek:conn:json\"` (`service`, `proto`, `dest_port`, `vlan_id` or `vlan_name`). **App/TA** (typical add-on context): TA for Zeek (Splunkbase 5466), Corelight App for Splunk (Splunkbase 3884). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: zeek:conn:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"zeek:conn:json\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **svc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where bytes_orig>0 AND bytes_resp>0` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where NOT (svc IN (\"modbus\",\"dnp3\",\"bacnet\",\"enip\",\"s7comm\",\"iec60870-5-104\",\"hart-ip\",\"dns\",\"ntp\") OR dest_port IN (502,20…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by \"id.orig_h\" \"id.resp_h\" dest_port proto service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unexpected proto/port/service), Treemap (bytes by service), Timeline (first-seen connections).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use deep packet inspection to see unknown protocols on the OT VLAN so unexpected traffic is not written off as noise.",
              "mtype": [
                "Security",
                "Compliance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "TA for Zeek",
                "id": 5466,
                "url": "https://splunkbase.splunk.com/app/5466"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
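A lookup-driven sketch for **Unknown Protocol on OT VLAN Detection** (14.6.20): because `like()` and `cidrmatch()` are eval functions, the VLAN and subnet scoping sits in a `where` stage here, and the per-site allowlist that the implementation step describes is expressed as a lookup. The lookup name `ot_approved_services` and its columns (`svc`, `dest_port`, `approved`) are placeholders to adapt to your own approved-services CSV.

```spl
index=ot sourcetype="zeek:conn:json"
| where like(vlan_name,"OT-%") OR cidrmatch("10.0.0.0/8",'id.orig_h')
| eval svc=lower(service)
| lookup ot_approved_services svc, dest_port OUTPUT approved
| where isnull(approved) AND bytes_orig>0 AND bytes_resp>0
| stats dc(uid) as flows, sum(bytes_orig) as ob, earliest(_time) as first_seen by "id.orig_h", "id.resp_h", dest_port, proto, svc
| convert ctime(first_seen)
| sort - flows
```

Rows with no `approved` match are the new (origin, destination, service) triples the nightly review in the implementation step is meant to catch.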
          ],
          "qa": 35.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 20,
            "none": 0
          }
        },
        {
          "i": "14.7",
          "n": "Litmus Edge Industrial IoT Gateway",
          "u": [
            {
              "i": "14.7.1",
              "n": "Litmus Edge Gateway Connectivity Health",
              "c": "high",
              "f": "beginner",
              "v": "Gateway offline events stop OT data from reaching Splunk; early detection reduces blind spots in plant visibility.",
              "t": "Litmus Edge (Splunk HEC connector)",
              "d": "`index=ot` `sourcetype=\"litmus:health\"` fields `gateway_id`, `status`, `online`",
              "q": "index=ot sourcetype=\"litmus:health\"\n| eval is_online=case(\n    match(lower(status),\"online|up|connected|running\"), 1,\n    match(lower(status),\"offline|down|disconnected|stopped\"), 0,\n    online=\"true\" OR online=\"1\", 1,\n    true(), 0)\n| stats latest(is_online) as online_now, latest(_time) as last_health by gateway_id, site_id\n| where online_now=0 OR isnull(online_now)\n| eval minutes_since=round((now()-last_health)/60,1)\n| table gateway_id, site_id, online_now, last_health, minutes_since",
              "m": "Enable the Litmus Edge Splunk HEC destination and send periodic health/heartbeat JSON. Ensure gateway_id and site_id are present. Alert if no healthy event for 2x the expected interval.",
              "z": "Single value (gateways offline), Table (gateway, site, last health), Status indicator (green/red per gateway).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Litmus Edge (Splunk HEC connector).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"litmus:health\"` fields `gateway_id`, `status`, `online`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable the Litmus Edge Splunk HEC destination and send periodic health/heartbeat JSON. Ensure gateway_id and site_id are present. Alert if no healthy event for 2x the expected interval.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"litmus:health\"\n| eval is_online=case(\n    match(lower(status),\"online|up|connected|running\"), 1,\n    match(lower(status),\"offline|down|disconnected|stopped\"), 0,\n    online=\"true\" OR online=\"1\", 1,\n    true(), 0)\n| stats latest(is_online) as online_now, latest(_time) as last_health by gateway_id, site_id\n| where online_now=0 OR isnull(online_now)\n| eval minutes_since=round((now()-last_health)/60,1)\n| table gateway_id, site_id, online_now, last_health, minutes_since\n```\n\nUnderstanding this SPL\n\n**Litmus Edge Gateway Connectivity Health** — Gateway offline events stop OT data from reaching Splunk; early detection reduces blind spots in plant visibility.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"litmus:health\"` fields `gateway_id`, `status`, `online`. **App/TA** (typical add-on context): Litmus Edge (Splunk HEC connector). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: litmus:health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"litmus:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_online** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by gateway_id, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where online_now=0 OR isnull(online_now)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **minutes_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Litmus Edge Gateway Connectivity Health**): table gateway_id, site_id, online_now, last_health, minutes_since\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (gateways offline), Table (gateway, site, last health), Status indicator (green/red per gateway).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our industrial gateway’s health stream so we know the box that bridges the plant to us is up before the tags go stale.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
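A silent-gateway sketch for **Litmus Edge Gateway Connectivity Health** (14.7.1): a `stats ... by gateway_id` search only sees gateways that still emit events, so pairing it with an expected-gateway list also catches boxes that have gone completely quiet. The lookup name `litmus_expected_gateways` and its columns (`gateway_id`, `site_id`, `expected_interval_sec`) are assumptions; the 2x-interval test mirrors the documented alerting guidance.

```spl
index=ot sourcetype="litmus:health" earliest=-24h
| stats latest(_time) as last_health by gateway_id
| append [| inputlookup litmus_expected_gateways | fields gateway_id, site_id, expected_interval_sec]
| stats max(last_health) as last_health, values(site_id) as site_id, max(expected_interval_sec) as expected_interval_sec by gateway_id
| where isnull(last_health) OR (now() - last_health) > 2 * expected_interval_sec
| eval minutes_since=if(isnull(last_health), "never reported", tostring(round((now() - last_health)/60, 1)))
| table gateway_id, site_id, last_health, minutes_since
```

Widen the `earliest=` window so it comfortably exceeds your slowest expected heartbeat.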
            {
              "i": "14.7.2",
              "n": "PLC Tag Data Ingestion Validation",
              "c": "high",
              "f": "intermediate",
              "v": "Missing or silent tag streams break dashboards, historians, and ML models; validating expected tags catches connector or network regressions before production impact.",
              "t": "Litmus Edge (Splunk HEC connector)",
              "d": "`index=ot` `sourcetype=\"litmus:tag\"` fields `gateway_id`, `tag_name`, `source_device`",
              "q": "index=ot sourcetype=\"litmus:tag\"\n| bin _time span=5m\n| stats count as events, dc(tag_name) as distinct_tags by gateway_id, source_device, _time\n| where events < 1 OR distinct_tags < 1\n| table _time, gateway_id, source_device, events, distinct_tags",
              "m": "Normalize tag events so each sample carries gateway_id, tag_name, and source_device. Replace thresholds with per-device baselines from a lookup. For stricter checks, join to a required-tag lookup.",
              "z": "Line chart (events per minute by device), Table (devices below floor), Heatmap (device × time rate).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Litmus Edge (Splunk HEC connector).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"litmus:tag\"` fields `gateway_id`, `tag_name`, `source_device`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize tag events so each sample carries gateway_id, tag_name, and source_device. Replace thresholds with per-device baselines from a lookup. For stricter checks, join to a required-tag lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"litmus:tag\"\n| bin _time span=5m\n| stats count as events, dc(tag_name) as distinct_tags by gateway_id, source_device, _time\n| where events < 1 OR distinct_tags < 1\n| table _time, gateway_id, source_device, events, distinct_tags\n```\n\nUnderstanding this SPL\n\n**PLC Tag Data Ingestion Validation** — Missing or silent tag streams break dashboards, historians, and ML models; validating expected tags catches connector or network regressions before production impact.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"litmus:tag\"` fields `gateway_id`, `tag_name`, `source_device`. **App/TA** (typical add-on context): Litmus Edge (Splunk HEC connector). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: litmus:tag. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"litmus:tag\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by gateway_id, source_device, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where events < 1 OR distinct_tags < 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PLC Tag Data Ingestion Validation**): table _time, gateway_id, source_device, events, distinct_tags\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (events per minute by device), Table (devices below floor), Heatmap (device × time rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare live tag values from the industrial gateway to what we expect so bad scaling or wrong mapping is caught before it steers a decision.",
              "mtype": [
                "Fault"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
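A required-tag sketch for **PLC Tag Data Ingestion Validation** (14.7.2): because `stats ... by` only emits rows for tags that produced events, a count threshold of `< 1` never fires for a stream that has gone completely silent, so starting from a required-tag list surfaces those gaps. The lookup name `litmus_required_tags` and its columns (`gateway_id`, `tag_name`) are placeholders, per the implementation note about joining to a required-tag lookup.

```spl
| inputlookup litmus_required_tags
| join type=left gateway_id, tag_name
    [ search index=ot sourcetype="litmus:tag" earliest=-15m
      | stats count as events, latest(_time) as last_seen by gateway_id, tag_name ]
| where isnull(events)
| table gateway_id, tag_name
```

`join` subsearches are capped (commonly 50,000 rows), so very large tag fleets may prefer an append-plus-stats pattern instead.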
            {
              "i": "14.7.3",
              "n": "Edge-to-Splunk Data Pipeline Latency",
              "c": "medium",
              "f": "intermediate",
              "v": "End-to-end latency distinguishes edge capture delays from Splunk indexing backlog; spikes indicate broker saturation, HEC backpressure, or clock skew.",
              "t": "Litmus Edge (Splunk HEC connector)",
              "d": "`index=ot` `sourcetype=\"litmus:tag\"` with `edge_timestamp` in JSON payload",
              "q": "index=ot sourcetype=\"litmus:tag\"\n| eval edge_sec=if(edge_timestamp>1e12, edge_timestamp/1000, edge_timestamp)\n| where isnotnull(edge_timestamp) AND edge_sec>0\n| eval latency_ms=abs(_time-edge_sec)*1000\n| bin _time span=5m\n| stats perc95(latency_ms) as p95_ms, avg(latency_ms) as avg_ms by gateway_id, _time\n| where p95_ms > 5000",
              "m": "Configure Litmus to stamp each tag event with capture time in epoch. Keep NTP synchronized on Litmus and Splunk. Alert when p95 exceeds acceptable thresholds.",
              "z": "Line chart (p95/p99 latency by gateway), Area chart (latency distribution), Single value (fleet p95 latency).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Litmus Edge (Splunk HEC connector).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"litmus:tag\"` with `edge_timestamp` in JSON payload.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Litmus to stamp each tag event with capture time in epoch. Keep NTP synchronized on Litmus and Splunk. Alert when p95 exceeds acceptable thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"litmus:tag\"\n| eval edge_sec=if(edge_timestamp>1e12, edge_timestamp/1000, edge_timestamp)\n| where isnotnull(edge_timestamp) AND edge_sec>0\n| eval latency_ms=abs(_time-edge_sec)*1000\n| bin _time span=5m\n| stats perc95(latency_ms) as p95_ms, avg(latency_ms) as avg_ms by gateway_id, _time\n| where p95_ms > 5000\n```\n\nUnderstanding this SPL\n\n**Edge-to-Splunk Data Pipeline Latency** — End-to-end latency distinguishes edge capture delays from Splunk indexing backlog; spikes indicate broker saturation, HEC backpressure, or clock skew.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"litmus:tag\"` with `edge_timestamp` in JSON payload. **App/TA** (typical add-on context): Litmus Edge (Splunk HEC connector). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: litmus:tag. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"litmus:tag\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **edge_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(edge_timestamp) AND edge_sec>0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by gateway_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_ms > 5000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95/p99 latency by gateway), Area chart (latency distribution), Single value (fleet p95 latency).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We time how long it takes data to reach us from the industrial gateway so slow links or throttles do not hide behind averages.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
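A split-lag sketch for **Edge-to-Splunk Data Pipeline Latency** (14.7.3): the value statement distinguishes edge capture delay from Splunk indexing backlog, and comparing the edge timestamp against `_time` and `_time` against `_indextime` separates the two. Field handling follows the documented `edge_timestamp` convention; the 5000 ms thresholds are placeholders.

```spl
index=ot sourcetype="litmus:tag"
| eval edge_sec=if(edge_timestamp>1e12, edge_timestamp/1000, edge_timestamp)
| where isnotnull(edge_sec) AND edge_sec>0
| eval capture_lag_ms=abs(_time - edge_sec)*1000
| eval index_lag_ms=abs(_indextime - _time)*1000
| bin _time span=5m
| stats perc95(capture_lag_ms) as p95_capture_ms, perc95(index_lag_ms) as p95_index_ms by gateway_id, _time
| where p95_capture_ms > 5000 OR p95_index_ms > 5000
```

High capture lag with low index lag points at the edge or broker; the reverse points at HEC backpressure or indexer queues.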
            {
              "i": "14.7.4",
              "n": "Production Sensor Data Completeness Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Stale or missing sensor readings invalidate safety and quality analytics; completeness audits align telemetry coverage with regulatory expectations for critical measurements.",
              "t": "Litmus Edge (Splunk HEC connector)",
              "d": "`index=ot` `sourcetype=\"litmus:tag\"` fields `gateway_id`, `tag_name`, optional `quality`",
              "q": "index=ot sourcetype=\"litmus:tag\"\n| stats latest(_time) as last_seen by gateway_id, tag_name\n| eval stale_sec=now()-last_seen\n| where stale_sec > 300\n| eval stale_minutes=round(stale_sec/60,1)\n| table gateway_id, tag_name, last_seen, stale_minutes\n| sort - stale_minutes",
              "m": "Set stale_threshold per tag class. Optionally join to a required-tag lookup and alert when any required tag is absent entirely.",
              "z": "Table (stale tags), Bar chart (count stale by gateway), Heatmap (tag × gateway freshness).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Litmus Edge (Splunk HEC connector).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"litmus:tag\"` fields `gateway_id`, `tag_name`, optional `quality`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSet stale_threshold per tag class. Optionally join to a required-tag lookup and alert when any required tag is absent entirely.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"litmus:tag\"\n| stats latest(_time) as last_seen by gateway_id, tag_name\n| eval stale_sec=now()-last_seen\n| where stale_sec > 300\n| eval stale_minutes=round(stale_sec/60,1)\n| table gateway_id, tag_name, last_seen, stale_minutes\n| sort - stale_minutes\n```\n\nUnderstanding this SPL\n\n**Production Sensor Data Completeness Audit** — Stale or missing sensor readings invalidate safety and quality analytics; completeness audits align telemetry coverage with regulatory expectations for critical measurements.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"litmus:tag\"` fields `gateway_id`, `tag_name`, optional `quality`. **App/TA** (typical add-on context): Litmus Edge (Splunk HEC connector). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: litmus:tag. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"litmus:tag\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by gateway_id, tag_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **stale_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where stale_sec > 300` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **stale_minutes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Production Sensor Data Completeness Audit**): table gateway_id, tag_name, last_seen, stale_minutes\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale tags), Bar chart (count stale by gateway), Heatmap (tag × gateway freshness).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count missing or partial sensor intervals so a dead instrument or link does not get mistaken for a quiet process.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
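A quality-aware companion sketch for **Production Sensor Data Completeness Audit** (14.7.4), using the optional `quality` field listed in its data sources. The tokens treated as good quality here (`good`, `ok`, and the OPC-style code `192`) are assumptions; match them to whatever your connector actually emits, and tune the 99% floor per tag class.

```spl
index=ot sourcetype="litmus:tag"
| eval is_good=if(isnull(quality) OR match(lower(quality), "good|ok|^192$"), 1, 0)
| bin _time span=1h
| stats count as samples, sum(is_good) as good_samples by gateway_id, tag_name, _time
| eval completeness_pct=round(100*good_samples/samples, 1)
| where completeness_pct < 99
| sort completeness_pct
```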
            {
              "i": "14.7.5",
              "n": "Litmus Edge Device Inventory Drift",
              "c": "medium",
              "f": "advanced",
              "v": "Unexpected devices or disappearing assets indicate misconfiguration or unauthorized connectivity; drift detection supports CMDB accuracy and OT security reviews.",
              "t": "Litmus Edge (Splunk HEC connector)",
              "d": "`index=ot` `sourcetype=\"litmus:health\"` OR `sourcetype=\"litmus:edge\"` with `gateway_id`, `device_id`",
              "q": "index=ot (sourcetype=\"litmus:health\" OR sourcetype=\"litmus:edge\")\n| eval device_key=coalesce(device_id, asset_id)\n| where isnotnull(device_key)\n| stats earliest(_time) as first_seen, latest(_time) as last_seen by gateway_id, device_key\n| eval is_new=if(first_seen >= relative_time(now(),\"-24h@h\"), 1, 0)\n| eval is_missing=if(last_seen < relative_time(now(),\"-7d@d\"), 1, 0)\n| where is_new=1 OR is_missing=1\n| eval drift_type=if(is_new=1, \"new_device\", \"missing_device\")\n| table gateway_id, device_key, first_seen, last_seen, drift_type",
              "m": "Ensure device identifiers are stable in Litmus. Adjust the new/missing windows to match your change-management cadence. For baselines, use inputlookup and compare sets.",
              "z": "Table (new vs missing devices), Bar chart (drift events by gateway), Timeline (first seen for new devices).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Litmus Edge (Splunk HEC connector).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"litmus:health\"` OR `sourcetype=\"litmus:edge\"` with `gateway_id`, `device_id`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure device identifiers are stable in Litmus. Adjust the new/missing windows to match your change-management cadence. For baselines, use inputlookup and compare sets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot (sourcetype=\"litmus:health\" OR sourcetype=\"litmus:edge\")\n| eval device_key=coalesce(device_id, asset_id)\n| where isnotnull(device_key)\n| stats earliest(_time) as first_seen, latest(_time) as last_seen by gateway_id, device_key\n| eval is_new=if(first_seen >= relative_time(now(),\"-24h@h\"), 1, 0)\n| eval is_missing=if(last_seen < relative_time(now(),\"-7d@d\"), 1, 0)\n| where is_new=1 OR is_missing=1\n| eval drift_type=if(is_new=1, \"new_device\", \"missing_device\")\n| table gateway_id, device_key, first_seen, last_seen, drift_type\n```\n\nUnderstanding this SPL\n\n**Litmus Edge Device Inventory Drift** — Unexpected devices or disappearing assets indicate misconfiguration or unauthorized connectivity; drift detection supports CMDB accuracy and OT security reviews.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"litmus:health\"` OR `sourcetype=\"litmus:edge\"` with `gateway_id`, `device_id`. **App/TA** (typical add-on context): Litmus Edge (Splunk HEC connector). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: litmus:health, litmus:edge. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"litmus:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **device_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(device_key)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by gateway_id, device_key** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **is_new** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_missing** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_new=1 OR is_missing=1` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **drift_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Litmus Edge Device Inventory Drift**): table gateway_id, device_key, first_seen, last_seen, drift_type\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (new vs missing devices), Bar chart (drift events by gateway), Timeline (first seen for new devices).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We diff gateway inventory to our golden list so a swap, clone, or duplicate serial does not corrupt our view of the line.",
              "mtype": [
                "Change"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
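A baseline set-comparison sketch for **Litmus Edge Device Inventory Drift** (14.7.5), following the implementation note about using `inputlookup` to compare sets against a golden list. The lookup name `litmus_device_baseline` and its columns (`gateway_id`, `device_key`) are placeholders.

```spl
index=ot (sourcetype="litmus:health" OR sourcetype="litmus:edge") earliest=-24h
| eval device_key=coalesce(device_id, asset_id)
| where isnotnull(device_key)
| stats latest(_time) as last_seen by gateway_id, device_key
| eval source_set="observed"
| append [| inputlookup litmus_device_baseline | eval source_set="baseline"]
| stats values(source_set) as seen_in, max(last_seen) as last_seen by gateway_id, device_key
| eval drift_type=case(mvcount(seen_in)=2, null(), seen_in="observed", "unexpected_device", seen_in="baseline", "missing_device")
| where isnotnull(drift_type)
| table gateway_id, device_key, drift_type, last_seen
```

Devices present in both sets drop out; only additions and disappearances remain for review against change management.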
            {
              "i": "14.7.6",
              "n": "Edge Data Transformation Error Rate",
              "c": "high",
              "f": "intermediate",
              "v": "Transformation and normalization failures silently drop or corrupt OT fields; tracking error rates ties engineering changes to ingestion health.",
              "t": "Litmus Edge (Splunk HEC connector)",
              "d": "`index=ot` `sourcetype=\"litmus:edge\"` error fields or message text",
              "q": "index=ot sourcetype=\"litmus:edge\"\n| eval is_error=if(match(lower(_raw), \"transform error|mapping error|parse error|conversion failed\"), 1, 0)\n| bin _time span=15m\n| stats sum(is_error) as errors, count as total by gateway_id, _time\n| eval error_rate_pct=round(100*errors/total,2)\n| where errors>0\n| table _time, gateway_id, errors, total, error_rate_pct",
              "m": "Route Litmus pipeline diagnostics to Splunk. Alert when error rate exceeds SLO (e.g., >1% over 15 minutes).",
              "z": "Line chart (error rate), Stacked bar (errors vs total), Table (top error pipelines), Single value (fleet error rate %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Litmus Edge (Splunk HEC connector).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"litmus:edge\"` error fields or message text.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRoute Litmus pipeline diagnostics to Splunk. Alert when error rate exceeds SLO (e.g., >1% over 15 minutes).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"litmus:edge\"\n| eval is_error=if(match(lower(_raw), \"transform error|mapping error|parse error|conversion failed\"), 1, 0)\n| bin _time span=15m\n| stats sum(is_error) as errors, count as total by gateway_id, _time\n| eval error_rate_pct=round(100*errors/total,2)\n| where errors>0\n| table _time, gateway_id, errors, total, error_rate_pct\n```\n\nUnderstanding this SPL\n\n**Edge Data Transformation Error Rate** — Transformation and normalization failures silently drop or corrupt OT fields; tracking error rates ties engineering changes to ingestion health.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"litmus:edge\"` error fields or message text. **App/TA** (typical add-on context): Litmus Edge (Splunk HEC connector). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: litmus:edge. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"litmus:edge\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_error** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by gateway_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_rate_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where errors>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Edge Data Transformation Error Rate**): table _time, gateway_id, errors, total, error_rate_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (error rate), Stacked bar (errors vs total), Table (top error pipelines), Single value (fleet error rate %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track transform and connector errors on the industrial gateway so a bad format or field map does not silently drop production data.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
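A triage companion sketch for **Edge Data Transformation Error Rate** (14.7.6): once the error-rate alert fires, grouping by error signature shows which failure mode dominates and how many gateways it touches. The regex reuses the same pattern list as the documented search; extend it as new Litmus error strings appear.

```spl
index=ot sourcetype="litmus:edge"
| rex field=_raw "(?i)(?<error_signature>transform error|mapping error|parse error|conversion failed)"
| where isnotnull(error_signature)
| stats count as occurrences, dc(gateway_id) as gateways_affected, latest(_time) as last_seen by error_signature
| convert ctime(last_seen)
| sort - occurrences
```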
            {
              "i": "14.7.7",
              "n": "Multi-Site Litmus Edge Fleet Health",
              "c": "high",
              "f": "intermediate",
              "v": "Aggregating gateway health by site highlights WAN issues, deployment drift, and capacity hotspots across plants without per-host manual checks.",
              "t": "Litmus Edge (Splunk HEC connector)",
              "d": "`index=ot` `sourcetype=\"litmus:health\"` fields `site_id`, `gateway_id`, `status`",
              "q": "index=ot sourcetype=\"litmus:health\"\n| eval site=coalesce(site_id, plant_id, \"unknown\")\n| eval is_ok=if(match(lower(status),\"online|up|connected|running\"), 1, 0)\n| stats count as events, sum(is_ok) as ok_events, dc(gateway_id) as gateways by site\n| eval health_pct=round(100*ok_events/events,1)\n| table site, gateways, health_pct, ok_events, events\n| sort health_pct",
              "m": "Require site_id in every Litmus health payload. Schedule a nightly report by site for executive visibility.",
              "z": "Bar chart (health % by site), Table (site ranking), Treemap (gateways × site).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Litmus Edge (Splunk HEC connector).\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"litmus:health\"` fields `site_id`, `gateway_id`, `status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequire site_id in every Litmus health payload. Schedule a nightly report by site for executive visibility.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"litmus:health\"\n| eval site=coalesce(site_id, plant_id, \"unknown\")\n| eval is_ok=if(match(lower(status),\"online|up|connected|running\"), 1, 0)\n| stats count as events, sum(is_ok) as ok_events, dc(gateway_id) as gateways by site\n| eval health_pct=round(100*ok_events/events,1)\n| table site, gateways, health_pct, ok_events, events\n| sort health_pct\n```\n\nUnderstanding this SPL\n\n**Multi-Site Litmus Edge Fleet Health** — Aggregating gateway health by site highlights WAN issues, deployment drift, and capacity hotspots across plants without per-host manual checks.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"litmus:health\"` fields `site_id`, `gateway_id`, `status`. **App/TA** (typical add-on context): Litmus Edge (Splunk HEC connector). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: litmus:health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"litmus:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **site** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **health_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Multi-Site Litmus Edge Fleet Health**): table site, gateways, health_pct, ok_events, events\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (health % by site), Table (site ranking), Treemap (gateways × site).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We roll up gateway health across sites so we can see which plant or cell is falling behind the fleet norm.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
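A trending sketch for **Multi-Site Litmus Edge Fleet Health** (14.7.7): the snapshot ranking answers which site is worst right now, while an hourly health percentage per site makes slow WAN degradation visible. Field assumptions are the same as the documented search; `xyseries` pivots the result into one series per site for a line chart.

```spl
index=ot sourcetype="litmus:health"
| eval site=coalesce(site_id, plant_id, "unknown")
| eval is_ok=if(match(lower(status), "online|up|connected|running"), 1, 0)
| bin _time span=1h
| stats count as events, sum(is_ok) as ok_events by site, _time
| eval health_pct=round(100*ok_events/events, 1)
| xyseries _time site health_pct
```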
            {
              "i": "14.7.8",
              "n": "Litmus Edge Connector Authentication Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "HEC token leakage, rotation mistakes, or clock skew cause auth failures that stop ingestion; monitoring auth errors accelerates rotation and prevents silent data gaps.",
              "t": "Litmus Edge (Splunk HEC connector)",
              "d": "Splunk `_internal` HEC (`sourcetype=\"splunkd_http_input\"`) for HTTP Event Collector rejections",
              "q": "index=_internal sourcetype=\"splunkd_http_input\" (status=401 OR status=403)\n| stats count as failures, values(source) as sources by host\n| where failures > 0\n| sort - failures",
              "m": "Ensure HEC is enabled only on the collector Litmus uses. Correlate spikes with token rotations and NTP drift. Alert on any 401/403 responses.",
              "z": "Line chart (auth failures over time), Table (hosts with failures), Single value (failures in last hour).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Litmus Edge (Splunk HEC connector).\n• Ensure the following data sources are available: Splunk `_internal` HEC (`sourcetype=\"splunkd_http_input\"`) for HTTP Event Collector rejections.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure HEC is enabled only on the collector Litmus uses. Correlate spikes with token rotations and NTP drift. Alert on any 401/403 responses.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=\"splunkd_http_input\" (status=401 OR status=403)\n| stats count as failures, values(source) as sources by host\n| where failures > 0\n| sort - failures\n```\n\nUnderstanding this SPL\n\n**Litmus Edge Connector Authentication Monitoring** — HEC token leakage, rotation mistakes, or clock skew cause auth failures that stop ingestion; monitoring auth errors accelerates rotation and prevents silent data gaps.\n\nDocumented **Data sources**: Splunk `_internal` HEC (`sourcetype=\"splunkd_http_input\"`) for HTTP Event Collector rejections. **App/TA** (typical add-on context): Litmus Edge (Splunk HEC connector). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd_http_input. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=\"splunkd_http_input\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failures > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (auth failures over time), Table (hosts with failures), Single value (failures in last hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log connector and API authentication on the industrial gateway so a bad key or user does not look like a data drought.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.7.9",
              "n": "Centralized Model Retraining for Industrial Sensor ML (DSDL)",
              "c": "high",
              "f": "expert",
              "v": "Edge-deployed ML models (autoencoder anomaly detection, XGBoost predictive maintenance, CNN visual inspection) degrade over time as equipment ages, processes change, and seasonal conditions shift. Centralized retraining in Splunk aggregates sensor data from all edge gateways, trains updated models via DSDL containers, and tracks model drift — ensuring edge predictions stay accurate without requiring data science expertise at each plant site.",
              "t": "Splunk Deep Learning Toolkit (DSDL), Splunk Edge Hub, Litmus Edge",
              "d": "`index=ot sourcetype=edge_hub:metrics`, `index=ot sourcetype=litmus:edge`, model performance logs",
              "q": "index=ot sourcetype IN (\"edge_hub:metrics\",\"litmus:edge\")\n| bin _time span=1h\n| stats avg(metric_value) as val stdev(metric_value) as val_std by _time, sensor_id, metric_name, site\n| eval feature_vector=val.\",\".val_std\n| fit AutoEncoderAE feature_vector into industrial_anomaly_model_v2\n| summary industrial_anomaly_model_v2\n| eval model_version=\"v2_\".strftime(now(), \"%Y%m%d\")\n| eval training_samples=count\n| table model_version, training_samples, reconstruction_error_mean, reconstruction_error_std",
              "m": "Aggregate sensor data from all edge gateways (Edge Hub, Litmus Edge, Cisco EI) into a centralized OT index. Schedule weekly retraining jobs via DSDL containers running PyTorch or TensorFlow autoencoders. Track model performance metrics (reconstruction error distribution, anomaly detection rate, false positive rate) in a model registry KV store. Compare new model metrics against the production model before promotion. Deploy updated model weights back to edge gateways via the Edge Hub container management API or Litmus Edge's model deployment pipeline. Alert data engineering when model drift exceeds thresholds (reconstruction error mean shifting more than 2 standard deviations from the training baseline). Maintain model versioning and rollback capability.",
              "z": "Line chart (reconstruction error over model versions), Bar chart (anomaly detection rate by site), Table (model registry with version, accuracy, deployment status), Single value (model age in days).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Deep Learning Toolkit (DSDL), Splunk Edge Hub, Litmus Edge.\n• Ensure the following data sources are available: `index=ot sourcetype=edge_hub:metrics`, `index=ot sourcetype=litmus:edge`, model performance logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate sensor data from all edge gateways (Edge Hub, Litmus Edge, Cisco EI) into a centralized OT index. Schedule weekly retraining jobs via DSDL containers running PyTorch or TensorFlow autoencoders. Track model performance metrics (reconstruction error distribution, anomaly detection rate, false positive rate) in a model registry KV store. Compare new model metrics against the production model before promotion. Deploy updated model weights back to edge gateways via the Edge Hub container ma…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype IN (\"edge_hub:metrics\",\"litmus:edge\")\n| bin _time span=1h\n| stats avg(metric_value) as val stdev(metric_value) as val_std by _time, sensor_id, metric_name, site\n| eval feature_vector=val.\",\".val_std\n| fit AutoEncoderAE feature_vector into industrial_anomaly_model_v2\n| summary industrial_anomaly_model_v2\n| eval model_version=\"v2_\".strftime(now(), \"%Y%m%d\")\n| eval training_samples=count\n| table model_version, training_samples, reconstruction_error_mean, reconstruction_error_std\n```\n\nUnderstanding this SPL\n\n**Centralized Model Retraining for Industrial Sensor ML (DSDL)** — Edge-deployed ML models (autoencoder anomaly detection, XGBoost predictive maintenance, CNN visual inspection) degrade over time as equipment ages, processes change, and seasonal conditions shift. Centralized retraining in Splunk aggregates sensor data from all edge gateways, trains updated models via DSDL containers, and tracks model drift — ensuring edge predictions stay accurate without requiring data science expertise at each plant site.\n\nDocumented **Data sources**: `index=ot sourcetype=edge_hub:metrics`, `index=ot sourcetype=litmus:edge`, model performance logs. **App/TA** (typical add-on context): Splunk Deep Learning Toolkit (DSDL), Splunk Edge Hub, Litmus Edge. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, sensor_id, metric_name, site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **feature_vector** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Centralized Model Retraining for Industrial Sensor ML (DSDL)**): fit AutoEncoderAE feature_vector into industrial_anomaly_model_v2\n• Pipeline stage (see **Centralized Model Retraining for Industrial Sensor ML (DSDL)**): summary industrial_anomaly_model_v2\n• `eval` defines or adjusts **model_version** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **training_samples** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Centralized Model Retraining for Industrial Sensor ML (DSDL)**): table model_version, training_samples, reconstruction_error_mean, reconstruction_error_std\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (reconstruction error over model versions), Bar chart (anomaly detection rate by site), Table (model registry with version, accuracy, deployment status), Single value (model age in days).",
              "script": "",
              "premium": "None (DSDL is free)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow model retraining and pipeline health for on-device analytics so a bad deploy does not poison predictions we trust.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.1,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 9,
            "none": 0
          }
        },
        {
          "i": "14.8",
          "n": "IoT & OT Trending",
          "u": [
            {
              "i": "14.8.1",
              "n": "Device Fleet Online Rate Trending",
              "c": "high",
              "f": "intermediate",
              "v": "The share of devices reporting on schedule is a leading indicator of network, power, or gateway issues before production KPIs degrade; trending over 30 days shows whether fixes stick across sites.",
              "t": "Splunk Add-on for Edge Hub, optional Splunk OTel for OT gateways",
              "d": "`index=ot` metrics: `sourcetype=edge_hub:metrics` with `metric_name` such as `device.online`, `device.heartbeat`, or `gateway.device_active`; dimensions `device_id`, `site_id`",
              "q": "| mstats avg(_value) WHERE index=ot metric_name IN (\"device.online\",\"device.heartbeat_ok\") span=1d BY device_id\n| eval online_flag=if('avg(_value)'>0.5,1,0)\n| stats avg(online_flag) as fleet_online_rate by _time\n| eval online_pct=round(100*fleet_online_rate,1)\n| trendline sma7(online_pct) as online_trend\n| eventstats median(online_pct) as med30\n| eval vs_baseline=round(online_pct - med30,2)",
              "m": "Normalize `device.online` to 0/1 (or heartbeat) in the metrics pipeline. Multiply `fleet_online_rate` by a fleet `inputlookup` total if `mstats` only returns devices that reported—otherwise the average is the share of devices reporting at least one healthy sample per day. Align `metric_name` with Edge Hub transforms. If metrics are not available, use daily `dc(device_id)` from `edge_hub:*` heartbeats divided by the inventory lookup. Alert when `online_pct` drops more than 5 points below the 30-day median for three consecutive days.",
              "z": "Line chart (online % and trendline), Area chart (online versus offline device-days), Single value (current fleet availability).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Operational Telemetry](https://docs.splunk.com/Documentation/CIM/latest/User/Operational_Telemetry)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Edge Hub, optional Splunk OTel for OT gateways.\n• Ensure the following data sources are available: `index=ot` metrics: `sourcetype=edge_hub:metrics` with `metric_name` such as `device.online`, `device.heartbeat`, or `gateway.device_active`; dimensions `device_id`, `site_id`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize `device.online` to 0/1 (or heartbeat) in the metrics pipeline. Multiply `fleet_online_rate` by a fleet `inputlookup` total if `mstats` only returns devices that reported—otherwise the average is the share of devices reporting at least one healthy sample per day. Align `metric_name` with Edge Hub transforms. If metrics are not available, use daily `dc(device_id)` from `edge_hub:*` heartbeats divided by the inventory lookup. Alert when `online_pct` drops more than 5 points below the 30-d…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(_value) WHERE index=ot metric_name IN (\"device.online\",\"device.heartbeat_ok\") span=1d BY device_id\n| eval online_flag=if('avg(_value)'>0.5,1,0)\n| stats avg(online_flag) as fleet_online_rate by _time\n| eval online_pct=round(100*fleet_online_rate,1)\n| trendline sma7(online_pct) as online_trend\n| eventstats median(online_pct) as med30\n| eval vs_baseline=round(online_pct - med30,2)\n```\n\nUnderstanding this SPL\n\n**Device Fleet Online Rate Trending** — The share of devices reporting on schedule is a leading indicator of network, power, or gateway issues before production KPIs degrade; trending over 30 days shows whether fixes stick across sites.\n\nDocumented **Data sources**: `index=ot` metrics: `sourcetype=edge_hub:metrics` with `metric_name` such as `device.online`, `device.heartbeat`, or `gateway.device_active`; dimensions `device_id`, `site_id`. **App/TA** (typical add-on context): Splunk Add-on for Edge Hub, optional Splunk OTel for OT gateways. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **online_flag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **online_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Device Fleet Online Rate Trending**): trendline sma7(online_pct) as online_trend\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
onto every row without reducing the row count.\n• `eval` defines or adjusts **vs_baseline** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nThis search reads pre-aggregated measurements with `mstats`; make sure the `ot` metrics index exists and receives these metric names, otherwise the search returns no results.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (online % and trendline), Area chart (online versus offline device-days), Single value (current fleet availability).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend how many devices stay online over time so slow erosion of connectivity shows up before a line-wide outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Operational_Telemetry (Metrics) where tagged; otherwise N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.8.2",
              "n": "Sensor Data Quality Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Missing or stale readings break historians, dashboards, and ML features; daily trending exposes ingestion gaps early so teams can fix collectors, radios, or brokers before safety or quality thresholds are blind.",
              "t": "Edge Hub, Modbus/OPC-UA TAs",
              "d": "`index=ot` `sourcetype IN (\"edge_hub:metrics\",\"edge_hub:events\",\"modbus:readings\",\"opcua:telemetry\")`",
              "q": "index=ot sourcetype IN (\"edge_hub:metrics\",\"modbus:readings\",\"opcua:telemetry\") earliest=-30d@d\n| eval bad=if(match(lower(coalesce(quality,data_quality)),\"(?i)bad|invalid|stale\") OR isnull(metric_value),1,0)\n| timechart span=1d sum(bad) as bad_reads count as total_reads\n| eval bad_pct=round(100*bad_reads/nullif(total_reads,0),2)\n| trendline sma7(bad_pct) as quality_trend\n| eventstats avg(bad_pct) as run_avg\n| eval delta=round(bad_pct - run_avg,2)",
              "m": "Map your quality field (`good`, `192` OPC status, Modbus exception flags). If quality is not present, infer staleness with `streamstats` per `device_id` and `tag` using gaps between `_time` greater than `3×` the expected poll interval from a lookup. Dedupe by `device_id`+`tag` to avoid double counting. Tune thresholds per line speed—fast PLCs need tighter windows than environmental sensors.",
              "z": "Line chart (daily bad or stale % with trend), Stacked bar (bad readings by site), Table (worst devices by gap count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Operational Telemetry](https://docs.splunk.com/Documentation/CIM/latest/User/Operational_Telemetry)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Edge Hub, Modbus/OPC-UA TAs.\n• Ensure the following data sources are available: `index=ot` `sourcetype IN (\"edge_hub:metrics\",\"edge_hub:events\",\"modbus:readings\",\"opcua:telemetry\")`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap your quality field (`good`, `192` OPC status, Modbus exception flags). If quality is not present, infer staleness with `streamstats` per `device_id` and `tag` using gaps between `_time` greater than `3×` the expected poll interval from a lookup. Dedupe by `device_id`+`tag` to avoid double counting. Tune thresholds per line speed—fast PLCs need tighter windows than environmental sensors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype IN (\"edge_hub:metrics\",\"modbus:readings\",\"opcua:telemetry\") earliest=-30d@d\n| eval bad=if(match(lower(coalesce(quality,data_quality)),\"(?i)bad|invalid|stale\") OR isnull(metric_value),1,0)\n| timechart span=1d sum(bad) as bad_reads count as total_reads\n| eval bad_pct=round(100*bad_reads/nullif(total_reads,0),2)\n| trendline sma7(bad_pct) as quality_trend\n| eventstats avg(bad_pct) as run_avg\n| eval delta=round(bad_pct - run_avg,2)\n```\n\nUnderstanding this SPL\n\n**Sensor Data Quality Trending** — Missing or stale readings break historians, dashboards, and ML features; daily trending exposes ingestion gaps early so teams can fix collectors, radios, or brokers before safety or quality thresholds are blind.\n\nDocumented **Data sources**: `index=ot` `sourcetype IN (\"edge_hub:metrics\",\"edge_hub:events\",\"modbus:readings\",\"opcua:telemetry\")`. **App/TA** (typical add-on context): Edge Hub, Modbus/OPC-UA TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **bad** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **bad_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Sensor Data Quality Trending**): trendline sma7(bad_pct) as quality_trend\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily bad or stale % with trend), Stacked bar (bad readings by site), Table (worst devices by gap count).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend data quality and gap rates so quality drift is visible in the run-up to scrap or unplanned stops.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Operational_Telemetry (Quality / Metrics) where mapped; otherwise N/A"
              ],
              "e": [
                "edge_hub",
                "modbus",
                "opcua"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.8.3",
              "n": "OEE Trending",
              "c": "high",
              "f": "advanced",
              "v": "Overall Equipment Effectiveness summarizes availability, performance, and quality; weekly or monthly trends show whether maintenance programs and changeovers are improving plant output without waiting for quarterly business reviews.",
              "t": "Edge Hub, MES or historian connectors (optional), OT metrics model",
              "d": "`index=ot` `| mstats` on `metric_name=\"oee\"` with `asset_id` or `line_id`; fallback `sourcetype=edge_hub:metrics` events carrying `oee` fields",
              "q": "| mstats avg(_value) WHERE index=ot metric_name=\"oee\" span=1w BY asset_id\n| rename \"avg(_value)\" as oee_avg\n| eval oee_pct=round(oee_avg*100,2)\n| stats avg(oee_pct) as fleet_oee by _time\n| trendline sma4(fleet_oee) as oee_trend\n| eval gap_to_target=round(85 - fleet_oee,2)",
              "m": "If OEE is not pre-calculated, derive it from availability × performance × quality metrics ingested separately (use `eval` in a scheduled search writing to a summary index). Align shifts and planned downtime using a `production_calendar` lookup to avoid penalizing scheduled stops. Compare assets to peers in the same technology line. Alert when 4-week rolling OEE drops more than 5 points below baseline.",
              "z": "Line chart (OEE % by line over weeks), Area chart (components if split metrics), Bullet chart (actual versus target OEE).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[CIM: Operational Telemetry](https://docs.splunk.com/Documentation/CIM/latest/User/Operational_Telemetry)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Edge Hub, MES or historian connectors (optional), OT metrics model.\n• Ensure the following data sources are available: `index=ot` `| mstats` on `metric_name=\"oee\"` with `asset_id` or `line_id`; fallback `sourcetype=edge_hub:metrics` events carrying `oee` fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIf OEE is not pre-calculated, derive it from availability × performance × quality metrics ingested separately (use `eval` in a scheduled search writing to a summary index). Align shifts and planned downtime using a `production_calendar` lookup to avoid penalizing scheduled stops. Compare assets to peers in the same technology line. Alert when 4-week rolling OEE drops more than 5 points below baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(_value) WHERE index=ot metric_name=\"oee\" span=1w BY asset_id\n| rename \"avg(_value)\" as oee_avg\n| eval oee_pct=round(oee_avg*100,2)\n| stats avg(oee_pct) as fleet_oee by _time\n| trendline sma4(fleet_oee) as oee_trend\n| eval gap_to_target=round(85 - fleet_oee,2)\n```\n\nUnderstanding this SPL\n\n**OEE Trending** — Overall Equipment Effectiveness summarizes availability, performance, and quality; weekly or monthly trends show whether maintenance programs and changeovers are improving plant output without waiting for quarterly business reviews.\n\nDocumented **Data sources**: `index=ot` `| mstats` on `metric_name=\"oee\"` with `asset_id` or `line_id`; fallback `sourcetype=edge_hub:metrics` events carrying `oee` fields. **App/TA** (typical add-on context): Edge Hub, MES or historian connectors (optional), OT metrics model. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• Renames fields with `rename` for clarity or joins.\n• `eval` defines or adjusts **oee_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **OEE Trending**): trendline sma4(fleet_oee) as oee_trend\n• `eval` defines or adjusts **gap_to_target** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (OEE % by line over weeks), Area chart (components if split metrics), Bullet chart (actual versus target OEE).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend overall equipment effectiveness so we can tell whether a line is healthy beyond a single day’s snapshot.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Operational_Telemetry (Production / OEE) where enabled; otherwise N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.8.4",
              "n": "Predictive Maintenance Alert Volume Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "ML-driven anomaly counts should rise when equipment drifts but should fall after maintenance; tracking weekly volume validates model calibration and catches alert fatigue or broken scoring pipelines.",
              "t": "Splunk Machine Learning Toolkit (optional), Edge Hub anomaly outputs, DSDL or custom scoring jobs",
              "d": "`index=ot` `sourcetype IN (\"edge_hub:anomaly\",\"edge_hub:alert\",\"edge_hub:ml\")` with `severity`, `model_id`, `asset_id`",
              "q": "index=ot sourcetype IN (\"edge_hub:anomaly\",\"edge_hub:alert\") earliest=-90d@d\n| timechart span=1w count as pred_alerts\n| trendline sma6(pred_alerts) as alert_trend\n| eventstats avg(pred_alerts) as baseline\n| eval spike=if(pred_alerts > baseline*1.5,1,0)",
              "m": "Tag production ML alerts distinctly from threshold rules. Deduplicate repeated scores per asset per hour if the model emits bursts. Correlate spikes with maintenance windows—expected dips after work orders. If counts go to zero suddenly, validate the scoring container and HEC path. Feed counts back to data science for precision and recall reviews quarterly.",
              "z": "Line chart (weekly ML alert count with trend), Bar chart (alerts by asset), Single value (alerts versus 8-week average).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [Splunkbase app 6796](https://splunkbase.splunk.com/app/6796), [CIM: Operational Telemetry](https://docs.splunk.com/Documentation/CIM/latest/User/Operational_Telemetry)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (optional), Edge Hub anomaly outputs, DSDL or custom scoring jobs.\n• Ensure the following data sources are available: `index=ot` `sourcetype IN (\"edge_hub:anomaly\",\"edge_hub:alert\",\"edge_hub:ml\")` with `severity`, `model_id`, `asset_id`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag production ML alerts distinctly from threshold rules. Deduplicate repeated scores per asset per hour if the model emits bursts. Correlate spikes with maintenance windows—expected dips after work orders. If counts go to zero suddenly, validate the scoring container and HEC path. Feed counts back to data science for precision and recall reviews quarterly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype IN (\"edge_hub:anomaly\",\"edge_hub:alert\") earliest=-90d@d\n| timechart span=1w count as pred_alerts\n| trendline sma6(pred_alerts) as alert_trend\n| eventstats avg(pred_alerts) as baseline\n| eval spike=if(pred_alerts > baseline*1.5,1,0)\n```\n\nUnderstanding this SPL\n\n**Predictive Maintenance Alert Volume Trending** — ML-driven anomaly counts should rise when equipment drifts but should fall after maintenance; tracking weekly volume validates model calibration and catches alert fatigue or broken scoring pipelines.\n\nDocumented **Data sources**: `index=ot` `sourcetype IN (\"edge_hub:anomaly\",\"edge_hub:alert\",\"edge_hub:ml\")` with `severity`, `model_id`, `asset_id`. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (optional), Edge Hub anomaly outputs, DSDL or custom scoring jobs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1w** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Predictive Maintenance Alert Volume Trending**): trendline sma6(pred_alerts) as alert_trend\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **spike** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (weekly ML alert count with trend), Bar chart (alerts by asset), Single value (alerts versus 8-week average).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend how many predictive maintenance alerts fire so a jump or drop tells us the model, assets, or process changed in a meaningful way.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Operational_Telemetry (Maintenance / Security) as applicable; otherwise N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 4,
            "none": 0
          }
        },
        {
          "i": "14.9",
          "n": "OT Network Security Monitoring (Cisco Cyber Vision / Nozomi Networks)",
          "u": [
            {
              "i": "14.9.1",
              "n": "OT Asset Discovery and Inventory Tracking",
              "c": "critical",
              "f": "beginner",
              "v": "You cannot secure what you do not know exists. OT network security platforms (Cisco Cyber Vision, Nozomi Guardian) passively discover every device on your OT network via deep packet inspection of industrial protocols — PLCs, HMIs, drives, field devices — and builds an always-current inventory with vendor, model, firmware, serial number, and rack slot. This eliminates manual spreadsheets and provides the foundation for all other OT security use cases.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:device` · Nozomi: `sourcetype=nozomi:nn_asset`",
              "q": "index=nozomi sourcetype=\"nozomi:nn_asset\"\n| stats dc(id) as total_devices, dc(vendor) as vendors, values(vendor) as vendor_list by zone\n| eval device_summary=total_devices.\" devices from \".vendors.\" vendors\"\n| table zone, total_devices, vendors, vendor_list, device_summary",
              "m": "**Cyber Vision:** Configure Splunk Add-On with API token from Cyber Vision Center; add \"Devices\" input with polling interval (e.g. 3600s). **Nozomi:** Configure Universal Add-on with Guardian/Vantage API credentials; enable the `nn_asset` data input. Both platforms passively discover assets — use device data as the authoritative OT asset inventory for compliance and security programs.",
              "z": "Asset count single value; vendor breakdown pie chart; device table with firmware versions; site comparison bar chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Compute_Inventory](https://docs.splunk.com/Documentation/CIM/latest/User/Compute_Inventory)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:device` · Nozomi: `sourcetype=nozomi:nn_asset`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n**Cyber Vision:** Configure Splunk Add-On with API token from Cyber Vision Center; add \"Devices\" input with polling interval (e.g. 3600s). **Nozomi:** Configure Universal Add-on with Guardian/Vantage API credentials; enable the `nn_asset` data input. Both platforms passively discover assets — use device data as the authoritative OT asset inventory for compliance and security programs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:nn_asset\"\n| stats dc(id) as total_devices, dc(vendor) as vendors, values(vendor) as vendor_list by zone\n| eval device_summary=total_devices.\" devices from \".vendors.\" vendors\"\n| table zone, total_devices, vendors, vendor_list, device_summary\n```\n\nUnderstanding this SPL\n\n**OT Asset Discovery and Inventory Tracking** — You cannot secure what you do not know exists. OT network security platforms (Cisco Cyber Vision, Nozomi Guardian) passively discover every device on your OT network via deep packet inspection of industrial protocols — PLCs, HMIs, drives, field devices — and builds an always-current inventory with vendor, model, firmware, serial number, and rack slot. This eliminates manual spreadsheets and provides the foundation for all other OT security use cases.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:device` · Nozomi: `sourcetype=nozomi:nn_asset`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:nn_asset. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:nn_asset\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **device_summary** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **OT Asset Discovery and Inventory Tracking**): table zone, total_devices, vendors, vendor_list, device_summary\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Compute_Inventory.Virtual_OS by Virtual_OS.dest, Virtual_OS.status | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OT Asset Discovery and Inventory Tracking** — You cannot secure what you do not know exists. 
OT network security platforms (Cisco Cyber Vision, Nozomi Guardian) passively discover every device on your OT network via deep packet inspection of industrial protocols — PLCs, HMIs, drives, field devices — and builds an always-current inventory with vendor, model, firmware, serial number, and rack slot. This eliminates manual spreadsheets and provides the foundation for all other OT security use cases.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:device` · Nozomi: `sourcetype=nozomi:nn_asset`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Compute_Inventory.Virtual_OS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Asset count single value; vendor breakdown pie chart; device table with firmware versions; site comparison bar chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use our network security tool’s inventory in Splunk so we know which OT devices exist and how they change over time, which is the basis for protecting them.",
              "mtype": [
                "Inventory"
              ],
              "a": [
                "Compute_Inventory"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Compute_Inventory.Virtual_OS by Virtual_OS.dest, Virtual_OS.status | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.2",
              "n": "New OT Device Detection Alert",
              "c": "critical",
              "f": "beginner",
              "v": "Any new device appearing on a baselined OT network is a potential threat — rogue devices, unauthorized laptops, or attacker implants. OT security platforms detect new components automatically and generate alerts. Immediate alerting enables rapid investigation before a threat can spread.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"ASSET-NEW\")",
              "q": "index=nozomi sourcetype=\"nozomi:alert\" type_id=\"ASSET-NEW\"\n| table _time, name, ip, mac_address, zone, risk\n| sort -_time",
              "m": "**Cyber Vision:** Forward syslog to Splunk (CEF format via TCP/TLS); alert on `component_new` events. **Nozomi:** Enable `alert` data input; filter on `type_id=\"ASSET-NEW\"`. Both platforms detect new devices automatically. Enrich with sensor/zone location to identify physical site. Cross-reference against approved asset list. Investigate unauthorized devices within SLA.",
              "z": "New device alert timeline; device location map by sensor; unauthorized device table.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"ASSET-NEW\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n**Cyber Vision:** Forward syslog to Splunk (CEF format via TCP/TLS); alert on `component_new` events. **Nozomi:** Enable `alert` data input; filter on `type_id=\"ASSET-NEW\"`. Both platforms detect new devices automatically. Enrich with sensor/zone location to identify physical site. Cross-reference against approved asset list. Investigate unauthorized devices within SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" type_id=\"ASSET-NEW\"\n| table _time, name, ip, mac_address, zone, risk\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**New OT Device Detection Alert** — Any new device appearing on a baselined OT network is a potential threat — rogue devices, unauthorized laptops, or attacker implants. OT security platforms detect new components automatically and generate alerts. Immediate alerting enables rapid investigation before a threat can spread.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"ASSET-NEW\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **New OT Device Detection Alert**): table _time, name, ip, mac_address, zone, risk\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**New OT Device Detection Alert** — Any new device appearing on a baselined OT network is a potential threat — rogue devices, unauthorized laptops, or attacker implants. OT security platforms detect new components automatically and generate alerts. Immediate alerting enables rapid investigation before a threat can spread.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"ASSET-NEW\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: New device alert timeline; device location map by sensor; unauthorized device table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We flag devices we have not seen before on the OT side so a rogue or mis-placed box is not just another line on a diagram.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Network_Traffic",
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "both",
              "_qs": 45,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.3",
              "n": "OT Asset Vulnerability Detection and CVE Tracking",
              "c": "critical",
              "f": "intermediate",
              "v": "OT security platforms automatically match discovered OT assets against known CVEs, generating vulnerability alerts with CVE ID, CVSS score, and affected component. This eliminates manual vulnerability scanning in sensitive OT environments where active scanners can disrupt processes.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:vulnerability` · Nozomi: `sourcetype=nozomi:alert` (type_id contains \"CVE\")",
              "q": "index=nozomi sourcetype=\"nozomi:alert\" type_id=\"CVE-*\"\n| eval cvss_severity=case(cvss>=9.0, \"Critical\", cvss>=7.0, \"High\", cvss>=4.0, \"Medium\", cvss<4.0, \"Low\")\n| stats count as vuln_count, max(cvss) as max_cvss, values(type_id) as cve_list by ip, name, zone\n| where max_cvss >= 7.0\n| sort -max_cvss\n| table name, ip, zone, vuln_count, max_cvss, cve_list",
              "m": "**Cyber Vision:** Enable \"Vulnerabilities\" input; knowledge base updated weekly via Cisco Talos. **Nozomi:** Enable `alert` input; Guardian matches assets against NVD automatically. Both platforms provide passive vulnerability assessment without active scanning. Track vulnerability counts per asset. Prioritize remediation by CVSS severity and asset criticality.",
              "z": "Vulnerability count by severity pie chart; top 10 vulnerable assets table; CVE trend over time; CVSS distribution histogram.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:vulnerability` · Nozomi: `sourcetype=nozomi:alert` (type_id contains \"CVE\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n**Cyber Vision:** Enable \"Vulnerabilities\" input; knowledge base updated weekly via Cisco Talos. **Nozomi:** Enable `alert` input; Guardian matches assets against NVD automatically. Both platforms provide passive vulnerability assessment without active scanning. Track vulnerability counts per asset. Prioritize remediation by CVSS severity and asset criticality.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" type_id=\"CVE-*\"\n| eval cvss_severity=case(cvss>=9.0, \"Critical\", cvss>=7.0, \"High\", cvss>=4.0, \"Medium\", cvss<4.0, \"Low\")\n| stats count as vuln_count, max(cvss) as max_cvss, values(type_id) as cve_list by ip, name, zone\n| where max_cvss >= 7.0\n| sort -max_cvss\n| table name, ip, zone, vuln_count, max_cvss, cve_list\n```\n\nUnderstanding this SPL\n\n**OT Asset Vulnerability Detection and CVE Tracking** — OT security platforms automatically match discovered OT assets against known CVEs, generating vulnerability alerts with CVE ID, CVSS score, and affected component. This eliminates manual vulnerability scanning in sensitive OT environments where active scanners can disrupt processes.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:vulnerability` · Nozomi: `sourcetype=nozomi:alert` (type_id contains \"CVE\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:alert\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cvss_severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by ip, name, zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where max_cvss >= 7.0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **OT Asset Vulnerability Detection and CVE Tracking**): table name, ip, zone, vuln_count, max_cvss, cve_list\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OT Asset Vulnerability Detection and CVE Tracking** — OT security platforms automatically match discovered OT assets against known CVEs, generating vulnerability alerts with CVE ID, CVSS score, and affected component. This eliminates manual vulnerability scanning in sensitive OT environments where active scanners can disrupt processes.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:vulnerability` · Nozomi: `sourcetype=nozomi:alert` (type_id contains \"CVE\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Vulnerability count by severity pie chart; top 10 vulnerable assets table; CVE trend over time; CVSS distribution histogram.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track vulnerability findings on OT gear so we can prioritize what matters even when a patch is not a simple click.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.4",
              "n": "OT Asset Risk Score Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "OT security platforms calculate composite risk scores per device considering vulnerabilities, communication patterns, and security posture. Monitoring risk score changes over time shows whether your OT security posture is improving or degrading, and highlights which assets need immediate attention.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:device` · Nozomi: `sourcetype=nozomi:nn_asset`",
              "q": "index=nozomi sourcetype=\"nozomi:nn_asset\" risk=*\n| stats latest(risk) as current_risk by id, name, ip, zone\n| eval risk_level=case(current_risk>=8, \"Critical\", current_risk>=6, \"High\", current_risk>=4, \"Medium\", current_risk<4, \"Low\")\n| where current_risk >= 6\n| sort -current_risk\n| table name, ip, zone, current_risk, risk_level",
              "m": "Both platforms calculate composite risk scores per device. Track score changes over time. Alert on assets crossing risk thresholds. Use risk scores to prioritize patching and segmentation efforts. Report on overall risk posture trends for management.",
              "z": "Risk distribution gauge; high-risk asset table; risk trend line per site; risk heatmap by asset group.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Compute_Inventory](https://docs.splunk.com/Documentation/CIM/latest/User/Compute_Inventory)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:device` · Nozomi: `sourcetype=nozomi:nn_asset`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms calculate composite risk scores per device. Track score changes over time. Alert on assets crossing risk thresholds. Use risk scores to prioritize patching and segmentation efforts. Report on overall risk posture trends for management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:nn_asset\" risk=*\n| stats latest(risk) as current_risk by id, name, ip, zone\n| eval risk_level=case(current_risk>=8, \"Critical\", current_risk>=6, \"High\", current_risk>=4, \"Medium\", current_risk<4, \"Low\")\n| where current_risk >= 6\n| sort -current_risk\n| table name, ip, zone, current_risk, risk_level\n```\n\nUnderstanding this SPL\n\n**OT Asset Risk Score Monitoring** — OT security platforms calculate composite risk scores per device considering vulnerabilities, communication patterns, and security posture. Monitoring risk score changes over time shows whether your OT security posture is improving or degrading, and highlights which assets need immediate attention.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:device` · Nozomi: `sourcetype=nozomi:nn_asset`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:nn_asset. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:nn_asset\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by id, name, ip, zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **risk_level** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where current_risk >= 6` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **OT Asset Risk Score Monitoring**): table name, ip, zone, current_risk, risk_level\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Compute_Inventory.Virtual_OS by Virtual_OS.dest, Virtual_OS.status | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OT Asset Risk Score Monitoring** — OT security platforms calculate composite risk scores per device considering vulnerabilities, communication patterns, and security posture. 
Monitoring risk score changes over time shows whether your OT security posture is improving or degrading, and highlights which assets need immediate attention.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:device` · Nozomi: `sourcetype=nozomi:nn_asset`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Compute_Inventory.Virtual_OS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Risk distribution gauge; high-risk asset table; risk trend line per site; risk heatmap by asset group.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow risk scores on each asset so the worst combinations of exposure and comms get attention first.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Compute_Inventory"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Compute_Inventory.Virtual_OS by Virtual_OS.dest, Virtual_OS.status | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.5",
              "n": "Baseline Deviation Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "OT security platforms baseline normal network behavior — which devices talk to which, over what protocols, at what frequency. Deviations from baseline trigger alerts, detecting unauthorized communication changes, new data flows, or behavioral shifts that could indicate compromise or misconfiguration.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"ANOMALY\")",
              "q": "index=nozomi sourcetype=\"nozomi:alert\" type_id=\"ANOMALY*\"\n| eval severity_label=case(risk>=8, \"Critical\", risk>=6, \"High\", risk>=4, \"Medium\", risk<4, \"Low\")\n| stats count as total_deviations by severity_label, name, zone, _time\n| sort -severity_label\n| table _time, severity_label, name, zone, total_deviations",
              "m": "**Cyber Vision:** Create baselines for critical production zones; enable monitoring mode. **Nozomi:** Guardian automatically builds AI-powered behavioral baselines; anomaly alerts fire when deviations occur. Alert on any Critical or High severity deviations. Investigate and either acknowledge (legitimate change) or escalate (potential threat). Track unresolved deviations.",
              "z": "Deviation event timeline; baseline status dashboard; unresolved deviation count; severity distribution.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"ANOMALY\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n**Cyber Vision:** Create baselines for critical production zones; enable monitoring mode. **Nozomi:** Guardian automatically builds AI-powered behavioral baselines; anomaly alerts fire when deviations occur. Alert on any Critical or High severity deviations. Investigate and either acknowledge (legitimate change) or escalate (potential threat). Track unresolved deviations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" type_id=\"ANOMALY*\"\n| eval severity_label=case(risk>=8, \"Critical\", risk>=6, \"High\", risk>=4, \"Medium\", risk<4, \"Low\")\n| stats count as total_deviations by severity_label, name, zone, _time\n| sort -severity_label\n| table _time, severity_label, name, zone, total_deviations\n```\n\nUnderstanding this SPL\n\n**Baseline Deviation Detection** — OT security platforms baseline normal network behavior — which devices talk to which, over what protocols, at what frequency. Deviations from baseline trigger alerts, detecting unauthorized communication changes, new data flows, or behavioral shifts that could indicate compromise or misconfiguration.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"ANOMALY\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **severity_label** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by severity_label, name, zone, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Baseline Deviation Detection**): table _time, severity_label, name, zone, total_deviations\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Baseline Deviation Detection** — OT security platforms baseline normal network behavior — which devices talk to which, over what protocols, at what frequency. 
Deviations from baseline trigger alerts, detecting unauthorized communication changes, new data flows, or behavioral shifts that could indicate compromise or misconfiguration.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"ANOMALY\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Deviation event timeline; baseline status dashboard; unresolved deviation count; severity distribution.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for traffic and behavior that drifts from learned baselines so a quiet insider-style change is harder to hide.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change",
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.6",
              "n": "Snort IDS Threat Detection on OT Networks",
              "c": "critical",
              "f": "intermediate",
              "v": "OT security platforms include built-in IDS engines with threat intelligence updates to detect known threats — malware, C2 traffic, exploit attempts — crossing into OT networks. This provides signature-based threat detection without deploying separate IDS appliances in sensitive OT environments.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"SIGN\")",
              "q": "index=nozomi sourcetype=\"nozomi:alert\" type_id=\"SIGN*\"\n| eval severity_label=case(risk>=8, \"Critical\", risk>=6, \"High\", risk>=4, \"Medium\", risk<4, \"Low\")\n| stats count as hits, values(name) as signatures, dc(dst_ip) as targets by src_ip, severity_label\n| where severity_label IN (\"Critical\", \"High\")\n| sort -hits\n| table src_ip, severity_label, hits, targets, signatures",
              "m": "**Cyber Vision:** Enable Snort IDS on sensors (4GB RAM required); configure Talos subscription for weekly rule updates. **Nozomi:** Guardian includes built-in threat intelligence with Nozomi Threat Intelligence updates. Both platforms provide signature-based IDS for OT. Forward events to Splunk. Correlate with IT security events in Splunk ES for unified threat detection. Alert SOC on Critical/High severity IDS hits targeting OT assets.",
              "z": "IDS alert timeline; top source IPs table; signature hit frequency chart; OT target heatmap.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Intrusion Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"SIGN\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n**Cyber Vision:** Enable Snort IDS on sensors (4GB RAM required); configure Talos subscription for weekly rule updates. **Nozomi:** Guardian includes built-in threat intelligence with Nozomi Threat Intelligence updates. Both platforms provide signature-based IDS for OT. Forward events to Splunk. Correlate with IT security events in Splunk ES for unified threat detection. Alert SOC on Critical/High severity IDS hits targeting OT assets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" type_id=\"SIGN*\"\n| eval severity_label=case(risk>=8, \"Critical\", risk>=6, \"High\", risk>=4, \"Medium\", risk<4, \"Low\")\n| stats count as hits, values(name) as signatures, dc(dst_ip) as targets by src_ip, severity_label\n| where severity_label IN (\"Critical\", \"High\")\n| sort -hits\n| table src_ip, severity_label, hits, targets, signatures\n```\n\nUnderstanding this SPL\n\n**Snort IDS Threat Detection on OT Networks** — OT security platforms include built-in IDS engines with threat intelligence updates to detect known threats — malware, C2 traffic, exploit attempts — crossing into OT networks. This provides signature-based threat detection without deploying separate IDS appliances in sensitive OT environments.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"SIGN\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:alert\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **severity_label** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by src_ip, severity_label** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where severity_label IN (\"Critical\", \"High\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Snort IDS Threat Detection on OT Networks**): table src_ip, severity_label, hits, targets, signatures\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(IDS_Attacks.dest) as agg_value from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Snort IDS Threat Detection on OT Networks** — OT security platforms include built-in IDS engines with threat intelligence updates to detect known threats — malware, C2 traffic, exploit attempts — crossing into OT networks. This provides signature-based threat detection without deploying separate IDS appliances in sensitive OT environments.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"SIGN\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: IDS alert timeline; top source IPs table; signature hit frequency chart; OT target heatmap.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use signature-based detection in OT so known bad patterns in front of our process networks surface in one place with context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t dc(IDS_Attacks.dest) as agg_value from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src | sort - agg_value",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.7",
              "n": "PLC Program Download/Upload Detection",
              "c": "critical",
              "f": "beginner",
              "v": "Unauthorized PLC program changes can alter physical processes with safety implications. OT security platforms detect program download and upload events across industrial protocols (EtherNet/IP, S7, Modbus, Profinet) and generate Critical-severity alerts. Every program change must be verified against authorized change windows.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-ENGINEERING\")",
              "q": "index=nozomi sourcetype=\"nozomi:alert\" type_id IN (\"PROTOCOL-ENGINEERING-WRITE\", \"PROTOCOL-ENGINEERING-DOWNLOAD\", \"PROTOCOL-ENGINEERING-UPLOAD\")\n| eval hour=strftime(_time, \"%H\"), dow=strftime(_time, \"%u\")\n| eval outside_change_window=if((hour<6 OR hour>18) OR dow>5, \"YES\", \"No\")\n| table _time, name, src_ip, dst_ip, type_id, zone, outside_change_window\n| sort -_time",
              "m": "Both platforms detect PLC program changes across industrial protocols. Alert on all program download/upload events. Cross-reference with change management system to validate authorized changes. Flag events outside approved maintenance windows. Require investigation and sign-off for every program change.",
              "z": "Program change timeline; authorized vs unauthorized change chart; target PLC table; change window compliance gauge.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-ENGINEERING\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms detect PLC program changes across industrial protocols. Alert on all program download/upload events. Cross-reference with change management system to validate authorized changes. Flag events outside approved maintenance windows. Require investigation and sign-off for every program change.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" type_id IN (\"PROTOCOL-ENGINEERING-WRITE\", \"PROTOCOL-ENGINEERING-DOWNLOAD\", \"PROTOCOL-ENGINEERING-UPLOAD\")\n| eval hour=strftime(_time, \"%H\"), dow=strftime(_time, \"%u\")\n| eval outside_change_window=if((hour<6 OR hour>18) OR dow>5, \"YES\", \"No\")\n| table _time, name, src_ip, dst_ip, type_id, zone, outside_change_window\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**PLC Program Download/Upload Detection** — Unauthorized PLC program changes can alter physical processes with safety implications. OT security platforms detect program download and upload events across industrial protocols (EtherNet/IP, S7, Modbus, Profinet) and generate Critical-severity alerts. Every program change must be verified against authorized change windows.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-ENGINEERING\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **outside_change_window** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **PLC Program Download/Upload Detection**): table _time, name, src_ip, dst_ip, type_id, zone, outside_change_window\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**PLC Program Download/Upload Detection** — Unauthorized PLC program changes can alter physical processes with safety implications. 
OT security platforms detect program download and upload events across industrial protocols (EtherNet/IP, S7, Modbus, Profinet) and generate Critical-severity alerts. Every program change must be verified against authorized change windows.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-ENGINEERING\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Program change timeline; authorized vs unauthorized change chart; target PLC table; change window compliance gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log controller program download and upload so we can line up code changes to tickets and windows.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change",
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 45,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.8",
              "n": "Controller Firmware Activation Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Firmware activation on a controller changes the software running the physical process. Unauthorized firmware changes are a key indicator of advanced OT attacks (e.g., Stuxnet-style). OT security platforms detect firmware activation events across supported industrial protocols and generate Critical-severity alerts.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-ENGINEERING\")",
              "q": "index=nozomi sourcetype=\"nozomi:alert\" type_id=\"PROTOCOL-ENGINEERING-FIRMWARE*\"\n| table _time, name, src_ip, dst_ip, zone, risk\n| sort -_time",
              "m": "Both platforms detect firmware activation events on controllers. Alert immediately — this is always high-priority. Cross-reference with approved change records. Investigate source and target. Validate firmware version matches approved versions.",
              "z": "Firmware activation event log; target controller details; change authorization correlation.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-ENGINEERING\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms detect firmware activation events on controllers. Alert immediately — this is always high-priority. Cross-reference with approved change records. Investigate source and target. Validate firmware version matches approved versions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" type_id=\"PROTOCOL-ENGINEERING-FIRMWARE*\"\n| table _time, name, src_ip, dst_ip, zone, risk\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Controller Firmware Activation Monitoring** — Firmware activation on a controller changes the software running the physical process. Unauthorized firmware changes are a key indicator of advanced OT attacks (e.g., Stuxnet-style). OT security platforms detect firmware activation events across supported industrial protocols and generate Critical-severity alerts.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-ENGINEERING\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Controller Firmware Activation Monitoring**): table _time, name, src_ip, dst_ip, zone, risk\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Controller Firmware Activation Monitoring** — Firmware activation on a controller changes the software running the physical process. Unauthorized firmware changes are a key indicator of advanced OT attacks (e.g., Stuxnet-style). OT security platforms detect firmware activation events across supported industrial protocols and generate Critical-severity alerts.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-ENGINEERING\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Firmware activation event log; target controller details; change authorization correlation.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log firmware and image activation on controllers so a software change in the field is always visible to security and ops.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 45,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.9",
              "n": "Forced Variable Detection in OT Processes",
              "c": "critical",
              "f": "intermediate",
              "v": "Forcing a variable in a PLC overrides the normal program logic — the variable holds a fixed value regardless of what the control program calculates. While sometimes used legitimately during maintenance, forced variables left active in production can mask sensor readings and bypass safety logic. OT security platforms detect forced variable events with variable name and value.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:variable`",
              "q": "index=nozomi sourcetype=\"nozomi:variable\" status=\"forced\"\n| table _time, node_id, variable_name, value, zone\n| sort -_time",
              "m": "**Cyber Vision:** Detects `flow_forced_variable` events via syslog. **Nozomi:** Guardian tracks all process variables via the `nozomi:variable` sourcetype, including forced status. Alert on all forced variable events. Verify against active maintenance work orders. Track duration — any that remain active longer than the maintenance window must be investigated.",
              "z": "Forced variable event log; active forces table; force duration tracker; variable name word cloud.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:variable`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n**Cyber Vision:** Detects `flow_forced_variable` events via syslog. **Nozomi:** Guardian tracks all process variables via the `nozomi:variable` sourcetype, including forced status. Alert on all forced variable events. Verify against active maintenance work orders. Track duration — any that remain active longer than the maintenance window must be investigated.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:variable\" status=\"forced\"\n| table _time, node_id, variable_name, value, zone\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Forced Variable Detection in OT Processes** — Forcing a variable in a PLC overrides the normal program logic — the variable holds a fixed value regardless of what the control program calculates. While sometimes used legitimately during maintenance, forced variables left active in production can mask sensor readings and bypass safety logic. OT security platforms detect forced variable events with variable name and value.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:variable`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:variable. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:variable\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Forced Variable Detection in OT Processes**): table _time, node_id, variable_name, value, zone\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Forced Variable Detection in OT Processes** — Forcing a variable in a PLC overrides the normal program logic — the variable holds a fixed value regardless of what the control program calculates. While sometimes used legitimately during maintenance, forced variables left active in production can mask sensor readings and bypass safety logic. OT security platforms detect forced variable events with variable name and value.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:variable`. 
**App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Forced variable event log; active forces table; force duration tracker; variable name word cloud.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log forced and overridden variables in the process so a human or tool pinning a value is never invisible.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change",
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.10",
              "n": "Control Action Monitoring on Industrial Assets",
              "c": "high",
              "f": "intermediate",
              "v": "OT security platforms track control action events when a control system modifies process variables — setpoint changes, valve commands, motor start/stop. Monitoring these actions detects unauthorized process manipulation and provides an audit trail for root cause analysis of production incidents.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:variable`, `sourcetype=nozomi:link_events`",
              "q": "index=nozomi sourcetype=\"nozomi:variable\"\n| bin _time span=1h\n| stats count as changes, dc(variable_name) as variables_changed by node_id, zone, _time\n| where changes > 50\n| table _time, node_id, zone, changes, variables_changed\n| sort -changes",
              "m": "Both platforms track process variable changes. Baseline normal control action volume per source-target pair. Alert on spikes exceeding 2x baseline (mass parameter changes could indicate unauthorized batch modifications). Track who is making changes and which controllers are targets. Correlate with operator shift schedules.",
              "z": "Control action volume timeline; source-target relationship map; process change audit log.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:variable`, `sourcetype=nozomi:link_events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms track process variable changes. Baseline normal control action volume per source-target pair. Alert on spikes exceeding 2x baseline (mass parameter changes could indicate unauthorized batch modifications). Track who is making changes and which controllers are targets. Correlate with operator shift schedules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:variable\"\n| bin _time span=1h\n| stats count as changes, dc(variable_name) as variables_changed by node_id, zone, _time\n| where changes > 50\n| table _time, node_id, zone, changes, variables_changed\n| sort -changes\n```\n\nUnderstanding this SPL\n\n**Control Action Monitoring on Industrial Assets** — OT security platforms track control action events when a control system modifies process variables — setpoint changes, valve commands, motor start/stop. Monitoring these actions detects unauthorized process manipulation and provides an audit trail for root cause analysis of production incidents.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:variable`, `sourcetype=nozomi:link_events`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:variable. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:variable\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by node_id, zone, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where changes > 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Control Action Monitoring on Industrial Assets**): table _time, node_id, zone, changes, variables_changed\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Control Action Monitoring on Industrial Assets** — OT security platforms track control action events when a control system modifies process variables — setpoint changes, valve commands, motor start/stop. Monitoring these actions detects unauthorized process manipulation and provides an audit trail for root cause analysis of production incidents.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:variable`, `sourcetype=nozomi:link_events`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Control action volume timeline; source-target relationship map; process change audit log.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log control actions and writes on industrial assets so we can tell intended automation from something off-script.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user span=1h | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
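The implementation notes for use case 14.9.10 suggest alerting on control action volume above roughly 2x the normal baseline per source, while the documented search uses a fixed threshold of 50 changes per hour. A minimal baseline-comparison sketch, reusing the same index, sourcetype, and `node_id`/`zone` fields; the 30-day lookback and the 2x multiplier are assumptions to tune per site:

```spl
index=nozomi sourcetype="nozomi:variable" earliest=-30d@h latest=now
| bin _time span=1h
| stats count as changes by node_id, zone, _time
| eventstats avg(changes) as baseline_changes by node_id, zone
| where _time >= relative_time(now(), "-1h@h") AND changes > 2 * baseline_changes
| table _time, node_id, zone, changes, baseline_changes
```

Because `eventstats` folds the most recent hour into the average, the baseline is slightly conservative; a summary-indexed baseline would be more precise, but this keeps the sketch self-contained.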
            {
              "i": "14.9.11",
              "n": "Controller Mode Change Detection",
              "c": "critical",
              "f": "beginner",
              "v": "OT security platforms detect when controllers are placed into online, offline, force, or run/stop modes — each a Critical-severity event. A controller taken offline stops controlling its physical process. An unauthorized mode change could halt production, disable safety systems, or prepare for a more destructive attack.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-ENGINEERING\")",
              "q": "index=nozomi sourcetype=\"nozomi:alert\" type_id IN (\"PROTOCOL-ENGINEERING-MODE*\", \"PROTOCOL-ENGINEERING-RUN*\", \"PROTOCOL-ENGINEERING-STOP*\")\n| table _time, name, src_ip, dst_ip, type_id, zone, risk\n| sort -_time",
              "m": "Both platforms detect controller mode changes across industrial protocols. Alert immediately — CPU Stop and Offline events are highest priority as they halt physical processes. Cross-reference with scheduled maintenance. Track frequency of mode changes per controller. Investigate any mode change from unexpected source IPs.",
              "z": "Mode change timeline with color-coded severity; controller status dashboard; unauthorized source detection.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-ENGINEERING\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms detect controller mode changes across industrial protocols. Alert immediately — CPU Stop and Offline events are highest priority as they halt physical processes. Cross-reference with scheduled maintenance. Track frequency of mode changes per controller. Investigate any mode change from unexpected source IPs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" type_id IN (\"PROTOCOL-ENGINEERING-MODE*\", \"PROTOCOL-ENGINEERING-RUN*\", \"PROTOCOL-ENGINEERING-STOP*\")\n| table _time, name, src_ip, dst_ip, type_id, zone, risk\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Controller Mode Change Detection** — OT security platforms detect when controllers are placed into online, offline, force, or run/stop modes — each a Critical-severity event. A controller taken offline stops controlling its physical process. An unauthorized mode change could halt production, disable safety systems, or prepare for a more destructive attack.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-ENGINEERING\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Controller Mode Change Detection**): table _time, name, src_ip, dst_ip, type_id, zone, risk\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Controller Mode Change Detection** — OT security platforms detect when controllers are placed into online, offline, force, or run/stop modes — each a Critical-severity event. A controller taken offline stops controlling its physical process. An unauthorized mode change could halt production, disable safety systems, or prepare for a more destructive attack.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-ENGINEERING\"). 
**App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Mode change timeline with color-coded severity; controller status dashboard; unauthorized source detection.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log mode and run-state changes on controllers so run, program, and remote do not get mixed up without a trail.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change",
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 45,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
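Use case 14.9.11 recommends investigating any mode change from an unexpected source IP. A minimal enrichment sketch, assuming a lookup named `approved_engineering_workstations` with a `src_ip` column is created and maintained first (the lookup name is a placeholder, not part of either add-on):

```spl
index=nozomi sourcetype="nozomi:alert" type_id IN ("PROTOCOL-ENGINEERING-MODE*", "PROTOCOL-ENGINEERING-RUN*", "PROTOCOL-ENGINEERING-STOP*")
| lookup approved_engineering_workstations src_ip OUTPUT src_ip as approved_src
| eval source_status=if(isnull(approved_src), "unexpected source", "approved workstation")
| table _time, name, src_ip, dst_ip, type_id, zone, risk, source_status
| sort -_time
```

Alerting only where `source_status="unexpected source"` keeps volume manageable, while the full table still feeds the mode change timeline visualization.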
            {
              "i": "14.9.12",
              "n": "New Communication Flow Detection",
              "c": "high",
              "f": "beginner",
              "v": "In a stable OT environment, the set of communication flows between devices should be predictable. OT security platforms generate alerts when a previously unseen protocol flow appears between two components. New FTP, HTTP, or SSH flows in the control network may indicate lateral movement, data exfiltration, or unauthorized remote access tools.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:link_events`, `sourcetype=nozomi:link`",
              "q": "index=nozomi sourcetype=\"nozomi:link_events\" status=\"new\"\n| bin _time span=1d\n| stats count as new_flows, dc(protocols) as protocol_count, values(protocols) as protocol_list by _time\n| eval concern=if(new_flows > 10, \"High — investigate burst of new flows\", \"Normal\")\n| table _time, new_flows, protocol_count, protocol_list, concern",
              "m": "Both platforms detect new communication flows after an initial learning period. Alert on new flows. Prioritize IT protocols appearing in OT zones (FTP, SSH, HTTP, RDP, SMB). Correlate with baseline deviations. Investigate source and destination to determine if the flow is legitimate.",
              "z": "New flow event timeline; protocol distribution chart; source-destination network graph.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:link_events`, `sourcetype=nozomi:link`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms detect new communication flows after an initial learning period. Alert on new flows. Prioritize IT protocols appearing in OT zones (FTP, SSH, HTTP, RDP, SMB). Correlate with baseline deviations. Investigate source and destination to determine if the flow is legitimate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:link_events\" status=\"new\"\n| bin _time span=1d\n| stats count as new_flows, dc(protocols) as protocol_count, values(protocols) as protocol_list by _time\n| eval concern=if(new_flows > 10, \"High — investigate burst of new flows\", \"Normal\")\n| table _time, new_flows, protocol_count, protocol_list, concern\n```\n\nUnderstanding this SPL\n\n**New Communication Flow Detection** — In a stable OT environment, the set of communication flows between devices should be predictable. OT security platforms generate alerts when a previously unseen protocol flow appears between two components. New FTP, HTTP, or SSH flows in the control network may indicate lateral movement, data exfiltration, or unauthorized remote access tools.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:link_events`, `sourcetype=nozomi:link`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:link_events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:link_events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **concern** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **New Communication Flow Detection**): table _time, new_flows, protocol_count, protocol_list, concern\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**New Communication Flow Detection** — In a stable OT environment, the set of communication flows between devices should be predictable. 
OT security platforms generate alerts when a previously unseen protocol flow appears between two components. New FTP, HTTP, or SSH flows in the control network may indicate lateral movement, data exfiltration, or unauthorized remote access tools.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:link_events`, `sourcetype=nozomi:link`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: New flow event timeline; protocol distribution chart; source-destination network graph.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch new host-to-host flows in OT so a new path between machines is a first-class event, not a footnote in a pcap.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1d | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
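Use case 14.9.12 prioritizes IT protocols (FTP, SSH, HTTP, RDP, SMB) appearing as new flows in OT zones. A minimal sketch of that filter, assuming the `protocols` field carries the protocol name as in the documented search and that `zone` is populated on `nozomi:link_events` (drop it from the `by` clause if it is not):

```spl
index=nozomi sourcetype="nozomi:link_events" status="new"
    (protocols="*ftp*" OR protocols="*ssh*" OR protocols="*http*" OR protocols="*rdp*" OR protocols="*smb*")
| bin _time span=1d
| stats count as it_protocol_flows, values(protocols) as protocol_list by zone, _time
| sort -it_protocol_flows
```

Note that the `*http*` wildcard also matches HTTPS; tighten the terms if encrypted web traffic is expected in the zone.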
            {
              "i": "14.9.13",
              "n": "Protocol Exception Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Protocol exceptions (`illegal-function`, `invalid-data-address`, malformed packets) detected by OT DPI platforms indicate either misconfigured devices, faulty communication, or active exploitation attempts. An attacker probing Modbus function codes will trigger `illegal-function` exceptions. Repeated exceptions from a single source warrant investigation.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL\")",
              "q": "index=nozomi sourcetype=\"nozomi:alert\" type_id=\"PROTOCOL*\"\n| stats count as exceptions, dc(type_id) as exception_types, values(type_id) as exception_list by src_ip, dst_ip, zone\n| where exceptions > 5\n| sort -exceptions\n| table src_ip, dst_ip, exceptions, exception_types, exception_list, zone",
              "m": "Both platforms detect protocol-level exceptions via DPI. Baseline normal exception rates per flow. Alert on sudden spikes (>5 exceptions from single source in short window). Differentiate between known interoperability issues and new attack patterns. Feed into SOC correlation rules.",
              "z": "Exception volume timeline; top exception source table; exception type distribution; attack pattern detection dashboard.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Intrusion Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms detect protocol-level exceptions via DPI. Baseline normal exception rates per flow. Alert on sudden spikes (>5 exceptions from single source in short window). Differentiate between known interoperability issues and new attack patterns. Feed into SOC correlation rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" type_id=\"PROTOCOL*\"\n| stats count as exceptions, dc(type_id) as exception_types, values(type_id) as exception_list by src_ip, dst_ip, zone\n| where exceptions > 5\n| sort -exceptions\n| table src_ip, dst_ip, exceptions, exception_types, exception_list, zone\n```\n\nUnderstanding this SPL\n\n**Protocol Exception Monitoring** — Protocol exceptions (`illegal-function`, `invalid-data-address`, malformed packets) detected by OT DPI platforms indicate either misconfigured devices, faulty communication, or active exploitation attempts. An attacker probing Modbus function codes will trigger `illegal-function` exceptions. Repeated exceptions from a single source warrant investigation.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:alert\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_ip, dst_ip, zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where exceptions > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Protocol Exception Monitoring**): table src_ip, dst_ip, exceptions, exception_types, exception_list, zone\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Protocol Exception Monitoring** — Protocol exceptions (`illegal-function`, `invalid-data-address`, malformed packets) detected by OT DPI platforms indicate either misconfigured devices, faulty communication, or active exploitation attempts. An attacker probing Modbus function codes will trigger `illegal-function` exceptions. Repeated exceptions from a single source warrant investigation.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Exception volume timeline; top exception source table; exception type distribution; attack pattern detection dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log protocol errors and edge cases in OT so odd framing or half-implemented stacks do not get buried.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
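The guidance for use case 14.9.13 is to alert on more than 5 exceptions from a single source within a short window, whereas the documented search counts over the whole search period. A minimal windowed variant, using an assumed 15-minute bucket that should be tuned to the environment:

```spl
index=nozomi sourcetype="nozomi:alert" type_id="PROTOCOL*"
| bin _time span=15m
| stats count as exceptions, values(type_id) as exception_list by src_ip, dst_ip, zone, _time
| where exceptions > 5
| sort -_time, -exceptions
| table _time, src_ip, dst_ip, zone, exceptions, exception_list
```

Scheduling this every 15 minutes over the most recent window turns it into the spike alert described in the implementation notes.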
            {
              "i": "14.9.14",
              "n": "OT Device Authentication Failure Detection",
              "c": "high",
              "f": "beginner",
              "v": "OT security platforms detect login failure events including the number of failed authentication attempts and the protocol used. Brute-force login attempts against HMIs, engineering workstations, or web-enabled controllers indicate credential-based attacks that could lead to unauthorized process control.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"AUTH\")",
              "q": "index=nozomi sourcetype=\"nozomi:alert\" type_id=\"AUTH*\"\n| stats count as total_failures by src_ip, dst_ip, zone\n| where total_failures >= 3\n| sort -total_failures\n| table src_ip, dst_ip, total_failures, zone",
              "m": "Both platforms detect authentication failures on OT devices. Alert on repeated failures, especially against critical OT assets (PLCs, RTUs, SIS controllers). Correlate source IP with known engineering workstations. Unknown sources attempting authentication are high priority. Feed into Splunk ES notable events.",
              "z": "Failed auth timeline; top attack source table; target asset vulnerability assessment; brute force detection dashboard.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"AUTH\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms detect authentication failures on OT devices. Alert on repeated failures, especially against critical OT assets (PLCs, RTUs, SIS controllers). Correlate source IP with known engineering workstations. Unknown sources attempting authentication are high priority. Feed into Splunk ES notable events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" type_id=\"AUTH*\"\n| stats count as total_failures by src_ip, dst_ip, zone\n| where total_failures >= 3\n| sort -total_failures\n| table src_ip, dst_ip, total_failures, zone\n```\n\nUnderstanding this SPL\n\n**OT Device Authentication Failure Detection** — OT security platforms detect login failure events including the number of failed authentication attempts and the protocol used. Brute-force login attempts against HMIs, engineering workstations, or web-enabled controllers indicate credential-based attacks that could lead to unauthorized process control.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"AUTH\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_ip, dst_ip, zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where total_failures >= 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **OT Device Authentication Failure Detection**): table src_ip, dst_ip, total_failures, zone\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OT Device Authentication Failure Detection** — OT security platforms detect login failure events including the number of failed authentication attempts and the protocol used. 
Brute-force login attempts against HMIs, engineering workstations, or web-enabled controllers indicate credential-based attacks that could lead to unauthorized process control.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"AUTH\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Failed auth timeline; top attack source table; target asset vulnerability assessment; brute force detection dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log failed authentications in OT so weak credentials and account lockouts are visible next to the same asset data.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of API RP 1164 5.3 (Access control) — Splunk UC-14.9.14: OT Device Authentication Failure Detection.",
                  "ea": "Saved search 'UC-14.9.14' running on sourcetype=cisco:cybervision:syslog · Nozomi: sourcetype=nozomi:alert (type_id=\"AUTH\"), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.api.org/products-and-services/standards/important-standards-announcements/standard-1164"
                },
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "FR 6.2",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of IEC 62443 FR 6.2 (Continuous monitoring) — Splunk UC-14.9.14: OT Device Authentication Failure Detection.",
                  "ea": "Saved search 'UC-14.9.14' running on sourcetype=cisco:cybervision:syslog · Nozomi: sourcetype=nozomi:alert (type_id=\"AUTH\"), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.isa.org/standards-and-publications/isa-standards/isa-iec-62443-series-of-standards"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R4",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of NERC CIP CIP-007-6 R4 (Security event monitoring) — Splunk UC-14.9.14: OT Device Authentication Failure Detection.",
                  "ea": "Saved search 'UC-14.9.14' running on sourcetype=cisco:cybervision:syslog · Nozomi: sourcetype=nozomi:alert (type_id=\"AUTH\"), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                }
              ],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
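Use case 14.9.14 asks to correlate the failing source IP with known engineering workstations and to treat unknown sources as high priority. A minimal sketch, reusing the hypothetical `approved_engineering_workstations` lookup described above (create it before running; the name is a placeholder):

```spl
index=nozomi sourcetype="nozomi:alert" type_id="AUTH*"
| stats count as total_failures by src_ip, dst_ip, zone
| where total_failures >= 3
| lookup approved_engineering_workstations src_ip OUTPUT src_ip as known_src
| eval priority=if(isnull(known_src), "high", "medium")
| sort -total_failures
| table src_ip, dst_ip, total_failures, zone, priority
```

The `priority` field can be mapped to severity or urgency when forwarding the result as a Splunk ES notable event.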
            {
              "i": "14.9.15",
              "n": "Admin Connection Detection to ICS Assets",
              "c": "high",
              "f": "beginner",
              "v": "OT security platforms detect administrative connections to industrial components — engineering sessions to PLCs, configuration access to field devices. These events identify who is connecting to which controller, enabling detection of unauthorized engineering access that could modify process logic.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:session`, `sourcetype=nozomi:link`",
              "q": "index=nozomi sourcetype=\"nozomi:session\" session_type=\"engineering\"\n| eval hour=strftime(_time, \"%H\"), dow=strftime(_time, \"%u\")\n| eval outside_hours=if((hour<7 OR hour>18) OR dow>5, \"After Hours\", \"Business Hours\")\n| stats count as connections by src_ip, dst_ip, outside_hours, zone\n| where outside_hours=\"After Hours\" OR connections > 10\n| sort -connections\n| table src_ip, dst_ip, connections, outside_hours, zone",
              "m": "Both platforms detect engineering/admin connections to industrial assets. Baseline approved engineering workstation IPs. Alert on connections from unapproved sources or outside business hours. Track connection frequency per engineer. Correlate with change management tickets.",
              "z": "Admin connection timeline; source-destination network map; after-hours connection alerts; approved vs unapproved source comparison.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:session`, `sourcetype=nozomi:link`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms detect engineering/admin connections to industrial assets. Baseline approved engineering workstation IPs. Alert on connections from unapproved sources or outside business hours. Track connection frequency per engineer. Correlate with change management tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:session\" session_type=\"engineering\"\n| eval hour=strftime(_time, \"%H\"), dow=strftime(_time, \"%u\")\n| eval outside_hours=if((hour<7 OR hour>18) OR dow>5, \"After Hours\", \"Business Hours\")\n| stats count as connections by src_ip, dst_ip, outside_hours, zone\n| where outside_hours=\"After Hours\" OR connections > 10\n| sort -connections\n| table src_ip, dst_ip, connections, outside_hours, zone\n```\n\nUnderstanding this SPL\n\n**Admin Connection Detection to ICS Assets** — OT security platforms detect administrative connections to industrial components — engineering sessions to PLCs, configuration access to field devices. These events identify who is connecting to which controller, enabling detection of unauthorized engineering access that could modify process logic.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:session`, `sourcetype=nozomi:link`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:session\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **outside_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by src_ip, dst_ip, outside_hours, zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where outside_hours=\"After Hours\" OR connections > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Admin Connection Detection to ICS Assets**): table src_ip, dst_ip, connections, outside_hours, zone\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Admin Connection Detection to ICS Assets** — OT security platforms detect administrative connections to industrial components — engineering sessions to PLCs, configuration access to field devices. These events identify who is connecting to which controller, enabling detection of unauthorized engineering access that could modify process logic.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:session`, `sourcetype=nozomi:link`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Admin connection timeline; source-destination network map; after-hours connection alerts; approved vs unapproved source comparison.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log administrative and maintenance sessions to field gear so ad-hoc access is traceable, not a black box.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
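Use case 14.9.15 baselines approved engineering workstation IPs and alerts on connections from unapproved sources; the documented search covers only the after-hours and volume angle. A minimal unapproved-source sketch, again assuming the hypothetical `approved_engineering_workstations` lookup exists; the `src_ip`, `dst_ip`, and `zone` fields follow the documented search:

```spl
index=nozomi sourcetype="nozomi:session" session_type="engineering"
| lookup approved_engineering_workstations src_ip OUTPUT src_ip as approved_src
| where isnull(approved_src)
| stats count as connections, values(dst_ip) as controllers_accessed, min(_time) as first_seen, max(_time) as last_seen by src_ip, zone
| convert ctime(first_seen) ctime(last_seen)
| sort -connections
```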
            {
              "i": "14.9.16",
              "n": "Port Scan Detection on OT Networks",
              "c": "critical",
              "f": "beginner",
              "v": "Port scanning is a reconnaissance technique used by attackers to map OT network topology and identify exploitable services. OT security platforms detect port scan events with scanner and target component identification. Port scans in OT networks are almost never legitimate and warrant immediate investigation.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"SCAN\")",
              "q": "index=nozomi sourcetype=\"nozomi:alert\" type_id=\"SCAN*\"\n| table _time, name, src_ip, dst_ip, zone, risk\n| sort -_time",
              "m": "Both platforms detect port scanning in OT networks. Alert immediately on any port scan event. Identify scanner source — is it an authorized vulnerability scanner or unknown? Correlate with network baseline. Block scanning source at network boundary if unauthorized. Escalate to SOC and OT security team.",
              "z": "Port scan alert log; scanner source identification; target analysis; network map overlay.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Intrusion Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"SCAN\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms detect port scanning in OT networks. Alert immediately on any port scan event. Identify scanner source — is it an authorized vulnerability scanner or unknown? Correlate with network baseline. Block scanning source at network boundary if unauthorized. Escalate to SOC and OT security team.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" type_id=\"SCAN*\"\n| table _time, name, src_ip, dst_ip, zone, risk\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Port Scan Detection on OT Networks** — Port scanning is a reconnaissance technique used by attackers to map OT network topology and identify exploitable services. OT security platforms detect port scan events with scanner and target component identification. Port scans in OT networks are almost never legitimate and warrant immediate investigation.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"SCAN\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Port Scan Detection on OT Networks**): table _time, name, src_ip, dst_ip, zone, risk\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Port Scan Detection on OT Networks** — Port scanning is a reconnaissance technique used by attackers to map OT network topology and identify exploitable services. OT security platforms detect port scan events with scanner and target component identification. Port scans in OT networks are almost never legitimate and warrant immediate investigation.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"SCAN\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Port scan alert log; scanner source identification; target analysis; network map overlay.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for port-scan-like behavior in OT so an IT tool or bad actor poking the segment shows up with context.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection",
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.17",
              "n": "Weak Encryption Detection in OT Communications",
              "c": "medium",
              "f": "beginner",
              "v": "Many OT protocols use weak or no encryption. OT security platforms identify flows using deprecated TLS versions, weak ciphers, or unencrypted protocols where encryption should be used. This supports IEC 62443 compliance and prioritizes protocol hardening efforts.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-CIPHER\")",
              "q": "index=nozomi sourcetype=\"nozomi:alert\" type_id=\"PROTOCOL-CIPHER*\"\n| stats count as occurrences, dc(dst_ip) as affected_targets by src_ip, name, zone\n| sort -occurrences\n| table src_ip, name, occurrences, affected_targets, zone",
              "m": "Both platforms detect weak encryption in OT communications. Inventory all findings. Prioritize remediation by criticality of affected assets. Track progress toward encryption upgrade milestones. Report on IEC 62443 encryption compliance per zone.",
              "z": "Weak encryption finding table; affected asset count; compliance progress gauge; protocol breakdown.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-CIPHER\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms detect weak encryption in OT communications. Inventory all findings. Prioritize remediation by criticality of affected assets. Track progress toward encryption upgrade milestones. Report on IEC 62443 encryption compliance per zone.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" type_id=\"PROTOCOL-CIPHER*\"\n| stats count as occurrences, dc(dst_ip) as affected_targets by src_ip, name, zone\n| sort -occurrences\n| table src_ip, name, occurrences, affected_targets, zone\n```\n\nUnderstanding this SPL\n\n**Weak Encryption Detection in OT Communications** — Many OT protocols use weak or no encryption. OT security platforms identify flows using deprecated TLS versions, weak ciphers, or unencrypted protocols where encryption should be used. This supports IEC 62443 compliance and prioritizes protocol hardening efforts.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-CIPHER\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:alert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_ip, name, zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Weak Encryption Detection in OT Communications**): table src_ip, name, occurrences, affected_targets, zone\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Weak Encryption Detection in OT Communications** — Many OT protocols use weak or no encryption. OT security platforms identify flows using deprecated TLS versions, weak ciphers, or unencrypted protocols where encryption should be used. This supports IEC 62443 compliance and prioritizes protocol hardening efforts.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert` (type_id=\"PROTOCOL-CIPHER\"). 
**App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Weak encryption finding table; affected asset count; compliance progress gauge; protocol breakdown.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We catch weak or legacy encryption in OT so migration plans can be grounded in what is actually on the wire.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.18",
              "n": "SMB Protocol Activity in OT Networks",
              "c": "high",
              "f": "beginner",
              "v": "SMB (Server Message Block) traffic in OT networks is frequently associated with ransomware propagation (WannaCry, NotPetya) and lateral movement. OT security platforms detect SMB protocol events. Any SMB activity in pure control network segments is suspicious and may indicate an active threat.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:session` (protocol=\"smb\")",
              "q": "index=nozomi sourcetype=\"nozomi:session\" protocols=\"smb\"\n| stats count as smb_events, dc(dst_ip) as targets by src_ip, zone\n| sort -smb_events\n| table src_ip, smb_events, targets, zone",
              "m": "Both platforms detect SMB traffic via DPI. Map legitimate SMB usage (historian data transfer, Windows-based HMIs). Alert on SMB traffic to/from pure control devices (PLCs, RTUs, field devices) which should never use SMB. Correlate with IDS for known SMB exploit signatures. High priority for SOC investigation.",
              "z": "SMB activity timeline; source-target map; alert correlation with IDS; legitimate vs suspicious classification.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:session` (protocol=\"smb\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms detect SMB traffic via DPI. Map legitimate SMB usage (historian data transfer, Windows-based HMIs). Alert on SMB traffic to/from pure control devices (PLCs, RTUs, field devices) which should never use SMB. Correlate with IDS for known SMB exploit signatures. High priority for SOC investigation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:session\" protocols=\"smb\"\n| stats count as smb_events, dc(dst_ip) as targets by src_ip, zone\n| sort -smb_events\n| table src_ip, smb_events, targets, zone\n```\n\nUnderstanding this SPL\n\n**SMB Protocol Activity in OT Networks** — SMB (Server Message Block) traffic in OT networks is frequently associated with ransomware propagation (WannaCry, NotPetya) and lateral movement. OT security platforms detect SMB protocol events. Any SMB activity in pure control network segments is suspicious and may indicate an active threat.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:session` (protocol=\"smb\"). **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:session\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_ip, zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **SMB Protocol Activity in OT Networks**): table src_ip, smb_events, targets, zone\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SMB Protocol Activity in OT Networks** — SMB (Server Message Block) traffic in OT networks is frequently associated with ransomware propagation (WannaCry, NotPetya) and lateral movement. OT security platforms detect SMB protocol events. Any SMB activity in pure control network segments is suspicious and may indicate an active threat.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:session` (protocol=\"smb\"). 
**App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: SMB activity timeline; source-target map; alert correlation with IDS; legitimate vs suspicious classification.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log file and Windows-style share use in OT so out-of-profile file access does not look normal just because the network is small.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic",
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.19",
              "n": "Network Redundancy and HA Failover Events",
              "c": "high",
              "f": "intermediate",
              "v": "OT security platforms detect network redundancy failover events and router HA state changes across industrial protocols. Frequent failovers indicate network instability that could disrupt real-time control. Unexpected failovers may also indicate denial-of-service attacks or physical cable issues.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:link_events`",
              "q": "index=nozomi sourcetype=\"nozomi:link_events\" status IN (\"up\", \"down\", \"flapping\")\n| bin _time span=1h\n| stats count as failover_events by src_ip, dst_ip, status, zone, _time\n| where failover_events > 2\n| table _time, src_ip, dst_ip, status, failover_events, zone",
              "m": "Both platforms detect network redundancy and HA state changes. Baseline normal failover frequency (should be near zero in stable networks). Alert on any failover event. Multiple failovers in short succession (flapping) indicates a serious issue. Correlate with physical infrastructure monitoring.",
              "z": "Failover event timeline; network stability score; flapping device detection; HA state dashboard.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:link_events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms detect network redundancy and HA state changes. Baseline normal failover frequency (should be near zero in stable networks). Alert on any failover event. Multiple failovers in short succession (flapping) indicates a serious issue. Correlate with physical infrastructure monitoring.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:link_events\" status IN (\"up\", \"down\", \"flapping\")\n| bin _time span=1h\n| stats count as failover_events by src_ip, dst_ip, status, zone, _time\n| where failover_events > 2\n| table _time, src_ip, dst_ip, status, failover_events, zone\n```\n\nUnderstanding this SPL\n\n**Network Redundancy and HA Failover Events** — OT security platforms detect network redundancy failover events and router HA state changes across industrial protocols. Frequent failovers indicate network instability that could disrupt real-time control. Unexpected failovers may also indicate denial-of-service attacks or physical cable issues.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:link_events`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:link_events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:link_events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by src_ip, dst_ip, status, zone, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failover_events > 2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Network Redundancy and HA Failover Events**): table _time, src_ip, dst_ip, status, failover_events, zone\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network Redundancy and HA Failover Events** — OT security platforms detect network redundancy failover events and router HA state changes across industrial protocols. Frequent failovers indicate network instability that could disrupt real-time control. 
Unexpected failovers may also indicate denial-of-service attacks or physical cable issues.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:link_events`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Failover event timeline; network stability score; flapping device detection; HA state dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log redundant pair and failover events in OT so a silent switch to backup or a flap is not invisible to the team.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Network_Traffic",
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1h | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.20",
              "n": "Cyber Vision Sensor Health and Resource Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "OT security sensors and appliances need monitoring themselves — high CPU, memory, or disk usage events fire when resources exceed thresholds. Degraded sensors may miss traffic, creating blind spots in OT visibility.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:health`",
              "q": "index=nozomi sourcetype=\"nozomi:health\"\n| table _time, appliance_id, cpu_percent, memory_percent, disk_percent, version\n| where cpu_percent > 80 OR memory_percent > 80 OR disk_percent > 80\n| sort -cpu_percent",
              "m": "Both platforms expose sensor/appliance health metrics. **Cyber Vision:** Pre-filtered at 80% threshold via syslog. **Nozomi:** Guardian exposes health data via the `nozomi:health` sourcetype. Alert on high resource usage. Track trends per sensor. High CPU may indicate excessive traffic (DDoS, broadcast storm). Plan capacity upgrades or traffic optimization.",
              "z": "Sensor health dashboard; CPU/memory/disk gauges per sensor; resource trend lines; sensor fleet status map.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:health`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms expose sensor/appliance health metrics. **Cyber Vision:** Pre-filtered at 80% threshold via syslog. **Nozomi:** Guardian exposes health data via the `nozomi:health` sourcetype. Alert on high resource usage. Track trends per sensor. High CPU may indicate excessive traffic (DDoS, broadcast storm). Plan capacity upgrades or traffic optimization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:health\"\n| table _time, appliance_id, cpu_percent, memory_percent, disk_percent, version\n| where cpu_percent > 80 OR memory_percent > 80 OR disk_percent > 80\n| sort -cpu_percent\n```\n\nUnderstanding this SPL\n\n**Cyber Vision Sensor Health and Resource Monitoring** — OT security sensors and appliances need monitoring themselves — high CPU, memory, or disk usage events fire when resources exceed thresholds. Degraded sensors may miss traffic, creating blind spots in OT visibility.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:health`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Cyber Vision Sensor Health and Resource Monitoring**): table _time, appliance_id, cpu_percent, memory_percent, disk_percent, version\n• Filters the current rows with `where cpu_percent > 80 OR memory_percent > 80 OR disk_percent > 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.Memory by Performance.host | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cyber Vision Sensor Health and Resource Monitoring** — OT security sensors and appliances need monitoring themselves — high CPU, memory, or disk usage events fire when resources exceed thresholds. Degraded sensors may miss traffic, creating blind spots in OT visibility.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:health`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.Memory` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sensor health dashboard; CPU/memory/disk gauges per sensor; resource trend lines; sensor fleet status map.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track sensor and platform health in our OT security stack so a starving appliance does not create blind spots on the line.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.Memory by Performance.host | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.21",
              "n": "Cyber Vision Administration Audit Trail",
              "c": "medium",
              "f": "beginner",
              "v": "All administrative actions in OT security platforms — user logins, configuration changes, sensor management, database operations — are logged. Maintaining an audit trail of who did what and when supports regulatory compliance (IEC 62443, NERC CIP) and insider threat detection.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:health`",
              "q": "index=nozomi sourcetype=\"nozomi:health\" log_type=\"audit\"\n| stats count as events by user, action, appliance_id\n| sort -events\n| table user, action, appliance_id, events",
              "m": "Both platforms provide administrative audit trails. Forward all administration events to Splunk. Build compliance reports showing all administrative actions. Alert on high-severity admin events (system reboot, database restore, sensor deletion). Track user login patterns for anomaly detection.",
              "z": "Admin activity timeline; user action summary table; login pattern analysis; compliance audit report.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:health`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms provide administrative audit trails. Forward all administration events to Splunk. Build compliance reports showing all administrative actions. Alert on high-severity admin events (system reboot, database restore, sensor deletion). Track user login patterns for anomaly detection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:health\" log_type=\"audit\"\n| stats count as events by user, action, appliance_id\n| sort -events\n| table user, action, appliance_id, events\n```\n\nUnderstanding this SPL\n\n**Cyber Vision Administration Audit Trail** — All administrative actions in OT security platforms — user logins, configuration changes, sensor management, database operations — are logged. Maintaining an audit trail of who did what and when supports regulatory compliance (IEC 62443, NERC CIP) and insider threat detection.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:health`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, action, appliance_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Cyber Vision Administration Audit Trail**): table user, action, appliance_id, events\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cyber Vision Administration Audit Trail** — All administrative actions in OT security platforms — user logins, configuration changes, sensor management, database operations — are logged. Maintaining an audit trail of who did what and when supports regulatory compliance (IEC 62443, NERC CIP) and insider threat detection.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:health`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Admin activity timeline; user action summary table; login pattern analysis; compliance audit report.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We keep an audit of administration on our OT security platform so who changed what and when is not argued from memory.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Authentication",
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.22",
              "n": "IEC 62443 Zone and Conduit Compliance Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "IEC 62443 requires industrial networks to be segmented into security zones connected by controlled conduits. OT security platforms help control engineers define these zones and automatically detect cross-zone traffic that violates conduit policies. Monitoring zone compliance ensures segmentation remains effective over time.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:flow`, `sourcetype=cisco:cybervision:activity` · Nozomi: `sourcetype=nozomi:link`, `sourcetype=nozomi:session`",
              "q": "index=nozomi sourcetype=\"nozomi:link\"\n| lookup iec62443_conduit_policy.csv src_zone, dst_zone OUTPUTNEW allowed_protocols, conduit_status\n| where conduit_status!=\"approved\"\n| stats count as violations, values(protocols) as protocol_list by src_zone, dst_zone\n| table src_zone, dst_zone, violations, protocol_list",
              "m": "Both platforms support zone-based monitoring aligned with IEC 62443. Export zone definitions to a lookup table. Define approved conduits with allowed protocols. Match actual cross-zone flows against policy. Alert on any unapproved cross-zone communication. Generate compliance reports for audits.",
              "z": "Zone topology map; conduit compliance matrix; violation count per zone pair; compliance trend over time.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:flow`, `sourcetype=cisco:cybervision:activity` · Nozomi: `sourcetype=nozomi:link`, `sourcetype=nozomi:session`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms support zone-based monitoring aligned with IEC 62443. Export zone definitions to a lookup table. Define approved conduits with allowed protocols. Match actual cross-zone flows against policy. Alert on any unapproved cross-zone communication. Generate compliance reports for audits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:link\"\n| lookup iec62443_conduit_policy.csv src_zone, dst_zone OUTPUTNEW allowed_protocols, conduit_status\n| where conduit_status!=\"approved\"\n| stats count as violations, values(protocols) as protocol_list by src_zone, dst_zone\n| table src_zone, dst_zone, violations, protocol_list\n```\n\nUnderstanding this SPL\n\n**IEC 62443 Zone and Conduit Compliance Monitoring** — IEC 62443 requires industrial networks to be segmented into security zones connected by controlled conduits. OT security platforms help control engineers define these zones and automatically detect cross-zone traffic that violates conduit policies. Monitoring zone compliance ensures segmentation remains effective over time.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:flow`, `sourcetype=cisco:cybervision:activity` · Nozomi: `sourcetype=nozomi:link`, `sourcetype=nozomi:session`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:link. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:link\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events with `lookup iec62443_conduit_policy.csv`, matching on **src_zone, dst_zone** and returning **allowed_protocols** and **conduit_status** (OUTPUTNEW only fills fields that are not already present in the event).\n• Filters the rows with `where conduit_status!=\"approved\"` — the rule expression for this monitoring goal. Zone pairs missing from the policy lookup get a null `conduit_status` and are silently dropped by this clause; extend it with `OR isnull(conduit_status)` if unknown conduits should also be flagged.\n• `stats` rolls up events into metrics; results are split **by src_zone, dst_zone** so each row summarizes one zone pair (violation count plus the protocols observed on it).\n• Pipeline stage (see **IEC 62443 Zone and Conduit Compliance Monitoring**): table src_zone, dst_zone, violations, protocol_list\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IEC 62443 Zone and Conduit Compliance Monitoring** — IEC 62443 requires industrial networks to be segmented into security zones connected by controlled conduits. OT security platforms help control engineers define these zones and automatically detect cross-zone traffic that violates conduit policies. Monitoring zone compliance ensures segmentation remains effective over time.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:flow`, `sourcetype=cisco:cybervision:activity` · Nozomi: `sourcetype=nozomi:link`, `sourcetype=nozomi:session`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Zone topology map; conduit compliance matrix; violation count per zone pair; compliance trend over time.
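\n\nConduit-policy lookup sketch (illustrative only: the file name `iec62443_conduit_policy.csv` comes from the main search, but the zone names and protocol values below are placeholders to replace with your own zone model): the lookup needs one row per approved zone pair with the columns referenced above. Review it with `| inputlookup iec62443_conduit_policy.csv`, or seed a test row from Splunk like this:\n\n```spl\n| makeresults\n| eval src_zone=\"process_cell_1\", dst_zone=\"dmz_historian\", allowed_protocols=\"opcua\", conduit_status=\"approved\"\n| table src_zone, dst_zone, allowed_protocols, conduit_status\n| outputlookup append=true iec62443_conduit_policy.csv\n```\n\nMost teams maintain this file outside Splunk (exported from the OT platform's zone definitions, as described in Step 1) and upload it as a lookup table file; the seeding search above is only a convenience for testing.",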
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare live traffic to our zone and conduit policy so a hole in segmentation is a finding, not a hunch on a whiteboard.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Network_Traffic",
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "FR 6.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that IEC 62443 FR 6.2 (Continuous monitoring) is enforced — Splunk UC-14.9.22: IEC 62443 Zone and Conduit Compliance Monitoring.",
                  "ea": "Saved search 'UC-14.9.22' running on sourcetype cisco:cybervision:flow and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.isa.org/standards-and-publications/isa-standards/isa-iec-62443-series-of-standards"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-7 R1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-005-7 R1 (Electronic security perimeter) is enforced — Splunk UC-14.9.22: IEC 62443 Zone and Conduit Compliance Monitoring.",
                  "ea": "Saved search 'UC-14.9.22' running on sourcetype cisco:cybervision:flow and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                },
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.D",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that TSA SD III.D (Cybersecurity assessment) is enforced — Splunk UC-14.9.22: IEC 62443 Zone and Conduit Compliance Monitoring.",
                  "ea": "Saved search 'UC-14.9.22' running on sourcetype cisco:cybervision:flow and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.tsa.gov/sd02c"
                }
              ],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.23",
              "n": "OT Event Severity Distribution and Security Posture Dashboard",
              "c": "medium",
              "f": "beginner",
              "v": "A high-level view of all OT security events by severity and category provides security posture at a glance. Trending event volumes over time shows whether security is improving (fewer Critical/High events) or degrading. Supports CISO reporting and board-level OT security metrics.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:event`, `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert`",
              "q": "index=nozomi sourcetype=\"nozomi:alert\"\n| eval severity_label=case(risk>=8, \"Critical\", risk>=6, \"High\", risk>=4, \"Medium\", risk<4, \"Low\")\n| bin _time span=1d\n| stats count as events by severity_label, type_id, _time\n| eventstats sum(events) as daily_total by _time\n| eval pct_of_total=round(events*100/daily_total, 1)\n| table _time, severity_label, type_id, events, pct_of_total",
              "m": "Both platforms provide event aggregation for security posture dashboards. Build executive dashboard showing daily event volumes, severity distribution, and trend lines. Track week-over-week and month-over-month changes. Alert on sudden spikes in Critical/High events.",
              "z": "Severity distribution pie chart; daily event volume stacked bar chart; trend line overlay; category breakdown table.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:event`, `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms provide event aggregation for security posture dashboards. Build executive dashboard showing daily event volumes, severity distribution, and trend lines. Track week-over-week and month-over-month changes. Alert on sudden spikes in Critical/High events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\"\n| eval severity_label=case(risk>=8, \"Critical\", risk>=6, \"High\", risk>=4, \"Medium\", risk<4, \"Low\")\n| bin _time span=1d\n| stats count as events by severity_label, type_id, _time\n| eventstats sum(events) as daily_total by _time\n| eval pct_of_total=round(events*100/daily_total, 1)\n| table _time, severity_label, type_id, events, pct_of_total\n```\n\nUnderstanding this SPL\n\n**OT Event Severity Distribution and Security Posture Dashboard** — A high-level view of all OT security events by severity and category provides security posture at a glance. Trending event volumes over time shows whether security is improving (fewer Critical/High events) or degrading. Supports CISO reporting and board-level OT security metrics.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:event`, `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:alert\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **severity_label** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by severity_label, type_id, _time** so each row reflects one severity level, event category, and day.\n• `eventstats` computes **daily_total** (the sum of events) **by _time** and writes it back onto every row, so each count can be expressed as a share of that day's volume.\n• `eval` defines or adjusts **pct_of_total** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **OT Event Severity Distribution and Security Posture Dashboard**): table _time, severity_label, type_id, events, pct_of_total\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Severity distribution pie chart; daily event volume stacked bar chart; trend line overlay; category breakdown table.
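\n\nOptional spike-alert sketch (not part of the documented catalog SPL; it assumes the same numeric `risk` field used in the main search, and the multiplier and time window are placeholders to tune for your environment): compare today's Critical/High volume against the trailing daily average and alert when it more than doubles.\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" risk>=6 earliest=-7d@d\n| bin _time span=1d\n| stats count as high_events by _time\n| eventstats avg(high_events) as avg_daily\n| where _time >= relative_time(now(), \"@d\") AND high_events > 2 * avg_daily\n```\n\nSave this as a scheduled alert (for example, hourly or daily) alongside the dashboard search from Step 2.",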
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how many critical and high events we get over time so our OT security posture and tuning show up in one trend.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.24",
              "n": "OT Protocol Usage Analysis and Inventory",
              "c": "medium",
              "f": "intermediate",
              "v": "OT security platforms identify all industrial protocols in use across the OT network — Modbus, EtherNet/IP, Profinet, S7, BACnet, DNP3, IEC 104, OPC-UA, and dozens more. Understanding protocol distribution helps prioritize security investments, plan protocol-specific IDS rules, and identify legacy protocols that need migration.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:activity` · Nozomi: `sourcetype=nozomi:link`, `sourcetype=nozomi:session`",
              "q": "index=nozomi sourcetype=\"nozomi:link\"\n| stats dc(src_ip) as sources, dc(dst_ip) as destinations, sum(bps_out) as total_bps by protocols, zone\n| sort -sources\n| table zone, protocols, sources, destinations, total_bps",
              "m": "Both platforms identify all industrial protocols in use via DPI. Analyze data to build protocol inventory per site/zone. Identify unexpected protocols (e.g., BACnet in a power substation, or Modbus in an enterprise zone). Compare protocol usage across sites for standardization. Feed into protocol-specific security policy development.",
              "z": "Protocol distribution pie chart per site; protocol comparison across sites; unexpected protocol alert table; protocol trend over time.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:activity` · Nozomi: `sourcetype=nozomi:link`, `sourcetype=nozomi:session`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms identify all industrial protocols in use via DPI. Analyze data to build protocol inventory per site/zone. Identify unexpected protocols (e.g., BACnet in a power substation, or Modbus in an enterprise zone). Compare protocol usage across sites for standardization. Feed into protocol-specific security policy development.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:link\"\n| stats dc(src_ip) as sources, dc(dst_ip) as destinations, sum(bps_out) as total_bps by protocols, zone\n| sort -sources\n| table zone, protocols, sources, destinations, total_bps\n```\n\nUnderstanding this SPL\n\n**OT Protocol Usage Analysis and Inventory** — OT security platforms identify all industrial protocols in use across the OT network — Modbus, EtherNet/IP, Profinet, S7, BACnet, DNP3, IEC 104, OPC-UA, and dozens more. Understanding protocol distribution helps prioritize security investments, plan protocol-specific IDS rules, and identify legacy protocols that need migration.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:activity` · Nozomi: `sourcetype=nozomi:link`, `sourcetype=nozomi:session`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:link. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:link\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by protocols, zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **OT Protocol Usage Analysis and Inventory**): table zone, protocols, sources, destinations, total_bps\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OT Protocol Usage Analysis and Inventory** — OT security platforms identify all industrial protocols in use across the OT network — Modbus, EtherNet/IP, Profinet, S7, BACnet, DNP3, IEC 104, OPC-UA, and dozens more. 
Understanding protocol distribution helps prioritize security investments, plan protocol-specific IDS rules, and identify legacy protocols that need migration.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:activity` · Nozomi: `sourcetype=nozomi:link`, `sourcetype=nozomi:session`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Protocol distribution pie chart per site; protocol comparison across sites; unexpected protocol alert table; protocol trend over time.",
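              "_exampleSpl": "Illustrative sketch for the 'unexpected protocol' idea in Step 1. It assumes a hypothetical CSV lookup named `ot_approved_protocols` with columns `zone`, `protocols`, and `approved`; rows with no match are candidates for review, not confirmed violations.\n\n```spl\nindex=nozomi sourcetype=\"nozomi:link\"\n| stats dc(src_ip) as sources, dc(dst_ip) as destinations by zone, protocols\n| lookup ot_approved_protocols zone, protocols OUTPUT approved\n| where isnull(approved)\n| sort -sources\n```\n\nMaintain the lookup from the protocol inventory produced by the main search so the allow-list stays current.",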
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We build a view of which industrial protocols are in use and where so unexpected protocols or sites stand out in governance.",
              "mtype": [
                "Inventory"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "14.9.25",
              "n": "Decode Failure and Malformed Packet Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "OT security platforms generate decode failure events when their DPI engine encounters packets it cannot properly parse — potentially indicating protocol fuzzing attacks, corrupted communications, or deliberately malformed exploit payloads targeting OT devices. Sustained decode failures from a single source may indicate active exploitation attempts.",
              "t": "`Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905)",
              "d": "`sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert`",
              "q": "index=nozomi sourcetype=\"nozomi:alert\" type_id=\"PROTOCOL-DECODE*\"\n| bin _time span=1h\n| stats count as failures by zone, _time\n| where failures > 10\n| table _time, zone, failures\n| sort -failures",
              "m": "Both platforms generate events when DPI encounters unparseable packets. Baseline normal decode failure rates per sensor/zone. Alert on sudden spikes exceeding 3x baseline, which may indicate a fuzzing or exploitation attempt. Correlate with IDS alerts from the same time window.",
              "z": "Decode failure timeline per sensor; spike detection; correlation with IDS events; sensor health overlay.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunkbase app 5748](https://splunkbase.splunk.com/app/5748), [Splunkbase app 6905](https://splunkbase.splunk.com/app/6905), [CIM: Intrusion Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905).\n• Ensure the following data sources are available: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBoth platforms generate events when DPI encounters unparseable packets. Baseline normal decode failure rates per sensor/zone. Alert on sudden spikes exceeding 3x baseline, which may indicate a fuzzing or exploitation attempt. Correlate with IDS alerts from the same time window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" type_id=\"PROTOCOL-DECODE*\"\n| bin _time span=1h\n| stats count as failures by zone, _time\n| where failures > 10\n| table _time, zone, failures\n| sort -failures\n```\n\nUnderstanding this SPL\n\n**Decode Failure and Malformed Packet Detection** — OT security platforms generate decode failure events when their DPI engine encounters packets it cannot properly parse — potentially indicating protocol fuzzing attacks, corrupted communications, or deliberately malformed exploit payloads targeting OT devices. Sustained decode failures from a single source may indicate active exploitation attempts.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nozomi; **sourcetype**: nozomi:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nozomi, sourcetype=\"nozomi:alert\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by zone, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failures > 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Decode Failure and Malformed Packet Detection**): table _time, zone, failures\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Decode Failure and Malformed Packet Detection** — OT security platforms generate decode failure events when their DPI engine encounters packets it cannot properly parse — potentially indicating protocol fuzzing attacks, corrupted communications, or deliberately malformed exploit payloads targeting OT devices. Sustained decode failures from a single source may indicate active exploitation attempts.\n\nDocumented **Data sources**: `sourcetype=cisco:cybervision:syslog` · Nozomi: `sourcetype=nozomi:alert`. **App/TA** (typical add-on context): `Cisco Cyber Vision Splunk Add-On` (Splunkbase 5748) · `Nozomi Networks Universal Add-on` (Splunkbase 6905). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Decode failure timeline per sensor; spike detection; correlation with IDS events; sensor health overlay.",
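              "_exampleSpl": "Illustrative baseline-relative variant of the Step 1 guidance ('spikes exceeding 3x baseline'). The baseline here is a simple per-zone average over the search window, which is an assumption; a baseline stored in a summary index or lookup is more robust.\n\n```spl\nindex=nozomi sourcetype=\"nozomi:alert\" type_id=\"PROTOCOL-DECODE*\" earliest=-7d\n| bin _time span=1h\n| stats count as failures by zone, _time\n| eventstats avg(failures) as baseline_failures by zone\n| where failures > 3 * baseline_failures\n| sort -failures\n```\n\nRows returned are hour/zone combinations whose decode-failure count exceeded three times that zone's average for the period.",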
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log decode and malformed-packet events so a parser gap or a deliberate fuzz is visible next to the same flow metadata.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action, IDS_Attacks.signature, IDS_Attacks.src, IDS_Attacks.dest span=1h | sort - count",
              "e": [
                "cisco"
              ],
              "em": [],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.8,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 25,
            "none": 0
          }
        }
      ],
      "i": 14,
      "n": "IoT & Operational Technology (OT)",
      "src": "cat-14-iot-operational-technology-ot.md"
    },
    {
      "s": [
        {
          "i": "15.1",
          "n": "Power & UPS",
          "u": [
            {
              "i": "15.1.1",
              "n": "UPS Battery Health",
              "c": "critical",
              "f": "beginner",
              "v": "UPS battery degradation is the single largest cause of unprotected power events. Proactive replacement prevents data center outages.",
              "t": "SNMP TA (UPS-MIB)",
              "d": "SNMP UPS-MIB (battery status, charge, runtime, temperature, replace indicator)",
              "q": "index=power sourcetype=\"snmp:ups\"\n| where battery_replace_indicator=\"yes\" OR charge_pct < 80 OR runtime_min < 15\n| table ups_name, location, battery_status, charge_pct, runtime_min, battery_age_months",
              "m": "Poll UPS battery metrics via SNMP every 5 minutes. Alert on replace indicator, low charge, or low runtime. Track battery age and capacity trend over time to predict replacement needs.",
              "z": "Timechart of `runtime_min` and `charge_pct` by `ups_name` for battery health trending; table sorted by `runtime_min` ascending to surface units needing replacement; single-value gauge for fleet-wide minimum runtime as the critical KPI.",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA (UPS-MIB).\n• Ensure the following data sources are available: SNMP UPS-MIB (battery status, charge, runtime, temperature, replace indicator).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll UPS battery metrics via SNMP every 5 minutes. Alert on replace indicator, low charge, or low runtime. Track battery age and capacity trend over time to predict replacement needs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"snmp:ups\"\n| where battery_replace_indicator=\"yes\" OR charge_pct < 80 OR runtime_min < 15\n| table ups_name, location, battery_status, charge_pct, runtime_min, battery_age_months\n```\n\nUnderstanding this SPL\n\n**UPS Battery Health** — UPS battery degradation is the single largest cause of unprotected power events. Proactive replacement prevents data center outages.\n\nDocumented **Data sources**: SNMP UPS-MIB (battery status, charge, runtime, temperature, replace indicator). **App/TA** (typical add-on context): SNMP TA (UPS-MIB). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: snmp:ups. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"snmp:ups\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where battery_replace_indicator=\"yes\" OR charge_pct < 80 OR runtime_min < 15` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **UPS Battery Health**): table ups_name, location, battery_status, charge_pct, runtime_min, battery_age_months\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart of `runtime_min` and `charge_pct` by `ups_name` for battery health trending; table sorted by `runtime_min` ascending to surface units needing replacement; single-value gauge for fleet-wide minimum runtime as the critical KPI.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic",
                "snmp_ups"
              ],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "15.1.2",
              "n": "PDU Power per Rack",
              "c": "high",
              "f": "beginner",
              "v": "Per-rack power monitoring prevents circuit overloads and enables efficient rack placement for new equipment.",
              "t": "SNMP TA (PDU-MIB), vendor API",
              "d": "Smart PDU per-outlet and per-circuit metrics",
              "q": "index=power sourcetype=\"snmp:pdu\"\n| eval pct_capacity=round(current_amps/rated_amps*100,1)\n| where pct_capacity > 80\n| table rack_id, pdu_name, circuit, current_amps, rated_amps, pct_capacity",
              "m": "Poll PDU metrics via SNMP. Track per-outlet and per-circuit power. Alert when any circuit exceeds 80% capacity. Report on rack power distribution for capacity planning. Track power trends per rack.",
              "z": "Heatmap (rack × power usage), Gauge (% capacity per circuit), Bar chart (power by rack), Table (overloaded circuits).",
              "kfp": "Capacity alerts during planned hardware refreshes, scheduled rack moves, or new deployments.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA (PDU-MIB), vendor API.\n• Ensure the following data sources are available: Smart PDU per-outlet and per-circuit metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll PDU metrics via SNMP. Track per-outlet and per-circuit power. Alert when any circuit exceeds 80% capacity. Report on rack power distribution for capacity planning. Track power trends per rack.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"snmp:pdu\"\n| eval pct_capacity=round(current_amps/rated_amps*100,1)\n| where pct_capacity > 80\n| table rack_id, pdu_name, circuit, current_amps, rated_amps, pct_capacity\n```\n\nUnderstanding this SPL\n\n**PDU Power per Rack** — Per-rack power monitoring prevents circuit overloads and enables efficient rack placement for new equipment.\n\nDocumented **Data sources**: Smart PDU per-outlet and per-circuit metrics. **App/TA** (typical add-on context): SNMP TA (PDU-MIB), vendor API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: snmp:pdu. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"snmp:pdu\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pct_capacity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct_capacity > 80` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PDU Power per Rack**): table rack_id, pdu_name, circuit, current_amps, rated_amps, pct_capacity\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (rack × power usage), Gauge (% capacity per circuit), Bar chart (power by rack), Table (overloaded circuits).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic",
                "snmp_pdu"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.3",
              "n": "Power Redundancy Status",
              "c": "critical",
              "f": "intermediate",
              "v": "Loss of A/B feed redundancy means a single power failure will cause an outage. Immediate awareness enables emergency response.",
              "t": "SNMP TA, PDU/UPS events",
              "d": "PDU input status, UPS input voltage, transfer switch events",
              "q": "index=power sourcetype=\"snmp:pdu\"\n| where input_status!=\"normal\" OR input_voltage < 180\n| table _time, pdu_name, rack_id, feed, input_status, input_voltage",
              "m": "Monitor PDU input status and UPS input voltage. Alert immediately on loss of any power feed. Track ATS (Automatic Transfer Switch) events. Maintain power topology documentation for impact analysis.",
              "z": "Status grid (rack × A/B feed status), Table (power events), Timeline (redundancy loss events), Single value (racks with full redundancy %).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA, PDU/UPS events.\n• Ensure the following data sources are available: PDU input status, UPS input voltage, transfer switch events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor PDU input status and UPS input voltage. Alert immediately on loss of any power feed. Track ATS (Automatic Transfer Switch) events. Maintain power topology documentation for impact analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"snmp:pdu\"\n| where input_status!=\"normal\" OR input_voltage < 180\n| table _time, pdu_name, rack_id, feed, input_status, input_voltage\n```\n\nUnderstanding this SPL\n\n**Power Redundancy Status** — Loss of A/B feed redundancy means a single power failure will cause an outage. Immediate awareness enables emergency response.\n\nDocumented **Data sources**: PDU input status, UPS input voltage, transfer switch events. **App/TA** (typical add-on context): SNMP TA, PDU/UPS events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: snmp:pdu. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"snmp:pdu\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where input_status!=\"normal\" OR input_voltage < 180` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Power Redundancy Status**): table _time, pdu_name, rack_id, feed, input_status, input_voltage\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (rack × A/B feed status), Table (power events), Timeline (redundancy loss events), Single value (racks with full redundancy %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic",
                "snmp_pdu",
                "snmp_ups"
              ],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "15.1.4",
              "n": "Generator Test Results",
              "c": "high",
              "f": "beginner",
              "v": "Generators are the last line of defense during extended outages. Failed tests mean they may not start when needed.",
              "t": "BMS integration, manual log input",
              "d": "Generator controller logs, BMS events",
              "q": "index=power sourcetype=\"generator:test\"\n| stats latest(result) as last_result, latest(_time) as last_test by generator_id\n| eval days_since_test=round((now()-last_test)/86400)\n| where last_result!=\"pass\" OR days_since_test > 30",
              "m": "Log generator test results (manual or automated). Track test frequency and outcomes. Alert on failed tests and missed test schedules. Monitor fuel levels. Report on generator readiness for management.",
              "z": "Table (generator test history), Single value (days since last test), Status indicator (pass/fail).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS integration, manual log input.\n• Ensure the following data sources are available: Generator controller logs, BMS events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLog generator test results (manual or automated). Track test frequency and outcomes. Alert on failed tests and missed test schedules. Monitor fuel levels. Report on generator readiness for management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"generator:test\"\n| stats latest(result) as last_result, latest(_time) as last_test by generator_id\n| eval days_since_test=round((now()-last_test)/86400)\n| where last_result!=\"pass\" OR days_since_test > 30\n```\n\nUnderstanding this SPL\n\n**Generator Test Results** — Generators are the last line of defense during extended outages. Failed tests mean they may not start when needed.\n\nDocumented **Data sources**: Generator controller logs, BMS events. **App/TA** (typical add-on context): BMS integration, manual log input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: generator:test. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"generator:test\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by generator_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since_test** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where last_result!=\"pass\" OR days_since_test > 30` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (generator test history), Single value (days since last test), Status indicator (pass/fail).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.5",
              "n": "PUE Calculation",
              "c": "medium",
              "f": "beginner",
              "v": "Power Usage Effectiveness is the primary data center efficiency metric. Trending drives energy optimization and sustainability goals.",
              "t": "Aggregate power metrics from PDU/UPS/BMS",
              "d": "Total facility power, IT load power",
              "q": "index=power sourcetype=\"power:aggregate\"\n| timechart span=1h avg(total_facility_kw) as facility, avg(it_load_kw) as it_load\n| eval pue=round(facility/it_load,2)",
              "m": "Aggregate total facility power and IT equipment power from PDU/UPS/BMS data. Calculate PUE hourly and daily. Track seasonal variation. Report monthly to operations and sustainability teams. Target PUE <1.5.",
              "z": "Gauge (current PUE), Line chart (PUE trend), Single value (monthly average PUE), Bar chart (PUE by month).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Aggregate power metrics from PDU/UPS/BMS.\n• Ensure the following data sources are available: Total facility power, IT load power.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate total facility power and IT equipment power from PDU/UPS/BMS data. Calculate PUE hourly and daily. Track seasonal variation. Report monthly to operations and sustainability teams. Target PUE <1.5.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"power:aggregate\"\n| timechart span=1h avg(total_facility_kw) as facility, avg(it_load_kw) as it_load\n| eval pue=round(facility/it_load,2)\n```\n\nUnderstanding this SPL\n\n**PUE Calculation** — Power Usage Effectiveness is the primary data center efficiency metric. Trending drives energy optimization and sustainability goals.\n\nDocumented **Data sources**: Total facility power, IT load power. **App/TA** (typical add-on context): Aggregate power metrics from PDU/UPS/BMS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: power:aggregate. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"power:aggregate\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **pue** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (current PUE), Line chart (PUE trend), Single value (monthly average PUE), Bar chart (PUE by month).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "snmp_pdu",
                "snmp_ups"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.6",
              "n": "Circuit Breaker Trips",
              "c": "critical",
              "f": "beginner",
              "v": "Breaker trips cause immediate power loss to affected equipment. Detection enables rapid response and root cause investigation.",
              "t": "PDU/BMS event logs",
              "d": "PDU events, BMS alerts, UPS transfer events",
              "q": "index=power sourcetype=\"pdu:events\" OR sourcetype=\"bms:events\"\n| search \"breaker\" OR \"overcurrent\" OR \"trip\"\n| table _time, device, location, event_type, circuit, message",
              "m": "Forward PDU and BMS events to Splunk. Alert immediately on breaker trips or overcurrent events. Track affected equipment from PDU-to-server mapping. Investigate root cause (overload, short circuit, equipment failure).",
              "z": "Timeline (breaker events), Table (trip details), Single value (trips this month).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PDU/BMS event logs.\n• Ensure the following data sources are available: PDU events, BMS alerts, UPS transfer events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward PDU and BMS events to Splunk. Alert immediately on breaker trips or overcurrent events. Track affected equipment from PDU-to-server mapping. Investigate root cause (overload, short circuit, equipment failure).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"pdu:events\" OR sourcetype=\"bms:events\"\n| search \"breaker\" OR \"overcurrent\" OR \"trip\"\n| table _time, device, location, event_type, circuit, message\n```\n\nUnderstanding this SPL\n\n**Circuit Breaker Trips** — Breaker trips cause immediate power loss to affected equipment. Detection enables rapid response and root cause investigation.\n\nDocumented **Data sources**: PDU events, BMS alerts, UPS transfer events. **App/TA** (typical add-on context): PDU/BMS event logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: pdu:events, bms:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"pdu:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Circuit Breaker Trips**): table _time, device, location, event_type, circuit, message\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (breaker events), Table (trip details), Single value (trips this month).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "snmp_pdu"
              ],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "15.1.7",
              "n": "APC PDU Outlet-Level Power Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Per-outlet current draw and on/off state on APC rack PDUs enables granular power attribution, remote outlet control verification, and identification of high-draw devices within a rack.",
              "t": "SNMP modular input",
              "d": "APC PowerNet-MIB (rPDU2OutletSwitchedStatusCurrent, rPDU2OutletSwitchedStatusState)",
              "q": "index=power sourcetype=\"snmp:apc:pdu\"\n| eval outlet_state=case(rPDU2OutletSwitchedStatusState=\"1\",\"on\",rPDU2OutletSwitchedStatusState=\"2\",\"off\",1=1,\"unknown\")\n| where rPDU2OutletSwitchedStatusCurrent > 10 OR outlet_state=\"off\"\n| table _time, pdu_name, outlet_id, outlet_label, rPDU2OutletSwitchedStatusCurrent as current_amps, outlet_state\n| sort -current_amps",
              "m": "Configure SNMP modular input to poll APC PowerNet-MIB. Map rPDU2OutletSwitchedStatusCurrent (0.1A resolution) and rPDU2OutletSwitchedStatusState per outlet. Poll every 5 minutes. Alert on outlets exceeding threshold (e.g., 10A) or unexpected off-state. Correlate outlet labels with DCIM for device mapping.",
              "z": "Table (outlet current by PDU), Bar chart (top outlets by draw), Status grid (outlet on/off state), Line chart (outlet current trend).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input.\n• Ensure the following data sources are available: APC PowerNet-MIB (rPDU2OutletSwitchedStatusCurrent, rPDU2OutletSwitchedStatusState).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure SNMP modular input to poll APC PowerNet-MIB. Map rPDU2OutletSwitchedStatusCurrent (0.1A resolution) and rPDU2OutletSwitchedStatusState per outlet. Poll every 5 minutes. Alert on outlets exceeding threshold (e.g., 10A) or unexpected off-state. Correlate outlet labels with DCIM for device mapping.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"snmp:apc:pdu\"\n| eval outlet_state=case(rPDU2OutletSwitchedStatusState=\"1\",\"on\",rPDU2OutletSwitchedStatusState=\"2\",\"off\",1=1,\"unknown\")\n| where rPDU2OutletSwitchedStatusCurrent > 10 OR outlet_state=\"off\"\n| table _time, pdu_name, outlet_id, outlet_label, rPDU2OutletSwitchedStatusCurrent as current_amps, outlet_state\n| sort -current_amps\n```\n\nUnderstanding this SPL\n\n**APC PDU Outlet-Level Power Monitoring** — Per-outlet current draw and on/off state on APC rack PDUs enables granular power attribution, remote outlet control verification, and identification of high-draw devices within a rack.\n\nDocumented **Data sources**: APC PowerNet-MIB (rPDU2OutletSwitchedStatusCurrent, rPDU2OutletSwitchedStatusState). **App/TA** (typical add-on context): SNMP modular input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: snmp:apc:pdu. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"snmp:apc:pdu\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **outlet_state** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where rPDU2OutletSwitchedStatusCurrent > 10 OR outlet_state=\"off\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **APC PDU Outlet-Level Power Monitoring**): table _time, pdu_name, outlet_id, outlet_label, rPDU2OutletSwitchedStatusCurrent as current_amps, outlet_state\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (outlet current by PDU), Bar chart (top outlets by draw), Status grid (outlet on/off state), Line chart (outlet current trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.8",
              "n": "Generator Runtime and Fuel Level",
              "c": "critical",
              "f": "intermediate",
              "v": "Diesel generator monitoring for extended outages; fuel level and run hours ensure generators can sustain operations during prolonged utility failures.",
              "t": "Custom (generator controller SNMP/Modbus, BMS integration)",
              "d": "Generator controller telemetry (fuel_level_pct, run_hours, engine_status)",
              "q": "index=power sourcetype=\"generator:telemetry\"\n| stats latest(fuel_level_pct) as fuel_pct, latest(run_hours) as run_hrs, latest(engine_status) as status by generator_id\n| where fuel_pct < 30 OR status!=\"standby\" AND status!=\"running\"\n| table generator_id, fuel_pct, run_hrs, status",
              "m": "Integrate generator controller via SNMP or Modbus. Ingest fuel_level_pct, run_hours, engine_status (standby, running, fault). Poll every 5–15 minutes. Alert on low fuel (<30%), engine fault, or unexpected runtime. Correlate with utility outage events. Report fuel consumption rate during run events.",
              "z": "Gauge (fuel level per generator), Table (generator status), Line chart (fuel level trend), Single value (lowest fuel %).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (generator controller SNMP/Modbus, BMS integration).\n• Ensure the following data sources are available: Generator controller telemetry (fuel_level_pct, run_hours, engine_status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate generator controller via SNMP or Modbus. Ingest fuel_level_pct, run_hours, engine_status (standby, running, fault). Poll every 5–15 minutes. Alert on low fuel (<30%), engine fault, or unexpected runtime. Correlate with utility outage events. Report fuel consumption rate during run events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"generator:telemetry\"\n| stats latest(fuel_level_pct) as fuel_pct, latest(run_hours) as run_hrs, latest(engine_status) as status by generator_id\n| where fuel_pct < 30 OR status!=\"standby\" AND status!=\"running\"\n| table generator_id, fuel_pct, run_hrs, status\n```\n\nUnderstanding this SPL\n\n**Generator Runtime and Fuel Level** — Diesel generator monitoring for extended outages; fuel level and run hours ensure generators can sustain operations during prolonged utility failures.\n\nDocumented **Data sources**: Generator controller telemetry (fuel_level_pct, run_hours, engine_status). **App/TA** (typical add-on context): Custom (generator controller SNMP/Modbus, BMS integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: generator:telemetry. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"generator:telemetry\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by generator_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fuel_pct < 30 OR status!=\"standby\" AND status!=\"running\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Generator Runtime and Fuel Level**): table generator_id, fuel_pct, run_hrs, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (fuel level per generator), Table (generator status), Line chart (fuel level trend), Single value (lowest fuel %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "modbus",
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.9",
              "n": "Rack Power Density Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Watts per rack unit over time for capacity planning; identifies hot spots and supports safe equipment placement decisions.",
              "t": "SNMP modular input, DCIM integration",
              "d": "PDU per-rack power readings, DCIM inventory (rack height, location)",
              "q": "index=power sourcetype=\"pdu:rack_power\"\n| lookup rack_inventory rack_id OUTPUT rack_u_height\n| eval watts_per_u=round(power_watts/rack_u_height,1)\n| timechart span=1d avg(watts_per_u) as avg_watts_per_u by rack_id\n| predict avg_watts_per_u as predicted future_timespan=30",
              "m": "Aggregate PDU power per rack. Join with DCIM lookup for rack U height. Calculate watts/U. Poll daily or hourly. Alert when watts/U exceeds design threshold (e.g., 500W/U). Use prediction for capacity planning. Report top-density racks and trend by zone.",
              "z": "Line chart (watts/U trend by rack), Heatmap (rack × density), Bar chart (top racks by density), Table (capacity forecast).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input, DCIM integration.\n• Ensure the following data sources are available: PDU per-rack power readings, DCIM inventory (rack height, location).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate PDU power per rack. Join with DCIM lookup for rack U height. Calculate watts/U. Poll daily or hourly. Alert when watts/U exceeds design threshold (e.g., 500W/U). Use prediction for capacity planning. Report top-density racks and trend by zone.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"pdu:rack_power\"\n| lookup rack_inventory rack_id OUTPUT rack_u_height\n| eval watts_per_u=round(power_watts/rack_u_height,1)\n| timechart span=1d avg(watts_per_u) as avg_watts_per_u by rack_id\n| predict avg_watts_per_u as predicted future_timespan=30\n```\n\nUnderstanding this SPL\n\n**Rack Power Density Trending** — Watts per rack unit over time for capacity planning; identifies hot spots and supports safe equipment placement decisions.\n\nDocumented **Data sources**: PDU per-rack power readings, DCIM inventory (rack height, location). **App/TA** (typical add-on context): SNMP modular input, DCIM integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: pdu:rack_power. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"pdu:rack_power\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **watts_per_u** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by rack_id** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Rack Power Density Trending**): predict avg_watts_per_u as predicted future_timespan=30\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (watts/U trend by rack), Heatmap (rack × density), Bar chart (top racks by density), Table (capacity forecast).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.10",
              "n": "UPS Battery Runtime Remaining",
              "c": "critical",
              "f": "beginner",
              "v": "Minutes of runtime remaining under present load is the actionable metric for graceful shutdown sequencing during outages.",
              "t": "SNMP TA (UPS-MIB upsEstimatedMinutesRemaining)",
              "d": "`sourcetype=\"snmp:ups\"` (runtime_min, load_pct)",
              "q": "index=power sourcetype=\"snmp:ups\"\n| where runtime_min < 10 OR (runtime_min < 20 AND load_pct > 70)\n| table _time, ups_name, location, runtime_min, load_pct, battery_status",
              "m": "Poll `upsEstimatedMinutesRemaining` every 1–5 minutes. Alert when runtime drops below site policy (e.g., <10 min). Correlate with concurrent generator tests.",
              "z": "Gauge (runtime min per UPS), Line chart (runtime trend), Single value (minimum runtime in site).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA (UPS-MIB upsEstimatedMinutesRemaining).\n• Ensure the following data sources are available: `sourcetype=\"snmp:ups\"` (runtime_min, load_pct).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll `upsEstimatedMinutesRemaining` every 1–5 minutes. Alert when runtime drops below site policy (e.g., <10 min). Correlate with concurrent generator tests.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"snmp:ups\"\n| where runtime_min < 10 OR (runtime_min < 20 AND load_pct > 70)\n| table _time, ups_name, location, runtime_min, load_pct, battery_status\n```\n\nUnderstanding this SPL\n\n**UPS Battery Runtime Remaining** — Minutes of runtime remaining under present load is the actionable metric for graceful shutdown sequencing during outages.\n\nDocumented **Data sources**: `sourcetype=\"snmp:ups\"` (runtime_min, load_pct). **App/TA** (typical add-on context): SNMP TA (UPS-MIB upsEstimatedMinutesRemaining). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: snmp:ups. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"snmp:ups\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where runtime_min < 10 OR (runtime_min < 20 AND load_pct > 70)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **UPS Battery Runtime Remaining**): table _time, ups_name, location, runtime_min, load_pct, battery_status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (runtime min per UPS), Line chart (runtime trend), Single value (minimum runtime in site).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic",
                "snmp_ups"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.11",
              "n": "PDU Outlet-Level Power Draw",
              "c": "high",
              "f": "beginner",
              "v": "Per-outlet amps identify rogue devices and balance load across strips before branch trips.",
              "t": "SNMP TA (PDU-MIB, ENCLOSURE-MIB)",
              "d": "`sourcetype=\"snmp:pdu:outlet\"` (outlet_index, current_amps)",
              "q": "index=power sourcetype=\"snmp:pdu:outlet\"\n| eval pct_outlet=if(rated_amps_outlet>0, round(current_amps/rated_amps_outlet*100,1), null())\n| where pct_outlet > 85 OR current_amps > 12\n| table pdu_id, outlet_id, current_amps, rated_amps_outlet, pct_outlet\n| sort -current_amps",
              "m": "Poll outlet bank tables on smart PDUs. Map outlet labels from DCIM. Alert on high draw or imbalance vs peer outlets on same strip.",
              "z": "Bar chart (outlet draw), Heatmap (PDU × outlet), Table (top consumers).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA (PDU-MIB, ENCLOSURE-MIB).\n• Ensure the following data sources are available: `sourcetype=\"snmp:pdu:outlet\"` (outlet_index, current_amps).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll outlet bank tables on smart PDUs. Map outlet labels from DCIM. Alert on high draw or imbalance vs peer outlets on same strip.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"snmp:pdu:outlet\"\n| eval pct_outlet=if(rated_amps_outlet>0, round(current_amps/rated_amps_outlet*100,1), null())\n| where pct_outlet > 85 OR current_amps > 12\n| table pdu_id, outlet_id, current_amps, rated_amps_outlet, pct_outlet\n| sort -current_amps\n```\n\nUnderstanding this SPL\n\n**PDU Outlet-Level Power Draw** — Per-outlet amps identify rogue devices and balance load across strips before branch trips.\n\nDocumented **Data sources**: `sourcetype=\"snmp:pdu:outlet\"` (outlet_index, current_amps). **App/TA** (typical add-on context): SNMP TA (PDU-MIB, ENCLOSURE-MIB). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: snmp:pdu:outlet. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"snmp:pdu:outlet\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pct_outlet** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct_outlet > 85 OR current_amps > 12` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PDU Outlet-Level Power Draw**): table pdu_id, outlet_id, current_amps, rated_amps_outlet, pct_outlet\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (outlet draw), Heatmap (PDU × outlet), Table (top consumers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic",
                "snmp_pdu"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.12",
              "n": "Generator Fuel Level Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Tank level and burn rate determine how long the site can run off-grid; critical for extended utility failures.",
              "t": "BMS, generator controller Modbus/SNMP",
              "d": "`sourcetype=\"generator:telemetry\"` (fuel_level_pct, fuel_gallons)",
              "q": "index=power sourcetype=\"generator:telemetry\"\n| where fuel_level_pct < 25\n| table generator_id, fuel_level_pct, fuel_gallons, engine_status",
              "m": "Poll fuel level and totalizer. Compute burn rate during engine run. Alert on low level and abnormal consumption (leak suspicion).",
              "z": "Gauge (fuel %), Line chart (level vs time), Table (hours of fuel at current load).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS, generator controller Modbus/SNMP.\n• Ensure the following data sources are available: `sourcetype=\"generator:telemetry\"` (fuel_level_pct, fuel_gallons).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll fuel level and totalizer. Compute burn rate during engine run. Alert on low level and abnormal consumption (leak suspicion).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"generator:telemetry\"\n| where fuel_level_pct < 25\n| table generator_id, fuel_level_pct, fuel_gallons, engine_status\n```\n\nUnderstanding this SPL\n\n**Generator Fuel Level Monitoring** — Tank level and burn rate determine how long the site can run off-grid; critical for extended utility failures.\n\nDocumented **Data sources**: `sourcetype=\"generator:telemetry\"` (fuel_level_pct, fuel_gallons). **App/TA** (typical add-on context): BMS, generator controller Modbus/SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: generator:telemetry. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"generator:telemetry\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where fuel_level_pct < 25` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Generator Fuel Level Monitoring**): table generator_id, fuel_level_pct, fuel_gallons, engine_status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (fuel %), Line chart (level vs time), Table (hours of fuel at current load).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "modbus",
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.13",
              "n": "Transfer Switch Events",
              "c": "critical",
              "f": "intermediate",
              "v": "ATS transitions between utility and generator must be logged for root cause of brief outages and equipment stress.",
              "t": "ATS SNMP/BMS, relay contacts",
              "d": "`sourcetype=\"ats:events\"` (source, event, duration_ms)",
              "q": "index=power sourcetype=\"ats:events\"\n| where event IN (\"transfer_to_gen\",\"retransfer_to_utility\",\"test\") OR duration_ms > 500\n| table _time, ats_id, event, source_side, duration_ms\n| sort -_time",
              "m": "Ingest dry-contact or SNMP traps on transfer. Alert on failed transfer, oscillation, or long transfer time. Correlate with utility feeder events.",
              "z": "Timeline (transfer events), Table (last transfer by ATS), Single value (transfers in 24h).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ATS SNMP/BMS, relay contacts.\n• Ensure the following data sources are available: `sourcetype=\"ats:events\"` (source, event, duration_ms).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest dry-contact or SNMP traps on transfer. Alert on failed transfer, oscillation, or long transfer time. Correlate with utility feeder events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"ats:events\"\n| where event IN (\"transfer_to_gen\",\"retransfer_to_utility\",\"test\") OR duration_ms > 500\n| table _time, ats_id, event, source_side, duration_ms\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Transfer Switch Events** — ATS transitions between utility and generator must be logged for root cause of brief outages and equipment stress.\n\nDocumented **Data sources**: `sourcetype=\"ats:events\"` (source, event, duration_ms). **App/TA** (typical add-on context): ATS SNMP/BMS, relay contacts. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: ats:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"ats:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where event IN (\"transfer_to_gen\",\"retransfer_to_utility\",\"test\") OR duration_ms > 500` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Transfer Switch Events**): table _time, ats_id, event, source_side, duration_ms\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (transfer events), Table (last transfer by ATS), Single value (transfers in 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.14",
              "n": "Power Factor Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Low PF increases utility demand charges and heats conductors; correction banks need verification.",
              "t": "Power meter SNMP (PQ meters)",
              "d": "`sourcetype=\"power_meter:pq\"` (pf, thd, kw, kvar)",
              "q": "index=power sourcetype=\"power_meter:pq\"\n| where pf < 0.92 OR thd_pct > 8\n| timechart span=15m avg(pf) as avg_pf by feed_id",
              "m": "Poll main switchboard meters. Alert when PF drops below utility contract threshold. Correlate with capacitor bank status if monitored.",
              "z": "Line chart (PF trend), Gauge (current PF), Table (feeds out of spec).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Power meter SNMP (PQ meters).\n• Ensure the following data sources are available: `sourcetype=\"power_meter:pq\"` (pf, thd, kw, kvar).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll main switchboard meters. Alert when PF drops below utility contract threshold. Correlate with capacitor bank status if monitored.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"power_meter:pq\"\n| where pf < 0.92 OR thd_pct > 8\n| timechart span=15m avg(pf) as avg_pf by feed_id\n```\n\nUnderstanding this SPL\n\n**Power Factor Monitoring** — Low PF increases utility demand charges and heats conductors; correction banks need verification.\n\nDocumented **Data sources**: `sourcetype=\"power_meter:pq\"` (pf, thd, kw, kvar). **App/TA** (typical add-on context): Power meter SNMP (PQ meters). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: power_meter:pq. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"power_meter:pq\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where pf < 0.92 OR thd_pct > 8` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by feed_id** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (PF trend), Gauge (current PF), Table (feeds out of spec).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.15",
              "n": "PUE Efficiency Tracking vs Target",
              "c": "medium",
              "f": "intermediate",
              "v": "Comparing live PUE to annual design target drives mechanical optimization and executive reporting.",
              "t": "BMS + IT load meters",
              "d": "`sourcetype=\"power:aggregate\"` (facility_kw, it_load_kw, pue_design)",
              "q": "index=power sourcetype=\"power:aggregate\"\n| eval pue=round(total_facility_kw/it_load_kw,2)\n| where pue > pue_design * 1.1 OR pue > 1.6\n| timechart span=1h avg(pue) as live_pue",
              "m": "Ingest design PUE from DCIM lookup. Alert when rolling PUE exceeds target band. Seasonally adjust expected range.",
              "z": "Line chart (PUE vs target band), Gauge (delta from design), Single value (30-day avg PUE).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS + IT load meters.\n• Ensure the following data sources are available: `sourcetype=\"power:aggregate\"` (facility_kw, it_load_kw, pue_design).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest design PUE from DCIM lookup. Alert when rolling PUE exceeds target band. Seasonally adjust expected range.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"power:aggregate\"\n| eval pue=round(total_facility_kw/it_load_kw,2)\n| where pue > pue_design * 1.1 OR pue > 1.6\n| timechart span=1h avg(pue) as live_pue\n```\n\nUnderstanding this SPL\n\n**PUE Efficiency Tracking vs Target** — Comparing live PUE to annual design target drives mechanical optimization and executive reporting.\n\nDocumented **Data sources**: `sourcetype=\"power:aggregate\"` (facility_kw, it_load_kw, pue_design). **App/TA** (typical add-on context): BMS + IT load meters. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: power:aggregate. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"power:aggregate\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pue > pue_design * 1.1 OR pue > 1.6` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (PUE vs target band), Gauge (delta from design), Single value (30-day avg PUE).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.16",
              "n": "Breaker Panel Load Balancing",
              "c": "high",
              "f": "intermediate",
              "v": "Uneven loading across panel phases causes neutral current and premature breaker wear.",
              "t": "Panelboard metering, PQ analyzer",
              "d": "`sourcetype=\"panel:phase\"` (phase_a_amps, phase_b_amps, phase_c_amps)",
              "q": "index=power sourcetype=\"panel:phase\"\n| eval max_p=max(phase_a_amps, phase_b_amps, phase_c_amps), min_p=min(phase_a_amps, phase_b_amps, phase_c_amps), imbalance=max_p-min_p\n| where imbalance > rated_amps*0.15\n| table panel_id, phase_a_amps, phase_b_amps, phase_c_amps, imbalance",
              "m": "Define max phase imbalance (e.g., 15% of frame). Alert and schedule load moves. Common with single-phase dense racks.",
              "z": "Bar chart (phase amps), Table (panels with imbalance), Heatmap (panel × phase).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Panelboard metering, PQ analyzer.\n• Ensure the following data sources are available: `sourcetype=\"panel:phase\"` (phase_a_amps, phase_b_amps, phase_c_amps).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine max phase imbalance (e.g., 15% of frame). Alert and schedule load moves. Common with single-phase dense racks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"panel:phase\"\n| eval max_p=max(phase_a_amps, phase_b_amps, phase_c_amps), min_p=min(phase_a_amps, phase_b_amps, phase_c_amps), imbalance=max_p-min_p\n| where imbalance > rated_amps*0.15\n| table panel_id, phase_a_amps, phase_b_amps, phase_c_amps, imbalance\n```\n\nUnderstanding this SPL\n\n**Breaker Panel Load Balancing** — Uneven loading across panel phases causes neutral current and premature breaker wear.\n\nDocumented **Data sources**: `sourcetype=\"panel:phase\"` (phase_a_amps, phase_b_amps, phase_c_amps). **App/TA** (typical add-on context): Panelboard metering, PQ analyzer. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: panel:phase. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"panel:phase\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **max_p** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where imbalance > rated_amps*0.15` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Breaker Panel Load Balancing**): table panel_id, phase_a_amps, phase_b_amps, phase_c_amps, imbalance\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (phase amps), Table (panels with imbalance), Heatmap (panel × phase).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.17",
              "n": "UPS Self-Test Failure",
              "c": "critical",
              "f": "beginner",
              "v": "Failed self-tests indicate battery or inverter faults before a real outage.",
              "t": "UPS SNMP (upsTestResultsSummary)",
              "d": "`sourcetype=\"snmp:ups:selftest\"`",
              "q": "index=power sourcetype=\"snmp:ups:selftest\"\n| where test_result!=\"passed\" OR upsTestResultsSummary!=\"donePass\"\n| table _time, ups_name, test_type, test_result, upsTestResultsSummary",
              "m": "Ingest results from scheduled and manual tests. Alert on any failure. Force battery replacement workflow per vendor guidance.",
              "z": "Table (failed tests), Timeline (self-test history), Single value (UPS with last fail).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: UPS SNMP (upsTestResultsSummary).\n• Ensure the following data sources are available: `sourcetype=\"snmp:ups:selftest\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest results from scheduled and manual tests. Alert on any failure. Force battery replacement workflow per vendor guidance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"snmp:ups:selftest\"\n| where test_result!=\"passed\" OR upsTestResultsSummary!=\"donePass\"\n| table _time, ups_name, test_type, test_result, upsTestResultsSummary\n```\n\nUnderstanding this SPL\n\n**UPS Self-Test Failure** — Failed self-tests indicate battery or inverter faults before a real outage.\n\nDocumented **Data sources**: `sourcetype=\"snmp:ups:selftest\"`. **App/TA** (typical add-on context): UPS SNMP (upsTestResultsSummary). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: snmp:ups:selftest. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"snmp:ups:selftest\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where test_result!=\"passed\" OR upsTestResultsSummary!=\"donePass\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **UPS Self-Test Failure**): table _time, ups_name, test_type, test_result, upsTestResultsSummary\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed tests), Timeline (self-test history), Single value (UPS with last fail).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic",
                "snmp_ups"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.18",
              "n": "Generator Start Failure",
              "c": "critical",
              "f": "beginner",
              "v": "Failed cranks or inability to reach rated speed leave the site on battery-only until resolved.",
              "t": "Generator controller",
              "d": "`sourcetype=\"generator:events\"` (start_result, crank_count, fault_code)",
              "q": "index=power sourcetype=\"generator:events\"\n| search start_result IN (\"fail\",\"abort\") OR crank_count > 3\n| table _time, generator_id, start_result, crank_count, fault_code\n| sort -_time",
              "m": "Alert immediately on failed start during tests or utility loss. Track battery, starter, and fuel subsystem codes.",
              "z": "Table (start failures), Single value (failed starts 90d), Timeline (events).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Generator controller.\n• Ensure the following data sources are available: `sourcetype=\"generator:events\"` (start_result, crank_count, fault_code).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert immediately on failed start during tests or utility loss. Track battery, starter, and fuel subsystem codes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"generator:events\"\n| search start_result IN (\"fail\",\"abort\") OR crank_count > 3\n| table _time, generator_id, start_result, crank_count, fault_code\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Generator Start Failure** — Failed cranks or inability to reach rated speed leave the site on battery-only until resolved.\n\nDocumented **Data sources**: `sourcetype=\"generator:events\"` (start_result, crank_count, fault_code). **App/TA** (typical add-on context): Generator controller. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: generator:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"generator:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Generator Start Failure**): table _time, generator_id, start_result, crank_count, fault_code\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (start failures), Single value (failed starts 90d), Timeline (events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.19",
              "n": "Power Redundancy Compliance (N+1)",
              "c": "critical",
              "f": "intermediate",
              "v": "Proves both A/B feeds and redundant UPS paths are energized and within load limits for loss-of-one scenarios.",
              "t": "PDU/ATS telemetry, DCIM rules",
              "d": "`sourcetype=\"power:redundancy\"` (feed_a_ok, feed_b_ok, load_on_loss_of_one_pct)",
              "q": "index=power sourcetype=\"power:redundancy\"\n| where feed_a_ok=0 OR feed_b_ok=0 OR load_on_loss_of_one_pct > 100\n| table _time, pdu_id, feed_a_ok, feed_b_ok, load_on_loss_of_one_pct",
              "m": "Model expected load after single failure from DCIM. Alert when any feed is down or modeled headroom <0%. Pair with physical walk-through audits.",
              "z": "Status grid (feed × PDU), Gauge (headroom %), Table (non-compliant PDUs).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PDU/ATS telemetry, DCIM rules.\n• Ensure the following data sources are available: `sourcetype=\"power:redundancy\"` (feed_a_ok, feed_b_ok, load_on_loss_of_one_pct).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nModel expected load after single failure from DCIM. Alert when any feed is down or modeled headroom <0%. Pair with physical walk-through audits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"power:redundancy\"\n| where feed_a_ok=0 OR feed_b_ok=0 OR load_on_loss_of_one_pct > 100\n| table _time, pdu_id, feed_a_ok, feed_b_ok, load_on_loss_of_one_pct\n```\n\nUnderstanding this SPL\n\n**Power Redundancy Compliance (N+1)** — Proves both A/B feeds and redundant UPS paths are energized and within load limits for loss-of-one scenarios.\n\nDocumented **Data sources**: `sourcetype=\"power:redundancy\"` (feed_a_ok, feed_b_ok, load_on_loss_of_one_pct). **App/TA** (typical add-on context): PDU/ATS telemetry, DCIM rules. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: power:redundancy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"power:redundancy\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where feed_a_ok=0 OR feed_b_ok=0 OR load_on_loss_of_one_pct > 100` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Power Redundancy Compliance (N+1)**): table _time, pdu_id, feed_a_ok, feed_b_ok, load_on_loss_of_one_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (feed × PDU), Gauge (headroom %), Table (non-compliant PDUs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "snmp_pdu"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.20",
              "n": "PDU Branch Circuit Alerts",
              "c": "high",
              "f": "beginner",
              "v": "Branch breakers trip before mains; per-branch monitoring catches overload before rack outage.",
              "t": "Smart PDU branch monitoring",
              "d": "`sourcetype=\"snmp:pdu:branch\"` (branch_id, branch_amps, branch_rated_amps)",
              "q": "index=power sourcetype=\"snmp:pdu:branch\"\n| eval pct=round(branch_amps/branch_rated_amps*100,1)\n| where pct > 80 OR branch_status=\"alarm\"\n| table pdu_id, branch_id, branch_amps, branch_rated_amps, pct\n| sort -pct",
              "m": "Map branches to rack groups. Alert at 80% sustained. Correlate with planned equipment adds.",
              "z": "Bar chart (branch load %), Table (branches at risk).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Smart PDU branch monitoring.\n• Ensure the following data sources are available: `sourcetype=\"snmp:pdu:branch\"` (branch_id, branch_amps, branch_rated_amps).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap branches to rack groups. Alert at 80% sustained. Correlate with planned equipment adds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"snmp:pdu:branch\"\n| eval pct=round(branch_amps/branch_rated_amps*100,1)\n| where pct > 80 OR branch_status=\"alarm\"\n| table pdu_id, branch_id, branch_amps, branch_rated_amps, pct\n| sort -pct\n```\n\nUnderstanding this SPL\n\n**PDU Branch Circuit Alerts** — Branch breakers trip before mains; per-branch monitoring catches overload before rack outage.\n\nDocumented **Data sources**: `sourcetype=\"snmp:pdu:branch\"` (branch_id, branch_amps, branch_rated_amps). **App/TA** (typical add-on context): Smart PDU branch monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: snmp:pdu:branch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"snmp:pdu:branch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct > 80 OR branch_status=\"alarm\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PDU Branch Circuit Alerts**): table pdu_id, branch_id, branch_amps, branch_rated_amps, pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (branch load %), Table (branches at risk).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "snmp_pdu"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.21",
              "n": "Electrical Panel Phase Balancing",
              "c": "high",
              "f": "intermediate",
              "v": "Panel-level phase balance differs from rack PDU balance; both matter for neutral harmonics and breaker life.",
              "t": "Main distribution panel meters",
              "d": "`sourcetype=\"panel:phase\"` (main_l1_a, main_l2_a, main_l3_a)",
              "q": "index=power sourcetype=\"panel:phase\" panel_type=\"main\"\n| eval avg_a=(main_l1_a+main_l2_a+main_l3_a)/3\n| eval dev=max(abs(main_l1_a-avg_a), abs(main_l2_a-avg_a), abs(main_l3_a-avg_a))\n| where dev > avg_a*0.2\n| table panel_id, main_l1_a, main_l2_a, main_l3_a, dev",
              "m": "Alert when any phase deviates >20% from mean. Schedule rebalancing with facilities during maintenance.",
              "z": "Line chart (phase currents), Gauge (max deviation), Table (panels out of balance).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Main distribution panel meters.\n• Ensure the following data sources are available: `sourcetype=\"panel:phase\"` (main_l1_a, main_l2_a, main_l3_a).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert when any phase deviates >20% from mean. Schedule rebalancing with facilities during maintenance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"panel:phase\" panel_type=\"main\"\n| eval avg_a=(main_l1_a+main_l2_a+main_l3_a)/3\n| eval dev=max(abs(main_l1_a-avg_a), abs(main_l2_a-avg_a), abs(main_l3_a-avg_a))\n| where dev > avg_a*0.2\n| table panel_id, main_l1_a, main_l2_a, main_l3_a, dev\n```\n\nUnderstanding this SPL\n\n**Electrical Panel Phase Balancing** — Panel-level phase balance differs from rack PDU balance; both matter for neutral harmonics and breaker life.\n\nDocumented **Data sources**: `sourcetype=\"panel:phase\"` (main_l1_a, main_l2_a, main_l3_a). **App/TA** (typical add-on context): Main distribution panel meters. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: panel:phase. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"panel:phase\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **avg_a** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dev** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where dev > avg_a*0.2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Electrical Panel Phase Balancing**): table panel_id, main_l1_a, main_l2_a, main_l3_a, dev\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (phase currents), Gauge (max deviation), Table (panels out of balance).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.22",
              "n": "UPS Battery Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "UPS battery failure during power loss causes complete outage. Proactive monitoring prevents unprotected power events.",
              "t": "SNMP TA (UPS-MIB)",
              "d": "SNMP UPS-MIB (upsEstimatedMinutesRemaining, upsBatteryStatus, upsBatteryTemperature)",
              "q": "index=power sourcetype=\"snmp:ups\"\n| where battery_status!=\"normal\" OR runtime_remaining_min < 30 OR battery_temp_c > 35\n| table _time, ups_name, battery_status, charge_pct, runtime_remaining_min, battery_temp_c",
              "m": "Poll UPS via SNMP every 5 minutes. Alert on low charge (<80%), low runtime (<30 min), high temperature (>35°C), or abnormal status. Track battery health trend to predict replacement needs.",
              "z": "Gauge (charge %), Line chart (runtime trend), Table (UPS status), Single value (runtime remaining).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA (UPS-MIB).\n• Ensure the following data sources are available: SNMP UPS-MIB (upsEstimatedMinutesRemaining, upsBatteryStatus, upsBatteryTemperature).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll UPS via SNMP every 5 minutes. Alert on low charge (<80%), low runtime (<30 min), high temperature (>35°C), or abnormal status. Track battery health trend to predict replacement needs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"snmp:ups\"\n| where battery_status!=\"normal\" OR runtime_remaining_min < 30 OR battery_temp_c > 35\n| table _time, ups_name, battery_status, charge_pct, runtime_remaining_min, battery_temp_c\n```\n\nUnderstanding this SPL\n\n**UPS Battery Monitoring** — UPS battery failure during power loss causes complete outage. Proactive monitoring prevents unprotected power events.\n\nDocumented **Data sources**: SNMP UPS-MIB (upsEstimatedMinutesRemaining, upsBatteryStatus, upsBatteryTemperature). **App/TA** (typical add-on context): SNMP TA (UPS-MIB). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: snmp:ups. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"snmp:ups\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where battery_status!=\"normal\" OR runtime_remaining_min < 30 OR battery_temp_c > 35` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **UPS Battery Monitoring**): table _time, ups_name, battery_status, charge_pct, runtime_remaining_min, battery_temp_c\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (charge %), Line chart (runtime trend), Table (UPS status), Single value (runtime remaining).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic",
                "snmp_ups"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.23",
              "n": "Power Consumption Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Power consumption trending supports capacity planning, cost management, and sustainability reporting. Anomalies indicate equipment issues.",
              "t": "SNMP TA, smart PDU API",
              "d": "Smart PDU metrics (per-outlet, per-circuit power)",
              "q": "index=power sourcetype=\"snmp:pdu\"\n| timechart span=1h avg(power_watts) as avg_power by rack_id\n| predict avg_power as predicted future_timespan=30",
              "m": "Poll PDU power metrics via SNMP. Track per-rack and per-circuit consumption. Baseline normal patterns. Alert on unusual spikes (potential hardware issue) or drops (server failure). Use for PUE calculation.",
              "z": "Line chart (power per rack), Heatmap (rack × time power usage), Bar chart (top consumers), Stacked area (floor/room power).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP TA, smart PDU API.\n• Ensure the following data sources are available: Smart PDU metrics (per-outlet, per-circuit power).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll PDU power metrics via SNMP. Track per-rack and per-circuit consumption. Baseline normal patterns. Alert on unusual spikes (potential hardware issue) or drops (server failure). Use for PUE calculation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"snmp:pdu\"\n| timechart span=1h avg(power_watts) as avg_power by rack_id\n| predict avg_power as predicted future_timespan=30\n```\n\nUnderstanding this SPL\n\n**Power Consumption Trending** — Power consumption trending supports capacity planning, cost management, and sustainability reporting. Anomalies indicate equipment issues.\n\nDocumented **Data sources**: Smart PDU metrics (per-outlet, per-circuit power). **App/TA** (typical add-on context): SNMP TA, smart PDU API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: snmp:pdu. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"snmp:pdu\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by rack_id** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Power Consumption Trending**): predict avg_power as predicted future_timespan=30\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (power per rack), Heatmap (rack × time power usage), Bar chart (top consumers), Stacked area (floor/room power).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic",
                "snmp_pdu"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.24",
              "n": "Rack PDU Load and Phase Balance",
              "c": "high",
              "f": "intermediate",
              "v": "PDU overload or phase imbalance risks circuit trips and equipment failure. Monitoring supports capacity planning and safe load distribution.",
              "t": "PDU SNMP or API, DCIM",
              "d": "PDU current/load per phase, per outlet",
              "q": "index=physical sourcetype=\"pdu:metrics\"\n| stats latest(current_a) as current, latest(kw) as load by pdu_id, phase\n| eval imbalance=abs(current - avg(current) over (partition by pdu_id))\n| where load > 80 OR imbalance > 10\n| table pdu_id, phase, current, load, imbalance",
              "m": "Poll PDU metrics via SNMP or API. Alert when load exceeds 80% or phase imbalance exceeds threshold. Report on load trend and top-loaded PDUs. Balance loads across phases as needed.",
              "z": "Table (PDU load by phase), Bar chart (phase balance), Line chart (load trend).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PDU SNMP or API, DCIM.\n• Ensure the following data sources are available: PDU current/load per phase, per outlet.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll PDU metrics via SNMP or API. Alert when load exceeds 80% or phase imbalance exceeds threshold. Report on load trend and top-loaded PDUs. Balance loads across phases as needed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"pdu:metrics\"\n| stats latest(current_a) as current, latest(kw) as load by pdu_id, phase\n| eval imbalance=abs(current - avg(current) over (partition by pdu_id))\n| where load > 80 OR imbalance > 10\n| table pdu_id, phase, current, load, imbalance\n```\n\nUnderstanding this SPL\n\n**Rack PDU Load and Phase Balance** — PDU overload or phase imbalance risks circuit trips and equipment failure. Monitoring supports capacity planning and safe load distribution.\n\nDocumented **Data sources**: PDU current/load per phase, per outlet. **App/TA** (typical add-on context): PDU SNMP or API, DCIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: pdu:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"pdu:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by pdu_id, phase** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **imbalance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where load > 80 OR imbalance > 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Rack PDU Load and Phase Balance**): table pdu_id, phase, current, load, imbalance\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (PDU load by phase), Bar chart (phase balance), Line chart (load trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic",
                "snmp_pdu"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.1.25",
              "n": "Generator Run Hours and Maintenance Due",
              "c": "high",
              "f": "intermediate",
              "v": "Generator maintenance based on run hours ensures reliability during outages. Tracking run hours and maintenance due dates avoids missed service.",
              "t": "Generator controller, BMS",
              "d": "Run hours, last maintenance date, next due",
              "q": "index=physical sourcetype=\"generator:status\"\n| stats latest(run_hours) as run_hrs, latest(maintenance_due_hrs) as due_hrs by generator_id\n| eval remaining_hrs=due_hrs-run_hrs\n| where remaining_hrs < 100\n| table generator_id, run_hrs, due_hrs, remaining_hrs",
              "m": "Ingest generator run hours and maintenance schedule. Alert when remaining hours until next service is below threshold. Report on run hour trend and overdue maintenance.",
              "z": "Table (generators due for service), Gauge (remaining hours), Bar chart (run hours by unit).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Generator controller, BMS.\n• Ensure the following data sources are available: Run hours, last maintenance date, next due.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest generator run hours and maintenance schedule. Alert when remaining hours until next service is below threshold. Report on run hour trend and overdue maintenance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"generator:status\"\n| stats latest(run_hours) as run_hrs, latest(maintenance_due_hrs) as due_hrs by generator_id\n| eval remaining_hrs=due_hrs-run_hrs\n| where remaining_hrs < 100\n| table generator_id, run_hrs, due_hrs, remaining_hrs\n```\n\nUnderstanding this SPL\n\n**Generator Run Hours and Maintenance Due** — Generator maintenance based on run hours ensures reliability during outages. Tracking run hours and maintenance due dates avoids missed service.\n\nDocumented **Data sources**: Run hours, last maintenance date, next due. **App/TA** (typical add-on context): Generator controller, BMS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: generator:status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"generator:status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by generator_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **remaining_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where remaining_hrs < 100` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Generator Run Hours and Maintenance Due**): table generator_id, run_hrs, due_hrs, remaining_hrs\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (generators due for service), Gauge (remaining hours), Bar chart (run hours by unit).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 38.2,
          "qd": {
            "gold": 0,
            "silver": 3,
            "bronze": 22,
            "none": 0
          }
        },
        {
          "i": "15.2",
          "n": "Cooling & Environmental",
          "u": [
            {
              "i": "15.2.1",
              "n": "Temperature Monitoring per Zone",
              "c": "critical",
              "f": "beginner",
              "v": "Data center temperature exceedances risk equipment damage and unplanned shutdowns. Per-zone monitoring localizes issues.",
              "t": "SNMP environmental sensors",
              "d": "Environmental sensors (intake, exhaust, ambient temperature)",
              "q": "index=environment sourcetype=\"sensor:temperature\"\n| where temp_f > 80 OR temp_f < 64\n| table _time, zone, rack, sensor_position, temp_f\n| sort -temp_f",
              "m": "Deploy temperature sensors per ASHRAE recommendations (intake, exhaust, per-row). Poll via SNMP every minute. Alert on exceedance of ASHRAE A1 limits (64-80°F intake). Correlate with cooling unit status.",
              "z": "Heatmap (zone × temperature), Line chart (temperature trend per zone), Floor plan visualization, Single value (hottest zone).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP environmental sensors.\n• Ensure the following data sources are available: Environmental sensors (intake, exhaust, ambient temperature).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy temperature sensors per ASHRAE recommendations (intake, exhaust, per-row). Poll via SNMP every minute. Alert on exceedance of ASHRAE A1 limits (64-80°F intake). Correlate with cooling unit status.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=environment sourcetype=\"sensor:temperature\"\n| where temp_f > 80 OR temp_f < 64\n| table _time, zone, rack, sensor_position, temp_f\n| sort -temp_f\n```\n\nUnderstanding this SPL\n\n**Temperature Monitoring per Zone** — Data center temperature exceedances risk equipment damage and unplanned shutdowns. Per-zone monitoring localizes issues.\n\nDocumented **Data sources**: Environmental sensors (intake, exhaust, ambient temperature). **App/TA** (typical add-on context): SNMP environmental sensors. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: environment; **sourcetype**: sensor:temperature. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=environment, sourcetype=\"sensor:temperature\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where temp_f > 80 OR temp_f < 64` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Temperature Monitoring per Zone**): table _time, zone, rack, sensor_position, temp_f\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (zone × temperature), Line chart (temperature trend per zone), Floor plan visualization, Single value (hottest zone).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "15.2.2",
              "n": "Humidity Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Low humidity causes ESD risk; high humidity causes condensation. Maintaining 40-60% RH protects equipment.",
              "t": "SNMP environmental sensors",
              "d": "Humidity sensors",
              "q": "index=environment sourcetype=\"sensor:humidity\"\n| where humidity_pct > 60 OR humidity_pct < 40\n| table _time, zone, humidity_pct",
              "m": "Deploy humidity sensors alongside temperature sensors. Alert on out-of-range humidity (below 40% or above 60% RH). Track dew point to prevent condensation. Correlate with HVAC system humidifier/dehumidifier operation.",
              "z": "Line chart (humidity trend), Gauge (current humidity per zone), Table (zones out of range).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP environmental sensors.\n• Ensure the following data sources are available: Humidity sensors.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy humidity sensors alongside temperature sensors. Alert on out-of-range humidity (below 40% or above 60% RH). Track dew point to prevent condensation. Correlate with HVAC system humidifier/dehumidifier operation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=environment sourcetype=\"sensor:humidity\"\n| where humidity_pct > 60 OR humidity_pct < 40\n| table _time, zone, humidity_pct\n```\n\nUnderstanding this SPL\n\n**Humidity Monitoring** — Low humidity causes ESD risk; high humidity causes condensation. Maintaining 40-60% RH protects equipment.\n\nDocumented **Data sources**: Humidity sensors. **App/TA** (typical add-on context): SNMP environmental sensors. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: environment; **sourcetype**: sensor:humidity. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=environment, sourcetype=\"sensor:humidity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where humidity_pct > 60 OR humidity_pct < 40` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Humidity Monitoring**): table _time, zone, humidity_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (humidity trend), Gauge (current humidity per zone), Table (zones out of range).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.3",
              "n": "CRAC/CRAH Unit Health",
              "c": "critical",
              "f": "beginner",
              "v": "Cooling unit failures can cause rapid temperature rise. Monitoring operational status enables immediate response and failover.",
              "t": "BMS/SNMP integration",
              "d": "CRAC/CRAH unit SNMP metrics, BMS alarms",
              "q": "index=cooling sourcetype=\"bms:crac\"\n| where unit_status!=\"running\" OR supply_temp_f > setpoint_f + 5 OR compressor_status!=\"normal\"\n| table _time, unit_name, unit_status, supply_temp_f, setpoint_f, compressor_status",
              "m": "Monitor cooling unit operational status, supply/return temperatures, and compressor health via SNMP/BMS. Alert on unit failure or degraded performance. Track runtime hours for maintenance scheduling.",
              "z": "Status grid (unit × operational status), Table (unit health), Line chart (supply/return temps), Gauge (cooling capacity %).",
              "kfp": "HVAC alarms during scheduled maintenance, filter replacements, or outdoor temperature swings.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS/SNMP integration.\n• Ensure the following data sources are available: CRAC/CRAH unit SNMP metrics, BMS alarms.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor cooling unit operational status, supply/return temperatures, and compressor health via SNMP/BMS. Alert on unit failure or degraded performance. Track runtime hours for maintenance scheduling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cooling sourcetype=\"bms:crac\"\n| where unit_status!=\"running\" OR supply_temp_f > setpoint_f + 5 OR compressor_status!=\"normal\"\n| table _time, unit_name, unit_status, supply_temp_f, setpoint_f, compressor_status\n```\n\nUnderstanding this SPL\n\n**CRAC/CRAH Unit Health** — Cooling unit failures can cause rapid temperature rise. Monitoring operational status enables immediate response and failover.\n\nDocumented **Data sources**: CRAC/CRAH unit SNMP metrics, BMS alarms. **App/TA** (typical add-on context): BMS/SNMP integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cooling; **sourcetype**: bms:crac. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cooling, sourcetype=\"bms:crac\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where unit_status!=\"running\" OR supply_temp_f > setpoint_f + 5 OR compressor_status!=\"normal\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CRAC/CRAH Unit Health**): table _time, unit_name, unit_status, supply_temp_f, setpoint_f, compressor_status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (unit × operational status), Table (unit health), Line chart (supply/return temps), Gauge (cooling capacity %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "15.2.4",
              "n": "Hot Aisle Temperature Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Hot aisle trends indicate cooling efficiency and capacity margin. Rising trends signal approaching cooling limits.",
              "t": "Environmental sensors",
              "d": "Hot aisle return air temperature sensors",
              "q": "index=environment sourcetype=\"sensor:temperature\" position=\"hot_aisle\"\n| timechart span=1h avg(temp_f) as avg_temp by zone\n| predict avg_temp as predicted future_timespan=7",
              "m": "Deploy sensors in hot aisle containment. Track return air temperatures. Compare hot aisle temps across zones to identify cooling imbalances. Use prediction to forecast capacity issues.",
              "z": "Line chart (hot aisle temps with prediction), Heatmap (zone × time), Bar chart (avg hot aisle by zone).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Environmental sensors.\n• Ensure the following data sources are available: Hot aisle return air temperature sensors.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy sensors in hot aisle containment. Track return air temperatures. Compare hot aisle temps across zones to identify cooling imbalances. Use prediction to forecast capacity issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=environment sourcetype=\"sensor:temperature\" position=\"hot_aisle\"\n| timechart span=1h avg(temp_f) as avg_temp by zone\n| predict avg_temp as predicted future_timespan=7\n```\n\nUnderstanding this SPL\n\n**Hot Aisle Temperature Trending** — Hot aisle trends indicate cooling efficiency and capacity margin. Rising trends signal approaching cooling limits.\n\nDocumented **Data sources**: Hot aisle return air temperature sensors. **App/TA** (typical add-on context): Environmental sensors. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: environment; **sourcetype**: sensor:temperature. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=environment, sourcetype=\"sensor:temperature\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by zone** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Hot Aisle Temperature Trending**): predict avg_temp as predicted future_timespan=7\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (hot aisle temps with prediction), Heatmap (zone × time), Bar chart (avg hot aisle by zone).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.5",
              "n": "Water Leak Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Water in a data center causes immediate equipment damage and potential electrical hazards. Seconds matter in detection.",
              "t": "Leak detection sensor inputs",
              "d": "Water leak detection system (rope sensors, spot detectors)",
              "q": "index=environment sourcetype=\"leak_detection\"\n| where leak_detected=\"true\"\n| table _time, zone, sensor_id, location_description",
              "m": "Deploy water leak detection sensors under raised floors, near CRAC units, and along pipe routes. Alert at critical priority on any detection. Integrate with building facilities team notification. Test sensors quarterly.",
              "z": "Single value (active leak alerts — target: 0), Floor plan (sensor locations with status), Timeline (leak events).",
              "kfp": "Leak alerts from condensation, scheduled maintenance moisture, or false-trigger sensors.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Leak detection sensor inputs.\n• Ensure the following data sources are available: Water leak detection system (rope sensors, spot detectors).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy water leak detection sensors under raised floors, near CRAC units, and along pipe routes. Alert at critical priority on any detection. Integrate with building facilities team notification. Test sensors quarterly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=environment sourcetype=\"leak_detection\"\n| where leak_detected=\"true\"\n| table _time, zone, sensor_id, location_description\n```\n\nUnderstanding this SPL\n\n**Water Leak Detection** — Water in a data center causes immediate equipment damage and potential electrical hazards. Seconds matter in detection.\n\nDocumented **Data sources**: Water leak detection system (rope sensors, spot detectors). **App/TA** (typical add-on context): Leak detection sensor inputs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: environment; **sourcetype**: leak_detection. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=environment, sourcetype=\"leak_detection\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where leak_detected=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Water Leak Detection**): table _time, zone, sensor_id, location_description\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (active leak alerts — target: 0), Floor plan (sensor locations with status), Timeline (leak events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.6",
              "n": "Cooling Capacity Planning",
              "c": "medium",
              "f": "beginner",
              "v": "Trending cooling load vs capacity ensures adequate cooling for current and planned equipment deployments.",
              "t": "BMS metrics",
              "d": "CRAC/CRAH cooling output, IT heat load calculations",
              "q": "index=cooling sourcetype=\"bms:cooling_capacity\"\n| timechart span=1d avg(cooling_output_kw) as output, avg(cooling_capacity_kw) as capacity\n| eval utilization_pct=round(output/capacity*100,1)",
              "m": "Calculate cooling load from IT power consumption (1 watt IT ≈ 3.41 BTU/h heat). Compare against total cooling capacity. Track utilization percentage. Alert when approaching 80% capacity. Plan for seasonal variations.",
              "z": "Dual-axis chart (load vs capacity), Gauge (cooling utilization %), Line chart (utilization trend).",
              "kfp": "Capacity alerts during planned hardware refreshes, scheduled rack moves, or new deployments.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS metrics.\n• Ensure the following data sources are available: CRAC/CRAH cooling output, IT heat load calculations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCalculate cooling load from IT power consumption (1 watt IT ≈ 3.41 BTU/h heat). Compare against total cooling capacity. Track utilization percentage. Alert when approaching 80% capacity. Plan for seasonal variations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cooling sourcetype=\"bms:cooling_capacity\"\n| timechart span=1d avg(cooling_output_kw) as output, avg(cooling_capacity_kw) as capacity\n| eval utilization_pct=round(output/capacity*100,1)\n```\n\nUnderstanding this SPL\n\n**Cooling Capacity Planning** — Trending cooling load vs capacity ensures adequate cooling for current and planned equipment deployments.\n\nDocumented **Data sources**: CRAC/CRAH cooling output, IT heat load calculations. **App/TA** (typical add-on context): BMS metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cooling; **sourcetype**: bms:cooling_capacity. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cooling, sourcetype=\"bms:cooling_capacity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Dual-axis chart (load vs capacity), Gauge (cooling utilization %), Line chart (utilization trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.7",
              "n": "APC InRow / CRAC Unit Temperature Differential",
              "c": "high",
              "f": "intermediate",
              "v": "Inlet/outlet delta-T indicating cooling effectiveness; low delta-T suggests bypass airflow or undersized cooling; high delta-T indicates effective heat removal.",
              "t": "SNMP modular input",
              "d": "APC AirIR-MIB (airIRSupplyAirTemperature, airIRReturnAirTemperature)",
              "q": "index=cooling sourcetype=\"snmp:apc:inrow\"\n| eval delta_t=airIRReturnAirTemperature - airIRSupplyAirTemperature\n| where delta_t < 5 OR delta_t > 25\n| table _time, unit_name, zone, airIRSupplyAirTemperature as supply_f, airIRReturnAirTemperature as return_f, delta_t\n| sort -delta_t",
              "m": "Poll APC AirIR-MIB for airIRSupplyAirTemperature and airIRReturnAirTemperature. Calculate delta-T (return minus supply). Typical effective range 10–20°F. Alert on delta-T <5°F (ineffective cooling) or >25°F (possible airflow restriction). Poll every 5 minutes. Correlate with unit runtime and fan status.",
              "z": "Line chart (delta-T trend per unit), Table (units outside range), Gauge (current delta-T), Heatmap (zone × delta-T).",
              "kfp": "HVAC alarms during scheduled maintenance, filter replacements, or outdoor temperature swings.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input.\n• Ensure the following data sources are available: APC AirIR-MIB (airIRSupplyAirTemperature, airIRReturnAirTemperature).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll APC AirIR-MIB for airIRSupplyAirTemperature and airIRReturnAirTemperature. Calculate delta-T (return minus supply). Typical effective range 10–20°F. Alert on delta-T <5°F (ineffective cooling) or >25°F (possible airflow restriction). Poll every 5 minutes. Correlate with unit runtime and fan status.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cooling sourcetype=\"snmp:apc:inrow\"\n| eval delta_t=airIRReturnAirTemperature - airIRSupplyAirTemperature\n| where delta_t < 5 OR delta_t > 25\n| table _time, unit_name, zone, airIRSupplyAirTemperature as supply_f, airIRReturnAirTemperature as return_f, delta_t\n| sort -delta_t\n```\n\nUnderstanding this SPL\n\n**APC InRow / CRAC Unit Temperature Differential** — Inlet/outlet delta-T indicating cooling effectiveness; low delta-T suggests bypass airflow or undersized cooling; high delta-T indicates effective heat removal.\n\nDocumented **Data sources**: APC AirIR-MIB (airIRSupplyAirTemperature, airIRReturnAirTemperature). **App/TA** (typical add-on context): SNMP modular input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cooling; **sourcetype**: snmp:apc:inrow. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cooling, sourcetype=\"snmp:apc:inrow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **delta_t** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delta_t < 5 OR delta_t > 25` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **APC InRow / CRAC Unit Temperature Differential**): table _time, unit_name, zone, airIRSupplyAirTemperature as supply_f, airIRReturnAirTemperature as return_f, delta_t\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (delta-T trend per unit), Table (units outside range), Gauge (current delta-T), Heatmap (zone × delta-T).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.8",
              "n": "CRAC Unit Failure and Alarm State",
              "c": "critical",
              "f": "beginner",
              "v": "Explicit CRAC fault/alarm bits catch compressor and fan failures not visible in temperature alone.",
              "t": "BMS, SNMP (Liebert/Vertiv MIBs)",
              "d": "`sourcetype=\"bms:crac\"` (unit_alarm, compressor_fault, fan_fault)",
              "q": "index=cooling sourcetype=\"bms:crac\"\n| where unit_alarm=1 OR compressor_fault=1 OR fan_fault=1 OR unit_status=\"fault\"\n| table _time, unit_name, unit_alarm, compressor_fault, fan_fault, alarm_text",
              "m": "Map vendor alarm codes to plain text. Page on any fault. Track MTTR for CRAC repairs.",
              "z": "Status grid (unit × fault class), Timeline (alarms), Single value (units in fault).",
              "kfp": "HVAC alarms during scheduled maintenance, filter replacements, or outdoor temperature swings.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS, SNMP (Liebert/Vertiv MIBs).\n• Ensure the following data sources are available: `sourcetype=\"bms:crac\"` (unit_alarm, compressor_fault, fan_fault).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap vendor alarm codes to plain text. Page on any fault. Track MTTR for CRAC repairs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cooling sourcetype=\"bms:crac\"\n| where unit_alarm=1 OR compressor_fault=1 OR fan_fault=1 OR unit_status=\"fault\"\n| table _time, unit_name, unit_alarm, compressor_fault, fan_fault, alarm_text\n```\n\nUnderstanding this SPL\n\n**CRAC Unit Failure and Alarm State** — Explicit CRAC fault/alarm bits catch compressor and fan failures not visible in temperature alone.\n\nDocumented **Data sources**: `sourcetype=\"bms:crac\"` (unit_alarm, compressor_fault, fan_fault). **App/TA** (typical add-on context): BMS, SNMP (Liebert/Vertiv MIBs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cooling; **sourcetype**: bms:crac. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cooling, sourcetype=\"bms:crac\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where unit_alarm=1 OR compressor_fault=1 OR fan_fault=1 OR unit_status=\"fault\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CRAC Unit Failure and Alarm State**): table _time, unit_name, unit_alarm, compressor_fault, fan_fault, alarm_text\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (unit × fault class), Timeline (alarms), Single value (units in fault).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.9",
              "n": "Hot/Cold Aisle Temperature Delta",
              "c": "high",
              "f": "intermediate",
              "v": "Low ΔT across containment indicates bypass airflow or insufficient airflow; high ΔT may signal restricted return.",
              "t": "Environmental sensors",
              "d": "`sourcetype=\"sensor:temperature\"` position IN (\"hot_aisle\",\"cold_aisle\")",
              "q": "index=environment sourcetype=\"sensor:temperature\" row_id=*\n| bin _time span=5m\n| stats avg(eval(if(position=\"cold_aisle\",temp_c,null()))) as cold_c, avg(eval(if(position=\"hot_aisle\",temp_c,null()))) as hot_c by row_id, _time\n| eval delta_t=hot_c-cold_c\n| where delta_t < 8 OR delta_t > 22\n| table row_id, cold_c, hot_c, delta_t",
              "m": "Pair sensors per row. Baseline expected ΔT for your containment design. Alert outside band.",
              "z": "Line chart (ΔT by row), Heatmap (row × time), Gauge (current ΔT).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Environmental sensors.\n• Ensure the following data sources are available: `sourcetype=\"sensor:temperature\"` position IN (\"hot_aisle\",\"cold_aisle\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPair sensors per row. Baseline expected ΔT for your containment design. Alert outside band.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=environment sourcetype=\"sensor:temperature\" row_id=*\n| bin _time span=5m\n| stats avg(eval(if(position=\"cold_aisle\",temp_c,null()))) as cold_c, avg(eval(if(position=\"hot_aisle\",temp_c,null()))) as hot_c by row_id, _time\n| eval delta_t=hot_c-cold_c\n| where delta_t < 8 OR delta_t > 22\n| table row_id, cold_c, hot_c, delta_t\n```\n\nUnderstanding this SPL\n\n**Hot/Cold Aisle Temperature Delta** — Low ΔT across containment indicates bypass airflow or insufficient airflow; high ΔT may signal restricted return.\n\nDocumented **Data sources**: `sourcetype=\"sensor:temperature\"` position IN (\"hot_aisle\",\"cold_aisle\"). **App/TA** (typical add-on context): Environmental sensors. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: environment; **sourcetype**: sensor:temperature. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=environment, sourcetype=\"sensor:temperature\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by row_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **delta_t** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delta_t < 8 OR delta_t > 22` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Hot/Cold Aisle Temperature Delta**): table row_id, cold_c, hot_c, delta_t\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (ΔT by row), Heatmap (row × time), Gauge (current ΔT).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.10",
              "n": "Humidity Threshold Exceedance (ASHRAE)",
              "c": "high",
              "f": "beginner",
              "v": "Stricter alerting for short excursions supports tape media and corrosion-sensitive gear beyond static high/low limits.",
              "t": "SNMP sensors",
              "d": "`sourcetype=\"sensor:humidity\"`",
              "q": "index=environment sourcetype=\"sensor:humidity\"\n| where humidity_pct > 60 OR humidity_pct < 40\n| eval duration_bucket=if(humidity_pct>60,\"high\",\"low\")\n| stats count by zone, duration_bucket",
              "m": "Use sliding windows to alert only if out of 40–60% RH for >15 minutes. Pair with dew point for condensation risk.",
              "z": "Line chart (RH with bands), Table (zones in excursion), Single value (max excursion minutes).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP sensors.\n• Ensure the following data sources are available: `sourcetype=\"sensor:humidity\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse sliding windows to alert only if out of 40–60% RH for >15 minutes. Pair with dew point for condensation risk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=environment sourcetype=\"sensor:humidity\"\n| where humidity_pct > 60 OR humidity_pct < 40\n| eval duration_bucket=if(humidity_pct>60,\"high\",\"low\")\n| stats count by zone, duration_bucket\n```\n\nUnderstanding this SPL\n\n**Humidity Threshold Exceedance (ASHRAE)** — Stricter alerting for short excursions supports tape media and corrosion-sensitive gear beyond static high/low limits.\n\nDocumented **Data sources**: `sourcetype=\"sensor:humidity\"`. **App/TA** (typical add-on context): SNMP sensors. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: environment; **sourcetype**: sensor:humidity. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=environment, sourcetype=\"sensor:humidity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where humidity_pct > 60 OR humidity_pct < 40` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **duration_bucket** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by zone, duration_bucket** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (RH with bands), Table (zones in excursion), Single value (max excursion minutes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.11",
              "n": "Chiller Plant Efficiency (kW/ton)",
              "c": "medium",
              "f": "intermediate",
              "v": "Elevated kW/ton indicates fouling, low refrigerant, or poor tower performance.",
              "t": "BMS chiller plant",
              "d": "`sourcetype=\"bms:chiller\"` (tons, kw, cop)",
              "q": "index=cooling sourcetype=\"bms:chiller\"\n| eval kw_per_ton=round(kw/tons,3)\n| where kw_per_ton > design_kw_per_ton * 1.15\n| timechart span=15m avg(kw_per_ton) by chiller_id",
              "m": "Baseline design kW/ton from commissioning. Alert on sustained degradation. Schedule tube cleaning on trend.",
              "z": "Line chart (kW/ton trend), Gauge (current vs design), Table (chillers degraded).",
              "kfp": "Cooling capacity changes during seasonal swings, scheduled coil cleaning, or pump cycling.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS chiller plant.\n• Ensure the following data sources are available: `sourcetype=\"bms:chiller\"` (tons, kw, cop).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline design kW/ton from commissioning. Alert on sustained degradation. Schedule tube cleaning on trend.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cooling sourcetype=\"bms:chiller\"\n| eval kw_per_ton=round(kw/tons,3)\n| where kw_per_ton > design_kw_per_ton * 1.15\n| timechart span=15m avg(kw_per_ton) by chiller_id\n```\n\nUnderstanding this SPL\n\n**Chiller Plant Efficiency (kW/ton)** — Elevated kW/ton indicates fouling, low refrigerant, or poor tower performance.\n\nDocumented **Data sources**: `sourcetype=\"bms:chiller\"` (tons, kw, cop). **App/TA** (typical add-on context): BMS chiller plant. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cooling; **sourcetype**: bms:chiller. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cooling, sourcetype=\"bms:chiller\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **kw_per_ton** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where kw_per_ton > design_kw_per_ton * 1.15` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by chiller_id** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (kW/ton trend), Gauge (current vs design), Table (chillers degraded).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.12",
              "n": "Liquid Cooling Loop Pressure",
              "c": "critical",
              "f": "intermediate",
              "v": "Leak or pump failure shows first as pressure loss or delta-P across strainers.",
              "t": "CDU / in-rack liquid cooling",
              "d": "`sourcetype=\"liquid_cool:loop\"` (supply_kpa, return_kpa, flow_lpm)",
              "q": "index=cooling sourcetype=\"liquid_cool:loop\"\n| eval delta_p=supply_kpa-return_kpa\n| where supply_kpa < min_supply_kpa OR delta_p < min_delta_p OR flow_lpm < min_flow\n| table _time, rack_id, supply_kpa, return_kpa, flow_lpm, delta_p",
              "m": "Define min pressure and flow per vendor. Alert on any breach. Integrate leak-detection rope under manifolds.",
              "z": "Line chart (pressure and flow), Gauge (delta-P), Table (loops at risk).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CDU / in-rack liquid cooling.\n• Ensure the following data sources are available: `sourcetype=\"liquid_cool:loop\"` (supply_kpa, return_kpa, flow_lpm).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine min pressure and flow per vendor. Alert on any breach. Integrate leak-detection rope under manifolds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cooling sourcetype=\"liquid_cool:loop\"\n| eval delta_p=supply_kpa-return_kpa\n| where supply_kpa < min_supply_kpa OR delta_p < min_delta_p OR flow_lpm < min_flow\n| table _time, rack_id, supply_kpa, return_kpa, flow_lpm, delta_p\n```\n\nUnderstanding this SPL\n\n**Liquid Cooling Loop Pressure** — Leak or pump failure shows first as pressure loss or delta-P across strainers.\n\nDocumented **Data sources**: `sourcetype=\"liquid_cool:loop\"` (supply_kpa, return_kpa, flow_lpm). **App/TA** (typical add-on context): CDU / in-rack liquid cooling. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cooling; **sourcetype**: liquid_cool:loop. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cooling, sourcetype=\"liquid_cool:loop\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **delta_p** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where supply_kpa < min_supply_kpa OR delta_p < min_delta_p OR flow_lpm < min_flow` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Liquid Cooling Loop Pressure**): table _time, rack_id, supply_kpa, return_kpa, flow_lpm, delta_p\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (pressure and flow), Gauge (delta-P), Table (loops at risk).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.13",
              "n": "Air Handler Filter Differential Pressure",
              "c": "medium",
              "f": "beginner",
              "v": "High ΔP across filters increases fan energy and reduces airflow; indicates change-out due.",
              "t": "BMS AHU points",
              "d": "`sourcetype=\"bms:ahu\"` (filter_dp_pa, fan_speed_pct)",
              "q": "index=cooling sourcetype=\"bms:ahu\"\n| where filter_dp_pa > max_filter_dp_pa * 0.85\n| table ahu_id, filter_dp_pa, max_filter_dp_pa, fan_speed_pct",
              "m": "Set change-out threshold at ~85% of max rated ΔP. Correlate rising ΔP with fan speed increases.",
              "z": "Line chart (filter ΔP), Table (AHUs due for filter change), Gauge (ΔP % of limit).",
              "kfp": "HVAC alarms during scheduled maintenance, filter replacements, or outdoor temperature swings.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS AHU points.\n• Ensure the following data sources are available: `sourcetype=\"bms:ahu\"` (filter_dp_pa, fan_speed_pct).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSet change-out threshold at ~85% of max rated ΔP. Correlate rising ΔP with fan speed increases.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cooling sourcetype=\"bms:ahu\"\n| where filter_dp_pa > max_filter_dp_pa * 0.85\n| table ahu_id, filter_dp_pa, max_filter_dp_pa, fan_speed_pct\n```\n\nUnderstanding this SPL\n\n**Air Handler Filter Differential Pressure** — High ΔP across filters increases fan energy and reduces airflow; indicates change-out due.\n\nDocumented **Data sources**: `sourcetype=\"bms:ahu\"` (filter_dp_pa, fan_speed_pct). **App/TA** (typical add-on context): BMS AHU points. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cooling; **sourcetype**: bms:ahu. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cooling, sourcetype=\"bms:ahu\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where filter_dp_pa > max_filter_dp_pa * 0.85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Air Handler Filter Differential Pressure**): table ahu_id, filter_dp_pa, max_filter_dp_pa, fan_speed_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (filter ΔP), Table (AHUs due for filter change), Gauge (ΔP % of limit).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.14",
              "n": "Cooling Capacity vs IT Load",
              "c": "high",
              "f": "intermediate",
              "v": "Compares available tons to IT heat load (from power meters) to expose margin before next summer.",
              "t": "BMS + IT power feeds",
              "d": "`sourcetype=\"bms:cooling_capacity\"` `sourcetype=\"power:it_heat\"`",
              "q": "index=cooling sourcetype=\"bms:cooling_capacity\"\n| eval it_heat_tons=it_load_kw * 0.284345\n| eval margin_tons=cooling_capacity_tons - cooling_output_tons\n| where margin_tons < it_heat_tons * 0.15\n| table zone, cooling_capacity_tons, cooling_output_tons, it_heat_tons, margin_tons",
              "m": "Convert IT kW to tons (approx). Alert when margin <15% of peak load forecast. Feed capacity planning.",
              "z": "Area chart (load vs capacity), Gauge (margin tons), Table (zones tight).",
              "kfp": "Capacity alerts during planned hardware refreshes, scheduled rack moves, or new deployments.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS + IT power feeds.\n• Ensure the following data sources are available: `sourcetype=\"bms:cooling_capacity\"` `sourcetype=\"power:it_heat\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConvert IT kW to tons (approx). Alert when margin <15% of peak load forecast. Feed capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cooling sourcetype=\"bms:cooling_capacity\"\n| eval it_heat_tons=it_load_kw * 0.284345\n| eval margin_tons=cooling_capacity_tons - cooling_output_tons\n| where margin_tons < it_heat_tons * 0.15\n| table zone, cooling_capacity_tons, cooling_output_tons, it_heat_tons, margin_tons\n```\n\nUnderstanding this SPL\n\n**Cooling Capacity vs IT Load** — Compares available tons to IT heat load (from power meters) to expose margin before next summer.\n\nDocumented **Data sources**: `sourcetype=\"bms:cooling_capacity\"` `sourcetype=\"power:it_heat\"`. **App/TA** (typical add-on context): BMS + IT power feeds. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cooling; **sourcetype**: bms:cooling_capacity. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cooling, sourcetype=\"bms:cooling_capacity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **it_heat_tons** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **margin_tons** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where margin_tons < it_heat_tons * 0.15` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cooling Capacity vs IT Load**): table zone, cooling_capacity_tons, cooling_output_tons, it_heat_tons, margin_tons\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (load vs capacity), Gauge (margin tons), Table (zones tight).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.15",
              "n": "Economizer Mode Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "Low economizer hours may indicate stuck dampers or bad enthalpy programming—wasting energy.",
              "t": "BMS",
              "d": "`sourcetype=\"bms:economizer\"` (mode, oa_temp_f, enthalpy_ok)",
              "q": "index=cooling sourcetype=\"bms:economizer\"\n| timechart span=1d sum(eval(if(mode=\"economizer\",1,0))) as econ_hours by ahu_id\n| where econ_hours < 4",
              "m": "Compare economizer hours to weather bin data. Investigate AHU with low free-cooling in mild weather.",
              "z": "Bar chart (econ hours by month), Line chart (OA temp vs mode), Table (AHUs underutilizing economizer).",
              "kfp": "HVAC alarms during scheduled maintenance, filter replacements, or outdoor temperature swings.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS.\n• Ensure the following data sources are available: `sourcetype=\"bms:economizer\"` (mode, oa_temp_f, enthalpy_ok).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare economizer hours to weather bin data. Investigate AHU with low free-cooling in mild weather.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cooling sourcetype=\"bms:economizer\"\n| timechart span=1d sum(eval(if(mode=\"economizer\",1,0))) as econ_hours by ahu_id\n| where econ_hours < 4\n```\n\nUnderstanding this SPL\n\n**Economizer Mode Utilization** — Low economizer hours may indicate stuck dampers or bad enthalpy programming—wasting energy.\n\nDocumented **Data sources**: `sourcetype=\"bms:economizer\"` (mode, oa_temp_f, enthalpy_ok). **App/TA** (typical add-on context): BMS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cooling; **sourcetype**: bms:economizer. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cooling, sourcetype=\"bms:economizer\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by ahu_id** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where econ_hours < 4` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (econ hours by month), Line chart (OA temp vs mode), Table (AHUs underutilizing economizer).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.16",
              "n": "Condensation Risk Alerts",
              "c": "critical",
              "f": "intermediate",
              "v": "Surface temp below dew point risks water on equipment and slip hazards near chilled doors.",
              "t": "RH + surface temp sensors",
              "d": "`sourcetype=\"sensor:condensation\"` (surface_temp_c, dew_point_c)",
              "q": "index=environment sourcetype=\"sensor:condensation\"\n| where surface_temp_c <= dew_point_c + 1\n| table _time, location, surface_temp_c, dew_point_c, rh_pct",
              "m": "Compute dew point from RH and air temp or ingest BMS dew point. Alert when within 1°C of condensation.",
              "z": "Table (at-risk locations), Line chart (surface temp vs dew), Single value (active condensation risk).",
              "kfp": "Leak alerts from condensation, scheduled maintenance moisture, or false-trigger sensors.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: RH + surface temp sensors.\n• Ensure the following data sources are available: `sourcetype=\"sensor:condensation\"` (surface_temp_c, dew_point_c).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompute dew point from RH and air temp or ingest BMS dew point. Alert when within 1°C of condensation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=environment sourcetype=\"sensor:condensation\"\n| where surface_temp_c <= dew_point_c + 1\n| table _time, location, surface_temp_c, dew_point_c, rh_pct\n```\n\nUnderstanding this SPL\n\n**Condensation Risk Alerts** — Surface temp below dew point risks water on equipment and slip hazards near chilled doors.\n\nDocumented **Data sources**: `sourcetype=\"sensor:condensation\"` (surface_temp_c, dew_point_c). **App/TA** (typical add-on context): RH + surface temp sensors. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: environment; **sourcetype**: sensor:condensation. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=environment, sourcetype=\"sensor:condensation\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where surface_temp_c <= dew_point_c + 1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Condensation Risk Alerts**): table _time, location, surface_temp_c, dew_point_c, rh_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (at-risk locations), Line chart (surface temp vs dew), Single value (active condensation risk).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.17",
              "n": "Cooling Redundancy Status",
              "c": "critical",
              "f": "intermediate",
              "v": "N+1 cooling requires knowing when one CRAC is down and remaining capacity still covers peak IT load.",
              "t": "BMS, cooling plant model",
              "d": "`sourcetype=\"bms:cooling_redundancy\"` (online_tons, required_tons, units_down)",
              "q": "index=cooling sourcetype=\"bms:cooling_redundancy\"\n| where online_tons < required_tons * 1.1 OR units_down > 0\n| table _time, zone, online_tons, required_tons, units_down, unit_list",
              "m": "Set `required_tons` from peak IT load × safety factor. Alert when redundancy lost. Block new installs if margin negative.",
              "z": "Gauge (redundancy margin), Table (zones without N+1), Status grid (unit × online).",
              "kfp": "Power spikes during equipment power cycling, scheduled load shifting, or DR drills.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS, cooling plant model.\n• Ensure the following data sources are available: `sourcetype=\"bms:cooling_redundancy\"` (online_tons, required_tons, units_down).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSet `required_tons` from peak IT load × safety factor. Alert when redundancy lost. Block new installs if margin negative.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cooling sourcetype=\"bms:cooling_redundancy\"\n| where online_tons < required_tons * 1.1 OR units_down > 0\n| table _time, zone, online_tons, required_tons, units_down, unit_list\n```\n\nUnderstanding this SPL\n\n**Cooling Redundancy Status** — N+1 cooling requires knowing when one CRAC is down and remaining capacity still covers peak IT load.\n\nDocumented **Data sources**: `sourcetype=\"bms:cooling_redundancy\"` (online_tons, required_tons, units_down). **App/TA** (typical add-on context): BMS, cooling plant model. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cooling; **sourcetype**: bms:cooling_redundancy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cooling, sourcetype=\"bms:cooling_redundancy\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where online_tons < required_tons * 1.1 OR units_down > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cooling Redundancy Status**): table _time, zone, online_tons, required_tons, units_down, unit_list\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (redundancy margin), Table (zones without N+1), Status grid (unit × online).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.18",
              "n": "Data Center Humidity & Condensation Risk",
              "c": "critical",
              "f": "intermediate",
              "v": "Prevents equipment failure by detecting dew point conditions before condensation forms on servers and network infrastructure.",
              "t": "Splunk Edge Hub (humidity + temperature sensors), Splunk OT Intelligence",
              "d": "`index=edge-hub-data` tag=humidity tag=temperature, `sourcetype=edge_hub`",
              "q": "| mstats avg(Humidity), avg(Temperature) as temp by host\n| eval dew_point=(243.04*(ln(Humidity/100)+((17.625*temp)/(243.04+temp))))/(17.625-ln(Humidity/100)-((17.625*temp)/(243.04+temp)))\n| eval condensation_risk=case(temp<=dew_point, \"CRITICAL\", temp-dew_point<2, \"HIGH\", 1=1, \"NORMAL\")\n| where condensation_risk!=\"NORMAL\"",
              "m": "Deploy Edge Hub in raised floor or ceiling-mounted configuration with humidity sensor exposed to air circulation. Configure Advanced Settings → Sensor Polling interval to 30 seconds for real-time dew point calculation. Use local SQLite backlog to ensure no readings are lost during Splunk connectivity outages.",
              "z": "Gauge (dew point vs actual temp), time-series overlay, condensation risk heatmap.",
              "kfp": "Leak alerts from condensation, scheduled maintenance moisture, or false-trigger sensors.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (humidity + temperature sensors), Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=edge-hub-data` tag=humidity tag=temperature, `sourcetype=edge_hub`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Edge Hub in raised floor or ceiling-mounted configuration with humidity sensor exposed to air circulation. Configure Advanced Settings → Sensor Polling interval to 30 seconds for real-time dew point calculation. Use local SQLite backlog to ensure no readings are lost during Splunk connectivity outages.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| mstats avg(Humidity), avg(Temperature) as temp by host\n| eval dew_point=(243.04*(ln(Humidity/100)+((17.625*temp)/(243.04+temp))))/(17.625-ln(Humidity/100)-((17.625*temp)/(243.04+temp)))\n| eval condensation_risk=case(temp<=dew_point, \"CRITICAL\", temp-dew_point<2, \"HIGH\", 1=1, \"NORMAL\")\n| where condensation_risk!=\"NORMAL\"\n```\n\nUnderstanding this SPL\n\n**Data Center Humidity & Condensation Risk** — Prevents equipment failure by detecting dew point conditions before condensation forms on servers and network infrastructure.\n\nDocumented **Data sources**: `index=edge-hub-data` tag=humidity tag=temperature, `sourcetype=edge_hub`. **App/TA** (typical add-on context): Splunk Edge Hub (humidity + temperature sensors), Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `mstats` to query metrics indexes (pre-aggregated metric data).\n• `eval` defines or adjusts **dew_point** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **condensation_risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where condensation_risk!=\"NORMAL\"` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (dew point vs actual temp), time-series overlay, condensation risk heatmap.",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.2.19",
              "n": "Water Leak Sensor Zone Correlation",
              "c": "critical",
              "f": "intermediate",
              "v": "Correlates multiple rope sensors to localize leak source and trigger EPO workflows under raised floor.",
              "t": "Leak detection panel → Splunk",
              "d": "`sourcetype=\"leak_detection\"` (zone, sensor_id, conductivity)",
              "q": "index=environment sourcetype=\"leak_detection\"\n| where leak_detected=1\n| stats count by zone, crac_id\n| sort -count",
              "m": "On any leak, show adjacent zones and nearest CRAC/isolation valve. Integrate with facilities runbook.",
              "z": "Floor plan (zones), Table (active leaks), Single value (leak count).",
              "kfp": "Leak alerts from condensation, scheduled maintenance moisture, or false-trigger sensors.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Leak detection panel → Splunk.\n• Ensure the following data sources are available: `sourcetype=\"leak_detection\"` (zone, sensor_id, conductivity).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nOn any leak, show adjacent zones and nearest CRAC/isolation valve. Integrate with facilities runbook.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=environment sourcetype=\"leak_detection\"\n| where leak_detected=1\n| stats count by zone, crac_id\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Water Leak Sensor Zone Correlation** — Correlates multiple rope sensors to localize leak source and trigger EPO workflows under raised floor.\n\nDocumented **Data sources**: `sourcetype=\"leak_detection\"` (zone, sensor_id, conductivity). **App/TA** (typical add-on context): Leak detection panel → Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: environment; **sourcetype**: leak_detection. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=environment, sourcetype=\"leak_detection\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where leak_detected=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by zone, crac_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Floor plan (zones), Table (active leaks), Single value (leak count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.6,
          "qd": {
            "gold": 0,
            "silver": 2,
            "bronze": 17,
            "none": 0
          }
        },
        {
          "i": "15.3",
          "n": "Physical Security",
          "u": [
            {
              "i": "15.3.1",
              "n": "Badge Access Audit",
              "c": "high",
              "f": "beginner",
              "v": "Complete badge access audit trail is required for compliance (SOC2, PCI-DSS) and supports security investigations.",
              "t": "Access control syslog/API",
              "d": "Access control system events",
              "q": "index=physical sourcetype=\"access_control\"\n| table _time, badge_holder, badge_id, door, action, result\n| sort -_time",
              "m": "Forward access control events to Splunk. Parse all badge events (granted, denied, door held, forced). Retain per compliance requirements. Enable search by person, door, or time for investigations.",
              "z": "Table (access log), Bar chart (access by door), Timeline (access events for person), Single value (total access today).",
              "kfp": "Denied access for expired badges, role changes pending update, or guest badges with limited zones.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Access control syslog/API.\n• Ensure the following data sources are available: Access control system events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward access control events to Splunk. Parse all badge events (granted, denied, door held, forced). Retain per compliance requirements. Enable search by person, door, or time for investigations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"access_control\"\n| table _time, badge_holder, badge_id, door, action, result\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Badge Access Audit** — Complete badge access audit trail is required for compliance (SOC2, PCI-DSS) and supports security investigations.\n\nDocumented **Data sources**: Access control system events. **App/TA** (typical add-on context): Access control syslog/API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: access_control. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"access_control\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Badge Access Audit**): table _time, badge_holder, badge_id, door, action, result\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (access log), Bar chart (access by door), Timeline (access events for person), Single value (total access today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-15.3.1: Badge Access Audit.",
                  "ea": "Saved search 'UC-15.3.1' running on Access control system events, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.api.org/products-and-services/standards/important-standards-announcements/standard-1164"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-7 R1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NERC CIP CIP-005-7 R1 (Electronic security perimeter) is enforced — Splunk UC-15.3.1: Badge Access Audit.",
                  "ea": "Saved search 'UC-15.3.1' running on Access control system events, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                },
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-15.3.1: Badge Access Audit.",
                  "ea": "Saved search 'UC-15.3.1' running on Access control system events, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.tsa.gov/sd02c"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.2",
              "n": "After-Hours Access Alerts",
              "c": "high",
              "f": "intermediate",
              "v": "Data center access outside business hours requires additional scrutiny. Alerts ensure authorized personnel are verified.",
              "t": "Access control system",
              "d": "Access events with time-based rules",
              "q": "index=physical sourcetype=\"access_control\" result=\"granted\"\n| eval hour=strftime(_time,\"%H\")\n| where (hour < 6 OR hour > 22) AND NOT match(badge_holder, \"NOC|Security|Facilities\")\n| table _time, badge_holder, door, badge_id",
              "m": "Define business hours per facility. Alert on access outside hours (excluding authorized roles like NOC, security). Require pre-authorization for after-hours access. Track after-hours access patterns.",
              "z": "Table (after-hours access events), Bar chart (after-hours by person), Heatmap (time × access volume).",
              "kfp": "Denied access for expired badges, role changes pending update, or guest badges with limited zones.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Access control system.\n• Ensure the following data sources are available: Access events with time-based rules.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine business hours per facility. Alert on access outside hours (excluding authorized roles like NOC, security). Require pre-authorization for after-hours access. Track after-hours access patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"access_control\" result=\"granted\"\n| eval hour=strftime(_time,\"%H\")\n| where (hour < 6 OR hour > 22) AND NOT match(badge_holder, \"NOC|Security|Facilities\")\n| table _time, badge_holder, door, badge_id\n```\n\nUnderstanding this SPL\n\n**After-Hours Access Alerts** — Data center access outside business hours requires additional scrutiny. Alerts ensure authorized personnel are verified.\n\nDocumented **Data sources**: Access events with time-based rules. **App/TA** (typical add-on context): Access control system. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: access_control. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"access_control\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (hour < 6 OR hour > 22) AND NOT match(badge_holder, \"NOC|Security|Facilities\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **After-Hours Access Alerts**): table _time, badge_holder, door, badge_id\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (after-hours access events), Bar chart (after-hours by person), Heatmap (time × access volume).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.4",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of SWIFT CSP 6.4 (Logging and monitoring) — Splunk UC-15.3.2: After-Hours Access Alerts.",
                  "ea": "Saved search 'UC-15.3.2' running on Access events with time-based rules, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.swift.com/myswift/customer-security-programme-csp/security-controls"
                },
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of TSA SD III.A (Cybersecurity plan) — Splunk UC-15.3.2: After-Hours Access Alerts.",
                  "ea": "Saved search 'UC-15.3.2' running on Access events with time-based rules, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.tsa.gov/sd02c"
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.3",
              "n": "Tailgating Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Tailgating bypasses access control, allowing unauthorized entry. Detection supports physical security integrity.",
              "t": "Access control system",
              "d": "Access events (badge-in vs badge-out patterns)",
              "q": "index=physical sourcetype=\"access_control\" door=\"DC_Main_Entry\"\n| transaction badge_id maxspan=10s\n| where eventcount > 1 AND action=\"entry\"\n| table _time, badge_holder, badge_id, eventcount",
              "m": "Analyze badge-in/badge-out patterns. Detect multiple entries without corresponding exits (or vice versa). Alert on anti-passback violations. Correlate with camera footage for investigation. Report on tailgating trends.",
              "z": "Table (tailgating events), Bar chart (by door), Line chart (tailgating trend), Single value (incidents this week).",
              "kfp": "Denied access for expired badges, role changes pending update, or guest badges with limited zones.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Access control system.\n• Ensure the following data sources are available: Access events (badge-in vs badge-out patterns).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAnalyze badge-in/badge-out patterns. Detect multiple entries without corresponding exits (or vice versa). Alert on anti-passback violations. Correlate with camera footage for investigation. Report on tailgating trends.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"access_control\" door=\"DC_Main_Entry\"\n| transaction badge_id maxspan=10s\n| where eventcount > 1 AND action=\"entry\"\n| table _time, badge_holder, badge_id, eventcount\n```\n\nUnderstanding this SPL\n\n**Tailgating Detection** — Tailgating bypasses access control, allowing unauthorized entry. Detection supports physical security integrity.\n\nDocumented **Data sources**: Access events (badge-in vs badge-out patterns). **App/TA** (typical add-on context): Access control system. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: access_control. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"access_control\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• Filters the current rows with `where eventcount > 1 AND action=\"entry\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Tailgating Detection**): table _time, badge_holder, badge_id, eventcount\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (tailgating events), Bar chart (by door), Line chart (tailgating trend), Single value (incidents this week).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.4",
              "n": "Camera System Health",
              "c": "high",
              "f": "beginner",
              "v": "Offline cameras create security blind spots. Monitoring ensures continuous surveillance coverage.",
              "t": "NVR/VMS syslog or API",
              "d": "Video management system logs (camera status, recording status)",
              "q": "index=physical sourcetype=\"vms:camera_status\"\n| where recording_status!=\"recording\" OR connection_status!=\"connected\"\n| table camera_id, location, connection_status, recording_status, last_frame",
              "m": "Poll camera/NVR status via API or forward VMS events. Alert on camera offline, recording failure, or storage issues. Track camera uptime percentage. Report on coverage gaps.",
              "z": "Status grid (camera × status), Table (offline cameras), Single value (cameras recording %), Floor plan (camera locations with status).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NVR/VMS syslog or API.\n• Ensure the following data sources are available: Video management system logs (camera status, recording status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll camera/NVR status via API or forward VMS events. Alert on camera offline, recording failure, or storage issues. Track camera uptime percentage. Report on coverage gaps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"vms:camera_status\"\n| where recording_status!=\"recording\" OR connection_status!=\"connected\"\n| table camera_id, location, connection_status, recording_status, last_frame\n```\n\nUnderstanding this SPL\n\n**Camera System Health** — Offline cameras create security blind spots. Monitoring ensures continuous surveillance coverage.\n\nDocumented **Data sources**: Video management system logs (camera status, recording status). **App/TA** (typical add-on context): NVR/VMS syslog or API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: vms:camera_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"vms:camera_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where recording_status!=\"recording\" OR connection_status!=\"connected\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Camera System Health**): table camera_id, location, connection_status, recording_status, last_frame\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (camera × status), Table (offline cameras), Single value (cameras recording %), Floor plan (camera locations with status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cctv",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.5",
              "n": "Cabinet Door Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "Unauthorized cabinet access could indicate tampering. Door sensors provide granular physical security for critical racks.",
              "t": "Cabinet lock sensor input",
              "d": "Smart cabinet lock events",
              "q": "index=physical sourcetype=\"cabinet_lock\"\n| where action=\"opened\" AND NOT authorized=\"true\"\n| table _time, rack_id, user, action, method",
              "m": "Deploy smart cabinet locks with event logging. Forward events to Splunk. Alert on unauthorized openings. Track door open duration. Correlate with badge access events for validation. Report on cabinet access frequency.",
              "z": "Table (cabinet access events), Timeline (open/close events), Bar chart (access by rack).",
              "kfp": "Denied access for expired badges, role changes pending update, or guest badges with limited zones.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cabinet lock sensor input.\n• Ensure the following data sources are available: Smart cabinet lock events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy smart cabinet locks with event logging. Forward events to Splunk. Alert on unauthorized openings. Track door open duration. Correlate with badge access events for validation. Report on cabinet access frequency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"cabinet_lock\"\n| where action=\"opened\" AND NOT authorized=\"true\"\n| table _time, rack_id, user, action, method\n```\n\nUnderstanding this SPL\n\n**Cabinet Door Monitoring** — Unauthorized cabinet access could indicate tampering. Door sensors provide granular physical security for critical racks.\n\nDocumented **Data sources**: Smart cabinet lock events. **App/TA** (typical add-on context): Cabinet lock sensor input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: cabinet_lock. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"cabinet_lock\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"opened\" AND NOT authorized=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cabinet Door Monitoring**): table _time, rack_id, user, action, method\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cabinet access events), Timeline (open/close events), Bar chart (access by rack).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.7",
              "n": "Fire Suppression and Detection System Alarms",
              "c": "critical",
              "f": "beginner",
              "v": "Fire and gas detection events require immediate response. Centralized alarm monitoring ensures rapid escalation and audit trail.",
              "t": "Fire alarm panel integration, BMS",
              "d": "Fire detection, suppression system status, alarm events",
              "q": "index=physical sourcetype=\"fire:alarm\"\n| search (status=\"alarm\" OR status=\"trouble\" OR type=\"suppression\")\n| table _time, zone, type, status, description\n| sort -_time",
              "m": "Integrate fire panel or BMS for alarm events. Alert immediately on any alarm or suppression activation. Log all events for compliance. Report on alarm history and trouble events.",
              "z": "Table (alarms), Timeline (events), Single value (active alarms).",
              "kfp": "HVAC alarms during scheduled maintenance, filter replacements, or outdoor temperature swings.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Fire alarm panel integration, BMS.\n• Ensure the following data sources are available: Fire detection, suppression system status, alarm events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate fire panel or BMS for alarm events. Alert immediately on any alarm or suppression activation. Log all events for compliance. Report on alarm history and trouble events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"fire:alarm\"\n| search (status=\"alarm\" OR status=\"trouble\" OR type=\"suppression\")\n| table _time, zone, type, status, description\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Fire Suppression and Detection System Alarms** — Fire and gas detection events require immediate response. Centralized alarm monitoring ensures rapid escalation and audit trail.\n\nDocumented **Data sources**: Fire detection, suppression system status, alarm events. **App/TA** (typical add-on context): Fire alarm panel integration, BMS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: fire:alarm. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"fire:alarm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Fire Suppression and Detection System Alarms**): table _time, zone, type, status, description\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (alarms), Timeline (events), Single value (active alarms).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.8",
              "n": "Raised Floor and Cable Management Events",
              "c": "medium",
              "f": "beginner",
              "v": "Floor tile removal or cable strain events can indicate unauthorized work or trip hazards. Monitoring supports physical security and change audit.",
              "t": "Floor/cable sensors, DCIM",
              "d": "Tile position, cable tension or movement sensors",
              "q": "index=physical sourcetype=\"floor:sensor\"\n| search (tile_removed=\"true\" OR cable_strain > 80)\n| table _time, location, tile_id, cable_strain, operator\n| sort -_time",
              "m": "Deploy sensors for critical floor tiles and cable runs. Forward events to Splunk. Alert on tile removal or strain above threshold. Correlate with change tickets. Report on access and strain history.",
              "z": "Table (events), Timeline (tile/cable events), Floor plan (locations).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Floor/cable sensors, DCIM.\n• Ensure the following data sources are available: Tile position, cable tension or movement sensors.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy sensors for critical floor tiles and cable runs. Forward events to Splunk. Alert on tile removal or strain above threshold. Correlate with change tickets. Report on access and strain history.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"floor:sensor\"\n| search (tile_removed=\"true\" OR cable_strain > 80)\n| table _time, location, tile_id, cable_strain, operator\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Raised Floor and Cable Management Events** — Floor tile removal or cable strain events can indicate unauthorized work or trip hazards. Monitoring supports physical security and change audit.\n\nDocumented **Data sources**: Tile position, cable tension or movement sensors. **App/TA** (typical add-on context): Floor/cable sensors, DCIM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: floor:sensor. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"floor:sensor\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Raised Floor and Cable Management Events**): table _time, location, tile_id, cable_strain, operator\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (events), Timeline (tile/cable events), Floor plan (locations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.10",
              "n": "Data Center Capacity Headroom by Zone",
              "c": "high",
              "f": "intermediate",
              "v": "Tracking power, cooling, and space headroom by zone supports capacity planning and prevents over-provisioning in hot spots.",
              "t": "DCIM, PDU/CRAC metrics",
              "d": "Power capacity vs used, cooling capacity vs load, rack U available",
              "q": "index=dcim sourcetype=\"capacity:zone\"\n| eval power_headroom_pct=((capacity_kw - used_kw)/capacity_kw)*100\n| eval cooling_headroom_pct=((capacity_tons - load_tons)/capacity_tons)*100\n| where power_headroom_pct < 20 OR cooling_headroom_pct < 20\n| table zone, power_headroom_pct, cooling_headroom_pct, u_available",
              "m": "Aggregate capacity and usage by zone from DCIM or meters. Alert when headroom drops below 20%. Report on trend and zones with least headroom. Use for placement and expansion planning.",
              "z": "Table (zones with low headroom), Bar chart (headroom by zone), Heatmap (zone capacity).",
              "kfp": "Capacity alerts during planned hardware refreshes, scheduled rack moves, or new deployments.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DCIM, PDU/CRAC metrics.\n• Ensure the following data sources are available: Power capacity vs used, cooling capacity vs load, rack U available.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate capacity and usage by zone from DCIM or meters. Alert when headroom drops below 20%. Report on trend and zones with least headroom. Use for placement and expansion planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dcim sourcetype=\"capacity:zone\"\n| eval power_headroom_pct=((capacity_kw - used_kw)/capacity_kw)*100\n| eval cooling_headroom_pct=((capacity_tons - load_tons)/capacity_tons)*100\n| where power_headroom_pct < 20 OR cooling_headroom_pct < 20\n| table zone, power_headroom_pct, cooling_headroom_pct, u_available\n```\n\nUnderstanding this SPL\n\n**Data Center Capacity Headroom by Zone** — Tracking power, cooling, and space headroom by zone supports capacity planning and prevents over-provisioning in hot spots.\n\nDocumented **Data sources**: Power capacity vs used, cooling capacity vs load, rack U available. **App/TA** (typical add-on context): DCIM, PDU/CRAC metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dcim; **sourcetype**: capacity:zone. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dcim, sourcetype=\"capacity:zone\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **power_headroom_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cooling_headroom_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where power_headroom_pct < 20 OR cooling_headroom_pct < 20` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Data Center Capacity Headroom by Zone**): table zone, power_headroom_pct, cooling_headroom_pct, u_available\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (zones with low headroom), Bar chart (headroom by zone), Heatmap (zone capacity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "snmp_pdu"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.11",
              "n": "CCTV / IP Camera Health Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Camera online/offline, storage utilization, recording status ensures continuous surveillance coverage and prevents blind spots during security incidents.",
              "t": "Custom (NVR API, ONVIF, Hikvision ISAPI)",
              "d": "NVR API (camera status, storage), ONVIF device management",
              "q": "index=physical sourcetype=\"nvr:camera_status\"\n| where connection_status!=\"online\" OR recording_status!=\"recording\" OR storage_util_pct > 90\n| table camera_id, location, connection_status, recording_status, storage_util_pct, last_frame_time\n| sort connection_status",
              "m": "Poll NVR API (Hikvision ISAPI, Milestone, Genetec) or ONVIF for camera status. Ingest connection_status, recording_status, storage_util_pct. Poll every 5–15 minutes. Alert on camera offline, recording stopped, or storage >90%. Track camera uptime percentage. Report on coverage gaps by zone.",
              "z": "Status grid (camera × status), Table (offline or degraded cameras), Gauge (storage utilization), Floor plan (camera locations with status).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (NVR API, ONVIF, Hikvision ISAPI).\n• Ensure the following data sources are available: NVR API (camera status, storage), ONVIF device management.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll NVR API (Hikvision ISAPI, Milestone, Genetec) or ONVIF for camera status. Ingest connection_status, recording_status, storage_util_pct. Poll every 5–15 minutes. Alert on camera offline, recording stopped, or storage >90%. Track camera uptime percentage. Report on coverage gaps by zone.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"nvr:camera_status\"\n| where connection_status!=\"online\" OR recording_status!=\"recording\" OR storage_util_pct > 90\n| table camera_id, location, connection_status, recording_status, storage_util_pct, last_frame_time\n| sort connection_status\n```\n\nUnderstanding this SPL\n\n**CCTV / IP Camera Health Monitoring** — Camera online/offline, storage utilization, recording status ensures continuous surveillance coverage and prevents blind spots during security incidents.\n\nDocumented **Data sources**: NVR API (camera status, storage), ONVIF device management. **App/TA** (typical add-on context): Custom (NVR API, ONVIF, Hikvision ISAPI). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: nvr:camera_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"nvr:camera_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where connection_status!=\"online\" OR recording_status!=\"recording\" OR storage_util_pct > 90` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CCTV / IP Camera Health Monitoring**): table camera_id, location, connection_status, recording_status, storage_util_pct, last_frame_time\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (camera × status), Table (offline or degraded cameras), Gauge (storage utilization), Floor plan (camera locations with status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Availability",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cctv"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.12",
              "n": "Fire Suppression System Status",
              "c": "critical",
              "f": "intermediate",
              "v": "Pre-action system armed/disarmed, agent levels (FM-200/Novec) ensure fire suppression readiness; disarmed or low-agent systems leave the data center unprotected.",
              "t": "Custom (BMS integration, SNMP)",
              "d": "Fire suppression panel telemetry (system_armed, agent_level_pct, alarm_active)",
              "q": "index=physical sourcetype=\"fire:suppression_status\"\n| where system_armed!=\"armed\" OR agent_level_pct < 95 OR alarm_active=\"true\"\n| table _time, zone, system_armed, agent_level_pct, alarm_active, last_inspection_date\n| sort -_time",
              "m": "Integrate fire suppression panel via BMS, SNMP, or vendor API. Ingest system_armed (armed/disarmed), agent_level_pct, alarm_active. Poll every 15–30 minutes. Alert immediately on disarmed state, low agent (<95%), or active alarm. Track inspection dates. Report on system readiness for compliance.",
              "z": "Status grid (zone × armed/agent status), Table (zones needing attention), Gauge (agent level %), Single value (systems disarmed).",
              "kfp": "HVAC alarms during scheduled maintenance, filter replacements, or outdoor temperature swings.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (BMS integration, SNMP).\n• Ensure the following data sources are available: Fire suppression panel telemetry (system_armed, agent_level_pct, alarm_active).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate fire suppression panel via BMS, SNMP, or vendor API. Ingest system_armed (armed/disarmed), agent_level_pct, alarm_active. Poll every 15–30 minutes. Alert immediately on disarmed state, low agent (<95%), or active alarm. Track inspection dates. Report on system readiness for compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"fire:suppression_status\"\n| where system_armed!=\"armed\" OR agent_level_pct < 95 OR alarm_active=\"true\"\n| table _time, zone, system_armed, agent_level_pct, alarm_active, last_inspection_date\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Fire Suppression System Status** — Pre-action system armed/disarmed, agent levels (FM-200/Novec) ensure fire suppression readiness; disarmed or low-agent systems leave the data center unprotected.\n\nDocumented **Data sources**: Fire suppression panel telemetry (system_armed, agent_level_pct, alarm_active). **App/TA** (typical add-on context): Custom (BMS integration, SNMP). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: fire:suppression_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"fire:suppression_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where system_armed!=\"armed\" OR agent_level_pct < 95 OR alarm_active=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Fire Suppression System Status**): table _time, zone, system_armed, agent_level_pct, alarm_active, last_inspection_date\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (zone × armed/agent status), Table (zones needing attention), Gauge (agent level %), Single value (systems disarmed).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Safety",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.13",
              "n": "Environmental Sensor Battery Status",
              "c": "medium",
              "f": "beginner",
              "v": "Wireless environmental sensor battery health; dead batteries create monitoring gaps and risk undetected temperature or humidity exceedances.",
              "t": "SNMP modular input, custom (sensor API)",
              "d": "Sensor management interface (battery_level_pct, last_report_time)",
              "q": "index=environment sourcetype=\"sensor:battery_status\"\n| eval hours_since_report=round((now()-last_report_time)/3600,1)\n| where battery_level_pct < 20 OR hours_since_report > 24\n| table sensor_id, zone, sensor_type, battery_level_pct, last_report_time, hours_since_report\n| sort battery_level_pct",
              "m": "Poll sensor management interface (SNMP, API) for battery_level_pct and last_report_time. Poll daily or every 6 hours. Alert on battery <20% or no report in 24+ hours (possible dead battery). Maintain sensor inventory with battery replacement schedule. Report on sensors due for battery change.",
              "z": "Table (low battery sensors), Gauge (lowest battery %), Bar chart (sensors by battery level), Single value (sensors needing battery replacement).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input, custom (sensor API).\n• Ensure the following data sources are available: Sensor management interface (battery_level_pct, last_report_time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll sensor management interface (SNMP, API) for battery_level_pct and last_report_time. Poll daily or every 6 hours. Alert on battery <20% or no report in 24+ hours (possible dead battery). Maintain sensor inventory with battery replacement schedule. Report on sensors due for battery change.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=environment sourcetype=\"sensor:battery_status\"\n| eval hours_since_report=round((now()-last_report_time)/3600,1)\n| where battery_level_pct < 20 OR hours_since_report > 24\n| table sensor_id, zone, sensor_type, battery_level_pct, last_report_time, hours_since_report\n| sort battery_level_pct\n```\n\nUnderstanding this SPL\n\n**Environmental Sensor Battery Status** — Wireless environmental sensor battery health; dead batteries create monitoring gaps and risk undetected temperature or humidity exceedances.\n\nDocumented **Data sources**: Sensor management interface (battery_level_pct, last_report_time). **App/TA** (typical add-on context): SNMP modular input, custom (sensor API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: environment; **sourcetype**: sensor:battery_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=environment, sourcetype=\"sensor:battery_status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hours_since_report** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where battery_level_pct < 20 OR hours_since_report > 24` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Environmental Sensor Battery Status**): table sensor_id, zone, sensor_type, battery_level_pct, last_report_time, hours_since_report\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (low battery sensors), Gauge (lowest battery %), Bar chart (sensors by battery level), Single value (sensors needing battery replacement).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.14",
              "n": "Badge Tailgating and Anti-Passback Violations",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks piggyback events where anti-passback rules fire or two entries occur without intermediate exit—stronger signal than generic door events alone.",
              "t": "Access control system",
              "d": "`sourcetype=\"access_control\"` (event_subtype=tailgate, apb_violation)",
              "q": "index=physical sourcetype=\"access_control\"\n| where match(event_subtype,\"(?i)tailgate|apb|anti.passback\")\n| stats count by door, badge_holder, reader_id\n| sort -count",
              "m": "Enable anti-passback on mantraps where supported. Correlate with video analytics if available. Escalate repeat doors.",
              "z": "Table (tailgate events), Bar chart (by door), Timeline.",
              "kfp": "Denied access for expired badges, role changes pending update, or guest badges with limited zones.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Access control system.\n• Ensure the following data sources are available: `sourcetype=\"access_control\"` (event_subtype=tailgate, apb_violation).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable anti-passback on mantraps where supported. Correlate with video analytics if available. Escalate repeat doors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"access_control\"\n| where match(event_subtype,\"(?i)tailgate|apb|anti.passback\")\n| stats count by door, badge_holder, reader_id\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Badge Tailgating and Anti-Passback Violations** — Tracks piggyback events where anti-passback rules fire or two entries occur without intermediate exit—stronger signal than generic door events alone.\n\nDocumented **Data sources**: `sourcetype=\"access_control\"` (event_subtype=tailgate, apb_violation). **App/TA** (typical add-on context): Access control system. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: access_control. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"access_control\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(event_subtype,\"(?i)tailgate|apb|anti.passback\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by door, badge_holder, reader_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (tailgate events), Bar chart (by door), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.15",
              "n": "After-Hours Access Without Active Work Order",
              "c": "high",
              "f": "intermediate",
              "v": "Correlates physical entry with ITSM change/work order to catch unauthorized after-hours presence.",
              "t": "Access control + ServiceNow CMDB",
              "d": "`sourcetype=\"access_control\"`, `sourcetype=\"servicenow:change\"`",
              "q": "index=physical sourcetype=\"access_control\" result=\"granted\"\n| eval hour=strftime(_time,\"%H\")\n| where (hour < 6 OR hour > 22)\n| join type=left max=1 badge_id [\n  search index=itsm sourcetype=\"servicenow:change\" state=\"Implement\"\n  | eval wo_open=if(now() > planned_start AND now() < planned_end,1,0)\n  | table badge_id, wo_open, change_number\n]\n| where wo_open=0 OR isnull(wo_open)\n| table _time, badge_holder, door, badge_id",
              "m": "Map badge IDs to enterprise IDs in CMDB. Tune join window for active changes. Alert on access with no matching WO.",
              "z": "Table (unapproved after-hours), Bar chart (by person).",
              "kfp": "Denied access for expired badges, role changes pending update, or guest badges with limited zones.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Access control + ServiceNow CMDB.\n• Ensure the following data sources are available: `sourcetype=\"access_control\"`, `sourcetype=\"servicenow:change\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap badge IDs to enterprise IDs in CMDB. Tune join window for active changes. Alert on access with no matching WO.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"access_control\" result=\"granted\"\n| eval hour=strftime(_time,\"%H\")\n| where (hour < 6 OR hour > 22)\n| join type=left max=1 badge_id [\n  search index=itsm sourcetype=\"servicenow:change\" state=\"Implement\"\n  | eval wo_open=if(now() > planned_start AND now() < planned_end,1,0)\n  | table badge_id, wo_open, change_number\n]\n| where wo_open=0 OR isnull(wo_open)\n| table _time, badge_holder, door, badge_id\n```\n\nUnderstanding this SPL\n\n**After-Hours Access Without Active Work Order** — Correlates physical entry with ITSM change/work order to catch unauthorized after-hours presence.\n\nDocumented **Data sources**: `sourcetype=\"access_control\"`, `sourcetype=\"servicenow:change\"`. **App/TA** (typical add-on context): Access control + ServiceNow CMDB. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: access_control. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"access_control\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (hour < 6 OR hour > 22)` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where wo_open=0 OR isnull(wo_open)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **After-Hours Access Without Active Work Order**): table _time, badge_holder, door, badge_id\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unapproved after-hours), Bar chart (by person).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.16",
              "n": "Camera Feed Loss and Recording Gap Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Detects loss of RTSP/HLS stream and gaps in NVR recording separately from generic camera “offline” pings.",
              "t": "VMS API, stream health probes",
              "d": "`sourcetype=\"vms:stream_health\"` (fps, keyframes, recording_continuity)",
              "q": "index=physical sourcetype=\"vms:stream_health\"\n| where fps=0 OR recording_gap_sec > 60 OR stream_state=\"lost\"\n| stats latest(recording_gap_sec) as gap by camera_id, site\n| sort -gap",
              "m": "Poll VMS for per-camera FPS and last recorded frame time. Alert on stream loss or gap >1 min for critical zones.",
              "z": "Table (cameras with feed loss), Timeline (gaps), Floor plan overlay.",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: VMS API, stream health probes.\n• Ensure the following data sources are available: `sourcetype=\"vms:stream_health\"` (fps, keyframes, recording_continuity).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll VMS for per-camera FPS and last recorded frame time. Alert on stream loss or gap >1 min for critical zones.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"vms:stream_health\"\n| where fps=0 OR recording_gap_sec > 60 OR stream_state=\"lost\"\n| stats latest(recording_gap_sec) as gap by camera_id, site\n| sort -gap\n```\n\nUnderstanding this SPL\n\n**Camera Feed Loss and Recording Gap Detection** — Detects loss of RTSP/HLS stream and gaps in NVR recording separately from generic camera “offline” pings.\n\nDocumented **Data sources**: `sourcetype=\"vms:stream_health\"` (fps, keyframes, recording_continuity). **App/TA** (typical add-on context): VMS API, stream health probes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: vms:stream_health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"vms:stream_health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where fps=0 OR recording_gap_sec > 60 OR stream_state=\"lost\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by camera_id, site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cameras with feed loss), Timeline (gaps), Floor plan overlay.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.17",
              "n": "Visitor Badge Expiry Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Expired visitor badges that remain active create audit findings and tailgating risk.",
              "t": "Visitor management system API",
              "d": "`sourcetype=\"vms:visitor\"` (visitor_id, badge_expiry, status)",
              "q": "index=physical sourcetype=\"vms:visitor\"\n| eval days_to_exp=round((strptime(badge_expiry,\"%Y-%m-%d\")-now())/86400,0)\n| where status=\"active\" AND (days_to_exp < 0 OR days_to_exp < 1)\n| table visitor_id, host_employee, badge_expiry, days_to_exp",
              "m": "Sync visitor system daily. Auto-disable badges on expiry in PACS where API allows. Alert on active badge past expiry.",
              "z": "Table (expired active badges), Single value (count), Timeline.",
              "kfp": "Denied access for expired badges, role changes pending update, or guest badges with limited zones.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Visitor management system API.\n• Ensure the following data sources are available: `sourcetype=\"vms:visitor\"` (visitor_id, badge_expiry, status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSync visitor system daily. Auto-disable badges on expiry in PACS where API allows. Alert on active badge past expiry.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"vms:visitor\"\n| eval days_to_exp=round((strptime(badge_expiry,\"%Y-%m-%d\")-now())/86400,0)\n| where status=\"active\" AND (days_to_exp < 0 OR days_to_exp < 1)\n| table visitor_id, host_employee, badge_expiry, days_to_exp\n```\n\nUnderstanding this SPL\n\n**Visitor Badge Expiry Tracking** — Expired visitor badges that remain active create audit findings and tailgating risk.\n\nDocumented **Data sources**: `sourcetype=\"vms:visitor\"` (visitor_id, badge_expiry, status). **App/TA** (typical add-on context): Visitor management system API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: vms:visitor. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"vms:visitor\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where status=\"active\" AND (days_to_exp < 0 OR days_to_exp < 1)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Visitor Badge Expiry Tracking**): table visitor_id, host_employee, badge_expiry, days_to_exp\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (expired active badges), Single value (count), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.18",
              "n": "Cabinet Intrusion and Forced Rack Door Events",
              "c": "high",
              "f": "beginner",
              "v": "Lateral contact/forced-open sensors on colo cabinets detect physical tampering faster than walk-through audits.",
              "t": "Smart cabinet PDU/door sensors",
              "d": "`sourcetype=\"cabinet:intrusion\"` (door_state, force_detect)",
              "q": "index=physical sourcetype=\"cabinet:intrusion\"\n| where force_detect=1 OR door_state=\"forced\" OR (door_state=\"open\" AND authorized=0)\n| table _time, rack_id, cabinet_id, door_state, user, ticket_id",
              "m": "Require ticket_id or approved user for opens in secure suites. Page on forced events immediately.",
              "z": "Timeline (intrusion events), Table (racks), Map of data hall.",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Smart cabinet PDU/door sensors.\n• Ensure the following data sources are available: `sourcetype=\"cabinet:intrusion\"` (door_state, force_detect).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequire ticket_id or approved user for opens in secure suites. Page on forced events immediately.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"cabinet:intrusion\"\n| where force_detect=1 OR door_state=\"forced\" OR (door_state=\"open\" AND authorized=0)\n| table _time, rack_id, cabinet_id, door_state, user, ticket_id\n```\n\nUnderstanding this SPL\n\n**Cabinet Intrusion and Forced Rack Door Events** — Lateral contact/forced-open sensors on colo cabinets detect physical tampering faster than walk-through audits.\n\nDocumented **Data sources**: `sourcetype=\"cabinet:intrusion\"` (door_state, force_detect). **App/TA** (typical add-on context): Smart cabinet PDU/door sensors. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: cabinet:intrusion. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"cabinet:intrusion\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where force_detect=1 OR door_state=\"forced\" OR (door_state=\"open\" AND authorized=0)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cabinet Intrusion and Forced Rack Door Events**): table _time, rack_id, cabinet_id, door_state, user, ticket_id\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (intrusion events), Table (racks), Map of data hall.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "snmp_pdu"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.20",
              "n": "Fire Suppression System Health and Supervisory Signals",
              "c": "critical",
              "f": "intermediate",
              "v": "Supervisory off-normal (low air, pre-action valve trouble) precedes full alarm; distinct from agent level alone.",
              "t": "Fire panel BMS integration",
              "d": "`sourcetype=\"fire:supervisory\"` (signal_type, zone, ack_state)",
              "q": "index=physical sourcetype=\"fire:supervisory\"\n| where signal_type IN (\"trouble\",\"supervisory\",\"preaction_air_low\") AND ack_state=\"unack\"\n| table _time, zone, signal_type, description\n| sort -_time",
              "m": "Map all supervisory points per NFPA 72. Page on any unacknowledged supervisory >5 min.",
              "z": "Timeline (supervisory events), Table (unacked), Status grid (zone × signal).",
              "kfp": "HVAC alarms during scheduled maintenance, filter replacements, or outdoor temperature swings.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Fire panel BMS integration.\n• Ensure the following data sources are available: `sourcetype=\"fire:supervisory\"` (signal_type, zone, ack_state).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap all supervisory points per NFPA 72. Page on any unacknowledged supervisory >5 min.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"fire:supervisory\"\n| where signal_type IN (\"trouble\",\"supervisory\",\"preaction_air_low\") AND ack_state=\"unack\"\n| table _time, zone, signal_type, description\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Fire Suppression System Health and Supervisory Signals** — Supervisory off-normal (low air, pre-action valve trouble) precedes full alarm; distinct from agent level alone.\n\nDocumented **Data sources**: `sourcetype=\"fire:supervisory\"` (signal_type, zone, ack_state). **App/TA** (typical add-on context): Fire panel BMS integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: fire:supervisory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"fire:supervisory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where signal_type IN (\"trouble\",\"supervisory\",\"preaction_air_low\") AND ack_state=\"unack\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Fire Suppression System Health and Supervisory Signals**): table _time, zone, signal_type, description\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (supervisory events), Table (unacked), Status grid (zone × signal).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Safety"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.21",
              "n": "Access Control Panel Tamper and Line Fault",
              "c": "critical",
              "f": "beginner",
              "v": "Enclosure tamper, RS-485 ground faults, or DC line faults may precede bypass or sabotage.",
              "t": "Access panel diagnostics",
              "d": "`sourcetype=\"access_control:panel\"` (tamper, line_fault, power_state)",
              "q": "index=physical sourcetype=\"access_control:panel\"\n| where tamper=1 OR line_fault=1 OR power_state!=\"normal\"\n| stats count by panel_id, fault_type\n| sort -count",
              "m": "Alert at critical on tamper. Dispatch security to site. Log for forensic chain of custody.",
              "z": "Table (panels in fault), Timeline (tamper), Single value (panels compromised).",
              "kfp": "Denied access for expired badges, role changes pending update, or guest badges with limited zones.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Access panel diagnostics.\n• Ensure the following data sources are available: `sourcetype=\"access_control:panel\"` (tamper, line_fault, power_state).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert at critical on tamper. Dispatch security to site. Log for forensic chain of custody.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"access_control:panel\"\n| where tamper=1 OR line_fault=1 OR power_state!=\"normal\"\n| stats count by panel_id, fault_type\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Access Control Panel Tamper and Line Fault** — Enclosure tamper, RS-485 ground faults, or DC line faults may precede bypass or sabotage.\n\nDocumented **Data sources**: `sourcetype=\"access_control:panel\"` (tamper, line_fault, power_state). **App/TA** (typical add-on context): Access panel diagnostics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: access_control:panel. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"access_control:panel\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where tamper=1 OR line_fault=1 OR power_state!=\"normal\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by panel_id, fault_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (panels in fault), Timeline (tamper), Single value (panels compromised).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.22",
              "n": "Camera Uptime and Availability Tracking (Meraki MV)",
              "c": "medium",
              "f": "intermediate",
              "v": "Monitors video surveillance system availability to ensure continuous monitoring coverage.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api device_type=MV sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MV\n| stats latest(status) as camera_status, latest(last_status_change) as status_change by camera_name, location\n| where camera_status=\"offline\"",
              "m": "Monitor MV camera status via device API. Alert on offline cameras.",
              "z": "Camera status map; offline camera list; availability percentage gauge.",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api device_type=MV sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor MV camera status via device API. Alert on offline cameras.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MV\n| stats latest(status) as camera_status, latest(last_status_change) as status_change by camera_name, location\n| where camera_status=\"offline\"\n```\n\nUnderstanding this SPL\n\n**Camera Uptime and Availability Tracking (Meraki MV)** — Monitors video surveillance system availability to ensure continuous monitoring coverage.\n\nDocumented **Data sources**: `sourcetype=meraki:api device_type=MV sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by camera_name, location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where camera_status=\"offline\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Camera status map; offline camera list; availability percentage gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.23",
              "n": "Video Retention and Cloud Archive Storage Utilization (Meraki MV)",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks cloud storage usage for video archives to manage costs and ensure retention SLA.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" storage_usage=*\n| stats sum(storage_usage) as total_storage_gb by camera_id, retention_days\n| eval storage_pct=round(total_storage_gb*100/1000, 2)\n| where storage_pct > 80",
              "m": "Query camera API for storage metrics. Alert on >80% utilization.",
              "z": "Storage utilization gauge; retention timeline; storage trend chart.",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery camera API for storage metrics. Alert on >80% utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" storage_usage=*\n| stats sum(storage_usage) as total_storage_gb by camera_id, retention_days\n| eval storage_pct=round(total_storage_gb*100/1000, 2)\n| where storage_pct > 80\n```\n\nUnderstanding this SPL\n\n**Video Retention and Cloud Archive Storage Utilization (Meraki MV)** — Tracks cloud storage usage for video archives to manage costs and ensure retention SLA.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by camera_id, retention_days** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **storage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where storage_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Storage utilization gauge; retention timeline; storage trend chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.24",
              "n": "Motion Detection Events and Alert Volume Analysis (Meraki MV)",
              "c": "low",
              "f": "intermediate",
              "v": "Analyzes motion detection event patterns to optimize camera sensitivity and reduce false alerts.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*motion*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*motion*\"\n| timechart count as motion_events by camera_name\n| eval daily_avg=round(motion_events/1440, 2)",
              "m": "Ingest motion detection events. Track volume and patterns.",
              "z": "Motion detection timeline; heat map by time of day; camera comparison chart.",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*motion*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest motion detection events. Track volume and patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event signature=\"*motion*\"\n| timechart count as motion_events by camera_name\n| eval daily_avg=round(motion_events/1440, 2)\n```\n\nUnderstanding this SPL\n\n**Motion Detection Events and Alert Volume Analysis (Meraki MV)** — Analyzes motion detection event patterns to optimize camera sensitivity and reduce false alerts.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*motion*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time with a separate series **by camera_name** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **daily_avg** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Motion detection timeline; heat map by time of day; camera comparison chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.25",
              "n": "Camera Video Quality Score and Stream Health (Meraki MV)",
              "c": "medium",
              "f": "intermediate",
              "v": "Monitors video quality metrics to identify network or hardware issues affecting video feeds.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" quality_score=*\n| stats avg(quality_score) as avg_quality, min(quality_score) as min_quality by camera_name\n| where avg_quality < 80\n| sort avg_quality",
              "m": "Query camera API for quality_score metric. Alert on <80 average.",
              "z": "Quality score gauge per camera; quality trend line; affected camera list.",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery camera API for quality_score metric. Alert on <80 average.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" quality_score=*\n| stats avg(quality_score) as avg_quality, min(quality_score) as min_quality by camera_name\n| where avg_quality < 80\n| sort avg_quality\n```\n\nUnderstanding this SPL\n\n**Camera Video Quality Score and Stream Health (Meraki MV)** — Monitors video quality metrics to identify network or hardware issues affecting video feeds.\n\nDocumented **Data sources**: `sourcetype=meraki:api`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by camera_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_quality < 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Quality score gauge per camera; quality trend line; affected camera list.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.26",
              "n": "Cloud Archive Status and Backup Validation (Meraki MV)",
              "c": "medium",
              "f": "beginner",
              "v": "Ensures video archives are successfully uploaded to cloud and backup integrity is maintained.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api archive_status=*`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" archive_status=*\n| stats latest(archive_status) as backup_status, latest(last_archive_time) as last_backup by camera_id\n| where archive_status != \"success\"",
              "m": "Check camera API archive status. Alert on failures.",
              "z": "Archive status table; last backup time timeline; failure alert dashboard.",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api archive_status=*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCheck camera API archive status. Alert on failures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" archive_status=*\n| stats latest(archive_status) as backup_status, latest(last_archive_time) as last_backup by camera_id\n| where archive_status != \"success\"\n```\n\nUnderstanding this SPL\n\n**Cloud Archive Status and Backup Validation (Meraki MV)** — Ensures video archives are successfully uploaded to cloud and backup integrity is maintained.\n\nDocumented **Data sources**: `sourcetype=meraki:api archive_status=*`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by camera_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where archive_status != \"success\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Archive status table; last backup time timeline; failure alert dashboard.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.27",
              "n": "Video Stream Connection Errors and Quality Issues (Meraki MV)",
              "c": "medium",
              "f": "beginner",
              "v": "Detects video stream connection failures that prevent remote viewing or recording.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki type=security_event signature=\"*stream*\" OR signature=\"*connection*\"`",
              "q": "index=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*stream*\" OR signature=\"*connection*\")\n| stats count as error_count by camera_name, error_type\n| where error_count > 10",
              "m": "Monitor stream connection events. Alert on error spikes.",
              "z": "Connection error timeline; affected camera list; error type breakdown.",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki type=security_event signature=\"*stream*\" OR signature=\"*connection*\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor stream connection events. Alert on error spikes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki\" type=security_event (signature=\"*stream*\" OR signature=\"*connection*\")\n| stats count as error_count by camera_name, error_type\n| where error_count > 10\n```\n\nUnderstanding this SPL\n\n**Video Stream Connection Errors and Quality Issues (Meraki MV)** — Detects video stream connection failures that prevent remote viewing or recording.\n\nDocumented **Data sources**: `sourcetype=meraki type=security_event signature=\"*stream*\" OR signature=\"*connection*\"`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by camera_name, error_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where error_count > 10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Connection error timeline; affected camera list; error type breakdown.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.28",
              "n": "Camera Firmware Compliance and Update Management (Meraki MV)",
              "c": "medium",
              "f": "intermediate",
              "v": "Ensures all cameras run current firmware with security patches.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api device_type=MV`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" device_type=MV\n| stats latest(firmware_version) as camera_fw, count as camera_count\n| lookup recommended_camera_fw.csv camera_model OUTPUTNEW recommended_version\n| where camera_fw != recommended_version",
              "m": "Query MV device API for firmware. Compare to recommended baseline.",
              "z": "Firmware version table; compliance percentage gauge; outdated camera list.",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api device_type=MV`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery MV device API for firmware. Compare to recommended baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" device_type=MV\n| stats latest(firmware_version) as camera_fw, count as camera_count\n| lookup recommended_camera_fw.csv camera_model OUTPUTNEW recommended_version\n| where camera_fw != recommended_version\n```\n\nUnderstanding this SPL\n\n**Camera Firmware Compliance and Update Management (Meraki MV)** — Ensures all cameras run current firmware with security patches.\n\nDocumented **Data sources**: `sourcetype=meraki:api device_type=MV`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where camera_fw != recommended_version` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Firmware version table; compliance percentage gauge; outdated camera list.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.29",
              "n": "Night Mode Effectiveness and Low-Light Performance (Meraki MV)",
              "c": "low",
              "f": "beginner",
              "v": "Monitors camera performance in low-light conditions to ensure night surveillance effectiveness.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api night_mode=true`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" night_mode=true\n| stats avg(quality_score) as night_quality, count as night_mode_events by camera_name\n| where night_quality < 75",
              "m": "Track camera performance during night mode. Monitor quality metrics.",
              "z": "Night mode quality gauge; low-light performance timeline; affected camera list.",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api night_mode=true`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack camera performance during night mode. Monitor quality metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" night_mode=true\n| stats avg(quality_score) as night_quality, count as night_mode_events by camera_name\n| where night_quality < 75\n```\n\nUnderstanding this SPL\n\n**Night Mode Effectiveness and Low-Light Performance (Meraki MV)** — Monitors camera performance in low-light conditions to ensure night surveillance effectiveness.\n\nDocumented **Data sources**: `sourcetype=meraki:api night_mode=true`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by camera_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where night_quality < 75` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Night mode quality gauge; low-light performance timeline; affected camera list.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.30",
              "n": "People Counting Trends and Occupancy Analytics",
              "c": "low",
              "f": "beginner",
              "v": "Uses camera people counting to track foot traffic trends for space utilization and facility planning.",
              "t": "`Cisco Meraki Add-on for Splunk` (Splunkbase 5580)",
              "d": "`sourcetype=meraki:api people_count=*`",
              "q": "index=cisco_network sourcetype=\"meraki:api\" people_count=*\n| timechart avg(people_count) as avg_occupancy by location",
              "m": "Extract people_count metrics from camera API. Aggregate by location and time.",
              "z": "Occupancy heat map by time of day; location comparison bar chart; trend sparkline.",
              "kfp": "Capacity alerts during planned hardware refreshes, scheduled rack moves, or new deployments.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Meraki Add-on for Splunk` (Splunkbase 5580).\n• Ensure the following data sources are available: `sourcetype=meraki:api people_count=*`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExtract people_count metrics from camera API. Aggregate by location and time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_network sourcetype=\"meraki:api\" people_count=*\n| timechart avg(people_count) as avg_occupancy by location\n```\n\nUnderstanding this SPL\n\n**People Counting Trends and Occupancy Analytics** — Uses camera people counting to track foot traffic trends for space utilization and facility planning.\n\nDocumented **Data sources**: `sourcetype=meraki:api people_count=*`. **App/TA** (typical add-on context): `Cisco Meraki Add-on for Splunk` (Splunkbase 5580). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_network; **sourcetype**: meraki:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_network, sourcetype=\"meraki:api\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time with a separate series **by location** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Occupancy heat map by time of day; location comparison bar chart; trend sparkline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.31",
              "n": "Building Occupancy Trending and Capacity Planning",
              "c": "high",
              "f": "intermediate",
              "v": "Provides real-time and historical people counts per building, floor, and zone using data from Meraki APs and cameras. Supports compliance with fire safety capacity limits, energy management optimization (HVAC scheduling based on actual occupancy), and real estate planning. Trending data reveals patterns — which floors are overcrowded on Tuesdays, which buildings are underused on Fridays — enabling data-driven workplace strategy decisions.",
              "t": "`Spaces Add-On for Splunk` (Splunkbase 8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), Cisco Spaces Firehose API",
              "d": "Cisco Spaces Firehose API (COUNT events — device counts, camera people counts)",
              "q": "index=cisco_spaces sourcetype=\"cisco:spaces:occupancy\"\n| bin _time span=15m\n| stats max(deviceCount) as occupancy by _time, building, floor\n| eventstats avg(occupancy) as avg_occupancy, max(occupancy) as peak_occupancy by building, floor\n| eval capacity_pct=round(occupancy/max_capacity*100, 1)\n| where capacity_pct > 80\n| table _time, building, floor, occupancy, avg_occupancy, peak_occupancy, capacity_pct",
              "m": "Deploy the Spaces Add-On for Splunk (Splunkbase app 8485) and configure it with your Cisco Spaces API credentials. Enable the Firehose API with COUNT event types in Cisco Spaces dashboard. Floor capacity limits should be maintained in a lookup table (`building_capacity.csv`) with building, floor, and max_capacity columns. Set alerts at 80% capacity for warning and 95% for critical. Schedule daily and weekly reports for facilities management.",
              "z": "Floor plan heat maps (occupancy by zone), Line chart (occupancy trend by floor), Column chart (peak occupancy by day of week), Single value panels (current building occupancy, % of capacity).",
              "kfp": "Capacity alerts during planned hardware refreshes, scheduled rack moves, or new deployments.",
              "refs": "[Splunkbase app 8485](https://splunkbase.splunk.com/app/8485), [Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Spaces Add-On for Splunk` (Splunkbase 8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), Cisco Spaces Firehose API.\n• Ensure the following data sources are available: Cisco Spaces Firehose API (COUNT events — device counts, camera people counts).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy the Spaces Add-On for Splunk (Splunkbase app 8485) and configure it with your Cisco Spaces API credentials. Enable the Firehose API with COUNT event types in Cisco Spaces dashboard. Floor capacity limits should be maintained in a lookup table (`building_capacity.csv`) with building, floor, and max_capacity columns. Set alerts at 80% capacity for warning and 95% for critical. Schedule daily and weekly reports for facilities management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_spaces sourcetype=\"cisco:spaces:occupancy\"\n| bin _time span=15m\n| stats max(deviceCount) as occupancy by _time, building, floor\n| eventstats avg(occupancy) as avg_occupancy, max(occupancy) as peak_occupancy by building, floor\n| eval capacity_pct=round(occupancy/max_capacity*100, 1)\n| where capacity_pct > 80\n| table _time, building, floor, occupancy, avg_occupancy, peak_occupancy, capacity_pct\n```\n\nUnderstanding this SPL\n\n**Building Occupancy Trending and Capacity Planning** — Provides real-time and historical people counts per building, floor, and zone using data from Meraki APs and cameras. Supports compliance with fire safety capacity limits, energy management optimization (HVAC scheduling based on actual occupancy), and real estate planning. Trending data reveals patterns — which floors are overcrowded on Tuesdays, which buildings are underused on Fridays — enabling data-driven workplace strategy decisions.\n\nDocumented **Data sources**: Cisco Spaces Firehose API (COUNT events — device counts, camera people counts). **App/TA** (typical add-on context): `Spaces Add-On for Splunk` (Splunkbase 8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), Cisco Spaces Firehose API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_spaces; **sourcetype**: cisco:spaces:occupancy. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_spaces, sourcetype=\"cisco:spaces:occupancy\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, building, floor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by building, floor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **capacity_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where capacity_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Building Occupancy Trending and Capacity Planning**): table _time, building, floor, occupancy, avg_occupancy, peak_occupancy, capacity_pct\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Floor plan heat maps (occupancy by zone), Line chart (occupancy trend by floor), Column chart (peak occupancy by day of week), Single value panels (current building occupancy, % of capacity).",
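              "_spl_sketch": "Sketch for the peak-occupancy-by-day-of-week column chart (same documented `deviceCount` feed); add the `building_capacity.csv` lookup from the main search if you also want capacity percentages here.\n\n```spl\nindex=cisco_spaces sourcetype=\"cisco:spaces:occupancy\"\n| bin _time span=15m\n| stats max(deviceCount) as occupancy by _time, building, floor\n| eval day_of_week=strftime(_time, \"%A\")\n| stats max(occupancy) as peak_occupancy by building, floor, day_of_week\n```",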
              "script": "",
              "premium": "",
              "hw": "Cisco Meraki MR36, MR44, MR46, MR56, MR57, MR76, MR78, MR86, Meraki MV Smart Cameras",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki",
                "cisco_spaces"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.32",
              "n": "Visitor Dwell Time and Movement Flow Analysis",
              "c": "medium",
              "f": "advanced",
              "v": "Measures how long visitors and employees spend in specific zones and tracks movement patterns between areas. For corporate campuses, this optimizes cafeteria layouts, identifies bottleneck corridors, and improves wayfinding. For retail environments, it measures engagement with displays and optimizes store layouts. For healthcare, it tracks patient flow through departments to reduce wait times. Dwell time analysis reveals which spaces create value and which are passed through without engagement.",
              "t": "`Spaces Add-On for Splunk` (Splunkbase 8485), Cisco Spaces Location Analytics",
              "d": "Cisco Spaces Firehose API (DEVICE_LOCATION_UPDATE, DEVICE_PRESENCE events — entry, exit, dwell)",
              "q": "index=cisco_spaces sourcetype=\"cisco:spaces:location\"\n| search eventType=\"DEVICE_PRESENCE\"\n| eval dwell_min=round((exitTime-entryTime)/60, 1)\n| where isnotnull(dwell_min) AND dwell_min > 0\n| stats avg(dwell_min) as avg_dwell, median(dwell_min) as median_dwell, max(dwell_min) as max_dwell, count as visits by zoneName, building, floor\n| eval engagement=case(avg_dwell>30, \"High\", avg_dwell>10, \"Medium\", avg_dwell>2, \"Low\", 1==1, \"Pass-through\")\n| sort -visits\n| table zoneName, building, floor, visits, avg_dwell, median_dwell, max_dwell, engagement",
              "m": "Enable location analytics in Cisco Spaces with zone definitions mapped to your floor plans. Configure the Firehose API to stream DEVICE_PRESENCE and DEVICE_LOCATION_UPDATE events. Define meaningful zones in Cisco Spaces (lobbies, meeting areas, cafeterias, collaboration spaces). Device presence events fire at entry, after 10 minutes of inactivity, and at exit. Filter out devices with dwell times under 2 minutes to exclude pass-throughs. Build movement flow analysis by sequencing location updates per device MAC.",
              "z": "Sankey diagram (movement flows between zones), Heat map (dwell time by zone and time of day), Bar chart (visits and avg dwell by zone), Table (zone engagement ranking).",
              "kfp": "Denied access for expired badges, role changes pending update, or guest badges with limited zones.",
              "refs": "[Splunkbase app 8485](https://splunkbase.splunk.com/app/8485)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Spaces Add-On for Splunk` (Splunkbase 8485), Cisco Spaces Location Analytics.\n• Ensure the following data sources are available: Cisco Spaces Firehose API (DEVICE_LOCATION_UPDATE, DEVICE_PRESENCE events — entry, exit, dwell).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable location analytics in Cisco Spaces with zone definitions mapped to your floor plans. Configure the Firehose API to stream DEVICE_PRESENCE and DEVICE_LOCATION_UPDATE events. Define meaningful zones in Cisco Spaces (lobbies, meeting areas, cafeterias, collaboration spaces). Device presence events fire at entry, after 10 minutes of inactivity, and at exit. Filter out devices with dwell times under 2 minutes to exclude pass-throughs. Build movement flow analysis by sequencing location updates…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_spaces sourcetype=\"cisco:spaces:location\"\n| search eventType=\"DEVICE_PRESENCE\"\n| eval dwell_min=round((exitTime-entryTime)/60, 1)\n| where isnotnull(dwell_min) AND dwell_min > 0\n| stats avg(dwell_min) as avg_dwell, median(dwell_min) as median_dwell, max(dwell_min) as max_dwell, count as visits by zoneName, building, floor\n| eval engagement=case(avg_dwell>30, \"High\", avg_dwell>10, \"Medium\", avg_dwell>2, \"Low\", 1==1, \"Pass-through\")\n| sort -visits\n| table zoneName, building, floor, visits, avg_dwell, median_dwell, max_dwell, engagement\n```\n\nUnderstanding this SPL\n\n**Visitor Dwell Time and Movement Flow Analysis** — Measures how long visitors and employees spend in specific zones and tracks movement patterns between areas. For corporate campuses, this optimizes cafeteria layouts, identifies bottleneck corridors, and improves wayfinding. For retail environments, it measures engagement with displays and optimizes store layouts. For healthcare, it tracks patient flow through departments to reduce wait times. Dwell time analysis reveals which spaces create value and which are passed…\n\nDocumented **Data sources**: Cisco Spaces Firehose API (DEVICE_LOCATION_UPDATE, DEVICE_PRESENCE events — entry, exit, dwell). **App/TA** (typical add-on context): `Spaces Add-On for Splunk` (Splunkbase 8485), Cisco Spaces Location Analytics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_spaces; **sourcetype**: cisco:spaces:location. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_spaces, sourcetype=\"cisco:spaces:location\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **dwell_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(dwell_min) AND dwell_min > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by zoneName, building, floor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **engagement** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Visitor Dwell Time and Movement Flow Analysis**): table zoneName, building, floor, visits, avg_dwell, median_dwell, max_dwell, engagement\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey diagram (movement flows between zones), Heat map (dwell time by zone and time of day), Bar chart (visits and avg dwell by zone), Table (zone engagement ranking).",
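              "_spl_sketch": "Sketch for the Sankey movement-flow panel: sequence location updates per device and count zone-to-zone transitions. The device identifier field `macAddress` is an assumption (borrowed from the after-hours presence use case); substitute the MAC field your location events actually carry.\n\n```spl\nindex=cisco_spaces sourcetype=\"cisco:spaces:location\" eventType=\"DEVICE_LOCATION_UPDATE\"\n| sort 0 macAddress, _time\n| streamstats current=f window=1 last(zoneName) as previous_zone by macAddress\n| where isnotnull(previous_zone) AND previous_zone != zoneName\n| stats count as transitions by previous_zone, zoneName\n| sort -transitions\n```",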
              "script": "",
              "premium": "",
              "hw": "Cisco Meraki MR36, MR44, MR46, MR56, MR57, MR76, MR78, MR86",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_spaces"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.33",
              "n": "Environmental Sensor Monitoring (Temperature, Humidity, Air Quality)",
              "c": "critical",
              "f": "beginner",
              "v": "Monitors environmental conditions using Meraki MT sensors — temperature, humidity, air quality (PM2.5, TVOC, CO2), and water leaks. Protects server rooms and network closets from overheating, monitors warehouse cold-chain compliance, detects water leaks before they cause damage, and ensures office air quality meets health standards. Sensor data with five days of onboard storage survives network outages. Threshold-based alerting enables immediate response to environmental hazards.",
              "t": "`Spaces Add-On for Splunk` (Splunkbase 8485), Cisco Spaces IoT Explorer, Meraki Dashboard API",
              "d": "Cisco Spaces Firehose API (IOT_TELEMETRY events), Meraki sensor API",
              "q": "index=cisco_spaces sourcetype=\"cisco:spaces:sensors\"\n| eval alert=case(temperature>28, \"High Temperature\", temperature<16, \"Low Temperature\", humidity>70, \"High Humidity\", humidity<20, \"Low Humidity\", pm25>35, \"Poor Air Quality (PM2.5)\", co2>1000, \"High CO2\", tvoc>500, \"High TVOC\", waterDetected=\"true\", \"Water Leak Detected\", 1==1, null())\n| where isnotnull(alert)\n| stats latest(temperature) as temp, latest(humidity) as humidity, latest(pm25) as pm25, latest(co2) as co2, values(alert) as alerts by sensorName, sensorModel, location, building\n| sort -temp\n| table sensorName, sensorModel, location, building, temp, humidity, pm25, co2, alerts",
              "m": "Deploy Meraki MT sensors and associate them with Meraki MR access points as BLE gateways. Configure sensor thresholds in Meraki Dashboard for local alerting. Stream telemetry to Splunk via the Spaces Add-On or direct Meraki webhook integration to HEC. Define location-specific thresholds (server rooms: 18-27°C; offices: 20-24°C; cold storage: 2-8°C). Create severity tiers: warning (approaching threshold), critical (exceeding threshold), emergency (water leak, extreme temperature). MT sensors have five-year battery life and five days of onboard data storage.",
              "z": "Gauge panels (current temp/humidity per zone), Line chart (environmental trends over 7 days), Map (sensor locations with color-coded status), Alert table (active environmental alerts).",
              "kfp": "Leak alerts from condensation, scheduled maintenance moisture, or false-trigger sensors.",
              "refs": "[Splunkbase app 8485](https://splunkbase.splunk.com/app/8485)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Spaces Add-On for Splunk` (Splunkbase 8485), Cisco Spaces IoT Explorer, Meraki Dashboard API.\n• Ensure the following data sources are available: Cisco Spaces Firehose API (IOT_TELEMETRY events), Meraki sensor API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Meraki MT sensors and associate them with Meraki MR access points as BLE gateways. Configure sensor thresholds in Meraki Dashboard for local alerting. Stream telemetry to Splunk via the Spaces Add-On or direct Meraki webhook integration to HEC. Define location-specific thresholds (server rooms: 18-27°C; offices: 20-24°C; cold storage: 2-8°C). Create severity tiers: warning (approaching threshold), critical (exceeding threshold), emergency (water leak, extreme temperature). MT sensors have…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_spaces sourcetype=\"cisco:spaces:sensors\"\n| eval alert=case(temperature>28, \"High Temperature\", temperature<16, \"Low Temperature\", humidity>70, \"High Humidity\", humidity<20, \"Low Humidity\", pm25>35, \"Poor Air Quality (PM2.5)\", co2>1000, \"High CO2\", tvoc>500, \"High TVOC\", waterDetected=\"true\", \"Water Leak Detected\", 1==1, null())\n| where isnotnull(alert)\n| stats latest(temperature) as temp, latest(humidity) as humidity, latest(pm25) as pm25, latest(co2) as co2, values(alert) as alerts by sensorName, sensorModel, location, building\n| sort -temp\n| table sensorName, sensorModel, location, building, temp, humidity, pm25, co2, alerts\n```\n\nUnderstanding this SPL\n\n**Environmental Sensor Monitoring (Temperature, Humidity, Air Quality)** — Monitors environmental conditions using Meraki MT sensors — temperature, humidity, air quality (PM2.5, TVOC, CO2), and water leaks. Protects server rooms and network closets from overheating, monitors warehouse cold-chain compliance, detects water leaks before they cause damage, and ensures office air quality meets health standards. Sensor data with five days of onboard storage survives network outages. Threshold-based alerting enables immediate response to environmental…\n\nDocumented **Data sources**: Cisco Spaces Firehose API (IOT_TELEMETRY events), Meraki sensor API. **App/TA** (typical add-on context): `Spaces Add-On for Splunk` (Splunkbase 8485), Cisco Spaces IoT Explorer, Meraki Dashboard API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_spaces; **sourcetype**: cisco:spaces:sensors. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_spaces, sourcetype=\"cisco:spaces:sensors\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **alert** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(alert)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by sensorName, sensorModel, location, building** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Environmental Sensor Monitoring (Temperature, Humidity, Air Quality)**): table sensorName, sensorModel, location, building, temp, humidity, pm25, co2, alerts\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge panels (current temp/humidity per zone), Line chart (environmental trends over 7 days), Map (sensor locations with color-coded status), Alert table (active environmental alerts).",
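              "_spl_sketch": "Sketch for the 7-day environmental trend line chart (same documented fields); clone the panel for humidity, pm25, or co2 and keep the threshold logic in the main alerting search.\n\n```spl\nindex=cisco_spaces sourcetype=\"cisco:spaces:sensors\" earliest=-7d\n| timechart span=1h avg(temperature) as avg_temp by location\n```",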
              "script": "",
              "premium": "",
              "hw": "Cisco Meraki MT10 (temperature/humidity), MT11 (temperature probe), MT12 (water leak), MT14 (air quality — PM2.5, TVOC, noise, temp, humidity), MT15 (CO2, PM2.5, TVOC, noise, temp, humidity), MT20 (open/close door sensor), MT30 (smart automation button)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki",
                "cisco_spaces"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.34",
              "n": "Asset Tracking and Geofencing Alerts",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks the real-time location of high-value assets (medical equipment, network gear, tools, laptops, carts) using BLE tags and Cisco Spaces IoT Explorer. Geofencing alerts notify when assets leave designated zones — preventing theft, loss, and misplacement. In healthcare, this tracks infusion pumps and wheelchairs across departments. In manufacturing, it ensures tools return to storage after shifts. Historical location data supports asset utilization analysis and procurement decisions.",
              "t": "`Spaces Add-On for Splunk` (Splunkbase 8485), Cisco Spaces IoT Explorer",
              "d": "Cisco Spaces Firehose API (IOT_TELEMETRY, DEVICE_LOCATION_UPDATE events for BLE tags)",
              "q": "index=cisco_spaces sourcetype=\"cisco:spaces:assets\"\n| eval zone_violation=if(currentZone!=assignedZone AND isnotnull(assignedZone), \"Out of Zone\", null())\n| eval missing=if(_time-lastSeenTime>3600, \"Not Seen >1hr\", null())\n| where isnotnull(zone_violation) OR isnotnull(missing)\n| stats latest(currentZone) as current_zone, latest(assignedZone) as assigned_zone, latest(building) as building, latest(floor) as floor, latest(lastSeenTime) as last_seen by assetName, assetTag, assetCategory\n| eval status=case(isnotnull(zone_violation), \"GEOFENCE ALERT\", isnotnull(missing), \"MISSING\", 1==1, \"Unknown\")\n| table assetName, assetTag, assetCategory, status, current_zone, assigned_zone, building, floor, last_seen",
              "m": "Register assets in Cisco Spaces IoT Explorer with BLE tags. Define geofence zones matching building floor plans. Configure the Firehose API to stream asset location updates. Maintain a lookup table of asset assignments (asset_tag, assigned_zone, asset_category, asset_value). Alert immediately on high-value assets leaving their zone. For lower-value assets, alert after 30 minutes outside zone. Track \"last seen\" timestamps and flag assets not seen for >1 hour during business hours. Generate weekly utilization reports (time in zone vs. time out of zone) to optimize asset distribution.",
              "z": "Floor plan map (asset locations with icons), Table (geofence violations and missing assets), Bar chart (assets by zone), Timeline (asset movement history).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunkbase app 8485](https://splunkbase.splunk.com/app/8485)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Spaces Add-On for Splunk` (Splunkbase 8485), Cisco Spaces IoT Explorer.\n• Ensure the following data sources are available: Cisco Spaces Firehose API (IOT_TELEMETRY, DEVICE_LOCATION_UPDATE events for BLE tags).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRegister assets in Cisco Spaces IoT Explorer with BLE tags. Define geofence zones matching building floor plans. Configure the Firehose API to stream asset location updates. Maintain a lookup table of asset assignments (asset_tag, assigned_zone, asset_category, asset_value). Alert immediately on high-value assets leaving their zone. For lower-value assets, alert after 30 minutes outside zone. Track \"last seen\" timestamps and flag assets not seen for >1 hour during business hours. Generate weekly…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_spaces sourcetype=\"cisco:spaces:assets\"\n| eval zone_violation=if(currentZone!=assignedZone AND isnotnull(assignedZone), \"Out of Zone\", null())\n| eval missing=if(_time-lastSeenTime>3600, \"Not Seen >1hr\", null())\n| where isnotnull(zone_violation) OR isnotnull(missing)\n| stats latest(currentZone) as current_zone, latest(assignedZone) as assigned_zone, latest(building) as building, latest(floor) as floor, latest(lastSeenTime) as last_seen by assetName, assetTag, assetCategory\n| eval status=case(isnotnull(zone_violation), \"GEOFENCE ALERT\", isnotnull(missing), \"MISSING\", 1==1, \"Unknown\")\n| table assetName, assetTag, assetCategory, status, current_zone, assigned_zone, building, floor, last_seen\n```\n\nUnderstanding this SPL\n\n**Asset Tracking and Geofencing Alerts** — Tracks the real-time location of high-value assets (medical equipment, network gear, tools, laptops, carts) using BLE tags and Cisco Spaces IoT Explorer. Geofencing alerts notify when assets leave designated zones — preventing theft, loss, and misplacement. In healthcare, this tracks infusion pumps and wheelchairs across departments. In manufacturing, it ensures tools return to storage after shifts. Historical location data supports asset utilization analysis and…\n\nDocumented **Data sources**: Cisco Spaces Firehose API (IOT_TELEMETRY, DEVICE_LOCATION_UPDATE events for BLE tags). **App/TA** (typical add-on context): `Spaces Add-On for Splunk` (Splunkbase 8485), Cisco Spaces IoT Explorer. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_spaces; **sourcetype**: cisco:spaces:assets. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_spaces, sourcetype=\"cisco:spaces:assets\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **zone_violation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **missing** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(zone_violation) OR isnotnull(missing)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by assetName, assetTag, assetCategory** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Asset Tracking and Geofencing Alerts**): table assetName, assetTag, assetCategory, status, current_zone, assigned_zone, building, floor, last_seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Floor plan map (asset locations with icons), Table (geofence violations and missing assets), Bar chart (assets by zone), Timeline (asset movement history).",
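              "_spl_sketch": "Sketch that drives the geofence check from the asset-assignment lookup described in the implementation notes instead of event fields. The file name `asset_assignments.csv` is an assumption (the notes specify the columns but not a name); adjust to your lookup definition.\n\n```spl\nindex=cisco_spaces sourcetype=\"cisco:spaces:assets\"\n| lookup asset_assignments.csv asset_tag AS assetTag OUTPUT assigned_zone, asset_category, asset_value\n| where isnotnull(assigned_zone) AND currentZone != assigned_zone\n| eval asset_value=tonumber(asset_value)\n| stats latest(currentZone) as current_zone, latest(building) as building, latest(floor) as floor, latest(_time) as last_seen by assetName, assetTag, asset_category, asset_value\n| sort -asset_value\n```\n\nSorting by `asset_value` surfaces the highest-value out-of-zone assets first, matching the immediate-alert guidance for high-value items.",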
              "script": "",
              "premium": "",
              "hw": "Cisco Meraki MR36, MR44, MR46, MR56, MR57, MR76, MR78, MR86 (as BLE gateways), BLE asset tags (various vendors)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_spaces"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.35",
              "n": "After-Hours Wireless Presence Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Detects Wi-Fi-connected devices present in buildings or restricted zones outside of business hours. Correlates with badge access data and employee schedules to identify unauthorized presence — potential tailgating, after-hours theft, or social engineering. Unlike badge-only systems which only detect entry, wireless presence confirms ongoing physical presence. Supports investigations by providing device MAC, location, and duration of after-hours presence. Particularly valuable for facilities with sensitive areas (data centers, labs, executive floors).",
              "t": "`Spaces Add-On for Splunk` (Splunkbase 8485), Cisco Spaces Firehose API",
              "d": "Cisco Spaces Firehose API (DEVICE_PRESENCE events — entry/exit with timestamps)",
              "q": "index=cisco_spaces sourcetype=\"cisco:spaces:presence\" eventType=\"DEVICE_ENTRY\"\n| eval hour=strftime(_time, \"%H\")\n| eval day=strftime(_time, \"%u\")\n| where (hour<6 OR hour>20) OR day>5\n| lookup known_devices.csv macAddress OUTPUT owner, department, authorized_afterhours\n| where authorized_afterhours!=\"yes\" OR isnull(authorized_afterhours)\n| stats count as entries, earliest(_time) as first_seen, latest(_time) as last_seen by macAddress, owner, department, building, floor, zoneName\n| sort -count\n| table macAddress, owner, department, building, floor, zoneName, first_seen, last_seen, entries",
              "m": "Configure Cisco Spaces Firehose API to stream DEVICE_PRESENCE events with entry/exit notifications. Build a known devices lookup by correlating MAC addresses with user identities from ISE or Active Directory (via 802.1X authentication records). Define business hours per building/zone (some facilities operate 24/7). Maintain an authorized_afterhours list for security, janitorial, and operations staff. Alert on unknown devices or unauthorized users detected after hours in sensitive zones. Integrate with badge access logs for cross-validation.",
              "z": "Floor plan map (after-hours device locations), Timeline (presence events outside business hours), Table (unauthorized after-hours presence), Bar chart (after-hours detections by zone).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunkbase app 8485](https://splunkbase.splunk.com/app/8485), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Spaces Add-On for Splunk` (Splunkbase 8485), Cisco Spaces Firehose API.\n• Ensure the following data sources are available: Cisco Spaces Firehose API (DEVICE_PRESENCE events — entry/exit with timestamps).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Cisco Spaces Firehose API to stream DEVICE_PRESENCE events with entry/exit notifications. Build a known devices lookup by correlating MAC addresses with user identities from ISE or Active Directory (via 802.1X authentication records). Define business hours per building/zone (some facilities operate 24/7). Maintain an authorized_afterhours list for security, janitorial, and operations staff. Alert on unknown devices or unauthorized users detected after hours in sensitive zones. Integrat…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_spaces sourcetype=\"cisco:spaces:presence\" eventType=\"DEVICE_ENTRY\"\n| eval hour=strftime(_time, \"%H\")\n| eval day=strftime(_time, \"%u\")\n| where (hour<6 OR hour>20) OR day>5\n| lookup known_devices.csv macAddress OUTPUT owner, department, authorized_afterhours\n| where authorized_afterhours!=\"yes\" OR isnull(authorized_afterhours)\n| stats count as entries, earliest(_time) as first_seen, latest(_time) as last_seen by macAddress, owner, department, building, floor, zoneName\n| sort -count\n| table macAddress, owner, department, building, floor, zoneName, first_seen, last_seen, entries\n```\n\nUnderstanding this SPL\n\n**After-Hours Wireless Presence Detection** — Detects Wi-Fi-connected devices present in buildings or restricted zones outside of business hours. Correlates with badge access data and employee schedules to identify unauthorized presence — potential tailgating, after-hours theft, or social engineering. Unlike badge-only systems which only detect entry, wireless presence confirms ongoing physical presence. Supports investigations by providing device MAC, location, and duration of after-hours presence. Particularly…\n\nDocumented **Data sources**: Cisco Spaces Firehose API (DEVICE_PRESENCE events — entry/exit with timestamps). **App/TA** (typical add-on context): `Spaces Add-On for Splunk` (Splunkbase 8485), Cisco Spaces Firehose API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_spaces; **sourcetype**: cisco:spaces:presence. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_spaces, sourcetype=\"cisco:spaces:presence\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **day** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (hour<6 OR hour>20) OR day>5` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where authorized_afterhours!=\"yes\" OR isnull(authorized_afterhours)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by macAddress, owner, department, building, floor, zoneName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **After-Hours Wireless Presence Detection**): table macAddress, owner, department, building, floor, zoneName, first_seen, last_seen, entries\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**After-Hours Wireless Presence Detection** — Detects Wi-Fi-connected devices present in buildings or restricted zones outside of business hours. Correlates with badge access data and employee schedules to identify unauthorized presence — potential tailgating, after-hours theft, or social engineering. Unlike badge-only systems which only detect entry, wireless presence confirms ongoing physical presence. Supports investigations by providing device MAC, location, and duration of after-hours presence. Particularly…\n\nDocumented **Data sources**: Cisco Spaces Firehose API (DEVICE_PRESENCE events — entry/exit with timestamps). **App/TA** (typical add-on context): `Spaces Add-On for Splunk` (Splunkbase 8485), Cisco Spaces Firehose API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Floor plan map (after-hours device locations), Timeline (presence events outside business hours), Table (unauthorized after-hours presence), Bar chart (after-hours detections by zone).",
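Supplementary sketch (not part of the documented use case): Step 1 calls for a `known_devices.csv` lookup built from 802.1X authentication records, but no SPL is given for it. A minimal, hedged example follows; it assumes Cisco ISE syslog is ingested as `sourcetype="cisco:ise:syslog"` in `index=network` with `Calling_Station_ID` and `UserName` extracted, so verify the index, sourcetype, and field names in your environment, enrich `department` from your identity source, and have the security team review the `authorized_afterhours` flag before relying on it.

```spl
index=network sourcetype="cisco:ise:syslog" "Passed-Authentication"
| stats latest(UserName) as owner, latest(_time) as last_auth by Calling_Station_ID
| eval macAddress=lower(replace(Calling_Station_ID, "-", ":"))
| eval department="unknown", authorized_afterhours="no"
| table macAddress, owner, department, authorized_afterhours
| outputlookup known_devices.csv
```

Schedule the search daily so newly seen devices are captured, and normalize the MAC format to match what the Spaces presence events emit.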
              "script": "",
              "premium": "",
              "hw": "Cisco Meraki MR36, MR44, MR46, MR56, MR57, MR76, MR78, MR86",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_spaces"
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.36",
              "n": "Workspace Utilization and Ghost Booking Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Combines room booking data with actual physical occupancy from Cisco Spaces and Webex device sensors to calculate true workspace utilization. Identifies \"ghost bookings\" — rooms reserved but never occupied — which waste available space and frustrate employees searching for rooms. Reveals which rooms are most/least popular, optimal room sizes for actual group sizes, and peak usage patterns. Directly supports real estate cost reduction by providing evidence-based recommendations for space consolidation, redesign, or expansion.",
              "t": "`Spaces Add-On for Splunk` (Splunkbase 8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), `ta_cisco_webex_add_on_for_splunk` (GitHub), calendar integration",
              "d": "Cisco Spaces occupancy data, Webex device people count (RoomAnalytics), calendar/booking system data",
              "q": "index=cisco_spaces sourcetype=\"cisco:spaces:workspace\"\n| join type=left max=0 workspaceId [search index=webex sourcetype=\"webex:workspace_bookings\" | fields workspaceId, bookingStart, bookingEnd, organizer]\n| eval booked=if(isnotnull(bookingStart), 1, 0)\n| eval occupied=if(peopleCount>0, 1, 0)\n| eval ghost_booking=if(booked=1 AND occupied=0, 1, 0)\n| eval unbooked_usage=if(booked=0 AND occupied=1, 1, 0)\n| stats sum(booked) as total_bookings, sum(ghost_booking) as ghost_bookings, sum(unbooked_usage) as walk_ins, avg(peopleCount) as avg_occupants, max(roomCapacity) as capacity by workspaceName, building, floor\n| eval ghost_rate=if(total_bookings>0, round(ghost_bookings/total_bookings*100, 1), 0)\n| eval avg_fill=if(capacity>0, round(avg_occupants/capacity*100, 1), 0)\n| sort -ghost_rate\n| table workspaceName, building, floor, capacity, total_bookings, ghost_bookings, ghost_rate, walk_ins, avg_occupants, avg_fill",
              "m": "Enable Webex RoomAnalytics people counting on all room devices (`xConfiguration RoomAnalytics PeopleCountOutOfCall: On`). Ingest room booking data from your calendar system (Google Calendar, Microsoft 365, or Webex Workspaces API). Stream occupancy data from Cisco Spaces. Correlate bookings with actual occupancy using workspace IDs and time windows. Define ghost booking as: room booked but no occupancy detected within 10 minutes of booking start. Generate weekly facility reports showing: ghost booking rate per floor/building, rooms with avg occupancy below 25% of capacity, and peak vs. off-peak usage patterns.",
              "z": "Heat map (utilization by room and time slot), Bar chart (ghost booking rate by floor), Table (workspace utilization summary), Scatter plot (room capacity vs. actual avg occupants).",
              "kfp": "Capacity alerts during planned hardware refreshes, scheduled rack moves, or new deployments.",
              "refs": "[Splunkbase app 8485](https://splunkbase.splunk.com/app/8485), [Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Spaces Add-On for Splunk` (Splunkbase 8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), `ta_cisco_webex_add_on_for_splunk` (GitHub), calendar integration.\n• Ensure the following data sources are available: Cisco Spaces occupancy data, Webex device people count (RoomAnalytics), calendar/booking system data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable Webex RoomAnalytics people counting on all room devices (`xConfiguration RoomAnalytics PeopleCountOutOfCall: On`). Ingest room booking data from your calendar system (Google Calendar, Microsoft 365, or Webex Workspaces API). Stream occupancy data from Cisco Spaces. Correlate bookings with actual occupancy using workspace IDs and time windows. Define ghost booking as: room booked but no occupancy detected within 10 minutes of booking start. Generate weekly facility reports showing: ghost b…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_spaces sourcetype=\"cisco:spaces:workspace\"\n| join type=left max=0 workspaceId [search index=webex sourcetype=\"webex:workspace_bookings\" | fields workspaceId, bookingStart, bookingEnd, organizer]\n| eval booked=if(isnotnull(bookingStart), 1, 0)\n| eval occupied=if(peopleCount>0, 1, 0)\n| eval ghost_booking=if(booked=1 AND occupied=0, 1, 0)\n| eval unbooked_usage=if(booked=0 AND occupied=1, 1, 0)\n| stats sum(booked) as total_bookings, sum(ghost_booking) as ghost_bookings, sum(unbooked_usage) as walk_ins, avg(peopleCount) as avg_occupants, max(roomCapacity) as capacity by workspaceName, building, floor\n| eval ghost_rate=if(total_bookings>0, round(ghost_bookings/total_bookings*100, 1), 0)\n| eval avg_fill=if(capacity>0, round(avg_occupants/capacity*100, 1), 0)\n| sort -ghost_rate\n| table workspaceName, building, floor, capacity, total_bookings, ghost_bookings, ghost_rate, walk_ins, avg_occupants, avg_fill\n```\n\nUnderstanding this SPL\n\n**Workspace Utilization and Ghost Booking Detection** — Combines room booking data with actual physical occupancy from Cisco Spaces and Webex device sensors to calculate true workspace utilization. Identifies \"ghost bookings\" — rooms reserved but never occupied — which waste available space and frustrate employees searching for rooms. Reveals which rooms are most/least popular, optimal room sizes for actual group sizes, and peak usage patterns. Directly supports real estate cost reduction by providing evidence-based…\n\nDocumented **Data sources**: Cisco Spaces occupancy data, Webex device people count (RoomAnalytics), calendar/booking system data. **App/TA** (typical add-on context): `Spaces Add-On for Splunk` (Splunkbase 8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), `ta_cisco_webex_add_on_for_splunk` (GitHub), calendar integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_spaces; **sourcetype**: cisco:spaces:workspace. 
If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_spaces, sourcetype=\"cisco:spaces:workspace\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **booked** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **occupied** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ghost_booking** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **unbooked_usage** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by workspaceName, building, floor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ghost_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **avg_fill** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Workspace Utilization and Ghost Booking Detection**): table workspaceName, building, floor, capacity, total_bookings, ghost_bookings, ghost_rate, walk_ins, avg_occupants, avg_fill\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heat map (utilization by room and time slot), Bar chart (ghost booking rate by floor), Table (workspace utilization summary), Scatter plot (room capacity vs. actual avg occupants).",
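Supplementary sketch (hedged, not part of the documented search): Step 1 defines a ghost booking as no occupancy within 10 minutes of the booking start, but the join above does not enforce that time window. One way to approximate it, assuming `bookingStart` is an ISO-8601 string and the Spaces workspace feed reports `peopleCount` throughout the day (the usual join subsearch limits apply):

```spl
index=cisco_spaces sourcetype="cisco:spaces:workspace"
| join type=left max=0 workspaceId
    [search index=webex sourcetype="webex:workspace_bookings"
     | eval booking_start=strptime(bookingStart, "%Y-%m-%dT%H:%M:%S")
     | fields workspaceId, booking_start, organizer]
| where isnotnull(booking_start)
| eval occupied_in_grace=if(_time>=booking_start AND _time<=booking_start+600 AND peopleCount>0, 1, 0)
| stats max(occupied_in_grace) as occupied_in_first_10min by workspaceId, booking_start, organizer
| eval ghost_booking=if(occupied_in_first_10min=0, "Yes", "No")
| table workspaceId, organizer, booking_start, ghost_booking
```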
              "script": "",
              "premium": "",
              "hw": "Cisco Meraki MR36, MR44, MR46, MR56, MR57, MR76, MR78, MR86, Webex Room Kit, Room Bar, Room Navigator, Webex Board",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "github"
              ],
              "em": [
                "cisco_meraki",
                "cisco_spaces",
                "cisco_webex"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.37",
              "n": "Access Control Event Audit",
              "c": "high",
              "f": "intermediate",
              "v": "Physical access logs correlate with logical access events for security investigation. Audit trail required for compliance.",
              "t": "Access control syslog, API input",
              "d": "Access control system logs (badge events, door status)",
              "q": "index=physical sourcetype=\"access_control\"\n| stats count by badge_holder, door, action\n| sort -count",
              "m": "Forward access control events via syslog or API. Parse badge holder, door, time, and action (granted, denied). Alert on after-hours access to sensitive areas. Correlate physical access with logical authentication events.",
              "z": "Table (access events), Bar chart (access by door), Timeline (access events for specific person), Geo/floor plan.",
              "kfp": "Denied access for expired badges, role changes pending update, or guest badges with limited zones.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Access control syslog, API input.\n• Ensure the following data sources are available: Access control system logs (badge events, door status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward access control events via syslog or API. Parse badge holder, door, time, and action (granted, denied). Alert on after-hours access to sensitive areas. Correlate physical access with logical authentication events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"access_control\"\n| stats count by badge_holder, door, action\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Access Control Event Audit** — Physical access logs correlate with logical access events for security investigation. Audit trail required for compliance.\n\nDocumented **Data sources**: Access control system logs (badge events, door status). **App/TA** (typical add-on context): Access control syslog, API input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: access_control. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"access_control\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by badge_holder, door, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (access events), Bar chart (access by door), Timeline (access events for specific person), Geo/floor plan.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-15.3.37: Access Control Event Audit.",
                  "ea": "Saved search 'UC-15.3.37' running on Access control system logs (badge events, door status), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.api.org/products-and-services/standards/important-standards-announcements/standard-1164"
                },
                {
                  "r": "HIPAA Privacy",
                  "v": "current",
                  "cl": "§164.502(a)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Privacy §164.502(a) (Uses and disclosures of PHI — general rules) is enforced — Splunk UC-15.3.37: Access Control Event Audit.",
                  "ea": "Saved search 'UC-15.3.37' running on Access control system logs (badge events, door status), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E"
                },
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-15.3.37: Access Control Event Audit.",
                  "ea": "Saved search 'UC-15.3.37' running on Access control system logs (badge events, door status), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.tsa.gov/sd02c"
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.38",
              "n": "Cisco Spaces Wayfinding and Path Analytics",
              "c": "medium",
              "f": "advanced",
              "v": "Understanding how people physically move through a building reveals traffic bottlenecks, dead zones, and inefficient layouts invisible to static occupancy sensors. Cisco Spaces path analytics tracks visitor movement between zones over time — showing that 80% of foot traffic funnels through one corridor, or that a key amenity is consistently bypassed because signage directs people the wrong way. This data drives floor plan optimization, signage placement, and emergency egress planning with evidence rather than guesswork.",
              "t": "`Spaces Add-On for Splunk` (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), `TA-cisco_ios` (Catalyst), Cisco Spaces API",
              "d": "`sourcetype=cisco:spaces:location`, `sourcetype=cisco:spaces:path_analytics`",
              "q": "index=spaces sourcetype=\"cisco:spaces:location\"\n| sort 0 device_mac, _time\n| streamstats current=f last(zone_name) as prev_zone, last(_time) as prev_time by device_mac\n| where zone_name!=prev_zone AND isnotnull(prev_zone)\n| eval transition=prev_zone.\" → \".zone_name\n| eval transit_sec=_time - prev_time\n| where transit_sec < 1800\n| bin _time span=1h\n| stats count as transitions, avg(transit_sec) as avg_transit_sec, dc(device_mac) as unique_visitors by _time, transition, prev_zone, zone_name\n| eval avg_transit_min=round(avg_transit_sec/60, 1)\n| sort -transitions\n| table _time, transition, transitions, unique_visitors, avg_transit_min",
              "m": "Cisco Spaces uses Wi-Fi probe requests and connected client location data from Meraki or Catalyst APs to track device movement across defined zones (floors, wings, departments, amenities). Ingest location updates via the Spaces Add-On. Define zones in Cisco Spaces matching physical areas (lobby, cafeteria, elevator bank, parking, conference wing). Track zone transitions per device to build path flows. Filter out transitions longer than 30 minutes (device likely stationary, then moved). Aggregate by transition pair to identify the highest-traffic paths. Overlay with building floor plans to visualize traffic flow. Compare weekday vs weekend patterns and identify peak congestion hours. Use findings to optimize signage placement, adjust security checkpoint locations, and validate emergency egress route capacity.",
              "z": "Sankey diagram (zone-to-zone flow), Floor plan overlay (traffic density by path), Bar chart (top 20 transitions by volume), Line chart (traffic volume by hour), Heatmap (zone × hour congestion).",
              "kfp": "Sensor anomalies during cleaning, calibration, or planned environmental changes.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Spaces Add-On for Splunk` (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), `TA-cisco_ios` (Catalyst), Cisco Spaces API.\n• Ensure the following data sources are available: `sourcetype=cisco:spaces:location`, `sourcetype=cisco:spaces:path_analytics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCisco Spaces uses Wi-Fi probe requests and connected client location data from Meraki or Catalyst APs to track device movement across defined zones (floors, wings, departments, amenities). Ingest location updates via the Spaces Add-On. Define zones in Cisco Spaces matching physical areas (lobby, cafeteria, elevator bank, parking, conference wing). Track zone transitions per device to build path flows. Filter out transitions longer than 30 minutes (device likely stationary, then moved). Aggregate…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=spaces sourcetype=\"cisco:spaces:location\"\n| sort 0 device_mac, _time\n| streamstats current=f last(zone_name) as prev_zone, last(_time) as prev_time by device_mac\n| where zone_name!=prev_zone AND isnotnull(prev_zone)\n| eval transition=prev_zone.\" → \".zone_name\n| eval transit_sec=_time - prev_time\n| where transit_sec < 1800\n| bin _time span=1h\n| stats count as transitions, avg(transit_sec) as avg_transit_sec, dc(device_mac) as unique_visitors by _time, transition, prev_zone, zone_name\n| eval avg_transit_min=round(avg_transit_sec/60, 1)\n| sort -transitions\n| table _time, transition, transitions, unique_visitors, avg_transit_min\n```\n\nUnderstanding this SPL\n\n**Cisco Spaces Wayfinding and Path Analytics** — Understanding how people physically move through a building reveals traffic bottlenecks, dead zones, and inefficient layouts invisible to static occupancy sensors. Cisco Spaces path analytics tracks visitor movement between zones over time — showing that 80% of foot traffic funnels through one corridor, or that a key amenity is consistently bypassed because signage directs people the wrong way. This data drives floor plan optimization, signage placement, and emergency…\n\nDocumented **Data sources**: `sourcetype=cisco:spaces:location`, `sourcetype=cisco:spaces:path_analytics`. **App/TA** (typical add-on context): `Spaces Add-On for Splunk` (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), `TA-cisco_ios` (Catalyst), Cisco Spaces API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: spaces; **sourcetype**: cisco:spaces:location. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=spaces, sourcetype=\"cisco:spaces:location\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by device_mac** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where zone_name!=prev_zone AND isnotnull(prev_zone)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **transition** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **transit_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where transit_sec < 1800` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, transition, prev_zone, zone_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_transit_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Cisco Spaces Wayfinding and Path Analytics**): table _time, transition, transitions, unique_visitors, avg_transit_min\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey diagram (zone-to-zone flow), Floor plan overlay (traffic density by path), Bar chart (top 20 transitions by volume), Line chart (traffic volume by hour), Heatmap (zone × hour congestion).",
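Supplementary sketch (not part of the documented search): Step 1 suggests comparing weekday and weekend traffic patterns. A minimal variant of the transition search that splits counts by day type; the field names follow the main SPL above, so validate them against your Spaces location feed:

```spl
index=spaces sourcetype="cisco:spaces:location"
| sort 0 device_mac, _time
| streamstats current=f last(zone_name) as prev_zone by device_mac
| where zone_name!=prev_zone AND isnotnull(prev_zone)
| eval transition=prev_zone." → ".zone_name
| eval day_type=if(tonumber(strftime(_time, "%u"))>5, "weekend", "weekday")
| chart count over transition by day_type
| sort -weekday
```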
              "script": "",
              "premium": "",
              "hw": "Cisco Meraki MR series (Wi-Fi location), Cisco Catalyst 9100 series (Wi-Fi), Cisco Spaces IoT Services",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ios",
                "cisco_meraki",
                "cisco_spaces"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.39",
              "n": "Cisco Spaces Proximity and Engagement Analytics",
              "c": "medium",
              "f": "intermediate",
              "v": "For corporate campuses, retail environments, and visitor centers, understanding how people engage with specific zones — reception desks, demo areas, retail displays, cafeterias, wellness rooms — transforms facility management from guesswork to data-driven optimization. Cisco Spaces dwell time and repeat visit analytics quantify engagement intensity: a demo area where visitors average 30 seconds of dwell time needs redesign, while a breakout space with 45-minute average dwell validates the investment. Repeat visit patterns reveal which spaces become habit destinations vs one-time curiosities.",
              "t": "`Spaces Add-On for Splunk` (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), `TA-cisco_ios` (Catalyst), Cisco Spaces API",
              "d": "`sourcetype=cisco:spaces:occupancy`, `sourcetype=cisco:spaces:dwell_time`",
              "q": "index=spaces sourcetype=\"cisco:spaces:dwell_time\"\n| eval dwell_bucket=case(\n    dwell_min < 5, \"Passerby (<5min)\",\n    dwell_min < 15, \"Brief (5-15min)\",\n    dwell_min < 30, \"Moderate (15-30min)\",\n    dwell_min < 60, \"Engaged (30-60min)\",\n    1==1, \"Extended (60min+)\")\n| bin _time span=1d\n| stats count as visits, avg(dwell_min) as avg_dwell, dc(device_mac) as unique_visitors, sum(eval(if(repeat_visit==\"Yes\",1,0))) as repeat_visits by _time, zone_name, zone_type\n| eval engagement_score=round((avg_dwell * 0.4) + (repeat_visits/unique_visitors * 100 * 0.3) + (unique_visitors * 0.3 / 10), 1)\n| eval repeat_pct=round(repeat_visits*100/visits, 1)\n| table _time, zone_name, zone_type, unique_visitors, visits, avg_dwell, repeat_pct, engagement_score\n| sort -engagement_score",
              "m": "Ingest Cisco Spaces dwell time data via the Spaces Add-On. Cisco Spaces calculates dwell time by tracking how long a device's Wi-Fi probe requests are detected within a zone boundary. Configure zone types (amenity, collaboration, retail, reception, demo) to enable category-level analysis. Track unique visitors (distinct device MACs), average dwell time, and repeat visit rate (same device returning within 7 days). Build a composite engagement score combining dwell time, repeat rate, and visitor volume. Compare engagement across similar zone types (e.g., all breakout spaces) to identify high-performing and underperforming spaces. Provide monthly reports to facilities and real estate teams. For retail environments, correlate engagement with point-of-sale data to measure conversion.",
              "z": "Bar chart (engagement score by zone), Bubble chart (zones by visitor volume, dwell time, repeat rate), Line chart (engagement trend per zone over 90 days), Table (zone performance details), Heatmap (zone × day-of-week engagement).",
              "kfp": "Capacity alerts during planned hardware refreshes, scheduled rack moves, or new deployments.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Spaces Add-On for Splunk` (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), `TA-cisco_ios` (Catalyst), Cisco Spaces API.\n• Ensure the following data sources are available: `sourcetype=cisco:spaces:occupancy`, `sourcetype=cisco:spaces:dwell_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Cisco Spaces dwell time data via the Spaces Add-On. Cisco Spaces calculates dwell time by tracking how long a device's Wi-Fi probe requests are detected within a zone boundary. Configure zone types (amenity, collaboration, retail, reception, demo) to enable category-level analysis. Track unique visitors (distinct device MACs), average dwell time, and repeat visit rate (same device returning within 7 days). Build a composite engagement score combining dwell time, repeat rate, and visitor v…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=spaces sourcetype=\"cisco:spaces:dwell_time\"\n| eval dwell_bucket=case(\n    dwell_min < 5, \"Passerby (<5min)\",\n    dwell_min < 15, \"Brief (5-15min)\",\n    dwell_min < 30, \"Moderate (15-30min)\",\n    dwell_min < 60, \"Engaged (30-60min)\",\n    1==1, \"Extended (60min+)\")\n| bin _time span=1d\n| stats count as visits, avg(dwell_min) as avg_dwell, dc(device_mac) as unique_visitors, sum(eval(if(repeat_visit==\"Yes\",1,0))) as repeat_visits by _time, zone_name, zone_type\n| eval engagement_score=round((avg_dwell * 0.4) + (repeat_visits/unique_visitors * 100 * 0.3) + (unique_visitors * 0.3 / 10), 1)\n| eval repeat_pct=round(repeat_visits*100/visits, 1)\n| table _time, zone_name, zone_type, unique_visitors, visits, avg_dwell, repeat_pct, engagement_score\n| sort -engagement_score\n```\n\nUnderstanding this SPL\n\n**Cisco Spaces Proximity and Engagement Analytics** — For corporate campuses, retail environments, and visitor centers, understanding how people engage with specific zones — reception desks, demo areas, retail displays, cafeterias, wellness rooms — transforms facility management from guesswork to data-driven optimization. Cisco Spaces dwell time and repeat visit analytics quantify engagement intensity: a demo area where visitors average 30 seconds of dwell time needs redesign, while a breakout space with 45-minute average…\n\nDocumented **Data sources**: `sourcetype=cisco:spaces:occupancy`, `sourcetype=cisco:spaces:dwell_time`. **App/TA** (typical add-on context): `Spaces Add-On for Splunk` (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), `TA-cisco_ios` (Catalyst), Cisco Spaces API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: spaces; **sourcetype**: cisco:spaces:dwell_time. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=spaces, sourcetype=\"cisco:spaces:dwell_time\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **dwell_bucket** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, zone_name, zone_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **engagement_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **repeat_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Cisco Spaces Proximity and Engagement Analytics**): table _time, zone_name, zone_type, unique_visitors, visits, avg_dwell, repeat_pct, engagement_score\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (engagement score by zone), Bubble chart (zones by visitor volume, dwell time, repeat rate), Line chart (engagement trend per zone over 90 days), Table (zone performance details), Heatmap (zone × day-of-week engagement).",
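Supplementary sketch (hedged): Step 1 recommends comparing engagement across similar zone types to find over- and under-performing spaces. The variant below reuses the fields from the main search (`dwell_min`, `device_mac`, `zone_name`, `zone_type`) to rank each zone against the average of its own type:

```spl
index=spaces sourcetype="cisco:spaces:dwell_time"
| stats avg(dwell_min) as avg_dwell, dc(device_mac) as unique_visitors by zone_type, zone_name
| eventstats avg(avg_dwell) as type_avg_dwell by zone_type
| eval dwell_vs_type_pct=round((avg_dwell - type_avg_dwell)*100/type_avg_dwell, 1)
| table zone_type, zone_name, unique_visitors, avg_dwell, type_avg_dwell, dwell_vs_type_pct
| sort zone_type, dwell_vs_type_pct
```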
              "script": "",
              "premium": "",
              "hw": "Cisco Meraki MR series, Cisco Catalyst 9100 series, Cisco Spaces IoT Services",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ios",
                "cisco_meraki",
                "cisco_spaces"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "15.3.40",
              "n": "Cisco Spaces IoT Sensor Alert Correlation with Building Management",
              "c": "high",
              "f": "advanced",
              "v": "Environmental sensor alerts (temperature excursion, humidity spike, poor air quality) are only half the story — the other half is whether the building management system (BMS) responded correctly. Correlating Cisco Spaces IoT sensor alerts with HVAC events, BMS actions, and occupancy patterns validates automated response effectiveness. When a temperature alert fires and the HVAC doesn't respond within 15 minutes, that's an automation failure. When air quality degrades only in occupied zones during peak hours, that's a ventilation capacity issue. This correlation turns isolated alerts into actionable facility intelligence.",
              "t": "`Spaces Add-On for Splunk` (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), BMS/BACnet integration, Cisco Spaces IoT Services",
              "d": "`sourcetype=cisco:spaces:iot_sensors`, `sourcetype=bacnet:events` or `sourcetype=bms:events`, `sourcetype=cisco:spaces:occupancy`",
              "q": "index=spaces sourcetype=\"cisco:spaces:iot_sensors\" alert_active=true\n| eval sensor_alert=sensor_type.\": \".alert_reason\n| join type=left zone_id [search index=building sourcetype=\"bms:events\" action_type=\"HVAC*\"\n    | stats latest(action) as bms_action, latest(_time) as bms_response_time by zone_id]\n| join type=left zone_id [search index=spaces sourcetype=\"cisco:spaces:occupancy\"\n    | stats latest(occupancy_count) as current_occupancy by zone_id]\n| eval response_delay_min=if(isnotnull(bms_response_time), round((_time - bms_response_time)/60, 1), null())\n| eval bms_responded=if(isnotnull(bms_action), \"Yes\", \"No\")\n| eval assessment=case(\n    bms_responded==\"No\", \"BMS Non-Response - Investigate\",\n    response_delay_min > 15, \"Slow Response (\".response_delay_min.\" min)\",\n    response_delay_min <= 15, \"Timely Response\",\n    1==1, \"Unknown\")\n| table _time, zone_id, sensor_alert, current_occupancy, bms_responded, bms_action, response_delay_min, assessment\n| sort -_time",
              "m": "Ingest Cisco Spaces IoT sensor data (temperature, humidity, air quality index, CO2 levels) via the Spaces Add-On. Configure sensor alert thresholds in Cisco Spaces (e.g., temperature >27°C, CO2 >1000ppm, humidity >65%). Separately ingest BMS/HVAC event logs via BACnet gateway, Modbus, or BMS API integration. Correlate by zone ID and time window: when a sensor alert fires, check whether a corresponding BMS action occurred within 15 minutes. Track BMS non-responses to identify automation failures or disconnected zones. Layer in occupancy data to distinguish capacity-driven environmental issues (crowded meeting room overheating) from equipment failures (empty room overheating). Provide weekly reports to facilities management showing alert count, BMS response rate, and average response time by building zone.",
              "z": "Table (active alerts with BMS response status), Gauge (BMS response rate %), Line chart (daily alert count and response rate trend), Bar chart (alerts by sensor type and zone), Floor plan overlay (alert locations with severity coloring).",
              "kfp": "Capacity alerts during planned hardware refreshes, scheduled rack moves, or new deployments.",
              "refs": "[Splunkbase app 5580](https://splunkbase.splunk.com/app/5580)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Spaces Add-On for Splunk` (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), BMS/BACnet integration, Cisco Spaces IoT Services.\n• Ensure the following data sources are available: `sourcetype=cisco:spaces:iot_sensors`, `sourcetype=bacnet:events` or `sourcetype=bms:events`, `sourcetype=cisco:spaces:occupancy`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Cisco Spaces IoT sensor data (temperature, humidity, air quality index, CO2 levels) via the Spaces Add-On. Configure sensor alert thresholds in Cisco Spaces (e.g., temperature >27°C, CO2 >1000ppm, humidity >65%). Separately ingest BMS/HVAC event logs via BACnet gateway, Modbus, or BMS API integration. Correlate by zone ID and time window: when a sensor alert fires, check whether a corresponding BMS action occurred within 15 minutes. Track BMS non-responses to identify automation failures …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=spaces sourcetype=\"cisco:spaces:iot_sensors\" alert_active=true\n| eval sensor_alert=sensor_type.\": \".alert_reason\n| join type=left zone_id [search index=building sourcetype=\"bms:events\" action_type=\"HVAC*\"\n    | stats latest(action) as bms_action, latest(_time) as bms_response_time by zone_id]\n| join type=left zone_id [search index=spaces sourcetype=\"cisco:spaces:occupancy\"\n    | stats latest(occupancy_count) as current_occupancy by zone_id]\n| eval response_delay_min=if(isnotnull(bms_response_time), round((_time - bms_response_time)/60, 1), null())\n| eval bms_responded=if(isnotnull(bms_action), \"Yes\", \"No\")\n| eval assessment=case(\n    bms_responded==\"No\", \"BMS Non-Response - Investigate\",\n    response_delay_min > 15, \"Slow Response (\".response_delay_min.\" min)\",\n    response_delay_min <= 15, \"Timely Response\",\n    1==1, \"Unknown\")\n| table _time, zone_id, sensor_alert, current_occupancy, bms_responded, bms_action, response_delay_min, assessment\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Cisco Spaces IoT Sensor Alert Correlation with Building Management** — Environmental sensor alerts (temperature excursion, humidity spike, poor air quality) are only half the story — the other half is whether the building management system (BMS) responded correctly. Correlating Cisco Spaces IoT sensor alerts with HVAC events, BMS actions, and occupancy patterns validates automated response effectiveness. When a temperature alert fires and the HVAC doesn't respond within 15 minutes, that's an automation failure. When air quality degrades only…\n\nDocumented **Data sources**: `sourcetype=cisco:spaces:iot_sensors`, `sourcetype=bacnet:events` or `sourcetype=bms:events`, `sourcetype=cisco:spaces:occupancy`. **App/TA** (typical add-on context): `Spaces Add-On for Splunk` (Splunkbase #8485), `Cisco Meraki Add-on for Splunk` (Splunkbase 5580), BMS/BACnet integration, Cisco Spaces IoT Services. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: spaces; **sourcetype**: cisco:spaces:iot_sensors. 
That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=spaces, sourcetype=\"cisco:spaces:iot_sensors\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sensor_alert** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **response_delay_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **bms_responded** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **assessment** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Cisco Spaces IoT Sensor Alert Correlation with Building Management**): table _time, zone_id, sensor_alert, current_occupancy, bms_responded, bms_action, response_delay_min, assessment\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (active alerts with BMS response status), Gauge (BMS response rate %), Line chart (daily alert count and response rate trend), Bar chart (alerts by sensor type and zone), Floor plan overlay (alert locations with severity coloring).",
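Supplementary sketch (not part of the documented search): Step 1 calls for weekly reporting of alert count and BMS response rate per zone. A minimal weekly roll-up using the same sourcetypes and fields as the main search; keep the usual `join` subsearch limits in mind for large estates:

```spl
index=spaces sourcetype="cisco:spaces:iot_sensors" alert_active=true earliest=-7d@d
| join type=left zone_id
    [search index=building sourcetype="bms:events" action_type="HVAC*" earliest=-7d@d
     | stats latest(_time) as bms_response_time by zone_id]
| eval responded=if(isnotnull(bms_response_time), 1, 0)
| stats count as alerts, sum(responded) as responded_alerts by zone_id
| eval bms_response_rate_pct=round(responded_alerts*100/alerts, 1)
| sort bms_response_rate_pct
```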
              "script": "",
              "premium": "",
              "hw": "Cisco Spaces IoT sensors (temperature, humidity, air quality, CO2), Cisco Meraki MT sensors, BMS/HVAC systems",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the physical data center for power, cooling, environment, and physical access.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_meraki",
                "cisco_spaces"
              ],
              "ta_link": {
                "name": "Cisco Meraki Add-on for Splunk",
                "id": 5580,
                "url": "https://splunkbase.splunk.com/app/5580"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.7,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 37,
            "none": 0
          }
        }
      ],
      "i": 15,
      "n": "Data Center Physical Infrastructure",
      "src": "cat-15-data-center-physical-infrastructure.md"
    },
    {
      "s": [
        {
          "i": "16.1",
          "n": "Ticketing Systems",
          "u": [
            {
              "i": "16.1.1",
              "n": "Incident Volume Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Incident trends reveal infrastructure stability, staffing needs, and the effectiveness of problem management.",
              "t": "`Splunk_TA_snow`",
              "d": "ServiceNow incident table",
              "q": "index=itsm sourcetype=\"snow:incident\"\n| timechart span=1d count by priority",
              "m": "Enable the incident input in `Splunk_TA_snow` with a 300-second polling interval. Baseline incident volume by computing the median count per `assignment_group` by hour-of-week over 30 days. Alert when the current interval exceeds 2x the baseline for two consecutive intervals. Exclude known maintenance windows via a `change_windows` lookup.",
              "z": "Line chart (incident volume trend), Stacked bar (by priority), Pie chart (by category), Table (today's incidents).",
              "kfp": "Volume spikes during major outages, planned maintenance windows, or after large change rollouts.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: ServiceNow incident table.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable the incident input in `Splunk_TA_snow` with a 300-second polling interval. Baseline incident volume by computing the median count per `assignment_group` by hour-of-week over 30 days. Alert when the current interval exceeds 2x the baseline for two consecutive intervals. Exclude known maintenance windows via a `change_windows` lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\"\n| timechart span=1d count by priority\n```\n\nUnderstanding this SPL\n\n**Incident Volume Trending** — Incident trends reveal infrastructure stability, staffing needs, and the effectiveness of problem management.\n\nDocumented **Data sources**: ServiceNow incident table. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by priority** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (incident volume trend), Stacked bar (by priority), Pie chart (by category), Table (today's incidents).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how many incidents we open and how that splits by how urgent they are, so we notice surges, quiet days, and when the team is under water.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.2",
              "n": "SLA Compliance Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "SLA breaches affect customer satisfaction and contractual obligations. Real-time monitoring enables intervention before breaches occur.",
              "t": "`Splunk_TA_snow`",
              "d": "ServiceNow SLA records (response, resolution)",
              "q": "index=itsm sourcetype=\"snow:incident\"\n| eval response_met=if(response_time<=response_sla,\"Yes\",\"No\")\n| eval resolution_met=if(resolution_time<=resolution_sla,\"Yes\",\"No\")\n| stats count(eval(response_met=\"Yes\")) as met, count as total by priority\n| eval compliance_pct=round(met/total*100,1)",
              "m": "Track response and resolution times against SLA targets per priority. Alert when tickets approach SLA breach. Report on compliance percentage per priority and assignment group. Identify teams with consistent breaches.",
              "z": "Gauge (SLA compliance %), Bar chart (compliance by priority), Table (tickets approaching breach), Line chart (compliance trend).",
              "kfp": "Compliance dips with seasonal ticket mix, new targets, or month-end pushes; compare with the official report before a hard escalation.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: ServiceNow SLA records (response, resolution).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack response and resolution times against SLA targets per priority. Alert when tickets approach SLA breach. Report on compliance percentage per priority and assignment group. Identify teams with consistent breaches.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\"\n| eval response_met=if(response_time<=response_sla,\"Yes\",\"No\")\n| eval resolution_met=if(resolution_time<=resolution_sla,\"Yes\",\"No\")\n| stats count(eval(response_met=\"Yes\")) as met, count as total by priority\n| eval compliance_pct=round(met/total*100,1)\n```\n\nUnderstanding this SPL\n\n**SLA Compliance Monitoring** — SLA breaches affect customer satisfaction and contractual obligations. Real-time monitoring enables intervention before breaches occur.\n\nDocumented **Data sources**: ServiceNow SLA records (response, resolution). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **response_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **resolution_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by priority** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **compliance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (SLA compliance %), Bar chart (compliance by priority), Table (tickets approaching breach), Line chart (compliance trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check whether we resolve and respond inside the time we promised, so we know when service levels slip and which teams need support.",
              "wv": "crawl",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "security",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "16.1.3",
              "n": "MTTR by Category",
              "c": "high",
              "f": "beginner",
              "v": "MTTR per category identifies where process improvements or automation would have the greatest impact.",
              "t": "`Splunk_TA_snow`",
              "d": "Incident lifecycle data (open, assigned, resolved timestamps)",
              "q": "index=itsm sourcetype=\"snow:incident\" state=\"resolved\"\n| eval mttr_hours=round((resolved_at-opened_at)/3600,1)\n| stats avg(mttr_hours) as avg_mttr, median(mttr_hours) as median_mttr by category\n| sort -avg_mttr",
              "m": "Calculate MTTR from incident open to resolution timestamps. Break down by category, subcategory, and assignment group. Track trends over time. Set MTTR targets per category and report on achievement.",
              "z": "Bar chart (MTTR by category), Line chart (MTTR trend), Table (category MTTR summary), Histogram (resolution time distribution).",
              "kfp": "MTTR and response drifts from harder incidents, new staff, on-call experience mix, or a wider scope of what we close as done.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: Incident lifecycle data (open, assigned, resolved timestamps).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCalculate MTTR from incident open to resolution timestamps. Break down by category, subcategory, and assignment group. Track trends over time. Set MTTR targets per category and report on achievement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" state=\"resolved\"\n| eval mttr_hours=round((resolved_at-opened_at)/3600,1)\n| stats avg(mttr_hours) as avg_mttr, median(mttr_hours) as median_mttr by category\n| sort -avg_mttr\n```\n\nUnderstanding this SPL\n\n**MTTR by Category** — MTTR per category identifies where process improvements or automation would have the greatest impact.\n\nDocumented **Data sources**: Incident lifecycle data (open, assigned, resolved timestamps). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mttr_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (MTTR by category), Line chart (MTTR trend), Table (category MTTR summary), Histogram (resolution time distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We measure how long it takes to fix different kinds of work, so we can see which areas drag and whether we are improving over time.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "16.1.4",
              "n": "Change Success Rate",
              "c": "high",
              "f": "beginner",
              "v": "Failed changes are the leading cause of incidents. Tracking success rate drives improvement in change management practices.",
              "t": "`Splunk_TA_snow`",
              "d": "ServiceNow change records",
              "q": "index=itsm sourcetype=\"snow:change_request\"\n| stats count(eval(close_code=\"successful\")) as success, count(eval(close_code=\"failed\")) as failed, count as total by type\n| eval success_rate=round(success/total*100,1)",
              "m": "Ingest change request records. Track outcomes (successful, failed, backed out). Calculate success rate by change type (standard, normal, emergency). Alert on failed changes. Report on DORA change failure rate metric.",
              "z": "Pie chart (change outcomes), Bar chart (success rate by type), Line chart (success rate trend), Single value (overall success rate).",
              "kfp": "Change success that dips during big trains, new platforms, or a new way to record good versus bad in the form.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: ServiceNow change records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest change request records. Track outcomes (successful, failed, backed out). Calculate success rate by change type (standard, normal, emergency). Alert on failed changes. Report on DORA change failure rate metric.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\"\n| stats count(eval(close_code=\"successful\")) as success, count(eval(close_code=\"failed\")) as failed, count as total by type\n| eval success_rate=round(success/total*100,1)\n```\n\nUnderstanding this SPL\n\n**Change Success Rate** — Failed changes are the leading cause of incidents. Tracking success rate drives improvement in change management practices.\n\nDocumented **Data sources**: ServiceNow change records. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **success_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (change outcomes), Bar chart (success rate by type), Line chart (success rate trend), Single value (overall success rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow how many changes land well versus poorly, so we can steer release habits and see when a new platform is still bumpy.",
              "wv": "crawl",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "16.1.5",
              "n": "Change Collision Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Overlapping changes on related systems increase outage risk. Detection enables coordination and conflict resolution.",
              "t": "`Splunk_TA_snow`",
              "d": "Change calendar, CI relationships",
              "q": "index=itsm sourcetype=\"snow:change_request\" state=\"scheduled\"\n| eval change_window_start=start_date, change_window_end=end_date\n| join type=inner max=1 cmdb_ci [| search index=itsm sourcetype=\"snow:change_request\" state=\"scheduled\"]\n| where change_window_start < end_date AND change_window_end > start_date AND change_id!=other_change_id",
              "m": "Analyze scheduled change windows for overlapping CIs. Cross-reference CI relationships for dependent systems. Alert when changes to related systems overlap. Create change calendar view for CAB review.",
              "z": "Calendar view (change windows), Table (colliding changes), Gantt chart (change timeline).",
              "kfp": "Intentional overlap for blue-green, paired work, or test cells that still look like two open windows in the list.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: Change calendar, CI relationships.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAnalyze scheduled change windows for overlapping CIs. Cross-reference CI relationships for dependent systems. Alert when changes to related systems overlap. Create change calendar view for CAB review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" state=\"scheduled\"\n| eval change_window_start=start_date, change_window_end=end_date\n| join type=inner max=1 cmdb_ci [| search index=itsm sourcetype=\"snow:change_request\" state=\"scheduled\"]\n| where change_window_start < end_date AND change_window_end > start_date AND change_id!=other_change_id\n```\n\nUnderstanding this SPL\n\n**Change Collision Detection** — Overlapping changes on related systems increase outage risk. Detection enables coordination and conflict resolution.\n\nDocumented **Data sources**: Change calendar, CI relationships. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **change_window_start** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where change_window_start < end_date AND change_window_end > start_date AND change_id!=other_change_id` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Calendar view (change windows), Table (colliding changes), Gantt chart (change timeline).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for two pieces of work that might step on the same thing at once, so we can coordinate before a window turns into a customer surprise.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.6",
              "n": "Problem Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Identifying recurring incident patterns that should become problems drives root cause resolution and reduces incident volume.",
              "t": "`Splunk_TA_snow`",
              "d": "Incident categorization data, problem records",
              "q": "index=itsm sourcetype=\"snow:incident\"\n| stats count by category, subcategory, cmdb_ci\n| where count > 5\n| sort -count\n| head 20",
              "m": "Analyze incident patterns by category, CI, and assignment group. Identify recurring incidents (>5 in 30 days). Flag candidates for problem record creation. Track problem management effectiveness (repeat incidents after RCA).",
              "z": "Table (top recurring incidents), Bar chart (repeat incidents by category), Line chart (repeat rate trend).",
              "kfp": "Problem work rising during big outages, vendor bugs, and right after a major product release; context matters for trend pages.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: Incident categorization data, problem records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAnalyze incident patterns by category, CI, and assignment group. Identify recurring incidents (>5 in 30 days). Flag candidates for problem record creation. Track problem management effectiveness (repeat incidents after RCA).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\"\n| stats count by category, subcategory, cmdb_ci\n| where count > 5\n| sort -count\n| head 20\n```\n\nUnderstanding this SPL\n\n**Problem Trending** — Identifying recurring incident patterns that should become problems drives root cause resolution and reduces incident volume.\n\nDocumented **Data sources**: Incident categorization data, problem records. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by category, subcategory, cmdb_ci** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top recurring incidents), Bar chart (repeat incidents by category), Line chart (repeat rate trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many underlying problems are open and moving, so we can fund root-cause work before the same pain keeps coming back.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.7",
              "n": "Ticket Reassignment Rate",
              "c": "medium",
              "f": "beginner",
              "v": "High reassignment rates indicate poor routing or skills gaps. Reduction improves MTTR and customer satisfaction.",
              "t": "`Splunk_TA_snow`",
              "d": "Incident audit trail (assignment changes)",
              "q": "index=itsm sourcetype=\"snow:incident\"\n| stats dc(assignment_group) as group_count, count as reassignments by number\n| where group_count > 2\n| sort -group_count",
              "m": "Track assignment group changes per ticket. Calculate average reassignments. Identify tickets with >2 reassignments (ping-pong tickets). Report on routing accuracy by category. Improve auto-routing rules.",
              "z": "Bar chart (avg reassignments by category), Table (most-reassigned tickets), Line chart (reassignment rate trend), Single value (avg reassignments).",
              "kfp": "Reassignments from severity changes, support model cutovers, or a very large event touching many groups.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: Incident audit trail (assignment changes).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack assignment group changes per ticket. Calculate average reassignments. Identify tickets with >2 reassignments (ping-pong tickets). Report on routing accuracy by category. Improve auto-routing rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\"\n| stats dc(assignment_group) as group_count, count as reassignments by number\n| where group_count > 2\n| sort -group_count\n```\n\nUnderstanding this SPL\n\n**Ticket Reassignment Rate** — High reassignment rates indicate poor routing or skills gaps. Reduction improves MTTR and customer satisfaction.\n\nDocumented **Data sources**: Incident audit trail (assignment changes). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by number** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where group_count > 2` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (avg reassignments by category), Table (most-reassigned tickets), Line chart (reassignment rate trend), Single value (avg reassignments).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how often we move work between people, so we can see unclear intake, shaky handoffs, or a team that is overloaded.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.8",
              "n": "Aging Ticket Alerts",
              "c": "medium",
              "f": "intermediate",
              "v": "Aging tickets indicate stuck processes or forgotten issues. Alerts ensure nothing falls through the cracks.",
              "t": "`Splunk_TA_snow`",
              "d": "Open incident data",
              "q": "index=itsm sourcetype=\"snow:incident\" state IN (\"new\",\"in_progress\",\"on_hold\")\n| eval age_days=round((now()-opened_at)/86400)\n| eval age_threshold=case(priority=1,1, priority=2,3, priority=3,7, 1=1,14)\n| where age_days > age_threshold\n| table number, short_description, priority, assignment_group, age_days\n| sort -age_days",
              "m": "Calculate ticket age against priority-based thresholds. Alert when tickets exceed expected resolution time. Escalate automatically via workflow rules. Report on aging ticket inventory daily.",
              "z": "Table (aging tickets), Bar chart (aging by priority), Single value (total aging tickets), Line chart (aging trend).",
              "kfp": "Stale or old records while waiting on a vendor, a customer reply, a patch, or work we chose to park on purpose.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: Open incident data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCalculate ticket age against priority-based thresholds. Alert when tickets exceed expected resolution time. Escalate automatically via workflow rules. Report on aging ticket inventory daily.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" state IN (\"new\",\"in_progress\",\"on_hold\")\n| eval age_days=round((now()-opened_at)/86400)\n| eval age_threshold=case(priority=1,1, priority=2,3, priority=3,7, 1=1,14)\n| where age_days > age_threshold\n| table number, short_description, priority, assignment_group, age_days\n| sort -age_days\n```\n\nUnderstanding this SPL\n\n**Aging Ticket Alerts** — Aging tickets indicate stuck processes or forgotten issues. Alerts ensure nothing falls through the cracks.\n\nDocumented **Data sources**: Open incident data. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_threshold** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days > age_threshold` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Aging Ticket Alerts**): table number, short_description, priority, assignment_group, age_days\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (aging tickets), Bar chart (aging by priority), Single value (total aging tickets), Line chart (aging trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We call out work that has sat too long, so we can nudge owners before small delays become big customer stories.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.9",
              "n": "Change-Incident Correlation",
              "c": "critical",
              "f": "intermediate",
              "v": "Correlating incidents with recent changes is the fastest path to root cause. Automated correlation accelerates MTTR.",
              "t": "`Splunk_TA_snow` + monitoring data",
              "d": "Change records + incident records + monitoring events",
              "q": "index=itsm sourcetype=\"snow:incident\" priority IN (1,2)\n| join type=left max=1 cmdb_ci\n    [search index=itsm sourcetype=\"snow:change_request\" close_code=\"successful\" earliest=-24h\n     | table cmdb_ci, number as change_number, short_description as change_desc, end_date]\n| where isnotnull(change_number)\n| table number, short_description, cmdb_ci, change_number, change_desc",
              "m": "When high-priority incidents are created, automatically search for changes completed in the last 24 hours on related CIs. Present correlation to incident team. Track change-related incident percentage. Feed back to change management.",
              "z": "Table (incident-change correlation), Single value (% incidents with recent change), Timeline (changes + incidents overlaid).",
              "kfp": "Loose cause links when many incidents land near the same change window; not every line is a true regression.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow` + monitoring data.\n• Ensure the following data sources are available: Change records + incident records + monitoring events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nWhen high-priority incidents are created, automatically search for changes completed in the last 24 hours on related CIs. Present correlation to incident team. Track change-related incident percentage. Feed back to change management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" priority IN (1,2)\n| join type=left max=1 cmdb_ci\n    [search index=itsm sourcetype=\"snow:change_request\" close_code=\"successful\" earliest=-24h\n     | table cmdb_ci, number as change_number, short_description as change_desc, end_date]\n| where isnotnull(change_number)\n| table number, short_description, cmdb_ci, change_number, change_desc\n```\n\nUnderstanding this SPL\n\n**Change-Incident Correlation** — Correlating incidents with recent changes is the fastest path to root cause. Automated correlation accelerates MTTR.\n\nDocumented **Data sources**: Change records + incident records + monitoring events. **App/TA** (typical add-on context): `Splunk_TA_snow` + monitoring data. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnotnull(change_number)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Change-Incident Correlation**): table number, short_description, cmdb_ci, change_number, change_desc\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (incident-change correlation), Single value (% incidents with recent change), Timeline (changes + incidents overlaid).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We line up change activity with a burst of customer issues, so we can tell a real break from coincidence fast.",
              "wv": "crawl",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "16.1.10",
              "n": "Service Request Fulfillment Time",
              "c": "medium",
              "f": "beginner",
              "v": "Fulfillment time metrics drive service catalog optimization and customer satisfaction. Slow fulfillment reduces adoption.",
              "t": "`Splunk_TA_snow`",
              "d": "Service request data",
              "q": "index=itsm sourcetype=\"snow:sc_request\"\n| eval fulfillment_hours=round((closed_at-opened_at)/3600,1)\n| stats avg(fulfillment_hours) as avg_hours, median(fulfillment_hours) as median_hours by cat_item\n| sort -avg_hours",
              "m": "Track service request lifecycle from submission to fulfillment. Calculate fulfillment time per catalog item. Identify items with slow fulfillment for automation opportunities. Report on catalog efficiency.",
              "z": "Bar chart (avg fulfillment by item), Table (catalog item performance), Line chart (fulfillment time trend).",
              "kfp": "MTTR and response drifts from harder incidents, new staff, on-call experience mix, or a wider scope of what we close as done.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: Service request data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack service request lifecycle from submission to fulfillment. Calculate fulfillment time per catalog item. Identify items with slow fulfillment for automation opportunities. Report on catalog efficiency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:sc_request\"\n| eval fulfillment_hours=round((closed_at-opened_at)/3600,1)\n| stats avg(fulfillment_hours) as avg_hours, median(fulfillment_hours) as median_hours by cat_item\n| sort -avg_hours\n```\n\nUnderstanding this SPL\n\n**Service Request Fulfillment Time** — Fulfillment time metrics drive service catalog optimization and customer satisfaction. Slow fulfillment reduces adoption.\n\nDocumented **Data sources**: Service request data. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:sc_request. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:sc_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fulfillment_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by cat_item** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (avg fulfillment by item), Table (catalog item performance), Line chart (fulfillment time trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We time how long routine requests take end to end, so we can trim slow steps in catalog and approval paths.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.11",
              "n": "Problem Ticket Reopening Rate",
              "c": "medium",
              "f": "intermediate",
              "v": "Tickets closed then reopened indicate poor resolution quality, incomplete fixes, or inadequate testing. Tracking reopening rate drives resolution discipline and reduces repeat work.",
              "t": "Custom (ITSM API — ServiceNow, Jira Service Management)",
              "d": "ITSM ticket state change history",
              "q": "index=itsm sourcetype=\"snow:incident:audit\" field_name=\"state\"\n| eval reopened=if(match(old_value,\"closed|6\") AND not(match(new_value,\"closed|6\")), 1, 0)\n| where reopened=1\n| stats count as reopened_count by number\n| eval metric=\"reopened\"\n| append [| search index=itsm sourcetype=\"snow:incident\" state=\"closed\" earliest=-30d | stats count as total_closed | eval metric=\"total\"]\n| stats sum(reopened_count) as reopened, sum(total_closed) as total by metric\n| stats sum(reopened) as reopened, sum(total_closed) as total\n| eval reopen_rate=round(reopened/total*100, 1)",
              "m": "Ingest ITSM audit/history tables capturing state transitions. For ServiceNow, use `sys_audit` or `incident_state_history`; for Jira, use `changelog` or REST API history. Identify incidents with state sequence: closed → reopened (or new/in_progress). Calculate reopening rate as reopened / total closed over rolling 30 days. Alert when rate exceeds 5%. Break down by assignment group and category to target improvement. Correlate with resolution notes to identify patterns (e.g., \"workaround\" vs \"root cause fixed\").",
              "z": "Single value (reopen rate %), Bar chart (reopen rate by assignment group), Line chart (reopen rate trend), Table (reopened tickets with resolution notes).",
              "kfp": "Reopens and repeats that share one flaky root cause, and tickets that only look the same in the short text field; problem review still helps sort them.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (ITSM API — ServiceNow, Jira Service Management).\n• Ensure the following data sources are available: ITSM ticket state change history.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest ITSM audit/history tables capturing state transitions. For ServiceNow, use `sys_audit` or `incident_state_history`; for Jira, use `changelog` or REST API history. Identify incidents with state sequence: closed → reopened (or new/in_progress). Calculate reopening rate as reopened / total closed over rolling 30 days. Alert when rate exceeds 5%. Break down by assignment group and category to target improvement. Correlate with resolution notes to identify patterns (e.g., \"workaround\" vs \"root…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident:audit\" field_name=\"state\"\n| eval reopened=if(match(old_value,\"closed|6\") AND not(match(new_value,\"closed|6\")), 1, 0)\n| where reopened=1\n| stats count as reopened_count by number\n| eval metric=\"reopened\"\n| append [| search index=itsm sourcetype=\"snow:incident\" state=\"closed\" earliest=-30d | stats count as total_closed | eval metric=\"total\"]\n| stats sum(reopened_count) as reopened, sum(total_closed) as total by metric\n| stats sum(reopened) as reopened, sum(total_closed) as total\n| eval reopen_rate=round(reopened/total*100, 1)\n```\n\nUnderstanding this SPL\n\n**Problem Ticket Reopening Rate** — Tickets closed then reopened indicate poor resolution quality, incomplete fixes, or inadequate testing. Tracking reopening rate drives resolution discipline and reduces repeat work.\n\nDocumented **Data sources**: ITSM ticket state change history. **App/TA** (typical add-on context): Custom (ITSM API — ServiceNow, Jira Service Management). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **reopened** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where reopened=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by number** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **metric** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by metric** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• `eval` defines or adjusts **reopen_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (reopen rate %), Bar chart (reopen rate by assignment group), Line chart (reopen rate trend), Table (reopened tickets with resolution notes).",
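              "_splSketch": "Optional breakdown (sketch): reopening rate by assignment group, as suggested in Step 1. Assumes the same index and sourcetypes as the main search, that closed incident records carry `assignment_group`, and that the audit rows share the incident `number`; subsearch row limits apply on very large volumes, so adjust names and scope to your deployment.\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" state=\"closed\" earliest=-30d\n| join type=left number [ search index=itsm sourcetype=\"snow:incident:audit\" field_name=\"state\"\n  | where match(old_value,\"closed|6\") AND NOT match(new_value,\"closed|6\")\n  | stats count as reopen_events by number ]\n| eval reopened=if(coalesce(reopen_events,0)>0,1,0)\n| stats count as total_closed, sum(reopened) as reopened by assignment_group\n| eval reopen_rate=round(100*reopened/total_closed,1)\n| sort -reopen_rate\n```",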
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count how often a problem case gets opened again, so we know if we truly fixed the cause or only quieted the symptom.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "jira",
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.12",
              "n": "Incident Priority Distribution Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Are P1/P2 incidents increasing? Trend analysis for management reporting reveals workload shifts, infrastructure degradation, or seasonal patterns. Supports staffing and capacity planning.",
              "t": "Custom (ITSM API)",
              "d": "ITSM incident records (priority, created_date)",
              "q": "index=itsm sourcetype=\"snow:incident\"\n| eval priority_label=case(priority=1,\"P1\", priority=2,\"P2\", priority=3,\"P3\", priority=4,\"P4\", priority=5,\"P5\", true(),\"Other\")\n| timechart span=1d count by priority_label\n| addtotals\n| eval p1_p2_pct=round(('P1'+'P2')/Total*100, 1)",
              "m": "Ingest incident creation events with priority and created timestamp. Normalize priority values (ServiceNow: 1–5; Jira: Critical/High/Medium/Low). Run daily timechart by priority. Compute P1+P2 share of total for executive summary. Alert when P1/P2 percentage exceeds 7-day rolling average by >20%. Export weekly/monthly reports for management. Compare against previous quarter for trend narrative.",
              "z": "Stacked area chart (priority distribution over time), Line chart (P1+P2 count trend), Single value (P1/P2 % this week), Table (priority counts by week).",
              "kfp": "P1 and P2 spikes in major outages, customer-wide events, and right after a large roll-out; expect heavy weeks to look hot.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (ITSM API).\n• Ensure the following data sources are available: ITSM incident records (priority, created_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest incident creation events with priority and created timestamp. Normalize priority values (ServiceNow: 1–5; Jira: Critical/High/Medium/Low). Run daily timechart by priority. Compute P1+P2 share of total for executive summary. Alert when P1/P2 percentage exceeds 7-day rolling average by >20%. Export weekly/monthly reports for management. Compare against previous quarter for trend narrative.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\"\n| eval priority_label=case(priority=1,\"P1\", priority=2,\"P2\", priority=3,\"P3\", priority=4,\"P4\", priority=5,\"P5\", true(),\"Other\")\n| timechart span=1d count by priority_label\n| addtotals\n| eval p1_p2_pct=round(('P1'+'P2')/Total*100, 1)\n```\n\nUnderstanding this SPL\n\n**Incident Priority Distribution Trending** — Are P1/P2 incidents increasing? Trend analysis for management reporting reveals workload shifts, infrastructure degradation, or seasonal patterns. Supports staffing and capacity planning.\n\nDocumented **Data sources**: ITSM incident records (priority, created_date). **App/TA** (typical add-on context): Custom (ITSM API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **priority_label** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by priority_label** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Incident Priority Distribution Trending**): addtotals\n• `eval` defines or adjusts **p1_p2_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area chart (priority distribution over time), Line chart (P1+P2 count trend), Single value (P1/P2 % this week), Table (priority counts by week).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow how the mix of urgent versus routine incidents shifts, so we can see when the world is on fire or when we are over-using top priority.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.13",
              "n": "On-Call Escalation Frequency",
              "c": "medium",
              "f": "intermediate",
              "v": "Rising escalation rate indicates team capacity or knowledge gaps. Unacknowledged or escalated incidents signal burnout risk and process bottlenecks. Supports on-call rotation tuning and training.",
              "t": "Custom (PagerDuty API, Opsgenie API, ITSM)",
              "d": "On-call platform API (incidents, escalations, acknowledgment times)",
              "q": "index=oncall sourcetype IN (\"pagerduty:incidents\",\"opsgenie:alerts\")\n| eval escalated=if(escalation_count>0 OR escalation_policy_used=1, 1, 0)\n| eval ack_delay_mins=round((acknowledged_at-triggered_at)/60, 0)\n| timechart span=1d count as total, sum(escalated) as escalated\n| eval escalation_rate=round(escalated/total*100, 1)\n| where total>0",
              "m": "Ingest PagerDuty or Opsgenie incidents via REST API (scheduled input or scripted input). Map fields: `escalation_count`, `escalation_policy_used`, `acknowledged_at`, `triggered_at`. Compute escalation rate (escalated / total) per day or per service. Alert when escalation rate exceeds 15% over 7 days. Track acknowledgment time (SLA); alert when avg ack time exceeds 15 minutes for P1. Report by service and escalation policy to identify overloaded rotations.",
              "z": "Line chart (escalation rate trend), Bar chart (escalations by service), Single value (escalation rate %), Table (slowest-acknowledged incidents).",
              "kfp": "On-call and escalation noise from flapping tools, new monitoring, or one big issue that bounces through many names.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (PagerDuty API, Opsgenie API, ITSM).\n• Ensure the following data sources are available: On-call platform API (incidents, escalations, acknowledgment times).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest PagerDuty or Opsgenie incidents via REST API (scheduled input or scripted input). Map fields: `escalation_count`, `escalation_policy_used`, `acknowledged_at`, `triggered_at`. Compute escalation rate (escalated / total) per day or per service. Alert when escalation rate exceeds 15% over 7 days. Track acknowledgment time (SLA); alert when avg ack time exceeds 15 minutes for P1. Report by service and escalation policy to identify overloaded rotations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=oncall sourcetype IN (\"pagerduty:incidents\",\"opsgenie:alerts\")\n| eval escalated=if(escalation_count>0 OR escalation_policy_used=1, 1, 0)\n| eval ack_delay_mins=round((acknowledged_at-triggered_at)/60, 0)\n| timechart span=1d count as total, sum(escalated) as escalated\n| eval escalation_rate=round(escalated/total*100, 1)\n| where total>0\n```\n\nUnderstanding this SPL\n\n**On-Call Escalation Frequency** — Rising escalation rate indicates team capacity or knowledge gaps. Unacknowledged or escalated incidents signal burnout risk and process bottlenecks. Supports on-call rotation tuning and training.\n\nDocumented **Data sources**: On-call platform API (incidents, escalations, acknowledgment times). **App/TA** (typical add-on context): Custom (PagerDuty API, Opsgenie API, ITSM). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: oncall.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=oncall. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **escalated** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ack_delay_mins** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **escalation_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where total>0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (escalation rate trend), Bar chart (escalations by service), Single value (escalation rate %), Table (slowest-acknowledged incidents).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
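              "_splSketch": "Acknowledgment-time variant (sketch): the 'average ack time exceeds 15 minutes for P1' alert from Step 1. Assumes `acknowledged_at`/`triggered_at` are epoch seconds (as in the main search) and that your feed exposes a `service` field; the `urgency=\"high\"` filter is a placeholder for however your platform marks P1 and may need renaming.\n\n```spl\nindex=oncall sourcetype IN (\"pagerduty:incidents\",\"opsgenie:alerts\") urgency=\"high\"\n| eval ack_delay_mins=round((acknowledged_at-triggered_at)/60,0)\n| stats avg(ack_delay_mins) as avg_ack_mins, count as incidents by service\n| where avg_ack_mins > 15\n| sort -avg_ack_mins\n```",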
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track how often work jumps up the on-call chain, so we can see real pressure versus noisy or mis-tuned rules.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "pagerduty"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.14",
              "n": "SLA Breach Prediction",
              "c": "critical",
              "f": "intermediate",
              "v": "Predicting tickets likely to breach SLA before the deadline enables proactive reassignment, escalation, and automation — reducing contractual exposure and customer impact.",
              "t": "`Splunk_TA_snow`",
              "d": "ServiceNow incident + SLA task / task_sla (`sourcetype=snow:task_sla` or equivalent)",
              "q": "index=itsm sourcetype=\"snow:task_sla\"\n| where isnull(breach_time) OR breach_time=0\n| eval pct_elapsed=if(isnotnull(planned_end_time) AND planned_end_time>sla_start_time,\n    100*(now()-sla_start_time)/(planned_end_time-sla_start_time), null())\n| where pct_elapsed>=80 AND pct_elapsed<100 AND isnotnull(pct_elapsed)\n| table _time, parent, number, sla_type, pct_elapsed, planned_end_time\n| sort -pct_elapsed",
              "m": "Ingest SLA task rows with `sla_start_time`, `planned_end_time` (or `due_at`), breach flag, and parent incident. Compute percent of SLA window consumed. Alert when elapsed ≥80% and breach has not occurred. Optionally blend with assignment group queue depth. Tune thresholds per priority and SLA definition.",
              "z": "Table (at-risk tickets), Single value (at-risk count), Gauge (% SLA time consumed), Timeline (SLA burn-down).",
              "kfp": "SLA pressure during shift handoffs, night and weekend work with few people, or holiday and regional long weekends with thin coverage.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: ServiceNow incident + SLA task / task_sla (`sourcetype=snow:task_sla` or equivalent).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest SLA task rows with `sla_start_time`, `planned_end_time` (or `due_at`), breach flag, and parent incident. Compute percent of SLA window consumed. Alert when elapsed ≥80% and breach has not occurred. Optionally blend with assignment group queue depth. Tune thresholds per priority and SLA definition.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:task_sla\"\n| where isnull(breach_time) OR breach_time=0\n| eval pct_elapsed=if(isnotnull(planned_end_time) AND planned_end_time>sla_start_time,\n    100*(now()-sla_start_time)/(planned_end_time-sla_start_time), null())\n| where pct_elapsed>=80 AND pct_elapsed<100 AND isnotnull(pct_elapsed)\n| table _time, parent, number, sla_type, pct_elapsed, planned_end_time\n| sort -pct_elapsed\n```\n\nUnderstanding this SPL\n\n**SLA Breach Prediction** — Predicting tickets likely to breach SLA before the deadline enables proactive reassignment, escalation, and automation — reducing contractual exposure and customer impact.\n\nDocumented **Data sources**: ServiceNow incident + SLA task / task_sla (`sourcetype=snow:task_sla` or equivalent). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:task_sla. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:task_sla\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(breach_time) OR breach_time=0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **pct_elapsed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct_elapsed>=80 AND pct_elapsed<100 AND isnotnull(pct_elapsed)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SLA Breach Prediction**): table _time, parent, number, sla_type, pct_elapsed, planned_end_time\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (at-risk tickets), Single value (at-risk count), Gauge (% SLA time consumed), Timeline (SLA burn-down).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for work at risk of missing its promise time, so we can act while there is still room to help the customer.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.15",
              "n": "Incident Reassignment Frequency",
              "c": "medium",
              "f": "beginner",
              "v": "Trending reassignment frequency (per period and per group) exposes routing quality, skills gaps, and noisy categories — complementing per-ticket reassignment counts.",
              "t": "`Splunk_TA_snow`",
              "d": "Incident audit / history (`sourcetype=snow:incident:audit` or `sys_audit` mapped)",
              "q": "index=itsm sourcetype=\"snow:incident:audit\" field_name=\"assignment_group\"\n| timechart span=1d count as reassign_events\n| appendcols [ search index=itsm sourcetype=\"snow:incident:audit\" field_name=\"assignment_group\" earliest=-30d@d\n  | stats count as events_30d ]\n| eval daily_avg=round(events_30d/30,1)\n| where reassign_events > daily_avg*1.5",
              "m": "Ingest audit rows where `assignment_group` changes; each row is one reassignment event. Timechart daily volume; compare to 30-day average to detect spikes. Break down by `category` and `assignment_group` with `stats count by _time, category` in a separate panel.",
              "z": "Line chart (reassignment events per day), Bar chart (events by category), Single value (30-day total).",
              "kfp": "Reassignments from severity changes, support model cutovers, or a very large event touching many groups.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: Incident audit / history (`sourcetype=snow:incident:audit` or `sys_audit` mapped).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest audit rows where `assignment_group` changes; each row is one reassignment event. Timechart daily volume; compare to 30-day average to detect spikes. Break down by `category` and `assignment_group` with `stats count by _time, category` in a separate panel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident:audit\" field_name=\"assignment_group\"\n| timechart span=1d count as reassign_events\n| appendcols [ search index=itsm sourcetype=\"snow:incident:audit\" field_name=\"assignment_group\" earliest=-30d@d\n  | stats count as events_30d ]\n| eval daily_avg=round(events_30d/30,1)\n| where reassign_events > daily_avg*1.5\n```\n\nUnderstanding this SPL\n\n**Incident Reassignment Frequency** — Trending reassignment frequency (per period and per group) exposes routing quality, skills gaps, and noisy categories — complementing per-ticket reassignment counts.\n\nDocumented **Data sources**: Incident audit / history (`sourcetype=snow:incident:audit` or `sys_audit` mapped). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• Adds columns from a subsearch with `appendcols`.\n• `eval` defines or adjusts **daily_avg** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where reassign_events > daily_avg*1.5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (reassignment events per day), Bar chart (events by category), Single value (30-day total).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count how many times a single incident bounces, so we can fix routing, skills gaps, and unclear ownership.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.16",
              "n": "Ticket Aging by Priority",
              "c": "high",
              "f": "beginner",
              "v": "Distribution of open ticket age by priority highlights backlog imbalance (e.g., many old P3s) and supports capacity and escalation decisions.",
              "t": "`Splunk_TA_snow`",
              "d": "Open incidents (`sourcetype=snow:incident`)",
              "q": "index=itsm sourcetype=\"snow:incident\" state IN (\"new\",\"in_progress\",\"on_hold\")\n| eval age_days=round((now()-opened_at)/86400,1)\n| eval bucket=case(age_days<=1,\"0-1d\", age_days<=7,\"2-7d\", age_days<=30,\"8-30d\", true(),\"30d+\")\n| eval pri=case(priority=1,\"P1\", priority=2,\"P2\", priority=3,\"P3\", priority=4,\"P4\", true(),\"Other\")\n| stats count by pri, bucket\n| sort pri, bucket",
              "m": "Normalize priority labels. Bucket age for open tickets only. Report stacked bar or pivot table (priority × age bucket). Alert when count in `30d+` for P1/P2 exceeds policy.",
              "z": "Stacked bar (age buckets by priority), Heatmap (priority × bucket), Table (raw counts).",
              "kfp": "Stale or old records while waiting on a vendor, a customer reply, a patch, or work we chose to park on purpose.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: Open incidents (`sourcetype=snow:incident`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize priority labels. Bucket age for open tickets only. Report stacked bar or pivot table (priority × age bucket). Alert when count in `30d+` for P1/P2 exceeds policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" state IN (\"new\",\"in_progress\",\"on_hold\")\n| eval age_days=round((now()-opened_at)/86400,1)\n| eval bucket=case(age_days<=1,\"0-1d\", age_days<=7,\"2-7d\", age_days<=30,\"8-30d\", true(),\"30d+\")\n| eval pri=case(priority=1,\"P1\", priority=2,\"P2\", priority=3,\"P3\", priority=4,\"P4\", true(),\"Other\")\n| stats count by pri, bucket\n| sort pri, bucket\n```\n\nUnderstanding this SPL\n\n**Ticket Aging by Priority** — Distribution of open ticket age by priority highlights backlog imbalance (e.g., many old P3s) and supports capacity and escalation decisions.\n\nDocumented **Data sources**: Open incidents (`sourcetype=snow:incident`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **bucket** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **pri** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by pri, bucket** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (age buckets by priority), Heatmap (priority × bucket), Table (raw counts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We break out old work by how urgent it is, so the right leaders see what must not wait.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.17",
              "n": "Auto-Close Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks incidents resolved by auto-close rules vs manual closure; excess auto-close may indicate poor engagement or policy gaming; too few may mean workflows are not firing.",
              "t": "`Splunk_TA_snow`",
              "d": "Incidents with resolution code / close notes / `closed_by` / `close_code`",
              "q": "index=itsm sourcetype=\"snow:incident\" state=\"closed\" earliest=-30d\n| eval auto_closed=if(match(lower(close_notes),\"auto[- ]?close\") OR match(lower(resolution_code),\"auto\") OR lower(closed_by)=\"system\",1,0)\n| stats count as total, sum(auto_closed) as auto_closed by category\n| eval auto_close_pct=round(100*auto_closed/total,1)\n| sort -auto_close_pct",
              "m": "Map your ServiceNow fields: auto-close may appear as `resolution_code`, workflow user, or `sys_mod_count` patterns. Adjust `auto_closed` logic to match internal policy. Report fleet auto-close % and by category; investigate outliers.",
              "z": "Single value (auto-close %), Bar chart (auto-close % by category), Table (top auto-closed categories).",
              "kfp": "Auto-close and workflow noise during template updates, new teams learning closure reasons, or a workflow upgrade weekend.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: Incidents with resolution code / close notes / `closed_by` / `close_code`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap your ServiceNow fields: auto-close may appear as `resolution_code`, workflow user, or `sys_mod_count` patterns. Adjust `auto_closed` logic to match internal policy. Report fleet auto-close % and by category; investigate outliers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" state=\"closed\" earliest=-30d\n| eval auto_closed=if(match(lower(close_notes),\"auto[- ]?close\") OR match(lower(resolution_code),\"auto\") OR lower(closed_by)=\"system\",1,0)\n| stats count as total, sum(auto_closed) as auto_closed by category\n| eval auto_close_pct=round(100*auto_closed/total,1)\n| sort -auto_close_pct\n```\n\nUnderstanding this SPL\n\n**Auto-Close Compliance** — Tracks incidents resolved by auto-close rules vs manual closure; excess auto-close may indicate poor engagement or policy gaming; too few may mean workflows are not firing.\n\nDocumented **Data sources**: Incidents with resolution code / close notes / `closed_by` / `close_code`. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **auto_closed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **auto_close_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (auto-close %), Bar chart (auto-close % by category), Table (top auto-closed categories).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check whether we close the way our policy says, so automated or bulk closes do not hide real work.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.18",
              "n": "Recurring Incident Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Clusters of similar incidents within a short window signal candidates for problem records, known-error articles, or permanent fixes.",
              "t": "`Splunk_TA_snow`",
              "d": "Incidents (`sourcetype=snow:incident`)",
              "q": "index=itsm sourcetype=\"snow:incident\" earliest=-30d\n| eval key=coalesce(cmdb_ci, category.\"|\".subcategory)\n| bin _time span=24h\n| stats count by _time, key, short_description\n| where count >= 3\n| sort -count",
              "m": "Group by CI and/or category hash; use `cluster` or `anomalydetection` for text similarity if needed. Alert when ≥N incidents per day on same key. Feed to problem management queue.",
              "z": "Table (recurring clusters), Bar chart (count by key), Timeline (spikes).",
              "kfp": "Reopens and repeats that share one flaky root cause, and tickets that only look the same in the short text field; problem review still helps sort them.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: Incidents (`sourcetype=snow:incident`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nGroup by CI and/or category hash; use `cluster` or `anomalydetection` for text similarity if needed. Alert when ≥N incidents per day on same key. Feed to problem management queue.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" earliest=-30d\n| eval key=coalesce(cmdb_ci, category.\"|\".subcategory)\n| bin _time span=24h\n| stats count by _time, key, short_description\n| where count >= 3\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Recurring Incident Detection** — Clusters of similar incidents within a short window signal candidates for problem records, known-error articles, or permanent fixes.\n\nDocumented **Data sources**: Incidents (`sourcetype=snow:incident`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, key, short_description** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count >= 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recurring clusters), Bar chart (count by key), Timeline (spikes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We spot the same kind of break showing up again and again, so we can hand it to problem work instead of one-off firefighting.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.19",
              "n": "Problem Management Root Cause Linking",
              "c": "high",
              "f": "intermediate",
              "v": "Ensures resolved incidents are tied to problem records with root cause when repeat patterns exist — closing the loop for ITIL problem management.",
              "t": "`Splunk_TA_snow`",
              "d": "`sourcetype=snow:incident`, `sourcetype=snow:problem`",
              "q": "index=itsm sourcetype=\"snow:incident\" state=\"closed\" earliest=-90d\n| eval has_pr=if(isnotnull(problem_id) AND problem_id!=\"\",\"1\",\"0\")\n| stats count as closed_inc, sum(eval(if(has_pr=\"1\",1,0))) as with_pr by category\n| eval link_pct=round(100*with_pr/closed_inc,1)\n| where closed_inc>20 AND link_pct < 30\n| sort link_pct",
              "m": "Map `problem_id` or `caused_by` from incident to problem. Report link rate by category and assignment group. Alert when categories with high volume have low problem linkage. Exclude categories excluded by policy.",
              "z": "Bar chart (problem link % by category), Table (gaps), Single value (overall link %).",
              "kfp": "Shifts in ticket and workflow data during large incidents, process or field changes, holiday coverage, and after big rollouts; spot-check a sample in the source system before you treat the trend as a people problem.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `sourcetype=snow:incident`, `sourcetype=snow:problem`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `problem_id` or `caused_by` from incident to problem. Report link rate by category and assignment group. Alert when categories with high volume have low problem linkage. Exclude categories excluded by policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" state=\"closed\" earliest=-90d\n| eval has_pr=if(isnotnull(problem_id) AND problem_id!=\"\",\"1\",\"0\")\n| stats count as closed_inc, sum(eval(if(has_pr=\"1\",1,0))) as with_pr by category\n| eval link_pct=round(100*with_pr/closed_inc,1)\n| where closed_inc>20 AND link_pct < 30\n| sort link_pct\n```\n\nUnderstanding this SPL\n\n**Problem Management Root Cause Linking** — Ensures resolved incidents are tied to problem records with root cause when repeat patterns exist — closing the loop for ITIL problem management.\n\nDocumented **Data sources**: `sourcetype=snow:incident`, `sourcetype=snow:problem`. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **has_pr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **link_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where closed_inc>20 AND link_pct < 30` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (problem link % by category), Table (gaps), Single value (overall link %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help tie new breaks back to a known problem record, so the story of what failed stays one thread.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.20",
              "n": "Major Incident Post-Mortem Compliance",
              "c": "high",
              "f": "beginner",
              "v": "Verifies that major incidents have completed post-mortems within policy — extending generic PIR tracking with explicit compliance scoring for Sev1 programs.",
              "t": "`Splunk_TA_snow`",
              "d": "Incidents with `major_incident` or `u_major_incident`, post-mortem fields",
              "q": "index=itsm sourcetype=\"snow:incident\" u_major_incident=true state=\"closed\" earliest=-90d\n| eval pm_due=resolved_at + (7*86400)\n| eval compliant=if(post_mortem_completed=true OR now() <= pm_due OR isnotnull(u_post_mortem_date),1,0)\n| where post_mortem_completed=false AND now() > pm_due\n| table number, short_description, resolved_at, pm_due, assignment_group\n| sort resolved_at",
              "m": "Align field names with your form (`u_post_mortem_complete`, tasks, etc.). Use related task table if post-mortem is a child task. Weekly report of breaches; executive summary of compliance %.",
              "z": "Table (non-compliant MIs), Single value (compliance %), Line chart (compliance trend).",
              "kfp": "Late completion when a major event is still open, legal review slows close-out, or a new runbook is not followed yet.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: Incidents with `major_incident` or `u_major_incident`, post-mortem fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign field names with your form (`u_post_mortem_complete`, tasks, etc.). Use related task table if post-mortem is a child task. Weekly report of breaches; executive summary of compliance %.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" u_major_incident=true state=\"closed\" earliest=-90d\n| eval pm_due=resolved_at + (7*86400)\n| eval compliant=if(post_mortem_completed=true OR now() <= pm_due OR isnotnull(u_post_mortem_date),1,0)\n| where post_mortem_completed=false AND now() > pm_due\n| table number, short_description, resolved_at, pm_due, assignment_group\n| sort resolved_at\n```\n\nUnderstanding this SPL\n\n**Major Incident Post-Mortem Compliance** — Verifies that major incidents have completed post-mortems within policy — extending generic PIR tracking with explicit compliance scoring for Sev1 programs.\n\nDocumented **Data sources**: Incidents with `major_incident` or `u_major_incident`, post-mortem fields. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pm_due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where post_mortem_completed=false AND now() > pm_due` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Major Incident Post-Mortem Compliance**): table number, short_description, resolved_at, pm_due, assignment_group\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant MIs), Single value (compliance %), Line chart (compliance trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track whether the big events get a learning review, so the organization actually improves after the all-hands fire drill.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 23 (Incident management) is enforced — Splunk UC-16.1.20: Major Incident Post-Mortem Compliance.",
                  "ea": "Saved search 'UC-16.1.20' running on Incidents with major_incident or u_major_incident, post-mortem fields, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.apra.gov.au/information-security"
                },
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "36",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 36 (Notification of incidents) is enforced — Splunk UC-16.1.20: Major Incident Post-Mortem Compliance.",
                  "ea": "Saved search 'UC-16.1.20' running on Incidents with major_incident or u_major_incident, post-mortem fields, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.apra.gov.au/information-security"
                },
                {
                  "r": "MAS TRM",
                  "v": "2021",
                  "cl": "§8.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MAS TRM §8.1.1 (IT operations — incident mgmt) is enforced — Splunk UC-16.1.20: Major Incident Post-Mortem Compliance.",
                  "ea": "Saved search 'UC-16.1.20' running on Incidents with major_incident or u_major_incident, post-mortem fields, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.mas.gov.sg/-/media/mas/regulations-and-financial-stability/regulatory-and-supervisory-framework/risk-management/trm-guidelines-18-january-2021.pdf"
                },
                {
                  "r": "UK NIS",
                  "v": "2018",
                  "cl": "Reg.11",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK NIS Reg.11 (Incident reporting) is enforced — Splunk UC-16.1.20: Major Incident Post-Mortem Compliance.",
                  "ea": "Saved search 'UC-16.1.20' running on Incidents with major_incident or u_major_incident, post-mortem fields, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.legislation.gov.uk/uksi/2018/506/contents"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.21",
              "n": "War Room Activation Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks when bridge/war-room workflows activate (chat, conference bridges, tags) for major incidents — supporting governance and after-action metrics.",
              "t": "Custom (ServiceNow tags/tasks, Slack/Teams webhooks, Zoom API)",
              "d": "Incident updates with war-room flag, collaboration events (`sourcetype=snow:incident:activity` or chat)",
              "q": "index=itsm (sourcetype=\"snow:incident:activity\" OR sourcetype=\"chat:war_room\")\n| search war_room=true OR match(raw,\"(?i)bridge|war room|command center\")\n| stats earliest(_time) as first_bridge by incident_number\n| join max=1 incident_number [ search index=itsm sourcetype=\"snow:incident\" u_major_incident=true | rename number as incident_number ]\n| eval bridge_delay_mins=round((first_bridge-opened_at)/60,0)\n| table incident_number, opened_at, first_bridge, bridge_delay_mins",
              "m": "Normalize on `incident_number`. Ingest chat or activity logs where bridges are declared. Measure delay from incident open to first war-room event. Report monthly activations and average delay.",
              "z": "Table (MI + bridge times), Line chart (activations per month), Histogram (bridge delay).",
              "kfp": "Late completion when a major event is still open, legal review slows close-out, or a new runbook is not followed yet.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (ServiceNow tags/tasks, Slack/Teams webhooks, Zoom API).\n• Ensure the following data sources are available: Incident updates with war-room flag, collaboration events (`sourcetype=snow:incident:activity` or chat).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize on `incident_number`. Ingest chat or activity logs where bridges are declared. Measure delay from incident open to first war-room event. Report monthly activations and average delay.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm (sourcetype=\"snow:incident:activity\" OR sourcetype=\"chat:war_room\")\n| search war_room=true OR match(raw,\"(?i)bridge|war room|command center\")\n| stats earliest(_time) as first_bridge by incident_number\n| join max=1 incident_number [ search index=itsm sourcetype=\"snow:incident\" u_major_incident=true | rename number as incident_number ]\n| eval bridge_delay_mins=round((first_bridge-opened_at)/60,0)\n| table incident_number, opened_at, first_bridge, bridge_delay_mins\n```\n\nUnderstanding this SPL\n\n**War Room Activation Tracking** — Tracks when bridge/war-room workflows activate (chat, conference bridges, tags) for major incidents — supporting governance and after-action metrics.\n\nDocumented **Data sources**: Incident updates with war-room flag, collaboration events (`sourcetype=snow:incident:activity` or chat). **App/TA** (typical add-on context): Custom (ServiceNow tags/tasks, Slack/Teams webhooks, Zoom API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident:activity, chat:war_room. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident:activity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by incident_number** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **bridge_delay_mins** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **War Room Activation Tracking**): table incident_number, opened_at, first_bridge, bridge_delay_mins\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (MI + bridge times), Line chart (activations per month), Histogram (bridge delay).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We log when the serious bridge room and command habits turn on, so we can show leaders how often we go to full response mode.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.22",
              "n": "Escalation Path Audit",
              "c": "medium",
              "f": "intermediate",
              "v": "Audits whether incidents followed the documented escalation chain (L1→L2→vendor) for compliance and training.",
              "t": "`Splunk_TA_snow`",
              "d": "Incident history / audit (`sourcetype=snow:incident:audit`)",
              "q": "index=itsm sourcetype=\"snow:incident:audit\" field_name=\"assignment_group\"\n| eval from_group=old_value, to_group=new_value\n| eval step=from_group.\"->\".to_group\n| stats values(step) as escalation_path, count as hops by number\n| join max=1 number [ search index=itsm sourcetype=\"snow:incident\" priority IN (1,2) earliest=-90d | fields number, priority ]\n| where hops>4\n| table number, priority, escalation_path, hops",
              "m": "Build assignment hop strings from audit `old_value`/`new_value`. Flag incidents with excessive hops for P1/P2 (policy threshold). Optionally `lookup escalation_policy.csv` with first/last hop pairs for stricter audits. Adjust field names to match `sys_audit` extractions.",
              "z": "Table (unexpected paths), Sankey (escalation flow), Single value (audit exceptions).",
              "kfp": "Reassignments from severity changes, support model cutovers, or a very large event touching many groups.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: Incident history / audit (`sourcetype=snow:incident:audit`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild assignment hop strings from audit `old_value`/`new_value`. Flag incidents with excessive hops for P1/P2 (policy threshold). Optionally `lookup escalation_policy.csv` with first/last hop pairs for stricter audits. Adjust field names to match `sys_audit` extractions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident:audit\" field_name=\"assignment_group\"\n| eval from_group=old_value, to_group=new_value\n| eval step=from_group.\"->\".to_group\n| stats values(step) as escalation_path, count as hops by number\n| join max=1 number [ search index=itsm sourcetype=\"snow:incident\" priority IN (1,2) earliest=-90d | fields number, priority ]\n| where hops>4\n| table number, priority, escalation_path, hops\n```\n\nUnderstanding this SPL\n\n**Escalation Path Audit** — Audits whether incidents followed the documented escalation chain (L1→L2→vendor) for compliance and training.\n\nDocumented **Data sources**: Incident history / audit (`sourcetype=snow:incident:audit`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **from_group** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **step** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by number** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where hops>4` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Escalation Path Audit**): table number, priority, escalation_path, hops\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unexpected paths), Sankey (escalation flow), Single value (audit exceptions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We review whether escalations follow the path we agreed, so the right people get pulled in and nobody is skipped by accident.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.23",
              "n": "Service Request Fulfillment Rate",
              "c": "medium",
              "f": "beginner",
              "v": "Percentage of catalog requests fulfilled successfully within the reporting period — distinct from average fulfillment time (UC-16.1.10).",
              "t": "`Splunk_TA_snow`",
              "d": "`sourcetype=snow:sc_request` or `snow:request`",
              "q": "index=itsm sourcetype=\"snow:sc_request\" earliest=-30d\n| eval fulfilled=if(lower(state) IN (\"closed\",\"complete\",\"fulfilled\") AND lower(close_code)!=\"cancel\",1,0)\n| stats count as total, sum(fulfilled) as fulfilled by cat_item\n| eval fulfill_rate=round(100*fulfilled/total,1)\n| where total>=5\n| sort fulfill_rate\n| head 20",
              "m": "Map request `state` and `close_code` for cancelled vs fulfilled. Exclude duplicates. Report overall rate and bottom 20 catalog items by fulfill rate for remediation.",
              "z": "Bar chart (fulfill rate by catalog item), Single value (overall fulfill %), Table (bottom performers).",
              "kfp": "Slower or fewer completions with approver PTO, customer waits, vendor delays, and intake spikes after a large incident.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `sourcetype=snow:sc_request` or `snow:request`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap request `state` and `close_code` for cancelled vs fulfilled. Exclude duplicates. Report overall rate and bottom 20 catalog items by fulfill rate for remediation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:sc_request\" earliest=-30d\n| eval fulfilled=if(lower(state) IN (\"closed\",\"complete\",\"fulfilled\") AND lower(close_code)!=\"cancel\",1,0)\n| stats count as total, sum(fulfilled) as fulfilled by cat_item\n| eval fulfill_rate=round(100*fulfilled/total,1)\n| where total>=5\n| sort fulfill_rate\n| head 20\n```\n\nUnderstanding this SPL\n\n**Service Request Fulfillment Rate** — Percentage of catalog requests fulfilled successfully within the reporting period — distinct from average fulfillment time (UC-16.1.10).\n\nDocumented **Data sources**: `sourcetype=snow:sc_request` or `snow:request`. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:sc_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:sc_request\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fulfilled** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by cat_item** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fulfill_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where total>=5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (fulfill rate by catalog item), Single value (overall fulfill %), Table (bottom performers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see what share of standard requests we finish in good order, so we can balance people and process with demand.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.24",
              "n": "ServiceNow Bidirectional Incident Sync",
              "c": "high",
              "f": "advanced",
              "v": "Bidirectional sync between ITSI episodes and ServiceNow incidents eliminates manual ticket creation and ensures incident status is consistent across platforms.",
              "t": "Splunk ITSI, Splunk Add-on for ServiceNow",
              "d": "`itsi_grouped_alerts`, ServiceNow incident table",
              "q": "index=itsi_grouped_alerts status!=5\n| eval has_snow_ticket=if(isnotnull(snow_incident_number), \"synced\", \"unsynced\")\n| stats count by has_snow_ticket severity\n| sort severity",
              "m": "Configure ServiceNow integration in ITSI: map episode severity to ServiceNow priority, define assignment groups, and enable bidirectional status updates. Episodes auto-create incidents; ServiceNow resolution closes episodes. Monitor sync latency and failure rate. Requires Splunk Add-on for ServiceNow 5.5+.",
              "z": "Table (sync status by severity), Single value (unsynced episode count), Time chart (sync latency).",
              "kfp": "Short mismatches from rate limits, network gaps, and bulk updates; line up a few record pairs on the same day in both systems.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI, Splunk Add-on for ServiceNow.\n• Ensure the following data sources are available: `itsi_grouped_alerts`, ServiceNow incident table.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure ServiceNow integration in ITSI: map episode severity to ServiceNow priority, define assignment groups, and enable bidirectional status updates. Episodes auto-create incidents; ServiceNow resolution closes episodes. Monitor sync latency and failure rate. Requires Splunk Add-on for ServiceNow 5.5+.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_grouped_alerts status!=5\n| eval has_snow_ticket=if(isnotnull(snow_incident_number), \"synced\", \"unsynced\")\n| stats count by has_snow_ticket severity\n| sort severity\n```\n\nUnderstanding this SPL\n\n**ServiceNow Bidirectional Incident Sync** — Bidirectional sync between ITSI episodes and ServiceNow incidents eliminates manual ticket creation and ensures incident status is consistent across platforms.\n\nDocumented **Data sources**: `itsi_grouped_alerts`, ServiceNow incident table. **App/TA** (typical add-on context): Splunk ITSI, Splunk Add-on for ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_grouped_alerts.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_grouped_alerts. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **has_snow_ticket** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by has_snow_ticket severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ServiceNow Bidirectional Incident Sync** — Bidirectional sync between ITSI episodes and ServiceNow incidents eliminates manual ticket creation and ensures incident status is consistent across platforms.\n\nDocumented **Data sources**: `itsi_grouped_alerts`, ServiceNow incident table. **App/TA** (typical add-on context): Splunk ITSI, Splunk Add-on for ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sync status by severity), Single value (unsynced episode count), Time chart (sync latency).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help keep a serious incident in step between systems, so status and assignees do not fight each other in two places.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "itsi",
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.25",
              "n": "First Response Time vs SLA Target",
              "c": "critical",
              "f": "intermediate",
              "v": "First response time is a leading indicator of resolution SLA performance; trending FRT against targets aligns the service desk with ITIL service level practices and highlights queue starvation before resolution clocks run out.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:incident\"` (`opened_at`, `first_response_time` or `responded_at`, `priority`, `assignment_group`)",
              "q": "index=itsm sourcetype=\"snow:incident\" earliest=-30d\n| eval open_ts=coalesce(opened_at, strptime(opened_at,\"%Y-%m-%d %H:%M:%S\"))\n| eval resp_ts=coalesce(first_response_time, responded_at)\n| where isnotnull(resp_ts) AND isnotnull(open_ts)\n| eval frt_mins=round((resp_ts-open_ts)/60,1)\n| eval target_mins=case(priority=1,15, priority=2,30, priority=3,240, true(),480)\n| eval breach=if(frt_mins>target_mins,1,0)\n| stats count as tickets, sum(breach) as breaches, median(frt_mins) as med_frt by assignment_group\n| eval breach_pct=round(100*breaches/nullif(tickets,0),1)\n| sort -breach_pct",
              "m": "(1) Map ServiceNow response timestamps to `first_response_time`/`responded_at` in the TA props; (2) Align `target_mins` with your published response SLAs per priority; (3) Schedule the search daily and alert when `breach_pct` exceeds policy for two consecutive intervals for any `assignment_group`.",
              "z": "Bar chart (breach % by assignment group), Table (group medians vs target), Line chart (weekly breach % trend).",
              "kfp": "MTTR and response drifts from harder incidents, new staff, on-call experience mix, or a wider scope of what we close as done.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928), [CIM: Ticket_Management](https://docs.splunk.com/Documentation/CIM/latest/User/Ticket_Management)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:incident\"` (`opened_at`, `first_response_time` or `responded_at`, `priority`, `assignment_group`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map ServiceNow response timestamps to `first_response_time`/`responded_at` in the TA props; (2) Align `target_mins` with your published response SLAs per priority; (3) Schedule the search daily and alert when `breach_pct` exceeds policy for two consecutive intervals for any `assignment_group`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" earliest=-30d\n| eval open_ts=coalesce(opened_at, strptime(opened_at,\"%Y-%m-%d %H:%M:%S\"))\n| eval resp_ts=coalesce(first_response_time, responded_at)\n| where isnotnull(resp_ts) AND isnotnull(open_ts)\n| eval frt_mins=round((resp_ts-open_ts)/60,1)\n| eval target_mins=case(priority=1,15, priority=2,30, priority=3,240, true(),480)\n| eval breach=if(frt_mins>target_mins,1,0)\n| stats count as tickets, sum(breach) as breaches, median(frt_mins) as med_frt by assignment_group\n| eval breach_pct=round(100*breaches/nullif(tickets,0),1)\n| sort -breach_pct\n```\n\nUnderstanding this SPL\n\n**First Response Time vs SLA Target** — First response time is a leading indicator of resolution SLA performance; trending FRT against targets aligns the service desk with ITIL service level practices and highlights queue starvation before resolution clocks run out.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (`opened_at`, `first_response_time` or `responded_at`, `priority`, `assignment_group`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **open_ts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **resp_ts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(resp_ts) AND isnotnull(open_ts)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **frt_mins** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **target_mins** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by assignment_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **breach_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**First Response Time vs SLA Target** — First response time is a leading indicator of resolution SLA performance; trending FRT against targets aligns the service desk with ITIL service level practices and highlights queue starvation before resolution clocks run out.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (`opened_at`, `first_response_time` or `responded_at`, `priority`, `assignment_group`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Ticket_Management.All_Ticket_Management` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (breach % by assignment group), Table (group medians vs target), Line chart (weekly breach % trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare first human touch to what we said we would do, so customers are not left waiting in silence.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "a": [
                "Ticket_Management"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.26",
              "n": "Catalog Request Item Backlog and WIP",
              "c": "high",
              "f": "beginner",
              "v": "A growing backlog of open request items signals fulfillment bottlenecks or catalog misconfiguration; monitoring WIP by catalog item supports request fulfilment SLAs and ITIL service request lifecycle discipline.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:sc_req_item\"` (`state`, `cat_item`, `assignment_group`, `sys_created_on`)",
              "q": "index=itsm sourcetype=\"snow:sc_req_item\" earliest=-14d\n| where match(lower(state),\"(?i)open|work in progress|approved|sc_work|waiting for approval\")\n| eval age_days=round((now()-sys_created_on)/86400,1)\n| stats count as wip, median(age_days) as med_age_days by cat_item, assignment_group\n| sort -wip\n| head 50",
              "m": "(1) Normalize `state` values to your catalog workflow; (2) Exclude cancelled/closed states via `where`; (3) Review top `wip` lines weekly with fulfillment owners and feed automation candidates back to the service catalog team.",
              "z": "Bar chart (WIP by catalog item), Table (top queues with median age), Heatmap (assignment_group × cat_item).",
              "kfp": "Backlog and work-in-process swings during freezes, year-end code lock, or a slow vendor for many tickets at once.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (`state`, `cat_item`, `assignment_group`, `sys_created_on`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize `state` values to your catalog workflow; (2) Exclude cancelled/closed states via `where`; (3) Review top `wip` lines weekly with fulfillment owners and feed automation candidates back to the service catalog team.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:sc_req_item\" earliest=-14d\n| where match(lower(state),\"(?i)open|work in progress|approved|sc_work|waiting for approval\")\n| eval age_days=round((now()-sys_created_on)/86400,1)\n| stats count as wip, median(age_days) as med_age_days by cat_item, assignment_group\n| sort -wip\n| head 50\n```\n\nUnderstanding this SPL\n\n**Catalog Request Item Backlog and WIP** — A growing backlog of open request items signals fulfillment bottlenecks or catalog misconfiguration; monitoring WIP by catalog item supports request fulfilment SLAs and ITIL service request lifecycle discipline.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (`state`, `cat_item`, `assignment_group`, `sys_created_on`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:sc_req_item. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:sc_req_item\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(state),\"(?i)open|work in progress|approved|sc_work|waiting for approval\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by cat_item, assignment_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (WIP by catalog item), Table (top queues with median age), Heatmap (assignment_group × cat_item).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We show how many catalog items are in queue and in flight, so we can staff catalog work and see stuck orders.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.1.27",
              "n": "On-Hold Time Impact on Resolution SLA",
              "c": "medium",
              "f": "intermediate",
              "v": "Excessive on-hold time often explains resolution SLA misses without reflecting true work effort; measuring hold hours by category supports ITIL incident lifecycle controls and customer communication quality.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:incident\"` (`on_hold_duration`, `category`, `assignment_group`, `priority`)",
              "q": "index=itsm sourcetype=\"snow:incident\" earliest=-30d state IN (\"closed\",\"resolved\",\"6\",\"7\")\n| eval hold_hrs=round(coalesce(on_hold_duration,0)/3600,2)\n| eval resolve_hrs=round((resolved_at-opened_at)/3600,2)\n| stats sum(hold_hrs) as total_hold_hrs, sum(resolve_hrs) as total_resolve_hrs, count as tickets by category\n| eval hold_share_pct=round(100*total_hold_hrs/nullif(total_resolve_hrs,0),1)\n| sort -hold_share_pct",
              "m": "(1) Confirm whether `on_hold_duration` is seconds from ServiceNow or use `business_stc`/`calendar_stc` fields if your instance stores pause separately; (2) Exclude incidents with invalid timestamps; (3) Alert when `hold_share_pct` exceeds agreed thresholds for customer-facing categories.",
              "z": "Bar chart (hold share % by category), Table (top categories), Line chart (weekly hold hours trend).",
              "kfp": "On-hold and pause that reflect customer and vendor wait states, not always poor service-desk work; check categories first.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928), [CIM: Ticket_Management](https://docs.splunk.com/Documentation/CIM/latest/User/Ticket_Management)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:incident\"` (`on_hold_duration`, `category`, `assignment_group`, `priority`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm whether `on_hold_duration` is seconds from ServiceNow or use `business_stc`/`calendar_stc` fields if your instance stores pause separately; (2) Exclude incidents with invalid timestamps; (3) Alert when `hold_share_pct` exceeds agreed thresholds for customer-facing categories.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" earliest=-30d state IN (\"closed\",\"resolved\",\"6\",\"7\")\n| eval hold_hrs=round(coalesce(on_hold_duration,0)/3600,2)\n| eval resolve_hrs=round((resolved_at-opened_at)/3600,2)\n| stats sum(hold_hrs) as total_hold_hrs, sum(resolve_hrs) as total_resolve_hrs, count as tickets by category\n| eval hold_share_pct=round(100*total_hold_hrs/nullif(total_resolve_hrs,0),1)\n| sort -hold_share_pct\n```\n\nUnderstanding this SPL\n\n**On-Hold Time Impact on Resolution SLA** — Excessive on-hold time often explains resolution SLA misses without reflecting true work effort; measuring hold hours by category supports ITIL incident lifecycle controls and customer communication quality.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (`on_hold_duration`, `category`, `assignment_group`, `priority`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hold_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **resolve_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hold_share_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.category | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**On-Hold Time Impact on Resolution SLA** — Excessive on-hold time often explains resolution SLA misses without reflecting true work effort; measuring hold hours by category supports ITIL incident lifecycle controls and customer communication quality.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (`on_hold_duration`, `category`, `assignment_group`, `priority`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Ticket_Management.All_Ticket_Management` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (hold share % by category), Table (top categories), Line chart (weekly hold hours trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see how long we pause the clock, so we are honest about who waits and why resolution dates move.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "a": [
                "Ticket_Management"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.category | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 38.9,
          "qd": {
            "gold": 0,
            "silver": 4,
            "bronze": 23,
            "none": 0
          }
        },
        {
          "i": "16.2",
          "n": "Configuration Management (CMDB)",
          "u": [
            {
              "i": "16.2.1",
              "n": "CMDB Data Quality Score",
              "c": "high",
              "f": "beginner",
              "v": "Poor CMDB data quality undermines all ITSM processes. Scoring and trending drives data quality improvement initiatives.",
              "t": "`Splunk_TA_snow`, custom metrics",
              "d": "CMDB CI data (completeness, accuracy, freshness)",
              "q": "index=itsm sourcetype=\"snow:cmdb_ci\"\n| eval complete=if(isnotnull(owner) AND isnotnull(support_group) AND isnotnull(environment),1,0)\n| eval fresh=if(last_discovered > relative_time(now(),\"-30d\"),1,0)\n| stats avg(complete) as completeness, avg(fresh) as freshness\n| eval quality_score=round((completeness*50+freshness*50),1)",
              "m": "Define CMDB quality dimensions (completeness, accuracy, freshness, relationships). Score each dimension. Calculate composite quality score. Track trend over time. Set improvement targets. Report to CMDB governance board.",
              "z": "Gauge (quality score), Line chart (quality trend), Bar chart (quality by dimension), Table (worst-scoring CIs).",
              "kfp": "Coverage gaps while onboarding new systems, after mergers, during discovery or field changes, or when teams skip owner updates in busy months.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`, custom metrics.\n• Ensure the following data sources are available: CMDB CI data (completeness, accuracy, freshness).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine CMDB quality dimensions (completeness, accuracy, freshness, relationships). Score each dimension. Calculate composite quality score. Track trend over time. Set improvement targets. Report to CMDB governance board.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:cmdb_ci\"\n| eval complete=if(isnotnull(owner) AND isnotnull(support_group) AND isnotnull(environment),1,0)\n| eval fresh=if(last_discovered > relative_time(now(),\"-30d\"),1,0)\n| stats avg(complete) as completeness, avg(fresh) as freshness\n| eval quality_score=round((completeness*50+freshness*50),1)\n```\n\nUnderstanding this SPL\n\n**CMDB Data Quality Score** — Poor CMDB data quality undermines all ITSM processes. Scoring and trending drives data quality improvement initiatives.\n\nDocumented **Data sources**: CMDB CI data (completeness, accuracy, freshness). **App/TA** (typical add-on context): `Splunk_TA_snow`, custom metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_ci. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:cmdb_ci\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **complete** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **fresh** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **quality_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (quality score), Line chart (quality trend), Bar chart (quality by dimension), Table (worst-scoring CIs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We score how complete and fresh our configuration records are, so planning and change work rests on data we can trust.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "16.2.2",
              "n": "CI Discovery Reconciliation",
              "c": "high",
              "f": "intermediate",
              "v": "CIs in the network but not in the CMDB are unmanaged risks. Reconciliation ensures CMDB completeness.",
              "t": "Discovery tools + CMDB",
              "d": "Discovery scan results, CMDB CI records",
              "q": "| inputlookup discovered_assets.csv\n| join type=left max=1 hostname [search index=itsm sourcetype=\"snow:cmdb_ci\" | table hostname, sys_id, ci_class]\n| where isnull(sys_id)\n| table hostname, ip_address, os, discovered_date",
              "m": "Compare auto-discovered assets (ServiceNow Discovery, SCCM, network scans) with CMDB records. Identify CIs found by discovery but absent from CMDB. Create workflow to review and add missing CIs. Track gap closure over time.",
              "z": "Table (unmatched discovered assets), Single value (CMDB gap count), Pie chart (matched vs unmatched), Line chart (gap trend).",
              "kfp": "Shifts in ticket and workflow data during large incidents, process or field changes, holiday coverage, and after big rollouts; spot-check a sample in the source system before you treat the trend as a people problem.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Discovery tools + CMDB.\n• Ensure the following data sources are available: Discovery scan results, CMDB CI records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare auto-discovered assets (ServiceNow Discovery, SCCM, network scans) with CMDB records. Identify CIs found by discovery but absent from CMDB. Create workflow to review and add missing CIs. Track gap closure over time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup discovered_assets.csv\n| join type=left max=1 hostname [search index=itsm sourcetype=\"snow:cmdb_ci\" | table hostname, sys_id, ci_class]\n| where isnull(sys_id)\n| table hostname, ip_address, os, discovered_date\n```\n\nUnderstanding this SPL\n\n**CI Discovery Reconciliation** — CIs in the network but not in the CMDB are unmanaged risks. Reconciliation ensures CMDB completeness.\n\nDocumented **Data sources**: Discovery scan results, CMDB CI records. **App/TA** (typical add-on context): Discovery tools + CMDB. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(sys_id)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CI Discovery Reconciliation**): table hostname, ip_address, os, discovered_date\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unmatched discovered assets), Single value (CMDB gap count), Pie chart (matched vs unmatched), Line chart (gap trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare what we discover in the world with what the record book says, so we catch blind spots and duplicate names.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.3",
              "n": "Orphaned CI Detection",
              "c": "medium",
              "f": "beginner",
              "v": "CIs without owners or service mappings aren't managed during incidents, creating accountability gaps and shadow infrastructure.",
              "t": "`Splunk_TA_snow`",
              "d": "CMDB CI attributes",
              "q": "index=itsm sourcetype=\"snow:cmdb_ci\" operational_status=\"operational\"\n| where isnull(assigned_to) OR isnull(support_group) OR isnull(u_service)\n| table name, ci_class, assigned_to, support_group, u_service",
              "m": "Query CMDB for operational CIs missing key attributes (owner, support group, service mapping). Report on orphaned CI inventory. Assign ownership through automated or manual workflow. Track orphan reduction over time.",
              "z": "Table (orphaned CIs), Pie chart (by CI class), Bar chart (orphans by missing attribute), Single value (total orphaned CIs).",
              "kfp": "Coverage gaps while onboarding new systems, after mergers, during discovery or field changes, or when teams skip owner updates in busy months.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: CMDB CI attributes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nQuery CMDB for operational CIs missing key attributes (owner, support group, service mapping). Report on orphaned CI inventory. Assign ownership through automated or manual workflow. Track orphan reduction over time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:cmdb_ci\" operational_status=\"operational\"\n| where isnull(assigned_to) OR isnull(support_group) OR isnull(u_service)\n| table name, ci_class, assigned_to, support_group, u_service\n```\n\nUnderstanding this SPL\n\n**Orphaned CI Detection** — CIs without owners or service mappings aren't managed during incidents, creating accountability gaps and shadow infrastructure.\n\nDocumented **Data sources**: CMDB CI attributes. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_ci. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:cmdb_ci\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(assigned_to) OR isnull(support_group) OR isnull(u_service)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Orphaned CI Detection**): table name, ci_class, assigned_to, support_group, u_service\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (orphaned CIs), Pie chart (by CI class), Bar chart (orphans by missing attribute), Single value (total orphaned CIs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We find things that have no good parent in the book of record, so ownership and support lines stay clear.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.4",
              "n": "Relationship Integrity Check",
              "c": "medium",
              "f": "intermediate",
              "v": "Accurate CI relationships enable impact analysis during incidents. Incomplete relationships undermine service mapping.",
              "t": "`Splunk_TA_snow`",
              "d": "CMDB relationship data",
              "q": "index=itsm sourcetype=\"snow:cmdb_ci\" ci_class IN (\"cmdb_ci_server\",\"cmdb_ci_app_server\")\n| join type=left max=1 sys_id [search index=itsm sourcetype=\"snow:cmdb_rel_ci\" | stats count as rel_count by child | rename child as sys_id]\n| where isnull(rel_count) OR rel_count=0\n| table name, ci_class, rel_count",
              "m": "Validate CI relationships are present and bidirectional. Identify servers with no application relationships, applications with no infrastructure dependencies. Report on relationship completeness. Use for impact analysis validation.",
              "z": "Table (CIs without relationships), Network graph (CI dependency map), Single value (% CIs with relationships).",
              "kfp": "False duplicates and drift when naming rules change, discovery merges, or the same system has more than one nickname.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: CMDB relationship data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nValidate CI relationships are present and bidirectional. Identify servers with no application relationships, applications with no infrastructure dependencies. Report on relationship completeness. Use for impact analysis validation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:cmdb_ci\" ci_class IN (\"cmdb_ci_server\",\"cmdb_ci_app_server\")\n| join type=left max=1 sys_id [search index=itsm sourcetype=\"snow:cmdb_rel_ci\" | stats count as rel_count by child | rename child as sys_id]\n| where isnull(rel_count) OR rel_count=0\n| table name, ci_class, rel_count\n```\n\nUnderstanding this SPL\n\n**Relationship Integrity Check** — Accurate CI relationships enable impact analysis during incidents. Incomplete relationships undermine service mapping.\n\nDocumented **Data sources**: CMDB relationship data. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_ci. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:cmdb_ci\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(rel_count) OR rel_count=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Relationship Integrity Check**): table name, ci_class, rel_count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (CIs without relationships), Network graph (CI dependency map), Single value (% CIs with relationships).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for relationship links that do not add up, so maps of what depends on what tell the truth.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.5",
              "n": "CMDB Change Audit",
              "c": "medium",
              "f": "beginner",
              "v": "Tracking all CI attribute changes supports compliance auditing and helps detect unauthorized configuration changes.",
              "t": "`Splunk_TA_snow`",
              "d": "CMDB audit trail (sys_audit)",
              "q": "index=itsm sourcetype=\"snow:cmdb_audit\"\n| table _time, ci_name, field_name, old_value, new_value, changed_by\n| sort -_time",
              "m": "Ingest CMDB audit records. Track all CI attribute changes. Alert on changes to critical CIs outside change windows. Report on change volume by CI class and source (manual vs discovery). Validate accuracy of discovery updates.",
              "z": "Table (CI changes), Timeline (change events), Bar chart (changes by CI class), Line chart (change volume trend).",
              "kfp": "Coverage gaps while onboarding new systems, after mergers, during discovery or field changes, or when teams skip owner updates in busy months.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: CMDB audit trail (sys_audit).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest CMDB audit records. Track all CI attribute changes. Alert on changes to critical CIs outside change windows. Report on change volume by CI class and source (manual vs discovery). Validate accuracy of discovery updates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:cmdb_audit\"\n| table _time, ci_name, field_name, old_value, new_value, changed_by\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**CMDB Change Audit** — Tracking all CI attribute changes supports compliance auditing and helps detect unauthorized configuration changes.\n\nDocumented **Data sources**: CMDB audit trail (sys_audit). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:cmdb_audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **CMDB Change Audit**): table _time, ci_name, field_name, old_value, new_value, changed_by\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (CI changes), Timeline (change events), Bar chart (changes by CI class), Line chart (change volume trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We review changes to configuration records, so the history of what moved is as serious as a production change.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.6",
              "n": "CI Relationship Drift",
              "c": "high",
              "f": "intermediate",
              "v": "Detects when CI relationships change unexpectedly versus a baseline — supporting impact analysis integrity and unauthorized dependency changes.",
              "t": "`Splunk_TA_snow`",
              "d": "`sourcetype=snow:cmdb_rel_ci` or relationship table snapshots",
              "q": "index=itsm sourcetype=\"snow:cmdb_rel_ci\" earliest=-7d@d\n| bucket _time span=1d\n| stats values(_time) as days by parent, child, relationship_type\n| eval change_days=mvcount(days)\n| where change_days>1\n| table parent, child, relationship_type, change_days",
              "m": "Schedule a nightly `outputlookup cmdb_rel_baseline.csv` from `cmdb_rel_ci` and compare with `| diff` or `join` on parent+child+type for strict drift. The SPL above flags relationships with activity on multiple days in a week (churn). Alert on new parent/child pairs against a saved baseline lookup when available.",
              "z": "Table (drifted relationships), Single value (drift count), Timeline (relationship changes).",
              "kfp": "False duplicates and drift when naming rules change, discovery merges, or the same system has more than one nickname.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `sourcetype=snow:cmdb_rel_ci` or relationship table snapshots.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule a nightly `outputlookup cmdb_rel_baseline.csv` from `cmdb_rel_ci` and compare with `| diff` or `join` on parent+child+type for strict drift. The SPL above flags relationships with activity on multiple days in a week (churn). Alert on new parent/child pairs against a saved baseline lookup when available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:cmdb_rel_ci\" earliest=-7d@d\n| bucket _time span=1d\n| stats values(_time) as days by parent, child, relationship_type\n| eval change_days=mvcount(days)\n| where change_days>1\n| table parent, child, relationship_type, change_days\n```\n\nUnderstanding this SPL\n\n**CI Relationship Drift** — Detects when CI relationships change unexpectedly versus a baseline — supporting impact analysis integrity and unauthorized dependency changes.\n\nDocumented **Data sources**: `sourcetype=snow:cmdb_rel_ci` or relationship table snapshots. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_rel_ci. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:cmdb_rel_ci\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by parent, child, relationship_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **change_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where change_days>1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CI Relationship Drift**): table parent, child, relationship_type, change_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (drifted relationships), Single value (drift count), Timeline (relationship changes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We notice when the real world and the book slowly disagree, so drift does not build until nobody trusts the data.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.7",
              "n": "Asset Discovery Reconciliation",
              "c": "high",
              "f": "intermediate",
              "v": "Reconciles discovery source of truth against CMDB for asset inventory — variance by source, age, and confidence.",
              "t": "ServiceNow Discovery, SCCM, or network scan feeds",
              "d": "`sourcetype=snow:discovery_model` or custom `discovery:asset`, `sourcetype=snow:cmdb_ci`",
              "q": "index=discovery sourcetype=\"discovery:asset\" earliest=-1d\n| eval host_key=lower(mvindex(split(hostname,\".\"),0))\n| stats latest(_time) as last_seen by host_key, serial_number\n| join type=left max=1 host_key [ search index=itsm sourcetype=\"snow:cmdb_ci\" | eval host_key=lower(mvindex(split(name,\".\"),0)) | table host_key, sys_id, last_discovered ]\n| eval match_state=if(isnotnull(sys_id) AND abs(last_seen-last_discovered)<86400,\"synced\",\"stale_or_missing\")\n| stats count by match_state",
              "m": "Normalize hostnames (FQDN strip). Map discovery tool serial/IP to CMDB. Report `stale_or_missing` counts weekly. Drive CMDB update tasks for unmatched discovery rows.",
              "z": "Pie chart (synced vs stale), Table (unmatched assets), Single value (reconciliation gap %).",
              "kfp": "Shifts in ticket and workflow data during large incidents, process or field changes, holiday coverage, and after big rollouts; spot-check a sample in the source system before you treat the trend as a people problem.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ServiceNow Discovery, SCCM, or network scan feeds.\n• Ensure the following data sources are available: `sourcetype=snow:discovery_model` or custom `discovery:asset`, `sourcetype=snow:cmdb_ci`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize hostnames (FQDN strip). Map discovery tool serial/IP to CMDB. Report `stale_or_missing` counts weekly. Drive CMDB update tasks for unmatched discovery rows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=discovery sourcetype=\"discovery:asset\" earliest=-1d\n| eval host_key=lower(mvindex(split(hostname,\".\"),0))\n| stats latest(_time) as last_seen by host_key, serial_number\n| join type=left max=1 host_key [ search index=itsm sourcetype=\"snow:cmdb_ci\" | eval host_key=lower(mvindex(split(name,\".\"),0)) | table host_key, sys_id, last_discovered ]\n| eval match_state=if(isnotnull(sys_id) AND abs(last_seen-last_discovered)<86400,\"synced\",\"stale_or_missing\")\n| stats count by match_state\n```\n\nUnderstanding this SPL\n\n**Asset Discovery Reconciliation** — Reconciles discovery source of truth against CMDB for asset inventory — variance by source, age, and confidence.\n\nDocumented **Data sources**: `sourcetype=snow:discovery_model` or custom `discovery:asset`, `sourcetype=snow:cmdb_ci`. **App/TA** (typical add-on context): ServiceNow Discovery, SCCM, or network scan feeds. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: discovery; **sourcetype**: discovery:asset. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=discovery, sourcetype=\"discovery:asset\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **host_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host_key, serial_number** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **match_state** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by match_state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (synced vs stale), Table (unmatched assets), Single value (reconciliation gap %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We line up what we find on the network with what we pay for, so we are not flying blind for risk and cost.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.8",
              "n": "End-of-Life Hardware Tracking",
              "c": "high",
              "f": "beginner",
              "v": "Surfaces CIs past vendor EOS/EOL dates for refresh planning and security risk reduction.",
              "t": "`Splunk_TA_snow`",
              "d": "`sourcetype=snow:cmdb_ci` (model, OS end dates, custom EOL fields)",
              "q": "index=itsm sourcetype=\"snow:cmdb_ci\" ci_class IN (\"cmdb_ci_server\",\"cmdb_ci_netgear\")\n| eval eol_epoch=strptime(u_eol_date,\"%Y-%m-%d\")\n| where isnotnull(eol_epoch) AND eol_epoch < relative_time(now(),\"+90d@d\")\n| eval days_to_eol=round((eol_epoch-now())/86400,0)\n| table name, model_id, u_eol_date, days_to_eol, support_group\n| sort days_to_eol",
              "m": "Populate `u_eol_date` from vendor feeds or CMDB enrichment. Alert at 90/30 days. Join model catalog for batch reporting by data center.",
              "z": "Table (upcoming EOL), Bar chart (EOL by quarter), Single value (CIs past EOL).",
              "kfp": "Date-driven alerts that race with refresh cycles, bulk imports, and hardware still under a vendor with a different record.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `sourcetype=snow:cmdb_ci` (model, OS end dates, custom EOL fields).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPopulate `u_eol_date` from vendor feeds or CMDB enrichment. Alert at 90/30 days. Join model catalog for batch reporting by data center.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:cmdb_ci\" ci_class IN (\"cmdb_ci_server\",\"cmdb_ci_netgear\")\n| eval eol_epoch=strptime(u_eol_date,\"%Y-%m-%d\")\n| where isnotnull(eol_epoch) AND eol_epoch < relative_time(now(),\"+90d@d\")\n| eval days_to_eol=round((eol_epoch-now())/86400,0)\n| table name, model_id, u_eol_date, days_to_eol, support_group\n| sort days_to_eol\n```\n\nUnderstanding this SPL\n\n**End-of-Life Hardware Tracking** — Surfaces CIs past vendor EOS/EOL dates for refresh planning and security risk reduction.\n\nDocumented **Data sources**: `sourcetype=snow:cmdb_ci` (model, OS end dates, custom EOL fields). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_ci. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:cmdb_ci\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eol_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(eol_epoch) AND eol_epoch < relative_time(now(),\"+90d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_to_eol** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **End-of-Life Hardware Tracking**): table name, model_id, u_eol_date, days_to_eol, support_group\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (upcoming EOL), Bar chart (EOL by quarter), Single value (CIs past EOL).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow gear that is past its support window, so we replace it before a failure and a long spare hunt.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.9",
              "n": "CMDB Accuracy Scoring",
              "c": "high",
              "f": "intermediate",
              "v": "Scores accuracy by sampling validation (discovery match, pingable, owner confirmed) — complements completeness-focused quality scores.",
              "t": "`Splunk_TA_snow` + discovery/network",
              "d": "`snow:cmdb_ci`, validation job results (`cmdb:validation`)",
              "q": "index=cmdb sourcetype=\"cmdb:validation\" earliest=-7d\n| eval ok=if(match(lower(to_string(check_passed)),\"(?i)true|1|pass|ok\"),1,0)\n| stats avg(ok) as accuracy_ratio by ci_class\n| eval accuracy_pct=round(accuracy_ratio*100,1)\n| sort accuracy_pct",
              "m": "Ingest periodic validation (e.g., “IP matches DNS,” “server responds to agent,” “owner replied”). Aggregate pass rate per class and region. Trend monthly for governance scorecards.",
              "z": "Bar chart (accuracy % by class), Gauge (fleet accuracy), Line chart (trend).",
              "kfp": "Coverage gaps while onboarding new systems, after mergers, during discovery or field changes, or when teams skip owner updates in busy months.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow` + discovery/network.\n• Ensure the following data sources are available: `snow:cmdb_ci`, validation job results (`cmdb:validation`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest periodic validation (e.g., “IP matches DNS,” “server responds to agent,” “owner replied”). Aggregate pass rate per class and region. Trend monthly for governance scorecards.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cmdb sourcetype=\"cmdb:validation\" earliest=-7d\n| eval ok=if(match(lower(to_string(check_passed)),\"(?i)true|1|pass|ok\"),1,0)\n| stats avg(ok) as accuracy_ratio by ci_class\n| eval accuracy_pct=round(accuracy_ratio*100,1)\n| sort accuracy_pct\n```\n\nUnderstanding this SPL\n\n**CMDB Accuracy Scoring** — Scores accuracy by sampling validation (discovery match, pingable, owner confirmed) — complements completeness-focused quality scores.\n\nDocumented **Data sources**: `snow:cmdb_ci`, validation job results (`cmdb:validation`). **App/TA** (typical add-on context): `Splunk_TA_snow` + discovery/network. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cmdb; **sourcetype**: cmdb:validation. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cmdb, sourcetype=\"cmdb:validation\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by ci_class** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **accuracy_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (accuracy % by class), Gauge (fleet accuracy), Line chart (trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We measure how right our key fields are against samples we trust, so quality work has a number to improve.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.10",
              "n": "Undocumented Server Detection",
              "c": "critical",
              "f": "beginner",
              "v": "Finds servers visible to monitoring or AD but missing from CMDB — classic gap for incident routing and security scope.",
              "t": "Splunk Universal Forwarder inventory, AD, vCenter",
              "d": "`sourcetype=inventory` or `vmware:inv:vm`, `sourcetype=snow:cmdb_ci`",
              "q": "index=inventory sourcetype=\"vmware:inv:vm\" earliest=-4h\n| eval host_key=lower(mvindex(split(name,\".\"),0))\n| stats latest(_time) as seen by host_key\n| join type=left max=1 host_key [ search index=itsm sourcetype=\"snow:cmdb_ci\" ci_class=\"cmdb_ci_server\" | eval host_key=lower(mvindex(split(name,\".\"),0)) | table host_key, sys_id ]\n| where isnull(sys_id)\n| table host_key, seen",
              "m": "Compare monitored/VM inventory hostnames (lowercased, short name) to CMDB server CIs. Tune for naming conventions (strip domain). Feed gaps to CMDB onboarding queue.",
              "z": "Table (undocumented hosts), Single value (gap count), Line chart (gap trend).",
              "kfp": "Unknown installs and license gaps from BYOD-style tools, new packaging, and shadow projects that are not in the inventory feed yet.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder inventory, AD, vCenter.\n• Ensure the following data sources are available: `sourcetype=inventory` or `vmware:inv:vm`, `sourcetype=snow:cmdb_ci`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare monitored/VM inventory hostnames (lowercased, short name) to CMDB server CIs. Tune for naming conventions (strip domain). Feed gaps to CMDB onboarding queue.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=inventory sourcetype=\"vmware:inv:vm\" earliest=-4h\n| eval host_key=lower(mvindex(split(name,\".\"),0))\n| stats latest(_time) as seen by host_key\n| join type=left max=1 host_key [ search index=itsm sourcetype=\"snow:cmdb_ci\" ci_class=\"cmdb_ci_server\" | eval host_key=lower(mvindex(split(name,\".\"),0)) | table host_key, sys_id ]\n| where isnull(sys_id)\n| table host_key, seen\n```\n\nUnderstanding this SPL\n\n**Undocumented Server Detection** — Finds servers visible to monitoring or AD but missing from CMDB — classic gap for incident routing and security scope.\n\nDocumented **Data sources**: `sourcetype=inventory` or `vmware:inv:vm`, `sourcetype=snow:cmdb_ci`. **App/TA** (typical add-on context): Splunk Universal Forwarder inventory, AD, vCenter. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: inventory; **sourcetype**: vmware:inv:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=inventory, sourcetype=\"vmware:inv:vm\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **host_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host_key** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(sys_id)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Undocumented Server Detection**): table host_key, seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (undocumented hosts), Single value (gap count), Line chart (gap trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We find systems that are live but not in the book, so we add them, own them, and protect them on purpose.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_vcenter"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.12",
              "n": "Software Asset Management Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures install counts vs license entitlements for major publishers — SAM compliance for audits.",
              "t": "SCCM, Flexera, ServiceNow SAM",
              "d": "`sourcetype=sam:install`, `sourcetype=snow:alm_license`",
              "q": "index=sam sourcetype=\"sam:install\" product_name=\"Microsoft*Visio*\"\n| stats dc(host) as deployed by product_name\n| join max=1 product_name [ search index=itsm sourcetype=\"snow:alm_license\" | stats sum(entitlement) as entitled by product_name ]\n| eval compliance_pct=if(entitled>0, round(min(deployed,entitled)/entitled*100, 1), null())\n| eval over_deployed=if(deployed>entitled, deployed-entitled, 0)\n| table product_name, deployed, entitled, compliance_pct, over_deployed",
              "m": "Normalize product SKUs. Join installs to entitlement table. Alert when `over_deployed>0` or compliance below policy. Refresh entitlements monthly.",
              "z": "Table (SKU compliance), Single value (non-compliant SKUs), Bar chart (overage).",
              "kfp": "Unknown installs and license gaps from BYOD-style tools, new packaging, and shadow projects that are not in the inventory feed yet.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SCCM, Flexera, ServiceNow SAM.\n• Ensure the following data sources are available: `sourcetype=sam:install`, `sourcetype=snow:alm_license`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize product SKUs. Join installs to entitlement table. Alert when `over_deployed>0` or compliance below policy. Refresh entitlements monthly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sam sourcetype=\"sam:install\" product_name=\"Microsoft*Visio*\"\n| stats dc(host) as deployed by product_name\n| join max=1 product_name [ search index=itsm sourcetype=\"snow:alm_license\" | stats sum(entitlement) as entitled by product_name ]\n| eval compliance_pct=if(entitled>0, round(min(deployed,entitled)/entitled*100, 1), null())\n| eval over_deployed=if(deployed>entitled, deployed-entitled, 0)\n| table product_name, deployed, entitled, compliance_pct, over_deployed\n```\n\nUnderstanding this SPL\n\n**Software Asset Management Compliance** — Measures install counts vs license entitlements for major publishers — SAM compliance for audits.\n\nDocumented **Data sources**: `sourcetype=sam:install`, `sourcetype=snow:alm_license`. **App/TA** (typical add-on context): SCCM, Flexera, ServiceNow SAM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sam; **sourcetype**: sam:install. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sam, sourcetype=\"sam:install\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by product_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **compliance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **over_deployed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Software Asset Management Compliance**): table product_name, deployed, entitled, compliance_pct, over_deployed\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (SKU compliance), Single value (non-compliant SKUs), Bar chart (overage).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for installs and license truth compared to what we are allowed, so we are not out of step with auditors or the vendor.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.13",
              "n": "Hardware Warranty Expiry",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks warranty end dates for hardware CIs to avoid unsupported break-fix gaps and budget surprises.",
              "t": "`Splunk_TA_snow`",
              "d": "`sourcetype=snow:cmdb_ci` (`warranty_expires`, `u_warranty_end`)",
              "q": "index=itsm sourcetype=\"snow:cmdb_ci\" operational_status=\"operational\"\n| eval w_end=strptime(warranty_expires,\"%Y-%m-%d\")\n| where isnotnull(w_end) AND w_end < relative_time(now(),\"+60d@d\") AND w_end > now()\n| eval days_left=round((w_end-now())/86400,0)\n| table name, serial_number, warranty_expires, days_left, support_group\n| sort days_left",
              "m": "Map OEM warranty fields from procurement or discovery. Alert at 60/30 days. Exclude disposed assets via `install_status`.",
              "z": "Table (warranty expiring), Timeline (expiry by month), Single value (CIs <30d).",
              "kfp": "Date-driven alerts that race with refresh cycles, bulk imports, and hardware still under a vendor with a different record.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `sourcetype=snow:cmdb_ci` (`warranty_expires`, `u_warranty_end`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap OEM warranty fields from procurement or discovery. Alert at 60/30 days. Exclude disposed assets via `install_status`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:cmdb_ci\" operational_status=\"operational\"\n| eval w_end=strptime(warranty_expires,\"%Y-%m-%d\")\n| where isnotnull(w_end) AND w_end < relative_time(now(),\"+60d@d\") AND w_end > now()\n| eval days_left=round((w_end-now())/86400,0)\n| table name, serial_number, warranty_expires, days_left, support_group\n| sort days_left\n```\n\nUnderstanding this SPL\n\n**Hardware Warranty Expiry** — Tracks warranty end dates for hardware CIs to avoid unsupported break-fix gaps and budget surprises.\n\nDocumented **Data sources**: `sourcetype=snow:cmdb_ci` (`warranty_expires`, `u_warranty_end`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_ci. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:cmdb_ci\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **w_end** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(w_end) AND w_end < relative_time(now(),\"+60d@d\") AND w_end > now()` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Hardware Warranty Expiry**): table name, serial_number, warranty_expires, days_left, support_group\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (warranty expiring), Timeline (expiry by month), Single value (CIs <30d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch hardware that is out of or near end of support, so we replace or extend before a failure in the night.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.14",
              "n": "CI Lifecycle Management",
              "c": "medium",
              "f": "intermediate",
              "v": "Monitors CI lifecycle state transitions (ordered → received → in production → retired) for stuck states and policy compliance.",
              "t": "`Splunk_TA_snow`",
              "d": "`sourcetype=snow:cmdb_ci` (`install_status`, `operational_status`)",
              "q": "index=itsm sourcetype=\"snow:cmdb_ci\"\n| where match(lower(install_status),\"on order|pending install|received\")\n| eval created_epoch=coalesce(sys_created_on, sys_updated_on, _time)\n| eval age_days=round((now()-created_epoch)/86400,0)\n| where age_days>90\n| table name, install_status, operational_status, age_days, support_group",
              "m": "Adjust `install_status` values to your list. Flag CIs stuck in procurement or “pending install” beyond SLA. Report retired CIs still `operational` in error.",
              "z": "Table (stuck lifecycles), Bar chart (count by stuck state), Line chart (backlog trend).",
              "kfp": "Shifts in ticket and workflow data during large incidents, process or field changes, holiday coverage, and after big rollouts; spot-check a sample in the source system before you treat the trend as a people problem.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `sourcetype=snow:cmdb_ci` (`install_status`, `operational_status`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAdjust `install_status` values to your list. Flag CIs stuck in procurement or “pending install” beyond SLA. Report retired CIs still `operational` in error.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:cmdb_ci\"\n| where match(lower(install_status),\"on order|pending install|received\")\n| eval created_epoch=coalesce(sys_created_on, sys_updated_on, _time)\n| eval age_days=round((now()-created_epoch)/86400,0)\n| where age_days>90\n| table name, install_status, operational_status, age_days, support_group\n```\n\nUnderstanding this SPL\n\n**CI Lifecycle Management** — Monitors CI lifecycle state transitions (ordered → received → in production → retired) for stuck states and policy compliance.\n\nDocumented **Data sources**: `sourcetype=snow:cmdb_ci` (`install_status`, `operational_status`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_ci. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:cmdb_ci\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(install_status),\"on order|pending install|received\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **created_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>90` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CI Lifecycle Management**): table name, install_status, operational_status, age_days, support_group\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stuck lifecycles), Bar chart (count by stuck state), Line chart (backlog trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow the birth, move, and retire of important items, so nothing active is forgotten and nothing live is a ghost in the data.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.15",
              "n": "Asset Decommission Verification",
              "c": "high",
              "f": "intermediate",
              "v": "Confirms decommissioned servers no longer appear in monitoring, AD, or hypervisor inventory — reducing zombie assets and license bleed.",
              "t": "Monitoring + CMDB",
              "d": "Decommission change tickets, `snow:cmdb_ci`, `vmware:inv:vm`",
              "q": "index=itsm sourcetype=\"snow:change_request\" state=\"Closed\" earliest=-30d\n| where match(lower(short_description),\"(?i)decom|retire\") OR lower(category)=\"retire\"\n| rename cmdb_ci as ci_sys_id\n| join type=left max=1 ci_sys_id [ search index=itsm sourcetype=\"snow:cmdb_ci\" | rename sys_id as ci_sys_id | table ci_sys_id, install_status, operational_status ]\n| where install_status!=\"retired\" AND lower(operational_status)!=\"retired\" AND lower(operational_status)!=\"non-operational\"\n| table number, short_description, ci_sys_id, install_status, operational_status",
              "m": "Map `cmdb_ci` (or task CI list) from the change; normalize `install_status`/`operational_status` values to your CMDB. Optionally join inventory (`vmware:inv:vm`) on hostname to catch VMs still present. Drive cleanup when decom CHG is closed but CI not retired.",
              "z": "Table (failed decom verification), Single value (open exceptions), Bar chart (by team).",
              "kfp": "Shifts in ticket and workflow data during large incidents, process or field changes, holiday coverage, and after big rollouts; spot-check a sample in the source system before you treat the trend as a people problem.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Monitoring + CMDB.\n• Ensure the following data sources are available: Decommission change tickets, `snow:cmdb_ci`, `vmware:inv:vm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `cmdb_ci` (or task CI list) from the change; normalize `install_status`/`operational_status` values to your CMDB. Optionally join inventory (`vmware:inv:vm`) on hostname to catch VMs still present. Drive cleanup when decom CHG is closed but CI not retired.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" state=\"Closed\" earliest=-30d\n| where match(lower(short_description),\"(?i)decom|retire\") OR lower(category)=\"retire\"\n| rename cmdb_ci as ci_sys_id\n| join type=left max=1 ci_sys_id [ search index=itsm sourcetype=\"snow:cmdb_ci\" | rename sys_id as ci_sys_id | table ci_sys_id, install_status, operational_status ]\n| where install_status!=\"retired\" AND lower(operational_status)!=\"retired\" AND lower(operational_status)!=\"non-operational\"\n| table number, short_description, ci_sys_id, install_status, operational_status\n```\n\nUnderstanding this SPL\n\n**Asset Decommission Verification** — Confirms decommissioned servers no longer appear in monitoring, AD, or hypervisor inventory — reducing zombie assets and license bleed.\n\nDocumented **Data sources**: Decommission change tickets, `snow:cmdb_ci`, `vmware:inv:vm`. **App/TA** (typical add-on context): Monitoring + CMDB. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(short_description),\"(?i)decom|retire\") OR lower(category)=\"retire\"` — typically the threshold or rule expression for this monitoring goal.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where install_status!=\"retired\" AND lower(operational_status)!=\"retired\" AND lower(operational_status)!=\"non-operational\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Asset Decommission Verification**): table number, short_description, ci_sys_id, install_status, operational_status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (failed decom verification), Single value (open exceptions), Bar chart (by team).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We help prove a thing is really out of use and not still billing or still reachable, so retire and risk stay aligned.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.16",
              "n": "Stale CI Discovery Freshness",
              "c": "high",
              "f": "beginner",
              "v": "Operational CIs without recent discovery updates inflate false confidence during impact analysis; freshness trending supports CMDB governance and ITIL configuration management objectives.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:cmdb_ci\"` (`last_discovered`, `sys_updated_on`, `operational_status`, `name`, `ci_class`)",
              "q": "index=itsm sourcetype=\"snow:cmdb_ci\" operational_status=\"operational\" earliest=-1d\n| eval last_seen=coalesce(last_discovered, sys_updated_on)\n| where isnotnull(last_seen) AND last_seen < relative_time(now(),\"-90d@d\")\n| eval stale_days=round((now()-last_seen)/86400,0)\n| stats count as stale_cis by ci_class, support_group\n| sort -stale_cis",
              "m": "(1) Tune the 90-day threshold to your discovery schedule; (2) Exclude classes intentionally not discovered (e.g., logical groups) via `where ci_class!=...`; (3) Publish a weekly remediation list to CMDB owners with `stale_days` for prioritization.",
              "z": "Bar chart (stale count by CI class), Table (stale CIs with stale_days), Single value (total stale operational CIs).",
              "kfp": "Shifts in ticket and workflow data during large incidents, process or field changes, holiday coverage, and after big rollouts; spot-check a sample in the source system before you treat the trend as a people problem.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:cmdb_ci\"` (`last_discovered`, `sys_updated_on`, `operational_status`, `name`, `ci_class`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune the 90-day threshold to your discovery schedule; (2) Exclude classes intentionally not discovered (e.g., logical groups) via `where ci_class!=...`; (3) Publish a weekly remediation list to CMDB owners with `stale_days` for prioritization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:cmdb_ci\" operational_status=\"operational\" earliest=-1d\n| eval last_seen=coalesce(last_discovered, sys_updated_on)\n| where isnotnull(last_seen) AND last_seen < relative_time(now(),\"-90d@d\")\n| eval stale_days=round((now()-last_seen)/86400,0)\n| stats count as stale_cis by ci_class, support_group\n| sort -stale_cis\n```\n\nUnderstanding this SPL\n\n**Stale CI Discovery Freshness** — Operational CIs without recent discovery updates inflate false confidence during impact analysis; freshness trending supports CMDB governance and ITIL configuration management objectives.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:cmdb_ci\"` (`last_discovered`, `sys_updated_on`, `operational_status`, `name`, `ci_class`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_ci. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:cmdb_ci\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **last_seen** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(last_seen) AND last_seen < relative_time(now(),\"-90d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **stale_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by ci_class, support_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (stale count by CI class), Table (stale CIs with stale_days), Single value (total stale operational CIs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We find things we have not re-seen in a long time, so we refresh or retire before we plan on bad truth.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.17",
              "n": "Duplicate CI Name Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Duplicate names break joins between incidents, changes, and monitoring; detecting duplicates early restores CMDB accuracy and reduces routing errors across ITSM processes.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:cmdb_ci\"` (`name`, `sys_id`, `ci_class`, `serial_number`)",
              "q": "index=itsm sourcetype=\"snow:cmdb_ci\" earliest=-1d\n| eval norm_name=lower(trim(name))\n| where norm_name!=\"\" AND isnotnull(norm_name)\n| stats values(sys_id) as sys_ids, dc(sys_id) as distinct_ids, values(ci_class) as classes by norm_name\n| where distinct_ids>1\n| table norm_name, distinct_ids, classes, sys_ids\n| sort -distinct_ids",
              "m": "(1) Extend normalization if your naming standard strips domains (`mvindex(split(name,\".\"),0)`); (2) Enrich with `serial_number` when names collide legitimately; (3) Feed results into a CMDB merge workflow and track `distinct_ids` trend monthly.",
              "z": "Table (duplicate name clusters), Single value (duplicate name count), Bar chart (duplicates by CI class).",
              "kfp": "False duplicates and drift when naming rules change, discovery merges, or the same system has more than one nickname.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:cmdb_ci\"` (`name`, `sys_id`, `ci_class`, `serial_number`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Extend normalization if your naming standard strips domains (`mvindex(split(name,\".\"),0)`); (2) Enrich with `serial_number` when names collide legitimately; (3) Feed results into a CMDB merge workflow and track `distinct_ids` trend monthly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:cmdb_ci\" earliest=-1d\n| eval norm_name=lower(trim(name))\n| where norm_name!=\"\" AND isnotnull(norm_name)\n| stats values(sys_id) as sys_ids, dc(sys_id) as distinct_ids, values(ci_class) as classes by norm_name\n| where distinct_ids>1\n| table norm_name, distinct_ids, classes, sys_ids\n| sort -distinct_ids\n```\n\nUnderstanding this SPL\n\n**Duplicate CI Name Detection** — Duplicate names break joins between incidents, changes, and monitoring; detecting duplicates early restores CMDB accuracy and reduces routing errors across ITSM processes.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:cmdb_ci\"` (`name`, `sys_id`, `ci_class`, `serial_number`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_ci. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:cmdb_ci\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **norm_name** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where norm_name!=\"\" AND isnotnull(norm_name)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by norm_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where distinct_ids>1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Duplicate CI Name Detection**): table norm_name, distinct_ids, classes, sys_ids\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (duplicate name clusters), Single value (duplicate name count), Bar chart (duplicates by CI class).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for the same name on more than one thing, so we do not direct work or a change to the wrong box.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.18",
              "n": "Application CI Business Service Coverage",
              "c": "medium",
              "f": "beginner",
              "v": "Applications without a business service mapping weaken BIA-driven prioritization and incident impact statements; coverage metrics align the CMDB with ITIL service asset and configuration practices.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:cmdb_ci\"` (`ci_class`, `business_service`, `u_business_service`, `name`, `support_group`)",
              "q": "index=itsm sourcetype=\"snow:cmdb_ci\" earliest=-1d\n| where match(lower(ci_class),\"(?i)cmdb_ci_appl|application\")\n| eval bs=coalesce(business_service, u_business_service)\n| eval mapped=if(isnotnull(bs) AND bs!=\"\",1,0)\n| stats count as apps, sum(mapped) as mapped_apps by support_group\n| eval coverage_pct=round(100*mapped_apps/nullif(apps,0),1)\n| sort coverage_pct",
              "m": "(1) Adjust `ci_class` filters to your CMDB application table set; (2) Map whichever field holds the business service reference; (3) Set a minimum `apps` threshold before flagging low `coverage_pct` groups to avoid noise from tiny teams.",
              "z": "Bar chart (coverage % by support group), Table (unmapped application CIs), Gauge (overall coverage %).",
              "kfp": "Shifts in ticket and workflow data during large incidents, process or field changes, holiday coverage, and after big rollouts; spot-check a sample in the source system before you treat the trend as a people problem.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:cmdb_ci\"` (`ci_class`, `business_service`, `u_business_service`, `name`, `support_group`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Adjust `ci_class` filters to your CMDB application table set; (2) Map whichever field holds the business service reference; (3) Set a minimum `apps` threshold before flagging low `coverage_pct` groups to avoid noise from tiny teams.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:cmdb_ci\" earliest=-1d\n| where match(lower(ci_class),\"(?i)cmdb_ci_appl|application\")\n| eval bs=coalesce(business_service, u_business_service)\n| eval mapped=if(isnotnull(bs) AND bs!=\"\",1,0)\n| stats count as apps, sum(mapped) as mapped_apps by support_group\n| eval coverage_pct=round(100*mapped_apps/nullif(apps,0),1)\n| sort coverage_pct\n```\n\nUnderstanding this SPL\n\n**Application CI Business Service Coverage** — Applications without a business service mapping weaken BIA-driven prioritization and incident impact statements; coverage metrics align the CMDB with ITIL service asset and configuration practices.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:cmdb_ci\"` (`ci_class`, `business_service`, `u_business_service`, `name`, `support_group`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_ci. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:cmdb_ci\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(ci_class),\"(?i)cmdb_ci_appl|application\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **bs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mapped** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by support_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (coverage % by support group), Table (unmapped application CIs), Gauge (overall coverage %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see which applications tie to which business service, so impact maps and priority lines stay true when we break or fix things.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.2.19",
              "n": "CMDB Mandatory Attribute Completeness by Class",
              "c": "high",
              "f": "intermediate",
              "v": "Class-specific mandatory fields (owner, environment, support group) drive automation and reporting; completeness scoring by `ci_class` gives CMDB stewards actionable ITIL-aligned quality targets.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:cmdb_ci\"` (`ci_class`, `assigned_to`, `support_group`, `environment`, `location`, `company`)",
              "q": "index=itsm sourcetype=\"snow:cmdb_ci\" earliest=-1d\n| eval core_ok=if(isnotnull(support_group) AND support_group!=\"\" AND isnotnull(environment) AND environment!=\"\",1,0)\n| stats avg(core_ok) as completeness_ratio, count as total by ci_class\n| eval completeness_pct=round(100*completeness_ratio,1)\n| sort completeness_pct\n| head 30",
              "m": "(1) Expand `core_ok` with `assigned_to` or `company` if those are mandatory in your data model; (2) Exclude retired CIs with `install_status`; (3) Publish monthly scorecards and alert when any major `ci_class` drops below the governance threshold two months in a row.",
              "z": "Bar chart (completeness % by CI class), Table (worst classes), Line chart (completeness trend from summary index).",
              "kfp": "Coverage gaps while onboarding new systems, after mergers, during discovery or field changes, or when teams skip owner updates in busy months.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:cmdb_ci\"` (`ci_class`, `assigned_to`, `support_group`, `environment`, `location`, `company`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Expand `core_ok` with `assigned_to` or `company` if those are mandatory in your data model; (2) Exclude retired CIs with `install_status`; (3) Publish monthly scorecards and alert when any major `ci_class` drops below the governance threshold two months in a row.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:cmdb_ci\" earliest=-1d\n| eval core_ok=if(isnotnull(support_group) AND support_group!=\"\" AND isnotnull(environment) AND environment!=\"\",1,0)\n| stats avg(core_ok) as completeness_ratio, count as total by ci_class\n| eval completeness_pct=round(100*completeness_ratio,1)\n| sort completeness_pct\n| head 30\n```\n\nUnderstanding this SPL\n\n**CMDB Mandatory Attribute Completeness by Class** — Class-specific mandatory fields (owner, environment, support group) drive automation and reporting; completeness scoring by `ci_class` gives CMDB stewards actionable ITIL-aligned quality targets.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:cmdb_ci\"` (`ci_class`, `assigned_to`, `support_group`, `environment`, `location`, `company`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_ci. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:cmdb_ci\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **core_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by ci_class** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **completeness_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (completeness % by CI class), Table (worst classes), Line chart (completeness trend from summary index).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check required fields for each class of thing we track, so every record is fit for the job we ask the book to do.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.9,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 17,
            "none": 0
          }
        },
        {
          "i": "16.3",
          "n": "Business Process & Availability Intelligence",
          "u": [
            {
              "i": "16.3.1",
              "n": "Cross-Service Business Process Health Score",
              "c": "high",
              "f": "advanced",
              "v": "Individual component alerts do not communicate business impact. A BPI model aggregates the health of all components that together constitute a business capability (e.g., \"Order Processing = web tier + database + payment gateway + message queue\"). When any essential member fails, the business process is immediately flagged as degraded or down — mirroring Nagios BPI groups with essential member logic. Operations teams see business impact, not raw host counts.",
              "t": "Splunk IT Service Intelligence (ITSI), or custom KV Store + scheduled searches",
              "d": "All existing monitoring indexes (`index=os`, `index=network`, `index=app`, `index=db`), ITSI entity/service model",
              "q": "index=monitoring sourcetype IN (server_health, app_health, db_health, network_health)\n| eval component_status=case(\n    status=\"critical\", 0,\n    status=\"high\",     1,\n    status=\"medium\",   2,\n    status=\"ok\",       3,\n    true(),            3)\n| lookup business_process_components.csv component_name AS service OUTPUT process_name, is_essential\n| stats min(component_status) as essential_min\n       avg(component_status) as avg_status\n  by process_name, _time\n| eval bpi_score=round((avg_status / 3) * 100, 1)\n| eval bpi_state=case(\n    essential_min=0, \"DOWN\",\n    bpi_score < 50,  \"DEGRADED\",\n    bpi_score < 80,  \"AT RISK\",\n    true(),          \"HEALTHY\")\n| table _time, process_name, bpi_score, bpi_state, essential_min",
              "m": "Build a lookup (`business_process_components.csv`) mapping infrastructure components to business processes, with an `is_essential` flag (essential = single point of failure). Run as a scheduled search every 5 minutes. Feed results into a KV Store for dashboard consumption. For full capability, use ITSI Service Analyzer with KPI threshold-based health scores — ITSI natively implements BPI-equivalent logic with adaptive thresholding and episode-based alerting. Alert when `bpi_state=DOWN` (essential member failed) or `bpi_score` drops below 60 for >10 minutes.",
              "z": "Service Analyzer glass table (ITSI), Radial gauge (health score per process), Sankey diagram (component → process → business unit), Single value tiles (one per business process with color coding).",
              "kfp": "Red scores from planned work, test noise, and dependencies you do not fully own; compare with a known good week first.",
              "refs": "[Splunk IT Service Intelligence](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk IT Service Intelligence (ITSI), or custom KV Store + scheduled searches.\n• Ensure the following data sources are available: All existing monitoring indexes (`index=os`, `index=network`, `index=app`, `index=db`), ITSI entity/service model.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild a lookup (`business_process_components.csv`) mapping infrastructure components to business processes, with an `is_essential` flag (essential = single point of failure). Run as a scheduled search every 5 minutes. Feed results into a KV Store for dashboard consumption. For full capability, use ITSI Service Analyzer with KPI threshold-based health scores — ITSI natively implements BPI-equivalent logic with adaptive thresholding and episode-based alerting. Alert when `bpi_state=DOWN` (essentia…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=monitoring sourcetype IN (server_health, app_health, db_health, network_health)\n| eval component_status=case(\n    status=\"critical\", 0,\n    status=\"high\",     1,\n    status=\"medium\",   2,\n    status=\"ok\",       3,\n    true(),            3)\n| lookup business_process_components.csv component_name AS service OUTPUT process_name, is_essential\n| stats min(component_status) as essential_min\n       avg(component_status) as avg_status\n  by process_name, _time\n| eval bpi_score=round((avg_status / 3) * 100, 1)\n| eval bpi_state=case(\n    essential_min=0, \"DOWN\",\n    bpi_score < 50,  \"DEGRADED\",\n    bpi_score < 80,  \"AT RISK\",\n    true(),          \"HEALTHY\")\n| table _time, process_name, bpi_score, bpi_state, essential_min\n```\n\nUnderstanding this SPL\n\n**Cross-Service Business Process Health Score** — Individual component alerts do not communicate business impact. A BPI model aggregates the health of all components that together constitute a business capability (e.g., \"Order Processing = web tier + database + payment gateway + message queue\"). When any essential member fails, the business process is immediately flagged as degraded or down — mirroring Nagios BPI groups with essential member logic. Operations teams see business impact, not raw host counts.\n\nDocumented **Data sources**: All existing monitoring indexes (`index=os`, `index=network`, `index=app`, `index=db`), ITSI entity/service model. **App/TA** (typical add-on context): Splunk IT Service Intelligence (ITSI), or custom KV Store + scheduled searches. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: monitoring.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=monitoring. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **component_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by process_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **bpi_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **bpi_state** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Cross-Service Business Process Health Score**): table _time, process_name, bpi_score, bpi_state, essential_min\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Service Analyzer glass table (ITSI), Radial gauge (health score per process), Sankey diagram (component → process → business unit), Single value tiles (one per business process with color coding).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We roll many little health signals into one business picture, so leaders see a customer-impacting story instead of a wall of part alerts.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.2",
              "n": "Infrastructure Service Availability Heatmap",
              "c": "high",
              "f": "intermediate",
              "v": "Provides a Nagios-style tactical overview — at a glance, which hosts and services have been available vs down, when, and for how long. Operations teams use this for SLA evidence, post-incident review, and capacity risk communication. Unlike individual alerts, the heatmap reveals systemic patterns: recurring daily outage windows, hosts with chronic flapping, services that always fail together.",
              "t": "`Splunk_TA_nix`, `Splunk_TA_windows`, all existing monitoring TAs",
              "d": "Consolidated availability events from all infrastructure monitoring indexes",
              "q": "index=monitoring sourcetype IN (server_health, service_check, network_health)\n| bin _time span=1h\n| eval hour=strftime(_time, \"%Y-%m-%d %H:00\")\n| eval avail=if(status=\"ok\" OR status=\"up\", 1, 0)\n| stats avg(avail) as availability_ratio by host, service, hour\n| eval avail_pct=round(availability_ratio * 100, 1)\n| eval color=case(\n    avail_pct=100,    \"green\",\n    avail_pct >= 99,  \"lightgreen\",\n    avail_pct >= 95,  \"yellow\",\n    avail_pct >= 90,  \"orange\",\n    true(),           \"red\")\n| table host, service, hour, avail_pct, color",
              "m": "Normalize availability data from all sources (server, network, app) into a shared `index=monitoring` with a standardized `status` field. Schedule this search hourly. Store results in a summary index or KV Store for long-term retention. Build a Dashboard Studio heatmap visualization with host on the Y-axis and time on the X-axis, color-coded by availability percentage. Implement drilldown to raw events for any red cell. Export monthly availability reports per host/service for SLA documentation. Filter by host group (infrastructure, application, network) using tokens.",
              "z": "Heatmap (host × time, color = availability%), Table (hosts sorted by lowest monthly availability), Single value (fleet-wide availability %), Line chart (fleet availability trend), Bar chart (downtime hours by host).",
              "kfp": "Red scores from planned work, test noise, and dependencies you do not fully own; compare with a known good week first.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, `Splunk_TA_windows`, all existing monitoring TAs.\n• Ensure the following data sources are available: Consolidated availability events from all infrastructure monitoring indexes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize availability data from all sources (server, network, app) into a shared `index=monitoring` with a standardized `status` field. Schedule this search hourly. Store results in a summary index or KV Store for long-term retention. Build a Dashboard Studio heatmap visualization with host on the Y-axis and time on the X-axis, color-coded by availability percentage. Implement drilldown to raw events for any red cell. Export monthly availability reports per host/service for SLA documentation. F…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=monitoring sourcetype IN (server_health, service_check, network_health)\n| bin _time span=1h\n| eval hour=strftime(_time, \"%Y-%m-%d %H:00\")\n| eval avail=if(status=\"ok\" OR status=\"up\", 1, 0)\n| stats avg(avail) as availability_ratio by host, service, hour\n| eval avail_pct=round(availability_ratio * 100, 1)\n| eval color=case(\n    avail_pct=100,    \"green\",\n    avail_pct >= 99,  \"lightgreen\",\n    avail_pct >= 95,  \"yellow\",\n    avail_pct >= 90,  \"orange\",\n    true(),           \"red\")\n| table host, service, hour, avail_pct, color\n```\n\nUnderstanding this SPL\n\n**Infrastructure Service Availability Heatmap** — Provides a Nagios-style tactical overview — at a glance, which hosts and services have been available vs down, when, and for how long. Operations teams use this for SLA evidence, post-incident review, and capacity risk communication. Unlike individual alerts, the heatmap reveals systemic patterns: recurring daily outage windows, hosts with chronic flapping, services that always fail together.\n\nDocumented **Data sources**: Consolidated availability events from all infrastructure monitoring indexes. **App/TA** (typical add-on context): `Splunk_TA_nix`, `Splunk_TA_windows`, all existing monitoring TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: monitoring.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=monitoring. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **avail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, service, hour** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **color** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Infrastructure Service Availability Heatmap**): table host, service, hour, avail_pct, color\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (host × time, color = availability%), Table (hosts sorted by lowest monthly availability), Single value (fleet-wide availability %), Line chart (fleet availability trend), Bar chart (downtime hours by host).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We paint where services are green, yellow, or red, so a glance shows where a region or line of business is hurting.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux",
                "windows"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.3",
              "n": "First Contact Resolution Rate by Group",
              "c": "medium",
              "f": "intermediate",
              "v": "FCR indicates support efficiency and customer satisfaction. Tracking by group and category supports staffing and process improvement.",
              "t": "ServiceNow/Service Desk TA, ITSM API",
              "d": "Incident resolution, first-contact resolution flag",
              "q": "index=itsm sourcetype=\"incident\"\n| where is_open=0\n| stats count, sum(eval(if(first_contact_resolution=1,1,0))) as fcr_count by assignment_group, category\n| eval fcr_rate=round((fcr_count/count)*100, 1)\n| sort -count\n| table assignment_group, category, count, fcr_count, fcr_rate",
              "m": "Ingest incident closure data with FCR flag. Compute FCR rate by group and category. Report on trend and groups below target. Use for training and process review.",
              "z": "Bar chart (FCR rate by group), Table (group × category FCR), Line chart (FCR trend).",
              "kfp": "First-contact rates when issues must pass to a specialist, tooling forces a handoff, or training changes who touches the ticket first.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ServiceNow/Service Desk TA, ITSM API.\n• Ensure the following data sources are available: Incident resolution, first-contact resolution flag.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest incident closure data with FCR flag. Compute FCR rate by group and category. Report on trend and groups below target. Use for training and process review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"incident\"\n| where is_open=0\n| stats count, sum(eval(if(first_contact_resolution=1,1,0))) as fcr_count by assignment_group, category\n| eval fcr_rate=round((fcr_count/count)*100, 1)\n| sort -count\n| table assignment_group, category, count, fcr_count, fcr_rate\n```\n\nUnderstanding this SPL\n\n**First Contact Resolution Rate by Group** — FCR indicates support efficiency and customer satisfaction. Tracking by group and category supports staffing and process improvement.\n\nDocumented **Data sources**: Incident resolution, first-contact resolution flag. **App/TA** (typical add-on context): ServiceNow/Service Desk TA, ITSM API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where is_open=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by assignment_group, category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fcr_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **First Contact Resolution Rate by Group**): table assignment_group, category, count, fcr_count, fcr_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (FCR rate by group), Table (group × category FCR), Line chart (FCR trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see what share of issues each team fully fixes the first time, so coaching and self-service can land where they help most.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.4",
              "n": "Escalation and Handoff Latency",
              "c": "high",
              "f": "intermediate",
              "v": "Long escalation or handoff times delay resolution. Monitoring handoff duration supports SLA and identifies bottlenecks.",
              "t": "ITSM workflow logs, incident history",
              "d": "Assignment change events, timestamps per group",
              "q": "index=itsm sourcetype=\"incident:history\"\n| search type=assignment_change\n| eval handoff_hrs=(next_assignment_time - prev_assignment_time)/3600\n| stats avg(handoff_hrs) as avg_handoff, max(handoff_hrs) as max_handoff by from_group, to_group\n| where avg_handoff > 2",
              "m": "Ingest assignment and state change history. Compute time between assignments. Alert when average handoff exceeds threshold. Report on escalation paths and slow handoffs.",
              "z": "Table (handoff latency by path), Bar chart (avg handoff by group), Sankey (escalation flow).",
              "kfp": "Handoff and escalation delays when a bridge runs long, named owners change shift, or many teams join one ticket.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ITSM workflow logs, incident history.\n• Ensure the following data sources are available: Assignment change events, timestamps per group.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest assignment and state change history. Compute time between assignments. Alert when average handoff exceeds threshold. Report on escalation paths and slow handoffs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"incident:history\"\n| search type=assignment_change\n| eval handoff_hrs=(next_assignment_time - prev_assignment_time)/3600\n| stats avg(handoff_hrs) as avg_handoff, max(handoff_hrs) as max_handoff by from_group, to_group\n| where avg_handoff > 2\n```\n\nUnderstanding this SPL\n\n**Escalation and Handoff Latency** — Long escalation or handoff times delay resolution. Monitoring handoff duration supports SLA and identifies bottlenecks.\n\nDocumented **Data sources**: Assignment change events, timestamps per group. **App/TA** (typical add-on context): ITSM workflow logs, incident history. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: incident:history. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"incident:history\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **handoff_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by from_group, to_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_handoff > 2` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (handoff latency by path), Bar chart (avg handoff by group), Sankey (escalation flow).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We time how long work waits when it is passed along, so bridges and handoffs are not the hidden place where work dies.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.5",
              "n": "Knowledge Article Usage and Gap Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Low article use or repeated incidents without matching KB may indicate content gaps. Analytics support knowledge management and deflection.",
              "t": "ITSM knowledge base, search logs",
              "d": "Article views, incident–KB linkage, search terms",
              "q": "index=itsm sourcetype=\"kb:usage\"\n| stats count as views, dc(incident_id) as linked_incidents by article_id, title\n| where views < 10 AND linked_incidents > 0\n| sort -linked_incidents\n| table article_id, title, views, linked_incidents",
              "m": "Ingest KB view and incident–article link data. Identify articles with many linked incidents but few views (potential discovery gap). Report on top articles and unused content. Suggest new articles from frequent incident categories.",
              "z": "Table (articles by usage), Bar chart (linked incidents vs views), Pie chart (top articles).",
              "kfp": "Views and deflection move after comms, training, launches, and when ticket types shift without a process fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ITSM knowledge base, search logs.\n• Ensure the following data sources are available: Article views, incident–KB linkage, search terms.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest KB view and incident–article link data. Identify articles with many linked incidents but few views (potential discovery gap). Report on top articles and unused content. Suggest new articles from frequent incident categories.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"kb:usage\"\n| stats count as views, dc(incident_id) as linked_incidents by article_id, title\n| where views < 10 AND linked_incidents > 0\n| sort -linked_incidents\n| table article_id, title, views, linked_incidents\n```\n\nUnderstanding this SPL\n\n**Knowledge Article Usage and Gap Detection** — Low article use or repeated incidents without matching KB may indicate content gaps. Analytics support knowledge management and deflection.\n\nDocumented **Data sources**: Article views, incident–KB linkage, search terms. **App/TA** (typical add-on context): ITSM knowledge base, search logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: kb:usage. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"kb:usage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by article_id, title** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where views < 10 AND linked_incidents > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Knowledge Article Usage and Gap Detection**): table article_id, title, views, linked_incidents\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (articles by usage), Bar chart (linked incidents vs views), Pie chart (top articles).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see which help articles are used and where we still have holes, so we write what people actually need.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.6",
              "n": "Major Incident and Post-Mortem Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Tracking major incidents and post-mortem completion ensures learning and accountability. Supports compliance and continuous improvement.",
              "t": "ITSM major incident, post-mortem records",
              "d": "Major incident flag, post-mortem due and completed dates",
              "q": "index=itsm sourcetype=\"incident\"\n| where major_incident=1 AND is_open=0\n| eval pm_due=closed_time + (7*86400)\n| where now() > pm_due AND post_mortem_completed=0\n| table number, short_description, closed_time, pm_due, post_mortem_completed",
              "m": "Ingest major incident and post-mortem status. Alert when post-mortem is overdue. Report on major incident count, MTTR, and post-mortem completion rate. Track root cause categories.",
              "z": "Table (overdue post-mortems), Single value (major incidents this month), Line chart (post-mortem completion rate).",
              "kfp": "Late completion when a major event is still open, legal review slows close-out, or a new runbook is not followed yet.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ITSM major incident, post-mortem records.\n• Ensure the following data sources are available: Major incident flag, post-mortem due and completed dates.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest major incident and post-mortem status. Alert when post-mortem is overdue. Report on major incident count, MTTR, and post-mortem completion rate. Track root cause categories.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"incident\"\n| where major_incident=1 AND is_open=0\n| eval pm_due=closed_time + (7*86400)\n| where now() > pm_due AND post_mortem_completed=0\n| table number, short_description, closed_time, pm_due, post_mortem_completed\n```\n\nUnderstanding this SPL\n\n**Major Incident and Post-Mortem Tracking** — Tracking major incidents and post-mortem completion ensures learning and accountability. Supports compliance and continuous improvement.\n\nDocumented **Data sources**: Major incident flag, post-mortem due and completed dates. **App/TA** (typical add-on context): ITSM major incident, post-mortem records. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"incident\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where major_incident=1 AND is_open=0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **pm_due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where now() > pm_due AND post_mortem_completed=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Major Incident and Post-Mortem Tracking**): table number, short_description, closed_time, pm_due, post_mortem_completed\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue post-mortems), Single value (major incidents this month), Line chart (post-mortem completion rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow big incidents from start through the learning write-up, so the big ones do not close without a lesson.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 23 (Incident management) is enforced — Splunk UC-16.3.6: Major Incident and Post-Mortem Tracking.",
                  "ea": "Saved search 'UC-16.3.6' running on Major incident flag, post-mortem due and completed dates, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.apra.gov.au/information-security"
                },
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "36",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 36 (Notification of incidents) is enforced — Splunk UC-16.3.6: Major Incident and Post-Mortem Tracking.",
                  "ea": "Saved search 'UC-16.3.6' running on Major incident flag, post-mortem due and completed dates, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.apra.gov.au/information-security"
                },
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.96",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.96 (Incident reporting) is enforced — Splunk UC-16.3.6: Major Incident and Post-Mortem Tracking.",
                  "ea": "Saved search 'UC-16.3.6' running on Major incident flag, post-mortem due and completed dates, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2015/2366/oj"
                },
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§26A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SG PDPA §26A (Data breach notification) is enforced — Splunk UC-16.3.6: Major Incident and Post-Mortem Tracking.",
                  "ea": "Saved search 'UC-16.3.6' running on Major incident flag, post-mortem due and completed dates, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://sso.agc.gov.sg/Act/PDPA2012"
                },
                {
                  "r": "UK NIS",
                  "v": "2018",
                  "cl": "Reg.11",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK NIS Reg.11 (Incident reporting) is enforced — Splunk UC-16.3.6: Major Incident and Post-Mortem Tracking.",
                  "ea": "Saved search 'UC-16.3.6' running on Major incident flag, post-mortem due and completed dates, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.legislation.gov.uk/uksi/2018/506/contents"
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.7",
              "n": "Request Fulfillment and Approval Cycle Time",
              "c": "medium",
              "f": "intermediate",
              "v": "Long approval or fulfillment times delay service delivery. Monitoring cycle time supports process optimization and SLA for requests.",
              "t": "ITSM request, approval workflow logs",
              "d": "Request submitted, approval, and fulfillment timestamps",
              "q": "index=itsm sourcetype=\"request\"\n| where state=\"fulfilled\"\n| eval approval_hrs=(approval_time - submitted_time)/3600\n| eval fulfill_hrs=(fulfilled_time - approval_time)/3600\n| stats avg(approval_hrs) as avg_approval, avg(fulfill_hrs) as avg_fulfill by catalog_item, approval_group\n| table catalog_item, approval_group, avg_approval, avg_fulfill",
              "m": "Ingest request and approval lifecycle events. Compute approval and fulfillment duration. Alert when average exceeds target. Report on slow catalog items and approvers.",
              "z": "Table (cycle time by catalog item), Bar chart (approval vs fulfillment time), Line chart (trend).",
              "kfp": "MTTR and response drifts from harder incidents, new staff, on-call experience mix, or a wider scope of what we close as done.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ITSM request, approval workflow logs.\n• Ensure the following data sources are available: Request submitted, approval, and fulfillment timestamps.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest request and approval lifecycle events. Compute approval and fulfillment duration. Alert when average exceeds target. Report on slow catalog items and approvers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"request\"\n| where state=\"fulfilled\"\n| eval approval_hrs=(approval_time - submitted_time)/3600\n| eval fulfill_hrs=(fulfilled_time - approval_time)/3600\n| stats avg(approval_hrs) as avg_approval, avg(fulfill_hrs) as avg_fulfill by catalog_item, approval_group\n| table catalog_item, approval_group, avg_approval, avg_fulfill\n```\n\nUnderstanding this SPL\n\n**Request Fulfillment and Approval Cycle Time** — Long approval or fulfillment times delay service delivery. Monitoring cycle time supports process optimization and SLA for requests.\n\nDocumented **Data sources**: Request submitted, approval, and fulfillment timestamps. **App/TA** (typical add-on context): ITSM request, approval workflow logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where state=\"fulfilled\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **approval_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **fulfill_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by catalog_item, approval_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Request Fulfillment and Approval Cycle Time**): table catalog_item, approval_group, avg_approval, avg_fulfill\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cycle time by catalog item), Bar chart (approval vs fulfillment time), Line chart (trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We time approvals and handoffs on requests, so we trim slow approvers and duplicate steps in the request path.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.8",
              "n": "Knowledge Article Usage vs. Ticket Volume",
              "c": "low",
              "f": "intermediate",
              "v": "Self-service effectiveness measurement; are KB articles deflecting tickets? Correlating article views with ticket creation reveals deflection ROI and content gaps.",
              "t": "Custom (ITSM API, KB platform analytics)",
              "d": "KB article view counts, ticket creation rates",
              "q": "index=itsm (sourcetype=\"kb:views\" OR sourcetype=\"snow:incident\")\n| bin _time span=1d\n| eval kb_views=if(sourcetype=\"kb:views\",coalesce(views,1),0)\n| eval ticket_count=if(sourcetype=\"snow:incident\",1,0)\n| stats sum(kb_views) as kb_views, sum(ticket_count) as ticket_count by _time\n| eval deflection_ratio=round(kb_views/ticket_count, 2)\n| streamstats window=7 avg(deflection_ratio) as avg_ratio\n| eval trend=if(deflection_ratio>avg_ratio,\"improving\",\"declining\")",
              "m": "Ingest KB view events (ServiceNow KB, Confluence, SharePoint) and incident creation events. Normalize to daily buckets. Compute deflection ratio (views / tickets) — higher ratio suggests effective self-service. Track 7-day rolling average; alert when ratio drops >20% vs prior week. Segment by category: compare KB views for \"password reset\" vs ticket volume for same category. Identify high-ticket categories with low KB coverage for content creation prioritization.",
              "z": "Line chart (KB views vs ticket volume over time), Single value (deflection ratio), Bar chart (ratio by category), Table (top-deflecting articles).",
              "kfp": "Views and deflection move after comms, training, launches, and when ticket types shift without a process fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (ITSM API, KB platform analytics).\n• Ensure the following data sources are available: KB article view counts, ticket creation rates.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest KB view events (ServiceNow KB, Confluence, SharePoint) and incident creation events. Normalize to daily buckets. Compute deflection ratio (views / tickets) — higher ratio suggests effective self-service. Track 7-day rolling average; alert when ratio drops >20% vs prior week. Segment by category: compare KB views for \"password reset\" vs ticket volume for same category. Identify high-ticket categories with low KB coverage for content creation prioritization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm (sourcetype=\"kb:views\" OR sourcetype=\"snow:incident\")\n| bin _time span=1d\n| eval kb_views=if(sourcetype=\"kb:views\",coalesce(views,1),0)\n| eval ticket_count=if(sourcetype=\"snow:incident\",1,0)\n| stats sum(kb_views) as kb_views, sum(ticket_count) as ticket_count by _time\n| eval deflection_ratio=round(kb_views/ticket_count, 2)\n| streamstats window=7 avg(deflection_ratio) as avg_ratio\n| eval trend=if(deflection_ratio>avg_ratio,\"improving\",\"declining\")\n```\n\nUnderstanding this SPL\n\n**Knowledge Article Usage vs. Ticket Volume** — Self-service effectiveness measurement; are KB articles deflecting tickets? Correlating article views with ticket creation reveals deflection ROI and content gaps.\n\nDocumented **Data sources**: KB article view counts, ticket creation rates. **App/TA** (typical add-on context): Custom (ITSM API, KB platform analytics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: kb:views, snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"kb:views\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `eval` defines or adjusts **kb_views** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ticket_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **deflection_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `streamstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• `eval` defines or adjusts **trend** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (KB views vs ticket volume over time), Single value (deflection ratio), Bar chart (ratio by category), Table (top-deflecting articles).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We pair article reads with new tickets, so we can tell if help content is really taking pressure off the desk.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.9",
              "n": "Mean Time Between Failures (MTBF) per CI",
              "c": "medium",
              "f": "advanced",
              "v": "Reliability trending per configuration item for replacement planning. CIs with declining MTBF indicate aging hardware or recurring issues; supports proactive replacement and warranty decisions.",
              "t": "Custom (ITSM API, CMDB)",
              "d": "Incident records linked to CIs, CI lifecycle data",
              "q": "index=itsm sourcetype=\"snow:incident\" state=\"closed\" cmdb_ci=*\n| eval resolved_epoch=resolved_at\n| sort cmdb_ci resolved_epoch\n| streamstats current=f last(resolved_epoch) as prev_resolved by cmdb_ci\n| eval mtbf_hours=round((resolved_epoch-prev_resolved)/3600, 1)\n| where isnotnull(prev_resolved) AND mtbf_hours>0\n| stats avg(mtbf_hours) as avg_mtbf_hours, count as incident_count, min(_time) as first_incident, max(_time) as last_incident by cmdb_ci\n| lookup cmdb_ci_details name AS cmdb_ci OUTPUT ci_class, install_date, warranty_expires\n| eval avg_mtbf_days=round(avg_mtbf_hours/24, 1)\n| sort avg_mtbf_hours\n| head 50",
              "m": "Ingest incidents with `cmdb_ci` (or equivalent CI linkage). Ensure resolved timestamps are indexed. For each CI, compute time between consecutive incident resolutions (MTBF). Exclude same-incident reopen/resolve cycles. Join CMDB for CI metadata (class, age, warranty). Alert when MTBF for critical CIs drops below 30-day baseline by >30%. Report top 50 lowest-MTBF CIs for replacement planning. Segment by CI class (server, network device, storage) for fleet-level reliability comparison.",
              "z": "Table (CI, MTBF days, incident count, warranty), Bar chart (MTBF by CI class), Line chart (MTBF trend per CI), Heatmap (CI × time, color = MTBF).",
              "kfp": "Shorter times between events when the same problem is split across many tickets, vendors blur grey failures, or you change what counts as an incident month to month.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (ITSM API, CMDB).\n• Ensure the following data sources are available: Incident records linked to CIs, CI lifecycle data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest incidents with `cmdb_ci` (or equivalent CI linkage). Ensure resolved timestamps are indexed. For each CI, compute time between consecutive incident resolutions (MTBF). Exclude same-incident reopen/resolve cycles. Join CMDB for CI metadata (class, age, warranty). Alert when MTBF for critical CIs drops below 30-day baseline by >30%. Report top 50 lowest-MTBF CIs for replacement planning. Segment by CI class (server, network device, storage) for fleet-level reliability comparison.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" state=\"closed\" cmdb_ci=*\n| eval resolved_epoch=resolved_at\n| sort cmdb_ci resolved_epoch\n| streamstats current=f last(resolved_epoch) as prev_resolved by cmdb_ci\n| eval mtbf_hours=round((resolved_epoch-prev_resolved)/3600, 1)\n| where isnotnull(prev_resolved) AND mtbf_hours>0\n| stats avg(mtbf_hours) as avg_mtbf_hours, count as incident_count, min(_time) as first_incident, max(_time) as last_incident by cmdb_ci\n| lookup cmdb_ci_details name AS cmdb_ci OUTPUT ci_class, install_date, warranty_expires\n| eval avg_mtbf_days=round(avg_mtbf_hours/24, 1)\n| sort avg_mtbf_hours\n| head 50\n```\n\nUnderstanding this SPL\n\n**Mean Time Between Failures (MTBF) per CI** — Reliability trending per configuration item for replacement planning. CIs with declining MTBF indicate aging hardware or recurring issues; supports proactive replacement and warranty decisions.\n\nDocumented **Data sources**: Incident records linked to CIs, CI lifecycle data. **App/TA** (typical add-on context): Custom (ITSM API, CMDB). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **resolved_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by cmdb_ci** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **mtbf_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(prev_resolved) AND mtbf_hours>0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cmdb_ci** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **avg_mtbf_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (CI, MTBF days, incident count, warranty), Bar chart (MTBF by CI class), Line chart (MTBF trend per CI), Heatmap (CI × time, color = MTBF).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look at how often the same item breaks, so we can plan replacement, vendor talks, and problem work in an honest way.",
              "mtype": [
                "Operations",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.10",
              "n": "Business Service Availability (Composite SLA)",
              "c": "critical",
              "f": "advanced",
              "v": "Rolls up component availability into a single business-service SLA percentage (weighted or “all essential up”) for customer-facing reporting — beyond host-level heatmaps.",
              "t": "Splunk ITSI, or custom lookups + summary indexing",
              "d": "`index=monitoring` normalized health events, `business_service_map.csv`",
              "q": "index=monitoring sourcetype IN (server_health, app_health, db_health) earliest=-24h\n| eval up=if(status=\"ok\" OR status=\"up\",1,0)\n| stats latest(up) as up by component_name\n| lookup business_service_map.csv component_name OUTPUT business_service, weight, is_essential\n| eval w=coalesce(weight,1)\n| stats max(eval(if((is_essential=1 OR lower(is_essential)=\"true\") AND up=0,1,0))) as essential_down\n       sum(eval(w*up)) as weighted_up\n       sum(w) as total_weight by business_service\n| eval composite_sla=if(essential_down>0, 0, round(100*weighted_up/total_weight,2))\n| where composite_sla < 99.9 OR essential_down>0\n| table business_service, composite_sla, essential_down",
              "m": "Define `business_service_map.csv` with components, optional weights, and `is_essential` (any essential down forces SLA=0 or “breach”). Ingest normalized availability per component every 5 minutes; backfill from summary index for monthly SLA. Align with contract SLAs (e.g., 99.9% monthly). ITSI can replace this with service KPIs and composite service health.",
              "z": "Single value (composite SLA % per service), Bar chart (SLA vs target), Table (breaching services), ITSI Service Analyzer (if licensed).",
              "kfp": "Dips in composite scores with partial monitoring, one-way work, and feeds that run a few minutes behind the story in chat.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI, or custom lookups + summary indexing.\n• Ensure the following data sources are available: `index=monitoring` normalized health events, `business_service_map.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine `business_service_map.csv` with components, optional weights, and `is_essential` (any essential down forces SLA=0 or “breach”). Ingest normalized availability per component every 5 minutes; backfill from summary index for monthly SLA. Align with contract SLAs (e.g., 99.9% monthly). ITSI can replace this with service KPIs and composite service health.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=monitoring sourcetype IN (server_health, app_health, db_health) earliest=-24h\n| eval up=if(status=\"ok\" OR status=\"up\",1,0)\n| stats latest(up) as up by component_name\n| lookup business_service_map.csv component_name OUTPUT business_service, weight, is_essential\n| eval w=coalesce(weight,1)\n| stats max(eval(if((is_essential=1 OR lower(is_essential)=\"true\") AND up=0,1,0))) as essential_down\n       sum(eval(w*up)) as weighted_up\n       sum(w) as total_weight by business_service\n| eval composite_sla=if(essential_down>0, 0, round(100*weighted_up/total_weight,2))\n| where composite_sla < 99.9 OR essential_down>0\n| table business_service, composite_sla, essential_down\n```\n\nUnderstanding this SPL\n\n**Business Service Availability (Composite SLA)** — Rolls up component availability into a single business-service SLA percentage (weighted or “all essential up”) for customer-facing reporting — beyond host-level heatmaps.\n\nDocumented **Data sources**: `index=monitoring` normalized health events, `business_service_map.csv`. **App/TA** (typical add-on context): Splunk ITSI, or custom lookups + summary indexing. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: monitoring.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=monitoring, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **up** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by component_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **w** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by business_service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **composite_sla** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where composite_sla < 99.9 OR essential_down>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Business Service Availability (Composite SLA)**): table business_service, composite_sla, essential_down\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (composite SLA % per service), Bar chart (SLA vs target), Table (breaching services), ITSI Service Analyzer (if licensed).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We add up the pieces that make a service, so a single number tells the business if the whole stack met its promise.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [
                "snmp_ups"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.11",
              "n": "Batch Job Schedule Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Verifies scheduled batch jobs (ETL, billing, backups) started and finished within expected windows — catching silent scheduler failures before downstream SLAs break.",
              "t": "Control-M, Autosys, cron/syslog, mainframe SMF (custom)",
              "d": "`sourcetype=controlm:job`, `sourcetype=autosys:job`, or `sourcetype=syslog` with scheduler tags",
              "q": "index=batch sourcetype=\"controlm:job\" earliest=-7d@d\n| eval day=strftime(_time,\"%Y-%m-%d\")\n| eval ended_ok=if(match(lower(status),\"(?i)ended ok|success\"),1,0)\n| stats max(ended_ok) as day_ok by job_name, day\n| where day_ok=0\n| table job_name, day",
              "m": "Map vendor fields: `scheduled_time`, end status, job name. For cron, ingest start/stop lines and compare to `batch_schedule.csv` lookup (job, expected cron, max duration). Alert when `ran_ok=0` for a calendar day. Tune time zones.",
              "z": "Table (missed jobs), Calendar (job success by day), Line chart (miss rate trend).",
              "kfp": "Misses from data delays, holiday calendars, clock skew, or jobs that re-run but still finish in the allowed window.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Control-M, Autosys, cron/syslog, mainframe SMF (custom).\n• Ensure the following data sources are available: `sourcetype=controlm:job`, `sourcetype=autosys:job`, or `sourcetype=syslog` with scheduler tags.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap vendor fields: `scheduled_time`, end status, job name. For cron, ingest start/stop lines and compare to `batch_schedule.csv` lookup (job, expected cron, max duration). Alert when `ran_ok=0` for a calendar day. Tune time zones.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=batch sourcetype=\"controlm:job\" earliest=-7d@d\n| eval day=strftime(_time,\"%Y-%m-%d\")\n| eval ended_ok=if(match(lower(status),\"(?i)ended ok|success\"),1,0)\n| stats max(ended_ok) as day_ok by job_name, day\n| where day_ok=0\n| table job_name, day\n```\n\nUnderstanding this SPL\n\n**Batch Job Schedule Compliance** — Verifies scheduled batch jobs (ETL, billing, backups) started and finished within expected windows — catching silent scheduler failures before downstream SLAs break.\n\nDocumented **Data sources**: `sourcetype=controlm:job`, `sourcetype=autosys:job`, or `sourcetype=syslog` with scheduler tags. **App/TA** (typical add-on context): Control-M, Autosys, cron/syslog, mainframe SMF (custom). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: batch; **sourcetype**: controlm:job. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=batch, sourcetype=\"controlm:job\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **day** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ended_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by job_name, day** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where day_ok=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Batch Job Schedule Compliance**): table job_name, day\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missed jobs), Calendar (job success by day), Line chart (miss rate trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check whether work that must run on a schedule actually did, so finance, ops, and customers get files when they expect them.",
              "mtype": [
                "Operations"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "controlm",
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.12",
              "n": "Control-M Job Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Batch job success/failure and SLA compliance are critical for data pipelines and scheduled workloads. Failed or late jobs can cascade to downstream systems and reporting.",
              "t": "Custom (Control-M Automation API)",
              "d": "Control-M /run/jobs/status, job history",
              "q": "index=cicd sourcetype=\"controlm:job\"\n| where status=\"Failed\" OR status=\"Ended Not OK\" OR (sla_met=\"false\" AND status=\"Ended OK\")\n| table _time, job_id, job_name, folder, status, order_date, run_as, end_time, sla_met\n| sort -_time",
              "m": "Poll Control-M Automation API for job status and history. Ingest job_id, job_name, status, order_date, end_time, sla_met. Alert on Failed or Ended Not OK. Alert on SLA violations. Track success rate by folder and job. Report on batch job health and SLA compliance percentage.",
              "z": "Table (failed/late jobs), Single value (success rate %), Timeline (job outcomes), Bar chart (failures by folder).",
              "kfp": "Busy or odd job states during code moves, new folder cutovers, one-off re-runs, or test folders still mixed into production; compare with the scheduler before you change how jobs are defined.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Control-M Automation API).\n• Ensure the following data sources are available: Control-M /run/jobs/status, job history.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Control-M Automation API for job status and history. Ingest job_id, job_name, status, order_date, end_time, sla_met. Alert on Failed or Ended Not OK. Alert on SLA violations. Track success rate by folder and job. Report on batch job health and SLA compliance percentage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"controlm:job\"\n| where status=\"Failed\" OR status=\"Ended Not OK\" OR (sla_met=\"false\" AND status=\"Ended OK\")\n| table _time, job_id, job_name, folder, status, order_date, run_as, end_time, sla_met\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Control-M Job Monitoring** — Batch job success/failure and SLA compliance are critical for data pipelines and scheduled workloads. Failed or late jobs can cascade to downstream systems and reporting.\n\nDocumented **Data sources**: Control-M /run/jobs/status, job history. **App/TA** (typical add-on context): Custom (Control-M Automation API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: controlm:job. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"controlm:job\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status=\"Failed\" OR status=\"Ended Not OK\" OR (sla_met=\"false\" AND status=\"Ended OK\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Control-M Job Monitoring**): table _time, job_id, job_name, folder, status, order_date, run_as, end_time, sla_met\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed/late jobs), Single value (success rate %), Timeline (job outcomes), Bar chart (failures by folder).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the job world for late runs and pain in the same job names, so batch and operations see trouble before a missed window.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "controlm"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.13",
              "n": "Service Request Item Fulfillment Cycle Time",
              "c": "high",
              "f": "beginner",
              "v": "Cycle time for catalog line items reflects fulfillment team capacity and automation maturity; tracking medians by item aligns operations with ITIL request fulfilment metrics and customer-facing SLAs.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:sc_req_item\"` (`sys_created_on`, `closed_at`, `cat_item`, `state`)",
              "q": "index=itsm sourcetype=\"snow:sc_req_item\" earliest=-30d\n| where match(lower(state),\"(?i)closed|complete|fulfilled\")\n| eval cycle_hrs=round((closed_at-sys_created_on)/3600,2)\n| where cycle_hrs>=0 AND cycle_hrs<720\n| stats median(cycle_hrs) as med_hrs, perc90(cycle_hrs) as p90_hrs, count as fulfilled by cat_item\n| sort -p90_hrs\n| head 40",
              "m": "(1) Map `closed_at`/`sys_updated_on` per your TA field extractions; (2) Filter cancelled rows via `close_code` if present; (3) Pair slow `cat_item` values with fulfillment runbooks and automation backlog reviews.",
              "z": "Bar chart (p90 cycle hours by catalog item), Table (top slow items), Line chart (weekly median cycle time).",
              "kfp": "MTTR and response drifts from harder incidents, new staff, on-call experience mix, or a wider scope of what we close as done.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (`sys_created_on`, `closed_at`, `cat_item`, `state`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map `closed_at`/`sys_updated_on` per your TA field extractions; (2) Filter cancelled rows via `close_code` if present; (3) Pair slow `cat_item` values with fulfillment runbooks and automation backlog reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:sc_req_item\" earliest=-30d\n| where match(lower(state),\"(?i)closed|complete|fulfilled\")\n| eval cycle_hrs=round((closed_at-sys_created_on)/3600,2)\n| where cycle_hrs>=0 AND cycle_hrs<720\n| stats median(cycle_hrs) as med_hrs, perc90(cycle_hrs) as p90_hrs, count as fulfilled by cat_item\n| sort -p90_hrs\n| head 40\n```\n\nUnderstanding this SPL\n\n**Service Request Item Fulfillment Cycle Time** — Cycle time for catalog line items reflects fulfillment team capacity and automation maturity; tracking medians by item aligns operations with ITIL request fulfilment metrics and customer-facing SLAs.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (`sys_created_on`, `closed_at`, `cat_item`, `state`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:sc_req_item. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:sc_req_item\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(state),\"(?i)closed|complete|fulfilled\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **cycle_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cycle_hrs>=0 AND cycle_hrs<720` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cat_item** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (p90 cycle hours by catalog item), Table (top slow items), Line chart (weekly median cycle time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We time each catalog item from ask to done, so we can fix the slowest forms and make self-service feel fair.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.14",
              "n": "Priority 1–2 First-Assignment Latency",
              "c": "critical",
              "f": "intermediate",
              "v": "Time-to-first-assignment for urgent incidents measures triage discipline and major-incident readiness; reducing latency supports ITIL event-to-incident handling and response SLA guardrails.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:incident\"` (`priority`, `opened_at`, `assigned_at`, `assignment_group`)",
              "q": "index=itsm sourcetype=\"snow:incident\" earliest=-30d priority IN (1,2)\n| eval open_ts=coalesce(opened_at, strptime(opened_at,\"%Y-%m-%d %H:%M:%S\"))\n| eval assign_ts=coalesce(assigned_at, first_response_time)\n| where isnotnull(assign_ts) AND isnotnull(open_ts)\n| eval assign_lag_mins=round((assign_ts-open_ts)/60,1)\n| where assign_lag_mins>=0 AND assign_lag_mins<10080\n| stats median(assign_lag_mins) as med_lag_mins, perc90(assign_lag_mins) as p90_lag_mins by assignment_group\n| sort -p90_lag_mins",
              "m": "(1) Confirm which timestamp represents first assignment in your instance; (2) Exclude bot-created tickets with `caller_id` filters if needed; (3) Alert when `p90_lag_mins` exceeds the response playbook target for two rolling weeks for any group supporting P1/P2.",
              "z": "Bar chart (p90 first-assignment minutes by group), Table (group medians), Line chart (weekly p90 trend).",
              "kfp": "P1 and P2 spikes in major outages, customer-wide events, and right after a large roll-out; expect heavy weeks to look hot.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928), [CIM: Ticket_Management](https://docs.splunk.com/Documentation/CIM/latest/User/Ticket_Management)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:incident\"` (`priority`, `opened_at`, `assigned_at`, `assignment_group`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm which timestamp represents first assignment in your instance; (2) Exclude bot-created tickets with `caller_id` filters if needed; (3) Alert when `p90_lag_mins` exceeds the response playbook target for two rolling weeks for any group supporting P1/P2.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" earliest=-30d priority IN (1,2)\n| eval open_ts=coalesce(opened_at, strptime(opened_at,\"%Y-%m-%d %H:%M:%S\"))\n| eval assign_ts=coalesce(assigned_at, first_response_time)\n| where isnotnull(assign_ts) AND isnotnull(open_ts)\n| eval assign_lag_mins=round((assign_ts-open_ts)/60,1)\n| where assign_lag_mins>=0 AND assign_lag_mins<10080\n| stats median(assign_lag_mins) as med_lag_mins, perc90(assign_lag_mins) as p90_lag_mins by assignment_group\n| sort -p90_lag_mins\n```\n\nUnderstanding this SPL\n\n**Priority 1–2 First-Assignment Latency** — Time-to-first-assignment for urgent incidents measures triage discipline and major-incident readiness; reducing latency supports ITIL event-to-incident handling and response SLA guardrails.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (`priority`, `opened_at`, `assigned_at`, `assignment_group`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **open_ts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **assign_ts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(assign_ts) AND isnotnull(open_ts)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **assign_lag_mins** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where assign_lag_mins>=0 AND assign_lag_mins<10080` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by assignment_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Priority 1–2 First-Assignment Latency** — Time-to-first-assignment for urgent incidents measures triage discipline and major-incident readiness; reducing latency supports ITIL event-to-incident handling and response SLA guardrails.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (`priority`, `opened_at`, `assigned_at`, `assignment_group`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Ticket_Management.All_Ticket_Management` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (p90 first-assignment minutes by group), Table (group medians), Line chart (weekly p90 trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We time how long the hottest work sits before anyone owns it, so the most urgent work does not wait in a silent queue.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "a": [
                "Ticket_Management"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.15",
              "n": "Open Problem Record Aging",
              "c": "high",
              "f": "beginner",
              "v": "Aging open problems indicate stalled root-cause work and undermine permanent corrective actions; backlog visibility strengthens ITIL problem management cadence with incident and change stakeholders.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:problem\"` (`state`, `sys_created_on`, `assignment_group`, `category`, `number`)",
              "q": "index=itsm sourcetype=\"snow:problem\" earliest=-180d\n| where NOT match(lower(state),\"(?i)closed|resolved\")\n| eval age_days=round((now()-sys_created_on)/86400,0)\n| stats count as open_pr, median(age_days) as med_age, max(age_days) as max_age by assignment_group, category\n| sort -open_pr",
              "m": "(1) Map `state` values for your problem workflow; (2) Add filters for known backlog categories excluded from SLA; (3) Weekly review of `max_age` with problem managers and link to related `snow:incident` volume via `problem_id` in a companion panel.",
              "z": "Table (open problem aging by group), Bar chart (open count by category), Single value (total open problems).",
              "kfp": "Stale or old records while waiting on a vendor, a customer reply, a patch, or work we chose to park on purpose.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:problem\"` (`state`, `sys_created_on`, `assignment_group`, `category`, `number`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map `state` values for your problem workflow; (2) Add filters for known backlog categories excluded from SLA; (3) Weekly review of `max_age` with problem managers and link to related `snow:incident` volume via `problem_id` in a companion panel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:problem\" earliest=-180d\n| where NOT match(lower(state),\"(?i)closed|resolved\")\n| eval age_days=round((now()-sys_created_on)/86400,0)\n| stats count as open_pr, median(age_days) as med_age, max(age_days) as max_age by assignment_group, category\n| sort -open_pr\n```\n\nUnderstanding this SPL\n\n**Open Problem Record Aging** — Aging open problems indicate stalled root-cause work and undermine permanent corrective actions; backlog visibility strengthens ITIL problem management cadence with incident and change stakeholders.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:problem\"` (`state`, `sys_created_on`, `assignment_group`, `category`, `number`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:problem. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:problem\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where NOT match(lower(state),\"(?i)closed|resolved\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by assignment_group, category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (open problem aging by group), Bar chart (open count by category), Single value (total open problems).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We surface problem records that sit open, so the organization either drives root-cause work or says why the problem stays open on purpose.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.3.16",
              "n": "Knowledge Article Effectiveness on Incidents",
              "c": "medium",
              "f": "intermediate",
              "v": "Comparing incidents that reference knowledge articles versus those that do not shows whether the knowledge base reduces handle time and repeat contacts—core ITIL knowledge management feedback.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:incident\"` (`knowledge`, `u_knowledge_article`, `resolved_at`, `opened_at`, `category`)",
              "q": "index=itsm sourcetype=\"snow:incident\" earliest=-90d state IN (\"closed\",\"resolved\",\"6\",\"7\")\n| eval has_kb=if(isnotnull(u_knowledge_article) AND u_knowledge_article!=\"\" OR match(lower(coalesce(knowledge,\"\")),\"true|yes|1\"),1,0)\n| eval mttr_hrs=round((resolved_at-opened_at)/3600,2)\n| where mttr_hrs>=0\n| stats median(mttr_hrs) as med_mttr, count as tickets by category, has_kb\n| xyseries category has_kb med_mttr\n| rename \"0\" as mttr_no_kb, \"1\" as mttr_with_kb\n| eval delta_hrs=round(mttr_no_kb-mttr_with_kb,2)\n| sort -delta_hrs",
              "m": "(1) Map the fields your instance uses for article linkage (`task_knowledge`, related lists ingested as multivalue); (2) If `xyseries` is sparse, pivot with `chart median(mttr_hrs) over category by has_kb` instead; (3) Share findings with knowledge coaches when `delta_hrs` is negligible for high-volume categories.",
              "z": "Grouped bar chart (median MTTR with vs without KB by category), Table (delta hours), Line chart (deflection-linked MTTR trend).",
              "kfp": "Views and deflection move after comms, training, launches, and when ticket types shift without a process fault.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:incident\"` (`knowledge`, `u_knowledge_article`, `resolved_at`, `opened_at`, `category`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map the fields your instance uses for article linkage (`task_knowledge`, related lists ingested as multivalue); (2) If `xyseries` is sparse, pivot with `chart median(mttr_hrs) over category by has_kb` instead; (3) Share findings with knowledge coaches when `delta_hrs` is negligible for high-volume categories.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" earliest=-90d state IN (\"closed\",\"resolved\",\"6\",\"7\")\n| eval has_kb=if(isnotnull(u_knowledge_article) AND u_knowledge_article!=\"\" OR match(lower(coalesce(knowledge,\"\")),\"true|yes|1\"),1,0)\n| eval mttr_hrs=round((resolved_at-opened_at)/3600,2)\n| where mttr_hrs>=0\n| stats median(mttr_hrs) as med_mttr, count as tickets by category, has_kb\n| xyseries category has_kb med_mttr\n| rename \"0\" as mttr_no_kb, \"1\" as mttr_with_kb\n| eval delta_hrs=round(mttr_no_kb-mttr_with_kb,2)\n| sort -delta_hrs\n```\n\nUnderstanding this SPL\n\n**Knowledge Article Effectiveness on Incidents** — Comparing incidents that reference knowledge articles versus those that do not shows whether the knowledge base reduces handle time and repeat contacts—core ITIL knowledge management feedback.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (`knowledge`, `u_knowledge_article`, `resolved_at`, `opened_at`, `category`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **has_kb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mttr_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mttr_hrs>=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by category, has_kb** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pivots fields for charting with `xyseries`.\n• Renames fields with `rename` for clarity or joins.\n• `eval` defines or adjusts **delta_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Grouped bar chart (median MTTR with vs without KB by category), Table (delta hours), Line chart (deflection-linked MTTR trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We see whether the right help is attached when we fix issues, so good writing turns into real time saved.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 16,
            "none": 0
          }
        },
        {
          "i": "16.4",
          "n": "Change & Release Management",
          "u": [
            {
              "i": "16.4.1",
              "n": "Unauthorized Change Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Changes executed without approval or outside policy create audit exposure and outage risk; detecting them early supports SOC-2/ITIL controls and rapid rollback decisions.",
              "t": "`Splunk_TA_snow`",
              "d": "`sourcetype=snow:change_request`",
              "q": "index=itsm sourcetype=\"snow:change_request\"\n| eval approved=coalesce(approval,\"\") \n| where match(lower(u_authorization),\"(?i)unauthorized|rejected\") OR (isnull(cab_date) AND lower(type)!=\"standard\" AND lower(category)!=\"routine\")\n| table _time, number, short_description, state, type, u_authorization, opened_by\n| sort -_time",
              "m": "Ingest change_request with approval, CAB, and authorization fields mapped from ServiceNow. Build allowlists for standard/pre-approved change models. Alert when production-impacting changes lack `cab_date` or show `rejected`/`unauthorized` authorization. Correlate with CMDB and deployment tools for out-of-band activity.",
              "z": "Table (flagged changes), Single value (unauthorized count — target: 0), Timeline (violations).",
              "kfp": "Apparent bypass during emergency work, pre-approved break-glass, or late documentation right after a sev-1.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `sourcetype=snow:change_request`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest change_request with approval, CAB, and authorization fields mapped from ServiceNow. Build allowlists for standard/pre-approved change models. Alert when production-impacting changes lack `cab_date` or show `rejected`/`unauthorized` authorization. Correlate with CMDB and deployment tools for out-of-band activity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\"\n| eval approved=coalesce(approval,\"\") \n| where match(lower(u_authorization),\"(?i)unauthorized|rejected\") OR (isnull(cab_date) AND lower(type)!=\"standard\" AND lower(category)!=\"routine\")\n| table _time, number, short_description, state, type, u_authorization, opened_by\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Unauthorized Change Detection** — Changes executed without approval or outside policy create audit exposure and outage risk; detecting them early supports SOC-2/ITIL controls and rapid rollback decisions.\n\nDocumented **Data sources**: `sourcetype=snow:change_request`. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **approved** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(lower(u_authorization),\"(?i)unauthorized|rejected\") OR (isnull(cab_date) AND lower(type)!=\"standard\" AND lower(…` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Unauthorized Change Detection**): table _time, number, short_description, state, type, u_authorization, opened_by\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (flagged changes), Single value (unauthorized count — target: 0), Timeline (violations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for change work that is missing the nod we said we need, so risky moves do not land without the right eyeballs.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "BAIT/KAIT",
                  "v": "Aug 2021",
                  "cl": "§9",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of BAIT/KAIT §9 (ICT operations management) — Splunk UC-16.4.1: Unauthorized Change Detection.",
                  "ea": "Saved search 'UC-16.4.1' running on sourcetype=snow:change_request, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bafin.de/SharedDocs/Veroeffentlichungen/EN/Rundschreiben/2021/rs_1021_BAIT_en.html"
                },
                {
                  "r": "IT-Grundschutz",
                  "v": "2023 Edition",
                  "cl": "OPS.1.1.2",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of IT-Grundschutz OPS.1.1.2 (Ordered ICT operation) — Splunk UC-16.4.1: Unauthorized Change Detection.",
                  "ea": "Saved search 'UC-16.4.1' running on sourcetype=snow:change_request, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Standards-und-Zertifizierung/IT-Grundschutz/it-grundschutz_node.html"
                },
                {
                  "r": "MAS TRM",
                  "v": "2021",
                  "cl": "§4.1.1",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of MAS TRM §4.1.1 (Technology risk governance) — Splunk UC-16.4.1: Unauthorized Change Detection.",
                  "ea": "Saved search 'UC-16.4.1' running on sourcetype=snow:change_request, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.mas.gov.sg/-/media/mas/regulations-and-financial-stability/regulatory-and-supervisory-framework/risk-management/trm-guidelines-18-january-2021.pdf"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R1",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of NERC CIP CIP-010-4 R1 (Configuration change management) — Splunk UC-16.4.1: Unauthorized Change Detection.",
                  "ea": "Saved search 'UC-16.4.1' running on sourcetype=snow:change_request, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Authorization",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of SOX-ITGC ITGC.ChangeMgmt.Authorization (Change authorised) — Splunk UC-16.4.1: Unauthorized Change Detection.",
                  "ea": "Saved search 'UC-16.4.1' running on sourcetype=snow:change_request, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.4.2",
              "n": "Change Window Compliance Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Work performed outside agreed maintenance windows disrupts users and breaks SLAs; measuring compliance enforces scheduling discipline and supports customer communication.",
              "t": "`Splunk_TA_snow`",
              "d": "`sourcetype=snow:change_request`",
              "q": "index=itsm sourcetype=\"snow:change_request\" state IN (\"Closed\",\"Implemented\")\n| eval start=strptime(start_date,\"%Y-%m-%d %H:%M:%S\"), end=strptime(end_date,\"%Y-%m-%d %H:%M:%S\")\n| eval window_start=strptime(planned_start,\"%Y-%m-%d %H:%M:%S\"), window_end=strptime(planned_end,\"%Y-%m-%d %H:%M:%S\")\n| eval outside_window=if(start<window_start OR end>window_end,1,0)\n| stats sum(outside_window) as breaches count as total by assignment_group\n| eval breach_pct=round(100*breaches/total,1)\n| where breach_pct > 0\n| sort -breach_pct",
              "m": "Map `planned_start`/`planned_end` and actual work `start_date`/`end_date` from the change record (field names vary—use transforms). Flag implementations that begin early or finish late versus the approved window. Report by assignment group and business service. Exclude emergency changes with documented extensions via change task.",
              "z": "Bar chart (breach % by team), Table (non-compliant CHGs), Line chart (weekly compliance %).",
              "kfp": "Shifts in ticket and workflow data during large incidents, process or field changes, holiday coverage, and after big rollouts; spot-check a sample in the source system before you treat the trend as a people problem.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `sourcetype=snow:change_request`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `planned_start`/`planned_end` and actual work `start_date`/`end_date` from the change record (field names vary—use transforms). Flag implementations that begin early or finish late versus the approved window. Report by assignment group and business service. Exclude emergency changes with documented extensions via change task.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" state IN (\"Closed\",\"Implemented\")\n| eval start=strptime(start_date,\"%Y-%m-%d %H:%M:%S\"), end=strptime(end_date,\"%Y-%m-%d %H:%M:%S\")\n| eval window_start=strptime(planned_start,\"%Y-%m-%d %H:%M:%S\"), window_end=strptime(planned_end,\"%Y-%m-%d %H:%M:%S\")\n| eval outside_window=if(start<window_start OR end>window_end,1,0)\n| stats sum(outside_window) as breaches count as total by assignment_group\n| eval breach_pct=round(100*breaches/total,1)\n| where breach_pct > 0\n| sort -breach_pct\n```\n\nUnderstanding this SPL\n\n**Change Window Compliance Monitoring** — Work performed outside agreed maintenance windows disrupts users and breaks SLAs; measuring compliance enforces scheduling discipline and supports customer communication.\n\nDocumented **Data sources**: `sourcetype=snow:change_request`. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **start** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **window_start** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **outside_window** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by assignment_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **breach_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where breach_pct > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (breach % by team), Table (non-compliant CHGs), Line chart (weekly compliance %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check whether we did the work in the time box we said we would, so out-of-window surprises are a choice, not a habit.",
              "mtype": [
                "Compliance",
                "Change"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.4.3",
              "n": "Failed Change Correlation with Incident Spikes",
              "c": "high",
              "f": "advanced",
              "v": "Linking unsuccessful changes to incident volume proves root cause for major reviews and helps tighten testing or rollback criteria for similar changes.",
              "t": "`Splunk_TA_snow`",
              "d": "`sourcetype=snow:change_request`, `sourcetype=snow:incident`",
              "q": "index=itsm sourcetype=\"snow:change_request\" state=\"Closed\"\n| where lower(close_code)=\"unsuccessful\"\n| eval ci=cmdb_ci\n| join type=left max=1 ci [\n  search index=itsm sourcetype=\"snow:incident\" earliest=-30d\n  | rename cmdb_ci as ci\n  | stats count as inc_count by ci\n]\n| where isnotnull(inc_count) AND inc_count>0\n| table number, short_description, ci, inc_count",
              "m": "Align `cmdb_ci` on change and incident. Use `join` with time bounds via subsearch or `transaction` on `ci` plus `_time` window (e.g., 4h after change close). Prefer native `caused_by` or `problem_id` when populated. Dashboard: unsuccessful changes with related incident counts in the follow-up window. Use for PIR and change model updates.",
              "z": "Table (failed CHG + incident count), Timeline (CHG vs incidents), Sankey (change → CI → incidents).",
              "kfp": "Rollbacks that spike in big cutovers, legacy retirements, or stricter post-audit close codes.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `sourcetype=snow:change_request`, `sourcetype=snow:incident`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign `cmdb_ci` on change and incident. Use `join` with time bounds via subsearch or `transaction` on `ci` plus `_time` window (e.g., 4h after change close). Prefer native `caused_by` or `problem_id` when populated. Dashboard: unsuccessful changes with related incident counts in the follow-up window. Use for PIR and change model updates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" state=\"Closed\"\n| where lower(close_code)=\"unsuccessful\"\n| eval ci=cmdb_ci\n| join type=left max=1 ci [\n  search index=itsm sourcetype=\"snow:incident\" earliest=-30d\n  | rename cmdb_ci as ci\n  | stats count as inc_count by ci\n]\n| where isnotnull(inc_count) AND inc_count>0\n| table number, short_description, ci, inc_count\n```\n\nUnderstanding this SPL\n\n**Failed Change Correlation with Incident Spikes** — Linking unsuccessful changes to incident volume proves root cause for major reviews and helps tighten testing or rollback criteria for similar changes.\n\nDocumented **Data sources**: `sourcetype=snow:change_request`, `sourcetype=snow:incident`. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where lower(close_code)=\"unsuccessful\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **ci** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnotnull(inc_count) AND inc_count>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Failed Change Correlation with Incident Spikes**): table number, short_description, ci, inc_count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed CHG + incident count), Timeline (CHG vs incidents), Sankey (change → CI → incidents).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for a wave of customer pain after a bad change, so we can roll back, fix forward, and tell the business fast.",
              "mtype": [
                "Fault",
                "Change"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.4.4",
              "n": "Release Deployment Success Rate Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Release success rate summarizes delivery health; sustained drops signal quality gaps in testing, automation, or release windows.",
              "t": "`Splunk_TA_snow`, CI/CD release tags (optional)",
              "d": "`sourcetype=snow:change_request`, `sourcetype=snow:release`",
              "q": "index=itsm sourcetype=\"snow:change_request\" (category=\"Release\" OR type=\"Release\")\n| eval success=if(lower(close_code)=\"successful\" OR (state=\"Closed\" AND lower(u_outcome)=\"success\"),1,0)\n| eval failed=if(lower(close_code)=\"unsuccessful\" OR lower(u_outcome)=\"failed\",1,0)\n| timechart span=1w sum(success) as successes sum(failed) as failures\n| eval success_rate=round(100*successes/(successes+failures),1)",
              "m": "Classify changes or release records that represent deployments (release catalog, RFC templates). Normalize `close_code`/`u_outcome`. Optionally join Jenkins/GitHub deployment events by `correlation_id`. Report weekly success rate by application and environment. Alert below target (e.g., 95%).",
              "z": "Line chart (success rate trend), Single value (rolling success %), Bar chart (by application).",
              "kfp": "Release metrics during cutover programs, canary or blue-green double counting, and environment renames in the data.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`, CI/CD release tags (optional).\n• Ensure the following data sources are available: `sourcetype=snow:change_request`, `sourcetype=snow:release`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nClassify changes or release records that represent deployments (release catalog, RFC templates). Normalize `close_code`/`u_outcome`. Optionally join Jenkins/GitHub deployment events by `correlation_id`. Report weekly success rate by application and environment. Alert below target (e.g., 95%).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" (category=\"Release\" OR type=\"Release\")\n| eval success=if(lower(close_code)=\"successful\" OR (state=\"Closed\" AND lower(u_outcome)=\"success\"),1,0)\n| eval failed=if(lower(close_code)=\"unsuccessful\" OR lower(u_outcome)=\"failed\",1,0)\n| timechart span=1w sum(success) as successes sum(failed) as failures\n| eval success_rate=round(100*successes/(successes+failures),1)\n```\n\nUnderstanding this SPL\n\n**Release Deployment Success Rate Tracking** — Release success rate summarizes delivery health; sustained drops signal quality gaps in testing, automation, or release windows.\n\nDocumented **Data sources**: `sourcetype=snow:change_request`, `sourcetype=snow:release`. **App/TA** (typical add-on context): `Splunk_TA_snow`, CI/CD release tags (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **success** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **failed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1w** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **success_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (success rate trend), Single value (rolling success %), Bar chart (by application).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We follow what share of release steps finish clean, so release habits stay honest and we catch brittle paths early.",
              "mtype": [
                "Performance",
                "Change"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.4.5",
              "n": "Emergency Change Frequency Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Chronic reliance on emergency changes indicates planning gaps or unstable platforms; trending frequency guides process improvement and capacity investments.",
              "t": "`Splunk_TA_snow`",
              "d": "`sourcetype=snow:change_request`",
              "q": "index=itsm sourcetype=\"snow:change_request\"\n| eval is_emergency=if(match(lower(type),\"emergency\") OR lower(u_change_model)=\"emergency\",1,0)\n| where is_emergency=1\n| bin _time span=1w\n| stats count by _time, assignment_group\n| eventstats avg(count) as baseline by assignment_group\n| where count > baseline*1.5",
              "m": "Tag emergency changes via `type`, model, or priority. Exclude duplicates from reopen workflows. Compare weekly counts to a rolling baseline per team. Alert on spikes; review in CAB for pattern (vendor defects, capacity, failed standard changes).",
              "z": "Line chart (emergency CHGs per week), Bar chart (by team), Table (recent emergencies with cause).",
              "kfp": "High emergency counts during sev-1s, hotfix waves, and new environments that are still shaky; pair with the incident log.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `sourcetype=snow:change_request`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag emergency changes via `type`, model, or priority. Exclude duplicates from reopen workflows. Compare weekly counts to a rolling baseline per team. Alert on spikes; review in CAB for pattern (vendor defects, capacity, failed standard changes).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\"\n| eval is_emergency=if(match(lower(type),\"emergency\") OR lower(u_change_model)=\"emergency\",1,0)\n| where is_emergency=1\n| bin _time span=1w\n| stats count by _time, assignment_group\n| eventstats avg(count) as baseline by assignment_group\n| where count > baseline*1.5\n```\n\nUnderstanding this SPL\n\n**Emergency Change Frequency Monitoring** — Chronic reliance on emergency changes indicates planning gaps or unstable platforms; trending frequency guides process improvement and capacity investments.\n\nDocumented **Data sources**: `sourcetype=snow:change_request`. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_emergency** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_emergency=1` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, assignment_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by assignment_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > baseline*1.5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (emergency CHGs per week), Bar chart (by team), Table (recent emergencies with cause).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how often we use emergency change, so a culture of rush does not become the normal way we ship.",
              "mtype": [
                "Operations",
                "Risk"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.4.6",
              "n": "Change Advisory Board (CAB) Approval Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "CAB sign-off for high-risk changes is a control point; measuring compliance before implementation reduces unauthorized production risk.",
              "t": "`Splunk_TA_snow`",
              "d": "`sourcetype=snow:change_request`",
              "q": "index=itsm sourcetype=\"snow:change_request\" u_risk IN (\"High\",\"1 - High\")\n| eval cab_ok=if(isnotnull(cab_date) AND cab_decision=\"Approved\",1,0)\n| where state IN (\"Implement\",\"Closed\") AND cab_ok=0 AND lower(type)!=\"emergency\"\n| stats count by number, short_description, assignment_group, cab_decision\n| sort -_time",
              "m": "Map risk, CAB meeting date, and decision fields from ServiceNow. Define policy: high-risk changes require CAB approval before `Implement`. Allow documented emergency exceptions with `CHG` tasks. Weekly report of violations; integrate with GRC dashboards.",
              "z": "Table (non-compliant CHGs), Single value (CAB compliance %), Pie chart (approved vs missing CAB).",
              "kfp": "Shifts in ticket and workflow data during large incidents, process or field changes, holiday coverage, and after big rollouts; spot-check a sample in the source system before you treat the trend as a people problem.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `sourcetype=snow:change_request`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap risk, CAB meeting date, and decision fields from ServiceNow. Define policy: high-risk changes require CAB approval before `Implement`. Allow documented emergency exceptions with `CHG` tasks. Weekly report of violations; integrate with GRC dashboards.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" u_risk IN (\"High\",\"1 - High\")\n| eval cab_ok=if(isnotnull(cab_date) AND cab_decision=\"Approved\",1,0)\n| where state IN (\"Implement\",\"Closed\") AND cab_ok=0 AND lower(type)!=\"emergency\"\n| stats count by number, short_description, assignment_group, cab_decision\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Change Advisory Board (CAB) Approval Compliance** — CAB sign-off for high-risk changes is a control point; measuring compliance before implementation reduces unauthorized production risk.\n\nDocumented **Data sources**: `sourcetype=snow:change_request`. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cab_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where state IN (\"Implement\",\"Closed\") AND cab_ok=0 AND lower(type)!=\"emergency\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by number, short_description, assignment_group, cab_decision** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant CHGs), Single value (CAB compliance %), Pie chart (approved vs missing CAB).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We check for serious change that skipped the right approval step, so governance stays real, not a box we tick in an audit.",
              "mtype": [
                "Compliance",
                "Governance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.4.7",
              "n": "Post-Implementation Review (PIR) Completion Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "PIRs capture lessons from major incidents and failed changes; tracking completion closes the feedback loop and satisfies audit expectations after Sev1 events.",
              "t": "`Splunk_TA_snow`",
              "d": "`sourcetype=snow:problem`, `sourcetype=snow:change_request`",
              "q": "index=itsm sourcetype=\"snow:change_request\" u_pir_required=\"true\"\n| eval pir_done=if(isnotnull(u_pir_completed) OR lower(u_pir_state)=\"closed\",1,0)\n| where pir_done=0\n| eval age_days=round((now()-_time)/86400,0)\n| where age_days>7\n| stats latest(_time) as last_seen, values(number) as chg by problem_id, short_description\n| sort last_seen",
              "m": "Use change fields or related problem tasks for PIR workflow (`u_pir_required`, completion date). For Sev1-linked changes, join `problem` records and PIR tasks. Alert when PIR is overdue (e.g., 7 days post closure). Escalate to problem management owner.",
              "z": "Table (open PIRs), Single value (overdue PIR count), Bar chart (PIR completion SLA by team).",
              "kfp": "Late completion when a major event is still open, legal review slows close-out, or a new runbook is not followed yet.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `sourcetype=snow:problem`, `sourcetype=snow:change_request`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse change fields or related problem tasks for PIR workflow (`u_pir_required`, completion date). For Sev1-linked changes, join `problem` records and PIR tasks. Alert when PIR is overdue (e.g., 7 days post closure). Escalate to problem management owner.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" u_pir_required=\"true\"\n| eval pir_done=if(isnotnull(u_pir_completed) OR lower(u_pir_state)=\"closed\",1,0)\n| where pir_done=0\n| eval age_days=round((now()-_time)/86400,0)\n| where age_days>7\n| stats latest(_time) as last_seen, values(number) as chg by problem_id, short_description\n| sort last_seen\n```\n\nUnderstanding this SPL\n\n**Post-Implementation Review (PIR) Completion Tracking** — PIRs capture lessons from major incidents and failed changes; tracking completion closes the feedback loop and satisfies audit expectations after Sev1 events.\n\nDocumented **Data sources**: `sourcetype=snow:problem`, `sourcetype=snow:change_request`. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pir_done** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pir_done=0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>7` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by problem_id, short_description** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (open PIRs), Single value (overdue PIR count), Bar chart (PIR completion SLA by team).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We track whether we finish the after-change review, so the team learns and the same mistake is less likely to return.",
              "mtype": [
                "Compliance",
                "Quality"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.4.8",
              "n": "Change Risk Assessment Accuracy",
              "c": "medium",
              "f": "advanced",
              "v": "If “low risk” changes often fail or drive incidents, the risk model is miscalibrated; analytics improve scoring and reduce surprise outages.",
              "t": "`Splunk_TA_snow`",
              "d": "`sourcetype=snow:change_request`",
              "q": "index=itsm sourcetype=\"snow:change_request\" state=\"Closed\"\n| eval predicted=case(match(lower(u_risk),\"low\"),\"Low\",match(lower(u_risk),\"medium\"),\"Medium\",match(lower(u_risk),\"high\"),\"High\",true(),\"Unknown\")\n| eval actual=if(lower(close_code)=\"unsuccessful\" OR lower(u_customer_impact)=\"yes\",\"Bad\",\"Good\")\n| stats count by predicted, actual\n| eventstats sum(count) as tot by predicted\n| eval pct=round(100*count/tot,1)\n| where predicted=\"Low\" AND actual=\"Bad\"",
              "m": "Compare `u_risk` at submission to outcomes (`close_code`, customer impact, related incident count within 24h). Build confusion-style matrix: predicted risk vs actual. Quarterly review with change managers; adjust questionnaires and automation gates. Requires consistent field extraction in the TA.",
              "z": "Matrix heatmap (predicted vs actual), Table (low-risk failures), Line chart (calibration trend quarter over quarter).",
              "kfp": "Risk that was set early and the close code later tells a different story; or categories that are always subjective.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `sourcetype=snow:change_request`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCompare `u_risk` at submission to outcomes (`close_code`, customer impact, related incident count within 24h). Build confusion-style matrix: predicted risk vs actual. Quarterly review with change managers; adjust questionnaires and automation gates. Requires consistent field extraction in the TA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" state=\"Closed\"\n| eval predicted=case(match(lower(u_risk),\"low\"),\"Low\",match(lower(u_risk),\"medium\"),\"Medium\",match(lower(u_risk),\"high\"),\"High\",true(),\"Unknown\")\n| eval actual=if(lower(close_code)=\"unsuccessful\" OR lower(u_customer_impact)=\"yes\",\"Bad\",\"Good\")\n| stats count by predicted, actual\n| eventstats sum(count) as tot by predicted\n| eval pct=round(100*count/tot,1)\n| where predicted=\"Low\" AND actual=\"Bad\"\n```\n\nUnderstanding this SPL\n\n**Change Risk Assessment Accuracy** — If “low risk” changes often fail or drive incidents, the risk model is miscalibrated; analytics improve scoring and reduce surprise outages.\n\nDocumented **Data sources**: `sourcetype=snow:change_request`. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **predicted** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **actual** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by predicted, actual** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by predicted** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where predicted=\"Low\" AND actual=\"Bad\"` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix heatmap (predicted vs actual), Table (low-risk failures), Line chart (calibration trend quarter over quarter).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare how risky we said a change was with how it really went, so our risk call gets sharper over time.",
              "mtype": [
                "Quality",
                "Risk"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.4.9",
              "n": "Change Backout and Rollback Rate",
              "c": "critical",
              "f": "intermediate",
              "v": "Backouts indicate customer-impacting defects or planning gaps; trending rollback rates informs ITIL change evaluation and release quality gates before the next deployment wave.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:change_request\"` (`close_code`, `close_notes`, `u_outcome`, `type`, `assignment_group`)",
              "q": "index=itsm sourcetype=\"snow:change_request\" state=\"Closed\" earliest=-90d\n| eval backed_out=if(match(lower(close_notes),\"(?i)backout|rollback|revert|rolled back\") OR match(lower(close_code),\"(?i)backed|rollback\") OR match(lower(u_outcome),\"(?i)backed|rollback\"),1,0)\n| timechart span=1w sum(backed_out) as backouts count as total\n| eval backout_pct=round(100*backouts/nullif(total,0),2)\n| sort _time",
              "m": "(1) Normalize vendor-specific `close_code` values into the `backed_out` logic; (2) Exclude duplicate closure events by `dedup number` if replays occur; (3) Alert when `backout_pct` exceeds the rolling baseline by 50% for two consecutive weeks.",
              "z": "Line chart (weekly backout %), Single value (90-day backout %), Bar chart (backouts by assignment group from `stats` panel).",
              "kfp": "Rollbacks that spike in big cutovers, legacy retirements, or stricter post-audit close codes.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:change_request\"` (`close_code`, `close_notes`, `u_outcome`, `type`, `assignment_group`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize vendor-specific `close_code` values into the `backed_out` logic; (2) Exclude duplicate closure events by `dedup number` if replays occur; (3) Alert when `backout_pct` exceeds the rolling baseline by 50% for two consecutive weeks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" state=\"Closed\" earliest=-90d\n| eval backed_out=if(match(lower(close_notes),\"(?i)backout|rollback|revert|rolled back\") OR match(lower(close_code),\"(?i)backed|rollback\") OR match(lower(u_outcome),\"(?i)backed|rollback\"),1,0)\n| timechart span=1w sum(backed_out) as backouts count as total\n| eval backout_pct=round(100*backouts/nullif(total,0),2)\n| sort _time\n```\n\nUnderstanding this SPL\n\n**Change Backout and Rollback Rate** — Backouts indicate customer-impacting defects or planning gaps; trending rollback rates informs ITIL change evaluation and release quality gates before the next deployment wave.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"` (`close_code`, `close_notes`, `u_outcome`, `type`, `assignment_group`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **backed_out** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1w** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **backout_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user span=1w | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Change Backout and Rollback Rate** — Backouts indicate customer-impacting defects or planning gaps; trending rollback rates informs ITIL change evaluation and release quality gates before the next deployment wave.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"` (`close_code`, `close_notes`, `u_outcome`, `type`, `assignment_group`). **App/TA** (typical add-on context): `Splunk_TA_snow`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (weekly backout %), Single value (90-day backout %), Bar chart (backouts by assignment group from `stats` panel).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We count how often we have to back out, so we know when a train or a platform is still too risky for comfort.",
              "mtype": [
                "Change",
                "Performance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user span=1w | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.4.10",
              "n": "Change Implementation Duration vs Plan",
              "c": "high",
              "f": "intermediate",
              "v": "Overruns against the approved implementation window increase collision risk and customer impact; comparing actual versus planned duration reinforces ITIL change scheduling and CAB commitments.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:change_request\"` (`start_date`, `end_date`, `planned_start`, `planned_end`, `number`, `assignment_group`)",
              "q": "index=itsm sourcetype=\"snow:change_request\" state IN (\"Closed\",\"Implemented\") earliest=-60d\n| eval actual_hrs=round((end_date-start_date)/3600,2)\n| eval planned_hrs=round((planned_end-planned_start)/3600,2)\n| where planned_hrs>0 AND actual_hrs>=0\n| eval overrun_hrs=round(actual_hrs-planned_hrs,2)\n| eval overrun_pct=round(100*overrun_hrs/planned_hrs,1)\n| where overrun_pct>25\n| table number, short_description, assignment_group, planned_hrs, actual_hrs, overrun_pct\n| sort -overrun_pct",
              "m": "(1) Convert string timestamps with `strptime` if your TA stores strings instead of epoch—mirror UC-16.4.2 field mappings; (2) Exclude emergency changes with documented extensions via a lookup; (3) Feed chronic overruns into release planning retrospectives.",
              "z": "Table (top overruns), Bar chart (overrun % by assignment group), Histogram (overrun distribution).",
              "kfp": "Overruns from complex cutovers, vendor waits, and extra testing in the same window the plan called short.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:change_request\"` (`start_date`, `end_date`, `planned_start`, `planned_end`, `number`, `assignment_group`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Convert string timestamps with `strptime` if your TA stores strings instead of epoch—mirror UC-16.4.2 field mappings; (2) Exclude emergency changes with documented extensions via a lookup; (3) Feed chronic overruns into release planning retrospectives.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" state IN (\"Closed\",\"Implemented\") earliest=-60d\n| eval actual_hrs=round((end_date-start_date)/3600,2)\n| eval planned_hrs=round((planned_end-planned_start)/3600,2)\n| where planned_hrs>0 AND actual_hrs>=0\n| eval overrun_hrs=round(actual_hrs-planned_hrs,2)\n| eval overrun_pct=round(100*overrun_hrs/planned_hrs,1)\n| where overrun_pct>25\n| table number, short_description, assignment_group, planned_hrs, actual_hrs, overrun_pct\n| sort -overrun_pct\n```\n\nUnderstanding this SPL\n\n**Change Implementation Duration vs Plan** — Overruns against the approved implementation window increase collision risk and customer impact; comparing actual versus planned duration reinforces ITIL change scheduling and CAB commitments.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"` (`start_date`, `end_date`, `planned_start`, `planned_end`, `number`, `assignment_group`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **actual_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **planned_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where planned_hrs>0 AND actual_hrs>=0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **overrun_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overrun_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overrun_pct>25` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Change Implementation Duration vs Plan**): table number, short_description, assignment_group, planned_hrs, actual_hrs, overrun_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Change Implementation Duration vs Plan** — Overruns against the approved implementation window increase collision risk and customer impact; comparing actual versus planned duration reinforces ITIL change scheduling and CAB commitments.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"` (`start_date`, `end_date`, `planned_start`, `planned_end`, `number`, `assignment_group`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top overruns), Bar chart (overrun % by assignment group), Histogram (overrun distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We compare how long a change really took to what the plan said, so estimates, staffing, and customer comms get better next time.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.4.11",
              "n": "Same-CI Concurrent Scheduled Changes",
              "c": "critical",
              "f": "advanced",
              "v": "Multiple overlapping implementations against one CI amplify outage risk; detecting concurrent schedules supports ITIL change coordination and collision analysis ahead of CAB approval.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:change_request\"` (`cmdb_ci`, `start_date`, `end_date`, `state`, `number`)",
              "q": "index=itsm sourcetype=\"snow:change_request\" earliest=-14d\n| where match(lower(state),\"(?i)scheduled|implement|work in progress\")\n| where isnotnull(cmdb_ci) AND cmdb_ci!=\"\"\n| rename number as chg_a, start_date as a_start, end_date as a_end\n| join type=inner max=50 cmdb_ci [\n  search index=itsm sourcetype=\"snow:change_request\" earliest=-14d\n  | where match(lower(state),\"(?i)scheduled|implement|work in progress\")\n  | rename number as chg_b, start_date as b_start, end_date as b_end\n  | fields cmdb_ci, chg_b, b_start, b_end\n]\n| where chg_a!=chg_b AND isnotnull(a_start) AND isnotnull(b_start) AND a_start < b_end AND a_end > b_start\n| stats values(chg_b) as overlapping_changes, dc(chg_b) as overlap_count by cmdb_ci, chg_a\n| where overlap_count>0\n| sort -overlap_count",
              "m": "(1) If changes link multiple CIs, pivot from `snow:task_ci` or your m2m table before this join; (2) Keep `max=` tuned (for example 20–100) so the join stays bounded on busy CMDBs; (3) Alert CAB when `overlap_count` exceeds zero for production CIs in the next 72 hours.",
              "z": "Table (CI with overlapping CHG numbers), Timeline (overlapping windows), Single value (collision pairs detected).",
              "kfp": "Intentional overlap for blue-green, paired work, or test cells that still look like two open windows in the list.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:change_request\"` (`cmdb_ci`, `start_date`, `end_date`, `state`, `number`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) If changes link multiple CIs, pivot from `snow:task_ci` or your m2m table before this join; (2) Keep `max=` tuned (for example 20–100) so the join stays bounded on busy CMDBs; (3) Alert CAB when `overlap_count` exceeds zero for production CIs in the next 72 hours.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" earliest=-14d\n| where match(lower(state),\"(?i)scheduled|implement|work in progress\")\n| where isnotnull(cmdb_ci) AND cmdb_ci!=\"\"\n| rename number as chg_a, start_date as a_start, end_date as a_end\n| join type=inner max=50 cmdb_ci [\n  search index=itsm sourcetype=\"snow:change_request\" earliest=-14d\n  | where match(lower(state),\"(?i)scheduled|implement|work in progress\")\n  | rename number as chg_b, start_date as b_start, end_date as b_end\n  | fields cmdb_ci, chg_b, b_start, b_end\n]\n| where chg_a!=chg_b AND isnotnull(a_start) AND isnotnull(b_start) AND a_start < b_end AND a_end > b_start\n| stats values(chg_b) as overlapping_changes, dc(chg_b) as overlap_count by cmdb_ci, chg_a\n| where overlap_count>0\n| sort -overlap_count\n```\n\nUnderstanding this SPL\n\n**Same-CI Concurrent Scheduled Changes** — Multiple overlapping implementations against one CI amplify outage risk; detecting concurrent schedules supports ITIL change coordination and collision analysis ahead of CAB approval.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"` (`cmdb_ci`, `start_date`, `end_date`, `state`, `number`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(state),\"(?i)scheduled|implement|work in progress\")` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where isnotnull(cmdb_ci) AND cmdb_ci!=\"\"` — typically the threshold or rule expression for this monitoring goal.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where chg_a!=chg_b AND isnotnull(a_start) AND isnotnull(b_start) AND a_start < b_end AND a_end > b_start` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cmdb_ci, chg_a** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where overlap_count>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Same-CI Concurrent Scheduled Changes** — Multiple overlapping implementations against one CI amplify outage risk; detecting concurrent schedules supports ITIL change coordination and collision analysis ahead of CAB approval.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"` (`cmdb_ci`, `start_date`, `end_date`, `state`, `number`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (CI with overlapping CHG numbers), Timeline (overlapping windows), Single value (collision pairs detected).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We look for two scheduled windows on the same thing, so we can merge or order work before a collision hurts customers.",
              "mtype": [
                "Configuration",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.4.12",
              "n": "Standard Change Volume and Mix Guardrail",
              "c": "medium",
              "f": "beginner",
              "v": "A sudden spike in standard changes may indicate bypassing normal scrutiny; monitoring volume and mix supports ITIL delegated authority models without eroding risk controls.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:change_request\"` (`type`, `category`, `assignment_group`, `opened_at`)",
              "q": "index=itsm sourcetype=\"snow:change_request\" earliest=-90d\n| eval is_standard=if(match(lower(type),\"(?i)standard\"),1,0)\n| eval _time=coalesce(opened_at, sys_created_on, _time)\n| bin _time span=1w\n| stats count as chg_total, sum(is_standard) as std_total by _time, assignment_group\n| eval std_mix_pct=round(100*std_total/nullif(chg_total,0),1)\n| eventstats median(std_mix_pct) as med_mix by assignment_group\n| where std_mix_pct > med_mix*1.4 AND chg_total>=10\n| sort -std_mix_pct",
              "m": "(1) Align `type` values with your change models; (2) Tune the 1.4 multiplier to your risk appetite; (3) Pair with UC-16.4.5 emergency trends to distinguish healthy pre-approval uptake from policy drift.",
              "z": "Line chart (standard % by week), Table (groups exceeding mix guardrail), Stacked bar (change type mix).",
              "kfp": "Standard versus normal mix when you add pre-approved types, retrain, or run an audit that changes what people pick.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:change_request\"` (`type`, `category`, `assignment_group`, `opened_at`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `type` values with your change models; (2) Tune the 1.4 multiplier to your risk appetite; (3) Pair with UC-16.4.5 emergency trends to distinguish healthy pre-approval uptake from policy drift.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" earliest=-90d\n| eval is_standard=if(match(lower(type),\"(?i)standard\"),1,0)\n| eval _time=coalesce(opened_at, sys_created_on, _time)\n| bin _time span=1w\n| stats count as chg_total, sum(is_standard) as std_total by _time, assignment_group\n| eval std_mix_pct=round(100*std_total/nullif(chg_total,0),1)\n| eventstats median(std_mix_pct) as med_mix by assignment_group\n| where std_mix_pct > med_mix*1.4 AND chg_total>=10\n| sort -std_mix_pct\n```\n\nUnderstanding this SPL\n\n**Standard Change Volume and Mix Guardrail** — A sudden spike in standard changes may indicate bypassing normal scrutiny; monitoring volume and mix supports ITIL delegated authority models without eroding risk controls.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"` (`type`, `category`, `assignment_group`, `opened_at`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_standard** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, assignment_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **std_mix_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by assignment_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where std_mix_pct > med_mix*1.4 AND chg_total>=10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user span=1w | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Standard Change Volume and Mix Guardrail** — A sudden spike in standard changes may indicate bypassing normal scrutiny; monitoring volume and mix supports ITIL delegated authority models without eroding risk controls.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"` (`type`, `category`, `assignment_group`, `opened_at`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (standard % by week), Table (groups exceeding mix guardrail), Stacked bar (change type mix).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how many changes ride the pre-approved, lighter path, so we are not abusing a fast lane for work that is not really standard.",
              "mtype": [
                "Compliance",
                "Operations"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user span=1w | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.8,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 12,
            "none": 0
          }
        },
        {
          "i": "16.5",
          "n": "ITSM Trending",
          "u": [
            {
              "i": "16.5.1",
              "n": "Ticket Backlog Aging Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Watching how open work ages shows whether queues are draining or stagnating; growth in older buckets signals staffing, tooling, or upstream failure demand that will eventually breach customer expectations.",
              "t": "`Splunk_TA_snow`, optional HEC forwarder for ServiceNow scheduled reports",
              "d": "`index=itsm` `sourcetype=\"snow:backlog_daily\"` (recommended) or `sourcetype=\"snow:incident\"` with snapshot pipeline; `sourcetype=jira:*` for Jira SM parity",
              "q": "index=itsm sourcetype=\"snow:backlog_daily\" earliest=-30d@d\n| eval _time=strptime(snapshot_date,\"%Y-%m-%d\")\n| timechart span=1d sum(open_count) by age_bucket\n| foreach \"0-7d\" \"7-30d\" \"30-90d\" \"90d+\" [trendline sma7(<<FIELD>>) as trend_<<FIELD>>]",
              "m": "Publish a daily ServiceNow report (or scripted REST) that counts open incidents or tasks by age bucket (`0-7d`, `7-30d`, `30-90d`, `90d+`) and send results to Splunk with stable field names. If you cannot ingest snapshots yet, run a nightly saved search that writes the same structure to a summary index. Map `age_bucket` labels to your CMDB time zones; exclude cancelled records. If `foreach` is unavailable on your Splunk version, add a `trendline` per series in Dashboard Studio or clone the search per bucket. For Jira, mirror buckets using `created` versus `resolutiondate`.",
              "z": "Stacked area or column chart (buckets over 30 days), Line chart (trendline overlays per bucket), Single value (90d+ share of open backlog).",
              "kfp": "Stale or old records while waiting on a vendor, a customer reply, a patch, or work we chose to park on purpose.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`, optional HEC forwarder for ServiceNow scheduled reports.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:backlog_daily\"` (recommended) or `sourcetype=\"snow:incident\"` with snapshot pipeline; `sourcetype=jira:*` for Jira SM parity.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPublish a daily ServiceNow report (or scripted REST) that counts open incidents or tasks by age bucket (`0-7d`, `7-30d`, `30-90d`, `90d+`) and send results to Splunk with stable field names. If you cannot ingest snapshots yet, run a nightly saved search that writes the same structure to a summary index. Map `age_bucket` labels to your CMDB time zones; exclude cancelled records. If `foreach` is unavailable on your Splunk version, add a `trendline` per series in Dashboard Studio or clone the searc…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:backlog_daily\" earliest=-30d@d\n| eval _time=strptime(snapshot_date,\"%Y-%m-%d\")\n| timechart span=1d sum(open_count) by age_bucket\n| foreach \"0-7d\" \"7-30d\" \"30-90d\" \"90d+\" [trendline sma7(<<FIELD>>) as trend_<<FIELD>>]\n```\n\nUnderstanding this SPL\n\n**Ticket Backlog Aging Trending** — Watching how open work ages shows whether queues are draining or stagnating; growth in older buckets signals staffing, tooling, or upstream failure demand that will eventually breach customer expectations.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:backlog_daily\"` (recommended) or `sourcetype=\"snow:incident\"` with snapshot pipeline; `sourcetype=jira:*` for Jira SM parity. **App/TA** (typical add-on context): `Splunk_TA_snow`, optional HEC forwarder for ServiceNow scheduled reports. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:backlog_daily. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:backlog_daily\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by age_bucket** — ideal for trending and alerting on this use case.\n• Iterates over multivalue fields with `foreach`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area or column chart (buckets over 30 days), Line chart (trendline overlays per bucket), Single value (90d+ share of open backlog).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend how long work sits in the backlog, so we can see debt building before customers feel it in every call.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.5.2",
              "n": "Change Success Rate Trending",
              "c": "high",
              "f": "intermediate",
              "v": "A falling change success rate warns that testing, communication, or risk scoring is degrading before major outages; trending by week ties process fixes to measurable improvement.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:change_request\"`; optional `sourcetype=jira:*` change records",
              "q": "index=itsm sourcetype=\"snow:change_request\" earliest=-365d@d state=\"Closed\"\n| eval ok=if(match(lower(close_code),\"(?i)successful|success|complete\") OR match(lower(u_outcome),\"(?i)success\"),1,0)\n| timechart span=1w sum(ok) as successes count as total\n| eval success_pct=round(100*successes/nullif(total,0),1)\n| trendline sma8(success_pct) as success_pct_trend\n| eventstats median(success_pct) as med_qtr\n| eval vs_median=round(success_pct - med_qtr,1)",
              "m": "Normalize `close_code` and `u_outcome` from ServiceNow; treat emergency and standard changes consistently in scope. Exclude duplicate closure events by deduping on `number` and `sys_updated_on`. Align week boundaries to your CAB calendar. Alert when `success_pct` drops below target for two consecutive weeks or falls more than 10 points under the trailing quarterly median.",
              "z": "Line chart (weekly success % with trendline), Single value (rolling 13-week average), Table (worst assignment groups).",
              "kfp": "Change success that dips during big trains, new platforms, or a new way to record good versus bad in the form.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:change_request\"`; optional `sourcetype=jira:*` change records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize `close_code` and `u_outcome` from ServiceNow; treat emergency and standard changes consistently in scope. Exclude duplicate closure events by deduping on `number` and `sys_updated_on`. Align week boundaries to your CAB calendar. Alert when `success_pct` drops below target for two consecutive weeks or falls more than 10 points under the trailing quarterly median.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" earliest=-365d@d state=\"Closed\"\n| eval ok=if(match(lower(close_code),\"(?i)successful|success|complete\") OR match(lower(u_outcome),\"(?i)success\"),1,0)\n| timechart span=1w sum(ok) as successes count as total\n| eval success_pct=round(100*successes/nullif(total,0),1)\n| trendline sma8(success_pct) as success_pct_trend\n| eventstats median(success_pct) as med_qtr\n| eval vs_median=round(success_pct - med_qtr,1)\n```\n\nUnderstanding this SPL\n\n**Change Success Rate Trending** — A falling change success rate warns that testing, communication, or risk scoring is degrading before major outages; trending by week ties process fixes to measurable improvement.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"`; optional `sourcetype=jira:*` change records. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1w** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **success_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Change Success Rate Trending**): trendline sma8(success_pct) as success_pct_trend\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **vs_median** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (weekly success % with trendline), Single value (rolling 13-week average), Table (worst assignment groups).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend change outcomes over time, so we can see a culture or platform get safer—or slip—before a bad release week.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.5.3",
              "n": "Knowledge Article Deflection Rate Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Deflection measures whether self-service content reduces human-handled tickets; sustained improvement validates knowledge investment and lowers support cost.",
              "t": "`Splunk_TA_snow`, Knowledge Management KPIs (custom fields)",
              "d": "`index=itsm` `sourcetype=\"snow:incident\"` with `u_knowledge_used`, `resolved_by_knowledge`, or related task flags; `sourcetype=\"snow:kb_use\"` if you ingest portal analytics separately",
              "q": "index=itsm sourcetype=\"snow:incident\" earliest=-180d@d\n| eval deflected=if(match(lower(coalesce(u_knowledge_used,resolved_by_knowledge,knowledge_flag)),\"(?i)true|yes|1|deflected\"),1,0)\n| timechart span=1w sum(deflected) as deflected count as closed_total\n| eval deflect_pct=round(100*deflected/nullif(closed_total,0),1)\n| trendline sma8(deflect_pct) as deflect_trend\n| eventstats avg(deflect_pct) as run_avg\n| eval delta_vs_avg=round(deflect_pct - run_avg,2)",
              "m": "Map the fields your instance uses for “resolved with knowledge” or portal self-solve. If deflection is only visible on closure codes, translate codes via a lookup. Include chatbot-assisted resolutions if they write back to the incident. Review monthly with content owners when `deflect_pct` flatlines despite new articles.",
              "z": "Line chart (deflection % and trend), Bar chart (deflected count by category), Table (top knowledge articles linked from incidents).",
              "kfp": "Views and deflection move after comms, training, launches, and when ticket types shift without a process fault.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`, Knowledge Management KPIs (custom fields).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:incident\"` with `u_knowledge_used`, `resolved_by_knowledge`, or related task flags; `sourcetype=\"snow:kb_use\"` if you ingest portal analytics separately.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap the fields your instance uses for “resolved with knowledge” or portal self-solve. If deflection is only visible on closure codes, translate codes via a lookup. Include chatbot-assisted resolutions if they write back to the incident. Review monthly with content owners when `deflect_pct` flatlines despite new articles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" earliest=-180d@d\n| eval deflected=if(match(lower(coalesce(u_knowledge_used,resolved_by_knowledge,knowledge_flag)),\"(?i)true|yes|1|deflected\"),1,0)\n| timechart span=1w sum(deflected) as deflected count as closed_total\n| eval deflect_pct=round(100*deflected/nullif(closed_total,0),1)\n| trendline sma8(deflect_pct) as deflect_trend\n| eventstats avg(deflect_pct) as run_avg\n| eval delta_vs_avg=round(deflect_pct - run_avg,2)\n```\n\nUnderstanding this SPL\n\n**Knowledge Article Deflection Rate Trending** — Deflection measures whether self-service content reduces human-handled tickets; sustained improvement validates knowledge investment and lowers support cost.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` with `u_knowledge_used`, `resolved_by_knowledge`, or related task flags; `sourcetype=\"snow:kb_use\"` if you ingest portal analytics separately. **App/TA** (typical add-on context): `Splunk_TA_snow`, Knowledge Management KPIs (custom fields). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **deflected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1w** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **deflect_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Knowledge Article Deflection Rate Trending**): trendline sma8(deflect_pct) as deflect_trend\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• `eval` defines or adjusts **delta_vs_avg** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (deflection % and trend), Bar chart (deflected count by category), Table (top knowledge articles linked from incidents).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend how much help content keeps tickets away, so we can invest in writing where it pays off.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.5.4",
              "n": "MTTR by Priority Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Priority-weighted resolution time shows whether urgent work truly gets faster attention; quarterly trends highlight chronic bottlenecks for P1/P2 paths.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:incident\"` (`opened_at`, `closed_at`, `priority`); `sourcetype=jira:*` with `created`/`resolved`",
              "q": "index=itsm sourcetype=\"snow:incident\" earliest=-365d@d\n| where lower(state) IN (\"closed\",\"resolved\",\"6\",\"7\")\n| eval open_ts=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval close_ts=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| eval mttr_hr=round((close_ts-open_ts)/3600,2)\n| where mttr_hr>=0 AND isnotnull(priority)\n| eval _time=close_ts\n| bin _time span=1q\n| stats avg(mttr_hr) as mttr_hr by priority, _time\n| sort _time priority",
              "m": "Confirm timestamp formats and business-pause fields; subtract `business_wait` or `on_hold_duration` if extracted. Normalize priority text (`1 - Critical` → `P1`). Use `close_ts` as the event time so each resolution lands in the correct quarter. For a single “all priorities” reference line, add a second panel with `stats avg(mttr_hr) by _time` without splitting by `priority`. Pair with staffing and major-incident dashboards when P1 MTTR regresses.",
              "z": "Line chart (MTTR hours by priority over quarters), Heatmap (priority × quarter), Optional overlay (overall MTTR from a companion search).",
              "kfp": "MTTR and response drifts from harder incidents, new staff, on-call experience mix, or a wider scope of what we close as done.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:incident\"` (`opened_at`, `closed_at`, `priority`); `sourcetype=jira:*` with `created`/`resolved`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfirm timestamp formats and business-pause fields; subtract `business_wait` or `on_hold_duration` if extracted. Normalize priority text (`1 - Critical` → `P1`). Use `close_ts` as the event time so each resolution lands in the correct quarter. For a single “all priorities” reference line, add a second panel with `stats avg(mttr_hr) by _time` without splitting by `priority`. Pair with staffing and major-incident dashboards when P1 MTTR regresses.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" earliest=-365d@d\n| where lower(state) IN (\"closed\",\"resolved\",\"6\",\"7\")\n| eval open_ts=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval close_ts=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| eval mttr_hr=round((close_ts-open_ts)/3600,2)\n| where mttr_hr>=0 AND isnotnull(priority)\n| eval _time=close_ts\n| bin _time span=1q\n| stats avg(mttr_hr) as mttr_hr by priority, _time\n| sort _time priority\n```\n\nUnderstanding this SPL\n\n**MTTR by Priority Trending** — Priority-weighted resolution time shows whether urgent work truly gets faster attention; quarterly trends highlight chronic bottlenecks for P1/P2 paths.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (`opened_at`, `closed_at`, `priority`); `sourcetype=jira:*` with `created`/`resolved`. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where lower(state) IN (\"closed\",\"resolved\",\"6\",\"7\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **open_ts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **close_ts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mttr_hr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mttr_hr>=0 AND isnotnull(priority)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by priority, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (MTTR hours by priority over quarters), Heatmap (priority × quarter), Optional overlay (overall MTTR from a companion search).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend fix time by how urgent work is, so we see if our promises to customers are drifting over the quarters.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.5.5",
              "n": "Escalation Rate Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Escalations consume senior time and often reflect unclear front-line tooling; a rising share of escalated incidents points to training gaps, bad routing, or unstable services.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:incident\"` (`escalation_level`, `u_escalated`, `assignment_group` history); optional `sourcetype=jira:*` with escalation labels",
              "q": "index=itsm sourcetype=\"snow:incident\" earliest=-90d@d\n| eval esc=if(escalation_level>0 OR match(lower(coalesce(u_escalated,u_escalation_flag)),\"(?i)true|yes|1\") OR match(lower(category),\"escalat\"),1,0)\n| timechart span=1w sum(esc) as escalations count as inc_total\n| eval esc_rate_pct=round(100*escalations/nullif(inc_total,0),1)\n| trendline sma6(esc_rate_pct) as esc_trend\n| eventstats median(esc_rate_pct) as baseline_med\n| eval spike=if(esc_rate_pct > baseline_med*1.25,1,0)",
              "m": "Define “escalation” consistently—for example, reassignment to a major-incident queue counts, but simple handoffs within L1 may not. Ingest assignment history if escalation is inferred from group changes. Exclude mass-test records. When `spike=1` for two weeks, trigger a problem review with service owners.",
              "z": "Line chart (escalation % of incidents with trendline), Stacked bar (escalations versus total), Table (assignment groups with highest esc_rate_pct).",
              "kfp": "Trend lines that move with seasonality, a new reporting job, a field change in the tool, or a few loud weeks in operations.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:incident\"` (`escalation_level`, `u_escalated`, `assignment_group` history); optional `sourcetype=jira:*` with escalation labels.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine “escalation” consistently—for example, reassignment to a major-incident queue counts, but simple handoffs within L1 may not. Ingest assignment history if escalation is inferred from group changes. Exclude mass-test records. When `spike=1` for two weeks, trigger a problem review with service owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" earliest=-90d@d\n| eval esc=if(escalation_level>0 OR match(lower(coalesce(u_escalated,u_escalation_flag)),\"(?i)true|yes|1\") OR match(lower(category),\"escalat\"),1,0)\n| timechart span=1w sum(esc) as escalations count as inc_total\n| eval esc_rate_pct=round(100*escalations/nullif(inc_total,0),1)\n| trendline sma6(esc_rate_pct) as esc_trend\n| eventstats median(esc_rate_pct) as baseline_med\n| eval spike=if(esc_rate_pct > baseline_med*1.25,1,0)\n```\n\nUnderstanding this SPL\n\n**Escalation Rate Trending** — Escalations consume senior time and often reflect unclear front-line tooling; a rising share of escalated incidents points to training gaps, bad routing, or unstable services.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (`escalation_level`, `u_escalated`, `assignment_group` history); optional `sourcetype=jira:*` with escalation labels. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **esc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1w** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **esc_rate_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Escalation Rate Trending**): trendline sma6(esc_rate_pct) as esc_trend\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **spike** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (escalation % of incidents with trendline), Stacked bar (escalations versus total), Table (assignment groups with highest esc_rate_pct).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend how often work jumps a level, so understaffing, noisy tools, and real sev-1s show up in one line on the page.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.5.6",
              "n": "First Response SLA Compliance Trending",
              "c": "critical",
              "f": "intermediate",
              "v": "Trending first-response compliance shows whether the desk keeps pace with contractual response SLAs before resolution work begins; sustained dips trigger ITIL service level review and staffing adjustments.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:incident\"` (`opened_at`, `first_response_time`, `responded_at`, `priority`, `closed_at`)",
              "q": "index=itsm sourcetype=\"snow:incident\" earliest=-180d@d\n| eval open_ts=coalesce(opened_at, strptime(opened_at,\"%Y-%m-%d %H:%M:%S\"))\n| eval resp_ts=coalesce(first_response_time, responded_at)\n| where isnotnull(resp_ts) AND isnotnull(open_ts)\n| eval frt_mins=(resp_ts-open_ts)/60\n| eval target_mins=case(priority=1,15, priority=2,30, priority=3,240, true(),480)\n| eval met=if(frt_mins<=target_mins,1,0)\n| eval _time=resp_ts\n| bin _time span=1w\n| stats sum(met) as met_count, count as total by _time\n| eval frt_compliance_pct=round(100*met_count/nullif(total,0),1)\n| trendline sma6(frt_compliance_pct) as frt_trend\n| sort _time",
              "m": "(1) Align `target_mins` with contractual priorities; (2) Use `close_ts` instead of `resp_ts` for `_time` if you measure compliance at closure in your KPI model; (3) Alert when `frt_compliance_pct` drops more than 8 points below the trailing 8-week average for two consecutive weeks.",
              "z": "Line chart (weekly first-response compliance % with trendline), Single value (last week compliance), Table (weekly totals).",
              "kfp": "MTTR and response drifts from harder incidents, new staff, on-call experience mix, or a wider scope of what we close as done.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928), [CIM: Ticket_Management](https://docs.splunk.com/Documentation/CIM/latest/User/Ticket_Management)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:incident\"` (`opened_at`, `first_response_time`, `responded_at`, `priority`, `closed_at`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `target_mins` with contractual priorities; (2) Use `close_ts` instead of `resp_ts` for `_time` if you measure compliance at closure in your KPI model; (3) Alert when `frt_compliance_pct` drops more than 8 points below the trailing 8-week average for two consecutive weeks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" earliest=-180d@d\n| eval open_ts=coalesce(opened_at, strptime(opened_at,\"%Y-%m-%d %H:%M:%S\"))\n| eval resp_ts=coalesce(first_response_time, responded_at)\n| where isnotnull(resp_ts) AND isnotnull(open_ts)\n| eval frt_mins=(resp_ts-open_ts)/60\n| eval target_mins=case(priority=1,15, priority=2,30, priority=3,240, true(),480)\n| eval met=if(frt_mins<=target_mins,1,0)\n| eval _time=resp_ts\n| bin _time span=1w\n| stats sum(met) as met_count, count as total by _time\n| eval frt_compliance_pct=round(100*met_count/nullif(total,0),1)\n| trendline sma6(frt_compliance_pct) as frt_trend\n| sort _time\n```\n\nUnderstanding this SPL\n\n**First Response SLA Compliance Trending** — Trending first-response compliance shows whether the desk keeps pace with contractual response SLAs before resolution work begins; sustained dips trigger ITIL service level review and staffing adjustments.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (`opened_at`, `first_response_time`, `responded_at`, `priority`, `closed_at`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **open_ts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **resp_ts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(resp_ts) AND isnotnull(open_ts)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **frt_mins** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **target_mins** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **frt_compliance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **First Response SLA Compliance Trending**): trendline sma6(frt_compliance_pct) as frt_trend\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category span=1w | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**First Response SLA Compliance Trending** — Trending first-response compliance shows whether the desk keeps pace with contractual response SLAs before resolution work begins; sustained dips trigger ITIL service level review and staffing adjustments.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (`opened_at`, `first_response_time`, `responded_at`, `priority`, `closed_at`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Ticket_Management.All_Ticket_Management` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
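As one illustrative alert sketch (the 90% target, the one-week window, and the schedule are placeholders, not part of this catalog entry), the weekly compliance calculation can be condensed into a scheduled search that returns rows only when the latest full week misses target:\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" earliest=-8d@d latest=-1d@d\n| eval open_ts=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval resp_ts=strptime(coalesce(first_response_time, responded_at),\"%Y-%m-%d %H:%M:%S\")\n| where isnotnull(resp_ts) AND isnotnull(open_ts)\n| eval target_mins=case(priority=1,15, priority=2,30, priority=3,240, true(),480)\n| eval met=if((resp_ts-open_ts)/60<=target_mins,1,0)\n| stats sum(met) as met_count, count as total\n| eval frt_compliance_pct=round(100*met_count/nullif(total,0),1)\n| where frt_compliance_pct < 90\n```\n\nTrigger on \"number of results > 0\" and route the alert to whoever owns the response SLA. 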
Consider visualizations: Line chart (weekly first-response compliance % with trendline), Single value (last week compliance), Table (weekly totals).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend whether we answer inside the time we promised, so first-touch habits stay in line with the contract and the story we tell the business.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "a": [
                "Ticket_Management"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category span=1w | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.5.7",
              "n": "Incident Channel Mix Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Channel mix (portal, phone, chat, email) influences staffing models and self-service ROI; trending aligns workforce management with ITIL service desk strategy and digital adoption goals.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:incident\"` (`contact_type`, `channel`, `u_channel`, `opened_at`)",
              "q": "index=itsm sourcetype=\"snow:incident\" earliest=-180d@d\n| eval ch=lower(coalesce(contact_type, channel, u_channel, \"unknown\"))\n| eval _time=coalesce(opened_at, _time)\n| bin _time span=1w\n| stats count by _time, ch\n| eventstats sum(count) as week_total by _time\n| eval share_pct=round(100*count/nullif(week_total,0),1)\n| sort _time ch",
              "m": "(1) Normalize `ch` with a lookup for legacy values; (2) Exclude automated monitoring tickets via `caller_id` or `category` filters; (3) Review rising phone share weekly with service owners when portal adoption programs are active.",
              "z": "Stacked area chart (channel share % over weeks), Line chart (portal % trend), Table (raw counts by channel).",
              "kfp": "Shifts in how people reach us after portal, chat, and phone rollouts, or when a big outage drives everyone to one channel.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:incident\"` (`contact_type`, `channel`, `u_channel`, `opened_at`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize `ch` with a lookup for legacy values; (2) Exclude automated monitoring tickets via `caller_id` or `category` filters; (3) Review rising phone share weekly with service owners when portal adoption programs are active.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" earliest=-180d@d\n| eval ch=lower(coalesce(contact_type, channel, u_channel, \"unknown\"))\n| eval _time=coalesce(opened_at, _time)\n| bin _time span=1w\n| stats count by _time, ch\n| eventstats sum(count) as week_total by _time\n| eval share_pct=round(100*count/nullif(week_total,0),1)\n| sort _time ch\n```\n\nUnderstanding this SPL\n\n**Incident Channel Mix Trending** — Channel mix (portal, phone, chat, email) influences staffing models and self-service ROI; trending aligns workforce management with ITIL service desk strategy and digital adoption goals.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (`contact_type`, `channel`, `u_channel`, `opened_at`). **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, ch** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **share_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area chart (channel share % over weeks), Line chart (portal % trend), Table (raw counts by channel).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend how people reach us by phone, portal, and chat, so we staff and train for the way the world really contacts us.",
              "mtype": [
                "Capacity",
                "Operations"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "16.5.8",
              "n": "Open Incident WIP Trending by Priority",
              "c": "high",
              "f": "intermediate",
              "v": "Work-in-progress counts by priority show whether urgent queues are accumulating independent of new volume; trending WIP supports ITIL capacity management for the service desk and early escalation to leadership.",
              "t": "`Splunk_TA_snow`",
              "d": "`index=itsm` `sourcetype=\"snow:backlog_daily\"` (`snapshot_date`, `priority`, `open_count`) from a nightly ServiceNow export or Splunk summary; optional parity with UC-16.5.1 snapshot pipeline",
              "q": "index=itsm sourcetype=\"snow:backlog_daily\" earliest=-90d@d\n| eval _time=strptime(snapshot_date,\"%Y-%m-%d\")\n| eval pri=case(priority=1 OR lower(priority)=\"p1\",\"P1\", priority=2 OR lower(priority)=\"p2\",\"P2\", priority=3 OR lower(priority)=\"p3\",\"P3\", priority=4 OR lower(priority)=\"p4\",\"P4\", true(),\"Other\")\n| timechart span=1d sum(open_count) by pri\n| fillnull value=0",
              "m": "(1) Publish a daily aggregate with `snapshot_date`, `priority`, and `open_count` (open incidents at end of day) to match UC-16.5.1; (2) Normalize `priority` labels from ServiceNow display values; (3) Alert when P1 or P2 `open_count` exceeds a static threshold or 2× the trailing 30-day median for three consecutive snapshots.",
              "z": "Multi-series line chart (P1/P2/P3 WIP per day), Single value (latest P1 WIP), Table (daily snapshot rows).",
              "kfp": "Backlog and work-in-process swings during freezes, year-end code lock, or a slow vendor for many tickets at once.",
              "refs": "[Splunk_TA_snow](https://splunkbase.splunk.com/app/1928), [CIM: Ticket_Management](https://docs.splunk.com/Documentation/CIM/latest/User/Ticket_Management)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_snow`.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:backlog_daily\"` (`snapshot_date`, `priority`, `open_count`) from a nightly ServiceNow export or Splunk summary; optional parity with UC-16.5.1 snapshot pipeline.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Publish a daily aggregate with `snapshot_date`, `priority`, and `open_count` (open incidents at end of day) to match UC-16.5.1; (2) Normalize `priority` labels from ServiceNow display values; (3) Alert when P1 or P2 `open_count` exceeds a static threshold or 2× the trailing 30-day median for three consecutive snapshots.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:backlog_daily\" earliest=-90d@d\n| eval _time=strptime(snapshot_date,\"%Y-%m-%d\")\n| eval pri=case(priority=1 OR lower(priority)=\"p1\",\"P1\", priority=2 OR lower(priority)=\"p2\",\"P2\", priority=3 OR lower(priority)=\"p3\",\"P3\", priority=4 OR lower(priority)=\"p4\",\"P4\", true(),\"Other\")\n| timechart span=1d sum(open_count) by pri\n| fillnull value=0\n```\n\nUnderstanding this SPL\n\n**Open Incident WIP Trending by Priority** — Work-in-progress counts by priority show whether urgent queues are accumulating independent of new volume; trending WIP supports ITIL capacity management for the service desk and early escalation to leadership.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:backlog_daily\"` (`snapshot_date`, `priority`, `open_count`) from a nightly ServiceNow export or Splunk summary; optional parity with UC-16.5.1 snapshot pipeline. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:backlog_daily. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:backlog_daily\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **pri** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by pri** — ideal for trending and alerting on this use case.\n• Fills null values with `fillnull`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Open Incident WIP Trending by Priority** — Work-in-progress counts by priority show whether urgent queues are accumulating independent of new volume; trending WIP supports ITIL capacity management for the service desk and early escalation to leadership.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:backlog_daily\"` (`snapshot_date`, `priority`, `open_count`) from a nightly ServiceNow export or Splunk summary; optional parity with UC-16.5.1 snapshot pipeline. **App/TA** (typical add-on context): `Splunk_TA_snow`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Ticket_Management.All_Ticket_Management` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Multi-series line chart (P1/P2/P3 WIP per day), Single value (latest P1 WIP), Table (daily snapshot rows).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We trend how much in-flight work we carry by how urgent it is, so the queue does not grow quietly until we miss every promise.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "Ticket_Management"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category span=1d | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for ServiceNow",
                "id": 1767,
                "url": "https://splunkbase.splunk.com/app/1767"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 8,
            "none": 0
          }
        }
      ],
      "i": 16,
      "n": "Service Management & ITSM",
      "src": "cat-16-service-management-itsm.md"
    },
    {
      "s": [
        {
          "i": "17.1",
          "n": "Network Access Control (NAC)",
          "u": [
            {
              "i": "17.1.1",
              "n": "NAC Authentication Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Authentication success/failure trends reveal infrastructure issues (certificate problems, RADIUS outages) and security events.",
              "t": "`Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog",
              "d": "RADIUS/ISE authentication logs",
              "q": "index=nac sourcetype=\"cisco:ise:auth\"\n| eval status=if(match(message,\"PASS\"),\"success\",\"failure\")\n| timechart span=1h count by status",
              "m": "Forward ISE PSN syslog (TCP/UDP 514) to a Heavy Forwarder. Configure `props.conf`/`transforms.conf` for field extraction of `AuthenticationPolicy`, `Passed/Failed`, `Calling-Station-ID`, and `NAS-IP-Address`. Alert on failure-rate spikes vs the 7-day same-hour baseline, segmented by `nas_ip` or `location`, to distinguish localized AP/switch issues from systemic RADIUS problems.",
              "z": "Line chart (auth success/failure rates), Bar chart (failures by location), Pie chart (auth method distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog.\n• Ensure the following data sources are available: RADIUS/ISE authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward ISE PSN syslog (TCP/UDP 514) to a Heavy Forwarder. Configure `props.conf`/`transforms.conf` for field extraction of `AuthenticationPolicy`, `Passed/Failed`, `Calling-Station-ID`, and `NAS-IP-Address`. Alert on failure-rate spikes vs the 7-day same-hour baseline, segmented by `nas_ip` or `location`, to distinguish localized AP/switch issues from systemic RADIUS problems.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:auth\"\n| eval status=if(match(message,\"PASS\"),\"success\",\"failure\")\n| timechart span=1h count by status\n```\n\nUnderstanding this SPL\n\n**NAC Authentication Trending** — Authentication success/failure trends reveal infrastructure issues (certificate problems, RADIUS outages) and security events.\n\nDocumented **Data sources**: RADIUS/ISE authentication logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:auth. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:auth\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by status** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.action Authentication.src span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NAC Authentication Trending** — Authentication success/failure trends reveal infrastructure issues (certificate problems, RADIUS outages) and security events.\n\nDocumented **Data sources**: RADIUS/ISE authentication logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (auth success/failure rates), Bar chart (failures by location), Pie chart (auth method distribution).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE 3515/3595/3615/3655/3695, ISE Virtual Appliance; HPE Aruba ClearPass C1000/C2000/C3000, ClearPass Virtual Appliance; Forescout CounterACT CT-xxxx, eyeExtend",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.action Authentication.src span=1h\n| sort -count",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.2",
              "n": "Endpoint Posture Failures",
              "c": "high",
              "f": "beginner",
              "v": "Non-compliant endpoints accessing the network pose security risks. Posture tracking ensures endpoint hygiene enforcement.",
              "t": "`Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog",
              "d": "ISE posture assessment logs",
              "q": "index=nac sourcetype=\"cisco:ise:posture\"\n| where posture_status=\"NonCompliant\"\n| stats count by endpoint_mac, posture_policy, failure_reason\n| sort -count",
              "m": "Ingest ISE posture assessment results. Track compliance rates per policy (AV status, patch level, disk encryption). Alert on critical endpoints failing posture (exec laptops, admin workstations). Report on remediation effectiveness.",
              "z": "Pie chart (compliant vs non-compliant), Bar chart (failure reasons), Table (non-compliant endpoints), Line chart (compliance trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog.\n• Ensure the following data sources are available: ISE posture assessment logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest ISE posture assessment results. Track compliance rates per policy (AV status, patch level, disk encryption). Alert on critical endpoints failing posture (exec laptops, admin workstations). Report on remediation effectiveness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:posture\"\n| where posture_status=\"NonCompliant\"\n| stats count by endpoint_mac, posture_policy, failure_reason\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Endpoint Posture Failures** — Non-compliant endpoints accessing the network pose security risks. Posture tracking ensures endpoint hygiene enforcement.\n\nDocumented **Data sources**: ISE posture assessment logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:posture. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:posture\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where posture_status=\"NonCompliant\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by endpoint_mac, posture_policy, failure_reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"failure\" AND match(Authentication.vendor_action, \"(?i)posture|compliance\")\n  by Authentication.src Authentication.signature span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Endpoint Posture Failures** — Non-compliant endpoints accessing the network pose security risks. Posture tracking ensures endpoint hygiene enforcement.\n\nDocumented **Data sources**: ISE posture assessment logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (compliant vs non-compliant), Bar chart (failure reasons), Table (non-compliant endpoints), Line chart (compliance trend).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE 3515/3595/3615/3655/3695, ISE Virtual Appliance; HPE Aruba ClearPass C1000/C2000/C3000, ClearPass Virtual Appliance; Forescout CounterACT CT-xxxx, eyeExtend",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "wv": "crawl",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"failure\" AND match(Authentication.vendor_action, \"(?i)posture|compliance\")\n  by Authentication.src Authentication.signature span=1h\n| sort -count",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "security",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "17.1.3",
              "n": "VLAN Assignment Audit",
              "c": "medium",
              "f": "beginner",
              "v": "Dynamic VLAN assignments reflect authorization decisions. Anomalous placements may indicate policy misconfiguration or attacks.",
              "t": "`Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog",
              "d": "ISE authorization logs (VLAN assignment)",
              "q": "index=nac sourcetype=\"cisco:ise:auth\"\n| where assigned_vlan!=expected_vlan\n| table _time, endpoint_mac, username, assigned_vlan, expected_vlan, authorization_policy",
              "m": "Track VLAN assignments per endpoint. Maintain expected VLAN lookup by user role/device type. Alert on unexpected VLAN placements. Audit authorization policy effectiveness.",
              "z": "Table (VLAN assignments), Pie chart (assignments by VLAN), Bar chart (unexpected placements).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [CIM: Network_Sessions](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Sessions)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog.\n• Ensure the following data sources are available: ISE authorization logs (VLAN assignment).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack VLAN assignments per endpoint. Maintain expected VLAN lookup by user role/device type. Alert on unexpected VLAN placements. Audit authorization policy effectiveness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:auth\"\n| where assigned_vlan!=expected_vlan\n| table _time, endpoint_mac, username, assigned_vlan, expected_vlan, authorization_policy\n```\n\nUnderstanding this SPL\n\n**VLAN Assignment Audit** — Dynamic VLAN assignments reflect authorization decisions. Anomalous placements may indicate policy misconfiguration or attacks.\n\nDocumented **Data sources**: ISE authorization logs (VLAN assignment). **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:auth. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:auth\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where assigned_vlan!=expected_vlan` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VLAN Assignment Audit**): table _time, endpoint_mac, username, assigned_vlan, expected_vlan, authorization_policy\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.user All_Sessions.src All_Sessions.dest span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VLAN Assignment Audit** — Dynamic VLAN assignments reflect authorization decisions. Anomalous placements may indicate policy misconfiguration or attacks.\n\nDocumented **Data sources**: ISE authorization logs (VLAN assignment). **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (VLAN assignments), Pie chart (assignments by VLAN), Bar chart (unexpected placements).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE 3515/3595/3615/3655/3695, ISE Virtual Appliance; HPE Aruba ClearPass C1000/C2000/C3000, ClearPass Virtual Appliance; Forescout CounterACT CT-xxxx, eyeExtend",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Configuration"
              ],
              "a": [
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.user All_Sessions.src All_Sessions.dest span=1h\n| sort -count",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.4",
              "n": "Guest Network Usage",
              "c": "medium",
              "f": "beginner",
              "v": "Guest network monitoring ensures acceptable use and identifies capacity needs. Unusual patterns may indicate abuse.",
              "t": "`Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog",
              "d": "ISE guest portal logs, RADIUS accounting",
              "q": "index=nac sourcetype=\"cisco:ise:guest\"\n| stats count, sum(session_duration_min) as total_min by sponsor, guest_type\n| sort -count",
              "m": "Track guest portal registrations, sponsor activity, and session durations. Alert on excessive guest registrations from single sponsors. Monitor guest bandwidth usage. Report on guest network utilization.",
              "z": "Bar chart (guest registrations by sponsor), Line chart (guest sessions trend), Table (active guests).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [CIM: Network_Sessions](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Sessions)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog.\n• Ensure the following data sources are available: ISE guest portal logs, RADIUS accounting.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack guest portal registrations, sponsor activity, and session durations. Alert on excessive guest registrations from single sponsors. Monitor guest bandwidth usage. Report on guest network utilization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:guest\"\n| stats count, sum(session_duration_min) as total_min by sponsor, guest_type\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Guest Network Usage** — Guest network monitoring ensures acceptable use and identifies capacity needs. Unusual patterns may indicate abuse.\n\nDocumented **Data sources**: ISE guest portal logs, RADIUS accounting. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:guest. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:guest\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sponsor, guest_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count sum(All_Sessions.bytes_in) as bytes_in sum(All_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.user All_Sessions.src span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Guest Network Usage** — Guest network monitoring ensures acceptable use and identifies capacity needs. Unusual patterns may indicate abuse.\n\nDocumented **Data sources**: ISE guest portal logs, RADIUS accounting. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (guest registrations by sponsor), Line chart (guest sessions trend), Table (active guests).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE 3515/3595/3615/3655/3695, ISE Virtual Appliance; HPE Aruba ClearPass C1000/C2000/C3000, ClearPass Virtual Appliance; Forescout CounterACT CT-xxxx, eyeExtend",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count sum(All_Sessions.bytes_in) as bytes_in sum(All_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.user All_Sessions.src span=1h\n| sort -count",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.5",
              "n": "BYOD Onboarding Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "BYOD onboarding metrics inform mobile device management strategy and user experience optimization.",
              "t": "`Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog",
              "d": "ISE BYOD portal logs, certificate provisioning",
              "q": "index=nac sourcetype=\"cisco:ise:byod\"\n| stats count by device_type, os_type, onboarding_status\n| sort -count",
              "m": "Track BYOD registrations, device types, and onboarding success/failure rates. Alert on onboarding failures. Report on device type distribution for MDM policy planning.",
              "z": "Pie chart (device types), Bar chart (onboarding status), Line chart (BYOD enrollment trend).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog.\n• Ensure the following data sources are available: ISE BYOD portal logs, certificate provisioning.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack BYOD registrations, device types, and onboarding success/failure rates. Alert on onboarding failures. Report on device type distribution for MDM policy planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:byod\"\n| stats count by device_type, os_type, onboarding_status\n| sort -count\n```\n\nUnderstanding this SPL\n\n**BYOD Onboarding Tracking** — BYOD onboarding metrics inform mobile device management strategy and user experience optimization.\n\nDocumented **Data sources**: ISE BYOD portal logs, certificate provisioning. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:byod. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:byod\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by device_type, os_type, onboarding_status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\"\n  by Authentication.user Authentication.src Authentication.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**BYOD Onboarding Tracking** — BYOD onboarding metrics inform mobile device management strategy and user experience optimization.\n\nDocumented **Data sources**: ISE BYOD portal logs, certificate provisioning. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (device types), Bar chart (onboarding status), Line chart (BYOD enrollment trend).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE 3515/3595/3615/3655/3695, ISE Virtual Appliance; HPE Aruba ClearPass C1000/C2000/C3000, ClearPass Virtual Appliance; Forescout CounterACT CT-xxxx, eyeExtend",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\"\n  by Authentication.user Authentication.src Authentication.app span=1h\n| sort -count",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.6",
              "n": "MAC Authentication Bypass (MAB)",
              "c": "high",
              "f": "intermediate",
              "v": "MAB devices bypass 802.1X and rely on MAC address only. Monitoring for unauthorized MACs prevents rogue device access.",
              "t": "`Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog",
              "d": "ISE MAB authentication logs",
              "q": "index=nac sourcetype=\"cisco:ise:auth\" auth_method=\"MAB\"\n| lookup approved_mab_devices.csv mac_address OUTPUT device_description, approved\n| where isnull(approved) OR approved!=\"Yes\"\n| table _time, endpoint_mac, switch, port, location",
              "m": "Maintain whitelist of approved MAB devices (printers, IP phones, IoT). Alert on unknown MAC addresses authenticating via MAB. Track MAB device population. Report on MAB vs 802.1X ratio for security posture.",
              "z": "Table (unapproved MAB devices), Pie chart (MAB vs 802.1X), Bar chart (MAB by device type).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1200"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog.\n• Ensure the following data sources are available: ISE MAB authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain whitelist of approved MAB devices (printers, IP phones, IoT). Alert on unknown MAC addresses authenticating via MAB. Track MAB device population. Report on MAB vs 802.1X ratio for security posture.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:auth\" auth_method=\"MAB\"\n| lookup approved_mab_devices.csv mac_address OUTPUT device_description, approved\n| where isnull(approved) OR approved!=\"Yes\"\n| table _time, endpoint_mac, switch, port, location\n```\n\nUnderstanding this SPL\n\n**MAC Authentication Bypass (MAB)** — MAB devices bypass 802.1X and rely on MAC address only. Monitoring for unauthorized MACs prevents rogue device access.\n\nDocumented **Data sources**: ISE MAB authentication logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:auth. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:auth\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved) OR approved!=\"Yes\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MAC Authentication Bypass (MAB)**): table _time, endpoint_mac, switch, port, location\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where match(Authentication.authentication_method, \"(?i)MAB|mac\")\n  by Authentication.src Authentication.dest Authentication.action span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**MAC Authentication Bypass (MAB)** — MAB devices bypass 802.1X and rely on MAC address only. Monitoring for unauthorized MACs prevents rogue device access.\n\nDocumented **Data sources**: ISE MAB authentication logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unapproved MAB devices), Pie chart (MAB vs 802.1X), Bar chart (MAB by device type).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE 3515/3595/3615/3655/3695, ISE Virtual Appliance; HPE Aruba ClearPass C1000/C2000/C3000, ClearPass Virtual Appliance; Forescout CounterACT CT-xxxx, eyeExtend",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where match(Authentication.authentication_method, \"(?i)MAB|mac\")\n  by Authentication.src Authentication.dest Authentication.action span=1h\n| sort -count",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.7",
              "n": "Profiling Accuracy",
              "c": "medium",
              "f": "beginner",
              "v": "Accurate device profiling enables correct authorization policies. Misprofiled devices may get inappropriate access.",
              "t": "`Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog",
              "d": "ISE profiler logs, re-profiling events",
              "q": "index=nac sourcetype=\"cisco:ise:profiler\"\n| search \"re-profiled\" OR \"profile changed\"\n| stats count by endpoint_mac, old_profile, new_profile\n| sort -count",
              "m": "Monitor profiling events and profile changes. Track devices that are frequently re-profiled (indicates ambiguous profiling rules). Validate profiling accuracy against known device inventory. Tune profiling policies.",
              "z": "Table (profiling changes), Sankey diagram (old→new profiles), Bar chart (re-profiling frequency).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog.\n• Ensure the following data sources are available: ISE profiler logs, re-profiling events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor profiling events and profile changes. Track devices that are frequently re-profiled (indicates ambiguous profiling rules). Validate profiling accuracy against known device inventory. Tune profiling policies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:profiler\"\n| search \"re-profiled\" OR \"profile changed\"\n| stats count by endpoint_mac, old_profile, new_profile\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Profiling Accuracy** — Accurate device profiling enables correct authorization policies. Misprofiled devices may get inappropriate access.\n\nDocumented **Data sources**: ISE profiler logs, re-profiling events. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:profiler. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:profiler\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by endpoint_mac, old_profile, new_profile** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (profiling changes), Sankey diagram (old→new profiles), Bar chart (re-profiling frequency).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE 3515/3595/3615/3655/3695, ISE Virtual Appliance; HPE Aruba ClearPass C1000/C2000/C3000, ClearPass Virtual Appliance; Forescout CounterACT CT-xxxx, eyeExtend",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.8",
              "n": "NAC Policy Change Audit",
              "c": "high",
              "f": "beginner",
              "v": "NAC policy changes affect network access for all devices. Unauthorized changes can create security gaps or disrupt access.",
              "t": "`Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog",
              "d": "ISE admin audit logs",
              "q": "index=nac sourcetype=\"cisco:ise:admin\"\n| search \"PolicySet\" OR \"AuthorizationRule\" OR \"AuthenticationRule\"\n| table _time, admin_user, action, object_name, details",
              "m": "Forward ISE admin audit logs. Alert on any policy change. Track changes by administrator. Correlate with change management tickets. Report on policy change frequency.",
              "z": "Table (policy changes), Timeline (change events), Bar chart (changes by admin).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1562.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog.\n• Ensure the following data sources are available: ISE admin audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward ISE admin audit logs. Alert on any policy change. Track changes by administrator. Correlate with change management tickets. Report on policy change frequency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:admin\"\n| search \"PolicySet\" OR \"AuthorizationRule\" OR \"AuthenticationRule\"\n| table _time, admin_user, action, object_name, details\n```\n\nUnderstanding this SPL\n\n**NAC Policy Change Audit** — NAC policy changes affect network access for all devices. Unauthorized changes can create security gaps or disrupt access.\n\nDocumented **Data sources**: ISE admin audit logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:admin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:admin\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **NAC Policy Change Audit**): table _time, admin_user, action, object_name, details\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.object, \"(?i)ISE|NAC|policy\")\n  by All_Changes.user All_Changes.object span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NAC Policy Change Audit** — NAC policy changes affect network access for all devices. Unauthorized changes can create security gaps or disrupt access.\n\nDocumented **Data sources**: ISE admin audit logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (policy changes), Timeline (change events), Bar chart (changes by admin).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE 3515/3595/3615/3655/3695, ISE Virtual Appliance; HPE Aruba ClearPass C1000/C2000/C3000, ClearPass Virtual Appliance; Forescout CounterACT CT-xxxx, eyeExtend",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  where match(All_Changes.object, \"(?i)ISE|NAC|policy\")\n  by All_Changes.user All_Changes.object span=1h\n| sort -count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "Cyber Essentials",
                  "v": "Montpellier (2025)",
                  "cl": "CE.BF.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that Cyber Essentials CE.BF.1 (Boundary firewalls) is enforced — Splunk UC-17.1.8: NAC Policy Change Audit.",
                  "ea": "Saved search 'UC-17.1.8' running on ISE admin audit logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ncsc.gov.uk/cyberessentials/overview"
                },
                {
                  "r": "FedRAMP",
                  "v": "Rev.5 Baselines",
                  "cl": "AC-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that FedRAMP AC-2 (Account management) is enforced — Splunk UC-17.1.8: NAC Policy Change Audit.",
                  "ea": "Saved search 'UC-17.1.8' running on ISE admin audit logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.fedramp.gov/baselines/"
                },
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "FR 6.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that IEC 62443 FR 6.2 (Continuous monitoring) is enforced — Splunk UC-17.1.8: NAC Policy Change Audit.",
                  "ea": "Saved search 'UC-17.1.8' running on ISE admin audit logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.isa.org/standards-and-publications/isa-standards/isa-iec-62443-series-of-standards"
                }
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "security",
              "_qs": 90,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison"
              ]
            },
            {
              "i": "17.1.9",
              "n": "802.1X Supplicant Timeout Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Clients timing out during RADIUS authentication (common issue with Wi-Fi and wired NAC). Tracking timeouts helps identify supplicant misconfiguration, certificate issues, or network latency affecting authentication.",
              "t": "`Splunk_TA_cisco-ise`, `Splunk_TA_windows` (NPS), FreeRADIUS syslog, `TA-cisco_ios`, HPE Aruba CX syslog, RADIUS TA, NAS syslog",
              "d": "RADIUS server logs (FreeRADIUS, NPS, ISE), switch/WLC syslog (dot1x events)",
              "q": "index=nac (sourcetype=\"cisco:ise:auth\" OR sourcetype=\"radius:auth\" OR sourcetype=\"freeradius\")\n| search \"timeout\" OR \"timed out\" OR \"EAP timeout\" OR \"supplicant\" OR \"no response\"\n| rex field=_raw \"Calling-Station-Id=(?<mac>[^\\s]+)\"\n| rex field=_raw \"NAS-Identifier=(?<nas>[^\\s,]+)\"\n| bin _time span=1h\n| stats count by mac, nas, _time\n| where count > 3\n| sort -count",
              "m": "Forward RADIUS authentication logs and NAS (switch/WLC) syslog to Splunk. Configure sourcetypes for FreeRADIUS, NPS, or ISE. Extract Calling-Station-Id (MAC), NAS-Identifier, and timeout-related messages. Alert when timeout count exceeds 5 per NAS per hour. Correlate with switch port and SSID to identify problematic segments. Report on timeout trends by location and time of day.",
              "z": "Line chart (timeout rate over time), Bar chart (timeouts by NAS/switch), Table (affected MACs and NAS), Heatmap (NAS × hour of day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [Splunk_TA_windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, `Splunk_TA_windows` (NPS), FreeRADIUS syslog, `TA-cisco_ios`, HPE Aruba CX syslog, RADIUS TA, NAS syslog.\n• Ensure the following data sources are available: RADIUS server logs (FreeRADIUS, NPS, ISE), switch/WLC syslog (dot1x events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward RADIUS authentication logs and NAS (switch/WLC) syslog to Splunk. Configure sourcetypes for FreeRADIUS, NPS, or ISE. Extract Calling-Station-Id (MAC), NAS-Identifier, and timeout-related messages. Alert when timeout count exceeds 5 per NAS per hour. Correlate with switch port and SSID to identify problematic segments. Report on timeout trends by location and time of day.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac (sourcetype=\"cisco:ise:auth\" OR sourcetype=\"radius:auth\" OR sourcetype=\"freeradius\")\n| search \"timeout\" OR \"timed out\" OR \"EAP timeout\" OR \"supplicant\" OR \"no response\"\n| rex field=_raw \"Calling-Station-Id=(?<mac>[^\\s]+)\"\n| rex field=_raw \"NAS-Identifier=(?<nas>[^\\s,]+)\"\n| bin _time span=1h\n| stats count by mac, nas, _time\n| where count > 3\n| sort -count\n```\n\nUnderstanding this SPL\n\n**802.1X Supplicant Timeout Tracking** — Clients timing out during RADIUS authentication (common issue with Wi-Fi and wired NAC). Tracking timeouts helps identify supplicant misconfiguration, certificate issues, or network latency affecting authentication.\n\nDocumented **Data sources**: RADIUS server logs (FreeRADIUS, NPS, ISE), switch/WLC syslog (dot1x events). **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, `Splunk_TA_windows` (NPS), FreeRADIUS syslog, `TA-cisco_ios`, HPE Aruba CX syslog, RADIUS TA, NAS syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:auth, radius:auth, freeradius. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:auth\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by mac, nas, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**802.1X Supplicant Timeout Tracking** — Clients timing out during RADIUS authentication (common issue with Wi-Fi and wired NAC). Tracking timeouts helps identify supplicant misconfiguration, certificate issues, or network latency affecting authentication.\n\nDocumented **Data sources**: RADIUS server logs (FreeRADIUS, NPS, ISE), switch/WLC syslog (dot1x events). **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, `Splunk_TA_windows` (NPS), FreeRADIUS syslog, `TA-cisco_ios`, HPE Aruba CX syslog, RADIUS TA, NAS syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (timeout rate over time), Bar chart (timeouts by NAS/switch), Table (affected MACs and NAS), Heatmap (NAS × hour of day).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE, Windows NPS, FreeRADIUS, Cisco/Aruba switches and WLCs",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "cisco",
                "syslog",
                "windows"
              ],
              "em": [
                "cisco_ios",
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                },
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                },
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.10",
              "n": "RADIUS Accounting Discrepancies",
              "c": "medium",
              "f": "advanced",
              "v": "Start/stop mismatches indicating dropped sessions or potential abuse. Accounting discrepancies can hide unauthorized access, session hijacking, or billing/audit gaps.",
              "t": "`Splunk_TA_cisco-ise`, `Splunk_TA_windows` (NPS), FreeRADIUS syslog, RADIUS accounting TA",
              "d": "RADIUS accounting records (Acct-Status-Type Start/Stop/Interim-Update)",
              "q": "index=nac sourcetype=\"radius:accounting\"\n| eval acct_status=case(\n  match(_raw,\"Acct-Status-Type.*Start\"), \"Start\",\n  match(_raw,\"Acct-Status-Type.*Stop\"), \"Stop\",\n  match(_raw,\"Acct-Status-Type.*Interim\"), \"Interim\",\n  1=1, \"Unknown\")\n| rex field=_raw \"Acct-Session-Id=(?<session_id>[^\\s]+)\"\n| rex field=_raw \"User-Name=(?<user>[^\\s]+)\"\n| rex field=_raw \"NAS-IP-Address=(?<nas>[^\\s]+)\"\n| where acct_status!=\"Unknown\" AND acct_status!=\"Interim\"\n| stats count(eval(acct_status=\"Start\")) as starts, count(eval(acct_status=\"Stop\")) as stops by session_id, user, nas\n| eval discrepancy=abs(starts - stops)\n| where discrepancy > 0\n| sort -discrepancy",
              "m": "Ingest RADIUS accounting logs with Acct-Session-Id, Acct-Status-Type, User-Name, NAS-IP-Address. Build session state table: each Start should have exactly one Stop. Alert on sessions with Start but no Stop (orphaned sessions) or Stop without Start (potential replay). Report on discrepancy rate and top NAS/users. Use Interim-Update to validate long-running sessions.",
              "z": "Table (sessions with discrepancies), Bar chart (discrepancy count by NAS), Single value (orphaned sessions), Line chart (discrepancy trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [
                "T1090"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, `Splunk_TA_windows` (NPS), FreeRADIUS syslog, RADIUS accounting TA.\n• Ensure the following data sources are available: RADIUS accounting records (Acct-Status-Type Start/Stop/Interim-Update).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest RADIUS accounting logs with Acct-Session-Id, Acct-Status-Type, User-Name, NAS-IP-Address. Build session state table: each Start should have exactly one Stop. Alert on sessions with Start but no Stop (orphaned sessions) or Stop without Start (potential replay). Report on discrepancy rate and top NAS/users. Use Interim-Update to validate long-running sessions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"radius:accounting\"\n| eval acct_status=case(\n  match(_raw,\"Acct-Status-Type.*Start\"), \"Start\",\n  match(_raw,\"Acct-Status-Type.*Stop\"), \"Stop\",\n  match(_raw,\"Acct-Status-Type.*Interim\"), \"Interim\",\n  1=1, \"Unknown\")\n| rex field=_raw \"Acct-Session-Id=(?<session_id>[^\\s]+)\"\n| rex field=_raw \"User-Name=(?<user>[^\\s]+)\"\n| rex field=_raw \"NAS-IP-Address=(?<nas>[^\\s]+)\"\n| where acct_status!=\"Unknown\" AND acct_status!=\"Interim\"\n| stats count(eval(acct_status=\"Start\")) as starts, count(eval(acct_status=\"Stop\")) as stops by session_id, user, nas\n| eval discrepancy=abs(starts - stops)\n| where discrepancy > 0\n| sort -discrepancy\n```\n\nUnderstanding this SPL\n\n**RADIUS Accounting Discrepancies** — Start/stop mismatches indicating dropped sessions or potential abuse. Accounting discrepancies can hide unauthorized access, session hijacking, or billing/audit gaps.\n\nDocumented **Data sources**: RADIUS accounting records (Acct-Status-Type Start/Stop/Interim-Update). **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, `Splunk_TA_windows` (NPS), FreeRADIUS syslog, RADIUS accounting TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: radius:accounting. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"radius:accounting\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **acct_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where acct_status!=\"Unknown\" AND acct_status!=\"Interim\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by session_id, user, nas** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **discrepancy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where discrepancy > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sessions with discrepancies), Bar chart (discrepancy count by NAS), Single value (orphaned sessions), Line chart (discrepancy trend).",
              "script": "",
              "premium": "",
              "hw": "Cisco ISE, Windows NPS, FreeRADIUS",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog",
                "windows"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                },
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.11",
              "n": "Posture Assessment Failure Trends",
              "c": "high",
              "f": "beginner",
              "v": "Time-series view of posture failures by policy and reason — distinguishes one-off issues from worsening fleet hygiene or a bad policy rollout.",
              "t": "Splunk_TA_cisco-ise",
              "d": "ISE posture assessment logs (`cisco:ise:posture`)",
              "q": "index=nac sourcetype=\"cisco:ise:posture\" earliest=-30d\n| where posture_status=\"NonCompliant\"\n| timechart span=1d count by failure_reason",
              "m": "Normalize `failure_reason` from ISE. Alert when daily failures exceed 7-day baseline by >50%. Segment by AD group or location if extracted.",
              "z": "Line chart (failures over time by reason), Stacked area (failures by policy), Single value (failures vs prior week).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_cisco-ise.\n• Ensure the following data sources are available: ISE posture assessment logs (`cisco:ise:posture`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize `failure_reason` from ISE. Alert when daily failures exceed 7-day baseline by >50%. Segment by AD group or location if extracted.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:posture\" earliest=-30d\n| where posture_status=\"NonCompliant\"\n| timechart span=1d count by failure_reason\n```\n\nUnderstanding this SPL\n\n**Posture Assessment Failure Trends** — Time-series view of posture failures by policy and reason — distinguishes one-off issues from worsening fleet hygiene or a bad policy rollout.\n\nDocumented **Data sources**: ISE posture assessment logs (`cisco:ise:posture`). **App/TA** (typical add-on context): Splunk_TA_cisco-ise. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:posture. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:posture\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where posture_status=\"NonCompliant\"` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by failure_reason** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.src span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Posture Assessment Failure Trends** — Time-series view of posture failures by policy and reason — distinguishes one-off issues from worsening fleet hygiene or a bad policy rollout.\n\nDocumented **Data sources**: ISE posture assessment logs (`cisco:ise:posture`). **App/TA** (typical add-on context): Splunk_TA_cisco-ise. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failures over time by reason), Stacked area (failures by policy), Single value (failures vs prior week).",
              "script": "",
              "premium": "",
              "hw": "Cisco ISE 3515–3695, ISE Virtual Appliance",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.src span=1d",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.12",
              "n": "Rogue Device Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Identifies MACs or devices that authenticate or probe but are not in the corporate device inventory — common NAC use case for unauthorized hardware.",
              "t": "`Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog",
              "d": "ISE authentication logs, profiling",
              "q": "index=nac sourcetype=\"cisco:ise:auth\" earliest=-24h\n| eval mac=upper(replace(endpoint_mac,\":\",\"-\"))\n| lookup corp_device_inventory.csv mac OUTPUT asset_tag\n| where isnull(asset_tag) AND match(auth_status,\"(?i)success|pass\")\n| stats count by mac, switch, location\n| where count>=3\n| sort -count",
              "m": "Maintain `corp_device_inventory.csv` from MDM/CMDB (MAC, owner). Tune minimum event count to reduce noise. Correlate with port-security and DHCP snooping if available.",
              "z": "Table (unknown MACs), Bar chart (rogue events by site), Timeline (first seen).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1200"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog.\n• Ensure the following data sources are available: ISE authentication logs, profiling.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain `corp_device_inventory.csv` from MDM/CMDB (MAC, owner). Tune minimum event count to reduce noise. Correlate with port-security and DHCP snooping if available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:auth\" earliest=-24h\n| eval mac=upper(replace(endpoint_mac,\":\",\"-\"))\n| lookup corp_device_inventory.csv mac OUTPUT asset_tag\n| where isnull(asset_tag) AND match(auth_status,\"(?i)success|pass\")\n| stats count by mac, switch, location\n| where count>=3\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Rogue Device Detection** — Identifies MACs or devices that authenticate or probe but are not in the corporate device inventory — common NAC use case for unauthorized hardware.\n\nDocumented **Data sources**: ISE authentication logs, profiling. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:auth. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:auth\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mac** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(asset_tag) AND match(auth_status,\"(?i)success|pass\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by mac, switch, location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>=3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.src span=1h\n| where count > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Rogue Device Detection** — Identifies MACs or devices that authenticate or probe but are not in the corporate device inventory — common NAC use case for unauthorized hardware.\n\nDocumented **Data sources**: ISE authentication logs, profiling. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unknown MACs), Bar chart (rogue events by site), Timeline (first seen).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE, switch/WLC syslog",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.src span=1h\n| where count > 20",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                },
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.13",
              "n": "802.1X Authentication Failure Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Breaks down 802.1X/EAP failures by method, failure reason, and NAS to pinpoint certificate rollout issues vs brute-force vs misconfigured supplicants.",
              "t": "`Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog, RADIUS TA",
              "d": "`cisco:ise:auth`, `radius:auth`",
              "q": "index=nac (sourcetype=\"cisco:ise:auth\" OR sourcetype=\"radius:auth\") earliest=-7d\n| search \"802.1X\" OR auth_method=\"EAP*\" OR eap_method=*\n| where match(lower(message),\"(?i)fail|reject|denied\")\n| stats count by eap_method, failure_reason, nas_ip\n| sort 30 -count",
              "m": "Extract `eap_method`, `failure_reason`, and `nas_ip` per your TA. Alert on spikes in a single failure bucket (e.g., TLS cert errors). Compare before/after cert updates.",
              "z": "Bar chart (failures by EAP method), Table (top NAS + reason), Line chart (daily failure rate).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1110",
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog, RADIUS TA.\n• Ensure the following data sources are available: `cisco:ise:auth`, `radius:auth`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExtract `eap_method`, `failure_reason`, and `nas_ip` per your TA. Alert on spikes in a single failure bucket (e.g., TLS cert errors). Compare before/after cert updates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac (sourcetype=\"cisco:ise:auth\" OR sourcetype=\"radius:auth\") earliest=-7d\n| search \"802.1X\" OR auth_method=\"EAP*\" OR eap_method=*\n| where match(lower(message),\"(?i)fail|reject|denied\")\n| stats count by eap_method, failure_reason, nas_ip\n| sort 30 -count\n```\n\nUnderstanding this SPL\n\n**802.1X Authentication Failure Analysis** — Breaks down 802.1X/EAP failures by method, failure reason, and NAS to pinpoint certificate rollout issues vs brute-force vs misconfigured supplicants.\n\nDocumented **Data sources**: `cisco:ise:auth`, `radius:auth`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog, RADIUS TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:auth, radius:auth. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:auth\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Filters the current rows with `where match(lower(message),\"(?i)fail|reject|denied\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by eap_method, failure_reason, nas_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.dest Authentication.src span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**802.1X Authentication Failure Analysis** — Breaks down 802.1X/EAP failures by method, failure reason, and NAS to pinpoint certificate rollout issues vs brute-force vs misconfigured supplicants.\n\nDocumented **Data sources**: `cisco:ise:auth`, `radius:auth`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog, RADIUS TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failures by EAP method), Table (top NAS + reason), Line chart (daily failure rate).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE, switches, WLCs",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Fault",
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.dest Authentication.src span=1h\n| where count > 10",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                },
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.14",
              "n": "Guest Network Abuse Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Flags excessive concurrent guest sessions, high bandwidth, or repeated sponsor abuse — beyond simple guest usage volume (UC-17.1.4).",
              "t": "`Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog, firewall logs",
              "d": "`cisco:ise:guest`, NetFlow optional",
              "q": "index=nac sourcetype=\"cisco:ise:guest\" earliest=-24h\n| stats dc(session_id) as concurrent_sessions, sum(bytes) as total_bytes by sponsor, guest_mac\n| where concurrent_sessions>5 OR total_bytes>10737418240\n| eval total_gb=round(total_bytes/1073741824,2)\n| table sponsor, guest_mac, concurrent_sessions, total_gb\n| sort -total_gb",
              "m": "Map `bytes` from ISE or join firewall `src` for guest VLAN. Thresholds per org. Alert on sponsor accounts with many parallel guests.",
              "z": "Table (abuse candidates), Bar chart (bytes by sponsor), Single value (guests over threshold).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [CIM: Network_Sessions](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Sessions)",
              "mitre": [
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog, firewall logs.\n• Ensure the following data sources are available: `cisco:ise:guest`, NetFlow optional.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `bytes` from ISE or join firewall `src` for guest VLAN. Thresholds per org. Alert on sponsor accounts with many parallel guests.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:guest\" earliest=-24h\n| stats dc(session_id) as concurrent_sessions, sum(bytes) as total_bytes by sponsor, guest_mac\n| where concurrent_sessions>5 OR total_bytes>10737418240\n| eval total_gb=round(total_bytes/1073741824,2)\n| table sponsor, guest_mac, concurrent_sessions, total_gb\n| sort -total_gb\n```\n\nUnderstanding this SPL\n\n**Guest Network Abuse Detection** — Flags excessive concurrent guest sessions, high bandwidth, or repeated sponsor abuse — beyond simple guest usage volume (UC-17.1.4).\n\nDocumented **Data sources**: `cisco:ise:guest`, NetFlow optional. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog, firewall logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:guest. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:guest\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sponsor, guest_mac** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where concurrent_sessions>5 OR total_bytes>10737418240` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **total_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Guest Network Abuse Detection**): table sponsor, guest_mac, concurrent_sessions, total_gb\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(All_Sessions.bytes_in) as bytes_in sum(All_Sessions.bytes_out) as bytes_out dc(All_Sessions.session_id) as sessions\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.user span=1h\n| eval total_bytes=bytes_in+bytes_out\n| where sessions > 5 OR total_bytes > 10737418240\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Guest Network Abuse Detection** — Flags excessive concurrent guest sessions, high bandwidth, or repeated sponsor abuse — beyond simple guest usage volume (UC-17.1.4).\n\nDocumented **Data sources**: `cisco:ise:guest`, NetFlow optional. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog, firewall logs. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• `eval` defines or adjusts **total_bytes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where sessions > 5 OR total_bytes > 10737418240` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (abuse candidates), Bar chart (bytes by sponsor), Single value (guests over threshold).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE guest, WLC",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` sum(All_Sessions.bytes_in) as bytes_in sum(All_Sessions.bytes_out) as bytes_out dc(All_Sessions.session_id) as sessions\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.user span=1h\n| eval total_bytes=bytes_in+bytes_out\n| where sessions > 5 OR total_bytes > 10737418240",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                },
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.15",
              "n": "RADIUS Accounting NAS vs Session-ID Reconciliation",
              "c": "medium",
              "f": "advanced",
              "v": "Complements UC-17.1.10 by flagging duplicate session IDs or mismatched NAS-IP between Start/Interim/Stop for the same `Acct-Session-Id` — catching replication and proxy issues.",
              "t": "`Splunk_TA_cisco-ise`, `Splunk_TA_windows` (NPS), FreeRADIUS syslog, RADIUS accounting TA",
              "d": "`sourcetype=radius:accounting`",
              "q": "index=nac sourcetype=\"radius:accounting\" earliest=-24h\n| rex field=_raw \"Acct-Session-Id=(?<session_id>[^\\s]+)\"\n| rex field=_raw \"NAS-IP-Address=(?<nas>[^\\s]+)\"\n| stats dc(nas) as nas_dc values(nas) as nas_list by session_id\n| where nas_dc>1\n| table session_id, nas_list",
              "m": "A given RADIUS session should map to one NAS-IP unless mobility events are logged; multiple NAS for one session ID often indicates misconfiguration or log duplication.",
              "z": "Table (conflicting sessions), Single value (conflict count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [
                "T1090.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, `Splunk_TA_windows` (NPS), FreeRADIUS syslog, RADIUS accounting TA.\n• Ensure the following data sources are available: `sourcetype=radius:accounting`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nA given RADIUS session should map to one NAS-IP unless mobility events are logged; multiple NAS for one session ID often indicates misconfiguration or log duplication.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"radius:accounting\" earliest=-24h\n| rex field=_raw \"Acct-Session-Id=(?<session_id>[^\\s]+)\"\n| rex field=_raw \"NAS-IP-Address=(?<nas>[^\\s]+)\"\n| stats dc(nas) as nas_dc values(nas) as nas_list by session_id\n| where nas_dc>1\n| table session_id, nas_list\n```\n\nUnderstanding this SPL\n\n**RADIUS Accounting NAS vs Session-ID Reconciliation** — Complements UC-17.1.10 by flagging duplicate session IDs or mismatched NAS-IP between Start/Interim/Stop for the same `Acct-Session-Id` — catching replication and proxy issues.\n\nDocumented **Data sources**: `sourcetype=radius:accounting`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, `Splunk_TA_windows` (NPS), FreeRADIUS syslog, RADIUS accounting TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: radius:accounting. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"radius:accounting\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by session_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where nas_dc>1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **RADIUS Accounting NAS vs Session-ID Reconciliation**): table session_id, nas_list\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (conflicting sessions), Single value (conflict count).",
              "script": "",
              "premium": "",
              "hw": "Cisco ISE, NPS, FreeRADIUS",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog",
                "windows"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                },
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.16",
              "n": "MAC Authentication Bypass Anomaly Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Complements UC-17.1.6 whitelist checks with **volume and velocity** anomalies (sudden MAB spikes per port or site) that may indicate MAC spoofing or policy gaps.",
              "t": "`Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog",
              "d": "`cisco:ise:auth` with MAB",
              "q": "index=nac sourcetype=\"cisco:ise:auth\" auth_method=\"MAB\" earliest=-7d\n| bin _time span=1h\n| stats count by _time, switch, port\n| eventstats avg(count) as baseline by switch\n| where count > baseline*3 AND count>10\n| sort -count",
              "m": "Baseline MAB events per switch/port; alert on spikes. Join UC-17.1.6 for unknown MAC focus.",
              "z": "Line chart (MAB rate per switch), Table (spike events), Heatmap (port × hour).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1200"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog.\n• Ensure the following data sources are available: `cisco:ise:auth` with MAB.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline MAB events per switch/port; alert on spikes. Join UC-17.1.6 for unknown MAC focus.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:auth\" auth_method=\"MAB\" earliest=-7d\n| bin _time span=1h\n| stats count by _time, switch, port\n| eventstats avg(count) as baseline by switch\n| where count > baseline*3 AND count>10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**MAC Authentication Bypass Anomaly Detection** — Complements UC-17.1.6 whitelist checks with **volume and velocity** anomalies (sudden MAB spikes per port or site) that may indicate MAC spoofing or policy gaps.\n\nDocumented **Data sources**: `cisco:ise:auth` with MAB. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:auth. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:auth\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, switch, port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by switch** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > baseline*3 AND count>10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where match(Authentication.authentication_method, \"(?i)MAB|mac\")\n  by Authentication.src Authentication.dest span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**MAC Authentication Bypass Anomaly Detection** — Complements UC-17.1.6 whitelist checks with **volume and velocity** anomalies (sudden MAB spikes per port or site) that may indicate MAC spoofing or policy gaps.\n\nDocumented **Data sources**: `cisco:ise:auth` with MAB. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, `TA-cisco_ios`, HPE Aruba CX syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (MAB rate per switch), Table (spike events), Heatmap (port × hour).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE, access switches",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where match(Authentication.authentication_method, \"(?i)MAB|mac\")\n  by Authentication.src Authentication.dest span=1h\n| where count > 10",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                },
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.17",
              "n": "Network Quarantine Effectiveness",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures how often quarantined endpoints reach compliant state (successful remediation vs repeat quarantine) — effectiveness of NAC remediation workflows.",
              "t": "`Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog, `nac:quarantine`",
              "d": "Quarantine assign/release, posture re-check",
              "q": "index=nac sourcetype=\"nac:quarantine\" earliest=-30d\n| eval released=if(match(lower(status),\"(?i)released|cleared\"),1,0)\n| stats count as events, sum(released) as releases by endpoint_mac\n| eval success_ratio=round(100*releases/events,1)\n| where events>3 AND success_ratio < 40\n| table endpoint_mac, events, releases, success_ratio",
              "m": "Map vendor status fields. Track repeat quarantines for same MAC within 7 days as “ineffective.” Feed to desktop engineering.",
              "z": "Table (low effectiveness MACs), Bar chart (success ratio by violation type), Line chart (fleet success ratio).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog, `nac:quarantine`.\n• Ensure the following data sources are available: Quarantine assign/release, posture re-check.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap vendor status fields. Track repeat quarantines for same MAC within 7 days as “ineffective.” Feed to desktop engineering.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"nac:quarantine\" earliest=-30d\n| eval released=if(match(lower(status),\"(?i)released|cleared\"),1,0)\n| stats count as events, sum(released) as releases by endpoint_mac\n| eval success_ratio=round(100*releases/events,1)\n| where events>3 AND success_ratio < 40\n| table endpoint_mac, events, releases, success_ratio\n```\n\nUnderstanding this SPL\n\n**Network Quarantine Effectiveness** — Measures how often quarantined endpoints reach compliant state (successful remediation vs repeat quarantine) — effectiveness of NAC remediation workflows.\n\nDocumented **Data sources**: Quarantine assign/release, posture re-check. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`, HPE Aruba ClearPass syslog, Forescout CounterACT syslog, `nac:quarantine`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: nac:quarantine. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"nac:quarantine\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **released** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by endpoint_mac** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **success_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where events>3 AND success_ratio < 40` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Network Quarantine Effectiveness**): table endpoint_mac, events, releases, success_ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (low effectiveness MACs), Bar chart (success ratio by violation type), Line chart (fleet success ratio).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "Cisco ISE, NAC vendors",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.18",
              "n": "NAC Policy Compliance Trending",
              "c": "high",
              "f": "beginner",
              "v": "Daily percentage of authentications that receive “permit” vs “deny” vs “redirect” per authorization policy — trending for policy drift and rollout validation.",
              "t": "Splunk_TA_cisco-ise",
              "d": "`cisco:ise:auth`",
              "q": "index=nac sourcetype=\"cisco:ise:auth\" earliest=-30d\n| eval outcome=case(match(lower(authorization_result),\"(?i)permit|access.accept\"),\"permit\", match(lower(authorization_result),\"(?i)deny|reject\"),\"deny\", true(),\"other\")\n| timechart span=1d count by outcome",
              "m": "Normalize `authorization_result` from your ISE field set. Alert when `deny` share increases >2× baseline week-over-week.",
              "z": "Stacked area (outcomes over time), Line chart (deny rate), Single value (deny % today).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1562.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_cisco-ise.\n• Ensure the following data sources are available: `cisco:ise:auth`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize `authorization_result` from your ISE field set. Alert when `deny` share increases >2× baseline week-over-week.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:auth\" earliest=-30d\n| eval outcome=case(match(lower(authorization_result),\"(?i)permit|access.accept\"),\"permit\", match(lower(authorization_result),\"(?i)deny|reject\"),\"deny\", true(),\"other\")\n| timechart span=1d count by outcome\n```\n\nUnderstanding this SPL\n\n**NAC Policy Compliance Trending** — Daily percentage of authentications that receive “permit” vs “deny” vs “redirect” per authorization policy — trending for policy drift and rollout validation.\n\nDocumented **Data sources**: `cisco:ise:auth`. **App/TA** (typical add-on context): Splunk_TA_cisco-ise. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:auth. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:auth\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **outcome** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by outcome** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.action Authentication.vendor_action span=1d\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NAC Policy Compliance Trending** — Daily percentage of authentications that receive “permit” vs “deny” vs “redirect” per authorization policy — trending for policy drift and rollout validation.\n\nDocumented **Data sources**: `cisco:ise:auth`. **App/TA** (typical add-on context): Splunk_TA_cisco-ise. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area (outcomes over time), Line chart (deny rate), Single value (deny % today).",
              "script": "",
              "premium": "",
              "hw": "Cisco ISE",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.action Authentication.vendor_action span=1d\n| sort -count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.19",
              "n": "Endpoint Compliance Scoring",
              "c": "high",
              "f": "intermediate",
              "v": "Composite score per endpoint from posture checks (AV, patch, encryption) for executive dashboards and exception reporting.",
              "t": "Splunk_TA_cisco-ise",
              "d": "`cisco:ise:posture`",
              "q": "index=nac sourcetype=\"cisco:ise:posture\" earliest=-4h\n| eval check_pass=if(match(lower(posture_status),\"(?i)compliant\"),1,0)\n| stats avg(check_pass) as score by endpoint_mac\n| eval compliance_pct=round(score*100,1)\n| where compliance_pct < 80\n| sort compliance_pct\n| head 100",
              "m": "For multi-check rows per MAC, use `latest` per check name then average. Weight critical checks in `eval` if needed.",
              "z": "Table (lowest-scoring endpoints), Histogram (score distribution), Gauge (fleet average score).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_cisco-ise.\n• Ensure the following data sources are available: `cisco:ise:posture`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFor multi-check rows per MAC, use `latest` per check name then average. Weight critical checks in `eval` if needed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"cisco:ise:posture\" earliest=-4h\n| eval check_pass=if(match(lower(posture_status),\"(?i)compliant\"),1,0)\n| stats avg(check_pass) as score by endpoint_mac\n| eval compliance_pct=round(score*100,1)\n| where compliance_pct < 80\n| sort compliance_pct\n| head 100\n```\n\nUnderstanding this SPL\n\n**Endpoint Compliance Scoring** — Composite score per endpoint from posture checks (AV, patch, encryption) for executive dashboards and exception reporting.\n\nDocumented **Data sources**: `cisco:ise:posture`. **App/TA** (typical add-on context): Splunk_TA_cisco-ise. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: cisco:ise:posture. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"cisco:ise:posture\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **check_pass** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by endpoint_mac** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **compliance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where compliance_pct < 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.src span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Endpoint Compliance Scoring** — Composite score per endpoint from posture checks (AV, patch, encryption) for executive dashboards and exception reporting.\n\nDocumented **Data sources**: `cisco:ise:posture`. **App/TA** (typical add-on context): Splunk_TA_cisco-ise. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (lowest-scoring endpoints), Histogram (score distribution), Gauge (fleet average score).",
              "script": "",
              "premium": "",
              "hw": "Cisco ISE",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.src span=1d",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.20",
              "n": "Quarantine Release Audit",
              "c": "medium",
              "f": "beginner",
              "v": "Audit trail of who released endpoints from quarantine and whether release matched policy (e.g., IT-only, ticket required).",
              "t": "Splunk_TA_cisco-ise, `nac:quarantine`",
              "d": "Admin audit + quarantine logs",
              "q": "index=nac (sourcetype=\"nac:quarantine\" OR sourcetype=\"cisco:ise:admin\")\n| search \"quarantine\" AND (released OR \"cleared\" OR \"unquarantine\")\n| table _time, admin_user, endpoint_mac, action, ticket_id\n| sort -_time",
              "m": "Map `ticket_id` from workflow; alert when `isnull(ticket_id)` and action is manual release. Join ServiceNow for approved changes.",
              "z": "Table (release audit), Timeline (releases), Bar chart (releases by admin).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1078",
                "T1562.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_cisco-ise, `nac:quarantine`.\n• Ensure the following data sources are available: Admin audit + quarantine logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `ticket_id` from workflow; alert when `isnull(ticket_id)` and action is manual release. Join ServiceNow for approved changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac (sourcetype=\"nac:quarantine\" OR sourcetype=\"cisco:ise:admin\")\n| search \"quarantine\" AND (released OR \"cleared\" OR \"unquarantine\")\n| table _time, admin_user, endpoint_mac, action, ticket_id\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Quarantine Release Audit** — Audit trail of who released endpoints from quarantine and whether release matched policy (e.g., IT-only, ticket required).\n\nDocumented **Data sources**: Admin audit + quarantine logs. **App/TA** (typical add-on context): Splunk_TA_cisco-ise, `nac:quarantine`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: nac:quarantine, cisco:ise:admin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"nac:quarantine\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Quarantine Release Audit**): table _time, admin_user, endpoint_mac, action, ticket_id\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (release audit), Timeline (releases), Bar chart (releases by admin).",
              "script": "",
              "premium": "",
              "hw": "Cisco ISE",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.21",
              "n": "ISE Endpoint Posture Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Non-compliant endpoints (missing patches, disabled AV) on the network increase attack surface. ISE posture data enables enforcement visibility.",
              "t": "`Splunk_TA_cisco-ise`",
              "d": "`sourcetype=cisco:ise:syslog`",
              "q": "index=network sourcetype=\"cisco:ise:syslog\" \"Posture\"\n| rex \"PostureStatus=(?<posture_status>\\w+).*?EndpointMacAddress=(?<mac>\\S+)\"\n| stats count by posture_status, mac\n| where posture_status=\"NonCompliant\"\n| sort -count",
              "m": "Forward ISE posture assessment logs to Splunk. Track compliant vs. non-compliant endpoints. Alert when non-compliance rate exceeds 10%. Drill down by failure reason.",
              "z": "Pie chart (compliant vs non-compliant), Table (non-compliant endpoints), Timechart (compliance trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_cisco-ise](https://splunkbase.splunk.com/app/1915)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ise`.\n• Ensure the following data sources are available: `sourcetype=cisco:ise:syslog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward ISE posture assessment logs to Splunk. Track compliant vs. non-compliant endpoints. Alert when non-compliance rate exceeds 10%. Drill down by failure reason.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:ise:syslog\" \"Posture\"\n| rex \"PostureStatus=(?<posture_status>\\w+).*?EndpointMacAddress=(?<mac>\\S+)\"\n| stats count by posture_status, mac\n| where posture_status=\"NonCompliant\"\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ISE Endpoint Posture Compliance** — Non-compliant endpoints (missing patches, disabled AV) on the network increase attack surface. ISE posture data enables enforcement visibility.\n\nDocumented **Data sources**: `sourcetype=cisco:ise:syslog`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ise`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ise:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:ise:syslog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by posture_status, mac** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where posture_status=\"NonCompliant\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (compliant vs non-compliant), Table (non-compliant endpoints), Timechart (compliance trend).",
              "script": "",
              "premium": "",
              "hw": "Cisco ISE 3515/3595/3615/3655/3695, ISE Virtual Appliance; HPE Aruba ClearPass C1000/C2000/C3000, ClearPass Virtual Appliance; Forescout CounterACT CT-xxxx, eyeExtend",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ise"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ISE",
                "id": 1843,
                "url": "https://splunkbase.splunk.com/app/1843"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.1.22",
              "n": "NAC Quarantine and Remediation Duration",
              "c": "medium",
              "f": "intermediate",
              "v": "Long quarantine or remediation times affect user productivity. Monitoring duration supports process improvement and exception handling.",
              "t": "NAC platform logs",
              "d": "Quarantine start/end, remediation outcome",
              "q": "index=nac sourcetype=\"nac:quarantine\"\n| eval duration_min=(released_time - quarantine_time)/60\n| stats avg(duration_min) as avg_mins, count by posture_violation, remediation_result\n| where avg_mins > 60\n| table posture_violation, remediation_result, count, avg_mins",
              "m": "Ingest NAC quarantine and release events. Compute time in quarantine and remediation success. Alert when average duration exceeds threshold. Report on violation types and remediation rate.",
              "z": "Table (quarantine duration by violation), Bar chart (avg duration), Pie chart (remediation outcome).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunkbase app 2846](https://splunkbase.splunk.com/app/2846), [Splunkbase app 2847](https://splunkbase.splunk.com/app/2847)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NAC platform logs.\n• Ensure the following data sources are available: Quarantine start/end, remediation outcome.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest NAC quarantine and release events. Compute time in quarantine and remediation success. Alert when average duration exceeds threshold. Report on violation types and remediation rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nac sourcetype=\"nac:quarantine\"\n| eval duration_min=(released_time - quarantine_time)/60\n| stats avg(duration_min) as avg_mins, count by posture_violation, remediation_result\n| where avg_mins > 60\n| table posture_violation, remediation_result, count, avg_mins\n```\n\nUnderstanding this SPL\n\n**NAC Quarantine and Remediation Duration** — Long quarantine or remediation times affect user productivity. Monitoring duration supports process improvement and exception handling.\n\nDocumented **Data sources**: Quarantine start/end, remediation outcome. **App/TA** (typical add-on context): NAC platform logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nac; **sourcetype**: nac:quarantine. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nac, sourcetype=\"nac:quarantine\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by posture_violation, remediation_result** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_mins > 60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NAC Quarantine and Remediation Duration**): table posture_violation, remediation_result, count, avg_mins\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (quarantine duration by violation), Bar chart (avg duration), Pie chart (remediation outcome).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 40.7,
          "qd": {
            "gold": 2,
            "silver": 0,
            "bronze": 20,
            "none": 0
          }
        },
        {
          "i": "17.2",
          "n": "VPN & Remote Access",
          "u": [
            {
              "i": "17.2.1",
              "n": "VPN Concurrent Sessions",
              "c": "high",
              "f": "beginner",
              "v": "VPN capacity planning prevents remote workers from being locked out. Trending identifies peak usage and growth patterns.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`",
              "d": "VPN concentrator session logs",
              "q": "index=vpn sourcetype=\"cisco:asa\"\n| where action=\"session_connect\" OR action=\"session_disconnect\"\n| timechart span=15m dc(user) as concurrent_users",
              "m": "Track VPN session connects/disconnects. Calculate concurrent users over time. Alert when approaching license or capacity limits. Report on peak usage patterns for capacity planning. Track growth trends.",
              "z": "Line chart (concurrent sessions), Gauge (% of capacity), Single value (current active sessions), Area chart (sessions over time).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1133",
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`.\n• Ensure the following data sources are available: VPN concentrator session logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack VPN session connects/disconnects. Calculate concurrent users over time. Alert when approaching license or capacity limits. Report on peak usage patterns for capacity planning. Track growth trends.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cisco:asa\"\n| where action=\"session_connect\" OR action=\"session_disconnect\"\n| timechart span=15m dc(user) as concurrent_users\n```\n\nUnderstanding this SPL\n\n**VPN Concurrent Sessions** — VPN capacity planning prevents remote workers from being locked out. Trending identifies peak usage and growth patterns.\n\nDocumented **Data sources**: VPN concentrator session logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"session_connect\" OR action=\"session_disconnect\"` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=15m** buckets — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VPN Concurrent Sessions** — VPN capacity planning prevents remote workers from being locked out. Trending identifies peak usage and growth patterns.\n\nDocumented **Data sources**: VPN concentrator session logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (concurrent sessions), Gauge (% of capacity), Single value (current active sessions), Area chart (sessions over time).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASA 5506-X/5508-X/5516-X/5525-X/5545-X/5555-X, ASAv, Cisco Secure Firewall 3100/4200; Palo Alto PA-220/PA-440/PA-3200/PA-5200, GlobalProtect; Fortinet FortiGate 60F/100F/200F/600F/1800F, FortiGate SSL-VPN; Juniper SRX300/SRX1500/SRX4100/SRX4200, Junos Dynamic VPN",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "fortinet_fortigate",
                "paloalto_globalprotect",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "17.2.2",
              "n": "VPN Authentication Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Repeated VPN auth failures indicate credential attacks against the remote access perimeter, a primary attack vector.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`",
              "d": "VPN authentication logs",
              "q": "index=vpn sourcetype=\"cisco:asa\" action=\"authentication_failed\"\n| stats count by user, src\n| where count > 5\n| sort -count",
              "m": "Track VPN authentication failures by user and source IP. Alert on >5 failures per user per 15 minutes. Correlate with AD lockout events. Block source IPs with excessive failures. Report on attack patterns.",
              "z": "Table (failed auth events), Bar chart (failures by user), Geo map (source IPs), Line chart (failure rate trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`.\n• Ensure the following data sources are available: VPN authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack VPN authentication failures by user and source IP. Alert on >5 failures per user per 15 minutes. Correlate with AD lockout events. Block source IPs with excessive failures. Report on attack patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cisco:asa\" action=\"authentication_failed\"\n| stats count by user, src\n| where count > 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**VPN Authentication Failures** — Repeated VPN auth failures indicate credential attacks against the remote access perimeter, a primary attack vector.\n\nDocumented **Data sources**: VPN authentication logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VPN Authentication Failures** — Repeated VPN auth failures indicate credential attacks against the remote access perimeter, a primary attack vector.\n\nDocumented **Data sources**: VPN authentication logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed auth events), Bar chart (failures by user), Geo map (source IPs), Line chart (failure rate trend).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASA 5506-X/5508-X/5516-X/5525-X/5545-X/5555-X, ASAv, Cisco Secure Firewall 3100/4200; Palo Alto PA-220/PA-440/PA-3200/PA-5200, GlobalProtect; Fortinet FortiGate 60F/100F/200F/600F/1800F, FortiGate SSL-VPN; Juniper SRX300/SRX1500/SRX4100/SRX4200, Junos Dynamic VPN",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "Cyber Essentials",
                  "v": "Montpellier (2025)",
                  "cl": "CE.SAU.1",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of Cyber Essentials CE.SAU.1 (Secure authentication & access) — Splunk UC-17.2.2: VPN Authentication Failures.",
                  "ea": "Saved search 'UC-17.2.2' running on VPN authentication logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ncsc.gov.uk/cyberessentials/overview"
                },
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of PSD2 Art.97 (Strong customer authentication) — Splunk UC-17.2.2: VPN Authentication Failures.",
                  "ea": "Saved search 'UC-17.2.2' running on VPN authentication logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2015/2366/oj"
                },
                {
                  "r": "SAMA CSF",
                  "v": "v1.0 (2017)",
                  "cl": "3.3.5",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of SAMA CSF 3.3.5 (Security monitoring) — Splunk UC-17.2.2: VPN Authentication Failures.",
                  "ea": "Saved search 'UC-17.2.2' running on VPN authentication logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.sama.gov.sa/en-US/Laws/BankingRules/SAMA%20Cyber%20Security%20Framework.pdf"
                }
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.3",
              "n": "Geo-Location Anomalies",
              "c": "critical",
              "f": "beginner",
              "v": "VPN connections from unexpected countries may indicate compromised credentials being used from attacker infrastructure.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`, GeoIP lookup",
              "d": "VPN session logs with source IP",
              "q": "index=vpn sourcetype=\"cisco:asa\" action=\"session_connect\"\n| iplocation src\n| search NOT Country IN (\"United States\",\"Canada\",\"United Kingdom\")\n| table _time, user, src, Country, City",
              "m": "Enrich VPN connections with GeoIP data. Maintain whitelist of expected countries. Alert on connections from unexpected locations. Correlate with user travel records if available. Block sanctioned countries.",
              "z": "Geo map (VPN connections), Table (anomalous locations), Bar chart (connections by country).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1021",
                "T1219"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`, GeoIP lookup.\n• Ensure the following data sources are available: VPN session logs with source IP.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnrich VPN connections with GeoIP data. Maintain whitelist of expected countries. Alert on connections from unexpected locations. Correlate with user travel records if available. Block sanctioned countries.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cisco:asa\" action=\"session_connect\"\n| iplocation src\n| search NOT Country IN (\"United States\",\"Canada\",\"United Kingdom\")\n| table _time, user, src, Country, City\n```\n\nUnderstanding this SPL\n\n**Geo-Location Anomalies** — VPN connections from unexpected countries may indicate compromised credentials being used from attacker infrastructure.\n\nDocumented **Data sources**: VPN session logs with source IP. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`, GeoIP lookup. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Geo-Location Anomalies**): iplocation src\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Geo-Location Anomalies**): table _time, user, src, Country, City\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Geo-Location Anomalies** — VPN connections from unexpected countries may indicate compromised credentials being used from attacker infrastructure.\n\nDocumented **Data sources**: VPN session logs with source IP. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`, GeoIP lookup. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Geo map (VPN connections), Table (anomalous locations), Bar chart (connections by country).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASA 5506-X/5508-X/5516-X/5525-X/5545-X/5555-X, ASAv, Cisco Secure Firewall 3100/4200; Palo Alto PA-220/PA-440/PA-3200/PA-5200, GlobalProtect; Fortinet FortiGate 60F/100F/200F/600F/1800F, FortiGate SSL-VPN; Juniper SRX300/SRX1500/SRX4100/SRX4200, Junos Dynamic VPN",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "fortinet_fortigate",
                "paloalto_globalprotect",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "17.2.4",
              "n": "Split-Tunnel Compliance",
              "c": "medium",
              "f": "beginner",
              "v": "Split-tunnel configurations affect security visibility. Ensuring compliance with tunnel policy maintains security posture.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`",
              "d": "VPN session attributes (tunnel type, group policy)",
              "q": "index=vpn sourcetype=\"cisco:asa\"\n| stats count by user, tunnel_type, group_policy\n| where tunnel_type=\"split\"\n| table user, tunnel_type, group_policy, count",
              "m": "Track VPN tunnel configuration per session. Verify users are connecting with the correct group policy (full-tunnel for high-risk, split-tunnel for standard). Alert on policy violations. Report on tunnel type distribution.",
              "z": "Pie chart (full vs split tunnel), Table (sessions by policy), Bar chart (tunnel type by department).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1572",
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`.\n• Ensure the following data sources are available: VPN session attributes (tunnel type, group policy).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack VPN tunnel configuration per session. Verify users are connecting with the correct group policy (full-tunnel for high-risk, split-tunnel for standard). Alert on policy violations. Report on tunnel type distribution.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cisco:asa\"\n| stats count by user, tunnel_type, group_policy\n| where tunnel_type=\"split\"\n| table user, tunnel_type, group_policy, count\n```\n\nUnderstanding this SPL\n\n**Split-Tunnel Compliance** — Split-tunnel configurations affect security visibility. Ensuring compliance with tunnel policy maintains security posture.\n\nDocumented **Data sources**: VPN session attributes (tunnel type, group policy). **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, tunnel_type, group_policy** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where tunnel_type=\"split\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Split-Tunnel Compliance**): table user, tunnel_type, group_policy, count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Split-Tunnel Compliance** — Split-tunnel configurations affect security visibility. Ensuring compliance with tunnel policy maintains security posture.\n\nDocumented **Data sources**: VPN session attributes (tunnel type, group policy). **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (full vs split tunnel), Table (sessions by policy), Bar chart (tunnel type by department).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASA 5506-X/5508-X/5516-X/5525-X/5545-X/5555-X, ASAv, Cisco Secure Firewall 3100/4200; Palo Alto PA-220/PA-440/PA-3200/PA-5200, GlobalProtect; Fortinet FortiGate 60F/100F/200F/600F/1800F, FortiGate SSL-VPN; Juniper SRX300/SRX1500/SRX4100/SRX4200, Junos Dynamic VPN",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "fortinet_fortigate",
                "paloalto_globalprotect",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.5",
              "n": "VPN Tunnel Stability",
              "c": "medium",
              "f": "intermediate",
              "v": "Frequent disconnects indicate network issues, client problems, or infrastructure instability affecting user productivity.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`",
              "d": "VPN session logs (connect/disconnect events)",
              "q": "index=vpn sourcetype=\"cisco:asa\"\n| where action IN (\"session_connect\",\"session_disconnect\")\n| transaction user maxspan=1h\n| where eventcount > 4\n| table user, eventcount, duration\n| sort -eventcount",
              "m": "Track connect/disconnect patterns per user. Identify users with >4 reconnections per hour. Correlate with network quality metrics. Alert on widespread instability (multiple users affected simultaneously). Report for helpdesk.",
              "z": "Table (unstable connections), Bar chart (reconnects by user), Line chart (reconnection rate trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`.\n• Ensure the following data sources are available: VPN session logs (connect/disconnect events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack connect/disconnect patterns per user. Identify users with >4 reconnections per hour. Correlate with network quality metrics. Alert on widespread instability (multiple users affected simultaneously). Report for helpdesk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cisco:asa\"\n| where action IN (\"session_connect\",\"session_disconnect\")\n| transaction user maxspan=1h\n| where eventcount > 4\n| table user, eventcount, duration\n| sort -eventcount\n```\n\nUnderstanding this SPL\n\n**VPN Tunnel Stability** — Frequent disconnects indicate network issues, client problems, or infrastructure instability affecting user productivity.\n\nDocumented **Data sources**: VPN session logs (connect/disconnect events). **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action IN (\"session_connect\",\"session_disconnect\")` — typically the threshold or rule expression for this monitoring goal.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• Filters the current rows with `where eventcount > 4` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VPN Tunnel Stability**): table user, eventcount, duration\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VPN Tunnel Stability** — Frequent disconnects indicate network issues, client problems, or infrastructure instability affecting user productivity.\n\nDocumented **Data sources**: VPN session logs (connect/disconnect events). **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unstable connections), Bar chart (reconnects by user), Line chart (reconnection rate trend).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASA 5506-X/5508-X/5516-X/5525-X/5545-X/5555-X, ASAv, Cisco Secure Firewall 3100/4200; Palo Alto PA-220/PA-440/PA-3200/PA-5200, GlobalProtect; Fortinet FortiGate 60F/100F/200F/600F/1800F, FortiGate SSL-VPN; Juniper SRX300/SRX1500/SRX4100/SRX4200, Junos Dynamic VPN",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "fortinet_fortigate",
                "paloalto_globalprotect",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.6",
              "n": "Off-Hours VPN Access",
              "c": "medium",
              "f": "intermediate",
              "v": "VPN access at unusual hours may indicate compromised credentials or unauthorized activity. Alerting supports investigation.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`, user context",
              "d": "VPN session logs, HR data (department, role)",
              "q": "index=vpn sourcetype=\"cisco:asa\" action=\"session_connect\"\n| eval hour=tonumber(strftime(_time,\"%H\"))\n| where (hour < 5 OR hour >= 22)\n| lookup user_roles.csv user OUTPUT department, role\n| where role!=\"on_call\" AND role!=\"sysadmin\"\n| table _time, user, department, src, hour",
              "m": "Define normal hours per user role/department. Alert on VPN connections outside hours for roles that don't require it. Whitelist on-call and sysadmin roles. Review weekly for patterns.",
              "z": "Heatmap (user × hour of day), Table (off-hours access), Bar chart (off-hours by department).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1021",
                "T1219"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`, user context.\n• Ensure the following data sources are available: VPN session logs, HR data (department, role).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine normal hours per user role/department. Alert on VPN connections outside hours for roles that don't require it. Whitelist on-call and sysadmin roles. Review weekly for patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cisco:asa\" action=\"session_connect\"\n| eval hour=tonumber(strftime(_time,\"%H\"))\n| where (hour < 5 OR hour >= 22)\n| lookup user_roles.csv user OUTPUT department, role\n| where role!=\"on_call\" AND role!=\"sysadmin\"\n| table _time, user, department, src, hour\n```\n\nUnderstanding this SPL\n\n**Off-Hours VPN Access** — VPN access at unusual hours may indicate compromised credentials or unauthorized activity. Alerting supports investigation.\n\nDocumented **Data sources**: VPN session logs, HR data (department, role). **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`, user context. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (hour < 5 OR hour >= 22)` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where role!=\"on_call\" AND role!=\"sysadmin\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Off-Hours VPN Access**): table _time, user, department, src, hour\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Off-Hours VPN Access** — VPN access at unusual hours may indicate compromised credentials or unauthorized activity. Alerting supports investigation.\n\nDocumented **Data sources**: VPN session logs, HR data (department, role). **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`, user context. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (user × hour of day), Table (off-hours access), Bar chart (off-hours by department).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASA 5506-X/5508-X/5516-X/5525-X/5545-X/5555-X, ASAv, Cisco Secure Firewall 3100/4200; Palo Alto PA-220/PA-440/PA-3200/PA-5200, GlobalProtect; Fortinet FortiGate 60F/100F/200F/600F/1800F, FortiGate SSL-VPN; Juniper SRX300/SRX1500/SRX4100/SRX4200, Junos Dynamic VPN",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "fortinet_fortigate",
                "paloalto_globalprotect",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.7",
              "n": "VPN Bandwidth Consumption",
              "c": "medium",
              "f": "intermediate",
              "v": "Per-user bandwidth tracking identifies heavy users, guides capacity planning, and detects potential data exfiltration.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`, RADIUS accounting",
              "d": "VPN session accounting (bytes in/out)",
              "q": "index=vpn sourcetype=\"cisco:asa\"\n| stats sum(bytes_in) as bytes_in, sum(bytes_out) as bytes_out by user\n| eval total_gb=round((bytes_in+bytes_out)/1073741824,2)\n| sort -total_gb\n| head 20",
              "m": "Track VPN session byte counters per user. Alert on users with excessive upload (potential data exfiltration). Report on bandwidth distribution for capacity planning. Identify optimization opportunities (video offload, split-tunnel).",
              "z": "Bar chart (bandwidth by user), Pie chart (upload vs download), Line chart (total bandwidth trend), Table (top bandwidth consumers).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1048",
                "T1041"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`, RADIUS accounting.\n• Ensure the following data sources are available: VPN session accounting (bytes in/out).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack VPN session byte counters per user. Alert on users with excessive upload (potential data exfiltration). Report on bandwidth distribution for capacity planning. Identify optimization opportunities (video offload, split-tunnel).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cisco:asa\"\n| stats sum(bytes_in) as bytes_in, sum(bytes_out) as bytes_out by user\n| eval total_gb=round((bytes_in+bytes_out)/1073741824,2)\n| sort -total_gb\n| head 20\n```\n\nUnderstanding this SPL\n\n**VPN Bandwidth Consumption** — Per-user bandwidth tracking identifies heavy users, guides capacity planning, and detects potential data exfiltration.\n\nDocumented **Data sources**: VPN session accounting (bytes in/out). **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`, RADIUS accounting. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VPN Bandwidth Consumption** — Per-user bandwidth tracking identifies heavy users, guides capacity planning, and detects potential data exfiltration.\n\nDocumented **Data sources**: VPN session accounting (bytes in/out). **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`, RADIUS accounting. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (bandwidth by user), Pie chart (upload vs download), Line chart (total bandwidth trend), Table (top bandwidth consumers).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASA 5506-X/5508-X/5516-X/5525-X/5545-X/5555-X, ASAv, Cisco Secure Firewall 3100/4200; Palo Alto PA-220/PA-440/PA-3200/PA-5200, GlobalProtect; Fortinet FortiGate 60F/100F/200F/600F/1800F, FortiGate SSL-VPN; Juniper SRX300/SRX1500/SRX4100/SRX4200, Junos Dynamic VPN",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "fortinet_fortigate",
                "paloalto_globalprotect",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.8",
              "n": "Simultaneous Session Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "A single user with simultaneous VPN sessions from different locations strongly indicates credential compromise.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`",
              "d": "VPN session logs",
              "q": "index=vpn sourcetype=\"cisco:asa\" action=\"session_connect\"\n| stats dc(src) as unique_ips, values(src) as ips by user\n| where unique_ips > 1",
              "m": "Track active VPN sessions per user. Alert when a user has concurrent sessions from different IPs. Whitelist known scenarios (multiple devices). Trigger automated investigation including password reset.",
              "z": "Table (users with multiple sessions), Single value (simultaneous sessions detected), Timeline (detection events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1021",
                "T1219"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`.\n• Ensure the following data sources are available: VPN session logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack active VPN sessions per user. Alert when a user has concurrent sessions from different IPs. Whitelist known scenarios (multiple devices). Trigger automated investigation including password reset.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cisco:asa\" action=\"session_connect\"\n| stats dc(src) as unique_ips, values(src) as ips by user\n| where unique_ips > 1\n```\n\nUnderstanding this SPL\n\n**Simultaneous Session Detection** — A single user with simultaneous VPN sessions from different locations strongly indicates credential compromise.\n\nDocumented **Data sources**: VPN session logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unique_ips > 1` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Simultaneous Session Detection** — A single user with simultaneous VPN sessions from different locations strongly indicates credential compromise.\n\nDocumented **Data sources**: VPN session logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users with multiple sessions), Single value (simultaneous sessions detected), Timeline (detection events).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASA 5506-X/5508-X/5516-X/5525-X/5545-X/5555-X, ASAv, Cisco Secure Firewall 3100/4200; Palo Alto PA-220/PA-440/PA-3200/PA-5200, GlobalProtect; Fortinet FortiGate 60F/100F/200F/600F/1800F, FortiGate SSL-VPN; Juniper SRX300/SRX1500/SRX4100/SRX4200, Junos Dynamic VPN",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "wv": "crawl",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "fortinet_fortigate",
                "paloalto_globalprotect",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "17.2.9",
              "n": "VPN Split-Tunnel Policy Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Verifying split-tunnel vs. full-tunnel adherence per user/group policy. Full-tunnel ensures all traffic is inspected; split-tunnel may bypass security controls for internet-bound traffic.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`",
              "d": "VPN session logs (tunnel_type, assigned_policy, routing_mode)",
              "q": "index=vpn (sourcetype=\"cisco:asa\" OR sourcetype=\"pan:globalprotect\")\n| search action=\"session_connect\" OR \"tunnel established\"\n| rex field=_raw \"Group Policy=(?<group_policy>[^\\s]+)\"\n| rex field=_raw \"tunnel_type=(?<tunnel_type>[^\\s]+)\"\n| rex field=_raw \"routing_mode=(?<routing_mode>[^\\s]+)\"\n| lookup vpn_policy_requirements.csv group_policy OUTPUT required_tunnel\n| eval actual_tunnel=coalesce(tunnel_type, routing_mode, group_policy)\n| where required_tunnel!=actual_tunnel OR isnull(required_tunnel)\n| table _time, user, group_policy, actual_tunnel, required_tunnel, src",
              "m": "Ingest VPN session logs with tunnel configuration. Maintain lookup table mapping group policy to required tunnel type (full/split). For Cisco ASA, use Group-Policy; for GlobalProtect, use gateway-assigned policy. Alert when users connect with split-tunnel when policy requires full-tunnel (e.g., high-risk groups). Report on policy compliance rate by department and gateway.",
              "z": "Pie chart (full vs split by policy), Table (policy violations), Bar chart (compliance by group policy), Single value (% compliant).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1572",
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`.\n• Ensure the following data sources are available: VPN session logs (tunnel_type, assigned_policy, routing_mode).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest VPN session logs with tunnel configuration. Maintain lookup table mapping group policy to required tunnel type (full/split). For Cisco ASA, use Group-Policy; for GlobalProtect, use gateway-assigned policy. Alert when users connect with split-tunnel when policy requires full-tunnel (e.g., high-risk groups). Report on policy compliance rate by department and gateway.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn (sourcetype=\"cisco:asa\" OR sourcetype=\"pan:globalprotect\")\n| search action=\"session_connect\" OR \"tunnel established\"\n| rex field=_raw \"Group Policy=(?<group_policy>[^\\s]+)\"\n| rex field=_raw \"tunnel_type=(?<tunnel_type>[^\\s]+)\"\n| rex field=_raw \"routing_mode=(?<routing_mode>[^\\s]+)\"\n| lookup vpn_policy_requirements.csv group_policy OUTPUT required_tunnel\n| eval actual_tunnel=coalesce(tunnel_type, routing_mode, group_policy)\n| where required_tunnel!=actual_tunnel OR isnull(required_tunnel)\n| table _time, user, group_policy, actual_tunnel, required_tunnel, src\n```\n\nUnderstanding this SPL\n\n**VPN Split-Tunnel Policy Compliance** — Verifying split-tunnel vs. full-tunnel adherence per user/group policy. Full-tunnel ensures all traffic is inspected; split-tunnel may bypass security controls for internet-bound traffic.\n\nDocumented **Data sources**: VPN session logs (tunnel_type, assigned_policy, routing_mode). **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa, pan:globalprotect. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **actual_tunnel** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where required_tunnel!=actual_tunnel OR isnull(required_tunnel)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VPN Split-Tunnel Policy Compliance**): table _time, user, group_policy, actual_tunnel, required_tunnel, src\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VPN Split-Tunnel Policy Compliance** — Verifying split-tunnel vs. full-tunnel adherence per user/group policy. Full-tunnel ensures all traffic is inspected; split-tunnel may bypass security controls for internet-bound traffic.\n\nDocumented **Data sources**: VPN session logs (tunnel_type, assigned_policy, routing_mode). **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (full vs split by policy), Table (policy violations), Bar chart (compliance by group policy), Single value (% compliant).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASA/AnyConnect, Palo Alto GlobalProtect, FortiGate SSL-VPN",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n         sum(Network_Sessions.bytes_in) as bytes_in\n         sum(Network_Sessions.bytes_out) as bytes_out\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.src All_Sessions.user All_Sessions.app span=1h\n| sort -count",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "fortinet_fortigate",
                "paloalto_globalprotect",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.10",
              "n": "mTLS Certificate Rotation Tracking",
              "c": "high",
              "f": "advanced",
              "v": "Mutual TLS certificates approaching expiry in zero-trust architectures (service-to-service auth). Expired certs cause service outages and authentication failures.",
              "t": "Custom (certificate inventory, service mesh telemetry)",
              "d": "Certificate inventory scan (serial, subject, expiry), Istio/Linkerd cert rotation logs",
              "q": "index=certs (sourcetype=\"cert:inventory\" OR sourcetype=\"istio:cert\" OR sourcetype=\"linkerd:cert\")\n| eval expiry_epoch=if(isnum(expiry_time), expiry_time, strptime(expiry_time, \"%Y-%m-%dT%H:%M:%SZ\"))\n| eval days_to_expiry=floor((expiry_epoch - now())/86400)\n| where days_to_expiry < 60 OR days_to_expiry < 0\n| eval status=case(days_to_expiry < 0, \"EXPIRED\", days_to_expiry < 14, \"CRITICAL\", days_to_expiry < 30, \"WARNING\", 1=1, \"OK\")\n| table _time, subject, serial, expiry_epoch, days_to_expiry, status, workload, namespace\n| sort days_to_expiry",
              "m": "Run periodic certificate inventory scans (OpenSSL, cert-manager, HashiCorp Vault) and forward to Splunk. Ingest Istio/Linkerd cert rotation logs for service mesh. Parse subject, serial, notAfter. Alert when cert expires in <30 days; critical alert at <14 days. Track rotation success/failure from mesh logs. Report on cert distribution by workload and expiry timeline. Integrate with automation for cert renewal.",
              "z": "Table (certs expiring soon), Single value (expired count), Bar chart (expiry by month), Timeline (rotation events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1573.001",
                "T1573.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (certificate inventory, service mesh telemetry).\n• Ensure the following data sources are available: Certificate inventory scan (serial, subject, expiry), Istio/Linkerd cert rotation logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun periodic certificate inventory scans (OpenSSL, cert-manager, HashiCorp Vault) and forward to Splunk. Ingest Istio/Linkerd cert rotation logs for service mesh. Parse subject, serial, notAfter. Alert when cert expires in <30 days; critical alert at <14 days. Track rotation success/failure from mesh logs. Report on cert distribution by workload and expiry timeline. Integrate with automation for cert renewal.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=certs (sourcetype=\"cert:inventory\" OR sourcetype=\"istio:cert\" OR sourcetype=\"linkerd:cert\")\n| eval expiry_epoch=if(isnum(expiry_time), expiry_time, strptime(expiry_time, \"%Y-%m-%dT%H:%M:%SZ\"))\n| eval days_to_expiry=floor((expiry_epoch - now())/86400)\n| where days_to_expiry < 60 OR days_to_expiry < 0\n| eval status=case(days_to_expiry < 0, \"EXPIRED\", days_to_expiry < 14, \"CRITICAL\", days_to_expiry < 30, \"WARNING\", 1=1, \"OK\")\n| table _time, subject, serial, expiry_epoch, days_to_expiry, status, workload, namespace\n| sort days_to_expiry\n```\n\nUnderstanding this SPL\n\n**mTLS Certificate Rotation Tracking** — Mutual TLS certificates approaching expiry in zero-trust architectures (service-to-service auth). Expired certs cause service outages and authentication failures.\n\nDocumented **Data sources**: Certificate inventory scan (serial, subject, expiry), Istio/Linkerd cert rotation logs. **App/TA** (typical add-on context): Custom (certificate inventory, service mesh telemetry). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: certs; **sourcetype**: cert:inventory, istio:cert, linkerd:cert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=certs, sourcetype=\"cert:inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **expiry_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_expiry < 60 OR days_to_expiry < 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **mTLS Certificate Rotation Tracking**): table _time, subject, serial, expiry_epoch, days_to_expiry, status, workload, namespace\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (certs expiring soon), Single value (expired count), Bar chart (expiry by month), Timeline (rotation events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.11",
              "n": "Split Tunnel Violation Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Flags sessions where observed routing or client flags indicate split tunnel when group policy mandates full tunnel — complements UC-17.2.4/17.2.9 with explicit **violation** logic.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`",
              "d": "VPN connect logs with `tunnel_type`, `split_include`, `default_gateway`",
              "q": "index=vpn (sourcetype=\"cisco:asa\" OR sourcetype=\"pan:globalprotect\") action=\"session_connect\"\n| lookup vpn_policy_requirements.csv group_policy OUTPUT required_tunnel\n| eval violation=if(required_tunnel=\"full\" AND (tunnel_type=\"split\" OR match(lower(_raw),\"(?i)split.?tunnel\")),1,0)\n| where violation=1\n| stats count by user, group_policy, src\n| sort -count",
              "m": "Align lookup with security architecture. Some vendors expose only group name — normalize in transforms.",
              "z": "Table (violations), Bar chart (violations by group policy), Single value (violation sessions / day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1572",
                "T1048.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`.\n• Ensure the following data sources are available: VPN connect logs with `tunnel_type`, `split_include`, `default_gateway`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign lookup with security architecture. Some vendors expose only group name — normalize in transforms.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn (sourcetype=\"cisco:asa\" OR sourcetype=\"pan:globalprotect\") action=\"session_connect\"\n| lookup vpn_policy_requirements.csv group_policy OUTPUT required_tunnel\n| eval violation=if(required_tunnel=\"full\" AND (tunnel_type=\"split\" OR match(lower(_raw),\"(?i)split.?tunnel\")),1,0)\n| where violation=1\n| stats count by user, group_policy, src\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Split Tunnel Violation Detection** — Flags sessions where observed routing or client flags indicate split tunnel when group policy mandates full tunnel — complements UC-17.2.4/17.2.9 with explicit **violation** logic.\n\nDocumented **Data sources**: VPN connect logs with `tunnel_type`, `split_include`, `default_gateway`. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa, pan:globalprotect. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **violation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where violation=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, group_policy, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.user span=1h\n| where count > 100\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Split Tunnel Violation Detection** — Flags sessions where observed routing or client flags indicate split tunnel when group policy mandates full tunnel — complements UC-17.2.4/17.2.9 with explicit **violation** logic.\n\nDocumented **Data sources**: VPN connect logs with `tunnel_type`, `split_include`, `default_gateway`. 
**App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), `TA-fortinet_fortigate`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Filters the current rows with `where count > 100` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Bar chart (violations by group policy), Single value (violation sessions / day).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASA/AnyConnect, GlobalProtect",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.user span=1h\n| where count > 100",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "fortinet_fortigate",
                "paloalto_globalprotect",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.12",
              "n": "VPN Concentrator Capacity",
              "c": "high",
              "f": "beginner",
              "v": "Tracks session count and CPU/memory against platform limits to avoid remote-access brownouts during peaks.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), SNMP TA",
              "d": "SNMP OIDs, `cisco:asa` system events, vendor metrics API",
              "q": "index=snmp sourcetype=\"snmp:cpu\" host=\"vpn-headend-*\" earliest=-24h\n| timechart span=15m avg(cpu_utilization) as avg_cpu by host",
              "m": "Prefer vendor metrics (e.g., AnyConnect session count OID). Alert when CPU >80% sustained or sessions >85% of license. Simplify if only session logs: use UC-17.2.1 trend + license field.",
              "z": "Line chart (CPU vs sessions), Gauge (capacity %), Table (headends).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), SNMP TA.\n• Ensure the following data sources are available: SNMP OIDs, `cisco:asa` system events, vendor metrics API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPrefer vendor metrics (e.g., AnyConnect session count OID). Alert when CPU >80% sustained or sessions >85% of license. Simplify if only session logs: use UC-17.2.1 trend + license field.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=snmp sourcetype=\"snmp:cpu\" host=\"vpn-headend-*\" earliest=-24h\n| timechart span=15m avg(cpu_utilization) as avg_cpu by host\n```\n\nUnderstanding this SPL\n\n**VPN Concentrator Capacity** — Tracks session count and CPU/memory against platform limits to avoid remote-access brownouts during peaks.\n\nDocumented **Data sources**: SNMP OIDs, `cisco:asa` system events, vendor metrics API. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), SNMP TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: snmp; **sourcetype**: snmp:cpu; **host** filter: vpn-headend-*. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=snmp, sourcetype=\"snmp:cpu\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU vs sessions), Gauge (capacity %), Table (headends).",
              "script": "",
              "premium": "",
              "hw": "ASA, FTD, Palo Alto GlobalProtect",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "paloalto",
                "snmp"
              ],
              "em": [
                "cisco_asa",
                "paloalto_globalprotect",
                "paloalto_pan_firewall",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.13",
              "n": "Concurrent VPN Session Limits",
              "c": "medium",
              "f": "beginner",
              "v": "Alerts when simultaneous sessions approach licensed or configured caps — same user or aggregate.",
              "t": "Splunk_TA_cisco-asa",
              "d": "`cisco:asa` session events",
              "q": "index=vpn sourcetype=\"cisco:asa\" earliest=-4h\n| where action=\"session_connect\" OR action=\"session_disconnect\"\n| eval delta=if(action=\"session_connect\",1,-1)\n| sort 0 _time\n| streamstats global=f sum(delta) as concurrent by host\n| where concurrent > 0\n| stats max(concurrent) as peak_concurrent by host\n| lookup vpn_license_limits.csv host OUTPUT max_sessions\n| where peak_concurrent > max_sessions*0.85",
              "m": "If connect/disconnect deltas are incomplete, use vendor “show vpn-sessiondb summary” scripted input for authoritative count. Tune `max_sessions` from license CSV.",
              "z": "Single value (peak vs cap %), Line chart (concurrent sessions), Table (headends near limit).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1021"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_cisco-asa.\n• Ensure the following data sources are available: `cisco:asa` session events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIf connect/disconnect deltas are incomplete, use vendor “show vpn-sessiondb summary” scripted input for authoritative count. Tune `max_sessions` from license CSV.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cisco:asa\" earliest=-4h\n| where action=\"session_connect\" OR action=\"session_disconnect\"\n| eval delta=if(action=\"session_connect\",1,-1)\n| sort 0 _time\n| streamstats global=f sum(delta) as concurrent by host\n| where concurrent > 0\n| stats max(concurrent) as peak_concurrent by host\n| lookup vpn_license_limits.csv host OUTPUT max_sessions\n| where peak_concurrent > max_sessions*0.85\n```\n\nUnderstanding this SPL\n\n**Concurrent VPN Session Limits** — Alerts when simultaneous sessions approach licensed or configured caps — same user or aggregate.\n\nDocumented **Data sources**: `cisco:asa` session events. **App/TA** (typical add-on context): Splunk_TA_cisco-asa. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"session_connect\" OR action=\"session_disconnect\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where concurrent > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where peak_concurrent > max_sessions*0.85` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Single value (peak vs cap %), Line chart (concurrent sessions), Table (headends near limit).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Cisco ASA",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Capacity",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_asa"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ASA",
                "id": 1621,
                "url": "https://splunkbase.splunk.com/app/1621"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.14",
              "n": "Geo-Impossible VPN Connections",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects logins from two distant countries faster than plausible travel — complements static geo allowlists (UC-17.2.3).",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), GeoIP",
              "d": "VPN session connect with `src`, `user`",
              "q": "index=vpn sourcetype=\"cisco:asa\" action=\"session_connect\" earliest=-24h\n| iplocation src\n| eval country=Country\n| sort user _time\n| streamstats current=f last(_time) as prev_time last(Country) as prev_country by user\n| eval gap_hrs=round((_time-prev_time)/3600,2)\n| where isnotnull(prev_country) AND country!=prev_country AND gap_hrs < 6\n| table _time, user, prev_country, country, gap_hrs, src",
              "m": "Tune time window (e.g., 4–8h) and distance (optional `haversine` if lat/long from enriched data). Whitelist mobile users and split tunnel carrier NAT.",
              "z": "Table (impossible travel), Map (prev vs new), Single value (alerts / day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1021",
                "T1219"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), GeoIP.\n• Ensure the following data sources are available: VPN session connect with `src`, `user`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune time window (e.g., 4–8h) and distance (optional `haversine` if lat/long from enriched data). Whitelist mobile users and split tunnel carrier NAT.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cisco:asa\" action=\"session_connect\" earliest=-24h\n| iplocation src\n| eval country=Country\n| sort user _time\n| streamstats current=f last(_time) as prev_time last(Country) as prev_country by user\n| eval gap_hrs=round((_time-prev_time)/3600,2)\n| where isnotnull(prev_country) AND country!=prev_country AND gap_hrs < 6\n| table _time, user, prev_country, country, gap_hrs, src\n```\n\nUnderstanding this SPL\n\n**Geo-Impossible VPN Connections** — Detects logins from two distant countries faster than plausible travel — complements static geo allowlists (UC-17.2.3).\n\nDocumented **Data sources**: VPN session connect with `src`, `user`. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), GeoIP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Geo-Impossible VPN Connections**): iplocation src\n• `eval` defines or adjusts **country** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gap_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(prev_country) AND country!=prev_country AND gap_hrs < 6` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Geo-Impossible VPN Connections**): table _time, user, prev_country, country, gap_hrs, src\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.user Authentication.src span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Geo-Impossible VPN Connections** — Detects logins from two distant countries faster than plausible travel — complements static geo allowlists (UC-17.2.3).\n\nDocumented **Data sources**: VPN session connect with `src`, `user`. 
**App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect), GeoIP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (impossible travel), Map (prev vs new), Single value (alerts / day).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASA, GlobalProtect",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.user Authentication.src span=1h\n| where count > 5",
              "e": [
                "cisco",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "paloalto_globalprotect",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.15",
              "n": "VPN Tunnel Keepalive Failure Analysis",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks DPD/keepalive failures and tunnel teardown reasons for site-to-site and remote-access — isolates path MTU, NAT, and idle timeout issues.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto`",
              "d": "VPN/IKE syslog (`cisco:asa`, `pan:system`)",
              "q": "index=vpn (sourcetype=\"cisco:asa\" OR sourcetype=\"pan:system\") earliest=-24h\n| search \"IKE\" OR \"keepalive\" OR \"DPD\" OR \"dead peer\"\n| stats count by tunnel_id, peer_ip, message_signature\n| sort 40 -count",
              "m": "Normalize `message_signature` with `rex` or `cluster` on raw. Correlate with UC-17.2.5 stability metrics.",
              "z": "Bar chart (failures by peer), Table (top messages), Line chart (failure rate).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: VPN/IKE syslog (`cisco:asa`, `pan:system`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize `message_signature` with `rex` or `cluster` on raw. Correlate with UC-17.2.5 stability metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn (sourcetype=\"cisco:asa\" OR sourcetype=\"pan:system\") earliest=-24h\n| search \"IKE\" OR \"keepalive\" OR \"DPD\" OR \"dead peer\"\n| stats count by tunnel_id, peer_ip, message_signature\n| sort 40 -count\n```\n\nUnderstanding this SPL\n\n**VPN Tunnel Keepalive Failure Analysis** — Tracks DPD/keepalive failures and tunnel teardown reasons for site-to-site and remote-access — isolates path MTU, NAT, and idle timeout issues.\n\nDocumented **Data sources**: VPN/IKE syslog (`cisco:asa`, `pan:system`). **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa, pan:system. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by tunnel_id, peer_ip, message_signature** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failures by peer), Table (top messages), Line chart (failure rate).",
              "script": "",
              "premium": "",
              "hw": "ASA, Palo Alto IPsec",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.16",
              "n": "Remote Desktop Gateway Health",
              "c": "high",
              "f": "beginner",
              "v": "Monitors RD Gateway (HTTP/UDP) auth success, connection failures, and capacity for hybrid workers.",
              "t": "Windows TA, IIS TA",
              "d": "`sourcetype=ms:iis`, `WinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational` (if inputs use **XML** rendering, use `XmlWinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational` instead)",
              "q": "index=windows sourcetype=\"WinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational\" earliest=-24h\n| where EventCode IN (200,201,300,302)\n| eval outcome=if(EventCode IN (200,201),\"success\",\"failure\")\n| timechart span=15m count by outcome",
              "m": "Map Event IDs per OS version. Alert when failure ratio >10% over 1h. Add IIS logs for HTTP 503/502.",
              "z": "Line chart (success vs failure), Single value (failure %), Table (recent errors).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1021.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows TA, IIS TA.\n• Ensure the following data sources are available: `sourcetype=ms:iis`, `WinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational` (if inputs use **XML** rendering, use `XmlWinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational` instead).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap Event IDs per OS version. Alert when failure ratio >10% over 1h. Add IIS logs for HTTP 503/502.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational\" earliest=-24h\n| where EventCode IN (200,201,300,302)\n| eval outcome=if(EventCode IN (200,201),\"success\",\"failure\")\n| timechart span=15m count by outcome\n```\n\nUnderstanding this SPL\n\n**Remote Desktop Gateway Health** — Monitors RD Gateway (HTTP/UDP) auth success, connection failures, and capacity for hybrid workers.\n\nDocumented **Data sources**: `sourcetype=ms:iis`, `WinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational` (if inputs use **XML** rendering, use `XmlWinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational` instead). **App/TA** (typical add-on context): Windows TA, IIS TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where EventCode IN (200,201,300,302)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **outcome** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by outcome** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Remote Desktop Gateway Health** — Monitors RD Gateway (HTTP/UDP) auth success, connection failures, and capacity for hybrid workers.\n\nDocumented **Data sources**: `sourcetype=ms:iis`, `WinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational` (if inputs use **XML** rendering, use `XmlWinEventLog:Microsoft-Windows-TerminalServices-Gateway/Operational` instead). **App/TA** (typical add-on context): Windows TA, IIS TA. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (success vs failure), Single value (failure %), Table (recent errors).",
              "script": "",
              "premium": "",
              "hw": "Windows Server RD Gateway",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.dest span=1h\n| where count > 5",
              "e": [
                "iis"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.17",
              "n": "VPN Client Version Compliance",
              "c": "high",
              "f": "beginner",
              "v": "Reports AnyConnect/GlobalProtect client versions against minimum supported builds.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect)",
              "d": "VPN session logs with `client_version`",
              "q": "index=vpn (sourcetype=\"cisco:asa\" OR sourcetype=\"pan:globalprotect\") action=\"session_connect\" earliest=-24h\n| eval major_minor=replace(client_version,\"^(\\d+\\.\\d+).*\",\"\\1\")\n| lookup vpn_min_client.csv platform OUTPUT min_version\n| eval compliant=if(major_minor>=min_version,1,0)\n| where compliant=0\n| stats count by user, client_version, platform\n| sort -count",
              "m": "Use `ver` normalisation or `version` field if numeric. Block or warn via posture integration.",
              "z": "Pie chart (compliant vs not), Table (outdated clients), Bar chart (by version).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1133",
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect).\n• Ensure the following data sources are available: VPN session logs with `client_version`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nUse `ver` normalisation or `version` field if numeric. Block or warn via posture integration.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn (sourcetype=\"cisco:asa\" OR sourcetype=\"pan:globalprotect\") action=\"session_connect\" earliest=-24h\n| eval major_minor=replace(client_version,\"^(\\d+\\.\\d+).*\",\"\\1\")\n| lookup vpn_min_client.csv platform OUTPUT min_version\n| eval compliant=if(major_minor>=min_version,1,0)\n| where compliant=0\n| stats count by user, client_version, platform\n| sort -count\n```\n\nUnderstanding this SPL\n\n**VPN Client Version Compliance** — Reports AnyConnect/GlobalProtect client versions against minimum supported builds.\n\nDocumented **Data sources**: VPN session logs with `client_version`. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa, pan:globalprotect. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **major_minor** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where compliant=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, client_version, platform** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.user span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VPN Client Version Compliance** — Reports AnyConnect/GlobalProtect client versions against minimum supported builds.\n\nDocumented **Data sources**: VPN session logs with `client_version`. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto` (GlobalProtect). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (compliant vs not), Table (outdated clients), Bar chart (by version).",
              "script": "",
              "premium": "",
              "hw": "ASA, GP portal",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.user span=1d",
              "e": [
                "cisco",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "paloalto_globalprotect",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.18",
              "n": "Site-to-Site Tunnel Flapping",
              "c": "high",
              "f": "intermediate",
              "v": "Counts IKE/IPsec up/down events per peer for unstable WAN or crypto issues.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, `Splunk_TA_juniper`",
              "d": "VPN syslog tunnel events",
              "q": "index=vpn sourcetype=\"cisco:asa\" earliest=-24h\n| search \"Tunnel is UP\" OR \"Tunnel is DOWN\" OR \"IPSEC.*DOWN\"\n| eval peer=coalesce(peer_ip, tunnel_group)\n| stats count by peer\n| where count>10\n| sort -count",
              "m": "Vendor message strings vary — maintain `rex` extractions in props. Alert when transitions >N per hour per peer.",
              "z": "Line chart (transitions over time), Table (worst peers), Single value (flapping peers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, `Splunk_TA_juniper`.\n• Ensure the following data sources are available: VPN syslog tunnel events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nVendor message strings vary — maintain `rex` extractions in props. Alert when transitions >N per hour per peer.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cisco:asa\" earliest=-24h\n| search \"Tunnel is UP\" OR \"Tunnel is DOWN\" OR \"IPSEC.*DOWN\"\n| eval peer=coalesce(peer_ip, tunnel_group)\n| stats count by peer\n| where count>10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Site-to-Site Tunnel Flapping** — Counts IKE/IPsec up/down events per peer for unstable WAN or crypto issues.\n\nDocumented **Data sources**: VPN syslog tunnel events. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, `Splunk_TA_juniper`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **peer** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by peer** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (transitions over time), Table (worst peers), Single value (flapping peers).",
              "script": "",
              "premium": "",
              "hw": "Firewalls, routers",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.19",
              "n": "Always-On VPN Enforcement",
              "c": "high",
              "f": "intermediate",
              "v": "Identifies corporate assets connecting without Always-On (pre-login) VPN when policy requires it.",
              "t": "Splunk_TA_cisco-asa, endpoint inventory",
              "d": "VPN logs with `always_on` flag, MDM compliance",
              "q": "index=vpn sourcetype=\"cisco:asa\" action=\"session_connect\" earliest=-24h\n| eval aov=if(match(lower(_raw),\"(?i)always.?on|pre.?login\"),1,0)\n| lookup corp_laptops.csv hostname OUTPUT requires_aov\n| where requires_aov=1 AND aov=0\n| stats count by user, hostname, src",
              "m": "Prefer explicit field from ASA if available. Join MDM “managed device” list for `requires_aov`.",
              "z": "Table (non-compliant hosts), Bar chart (violations by OU), Single value (violation count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1572",
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_cisco-asa, endpoint inventory.\n• Ensure the following data sources are available: VPN logs with `always_on` flag, MDM compliance.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPrefer explicit field from ASA if available. Join MDM “managed device” list for `requires_aov`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cisco:asa\" action=\"session_connect\" earliest=-24h\n| eval aov=if(match(lower(_raw),\"(?i)always.?on|pre.?login\"),1,0)\n| lookup corp_laptops.csv hostname OUTPUT requires_aov\n| where requires_aov=1 AND aov=0\n| stats count by user, hostname, src\n```\n\nUnderstanding this SPL\n\n**Always-On VPN Enforcement** — Identifies corporate assets connecting without Always-On (pre-login) VPN when policy requires it.\n\nDocumented **Data sources**: VPN logs with `always_on` flag, MDM compliance. **App/TA** (typical add-on context): Splunk_TA_cisco-asa, endpoint inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aov** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where requires_aov=1 AND aov=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, hostname, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant hosts), Bar chart (violations by OU), Single value (violation count).",
              "script": "",
              "premium": "",
              "hw": "AnyConnect with Always-On",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_asa"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ASA",
                "id": 1621,
                "url": "https://splunkbase.splunk.com/app/1621"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.20",
              "n": "VPN Bandwidth Utilization Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Time-series bandwidth per headend and user cohort — complements UC-17.2.7 top talkers with **trend** and **gateway** dimension.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, NetFlow",
              "d": "VPN accounting, NetFlow from VPN interface",
              "q": "index=vpn sourcetype=\"cisco:asa\" earliest=-7d\n| bin _time span=1h\n| stats sum(bytes_in) as in_b, sum(bytes_out) as out_b by _time, host\n| eval gbps=round((in_b+out_b)*8/3600/1000000000,3)\n| timechart span=1h avg(gbps) by host",
              "m": "If bytes not in syslog, use SNMP interface counters or NetFlow `exporter=VPN`. Alert on sustained >80% of circuit.",
              "z": "Line chart (Gbps per headend), Area chart (total VPN throughput), Table (peak hour).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, NetFlow.\n• Ensure the following data sources are available: VPN accounting, NetFlow from VPN interface.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIf bytes not in syslog, use SNMP interface counters or NetFlow `exporter=VPN`. Alert on sustained >80% of circuit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"cisco:asa\" earliest=-7d\n| bin _time span=1h\n| stats sum(bytes_in) as in_b, sum(bytes_out) as out_b by _time, host\n| eval gbps=round((in_b+out_b)*8/3600/1000000000,3)\n| timechart span=1h avg(gbps) by host\n```\n\nUnderstanding this SPL\n\n**VPN Bandwidth Utilization Trending** — Time-series bandwidth per headend and user cohort — complements UC-17.2.7 top talkers with **trend** and **gateway** dimension.\n\nDocumented **Data sources**: VPN accounting, NetFlow from VPN interface. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gbps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by _time span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VPN Bandwidth Utilization Trending** — Time-series bandwidth per headend and user cohort — complements UC-17.2.7 top talkers with **trend** and **gateway** dimension.\n\nDocumented **Data sources**: VPN accounting, NetFlow from VPN interface. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto`, `TA-fortinet_fortigate`, NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (Gbps per headend), Area chart (total VPN throughput), Table (peak hour).",
              "script": "",
              "premium": "",
              "hw": "ASA, routers",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Performance",
                "Capacity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` sum(All_Traffic.bytes_in) as bytes_in sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  by _time span=1h",
              "e": [
                "cisco",
                "fortinet",
                "netflow",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "fortinet_fortigate",
                "netflow_netflow",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.21",
              "n": "SSL VPN Certificate Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks server certificate expiry and chain errors on SSL VPN / GlobalProtect portals from TLS handshake logs.",
              "t": "`Splunk_TA_cisco-asa`, `Splunk_TA_paloalto`",
              "d": "SSL/TLS syslog, management logs",
              "q": "index=vpn (sourcetype=\"cisco:asa\" OR sourcetype=\"pan:system\") earliest=-30d\n| search \"certificate\" AND (\"expired\" OR \"not trusted\" OR \"invalid\")\n| stats count by host, cert_cn, message\n| sort -count",
              "m": "Prefer proactive cert inventory from PKI; this search catches client-reported errors. Alert on any `expired` match on production gateways.",
              "z": "Table (cert errors), Single value (error count), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757)",
              "mitre": [
                "T1573"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto`.\n• Ensure the following data sources are available: SSL/TLS syslog, management logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPrefer proactive cert inventory from PKI; this search catches client-reported errors. Alert on any `expired` match on production gateways.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn (sourcetype=\"cisco:asa\" OR sourcetype=\"pan:system\") earliest=-30d\n| search \"certificate\" AND (\"expired\" OR \"not trusted\" OR \"invalid\")\n| stats count by host, cert_cn, message\n| sort -count\n```\n\nUnderstanding this SPL\n\n**SSL VPN Certificate Compliance** — Tracks server certificate expiry and chain errors on SSL VPN / GlobalProtect portals from TLS handshake logs.\n\nDocumented **Data sources**: SSL/TLS syslog, management logs. **App/TA** (typical add-on context): `Splunk_TA_cisco-asa`, `Splunk_TA_paloalto`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: cisco:asa, pan:system. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"cisco:asa\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, cert_cn, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cert errors), Single value (error count), Timeline.",
              "script": "",
              "premium": "",
              "hw": "ASA, Palo Alto",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "paloalto_pan_firewall"
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.22",
              "n": "Remote Session Duration Anomalies",
              "c": "medium",
              "f": "intermediate",
              "v": "Statistical outliers in VPN session length — unusually short (brute probe) or long (unattended tunnel) vs UC-17.3.8 fixed thresholds.",
              "t": "VPN TA",
              "d": "`vpn:session` or ASA with start/end",
              "q": "index=vpn sourcetype=\"vpn:session\" earliest=-7d\n| eval dur_hrs=(end_time-start_time)/3600\n| eventstats median(dur_hrs) as med, stdev(dur_hrs) as sd by user\n| eval z=if(sd>0, (dur_hrs-med)/sd, 0)\n| where abs(z)>3 OR dur_hrs>48 OR dur_hrs<0.01\n| table user, dur_hrs, med, z, src",
              "m": "Requires reliable `start_time`/`end_time`. Tune z-score or use `anomalydetection`.",
              "z": "Scatter (duration vs time), Table (outliers), Histogram (duration).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1021",
                "T1219"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: VPN TA.\n• Ensure the following data sources are available: `vpn:session` or ASA with start/end.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequires reliable `start_time`/`end_time`. Tune z-score or use `anomalydetection`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"vpn:session\" earliest=-7d\n| eval dur_hrs=(end_time-start_time)/3600\n| eventstats median(dur_hrs) as med, stdev(dur_hrs) as sd by user\n| eval z=if(sd>0, (dur_hrs-med)/sd, 0)\n| where abs(z)>3 OR dur_hrs>48 OR dur_hrs<0.01\n| table user, dur_hrs, med, z, src\n```\n\nUnderstanding this SPL\n\n**Remote Session Duration Anomalies** — Statistical outliers in VPN session length — unusually short (brute probe) or long (unattended tunnel) vs UC-17.3.8 fixed thresholds.\n\nDocumented **Data sources**: `vpn:session` or ASA with start/end. **App/TA** (typical add-on context): VPN TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: vpn:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"vpn:session\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **dur_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where abs(z)>3 OR dur_hrs>48 OR dur_hrs<0.01` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Remote Session Duration Anomalies**): table user, dur_hrs, med, z, src\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter (duration vs time), Table (outliers), Histogram (duration).",
              "script": "",
              "premium": "",
              "hw": "Cisco ASA",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.2.23",
              "n": "VPN Session Duration and Idle Timeout",
              "c": "medium",
              "f": "beginner",
              "v": "Anomalously long or short VPN sessions may indicate abuse or connectivity issues. Monitoring supports policy tuning and security review.",
              "t": "VPN gateway logs, RADIUS accounting",
              "d": "Session start/end, duration, idle time",
              "q": "index=vpn sourcetype=\"vpn:session\"\n| eval duration_hrs=(end_time - start_time)/3600\n| stats avg(duration_hrs) as avg_hrs, max(duration_hrs) as max_hrs, count by user\n| where max_hrs > 24 OR avg_hrs > 12\n| table user, count, avg_hrs, max_hrs",
              "m": "Ingest VPN session and accounting data. Compute session duration and idle time. Alert on sessions exceeding policy (e.g., >24h) or user with unusually long average. Report on session distribution.",
              "z": "Table (long sessions), Bar chart (avg duration by user), Line chart (session count trend).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [Cato Networks Events App](https://splunkbase.splunk.com/app/8037), [Check Point App for Splunk](https://splunkbase.splunk.com/app/4293)",
              "mitre": [
                "T1021"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: VPN gateway logs, RADIUS accounting.\n• Ensure the following data sources are available: Session start/end, duration, idle time.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest VPN session and accounting data. Compute session duration and idle time. Alert on sessions exceeding policy (e.g., >24h) or user with unusually long average. Report on session distribution.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"vpn:session\"\n| eval duration_hrs=(end_time - start_time)/3600\n| stats avg(duration_hrs) as avg_hrs, max(duration_hrs) as max_hrs, count by user\n| where max_hrs > 24 OR avg_hrs > 12\n| table user, count, avg_hrs, max_hrs\n```\n\nUnderstanding this SPL\n\n**VPN Session Duration and Idle Timeout** — Anomalously long or short VPN sessions may indicate abuse or connectivity issues. Monitoring supports policy tuning and security review.\n\nDocumented **Data sources**: Session start/end, duration, idle time. **App/TA** (typical add-on context): VPN gateway logs, RADIUS accounting. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: vpn:session. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"vpn:session\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **duration_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where max_hrs > 24 OR avg_hrs > 12` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VPN Session Duration and Idle Timeout**): table user, count, avg_hrs, max_hrs\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (long sessions), Bar chart (avg duration by user), Line chart (session count trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 42.2,
          "qd": {
            "gold": 3,
            "silver": 0,
            "bronze": 20,
            "none": 0
          }
        },
        {
          "i": "17.3",
          "n": "Zero Trust / SASE",
          "u": [
            {
              "i": "17.3.1",
              "n": "Conditional Access Enforcement",
              "c": "high",
              "f": "beginner",
              "v": "A spike in policy blocks for a single application after a policy publish suggests a rule-order or identity claim error. Gradual deny-rate growth across multiple apps indicates posture drift or certificate expiry across a device cohort. Both patterns require different response workflows.",
              "t": "`Splunk_TA_zscaler` (ZPA), Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293), `Splunk_TA_microsoft-cloudservices` (Entra ID)",
              "d": "SASE/ZT policy decision logs",
              "q": "index=zt sourcetype=\"zscaler:zpa\"\n| stats count by policy_action, application, user\n| eval pct=round(count/sum(count)*100,1)",
              "m": "Ingest SASE/ZTNA policy decision logs. Track allow/block/step-up-auth decisions per application and user. Alert on policy blocks for critical applications. Report on policy effectiveness and user experience impact.",
              "z": "Pie chart (policy decisions), Bar chart (blocks by application), Line chart (enforcement trend), Table (blocked users).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler` (ZPA), Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293), `Splunk_TA_microsoft-cloudservices` (Entra ID).\n• Ensure the following data sources are available: SASE/ZT policy decision logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest SASE/ZTNA policy decision logs. Track allow/block/step-up-auth decisions per application and user. Alert on policy blocks for critical applications. Report on policy effectiveness and user experience impact.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"zscaler:zpa\"\n| stats count by policy_action, application, user\n| eval pct=round(count/sum(count)*100,1)\n```\n\nUnderstanding this SPL\n\n**Conditional Access Enforcement** — A spike in policy blocks for a single application after a policy publish suggests a rule-order or identity claim error. Gradual deny-rate growth across multiple apps indicates posture drift or certificate expiry across a device cohort. Both patterns require different response workflows.\n\nDocumented **Data sources**: SASE/ZT policy decision logs. **App/TA** (typical add-on context): `Splunk_TA_zscaler` (ZPA), Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293), `Splunk_TA_microsoft-cloudservices` (Entra ID). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: zscaler:zpa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"zscaler:zpa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by policy_action, application, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Conditional Access Enforcement** — A spike in policy blocks for a single application after a policy publish suggests a rule-order or identity claim error. Gradual deny-rate growth across multiple apps indicates posture drift or certificate expiry across a device cohort. Both patterns require different response workflows.\n\nDocumented **Data sources**: SASE/ZT policy decision logs. 
**App/TA** (typical add-on context): `Splunk_TA_zscaler` (ZPA), Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293), `Splunk_TA_microsoft-cloudservices` (Entra ID). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (policy decisions), Bar chart (blocks by application), Line chart (enforcement trend), Table (blocked users).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "azure",
                "checkpoint",
                "fortinet",
                "m365",
                "netskope",
                "nutanix",
                "paloalto",
                "zscaler"
              ],
              "em": [
                "fortinet_fortigate",
                "paloalto_pan_firewall",
                "paloalto_prisma"
              ],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                },
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                },
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                },
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.2",
              "n": "Device Trust Scoring",
              "c": "high",
              "f": "beginner",
              "v": "Device trust scores drive access decisions in zero-trust architecture. Monitoring ensures devices maintain compliance.",
              "t": "`Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `Splunk_TA_microsoft-cloudservices` (Entra ID / Intune), `TA-crowdstrike-falcon`",
              "d": "ZT device compliance/trust data",
              "q": "index=zt sourcetype=\"zscaler:device_posture\"\n| where trust_score < 50 OR compliance_status!=\"compliant\"\n| table user, device_id, os, trust_score, compliance_status, non_compliant_checks",
              "m": "Ingest device trust score data from ZT platform. Track compliance rates per OS and department. Alert when critical devices become non-compliant. Report on fleet trust posture for security leadership.",
              "z": "Gauge (fleet compliance %), Table (non-compliant devices), Pie chart (compliance distribution), Line chart (trust score trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1200"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `Splunk_TA_microsoft-cloudservices` (Entra ID / Intune), `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: ZT device compliance/trust data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest device trust score data from ZT platform. Track compliance rates per OS and department. Alert when critical devices become non-compliant. Report on fleet trust posture for security leadership.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"zscaler:device_posture\"\n| where trust_score < 50 OR compliance_status!=\"compliant\"\n| table user, device_id, os, trust_score, compliance_status, non_compliant_checks\n```\n\nUnderstanding this SPL\n\n**Device Trust Scoring** — Device trust scores drive access decisions in zero-trust architecture. Monitoring ensures devices maintain compliance.\n\nDocumented **Data sources**: ZT device compliance/trust data. **App/TA** (typical add-on context): `Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `Splunk_TA_microsoft-cloudservices` (Entra ID / Intune), `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: zscaler:device_posture. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"zscaler:device_posture\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where trust_score < 50 OR compliance_status!=\"compliant\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Device Trust Scoring**): table user, device_id, os, trust_score, compliance_status, non_compliant_checks\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Device Trust Scoring** — Device trust scores drive access decisions in zero-trust architecture. Monitoring ensures devices maintain compliance.\n\nDocumented **Data sources**: ZT device compliance/trust data. **App/TA** (typical add-on context): `Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `Splunk_TA_microsoft-cloudservices` (Entra ID / Intune), `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (fleet compliance %), Table (non-compliant devices), Pie chart (compliance distribution), Line chart (trust score trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "azure",
                "crowdstrike",
                "m365",
                "netskope",
                "nutanix",
                "paloalto",
                "zscaler"
              ],
              "em": [
                "paloalto_pan_firewall",
                "paloalto_prisma"
              ],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                },
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                },
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.3",
              "n": "Micro-Segmentation Audit",
              "c": "high",
              "f": "beginner",
              "v": "Micro-segmentation limits lateral movement. Audit logs validate policy enforcement and detect bypasses.",
              "t": "VMware NSX Add-on, Illumio syslog/HEC, Cisco Secure Workload TA, Akamai Guardicore Add-on for Splunk (Splunkbase 7426)",
              "d": "Micro-segmentation policy logs (allow/deny events)",
              "q": "index=zt sourcetype=\"microseg:policy\"\n| where action=\"deny\"\n| stats count by src_workload, dest_workload, dest_port, policy_name\n| sort -count",
              "m": "Ingest micro-segmentation policy enforcement logs. Track allowed and denied traffic between workloads. Alert on unexpected denials (may indicate misconfiguration) and unexpected allows (policy gaps). Report on segmentation coverage.",
              "z": "Heatmap (workload × workload traffic), Table (policy violations), Sankey diagram (traffic flows).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Akamai Guardicore Add-on for Splunk](https://splunkbase.splunk.com/app/7426), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1021",
                "T1570"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: VMware NSX Add-on, Illumio syslog/HEC, Cisco Secure Workload TA, Akamai Guardicore Add-on for Splunk (Splunkbase 7426).\n• Ensure the following data sources are available: Micro-segmentation policy logs (allow/deny events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest micro-segmentation policy enforcement logs. Track allowed and denied traffic between workloads. Alert on unexpected denials (may indicate misconfiguration) and unexpected allows (policy gaps). Report on segmentation coverage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"microseg:policy\"\n| where action=\"deny\"\n| stats count by src_workload, dest_workload, dest_port, policy_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Micro-Segmentation Audit** — Micro-segmentation limits lateral movement. Audit logs validate policy enforcement and detect bypasses.\n\nDocumented **Data sources**: Micro-segmentation policy logs (allow/deny events). **App/TA** (typical add-on context): VMware NSX Add-on, Illumio syslog/HEC, Cisco Secure Workload TA, Akamai Guardicore Add-on for Splunk (Splunkbase 7426). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: microseg:policy. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"microseg:policy\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"deny\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_workload, dest_workload, dest_port, policy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Micro-Segmentation Audit** — Micro-segmentation limits lateral movement. Audit logs validate policy enforcement and detect bypasses.\n\nDocumented **Data sources**: Micro-segmentation policy logs (allow/deny events). **App/TA** (typical add-on context): VMware NSX Add-on, Illumio syslog/HEC, Cisco Secure Workload TA, Akamai Guardicore Add-on for Splunk (Splunkbase 7426). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (workload × workload traffic), Table (policy violations), Sankey diagram (traffic flows).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Configuration"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "Cyber Essentials",
                  "v": "Montpellier (2025)",
                  "cl": "CE.BF.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that Cyber Essentials CE.BF.1 (Boundary firewalls) is enforced — Splunk UC-17.3.3: Micro-Segmentation Audit.",
                  "ea": "Saved search 'UC-17.3.3' running on Micro-segmentation policy logs (allow/deny events), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ncsc.gov.uk/cyberessentials/overview"
                },
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "FR 6.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 FR 6.2 (Continuous monitoring) is enforced — Splunk UC-17.3.3: Micro-Segmentation Audit.",
                  "ea": "Saved search 'UC-17.3.3' running on Micro-segmentation policy logs (allow/deny events), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.isa.org/standards-and-publications/isa-standards/isa-iec-62443-series-of-standards"
                },
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-7 R1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-005-7 R1 (Electronic security perimeter) is enforced — Splunk UC-17.3.3: Micro-Segmentation Audit.",
                  "ea": "Saved search 'UC-17.3.3' running on Micro-segmentation policy logs (allow/deny events), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nerc.com/pa/Stand/Pages/CIPStandards.aspx"
                }
              ],
              "sapp": [
                {
                  "name": "Akamai Guardicore Add-on for Splunk",
                  "id": 7426,
                  "url": "https://splunkbase.splunk.com/app/7426",
                  "desc": "Microsegmentation data from Akamai Guardicore Centra for asset protection and compliance",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.4",
              "n": "ZTNA Application Access",
              "c": "medium",
              "f": "beginner",
              "v": "Per-application access patterns in ZTNA reveal usage trends, security risks, and application performance issues.",
              "t": "`Splunk_TA_zscaler` (ZPA), Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293)",
              "d": "ZTNA access logs (application, user, device, action)",
              "q": "index=zt sourcetype=\"zscaler:zpa\"\n| stats dc(user) as unique_users, count as total_access by application\n| sort -unique_users",
              "m": "Track application access through ZTNA per user and device. Identify unused applications for decommissioning. Monitor access patterns for anomalies. Report on application adoption and usage.",
              "z": "Bar chart (top applications by users), Table (application access summary), Line chart (access trends per app).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1021",
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler` (ZPA), Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293).\n• Ensure the following data sources are available: ZTNA access logs (application, user, device, action).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack application access through ZTNA per user and device. Identify unused applications for decommissioning. Monitor access patterns for anomalies. Report on application adoption and usage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"zscaler:zpa\"\n| stats dc(user) as unique_users, count as total_access by application\n| sort -unique_users\n```\n\nUnderstanding this SPL\n\n**ZTNA Application Access** — Per-application access patterns in ZTNA reveal usage trends, security risks, and application performance issues.\n\nDocumented **Data sources**: ZTNA access logs (application, user, device, action). **App/TA** (typical add-on context): `Splunk_TA_zscaler` (ZPA), Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: zscaler:zpa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"zscaler:zpa\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by application** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ZTNA Application Access** — Per-application access patterns in ZTNA reveal usage trends, security risks, and application performance issues.\n\nDocumented **Data sources**: ZTNA access logs (application, user, device, action). **App/TA** (typical add-on context): `Splunk_TA_zscaler` (ZPA), Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top applications by users), Table (application access summary), Line chart (access trends per app).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "checkpoint",
                "fortinet",
                "netskope",
                "nutanix",
                "paloalto",
                "zscaler"
              ],
              "em": [
                "fortinet_fortigate",
                "paloalto_pan_firewall",
                "paloalto_prisma"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                },
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                },
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                },
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.5",
              "n": "Posture Assessment Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Endpoint posture compliance rates over time measure security improvement and identify persistent non-compliance areas.",
              "t": "`Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `Splunk_TA_microsoft-cloudservices` (Entra ID / Intune)",
              "d": "ZT posture assessment data",
              "q": "index=zt sourcetype=\"zt:posture\"\n| timechart span=1d avg(compliance_pct) as compliance by check_type",
              "m": "Track posture assessment results over time by check type (AV, encryption, OS patch, firewall). Report on compliance improvement trends. Alert when compliance drops below target. Identify persistent non-compliance patterns.",
              "z": "Line chart (compliance trend by check), Bar chart (compliance by OS), Single value (overall compliance %), Table (non-compliant checks).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `Splunk_TA_microsoft-cloudservices` (Entra ID / Intune).\n• Ensure the following data sources are available: ZT posture assessment data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack posture assessment results over time by check type (AV, encryption, OS patch, firewall). Report on compliance improvement trends. Alert when compliance drops below target. Identify persistent non-compliance patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"zt:posture\"\n| timechart span=1d avg(compliance_pct) as compliance by check_type\n```\n\nUnderstanding this SPL\n\n**Posture Assessment Trending** — Endpoint posture compliance rates over time measure security improvement and identify persistent non-compliance areas.\n\nDocumented **Data sources**: ZT posture assessment data. **App/TA** (typical add-on context): `Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `Splunk_TA_microsoft-cloudservices` (Entra ID / Intune). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: zt:posture. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"zt:posture\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by check_type** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Posture Assessment Trending** — Endpoint posture compliance rates over time measure security improvement and identify persistent non-compliance areas.\n\nDocumented **Data sources**: ZT posture assessment data. **App/TA** (typical add-on context): `Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `Splunk_TA_microsoft-cloudservices` (Entra ID / Intune). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (compliance trend by check), Bar chart (compliance by OS), Single value (overall compliance %), Table (non-compliant checks).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "azure",
                "m365",
                "netskope",
                "nutanix",
                "paloalto",
                "zscaler"
              ],
              "em": [
                "paloalto_pan_firewall",
                "paloalto_prisma"
              ],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                },
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                },
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.6",
              "n": "Policy Drift Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Zero-trust policies require continuous validation. Drift from baseline configuration introduces security gaps.",
              "t": "`Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293)",
              "d": "ZT policy audit logs, configuration snapshots",
              "q": "index=zt sourcetype=\"zt:admin_audit\"\n| search action IN (\"policy_modified\",\"rule_added\",\"rule_deleted\",\"rule_disabled\")\n| table _time, admin, action, policy_name, details\n| sort -_time",
              "m": "Track all ZT policy changes via audit logs. Compare current configuration against approved baseline. Alert on unauthorized modifications. Require change management approval for policy changes. Report on policy change frequency.",
              "z": "Table (policy changes), Timeline (modification events), Bar chart (changes by admin), Single value (changes this week).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1562.004",
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293).\n• Ensure the following data sources are available: ZT policy audit logs, configuration snapshots.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack all ZT policy changes via audit logs. Compare current configuration against approved baseline. Alert on unauthorized modifications. Require change management approval for policy changes. Report on policy change frequency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"zt:admin_audit\"\n| search action IN (\"policy_modified\",\"rule_added\",\"rule_deleted\",\"rule_disabled\")\n| table _time, admin, action, policy_name, details\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Policy Drift Detection** — Zero-trust policies require continuous validation. Drift from baseline configuration introduces security gaps.\n\nDocumented **Data sources**: ZT policy audit logs, configuration snapshots. **App/TA** (typical add-on context): `Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: zt:admin_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"zt:admin_audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Policy Drift Detection**): table _time, admin, action, policy_name, details\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Policy Drift Detection** — Zero-trust policies require continuous validation. Drift from baseline configuration introduces security gaps.\n\nDocumented **Data sources**: ZT policy audit logs, configuration snapshots. **App/TA** (typical add-on context): `Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (policy changes), Timeline (modification events), Bar chart (changes by admin), Single value (changes this week).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Configuration"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user Authentication.src Authentication.dest span=1h\n| where count > 5",
              "e": [
                "checkpoint",
                "fortinet",
                "netskope",
                "nutanix",
                "paloalto",
                "zscaler"
              ],
              "em": [
                "fortinet_fortigate",
                "paloalto_pan_firewall",
                "paloalto_prisma"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                },
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                },
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                },
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.7",
              "n": "Device Certificate Expiration and Renewal",
              "c": "critical",
              "f": "intermediate",
              "v": "Expired device certificates break ZTNA and VPN access. Monitoring expiration and renewal success ensures continuous access and avoids outages.",
              "t": "PKI/certificate inventory, `Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access)",
              "d": "Certificate validity, renewal requests, enrollment events",
              "q": "index=zt sourcetype=\"device:cert\"\n| eval days_left=floor((expiry_time-now())/86400)\n| where days_left < 30 OR renewal_status=\"failed\"\n| table device_id, cn, expiry_time, days_left, renewal_status\n| sort days_left",
              "m": "Ingest device certificate inventory and renewal events. Alert when cert expires in <30 days or renewal fails. Report on cert distribution and renewal success rate. Automate renewal where possible.",
              "z": "Table (certs expiring soon), Single value (failed renewals), Bar chart (expiry by month).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757)",
              "mitre": [
                "T1573.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PKI/certificate inventory, `Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access).\n• Ensure the following data sources are available: Certificate validity, renewal requests, enrollment events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest device certificate inventory and renewal events. Alert when cert expires in <30 days or renewal fails. Report on cert distribution and renewal success rate. Automate renewal where possible.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"device:cert\"\n| eval days_left=floor((expiry_time-now())/86400)\n| where days_left < 30 OR renewal_status=\"failed\"\n| table device_id, cn, expiry_time, days_left, renewal_status\n| sort days_left\n```\n\nUnderstanding this SPL\n\n**Device Certificate Expiration and Renewal** — Expired device certificates break ZTNA and VPN access. Monitoring expiration and renewal success ensures continuous access and avoids outages.\n\nDocumented **Data sources**: Certificate validity, renewal requests, enrollment events. **App/TA** (typical add-on context): PKI/certificate inventory, `Splunk_TA_zscaler`, Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: device:cert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"device:cert\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left < 30 OR renewal_status=\"failed\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Device Certificate Expiration and Renewal**): table device_id, cn, expiry_time, days_left, renewal_status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (certs expiring soon), Single value (failed renewals), Bar chart (expiry by month).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "netskope",
                "nutanix",
                "paloalto",
                "zscaler"
              ],
              "em": [
                "paloalto_pan_firewall",
                "paloalto_prisma"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                },
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                },
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.8",
              "n": "Zero Trust Access Denial Trending",
              "c": "high",
              "f": "intermediate",
              "v": "High denial rates may indicate policy misconfiguration or attacker probing. Trending supports tuning and security analysis.",
              "t": "`Splunk_TA_zscaler` (ZPA), Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293)",
              "d": "Access decision logs (allow/deny), user, app, reason",
              "q": "index=zt sourcetype=\"zt:access\"\n| where decision=\"deny\"\n| bin _time span=1h\n| stats count by user, application, deny_reason, _time\n| where count > 20\n| sort -count",
              "m": "Ingest access decision logs. Baseline denial rate by user and app. Alert on spike in denials or new deny reason. Report on top denied users and apps for policy review.",
              "z": "Line chart (denials over time), Table (denials by user/app), Bar chart (deny reasons).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757)",
              "mitre": [
                "T1071.001",
                "T1046"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler` (ZPA), Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293).\n• Ensure the following data sources are available: Access decision logs (allow/deny), user, app, reason.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest access decision logs. Baseline denial rate by user and app. Alert on spike in denials or new deny reason. Report on top denied users and apps for policy review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"zt:access\"\n| where decision=\"deny\"\n| bin _time span=1h\n| stats count by user, application, deny_reason, _time\n| where count > 20\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Zero Trust Access Denial Trending** — High denial rates may indicate policy misconfiguration or attacker probing. Trending supports tuning and security analysis.\n\nDocumented **Data sources**: Access decision logs (allow/deny), user, app, reason. **App/TA** (typical add-on context): `Splunk_TA_zscaler` (ZPA), Netskope Add-on for Splunk (Splunkbase 3808), `Splunk_TA_paloalto` (Prisma Access), `TA-fortinet_fortigate` (FortiSASE), Check Point App for Splunk (Splunkbase 4293). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: zt:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"zt:access\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where decision=\"deny\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by user, application, deny_reason, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (denials over time), Table (denials by user/app), Bar chart (deny reasons).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "checkpoint",
                "fortinet",
                "netskope",
                "nutanix",
                "paloalto",
                "zscaler"
              ],
              "em": [
                "fortinet_fortigate",
                "paloalto_pan_firewall",
                "paloalto_prisma"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                },
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                },
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                },
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.11",
              "n": "Micro-Segment Traffic Baseline Anomaly",
              "c": "high",
              "f": "intermediate",
              "v": "New or unexpected traffic between segments may indicate lateral movement or misconfiguration. Anomaly detection supports Zero Trust enforcement.",
              "t": "Network flow logs, VMware NSX Add-on, Illumio syslog/HEC, Cisco Secure Workload TA, Akamai Guardicore Add-on for Splunk (Splunkbase 7426)",
              "d": "East-west traffic, segment IDs, flow counts",
              "q": "index=flows sourcetype=\"netflow\"\n| bin _time span=1h\n| stats sum(bytes) as bytes, count by src_segment, dest_segment, _time\n| eventstats avg(bytes) as avg_bytes by src_segment, dest_segment\n| where bytes > (avg_bytes * 5)\n| table src_segment, dest_segment, bytes, avg_bytes",
              "m": "Ingest segment-level or flow data. Baseline traffic between segment pairs. Alert when traffic exceeds baseline by threshold. Correlate with new connections and ZT policy. Report on segment traffic matrix.",
              "z": "Table (anomalous segment pairs), Heatmap (segment × segment traffic), Line chart (traffic trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Akamai Guardicore Add-on for Splunk](https://splunkbase.splunk.com/app/7426), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1021",
                "T1570",
                "T1018"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Network flow logs, VMware NSX Add-on, Illumio syslog/HEC, Cisco Secure Workload TA, Akamai Guardicore Add-on for Splunk (Splunkbase 7426).\n• Ensure the following data sources are available: East-west traffic, segment IDs, flow counts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest segment-level or flow data. Baseline traffic between segment pairs. Alert when traffic exceeds baseline by threshold. Correlate with new connections and ZT policy. Report on segment traffic matrix.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=flows sourcetype=\"netflow\"\n| bin _time span=1h\n| stats sum(bytes) as bytes, count by src_segment, dest_segment, _time\n| eventstats avg(bytes) as avg_bytes by src_segment, dest_segment\n| where bytes > (avg_bytes * 5)\n| table src_segment, dest_segment, bytes, avg_bytes\n```\n\nUnderstanding this SPL\n\n**Micro-Segment Traffic Baseline Anomaly** — New or unexpected traffic between segments may indicate lateral movement or misconfiguration. Anomaly detection supports Zero Trust enforcement.\n\nDocumented **Data sources**: East-west traffic, segment IDs, flow counts. **App/TA** (typical add-on context): Network flow logs, VMware NSX Add-on, Illumio syslog/HEC, Cisco Secure Workload TA, Akamai Guardicore Add-on for Splunk (Splunkbase 7426). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: flows; **sourcetype**: netflow. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=flows, sourcetype=\"netflow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by src_segment, dest_segment, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by src_segment, dest_segment** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where bytes > (avg_bytes * 5)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Micro-Segment Traffic Baseline Anomaly**): table src_segment, dest_segment, bytes, avg_bytes\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Micro-Segment Traffic Baseline Anomaly** — New or unexpected traffic between segments may indicate lateral movement or misconfiguration. 
Anomaly detection supports Zero Trust enforcement.\n\nDocumented **Data sources**: East-west traffic, segment IDs, flow counts. **App/TA** (typical add-on context): Network flow logs, VMware NSX Add-on, Illumio syslog/HEC, Cisco Secure Workload TA, Akamai Guardicore Add-on for Splunk (Splunkbase 7426). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalous segment pairs), Heatmap (segment × segment traffic), Line chart (traffic trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value",
              "e": [
                "cisco",
                "guardicore",
                "nsx",
                "syslog",
                "vmware"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Akamai Guardicore Add-on for Splunk",
                  "id": 7426,
                  "url": "https://splunkbase.splunk.com/app/7426",
                  "desc": "Microsegmentation data from Akamai Guardicore Centra for asset protection and compliance",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.12",
              "n": "Zscaler ZIA Policy Violation Trends",
              "c": "high",
              "f": "beginner",
              "v": "Time-series of blocked violations per URL category and rule — tunes SWG policy and spots sudden policy drift.",
              "t": "Zscaler TA",
              "d": "`sourcetype=zscaler:web` or `zscaler:zia`",
              "q": "index=proxy sourcetype=\"zscaler:web\" earliest=-30d\n| where action=\"blocked\" OR threat_score>0\n| timechart span=1d count by rule_name",
              "m": "Map `rule_name` / `policy` from ZIA. Alert when daily blocks for a rule exceed 2× 7-day average (possible mis-tuned category).",
              "z": "Line chart (blocks by rule), Stacked area (categories), Table (top rules).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Zscaler TA.\n• Ensure the following data sources are available: `sourcetype=zscaler:web` or `zscaler:zia`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `rule_name` / `policy` from ZIA. Alert when daily blocks for a rule exceed 2× 7-day average (possible mis-tuned category).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"zscaler:web\" earliest=-30d\n| where action=\"blocked\" OR threat_score>0\n| timechart span=1d count by rule_name\n```\n\nUnderstanding this SPL\n\n**Zscaler ZIA Policy Violation Trends** — Time-series of blocked violations per URL category and rule — tunes SWG policy and spots sudden policy drift.\n\nDocumented **Data sources**: `sourcetype=zscaler:web` or `zscaler:zia`. **App/TA** (typical add-on context): Zscaler TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: zscaler:web. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"zscaler:web\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"blocked\" OR threat_score>0` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by rule_name** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.url span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Zscaler ZIA Policy Violation Trends** — Time-series of blocked violations per URL category and rule — tunes SWG policy and spots sudden policy drift.\n\nDocumented **Data sources**: `sourcetype=zscaler:web` or `zscaler:zia`. **App/TA** (typical add-on context): Zscaler TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (blocks by rule), Stacked area (categories), Table (top rules).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.url span=1d",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.13",
              "n": "ZPA Application Segment Health",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks connector health, app segment reachability, and error rates for ZPA-published apps.",
              "t": "Zscaler TA",
              "d": "`sourcetype=zscaler:zpa`, connector telemetry",
              "q": "index=zt sourcetype=\"zscaler:zpa\" earliest=-24h\n| where match(lower(status),\"(?i)fail|error|down\") OR latency_ms>2000\n| stats count by app_segment, connector_group, error_code\n| sort 30 -count",
              "m": "Normalize `app_segment` and latency fields from your ZPA TA. Alert on connector group with >5% error rate vs prior week.",
              "z": "Table (unhealthy segments), Line chart (error rate), Single value (segments in alert).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Zscaler TA.\n• Ensure the following data sources are available: `sourcetype=zscaler:zpa`, connector telemetry.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize `app_segment` and latency fields from your ZPA TA. Alert on connector group with >5% error rate vs prior week.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"zscaler:zpa\" earliest=-24h\n| where match(lower(status),\"(?i)fail|error|down\") OR latency_ms>2000\n| stats count by app_segment, connector_group, error_code\n| sort 30 -count\n```\n\nUnderstanding this SPL\n\n**ZPA Application Segment Health** — Tracks connector health, app segment reachability, and error rates for ZPA-published apps.\n\nDocumented **Data sources**: `sourcetype=zscaler:zpa`, connector telemetry. **App/TA** (typical add-on context): Zscaler TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: zscaler:zpa. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"zscaler:zpa\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(status),\"(?i)fail|error|down\") OR latency_ms>2000` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by app_segment, connector_group, error_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.dest span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ZPA Application Segment Health** — Tracks connector health, app segment reachability, and error rates for ZPA-published apps.\n\nDocumented **Data sources**: `sourcetype=zscaler:zpa`, connector telemetry. **App/TA** (typical add-on context): Zscaler TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unhealthy segments), Line chart (error rate), Single value (segments in alert).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.dest span=1h",
              "e": [
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.14",
              "n": "Cisco Umbrella DNS Block Analysis",
              "c": "high",
              "f": "beginner",
              "v": "Top blocked domains, identities, and policy hits for DNS-layer security tuning and threat hunting.",
              "t": "Cisco Umbrella TA",
              "d": "`sourcetype=umbrella:dns`",
              "q": "index=dns sourcetype=\"umbrella:dns\" earliest=-7d\n| where action=\"blocked\"\n| stats count by domain, identity, categories\n| sort 50 -count",
              "m": "Enrich with ASN or threat feed for rare domains. Alert on spike in blocks from single identity (possible compromise).",
              "z": "Bar chart (top domains), Table (identity × domain), Pie chart (categories).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Resolution](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Resolution)",
              "mitre": [
                "T1071.004",
                "T1568"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco Umbrella TA.\n• Ensure the following data sources are available: `sourcetype=umbrella:dns`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnrich with ASN or threat feed for rare domains. Alert on spike in blocks from single identity (possible compromise).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns sourcetype=\"umbrella:dns\" earliest=-7d\n| where action=\"blocked\"\n| stats count by domain, identity, categories\n| sort 50 -count\n```\n\nUnderstanding this SPL\n\n**Cisco Umbrella DNS Block Analysis** — Top blocked domains, identities, and policy hits for DNS-layer security tuning and threat hunting.\n\nDocumented **Data sources**: `sourcetype=umbrella:dns`. **App/TA** (typical add-on context): Cisco Umbrella TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns; **sourcetype**: umbrella:dns. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns, sourcetype=\"umbrella:dns\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"blocked\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by domain, identity, categories** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.query span=1h\n| where count > 100\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cisco Umbrella DNS Block Analysis** — Top blocked domains, identities, and policy hits for DNS-layer security tuning and threat hunting.\n\nDocumented **Data sources**: `sourcetype=umbrella:dns`. **App/TA** (typical add-on context): Cisco Umbrella TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Resolution.DNS` — enable acceleration for that model.\n• Filters the current rows with `where count > 100` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top domains), Table (identity × domain), Pie chart (categories).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Resolution"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Resolution.DNS\n  by DNS.query span=1h\n| where count > 100",
              "e": [
                "cisco"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.15",
              "n": "SASE Tunnel Health",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors IPSec/GRE/SSL tunnels from branch to SASE PoPs — packet loss, latency, and down events.",
              "t": "`Splunk_TA_zscaler`, `Splunk_TA_paloalto` (Prisma Access), Cato Networks Events App (Splunkbase 8037), `TA-fortinet_fortigate` (FortiSASE), Netskope Add-on for Splunk (Splunkbase 3808)",
              "d": "`sourcetype=sase:tunnel`, SD-WAN to SASE",
              "q": "index=sase sourcetype=\"sase:tunnel\" earliest=-24h\n| eval healthy=if(match(lower(state),\"(?i)up|active\") AND packet_loss_pct < 2 AND latency_ms < 200,1,0)\n| where healthy=0\n| stats latest(latency_ms) as latency_ms latest(packet_loss_pct) as loss by tunnel_id, site\n| sort loss",
              "m": "Field names vary (Zscaler GRE, Prisma IPSec). Use unified summary index if multi-vendor.",
              "z": "Table (unhealthy tunnels), Geo map (site), Line chart (loss trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Cato Networks Events App](https://splunkbase.splunk.com/app/8037), [Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`, `Splunk_TA_paloalto` (Prisma Access), Cato Networks Events App (Splunkbase 8037), `TA-fortinet_fortigate` (FortiSASE), Netskope Add-on for Splunk (Splunkbase 3808).\n• Ensure the following data sources are available: `sourcetype=sase:tunnel`, SD-WAN to SASE.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nField names vary (Zscaler GRE, Prisma IPSec). Use unified summary index if multi-vendor.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sase sourcetype=\"sase:tunnel\" earliest=-24h\n| eval healthy=if(match(lower(state),\"(?i)up|active\") AND packet_loss_pct < 2 AND latency_ms < 200,1,0)\n| where healthy=0\n| stats latest(latency_ms) as latency_ms latest(packet_loss_pct) as loss by tunnel_id, site\n| sort loss\n```\n\nUnderstanding this SPL\n\n**SASE Tunnel Health** — Monitors IPSec/GRE/SSL tunnels from branch to SASE PoPs — packet loss, latency, and down events.\n\nDocumented **Data sources**: `sourcetype=sase:tunnel`, SD-WAN to SASE. **App/TA** (typical add-on context): `Splunk_TA_zscaler`, `Splunk_TA_paloalto` (Prisma Access), Cato Networks Events App (Splunkbase 8037), `TA-fortinet_fortigate` (FortiSASE), Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sase; **sourcetype**: sase:tunnel. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sase, sourcetype=\"sase:tunnel\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **healthy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where healthy=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by tunnel_id, site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unhealthy tunnels), Geo map (site), Line chart (loss trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "fortinet",
                "netskope",
                "nutanix",
                "paloalto",
                "zscaler"
              ],
              "em": [
                "fortinet_fortigate",
                "paloalto_pan_firewall",
                "paloalto_prisma"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                },
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                },
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.16",
              "n": "Identity-Aware Proxy Access Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "Baselines per-user access to internal apps via IAP/ZTNA; flags new apps, odd hours, or geos.",
              "t": "Google IAP, `Splunk_TA_microsoft-cloudservices` (Azure AD App Proxy), `Splunk_TA_zscaler` (ZPA), Cloudflare App for Splunk (Splunkbase 4501), Netskope Add-on for Splunk (Splunkbase 3808)",
              "d": "`sourcetype=iap:access`, `zscaler:zpa`",
              "q": "index=zt sourcetype=\"zscaler:zpa\" earliest=-30d\n| eval day=strftime(_time,\"%Y-%m-%d\")\n| stats dc(application) as apps_today by user, day\n| eventstats avg(apps_today) as baseline by user\n| where apps_today > baseline*3 AND apps_today>5\n| table user, day, apps_today, baseline",
              "m": "Adapt to Google IAP JSON logs. Whitelist break-glass accounts.",
              "z": "Table (anomalies), Line chart (apps accessed per user), Heatmap (user × app).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Cloudflare App for Splunk](https://splunkbase.splunk.com/app/4501), [Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1090",
                "T1071.001",
                "T1021"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Google IAP, `Splunk_TA_microsoft-cloudservices` (Azure AD App Proxy), `Splunk_TA_zscaler` (ZPA), Cloudflare App for Splunk (Splunkbase 4501), Netskope Add-on for Splunk (Splunkbase 3808).\n• Ensure the following data sources are available: `sourcetype=iap:access`, `zscaler:zpa`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAdapt to Google IAP JSON logs. Whitelist break-glass accounts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"zscaler:zpa\" earliest=-30d\n| eval day=strftime(_time,\"%Y-%m-%d\")\n| stats dc(application) as apps_today by user, day\n| eventstats avg(apps_today) as baseline by user\n| where apps_today > baseline*3 AND apps_today>5\n| table user, day, apps_today, baseline\n```\n\nUnderstanding this SPL\n\n**Identity-Aware Proxy Access Anomalies** — Baselines per-user access to internal apps via IAP/ZTNA; flags new apps, odd hours, or geos.\n\nDocumented **Data sources**: `sourcetype=iap:access`, `zscaler:zpa`. **App/TA** (typical add-on context): Google IAP, `Splunk_TA_microsoft-cloudservices` (Azure AD App Proxy), `Splunk_TA_zscaler` (ZPA), Cloudflare App for Splunk (Splunkbase 4501), Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: zscaler:zpa. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"zscaler:zpa\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **day** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user, day** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where apps_today > baseline*3 AND apps_today>5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Identity-Aware Proxy Access Anomalies**): table user, day, apps_today, baseline\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.user Authentication.app span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Identity-Aware Proxy Access Anomalies** — Baselines per-user access to internal apps via IAP/ZTNA; flags new apps, odd hours, or geos.\n\nDocumented **Data sources**: `sourcetype=iap:access`, `zscaler:zpa`. **App/TA** (typical add-on context): Google IAP, `Splunk_TA_microsoft-cloudservices` (Azure AD App Proxy), `Splunk_TA_zscaler` (ZPA), Cloudflare App for Splunk (Splunkbase 4501), Netskope Add-on for Splunk (Splunkbase 3808). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalies), Line chart (apps accessed per user), Heatmap (user × app).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.user Authentication.app span=1d",
              "e": [
                "azure",
                "cloudflare",
                "m365",
                "netskope",
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                },
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                },
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                },
                {
                  "name": "Cloudflare App for Splunk",
                  "id": 4501,
                  "url": "https://splunkbase.splunk.com/app/4501",
                  "desc": "Dashboards for Cloudflare performance, security, Zero Trust, and DNS analytics",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.17",
              "n": "Microsegmentation Policy Effectiveness",
              "c": "high",
              "f": "intermediate",
              "v": "Ratio of expected denies vs allows for critical segments — validates that “default deny” is actually enforced.",
              "t": "VMware NSX Add-on, Illumio syslog/HEC, Cisco Secure Workload TA",
              "d": "`microseg:policy`",
              "q": "index=zt sourcetype=\"microseg:policy\" earliest=-7d\n| eval kind=if(action=\"deny\",\"deny\",\"allow\")\n| stats count as c by kind, policy_name\n| eventstats sum(c) as tot by policy_name\n| eval pct=round(100*c/tot,1)\n| where kind=\"deny\"\n| table policy_name, pct, c",
              "m": "High deny % on locked-down segments is expected; unexpected **allow** spikes on deny-first policies warrant review (use companion search with `kind=\"allow\"`).",
              "z": "Bar chart (deny % by policy), Table (policy mix), Line chart (deny trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1021",
                "T1046"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: VMware NSX Add-on, Illumio syslog/HEC, Cisco Secure Workload TA.\n• Ensure the following data sources are available: `microseg:policy`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nHigh deny % on locked-down segments is expected; unexpected **allow** spikes on deny-first policies warrant review (use companion search with `kind=\"allow\"`).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"microseg:policy\" earliest=-7d\n| eval kind=if(action=\"deny\",\"deny\",\"allow\")\n| stats count as c by kind, policy_name\n| eventstats sum(c) as tot by policy_name\n| eval pct=round(100*c/tot,1)\n| where kind=\"deny\"\n| table policy_name, pct, c\n```\n\nUnderstanding this SPL\n\n**Microsegmentation Policy Effectiveness** — Ratio of expected denies vs allows for critical segments — validates that “default deny” is actually enforced.\n\nDocumented **Data sources**: `microseg:policy`. **App/TA** (typical add-on context): VMware NSX Add-on, Illumio syslog/HEC, Cisco Secure Workload TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: microseg:policy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"microseg:policy\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **kind** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by kind, policy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by policy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where kind=\"deny\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Microsegmentation Policy Effectiveness**): table policy_name, pct, c\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"allowed\"\n  by All_Traffic.dest span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Microsegmentation Policy Effectiveness** — Ratio of expected denies vs allows for critical segments — validates that “default deny” is actually enforced.\n\nDocumented **Data sources**: `microseg:policy`. **App/TA** (typical add-on context): VMware NSX Add-on, Illumio syslog/HEC, Cisco Secure Workload TA. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (deny % by policy), Table (policy mix), Line chart (deny trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"allowed\"\n  by All_Traffic.dest span=1h",
              "e": [
                "cisco",
                "nsx",
                "syslog",
                "vmware"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.18",
              "n": "Device Trust Score Trending",
              "c": "high",
              "f": "beginner",
              "v": "Fleet-level and cohort trend of device trust scores — extends point-in-time UC-17.3.2.",
              "t": "`Splunk_TA_zscaler`, `Splunk_TA_microsoft-cloudservices` (Entra ID), `TA-crowdstrike-falcon`",
              "d": "`zscaler:device_posture`, `zt:device_trust`",
              "q": "index=zt sourcetype=\"zscaler:device_posture\" earliest=-30d\n| timechart span=1d avg(trust_score) as avg_trust by os_type",
              "m": "Ensure `trust_score` is numeric 0–100. Alert when 7-day moving average drops >10 points for Windows corporate fleet.",
              "z": "Line chart (avg trust by OS), Single value (fleet avg), Area chart (distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1200"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`, `Splunk_TA_microsoft-cloudservices` (Entra ID), `TA-crowdstrike-falcon`.\n• Ensure the following data sources are available: `zscaler:device_posture`, `zt:device_trust`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure `trust_score` is numeric 0–100. Alert when 7-day moving average drops >10 points for Windows corporate fleet.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"zscaler:device_posture\" earliest=-30d\n| timechart span=1d avg(trust_score) as avg_trust by os_type\n```\n\nUnderstanding this SPL\n\n**Device Trust Score Trending** — Fleet-level and cohort trend of device trust scores — extends point-in-time UC-17.3.2.\n\nDocumented **Data sources**: `zscaler:device_posture`, `zt:device_trust`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`, `Splunk_TA_microsoft-cloudservices` (Entra ID), `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: zscaler:device_posture. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"zscaler:device_posture\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by os_type** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.src span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Device Trust Score Trending** — Fleet-level and cohort trend of device trust scores — extends point-in-time UC-17.3.2.\n\nDocumented **Data sources**: `zscaler:device_posture`, `zt:device_trust`. **App/TA** (typical add-on context): `Splunk_TA_zscaler`, `Splunk_TA_microsoft-cloudservices` (Entra ID), `TA-crowdstrike-falcon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (avg trust by OS), Single value (fleet avg), Area chart (distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.src span=1d",
              "e": [
                "azure",
                "crowdstrike",
                "m365",
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                },
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.19",
              "n": "Continuous Authentication Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks step-up auth, re-auth, and session risk evaluation outcomes for policies requiring continuous verification.",
              "t": "`Splunk_TA_microsoft-cloudservices` (Entra ID Protection), `Splunk_TA_okta`, Zscaler ZPA TA",
              "d": "`sourcetype=azure:signin`, `okta:system`",
              "q": "index=identity sourcetype=\"azure:signin\" earliest=-7d\n| where risk_level!=\"none\" OR authentication_requirement=\"multiFactorAuthentication\"\n| stats count by user, risk_detail, result\n| sort 40 -count",
              "m": "Map Entra `riskLevelDuringSignIn` and CA grant controls. Report MFA completion rate when risk elevated.",
              "z": "Table (risky sign-ins), Bar chart (outcomes), Line chart (risk events / day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk_TA_microsoft-cloudservices](https://splunkbase.splunk.com/app/3110), [Splunk_TA_okta](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1021"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_microsoft-cloudservices` (Entra ID Protection), `Splunk_TA_okta`, Zscaler ZPA TA.\n• Ensure the following data sources are available: `sourcetype=azure:signin`, `okta:system`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap Entra `riskLevelDuringSignIn` and CA grant controls. Report MFA completion rate when risk elevated.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=identity sourcetype=\"azure:signin\" earliest=-7d\n| where risk_level!=\"none\" OR authentication_requirement=\"multiFactorAuthentication\"\n| stats count by user, risk_detail, result\n| sort 40 -count\n```\n\nUnderstanding this SPL\n\n**Continuous Authentication Compliance** — Tracks step-up auth, re-auth, and session risk evaluation outcomes for policies requiring continuous verification.\n\nDocumented **Data sources**: `sourcetype=azure:signin`, `okta:system`. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Entra ID Protection), `Splunk_TA_okta`, Zscaler ZPA TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: identity; **sourcetype**: azure:signin. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=identity, sourcetype=\"azure:signin\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where risk_level!=\"none\" OR authentication_requirement=\"multiFactorAuthentication\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, risk_detail, result** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Continuous Authentication Compliance** — Tracks step-up auth, re-auth, and session risk evaluation outcomes for policies requiring continuous verification.\n\nDocumented **Data sources**: `sourcetype=azure:signin`, `okta:system`. **App/TA** (typical add-on context): `Splunk_TA_microsoft-cloudservices` (Entra ID Protection), `Splunk_TA_okta`, Zscaler ZPA TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (risky sign-ins), Bar chart (outcomes), Line chart (risk events / day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user span=1h",
              "e": [
                "azure",
                "m365",
                "okta",
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                },
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Cloud Services",
                "id": 3110,
                "url": "https://splunkbase.splunk.com/app/3110"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.20",
              "n": "Browser Isolation Usage",
              "c": "medium",
              "f": "beginner",
              "v": "Measures adoption of remote browser isolation (RBI) sessions vs direct access — for licensing and risky-site coverage.",
              "t": "Menlo Security syslog, `Splunk_TA_zscaler` (RBI), Island Enterprise Browser syslog, Broadcom Symantec WSS Add-on (Splunkbase 3856), Forcepoint ONE syslog",
              "d": "`sourcetype=rbi:session`",
              "q": "index=zt sourcetype=\"rbi:session\" earliest=-30d\n| eval isolated=if(match(lower(session_type),\"(?i)isolated|rbi\"),1,0)\n| timechart span=1d sum(isolated) as isolated_sessions, count as total_sessions\n| eval isolation_rate=round(100*isolated_sessions/total_sessions,1)",
              "m": "Map vendor-specific session types. Alert when isolation_rate drops vs baseline for high-risk categories.",
              "z": "Line chart (isolation rate), Bar chart (sessions by app), Single value (% isolated).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Broadcom Symantec WSS Add-on](https://splunkbase.splunk.com/app/3856)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Menlo Security syslog, `Splunk_TA_zscaler` (RBI), Island Enterprise Browser syslog, Broadcom Symantec WSS Add-on (Splunkbase 3856), Forcepoint ONE syslog.\n• Ensure the following data sources are available: `sourcetype=rbi:session`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap vendor-specific session types. Alert when isolation_rate drops vs baseline for high-risk categories.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"rbi:session\" earliest=-30d\n| eval isolated=if(match(lower(session_type),\"(?i)isolated|rbi\"),1,0)\n| timechart span=1d sum(isolated) as isolated_sessions, count as total_sessions\n| eval isolation_rate=round(100*isolated_sessions/total_sessions,1)\n```\n\nUnderstanding this SPL\n\n**Browser Isolation Usage** — Measures adoption of remote browser isolation (RBI) sessions vs direct access — for licensing and risky-site coverage.\n\nDocumented **Data sources**: `sourcetype=rbi:session`. **App/TA** (typical add-on context): Menlo Security syslog, `Splunk_TA_zscaler` (RBI), Island Enterprise Browser syslog, Broadcom Symantec WSS Add-on (Splunkbase 3856), Forcepoint ONE syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: rbi:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"rbi:session\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **isolated** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **isolation_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (isolation rate), Bar chart (sessions by app), Single value (% isolated).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Operations"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "broadcom_symantec",
                "forcepoint",
                "syslog",
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                },
                {
                  "name": "Forcepoint Insights SIEM App",
                  "id": 8053,
                  "url": "https://splunkbase.splunk.com/app/8053",
                  "desc": "Centralized visibility into Forcepoint ONE SSE logs with prebuilt dashboards",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.21",
              "n": "SWG Bypass Attempt Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects attempts to reach direct IPs, misuse PAC files, or tunnel out of SWG inspection.",
              "t": "`Splunk_TA_zscaler`, Netskope Add-on for Splunk",
              "d": "`zscaler:web`, endpoint proxy logs",
              "q": "index=proxy sourcetype=\"zscaler:web\" earliest=-24h\n| where match(lower(reason),\"(?i)bypass|tunnel|direct|pac\") OR match(lower(url),\"(?i)proxy\\.pac\")\n| stats count by user, src, reason\n| sort -count",
              "m": "Correlate with firewall deny for non-standard ports. Tune for false positives from dev tools.",
              "z": "Table (bypass attempts), Bar chart (by user), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1572",
                "T1090",
                "T1048",
                "T1573"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler`, Netskope Add-on for Splunk.\n• Ensure the following data sources are available: `zscaler:web`, endpoint proxy logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate with firewall deny for non-standard ports. Tune for false positives from dev tools.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"zscaler:web\" earliest=-24h\n| where match(lower(reason),\"(?i)bypass|tunnel|direct|pac\") OR match(lower(url),\"(?i)proxy\\.pac\")\n| stats count by user, src, reason\n| sort -count\n```\n\nUnderstanding this SPL\n\n**SWG Bypass Attempt Detection** — Detects attempts to reach direct IPs, misuse PAC files, or tunnel out of SWG inspection.\n\nDocumented **Data sources**: `zscaler:web`, endpoint proxy logs. **App/TA** (typical add-on context): `Splunk_TA_zscaler`, Netskope Add-on for Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: zscaler:web. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"zscaler:web\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(reason),\"(?i)bypass|tunnel|direct|pac\") OR match(lower(url),\"(?i)proxy\\.pac\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, src, reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.user span=1h\n| where count > 200\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SWG Bypass Attempt Detection** — Detects attempts to reach direct IPs, misuse PAC files, or tunnel out of SWG inspection.\n\nDocumented **Data sources**: `zscaler:web`, endpoint proxy logs. **App/TA** (typical add-on context): `Splunk_TA_zscaler`, Netskope Add-on for Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Filters the current rows with `where count > 200` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (bypass attempts), Bar chart (by user), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.user span=1h\n| where count > 200",
              "e": [
                "netskope",
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                },
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.22",
              "n": "ZTNA Application Access Latency",
              "c": "medium",
              "f": "beginner",
              "v": "p95 latency per published application for user experience SLAs on ZTNA paths.",
              "t": "`Splunk_TA_zscaler` (ZPA), Cloudflare Logpush integration",
              "d": "`zscaler:zpa` access logs with `latency_ms`",
              "q": "index=zt sourcetype=\"zscaler:zpa\" earliest=-24h\n| stats perc95(latency_ms) as p95_ms, count by application\n| where p95_ms > 800\n| sort -p95_ms",
              "m": "Segment by connector group and region. Compare before/after app migrations.",
              "z": "Bar chart (p95 by app), Line chart (p95 trend), Table (worst apps).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_zscaler` (ZPA), Cloudflare Logpush integration.\n• Ensure the following data sources are available: `zscaler:zpa` access logs with `latency_ms`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSegment by connector group and region. Compare before/after app migrations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"zscaler:zpa\" earliest=-24h\n| stats perc95(latency_ms) as p95_ms, count by application\n| where p95_ms > 800\n| sort -p95_ms\n```\n\nUnderstanding this SPL\n\n**ZTNA Application Access Latency** — p95 latency per published application for user experience SLAs on ZTNA paths.\n\nDocumented **Data sources**: `zscaler:zpa` access logs with `latency_ms`. **App/TA** (typical add-on context): `Splunk_TA_zscaler` (ZPA), Cloudflare Logpush integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: zscaler:zpa. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"zscaler:zpa\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by application** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_ms > 800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(All_Traffic.response_time) as rt\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.url span=5m\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ZTNA Application Access Latency** — p95 latency per published application for user experience SLAs on ZTNA paths.\n\nDocumented **Data sources**: `zscaler:zpa` access logs with `latency_ms`. **App/TA** (typical add-on context): `Splunk_TA_zscaler` (ZPA), Cloudflare Logpush integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (p95 by app), Line chart (p95 trend), Table (worst apps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` avg(All_Traffic.response_time) as rt\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.url span=5m",
              "e": [
                "cloudflare",
                "zscaler"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                },
                {
                  "name": "Cloudflare App for Splunk",
                  "id": 4501,
                  "url": "https://splunkbase.splunk.com/app/4501",
                  "desc": "Dashboards for Cloudflare performance, security, Zero Trust, and DNS analytics",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.23",
              "n": "Prisma Access Tunnel Health",
              "c": "high",
              "f": "intermediate",
              "v": "IPSec/SSL tunnel state, latency, and error codes for Palo Alto Prisma Access remote networks and mobile users.",
              "t": "Splunk_TA_paloalto, Prisma Access cloud logging",
              "d": "`sourcetype=prisma:access:tunnel` or PAN-OS VPN logs",
              "q": "index=sase sourcetype=\"prisma:access:tunnel\" earliest=-24h\n| eval ok=if(match(lower(tunnel_state),\"(?i)up|active\") AND error_code=0,1,0)\n| where ok=0\n| stats latest(latency_ms) as latency_ms latest(error_code) as error_code by tunnel_name, site_id\n| sort latency_ms",
              "m": "Map Prisma Remote Network vs Mobile User templates. Join SD-WAN site name from CMDB.",
              "z": "Table (down tunnels), Map (sites), Line chart (tunnel availability %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk_TA_paloalto, Prisma Access cloud logging.\n• Ensure the following data sources are available: `sourcetype=prisma:access:tunnel` or PAN-OS VPN logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap Prisma Remote Network vs Mobile User templates. Join SD-WAN site name from CMDB.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sase sourcetype=\"prisma:access:tunnel\" earliest=-24h\n| eval ok=if(match(lower(tunnel_state),\"(?i)up|active\") AND error_code=0,1,0)\n| where ok=0\n| stats latest(latency_ms) as latency_ms latest(error_code) as error_code by tunnel_name, site_id\n| sort latency_ms\n```\n\nUnderstanding this SPL\n\n**Prisma Access Tunnel Health** — IPSec/SSL tunnel state, latency, and error codes for Palo Alto Prisma Access remote networks and mobile users.\n\nDocumented **Data sources**: `sourcetype=prisma:access:tunnel` or PAN-OS VPN logs. **App/TA** (typical add-on context): Splunk_TA_paloalto, Prisma Access cloud logging. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sase; **sourcetype**: prisma:access:tunnel. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sase, sourcetype=\"prisma:access:tunnel\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ok=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by tunnel_name, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (down tunnels), Map (sites), Line chart (tunnel availability %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall",
                "paloalto_prisma"
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.24",
              "n": "Conditional Access Policy Enforcement (Entra ID)",
              "c": "high",
              "f": "beginner",
              "v": "Volume of grants vs blocks per named CA policy — complements generic UC-17.3.1 with Microsoft-specific policy dimension.",
              "t": "Azure / Entra TA",
              "d": "`sourcetype=azure:signin` with `conditional_access_status`",
              "q": "index=identity sourcetype=\"azure:signin\" earliest=-7d\n| where isnotnull(conditional_access_policy_name)\n| stats count by conditional_access_policy_name, conditional_access_status\n| sort -count",
              "m": "Include `failureReason` for blocks. Alert when block rate for a policy jumps without change ticket.",
              "z": "Stacked bar (policy × status), Table (top blocks), Line chart (blocks / day per policy).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1021"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Azure / Entra TA.\n• Ensure the following data sources are available: `sourcetype=azure:signin` with `conditional_access_status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInclude `failureReason` for blocks. Alert when block rate for a policy jumps without change ticket.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=identity sourcetype=\"azure:signin\" earliest=-7d\n| where isnotnull(conditional_access_policy_name)\n| stats count by conditional_access_policy_name, conditional_access_status\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Conditional Access Policy Enforcement (Entra ID)** — Volume of grants vs blocks per named CA policy — complements generic UC-17.3.1 with Microsoft-specific policy dimension.\n\nDocumented **Data sources**: `sourcetype=azure:signin` with `conditional_access_status`. **App/TA** (typical add-on context): Azure / Entra TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: identity; **sourcetype**: azure:signin. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=identity, sourcetype=\"azure:signin\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(conditional_access_policy_name)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by conditional_access_policy_name, conditional_access_status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Conditional Access Policy Enforcement (Entra ID)** — Volume of grants vs blocks per named CA policy — complements generic UC-17.3.1 with Microsoft-specific policy dimension.\n\nDocumented **Data sources**: `sourcetype=azure:signin` with `conditional_access_status`. **App/TA** (typical add-on context): Azure / Entra TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (policy × status), Table (top blocks), Line chart (blocks / day per policy).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.user span=1h",
              "e": [
                "azure",
                "m365"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.25",
              "n": "Cato Security Event Monitoring (Cato Networks)",
              "c": "high",
              "f": "intermediate",
              "v": "Cato's cloud-native security stack generates IPS, anti-malware, and NGFW events from a single pass inspection of all WAN and internet traffic. Unlike on-premises firewalls, every branch and remote user traverses the same cloud inspection plane. Monitoring detection volume, severity distribution, and threat categories across sites and identities reveals coordinated campaigns, noisy rules, and coverage gaps before incidents escalate.",
              "t": "Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration) (syslog to TCP 1514) as an alternative to the app’s API pull.",
              "d": "Cato Events API (security, connectivity, audit); `sourcetype=cato:events` or `sourcetype=cato:sase`",
              "q": "index=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-24h\n| eval sub_type=lower(coalesce(event_sub_type, sub_type, \"\"))\n| eval threat_cat=coalesce(threat_category, rule_category, signature_category, \"uncategorized\")\n| eval sev=coalesce(severity, priority, risk_level, \"info\")\n| where match(sub_type,\"ips|anti.?malware|malware|intrusion|ngfw|firewall|threat\") OR match(lower(coalesce(action, disposition, \"\")),\"block|prevent|deny|detect\")\n| timechart span=1h count by sev",
              "m": "Install the Cato Networks Events App and map Cato JSON fields with `props.conf` / `FIELDALIAS` to stable names (`event_sub_type`, `threat_category`, `site_name`, `user`). Baseline events per hour per site; alert on 3σ spikes in blocked or critical-severity counts. Tag change windows to suppress expected noise after policy rollouts.",
              "z": "Timechart (events/hour by severity), Pie or bar (threat category mix), Table (top signatures/rules), Single value (24h blocked vs prior day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Cato Networks Events App](https://splunkbase.splunk.com/app/8037), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [
                "T1071.001",
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration) (syslog to TCP 1514) as an alternative to the app’s API pull..\n• Ensure the following data sources are available: Cato Events API (security, connectivity, audit); `sourcetype=cato:events` or `sourcetype=cato:sase`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall the Cato Networks Events App and map Cato JSON fields with `props.conf` / `FIELDALIAS` to stable names (`event_sub_type`, `threat_category`, `site_name`, `user`). Baseline events per hour per site; alert on 3σ spikes in blocked or critical-severity counts. Tag change windows to suppress expected noise after policy rollouts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-24h\n| eval sub_type=lower(coalesce(event_sub_type, sub_type, \"\"))\n| eval threat_cat=coalesce(threat_category, rule_category, signature_category, \"uncategorized\")\n| eval sev=coalesce(severity, priority, risk_level, \"info\")\n| where match(sub_type,\"ips|anti.?malware|malware|intrusion|ngfw|firewall|threat\") OR match(lower(coalesce(action, disposition, \"\")),\"block|prevent|deny|detect\")\n| timechart span=1h count by sev\n```\n\nUnderstanding this SPL\n\n**Cato Security Event Monitoring (Cato Networks)** — Cato's cloud-native security stack generates IPS, anti-malware, and NGFW events from a single pass inspection of all WAN and internet traffic. Unlike on-premises firewalls, every branch and remote user traverses the same cloud inspection plane. Monitoring detection volume, severity distribution, and threat categories across sites and identities reveals coordinated campaigns, noisy rules, and coverage gaps before incidents escalate.\n\nDocumented **Data sources**: Cato Events API (security, connectivity, audit); `sourcetype=cato:events` or `sourcetype=cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration) (syslog to TCP 1514) as an alternative to the app’s API pull. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sase.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sase, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sub_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **threat_cat** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sev** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(sub_type,\"ips|anti.?malware|malware|intrusion|ngfw|firewall|threat\") OR match(lower(coalesce(action, dispositio…` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by sev** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.signature span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cato Security Event Monitoring (Cato Networks)** — Cato's cloud-native security stack generates IPS, anti-malware, and NGFW events from a single pass inspection of all WAN and internet traffic. Unlike on-premises firewalls, every branch and remote user traverses the same cloud inspection plane. Monitoring detection volume, severity distribution, and threat categories across sites and identities reveals coordinated campaigns, noisy rules, and coverage gaps before incidents escalate.\n\nDocumented **Data sources**: Cato Events API (security, connectivity, audit); `sourcetype=cato:events` or `sourcetype=cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration) (syslog to TCP 1514) as an alternative to the app’s API pull. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (events/hour by severity), Pie or bar (threat category mix), Table (top signatures/rules), Single value (24h blocked vs prior day).",
              "script": "",
              "premium": "",
              "hw": "Cato Socket (physical edge), Cato vSocket (virtual edge), Cato SDP Client (ZTNA endpoint software) — no discrete on-premises firewall appliances; enforcement is cloud-delivered at Cato PoPs.",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection",
                "Malware"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.signature span=1h",
              "e": [
                "github",
                "syslog"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.26",
              "n": "Cato WAN Link Health and Quality (Cato Networks)",
              "c": "high",
              "f": "intermediate",
              "v": "Cato SD-WAN measures latency, jitter, and packet loss per Socket uplink (MPLS, broadband, LTE). When quality falls below policy thresholds, Cato steers flows to healthier paths automatically. Retaining link-quality telemetry in Splunk exposes chronic ISP issues, validates steering decisions, and supports capacity conversations with carriers using your own historical evidence.",
              "t": "Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration)",
              "d": "Cato connectivity / link-quality events; `sourcetype=cato:events` or `cato:sase`",
              "q": "index=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-24h\n| eval site=coalesce(site_name, site_id, account_name)\n| eval link=coalesce(link_name, interface_name, wan_link, uplink_id, \"unknown\")\n| eval latency_ms=coalesce(rtt_ms, avg_rtt_ms, latency_ms)\n| eval loss_pct=coalesce(packet_loss_pct, packet_loss, loss_percent)\n| eval jitter_ms=coalesce(jitter_ms, jitter)\n| where isnotnull(latency_ms) OR isnotnull(loss_pct) OR isnotnull(jitter_ms)\n| timechart span=5m avg(latency_ms) as avg_rtt_ms avg(loss_pct) as avg_packet_loss_pct avg(jitter_ms) as avg_jitter_ms by site",
              "m": "Confirm which Cato event subtypes carry WAN metrics for your account (field names vary slightly by feed version). Build per-site SLO panels (for example loss below 1%, latency under a site-specific ms target). Alert when any uplink exceeds SLO for two consecutive 15-minute buckets or when jitter spikes correlate with application ticket volume.",
              "z": "Line chart (latency / loss / jitter over time by site), Heatmap (site × hour quality score), Table (worst uplinks by p95 latency).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Cato Networks Events App](https://splunkbase.splunk.com/app/8037), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration).\n• Ensure the following data sources are available: Cato connectivity / link-quality events; `sourcetype=cato:events` or `cato:sase`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfirm which Cato event subtypes carry WAN metrics for your account (field names vary slightly by feed version). Build per-site SLO panels (for example loss below 1%, latency under a site-specific ms target). Alert when any uplink exceeds SLO for two consecutive 15-minute buckets or when jitter spikes correlate with application ticket volume.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-24h\n| eval site=coalesce(site_name, site_id, account_name)\n| eval link=coalesce(link_name, interface_name, wan_link, uplink_id, \"unknown\")\n| eval latency_ms=coalesce(rtt_ms, avg_rtt_ms, latency_ms)\n| eval loss_pct=coalesce(packet_loss_pct, packet_loss, loss_percent)\n| eval jitter_ms=coalesce(jitter_ms, jitter)\n| where isnotnull(latency_ms) OR isnotnull(loss_pct) OR isnotnull(jitter_ms)\n| timechart span=5m avg(latency_ms) as avg_rtt_ms avg(loss_pct) as avg_packet_loss_pct avg(jitter_ms) as avg_jitter_ms by site\n```\n\nUnderstanding this SPL\n\n**Cato WAN Link Health and Quality (Cato Networks)** — Cato SD-WAN measures latency, jitter, and packet loss per Socket uplink (MPLS, broadband, LTE). When quality falls below policy thresholds, Cato steers flows to healthier paths automatically. Retaining link-quality telemetry in Splunk exposes chronic ISP issues, validates steering decisions, and supports capacity conversations with carriers using your own historical evidence.\n\nDocumented **Data sources**: Cato connectivity / link-quality events; `sourcetype=cato:events` or `cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sase.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sase, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **site** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **link** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **loss_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **jitter_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(latency_ms) OR isnotnull(loss_pct) OR isnotnull(jitter_ms)` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=5m** buckets with a separate series **by site** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` avg(All_Traffic.bytes_in) as avg_in avg(All_Traffic.bytes_out) as avg_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src span=5m\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cato WAN Link Health and Quality (Cato Networks)** — Cato SD-WAN measures latency, jitter, and packet loss per Socket uplink (MPLS, broadband, LTE). When quality falls below policy thresholds, Cato steers flows to healthier paths automatically. Retaining link-quality telemetry in Splunk exposes chronic ISP issues, validates steering decisions, and supports capacity conversations with carriers using your own historical evidence.\n\nDocumented **Data sources**: Cato connectivity / link-quality events; `sourcetype=cato:events` or `cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (latency / loss / jitter over time by site), Heatmap (site × hour quality score), Table (worst uplinks by p95 latency).",
              "script": "",
              "premium": "",
              "hw": "Cato Socket, Cato vSocket (per-site uplinks); remote users via Cato SDP Client do not replace site link metrics but appear in separate client-quality events where exposed.",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Performance",
                "Availability"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` avg(All_Traffic.bytes_in) as avg_in avg(All_Traffic.bytes_out) as avg_out\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src span=5m",
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.27",
              "n": "Cato Threat Prevention Events (Cato Networks)",
              "c": "critical",
              "f": "intermediate",
              "v": "Cato IPS blends machine learning and signatures inline on all traversing traffic. Because enforcement runs at Cato PoPs, IPS coverage is uniform for every site and remote user without shipping appliances to each location. Tracking blocked threats, source context, targeted services, and attack patterns supports incident triage, threat hunting, and validation that prevention is active everywhere.",
              "t": "Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration)",
              "d": "Cato threat / IPS events; `sourcetype=cato:events` or `cato:sase`",
              "q": "index=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-24h\n| eval blocked=if(match(lower(coalesce(action, disposition, verdict, \"\")),\"(?i)block|prevent|deny|drop\"),1,0)\n| eval threat=coalesce(threat_name, signature_name, rule_name, description, \"unknown\")\n| eval dst_service=coalesce(dst_port, service_name, app_name)\n| where blocked=1 AND match(lower(coalesce(event_sub_type, category, \"\")),\"(?i)ips|intrusion|threat|exploit\")\n| stats count values(threat) as threats values(dst_ip) as dst_ips by src_ip, site_name, dst_service\n| sort -count",
              "m": "Enrich `src_ip` with asset and identity lookups. Create notables for rare threats, cross-site recurrence of the same source, or blocks against critical server subnets. Tune out known vulnerability scanners only with documented exceptions. Correlate spikes with Cato configuration changes (UC-17.3.28).",
              "z": "Table (top blocked flows), Bar chart (threats by site), Map (src_geo if present), Timeline (block burst detection).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Cato Networks Events App](https://splunkbase.splunk.com/app/8037), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration).\n• Ensure the following data sources are available: Cato threat / IPS events; `sourcetype=cato:events` or `cato:sase`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnrich `src_ip` with asset and identity lookups. Create notables for rare threats, cross-site recurrence of the same source, or blocks against critical server subnets. Tune out known vulnerability scanners only with documented exceptions. Correlate spikes with Cato configuration changes (UC-17.3.28).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-24h\n| eval blocked=if(match(lower(coalesce(action, disposition, verdict, \"\")),\"(?i)block|prevent|deny|drop\"),1,0)\n| eval threat=coalesce(threat_name, signature_name, rule_name, description, \"unknown\")\n| eval dst_service=coalesce(dst_port, service_name, app_name)\n| where blocked=1 AND match(lower(coalesce(event_sub_type, category, \"\")),\"(?i)ips|intrusion|threat|exploit\")\n| stats count values(threat) as threats values(dst_ip) as dst_ips by src_ip, site_name, dst_service\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cato Threat Prevention Events (Cato Networks)** — Cato IPS blends machine learning and signatures inline on all traversing traffic. Because enforcement runs at Cato PoPs, IPS coverage is uniform for every site and remote user without shipping appliances to each location. Tracking blocked threats, source context, targeted services, and attack patterns supports incident triage, threat hunting, and validation that prevention is active everywhere.\n\nDocumented **Data sources**: Cato threat / IPS events; `sourcetype=cato:events` or `cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sase.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sase, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **blocked** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **threat** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dst_service** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where blocked=1 AND match(lower(coalesce(event_sub_type, category, \"\")),\"(?i)ips|intrusion|threat|exploit\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_ip, site_name, dst_service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.src IDS_Attacks.dest IDS_Attacks.signature IDS_Attacks.action span=1h\n| where count >= 5\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cato Threat Prevention Events (Cato Networks)** — Cato IPS blends machine learning and signatures inline on all traversing traffic. Because enforcement runs at Cato PoPs, IPS coverage is uniform for every site and remote user without shipping appliances to each location. Tracking blocked threats, source context, targeted services, and attack patterns supports incident triage, threat hunting, and validation that prevention is active everywhere.\n\nDocumented **Data sources**: Cato threat / IPS events; `sourcetype=cato:events` or `cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Filters the current rows with `where count >= 5` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top blocked flows), Bar chart (threats by site), Map (src_geo if present), Timeline (block burst detection).",
              "script": "",
              "premium": "",
              "hw": "Cato Socket, Cato vSocket, Cato SDP Client — threats are seen and acted on at the PoP; edge devices tunnel traffic into that inspection path.",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.src IDS_Attacks.dest IDS_Attacks.signature IDS_Attacks.action span=1h\n| where count >= 5",
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.28",
              "n": "Cato Cloud Firewall Policy Audit (Cato Networks)",
              "c": "medium",
              "f": "beginner",
              "v": "Cloud firewall policies are authored centrally in Cato Management and enforced at every PoP, so one misconfiguration has global blast radius. Auditing administrator actions, policy edits, and time-ordered changes lets you tie traffic anomalies to specific change records and demonstrate who approved risky rules.",
              "t": "Cato Networks Events App (Splunkbase 8037); Cato Events API audit stream",
              "d": "Cato administrative / audit events; `sourcetype=cato:events` or `cato:sase`",
              "q": "index=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-30d\n| eval evt=lower(coalesce(event_type, event_sub_type, \"\"))\n| where match(evt,\"admin|audit|config|policy|rule|change|login\") OR match(lower(coalesce(operation, action, \"\")),\"(?i)create|update|delete|modify\")\n| eval admin=coalesce(admin_name, admin_user, user_name, actor, \"unknown\")\n| eval policy=coalesce(policy_name, rule_name, object_name, \"unknown\")\n| stats count earliest(_time) as first_seen latest(_time) as last_seen values(operation) as ops by admin policy\n| sort -last_seen",
              "m": "Forward audit events to a restricted index with immutability or archival policy. Join `last_seen` to your ITSM change IDs when administrators paste ticket numbers into comments (or enforce ticket-required workflow in Cato). Alert on after-hours bulk rule deletes or new “allow any” style rules.",
              "z": "Table (recent policy changes), Timeline (admin activity), Bar chart (changes by administrator).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Cato Networks Events App](https://splunkbase.splunk.com/app/8037), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1562.004",
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cato Networks Events App (Splunkbase 8037); Cato Events API audit stream.\n• Ensure the following data sources are available: Cato administrative / audit events; `sourcetype=cato:events` or `cato:sase`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward audit events to a restricted index with immutability or archival policy. Join `last_seen` to your ITSM change IDs when administrators paste ticket numbers into comments (or enforce ticket-required workflow in Cato). Alert on after-hours bulk rule deletes or new “allow any” style rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-30d\n| eval evt=lower(coalesce(event_type, event_sub_type, \"\"))\n| where match(evt,\"admin|audit|config|policy|rule|change|login\") OR match(lower(coalesce(operation, action, \"\")),\"(?i)create|update|delete|modify\")\n| eval admin=coalesce(admin_name, admin_user, user_name, actor, \"unknown\")\n| eval policy=coalesce(policy_name, rule_name, object_name, \"unknown\")\n| stats count earliest(_time) as first_seen latest(_time) as last_seen values(operation) as ops by admin policy\n| sort -last_seen\n```\n\nUnderstanding this SPL\n\n**Cato Cloud Firewall Policy Audit (Cato Networks)** — Cloud firewall policies are authored centrally in Cato Management and enforced at every PoP, so one misconfiguration has global blast radius. Auditing administrator actions, policy edits, and time-ordered changes lets you tie traffic anomalies to specific change records and demonstrate who approved risky rules.\n\nDocumented **Data sources**: Cato administrative / audit events; `sourcetype=cato:events` or `cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); Cato Events API audit stream. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sase.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sase, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **evt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(evt,\"admin|audit|config|policy|rule|change|login\") OR match(lower(coalesce(operation, action, \"\")),\"(?i)create|…` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **admin** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **policy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by admin policy** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cato Cloud Firewall Policy Audit (Cato Networks)** — Cloud firewall policies are authored centrally in Cato Management and enforced at every PoP, so one misconfiguration has global blast radius. Auditing administrator actions, policy edits, and time-ordered changes lets you tie traffic anomalies to specific change records and demonstrate who approved risky rules.\n\nDocumented **Data sources**: Cato administrative / audit events; `sourcetype=cato:events` or `cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); Cato Events API audit stream. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recent policy changes), Timeline (admin activity), Bar chart (changes by administrator).",
              "script": "",
              "premium": "",
              "hw": "Cato Management (cloud); Cato Socket, Cato vSocket, Cato SDP Client consume policies — no per-box CLI audit trail.",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Configuration",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object span=1d",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.29",
              "n": "Cato SD-WAN Tunnel Health (Cato Networks)",
              "c": "critical",
              "f": "intermediate",
              "v": "Cato Sockets build IPsec/DTLS tunnels to the nearest PoP; when tunnels drop, the site loses cloud-delivered security, path selection, and centralized breakout. Measuring down events, duration, and time-to-recover supports SLA reporting and distinguishes transient blips from structural connectivity failures.",
              "t": "Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration)",
              "d": "Cato tunnel / site connectivity events; `sourcetype=cato:events` or `cato:sase`",
              "q": "index=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-7d\n| eval site=coalesce(site_name, site_id, socket_name, socket_serial)\n| eval tunnel_state=lower(coalesce(tunnel_state, tunnel_status, connection_state, link_state, \"\"))\n| where match(tunnel_state,\"down|disconnected|failed|lost|inactive|degraded\") OR match(lower(coalesce(event_sub_type, \"\")),\"(?i)tunnel.*down|socket.*down|disconnect\")\n| stats count earliest(_time) as first_event latest(_time) as last_event by site tunnel_state\n| eval window_sec=last_event-first_event\n| sort -count",
              "m": "Align `event_sub_type` / `tunnel_state` values with Cato’s feed documentation (labels differ between API and syslog forwarding). For MTTR, run a companion search pairing down events with subsequent up events per `site_id`. Page on any site with sustained tunnel-down beyond your RTO threshold.",
              "z": "Single value (sites currently down), Timeline (tunnel state), Table (longest outages 7d), Line chart (daily down minutes per site).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Cato Networks Events App](https://splunkbase.splunk.com/app/8037), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration).\n• Ensure the following data sources are available: Cato tunnel / site connectivity events; `sourcetype=cato:events` or `cato:sase`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign `event_sub_type` / `tunnel_state` values with Cato’s feed documentation (labels differ between API and syslog forwarding). For MTTR, run a companion search pairing down events with subsequent up events per `site_id`. Page on any site with sustained tunnel-down beyond your RTO threshold.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-7d\n| eval site=coalesce(site_name, site_id, socket_name, socket_serial)\n| eval tunnel_state=lower(coalesce(tunnel_state, tunnel_status, connection_state, link_state, \"\"))\n| where match(tunnel_state,\"down|disconnected|failed|lost|inactive|degraded\") OR match(lower(coalesce(event_sub_type, \"\")),\"(?i)tunnel.*down|socket.*down|disconnect\")\n| stats count earliest(_time) as first_event latest(_time) as last_event by site tunnel_state\n| eval window_sec=last_event-first_event\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cato SD-WAN Tunnel Health (Cato Networks)** — Cato Sockets build IPsec/DTLS tunnels to the nearest PoP; when tunnels drop, the site loses cloud-delivered security, path selection, and centralized breakout. Measuring down events, duration, and time-to-recover supports SLA reporting and distinguishes transient blips from structural connectivity failures.\n\nDocumented **Data sources**: Cato tunnel / site connectivity events; `sourcetype=cato:events` or `cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sase.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sase, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **site** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **tunnel_state** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(tunnel_state,\"down|disconnected|failed|lost|inactive|degraded\") OR match(lower(coalesce(event_sub_type, \"\")),\"(…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by site tunnel_state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **window_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cato SD-WAN Tunnel Health (Cato Networks)** — Cato Sockets build IPsec/DTLS tunnels to the nearest PoP; when tunnels drop, the site loses cloud-delivered security, path selection, and centralized breakout. Measuring down events, duration, and time-to-recover supports SLA reporting and distinguishes transient blips from structural connectivity failures.\n\nDocumented **Data sources**: Cato tunnel / site connectivity events; `sourcetype=cato:events` or `cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); `eventsFeed.py` from [catonetworks/cato-splunk-integration](https://github.com/catonetworks/cato-splunk-integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (sites currently down), Timeline (tunnel state), Table (longest outages 7d), Line chart (daily down minutes per site).",
              "script": "",
              "premium": "",
              "hw": "Cato Socket, Cato vSocket (site tunnel endpoints); Cato SDP Client uses separate session semantics — split dashboards accordingly.",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "Network_Traffic",
                "Network_Sessions"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "github"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.30",
              "n": "Cato SDP Client Connection Monitoring (Cato Networks)",
              "c": "high",
              "f": "intermediate",
              "v": "The Cato SDP client delivers ZTNA access for remote users. Connection, authentication, posture, and disconnect-reason events show who cannot reach applications and why — distinguishing client bugs, credential issues, MFA failures, and policy blocks without guessing from help-desk anecdotes.",
              "t": "Cato Networks Events App (Splunkbase 8037); Cato Events API (client / ZTNA-related event families)",
              "d": "Cato SDP / ZTNA / client session events; `sourcetype=cato:events` or `cato:sase`",
              "q": "index=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-24h\n| eval u=lower(coalesce(user_name, username, user, email, \"\"))\n| eval auth_ok=if(match(lower(coalesce(auth_result, auth_status, \"\")),\"(?i)success|allow|ok\"),1,0)\n| eval reason=coalesce(failure_reason, disconnect_reason, error_message, deny_reason, \"n/a\")\n| where match(lower(coalesce(event_sub_type, client_event, \"\")),\"(?i)sdp|ztna|client|vpn|session\")\n| stats count sum(auth_ok) as successes values(reason) as reasons by u device_os app_version\n| eval failures=count-successes\n| sort -failures",
              "m": "Normalize `app_version` and alert when a specific version shows elevated failures (upgrade campaign). Build a lookup for “known bad” posture outcomes. Feed top `failure_reason` strings back to IT for knowledge-base articles. Respect privacy: restrict user fields to security roles.",
              "z": "Bar chart (failures by reason), Table (users with repeated auth failures), Pie chart (posture pass vs fail), Line chart (daily active SDP users).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Cato Networks Events App](https://splunkbase.splunk.com/app/8037), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1021",
                "T1219",
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cato Networks Events App (Splunkbase 8037); Cato Events API (client / ZTNA-related event families).\n• Ensure the following data sources are available: Cato SDP / ZTNA / client session events; `sourcetype=cato:events` or `cato:sase`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize `app_version` and alert when a specific version shows elevated failures (upgrade campaign). Build a lookup for “known bad” posture outcomes. Feed top `failure_reason` strings back to IT for knowledge-base articles. Respect privacy: restrict user fields to security roles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-24h\n| eval u=lower(coalesce(user_name, username, user, email, \"\"))\n| eval auth_ok=if(match(lower(coalesce(auth_result, auth_status, \"\")),\"(?i)success|allow|ok\"),1,0)\n| eval reason=coalesce(failure_reason, disconnect_reason, error_message, deny_reason, \"n/a\")\n| where match(lower(coalesce(event_sub_type, client_event, \"\")),\"(?i)sdp|ztna|client|vpn|session\")\n| stats count sum(auth_ok) as successes values(reason) as reasons by u device_os app_version\n| eval failures=count-successes\n| sort -failures\n```\n\nUnderstanding this SPL\n\n**Cato SDP Client Connection Monitoring (Cato Networks)** — The Cato SDP client delivers ZTNA access for remote users. Connection, authentication, posture, and disconnect-reason events show who cannot reach applications and why — distinguishing client bugs, credential issues, MFA failures, and policy blocks without guessing from help-desk anecdotes.\n\nDocumented **Data sources**: Cato SDP / ZTNA / client session events; `sourcetype=cato:events` or `cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); Cato Events API (client / ZTNA-related event families). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sase.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sase, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **u** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **auth_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **reason** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(lower(coalesce(event_sub_type, client_event, \"\")),\"(?i)sdp|ztna|client|vpn|session\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by u device_os app_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **failures** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure OR Authentication.action=success\n  by Authentication.user Authentication.action span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cato SDP Client Connection Monitoring (Cato Networks)** — The Cato SDP client delivers ZTNA access for remote users. Connection, authentication, posture, and disconnect-reason events show who cannot reach applications and why — distinguishing client bugs, credential issues, MFA failures, and policy blocks without guessing from help-desk anecdotes.\n\nDocumented **Data sources**: Cato SDP / ZTNA / client session events; `sourcetype=cato:events` or `cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); Cato Events API (client / ZTNA-related event families). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failures by reason), Table (users with repeated auth failures), Pie chart (posture pass vs fail), Line chart (daily active SDP users).",
              "script": "",
              "premium": "",
              "hw": "Cato SDP Client on laptops and mobile devices; enforcement still occurs through Cato cloud policy — correlate with Cato Socket sites for hybrid users when applicable.",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "Authentication",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure OR Authentication.action=success\n  by Authentication.user Authentication.action span=1h",
              "e": [
                "asterisk"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.31",
              "n": "Cato DLP and CASB Event Analysis (Cato Networks)",
              "c": "high",
              "f": "intermediate",
              "v": "Inline CASB and DLP inspect cloud application traffic for unsanctioned SaaS, sensitive data movement, and policy violations in one SASE pass. Aggregating these events highlights shadow IT growth, risky uploads, and repeat offenders before data leaves the organization’s control.",
              "t": "Cato Networks Events App (Splunkbase 8037); Cato SWG/CASB/DLP event categories via Events API",
              "d": "Cato CASB / DLP / SaaS governance events; `sourcetype=cato:events` or `cato:sase`",
              "q": "index=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-7d\n| eval cat=lower(coalesce(event_sub_type, saas_category, \"\"))\n| eval app=coalesce(app_name, saas_app, destination_app, \"unknown\")\n| eval pol=coalesce(policy_name, dlp_policy, casb_policy, \"unknown\")\n| eval viol=coalesce(violation_type, data_classification, sensitive_type, \"unspecified\")\n| where match(cat,\"dlp|casb|saas|shadow|upload|sanction|cloud\")\n| stats count by app pol viol action\n| sort -count",
              "m": "Map Cato severity to your insider-risk tiers. Correlate DLP blocks with HR flags only through approved processes. Create executive rollups: top unsanctioned apps, volume by data class, and trend after policy updates. Integrate with ticketing for mandatory review of high-severity violations.",
              "z": "Bar chart (violations by SaaS app), Stacked bar (action × classification), Table (top policies triggered), Line chart (shadow IT events / day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Cato Networks Events App](https://splunkbase.splunk.com/app/8037), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1048",
                "T1048.001",
                "T1102"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cato Networks Events App (Splunkbase 8037); Cato SWG/CASB/DLP event categories via Events API.\n• Ensure the following data sources are available: Cato CASB / DLP / SaaS governance events; `sourcetype=cato:events` or `cato:sase`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap Cato severity to your insider-risk tiers. Correlate DLP blocks with HR flags only through approved processes. Create executive rollups: top unsanctioned apps, volume by data class, and trend after policy updates. Integrate with ticketing for mandatory review of high-severity violations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sase sourcetype IN (\"cato:events\",\"cato:sase\") earliest=-7d\n| eval cat=lower(coalesce(event_sub_type, saas_category, \"\"))\n| eval app=coalesce(app_name, saas_app, destination_app, \"unknown\")\n| eval pol=coalesce(policy_name, dlp_policy, casb_policy, \"unknown\")\n| eval viol=coalesce(violation_type, data_classification, sensitive_type, \"unspecified\")\n| where match(cat,\"dlp|casb|saas|shadow|upload|sanction|cloud\")\n| stats count by app pol viol action\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cato DLP and CASB Event Analysis (Cato Networks)** — Inline CASB and DLP inspect cloud application traffic for unsanctioned SaaS, sensitive data movement, and policy violations in one SASE pass. Aggregating these events highlights shadow IT growth, risky uploads, and repeat offenders before data leaves the organization’s control.\n\nDocumented **Data sources**: Cato CASB / DLP / SaaS governance events; `sourcetype=cato:events` or `cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); Cato SWG/CASB/DLP event categories via Events API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sase.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sase, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cat** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **app** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **pol** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **viol** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(cat,\"dlp|casb|saas|shadow|upload|sanction|cloud\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by app pol viol action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.user All_Traffic.url span=1d\n| where count > 100\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cato DLP and CASB Event Analysis (Cato Networks)** — Inline CASB and DLP inspect cloud application traffic for unsanctioned SaaS, sensitive data movement, and policy violations in one SASE pass. Aggregating these events highlights shadow IT growth, risky uploads, and repeat offenders before data leaves the organization’s control.\n\nDocumented **Data sources**: Cato CASB / DLP / SaaS governance events; `sourcetype=cato:events` or `cato:sase`. **App/TA** (typical add-on context): Cato Networks Events App (Splunkbase 8037); Cato SWG/CASB/DLP event categories via Events API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Filters the current rows with `where count > 100` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (violations by SaaS app), Stacked bar (action × classification), Table (top policies triggered), Line chart (shadow IT events / day).",
              "script": "",
              "premium": "",
              "hw": "Cato Socket, Cato vSocket, Cato SDP Client — all user and site traffic eligible for CASB/DLP inspection at the PoP.",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.user All_Traffic.url span=1d\n| where count > 100",
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.32",
              "n": "Netskope Cloud App Risk Assessment",
              "c": "high",
              "f": "beginner",
              "v": "Netskope assigns Cloud Confidence Index (CCI) scores to SaaS applications. Tracking high-risk (low CCI) app usage reveals shadow IT and data exposure. Trending CCI across the organization supports SaaS governance and vendor risk management.",
              "t": "Netskope Add-on for Splunk (Splunkbase 3808)",
              "d": "`sourcetype=netskope:events` or `netskope:application`",
              "q": "index=casb sourcetype=\"netskope:events\" earliest=-7d\n| where cci_score < 50\n| stats dc(user) as users, sum(numbytes) as bytes by app_name, cci_score, category\n| eval gb=round(bytes/1073741824,2)\n| sort -users",
              "m": "Configure the Netskope Add-on REST input for application events. Map `cci_score` (0–100) to risk tiers (0–30 critical, 31–50 high). Alert when new high-risk apps appear with >5 users. Report weekly on shadow IT growth and data volume to unsanctioned apps.",
              "z": "Table (risky apps by users and data volume), Pie chart (CCI distribution), Bar chart (top unsanctioned apps), Line chart (shadow IT trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1102"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Netskope Add-on for Splunk (Splunkbase 3808).\n• Ensure the following data sources are available: `sourcetype=netskope:events` or `netskope:application`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure the Netskope Add-on REST input for application events. Map `cci_score` (0–100) to risk tiers (0–30 critical, 31–50 high). Alert when new high-risk apps appear with >5 users. Report weekly on shadow IT growth and data volume to unsanctioned apps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=casb sourcetype=\"netskope:events\" earliest=-7d\n| where cci_score < 50\n| stats dc(user) as users, sum(numbytes) as bytes by app_name, cci_score, category\n| eval gb=round(bytes/1073741824,2)\n| sort -users\n```\n\nUnderstanding this SPL\n\n**Netskope Cloud App Risk Assessment** — Netskope assigns Cloud Confidence Index (CCI) scores to SaaS applications. Tracking high-risk (low CCI) app usage reveals shadow IT and data exposure. Trending CCI across the organization supports SaaS governance and vendor risk management.\n\nDocumented **Data sources**: `sourcetype=netskope:events` or `netskope:application`. **App/TA** (typical add-on context): Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: casb; **sourcetype**: netskope:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=casb, sourcetype=\"netskope:events\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where cci_score < 50` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by app_name, cci_score, category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes dc(All_Traffic.user) as users\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.app span=1d\n| sort -bytes\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Netskope Cloud App Risk Assessment** — Netskope assigns Cloud Confidence Index (CCI) scores to SaaS applications. Tracking high-risk (low CCI) app usage reveals shadow IT and data exposure. Trending CCI across the organization supports SaaS governance and vendor risk management.\n\nDocumented **Data sources**: `sourcetype=netskope:events` or `netskope:application`. **App/TA** (typical add-on context): Netskope Add-on for Splunk (Splunkbase 3808). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (risky apps by users and data volume), Pie chart (CCI distribution), Bar chart (top unsanctioned apps), Line chart (shadow IT trend).",
              "script": "",
              "premium": "",
              "hw": "Netskope Security Cloud (cloud-delivered), Netskope Client (endpoint agent)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic",
                "Web"
              ],
              "qs": "| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes dc(All_Traffic.user) as users\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.app span=1d\n| sort -bytes",
              "e": [
                "netskope"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Netskope Add-on for Splunk",
                "id": 3808,
                "url": "https://splunkbase.splunk.com/app/3808"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.33",
              "n": "Netskope DLP Policy Violations",
              "c": "critical",
              "f": "intermediate",
              "v": "Netskope inline DLP inspects uploads, downloads, and cloud-to-cloud sharing for sensitive data (PII, PCI, PHI, IP). Tracking violations by policy, user, and destination app identifies repeat offenders, miscategorized data, and policy gaps before a breach.",
              "t": "Netskope Add-on for Splunk (Splunkbase 3808)",
              "d": "`sourcetype=netskope:alert` (DLP alerts)",
              "q": "index=casb sourcetype=\"netskope:alert\" alert_type=\"DLP\" earliest=-7d\n| stats count by dlp_profile, dlp_rule, user, app_name, action\n| sort -count",
              "m": "Configure the Netskope Add-on iterator input for alerts. Map `dlp_profile` names to data classification. Alert on block actions against critical data categories. Feed repeat offender reports to HR/compliance. Correlate with file hash for forensics.",
              "z": "Bar chart (violations by DLP profile), Table (top users × apps), Stacked bar (actions: block/alert/allow), Line chart (violations/day trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [CIM: DLP](https://docs.splunk.com/Documentation/CIM/latest/User/DLP)",
              "mitre": [
                "T1048",
                "T1048.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Netskope Add-on for Splunk (Splunkbase 3808).\n• Ensure the following data sources are available: `sourcetype=netskope:alert` (DLP alerts).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure the Netskope Add-on iterator input for alerts. Map `dlp_profile` names to data classification. Alert on block actions against critical data categories. Feed repeat offender reports to HR/compliance. Correlate with file hash for forensics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=casb sourcetype=\"netskope:alert\" alert_type=\"DLP\" earliest=-7d\n| stats count by dlp_profile, dlp_rule, user, app_name, action\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Netskope DLP Policy Violations** — Netskope inline DLP inspects uploads, downloads, and cloud-to-cloud sharing for sensitive data (PII, PCI, PHI, IP). Tracking violations by policy, user, and destination app identifies repeat offenders, miscategorized data, and policy gaps before a breach.\n\nDocumented **Data sources**: `sourcetype=netskope:alert` (DLP alerts). **App/TA** (typical add-on context): Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: casb; **sourcetype**: netskope:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=casb, sourcetype=\"netskope:alert\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dlp_profile, dlp_rule, user, app_name, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.user, DLP_Incidents.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Netskope DLP Policy Violations** — Netskope inline DLP inspects uploads, downloads, and cloud-to-cloud sharing for sensitive data (PII, PCI, PHI, IP). Tracking violations by policy, user, and destination app identifies repeat offenders, miscategorized data, and policy gaps before a breach.\n\nDocumented **Data sources**: `sourcetype=netskope:alert` (DLP alerts). **App/TA** (typical add-on context): Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Data_Loss_Prevention.DLP_Incidents` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (violations by DLP profile), Table (top users × apps), Stacked bar (actions: block/alert/allow), Line chart (violations/day trend).",
              "script": "",
              "premium": "",
              "hw": "Netskope Security Cloud, Netskope Client",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "DLP"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Data_Loss_Prevention.DLP_Incidents by DLP_Incidents.user, DLP_Incidents.action | sort - count",
              "e": [
                "netskope"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Netskope Add-on for Splunk",
                "id": 3808,
                "url": "https://splunkbase.splunk.com/app/3808"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.34",
              "n": "Netskope Threat Protection Events",
              "c": "critical",
              "f": "intermediate",
              "v": "Netskope inspects web and cloud traffic for malware, phishing, and exploits inline. Threat events reveal attack vectors targeting users through sanctioned and unsanctioned cloud apps — a blind spot for traditional perimeter firewalls.",
              "t": "Netskope Add-on for Splunk (Splunkbase 3808)",
              "d": "`sourcetype=netskope:alert` (malware/threat alerts)",
              "q": "index=casb sourcetype=\"netskope:alert\" alert_type IN (\"Malware\", \"malsite\", \"Compromised Credential\") earliest=-24h\n| stats count by alert_type, threat_name, user, app_name, action\n| sort -count",
              "m": "Configure threat alert iterator. Correlate `threat_name` with threat intel feeds. Alert on any malware block events. Escalate compromised credential alerts immediately. Report on threat type distribution for security posture assessment.",
              "z": "Bar chart (threats by type), Table (affected users), Timeline (threat events), Pie chart (threat categories).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [
                "T1071.001",
                "T1568.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Netskope Add-on for Splunk (Splunkbase 3808).\n• Ensure the following data sources are available: `sourcetype=netskope:alert` (malware/threat alerts).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure threat alert iterator. Correlate `threat_name` with threat intel feeds. Alert on any malware block events. Escalate compromised credential alerts immediately. Report on threat type distribution for security posture assessment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=casb sourcetype=\"netskope:alert\" alert_type IN (\"Malware\", \"malsite\", \"Compromised Credential\") earliest=-24h\n| stats count by alert_type, threat_name, user, app_name, action\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Netskope Threat Protection Events** — Netskope inspects web and cloud traffic for malware, phishing, and exploits inline. Threat events reveal attack vectors targeting users through sanctioned and unsanctioned cloud apps — a blind spot for traditional perimeter firewalls.\n\nDocumented **Data sources**: `sourcetype=netskope:alert` (malware/threat alerts). **App/TA** (typical add-on context): Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: casb; **sourcetype**: netskope:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=casb, sourcetype=\"netskope:alert\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by alert_type, threat_name, user, app_name, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.user Malware_Attacks.signature span=1h\n| where count > 0\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Netskope Threat Protection Events** — Netskope inspects web and cloud traffic for malware, phishing, and exploits inline. Threat events reveal attack vectors targeting users through sanctioned and unsanctioned cloud apps — a blind spot for traditional perimeter firewalls.\n\nDocumented **Data sources**: `sourcetype=netskope:alert` (malware/threat alerts). **App/TA** (typical add-on context): Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Filters the current rows with `where count > 0` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (threats by type), Table (affected users), Timeline (threat events), Pie chart (threat categories).",
              "script": "",
              "premium": "",
              "hw": "Netskope Security Cloud, Netskope Client",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware",
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.user Malware_Attacks.signature span=1h\n| where count > 0",
              "e": [
                "netskope"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Netskope Add-on for Splunk",
                "id": 3808,
                "url": "https://splunkbase.splunk.com/app/3808"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.35",
              "n": "Netskope SWG Web Category Blocking",
              "c": "high",
              "f": "beginner",
              "v": "Tracks blocked web requests by URL category across the Netskope SWG — tunes real-time policy and validates acceptable-use enforcement for remote and office users alike.",
              "t": "Netskope Add-on for Splunk (Splunkbase 3808)",
              "d": "`sourcetype=netskope:events` (web transactions)",
              "q": "index=proxy sourcetype=\"netskope:events\" earliest=-7d\n| where action=\"blocked\"\n| stats count by category, user, policy_name\n| sort -count",
              "m": "Map Netskope URL categories to your acceptable-use policy. Alert when blocks spike for a category after a policy change. Report on top blocked categories weekly for policy review.",
              "z": "Bar chart (blocks by category), Table (top blocked users), Line chart (daily blocks trend), Pie chart (category distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Netskope Add-on for Splunk (Splunkbase 3808).\n• Ensure the following data sources are available: `sourcetype=netskope:events` (web transactions).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap Netskope URL categories to your acceptable-use policy. Alert when blocks spike for a category after a policy change. Report on top blocked categories weekly for policy review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"netskope:events\" earliest=-7d\n| where action=\"blocked\"\n| stats count by category, user, policy_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Netskope SWG Web Category Blocking** — Tracks blocked web requests by URL category across the Netskope SWG — tunes real-time policy and validates acceptable-use enforcement for remote and office users alike.\n\nDocumented **Data sources**: `sourcetype=netskope:events` (web transactions). **App/TA** (typical add-on context): Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: netskope:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"netskope:events\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"blocked\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by category, user, policy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Netskope SWG Web Category Blocking** — Tracks blocked web requests by URL category across the Netskope SWG — tunes real-time policy and validates acceptable-use enforcement for remote and office users alike.\n\nDocumented **Data sources**: `sourcetype=netskope:events` (web transactions). **App/TA** (typical add-on context): Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (blocks by category), Table (top blocked users), Line chart (daily blocks trend), Pie chart (category distribution).",
              "script": "",
              "premium": "",
              "hw": "Netskope Security Cloud, Netskope Client",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "Web",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d",
              "e": [
                "netskope"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Netskope Add-on for Splunk",
                "id": 3808,
                "url": "https://splunkbase.splunk.com/app/3808"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.36",
              "n": "Netskope Private Access (NPA) Health",
              "c": "high",
              "f": "intermediate",
              "v": "Netskope Private Access (NPA) is the ZTNA component — replacing VPN for private application access. Publisher (connector) health, error rates, and latency determine whether users can reach internal apps.",
              "t": "Netskope Add-on for Splunk (Splunkbase 3808)",
              "d": "`sourcetype=netskope:connection` or `netskope:network` (NPA connection events)",
              "q": "index=casb sourcetype=\"netskope:connection\" earliest=-24h\n| eval healthy=if(match(lower(status),\"(?i)success|established|active\") AND latency_ms<1000,1,0)\n| stats count sum(healthy) as ok by publisher_name, private_app\n| eval error_pct=round(100*(count-ok)/count,1)\n| where error_pct > 5\n| sort -error_pct",
              "m": "Map `publisher_name` to datacenter/region. Alert when any publisher shows >10% error rate or latency p95 exceeds 2s. Report on NPA adoption vs legacy VPN.",
              "z": "Table (unhealthy publishers), Single value (publishers in error), Line chart (error rate trend), Bar chart (latency by app).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Netskope Add-on for Splunk (Splunkbase 3808).\n• Ensure the following data sources are available: `sourcetype=netskope:connection` or `netskope:network` (NPA connection events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `publisher_name` to datacenter/region. Alert when any publisher shows >10% error rate or latency p95 exceeds 2s. Report on NPA adoption vs legacy VPN.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=casb sourcetype=\"netskope:connection\" earliest=-24h\n| eval healthy=if(match(lower(status),\"(?i)success|established|active\") AND latency_ms<1000,1,0)\n| stats count sum(healthy) as ok by publisher_name, private_app\n| eval error_pct=round(100*(count-ok)/count,1)\n| where error_pct > 5\n| sort -error_pct\n```\n\nUnderstanding this SPL\n\n**Netskope Private Access (NPA) Health** — Netskope Private Access (NPA) is the ZTNA component — replacing VPN for private application access. Publisher (connector) health, error rates, and latency determine whether users can reach internal apps.\n\nDocumented **Data sources**: `sourcetype=netskope:connection` or `netskope:network` (NPA connection events). **App/TA** (typical add-on context): Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: casb; **sourcetype**: netskope:connection. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=casb, sourcetype=\"netskope:connection\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **healthy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by publisher_name, private_app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_pct > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unhealthy publishers), Single value (publishers in error), Line chart (error rate trend), Bar chart (latency by app).",
              "script": "",
              "premium": "",
              "hw": "Netskope Security Cloud, Netskope Publisher (on-prem connector), Netskope Client",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "netskope"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Netskope Add-on for Splunk",
                "id": 3808,
                "url": "https://splunkbase.splunk.com/app/3808"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.37",
              "n": "Netskope CASB Inline Policy Enforcement",
              "c": "high",
              "f": "beginner",
              "v": "Netskope CASB enforces activity-level policies on sanctioned SaaS apps (block upload, read-only, quarantine). Monitoring enforcement actions validates that cloud governance policies are working and identifies gaps.",
              "t": "Netskope Add-on for Splunk (Splunkbase 3808)",
              "d": "`sourcetype=netskope:events` (cloud app activity)",
              "q": "index=casb sourcetype=\"netskope:events\" earliest=-7d\n| where action IN (\"block\", \"quarantine\", \"restrict\", \"coach\")\n| stats count by app_name, activity, action, policy_name\n| sort -count",
              "m": "Map `activity` (upload, download, share, edit, delete) to your DLP/governance controls. Alert on new apps triggering policies. Report on enforcement effectiveness: block vs coach ratio.",
              "z": "Stacked bar (actions by app), Table (policy triggers), Pie chart (activity types blocked), Line chart (enforcement trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1048",
                "T1041"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Netskope Add-on for Splunk (Splunkbase 3808).\n• Ensure the following data sources are available: `sourcetype=netskope:events` (cloud app activity).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `activity` (upload, download, share, edit, delete) to your DLP/governance controls. Alert on new apps triggering policies. Report on enforcement effectiveness: block vs coach ratio.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=casb sourcetype=\"netskope:events\" earliest=-7d\n| where action IN (\"block\", \"quarantine\", \"restrict\", \"coach\")\n| stats count by app_name, activity, action, policy_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Netskope CASB Inline Policy Enforcement** — Netskope CASB enforces activity-level policies on sanctioned SaaS apps (block upload, read-only, quarantine). Monitoring enforcement actions validates that cloud governance policies are working and identifies gaps.\n\nDocumented **Data sources**: `sourcetype=netskope:events` (cloud app activity). **App/TA** (typical add-on context): Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: casb; **sourcetype**: netskope:events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=casb, sourcetype=\"netskope:events\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action IN (\"block\", \"quarantine\", \"restrict\", \"coach\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by app_name, activity, action, policy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.url Web.user span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Netskope CASB Inline Policy Enforcement** — Netskope CASB enforces activity-level policies on sanctioned SaaS apps (block upload, read-only, quarantine). Monitoring enforcement actions validates that cloud governance policies are working and identifies gaps.\n\nDocumented **Data sources**: `sourcetype=netskope:events` (cloud app activity). **App/TA** (typical add-on context): Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (actions by app), Table (policy triggers), Pie chart (activity types blocked), Line chart (enforcement trend).",
              "script": "",
              "premium": "",
              "hw": "Netskope Security Cloud, Netskope Client",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Network_Traffic",
                "Web"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.url Web.user span=1d",
              "e": [
                "netskope"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Netskope Add-on for Splunk",
                "id": 3808,
                "url": "https://splunkbase.splunk.com/app/3808"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.38",
              "n": "Netskope Admin Audit Trail",
              "c": "medium",
              "f": "beginner",
              "v": "Administrative changes to Netskope policies, steering configs, and tenant settings have global impact. Audit trail ensures accountability and change correlation.",
              "t": "Netskope Add-on for Splunk (Splunkbase 3808)",
              "d": "`sourcetype=netskope:audit`",
              "q": "index=casb sourcetype=\"netskope:audit\" earliest=-30d\n| stats count earliest(_time) as first latest(_time) as last values(operation) as ops by admin_user, object_type\n| sort -last",
              "m": "Alert on after-hours policy changes or bulk rule modifications. Correlate admin changes with enforcement anomalies in UC-17.3.37. Require change tickets for policy modifications.",
              "z": "Table (recent changes), Timeline (admin activity), Bar chart (changes by admin).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Netskope Add-on for Splunk](https://splunkbase.splunk.com/app/3808), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1562.004",
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Netskope Add-on for Splunk (Splunkbase 3808).\n• Ensure the following data sources are available: `sourcetype=netskope:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert on after-hours policy changes or bulk rule modifications. Correlate admin changes with enforcement anomalies in UC-17.3.37. Require change tickets for policy modifications.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=casb sourcetype=\"netskope:audit\" earliest=-30d\n| stats count earliest(_time) as first latest(_time) as last values(operation) as ops by admin_user, object_type\n| sort -last\n```\n\nUnderstanding this SPL\n\n**Netskope Admin Audit Trail** — Administrative changes to Netskope policies, steering configs, and tenant settings have global impact. Audit trail ensures accountability and change correlation.\n\nDocumented **Data sources**: `sourcetype=netskope:audit`. **App/TA** (typical add-on context): Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: casb; **sourcetype**: netskope:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=casb, sourcetype=\"netskope:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by admin_user, object_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Netskope Admin Audit Trail** — Administrative changes to Netskope policies, steering configs, and tenant settings have global impact. Audit trail ensures accountability and change correlation.\n\nDocumented **Data sources**: `sourcetype=netskope:audit`. **App/TA** (typical add-on context): Netskope Add-on for Splunk (Splunkbase 3808). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recent changes), Timeline (admin activity), Bar chart (changes by admin).",
              "script": "",
              "premium": "",
              "hw": "Netskope Security Cloud (tenant admin)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Configuration",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object span=1d",
              "e": [
                "netskope"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Netskope App for Splunk",
                  "id": 6042,
                  "url": "https://splunkbase.splunk.com/app/6042",
                  "desc": "Dashboards for Netskope cloud security, DLP, threat protection, and CASB events",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Netskope Add-on for Splunk",
                "id": 3808,
                "url": "https://splunkbase.splunk.com/app/3808"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.39",
              "n": "FortiSASE SWG Policy Violation Trends (Fortinet)",
              "c": "high",
              "f": "beginner",
              "v": "FortiSASE cloud-delivered SWG enforces web filtering for remote and branch users through FortiGate NGFW policies in the cloud. Trending blocked categories and rules identifies policy drift and shadow IT.",
              "t": "`TA-fortinet_fortigate` (FortiSASE logs use FortiGate syslog format)",
              "d": "`sourcetype=fgt_utm` (FortiSASE web filter logs via syslog)",
              "q": "index=proxy sourcetype=\"fgt_utm\" subtype=\"webfilter\" earliest=-7d\n| where action=\"blocked\"\n| stats count by catdesc, user, policyname\n| sort -count",
              "m": "Forward FortiSASE logs via syslog to Splunk HF. The `TA-fortinet_fortigate` parses FortiSASE traffic identically to on-prem FortiGate. Alert when daily blocks exceed 2× 7-day baseline. Segment by FortiSASE PoP region for regional policy analysis.",
              "z": "Bar chart (blocks by category), Table (top blocked users), Line chart (daily block trend), Pie chart (category distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-fortinet_fortigate` (FortiSASE logs use FortiGate syslog format).\n• Ensure the following data sources are available: `sourcetype=fgt_utm` (FortiSASE web filter logs via syslog).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward FortiSASE logs via syslog to Splunk HF. The `TA-fortinet_fortigate` parses FortiSASE traffic identically to on-prem FortiGate. Alert when daily blocks exceed 2× 7-day baseline. Segment by FortiSASE PoP region for regional policy analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"fgt_utm\" subtype=\"webfilter\" earliest=-7d\n| where action=\"blocked\"\n| stats count by catdesc, user, policyname\n| sort -count\n```\n\nUnderstanding this SPL\n\n**FortiSASE SWG Policy Violation Trends (Fortinet)** — FortiSASE cloud-delivered SWG enforces web filtering for remote and branch users through FortiGate NGFW policies in the cloud. Trending blocked categories and rules identifies policy drift and shadow IT.\n\nDocumented **Data sources**: `sourcetype=fgt_utm` (FortiSASE web filter logs via syslog). **App/TA** (typical add-on context): `TA-fortinet_fortigate` (FortiSASE logs use FortiGate syslog format). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: fgt_utm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"fgt_utm\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"blocked\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by catdesc, user, policyname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiSASE SWG Policy Violation Trends (Fortinet)** — FortiSASE cloud-delivered SWG enforces web filtering for remote and branch users through FortiGate NGFW policies in the cloud. Trending blocked categories and rules identifies policy drift and shadow IT.\n\nDocumented **Data sources**: `sourcetype=fgt_utm` (FortiSASE web filter logs via syslog). **App/TA** (typical add-on context): `TA-fortinet_fortigate` (FortiSASE logs use FortiGate syslog format). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (blocks by category), Table (top blocked users), Line chart (daily block trend), Pie chart (category distribution).",
              "script": "",
              "premium": "",
              "hw": "FortiSASE (cloud), FortiClient (endpoint agent), FortiGate (on-prem with SASE integration)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "Web",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d",
              "e": [
                "fortinet",
                "syslog"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.40",
              "n": "FortiSASE ZTNA Application Access (Fortinet)",
              "c": "high",
              "f": "intermediate",
              "v": "FortiSASE ZTNA tags replace traditional VPN by granting per-application access based on device posture and identity. Monitoring ZTNA tag matches and application access patterns validates zero-trust enforcement.",
              "t": "`TA-fortinet_fortigate` (FortiSASE ZTNA logs)",
              "d": "`sourcetype=fgt_traffic` (FortiSASE ZTNA sessions)",
              "q": "index=zt sourcetype=\"fgt_traffic\" subtype=\"forward\" earliest=-24h\n| where isnotnull(ztna_tag) OR isnotnull(ztna_rule)\n| stats count dc(user) as users by ztna_tag, dstip, action\n| sort -count",
              "m": "Map `ztna_tag` to EMS posture profiles. Alert when ZTNA deny rate exceeds baseline (posture drift or misconfigured tags). Report on ZTNA adoption vs legacy VPN connections.",
              "z": "Table (ZTNA tag access), Bar chart (actions by tag), Line chart (ZTNA sessions trend), Single value (ZTNA vs VPN ratio).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1021",
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-fortinet_fortigate` (FortiSASE ZTNA logs).\n• Ensure the following data sources are available: `sourcetype=fgt_traffic` (FortiSASE ZTNA sessions).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `ztna_tag` to EMS posture profiles. Alert when ZTNA deny rate exceeds baseline (posture drift or misconfigured tags). Report on ZTNA adoption vs legacy VPN connections.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"fgt_traffic\" subtype=\"forward\" earliest=-24h\n| where isnotnull(ztna_tag) OR isnotnull(ztna_rule)\n| stats count dc(user) as users by ztna_tag, dstip, action\n| sort -count\n```\n\nUnderstanding this SPL\n\n**FortiSASE ZTNA Application Access (Fortinet)** — FortiSASE ZTNA tags replace traditional VPN by granting per-application access based on device posture and identity. Monitoring ZTNA tag matches and application access patterns validates zero-trust enforcement.\n\nDocumented **Data sources**: `sourcetype=fgt_traffic` (FortiSASE ZTNA sessions). **App/TA** (typical add-on context): `TA-fortinet_fortigate` (FortiSASE ZTNA logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: fgt_traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"fgt_traffic\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(ztna_tag) OR isnotnull(ztna_rule)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ztna_tag, dstip, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.action All_Traffic.dest span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiSASE ZTNA Application Access (Fortinet)** — FortiSASE ZTNA tags replace traditional VPN by granting per-application access based on device posture and identity. Monitoring ZTNA tag matches and application access patterns validates zero-trust enforcement.\n\nDocumented **Data sources**: `sourcetype=fgt_traffic` (FortiSASE ZTNA sessions). **App/TA** (typical add-on context): `TA-fortinet_fortigate` (FortiSASE ZTNA logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (ZTNA tag access), Bar chart (actions by tag), Line chart (ZTNA sessions trend), Single value (ZTNA vs VPN ratio).",
              "script": "",
              "premium": "",
              "hw": "FortiSASE (cloud), FortiClient (EMS-managed endpoint), FortiGate ZTNA proxy",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Performance"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.action All_Traffic.dest span=1h",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.41",
              "n": "FortiSASE Threat Detection Events (Fortinet)",
              "c": "critical",
              "f": "intermediate",
              "v": "FortiSASE applies IPS, anti-malware, and sandboxing inline for all users. Threat events reveal attack vectors bypassing traditional perimeter security — critical for remote-first organizations.",
              "t": "`TA-fortinet_fortigate` (FortiSASE IPS/AV logs)",
              "d": "`sourcetype=fgt_utm` subtype IN (ips, virus, anomaly)",
              "q": "index=ids sourcetype=\"fgt_utm\" subtype IN (\"ips\",\"virus\",\"anomaly\") earliest=-24h\n| stats count by attack, severity, user, srcip, action\n| sort -count",
              "m": "Forward FortiSASE UTM logs. Map `attack` names to CVE and MITRE ATT&CK. Alert on critical-severity blocks. Correlate with FortiSandbox verdicts for zero-day detections.",
              "z": "Bar chart (threats by severity), Table (top attacks), Timeline (detection events), Pie chart (action distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [
                "T1562",
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-fortinet_fortigate` (FortiSASE IPS/AV logs).\n• Ensure the following data sources are available: `sourcetype=fgt_utm` subtype IN (ips, virus, anomaly).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward FortiSASE UTM logs. Map `attack` names to CVE and MITRE ATT&CK. Alert on critical-severity blocks. Correlate with FortiSandbox verdicts for zero-day detections.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ids sourcetype=\"fgt_utm\" subtype IN (\"ips\",\"virus\",\"anomaly\") earliest=-24h\n| stats count by attack, severity, user, srcip, action\n| sort -count\n```\n\nUnderstanding this SPL\n\n**FortiSASE Threat Detection Events (Fortinet)** — FortiSASE applies IPS, anti-malware, and sandboxing inline for all users. Threat events reveal attack vectors bypassing traditional perimeter security — critical for remote-first organizations.\n\nDocumented **Data sources**: `sourcetype=fgt_utm` subtype IN (ips, virus, anomaly). **App/TA** (typical add-on context): `TA-fortinet_fortigate` (FortiSASE IPS/AV logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ids; **sourcetype**: fgt_utm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ids, sourcetype=\"fgt_utm\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by attack, severity, user, srcip, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.signature IDS_Attacks.severity span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiSASE Threat Detection Events (Fortinet)** — FortiSASE applies IPS, anti-malware, and sandboxing inline for all users. Threat events reveal attack vectors bypassing traditional perimeter security — critical for remote-first organizations.\n\nDocumented **Data sources**: `sourcetype=fgt_utm` subtype IN (ips, virus, anomaly). **App/TA** (typical add-on context): `TA-fortinet_fortigate` (FortiSASE IPS/AV logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (threats by severity), Table (top attacks), Timeline (detection events), Pie chart (action distribution).",
              "script": "",
              "premium": "",
              "hw": "FortiSASE (cloud), FortiClient (endpoint), FortiSandbox Cloud",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection",
                "Malware"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.signature IDS_Attacks.severity span=1h",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.42",
              "n": "FortiSASE Thin Edge Health (Fortinet)",
              "c": "high",
              "f": "intermediate",
              "v": "FortiSASE thin edges (FortiExtender, FortiGate in SASE mode) connect branches to the nearest FortiSASE PoP. Tunnel state, latency, and failover events determine branch connectivity and SLA compliance.",
              "t": "`TA-fortinet_fortigate` (FortiSASE tunnel/system logs)",
              "d": "`sourcetype=fgt_event` subtype IN (vpn, system)",
              "q": "index=sase sourcetype=\"fgt_event\" subtype IN (\"vpn\",\"system\") earliest=-24h\n| eval down=if(match(lower(msg),\"(?i)tunnel.*down|ipsec.*fail|phase[12].*fail\"),1,0)\n| where down=1\n| stats count latest(_time) as last_event by tunnelid, remip, tunneltype\n| sort -count",
              "m": "Map `remip` to branch site via CMDB lookup. Alert on sustained tunnel-down (>15 min). Report on failover frequency and PoP selection patterns.",
              "z": "Table (down tunnels), Map (branch sites), Line chart (tunnel availability %), Single value (sites currently down).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-fortinet_fortigate` (FortiSASE tunnel/system logs).\n• Ensure the following data sources are available: `sourcetype=fgt_event` subtype IN (vpn, system).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `remip` to branch site via CMDB lookup. Alert on sustained tunnel-down (>15 min). Report on failover frequency and PoP selection patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sase sourcetype=\"fgt_event\" subtype IN (\"vpn\",\"system\") earliest=-24h\n| eval down=if(match(lower(msg),\"(?i)tunnel.*down|ipsec.*fail|phase[12].*fail\"),1,0)\n| where down=1\n| stats count latest(_time) as last_event by tunnelid, remip, tunneltype\n| sort -count\n```\n\nUnderstanding this SPL\n\n**FortiSASE Thin Edge Health (Fortinet)** — FortiSASE thin edges (FortiExtender, FortiGate in SASE mode) connect branches to the nearest FortiSASE PoP. Tunnel state, latency, and failover events determine branch connectivity and SLA compliance.\n\nDocumented **Data sources**: `sourcetype=fgt_event` subtype IN (vpn, system). **App/TA** (typical add-on context): `TA-fortinet_fortigate` (FortiSASE tunnel/system logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sase; **sourcetype**: fgt_event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sase, sourcetype=\"fgt_event\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **down** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where down=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by tunnelid, remip, tunneltype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiSASE Thin Edge Health (Fortinet)** — FortiSASE thin edges (FortiExtender, FortiGate in SASE mode) connect branches to the nearest FortiSASE PoP. Tunnel state, latency, and failover events determine branch connectivity and SLA compliance.\n\nDocumented **Data sources**: `sourcetype=fgt_event` subtype IN (vpn, system). **App/TA** (typical add-on context): `TA-fortinet_fortigate` (FortiSASE tunnel/system logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (down tunnels), Map (branch sites), Line chart (tunnel availability %), Single value (sites currently down).",
              "script": "",
              "premium": "",
              "hw": "FortiSASE (cloud PoPs), FortiExtender 200F/400F, FortiGate SD-WAN appliances in FortiSASE mode",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src span=1h",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.43",
              "n": "FortiSASE Admin Configuration Audit (Fortinet)",
              "c": "medium",
              "f": "beginner",
              "v": "FortiSASE policies are centrally managed and affect all connected users globally. Configuration audit logs enable change tracking, compliance, and root-cause analysis when policies cause access issues.",
              "t": "`TA-fortinet_fortigate` (FortiSASE admin audit)",
              "d": "`sourcetype=fgt_event` subtype=system, logid related to admin/config",
              "q": "index=sase sourcetype=\"fgt_event\" subtype=\"system\" earliest=-30d\n| where match(lower(msg),\"(?i)config|policy|rule|admin.*login|object.*modified\")\n| stats count earliest(_time) as first latest(_time) as last values(msg) as actions by user, ui\n| sort -last",
              "m": "Forward FortiSASE management audit events. Alert on after-hours changes or bulk policy modifications. Correlate with ITSM change tickets.",
              "z": "Table (admin changes), Timeline (configuration events), Bar chart (changes by admin).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1562.004",
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-fortinet_fortigate` (FortiSASE admin audit).\n• Ensure the following data sources are available: `sourcetype=fgt_event` subtype=system, logid related to admin/config.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward FortiSASE management audit events. Alert on after-hours changes or bulk policy modifications. Correlate with ITSM change tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sase sourcetype=\"fgt_event\" subtype=\"system\" earliest=-30d\n| where match(lower(msg),\"(?i)config|policy|rule|admin.*login|object.*modified\")\n| stats count earliest(_time) as first latest(_time) as last values(msg) as actions by user, ui\n| sort -last\n```\n\nUnderstanding this SPL\n\n**FortiSASE Admin Configuration Audit (Fortinet)** — FortiSASE policies are centrally managed and affect all connected users globally. Configuration audit logs enable change tracking, compliance, and root-cause analysis when policies cause access issues.\n\nDocumented **Data sources**: `sourcetype=fgt_event` subtype=system, logid related to admin/config. **App/TA** (typical add-on context): `TA-fortinet_fortigate` (FortiSASE admin audit). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sase; **sourcetype**: fgt_event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sase, sourcetype=\"fgt_event\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(msg),\"(?i)config|policy|rule|admin.*login|object.*modified\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, ui** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FortiSASE Admin Configuration Audit (Fortinet)** — FortiSASE policies are centrally managed and affect all connected users globally. Configuration audit logs enable change tracking, compliance, and root-cause analysis when policies cause access issues.\n\nDocumented **Data sources**: `sourcetype=fgt_event` subtype=system, logid related to admin/config. **App/TA** (typical add-on context): `TA-fortinet_fortigate` (FortiSASE admin audit). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (admin changes), Timeline (configuration events), Bar chart (changes by admin).",
              "script": "",
              "premium": "",
              "hw": "FortiSASE (cloud management plane)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Configuration",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object span=1d",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Fortinet FortiGate",
                "id": 2846,
                "url": "https://splunkbase.splunk.com/app/2846"
              },
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.44",
              "n": "Check Point Harmony SASE Threat Prevention (Check Point)",
              "c": "critical",
              "f": "intermediate",
              "v": "Check Point Harmony SASE applies ThreatCloud AI-powered prevention (IPS, anti-malware, anti-bot, threat emulation) to all user traffic via cloud enforcement points. Detection events reveal threats targeting users regardless of location.",
              "t": "Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259)",
              "d": "`sourcetype=cp_log` (Check Point Log Exporter), Smart-1 Cloud logs",
              "q": "index=checkpoint sourcetype=\"cp_log\" earliest=-24h\n| where match(lower(product),\"(?i)ips|anti.?bot|anti.?malware|anti.?virus|threat.?emulation|threat.?extraction\")\n| stats count by protection_name, severity, src, action\n| sort -count",
              "m": "Configure Check Point Log Exporter or Smart-1 Cloud syslog forwarding to Splunk. Map `protection_name` to ThreatCloud categories. Alert on critical-severity detections. Correlate with MITRE ATT&CK via Check Point attack IDs.",
              "z": "Bar chart (detections by protection type), Table (top threats), Timeline (detection events), Pie chart (severity distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [CCX Add-on for Checkpoint Smart-1 Cloud](https://splunkbase.splunk.com/app/7259), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259).\n• Ensure the following data sources are available: `sourcetype=cp_log` (Check Point Log Exporter), Smart-1 Cloud logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Check Point Log Exporter or Smart-1 Cloud syslog forwarding to Splunk. Map `protection_name` to ThreatCloud categories. Alert on critical-severity detections. Correlate with MITRE ATT&CK via Check Point attack IDs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=checkpoint sourcetype=\"cp_log\" earliest=-24h\n| where match(lower(product),\"(?i)ips|anti.?bot|anti.?malware|anti.?virus|threat.?emulation|threat.?extraction\")\n| stats count by protection_name, severity, src, action\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Check Point Harmony SASE Threat Prevention (Check Point)** — Check Point Harmony SASE applies ThreatCloud AI-powered prevention (IPS, anti-malware, anti-bot, threat emulation) to all user traffic via cloud enforcement points. Detection events reveal threats targeting users regardless of location.\n\nDocumented **Data sources**: `sourcetype=cp_log` (Check Point Log Exporter), Smart-1 Cloud logs. **App/TA** (typical add-on context): Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: checkpoint; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=checkpoint, sourcetype=\"cp_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(product),\"(?i)ips|anti.?bot|anti.?malware|anti.?virus|threat.?emulation|threat.?extraction\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by protection_name, severity, src, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.signature IDS_Attacks.severity span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Harmony SASE Threat Prevention (Check Point)** — Check Point Harmony SASE applies ThreatCloud AI-powered prevention (IPS, anti-malware, anti-bot, threat emulation) to all user traffic via cloud enforcement points. Detection events reveal threats targeting users regardless of location.\n\nDocumented **Data sources**: `sourcetype=cp_log` (Check Point Log Exporter), Smart-1 Cloud logs. 
**App/TA** (typical add-on context): Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (detections by protection type), Table (top threats), Timeline (detection events), Pie chart (severity distribution).",
              "script": "",
              "premium": "",
              "hw": "Check Point Harmony SASE (cloud), Harmony Endpoint (client), Smart-1 Cloud (management)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection",
                "Malware"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.signature IDS_Attacks.severity span=1h",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.45",
              "n": "Check Point Harmony SASE Internet Access Policy (Check Point)",
              "c": "high",
              "f": "beginner",
              "v": "Harmony SASE Internet Access (formerly Perimeter 81) enforces URL filtering, application control, and content inspection. Tracking policy actions validates acceptable-use enforcement and identifies shadow IT.",
              "t": "Check Point App for Splunk (Splunkbase 4293)",
              "d": "`sourcetype=cp_log` (URL filtering / application control logs)",
              "q": "index=proxy sourcetype=\"cp_log\" product=\"URL Filtering\" earliest=-7d\n| where action=\"Block\"\n| stats count by category, src_user_name, policy_name\n| sort -count",
              "m": "Forward Harmony SASE logs via Log Exporter. Map URL categories to your acceptable-use policy. Alert on category blocks spiking post-policy change. Report weekly on top blocked categories.",
              "z": "Bar chart (blocks by category), Table (top users blocked), Line chart (block trend), Pie chart (category distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Check Point App for Splunk (Splunkbase 4293).\n• Ensure the following data sources are available: `sourcetype=cp_log` (URL filtering / application control logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward Harmony SASE logs via Log Exporter. Map URL categories to your acceptable-use policy. Alert on category blocks spiking post-policy change. Report weekly on top blocked categories.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"cp_log\" product=\"URL Filtering\" earliest=-7d\n| where action=\"Block\"\n| stats count by category, src_user_name, policy_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Check Point Harmony SASE Internet Access Policy (Check Point)** — Harmony SASE Internet Access (formerly Perimeter 81) enforces URL filtering, application control, and content inspection. Tracking policy actions validates acceptable-use enforcement and identifies shadow IT.\n\nDocumented **Data sources**: `sourcetype=cp_log` (URL filtering / application control logs). **App/TA** (typical add-on context): Check Point App for Splunk (Splunkbase 4293). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"cp_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"Block\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by category, src_user_name, policy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Harmony SASE Internet Access Policy (Check Point)** — Harmony SASE Internet Access (formerly Perimeter 81) enforces URL filtering, application control, and content inspection. Tracking policy actions validates acceptable-use enforcement and identifies shadow IT.\n\nDocumented **Data sources**: `sourcetype=cp_log` (URL filtering / application control logs). **App/TA** (typical add-on context): Check Point App for Splunk (Splunkbase 4293). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (blocks by category), Table (top users blocked), Line chart (block trend), Pie chart (category distribution).",
              "script": "",
              "premium": "",
              "hw": "Check Point Harmony SASE (cloud), Harmony Connect Client",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "Web",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.46",
              "n": "Check Point Harmony SASE Private Access (Check Point)",
              "c": "high",
              "f": "intermediate",
              "v": "Harmony SASE Private Access (ZTNA) grants per-application access to private resources without VPN. Connector health, access decisions, and latency determine user experience for internal applications.",
              "t": "Check Point App for Splunk (Splunkbase 4293)",
              "d": "`sourcetype=cp_log` (VPN/access logs)",
              "q": "index=zt sourcetype=\"cp_log\" product=\"VPN\" earliest=-24h\n| eval ok=if(match(lower(action),\"(?i)accept|allow\") AND isnotnull(rule_name),1,0)\n| stats count sum(ok) as successes by src_user_name, dst, rule_name\n| eval fail_pct=round(100*(count-successes)/count,1)\n| where fail_pct > 5\n| sort -fail_pct",
              "m": "Map `dst` to private application names via lookup. Alert when connector error rates exceed 10%. Report on ZTNA adoption vs traditional VPN.",
              "z": "Table (applications with high failure), Single value (connectors in error), Line chart (access trend), Bar chart (failures by rule).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Check Point App for Splunk (Splunkbase 4293).\n• Ensure the following data sources are available: `sourcetype=cp_log` (VPN/access logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `dst` to private application names via lookup. Alert when connector error rates exceed 10%. Report on ZTNA adoption vs traditional VPN.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"cp_log\" product=\"VPN\" earliest=-24h\n| eval ok=if(match(lower(action),\"(?i)accept|allow\") AND isnotnull(rule_name),1,0)\n| stats count sum(ok) as successes by src_user_name, dst, rule_name\n| eval fail_pct=round(100*(count-successes)/count,1)\n| where fail_pct > 5\n| sort -fail_pct\n```\n\nUnderstanding this SPL\n\n**Check Point Harmony SASE Private Access (Check Point)** — Harmony SASE Private Access (ZTNA) grants per-application access to private resources without VPN. Connector health, access decisions, and latency determine user experience for internal applications.\n\nDocumented **Data sources**: `sourcetype=cp_log` (VPN/access logs). **App/TA** (typical add-on context): Check Point App for Splunk (Splunkbase 4293). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"cp_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by src_user_name, dst, rule_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_pct > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.dest span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Harmony SASE Private Access (Check Point)** — Harmony SASE Private Access (ZTNA) grants per-application access to private resources without VPN. Connector health, access decisions, and latency determine user experience for internal applications.\n\nDocumented **Data sources**: `sourcetype=cp_log` (VPN/access logs). **App/TA** (typical add-on context): Check Point App for Splunk (Splunkbase 4293). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (applications with high failure), Single value (connectors in error), Line chart (access trend), Bar chart (failures by rule).",
              "script": "",
              "premium": "",
              "hw": "Check Point Harmony SASE (cloud), Harmony Connect Client, Harmony SASE Connector (on-prem)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=failure\n  by Authentication.dest span=1h",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.47",
              "n": "Check Point Harmony SASE Admin Audit (Check Point)",
              "c": "medium",
              "f": "beginner",
              "v": "Centralized SASE policy changes affect all connected users and sites. Audit logs enable compliance, change correlation, and accountability.",
              "t": "Check Point App for Splunk (Splunkbase 4293)",
              "d": "`sourcetype=cp_log` (audit/admin logs)",
              "q": "index=checkpoint sourcetype=\"cp_log\" product=\"SmartConsole\" earliest=-30d\n| where match(lower(operation),\"(?i)create|modify|delete|publish|install\")\n| stats count earliest(_time) as first latest(_time) as last values(operation) as ops by administrator, object_name\n| sort -last",
              "m": "Forward audit logs via Log Exporter or Smart-1 Cloud. Alert on after-hours changes or policy publishes without change ticket. Correlate access anomalies with recent policy changes.",
              "z": "Table (recent changes), Timeline (admin activity), Bar chart (changes by admin).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1562.004",
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Check Point App for Splunk (Splunkbase 4293).\n• Ensure the following data sources are available: `sourcetype=cp_log` (audit/admin logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward audit logs via Log Exporter or Smart-1 Cloud. Alert on after-hours changes or policy publishes without change ticket. Correlate access anomalies with recent policy changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=checkpoint sourcetype=\"cp_log\" product=\"SmartConsole\" earliest=-30d\n| where match(lower(operation),\"(?i)create|modify|delete|publish|install\")\n| stats count earliest(_time) as first latest(_time) as last values(operation) as ops by administrator, object_name\n| sort -last\n```\n\nUnderstanding this SPL\n\n**Check Point Harmony SASE Admin Audit (Check Point)** — Centralized SASE policy changes affect all connected users and sites. Audit logs enable compliance, change correlation, and accountability.\n\nDocumented **Data sources**: `sourcetype=cp_log` (audit/admin logs). **App/TA** (typical add-on context): Check Point App for Splunk (Splunkbase 4293). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: checkpoint; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=checkpoint, sourcetype=\"cp_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(operation),\"(?i)create|modify|delete|publish|install\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by administrator, object_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Harmony SASE Admin Audit (Check Point)** — Centralized SASE policy changes affect all connected users and sites. Audit logs enable compliance, change correlation, and accountability.\n\nDocumented **Data sources**: `sourcetype=cp_log` (audit/admin logs). **App/TA** (typical add-on context): Check Point App for Splunk (Splunkbase 4293). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recent changes), Timeline (admin activity), Bar chart (changes by admin).",
              "script": "",
              "premium": "",
              "hw": "Check Point Harmony SASE management portal, Smart-1 Cloud",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Configuration",
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object span=1d",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
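Optional alert sketch for 17.3.47 (after-hours admin changes). The Step 1 guidance above suggests alerting on after-hours changes; the following is a minimal sketch that reuses the documented `index=checkpoint sourcetype="cp_log"` feed and the `administrator`, `operation`, and `object_name` fields. The 07:00 to 19:00 business-hours window is an assumption; adjust it to your change calendar.

```spl
index=checkpoint sourcetype="cp_log" product="SmartConsole"
| where match(lower(operation),"(?i)create|modify|delete|publish|install")
| eval hour=tonumber(strftime(_time,"%H"))
| where hour < 7 OR hour >= 19
| stats count values(operation) as ops latest(_time) as last_change by administrator, object_name
| sort -last_change
```

Saved as a scheduled alert over the last 24 hours, any returned row represents policy activity outside the assumed window and can be cross-checked against change tickets.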
            {
              "i": "17.3.48",
              "n": "Check Point Harmony SASE DLP Events (Check Point)",
              "c": "critical",
              "f": "intermediate",
              "v": "Harmony SASE inline DLP inspects uploads and downloads for sensitive data patterns (credit cards, SSNs, health records, source code). Detection events identify data exposure risks across cloud and private application access.",
              "t": "Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259)",
              "d": "`sourcetype=cp_log` (DLP blade logs)",
              "q": "index=dlp sourcetype=\"cp_log\" product=\"DLP\" earliest=-7d\n| stats count by dlp_rule_name, data_type, src_user_name, action, dst\n| sort -count",
              "m": "Map `data_type` to your data classification scheme. Alert on any DLP blocks for PCI/PHI data. Report on repeat offenders. Correlate with CASB events if using Harmony Email & Collaboration.",
              "z": "Bar chart (violations by rule), Table (top users), Stacked bar (actions), Line chart (violations/day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Check Point App for Splunk](https://splunkbase.splunk.com/app/4293), [CCX Add-on for Checkpoint Smart-1 Cloud](https://splunkbase.splunk.com/app/7259), [CIM: DLP](https://docs.splunk.com/Documentation/CIM/latest/User/DLP)",
              "mitre": [
                "T1048",
                "T1048.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259).\n• Ensure the following data sources are available: `sourcetype=cp_log` (DLP blade logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `data_type` to your data classification scheme. Alert on any DLP blocks for PCI/PHI data. Report on repeat offenders. Correlate with CASB events if using Harmony Email & Collaboration.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dlp sourcetype=\"cp_log\" product=\"DLP\" earliest=-7d\n| stats count by dlp_rule_name, data_type, src_user_name, action, dst\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Check Point Harmony SASE DLP Events (Check Point)** — Harmony SASE inline DLP inspects uploads and downloads for sensitive data patterns (credit cards, SSNs, health records, source code). Detection events identify data exposure risks across cloud and private application access.\n\nDocumented **Data sources**: `sourcetype=cp_log` (DLP blade logs). **App/TA** (typical add-on context): Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dlp; **sourcetype**: cp_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dlp, sourcetype=\"cp_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dlp_rule_name, data_type, src_user_name, action, dst** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Data_Loss_Prevention\n  by DLP.user DLP.src span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Check Point Harmony SASE DLP Events (Check Point)** — Harmony SASE inline DLP inspects uploads and downloads for sensitive data patterns (credit cards, SSNs, health records, source code). Detection events identify data exposure risks across cloud and private application access.\n\nDocumented **Data sources**: `sourcetype=cp_log` (DLP blade logs). **App/TA** (typical add-on context): Check Point App for Splunk (Splunkbase 4293), CCX Add-on for Checkpoint Smart-1 Cloud (Splunkbase 7259). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Data_Loss_Prevention` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (violations by rule), Table (top users), Stacked bar (actions), Line chart (violations/day).",
              "script": "",
              "premium": "",
              "hw": "Check Point Harmony SASE (cloud), Harmony Connect Client",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "DLP"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Data_Loss_Prevention\n  by DLP.user DLP.src span=1d",
              "e": [
                "checkpoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Check Point App for Splunk",
                  "id": 4293,
                  "url": "https://splunkbase.splunk.com/app/4293",
                  "desc": "Threat analysis and reporting for Check Point networks, cloud, endpoints, and SASE",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
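Optional alert sketch for 17.3.48 (DLP blocks on PCI/PHI data). The implementation note above calls for alerting on any DLP block involving PCI or PHI data; this minimal sketch assumes block actions appear as `block`/`prevent` in the `action` field and that `data_type` values contain keywords such as PCI, credit card, PHI, health, or SSN. Replace the keyword list with your own data classification scheme.

```spl
index=dlp sourcetype="cp_log" product="DLP" earliest=-24h
| where match(lower(action),"(?i)block|prevent") AND match(lower(data_type),"(?i)pci|credit card|phi|health|ssn")
| stats count values(dlp_rule_name) as rules values(dst) as destinations by src_user_name, data_type
| sort -count
```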
            {
              "i": "17.3.49",
              "n": "Akamai Guardicore Segmentation Policy Violations",
              "c": "critical",
              "f": "intermediate",
              "v": "Akamai Guardicore enforces microsegmentation at the workload level across on-prem, cloud, and hybrid environments. Policy violation events (blocked lateral movement) validate that default-deny segmentation is enforced and detect attempted east-west attacks.",
              "t": "Akamai Guardicore Add-on for Splunk (Splunkbase 7426)",
              "d": "Akamai Guardicore REST API or syslog via the Add-on",
              "q": "index=microseg sourcetype=\"guardicore:network\" earliest=-24h\n| where match(lower(action),\"(?i)block|deny|drop|reject\")\n| stats count by src_asset, dst_asset, dst_port, policy_name\n| sort -count",
              "m": "Install the Akamai Guardicore Add-on. Map `src_asset` and `dst_asset` to application labels from the Guardicore Reveal map. Alert on blocked flows to critical assets (databases, domain controllers). Report on deny-to-allow ratio per segmentation ring.",
              "z": "Table (blocked flows), Heatmap (source × destination), Bar chart (blocks by policy), Sankey (traffic flow map).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Akamai Guardicore Add-on for Splunk](https://splunkbase.splunk.com/app/7426), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1021",
                "T1570"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Akamai Guardicore Add-on for Splunk (Splunkbase 7426).\n• Ensure the following data sources are available: Akamai Guardicore REST API or syslog via the Add-on.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall the Akamai Guardicore Add-on. Map `src_asset` and `dst_asset` to application labels from the Guardicore Reveal map. Alert on blocked flows to critical assets (databases, domain controllers). Report on deny-to-allow ratio per segmentation ring.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=microseg sourcetype=\"guardicore:network\" earliest=-24h\n| where match(lower(action),\"(?i)block|deny|drop|reject\")\n| stats count by src_asset, dst_asset, dst_port, policy_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Akamai Guardicore Segmentation Policy Violations** — Akamai Guardicore enforces microsegmentation at the workload level across on-prem, cloud, and hybrid environments. Policy violation events (blocked lateral movement) validate that default-deny segmentation is enforced and detect attempted east-west attacks.\n\nDocumented **Data sources**: Akamai Guardicore REST API or syslog via the Add-on. **App/TA** (typical add-on context): Akamai Guardicore Add-on for Splunk (Splunkbase 7426). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: microseg; **sourcetype**: guardicore:network. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=microseg, sourcetype=\"guardicore:network\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(action),\"(?i)block|deny|drop|reject\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_asset, dst_asset, dst_port, policy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src All_Traffic.dest span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Akamai Guardicore Segmentation Policy Violations** — Akamai Guardicore enforces microsegmentation at the workload level across on-prem, cloud, and hybrid environments. Policy violation events (blocked lateral movement) validate that default-deny segmentation is enforced and detect attempted east-west attacks.\n\nDocumented **Data sources**: Akamai Guardicore REST API or syslog via the Add-on. **App/TA** (typical add-on context): Akamai Guardicore Add-on for Splunk (Splunkbase 7426). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (blocked flows), Heatmap (source × destination), Bar chart (blocks by policy), Sankey (traffic flow map).",
              "script": "",
              "premium": "",
              "hw": "Akamai Guardicore Centra (management), Guardicore Agents (workload), Guardicore Aggregators",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src All_Traffic.dest span=1h",
              "e": [
                "guardicore"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Akamai Guardicore Add-on for Splunk",
                  "id": 7426,
                  "url": "https://splunkbase.splunk.com/app/7426",
                  "desc": "Microsegmentation data from Akamai Guardicore Centra for asset protection and compliance",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
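Optional reporting sketch for 17.3.49 (deny-to-allow ratio). The implementation note above mentions reporting the deny-to-allow ratio per segmentation ring; this sketch approximates that by grouping on `policy_name`, since ring labels may not be exposed as a log field in every deployment. The action keywords mirror the documented query.

```spl
index=microseg sourcetype="guardicore:network" earliest=-24h
| eval verdict=if(match(lower(action),"(?i)block|deny|drop|reject"),"deny","allow")
| stats count(eval(verdict="deny")) as denies count(eval(verdict="allow")) as allows by policy_name
| eval deny_to_allow_ratio=round(denies/max(allows,1),3)
| sort -deny_to_allow_ratio
```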
            {
              "i": "17.3.50",
              "n": "Akamai Guardicore Reveal Map Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "Guardicore's Reveal feature maps all process-level communication between workloads. New or unexpected connections that deviate from the learned application dependency map may indicate lateral movement, compromised workloads, or configuration drift.",
              "t": "Akamai Guardicore Add-on for Splunk (Splunkbase 7426)",
              "d": "Akamai Guardicore flow/connection data",
              "q": "index=microseg sourcetype=\"guardicore:network\" earliest=-24h\n| eval flow_key=src_asset.\":\".tostring(src_port).\"->\".dst_asset.\":\".tostring(dst_port)\n| lookup known_flows.csv flow_key OUTPUT known\n| where isnull(known)\n| stats count by src_asset, dst_asset, dst_port, process_name\n| where count > 5\n| sort -count",
              "m": "Export the Guardicore application dependency map as a lookup. Run anomaly detection against new flows. Alert on never-before-seen process-port combinations to critical segments. Tune for deployment rollouts.",
              "z": "Table (new flows), Network graph (dependencies), Bar chart (anomalies by segment), Timeline (first seen).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Akamai Guardicore Add-on for Splunk](https://splunkbase.splunk.com/app/7426), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1021",
                "T1018",
                "T1046"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Akamai Guardicore Add-on for Splunk (Splunkbase 7426).\n• Ensure the following data sources are available: Akamai Guardicore flow/connection data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport the Guardicore application dependency map as a lookup. Run anomaly detection against new flows. Alert on never-before-seen process-port combinations to critical segments. Tune for deployment rollouts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=microseg sourcetype=\"guardicore:network\" earliest=-24h\n| eval flow_key=src_asset.\":\".tostring(src_port).\"->\".dst_asset.\":\".tostring(dst_port)\n| lookup known_flows.csv flow_key OUTPUT known\n| where isnull(known)\n| stats count by src_asset, dst_asset, dst_port, process_name\n| where count > 5\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Akamai Guardicore Reveal Map Anomalies** — Guardicore's Reveal feature maps all process-level communication between workloads. New or unexpected connections that deviate from the learned application dependency map may indicate lateral movement, compromised workloads, or configuration drift.\n\nDocumented **Data sources**: Akamai Guardicore flow/connection data. **App/TA** (typical add-on context): Akamai Guardicore Add-on for Splunk (Splunkbase 7426). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: microseg; **sourcetype**: guardicore:network. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=microseg, sourcetype=\"guardicore:network\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **flow_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(known)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_asset, dst_asset, dst_port, process_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.dest_port span=1h\n| where count > 10\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Akamai Guardicore Reveal Map Anomalies** — Guardicore's Reveal feature maps all process-level communication between workloads. 
New or unexpected connections that deviate from the learned application dependency map may indicate lateral movement, compromised workloads, or configuration drift.\n\nDocumented **Data sources**: Akamai Guardicore flow/connection data. **App/TA** (typical add-on context): Akamai Guardicore Add-on for Splunk (Splunkbase 7426). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Filters the current rows with `where count > 10` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (new flows), Network graph (dependencies), Bar chart (anomalies by segment), Timeline (first seen).",
              "script": "",
              "premium": "",
              "hw": "Akamai Guardicore Centra, Guardicore Agents",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src All_Traffic.dest All_Traffic.dest_port span=1h\n| where count > 10",
              "e": [
                "guardicore"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Akamai Guardicore Add-on for Splunk",
                  "id": 7426,
                  "url": "https://splunkbase.splunk.com/app/7426",
                  "desc": "Microsegmentation data from Akamai Guardicore Centra for asset protection and compliance",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
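Optional baseline sketch for 17.3.50 (seeding `known_flows.csv`). The detection above depends on a `known_flows.csv` lookup; one way to seed it is to learn flows over a trailing window and write them out with `outputlookup`, as sketched below. The 30-day learning window is an assumption, and because `src_port` is usually ephemeral, consider dropping it from `flow_key` in both the baseline and the detection search so lookups actually match.

```spl
index=microseg sourcetype="guardicore:network" earliest=-30d
| eval flow_key=src_asset.":".tostring(src_port)."->".dst_asset.":".tostring(dst_port)
| stats count by flow_key
| eval known=1
| fields flow_key known
| outputlookup known_flows.csv
```

Rebuild the lookup on a schedule (for example weekly) so legitimate new deployments age into the baseline instead of alerting indefinitely.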
            {
              "i": "17.3.51",
              "n": "Akamai Guardicore Agent Health",
              "c": "high",
              "f": "beginner",
              "v": "Guardicore agents on workloads enforce segmentation policies. Offline or degraded agents leave workloads unprotected — effectively disabling zero trust enforcement on that asset.",
              "t": "Akamai Guardicore Add-on for Splunk (Splunkbase 7426)",
              "d": "Guardicore agent status events",
              "q": "index=microseg sourcetype=\"guardicore:agent\" earliest=-4h\n| eval healthy=if(match(lower(status),\"(?i)active|running|connected\"),1,0)\n| stats latest(healthy) as current_status latest(_time) as last_seen by agent_id, hostname, os\n| where current_status=0 OR last_seen < relative_time(now(), \"-2h\")\n| sort last_seen",
              "m": "Alert when agents go offline for >30 minutes on critical assets. Report on agent coverage (% of assets with active agents). Track agent version compliance.",
              "z": "Single value (offline agents), Table (unhealthy agents), Pie chart (agent status), Line chart (coverage trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Akamai Guardicore Add-on for Splunk](https://splunkbase.splunk.com/app/7426)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Akamai Guardicore Add-on for Splunk (Splunkbase 7426).\n• Ensure the following data sources are available: Guardicore agent status events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlert when agents go offline for >30 minutes on critical assets. Report on agent coverage (% of assets with active agents). Track agent version compliance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=microseg sourcetype=\"guardicore:agent\" earliest=-4h\n| eval healthy=if(match(lower(status),\"(?i)active|running|connected\"),1,0)\n| stats latest(healthy) as current_status latest(_time) as last_seen by agent_id, hostname, os\n| where current_status=0 OR last_seen < relative_time(now(), \"-2h\")\n| sort last_seen\n```\n\nUnderstanding this SPL\n\n**Akamai Guardicore Agent Health** — Guardicore agents on workloads enforce segmentation policies. Offline or degraded agents leave workloads unprotected — effectively disabling zero trust enforcement on that asset.\n\nDocumented **Data sources**: Guardicore agent status events. **App/TA** (typical add-on context): Akamai Guardicore Add-on for Splunk (Splunkbase 7426). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: microseg; **sourcetype**: guardicore:agent. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=microseg, sourcetype=\"guardicore:agent\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **healthy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by agent_id, hostname, os** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where current_status=0 OR last_seen < relative_time(now(), \"-2h\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (offline agents), Table (unhealthy agents), Pie chart (agent status), Line chart (coverage trend).",
              "script": "",
              "premium": "",
              "hw": "Akamai Guardicore Centra, Guardicore Agents (Linux/Windows/containers)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "guardicore"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Akamai Guardicore Add-on for Splunk",
                  "id": 7426,
                  "url": "https://splunkbase.splunk.com/app/7426",
                  "desc": "Microsegmentation data from Akamai Guardicore Centra for asset protection and compliance",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
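Optional coverage sketch for 17.3.51 (agent coverage percentage). The implementation note above asks for agent coverage reporting; this sketch reuses the documented `guardicore:agent` fields and only measures agents that reported in the last four hours. True asset coverage would require joining against an asset inventory lookup, which is outside this sketch.

```spl
index=microseg sourcetype="guardicore:agent" earliest=-4h
| eval healthy=if(match(lower(status),"(?i)active|running|connected"),1,0)
| stats latest(healthy) as current_status by agent_id
| stats count as reporting_agents sum(current_status) as healthy_agents
| eval coverage_pct=round(100*healthy_agents/reporting_agents,1)
```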
            {
              "i": "17.3.52",
              "n": "Akamai Guardicore Incident Investigation",
              "c": "critical",
              "f": "advanced",
              "v": "Guardicore generates security incidents when patterns of blocked connections, deception (honeypot) triggers, or policy violations suggest active threats. Incident-level correlation in Splunk supports rapid investigation with full process and network context.",
              "t": "Akamai Guardicore Add-on for Splunk (Splunkbase 7426)",
              "d": "Guardicore incident and deception events",
              "q": "index=microseg sourcetype=\"guardicore:incident\" earliest=-7d\n| stats count values(affected_assets) as assets values(attack_type) as types by incident_id, severity, status\n| sort -severity -count",
              "m": "Configure the Add-on to pull incident data via API. Correlate `affected_assets` with CMDB for impact assessment. Escalate critical-severity incidents immediately. Use deception triggers as high-fidelity IOCs — false positives are rare on decoys.",
              "z": "Table (active incidents), Timeline (incident progression), Bar chart (by severity), Single value (open critical incidents).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Akamai Guardicore Add-on for Splunk](https://splunkbase.splunk.com/app/7426), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [
                "T1021",
                "T1046"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Akamai Guardicore Add-on for Splunk (Splunkbase 7426).\n• Ensure the following data sources are available: Guardicore incident and deception events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure the Add-on to pull incident data via API. Correlate `affected_assets` with CMDB for impact assessment. Escalate critical-severity incidents immediately. Use deception triggers as high-fidelity IOCs — false positives are rare on decoys.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=microseg sourcetype=\"guardicore:incident\" earliest=-7d\n| stats count values(affected_assets) as assets values(attack_type) as types by incident_id, severity, status\n| sort -severity -count\n```\n\nUnderstanding this SPL\n\n**Akamai Guardicore Incident Investigation** — Guardicore generates security incidents when patterns of blocked connections, deception (honeypot) triggers, or policy violations suggest active threats. Incident-level correlation in Splunk supports rapid investigation with full process and network context.\n\nDocumented **Data sources**: Guardicore incident and deception events. **App/TA** (typical add-on context): Akamai Guardicore Add-on for Splunk (Splunkbase 7426). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: microseg; **sourcetype**: guardicore:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=microseg, sourcetype=\"guardicore:incident\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by incident_id, severity, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.src IDS_Attacks.dest span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Akamai Guardicore Incident Investigation** — Guardicore generates security incidents when patterns of blocked connections, deception (honeypot) triggers, or policy violations suggest active threats. Incident-level correlation in Splunk supports rapid investigation with full process and network context.\n\nDocumented **Data sources**: Guardicore incident and deception events. **App/TA** (typical add-on context): Akamai Guardicore Add-on for Splunk (Splunkbase 7426). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (active incidents), Timeline (incident progression), Bar chart (by severity), Single value (open critical incidents).",
              "script": "",
              "premium": "",
              "hw": "Akamai Guardicore Centra, Guardicore Agents, Guardicore Deception",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Intrusion_Detection.IDS_Attacks\n  by IDS_Attacks.src IDS_Attacks.dest span=1h",
              "e": [
                "guardicore"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Akamai Guardicore Add-on for Splunk",
                  "id": 7426,
                  "url": "https://splunkbase.splunk.com/app/7426",
                  "desc": "Microsegmentation data from Akamai Guardicore Centra for asset protection and compliance",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
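Optional escalation sketch for 17.3.52 (critical and deception incidents). The implementation note above recommends immediate escalation of critical-severity incidents and treating deception triggers as high-fidelity signals; this sketch assumes `severity` contains `critical` and that deception incidents are identifiable from `attack_type` values containing `deception` or `honeypot`. Verify those values against your Guardicore incident feed.

```spl
index=microseg sourcetype="guardicore:incident" earliest=-24h
| where lower(severity)="critical" OR match(lower(attack_type),"(?i)deception|honeypot")
| stats count values(affected_assets) as assets values(attack_type) as types latest(_time) as last_seen by incident_id, severity, status
| sort -last_seen
```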
            {
              "i": "17.3.53",
              "n": "Broadcom Symantec Cloud SWG Policy Analysis (Broadcom)",
              "c": "high",
              "f": "beginner",
              "v": "Broadcom Symantec Cloud SWG (formerly WSS) provides cloud-delivered web security with URL filtering, threat protection, and SSL inspection for all users. Tracking policy violations and threat blocks validates security posture.",
              "t": "Symantec WSS Add-on for Splunk (Splunkbase 3856)",
              "d": "Symantec WSS access logs (API-based collection via the Add-on)",
              "q": "index=proxy sourcetype=\"symantec:wss:accesslog\" earliest=-7d\n| where sc_filter_result=\"DENIED\"\n| stats count by cs_categories, cs_userdn, sc_filter_result\n| sort -count",
              "m": "Install the Symantec WSS Add-on and configure API-based data collection. Map `cs_categories` to your acceptable-use policy. Alert when denied requests spike for a user. Report on top blocked categories and threat detections.",
              "z": "Bar chart (blocks by category), Table (top users blocked), Line chart (daily denied trend), Pie chart (category distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Symantec WSS Add-on for Splunk](https://splunkbase.splunk.com/app/3856), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Symantec WSS Add-on for Splunk (Splunkbase 3856).\n• Ensure the following data sources are available: Symantec WSS access logs (API-based collection via the Add-on).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall the Symantec WSS Add-on and configure API-based data collection. Map `cs_categories` to your acceptable-use policy. Alert when denied requests spike for a user. Report on top blocked categories and threat detections.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"symantec:wss:accesslog\" earliest=-7d\n| where sc_filter_result=\"DENIED\"\n| stats count by cs_categories, cs_userdn, sc_filter_result\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Broadcom Symantec Cloud SWG Policy Analysis (Broadcom)** — Broadcom Symantec Cloud SWG (formerly WSS) provides cloud-delivered web security with URL filtering, threat protection, and SSL inspection for all users. Tracking policy violations and threat blocks validates security posture.\n\nDocumented **Data sources**: Symantec WSS access logs (API-based collection via the Add-on). **App/TA** (typical add-on context): Symantec WSS Add-on for Splunk (Splunkbase 3856). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: symantec:wss:accesslog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"symantec:wss:accesslog\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where sc_filter_result=\"DENIED\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cs_categories, cs_userdn, sc_filter_result** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Broadcom Symantec Cloud SWG Policy Analysis (Broadcom)** — Broadcom Symantec Cloud SWG (formerly WSS) provides cloud-delivered web security with URL filtering, threat protection, and SSL inspection for all users. Tracking policy violations and threat blocks validates security posture.\n\nDocumented **Data sources**: Symantec WSS access logs (API-based collection via the Add-on). **App/TA** (typical add-on context): Symantec WSS Add-on for Splunk (Splunkbase 3856). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (blocks by category), Table (top users blocked), Line chart (daily denied trend), Pie chart (category distribution).",
              "script": "",
              "premium": "",
              "hw": "Symantec Cloud SWG (cloud), Symantec Web Security Service Edge, Symantec Endpoint Agent",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "Web",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d",
              "e": [
                "broadcom_symantec"
              ],
              "em": [],
              "ta_link": {
                "name": "Symantec WSS Add-on for Splunk",
                "id": 3856,
                "url": "https://splunkbase.splunk.com/app/3856"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
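Optional spike-detection sketch for 17.3.53 (denied-request spike per user). The implementation note above calls for alerting when denied requests spike for a user; this sketch compares the last complete hour against each user's 7-day hourly average. The 3-standard-deviation threshold and the 7-day baseline are arbitrary starting points.

```spl
index=proxy sourcetype="symantec:wss:accesslog" sc_filter_result="DENIED" earliest=-7d
| bin _time span=1h
| stats count as denies by _time, cs_userdn
| eventstats avg(denies) as avg_denies stdev(denies) as sd_denies by cs_userdn
| where _time >= relative_time(now(),"-1h@h") AND _time < relative_time(now(),"@h") AND denies > avg_denies + 3*coalesce(sd_denies,0)
```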
            {
              "i": "17.3.54",
              "n": "Broadcom Symantec CASB Shadow IT Detection (Broadcom)",
              "c": "high",
              "f": "intermediate",
              "v": "Symantec CloudSOC CASB detects unsanctioned cloud application usage by analyzing web proxy and firewall logs. Shadow IT visibility identifies data exposure risks and supports SaaS governance.",
              "t": "Symantec WSS Add-on for Splunk (Splunkbase 3856), Symantec CAS App (Splunkbase 5934)",
              "d": "Symantec CloudSOC or WSS cloud app logs",
              "q": "index=proxy sourcetype=\"symantec:wss:accesslog\" earliest=-30d\n| where cs_threat_risk_level=\"High\" OR match(cs_categories,\"(?i)unsanctioned|shadow\")\n| stats dc(cs_userdn) as users sum(sc_bytes) as bytes by cs_host, cs_categories\n| eval gb=round(bytes/1073741824,2)\n| sort -users",
              "m": "Configure CloudSOC app discovery or filter WSS logs for unsanctioned category. Map cloud app names to your SaaS registry. Alert on new high-risk apps with >3 users. Report on shadow IT data volume.",
              "z": "Table (unsanctioned apps), Bar chart (top apps by users), Pie chart (risk levels), Line chart (shadow IT trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Symantec WSS Add-on for Splunk](https://splunkbase.splunk.com/app/3856), [Symantec CAS App](https://splunkbase.splunk.com/app/5934), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [
                "T1102"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Symantec WSS Add-on for Splunk (Splunkbase 3856), Symantec CAS App (Splunkbase 5934).\n• Ensure the following data sources are available: Symantec CloudSOC or WSS cloud app logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure CloudSOC app discovery or filter WSS logs for unsanctioned category. Map cloud app names to your SaaS registry. Alert on new high-risk apps with >3 users. Report on shadow IT data volume.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"symantec:wss:accesslog\" earliest=-30d\n| where cs_threat_risk_level=\"High\" OR match(cs_categories,\"(?i)unsanctioned|shadow\")\n| stats dc(cs_userdn) as users sum(sc_bytes) as bytes by cs_host, cs_categories\n| eval gb=round(bytes/1073741824,2)\n| sort -users\n```\n\nUnderstanding this SPL\n\n**Broadcom Symantec CASB Shadow IT Detection (Broadcom)** — Symantec CloudSOC CASB detects unsanctioned cloud application usage by analyzing web proxy and firewall logs. Shadow IT visibility identifies data exposure risks and supports SaaS governance.\n\nDocumented **Data sources**: Symantec CloudSOC or WSS cloud app logs. **App/TA** (typical add-on context): Symantec WSS Add-on for Splunk (Splunkbase 3856), Symantec CAS App (Splunkbase 5934). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: symantec:wss:accesslog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"symantec:wss:accesslog\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where cs_threat_risk_level=\"High\" OR match(cs_categories,\"(?i)unsanctioned|shadow\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cs_host, cs_categories** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  by Web.url Web.user span=1d\n| where count > 20\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Broadcom Symantec CASB Shadow IT Detection (Broadcom)** — Symantec CloudSOC CASB detects unsanctioned cloud application usage by analyzing web proxy and firewall logs. Shadow IT visibility identifies data exposure risks and supports SaaS governance.\n\nDocumented **Data sources**: Symantec CloudSOC or WSS cloud app logs. **App/TA** (typical add-on context): Symantec WSS Add-on for Splunk (Splunkbase 3856), Symantec CAS App (Splunkbase 5934). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Filters the current rows with `where count > 20` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unsanctioned apps), Bar chart (top apps by users), Pie chart (risk levels), Line chart (shadow IT trend).",
              "script": "",
              "premium": "",
              "hw": "Symantec CloudSOC (cloud), Cloud SWG",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Web",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  by Web.url Web.user span=1d\n| where count > 20",
              "e": [
                "broadcom_symantec"
              ],
              "em": [],
              "ta_link": {
                "name": "Symantec WSS Add-on for Splunk",
                "id": 3856,
                "url": "https://splunkbase.splunk.com/app/3856"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
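Optional alert sketch for 17.3.54 (new high-risk apps). The implementation note above suggests alerting on new high-risk apps with more than 3 users; this sketch flags destinations first seen within the last 24 hours of a 30-day window. The risk-level and category filters mirror the documented query and may need adjusting to your CloudSOC or WSS field values.

```spl
index=proxy sourcetype="symantec:wss:accesslog" earliest=-30d
| where cs_threat_risk_level="High" OR match(cs_categories,"(?i)unsanctioned|shadow")
| stats earliest(_time) as first_seen dc(cs_userdn) as users by cs_host
| where first_seen >= relative_time(now(),"-24h") AND users > 3
| sort -users
```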
            {
              "i": "17.3.55",
              "n": "Broadcom Symantec Cloud SWG Threat Events (Broadcom)",
              "c": "critical",
              "f": "intermediate",
              "v": "Cloud SWG detects malware, phishing, and zero-day threats via URL reputation, content analysis, and sandboxing. Threat events in Splunk enable incident response and threat hunting.",
              "t": "Symantec WSS Add-on for Splunk (Splunkbase 3856)",
              "d": "Symantec WSS access logs with threat fields",
              "q": "index=proxy sourcetype=\"symantec:wss:accesslog\" earliest=-24h\n| where sc_filter_result=\"DENIED\" AND isnotnull(cs_threat_risk_level) AND cs_threat_risk_level IN (\"High\",\"Critical\")\n| stats count by cs_threat, cs_categories, cs_userdn, cs_host\n| sort -count",
              "m": "Map `cs_threat` to threat categories. Alert on critical-severity threats. Correlate with endpoint detection for compromised hosts. Report on threat type distribution.",
              "z": "Bar chart (threats by category), Table (affected users), Timeline (detections), Pie chart (threat types).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Symantec WSS Add-on for Splunk](https://splunkbase.splunk.com/app/3856), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [
                "T1071.001",
                "T1568.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Symantec WSS Add-on for Splunk (Splunkbase 3856).\n• Ensure the following data sources are available: Symantec WSS access logs with threat fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `cs_threat` to threat categories. Alert on critical-severity threats. Correlate with endpoint detection for compromised hosts. Report on threat type distribution.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"symantec:wss:accesslog\" earliest=-24h\n| where sc_filter_result=\"DENIED\" AND isnotnull(cs_threat_risk_level) AND cs_threat_risk_level IN (\"High\",\"Critical\")\n| stats count by cs_threat, cs_categories, cs_userdn, cs_host\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Broadcom Symantec Cloud SWG Threat Events (Broadcom)** — Cloud SWG detects malware, phishing, and zero-day threats via URL reputation, content analysis, and sandboxing. Threat events in Splunk enable incident response and threat hunting.\n\nDocumented **Data sources**: Symantec WSS access logs with threat fields. **App/TA** (typical add-on context): Symantec WSS Add-on for Splunk (Splunkbase 3856). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: symantec:wss:accesslog. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"symantec:wss:accesslog\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where sc_filter_result=\"DENIED\" AND isnotnull(cs_threat_risk_level) AND cs_threat_risk_level IN (\"High\",\"Critical\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cs_threat, cs_categories, cs_userdn, cs_host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.user Malware_Attacks.url span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Broadcom Symantec Cloud SWG Threat Events (Broadcom)** — Cloud SWG detects malware, phishing, and zero-day threats via URL reputation, content analysis, and sandboxing. Threat events in Splunk enable incident response and threat hunting.\n\nDocumented **Data sources**: Symantec WSS access logs with threat fields. **App/TA** (typical add-on context): Symantec WSS Add-on for Splunk (Splunkbase 3856). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (threats by category), Table (affected users), Timeline (detections), Pie chart (threat types).",
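Supplementary sketch (assumption-heavy): a small Python helper for the cs_threat-to-category mapping that Step 1 asks for. The threat names, category labels, and lookup filename below are placeholders, not values shipped by the add-on.

```python
#!/usr/bin/env python3
"""Build a simple cs_threat -> category lookup CSV for Splunk.

Hypothetical sketch: the mapping and the output filename are placeholders.
Replace them with the threat names you actually see in `cs_threat` and the
lookup name your transforms.conf references.
"""
import csv

# Analyst-maintained mapping of raw cs_threat values to reporting categories.
THREAT_CATEGORIES = {
    "Trojan.Gen": "malware",
    "Phish.URL": "phishing",
    "PUA.Downloader": "pua",
}

def write_lookup(path: str = "cs_threat_categories.csv") -> None:
    """Write the mapping as a two-column CSV usable as a Splunk lookup."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["cs_threat", "threat_category"])  # header row = lookup field names
        for threat, category in sorted(THREAT_CATEGORIES.items()):
            writer.writerow([threat, category])

if __name__ == "__main__":
    write_lookup()
```

Copy the resulting CSV into the app's lookups/ directory, define it in transforms.conf, and enrich results with `| lookup cs_threat_categories cs_threat OUTPUT threat_category`.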
              "script": "",
              "premium": "",
              "hw": "Symantec Cloud SWG (cloud), Symantec Endpoint Agent",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Malware",
                "Web"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Malware.Malware_Attacks\n  by Malware_Attacks.user Malware_Attacks.url span=1h",
              "e": [
                "broadcom_symantec"
              ],
              "em": [],
              "ta_link": {
                "name": "Symantec WSS Add-on for Splunk",
                "id": 3856,
                "url": "https://splunkbase.splunk.com/app/3856"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.56",
              "n": "Cloudflare Access (ZTNA) Policy Enforcement",
              "c": "high",
              "f": "beginner",
              "v": "Cloudflare Access replaces VPN with per-application zero-trust access policies based on identity, device posture, and network context. Tracking allow/block decisions validates that access policies are correctly enforced.",
              "t": "Cloudflare App for Splunk (Splunkbase 4501)",
              "d": "Cloudflare Logpush (Access logs), `sourcetype=cloudflare:access`",
              "q": "index=zt sourcetype=\"cloudflare:access\" earliest=-7d\n| stats count by action, app_name, user_email, country\n| sort -count",
              "m": "Configure Cloudflare Logpush to push Access logs to Splunk via HEC or S3. Map `app_name` to internal application names. Alert when block rate spikes for specific applications. Report on access patterns by geography.",
              "z": "Bar chart (access by app), Table (blocks by user), Pie chart (allow vs block), Line chart (access trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Cloudflare App for Splunk](https://splunkbase.splunk.com/app/4501), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1021",
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloudflare App for Splunk (Splunkbase 4501).\n• Ensure the following data sources are available: Cloudflare Logpush (Access logs), `sourcetype=cloudflare:access`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Cloudflare Logpush to push Access logs to Splunk via HEC or S3. Map `app_name` to internal application names. Alert when block rate spikes for specific applications. Report on access patterns by geography.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"cloudflare:access\" earliest=-7d\n| stats count by action, app_name, user_email, country\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cloudflare Access (ZTNA) Policy Enforcement** — Cloudflare Access replaces VPN with per-application zero-trust access policies based on identity, device posture, and network context. Tracking allow/block decisions validates that access policies are correctly enforced.\n\nDocumented **Data sources**: Cloudflare Logpush (Access logs), `sourcetype=cloudflare:access`. **App/TA** (typical add-on context): Cloudflare App for Splunk (Splunkbase 4501). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: cloudflare:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"cloudflare:access\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by action, app_name, user_email, country** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.user Authentication.app span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cloudflare Access (ZTNA) Policy Enforcement** — Cloudflare Access replaces VPN with per-application zero-trust access policies based on identity, device posture, and network context. Tracking allow/block decisions validates that access policies are correctly enforced.\n\nDocumented **Data sources**: Cloudflare Logpush (Access logs), `sourcetype=cloudflare:access`. **App/TA** (typical add-on context): Cloudflare App for Splunk (Splunkbase 4501). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (access by app), Table (blocks by user), Pie chart (allow vs block), Line chart (access trend).",
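Supplementary sketch: before enabling Logpush, it can help to confirm that the HEC token and target index accept events. This standalone Python check uses placeholder HEC URL, token, and field values; it is not part of the Cloudflare App.

```python
#!/usr/bin/env python3
"""Send one synthetic cloudflare:access event to Splunk HEC.

Sketch for validating the HEC token and index before enabling Logpush.
HEC_URL, HEC_TOKEN, and the event fields are placeholders.
"""
import json
import ssl
import urllib.request

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder

payload = {
    "index": "zt",
    "sourcetype": "cloudflare:access",
    "event": {"action": "allow", "app_name": "wiki", "user_email": "user@example.com", "country": "US"},
}

req = urllib.request.Request(
    HEC_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Authorization": f"Splunk {HEC_TOKEN}", "Content-Type": "application/json"},
)

# Lab-only: skip TLS verification if HEC still uses the default self-signed certificate.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen(req, context=ctx) as resp:
    print(resp.status, resp.read().decode())  # expect {"text":"Success","code":0}
```

If the response reports success, search `index=zt sourcetype=cloudflare:access` to confirm the event landed before switching Logpush on.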
              "script": "",
              "premium": "",
              "hw": "Cloudflare Zero Trust (cloud), Cloudflare WARP client (endpoint), Cloudflare Tunnel (connector)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Authentication.Authentication\n  by Authentication.user Authentication.app span=1d",
              "e": [
                "cloudflare"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Cloudflare App for Splunk",
                  "id": 4501,
                  "url": "https://splunkbase.splunk.com/app/4501",
                  "desc": "Dashboards for Cloudflare performance, security, Zero Trust, and DNS analytics",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.57",
              "n": "Cloudflare Gateway (SWG) DNS and HTTP Filtering",
              "c": "high",
              "f": "beginner",
              "v": "Cloudflare Gateway provides DNS-layer and HTTP-layer filtering for all users. DNS blocks stop threats before connections are established; HTTP inspection enforces content and application policies.",
              "t": "Cloudflare App for Splunk (Splunkbase 4501)",
              "d": "Cloudflare Logpush (Gateway DNS/HTTP logs), `sourcetype=cloudflare:gateway`",
              "q": "index=proxy sourcetype=\"cloudflare:gateway\" earliest=-7d\n| where action IN (\"block\", \"isolate\")\n| stats count by policy_name, category, action, user_email\n| sort -count",
              "m": "Configure Logpush for Gateway logs. Map categories to your acceptable-use policy. Alert on DNS blocks for known-malicious categories. Report on filtering effectiveness and policy coverage.",
              "z": "Bar chart (blocks by category), Table (top blocked domains), Line chart (daily blocks), Pie chart (DNS vs HTTP blocks).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Cloudflare App for Splunk](https://splunkbase.splunk.com/app/4501), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [
                "T1071.001",
                "T1071.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloudflare App for Splunk (Splunkbase 4501).\n• Ensure the following data sources are available: Cloudflare Logpush (Gateway DNS/HTTP logs), `sourcetype=cloudflare:gateway`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Logpush for Gateway logs. Map categories to your acceptable-use policy. Alert on DNS blocks for known-malicious categories. Report on filtering effectiveness and policy coverage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"cloudflare:gateway\" earliest=-7d\n| where action IN (\"block\", \"isolate\")\n| stats count by policy_name, category, action, user_email\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Cloudflare Gateway (SWG) DNS and HTTP Filtering** — Cloudflare Gateway provides DNS-layer and HTTP-layer filtering for all users. DNS blocks stop threats before connections are established; HTTP inspection enforces content and application policies.\n\nDocumented **Data sources**: Cloudflare Logpush (Gateway DNS/HTTP logs), `sourcetype=cloudflare:gateway`. **App/TA** (typical add-on context): Cloudflare App for Splunk (Splunkbase 4501). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: cloudflare:gateway. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"cloudflare:gateway\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action IN (\"block\", \"isolate\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by policy_name, category, action, user_email** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cloudflare Gateway (SWG) DNS and HTTP Filtering** — Cloudflare Gateway provides DNS-layer and HTTP-layer filtering for all users. DNS blocks stop threats before connections are established; HTTP inspection enforces content and application policies.\n\nDocumented **Data sources**: Cloudflare Logpush (Gateway DNS/HTTP logs), `sourcetype=cloudflare:gateway`. **App/TA** (typical add-on context): Cloudflare App for Splunk (Splunkbase 4501). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (blocks by category), Table (top blocked domains), Line chart (daily blocks), Pie chart (DNS vs HTTP blocks).",
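Supplementary sketch: the same block-by-category rollup can be pulled out-of-band, for example to feed a weekly acceptable-use report, through the Splunk REST export endpoint. Host, credentials, and index are placeholder assumptions.

```python
#!/usr/bin/env python3
"""Pull Gateway block counts per category via the Splunk REST search API.

Sketch only: host, credentials, and index/sourcetype are placeholders. It runs
the same rollup as the SPL above through /services/search/jobs/export and
prints one line per category.
"""
import base64
import json
import ssl
import urllib.parse
import urllib.request

SPLUNK = "https://splunk.example.com:8089"      # management port, placeholder
USER, PASSWORD = "svc_reporting", "changeme"    # placeholders

spl = (
    'search index=proxy sourcetype="cloudflare:gateway" action IN ("block","isolate") '
    "| stats count by category | sort -count"
)
body = urllib.parse.urlencode(
    {"search": spl, "earliest_time": "-7d", "output_mode": "json"}
).encode()

req = urllib.request.Request(f"{SPLUNK}/services/search/jobs/export", data=body)
token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
req.add_header("Authorization", f"Basic {token}")

ctx = ssl._create_unverified_context()  # lab-only; use a proper CA bundle in production
with urllib.request.urlopen(req, context=ctx) as resp:
    for line in resp:                   # export streams newline-delimited JSON
        if not line.strip():
            continue
        row = json.loads(line)
        if "result" in row:
            print(row["result"].get("category"), row["result"].get("count"))
```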
              "script": "",
              "premium": "",
              "hw": "Cloudflare Zero Trust (cloud), Cloudflare WARP client",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Web",
                "Network_Resolution",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d",
              "e": [
                "cloudflare"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Cloudflare App for Splunk",
                  "id": 4501,
                  "url": "https://splunkbase.splunk.com/app/4501",
                  "desc": "Dashboards for Cloudflare performance, security, Zero Trust, and DNS analytics",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.58",
              "n": "Cloudflare Tunnel Health",
              "c": "high",
              "f": "intermediate",
              "v": "Cloudflare Tunnels (formerly Argo Tunnels) connect private infrastructure to Cloudflare without opening inbound ports. Tunnel health determines whether users can reach private applications through Cloudflare Access.",
              "t": "Cloudflare App for Splunk (Splunkbase 4501)",
              "d": "Cloudflare Logpush (Tunnel logs), `sourcetype=cloudflare:tunnel`",
              "q": "index=zt sourcetype=\"cloudflare:tunnel\" earliest=-24h\n| eval healthy=if(match(lower(event_type),\"(?i)connected|healthy|active\"),1,0)\n| stats latest(healthy) as status latest(_time) as last_seen by tunnel_id, tunnel_name, connector_id\n| where status=0 OR last_seen < relative_time(now(), \"-1h\")\n| sort last_seen",
              "m": "Configure Logpush for Tunnel health events. Alert when tunnels disconnect for >10 minutes. Track connector version compliance. Report on tunnel availability SLA.",
              "z": "Single value (tunnels down), Table (unhealthy tunnels), Line chart (tunnel availability %), Timeline (connect/disconnect events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Cloudflare App for Splunk](https://splunkbase.splunk.com/app/4501)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloudflare App for Splunk (Splunkbase 4501).\n• Ensure the following data sources are available: Cloudflare Logpush (Tunnel logs), `sourcetype=cloudflare:tunnel`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Logpush for Tunnel health events. Alert when tunnels disconnect for >10 minutes. Track connector version compliance. Report on tunnel availability SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"cloudflare:tunnel\" earliest=-24h\n| eval healthy=if(match(lower(event_type),\"(?i)connected|healthy|active\"),1,0)\n| stats latest(healthy) as status latest(_time) as last_seen by tunnel_id, tunnel_name, connector_id\n| where status=0 OR last_seen < relative_time(now(), \"-1h\")\n| sort last_seen\n```\n\nUnderstanding this SPL\n\n**Cloudflare Tunnel Health** — Cloudflare Tunnels (formerly Argo Tunnels) connect private infrastructure to Cloudflare without opening inbound ports. Tunnel health determines whether users can reach private applications through Cloudflare Access.\n\nDocumented **Data sources**: Cloudflare Logpush (Tunnel logs), `sourcetype=cloudflare:tunnel`. **App/TA** (typical add-on context): Cloudflare App for Splunk (Splunkbase 4501). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: cloudflare:tunnel. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"cloudflare:tunnel\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **healthy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by tunnel_id, tunnel_name, connector_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status=0 OR last_seen < relative_time(now(), \"-1h\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (tunnels down), Table (unhealthy tunnels), Line chart (tunnel availability %), Timeline (connect/disconnect events).",
              "script": "",
              "premium": "",
              "hw": "Cloudflare Zero Trust (cloud), cloudflared daemon (connector)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cloudflare"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Cloudflare App for Splunk",
                  "id": 4501,
                  "url": "https://splunkbase.splunk.com/app/4501",
                  "desc": "Dashboards for Cloudflare performance, security, Zero Trust, and DNS analytics",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.59",
              "n": "Forcepoint ONE SSE Web Security Events (Forcepoint)",
              "c": "high",
              "f": "beginner",
              "v": "Forcepoint ONE SSE delivers cloud-native SWG, CASB, and ZTNA from a unified platform. Web security events reveal policy violations, threat blocks, and shadow IT usage across all users.",
              "t": "Forcepoint Insights SIEM App (Splunkbase 8053)",
              "d": "Forcepoint ONE SSE logs via Splunkbase app",
              "q": "index=proxy sourcetype=\"forcepoint:one\" earliest=-7d\n| where action IN (\"block\", \"deny\")\n| stats count by policy_name, category, user, action\n| sort -count",
              "m": "Install the Forcepoint Insights SIEM App and configure log ingestion. Map categories to your acceptable-use policy. Alert on high-risk category blocks. Report on enforcement effectiveness.",
              "z": "Bar chart (blocks by category), Table (top users blocked), Line chart (block trend), Pie chart (policy distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Forcepoint Insights SIEM App](https://splunkbase.splunk.com/app/8053), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Forcepoint Insights SIEM App (Splunkbase 8053).\n• Ensure the following data sources are available: Forcepoint ONE SSE logs via Splunkbase app.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstall the Forcepoint Insights SIEM App and configure log ingestion. Map categories to your acceptable-use policy. Alert on high-risk category blocks. Report on enforcement effectiveness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"forcepoint:one\" earliest=-7d\n| where action IN (\"block\", \"deny\")\n| stats count by policy_name, category, user, action\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Forcepoint ONE SSE Web Security Events (Forcepoint)** — Forcepoint ONE SSE delivers cloud-native SWG, CASB, and ZTNA from a unified platform. Web security events reveal policy violations, threat blocks, and shadow IT usage across all users.\n\nDocumented **Data sources**: Forcepoint ONE SSE logs via Splunkbase app. **App/TA** (typical add-on context): Forcepoint Insights SIEM App (Splunkbase 8053). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: forcepoint:one. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"forcepoint:one\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action IN (\"block\", \"deny\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by policy_name, category, user, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Forcepoint ONE SSE Web Security Events (Forcepoint)** — Forcepoint ONE SSE delivers cloud-native SWG, CASB, and ZTNA from a unified platform. Web security events reveal policy violations, threat blocks, and shadow IT usage across all users.\n\nDocumented **Data sources**: Forcepoint ONE SSE logs via Splunkbase app. **App/TA** (typical add-on context): Forcepoint Insights SIEM App (Splunkbase 8053). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (blocks by category), Table (top users blocked), Line chart (block trend), Pie chart (policy distribution).",
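Supplementary sketch: the "alert on high-risk category blocks" step can be automated by creating the saved search through Splunk's REST API. Host, credentials, app context, recipient, and thresholds are placeholders; review the alert attributes against your Splunk version before use.

```python
#!/usr/bin/env python3
"""Create the high-risk-block alert as a Splunk saved search via REST.

Sketch under assumptions: host, credentials, app context, and the e-mail
recipient are placeholders, and the alert settings mirror common
savedsearches.conf attributes; review them against your Splunk version.
"""
import base64
import ssl
import urllib.parse
import urllib.request

SPLUNK = "https://splunk.example.com:8089"      # placeholder
USER, PASSWORD = "svc_automation", "changeme"   # placeholders
APP = "search"                                  # placeholder app context

params = {
    "name": "Forcepoint ONE - high-risk category blocks",
    "search": (
        'index=proxy sourcetype="forcepoint:one" action IN ("block","deny") '
        "| stats count by category, user | where count > 10"
    ),
    "is_scheduled": "1",
    "cron_schedule": "*/15 * * * *",
    "dispatch.earliest_time": "-15m",
    "dispatch.latest_time": "now",
    "alert_type": "number of events",
    "alert_comparator": "greater than",
    "alert_threshold": "0",
    "actions": "email",
    "action.email.to": "secops@example.com",
}

url = f"{SPLUNK}/servicesNS/nobody/{APP}/saved/searches"
req = urllib.request.Request(url, data=urllib.parse.urlencode(params).encode())
req.add_header(
    "Authorization",
    "Basic " + base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode(),
)
ctx = ssl._create_unverified_context()  # lab-only
with urllib.request.urlopen(req, context=ctx) as resp:
    print(resp.status)  # expect 201 Created
```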
              "script": "",
              "premium": "",
              "hw": "Forcepoint ONE (cloud), Forcepoint ONE Smart Edge Agent (endpoint)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Web",
                "Network_Traffic"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d",
              "e": [
                "forcepoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Forcepoint Insights SIEM App",
                  "id": 8053,
                  "url": "https://splunkbase.splunk.com/app/8053",
                  "desc": "Centralized visibility into Forcepoint ONE SSE logs with prebuilt dashboards",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.60",
              "n": "Forcepoint ONE ZTNA Private Access Health (Forcepoint)",
              "c": "high",
              "f": "intermediate",
              "v": "Forcepoint ONE ZTNA provides agentless and agent-based private application access. Connector health and access decision monitoring ensures application reachability.",
              "t": "Forcepoint Insights SIEM App (Splunkbase 8053)",
              "d": "Forcepoint ONE ZTNA access/connector logs",
              "q": "index=zt sourcetype=\"forcepoint:one\" earliest=-24h\n| where match(lower(event_type),\"(?i)ztna|private|connector\")\n| eval ok=if(match(lower(status),\"(?i)success|allow|connected\"),1,0)\n| stats count sum(ok) as successes by connector_name, app_name\n| eval error_pct=round(100*(count-successes)/count,1)\n| where error_pct > 5\n| sort -error_pct",
              "m": "Map connector names to datacenter/region. Alert when connectors show >10% error rate. Report on ZTNA adoption metrics.",
              "z": "Table (unhealthy connectors), Single value (connectors in error), Line chart (error rate trend), Bar chart (access by app).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Forcepoint Insights SIEM App](https://splunkbase.splunk.com/app/8053), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1133"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Forcepoint Insights SIEM App (Splunkbase 8053).\n• Ensure the following data sources are available: Forcepoint ONE ZTNA access/connector logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap connector names to datacenter/region. Alert when connectors show >10% error rate. Report on ZTNA adoption metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=zt sourcetype=\"forcepoint:one\" earliest=-24h\n| where match(lower(event_type),\"(?i)ztna|private|connector\")\n| eval ok=if(match(lower(status),\"(?i)success|allow|connected\"),1,0)\n| stats count sum(ok) as successes by connector_name, app_name\n| eval error_pct=round(100*(count-successes)/count,1)\n| where error_pct > 5\n| sort -error_pct\n```\n\nUnderstanding this SPL\n\n**Forcepoint ONE ZTNA Private Access Health (Forcepoint)** — Forcepoint ONE ZTNA provides agentless and agent-based private application access. Connector health and access decision monitoring ensures application reachability.\n\nDocumented **Data sources**: Forcepoint ONE ZTNA access/connector logs. **App/TA** (typical add-on context): Forcepoint Insights SIEM App (Splunkbase 8053). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: zt; **sourcetype**: forcepoint:one. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=zt, sourcetype=\"forcepoint:one\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(event_type),\"(?i)ztna|private|connector\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by connector_name, app_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_pct > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.dest span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Forcepoint ONE ZTNA Private Access Health (Forcepoint)** — Forcepoint ONE ZTNA provides agentless and agent-based private application access. Connector health and access decision monitoring ensures application reachability.\n\nDocumented **Data sources**: Forcepoint ONE ZTNA access/connector logs. **App/TA** (typical add-on context): Forcepoint Insights SIEM App (Splunkbase 8053). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unhealthy connectors), Single value (connectors in error), Line chart (error rate trend), Bar chart (access by app).",
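Supplementary sketch: a tiny worked example of the error-rate threshold the SPL applies, using hypothetical connector events, in case you want to sanity-check the math outside Splunk.

```python
#!/usr/bin/env python3
"""Worked example of the connector error-rate threshold used above.

Hypothetical sample data: each tuple is (connector_name, app_name, ok), where
ok=1 mirrors the SPL eval for success/allow/connected events. Connectors whose
error percentage exceeds 5% are flagged.
"""
from collections import defaultdict

events = [
    ("dc1-connector", "hr-portal", 1),
    ("dc1-connector", "hr-portal", 1),
    ("dc1-connector", "hr-portal", 0),
    ("dc2-connector", "finance-app", 1),
]

totals = defaultdict(lambda: [0, 0])          # (connector, app) -> [count, successes]
for connector, app, ok in events:
    totals[(connector, app)][0] += 1
    totals[(connector, app)][1] += ok

for (connector, app), (count, successes) in totals.items():
    error_pct = round(100 * (count - successes) / count, 1)
    if error_pct > 5:
        print(f"{connector} / {app}: error_pct={error_pct}")
```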
              "script": "",
              "premium": "",
              "hw": "Forcepoint ONE (cloud), Forcepoint ONE Connector (on-prem), Smart Edge Agent",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "Network_Traffic",
                "Network_Sessions"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Sessions.All_Sessions\n  by All_Sessions.dest span=1h",
              "e": [
                "forcepoint"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Forcepoint Insights SIEM App",
                  "id": 8053,
                  "url": "https://splunkbase.splunk.com/app/8053",
                  "desc": "Centralized visibility into Forcepoint ONE SSE logs with prebuilt dashboards",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.61",
              "n": "SonicWall Cloud SWG and SMA Access Events (SonicWall)",
              "c": "high",
              "f": "beginner",
              "v": "SonicWall Cloud SWG and Secure Mobile Access (SMA) provide cloud-delivered web security and ZTNA for remote and branch users. Tracking blocked connections, authentication events, and policy actions validates secure access enforcement.",
              "t": "SonicWall SMA 1000 TA (Splunkbase 6670), `dell:sonicwall:firewall` syslog",
              "d": "`sourcetype=dell:sonicwall:firewall` (syslog), SMA 1000 logs via TA",
              "q": "index=proxy sourcetype=\"dell:sonicwall:firewall\" earliest=-7d\n| where fw_action IN (\"block\",\"deny\",\"drop\")\n| stats count by cat, usr, fw_action, rule\n| sort -count",
              "m": "Forward SonicWall Cloud SWG/SMA logs via syslog or use the SMA 1000 TA. Map `cat` (web category) to acceptable-use policy. Alert on authentication failures from SMA. Report on policy enforcement effectiveness.",
              "z": "Bar chart (blocks by category), Table (top blocked users), Line chart (daily block trend), Pie chart (action distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[SonicWall SMA 1000 TA](https://splunkbase.splunk.com/app/6670), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [
                "T1071.001",
                "T1021"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SonicWall SMA 1000 TA (Splunkbase 6670), `dell:sonicwall:firewall` syslog.\n• Ensure the following data sources are available: `sourcetype=dell:sonicwall:firewall` (syslog), SMA 1000 logs via TA.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward SonicWall Cloud SWG/SMA logs via syslog or use the SMA 1000 TA. Map `cat` (web category) to acceptable-use policy. Alert on authentication failures from SMA. Report on policy enforcement effectiveness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"dell:sonicwall:firewall\" earliest=-7d\n| where fw_action IN (\"block\",\"deny\",\"drop\")\n| stats count by cat, usr, fw_action, rule\n| sort -count\n```\n\nUnderstanding this SPL\n\n**SonicWall Cloud SWG and SMA Access Events (SonicWall)** — SonicWall Cloud SWG and Secure Mobile Access (SMA) provide cloud-delivered web security and ZTNA for remote and branch users. Tracking blocked connections, authentication events, and policy actions validates secure access enforcement.\n\nDocumented **Data sources**: `sourcetype=dell:sonicwall:firewall` (syslog), SMA 1000 logs via TA. **App/TA** (typical add-on context): SonicWall SMA 1000 TA (Splunkbase 6670), `dell:sonicwall:firewall` syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: dell:sonicwall:firewall. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"dell:sonicwall:firewall\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where fw_action IN (\"block\",\"deny\",\"drop\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cat, usr, fw_action, rule** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SonicWall Cloud SWG and SMA Access Events (SonicWall)** — SonicWall Cloud SWG and Secure Mobile Access (SMA) provide cloud-delivered web security and ZTNA for remote and branch users. Tracking blocked connections, authentication events, and policy actions validates secure access enforcement.\n\nDocumented **Data sources**: `sourcetype=dell:sonicwall:firewall` (syslog), SMA 1000 logs via TA. **App/TA** (typical add-on context): SonicWall SMA 1000 TA (Splunkbase 6670), `dell:sonicwall:firewall` syslog. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (blocks by category), Table (top blocked users), Line chart (daily block trend), Pie chart (action distribution).",
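Supplementary sketch: a one-shot UDP sender for validating the syslog path before pointing appliances at it. The listener address and the sample message fields are placeholders; real SonicWall output will differ, so treat this only as a plumbing test.

```python
#!/usr/bin/env python3
"""Send one synthetic SonicWall-style syslog message over UDP.

Sketch for validating the syslog-to-Splunk path. The listener address and the
message fields are placeholders and are not real appliance output.
"""
import socket
import time

SYSLOG_HOST, SYSLOG_PORT = "splunk-syslog.example.com", 514  # placeholders

timestamp = time.strftime("%b %d %H:%M:%S")
message = (
    f"<134>{timestamp} sonicwall-test "
    'id=firewall sn=TEST000000 fw_action="drop" cat="Malware Sites" '
    'usr="jdoe" rule="Block high-risk categories" src=10.0.0.5 dst=203.0.113.10'
)

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.sendto(message.encode(), (SYSLOG_HOST, SYSLOG_PORT))
print("sent:", message)
```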
              "script": "",
              "premium": "",
              "hw": "SonicWall SMA 1000 series, SonicWall Cloud SWG, SonicWall NSa/NSsp (with SASE), SonicWall Cloud Edge",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "Web",
                "Network_Traffic",
                "Authentication"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Web.Web\n  where Web.action=\"blocked\"\n  by Web.category span=1d",
              "e": [
                "sonicwall",
                "syslog"
              ],
              "em": [],
              "ta_link": {
                "name": "SonicWall SMA 1000 TA for Splunk",
                "id": 6670,
                "url": "https://splunkbase.splunk.com/app/6670"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "17.3.62",
              "n": "Versa SASE Security and Access Events (Versa Networks)",
              "c": "high",
              "f": "intermediate",
              "v": "Versa SASE unifies SD-WAN, SWG, CASB, ZTNA, and FWaaS in a single platform. Security event analysis across these functions provides unified threat visibility and policy enforcement validation.",
              "t": "Versa Networks Splunk App (via Versa Analytics syslog integration)",
              "d": "Versa Analytics logs (syslog key-value pairs: alarm, event, flow, firewall, threat logs)",
              "q": "index=sase sourcetype=\"versa:sase\" earliest=-24h\n| where match(lower(action),\"(?i)block|deny|drop|alert\")\n| stats count by log_type, rule_name, src_ip, action\n| sort -count",
              "m": "Configure Versa Analytics to stream logs to Splunk via UDP/TCP syslog in key-value format. Install the Versa Splunk App for pre-built reports. Map `rule_name` to policy intent. Alert on threat and firewall blocks. Report on SASE enforcement across sites.",
              "z": "Bar chart (events by log type), Table (top blocked sources), Timeline (security events), Pie chart (action distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1071.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Versa Networks Splunk App (via Versa Analytics syslog integration).\n• Ensure the following data sources are available: Versa Analytics logs (syslog key-value pairs: alarm, event, flow, firewall, threat logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure Versa Analytics to stream logs to Splunk via UDP/TCP syslog in key-value format. Install the Versa Splunk App for pre-built reports. Map `rule_name` to policy intent. Alert on threat and firewall blocks. Report on SASE enforcement across sites.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sase sourcetype=\"versa:sase\" earliest=-24h\n| where match(lower(action),\"(?i)block|deny|drop|alert\")\n| stats count by log_type, rule_name, src_ip, action\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Versa SASE Security and Access Events (Versa Networks)** — Versa SASE unifies SD-WAN, SWG, CASB, ZTNA, and FWaaS in a single platform. Security event analysis across these functions provides unified threat visibility and policy enforcement validation.\n\nDocumented **Data sources**: Versa Analytics logs (syslog key-value pairs: alarm, event, flow, firewall, threat logs). **App/TA** (typical add-on context): Versa Networks Splunk App (via Versa Analytics syslog integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sase; **sourcetype**: versa:sase. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sase, sourcetype=\"versa:sase\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(action),\"(?i)block|deny|drop|alert\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by log_type, rule_name, src_ip, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src span=1h\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Versa SASE Security and Access Events (Versa Networks)** — Versa SASE unifies SD-WAN, SWG, CASB, ZTNA, and FWaaS in a single platform. Security event analysis across these functions provides unified threat visibility and policy enforcement validation.\n\nDocumented **Data sources**: Versa Analytics logs (syslog key-value pairs: alarm, event, flow, firewall, threat logs). **App/TA** (typical add-on context): Versa Networks Splunk App (via Versa Analytics syslog integration). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by log type), Table (top blocked sources), Timeline (security events), Pie chart (action distribution).",
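Supplementary sketch: a hypothetical Versa-style key=value line parsed in Python, illustrating why fields such as log_type, rule_name, src_ip, and action become searchable once key-value extraction runs on these events.

```python
#!/usr/bin/env python3
"""Parse a Versa-Analytics-style key=value syslog line into fields.

Illustrative only: the sample line is hypothetical, but it shows the key=value
shape that makes log_type, rule_name, src_ip, and action available to the SPL
above once Splunk's key-value extraction runs.
"""
import re

sample = (
    'log_type=firewall rule_name="Branch-Internet-Deny" '
    "src_ip=10.20.30.40 dst_ip=198.51.100.7 action=deny appliance=branch-07"
)

# key=value pairs, where the value is either quoted or a bare token
pairs = re.findall(r'(\w+)=("[^"]*"|\S+)', sample)
fields = {key: value.strip('"') for key, value in pairs}

print(fields["log_type"], fields["rule_name"], fields["action"])
```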
              "script": "",
              "premium": "",
              "hw": "Versa FlexVNF (branch CPE), Versa Director (management), Versa SASE Cloud Gateways",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We use network security tools to watch who accesses what, help prevent data leaks, enforce network isolation, and secure cloud and remote access.",
              "mtype": [
                "Security"
              ],
              "a": [
                "Network_Traffic",
                "Intrusion_Detection"
              ],
              "qs": "| tstats `summariesonly` count\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"blocked\"\n  by All_Traffic.src span=1h",
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.1,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 60,
            "none": 0
          }
        }
      ],
      "i": 17,
      "n": "Network Security & Zero Trust",
      "src": "cat-17-network-security-zero-trust.md"
    },
    {
      "s": [
        {
          "i": "18.1",
          "n": "Cisco ACI",
          "u": [
            {
              "i": "18.1.1",
              "n": "Fabric Health Score Monitoring",
              "c": "critical",
              "f": "advanced",
              "v": "ACI fabric health scores provide a single-pane view of overall data center network health. Monitoring these scores lets you catch degradation before it impacts workloads, correlate health drops with specific faults, and maintain SLA compliance across your data center fabric.",
              "t": "`TA_cisco-ACI`, APIC REST API via scripted input",
              "d": "APIC REST API (`/api/node/mo/topology/health.json`), APIC syslog",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:health\"\n| eval node_type=case(like(dn, \"%/node-1%\"), \"APIC\", like(dn, \"%/node-1___%\"), \"Leaf\", like(dn, \"%/node-2___%\"), \"Spine\", 1==1, \"Other\")\n| stats latest(healthScore) as health_score by dn, node_type\n| eval status=case(health_score>=90, \"Healthy\", health_score>=70, \"Degraded\", health_score>=50, \"Warning\", 1==1, \"Critical\")\n| sort health_score",
              "m": "Deploy scripted input to poll APIC health API every 60 seconds. Collect topology-wide and per-node health scores. Set threshold alerts: <90 degraded, <70 warning, <50 critical. Integrate with ITSI for service-level health correlation. Build trending to catch slow health degradation.",
              "z": "Single value (fabric health), Gauge (per-node health), Timechart (health trending), Status grid (node health map).",
              "kfp": "**ISSU** / rolling-upgrade windows where one leaf reboot sequence drops transient **fabricNode** reachability—**healthInst** readings lag behind **dataplane** convergence while syslog already surfaces F0103 link-down narratives; paging fabric-wide drops during tagged maintenance ignores CMDB change correlation. Fault acknowledgement latency after engineers clear **faultInst** entries in the APIC GUI—scores can remain depressed until the next **fvHealthScoreEvalPol** evaluation cycle even though severity counts already trend toward zero on screen. Cosmetic optics faults on unused fabric interfaces (missing or excessive optical signals) keep minor/warning buckets alive—scores dip although no tenant traffic traverses those ports—suppress by tagging shutdown interfaces or excluding lab POD DNs in Splunk lookups. Contract policy pushes and COOP convergence bursts raise transient fault storms unrelated to steady-state failure—histogram spikes resemble outages until topSystem trends stabilize; cross-check **fabricHealthHist5min** before opening sev-1 bridges. Planned multi-pod IPN maintenance with IS-IS churn or BFD hold-down on the interconnect—**faultInst** severity piles up while pod-to-pod latency still within SLO—filter on change-record metadata and Nexus Dashboard Fabric Controller context to avoid duplicate IPN escalations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC REST API via scripted input.\n• Ensure the following data sources are available: APIC REST API (`/api/node/mo/topology/health.json`), APIC syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy scripted input to poll APIC health API every 60 seconds. Collect topology-wide and per-node health scores. Set threshold alerts: <90 degraded, <70 warning, <50 critical. Integrate with ITSI for service-level health correlation. Build trending to catch slow health degradation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:health\"\n| eval node_type=case(like(dn, \"%/node-1%\"), \"APIC\", like(dn, \"%/node-1___%\"), \"Leaf\", like(dn, \"%/node-2___%\"), \"Spine\", 1==1, \"Other\")\n| stats latest(healthScore) as health_score by dn, node_type\n| eval status=case(health_score>=90, \"Healthy\", health_score>=70, \"Degraded\", health_score>=50, \"Warning\", 1==1, \"Critical\")\n| sort health_score\n```\n\nUnderstanding this SPL\n\n**Fabric Health Score Monitoring** — ACI fabric health scores provide a single-pane view of overall data center network health. Monitoring these scores lets you catch degradation before it impacts workloads, correlate health drops with specific faults, and maintain SLA compliance across your data center fabric.\n\nDocumented **Data sources**: APIC REST API (`/api/node/mo/topology/health.json`), APIC syslog. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC REST API via scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:health. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **node_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by dn, node_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (fabric health), Gauge (per-node health), Timechart (health trending), Status grid (node health map).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC, Nexus 9332C (ACI), Nexus 93180YC-FX (ACI), Nexus 9364C (ACI), Nexus 9504 (ACI), Nexus 9508 (ACI)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We turn the fabric color-coded health dial and fault tallies into plain trend lines so teams see when a tenant, endpoint group, or switch role slides from green toward red before apps time out.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.2",
              "n": "Fault Trending by Severity",
              "c": "high",
              "f": "intermediate",
              "v": "ACI faults are the primary operational signal from the fabric. Trending faults by severity helps identify worsening conditions, recurring hardware issues, and configuration problems before they cascade into outages.",
              "t": "`TA_cisco-ACI`, APIC syslog",
              "d": "APIC faults API (`/api/node/class/faultInst.json`), APIC syslog",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:faults\"\n| eval severity_order=case(severity==\"critical\", 4, severity==\"major\", 3, severity==\"minor\", 2, severity==\"warning\", 1, 1==1, 0)\n| timechart span=1h count by severity\n| fields _time critical major minor warning",
              "m": "Poll APIC fault instance class every 5 minutes. Parse severity, fault code, affected DN, and lifecycle state. Track fault creation/clearing patterns. Alert on critical/major fault count spikes. Build fault code frequency reports for proactive maintenance.",
              "z": "Timechart (fault trends by severity), Bar chart (top fault codes), Table (active critical faults), Single value (open critical faults count).",
              "kfp": "**LLDP churn narratives** — **F0105** storms during cabling refreshes mimic instability yet seldom merit tier-one escalation when rack elevation schedules explain neighbour churn rather than dataplane regressions alone. **Administrative silence masks** — **F0103** bursts following deliberate interface shutdown work mimic outages unless maintenance identifiers reconcile Splunk timelines with approved cutovers. **Delegate-instance collisions** — importing both **faultDelegate** aggregates and leaf-local **faultInst** copies doubles perceived severity totals across dashboards tuned for tenant summaries versus leaf troubleshooting squads until role filters partition KPI math. **Soak-window disappearance tricks** — faults residing strictly inside **lc** states **soaking**/**raised-clearing** vanish when naive searches retain literal **raised** filters—operators falsely conclude clearance despite transitional churn captured inside APIC. **Retention rollover ghosts** — aged **faultRecord** rows expire near **fvFaultLifecycleP** clocks while intermittent hardware defects recur moments later—Splunk charts imply vanished trouble though sensors still chatter beneath the hood. **Automation acknowledgement masking** — nightly bots stamping **ackUser** deflate MTTA KPIs without humans owning acknowledgement queues until Splunk excludes scripted identities through lookup governance. **Flap during clear-soak** — oscillations inside **raised-clearing** mimic healing spikes while underlying optics remain unstable—pair timelines with **occur** counters before closing tickets. **faultCounts smoothing** — coarse REST polls flatten bursts already visible inside syslog priority ramps—blend transports before declaring false calm.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC syslog.\n• Ensure the following data sources are available: APIC faults API (`/api/node/class/faultInst.json`), APIC syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll APIC fault instance class every 5 minutes. Parse severity, fault code, affected DN, and lifecycle state. Track fault creation/clearing patterns. Alert on critical/major fault count spikes. Build fault code frequency reports for proactive maintenance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:faults\"\n| eval severity_order=case(severity==\"critical\", 4, severity==\"major\", 3, severity==\"minor\", 2, severity==\"warning\", 1, 1==1, 0)\n| timechart span=1h count by severity\n| fields _time critical major minor warning\n```\n\nUnderstanding this SPL\n\n**Fault Trending by Severity** — ACI faults are the primary operational signal from the fabric. Trending faults by severity helps identify worsening conditions, recurring hardware issues, and configuration problems before they cascade into outages.\n\nDocumented **Data sources**: APIC faults API (`/api/node/class/faultInst.json`), APIC syslog. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:faults. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:faults\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **severity_order** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by severity** — ideal for trending and alerting on this use case.\n• Keeps or drops fields with `fields` to shape columns and size.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (fault trends by severity), Bar chart (top fault codes), Table (active critical faults), Single value (open critical faults count).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC, Nexus 9332C (ACI), Nexus 93180YC-FX (ACI), Nexus 9364C (ACI), Nexus 9504 (ACI), Nexus 9508 (ACI)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We follow each flashing alarm from first raise through soak timers into stored history, note who tapped the confirm button and why, and clock how long sticky codes linger—so the night-shift story matches what the controller recorded, not guesswork from one rolling number.",
              "wv": "crawl",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "18.1.3",
              "n": "Endpoint Mobility Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Endpoint mobility in ACI tracks workload movement across leaf switches. Anomalous mobility (rapid moves, unexpected locations) can indicate misconfigurations, loops, or security issues like MAC spoofing.",
              "t": "`TA_cisco-ACI`, APIC endpoint tracker",
              "d": "APIC endpoint tracker, ACI endpoint move events",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:endpoint\"\n| where action=\"move\"\n| stats count as move_count, values(from_leaf) as from_leaves, values(to_leaf) as to_leaves, latest(_time) as last_move by mac, ip, tenant, epg\n| where move_count > 5\n| sort -move_count\n| eval alert=if(move_count>20, \"Anomalous\", \"Normal\")",
              "m": "Enable endpoint tracker on APIC. Ingest endpoint move events via syslog or API polling. Baseline normal mobility rates per EPG. Alert on endpoints with excessive moves (>20/hour). Investigate rapid moves for potential loops or spoofing. Correlate with contract hits.",
              "z": "Table (high-mobility endpoints), Timechart (move rate trending), Sankey diagram (leaf-to-leaf moves), Single value (anomalous endpoints).",
              "kfp": "**VMware vMotion/DRS evacuation bursts** — coordinated **`fvCEp`** relocations spanning dozens of **`leaf_id`** entries mirror spoof storms yet align with **`CHG`** references tagging datastore maintenance—suppress **`mobility_storm`** macros whenever **`planned_motion_tag`** ties **`mac_u`** clusters to **`CHG004412`** style VMware windows while **`F0856`** narratives cite benign DPVM relocations distinct from **`F0467`** VLAN collisions (still noisy yet path-consistent). **vPC peer link renegotiation or member flaps** — **`fvRsCEpToPathEp`** may repoint through alternate MCT members even when workloads stay logically fixed, inflating **`leaf_fanout`**—require paired diagnostics from **`vpcInst`** syslog lines or STP guard logs before escalating; map suppressions to **`aci_vpc_suppress.csv`** keyed on **`vpcPairId`**. **Fabric image / F-Series line-card rolling upgrades** — COOP convergence and cache warm events duplicate endpoint learn events without attack significance; align splunk timelines with **`show version`** change tickets and pause **`distinct_leaf_hour`** macros for controller maintenance tags. **COOP/NYSE cache flush after policy controller events** — brief relearning surges mimic rapid moves; compare COOP **`statsEntity`** polls (optional side index) against endpoint counts; auto-suppress **`mobility_storm`** if flush markers appear in **`cisco:aci:syslog`**. **Anycast gateway ARP/GARP churn on SVI stretches** — VRF stretching between pods makes **`fvCtx`** **`vrfDn`** appear to hop even when East-West traffic reroutes logically; cross-check ServiceNow **`CHG`** approvals for L3Stretch operations and shorten alert windows rather than paging network security; document using **`aci_l3stretch_suppress`** lookups when **`vrfDn`** matches active change data.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC endpoint tracker.\n• Ensure the following data sources are available: APIC endpoint tracker, ACI endpoint move events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable endpoint tracker on APIC. Ingest endpoint move events via syslog or API polling. Baseline normal mobility rates per EPG. Alert on endpoints with excessive moves (>20/hour). Investigate rapid moves for potential loops or spoofing. Correlate with contract hits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:endpoint\"\n| where action=\"move\"\n| stats count as move_count, values(from_leaf) as from_leaves, values(to_leaf) as to_leaves, latest(_time) as last_move by mac, ip, tenant, epg\n| where move_count > 5\n| sort -move_count\n| eval alert=if(move_count>20, \"Anomalous\", \"Normal\")\n```\n\nUnderstanding this SPL\n\n**Endpoint Mobility Tracking** — Endpoint mobility in ACI tracks workload movement across leaf switches. Anomalous mobility (rapid moves, unexpected locations) can indicate misconfigurations, loops, or security issues like MAC spoofing.\n\nDocumented **Data sources**: APIC endpoint tracker, ACI endpoint move events. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC endpoint tracker. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:endpoint. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:endpoint\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"move\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by mac, ip, tenant, epg** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where move_count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `eval` defines or adjusts **alert** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (high-mobility endpoints), Timechart (move rate trending), Sankey diagram (leaf-to-leaf moves), Single value (anomalous endpoints).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC, Nexus 9332C (ACI), Nexus 93180YC-FX (ACI), Nexus 9364C (ACI), Nexus 9504 (ACI), Nexus 9508 (ACI)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We keep a simple ledger of where each device’s network name tag last sat on your row of switches—like noting which park bench a courier used—so when it zips bench to bench faster than any real delivery, we notice before someone else’s name is on the package.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "18.1.4",
              "n": "Contract/Filter Hit Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "ACI contracts control EPG-to-EPG communication. Analyzing contract hits reveals traffic patterns, identifies overly permissive or unused contracts, and helps validate micro-segmentation policies are working as designed.",
              "t": "`TA_cisco-ACI`, APIC flow logs",
              "d": "APIC contract hit counters, ACI flow telemetry",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:contracts\"\n| stats sum(permit_count) as permitted, sum(deny_count) as denied by src_epg, dst_epg, contract_name, filter_name\n| eval total=permitted+denied\n| eval deny_pct=round((denied/total)*100, 2)\n| sort -total\n| table src_epg, dst_epg, contract_name, filter_name, permitted, denied, deny_pct",
              "m": "Enable contract statistics on APIC. Poll contract hit counters via API every 5 minutes. Track permit vs deny ratios per contract. Identify contracts with zero hits (candidates for cleanup). Alert on unexpected deny spikes indicating policy or application issues.",
              "z": "Table (contract hit summary), Bar chart (top contracts by hits), Timechart (deny trends), Sankey diagram (EPG-to-EPG flows).",
              "kfp": "**Firewall-consolidation cutovers** — coordinated deny-counter surges appear while engineers rewrite **vzBrCP** subjects during staged migrations—traffic remains legitimately blocked until replacement permits publish; correlate Splunk timelines with CRQ identifiers referencing legacy firewall retirement milestones rather than paging lateral-movement hunts alone. **Preferred Group membership toggles** — flipping **prefGrMemb** from excluded toward included briefly yields permit-counter troughs until TCAM reprograms relaxed intra-group shortcuts—suppress alerts aligned with tenant escape-hatch approvals documented in CAB packets. **Scheduled interconnect-maintenance windows between pods** — IP-network instability delays collectors fetching **actrlPermitStats5min** snapshots while dataplane forwarding stays healthy—cross-reference **_time** jitter against WAN maintenance bridges before accusing microsegmentation regressions. **Leaf TCAM exhaustion or partial programming** — **F4029**/**F4030** combinations drop counters silently because rules never compiled yet syslog fault queues lag REST pulls—pair deny dashboards with fault streams before trusting zero-hit narratives. **vzAny default shortcuts** — east-west conversations traverse broad **/brcdef** permits making per-EPG counters look idle despite alive flows—confirm with SPAN captures rather than concluding Splunk misses ingestion. **NDO/MSO stretched-site synchronization drift** — templates propagate slower than primary-site edits—secondary-site deny spikes occasionally reflect orchestration backlog rather than hostile bursts until replicated **vzBrCP** scopes converge.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC flow logs.\n• Ensure the following data sources are available: APIC contract hit counters, ACI flow telemetry.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable contract statistics on APIC. Poll contract hit counters via API every 5 minutes. Track permit vs deny ratios per contract. Identify contracts with zero hits (candidates for cleanup). Alert on unexpected deny spikes indicating policy or application issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:contracts\"\n| stats sum(permit_count) as permitted, sum(deny_count) as denied by src_epg, dst_epg, contract_name, filter_name\n| eval total=permitted+denied\n| eval deny_pct=round((denied/total)*100, 2)\n| sort -total\n| table src_epg, dst_epg, contract_name, filter_name, permitted, denied, deny_pct\n```\n\nUnderstanding this SPL\n\n**Contract/Filter Hit Analysis** — ACI contracts control EPG-to-EPG communication. Analyzing contract hits reveals traffic patterns, identifies overly permissive or unused contracts, and helps validate micro-segmentation policies are working as designed.\n\nDocumented **Data sources**: APIC contract hit counters, ACI flow telemetry. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC flow logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:contracts. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:contracts\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_epg, dst_epg, contract_name, filter_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **deny_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Contract/Filter Hit Analysis**): table src_epg, dst_epg, contract_name, filter_name, permitted, denied, deny_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (contract hit summary), Bar chart (top contracts by hits), Timechart (deny trends), Sankey diagram (EPG-to-EPG flows).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC, Nexus 9332C (ACI), Nexus 93180YC-FX (ACI), Nexus 9364C (ACI), Nexus 9504 (ACI), Nexus 9508 (ACI)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch which conversations between application groups your tenant agreements allow or refuse, so quiet mistakes—like shortcuts that bypass careful checks or counters that hide real movement—surface before anyone blames the wrong software release.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "18.1.5",
              "n": "Tenant Configuration Audit",
              "c": "medium",
              "f": "intermediate",
              "v": "Configuration changes in ACI tenants (BDs, EPGs, contracts) are a leading cause of outages. Auditing all changes provides accountability, supports compliance, and enables rapid rollback identification when issues occur.",
              "t": "`TA_cisco-ACI`, APIC audit log",
              "d": "APIC audit log (`/api/node/class/aaaModLR.json`)",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:audit\"\n| where like(affected, \"uni/tn-%\")\n| rex field=affected \"uni/tn-(?<tenant>[^/]+)\"\n| stats count by _time, user, action, tenant, affected, descr\n| sort -_time\n| table _time, user, action, tenant, affected, descr",
              "m": "Enable audit logging on APIC (enabled by default). Ingest audit records via API polling or syslog. Track all create/modify/delete operations on tenant objects. Correlate configuration changes with fault events. Require change management tickets for production tenant changes.",
              "z": "Table (recent changes), Timeline (change events), Bar chart (changes by user), Pie chart (changes by tenant).",
              "kfp": "**Planned tenant decommissioning during application retire windows** — CAB-sanctioned waves delete **fvAEPg**/**fvBD** hierarchies rapidly, so Splunk velocity and **blast_radius** resemble abuse even while traffic drains by design; require **CHG** correlation and **tenant_retire** suppress flags.\n\n**CD-pipeline orchestrated config refreshes from approved Ansible repos** — Jenkins or **ACI** Ansible modules emit dense **REST** bursts with shared service principals; whitelist **git_commit**/**ansible_job_id** lookups and **svc_aci_cd** identities so velocity panels stay quiet.\n\n**Deliberate snapshot-restore drills** — engineers trigger **configRollback** plus **configJob** replays during **DR** exercises, inflating rollback pairings without production impact; stamp **drill_id** tokens in CMDB overlays before paging **Sev-2**.\n\n**Bulk EPG-import bursts during onboarding sprints** — application teams import dozens of **fvAEPg** rows in one maintenance slice, stretching **aaaModLR** transcripts; align with documented **cutover_id** milestones and mute only when **blast_radius** stays inside predeclared ceilings.\n\n**Tenant-clone replication toward staging fabrics** — **MSO**/**NDO** operations copy tenants for environment parity; **txId** sequencing mirrors production yet targets non-prod indexes—partition Splunk indexes per **site_code** so staging noise never masquerades as production tampering.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC audit log.\n• Ensure the following data sources are available: APIC audit log (`/api/node/class/aaaModLR.json`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable audit logging on APIC (enabled by default). Ingest audit records via API polling or syslog. Track all create/modify/delete operations on tenant objects. Correlate configuration changes with fault events. Require change management tickets for production tenant changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:audit\"\n| where like(affected, \"uni/tn-%\")\n| rex field=affected \"uni/tn-(?<tenant>[^/]+)\"\n| stats count by _time, user, action, tenant, affected, descr\n| sort -_time\n| table _time, user, action, tenant, affected, descr\n```\n\nUnderstanding this SPL\n\n**Tenant Configuration Audit** — Configuration changes in ACI tenants (BDs, EPGs, contracts) are a leading cause of outages. Auditing all changes provides accountability, supports compliance, and enables rapid rollback identification when issues occur.\n\nDocumented **Data sources**: APIC audit log (`/api/node/class/aaaModLR.json`). **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC audit log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where like(affected, \"uni/tn-%\")` — typically the threshold or rule expression for this monitoring goal.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by _time, user, action, tenant, affected, descr** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Tenant Configuration Audit**): table _time, user, action, tenant, affected, descr\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recent changes), Timeline (change events), Bar chart (changes by user), Pie chart (changes by tenant).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC, Nexus 9332C (ACI), Nexus 93180YC-FX (ACI), Nexus 9364C (ACI), Nexus 9504 (ACI), Nexus 9508 (ACI)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We keep a plain record of who changed each tenant slice on your network fabric, when it happened, and whether the work fully finished everywhere—so surprise rewires cannot hide behind everyday automation noise.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.6",
              "n": "Leaf/Spine Interface Utilization",
              "c": "high",
              "f": "intermediate",
              "v": "Fabric link saturation causes packet drops and application latency. Monitoring leaf/spine interface utilization identifies hotspots, validates ECMP distribution, and supports capacity planning for fabric expansion.",
              "t": "`TA_cisco-ACI`, APIC interface metrics",
              "d": "APIC interface statistics API, fabric port counters",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:interface_stats\"\n| eval util_pct=round((bytesRate*8/speed)*100, 2)\n| stats avg(util_pct) as avg_util, max(util_pct) as peak_util by node, interface, speed\n| where peak_util > 70\n| sort -peak_util\n| table node, interface, speed, avg_util, peak_util",
              "m": "Poll APIC interface statistics every 60 seconds. Calculate utilization from byte rates and link speed. Set thresholds at 70% warning, 85% critical. Track ECMP balance across parallel fabric links. Alert on sustained high utilization indicating need for fabric expansion.",
              "z": "Heatmap (interface utilization by node), Timechart (utilization trending), Table (high-util interfaces), Gauge (fabric aggregate utilization).",
              "kfp": "**Planned port-channel rebalance windows** — cabling teams shift LAG members or reweight spine-hashed uplinks during approved maintenance—**`bytes_ingress_5min`** diverges sharply between **`pcRsMbrIfs`** participants for minutes while packet loss stays at lab noise; correlate Splunk timelines against **`CHG`** bundles that cite spine-side rebalance phases before escalating to sev-1 fabric bridges. **BGP route-refresh uplink bursts** — northbound spine sessions may deliver short **route-refresh** storms that inflate five-minute byte counters without sustained **`drop_forwarding_pkts`** growth; cross-check neighbor event logs whenever utilization alone spikes without drops. **Optics replacement rehearsal intervals** — staged **QSFP**/**SFP** swaps lift transient **`OPTICS-3-OPTICAL_RX_LOW`** narratives at insertion yet settle once power levels stabilize; mute optics macros when **`CHG`** records cite cold-standby cage kits and insertion-loss baselines documented in hardware tickets. **Post-rack-airflow HVAC reset cooling cycles** — intentional aisle reversals jog **`eqptDdiPDp`** temperature digits near transient warm points while **`crc_err_pkts`** remain flat—pair telemetry with facility **`CHG`** identifiers rather than cabling escalation queues alone. **vPC peer-keepalive micro-bursts during isolation drills** — maintenance bridges momentarily widen peer-link drop fractions mirrored by **`vpcMbrEpg`** saturation widgets yet recover once heartbeat routing restores—require **`CHG`** acknowledgement tokens before labeling chronic peer saturation unrelated to rehearsal scopes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1499.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC interface metrics.\n• Ensure the following data sources are available: APIC interface statistics API, fabric port counters.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll APIC interface statistics every 60 seconds. Calculate utilization from byte rates and link speed. Set thresholds at 70% warning, 85% critical. Track ECMP balance across parallel fabric links. Alert on sustained high utilization indicating need for fabric expansion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:interface_stats\"\n| eval util_pct=round((bytesRate*8/speed)*100, 2)\n| stats avg(util_pct) as avg_util, max(util_pct) as peak_util by node, interface, speed\n| where peak_util > 70\n| sort -peak_util\n| table node, interface, speed, avg_util, peak_util\n```\n\nUnderstanding this SPL\n\n**Leaf/Spine Interface Utilization** — Fabric link saturation causes packet drops and application latency. Monitoring leaf/spine interface utilization identifies hotspots, validates ECMP distribution, and supports capacity planning for fabric expansion.\n\nDocumented **Data sources**: APIC interface statistics API, fabric port counters. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC interface metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:interface_stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:interface_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by node, interface, speed** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where peak_util > 70` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Leaf/Spine Interface Utilization**): table node, interface, speed, avg_util, peak_util\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (interface utilization by node), Timechart (utilization trending), Table (high-util interfaces), Gauge (fabric aggregate utilization).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC, Nexus 9332C (ACI), Nexus 93180YC-FX (ACI), Nexus 9364C (ACI), Nexus 9504 (ACI), Nexus 9508 (ACI)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We measure how busy each spine-facing cable runs across five-minute slices and whether receive light stays inside budget—so crowded pipes, rising drops, or dim optics ring before everyday east-west apps feel sluggish.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.7",
              "n": "APIC Cluster Health",
              "c": "critical",
              "f": "advanced",
              "v": "APIC controllers manage the entire ACI fabric. Cluster health issues (split-brain, leader election, convergence problems) can cause fabric-wide configuration and policy failures. Monitoring APIC cluster state is essential for fabric reliability.",
              "t": "`TA_cisco-ACI`, APIC system logs",
              "d": "APIC cluster health API, APIC system logs/syslog",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:system\"\n| search (cluster_status OR leader_election OR convergence)\n| eval status=case(\n    searchmatch(\"fully-fit\"), \"Healthy\",\n    searchmatch(\"partially-fit\"), \"Degraded\",\n    searchmatch(\"not-fit\"), \"Critical\",\n    1==1, \"Unknown\")\n| stats latest(status) as cluster_status, latest(_time) as last_update by apic_id\n| table apic_id, cluster_status, last_update",
              "m": "Monitor APIC cluster health endpoint every 30 seconds. Track cluster fitness, leader election events, and database sync status. Alert immediately on any non-fully-fit state. Monitor APIC resource utilization (disk, CPU, memory). Document recovery procedures for cluster issues.",
              "z": "Status grid (APIC cluster state), Timeline (cluster events), Single value (cluster fitness), Table (APIC node details).",
              "kfp": "**Coordinated ISSU waves** — Sequential APIC reboots lift replicas_pending during soak timers yet converge cleanly once ISSU manifests finish promotion steps—suppress Sev-1 macros whenever CAB bundles cite ISSU sequencing plus aaaModLR audits confirm scripted rotations rather than asymmetric shard splits **.** **Standby activation rehearsals** — Engineers deliberately activate cold standby controllers reorder leader hints briefly without harming dataplane forwarding—tie Splunk timelines to rehearsal IDs logged inside Visore bookmarks before paging consensus faults **.** **CIMC blade evacuation cycles** — Hypervisor patches stall VM-APIC disks mimicking shard backlog yet dissipate once datastore queues drain—cross-check virtualization calendars before escalating consensus faults unrelated to APIC firmware **.** **Visore-immediate MO pulls** — Operators triggering infraCluster refreshes amplify REST deltas absent syslog mnemonic corroboration—require fault_code alignment plus replication-delay narratives before paging fabric-wide outages **.** **CHG-backed firmware staging lanes** — Automation uploads staged bundles temporarily widen MOM drift alarms until ctrl_sw hashes reconcile across appliance peers participating in coordinated ISSU staging windows tagged inside ServiceNow **.** **Insights dashboard latency artifacts** — ND Insights occasionally trails APIC REST collectors during WAN brownouts—cross-verify Splunk sourcetype timestamps versus Insights widgets before blaming quorum_warn positives alone **.**",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1495"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC system logs.\n• Ensure the following data sources are available: APIC cluster health API, APIC system logs/syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor APIC cluster health endpoint every 30 seconds. Track cluster fitness, leader election events, and database sync status. Alert immediately on any non-fully-fit state. Monitor APIC resource utilization (disk, CPU, memory). Document recovery procedures for cluster issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:system\"\n| search (cluster_status OR leader_election OR convergence)\n| eval status=case(\n    searchmatch(\"fully-fit\"), \"Healthy\",\n    searchmatch(\"partially-fit\"), \"Degraded\",\n    searchmatch(\"not-fit\"), \"Critical\",\n    1==1, \"Unknown\")\n| stats latest(status) as cluster_status, latest(_time) as last_update by apic_id\n| table apic_id, cluster_status, last_update\n```\n\nUnderstanding this SPL\n\n**APIC Cluster Health** — APIC controllers manage the entire ACI fabric. Cluster health issues (split-brain, leader election, convergence problems) can cause fabric-wide configuration and policy failures. Monitoring APIC cluster state is essential for fabric reliability.\n\nDocumented **Data sources**: APIC cluster health API, APIC system logs/syslog. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC system logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:system. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:system\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by apic_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **APIC Cluster Health**): table apic_id, cluster_status, last_update\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (APIC cluster state), Timeline (cluster events), Single value (cluster fitness), Table (APIC node details).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC, Nexus 9332C (ACI), Nexus 93180YC-FX (ACI), Nexus 9364C (ACI), Nexus 9504 (ACI), Nexus 9508 (ACI)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch whether the three brains steering your fabric still agree who leads and finish copying policy edits together—when they quarrel or leave changes dangling, we raise a calm warning before everyday automation freezes.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.8",
              "n": "Spine-Leaf Fabric Latency",
              "c": "medium",
              "f": "advanced",
              "v": "Inter-switch latency within the fabric directly impacts east-west traffic between workloads. High latency causes application timeouts, database replication lag, and degraded user experience. Monitoring fabric latency identifies congestion, misrouted traffic, and capacity bottlenecks before they impact SLAs.",
              "t": "Custom scripted input (ping, TWAMP, fabric analytics), `TA_cisco-ACI`, `arista:eos` via SC4S",
              "d": "In-band Network Telemetry (INT), fabric analytics tools, ICMP probes between switches",
              "q": "index=fabric sourcetype=\"fabric:latency\"\n| eval latency_ms=round(rtt_us/1000, 2)\n| stats avg(latency_ms) as avg_latency, max(latency_ms) as max_latency, stdev(latency_ms) as latency_jitter by src_switch, dst_switch, path_type\n| where avg_latency > 1 OR max_latency > 5\n| sort -max_latency\n| table src_switch, dst_switch, path_type, avg_latency, max_latency, latency_jitter",
              "m": "Deploy ICMP or TWAMP probes between leaf and spine switches on a dedicated management or out-of-band VLAN. Poll every 30–60 seconds. For INT-capable fabrics (Arista DANZ, Cisco NX-OS telemetry), enable in-band telemetry for real-time latency visibility. Parse probe results into Splunk via scripted input. Set thresholds: >1 ms average or >5 ms peak for east-west paths. Alert on sustained elevation. Correlate latency spikes with interface utilization and BGP convergence events.",
              "z": "Heatmap (latency by switch pair), Timechart (latency trending), Table (high-latency paths), Single value (fabric P99 latency).",
              "kfp": "**microburst_lag_uplink_skew** — Approved spine-facing **LAG** rescans sometimes reshape hashed member utilization for tens of seconds while microburst-sensitive counters spike without sustained **`fabric_latency_ns`** growth; anchor macros to **`CHG`** bundles referencing leaf uplink rebalance scopes before blaming scheduler starvation outright.\n\n**eqptIngr_counter_wrap** — Thirty-two-bit **`eqptIngrTotal`** ladders roll during maintenance-window traffic ramps yet **`latency_ns`** histograms remain flat while **`wrap_seen`** toggles—pair rollover hints with **`latency_us_peak`** streaks spanning multiple buckets rather than isolated wraps.\n\n**nir_grpc_backpressure_window** — **NIR** exporters briefly pause **gRPC** streams during collector upgrades—**`nir_gap_sec`** climbs without dataplane contention—suppress **`latency_fault`** lanes tagged **`nir_export_gap_bad`** whenever Heavy Forwarder **`splunkd.log`** lists rotated certificates concurrently.\n\n**ptp_skew_adjustment_event** — Campus **PTP** steering injects **`fabric_clock_skew_context`** bursts altering ingress→egress deltas although spine buffers remain idle—cross-check **`fabricNode`** **`unixDate`** deltas against **`skew_warn`** arcs tied to timing decks.\n\n**grpc_interval_change_window** — Telemetry engineers intentionally shift dial-out cadence (**30s→5s**) for hotspot pods—**`grpc_iv_sec`** swings mimic pathology unless dashboards annotate **`grpc_hi_freq_window`** suppress tokens referencing observability **`CHG`** identifiers.\n\n**epg_relocation_path_swap** — Mobility crews relocate **`fvCEp`** attachments across leaf pairs—temporary **`latency_ns`** dispersion reflects hashing churn rather than persistent congestion—bind **`latency_fault`** escalations to absent **`CHG`** approvals referencing stretched VLAN migrations.\n\n**nir_probe_sync_window** — Hardware-assisted probes occasionally reorder arrival timestamps during **`nir_probe_sync_window`** firmware transitions—latency deltas spike once while **`burst_proxy_peak`** stays dormant—confirm **`INTERFACE`** syslog silence before escalating optics replacements.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (ping, TWAMP, fabric analytics), `TA_cisco-ACI`, `arista:eos` via SC4S.\n• Ensure the following data sources are available: In-band Network Telemetry (INT), fabric analytics tools, ICMP probes between switches.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy ICMP or TWAMP probes between leaf and spine switches on a dedicated management or out-of-band VLAN. Poll every 30–60 seconds. For INT-capable fabrics (Arista DANZ, Cisco NX-OS telemetry), enable in-band telemetry for real-time latency visibility. Parse probe results into Splunk via scripted input. Set thresholds: >1 ms average or >5 ms peak for east-west paths. Alert on sustained elevation. Correlate latency spikes with interface utilization and BGP convergence events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=fabric sourcetype=\"fabric:latency\"\n| eval latency_ms=round(rtt_us/1000, 2)\n| stats avg(latency_ms) as avg_latency, max(latency_ms) as max_latency, stdev(latency_ms) as latency_jitter by src_switch, dst_switch, path_type\n| where avg_latency > 1 OR max_latency > 5\n| sort -max_latency\n| table src_switch, dst_switch, path_type, avg_latency, max_latency, latency_jitter\n```\n\nUnderstanding this SPL\n\n**Spine-Leaf Fabric Latency** — Inter-switch latency within the fabric directly impacts east-west traffic between workloads. High latency causes application timeouts, database replication lag, and degraded user experience. Monitoring fabric latency identifies congestion, misrouted traffic, and capacity bottlenecks before they impact SLAs.\n\nDocumented **Data sources**: In-band Network Telemetry (INT), fabric analytics tools, ICMP probes between switches. **App/TA** (typical add-on context): Custom scripted input (ping, TWAMP, fabric analytics), `TA_cisco-ACI`, `arista:eos` via SC4S. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: fabric; **sourcetype**: fabric:latency. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=fabric, sourcetype=\"fabric:latency\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by src_switch, dst_switch, path_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_latency > 1 OR max_latency > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Spine-Leaf Fabric Latency**): table src_switch, dst_switch, path_type, avg_latency, max_latency, latency_jitter\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (latency by switch pair), Timechart (latency trending), Table (high-latency paths), Single value (fabric P99 latency).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Nexus 9000, Nexus 9300/9500, Arista 7050/7280/7500",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch how long packets really take hopping across your leaf-and-spine switching cloth—not the wider internet—so tiny slowdowns ring alarms before everyday apps stall or meetings fill with guesses.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.9",
              "n": "ACI Contract Hit/Miss Ratio Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Complements raw contract hits (UC-18.1.4) with **permit vs deny/miss** ratios over time to catch mis-tuned filters and unexpected drops before workloads fail.",
              "t": "`TA_cisco-ACI`, APIC contract statistics",
              "d": "`sourcetype=cisco:aci:contracts` or APIC API contract stats",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:contracts\"\n| timechart span=1h sum(permit_count) as permit sum(deny_count) as deny by contract_name\n| eval miss_ratio=round(100*deny/(deny+permit+0.001),2)",
              "m": "Poll contract counters on the same interval as UC-18.1.4. Alert when `miss_ratio` jumps vs 24h baseline for business-critical contracts. Map `contract_name` to owning team.",
              "z": "Line chart (permit vs deny), Single value (miss ratio %), Table (worst contracts).",
              "kfp": "**contract_migration_window** — Operators migrating EPGs between contracts during application-tier restructuring cause transient deny spikes while new permit rules compile across leaf TCAM—the ratio dip reflects compilation lag rather than policy failure; suppress by correlating **aaaModLR** entries showing **vzBrCP** or **vzConsCtrct** edits and stamping maintenance-window tags until actrlRule entries stabilise.\n\n**filter_entry_resequence** — Security teams reordering **vzEntry** priority within filters briefly invalidate compiled **actrlRule** entries while replacement hardware rules propagate—ratio oscillations during the window (typically 30–90 seconds per leaf) do not indicate traffic loss if ingrPkts on newly compiled rules begins incrementing within two poll cycles.\n\n**epg_vzany_enablement** — Enabling **vzAny** provider contracts on a VRF broadens implicit scope from per-EPG to per-VRF evaluation, causing a one-time surge in implicit deny visibility as previously unseen EPG-pair combinations report statistics through the vzAny aggregation path—baseline ratios shift permanently and must be recalibrated within twenty-four hours.\n\n**taboo_contract_addition** — Adding **vzTaboo** contracts generates new explicit deny rules overriding normal permits for blacklisted traffic; deny counters spike because the taboo represents intentional security hardening—distinguish from misconfiguration by correlating vzTaboo creation timestamps in aaaModLR and confirming the fltId matches the taboo filter chain.\n\n**preferred_group_toggle** — Toggling **prefGroupMemb=include** on EPGs switches communication from deny-by-default to permit-by-default among group members; the ratio inverts dramatically—verify against fvEPg attribute change records rather than assuming permit floods indicate leaked security posture.\n\n**consumer_provider_swap_window** — Operators deleting a **vzConsCtrct** binding and re-adding as **vzProvCtrct** to adjust directionality generate transient implicit deny hits during the gap—correlate paired delete-then-create operations on the same fvAEPg within a five-minute window before paging.\n\n**pcTag_reallocation_event** — APIC periodically reallocates **policy control tags** during VRF-scale events or fabric-partition recovery; actrlRule entries referencing old pcTags stop counting while replacements compile—ratio collapses for one or two poll cycles but recovers autonomously.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC contract statistics.\n• Ensure the following data sources are available: `sourcetype=cisco:aci:contracts` or APIC API contract stats.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll contract counters on the same interval as UC-18.1.4. Alert when `miss_ratio` jumps vs 24h baseline for business-critical contracts. Map `contract_name` to owning team.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:contracts\"\n| timechart span=1h sum(permit_count) as permit sum(deny_count) as deny by contract_name\n| eval miss_ratio=round(100*deny/(deny+permit+0.001),2)\n```\n\nUnderstanding this SPL\n\n**ACI Contract Hit/Miss Ratio Analysis** — Complements raw contract hits (UC-18.1.4) with **permit vs deny/miss** ratios over time to catch mis-tuned filters and unexpected drops before workloads fail.\n\nDocumented **Data sources**: `sourcetype=cisco:aci:contracts` or APIC API contract stats. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC contract statistics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:contracts. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:contracts\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by contract_name** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **miss_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (permit vs deny), Single value (miss ratio %), Table (worst contracts).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC, Nexus 9000 (ACI mode)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We tally how often each tenant agreement welcomes traffic versus quietly refuses it—like a doorkeeper's scorecard—so quiet misconfigurations surface as shifting numbers before workloads stall.",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.10",
              "n": "ACI Endpoint Group (EPG) Health",
              "c": "high",
              "f": "intermediate",
              "v": "Aggregates fault and health indicators per EPG (endpoint count, contract violations, BD binding) for application-centric status.",
              "t": "`TA_cisco-ACI`",
              "d": "`cisco:aci:faults`, `cisco:aci:endpoint`, EPG MO",
              "q": "index=cisco_aci (sourcetype=\"cisco:aci:faults\" OR sourcetype=\"cisco:aci:endpoint\")\n| rex field=affected \"epg-(?<epg>[^/]+)\"\n| stats count(eval(severity IN (\"critical\",\"major\"))) as sev_count, dc(mac) as ep_count by tenant, epg\n| eval epg_health=if(sev_count>0 OR ep_count=0,\"Degraded\",\"OK\")\n| where epg_health!=\"OK\"\n| table tenant, epg, sev_count, ep_count",
              "m": "Normalize `affected` DN parsing to your naming. Enrich with APIC EPG API for expected EP counts. Alert on EPG with faults or zero endpoints when baseline >0.",
              "z": "Status table (EPG health), Heatmap (tenant × EPG), Single value (degraded EPG count).",
              "kfp": "Sanctioned application-retirement weekends deliberately detach workloads from fvCEp inventories—Splunk rolls show collapsing endpoint_count even though forwarding stays orderly until fvRsBd subnets free capacity for reuse. Deployment immediacy set to on-demand delays dataplane attachment until conversational flows anchor endpoints—idle VLAN segments temporarily report zero endpoints without outages during quiet nights. vzAny catch-all contracts aggregate deny pacing beneath blanket Distinguished Names—per-EPG actrl counters look artificially placid although lateral conversations still obey wide defaults wired elsewhere in the tenant graph. Tenant-scope YAML bulk-import spikes fvAEPg object cardinality while merges reconcile overlapping namespaces—timeline noise resembles churn instead of lateral workload shifts until namespaces stabilize post-commit. microSeg onboarding bursts replay overlapping MO snapshots—duplicate healthInst deltas briefly mimic instability until collectors deduplicate successive aaaRefresh pulls aligned with Splunk linebreaker tuning. Rotate correlation discipline: reconcile suspected anomalies against ServiceNow CHG narratives plus NDI site_health widgets prior to paging application bridges. Planned subnet reclaim phases temporarily orphan fvIp bindings until DHCP timers expire—Splunk ip_count deltas resemble malicious bursts despite benign housekeeping scripts clearing stale leases.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`.\n• Ensure the following data sources are available: `cisco:aci:faults`, `cisco:aci:endpoint`, EPG MO.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize `affected` DN parsing to your naming. Enrich with APIC EPG API for expected EP counts. Alert on EPG with faults or zero endpoints when baseline >0.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci (sourcetype=\"cisco:aci:faults\" OR sourcetype=\"cisco:aci:endpoint\")\n| rex field=affected \"epg-(?<epg>[^/]+)\"\n| stats count(eval(severity IN (\"critical\",\"major\"))) as sev_count, dc(mac) as ep_count by tenant, epg\n| eval epg_health=if(sev_count>0 OR ep_count=0,\"Degraded\",\"OK\")\n| where epg_health!=\"OK\"\n| table tenant, epg, sev_count, ep_count\n```\n\nUnderstanding this SPL\n\n**ACI Endpoint Group (EPG) Health** — Aggregates fault and health indicators per EPG (endpoint count, contract violations, BD binding) for application-centric status.\n\nDocumented **Data sources**: `cisco:aci:faults`, `cisco:aci:endpoint`, EPG MO. **App/TA** (typical add-on context): `TA_cisco-ACI`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:faults, cisco:aci:endpoint. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:faults\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by tenant, epg** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **epg_health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where epg_health!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ACI Endpoint Group (EPG) Health**): table tenant, epg, sev_count, ep_count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status table (EPG health), Heatmap (tenant × EPG), Single value (degraded EPG count).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC, ACI fabric",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We roll each application slice inside the fabric into one score—how many machines attach, whether policies blocked traffic, and whether subnets stayed wired—so owners see their piece instead of one giant fabric dial.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.11",
              "n": "ACI Fault Lifecycle Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks fault `lc` (lifecycle: raising, active, retaining, resolved) and time-to-clear — beyond raw fault counts (UC-18.1.2).",
              "t": "`TA_cisco-ACI`",
              "d": "`sourcetype=cisco:aci:faults`",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:faults\"\n| eval cleared=if(match(lower(lc),\"(?i)resolved|retaining\"),1,0)\n| stats earliest(_time) as first_seen latest(_time) as last_seen max(cleared) as ever_cleared by code, dn\n| eval duration_hrs=round((last_seen-first_seen)/3600,2)\n| where duration_hrs>24 AND ever_cleared=0\n| table code, dn, duration_hrs, first_seen",
              "m": "Map `lc` per APIC version. Join clear events if streamed separately. Report MTTR for critical faults.",
              "z": "Table (long-lived faults), Bar chart (avg clear time by code), Timeline.",
              "kfp": "**lc_poll_blind_spot** — Sub-minute soaking churn or rapid raised-clearing oscillations between REST polls can hide transitions that syslog shows, so Splunk may under-count dwell time unless you shorten rest_ta intervals or blend cisco:aci:syslog narratives for the same code and dn. Pair shortened polls with Splunk-side max(_time) freshness checks so silent collectors do not masquerade as healthy fabrics.\n\n**retention_timer_rollover** — faultRecord rows age out per fvFaultLifecycleP while faultInst snapshots still look hot; analysts treat short historical silence as clearance when the MO simply aged out of the history class pull. Cross-check moquery history against Splunk retention before closing executive summaries.\n\n**delegate_double_count** — Summing parent faultDelegate rows with child faultInst volumes double-charges severity during storms; segment dashboards by MO class before interpreting storm heuristics. Teach operators that delegation bubbles describe blast summaries, not independent defects.\n\n**ack_automation_skew** — Service accounts push acknowledgement fields in aaaModLR trails, inflating human MTTA stories unless scripted identities are excluded via lookups. Maintain a service-account suppression table reviewed quarterly.\n\n**fault_storm_maintenance** — APIC maintenance windows and simultaneous line-card reloads legitimately raise dozens of raised faults within minutes; correlate _time with change tickets before paging on storm logic alone. Encode maintenance identifiers into lookup tables that temporarily raise suppression thresholds.\n\n**never_cleared_stale_snapshot** — A stalled modular input replays the same JSON payload; lifecycle stays raised in Splunk even when APIC cleared hours ago until collectors refresh—watch for frozen lastTransition timestamps across sequential polls. Alert when _time gaps exceed twice the configured poll interval without credential errors.\n\n**raised_clearing_flap** — Interfaces bouncing during optics work can park faults in raised-clearing and retaining, producing short mttr_min readings while occur climbs—pair clearance math with occur counters and physical validation. Document optics work orders beside Splunk timelines to defuse false closure narratives.\n\n**change_set_echo** — Automation repeatedly posts benign changeSet deltas that momentarily touch lc without true clearance, creating MTTR noise until you filter scripted subjects through an allowlist. Require human-linked tickets for closure when changeSet subjects match automation patterns.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`.\n• Ensure the following data sources are available: `sourcetype=cisco:aci:faults`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `lc` per APIC version. Join clear events if streamed separately. Report MTTR for critical faults.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:faults\"\n| eval cleared=if(match(lower(lc),\"(?i)resolved|retaining\"),1,0)\n| stats earliest(_time) as first_seen latest(_time) as last_seen max(cleared) as ever_cleared by code, dn\n| eval duration_hrs=round((last_seen-first_seen)/3600,2)\n| where duration_hrs>24 AND ever_cleared=0\n| table code, dn, duration_hrs, first_seen\n```\n\nUnderstanding this SPL\n\n**ACI Fault Lifecycle Tracking** — Tracks fault `lc` (lifecycle: raising, active, retaining, resolved) and time-to-clear — beyond raw fault counts (UC-18.1.2).\n\nDocumented **Data sources**: `sourcetype=cisco:aci:faults`. **App/TA** (typical add-on context): `TA_cisco-ACI`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:faults. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:faults\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cleared** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by code, dn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **duration_hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where duration_hrs>24 AND ever_cleared=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ACI Fault Lifecycle Tracking**): table code, dn, duration_hrs, first_seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (long-lived faults), Bar chart (avg clear time by code), Timeline.",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We stitch each trouble item's stages from first flicker through clear-out, so crews spot tickets frozen halfway or never finishing—not just how many warnings flash at once.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.12",
              "n": "Fabric Node Decommission Events",
              "c": "medium",
              "f": "beginner",
              "v": "Audits leaf/spine/APIC removal or disable operations for change control and capacity reconciliation.",
              "t": "`TA_cisco-ACI`, APIC audit/syslog",
              "d": "`cisco:aci:audit`, APIC syslog",
              "q": "index=cisco_aci (sourcetype=\"cisco:aci:audit\" OR sourcetype=\"cisco:aci:system\")\n| search decommission OR \"node-remove\" OR \"unregister\" OR \"fabricDecommission\"\n| table _time, user, affected, descr\n| sort -_time",
              "m": "Tune search terms to APIC messages when nodes are drained from fabric. Correlate with maintenance windows.",
              "z": "Table (decommission events), Timeline, Single value (events / month).",
              "kfp": "**fabric_st_snapshot_lag** — REST polls arriving every few minutes smear brief active→inactive transitions, so Splunk can miss the exact minute an operator finished Decommission unless you tighten **rest_ta** intervals or blend **cisco:aci:syslog** for the same **node_dn**. Pair `_time` deltas on **cisco:aci:fabric_node** with collectors' poll metadata so quiet inputs do not masquerade as healthy fabrics.\n\n**identity_policy_prune_echo** — **fabricNodeIdentP** rows disappear during housekeeping even when the physical switch merely lost reachability, mimicking retirement in inventory-only dashboards. Cross-check **serial** and **topSystem** uptime before you call a removal unauthorized.\n\n**syslog_narrative_only** — Standalone **FABRICNODE-5-DECOMMISSION** strings without matching **aaaModLR** or **fabricNode** state flips often come from log fan-out duplicates or buffered exports during APIC failovers. Require at least two transports or a confirmed **fabricSt** sink before paging.\n\n**maintenance_window_legitimate_drain** — Approved changes intentionally move **fabricSt** through inactive while **BGP/IS-IS** withdraw; **unapproved_shape** should reference CMDB windows instead of firing on every drain. Maintain lookups of maintenance identifiers and suppress spine/APIC rows inside those windows.\n\n**health_score_dip_unrelated** — **cisco:aci:health** composites dip for software upgrades or telemetry gaps unrelated to member loss; do not treat every score knock as capacity loss without corroborating **cisco:aci:fabric_node** counts.\n\n**parallel_collector_dn_skew** — Dual **rest_ta** inputs pointed at different APIC VIPs may emit marginally different **dn** strings for the same **serial**, splitting stats into duplicate **node_key** rows until you normalize **pod** paths. Deduplicate on **serial** plus **model** when IDs collide.\n\n**ticket_token_regex_miss** — ITSM exports embed ticket numbers in free text fields your Splunk parser ignores, marking good work as uncorrelated. Refresh regex and FIELDALIAS quarterly whenever the ticketing team changes prefixes.\n\n**inv_transition_poll_blur** — Lab fabrics rapid-toggle **undiscovered** during LLDP learning; excluding lab **pod** prefixes avoids kiddie-pool noise in production alerting.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC audit/syslog.\n• Ensure the following data sources are available: `cisco:aci:audit`, APIC syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune search terms to APIC messages when nodes are drained from fabric. Correlate with maintenance windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci (sourcetype=\"cisco:aci:audit\" OR sourcetype=\"cisco:aci:system\")\n| search decommission OR \"node-remove\" OR \"unregister\" OR \"fabricDecommission\"\n| table _time, user, affected, descr\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Fabric Node Decommission Events** — Audits leaf/spine/APIC removal or disable operations for change control and capacity reconciliation.\n\nDocumented **Data sources**: `cisco:aci:audit`, APIC syslog. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC audit/syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:audit, cisco:aci:system. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Fabric Node Decommission Events**): table _time, user, affected, descr\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (decommission events), Timeline, Single value (events / month).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC, Nexus 9000",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We surface when a rack switch or brain controller leaves active fabric duty for idle or retired—or vanishes without a maintenance slip—so spare routing paths and honest outage lines stay believable.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.13",
              "n": "Bridge Domain Subnet Utilization",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks IP usage vs subnet size per BD to prevent exhaustion of gateway pools and VM mobility issues.",
              "t": "`TA_cisco-ACI`, APIC BD API",
              "d": "`cisco:aci:bd_stats` or scripted API",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:bd_stats\"\n| eval used_pct=round(100*ip_in_use/total_ips,1)\n| where used_pct > 85\n| table tenant, bd, subnet, used_pct, ip_in_use, total_ips\n| sort -used_pct",
              "m": "Ingest BD statistics from periodic API poll (`fvBD` subnets vs endpoint counts). Alert at 85%/95% thresholds.",
              "z": "Table (full BDs), Bar chart (used % by BD), Gauge (worst BD).",
              "kfp": "External **dhcp_lease_pool** sizing frequently diverges from **fvCEp** visibility because dormant leases and reservations stay on DHCP servers while APIC tallies only live attachments; Splunk marks **dhcp_visibility_gap** accordingly so dashboards never imply abundant space when **dhcp_reserved_pct** already maxed.\n\nVMware mobility bursts during DRS draining can spike **peak_pct** beyond ninety-five percent for only a few polling grains; require sustained elevation on **roll_peak_pct** before paging pure exhaustion bridges.\n\nAdding sibling **fvSubnet** rows mid-change widens **total_ips** instantly so **used_pct** plunges until workloads refill; a **subnet_expansion_window** looks like telemetry drift unless CAB tickets cite new prefixes.\n\nStaged DHCP reservations that park unused IPs off fabric inflate pools server-side yet lack matching **fvCEp** rows; capacity squads chase phantom slack until authoritative DHCP feeds reconcile counts.\n\nFinally, **endpoint_aging_residual** timers retain departed endpoints briefly so **ip_in_use** overshoots hypervisor inventories, and **scope_public_leak_count** ambiguity doubles occupancy math whenever leaked routes overlap advertised subnets. Pair reviewer judgement with DHCP authoritative exports whenever **dhcp_reserved_pct** disagrees with Splunk runway math.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC BD API.\n• Ensure the following data sources are available: `cisco:aci:bd_stats` or scripted API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest BD statistics from periodic API poll (`fvBD` subnets vs endpoint counts). Alert at 85%/95% thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:bd_stats\"\n| eval used_pct=round(100*ip_in_use/total_ips,1)\n| where used_pct > 85\n| table tenant, bd, subnet, used_pct, ip_in_use, total_ips\n| sort -used_pct\n```\n\nUnderstanding this SPL\n\n**Bridge Domain Subnet Utilization** — Tracks IP usage vs subnet size per BD to prevent exhaustion of gateway pools and VM mobility issues.\n\nDocumented **Data sources**: `cisco:aci:bd_stats` or scripted API. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC BD API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:bd_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:bd_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct > 85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Bridge Domain Subnet Utilization**): table tenant, bd, subnet, used_pct, ip_in_use, total_ips\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (full BDs), Bar chart (used % by BD), Gauge (worst BD).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Cisco ACI",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch each subnet shelf inside your fabric like labeled parking garages; we tally spaces versus parked machines so planners hear warnings before new workloads lose room to attach cleanly.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.14",
              "n": "L3Out Prefix Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Monitors advertised/ learned prefixes on L3Outs (BGP/OSPF) for flapping, withdrawal storms, and unexpected route loss.",
              "t": "`TA_cisco-ACI`, APIC L3ExtInstP events, `TA-cisco_ios` (external routers)",
              "d": "APIC syslog, `cisco:aci:bgp` or route telemetry",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:bgp\" earliest=-24h\n| where match(lower(message),\"(?i)withdraw|flap|prefix|l3out\")\n| stats count by l3out_name, peer, prefix\n| where count>20\n| sort -count",
              "m": "Map peer and L3Out from your TA. Correlate with northbound link monitoring. Alert on withdrawal burst.",
              "z": "Table (noisy prefixes), Line chart (events / hour), Timeline.",
              "kfp": "**mpls_pe_maintenance_window** — Scheduled MPLS provider-edge reboots intentionally recycle **bgpPeerEntry** states through **Active** toward **Established** while upstream optics heal; Splunk velocity resembles outages yet dataplane forwarding often resumes behind aggregates once convergence completes—suppress rows tied to CAB references naming PE/router scopes rather than paging pure BGP choreography noise.\n\n**bgp_aggregation_event** — Internet carriers occasionally consolidate many specifics behind wider aggregates during tariff reshuffles; Splunk learns fewer discrete prefixes while traceroutes remain stable via summarized coverage—require numeric correlation proving dataplane impact before blaming **l3extInstP** edits alone.\n\n**tcp_md5_rotation** — Operators rotate BGP TCP MD5 secrets during credential hygiene; neighbors bounce briefly yet resume **Established** quickly—stamp automation transcripts beside Splunk timelines before Sev-2 escalation.\n\n**dci_link_maintenance_window** — Dark-fibre crews patch interconnect strands forcing redundant border-leaf pairs through graceful BGP resets—pair Splunk bursts with facility tickets referencing strand IDs rather than silent fabric regressions.\n\n**import_security_review_window** — Security engineers temporarily reshape **import-security** tuples while auditing external routes—Splunk may flag subnet removals matching maintenance narratives yet intentional pauses must never masquerade as silent dataplane sabotage absent corroborating ticket lanes.\n\n**ldp_session_reset** — MPLS label-distribution churn can ripple label stacks without collapsing adjacency narratives entirely—BGP next hops may jitter briefly though **Established** strings persist—tie positives to upstream MPLS trouble queues referencing **ldp_session_reset** stamps before rewriting **route-map** intent.\n\n**ce_route_reflector_maintenance** — Customer-edge clusters migrating route-reflector anchoring shift best-path advertisements—Splunk observes withdrawal-like churn reflecting topology churn rather than wholesale prefix loss—cross-check BGP additive-path telemetry plus WAN diagnostics.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC L3ExtInstP events, `TA-cisco_ios` (external routers).\n• Ensure the following data sources are available: APIC syslog, `cisco:aci:bgp` or route telemetry.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap peer and L3Out from your TA. Correlate with northbound link monitoring. Alert on withdrawal burst.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:bgp\" earliest=-24h\n| where match(lower(message),\"(?i)withdraw|flap|prefix|l3out\")\n| stats count by l3out_name, peer, prefix\n| where count>20\n| sort -count\n```\n\nUnderstanding this SPL\n\n**L3Out Prefix Monitoring** — Monitors advertised/ learned prefixes on L3Outs (BGP/OSPF) for flapping, withdrawal storms, and unexpected route loss.\n\nDocumented **Data sources**: APIC syslog, `cisco:aci:bgp` or route telemetry. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC L3ExtInstP events, `TA-cisco_ios` (external routers). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:bgp. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:bgp\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(message),\"(?i)withdraw|flap|prefix|l3out\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by l3out_name, peer, prefix** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (noisy prefixes), Line chart (events / hour), Timeline.",
              "script": "",
              "premium": "",
              "hw": "Cisco ACI, external routers",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch the outward-facing routes your building publishes toward carriers and remote offices—the paths packets take when traffic leaves campus—so upstream caretaker churn cannot masquerade as a silent outage for everyday apps.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci",
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                },
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.15",
              "n": "APIC Policy CAM Utilization",
              "c": "high",
              "f": "advanced",
              "v": "Tracks TCAM/CAM-style resource use for contracts and security policies on leaf nodes — exhaustion causes policy install failures.",
              "t": "`TA_cisco-ACI`, leaf diagnostics",
              "d": "`cisco:aci:policy_resource`, CLI snapshot via scripted input",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:policy_resource\"\n| where resource_type=\"policy_cam\" OR match(lower(metric_name),\"(?i)tcam|cam\")\n| eval used_pct=round(100*used/total,1)\n| where used_pct>80\n| table node_id, used_pct, used, total\n| sort -used_pct",
              "m": "Field names vary by NX-OS/ACI release; use vendor doc for exact OID/API. Alert before hardware policy scale limits.",
              "z": "Bar chart (CAM % by leaf), Table (top nodes), Line chart (trend).",
              "kfp": "• **polusage_dual_feed_overlap** — Two REST collectors hit the same **APIC** cluster with different **host** headers but land in one **index**, doubling **eqptcapacityPolUsage5min** rows so **pol_pct_now** reads high. Distinguish by **collector_id** in **host** or **inputs.conf** stanza names; deduplicate with `| dedup _time dn` or pause the stale input.\n\n• **inventory_sku_stale_after_rma** — **CMDB** still lists **9364C** after an **RMA** replaced a leaf with **93180YC-FX**, so static ceiling math disagrees with **polUsageCap**. Distinguish with serial-level refresh; apply `lookup aci_leaf_tcam_ceiling.csv` only when **last_seen** is inside **24h**.\n\n• **apic_leader_stats_pause** — During **APIC** **HA** leader moves, capacity samples pause while faults stay quiet, producing a flat **pol_pct** staircase. Distinguish with **cisco:aci:health** drops tied to **apicHaRole** churn; time-bound suppress **10m** after leadership settles using **inputlookup aci_apic_ha_events.csv** where **suppress=1**.\n\n• **image_tcam_repartition_wave** — Some **ACI** releases rebalance **ingress** and **egress** **policy** regions after reload, so **polUsageCum** steps without new **vzBrCP** churn. Distinguish when every leaf in a **pod** spikes the same hour with one **change** ticket; honor **inputlookup aci_image_maint.csv** keyed by **node_key**.\n\n• **span_redirect_borrow** — **SPAN**, **ERSPAN**, or **service-graph redirect** carve **TCAM** from the shared **policy** bank contract views ignore. Distinguish by auditing **monfv** sessions in **APIC**; tag rows **span_context=1** and add `where span_context=0` for paging tied to contract growth.\n\n• **capacity_lane_missing** — **actrlRule** feeds exist without **`cisco:aci:capacity`**, so operators see rule counts but no **pol_pct**. Treat as a **data-quality** gap: page the **platform** team via a **scheduled** conformance search, not fabric engineering.\n\n• **f1585_warmup_shadow** — **F1585** may flash during **line-card** startup while **F0952** stays zero and **policy-mgr** counters recover. Distinguish when **fault** **lifecycle** clears inside two polls and **CLI** **policy-stats** is healthy; filter `where fault_duration_seconds>300` after **raised/cleared** math.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, leaf diagnostics.\n• Ensure the following data sources are available: `cisco:aci:policy_resource`, CLI snapshot via scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nField names vary by NX-OS/ACI release; use vendor doc for exact OID/API. Alert before hardware policy scale limits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:policy_resource\"\n| where resource_type=\"policy_cam\" OR match(lower(metric_name),\"(?i)tcam|cam\")\n| eval used_pct=round(100*used/total,1)\n| where used_pct>80\n| table node_id, used_pct, used, total\n| sort -used_pct\n```\n\nUnderstanding this SPL\n\n**APIC Policy CAM Utilization** — Tracks TCAM/CAM-style resource use for contracts and security policies on leaf nodes — exhaustion causes policy install failures.\n\nDocumented **Data sources**: `cisco:aci:policy_resource`, CLI snapshot via scripted input. **App/TA** (typical add-on context): `TA_cisco-ACI`, leaf diagnostics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:policy_resource. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:policy_resource\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where resource_type=\"policy_cam\" OR match(lower(metric_name),\"(?i)tcam|cam\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct>80` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **APIC Policy CAM Utilization**): table node_id, used_pct, used, total\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (CAM % by leaf), Table (top nodes), Line chart (trend).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Nexus 9300/9500 ACI leafs",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We map how much spare room each rack switch still keeps for its traffic-rule notebook—when pages run low, new rules may not stick—so your crew gets a plain warning before problems masquerade as vague application timeouts.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.16",
              "n": "ACI Tenant Configuration Compliance Audit",
              "c": "medium",
              "f": "intermediate",
              "v": "Checks tenants for required objects (vzAny restrictions, monitoring policies, SNMP/Syslog) — extends change audit (UC-18.1.5) with **policy completeness** scoring.",
              "t": "`TA_cisco-ACI`",
              "d": "APIC config export or `cisco:aci:audit`",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:tenant_summary\"\n| eval has_mon=isnotnull(mon_policy)\n| eval has_snmp=isnotnull(snmp_group)\n| where has_mon=0 OR has_snmp=0\n| table tenant, has_mon, has_snmp",
              "m": "Build `tenant_summary` from scheduled API pulls (`fvTenant` + children). Adjust required attributes to your standards.",
              "z": "Table (non-compliant tenants), Pie chart (compliance %), Bar chart (missing controls).",
              "kfp": "**greenfield_tenant_shells** — Automation provisions **`fvTenant`** placeholders hours before **`fvAp`**/**`fvBD`** arrive; **`ap_ok`** and **`bd_ok`** lag legitimately. Require **`lookup tenant_greenfield`**, **`CHG`**, or **`where _time`** gates that ignore shells younger than **SLA** hours.\n\n**infrastructure_shared_tenants** — Shared services often waive per-tenant **SNMP** destinations because traps originate from **APIC** itself—encode **`allowed_gap_mask`** columns inside **`tenant_compliance_exceptions`** so **`snmp_ok`**/**`syslog_ok`** checks disable without mutating global SPL.\n\n**l2_only_vrf_design** — Some **`fvCtx`** rows deliberately keep **`unicastRouting=no`** for pure bridging; **`vrf_ok`** fires falsely unless **`l2_only_flag=1`** travels from **CMDB** via **`lookup`**. Pair **`vrf_intent.csv`** refreshed nightly.\n\n**mso_ndo_orchestration_lag** — **Multi-site** templates may delay **monEPGPol** attachment until after **`schema`** deploy completes—short **false positives** occur during **template push** windows. Correlate **`cisco:aci:audit`** **txId** bursts and widen alert suppression using **`mso_push_windows`** lookup keyed by **`fabric_name`**.\n\n**parser_upgrade_field_rename** — **`monEPGPol_dn`** might rename to **`monEPGPolDN`** after **TA** releases, zeroing **`mon_ok`** for every tenant until **`props.conf`** **`FIELDALIAS`** refresh lands. Maintain regression tests comparing **APIC** `moquery` output to Splunk **`fieldsummary`** after upgrades.\n\n**duplicate_collector_replay** — Two modular inputs ingesting the same **`fvTenant`** payload double counts **`missing`** if dedupe fails—dedupe with **`| dedup sha256(_raw)`** macro or sourcetype-specific **`collector_id`** prior to scoring.\n\n**drill_lab_fabric_bypass** — Naming regex may reject **`tn-testbed01`** style tenants—extend **`match`** OR clauses or maintain **`lab_tenants.csv`** bypass list via **`inputlookup`**. Document each bypass quarterly.\n\n**snapshot_only_export_gap** — Overnight **`configJob`** JSON may omit optional **`rsp-subtree`** sections while online **`GET /api/class/fvTenant.json?rsp-subtree=full`** shows full trees—prefer live pulls for authoritative completeness or tag snapshot rows with **`export_kind=snapshot`** and **`where export_kind!=snapshot`** for alerting.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`.\n• Ensure the following data sources are available: APIC config export or `cisco:aci:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild `tenant_summary` from scheduled API pulls (`fvTenant` + children). Adjust required attributes to your standards.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:tenant_summary\"\n| eval has_mon=isnotnull(mon_policy)\n| eval has_snmp=isnotnull(snmp_group)\n| where has_mon=0 OR has_snmp=0\n| table tenant, has_mon, has_snmp\n```\n\nUnderstanding this SPL\n\n**ACI Tenant Configuration Compliance Audit** — Checks tenants for required objects (vzAny restrictions, monitoring policies, SNMP/Syslog) — extends change audit (UC-18.1.5) with **policy completeness** scoring.\n\nDocumented **Data sources**: APIC config export or `cisco:aci:audit`. **App/TA** (typical add-on context): `TA_cisco-ACI`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:tenant_summary. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:tenant_summary\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **has_mon** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **has_snmp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where has_mon=0 OR has_snmp=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ACI Tenant Configuration Compliance Audit**): table tenant, has_mon, has_snmp\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant tenants), Pie chart (compliance %), Bar chart (missing controls).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We line up each tenant against the standard checklist of required fabric pieces—monitoring hooks, trap routes, and naming rules—so empty rows mean blind spots even when recent edits look quiet.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.17",
              "n": "ACI Multisite Health",
              "c": "critical",
              "f": "advanced",
              "v": "Monitors inter-site control-plane sync, spine proxy, and state for Cisco ACI Multi-Site / Multi-Pod deployments.",
              "t": "`TA_cisco-ACI`, MSO/APIC cross-site events",
              "d": "`cisco:aci:multisite`, APIC syslog",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:multisite\" earliest=-24h\n| where match(lower(status),\"(?i)out.of.sync|isolated|failed|degraded\")\n| stats count by site_name, peer_site, component\n| sort -count",
              "m": "Ingest MSO/NDO or per-APIC multisite diagnostics. Alert on any site not `in-sync`. Runbook for partition scenarios.",
              "z": "Status grid (site × peer), Table (active issues), Single value (sites degraded).",
              "kfp": "**isn_wan_maintenance_window** — Carriers groom long-haul rings and satellite hops during published **WAN** slices; round-trip spikes push **`time_to_sync_seconds`** upward while **ISN** routes still reconverge lawfully—treat as noise when **Change** tickets map to **CAB** WAN work, not as orchestrator regression.\n\n**site_onboarding_window** — Fresh fabrics register for hours inside **Provisioning** while **APIC** clusters finish **RMA** swaps, greenfield VLAN handoffs, or **QoS** profiles; **NDO** inventory lags benignly until **site_id** settles—suppress using **`site_commission_tag`** lookups.\n\n**cloud_apic_quarterly_upgrade** — **Cloud APIC** SaaS controllers restart on vendor cadence; **`cisco:capic:health`** gaps can appear while on-prem **`cisco:mso:audit`** continues—do not conflate SaaS silence with stretched **VRF** corruption.\n\n**ipn_router_firmware_reload** — **IPN** **Nexus** 7000/9000 or third-party **P**/**PE** routers reload for **EOS** mitigation; **Multi-Site Loopback** reachability dips until **OSPF**/**BGP-IPv4** converges—pair with **`ipn_chg_id`** stamps.\n\n**ndo_cluster_upgrade** — **Nexus Dashboard** clusters hosting **NDO** restart for feature unlocks—**outcome**=`queued` is ordinary while services return; silence paging unless **`health`** endpoints fail beyond soak timers.\n\n**cab_mass_schema_rollout** — Enterprise pushes dozens of templates concurrently—**partial-deploy** lingers briefly while APIC queues drain—watch acceleration rather than instantaneous completion.\n\n**dr_failover_exercise_isolation** — Tabletop drills deliberately isolate regions so **`Critical Comm Failure`** alarms exercise pager bridges—tie to **`scheduled_dr_test`** markers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, MSO/APIC cross-site events.\n• Ensure the following data sources are available: `cisco:aci:multisite`, APIC syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest MSO/NDO or per-APIC multisite diagnostics. Alert on any site not `in-sync`. Runbook for partition scenarios.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:multisite\" earliest=-24h\n| where match(lower(status),\"(?i)out.of.sync|isolated|failed|degraded\")\n| stats count by site_name, peer_site, component\n| sort -count\n```\n\nUnderstanding this SPL\n\n**ACI Multisite Health** — Monitors inter-site control-plane sync, spine proxy, and state for Cisco ACI Multi-Site / Multi-Pod deployments.\n\nDocumented **Data sources**: `cisco:aci:multisite`, APIC syslog. **App/TA** (typical add-on context): `TA_cisco-ACI`, MSO/APIC cross-site events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:multisite. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:multisite\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(status),\"(?i)out.of.sync|isolated|failed|degraded\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by site_name, peer_site, component** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (site × peer), Table (active issues), Single value (sites degraded).",
              "script": "",
              "premium": "",
              "hw": "APIC, NDO/MSO (if used)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch whether every campus still follows the same master playbook coming from the orchestrator, because when one site keeps marching while another waits, stretched apps can behave differently without screaming as a normal single-site outage.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.18",
              "n": "APIC Cluster Replication Latency",
              "c": "high",
              "f": "advanced",
              "v": "Complements UC-18.1.7 with **database replication delay** and inter-APIC consensus metrics for split-brain prevention.",
              "t": "`TA_cisco-ACI`",
              "d": "APIC `avictrl` / cluster diagnostics API",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:cluster_diag\" earliest=-24h\n| where repl_delay_ms>500 OR match(lower(message),\"(?i)split|partition|lag\")\n| table _time, apic_id, repl_delay_ms, message\n| sort -_time",
              "m": "Map fields from your APIC release; some metrics require Cisco DC Networking App. Alert on sustained replication lag.",
              "z": "Line chart (repl delay), Table (alerts), Single value (max lag ms).",
              "kfp": "**repl_poll_cadence_smoothing** — When HTTPS collectors poll dbgrReplStatus slower than fault narratives, Splunk can miss sub-minute spikes Visore already shows—tighten rest_ta intervals or keep cisco:aci:syslog blended before assuming replicas are healthy.\n\n**aci_full_policy_import_window** — Bulk imports or contract compiles legitimately stretch replication queues while APIC stays sound—tie composite_peak timelines to approved CHG or Ansible job IDs and suppress until pending_peak clears without state_sev escalation.\n\n**ha_leader_transition_buffer** — Leader moves and standby promotions reshape dbgr views for minutes even when sessions converge—require fault codes or mixed sourcetype_lane evidence before sev-1 pages during maintenance_aci.csv windows.\n\n**vmfs_metadata_storm** — ESXi snapshot merges or datastore rescans stall VM-APIC disks, inflating repl_delay_ms without leaf symptoms—check hypervisor calendars and infraWiNode oper transitions before blaming switching.\n\n**mirrored_rest_collector_duplication** — Two rest_ta instances on the same VIP double pending tallies—partition on collector_tag and dedup dn_lc per poll when collector_tags repeats across bins.\n\n**nms_wan_latency_to_passive_site** — Stretched or DR-separated APIC peers over high RTT WAN links can show higher delays on secondary sites—add site_role=primary|dr to aci_controller_inventory and maintain separate baselines instead of one global millisecond gate.\n\n**snmp_trap_only_without_mo_class** — Trap-only noise may mention replication without dbgrReplStatus rows—treat narrative_hits as advisory until structured cisco:aci:replication returns, routing tickets to parser owners instead of fabric bridge calls during long overnight maintenance windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`.\n• Ensure the following data sources are available: APIC `avictrl` / cluster diagnostics API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap fields from your APIC release; some metrics require Cisco DC Networking App. Alert on sustained replication lag.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:cluster_diag\" earliest=-24h\n| where repl_delay_ms>500 OR match(lower(message),\"(?i)split|partition|lag\")\n| table _time, apic_id, repl_delay_ms, message\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**APIC Cluster Replication Latency** — Complements UC-18.1.7 with **database replication delay** and inter-APIC consensus metrics for split-brain prevention.\n\nDocumented **Data sources**: APIC `avictrl` / cluster diagnostics API. **App/TA** (typical add-on context): `TA_cisco-ACI`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:cluster_diag. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:cluster_diag\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where repl_delay_ms>500 OR match(lower(message),\"(?i)split|partition|lag\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **APIC Cluster Replication Latency**): table _time, apic_id, repl_delay_ms, message\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (repl delay), Table (alerts), Single value (max lag ms).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC cluster",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We clock how far behind each policy controller falls when copying its shared rulebook, and we ring the bell early if one drifts long enough that everyday pushes could trust stale settings.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.19",
              "n": "ACI Fault Domain Severity Rollup",
              "c": "high",
              "f": "intermediate",
              "v": "Fault domains group infrastructure and policy failures by functional area (for example connectivity, configuration, or capacity). Rolling up open faults by domain shows where the fabric is structurally weak, helps prioritize remediation before east-west traffic degrades, and shortens war-room triage during incidents.",
              "t": "`TA_cisco-ACI`, Cisco DC Networking Application (Splunkbase 7777)",
              "d": "`index=cisco_aci` `sourcetype=\"cisco:aci:faults\"` with fields `fault_domain`, `severity`, `code`, `dn`, `lc`",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:faults\" earliest=-24h\n| where severity IN (\"critical\",\"major\") AND (isnull(lc) OR NOT match(lower(lc),\"(?i)resolved|cleared\"))\n| stats dc(dn) as affected_objects, values(code) as codes by fault_domain, severity\n| sort fault_domain, -affected_objects\n| table fault_domain, severity, affected_objects, codes",
              "m": "(1) Map `fault_domain` from APIC fault MO or TA extraction; if missing, derive from `dn` prefix via `rex`. (2) Schedule hourly and alert when any domain exceeds baseline affected object count. (3) Correlate spikes with change windows and interface faults (UC-18.1.6).",
              "z": "Stacked bar chart (faults by domain × severity), Table (top domains), Single value (open major+critical count).",
              "kfp": "**fault_domain_rollup_delegate_bias** — Temporarily toggling **delegate** visibility or duplicating **faultDelegate** pulls alongside **faultInst** can make **parent** domains look heavier than **child** MOs until **Splunk** filters align—require **`delegated=no`** gating (as in the primary SPL) and verify **Visore** shows the same **MO** cardinality before paging.\n\n**visore_cleared_apic_lag** — **APIC** **GUI** may mark **cleared** milliseconds before **`lc`** columns land in **REST** rows, so **Splunk** still counts **open** faults briefly—blend **`cisco:aci:syslog`** **CLEAR** mnemonics or delay **alerts** **90s** after **CHG** completion unless **codes** remain **raised** in **both** transports.\n\n**maintenance_flood_domain_inflation** — **ISSU**, **LLPG** rebuilds, or **full** **policy** **repushes** legitimately concentrate **minor**/**warning** defects in **infra**/**fabric** lanes—tie **Splunk** timelines to **`cisco:aci:audit`** **session** IDs and **`aci_chg_calendar`** lookups before treating spikes as regressions.\n\n**dn_prefix_misclassification** — Custom **naming** that flattens **tenant** paths into **l3extOut** DNs can push **tenant-looking** defects into **fabric** buckets—maintain a **`lookup aci_dn_domain_overrides.csv`** keyed by **`regex`** and **`forced_domain`** when automation strips **`/tn-`** markers.\n\n**fault_summary_poll_coarsening** — **`faultSummary.json`** aggregates faster than per-**MO** pulls but hides **delegation** nuances—if dashboards mix **summary** and **inst** without **`dedup`**, **access**/**fabric** ratios stretch—standardize on one authoritative stream per panel.\n\n**multi_apic_duplicate_fault_rows** — Dual **collectors** hitting the same **APIC** with different **host** headers can double **event** volume—tag **`inputs.conf`** **stanzas** uniquely and **`dedup _time dn_k code_u`** when migrating inputs.\n\n**lifecycle_transient_minor_churn** — **Soaking** timers elevate **minor** codes that **clear** automatically after **TCAM** compilation—use **15m** rolling **average** gates for **minor**/**warning** alerts while keeping **hard** thresholds for **major**/**critical**.\n\n**tenant_config_echo_only_faults** — **fvTenant** snapshot gaps sometimes surface “fake” emptiness: **tenant** domain looks quiet while **syslog** screams—never silence **tenant** lanes using only **`cisco:aci:tenant_config`** freshness; require **`cisco:aci:fault`** corroboration.\n\n**health_snapshot_domain_mismatch** — **Health** scores can stay **green** while **policy** defects accumulate in **tenant**/**infra** domains—treat **`cisco:aci:health`** as **telemetry**, not proof of **fault** absence when **`health_feed_alive=1` but `affected_mos` high`**. Pair with **code** list drilldowns.",
              "refs": "[Cisco DC Networking Application](https://splunkbase.splunk.com/app/7777)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, Cisco DC Networking Application (Splunkbase 7777).\n• Ensure the following data sources are available: `index=cisco_aci` `sourcetype=\"cisco:aci:faults\"` with fields `fault_domain`, `severity`, `code`, `dn`, `lc`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map `fault_domain` from APIC fault MO or TA extraction; if missing, derive from `dn` prefix via `rex`. (2) Schedule hourly and alert when any domain exceeds baseline affected object count. (3) Correlate spikes with change windows and interface faults (UC-18.1.6).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:faults\" earliest=-24h\n| where severity IN (\"critical\",\"major\") AND (isnull(lc) OR NOT match(lower(lc),\"(?i)resolved|cleared\"))\n| stats dc(dn) as affected_objects, values(code) as codes by fault_domain, severity\n| sort fault_domain, -affected_objects\n| table fault_domain, severity, affected_objects, codes\n```\n\nUnderstanding this SPL\n\n**ACI Fault Domain Severity Rollup** — Fault domains group infrastructure and policy failures by functional area (for example connectivity, configuration, or capacity). Rolling up open faults by domain shows where the fabric is structurally weak, helps prioritize remediation before east-west traffic degrades, and shortens war-room triage during incidents.\n\nDocumented **Data sources**: `index=cisco_aci` `sourcetype=\"cisco:aci:faults\"` with fields `fault_domain`, `severity`, `code`, `dn`, `lc`. **App/TA** (typical add-on context): `TA_cisco-ACI`, Cisco DC Networking Application (Splunkbase 7777). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:faults. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:faults\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where severity IN (\"critical\",\"major\") AND (isnull(lc) OR NOT match(lower(lc),\"(?i)resolved|cleared\"))` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by fault_domain, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **ACI Fault Domain Severity Rollup**): table fault_domain, severity, affected_objects, codes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Stacked bar chart (faults by domain × severity), Table (top domains), Single value (open major+critical count).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC, Nexus 9300/9500 (ACI mode)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We sort live trouble tickets by whether they hit tenant spaces, shared wiring, fabric spines, rack ports, or the admin knobs—so the messiest corner gets help first without reading a novel of codes.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci",
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.20",
              "n": "Contract Violation and Implicit Deny Bursts",
              "c": "critical",
              "f": "intermediate",
              "v": "Sudden increases in implicit denies or contract violations usually mean a mis-deployed contract, a missing EPG binding, or an attack probing disallowed paths. Catching bursts early prevents application outages and avoids silent security gaps where traffic is dropped without operator visibility.",
              "t": "`TA_cisco-ACI`, APIC syslog",
              "d": "`index=cisco_aci` `sourcetype=\"cisco:aci:contracts\"` or `sourcetype=\"cisco:aci:syslog\"` with fields `src_epg`, `dst_epg`, `contract_name`, `deny_count`, `implicit_deny_count`",
              "q": "index=cisco_aci (sourcetype=\"cisco:aci:contracts\" OR sourcetype=\"cisco:aci:syslog\") earliest=-4h\n| bin _time span=5m\n| eval denies=coalesce(deny_count, implicit_deny_count, 0)\n| stats sum(denies) as deny_burst by _time, src_epg, dst_epg, contract_name\n| eventstats median(deny_burst) as med by src_epg, dst_epg\n| where deny_burst > med*10 AND deny_burst > 100\n| sort -deny_burst\n| table _time, src_epg, dst_epg, contract_name, deny_burst, med",
              "m": "(1) Normalize deny counters from contract stats or syslog patterns (`implicitDeny`, `vzBrCP`). (2) Tune multiplier and floor to fabric size. (3) Page on critical EPG pairs; attach last successful change ticket from audit (UC-18.1.5).",
              "z": "Timechart (deny burst timeline), Table (worst EPG pairs), Heatmap (src_epg × dst_epg).",
              "kfp": "**apic_stats_poll_jitter** — When REST collectors align with APIC revision timestamps, two adjacent polls can duplicate the same actrlRule MO while counters barely moved. Dedup on _time, dn_raw, and a hashed _raw fingerprint, or widen span=10m on busy fabrics until HTTP 429 storms settle without hiding real bursts.\n\n**contract_export_duplicate_dn** — Dual modular inputs hitting /api/class/actrlRule.json into one index with different host headers inflate implicit_slices without new leaf drops. Require unique inputs.conf stanza names, stamp collector_id, and dedup dn_raw per bucket so burst math tracks hardware truth.\n\n**implicit_default_aclr_ramp** — Fresh EPG bindings or vzAny toggles legitimately raise implicit deny visibility while permit actrlEntry rows compile. Pair spikes with cisco:aci:audit aaaModLR bursts and change tickets before paging the NOC during labeled maintenance.\n\n**syslog_regex_overmatch** — cisco:aci:syslog lines mentioning contract vocabulary during unrelated routing advisories can lift syslog_implicitish_lines while cisco:aci:actrl stays calm. Demand ratio_i and burst_raw corroboration from structured MO feeds before treating text as enforcement failure.\n\n**faultinst_delegate_double_count** — Parent faultInst rows stacked with children for the same EPG pair may duplicate open_aci_faults unless delegated filtering matches your fault standard operating procedure. Keep cleared lifecycle parity across REST and syslog transports.\n\n**l3out_contract_rebalance** — Border-leaf contract redeployments during L3Out migrations concentrate deny counters on specific pods. Validate per-node /api/node/mo/topology/pod-{podId}/node-{nodeId}/sys/actrl.json samples before assuming east-west leakage.\n\n**aaa_field_alias_drift** — Splunk_TA_cisco-aci upgrades sometimes rename aaaModLR fields (ind versus dn) so audit_policy_ops reads zero despite live edits. Maintain FIELDALIAS stanzas until release notes confirm parser stability.\n\n**ndi_template_replay_noise** — Replayed template publishes from Nexus Dashboard Orchestrator can briefly inflate audit_policy_ops and contract snapshots without equivalent actrl churn; require sustained implicit_slices across three buckets before executive escalation.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC syslog.\n• Ensure the following data sources are available: `index=cisco_aci` `sourcetype=\"cisco:aci:contracts\"` or `sourcetype=\"cisco:aci:syslog\"` with fields `src_epg`, `dst_epg`, `contract_name`, `deny_count`, `implicit_deny_count`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize deny counters from contract stats or syslog patterns (`implicitDeny`, `vzBrCP`). (2) Tune multiplier and floor to fabric size. (3) Page on critical EPG pairs; attach last successful change ticket from audit (UC-18.1.5).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci (sourcetype=\"cisco:aci:contracts\" OR sourcetype=\"cisco:aci:syslog\") earliest=-4h\n| bin _time span=5m\n| eval denies=coalesce(deny_count, implicit_deny_count, 0)\n| stats sum(denies) as deny_burst by _time, src_epg, dst_epg, contract_name\n| eventstats median(deny_burst) as med by src_epg, dst_epg\n| where deny_burst > med*10 AND deny_burst > 100\n| sort -deny_burst\n| table _time, src_epg, dst_epg, contract_name, deny_burst, med\n```\n\nUnderstanding this SPL\n\n**Contract Violation and Implicit Deny Bursts** — Sudden increases in implicit denies or contract violations usually mean a mis-deployed contract, a missing EPG binding, or an attack probing disallowed paths. Catching bursts early prevents application outages and avoids silent security gaps where traffic is dropped without operator visibility.\n\nDocumented **Data sources**: `index=cisco_aci` `sourcetype=\"cisco:aci:contracts\"` or `sourcetype=\"cisco:aci:syslog\"` with fields `src_epg`, `dst_epg`, `contract_name`, `deny_count`, `implicit_deny_count`. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:contracts, cisco:aci:syslog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:contracts\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `eval` defines or adjusts **denies** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by _time, src_epg, dst_epg, contract_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by src_epg, dst_epg** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where deny_burst > med*10 AND deny_burst > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Contract Violation and Implicit Deny Bursts**): table _time, src_epg, dst_epg, contract_name, deny_burst, med\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (deny burst timeline), Table (worst EPG pairs), Heatmap (src_epg × dst_epg).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC, ACI leaf switches",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We surface spikes when the fabric drops east-west chats because contracts went missing or filters slipped out of line, so teams mend policy before users only see mysterious timeouts.",
              "mtype": [
                "Fault",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.21",
              "n": "EPG Endpoint Learning and Deletion Churn",
              "c": "high",
              "f": "advanced",
              "v": "Rapid endpoint learn/delete cycles on an EPG strain the control plane and can precede bridging or routing instability. Monitoring learning churn protects workload mobility designs and catches misconfigured duplicate IPs or spanning-tree interactions before they impact database and storage east-west paths.",
              "t": "`TA_cisco-ACI`, APIC event log",
              "d": "`index=cisco_aci` `sourcetype=\"cisco:aci:endpoint\"` with fields `action`, `tenant`, `epg`, `mac`, `ip`",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:endpoint\" earliest=-24h\n| where action IN (\"learn\",\"delete\",\"move\")\n| bin _time span=15m\n| stats count as ops by _time, tenant, epg, action\n| stats sum(eval(if(action==\"learn\",ops,0))) as learn_ops sum(eval(if(action==\"delete\",ops,0))) as del_ops max(_time) as last_window by tenant, epg\n| where learn_ops>500 OR del_ops>500\n| table tenant, epg, learn_ops, del_ops, last_window\n| sort -learn_ops",
              "m": "(1) Ingest endpoint tracker events at least every poll interval of APIC TA. (2) Baseline per business EPG; exclude known vMotion pools via lookup. (3) Correlate with faults on the same `dn` and with L3Out prefix churn (UC-18.1.14).",
              "z": "Timechart (learn vs delete), Table (noisy EPGs), Single value (EPGs over threshold).",
              "kfp": "**vmotion_mac_shuffle** — VMware vMotion legitimately replays MAC learning across leaf switches during maintenance. Cross-check `cisco:aci:audit` for quiet policy windows and `vmware:vsphere:task` when that index exists before paging fabric owners.\n\n**epm_poll_drift** — REST collectors that miss POST /api/aaaRefresh.json cadence duplicate epmMacEp rows across adjacent polls, inflating ep_events without new leaf work. Dedup on _time, dn, and a hashed _raw fingerprint, or widen bins until HTTP 429 storms settle.\n\n**parser_field_rename** — TA upgrades that rename action to epm_action can starve the learnish bucket until FIELDALIAS returns. Watch for action_palette stuck on other while syslog still mentions learning—patch aliases before retuning thresholds.\n\n**syslog_regex_echo** — Stale syslog lines that mention endpoint moves during unrelated control-plane advisories can lift syslog_ep_lines while cisco:aci:endpoint stays calm. Require ratio_e corroboration from structured feeds.\n\n**duplicate_ip_clusters** — Misconfigured servers issuing duplicate IPs force rapid delete or learn cycles that look like automation bugs. Validate ARP tables and IPAM assignments before blaming the fabric.\n\n**faultinst_delegate_stack** — Parent and child faultInst rows for the same EPG can double open_ep_faults unless delegated fault SOPs filter duplicates—keep lifecycle parity across syslog and REST transports.\n\n**microseg_quarantine_sweep** — Security tools that repeatedly shunt hosts between quarantine and production EPGs create intentional churn. Route those EPGs through **`aci_epg_allowlist.csv`** or a dedicated correlation saved search so pages stay meaningful.\n\n**issu_epm_resync_wave** — ISSU and controller-image steps occasionally replay `epmDb` synchronization, which concentrates learnish spikes for a bounded maintenance window. Require two negative buckets after the change record closes or gate alerts on `epm_lane_ok` so authorized maintenance is not treated as an unknown storm.\n\n**coop_partition_chatter** — Rare COOP healing periods can push move-like syslog narratives while structured MO exports already stabilize. Compare syslog_ep_lines against ratio_e from `cisco:aci:endpoint` and validate APIC cluster health from **UC-18.1.1** before swapping leaf hardware.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC event log.\n• Ensure the following data sources are available: `index=cisco_aci` `sourcetype=\"cisco:aci:endpoint\"` with fields `action`, `tenant`, `epg`, `mac`, `ip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest endpoint tracker events at least every poll interval of APIC TA. (2) Baseline per business EPG; exclude known vMotion pools via lookup. (3) Correlate with faults on the same `dn` and with L3Out prefix churn (UC-18.1.14).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:endpoint\" earliest=-24h\n| where action IN (\"learn\",\"delete\",\"move\")\n| bin _time span=15m\n| stats count as ops by _time, tenant, epg, action\n| stats sum(eval(if(action==\"learn\",ops,0))) as learn_ops sum(eval(if(action==\"delete\",ops,0))) as del_ops max(_time) as last_window by tenant, epg\n| where learn_ops>500 OR del_ops>500\n| table tenant, epg, learn_ops, del_ops, last_window\n| sort -learn_ops\n```\n\nUnderstanding this SPL\n\n**EPG Endpoint Learning and Deletion Churn** — Rapid endpoint learn/delete cycles on an EPG strain the control plane and can precede bridging or routing instability. Monitoring learning churn protects workload mobility designs and catches misconfigured duplicate IPs or spanning-tree interactions before they impact database and storage east-west paths.\n\nDocumented **Data sources**: `index=cisco_aci` `sourcetype=\"cisco:aci:endpoint\"` with fields `action`, `tenant`, `epg`, `mac`, `ip`. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC event log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:endpoint. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:endpoint\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action IN (\"learn\",\"delete\",\"move\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, tenant, epg, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` rolls up events into metrics; results are split **by tenant, epg** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where learn_ops>500 OR del_ops>500` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **EPG Endpoint Learning and Deletion Churn**): table tenant, epg, learn_ops, del_ops, last_window\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (learn vs delete), Table (noisy EPGs), Single value (EPGs over threshold).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC, Nexus 9000 (ACI)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We track when the fabric keeps learning and forgetting the same computers in a frantic loop, so teams can spot duplicate addresses or shaky moves before everyday apps go quiet.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.22",
              "n": "Fabric Port-Channel and Member Link Imbalance",
              "c": "high",
              "f": "intermediate",
              "v": "Uneven distribution across port-channel members defeats ECMP assumptions and can saturate individual uplinks while siblings stay idle. Detecting imbalance protects spine-leaf oversubscription models and avoids tail latency for storage and replication traffic.",
              "t": "`TA_cisco-ACI`, APIC interface statistics",
              "d": "`index=cisco_aci` `sourcetype=\"cisco:aci:interface_stats\"` with fields `node`, `interface`, `port_channel`, `bytesRate`, `speed`",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:interface_stats\" earliest=-2h\n| where isnotnull(port_channel)\n| stats sum(bytesRate) as br by node, port_channel, interface\n| eventstats sum(br) as pc_total by node, port_channel\n| eval member_pct=round(100*br/pc_total,2)\n| eventstats range(member_pct) as spread by node, port_channel\n| where spread > 35\n| table node, port_channel, interface, member_pct, spread\n| sort node, port_channel, -member_pct",
              "m": "(1) Ensure `port_channel` is extracted from interface DN or API; map orphan physical members. (2) Alert when member spread exceeds policy (for example 35%). (3) Validate hashing and down-members with operational state feed.",
              "z": "Bar chart (member_pct by interface), Table (imbalanced PCs), Heatmap (node × port_channel).",
              "kfp": "**lacp_convergence_pause** — Right after a member returns from link-down or suspend, LACP can park traffic on one leg for tens of seconds while the bundle reconverges. Expect elevated pct_spread for one or two buckets only when syslog shows lacp sync transitions; widen suppression during approved maintenance or require two consecutive violations before paging.\n\n**microburst_smoothing** — Fifteen-minute buckets hide multi-second bursts that spike one member momentarily without persistent imbalance. If micro-bursts dominate, tighten span cautiously (for example ten minutes) only after confirming REST inputs keep up; otherwise you trade false quiet for sharper but noisy math.\n\n**telemetry_dup_poll** — Duplicate REST pulls that replay identical modTs double-count tx_bytes or rx_bytes on paper, falsely narrowing spread. Dedupe on _time, member_dn, and a hashed raw fingerprint, or fix the modular input interval so each poll advances modTs before Splunk indexes another copy.\n\n**asymmetric_workload_design** — Some storage or backup VLANs intentionally pin flows because hosts hash to one neighbor; a stable skew with quiet fault lanes and calm syslog may be normal. Compare against documented traffic engineering and lower severity when audit rows show no interface edits across the observation window.\n\n**apic_http_throttle_gap** — HTTP 429 or brief APIC CPU pressure can delay pc_stats while interface MOs still advance, manufacturing artificial spread for a single interval. Cross-check join health: if pc_stats subsearch volume drops but interface rows remain, pause alerts until REST queues drain.\n\n**issu_counter_reset** — Image upgrades or ISSU can zero counters mid-bucket, collapsing po_bytes and yielding odd member_pct until the next full poll completes. Ignore single-bucket spikes bounded by change records that cite ISSU or policy downloads.\n\n**vpc_diag_traffic** — Diagnostic vPC holds or half-member diagnostic states can bias bytes toward one leg even when forwarding is healthy. Confirm ethpmPhysIf oper states and Cisco TAC bulletins before treating the skew as hashing failure.\n\n**speed_mismatch_member** — Member legs running negotiated speed below siblings shift byte shares legitimately until ports auto-negotiate uniformly. Validate speed and oper_state parity on every member_dn before re-hashing investigations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC interface statistics.\n• Ensure the following data sources are available: `index=cisco_aci` `sourcetype=\"cisco:aci:interface_stats\"` with fields `node`, `interface`, `port_channel`, `bytesRate`, `speed`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure `port_channel` is extracted from interface DN or API; map orphan physical members. (2) Alert when member spread exceeds policy (for example 35%). (3) Validate hashing and down-members with operational state feed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:interface_stats\" earliest=-2h\n| where isnotnull(port_channel)\n| stats sum(bytesRate) as br by node, port_channel, interface\n| eventstats sum(br) as pc_total by node, port_channel\n| eval member_pct=round(100*br/pc_total,2)\n| eventstats range(member_pct) as spread by node, port_channel\n| where spread > 35\n| table node, port_channel, interface, member_pct, spread\n| sort node, port_channel, -member_pct\n```\n\nUnderstanding this SPL\n\n**Fabric Port-Channel and Member Link Imbalance** — Uneven distribution across port-channel members defeats ECMP assumptions and can saturate individual uplinks while siblings stay idle. Detecting imbalance protects spine-leaf oversubscription models and avoids tail latency for storage and replication traffic.\n\nDocumented **Data sources**: `index=cisco_aci` `sourcetype=\"cisco:aci:interface_stats\"` with fields `node`, `interface`, `port_channel`, `bytesRate`, `speed`. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC interface statistics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:interface_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:interface_stats\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(port_channel)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by node, port_channel, interface** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by node, port_channel** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **member_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by node, port_channel** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where spread > 35` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Fabric Port-Channel and Member Link Imbalance**): table node, port_channel, interface, member_pct, spread\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (member_pct by interface), Table (imbalanced PCs), Heatmap (node × port_channel).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300/9500 (ACI leaf/spine)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch how traffic shares across the cables in a fabric bundle, and we tell you when one line is doing almost all the work while the others stay quiet, so important apps do not slow down without a clear warning.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.1.23",
              "n": "APIC Controller Resource Exhaustion Watch",
              "c": "critical",
              "f": "advanced",
              "v": "APIC nodes host the policy repository and cluster services; disk, memory, or inode pressure delays policy pushes and can stall fault processing. Watching controller resources prevents brownouts where the fabric stays up but automation and incremental updates fail during change windows.",
              "t": "`TA_cisco-ACI`, APIC SNMP or API metrics scripted input",
              "d": "`index=cisco_aci` `sourcetype=\"cisco:aci:apic_capacity\"` with fields `apic_id`, `disk_used_pct`, `mem_used_pct`, `inode_used_pct`",
              "q": "index=cisco_aci sourcetype=\"cisco:aci:apic_capacity\" earliest=-24h\n| stats latest(disk_used_pct) as disk latest(mem_used_pct) as mem latest(inode_used_pct) as inode by apic_id\n| where disk>85 OR mem>90 OR inode>85\n| eval risk=case(inode>85,\"Inode pressure\", mem>90,\"Memory pressure\", disk>85,\"Disk pressure\",1==1,\"OK\")\n| table apic_id, disk, mem, inode, risk\n| sort -mem",
              "m": "(1) Poll `/api/node/mo/sys/summary` or vendor TA capacity fields every 5 minutes. (2) Alert at staged thresholds; include log partition growth rate. (3) Correlate with cluster replication lag (UC-18.1.18).",
              "z": "Gauge (per-APIC disk/mem), Table (nodes at risk), Timechart (capacity trends).",
              "kfp": "**apic_rolling_upgrade** — Coordinated image work and repository expansion temporarily raise CPU and flash writes so disk percentages climb without long-run exhaustion. Require two consecutive breaches outside maintenance windows or bind suppression to change tickets that cite firmware tasks before paging hardware.\n\n**follower_poll_skew** — REST inputs pinned to one APIC FQDN can briefly show followers idle while the leader saturates during VIP failovers. Compare each apic_id side-by-side in the same buckets before replacing nodes.\n\n**compact_log_burst_logrotate** — Scheduled compaction or log rotation lifts CPU and flash I/O for minutes only. Ignore single-bucket violations when audit rows mention observability or diag objects and syslog shows housekeeping mnemonics.\n\n**verbose_audit_inode_drain** — Verbose AAA or mirrored syslog can consume inodes faster than disk percentages move. Tune remote logging levels and retention on lab fabrics before declaring controller fault on inode alone.\n\n**rest_429_metric_hole** — Brief API throttling drops procEntity samples while heartbeats continue, mimicking sudden CPU relief. Validate modular-input logs for 429 before trusting missing cpuPct as recovery.\n\n**policy_import_memory_spike** — Large policy imports inflate memory and flash usage until compilation settles; faults may stay quiet. Cross-reference audit_chg_hits for policy DNs and delay pages until documented import windows close.\n\n**double_modular_input_dupes** — Two modular inputs hitting the same class URL double events and skew latest() math. Deduplicate on apic_id, hashed modTs, and sourcetype before alerting.\n\n**lab_hypervisor_overcommit** — Virtualized APIC labs co-resident with noisy neighbors show memory jitter uncorrelated with fabric load. Lower severity when hypervisor dashboards prove host overcommit rather than policy DB growth.\n\n**issu_counter_quiet** — Image upgrades or ISSU can suppress fault generation for a poll while counters reset, so joins look empty even though flash writers are busy. Ignore one interval bounded by maintenance records before treating the pattern as telemetry failure.",
              "refs": "[Splunkbase app 6805](https://splunkbase.splunk.com/app/6805)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA_cisco-ACI`, APIC SNMP or API metrics scripted input.\n• Ensure the following data sources are available: `index=cisco_aci` `sourcetype=\"cisco:aci:apic_capacity\"` with fields `apic_id`, `disk_used_pct`, `mem_used_pct`, `inode_used_pct`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Poll `/api/node/mo/sys/summary` or vendor TA capacity fields every 5 minutes. (2) Alert at staged thresholds; include log partition growth rate. (3) Correlate with cluster replication lag (UC-18.1.18).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_aci sourcetype=\"cisco:aci:apic_capacity\" earliest=-24h\n| stats latest(disk_used_pct) as disk latest(mem_used_pct) as mem latest(inode_used_pct) as inode by apic_id\n| where disk>85 OR mem>90 OR inode>85\n| eval risk=case(inode>85,\"Inode pressure\", mem>90,\"Memory pressure\", disk>85,\"Disk pressure\",1==1,\"OK\")\n| table apic_id, disk, mem, inode, risk\n| sort -mem\n```\n\nUnderstanding this SPL\n\n**APIC Controller Resource Exhaustion Watch** — APIC nodes host the policy repository and cluster services; disk, memory, or inode pressure delays policy pushes and can stall fault processing. Watching controller resources prevents brownouts where the fabric stays up but automation and incremental updates fail during change windows.\n\nDocumented **Data sources**: `index=cisco_aci` `sourcetype=\"cisco:aci:apic_capacity\"` with fields `apic_id`, `disk_used_pct`, `mem_used_pct`, `inode_used_pct`. **App/TA** (typical add-on context): `TA_cisco-ACI`, APIC SNMP or API metrics scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_aci; **sourcetype**: cisco:aci:apic_capacity. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_aci, sourcetype=\"cisco:aci:apic_capacity\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by apic_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where disk>85 OR mem>90 OR inode>85` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **APIC Controller Resource Exhaustion Watch**): table apic_id, disk, mem, inode, risk\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Gauge (per-APIC disk/mem), Table (nodes at risk), Timechart (capacity trends).",
              "script": "",
              "premium": "",
              "hw": "Cisco APIC M3/L3 cluster nodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch the three policy computers that steer your fabric and warn you when they run short on memory, disk, or file-table space, so control work does not stall without a clear reason.",
              "mtype": [
                "Availability",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp"
              ],
              "em": [
                "cisco_aci",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 38.5,
          "qd": {
            "gold": 0,
            "silver": 3,
            "bronze": 20,
            "none": 0
          }
        },
        {
          "i": "18.2",
          "n": "VMware NSX",
          "u": [
            {
              "i": "18.2.1",
              "n": "Distributed Firewall Rule Hits",
              "c": "high",
              "f": "beginner",
              "v": "NSX Distributed Firewall (DFW) runs on every hypervisor, providing east-west traffic control. Monitoring rule hits validates security policy effectiveness, identifies unused rules for cleanup, and detects policy violations in real time.",
              "t": "`vmware_nsx_addon`, NSX DFW syslog",
              "d": "NSX DFW firewall logs (syslog), NSX Manager API",
              "q": "index=vmware sourcetype=\"vmware:nsx:dfw\"\n| stats sum(eval(if(action=\"ALLOW\", 1, 0))) as allowed, sum(eval(if(action=\"DROP\", 1, 0))) as dropped, sum(eval(if(action=\"REJECT\", 1, 0))) as rejected by rule_id, rule_name, src, dst_ip, dst_port, protocol\n| eval total=allowed+dropped+rejected\n| sort -total\n| table rule_id, rule_name, src, dst_ip, dst_port, protocol, allowed, dropped, rejected",
              "m": "Enable DFW logging on NSX Manager for desired rule sections. Forward DFW logs via syslog to Splunk. Parse rule ID, action, source, destination, and port fields. Identify rules with zero hits (candidates for removal). Alert on unexpected DENY hits indicating misconfiguration or attack.",
              "z": "Bar chart (top rules by hits), Timechart (allow vs deny trending), Table (denied connections), Sankey diagram (source-to-destination flows).",
              "kfp": "**Workload shuffle bursts** — Planned guest migrations briefly rebuild distributed-firewall flow journals while **vsip** state tables refill on the landing ESXi host; **`vmware:nsxt:firewall`** events-per-second climbs although **`ALLOW`** verdict continuity resumes moments later—suppress brief spikes anchored to **`CMDB`** relocation stamps rather than lateral-move hunts. **Identity-mapping blanks** — Active Directory rollover intervals yield **`USERNAME`** gaps while **`NSX-Identity-Service`** caches heal; **`DROP`** counters remain truthful yet human-readable attribution dips—coordinate paging windows with **`domain-controller`** bridge calendars before accusing spoofed lateral phishing. **Legacy-to-current uplift duplication** — In-place uplift weekends replay **`Distributed Firewall`** sections while VMware reformats Policy domains toward four-dot-zero planes; syslog volume doubles briefly until **`Broadcom`** reconciliation completes—extend SPL joins across stabilization intervals only. **Operator tagging churn** — Teams renaming **`RULE_TAG`** strings mid-migration (`policy-name` edits without mirrored Splunk lookups) skew dashboards referencing stale **`rule_tag_raw`** aliases until nightly **`vmware:nsxt:dfw_policy`** snapshots refresh lookup tables—expect benign spikes whenever admins refactor tagging conventions alone. **Emergency deny rehearsals** — Temporary **`DROP`** surges surface while inserting interim **`JUMP_TO_APPLICATION`** placeholders during modernization waves—confirm **`rule_numeric`** increments align with **`CRQ`** evidence before paging application owners. **Maintenance envelopes** — Entering **`host`** **`maintenance`** stalls fresh **`dfwpktlogs`** journal lines from that hypervisor tail, mimicking faux silence until **`DRS`** completes relocation—cross-check **`vmware:vmware:vsphere:vcenter`** relocation narratives.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`, NSX DFW syslog.\n• Ensure the following data sources are available: NSX DFW firewall logs (syslog), NSX Manager API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable DFW logging on NSX Manager for desired rule sections. Forward DFW logs via syslog to Splunk. Parse rule ID, action, source, destination, and port fields. Identify rules with zero hits (candidates for removal). Alert on unexpected DENY hits indicating misconfiguration or attack.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:dfw\"\n| stats sum(eval(if(action=\"ALLOW\", 1, 0))) as allowed, sum(eval(if(action=\"DROP\", 1, 0))) as dropped, sum(eval(if(action=\"REJECT\", 1, 0))) as rejected by rule_id, rule_name, src, dst_ip, dst_port, protocol\n| eval total=allowed+dropped+rejected\n| sort -total\n| table rule_id, rule_name, src, dst_ip, dst_port, protocol, allowed, dropped, rejected\n```\n\nUnderstanding this SPL\n\n**Distributed Firewall Rule Hits** — NSX Distributed Firewall (DFW) runs on every hypervisor, providing east-west traffic control. Monitoring rule hits validates security policy effectiveness, identifies unused rules for cleanup, and detects policy violations in real time.\n\nDocumented **Data sources**: NSX DFW firewall logs (syslog), NSX Manager API. **App/TA** (typical add-on context): `vmware_nsx_addon`, NSX DFW syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:dfw. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:dfw\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rule_id, rule_name, src, dst_ip, dst_port, protocol** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **total** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Distributed Firewall Rule Hits**): table rule_id, rule_name, src, dst_ip, dst_port, protocol, allowed, dropped, rejected\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top rules by hits), Timechart (allow vs deny trending), Table (denied connections), Sankey diagram (source-to-destination flows).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch which tiny rules welcome or refuse messages between guest computers on your grounds. Sudden swings in allow counts versus deny counts surface early so nobody blames applications when shifting security posture hides behind silence.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "syslog",
                "vmware"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "18.2.2",
              "n": "Micro-Segmentation Enforcement",
              "c": "high",
              "f": "intermediate",
              "v": "NSX micro-segmentation is a key Zero Trust control. Monitoring enforcement validates that workloads are properly isolated, detects lateral movement attempts, and proves compliance with segmentation policies during audits.",
              "t": "`vmware_nsx_addon`, NSX DFW logs",
              "d": "NSX DFW logs, NSX security group membership",
              "q": "index=vmware sourcetype=\"vmware:nsx:dfw\"\n| lookup nsx_security_groups vm_name OUTPUT security_group\n| stats count as hits, dc(dst_ip) as unique_destinations by security_group, action, direction\n| eval compliance=if(action=\"DROP\" AND direction=\"intra-group\", \"Violation\", \"Expected\")\n| sort -hits",
              "m": "Define security groups in NSX aligned with application tiers. Enable DFW logging for inter-group and intra-group traffic. Enrich logs with security group membership. Track allowed vs denied inter-group communication. Alert on intra-group denials or unexpected inter-group allows.",
              "z": "Heatmap (group-to-group traffic), Sankey diagram (flow paths), Bar chart (denials by group), Single value (policy violation count).",
              "kfp": "**tag_sync_lag_after_clone** — Fresh vSphere templates inherit stale custom attributes for NSGroup Tag expressions; vmware:nsxt:firewall ALLOW verdicts appear while Splunk lookups still list the guest as uncovered until the next dfw_groups poll—suppress short windows anchored to clone task IDs. **ad_group_eval_refresh_cycle** — Active Directory–backed NSGroups pause membership during ldap failovers; allow_toward_peer_outside_nsgroups spikes even though the manager already evaluates the true subset—cross-check Identity service alarms before paging. **snapshot_poll_skew_hour_boundary** — Hourly bin boundaries split a single burst across buckets, dimming mix_pct—use align=@s offsets when forensic precision matters. **staging_segment_overlap_noise** — Pre-production VLAN overlays deliberately share address space with production in some labs; cross_nsg_allow_needs_review flags benign pairs—separate indexes or environment tags to silence labs. **intelligence_recommendation_publish_queue** — NSX Intelligence backlog delays suggested DROP commits while temporary ALLOW persists; combine with the Intelligence UI queue depth before blaming drift. **ephemeral_ip_nat_pool_churn** — Edge SNAT pools or Kubernetes NodePort layers reuse public-looking addresses absent from NSGroup expressions—expect false orphan classifications unless internal ranges feed the lookup. **cross_vc_universal_group_replication_lag** — Federation sites may replicate groups minutes behind policy objects; resource_path mismatches self-heal—extend alert suppression with site_id metadata instead of deleting detections. **dfw_logging_mode_surge** — Operators enable verbose connection logging briefly, flooding flow_events without changing posture—correlate with change tickets that reference logging level knobs. **static_member_export_truncation** — Very large NSGroup static member lists occasionally paginate across API calls; if collectors ingest only the first page, lookups miss late-alphabet hosts until the next full export—watch collector byte caps and HTTP 206 handling.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`, NSX DFW logs.\n• Ensure the following data sources are available: NSX DFW logs, NSX security group membership.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine security groups in NSX aligned with application tiers. Enable DFW logging for inter-group and intra-group traffic. Enrich logs with security group membership. Track allowed vs denied inter-group communication. Alert on intra-group denials or unexpected inter-group allows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:dfw\"\n| lookup nsx_security_groups vm_name OUTPUT security_group\n| stats count as hits, dc(dst_ip) as unique_destinations by security_group, action, direction\n| eval compliance=if(action=\"DROP\" AND direction=\"intra-group\", \"Violation\", \"Expected\")\n| sort -hits\n```\n\nUnderstanding this SPL\n\n**Micro-Segmentation Enforcement** — NSX micro-segmentation is a key Zero Trust control. Monitoring enforcement validates that workloads are properly isolated, detects lateral movement attempts, and proves compliance with segmentation policies during audits.\n\nDocumented **Data sources**: NSX DFW logs, NSX security group membership. **App/TA** (typical add-on context): `vmware_nsx_addon`, NSX DFW logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:dfw. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:dfw\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by security_group, action, direction** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **compliance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (group-to-group traffic), Sankey diagram (flow paths), Bar chart (denials by group), Single value (policy violation count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We verify that each guest computer still sits in the right security bucket, that firewall blueprints still wrap the workloads they should, and that pairs of teams only talk when the design says they may.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.2.3",
              "n": "Logical Switch Health",
              "c": "high",
              "f": "intermediate",
              "v": "NSX logical switches and routers form the virtual network fabric. Monitoring their operational status ensures VM connectivity and helps identify overlay network issues before they impact applications.",
              "t": "`vmware_nsx_addon`, NSX Manager events",
              "d": "NSX Manager API, NSX system events/syslog",
              "q": "index=vmware sourcetype=\"vmware:nsx:events\"\n| search object_type IN (\"LogicalSwitch\", \"LogicalRouter\", \"Tier0Router\", \"Tier1Router\")\n| eval status=case(severity==\"HIGH\" OR severity==\"CRITICAL\", \"Degraded\", severity==\"MEDIUM\", \"Warning\", 1==1, \"Healthy\")\n| stats latest(status) as current_status, count as event_count by object_name, object_type\n| sort -event_count",
              "m": "Poll NSX Manager API for logical switch and router status every 60 seconds. Ingest NSX system events via syslog. Alert on logical switch or router down events. Track BFD session state for Tier-0/Tier-1 routers. Monitor VNI pool exhaustion.",
              "z": "Status grid (switch/router health), Table (degraded components), Timechart (event trends), Single value (active logical switches).",
              "kfp": "**Planned replication-mode rehearsal** — Architects toggle **`HEAD`** / **`MTEP`** / **`SOURCE`** during BUM redesign so **`replication_mode`** rows flicker while dataplane forwarding stays steady once control-plane settles—suppress on **`CRQ`** stamps tied to multicast policy rehearsals alone ([VMware NSX segments overview](https://techdocs.broadcom.com/us/en/vmware-cis/nsx/vmware-nsx/4-2/administration-guide/network-segments-overlay-and-vlan-backed-segments.html)). **Heavy vMotion storms** — Large migrations inflate **`mac_count`** until aging prunes stale hosts; **`mac_table_heavy`** spikes without sustained drops ([Administration Guide hub](https://techdocs.broadcom.com/us/en/vmware-cis/nsx/vmware-nsx/4-2/administration-guide.html)). **Search index lag** — **`GET /policy/api/v1/search`** JSON may trail UI **`Realized State`** badges one poll during Manager CPU spikes; **`realization_gap`** blips though operators already see green—validate **`aggregate-realization-status`** before paging. **Stretched-site WAN work** — Remote **`VTEP`** isolation during MTU lifts makes **`oper_state`** oscillate while **`ARP`** suppression masks transient churn—classify **`transport_zone`** **`OVERLAY`** versus VLAN tags in **`tz_k`** before all-hands calls. **Lab duplicate VNIs** — Isolated tenants reuse **`VNI`** values so **`tz_binding_uncertain`** appears harmless—tag `env=lab` on indexes. **Telemetry mis-tags** — gRPC exporters occasionally mix **`Edge`** summaries with **`segment`** metrics—confirm **`sourcetype`** **`vmware:nsxt:logical_switch`** inventory parity in UI before Policy defect escalation.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`, NSX Manager events.\n• Ensure the following data sources are available: NSX Manager API, NSX system events/syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll NSX Manager API for logical switch and router status every 60 seconds. Ingest NSX system events via syslog. Alert on logical switch or router down events. Track BFD session state for Tier-0/Tier-1 routers. Monitor VNI pool exhaustion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:events\"\n| search object_type IN (\"LogicalSwitch\", \"LogicalRouter\", \"Tier0Router\", \"Tier1Router\")\n| eval status=case(severity==\"HIGH\" OR severity==\"CRITICAL\", \"Degraded\", severity==\"MEDIUM\", \"Warning\", 1==1, \"Healthy\")\n| stats latest(status) as current_status, count as event_count by object_name, object_type\n| sort -event_count\n```\n\nUnderstanding this SPL\n\n**Logical Switch Health** — NSX logical switches and routers form the virtual network fabric. Monitoring their operational status ensures VM connectivity and helps identify overlay network issues before they impact applications.\n\nDocumented **Data sources**: NSX Manager API, NSX system events/syslog. **App/TA** (typical add-on context): `vmware_nsx_addon`, NSX Manager events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by object_name, object_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (switch/router health), Table (degraded components), Timechart (event trends), Single value (active logical switches).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the virtual lanes each bunch of computers uses inside your datacenter mesh. When a lane's labels disagree, its address book grows too fast, or its copying mode goes sideways before shoppers notice slow checkout screens, we speak up so networking crews mend the cloth—not the storefront apps.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.2.4",
              "n": "NSX Edge Performance",
              "c": "high",
              "f": "intermediate",
              "v": "NSX Edge nodes handle north-south traffic, load balancing, and NAT. Performance bottlenecks on Edge nodes directly impact application availability and throughput for any workload communicating outside the NSX fabric.",
              "t": "`vmware_nsx_addon`, NSX Edge metrics",
              "d": "NSX Edge node metrics (CPU, memory, datapath), NSX Manager API",
              "q": "index=vmware sourcetype=\"vmware:nsx:edge_metrics\"\n| stats avg(cpu_pct) as avg_cpu, max(cpu_pct) as peak_cpu, avg(mem_pct) as avg_mem, avg(datapath_cpu_pct) as avg_dp_cpu by edge_node, cluster\n| eval status=case(peak_cpu>90 OR avg_dp_cpu>80, \"Critical\", peak_cpu>75 OR avg_dp_cpu>60, \"Warning\", 1==1, \"Healthy\")\n| table edge_node, cluster, avg_cpu, peak_cpu, avg_mem, avg_dp_cpu, status\n| sort -peak_cpu",
              "m": "Collect Edge node metrics via NSX Manager API every 60 seconds. Monitor both management plane and datapath CPU separately. Track interface throughput on uplinks. Set thresholds: datapath CPU >80% critical, >60% warning. Plan Edge node scale-out when sustained utilization exceeds thresholds.",
              "z": "Gauge (Edge CPU/memory), Timechart (performance trending), Table (Edge node status), Single value (peak datapath CPU).",
              "kfp": "**BGP graceful-restart rehearsal noise** — Datacenter-core upgrades intentionally stagger BGP helper timers so Established peers briefly flirt with Active / Idle states—Splunk tier0_bgp_peer_not_established rows spike though WAN optics remain deliberate—suppress windows anchored to CRQ stamps referencing graceful-restart / MAINT_MODE bridges before accusing Tier-0 misconfiguration alone. **Edge maintenance churn masking dataplane dips** — Entering Maintenance Mode on Edge appliances triggers active-standby takeover shifting Tier-0 SR ownership—throughput gauges tumble although north-south paths recover via standby peers—tie edge_compute_hot / gbps_peak drops to Edge evacuation bridge calendars rather than blackhole hypotheses instantly. **LB probe subnet mismatches mid-refactor** — ACTIVE HTTP / HTTPS monitors aimed at stale management IPs during lb-pools subnet migrations briefly stamp pool_member_status DOWN across every backend though applications answer correctly—coordinate monitor / pool edits recorded inside CMDB tickets before paging Tier-1 application squads. **IKEv2 DPD timeouts during WAN MTU realignment** — Ipsec dpd-timeout retries coincide with PMTUD / MSS harmonisation exercises across WAN MTU bridges—tunnel-down syslog bursts mimic outages although IKE phase-2 SA counts stabilize seconds later once strongSwan / charon timers converge—suppress IPSEC_TUNNEL_DOWN overlays aligned with WAN bridge approvals referencing Tier-0 VPN attachments. **Bare-metal NIC firmware flashes resembling cluster outages** — Rolling NIC firmware / driver refreshes on bare-metal Edge nodes briefly reset dataplane counters—Splunk runtime_latest / drop_peak anomalies mimic catastrophic Edge failures despite Tier-1 tenants riding unaffected peers inside Edge clusters hosting ECMP active-active layouts—cross-check hardware bridge notices before split-brain escalations hit Tier-0 routing SMEs nightly. **ECMP convergence lag after Tier-0 attribute tweaks** — Editing BGP path-attribute / multi-hop knobs toward upstream Nexus / Arista routers introduces route installation latency across ECMP active-active Tier-0 locale_services attachments—temporary blackhole / asymmetric forwarding narratives surface inside dataplane_drops / pps_peak jitter although Established BGP peers never flap—extend smoothing filters referencing routing bridge approvals capturing BGP policy / route-map edits tied to Tier-0 north-south fabrics alone.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`, NSX Edge metrics.\n• Ensure the following data sources are available: NSX Edge node metrics (CPU, memory, datapath), NSX Manager API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect Edge node metrics via NSX Manager API every 60 seconds. Monitor both management plane and datapath CPU separately. Track interface throughput on uplinks. Set thresholds: datapath CPU >80% critical, >60% warning. Plan Edge node scale-out when sustained utilization exceeds thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:edge_metrics\"\n| stats avg(cpu_pct) as avg_cpu, max(cpu_pct) as peak_cpu, avg(mem_pct) as avg_mem, avg(datapath_cpu_pct) as avg_dp_cpu by edge_node, cluster\n| eval status=case(peak_cpu>90 OR avg_dp_cpu>80, \"Critical\", peak_cpu>75 OR avg_dp_cpu>60, \"Warning\", 1==1, \"Healthy\")\n| table edge_node, cluster, avg_cpu, peak_cpu, avg_mem, avg_dp_cpu, status\n| sort -peak_cpu\n```\n\nUnderstanding this SPL\n\n**NSX Edge Performance** — NSX Edge nodes handle north-south traffic, load balancing, and NAT. Performance bottlenecks on Edge nodes directly impact application availability and throughput for any workload communicating outside the NSX fabric.\n\nDocumented **Data sources**: NSX Edge node metrics (CPU, memory, datapath), NSX Manager API. **App/TA** (typical add-on context): `vmware_nsx_addon`, NSX Edge metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:edge_metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:edge_metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by edge_node, cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NSX Edge Performance**): table edge_node, cluster, avg_cpu, peak_cpu, avg_mem, avg_dp_cpu, status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (Edge CPU/memory), Timechart (performance trending), Table (Edge node status), Single value (peak datapath CPU).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch the outward routers and Edge appliances carrying campus WAN traffic and VIP flows into your fabric. When upstream BGP sessions wander away from steady handshakes, IPSec tunnels flutter, balancer backends blink unhealthy, or packet movers saturate, we surface it plainly so backbone engineers—not application squads—know where to start.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.2.5",
              "n": "Transport Node Connectivity",
              "c": "critical",
              "f": "intermediate",
              "v": "Transport nodes are the hypervisors participating in the NSX overlay. Tunnel failures between transport nodes cause VM-to-VM communication loss across hosts, directly impacting application availability.",
              "t": "`vmware_nsx_addon`, NSX transport node logs",
              "d": "NSX transport node status API, TEP tunnel events",
              "q": "index=vmware sourcetype=\"vmware:nsx:transport_node\"\n| eval tunnel_status=case(status==\"UP\", \"Healthy\", status==\"DEGRADED\", \"Degraded\", status==\"DOWN\", \"Down\", 1==1, \"Unknown\")\n| stats latest(tunnel_status) as current_status, latest(_time) as last_seen by transport_node, host_ip\n| search current_status!=\"Healthy\"\n| table transport_node, host_ip, current_status, last_seen",
              "m": "Poll NSX Manager for transport node status every 30 seconds. Monitor TEP (Tunnel Endpoint) reachability between all transport nodes. Alert immediately on tunnel DOWN state. Track tunnel flapping (frequent UP/DOWN cycles). Correlate with physical network events (link failures, MTU issues).",
              "z": "Status grid (transport node map), Table (degraded nodes), Timechart (tunnel status changes), Single value (healthy tunnel percentage).",
              "kfp": "**DRS evacuation bursts** — vMotion drains briefly collapse GENEVE adjacency until VMs settle on destination hosts; tunnel_state_latest may flash DOWN then UP without sustained packet loss once relocation completes. **Rolling upgrades** — Moving NSX-T 3.2.x toward VMware NSX 4.x restarts transport-node agents per host; runtime_latest may sit UNKNOWN briefly though sequencing honors VMware maintenance guidance. **BFD asymmetry during MTU sweeps** — Stretch-cluster engineers tuning paths from 1500-byte toward 1700-byte fabrics can observe INIT/DEGRADED BFD readings until timers reconverge across sites—suppress alerts aligned with WAN change bridges. **Edge rehearsal windows** — Planned Tier-0/Tier-1 failover drills purposely drive Edge TN DOWN states validating HA pairs—tie edge_cluster_refs to approved CRQs before paging overnight bridges. **Multi-TEP migrations** — Designs advertising alternate Tunnel Endpoint IPs intentionally ADMIN_DOWN stale peers until orchestration completes multi-home edits—Splunk sees churn though dataplane prefers healthy TEP pairs. **Stretched VLAN drills plus PMTUD churn** — WAN-layer discovery spikes tunnel_degraded_or_init counts despite overlays recovering seconds later across resilient paths.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`, NSX transport node logs.\n• Ensure the following data sources are available: NSX transport node status API, TEP tunnel events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll NSX Manager for transport node status every 30 seconds. Monitor TEP (Tunnel Endpoint) reachability between all transport nodes. Alert immediately on tunnel DOWN state. Track tunnel flapping (frequent UP/DOWN cycles). Correlate with physical network events (link failures, MTU issues).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:transport_node\"\n| eval tunnel_status=case(status==\"UP\", \"Healthy\", status==\"DEGRADED\", \"Degraded\", status==\"DOWN\", \"Down\", 1==1, \"Unknown\")\n| stats latest(tunnel_status) as current_status, latest(_time) as last_seen by transport_node, host_ip\n| search current_status!=\"Healthy\"\n| table transport_node, host_ip, current_status, last_seen\n```\n\nUnderstanding this SPL\n\n**Transport Node Connectivity** — Transport nodes are the hypervisors participating in the NSX overlay. Tunnel failures between transport nodes cause VM-to-VM communication loss across hosts, directly impacting application availability.\n\nDocumented **Data sources**: NSX transport node status API, TEP tunnel events. **App/TA** (typical add-on context): `vmware_nsx_addon`, NSX transport node logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:transport_node. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:transport_node\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **tunnel_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by transport_node, host_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Transport Node Connectivity**): table transport_node, host_ip, current_status, last_seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (transport node map), Table (degraded nodes), Timechart (tunnel status changes), Single value (healthy tunnel percentage).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch whether the invisible tunnels between your servers stay connected so applications can talk across machines without surprises. When the networking fabric squeezes packets too small or a tunnel drops during planned moves, we raise it early so teams fix the fabric—not blame the apps.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "18.2.6",
              "n": "Distributed Firewall Rule Hit Rate Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Baselines hits per DFW rule and flags sudden drops (unused or bypassed) or spikes (attack or misconfiguration) — complements UC-18.2.1 volume view.",
              "t": "`vmware_nsx_addon`",
              "d": "`sourcetype=vmware:nsx:dfw`",
              "q": "index=vmware sourcetype=\"vmware:nsx:dfw\" earliest=-7d\n| bin _time span=1d\n| stats count by _time, rule_id, rule_name\n| eventstats avg(count) as baseline by rule_id\n| where count < baseline*0.2 OR count > baseline*5\n| table _time, rule_id, rule_name, count, baseline",
              "m": "Requires ≥7 days of data for baseline. Exclude ephemeral rules by lookup. Alert on zero-hit critical allow rules.",
              "z": "Line chart (hits per rule), Table (anomalies), Heatmap (rule × day).",
              "kfp": "• **vsip_stats_flush_skew** — Transport-node vsip counter publish can lag NSX Manager REST by minutes after datastore latency bursts, so vmware:nsxt:rule_stats day_delta under-reports a vmware:nsxt:firewall spike. Distinguish with tn_key-scoped timechart alignment; suppress via lookup nsx_dfw_stats_suppress.csv on tn_key and 15m windows when vCenter shows storage latency alarms.\n\n• **rule_save_counter_restart** — Operators clone or touch rules during CAB work; hit_count resets without attack. Join vmware:nsxt:audit request_uri lines on security-policies PATCH keyed to rule_ref; filter with lookup nsx_dfw_change_ticket.csv when change_id matches maintenance.\n\n• **tn_slice_incomplete_rollup** — Pollers sometimes emit one transport_node_id slice per rule_ref, under-counting b_avg until pagination fixes land. Compare dc(tn_key) to expected cluster footprint; page platform on shortfalls, not SecOps.\n\n• **gateway_snat_correlation_bleed** — vmware:nsxt:gateway_status NAT reflow surges during Tier-0 churn can be mistaken for east-west effectiveness loss when dashboards mix scopes. Scope alerts where lookup nsx_logical_scope.csv tags east_west_only=1 on rule_ref.\n\n• **month_end_batch_allow_surge** — Finance batch VMs legitimately 5× daily ALLOW hits on fixed rules. Seasonalize with lookup batch_surge_allowlist.csv rule_ref expect_multiplier; tune macro nsx_dfw_hit_warn_ratio per business unit.\n\n• **policy_rest_shard_lag** — Automation session may read a stale policy shard versus UI during Manager rolling patches. Spot-check two OAuth identities before sev-1; suppress for upgrade_epoch rows in nsx_manager_maint.csv.\n\n• **dual_manager_stat_fork** — Federated twin Manager clusters return different hit_count for shared policy_ref. Require domain_nm in every alert token; split dashboards per manager_cluster_id.\n\n• **deny_only_health_check_ok** — DENY rules with scanner-only hits look hot yet mean healthy blocking. Route hygiene tickets only when stale_flag=1 on ALLOW-intent rows after category_vals review.\n\n• **logging_disabled_phantom_zero** — ALLOW rows without vmware:nsxt:firewall logging look quiet in flows while rule_stats still counts hits. Never retire rules on zero_day_activity alone; demand stats samples or Traceflow proof.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`.\n• Ensure the following data sources are available: `sourcetype=vmware:nsx:dfw`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequires ≥7 days of data for baseline. Exclude ephemeral rules by lookup. Alert on zero-hit critical allow rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:dfw\" earliest=-7d\n| bin _time span=1d\n| stats count by _time, rule_id, rule_name\n| eventstats avg(count) as baseline by rule_id\n| where count < baseline*0.2 OR count > baseline*5\n| table _time, rule_id, rule_name, count, baseline\n```\n\nUnderstanding this SPL\n\n**Distributed Firewall Rule Hit Rate Analysis** — Baselines hits per DFW rule and flags sudden drops (unused or bypassed) or spikes (attack or misconfiguration) — complements UC-18.2.1 volume view.\n\nDocumented **Data sources**: `sourcetype=vmware:nsx:dfw`. **App/TA** (typical add-on context): `vmware_nsx_addon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:dfw. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:dfw\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, rule_id, rule_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by rule_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count < baseline*0.2 OR count > baseline*5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Distributed Firewall Rule Hit Rate Analysis**): table _time, rule_id, rule_name, count, baseline\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (hits per rule), Table (anomalies), Heatmap (rule × day).",
              "script": "",
              "premium": "",
              "hw": "NSX-T Data Center",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We read the trip meter on each numbered east-west rule—when it races far ahead or suddenly idles next to its usual month, you get a plain nudge before clutter or real trouble piles up.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.2.7",
              "n": "Micro-Segmentation Policy Drift",
              "c": "high",
              "f": "intermediate",
              "v": "Compares published DFW policy revision to approved baseline (lookup) to detect unauthorized rule changes between change windows.",
              "t": "`vmware_nsx_addon`, NSX policy export",
              "d": "`vmware:nsx:policy_revision`, config snapshots",
              "q": "index=vmware sourcetype=\"vmware:nsx:policy_revision\" earliest=-24h\n| stats latest(revision_id) as rev by domain_name\n| lookup nsx_policy_baseline.csv domain_name OUTPUT approved_revision\n| where rev!=approved_revision\n| table domain_name, rev, approved_revision",
              "m": "Populate baseline from last CAB-approved export. Run after each change window; alert on drift.",
              "z": "Table (drifted domains), Single value (drift count), Timeline.",
              "kfp": "**`terraform_ansible_pipeline_revision`** cadence — CI/CD bots legitimately elevate **`revision_id`** every merge while **`vmware:nsxt:audit`** shows non-human principals; suppress rows whose **`client_ip`** lands inside documented runner egress pools and retain **`splunk_audit`** references proving automation ownership.\n\n**`cmdb_baseline_lag_window`** — **`nsx_policy_baseline.csv`** sometimes refreshes weekly even though CAB approved same-day edits; **`approved_revision_id`** lags **`revision_id`** through no malicious intent—stamp **`last_approval_timestamp`** SLA guards and pause paging until the CSV webhook catches up.\n\n**`bulk_policy_import_migration`** waves — **`IMPORT_FAILED`** bursts during **`nsx_3x_to_4x_migration`** tooling recreate **`security-policies`** under **`Policy API`**; transient **`revision_id`** hikes misalign every **`policy_id`** until **`bulk_policy_import_migration`** completes—tie **`suppress_migration_epoch`** lookups to VMware maintenance banners.\n\n**`emergency_section_override`** — incident responders publish **`Emergency`** category edits straight through the UI; **`revision_id`** jumps audibly yet responders intend **`expected-emergency-bypass`** classification—route those **`POLICY_UPDATE`** rows through **`CHG`** retrofits rather than waking unrelated platform tiers.\n\n**`rule_order_renumber`** hygiene — consolidating duplicate **`Application`** stacks triggers **`sequence_number`** churn without altering **`services`** bindings—pair SPL with hashed **`rule_body_fingerprint`** extractions when only ordering mutates so **`rule_order_renumber`** does not masquerade as net-new exposure.\n\n**`test_environment_drift_isolated`** — shared Manager clusters sometimes host **`domain_name=test_*`** sandboxes beside production; isolate alert outputs with `where NOT match(domain_name,\"^test_\") OR match(domain_name,\"^test_.*$\")` split baselines so **`test_environment_drift_isolated`** never inherits production **`approved_revision_id`** rows—also mute **`category=Local Gateway`** lab traffic when expressly out-of-scope.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`, NSX policy export.\n• Ensure the following data sources are available: `vmware:nsx:policy_revision`, config snapshots.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPopulate baseline from last CAB-approved export. Run after each change window; alert on drift.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:policy_revision\" earliest=-24h\n| stats latest(revision_id) as rev by domain_name\n| lookup nsx_policy_baseline.csv domain_name OUTPUT approved_revision\n| where rev!=approved_revision\n| table domain_name, rev, approved_revision\n```\n\nUnderstanding this SPL\n\n**Micro-Segmentation Policy Drift** — Compares published DFW policy revision to approved baseline (lookup) to detect unauthorized rule changes between change windows.\n\nDocumented **Data sources**: `vmware:nsx:policy_revision`, config snapshots. **App/TA** (typical add-on context): `vmware_nsx_addon`, NSX policy export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:policy_revision. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:policy_revision\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by domain_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where rev!=approved_revision` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Micro-Segmentation Policy Drift**): table domain_name, rev, approved_revision\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (drifted domains), Single value (drift count), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "It checks whether the live list of east-west firewall rules still matches the approved master copy your teams signed off on, and it flags surprises when someone edits the live list outside the normal approval path.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.2.8",
              "n": "NSX Edge Gateway Health",
              "c": "critical",
              "f": "beginner",
              "v": "Service status for Tier-0/Tier-1 SR components (BGP, NAT, LB service) on Edge nodes — complements CPU metrics (UC-18.2.4).",
              "t": "`vmware_nsx_addon`",
              "d": "`vmware:nsx:edge_status`",
              "q": "index=vmware sourcetype=\"vmware:nsx:edge_status\" earliest=-4h\n| where overall_status!=\"UP\" OR bgp_status!=\"Established\"\n| stats latest(overall_status) as st, latest(bgp_status) as bgp by edge_node, lr_name\n| table edge_node, lr_name, st, bgp",
              "m": "Map status fields from NSX Manager API. Alert on any non-UP gateway service. Correlate with northbound ISP events.",
              "z": "Status grid (edge × LR), Table (down services), Timeline.",
              "kfp": "**policy_realization_lag** — Policy UI badges can lag HTTPS JSON for minutes while NSX Manager VIP members roll; Splunk shows DOWN labels that consoles later soften once realization catches up. Compare the same `{id}` in CLI service-router output and wait two poll cycles before treating as hard down.\n\n**planned_sr_failover_drill** — Controlled Tier-0 failover exercises intentionally trip `bgp_not_established_like` rows even though the WAN playbook demands a brief non-Established window. Gate alerts with `nsx_edge_change_window.csv` CRQ IDs and require operator acknowledgement fields in tickets.\n\n**api_partial_state_page** — REST workers that forget `cursor` pagination emit only the first page of SR service health; `nat_latest`/`lb_latest` look empty while SSH still shows healthy tables. Fix collectors to honor `result_count` and page until complete before notifying.\n\n**graceful_restart_bgp_hold** — BGP graceful restart can hold peers in softer states while data plane forwarding continues; `bgp_raw` may not read Established yet forwarding is fine per carrier GR policies. Cross-check NSX BGP detailed status and carrier maintenance bulletins.\n\n**single_uplink_lab_noise** — Engineering sandboxes single-home Edge VMs on purpose while testing firewalls; `gateway_health_not_green` stays lit for hours. Filter non-production `edge_cluster` tags out of paging macros.\n\n**edge_vmotion_blip** — vMotion of Edge VMs across ESXi hosts restarts datapath services; one poll interval shows `vpn_service_stressed` then clears. Join `vmware:vsphere:vcenter` tasks and suppress when `vc_ts` hugs `last_seen`.\n\n**stale_status_poll_after_vip_failover** — When NSX Manager VIP fails over, in-flight `/transport-nodes/{id}/status` polls occasionally return cached stale JSON from the previous leader; edge_status may contradict fresh gateway_health until the cursor resets. Validate against CLI before executive pages.\n\n**nat_rule_publish_shadow** — Large NAT publish jobs can make transient `nat_service_stressed` rows while DFW and NAT tables reprogram even though sessions forward correctly. Require two failing polls plus absence of matching audit success rows before paging NAT owners.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`.\n• Ensure the following data sources are available: `vmware:nsx:edge_status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap status fields from NSX Manager API. Alert on any non-UP gateway service. Correlate with northbound ISP events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:edge_status\" earliest=-4h\n| where overall_status!=\"UP\" OR bgp_status!=\"Established\"\n| stats latest(overall_status) as st, latest(bgp_status) as bgp by edge_node, lr_name\n| table edge_node, lr_name, st, bgp\n```\n\nUnderstanding this SPL\n\n**NSX Edge Gateway Health** — Service status for Tier-0/Tier-1 SR components (BGP, NAT, LB service) on Edge nodes — complements CPU metrics (UC-18.2.4).\n\nDocumented **Data sources**: `vmware:nsx:edge_status`. **App/TA** (typical add-on context): `vmware_nsx_addon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:edge_status. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:edge_status\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where overall_status!=\"UP\" OR bgp_status!=\"Established\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by edge_node, lr_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **NSX Edge Gateway Health**): table edge_node, lr_name, st, bgp\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (edge × LR), Table (down services), Timeline.",
              "script": "",
              "premium": "",
              "hw": "NSX Edge VM/BM",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We guard the software edge routers so North-South paths cannot pretend to be fine while BGP, NAT, load sharing, or VPN pieces quietly struggle before regular folks only notice dead apps.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.2.9",
              "n": "NSX-T Transport Node Overlay Path Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Validates GENEVE overlay between TNs (packet loss, MTU) beyond simple UP/DOWN (UC-18.2.5).",
              "t": "`vmware_nsx_addon`",
              "d": "`vmware:nsx:tn_diag`, traceflow results",
              "q": "index=vmware sourcetype=\"vmware:nsx:tn_diag\" earliest=-24h\n| where pkt_loss_pct>1 OR mtu_issue=1\n| stats max(pkt_loss_pct) as max_loss by src_tn, dst_tn\n| sort -max_loss\n| head 50",
              "m": "Ingest Traceflow or TN diagnostic jobs on schedule. Alert on loss >1% between any TN pair in same pool.",
              "z": "Heatmap (TN × TN loss), Table (worst pairs), Line chart (loss trend).",
              "kfp": "**`tor_maintenance_window`** — Planned leaf-switch upgrades deliberately perturb east-west forwarding across specific racks; Path Health probes traverse the impacted underlay hops and may show temporary loss or RTT inflation that mirrors outages while dataplane forwarding recovers post-cut. Anchor suppressions to **`CHG`** identifiers plus CMDB **`tor_maintenance_window=true`** tags instead of paging overlay teams alone when underlay orchestration owns the bridge. \n\n**`mtu_renegotiation_convergence`** — Upstream switches sometimes re-stage path-MTU during VXLAN/vPC or storm-control refits; **`mtu_path`** can read hundreds of bytes beneath **`mtu_used`** until DF probes finish their five-to-ten minute sweep. Treat these as transitional noise when NSX Manager UI already shows **`mtu_issue`** clearing across successive polls rather than steady fragmentation signatures. \n\n**`probe_cpu_saturation`** — Path Health originates from hypervisor dataplane contexts; when the originating TN spikes ESXi **`CPU`** readiness queues, probe scheduling skew inflates **`rtt_ms`** despite calm intermediate hops—confirm host **`CPU`** contention (`res cpu`) before accusing WAN optics. \n\n**`lag_rebalance_window`** — Active-active NIC teams rebalance hashing briefly after uplink flap remediation or firmware pushes; asymmetric hashing yields uneven **`pkt_loss_pct`** samples until hashing tables steady—suppress against **`CHG`** stamps referencing **`LAG`**/`team` bridges and corroborate with switch **`LAG`** counters rather than Splunk-only escalation. \n\n**`multi_dc_stretched_latency_variance`** — Geographically stretched clusters intentionally absorb tens of milliseconds RTT during peak replication windows; **`rtt_ms`** variance tracks geography more than pathology—tie thresholds to **`metro_pair`** tags instead of intra-rack goals. \n\n**`tep_pool_exhaustion_onboarding`** — Massive TN onboarding spikes allocate **`src_tep_ip`** leases quickly yet probe matrices populate slowly; dashboards briefly show **`transport_node_overlay_context`** gaps resembling silent degradation until **`probe_outcome`** rows hydrate—monitor pool **`percent_used`** separately before paging. \n\n**`probe_interval_sampling_artefact`** — Operators sometimes tighten Path Health schedules (**10** seconds) on VIP clusters while defaults remain **60** seconds elsewhere; **`pkt_loss_pct`** spikes reflect Nyquist-style sampling mismatches rather than asymmetric fabrics—normalize comparisons using **`probe_interval_sec`** tags when dual cadences coexist. \n\n**`drs_host_evacuation_oscillation`** — VMware **`DRS`** drains relocate VMs aggressively during contention remediation; transporting workloads reshuffles **`src_tep_ip`** anchors rapidly so **`worst_pkt_loss_pct`** oscillates minute-to-minute despite benign median paths—defer paging until evacuation banners clear.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`.\n• Ensure the following data sources are available: `vmware:nsx:tn_diag`, traceflow results.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Traceflow or TN diagnostic jobs on schedule. Alert on loss >1% between any TN pair in same pool.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:tn_diag\" earliest=-24h\n| where pkt_loss_pct>1 OR mtu_issue=1\n| stats max(pkt_loss_pct) as max_loss by src_tn, dst_tn\n| sort -max_loss\n| head 50\n```\n\nUnderstanding this SPL\n\n**NSX-T Transport Node Overlay Path Health** — Validates GENEVE overlay between TNs (packet loss, MTU) beyond simple UP/DOWN (UC-18.2.5).\n\nDocumented **Data sources**: `vmware:nsx:tn_diag`, traceflow results. **App/TA** (typical add-on context): `vmware_nsx_addon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:tn_diag. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:tn_diag\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where pkt_loss_pct>1 OR mtu_issue=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_tn, dst_tn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (TN × TN loss), Table (worst pairs), Line chart (loss trend).",
              "script": "",
              "premium": "",
              "hw": "ESXi KVM transport nodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We measure how cleanly the invisible overlay highways carry packets between server hosts—not merely whether tunnels answer yes or no—so shaky paths get repaired before everyday applications feel the drag.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.2.10",
              "n": "Load Balancer Pool Health",
              "c": "high",
              "f": "beginner",
              "v": "Monitors NSX Advanced Load Balancer / LB pool member up/down and health check failures for published apps.",
              "t": "`vmware_nsx_addon`, Avi if integrated",
              "d": "`vmware:nsx:lb_pool`",
              "q": "index=vmware sourcetype=\"vmware:nsx:lb_pool\" earliest=-24h\n| where member_status!=\"UP\" OR health_check_failures>0\n| stats count by pool_name, member_ip, member_status\n| sort -count",
              "m": "Map Avi or NSX LB API fields. Alert when active members < minimum for pool.",
              "z": "Table (unhealthy pools), Bar chart (failures by pool), Single value (pools in critical state).",
              "kfp": "**pool_member_disabled_window** — Operators deliberately pin **`DISABLED`** members during **`CHG`** rehearsals—Splunk surfaces benign **`member_not_ready`** rows lacking outage semantics until **`CHG`** tokens referencing **`graceful_pool_drain`** approvals accompany dashboards.\n\n**monitor_interval_jitter** — Automation adjusts **`LBActiveMonitor`** cadences (**5–60s**) across tiers—**`monitor_interval_latest`** hops mimic faults unless tagging pipelines preserve **`baseline_monitor_sec`** lookups beside **`pool_fault`** badges.\n\n**edge_cold_migration_window** — Edge VMs undergoing scheduled **`cold`** relocation pause dataplane ACKs briefly—**`edge_lb.log`** captures **`edge_cold_migration_window`** hints without sustained **`member_status`** regressions—suppress whenever **`CHG`** cites **`Edge`** **`TN`** migrations explicitly.\n\n**backend_warmup_after_deploy** — Rolling deployments intentionally **`DRAINING`** siblings—correlate **`backend_warmup_after_deploy`** arcs with **`pipeline_job_id`** metadata before accusing **`LB`** controllers alone.\n\n**haproxy_ssl_reneg_burst** — Burst **`SSL`** renegotiations spike **`edge_lb.log`** **`haproxy_ssl_reneg_burst`** lanes during cipher-suite rehearsals yet **`active_members`** remain saturated—pair syslog excerpts with **`tls_profile`** **`CHG`** bundles.\n\n**graceful_pool_drain** — Game-day **`graceful_pool_drain`** exercises mimic outages—tie pager macros to **`CHG`** rehearsal identifiers referencing **`blue`**/**`green`** lanes.\n\n**vs_vip_ha_failover_swap** — Controlled **`vs_vip_ha_failover_swap`** drills relocate **`VIP`** anchors across Edge clusters—suppress **`vs_operational_bad`** bursts annotated with tabletop **`CRQ`** identifiers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`, Avi if integrated.\n• Ensure the following data sources are available: `vmware:nsx:lb_pool`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap Avi or NSX LB API fields. Alert when active members < minimum for pool.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:lb_pool\" earliest=-24h\n| where member_status!=\"UP\" OR health_check_failures>0\n| stats count by pool_name, member_ip, member_status\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Load Balancer Pool Health** — Monitors NSX Advanced Load Balancer / LB pool member up/down and health check failures for published apps.\n\nDocumented **Data sources**: `vmware:nsx:lb_pool`. **App/TA** (typical add-on context): `vmware_nsx_addon`, Avi if integrated. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:lb_pool. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:lb_pool\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where member_status!=\"UP\" OR health_check_failures>0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by pool_name, member_ip, member_status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unhealthy pools), Bar chart (failures by pool), Single value (pools in critical state).",
              "script": "",
              "premium": "",
              "hw": "NSX ALB, LB service on Edge",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We monitor the traffic-sharing pools behind your published addresses—servers listening, probes honest, drains planned—so customer-facing apps stay reachable before phones buzz.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.2.11",
              "n": "NAT Rule Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks NAT session and port allocation per rule — prevents exhaustion on busy Edge gateways.",
              "t": "`vmware_nsx_addon`",
              "d": "`vmware:nsx:nat_stats`",
              "q": "index=vmware sourcetype=\"vmware:nsx:nat_stats\" earliest=-1h\n| eval used_pct=round(100*active_sessions/session_limit,1)\n| where used_pct>85\n| table edge_node, rule_id, active_sessions, session_limit, used_pct\n| sort -used_pct",
              "m": "Session limits depend on Edge form factor; load from capacity sheet via lookup. Alert at 85%.",
              "z": "Bar chart (NAT % by rule), Table (top consumers), Gauge.",
              "kfp": "**Quarter-close SNAT bursts** — Month-end ledger jobs legitimately fan thousands of simultaneous outbound sockets through shared Tier-1 SNAT pools so port_pool_used_percent crests above eighty-five percent for tens of minutes while applications remain healthy—suppress paging against fiscal-window lookups tied to payroll batches rather than treating the crest as covert tunneling alone.\n**Wave-scheduled NAT publishes** — Automation pushes stacks of DNAT entries during staged blue-green weekends so session_count_active ramps rule-by-rule while operators rehearse failover—Splunk ranks climb benignly once ServiceNow CHG payloads cite the rehearsal harness and EDGE-NAT-DROP counters stay quiet across the same interval.\n**Cold-power recycle tranquility** — VMware admins park clusters during firmware weekends; powering VMs back online collapses stale NAT bindings so Splunk briefly prints zero sessions even though pools were never exhausted—cross-check Edge CLI get nat translations before accusing Policy drift during those recovery minutes alone.\n**Logging amplification drills** — Teams temporarily elevate NAT logging plus firewall_match verbosity for packet captures during audits; syslog EPS climbs without implying NAT-session-table pressure—tie suppressions to incident bridge numbers or GRC ticket IDs before scaling indexers reactively.\n**DR republish jitter** — Warm-site failover exercises rehydrate Tier-0 SR and Tier-1 NAT USER sections from automation; per-rule counters wobble while Edge nodes re-anchor SNAT anchors even though WAN paths remain deliberate—extend smoothing windows using runbook IDs until translation-statistics summaries steady across every node.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1090.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`.\n• Ensure the following data sources are available: `vmware:nsx:nat_stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSession limits depend on Edge form factor; load from capacity sheet via lookup. Alert at 85%.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:nat_stats\" earliest=-1h\n| eval used_pct=round(100*active_sessions/session_limit,1)\n| where used_pct>85\n| table edge_node, rule_id, active_sessions, session_limit, used_pct\n| sort -used_pct\n```\n\nUnderstanding this SPL\n\n**NAT Rule Utilization** — Tracks NAT session and port allocation per rule — prevents exhaustion on busy Edge gateways.\n\nDocumented **Data sources**: `vmware:nsx:nat_stats`. **App/TA** (typical add-on context): `vmware_nsx_addon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:nat_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:nat_stats\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct>85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NAT Rule Utilization**): table edge_node, rule_id, active_sessions, session_limit, used_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (NAT % by rule), Table (top consumers), Gauge.",
              "script": "",
              "premium": "",
              "hw": "NSX Edge NAT",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We track how many borrowed public-side port slots each NAT rule keeps busy on Edge gateways—like counting shared return addresses—so crowded rules cannot quietly fill the locker before payroll or checkout APIs stall without a clear warning.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.2.12",
              "n": "T0/T1 Gateway Failover Events",
              "c": "critical",
              "f": "beginner",
              "v": "Captures active/standby transitions for Tier-0/Tier-1 logical routers for incident correlation and HA validation.",
              "t": "`vmware_nsx_addon`",
              "d": "`vmware:nsx:lr_events`",
              "q": "index=vmware sourcetype=\"vmware:nsx:lr_events\" earliest=-7d\n| search failover OR \"HA switch\" OR \"role change\"\n| stats latest(_time) as last_seen, count by lr_name, prev_role, new_role, edge_node\n| sort -last_seen",
              "m": "Normalize syslog/API messages for HA events. Alert on any unplanned failover. Pager for T0.",
              "z": "Timeline (failovers), Table (events), Single value (failovers / month).",
              "kfp": "**ha_preempt_rehearsal_churn** — Quarterly **active-standby** drills or **preemptive** reclaim exercises flip **ACTIVE/STANDBY** roles on command—Splunk rows mimic **unplanned** events though **CRQ** tickets and **`nsxt_planned_failover_windows`** **lookup** entries already cover the bridge—**suppress** where **`change_record`** matches **`planned=true`** via **`inputlookup`** filter before paging **Tier-0** SMEs.\n\n**edge_evacuation_role_shuffle** — **vCenter** **maintenance mode** on an **Edge VM** host forces **service-router** ownership to the partner **node**—**`vmware:vmware:vsphere:vcenter`** tasks show **DRS** **evacuation** while **`maintenance_hint`** may stay **0** if operators omit syslog tags—extend **`match(_raw)`** with **`evacuation`** tokens or join **CMDB** **`host_patch_calendar.csv`** **time-bound exception** rows.\n\n**stateful_conntrack_resync_noise** — **NAT** / **VPN** **state bundles** take **seconds** longer than stateless **SR** handoffs—**`stateful_sync_status`** may lag **READY** while **`observed_duration_sec`** crosses naive **6 s** budgets—widen **`sla_breached`** thresholds for gateways that advertise **stateful services** or add **`where stateful_service_profile!=true`**.\n\n**inter_edge_heartbeat_blip** — Brief **TEP** or **dedicated HA link** micro-outages trigger **standby** promotion then rapid **reconciliation**—**`wall_clock_span_sec`** stays sub-second though **`raw_evt_count`** spikes—use **`streamstats`** debouncing (**2 consecutive toggles**) via **macro** before **alert** firing.\n\n**rest_duplicate_snapshot_rows** — Collectors **double-POST** during code deploys yield duplicate **`evt_end`** rows—**`dedup`** on **`lr_id`**, **`from_role`**, **`to_role`**, **`evt_end`** before SLA math to avoid **false** **`sla_breached`**.\n\n**stretched_lr_site_preference** — **Multi-site** **Tier-0** designs intentionally move **ACTIVE** roles during **WAN** exercises—correlate with **campus** **`operations_calendar`** **lookup** **suppress** windows before treating as **incident**.\n\n**snmp_trap_replay_burst** — **SNMP** collectors occasionally **replay** historical **HA** traps after restarts—**`failover_duration_ms`** looks ancient versus **`_time`**—**filter `where _time`** aligns with **trap** **timestamp** fields or **drop** replays older than **1 hour** via **`where`** clause.\n\n**admin_cli_switchover** — Operators run **supported CLI/API** **switchover** for proactive tests—syslog carries **user** attributions while **`maintenance_hint`** regex misses—add **`match(lower(reason),\"cli|api.switch\")`** or maintain **`nsxt_known_operator_actions.csv`** allow patterns.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`.\n• Ensure the following data sources are available: `vmware:nsx:lr_events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize syslog/API messages for HA events. Alert on any unplanned failover. Pager for T0.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:lr_events\" earliest=-7d\n| search failover OR \"HA switch\" OR \"role change\"\n| stats latest(_time) as last_seen, count by lr_name, prev_role, new_role, edge_node\n| sort -last_seen\n```\n\nUnderstanding this SPL\n\n**T0/T1 Gateway Failover Events** — Captures active/standby transitions for Tier-0/Tier-1 logical routers for incident correlation and HA validation.\n\nDocumented **Data sources**: `vmware:nsx:lr_events`. **App/TA** (typical add-on context): `vmware_nsx_addon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:lr_events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:lr_events\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by lr_name, prev_role, new_role, edge_node** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (failovers), Table (events), Single value (failovers / month).",
              "script": "",
              "premium": "",
              "hw": "NSX-T LR HA",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We follow when the spare edge router must suddenly take the lead for a shared gateway job, how long the swap lasted, and whether the wider network felt it—so surprise handoffs do not hide in ordinary background chatter.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.2.13",
              "n": "NSX Manager Cluster Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Cluster quorum, Corfu/RAFT health, and API reachability for NSX Manager appliance cluster.",
              "t": "`vmware_nsx_addon`",
              "d": "`vmware:nsx:manager_cluster`",
              "q": "index=vmware sourcetype=\"vmware:nsx:manager_cluster\" earliest=-24h\n| where cluster_status!=\"STABLE\" OR offline_nodes>0\n| stats latest(cluster_status) as st latest(offline_nodes) as off by cluster_id\n| table cluster_id, st, off",
              "m": "Map NSX version-specific cluster health API. Alert immediately if not STABLE or any node offline.",
              "z": "Status grid (manager nodes), Single value (cluster OK), Timeline.",
              "kfp": "**Rolling NSX 4.x upgrade quiescence** — VMware maintenance deliberately takes one Unified Appliance offline at a time; DEGRADED labels, transient ARS lag bumps, and VIP leadership handoffs happen with two healthy peers still serving—suppress on ServiceNow CHG identifiers so on-call stays focused on unmanaged drift. **DRS evacuation of Manager VMs** — vSphere DRS migrations move the NSX Manager VM between ESXi hosts during host maintenance; syslog may burp Reverse Proxy resets while Corfu replication quickly converges—anchor suppression on Splunk Add-on for vCenter `vmotion` event entries before flagging management-plane failure. **vCenter services restart windows** — recycling vCenter inventory services can make NSX Manager temporarily miss VM metadata even though Corfu quorum is intact; Splunk may show surface-level inventory blanks while CLI get cluster status stays green—validate before opening sev bridges. **Corfu compaction intervals** — scheduled Corfu compaction plus ARS batching occasionally stretches replication-lag counters for tens of seconds—treat as benign when lag snaps back below policy thresholds without repeated slope growth. **Deliberate quorum restoration drills** — administrators run set cluster restore after validated backup restores to re-seed membership; Policy /cluster/restore/status reads BUSY while ARS catches up—expect ASR lag lift until Async Replicator Service acknowledgments stabilize, not instant green across every panel.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1499.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`.\n• Ensure the following data sources are available: `vmware:nsx:manager_cluster`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap NSX version-specific cluster health API. Alert immediately if not STABLE or any node offline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:manager_cluster\" earliest=-24h\n| where cluster_status!=\"STABLE\" OR offline_nodes>0\n| stats latest(cluster_status) as st latest(offline_nodes) as off by cluster_id\n| table cluster_id, st, off\n```\n\nUnderstanding this SPL\n\n**NSX Manager Cluster Health** — Cluster quorum, Corfu/RAFT health, and API reachability for NSX Manager appliance cluster.\n\nDocumented **Data sources**: `vmware:nsx:manager_cluster`. **App/TA** (typical add-on context): `vmware_nsx_addon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:manager_cluster. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:manager_cluster\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where cluster_status!=\"STABLE\" OR offline_nodes>0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **NSX Manager Cluster Health**): table cluster_id, st, off\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (manager nodes), Single value (cluster OK), Timeline.",
              "script": "",
              "premium": "",
              "hw": "NSX Manager cluster (3-node)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We mind the trio folded into each NSX Manager box—intent ledger, policy planner, fast finder—and whether their shared journal stays mirrored plus who fronts the VIP. When those minds drift apart or the journal drags, we raise your crew before automation loses steering.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.2.14",
              "n": "NSX Intelligence Top Flows and Anomalous East-West Volume",
              "c": "high",
              "f": "intermediate",
              "v": "NSX Intelligence summarizes flow metadata across the overlay. Tracking dominant flows and sudden volume shifts highlights misconfigured services, lateral movement, or noisy neighbors before micro-segmentation rules are tuned, keeping data center east-west paths predictable for latency-sensitive tiers.",
              "t": "`vmware_nsx_addon`, NSX Intelligence HEC/syslog export",
              "d": "`index=vmware` `sourcetype=\"vmware:nsx:intelligence_flow\"` with fields `src_vm`, `dst_vm`, `service_id`, `flow_bytes`, `domain_name`",
              "q": "index=vmware sourcetype=\"vmware:nsx:intelligence_flow\" earliest=-24h\n| stats sum(flow_bytes) as bytes dc(dst_vm) as dst_count by src_vm, service_id, domain_name\n| eventstats perc95(bytes) as p95\n| where bytes > p95*3\n| eval mb=round(bytes/1048576,2)\n| sort -bytes\n| head 50\n| table domain_name, src_vm, service_id, dst_count, mb",
              "m": "(1) Enable Intelligence flow export to Splunk HEC with CIM-friendly field names. (2) Baseline per domain; exclude backup VLANs via lookup. (3) Correlate spikes with DFW deny events (UC-18.2.1).",
              "z": "Table (top talkers), Sankey (src to service), Timechart (bytes per domain).",
              "kfp": "**Wave-scheduled workload onboarding** — Parallel cutovers toward freshly segmented tiers deliberately inflate Intelligence rankings because health checks and staged database copies open short-lived sockets between cohorts sharing the same dynamic group tag; classify as planned variance once change tickets cite the rollout window and Intelligence classification alternates North-South then East-West inside that sanctioned interval rather than drifting slowly across unrelated business days. **Quarter-end ledger and reporting surges** — Finance and procurement jobs open large SQL and file-copy conversations between VMs that rarely talk; anomaly scores climb even though malware is absent—anchor suppressions on fiscal calendar fixtures and the business owner of the batch stream. **Backup-window replication dominance** — Dedicated backup proxies stream east-west toward media servers and object stores during nightly windows; byte-weighted charts crown those pairs while anomaly scores stay low—correlate with backup catalog schedules before paging micro-segmentation teams. **Recommendation queue recycling** — Published NSX Intelligence recommendations that re-enter the `/policy/api/v1/intelligence/recommendations` backlog after Distributed Firewall edits or group renames show queue-depth churn unrelated to stealthy traffic—validate against recent publish actions in Policy before treating depth as compromise. **NAPP-Druid segment maintenance** — During NSX Application Platform upgrades, Intelligence services pause or rebuild Druid segments; flow exports may stutter while NAPP pods recycle, temporarily suppressing anomaly_score math or fragmenting totals—inspect vmware:nsxt:napp:status plus appliance health surfaces before declaring data exfiltration.",
              "refs": "[CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1071",
                "T1568"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`, NSX Intelligence HEC/syslog export.\n• Ensure the following data sources are available: `index=vmware` `sourcetype=\"vmware:nsx:intelligence_flow\"` with fields `src_vm`, `dst_vm`, `service_id`, `flow_bytes`, `domain_name`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable Intelligence flow export to Splunk HEC with CIM-friendly field names. (2) Baseline per domain; exclude backup VLANs via lookup. (3) Correlate spikes with DFW deny events (UC-18.2.1).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:intelligence_flow\" earliest=-24h\n| stats sum(flow_bytes) as bytes dc(dst_vm) as dst_count by src_vm, service_id, domain_name\n| eventstats perc95(bytes) as p95\n| where bytes > p95*3\n| eval mb=round(bytes/1048576,2)\n| sort -bytes\n| head 50\n| table domain_name, src_vm, service_id, dst_count, mb\n```\n\nUnderstanding this SPL\n\n**NSX Intelligence Top Flows and Anomalous East-West Volume** — NSX Intelligence summarizes flow metadata across the overlay. Tracking dominant flows and sudden volume shifts highlights misconfigured services, lateral movement, or noisy neighbors before micro-segmentation rules are tuned, keeping data center east-west paths predictable for latency-sensitive tiers.\n\nDocumented **Data sources**: `index=vmware` `sourcetype=\"vmware:nsx:intelligence_flow\"` with fields `src_vm`, `dst_vm`, `service_id`, `flow_bytes`, `domain_name`. **App/TA** (typical add-on context): `vmware_nsx_addon`, NSX Intelligence HEC/syslog export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:intelligence_flow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:intelligence_flow\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_vm, service_id, domain_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• Filters the current rows with `where bytes > p95*3` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **mb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Pipeline stage (see **NSX Intelligence Top Flows and Anomalous East-West Volume**): table domain_name, src_vm, service_id, dst_count, mb\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NSX Intelligence Top Flows and Anomalous East-West Volume** — NSX Intelligence summarizes flow metadata across the overlay. Tracking dominant flows and sudden volume shifts highlights misconfigured services, lateral movement, or noisy neighbors before micro-segmentation rules are tuned, keeping data center east-west paths predictable for latency-sensitive tiers.\n\nDocumented **Data sources**: `index=vmware` `sourcetype=\"vmware:nsx:intelligence_flow\"` with fields `src_vm`, `dst_vm`, `service_id`, `flow_bytes`, `domain_name`. **App/TA** (typical add-on context): `vmware_nsx_addon`, NSX Intelligence HEC/syslog export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top talkers), Sankey (src to service), Timechart (bytes per domain).",
              "script": "",
              "premium": "",
              "hw": "NSX-T Manager, NSX Intelligence appliance",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch sideways chatter inside private-cloud racks—not only outward Internet exits—and compare each week's habits so unusual east-west bursts tied to logged-in people ring alarms early. Teams tighten partitions sooner instead of discovering stray workstation hops months later.",
              "mtype": [
                "Performance",
                "Security"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "nsx",
                "syslog",
                "vmware"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
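Supplementary note for the NSX Intelligence top-flows use case above: its implementation steps mention baselining per domain and excluding backup VLANs via lookup. The sketch below is one minimal way to do that, assuming two hypothetical lookups: `nsx_backup_vlan_exclusions.csv` (keyed on `src_vm`, any non-null `excluded` column marks an excluded VM) and `nsx_ew_baseline.csv` (a per-`domain_name` `p95_baseline` written by a scheduled search with `outputlookup`). Rename both to match your environment.

```spl
index=vmware sourcetype="vmware:nsx:intelligence_flow" earliest=-24h
| lookup nsx_backup_vlan_exclusions.csv src_vm OUTPUT excluded
| where isnull(excluded)
| stats sum(flow_bytes) as bytes by src_vm, service_id, domain_name
| lookup nsx_ew_baseline.csv domain_name OUTPUT p95_baseline
| where bytes > p95_baseline*3
```

Compared with the inline `eventstats perc95(bytes)` approach, a saved weekly baseline keeps the threshold stable when a single noisy day skews the current window.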
            {
              "i": "18.2.15",
              "n": "Distributed Firewall Rule Hit Counts by Application Tier",
              "c": "medium",
              "f": "intermediate",
              "v": "Grouping DFW hits by application tier proves which segmentation boundaries are exercised in production and identifies stale allow rules that see no traffic. This reduces attack surface during audits and prevents accidental removal of rules that still protect critical tiers.",
              "t": "`vmware_nsx_addon`",
              "d": "`index=vmware` `sourcetype=\"vmware:nsx:dfw\"` with fields `rule_id`, `rule_name`, `action`, `src`, `tier`",
              "q": "index=vmware sourcetype=\"vmware:nsx:dfw\" earliest=-7d\n| lookup nsx_vm_tier.csv vm_name AS src OUTPUT tier AS src_tier\n| stats count by src_tier, rule_id, rule_name, action\n| eventstats sum(count) as tier_total by src_tier, action\n| eval pct=round(100*count/tier_total,2)\n| sort src_tier, -count\n| table src_tier, rule_id, rule_name, action, count, pct",
              "m": "(1) Maintain `nsx_vm_tier.csv` mapping VM names to tier labels. (2) Refresh weekly from CMDB. (3) Alert when production tier shows zero hits on mandatory allow rules for seven days.",
              "z": "Heatmap (tier × rule hits), Bar chart (hits by tier), Table (zero-hit rules).",
              "kfp": "**dfw_stat_transport_skew** — Per-transport-node vsip slices arrive minutes apart during vSAN latency or NSX appliance CPU spikes; `hit_eod` plateaus on one ESXi host while another still climbs, so tier totals look lopsided until pagination completes. Require `dc(transport_node_id)` near expected cluster breadth before paging segmentation SMEs; align suppressions to `vmware:vsphere:vcenter` storage alarms—not application bridges alone.\n\n**tier_lookup_hostname_drift** — VMs renamed in vCenter `guest.net` while `nsx_vm_tier.csv` still lists retired `vm_key` strings; `tier_tag` falls back to `unmapped` and crowds the roll-up even though DFW counters are true. Refresh the CSV from CMDB plus nightly `vmware:vsphere:vcenter` inventory jobs, and keep a `prev_hostname_alias` column for ninety-day tails.\n\n**policy_publish_counter_refresh** — `vmware:nsx:audit` shows `security-policies` PATCH bursts where VMware resets statistics pointers; `day_delta` dives but `vmware:nsx:policy` realization timestamps prove benign. Join `rule_ref` to audit `request_uri` and throttle twelve hours when `change_ticket` matches `dfw_policy_publish`.\n\n**applied_to_microsegment_idle** — `ALLOW` rows scoped to `applied_to` groups covering dormant lab segments naturally read zero `hit_eod`; that is desired quiet, not proof the rule is orphaned in production. Cross-check whether `tier_tag` maps to `prod` classes inside `nsx_vm_tier.csv` before opening retirement tickets.\n\n**cross_tier_shared_service_blur** — Jump `ALLOW` rows servicing DNS, NTP, or patch mirrors legitimately hit multiple workload tiers from one rule; `tier_tag` assignment from a single `vm_key` under-represents the fan-out. Model shared rows with `service_tier_multi` tags or split dashboards by `applied_to_doc` scope instead of blaming noisy `tier_share_pct` math.\n\n**logging_profile_off_quiet** — Operators disable `logging` on hot `ALLOW` rows to save EPS while `vmware:nsx:dfw_stats` REST counters still tick; Splunk shows traffic without companion `vmware:nsx:dfw_rules` `logged=true` hints—never downgrade retention solely because flow logs hushed.\n\n**emergency_section_baseline_hits** — `ETHERNET` / `EMERGENCY` category rows accumulate hits from hypervisor plumbing and should not be graded like `APPLICATION` micro-segment rows. Filter `section_name` with `category_rank<=2` into a sibling panel so tier KPIs stay comparable week over week.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`.\n• Ensure the following data sources are available: `index=vmware` `sourcetype=\"vmware:nsx:dfw\"` with fields `rule_id`, `rule_name`, `action`, `src`, `tier`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `nsx_vm_tier.csv` mapping VM names to tier labels. (2) Refresh weekly from CMDB. (3) Alert when production tier shows zero hits on mandatory allow rules for seven days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:dfw\" earliest=-7d\n| lookup nsx_vm_tier.csv vm_name AS src OUTPUT tier AS src_tier\n| stats count by src_tier, rule_id, rule_name, action\n| eventstats sum(count) as tier_total by src_tier, action\n| eval pct=round(100*count/tier_total,2)\n| sort src_tier, -count\n| table src_tier, rule_id, rule_name, action, count, pct\n```\n\nUnderstanding this SPL\n\n**Distributed Firewall Rule Hit Counts by Application Tier** — Grouping DFW hits by application tier proves which segmentation boundaries are exercised in production and identifies stale allow rules that see no traffic. This reduces attack surface during audits and prevents accidental removal of rules that still protect critical tiers.\n\nDocumented **Data sources**: `index=vmware` `sourcetype=\"vmware:nsx:dfw\"` with fields `rule_id`, `rule_name`, `action`, `src`, `tier`. **App/TA** (typical add-on context): `vmware_nsx_addon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:dfw. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:dfw\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by src_tier, rule_id, rule_name, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by src_tier, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Distributed Firewall Rule Hit Counts by Application Tier**): table src_tier, rule_id, rule_name, action, count, pct\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (tier × rule hits), Bar chart (hits by tier), Table (zero-hit rules).",
              "script": "",
              "premium": "",
              "hw": "NSX-T Data Center",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We tally kitchen passes still swiped at the bakery, grill, and cold vault every week—when one chit stays silent while neighbors buzz, an outdated permit can retire without starving a real dinner rush.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
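Supplementary note for UC-18.2.15: the alert condition "zero hits on mandatory production allow rules for seven days" can be expressed directly in SPL. A minimal sketch, assuming a hypothetical `nsx_mandatory_allow_rules.csv` lookup with `rule_id`, `rule_name`, and `tier` columns, and an `action` value of `ALLOW` in your DFW field extraction:

```spl
| inputlookup nsx_mandatory_allow_rules.csv
| fields rule_id, rule_name, tier
| join type=left rule_id
    [ search index=vmware sourcetype="vmware:nsx:dfw" earliest=-7d action=ALLOW
      | stats count as hits by rule_id ]
| fillnull value=0 hits
| where hits=0 AND tier="prod"
```

Schedule it daily over the seven-day window and route matches to the segmentation owners for review rather than retiring rules automatically; the known false positives above explain why a quiet rule is not always an orphaned one.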
            {
              "i": "18.2.16",
              "n": "Edge Cluster BFD and Uplink Session Health",
              "c": "critical",
              "f": "advanced",
              "v": "Tier-0 edge clusters depend on BFD and stable uplinks for fast failure detection. Degraded BFD or partial uplink loss causes asymmetric north-south paths and intermittent application reachability. Proactive monitoring supports failover drills and prevents surprise brownouts during ISP maintenance.",
              "t": "`vmware_nsx_addon`",
              "d": "`index=vmware` `sourcetype=\"vmware:nsx:edge_bfd\"` with fields `edge_node`, `cluster`, `peer_ip`, `bfd_state`, `uplink_name`, `message`",
              "q": "index=vmware sourcetype=\"vmware:nsx:edge_bfd\" earliest=-4h\n| where bfd_state!=\"UP\" OR match(lower(coalesce(message,\"\")),\"(?i)timeout|down\")\n| stats latest(bfd_state) as bfd latest(_time) as t by edge_node, cluster, peer_ip, uplink_name\n| sort -t\n| table edge_node, cluster, uplink_name, peer_ip, bfd, t",
              "m": "(1) Ingest BFD telemetry from NSX Manager API or Edge syslog. (2) Join with `vmware:nsx:edge_status` (UC-18.2.8) for BGP state. (3) Page on any BFD not UP on production T0 uplinks.",
              "z": "Status grid (edge × uplink), Timeline (BFD events), Table (non-UP sessions).",
              "kfp": "**bfd_hold_gr_convergence** — **BGP** **graceful-restart** helpers deliberately hold **BFD** sessions in **INIT** while **RIB** tables rehydrate after **control-plane** hits; **`bfd_state_latest`** reads ugly even though **forwarding** planes stayed **up** on the **surviving** **edge**. Anchor suppressions to **CRQ** rows that cite **GR** maintenance and cross-check **NSX** UI **BFD** timers against **carrier** change tickets before accusing **hardware**.\n\n**lacp_sync_renegotiation_noise** — **LACP** **PDU** timing during **ToR** upgrades raises **`uplink_not_healthy`** flaps for **seconds** while **ASIC** buffers reprogram; **Splunk** sees **DOWN**/**DEGRADED** labels without sustained **packet** **loss** once **bundles** settle. Require **two** consecutive **polls** failing across **both** **edge** nodes in the **cluster** before paging **WAN** crews, and tie noise to **`cisco:aci:syslog`** or **switch** **maintenance** bridges when **4022** feeds exist.\n\n**inter_site_controller_degraded_poll** — **`GET /api/v1/transport-nodes/{id}/inter-site/status`** occasionally returns **stale** JSON when **NSX** **Manager** **VIP** members roll during **patching**; **`edge_status`** sourcetype duplicates **half**-filled **uplink** maps until the **cursor** catches up. Compare **Manager** **CLI** **`get bfd neighbors`** output and **policy** **realization** badges; suppress on **Manager** **cluster** health alarms.\n\n**edge_service_restart_blip** — **VMware** **automatic** **service** restarts on **Edge** **VMs** after **kernel** **upgrades** restart **datapath** **proxies**; **BFD** sessions re-form within **one** **poll** interval yet **`bfd_neighbor_not_up`** rows appear briefly. Join **`vmware:vsphere:vcenter`** **patch** baselines and **`vmware:nsx:audit`** **edge** **upgrade** operations before opening **Sev-1** bridges.\n\n**api_cursor_partial_page** — **REST** workers paging **BFD** neighbors sometimes emit **only** the **first** **twenty** peers; **`peer_latest`** looks **null** for **lower-priority** **VRF** legs even though **CLI** lists them—fix collectors to follow **`result_count`**/**`cursor`** fields rather than **alerting** on **empty** **facets**.\n\n**planned_bgp_neighbor_shut** — **Change** windows that **administratively** shut **underlay** **interfaces** to **carriers** during **migrations** drive **BFD** **DOWN** intentionally—**Splunk** is correct technically but wrong **operationally** without **`nsx_planned_edge_window.csv`** lookups keyed by **`tier0_key`**.\n\n**single_attached_uplink_test_mode** — **Labs** leave **one** **PNIC** **active** by design while **burning** **in** **firewalls**; **`uplink_fault`** stays **true** for **hours**. **Environment** **tags** on **`index`** or **`edge_cluster`** metadata should drop **non-production** **clusters** from **paging** **macros** so **engineering** time funds **true** **dual**-**hommed** **outages** only.",
              "refs": "[CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`.\n• Ensure the following data sources are available: `index=vmware` `sourcetype=\"vmware:nsx:edge_bfd\"` with fields `edge_node`, `cluster`, `peer_ip`, `bfd_state`, `uplink_name`, `message`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest BFD telemetry from NSX Manager API or Edge syslog. (2) Join with `vmware:nsx:edge_status` (UC-18.2.8) for BGP state. (3) Page on any BFD not UP on production T0 uplinks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:edge_bfd\" earliest=-4h\n| where bfd_state!=\"UP\" OR match(lower(coalesce(message,\"\")),\"(?i)timeout|down\")\n| stats latest(bfd_state) as bfd latest(_time) as t by edge_node, cluster, peer_ip, uplink_name\n| sort -t\n| table edge_node, cluster, uplink_name, peer_ip, bfd, t\n```\n\nUnderstanding this SPL\n\n**Edge Cluster BFD and Uplink Session Health** — Tier-0 edge clusters depend on BFD and stable uplinks for fast failure detection. Degraded BFD or partial uplink loss causes asymmetric north-south paths and intermittent application reachability. Proactive monitoring supports failover drills and prevents surprise brownouts during ISP maintenance.\n\nDocumented **Data sources**: `index=vmware` `sourcetype=\"vmware:nsx:edge_bfd\"` with fields `edge_node`, `cluster`, `peer_ip`, `bfd_state`, `uplink_name`, `message`. **App/TA** (typical add-on context): `vmware_nsx_addon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:edge_bfd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:edge_bfd\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where bfd_state!=\"UP\" OR match(lower(coalesce(message,\"\")),\"(?i)timeout|down\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by edge_node, cluster, peer_ip, uplink_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Edge Cluster BFD and Uplink Session Health**): table edge_node, cluster, uplink_name, peer_ip, bfd, t\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Edge Cluster BFD and Uplink Session Health** — Tier-0 edge clusters depend on BFD and stable uplinks for fast failure detection. Degraded BFD or partial uplink loss causes asymmetric north-south paths and intermittent application reachability. 
Proactive monitoring supports failover drills and prevents surprise brownouts during ISP maintenance.\n\nDocumented **Data sources**: `index=vmware` `sourcetype=\"vmware:nsx:edge_bfd\"` with fields `edge_node`, `cluster`, `peer_ip`, `bfd_state`, `uplink_name`, `message`. **App/TA** (typical add-on context): `vmware_nsx_addon`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (edge × uplink), Timeline (BFD events), Table (non-UP sessions).",
              "script": "",
              "premium": "",
              "hw": "NSX Edge cluster nodes (VM or bare metal)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We listen for quick hello drops between the edge routers and their northbound uplinks so uneven paths cannot hide quietly when dashboards still look calm at a glance to a busy operator who only has a minute to check.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
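Supplementary note for UC-18.2.16: several of the false-positive patterns above come down to planned work, so one practical refinement is to subtract maintenance windows before the stats roll-up. A minimal sketch, assuming a hypothetical `nsx_planned_edge_window.csv` lookup keyed on `edge_node` with ISO-8601 `window_start`/`window_end` columns (the lookup named in the false-positive notes is keyed differently; adjust the key to whatever your change process records):

```spl
index=vmware sourcetype="vmware:nsx:edge_bfd" earliest=-4h
| where bfd_state!="UP"
| lookup nsx_planned_edge_window.csv edge_node OUTPUT window_start, window_end
| eval in_window=if(isnotnull(window_start)
    AND _time>=strptime(window_start,"%Y-%m-%dT%H:%M:%S")
    AND _time<=strptime(window_end,"%Y-%m-%dT%H:%M:%S"), 1, 0)
| where in_window=0
| stats latest(bfd_state) as bfd latest(_time) as t by edge_node, cluster, peer_ip, uplink_name
```

Rows inside a registered window are dropped before they can page anyone, while unplanned non-UP sessions still surface immediately.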
            {
              "i": "18.2.17",
              "n": "Transport Node Data Plane Interface Errors and Drops",
              "c": "high",
              "f": "intermediate",
              "v": "Overlay GENEVE depends on clean physical NICs and switch ports. Rising errors or drops on TN N-VDS or VDS uplinks manifest as intermittent VM connectivity and false-positive firewall symptoms. Isolating TN-level drops speeds root cause between server, rack, and fabric teams.",
              "t": "`vmware_nsx_addon`, vSphere metrics optional",
              "d": "`index=vmware` `sourcetype=\"vmware:nsx:tn_iface\"` with fields `transport_node`, `pnic`, `rx_errors`, `tx_errors`, `rx_drops`, `tx_drops`",
              "q": "index=vmware sourcetype=\"vmware:nsx:tn_iface\" earliest=-24h\n| eval bad=rx_errors+tx_errors+rx_drops+tx_drops\n| stats sum(bad) as issues max(rx_errors) as max_rx max(tx_errors) as max_tx by transport_node, pnic\n| where issues > 100 OR max_rx>0 OR max_tx>0\n| sort -issues\n| table transport_node, pnic, issues, max_rx, max_tx",
              "m": "(1) Collect per-pnic counters via NSX API on 60s interval. (2) Baseline known noisy lab hosts. (3) Correlate with overlay loss diagnostics (UC-18.2.9).",
              "z": "Bar chart (issues by TN), Table (worst pnics), Single value (TNs with errors).",
              "kfp": "**`cable_replacement_window` / `sfp_reseat_event`.** Lift-work on optics spikes **`FCS`** briefly while strands settle; Splunk mirrors **`crc_errors`** bursts that vanish once modules seat. Tie paging holds to **`CHG`** tickets referencing those drills instead of blaming routing alone without leaf corroboration.\n\n**`auto_negotiation_renegotiation` after reload.** **`IOS`**/**`EOS`** refreshes restart speed negotiations between **`vmnic`** uplinks and **`ToR`** ports so **`alignment_errors`** mimic frayed cabling though **`carrier`** stayed up. Suppress against **`CHG`** rows citing leaf reloads; verify duplex via **`esxcli network nic get`**.\n\n**`pause_frame_storm`.** Elevated **`pause_frames_rx`** reflects congested switches issuing **`802.3x`** **`pause-frame`** requests while local **`vmnic`** silicon stays healthy. Compare **`pause`** ratios against **`peak_tail_drop`** before blaming NIC firmware.\n\n**`vmnic_driver_upgrade` versus `ring_buffer_tuning_lowpower`.** Driver installs (`esxcli software vib install`) reset counters while collectors replay stale highs versus fresh zeros; record TA epochs before paging. Low-power blades shrink **`RX`**/**`TX`** **`ring`** depth toward **`256`**, raising expected **`tail_drop_count`** during bursts.\n\n**`psirt_known_firmware_bug` (`broadcom_57414`, `mellanox_connectx_6`).** Vendor **`PSIRT`** advisories describe **`CRC`**/**`FCS`** patterns resembling overlay loss yet rooted in firmware; reconcile NIC serials with **`KB`** notes before optics RMA when microcode fixes apply.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`, vSphere metrics optional.\n• Ensure the following data sources are available: `index=vmware` `sourcetype=\"vmware:nsx:tn_iface\"` with fields `transport_node`, `pnic`, `rx_errors`, `tx_errors`, `rx_drops`, `tx_drops`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Collect per-pnic counters via NSX API on 60s interval. (2) Baseline known noisy lab hosts. (3) Correlate with overlay loss diagnostics (UC-18.2.9).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:tn_iface\" earliest=-24h\n| eval bad=rx_errors+tx_errors+rx_drops+tx_drops\n| stats sum(bad) as issues max(rx_errors) as max_rx max(tx_errors) as max_tx by transport_node, pnic\n| where issues > 100 OR max_rx>0 OR max_tx>0\n| sort -issues\n| table transport_node, pnic, issues, max_rx, max_tx\n```\n\nUnderstanding this SPL\n\n**Transport Node Data Plane Interface Errors and Drops** — Overlay GENEVE depends on clean physical NICs and switch ports. Rising errors or drops on TN N-VDS or VDS uplinks manifest as intermittent VM connectivity and false-positive firewall symptoms. Isolating TN-level drops speeds root cause between server, rack, and fabric teams.\n\nDocumented **Data sources**: `index=vmware` `sourcetype=\"vmware:nsx:tn_iface\"` with fields `transport_node`, `pnic`, `rx_errors`, `tx_errors`, `rx_drops`, `tx_drops`. **App/TA** (typical add-on context): `vmware_nsx_addon`, vSphere metrics optional. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:tn_iface. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:tn_iface\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **bad** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by transport_node, pnic** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where issues > 100 OR max_rx>0 OR max_tx>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Transport Node Data Plane Interface Errors and Drops**): table transport_node, pnic, issues, max_rx, max_tx\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (issues by TN), Table (worst pnics), Single value (TNs with errors).",
              "script": "",
              "premium": "",
              "hw": "ESXi transport nodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We listen for trouble on the thick cords plugged into each rack machine where invisible tunnels ride between computers. Rising crackle there means cramped buffers or scratched optics before apps complain.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [
                "vmware_vsphere"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
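Supplementary note for UC-18.2.17: the documented counters are assumed here to be cumulative totals returned by the NSX API poll. If that holds, summing raw values double-counts history, and a per-poll delta is usually the better trend. A minimal sketch under that assumption (skip the `streamstats` step if your collector already emits per-interval deltas):

```spl
index=vmware sourcetype="vmware:nsx:tn_iface" earliest=-4h
| eval bad=rx_errors+tx_errors+rx_drops+tx_drops
| sort 0 transport_node pnic _time
| streamstats current=f last(bad) as prev_bad by transport_node, pnic
| eval delta=bad-prev_bad
| where delta>0
| timechart span=5m sum(delta) as new_issues by transport_node
```

Counter resets after a host reboot show up as negative deltas and are dropped by the `where` clause rather than producing a false spike.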
            {
              "i": "18.2.18",
              "n": "NSX Intelligence Recommended Firewall Rule Publish Queue",
              "c": "medium",
              "f": "advanced",
              "v": "NSX Intelligence proposes micro-segmentation rules; a backed-up or failed publish queue delays enforcement of least-privilege changes and leaves temporary broad access in place. Monitoring queue depth ties security posture to operational SLAs for the data center network.",
              "t": "`vmware_nsx_addon`, NSX Intelligence API",
              "d": "`index=vmware` `sourcetype=\"vmware:nsx:intel_publish\"` with fields `domain_name`, `queue_depth`, `last_error`, `status`",
              "q": "index=vmware sourcetype=\"vmware:nsx:intel_publish\" earliest=-24h\n| stats latest(queue_depth) as depth latest(status) as st latest(last_error) as err by domain_name\n| where depth>25 OR match(lower(st),\"(?i)fail|error\") OR (isnotnull(err) AND err!=\"\" AND err!=\"null\")\n| table domain_name, depth, st, err\n| sort -depth",
              "m": "(1) Scripted input for Intelligence recommendation publish API. (2) Alert when depth exceeds agreed SLA or status non-success. (3) Link failures to NSX Manager cluster health (UC-18.2.13).",
              "z": "Single value (max queue depth), Table (failed domains), Timechart (depth trend).",
              "kfp": "**bulk_approval_batching** — CAB windows that approve dozens of recommendations in one REST burst temporarily inflate `recommendation_backlog` while `PUBLISHED` events trail by minutes; require two polls trending the same direction before paging and anchor suppressions to change records citing Intelligence freeze exceptions.\n\n**planned_intel_cluster_upgrade** — Rolling Intelligence patches set `intelligence_aggregate_state` to transient DEGRADED strings even though ingest paused by design; tie `vcenter_correlated_motion_ts` and known maint MOIDs to mute macros keyed by `site_key`.\n\n**policy_coordinator_backpressure** — Large `rule_count` bundles wait on Policy/Corfu realization during bulk segmentation imports, so `rules_in_flight` spikes without Intelligence faults; compare `last_policy_realize_event` stalls against Manager cluster health (pair UC-18.2.13) before blaming analytics alone.\n\n**audit_uri_sampling_gaps** — Sampled audit streams drop `/security-recommendations` URIs so `last_rec_actor` looks stale while UI work continues; validate full-fidelity audit archives before accusing teams of inaction.\n\n**false_failed_status_strings** — Vendors sometimes emit `error_message=OK_WITH_WARNINGS` or localized tokens containing literal \"fail\" substrings; normalize with `translate` lookups and require backlog depth plus vendor faults together.\n\n**lab_domain_low_priority** — Engineering tenants keep `PENDING` queues open for weeks while production drains cleanly; tag `site_key` with environment metadata or `nsx_intel_queue_suppress.csv` so prod owns the pager.\n\n**duplicate_rec_id_hashes** — Collector bugs re-ingest the same `recommendation_id` under different `_cd`, inflating `dc(rec_id)`; dedupe at index time with stable `orig_key` hashes.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `vmware_nsx_addon`, NSX Intelligence API.\n• Ensure the following data sources are available: `index=vmware` `sourcetype=\"vmware:nsx:intel_publish\"` with fields `domain_name`, `queue_depth`, `last_error`, `status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Scripted input for Intelligence recommendation publish API. (2) Alert when depth exceeds agreed SLA or status non-success. (3) Link failures to NSX Manager cluster health (UC-18.2.13).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vmware sourcetype=\"vmware:nsx:intel_publish\" earliest=-24h\n| stats latest(queue_depth) as depth latest(status) as st latest(last_error) as err by domain_name\n| where depth>25 OR match(lower(st),\"(?i)fail|error\") OR (isnotnull(err) AND err!=\"\" AND err!=\"null\")\n| table domain_name, depth, st, err\n| sort -depth\n```\n\nUnderstanding this SPL\n\n**NSX Intelligence Recommended Firewall Rule Publish Queue** — NSX Intelligence proposes micro-segmentation rules; a backed-up or failed publish queue delays enforcement of least-privilege changes and leaves temporary broad access in place. Monitoring queue depth ties security posture to operational SLAs for the data center network.\n\nDocumented **Data sources**: `index=vmware` `sourcetype=\"vmware:nsx:intel_publish\"` with fields `domain_name`, `queue_depth`, `last_error`, `status`. **App/TA** (typical add-on context): `vmware_nsx_addon`, NSX Intelligence API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vmware; **sourcetype**: vmware:nsx:intel_publish. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vmware, sourcetype=\"vmware:nsx:intel_publish\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by domain_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where depth>25 OR match(lower(st),\"(?i)fail|error\") OR (isnotnull(err) AND err!=\"\" AND err!=\"null\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NSX Intelligence Recommended Firewall Rule Publish Queue**): table domain_name, depth, st, err\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NSX Intelligence Recommended Firewall Rule Publish Queue** — NSX Intelligence proposes micro-segmentation rules; a backed-up or failed publish queue delays enforcement of least-privilege changes and leaves temporary broad access in place. 
Monitoring queue depth ties security posture to operational SLAs for the data center network.\n\nDocumented **Data sources**: `index=vmware` `sourcetype=\"vmware:nsx:intel_publish\"` with fields `domain_name`, `queue_depth`, `last_error`, `status`. **App/TA** (typical add-on context): `vmware_nsx_addon`, NSX Intelligence API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (max queue depth), Table (failed domains), Timechart (depth trend).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "NSX Intelligence Node",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We spot when suggested inside-the-datacenter firewall rules pile up waiting for approval, so a quiet backlog cannot hide behind a calm screen while east-west paths stay wider than planners intended.",
              "mtype": [
                "Compliance",
                "Availability"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
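Supplementary note for UC-18.2.18: the false-positive guidance above asks for two consecutive polls trending the same way before paging. A minimal sketch of that debounce, assuming a 15-minute poll interval and the depth threshold of 25 used in the main search (both values are assumptions to tune):

```spl
index=vmware sourcetype="vmware:nsx:intel_publish" earliest=-1h
| bin _time span=15m
| stats latest(queue_depth) as depth by _time, domain_name
| sort 0 domain_name _time
| streamstats window=2 min(depth) as sustained_depth by domain_name
| stats latest(sustained_depth) as sustained_depth latest(depth) as current_depth by domain_name
| where sustained_depth > 25
```

A domain pages only when its two most recent polls both exceed the threshold, which absorbs single-poll spikes from bulk CAB approvals.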
          ],
          "qa": 38.3,
          "qd": {
            "gold": 0,
            "silver": 2,
            "bronze": 16,
            "none": 0
          }
        },
        {
          "i": "18.3",
          "n": "Other SDN",
          "u": [
            {
              "i": "18.3.1",
              "n": "Cilium/Calico Network Policy Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Kubernetes CNI network policies enforce pod-to-pod communication rules. Monitoring policy enforcement validates that micro-segmentation is working in containerized environments, critical for multi-tenant clusters and compliance.",
              "t": "Custom scripted inputs, Kubernetes logging pipeline",
              "d": "Cilium/Calico policy logs, Kubernetes audit logs",
              "q": "index=kubernetes sourcetype=\"kube:cni:policy\"\n| stats count as hits, dc(src_pod) as src_pods, dc(dst_pod) as dst_pods by policy_name, action, namespace\n| eval enforcement=if(action=\"deny\", \"Blocked\", \"Allowed\")\n| sort -hits\n| table namespace, policy_name, enforcement, hits, src_pods, dst_pods",
              "m": "Enable CNI policy logging in Cilium/Calico configuration. Forward logs via Fluentd/Fluent Bit to Splunk HEC. Parse policy name, action, source/destination pod, and namespace. Track denied traffic for security visibility. Identify namespaces without network policies (compliance gap).",
              "z": "Bar chart (policy hits by namespace), Table (denied flows), Heatmap (namespace-to-namespace traffic), Single value (namespaces without policies).",
              "kfp": "**Helm rollout deny bursts** — rolling Pod restarts legitimately churn CiliumEndpoint programming such that DROPPED spikes appear briefly while cilium-agent reattaches eBPF programs—suppress against CHG tickets referencing helm revision bumps plus Rollout pause annotations rather than paging raw deny velocity alone.\n\n**Identity garbage-collection timing** — CiliumIdentity reclamation lagging behind Deployment scale-down windows yields identity-oriented drop_reason_desc strings resembling breaches though Namespace membership remains compliant—cross-check cilium identity list snapshots beside Splunk timelines before escalation.\n\n**Felix iptables reload pauses** — Felix iptables dataplane briefly halts enforcement during in-place upgrades whereas BPF dataplane avoids iptables-save restore stalls—Splunk calico:felix:log WARN mixes differ by mode; verify FelixConfiguration before blaming Deny telemetry during upgrades.\n\n**Hubble OTLP saturation** — Hubble exporters constrained on high-flow clusters drop spans when OpenTelemetry batch queues back up—Splunk gaps resemble healthy silence; watch collector splunk exporter retry metrics and memory_limiter settings tied to DaemonSet sizing.\n\n**ClusterPool IPAM exhaustion** — cluster-pool CIDR carve-outs filling under bursty autoscaling resemble pod scheduling failures logged as no-IP style errors inside cilium:agent:log—pair Splunk ERROR lanes with cilium ip diagnostics before attributing denies to NetworkPolicy mistakes.\n\n**Calico overlay MTU edge cases** — VXLAN or IPIP encapsulation shrinks effective MTU; only jumbo-sized payloads fail while routine probes Allow—correlate Deny absence on small pings with interface mtu counters on tunl0 or vxlan.calico devices when large-object transfers fail.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted inputs, Kubernetes logging pipeline.\n• Ensure the following data sources are available: Cilium/Calico policy logs, Kubernetes audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable CNI policy logging in Cilium/Calico configuration. Forward logs via Fluentd/Fluent Bit to Splunk HEC. Parse policy name, action, source/destination pod, and namespace. Track denied traffic for security visibility. Identify namespaces without network policies (compliance gap).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kubernetes sourcetype=\"kube:cni:policy\"\n| stats count as hits, dc(src_pod) as src_pods, dc(dst_pod) as dst_pods by policy_name, action, namespace\n| eval enforcement=if(action=\"deny\", \"Blocked\", \"Allowed\")\n| sort -hits\n| table namespace, policy_name, enforcement, hits, src_pods, dst_pods\n```\n\nUnderstanding this SPL\n\n**Cilium/Calico Network Policy Monitoring** — Kubernetes CNI network policies enforce pod-to-pod communication rules. Monitoring policy enforcement validates that micro-segmentation is working in containerized environments, critical for multi-tenant clusters and compliance.\n\nDocumented **Data sources**: Cilium/Calico policy logs, Kubernetes audit logs. **App/TA** (typical add-on context): Custom scripted inputs, Kubernetes logging pipeline. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kubernetes; **sourcetype**: kube:cni:policy. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kubernetes, sourcetype=\"kube:cni:policy\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by policy_name, action, namespace** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **enforcement** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Cilium/Calico Network Policy Monitoring**): table namespace, policy_name, enforcement, hits, src_pods, dst_pods\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (policy hits by namespace), Table (denied flows), Heatmap (namespace-to-namespace traffic), Single value (namespaces without policies).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch the plug-ins on each Linux worker that stitch pods into the cluster network and enforce who may talk to whom, using the traffic notes they emit. When enforcement glitches or those helpers weaken, we speak up early so folks adjust Kubernetes namespace rules or fix the agents before quiet breakage spreads.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
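Supplementary note for UC-18.3.1: the implementation mentions identifying namespaces without network policies. The sketch below assumes a hypothetical `k8s_namespaces.csv` inventory lookup (for example, refreshed by a scheduled search over Kubernetes audit or inventory data) and flags namespaces that never appear in the CNI policy log:

```spl
| inputlookup k8s_namespaces.csv
| fields namespace
| join type=left namespace
    [ search index=kubernetes sourcetype="kube:cni:policy" earliest=-7d
      | stats count as policy_hits by namespace ]
| fillnull value=0 policy_hits
| where policy_hits=0
```

Absence of policy log traffic is only a proxy for "no policy defined"; confirm against `kubectl get networkpolicy -A` or the audit log before reporting a compliance gap.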
            {
              "i": "18.3.2",
              "n": "OpenStack Neutron Events",
              "c": "medium",
              "f": "intermediate",
              "v": "Neutron manages virtual networking in OpenStack. Tracking network operations (creation, modification, deletion) provides change audit, helps troubleshoot connectivity issues, and identifies unauthorized network modifications.",
              "t": "Custom scripted input (OpenStack API), OpenStack syslog",
              "d": "Neutron API logs, OpenStack syslog",
              "q": "index=openstack sourcetype=\"openstack:neutron\"\n| search action IN (\"create\", \"update\", \"delete\")\n| stats count by action, resource_type, user, project_name\n| sort -count\n| table _time, user, project_name, action, resource_type, resource_name, count",
              "m": "Ingest Neutron API logs via syslog or OpenStack notification bus. Track all network, subnet, port, and router CRUD operations. Alert on mass deletions or unauthorized modifications. Correlate network changes with VM connectivity issues.",
              "z": "Table (recent operations), Bar chart (operations by type), Timeline (change events), Pie chart (operations by project).",
              "kfp": "• **heat_stack_network_refresh** — Heat or Terraform stack updates replay many subnet and port create/delete calls per minute while VXLAN segmentation identifiers are reassigned on purpose. Correlate the burst window with stack identifiers you record in neutron_chg_suppress (chg_token, window_start, window_end) so crud_storm rows are not paged as incidents.\n\n• **ansible_tenant_network_play** — Ansible Tower or AWX roles that rebuild tenant networks may flood security group and rule updates from a single automation principal. Require actor to match known automation accounts and fail open when proj appears on ansible_project_allow.csv via lookup.\n\n• **octavia_member_rebalance** — Octavia load-balancer member reconciliations churn Neutron port rows during amphora failover or pool edits without a customer misconfiguration. Suppress binding_or_vif_fail when change records cite Octavia maintenance and vif_seen briefly toggles between ovs and unbound on the same port.\n\n• **neutron_quota_reset_burst** — Quota reset APIs or project migration runbooks issue paired floating IP and router interface churn. Confirm timestamps against cloud administrator change tickets and carry neutron_quota_reset_burst labels in your exception register before treating the pattern as abuse.\n\n• **dhcp_agent_failover_drill** — DHCP agent active or standby exercises restart namespaces and can emit binding errors that resemble compute faults. Escalate binding_or_vif_fail only when multiple agents or syslog RPC timeout strings appear, not during a single-node drill tagged in dhcp_agent_failover_drill with lookup filters.\n\n• **ml2_rollout_rebind_wave** — Mechanism driver migrations from OVS to OVN or ML2 driver upgrades force fleet-wide port rebinds. Book ml2_rollout_rebind_wave maintenance hashes inside neutron_chg_suppress for TripleO or Kolla-Ansible cutovers so overlay_stress does not fire on expected rebind noise.\n\n• **tempest_network_gate_spike** — Tempest and other CI tenants hammer the networking APIs nightly. Drop tempest-shaped project_id values using openstack_ci_tenant_allow.csv or equivalent proj allowlists keyed off actor naming conventions.\n\n• **manual_cluster_heal_port_resync** — Operators running neutron-ovs-cleanup or ovn-nbctl sync sweeps can mark ports down transiently. Cross-check controller syslog for heal language and require manual_cluster_heal_port_resync in the change ticket before paging executives for apparent binding loss.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (OpenStack API), OpenStack syslog.\n• Ensure the following data sources are available: Neutron API logs, OpenStack syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest Neutron API logs via syslog or OpenStack notification bus. Track all network, subnet, port, and router CRUD operations. Alert on mass deletions or unauthorized modifications. Correlate network changes with VM connectivity issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openstack sourcetype=\"openstack:neutron\"\n| search action IN (\"create\", \"update\", \"delete\")\n| stats count by action, resource_type, user, project_name\n| sort -count\n| table _time, user, project_name, action, resource_type, resource_name, count\n```\n\nUnderstanding this SPL\n\n**OpenStack Neutron Events** — Neutron manages virtual networking in OpenStack. Tracking network operations (creation, modification, deletion) provides change audit, helps troubleshoot connectivity issues, and identifies unauthorized network modifications.\n\nDocumented **Data sources**: Neutron API logs, OpenStack syslog. **App/TA** (typical add-on context): Custom scripted input (OpenStack API), OpenStack syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openstack; **sourcetype**: openstack:neutron. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openstack, sourcetype=\"openstack:neutron\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by action, resource_type, user, project_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **OpenStack Neutron Events**): table _time, user, project_name, action, resource_type, resource_name, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recent operations), Bar chart (operations by type), Timeline (change events), Pie chart (operations by project).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We track when private cloud networking changes arrive in big waves or when a virtual hook-up keeps failing, so repair crews can calm tenant overlays before customer software loses its connection.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "openstack",
                "syslog"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
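Supplementary note for UC-18.3.2: the implementation calls for alerting on mass deletions. A minimal sketch follows; the 15-minute window and threshold of 20 are assumptions to tune against the normal Heat/Terraform teardown volume described in the false-positive notes:

```spl
index=openstack sourcetype="openstack:neutron" action=delete earliest=-15m
| stats count as deletes dc(resource_type) as resource_types values(resource_type) as resources by user, project_name
| where deletes > 20
```

Pair it with the automation-account allowlists described above so stack rebuilds by known service principals do not page.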
            {
              "i": "18.3.3",
              "n": "SDN Controller Health",
              "c": "critical",
              "f": "advanced",
              "v": "SDN controllers are the brain of software-defined networks. Controller outages or cluster consensus failures can cause network-wide disruption. Monitoring controller health ensures the control plane remains available and consistent.",
              "t": "Custom scripted input, SDN controller syslog",
              "d": "SDN controller system logs, cluster status API",
              "q": "index=sdn sourcetype=\"sdn:controller\"\n| search (cluster_state OR heartbeat OR leader_election OR consensus)\n| eval health=case(\n    searchmatch(\"healthy\") OR searchmatch(\"active\"), \"Healthy\",\n    searchmatch(\"degraded\") OR searchmatch(\"standby\"), \"Degraded\",\n    searchmatch(\"failed\") OR searchmatch(\"unreachable\"), \"Critical\",\n    1==1, \"Unknown\")\n| stats latest(health) as status, latest(_time) as last_heartbeat by controller_id, role\n| table controller_id, role, status, last_heartbeat",
              "m": "Monitor SDN controller cluster via heartbeat polling every 15 seconds. Track cluster membership, leader election events, and consensus state. Alert immediately on controller failure or split-brain conditions. Monitor controller resource utilization (CPU, memory, database size). Maintain runbook for controller failover.",
              "z": "Status grid (controller cluster), Timeline (cluster events), Single value (cluster health), Table (controller details).",
              "kfp": "**Rolling controller maintenance** — a cordoned node intentionally moves through **Leaving**/**Exiting**/**Removed** while workloads drain; **`akka_member`** may show **`Unreachable`** briefly during bounce though quorum stays intact—suppress via CMDB **`maintenance=true`** tokens tied to **`controller_host`**. **Karaf feature installs** — **`osgi`** bundle refreshes emit **`bundle`** lifecycle WARN strings resembling outages yet dataplane sessions persist—differentiate using correlated **`sdn:controller:cluster`** polls still returning **`Up`** majority counts before paging. **`WeaklyUp` transitional windows** — third-node rejoin after heal lingers in **`WeaklyUp`** until gossip settles; Splunk flags transitional **`risk_bucket`** rows though **`shard_latest`** healthy—extend suppression timers beyond gossip convergence documented for your Java runtime. **`ovs-vswitchd` upgrades** — kernel module reloads spike **`sdn:openflow:session`** reconnect storms toward controllers without dataplane loss once flows relearn—cross-check **`openflow_signal`** bursts against packaged upgrade CRQs before blaming quorum. **Atomix RAFT log compaction on ONOS** — periodic compaction raises JVM CPU briefly (`gc_pressure` regex noise) unrelated to steady-state heap exhaustion—tie paging to sustained **`last_seen`** streaks plus **`onos_cluster_state`** **`INACTIVE`** persistence rather than single spikes. **OpenFlow echo timeouts during synthetic lab packet drop drills** — chaos exercises purposely delay echoes yet Ops expects noisy Splunk rows—tag **`environment=lab`** rows separately. **REST pollers hitting HTTP 401 during credential rotation** — **`northbound_http_fault`** fires though controllers remain healthy—sync **`rest_ta`** secrets with **`users.properties`**/`cli` credential rotations before alert thresholds. **Contrail analytics backlog** — **`contrail-collector`** lag temporarily resembles shard drift though **`contrail-config`** stays authoritative—confirm **`shard_mark`** versus **`contrail-config-api`** REST mirrors before escalation.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input, SDN controller syslog.\n• Ensure the following data sources are available: SDN controller system logs, cluster status API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor SDN controller cluster via heartbeat polling every 15 seconds. Track cluster membership, leader election events, and consensus state. Alert immediately on controller failure or split-brain conditions. Monitor controller resource utilization (CPU, memory, database size). Maintain runbook for controller failover.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdn sourcetype=\"sdn:controller\"\n| search (cluster_state OR heartbeat OR leader_election OR consensus)\n| eval health=case(\n    searchmatch(\"healthy\") OR searchmatch(\"active\"), \"Healthy\",\n    searchmatch(\"degraded\") OR searchmatch(\"standby\"), \"Degraded\",\n    searchmatch(\"failed\") OR searchmatch(\"unreachable\"), \"Critical\",\n    1==1, \"Unknown\")\n| stats latest(health) as status, latest(_time) as last_heartbeat by controller_id, role\n| table controller_id, role, status, last_heartbeat\n```\n\nUnderstanding this SPL\n\n**SDN Controller Health** — SDN controllers are the brain of software-defined networks. Controller outages or cluster consensus failures can cause network-wide disruption. Monitoring controller health ensures the control plane remains available and consistent.\n\nDocumented **Data sources**: SDN controller system logs, cluster status API. **App/TA** (typical add-on context): Custom scripted input, SDN controller syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdn; **sourcetype**: sdn:controller. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdn, sourcetype=\"sdn:controller\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by controller_id, role** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **SDN Controller Health**): table controller_id, role, status, last_heartbeat\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (controller cluster), Timeline (cluster events), Single value (cluster health), Table (controller details).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch the brains that steer your programmable switches so they stay in agreement and keep their connections alive. When those brains argue, drift asleep, or choke on memory, we raise it clearly so repair teams fix the control cluster—not blame applications downstream.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.4",
              "n": "VXLAN Tunnel and Overlay Health",
              "c": "critical",
              "f": "intermediate",
              "v": "VXLAN tunnel failures break overlay connectivity. Monitoring tunnel state and packet drops ensures fabric reliability and fast troubleshooting.",
              "t": "Leaf/spine device logs, fabric manager",
              "d": "VXLAN tunnel status, NVE interface, encapsulation stats",
              "q": "index=sdn sourcetype=\"vxlan:tunnel\"\n| where state!=\"up\" OR packet_drops > 0\n| stats latest(state) as state, sum(packet_drops) as drops by tunnel_id, leaf_id, vni\n| table tunnel_id, leaf_id, vni, state, drops",
              "m": "Poll VXLAN tunnel and NVE stats from fabric devices. Alert on tunnel down or non-zero drops. Report on overlay health by VNI and leaf. Correlate with BGP and underlay events.",
              "z": "Status grid (tunnel × state), Table (tunnels with drops), Line chart (drops over time).",
              "kfp": "**Live mobility bursts** — **Type 2 MAC/IP** advertisement surges occur during **VMware vMotion**, **Hyper-V Live Migration**, or storage live-migrations while **BGP** peers remain **Established**—suppress **`type2_mac_mobility_storm`** buckets using **`CHG`** identifiers referencing **`DRS`** maintenance modes plus **`hypervisor_cluster`** CMDB **`maintenance=true`** windows opened ahead of migrations documented inside virtualization runbooks.\n\n**Planned hypervisor drains** — **ESXi** evacuation oscillates **`Anycast`** gateway **`VTEP`** sources behind shared first-hop relays so **`loss_vtep_reachability`** rows spike transiently though bridging stays intact until evacuation completes—cross-check **`fabric_site`** suppress macros tied to compute **`CHG`** tickets before escalating **`transport`** optics incidents alone.\n\n**BGP graceful-restart on route-reflectors** — **`evpn_control_plane_flap`** fingerprints resemble outages during **`BGP`** **`RFC 4724`** graceful-restart timers while **`EVPN`** routes remain **`STALE`** intentionally—extend suppress lists using **`graceful_restart_active`** telemetry peers expose plus **`compare`** **`BGP`** **`neighbor`** **`restart`** **`timer`** **`CLI`** values versus **`Splunk`** **`last_seen`** deltas beneath configured **`stale-route`** intervals archived inside **`BGP`** **`policy`** references **`transport`** SMEs publish quarterly.\n\n**Anycast-RP / multicast-core choreography** — **Type 3 inclusive multicast** routing churn spikes during **`PIM-SM`** **`Anycast-RP`** standby swaps—**`ingress`** **`replication`** counters fluctuate until multicast **`RP`** convergence finishes—suppress using **`multicast`** maintenance **`CRQ`** identifiers tied to **`spine`** **`Rendezvous-Point`** work rather than instantaneous overlay floods alone.\n\n**PMTU ghosts above synthetic baselines** — **`underlay_mtu_below_vxlan_budget`** may surface only on selective elephant flows exceeding MSS samples your smaller **`ping`** tests used—layer **`interface`** **`fragment`** **`drop`** counters and targeted jumbo probes between **`VTEP`** pairs when Splunk **`mtu_observed`** oscillates near thresholds.\n\n**Fabric automation duplicate-gateway warnings** — **NDFC** or **Apstra** rebuilds temporarily raise **Anycast** IP duplicate-detection syslog while **`bridge-domain`** templates re-stage—suppress when **`automation_pipeline_id`** tokens map to known fabric provisioning **`CHG`** tickets **`NetOps`** automation teams already track in ticketing bridges.\n\n**Software VTEP cross-talk** — **Linux** defaults sometimes use **`UDP`** **8472** while hardware fabrics standardize on **`4789`**—segregate lab indexes or enforce Splunk **`where`** expectations on **`udp_port_seen`** so development clusters do not pollute production **`overlay_fault`** dashboards curated for **`IANA`** port discipline.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Leaf/spine device logs, fabric manager.\n• Ensure the following data sources are available: VXLAN tunnel status, NVE interface, encapsulation stats.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll VXLAN tunnel and NVE stats from fabric devices. Alert on tunnel down or non-zero drops. Report on overlay health by VNI and leaf. Correlate with BGP and underlay events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdn sourcetype=\"vxlan:tunnel\"\n| where state!=\"up\" OR packet_drops > 0\n| stats latest(state) as state, sum(packet_drops) as drops by tunnel_id, leaf_id, vni\n| table tunnel_id, leaf_id, vni, state, drops\n```\n\nUnderstanding this SPL\n\n**VXLAN Tunnel and Overlay Health** — VXLAN tunnel failures break overlay connectivity. Monitoring tunnel state and packet drops ensures fabric reliability and fast troubleshooting.\n\nDocumented **Data sources**: VXLAN tunnel status, NVE interface, encapsulation stats. **App/TA** (typical add-on context): Leaf/spine device logs, fabric manager. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdn; **sourcetype**: vxlan:tunnel. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdn, sourcetype=\"vxlan:tunnel\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where state!=\"up\" OR packet_drops > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by tunnel_id, leaf_id, vni** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **VXLAN Tunnel and Overlay Health**): table tunnel_id, leaf_id, vni, state, drops\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (tunnel × state), Table (tunnels with drops), Line chart (drops over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch whether the invisible tunnel endpoints between switches stay healthy and whether the fabric keeps publishing overlay reachability for MAC and IP addresses. When physical paths shrink packets too small for VXLAN wrappers—or routes churn during workload moves—we raise it early before east-west traffic fails quietly.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.5",
              "n": "EVPN Route and MAC Mobility Events",
              "c": "high",
              "f": "intermediate",
              "v": "EVPN route churn or MAC mobility storms can impact convergence and stability. Tracking route and mobility events supports fabric tuning and VM mobility analysis.",
              "t": "BGP/EVPN route monitor, spine/leaf logs",
              "d": "EVPN route advertisements, MAC mobility sequence numbers",
              "q": "index=sdn sourcetype=\"evpn:route\"\n| search (type=mac_ip OR type=mac_mobility)\n| bin _time span=5m\n| stats count by vni, host, _time\n| where count > 100\n| sort -count",
              "m": "Ingest EVPN route and mobility events. Alert on route storm or high mobility count in short window. Report on top movers and VNIs with most churn. Use for capacity and placement planning.",
              "z": "Line chart (mobility events over time), Table (top VNIs by churn), Bar chart (mobility by host).",
              "kfp": "**vmotion_batch_window** — Clustered **VMware** **vSphere** **DRS** sweeps deliberately relocate dozens of **MACs** per minute while **BGP** remains **Established**; align Splunk **`storm_bucket`** **`vm_migration_storm`** positives with **CMDB** **`vmotion_batch_window`** tokens tied to **maintenance** **`CHG`** records before treating mobility as fabric misconfiguration.\n\n**evpn_rt2_mass_relearn** — Shortening **MAC** **aging** or toggling **ARP** **suppression** forces **Type-2** mass **re-advertise** cycles resembling move bursts—annotate **`evpn_rt2_mass_relearn`** runbook suppressions referencing **Nexus** / **EOS** knob changes documented in the same **CAB** evening.\n\n**anycast_gateway_rebalance** — **Anycast** **SVI** migrations shift **gateway** **VTEP** sourcing without bad **NICs**—pair syslog **`dup_mac_signal`** crumbs with **`anycast_gateway_rebalance`** tickets when first-hop relays reshuffle during **FHRP** choreography lacking true duplicate hosts.\n\n**kubernetes_cni_sandbox_shuffle** — **Container** **CNI** rebuilds recreate **sandbox** **MACs** rapidly on **bare-metal** **Kubernetes** nodes attached as **CEs**—downgrade **`mac_churn_anomaly`** when **`kubernetes_cni_sandbox_shuffle`** automation IDs describe **namespace** drains sans L2 loops.\n\n**storage_live_migration_noise** — **Hypervisor** **storage-only** migrations sometimes pulse **control-plane** **Type-2** traffic without IP moves—use **`storage_live_migration_noise`** tagging when **vCenter** tasks list **storage** **vMotion** exclusively and Splunk **`vtep_dc`** stays **1**.\n\n**storm_control_mac_suppression** — **Storm-control** **MAC** **limit** breaches trigger **sticky** **MAC** syslog storylines unrelated to **EVPN** **mobility** storms—correlate interface **`storm-control`** counters on **access** **leaf** **CLI** before **`dup_mac_investigate`** escalations.\n\n**orchestrator_api_blind_spot** — **OpenStack** **Nova**/**Kubernetes** **API** outages delay ticketing yet **workloads** still migrate—**`orchestrator_api_blind_spot`** explains **`silent`** drift positives until API health dashboards reconcile with Splunk timelines.\n\n**lab_alias_duplicate_injection** — **CI** labs **reuse** **gold** **images** with identical **MACs** on parallel **VNIs**—segregate **`index=lab_fabric`** or tag **`environment=lab`** so training **`multi_vtep_same_mac`** rows never page production bridges.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BGP/EVPN route monitor, spine/leaf logs.\n• Ensure the following data sources are available: EVPN route advertisements, MAC mobility sequence numbers.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest EVPN route and mobility events. Alert on route storm or high mobility count in short window. Report on top movers and VNIs with most churn. Use for capacity and placement planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdn sourcetype=\"evpn:route\"\n| search (type=mac_ip OR type=mac_mobility)\n| bin _time span=5m\n| stats count by vni, host, _time\n| where count > 100\n| sort -count\n```\n\nUnderstanding this SPL\n\n**EVPN Route and MAC Mobility Events** — EVPN route churn or MAC mobility storms can impact convergence and stability. Tracking route and mobility events supports fabric tuning and VM mobility analysis.\n\nDocumented **Data sources**: EVPN route advertisements, MAC mobility sequence numbers. **App/TA** (typical add-on context): BGP/EVPN route monitor, spine/leaf logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdn; **sourcetype**: evpn:route. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdn, sourcetype=\"evpn:route\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by vni, host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (mobility events over time), Table (top VNIs by churn), Bar chart (mobility by host).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We trace when the same guest sticker hops rack to rack faster than your move calendar says it should, and when two racks both insist they own that sticker. That heads off crossed wires before east-west apps feel it.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.6",
              "n": "ACI Contract Deny and Drop Statistics",
              "c": "high",
              "f": "intermediate",
              "v": "High contract deny or drop counts may indicate overly restrictive policy or attack traffic. Monitoring supports policy tuning and security analysis.",
              "t": "ACI policy stats, fabric stats",
              "d": "Contract hit counters, deny/drop by contract and EPG",
              "q": "index=aci sourcetype=\"aci:contract_stats\"\n| where action=\"deny\" OR action=\"drop\"\n| bin _time span=1h\n| stats sum(packets) as denied by contract_name, src_epg, dest_epg, _time\n| where denied > 1000\n| sort -denied",
              "m": "Ingest ACI contract statistics. Track deny and drop by contract and EPG pair. Alert on spike in denies. Report on top denied flows for policy review. Correlate with app and security events.",
              "z": "Table (denied flows), Bar chart (denies by contract), Line chart (deny trend).",
              "kfp": "• **intersite_contract_rehome** — Planned EPG moves between sites or pods deliberately tear down stretched consumer/provider bindings, so deny counters climb until shadow objects finish reprogramming; align operations alerts with change tickets by tagging maintenance windows in **`aci_maint_intersite.csv`** and **`lookup`** suppressions until NDO shows consistent deploy-success markers.\n\n• **ndo_shadow_epg_lag** — NDO may mark templates deployed while remote leaf TCAM entries for shadow EPG pcTags still compile, producing a ten-to-one-hundred twenty second burst of denies unrelated to attacks; require two consecutive **`risk_shadow_gap`** windows or corroborating leaf **`show zoning-rule`** deltas before paging and document **`ndo_shadow_epg_lag`** in tickets.\n\n• **ipn_acl_sampling_gap** — When IPN/ISN transports enable NetFlow or sFlow sampling but **`cisco:aci:acl_log`** inputs throttle drop logging to one-per-flow caches, analysts undercount real denies; compare **`cisco:aci:contract_stats`** counters before lowering thresholds and widen polling budgets if finance approves.\n\n• **mislabeled_consumer_facing** — Operators occasionally attach consumer contracts facing the wrong EPG after migrations, so traffic crosses WAN legs yet meets implicit deny counters resembling inter-site failures; cross-check **fvRsCons** rows in **`cisco:aci:multisite`** outputs and use **`mislabeled_consumer_facing`** tags after audit fixes rather than hardware replacements.\n\n• **statsenabled_cold_site** — A site whose vzSubj never enabled **statsEnabled** emits zero **`actrlRuleHit5min`** rows while syslog still shows implicit deny chatter; resolve by enabling statistics and backfilling baselines—track with a separate discovery search, not by muting this correlation.\n\n• **symmetric_routing_asym_policy** — Return-path asymmetry across WAN ECMP may send packets through spines lacking the expected contract view, yielding deny spikes that reverse once BGP preferences settle; correlate with underlay IGP/BGP change records and apply **`symmetric_routing_asym_policy`** suppression tokens tied to carrier maintenance tickets.\n\n• **golden_image_epg_reimage** — Golden-image VM redeployments can clone MAC/IP patterns that momentarily violate stretched filters until reinstall completes; **`golden_image_epg_reimage`** tags from compute teams prevent noisy **`deny_flood`** pages during image refresh waves.\n\n• **compressed_audit_window** — Quarterly audit scans hammer many ports rapidly, driving acl_log volume without strategic risk; filter scanner subnets with **`where NOT cidrmatch(\"10.50.0.0/16\",src)`** clauses or an **`audit_scanner_allow.csv`** **`lookup`** so **`compressed_audit_window`** noise drops out of paging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ACI policy stats, fabric stats.\n• Ensure the following data sources are available: Contract hit counters, deny/drop by contract and EPG.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest ACI contract statistics. Track deny and drop by contract and EPG pair. Alert on spike in denies. Report on top denied flows for policy review. Correlate with app and security events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aci sourcetype=\"aci:contract_stats\"\n| where action=\"deny\" OR action=\"drop\"\n| bin _time span=1h\n| stats sum(packets) as denied by contract_name, src_epg, dest_epg, _time\n| where denied > 1000\n| sort -denied\n```\n\nUnderstanding this SPL\n\n**ACI Contract Deny and Drop Statistics** — High contract deny or drop counts may indicate overly restrictive policy or attack traffic. Monitoring supports policy tuning and security analysis.\n\nDocumented **Data sources**: Contract hit counters, deny/drop by contract and EPG. **App/TA** (typical add-on context): ACI policy stats, fabric stats. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aci; **sourcetype**: aci:contract_stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aci, sourcetype=\"aci:contract_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action=\"deny\" OR action=\"drop\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by contract_name, src_epg, dest_epg, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where denied > 1000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (denied flows), Bar chart (denies by contract), Line chart (deny trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We ring a bell when the handoff between two linked sites turns visitors away at the border more than usual, so teams mend stretched agreements before quiet failures spread to apps in another building.",
              "mtype": [
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "cisco_aci"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.7",
              "n": "NSX-T Segment and Gateway Capacity",
              "c": "high",
              "f": "intermediate",
              "v": "Segment or gateway overload affects tenant connectivity and performance. Monitoring capacity and utilization supports scale planning and troubleshooting.",
              "t": "NSX-T API, vSphere/NSX metrics",
              "d": "Segment port count, gateway session count, throughput per segment",
              "q": "index=nsx sourcetype=\"nsx:segment\"\n| stats latest(port_count) as ports, latest(port_limit) as limit, latest(throughput_mbps) as mbps by segment_id, gateway_id\n| eval port_pct=round((ports/limit)*100, 1)\n| where port_pct > 80 OR mbps > 9000\n| table segment_id, gateway_id, ports, limit, port_pct, mbps",
              "m": "Poll NSX-T segment and gateway metrics. Alert when port usage or throughput approaches limit. Report on capacity trend and top-loaded segments. Plan gateway scale-out when needed.",
              "z": "Table (segments near limit), Gauge (port utilization), Line chart (throughput trend).",
              "kfp": "• **greenfield_onboarding_spike** — Approved **tenant** waves deliberately load thousands of **segments** in one **change** weekend; Splunk **`transport_zone_segment_pressure`** positives should attach to lookup nsx_tz_onboarding_suppress.csv keyed on **`transport_zone_id`** when the **`CHG`** window is active, and otherwise align with **NSX** **onboarding** runbooks rather than silent drift.\n\n• **api_page_drift_double_count** — Collector bugs replay the same **segment** **UUID** across pages; **`segs_obs`** overshoots **UI** totals until **`dedup segment_path`** ships—validate **`dc(segment_path)`** equals **Manager** **inventory** before paging **NetOps**.\n\n• **capacity_counter_reset_upgrade** — **NSX** **Maintenance** releases reset **`usage_percent`** baselines; **`manager_capacity_warning`** spikes right after **boot** banners in **`vmware:nsxt:syslog`**—suppress **12h** when **`nsx_upgrade_suppress.csv`** lists the **cluster** **build**.\n\n• **arp_learning_burst** — **ARP** **suppression** toggles or **VM** **anti-spoof** work triggers mass relearn resembling table pressure—correlate with **`vmware:nsxt:audit`** **segment** edits before **neighbor** escalations.\n\n• **route_reflectors_external_routes** — **Tier-0** carries **WAN** **learned** **routes** that legitimately approach **scale** limits during **carrier** **turnups**—pair **`tier0_route_headroom_low`** with **carrier** **`CHG`** and **BGP** **advertisement** plans.\n\n• **edge_form_factor_reprofile** — **Small**-to-**Large** **Edge** swaps raise **`session_capacity`** overnight; **`edge_session_warning`** clears after **`edge_form_factor.csv`** refresh—treat as **data** hygiene, not attack.\n\n• **stale_rest_vs_datapath_truth** — **`get dataplane`** shows healthy **session** tables while **REST** omits **`sess_max`**; **`sess_pct`** stays **NULL** and **`warn_edge_session_pct`** never fires—do not declare green from **search** alone without **CLI** spot checks.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NSX-T API, vSphere/NSX metrics.\n• Ensure the following data sources are available: Segment port count, gateway session count, throughput per segment.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll NSX-T segment and gateway metrics. Alert when port usage or throughput approaches limit. Report on capacity trend and top-loaded segments. Plan gateway scale-out when needed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nsx sourcetype=\"nsx:segment\"\n| stats latest(port_count) as ports, latest(port_limit) as limit, latest(throughput_mbps) as mbps by segment_id, gateway_id\n| eval port_pct=round((ports/limit)*100, 1)\n| where port_pct > 80 OR mbps > 9000\n| table segment_id, gateway_id, ports, limit, port_pct, mbps\n```\n\nUnderstanding this SPL\n\n**NSX-T Segment and Gateway Capacity** — Segment or gateway overload affects tenant connectivity and performance. Monitoring capacity and utilization supports scale planning and troubleshooting.\n\nDocumented **Data sources**: Segment port count, gateway session count, throughput per segment. **App/TA** (typical add-on context): NSX-T API, vSphere/NSX metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nsx; **sourcetype**: nsx:segment. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nsx, sourcetype=\"nsx:segment\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by segment_id, gateway_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **port_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where port_pct > 80 OR mbps > 9000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NSX-T Segment and Gateway Capacity**): table segment_id, gateway_id, ports, limit, port_pct, mbps\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (segments near limit), Gauge (port utilization), Line chart (throughput trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We help you notice when the virtual network is nearly full of new subnets, gateway route tables, and edge connection tracking, well before fresh tenants hit slowdowns or failures.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx",
                "vmware"
              ],
              "em": [
                "vmware_vsphere"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.8",
              "n": "SDN Configuration Change and Rollback Audit",
              "c": "critical",
              "f": "intermediate",
              "v": "Unauthorized fabric or policy changes can cause outages or security gaps. Auditing changes and rollbacks supports change control and incident analysis.",
              "t": "APIC/NSX/controller audit logs",
              "d": "Configuration change events, user, object, before/after",
              "q": "index=sdn sourcetype=\"sdn:audit\"\n| search (action=\"modified\" OR action=\"deleted\" OR action=\"rollback\")\n| table _time, user, object_type, object_name, action, change_summary\n| sort -_time",
              "m": "Ingest controller and fabric audit logs. Alert on change to critical objects (e.g., tenant, contract, segment) without change ticket. Report on change frequency and rollback rate. Integrate with change management.",
              "z": "Table (recent changes), Timeline (change events), Bar chart (changes by user).",
              "kfp": "**CI/CD burst signatures** — Pipeline controllers replay manifests rapidly while transaction IDs refresh each push; hourly velocity resembles misuse though pipeline_job_id stamps map cleanly to Jenkins Git hashes—suppress against automation principal lists plus CMDB-owned deployment waves.\n\n**Terraform plan-and-apply twins** — Two CONFIG_DATASTORE_WRITE waves appear minutes apart during speculative plan versus executed apply while txn_state briefly resembles rollback_candidate though desired_config parity holds—tie suppression to workspace run_phase extracted from Terraform stdout mirrored into auxiliary indexes.\n\n**Consensus replay mirrors** — Fresh RAFT followers replay immutable ovsdb journals after quorum heal so AuditEntry duplicates inflate blast_radius counts until leader_epoch increments stabilize—dedupe txn plus controller_node pairs before paging.\n\n**Warm-site journaling floods** — Secondary controllers replay aaaModLR backlog after DR failover while Vault leases rotate; after_hours detectors flash despite CAB-approved emergency tickets—gate alerts with CHG business_service qualifiers excluding DR drill markers plus intent-hash parity.\n\n**Compliance scanner cadence** — Nightly Ansible dry-run triples Neutron PUT counts against golden policy mirrors without dataplane drift—whitelist svc-compliance actors inside openstack:neutron:audit before escalating DNA Center audit echoes timed with scanner windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1098",
                "T1078.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: APIC/NSX/controller audit logs.\n• Ensure the following data sources are available: Configuration change events, user, object, before/after.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest controller and fabric audit logs. Alert on change to critical objects (e.g., tenant, contract, segment) without change ticket. Report on change frequency and rollback rate. Integrate with change management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdn sourcetype=\"sdn:audit\"\n| search (action=\"modified\" OR action=\"deleted\" OR action=\"rollback\")\n| table _time, user, object_type, object_name, action, change_summary\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**SDN Configuration Change and Rollback Audit** — Unauthorized fabric or policy changes can cause outages or security gaps. Auditing changes and rollbacks supports change control and incident analysis.\n\nDocumented **Data sources**: Configuration change events, user, object, before/after. **App/TA** (typical add-on context): APIC/NSX/controller audit logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdn; **sourcetype**: sdn:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdn, sourcetype=\"sdn:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **SDN Configuration Change and Rollback Audit**): table _time, user, object_type, object_name, action, change_summary\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recent changes), Timeline (change events), Bar chart (changes by user).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We keep a shared diary of who touched the knobs on each vendor's programmable-network brain—and whether those edits matched agreed windows and matching logins. When someone pushes risky changes late at night or walks back what they did without saying so, we raise it plainly before trouble spreads.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nsx"
              ],
              "em": [
                "cisco_aci"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.9",
              "n": "VXLAN VTEP Reachability",
              "c": "high",
              "f": "advanced",
              "v": "VTEP (VXLAN Tunnel Endpoint) peers form the overlay mesh in VXLAN fabrics. When a VTEP goes down, tenant segments lose connectivity and workloads become isolated. Monitoring VTEP reachability ensures overlay health and enables rapid detection of NVE failures before tenant impact.",
              "t": "SNMP modular input, NX-OS/EOS syslog",
              "d": "`show nve peers` (NX-OS), `show vxlan vtep` (EOS), syslog VTEP events",
              "q": "index=network (sourcetype=\"cisco:nxos:nve_peers\" OR sourcetype=\"arista:eos:vxlan_vtep\" OR sourcetype=\"syslog\")\n| search (nve OR vtep OR \"NVE peer\" OR \"VTEP\")\n| eval peer_status=case(\n    like(_raw, \"%Up%\") OR like(_raw, \"%up%\") OR like(_raw, \"%established%\"), \"Up\",\n    like(_raw, \"%Down%\") OR like(_raw, \"%down%\") OR like(_raw, \"%failed%\"), \"Down\",\n    like(_raw, \"%Init%\") OR like(_raw, \"%init%\"), \"Init\",\n    1==1, \"Unknown\")\n| rex field=_raw \"peer\\s+(?<peer_ip>\\d+\\.\\d+\\.\\d+\\.\\d+)|(?<peer_ip>\\d+\\.\\d+\\.\\d+\\.\\d+)\\s+.*?(?<state>\\w+)\"\n| where peer_status!=\"Up\" OR isnull(peer_ip)\n| stats latest(peer_status) as status, latest(_time) as last_seen by host, peer_ip, vni\n| table host, peer_ip, vni, status, last_seen",
              "m": "Run scripted input every 60 seconds to execute `show nve peers` (Cisco NX-OS) or `show vxlan vtep` (Arista EOS) via SSH/API. Parse peer IP, state, and VNI. Ingest syslog for VTEP state-change events (e.g., NVE peer down, BGP session lost). Create sourcetype with field extractions for peer_ip, state, vni, host. Alert immediately when any VTEP peer transitions to Down. Track peer flapping (rapid Up/Down cycles) for underlay stability issues. Correlate VTEP failures with BGP and physical link events.",
              "z": "Status grid (VTEP peer matrix by host), Table (down peers), Timechart (peer state changes), Single value (healthy VTEP peer count).",
              "kfp": "**planned_maintenance_holddown** — Maintenance windows that admin-shut **NVE** interfaces or **EVPN** neighbors under **change** control will mark **peer_state** **Down** in **`nxos:nve_peers`** while **CMDB** still shows production. Honor **suppress_until** lists and **CHG** tags before paging overlay owners.\n\n**issu_fast_reload_blip** — **ISSU**/supervisor switchovers may omit one **NX-API** **poll**, producing **`low_clip=0`** in a single **5m** bucket even though **`show nve peers`** stabilizes seconds later. Require **consecutive** buckets or **`fl_sys`** corroboration before **Sev-1** bridges.\n\n**bgp_soft_reconfig_noise** — Mass **`soft-reconfiguration`** **inbound** on route-reflector clusters can leave **BGP** **Idle**/**Active** for a few **cycles** without **dataplane** loss. Pair **`bgp_fault`** with **`peer_state_out`** **Down** or **`fl_tun`** before declaring **VXLAN** **blackholes**.\n\n**telemetry_alias_drift** — **TA** upgrades occasionally rename **`peer_oper_state`** fields; **`up_ok`** becomes **null** and **`low_clip`** misfires. Run **field** summaries after every Splunk **deploy** wave and keep **`FIELDALIAS`** for legacy **JSON** keys.\n\n**counter_clear_events** — Operators **`clear counters interface nve1`** reset **drop** banks; **`ctr_fault`** spikes once while **peers** stay **Up**. Ignore **one** **interval** **`fl_ctr`** when **change** notes mention **counter** clears.\n\n**vpc_role_asymmetry** — **vPC** peers can show **transient** **data-plane** learn types during **role** negotiation; **`peer_learn_type`** may flap without **reachability** loss. Compare **`uptime`** trends and **underlay** **ping** meshes before **RMA** **hardware**.\n\n**acronym_collision_syslog** — Non-**NVE** messages containing **VXLAN** marketing strings may match **regex** filters. Narrow **`syslog_fault`** to **`%NVE`** **facility** patterns or **exclude** test **VRF** **tags**.\n\n**stale_tunnel_model** — Some **NMS** **tunnels** lag **CLI** by several **minutes**; **`fl_tun`** alone should not outrank **`nxos:nve_peers`** **Up**. Weight **tunnel** sources lower unless **both** agree.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input, NX-OS/EOS syslog.\n• Ensure the following data sources are available: `show nve peers` (NX-OS), `show vxlan vtep` (EOS), syslog VTEP events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun scripted input every 60 seconds to execute `show nve peers` (Cisco NX-OS) or `show vxlan vtep` (Arista EOS) via SSH/API. Parse peer IP, state, and VNI. Ingest syslog for VTEP state-change events (e.g., NVE peer down, BGP session lost). Create sourcetype with field extractions for peer_ip, state, vni, host. Alert immediately when any VTEP peer transitions to Down. Track peer flapping (rapid Up/Down cycles) for underlay stability issues. Correlate VTEP failures with BGP and physical link event…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"cisco:nxos:nve_peers\" OR sourcetype=\"arista:eos:vxlan_vtep\" OR sourcetype=\"syslog\")\n| search (nve OR vtep OR \"NVE peer\" OR \"VTEP\")\n| eval peer_status=case(\n    like(_raw, \"%Up%\") OR like(_raw, \"%up%\") OR like(_raw, \"%established%\"), \"Up\",\n    like(_raw, \"%Down%\") OR like(_raw, \"%down%\") OR like(_raw, \"%failed%\"), \"Down\",\n    like(_raw, \"%Init%\") OR like(_raw, \"%init%\"), \"Init\",\n    1==1, \"Unknown\")\n| rex field=_raw \"peer\\s+(?<peer_ip>\\d+\\.\\d+\\.\\d+\\.\\d+)|(?<peer_ip>\\d+\\.\\d+\\.\\d+\\.\\d+)\\s+.*?(?<state>\\w+)\"\n| where peer_status!=\"Up\" OR isnull(peer_ip)\n| stats latest(peer_status) as status, latest(_time) as last_seen by host, peer_ip, vni\n| table host, peer_ip, vni, status, last_seen\n```\n\nUnderstanding this SPL\n\n**VXLAN VTEP Reachability** — VTEP (VXLAN Tunnel Endpoint) peers form the overlay mesh in VXLAN fabrics. When a VTEP goes down, tenant segments lose connectivity and workloads become isolated. Monitoring VTEP reachability ensures overlay health and enables rapid detection of NVE failures before tenant impact.\n\nDocumented **Data sources**: `show nve peers` (NX-OS), `show vxlan vtep` (EOS), syslog VTEP events. **App/TA** (typical add-on context): SNMP modular input, NX-OS/EOS syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:nxos:nve_peers, arista:eos:vxlan_vtep, syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:nxos:nve_peers\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **peer_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where peer_status!=\"Up\" OR isnull(peer_ip)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, peer_ip, vni** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **VXLAN VTEP Reachability**): table host, peer_ip, vni, status, last_seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (VTEP peer matrix by host), Table (down peers), Timechart (peer state changes), Single value (healthy VTEP peer count).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch the tunnel endpoints that stitch your switches together for app traffic, so broken neighbors show up early and you can fix them before people feel disconnects.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_nexus",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.10",
              "n": "EVPN Route Type Distribution",
              "c": "medium",
              "f": "advanced",
              "v": "EVPN route table growth (Type-2 MAC/IP and Type-5 IP prefix routes) impacts control-plane memory and convergence time. Trending route counts by type supports capacity planning, identifies runaway growth (e.g., VM sprawl, IP prefix leakage), and helps size fabric hardware for future scale.",
              "t": "Custom scripted input (`show bgp l2vpn evpn summary`)",
              "d": "BGP EVPN route table counts per type",
              "q": "index=network sourcetype=\"evpn:route_summary\"\n| eval route_type=case(\n    type==\"2\" OR type==\"mac_ip\", \"Type2_MAC_IP\",\n    type==\"3\" OR type==\"imcast\", \"Type3_IMET\",\n    type==\"5\" OR type==\"ip_prefix\", \"Type5_IP_Prefix\",\n    1==1, \"Other\")\n| timechart span=1h latest(count) as count by route_type",
              "m": "Deploy scripted input to run `show bgp l2vpn evpn summary` or equivalent (e.g., `show bgp evpn summary` on Arista) on each leaf every 5–15 minutes via SSH or eAPI. Parse route counts by type: Type-2 (MAC/IP), Type-3 (IMET), Type-5 (IP prefix). Ingest into Splunk with host, route_type, count, and timestamp. Baseline normal growth rates per VNI/tenant. Alert on sudden spikes (>20% in 1 hour) or sustained growth exceeding hardware limits. Report on top VNIs by route count for cleanup and capacity planning.",
              "z": "Timechart (route count by type over time), Table (current counts by host and type), Single value (total EVPN routes), Bar chart (route growth rate by type).",
              "kfp": "**rr_readvert_wave** — Spinning route-reflector clusters or collapsing RR partitions forces every leaf to re-readvertise IMET and MAC/IP routes even though dataplane forwarding stayed warm; Splunk rows cluster on MAINT tickets—suppress until **`network:bgp_neighbor`** shows Established uptime climbing while withdraw counters remain flat.\n\n**imet_scope_expansion** — Operations widen the BUM domain or add multi-site segments, so Type-3 totals step once while Type-2 barely moves; treat as noise when design docs cite PIM-SSM or ingress-replication edits in the same hour and Type-5 share is unchanged.\n\n**collector_field_rename_skew** — A minor TA upgrade renames **`evpn_route_type`** to **`nlri_family`** without updating inputs; RT_OTHER balloons while CLI still looks fine—fix props before paging capacity.\n\n**l3vpn_import_shuffle** — VRF import map refactors replay Type-5 prefixes while MAC tables sleep; correlate peaks on RT5_IP_PREFIX with vrf_key lists that match WAN team change IDs, not server sprawl.\n\n**telemetry_duplicate_streams** — Accidentally ingesting both **`nxos:bgp_evpn`** and a legacy **`network:evpn_routes`** stream for the same leaf doubles counts until dedup—validate dc(leaf) per poll window equals one before trusting growth signals.\n\n**graceful_restart_stale_echo** — RFC 4724 helper modes keep STALE paths visible; route totals may read high versus what the RIB actually uses—cross-check with **`show bgp l2vpn evpn route`** output before buying hardware.\n\n**centralized_irb_cutover** — Moving default-gateway anycast IRB between designs inflates Type-2 replays without new VMs—match Splunk timestamps to symmetric_irb CAB tokens before blaming tenants.\n\n**synthetic_lab_bgp_reset** — Certification labs repeatedly clear BGP neighbors to teach failover behavior; route-type totals oscillate wildly without production impact—cross-check lab device names against a lab_allow.csv lookup before paging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input (`show bgp l2vpn evpn summary`).\n• Ensure the following data sources are available: BGP EVPN route table counts per type.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy scripted input to run `show bgp l2vpn evpn summary` or equivalent (e.g., `show bgp evpn summary` on Arista) on each leaf every 5–15 minutes via SSH or eAPI. Parse route counts by type: Type-2 (MAC/IP), Type-3 (IMET), Type-5 (IP prefix). Ingest into Splunk with host, route_type, count, and timestamp. Baseline normal growth rates per VNI/tenant. Alert on sudden spikes (>20% in 1 hour) or sustained growth exceeding hardware limits. Report on top VNIs by route count for cleanup and capacity p…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"evpn:route_summary\"\n| eval route_type=case(\n    type==\"2\" OR type==\"mac_ip\", \"Type2_MAC_IP\",\n    type==\"3\" OR type==\"imcast\", \"Type3_IMET\",\n    type==\"5\" OR type==\"ip_prefix\", \"Type5_IP_Prefix\",\n    1==1, \"Other\")\n| timechart span=1h latest(count) as count by route_type\n```\n\nUnderstanding this SPL\n\n**EVPN Route Type Distribution** — EVPN route table growth (Type-2 MAC/IP and Type-5 IP prefix routes) impacts control-plane memory and convergence time. Trending route counts by type supports capacity planning, identifies runaway growth (e.g., VM sprawl, IP prefix leakage), and helps size fabric hardware for future scale.\n\nDocumented **Data sources**: BGP EVPN route table counts per type. **App/TA** (typical add-on context): Custom scripted input (`show bgp l2vpn evpn summary`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: evpn:route_summary. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"evpn:route_summary\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **route_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by route_type** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (route count by type over time), Table (current counts by host and type), Single value (total EVPN routes), Bar chart (route growth rate by type).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We map how many Type-2, Type-3, and Type-5 routes pile up in each fabric pocket and whether the mix tilts overnight. When one bucket swells out of character we say so early—before switches get short on control-plane memory or slow neighbor learning.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.11",
              "n": "EVPN/VXLAN Tunnel Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Unified view of BGP EVPN tunnel state and VXLAN encapsulation errors per VNI — bridges overlay protocols (complements UC-18.3.4 and UC-18.3.9).",
              "t": "`TA-cisco_ios` (NX-OS), `arista:eos` via SC4S, BGP telemetry",
              "d": "`evpn:bgp`, `vxlan:tunnel`",
              "q": "index=network (sourcetype=\"evpn:bgp\" OR sourcetype=\"vxlan:tunnel\") earliest=-24h\n| eval bad=if(match(lower(state),\"(?i)down|idle\") OR error_count>0,1,0)\n| where bad=1\n| stats latest(state) as st, sum(error_count) as err by vni, peer_ip, leaf\n| table leaf, peer_ip, vni, st, err",
              "m": "Normalize peer and VNI from vendor logs. Alert on any down EVPN session carrying VXLAN for production VNIs.",
              "z": "Table (unhealthy tunnels), Geo/leaf map, Line chart (error count).",
              "kfp": "**vtep_bounce_during_issu** — In-service software upgrades and supervisor switchovers on Nexus leaves deliberately bounce NVE peers for tens of seconds while forwarding tables reconverge; Splunk shows peer_down spikes that mirror reload banners even though no customer VLAN ever dropped for minutes. Align positives with stamped CHG identifiers, suppress on device tags until BGP Established timers exceed the vendor soak recommendation, and require syslog corroboration that decap_errors_total stayed flat before treating the event as a false crisis.\n\n**nve_counter_wrap_firmware_cutover** — Certain NX-OS cutovers reset 32-bit VXLAN error counters to zero then immediately overflow small test traffic in labs, producing absurd encap_ratio math for one poll. Compare against ASIC snapshots, widen bin span to fifteen minutes during image weekends, and ignore single-bin outliers when REST poll_phase metadata shows firmware_commit_in_progress.\n\n**path_mtu_renegotiation_blip** — Uplinks moving from 10G to 25G or WAN partners adjusting TCP MSS can transiently inflate decap_errors_total while ICMP too-big messages propagate; cross-check with PATH MTU traces and interface error counters on both ends before blaming EVPN signaling.\n\n**evpn_gr_overlay_stale_marker** — BGP graceful restart leaves EVPN routes in STALE or RIB-frozen states that look ominous in dashboards even while VXLAN dataplane keeps forwarding; differentiate using bgp_neighbor GR flags, stale timers, and whether nve_peer_state remained Up across the entire window.\n\n**staging_l2vni_batch_provision** — Automation waves pre-create hundreds of L2VNIs across leaves ahead of tenant attach; brief vni_down flicker appears on unused segments until access ports map in. Gate alerts with staging_l2vni_batch_provision lookups keyed on vni_key ranges tied to known build tickets.\n\n**split_horizon_learn_guard_ramp** — Turning on additional split-horizon or BGP-EVPN proxy parameters forces cache relearns that mimic dataplane_counter_surge until timers expire; validate against recent config deltas and require encap_errors_total growth on more than two consecutive bins.\n\n**rest_poll_phase_skew_dup_peers** — Mis-tuned REST clients occasionally double-post the same poll into HEC, cloning identical peer rows within the same second—Splunk doubles encap_errors_total without physical faults. Deduplicate on leaf, peer_ip, _raw checksum, and sourcetype before paging; fix the forwarder batching bug instead of dispatching bridge engineers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios` (NX-OS), `arista:eos` via SC4S, BGP telemetry.\n• Ensure the following data sources are available: `evpn:bgp`, `vxlan:tunnel`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize peer and VNI from vendor logs. Alert on any down EVPN session carrying VXLAN for production VNIs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"evpn:bgp\" OR sourcetype=\"vxlan:tunnel\") earliest=-24h\n| eval bad=if(match(lower(state),\"(?i)down|idle\") OR error_count>0,1,0)\n| where bad=1\n| stats latest(state) as st, sum(error_count) as err by vni, peer_ip, leaf\n| table leaf, peer_ip, vni, st, err\n```\n\nUnderstanding this SPL\n\n**EVPN/VXLAN Tunnel Health** — Unified view of BGP EVPN tunnel state and VXLAN encapsulation errors per VNI — bridges overlay protocols (complements UC-18.3.4 and UC-18.3.9).\n\nDocumented **Data sources**: `evpn:bgp`, `vxlan:tunnel`. **App/TA** (typical add-on context): `TA-cisco_ios` (NX-OS), `arista:eos` via SC4S, BGP telemetry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: evpn:bgp, vxlan:tunnel. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"evpn:bgp\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **bad** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where bad=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by vni, peer_ip, leaf** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **EVPN/VXLAN Tunnel Health**): table leaf, peer_ip, vni, st, err\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unhealthy tunnels), Geo/leaf map, Line chart (error count).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus, Arista",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We blend the control-plane story with the dataplane counters so a wobbly VTEP peer or growing encapsulation errors show up in one place before apps stall or VLANs start flapping.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ios",
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                },
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.12",
              "n": "SDN Controller High Availability",
              "c": "critical",
              "f": "advanced",
              "v": "Quorum, leader election, and data-store sync for controller clusters — extends generic health (UC-18.3.3) with **HA-specific** failover signals.",
              "t": "`sdn:controller`, OpenDaylight, ONOS (custom)",
              "d": "`sdn:controller_ha`",
              "q": "index=sdn sourcetype=\"sdn:controller_ha\" earliest=-24h\n| where quorum_ok=0 OR leader_id!=expected_leader\n| stats latest(quorum_ok) as q, latest(leader_id) as leader by cluster_name\n| table cluster_name, q, leader",
              "m": "Map `expected_leader` from static config lookup. Alert on quorum loss or rogue leader.",
              "z": "Status grid (cluster), Timeline (failover events), Single value (cluster up).",
              "kfp": "**planned_controller_swaps** — Rolling APIC replacements or NSX Manager VM refreshes deliberately walk nodes through standby transitions; Splunk may show leader_latest churn and short db_sync_state noise while Raft or Corfu-style stores catch up. Require maint_suppress on expected_sdn_ha_clusters or an ITSM change token before paging, and confirm cluster_status_latest returns to healthy across two polling windows.\n\n**vip_dns_ttl_caching** — After VIP failovers, stale DNS on a collector can point HTTPS polls at the wrong appliance, fabricating leader_gap while the vendor UI on correct IPs looks clean. Flush resolver caches on rest_ta hosts, validate forward and reverse DNS against the VIP, and compare per-member polls before escalating fabric teams.\n\n**certificate_rotation_blip** — When APIC or NSX client-auth certificates rotate, REST modular inputs can return SSL handshake failures that resemble cluster outage in logs even though db_sync_state is fine. Watch rest_ta health metrics and align rotation ceremonies with Splunk credential updates to avoid false bridges.\n\n**lab_chaos_injection** — Chaos drills that force partitioned networks or delayed heartbeats generate split-brain text without production impact. Tag environment=lab or route those indexes to a non-paged workspace.\n\n**telemetry_sampling_skips** — Aggressive sample filters on network:sdn_syslog can drop leader change lines while REST still captures state transitions, creating syslog_ha_signal gaps. Validate props.conf LINE_BREAKER and TRANSFORMS that might truncate multi-line HA traps.\n\n**ndfc_service_recycle** — NDFC minor service bounces can spike cisco:ndfc:cluster warnings that self-heal within minutes; correlate with Cisco maintenance advisories and require sustained risk_peak across three bins before sev-1.\n\n**apic_readonly_rbac_limits** — Some read-only APIC accounts cannot reach every MO path used for cluster health JSON, producing empty cisco:aci:cluster_health rows that look like missing controllers. Test AAA domains and capabilities separately from the Splunk service account before alert go-live.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `sdn:controller`, OpenDaylight, ONOS (custom).\n• Ensure the following data sources are available: `sdn:controller_ha`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `expected_leader` from static config lookup. Alert on quorum loss or rogue leader.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdn sourcetype=\"sdn:controller_ha\" earliest=-24h\n| where quorum_ok=0 OR leader_id!=expected_leader\n| stats latest(quorum_ok) as q, latest(leader_id) as leader by cluster_name\n| table cluster_name, q, leader\n```\n\nUnderstanding this SPL\n\n**SDN Controller High Availability** — Quorum, leader election, and data-store sync for controller clusters — extends generic health (UC-18.3.3) with **HA-specific** failover signals.\n\nDocumented **Data sources**: `sdn:controller_ha`. **App/TA** (typical add-on context): `sdn:controller`, OpenDaylight, ONOS (custom). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdn; **sourcetype**: sdn:controller_ha. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdn, sourcetype=\"sdn:controller_ha\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where quorum_ok=0 OR leader_id!=expected_leader` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **SDN Controller High Availability**): table cluster_name, q, leader\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (cluster), Timeline (failover events), Single value (cluster up).",
              "script": "",
              "premium": "",
              "hw": "SDN controller cluster",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch your software-defined control clusters so a lost vote, wrong captain, or slow shared notebook shows up before apps stall or policy stops publishing.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.13",
              "n": "Fabric Upgrade Compliance",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks switch OS / ACI firmware versions against approved upgrade wave — identifies stragglers and unsupported trains.",
              "t": "Inventory scripted input, SNMP",
              "d": "`network:inventory`",
              "q": "index=inventory sourcetype=\"network:inventory\" role IN (\"leaf\",\"spine\")\n| lookup fabric_target_version.csv platform OUTPUT target_version\n| where os_version!=target_version\n| stats count by site, os_version, target_version\n| sort -count",
              "m": "Refresh inventory daily. Drive remediation campaigns for nodes not on target.",
              "z": "Table (non-compliant nodes), Pie chart (compliance %), Bar chart (by site).",
              "kfp": "**Certification lab beta trains** — validation switches intentionally run not-yet-approved builds stamped beta or cert inside CMDB environment=lab; Splunk surfaces Non-Compliant rows though networking labs expect divergence—suppress using lab_fabric tokens plus CMDB validation=true markers referencing certification bridges tied to vendor QA workflows.\n\n**Pre-production staging fabrics** — engineers deliberately pin N-minus-one or N-minus-two builds outside production wave calendars until cutover weekends execute—tie suppression macros to staging snow CHG families instead of paging raw mismatches alone during rehearsal weekends or load-sim weekends that never touch customer traffic paths.\n\n**Cold spare sleds** — warehouse shelves remain pinned on older trains until rack insertion tickets attach fabric_id markers—exclude inventory_state spare rows from Sev-3 paging queues until assets attach to production fabrics so procurement staging shelves stop ringing ops bridges nightly.\n\n**Quarantined RMA hold** — devices remain on end-of-support software while awaiting field replace shipments documented inside risk acceptance memos—join quarantine lookups plus deferral CHG identifiers before escalating lifecycle breaches attributed strictly to firmware drift when logistics—not policy—controls the timeline.\n\n**Matched-version DR pairs** — disaster-recovery fabrics deliberately mirror paired builds across metro rings although calendar waves differ—suppress drift via dr_site parity annotations referencing matched_pair_wave planners rather than labeling rogue posture during deliberate symmetry drills or multi-hour failback rehearsals.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1542.005"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Inventory scripted input, SNMP.\n• Ensure the following data sources are available: `network:inventory`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRefresh inventory daily. Drive remediation campaigns for nodes not on target.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=inventory sourcetype=\"network:inventory\" role IN (\"leaf\",\"spine\")\n| lookup fabric_target_version.csv platform OUTPUT target_version\n| where os_version!=target_version\n| stats count by site, os_version, target_version\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Fabric Upgrade Compliance** — Tracks switch OS / ACI firmware versions against approved upgrade wave — identifies stragglers and unsupported trains.\n\nDocumented **Data sources**: `network:inventory`. **App/TA** (typical add-on context): Inventory scripted input, SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: inventory; **sourcetype**: network:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=inventory, sourcetype=\"network:inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where os_version!=target_version` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by site, os_version, target_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant nodes), Pie chart (compliance %), Bar chart (by site).",
              "script": "",
              "premium": "",
              "hw": "Leaf/spine switches",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We line up each switch's software version against the rollout wave your engineers scheduled—even when vendors differ—and call attention when something stays behind too long or nears the date vendors stop fixing problems. That heads off breakage from mismatched trains and closes patching gaps earlier.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.14",
              "n": "Spine-Leaf Topology Anomalies",
              "c": "high",
              "f": "intermediate",
              "v": "Detects unexpected BGP neighbor loss, missing spine links, or asymmetric ECMP paths in Clos topology.",
              "t": "BGP syslog, LLDP",
              "d": "`bgp:neighbor`, `lldp:topology`",
              "q": "index=network sourcetype=\"bgp:neighbor\" earliest=-4h\n| where state!=\"Established\"\n| lookup expected_bgp_peers.csv local_host peer_ip OUTPUT 1 as expected\n| where isnotnull(expected)\n| stats count by local_host, peer_ip, state, reason\n| sort -count",
              "m": "Maintain `expected_peers.csv` from design. Alert on any spine-leaf session not Established.",
              "z": "Graph (topology violations), Table (bad neighbors), Timeline.",
              "kfp": "**spine_pod_expansion_window** — Extra spine turns up with uplinks Idle until optics seat—suppress via spine_pod_expansion_window CMDB tags plus staged-spine CHG tickets instead of paging topology_integrity_hit until cabling milestones finish.\n\n**greenfield_leaf_addition** — Collectors arrive before expected_bgp_peers.csv catches new hostnames—brief absence races provisioning—stamp greenfield_leaf_addition referencing poap_bootstrap_window until design_expect parity closes.\n\n**bgp_graceful_restart_helper** — Spine ISSU keeps forwarding warm while BGP stalls helper-side—sessions diverge briefly from Established—suppress bgp_design_session_not_established_persist when MaintenanceWindow cites ISSU rehearsal CHGs plus soak timers.\n\n**lldp_refresh_skew** — EOS versus NX-OS TTL cadences produce momentary neighbor gaps shorter than unplug faults—raise dwell thresholds before lldp_doc_drift pages unless optics-loss syslog corroborates.\n\n**vpc_peer_link_misclassified** — Without mlag_pair_lookup.csv reloads, vPC peer-link or MLAG stitches resemble forbidden lateral pairs—reload ServiceNow-fed lookups before paging forbidden_leaf_leaf_suspect lacking mlag_exclusion.\n\n**cable_relocate_maintenance** — Documented spine patch moves flip sessions DOWN then Established—suppress until cable_relocate_maintenance CHG closure aligns with MaintenanceWindow cmdb peer_ip relocation notes.\n\n**poap_bootstrap_window** — ZTP bootstrap lets Sonic or Cumulus identities oscillate until ASN stamps finalize—pair with greenfield_leaf_addition so Idle dwell never reads as spine loss.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BGP syslog, LLDP.\n• Ensure the following data sources are available: `bgp:neighbor`, `lldp:topology`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain `expected_peers.csv` from design. Alert on any spine-leaf session not Established.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"bgp:neighbor\" earliest=-4h\n| where state!=\"Established\"\n| lookup expected_bgp_peers.csv local_host peer_ip OUTPUT 1 as expected\n| where isnotnull(expected)\n| stats count by local_host, peer_ip, state, reason\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Spine-Leaf Topology Anomalies** — Detects unexpected BGP neighbor loss, missing spine links, or asymmetric ECMP paths in Clos topology.\n\nDocumented **Data sources**: `bgp:neighbor`, `lldp:topology`. **App/TA** (typical add-on context): BGP syslog, LLDP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: bgp:neighbor. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"bgp:neighbor\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where state!=\"Established\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(expected)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by local_host, peer_ip, state, reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Graph (topology violations), Table (bad neighbors), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep the hall-wide wiring sketch so each rack switch plug lines up with neighbor plugs on paper; when reality slides away long enough, we speak up before packets wander uneven lanes.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.15",
              "n": "BGP EVPN Route Table Convergence",
              "c": "high",
              "f": "advanced",
              "v": "Measures time-to-stable route count after churn events (link bounce, leaf reboot) — complements route count trending (UC-18.3.10).",
              "t": "BGP monitor, `evpn:route_summary`",
              "d": "`evpn:route_summary`",
              "q": "index=network sourcetype=\"evpn:route_summary\" earliest=-24h\n| sort 0 host _time\n| streamstats global=f last(total_routes) as prev_routes by host\n| eval churn=abs(total_routes-prev_routes)\n| where churn>500\n| table _time, host, total_routes, prev_routes, churn",
              "m": "Simplify: alert on `total_routes` delta spikes; use scripted convergence test after maintenance. Tune thresholds to fabric size.",
              "z": "Line chart (total routes), Table (churn events), Bar chart (max churn by leaf).",
              "kfp": "**issu_grace_window** — In-service upgrades keep BGP and EVPN tables warm while counters breathe; Splunk may show transient route_delta without customer impact until sessions return to Established. Pair alerts with MaintenanceWindow lookups and suppress when change records cite ISSU rehearsals without a dataplane fault.\n\n**spine_rollout_surge** — Adding spines or reweighting ECMP floods fresh IMET and MAC/IP advertisements across every leaf even though forwarding never blackholed. Expect route_delta spikes tied to documented spine CHG tokens—raise ceilings in evpn_conv_sla.csv instead of paging convergence SLO breaches alone.\n\n**collector_desync_minute** — REST collectors that lag syslog by one poll minute make routes_max lag CLI totals briefly; validate against _indextime skew before blaming slow BGP toward neighbors that stayed Established.\n\n**rr_partition_relearn** — Route-reflector fabric redesign forces every leaf to relearn EVPN paths without link faults; convergence_timer may stay low while route_delta explodes. Cross-reference CAB notes before treating the pattern as underlay failure.\n\n**ztp_hostname_drift** — Zero-touch leaves sometimes publish syslog hostnames that differ from NX-API management names until bootstrap completes; duplicates or missing joins mimic slow convergence until normalization macros align identities.\n\n**duplicate_poll_streams** — Two modular inputs hitting the same leaf double min/max spread and inflate route_delta. Run inventory audits whenever REST stanza copies proliferate across forwarders without coordination.\n\n**telemetry_spike_after_lab** — Certification labs replay churn scripts beside production indexes when VLANs leak; bursts vanish once lab VLAN ACLs tighten. Filter lab device names with allow or deny lists tied to CMDB site codes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BGP monitor, `evpn:route_summary`.\n• Ensure the following data sources are available: `evpn:route_summary`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSimplify: alert on `total_routes` delta spikes; use scripted convergence test after maintenance. Tune thresholds to fabric size.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"evpn:route_summary\" earliest=-24h\n| sort 0 host _time\n| streamstats global=f last(total_routes) as prev_routes by host\n| eval churn=abs(total_routes-prev_routes)\n| where churn>500\n| table _time, host, total_routes, prev_routes, churn\n```\n\nUnderstanding this SPL\n\n**BGP EVPN Route Table Convergence** — Measures time-to-stable route count after churn events (link bounce, leaf reboot) — complements route count trending (UC-18.3.10).\n\nDocumented **Data sources**: `evpn:route_summary`. **App/TA** (typical add-on context): BGP monitor, `evpn:route_summary`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: evpn:route_summary. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"evpn:route_summary\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **churn** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where churn>500` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **BGP EVPN Route Table Convergence**): table _time, host, total_routes, prev_routes, churn\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (total routes), Table (churn events), Bar chart (max churn by leaf).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "EVPN/VXLAN leafs",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We clock how long the shared route list keeps wobbling after a cable flinch or switch nap, so your crews know the fabric finished reshuffling east-west paths before tenant apps feel pain.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.16",
              "n": "VTEP Reachability and Loss",
              "c": "high",
              "f": "intermediate",
              "v": "Packet loss and latency between VTEP peers — augments UC-18.3.9 state-only checks.",
              "t": "`Splunk_TA_nix`, `TA-cisco_ios`, `arista:eos` via SC4S, ICMP probes, SNMP",
              "d": "`vtep:probe` or synthetic tests",
              "q": "index=network sourcetype=\"vtep:probe\" earliest=-24h\n| where loss_pct>2 OR latency_ms>10\n| stats avg(loss_pct) as avg_loss, avg(latency_ms) as avg_lat by src_vtep, dst_vtep\n| sort -avg_loss\n| head 20",
              "m": "Run periodic probes between TEP IPs from automation. Correlate with underlay QoS drops.",
              "z": "Heatmap (VTEP × VTEP loss), Table (worst pairs), Line chart (loss trend).",
              "kfp": "**mt_scheduled_ping** — Maintenance windows sometimes pause orchestrated **`ping nve`** jobs while data plane keeps forwarding; Splunk shows stale **`network:ping_results`** gaps that look like loss until `_indextime` proves the probe stopped. Pair gaps with change tickets before paging overlay owners.\n\n**icmp_priority_deny** — Control-plane policers or CoPP can silently drop ICMP toward VTEP loopbacks even when VXLAN encapsulation is healthy; `worst_rtt_ms` spikes without `encap_errors` growth often trace to classification updates rather than fabric faults. Validate with scripted TCP or BFD-sidecars before blaming underlay optics.\n\n**counter_reset_window** — `show nve interface` counter clears during linecard restarts produce synthetic decap deceleration in `nxos:nve_counters` for a single poll; ignore single-bin decap_sum spikes when syslog shows %CMGR- or module reload markers in the same minute bucket.\n\n**lldp_hostname_drift** — ZTP or partial bootstrap can leave syslog hostnames mismatched to NX-API `host` fields until DNS catches up; duplicate leaf rows or missing joins mimic VTEP skew until normalization macros align identities.\n\n**asymmetric_ecmp_probe** — Hash-selected ICMP paths may traverse colder uplinks while production VXLAN flows stay balanced; loss_pct from probes can over-state tenant impact unless you compare against bidirectional samples or multi-source pings.\n\n**sup_spine_upgrade** — Planned spine rotations flood transient NVE syslog warnings as RIBs relearn even though tunnel endpoints stay Up; widen suppression when CAB references SUP spine change tokens without dataplane blackholes.\n\n**telemetry_batching_skew** — Heavy forwarders batching multiple polls into one second can compress `loss_pct` math; when in doubt, compare against CLI `ping nve` manual spot checks for the same tuple.\n\n**duplicate_collector_polls** — Two modular inputs hitting the same leaf double encap_sum/decap_sum and inflate `signal_peak`. Run forwarder audits whenever REST stanzas proliferate without ownership.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, `TA-cisco_ios`, `arista:eos` via SC4S, ICMP probes, SNMP.\n• Ensure the following data sources are available: `vtep:probe` or synthetic tests.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRun periodic probes between TEP IPs from automation. Correlate with underlay QoS drops.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"vtep:probe\" earliest=-24h\n| where loss_pct>2 OR latency_ms>10\n| stats avg(loss_pct) as avg_loss, avg(latency_ms) as avg_lat by src_vtep, dst_vtep\n| sort -avg_loss\n| head 20\n```\n\nUnderstanding this SPL\n\n**VTEP Reachability and Loss** — Packet loss and latency between VTEP peers — augments UC-18.3.9 state-only checks.\n\nDocumented **Data sources**: `vtep:probe` or synthetic tests. **App/TA** (typical add-on context): `Splunk_TA_nix`, `TA-cisco_ios`, `arista:eos` via SC4S, ICMP probes, SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: vtep:probe. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"vtep:probe\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where loss_pct>2 OR latency_ms>10` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_vtep, dst_vtep** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (VTEP × VTEP loss), Table (worst pairs), Line chart (loss trend).",
              "script": "",
              "premium": "",
              "hw": "VXLAN-capable switches",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch the invisible tunnels between switches for sluggish replies and dropped ping tests so noisy fabrics get fixed before your apps stutter or time out.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "linux",
                "snmp"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                },
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.17",
              "n": "Leaf Switch Resource Utilization",
              "c": "high",
              "f": "beginner",
              "v": "CPU, memory, and forwarding table use on leaf switches — prevents control-plane overload and FIB exhaustion.",
              "t": "SNMP modular input, NX-API, `TA-cisco_ios`, `arista:eos` via SC4S",
              "d": "`snmp:cpu`, `snmp:mem`, `hw:forwarding`",
              "q": "index=snmp sourcetype=\"snmp:cpu\" role=\"leaf\" earliest=-1h\n| eval use=cpu_pct\n| append [ search index=snmp sourcetype=\"snmp:mem\" role=\"leaf\" earliest=-1h | eval use=mem_pct ]\n| stats avg(use) as avg_use max(use) as peak by host\n| where peak>85\n| table host, avg_use, peak",
              "m": "Add FIB/ARP scale via `show forwarding` scripted input. Alert on sustained high CPU with EVPN churn.",
              "z": "Heatmap (leaf × metric), Table (top peaks), Line chart (utilization).",
              "kfp": "**planned_linecard_upgrade** — During ISSU or linecard replacements, `show hardware capacity` may report transient zeros or split totals across old and new ASICs while the supervisor reconciles banks. Splunk can show a fake **fib_util_peak** dip-then-spike pattern. Require two consecutive polls after declared maintenance complete, or honor `suppress_until` on **fabric_leaf_inventory** before paging capacity teams.\n\n**bgp_refresh_after_policy** — Mass concurrent soft-inbound refreshes when controllers publish broad retract/announce churn can pin CPU high for minutes without any hardware table breach. Peaks align with `network:bgp_neighbor` state flutters but forwarding remains sound. Correlate with automation job IDs, damp CPU-only alerts during known publish windows, and require **fib_b** or **tcam_b** before hardware escalations.\n\n**telemetry_alias_drift** — TA upgrades occasionally rename `cpu_utilization` or TCAM fields; `tonumber()` yields nulls so breaches vanish although the CLI still looks hot. Run field-summary dashboards after every TA push, keep **FIELDALIAS** stanzas for legacy sourcetypes, and alert on sudden null-rate increases—not only on numeric thresholds.\n\n**cold_standby_supervisor** — On dual-sup chassis, standby polls may return idle CPU while active forwarding happens on the peer. Mis-tagging **hostname** without **active_role** can flatten **cpu_peak** misleadingly. Enrich events with **redundancy_state** when available and filter standbys from capacity tiles meant for traffic-forwarding units only.\n\n**nms_false_critical** — Third-party **network:switch_health** probes sometimes mark CRITICAL when SNMP communities rotate or proxy ACLs block UDP, not when TCAM fills. Cross-check native **nxos:system_resources** before acting, and include probe collector_id in tickets so network and monitoring ownership stays clear.\n\n**evpn_type5_bursts** — Sudden Type-5 advertisement storms expand RIB→FIB programming queues and inflate CPU without immediate TCAM exhaustion. **bgp_churn** may trip while **fib_util** is still modest. Treat as control-plane contention: widen investigation SPL with `show bgp l2vpn evpn summary` counters and throttle repeat alerts per **leaf** until prefixes stabilize.\n\n**eos_bank_mapping_gap** — Arista platform releases occasionally rename TCAM bank labels; **vendor_capacity_map** rows go stale and **tcam_util** reads artificially low. Refresh the CSV after EOS upgrades, validate against one manual `show platform` snapshot per pod, and tag events with **eos_version** for quicker comparisons.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP modular input, NX-API, `TA-cisco_ios`, `arista:eos` via SC4S.\n• Ensure the following data sources are available: `snmp:cpu`, `snmp:mem`, `hw:forwarding`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAdd FIB/ARP scale via `show forwarding` scripted input. Alert on sustained high CPU with EVPN churn.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=snmp sourcetype=\"snmp:cpu\" role=\"leaf\" earliest=-1h\n| eval use=cpu_pct\n| append [ search index=snmp sourcetype=\"snmp:mem\" role=\"leaf\" earliest=-1h | eval use=mem_pct ]\n| stats avg(use) as avg_use max(use) as peak by host\n| where peak>85\n| table host, avg_use, peak\n```\n\nUnderstanding this SPL\n\n**Leaf Switch Resource Utilization** — CPU, memory, and forwarding table use on leaf switches — prevents control-plane overload and FIB exhaustion.\n\nDocumented **Data sources**: `snmp:cpu`, `snmp:mem`, `hw:forwarding`. **App/TA** (typical add-on context): SNMP modular input, NX-API, `TA-cisco_ios`, `arista:eos` via SC4S. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: snmp; **sourcetype**: snmp:cpu. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=snmp, sourcetype=\"snmp:cpu\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **use** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where peak>85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Leaf Switch Resource Utilization**): table host, avg_use, peak\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (leaf × metric), Table (top peaks), Line chart (utilization).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Cisco/Arista leafs",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch how hard each leaf works inside—busy brains, memory use, and how full the hardware route books are—so we fix strains before apps stumble or east-west traffic suffers.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "snmp"
              ],
              "em": [
                "cisco_ios",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.18",
              "n": "BGP EVPN Route Withdrawal Rate and Flap Storms",
              "c": "critical",
              "f": "advanced",
              "v": "Withdrawal storms shrink effective ECMP sets and extend convergence after link or node events, which shows up as application timeouts and storage path loss. Measuring withdrawal velocity per peer differentiates normal housekeeping from dangerous churn in the EVPN control plane.",
              "t": "Custom BGP monitor, `TA-cisco_ios`, Arista eAPI scripted input",
              "d": "`index=network` `sourcetype=\"bgp:evpn_events\"` with fields `host`, `peer_ip`, `event_type`, `rd`, `prefix`",
              "q": "index=network sourcetype=\"bgp:evpn_events\" earliest=-1h\n| where match(lower(event_type),\"(?i)withdraw|wdr|revoke\")\n| bin _time span=1m\n| stats count as wdr by _time, host, peer_ip\n| where wdr > 200\n| sort -wdr\n| table _time, host, peer_ip, wdr",
              "m": "(1) Stream BGP UPDATE syslog or BMP into `bgp:evpn_events` with normalized `event_type`. (2) Tune per-fabric scale; exclude RR-only peers if needed. (3) Correlate with spine-leaf neighbor state (UC-18.3.14).",
              "z": "Timechart (withdrawals per minute), Table (worst peers), Single value (peak wdr).",
              "kfp": "**firmware_upgrade_window** — Orchestrated leaf reloads during supervised firmware pushes deliberately tear down BGP-EVPN tables for roughly thirty-to-sixty seconds while images commit—withdrawal counters leap upward yet operators already expect that choreography—suppress **`storm_flag`** hits whenever CMDB tags **`firmware_upgrade_window`** pair **`local_host`** identities with CAB-approved **`ISSU`** sequencing backed by reload tickets tied to those hosts.\n\n**gr_helper_marker_walkthrough** — Peers honoring **`RFC`** **`4724`** **`Restart`** **`helper`** semantics retain **`STALE`** markings while **`WITHDRAWN`** tallies oscillate apart from instantaneous dataplane outages—cross-check syslog **`Neighbor`** banners citing **`Graceful`** timers alongside **`Established`** uptime versus **`withdraw_hit`** timelines before blaming **`esi_segment_burst`** fingerprints alone.\n\n**mso_route_target_changeover** — Multi-site orchestrator pushes sometimes revise **`route-target`** import/export tuples—routes briefly **`withdrawn`** then readvertised resembling storms—honor **`route_target_changed`** booleans inside **`bgp:evpn:withdraw_event`** aggregates plus **`BGP-3-RT_FILTER_ERROR`** narratives before escalating **`type5_storm`** outcomes tied to unrelated WAN faults.\n\n**imet_multicast_tree_rebuild** — Operators flipping **BUM** replication (**HEAD**, **MTEP**, **SOURCE**) modes or adjusting overlay **MTU** knobs rebuild inclusive multicast Ethernet-tag trees—**Type-3** withdrawals spike benignly—tie positives to **`MaintenanceWindow`** tickets citing **`imet_multicast_tree_rebuild`** chores rather than paging multicast cores absent corroborating optics alarms.\n\n**route_refresh_event** — Outbound **`route-map`** edits routinely solicit **`route-refresh`** exchanges—both peers **`withdraw`** then re-announce prefixes symmetrically—carry **`route_refresh_event`** markers harvested from syslog (**`BGP-5-ROUTEREFRESH`** style signatures) alongside **`CHG`** identifiers describing policy edits so **`withdrawn_volume`** surges downgrade automatically.\n\n**mac_aging_time_reconfig** — Operators shortening MAC aging timers provoke mass **`Type-2`** withdrawals followed by fresh advertisements while tables stabilize—annotate **`mac_aging_time_reconfig`** maintenance tickets suppressing pager noise distinct from stealth evacuations lacking paired ticketing clues.\n\n**evpn_arp_routing_toggle** — Feature toggles enabling or disabling **`evpn-arp-routing`** reshape host-route advertisement behavior—large one-time **`withdraw`** bursts precede steady counters afterward—suppress until **`baseline_avg`** recomputes post-change following automation logs documenting **`evpn_arp_routing_toggle`** intent.",
              "refs": "[CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom BGP monitor, `TA-cisco_ios`, Arista eAPI scripted input.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"bgp:evpn_events\"` with fields `host`, `peer_ip`, `event_type`, `rd`, `prefix`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Stream BGP UPDATE syslog or BMP into `bgp:evpn_events` with normalized `event_type`. (2) Tune per-fabric scale; exclude RR-only peers if needed. (3) Correlate with spine-leaf neighbor state (UC-18.3.14).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"bgp:evpn_events\" earliest=-1h\n| where match(lower(event_type),\"(?i)withdraw|wdr|revoke\")\n| bin _time span=1m\n| stats count as wdr by _time, host, peer_ip\n| where wdr > 200\n| sort -wdr\n| table _time, host, peer_ip, wdr\n```\n\nUnderstanding this SPL\n\n**BGP EVPN Route Withdrawal Rate and Flap Storms** — Withdrawal storms shrink effective ECMP sets and extend convergence after link or node events, which shows up as application timeouts and storage path loss. Measuring withdrawal velocity per peer differentiates normal housekeeping from dangerous churn in the EVPN control plane.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"bgp:evpn_events\"` with fields `host`, `peer_ip`, `event_type`, `rd`, `prefix`. **App/TA** (typical add-on context): Custom BGP monitor, `TA-cisco_ios`, Arista eAPI scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: bgp:evpn_events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"bgp:evpn_events\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(event_type),\"(?i)withdraw|wdr|revoke\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, peer_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where wdr > 200` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **BGP EVPN Route Withdrawal Rate and Flap Storms**): table _time, host, peer_ip, wdr\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest span=1m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**BGP EVPN Route Withdrawal Rate and Flap Storms** — Withdrawal storms shrink effective ECMP sets and extend convergence after link or node events, which shows up as application timeouts and storage path loss. 
Measuring withdrawal velocity per peer differentiates normal housekeeping from dangerous churn in the EVPN control plane.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"bgp:evpn_events\"` with fields `host`, `peer_ip`, `event_type`, `rd`, `prefix`. **App/TA** (typical add-on context): Custom BGP monitor, `TA-cisco_ios`, Arista eAPI scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (withdrawals per minute), Table (worst peers), Single value (peak wdr).",
              "script": "",
              "premium": "",
              "hw": "BGP EVPN-capable leaf and spine switches",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch for stretches when switches pull back lots of routing entries together so sharply that everyday programs stumble without warning long before anyone notices a loose cable or a flaky plug.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest span=1m | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.19",
              "n": "Spine-Leaf ECMP Member Utilization Balance",
              "c": "high",
              "f": "intermediate",
              "v": "ECMP assumes balanced hashing across parallel paths; persistent skew overloads individual spine uplinks while others remain idle. Monitoring member utilization protects oversubscribed Clos designs and avoids silent drops when one path saturates during backup or replication waves.",
              "t": "gNMI/Telegraf, SNMP TA",
              "d": "`index=network` `sourcetype=\"fabric:ecmp_member\"` with fields `leaf`, `spine_peer`, `member_if`, `out_bits_per_sec`",
              "q": "index=network sourcetype=\"fabric:ecmp_member\" earliest=-30m\n| stats avg(out_bits_per_sec) as avg_bps by leaf, spine_peer, member_if\n| eventstats sum(avg_bps) as leaf_total by leaf, spine_peer\n| eventstats dc(member_if) as paths by leaf, spine_peer\n| eval expected_share=if(paths>0,100/paths,0)\n| eval share_pct=round(100*avg_bps/leaf_total,2)\n| eval skew=abs(share_pct-expected_share)\n| where skew > 20 AND leaf_total > 1000000000\n| table leaf, spine_peer, member_if, share_pct, expected_share, skew\n| sort -skew",
              "m": "(1) Ingest per-member interface counters from telemetry at 30–60s. (2) Alert on sustained skew; verify hashing seeds and broken members. (3) Compare with interface errors on hot members.",
              "z": "Heatmap (member_if × leaf skew), Bar chart (skew by spine), Table (outliers).",
              "kfp": "**_rolling_code_upgrade_windows** — During ISSU or supervisor reloads, forwarding-distribution snapshots pause while linecards reseat buckets, inflating skew for minutes without real overload. Require polar_hint=0 and two clean polls after the change record closes, or honor suppress_until in fabric_ecmp_inventory.\n\n**_symmetric_elephant_flow_pinning** — Long-lived TCP sessions (backup agents, replication) can ride one member legitimately because per-flow hashing never re-seeds. Imbalance persists yet drop counters stay quiet. Compare imbalance_ratio history with application owners before escalating hardware.\n\n**_telemetry_alias_drift** — TA upgrades sometimes rename out_bps or nest member_if under interface; tonumber() yields null while CLI still looks hot. Run fieldsummary after each upgrade and retain FIELDALIAS for legacy sourcetypes.\n\n**_vpc_mclag_asymmetry** — One vPC or MC-LAG peer can reflect more east-west traffic than its partner, mimicking bad ECMP toward spines. Cross-check vpc consistency on both leaf names before blaming spine hashing.\n\n**_stale_distribution_batches** — Hourly show forwarding distribution dumps can mis-align _time buckets with real-time interface_stats, creating phantom vendor skew. Stamp ingest_lag_sec and drop join rows older than two windows.\n\n**_bgp_refresh_waves** — Soft inbound refreshes briefly reshuffle next-hops while neighbors remain established. Correlate automation job IDs and damp alerts unless skew_pts stays high past documented RIB stabilization times.\n\n**_nms_false_critical** — network:switch_health may mark critical when SNMP is blocked, not when uplinks saturate. Require nxos:interface_stats confirmation before hardware dispatch.\n\n**_standby_supervisor_polls** — Dual-SUP chassis sometimes return idle counters from standby contexts when hostname lacks active_role. Filter standby polls using redundancy_state before balance math.",
              "refs": "[CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: gNMI/Telegraf, SNMP TA.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"fabric:ecmp_member\"` with fields `leaf`, `spine_peer`, `member_if`, `out_bits_per_sec`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest per-member interface counters from telemetry at 30–60s. (2) Alert on sustained skew; verify hashing seeds and broken members. (3) Compare with interface errors on hot members.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"fabric:ecmp_member\" earliest=-30m\n| stats avg(out_bits_per_sec) as avg_bps by leaf, spine_peer, member_if\n| eventstats sum(avg_bps) as leaf_total by leaf, spine_peer\n| eventstats dc(member_if) as paths by leaf, spine_peer\n| eval expected_share=if(paths>0,100/paths,0)\n| eval share_pct=round(100*avg_bps/leaf_total,2)\n| eval skew=abs(share_pct-expected_share)\n| where skew > 20 AND leaf_total > 1000000000\n| table leaf, spine_peer, member_if, share_pct, expected_share, skew\n| sort -skew\n```\n\nUnderstanding this SPL\n\n**Spine-Leaf ECMP Member Utilization Balance** — ECMP assumes balanced hashing across parallel paths; persistent skew overloads individual spine uplinks while others remain idle. Monitoring member utilization protects oversubscribed Clos designs and avoids silent drops when one path saturates during backup or replication waves.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"fabric:ecmp_member\"` with fields `leaf`, `spine_peer`, `member_if`, `out_bits_per_sec`. **App/TA** (typical add-on context): gNMI/Telegraf, SNMP TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: fabric:ecmp_member. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"fabric:ecmp_member\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by leaf, spine_peer, member_if** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by leaf, spine_peer** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by leaf, spine_peer** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **expected_share** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **share_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **skew** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where skew > 20 AND leaf_total > 1000000000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Spine-Leaf ECMP Member Utilization Balance**): table leaf, spine_peer, member_if, share_pct, expected_share, skew\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Spine-Leaf ECMP Member Utilization Balance** — ECMP assumes balanced hashing across parallel paths; persistent skew overloads individual spine uplinks while others remain idle. Monitoring member utilization protects oversubscribed Clos designs and avoids silent drops when one path saturates during backup or replication waves.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"fabric:ecmp_member\"` with fields `leaf`, `spine_peer`, `member_if`, `out_bits_per_sec`. **App/TA** (typical add-on context): gNMI/Telegraf, SNMP TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Heatmap (member_if × leaf skew), Bar chart (skew by spine), Table (outliers).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus, Arista 7050/7280 spine-leaf",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We compare traffic on each spine-ward uplink so one cable group cannot flood while its partners stay almost idle, which helps teams spot trouble before big copy jobs or storage bursts hurt apps.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.20",
              "n": "Fabric Host Route and ARP Scale Headroom",
              "c": "high",
              "f": "advanced",
              "v": "EVPN Type-2 host routes and ARP scale drive TCAM and forwarding table use on leafs. Running out of headroom stalls new workload placement and causes black-holed traffic during scale-out events. Trending utilization against hardware limits informs purchase timing and route summarization design.",
              "t": "NX-API/CLI scripted input, SNMP",
              "d": "`index=network` `sourcetype=\"fabric:route_scale\"` with fields `host`, `host_routes`, `host_route_limit`, `arp_entries`, `arp_limit`",
              "q": "index=network sourcetype=\"fabric:route_scale\" earliest=-24h\n| eval host_pct=round(100*host_routes/host_route_limit,2)\n| eval arp_pct=round(100*arp_entries/arp_limit,2)\n| where host_pct>80 OR arp_pct>80\n| stats latest(host_pct) as host_pct latest(arp_pct) as arp_pct latest(host_routes) as hr latest(arp_entries) as arp by host\n| table host, host_pct, arp_pct, hr, arp\n| sort -host_pct",
              "m": "(1) Poll `show system internal forwarding resource` or vendor equivalents nightly plus hourly if above 75%. (2) Map limits per platform SKU via lookup. (3) Join with EVPN route summary growth (UC-18.3.10).",
              "z": "Gauge (headroom %), Table (critical leafs), Timechart (host route growth).",
              "kfp": "**ISSU hardware-capacity-profile reload windows** — engineered ISSU sequences temporarily bounce **vxlan-bgp**/**lan-host-routes** profiles on **Nexus 9000**, producing abrupt **entries_max** shifts and transient **headroom_percent** dips without sustained **eviction_count_5min** positives—suppress via **CHG** tokens referencing ISSU orchestration plus **MaintenanceWindow** lookups keyed on **device_name** until soak timers expire.\n\n**External Type-5 prefix surges at L3Out hand-offs** — DC perimeter routers announce large aggregated prefixes during coordinated **L3Out** migrations, spiking **route-table-v4** occupancy on border leaves even though **evpn_type2_route_count** remains flat—differentiate using **hardware_table_class** route-table-v4 focus, **BGP** **AS-PATH** correlation, and **NDFC** change tickets before attributing spikes to faulty host-route scaling alone.\n\n**VRF-stretch ARP reconciliation sweeps** — after **VRF** stretch or **L2VNI** replay events, NetOps intentionally clear ARP caches or shorten aging timers causing **arp_table_size** oscillation and benign **eviction_count_5min** blips tied to **ARP** class—pair Splunk timelines with **ServiceNow** **CHG** narratives describing cache refresh rather than hardware faults.\n\n**BGP soft-reconfig FIB staging** — during **soft-reconfiguration inbound** maintenance, **FIB-realized** entries can lag **RIB-installed** counts for minutes while **fib_rib_divergence_count** increments—expected until **route-refresh** completes; suppress when **BGP** **soft-reconfig** flags appear in **cisco:nxos:syslog** and **peer** sessions stay **Established**.\n\n**PIM RP cutovers** — multicast-route hardware occupancy spikes during rendezvous-point migrations unrelated to EVPN Type-2 counts—cross-check multicast CRQ tickets before escalating TCAM defects.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1499.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NX-API/CLI scripted input, SNMP.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"fabric:route_scale\"` with fields `host`, `host_routes`, `host_route_limit`, `arp_entries`, `arp_limit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Poll `show system internal forwarding resource` or vendor equivalents nightly plus hourly if above 75%. (2) Map limits per platform SKU via lookup. (3) Join with EVPN route summary growth (UC-18.3.10).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"fabric:route_scale\" earliest=-24h\n| eval host_pct=round(100*host_routes/host_route_limit,2)\n| eval arp_pct=round(100*arp_entries/arp_limit,2)\n| where host_pct>80 OR arp_pct>80\n| stats latest(host_pct) as host_pct latest(arp_pct) as arp_pct latest(host_routes) as hr latest(arp_entries) as arp by host\n| table host, host_pct, arp_pct, hr, arp\n| sort -host_pct\n```\n\nUnderstanding this SPL\n\n**Fabric Host Route and ARP Scale Headroom** — EVPN Type-2 host routes and ARP scale drive TCAM and forwarding table use on leafs. Running out of headroom stalls new workload placement and causes black-holed traffic during scale-out events. Trending utilization against hardware limits informs purchase timing and route summarization design.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"fabric:route_scale\"` with fields `host`, `host_routes`, `host_route_limit`, `arp_entries`, `arp_limit`. **App/TA** (typical add-on context): NX-API/CLI scripted input, SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: fabric:route_scale. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"fabric:route_scale\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **host_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **arp_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where host_pct>80 OR arp_pct>80` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Fabric Host Route and ARP Scale Headroom**): table host, host_pct, arp_pct, hr, arp\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Gauge (headroom %), Table (critical leafs), Timechart (host route growth).",
              "script": "",
              "premium": "",
              "hw": "VXLAN EVPN leaf switches",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch how full each switch's hidden forwarding shelves are—the spots where routes and neighbor entries live. When those shelves overflow, traffic can vanish quietly even though cables blink green, so we sound the alarm before people feel random slowdowns.",
              "mtype": [
                "Capacity",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.21",
              "n": "EVPN Ethernet Segment (ESI) DF Election and BUM Stability",
              "c": "high",
              "f": "advanced",
              "v": "All-active multihoming depends on correct designated forwarder election and stable per-ESI state. DF flaps or split-brain indicators disrupt BUM handling and can isolate VLANs for dual-homed hosts. Early detection protects clustered databases and hypervisor trunks.",
              "t": "NX-OS/EOS syslog, custom `evpn:esi` parser",
              "d": "`index=network` `sourcetype=\"evpn:esi\"` with fields `leaf`, `esi`, `event`, `vlan`",
              "q": "index=network sourcetype=\"evpn:esi\" earliest=-24h\n| where match(lower(event),\"(?i)df|designated|esi|split|conflict\")\n| stats count by leaf, esi, event\n| where count>5\n| sort -count\n| table leaf, esi, event, count",
              "m": "(1) Normalize DF change syslog into `evpn:esi`. (2) Alert on rapid DF changes per ESI within one hour. (3) Correlate with port-channel member events (UC-18.3.14).",
              "z": "Timeline (DF changes), Table (noisy ESIs), Single value (ESIs with recent DF churn).",
              "kfp": "**lacp_system_id_rotation** — CE operators reshuffle **`lacp`** bundle **`system-id`** values during cabling tidy-ups—brief Type-1 Ethernet Auto-Discovery churn resembles **`esi`** flap though BGP stays **`Established`**—suppress positives referencing **`lacp_system_id_rotation`** change tickets paired with **`MaintenanceWindow`** overlays until **`lag`** settles.\n\n**esi_participant_addremove** — Engineering inserts another PE serving the same **`Ethernet`** **`Segment`**—DF winners reshuffle deliberately—tie **`esi_participant_addremove`** CAB receipts to Splunk timelines before paging **`instability_family`** bursts lacking outage symptoms.\n\n**evpn_rr_cluster_maintenance** — Spinning **`BGP`** **`route-reflector`** clusters forces Type-4 Ethernet Segment Auto-Discovery readvertisement storms unrelated to CE faults—honor **`evpn_rr_cluster_maintenance`** **`CHG`** identifiers describing **`RR`** rotations alongside **`BGP`** **`Established`** uptime parity checks.\n\n**df_wait_timer_change** — Operators lengthen **`RFC`** **`8584`** **`DF`** **`wait`** timers (**seconds**) calming hypersensitive elections—Splunk captures elevated **`wait_obs`** counters—suppress **`election_noise_bucket`** positives labeled **`df_wait_timer_change`** inside **`runbook`** annotations referencing timer deltas explicitly.\n\n**lacp_system_priority_change** — CE-side **`lacp`** **`system-priority`** edits reorder bundle **`actor`** selection causing transient **`EAD`** withdrawals resembling PE instability—pair syslog **`lacp_system_priority_change`** narratives with **`evt_volume`** dips before escalating **`df_role_flip_burst`** alarms alone.\n\n**ce_bond_firmware_upgrade** — Hypervisor **`NIC`** firmware pushes **`suspend`** **`lag`** members momentarily—all PE routers observe withdrawn Type-1 routes concurrently—suppress **`alias_split_hint`** markers referencing **`ce_bond_firmware_upgrade`** **`CHG`** bundles tied to clustered **`NIC`** inventories.\n\n**df_algorithm_change** — Migrating modulo **`DF`** hashing toward Highest Random Weight reshuffles VLAN **`DF`** winners overnight—expect benign spikes until **`vlan_df_latest`** histograms stabilize—annotate dashboards with **`df_algorithm_change`** **`CHG`** metadata.\n\n**route_target_adjustment_propagation** — **`VRF`** **`route-target`** edits ripple EVPN **`RT`** imports briefly shaking **`EAD`** **`ES`** **`routes`**—differentiate via ticketing tokens **`route_target_adjustment_propagation`** referencing **`policy`** edits lacking **`esi_tag`** persistence faults beyond minutes.\n\n**vpc_peer_link_reload_isolation** — **`vPC`** **`peer-link`** reloads asymmetrically starve one **`Nexus`** PE while optics elsewhere remain **`up`**—Splunk logs **`esi`** chatter resembling splits—suppress **`possible_peer_disagreement`** hints mapped to **`vpc_peer_link_reload_isolation`** maintenance bridges documented alongside **`ISSU`** choreography.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NX-OS/EOS syslog, custom `evpn:esi` parser.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"evpn:esi\"` with fields `leaf`, `esi`, `event`, `vlan`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize DF change syslog into `evpn:esi`. (2) Alert on rapid DF changes per ESI within one hour. (3) Correlate with port-channel member events (UC-18.3.14).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"evpn:esi\" earliest=-24h\n| where match(lower(event),\"(?i)df|designated|esi|split|conflict\")\n| stats count by leaf, esi, event\n| where count>5\n| sort -count\n| table leaf, esi, event, count\n```\n\nUnderstanding this SPL\n\n**EVPN Ethernet Segment (ESI) DF Election and BUM Stability** — All-active multihoming depends on correct designated forwarder election and stable per-ESI state. DF flaps or split-brain indicators disrupt BUM handling and can isolate VLANs for dual-homed hosts. Early detection protects clustered databases and hypervisor trunks.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"evpn:esi\"` with fields `leaf`, `esi`, `event`, `vlan`. **App/TA** (typical add-on context): NX-OS/EOS syslog, custom `evpn:esi` parser. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: evpn:esi. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"evpn:esi\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(event),\"(?i)df|designated|esi|split|conflict\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by leaf, esi, event** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **EVPN Ethernet Segment (ESI) DF Election and BUM Stability**): table leaf, esi, event, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (DF changes), Table (noisy ESIs), Single value (ESIs with recent DF churn).",
              "script": "",
              "premium": "",
              "hw": "MLAG/EVPN multihomed leaf pairs",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch two paired switches that share one outward-facing cord so shouty Ethernet chatter travels predictably—when their microphone duty swaps wildly inside seconds we warn before broadcasts echo twice or vanish quietly.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.3.22",
              "n": "VXLAN Underlay Path MTU and DF Bit Fragmentation Risk",
              "c": "medium",
              "f": "intermediate",
              "v": "VXLAN adds overhead; MTU mismatches or PMTUD black holes cause silent throughput collapse and TCP retransmits on east-west paths. Tracking DF-bit probes and ICMP unreachables across the underlay separates fabric issues from guest OS misconfiguration before tickets flood application teams.",
              "t": "`TA-cisco_ios`, `arista:eos` via SC4S, ICMP probe scripted input",
              "d": "`index=network` `sourcetype=\"fabric:mtu_diag\"` with fields `src_leaf`, `dst_leaf`, `max_mtu_ok`, `icmp_needfrag`, `test_size`",
              "q": "index=network sourcetype=\"fabric:mtu_diag\" earliest=-24h\n| where max_mtu_ok=0 OR icmp_needfrag>0\n| stats sum(icmp_needfrag) as needfrag max(test_size) as last_size by src_leaf, dst_leaf\n| sort -needfrag\n| table src_leaf, dst_leaf, needfrag, last_size",
              "m": "(1) Run scheduled jumbo ping/UDP probes between loopbacks with DF set. (2) Ingest syslog `ICMP unreachable` / `MTU` messages. (3) Document expected MTU (for example 9216) per site and alert on regression.",
              "z": "Table (bad pairs), Diagram (site × path status), Single value (paths failing MTU).",
              "kfp": "**maintenance_mtu_reboot** — During **ISSU** or **supervisor** **switchover**, **`nxos:interface_mtu`** polls may return **transient** **defaults** while **interfaces** **reconcile**, creating a **short** spike in **`mtu_suspect`**. Require **two** clean **polls** post **change** **record**, or honor **`suppress_until`** in **`fabric_mtu_policy.csv`** before paging.\n\n**icmp_rate_limited** — **DF** **pings** at **aggressive** **cadence** can trigger **control-plane** **policers** that **mimic** **blackholes** in **`network:ping_results`**. Space **probes**, randomize **starts**, and compare against **`show control-plane`** drop counters—correlation distinguishes policing from true MTU faults.\n\n**lab_port_wrong_role** — **Automation** sometimes tags **host-facing** ports as **underlay** due to **stale** **CMDB** **role** strings, so **`mtu_suspect`** fires on **access** segments that **intentionally** stay at **1500**. Refresh **`if_role`** from **source-of-truth** **YAML**, and exclude **edge** **templates** with **`port_usage=host`.\n\n**guest_mss_clamp_only** — **`pmtu_bad`** may reflect **hypervisor** **MSS** **clamps** or **NSX** **policy**, not **underlay** **fragmentation**. When **`4856`**/**`1810`** context shows **VM-only** paths, reroute **evidence** to **compute** owners unless **`nxos:interface_mtu`** also regresses.\n\n**syslog_echo_noise** — **Benign** **ICMP** messages during **DDD** **debugs** can match **`icmp_needfrag`** **regexes**. Tighten **facility** **filters**, require **`%ETHPORT-`** or **`%L3FM-`** **tokens** where **platform** **supports** them, or **AND** with **`mtu_suspect`** before **sev-2** routes.\n\n**false_positive_nve_map** — **Telemetry** **templates** occasionally map **unrelated** counters into **`encap_error`**, inflating **`nve_bad`**. After **ASIC** **driver** bumps, reconcile JSON paths with **`show counters`** differences on a witness leaf.\n\n**df_probe_asymmetry** — **ECMP** **hashing** may steer **DF** **pings** down a **healthy** path while **tenant** flows hit the **low-MTU** **link** until **`network:mtu_discovery`** rotates **probes**. Increase **probe** **diversity** (multiple **sources**, **flows**, **UDP** **templates**) before closing **tickets**.",
              "refs": "[Cisco DC Networking Application for Splunk](https://splunkbase.splunk.com/app/7777), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-cisco_ios`, `arista:eos` via SC4S, ICMP probe scripted input.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"fabric:mtu_diag\"` with fields `src_leaf`, `dst_leaf`, `max_mtu_ok`, `icmp_needfrag`, `test_size`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Run scheduled jumbo ping/UDP probes between loopbacks with DF set. (2) Ingest syslog `ICMP unreachable` / `MTU` messages. (3) Document expected MTU (for example 9216) per site and alert on regression.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"fabric:mtu_diag\" earliest=-24h\n| where max_mtu_ok=0 OR icmp_needfrag>0\n| stats sum(icmp_needfrag) as needfrag max(test_size) as last_size by src_leaf, dst_leaf\n| sort -needfrag\n| table src_leaf, dst_leaf, needfrag, last_size\n```\n\nUnderstanding this SPL\n\n**VXLAN Underlay Path MTU and DF Bit Fragmentation Risk** — VXLAN adds overhead; MTU mismatches or PMTUD black holes cause silent throughput collapse and TCP retransmits on east-west paths. Tracking DF-bit probes and ICMP unreachables across the underlay separates fabric issues from guest OS misconfiguration before tickets flood application teams.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"fabric:mtu_diag\"` with fields `src_leaf`, `dst_leaf`, `max_mtu_ok`, `icmp_needfrag`, `test_size`. **App/TA** (typical add-on context): `TA-cisco_ios`, `arista:eos` via SC4S, ICMP probe scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: fabric:mtu_diag. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"fabric:mtu_diag\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where max_mtu_ok=0 OR icmp_needfrag>0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_leaf, dst_leaf** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **VXLAN Underlay Path MTU and DF Bit Fragmentation Risk**): table src_leaf, dst_leaf, needfrag, last_size\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**VXLAN Underlay Path MTU and DF Bit Fragmentation Risk** — VXLAN adds overhead; MTU mismatches or PMTUD black holes cause silent throughput collapse and TCP retransmits on east-west paths. 
Tracking DF-bit probes and ICMP unreachables across the underlay separates fabric issues from guest OS misconfiguration before tickets flood application teams.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"fabric:mtu_diag\"` with fields `src_leaf`, `dst_leaf`, `max_mtu_ok`, `icmp_needfrag`, `test_size`. **App/TA** (typical add-on context): `TA-cisco_ios`, `arista:eos` via SC4S, ICMP probe scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (bad pairs), Diagram (site × path status), Single value (paths failing MTU).",
              "script": "",
              "premium": "",
              "hw": "Spine-leaf underlay routers",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We check whether the hidden wiring between big switches leaves enough room for padded traffic, so messages are not quietly dropped when splitting them apart is not allowed.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ios"
              ],
              "sapp": [
                {
                  "name": "App for Cisco Network Data",
                  "id": 1352,
                  "url": "https://splunkbase.splunk.com/app/1352",
                  "desc": "Dashboards and data models for Cisco Switches, Routers, WLAN Controllers",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47590b50-3981-11ed-807b-8e625ade07cf.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4dc530fe-3981-11ed-b76f-8ad1f2b29da5.png"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco IOS",
                "id": 1467,
                "url": "https://splunkbase.splunk.com/app/1467"
              },
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.1,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 22,
            "none": 0
          }
        },
        {
          "i": "18.4",
          "n": "Cisco Nexus Dashboard & NX-OS Fabric",
          "u": [
            {
              "i": "18.4.1",
              "n": "Nexus Dashboard Insights Anomaly Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Nexus Dashboard Insights (NDI) uses ML-driven baselining to detect anomalies across hardware, capacity, compliance, connectivity, and configuration. Forwarding these anomalies to Splunk enables correlation with application and security events that NDI cannot see, and provides long-term trending beyond NDI retention.",
              "t": "Cisco DC Networking Application (Splunkbase 7777), NDI webhook / syslog export",
              "d": "NDI anomaly exports (webhook/syslog), `cisco:ndi:anomaly`",
              "q": "index=cisco_dc sourcetype=\"cisco:ndi:anomaly\" earliest=-24h\n| stats count by severity, category, anomaly_type, fabric_name\n| where severity IN (\"critical\",\"major\")\n| sort -count",
              "m": "Configure NDI to export anomalies via webhook to a Splunk HEC endpoint, or forward syslog. Map severity and category fields. Alert on critical/major anomalies. Use NDI anomaly correlation data to distinguish root causes from symptoms.",
              "z": "Table (anomalies by category), Bar chart (anomaly count by severity), Timeline (anomaly events), Single value (open criticals).",
              "kfp": "Planned **ACI tenant teardown** sequences briefly break VLAN uniqueness checks until asynchronous deletes finish—**NDI Compliance** anomalies surge even when ServiceNow tickets already cover the work. **Patch Tuesday** Cisco PSIRT metadata refreshes inside **NDI** bug libraries can spike **Software Fault** counts for NX-OS trains that still pass your last quarterly sign-off—delay paging until **lastSeenTimestamp** deltas flatten versus the prior week. **Microburst contention** can raise **Performance** anomalies with **PFC pause** signatures that self-clear when queues drain—tie paging to sustained **critical** streaks beyond the settling window Cisco documents. **Nexus Dashboard cluster** members that restart can backlog **gNMI** collectors so **Operations** anomalies look like fabric faults until **kubectl** or **docker stats** prove the node—not the spine—was starving. **Fresh Site** baselines often generate **info/minor** posture rows while **NDI** learns daily patterns—hold SLAs until the burn-in timers in **Nexus Dashboard Insights** documentation expire. **NX-OS configuration replay** storms can overrun **streaming telemetry** buffers and briefly look like **Connectivity** gaps—confirm with **show telemetry** on the switch before scheduling optic replacements unrelated to queue pressure.",
              "refs": "[Cisco DC Networking Application](https://splunkbase.splunk.com/app/7777), [CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco DC Networking Application (Splunkbase 7777), NDI webhook / syslog export.\n• Ensure the following data sources are available: NDI anomaly exports (webhook/syslog), `cisco:ndi:anomaly`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure NDI to export anomalies via webhook to a Splunk HEC endpoint, or forward syslog. Map severity and category fields. Alert on critical/major anomalies. Use NDI anomaly correlation data to distinguish root causes from symptoms.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_dc sourcetype=\"cisco:ndi:anomaly\" earliest=-24h\n| stats count by severity, category, anomaly_type, fabric_name\n| where severity IN (\"critical\",\"major\")\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Nexus Dashboard Insights Anomaly Monitoring** — Nexus Dashboard Insights (NDI) uses ML-driven baselining to detect anomalies across hardware, capacity, compliance, connectivity, and configuration. Forwarding these anomalies to Splunk enables correlation with application and security events that NDI cannot see, and provides long-term trending beyond NDI retention.\n\nDocumented **Data sources**: NDI anomaly exports (webhook/syslog), `cisco:ndi:anomaly`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), NDI webhook / syslog export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_dc; **sourcetype**: cisco:ndi:anomaly. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_dc, sourcetype=\"cisco:ndi:anomaly\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by severity, category, anomaly_type, fabric_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where severity IN (\"critical\",\"major\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Nexus Dashboard Insights Anomaly Monitoring** — Nexus Dashboard Insights (NDI) uses ML-driven baselining to detect anomalies across hardware, capacity, compliance, connectivity, and configuration. Forwarding these anomalies to Splunk enables correlation with application and security events that NDI cannot see, and provides long-term trending beyond NDI retention.\n\nDocumented **Data sources**: NDI anomaly exports (webhook/syslog), `cisco:ndi:anomaly`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), NDI webhook / syslog export. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalies by category), Bar chart (anomaly count by severity), Timeline (anomaly events), Single value (open criticals).",
              "script": "",
              "premium": "",
              "hw": "Nexus Dashboard, Nexus 9300, Nexus 9500, APIC (ACI mode)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch for strained links and slip-ups across your switching gear—the readings your Cisco screens already highlight—and tell repair teams what to fix before outages ripple outward.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity | sort - count",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.4.2",
              "n": "NDFC Fabric Compliance and Configuration Drift",
              "c": "high",
              "f": "intermediate",
              "v": "NDFC enforces intended configuration via templates. Detecting drift between running config and intended config across the fabric prevents misconfigurations that cause outages, security gaps, or inconsistent policy enforcement.",
              "t": "Cisco DC Networking Application (Splunkbase 7777), NDFC REST API scripted input",
              "d": "NDFC compliance reports (REST API), `cisco:ndfc:compliance`",
              "q": "index=cisco_dc sourcetype=\"cisco:ndfc:compliance\"\n| stats count by switch_name, compliance_status, drift_category\n| where compliance_status!=\"In-Sync\"\n| table switch_name, compliance_status, drift_category, count\n| sort -count",
              "m": "Poll NDFC compliance status via REST API daily or after change windows. Alert on Out-of-Sync devices. Track drift trends over time to identify switches that repeatedly drift. Trigger auto-remediation workflows when safe.",
              "z": "Table (non-compliant switches), Pie chart (compliant vs drifted), Trend chart (drift count over time).",
              "kfp": "**Rolling brownfield pushes** deliberately pause between **`Pending Config`** cohorts—Splunk dashboards briefly resemble drift storms even though **`deploy`** sequencing follows CAB choreography—tie paging to **`Pending Config`** ages older than approved windows rather than instantaneous amber tiles alone. **Emergency NX-OS edits** executed directly on switches during outages intentionally diverge from **`Out-of-Sync`** intent until **`MaintenanceWindow`** clears—Splunk highlights accuracy but approvals mean suppress macros referencing **`CHG`** bridges remain mandatory before paging fabric SMEs. **ISSU pre-check** rejections tied to niche line-card combinations can look like wholesale compliance loss when only one **`Failed`** row exists—cross-check **`Failure Reason`** strings against **`ISSU Pre-Check Failed`** snippets from **`cisco:ndfc:image_mgmt`** before treating the whole site as rogue. **Template schema upgrades** migrating **NDFC** minor releases temporarily mass-tag switches **`Pending Config`** while Cisco rewrites object graphs—delay incident bridges until **`template_tags`** timestamps settle per published upgrade windows documented in migration guidance. **Partial SUM** pushes interrupted by rack power events leave **`Out-of-Sync`** plus inconsistent **`image_mgmt`** job rows—validate PS readings and redundant supplies before blaming intent drift unrelated to operator edits. **POAP DHCP overlap** where a foreign server answers before **NDFC** hands out the official scope produces **`Bootstrap Failed`** records that indict network VLAN design rather than **`NDFC`** service faults—confirm helper addresses and **`mgmt0`** isolation per POAP chapters before Splunk escalations target the wrong team.",
              "refs": "[Cisco DC Networking Application](https://splunkbase.splunk.com/app/7777), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco DC Networking Application (Splunkbase 7777), NDFC REST API scripted input.\n• Ensure the following data sources are available: NDFC compliance reports (REST API), `cisco:ndfc:compliance`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll NDFC compliance status via REST API daily or after change windows. Alert on Out-of-Sync devices. Track drift trends over time to identify switches that repeatedly drift. Trigger auto-remediation workflows when safe.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_dc sourcetype=\"cisco:ndfc:compliance\"\n| stats count by switch_name, compliance_status, drift_category\n| where compliance_status!=\"In-Sync\"\n| table switch_name, compliance_status, drift_category, count\n| sort -count\n```\n\nUnderstanding this SPL\n\n**NDFC Fabric Compliance and Configuration Drift** — NDFC enforces intended configuration via templates. Detecting drift between running config and intended config across the fabric prevents misconfigurations that cause outages, security gaps, or inconsistent policy enforcement.\n\nDocumented **Data sources**: NDFC compliance reports (REST API), `cisco:ndfc:compliance`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), NDFC REST API scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_dc; **sourcetype**: cisco:ndfc:compliance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_dc, sourcetype=\"cisco:ndfc:compliance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, compliance_status, drift_category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where compliance_status!=\"In-Sync\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NDFC Fabric Compliance and Configuration Drift**): table switch_name, compliance_status, drift_category, count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NDFC Fabric Compliance and Configuration Drift** — NDFC enforces intended configuration via templates. Detecting drift between running config and intended config across the fabric prevents misconfigurations that cause outages, security gaps, or inconsistent policy enforcement.\n\nDocumented **Data sources**: NDFC compliance reports (REST API), `cisco:ndfc:compliance`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), NDFC REST API scripted input. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant switches), Pie chart (compliant vs drifted), Trend chart (drift count over time).",
              "script": "",
              "premium": "",
              "hw": "Nexus Dashboard Fabric Controller, Nexus 9300, Nexus 9500, Nexus 3000",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We compare each rack switch's live settings against the blueprint your Cisco controllers promised for that fabric row, so surprise hand-edits or unfinished pushes show up before traffic quietly follows the wrong paths.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.4.3",
              "n": "Nexus Dashboard Advisory and Field Notice Alerts",
              "c": "high",
              "f": "beginner",
              "v": "Nexus Dashboard Insights identifies field notices, PSIRTs, and hardware advisories that affect switches in the fabric by matching device serial numbers and software versions. Forwarding these to Splunk provides a centralised risk view alongside other infrastructure advisories.",
              "t": "Cisco DC Networking Application (Splunkbase 7777), NDI webhook",
              "d": "NDI advisory exports, `cisco:ndi:advisory`",
              "q": "index=cisco_dc sourcetype=\"cisco:ndi:advisory\"\n| stats count by advisory_id, advisory_type, severity, affected_switch_count\n| where severity IN (\"critical\",\"high\")\n| table advisory_id, advisory_type, severity, affected_switch_count\n| sort -severity",
              "m": "Export NDI advisories to Splunk via webhook or scheduled API poll. Correlate with asset inventory to calculate exposure percentage. Alert on critical PSIRTs or field notices affecting production fabrics.",
              "z": "Table (active advisories), Single value (critical advisories), Bar chart (affected switches per advisory).",
              "kfp": "**PSIRT republication** or amended disclosures that bump **`lastUpdated`** while keeping the same **`advisoryId`** can double-count backlog metrics if Splunk rolls up only on **`_time`** instead of grouping on **`adv_key` + `newest_revision`**. **NDI** bug-database **weekly** synchronization (commonly clustered on a recurring weekday per operations lore) projects a burst of “new” **`CSC`** rows that merely reflect refreshed metadata—not fresh exposure—until week-over-week deltas settle. **NX-OS** train strings rendered as **`9.3(13)`** vs **`9.3.13`** vs **`9.3.13.G(0.1)`** fool naive string equality when correlating **`affectedReleases[]`** membership against CMDB normalization rules, yielding false negatives even while **NDI** already flagged a match internally. **Field Notice** **EOL** banners that align to **calendar-quarter** reporting windows can spike trend charts unrelated to urgent optics swaps—tie **`FN-#####`** escalations to rack elevation proof rather than arbitrary midnight thresholds alone. **CMDB** software-asset inventories lagging physical decommission steps produce **PSIRT** hits referencing chassis already retired—suppress pages until nightly reconciliation clears stale serial-to-release tuples. **NDI** quarterly advisory-window bundle imports may temporarily flood **Compliance Rule Violation Trend** charts until evaluator cycles finish—delay ticketing until **`newest_revision`** timestamps plateau instead of reacting during the ingest spike alone.",
              "refs": "[Cisco DC Networking Application](https://splunkbase.splunk.com/app/7777), [CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco DC Networking Application (Splunkbase 7777), NDI webhook.\n• Ensure the following data sources are available: NDI advisory exports, `cisco:ndi:advisory`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport NDI advisories to Splunk via webhook or scheduled API poll. Correlate with asset inventory to calculate exposure percentage. Alert on critical PSIRTs or field notices affecting production fabrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_dc sourcetype=\"cisco:ndi:advisory\"\n| stats count by advisory_id, advisory_type, severity, affected_switch_count\n| where severity IN (\"critical\",\"high\")\n| table advisory_id, advisory_type, severity, affected_switch_count\n| sort -severity\n```\n\nUnderstanding this SPL\n\n**Nexus Dashboard Advisory and Field Notice Alerts** — Nexus Dashboard Insights identifies field notices, PSIRTs, and hardware advisories that affect switches in the fabric by matching device serial numbers and software versions. Forwarding these to Splunk provides a centralised risk view alongside other infrastructure advisories.\n\nDocumented **Data sources**: NDI advisory exports, `cisco:ndi:advisory`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), NDI webhook. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_dc; **sourcetype**: cisco:ndi:advisory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_dc, sourcetype=\"cisco:ndi:advisory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by advisory_id, advisory_type, severity, affected_switch_count** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where severity IN (\"critical\",\"high\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Nexus Dashboard Advisory and Field Notice Alerts**): table advisory_id, advisory_type, severity, affected_switch_count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Nexus Dashboard Advisory and Field Notice Alerts** — Nexus Dashboard Insights identifies field notices, PSIRTs, and hardware advisories that affect switches in the fabric by matching device serial numbers and software versions. Forwarding these to Splunk provides a centralised risk view alongside other infrastructure advisories.\n\nDocumented **Data sources**: NDI advisory exports, `cisco:ndi:advisory`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), NDI webhook. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (active advisories), Single value (critical advisories), Bar chart (affected switches per advisory).",
              "script": "",
              "premium": "",
              "hw": "Nexus Dashboard, all managed Nexus switches",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep vendor security bulletins and hardware lifecycle deadlines lined up with the software versions actually running in your halls so upgrade planners can schedule calmly instead of scrambling once printed support dates quietly pass.",
              "mtype": [
                "Compliance",
                "Risk"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.4.4",
              "n": "Nexus 9000 NX-OS Streaming Telemetry Health",
              "c": "medium",
              "f": "advanced",
              "v": "Streaming telemetry (gNMI/gRPC) from Nexus 9000 provides sub-second interface, routing, and system metrics. Monitoring the telemetry pipeline itself ensures data collection gaps are detected before they blind monitoring dashboards.",
              "t": "Telegraf (gNMI plugin), Splunk HEC, SNMP TA (fallback)",
              "d": "Telegraf internal metrics, gNMI subscription status",
              "q": "index=telegraf sourcetype=\"telegraf:internal\" measurement=\"internal_gather\"\n| stats avg(gather_time_ns) as avg_gather_ns max(gather_time_ns) as max_gather_ns count as samples by host, input\n| where input=\"cisco_telemetry_gnmi\"\n| eval avg_gather_ms=round(avg_gather_ns/1000000,1)\n| where avg_gather_ms > 5000 OR samples < expected_samples\n| table host, input, avg_gather_ms, max_gather_ns, samples",
              "m": "Deploy Telegraf with gNMI input plugin on collector hosts. Configure NX-OS sensor groups for interface, BGP, system, and environment paths. Monitor Telegraf internal metrics for collection health. Alert on stale data or excessive gather times.",
              "z": "Table (collector health), Line chart (gather time), Single value (active subscriptions).",
              "kfp": "**mdt_tls_cert_rotation_window** — Planned **`telemetry`** **`certificate`** rotations deliberately widen **`TLS`** handshake gaps—suppress **`telemetry_syslog_bootstrap`** bursts tied to **`CHG`** bundles referencing **`trustpoint`** rehearsals before accusing **`grpc`** outages.\n\n**sensor_subscription_warmup** — Fresh **`sensor-group`** onboarding (**`Cisco-NX-OS-device`** subtrees) emits **`sensor_subscription_warmup`** gaps lasting minutes—monitor **`yang_path_migration_window`** transcripts referencing **`CHG`** identifiers rather than steady-state exporters alone.\n\n**mdt_cadence_change** — Operators deliberately tighten **`cadence_sec`** (**30→5**) for hotspots—**`mdt_cadence_change`** flags mimic pathology unless dashboards annotate observability **`CHG`** stamps beside **`suppress_redundant_toggle`** contexts.\n\n**suppress_redundant_toggle** — Flip **`suppress-redundant`** during troubleshooting floods Splunk with **`telemetry_queue_overflow`** proxies lacking dataplane faults—pair toggles with **`pipeline`** **`heap`** metrics from OTel **`splunk_observability_otel`** feeds.\n\n**mdt_heartbeat_change** — **`heartbeat-interval`** edits (**60→30**) reorder exporter pacing yet dataplane counters remain calm—tie pager macros to **`CHG`** approvals referencing **`grpc`** **`subscription`** edits.\n\n**grpc_tcp_keepalive_expiry** — WAN TCP keepalive variance triggers benign **`grpc_tcp_keepalive_expiry`** bursts—cross-check ICMP probes plus **`vmware:vsphere:syslog`** drift notices before cabling escalation.\n\n**yang_path_migration_window** — Vendor **`YANG`** revisions relocate **`DbgIfHCIn`** sensors—temporary **`yang_path_migration_window`** alarms coincide with **`pipeline`** **`reload`** narratives lacking congestion signatures.\n\n**mdt_encoding_change** — Migration **`gpb-compact`**→**`json`** stresses CPUs momentarily—**`mdt_encoding_change`** bursts resemble outages unless **`CHG`** cites telemetry modernization windows documented beside **`encoding_latest`** dashboards.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telegraf (gNMI plugin), Splunk HEC, SNMP TA (fallback).\n• Ensure the following data sources are available: Telegraf internal metrics, gNMI subscription status.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy Telegraf with gNMI input plugin on collector hosts. Configure NX-OS sensor groups for interface, BGP, system, and environment paths. Monitor Telegraf internal metrics for collection health. Alert on stale data or excessive gather times.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=telegraf sourcetype=\"telegraf:internal\" measurement=\"internal_gather\"\n| stats avg(gather_time_ns) as avg_gather_ns max(gather_time_ns) as max_gather_ns count as samples by host, input\n| where input=\"cisco_telemetry_gnmi\"\n| eval avg_gather_ms=round(avg_gather_ns/1000000,1)\n| where avg_gather_ms > 5000 OR samples < expected_samples\n| table host, input, avg_gather_ms, max_gather_ns, samples\n```\n\nUnderstanding this SPL\n\n**Nexus 9000 NX-OS Streaming Telemetry Health** — Streaming telemetry (gNMI/gRPC) from Nexus 9000 provides sub-second interface, routing, and system metrics. Monitoring the telemetry pipeline itself ensures data collection gaps are detected before they blind monitoring dashboards.\n\nDocumented **Data sources**: Telegraf internal metrics, gNMI subscription status. **App/TA** (typical add-on context): Telegraf (gNMI plugin), Splunk HEC, SNMP TA (fallback). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: telegraf; **sourcetype**: telegraf:internal. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=telegraf, sourcetype=\"telegraf:internal\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, input** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where input=\"cisco_telemetry_gnmi\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **avg_gather_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_gather_ms > 5000 OR samples < expected_samples` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Nexus 9000 NX-OS Streaming Telemetry Health**): table host, input, avg_gather_ms, max_gather_ns, samples\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (collector health), Line chart (gather time), Single value (active subscriptions).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300, Nexus 9500, Nexus 3000 (NX-OS 9.3+)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch whether your switches keep streaming their minute-by-minute health numbers cleanly—pipes staying open and timers honest—before dashboards freeze while the cables themselves still carry traffic.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.4.5",
              "n": "NX-OS VXLAN EVPN Fabric Underlay BGP Health",
              "c": "critical",
              "f": "intermediate",
              "v": "The VXLAN EVPN fabric relies on BGP for both underlay (iBGP/OSPF) and overlay (BGP EVPN address family) connectivity. BGP peer flaps or stuck sessions in the underlay break VTEP reachability and cause tenant network outages. Monitoring BGP state across all spines and leafs is foundational.",
              "t": "NX-OS syslog (`cisco:nexus`), gNMI telemetry, SNMP TA",
              "d": "NX-OS syslog (BGP-5, BGP-3 messages), gNMI BGP sensor path, SNMP BGP4-MIB",
              "q": "index=network sourcetype=\"cisco:nexus\" \"BGP-5-ADJCHANGE\" OR \"BGP-3-NOTIFICATION\"\n| rex \"neighbor (?<peer_ip>\\S+).*(?<state>Up|Down|Established|Idle)\"\n| stats count latest(state) as current_state by host, peer_ip\n| where current_state!=\"Established\"\n| table host, peer_ip, current_state, count\n| sort -count",
              "m": "Forward NX-OS syslog to Splunk (facility BGP). Optionally stream BGP state via gNMI for sub-second detection. Alert on any peer leaving Established state. Correlate with interface flaps (UC-18.3.14) and VTEP reachability (UC-18.3.9).",
              "z": "Status grid (peer status matrix), Table (non-established peers), Timeline (flap events).",
              "kfp": "A Fabric ISSU rolling window deliberately resets BGP sessions during supervisor reloads—suppress flap thresholds tied to approved CHG identifiers referencing ISSU phases until reload queues drain inside LAN Fabric job consoles documented beside SUM workflows.\n\nNDI site_health replay scenarios invoked for disaster rehearsals temporarily compress fabricScore—tie suppression tokens to tabletop CRQ identifiers before rehearsal dips masquerade as live regressions unrelated to scripted failover choreography.\n\nPlanned spine maintenance exercising BGP Graceful Restart stretches Established narratives while NVE peers remain UP—cross-check graceful-restart syslog arcs against flap KPI totals prior to optical-layer escalations unrelated to cabling faults.\n\nTenant onboarding bursts swell Type-5 advertisement counts briefly while route-target auto symmetry persists—differentiate purposeful EVPN surges using CHG bridges referencing evi mappings versus leakage cues lacking approvals.\n\nControlled BGP route-refresh activity after route-map edits intentionally reshuffles EVPN NLRI—suppress route-drop macros whenever automation pipelines stamp policy_refresh markers correlated with NDFC deploy batches.\n\nFabric forwarding mode anycast-gateway-mac migrations can replay dup-mac syslog arcs until mobility timers converge—widen suppression windows when hypervisor teams schedule clustered storage migrations overlapping gateway-change windows that briefly amplify Type-2 churn.",
              "refs": "[CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NX-OS syslog (`cisco:nexus`), gNMI telemetry, SNMP TA.\n• Ensure the following data sources are available: NX-OS syslog (BGP-5, BGP-3 messages), gNMI BGP sensor path, SNMP BGP4-MIB.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward NX-OS syslog to Splunk (facility BGP). Optionally stream BGP state via gNMI for sub-second detection. Alert on any peer leaving Established state. Correlate with interface flaps (UC-18.3.14) and VTEP reachability (UC-18.3.9).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:nexus\" \"BGP-5-ADJCHANGE\" OR \"BGP-3-NOTIFICATION\"\n| rex \"neighbor (?<peer_ip>\\S+).*(?<state>Up|Down|Established|Idle)\"\n| stats count latest(state) as current_state by host, peer_ip\n| where current_state!=\"Established\"\n| table host, peer_ip, current_state, count\n| sort -count\n```\n\nUnderstanding this SPL\n\n**NX-OS VXLAN EVPN Fabric Underlay BGP Health** — The VXLAN EVPN fabric relies on BGP for both underlay (iBGP/OSPF) and overlay (BGP EVPN address family) connectivity. BGP peer flaps or stuck sessions in the underlay break VTEP reachability and cause tenant network outages. Monitoring BGP state across all spines and leafs is foundational.\n\nDocumented **Data sources**: NX-OS syslog (BGP-5, BGP-3 messages), gNMI BGP sensor path, SNMP BGP4-MIB. **App/TA** (typical add-on context): NX-OS syslog (`cisco:nexus`), gNMI telemetry, SNMP TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:nexus. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:nexus\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, peer_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where current_state!=\"Established\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NX-OS VXLAN EVPN Fabric Underlay BGP Health**): table host, peer_ip, current_state, count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NX-OS VXLAN EVPN Fabric Underlay BGP Health** — The VXLAN EVPN fabric relies on BGP for both underlay (iBGP/OSPF) and overlay (BGP EVPN address family) connectivity. BGP peer flaps or stuck sessions in the underlay break VTEP reachability and cause tenant network outages. 
Monitoring BGP state across all spines and leafs is foundational.\n\nDocumented **Data sources**: NX-OS syslog (BGP-5, BGP-3 messages), gNMI BGP sensor path, SNMP BGP4-MIB. **App/TA** (typical add-on context): NX-OS syslog (`cisco:nexus`), gNMI telemetry, SNMP TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (peer status matrix), Table (non-established peers), Timeline (flap events).",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300, Nexus 9500, Nexus 3000 (VXLAN EVPN fabrics)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch whether neighbor sessions underneath your stretched switching fabric stay steady while tunnel endpoints agree on reachability. When those sessions jitter or tunnels stumble, we raise a calm warning before east-west conversations quietly fail.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_nexus",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.4.6",
              "n": "NX-OS Control Plane Policing (CoPP) Drops",
              "c": "high",
              "f": "intermediate",
              "v": "CoPP protects the switch CPU from being overwhelmed by excessive control-plane traffic (ARP storms, BGP attacks, ICMP floods). Monitoring CoPP drop counters detects both legitimate overload and potential DoS attacks targeting the management or routing plane.",
              "t": "NX-OS syslog (`cisco:nexus`), SNMP TA, gNMI telemetry",
              "d": "NX-OS CoPP counters (`show policy-map interface control-plane`), syslog, gNMI",
              "q": "index=network sourcetype=\"cisco:nexus:copp\" OR (sourcetype=\"cisco:nexus\" \"COPP\" \"DROP\")\n| stats sum(dropped_packets) as drops sum(conform_packets) as conforms by host, class_name\n| eval drop_pct=round(drops/(drops+conforms)*100,2)\n| where drops > 1000 OR drop_pct > 5\n| table host, class_name, drops, conforms, drop_pct\n| sort -drops",
              "m": "Poll CoPP counters via scripted input or gNMI every 60 seconds. Baseline normal drop rates per class. Alert on sustained drops exceeding baseline, particularly for BGP, OSPF, and management classes. Investigate as potential security events.",
              "z": "Table (CoPP classes with drops), Bar chart (drops by class), Line chart (drop rate trending).",
              "kfp": "**BGP route-refresh bursts** staged after outbound **`route-map`** edits can spike **`copp-system-class-important`** while peers synchronize refresh windows—suppress Sev ladders when **`CHG`** lookups cite scripted **`route-refresh`** maintenance unrelated to hostile **`BGP`** exhaustion hypotheses.\n\n**Supervisor reload choreography** replays **`OSPF`**/**`IS-IS`**/**`BGP`** finite-machine chatter until timers converge—compare **`COPP`** timelines against **`show system reset-reason`** plus boot **`_time`** deltas before blaming steady-state **`violated_pkts`** unrelated to deterministic reload work.\n\n**Leaf turn-up waves** amplify **`LLDP`**/**`CDP`** discovery on fresh fabric links—tie **`discovery_surge_lane`** alerts to **`POAP`** tickets so Splunk distinguishes expansion noise from covert neighbor harvesting narratives around **`T1592`** styled activities.\n\n**Planned strictness relaxations** migrating from **`copp-system-policy-strict`** toward **`copp-system-policy-moderate`** during capacity reviews intentionally enlarge **`police cir`** envelopes—demand CAB proof before **`policy_drift`** macros page as if a surprise attachment event occurred.\n\n**AAA contention** during **`TACACS+`**/**`RADIUS`** brownouts thrashes authentication classes—correlate **`COPP`** bursts with **`AAA-SERVER`**/**`TACACS+`** syslog arcs and **`show aaa server` health before elevating ARP-centric attack stories.",
              "refs": "[CIM: Intrusion Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [
                "T1499.004",
                "T1592"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NX-OS syslog (`cisco:nexus`), SNMP TA, gNMI telemetry.\n• Ensure the following data sources are available: NX-OS CoPP counters (`show policy-map interface control-plane`), syslog, gNMI.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll CoPP counters via scripted input or gNMI every 60 seconds. Baseline normal drop rates per class. Alert on sustained drops exceeding baseline, particularly for BGP, OSPF, and management classes. Investigate as potential security events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"cisco:nexus:copp\" OR (sourcetype=\"cisco:nexus\" \"COPP\" \"DROP\")\n| stats sum(dropped_packets) as drops sum(conform_packets) as conforms by host, class_name\n| eval drop_pct=round(drops/(drops+conforms)*100,2)\n| where drops > 1000 OR drop_pct > 5\n| table host, class_name, drops, conforms, drop_pct\n| sort -drops\n```\n\nUnderstanding this SPL\n\n**NX-OS Control Plane Policing (CoPP) Drops** — CoPP protects the switch CPU from being overwhelmed by excessive control-plane traffic (ARP storms, BGP attacks, ICMP floods). Monitoring CoPP drop counters detects both legitimate overload and potential DoS attacks targeting the management or routing plane.\n\nDocumented **Data sources**: NX-OS CoPP counters (`show policy-map interface control-plane`), syslog, gNMI. **App/TA** (typical add-on context): NX-OS syslog (`cisco:nexus`), SNMP TA, gNMI telemetry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:nexus:copp, cisco:nexus. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"cisco:nexus:copp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, class_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **drop_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drops > 1000 OR drop_pct > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NX-OS Control Plane Policing (CoPP) Drops**): table host, class_name, drops, conforms, drop_pct\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NX-OS Control Plane Policing (CoPP) Drops** — CoPP protects the switch CPU from being overwhelmed by excessive control-plane traffic (ARP storms, BGP attacks, ICMP floods). 
Monitoring CoPP drop counters detects both legitimate overload and potential DoS attacks targeting the management or routing plane.\n\nDocumented **Data sources**: NX-OS CoPP counters (`show policy-map interface control-plane`), syslog, gNMI. **App/TA** (typical add-on context): NX-OS syslog (`cisco:nexus`), SNMP TA, gNMI telemetry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (CoPP classes with drops), Bar chart (drops by class), Line chart (drop rate trending).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Cisco Nexus 9300, Nexus 9500, Nexus 3000",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch the quiet governor around each switch brain so routing chatter cannot flood it. When those governors discard traffic too long, we warn your crews early so replies stay quick for everyone relying on that gear.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.dest | sort - count",
              "e": [
                "cisco",
                "snmp",
                "syslog"
              ],
              "em": [
                "cisco_nexus",
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.4.7",
              "n": "Nexus Dashboard Orchestrator Cross-Fabric Consistency",
              "c": "high",
              "f": "advanced",
              "v": "Nexus Dashboard Orchestrator (NDO, formerly MSO) manages policies across multiple ACI fabrics or NDFC-managed NX-OS fabrics. Configuration inconsistencies between sites cause asymmetric routing, broken inter-site connectivity, and policy enforcement gaps.",
              "t": "Cisco DC Networking Application (Splunkbase 7777), NDO REST API scripted input",
              "d": "NDO audit logs, schema/template deployment status, `cisco:ndo:audit`",
              "q": "index=cisco_dc sourcetype=\"cisco:ndo:audit\" earliest=-24h\n| stats count by action, template_name, site_name, status, user\n| where status!=\"deployed\" OR action IN (\"failed\",\"conflict\")\n| table template_name, site_name, action, status, user, count\n| sort -count",
              "m": "Poll NDO deployment status via REST API. Detect schema deployment failures and pending diffs between sites. Alert on any site showing stale or failed deployment. Cross-reference with ACI multisite health (UC-18.1.17).",
              "z": "Table (deployment status per site/template), Status grid (site consistency), Timeline (deployment events).",
              "kfp": "**maintenance_template_repush** — Approved **CAB** windows intentionally redeploy **templates** site-by-site, so **not_deployed** appears transiently on followers while the lead **schema** finishes; **suppress** **P2** rows when **`ndo_change_ticket`** matches **`inputlookup ndo_maintenance_allow.csv`** and **valid_until** covers **_time**.\n\n**greenfield_site_partial_bind** — New **fabric** registrations often show **policy-deployment-status** gaps until **`/api/v1/sites`** stabilization completes; require **`sites_touched`** ≥ **2** or **48h** age from **`lookup ndo_site_onboard.csv`** before paging **P1_policy_split**.\n\n**apic_cli_touchup_window** — Operators sometimes patch **EPG** or **VRF** knobs directly on **APIC**, creating **modified** states that **NDO** will reconcile; cross-check **`cisco:ndo:audit`** actor **user** against **NetOps** roster and **filter** **`severity!=\"P1\"`** unless **`playbook`** references genuine **inter-site** **contract** breaks.\n\n**schema_rename_propagation_lag** — **Schema** renames traverse **sites** asynchronously; **stale** banners can clear without intervention—compare two successive **`cisco:ndo:deploy_status`** snapshots **60m** apart and **suppress** when **deploy_states** converges to **deployed**.\n\n**poll_interval_misalignment** — **Splunkbase** **1546** inputs polling **`/api/v1/schemas/{id}/policy-deployment-status`** faster than **300s** may duplicate **HTTP** pulls; dedupe on **`schema_key`**, **`template_key`**, **`site_key`**, **`date_mday`**, **`date_hour`** before **alert** volume drives false **P2_template_gap**.\n\n**hep_duplicate_batch_ingest** — Mis-keyed **HEC** **tokens** can double-fire identical **audit** rows after **retry** storms; cap **`evidence_hits`** escalation when **source** IP and **guid** repeat—route to **pipeline** owners, not **fabric** anchors.\n\n**stretched_epg_quiesce** — **Disaster-recovery** rehearsals temporarily **detach** a **follower** site; **site_island** is expected—tie **`signal=site_island`** to **`lookup dr_drill_ack.csv`** and **where** **`drill_active=1`** **suppresses** **P1**.\n\n**ndo_certificate_rotation_grace** — **TLS** rotations on **NDO** **management** interfaces produce **`cisco:ndo:platform`** flaps** unrelated to **policy**; **wrap** **P2_ndo_platform** with **`lookup cert_rotation_window.csv`** honoring **maintenance** **tokens**.",
              "refs": "[Cisco DC Networking Application](https://splunkbase.splunk.com/app/7777), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco DC Networking Application (Splunkbase 7777), NDO REST API scripted input.\n• Ensure the following data sources are available: NDO audit logs, schema/template deployment status, `cisco:ndo:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll NDO deployment status via REST API. Detect schema deployment failures and pending diffs between sites. Alert on any site showing stale or failed deployment. Cross-reference with ACI multisite health (UC-18.1.17).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_dc sourcetype=\"cisco:ndo:audit\" earliest=-24h\n| stats count by action, template_name, site_name, status, user\n| where status!=\"deployed\" OR action IN (\"failed\",\"conflict\")\n| table template_name, site_name, action, status, user, count\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Nexus Dashboard Orchestrator Cross-Fabric Consistency** — Nexus Dashboard Orchestrator (NDO, formerly MSO) manages policies across multiple ACI fabrics or NDFC-managed NX-OS fabrics. Configuration inconsistencies between sites cause asymmetric routing, broken inter-site connectivity, and policy enforcement gaps.\n\nDocumented **Data sources**: NDO audit logs, schema/template deployment status, `cisco:ndo:audit`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), NDO REST API scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_dc; **sourcetype**: cisco:ndo:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_dc, sourcetype=\"cisco:ndo:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by action, template_name, site_name, status, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status!=\"deployed\" OR action IN (\"failed\",\"conflict\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Nexus Dashboard Orchestrator Cross-Fabric Consistency**): table template_name, site_name, action, status, user, count\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.status, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Nexus Dashboard Orchestrator Cross-Fabric Consistency** — Nexus Dashboard Orchestrator (NDO, formerly MSO) manages policies across multiple ACI fabrics or NDFC-managed NX-OS fabrics. 
Configuration inconsistencies between sites cause asymmetric routing, broken inter-site connectivity, and policy enforcement gaps.\n\nDocumented **Data sources**: NDO audit logs, schema/template deployment status, `cisco:ndo:audit`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), NDO REST API scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (deployment status per site/template), Status grid (site consistency), Timeline (deployment events).",
              "script": "",
              "premium": "",
              "hw": "Nexus Dashboard Orchestrator, multi-site ACI or NDFC fabrics",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We align the central playbook that spans your halls with the copy each site truly runs, so nobody inherits different traffic rules by accident. We raise a calm flag before one wing behaves unlike its twins.",
              "mtype": [
                "Compliance",
                "Availability"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.status, All_Changes.user | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.4.8",
              "n": "NDFC Switch Inventory and Lifecycle Status",
              "c": "medium",
              "f": "beginner",
              "v": "Maintaining an accurate, up-to-date inventory of all switches managed by NDFC, including model, serial, software version, and end-of-life/end-of-support dates, supports procurement planning, compliance audits, and vulnerability management.",
              "t": "Cisco DC Networking Application (Splunkbase 7777), NDFC REST API scripted input",
              "d": "NDFC switch inventory API, `cisco:ndfc:inventory`",
              "q": "index=cisco_dc sourcetype=\"cisco:ndfc:inventory\"\n| stats latest(software_version) as sw_ver latest(serial_number) as serial latest(model) as model by switch_name, fabric_name\n| lookup cisco_eos_dates model OUTPUT eos_date, eol_date\n| eval days_to_eos=round((strptime(eos_date,\"%Y-%m-%d\")-now())/86400)\n| where days_to_eos < 365 OR isnull(days_to_eos)\n| table switch_name, fabric_name, model, serial, sw_ver, eos_date, days_to_eos\n| sort days_to_eos",
              "m": "Poll NDFC inventory weekly. Maintain a lookup of Cisco EoS/EoL dates. Alert at 12, 6, and 3 month thresholds. Generate quarterly lifecycle reports for procurement.",
              "z": "Table (switches approaching EoS), Pie chart (lifecycle status distribution), Single value (switches past EoS).",
              "kfp": "**inventory_poll_shard_skew** — Multi-region **NDFC** clusters or **pagination** boundaries can delay subsets of **leaf** rows until the next API **page** cycle; **suppress** **registration_gap** if **`last_inv_poll`** skew is **<15m** and **NDFC** **UI** still lists the device **In-Service**.\n\n**supervisor_switchover_serial_flip** — **active**/**standby** rotations sometimes emit blank **serialNumber** on transient polls; require **two** consecutive **inventory_hole** samples or cross-check **`cisco:ndfc:syslog`** **switchover** arcs before paging **asset** teams.\n\n**pid_label_variants** — **Nexus** **PID** strings from **NDFC** may omit **spare** tokens (**`-=`** optics bundles) versus corporate **CMDB**; **eox_lookup_miss** clears after you normalize **lookup_pid** in **`cisco_switch_eox_lookup`** with **`FIELDALIAS`** bridges rather than treating chassis as unknown.\n\n**greenfield_fabric_registration** — **managedStatus** may read **pending** while **POAP** completes; tie **registration_gap** alerts to **`onboardingState`** age **>48h** or **ServiceNow** **`CHG`** closure absent.\n\n**controlled_lab_duplicates** — **Qualification** racks reuse **hostname** patterns; dedupe **`dev_name`** with **`fabric_name`** tokens or **`mgmt_ip`** before **Sev** ladders fire on **inventory_hole** alone.\n\n**maintenance_unreach_window** — Approved **ISSU** or **reload** tickets legitimately set **operStatus** **down**; join **`inputlookup ndfc_maintenance_allowlist.csv`** windows keyed to **fabric_name** + **dev_name**.\n\n**eox_calendar_refresh** — **EoX** API bulk reloads can shift **LDOS** **days_to_ldos** by hours as Cisco **revises** bulletins; alert on **crossing** thresholds week-over-week, not single-minute jitter.\n\n**audit_noise_rows** — **`cisco:ndfc:audit`** lines mentioning **inventory** during operator **export** jobs can duplicate device strings; keep primary **`stats`** on **`cisco:ndfc:inventory`** and treat **audit** as **optional** corroboration only.",
              "refs": "[Cisco DC Networking Application](https://splunkbase.splunk.com/app/7777)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco DC Networking Application (Splunkbase 7777), NDFC REST API scripted input.\n• Ensure the following data sources are available: NDFC switch inventory API, `cisco:ndfc:inventory`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll NDFC inventory weekly. Maintain a lookup of Cisco EoS/EoL dates. Alert at 12, 6, and 3 month thresholds. Generate quarterly lifecycle reports for procurement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_dc sourcetype=\"cisco:ndfc:inventory\"\n| stats latest(software_version) as sw_ver latest(serial_number) as serial latest(model) as model by switch_name, fabric_name\n| lookup cisco_eos_dates model OUTPUT eos_date, eol_date\n| eval days_to_eos=round((strptime(eos_date,\"%Y-%m-%d\")-now())/86400)\n| where days_to_eos < 365 OR isnull(days_to_eos)\n| table switch_name, fabric_name, model, serial, sw_ver, eos_date, days_to_eos\n| sort days_to_eos\n```\n\nUnderstanding this SPL\n\n**NDFC Switch Inventory and Lifecycle Status** — Maintaining an accurate, up-to-date inventory of all switches managed by NDFC, including model, serial, software version, and end-of-life/end-of-support dates, supports procurement planning, compliance audits, and vulnerability management.\n\nDocumented **Data sources**: NDFC switch inventory API, `cisco:ndfc:inventory`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), NDFC REST API scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_dc; **sourcetype**: cisco:ndfc:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_dc, sourcetype=\"cisco:ndfc:inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, fabric_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **days_to_eos** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_eos < 365 OR isnull(days_to_eos)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NDFC Switch Inventory and Lifecycle Status**): table switch_name, fabric_name, model, serial, sw_ver, eos_date, days_to_eos\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (switches approaching EoS), Pie chart (lifecycle status distribution), Single value (switches past EoS).",
              "script": "",
              "premium": "",
              "hw": "All NDFC-managed switches (Nexus 9300, 9500, 3000, 7000)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We carry a living list of every switch your fabric controller says it owns—models, serials, and software ages—so renewal budgets and support deadlines stay honest before anything slips off contract quietly.",
              "mtype": [
                "Inventory",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.4.9",
              "n": "Nexus Dashboard Site and Fabric Assurance Health Score",
              "c": "high",
              "f": "intermediate",
              "v": "Site-level assurance scores roll up connectivity, best-practice violations, and capacity risk across all managed switches. Surfacing declining scores in Splunk gives data center operators a single trend line to prioritize remediation before Insights opens critical anomalies during business peaks.",
              "t": "Cisco DC Networking Application (Splunkbase 7777), Nexus Dashboard REST export",
              "d": "`index=cisco_dc` `sourcetype=\"cisco:nd:site_health\"` with fields `site_name`, `fabric_name`, `assurance_score`, `risk_level`, `open_findings`",
              "q": "index=cisco_dc sourcetype=\"cisco:nd:site_health\" earliest=-24h\n| stats latest(assurance_score) as score latest(risk_level) as risk latest(open_findings) as findings by site_name, fabric_name\n| where score < 85 OR match(lower(risk),\"(?i)high|critical\")\n| sort score\n| table site_name, fabric_name, score, risk, findings",
              "m": "(1) Schedule API pull from Nexus Dashboard Assurance or ingest pre-aggregated JSON via HEC. (2) Map score scale to your SLA colors. (3) Correlate drops with NDI anomalies (UC-18.4.1) and compliance drift (UC-18.4.2).",
              "z": "Single value (worst site score), Bar chart (score by fabric), Table (sites below threshold).",
              "kfp": "**healthscore_bootstrap_window** - Fresh **site** onboarding keeps **composite** **scores** artificially high until **Insights** finishes first-pass **connectivity** sweeps; **where** clauses should ignore **P3** drops for **72h** after **`site_provision_date`** from **`lookup nd_site_rollout.csv`** or compare against **`insightsGroup firstSeen`** timestamps before paging.\n\n**assurance_weighting_metadata_refresh** - Cisco bug-metadata or **PSIRT** library updates can re-weight **minor** findings overnight, producing **single-digit** score steps without new faults; tie **`delta_window`** alerts to a non-zero **`cisco:nd:findings`** event rate and cross-check **Insights** release notes; **suppress** via **`lookup penalty_release_ack.csv`** stamped by engineering.\n\n**capacity_planning_whatif** - Operators run **capacity simulation** tasks that temporarily inflate **Capacity** pillar penalties; validate active **what-if** jobs in the **Insights** UI and filter **`has_capacity=1`** alerts with **`changereq_open=0`** from **`inputlookup cab_staging.csv`**.\n\n**nd_appservices_queue_storm** - **Nexus Dashboard** microservices restarting after patching can stall **health** polling while **findings** ingestion continues; correlate **`cisco:nd:syslog`** **ERROR** bursts with score freezes; **suppress** **P1** if **`kubectl get pods`** equivalents show rolling restarts tied to approved **CHG** tokens.\n\n**fabric_scope_partial_resync** - Sometimes only half the **spine** cluster registers during **NDFC** sync, skewing **Best Practices**; confirm **inventory counts** in **Insights** match **CMDB** before treating **`last_health`** as production truth; pause alerts using **`lookup fabric_resync_in_progress.csv`**.\n\n**change_impact_shadow_penalties** - Planned **ISSU** or **maintenance** modes inject **Change Impact** deductions that self-heal; require **`maintenance_allow_list`** matches with **time-bound** **`valid_until`** columns before notifying leadership.\n\n**duplicate_poll_json_merge** - Misconfigured **1546** inputs may **duplicate** **site_health** events with identical timestamps, exaggerating volatility; **dedup** **`site_key fabric_key _time`** in a **summary index** or fix modular input **tracking** keys when **`health_spread`** looks implausible above **40**.\n\n**cross_fabric_label_collision** - Identical **`fabric_name`** strings in disjoint regions can merge wrongly; enforce **`region_code`** prefixes inside **`fabric_key`** using **`lookup site_region.csv`** before **`stats`** aggregation triggers false **P2** storms.",
              "refs": "[Cisco DC Networking Application](https://splunkbase.splunk.com/app/7777), [CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco DC Networking Application (Splunkbase 7777), Nexus Dashboard REST export.\n• Ensure the following data sources are available: `index=cisco_dc` `sourcetype=\"cisco:nd:site_health\"` with fields `site_name`, `fabric_name`, `assurance_score`, `risk_level`, `open_findings`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Schedule API pull from Nexus Dashboard Assurance or ingest pre-aggregated JSON via HEC. (2) Map score scale to your SLA colors. (3) Correlate drops with NDI anomalies (UC-18.4.1) and compliance drift (UC-18.4.2).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_dc sourcetype=\"cisco:nd:site_health\" earliest=-24h\n| stats latest(assurance_score) as score latest(risk_level) as risk latest(open_findings) as findings by site_name, fabric_name\n| where score < 85 OR match(lower(risk),\"(?i)high|critical\")\n| sort score\n| table site_name, fabric_name, score, risk, findings\n```\n\nUnderstanding this SPL\n\n**Nexus Dashboard Site and Fabric Assurance Health Score** — Site-level assurance scores roll up connectivity, best-practice violations, and capacity risk across all managed switches. Surfacing declining scores in Splunk gives data center operators a single trend line to prioritize remediation before Insights opens critical anomalies during business peaks.\n\nDocumented **Data sources**: `index=cisco_dc` `sourcetype=\"cisco:nd:site_health\"` with fields `site_name`, `fabric_name`, `assurance_score`, `risk_level`, `open_findings`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), Nexus Dashboard REST export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_dc; **sourcetype**: cisco:nd:site_health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_dc, sourcetype=\"cisco:nd:site_health\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by site_name, fabric_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where score < 85 OR match(lower(risk),\"(?i)high|critical\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Nexus Dashboard Site and Fabric Assurance Health Score**): table site_name, fabric_name, score, risk, findings\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Nexus Dashboard Site and Fabric Assurance Health Score** — Site-level assurance scores roll up connectivity, best-practice violations, and capacity risk across all managed switches. 
Surfacing declining scores in Splunk gives data center operators a single trend line to prioritize remediation before Insights opens critical anomalies during business peaks.\n\nDocumented **Data sources**: `index=cisco_dc` `sourcetype=\"cisco:nd:site_health\"` with fields `site_name`, `fabric_name`, `assurance_score`, `risk_level`, `open_findings`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), Nexus Dashboard REST export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (worst site score), Bar chart (score by fabric), Table (sites below threshold).",
              "script": "",
              "premium": "",
              "hw": "Nexus Dashboard, NDFC-managed fabrics",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep a rolling wellness number for each fabric in your halls, like a report card for the switching grid, so teams shore up weak spots before anyone feels slow service.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.4.10",
              "n": "Golden Firmware Image Compliance Across NDFC Fabrics",
              "c": "high",
              "f": "beginner",
              "v": "Running multiple NX-OS trains in one fabric increases interoperability defects during upgrades. Comparing live images to the approved golden list per platform reduces unplanned reload risk and speeds security patch campaigns for the physical data center network.",
              "t": "Cisco DC Networking Application (Splunkbase 7777), `cisco:ndfc:inventory`",
              "d": "`index=cisco_dc` `sourcetype=\"cisco:ndfc:inventory\"` with fields `switch_name`, `fabric_name`, `model`, `software_version`",
              "q": "index=cisco_dc sourcetype=\"cisco:ndfc:inventory\" earliest=-24h\n| lookup ndfc_golden_image.csv model OUTPUT golden_version\n| where isnotnull(golden_version) AND software_version!=golden_version\n| stats values(software_version) as running_versions values(golden_version) as golden_versions count by fabric_name, model\n| sort -count\n| table fabric_name, model, running_versions, golden_versions, count",
              "m": "(1) Maintain `ndfc_golden_image.csv` with CAB-approved NX-OS per SKU. (2) Nightly diff from inventory API. (3) Drive remediation tickets with risk tier from PSIRT correlation (UC-18.4.3).",
              "z": "Pie chart (compliant vs drift), Table (non-compliant switches), Bar chart (count by fabric).",
              "kfp": "**New-fabric stand-up** leaves **image_policy_compliance** at NA until templates bind **imagePolicy**—Splunk shows a transient gap while inventory onboarding completes—suppress policy_gap macros until CAB tickets record controller attachment milestones and the fabric exits bootstrap windows documented by NetOps.\n\n**Vendor qualification labs** run Maintenance trains under CMDB environment=cert so Out-of-Sync versus LTS targets is scheduled while optics and transceiver matrices bake—tie paging to cert_campaign bridges instead of Sev ladders aimed at production fabrics listed in CMDB tier-one fields.\n\n**Cold-standby fabrics** pin prior-release **imagePolicy** rows on border-leaf roles awaiting failover CHG execution—validateImage may pass while target_version differs from production—apply standby_wave lookups so golden_straggler alerts skip failover_ready estates explicitly tagged in ServiceNow.\n\n**CHG-staged ISSU waves** hold installer.dryRun cohorts Out-of-Sync until installer.commit—amber dashboards overlap wave_stage identifiers—pause Sev timers until dry-run completion timestamps align with automation logs referencing policy:upgrade:install receipts captured by collector scripts.\n\n**Controlled DR drills** pair matched running_version tuples across mirrored racks even when fabric_compliance_pct differs—annotate matched_pair in ServiceNow so vpc_peer_drift macros ignore deliberate symmetry rehearsals tied to failover exercises rather than rogue drift.",
              "refs": "[Cisco DC Networking Application](https://splunkbase.splunk.com/app/7777), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1542.005"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco DC Networking Application (Splunkbase 7777), `cisco:ndfc:inventory`.\n• Ensure the following data sources are available: `index=cisco_dc` `sourcetype=\"cisco:ndfc:inventory\"` with fields `switch_name`, `fabric_name`, `model`, `software_version`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `ndfc_golden_image.csv` with CAB-approved NX-OS per SKU. (2) Nightly diff from inventory API. (3) Drive remediation tickets with risk tier from PSIRT correlation (UC-18.4.3).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_dc sourcetype=\"cisco:ndfc:inventory\" earliest=-24h\n| lookup ndfc_golden_image.csv model OUTPUT golden_version\n| where isnotnull(golden_version) AND software_version!=golden_version\n| stats values(software_version) as running_versions values(golden_version) as golden_versions count by fabric_name, model\n| sort -count\n| table fabric_name, model, running_versions, golden_versions, count\n```\n\nUnderstanding this SPL\n\n**Golden Firmware Image Compliance Across NDFC Fabrics** — Running multiple NX-OS trains in one fabric increases interoperability defects during upgrades. Comparing live images to the approved golden list per platform reduces unplanned reload risk and speeds security patch campaigns for the physical data center network.\n\nDocumented **Data sources**: `index=cisco_dc` `sourcetype=\"cisco:ndfc:inventory\"` with fields `switch_name`, `fabric_name`, `model`, `software_version`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), `cisco:ndfc:inventory`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_dc; **sourcetype**: cisco:ndfc:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_dc, sourcetype=\"cisco:ndfc:inventory\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(golden_version) AND software_version!=golden_version` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by fabric_name, model** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Golden Firmware Image Compliance Across NDFC Fabrics**): table fabric_name, model, running_versions, golden_versions, count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Golden Firmware Image Compliance Across NDFC Fabrics** — Running multiple NX-OS trains in one fabric increases interoperability defects during upgrades. Comparing live images to the approved golden list per platform reduces unplanned reload risk and speeds security patch campaigns for the physical data center network.\n\nDocumented **Data sources**: `index=cisco_dc` `sourcetype=\"cisco:ndfc:inventory\"` with fields `switch_name`, `fabric_name`, `model`, `software_version`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), `cisco:ndfc:inventory`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (compliant vs drift), Table (non-compliant switches), Bar chart (count by fabric).",
              "script": "",
              "premium": "",
              "hw": "NDFC-managed Nexus 9000/3000",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We check that the Cisco fabric controller’s golden firmware rules match what each switch actually runs, and we raise a hand before mixed software versions spoil planned upgrade nights or surprise reloads.",
              "mtype": [
                "Compliance",
                "Risk"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.4.11",
              "n": "NDFC Flow Telemetry Drop and Export Health",
              "c": "medium",
              "f": "advanced",
              "v": "Flow telemetry underpins capacity and security analytics for the NX-OS fabric. Collector drops or stalled exports create blind spots where congestion and microbursts go unseen until applications complain. Monitoring pipeline health preserves trust in east-west utilization dashboards.",
              "t": "Cisco DC Networking Application (Splunkbase 7777), NetFlow/IPFIX collector syslog",
              "d": "`index=cisco_dc` `sourcetype=\"cisco:ndfc:flow_export\"` with fields `switch_name`, `export_profile`, `dropped_flows`, `export_rate_eps`, `collector_ip`",
              "q": "index=cisco_dc sourcetype=\"cisco:ndfc:flow_export\" earliest=-4h\n| stats sum(dropped_flows) as drops avg(export_rate_eps) as eps by switch_name, collector_ip\n| where drops>0 OR eps < 100\n| sort -drops\n| table switch_name, collector_ip, drops, eps",
              "m": "(1) Ingest NDFC telemetry health API or collector events with per-switch counters. (2) Baseline `eps` per site. (3) Alert on drops or sustained low export rate; verify CPU and sampler intervals on switches.",
              "z": "Timechart (export rate), Table (switches with drops), Single value (total dropped flows).",
              "kfp": "**warm_restart_window** — Rolling recycling of Nexus Dashboard workers that host PMN telemetry can zero out export_rate readings for several minutes while switches still sample flows; require stalled=1 across three consecutive 15m buckets before paging, and tie suppression to approved change tickets that cite ND cluster patching.\n\n**sampler_headroom_change** — NetOps sometimes widen sampler intervals during congestion investigations; med_eps legitimately falls. Pair slowdown alerts with lossy=1 (non-zero drop_count) or a coincident spike in cisco:ndfc:audit before treating the fabric as broken.\n\n**config_preview_partial** — LAN automation may omit border-leaf exporters inside config-preview JSON until inventory convergence finishes; low fabric_key cardinality alone is not a failure signal—compare against live NX-OS CLI and NDFC inventory tiles bundled with Splunkbase 7777 macros.\n\n**aci_fabric_alias_collision** — Brownfield sites may reuse similar fabric labels in APIC and NDFC; Splunk Add-on for Cisco ACI (4022) endpoint names can collide when coalescing fabric_name. Normalize via lookup tables ndfc_fabric_alias.csv and apic_fabric_bridge.csv before declaring cross-controller drift.\n\n**nsx_edge_maintenance_overlap** — VMware NSX edge work ingested with Splunk Add-on for VMware NSX (4856) occasionally aligns timewise with flow telemetry dips even when NDFC is healthy; require CMDB tags or vCenter tasks from Splunk Add-on for VMware (1810) that reference the same maintenance scope before correlating.\n\n**rest_poll_throttle_429** — Aggressive GET /pmn/telemetry/status polling after script redeploys can trip HTTP 429 counters echoed in cisco:ndfc:syslog without sustained collector drops; increase Splunk REST Modular Input (1546) intervals and clear false positives when drop_count stays zero.\n\n**collector_drills_calendar** — Planned failover exercises rotate collector_key values to standby VIPs; median baselines reset for a full polling cycle. Use lookup collector_drill_suppress.csv keyed by drill_id with valid_until columns so sev3_export_slowdown notices stay informational during declared failover rehearsals.",
              "refs": "[Cisco DC Networking Application](https://splunkbase.splunk.com/app/7777), [CIM: Network Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco DC Networking Application (Splunkbase 7777), NetFlow/IPFIX collector syslog.\n• Ensure the following data sources are available: `index=cisco_dc` `sourcetype=\"cisco:ndfc:flow_export\"` with fields `switch_name`, `export_profile`, `dropped_flows`, `export_rate_eps`, `collector_ip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest NDFC telemetry health API or collector events with per-switch counters. (2) Baseline `eps` per site. (3) Alert on drops or sustained low export rate; verify CPU and sampler intervals on switches.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_dc sourcetype=\"cisco:ndfc:flow_export\" earliest=-4h\n| stats sum(dropped_flows) as drops avg(export_rate_eps) as eps by switch_name, collector_ip\n| where drops>0 OR eps < 100\n| sort -drops\n| table switch_name, collector_ip, drops, eps\n```\n\nUnderstanding this SPL\n\n**NDFC Flow Telemetry Drop and Export Health** — Flow telemetry underpins capacity and security analytics for the NX-OS fabric. Collector drops or stalled exports create blind spots where congestion and microbursts go unseen until applications complain. Monitoring pipeline health preserves trust in east-west utilization dashboards.\n\nDocumented **Data sources**: `index=cisco_dc` `sourcetype=\"cisco:ndfc:flow_export\"` with fields `switch_name`, `export_profile`, `dropped_flows`, `export_rate_eps`, `collector_ip`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), NetFlow/IPFIX collector syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_dc; **sourcetype**: cisco:ndfc:flow_export. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_dc, sourcetype=\"cisco:ndfc:flow_export\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by switch_name, collector_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where drops>0 OR eps < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NDFC Flow Telemetry Drop and Export Health**): table switch_name, collector_ip, drops, eps\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NDFC Flow Telemetry Drop and Export Health** — Flow telemetry underpins capacity and security analytics for the NX-OS fabric. Collector drops or stalled exports create blind spots where congestion and microbursts go unseen until applications complain. 
Monitoring pipeline health preserves trust in east-west utilization dashboards.\n\nDocumented **Data sources**: `index=cisco_dc` `sourcetype=\"cisco:ndfc:flow_export\"` with fields `switch_name`, `export_profile`, `dropped_flows`, `export_rate_eps`, `collector_ip`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), NetFlow/IPFIX collector syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (export rate), Table (switches with drops), Single value (total dropped flows).",
              "script": "",
              "premium": "",
              "hw": "Nexus 9300/9500 with flow telemetry enabled",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch whether flow samples from your data-center switches still reach the analytics collectors, and we raise a hand when drops or stalled exports would leave east-west traffic pictures incomplete for your team.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value",
              "e": [
                "cisco",
                "netflow",
                "syslog"
              ],
              "em": [
                "cisco_nexus",
                "netflow_netflow"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.4.12",
              "n": "Nexus Dashboard Insights Alert Noise and Category Mix",
              "c": "medium",
              "f": "intermediate",
              "v": "Insights can generate bursts of correlated alerts after a single root cause. Tracking alert volume by category separates chronic noise from emerging systemic issues and helps tune NDI policies without losing visibility into real data center network regressions.",
              "t": "Cisco DC Networking Application (Splunkbase 7777), `cisco:ndi:anomaly`",
              "d": "`index=cisco_dc` `sourcetype=\"cisco:ndi:anomaly\"` with fields `category`, `anomaly_type`, `fabric_name`, `severity`",
              "q": "index=cisco_dc sourcetype=\"cisco:ndi:anomaly\" earliest=-7d\n| bin _time span=1d\n| stats count by _time, category, severity\n| eventstats sum(count) as daily by _time\n| eventstats avg(daily) as baseline\n| where daily > baseline*1.5\n| sort -daily\n| table _time, category, severity, count, daily, baseline",
              "m": "(1) Ensure stable `category` mapping from webhook payload. (2) Tune multiplier for seasonal maintenance. (3) Feed noisy categories into NDI suppression workflow with Splunk approval ID.",
              "z": "Line chart (daily alert volume), Stacked bar (category mix), Table (spike days).",
              "kfp": "**ndi_duplicate_fanout** HEC or REST collectors occasionally ingest the same logical Insight twice when two forwarders target the same token; deduplicate on alert_id before trusting mix_pct, then retire the duplicate input once ack paths converge.\n\n**controller_upgrade_wave** Rolling Nexus Dashboard upgrades reshuffle worker partitions so Insights categories wobble for a few hours without leaf failures; require surge persisting beyond six hours or corroborating CRC syslog from NX-OS before paging hardware teams.\n\n**fabric_inventory_resync** Large LAN automation reconcile jobs spike cisco:ndfc:fabric_status counts and inflate fabric_pressure while NDI volume stays normal; ignore hardware escalations unless cisco:ndi:alerts evt grows in the same buckets.\n\n**compliance_scan_replay** Weekly policy rescans replay historical noncompliant rows and temporarily lift compliance mix_pct; compare uniq_keys to active rule IDs shown in the NDI compliance workspace before opening tickets.\n\n**environmental_sensor_glitch** Brief PSU or thermal telemetry glitches can tag environmental spikes that clear within two polls; demand z_like above 2.2 plus SNMP evidence from NDFC before dispatching field services.\n\n**aci_vmware_calendar_noise** Splunkbase 4022, 4856, and 1810 maintenance windows sometimes align temporally with Insights surges even when NX-OS is healthy; confirm CMDB change tags before claiming cross-domain root cause.\n\n**api_rate_limit_blip** Short HTTP 429 responses from /telemetry/alerts shrink ingest without indicating fabric outages; inspect splunkd.log on 1546 hosts, increase intervals, and only escalate if drop counters on collectors also rise per UC-18.4.11 guidance.\n\n**symmetric_link_event** A single VPC or MLAG regression can fan out dozens of connectivity-family Insights tied to one member port; use fabric_key plus interface tuples in drilldowns so mix_pct storms do not automatically imply multi-site outages.",
              "refs": "[Cisco DC Networking Application](https://splunkbase.splunk.com/app/7777), [CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco DC Networking Application (Splunkbase 7777), `cisco:ndi:anomaly`.\n• Ensure the following data sources are available: `index=cisco_dc` `sourcetype=\"cisco:ndi:anomaly\"` with fields `category`, `anomaly_type`, `fabric_name`, `severity`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure stable `category` mapping from webhook payload. (2) Tune multiplier for seasonal maintenance. (3) Feed noisy categories into NDI suppression workflow with Splunk approval ID.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_dc sourcetype=\"cisco:ndi:anomaly\" earliest=-7d\n| bin _time span=1d\n| stats count by _time, category, severity\n| eventstats sum(count) as daily by _time\n| eventstats avg(daily) as baseline\n| where daily > baseline*1.5\n| sort -daily\n| table _time, category, severity, count, daily, baseline\n```\n\nUnderstanding this SPL\n\n**Nexus Dashboard Insights Alert Noise and Category Mix** — Insights can generate bursts of correlated alerts after a single root cause. Tracking alert volume by category separates chronic noise from emerging systemic issues and helps tune NDI policies without losing visibility into real data center network regressions.\n\nDocumented **Data sources**: `index=cisco_dc` `sourcetype=\"cisco:ndi:anomaly\"` with fields `category`, `anomaly_type`, `fabric_name`, `severity`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), `cisco:ndi:anomaly`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_dc; **sourcetype**: cisco:ndi:anomaly. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_dc, sourcetype=\"cisco:ndi:anomaly\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, category, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
without reducing the number of rows; it appends the computed aggregate (here the overall **baseline**) to every event.\n• Filters the current rows with `where daily > baseline*1.5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Nexus Dashboard Insights Alert Noise and Category Mix**): table _time, category, severity, count, daily, baseline\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by _time span=1d, Alerts.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Nexus Dashboard Insights Alert Noise and Category Mix** — Insights can generate bursts of correlated alerts after a single root cause. Tracking alert volume by category separates chronic noise from emerging systemic issues and helps tune NDI policies without losing visibility into real data center network regressions.\n\nDocumented **Data sources**: `index=cisco_dc` `sourcetype=\"cisco:ndi:anomaly\"` with fields `category`, `anomaly_type`, `fabric_name`, `severity`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), `cisco:ndi:anomaly`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily alert volume), Stacked bar (category mix), Table (spike days).",
              "script": "",
              "premium": "",
              "hw": "Nexus Dashboard Insights",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch how many Nexus Insights alerts pile up by category and when one kind suddenly dominates the mix, so your team can trim repeat alarms without missing a real fabric problem.",
              "mtype": [
                "Fault",
                "Compliance"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity span=1d | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "18.4.13",
              "n": "NDFC POAP / ZTP Bootstrap and Day-0 Onboarding Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Automated provisioning brings switches into the fabric quickly; DHCP, image fetch, or certificate failures during POAP/ZTP delay expansions and leave partially configured devices in racks. Monitoring onboarding outcomes keeps brownfield growth on schedule and prevents rogue devices from sitting outside policy.",
              "t": "Cisco DC Networking Application (Splunkbase 7777), NDFC syslog",
              "d": "`index=cisco_dc` `sourcetype=\"cisco:ndfc:poap\"` with fields `serial_number`, `switch_name`, `stage`, `status`, `error_code`",
              "q": "index=cisco_dc sourcetype=\"cisco:ndfc:poap\" earliest=-7d\n| where match(lower(status),\"(?i)fail|error|timeout\") OR (match(lower(stage),\"(?i)image|cert|dhcp\") AND status!=\"success\")\n| stats latest(_time) as last_fail latest(error_code) as err latest(stage) as stage by serial_number, switch_name\n| sort -last_fail\n| table last_fail, serial_number, switch_name, stage, err",
              "m": "(1) Forward NDFC POAP/ZTP logs to Splunk with parsed `stage` milestones. (2) Alert on any failure before switch reaches `In-Sync`. (3) Join serial to inventory (UC-18.4.8) for asset context.",
              "z": "Timeline (bootstrap attempts), Table (failed devices), Single value (open failures).",
              "kfp": "**lab_poap_rehearsal** Burn-in labs and staging racks replay automation workflows dozens of times daily. Tag those chassis in the suppress lookup, require a CMDB lab flag, or scope host overrides so rehearsed lease timeouts never wake production on-call.\n\n**controller_api_lag** Cached discovery responses can trail the glass table for several minutes after operators refresh the UI. Compare adjacent poll buckets, demand two consecutive misses, and confirm syslog success lines before opening a bridge call.\n\n**maintenance_dhcp_window** Core DHCP cuts during router refreshes can strand every device hunting fresh leases and flood discovery with identical errors. Align external change calendars (APIC, NSX, vSphere) and auto-suppress when tickets cite relay or scope work.\n\n**dual_forwarder_duplicate** Two heavy forwarders polling the same fabric with different token names double-count identical failure JSON. Deduplicate on serial plus checkpoint hash or enforce one modular input steward per fabric before trusting evidence spikes.\n\n**image_policy_rebaseline** Golden image promotions briefly mark compliant switches as failing until the policy digest finishes. Watch audit rows for template publish verbs and silence alerting until the post-validation job listed on the change clears the serials.\n\n**token_rotation_noise** Scheduled trust-token rotations emit certificate-phase errors until new bundles land on switches. Pair controller audit narratives with PKI change records and extend suppression when the ticket references planned rotation work.\n\n**spurious_rex_capture** Overly permissive regex capture may treat unrelated uppercase strings as serials. Tighten minimum capture length, exclude known OUI patterns, or demand corroborating POAP events before paging syslog-only rows.\n\n**fabric_status_flap** Transient degraded tiles during supervisor switchovers resemble bootstrap storms. Require matching POAP or discovery anomalies or corroborating switch messages before dispatching field services.",
              "refs": "[Cisco DC Networking Application](https://splunkbase.splunk.com/app/7777), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco DC Networking Application (Splunkbase 7777), NDFC syslog.\n• Ensure the following data sources are available: `index=cisco_dc` `sourcetype=\"cisco:ndfc:poap\"` with fields `serial_number`, `switch_name`, `stage`, `status`, `error_code`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Forward NDFC POAP/ZTP logs to Splunk with parsed `stage` milestones. (2) Alert on any failure before switch reaches `In-Sync`. (3) Join serial to inventory (UC-18.4.8) for asset context.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_dc sourcetype=\"cisco:ndfc:poap\" earliest=-7d\n| where match(lower(status),\"(?i)fail|error|timeout\") OR (match(lower(stage),\"(?i)image|cert|dhcp\") AND status!=\"success\")\n| stats latest(_time) as last_fail latest(error_code) as err latest(stage) as stage by serial_number, switch_name\n| sort -last_fail\n| table last_fail, serial_number, switch_name, stage, err\n```\n\nUnderstanding this SPL\n\n**NDFC POAP / ZTP Bootstrap and Day-0 Onboarding Failures** — Automated provisioning brings switches into the fabric quickly; DHCP, image fetch, or certificate failures during POAP/ZTP delay expansions and leave partially configured devices in racks. Monitoring onboarding outcomes keeps brownfield growth on schedule and prevents rogue devices from sitting outside policy.\n\nDocumented **Data sources**: `index=cisco_dc` `sourcetype=\"cisco:ndfc:poap\"` with fields `serial_number`, `switch_name`, `stage`, `status`, `error_code`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), NDFC syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_dc; **sourcetype**: cisco:ndfc:poap. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_dc, sourcetype=\"cisco:ndfc:poap\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(status),\"(?i)fail|error|timeout\") OR (match(lower(stage),\"(?i)image|cert|dhcp\") AND status!=\"success\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by serial_number, switch_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NDFC POAP / ZTP Bootstrap and Day-0 Onboarding Failures**): table last_fail, serial_number, switch_name, stage, err\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NDFC POAP / ZTP Bootstrap and Day-0 Onboarding Failures** — Automated provisioning brings switches into the fabric quickly; DHCP, image fetch, or certificate failures during POAP/ZTP delay expansions and leave partially configured devices in racks. Monitoring onboarding outcomes keeps brownfield growth on schedule and prevents rogue devices from sitting outside policy.\n\nDocumented **Data sources**: `index=cisco_dc` `sourcetype=\"cisco:ndfc:poap\"` with fields `serial_number`, `switch_name`, `stage`, `status`, `error_code`. **App/TA** (typical add-on context): Cisco DC Networking Application (Splunkbase 7777), NDFC syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (bootstrap attempts), Table (failed devices), Single value (open failures).",
              "script": "",
              "premium": "",
              "hw": "Nexus 9000 being onboarded via NDFC",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch new switches as they join the fabric through the controller, and we tell you when one stalls during its first-day setup so your team fixes cables or images before empty racks block rollout dates.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_nexus"
              ],
              "sapp": [
                {
                  "name": "Cisco DC Networking Application for Splunk",
                  "id": 7777,
                  "url": "https://splunkbase.splunk.com/app/7777",
                  "desc": "Dashboards and analytics for Cisco ACI, Nexus, NDFC, and MDS data center networking",
                  "screenshots": []
                }
              ],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.8,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 13,
            "none": 0
          }
        }
      ],
      "i": 18,
      "n": "Data Center Fabric & SDN",
      "src": "cat-18-data-center-fabric-sdn.md"
    },
    {
      "s": [
        {
          "i": "19.1",
          "n": "Cisco UCS",
          "u": [
            {
              "i": "19.1.1",
              "n": "Blade/Rack Server Health (Cisco UCS)",
              "c": "critical",
              "f": "intermediate",
              "v": "A degraded DIMM or PSU often precedes an uncorrectable ECC error or power loss event. Proactive FRU RMA before HA capacity is exhausted on remaining paths prevents unplanned workload migration and potential data unavailability.",
              "t": "`Splunk_TA_cisco-ucs`, UCS Manager syslog",
              "d": "UCS Manager faults, UCS Manager equipment API",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:faults\"\n| search dn=\"sys/chassis-*/blade-*\" OR dn=\"sys/rack-unit-*\"\n| eval component=case(\n   like(cause, \"%cpu%\"), \"CPU\",\n   like(cause, \"%memory%\"), \"Memory\",\n   like(cause, \"%psu%\"), \"PSU\",\n   like(cause, \"%fan%\"), \"Fan\",\n   like(cause, \"%disk%\"), \"Disk\",\n   1==1, \"Other\")\n| stats count by severity, component, dn, descr\n| sort -severity, -count",
              "m": "Configure UCS Manager syslog forwarding to Splunk. Poll equipment health via UCS Manager XML API every 5 minutes. Track fault creation and clearing events. Alert on critical/major faults for immediate hardware replacement. Maintain server inventory with health status overlay.",
              "z": "Status grid (server health map), Bar chart (faults by component), Table (active critical faults), Timechart (fault trending).",
              "kfp": "Transient **I2C/SEL glitches** right after a blade re-seats that clear within a few poll cycles, looking severe in Splunk for minutes while the GUI already shows **Soaking** state. **Dual-feed architectures** (API plus syslog) can emit the same `code`+`dn` twice with offset timestamps, so deduplication and ownership of a single *authoritative* feed are essential before judging noise. **Firmware activation** and **CIMC reset** open bursts of `equipment-degraded` or `equipment-inoperable` that self-resolve per Cisco soak timers, yet the ticket still looks “red” in email if the alert has no `lc` or `occur` awareness. **Second fabric interconnect** maintenance often raises `link-down` or `equipment-missing` on paths that are still redundant—operations teams in healthy clusters will acknowledge these while the maintenance banner is on. **Chassis re-acknowledge and discovery** after a truck roll can list historical-looking faults the UI filters differently than the raw `faultInst` list your TA last polled, especially if poll frequency is 15 minutes but humans refresh in 15 seconds. **Time-zone and clock skew** between UCSM, the forwarder, and Splunk indexers can make a *cleared* event appear *after* the *raised* event in a naive table sort; always compare in UCSM local time for Sev1, not only `_time` order. **Read-only scoping** may hide a rack unit a database admin is staring at in the all-admin GUI—alerts on missing objects are a permissions story, not a false positive, but it feels like one until fixed. **Guest OS** disk or memory errors are out of scope here and must never be conflated with `faultInst` on the service profile equipment tree.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`, UCS Manager syslog.\n• Ensure the following data sources are available: UCS Manager faults, UCS Manager equipment API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure UCS Manager syslog forwarding to Splunk. Poll equipment health via UCS Manager XML API every 5 minutes. Track fault creation and clearing events. Alert on critical/major faults for immediate hardware replacement. Maintain server inventory with health status overlay.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:faults\"\n| search dn=\"sys/chassis-*/blade-*\" OR dn=\"sys/rack-unit-*\"\n| eval component=case(\n   like(cause, \"%cpu%\"), \"CPU\",\n   like(cause, \"%memory%\"), \"Memory\",\n   like(cause, \"%psu%\"), \"PSU\",\n   like(cause, \"%fan%\"), \"Fan\",\n   like(cause, \"%disk%\"), \"Disk\",\n   1==1, \"Other\")\n| stats count by severity, component, dn, descr\n| sort -severity, -count\n```\n\nUnderstanding this SPL\n\n**Blade/Rack Server Health (Cisco UCS)** — A degraded DIMM or PSU often precedes an uncorrectable ECC error or power loss event. Proactive FRU RMA before HA capacity is exhausted on remaining paths prevents unplanned workload migration and potential data unavailability.\n\nDocumented **Data sources**: UCS Manager faults, UCS Manager equipment API. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`, UCS Manager syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:faults. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:faults\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **component** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by severity, component, dn, descr** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (server health map), Bar chart (faults by component), Table (active critical faults), Timechart (fault trending).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS B200 M5/M6/M7, UCS C220 M5/M6/M7, UCS C240 M5/M6/M7, UCS C480 M5, UCS X210c M6/M7, UCS X410c M6, UCS 6324 FI, UCS 6332 FI, UCS 6454 FI, UCS 6536 FI",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We read the hardware health list Cisco rack and blade servers send to the unified platform manager, using the part location in the long system name, not a word guess from a short label. We sort by that path so a fan, power unit, or memory bank is fixed while redundancy still holds, and we keep the fault number for faster runbook routing.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "19.1.2",
              "n": "Service Profile Compliance (Cisco UCS)",
              "c": "high",
              "f": "intermediate",
              "v": "UCS service profiles define the identity of compute resources. Non-compliant associations indicate configuration drift, failed hardware migrations, or policy violations that can impact workload performance and security.",
              "t": "`Splunk_TA_cisco-ucs`, UCS Manager events",
              "d": "UCS Manager service profile API, configuration events",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:config\"\n| search object_type=\"service_profile\"\n| eval compliance=case(\n    assoc_state=\"associated\" AND config_state=\"applied\", \"Compliant\",\n    assoc_state=\"associated\" AND config_state!=\"applied\", \"Non-Compliant\",\n    assoc_state=\"unassociated\", \"Unassociated\",\n    1==1, \"Unknown\")\n| stats count by compliance, org, sp_name, server_dn\n| sort compliance",
              "m": "Poll service profile status via UCS Manager API every 5 minutes. Track association state and configuration compliance. Alert on non-compliant profiles requiring reapplication. Monitor service profile migrations during maintenance windows. Report on unassociated profiles (wasted compute capacity).",
              "z": "Pie chart (compliance breakdown), Table (non-compliant profiles), Single value (compliance percentage), Status grid (profile status by org).",
              "kfp": "**Updating-template edits** that intentionally create a `pending-reboot` wave across every child profile are a positive finding that already has a change ticket: mute the alert with a lookup keyed on `srcTemplName` and the approved campaign window, not by ignoring all yellow rows. **Planned blade or rack service** often shows a few poll cycles of `unassociated` or `establishing` before the profile re-homes: require the bad state in two or more inventory polls, or a minimum dwell time, before paging. **Audit-only or lab org trees** that never have compute attached keep `unassociated` profiles forever: exclude those org DNs with an allow-list, not with wishful `NOT compliance=*` noise filters. **RBAC** on the read-only user the modular input uses can hide entire org subtrees; Splunk will look like profiles vanished when the GUI admin still sees them—this is a credential-scope defect, not profile drift, until the API user matches the org being audited. **Incompatible firmware or policy pairs** you already triaged in UCSM may leave `failed-to-apply` until a fix ships; use a `code` or policy name lookup to suppress known, ticketed stragglers. **Initial-template-derived profiles** that were customised after binding still show an old `srcTemplName`; divergence is often intentional, so the compliance row is about state enums, not template-name equality, unless you add a stricter join to your own CMDB. **Clock skew and replication** between UCSM, the forwarder, and the indexer can make `last_seen` look a day old even when the poll succeeded—compare `_indextime` to `_time` for the same `dn` before you declare a dead feed. **UCSM high-availability failover** in the same minute as the poll can produce one odd `assocState` read; reconcile on the next cycle before a duplicate incident.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`, UCS Manager events.\n• Ensure the following data sources are available: UCS Manager service profile API, configuration events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll service profile status via UCS Manager API every 5 minutes. Track association state and configuration compliance. Alert on non-compliant profiles requiring reapplication. Monitor service profile migrations during maintenance windows. Report on unassociated profiles (wasted compute capacity).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:config\"\n| search object_type=\"service_profile\"\n| eval compliance=case(\n    assoc_state=\"associated\" AND config_state=\"applied\", \"Compliant\",\n    assoc_state=\"associated\" AND config_state!=\"applied\", \"Non-Compliant\",\n    assoc_state=\"unassociated\", \"Unassociated\",\n    1==1, \"Unknown\")\n| stats count by compliance, org, sp_name, server_dn\n| sort compliance\n```\n\nUnderstanding this SPL\n\n**Service Profile Compliance (Cisco UCS)** — UCS service profiles define the identity of compute resources. Non-compliant associations indicate configuration drift, failed hardware migrations, or policy violations that can impact workload performance and security.\n\nDocumented **Data sources**: UCS Manager service profile API, configuration events. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`, UCS Manager events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:config. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:config\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **compliance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by compliance, org, sp_name, server_dn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (compliance breakdown), Table (non-compliant profiles), Single value (compliance percentage), Status grid (profile status by org).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS B200 M5/M6/M7, UCS C220 M5/M6/M7, UCS C240 M5/M6/M7, UCS C480 M5, UCS X210c M6/M7, UCS X410c M6, UCS 6324 FI, UCS 6332 FI, UCS 6454 FI, UCS 6536 FI",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We keep a master list of the server “recipes” your team defines, and we check whether each recipe is really painted onto the right machine, with no half-finished or broken paint jobs. We flag when a recipe changed but the machine is still running the old settings, or when the job failed, so the apps never wake up to the wrong network identity on the next boot.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.3",
              "n": "Firmware Compliance (Cisco UCS)",
              "c": "medium",
              "f": "intermediate",
              "v": "Running inconsistent firmware across UCS creates compatibility issues and security vulnerabilities. Tracking firmware versions enables compliance reporting, patch planning, and ensures consistency across the compute fleet.",
              "t": "`Splunk_TA_cisco-ucs`, UCS Manager inventory",
              "d": "UCS Manager firmware inventory, UCS firmware policy",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:inventory\"\n| search object_type=\"firmware\"\n| stats count by component_type, running_version, server_dn\n| lookup ucs_approved_firmware component_type OUTPUT approved_version\n| eval compliant=if(running_version==approved_version, \"Yes\", \"No\")\n| stats count as server_count by component_type, running_version, approved_version, compliant\n| sort compliant, component_type",
              "m": "Poll UCS firmware inventory weekly. Maintain a lookup of approved firmware versions per component type. Compare running versions against approved baselines. Generate compliance reports for audit. Prioritize non-compliant servers in maintenance windows.",
              "z": "Table (firmware compliance matrix), Bar chart (servers by firmware version), Pie chart (compliant vs non-compliant), Single value (fleet compliance percentage).",
              "kfp": "Planned **rolling upgrade waves** where some blades already carry the new bundle and others are still on the last approved build until their window—accurate in reality, noisy if you expect one flat version everywhere. **Out-of-band hotfix** images applied for a one-off field-service action before the central baseline file catches up. **New rack or chassis** provisioned after the last quarterly lookup edit, so the row reads “No” until the spreadsheet owner updates `ucs_approved_firmware.csv`. **BIOS and management-controller trains** on different but jointly supported revs, while your lookup stores only a single string per `component_type` and therefore can’t represent paired rules without splitting rows. **Adapter firmware** whose GUI shows a marketing label but the API returns a longer build id string—comparisons look wrong until you normalise in the lookup’s `notes` and optional helper fields. **Capability-catalog auto-sync** in UCSM temporarily diverges from the human-maintained “gold” list during catalog imports.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`, UCS Manager inventory.\n• Ensure the following data sources are available: UCS Manager firmware inventory, UCS firmware policy.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll UCS firmware inventory weekly. Maintain a lookup of approved firmware versions per component type. Compare running versions against approved baselines. Generate compliance reports for audit. Prioritize non-compliant servers in maintenance windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:inventory\"\n| search object_type=\"firmware\"\n| stats count by component_type, running_version, server_dn\n| lookup ucs_approved_firmware component_type OUTPUT approved_version\n| eval compliant=if(running_version==approved_version, \"Yes\", \"No\")\n| stats count as server_count by component_type, running_version, approved_version, compliant\n| sort compliant, component_type\n```\n\nUnderstanding this SPL\n\n**Firmware Compliance (Cisco UCS)** — Running inconsistent firmware across UCS creates compatibility issues and security vulnerabilities. Tracking firmware versions enables compliance reporting, patch planning, and ensures consistency across the compute fleet.\n\nDocumented **Data sources**: UCS Manager firmware inventory, UCS firmware policy. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`, UCS Manager inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by component_type, running_version, server_dn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by component_type, running_version, approved_version, compliant** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (firmware compliance matrix), Bar chart (servers by firmware version), Pie chart (compliant vs non-compliant), Single value (fleet compliance percentage).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS B200 M5/M6/M7, UCS C220 M5/M6/M7, UCS C240 M5/M6/M7, UCS C480 M5, UCS X210c M6/M7, UCS X410c M6, UCS 6324 FI, UCS 6332 FI, UCS 6454 FI, UCS 6536 FI",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We keep a list of the software versions you approved for each part inside the servers. We compare that to what the machines report so you can plan upgrade waves in order and keep unsafe or odd mixes from taking you by surprise in production.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.4",
              "n": "Fault Trending by Severity (Cisco UCS)",
              "c": "high",
              "f": "intermediate",
              "v": "UCS fault trends reveal systemic hardware issues, environmental problems, or configuration problems across the compute fleet. Rising fault counts indicate deteriorating conditions requiring proactive attention.",
              "t": "`Splunk_TA_cisco-ucs`, UCS Manager faults",
              "d": "UCS Manager fault log, syslog",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:faults\"\n| timechart span=1h count by severity\n| fields _time critical major minor warning info",
              "m": "Forward UCS Manager faults via syslog or API polling. Categorize faults by severity and type. Track fault lifecycle (create, clear, acknowledge). Alert on critical/major fault count exceeding baseline by >50%. Report weekly on fault trends and resolution times.",
              "z": "Timechart (fault trends by severity), Bar chart (top fault codes), Single value (open critical faults), Table (active faults detail).",
              "kfp": "**Firmware-wave bursts** after bundle activation or IOM/CIMC resets often produce a dense cluster of `equipment-degraded` and related codes across many DNs. On a 30-day stacked view this can resemble a reliability collapse, yet it is the change calendar in motion. Pair splunk results with a maintenance lookup keyed to chassis, rack, or change record before you re-tune a baseline. **Syslog and API double counting** is common when two feeds each emit a create event with slightly different `_time`. If you have not `dedup`’d, daily arrival totals read roughly double and your rolling z-score baseline shifts, hiding a genuine post-change uptick. Confirm `| stats count by source, sourcetype` and tighten dedup keys only after you see real duplicates, not the day one analyst guessed wrong. **TA version shifts in severity spelling** (Title-Case `Critical` vs `critical`) or field names split historical summary indexes if you only fixed `lower()` in a new app version. A vertical line on the chart at the upgrade day is often a *measurement* change, not a real fault explosion—rebuild summary tables with a single normalisation pass. **Using `occur` like `count`**: one row with `occur=20` and one with `occur=1` are not the same operating story; a storm panel should `sum` `coalesce(occur,occurrences,1)` while a pure arrival-rate line uses deduped row counts, as here—mixing the two in one tile misleads leadership reviews. **Hour-bound API polls vs continuous syslog** can create a visible spike at the top of the hour on mixed feeds, even when the underlying hardware is steady; widen the span, split feeds, or move anomaly logic to a daily grain. **UCSM version reclassification** of some codes between `info` and `condition` can move events between the bottom stack bands across an upgrade, which looks like a policy shift, not a hardware trend. **Decommissioned assets** whose events still sit in 30 online days inflate `affected_dns` on code panels. Filter to an allow-list from inventory or cut off at decommission time. **Clock skew and duplicate `_time`** on separate forwarders can slip rows past `dedup` and exaggerate one-day buckets—investigate `(_indextime-_time)` before you chase hardware.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`, UCS Manager faults.\n• Ensure the following data sources are available: UCS Manager fault log, syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward UCS Manager faults via syslog or API polling. Categorize faults by severity and type. Track fault lifecycle (create, clear, acknowledge). Alert on critical/major fault count exceeding baseline by >50%. Report weekly on fault trends and resolution times.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:faults\"\n| timechart span=1h count by severity\n| fields _time critical major minor warning info\n```\n\nUnderstanding this SPL\n\n**Fault Trending by Severity (Cisco UCS)** — UCS fault trends reveal systemic hardware issues, environmental problems, or configuration problems across the compute fleet. Rising fault counts indicate deteriorating conditions requiring proactive attention.\n\nDocumented **Data sources**: UCS Manager fault log, syslog. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`, UCS Manager faults. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:faults. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:faults\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by severity** — ideal for trending and alerting on this use case.\n• Keeps or drops fields with `fields` to shape columns and size.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (fault trends by severity), Bar chart (top fault codes), Single value (open critical faults), Table (active faults detail).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS B200 M5/M6/M7, UCS C220 M5/M6/M7, UCS C240 M5/M6/M7, UCS C480 M5, UCS X210c M6/M7, UCS X410c M6, UCS 6324 FI, UCS 6332 FI, UCS 6454 FI, UCS 6536 FI",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We count how many new problem notices arrive each day from the server farm’s rack-and-blade manager, sorted by how serious each notice is, so the people running the data hall can see if trouble is piling up over weeks—not just what is still open at this hour. That steers long-range repair and upgrade plans so small recurring glitches are handled before they become a year of fire drills.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.5",
              "n": "FI Port Channel Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Fabric Interconnects are the network gateway for all UCS compute. Port-channel failures reduce bandwidth or cause complete loss of connectivity, impacting every workload in the UCS domain.",
              "t": "`Splunk_TA_cisco-ucs`, UCS Manager stats",
              "d": "UCS Manager FI port-channel statistics, FI syslog",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:fi_stats\"\n| search object_type=\"port_channel\"\n| eval member_pct=round((active_members/configured_members)*100, 0)\n| stats latest(oper_state) as status, latest(member_pct) as active_pct, latest(rx_bps) as rx_rate, latest(tx_bps) as tx_rate by fi_id, pc_id, pc_name\n| eval health=case(status!=\"up\", \"Down\", active_pct<100, \"Degraded\", 1==1, \"Healthy\")\n| table fi_id, pc_id, pc_name, status, active_pct, rx_rate, tx_rate, health",
              "m": "Monitor FI port-channel status every 30 seconds. Track member link count vs configured count. Alert on any port-channel with less than 100% members active. Monitor FI uplink utilization for capacity planning. Correlate FI events with server connectivity issues.",
              "z": "Status grid (port-channel health), Gauge (member active percentage), Timechart (utilization trending), Table (degraded port-channels).",
              "kfp": "Short **LACP renegotiation and rebalancing** after an authorised member add or removal — `active_pct` can dip for one or two polls while forwarding is intentionally de-preferenced on the Nexus side for hash stability, even though UCSM and Splunk both show a degraded mathematical count. **Spanning-tree convergence and BPDU-guard recovery** after an upstream Catalyst or Nexus supervisor reload can discard a VLAN for tens of seconds while the LACP bundle on the FI still reports a logically up aggregate — pair this UC with switch-layer evidence before you RMA the wrong line card. **vPC ISSU or a vPC peer-link maintenance** on the upstream pair can re-home a single member to the standby path while the bundle toward your FI still carries traffic; this is not a Splunk false positive, it is a reminder that Nexus health is not encoded in UCSM alone. **Administratively shutting a facing member** on the Nexus side during a cable pull or A/B test will trigger this alert, which is expected in the ticket but noisy in the inbox if your change-calendar lookup is missing Nexus node names or the change-record number. **Single-member LAGs in test or DR pods** are valid by policy (one NIC in service), so a `<100%` test is nonsensical there — either filter those `dn` values with a small allow-list, or treat the policy as a governance issue, not a monitoring bug. **Mistaking a Fibre Channel `fabric/san/…/pc-*` for an Ethernet uplink** because someone deleted the LAN regex from the search is a configuration error that the `match(dn, ...)` clause in the base SPL is specifically there to prevent. **FI line-card or supervisor restarts during a Cisco firmware bundle** can pause stats polls for longer than a Nexus-side maintenance window — a brief \"no rows in Splunk\" gap is not automatically a dark data centre if Nexus `ifOperStatus` and the UCSM GUI both stay green; require a second signal before paging on stats silence alone. **Clock skew** between UCSM, the forwarder, and the indexer can reorder clear and raise events in raw tables; for Sev1, compare `_time` to the UCSM local clock in the GUI footer.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`, UCS Manager stats.\n• Ensure the following data sources are available: UCS Manager FI port-channel statistics, FI syslog.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor FI port-channel status every 30 seconds. Track member link count vs configured count. Alert on any port-channel with less than 100% members active. Monitor FI uplink utilization for capacity planning. Correlate FI events with server connectivity issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:fi_stats\"\n| search object_type=\"port_channel\"\n| eval member_pct=round((active_members/configured_members)*100, 0)\n| stats latest(oper_state) as status, latest(member_pct) as active_pct, latest(rx_bps) as rx_rate, latest(tx_bps) as tx_rate by fi_id, pc_id, pc_name\n| eval health=case(status!=\"up\", \"Down\", active_pct<100, \"Degraded\", 1==1, \"Healthy\")\n| table fi_id, pc_id, pc_name, status, active_pct, rx_rate, tx_rate, health\n```\n\nUnderstanding this SPL\n\n**FI Port Channel Health** — Fabric Interconnects are the network gateway for all UCS compute. Port-channel failures reduce bandwidth or cause complete loss of connectivity, impacting every workload in the UCS domain.\n\nDocumented **Data sources**: UCS Manager FI port-channel statistics, FI syslog. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`, UCS Manager stats. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:fi_stats. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:fi_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **member_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by fi_id, pc_id, pc_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **FI Port Channel Health**): table fi_id, pc_id, pc_name, status, active_pct, rx_rate, tx_rate, health\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (port-channel health), Gauge (member active percentage), Timechart (utilization trending), Table (degraded port-channels).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS B200 M5/M6/M7, UCS C220 M5/M6/M7, UCS C240 M5/M6/M7, UCS C480 M5, UCS X210c M6/M7, UCS X410c M6, UCS 6324 FI, UCS 6332 FI, UCS 6454 FI, UCS 6536 FI",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch the link bundles on the two fabric switches that connect your server farm to the data-centre network. When a link goes missing in a bundle or a whole bundle fails, we raise it right away, because a bad uplink can cut the network for every server using that path at once.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "19.1.6",
              "n": "Power and Thermal Monitoring (Cisco UCS)",
              "c": "high",
              "f": "intermediate",
              "v": "UCS power and thermal data helps optimize data center capacity planning, detect cooling failures before overheating causes server throttling, and track energy efficiency metrics for sustainability reporting.",
              "t": "`Splunk_TA_cisco-ucs`, UCS Manager environmental",
              "d": "UCS Manager environmental statistics, power supply metrics",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:environmental\"\n| eval metric_type=case(like(stat_name, \"%power%\"), \"Power\", like(stat_name, \"%temp%\"), \"Temperature\", like(stat_name, \"%fan%\"), \"Fan\", 1==1, \"Other\")\n| stats avg(value) as avg_val, max(value) as max_val by chassis_id, metric_type, unit\n| eval status=case(\n    metric_type==\"Temperature\" AND max_val>75, \"Critical\",\n    metric_type==\"Temperature\" AND max_val>65, \"Warning\",\n    metric_type==\"Fan\" AND avg_val<2000, \"Warning\",\n    1==1, \"Normal\")\n| table chassis_id, metric_type, avg_val, max_val, unit, status",
              "m": "Collect UCS environmental data via API every 60 seconds. Track per-chassis power draw, inlet/outlet temperatures, and fan speeds. Set thermal thresholds based on vendor specs. Alert on overheating or fan failures. Report monthly power consumption for capacity and cost planning.",
              "z": "Gauge (temperature/power), Timechart (power and thermal trending), Heatmap (chassis thermal map), Single value (total power draw).",
              "kfp": "• A/B or dual-PDU hand-off in the same rack: a short AC sag during automatic transfer to the other feed can show a one-poll drop in input-side watts or a rail blip; correlate with the facility BMS or electrician ticket before you RMA a supply.\n• Hot-swap or FRU reseat: EEPROM discovery can read 0°C, 0 W, or 0 rpm while Cisco’s soak timer is still running—wait at least two full UCSM statistics poll intervals, then re-check the GUI and suppress during the tagged change window.\n• Fan tach pinned high during a Fabric Interconnect or IOM activate/firmware wave when no fault is open in the parallel fault stream—triage to the change calendar; treat as expected cooling margin unless `faultInst` shows a new thermal or power fault code.\n• Cluster leadership change or HA state move on the management plane: a single empty stats bucket for some MO classes is common; do not call a P1 on one quiet poll—compare to a `timechart count` and UCSM high-availability syslog if you have it.\n• NTP or service-processor clock out of step with UCSM after a maintenance window: the order of “clear” vs “raise” in raw event sort can look inverted even when hardware is fine; the latest-by-`dn` in this use case helps, but fix clocks for Sev1 trust.\n• PSU slot marked absent in inventory while the pair still provides redundant feed: watt counters can look normal while a slot flaps; confirm with a physical check before you assume the monitoring tool is wrong about power headroom to the room.\n• UCS X-series IFM or 65xx line-card cooling policy diverging from 2200 IOM/5108-era experience: a single global inlet threshold may be infeasible for a warm row—split extra rows in the lookup after 30 days of site baselines, not the first day of the pilot.\n• Long-uptime 32-bit-style counter semantics if someone accidentally mapped a cumulative energy field into an “instant kW” panel: a stair-step or apparent reset is often wrap, not a real ten-megawatt drop—validate the managed-object attribute in Cisco’s reference for that firmware and field name.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`, UCS Manager environmental.\n• Ensure the following data sources are available: UCS Manager environmental statistics, power supply metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect UCS environmental data via API every 60 seconds. Track per-chassis power draw, inlet/outlet temperatures, and fan speeds. Set thermal thresholds based on vendor specs. Alert on overheating or fan failures. Report monthly power consumption for capacity and cost planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:environmental\"\n| eval metric_type=case(like(stat_name, \"%power%\"), \"Power\", like(stat_name, \"%temp%\"), \"Temperature\", like(stat_name, \"%fan%\"), \"Fan\", 1==1, \"Other\")\n| stats avg(value) as avg_val, max(value) as max_val by chassis_id, metric_type, unit\n| eval status=case(\n    metric_type==\"Temperature\" AND max_val>75, \"Critical\",\n    metric_type==\"Temperature\" AND max_val>65, \"Warning\",\n    metric_type==\"Fan\" AND avg_val<2000, \"Warning\",\n    1==1, \"Normal\")\n| table chassis_id, metric_type, avg_val, max_val, unit, status\n```\n\nUnderstanding this SPL\n\n**Power and Thermal Monitoring (Cisco UCS)** — UCS power and thermal data helps optimize data center capacity planning, detect cooling failures before overheating causes server throttling, and track energy efficiency metrics for sustainability reporting.\n\nDocumented **Data sources**: UCS Manager environmental statistics, power supply metrics. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`, UCS Manager environmental. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:environmental. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:environmental\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **metric_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by chassis_id, metric_type, unit** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Power and Thermal Monitoring (Cisco UCS)**): table chassis_id, metric_type, avg_val, max_val, unit, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Gauge (temperature/power), Timechart (power and thermal trending), Heatmap (chassis thermal map), Single value (total power draw).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS B200 M5/M6/M7, UCS C220 M5/M6/M7, UCS C240 M5/M6/M7, UCS C480 M5, UCS X210c M6/M7, UCS X410c M6, UCS 6324 FI, UCS 6332 FI, UCS 6454 FI, UCS 6536 FI",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We track how much power your shared racks draw, how hot the air and key parts are, and whether fans spin in a safe range, using a rule table you can edit for your room. We want early notice when cooling margin shrinks or power nears a budget, before hardware slows to protect itself or a hall team arrives too late in the day.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.10",
              "n": "Blade Firmware Compliance (Cisco UCS)",
              "c": "high",
              "f": "intermediate",
              "v": "Per-blade firmware (BIOS, adapter, storage controller) vs approved bundles — complements fleet-wide inventory (UC-19.1.3) with **per-blade** tracking for change windows.",
              "t": "`Splunk_TA_cisco-ucs`, UCS Manager API",
              "d": "`cisco:ucs:inventory`, blade FRU",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:inventory\" object_type=\"blade\"\n| stats values(running_fw) as fw by server_dn, blade_id\n| lookup ucs_blade_fw_baseline.csv blade_model OUTPUT approved_fw\n| where fw!=approved_fw\n| table server_dn, blade_id, fw, approved_fw",
              "m": "Normalize firmware strings per Cisco bundle naming. Report exceptions before LCM updates.",
              "z": "Table (non-compliant blades), Bar chart (by chassis), Single value (non-compliant count).",
              "kfp": "• A blade in **decommissioned** or **removal** admin state can still return `firmwareRunning` rows for up to a full UCSM inventory day — looks like drift until the MO ages out. Cross-check with `computeBlade` power state before paging. • During BIOS/LCM, **two** `firmwareRunning` DNs (boot loader vs. running) can both be present; if **normalized** versions differ you get `MultipleVersionsActive` even though the window is healthy. • A **mezz VIC** often reports a **package stack** (loader + UEFI/option ROM) that reads like multiple “versions” for one install — compare only the lines your baseline names in `package_name`. • The UCSM string may show **4.2(3a)** while a spreadsheet was typed **4.2.3a**; the SPL normalises parentheses, but if one side is still a **marketing label** and the other is a long **build id**, you need both in `notes` until you align strings. • On **X210c / X410c** the TA may not expose the same I/O or GPU-adjacent MOs on **older 4.1.x TAs** as on 4.3.x, so a missing row is a **version gap** not proof of safety — upgrade the TA on the poller, then re-compare. • A permissive ad-hoc search that forgets `classId` can include **`firmwareUpdatable`** (candidate) instead of `firmwareRunning` (active) — the numbers look off until you lock the class. • UCSM refresh for inventory is on the order of **minutes**, not seconds; a just-activated bundle can still read old for **5–10 minutes** in Splunk. • An **emergency PSIRT** bundle was applied in the data hall before the **CSV owner** updated `ucs_blade_firmware_baseline.csv` — expect `Drifted` until the governance row catches up, even though risk accepted the new build.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`, UCS Manager API.\n• Ensure the following data sources are available: `cisco:ucs:inventory`, blade FRU.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize firmware strings per Cisco bundle naming. Report exceptions before LCM updates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:inventory\" object_type=\"blade\"\n| stats values(running_fw) as fw by server_dn, blade_id\n| lookup ucs_blade_fw_baseline.csv blade_model OUTPUT approved_fw\n| where fw!=approved_fw\n| table server_dn, blade_id, fw, approved_fw\n```\n\nUnderstanding this SPL\n\n**Blade Firmware Compliance (Cisco UCS)** — Per-blade firmware (BIOS, adapter, storage controller) vs approved bundles — complements fleet-wide inventory (UC-19.1.3) with **per-blade** tracking for change windows.\n\nDocumented **Data sources**: `cisco:ucs:inventory`, blade FRU. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`, UCS Manager API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by server_dn, blade_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where fw!=approved_fw` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Blade Firmware Compliance (Cisco UCS)**): table server_dn, blade_id, fw, approved_fw\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant blades), Bar chart (by chassis), Single value (non-compliant count).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS B-Series blades",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We line up the signed-off versions of every important chip and card inside each blade with what the domain actually has running, one layer at a time, so the people moving workloads during a change window can see the exact straggler and fix it in the right order without guessing from a single combined version string.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.11",
              "n": "Service Profile Association Failures (Cisco UCS)",
              "c": "critical",
              "f": "intermediate",
              "v": "Failed or stuck service profile associations block server bring-up and maintenance — complements compliance state (UC-19.1.2).",
              "t": "`Splunk_TA_cisco-ucs`",
              "d": "`cisco:ucs:config`, UCS faults",
              "q": "index=cisco_ucs (sourcetype=\"cisco:ucs:config\" OR sourcetype=\"cisco:ucs:faults\")\n| search assoc_state=\"failed\" OR match(lower(descr),\"(?i)association.*fail\")\n| stats count by sp_name, server_dn, descr\n| sort -count",
              "m": "Tune search to UCS fault codes for association. Alert on any failed association in production orgs.",
              "z": "Table (failed associations), Timeline, Single value (open failures).",
              "kfp": "• A documented **BIOS token** or **secure-boot** policy mismatch on an association attempt can raise a short-lived `failed` or `inaccessible` read even when the later retry path is the approved remediation—confirm against the FSM in UCSM and the service ticket before a second page.\n• An intentional **decommission** of a service profile can appear as a **failed** transition for one poll while UCSM tears storage and identities down in order; your change tool should list the `lsServer` as scheduled out, and the signal should be suppressed when the decommission is an approved, ticketed run.\n• A **boot-policy** or **template** push that is *expected* to fail in a disaster-recovery or clone lab (for example, NIC placement that is invalid outside production) is not the same as an unexpected production fault—separate DNs in your suppression lookup so DR exercises do not look like a Los Angeles data-center outage.\n• A central **LDAP** or **RBAC** outage on UCSM that blocks the audit or inventory modular input from completing gives you **zero rows** in Splunk, not a clean bill of health; treat sudden drops in `cisco:ucs:audit` volume as a data-fidelity incident before you believe an empty association panel.\n• Planned **migrate-pool** or large pool moves can show a burst of `unassociating` then `associating` events in audit; pair with a change calendar and a minimum **dwell** on `failed` in inventory before a page, the same way you would not page on a single `establishing` poll in UC-19.1.2.\n• A **faulty SFP** or **downstream link** that makes a server’s path **inaccessible** in inventory is a legitimate hardware and cabling problem, not an identity-policy error—route to the physical-link runbook, not the policy-runbook, when `cause` and the equipment fault stream already say optic or I/O.\n• After a **UCSM major upgrade**, the human-readable `operation` labels in the audit stream can shift while the underlying `assocState` enums stay stable; if your alert is too tightly regex-bound to a verb, widen to `res IN (...)` and the `lsServer` DN, then re-baseline the verb list in a test domain.\n• **Shorter audit retention** in Splunk than the window you use for `peer_fail_count` can make late-arriving `faultInst` events look like uncorrelated new failures; align retention, or narrow the `earliest` bound for both subsearches, before you accuse a template of a ghost blast radius.\n• **Clock skew** between Splunk `_time` and the UCSM event stamp can reorder lifecycle stages in a 60-second micro-window; a human triaging in the GUI is authoritative for ordering; fix NTP on management paths if this happens often enough to break automation.\n• **PowerTool** or **PowerShell** mass-updates to profiles can create a *correct* `peer_fail_count` spike for a `srcTemplName` that is also what your automation intended—add an automation-lookup key or maintenance tag so scripted waves do not page the same people twice.\n• **Deduplication gaps** in fault indexing (same `faultInst` re-indexed) can make Splunk’s joined `cause` look noisier than the Faults pane; use `dedup` or `stats latest` in a forked test search before you change hardware against a false duplicate row.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`.\n• Ensure the following data sources are available: `cisco:ucs:config`, UCS faults.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune search to UCS fault codes for association. Alert on any failed association in production orgs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs (sourcetype=\"cisco:ucs:config\" OR sourcetype=\"cisco:ucs:faults\")\n| search assoc_state=\"failed\" OR match(lower(descr),\"(?i)association.*fail\")\n| stats count by sp_name, server_dn, descr\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Service Profile Association Failures (Cisco UCS)** — Failed or stuck service profile associations block server bring-up and maintenance — complements compliance state (UC-19.1.2).\n\nDocumented **Data sources**: `cisco:ucs:config`, UCS faults. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:config, cisco:ucs:faults. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:config\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by sp_name, server_dn, descr** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed associations), Timeline, Single value (open failures).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch when the recipe that should land on a server during association or a big change fails mid-flight, and we name how many siblings share the same template so the right people fix a bad policy push versus one sick machine.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.12",
              "n": "Fault Suppression Policy Audit (Cisco UCS)",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks suppressed or acknowledged faults that may hide recurring hardware issues — governance for suppression rules.",
              "t": "`Splunk_TA_cisco-ucs`",
              "d": "`cisco:ucs:faults`",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:faults\"\n| where match(lower(lc),\"(?i)suppressed\") OR acked=\"yes\"\n| stats count by code, dn, user\n| where count>10\n| sort -count",
              "m": "Map `lc` and ack fields per UCSM version. Monthly review of chronic suppressions.",
              "z": "Table (top suppressed codes), Bar chart (by user), Line chart (suppression trend).",
              "kfp": "Genuine but benign patterns: a Cisco TAC-authorized RMA where pol/fault-suppression-task is used for the RMA window; a UCSM firmware or infrastructure bundle with a CAB slot tied to maintenanceMaintWindow while a DIMM or I/O code is acked only until parts arrive; a DR or business-continuity exercise with bulk ack of non-production FIs; a TAC data collection where an engineer suppresses faults under an explicit case ID your Cisco UCS operations lead can reference in the ticket system. The distinction from UC-19.1.14 (UCSM backup and mgmtBackup age) and UC-19.1.23 (Intersight aaa audit records) is that the noise here is usually an approved change window, so your defence is a maintained lookup and governance review, not only raising numeric thresholds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`.\n• Ensure the following data sources are available: `cisco:ucs:faults`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `lc` and ack fields per UCSM version. Monthly review of chronic suppressions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:faults\"\n| where match(lower(lc),\"(?i)suppressed\") OR acked=\"yes\"\n| stats count by code, dn, user\n| where count>10\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Fault Suppression Policy Audit (Cisco UCS)** — Tracks suppressed or acknowledged faults that may hide recurring hardware issues — governance for suppression rules.\n\nDocumented **Data sources**: `cisco:ucs:faults`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:faults. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:faults\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(lc),\"(?i)suppressed\") OR acked=\"yes\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by code, dn, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top suppressed codes), Bar chart (by user), Line chart (suppression trend).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We look for the same hardware trouble code being hushed or marked as handled over and over, and we match those server entries to a written record of which administrator did it, so a reviewer can see the whole story. We also skip the rows you have already tied to a planned maintenance pass so normal approved work is not called a surprise every week.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.13",
              "n": "FI Port Channel Member Errors and CRCs",
              "c": "critical",
              "f": "intermediate",
              "v": "Per-member link errors and CRCs on FI port-channels — augments aggregate PC status (UC-19.1.5).",
              "t": "`Splunk_TA_cisco-ucs`",
              "d": "`cisco:ucs:fi_stats` port members",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:fi_stats\" object_type=\"port\"\n| where crc_errors>0 OR link_state!=\"up\"\n| stats sum(crc_errors) as crc by fi_id, pc_id, port_id\n| where crc>0\n| sort -crc",
              "m": "Ingest per-member counters. Alert on CRC growth or member down inside operational PC.",
              "z": "Table (ports with CRCs), Heatmap (FI × port), Line chart (CRC rate).",
              "kfp": "• (a) Just after a link flap or UCSM clear counters, cumulative gauges reset: the streamstats if(curr>=prev,…) leg treats apparent negatives as a baseline reset, but a short suppression window (one to two minutes) after a documented change still prevents duplicate tickets while LACP converges.\n• (b) 40G/100G breakouts: per-aggr-port and per-eth-port stats can both exist; summing them naively in an ad-hoc report double-counts the same physical lane. Pick one counting plane in architecture governance before comparing sites.\n• (c) etherPauseStats on FCoE or PFC paths: PAUSE is flow-control, not a CRC. Pointing a CRC-oriented red tile at pause frames will page on normal storage bursts.\n• (d) SFP-25G optics with marginal receive power near −12 dBm can show a low, steady epmf that is still within an optical budget; use two to three time bins and vendor thresholds before an RMA.\n• (e) UCSM software activation that reboots IOM 2304 can reset uplink-related counters for many members in a five-minute window; align Splunk with the change record before blaming hardware.\n• (f) Cable swaps and LACP re-convergence during planned work can show transient pre-converge errors; require multiple bad bins and Nexus-side counter agreement before a cable RMA.\n• (g) Upstream Nexus 9000 with system MTU 1500 when storage paths expect 9216 jumbo to the FI can produce frameLong symptoms on bursty traffic; fix MTU on the switch, not the Splunk threshold alone.\n• (h) clear counters in UCSM CLI during maintenance makes Splunk see a fresh delta epoch; the streamstats reset leg handles the maths, but operators may misread the alert as a clean bill of health on the wire — confirm Nexus show counters the same hour.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`.\n• Ensure the following data sources are available: `cisco:ucs:fi_stats` port members.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest per-member counters. Alert on CRC growth or member down inside operational PC.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:fi_stats\" object_type=\"port\"\n| where crc_errors>0 OR link_state!=\"up\"\n| stats sum(crc_errors) as crc by fi_id, pc_id, port_id\n| where crc>0\n| sort -crc\n```\n\nUnderstanding this SPL\n\n**FI Port Channel Member Errors and CRCs** — Per-member link errors and CRCs on FI port-channels — augments aggregate PC status (UC-19.1.5).\n\nDocumented **Data sources**: `cisco:ucs:fi_stats` port members. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:fi_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:fi_stats\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where crc_errors>0 OR link_state!=\"up\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by fi_id, pc_id, port_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where crc>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (ports with CRCs), Heatmap (FI × port), Line chart (CRC rate).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS FI",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We count tiny data mistakes on each physical wire in the fat uplink bundles that feed your server racks, even when the whole bundle still looks fine on a simple health check, so the right people can reseat, replace, or re-route work before a slow fault in one leg turns into a full link drop and a big outage. That is a different job from only asking whether a bundle is up or not.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.14",
              "n": "UCS Manager Backup Validation",
              "c": "high",
              "f": "beginner",
              "v": "Confirms scheduled all/configuration backups completed and file size is within expected bounds.",
              "t": "`Splunk_TA_cisco-ucs`, backup scheduler logs",
              "d": "`cisco:ucs:backup`, syslog backup events",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:backup\" earliest=-7d\n| where status!=\"success\" OR backup_size_bytes < 1000000\n| stats latest(status) as st, latest(backup_size_bytes) as sz by ucsm_host\n| table ucsm_host, st, sz",
              "m": "Ingest backup job results from syslog or automation. Alert on failed job or zero-size artifact.",
              "z": "Table (backup status), Single value (failed jobs), Timeline.",
              "kfp": "• The **first week** after you cut over from “syslog only” to HEC with `cisco:ucs:mgmtbackup` often shows **no `pct_of_baseline` row** because `baseline_median_mb` is still null: expect INFO-only rows; finish four weeks of stable exports before you bind paging to `pct_of_baseline` math.\n• A TAC or support bundle export that the operator reuses the same `remoteFile` name for can make one interval look like a size spike: correlate with a ticket id in your change system and, if the export type toggled from logical-configuration to all-configuration on purpose, add a new lookup row, not a sev-1 on hardware.\n• **UCSM cluster failover** in the same minute as a poll can yield one stale `lastBackup` read: wait a second poll, compare both fabric interconnect management paths, and do not reimage an FI off one lagging timestamp.\n• **Daylight-saving clock boundaries** in the `lastBackup` string versus `now()` on the search head: if you see 55–70 minute `miss_ratio` in March or November, confirm `TZ=America/` settings in `props.conf` and the forwarder, not a “missed” policy.\n• A policy **intentionally admin-disabled** during a wide change (maintenance) shows up as a huge `last_backup_age_hours` even though the business approved no exports: you must add `(ucsm_domain,policy_name)` to `ucs_suppressed_backup_policies.csv` (optional companion lookup) for that window, not silence the entire UC.\n• NFS or SMB replication to a DR filer lags: Splunk on the “primary” export host can show a file tens of minutes smaller than the real DR copy. Move the `backup:filesystem` forwarder to the filer that is the contractual restore point, or add a second row keyed on `remote_key_dr` in the join, not a blind threshold cut.\n• Gzip or other wrapper layers that append `.gz` and change size month to month: your baseline median is flat wrong unless you key `remote_key` to the final filename actually listed in `mgmtBackup.remoteFile` after compression.\n• **Cisco Intersight** co-management where UCSM is still the trigger but Central schedules the work: a quiet UCSM row plus noisy Central is not a “green UCSM” story—verify Central's own job state before you close the book on DR readiness.\n• A UCSM read-only API user with rights to `mgmtBackup` but not to a specific org: you get a consistent empty poll with HTTP 200 and JSON errors inside—Splunk will show *no* `mgmtBackup` event, a data quality incident, not a zero-size file.\n• **Test domains** with weekly schedules that skip five weeks on purpose: mark them as non-prod in `ucs_backup_baseline` with a very large `expected_interval_hours` to avoid comparing lab cadence to production SLAs; this is different from a single-fabric A/B vNIC false positive in UC-19.1.36 and different from a port-channel error ramp in UC-19.1.13.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`, backup scheduler logs.\n• Ensure the following data sources are available: `cisco:ucs:backup`, syslog backup events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest backup job results from syslog or automation. Alert on failed job or zero-size artifact.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:backup\" earliest=-7d\n| where status!=\"success\" OR backup_size_bytes < 1000000\n| stats latest(status) as st, latest(backup_size_bytes) as sz by ucsm_host\n| table ucsm_host, st, sz\n```\n\nUnderstanding this SPL\n\n**UCS Manager Backup Validation** — Confirms scheduled all/configuration backups completed and file size is within expected bounds.\n\nDocumented **Data sources**: `cisco:ucs:backup`, syslog backup events. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`, backup scheduler logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:backup. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:backup\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status!=\"success\" OR backup_size_bytes < 1000000` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ucsm_host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **UCS Manager Backup Validation**): table ucsm_host, st, sz\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (backup status), Single value (failed jobs), Timeline.",
              "script": "",
              "premium": "",
              "hw": "Cisco UCSM",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We check that the master copy of your data hall switch-and-server settings is being saved on time, and that each saved file is big enough to be a real full copy, not a blank stub. If a save is late, missing, or the wrong size compared to what you usually get, we let the people who run that hall know before a bad day becomes a long rebuild of everything by hand.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.15",
              "n": "Chassis PSU Redundancy",
              "c": "critical",
              "f": "beginner",
              "v": "Detects loss of N+1 PSU redundancy in chassis before full power loss.",
              "t": "`Splunk_TA_cisco-ucs`",
              "d": "`cisco:ucs:environmental`, PSU inventory",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:environmental\" metric_type=\"psu\"\n| where oper_state!=\"ok\" OR redundancy_state!=\"redundant\"\n| stats count by chassis_id, psu_slot, oper_state, redundancy_state",
              "m": "Map PSU fields from API. Page on non-redundant state.",
              "z": "Status grid (chassis × PSU), Table (alerts), Single value (chassis without redundancy).",
              "kfp": "• PSU RMA with an intentional 5–30 minute `removed` or `inoperable` per-slot state while a spare is in transit: page after dwell time, not the first single poll, and add a PagerDuty snooze for the ticketed FRU id.\n• Intentional single-feed or single-side work on a dual-PDU or grid A/B test where the chassis legitimately downgrades from `grid-redundant` to `n+1` for an hour: align with the electrical change calendar and a suppress token by `chassis_dn` when the BMS and UCSM both expect the work.\n• Older 5108 chassis that mix `UCSB-PSU-2500ACDV` and `UCSB-PSU-2500ACDV2` in one chassis after a field swap: the live `psuRedundancy` read can flicker for one to two poll intervals until both PSUs re-handshake and watt sharing stabilises; require two identical failing bins before a Sev-1 for flicker only.\n• Standalone C-series images without a `sys/chassis-…` DNs or mis-scoped to the wrong org: if your index is broad, a rack-only `equipmentPsu` can appear; exclude by a tag that proves UCSM service-profile registration or drop events without `sys/chassis-` in the `dn` path.\n• UCSM maintenance where `operState` briefly shows `accessibility-problem` for a single stats cycle during fabric or IOM reseat while no FRU is actually removed: correlate with a parallel `cisco:ucs:faults` `faultInst` and require two consecutive bad bins in this UC.\n• Firmware waves that reboot one PSU at a time for vendor soak rules: a planned momentary `inoperable` is expected; gate with the firmware change record and a windowed suppression, not a permanent threshold cut.\n• UCS X9508 and later X-chassis with different PSU counts, IFM, and 6+ PSU layouts than 5108: the default lookup row may label `n+1` while your contract is actually `n+n` full doubling; if so, annotate per-`chassis_model` in the lookup so `redundancy_delta` does not over-page when the site’s practice is to accept one off-line PSU without paging.\n• Clock or poll skew that lands inventory a cycle ahead of `equipmentChassisStats`: a false gap between `psu_operational_count` and the live `psuRedundancy` string; implement the two-poll hold and compare `_time` spread before RMA on transient mismatches only.\n• Laboratory chassis left intentionally on a single feed for current-limited hall testing: mark those DNs in a `suppress_chassis` lookup so operations does not get weekend noise unrelated to production posture.\n• Sites that have already decommissioned a blade chassis but the UCSM org still has stale `equipmentChassis` rows after incomplete cleanup: zero PSUs in inventory with a phantom chassis row can look worse than real life; CMDB decommission the MO before you tune severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`.\n• Ensure the following data sources are available: `cisco:ucs:environmental`, PSU inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap PSU fields from API. Page on non-redundant state.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:environmental\" metric_type=\"psu\"\n| where oper_state!=\"ok\" OR redundancy_state!=\"redundant\"\n| stats count by chassis_id, psu_slot, oper_state, redundancy_state\n```\n\nUnderstanding this SPL\n\n**Chassis PSU Redundancy** — Detects loss of N+1 PSU redundancy in chassis before full power loss.\n\nDocumented **Data sources**: `cisco:ucs:environmental`, PSU inventory. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:environmental. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:environmental\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where oper_state!=\"ok\" OR redundancy_state!=\"redundant\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by chassis_id, psu_slot, oper_state, redundancy_state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (chassis × PSU), Table (alerts), Single value (chassis without redundancy).",
              "script": "",
              "premium": "",
              "hw": "UCS chassis",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We tell you when a blade chassis is down to one healthy power path even though the servers are still on—before the next fault turns the whole chassis dark. That is a different job than tracking how hot the air is, which a neighbour use case does with graphs.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.16",
              "n": "IOM Uplink Utilization",
              "c": "high",
              "f": "beginner",
              "v": "IOM-to-FI uplink saturation causes east-west congestion for blade traffic.",
              "t": "`Splunk_TA_cisco-ucs`",
              "d": "`cisco:ucs:iom_stats`",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:iom_stats\" earliest=-1h\n| eval util_pct=round((rx_bps+tx_bps)*8/link_speed_bps*100,1)\n| where util_pct>70\n| stats max(util_pct) as peak_util by chassis_id, iom_slot, port\n| sort -peak_util",
              "m": "Poll IOM port counters. Alert at 70%/85%. Plan additional uplinks or rebalance servers.",
              "z": "Heatmap (IOM × port util), Table (hot uplinks), Line chart (trend).",
              "kfp": "Short-lived 100% during PXE or kickstart for a planned bare-metal build if that chassis is the install target. vMotion and live-migration surges when DRS rebalances a cluster. Rolling UCSM or IOM firmware where one fabric reloads and traffic pins to the surviving path so utilisation is high by design. Deliberate single-fabric maintenance where all vNICs fail over to the other fabric and that fabric runs near ceiling until the window closes. Veeam, Commvault, or Avamar backup and replication windows that max egress to IP storage. End-of-quarter report runs on large databases. Planned Nexus vPC or Direct Connect lab failover where saturation is the test objective. vDC or workload evacuation jobs that pack east-west on one pair of uplinks. Short Hadoop or Spark shuffle phases you already accept. Chassis you intentionally run hot for NetApp NFS or Pure Block patterns with written SLO approval.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`.\n• Ensure the following data sources are available: `cisco:ucs:iom_stats`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll IOM port counters. Alert at 70%/85%. Plan additional uplinks or rebalance servers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:iom_stats\" earliest=-1h\n| eval util_pct=round((rx_bps+tx_bps)*8/link_speed_bps*100,1)\n| where util_pct>70\n| stats max(util_pct) as peak_util by chassis_id, iom_slot, port\n| sort -peak_util\n```\n\nUnderstanding this SPL\n\n**IOM Uplink Utilization** — IOM-to-FI uplink saturation causes east-west congestion for blade traffic.\n\nDocumented **Data sources**: `cisco:ucs:iom_stats`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:iom_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:iom_stats\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where util_pct>70` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by chassis_id, iom_slot, port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (IOM × port util), Table (hot uplinks), Line chart (trend).",
              "script": "",
              "premium": "",
              "hw": "UCS IOM modules",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch the small switch card in each server chassis that funnels all blade traffic up to the big fabric switches. If that pipe gets crowded, every virtual machine in the box can slow down at once, even when other network checks look fine. We help you see the crowded pipes early so you can add capacity or move workloads before people call the help desk.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.17",
              "n": "BIOS Policy Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Verifies server BIOS settings match service profile BIOS policy (VT-x, power management, boot mode).",
              "t": "`Splunk_TA_cisco-ucs`",
              "d": "`cisco:ucs:bios`, service profile",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:bios\"\n| lookup ucs_bios_policy.csv sp_name OUTPUT require_vt, require_boot_mode\n| where vt_enabled!=require_vt OR boot_mode!=require_boot_mode\n| table server_dn, sp_name, vt_enabled, boot_mode, require_vt, require_boot_mode",
              "m": "Extract BIOS tokens from inventory poll. Reconcile with expected policy per SP.",
              "z": "Table (non-compliant servers), Pie chart (compliance %), Bar chart (by org).",
              "kfp": "A vendor BIOS bundle that changes factory defaults for a token can look like fleet-wide drift until your baseline CSV catches up with the new approved string, even if UCSM and every profile are healthy. A sustainability programme that flips Power profile to a savings class for a month will trip performance rules until the campaign row is in the lookup or a temporary tier is in place. Hyper-Threading is intentionally `disabled` on a handful of HFT- or PROD-DB- service profiles: encode those names in a tier map, not a single global enabled row. Sub-NUMA tuning diverges between a latency cohort and a general virtualisation pool; a one-line global expectation will misfire until you add `sp_tier` rows. AES-NI can read `disabled` for one poll right after a panicked CIMC or rollback event that was not a policy decision—treat as incident-correlated, not a steady finding. TPM `vpEnable` can lag a physical TPM install until the next inventory cycle—dedupe to `server_mo`+`token` with a 12–24h dwell before paging. A rebind to a new template that inherits a different child BIOS policy can show `pending-reboot` while the effective token is still the old one on metal—pair with the association UC, not a blind BIOS alert. If two BIOS policies in the same hierarchy disagree and the child wins, the child value is correct on the wire; a CSV line that only knows the parent will read as drift until the governance table matches the effective policy.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`.\n• Ensure the following data sources are available: `cisco:ucs:bios`, service profile.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExtract BIOS tokens from inventory poll. Reconcile with expected policy per SP.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:bios\"\n| lookup ucs_bios_policy.csv sp_name OUTPUT require_vt, require_boot_mode\n| where vt_enabled!=require_vt OR boot_mode!=require_boot_mode\n| table server_dn, sp_name, vt_enabled, boot_mode, require_vt, require_boot_mode\n```\n\nUnderstanding this SPL\n\n**BIOS Policy Compliance** — Verifies server BIOS settings match service profile BIOS policy (VT-x, power management, boot mode).\n\nDocumented **Data sources**: `cisco:ucs:bios`, service profile. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:bios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:bios\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where vt_enabled!=require_vt OR boot_mode!=require_boot_mode` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **BIOS Policy Compliance**): table server_dn, sp_name, vt_enabled, boot_mode, require_vt, require_boot_mode\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant servers), Pie chart (compliance %), Bar chart (by org).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS servers",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch the real low-level power, boot, and security switches on each server and compare the live set to the rules your teams wrote, even when a machine already passes a higher-level applied profile check. We send a short, clear list to the right owner without mixing this with checks for main firmware files or add-in network card code.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.18",
              "n": "UCS Central Registration Health",
              "c": "medium",
              "f": "beginner",
              "v": "Monitors UCS domain registration and heartbeat to UCS Central for multi-domain governance.",
              "t": "`Splunk_TA_cisco-ucs`, UCS Central syslog",
              "d": "`cisco:ucs_central:domain`",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs_central:domain\" earliest=-24h\n| where registration_state!=\"registered\" OR last_heartbeat_age_sec>300\n| stats latest(registration_state) as reg, max(last_heartbeat_age_sec) as age by domain_name\n| table domain_name, reg, age",
              "m": "Ingest domain inventory from Central API. Alert when heartbeat stale or domain unregistered.",
              "z": "Table (domain status), Single value (stale domains), Map (site).",
              "kfp": "A planned **UCS Central HA** failover (active/standby) can move sessions and produce short `regState` transitions or heartbeat gaps; page only if the condition persists past your HA completion window. **Certificate trust-store rotation** for Central↔UCSM often causes 5–15 minutes of `inProgress` or `failed` states that self-heal when new trust material is in place—use your PKI change ticket to suppress. A chassis or full FI power cycle in a domain during maintenance can pause heartbeats even though the eventual state is fine; join to your hardware change schedule. NTP skew between Central and a UCSM domain can make `heartbeat_age_seconds` look worse than reality—check `ntpd` or `chrony` on both ends before a sev-1. A management VLAN MTU or ACL change can delay XML replies without changing `regState` until timeouts accumulate. Decommission flows that intentionally `unregister` a domain are expected: mark those domains in `ucs_central_domain_owners.csv` with an end-of-life date or route them to a `DECOM` service line so the alert does not fire forever. Capacity-only work that never touches management connectivity should not need this control—if it fires, the event is still often real, but your storage network team is the wrong first call: validate management reachability first.",
              "refs": "[Cisco Intersight Add-on for Splunk](https://splunkbase.splunk.com/app/7828)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`, UCS Central syslog.\n• Ensure the following data sources are available: `cisco:ucs_central:domain`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest domain inventory from Central API. Alert when heartbeat stale or domain unregistered.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs_central:domain\" earliest=-24h\n| where registration_state!=\"registered\" OR last_heartbeat_age_sec>300\n| stats latest(registration_state) as reg, max(last_heartbeat_age_sec) as age by domain_name\n| table domain_name, reg, age\n```\n\nUnderstanding this SPL\n\n**UCS Central Registration Health** — Monitors UCS domain registration and heartbeat to UCS Central for multi-domain governance.\n\nDocumented **Data sources**: `cisco:ucs_central:domain`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`, UCS Central syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs_central:domain. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs_central:domain\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where registration_state!=\"registered\" OR last_heartbeat_age_sec>300` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by domain_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **UCS Central Registration Health**): table domain_name, reg, age\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (domain status), Single value (stale domains), Map (site).",
              "script": "",
              "premium": "",
              "hw": "UCS Central, UCSM domains",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We check that each site your central place still shows on the list and has checked in lately. If one drifts or falls off, shared rules and updates can stop partway, and the two sides can disagree after trust work. We help you see that before people chase a change that never fully landed.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "m365",
                "syslog"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.19",
              "n": "Intersight Server Alarm Monitoring",
              "c": "critical",
              "f": "beginner",
              "v": "Intersight centralises alarms across all managed UCS domains, IMM and classic. Monitoring alarm severity trends in Splunk enables faster triage and correlation with application-layer events that Intersight alone cannot see.",
              "t": "`Cisco Intersight Add-on`",
              "d": "`cisco:intersight:alarms`",
              "q": "index=cisco_intersight sourcetype=\"cisco:intersight:alarms\" earliest=-24h\n| stats count by severity, affected_object_type, name\n| sort -count\n| where severity IN (\"Critical\",\"Warning\")",
              "m": "Configure the Intersight Add-on with API key credentials. Schedule alarm collection every 5 minutes. Alert on critical alarms or sustained warning counts exceeding baseline.",
              "z": "Table (alarms by severity), Bar chart (alarm count by object type), Single value (open criticals).",
              "kfp": "• Intersight platform releases occasionally reclassify a given `Code` between **Critical** and **Warning**; a historical Splunk month may show a severity migration that the current Intersight UI no longer uses as the live label, so your paging rule should always consider `Code` plus `AffectedMoType`, not a bare severity string, before you rewrite an RCA.\n• Bulk firmware, driver, or service-profile automation launched from Intersight produces a **predictable** multi-minute alarm band that is expected while Cisco soak timers and verification tasks run; correlate the window with the Intersight **audit** stream before a sev-1.\n• Cloud-connector reconnection or proxy healing after a brownout can **batch** timestamps so Splunk’s `_time` cluster looks spiky; compare `_time` to the connector’s own health panel or `cisco:intersight:audit_logs` for disconnect rows.\n• HyperFlex- or `storage.HyperFlexCluster` style `AffectedMoType` labels are real infrastructure alarms that may be **routed** to a different queue than B-series racking; a compute-only SRE filter is policy, not a data defect.\n• An acknowledged row whose `LastTransitionTime` (or a timestamp field the TA includes) **ticks** without a human-recognisable state change can re-float to the top of a “newest first” `table` while still being owned in Intersight; do not re-page without checking `Acknowledge` semantics in your build.\n• `MoId` that falls out of the latest `cisco:intersight:inventory` snapshot after a decommission still has a true alarm in `cisco:intersight:alarms`; the “missing name” in a lookup join is an inventory-lag false negative in the human column, not a fake alarm.\n• Trial or **demo** Intersight orgs that share a Splunk stack with production can add charts that have no real asset; never mix org keys into a production index without an explicit boundary.\n• `_time` in Splunk should reflect Intersight’s alarm time; if a manual `props` change mis-parses JSON, a morning “spike” can be a **clock** artefact—compare the raw `CreationTime` field (if present) in one event to the parsed `_time`.\n• Intersight’s product **UI** alarm retention can be **shorter** than Splunk’s index retention, so a quarter-old report in Splunk may include alarms a console filter no longer shows; document for auditors that the indexed export is the long-lived evidence store.\n• Intersight can emit **Info**-class threshold chatter from metric policies in some deployments; if your key normalises that into a surprising token, the severity filter in this `spl` is a front-line guard—tighten `props` if needed rather than blinding the feed.\n• A single alarm row **re-fetched** after a TA restart can double-count in a `stats` without a dedup, while a raw `table` of events shows both lines; the blast columns in this `spl` use 15m bucketing, but a monthly KPI may still need a **dedup** policy your governance approves.\n• `RegisteredDevice` null after an un-claim in Intersight is an inventory transition signal; triage to CMDB and claim workflow, not to “Splunk is wrong.”",
              "refs": "[CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Intersight Add-on`.\n• Ensure the following data sources are available: `cisco:intersight:alarms`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure the Intersight Add-on with API key credentials. Schedule alarm collection every 5 minutes. Alert on critical alarms or sustained warning counts exceeding baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_intersight sourcetype=\"cisco:intersight:alarms\" earliest=-24h\n| stats count by severity, affected_object_type, name\n| sort -count\n| where severity IN (\"Critical\",\"Warning\")\n```\n\nUnderstanding this SPL\n\n**Intersight Server Alarm Monitoring** — Intersight centralises alarms across all managed UCS domains, IMM and classic. Monitoring alarm severity trends in Splunk enables faster triage and correlation with application-layer events that Intersight alone cannot see.\n\nDocumented **Data sources**: `cisco:intersight:alarms`. **App/TA** (typical add-on context): `Cisco Intersight Add-on`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_intersight; **sourcetype**: cisco:intersight:alarms. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_intersight, sourcetype=\"cisco:intersight:alarms\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by severity, affected_object_type, name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Filters the current rows with `where severity IN (\"Critical\",\"Warning\")` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Intersight Server Alarm Monitoring** — Intersight centralises alarms across all managed UCS domains, IMM and classic. Monitoring alarm severity trends in Splunk enables faster triage and correlation with application-layer events that Intersight alone cannot see.\n\nDocumented **Data sources**: `cisco:intersight:alarms`. **App/TA** (typical add-on context): `Cisco Intersight Add-on`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (alarms by severity), Bar chart (alarm count by object type), Single value (open criticals).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS B200, C220, C240, C480, X210c, X410c, FI 6454, FI 6536",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "Intersight keeps one shared alarm list in the cloud for every data centre and domain you connect, so you can see one noisy site or a wide pattern in the same place before a fault spreads.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_intersight"
              ],
              "ta_link": {
                "name": "Cisco Intersight Add-on for Splunk",
                "id": 7828,
                "url": "https://splunkbase.splunk.com/app/7828"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.20",
              "n": "Intersight Firmware Compliance",
              "c": "high",
              "f": "beginner",
              "v": "Intersight tracks firmware versions against policies for all managed endpoints. Surfacing non-compliant servers in Splunk alongside vulnerability data and change windows ensures patching cadence is maintained fleet-wide.",
              "t": "`Cisco Intersight Add-on`",
              "d": "`cisco:intersight:compute`",
              "q": "index=cisco_intersight sourcetype=\"cisco:intersight:compute\" object_type=\"firmware.RunningFirmware\"\n| stats latest(version) as fw_version by server_name, component, model\n| lookup intersight_approved_firmware model OUTPUT approved_version\n| where fw_version!=approved_version\n| table server_name, model, component, fw_version, approved_version",
              "m": "Ingest inventory data from Intersight. Maintain a lookup of approved firmware per model. Report weekly on compliance percentage. Alert on critical components (CIMC, BIOS) running non-approved versions.",
              "z": "Table (non-compliant servers), Pie chart (compliant vs non-compliant), Single value (% fleet compliant).",
              "kfp": "• RunningFirmware rows may lag a completed upgrade if the next poll has not run—expect a 4–6 hour window after a change before drift clears, so pair alerts with cisco:intersight:firmwareupgradestatus in dashboards while treating single-shot alerts as “still converging”.\n• Model strings that differ only by marketing suffix (UCSB-B200-M6-U vs UCSB-B200-M6) can miss a lookup join until you normalise or duplicate rows; fix the lookup, not the server.\n• Some components list Version as a bundle string while the lookup stores a shorter approved token from an older runbook; align both sides to the exact Intersight output after each Cisco release, or the eval marks drift on formatting alone.\n• Lab or spare blades that are intentionally on a retired image for triage of old bugs will false-positive against a production strict CSV until you add a second lookup table tagged nonprod with relaxed rows.\n• FI and IOM rows sometimes share a Server relationship pattern that is null while RegisteredDevice is populated; coalesce paths that skip your join are not “green” hardware—they need props tuning or a follow-up search.\n• cve_count is operator-maintained; if the cell is empty, coalesce in SPL treats the row as low severity, which can under-page real PSIRT work—treat null cve_count as a content defect, not a clean bill of health.\n• HCL and RunningFirmware can disagree for a short window when Cisco reclassifies support; cross-check hclstatus before you emergency-patch healthy hosts.\n• Hot-spare parts swapped from stock may show a newer firmware than your golden label; decide policy: either widen approved_version or track those serials in an exception file.\n• When Intersight reports a backup or secondary image mode during failover tests, the SPL filters to mode==running; do not expect alerts from passive banks unless you add a second search for that purpose.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Intersight Add-on`.\n• Ensure the following data sources are available: `cisco:intersight:compute`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest inventory data from Intersight. Maintain a lookup of approved firmware per model. Report weekly on compliance percentage. Alert on critical components (CIMC, BIOS) running non-approved versions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_intersight sourcetype=\"cisco:intersight:compute\" object_type=\"firmware.RunningFirmware\"\n| stats latest(version) as fw_version by server_name, component, model\n| lookup intersight_approved_firmware model OUTPUT approved_version\n| where fw_version!=approved_version\n| table server_name, model, component, fw_version, approved_version\n```\n\nUnderstanding this SPL\n\n**Intersight Firmware Compliance** — Intersight tracks firmware versions against policies for all managed endpoints. Surfacing non-compliant servers in Splunk alongside vulnerability data and change windows ensures patching cadence is maintained fleet-wide.\n\nDocumented **Data sources**: `cisco:intersight:compute`. **App/TA** (typical add-on context): `Cisco Intersight Add-on`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_intersight; **sourcetype**: cisco:intersight:compute. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_intersight, sourcetype=\"cisco:intersight:compute\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by server_name, component, model** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where fw_version!=approved_version` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Intersight Firmware Compliance**): table server_name, model, component, fw_version, approved_version\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant servers), Pie chart (compliant vs non-compliant), Single value (% fleet compliant).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS B-Series, C-Series, X-Series, FI 6300/6400/6500",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We check that the computers and network switches in your data centres are running the exact software versions your team has approved, so small mismatches are caught before they grow into real outages, surprise audit notes, or avoidable security gaps.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_intersight"
              ],
              "ta_link": {
                "name": "Cisco Intersight Add-on for Splunk",
                "id": 7828,
                "url": "https://splunkbase.splunk.com/app/7828"
              },
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.21",
              "n": "Intersight HCL Compliance Status",
              "c": "medium",
              "f": "beginner",
              "v": "Hardware Compatibility List compliance ensures OS, driver, and firmware combinations are Cisco-validated. Non-HCL configurations risk unpredictable failures and void support entitlements.",
              "t": "`Cisco Intersight Add-on`",
              "d": "`cisco:intersight:compute` (HCL status fields)",
              "q": "index=cisco_intersight sourcetype=\"cisco:intersight:compute\" object_type=\"cond.HclStatus\"\n| stats latest(status) as hcl_status latest(reason) as reason by server_name, model\n| where hcl_status!=\"Validated\"\n| table server_name, model, hcl_status, reason",
              "m": "Poll Intersight HCL status via the add-on. Alert when servers move out of Validated status. Correlate with planned OS or driver upgrades.",
              "z": "Table (non-validated servers), Pie chart (HCL status distribution), Single value (% validated).",
              "kfp": "A lab or pre-production cluster that deliberately runs a driver or OS ahead of the published matrix will show `NotListed` or `NotValidated` even when your team is qualifying it under change control, so route those serials with a `nonprod` or `hcl_waiver` column in the lookup rather than sev-1 pages. The first day after a bare-metal or profile rebuild often shows `Incomplete` while UCS Tools and guest telemetry back-fill—suppress short windows, not the signal forever. A transient `ServiceUnavailable` during Intersight or HCL back-end maintenance can flip many servers in one poll; treat multi-host correlation as infrastructure noise unless it persists. OEM network or FC HBAs and third-party NVMe-oF adapters may sit outside Cisco’s HCL for the exact firmware you chose even when the team considers them “certified in practice.” Planned PSIRT remediations that temporarily park firmware on a non-recommended build may trip `NotValidated` until the matrix catches up—pair with a ticket id in the lookup. Cross-vendor qualification clusters for ISVs will intentionally violate HCL; track them in a `business_service` that routes to an informational only queue.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Intersight Add-on`.\n• Ensure the following data sources are available: `cisco:intersight:compute` (HCL status fields).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Intersight HCL status via the add-on. Alert when servers move out of Validated status. Correlate with planned OS or driver upgrades.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_intersight sourcetype=\"cisco:intersight:compute\" object_type=\"cond.HclStatus\"\n| stats latest(status) as hcl_status latest(reason) as reason by server_name, model\n| where hcl_status!=\"Validated\"\n| table server_name, model, hcl_status, reason\n```\n\nUnderstanding this SPL\n\n**Intersight HCL Compliance Status** — Hardware Compatibility List compliance ensures OS, driver, and firmware combinations are Cisco-validated. Non-HCL configurations risk unpredictable failures and void support entitlements.\n\nDocumented **Data sources**: `cisco:intersight:compute` (HCL status fields). **App/TA** (typical add-on context): `Cisco Intersight Add-on`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_intersight; **sourcetype**: cisco:intersight:compute. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_intersight, sourcetype=\"cisco:intersight:compute\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by server_name, model** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where hcl_status!=\"Validated\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Intersight HCL Compliance Status**): table server_name, model, hcl_status, reason\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-validated servers), Pie chart (HCL status distribution), Single value (% validated).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS B-Series, C-Series, X-Series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We check which operating system, driver, and firmware mix each physical server is allowed to run, using the same management service you already trust. When the mix is not on the approved list, we flag it from most to least risky so the right team fixes it before odd storage or network behaviour shows up in production.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_intersight"
              ],
              "ta_link": {
                "name": "Cisco Intersight Add-on for Splunk",
                "id": 7828,
                "url": "https://splunkbase.splunk.com/app/7828"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.22",
              "n": "Intersight Server Power and Thermal Telemetry",
              "c": "medium",
              "f": "intermediate",
              "v": "Power draw and thermal readings across the fleet feed capacity planning, detect cooling anomalies, and correlate with workload spikes. Trending helps predict rack-level power budget exhaustion.",
              "t": "`Cisco Intersight Add-on`",
              "d": "`cisco:intersight:metrics` (power, temperature)",
              "q": "index=cisco_intersight sourcetype=\"cisco:intersight:metrics\" metric_name IN (\"power_draw_watts\",\"inlet_temp_celsius\",\"exhaust_temp_celsius\")\n| timechart span=1h avg(metric_value) as avg_val by server_name, metric_name\n| where avg_val > case(metric_name=\"inlet_temp_celsius\", 28, metric_name=\"exhaust_temp_celsius\", 45, metric_name=\"power_draw_watts\", 800)",
              "m": "Enable metric collection in the Intersight Add-on. Set thresholds per server model. Alert on thermal exceedances or power draw approaching PDU circuit limits.",
              "z": "Line chart (power/thermal over time), Heatmap (server x temperature), Single value (peak power draw).",
              "kfp": "HVAC scheduled setback or economizer handover that lifts inlet a few degrees for a planned window before returning — compare BMS tickets, not only Splunk. Phase rebalance on a PDU during a datacenter audit can move a large share of kW to another leg and trip the 85% warning on one chassis_key with healthy servers — check the electrical one-line, not a hardware RMA. Intersight telemetry stream gaps while IMC or BMC firmware is staged during a rolling refresh produce empty buckets; gate pages on at least two bad bins. Short cold-aisle door or containment breach from cleaning crews can raise inlet for tens of minutes without a part fault; add a facilities timestamp before you wake hardware engineering.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Intersight Add-on`.\n• Ensure the following data sources are available: `cisco:intersight:metrics` (power, temperature).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnable metric collection in the Intersight Add-on. Set thresholds per server model. Alert on thermal exceedances or power draw approaching PDU circuit limits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_intersight sourcetype=\"cisco:intersight:metrics\" metric_name IN (\"power_draw_watts\",\"inlet_temp_celsius\",\"exhaust_temp_celsius\")\n| timechart span=1h avg(metric_value) as avg_val by server_name, metric_name\n| where avg_val > case(metric_name=\"inlet_temp_celsius\", 28, metric_name=\"exhaust_temp_celsius\", 45, metric_name=\"power_draw_watts\", 800)\n```\n\nUnderstanding this SPL\n\n**Intersight Server Power and Thermal Telemetry** — Power draw and thermal readings across the fleet feed capacity planning, detect cooling anomalies, and correlate with workload spikes. Trending helps predict rack-level power budget exhaustion.\n\nDocumented **Data sources**: `cisco:intersight:metrics` (power, temperature). **App/TA** (typical add-on context): `Cisco Intersight Add-on`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_intersight; **sourcetype**: cisco:intersight:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_intersight, sourcetype=\"cisco:intersight:metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by server_name, metric_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where avg_val > case(metric_name=\"inlet_temp_celsius\", 28, metric_name=\"exhaust_temp_celsius\", 45, metric_name=\"power_draw_…` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=1h | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Intersight Server Power and Thermal Telemetry** — Power draw and thermal readings across the fleet feed capacity planning, detect cooling anomalies, and correlate with workload spikes. Trending helps predict rack-level power budget exhaustion.\n\nDocumented **Data sources**: `cisco:intersight:metrics` (power, temperature). **App/TA** (typical add-on context): `Cisco Intersight Add-on`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (power/thermal over time), Heatmap (server x temperature), Single value (peak power draw).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS B-Series, C-Series, X-Series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how much electricity your shared computer racks use and how warm they are getting over time, across all the sites your team manages in one place. That way you can add power or fix cooling before a breaker flips or things slow down, and you can show leaders clear totals when they ask about energy use.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=1h | sort - agg_value",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_intersight"
              ],
              "ta_link": {
                "name": "Cisco Intersight Add-on for Splunk",
                "id": 7828,
                "url": "https://splunkbase.splunk.com/app/7828"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.23",
              "n": "Intersight Audit Log and Configuration Changes",
              "c": "high",
              "f": "beginner",
              "v": "Tracks every admin action and policy modification in Intersight. Correlating these with incident timelines reveals whether infrastructure changes contributed to outages or security events.",
              "t": "`Cisco Intersight Add-on`",
              "d": "`cisco:intersight:auditRecords`",
              "q": "index=cisco_intersight sourcetype=\"cisco:intersight:auditRecords\" earliest=-24h\n| where action IN (\"Update\",\"Delete\",\"Create\") AND object_type IN (\"server.Profile\",\"firmware.Policy\",\"ntp.Policy\",\"boot.PrecisionPolicy\")\n| stats count by user_email, action, object_type, object_name\n| sort -count",
              "m": "Ingest audit logs every 5 minutes. Alert on high-impact changes (profile deployments, firmware policy changes) outside change windows. Feed into ES notable events for SOC visibility.",
              "z": "Table (recent changes), Timeline (change events), Bar chart (changes by user).",
              "kfp": "Planned Intersight **bulk policy sync** or Cisco-managed repository refreshes that touch the same `ObjectType` hundreds of times in minutes can look like an attack in naive counts—tune with time windows, automation lookup rows, and change tickets. Dedicated automation accounts (HashiCorp Terraform state apply, Ansible Tower role sync, or internal golden-image pipelines) should live in `intersight_automation_accounts` with `suppress_alert=true` after the security team attests the principal. ServiceNow **CMDB** connectors that re-write tags or org metadata to satisfy discovery rules create noisy *Modified* rows; route those via the lookup or a `Source` sub-filter, not a silent drop of the entire class. Monthly Microsoft Intune or endpoint-tooling renames in companion systems may correlate in time with audit noise—do not merge unrelated stories into Intersight RCA without a shared `TraceId`. Unlike UC-19.1.19’s alarm reclassification noise or UC-19.1.20’s post-upgrade *RunningFirmware* lag, the dominant false story here is **expected automation** masquerading as a human—your defence is a maintained CSV, not a higher threshold.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Intersight Add-on`.\n• Ensure the following data sources are available: `cisco:intersight:auditRecords`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest audit logs every 5 minutes. Alert on high-impact changes (profile deployments, firmware policy changes) outside change windows. Feed into ES notable events for SOC visibility.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_intersight sourcetype=\"cisco:intersight:auditRecords\" earliest=-24h\n| where action IN (\"Update\",\"Delete\",\"Create\") AND object_type IN (\"server.Profile\",\"firmware.Policy\",\"ntp.Policy\",\"boot.PrecisionPolicy\")\n| stats count by user_email, action, object_type, object_name\n| sort -count\n```\n\nUnderstanding this SPL\n\n**Intersight Audit Log and Configuration Changes** — Tracks every admin action and policy modification in Intersight. Correlating these with incident timelines reveals whether infrastructure changes contributed to outages or security events.\n\nDocumented **Data sources**: `cisco:intersight:auditRecords`. **App/TA** (typical add-on context): `Cisco Intersight Add-on`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_intersight; **sourcetype**: cisco:intersight:auditRecords. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_intersight, sourcetype=\"cisco:intersight:auditRecords\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action IN (\"Update\",\"Delete\",\"Create\") AND object_type IN (\"server.Profile\",\"firmware.Policy\",\"ntp.Policy\",\"boot.Prec…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user_email, action, object_type, object_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Intersight Audit Log and Configuration Changes** — Tracks every admin action and policy modification in Intersight. Correlating these with incident timelines reveals whether infrastructure changes contributed to outages or security events.\n\nDocumented **Data sources**: `cisco:intersight:auditRecords`. **App/TA** (typical add-on context): `Cisco Intersight Add-on`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (recent changes), Timeline (change events), Bar chart (changes by user).",
              "script": "",
              "premium": "",
              "hw": "All Intersight-managed endpoints",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We keep a clear written trail of who changed the important control settings in your management cloud, so unexpected edits to sign-in rules, boot order, or update policies are noticed before they spread. We also separate well-known machine accounts from people so the right humans get the call.",
              "mtype": [
                "Audit",
                "Change"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_intersight"
              ],
              "ta_link": {
                "name": "Cisco Intersight Add-on for Splunk",
                "id": 7828,
                "url": "https://splunkbase.splunk.com/app/7828"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.24",
              "n": "Intersight Contract and Warranty Compliance",
              "c": "medium",
              "f": "beginner",
              "v": "Monitoring support contract and warranty expiration across the compute fleet prevents coverage gaps that delay RMA and increase risk during hardware failures.",
              "t": "`Cisco Intersight Add-on`",
              "d": "`cisco:intersight:contracts`",
              "q": "index=cisco_intersight sourcetype=\"cisco:intersight:contracts\"\n| eval days_to_expiry=round((strptime(contract_end_date,\"%Y-%m-%dT%H:%M:%S\")-now())/86400)\n| where days_to_expiry < 90 OR contract_status!=\"Active\"\n| table server_name, serial, contract_status, contract_end_date, days_to_expiry\n| sort days_to_expiry",
              "m": "Poll contract information weekly. Alert at 90, 60, and 30 day thresholds. Generate a monthly report for procurement.",
              "z": "Table (expiring contracts), Single value (servers without active contract), Gauge (% fleet covered).",
              "kfp": "A renewal in flight on the Cisco side can leave an older row marked Expired for a day or two before the new Active record lands from Smart Net Total Care back-end sync—use your finance tickler date as a second source before you punish an owner. Rows whose Source field is `cisco_partner` often have the legally-binding renewal in the partner portal first; Splunk is reflecting Cisco’s channel timing, not your procurement lie. A lab on `8X5XNBD` beside production on `PSUP` is a lab policy choice, not a sev-1, when `intersight_critical_assets.csv` marks `asset_role=lab`. A hardware-refresh month where the decommissioned serial is Expired on purpose while the replacement serial is not yet Active in Intersight should map through `intersight_replacement_serial_map.csv` in your runbook, not a bridge page. M&A or entity-change windows where `BillTo` still shows the pre-close legal name can spook finance; route to procurement master data, not a compute incident. When Cisco renames a ServiceLevel code (`SNTC` to `SNT` style drift), a trend dashboard can briefly look worse until lookup tables catch up. Managed-service and MSP tenants with strict Intersight RBAC can show a missing contract row for operators who are only scoped to a child org—the gap can be permissions on the key, not an absent entitlement. A midnight `EndDate` in UTC over a different calendar quarter in local business time is a common procurement-versus-engineering argument—agree a single `effective_close_tz` in your runbook, not a second Splunk system.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Intersight Add-on`.\n• Ensure the following data sources are available: `cisco:intersight:contracts`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll contract information weekly. Alert at 90, 60, and 30 day thresholds. Generate a monthly report for procurement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_intersight sourcetype=\"cisco:intersight:contracts\"\n| eval days_to_expiry=round((strptime(contract_end_date,\"%Y-%m-%dT%H:%M:%S\")-now())/86400)\n| where days_to_expiry < 90 OR contract_status!=\"Active\"\n| table server_name, serial, contract_status, contract_end_date, days_to_expiry\n| sort days_to_expiry\n```\n\nUnderstanding this SPL\n\n**Intersight Contract and Warranty Compliance** — Monitoring support contract and warranty expiration across the compute fleet prevents coverage gaps that delay RMA and increase risk during hardware failures.\n\nDocumented **Data sources**: `cisco:intersight:contracts`. **App/TA** (typical add-on context): `Cisco Intersight Add-on`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_intersight; **sourcetype**: cisco:intersight:contracts. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_intersight, sourcetype=\"cisco:intersight:contracts\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_to_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_expiry < 90 OR contract_status!=\"Active\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Intersight Contract and Warranty Compliance**): table server_name, serial, contract_status, contract_end_date, days_to_expiry\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (expiring contracts), Single value (servers without active contract), Gauge (% fleet covered).",
              "script": "",
              "premium": "",
              "hw": "All Intersight-managed endpoints",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We look at the paperwork clock on your big shared computer systems—the dates that say when the vendor is still on the hook to help you fast if something physical breaks, and which service grade you are paying for. We line that up with who owns the budget for renewals and who owns the machines, so a surprise never lands the week a board or auditor asks whether you are still covered.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_intersight"
              ],
              "ta_link": {
                "name": "Cisco Intersight Add-on for Splunk",
                "id": 7828,
                "url": "https://splunkbase.splunk.com/app/7828"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.25",
              "n": "UCS X-Series Intelligent Fabric Module Health",
              "c": "high",
              "f": "intermediate",
              "v": "The X-Series Intelligent Fabric Modules (IFMs) replace traditional IOMs and provide Ethernet and management connectivity to every compute node in the chassis. IFM faults or link degradation can isolate entire chassis from the fabric.",
              "t": "`Cisco Intersight Add-on`, `Splunk_TA_cisco-ucs`",
              "d": "`cisco:intersight:alarms`, `cisco:ucs:faults`",
              "q": "index=cisco_intersight sourcetype=\"cisco:intersight:alarms\" affected_object_type=\"equipment.IoCard\" OR affected_object_type=\"equipment.Fex\"\n| append [search index=cisco_ucs sourcetype=\"cisco:ucs:faults\" dn=\"*iom*\" OR dn=\"*iocard*\"]\n| stats count by severity, affected_object_type, name, chassis_id\n| where severity IN (\"Critical\",\"Warning\")\n| sort -severity, -count",
              "m": "Monitor IFM alarms via both Intersight and UCS Manager. Alert immediately on critical IFM faults. Correlate with FI port channel health (UC-19.1.5) for end-to-end fabric path analysis.",
              "z": "Table (IFM alarms), Status grid (chassis x IFM slot), Timeline (fault events).",
              "kfp": "(1) Planned IFM firmware rolling wave: one leg can be newer than the other until the second activation finishes, so firm_drift=1 mid-change; track the change id, not an outage. (2) Scheduled A or B IFM swap during a hardware refresh: operState and presence pass through transient values; the maintenance ticket is authoritative. (3) Admin-down or maintenance on a vNIC or fabric path: oper can be non-operable for the change window; use a maintenance lookup. (4) DAC reseat or PXE kick causing short etherPIo oper flips: use two transitions in 15m before a production page, or a post-change tag. (5) Staggered IFM-A/IFM-B file-by-file activation during a rolling wave: drift is the expected half-way state. (6) UCSX-I-9108-25G to UCSX-I-9108-100G uplift: versions differ on purpose until both slots match the target BOM. (7) TAC-expected fault F0408 during a guided reseat: keep the TAC case id in the ticket, not a second P1, unless a second health signal also fires. (8) Poll skew: a fault cleared in the UCSM GUI can miss one cisco:ucs:faults poll; widen earliest or add cisco:ucs:events. (9) Chassis renumber, lab motion, or DR that reused chassis- ids: the equipmentChassis join can look empty until inventory republishes. (10) One IFM not yet cabled: firm_drift is not computable; suppress drift. (11) A widened test filter that pulls UCSB-5108 data: only the UCSX-9508 model join keeps B-series out; fix the test, not the TA. (12) Intersight- or IMM-primary domains: thin cisco:ucs:stats is a 19.1.22 and governance sign, not proof the IFM is healthy.",
              "refs": "[CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Intersight Add-on`, `Splunk_TA_cisco-ucs`.\n• Ensure the following data sources are available: `cisco:intersight:alarms`, `cisco:ucs:faults`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor IFM alarms via both Intersight and UCS Manager. Alert immediately on critical IFM faults. Correlate with FI port channel health (UC-19.1.5) for end-to-end fabric path analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_intersight sourcetype=\"cisco:intersight:alarms\" affected_object_type=\"equipment.IoCard\" OR affected_object_type=\"equipment.Fex\"\n| append [search index=cisco_ucs sourcetype=\"cisco:ucs:faults\" dn=\"*iom*\" OR dn=\"*iocard*\"]\n| stats count by severity, affected_object_type, name, chassis_id\n| where severity IN (\"Critical\",\"Warning\")\n| sort -severity, -count\n```\n\nUnderstanding this SPL\n\n**UCS X-Series Intelligent Fabric Module Health** — The X-Series Intelligent Fabric Modules (IFMs) replace traditional IOMs and provide Ethernet and management connectivity to every compute node in the chassis. IFM faults or link degradation can isolate entire chassis from the fabric.\n\nDocumented **Data sources**: `cisco:intersight:alarms`, `cisco:ucs:faults`. **App/TA** (typical add-on context): `Cisco Intersight Add-on`, `Splunk_TA_cisco-ucs`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_intersight; **sourcetype**: cisco:intersight:alarms. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_intersight, sourcetype=\"cisco:intersight:alarms\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by severity, affected_object_type, name, chassis_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where severity IN (\"Critical\",\"Warning\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**UCS X-Series Intelligent Fabric Module Health** — The X-Series Intelligent Fabric Modules (IFMs) replace traditional IOMs and provide Ethernet and management connectivity to every compute node in the chassis. IFM faults or link degradation can isolate entire chassis from the fabric.\n\nDocumented **Data sources**: `cisco:intersight:alarms`, `cisco:ucs:faults`. **App/TA** (typical add-on context): `Cisco Intersight Add-on`, `Splunk_TA_cisco-ucs`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (IFM alarms), Status grid (chassis x IFM slot), Timeline (fault events).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS X9508, X210c, X410c, IFM 9108-25G, IFM 9108-100G",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We treat each new Cisco X cabinet like a small hub airport: two concourse bridges are the paired smart fabric cards every local plane uses to reach the long-haul network. We watch for closed gates, bumpy bridges, and mismatched playbooks, because if that hub wobbles, every flight through it delays at once, even when the main runway map still looks green.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_intersight",
                "cisco_ucs"
              ],
              "ta_link": {
                "name": "Cisco Intersight Add-on for Splunk",
                "id": 7828,
                "url": "https://splunkbase.splunk.com/app/7828"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.26",
              "n": "Nutanix Prism Central Alert Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Cluster-wide alerts from Prism Central provide early warning of infrastructure issues across managed Nutanix clusters. Monitoring unresolved critical and warning alerts enables rapid response to capacity, hardware, and service degradation before it impacts workloads.",
              "t": "Custom (Nutanix Prism Central REST API)",
              "d": "Nutanix Prism Central `/api/nutanix/v3/alerts/list`",
              "q": "index=nutanix sourcetype=\"nutanix:prism_central:alerts\"\n| where resolved==false OR resolved==\"false\"\n| eval severity_normalized=case(\n    lower(severity)==\"critical\", \"Critical\",\n    lower(severity)==\"warning\", \"Warning\",\n    lower(severity)==\"info\", \"Info\",\n    1==1, coalesce(severity, \"Unknown\"))\n| stats count as alert_count, latest(_time) as last_occurred by cluster, severity_normalized, primary_impact_type, source_entity_type\n| sort -severity_normalized, -alert_count\n| table cluster, severity_normalized, primary_impact_type, source_entity_type, alert_count, last_occurred",
              "m": "Create a REST API modular input or scripted input that polls Prism Central `/api/nutanix/v3/alerts/list` every 2–5 minutes. Use POST with filter `resolved==false` to retrieve only active alerts. Authenticate with Prism Central credentials (stored in Splunk credential manager). Parse JSON response and index with sourcetype `nutanix:prism_central:alerts`. Configure field extractions for `cluster`, `severity`, `primary_impact_type`, `source_entity_type`, `resolved`, `title`. Alert on critical severity count > 0 or warning count exceeding threshold. Correlate with cluster health and CVM metrics.",
              "z": "Table (active alerts by cluster), Bar chart (alerts by severity and impact type), Status grid (cluster alert status), Single value (critical alert count).",
              "kfp": "Planned LCM (Life Cycle Manager) AOS or firmware rollouts that temporarily elevate alert counts while nodes drain and reboot. Expected rolling CVM restarts during AOS maintenance releases. Deliberate operator disk_failure simulations / RMA dry runs that open synthetic CRITICALs. nfs_storage_io_warnings or snapshot-consolidation storms during large protection-group jobs. First-run backup integration (HYCU, Veeam, Rubrik, or other vendors) that surfaces WARNINGs until schedules stabilize. network_path_redundancy_lost or similar network warnings during a documented, approved ToR or switch MOP. pcsd or cluster-service messages during a documented PCVM scale-out add that includes rolling restarts. User-initiated cluster rename in a lab that re-triggers re-registration noise that typically clears in minutes. vdisk capacity-class WARNINGs/CRITICALs during planned vdisk moves or vStorage migrations. cvm restart-family alerts when an operator intentionally re-tags the CVM management VLAN or bounces a CVM in break-fix with change control. Transient false spikes during IPv6 or name-resolution experiments on the PCVM if the site toggles internal DNS. None of these replace genuine CRITICALs—validate against change calendars before muting long term.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Nutanix Prism Central REST API).\n• Ensure the following data sources are available: Nutanix Prism Central `/api/nutanix/v3/alerts/list`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a REST API modular input or scripted input that polls Prism Central `/api/nutanix/v3/alerts/list` every 2–5 minutes. Use POST with filter `resolved==false` to retrieve only active alerts. Authenticate with Prism Central credentials (stored in Splunk credential manager). Parse JSON response and index with sourcetype `nutanix:prism_central:alerts`. Configure field extractions for `cluster`, `severity`, `primary_impact_type`, `source_entity_type`, `resolved`, `title`. Alert on critical sever…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nutanix sourcetype=\"nutanix:prism_central:alerts\"\n| where resolved==false OR resolved==\"false\"\n| eval severity_normalized=case(\n    lower(severity)==\"critical\", \"Critical\",\n    lower(severity)==\"warning\", \"Warning\",\n    lower(severity)==\"info\", \"Info\",\n    1==1, coalesce(severity, \"Unknown\"))\n| stats count as alert_count, latest(_time) as last_occurred by cluster, severity_normalized, primary_impact_type, source_entity_type\n| sort -severity_normalized, -alert_count\n| table cluster, severity_normalized, primary_impact_type, source_entity_type, alert_count, last_occurred\n```\n\nUnderstanding this SPL\n\n**Nutanix Prism Central Alert Monitoring** — Cluster-wide alerts from Prism Central provide early warning of infrastructure issues across managed Nutanix clusters. Monitoring unresolved critical and warning alerts enables rapid response to capacity, hardware, and service degradation before it impacts workloads.\n\nDocumented **Data sources**: Nutanix Prism Central `/api/nutanix/v3/alerts/list`. **App/TA** (typical add-on context): Custom (Nutanix Prism Central REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nutanix; **sourcetype**: nutanix:prism_central:alerts. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nutanix, sourcetype=\"nutanix:prism_central:alerts\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where resolved==false OR resolved==\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **severity_normalized** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by cluster, severity_normalized, primary_impact_type, source_entity_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Nutanix Prism Central Alert Monitoring**): table cluster, severity_normalized, primary_impact_type, source_entity_type, alert_count, last_occurred\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (active alerts by cluster), Bar chart (alerts by severity and impact type), Status grid (cluster alert status), Single value (critical alert count).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "Nutanix Prism Central is one control panel for many small server-and-storage clusters. We watch the warning and serious alerts from all those clusters in one list so a team can see a problem growing before the websites and apps your company runs start getting slow or failing for customers.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365",
                "nutanix"
              ],
              "em": [
                "nutanix_prism_central"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
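A simple companion check for the validation step above is to confirm the Prism Central REST poller is still writing events, since a stalled input makes the alert search go quiet rather than fail. This is a minimal sketch using the documented index and sourcetype; the 15-minute staleness threshold is an illustrative assumption, not part of the use case.

```spl
| tstats max(_time) as last_event where index=nutanix sourcetype="nutanix:prism_central:alerts" by host
| eval comment="staleness threshold below is illustrative; set it to roughly 3x your poll interval"
| eval minutes_since_last=round((now()-last_event)/60, 0)
| where minutes_since_last > 15
| fields - comment
| sort - minutes_since_last
```

Any row returned here means the collector host has not delivered alert data recently and the main alert search for this use case may be silently empty.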
            {
              "i": "19.1.27",
              "n": "Nutanix AOS Version Compliance",
              "c": "medium",
              "f": "beginner",
              "v": "Clusters running non-current AOS versions face compatibility risks, support limitations, and security gaps. Tracking version compliance enables upgrade planning, audit reporting, and ensures consistency across the Nutanix fleet.",
              "t": "Custom (Nutanix Prism Central REST API)",
              "d": "Nutanix `/api/nutanix/v3/clusters/list` (cluster_version)",
              "q": "index=nutanix sourcetype=\"nutanix:prism_central:clusters\"\n| stats latest(cluster_version) as aos_version, latest(cluster_name) as cluster_name, latest(uuid) as cluster_uuid by cluster_name\n| lookup nutanix_aos_baseline.csv cluster_name OUTPUT target_version\n| eval compliant=if(aos_version==target_version OR isnull(target_version), \"Yes\", \"No\")\n| where compliant==\"No\"\n| table cluster_name, aos_version, target_version, compliant\n| sort cluster_name",
              "m": "Poll Prism Central `/api/nutanix/v3/clusters/list` (or equivalent cluster inventory endpoint) every 6–24 hours. Extract `cluster_version` (AOS version) and `name` from each cluster entity. Create lookup `nutanix_aos_baseline.csv` with columns `cluster_name` and `target_version` defining the approved AOS version per cluster or environment. Compare running version to baseline. Alert on clusters with version drift. Generate weekly compliance report. Use for maintenance window planning and support eligibility checks.",
              "z": "Table (non-compliant clusters), Pie chart (version distribution), Single value (compliance percentage), Bar chart (clusters by AOS version).",
              "kfp": "An active AOS or hypervisor LCM run leaves the Prism `target_version` (API) different from the running NOS for hours, which this search suppresses; do not re-page during that window. Staggered production waves intentionally keep related clusters on different approved minors; each baseline row should encode that cluster’s allowed `target_version`, not one global value. The shape of `software_map` between AOS 6.x and 7.x can differ, so a parser gap can look like missing NCC until `spath` and props are correct—verify before paging. **OEM vendor-locked** AOS can sit below a public LTS banner while still fully supported; record the qualified build in the baseline, not a lab default. **Dev/test** clusters held back for an ISV certification have approved stragglers: use a future `deferred_until_date` and a named owner. Nutanix Cloud Clusters (NC2 on AWS or Azure) may follow a vendor-owned lifecycle; tag `business_service` so the compliance search does not fight shared-responsibility reality. A mistyped `nutanix:prism_central:clusters` sourcetype in old runbooks will return no data; rewrite to `nutanix:prism_central:cluster`. This UC is version posture, not the **UC-19.1.26** v3 `alerts/list` path—do not cross-wire validation steps from that sibling.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Nutanix Prism Central REST API).\n• Ensure the following data sources are available: Nutanix `/api/nutanix/v3/clusters/list` (cluster_version).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Prism Central `/api/nutanix/v3/clusters/list` (or equivalent cluster inventory endpoint) every 6–24 hours. Extract `cluster_version` (AOS version) and `name` from each cluster entity. Create lookup `nutanix_aos_baseline.csv` with columns `cluster_name` and `target_version` defining the approved AOS version per cluster or environment. Compare running version to baseline. Alert on clusters with version drift. Generate weekly compliance report. Use for maintenance window planning and support e…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nutanix sourcetype=\"nutanix:prism_central:clusters\"\n| stats latest(cluster_version) as aos_version, latest(cluster_name) as cluster_name, latest(uuid) as cluster_uuid by cluster_name\n| lookup nutanix_aos_baseline.csv cluster_name OUTPUT target_version\n| eval compliant=if(aos_version==target_version OR isnull(target_version), \"Yes\", \"No\")\n| where compliant==\"No\"\n| table cluster_name, aos_version, target_version, compliant\n| sort cluster_name\n```\n\nUnderstanding this SPL\n\n**Nutanix AOS Version Compliance** — Clusters running non-current AOS versions face compatibility risks, support limitations, and security gaps. Tracking version compliance enables upgrade planning, audit reporting, and ensures consistency across the Nutanix fleet.\n\nDocumented **Data sources**: Nutanix `/api/nutanix/v3/clusters/list` (cluster_version). **App/TA** (typical add-on context): Custom (Nutanix Prism Central REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nutanix; **sourcetype**: nutanix:prism_central:clusters. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nutanix, sourcetype=\"nutanix:prism_central:clusters\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where compliant==\"No\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Nutanix AOS Version Compliance**): table cluster_name, aos_version, target_version, compliant\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant clusters), Pie chart (version distribution), Single value (compliance percentage), Bar chart (clusters by AOS version).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We keep a list of the software level each Nutanix cluster is meant to run. We compare that to what the cluster actually reports so stragglers and end-of-support windows show up in one place, separate from the live fault alarms, and so upgrade planning stays ahead of support and audit surprises.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365",
                "nutanix"
              ],
              "em": [
                "nutanix_prism_central"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
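One way to bootstrap the `nutanix_aos_baseline.csv` lookup used above is to seed it from the AOS versions the clusters currently report and then hand-edit the approved targets. This is a sketch under that assumption, not a required step of the use case:

```spl
index=nutanix sourcetype="nutanix:prism_central:clusters"
| eval comment="seeds the baseline from currently reported versions; edit target_version by hand afterwards"
| stats latest(cluster_version) as target_version by cluster_name
| outputlookup nutanix_aos_baseline.csv
```

A baseline seeded this way only encodes the status quo, so review the file and set each cluster's intended `target_version` deliberately before alerting on drift.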
            {
              "i": "19.1.28",
              "n": "Nutanix Snapshot Retention Compliance",
              "c": "medium",
              "f": "intermediate",
              "v": "Protection domain snapshots that exceed retention policy consume storage and increase backup window duration. Monitoring snapshot age and count identifies stale snapshots for cleanup, reduces storage waste, and ensures alignment with data protection policies.",
              "t": "Custom (Nutanix Prism Element REST API)",
              "d": "Nutanix `/api/nutanix/v2/protection_domains`, `/api/nutanix/v2/protection_domains/:name/dr_snapshots`",
              "q": "index=nutanix sourcetype=\"nutanix:protection_domains:snapshots\"\n| eval snapshot_age_days=round((now()-_time)/86400, 0)\n| lookup nutanix_snapshot_retention_policy.csv protection_domain OUTPUT max_age_days, max_count\n| stats count as snapshot_count, max(snapshot_age_days) as oldest_days, latest(max_age_days) as max_age_days, latest(max_count) as max_count by cluster, protection_domain\n| eval over_retention=if(oldest_days>max_age_days OR snapshot_count>max_count, \"Non-Compliant\", \"Compliant\")\n| where over_retention==\"Non-Compliant\"\n| table cluster, protection_domain, snapshot_count, oldest_days, max_age_days, max_count, over_retention\n| sort -oldest_days",
              "m": "Poll each Prism Element (per-cluster) for `/api/nutanix/v2/protection_domains` to list protection domains, then call `/api/nutanix/v2/protection_domains/{name}/dr_snapshots` for each domain to retrieve snapshot metadata (creation time, count). Run every 6–12 hours. Index with sourcetype `nutanix:protection_domains:snapshots`. Create lookup `nutanix_snapshot_retention_policy.csv` with `protection_domain`, `max_age_days`, `max_count` per policy. Calculate snapshot age from creation timestamp. Alert when snapshot count or oldest snapshot age exceeds policy. Report on storage consumed by over-retention snapshots. Integrate with capacity dashboards.",
              "z": "Table (non-compliant protection domains), Bar chart (snapshot count by domain), Gauge (oldest snapshot age), Single value (domains over retention).",
              "kfp": "Planned one-off snapshot storms after `ncli pd add-snapshot` validation runs, especially before go-live, can temporarily inflate `snap_count` or make `rpo_lag_mins` look short—compare against the change record. Retention purge windows that delete old snaps can drop count to 1 for minutes until the next scheduled snap; do not page on a single sub-hour dip if the schedule is daily. Nutanix Files protection domains with schedules driven only on FSVM maintenance paths can desynchronise from generic PD polling intervals—treat as known variance or narrow collection to the FS family API in a child UC, not a Sev-1 here. Async or NearSync DR mid-replication, stretch-cluster pause, or a documented maintenance mode that defers the next snapshot should suppress paging via a maintenance lookup or a higher throttle. A legal-hold flag in API metadata, when present, that extends `expiry_time_usecs` is good hygiene, not non-compliance. Seeded full-PD initial replication after a new remote site comes online can look like a wide `rpo_lag` until the first successful snapshot—pair with a first-run-until field in the CSV. None of these are substitutes for an actual RPO miss during steady-state operations.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Nutanix Prism Element REST API).\n• Ensure the following data sources are available: Nutanix `/api/nutanix/v2/protection_domains`, `/api/nutanix/v2/protection_domains/:name/dr_snapshots`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll each Prism Element (per-cluster) for `/api/nutanix/v2/protection_domains` to list protection domains, then call `/api/nutanix/v2/protection_domains/{name}/dr_snapshots` for each domain to retrieve snapshot metadata (creation time, count). Run every 6–12 hours. Index with sourcetype `nutanix:protection_domains:snapshots`. Create lookup `nutanix_snapshot_retention_policy.csv` with `protection_domain`, `max_age_days`, `max_count` per policy. Calculate snapshot age from creation timestamp. Aler…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nutanix sourcetype=\"nutanix:protection_domains:snapshots\"\n| eval snapshot_age_days=round((now()-_time)/86400, 0)\n| lookup nutanix_snapshot_retention_policy.csv protection_domain OUTPUT max_age_days, max_count\n| stats count as snapshot_count, max(snapshot_age_days) as oldest_days, latest(max_age_days) as max_age_days, latest(max_count) as max_count by cluster, protection_domain\n| eval over_retention=if(oldest_days>max_age_days OR snapshot_count>max_count, \"Non-Compliant\", \"Compliant\")\n| where over_retention==\"Non-Compliant\"\n| table cluster, protection_domain, snapshot_count, oldest_days, max_age_days, max_count, over_retention\n| sort -oldest_days\n```\n\nUnderstanding this SPL\n\n**Nutanix Snapshot Retention Compliance** — Protection domain snapshots that exceed retention policy consume storage and increase backup window duration. Monitoring snapshot age and count identifies stale snapshots for cleanup, reduces storage waste, and ensures alignment with data protection policies.\n\nDocumented **Data sources**: Nutanix `/api/nutanix/v2/protection_domains`, `/api/nutanix/v2/protection_domains/:name/dr_snapshots`. **App/TA** (typical add-on context): Custom (Nutanix Prism Element REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nutanix; **sourcetype**: nutanix:protection_domains:snapshots. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nutanix, sourcetype=\"nutanix:protection_domains:snapshots\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **snapshot_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by cluster, protection_domain** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **over_retention** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where over_retention==\"Non-Compliant\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Nutanix Snapshot Retention Compliance**): table cluster, protection_domain, snapshot_count, oldest_days, max_age_days, max_count, over_retention\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant protection domains), Bar chart (snapshot count by domain), Gauge (oldest snapshot age), Single value (domains over retention).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We look at the copies your Nutanix box keeps for each protected group of workloads. We check how many copies are stored, how old the oldest is, and whether new copies are arriving on time against what the team said they would do for backup and recovery. If something drifts, we can show a short list to fix, which helps with audits and keeps storage from slowly filling with expired copies.",
              "mtype": [
                "Compliance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [
                "nutanix_prism_element"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.29",
              "n": "Blade Server ECC Memory Error Rate (Cisco UCS)",
              "c": "high",
              "f": "intermediate",
              "v": "Correctable ECC errors often precede uncorrectable failures and guest crashes. Trending per-blade memory error rates lets you schedule DIMM replacement during maintenance windows instead of reacting to sudden hardware loss.",
              "t": "`Splunk_TA_cisco-ucs`, UCS Manager faults and SEL",
              "d": "`index=cisco_ucs` `sourcetype=\"cisco:ucs:faults\"` with fields `dn`, `cause`, `code`, `severity`",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:faults\" earliest=-24h\n| search dn=\"sys/chassis-*/blade-*\" AND (like(lower(cause),\"%ecc%\") OR like(lower(cause),\"%memory%\") OR like(lower(cause),\"%dimm%\"))\n| stats count as fault_events by dn, code, severity\n| sort -fault_events",
              "m": "(1) Ensure UCS Manager faults or CIMC memory events forward to Splunk with consistent `dn` naming; (2) baseline normal correctable rates per chassis; (3) alert when per-blade counts exceed baseline or severity is major/critical.",
              "z": "Bar chart (faults by blade), Table (top DIMM-related codes), Timechart (ECC-related fault rate).",
              "kfp": "• (a) **BIOS** / **CIMC** **firmware** update or a deliberate **clear** **statistics** in **UCSM** resets **cumulative** **counters**; the `if(current>=previous,…)` **delta** branch shows a one-bin discontinuity, not always a new fault — pair with a change id before **RMA** **DIMMs**.\n• (b) **UCSM** **cluster** or **CIMC** **clock** skew: two polls in the “wrong” order make **rate** look negative until the next good bin; compare **`_indextime` − `_time`** if **Sev1** trust is challenged.\n• (c) **Single-bit** **chip** failure that stays within **CPU**-correctable guardrails: very high **correctable**/hour can still be “healthy enough” in vendor eyes — the **lookup** row is a **planning** band, not a warranty; tune before paging.\n• (d) **Intermittent I2C/SEL** transport noise to the service processor: short spikes that **UCSM** soaks the next time you look in **Equipment → Memory** — require two bad **15m** **bins** before **FRU**.\n• (e) **Migrated** or **re-seated** **DIMMs** after a truck roll: **counters** start from zero; the **fault** list may not yet show a **F0186** repeat — treat the first bin after a move as a **re-baseline**.\n• (f) **AppDirect** or **PMem** traffic patterns next to a conventional **DIMM** on the same board can confuse operators who conflate **app**-tag **stats** with **DIMM** **ECC** — see **exclusions**.\n• (g) **vSphere** / **ESXi** health events or **MCE** logs at the **guest** for the same time window: correlated story but out of scope here — this UC is the **UCSM**-materialised view only.\n• (h) **Dual-feed** (API and **syslog**) to the same index can duplicate a **faultInst** row; **count**-based fields may double if you have not de-duplicated the **fault** path — use one **authoritative** feed for paging, or de-dupe in a **provisional** **summary** with `dedup` on `code`+`dimm_dn` in a 120s **window**.\n• (i) **Memory** **thermal** throttling and **bandwidth** loss without rising **UCSM** **ECC** **fault** entries can still hurt apps — pair with **host**-side and **19.1.34**-style thermal context when guests complain with clean **DIMM** **rates**.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`, UCS Manager faults and SEL.\n• Ensure the following data sources are available: `index=cisco_ucs` `sourcetype=\"cisco:ucs:faults\"` with fields `dn`, `cause`, `code`, `severity`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure UCS Manager faults or CIMC memory events forward to Splunk with consistent `dn` naming; (2) baseline normal correctable rates per chassis; (3) alert when per-blade counts exceed baseline or severity is major/critical.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:faults\" earliest=-24h\n| search dn=\"sys/chassis-*/blade-*\" AND (like(lower(cause),\"%ecc%\") OR like(lower(cause),\"%memory%\") OR like(lower(cause),\"%dimm%\"))\n| stats count as fault_events by dn, code, severity\n| sort -fault_events\n```\n\nUnderstanding this SPL\n\n**Blade Server ECC Memory Error Rate (Cisco UCS)** — Correctable ECC errors often precede uncorrectable failures and guest crashes. Trending per-blade memory error rates lets you schedule DIMM replacement during maintenance windows instead of reacting to sudden hardware loss.\n\nDocumented **Data sources**: `index=cisco_ucs` `sourcetype=\"cisco:ucs:faults\"` with fields `dn`, `cause`, `code`, `severity`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`, UCS Manager faults and SEL. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:faults. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:faults\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by dn, code, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (faults by blade), Table (top DIMM-related codes), Timechart (ECC-related fault rate).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS B200 M6/M7, UCS X210c M6/M7, UCS X410c M6",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch the little memory chips on each server for a slowly rising stream of small fixable errors, and we pair that with the bigger notices when something is truly wrong, so the repair team can change the right part in a quiet window. That is a different job from a general “something broke” list, and different from watching network wires or room air heat.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
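The false-positive notes above assume a reset-tolerant delta over cumulative correctable-error counters. The documented feed for this use case is the faults sourcetype, so the following is only a sketch of that delta pattern: the sourcetype `cisco:ucs:stats`, the `stat_class` filter, and the `dimm_dn` / `ecc_correctable` fields are placeholders for whatever memory-counter feed you actually collect.

```spl
index=cisco_ucs sourcetype="cisco:ucs:stats" stat_class="memory" earliest=-24h
| eval comment="sourcetype, stat_class, dimm_dn and ecc_correctable are placeholder names for your memory-counter feed"
| sort 0 dimm_dn _time
| streamstats current=f last(ecc_correctable) as previous by dimm_dn
| eval delta=case(isnull(previous), 0, ecc_correctable>=previous, ecc_correctable-previous, 1==1, ecc_correctable)
| timechart span=15m sum(delta) as correctable_per_bin by dimm_dn
```

The `case()` treats a counter that moved backwards as a reset (firmware update or cleared statistics) and restarts the delta from the new reading instead of reporting a negative rate.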
            {
              "i": "19.1.30",
              "n": "Rack Server PSU N+1 Redundancy (Cisco UCS C-Series)",
              "c": "critical",
              "f": "beginner",
              "v": "A single failed PSU on a rack server without redundant feed leaves no margin before power loss. Tracking redundancy state across C-Series nodes preserves uptime for standalone HCI and database workloads.",
              "t": "`Splunk_TA_cisco-ucs`",
              "d": "`index=cisco_ucs` `sourcetype=\"cisco:ucs:environmental\"` with fields `rack_unit_id`, `psu_slot`, `oper_state`, `redundancy_state`",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:environmental\" metric_type=\"psu\" earliest=-1h\n| search rack_unit_id=*\n| where oper_state!=\"ok\" OR redundancy_state!=\"redundant\" OR input_voltage_state!=\"good\"\n| stats latest(oper_state) as psu_state, latest(redundancy_state) as red by rack_unit_id, psu_slot\n| sort rack_unit_id, psu_slot",
              "m": "(1) Map PSU inventory fields from UCSM API poll; (2) page when redundancy drops from redundant to non-redundant; (3) correlate with facility PDU events in Splunk for root cause.",
              "z": "Status grid (rack unit × PSU slot), Table (non-redundant servers), Single value (servers at risk).",
              "kfp": "• C220/C240 **single-PSU** or intentionally non-redundant line cord for a **lab** row: the MO still lists one supply; the lookup’s `redundancy_intent=single` row should down-rank, not a Sev-1, if your CMDB tag matches.\n• RMA of one rack PSU with the bay marked **removed** for 10–30 minutes: inventory shows two slots with one `inoperable`; use a two-poll hold and a ticket-based snooze on `rack_unit_dn` so you do not page during hot-swap hand-off.\n• **A/B** AC work on the **rack** PDU: both PSUs are operable in UCSM but the feed from one side is intentionally down; the electric change calendar, not the XML poll, is the source of truth—suppress by slot only when BMS and UCSM both expect the work.\n• **Firmware** waves that **reset** a PSU in sequence: a brief `inoperable` on `equipmentPsu` while the pair still has one good path is expected once per part; require two bad bins, not a single poll, before a bridge call on firmware nights.\n• **Clock or poll skew** that lands inventory a cycle ahead of `equipmentPsuStats`: a transient 1-of-2 count when both feeds are actually fine; align `_time` spread and re-run the search with `earliest=-45m` before RMA for skew-only blips.\n• **UCSM maintenance** (FI or IOM work) that drops **statistics** for a cycle while **inventory** still lists both PSUs: `psu_operational_count` from stats rows can be zero with good hardware; compare with parallel `cisco:ucs:faults` and hold the alert on stats-outage alone.\n• **Nameplate** mix after field swap: two different wattage PSUs in the two bays with valid `operable` but odd margin for load—posture is still “intact” in this detection; kW and thermal headroom live in UC-19.1.6, not a duplicate page here.\n• **Stretched** or **drained** service profiles during migration: a rack can show a ghost `rack_unit_dn` in inventory; CMDB decommission the MO before you tune severity, or the table looks worse than the hall.\n• **Index scope** that combines multiple UCSM domains: duplicate `sys/rack-unit-1` DNs in different orgs; add `by host` or `ucsm_fqdn` to the `stats` `by` clause when your add-on populates a host or connection field.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`.\n• Ensure the following data sources are available: `index=cisco_ucs` `sourcetype=\"cisco:ucs:environmental\"` with fields `rack_unit_id`, `psu_slot`, `oper_state`, `redundancy_state`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map PSU inventory fields from UCSM API poll; (2) page when redundancy drops from redundant to non-redundant; (3) correlate with facility PDU events in Splunk for root cause.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:environmental\" metric_type=\"psu\" earliest=-1h\n| search rack_unit_id=*\n| where oper_state!=\"ok\" OR redundancy_state!=\"redundant\" OR input_voltage_state!=\"good\"\n| stats latest(oper_state) as psu_state, latest(redundancy_state) as red by rack_unit_id, psu_slot\n| sort rack_unit_id, psu_slot\n```\n\nUnderstanding this SPL\n\n**Rack Server PSU N+1 Redundancy (Cisco UCS C-Series)** — A single failed PSU on a rack server without redundant feed leaves no margin before power loss. Tracking redundancy state across C-Series nodes preserves uptime for standalone HCI and database workloads.\n\nDocumented **Data sources**: `index=cisco_ucs` `sourcetype=\"cisco:ucs:environmental\"` with fields `rack_unit_id`, `psu_slot`, `oper_state`, `redundancy_state`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:environmental. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:environmental\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Filters the current rows with `where oper_state!=\"ok\" OR redundancy_state!=\"redundant\" OR input_voltage_state!=\"good\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by rack_unit_id, psu_slot** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (rack unit × PSU slot), Table (non-redundant servers), Single value (servers at risk).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS C220 M6/M7, UCS C240 M6/M7, UCS C480 M5",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the two plug-in power blocks on your Cisco rack servers. If one of them stops being healthy, you still have the machine running, but you are out of backup power path—so we want someone on it before a second hiccup takes the box down.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
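The hold described in the notes above (page only after two consecutive bad polls for the same PSU) can be expressed with `streamstats`. This is a minimal sketch using the documented fields; `window=2` assumes the poller writes one event per PSU per poll, so widen it if a poll produces several rows.

```spl
index=cisco_ucs sourcetype="cisco:ucs:environmental" metric_type="psu" earliest=-1h
| eval comment="window=2 assumes one event per PSU per poll cycle"
| eval bad=if(oper_state!="ok" OR redundancy_state!="redundant", 1, 0)
| sort 0 rack_unit_id psu_slot _time
| streamstats window=2 sum(bad) as bad_polls by rack_unit_id, psu_slot
| where bad_polls>=2
| stats latest(_time) as last_seen, latest(oper_state) as psu_state, latest(redundancy_state) as red by rack_unit_id, psu_slot
```

Only PSUs that were unhealthy on two successive polls survive to the final `stats`, which keeps single-poll blips from hot-swaps or poll skew out of the page.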
            {
              "i": "19.1.31",
              "n": "Fabric Interconnect HA Cluster State",
              "c": "critical",
              "f": "intermediate",
              "v": "FI pairs provide control-plane and northbound redundancy. Loss of subordinate or split state risks asymmetric forwarding and prolonged change freezes until the fabric is healed.",
              "t": "`Splunk_TA_cisco-ucs`, UCS Manager syslog",
              "d": "`index=cisco_ucs` `sourcetype=\"cisco:ucs:fi_stats\"` with fields `fi_id`, `ha_state`, `oper_state`, `cluster_state`",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:fi_stats\" object_type=\"fi_cluster\" earliest=-4h\n| stats latest(ha_state) as ha, latest(oper_state) as op, latest(cluster_state) as cl by fi_id\n| where ha!=\"up\" OR op!=\"up\" OR cl!=\"healthy\"\n| table fi_id, ha, op, cl",
              "m": "(1) Ingest FI cluster state from periodic API or structured syslog; (2) alert on any FI not `up` or cluster not `healthy`; (3) runbook to verify L1/L2 links and UCSM primary/ subordinate roles.",
              "z": "Table (FI cluster state), Timeline (state transitions), Single value (unhealthy FI count).",
              "kfp": "UCSM ISSU or Fabric Interconnect N-in-1 firmware activation (expected E419-style syslog burst while roles settle). Planned subordinate FI reseat or RMA (one mgmtIf missing for multiple polls; reconcile with the hardware ticket). Scheduled L1/L2 or DR test between FI and Nexus that also moves the cluster vIP or a temporary stand-alone mode under change control. Controller certificate or TLS profile rotation (one FI missing from inventory for minutes while the other is healthy; treat as cert/NTP, not a split-brain, until the poll path is clean). Major UCSM upgrade where the syslog `code` prefix changes — tighten the `like` against fresh samples, not a three-year runbook. Third-party syslog forwarders that case-fold or duplicate lines, changing apparent election counts. Read-only org policies that strip equipment-class syslog severities in non-prod, producing zero election rows while prod still has data.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`, UCS Manager syslog.\n• Ensure the following data sources are available: `index=cisco_ucs` `sourcetype=\"cisco:ucs:fi_stats\"` with fields `fi_id`, `ha_state`, `oper_state`, `cluster_state`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest FI cluster state from periodic API or structured syslog; (2) alert on any FI not `up` or cluster not `healthy`; (3) runbook to verify L1/L2 links and UCSM primary/ subordinate roles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:fi_stats\" object_type=\"fi_cluster\" earliest=-4h\n| stats latest(ha_state) as ha, latest(oper_state) as op, latest(cluster_state) as cl by fi_id\n| where ha!=\"up\" OR op!=\"up\" OR cl!=\"healthy\"\n| table fi_id, ha, op, cl\n```\n\nUnderstanding this SPL\n\n**Fabric Interconnect HA Cluster State** — FI pairs provide control-plane and northbound redundancy. Loss of subordinate or split state risks asymmetric forwarding and prolonged change freezes until the fabric is healed.\n\nDocumented **Data sources**: `index=cisco_ucs` `sourcetype=\"cisco:ucs:fi_stats\"` with fields `fi_id`, `ha_state`, `oper_state`, `cluster_state`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`, UCS Manager syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:fi_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:fi_stats\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by fi_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ha!=\"up\" OR op!=\"up\" OR cl!=\"healthy\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Fabric Interconnect HA Cluster State**): table fi_id, ha, op, cl\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (FI cluster state), Timeline (state transitions), Single value (unhealthy FI count).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS 6454 FI, UCS 6536 FI",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch the two boss brains on the fabric pair. Either can lead, but they must pick one primary so the whole control side agrees. If both claim to lead or neither will, we stop and tell you first — before the next service change can turn that disagreement into a longer outage for everyone on the domain.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
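For the state-transition timeline suggested in the visualization list, the healthy-to-unhealthy edges can be derived with `streamstats` rather than eyeballing raw polls. A minimal sketch using the documented fields and the same state values as the main search:

```spl
index=cisco_ucs sourcetype="cisco:ucs:fi_stats" object_type="fi_cluster" earliest=-24h
| eval healthy=if(ha_state="up" AND oper_state="up" AND cluster_state="healthy", 1, 0)
| sort 0 fi_id _time
| streamstats current=f last(healthy) as prev_healthy by fi_id
| where prev_healthy=1 AND healthy=0
| table _time, fi_id, ha_state, oper_state, cluster_state
```

Each returned row is the first poll where a previously healthy FI went unhealthy, which makes a cleaner trigger for the runbook than alerting on every unhealthy poll.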
            {
              "i": "19.1.32",
              "n": "CNA / vNIC Adapter Firmware Drift (Cisco UCS)",
              "c": "medium",
              "f": "intermediate",
              "v": "Adapter firmware out of sync with the approved bundle can cause intermittent FCoE/NVMe-oF or driver mismatches after OS upgrades. Per-adapter tracking closes gaps that aggregate inventory views miss.",
              "t": "`Splunk_TA_cisco-ucs`",
              "d": "`index=cisco_ucs` `sourcetype=\"cisco:ucs:inventory\"` with fields `server_dn`, `adapter_slot`, `running_fw`, `adapter_model`",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:inventory\" object_type=\"adapter\" earliest=-24h\n| stats latest(running_fw) as fw by server_dn, adapter_slot, adapter_model\n| lookup ucs_adapter_fw_baseline.csv adapter_model OUTPUT approved_fw\n| where isnotnull(approved_fw) AND fw!=approved_fw\n| table server_dn, adapter_slot, adapter_model, fw, approved_fw",
              "m": "(1) Export adapter inventory including model and firmware; (2) maintain baseline lookup per adapter model; (3) weekly report and alert on production org drift before change windows.",
              "z": "Table (non-compliant adapters), Bar chart (count by model), Pie chart (compliant vs drift).",
              "kfp": "During a rolling **UCSM bundle** upgrade, the management plane and server CIMC are often updated before the **adaptor** images finish activation; you will see a window where the adapter is still on the previous component build even though the domain is not broken. A replacement card that came from a cold-spare RMA with factory firmware and is intentionally aligned on the next maintenance day will also appear as drift until you schedule activation or refresh the **ucs_adapter_fw_baseline** row with an approved interim build written in your change record. C-Series nodes where an operator side-loaded a Cisco field hot-fix image outside the current bundle, with explicit platform-team approval, will trip until you either add an exception row keyed on serial or widen the `approved_fw` string to that hot-fix label. X-Series mLOM or rear mezz work staged for a firmware activation during the same cutover that already shows **fw_staged_pending_reboot** on the running bank can look harsher than reality—throttle to `adaptor_mo` and cross-check the UCSM firmware auto-install or pending-activation FSM, not a single 15 minute poll. Lab orgs and intentionally lagging qualification blades should be down-tiered with an org-based `business_org` map or a nonprod column in the baseline CSV, not the same sev-1 path as a production org.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`.\n• Ensure the following data sources are available: `index=cisco_ucs` `sourcetype=\"cisco:ucs:inventory\"` with fields `server_dn`, `adapter_slot`, `running_fw`, `adapter_model`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export adapter inventory including model and firmware; (2) maintain baseline lookup per adapter model; (3) weekly report and alert on production org drift before change windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:inventory\" object_type=\"adapter\" earliest=-24h\n| stats latest(running_fw) as fw by server_dn, adapter_slot, adapter_model\n| lookup ucs_adapter_fw_baseline.csv adapter_model OUTPUT approved_fw\n| where isnotnull(approved_fw) AND fw!=approved_fw\n| table server_dn, adapter_slot, adapter_model, fw, approved_fw\n```\n\nUnderstanding this SPL\n\n**CNA / vNIC Adapter Firmware Drift (Cisco UCS)** — Adapter firmware out of sync with the approved bundle can cause intermittent FCoE/NVMe-oF or driver mismatches after OS upgrades. Per-adapter tracking closes gaps that aggregate inventory views miss.\n\nDocumented **Data sources**: `index=cisco_ucs` `sourcetype=\"cisco:ucs:inventory\"` with fields `server_dn`, `adapter_slot`, `running_fw`, `adapter_model`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:inventory\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by server_dn, adapter_slot, adapter_model** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(approved_fw) AND fw!=approved_fw` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CNA / vNIC Adapter Firmware Drift (Cisco UCS)**): table server_dn, adapter_slot, adapter_model, fw, approved_fw\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant adapters), Bar chart (count by model), Pie chart (compliant vs drift).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS VIC 1440/1480/1540, UCS X-Series mLOM adapters",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We make sure the little network cards on each server stay on the same software version your team has approved, slot by slot, so a half-updated card after a repair or upgrade does not quietly break fast storage and cluster networking later on.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.1.33",
              "n": "Intersight Device Connector / Tunnel Health",
              "c": "high",
              "f": "beginner",
              "v": "If the secure connector or cloud tunnel is down, alarms and inventory stop updating while hardware may still be failing. Early detection preserves single-pane visibility before the operations team loses trust in the portal.",
              "t": "`Cisco Intersight Add-on`",
              "d": "`index=cisco_intersight` `sourcetype=\"cisco:intersight:appliance\"` with fields `appliance_name`, `tunnel_state`, `last_sync_epoch`",
              "q": "index=cisco_intersight sourcetype=\"cisco:intersight:appliance\" earliest=-24h\n| eval lag_sec=now()-last_sync_epoch\n| where tunnel_state!=\"connected\" OR lag_sec>900\n| stats latest(tunnel_state) as tunnel, max(lag_sec) as max_lag_sec by appliance_name\n| sort -max_lag_sec",
              "m": "(1) Ingest appliance health from Intersight add-on or automation hitting the appliance API; (2) alert when tunnel is not connected or sync lag exceeds 15 minutes; (3) correlate with firewall change tickets.",
              "z": "Table (appliance status), Single value (appliances out of sync), Timechart (sync lag).",
              "kfp": "UCSM-driven firmware or infrastructure upgrades that deliberately place the device connector into MaintenanceMode or a vendor-documented service window: route through intersight_maint_resolved, allow_maintenance_disconnect_pattern, or a change ticket, not a page. Planned reboots of an Intersight Assist virtual machine where you pre-added the hostname to a maintenance window. Controlled internet or proxy maintenance with a 2–5 minute DNS or path flap that self-heals: often stays in the info bucket; widen warn_stale if your carrier routinely flaps. Staged disconnect tests in lab tenants that you forgot to mark nonprod in the inventory. Stale DeviceRegistration with fresh ModTime: the join in this UC is Name-to-hostname; a mismatch can look like a false split—validate keys before a hardware RMA. Transient NoToken from a one-off NTP step after a daylight-saving policy push on a CIMC, corrected within minutes, should not be a critical if crit_stale is respected. This list is different from UC-19.1.19 (alarm-code churn and demo orgs), UC-19.1.20 (post-upgrade firmware lag in RunningFirmware), and UC-19.1.23 (automation that looks like a human); here the dominant “looks bad but is not” is planned maintenance, staged lab work, and chronic but below-threshold flaps—tune the CSV, not the infrastructure truth.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Cisco Intersight Add-on`.\n• Ensure the following data sources are available: `index=cisco_intersight` `sourcetype=\"cisco:intersight:appliance\"` with fields `appliance_name`, `tunnel_state`, `last_sync_epoch`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest appliance health from Intersight add-on or automation hitting the appliance API; (2) alert when tunnel is not connected or sync lag exceeds 15 minutes; (3) correlate with firewall change tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_intersight sourcetype=\"cisco:intersight:appliance\" earliest=-24h\n| eval lag_sec=now()-last_sync_epoch\n| where tunnel_state!=\"connected\" OR lag_sec>900\n| stats latest(tunnel_state) as tunnel, max(lag_sec) as max_lag_sec by appliance_name\n| sort -max_lag_sec\n```\n\nUnderstanding this SPL\n\n**Intersight Device Connector / Tunnel Health** — If the secure connector or cloud tunnel is down, alarms and inventory stop updating while hardware may still be failing. Early detection preserves single-pane visibility before the operations team loses trust in the portal.\n\nDocumented **Data sources**: `index=cisco_intersight` `sourcetype=\"cisco:intersight:appliance\"` with fields `appliance_name`, `tunnel_state`, `last_sync_epoch`. **App/TA** (typical add-on context): `Cisco Intersight Add-on`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_intersight; **sourcetype**: cisco:intersight:appliance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_intersight, sourcetype=\"cisco:intersight:appliance\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **lag_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where tunnel_state!=\"connected\" OR lag_sec>900` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by appliance_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (appliance status), Single value (appliances out of sync), Timechart (sync lag).",
              "script": "",
              "premium": "",
              "hw": "Cisco Intersight Assist / connected UCS domains",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch the long-lasting link from your data-centre equipment to the central service that centralises its health. When that link drops, fault news can stop arriving even though the machines are still there, so we help teams see the gap before they trust a quiet status screen. We also compare another timestamp so a confusing read is less likely to be mistaken for a full emergency.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_intersight"
              ],
              "ta_link": {
                "name": "Cisco Intersight Add-on for Splunk",
                "id": 7828,
                "url": "https://splunkbase.splunk.com/app/7828"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
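For the Intersight tunnel-health use case above, the implementation asks for an alert when the tunnel is not connected or sync lag exceeds 15 minutes, correlated with change activity. A minimal persistence sketch, assuming the documented `appliance_name`, `tunnel_state`, and `last_sync_epoch` fields and a roughly 15-minute poll cadence; the two-consecutive-bin rule and the 900-second threshold are illustrative choices, not part of the documented search:

```spl
index=cisco_intersight sourcetype="cisco:intersight:appliance" earliest=-2h
| eval lag_sec=_time-last_sync_epoch
| eval bad=if(tunnel_state!="connected" OR lag_sec>900, 1, 0)
| bin _time span=15m
| stats max(bad) as bad_in_bin, latest(tunnel_state) as tunnel, max(lag_sec) as max_lag_sec by _time, appliance_name
| streamstats window=2 sum(bad_in_bin) as bad_bins by appliance_name
| where bad_bins>=2
| stats latest(tunnel) as tunnel, max(max_lag_sec) as max_lag_sec by appliance_name
```

Requiring two consecutive bins keeps a single missed poll or a brief firewall change from paging on its own, which matches the change-correlation guidance in the implementation notes.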
            {
              "i": "19.1.34",
              "n": "Chassis Thermal Runaway Risk (Blade Enclosures)",
              "c": "critical",
              "f": "intermediate",
              "v": "Rising inlet temperatures or falling fan speeds across a chassis precede thermal shutdown of multiple blades. Acting on enclosure-level trends protects dense compute before throttling spreads to tenant VMs.",
              "t": "`Splunk_TA_cisco-ucs`",
              "d": "`index=cisco_ucs` `sourcetype=\"cisco:ucs:environmental\"` with fields `chassis_id`, `stat_name`, `value`, `unit`",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:environmental\" earliest=-6h\n| where like(lower(stat_name),\"%inlet%temp%\") OR like(lower(stat_name),\"%exhaust%temp%\")\n| stats max(value) as peak_temp_c by chassis_id, stat_name\n| where peak_temp_c>70\n| sort -peak_temp_c",
              "m": "(1) Normalize temperature stat names per UCSM release; (2) set per-datacenter thresholds aligned with ASHRAE class; (3) alert and open facilities ticket when chassis peak exceeds policy for two consecutive polls.",
              "z": "Heatmap (chassis × time), Gauge (peak inlet), Table (hot chassis).",
              "kfp": "• (a) **HVAC** zone or **CRAH** / **CRAC** handover, economizer cut-over, or staged **setpoint** moves can lift inlet a few degrees for several bins without a hardware fault; expect co-present facilities tickets and stable **fan** **RPM** rather than a **fan_failed** state.\n• (b) Open hot- or cold-aisle containment doors during a data-center tour, shipping dock weather intrusion, or short-term fire-panel drill airflow changes — transient inlet slope without sustained **fan** derate.\n• (c) **Blade** or **IOM** **firmware** **activation** with a brief **CPU** socket **temperature** lift on **`processorEnvStats`** while **inlet** mean remains stable; the primary **runaway** gate on **`equipmentChassisStats`** should not page alone without **fan** trend agreement.\n• (d) **PID**-style **fan** **controller** **overshoot** after a large workload step-down: **RPM** can dip then recover while **inlet** is still moving; require two consecutive 5m bins in production paging, or a short dedupe on **`chassis_dn`**, before **RMA** **hardware** **fans**.\n• (e) **MO** or service-processor **clock skew** between **UCSM** poll, forwarder, and `_time` bucketing: apparent **°C/hour** spikes when bins misalign; compare **`_indextime` − `_time`** and **NTP** on the management path if **Sev1** **trust** is questioned.\n• (f) A single-poll **sensor** **glitch** or null-filled sample that collapses a bin mean; the **`dedup` after time sort** is intentionally **latest**-row focused — keep a companion **`timechart` `count` by `chassis_dn`** to detect sparse feeds.\n• (g) **Calibration** **drift** on long-uptime **thermistor** paths vs **DIMM**-adjacent **motherboard** sensors: slow crawl that is real but not an emergency; track **`computeMbTempStats`** against **`equipmentChassisStats`** in a secondary panel, not the same static crit as **inlet**.\n• (h) Planned **chassis** or **IOM** **maintenance** with known **suppression** — pair alerts with a change id so a deliberate fan profile change does not equal **runaway**.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`.\n• Ensure the following data sources are available: `index=cisco_ucs` `sourcetype=\"cisco:ucs:environmental\"` with fields `chassis_id`, `stat_name`, `value`, `unit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize temperature stat names per UCSM release; (2) set per-datacenter thresholds aligned with ASHRAE class; (3) alert and open facilities ticket when chassis peak exceeds policy for two consecutive polls.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:environmental\" earliest=-6h\n| where like(lower(stat_name),\"%inlet%temp%\") OR like(lower(stat_name),\"%exhaust%temp%\")\n| stats max(value) as peak_temp_c by chassis_id, stat_name\n| where peak_temp_c>70\n| sort -peak_temp_c\n```\n\nUnderstanding this SPL\n\n**Chassis Thermal Runaway Risk (Blade Enclosures)** — Rising inlet temperatures or falling fan speeds across a chassis precede thermal shutdown of multiple blades. Acting on enclosure-level trends protects dense compute before throttling spreads to tenant VMs.\n\nDocumented **Data sources**: `index=cisco_ucs` `sourcetype=\"cisco:ucs:environmental\"` with fields `chassis_id`, `stat_name`, `value`, `unit`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:environmental. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:environmental\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where like(lower(stat_name),\"%inlet%temp%\") OR like(lower(stat_name),\"%exhaust%temp%\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by chassis_id, stat_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where peak_temp_c>70` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Chassis Thermal Runaway Risk (Blade Enclosures)** — Rising inlet temperatures or falling fan speeds across a chassis precede thermal shutdown of multiple blades. Acting on enclosure-level trends protects dense compute before throttling spreads to tenant VMs.\n\nDocumented **Data sources**: `index=cisco_ucs` `sourcetype=\"cisco:ucs:environmental\"` with fields `chassis_id`, `stat_name`, `value`, `unit`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (chassis × time), Gauge (peak inlet), Table (hot chassis).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS 5108, UCS X9508 chassis",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch the air in front of a shared rack of blade servers. If the temperature is climbing fast and the fans are not keeping up, many machines in that one box can slow down or shut off together. We give a rough idea of how little time is left to move work or get help, not only a bell when a single high line is crossed.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
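Step (3) of the chassis thermal implementation above calls for alerting only when a chassis exceeds policy for two consecutive polls. A minimal sketch of that gate, assuming a roughly 5-minute polling interval; the 70 °C threshold comes from the documented search, while the bin size and the two-bin persistence rule are illustrative:

```spl
index=cisco_ucs sourcetype="cisco:ucs:environmental" earliest=-6h
| where like(lower(stat_name),"%inlet%temp%") OR like(lower(stat_name),"%exhaust%temp%")
| bin _time span=5m
| stats max(value) as peak_temp_c by _time, chassis_id
| eval hot=if(peak_temp_c>70, 1, 0)
| streamstats window=2 sum(hot) as hot_bins by chassis_id
| where hot_bins>=2
| stats max(peak_temp_c) as peak_temp_c, max(_time) as last_hot_bin by chassis_id
```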
            {
              "i": "19.1.35",
              "n": "IOM / FEX to FI Link Flap Events",
              "c": "high",
              "f": "intermediate",
              "v": "Repeated link flaps between IOM/FEX and FI cause micro-outages for blade traffic and complicate troubleshooting with intermittent CRCs. Counting flaps per uplink highlights bad optics or mis-seated cables before full path loss.",
              "t": "`Splunk_TA_cisco-ucs`, UCS Manager syslog",
              "d": "`index=cisco_ucs` `sourcetype=\"cisco:ucs:syslog\"` with fields `chassis_id`, `port`, `message_id`, `severity`",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:syslog\" earliest=-24h\n| search (link DOWN OR \"link down\" OR \"LOS\" OR \"SFP\" OR flap) AND (iom OR fex OR \"eth uplink\")\n| stats count as flap_events by chassis_id, port, severity\n| where flap_events>=5\n| sort -flap_events",
              "m": "(1) Forward UCSM FI and FEX syslog with UTC timestamps; (2) tune keywords for your transceiver vendor messages; (3) alert on sustained flap count and create cable/optics work order.",
              "z": "Bar chart (flaps by port), Table (top noisy links), Timeline (flap bursts).",
              "kfp": "A coordinated UCSM / IOM / FEX *firmware* activation that reboots every server-facing link in a governed window (expect many `LINK_DOWN` without operational meaning once the change id is in place — suppress with a time- and `dn`-prefix lookup). *Planned* cable re-seat or SFP RMA in a TAC-observed session that intentionally drops a single port. Lab `shutdown` / `no shutdown` on a test uplink to prove optics. *Port-channel* member add/remove on the **upstream** Nexus that briefly withdraws a LACP child toward the FI while the UCS MO still shows transition language — that is a netops change story, not always a bad SFP, so reconcile the Cisco change ticket before paging. *Micro-flap storms* after data-centre *generator tests* that dip HVAC and temperature at the patch panel. *Duplicate syslog* lines at a site relay can double `flap_pairs` — add `dedup` on `_raw` for that forwarder class only if your pipeline team proves duplication.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`, UCS Manager syslog.\n• Ensure the following data sources are available: `index=cisco_ucs` `sourcetype=\"cisco:ucs:syslog\"` with fields `chassis_id`, `port`, `message_id`, `severity`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Forward UCSM FI and FEX syslog with UTC timestamps; (2) tune keywords for your transceiver vendor messages; (3) alert on sustained flap count and create cable/optics work order.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:syslog\" earliest=-24h\n| search (link DOWN OR \"link down\" OR \"LOS\" OR \"SFP\" OR flap) AND (iom OR fex OR \"eth uplink\")\n| stats count as flap_events by chassis_id, port, severity\n| where flap_events>=5\n| sort -flap_events\n```\n\nUnderstanding this SPL\n\n**IOM / FEX to FI Link Flap Events** — Repeated link flaps between IOM/FEX and FI cause micro-outages for blade traffic and complicate troubleshooting with intermittent CRCs. Counting flaps per uplink highlights bad optics or mis-seated cables before full path loss.\n\nDocumented **Data sources**: `index=cisco_ucs` `sourcetype=\"cisco:ucs:syslog\"` with fields `chassis_id`, `port`, `message_id`, `severity`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`, UCS Manager syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:syslog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:syslog\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by chassis_id, port, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where flap_events>=5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (flaps by port), Table (top noisy links), Timeline (flap bursts).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS IOM 2200/2300, UCS 6454/6536 FI",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We count how often the thick cables between the blade side of a rack and the big fabric lights wobble off and on, so someone can fix a bad plug or worn light before storage or live moves keep failing. That is not the same as only asking if the fat link to the main switch is up, or only how full a pipe is — we look for a wire that keeps blinking.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
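For the "Timeline (flap bursts)" visualization suggested in the link-flap use case above, a small sketch that trends flap messages per hour for the ten noisiest links; the concatenated `link` field and the `limit=10` cap are illustrative, and the keyword filter is the one documented in the main search:

```spl
index=cisco_ucs sourcetype="cisco:ucs:syslog" earliest=-24h
| search (link DOWN OR "link down" OR "LOS" OR "SFP" OR flap) AND (iom OR fex OR "eth uplink")
| eval link=chassis_id.":".port
| timechart span=1h limit=10 useother=f count as flap_events by link
```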
            {
              "i": "19.1.36",
              "n": "Service Profile vNIC Redundancy and Failover Audit",
              "c": "high",
              "f": "intermediate",
              "v": "A service profile with a single active vNIC or mismatched failover policy removes network redundancy for VMs. Auditing template-derived settings reduces surprise outages during single-path failures.",
              "t": "`Splunk_TA_cisco-ucs`",
              "d": "`index=cisco_ucs` `sourcetype=\"cisco:ucs:config\"` with fields `sp_name`, `vnic_name`, `redundancy_pair`, `peer_vnic`, `switch_id`",
              "q": "index=cisco_ucs sourcetype=\"cisco:ucs:config\" object_type=\"vnic\" earliest=-24h\n| stats dc(vnic_name) as vnic_count, values(redundancy_pair) as pairs by sp_name\n| eval has_pair=if(vnic_count>=2 OR mvcount(pairs)>0, \"Yes\", \"No\")\n| where has_pair==\"No\"\n| table sp_name, vnic_count, pairs, has_pair",
              "m": "(1) Poll vNIC objects for each production service profile; (2) flag profiles with fewer than two data vNICs or empty redundancy pair; (3) integrate with CMDB to exclude intentionally single-NIC appliance profiles via lookup.",
              "z": "Table (profiles lacking redundancy), Pie chart (redundant vs single-path), Bar chart (by org).",
              "kfp": "• A **KVM/serial-only** or **OOB utility** run where someone temporarily unpins a `vnicEther` in **UCSM** and your **inventory** window catches one poll with **0** on a side: confirm against **Servers → Service Profiles → Network** before you change hardware against a 24h snapshot.\n• **Rapid template clone + bind** in a test org can show `unknown_topology` until the first **vnicEther** poll after bind—wait one extra poll, not an **RMA**.\n• **vNIC** renamed mid-change without an **lsServer** row refresh: **Splunk** may show **0/0** counts while **GUI** is already green; reconcile **dn** and poll cycle.\n• **read-only** API scope that omits a subtree: profiles look `unknown` or lose **vnic** rows—an **RBAC** defect, not a design win.\n• **M6/M7** **chassis** models where a **IOM** discovery delay leaves **vnic** rows one cycle behind an **IOM** swap—treat a lone **unknown** as “re-scan in 30m” in runbooks, not a permanent exception.\n• **Intentional single-fabric** test blades (fault-inject labs) must sit in a named **org** you exclude or tag in the optional **lookup**; otherwise they are true positives, not “noise to ignore without a record.”\n• **Static MAC** pool or **firmware** work that flips an **operable** read to **disabled** for a few polls during reprogram: correlates to **UCSM** FSM, not a topology redesign.\n• **UCSM A/B** side maintenance that places both temporary **vNICs** on one side under **approved change**: suppress with a **change-calendar** on **sp_dn** for the exact window, not by turning off the whole panel.\n• **vnicEtherIf** class temporarily absent in the add-on’s poll list: **failover** reads false across the board—validate **class** coverage with `| stats count by classId` before you open a sev-1 on “failover off everywhere.”",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_cisco-ucs`.\n• Ensure the following data sources are available: `index=cisco_ucs` `sourcetype=\"cisco:ucs:config\"` with fields `sp_name`, `vnic_name`, `redundancy_pair`, `peer_vnic`, `switch_id`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Poll vNIC objects for each production service profile; (2) flag profiles with fewer than two data vNICs or empty redundancy pair; (3) integrate with CMDB to exclude intentionally single-NIC appliance profiles via lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cisco_ucs sourcetype=\"cisco:ucs:config\" object_type=\"vnic\" earliest=-24h\n| stats dc(vnic_name) as vnic_count, values(redundancy_pair) as pairs by sp_name\n| eval has_pair=if(vnic_count>=2 OR mvcount(pairs)>0, \"Yes\", \"No\")\n| where has_pair==\"No\"\n| table sp_name, vnic_count, pairs, has_pair\n```\n\nUnderstanding this SPL\n\n**Service Profile vNIC Redundancy and Failover Audit** — A service profile with a single active vNIC or mismatched failover policy removes network redundancy for VMs. Auditing template-derived settings reduces surprise outages during single-path failures.\n\nDocumented **Data sources**: `index=cisco_ucs` `sourcetype=\"cisco:ucs:config\"` with fields `sp_name`, `vnic_name`, `redundancy_pair`, `peer_vnic`, `switch_id`. **App/TA** (typical add-on context): `Splunk_TA_cisco-ucs`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cisco_ucs; **sourcetype**: cisco:ucs:config. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cisco_ucs, sourcetype=\"cisco:ucs:config\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sp_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **has_pair** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where has_pair==\"No\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Service Profile vNIC Redundancy and Failover Audit**): table sp_name, vnic_count, pairs, has_pair\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (profiles lacking redundancy), Pie chart (redundant vs single-path), Bar chart (by org).",
              "script": "",
              "premium": "",
              "hw": "Cisco UCS B-Series, X-Series compute",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We check that every server network recipe uses both of the data centre switch sides, and that automatic handover is on when the design calls for it. When a recipe has paths on only one side, or has both sides but handover turned off everywhere, we name it so the platform team can fix the layout before a single switch fault makes a good server disappear from the network.",
              "mtype": [
                "Configuration",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_ucs"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
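Step (3) of the vNIC redundancy implementation above excludes intentionally single-NIC appliance profiles via a CMDB-fed lookup. A sketch of that exclusion, assuming a hypothetical lookup named `single_nic_exceptions` with fields `sp_name` and `approved_exception`; build and name the lookup to match your own CMDB export:

```spl
index=cisco_ucs sourcetype="cisco:ucs:config" object_type="vnic" earliest=-24h
| stats dc(vnic_name) as vnic_count, values(redundancy_pair) as pairs by sp_name
| eval has_pair=if(vnic_count>=2 OR mvcount(pairs)>0, "Yes", "No")
| where has_pair=="No"
| lookup single_nic_exceptions sp_name OUTPUT approved_exception
| where isnull(approved_exception)
| table sp_name, vnic_count, pairs
```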
          ],
          "qa": 36.5,
          "qd": {
            "gold": 0,
            "silver": 2,
            "bronze": 31,
            "none": 0
          }
        },
        {
          "i": "19.2",
          "n": "Hyper-Converged Infrastructure (HCI)",
          "u": [
            {
              "i": "19.2.1",
              "n": "Cluster Health Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "HCI cluster health directly determines workload availability. Monitoring overall cluster state, node availability, and service health enables rapid response to degradation before it impacts VMs and applications running on the cluster.",
              "t": "`TA-nutanix` or vendor-specific TA, HCI management API",
              "d": "HCI management API (Prism, vSAN Health), cluster status events",
              "q": "index=hci sourcetype=\"hci:cluster_health\"\n| stats latest(cluster_status) as status, latest(num_nodes) as total_nodes, latest(healthy_nodes) as healthy_nodes, latest(storage_usage_pct) as storage_pct by cluster_name\n| eval node_health=round((healthy_nodes/total_nodes)*100, 0)\n| eval overall=case(status==\"HEALTHY\" AND node_health==100, \"Healthy\", status==\"WARNING\" OR node_health<100, \"Degraded\", 1==1, \"Critical\")\n| table cluster_name, overall, total_nodes, healthy_nodes, node_health, storage_pct",
              "m": "Poll HCI management API every 60 seconds for cluster health. Track node online/offline state, storage health, and service availability. Alert on any cluster degradation (non-healthy state). Monitor rebuild operations and their impact on performance. Integrate with ITSI for service-level visibility.",
              "z": "Status grid (cluster health map), Single value (cluster status), Gauge (storage capacity), Table (cluster details).",
              "kfp": "Planned `RebuildInProgress` text while you expand with a canary HX node is a normal, hours-long resiliency event and should be paired with a change record before you re-open the RF math. A rolling HX service upgrade that walks nodes one at a time can show mixed `version` on `cisco:hyperflex:node` for the thirty-minute window each node is offline, so `ver_dc>1` may trip `mixed_ver` in the low tier even when ESXi is healthy. HyperFlex Edge designs with only two or three nodes can briefly show `RFNotMet` in maintenance because math is different at small N; treat that as a capacity-class finding unless `node_risk_count` stays up. A datastore that reads `unmounted` while an ESXi host is entering maintenance is expected. Controller VM or HX service VM IP work during a planned vSphere PSO can drop polls so `stale=1` fires, compare `_indextime` to `_time` and the poller log. A cluster that says healthy in HyperFlex while `cisco:ucs:faults` still shows PSU or DIMM red is the split between the UCS axis (UC-19.1.4) and this UC, never delete the UCS alert because `HX` is green. Synthetic HEC load tests that replay the same event twice in one second can inflate `dc` counts in a lab; tag those indexes. HyperFlex HX 5.0 can rename string tokens inside `resiliencyDetails`; re-run `fieldsummary` after upgrades before you widen `match` patterns.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nutanix` or vendor-specific TA, HCI management API.\n• Ensure the following data sources are available: HCI management API (Prism, vSAN Health), cluster status events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll HCI management API every 60 seconds for cluster health. Track node online/offline state, storage health, and service availability. Alert on any cluster degradation (non-healthy state). Monitor rebuild operations and their impact on performance. Integrate with ITSI for service-level visibility.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"hci:cluster_health\"\n| stats latest(cluster_status) as status, latest(num_nodes) as total_nodes, latest(healthy_nodes) as healthy_nodes, latest(storage_usage_pct) as storage_pct by cluster_name\n| eval node_health=round((healthy_nodes/total_nodes)*100, 0)\n| eval overall=case(status==\"HEALTHY\" AND node_health==100, \"Healthy\", status==\"WARNING\" OR node_health<100, \"Degraded\", 1==1, \"Critical\")\n| table cluster_name, overall, total_nodes, healthy_nodes, node_health, storage_pct\n```\n\nUnderstanding this SPL\n\n**Cluster Health Monitoring** — HCI cluster health directly determines workload availability. Monitoring overall cluster state, node availability, and service health enables rapid response to degradation before it impacts VMs and applications running on the cluster.\n\nDocumented **Data sources**: HCI management API (Prism, vSAN Health), cluster status events. **App/TA** (typical add-on context): `TA-nutanix` or vendor-specific TA, HCI management API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: hci:cluster_health. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"hci:cluster_health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **node_health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overall** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Cluster Health Monitoring**): table cluster_name, overall, total_nodes, healthy_nodes, node_health, storage_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Status grid (cluster health map), Single value (cluster status), Gauge (storage capacity), Table (cluster details).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "Imagine your company runs a Cisco HyperFlex box that bundles servers, storage, and brain software so virtual machines for shops and clinics live together. This page reads that box's how-sick-are-we story and only pages the owner when resiliency, data copies, or a missing chunk show up — not for a blip already covered by a planned fix window.",
              "wv": "crawl",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
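The known-false-positive notes for the cluster health use case above warn about stale polls being mistaken for a healthy, quiet screen. A minimal staleness-aware sketch, assuming the documented `hci:cluster_health` feed; the 15-minute staleness threshold is an illustrative value to tune against your actual poll interval:

```spl
index=hci sourcetype="hci:cluster_health"
| stats latest(_time) as last_seen, latest(cluster_status) as status by cluster_name
| eval minutes_since_poll=round((now()-last_seen)/60, 0)
| eval overall=case(minutes_since_poll>15, "Stale feed", status!="HEALTHY", "Degraded", 1==1, "Healthy")
| where overall!="Healthy"
| table cluster_name, overall, status, minutes_since_poll
```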
            {
              "i": "19.2.2",
              "n": "Storage Pool Capacity",
              "c": "high",
              "f": "intermediate",
              "v": "HCI storage pools are shared across all workloads. Running out of storage capacity causes VM provisioning failures, snapshot failures, and ultimately VM crashes. Proactive monitoring and forecasting prevents capacity emergencies.",
              "t": "`TA-nutanix` or vendor-specific TA",
              "d": "HCI storage metrics, capacity API",
              "q": "index=hci sourcetype=\"hci:storage_metrics\"\n| stats latest(total_capacity_tb) as total_tb, latest(used_capacity_tb) as used_tb by cluster_name, storage_pool\n| eval free_tb=round(total_tb-used_tb, 2)\n| eval used_pct=round((used_tb/total_tb)*100, 1)\n| sort -used_pct\n| table cluster_name, storage_pool, total_tb, used_tb, free_tb, used_pct",
              "m": "Collect storage capacity metrics every 5 minutes. Track daily growth rates for forecasting. Alert at 75% warning and 85% critical thresholds. Use Splunk predict command for capacity forecasting. Plan procurement cycles based on projected exhaustion dates.",
              "z": "Gauge (capacity utilization), Timechart (capacity trending with forecast), Table (pool details), Single value (days to capacity).",
              "kfp": "HyperFlex thin `provisionedCapacityInBytes` can outpace `used+free` while orchestration pre-reserves; burst factor can wobble until the project lands. Nutanix dedupe and EC change logical to physical, so a flat `pool_used_pct` can still be risky if physical rebuild margin is low. vSAN `slack_space_b` and policy reserve slices differ from some Prism or Azure charts; a green vendor UI with a high `pool_used_pct` here usually means a definition mismatch, not a Splunk failure. Azure Stack HCI Storage Bus Cache and Storage Replica maintenance can free or consume CSV space with little VM IO, swinging ETA for an hour. A single-day clone job can make `eta_to_full_days` look short for a week; compare the CSV weekly baseline. These patterns are not the vSAN health-test and stretch slack story from UC-19.2.27; they are pool-name and multi-vendor accounting effects.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nutanix` or vendor-specific TA.\n• Ensure the following data sources are available: HCI storage metrics, capacity API.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect storage capacity metrics every 5 minutes. Track daily growth rates for forecasting. Alert at 75% warning and 85% critical thresholds. Use Splunk predict command for capacity forecasting. Plan procurement cycles based on projected exhaustion dates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"hci:storage_metrics\"\n| stats latest(total_capacity_tb) as total_tb, latest(used_capacity_tb) as used_tb by cluster_name, storage_pool\n| eval free_tb=round(total_tb-used_tb, 2)\n| eval used_pct=round((used_tb/total_tb)*100, 1)\n| sort -used_pct\n| table cluster_name, storage_pool, total_tb, used_tb, free_tb, used_pct\n```\n\nUnderstanding this SPL\n\n**Storage Pool Capacity** — HCI storage pools are shared across all workloads. Running out of storage capacity causes VM provisioning failures, snapshot failures, and ultimately VM crashes. Proactive monitoring and forecasting prevents capacity emergencies.\n\nDocumented **Data sources**: HCI storage metrics, capacity API. **App/TA** (typical add-on context): `TA-nutanix` or vendor-specific TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: hci:storage_metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"hci:storage_metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster_name, storage_pool** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **free_tb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Storage Pool Capacity**): table cluster_name, storage_pool, total_tb, used_tb, free_tb, used_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (capacity utilization), Timechart (capacity trending with forecast), Table (pool details), Single value (days to capacity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We keep one check on the big storage pools the virtual machines share, and we show how full each is, about how long until it runs out if use keeps up, and who owns it. We also call out when space starts disappearing much faster than normal so the team can catch a runaway copy job before a weekend page.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
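The storage pool capacity implementation above mentions the `predict` command for capacity forecasting. A sketch of one way to project usage for a single pool; `prod-cluster-01` and `default-pool` are placeholder names, and the 90-day lookback and 30-day horizon are illustrative:

```spl
index=hci sourcetype="hci:storage_metrics" cluster_name="prod-cluster-01" storage_pool="default-pool" earliest=-90d
| timechart span=1d latest(used_capacity_tb) as used_tb
| predict used_tb as forecast_tb future_timespan=30
```

Compare `forecast_tb` against the pool's total capacity to estimate an exhaustion date, and run the forecast per pool rather than across the whole index so `predict` sees a single time series.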
            {
              "i": "19.2.3",
              "n": "Storage I/O Latency",
              "c": "critical",
              "f": "intermediate",
              "v": "Storage latency directly impacts application performance on HCI. Elevated latency affects all VMs on the cluster. Early detection of latency spikes enables workload rebalancing or troubleshooting before user impact escalates.",
              "t": "`TA-nutanix` or vendor-specific TA",
              "d": "HCI performance metrics, per-VM I/O statistics",
              "q": "index=hci sourcetype=\"hci:io_metrics\"\n| stats avg(read_latency_ms) as avg_read_lat, avg(write_latency_ms) as avg_write_lat, max(read_latency_ms) as peak_read_lat, max(write_latency_ms) as peak_write_lat, sum(iops) as total_iops by cluster_name, node\n| eval status=case(peak_read_lat>20 OR peak_write_lat>20, \"Critical\", peak_read_lat>10 OR peak_write_lat>10, \"Warning\", 1==1, \"Healthy\")\n| sort -peak_write_lat\n| table cluster_name, node, avg_read_lat, peak_read_lat, avg_write_lat, peak_write_lat, total_iops, status",
              "m": "Collect HCI I/O metrics every 30 seconds. Track read/write latency at cluster, node, and VM level. Set thresholds: >10ms warning, >20ms critical (adjust per workload SLA). Correlate latency spikes with rebuild operations, snapshot activity, or capacity constraints. Alert on sustained latency above threshold.",
              "z": "Timechart (latency trending), Gauge (current latency), Table (high-latency nodes), Heatmap (latency by node over time).",
              "kfp": "A short burst of read latency while VMware Storage DRS or vSAN rebalancing runs can move p95 for only a few minutes; correlation with a change ticket and with UC-19.2.9 resync is expected. A Nutanix Curator or MapReduce heavy window can add controller time without a guest problem; use Prism timestamps before paging. A single VM doing a Veeam or SQL backup can spike max latency while the cluster stays inside capacity SLO, which is a workload signal, not a false alarm for this UC, but you may route those hosts to a backup queue. Annual firmware or hypervisor rolling upgrades, Cisco Intersight–driven updates, and Nutanix rolling reboots can create predictable spikes; align `change_freeze_until` in the lookup. NVMe and all-flash clusters naturally sit at sub‑millisecond reads during idle periods; a jump from 0.2 to 0.6 ms is normal variance, not a page, unless the tier and application SLO say otherwise.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nutanix` or vendor-specific TA.\n• Ensure the following data sources are available: HCI performance metrics, per-VM I/O statistics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect HCI I/O metrics every 30 seconds. Track read/write latency at cluster, node, and VM level. Set thresholds: >10ms warning, >20ms critical (adjust per workload SLA). Correlate latency spikes with rebuild operations, snapshot activity, or capacity constraints. Alert on sustained latency above threshold.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"hci:io_metrics\"\n| stats avg(read_latency_ms) as avg_read_lat, avg(write_latency_ms) as avg_write_lat, max(read_latency_ms) as peak_read_lat, max(write_latency_ms) as peak_write_lat, sum(iops) as total_iops by cluster_name, node\n| eval status=case(peak_read_lat>20 OR peak_write_lat>20, \"Critical\", peak_read_lat>10 OR peak_write_lat>10, \"Warning\", 1==1, \"Healthy\")\n| sort -peak_write_lat\n| table cluster_name, node, avg_read_lat, peak_read_lat, avg_write_lat, peak_write_lat, total_iops, status\n```\n\nUnderstanding this SPL\n\n**Storage I/O Latency** — Storage latency directly impacts application performance on HCI. Elevated latency affects all VMs on the cluster. Early detection of latency spikes enables workload rebalancing or troubleshooting before user impact escalates.\n\nDocumented **Data sources**: HCI performance metrics, per-VM I/O statistics. **App/TA** (typical add-on context): `TA-nutanix` or vendor-specific TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: hci:io_metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"hci:io_metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster_name, node** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Storage I/O Latency**): table cluster_name, node, avg_read_lat, peak_read_lat, avg_write_lat, peak_write_lat, total_iops, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (latency trending), Gauge (current latency), Table (high-latency nodes), Heatmap (latency by node over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We track how long storage reads and writes take across your hyperconverged clusters, and we separate a big IOPS number from a bad latency number so we catch slowdowns that capacity charts miss. We show the worst and the typical delay side by side, route the right team when it crosses the line, and point them to the cluster, node, or disk group to fix.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
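The latency implementation above asks for alerts on sustained latency rather than single spikes. A sketch that requires three consecutive 5-minute bins above the 20 ms critical threshold; bin size and bin count are illustrative and should follow your workload SLA:

```spl
index=hci sourcetype="hci:io_metrics" earliest=-60m
| bin _time span=5m
| stats avg(write_latency_ms) as avg_write_lat, avg(read_latency_ms) as avg_read_lat by _time, cluster_name, node
| eval breach=if(avg_write_lat>20 OR avg_read_lat>20, 1, 0)
| streamstats window=3 sum(breach) as breach_bins by cluster_name, node
| where breach_bins>=3
```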
            {
              "i": "19.2.4",
              "n": "Node Performance Balance",
              "c": "medium",
              "f": "intermediate",
              "v": "HCI relies on balanced workload distribution across nodes. Imbalanced nodes lead to hotspots where some nodes are overloaded while others are underutilized, reducing overall cluster efficiency and increasing failure risk.",
              "t": "`TA-nutanix` or vendor-specific TA",
              "d": "HCI node-level performance metrics",
              "q": "index=hci sourcetype=\"hci:node_metrics\"\n| stats avg(cpu_pct) as avg_cpu, avg(mem_pct) as avg_mem, avg(iops) as avg_iops by cluster_name, node\n| eventstats avg(avg_cpu) as cluster_avg_cpu, stdev(avg_cpu) as cluster_stdev_cpu by cluster_name\n| eval cpu_deviation=round(abs(avg_cpu-cluster_avg_cpu)/cluster_stdev_cpu, 2)\n| eval balance=case(cpu_deviation>2, \"Imbalanced\", cpu_deviation>1, \"Slightly Imbalanced\", 1==1, \"Balanced\")\n| table cluster_name, node, avg_cpu, avg_mem, avg_iops, cpu_deviation, balance\n| sort -cpu_deviation",
              "m": "Collect per-node CPU, memory, and I/O metrics every 60 seconds. Calculate standard deviation across nodes to detect imbalance. Alert when any node deviates >2 standard deviations from cluster average. Recommend DRS or workload migration to rebalance. Track balance improvement after actions.",
              "z": "Bar chart (node utilization comparison), Heatmap (node balance over time), Table (imbalanced nodes), Single value (cluster balance score).",
              "kfp": "Deliberate change freezes or maintenance moratoriums pause acceptance while skew signals may still propagate from UC-19.2.8, producing benign backlog spikes. License-driven anti-affinity clusters often keep recommendations pending without indicating broken automation. Rolling firmware waves may transiently spike blocked migrations when NIC saturation or host admission gates tighten. Nutanix or vCenter collector outages mimic low acceptance until backfill arrives. Partially automated DRS modes can look like drift until the next cluster_settings snapshot clarifies intent.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nutanix` or vendor-specific TA.\n• Ensure the following data sources are available: HCI node-level performance metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect per-node CPU, memory, and I/O metrics every 60 seconds. Calculate standard deviation across nodes to detect imbalance. Alert when any node deviates >2 standard deviations from cluster average. Recommend DRS or workload migration to rebalance. Track balance improvement after actions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"hci:node_metrics\"\n| stats avg(cpu_pct) as avg_cpu, avg(mem_pct) as avg_mem, avg(iops) as avg_iops by cluster_name, node\n| eventstats avg(avg_cpu) as cluster_avg_cpu, stdev(avg_cpu) as cluster_stdev_cpu by cluster_name\n| eval cpu_deviation=round(abs(avg_cpu-cluster_avg_cpu)/cluster_stdev_cpu, 2)\n| eval balance=case(cpu_deviation>2, \"Imbalanced\", cpu_deviation>1, \"Slightly Imbalanced\", 1==1, \"Balanced\")\n| table cluster_name, node, avg_cpu, avg_mem, avg_iops, cpu_deviation, balance\n| sort -cpu_deviation\n```\n\nUnderstanding this SPL\n\n**Node Performance Balance** — HCI relies on balanced workload distribution across nodes. Imbalanced nodes lead to hotspots where some nodes are overloaded while others are underutilized, reducing overall cluster efficiency and increasing failure risk.\n\nDocumented **Data sources**: HCI node-level performance metrics. **App/TA** (typical add-on context): `TA-nutanix` or vendor-specific TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: hci:node_metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"hci:node_metrics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster_name, node** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by cluster_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cpu_deviation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **balance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Node Performance Balance**): table cluster_name, node, avg_cpu, avg_mem, avg_iops, cpu_deviation, balance\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (node utilization comparison), Heatmap (node balance over time), Table (imbalanced nodes), Single value (cluster balance score).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch whether the cluster's automatic placement suggestions actually move through approval and action, not just whether some machines look busier than others. We flag old suggestions still waiting, broken pairing rules, moves blocked by limits, and times the system quietly stopped self-driving, so operators fix the control path before patching season turns messy.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
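The node balance implementation above also asks to track balance improvement after rebalancing actions. A sketch that charts the per-cluster spread (standard deviation of node CPU) over time, so a falling line confirms the action worked; the 1-hour bin and 7-day window are illustrative choices:

```spl
index=hci sourcetype="hci:node_metrics" earliest=-7d
| bin _time span=1h
| stats avg(cpu_pct) as node_cpu by _time, cluster_name, node
| stats stdev(node_cpu) as cpu_spread by _time, cluster_name
| xyseries _time cluster_name cpu_spread
```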
            {
              "i": "19.2.5",
              "n": "Disk Failure Tracking",
              "c": "critical",
              "f": "intermediate",
              "v": "Disk failures in HCI trigger data rebuild operations that consume cluster resources and temporarily reduce resilience. Tracking failures enables rapid replacement, monitoring rebuild progress, and assessing the cluster's ability to tolerate additional failures.",
              "t": "`TA-nutanix` or vendor-specific TA",
              "d": "HCI disk events, SMART data, rebuild status",
              "q": "index=hci sourcetype=\"hci:disk_events\"\n| search event_type IN (\"disk_failure\", \"disk_offline\", \"disk_rebuild_start\", \"disk_rebuild_complete\", \"smart_warning\")\n| stats count as events, latest(event_type) as latest_event, latest(_time) as last_event_time by cluster_name, node, disk_id, disk_serial\n| eval status=case(\n    latest_event==\"disk_failure\" OR latest_event==\"disk_offline\", \"Failed\",\n    latest_event==\"disk_rebuild_start\", \"Rebuilding\",\n    latest_event==\"smart_warning\", \"Warning\",\n    1==1, \"OK\")\n| search status!=\"OK\"\n| table cluster_name, node, disk_id, disk_serial, status, last_event_time",
              "m": "Ingest HCI disk events and SMART health data. Alert immediately on disk failures. Track rebuild start/complete times to measure rebuild duration. Monitor cluster resiliency during rebuilds (can it tolerate another failure?). Maintain spare disk inventory based on failure rate trends.",
              "z": "Status grid (disk health by node), Timeline (failure and rebuild events), Single value (disks in rebuild), Table (failed/warning disks).",
              "kfp": "A planned drive pull during cable replacement in one node can look like a failure burst when both the vSphere `diskAdded` and the Nutanix `DiskMarkedForRemoval` land in the same window; pair every medium-or-high block with a change record before paging. A rolling AOS or ESXi upgrade that restrains a disk for firmware can trip predictive flags that clear on the next poll, so a single poll of `kPredictiveFailure` is not a warranty claim by itself. Stretched vSAN and multi-site witness designs can re-home objects so a host’s disk list shrinks in REST while capacity is still protected—do not set `expected_disk_count` to an old CMDB value without a site-aware note. A HyperFlex resync that raises alarms while Cisco UCS still shows a green blade can point to a controller-software false edge; confirm against `cisco:hyperflex:node` and UCS NVMe inventory before a cold RMA. Labs that run both the inventory multisearch and a legacy 3215 vCenter stream may double-count the same `diskFailed` if you leave both unfiltered. SMART counters on a VM without raw disk access are often empty, which is a null wear column, not a clean bill of health.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nutanix` or vendor-specific TA.\n• Ensure the following data sources are available: HCI disk events, SMART data, rebuild status.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest HCI disk events and SMART health data. Alert immediately on disk failures. Track rebuild start/complete times to measure rebuild duration. Monitor cluster resiliency during rebuilds (can it tolerate another failure?). Maintain spare disk inventory based on failure rate trends.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"hci:disk_events\"\n| search event_type IN (\"disk_failure\", \"disk_offline\", \"disk_rebuild_start\", \"disk_rebuild_complete\", \"smart_warning\")\n| stats count as events, latest(event_type) as latest_event, latest(_time) as last_event_time by cluster_name, node, disk_id, disk_serial\n| eval status=case(\n    latest_event==\"disk_failure\" OR latest_event==\"disk_offline\", \"Failed\",\n    latest_event==\"disk_rebuild_start\", \"Rebuilding\",\n    latest_event==\"smart_warning\", \"Warning\",\n    1==1, \"OK\")\n| search status!=\"OK\"\n| table cluster_name, node, disk_id, disk_serial, status, last_event_time\n```\n\nUnderstanding this SPL\n\n**Disk Failure Tracking** — Disk failures in HCI trigger data rebuild operations that consume cluster resources and temporarily reduce resilience. Tracking failures enables rapid replacement, monitoring rebuild progress, and assessing the cluster's ability to tolerate additional failures.\n\nDocumented **Data sources**: HCI disk events, SMART data, rebuild status. **App/TA** (typical add-on context): `TA-nutanix` or vendor-specific TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: hci:disk_events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"hci:disk_events\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by cluster_name, node, disk_id, disk_serial** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Disk Failure Tracking**): table cluster_name, node, disk_id, disk_serial, status, last_event_time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Status grid (disk health by node), Timeline (failure and rebuild events), Single value (disks in rebuild), Table (failed/warning disks).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the actual spinning or flash drives inside the big shared storage that runs your private cloud, because one small hardware fault is usually safe, but two in the wrong place or a slow slide toward wear-out can make applications freeze for everyone at once. We add up failures and worn drives so the right people get a clear, early call while things can still be fixed calmly.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "19.2.6",
              "n": "Replication Factor Compliance",
              "c": "critical",
              "f": "intermediate",
              "v": "HCI data resilience depends on maintaining the configured replication factor (RF2/RF3). Non-compliant replication means data loss risk if additional failures occur. Monitoring RF compliance is essential for data protection assurance.",
              "t": "`TA-nutanix` or vendor-specific TA",
              "d": "HCI replication status, data protection metrics",
              "q": "index=hci sourcetype=\"hci:replication\"\n| stats latest(configured_rf) as target_rf, latest(actual_rf) as current_rf, latest(rebuild_pct) as rebuild_progress by cluster_name, container\n| eval compliant=if(current_rf>=target_rf, \"Yes\", \"No\")\n| eval risk=case(current_rf<target_rf-1, \"Data Loss Risk\", current_rf<target_rf, \"Reduced Resilience\", 1==1, \"Protected\")\n| table cluster_name, container, target_rf, current_rf, compliant, risk, rebuild_progress\n| sort risk",
              "m": "Monitor replication factor status continuously. Alert immediately when actual RF drops below configured RF. Track rebuild progress to estimate time to full compliance. Monitor cluster capacity to ensure sufficient space for re-replication. Alert critically if RF drops to 1 (single copy—data loss imminent on next failure).",
              "z": "Single value (RF compliance status), Gauge (rebuild progress), Table (non-compliant containers), Status grid (cluster RF map).",
              "kfp": "A brand-new vSAN object can show `nonCompliant` until components fully instantiate; two consecutive polls with growth in `compliant` count usually clears the alert without a runbook. During stretch or witness work, `sftt` and PFTT together can match policy while a transient API still lists some objects as non-compliant — compare Object Browser in vSphere. Nutanix `replication_factor` and `oplog_replication_factor` can differ by design; this UC uses `replication_factor` for data copies unless you extend the SPL. HyperFlex at RF=2 is valid for your standard but still flags if the lookup says RF=3: that is an intentional “accept risk” case, not a poller error. Parity and MAP S2D volumes often look “lower” in mirror-equivalent `actual_rf` math even when parity fault tolerance is correct — tune `min_required_rf` and `allowed_resiliency_methods` for your parity policy rows. Test clusters with RF=1 in `compliance_class=test` can appear as low but should not page if freeze and class logic are aligned. vSAN `notApplicable` objects are explicitly excluded from `non_compliant_object_count` to avoid swamping the join with iSCSI paths that never participate in the policy. After TA upgrades, a renamed JSON field can look like a breach until you extend the `coalesce` list; verify with one `fieldsummary` run before you silence anything.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nutanix` or vendor-specific TA.\n• Ensure the following data sources are available: HCI replication status, data protection metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor replication factor status continuously. Alert immediately when actual RF drops below configured RF. Track rebuild progress to estimate time to full compliance. Monitor cluster capacity to ensure sufficient space for re-replication. Alert critically if RF drops to 1 (single copy—data loss imminent on next failure).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"hci:replication\"\n| stats latest(configured_rf) as target_rf, latest(actual_rf) as current_rf, latest(rebuild_pct) as rebuild_progress by cluster_name, container\n| eval compliant=if(current_rf>=target_rf, \"Yes\", \"No\")\n| eval risk=case(current_rf<target_rf-1, \"Data Loss Risk\", current_rf<target_rf, \"Reduced Resilience\", 1==1, \"Protected\")\n| table cluster_name, container, target_rf, current_rf, compliant, risk, rebuild_progress\n| sort risk\n```\n\nUnderstanding this SPL\n\n**Replication Factor Compliance** — HCI data resilience depends on maintaining the configured replication factor (RF2/RF3). Non-compliant replication means data loss risk if additional failures occur. Monitoring RF compliance is essential for data protection assurance.\n\nDocumented **Data sources**: HCI replication status, data protection metrics. **App/TA** (typical add-on context): `TA-nutanix` or vendor-specific TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: hci:replication. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"hci:replication\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster_name, container** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Replication Factor Compliance**): table cluster_name, container, target_rf, current_rf, compliant, risk, rebuild_progress\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Single value (RF compliance status), Gauge (rebuild progress), Table (non-compliant containers), Status grid (cluster RF map).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We line up the written data-protection rules with what the storage system actually has set for each policy, container, or volume. We raise the alarm if something is on only one full copy, if the protection method is not what the team approved, or if objects never reached their policy. That way you are not finding out the hard way after a second failure during maintenance.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.7",
              "n": "CVM (Controller VM) Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Nutanix Controller VMs manage all storage I/O on each node. CVM failures cause I/O to redirect to other nodes, impacting performance. Monitoring CVM health ensures the HCI control plane remains operational across all nodes.",
              "t": "`TA-nutanix`, Nutanix CVM logs",
              "d": "Nutanix CVM resource metrics, CVM service status logs",
              "q": "index=hci sourcetype=\"nutanix:cvm\"\n| stats latest(cpu_pct) as cpu, latest(mem_pct) as mem, latest(stargate_status) as stargate, latest(cassandra_status) as cassandra, latest(zookeeper_status) as zk by node, cvm_ip\n| eval all_services_up=if(stargate==\"UP\" AND cassandra==\"UP\" AND zk==\"UP\", \"Yes\", \"No\")\n| eval health=case(all_services_up==\"No\", \"Critical\", cpu>80 OR mem>85, \"Warning\", 1==1, \"Healthy\")\n| table node, cvm_ip, cpu, mem, stargate, cassandra, zk, health\n| sort health",
              "m": "Monitor CVM service status (Stargate, Cassandra, Zookeeper, Prism) every 30 seconds. Track CVM CPU and memory utilization. Alert immediately on any CVM service failure. Monitor CVM-to-CVM communication for cluster stability. Track CVM restart events and correlate with I/O disruptions.",
              "z": "Status grid (CVM health by node), Table (CVM service status), Gauge (CVM resource utilization), Timechart (CVM metrics trending).",
              "kfp": "During a rolling AOS upgrade, Genesis briefly stops child processes on each node in order; you can see Stargate or a companion drop for thirty to ninety seconds in `nutanix:cluster:services_status` with a matching change ticket, which is not the same as surprise day-two downtime. Cassandra often sits in a `kForwarding` ring state through a ring rebalance; that state is part of a safe metadata move, and you must not treat it like a `DOWN` line item once your SPL treats kForwarding as healthy. Genesis may read `kStarting` for a minute after a CVM reboot or IP stack bounce; hold the page unless it stays out of `UP` over several polls. Curator is intentionally quiet during some disk scans; a throttled or sleeping Curator is different from `DOWN` in Genesis meaning—compare timestamps to maintenance. When operations move a CVM to a new backplane address during switch work, the same physical node can appear as a new `cvm_ip` until the next full host poll, which looks like a fresh gap but tracks to one maintenance window, not a second outage. Finally, a synthetic harness that replays two copies of the same JSON line can make `latest()` look noisy; guard lab indexes with a tag.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nutanix`, Nutanix CVM logs.\n• Ensure the following data sources are available: Nutanix CVM resource metrics, CVM service status logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor CVM service status (Stargate, Cassandra, Zookeeper, Prism) every 30 seconds. Track CVM CPU and memory utilization. Alert immediately on any CVM service failure. Monitor CVM-to-CVM communication for cluster stability. Track CVM restart events and correlate with I/O disruptions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"nutanix:cvm\"\n| stats latest(cpu_pct) as cpu, latest(mem_pct) as mem, latest(stargate_status) as stargate, latest(cassandra_status) as cassandra, latest(zookeeper_status) as zk by node, cvm_ip\n| eval all_services_up=if(stargate==\"UP\" AND cassandra==\"UP\" AND zk==\"UP\", \"Yes\", \"No\")\n| eval health=case(all_services_up==\"No\", \"Critical\", cpu>80 OR mem>85, \"Warning\", 1==1, \"Healthy\")\n| table node, cvm_ip, cpu, mem, stargate, cassandra, zk, health\n| sort health\n```\n\nUnderstanding this SPL\n\n**CVM (Controller VM) Health** — Nutanix Controller VMs manage all storage I/O on each node. CVM failures cause I/O to redirect to other nodes, impacting performance. Monitoring CVM health ensures the HCI control plane remains operational across all nodes.\n\nDocumented **Data sources**: Nutanix CVM resource metrics, CVM service status logs. **App/TA** (typical add-on context): `TA-nutanix`, Nutanix CVM logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: nutanix:cvm. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"nutanix:cvm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by node, cvm_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **all_services_up** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **CVM (Controller VM) Health**): table node, cvm_ip, cpu, mem, stargate, cassandra, zk, health\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (CVM health by node), Table (CVM service status), Gauge (CVM resource utilization), Timechart (CVM metrics trending).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "Each physical box has a tiny helper machine that does the heavy storage work for the virtual machines on that box. We watch those helpers one by one so we know when one is not doing its job, because the rest of the cluster has to pick up the slack and things get slow. We tell the right team which single helper needs help before most people would even feel the drag.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.8",
              "n": "HCI Cluster Balance and Skew",
              "c": "high",
              "f": "intermediate",
              "v": "Imbalanced storage or compute across nodes causes hot spots and reduced resilience. Monitoring skew supports rebalance and capacity planning.",
              "t": "Nutanix/vSphere HCI APIs, Prism metrics",
              "d": "Storage used per node, IOPS per node, VM count per node",
              "q": "index=hci sourcetype=\"nutanix:capacity\"\n| stats latest(storage_used_gb) as used_gb, latest(iops) as iops, latest(vm_count) as vms by node\n| eventstats avg(used_gb) as avg_gb, avg(iops) as avg_iops by cluster\n| eval storage_skew_pct=abs(used_gb-avg_gb)/avg_gb*100\n| where storage_skew_pct > 25\n| table node, used_gb, avg_gb, storage_skew_pct, iops, vms",
              "m": "Ingest per-node capacity and load. Compute skew vs cluster average. Alert when storage or IOPS skew exceeds threshold. Trigger rebalance or migration. Report on balance trend.",
              "z": "Table (nodes with skew), Bar chart (used by node), Gauge (cluster balance score).",
              "kfp": "Live Distributed Resource Scheduler, Acropolis Dynamic Scheduling, or Hyper-V live-migration waves can concentrate load briefly while correctness catches up, so pair alerts with migration event feeds before calling a human. Planned expansions leave a new node nearly empty until rebalance finishes and will look skewed until moves complete. Mixed-generation estates can show unequal absolute bytes with fair utilization percents, so emphasize storage_util_pct and rated IOPS normalization rather than raw counters alone. Maintenance-mode drains intentionally idle a host during evacuation. One-off resync or rebuild after disk replacement spikes IOPS on a subset of nodes until components heal. License-driven affinity that pins a database cluster to two heavy hosts can keep vm_count skew elevated even when the cluster is healthy, which belongs in lookup annotations rather than silent suppression.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Nutanix/vSphere HCI APIs, Prism metrics.\n• Ensure the following data sources are available: Storage used per node, IOPS per node, VM count per node.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest per-node capacity and load. Compute skew vs cluster average. Alert when storage or IOPS skew exceeds threshold. Trigger rebalance or migration. Report on balance trend.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"nutanix:capacity\"\n| stats latest(storage_used_gb) as used_gb, latest(iops) as iops, latest(vm_count) as vms by node\n| eventstats avg(used_gb) as avg_gb, avg(iops) as avg_iops by cluster\n| eval storage_skew_pct=abs(used_gb-avg_gb)/avg_gb*100\n| where storage_skew_pct > 25\n| table node, used_gb, avg_gb, storage_skew_pct, iops, vms\n```\n\nUnderstanding this SPL\n\n**HCI Cluster Balance and Skew** — Imbalanced storage or compute across nodes causes hot spots and reduced resilience. Monitoring skew supports rebalance and capacity planning.\n\nDocumented **Data sources**: Storage used per node, IOPS per node, VM count per node. **App/TA** (typical add-on context): Nutanix/vSphere HCI APIs, Prism metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: nutanix:capacity. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"nutanix:capacity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by node** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **storage_skew_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where storage_skew_pct > 25` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HCI Cluster Balance and Skew**): table node, used_gb, avg_gb, storage_skew_pct, iops, vms\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (nodes with skew), Bar chart (used by node), Gauge (cluster balance score).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We check whether each machine in a tight cluster is pulling a fair share of the work, storage traffic, and guest load. When one machine is overloaded while neighbors coast, teams can fix placement early so everyday moves and repairs do not turn into an urgent scramble.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix",
                "vmware"
              ],
              "em": [
                "vmware_vsphere"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.9",
              "n": "HCI Data Resiliency and Rebuild Progress",
              "c": "critical",
              "f": "intermediate",
              "v": "After node or disk failure, rebuild must complete before another failure. Monitoring rebuild progress and ETA ensures data remains protected.",
              "t": "Nutanix/vSAN resiliency APIs",
              "d": "Rebuild progress %, ETA, affected containers/vSAN components",
              "q": "index=hci sourcetype=\"nutanix:resiliency\"\n| where status=\"rebuilding\" OR status=\"rebalancing\"\n| stats latest(progress_pct) as progress, latest(eta_sec) as eta, latest(affected_gb) as gb by cluster, task_type\n| table cluster, task_type, progress, eta, gb",
              "m": "Poll resiliency and rebuild status. Alert when rebuild is slow or ETA exceeds threshold. Report on rebuild history and time-to-full resilience. Correlate with disk and node events.",
              "z": "Gauge (rebuild progress), Table (active rebuilds), Line chart (rebuild rate).",
              "kfp": "A storage maintenance that intentionally evacuates a disk with policy-driven data moves can look like a long resync in Splunk and in the vendor UI, so pair the alert with a change ticket and the UC-19.2.5 disk work order before a midnight page. Stretched vSAN and Nutanix metro designs can resync at line rate on one inter-site link while a second path is in soft-fail, which shows up as a healthy `throughput_gbps` with a scary `eta_h` in one site only; that is a topology note, not a false Splunk line. Firmware upgrades that pause resync in order on each node can drop throughput to the noise floor for several polls, which should demote the derived ETA until the window is wider. Labs that replay the same HEC line twice in two indexes will `dedup` to one `cluster_name` in production but may double in dev if the search omits the index list you meant to exclude. A Nutanix `kCriticalAttentionNeeded` row without bytes can still be real policy pressure from Curator, so the absence of `bytes_to_replicate` is not a clean bill. Finally, a HyperFlex alarm that still references rebuild while the cluster REST is green often means a lagging summary field, so cross-check alarms before a vendor TAC is opened in anger.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Nutanix/vSAN resiliency APIs.\n• Ensure the following data sources are available: Rebuild progress %, ETA, affected containers/vSAN components.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll resiliency and rebuild status. Alert when rebuild is slow or ETA exceeds threshold. Report on rebuild history and time-to-full resilience. Correlate with disk and node events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"nutanix:resiliency\"\n| where status=\"rebuilding\" OR status=\"rebalancing\"\n| stats latest(progress_pct) as progress, latest(eta_sec) as eta, latest(affected_gb) as gb by cluster, task_type\n| table cluster, task_type, progress, eta, gb\n```\n\nUnderstanding this SPL\n\n**HCI Data Resiliency and Rebuild Progress** — After node or disk failure, rebuild must complete before another failure. Monitoring rebuild progress and ETA ensures data remains protected.\n\nDocumented **Data sources**: Rebuild progress %, ETA, affected containers/vSAN components. **App/TA** (typical add-on context): Nutanix/vSAN resiliency APIs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: nutanix:resiliency. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"nutanix:resiliency\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status=\"rebuilding\" OR status=\"rebalancing\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster, task_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **HCI Data Resiliency and Rebuild Progress**): table cluster, task_type, progress, eta, gb\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (rebuild progress), Table (active rebuilds), Line chart (rebuild rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "When part of a big shared storage system breaks, the boxes quietly copy data until everything is safe again, and that copying takes time. We measure how long that has been going on, how much is left, and how fast the work is moving, so the right people know when a second hiccup could really hurt, not just the day the first light blinked.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.10",
              "n": "HCI Hypervisor and AOS Version Compliance",
              "c": "high",
              "f": "beginner",
              "v": "Mixed hypervisor or AOS versions can cause compatibility and support issues. Tracking version compliance supports upgrade planning and support eligibility.",
              "t": "Nutanix Prism, vSphere/vCenter API",
              "d": "Node AOS version, hypervisor version, compliance baseline",
              "q": "index=hci sourcetype=\"nutanix:cluster\"\n| stats latest(aos_version) as aos, latest(hypervisor_version) as hv by node\n| lookup hci_compliance_baseline.csv env OUTPUT target_aos, target_hv\n| where aos!=target_aos OR hv!=target_hv\n| table node, aos, target_aos, hv, target_hv",
              "m": "Ingest cluster and node version data. Maintain baseline by environment. Alert on version drift. Report on compliance percentage and nodes due for upgrade.",
              "z": "Table (non-compliant nodes), Pie chart (version distribution), Single value (compliance %).",
              "kfp": "A planned AOS or AHV one-click during an approved change window can set last_lcm_upgrade_status to InProgress for many polls even though the operation is healthy — correlate start_time, status, and the maintenance ticket. Short-lived mixed AHV while a rolling AHV apply is in flight is expected; the SPL only treats mixed AHV as high when the span exceeds twenty-four hours. inventory_in_progress true for a long but bounded inventory run on a very large cluster is normal, while inventory_age_seconds is still the honesty check for the catalog. A Lenovo ThinkAgile or other OEM that applies BIOS or HBA through XClarity out-of-band can show a firmware state that is not the same as a pure LCM row — mark those clusters in a side note field so triage does not treat XClarity as a Splunk bug. A single NCC failed_check_count from a test the team already accepted as risk may trip the low precheck path — add an exception list in the poller, not a blind mute in Splunk. A catalog CSV refreshed one day after Nutanix retires a build can make is_supported_until_epoch look tight even though a patch is already in CAB — that is a governance lag, not a false alarm, until you update lineage_rank. Finally, a duplicate HEC line replayed in lab can make upgrade_history look busier than reality — tag lab indexes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Nutanix Prism, vSphere/vCenter API.\n• Ensure the following data sources are available: Node AOS version, hypervisor version, compliance baseline.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest cluster and node version data. Maintain baseline by environment. Alert on version drift. Report on compliance percentage and nodes due for upgrade.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"nutanix:cluster\"\n| stats latest(aos_version) as aos, latest(hypervisor_version) as hv by node\n| lookup hci_compliance_baseline.csv env OUTPUT target_aos, target_hv\n| where aos!=target_aos OR hv!=target_hv\n| table node, aos, target_aos, hv, target_hv\n```\n\nUnderstanding this SPL\n\n**HCI Hypervisor and AOS Version Compliance** — Mixed hypervisor or AOS versions can cause compatibility and support issues. Tracking version compliance supports upgrade planning and support eligibility.\n\nDocumented **Data sources**: Node AOS version, hypervisor version, compliance baseline. **App/TA** (typical add-on context): Nutanix Prism, vSphere/vCenter API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: nutanix:cluster. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"nutanix:cluster\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by node** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where aos!=target_aos OR hv!=target_hv` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HCI Hypervisor and AOS Version Compliance**): table node, aos, target_aos, hv, target_hv\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant nodes), Pie chart (version distribution), Single value (compliance %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We line up the cluster's software and hypervisor versions with the dates and targets your own team wrote down, and we tell you when you are out of support or an upgrade is stuck, before a planned change window fails at the first gate.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix",
                "vmware"
              ],
              "em": [
                "vmware_vcenter",
                "vmware_vsphere"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.11",
              "n": "HCI Network and Storage Controller Saturation",
              "c": "high",
              "f": "intermediate",
              "v": "Saturated storage or network controllers cause latency and timeouts. Monitoring utilization supports capacity and design decisions.",
              "t": "HCI platform metrics, Prism/vCenter",
              "d": "Controller queue depth, network throughput per node, latency percentiles",
              "q": "index=hci sourcetype=\"nutanix:io\"\n| stats latest(queue_depth) as queue, latest(latency_p99_ms) as p99, latest(throughput_mbps) as mbps by node, controller\n| where queue > 50 OR p99 > 100 OR mbps > 9000\n| table node, controller, queue, p99, mbps",
              "m": "Ingest I/O and network metrics per node and controller. Alert when queue depth or latency exceeds threshold. Report on saturation events and trend. Plan node or network upgrade when sustained.",
              "z": "Table (saturated controllers), Line chart (latency and queue), Gauge (throughput utilization).",
              "kfp": "Coordinated rolling hypervisor firmware waves can rotate elevated controller percentiles host by host without a defect. Storage maintenance that pauses one path can briefly stack outstanding I/O on the remaining path even when capacity remains healthy. Third-party replication appliances pinned beside HCI hosts may load vnics that appear saturated while guest latency stays acceptable until policy changes. Acceptance clusters running IO micro-benchmark suites on a schedule will sit intentionally near ceilings and should point at alternate governance thresholds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HCI platform metrics, Prism/vCenter.\n• Ensure the following data sources are available: Controller queue depth, network throughput per node, latency percentiles.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest I/O and network metrics per node and controller. Alert when queue depth or latency exceeds threshold. Report on saturation events and trend. Plan node or network upgrade when sustained.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"nutanix:io\"\n| stats latest(queue_depth) as queue, latest(latency_p99_ms) as p99, latest(throughput_mbps) as mbps by node, controller\n| where queue > 50 OR p99 > 100 OR mbps > 9000\n| table node, controller, queue, p99, mbps\n```\n\nUnderstanding this SPL\n\n**HCI Network and Storage Controller Saturation** — Saturated storage or network controllers cause latency and timeouts. Monitoring utilization supports capacity and design decisions.\n\nDocumented **Data sources**: Controller queue depth, network throughput per node, latency percentiles. **App/TA** (typical add-on context): HCI platform metrics, Prism/vCenter. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: nutanix:io. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"nutanix:io\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by node, controller** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where queue > 50 OR p99 > 100 OR mbps > 9000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HCI Network and Storage Controller Saturation**): table node, controller, queue, p99, mbps\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (saturated controllers), Line chart (latency and queue), Gauge (throughput utilization).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch how crowded the storage controllers and data-network pipes are on each machine in your shared cluster, using percentile math so brief spikes do not wake people overnight. When one machine stays near its ceiling while neighbors stay calm, we give your team a clear row to fix that path before guests feel the slowdown.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix",
                "vmware"
              ],
              "em": [
                "vmware_vcenter"
              ],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.12",
              "n": "HCI Prism Central and Management Plane Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Prism Central (PC) failure affects visibility and automation. Monitoring PC and management plane ensures operations and alerting remain functional.",
              "t": "Prism Central API, management node metrics",
              "d": "PC service status, API latency, task queue depth",
              "q": "index=hci sourcetype=\"nutanix:prism_central\"\n| stats latest(status) as status, latest(api_latency_ms) as latency, latest(task_queue) as queue by pc_instance\n| where status!=\"healthy\" OR latency > 5000 OR queue > 100\n| table pc_instance, status, latency, queue",
              "m": "Poll Prism Central health and API metrics. Alert on unhealthy status, high API latency, or backed-up task queue. Report on PC availability and performance trend. Maintain HA for PC where available.",
              "z": "Status grid (PC health), Table (PC metrics), Line chart (API latency).",
              "kfp": "A 60–90 s gap in `cluster` UUID visibility during a planned PC HA leader election or rolling PCVM reboot can make `last_event_ev` look worse than the UI without being a sev-1. Calm/Nucalm is intentionally stopped when you have no Calm license — add a `calm_licensed` column to the lookup or a child search to suppress the Calm down bit. Karbon is down on sites that never deploy K8s; treat that as expected unless a row in the lookup says Karbon is required. PE count will dip during a controlled un-register or re-parent to another PC — that is a PE-migration class, not always data loss. PC-of-PC (PCoPC) federations can make simple pe_count math look odd; carry parent/child PC as separate `pc_instance` rows. Magneto or Insights can restart for 2–3 m through PC LCM; pair with a change ticket. Insights compaction can raise Prism API latency for about 10 m without a true process DOWN; use `access_log`, not a blanket mute on this UC. genesis `kStarting` is normal for 30–60 s after a PC patch — the `bad_genesis_st` low path already waits 5 m before flagging. A single off-poll of `nutanix:prism_central:alerts` after TA pagination work is not a PE loss; compare `cluster` MO in Prism before paging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Prism Central API, management node metrics.\n• Ensure the following data sources are available: PC service status, API latency, task queue depth.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Prism Central health and API metrics. Alert on unhealthy status, high API latency, or backed-up task queue. Report on PC availability and performance trend. Maintain HA for PC where available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"nutanix:prism_central\"\n| stats latest(status) as status, latest(api_latency_ms) as latency, latest(task_queue) as queue by pc_instance\n| where status!=\"healthy\" OR latency > 5000 OR queue > 100\n| table pc_instance, status, latency, queue\n```\n\nUnderstanding this SPL\n\n**HCI Prism Central and Management Plane Health** — Prism Central (PC) failure affects visibility and automation. Monitoring PC and management plane ensures operations and alerting remain functional.\n\nDocumented **Data sources**: PC service status, API latency, task queue depth. **App/TA** (typical add-on context): Prism Central API, management node metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: nutanix:prism_central. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"nutanix:prism_central\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by pc_instance** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where status!=\"healthy\" OR latency > 5000 OR queue > 100` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HCI Prism Central and Management Plane Health**): table pc_instance, status, latency, queue\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (PC health), Table (PC metrics), Line chart (API latency).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "Imagine a big company with many small farms and one central office that holds the only full map for all of them. We watch that office. If its screen goes dark, the animals still eat, but you cannot see every farm in one place or run one big plan across the company from a single desk — like losing the only map room for a whole region.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365",
                "nutanix"
              ],
              "em": [
                "nutanix_prism_central"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.13",
              "n": "Dell VxRail Cluster Health",
              "c": "high",
              "f": "intermediate",
              "v": "VxRail Manager health checks and LCM (Lifecycle Manager) update status directly impact cluster availability and supportability. Monitoring cluster and host health enables rapid response to hardware or software issues before they cause VM outages or failed upgrades.",
              "t": "Custom (VxRail Manager REST API)",
              "d": "VxRail Manager `/rest/vxm/v1/cluster`, `/rest/vxm/v1/system/cluster-hosts`",
              "q": "index=vxrail sourcetype=\"vxrail:cluster\"\n| stats latest(health) as cluster_health, latest(vcenter_name) as vcenter, latest(version) as vxrail_version by cluster_id\n| eval overall=case(cluster_health!=\"Healthy\" AND cluster_health!=\"\", \"Degraded\", 1==1, \"Healthy\")\n| table cluster_id, overall, cluster_health, vxrail_version, vcenter",
              "m": "Create a REST API modular input or scripted input that polls VxRail Manager at `https://<vxrail_manager>/rest/vxm/v1/cluster` and `https://<vxrail_manager>/rest/vxm/v1/system/cluster-hosts` every 2–5 minutes. Authenticate with VxRail Manager credentials. Parse JSON responses and index with sourcetypes `vxrail:cluster` and `vxrail:cluster_hosts`. Extract fields: `health`, `version`, `vcenter_name`, `host_state`, `cluster_id`. Optionally poll LCM status endpoints for update state. Alert on cluster health != \"Healthy\" or any host not in CONNECTED/Healthy state. Correlate with vCenter and ESXi events for root cause analysis.",
              "z": "Status grid (cluster and host health from cluster_hosts), Table (cluster details with LCM status), Single value (unhealthy cluster count), Gauge (host connectivity percentage from cluster_hosts data).",
              "kfp": "During a VxRail Manager VM guest reboot (patch, LCM, or cluster scale-out), health on the vxrail:vxm:cluster line can read Unknown for one or two poll cycles even while ESXi and vSAN are still running production workloads—treat that as a short-lived manager gap unless host rows and vCenter show loss. A host in planned ESXi maintenance can present non-Healthy host_health or a power_state that is not poweredOn on purpose; align with a change record before paging. LCM in Downloading or Compatibility-Check is expected churn during a bundle download and is not an outage. Heavy LCM that triggers HTTP 429 on the poller can create an ingest gap that looks like missing health; compare _indextime to _time and the poller log, not only Splunk nulls. A vxrail_version field may be empty while the manager appliance itself upgrades, which is a metadata gap, not a cluster-wide fault until other planes disagree.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (VxRail Manager REST API).\n• Ensure the following data sources are available: VxRail Manager `/rest/vxm/v1/cluster`, `/rest/vxm/v1/system/cluster-hosts`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate a REST API modular input or scripted input that polls VxRail Manager at `https://<vxrail_manager>/rest/vxm/v1/cluster` and `https://<vxrail_manager>/rest/vxm/v1/system/cluster-hosts` every 2–5 minutes. Authenticate with VxRail Manager credentials. Parse JSON responses and index with sourcetypes `vxrail:cluster` and `vxrail:cluster_hosts`. Extract fields: `health`, `version`, `vcenter_name`, `host_state`, `cluster_id`. Optionally poll LCM status endpoints for update state. Alert on cluster…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vxrail sourcetype=\"vxrail:cluster\"\n| stats latest(health) as cluster_health, latest(vcenter_name) as vcenter, latest(version) as vxrail_version by cluster_id\n| eval overall=case(cluster_health!=\"Healthy\" AND cluster_health!=\"\", \"Degraded\", 1==1, \"Healthy\")\n| table cluster_id, overall, cluster_health, vxrail_version, vcenter\n```\n\nUnderstanding this SPL\n\n**Dell VxRail Cluster Health** — VxRail Manager health checks and LCM (Lifecycle Manager) update status directly impact cluster availability and supportability. Monitoring cluster and host health enables rapid response to hardware or software issues before they cause VM outages or failed upgrades.\n\nDocumented **Data sources**: VxRail Manager `/rest/vxm/v1/cluster`, `/rest/vxm/v1/system/cluster-hosts`. **App/TA** (typical add-on context): Custom (VxRail Manager REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vxrail; **sourcetype**: vxrail:cluster. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vxrail, sourcetype=\"vxrail:cluster\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **overall** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Dell VxRail Cluster Health**): table cluster_id, overall, cluster_health, vxrail_version, vcenter\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Status grid (cluster and host health from cluster_hosts), Table (cluster details with LCM status), Single value (unhealthy cluster count), Gauge (host connectivity percentage from cluster_hosts data).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch Dell VxRail, an appliance that bundles servers, storage, and control into one box, using the internal manager VxRail Manager. We see cluster-wide health, each host, and the big version-update run. We help you catch manager-side glitches before everyday services on top feel slow or stop.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vxrail"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.14",
              "n": "Nutanix CVM Resource and Service Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Deep-dive CVM CPU/memory pressure and Stargate latency vs UC-19.2.7 — for capacity and noisy-neighbor triage.",
              "t": "`TA-nutanix`, Prism metrics",
              "d": "`nutanix:cvm:metrics`",
              "q": "index=hci sourcetype=\"nutanix:cvm:metrics\" earliest=-4h\n| where cpu_pct>85 OR mem_pct>90 OR stargate_latency_ms>5\n| stats max(cpu_pct) as max_cpu, max(mem_pct) as max_mem, max(stargate_latency_ms) as max_lat by node\n| sort -max_lat",
              "m": "Poll Prism per-CVM metrics API. Alert on sustained high latency with high CPU (investigate disk/network). Correlate with storage rebuilds.",
              "z": "Table (hot CVMs), Line chart (latency vs CPU), Heatmap (node × time).",
              "kfp": "A rolling AOS upgrade intentionally cycles Stargate and other Genesis-supervised services; pair spikes in `restart_count_30m` to `cluster.upgrade_in_progress` in your `nutanix:cvm:status` HEC and the change record before auto-paging. `ergon` and `minerva_cvm` in `services_down_list` on some AOS lines are often transient worker processes; document them before they drive tickets. A single poll where Prism still shows `UP` but Genesis shows `DOWN` is usually a thirty-second UI cache; compare a second poll. `nutanix:cluster:alerts` can lag a healthy `genesis status` on the wire by one modular-input interval. Labs that use FQDN in `host` for nix but only IP in the lookup return empty nix subsearch rows until the lookup or `host` is normalized. Stretched or witness designs can make one CVM’s `iowait` look bad during rebuilds while service rows are still `running`; treat as capacity context when `services_down_list` is empty. Spurious oplog `max` on a new poller is often a field rename—re-run `| fieldsummary` after each AOS minor.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nutanix`, Prism metrics.\n• Ensure the following data sources are available: `nutanix:cvm:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll Prism per-CVM metrics API. Alert on sustained high latency with high CPU (investigate disk/network). Correlate with storage rebuilds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"nutanix:cvm:metrics\" earliest=-4h\n| where cpu_pct>85 OR mem_pct>90 OR stargate_latency_ms>5\n| stats max(cpu_pct) as max_cpu, max(mem_pct) as max_mem, max(stargate_latency_ms) as max_lat by node\n| sort -max_lat\n```\n\nUnderstanding this SPL\n\n**Nutanix CVM Resource and Service Health** — Deep-dive CVM CPU/memory pressure and Stargate latency vs UC-19.2.7 — for capacity and noisy-neighbor triage.\n\nDocumented **Data sources**: `nutanix:cvm:metrics`. **App/TA** (typical add-on context): `TA-nutanix`, Prism metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: nutanix:cvm:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"nutanix:cvm:metrics\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where cpu_pct>85 OR mem_pct>90 OR stargate_latency_ms>5` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by node** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hot CVMs), Line chart (latency vs CPU), Heatmap (node × time).",
              "script": "",
              "premium": "",
              "hw": "Nutanix AOS nodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We look inside the small control computers on each box that run the shared storage and catalog for your virtual machines, not just a single health number for the whole group. We watch for programs that keep restarting, run out of memory, or drown in disk wait, and we tell the right people in time, before the whole group feels slow because one box is struggling.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.15",
              "n": "Storage Pool Rebalance Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks Curator/Planned Outage rebalance and disk usage skew during rebalance operations on Nutanix/vSAN.",
              "t": "Nutanix Prism, vSAN health",
              "d": "`nutanix:curator`, `vsan:rebalance`",
              "q": "index=hci sourcetype=\"nutanix:curator\" earliest=-24h\n| where status=\"running\" OR rebalance_pct>0\n| stats latest(rebalance_pct) as pct, latest(eta_min) as eta by cluster, task_id\n| table cluster, task_id, pct, eta",
              "m": "Map Curator task fields. Alert when rebalance stalls or ETA exceeds policy. Report impact on I/O (UC-19.2.3).",
              "z": "Gauge (rebalance %), Table (active tasks), Line chart (skew index).",
              "kfp": "Large firmware or hypervisor rolling updates can pause Curator or proactive jobs while nodes drain, which looks like a stall until polls resume. Adding capacity may start a long rebalance that sits at ninety-five to ninety-nine percent while metadata work finishes, so percent-based warnings need calendar context. Azure Stack HCI Optimize jobs may report complete while background trim or tier movement continues, which differs from Nutanix percent semantics. Manual suspension of proactive rebalance during performance incidents can trip skew indicators even when the action was deliberate; annotate change_freeze_until or suppress with an approved maintenance lookup. Mis-mapped cluster_id strings between Prism and CMDB can show false skew joins; validate keys whenever inventory merges.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Nutanix Prism, vSAN health.\n• Ensure the following data sources are available: `nutanix:curator`, `vsan:rebalance`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap Curator task fields. Alert when rebalance stalls or ETA exceeds policy. Report impact on I/O (UC-19.2.3).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"nutanix:curator\" earliest=-24h\n| where status=\"running\" OR rebalance_pct>0\n| stats latest(rebalance_pct) as pct, latest(eta_min) as eta by cluster, task_id\n| table cluster, task_id, pct, eta\n```\n\nUnderstanding this SPL\n\n**Storage Pool Rebalance Monitoring** — Tracks Curator/Planned Outage rebalance and disk usage skew during rebalance operations on Nutanix/vSAN.\n\nDocumented **Data sources**: `nutanix:curator`, `vsan:rebalance`. **App/TA** (typical add-on context): Nutanix Prism, vSAN health. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: nutanix:curator. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"nutanix:curator\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status=\"running\" OR rebalance_pct>0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster, task_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Storage Pool Rebalance Monitoring**): table cluster, task_id, pct, eta\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (rebalance %), Table (active tasks), Line chart (skew index).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We follow the tidy-up jobs that shared storage runs on purpose while everything is still healthy, watching how far along each job is, whether it is stuck, and whether disks are uneven. That way capacity gets reclaimed calmly instead of turning the next ordinary growth spike into an emergency shuffle.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.16",
              "n": "HCI Node Failure Domain Risk",
              "c": "critical",
              "f": "intermediate",
              "v": "Validates RF/FTM rules so critical VM replicas don’t share the same fault domain (rack/block) — risk scoring when domains are imbalanced.",
              "t": "Nutanix Prism, vSAN stretched cluster APIs",
              "d": "`hci:fd`, host-to-disk mapping",
              "q": "index=hci sourcetype=\"hci:fault_domain\" earliest=-7d\n| stats dc(node) as nodes_in_fd by cluster, fault_domain_name\n| where nodes_in_fd>3\n| lookup hci_fd_risk.csv cluster fault_domain_name OUTPUT risk_score\n| where risk_score>70\n| table cluster, fault_domain_name, nodes_in_fd, risk_score",
              "m": "Build fault domain from rack metadata. Flag Tier-0 VMs without FD separation. Use for BCP testing.",
              "z": "Table (risky placements), Sankey (VM → FD), Single value (violations).",
              "kfp": "A rolling Nutanix or vSAN maintenance can transiently show lower runtime FT in API snapshots while the UI still shows policy intent; use `change_freeze_until` and compare two consecutive polls, not a single tick. Lab clusters with tiny host counts can show odd `effective_blast_radius` approximations from the `host_count/fault_domain_count` helper — tune with `vms_in_largest_fault_domain` from an extended poller. Greenfield sites often lack `expected_fault_tolerance` in the lookup for the first week; the row is missing, not the cluster, so you see `UNSET` in `domain_awareness_drift` until CMDB catch-up. Stretched vSAN in partial witness failure can look like FT drift in REST while the storage policy is still satisfied; do not page solely on a single `expected_fault_tolerance` read without a second API. HyperFlex upgrades sometimes rename `replication_factor` in JSON; a TA bump without `coalesce` extension looks like a breach. Azure Stack HCI ARM views can trail Windows Admin Center by minutes after an ExpressRoute or firewall event; a medium-severity `domain_awareness` mismatch is common briefly. Proof-of-concept hosts without rack labels will legitimately read `NONE` for Nutanix rack awareness; exclude them in the lookup with `expected_domain_awareness=NONE` instead of silencing the vendor feed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Nutanix Prism, vSAN stretched cluster APIs.\n• Ensure the following data sources are available: `hci:fd`, host-to-disk mapping.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild fault domain from rack metadata. Flag Tier-0 VMs without FD separation. Use for BCP testing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"hci:fault_domain\" earliest=-7d\n| stats dc(node) as nodes_in_fd by cluster, fault_domain_name\n| where nodes_in_fd>3\n| lookup hci_fd_risk.csv cluster fault_domain_name OUTPUT risk_score\n| where risk_score>70\n| table cluster, fault_domain_name, nodes_in_fd, risk_score\n```\n\nUnderstanding this SPL\n\n**HCI Node Failure Domain Risk** — Validates RF/FTM rules so critical VM replicas don’t share the same fault domain (rack/block) — risk scoring when domains are imbalanced.\n\nDocumented **Data sources**: `hci:fd`, host-to-disk mapping. **App/TA** (typical add-on context): Nutanix Prism, vSAN stretched cluster APIs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: hci:fault_domain. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"hci:fault_domain\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster, fault_domain_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where nodes_in_fd>3` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where risk_score>70` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HCI Node Failure Domain Risk**): table cluster, fault_domain_name, nodes_in_fd, risk_score\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (risky placements), Sankey (VM → FD), Single value (violations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We read how your big shared computer farms are supposed to survive a whole shelf or rack going away, before anything actually breaks, and we compare that to what the teams wrote down in their own plan. We raise a clear hand when the wiring or the rules do not match the promise, and we are careful not to silence a truly dangerous “zero safety margin” case just because a maintenance window is open.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.17",
              "n": "vSAN Disk Group Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Disk group mount state, component health, and checksum errors for VMware vSAN.",
              "t": "vSAN health API, vCenter TA",
              "d": "`vsan:diskgroup`",
              "q": "index=hci sourcetype=\"vsan:diskgroup\" earliest=-4h\n| where state!=\"healthy\" OR checksum_errors>0 OR component_state!=\"active\"\n| stats latest(state) as st, sum(checksum_errors) as checksums by cluster, host, dg_name\n| sort -checksums",
              "m": "Ingest vSAN health JSON. Page on unhealthy disk group or rising checksums. Correlate with physical disk (UC-19.2.5).",
              "z": "Status grid (DG × host), Table (issues), Timeline (events).",
              "kfp": "Planned disk group decommission for a cache device swap can show unmounted while data evacuates, which is legitimate under an approved change. Stretched vSAN failovers may show heavy resync when a site returns, which is not the same as permanent_error on a local NVMe. ESXi maintenance with data evacuation can make remaining hosts look unhealthy briefly as objects move. vSAN major upgrades that rebalance metadata can bump checksum or IO error counters that settle when the cluster finishes. A witness host network blip can look like data-plane disk group issues if routing is misread—verify witness use cases before hardware RMA. A flaky poller that omits fields intermittently is indistinguishable from a true fault without dead-letter logging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: vSAN health API, vCenter TA.\n• Ensure the following data sources are available: `vsan:diskgroup`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest vSAN health JSON. Page on unhealthy disk group or rising checksums. Correlate with physical disk (UC-19.2.5).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"vsan:diskgroup\" earliest=-4h\n| where state!=\"healthy\" OR checksum_errors>0 OR component_state!=\"active\"\n| stats latest(state) as st, sum(checksum_errors) as checksums by cluster, host, dg_name\n| sort -checksums\n```\n\nUnderstanding this SPL\n\n**vSAN Disk Group Health** — Disk group mount state, component health, and checksum errors for VMware vSAN.\n\nDocumented **Data sources**: `vsan:diskgroup`. **App/TA** (typical add-on context): vSAN health API, vCenter TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: vsan:diskgroup. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"vsan:diskgroup\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where state!=\"healthy\" OR checksum_errors>0 OR component_state!=\"active\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster, host, dg_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (DG × host), Table (issues), Timeline (events).",
              "script": "",
              "premium": "",
              "hw": "vSAN ReadyNodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch the storage building blocks on each host in your private cloud so we know if a bundle goes offline, misreports health, or flips checksums. When that happens, programs on top can freeze or fail to move, and repair work can fill the network. We help the right team respond before a small storage issue becomes a long outage for everyday services.",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_vcenter"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.18",
              "n": "Cluster Expansion Events",
              "c": "medium",
              "f": "beginner",
              "v": "Audits node add/remove, disk group expand, and maintenance mode during cluster scale events.",
              "t": "`TA-nutanix`, vCenter events",
              "d": "`hci:cluster_events`",
              "q": "index=hci sourcetype=\"hci:cluster_events\" earliest=-30d\n| search \"node added\" OR \"add node\" OR \"remove node\" OR \"expand\"\n| table _time, cluster, user, action, details\n| sort -_time",
              "m": "Normalize messages from Prism and vCenter. Correlate with change tickets. Alert on unplanned expansion.",
              "z": "Timeline (expansion events), Table (recent changes), Bar chart (events by cluster).",
              "kfp": "Break-glass hypervisor logins during a Sev-1 recovery can look like non_approved_actor even when every action was necessary; annotate the lookup freeze field instead of muting the control. Prism or vCenter clock skew makes ticket windows look missed until NTP is corrected on collectors and managers. Automated lifecycle managers that use localized service account names may fail regex until approved_actor_pattern is widened with a documented CAB note. Stretched clusters with nodes counted per site can disagree with a single CMDB expected_node_count until you split cluster_id keys. UCS firmware swaps can emit remove and insert pairs in either order within seconds, which inflates topology_changes_24h without implying malice. Azure Stack HCI cumulative updates sometimes replay historical cluster log lines on reboot; filter by known build strings if duplicates appear. Disk-group reshape operations initiated by vendor support under remote session may lack a local ticket row until the vendor case is mirrored into change_tickets_24h.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nutanix`, vCenter events.\n• Ensure the following data sources are available: `hci:cluster_events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize messages from Prism and vCenter. Correlate with change tickets. Alert on unplanned expansion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"hci:cluster_events\" earliest=-30d\n| search \"node added\" OR \"add node\" OR \"remove node\" OR \"expand\"\n| table _time, cluster, user, action, details\n| sort -_time\n```\n\nUnderstanding this SPL\n\n**Cluster Expansion Events** — Audits node add/remove, disk group expand, and maintenance mode during cluster scale events.\n\nDocumented **Data sources**: `hci:cluster_events`. **App/TA** (typical add-on context): `TA-nutanix`, vCenter events. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: hci:cluster_events. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"hci:cluster_events\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Pipeline stage (see **Cluster Expansion Events**): table _time, cluster, user, action, details\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (expansion events), Table (recent changes), Bar chart (events by cluster).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We note when shared compute farms gain or lose a machine, when someone flips repair mode, or when storage grows, and we check those moments against the official schedule and tickets. We flag when the live count and the company inventory disagree or when the work happens under the wrong person or time, because that is how unapproved gear and risky access slip in quietly.",
              "mtype": [
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix",
                "vmware"
              ],
              "em": [
                "vmware_vcenter"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.19",
              "n": "Nutanix AHV Host Capacity",
              "c": "high",
              "f": "beginner",
              "v": "vCPU, memory, and VM density headroom on AHV hosts — for right-sizing and new workload placement.",
              "t": "Prism Element API",
              "d": "`nutanix:ahv:host`",
              "q": "index=hci sourcetype=\"nutanix:ahv:host\" earliest=-1h\n| eval used_pct=round(100*vcpu_used/vcpu_total,1)\n| where used_pct>80\n| stats max(used_pct) as peak by host, cluster\n| sort -peak",
              "m": "Poll host capacity. Alert at 80% vCPU or memory. Integrate with provisioning automation.",
              "z": "Bar chart (used % by host), Table (headroom), Gauge (cluster average).",
              "kfp": "A new node in expand-the-cluster work can sit at low VM density while cables and firmware still finish, which looks like a planning win even though the team has not rebalanced yet; do not mark the cluster as lean until rebalance and health checks in UC-19.2.7 finish. A host in maintenance shows almost no local guest load and can look under-utilized on purpose, which is not a green light for new templates on peers until you read the flag in a drilldown. Acropolis data services rebalancing (ADS-style moves) can create short spikes in `vcpu_pct_used` on several nodes in the same five-minute window that are not a capacity miss if the change record matches. A controller VM boot loop on one node (see UC-19.2.14) can add noise in related streams; treat that as a service incident path first, and only then revisit placement math. Two pollers that post the same host event double memory sums until you stop the duplicate, which is an ingest defect, not a false alarm, but it reads like a sudden commit jump.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Prism Element API.\n• Ensure the following data sources are available: `nutanix:ahv:host`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll host capacity. Alert at 80% vCPU or memory. Integrate with provisioning automation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"nutanix:ahv:host\" earliest=-1h\n| eval used_pct=round(100*vcpu_used/vcpu_total,1)\n| where used_pct>80\n| stats max(used_pct) as peak by host, cluster\n| sort -peak\n```\n\nUnderstanding this SPL\n\n**Nutanix AHV Host Capacity** — vCPU, memory, and VM density headroom on AHV hosts — for right-sizing and new workload placement.\n\nDocumented **Data sources**: `nutanix:ahv:host`. **App/TA** (typical add-on context): Prism Element API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: nutanix:ahv:host. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"nutanix:ahv:host\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used_pct>80` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (used % by host), Table (headroom), Gauge (cluster average).",
              "script": "",
              "premium": "",
              "hw": "Nutanix AHV",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We add up the computer power and memory each physical machine really has, compare it to what the virtual machines on it are allowed to use, and check whether the whole cluster can still take the strain if you take one big machine out of service. That way teams know if they can safely patch or add new work without surprises.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [
                "nutanix_prism_element"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.20",
              "n": "SimpliVity Backup Efficiency",
              "c": "medium",
              "f": "intermediate",
              "v": "Backup job success, dedupe ratio, and store utilization for HPE SimpliVity.",
              "t": "HPE SimpliVity REST API",
              "d": "`simplivity:backup`",
              "q": "index=hci sourcetype=\"simplivity:backup\" earliest=-7d\n| where status!=\"success\" OR dedupe_ratio<3\n| stats latest(status) as st, latest(dedupe_ratio) as dr by cluster, policy_name\n| table cluster, policy_name, st, dr",
              "m": "Map OmniStack API fields. Alert on failed backup or low dedupe vs baseline.",
              "z": "Table (backup status), Line chart (dedupe trend), Single value (failed jobs).",
              "kfp": "Planned retention pruning can delete many expired backups in one window, which raises visible job counts without implying new backup failures. Ephemeral templates and deliberate exclusions for automation sandboxes can look like orphan VMs until you tag them out. Cluster expansion or large ingest of incompressible data can lower dedupe_ratio until cold data re-optimizes, which looks like drift without a policy bug. OVC failover can yield a transient empty backups page in one poll bucket; verify the following sample before opening a severity-one bridge.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HPE SimpliVity REST API.\n• Ensure the following data sources are available: `simplivity:backup`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap OmniStack API fields. Alert on failed backup or low dedupe vs baseline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"simplivity:backup\" earliest=-7d\n| where status!=\"success\" OR dedupe_ratio<3\n| stats latest(status) as st, latest(dedupe_ratio) as dr by cluster, policy_name\n| table cluster, policy_name, st, dr\n```\n\nUnderstanding this SPL\n\n**SimpliVity Backup Efficiency** — Backup job success, dedupe ratio, and store utilization for HPE SimpliVity.\n\nDocumented **Data sources**: `simplivity:backup`. **App/TA** (typical add-on context): HPE SimpliVity REST API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: simplivity:backup. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"simplivity:backup\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status!=\"success\" OR dedupe_ratio<3` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster, policy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **SimpliVity Backup Efficiency**): table cluster, policy_name, st, dr\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (backup status), Line chart (dedupe trend), Single value (failed jobs).",
              "script": "",
              "premium": "",
              "hw": "HPE SimpliVity",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch HPE SimpliVity backups so problems surface soon, not only in the morning report. We compare the last good backup age to the minutes you agreed per policy, list missing policies or late VM runs, and track deduplication drift when storage efficiency shifts.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.21",
              "n": "Azure Stack HCI Cluster Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Cluster validation, storage pool health, and Azure Arc connection state for Azure Stack HCI.",
              "t": "Windows Admin Center, Azure Monitor connector",
              "d": "`azurestackhci:health`",
              "q": "index=hci sourcetype=\"azurestackhci:health\" earliest=-4h\n| where overall_status!=\"healthy\" OR storage_pool_status!=\"ok\" OR arc_connected=0\n| stats latest(overall_status) as st, latest(storage_pool_status) as pool by cluster_name\n| table cluster_name, st, pool",
              "m": "Ingest WAC/OMS JSON. Alert on any non-healthy or Arc disconnect. Correlate with Windows Update pauses.",
              "z": "Status grid (HCI clusters), Table (issues), Timeline.",
              "kfp": "A scheduled S2D `Optimize-StoragePool` or automatic rebalance that follows adding a canary node can keep `Get-StorageJob` in `Running` for many hours with healthy intent; the join `long_run_sec` path will light `high` after twelve hours on purpose — align `change_freeze_until` in `azs_hci_cluster_owner.csv` with documented maintenance. Monthly cumulative updates through cluster-aware updating drain and pause nodes one at a time; you will see node states and Arc heartbeats wobble without a storage outage, so a freeze row per monthly wave is wiser than muting the alert. An Azure Arc connection lapse during a data-centre firewall or ExpressRoute change can show as `mismatch=1` between on-prem and Resource Graph for minutes while the file-server workloads stay up; page only if both stay divergent past your convergence SLA. A `BillingStatus` or license warning can surface in the 60-day grace before enforcement; treat the first `OutOfPolicy` as finance plus IT jointly unless `cluster_status` is also `Error`. Domain-controller clock skew that breaks Kerberos for WinRM or JSON timestamps can make `Get-StorageJob` look eternal or can raise false `Down` states in simulation — fix NTP before rebalancing Splunk logic. A deliberately paused node for GPU maintenance can resemble failure if you forget to set `NodeWeight` to zero; training matters more than a new line of SPL. Azure Resource Graph can lag a few minutes behind the node; that is a known cloud indexing delay, not a fabric split.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows Admin Center, Azure Monitor connector.\n• Ensure the following data sources are available: `azurestackhci:health`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest WAC/OMS JSON. Alert on any non-healthy or Arc disconnect. Correlate with Windows Update pauses.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"azurestackhci:health\" earliest=-4h\n| where overall_status!=\"healthy\" OR storage_pool_status!=\"ok\" OR arc_connected=0\n| stats latest(overall_status) as st, latest(storage_pool_status) as pool by cluster_name\n| table cluster_name, st, pool\n```\n\nUnderstanding this SPL\n\n**Azure Stack HCI Cluster Health** — Cluster validation, storage pool health, and Azure Arc connection state for Azure Stack HCI.\n\nDocumented **Data sources**: `azurestackhci:health`. **App/TA** (typical add-on context): Windows Admin Center, Azure Monitor connector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: azurestackhci:health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"azurestackhci:health\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where overall_status!=\"healthy\" OR storage_pool_status!=\"ok\" OR arc_connected=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Azure Stack HCI Cluster Health**): table cluster_name, st, pool\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (HCI clusters), Table (issues), Timeline.",
              "script": "",
              "premium": "",
              "hw": "Azure Stack HCI validated nodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch the group of powerful computers that run your company’s important shared programs, like watching a power room that feeds many rooms. We check that every machine in the team is up, the shared storage is safe, and long clean-up after a drive swap does not drag on for hours unseen. We help the right people act before things slow down or fail for everyone who depends on them.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.22",
              "n": "HPE dHCI Tier Health",
              "c": "high",
              "f": "intermediate",
              "v": "HPE disaggregated HCI storage tier latency, capacity, and replication lag between compute and storage.",
              "t": "HPE OneView, dHCI metrics",
              "d": "`hpe:dhci:tier`",
              "q": "index=hci sourcetype=\"hpe:dhci:tier\" earliest=-4h\n| where tier_latency_ms>5 OR capacity_pct>85 OR repl_lag_sec>30\n| stats max(tier_latency_ms) as lat, max(repl_lag_sec) as lag by cluster, tier_name\n| sort -lat",
              "m": "Map vendor tier IDs. Alert on latency or replication lag SLO breach.",
              "z": "Table (tier health), Line chart (latency), Gauge (capacity).",
              "kfp": "A Nimble- or Alletra-style dedupe scrub or post-snapshot rebalancing can raise p95 read microseconds for a few minutes while the array is still healthy for production I/O, which is a workload-side stall, not a fabric outage; compare OneView link state. Active-active controller failover or rolling firmware on the array can spike latency and RPO_seconds without an application breach; use change_freeze_until. A large asynchronous replication backfill from a new volume_collection member inflates RPO_seconds until the initial sync finishes; do not page on RPO alone if throughput meters show a known data-mobility job. NTP offset between the array, OneView appliance, and Splunk can make a five-minute perf window straddle the wrong clock boundary, which looks like drift but is skew; watch _indextime minus _time. A pool-rebalance or thin-provisioned volume move after a datastore extend can make pool_oversub jump until compaction catches up, which the UI also shows as a temporary yellow state. A wrong row in dhci_latency_baseline.csv (tier spelling mismatch) can fabricate a two hundred percent drift; validate CSV in a lower environment first.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HPE OneView, dHCI metrics.\n• Ensure the following data sources are available: `hpe:dhci:tier`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap vendor tier IDs. Alert on latency or replication lag SLO breach.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"hpe:dhci:tier\" earliest=-4h\n| where tier_latency_ms>5 OR capacity_pct>85 OR repl_lag_sec>30\n| stats max(tier_latency_ms) as lat, max(repl_lag_sec) as lag by cluster, tier_name\n| sort -lat\n```\n\nUnderstanding this SPL\n\n**HPE dHCI Tier Health** — HPE disaggregated HCI storage tier latency, capacity, and replication lag between compute and storage.\n\nDocumented **Data sources**: `hpe:dhci:tier`. **App/TA** (typical add-on context): HPE OneView, dHCI metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: hpe:dhci:tier. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"hpe:dhci:tier\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where tier_latency_ms>5 OR capacity_pct>85 OR repl_lag_sec>30` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster, tier_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (tier health), Line chart (latency), Gauge (capacity).",
              "script": "",
              "premium": "",
              "hw": "HPE dHCI",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We check that the special storage box behind your HPE dHCI private cloud is responding quickly enough, is not out of room, and that the link between the servers and that box is up. We compare what we see to a normal you wrote down ahead of time, and we quiet alerts during a planned maintenance window so people are not woken for expected work.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.23",
              "n": "vSAN Witness Appliance Health",
              "c": "critical",
              "f": "beginner",
              "v": "Stretched vSAN witness availability and quorum for split-brain prevention.",
              "t": "vCenter, witness VM metrics",
              "d": "`vsan:witness`",
              "q": "index=hci sourcetype=\"vsan:witness\" earliest=-24h\n| where witness_state!=\"connected\" OR quorum!=\"met\"\n| stats latest(witness_state) as ws, latest(quorum) as q by cluster\n| table cluster, ws, q",
              "m": "Poll witness health API. Page immediately if witness disconnected or quorum lost.",
              "z": "Single value (witness OK), Table (clusters at risk), Timeline.",
              "kfp": "Witness `witness_state=disconnected` for under two minutes during an OVA guest reboot for guest-OS security patching is usually benign; file a 120s rolling minimum before paging when your change record shows maintenance. A planned multi-hour WAN or MPLS work between data sites can leave `quorum=met` while a witness site is temporarily unreachable; align `change_freeze_until` in `vsan_witness_owner.csv` with the networking CAB window. Right after a brand-new witness deploy, `witness_disk_health` may read `Unhealthy` for roughly five to fifteen minutes while metadata initialises. Shared-witness (vSphere 8.0 U2+) can read `degraded` for one stretched cluster when only that cluster’s preferred site is isolated, even while the shared appliance is fine—triage with per-cluster fault-domain view, not a blanket host rebuild. `connection_state=notResponding` on the poll line during the ESXi layer upgrade for the witness host is expected for the duration the vendor documents; use host-maintenance metadata or silence non-critical rows. A flaky collector that sometimes omits `last_witness_heartbeat` can trip `low` on heartbeat age without a true outage—watch `_indextime` versus `_time` gaps.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: vCenter, witness VM metrics.\n• Ensure the following data sources are available: `vsan:witness`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll witness health API. Page immediately if witness disconnected or quorum lost.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"vsan:witness\" earliest=-24h\n| where witness_state!=\"connected\" OR quorum!=\"met\"\n| stats latest(witness_state) as ws, latest(quorum) as q by cluster\n| table cluster, ws, q\n```\n\nUnderstanding this SPL\n\n**vSAN Witness Appliance Health** — Stretched vSAN witness availability and quorum for split-brain prevention.\n\nDocumented **Data sources**: `vsan:witness`. **App/TA** (typical add-on context): vCenter, witness VM metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: vsan:witness. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"vsan:witness\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where witness_state!=\"connected\" OR quorum!=\"met\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **vSAN Witness Appliance Health**): table cluster, ws, q\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (witness OK), Table (clusters at risk), Timeline.",
              "script": "",
              "premium": "",
              "hw": "vSAN witness",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch a tiny tie-breaker computer in a third place so the two main sites never both think they are in charge when the long link between them fails. Without that vote, the shared storage could act like two halves that both try to write the same data. We help you see trouble early so a blip in the line does not turn into a long hands-on clean-up later.",
              "mtype": [
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_vcenter"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
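The known-false-positive notes for 19.2.23 recommend debouncing short witness blips instead of paging on a single poll. A minimal sketch of that debounce, assuming the same `vsan:witness` feed and roughly one poll per minute (the three-poll threshold stands in for the 120-second rolling minimum and is not taken from the catalog):

```spl
index=hci sourcetype="vsan:witness" earliest=-15m
| stats count as polls,
        count(eval(witness_state!="connected")) as bad_polls,
        latest(quorum) as q by cluster
| where bad_polls>=3 OR q!="met"
| table cluster, polls, bad_polls, q
```

Quorum loss still fires on the first sample; only the witness-connectivity condition is smoothed.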
            {
              "i": "19.2.24",
              "n": "HCI Deduplication Efficiency Ratio",
              "c": "medium",
              "f": "beginner",
              "v": "Cluster-wide dedupe/compression ratio (Nutanix vs vSAN) vs baseline — efficiency regression indicates new noisy workloads or mis-tuned containers.",
              "t": "`TA-nutanix`, vSAN capacity",
              "d": "`hci:storage_efficiency`",
              "q": "index=hci sourcetype=\"hci:storage_efficiency\" earliest=-24h\n| eval ratio=logical_tb/physical_tb\n| lookup hci_efficiency_baseline.csv cluster OUTPUT baseline_ratio\n| where ratio < baseline_ratio*0.85\n| stats latest(ratio) as r, latest(baseline_ratio) as baseline by cluster\n| table cluster, r, baseline",
              "m": "Define `baseline_ratio` from lookup or 30-day rolling mean. Alert on >15% drop week-over-week.",
              "z": "Line chart (dedupe ratio trend), Single value (fleet average), Table (regressions).",
              "kfp": "Large ingest of inherently incompressible media lowers multipliers without hardware faults. Firmware background scans or dedupe restripe jobs temporarily change reported ratios until resilver completes. Seasonal VDI refresh cycles shift clone_ratio on Nutanix while other vendors stay flat, resembling drift when baselines were winter-weighted. Prism or HX REST partial outages can emit one stale sample that mimics softness until the next healthy poll. Planned migrations that deliberately flatten clones move combined_efficiency_ratio in ways that match real capacity need, not telemetry breakage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nutanix`, vSAN capacity.\n• Ensure the following data sources are available: `hci:storage_efficiency`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine `baseline_ratio` from lookup or 30-day rolling mean. Alert on >15% drop week-over-week.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"hci:storage_efficiency\" earliest=-24h\n| eval ratio=logical_tb/physical_tb\n| lookup hci_efficiency_baseline.csv cluster OUTPUT baseline_ratio\n| where ratio < baseline_ratio*0.85\n| stats latest(ratio) as r, latest(baseline_ratio) as baseline by cluster\n| table cluster, r, baseline\n```\n\nUnderstanding this SPL\n\n**HCI Deduplication Efficiency Ratio** — Cluster-wide dedupe/compression ratio (Nutanix vs vSAN) vs baseline — efficiency regression indicates new noisy workloads or mis-tuned containers.\n\nDocumented **Data sources**: `hci:storage_efficiency`. **App/TA** (typical add-on context): `TA-nutanix`, vSAN capacity. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: hci:storage_efficiency. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"hci:storage_efficiency\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where ratio < baseline_ratio*0.85` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **HCI Deduplication Efficiency Ratio**): table cluster, r, baseline\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (dedupe ratio trend), Single value (fleet average), Table (regressions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch how tightly the live storage cluster still packs real data compared with its usual shrink factor. If that slips, we show roughly how much spare space quietly vanished so the household budget for disks is not a surprise when ordinary alarms still look fine.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [
                "cisco_aci"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
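The implementation note for 19.2.24 offers a 30-day rolling mean as an alternative to the `hci_efficiency_baseline.csv` lookup. A sketch of that variant, assuming the same `hci:storage_efficiency` fields; the 15% drop threshold mirrors the documented alert condition, and because the newest day is still part of the 30-day mean the baseline is slightly conservative:

```spl
index=hci sourcetype="hci:storage_efficiency" earliest=-30d@d
| eval ratio=logical_tb/physical_tb
| bin _time span=1d
| stats avg(ratio) as daily_ratio by _time, cluster
| eventstats avg(daily_ratio) as baseline_ratio by cluster
| where _time>=relative_time(now(),"-1d@d") AND daily_ratio < baseline_ratio*0.85
| table cluster, daily_ratio, baseline_ratio
```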
            {
              "i": "19.2.25",
              "n": "Nutanix Cluster Health Score and Critical Services",
              "c": "critical",
              "f": "intermediate",
              "v": "A single degraded critical service (Stargate, Cassandra, Curator) can silently erode I/O quality before user-visible alerts fire. Tracking cluster health score and service map together shortens time to stabilize the control plane.",
              "t": "`TA-nutanix`, Prism Element API",
              "d": "`index=hci` `sourcetype=\"nutanix:cluster_health\"` with fields `cluster`, `health_score`, `service_name`, `service_status`",
              "q": "index=hci sourcetype=\"nutanix:cluster_health\" earliest=-2h\n| stats latest(health_score) as score by cluster\n| join type=left cluster [\n    search index=hci sourcetype=\"nutanix:cluster_health\" earliest=-2h service_name=*\n    | where service_status!=\"UP\" AND service_status!=\"up\"\n    | stats values(service_name) as bad_services by cluster\n  ]\n| where score<90 OR isnotnull(bad_services)\n| table cluster, score, bad_services\n| sort score",
              "m": "(1) Ingest Prism cluster health JSON on a 1–5 minute cadence; (2) normalize service status casing; (3) alert when score drops below SLO or any core service is not up; (4) link to Nutanix support bundle collection runbook.",
              "z": "Status grid (cluster × health), Table (down services), Single value (clusters below SLO).",
              "kfp": "A short NA on Prism or Insights after a CVM rolling restart in an approved AOS or hypervisor LCM can clear before the two-hour window closes; when the poll lands mid-restart, you may get one row where `service_status` is NA but the console is already green, so correlate `last_mo` to your change ticket before muting. Field rename experiments in a new TA build (for example a temporary `HealthScore` instead of `health_score`) can make `health_score` look empty until `coalesce` in props matches—run `| fieldsummary` in dev before you tune suppression. Clusters in quarantine for forensic imaging can legitimately run at &lt;75 for days; the `change_freeze_until` column is how you mark “expected bad” without a fake green SLA. A dev or pilot cluster you never add to the lookup can page “unknown owner”; fix governance, not the SPL. Prism Central can show an aggregate that lags a single PE poll by one cycle when registration flaps, so a single off-by-one score is not a Sev-1 if `nutanix:health_check` and CVM SSH show healthy Stargate. Synthesized test events in a lab (pipeline replay) that replay identical `_time` can inflate `mvcount(down_svcs)`; filter test harnesses. ESXi clusters with a vendor host PSOD can drop score before services reattach; the sibling host and CVM UCs, not this one, own root-cause, but a high from Acropolis down is still a real find.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nutanix`, Prism Element API.\n• Ensure the following data sources are available: `index=hci` `sourcetype=\"nutanix:cluster_health\"` with fields `cluster`, `health_score`, `service_name`, `service_status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest Prism cluster health JSON on a 1–5 minute cadence; (2) normalize service status casing; (3) alert when score drops below SLO or any core service is not up; (4) link to Nutanix support bundle collection runbook.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"nutanix:cluster_health\" earliest=-2h\n| stats latest(health_score) as score by cluster\n| join type=left cluster [\n    search index=hci sourcetype=\"nutanix:cluster_health\" earliest=-2h service_name=*\n    | where service_status!=\"UP\" AND service_status!=\"up\"\n    | stats values(service_name) as bad_services by cluster\n  ]\n| where score<90 OR isnotnull(bad_services)\n| table cluster, score, bad_services\n| sort score\n```\n\nUnderstanding this SPL\n\n**Nutanix Cluster Health Score and Critical Services** — A single degraded critical service (Stargate, Cassandra, Curator) can silently erode I/O quality before user-visible alerts fire. Tracking cluster health score and service map together shortens time to stabilize the control plane.\n\nDocumented **Data sources**: `index=hci` `sourcetype=\"nutanix:cluster_health\"` with fields `cluster`, `health_score`, `service_name`, `service_status`. **App/TA** (typical add-on context): `TA-nutanix`, Prism Element API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: nutanix:cluster_health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"nutanix:cluster_health\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where score<90 OR isnotnull(bad_services)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Nutanix Cluster Health Score and Critical Services**): table cluster, score, bad_services\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (cluster × health), Table (down services), Single value (clusters below SLO).",
              "script": "",
              "premium": "",
              "hw": "Nutanix NX, Dell XC, Lenovo HX nodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch one health number for each Nutanix block your company already depends on, and we check whether the small programs that keep the shared storage and the catalog of where data lives are really up. If the number falls into a bad band, or a truly important program is not up, the right contact is told in time, while a planned quiet week can still hush the noise if the case is not an emergency..",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [
                "nutanix_prism_element"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
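The walkthrough for 19.2.25 warns about `join` row limits, and the implementation step asks for service-status casing to be normalized. A join-free sketch under the same sourcetype and field assumptions, collecting down services and the latest health score in one `stats` pass:

```spl
index=hci sourcetype="nutanix:cluster_health" earliest=-2h
| eval service_status=lower(service_status)
| stats latest(health_score) as score,
        values(eval(if(isnotnull(service_name) AND service_status!="up", service_name, null()))) as bad_services
        by cluster
| where score<90 OR isnotnull(bad_services)
| table cluster, score, bad_services
| sort score
```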
            {
              "i": "19.2.26",
              "n": "VxRail LCM Compliance and Staged Bundle Drift",
              "c": "high",
              "f": "intermediate",
              "v": "VxRail lifecycle compliance determines VMware and hardware driver supportability. Drift between staged bundles and installed release sets increases risk during one-click upgrades and delays security patching.",
              "t": "Custom (VxRail Manager REST API)",
              "d": "`index=vxrail` `sourcetype=\"vxrail:lcm\"` with fields `cluster_id`, `current_release`, `staged_release`, `compliance_state`",
              "q": "index=vxrail sourcetype=\"vxrail:lcm\" earliest=-24h\n| stats latest(current_release) as cur, latest(staged_release) as staged, latest(compliance_state) as comp by cluster_id\n| where comp!=\"Compliant\" OR (isnotnull(staged_release) AND cur!=staged)\n| table cluster_id, cur, staged, comp\n| sort cluster_id",
              "m": "(1) Poll VxRail LCM inventory endpoints after each maintenance window; (2) alert on non-compliant or partially staged clusters; (3) export weekly compliance for VMware TAM reviews.",
              "z": "Table (non-compliant clusters), Bar chart (releases in fleet), Timeline (LCM state changes).",
              "kfp": "A planned LCM change during a `change_freeze_until` window can still show staged versus current in Splunk; the demotion in the SPL and the CAB record together explain a yellow row that is not a process gap. A mixed-bundle or mixed-ESXi state during a rolling mid-cluster LCM is normal for hours and should be evaluated against `lcm_state` in {`InProgress`, `StagedReady`} before paging as if it were steady-state split-brain. A precheck that Dell documents as an accepted interoperability warning can remain `low` for quarters; the text column in the poller, not the boolean alone, tells you whether the finding is new. Support contract `expiration_date` can lag a renewal already signed on paper until SupportAssist or SRS reflects it—treat a one-day “expired” with an open P1 to Dell with the PO number, not an automatic rebuild. vCenter `vmware:perf` that lacks `cluster_id` because the forwarder is not yet tagging hosts yields empty `perf_product_hint` while LCM is still valid; that is a collection gap, not an automatic compliance miss. A fresh GA bundle in the field before your catalog CSV is updated can overstate n-minus; refresh the catalog every Dell publish cycle, roughly quarterly.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (VxRail Manager REST API).\n• Ensure the following data sources are available: `index=vxrail` `sourcetype=\"vxrail:lcm\"` with fields `cluster_id`, `current_release`, `staged_release`, `compliance_state`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Poll VxRail LCM inventory endpoints after each maintenance window; (2) alert on non-compliant or partially staged clusters; (3) export weekly compliance for VMware TAM reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vxrail sourcetype=\"vxrail:lcm\" earliest=-24h\n| stats latest(current_release) as cur, latest(staged_release) as staged, latest(compliance_state) as comp by cluster_id\n| where comp!=\"Compliant\" OR (isnotnull(staged_release) AND cur!=staged)\n| table cluster_id, cur, staged, comp\n| sort cluster_id\n```\n\nUnderstanding this SPL\n\n**VxRail LCM Compliance and Staged Bundle Drift** — VxRail lifecycle compliance determines VMware and hardware driver supportability. Drift between staged bundles and installed release sets increases risk during one-click upgrades and delays security patching.\n\nDocumented **Data sources**: `index=vxrail` `sourcetype=\"vxrail:lcm\"` with fields `cluster_id`, `current_release`, `staged_release`, `compliance_state`. **App/TA** (typical add-on context): Custom (VxRail Manager REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vxrail; **sourcetype**: vxrail:lcm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vxrail, sourcetype=\"vxrail:lcm\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where comp!=\"Compliant\" OR (isnotnull(staged_release) AND cur!=staged)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VxRail LCM Compliance and Staged Bundle Drift**): table cluster_id, cur, staged, comp\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant clusters), Bar chart (releases in fleet), Timeline (LCM state changes).",
              "script": "",
              "premium": "",
              "hw": "Dell VxRail P/V/E-series",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We help you see when the Dell appliance software stack is on an old or split version between servers, and when the support window no longer lines up with what the machines are actually running, before that mismatch turns the next change into a long phone call. We keep the human owner’s name in the same picture so the right team is asked, not a random queue that only watches disks.",
              "mtype": [
                "Compliance",
                "Configuration"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vxrail"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
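Step 3 of the 19.2.26 implementation calls for a weekly compliance export for VMware TAM reviews. A sketch of that roll-up, assuming the same `vxrail:lcm` fields; schedule it weekly, and append an `outputlookup` to a history file of your choosing (any such lookup name would be hypothetical) if you want a trend over time:

```spl
index=vxrail sourcetype="vxrail:lcm" earliest=-7d@d latest=@d
| stats latest(compliance_state) as comp by cluster_id
| stats count as clusters,
        count(eval(comp="Compliant")) as compliant,
        values(eval(if(comp!="Compliant", cluster_id, null()))) as noncompliant_clusters
| eval compliance_pct=round(100*compliant/clusters,1)
```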
            {
              "i": "19.2.27",
              "n": "vSAN Disk Group Capacity Headroom and Mount State",
              "c": "critical",
              "f": "intermediate",
              "v": "Disk groups nearing full capacity slow resyncs and increase rebuild time, which extends windows of reduced fault tolerance. Monitoring per–disk group free space and mount state prevents surprise admission control during failures.",
              "t": "VMware vSAN health via vCenter TA or custom API collector",
              "d": "`index=hci` `sourcetype=\"vsan:diskgroup\"` with fields `cluster`, `host`, `dg_name`, `used_pct`, `state`",
              "q": "index=hci sourcetype=\"vsan:diskgroup\" earliest=-1h\n| stats latest(used_pct) as used_pct, latest(state) as st by cluster, host, dg_name\n| where used_pct>80 OR st!=\"mounted\"\n| sort -used_pct\n| table cluster, host, dg_name, used_pct, st",
              "m": "(1) Ingest vSAN disk group metrics from RVC or vSAN SDK exporter; (2) warn at 80% used and critical at 90%; (3) page on any disk group not mounted; (4) correlate with physical disk SMART (UC-19.2.5).",
              "z": "Gauge (used % per DG), Table (critical disk groups), Heatmap (host × DG utilization).",
              "kfp": "A capacity-add in flight can keep raw `used_pct` high for 24–48h (sometimes longer in stretched clusters) until rebalancing redistributes data—headline usage looks worse than the steady-state SLO. SPBM `objectSpaceReservation=100` makes the store accounting for space jump with little guest use-time change. Backup tools that run long snapshot chains (common agentless patterns) temporarily inflate the consumed footprint. Deduplication can fall when a tenant lands non-dedupable data (for example always-on guest encryption) even though the cluster is operationally sound. vSphere 8.0 ESA clusters expose a different `efficiency_breakdown` than OSA; align expected_dedup_ratio per cluster to architecture. A vSAN Stretched cluster can show total_bytes and mirror accounting that look awkward next to a single site—`free_bytes` in the poller is still the correct numerator for a linear eta_full_days when your feed is consistent.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: VMware vSAN health via vCenter TA or custom API collector.\n• Ensure the following data sources are available: `index=hci` `sourcetype=\"vsan:diskgroup\"` with fields `cluster`, `host`, `dg_name`, `used_pct`, `state`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest vSAN disk group metrics from RVC or vSAN SDK exporter; (2) warn at 80% used and critical at 90%; (3) page on any disk group not mounted; (4) correlate with physical disk SMART (UC-19.2.5).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"vsan:diskgroup\" earliest=-1h\n| stats latest(used_pct) as used_pct, latest(state) as st by cluster, host, dg_name\n| where used_pct>80 OR st!=\"mounted\"\n| sort -used_pct\n| table cluster, host, dg_name, used_pct, st\n```\n\nUnderstanding this SPL\n\n**vSAN Disk Group Capacity Headroom and Mount State** — Disk groups nearing full capacity slow resyncs and increase rebuild time, which extends windows of reduced fault tolerance. Monitoring per–disk group free space and mount state prevents surprise admission control during failures.\n\nDocumented **Data sources**: `index=hci` `sourcetype=\"vsan:diskgroup\"` with fields `cluster`, `host`, `dg_name`, `used_pct`, `state`. **App/TA** (typical add-on context): VMware vSAN health via vCenter TA or custom API collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: vsan:diskgroup. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"vsan:diskgroup\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster, host, dg_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where used_pct>80 OR st!=\"mounted\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **vSAN Disk Group Capacity Headroom and Mount State**): table cluster, host, dg_name, used_pct, st\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.Storage by Performance.host | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**vSAN Disk Group Capacity Headroom and Mount State** — Disk groups nearing full capacity slow resyncs and increase rebuild time, which extends windows of reduced fault tolerance. Monitoring per–disk group free space and mount state prevents surprise admission control during failures.\n\nDocumented **Data sources**: `index=hci` `sourcetype=\"vsan:diskgroup\"` with fields `cluster`, `host`, `dg_name`, `used_pct`, `state`. **App/TA** (typical add-on context): VMware vSAN health via vCenter TA or custom API collector. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.Storage` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (used % per DG), Table (critical disk groups), Heatmap (host × DG utilization).",
              "script": "",
              "premium": "",
              "hw": "vSAN ReadyNodes, Dell VxRail with vSAN",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We watch how full the shared storage really is after the system keeps space for big repairs and moves—like a clear spot in a packed moving van for a hand truck. If you only count boxes, you think there is room when the next move or repair would still be blocked. We help the right people see that margin in time, before new work, moves, or snapshots are refused.",
              "mtype": [
                "Capacity",
                "Availability"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.Storage by Performance.host | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_vcenter"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
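The 19.2.27 implementation defines two tiers (warn at 80% used, critical at 90%) plus an immediate page for any unmounted disk group. A sketch that makes the severity explicit with `case()`, assuming the same fields, so a single search can feed both a warning and a paging alert:

```spl
index=hci sourcetype="vsan:diskgroup" earliest=-1h
| stats latest(used_pct) as used_pct, latest(state) as st by cluster, host, dg_name
| eval severity=case(st!="mounted", "critical",
                     used_pct>90, "critical",
                     used_pct>80, "warning",
                     true(), "ok")
| where severity!="ok"
| sort -used_pct
| table cluster, host, dg_name, used_pct, st, severity
```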
            {
              "i": "19.2.28",
              "n": "Nutanix Storage Pool Erasure Coding vs RF Footprint",
              "c": "medium",
              "f": "advanced",
              "v": "Erasure coding reduces physical footprint but increases rebuild amplification on dense clusters. Tracking EC-enabled containers versus replication factor helps right-size policies and avoid capacity cliffs during failure events.",
              "t": "`TA-nutanix`, Prism storage summary",
              "d": "`index=hci` `sourcetype=\"nutanix:storage_pool\"` with fields `cluster`, `storage_pool`, `ec_enabled`, `rf`, `used_logical_tb`, `used_physical_tb`",
              "q": "index=hci sourcetype=\"nutanix:storage_pool\" earliest=-4h\n| stats latest(ec_enabled) as ec, latest(rf) as rf, latest(used_logical_tb) as log_tb, latest(used_physical_tb) as phys_tb by cluster, storage_pool\n| eval overhead_ratio=round(phys_tb/nullif(log_tb,0), 2)\n| where ec==\"true\" OR ec==\"1\"\n| table cluster, storage_pool, rf, log_tb, phys_tb, overhead_ratio\n| sort -overhead_ratio",
              "m": "(1) Poll storage pool summary including EC flags; (2) baseline overhead ratio per pool design; (3) alert when overhead spikes vs 30-day median indicating rebuild or mis-tuned EC stripe width.",
              "z": "Bar chart (overhead by pool), Table (EC pools), Line chart (logical vs physical trend).",
              "kfp": "Cluster scale-out windows can leave fault_domain_count stale for one or two polls while hosts are cabled but not yet in the exported domain model; pair with freeze metadata before critical. Planned EC migrations from RF2 to EC-4+2 legitimately raise rwaf_estimated while capacity_overhead_actual_ratio improves; keep ec_target_scheme and target_max_rwaf aligned during the cutover week. Nutanix rack-aware clusters sometimes count racks that differ from node totals; reconcile UC-19.2.16 topology feeds before trusting a lone numeric gap. Azure Stack HCI mirror-accelerated parity tier rebalances during heavy random writes can swing used_physical_gb without any EC toggle. vSAN proactive rebalance after disk group replacement elevates component traffic and may nudge dedupe_compression_ratio in capacity feeds without an EC breach. Skyline calibration imports that lag by a day can make empirical bandwidth estimates look noisier than the pure RWAF model predicts.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nutanix`, Prism storage summary.\n• Ensure the following data sources are available: `index=hci` `sourcetype=\"nutanix:storage_pool\"` with fields `cluster`, `storage_pool`, `ec_enabled`, `rf`, `used_logical_tb`, `used_physical_tb`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Poll storage pool summary including EC flags; (2) baseline overhead ratio per pool design; (3) alert when overhead spikes vs 30-day median indicating rebuild or mis-tuned EC stripe width.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"nutanix:storage_pool\" earliest=-4h\n| stats latest(ec_enabled) as ec, latest(rf) as rf, latest(used_logical_tb) as log_tb, latest(used_physical_tb) as phys_tb by cluster, storage_pool\n| eval overhead_ratio=round(phys_tb/nullif(log_tb,0), 2)\n| where ec==\"true\" OR ec==\"1\"\n| table cluster, storage_pool, rf, log_tb, phys_tb, overhead_ratio\n| sort -overhead_ratio\n```\n\nUnderstanding this SPL\n\n**Nutanix Storage Pool Erasure Coding vs RF Footprint** — Erasure coding reduces physical footprint but increases rebuild amplification on dense clusters. Tracking EC-enabled containers versus replication factor helps right-size policies and avoid capacity cliffs during failure events.\n\nDocumented **Data sources**: `index=hci` `sourcetype=\"nutanix:storage_pool\"` with fields `cluster`, `storage_pool`, `ec_enabled`, `rf`, `used_logical_tb`, `used_physical_tb`. **App/TA** (typical add-on context): `TA-nutanix`, Prism storage summary. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: nutanix:storage_pool. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"nutanix:storage_pool\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster, storage_pool** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **overhead_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ec==\"true\" OR ec==\"1\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Nutanix Storage Pool Erasure Coding vs RF Footprint**): table cluster, storage_pool, rf, log_tb, phys_tb, overhead_ratio\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (overhead by pool), Table (EC pools), Line chart (logical vs physical trend).",
              "script": "",
              "premium": "",
              "hw": "Nutanix clusters with EC-enabled containers",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We check whether each storage pool picked a space-saving erasure layout that needs more separate failure zones than the cluster truly has, and we estimate how heavy rebuild traffic could get after a disk fails. We also watch whether the ratio of raw space used to logical data drifts away from what that layout promises, so teams fix policy mistakes during planning instead of during a slow recovery.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
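Step 3 of the 19.2.28 implementation compares the current overhead ratio against a per-pool 30-day baseline. A sketch of that comparison using a median rather than a lookup, assuming the same `nutanix:storage_pool` fields; the 20% spike threshold is an illustrative placeholder to tune per EC design:

```spl
index=hci sourcetype="nutanix:storage_pool" earliest=-30d@d
| bin _time span=1d
| stats latest(used_logical_tb) as log_tb, latest(used_physical_tb) as phys_tb by _time, cluster, storage_pool
| eval overhead_ratio=round(phys_tb/nullif(log_tb,0),2)
| eventstats median(overhead_ratio) as median_overhead by cluster, storage_pool
| where _time>=relative_time(now(),"-1d@d") AND overhead_ratio > median_overhead*1.2
| table cluster, storage_pool, overhead_ratio, median_overhead
```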
            {
              "i": "19.2.29",
              "n": "Nutanix Controller VM Storage Bandwidth Saturation",
              "c": "high",
              "f": "intermediate",
              "v": "When CVM front-end bandwidth saturates, latency rises for every VM on the node regardless of guest CPU. Spotting sustained saturation drives rebalance, network uplink upgrades, or noisy-neighbor containment.",
              "t": "`TA-nutanix`, Prism metrics",
              "d": "`index=hci` `sourcetype=\"nutanix:cvm:metrics\"` with fields `node`, `read_mbps`, `write_mbps`, `link_speed_mbps`",
              "q": "index=hci sourcetype=\"nutanix:cvm:metrics\" earliest=-4h\n| eval total_mbps=read_mbps+write_mbps\n| eval util_pct=round(100*total_mbps/nullif(link_speed_mbps,0),1)\n| stats perc95(util_pct) as p95_util by node\n| where p95_util>75\n| sort -p95_util",
              "m": "(1) Collect per-CVM throughput and negotiated link speed; (2) alert when 95th percentile utilization exceeds 75% for one hour; (3) correlate with rebuild tasks (UC-19.2.9) and snapshot storms.",
              "z": "Line chart (Mbps per node), Table (saturated CVMs), Gauge (peak utilization).",
              "kfp": "Async or NearSync protection schedules that run wide replication windows to a second site can produce scheduled megabit bursts on the CVM path that are policy-driven, not a surprise fault; tag those windows. Nutanix ADS- or DRS-style storage rebalance, or ingest of a new block after expansion, can legitimately load east-west and storage-facing traffic. LCM or one-click AOS and firmware cycles can briefly change traffic patterns or evacuate guests so the CVM path spikes while health remains green, which is a change window, not a mystery incident. A one-time full resync of a replaced drive or a repair stream after a disk event can hover near the cap for the hours that job needs; you still want this use case to record the fact, but you should classify the ticket with UC-19.2.9 context instead of a pure capacity purchase.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `TA-nutanix`, Prism metrics.\n• Ensure the following data sources are available: `index=hci` `sourcetype=\"nutanix:cvm:metrics\"` with fields `node`, `read_mbps`, `write_mbps`, `link_speed_mbps`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Collect per-CVM throughput and negotiated link speed; (2) alert when 95th percentile utilization exceeds 75% for one hour; (3) correlate with rebuild tasks (UC-19.2.9) and snapshot storms.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"nutanix:cvm:metrics\" earliest=-4h\n| eval total_mbps=read_mbps+write_mbps\n| eval util_pct=round(100*total_mbps/nullif(link_speed_mbps,0),1)\n| stats perc95(util_pct) as p95_util by node\n| where p95_util>75\n| sort -p95_util\n```\n\nUnderstanding this SPL\n\n**Nutanix Controller VM Storage Bandwidth Saturation** — When CVM front-end bandwidth saturates, latency rises for every VM on the node regardless of guest CPU. Spotting sustained saturation drives rebalance, network uplink upgrades, or noisy-neighbor containment.\n\nDocumented **Data sources**: `index=hci` `sourcetype=\"nutanix:cvm:metrics\"` with fields `node`, `read_mbps`, `write_mbps`, `link_speed_mbps`. **App/TA** (typical add-on context): `TA-nutanix`, Prism metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: nutanix:cvm:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"nutanix:cvm:metrics\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **total_mbps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by node** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_util>75` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Nutanix Controller VM Storage Bandwidth Saturation** — When CVM front-end bandwidth saturates, latency rises for every VM on the node regardless of guest CPU. Spotting sustained saturation drives rebalance, network uplink upgrades, or noisy-neighbor containment.\n\nDocumented **Data sources**: `index=hci` `sourcetype=\"nutanix:cvm:metrics\"` with fields `node`, `read_mbps`, `write_mbps`, `link_speed_mbps`. **App/TA** (typical add-on context): `TA-nutanix`, Prism metrics. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (Mbps per node), Table (saturated CVMs), Gauge (peak utilization).",
              "script": "",
              "premium": "",
              "hw": "Nutanix AOS nodes (10/25 GbE uplinks)",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We check how full the data-carrying link is between the machines that store your virtual disks, not just how long each disk wait feels. If that link is the bottleneck, a lot of users feel it at the same time, and we can tell the people who fix networks and who move virtual machines to spread the load before everyone calls at once.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count",
              "e": [
                "nutanix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
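The alert condition for 19.2.29 is a 95th-percentile utilization above 75% sustained for one hour, while the documented search measures a four-hour p95. A sketch of the sustained one-hour reading, assuming the same `nutanix:cvm:metrics` fields and a collection interval no coarser than five minutes:

```spl
index=hci sourcetype="nutanix:cvm:metrics" earliest=-60m
| eval util_pct=round(100*(read_mbps+write_mbps)/nullif(link_speed_mbps,0),1)
| bin _time span=5m
| stats avg(util_pct) as util_pct by _time, node
| stats count as intervals, count(eval(util_pct>75)) as hot_intervals, max(util_pct) as peak_util by node
| where intervals>=10 AND hot_intervals>=intervals
| table node, intervals, hot_intervals, peak_util
```

Requiring every five-minute bucket to clear the threshold is a strict reading of sustained saturation; relax `hot_intervals>=intervals` to a ratio if brief dips are acceptable.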
            {
              "i": "19.2.30",
              "n": "vSAN Component Overhead and Resync Backlog Depth",
              "c": "high",
              "f": "advanced",
              "v": "Deep component resync backlogs extend exposure after disk or host loss. Monitoring active resync bytes and component counts helps prioritize maintenance and throttle non-critical workloads until the cluster is healthy again.",
              "t": "vSAN health API, vCenter TA",
              "d": "`index=hci` `sourcetype=\"vsan:resync\"` with fields `cluster`, `host`, `active_resync_bytes`, `components_resyncing`",
              "q": "index=hci sourcetype=\"vsan:resync\" earliest=-24h\n| stats latest(active_resync_bytes) as bytes, latest(components_resyncing) as comps by cluster, host\n| eval gb=round(bytes/1073741824,2)\n| where gb>0.5 OR comps>500\n| sort -gb\n| table cluster, host, gb, comps",
              "m": "(1) Export vSAN resync statistics to Splunk on 5-minute intervals; (2) alert when backlog exceeds operational thresholds; (3) overlay with adaptive resync policy changes; (4) report ETA from vSAN health where available.",
              "z": "Area chart (resync GB over time), Table (hosts with largest backlog), Single value (total active resync GB).",
              "kfp": "After adding fresh capacity to a disk group, vSAN may run a large proactive rebalance that resembles disaster recovery even though policy compliance is improving; cross-check Skyline rebalance triggers before a Sev-2. Skyline Health can initiate proactive resync checks that temporarily lift component counts without hardware loss. Adaptive resync during business hours legitimately throttles rebuild streams when congestion sensors fire, producing long ETA values that should not drive the same response as hardware faults. A witness metadata appliance reboot can leave a small, short-lived witness component repair that should stay below your witness noise floor. A storage policy downgrade or RAID geometry edit can legitimately burst resync traffic until the new layout stabilises; pair Splunk rows with the matching change record.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: vSAN health API, vCenter TA.\n• Ensure the following data sources are available: `index=hci` `sourcetype=\"vsan:resync\"` with fields `cluster`, `host`, `active_resync_bytes`, `components_resyncing`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export vSAN resync statistics to Splunk on 5-minute intervals; (2) alert when backlog exceeds operational thresholds; (3) overlay with adaptive resync policy changes; (4) report ETA from vSAN health where available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"vsan:resync\" earliest=-24h\n| stats latest(active_resync_bytes) as bytes, latest(components_resyncing) as comps by cluster, host\n| eval gb=round(bytes/1073741824,2)\n| where gb>0.5 OR comps>500\n| sort -gb\n| table cluster, host, gb, comps\n```\n\nUnderstanding this SPL\n\n**vSAN Component Overhead and Resync Backlog Depth** — Deep component resync backlogs extend exposure after disk or host loss. Monitoring active resync bytes and component counts helps prioritize maintenance and throttle non-critical workloads until the cluster is healthy again.\n\nDocumented **Data sources**: `index=hci` `sourcetype=\"vsan:resync\"` with fields `cluster`, `host`, `active_resync_bytes`, `components_resyncing`. **App/TA** (typical add-on context): vSAN health API, vCenter TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: vsan:resync. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"vsan:resync\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gb>0.5 OR comps>500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **vSAN Component Overhead and Resync Backlog Depth**): table cluster, host, gb, comps\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.Network by Performance.host | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**vSAN Component Overhead and Resync Backlog Depth** — Deep component resync backlogs extend exposure after disk or host loss. Monitoring active resync bytes and component counts helps prioritize maintenance and throttle non-critical workloads until the cluster is healthy again.\n\nDocumented **Data sources**: `index=hci` `sourcetype=\"vsan:resync\"` with fields `cluster`, `host`, `active_resync_bytes`, `components_resyncing`. **App/TA** (typical add-on context): vSAN health API, vCenter TA. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.Network` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (resync GB over time), Table (hosts with largest backlog), Single value (total active resync GB).",
              "script": "",
              "premium": "",
              "hw": "VMware vSAN stretched and standard clusters",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "When shared storage repairs itself after hardware work, copies can pile up on one machine and make the finish time hard to guess. We track those waiting bytes, name the busiest machine, see whether the system is slowing on purpose, and tell your team when it is safe to ease off other heavy storage tasks.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.Network by Performance.host | sort - count",
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_vcenter"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
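Step 4 of the 19.2.30 implementation prefers the ETA that vSAN health reports; where the feed lacks one, a rough drain-rate estimate can be derived from the backlog trend itself. A sketch under that assumption against the same `vsan:resync` feed (the 30-minute sampling and the use of only positive drain steps are illustrative choices, not catalog values):

```spl
index=hci sourcetype="vsan:resync" earliest=-6h
| bin _time span=30m
| stats latest(active_resync_bytes) as bytes by _time, cluster
| eval gb=round(bytes/1073741824,2)
| streamstats current=f window=1 last(gb) as prev_gb by cluster
| eval drained_gb=prev_gb-gb
| stats latest(gb) as backlog_gb,
        avg(eval(if(drained_gb>0, drained_gb, null()))) as avg_drain_gb_per_30m by cluster
| eval eta_hours=round(backlog_gb/(avg_drain_gb_per_30m*2),1)
| where backlog_gb>0
| table cluster, backlog_gb, avg_drain_gb_per_30m, eta_hours
```

If no interval shows a positive drain, `eta_hours` stays null, which is itself a useful signal that the backlog is not shrinking.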
            {
              "i": "19.2.31",
              "n": "Nutanix Async Remote Site Replication Lag and RPO Risk",
              "c": "high",
              "f": "intermediate",
              "v": "Protection domain replication lag directly affects recovery point objectives for DR sites. Sustained lag beyond policy means a regional failure could lose more data than the business expects, and backlogs slow catch-up after link restoration.",
              "t": "Custom (Nutanix Prism Element REST API)",
              "d": "`index=hci` `sourcetype=\"nutanix:remote_site\"` with fields `cluster`, `remote_site`, `curator_repl_lag_sec`, `near_sync_status`, `last_successful_snapshot`",
              "q": "index=hci sourcetype=\"nutanix:remote_site\" earliest=-4h\n| eval snap_age_sec=now()-strptime(last_successful_snapshot,\"%Y-%m-%dT%H:%M:%SZ\")\n| where curator_repl_lag_sec>600 OR snap_age_sec>3600 OR lower(near_sync_status)!=\"active\"\n| stats latest(curator_repl_lag_sec) as lag_sec, latest(snap_age_sec) as snap_age by cluster, remote_site\n| sort -lag_sec",
              "m": "(1) Poll remote site and protection domain replication metrics from Prism; (2) set lag and snapshot-age thresholds from documented RPO; (3) alert when NearSync falls out of active or async lag exceeds SLA; (4) correlate with WAN utilization and snapshot schedules.",
              "z": "Table (sites over RPO), Line chart (replication lag), Single value (worst lag seconds).",
              "kfp": "A planned re-protect or re-seed after storage expansion can force Cerebro to ship a full fresh snapshot, so `rpo_seconds_observed` balloons for hours with an approved change record even though the environment is under control. Spring or autumn daylight-saving jumps can shift the comparison between epoch milliseconds emitted by the poller and `now()` by one wall-clock hour, producing a one-off false breach until NTP and locale settings align on the collector. During a scheduled Metro site survivability test, administrators toggle Metro Availability; Splunk can briefly list `metro_state` outside Enabled even though the async leg is irrelevant to that test window. A Curator deep scrub that pauses background replication to protect foreground I/O can spike `pending_replication_count` for a few intervals without a WAN fault—cross-check CVM Curator throttling in Prism before you blame the link. If snapshot retention expires a remote snapshot in the same minute a poller sample runs, you may see a momentary `pending_replication_count` tick that clears on the next Cerebro pass. HYCU- or Veeam-driven temporary PD pauses for application-consistent quiesce are outside this index and can look like silence until you read the application backup runbook. Finally, a mis-keyed `rpo_objective_seconds` entry of zero in the lookup makes `rpo_breach_ratio` null and can hide a row during QA until CSV governance catches the typo—treat that as a data-quality alert, not a storage outage.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (Nutanix Prism Element REST API).\n• Ensure the following data sources are available: `index=hci` `sourcetype=\"nutanix:remote_site\"` with fields `cluster`, `remote_site`, `curator_repl_lag_sec`, `near_sync_status`, `last_successful_snapshot`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Poll remote site and protection domain replication metrics from Prism; (2) set lag and snapshot-age thresholds from documented RPO; (3) alert when NearSync falls out of active or async lag exceeds SLA; (4) correlate with WAN utilization and snapshot schedules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hci sourcetype=\"nutanix:remote_site\" earliest=-4h\n| eval snap_age_sec=now()-strptime(last_successful_snapshot,\"%Y-%m-%dT%H:%M:%SZ\")\n| where curator_repl_lag_sec>600 OR snap_age_sec>3600 OR lower(near_sync_status)!=\"active\"\n| stats latest(curator_repl_lag_sec) as lag_sec, latest(snap_age_sec) as snap_age by cluster, remote_site\n| sort -lag_sec\n```\n\nUnderstanding this SPL\n\n**Nutanix Async Remote Site Replication Lag and RPO Risk** — Protection domain replication lag directly affects recovery point objectives for DR sites. Sustained lag beyond policy means a regional failure could lose more data than the business expects, and backlogs slow catch-up after link restoration.\n\nDocumented **Data sources**: `index=hci` `sourcetype=\"nutanix:remote_site\"` with fields `cluster`, `remote_site`, `curator_repl_lag_sec`, `near_sync_status`, `last_successful_snapshot`. **App/TA** (typical add-on context): Custom (Nutanix Prism Element REST API). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hci; **sourcetype**: nutanix:remote_site. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hci, sourcetype=\"nutanix:remote_site\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **snap_age_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where curator_repl_lag_sec>600 OR snap_age_sec>3600 OR lower(near_sync_status)!=\"active\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster, remote_site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sites over RPO), Line chart (replication lag), Single value (worst lag seconds).",
              "script": "",
              "premium": "",
              "hw": "Nutanix clusters with remote-site replication",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We check that copies of your important data keep arriving at the other site on time, so a fire or power loss would not make you give up more work than the team already agreed in writing. When the line falls behind, we name the data group, the site pair, and who owns the fix so the right people are called, not a random inbox.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "nutanix"
              ],
              "em": [
                "nutanix_prism_element"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.2.32",
              "n": "VxRail vCenter Extension and Marvin Plugin Health",
              "c": "high",
              "f": "intermediate",
              "v": "VxRail management extensions integrate cluster operations with vCenter. Plugin or API failures block LCM workflows and mask host issues from operators who rely on the VxRail UI.",
              "t": "Custom (VxRail Manager REST API), vCenter logs",
              "d": "`index=vxrail` `sourcetype=\"vxrail:plugin\"` with fields `cluster_id`, `plugin_state`, `last_heartbeat`, `api_error_count`",
              "q": "index=vxrail sourcetype=\"vxrail:plugin\" earliest=-24h\n| eval hb_age_sec=now()-last_heartbeat\n| where plugin_state!=\"healthy\" OR hb_age_sec>600 OR api_error_count>0\n| stats latest(plugin_state) as st, max(hb_age_sec) as max_hb_lag, sum(api_error_count) as errs by cluster_id\n| sort -errs",
              "m": "(1) Collect plugin and internal API health from VxRail Manager or automation probes; (2) alert on unhealthy state, stale heartbeat, or rising API errors; (3) correlate with vCenter service restarts and SSO certificate rotations.",
              "z": "Table (cluster plugin status), Single value (clusters with errors), Timeline (heartbeat gaps).",
              "kfp": "Planned VxRail Manager upgrade replays extension registration and can raise transient vmware:vc:event counts until the platform stabilizes. Planned vCenter upgrade or patching removes and reinstalls client extensions with similar short lived bursts. Supervised Tomcat restart for log maintenance yields brief REST gaps that clear on the next poll. Quarterly SSO password rotation for the mystic style service account changes trust until operators update VxRail; treat as expected when the rotation ticket is open. Enterprise root or issuing CA renewal for vCenter raises new trust chain fingerprints without implying outage. HCI Mesh discovery sweeps may spike lcm component warnings for minutes during inventory growth.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (VxRail Manager REST API), vCenter logs.\n• Ensure the following data sources are available: `index=vxrail` `sourcetype=\"vxrail:plugin\"` with fields `cluster_id`, `plugin_state`, `last_heartbeat`, `api_error_count`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Collect plugin and internal API health from VxRail Manager or automation probes; (2) alert on unhealthy state, stale heartbeat, or rising API errors; (3) correlate with vCenter service restarts and SSO certificate rotations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vxrail sourcetype=\"vxrail:plugin\" earliest=-24h\n| eval hb_age_sec=now()-last_heartbeat\n| where plugin_state!=\"healthy\" OR hb_age_sec>600 OR api_error_count>0\n| stats latest(plugin_state) as st, max(hb_age_sec) as max_hb_lag, sum(api_error_count) as errs by cluster_id\n| sort -errs\n```\n\nUnderstanding this SPL\n\n**VxRail vCenter Extension and Marvin Plugin Health** — VxRail management extensions integrate cluster operations with vCenter. Plugin or API failures block LCM workflows and mask host issues from operators who rely on the VxRail UI.\n\nDocumented **Data sources**: `index=vxrail` `sourcetype=\"vxrail:plugin\"` with fields `cluster_id`, `plugin_state`, `last_heartbeat`, `api_error_count`. **App/TA** (typical add-on context): Custom (VxRail Manager REST API), vCenter logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vxrail; **sourcetype**: vxrail:plugin. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vxrail, sourcetype=\"vxrail:plugin\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hb_age_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where plugin_state!=\"healthy\" OR hb_age_sec>600 OR api_error_count>0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cluster plugin status), Single value (clusters with errors), Timeline (heartbeat gaps).",
              "script": "",
              "premium": "",
              "hw": "Dell VxRail with integrated vCenter plugin",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-25",
              "sver": "",
              "rby": "",
              "ge": "We watch the admin screens that connect VMware vCenter to the VxRail control service, the login trust that keeps that connection honest, and the timers that run background health checks. We raise a clear signal when those control pieces break so repairs happen before someone walks into a change window with a frozen console.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware",
                "vxrail"
              ],
              "em": [
                "vmware_vcenter"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.5,
          "qd": {
            "gold": 0,
            "silver": 3,
            "bronze": 29,
            "none": 0
          }
        },
        {
          "i": "19.3",
          "n": "Azure Stack HCI",
          "u": [
            {
              "i": "19.3.1",
              "n": "Azure Stack HCI Cluster Validation and Quorum Health",
              "c": "critical",
              "f": "intermediate",
              "v": "Failed cluster validation or quorum issues can pause live migration and stretch failover. Centralizing validation results and witness reachability in Splunk reduces time to restore majority before a second fault impacts VMs.",
              "t": "Splunk Add-on for Microsoft Cloud Services, Windows Event Forwarding",
              "d": "`index=azure_stack_hci` `sourcetype=\"azurestackhci:cluster\"` with fields `cluster_name`, `validation_status`, `quorum_type`, `witness_reachable`",
              "q": "index=azure_stack_hci sourcetype=\"azurestackhci:cluster\" earliest=-24h\n| stats latest(validation_status) as val, latest(witness_reachable) as wit by cluster_name\n| where val!=\"Passed\" OR wit=\"false\" OR wit=0\n| table cluster_name, val, wit",
              "m": "(1) Ship Cluster-Witness and validation cmdlet output from automation to HEC or scripted input; (2) alert on any non-passed validation or witness unreachable; (3) document witness repair steps for file share vs cloud witness.",
              "z": "Status grid (cluster validation), Table (failing checks), Single value (clusters at risk).",
              "kfp": "**Rolling quorum adjustments** during Invoke-CauRun while DynamicQuorum reshuffles NodeWeight mid-drain—Event **1606** looks benign yet paging lacks CAU phase tags unless enrichment lookups supply MaintenanceWindow IDs tied to Invoke-CauRun job names. **Long-running Test-Cluster suites** (**Inventory + Network + Storage + Hyper-V Configuration**) overlap nightly quiet hours; scripted validation_overall=Warning rows surface before engineers finish HTML triage—suppress until ClusterValidationLog severity escalates beyond informational-only findings. **Azure Blob fabric refreshes** occasionally inflate witness_https_ok latency toward HTTPS timeouts absent HCI faults—cross-check bursts of Event **1566** against azure:resource_health storage incidents before paging storage SMEs. **Stretch rehearsal drills** that Stop-Service ClusSvc at one site duplicate Event **1135** storms across observers—exclude rehearsal hosts via CMDB rehearsal=true markers so quorum drills never mimic outages. **NetFT silence during converged NIC QoS edits** resembles quorum distress until Get-ClusterNetwork priorities stabilize—pair Splunk alerts with Network ATC intent logs on Windows Server **2025** automated deployments. **Antivirus pauses on File Share Witness UNC paths** mimic ACL-loss signatures identical to transient Event **1564** spikes—confirm NTFS Cluster Name Object ACE audits rather than reacting solely to Splunk spikes.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services, Windows Event Forwarding.\n• Ensure the following data sources are available: `index=azure_stack_hci` `sourcetype=\"azurestackhci:cluster\"` with fields `cluster_name`, `validation_status`, `quorum_type`, `witness_reachable`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ship Cluster-Witness and validation cmdlet output from automation to HEC or scripted input; (2) alert on any non-passed validation or witness unreachable; (3) document witness repair steps for file share vs cloud witness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure_stack_hci sourcetype=\"azurestackhci:cluster\" earliest=-24h\n| stats latest(validation_status) as val, latest(witness_reachable) as wit by cluster_name\n| where val!=\"Passed\" OR wit=\"false\" OR wit=0\n| table cluster_name, val, wit\n```\n\nUnderstanding this SPL\n\n**Azure Stack HCI Cluster Validation and Quorum Health** — Failed cluster validation or quorum issues can pause live migration and stretch failover. Centralizing validation results and witness reachability in Splunk reduces time to restore majority before a second fault impacts VMs.\n\nDocumented **Data sources**: `index=azure_stack_hci` `sourcetype=\"azurestackhci:cluster\"` with fields `cluster_name`, `validation_status`, `quorum_type`, `witness_reachable`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services, Windows Event Forwarding. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure_stack_hci; **sourcetype**: azurestackhci:cluster. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure_stack_hci, sourcetype=\"azurestackhci:cluster\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where val!=\"Passed\" OR wit=\"false\" OR wit=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Azure Stack HCI Cluster Validation and Quorum Health**): table cluster_name, val, wit\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (cluster validation), Table (failing checks), Single value (clusters at risk).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "Azure Stack HCI validated server catalog nodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch whether voting members stay healthy and whether the tie-break share stays reachable so the cluster picks who stays online—because broken vote math freezes apps before guests show symptoms.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.3.2",
              "n": "Storage Spaces Direct Pool Utilization and Tier Imbalance",
              "c": "high",
              "f": "intermediate",
              "v": "S2D pools that skew toward capacity tier without enough flash cache increase latency for tiered volumes. Tracking pool free space and cache/capacity ratio preserves predictable VM performance under burst workloads.",
              "t": "Splunk Add-on for Microsoft Cloud Services, Azure Monitor metrics",
              "d": "`index=azure_stack_hci` `sourcetype=\"azurestackhci:s2d_pool\"` with fields `cluster_name`, `pool_name`, `used_pct`, `cache_tb`, `capacity_tb`",
              "q": "index=azure_stack_hci sourcetype=\"azurestackhci:s2d_pool\" earliest=-2h\n| stats latest(used_pct) as used, latest(cache_tb) as cache_tb, latest(capacity_tb) as cap_tb by cluster_name, pool_name\n| eval cache_share=round(100*cache_tb/nullif(cache_tb+cap_tb,0),1)\n| where used>80 OR cache_share<15\n| table cluster_name, pool_name, used, cache_share, cache_tb, cap_tb\n| sort -used",
              "m": "(1) Ingest `Get-StoragePool` and tier capacity metrics on 15-minute cadence; (2) warn at 80% pool used and critical at 90%; (3) alert when cache share drops below policy for all-flash vs hybrid designs.",
              "z": "Gauge (pool used %), Bar chart (cache vs capacity TB), Table (imbalanced pools).",
              "kfp": "Bulk VM template imports during tenant onboarding temporarily push AllocatedSize toward Size because golden images hydrate replicas quickly; suppress capacity_critical until a bulk_seed_campaign enrichment clears while Optimize and Rebuild queues remain healthy.\n\nOptimize-StoragePool jobs deliberately reshape slabs: Get-StorageJob PercentComplete under 95 with multi-hour EtaSecondsRemaining can look like imbalance even though Get-StorageTier AllocatedSize ratios stabilize once Optimize completes.\n\nResiliency migrations (three-way Mirror toward MirrorAcceleratedParity) inflate FootprintOnPool transiently versus logical Size while NumberOfColumns changes finalize—delay tier_ops_review until Get-VirtualDisk OperationalStatus returns OK on every node.\n\nCapacity-disk replacements invoke Rebuild, Regeneration, and Reallocation tasks that temporarily consume slack similar to exhaustion; confirm Repair-VirtualDisk timelines and verify there is no Lost Communication storm before reacting only to percentages.\n\nShrink-volume maintenance reduces logical Size while FootprintOnPool lags garbage collection, so mismatches resemble footprint faults until Optimize or Repair drains complete.\n\nThin-provision churn without tier edits can surface parity_heavy noise when NumberOfColumns spikes only because stats merged two FriendlyName collisions from throwaway volumes—deduplicate pool_label extraction and require two consecutive polling cycles before escalating.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services, Azure Monitor metrics.\n• Ensure the following data sources are available: `index=azure_stack_hci` `sourcetype=\"azurestackhci:s2d_pool\"` with fields `cluster_name`, `pool_name`, `used_pct`, `cache_tb`, `capacity_tb`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest `Get-StoragePool` and tier capacity metrics on 15-minute cadence; (2) warn at 80% pool used and critical at 90%; (3) alert when cache share drops below policy for all-flash vs hybrid designs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure_stack_hci sourcetype=\"azurestackhci:s2d_pool\" earliest=-2h\n| stats latest(used_pct) as used, latest(cache_tb) as cache_tb, latest(capacity_tb) as cap_tb by cluster_name, pool_name\n| eval cache_share=round(100*cache_tb/nullif(cache_tb+cap_tb,0),1)\n| where used>80 OR cache_share<15\n| table cluster_name, pool_name, used, cache_share, cache_tb, cap_tb\n| sort -used\n```\n\nUnderstanding this SPL\n\n**Storage Spaces Direct Pool Utilization and Tier Imbalance** — S2D pools that skew toward capacity tier without enough flash cache increase latency for tiered volumes. Tracking pool free space and cache/capacity ratio preserves predictable VM performance under burst workloads.\n\nDocumented **Data sources**: `index=azure_stack_hci` `sourcetype=\"azurestackhci:s2d_pool\"` with fields `cluster_name`, `pool_name`, `used_pct`, `cache_tb`, `capacity_tb`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services, Azure Monitor metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure_stack_hci; **sourcetype**: azurestackhci:s2d_pool. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure_stack_hci, sourcetype=\"azurestackhci:s2d_pool\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cluster_name, pool_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cache_share** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where used>80 OR cache_share<15` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Storage Spaces Direct Pool Utilization and Tier Imbalance**): table cluster_name, pool_name, used, cache_share, cache_tb, cap_tb\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Storage Spaces Direct Pool Utilization and Tier Imbalance** — S2D pools that skew toward capacity tier without enough flash cache increase latency for tiered volumes. 
Tracking pool free space and cache/capacity ratio preserves predictable VM performance under burst workloads.\n\nDocumented **Data sources**: `index=azure_stack_hci` `sourcetype=\"azurestackhci:s2d_pool\"` with fields `cluster_name`, `pool_name`, `used_pct`, `cache_tb`, `capacity_tb`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services, Azure Monitor metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (pool used %), Bar chart (cache vs capacity TB), Table (imbalanced pools).",
              "script": "",
              "premium": "",
              "hw": "Azure Stack HCI with NVMe/SAS capacity tiers",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch how full the shared storage pool is and whether the space is spread fairly across tiers and nodes before everyday work suddenly cannot create new disks. We also catch long rebalance jobs that look like imbalance but are simply the cluster moving slabs to a safer layout.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count",
              "e": [
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.3.3",
              "n": "VM Placement and Live Migration Failure Rate",
              "c": "high",
              "f": "intermediate",
              "v": "Failed live migrations strand VMs on stressed nodes and can interrupt maintenance. Trending Hyper-V migration failures by reason code highlights network, CSV, or CPU pressure before scheduled patching windows.",
              "t": "Windows Event Forwarding, Splunk Add-on for Microsoft Windows",
              "d": "`index=wineventlog` `sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-High-Availability/Admin\"` with fields `host`, `EventCode`, `Message`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-High-Availability/Admin\" earliest=-24h EventCode=21111 OR EventCode=21110\n| stats count as mig_fail by host, EventCode\n| where mig_fail>=3\n| sort -mig_fail",
              "m": "(1) Enable WEF subscription for Hyper-V HA admin log on all HCI nodes; (2) map EventCode meanings in a lookup; (3) alert on repeated migration failures per host; (4) correlate with CSV disconnect events in the same time window.",
              "z": "Bar chart (failures by host), Table (EventCode breakdown), Timeline (migration failures).",
              "kfp": "**Cluster-aware-update drain choreography** — **`Suspend-ClusterNode -Drain`** plus **`Move-ClusterVirtualMachineRole`** deliberately stacks **`22106`**→**`22112`** bursts across minutes while **`MaximumVirtualMachineMigrations`** stays elevated; mute alerts when **`MaintenanceWindow`** lookups tie **`invoke_cau`** job identifiers to those hosts. **Replica rehearsal mornings** — **`Start-VMFailover`** test steps alongside **`Resume-VMReplication`** churn mimic **`22113`** spikes absent structural faults because **`22114`** cancellations dominate scripted rollback drills—cross-check **`Hyper-V Replica`** planner logs before blaming **`SMB Direct`**. **GPU Discrete Device Assignment guests** remain pinned—**`22113`** pairs with **`lm_cancel`** simply prove **`DifferentNode`** desires conflict with hard **DDA** wiring; treat as informational unless destination **R2** removes **DDA** first. **RemoteFX-era vGPU assignments** still block online moves on older guest OS images; expect repeated **`22113`** strings until guests power down—suppress via **`hardware_profile_remotevx`** CMDB markers. **AVMA tenant mismatch** across stretched racks triggers **`22113`** when destination refuses activation—looks identical to networking failures except **`21024`** clusters near KMS chatter—route licensing SMEs instead of fabric rotations. **Compression-enabled migration waves** intentionally CPU-bound trade **`22112`** latency for narrower **`Perfmon`** **`pending_bytes`**—adjust thresholds upward during **`Set-VMHost`** experimental **`Compression`** pilots documented by infrastructure RFC numbers.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows Event Forwarding, Splunk Add-on for Microsoft Windows.\n• Ensure the following data sources are available: `index=wineventlog` `sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-High-Availability/Admin\"` with fields `host`, `EventCode`, `Message`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable WEF subscription for Hyper-V HA admin log on all HCI nodes; (2) map EventCode meanings in a lookup; (3) alert on repeated migration failures per host; (4) correlate with CSV disconnect events in the same time window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-High-Availability/Admin\" earliest=-24h EventCode=21111 OR EventCode=21110\n| stats count as mig_fail by host, EventCode\n| where mig_fail>=3\n| sort -mig_fail\n```\n\nUnderstanding this SPL\n\n**VM Placement and Live Migration Failure Rate** — Failed live migrations strand VMs on stressed nodes and can interrupt maintenance. Trending Hyper-V migration failures by reason code highlights network, CSV, or CPU pressure before scheduled patching windows.\n\nDocumented **Data sources**: `index=wineventlog` `sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-High-Availability/Admin\"` with fields `host`, `EventCode`, `Message`. **App/TA** (typical add-on context): Windows Event Forwarding, Splunk Add-on for Microsoft Windows. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-Hyper-V-High-Availability/Admin. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-Hyper-V-High-Availability/Admin\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, EventCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mig_fail>=3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failures by host), Table (EventCode breakdown), Timeline (migration failures).",
              "script": "",
              "premium": "",
              "hw": "Windows Server Azure Stack HCI hosts",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch guests sliding between Hyper-V hosts during drain sweeps or ordinary rebalances. When moves stutter, cancel, or clash with placement fences, we point it out early before workloads bunch onto one strained blade.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.3.4",
              "n": "Azure Arc for Servers Heartbeat and Extension Inventory",
              "c": "high",
              "f": "beginner",
              "v": "Arc is the control path for Azure Policy, Update Management, and hybrid monitoring on HCI nodes. Stale heartbeats or missing extensions mean patches and guest configuration are not enforced, increasing security and uptime risk.",
              "t": "Splunk Add-on for Microsoft Cloud Services (Azure Monitor / Resource Graph export)",
              "d": "`index=azure_monitor` `sourcetype=\"azure:arc:vm\"` with fields `resource_name`, `last_heartbeat_epoch`, `agent_version`, `extension_count`",
              "q": "index=azure_monitor sourcetype=\"azure:arc:vm\" earliest=-24h\n| eval hb_age_h=(now()-last_heartbeat_epoch)/3600\n| where hb_age_h>24 OR extension_count=0 OR isnull(extension_count)\n| stats max(hb_age_h) as max_offline_h by resource_name\n| sort -max_offline_h",
              "m": "(1) Export Arc-enabled machine inventory to Event Hub or blob and ingest via add-on; (2) normalize `last_status` to UTC; (3) alert when heartbeat older than 24h or expected extensions missing; (4) join with CMDB rack location for dispatch.",
              "z": "Table (stale Arc agents), Single value (machines without extensions), Map (site by resource group).",
              "kfp": "**Host shutdown nights** — **`Disconnected`** **`microsoft.hybridcompute/machines`** rows **`mirror`** **`planned`** **`power`** **`cuts`** **`when`** **`CMDB`** **`offline_schedule`** **`UTC`** **`tags`** **`stay`** **`fresh`** **`inside`** **`Splunk`** **`lookups`** **`before`** **`quiet-hours`** **`suppress`** **`tables`** **`engage`** **`identity`** **`teams`** **`during`** **`patch`** **`freeze`** **`season`** **`**\n\n**Extension rollout pause windows** — **`provisioningState`** **`may`** **`stay`** **`Succeeded`** **`while`** **`Automation`** **`accounts`** **`withhold`** **`PUT`** **`calls`** **`during`** **`freeze`** **`tags`** **`—`** **`guest_missing`** **`signals`** **`policy`** **`intent`** **`holes`** **`distinct`** **`from`** **`handler`** **`faults`** **`unless`** **`Compliance`** **`blade`** **`reviews`** **`confirm`** **`assignments`** **`still`** **`healthy`** **`**\n\n**Regional HIS delays** — **`himds`** **`retries`** **`widen`** **`Splunk`** **`age`** **`windows`** **`without`** **`lasting`** **`Disconnected`** **`states`** **`—`** **`overlay`** **`Azure`** **`Service`** **`Health`** **`notices`** **`for`** **`his.arc.azure.com`** **`dependencies`** **`before`** **`pager`** **`routing`** **`during`** **`multi-region`** **`drills`** **`**\n\n**Corporate PKI rotations** — **`TLS`** **`validator`** **`errors`** **`inside`** **`himds`** **`surface`** **`before`** **`portal`** **`tiles`** **`flip`** **`Disconnected`** **`after`** **`three`** **`missed`** **`heartbeat`** **`cycles`** **`—`** **`capture`** **`openssl`** **`proof`** **`toward`** **`*.his.arc.azure.com`** **`inside`** **`each`** **`ticket`** **`**\n\n**MMA→AMA soak lanes** — **`mma_without_ama`** **`flags`** **`parallel-ingest`** **`projects`** **`until`** **`AMA`** **`cutover`** **`runbooks`** **`close`** **`—`** **`mute`** **`when`** **`migration_phase`** **`parallel_ingest`** **`lookup`** **`tags`** **`stay`** **`true`** **`**\n\n**Resource Graph quota bursts** — **`HTTP`** **`429`** **`responses`** **`shrink`** **`visible`** **`machines`** **`per`** **`batch`** **`until`** **`Retry-After`** **`sleep`** **`loops`** **`finish`** **`—`** **`banner`** **`Heavy`** **`Forwarder`** **`health`** **`panels`** **`when`** **`collectors`** **`approach`** **`tenant`** **`limits`** **`during`** **`holiday`** **`inventory`** **`jobs`** **`**",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Compute_Inventory](https://docs.splunk.com/Documentation/CIM/latest/User/Compute_Inventory)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services (Azure Monitor / Resource Graph export).\n• Ensure the following data sources are available: `index=azure_monitor` `sourcetype=\"azure:arc:vm\"` with fields `resource_name`, `last_heartbeat_epoch`, `agent_version`, `extension_count`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export Arc-enabled machine inventory to Event Hub or blob and ingest via add-on; (2) normalize `last_status` to UTC; (3) alert when heartbeat older than 24h or expected extensions missing; (4) join with CMDB rack location for dispatch.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure_monitor sourcetype=\"azure:arc:vm\" earliest=-24h\n| eval hb_age_h=(now()-last_heartbeat_epoch)/3600\n| where hb_age_h>24 OR extension_count=0 OR isnull(extension_count)\n| stats max(hb_age_h) as max_offline_h by resource_name\n| sort -max_offline_h\n```\n\nUnderstanding this SPL\n\n**Azure Arc for Servers Heartbeat and Extension Inventory** — Arc is the control path for Azure Policy, Update Management, and hybrid monitoring on HCI nodes. Stale heartbeats or missing extensions mean patches and guest configuration are not enforced, increasing security and uptime risk.\n\nDocumented **Data sources**: `index=azure_monitor` `sourcetype=\"azure:arc:vm\"` with fields `resource_name`, `last_heartbeat_epoch`, `agent_version`, `extension_count`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Azure Monitor / Resource Graph export). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure_monitor; **sourcetype**: azure:arc:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure_monitor, sourcetype=\"azure:arc:vm\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hb_age_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hb_age_h>24 OR extension_count=0 OR isnull(extension_count)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by resource_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Compute_Inventory.Virtual_OS by Virtual_OS.dest, Virtual_OS.status | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Azure Arc for Servers Heartbeat and Extension Inventory** — Arc is the control path for Azure Policy, Update Management, and hybrid monitoring on HCI nodes. 
Stale heartbeats or missing extensions mean patches and guest configuration are not enforced, increasing security and uptime risk.\n\nDocumented **Data sources**: `index=azure_monitor` `sourcetype=\"azure:arc:vm\"` with fields `resource_name`, `last_heartbeat_epoch`, `agent_version`, `extension_count`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Azure Monitor / Resource Graph export). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Compute_Inventory.Virtual_OS` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale Arc agents), Single value (machines without extensions), Map (site by resource group).",
              "script": "",
              "premium": "",
              "hw": "Azure Arc–enabled HCI cluster nodes",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "Think of Arc like a wristwatch that quietly phones home every few minutes so Azure remembers your physical servers belong to your tenant; Splunk compares those heartbeat receipts plus which helper extensions installed cleanly against what your policies expect—before audits spot disconnected boxes or stuck installers hiding behind healthy LAN pings.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "a": [
                "Compute_Inventory"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Compute_Inventory.Virtual_OS by Virtual_OS.dest, Virtual_OS.status | sort - count",
              "e": [
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.3.5",
              "n": "Windows Admin Center Connection and Gateway Audit Events",
              "c": "medium",
              "f": "beginner",
              "v": "Windows Admin Center is often the break-glass console for HCI operations. Auditing gateway authentication failures and session spikes detects credential attacks or misconfigured smart-card rules before admins lose access during incidents.",
              "t": "Splunk Add-on for Microsoft Windows, HEC JSON from WAC gateway",
              "d": "`index=wineventlog` `sourcetype=\"WinEventLog:Microsoft-Windows-Security-Auditing\"` with fields `host`, `EventCode`, `user`, `src_ip`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Security-Auditing\" earliest=-24h EventCode=4625 Logon_Type IN (3,10)\n| lookup wac_gateway_hosts host OUTPUT is_gateway\n| where is_gateway=\"true\" OR like(host,\"%wac-gw%\")\n| stats count as failed_logons by host, src_ip, user\n| where failed_logons>=8\n| sort -failed_logons",
              "m": "(1) Populate `wac_gateway_hosts` lookup with gateway computer names; (2) tune out known scanner subnets with `src_ip` exclusions; (3) alert on repeated 4625 failures to RDP/WinRM ports; (4) correlate with VPN and conditional access logs for investigations.",
              "z": "Table (top failed logons), Timechart (4625 rate), Single value (unique sources).",
              "kfp": "**Microsoft Entra Conditional Access MFA stair-steps** — administrators see **`2010`** bursts during interactive MFA refreshes while **`EnableAad`** keeps **`OAuth`** redirects alive; **`risk`** reads hostile though **`Sign-in`** diagnostics show benign **`interrupt`** prompts—suppress **`principal`** tuples documented inside **`ca_known_break_glass`** lookups until **`authenticationStrength`** policies stabilize.\n\n**Bulk extension refreshes during approved maintenance** — **`2040`** storms land while **`Install-WACExtension`** pulls **Failover Cluster Manager**, **Storage Replica**, **Containers**, or third-party gallery packs inside one **`MaintenanceWindow`**—pair **`evt_lines`** velocity with **`Change`** ticket **`CHG`** identifiers before paging **`gateway-administrator`** crews.\n\n**Failover-cluster-hosted gateway failovers** — **`1000`**/**`1001`** cycles plus **`2020`** reconnect waves replicate across passive/active nodes during **`Move-ClusterRole`** rehearsals—differentiate **`ServerManagementGateway`** service migrations from lateral movement by tagging **`cluster_network`** / **`cluster_owner_node`** enrichment pulled from **`Get-ClusterResource`** exports mirrored nightly.\n\n**Browser tab-recovery after laptop reboot** — dormant **`chrome`**/**`edge`** sessions resurrect **`2000`**/**`2020`** bursts without fresh MFA challenges because **`OAuth`** cookies **`rehydrate`**—tie **`last_seen`** spikes to workstation reboot telemetry (`Kernel-General` **1074**) instead of attacker **`OAuth`** replay alone.\n\n**Patch Tuesday WinRM listener recycle cycles** — **`net stop winrm && net start winrm`** sequences temporarily **`raise`** **`2021`**→**`2020`** churn against identical **`managed_host`** targets—suppress rows where **`Computer`** / **`host`** pairs align with **`installed_kb`** inventories feeding **`suppress_kb_wave`** lookups.\n\n**CredSSP teaching-tool spikes** — engineers deliberately **`Enable-WSManCredSSP`** on lab VLANs mirroring **`2020`** **`useCredSsp=true`** patterns—fence **`risk`** / **`credssp_named`** alerts using **`lab_cidr`** markers distinct from **`production`** **`Hyper-V`** fabrics.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows, HEC JSON from WAC gateway.\n• Ensure the following data sources are available: `index=wineventlog` `sourcetype=\"WinEventLog:Microsoft-Windows-Security-Auditing\"` with fields `host`, `EventCode`, `user`, `src_ip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate `wac_gateway_hosts` lookup with gateway computer names; (2) tune out known scanner subnets with `src_ip` exclusions; (3) alert on repeated 4625 failures to RDP/WinRM ports; (4) correlate with VPN and conditional access logs for investigations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Security-Auditing\" earliest=-24h EventCode=4625 Logon_Type IN (3,10)\n| lookup wac_gateway_hosts host OUTPUT is_gateway\n| where is_gateway=\"true\" OR like(host,\"%wac-gw%\")\n| stats count as failed_logons by host, src_ip, user\n| where failed_logons>=8\n| sort -failed_logons\n```\n\nUnderstanding this SPL\n\n**Windows Admin Center Connection and Gateway Audit Events** — Windows Admin Center is often the break-glass console for HCI operations. Auditing gateway authentication failures and session spikes detects credential attacks or misconfigured smart-card rules before admins lose access during incidents.\n\nDocumented **Data sources**: `index=wineventlog` `sourcetype=\"WinEventLog:Microsoft-Windows-Security-Auditing\"` with fields `host`, `EventCode`, `user`, `src_ip`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows, HEC JSON from WAC gateway. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-Security-Auditing. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-Security-Auditing\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where is_gateway=\"true\" OR like(host,\"%wac-gw%\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, src_ip, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failed_logons>=8` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (top failed logons), Timechart (4625 rate), Single value (unique sources).",
              "script": "",
              "premium": "",
              "hw": "Windows Admin Center gateway VM or dedicated server",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep a readable diary of who opens the browser console, which machines they reach afterward, who may change those permissions, when plug-ins arrive or vanish, and which automation scripts ran—so odd administrative moves surface before silent takeover hides behind ordinary upkeep.",
              "mtype": [
                "Audit",
                "Availability"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.3.6",
              "n": "Cluster-Aware Updating Run Status and Node Drain Failures",
              "c": "high",
              "f": "intermediate",
              "v": "Failed CAU runs leave nodes partially patched or stuck outside maintenance mode, which breaks uniform firmware and OS compliance. Tracking per-node CAU stages surfaces stuck updates before the next failure domain event.",
              "t": "Windows Event Forwarding, PowerShell automation logs",
              "d": "`index=azure_stack_hci` `sourcetype=\"azurestackhci:cau\"` with fields `cluster_name`, `run_id`, `node`, `stage`, `status`",
              "q": "index=azure_stack_hci sourcetype=\"azurestackhci:cau\" earliest=-7d\n| where status!=\"Succeeded\" AND status!=\"InProgress\"\n| stats latest(status) as st, latest(stage) as stage, values(node) as nodes by cluster_name, run_id\n| table cluster_name, run_id, stage, st, nodes\n| sort -run_id",
              "m": "(1) Emit structured JSON from `Get-CauRunHistory` after each CAU wave; (2) alert on failed or rolled-back runs; (3) join with Windows Update EventCode 19/20 success for cross-check; (4) attach remediation KB links in alert payload.",
              "z": "Timeline (CAU runs), Table (failed nodes), Single value (open failed runs).",
              "kfp": "**Extended PreUpdateScript rehearsal windows** — orchestrators deliberately stretch **`Suspend-ClusterNode -Drain`** waits past ordinary SLA bounds while scripted backups finish; **`Microsoft-Windows-ClusterAwareUpdating/Admin`** **`1023`** mirrors hazardous stalls though **`Invoke-CauRun`** intent stays approved—suppress rows tied to **`MaintenanceWindow`** lookups tagging **`Add-CauClusterRole`** schedules.\n\n**Multi-reboot cumulative updates** — **`Microsoft.WindowsUpdatePlugin`** occasionally sequences **`Installing`**→**`Restarting`** twice inside one **`Get-CauReport`** boundary so **`NodeStatusNotifications`** briefly reads **`Failed`** between reboot hops—confirm **`WU`** **`KB`** ordering via **`Get-CauReport`** **`Summary`** before paging **`HotfixPlugin`** pathways.\n\n**Administrator-cancelled Remote-Updating runs** — **`Invoke-CauRun`** **`Cancelled`** **`Status`** surfaces beside orderly **`1002`** completions—tie **`Splunk`** **`combined_status`** rows to **`Stop-CauRun`** ticket approvals before **`Severity`** escalation.\n\n**Microsoft Defender interactive scans overlapping CAU installs** — **`Downloading`** slows while **real-time antivirus inspection** touches **`Windows`** **`Servicing`** staging folders— **`Installing`** eventually **`Succeeded`** without **`1008`** **`failure_reason`** payloads.\n\n**Merged snapshot backlog blocking Drain** — clustered guests awaiting **`checkpoint`** merges (`Hyper-V` **`snapshot`** hygiene debt) stall **`Suspend-ClusterNode`** exactly like **`DDA`** pinning— **`1023`** clears once **`guest`** disks consolidate—not because **`Microsoft.HotfixPlugin`** **`SMB`** **`ACL`** faults appeared.\n\n**CSV redirected-access churn during migrations** — **`Verifying`** finishes (**`evt`** **`1007`**) yet **`Perf`** **`latency`** spikes persist elsewhere— **`Splunk`** **`CAU`** posture stays **`Succeeded`** while sibling capacity dashboards blush (**avoid duplicate RCA**).\n\n**Offline SMB Hotfix bundle swaps mid-wave** — administrators temporarily revoke **`\\\\share\\\\kb`** **`READ`** during antivirus sweeps— **`Microsoft.HotfixPlugin`** **`Summary`** **`Failed`** clears after **`icacls`** restoration without implying **`Resume-ClusterNode`** (**`1027`**) defects.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows Event Forwarding, PowerShell automation logs.\n• Ensure the following data sources are available: `index=azure_stack_hci` `sourcetype=\"azurestackhci:cau\"` with fields `cluster_name`, `run_id`, `node`, `stage`, `status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Emit structured JSON from `Get-CauRunHistory` after each CAU wave; (2) alert on failed or rolled-back runs; (3) join with Windows Update EventCode 19/20 success for cross-check; (4) attach remediation KB links in alert payload.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure_stack_hci sourcetype=\"azurestackhci:cau\" earliest=-7d\n| where status!=\"Succeeded\" AND status!=\"InProgress\"\n| stats latest(status) as st, latest(stage) as stage, values(node) as nodes by cluster_name, run_id\n| table cluster_name, run_id, stage, st, nodes\n| sort -run_id\n```\n\nUnderstanding this SPL\n\n**Cluster-Aware Updating Run Status and Node Drain Failures** — Failed CAU runs leave nodes partially patched or stuck outside maintenance mode, which breaks uniform firmware and OS compliance. Tracking per-node CAU stages surfaces stuck updates before the next failure domain event.\n\nDocumented **Data sources**: `index=azure_stack_hci` `sourcetype=\"azurestackhci:cau\"` with fields `cluster_name`, `run_id`, `node`, `stage`, `status`. **App/TA** (typical add-on context): Windows Event Forwarding, PowerShell automation logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure_stack_hci; **sourcetype**: azurestackhci:cau. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure_stack_hci, sourcetype=\"azurestackhci:cau\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status!=\"Succeeded\" AND status!=\"InProgress\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster_name, run_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Cluster-Aware Updating Run Status and Node Drain Failures**): table cluster_name, run_id, stage, st, nodes\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cluster-Aware Updating Run Status and Node Drain Failures** — Failed CAU runs leave nodes partially patched or stuck outside maintenance mode, which breaks uniform firmware and OS compliance. 
Tracking per-node CAU stages surfaces stuck updates before the next failure domain event.\n\nDocumented **Data sources**: `index=azure_stack_hci` `sourcetype=\"azurestackhci:cau\"` with fields `cluster_name`, `run_id`, `node`, `stage`, `status`. **App/TA** (typical add-on context): Windows Event Forwarding, PowerShell automation logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (CAU runs), Table (failed nodes), Single value (open failed runs).",
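A hedged sketch of the Windows Update cross-check in implementation step (3), assuming the System event log reaches the same `wineventlog` index with the standard `WinEventLog:System` sourcetype and Splunk Add-on for Microsoft Windows field names; the join key (`node` versus `host`) may need normalization in your environment, and the 7-day window is illustrative.

```spl
(index=azure_stack_hci sourcetype="azurestackhci:cau" earliest=-7d status!="Succeeded" status!="InProgress")
OR (index=wineventlog sourcetype="WinEventLog:System" earliest=-7d SourceName="Microsoft-Windows-WindowsUpdateClient" EventCode IN (19,20))
| eval node=lower(coalesce(node, host))
| stats values(status) as cau_status, values(stage) as cau_stage,
        count(eval(EventCode=19)) as wu_success, count(eval(EventCode=20)) as wu_failure by node
| where isnotnull(cau_status) OR wu_failure>0
```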
              "script": "",
              "premium": "",
              "hw": "Azure Stack HCI clusters using Cluster-Aware Updating",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We line up rolling restart rounds across clustered machines so uneven patch footing and stuck drain moments surface early before everyday apps hitch when picky workloads refuse to slide elsewhere.",
              "mtype": [
                "Configuration",
                "Fault"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "19.3.7",
              "n": "S2D Cache Device Health and Predictive Failure SMART Signals",
              "c": "critical",
              "f": "advanced",
              "v": "Cache device loss on S2D sharply increases read latency and can stall resync. Surfacing predictive failure and wear indicators from physical disks lets you replace NVMe or SSD cache devices during business hours instead of during a double-fault scenario.",
              "t": "Splunk Add-on for Microsoft Windows, Azure Monitor HEC integration",
              "d": "`index=wineventlog` `sourcetype=\"WinEventLog:Microsoft-Windows-Storage-Storport/Admin\"` with fields `host`, `EventCode`, `Message`, `disk_serial`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Storage-Storport/Admin\" earliest=-24h\n| search predictive OR \"Predictive Failure\" OR \"reallocated\" OR \"medium error\"\n| rex field=Message \"SerialNumber[^\\w]*(?<serial>[^\\s,]+)\"\n| stats count as events by host, coalesce(disk_serial, serial) as disk_id\n| where events>=1\n| sort -events",
              "m": "(1) Collect Storport admin events from all HCI nodes; (2) normalize serial from message text where field extraction is incomplete; (3) alert on first predictive failure per disk; (4) open hardware RMA and plan cache evacuation per Microsoft guidance.",
              "z": "Table (at-risk disks by host), Timeline (Storport errors), Single value (nodes with predictive failures).",
              "kfp": "**Planned NVMe firmware waves** coordinated through **Cluster-Aware Updating** can emit transient **Predictive Failure** or **305**/**306** churn while carriers reboot through activation—suppress pages until **journal_firmware_wave** enrichment flags complete or **Get-PhysicalDisk** shows **OperationalStatus=OK** with consistent **FirmwareVersion** stamps across nodes.\n\n**Cache-tier rebalancing after capacity-disk replacements** shifts **Journal** bindings across remaining flash devices—**306** cache-rebind events and **Get-StorageJob** **Reallocation** tasks mirror hazard signatures yet **Wear**/**PercentageUsed** counters remain flat; require SMART deltas before paging wear faults.\n\n**Enclosure backplane firmware updates** trigger **Microsoft-Windows-Storage-Storport/Admin** **153** SCSI resets and short **Lost Communication** windows that resemble drive failure—pair bursts with **spaces_drv** **304**/**207** evidence or **STORAGE_HEALTH_FAULT_PHYSICALDISK_LOSTCOMMUNICATION** before opening carrier RMAs.\n\n**Vendor SSD utility wear bars** sometimes disagree **15–25** percentage points from **Get-StorageReliabilityCounter** **Wear** because OEM tools read vendor log pages while Windows surfaces Microsoft-normalized counters—document a single authoritative threshold per fleet to avoid duplicate tickets.\n\n**Rolling maintenance drains** that evacuate **Journal** disks for diagnostics duplicate **305**/**306** sequences absent wear cliffs—tie suppressions to **MaintenanceRecords** markers and **Invoke-StorageMaintenance** identifiers so deliberate evacuations never masquerade as spontaneous flash failure.\n\n**Transient thermal excursions** after HVAC work raise **Temperature**/**TemperatureMax** without **MediaErrors**—cooling teams may resolve these without SSD replacement when **CriticalWarning** thermal bits clear after airflow stabilizes.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows, Azure Monitor HEC integration.\n• Ensure the following data sources are available: `index=wineventlog` `sourcetype=\"WinEventLog:Microsoft-Windows-Storage-Storport/Admin\"` with fields `host`, `EventCode`, `Message`, `disk_serial`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Collect Storport admin events from all HCI nodes; (2) normalize serial from message text where field extraction is incomplete; (3) alert on first predictive failure per disk; (4) open hardware RMA and plan cache evacuation per Microsoft guidance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Microsoft-Windows-Storage-Storport/Admin\" earliest=-24h\n| search predictive OR \"Predictive Failure\" OR \"reallocated\" OR \"medium error\"\n| rex field=Message \"SerialNumber[^\\w]*(?<serial>[^\\s,]+)\"\n| stats count as events by host, coalesce(disk_serial, serial) as disk_id\n| where events>=1\n| sort -events\n```\n\nUnderstanding this SPL\n\n**S2D Cache Device Health and Predictive Failure SMART Signals** — Cache device loss on S2D sharply increases read latency and can stall resync. Surfacing predictive failure and wear indicators from physical disks lets you replace NVMe or SSD cache devices during business hours instead of during a double-fault scenario.\n\nDocumented **Data sources**: `index=wineventlog` `sourcetype=\"WinEventLog:Microsoft-Windows-Storage-Storport/Admin\"` with fields `host`, `EventCode`, `Message`, `disk_serial`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows, Azure Monitor HEC integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Microsoft-Windows-Storage-Storport/Admin. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Microsoft-Windows-Storage-Storport/Admin\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, coalesce(disk_serial, serial) as disk_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where events>=1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (at-risk disks by host), Timeline (Storport errors), Single value (nodes with predictive failures).",
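For implementation step (3) — alert on the first predictive failure per disk — one hedged approach is a first-seen pattern over a longer lookback. The 30-day window and the 24-hour "new disk" cutoff are illustrative assumptions; `disk_id` normalization mirrors the main search.

```spl
index=wineventlog sourcetype="WinEventLog:Microsoft-Windows-Storage-Storport/Admin" earliest=-30d
  ("Predictive Failure" OR predictive OR reallocated OR "medium error")
| rex field=Message "SerialNumber[^\w]*(?<serial>[^\s,]+)"
| eval disk_id=coalesce(disk_serial, serial)
| stats earliest(_time) as first_seen, count as events by host, disk_id
| where first_seen >= relative_time(now(), "-24h")
| convert ctime(first_seen)
```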
              "script": "",
              "premium": "",
              "hw": "Azure Stack HCI S2D cache tier NVMe and SATA/SAS SSDs",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch the small, very fast disks that cache writes ahead of the big shelves on your cluster. When spare-block margins shrink or wear counts climb, we raise a calm early note so swaps happen on purpose instead of during a panicked rebuild.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Microsoft Azure App for Splunk",
                  "id": 4882,
                  "url": "https://splunkbase.splunk.com/app/4882",
                  "desc": "Dashboards for Azure VMs, Metrics, Storage, Security Monitoring, Billing",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d55080b0-6d4d-11ec-baf2-ce06eb58cb63.jpeg",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/d7147ff0-6d4d-11ec-9e53-3e3d9b7eaa58.jpeg"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.7,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 7,
            "none": 0
          }
        }
      ],
      "i": 19,
      "n": "Compute Infrastructure (HCI & Converged)",
      "src": "cat-19-compute-infrastructure-hci-converged.md"
    },
    {
      "s": [
        {
          "i": "20.1",
          "n": "Cloud Cost Monitoring",
          "u": [
            {
              "i": "20.1.1",
              "n": "Daily Spend Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Cloud costs can spiral without visibility. Daily spend trending by service, account, and tag provides the financial governance foundation — enabling teams to understand where money goes, spot trends early, and make informed optimization decisions.",
              "t": "`Splunk Add-on for AWS` (CUR ingestion), `Splunk Add-on for Microsoft Cloud Services`, `Splunk Add-on for Google Cloud Platform`",
              "d": "AWS Cost and Usage Report (CUR), Azure Cost Management export, GCP Billing export",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| timechart span=1d sum(cost) as daily_spend by lineItem_ProductCode\n| addtotals\n| rename Total as total_daily_spend",
              "m": "Ingest AWS CUR, Azure Cost export, or GCP billing data daily. Parse cost line items by service, account, region, and tags. Build daily/weekly/monthly spend reports. Set trending alerts when daily spend exceeds 7-day rolling average by >20%. Enable tag-based cost allocation from day one.",
              "z": "Timechart (daily spend trending), Stacked bar chart (spend by service), Table (top 10 services by cost), Single value (today's spend vs yesterday).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [Splunk Add-on for Google Cloud Platform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS` (CUR ingestion), `Splunk Add-on for Microsoft Cloud Services`, `Splunk Add-on for Google Cloud Platform`.\n• Ensure the following data sources are available: AWS Cost and Usage Report (CUR), Azure Cost Management export, GCP Billing export.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest AWS CUR, Azure Cost export, or GCP billing data daily. Parse cost line items by service, account, region, and tags. Build daily/weekly/monthly spend reports. Set trending alerts when daily spend exceeds 7-day rolling average by >20%. Enable tag-based cost allocation from day one.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| timechart span=1d sum(cost) as daily_spend by lineItem_ProductCode\n| addtotals\n| rename Total as total_daily_spend\n```\n\nUnderstanding this SPL\n\n**Daily Spend Trending** — Cloud costs can spiral without visibility. Daily spend trending by service, account, and tag provides the financial governance foundation — enabling teams to understand where money goes, spot trends early, and make informed optimization decisions.\n\nDocumented **Data sources**: AWS Cost and Usage Report (CUR), Azure Cost Management export, GCP Billing export. **App/TA** (typical add-on context): `Splunk Add-on for AWS` (CUR ingestion), `Splunk Add-on for Microsoft Cloud Services`, `Splunk Add-on for Google Cloud Platform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by lineItem_ProductCode** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Daily Spend Trending**): addtotals\n• Renames fields with `rename` for clarity or joins.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (daily spend trending), Stacked bar chart (spend by service), Table (top 10 services by cost), Single value (today's spend vs yesterday).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "wv": "crawl",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "gcp"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "20.1.2",
              "n": "Cost Anomaly Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Unexpected cost spikes from runaway instances, misconfigured autoscaling, or crypto-mining attacks can generate thousands in charges within hours. Automated anomaly detection catches these events before they become budget disasters.",
              "t": "`Splunk Add-on for AWS`, cloud billing TAs",
              "d": "Billing data with historical trending",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| timechart span=1d sum(cost) as daily_spend by lineItem_UsageAccountId\n| foreach * [eval <<FIELD>>=if(<<FIELD>>=\"\", 0, <<FIELD>>)]\n| addtotals\n| predict Total as predicted_spend algorithm=LLP5 future_timespan=1\n| eval anomaly=if(Total > 'upper95(predicted_spend)', \"Anomaly\", \"Normal\")\n| where anomaly=\"Anomaly\"",
              "m": "Build 30-day baseline of daily spending per account and service. Use Splunk `predict` command with LLP5 algorithm for anomaly detection. Alert when actual spend exceeds upper 95% confidence interval. Investigate anomalies by drilling into specific services and resources. Integrate with incident management for cost-related incidents.",
              "z": "Timechart (actual vs predicted spend), Table (anomaly details), Single value (current anomaly count), Alert indicator (anomaly detected).",
              "kfp": "Forecasts vary with seasonal load, new project ramps, or M&A integration spending.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS`, cloud billing TAs.\n• Ensure the following data sources are available: Billing data with historical trending.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild 30-day baseline of daily spending per account and service. Use Splunk `predict` command with LLP5 algorithm for anomaly detection. Alert when actual spend exceeds upper 95% confidence interval. Investigate anomalies by drilling into specific services and resources. Integrate with incident management for cost-related incidents.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| timechart span=1d sum(cost) as daily_spend by lineItem_UsageAccountId\n| foreach * [eval <<FIELD>>=if(<<FIELD>>=\"\", 0, <<FIELD>>)]\n| addtotals\n| predict Total as predicted_spend algorithm=LLP5 future_timespan=1\n| eval anomaly=if(Total > 'upper95(predicted_spend)', \"Anomaly\", \"Normal\")\n| where anomaly=\"Anomaly\"\n```\n\nUnderstanding this SPL\n\n**Cost Anomaly Detection** — Unexpected cost spikes from runaway instances, misconfigured autoscaling, or crypto-mining attacks can generate thousands in charges within hours. Automated anomaly detection catches these events before they become budget disasters.\n\nDocumented **Data sources**: Billing data with historical trending. **App/TA** (typical add-on context): `Splunk Add-on for AWS`, cloud billing TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by lineItem_UsageAccountId** — ideal for trending and alerting on this use case.\n• Iterates over multivalue fields with `foreach`.\n• Pipeline stage (see **Cost Anomaly Detection**): addtotals\n• Pipeline stage (see **Cost Anomaly Detection**): predict Total as predicted_spend algorithm=LLP5 future_timespan=1\n• `eval` defines or adjusts **anomaly** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where anomaly=\"Anomaly\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Timechart (actual vs predicted spend), Table (anomaly details), Single value (current anomaly count), Alert indicator (anomaly detected).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.1.3",
              "n": "Reserved Instance Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "Reserved Instances and Savings Plans represent upfront commitments. Monitoring utilization ensures you're getting value from these purchases. Low utilization means wasted money; gaps in coverage mean missed savings opportunities.",
              "t": "`Splunk Add-on for AWS`, billing TAs",
              "d": "AWS CUR (reservation fields), Azure reservation utilization",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\"\n| search reservation_ReservationARN!=\"\"\n| stats sum(reservation_UnusedAmortizedUpfrontFeeForBillingPeriod) as unused_upfront, sum(reservation_EffectiveCost) as effective_cost, sum(reservation_UnusedRecurringFee) as unused_recurring by reservation_ReservationARN, lineItem_ProductCode\n| eval utilization_pct=round((1-(unused_upfront+unused_recurring)/effective_cost)*100, 1)\n| sort utilization_pct\n| table reservation_ReservationARN, lineItem_ProductCode, effective_cost, unused_upfront, unused_recurring, utilization_pct",
              "m": "Parse RI/Savings Plan utilization from CUR data. Track utilization percentage per reservation. Alert when any RI falls below 80% utilization for 7+ consecutive days. Report on coverage gaps where on-demand spend could be covered by reservations. Review expiring reservations 30 days before expiry.",
              "z": "Gauge (overall RI utilization), Bar chart (utilization by reservation), Table (underutilized RIs), Timechart (utilization trending).",
              "kfp": "Idle resources for DR standby capacity, scheduled batch processors, or planned future-use staging.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS`, billing TAs.\n• Ensure the following data sources are available: AWS CUR (reservation fields), Azure reservation utilization.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse RI/Savings Plan utilization from CUR data. Track utilization percentage per reservation. Alert when any RI falls below 80% utilization for 7+ consecutive days. Report on coverage gaps where on-demand spend could be covered by reservations. Review expiring reservations 30 days before expiry.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\"\n| search reservation_ReservationARN!=\"\"\n| stats sum(reservation_UnusedAmortizedUpfrontFeeForBillingPeriod) as unused_upfront, sum(reservation_EffectiveCost) as effective_cost, sum(reservation_UnusedRecurringFee) as unused_recurring by reservation_ReservationARN, lineItem_ProductCode\n| eval utilization_pct=round((1-(unused_upfront+unused_recurring)/effective_cost)*100, 1)\n| sort utilization_pct\n| table reservation_ReservationARN, lineItem_ProductCode, effective_cost, unused_upfront, unused_recurring, utilization_pct\n```\n\nUnderstanding this SPL\n\n**Reserved Instance Utilization** — Reserved Instances and Savings Plans represent upfront commitments. Monitoring utilization ensures you're getting value from these purchases. Low utilization means wasted money; gaps in coverage mean missed savings opportunities.\n\nDocumented **Data sources**: AWS CUR (reservation fields), Azure reservation utilization. **App/TA** (typical add-on context): `Splunk Add-on for AWS`, billing TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by reservation_ReservationARN, lineItem_ProductCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Reserved Instance Utilization**): table reservation_ReservationARN, lineItem_ProductCode, effective_cost, unused_upfront, unused_recurring, utilization_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (overall RI utilization), Bar chart (utilization by reservation), Table (underutilized RIs), Timechart (utilization trending).",
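A hedged sketch of the "below 80% utilization for 7+ consecutive days" alert: it computes per-day utilization and flags reservations whose best day in the window still falls under 80%, so every observed day was below threshold. Field names reuse the CUR reservation columns from the main search; the window and threshold are the documented ones.

```spl
index=cloud_billing sourcetype="aws:billing:cur" earliest=-7d@d latest=@d reservation_ReservationARN!=""
| bin _time span=1d
| stats sum(reservation_UnusedAmortizedUpfrontFeeForBillingPeriod) as unused_upfront,
        sum(reservation_UnusedRecurringFee) as unused_recurring,
        sum(reservation_EffectiveCost) as effective_cost by _time, reservation_ReservationARN
| eval utilization_pct=round((1-(unused_upfront+unused_recurring)/effective_cost)*100, 1)
| stats max(utilization_pct) as best_day_pct, dc(_time) as days_observed by reservation_ReservationARN
| where days_observed>=7 AND best_day_pct<80
```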
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.1.4",
              "n": "Idle Resource Identification",
              "c": "high",
              "f": "intermediate",
              "v": "Idle resources (running but unused instances, unattached volumes, unused load balancers) are pure waste. Identifying and eliminating them is the quickest path to cloud cost savings, often yielding 20-30% reduction.",
              "t": "`Splunk Add-on for AWS`, cloud monitoring TAs",
              "d": "CloudWatch/Azure Monitor metrics + billing data",
              "q": "index=cloud_metrics sourcetype=\"aws:cloudwatch\"\n| search metric_name=\"CPUUtilization\"\n| stats avg(Average) as avg_cpu, max(Maximum) as peak_cpu by dimensions.InstanceId\n| where avg_cpu < 5 AND peak_cpu < 10\n| lookup aws_instance_details InstanceId as dimensions.InstanceId OUTPUT instance_type, monthly_cost, tags\n| eval waste_monthly=monthly_cost\n| sort -waste_monthly\n| table dimensions.InstanceId, instance_type, avg_cpu, peak_cpu, monthly_cost, tags",
              "m": "Correlate CloudWatch CPU/network metrics with billing data. Define idle thresholds: CPU avg <5%, network <1MB/day for 7+ days. Include unattached EBS volumes, idle ELBs, unused Elastic IPs. Generate weekly idle resource reports with estimated savings. Route to resource owners for action.",
              "z": "Table (idle resources with cost), Bar chart (waste by service), Single value (total monthly waste), Pie chart (waste by team/tag).",
              "kfp": "Idle resources for DR standby capacity, scheduled batch processors, or planned future-use staging.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS`, cloud monitoring TAs.\n• Ensure the following data sources are available: CloudWatch/Azure Monitor metrics + billing data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCorrelate CloudWatch CPU/network metrics with billing data. Define idle thresholds: CPU avg <5%, network <1MB/day for 7+ days. Include unattached EBS volumes, idle ELBs, unused Elastic IPs. Generate weekly idle resource reports with estimated savings. Route to resource owners for action.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_metrics sourcetype=\"aws:cloudwatch\"\n| search metric_name=\"CPUUtilization\"\n| stats avg(Average) as avg_cpu, max(Maximum) as peak_cpu by dimensions.InstanceId\n| where avg_cpu < 5 AND peak_cpu < 10\n| lookup aws_instance_details InstanceId as dimensions.InstanceId OUTPUT instance_type, monthly_cost, tags\n| eval waste_monthly=monthly_cost\n| sort -waste_monthly\n| table dimensions.InstanceId, instance_type, avg_cpu, peak_cpu, monthly_cost, tags\n```\n\nUnderstanding this SPL\n\n**Idle Resource Identification** — Idle resources (running but unused instances, unattached volumes, unused load balancers) are pure waste. Identifying and eliminating them is the quickest path to cloud cost savings, often yielding 20-30% reduction.\n\nDocumented **Data sources**: CloudWatch/Azure Monitor metrics + billing data. **App/TA** (typical add-on context): `Splunk Add-on for AWS`, cloud monitoring TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_metrics; **sourcetype**: aws:cloudwatch. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_metrics, sourcetype=\"aws:cloudwatch\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by dimensions.InstanceId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_cpu < 5 AND peak_cpu < 10` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **waste_monthly** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Idle Resource Identification**): table dimensions.InstanceId, instance_type, avg_cpu, peak_cpu, monthly_cost, tags\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (idle resources with cost), Bar chart (waste by service), Single value (total monthly waste), Pie chart (waste by team/tag).",
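The idle thresholds above include a network condition (<1 MB/day for 7+ days) that the main search does not test; a hedged sketch combining CPU and network from the same CloudWatch feed is below. It assumes the standard EC2 `CPUUtilization`, `NetworkIn`, and `NetworkOut` metrics with the `Sum` statistic populated for the network series; the byte math is illustrative.

```spl
index=cloud_metrics sourcetype="aws:cloudwatch" earliest=-7d metric_name IN ("CPUUtilization","NetworkIn","NetworkOut")
| eval instance_id='dimensions.InstanceId'
| stats avg(eval(if(metric_name="CPUUtilization", Average, null()))) as avg_cpu,
        sum(eval(if(metric_name!="CPUUtilization", Sum, null()))) as network_bytes_7d by instance_id
| where avg_cpu<5 AND network_bytes_7d < 1024*1024*7
```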
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "20.1.5",
              "n": "Budget Threshold Alerting",
              "c": "high",
              "f": "intermediate",
              "v": "Budget alerts prevent overspend by notifying stakeholders at defined thresholds (50%, 75%, 90%, 100%). Combined with forecast-based alerts, teams can take corrective action before exceeding approved budgets.",
              "t": "`Splunk Add-on for AWS`, cloud billing TAs",
              "d": "Billing data, budget definitions (lookup)",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| stats sum(cost) as mtd_spend by lineItem_UsageAccountId\n| lookup cloud_budgets account_id as lineItem_UsageAccountId OUTPUT budget_amount, owner_email\n| eval budget_pct=round((mtd_spend/budget_amount)*100, 1)\n| eval status=case(budget_pct>=100, \"Exceeded\", budget_pct>=90, \"Critical\", budget_pct>=75, \"Warning\", budget_pct>=50, \"On Track\", 1==1, \"Under Budget\")\n| sort -budget_pct\n| table lineItem_UsageAccountId, owner_email, budget_amount, mtd_spend, budget_pct, status",
              "m": "Define budgets per account/team in a Splunk lookup table. Calculate MTD spend against budgets daily. Alert at 50%, 75%, 90%, and 100% thresholds. Include forecast-based alerts (projected to exceed budget). Escalate to management when budgets are exceeded.",
              "z": "Gauge (budget consumption), Table (budget status by account), Timechart (MTD spend vs budget line), Single value (accounts over budget).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS`, cloud billing TAs.\n• Ensure the following data sources are available: Billing data, budget definitions (lookup).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine budgets per account/team in a Splunk lookup table. Calculate MTD spend against budgets daily. Alert at 50%, 75%, 90%, and 100% thresholds. Include forecast-based alerts (projected to exceed budget). Escalate to management when budgets are exceeded.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| stats sum(cost) as mtd_spend by lineItem_UsageAccountId\n| lookup cloud_budgets account_id as lineItem_UsageAccountId OUTPUT budget_amount, owner_email\n| eval budget_pct=round((mtd_spend/budget_amount)*100, 1)\n| eval status=case(budget_pct>=100, \"Exceeded\", budget_pct>=90, \"Critical\", budget_pct>=75, \"Warning\", budget_pct>=50, \"On Track\", 1==1, \"Under Budget\")\n| sort -budget_pct\n| table lineItem_UsageAccountId, owner_email, budget_amount, mtd_spend, budget_pct, status\n```\n\nUnderstanding this SPL\n\n**Budget Threshold Alerting** — Budget alerts prevent overspend by notifying stakeholders at defined thresholds (50%, 75%, 90%, 100%). Combined with forecast-based alerts, teams can take corrective action before exceeding approved budgets.\n\nDocumented **Data sources**: Billing data, budget definitions (lookup). **App/TA** (typical add-on context): `Splunk Add-on for AWS`, cloud billing TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by lineItem_UsageAccountId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **budget_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Budget Threshold Alerting**): table lineItem_UsageAccountId, owner_email, budget_amount, mtd_spend, budget_pct, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (budget consumption), Table (budget status by account), Timechart (MTD spend vs budget line), Single value (accounts over budget).",
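The forecast-based alert mentioned in Step 1 (projected to exceed budget) can be sketched with simple month-to-date run-rate math. The `cloud_budgets` lookup is the same one used by the main search; the straight-line projection is an assumption, not a Splunk built-in forecast.

```spl
index=cloud_billing sourcetype="aws:billing:cur" earliest=@mon
| eval cost=tonumber(lineItem_UnblendedCost)
| stats sum(cost) as mtd_spend by lineItem_UsageAccountId
| lookup cloud_budgets account_id as lineItem_UsageAccountId OUTPUT budget_amount, owner_email
| eval day_of_month=tonumber(strftime(now(), "%d")),
       days_in_month=tonumber(strftime(relative_time(now(), "@mon+1mon-1d"), "%d"))
| eval projected_spend=round(mtd_spend/day_of_month*days_in_month, 2)
| where projected_spend > budget_amount
| table lineItem_UsageAccountId, owner_email, budget_amount, mtd_spend, projected_spend
```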
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "wv": "crawl",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "20.1.6",
              "n": "Cost Allocation by Team",
              "c": "medium",
              "f": "intermediate",
              "v": "Breaking down cloud costs by team/department via tagging creates accountability and enables chargeback/showback. Teams that see their own costs make better optimization decisions, driving organization-wide cost efficiency.",
              "t": "`Splunk Add-on for AWS`, cloud billing TAs",
              "d": "CUR with tag data, organizational mapping",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval team=coalesce('resourceTags_user_Team', 'resourceTags_user_team', \"Untagged\")\n| stats sum(cost) as total_cost by team\n| eventstats sum(total_cost) as grand_total\n| eval cost_pct=round((total_cost/grand_total)*100, 1)\n| sort -total_cost\n| table team, total_cost, cost_pct",
              "m": "Enforce tagging policy requiring Team/Department/Environment tags. Parse resource tags from billing data. Calculate cost allocation by team, department, and environment. Report on untagged resources (assign to \"Unknown\" for follow-up). Generate monthly chargeback reports.",
              "z": "Pie chart (cost by team), Bar chart (team costs with trending), Table (detailed allocation), Single value (untagged cost percentage).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS`, cloud billing TAs.\n• Ensure the following data sources are available: CUR with tag data, organizational mapping.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnforce tagging policy requiring Team/Department/Environment tags. Parse resource tags from billing data. Calculate cost allocation by team, department, and environment. Report on untagged resources (assign to \"Unknown\" for follow-up). Generate monthly chargeback reports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval team=coalesce('resourceTags_user_Team', 'resourceTags_user_team', \"Untagged\")\n| stats sum(cost) as total_cost by team\n| eventstats sum(total_cost) as grand_total\n| eval cost_pct=round((total_cost/grand_total)*100, 1)\n| sort -total_cost\n| table team, total_cost, cost_pct\n```\n\nUnderstanding this SPL\n\n**Cost Allocation by Team** — Breaking down cloud costs by team/department via tagging creates accountability and enables chargeback/showback. Teams that see their own costs make better optimization decisions, driving organization-wide cost efficiency.\n\nDocumented **Data sources**: CUR with tag data, organizational mapping. **App/TA** (typical add-on context): `Splunk Add-on for AWS`, cloud billing TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **team** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by team** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **cost_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Cost Allocation by Team**): table team, total_cost, cost_pct\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (cost by team), Bar chart (team costs with trending), Table (detailed allocation), Single value (untagged cost percentage).",
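For the untagged-resource follow-up step, a hedged companion search that breaks untagged spend down by service and account (same index, sourcetype, and tag fields as the main search; the top-20 cut is illustrative):

```spl
index=cloud_billing sourcetype="aws:billing:cur"
| eval cost=tonumber(lineItem_UnblendedCost)
| eval team=coalesce('resourceTags_user_Team', 'resourceTags_user_team')
| where isnull(team)
| stats sum(cost) as untagged_cost by lineItem_ProductCode, lineItem_UsageAccountId
| sort -untagged_cost
| head 20
```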
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
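For the untagged-resource follow-up that the Cost Allocation by Team implementation step calls out, a minimal sketch along these lines could alert when the untagged share of spend grows too large. It reuses the same assumed index, sourcetype, and tag fields as the documented search; the 10% threshold is illustrative and should be tuned to your tagging policy.

```spl
index=cloud_billing sourcetype="aws:billing:cur"
| eval cost=tonumber(lineItem_UnblendedCost)
| eval team=coalesce('resourceTags_user_Team', 'resourceTags_user_team', "Untagged")
| eval untagged_cost=if(team="Untagged", cost, 0)
| stats sum(cost) as total_cost, sum(untagged_cost) as untagged_cost
| eval untagged_pct=round((untagged_cost/total_cost)*100, 1)
| where untagged_pct > 10
```

Saving this as a weekly alert gives the tagging-policy owner a single number to chase down rather than a raw list of line items.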
            {
              "i": "20.1.7",
              "n": "Spot/Preemptible Instance Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Spot instances offer significant savings (60-90%) but can be interrupted. Tracking interruptions, savings achieved, and workload placement ensures teams maximize savings while maintaining application resilience.",
              "t": "`Splunk Add-on for AWS`, EC2 event logs",
              "d": "EC2 spot instance events, billing data",
              "q": "index=aws sourcetype=\"aws:cloudtrail\"\n| search eventName=\"BidEvictedEvent\" OR eventName=\"SpotInstanceInterruption\"\n| stats count as interruptions by requestParameters.instanceId, requestParameters.instanceType, userIdentity.arn\n| lookup spot_savings instance_id as requestParameters.instanceId OUTPUT on_demand_cost, spot_cost\n| eval savings=on_demand_cost-spot_cost\n| eval savings_pct=round((savings/on_demand_cost)*100, 1)\n| table requestParameters.instanceId, requestParameters.instanceType, interruptions, on_demand_cost, spot_cost, savings_pct",
              "m": "Track spot instance lifecycle events via CloudTrail. Monitor interruption frequency by instance type and AZ. Calculate savings vs on-demand pricing. Alert on interruption rate spikes affecting critical workloads. Report monthly spot savings to justify continued spot adoption.",
              "z": "Bar chart (interruptions by type), Timechart (interruption frequency), Single value (monthly spot savings), Table (instance interruption details).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS`, EC2 event logs.\n• Ensure the following data sources are available: EC2 spot instance events, billing data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTrack spot instance lifecycle events via CloudTrail. Monitor interruption frequency by instance type and AZ. Calculate savings vs on-demand pricing. Alert on interruption rate spikes affecting critical workloads. Report monthly spot savings to justify continued spot adoption.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\"\n| search eventName=\"BidEvictedEvent\" OR eventName=\"SpotInstanceInterruption\"\n| stats count as interruptions by requestParameters.instanceId, requestParameters.instanceType, userIdentity.arn\n| lookup spot_savings instance_id as requestParameters.instanceId OUTPUT on_demand_cost, spot_cost\n| eval savings=on_demand_cost-spot_cost\n| eval savings_pct=round((savings/on_demand_cost)*100, 1)\n| table requestParameters.instanceId, requestParameters.instanceType, interruptions, on_demand_cost, spot_cost, savings_pct\n```\n\nUnderstanding this SPL\n\n**Spot/Preemptible Instance Tracking** — Spot instances offer significant savings (60-90%) but can be interrupted. Tracking interruptions, savings achieved, and workload placement ensures teams maximize savings while maintaining application resilience.\n\nDocumented **Data sources**: EC2 spot instance events, billing data. **App/TA** (typical add-on context): `Splunk Add-on for AWS`, EC2 event logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by requestParameters.instanceId, requestParameters.instanceType, userIdentity.arn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **savings** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **savings_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Spot/Preemptible Instance Tracking**): table requestParameters.instanceId, requestParameters.instanceType, interruptions, on_demand_cost, spot_cost, savings_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (interruptions by type), Timechart (interruption frequency), Single value (monthly spot savings), Table (instance interruption details).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
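The Spot/Preemptible Instance Tracking implementation step also asks for an alert on interruption-rate spikes. One possible sketch compares each day's interruption count against a trailing 7-day average; the event names are taken from the documented search, and the 2x factor and 7-day window are illustrative assumptions.

```spl
index=aws sourcetype="aws:cloudtrail"
    (eventName="BidEvictedEvent" OR eventName="SpotInstanceInterruption")
| timechart span=1d count as interruptions
| streamstats window=7 current=f avg(interruptions) as avg_7d
| where avg_7d > 0 AND interruptions > 2 * avg_7d
```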
            {
              "i": "20.1.8",
              "n": "Data Transfer Cost Analysis",
              "c": "medium",
              "f": "intermediate",
              "v": "Data transfer costs are often the most surprising cloud bill item. Inter-region, cross-AZ, and internet egress charges add up quickly. Identifying the biggest transfer flows enables architectural optimization to reduce costs significantly.",
              "t": "`Splunk Add-on for AWS`, cloud billing TAs",
              "d": "CUR data transfer line items, VPC flow logs",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\"\n| search lineItem_UsageType=\"*DataTransfer*\" OR lineItem_UsageType=\"*Bytes*\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval transfer_type=case(\n    lineItem_UsageType LIKE \"%InterRegion%\", \"Inter-Region\",\n    lineItem_UsageType LIKE \"%Out-Bytes%\", \"Internet Egress\",\n    lineItem_UsageType LIKE \"%In-Bytes%\", \"Internet Ingress\",\n    lineItem_UsageType LIKE \"%Regional%\", \"Cross-AZ\",\n    1==1, \"Other\")\n| stats sum(cost) as transfer_cost, sum(lineItem_UsageAmount) as gb_transferred by transfer_type, lineItem_ProductCode\n| sort -transfer_cost",
              "m": "Parse data transfer line items from CUR. Categorize by transfer type (egress, inter-region, cross-AZ). Identify top services and resources by transfer cost. Correlate with VPC flow logs for detailed flow analysis. Recommend architecture changes (CDN, VPC endpoints, same-AZ placement) for top cost drivers.",
              "z": "Pie chart (cost by transfer type), Bar chart (top services by transfer cost), Timechart (transfer cost trending), Table (detailed transfer breakdown).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS`, cloud billing TAs.\n• Ensure the following data sources are available: CUR data transfer line items, VPC flow logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse data transfer line items from CUR. Categorize by transfer type (egress, inter-region, cross-AZ). Identify top services and resources by transfer cost. Correlate with VPC flow logs for detailed flow analysis. Recommend architecture changes (CDN, VPC endpoints, same-AZ placement) for top cost drivers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\"\n| search lineItem_UsageType=\"*DataTransfer*\" OR lineItem_UsageType=\"*Bytes*\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval transfer_type=case(\n    lineItem_UsageType LIKE \"%InterRegion%\", \"Inter-Region\",\n    lineItem_UsageType LIKE \"%Out-Bytes%\", \"Internet Egress\",\n    lineItem_UsageType LIKE \"%In-Bytes%\", \"Internet Ingress\",\n    lineItem_UsageType LIKE \"%Regional%\", \"Cross-AZ\",\n    1==1, \"Other\")\n| stats sum(cost) as transfer_cost, sum(lineItem_UsageAmount) as gb_transferred by transfer_type, lineItem_ProductCode\n| sort -transfer_cost\n```\n\nUnderstanding this SPL\n\n**Data Transfer Cost Analysis** — Data transfer costs are often the most surprising cloud bill item. Inter-region, cross-AZ, and internet egress charges add up quickly. Identifying the biggest transfer flows enables architectural optimization to reduce costs significantly.\n\nDocumented **Data sources**: CUR data transfer line items, VPC flow logs. **App/TA** (typical add-on context): `Splunk Add-on for AWS`, cloud billing TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **transfer_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by transfer_type, lineItem_ProductCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (cost by transfer type), Bar chart (top services by transfer cost), Timechart (transfer cost trending), Table (detailed transfer breakdown).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
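The Data Transfer Cost Analysis implementation step suggests correlating CUR charges with VPC Flow Logs. A rough top-talkers sketch could look like the following, assuming flow logs are ingested via the Splunk Add-on for AWS and expose `src_ip`, `dest_ip`, and `bytes`; verify the sourcetype and field names in your environment before relying on it.

```spl
index=aws sourcetype="aws:cloudwatchlogs:vpcflow"
| stats sum(bytes) as bytes_total by src_ip, dest_ip
| eval gb_total=round(bytes_total/1024/1024/1024, 2)
| sort -gb_total
| head 20
```

Matching the biggest flows against the costliest transfer line items is usually enough to identify which workloads should move behind a CDN, a VPC endpoint, or same-AZ placement.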
            {
              "i": "20.1.9",
              "n": "Predictive Disk / Volume Exhaustion",
              "c": "high",
              "f": "advanced",
              "v": "Time-to-full forecast using linear regression on disk usage data enables proactive capacity planning. Running out of disk space causes application outages, failed backups, and data loss. Predicting exhaustion dates allows teams to provision storage or archive data before hitting critical thresholds.",
              "t": "`Splunk_TA_nix`, `Splunk_TA_windows`, SNMP-based storage inputs",
              "d": "df/disk usage data (any sourcetype with UsePct and filesystem), Windows Perfmon logical disk, SNMP storage MIBs",
              "q": "index=infrastructure (sourcetype=\"df\" OR sourcetype=\"disk\" OR sourcetype=\"Perfmon:LogicalDisk\")\n| eval UsePct=coalesce(UsePct, pctUsed, 'Percent_Used')\n| eval filesystem=coalesce(filesystem, mount, instance)\n| where isnotnull(UsePct) AND isnotnull(filesystem)\n| bin _time span=1d\n| stats latest(UsePct) as used_pct by filesystem, host, _time\n| timechart span=1d latest(used_pct) as used_pct by filesystem, host\n| predict used_pct as predicted_pct algorithm=LLP5 future_timespan=90\n| eval risk_30d=if('predicted_pct+30d'>85, \"At Risk\", \"OK\")\n| eval risk_90d=if('predicted_pct+90d'>95, \"Critical\", \"OK\")\n| where risk_30d=\"At Risk\" OR risk_90d=\"Critical\"\n| table _time, filesystem, host, used_pct, 'predicted_pct+30d', 'predicted_pct+90d', risk_30d, risk_90d",
              "m": "Collect disk usage metrics daily from all hosts (df, Perfmon, SNMP). Ensure UsePct and filesystem/mount are extracted. Use Splunk `predict` with LLP5 for 30/60/90-day forecasting. Alert when projected usage exceeds 85% within 30 days or 95% within 90 days. Exclude ephemeral or tmpfs mounts from alerts. Build a dashboard with forecast overlay and drilldown to host/filesystem.",
              "z": "Timechart (usage with forecast overlay), Table (volumes approaching exhaustion with risk status), Gauge (current utilization), Single value (volumes at risk).",
              "kfp": "Alerts during planned hardware refresh windows, before scheduled capacity adds.",
              "refs": "[Splunk_TA_nix](https://splunkbase.splunk.com/app/833), [Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_nix`, `Splunk_TA_windows`, SNMP-based storage inputs.\n• Ensure the following data sources are available: df/disk usage data (any sourcetype with UsePct and filesystem), Windows Perfmon logical disk, SNMP storage MIBs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect disk usage metrics daily from all hosts (df, Perfmon, SNMP). Ensure UsePct and filesystem/mount are extracted. Use Splunk `predict` with LLP5 for 30/60/90-day forecasting. Alert when projected usage exceeds 85% within 30 days or 95% within 90 days. Exclude ephemeral or tmpfs mounts from alerts. Build a dashboard with forecast overlay and drilldown to host/filesystem.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=infrastructure (sourcetype=\"df\" OR sourcetype=\"disk\" OR sourcetype=\"Perfmon:LogicalDisk\")\n| eval UsePct=coalesce(UsePct, pctUsed, 'Percent_Used')\n| eval filesystem=coalesce(filesystem, mount, instance)\n| where isnotnull(UsePct) AND isnotnull(filesystem)\n| bin _time span=1d\n| stats latest(UsePct) as used_pct by filesystem, host, _time\n| timechart span=1d latest(used_pct) as used_pct by filesystem, host\n| predict used_pct as predicted_pct algorithm=LLP5 future_timespan=90\n| eval risk_30d=if('predicted_pct+30d'>85, \"At Risk\", \"OK\")\n| eval risk_90d=if('predicted_pct+90d'>95, \"Critical\", \"OK\")\n| where risk_30d=\"At Risk\" OR risk_90d=\"Critical\"\n| table _time, filesystem, host, used_pct, 'predicted_pct+30d', 'predicted_pct+90d', risk_30d, risk_90d\n```\n\nUnderstanding this SPL\n\n**Predictive Disk / Volume Exhaustion** — Time-to-full forecast using linear regression on disk usage data enables proactive capacity planning. Running out of disk space causes application outages, failed backups, and data loss. Predicting exhaustion dates allows teams to provision storage or archive data before hitting critical thresholds.\n\nDocumented **Data sources**: df/disk usage data (any sourcetype with UsePct and filesystem), Windows Perfmon logical disk, SNMP storage MIBs. **App/TA** (typical add-on context): `Splunk_TA_nix`, `Splunk_TA_windows`, SNMP-based storage inputs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: infrastructure; **sourcetype**: df, disk, Perfmon:LogicalDisk. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=infrastructure, sourcetype=\"df\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **UsePct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **filesystem** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(UsePct) AND isnotnull(filesystem)` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by filesystem, host, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by filesystem, host** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Predictive Disk / Volume Exhaustion**): predict used_pct as predicted_pct algorithm=LLP5 future_timespan=90\n• `eval` defines or adjusts **risk_30d** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **risk_90d** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where risk_30d=\"At Risk\" OR risk_90d=\"Critical\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Predictive Disk / Volume Exhaustion**): table _time, filesystem, host, used_pct, 'predicted_pct+30d', 'predicted_pct+90d', risk_30d, risk_90d\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (usage with forecast overlay), Table (volumes approaching exhaustion with risk status), Gauge (current utilization), Single value (volumes at risk).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "linux",
                "snmp",
                "windows"
              ],
              "em": [
                "snmp_generic"
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Unix and Linux",
                "id": 833,
                "url": "https://splunkbase.splunk.com/app/833"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
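Because `predict` forecasts one series at a time, a per-filesystem sketch is often easier to validate before scaling the Predictive Disk / Volume Exhaustion search out across the fleet (for example by driving host and mount from dashboard tokens). The host and mount below are placeholders, the tmpfs/overlay regex implements the "exclude ephemeral mounts" step, and the 30-day horizon is illustrative.

```spl
index=infrastructure (sourcetype="df" OR sourcetype="disk") host=web01
| eval UsePct=coalesce(UsePct, pctUsed)
| eval filesystem=coalesce(filesystem, mount)
| where isnotnull(UsePct) AND NOT match(filesystem, "^(tmpfs|devtmpfs|overlay)")
| where filesystem="/var"
| timechart span=1d latest(UsePct) as used_pct
| predict used_pct as predicted_pct algorithm=LLP5 future_timespan=30
```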
            {
              "i": "20.1.10",
              "n": "Reserved Instance Coverage Gaps",
              "c": "high",
              "f": "intermediate",
              "v": "Highlights on-demand spend in families/regions that could be covered by additional RIs/Savings Plans — complements utilization of existing commitments (UC-20.1.3).",
              "t": "`Splunk Add-on for AWS`, billing TAs",
              "d": "CUR with `lineItem_LineItemType`, usage type, instance family",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval on_demand=if(isnull(reservation_ReservationARN) AND match(lineItem_UsageType,\"BoxUsage\"),cost,0)\n| stats sum(on_demand) as od_spend by lineItem_ProductCode, lineItem_UsageType, lineItem_AvailabilityZone\n| sort -od_spend\n| head 30",
              "m": "Top on-demand spend by family/AZ drives RI buying decisions. Join with coverage reports from Cost Explorer export if available.",
              "z": "Table (coverage gap candidates), Bar chart (on-demand by family), Single value (total addressable OD spend).",
              "kfp": "RI use varies during workload shifts, planned migrations, or controlled refactoring.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS`, billing TAs.\n• Ensure the following data sources are available: CUR with `lineItem_LineItemType`, usage type, instance family.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTop on-demand spend by family/AZ drives RI buying decisions. Join with coverage reports from Cost Explorer export if available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval on_demand=if(isnull(reservation_ReservationARN) AND match(lineItem_UsageType,\"BoxUsage\"),cost,0)\n| stats sum(on_demand) as od_spend by lineItem_ProductCode, lineItem_UsageType, lineItem_AvailabilityZone\n| sort -od_spend\n| head 30\n```\n\nUnderstanding this SPL\n\n**Reserved Instance Coverage Gaps** — Highlights on-demand spend in families/regions that could be covered by additional RIs/Savings Plans — complements utilization of existing commitments (UC-20.1.3).\n\nDocumented **Data sources**: CUR with `lineItem_LineItemType`, usage type, instance family. **App/TA** (typical add-on context): `Splunk Add-on for AWS`, billing TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **on_demand** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by lineItem_ProductCode, lineItem_UsageType, lineItem_AvailabilityZone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (coverage gap candidates), Bar chart (on-demand by family), Single value (total addressable OD spend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
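To make the Reserved Instance Coverage Gaps output more actionable for purchasing, a variant that excludes non-usage line items and breaks spend out by instance family can help. This sketch assumes the common CUR usage-type convention (for example "USW2-BoxUsage:m5.large") when parsing the family, so adjust the `replace`/`split` logic if your export differs.

```spl
index=cloud_billing sourcetype="aws:billing:cur" earliest=-30d
    lineItem_LineItemType="Usage" lineItem_UsageType="*BoxUsage*"
| where isnull(reservation_ReservationARN)
| eval cost=tonumber(lineItem_UnblendedCost)
| eval instance_type=replace(lineItem_UsageType, "^.*BoxUsage:", "")
| eval instance_family=mvindex(split(instance_type, "."), 0)
| stats sum(cost) as od_spend by instance_family, lineItem_AvailabilityZone
| sort -od_spend
| head 20
```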
            {
              "i": "20.1.11",
              "n": "Spot Instance Interruption Rate",
              "c": "medium",
              "f": "beginner",
              "v": "Interruptions per 1,000 instance-hours by pool and AZ — extends raw event counts (UC-20.1.7) with a **rate** for SLO tracking.",
              "t": "`Splunk Add-on for AWS`, CloudTrail",
              "d": "`aws:cloudtrail` Spot events, instance-hours from CUR",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" earliest=-30d\n| search eventName=\"SpotInstanceInterruption\"\n| stats count as intr\n| append [ search index=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n  | search lineItem_UsageType=\"*SpotUsage*\"\n  | stats sum(lineItem_NormalizedUsageAmount) as instance_hours ]\n| stats sum(intr) as interruptions sum(instance_hours) as ih\n| eval intr_per_1k=round(1000*interruptions/nullif(ih,0),2)",
              "m": "Align instance-hour denominator from CUR `NormalizedUsageAmount`. Alert when `intr_per_1k` exceeds baseline for stateful tiers.",
              "z": "Single value (interruptions per 1k hours), Line chart (rate trend), Table (by AZ).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS`, CloudTrail.\n• Ensure the following data sources are available: `aws:cloudtrail` Spot events, instance-hours from CUR.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign instance-hour denominator from CUR `NormalizedUsageAmount`. Alert when `intr_per_1k` exceeds baseline for stateful tiers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" earliest=-30d\n| search eventName=\"SpotInstanceInterruption\"\n| stats count as intr\n| append [ search index=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n  | search lineItem_UsageType=\"*SpotUsage*\"\n  | stats sum(lineItem_NormalizedUsageAmount) as instance_hours ]\n| stats sum(intr) as interruptions sum(instance_hours) as ih\n| eval intr_per_1k=round(1000*interruptions/nullif(ih,0),2)\n```\n\nUnderstanding this SPL\n\n**Spot Instance Interruption Rate** — Interruptions per 1,000 instance-hours by pool and AZ — extends raw event counts (UC-20.1.7) with a **rate** for SLO tracking.\n\nDocumented **Data sources**: `aws:cloudtrail` Spot events, instance-hours from CUR. **App/TA** (typical add-on context): `Splunk Add-on for AWS`, CloudTrail. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Appends rows from a subsearch with `append`.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **intr_per_1k** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (interruptions per 1k hours), Line chart (rate trend), Table (by AZ).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.1.12",
              "n": "FinOps Budget Alert Correlation",
              "c": "high",
              "f": "intermediate",
              "v": "Joins AWS Budgets / Azure budget notifications with resource change events and anomaly searches to explain **why** a threshold fired.",
              "t": "Cloud billing TAs, CloudTrail",
              "d": "Budget alert SNS/email logs, `cost:daily`, CloudTrail",
              "q": "index=finops sourcetype=\"aws:budget:alert\" earliest=-7d\n| eval budget_name=coalesce(budget_name,BudgetName)\n| join type=left max=1 budget_name [ search index=cloud_cost sourcetype=\"cost:daily\" earliest=-7d | bin _time span=1d\n| stats sum(cost) as daily_cost by account_id, _time ]\n| table _time, budget_name, threshold_type, daily_cost",
              "m": "Ingest budget notifications via HEC or Lambda. Drill down to service cost change same day. Link to change tickets.",
              "z": "Timeline (budget alerts overlaid with spend), Table (alert + cost delta), Sankey (alert → service).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud billing TAs, CloudTrail.\n• Ensure the following data sources are available: Budget alert SNS/email logs, `cost:daily`, CloudTrail.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest budget notifications via HEC or Lambda. Drill down to service cost change same day. Link to change tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=finops sourcetype=\"aws:budget:alert\" earliest=-7d\n| eval budget_name=coalesce(budget_name,BudgetName)\n| join type=left max=1 budget_name [ search index=cloud_cost sourcetype=\"cost:daily\" earliest=-7d | bin _time span=1d\n| stats sum(cost) as daily_cost by account_id, _time ]\n| table _time, budget_name, threshold_type, daily_cost\n```\n\nUnderstanding this SPL\n\n**FinOps Budget Alert Correlation** — Joins AWS Budgets / Azure budget notifications with resource change events and anomaly searches to explain **why** a threshold fired.\n\nDocumented **Data sources**: Budget alert SNS/email logs, `cost:daily`, CloudTrail. **App/TA** (typical add-on context): Cloud billing TAs, CloudTrail. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: finops; **sourcetype**: aws:budget:alert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=finops, sourcetype=\"aws:budget:alert\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **budget_name** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **FinOps Budget Alert Correlation**): table _time, budget_name, threshold_type, daily_cost\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (budget alerts overlaid with spend), Table (alert + cost delta), Sankey (alert → service).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.1.13",
              "n": "Cost Anomaly by Cloud Service",
              "c": "critical",
              "f": "intermediate",
              "v": "z-score or median-absolute-deviation of **daily cost per `lineItem_ProductCode`** — tighter scope than account-level UC-20.1.2.",
              "t": "Billing export TAs",
              "d": "CUR, Azure cost export",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-60d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| timechart span=1d sum(cost) as daily by lineItem_ProductCode\n| untable _time lineItem_ProductCode daily\n| eventstats median(daily) as med, stdev(daily) as sd by lineItem_ProductCode\n| eval z=if(sd>0, (daily-med)/sd, 0)\n| where abs(z)>3\n| table _time, lineItem_ProductCode, daily, med, z",
              "m": "Requires 60d history. Exclude credits via `lineItem_LineItemType`. Page on |z|>3 for top services.",
              "z": "Table (service anomalies), Line chart (daily vs median), Single value (open anomalies).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Billing export TAs.\n• Ensure the following data sources are available: CUR, Azure cost export.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequires 60d history. Exclude credits via `lineItem_LineItemType`. Page on |z|>3 for top services.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-60d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| timechart span=1d sum(cost) as daily by lineItem_ProductCode\n| untable _time lineItem_ProductCode daily\n| eventstats median(daily) as med, stdev(daily) as sd by lineItem_ProductCode\n| eval z=if(sd>0, (daily-med)/sd, 0)\n| where abs(z)>3\n| table _time, lineItem_ProductCode, daily, med, z\n```\n\nUnderstanding this SPL\n\n**Cost Anomaly by Cloud Service** — z-score or median-absolute-deviation of **daily cost per `lineItem_ProductCode`** — tighter scope than account-level UC-20.1.2.\n\nDocumented **Data sources**: CUR, Azure cost export. **App/TA** (typical add-on context): Billing export TAs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by lineItem_ProductCode** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Cost Anomaly by Cloud Service**): untable _time lineItem_ProductCode daily\n• `eventstats` rolls up events into metrics; results are split **by lineItem_ProductCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where abs(z)>3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cost Anomaly by Cloud Service**): table _time, lineItem_ProductCode, daily, med, z\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (service anomalies), Line chart (daily vs median), Single value (open anomalies).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Anomaly"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.1.14",
              "n": "Savings Plan Utilization and Hourly Burn",
              "c": "high",
              "f": "intermediate",
              "v": "Savings Plan utilization % and unused commitment hours — operational view for FinOps reviews (related to UC-20.2.9).",
              "t": "AWS Cost Explorer export, `aws:savings_plan` sourcetype",
              "d": "Savings Plans utilization report",
              "q": "index=cloud_cost sourcetype=\"aws:savings_plan\" earliest=-7d\n| stats latest(utilization_pct) as util, latest(unused_commitment_hrs) as unused by savings_plan_arn\n| where util < 90 OR unused>100\n| table savings_plan_arn, util, unused",
              "m": "Schedule daily SP utilization CSV from S3. Alert when utilization <90% for 3+ days. Recommend exchange/modify.",
              "z": "Gauge (SP utilization), Table (underutilized plans), Line chart (util trend).",
              "kfp": "Idle resources for DR standby capacity, scheduled batch processors, or planned future-use staging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AWS Cost Explorer export, `aws:savings_plan` sourcetype.\n• Ensure the following data sources are available: Savings Plans utilization report.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule daily SP utilization CSV from S3. Alert when utilization <90% for 3+ days. Recommend exchange/modify.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_cost sourcetype=\"aws:savings_plan\" earliest=-7d\n| stats latest(utilization_pct) as util, latest(unused_commitment_hrs) as unused by savings_plan_arn\n| where util < 90 OR unused>100\n| table savings_plan_arn, util, unused\n```\n\nUnderstanding this SPL\n\n**Savings Plan Utilization and Hourly Burn** — Savings Plan utilization % and unused commitment hours — operational view for FinOps reviews (related to UC-20.2.9).\n\nDocumented **Data sources**: Savings Plans utilization report. **App/TA** (typical add-on context): AWS Cost Explorer export, `aws:savings_plan` sourcetype. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_cost; **sourcetype**: aws:savings_plan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_cost, sourcetype=\"aws:savings_plan\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by savings_plan_arn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where util < 90 OR unused>100` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Savings Plan Utilization and Hourly Burn**): table savings_plan_arn, util, unused\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (SP utilization), Table (underutilized plans), Line chart (util trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.1.15",
              "n": "Data Transfer Cost Attribution by Tag",
              "c": "medium",
              "f": "intermediate",
              "v": "Allocates data transfer line items to `resourceTags_user_*` for chargeback — extends aggregate transfer analysis (UC-20.1.8).",
              "t": "CUR with resource tags",
              "d": "`aws:billing:cur`",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\"\n| search lineItem_ProductCode=\"AmazonEC2\" (lineItem_UsageType=\"*DataTransfer*\" OR lineItem_UsageType=\"*Bytes*\")\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval app=coalesce(resourceTags_user_Application,\"untagged\")\n| stats sum(cost) as xfer_cost by app, lineItem_UsageType\n| sort -xfer_cost",
              "m": "Requires tags on resources generating egress; untagged flows appear as `untagged`. Reconcile with VPC Flow to owners.",
              "z": "Stacked bar (transfer $ by app), Table (top untagged), Pie chart (egress by tag).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CUR with resource tags.\n• Ensure the following data sources are available: `aws:billing:cur`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRequires tags on resources generating egress; untagged flows appear as `untagged`. Reconcile with VPC Flow to owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\"\n| search lineItem_ProductCode=\"AmazonEC2\" (lineItem_UsageType=\"*DataTransfer*\" OR lineItem_UsageType=\"*Bytes*\")\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval app=coalesce(resourceTags_user_Application,\"untagged\")\n| stats sum(cost) as xfer_cost by app, lineItem_UsageType\n| sort -xfer_cost\n```\n\nUnderstanding this SPL\n\n**Data Transfer Cost Attribution by Tag** — Allocates data transfer line items to `resourceTags_user_*` for chargeback — extends aggregate transfer analysis (UC-20.1.8).\n\nDocumented **Data sources**: `aws:billing:cur`. **App/TA** (typical add-on context): CUR with resource tags. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **app** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by app, lineItem_UsageType** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (transfer $ by app), Table (top untagged), Pie chart (egress by tag).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.1.16",
              "n": "Container Workload Right-Sizing Cost",
              "c": "high",
              "f": "intermediate",
              "v": "Correlates EKS/AKS/GKE namespace CPU/memory requests vs actuals with allocated node cost for rightsizing recommendations.",
              "t": "Kubernetes metrics, cloud billing",
              "d": "Prometheus metrics, CUR container cost allocation (Kubecost-style)",
              "q": "index=kubernetes sourcetype=\"kube:metrics\" earliest=-7d\n| stats avg(container_cpu_usage_cores) as use, avg(container_cpu_request_cores) as req by namespace, cluster\n| eval oversize=if(req>use*2,1,0)\n| where oversize=1\n| lookup kube_namespace_monthly_cost namespace cluster OUTPUT monthly_cost\n| table namespace, cluster, use, req, monthly_cost",
              "m": "Ingest kube-state-metrics or vendor. Join cost from Kubecost export or tag-based CUR. Drive requests/limits changes.",
              "z": "Table (oversized namespaces), Bar chart (waste $), Scatter (request vs use).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kubernetes metrics, cloud billing.\n• Ensure the following data sources are available: Prometheus metrics, CUR container cost allocation (Kubecost-style).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest kube-state-metrics or vendor. Join cost from Kubecost export or tag-based CUR. Drive requests/limits changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kubernetes sourcetype=\"kube:metrics\" earliest=-7d\n| stats avg(container_cpu_usage_cores) as use, avg(container_cpu_request_cores) as req by namespace, cluster\n| eval oversize=if(req>use*2,1,0)\n| where oversize=1\n| lookup kube_namespace_monthly_cost namespace cluster OUTPUT monthly_cost\n| table namespace, cluster, use, req, monthly_cost\n```\n\nUnderstanding this SPL\n\n**Container Workload Right-Sizing Cost** — Correlates EKS/AKS/GKE namespace CPU/memory requests vs actuals with allocated node cost for rightsizing recommendations.\n\nDocumented **Data sources**: Prometheus metrics, CUR container cost allocation (Kubecost-style). **App/TA** (typical add-on context): Kubernetes metrics, cloud billing. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kubernetes; **sourcetype**: kube:metrics. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kubernetes, sourcetype=\"kube:metrics\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by namespace, cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **oversize** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where oversize=1` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Pipeline stage (see **Container Workload Right-Sizing Cost**): table namespace, cluster, use, req, monthly_cost\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (oversized namespaces), Bar chart (waste $), Scatter (request vs use).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
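The right-sizing search in UC-20.1.16 above covers only the CPU dimension, while the step about driving requests/limits changes usually needs a memory check as well. A minimal companion sketch, assuming the same `kube:metrics` feed also carries cAdvisor-style `container_memory_usage_bytes` / `container_memory_request_bytes` fields (names vary by collector, so rename to match your metrics pipeline):

```spl
index=kubernetes sourcetype="kube:metrics" earliest=-7d
| stats avg(container_memory_usage_bytes) as mem_use, avg(container_memory_request_bytes) as mem_req by namespace, cluster
| eval mem_oversize=if(mem_req > mem_use*2, 1, 0)
| where mem_oversize=1
| eval headroom_gb=round((mem_req-mem_use)/1024/1024/1024, 1)
| table namespace, cluster, mem_use, mem_req, headroom_gb
```

Adjust the 2x multiplier to whatever memory headroom your right-sizing policy tolerates.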
            {
              "i": "20.1.17",
              "n": "Serverless Invocation Cost Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Daily Lambda/Azure Functions/Google Cloud Functions cost and invocation count — detects runaway retries and bad deploys.",
              "t": "Cloud billing with serverless product codes",
              "d": "CUR, Azure meter export",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| search lineItem_ProductCode IN (\"AWSLambda\",\"AmazonSNS\") OR lineItem_UsageType=\"*Lambda*\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| timechart span=1d sum(cost) as serverless_cost sum(lineItem_UsageAmount) as units",
              "m": "Map usage types to invocations vs GB-sec. Alert when daily cost > 2× 7d average.",
              "z": "Line chart (serverless $ and invocations), Table (top functions from resource tags), Single value (day-over-day %).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud billing with serverless product codes.\n• Ensure the following data sources are available: CUR, Azure meter export.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap usage types to invocations vs GB-sec. Alert when daily cost > 2× 7d average.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| search lineItem_ProductCode IN (\"AWSLambda\",\"AmazonSNS\") OR lineItem_UsageType=\"*Lambda*\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| timechart span=1d sum(cost) as serverless_cost sum(lineItem_UsageAmount) as units\n```\n\nUnderstanding this SPL\n\n**Serverless Invocation Cost Trending** — Daily Lambda/Azure Functions/Google Cloud Functions cost and invocation count — detects runaway retries and bad deploys.\n\nDocumented **Data sources**: CUR, Azure meter export. **App/TA** (typical add-on context): Cloud billing with serverless product codes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (serverless $ and invocations), Table (top functions from resource tags), Single value (day-over-day %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
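The implementation note for UC-20.1.17 calls for alerting when daily serverless cost exceeds 2x the trailing 7-day average, but the trending search does not compute that comparison. A minimal alert sketch under the same index/sourcetype assumptions (the product-code filter is copied from the use case as documented; adjust both to your feed):

```spl
index=cloud_billing sourcetype="aws:billing:cur" earliest=-8d@d latest=@d
| search lineItem_ProductCode IN ("AWSLambda","AmazonSNS") OR lineItem_UsageType="*Lambda*"
| eval cost=tonumber(lineItem_UnblendedCost)
| bin _time span=1d
| stats sum(cost) as daily_cost by _time
| sort 0 _time
| streamstats window=7 current=f avg(daily_cost) as baseline_7d
| where isnotnull(baseline_7d) AND daily_cost > 2*baseline_7d
```

Schedule it daily and alert when any rows are returned; `current=f` keeps each day out of its own baseline.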
            {
              "i": "20.1.18",
              "n": "Orphaned Cloud Resource Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Unattached volumes, unused elastic IPs, old snapshots without volume — **inventory-driven** waste beyond idle CPU (UC-20.1.4).",
              "t": "AWS Config snapshot, resource inventory",
              "d": "`aws:config:inventory`, cost",
              "q": "index=cloud_inventory sourcetype=\"aws:config:inventory\" earliest=-1d\n| where status=\"available\" AND resource_type=\"AWS::EC2::Volume\" AND attachments=0\n| lookup monthly_storage_rate region OUTPUT rate_gb_mo\n| eval waste=size_gb*rate_gb_mo\n| stats sum(waste) as monthly_waste by account_id, region\n| sort -monthly_waste",
              "m": "Refresh Config aggregator daily. Include unattached EIP and old snapshots in companion searches. Route to owners via account tags.",
              "z": "Table (orphan waste $), Bar chart (by account), Single value (total orphan monthly $).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AWS Config snapshot, resource inventory.\n• Ensure the following data sources are available: `aws:config:inventory`, cost.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRefresh Config aggregator daily. Include unattached EIP and old snapshots in companion searches. Route to owners via account tags.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_inventory sourcetype=\"aws:config:inventory\" earliest=-1d\n| where status=\"available\" AND resource_type=\"AWS::EC2::Volume\" AND attachments=0\n| lookup monthly_storage_rate region OUTPUT rate_gb_mo\n| eval waste=size_gb*rate_gb_mo\n| stats sum(waste) as monthly_waste by account_id, region\n| sort -monthly_waste\n```\n\nUnderstanding this SPL\n\n**Orphaned Cloud Resource Detection** — Unattached volumes, unused elastic IPs, old snapshots without volume — **inventory-driven** waste beyond idle CPU (UC-20.1.4).\n\nDocumented **Data sources**: `aws:config:inventory`, cost. **App/TA** (typical add-on context): AWS Config snapshot, resource inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_inventory; **sourcetype**: aws:config:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_inventory, sourcetype=\"aws:config:inventory\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status=\"available\" AND resource_type=\"AWS::EC2::Volume\" AND attachments=0` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **waste** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by account_id, region** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (orphan waste $), Bar chart (by account), Single value (total orphan monthly $).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
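UC-20.1.18 mentions companion searches for unattached Elastic IPs and aging snapshots without showing them. A hedged sketch for the Elastic IP case; the `AWS::EC2::EIP` resource type value and the `association_id` field are assumptions about how your AWS Config inventory is parsed, so verify them against live events before saving the search:

```spl
index=cloud_inventory sourcetype="aws:config:inventory" earliest=-1d
| where resource_type="AWS::EC2::EIP" AND (isnull(association_id) OR association_id="")
| stats count as unattached_eips by account_id, region
| sort -unattached_eips
```

Multiply the count by the current idle-EIP hourly rate if you also want a dollar estimate alongside the volume search.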
            {
              "i": "20.1.19",
              "n": "Cost Allocation Tag Compliance",
              "c": "high",
              "f": "beginner",
              "v": "Percentage of monthly spend that is **untagged** or missing required keys (`Application`, `CostCenter`, `Environment`).",
              "t": "CUR",
              "d": "`aws:billing:cur`",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval has_app=isnotnull(resourceTags_user_Application) AND resourceTags_user_Application!=\"\"\n| eval has_cc=isnotnull(resourceTags_user_CostCenter) AND resourceTags_user_CostCenter!=\"\"\n| stats sum(eval(if(has_app AND has_cc,cost,0))) as tagged_cost sum(cost) as total_cost\n| eval tag_compliance_pct=round(100*tagged_cost/total_cost,1)",
              "m": "Expand required keys per policy. Break down by OU/account. Drive tagging enforcement at CI/CD.",
              "z": "Single value (tag compliance %), Pie chart (tagged vs untagged $), Table (worst accounts).",
              "kfp": "Tag gaps during new project onboarding, contractor-deployed resources, or pre-policy resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CUR.\n• Ensure the following data sources are available: `aws:billing:cur`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExpand required keys per policy. Break down by OU/account. Drive tagging enforcement at CI/CD.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval has_app=isnotnull(resourceTags_user_Application) AND resourceTags_user_Application!=\"\"\n| eval has_cc=isnotnull(resourceTags_user_CostCenter) AND resourceTags_user_CostCenter!=\"\"\n| stats sum(eval(if(has_app AND has_cc,cost,0))) as tagged_cost sum(cost) as total_cost\n| eval tag_compliance_pct=round(100*tagged_cost/total_cost,1)\n```\n\nUnderstanding this SPL\n\n**Cost Allocation Tag Compliance** — Percentage of monthly spend that is **untagged** or missing required keys (`Application`, `CostCenter`, `Environment`).\n\nDocumented **Data sources**: `aws:billing:cur`. **App/TA** (typical add-on context): CUR. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **has_app** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **has_cc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **tag_compliance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (tag compliance %), Pie chart (tagged vs untagged $), Table (worst accounts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
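The implementation note for UC-20.1.19 says to break compliance down by OU/account, while the main search returns a single fleet-wide percentage. A minimal per-account variant under the same assumptions, sorted so the least compliant accounts surface first:

```spl
index=cloud_billing sourcetype="aws:billing:cur" earliest=-30d
| eval cost=tonumber(lineItem_UnblendedCost)
| eval tagged=if(isnotnull(resourceTags_user_Application) AND resourceTags_user_Application!="" AND isnotnull(resourceTags_user_CostCenter) AND resourceTags_user_CostCenter!="", "yes", "no")
| stats sum(eval(if(tagged="yes",cost,0))) as tagged_cost sum(cost) as total_cost by lineItem_UsageAccountId
| eval tag_compliance_pct=round(100*tagged_cost/total_cost,1)
| sort tag_compliance_pct
```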
            {
              "i": "20.1.20",
              "n": "Idle Resource Identification by Account",
              "c": "high",
              "f": "beginner",
              "v": "Rolls up idle candidates (UC-20.1.4) to **account and owner** for FinOps accountability — same theme, executive view.",
              "t": "Cloud metrics, CUR, CMDB",
              "d": "Idle detection output, `lineItem_UsageAccountId`",
              "q": "index=summary sourcetype=\"cloud:idle_candidates\" earliest=-1d\n| stats sum(estimated_monthly_savings) as idle_dollars by lineItem_UsageAccountId\n| lookup aws_account_owner account_id AS lineItem_UsageAccountId OUTPUT owner_email\n| sort -idle_dollars\n| head 25",
              "m": "Populate `cloud:idle_candidates` from scheduled UC-20.1.4 logic. Monthly email to top account owners.",
              "z": "Bar chart (idle $ by account), Table (owner, idle $), Single value (fleet idle $).",
              "kfp": "Idle resources for DR standby capacity, scheduled batch processors, or planned future-use staging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud metrics, CUR, CMDB.\n• Ensure the following data sources are available: Idle detection output, `lineItem_UsageAccountId`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPopulate `cloud:idle_candidates` from scheduled UC-20.1.4 logic. Monthly email to top account owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=summary sourcetype=\"cloud:idle_candidates\" earliest=-1d\n| stats sum(estimated_monthly_savings) as idle_dollars by lineItem_UsageAccountId\n| lookup aws_account_owner account_id AS lineItem_UsageAccountId OUTPUT owner_email\n| sort -idle_dollars\n| head 25\n```\n\nUnderstanding this SPL\n\n**Idle Resource Identification by Account** — Rolls up idle candidates (UC-20.1.4) to **account and owner** for FinOps accountability — same theme, executive view.\n\nDocumented **Data sources**: Idle detection output, `lineItem_UsageAccountId`. **App/TA** (typical add-on context): Cloud metrics, CUR, CMDB. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: summary; **sourcetype**: cloud:idle_candidates. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=summary, sourcetype=\"cloud:idle_candidates\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by lineItem_UsageAccountId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (idle $ by account), Table (owner, idle $), Single value (fleet idle $).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
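UC-20.1.20 assumes a `cloud:idle_candidates` feed in `index=summary`, populated by the scheduled UC-20.1.4 logic, but the write step is not shown. A heavily hedged sketch of that scheduled search's tail: every index, sourcetype, field, and lookup name below is a placeholder for whatever your idle-detection search actually produces; only the final `collect` pattern is the point.

```spl
index=cloud_metrics sourcetype="aws:cloudwatch" metric_name="CPUUtilization" earliest=-14d
| stats avg(Average) as avg_cpu by InstanceId, account_id
| where avg_cpu < 5
| lookup instance_hourly_rate InstanceId OUTPUT hourly_rate
| eval estimated_monthly_savings=round(coalesce(hourly_rate,0)*730, 2)
| rename account_id as lineItem_UsageAccountId
| collect index=summary sourcetype="cloud:idle_candidates"
```

Note that `collect` with a sourcetype other than the default stash can count against license in some configurations; check your summary-indexing settings before scheduling.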
            {
              "i": "20.1.21",
              "n": "Azure Cost Management Daily Spend by Meter Category",
              "c": "high",
              "f": "intermediate",
              "v": "Azure invoices spread spend across many meters (compute, storage, networking, PaaS). Rolling up daily cost by `MeterCategory` and subscription exposes the biggest budget drivers early so teams can tune SKUs, retire sandboxes, and negotiate reservations before month-end true-up.",
              "t": "`Splunk Add-on for Microsoft Cloud Services`, Azure Cost Management export (Blob/HEC)",
              "d": "`index=cloud_billing` `sourcetype=\"azure:billing:usage\"`",
              "q": "index=cloud_billing sourcetype=\"azure:billing:usage\" earliest=-30d\n| eval cost=tonumber(coalesce(CostUSD, cost, pretax_cost))\n| eval sub=coalesce(SubscriptionId, subscription_id)\n| eval cat=coalesce(MeterCategory, meter_category, \"Unknown\")\n| bin _time span=1d\n| stats sum(cost) as daily_spend by _time, sub, cat\n| timechart span=1d sum(daily_spend) as spend by cat",
              "m": "(1) Export Cost Management actual + amortized cost daily to Splunk (Blob pull or Event Hub). (2) Normalize currency fields and subscription identifiers. (3) Alert when any `MeterCategory` exceeds its 14-day median by 40% for two consecutive days.",
              "z": "Stacked area chart (spend by meter category), Table (top categories by subscription), Single value (MTD vs prior MTD %).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for Microsoft Cloud Services`, Azure Cost Management export (Blob/HEC).\n• Ensure the following data sources are available: `index=cloud_billing` `sourcetype=\"azure:billing:usage\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export Cost Management actual + amortized cost daily to Splunk (Blob pull or Event Hub). (2) Normalize currency fields and subscription identifiers. (3) Alert when any `MeterCategory` exceeds its 14-day median by 40% for two consecutive days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"azure:billing:usage\" earliest=-30d\n| eval cost=tonumber(coalesce(CostUSD, cost, pretax_cost))\n| eval sub=coalesce(SubscriptionId, subscription_id)\n| eval cat=coalesce(MeterCategory, meter_category, \"Unknown\")\n| bin _time span=1d\n| stats sum(cost) as daily_spend by _time, sub, cat\n| timechart span=1d sum(daily_spend) as spend by cat\n```\n\nUnderstanding this SPL\n\n**Azure Cost Management Daily Spend by Meter Category** — Azure invoices spread spend across many meters (compute, storage, networking, PaaS). Rolling up daily cost by `MeterCategory` and subscription exposes the biggest budget drivers early so teams can tune SKUs, retire sandboxes, and negotiate reservations before month-end true-up.\n\nDocumented **Data sources**: `index=cloud_billing` `sourcetype=\"azure:billing:usage\"`. **App/TA** (typical add-on context): `Splunk Add-on for Microsoft Cloud Services`, Azure Cost Management export (Blob/HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: azure:billing:usage. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"azure:billing:usage\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sub** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cat** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, sub, cat** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by cat** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Stacked area chart (spend by meter category), Table (top categories by subscription), Single value (MTD vs prior MTD %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "azure"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
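Step 3 of the UC-20.1.21 implementation alerts when a `MeterCategory` exceeds its 14-day median by 40% for two consecutive days; the main search only charts spend. A minimal sketch of that alert logic under the same field assumptions (the median here is computed over the whole lookback window, a close approximation rather than a strict trailing baseline):

```spl
index=cloud_billing sourcetype="azure:billing:usage" earliest=-15d@d latest=@d
| eval cost=tonumber(coalesce(CostUSD, cost, pretax_cost))
| eval cat=coalesce(MeterCategory, meter_category, "Unknown")
| bin _time span=1d
| stats sum(cost) as daily_spend by _time, cat
| eventstats median(daily_spend) as median_14d by cat
| eval breach=if(daily_spend > 1.4*median_14d, 1, 0)
| sort 0 cat _time
| streamstats window=2 sum(breach) as breaches_2d by cat
| where breaches_2d=2 AND _time>=relative_time(now(),"-1d@d")
```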
            {
              "i": "20.1.22",
              "n": "GCP Billing Export Cost by Project and Service",
              "c": "high",
              "f": "intermediate",
              "v": "Google Cloud bills aggregate credits, sustained-use discounts, and cross-project shared VPC egress. Tracking `project.id` and `service.description` daily highlights runaway BigQuery scan jobs, idle Composer environments, and forgotten projects that quietly consume committed spend.",
              "t": "`Splunk Add-on for Google Cloud Platform`, BigQuery billing export (JSONL to GCS → HEC)",
              "d": "`index=cloud_billing` `sourcetype=\"gcp:billing:export\"`",
              "q": "index=cloud_billing sourcetype=\"gcp:billing:export\" earliest=-30d\n| eval cost=tonumber(coalesce(cost, cost_amount, usage.amount))\n| eval project=coalesce('project.id', project_id)\n| eval svc=coalesce('service.description', service_description, sku.description)\n| bin _time span=1d\n| stats sum(cost) as daily_cost by _time, project, svc\n| stats sum(eval(if(_time>=relative_time(now(),\"-7d@d\"),daily_cost,0))) as last7d\n         sum(eval(if(_time<relative_time(now(),\"-7d@d\") AND _time>=relative_time(now(),\"-14d@d\"),daily_cost,0))) as prev7d by project, svc\n| eval wow_pct=round(100*(last7d-prev7d)/nullif(prev7d,0),1)\n| where wow_pct>25 OR last7d>5000\n| sort -last7d",
              "m": "(1) Enable detailed billing export with project hierarchy labels. (2) Ingest with stable `project`/`service` field aliases. (3) Route week-over-week spikes to FinOps with drilldown links to BigQuery job IDs when present.",
              "z": "Treemap (cost by project), Bar chart (top services), Table (WoW % and 7-day spend).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Add-on for Google Cloud Platform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for Google Cloud Platform`, BigQuery billing export (JSONL to GCS → HEC).\n• Ensure the following data sources are available: `index=cloud_billing` `sourcetype=\"gcp:billing:export\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable detailed billing export with project hierarchy labels. (2) Ingest with stable `project`/`service` field aliases. (3) Route week-over-week spikes to FinOps with drilldown links to BigQuery job IDs when present.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"gcp:billing:export\" earliest=-30d\n| eval cost=tonumber(coalesce(cost, cost_amount, usage.amount))\n| eval project=coalesce('project.id', project_id)\n| eval svc=coalesce('service.description', service_description, sku.description)\n| bin _time span=1d\n| stats sum(cost) as daily_cost by _time, project, svc\n| stats sum(eval(if(_time>=relative_time(now(),\"-7d@d\"),daily_cost,0))) as last7d\n         sum(eval(if(_time<relative_time(now(),\"-7d@d\") AND _time>=relative_time(now(),\"-14d@d\"),daily_cost,0))) as prev7d by project, svc\n| eval wow_pct=round(100*(last7d-prev7d)/nullif(prev7d,0),1)\n| where wow_pct>25 OR last7d>5000\n| sort -last7d\n```\n\nUnderstanding this SPL\n\n**GCP Billing Export Cost by Project and Service** — Google Cloud bills aggregate credits, sustained-use discounts, and cross-project shared VPC egress. Tracking `project.id` and `service.description` daily highlights runaway BigQuery scan jobs, idle Composer environments, and forgotten projects that quietly consume committed spend.\n\nDocumented **Data sources**: `index=cloud_billing` `sourcetype=\"gcp:billing:export\"`. **App/TA** (typical add-on context): `Splunk Add-on for Google Cloud Platform`, BigQuery billing export (JSONL to GCS → HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: gcp:billing:export. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"gcp:billing:export\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **project** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **svc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, project, svc** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` rolls up events into metrics; results are split **by project, svc** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **wow_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where wow_pct>25 OR last7d>5000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Treemap (cost by project), Bar chart (top services), Table (WoW % and 7-day spend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "gcp"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.1.23",
              "n": "Reserved Instance Purchase Amortization vs On-Demand Leakage",
              "c": "medium",
              "f": "advanced",
              "v": "Amortized RI fees should displace on-demand box usage in the same instance family. When amortized reservation cost rises but matching usage stays on-demand, coverage or scope mismatches waste committed dollars. This view quantifies leakage for purchasing corrections.",
              "t": "`Splunk Add-on for AWS` (CUR)",
              "d": "`index=cloud_billing` `sourcetype=\"aws:billing:cur\"`",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval ri_fee=if(lineItem_LineItemType IN (\"DiscountedUsage\",\"RIFee\"), cost, 0)\n| eval od_compute=if(match(lineItem_UsageType,\"BoxUsage\") AND isnull(reservation_ReservationARN), cost, 0)\n| eval family=replace(lineItem_UsageType,\"BoxUsage:\",\"\")\n| stats sum(ri_fee) as ri_spend sum(od_compute) as od_leak by lineItem_UsageAccountId, family\n| eval leak_ratio=round(od_leak/nullif(ri_spend+od_leak,0)*100,1)\n| where od_leak>100 AND leak_ratio>15\n| sort -od_leak",
              "m": "(1) Confirm CUR includes amortized cost columns for your payer account. (2) Map `lineItem_UsageType` to instance family for EC2. (3) Review accounts with high `leak_ratio` against AWS Cost Explorer coverage reports.",
              "z": "Table (accounts and families with leakage), Bar chart (on-demand leak $), Heatmap (account × family).",
              "kfp": "RI use varies during workload shifts, planned migrations, or controlled refactoring.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS` (CUR).\n• Ensure the following data sources are available: `index=cloud_billing` `sourcetype=\"aws:billing:cur\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm CUR includes amortized cost columns for your payer account. (2) Map `lineItem_UsageType` to instance family for EC2. (3) Review accounts with high `leak_ratio` against AWS Cost Explorer coverage reports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval ri_fee=if(lineItem_LineItemType IN (\"DiscountedUsage\",\"RIFee\"), cost, 0)\n| eval od_compute=if(match(lineItem_UsageType,\"BoxUsage\") AND isnull(reservation_ReservationARN), cost, 0)\n| eval family=replace(lineItem_UsageType,\"BoxUsage:\",\"\")\n| stats sum(ri_fee) as ri_spend sum(od_compute) as od_leak by lineItem_UsageAccountId, family\n| eval leak_ratio=round(od_leak/nullif(ri_spend+od_leak,0)*100,1)\n| where od_leak>100 AND leak_ratio>15\n| sort -od_leak\n```\n\nUnderstanding this SPL\n\n**Reserved Instance Purchase Amortization vs On-Demand Leakage** — Amortized RI fees should displace on-demand box usage in the same instance family. When amortized reservation cost rises but matching usage stays on-demand, coverage or scope mismatches waste committed dollars. This view quantifies leakage for purchasing corrections.\n\nDocumented **Data sources**: `index=cloud_billing` `sourcetype=\"aws:billing:cur\"`. **App/TA** (typical add-on context): `Splunk Add-on for AWS` (CUR). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ri_fee** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **od_compute** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **family** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by lineItem_UsageAccountId, family** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **leak_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where od_leak>100 AND leak_ratio>15` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (accounts and families with leakage), Bar chart (on-demand leak $), Heatmap (account × family).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
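Step 2 of the UC-20.1.23 implementation maps `lineItem_UsageType` to an instance family, and the `replace()` in the main search still leaves the region prefix and instance size attached (for example `USE1-BoxUsage:m5.xlarge`). A hedged sketch of a stricter extraction with `rex`; the pattern is an assumption about common CUR usage-type formats, so validate it against your own line items:

```spl
index=cloud_billing sourcetype="aws:billing:cur" earliest=-30d
| search lineItem_UsageType="*BoxUsage*"
| rex field=lineItem_UsageType "BoxUsage:(?<instance_type>[a-z0-9]+\.[a-z0-9]+)"
| eval family=if(isnotnull(instance_type), mvindex(split(instance_type,"."),0), "unspecified")
| eval cost=tonumber(lineItem_UnblendedCost)
| stats sum(cost) as od_cost by lineItem_UsageAccountId, family
| sort -od_cost
```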
            {
              "i": "20.1.24",
              "n": "Savings Plan Coverage of Eligible Compute Spend",
              "c": "high",
              "f": "intermediate",
              "v": "Savings Plans discount compute only when usage is eligible and within the commitment’s scope. Low coverage means you are still paying list price for large portions of EC2, Fargate, or Lambda despite owning a plan—direct savings opportunity on the next purchase or exchange.",
              "t": "`Splunk Add-on for AWS`, CUR with `savingsPlan_*` fields",
              "d": "`index=cloud_billing` `sourcetype=\"aws:billing:cur\"`",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval sp_covered=if(isnotnull(savingsPlan_SavingsPlanARN) AND savingsPlan_SavingsPlanARN!=\"\", cost, 0)\n| eval eligible=if(lineItem_ProductCode IN (\"AmazonEC2\",\"AmazonECS\",\"AWSLambda\") AND match(lineItem_UsageType,\"(BoxUsage|SpotUsage|Fargate)\"), cost, 0)\n| stats sum(sp_covered) as sp_sum sum(eligible) as elig_sum by lineItem_UsageAccountId\n| eval coverage_pct=round(100*sp_sum/nullif(elig_sum,0),1)\n| where coverage_pct<60 AND elig_sum>500\n| sort elig_sum",
              "m": "(1) Ensure CUR includes Savings Plan ARN fields. (2) Tune the `eligible` filter to your contract (exclude Marketplace line items). (3) Target accounts below 60% coverage for rightsizing plus incremental SP buys.",
              "z": "Gauge (fleet-wide coverage), Table (accounts under target), Bar chart (eligible on-demand still uncovered).",
              "kfp": "RI use varies during workload shifts, planned migrations, or controlled refactoring.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS`, CUR with `savingsPlan_*` fields.\n• Ensure the following data sources are available: `index=cloud_billing` `sourcetype=\"aws:billing:cur\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure CUR includes Savings Plan ARN fields. (2) Tune the `eligible` filter to your contract (exclude Marketplace line items). (3) Target accounts below 60% coverage for rightsizing plus incremental SP buys.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval sp_covered=if(isnotnull(savingsPlan_SavingsPlanARN) AND savingsPlan_SavingsPlanARN!=\"\", cost, 0)\n| eval eligible=if(lineItem_ProductCode IN (\"AmazonEC2\",\"AmazonECS\",\"AWSLambda\") AND match(lineItem_UsageType,\"(BoxUsage|SpotUsage|Fargate)\"), cost, 0)\n| stats sum(sp_covered) as sp_sum sum(eligible) as elig_sum by lineItem_UsageAccountId\n| eval coverage_pct=round(100*sp_sum/nullif(elig_sum,0),1)\n| where coverage_pct<60 AND elig_sum>500\n| sort elig_sum\n```\n\nUnderstanding this SPL\n\n**Savings Plan Coverage of Eligible Compute Spend** — Savings Plans discount compute only when usage is eligible and within the commitment’s scope. Low coverage means you are still paying list price for large portions of EC2, Fargate, or Lambda despite owning a plan—direct savings opportunity on the next purchase or exchange.\n\nDocumented **Data sources**: `index=cloud_billing` `sourcetype=\"aws:billing:cur\"`. **App/TA** (typical add-on context): `Splunk Add-on for AWS`, CUR with `savingsPlan_*` fields. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sp_covered** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **eligible** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by lineItem_UsageAccountId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where coverage_pct<60 AND elig_sum>500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (fleet-wide coverage), Table (accounts under target), Bar chart (eligible on-demand still uncovered).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
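Step 2 of the UC-20.1.24 implementation says to exclude Marketplace line items from the eligible base, which the main search does not show. A hedged variant; the `bill_BillingEntity` field name is an assumption about how the CUR column `bill/BillingEntity` surfaces in your ingestion, so confirm it (or filter on your own product/seller fields) before relying on it:

```spl
index=cloud_billing sourcetype="aws:billing:cur" earliest=-30d
| search NOT bill_BillingEntity="AWS Marketplace"
| eval cost=tonumber(lineItem_UnblendedCost)
| eval sp_covered=if(isnotnull(savingsPlan_SavingsPlanARN) AND savingsPlan_SavingsPlanARN!="", cost, 0)
| eval eligible=if(in(lineItem_ProductCode,"AmazonEC2","AmazonECS","AWSLambda") AND match(lineItem_UsageType,"(BoxUsage|SpotUsage|Fargate)"), cost, 0)
| stats sum(sp_covered) as sp_sum sum(eligible) as elig_sum by lineItem_UsageAccountId
| eval coverage_pct=round(100*sp_sum/nullif(elig_sum,0),1)
```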
            {
              "i": "20.1.25",
              "n": "NAT Gateway and VPC Endpoint Egress Cost Concentration",
              "c": "medium",
              "f": "intermediate",
              "v": "Managed NAT and interface endpoints bill per GB processed; a single chatty microservice behind NAT can dominate networking spend. Ranking usage types tied to NAT Gateway and PrivateLink highlights candidates for VPC endpoint redesign, caching, or regional consolidation.",
              "t": "`Splunk Add-on for AWS`, VPC Flow Logs (optional correlation)",
              "d": "`index=cloud_billing` `sourcetype=\"aws:billing:cur\"`",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| search lineItem_ProductCode=\"AmazonEC2\" (lineItem_UsageType=\"NatGateway*\" OR lineItem_UsageType=\"VpcEndpoint*\")\n| eval resource=coalesce(lineItem_ResourceId, resourceId)\n| stats sum(cost) as nat_vpc_cost sum(lineItem_UsageAmount) as usage_qty by lineItem_UsageAccountId, lineItem_UsageType, resource\n| sort -nat_vpc_cost\n| head 50",
              "m": "(1) Tag NAT gateways and endpoints with owning application. (2) Join top resources to flow log aggregates if available. (3) Prioritize architecture reviews for the top 10 resources by trailing 30-day cost.",
              "z": "Table (top NAT/VPC-endpoint resources), Bar chart (cost by usage type), Pie chart (share by account).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS`, VPC Flow Logs (optional correlation).\n• Ensure the following data sources are available: `index=cloud_billing` `sourcetype=\"aws:billing:cur\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag NAT gateways and endpoints with owning application. (2) Join top resources to flow log aggregates if available. (3) Prioritize architecture reviews for the top 10 resources by trailing 30-day cost.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| search lineItem_ProductCode=\"AmazonEC2\" (lineItem_UsageType=\"NatGateway*\" OR lineItem_UsageType=\"VpcEndpoint*\")\n| eval resource=coalesce(lineItem_ResourceId, resourceId)\n| stats sum(cost) as nat_vpc_cost sum(lineItem_UsageAmount) as usage_qty by lineItem_UsageAccountId, lineItem_UsageType, resource\n| sort -nat_vpc_cost\n| head 50\n```\n\nUnderstanding this SPL\n\n**NAT Gateway and VPC Endpoint Egress Cost Concentration** — Managed NAT and interface endpoints bill per GB processed; a single chatty microservice behind NAT can dominate networking spend. Ranking usage types tied to NAT Gateway and PrivateLink highlights candidates for VPC endpoint redesign, caching, or regional consolidation.\n\nDocumented **Data sources**: `index=cloud_billing` `sourcetype=\"aws:billing:cur\"`. **App/TA** (typical add-on context): `Splunk Add-on for AWS`, VPC Flow Logs (optional correlation). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **resource** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by lineItem_UsageAccountId, lineItem_UsageType, resource** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top NAT/VPC-endpoint resources), Bar chart (cost by usage type), Pie chart (share by account).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
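Step 2 of the UC-20.1.25 implementation suggests joining the top NAT resources to flow-log aggregates. A hedged sketch: it assumes VPC Flow Logs ingested as `sourcetype="aws:cloudwatchlogs:vpcflow"` with `interface_id` and `bytes` fields, plus a hypothetical `nat_gateway_enis` lookup that maps each NAT gateway ID to its network interface; build that lookup from your inventory before using this.

```spl
index=cloud_billing sourcetype="aws:billing:cur" earliest=-30d lineItem_UsageType="NatGateway*"
| eval cost=tonumber(lineItem_UnblendedCost)
| stats sum(cost) as nat_cost by lineItem_ResourceId
| lookup nat_gateway_enis nat_gateway_id AS lineItem_ResourceId OUTPUT interface_id
| join type=left interface_id [
  search index=aws sourcetype="aws:cloudwatchlogs:vpcflow" earliest=-30d
  | stats sum(bytes) as flow_bytes by interface_id
]
| eval flow_gb=round(flow_bytes/1024/1024/1024, 1)
| sort -nat_cost
```

Keep the subsearch pre-aggregated as shown so the join stays within subsearch row limits.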
            {
              "i": "20.1.26",
              "n": "Spot Fleet Savings vs Interrupted Instance-Hours",
              "c": "medium",
              "f": "intermediate",
              "v": "Spot savings only matter if interruptions stay within SLOs. Correlating amortized spot spend with interruption counts from CloudTrail yields a simple dollars-saved-per-interruption metric so teams balance price and reliability across pools.",
              "t": "`Splunk Add-on for AWS` (CUR + CloudTrail)",
              "d": "`index=cloud_billing` `sourcetype=\"aws:billing:cur\"`, `index=aws` `sourcetype=\"aws:cloudtrail\"`",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| search lineItem_UsageType=\"*SpotUsage*\"\n| stats sum(cost) as spot_spend by lineItem_UsageAccountId\n| join type=left lineItem_UsageAccountId [\n  search index=aws sourcetype=\"aws:cloudtrail\" earliest=-30d eventName=\"SpotInstanceInterruption\"\n  | eval acct=coalesce(recipientAccountId, accountId)\n  | stats count as interruptions by acct\n  | rename acct as lineItem_UsageAccountId\n]\n| fillnull value=0 interruptions\n| eval savings_per_intr=if(interruptions>0, round(spend/interruptions,2), null())\n| sort -spot_spend",
              "m": "(1) Normalize account IDs across billing and CloudTrail. (2) Schedule weekly review for accounts with high interruption counts and rising spot spend. (3) Pair with capacity-optimized vs price-optimized fleet settings from ASG/Launch Template metadata if ingested.",
              "z": "Scatter plot (spot spend vs interruptions), Table (accounts with worst ratio), Single value (fleet spot savings MTD).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS` (CUR + CloudTrail).\n• Ensure the following data sources are available: `index=cloud_billing` `sourcetype=\"aws:billing:cur\"`, `index=aws` `sourcetype=\"aws:cloudtrail\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize account IDs across billing and CloudTrail. (2) Schedule weekly review for accounts with high interruption counts and rising spot spend. (3) Pair with capacity-optimized vs price-optimized fleet settings from ASG/Launch Template metadata if ingested.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\" earliest=-30d\n| eval cost=tonumber(lineItem_UnblendedCost)\n| search lineItem_UsageType=\"*SpotUsage*\"\n| stats sum(cost) as spot_spend by lineItem_UsageAccountId\n| join type=left lineItem_UsageAccountId [\n  search index=aws sourcetype=\"aws:cloudtrail\" earliest=-30d eventName=\"SpotInstanceInterruption\"\n  | eval acct=coalesce(recipientAccountId, accountId)\n  | stats count as interruptions by acct\n  | rename acct as lineItem_UsageAccountId\n]\n| fillnull value=0 interruptions\n| eval savings_per_intr=if(interruptions>0, round(spend/interruptions,2), null())\n| sort -spot_spend\n```\n\nUnderstanding this SPL\n\n**Spot Fleet Savings vs Interrupted Instance-Hours** — Spot savings only matter if interruptions stay within SLOs. Correlating amortized spot spend with interruption counts from CloudTrail yields a simple dollars-saved-per-interruption metric so teams balance price and reliability across pools.\n\nDocumented **Data sources**: `index=cloud_billing` `sourcetype=\"aws:billing:cur\"`, `index=aws` `sourcetype=\"aws:cloudtrail\"`. **App/TA** (typical add-on context): `Splunk Add-on for AWS` (CUR + CloudTrail). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by lineItem_UsageAccountId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **savings_per_intr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter plot (spot spend vs interruptions), Table (accounts with worst ratio), Single value (fleet spot savings MTD).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.1.27",
              "n": "Cross-Cloud Consolidated FinOps Executive Rollup",
              "c": "high",
              "f": "advanced",
              "v": "Enterprises rarely have a single pane for AWS, Azure, and GCP together. Normalizing daily spend into `cloud_provider`, `business_unit`, and `currency` enables portfolio-level trending and board-ready variance explanations without manual spreadsheet merges.",
              "t": "Multi-cloud billing TAs, optional `lookup fx_rates`",
              "d": "`index=finops` `sourcetype=\"cost:unified_daily\"`",
              "q": "index=finops sourcetype=\"cost:unified_daily\" earliest=-90d\n| eval spend_local=tonumber(daily_cost)\n| lookup fx_rates currency as billing_currency OUTPUT usd_per_unit\n| eval spend_usd=round(spend_local*usd_per_unit,2)\n| bin _time span=1mon\n| stats sum(spend_usd) as month_spend by _time, cloud_provider, business_unit\n| eventstats sum(month_spend) as portfolio_total by _time\n| eval pct_of_portfolio=round(100*month_spend/nullif(portfolio_total,0),1)\n| sort _time, -month_spend",
              "m": "(1) Build a daily scheduled search that writes `cost:unified_daily` from each cloud’s normalized sourcetype. (2) Maintain FX rates for non-USD billers. (3) Publish an executive dashboard with MoM variance annotations from budget lookup.",
              "z": "Column chart (monthly spend by cloud), Stacked bar (business unit mix), Table (MoM % change).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Multi-cloud billing TAs, optional `lookup fx_rates`.\n• Ensure the following data sources are available: `index=finops` `sourcetype=\"cost:unified_daily\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build a daily scheduled search that writes `cost:unified_daily` from each cloud’s normalized sourcetype. (2) Maintain FX rates for non-USD billers. (3) Publish an executive dashboard with MoM variance annotations from budget lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=finops sourcetype=\"cost:unified_daily\" earliest=-90d\n| eval spend_local=tonumber(daily_cost)\n| lookup fx_rates currency as billing_currency OUTPUT usd_per_unit\n| eval spend_usd=round(spend_local*usd_per_unit,2)\n| bin _time span=1mon\n| stats sum(spend_usd) as month_spend by _time, cloud_provider, business_unit\n| eventstats sum(month_spend) as portfolio_total by _time\n| eval pct_of_portfolio=round(100*month_spend/nullif(portfolio_total,0),1)\n| sort _time, -month_spend\n```\n\nUnderstanding this SPL\n\n**Cross-Cloud Consolidated FinOps Executive Rollup** — Enterprises rarely have a single pane for AWS, Azure, and GCP together. Normalizing daily spend into `cloud_provider`, `business_unit`, and `currency` enables portfolio-level trending and board-ready variance explanations without manual spreadsheet merges.\n\nDocumented **Data sources**: `index=finops` `sourcetype=\"cost:unified_daily\"`. **App/TA** (typical add-on context): Multi-cloud billing TAs, optional `lookup fx_rates`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: finops; **sourcetype**: cost:unified_daily. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=finops, sourcetype=\"cost:unified_daily\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **spend_local** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **spend_usd** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, cloud_provider, business_unit** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct_of_portfolio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (monthly spend by cloud), Stacked bar (business unit mix), Table (MoM % change).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 38.7,
          "qd": {
            "gold": 1,
            "silver": 2,
            "bronze": 24,
            "none": 0
          }
        },
        {
          "i": "20.2",
          "n": "Capacity Planning",
          "u": [
            {
              "i": "20.2.1",
              "n": "Compute Capacity Forecasting",
              "c": "high",
              "f": "intermediate",
              "v": "Running out of compute capacity causes provisioning failures and performance degradation. Forecasting when CPU and memory will be exhausted enables proactive procurement or scaling, avoiding emergency purchases at premium cost.",
              "t": "Infrastructure monitoring TAs (various), Splunk `predict` command",
              "d": "Host performance metrics (CPU, memory utilization)",
              "q": "index=infrastructure sourcetype=\"Perfmon:Processor\" OR sourcetype=\"cpu\"\n| timechart span=1d avg(cpu_load_percent) as avg_cpu by host\n| predict avg_cpu as predicted_cpu algorithm=LLP5 future_timespan=30\n| eval days_to_threshold=if('upper95(predicted_cpu)'>90, \"Within 30 days\", \"OK\")",
              "m": "Collect CPU and memory metrics from all hosts. Aggregate to daily averages for trending. Use Splunk `predict` with LLP5 for 30/60/90-day forecasting. Set alerts when forecast predicts >90% utilization within 30 days. Report quarterly on capacity headroom across infrastructure tiers.",
              "z": "Timechart (utilization with forecast overlay), Table (hosts approaching capacity), Gauge (current vs capacity), Single value (days to threshold).",
              "kfp": "Forecasts vary with seasonal load, new project ramps, or M&A integration spending.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Infrastructure monitoring TAs (various), Splunk `predict` command.\n• Ensure the following data sources are available: Host performance metrics (CPU, memory utilization).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect CPU and memory metrics from all hosts. Aggregate to daily averages for trending. Use Splunk `predict` with LLP5 for 30/60/90-day forecasting. Set alerts when forecast predicts >90% utilization within 30 days. Report quarterly on capacity headroom across infrastructure tiers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=infrastructure sourcetype=\"Perfmon:Processor\" OR sourcetype=\"cpu\"\n| timechart span=1d avg(cpu_load_percent) as avg_cpu by host\n| predict avg_cpu as predicted_cpu algorithm=LLP5 future_timespan=30\n| eval days_to_threshold=if('upper95(predicted_cpu)'>90, \"Within 30 days\", \"OK\")\n```\n\nUnderstanding this SPL\n\n**Compute Capacity Forecasting** — Running out of compute capacity causes provisioning failures and performance degradation. Forecasting when CPU and memory will be exhausted enables proactive procurement or scaling, avoiding emergency purchases at premium cost.\n\nDocumented **Data sources**: Host performance metrics (CPU, memory utilization). **App/TA** (typical add-on context): Infrastructure monitoring TAs (various), Splunk `predict` command. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: infrastructure; **sourcetype**: Perfmon:Processor, cpu. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=infrastructure, sourcetype=\"Perfmon:Processor\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Compute Capacity Forecasting**): predict avg_cpu as predicted_cpu algorithm=LLP5 future_timespan=30\n• `eval` defines or adjusts **days_to_threshold** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (utilization with forecast overlay), Table (hosts approaching capacity), Gauge (current vs capacity), Single value (days to threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "wv": "crawl",
              "mtype": [
                "Fault"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "20.2.2",
              "n": "Storage Growth Forecasting",
              "c": "high",
              "f": "intermediate",
              "v": "Storage procurement has lead times. Forecasting growth trends enables timely ordering of additional capacity, preventing the emergency of running out of storage space that causes application outages and data loss.",
              "t": "Storage TAs (various), Splunk `predict` command",
              "d": "Storage capacity metrics from SAN/NAS/HCI/cloud",
              "q": "index=storage sourcetype=\"storage:capacity\"\n| timechart span=1d latest(used_pct) as used_pct by storage_system\n| predict used_pct as predicted_pct algorithm=LLP5 future_timespan=90\n| eval forecast_90d='predicted_pct+90d'\n| where forecast_90d > 85",
              "m": "Collect storage capacity metrics daily from all storage platforms. Build growth rate trends per volume/pool. Use Splunk predict for 90-day forecasting. Alert when projected usage exceeds 85% within 90 days. Initiate procurement workflow based on projected needs.",
              "z": "Timechart (usage with forecast), Table (systems approaching capacity), Gauge (current utilization), Single value (days to threshold).",
              "kfp": "Forecasts vary with seasonal load, new project ramps, or M&A integration spending.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Storage TAs (various), Splunk `predict` command.\n• Ensure the following data sources are available: Storage capacity metrics from SAN/NAS/HCI/cloud.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect storage capacity metrics daily from all storage platforms. Build growth rate trends per volume/pool. Use Splunk predict for 90-day forecasting. Alert when projected usage exceeds 85% within 90 days. Initiate procurement workflow based on projected needs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"storage:capacity\"\n| timechart span=1d latest(used_pct) as used_pct by storage_system\n| predict used_pct as predicted_pct algorithm=LLP5 future_timespan=90\n| eval forecast_90d='predicted_pct+90d'\n| where forecast_90d > 85\n```\n\nUnderstanding this SPL\n\n**Storage Growth Forecasting** — Storage procurement has lead times. Forecasting growth trends enables timely ordering of additional capacity, preventing the emergency of running out of storage space that causes application outages and data loss.\n\nDocumented **Data sources**: Storage capacity metrics from SAN/NAS/HCI/cloud. **App/TA** (typical add-on context): Storage TAs (various), Splunk `predict` command. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: storage:capacity. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"storage:capacity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by storage_system** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Storage Growth Forecasting**): predict used_pct as predicted_pct algorithm=LLP5 future_timespan=90\n• `eval` defines or adjusts **forecast_90d** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where forecast_90d > 85` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (usage with forecast), Table (systems approaching capacity), Gauge (current utilization), Single value (days to threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "20.2.3",
              "n": "Network Bandwidth Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Network bandwidth constraints cause application latency and packet loss. Trending WAN/LAN utilization enables planned upgrades during maintenance windows rather than emergency bandwidth additions during business-impacting congestion.",
              "t": "Network monitoring TAs, SNMP",
              "d": "Interface utilization metrics (SNMP, streaming telemetry)",
              "q": "index=network sourcetype=\"snmp:interface\"\n| eval util_pct=round((ifHCInOctets_rate*8/ifHighSpeed/1000000)*100, 2)\n| timechart span=1h avg(util_pct) as avg_util, max(util_pct) as peak_util by interface_name\n| predict avg_util as predicted_util algorithm=LLP5 future_timespan=30",
              "m": "Collect interface utilization via SNMP every 5 minutes. Aggregate to hourly peaks and daily averages. Trend key WAN links and data center interconnects. Alert when trending projects >80% utilization within 30 days. Plan circuit upgrades based on business growth forecasts.",
              "z": "Timechart (bandwidth trending with forecast), Table (high-utilization links), Gauge (current peak utilization), Bar chart (top links by utilization).",
              "kfp": "Forecasts vary with seasonal load, new project ramps, or M&A integration spending.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Network monitoring TAs, SNMP.\n• Ensure the following data sources are available: Interface utilization metrics (SNMP, streaming telemetry).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect interface utilization via SNMP every 5 minutes. Aggregate to hourly peaks and daily averages. Trend key WAN links and data center interconnects. Alert when trending projects >80% utilization within 30 days. Plan circuit upgrades based on business growth forecasts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"snmp:interface\"\n| eval util_pct=round((ifHCInOctets_rate*8/ifHighSpeed/1000000)*100, 2)\n| timechart span=1h avg(util_pct) as avg_util, max(util_pct) as peak_util by interface_name\n| predict avg_util as predicted_util algorithm=LLP5 future_timespan=30\n```\n\nUnderstanding this SPL\n\n**Network Bandwidth Trending** — Network bandwidth constraints cause application latency and packet loss. Trending WAN/LAN utilization enables planned upgrades during maintenance windows rather than emergency bandwidth additions during business-impacting congestion.\n\nDocumented **Data sources**: Interface utilization metrics (SNMP, streaming telemetry). **App/TA** (typical add-on context): Network monitoring TAs, SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: snmp:interface. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"snmp:interface\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by interface_name** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Network Bandwidth Trending**): predict avg_util as predicted_util algorithm=LLP5 future_timespan=30\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (bandwidth trending with forecast), Table (high-utilization links), Gauge (current peak utilization), Bar chart (top links by utilization).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.4",
              "n": "License Utilization Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Software licenses represent significant IT spend. Tracking usage vs entitlements identifies under-licensed risks (compliance violations) and over-licensed waste (unnecessary spend). Right-sizing licenses can save 15-30% of software costs.",
              "t": "Custom scripted inputs, vendor license APIs",
              "d": "License server logs, vendor API data, entitlement records",
              "q": "index=licenses sourcetype=\"license:usage\"\n| stats latest(used_licenses) as used, latest(total_licenses) as total by product, vendor, license_type\n| eval utilization_pct=round((used/total)*100, 1)\n| eval status=case(utilization_pct>=95, \"At Risk\", utilization_pct>=80, \"High Use\", utilization_pct<50, \"Underutilized\", 1==1, \"Healthy\")\n| sort -utilization_pct\n| table product, vendor, license_type, used, total, utilization_pct, status",
              "m": "Collect license usage data from license servers (FlexLM, RLM) and vendor APIs. Maintain entitlement records in a lookup table. Track daily peak concurrent usage. Alert at 90% consumption (buy more) and flag <50% utilization (optimize). Generate quarterly true-up reports.",
              "z": "Gauge (license utilization), Table (license inventory with status), Bar chart (utilization by product), Timechart (usage trending).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted inputs, vendor license APIs.\n• Ensure the following data sources are available: License server logs, vendor API data, entitlement records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect license usage data from license servers (FlexLM, RLM) and vendor APIs. Maintain entitlement records in a lookup table. Track daily peak concurrent usage. Alert at 90% consumption (buy more) and flag <50% utilization (optimize). Generate quarterly true-up reports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=licenses sourcetype=\"license:usage\"\n| stats latest(used_licenses) as used, latest(total_licenses) as total by product, vendor, license_type\n| eval utilization_pct=round((used/total)*100, 1)\n| eval status=case(utilization_pct>=95, \"At Risk\", utilization_pct>=80, \"High Use\", utilization_pct<50, \"Underutilized\", 1==1, \"Healthy\")\n| sort -utilization_pct\n| table product, vendor, license_type, used, total, utilization_pct, status\n```\n\nUnderstanding this SPL\n\n**License Utilization Tracking** — Software licenses represent significant IT spend. Tracking usage vs entitlements identifies under-licensed risks (compliance violations) and over-licensed waste (unnecessary spend). Right-sizing licenses can save 15-30% of software costs.\n\nDocumented **Data sources**: License server logs, vendor API data, entitlement records. **App/TA** (typical add-on context): Custom scripted inputs, vendor license APIs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: licenses; **sourcetype**: license:usage. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=licenses, sourcetype=\"license:usage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by product, vendor, license_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **License Utilization Tracking**): table product, vendor, license_type, used, total, utilization_pct, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Gauge (license utilization), Table (license inventory with status), Bar chart (utilization by product), Timechart (usage trending).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.5",
              "n": "Right-Sizing Recommendations",
              "c": "medium",
              "f": "intermediate",
              "v": "Over-provisioned VMs and instances waste compute and money. Right-sizing analysis compares actual resource usage against allocated resources, identifying instances that can be downsized without impacting performance — typically saving 20-40%.",
              "t": "Cloud and virtualization TAs, performance metrics",
              "d": "Performance metrics vs resource allocation data",
              "q": "index=infrastructure (sourcetype=\"vmware:perf:cpu\" OR sourcetype=\"vmware:perf:mem\")\n| stats avg(cpu_usage_pct) as avg_cpu, p95(cpu_usage_pct) as p95_cpu, avg(mem_usage_pct) as avg_mem, p95(mem_usage_pct) as p95_mem by vm_name\n| lookup vm_allocation vm_name OUTPUT allocated_vcpu, allocated_mem_gb, instance_type\n| eval cpu_rightsized=case(p95_cpu<25, \"Downsize\", p95_cpu>90, \"Upsize\", 1==1, \"Right-sized\")\n| eval mem_rightsized=case(p95_mem<25, \"Downsize\", p95_mem>90, \"Upsize\", 1==1, \"Right-sized\")\n| where cpu_rightsized=\"Downsize\" OR mem_rightsized=\"Downsize\"\n| table vm_name, instance_type, allocated_vcpu, avg_cpu, p95_cpu, cpu_rightsized, allocated_mem_gb, avg_mem, p95_mem, mem_rightsized",
              "m": "Collect 30+ days of CPU and memory utilization per VM/instance. Compare P95 utilization against allocated resources. Generate right-sizing recommendations based on workload patterns. Exclude burst workloads from analysis. Calculate estimated savings per recommendation.",
              "z": "Table (right-sizing recommendations with savings), Bar chart (waste by team), Scatter plot (allocated vs used), Single value (total potential savings).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud and virtualization TAs, performance metrics.\n• Ensure the following data sources are available: Performance metrics vs resource allocation data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect 30+ days of CPU and memory utilization per VM/instance. Compare P95 utilization against allocated resources. Generate right-sizing recommendations based on workload patterns. Exclude burst workloads from analysis. Calculate estimated savings per recommendation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=infrastructure (sourcetype=\"vmware:perf:cpu\" OR sourcetype=\"vmware:perf:mem\")\n| stats avg(cpu_usage_pct) as avg_cpu, p95(cpu_usage_pct) as p95_cpu, avg(mem_usage_pct) as avg_mem, p95(mem_usage_pct) as p95_mem by vm_name\n| lookup vm_allocation vm_name OUTPUT allocated_vcpu, allocated_mem_gb, instance_type\n| eval cpu_rightsized=case(p95_cpu<25, \"Downsize\", p95_cpu>90, \"Upsize\", 1==1, \"Right-sized\")\n| eval mem_rightsized=case(p95_mem<25, \"Downsize\", p95_mem>90, \"Upsize\", 1==1, \"Right-sized\")\n| where cpu_rightsized=\"Downsize\" OR mem_rightsized=\"Downsize\"\n| table vm_name, instance_type, allocated_vcpu, avg_cpu, p95_cpu, cpu_rightsized, allocated_mem_gb, avg_mem, p95_mem, mem_rightsized\n```\n\nUnderstanding this SPL\n\n**Right-Sizing Recommendations** — Over-provisioned VMs and instances waste compute and money. Right-sizing analysis compares actual resource usage against allocated resources, identifying instances that can be downsized without impacting performance — typically saving 20-40%.\n\nDocumented **Data sources**: Performance metrics vs resource allocation data. **App/TA** (typical add-on context): Cloud and virtualization TAs, performance metrics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: infrastructure; **sourcetype**: vmware:perf:cpu, vmware:perf:mem. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=infrastructure, sourcetype=\"vmware:perf:cpu\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vm_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **cpu_rightsized** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mem_rightsized** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cpu_rightsized=\"Downsize\" OR mem_rightsized=\"Downsize\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Right-Sizing Recommendations**): table vm_name, instance_type, allocated_vcpu, avg_cpu, p95_cpu, cpu_rightsized, allocated_mem_gb, avg_mem, p95_mem, mem_rightsized\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (right-sizing recommendations with savings), Bar chart (waste by team), Scatter plot (allocated vs used), Single value (total potential savings).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.6",
              "n": "Database Growth Projection",
              "c": "medium",
              "f": "intermediate",
              "v": "Databases that run out of space cause application outages. Forecasting database growth enables proactive storage expansion, archive planning, and helps DBAs plan maintenance windows for data lifecycle operations.",
              "t": "Database monitoring TAs, `Splunk DB Connect`",
              "d": "Database size metrics, tablespace utilization",
              "q": "index=database sourcetype=\"db:capacity\"\n| timechart span=1d latest(db_size_gb) as current_size by db_name\n| predict current_size as predicted_size algorithm=LLP5 future_timespan=90\n| eval growth_rate_gb_per_day=round(('predicted_size+30d'-current_size)/30, 2)\n| where 'predicted_size+90d' > max_size*0.85",
              "m": "Collect database size metrics daily from all platforms. Track per-database and per-tablespace growth. Use Splunk predict for 90-day growth forecasting. Alert when projected size exceeds 85% of allocated space within 90 days. Plan archival or expansion based on projections.",
              "z": "Timechart (database size with forecast), Table (databases approaching limits), Gauge (current utilization), Bar chart (growth rate by database).",
              "kfp": "Forecasts vary with seasonal load, new project ramps, or M&A integration spending.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Database monitoring TAs, `Splunk DB Connect`.\n• Ensure the following data sources are available: Database size metrics, tablespace utilization.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect database size metrics daily from all platforms. Track per-database and per-tablespace growth. Use Splunk predict for 90-day growth forecasting. Alert when projected size exceeds 85% of allocated space within 90 days. Plan archival or expansion based on projections.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"db:capacity\"\n| timechart span=1d latest(db_size_gb) as current_size by db_name\n| predict current_size as predicted_size algorithm=LLP5 future_timespan=90\n| eval growth_rate_gb_per_day=round(('predicted_size+30d'-current_size)/30, 2)\n| where 'predicted_size+90d' > max_size*0.85\n```\n\nUnderstanding this SPL\n\n**Database Growth Projection** — Databases that run out of space cause application outages. Forecasting database growth enables proactive storage expansion, archive planning, and helps DBAs plan maintenance windows for data lifecycle operations.\n\nDocumented **Data sources**: Database size metrics, tablespace utilization. **App/TA** (typical add-on context): Database monitoring TAs, `Splunk DB Connect`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: db:capacity. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"db:capacity\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by db_name** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Database Growth Projection**): predict current_size as predicted_size algorithm=LLP5 future_timespan=90\n• `eval` defines or adjusts **growth_rate_gb_per_day** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where 'predicted_size+90d' > max_size*0.85` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (database size with forecast), Table (databases approaching limits), Gauge (current utilization), Bar chart (growth rate by database).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.7",
              "n": "Seasonal Capacity Modeling",
              "c": "medium",
              "f": "expert",
              "v": "Many businesses have predictable seasonal patterns (retail holidays, fiscal year-end, enrollment periods). Building seasonal capacity models ensures infrastructure scales proactively for peak periods rather than reactively during customer-impacting events.",
              "t": "Infrastructure TAs, Splunk MLTK (Machine Learning Toolkit)",
              "d": "Historical performance data (12+ months)",
              "q": "index=infrastructure sourcetype=\"perf:summary\"\n| eval day_of_year=strftime(_time, \"%j\")\n| eval week_of_year=strftime(_time, \"%V\")\n| stats avg(cpu_pct) as avg_cpu, avg(mem_pct) as avg_mem, avg(req_per_sec) as avg_rps by week_of_year\n| append [| inputlookup previous_year_seasonal_data]\n| stats avg(avg_cpu) as seasonal_cpu, avg(avg_mem) as seasonal_mem, avg(avg_rps) as seasonal_rps by week_of_year\n| eval next_year_projected=seasonal_rps*1.15",
              "m": "Collect 12+ months of performance data for seasonal analysis. Identify recurring patterns (daily, weekly, monthly, seasonal). Build seasonal baseline models using Splunk MLTK or predict. Apply growth factor to historical peaks for next-year projections. Plan capacity expansions 2-3 months ahead of predicted peaks.",
              "z": "Timechart (year-over-year seasonal overlay), Area chart (seasonal patterns), Table (peak week projections), Line chart (actual vs seasonal model).",
              "kfp": "Alerts during planned hardware refresh windows, before scheduled capacity adds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Infrastructure TAs, Splunk MLTK (Machine Learning Toolkit).\n• Ensure the following data sources are available: Historical performance data (12+ months).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect 12+ months of performance data for seasonal analysis. Identify recurring patterns (daily, weekly, monthly, seasonal). Build seasonal baseline models using Splunk MLTK or predict. Apply growth factor to historical peaks for next-year projections. Plan capacity expansions 2-3 months ahead of predicted peaks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=infrastructure sourcetype=\"perf:summary\"\n| eval day_of_year=strftime(_time, \"%j\")\n| eval week_of_year=strftime(_time, \"%V\")\n| stats avg(cpu_pct) as avg_cpu, avg(mem_pct) as avg_mem, avg(req_per_sec) as avg_rps by week_of_year\n| append [| inputlookup previous_year_seasonal_data]\n| stats avg(avg_cpu) as seasonal_cpu, avg(avg_mem) as seasonal_mem, avg(avg_rps) as seasonal_rps by week_of_year\n| eval next_year_projected=seasonal_rps*1.15\n```\n\nUnderstanding this SPL\n\n**Seasonal Capacity Modeling** — Many businesses have predictable seasonal patterns (retail holidays, fiscal year-end, enrollment periods). Building seasonal capacity models ensures infrastructure scales proactively for peak periods rather than reactively during customer-impacting events.\n\nDocumented **Data sources**: Historical performance data (12+ months). **App/TA** (typical add-on context): Infrastructure TAs, Splunk MLTK (Machine Learning Toolkit). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: infrastructure; **sourcetype**: perf:summary. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=infrastructure, sourcetype=\"perf:summary\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **day_of_year** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **week_of_year** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by week_of_year** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by week_of_year** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **next_year_projected** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (year-over-year seasonal overlay), Area chart (seasonal patterns), Table (peak week projections), Line chart (actual vs seasonal model).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.8",
              "n": "IP Address Space Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "IP address exhaustion causes provisioning failures for new VMs, containers, and services. Monitoring IP pool utilization across subnets and VLANs enables proactive network planning and avoids emergency re-addressing projects.",
              "t": "IPAM/DHCP TAs, custom scripted inputs",
              "d": "DHCP/IPAM data, subnet allocation records",
              "q": "index=network sourcetype=\"ipam:pool\"\n| stats latest(total_ips) as total, latest(allocated_ips) as allocated, latest(available_ips) as available by subnet, vlan, location\n| eval used_pct=round((allocated/total)*100, 1)\n| eval status=case(used_pct>=90, \"Critical\", used_pct>=75, \"Warning\", used_pct>=50, \"Normal\", 1==1, \"Low Use\")\n| sort -used_pct\n| table subnet, vlan, location, total, allocated, available, used_pct, status",
              "m": "Ingest IPAM/DHCP pool data daily. Track allocation rates per subnet, VLAN, and location. Alert at 75% warning and 90% critical utilization. Plan subnet expansions or new VLAN creation based on utilization trends. Report on unused allocations that could be reclaimed.",
              "z": "Table (subnet utilization), Bar chart (utilization by location), Heatmap (subnet usage map), Gauge (overall IP utilization).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IPAM/DHCP TAs, custom scripted inputs.\n• Ensure the following data sources are available: DHCP/IPAM data, subnet allocation records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest IPAM/DHCP pool data daily. Track allocation rates per subnet, VLAN, and location. Alert at 75% warning and 90% critical utilization. Plan subnet expansions or new VLAN creation based on utilization trends. Report on unused allocations that could be reclaimed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"ipam:pool\"\n| stats latest(total_ips) as total, latest(allocated_ips) as allocated, latest(available_ips) as available by subnet, vlan, location\n| eval used_pct=round((allocated/total)*100, 1)\n| eval status=case(used_pct>=90, \"Critical\", used_pct>=75, \"Warning\", used_pct>=50, \"Normal\", 1==1, \"Low Use\")\n| sort -used_pct\n| table subnet, vlan, location, total, allocated, available, used_pct, status\n```\n\nUnderstanding this SPL\n\n**IP Address Space Utilization** — IP address exhaustion causes provisioning failures for new VMs, containers, and services. Monitoring IP pool utilization across subnets and VLANs enables proactive network planning and avoids emergency re-addressing projects.\n\nDocumented **Data sources**: DHCP/IPAM data, subnet allocation records. **App/TA** (typical add-on context): IPAM/DHCP TAs, custom scripted inputs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: ipam:pool. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"ipam:pool\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by subnet, vlan, location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **IP Address Space Utilization**): table subnet, vlan, location, total, allocated, available, used_pct, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (subnet utilization), Bar chart (utilization by location), Heatmap (subnet usage map), Gauge (overall IP utilization).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.9",
              "n": "Cloud Commitment and Savings Plan Utilization",
              "c": "high",
              "f": "intermediate",
              "v": "Underutilized commitments or savings plans leave money on the table. Monitoring utilization and coverage supports optimization and renewal decisions.",
              "t": "AWS Cost Explorer, Azure Cost Management, CUDRI/savings plan data",
              "d": "Commitment usage, savings plan coverage, hourly coverage %",
              "q": "index=cloud_cost sourcetype=\"aws:savings_plan\"\n| stats latest(utilization_pct) as util_pct, latest(coverage_pct) as coverage by plan_id, commitment_type\n| where util_pct < 80 OR coverage < 70\n| table plan_id, commitment_type, util_pct, coverage",
              "m": "Ingest commitment and savings plan usage from cloud cost APIs. Alert when utilization or coverage drops below target. Report on commitment ROI and recommend size changes at renewal.",
              "z": "Gauge (utilization %), Table (plans below target), Line chart (coverage trend).",
              "kfp": "RI use varies during workload shifts, planned migrations, or controlled refactoring.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AWS Cost Explorer, Azure Cost Management, CUDRI/savings plan data.\n• Ensure the following data sources are available: Commitment usage, savings plan coverage, hourly coverage %.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest commitment and savings plan usage from cloud cost APIs. Alert when utilization or coverage drops below target. Report on commitment ROI and recommend size changes at renewal.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_cost sourcetype=\"aws:savings_plan\"\n| stats latest(utilization_pct) as util_pct, latest(coverage_pct) as coverage by plan_id, commitment_type\n| where util_pct < 80 OR coverage < 70\n| table plan_id, commitment_type, util_pct, coverage\n```\n\nUnderstanding this SPL\n\n**Cloud Commitment and Savings Plan Utilization** — Underutilized commitments or savings plans leave money on the table. Monitoring utilization and coverage supports optimization and renewal decisions.\n\nDocumented **Data sources**: Commitment usage, savings plan coverage, hourly coverage %. **App/TA** (typical add-on context): AWS Cost Explorer, Azure Cost Management, CUDRI/savings plan data. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_cost; **sourcetype**: aws:savings_plan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_cost, sourcetype=\"aws:savings_plan\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by plan_id, commitment_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where util_pct < 80 OR coverage < 70` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cloud Commitment and Savings Plan Utilization**): table plan_id, commitment_type, util_pct, coverage\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (utilization %), Table (plans below target), Line chart (coverage trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.10",
              "n": "Anomalous Cost Spike by Service or Account",
              "c": "high",
              "f": "intermediate",
              "v": "Sudden cost spikes may indicate runaway resources, misconfiguration, or abuse. Early detection limits bill shock and supports incident response.",
              "t": "Cloud cost TAs, billing exports",
              "d": "Daily cost by service, account, region",
              "q": "index=cloud_cost sourcetype=\"cost:daily\"\n| bin _time span=1d\n| stats sum(cost) as daily_cost by service, account_id, _time\n| eventstats avg(daily_cost) as avg_cost, stdev(daily_cost) as std_cost by service, account_id\n| where daily_cost > (avg_cost + (3*std_cost))\n| table service, account_id, daily_cost, avg_cost, std_cost",
              "m": "Ingest daily cost by dimensions. Compute baseline and standard deviation. Alert when cost exceeds 3× std dev. Report on top anomalies and trend. Correlate with resource and usage data.",
              "z": "Table (anomalous services/accounts), Line chart (cost vs baseline), Bar chart (spike magnitude).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud cost TAs, billing exports.\n• Ensure the following data sources are available: Daily cost by service, account, region.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest daily cost by dimensions. Compute baseline and standard deviation. Alert when cost exceeds 3× std dev. Report on top anomalies and trend. Correlate with resource and usage data.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_cost sourcetype=\"cost:daily\"\n| bin _time span=1d\n| stats sum(cost) as daily_cost by service, account_id, _time\n| eventstats avg(daily_cost) as avg_cost, stdev(daily_cost) as std_cost by service, account_id\n| where daily_cost > (avg_cost + (3*std_cost))\n| table service, account_id, daily_cost, avg_cost, std_cost\n```\n\nUnderstanding this SPL\n\n**Anomalous Cost Spike by Service or Account** — Sudden cost spikes may indicate runaway resources, misconfiguration, or abuse. Early detection limits bill shock and supports incident response.\n\nDocumented **Data sources**: Daily cost by service, account, region. **App/TA** (typical add-on context): Cloud cost TAs, billing exports. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_cost; **sourcetype**: cost:daily. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_cost, sourcetype=\"cost:daily\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by service, account_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by service, account_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where daily_cost > (avg_cost + (3*std_cost))` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Anomalous Cost Spike by Service or Account**): table service, account_id, daily_cost, avg_cost, std_cost\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalous services/accounts), Line chart (cost vs baseline), Bar chart (spike magnitude).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.11",
              "n": "Unused and Orphaned Resource Cost Attribution",
              "c": "medium",
              "f": "intermediate",
              "v": "Unused disks, idle instances, and orphaned snapshots drive waste. Attributing cost to these resources supports cleanup and chargeback.",
              "t": "Cloud resource inventory, cost allocation tags",
              "d": "Resource list with last used, cost, tags",
              "q": "index=cloud_cost sourcetype=\"resource:inventory\"\n| where (last_used_days > 30 OR state=\"stopped\") AND cost > 0\n| stats sum(cost) as waste_cost, count by resource_type, account_id\n| sort -waste_cost\n| table resource_type, account_id, count, waste_cost",
              "m": "Combine resource inventory (with last-used or state) and cost data. Flag idle or stopped resources older than threshold. Report on waste by type and account. Drive cleanup campaigns.",
              "z": "Table (waste by type and account), Bar chart (waste cost by resource type), Single value (total waste).",
              "kfp": "Idle resources for DR standby capacity, scheduled batch processors, or planned future-use staging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cloud resource inventory, cost allocation tags.\n• Ensure the following data sources are available: Resource list with last used, cost, tags.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCombine resource inventory (with last-used or state) and cost data. Flag idle or stopped resources older than threshold. Report on waste by type and account. Drive cleanup campaigns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_cost sourcetype=\"resource:inventory\"\n| where (last_used_days > 30 OR state=\"stopped\") AND cost > 0\n| stats sum(cost) as waste_cost, count by resource_type, account_id\n| sort -waste_cost\n| table resource_type, account_id, count, waste_cost\n```\n\nUnderstanding this SPL\n\n**Unused and Orphaned Resource Cost Attribution** — Unused disks, idle instances, and orphaned snapshots drive waste. Attributing cost to these resources supports cleanup and chargeback.\n\nDocumented **Data sources**: Resource list with last used, cost, tags. **App/TA** (typical add-on context): Cloud resource inventory, cost allocation tags. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_cost; **sourcetype**: resource:inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_cost, sourcetype=\"resource:inventory\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where (last_used_days > 30 OR state=\"stopped\") AND cost > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by resource_type, account_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Unused and Orphaned Resource Cost Attribution**): table resource_type, account_id, count, waste_cost\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (waste by type and account), Bar chart (waste cost by resource type), Single value (total waste).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.12",
              "n": "License and Subscription Consumption vs Entitlement",
              "c": "high",
              "f": "intermediate",
              "v": "Over-consumption causes true-up or compliance issues; under-consumption wastes spend. Monitoring usage vs entitlement supports optimization and renewal.",
              "t": "License management, SaaS usage APIs",
              "d": "Entitlement count, consumed count, by product and pool",
              "q": "index=licenses sourcetype=\"license:usage\"\n| stats latest(entitled) as entitled, latest(consumed) as consumed by product, pool\n| eval usage_pct=round((consumed/entitled)*100, 1)\n| where usage_pct > 100 OR usage_pct < 50\n| table product, pool, entitled, consumed, usage_pct",
              "m": "Ingest entitlement and consumption from license or SaaS tools. Alert when consumption exceeds entitlement or falls below target. Report on utilization by product and pool. Use for right-sizing at renewal.",
              "z": "Table (over/under utilized), Gauge (usage %), Bar chart (consumed vs entitled).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: License management, SaaS usage APIs.\n• Ensure the following data sources are available: Entitlement count, consumed count, by product and pool.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest entitlement and consumption from license or SaaS tools. Alert when consumption exceeds entitlement or falls below target. Report on utilization by product and pool. Use for right-sizing at renewal.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=licenses sourcetype=\"license:usage\"\n| stats latest(entitled) as entitled, latest(consumed) as consumed by product, pool\n| eval usage_pct=round((consumed/entitled)*100, 1)\n| where usage_pct > 100 OR usage_pct < 50\n| table product, pool, entitled, consumed, usage_pct\n```\n\nUnderstanding this SPL\n\n**License and Subscription Consumption vs Entitlement** — Over-consumption causes true-up or compliance issues; under-consumption wastes spend. Monitoring usage vs entitlement supports optimization and renewal.\n\nDocumented **Data sources**: Entitlement count, consumed count, by product and pool. **App/TA** (typical add-on context): License management, SaaS usage APIs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: licenses; **sourcetype**: license:usage. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=licenses, sourcetype=\"license:usage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by product, pool** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **usage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where usage_pct > 100 OR usage_pct < 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **License and Subscription Consumption vs Entitlement**): table product, pool, entitled, consumed, usage_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (over/under utilized), Gauge (usage %), Bar chart (consumed vs entitled).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.13",
              "n": "Cost Forecast vs Budget and Variance Alert",
              "c": "critical",
              "f": "intermediate",
              "v": "Forecast over budget risks overspend; large variance indicates model or usage change. Monitoring forecast vs budget supports proactive control and reforecasting.",
              "t": "Cost forecasting tool, budget data",
              "d": "Monthly forecast, budget, actuals to date",
              "q": "index=cloud_cost sourcetype=\"cost:forecast\"\n| stats latest(forecast_total) as forecast, latest(budget) as budget, latest(actual_ytd) as actual by account_id, month\n| eval variance_pct=round((forecast-budget)/budget*100, 1)\n| where variance_pct > 10 OR variance_pct < -20\n| table account_id, month, forecast, budget, actual, variance_pct",
              "m": "Ingest forecast and budget. Compute variance. Alert when forecast exceeds budget by threshold or variance is large. Report on forecast accuracy and budget burn rate. Integrate with finance.",
              "z": "Table (accounts over budget), Gauge (variance %), Line chart (forecast vs budget trend).",
              "kfp": "Forecasts vary with seasonal load, new project ramps, or M&A integration spending.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cost forecasting tool, budget data.\n• Ensure the following data sources are available: Monthly forecast, budget, actuals to date.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest forecast and budget. Compute variance. Alert when forecast exceeds budget by threshold or variance is large. Report on forecast accuracy and budget burn rate. Integrate with finance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_cost sourcetype=\"cost:forecast\"\n| stats latest(forecast_total) as forecast, latest(budget) as budget, latest(actual_ytd) as actual by account_id, month\n| eval variance_pct=round((forecast-budget)/budget*100, 1)\n| where variance_pct > 10 OR variance_pct < -20\n| table account_id, month, forecast, budget, actual, variance_pct\n```\n\nUnderstanding this SPL\n\n**Cost Forecast vs Budget and Variance Alert** — Forecast over budget risks overspend; large variance indicates model or usage change. Monitoring forecast vs budget supports proactive control and reforecasting.\n\nDocumented **Data sources**: Monthly forecast, budget, actuals to date. **App/TA** (typical add-on context): Cost forecasting tool, budget data. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_cost; **sourcetype**: cost:forecast. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_cost, sourcetype=\"cost:forecast\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by account_id, month** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **variance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where variance_pct > 10 OR variance_pct < -20` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cost Forecast vs Budget and Variance Alert**): table account_id, month, forecast, budget, actual, variance_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (accounts over budget), Gauge (variance %), Line chart (forecast vs budget trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.14",
              "n": "Software License Compliance Audit",
              "c": "medium",
              "f": "intermediate",
              "v": "Software license consumption vs. purchased (VMware, Oracle, Microsoft, etc.) visibility prevents compliance violations and identifies over-licensed waste. Tracking concurrent usage against entitlements supports true-up planning and right-sizing at renewal.",
              "t": "Custom (license server API, CMDB integration)",
              "d": "License server data (concurrent_in_use, total_licensed), CMDB software inventory",
              "q": "index=licenses (sourcetype=\"license:server\" OR sourcetype=\"license:usage\")\n| eval concurrent_in_use=coalesce(concurrent_in_use, in_use, used_count)\n| eval total_licensed=coalesce(total_licensed, total_entitled, license_count)\n| bin _time span=1d\n| stats latest(concurrent_in_use) as used, latest(total_licensed) as total by product, vendor, license_server, _time\n| eval utilization_pct=round((used/total)*100, 1)\n| eval status=case(utilization_pct>=95, \"At Risk\", utilization_pct>=80, \"High\", utilization_pct<40, \"Over-licensed\", 1==1, \"Healthy\")\n| lookup cmdb_software_inventory product OUTPUT cost_per_seat, cost_center\n| eval waste_monthly=if(utilization_pct<40, (total-used)*cost_per_seat, 0)\n| sort -utilization_pct\n| table product, vendor, used, total, utilization_pct, status, waste_monthly",
              "m": "Ingest license server data via API or log collection (FlexLM, RLM, VMware vCenter, Oracle LMS, Microsoft VLSC). Map CMDB software inventory for entitlement and cost. Track daily peak concurrent usage. Alert at 90% consumption (compliance risk) and flag <40% utilization (over-licensed). Generate quarterly true-up reports for VMware, Oracle, Microsoft, and other enterprise software.",
              "z": "Gauge (overall license utilization), Table (license inventory with status), Bar chart (utilization by product), Timechart (usage trending).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (license server API, CMDB integration).\n• Ensure the following data sources are available: License server data (concurrent_in_use, total_licensed), CMDB software inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest license server data via API or log collection (FlexLM, RLM, VMware vCenter, Oracle LMS, Microsoft VLSC). Map CMDB software inventory for entitlement and cost. Track daily peak concurrent usage. Alert at 90% consumption (compliance risk) and flag <40% utilization (over-licensed). Generate quarterly true-up reports for VMware, Oracle, Microsoft, and other enterprise software.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=licenses (sourcetype=\"license:server\" OR sourcetype=\"license:usage\")\n| eval concurrent_in_use=coalesce(concurrent_in_use, in_use, used_count)\n| eval total_licensed=coalesce(total_licensed, total_entitled, license_count)\n| bin _time span=1d\n| stats latest(concurrent_in_use) as used, latest(total_licensed) as total by product, vendor, license_server, _time\n| eval utilization_pct=round((used/total)*100, 1)\n| eval status=case(utilization_pct>=95, \"At Risk\", utilization_pct>=80, \"High\", utilization_pct<40, \"Over-licensed\", 1==1, \"Healthy\")\n| lookup cmdb_software_inventory product OUTPUT cost_per_seat, cost_center\n| eval waste_monthly=if(utilization_pct<40, (total-used)*cost_per_seat, 0)\n| sort -utilization_pct\n| table product, vendor, used, total, utilization_pct, status, waste_monthly\n```\n\nUnderstanding this SPL\n\n**Software License Compliance Audit** — Software license consumption vs. purchased (VMware, Oracle, Microsoft, etc.) visibility prevents compliance violations and identifies over-licensed waste. Tracking concurrent usage against entitlements supports true-up planning and right-sizing at renewal.\n\nDocumented **Data sources**: License server data (concurrent_in_use, total_licensed), CMDB software inventory. **App/TA** (typical add-on context): Custom (license server API, CMDB integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: licenses; **sourcetype**: license:server, license:usage. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=licenses, sourcetype=\"license:server\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **concurrent_in_use** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **total_licensed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by product, vendor, license_server, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **waste_monthly** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Software License Compliance Audit**): table product, vendor, used, total, utilization_pct, status, waste_monthly\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (overall license utilization), Table (license inventory with status), Bar chart (utilization by product), Timechart (usage trending).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.15",
              "n": "Power Consumption Cost Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "kWh to cost mapping for data center charge-back enables accurate cost allocation and energy efficiency tracking. Understanding power consumption trends supports capacity planning and identifies cost reduction opportunities.",
              "t": "Custom (PDU SNMP, BMS integration, utility billing)",
              "d": "PDU power readings (kWh), utility rate lookup",
              "q": "index=infrastructure (sourcetype=\"snmp:pdu\" OR sourcetype=\"pdu:power\")\n| eval kwh=coalesce(kwh, energy_kwh, power_kwh)\n| where isnotnull(kwh)\n| bin _time span=1d\n| stats sum(kwh) as daily_kwh by pdu_id, rack, zone, _time\n| lookup utility_rate_lookup zone OUTPUT rate_per_kwh\n| eval daily_cost=round(daily_kwh*rate_per_kwh, 2)\n| timechart span=1d sum(daily_kwh) as total_kwh, sum(daily_cost) as total_cost by zone\n| eval cost_per_kwh=if(total_kwh>0, round(total_cost/total_kwh, 4), 0)",
              "m": "Collect PDU power readings via SNMP (e.g., OID 1.3.6.1.4.1.2.6.223.8.2.2.1.2 for energy) or BMS integration. Maintain utility rate lookup by zone/tier. Aggregate kWh daily per rack, zone, or cost center. Map kWh to cost for charge-back reports. Alert on anomalous power spikes. Build dashboards for energy cost trending and charge-back allocation.",
              "z": "Timechart (kWh and cost trending), Table (cost by rack/zone), Bar chart (top consumers by cost), Single value (monthly power cost).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom (PDU SNMP, BMS integration, utility billing).\n• Ensure the following data sources are available: PDU power readings (kWh), utility rate lookup.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect PDU power readings via SNMP (e.g., OID 1.3.6.1.4.1.2.6.223.8.2.2.1.2 for energy) or BMS integration. Maintain utility rate lookup by zone/tier. Aggregate kWh daily per rack, zone, or cost center. Map kWh to cost for charge-back reports. Alert on anomalous power spikes. Build dashboards for energy cost trending and charge-back allocation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=infrastructure (sourcetype=\"snmp:pdu\" OR sourcetype=\"pdu:power\")\n| eval kwh=coalesce(kwh, energy_kwh, power_kwh)\n| where isnotnull(kwh)\n| bin _time span=1d\n| stats sum(kwh) as daily_kwh by pdu_id, rack, zone, _time\n| lookup utility_rate_lookup zone OUTPUT rate_per_kwh\n| eval daily_cost=round(daily_kwh*rate_per_kwh, 2)\n| timechart span=1d sum(daily_kwh) as total_kwh, sum(daily_cost) as total_cost by zone\n| eval cost_per_kwh=if(total_kwh>0, round(total_cost/total_kwh, 4), 0)\n```\n\nUnderstanding this SPL\n\n**Power Consumption Cost Trending** — kWh to cost mapping for data center charge-back enables accurate cost allocation and energy efficiency tracking. Understanding power consumption trends supports capacity planning and identifies cost reduction opportunities.\n\nDocumented **Data sources**: PDU power readings (kWh), utility rate lookup. **App/TA** (typical add-on context): Custom (PDU SNMP, BMS integration, utility billing). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: infrastructure; **sourcetype**: snmp:pdu, pdu:power. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=infrastructure, sourcetype=\"snmp:pdu\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **kwh** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(kwh)` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by pdu_id, rack, zone, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **daily_cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by zone** — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **cost_per_kwh** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (kWh and cost trending), Table (cost by rack/zone), Bar chart (top consumers by cost), Single value (monthly power cost).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic",
                "snmp_pdu"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.16",
              "n": "Cloud Committed-Use Discount Coverage",
              "c": "medium",
              "f": "advanced",
              "v": "Reserved instance and savings plan coverage percentage monitoring ensures committed-use discounts are utilized. Unused commitments waste money; gaps in coverage mean paying on-demand rates. Optimizing coverage typically saves 30–50% vs on-demand.",
              "t": "`Splunk Add-on for AWS`, `Splunk Add-on for Microsoft Cloud Services`, `Splunk Add-on for Google Cloud Platform`",
              "d": "AWS CUR (reservation utilization), Azure Advisor reservation recommendations, GCP commitment usage",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval is_reserved=if(isnotnull(reservation_ReservationARN) AND reservation_ReservationARN!=\"\", 1, 0)\n| eval is_on_demand=if(lineItem_UsageType LIKE \"%BoxUsage%\" AND is_reserved=0, 1, 0)\n| bin _time span=1d\n| stats sum(eval(if(is_reserved=1, cost, 0))) as ri_cost, sum(eval(if(is_on_demand=1, cost, 0))) as on_demand_cost, sum(cost) as total_cost by lineItem_UsageAccountId, _time\n| eval coverage_pct=round((ri_cost/total_cost)*100, 1)\n| eval uncovered_cost=on_demand_cost\n| where total_cost > 0\n| stats avg(coverage_pct) as avg_coverage, sum(uncovered_cost) as uncovered by lineItem_UsageAccountId\n| eval status=case(avg_coverage<70, \"Low Coverage\", uncovered>1000, \"High Uncovered Cost\", 1==1, \"OK\")\n| where status!=\"OK\"\n| sort -uncovered\n| table lineItem_UsageAccountId, avg_coverage, uncovered, status",
              "m": "Ingest AWS CUR with reservation fields, Azure Cost Management reservation utilization, and GCP commitment usage. Calculate coverage as (RI/savings plan cost) / (total compute cost). Alert when coverage drops below 70% or uncovered on-demand spend exceeds threshold. Report on unused commitment hours and recommend size changes. Correlate with Azure Advisor and AWS Cost Explorer recommendations for optimization.",
              "z": "Gauge (coverage percentage), Timechart (coverage trend with forecast), Table (accounts with low coverage), Bar chart (uncovered cost by account).",
              "kfp": "RI use varies during workload shifts, planned migrations, or controlled refactoring.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [Splunk Add-on for Google Cloud Platform](https://splunkbase.splunk.com/app/3088)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS`, `Splunk Add-on for Microsoft Cloud Services`, `Splunk Add-on for Google Cloud Platform`.\n• Ensure the following data sources are available: AWS CUR (reservation utilization), Azure Advisor reservation recommendations, GCP commitment usage.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest AWS CUR with reservation fields, Azure Cost Management reservation utilization, and GCP commitment usage. Calculate coverage as (RI/savings plan cost) / (total compute cost). Alert when coverage drops below 70% or uncovered on-demand spend exceeds threshold. Report on unused commitment hours and recommend size changes. Correlate with Azure Advisor and AWS Cost Explorer recommendations for optimization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\"\n| eval cost=tonumber(lineItem_UnblendedCost)\n| eval is_reserved=if(isnotnull(reservation_ReservationARN) AND reservation_ReservationARN!=\"\", 1, 0)\n| eval is_on_demand=if(lineItem_UsageType LIKE \"%BoxUsage%\" AND is_reserved=0, 1, 0)\n| bin _time span=1d\n| stats sum(eval(if(is_reserved=1, cost, 0))) as ri_cost, sum(eval(if(is_on_demand=1, cost, 0))) as on_demand_cost, sum(cost) as total_cost by lineItem_UsageAccountId, _time\n| eval coverage_pct=round((ri_cost/total_cost)*100, 1)\n| eval uncovered_cost=on_demand_cost\n| where total_cost > 0\n| stats avg(coverage_pct) as avg_coverage, sum(uncovered_cost) as uncovered by lineItem_UsageAccountId\n| eval status=case(avg_coverage<70, \"Low Coverage\", uncovered>1000, \"High Uncovered Cost\", 1==1, \"OK\")\n| where status!=\"OK\"\n| sort -uncovered\n| table lineItem_UsageAccountId, avg_coverage, uncovered, status\n```\n\nUnderstanding this SPL\n\n**Cloud Committed-Use Discount Coverage** — Reserved instance and savings plan coverage percentage monitoring ensures committed-use discounts are utilized. Unused commitments waste money; gaps in coverage mean paying on-demand rates. Optimizing coverage typically saves 30–50% vs on-demand.\n\nDocumented **Data sources**: AWS CUR (reservation utilization), Azure Advisor reservation recommendations, GCP commitment usage. **App/TA** (typical add-on context): `Splunk Add-on for AWS`, `Splunk Add-on for Microsoft Cloud Services`, `Splunk Add-on for Google Cloud Platform`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_reserved** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_on_demand** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by lineItem_UsageAccountId, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uncovered_cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where total_cost > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by lineItem_UsageAccountId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where status!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Cloud Committed-Use Discount Coverage**): table lineItem_UsageAccountId, avg_coverage, uncovered, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (coverage percentage), Timechart (coverage trend with forecast), Table (accounts with low coverage), Bar chart (uncovered cost by account).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "gcp"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.17",
              "n": "Storage Capacity Forecast by Tier",
              "c": "high",
              "f": "intermediate",
              "v": "Forecast days-to-full per storage tier (flash vs capacity) — extends pool forecasting (UC-20.2.2) with **tier** dimension for procurement.",
              "t": "Storage TA, SNMP",
              "d": "`storage:capacity` with `tier`",
              "q": "index=storage sourcetype=\"storage:capacity\" earliest=-90d\n| timechart span=1d latest(used_pct) as used_pct by storage_system, tier\n| predict used_pct algorithm=LLP5 future_timespan=60",
              "m": "Map array vendor tiers. Alert when 60d forecast crosses 90% for any tier.",
              "z": "Line chart (used % by tier), Table (at-risk systems), Gauge.",
              "kfp": "Forecasts vary with seasonal load, new project ramps, or M&A integration spending.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Storage TA, SNMP.\n• Ensure the following data sources are available: `storage:capacity` with `tier`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap array vendor tiers. Alert when 60d forecast crosses 90% for any tier.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"storage:capacity\" earliest=-90d\n| timechart span=1d latest(used_pct) as used_pct by storage_system, tier\n| predict used_pct algorithm=LLP5 future_timespan=60\n```\n\nUnderstanding this SPL\n\n**Storage Capacity Forecast by Tier** — Forecast days-to-full per storage tier (flash vs capacity) — extends pool forecasting (UC-20.2.2) with **tier** dimension for procurement.\n\nDocumented **Data sources**: `storage:capacity` with `tier`. **App/TA** (typical add-on context): Storage TA, SNMP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: storage:capacity. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"storage:capacity\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by storage_system, tier** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Storage Capacity Forecast by Tier**): predict used_pct algorithm=LLP5 future_timespan=60\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (used % by tier), Table (at-risk systems), Gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.18",
              "n": "Compute Cluster Scaling Headroom",
              "c": "high",
              "f": "beginner",
              "v": "Remaining vCPU and RAM in VMware/vSphere clusters and AWS ASG max — for **provisioning headroom** beyond host forecast (UC-20.2.1).",
              "t": "vCenter TA, AWS API",
              "d": "`vmware:cluster`, `aws:compute:capacity`",
              "q": "index=virtualization sourcetype=\"vmware:cluster\" earliest=-1h\n| eval headroom_pct=round(100*(cpu_capacity_mhz-cpu_used_mhz)/cpu_capacity_mhz,1)\n| where headroom_pct < 15\n| table cluster_name, headroom_pct, cpu_used_mhz, cpu_capacity_mhz",
              "m": "Poll cluster aggregate capacity. Alert when headroom <15% or policy threshold. Trigger scale-out or new hardware.",
              "z": "Gauge (headroom %), Table (clusters at risk), Bar chart (by datacenter).",
              "kfp": "Alerts during planned hardware refresh windows, before scheduled capacity adds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: vCenter TA, AWS API.\n• Ensure the following data sources are available: `vmware:cluster`, `aws:compute:capacity`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll cluster aggregate capacity. Alert when headroom <15% or policy threshold. Trigger scale-out or new hardware.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=\"vmware:cluster\" earliest=-1h\n| eval headroom_pct=round(100*(cpu_capacity_mhz-cpu_used_mhz)/cpu_capacity_mhz,1)\n| where headroom_pct < 15\n| table cluster_name, headroom_pct, cpu_used_mhz, cpu_capacity_mhz\n```\n\nUnderstanding this SPL\n\n**Compute Cluster Scaling Headroom** — Remaining vCPU and RAM in VMware/vSphere clusters and AWS ASG max — for **provisioning headroom** beyond host forecast (UC-20.2.1).\n\nDocumented **Data sources**: `vmware:cluster`, `aws:compute:capacity`. **App/TA** (typical add-on context): vCenter TA, AWS API. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: vmware:cluster. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=\"vmware:cluster\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **headroom_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where headroom_pct < 15` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Compute Cluster Scaling Headroom**): table cluster_name, headroom_pct, cpu_used_mhz, cpu_capacity_mhz\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (headroom %), Table (clusters at risk), Bar chart (by datacenter).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "vmware"
              ],
              "em": [
                "vmware_vcenter"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.19",
              "n": "Network Bandwidth Utilization Trending (Site Interconnect)",
              "c": "medium",
              "f": "beginner",
              "v": "Site-to-site and DCI link utilization trend with 95th percentile — complements interface forecast (UC-20.2.3).",
              "t": "SNMP, NetFlow summary",
              "d": "`snmp:interface`, `netflow:site`",
              "q": "index=network sourcetype=\"netflow:site\" earliest=-30d\n| timechart span=1d perc95(utilization_pct) as p95_util by link_name\n| where p95_util > 75\n| table link_name, p95_util",
              "m": "Aggregate flows per DCI link daily. Alert on sustained high p95. Plan circuit upgrades.",
              "z": "Line chart (p95 util by link), Table (saturated links), Heatmap.",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP, NetFlow summary.\n• Ensure the following data sources are available: `snmp:interface`, `netflow:site`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate flows per DCI link daily. Alert on sustained high p95. Plan circuit upgrades.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"netflow:site\" earliest=-30d\n| timechart span=1d perc95(utilization_pct) as p95_util by link_name\n| where p95_util > 75\n| table link_name, p95_util\n```\n\nUnderstanding this SPL\n\n**Network Bandwidth Utilization Trending (Site Interconnect)** — Site-to-site and DCI link utilization trend with 95th percentile — complements interface forecast (UC-20.2.3).\n\nDocumented **Data sources**: `snmp:interface`, `netflow:site`. **App/TA** (typical add-on context): SNMP, NetFlow summary. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: netflow:site. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"netflow:site\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by link_name** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where p95_util > 75` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Network Bandwidth Utilization Trending (Site Interconnect)**): table link_name, p95_util\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95 util by link), Table (saturated links), Heatmap.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "netflow",
                "snmp"
              ],
              "em": [
                "netflow_netflow",
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.20",
              "n": "Seasonal Capacity Planning Baseline",
              "c": "medium",
              "f": "advanced",
              "v": "YoY same-week CPU/RPS comparison for retail/event peaks — extends UC-20.2.7 with **automated peak week** flagging.",
              "t": "`perf:summary`, MLTK optional",
              "d": "Weekly rollups per app",
              "q": "index=infrastructure sourcetype=\"perf:summary\" earliest=-400d\n| eval week=strftime(_time,\"%V\")\n| stats avg(cpu_pct) as cpu by app, week\n| eventstats avg(cpu) as fleet_week_avg by week\n| where cpu > fleet_week_avg*1.25\n| table app, week, cpu, fleet_week_avg",
              "m": "Simplify with `timewrap` if available. Use for pre-peak scale plans.",
              "z": "Line chart (YoY overlay), Table (apps with growth), Calendar heatmap.",
              "kfp": "Alerts during planned hardware refresh windows, before scheduled capacity adds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `perf:summary`, MLTK optional.\n• Ensure the following data sources are available: Weekly rollups per app.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSimplify with `timewrap` if available. Use for pre-peak scale plans.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=infrastructure sourcetype=\"perf:summary\" earliest=-400d\n| eval week=strftime(_time,\"%V\")\n| stats avg(cpu_pct) as cpu by app, week\n| eventstats avg(cpu) as fleet_week_avg by week\n| where cpu > fleet_week_avg*1.25\n| table app, week, cpu, fleet_week_avg\n```\n\nUnderstanding this SPL\n\n**Seasonal Capacity Planning Baseline** — YoY same-week CPU/RPS comparison for retail/event peaks — extends UC-20.2.7 with **automated peak week** flagging.\n\nDocumented **Data sources**: Weekly rollups per app. **App/TA** (typical add-on context): `perf:summary`, MLTK optional. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: infrastructure; **sourcetype**: perf:summary. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=infrastructure, sourcetype=\"perf:summary\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **week** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by app, week** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by week** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cpu > fleet_week_avg*1.25` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Seasonal Capacity Planning Baseline**): table app, week, cpu, fleet_week_avg\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (YoY overlay), Table (apps with growth), Calendar heatmap.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.21",
              "n": "CPU and Memory Right-Sizing (Host and VM)",
              "c": "medium",
              "f": "intermediate",
              "v": "Host-level overcommit risk and VM-level downsize candidates — pairs with VM view (UC-20.2.5).",
              "t": "`Splunk_TA_vmware`, `Splunk_TA_windows` (Hyper-V Perfmon)",
              "d": "`vmware:host:perf`",
              "q": "index=virtualization sourcetype=\"vmware:host:perf\" earliest=-7d\n| stats avg(cpu_used_pct) as cpu, avg(mem_used_pct) as mem by host\n| eval overcommit_risk=if(cpu>85 OR mem>90,1,0)\n| where overcommit_risk=1\n| table host, cpu, mem",
              "m": "Combine with cluster headroom (UC-20.2.18). Alert on chronic host saturation.",
              "z": "Table (hot hosts), Heatmap (host × day), Gauge.",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk_TA_windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk_TA_vmware`, `Splunk_TA_windows` (Hyper-V Perfmon).\n• Ensure the following data sources are available: `vmware:host:perf`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCombine with cluster headroom (UC-20.2.18). Alert on chronic host saturation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=\"vmware:host:perf\" earliest=-7d\n| stats avg(cpu_used_pct) as cpu, avg(mem_used_pct) as mem by host\n| eval overcommit_risk=if(cpu>85 OR mem>90,1,0)\n| where overcommit_risk=1\n| table host, cpu, mem\n```\n\nUnderstanding this SPL\n\n**CPU and Memory Right-Sizing (Host and VM)** — Host-level overcommit risk and VM-level downsize candidates — pairs with VM view (UC-20.2.5).\n\nDocumented **Data sources**: `vmware:host:perf`. **App/TA** (typical add-on context): `Splunk_TA_vmware`, `Splunk_TA_windows` (Hyper-V Perfmon). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: vmware:host:perf. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=\"vmware:host:perf\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **overcommit_risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overcommit_risk=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CPU and Memory Right-Sizing (Host and VM)**): table host, cpu, mem\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hot hosts), Heatmap (host × day), Gauge.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hyperv",
                "vmware",
                "windows"
              ],
              "em": [
                "vmware_ta_vmware"
              ],
              "sapp": [
                {
                  "name": "IT Essentials Work",
                  "id": 5403,
                  "url": "https://splunkbase.splunk.com/app/5403",
                  "desc": "Modern dashboards for server performance, capacity planning and alerting across Unix/Linux and Windows",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Splunk App for Unix and Linux",
                      "id": 273,
                      "url": "https://splunkbase.splunk.com/app/273"
                    },
                    {
                      "name": "Splunk App for Windows Infrastructure",
                      "id": 1680,
                      "url": "https://splunkbase.splunk.com/app/1680"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Microsoft Windows",
                "id": 742,
                "url": "https://splunkbase.splunk.com/app/742"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.22",
              "n": "Disk IOPS Saturation Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Time-series of `iops_utilization_pct` or latency vs IOP limit for SAN/NVMe — storage performance bottleneck before capacity full.",
              "t": "Storage TA",
              "d": "`storage:performance`, array metrics",
              "q": "index=storage sourcetype=\"storage:performance\" earliest=-7d\n| timechart span=1h avg(iops_util_pct) as iops_util avg(read_latency_ms) as lat by volume_id\n| where iops_util>80 OR lat>10",
              "m": "Map vendor IOPS cap. Alert on sustained >80% util or latency SLO breach. Scale pool or move workload.",
              "z": "Line chart (IOPS util and latency), Table (hot volumes), Single value (volumes in saturation).",
              "kfp": "Alerts during planned hardware refresh windows, before scheduled capacity adds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Storage TA.\n• Ensure the following data sources are available: `storage:performance`, array metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap vendor IOPS cap. Alert on sustained >80% util or latency SLO breach. Scale pool or move workload.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=storage sourcetype=\"storage:performance\" earliest=-7d\n| timechart span=1h avg(iops_util_pct) as iops_util avg(read_latency_ms) as lat by volume_id\n| where iops_util>80 OR lat>10\n```\n\nUnderstanding this SPL\n\n**Disk IOPS Saturation Trending** — Time-series of `iops_utilization_pct` or latency vs IOP limit for SAN/NVMe — storage performance bottleneck before capacity full.\n\nDocumented **Data sources**: `storage:performance`, array metrics. **App/TA** (typical add-on context): Storage TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: storage; **sourcetype**: storage:performance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=storage, sourcetype=\"storage:performance\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by volume_id** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where iops_util>80 OR lat>10` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (IOPS util and latency), Table (hot volumes), Single value (volumes in saturation).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.23",
              "n": "VM Sprawl Detection",
              "c": "medium",
              "f": "beginner",
              "v": "Count of powered-on VMs per app owner vs license and growth rate — finds unchecked provisioning.",
              "t": "vCenter inventory",
              "d": "`vmware:inv:vm`",
              "q": "index=virtualization sourcetype=\"vmware:inv:vm\" earliest=-1d\n| where power_state=\"poweredOn\"\n| stats count as vm_count by folder, owner\n| eventstats avg(vm_count) as fleet_avg\n| where vm_count > fleet_avg*3 AND vm_count>50\n| sort -vm_count",
              "m": "Map `owner` from folder or tags. Review quarterly for consolidation. Correlate with cost (UC-20.1).",
              "z": "Bar chart (VM count by owner), Table (sprawl candidates), Line chart (VM growth).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: vCenter inventory.\n• Ensure the following data sources are available: `vmware:inv:vm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap `owner` from folder or tags. Review quarterly for consolidation. Correlate with cost (UC-20.1).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=\"vmware:inv:vm\" earliest=-1d\n| where power_state=\"poweredOn\"\n| stats count as vm_count by folder, owner\n| eventstats avg(vm_count) as fleet_avg\n| where vm_count > fleet_avg*3 AND vm_count>50\n| sort -vm_count\n```\n\nUnderstanding this SPL\n\n**VM Sprawl Detection** — Count of powered-on VMs per app owner vs license and growth rate — finds unchecked provisioning.\n\nDocumented **Data sources**: `vmware:inv:vm`. **App/TA** (typical add-on context): vCenter inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: vmware:inv:vm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=\"vmware:inv:vm\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where power_state=\"poweredOn\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by folder, owner** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where vm_count > fleet_avg*3 AND vm_count>50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (VM count by owner), Table (sprawl candidates), Line chart (VM growth).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [
                "vmware_vcenter"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.24",
              "n": "Cloud Cost Anomaly with Seasonal Decomposition (MLTK)",
              "c": "critical",
              "f": "advanced",
              "v": "Cloud costs follow predictable weekly and monthly cycles — batch jobs on weekends, month-end reporting spikes, quarterly compliance scans. Static thresholds generate noise during normal peaks and miss slow-growth anomalies during quiet periods. By decomposing cost into seasonal, trend, and residual components with MLTK, this detection flags true anomalies against the expected cost shape for that specific day and hour.",
              "t": "Splunk Machine Learning Toolkit (MLTK), Splunk Add-on for AWS / Azure / GCP",
              "d": "`index=cloud sourcetype=aws:billing` or `sourcetype=azure:costmanagement` or `sourcetype=gcp:billing`",
              "q": "index=cloud sourcetype IN (\"aws:billing\",\"azure:costmanagement\",\"gcp:billing\")\n| bin _time span=1d\n| stats sum(cost) as daily_cost by _time, service_name, account_id\n| eval dow=strftime(_time, \"%A\"), dom=strftime(_time, \"%d\")\n| fit StateSpaceForecast daily_cost holdback=0 forecast_k=14 conf_interval=95 by service_name into cost_seasonal_model\n| eval residual=daily_cost - 'predicted(daily_cost)'\n| eval pct_deviation=round(100*residual/nullif('predicted(daily_cost)', 0), 1)\n| where abs(pct_deviation) > 25 OR daily_cost > 'upper95(predicted(daily_cost))'\n| table _time, service_name, account_id, daily_cost, \"predicted(daily_cost)\", pct_deviation\n| sort -pct_deviation",
              "m": "Ingest cloud billing data daily from CUR (AWS), Cost Management exports (Azure), or BigQuery billing export (GCP). Train StateSpaceForecast models per service that capture weekly seasonality (weekend dips) and monthly patterns (month-end peaks). Forecast 14 days ahead with 95% confidence intervals. Alert FinOps teams when actual cost exceeds the upper confidence bound or deviates more than 25% from the seasonal prediction. Include account-level drill-down to identify the specific workload driving the anomaly. Retrain models monthly. Pair with UC-20.1.13 for budget variance context and UC-20.1.18 for orphaned resource identification when cost spikes correlate with new resources.",
              "z": "Area chart (actual vs forecast with confidence band), Table (anomalous services), Bar chart (top cost deviations by service).",
              "kfp": "Forecasts vary with seasonal load, new project ramps, or M&A integration spending.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK), Splunk Add-on for AWS / Azure / GCP.\n• Ensure the following data sources are available: `index=cloud sourcetype=aws:billing` or `sourcetype=azure:costmanagement` or `sourcetype=gcp:billing`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest cloud billing data daily from CUR (AWS), Cost Management exports (Azure), or BigQuery billing export (GCP). Train StateSpaceForecast models per service that capture weekly seasonality (weekend dips) and monthly patterns (month-end peaks). Forecast 14 days ahead with 95% confidence intervals. Alert FinOps teams when actual cost exceeds the upper confidence bound or deviates more than 25% from the seasonal prediction. Include account-level drill-down to identify the specific workload drivin…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype IN (\"aws:billing\",\"azure:costmanagement\",\"gcp:billing\")\n| bin _time span=1d\n| stats sum(cost) as daily_cost by _time, service_name, account_id\n| eval dow=strftime(_time, \"%A\"), dom=strftime(_time, \"%d\")\n| fit StateSpaceForecast daily_cost holdback=0 forecast_k=14 conf_interval=95 by service_name into cost_seasonal_model\n| eval residual=daily_cost - 'predicted(daily_cost)'\n| eval pct_deviation=round(100*residual/nullif('predicted(daily_cost)', 0), 1)\n| where abs(pct_deviation) > 25 OR daily_cost > 'upper95(predicted(daily_cost))'\n| table _time, service_name, account_id, daily_cost, \"predicted(daily_cost)\", pct_deviation\n| sort -pct_deviation\n```\n\nUnderstanding this SPL\n\n**Cloud Cost Anomaly with Seasonal Decomposition (MLTK)** — Cloud costs follow predictable weekly and monthly cycles — batch jobs on weekends, month-end reporting spikes, quarterly compliance scans. Static thresholds generate noise during normal peaks and miss slow-growth anomalies during quiet periods. By decomposing cost into seasonal, trend, and residual components with MLTK, this detection flags true anomalies against the expected cost shape for that specific day and hour.\n\nDocumented **Data sources**: `index=cloud sourcetype=aws:billing` or `sourcetype=azure:costmanagement` or `sourcetype=gcp:billing`. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK), Splunk Add-on for AWS / Azure / GCP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service_name, account_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dow** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Cloud Cost Anomaly with Seasonal Decomposition (MLTK)**): fit StateSpaceForecast daily_cost holdback=0 forecast_k=14 conf_interval=95 by service_name into cost_seasonal_model\n• `eval` defines or adjusts **residual** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **pct_deviation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where abs(pct_deviation) > 25 OR daily_cost > 'upper95(predicted(daily_cost))'` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cloud Cost Anomaly with Seasonal Decomposition (MLTK)**): table _time, service_name, account_id, daily_cost, \"predicted(daily_cost)\", pct_deviation\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (actual vs forecast with confidence band), Table (anomalous services), Bar chart (top cost deviations by service).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.25",
              "n": "Capacity Exhaustion Prediction with Confidence Intervals (MLTK)",
              "c": "critical",
              "f": "advanced",
              "v": "Linear extrapolation of resource growth is dangerously simplistic — it misses seasonal acceleration, step changes from new workloads, and growth rate changes after migrations. Probabilistic forecasting with MLTK provides a range of exhaustion dates (best/expected/worst case) so capacity teams can plan procurement and migrations with appropriate urgency.",
              "t": "Splunk Machine Learning Toolkit (MLTK)",
              "d": "`index=infra sourcetype=vmware:perf:cpu` or `sourcetype=linux:cpu` or `index=k8s sourcetype=kube:metrics`",
              "q": "index=infra sourcetype IN (\"vmware:perf:cpu\",\"linux:cpu\",\"nix:df\")\n| bin _time span=1d\n| stats avg(pctUsed) as avg_utilization by _time, host, resource_type\n| fit StateSpaceForecast avg_utilization holdback=0 forecast_k=90 conf_interval=95 by host into capacity_forecast_model\n| where 'upper95(predicted(avg_utilization))' > 85\n| eval days_to_85=round(('upper95(predicted(avg_utilization))' - avg_utilization) / (('predicted(avg_utilization)' - avg_utilization) / 90), 0)\n| where days_to_85 > 0 AND days_to_85 < 90\n| table host, resource_type, avg_utilization, \"predicted(avg_utilization)\", \"upper95(predicted(avg_utilization))\", days_to_85\n| sort days_to_85",
              "m": "Collect daily average utilization metrics for CPU, memory, disk, and network across hosts, VMs, and containers. Train StateSpaceForecast models per host-resource combination that learn growth trends and seasonal patterns. Forecast 90 days ahead with 95% confidence intervals. Flag resources where the upper confidence bound crosses the saturation threshold (85%) within the forecast window. Provide three timeline estimates: optimistic (lower bound), expected (point forecast), and pessimistic (upper bound). Integrate with CMDB for asset lifecycle context. Alert capacity planning teams monthly with prioritized lists sorted by days-to-exhaustion.",
              "z": "Area chart (utilization forecast with confidence band), Table (resources approaching saturation), Gantt chart (exhaustion timelines by host).",
              "kfp": "Forecasts vary with seasonal load, new project ramps, or M&A integration spending.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (MLTK).\n• Ensure the following data sources are available: `index=infra sourcetype=vmware:perf:cpu` or `sourcetype=linux:cpu` or `index=k8s sourcetype=kube:metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect daily average utilization metrics for CPU, memory, disk, and network across hosts, VMs, and containers. Train StateSpaceForecast models per host-resource combination that learn growth trends and seasonal patterns. Forecast 90 days ahead with 95% confidence intervals. Flag resources where the upper confidence bound crosses the saturation threshold (85%) within the forecast window. Provide three timeline estimates: optimistic (lower bound), expected (point forecast), and pessimistic (upper…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=infra sourcetype IN (\"vmware:perf:cpu\",\"linux:cpu\",\"nix:df\")\n| bin _time span=1d\n| stats avg(pctUsed) as avg_utilization by _time, host, resource_type\n| fit StateSpaceForecast avg_utilization holdback=0 forecast_k=90 conf_interval=95 by host into capacity_forecast_model\n| where 'upper95(predicted(avg_utilization))' > 85\n| eval days_to_85=round(('upper95(predicted(avg_utilization))' - avg_utilization) / (('predicted(avg_utilization)' - avg_utilization) / 90), 0)\n| where days_to_85 > 0 AND days_to_85 < 90\n| table host, resource_type, avg_utilization, \"predicted(avg_utilization)\", \"upper95(predicted(avg_utilization))\", days_to_85\n| sort days_to_85\n```\n\nUnderstanding this SPL\n\n**Capacity Exhaustion Prediction with Confidence Intervals (MLTK)** — Linear extrapolation of resource growth is dangerously simplistic — it misses seasonal acceleration, step changes from new workloads, and growth rate changes after migrations. Probabilistic forecasting with MLTK provides a range of exhaustion dates (best/expected/worst case) so capacity teams can plan procurement and migrations with appropriate urgency.\n\nDocumented **Data sources**: `index=infra sourcetype=vmware:perf:cpu` or `sourcetype=linux:cpu` or `index=k8s sourcetype=kube:metrics`. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (MLTK). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: infra.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=infra. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, resource_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Capacity Exhaustion Prediction with Confidence Intervals (MLTK)**): fit StateSpaceForecast avg_utilization holdback=0 forecast_k=90 conf_interval=95 by host into capacity_forecast_model\n• Filters the current rows with `where 'upper95(predicted(avg_utilization))' > 85` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_to_85** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_85 > 0 AND days_to_85 < 90` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Capacity Exhaustion Prediction with Confidence Intervals (MLTK)**): table host, resource_type, avg_utilization, \"predicted(avg_utilization)\", \"upper95(predicted(avg_utilization))\", days_to_85\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (utilization forecast with confidence band), Table (resources approaching saturation), Gantt chart (exhaustion timelines by host).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.26",
              "n": "Kubernetes Namespace Resource Quota Pressure",
              "c": "high",
              "f": "intermediate",
              "v": "Hard quotas stop deployments during peak traffic, causing revenue-impacting outages. Tracking requested versus hard limits for CPU, memory, and persistent volume claims per namespace lets platform teams expand quotas or reclaim unused requests before developers hit the wall.",
              "t": "Kubernetes metrics TA, Prometheus exporter, OpenTelemetry",
              "d": "`index=kubernetes` `sourcetype=\"kube:quota\"`",
              "q": "index=kubernetes sourcetype=\"kube:quota\" earliest=-1h\n| eval cpu_req=tonumber(coalesce(cpu_requests_cores, cpu_requests))\n| eval cpu_lim=tonumber(coalesce(cpu_hard_quota_cores, cpu_hard_limit))\n| eval mem_req=tonumber(coalesce(memory_requests_gib, mem_requests_gib))\n| eval mem_lim=tonumber(coalesce(memory_hard_quota_gib, mem_hard_limit_gib))\n| eval cpu_headroom_pct=round(100*(cpu_lim-cpu_req)/nullif(cpu_lim,0),1)\n| eval mem_headroom_pct=round(100*(mem_lim-mem_req)/nullif(mem_lim,0),1)\n| where cpu_headroom_pct<15 OR mem_headroom_pct<15\n| stats latest(cpu_headroom_pct) as cpu_head latest(mem_headroom_pct) as mem_head by cluster, namespace\n| sort cluster, cpu_head",
              "m": "(1) Ingest kube-state-metrics quota objects or a vendor CMDB export with limits and allocated requests. (2) Alert when headroom stays below 15% for six hours. (3) Pair with cost data (UC-20.1.16) before raising quotas on idle namespaces.",
              "z": "Heatmap (namespace × resource headroom), Table (clusters at risk), Gauge (worst namespace headroom).",
              "kfp": "Alerts during planned hardware refresh windows, before scheduled capacity adds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Kubernetes metrics TA, Prometheus exporter, OpenTelemetry.\n• Ensure the following data sources are available: `index=kubernetes` `sourcetype=\"kube:quota\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest kube-state-metrics quota objects or a vendor CMDB export with limits and allocated requests. (2) Alert when headroom stays below 15% for six hours. (3) Pair with cost data (UC-20.1.16) before raising quotas on idle namespaces.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kubernetes sourcetype=\"kube:quota\" earliest=-1h\n| eval cpu_req=tonumber(coalesce(cpu_requests_cores, cpu_requests))\n| eval cpu_lim=tonumber(coalesce(cpu_hard_quota_cores, cpu_hard_limit))\n| eval mem_req=tonumber(coalesce(memory_requests_gib, mem_requests_gib))\n| eval mem_lim=tonumber(coalesce(memory_hard_quota_gib, mem_hard_limit_gib))\n| eval cpu_headroom_pct=round(100*(cpu_lim-cpu_req)/nullif(cpu_lim,0),1)\n| eval mem_headroom_pct=round(100*(mem_lim-mem_req)/nullif(mem_lim,0),1)\n| where cpu_headroom_pct<15 OR mem_headroom_pct<15\n| stats latest(cpu_headroom_pct) as cpu_head latest(mem_headroom_pct) as mem_head by cluster, namespace\n| sort cluster, cpu_head\n```\n\nUnderstanding this SPL\n\n**Kubernetes Namespace Resource Quota Pressure** — Hard quotas stop deployments during peak traffic, causing revenue-impacting outages. Tracking requested versus hard limits for CPU, memory, and persistent volume claims per namespace lets platform teams expand quotas or reclaim unused requests before developers hit the wall.\n\nDocumented **Data sources**: `index=kubernetes` `sourcetype=\"kube:quota\"`. **App/TA** (typical add-on context): Kubernetes metrics TA, Prometheus exporter, OpenTelemetry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kubernetes; **sourcetype**: kube:quota. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kubernetes, sourcetype=\"kube:quota\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cpu_req** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cpu_lim** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mem_req** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mem_lim** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cpu_headroom_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mem_headroom_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cpu_headroom_pct<15 OR mem_headroom_pct<15` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster, namespace** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (namespace × resource headroom), Table (clusters at risk), Gauge (worst namespace headroom).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes",
                "opentelemetry",
                "prometheus"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.27",
              "n": "Object Storage Bucket Growth Forecast",
              "c": "high",
              "f": "intermediate",
              "v": "Object storage growth from logs, images, and backups drives recurring invoices. Forecasting gigabytes by bucket supports lifecycle policies, intelligent tiering, and archive capacity before ingest jobs fail.",
              "t": "`Splunk Add-on for AWS`, Azure Storage metrics, GCP monitoring export",
              "d": "`index=cloud_storage` `sourcetype=\"aws:s3:bucket_metrics\"`",
              "q": "index=cloud_storage sourcetype=\"aws:s3:bucket_metrics\" earliest=-90d\n| eval gb=tonumber(coalesce(size_bytes, BucketSizeBytes))/1024/1024/1024\n| bin _time span=1d\n| stats latest(gb) as used_gb by _time, bucket_name, account_id\n| timechart span=1d latest(used_gb) as used_gb by bucket_name\n| predict used_gb as forecast_gb algorithm=LLP5 future_timespan=60\n| where 'forecast_gb+60d' > used_gb*1.25\n| table bucket_name, used_gb, 'forecast_gb+60d'",
              "m": "(1) Collect bucket size daily from CloudWatch `BucketSizeBytes`, Storage Lens, or vendor export. (2) Exclude buckets with heavy lifecycle churn unless using MLTK for seasonality. (3) Alert owners when the 60-day forecast exceeds 125% of current size.",
              "z": "Line chart (actual versus forecast per bucket), Table (fastest-growing buckets), Single value (total forecasted terabytes).",
              "kfp": "Forecasts vary with seasonal load, new project ramps, or M&A integration spending.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for AWS`, Azure Storage metrics, GCP monitoring export.\n• Ensure the following data sources are available: `index=cloud_storage` `sourcetype=\"aws:s3:bucket_metrics\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Collect bucket size daily from CloudWatch `BucketSizeBytes`, Storage Lens, or vendor export. (2) Exclude buckets with heavy lifecycle churn unless using MLTK for seasonality. (3) Alert owners when the 60-day forecast exceeds 125% of current size.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_storage sourcetype=\"aws:s3:bucket_metrics\" earliest=-90d\n| eval gb=tonumber(coalesce(size_bytes, BucketSizeBytes))/1024/1024/1024\n| bin _time span=1d\n| stats latest(gb) as used_gb by _time, bucket_name, account_id\n| timechart span=1d latest(used_gb) as used_gb by bucket_name\n| predict used_gb as forecast_gb algorithm=LLP5 future_timespan=60\n| where 'forecast_gb+60d' > used_gb*1.25\n| table bucket_name, used_gb, 'forecast_gb+60d'\n```\n\nUnderstanding this SPL\n\n**Object Storage Bucket Growth Forecast** — Object storage growth from logs, images, and backups drives recurring invoices. Forecasting gigabytes by bucket supports lifecycle policies, intelligent tiering, and archive capacity before ingest jobs fail.\n\nDocumented **Data sources**: `index=cloud_storage` `sourcetype=\"aws:s3:bucket_metrics\"`. **App/TA** (typical add-on context): `Splunk Add-on for AWS`, Azure Storage metrics, GCP monitoring export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_storage; **sourcetype**: aws:s3:bucket_metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_storage, sourcetype=\"aws:s3:bucket_metrics\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, bucket_name, account_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by bucket_name** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Object Storage Bucket Growth Forecast**): predict used_gb as forecast_gb algorithm=LLP5 future_timespan=60\n• Filters the current rows with `where 'forecast_gb+60d' > used_gb*1.25` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Object Storage Bucket Growth Forecast**): table bucket_name, used_gb, 'forecast_gb+60d'\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (actual versus forecast per bucket), Table (fastest-growing buckets), Single value (total forecasted terabytes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity",
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.28",
              "n": "Database Datafile Size and Autogrow Trending",
              "c": "high",
              "f": "intermediate",
              "v": "SQL Server and Oracle datafiles that autogrow frequently signal unexpected data loads or missing archival. Trending file size and growth events avoids emergency disk extensions during month-end loads.",
              "t": "`Splunk DB Connect`, database monitoring scripts",
              "d": "`index=database` `sourcetype=\"db:filegrowth\"`",
              "q": "index=database sourcetype=\"db:filegrowth\" earliest=-60d\n| eval size_gb=tonumber(coalesce(size_gb, current_size_gb))\n| eval ev=lower(coalesce(event_type, message, \"\"))\n| eval grew=if(match(ev, \"(grow|extend|autogrow)\"),1,0)\n| bin _time span=1d\n| stats latest(size_gb) as size_gb sum(grew) as grow_events by _time, db_name, logical_filename\n| sort 0 db_name, logical_filename, _time\n| streamstats global=f window=2 earliest(size_gb) as prev_gb by db_name, logical_filename\n| eval daily_delta_gb=round(size_gb-prev_gb,2)\n| where daily_delta_gb>5 OR grow_events>3\n| table _time, db_name, logical_filename, size_gb, grow_events, daily_delta_gb",
              "m": "(1) Push file-level metrics and autogrow events from DB Connect or a DBA agent. (2) Map `logical_filename` to disk mount for infrastructure correlation. (3) Page when daily delta exceeds policy (example five GB) or grow_events exceeds three in one day.",
              "z": "Timechart (size by database), Table (large deltas and autogrow counts), Bar chart (top databases by growth rate).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk DB Connect`, database monitoring scripts.\n• Ensure the following data sources are available: `index=database` `sourcetype=\"db:filegrowth\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Push file-level metrics and autogrow events from DB Connect or a DBA agent. (2) Map `logical_filename` to disk mount for infrastructure correlation. (3) Page when daily delta exceeds policy (example five GB) or grow_events exceeds three in one day.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"db:filegrowth\" earliest=-60d\n| eval size_gb=tonumber(coalesce(size_gb, current_size_gb))\n| eval ev=lower(coalesce(event_type, message, \"\"))\n| eval grew=if(match(ev, \"(grow|extend|autogrow)\"),1,0)\n| bin _time span=1d\n| stats latest(size_gb) as size_gb sum(grew) as grow_events by _time, db_name, logical_filename\n| sort 0 db_name, logical_filename, _time\n| streamstats global=f window=2 earliest(size_gb) as prev_gb by db_name, logical_filename\n| eval daily_delta_gb=round(size_gb-prev_gb,2)\n| where daily_delta_gb>5 OR grow_events>3\n| table _time, db_name, logical_filename, size_gb, grow_events, daily_delta_gb\n```\n\nUnderstanding this SPL\n\n**Database Datafile Size and Autogrow Trending** — SQL Server and Oracle datafiles that autogrow frequently signal unexpected data loads or missing archival. Trending file size and growth events avoids emergency disk extensions during month-end loads.\n\nDocumented **Data sources**: `index=database` `sourcetype=\"db:filegrowth\"`. **App/TA** (typical add-on context): `Splunk DB Connect`, database monitoring scripts. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: db:filegrowth. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"db:filegrowth\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **size_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ev** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **grew** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, db_name, logical_filename** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by db_name, logical_filename** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **daily_delta_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where daily_delta_gb>5 OR grow_events>3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Database Datafile Size and Autogrow Trending**): table _time, db_name, logical_filename, size_gb, grow_events, daily_delta_gb\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (size by database), Table (large deltas and autogrow counts), Bar chart (top databases by growth rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.29",
              "n": "Site-to-Site VPN Tunnel Bandwidth Headroom",
              "c": "high",
              "f": "beginner",
              "v": "VPN tunnels to cloud and partners have fixed negotiated throughput. Sustained high utilization causes latency and drops during batch windows. Headroom reporting triggers circuit upgrades or traffic engineering before outages.",
              "t": "SNMP, `Splunk Add-on for AWS` (VPN metrics), SD-WAN TA",
              "d": "`index=network` `sourcetype=\"vpn:tunnel\"`",
              "q": "index=network sourcetype=\"vpn:tunnel\" earliest=-7d\n| eval bps=tonumber(coalesce(ingress_bps, in_bps))+tonumber(coalesce(egress_bps, out_bps))\n| eval cap=tonumber(coalesce(negotiated_bandwidth_bps, tunnel_capacity_bps))\n| eval util_pct=round(100*bps/nullif(cap,0),2)\n| bin _time span=5m\n| stats perc95(util_pct) as p95_util by tunnel_id, site_name\n| eval headroom_pct=round(100-p95_util,1)\n| where headroom_pct < 20\n| sort headroom_pct",
              "m": "(1) Ingest per-tunnel throughput from SD-WAN or cloud VPN metrics. (2) Store negotiated capacity per tunnel in a lookup. (3) Alert when seven-day P95 utilization leaves less than twenty percent headroom.",
              "z": "Gauge (headroom percent), Line chart (utilization trend), Table (tunnels sorted by risk).",
              "kfp": "Alerts during planned hardware refresh windows, before scheduled capacity adds.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP, `Splunk Add-on for AWS` (VPN metrics), SD-WAN TA.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"vpn:tunnel\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest per-tunnel throughput from SD-WAN or cloud VPN metrics. (2) Store negotiated capacity per tunnel in a lookup. (3) Alert when seven-day P95 utilization leaves less than twenty percent headroom.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"vpn:tunnel\" earliest=-7d\n| eval bps=tonumber(coalesce(ingress_bps, in_bps))+tonumber(coalesce(egress_bps, out_bps))\n| eval cap=tonumber(coalesce(negotiated_bandwidth_bps, tunnel_capacity_bps))\n| eval util_pct=round(100*bps/nullif(cap,0),2)\n| bin _time span=5m\n| stats perc95(util_pct) as p95_util by tunnel_id, site_name\n| eval headroom_pct=round(100-p95_util,1)\n| where headroom_pct < 20\n| sort headroom_pct\n```\n\nUnderstanding this SPL\n\n**Site-to-Site VPN Tunnel Bandwidth Headroom** — VPN tunnels to cloud and partners have fixed negotiated throughput. Sustained high utilization causes latency and drops during batch windows. Headroom reporting triggers circuit upgrades or traffic engineering before outages.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"vpn:tunnel\"`. **App/TA** (typical add-on context): SNMP, `Splunk Add-on for AWS` (VPN metrics), SD-WAN TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: vpn:tunnel. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"vpn:tunnel\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **bps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by tunnel_id, site_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **headroom_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where headroom_pct < 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Gauge (headroom percent), Line chart (utilization trend), Table (tunnels sorted by risk).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.30",
              "n": "Search and Analytics Cluster Disk Watermark Risk",
              "c": "critical",
              "f": "intermediate",
              "v": "OpenSearch and Elasticsearch clusters stop indexing when disk watermarks are breached, breaking log and application pipelines. Tracking used shard store versus cluster capacity predicts when frozen tiers or data nodes must be added.",
              "t": "Elastic/OpenSearch monitoring API, HTTP Event Collector",
              "d": "`index=observability` `sourcetype=\"elastic:cluster_stats\"`",
              "q": "index=observability sourcetype=\"elastic:cluster_stats\" earliest=-30d\n| eval used=tonumber(coalesce(store_size_bytes, total_used_bytes))\n| eval total=tonumber(coalesce(total_capacity_bytes, disk_total_bytes))\n| eval used_pct=round(100*used/nullif(total,0),2)\n| bin _time span=1d\n| stats latest(used_pct) as used_pct by _time, cluster_name\n| predict used_pct as forecast_pct algorithm=LLP5 future_timespan=30\n| where 'forecast_pct+30d' > 75\n| table cluster_name, used_pct, 'forecast_pct+30d'",
              "m": "(1) Poll `_cluster/stats` or Elastic Cloud metrics daily. (2) Align thresholds with `cluster.routing.allocation.disk.watermark` settings. (3) Integrate with storage forecasting (UC-20.2.2) when forecast crosses seventy-five percent within thirty days.",
              "z": "Area chart (used percent with forecast), Table (clusters breaching planning threshold), Single value (clusters over watermark risk).",
              "kfp": "Alerts during planned hardware refresh windows, before scheduled capacity adds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Elastic/OpenSearch monitoring API, HTTP Event Collector.\n• Ensure the following data sources are available: `index=observability` `sourcetype=\"elastic:cluster_stats\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Poll `_cluster/stats` or Elastic Cloud metrics daily. (2) Align thresholds with `cluster.routing.allocation.disk.watermark` settings. (3) Integrate with storage forecasting (UC-20.2.2) when forecast crosses seventy-five percent within thirty days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=observability sourcetype=\"elastic:cluster_stats\" earliest=-30d\n| eval used=tonumber(coalesce(store_size_bytes, total_used_bytes))\n| eval total=tonumber(coalesce(total_capacity_bytes, disk_total_bytes))\n| eval used_pct=round(100*used/nullif(total,0),2)\n| bin _time span=1d\n| stats latest(used_pct) as used_pct by _time, cluster_name\n| predict used_pct as forecast_pct algorithm=LLP5 future_timespan=30\n| where 'forecast_pct+30d' > 75\n| table cluster_name, used_pct, 'forecast_pct+30d'\n```\n\nUnderstanding this SPL\n\n**Search and Analytics Cluster Disk Watermark Risk** — OpenSearch and Elasticsearch clusters stop indexing when disk watermarks are breached, breaking log and application pipelines. Tracking used shard store versus cluster capacity predicts when frozen tiers or data nodes must be added.\n\nDocumented **Data sources**: `index=observability` `sourcetype=\"elastic:cluster_stats\"`. **App/TA** (typical add-on context): Elastic/OpenSearch monitoring API, HTTP Event Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: observability; **sourcetype**: elastic:cluster_stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=observability, sourcetype=\"elastic:cluster_stats\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **total** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, cluster_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Search and Analytics Cluster Disk Watermark Risk**): predict used_pct as forecast_pct algorithm=LLP5 future_timespan=30\n• Filters the current rows with `where 'forecast_pct+30d' > 75` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Search and Analytics Cluster Disk Watermark Risk**): table cluster_name, used_pct, 'forecast_pct+30d'\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (used percent with forecast), Table (clusters breaching planning threshold), Single value (clusters over watermark risk).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "elasticsearch"
              ],
              "em": [
                "elasticsearch_opensearch"
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.31",
              "n": "Message Broker Disk and Retention Capacity",
              "c": "high",
              "f": "intermediate",
              "v": "Kafka retains messages on disk; retention growth from new microservices or poison-pill topics can fill brokers and halt producers. Monitoring log segment volume and free disk prevents cascading application failures.",
              "t": "JMX TA, Prometheus, Confluent metrics exporter",
              "d": "`index=messaging` `sourcetype=\"kafka:broker:disk\"`",
              "q": "index=messaging sourcetype=\"kafka:broker:disk\" earliest=-14d\n| eval used_gb=tonumber(coalesce(log_size_gb, kafka_log_size_gb))\n| eval free_gb=tonumber(coalesce(disk_free_gb, volume_free_gb))\n| eval total_gb=used_gb+free_gb\n| eval used_pct=round(100*used_gb/nullif(total_gb,0),1)\n| bin _time span=1h\n| stats latest(used_pct) as used_pct latest(retention_hours) as ret_hrs by _time, broker_id, cluster\n| where used_pct>70\n| stats max(used_pct) as peak_used min(ret_hrs) as min_ret by cluster, broker_id\n| sort -peak_used",
              "m": "(1) Export per-broker log volume and filesystem free space. (2) Alert when used_pct exceeds seventy percent for four hours or minimum retention hours drops unexpectedly. (3) Correlate with topic byte rate to find noisy producers.",
              "z": "Line chart (disk used percent by broker), Table (brokers over threshold), Heatmap (cluster by broker).",
              "kfp": "Alerts during planned hardware refresh windows, before scheduled capacity adds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: JMX TA, Prometheus, Confluent metrics exporter.\n• Ensure the following data sources are available: `index=messaging` `sourcetype=\"kafka:broker:disk\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export per-broker log volume and filesystem free space. (2) Alert when used_pct exceeds seventy percent for four hours or minimum retention hours drops unexpectedly. (3) Correlate with topic byte rate to find noisy producers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=messaging sourcetype=\"kafka:broker:disk\" earliest=-14d\n| eval used_gb=tonumber(coalesce(log_size_gb, kafka_log_size_gb))\n| eval free_gb=tonumber(coalesce(disk_free_gb, volume_free_gb))\n| eval total_gb=used_gb+free_gb\n| eval used_pct=round(100*used_gb/nullif(total_gb,0),1)\n| bin _time span=1h\n| stats latest(used_pct) as used_pct latest(retention_hours) as ret_hrs by _time, broker_id, cluster\n| where used_pct>70\n| stats max(used_pct) as peak_used min(ret_hrs) as min_ret by cluster, broker_id\n| sort -peak_used\n```\n\nUnderstanding this SPL\n\n**Message Broker Disk and Retention Capacity** — Kafka retains messages on disk; retention growth from new microservices or poison-pill topics can fill brokers and halt producers. Monitoring log segment volume and free disk prevents cascading application failures.\n\nDocumented **Data sources**: `index=messaging` `sourcetype=\"kafka:broker:disk\"`. **App/TA** (typical add-on context): JMX TA, Prometheus, Confluent metrics exporter. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: messaging; **sourcetype**: kafka:broker:disk. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=messaging, sourcetype=\"kafka:broker:disk\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **used_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **free_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **total_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **used_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, broker_id, cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where used_pct>70` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by cluster, broker_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (disk used percent by broker), Table (brokers over threshold), Heatmap (cluster by broker).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity",
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "prometheus"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.32",
              "n": "GPU Pool Utilization for ML Workload Capacity",
              "c": "medium",
              "f": "advanced",
              "v": "GPU nodes are expensive; low average utilization wastes capital while queue spikes delay training service levels. Tracking streaming-multiprocessor utilization and peak load by host informs autoscaler bounds and purchase versus spot decisions.",
              "t": "NVIDIA DCGM exporter, Kubernetes GPU metrics, cloud GPU monitoring",
              "d": "`index=ml_infra` `sourcetype=\"dcgm:gpu\"`",
              "q": "index=ml_infra sourcetype=\"dcgm:gpu\" earliest=-7d\n| eval util=tonumber(coalesce(gpu_sm_utilization, sm_util_pct))\n| bin _time span=1h\n| stats avg(util) as avg_sm perc95(util) as p95_sm by _time, host, gpu_index\n| stats avg(avg_sm) as fleet_avg max(p95_sm) as fleet_p95 by host\n| eval underused=if(fleet_avg<35 AND fleet_p95<70,1,0)\n| where underused=1 OR fleet_p95>92\n| table host, fleet_avg, fleet_p95",
              "m": "(1) Deploy DCGM on GPU nodes and normalize `gpu_index`. (2) Tag hosts with workload type such as training versus inference. (3) Right-size node pools when underused persists fourteen days; scale out when fleet_p95 exceeds ninety-two.",
              "z": "Box plot (utilization distribution), Table (underused hosts), Timechart (job queue depth if ingested).",
              "kfp": "Alerts during planned hardware refresh windows, before scheduled capacity adds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NVIDIA DCGM exporter, Kubernetes GPU metrics, cloud GPU monitoring.\n• Ensure the following data sources are available: `index=ml_infra` `sourcetype=\"dcgm:gpu\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Deploy DCGM on GPU nodes and normalize `gpu_index`. (2) Tag hosts with workload type such as training versus inference. (3) Right-size node pools when underused persists fourteen days; scale out when fleet_p95 exceeds ninety-two.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ml_infra sourcetype=\"dcgm:gpu\" earliest=-7d\n| eval util=tonumber(coalesce(gpu_sm_utilization, sm_util_pct))\n| bin _time span=1h\n| stats avg(util) as avg_sm perc95(util) as p95_sm by _time, host, gpu_index\n| stats avg(avg_sm) as fleet_avg max(p95_sm) as fleet_p95 by host\n| eval underused=if(fleet_avg<35 AND fleet_p95<70,1,0)\n| where underused=1 OR fleet_p95>92\n| table host, fleet_avg, fleet_p95\n```\n\nUnderstanding this SPL\n\n**GPU Pool Utilization for ML Workload Capacity** — GPU nodes are expensive; low average utilization wastes capital while queue spikes delay training service levels. Tracking streaming-multiprocessor utilization and peak load by host informs autoscaler bounds and purchase versus spot decisions.\n\nDocumented **Data sources**: `index=ml_infra` `sourcetype=\"dcgm:gpu\"`. **App/TA** (typical add-on context): NVIDIA DCGM exporter, Kubernetes GPU metrics, cloud GPU monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ml_infra; **sourcetype**: dcgm:gpu. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ml_infra, sourcetype=\"dcgm:gpu\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **util** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host, gpu_index** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **underused** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where underused=1 OR fleet_p95>92` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GPU Pool Utilization for ML Workload Capacity**): table host, fleet_avg, fleet_p95\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Box plot (utilization distribution), Table (underused hosts), Timechart (job queue depth if ingested).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "kubernetes"
              ],
              "em": [
                "kubernetes_k8s"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.2.33",
              "n": "Domain Controller Performance Under LDAP Load",
              "c": "high",
              "f": "intermediate",
              "v": "Authentication and directory search storms can saturate domain controllers before generic server CPU alerts explain the blast radius. Correlating directory operation rate with CPU utilization supports extra domain controllers, load balancing changes, and misbehaving application fixes.",
              "t": "`Splunk Add-on for Microsoft Windows`, scripted performance export",
              "d": "`index=active_directory` `sourcetype=\"ad:dc:performance\"`",
              "q": "index=active_directory sourcetype=\"ad:dc:performance\" earliest=-24h\n| eval ldap_ops=tonumber(coalesce(ldap_searches_sec, ldap_ops_per_sec))\n| eval cpu_pct=tonumber(coalesce(cpu_utilization, cpu_load_percent))\n| bin _time span=5m\n| stats avg(ldap_ops) as ldap_avg avg(cpu_pct) as cpu_avg by _time, host\n| eventstats median(ldap_avg) as med_ldap by host\n| eval stress=if(ldap_avg>med_ldap*2.5 AND cpu_avg>80,1,0)\n| where stress=1\n| table _time, host, ldap_avg, cpu_avg, med_ldap",
              "m": "(1) Collect NTDS `LDAP Searches/sec` and total CPU via Performance Monitor or a lightweight forwarder script into `ad:dc:performance`. (2) Tune multipliers for your baseline. (3) Escalate repeated stress windows to the identity engineering team with top calling applications from firewall or load balancer logs.",
              "z": "Timeline (stress markers overlaid on CPU), Table (domain controllers with correlated spikes), Single value (stress hours per week).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk Add-on for Microsoft Windows`, scripted performance export.\n• Ensure the following data sources are available: `index=active_directory` `sourcetype=\"ad:dc:performance\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Collect NTDS `LDAP Searches/sec` and total CPU via Performance Monitor or a lightweight forwarder script into `ad:dc:performance`. (2) Tune multipliers for your baseline. (3) Escalate repeated stress windows to the identity engineering team with top calling applications from firewall or load balancer logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=active_directory sourcetype=\"ad:dc:performance\" earliest=-24h\n| eval ldap_ops=tonumber(coalesce(ldap_searches_sec, ldap_ops_per_sec))\n| eval cpu_pct=tonumber(coalesce(cpu_utilization, cpu_load_percent))\n| bin _time span=5m\n| stats avg(ldap_ops) as ldap_avg avg(cpu_pct) as cpu_avg by _time, host\n| eventstats median(ldap_avg) as med_ldap by host\n| eval stress=if(ldap_avg>med_ldap*2.5 AND cpu_avg>80,1,0)\n| where stress=1\n| table _time, host, ldap_avg, cpu_avg, med_ldap\n```\n\nUnderstanding this SPL\n\n**Domain Controller Performance Under LDAP Load** — Authentication and directory search storms can saturate domain controllers before generic server CPU alerts explain the blast radius. Correlating directory operation rate with CPU utilization supports extra domain controllers, load balancing changes, and misbehaving application fixes.\n\nDocumented **Data sources**: `index=active_directory` `sourcetype=\"ad:dc:performance\"`. **App/TA** (typical add-on context): `Splunk Add-on for Microsoft Windows`, scripted performance export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: active_directory; **sourcetype**: ad:dc:performance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=active_directory, sourcetype=\"ad:dc:performance\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ldap_ops** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cpu_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **stress** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where stress=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Domain Controller Performance Under LDAP Load**): table _time, host, ldap_avg, cpu_avg, med_ldap\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (stress markers overlaid on CPU), Table (domain controllers with correlated spikes), Single value (stress hours per week).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.5,
          "qd": {
            "gold": 0,
            "silver": 2,
            "bronze": 31,
            "none": 0
          }
        },
        {
          "i": "20.3",
          "n": "License & Subscription Management",
          "u": [
            {
              "i": "20.3.1",
              "n": "SaaS License Utilization (Assigned vs Active)",
              "c": "high",
              "f": "intermediate",
              "v": "Paying for assigned-but-unused seats wastes budget; comparing entitlements to real sign-in or activity highlights reclaim and right-size opportunities before renewals.",
              "t": "Microsoft Entra ID Add-on, Okta Splunk App, Salesforce TA",
              "d": "`sourcetype=license:usage`, `sourcetype=o365:reporting`",
              "q": "index=saas sourcetype=\"license:usage\"\n| eval assigned=coalesce(licenses_assigned,0), active=coalesce(active_users_30d,0)\n| eval utilization_pct=round(100*active/nullif(assigned,0),1)\n| where utilization_pct < 70 OR active < assigned*0.5\n| stats latest(assigned) as assigned latest(active) as active latest(utilization_pct) as util_pct by product, sku, cost_center\n| sort util_pct",
              "m": "Ingest monthly license assignment exports and last-sign-in or 30-day active user counts from IdP or vendor admin APIs. Schedule weekly jobs; join on `sku`/`product`. Alert when utilization drops below policy thresholds. Feed reclamation workflows with user lists.",
              "z": "Bar chart (utilization % by product), Table (reclaim candidates), Single value (wasted seat estimate).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Microsoft Entra ID Add-on, Okta Splunk App, Salesforce TA.\n• Ensure the following data sources are available: `sourcetype=license:usage`, `sourcetype=o365:reporting`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest monthly license assignment exports and last-sign-in or 30-day active user counts from IdP or vendor admin APIs. Schedule weekly jobs; join on `sku`/`product`. Alert when utilization drops below policy thresholds. Feed reclamation workflows with user lists.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=saas sourcetype=\"license:usage\"\n| eval assigned=coalesce(licenses_assigned,0), active=coalesce(active_users_30d,0)\n| eval utilization_pct=round(100*active/nullif(assigned,0),1)\n| where utilization_pct < 70 OR active < assigned*0.5\n| stats latest(assigned) as assigned latest(active) as active latest(utilization_pct) as util_pct by product, sku, cost_center\n| sort util_pct\n```\n\nUnderstanding this SPL\n\n**SaaS License Utilization (Assigned vs Active)** — Paying for assigned-but-unused seats wastes budget; comparing entitlements to real sign-in or activity highlights reclaim and right-size opportunities before renewals.\n\nDocumented **Data sources**: `sourcetype=license:usage`, `sourcetype=o365:reporting`. **App/TA** (typical add-on context): Microsoft Entra ID Add-on, Okta Splunk App, Salesforce TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: saas; **sourcetype**: license:usage. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=saas, sourcetype=\"license:usage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **assigned** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where utilization_pct < 70 OR active < assigned*0.5` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by product, sku, cost_center** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (utilization % by product), Table (reclaim candidates), Single value (wasted seat estimate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365",
                "okta"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.2",
              "n": "Software Audit Readiness Reporting",
              "c": "high",
              "f": "intermediate",
              "v": "Audit-ready evidence of installs, purchases, and usage reduces true-up penalties and speeds vendor true-up negotiations.",
              "t": "Flexera / Snow SAM export, ServiceNow SAM, Splunk Universal Forwarder inventory",
              "d": "`sourcetype=license:usage`, `sourcetype=inventory:software`",
              "q": "index=software (sourcetype=\"license:usage\" OR sourcetype=\"inventory:software\")\n| eval publisher=coalesce(publisher,vendor), edition=coalesce(edition,product_name)\n| stats dc(host) as install_count sum(entitlement_count) as purchased by publisher, edition\n| eval gap=install_count-purchased\n| where gap>0 OR isnull(purchased)\n| table publisher, edition, install_count, purchased, gap",
              "m": "Normalize discovery data from endpoints and purchase records from procurement. Refresh entitlements from contract system. Dashboard shows install vs entitlement gap by publisher. Export CSV for auditor quarterly.",
              "z": "Table (gap by title), Bar chart (over-deployed publishers), Single value (total gap count).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Flexera / Snow SAM export, ServiceNow SAM, Splunk Universal Forwarder inventory.\n• Ensure the following data sources are available: `sourcetype=license:usage`, `sourcetype=inventory:software`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize discovery data from endpoints and purchase records from procurement. Refresh entitlements from contract system. Dashboard shows install vs entitlement gap by publisher. Export CSV for auditor quarterly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=software (sourcetype=\"license:usage\" OR sourcetype=\"inventory:software\")\n| eval publisher=coalesce(publisher,vendor), edition=coalesce(edition,product_name)\n| stats dc(host) as install_count sum(entitlement_count) as purchased by publisher, edition\n| eval gap=install_count-purchased\n| where gap>0 OR isnull(purchased)\n| table publisher, edition, install_count, purchased, gap\n```\n\nUnderstanding this SPL\n\n**Software Audit Readiness Reporting** — Audit-ready evidence of installs, purchases, and usage reduces true-up penalties and speeds vendor true-up negotiations.\n\nDocumented **Data sources**: `sourcetype=license:usage`, `sourcetype=inventory:software`. **App/TA** (typical add-on context): Flexera / Snow SAM export, ServiceNow SAM, Splunk Universal Forwarder inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: software; **sourcetype**: license:usage, inventory:software. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=software, sourcetype=\"license:usage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **publisher** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by publisher, edition** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap>0 OR isnull(purchased)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Software Audit Readiness Reporting**): table publisher, edition, install_count, purchased, gap\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (gap by title), Bar chart (over-deployed publishers), Single value (total gap count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.3",
              "n": "Subscription Renewal Forecasting",
              "c": "medium",
              "f": "intermediate",
              "v": "Forecasting renewal cash-out dates and contract values avoids surprise budget hits and gives procurement time to negotiate or consolidate vendors.",
              "t": "Contract repository export (ServiceNow SPM, Ariba), marketplace billing",
              "d": "`sourcetype=license:usage`, `sourcetype=aws:billing:cur`",
              "q": "(index=contracts sourcetype=\"license:usage\") OR (index=cloud_billing sourcetype=\"aws:billing:cur\")\n| eval renewal_epoch=strptime(renewal_date,\"%Y-%m-%d\"), amount=tonumber(annual_cost_usd)\n| where renewal_epoch > relative_time(now(),\"+30d@d\") AND renewal_epoch < relative_time(now(),\"+365d@d\")\n| eval days_until=round((renewal_epoch-now())/86400,0)\n| stats sum(amount) as renewal_spend by vendor, renewal_date, cost_center\n| sort renewal_date",
              "m": "Load subscription end dates and annual amounts from CLM or reseller exports; for AWS/Azure/GCP, tag marketplace subscriptions. Alert 90/60/30 days before renewal. Combine with utilization metrics to decide downgrade options before signing.",
              "z": "Timeline (renewals by quarter), Table (upcoming renewals), Single value (12-month renewal liability).",
              "kfp": "Forecasts vary with seasonal load, new project ramps, or M&A integration spending.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Contract repository export (ServiceNow SPM, Ariba), marketplace billing.\n• Ensure the following data sources are available: `sourcetype=license:usage`, `sourcetype=aws:billing:cur`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLoad subscription end dates and annual amounts from CLM or reseller exports; for AWS/Azure/GCP, tag marketplace subscriptions. Alert 90/60/30 days before renewal. Combine with utilization metrics to decide downgrade options before signing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=contracts sourcetype=\"license:usage\") OR (index=cloud_billing sourcetype=\"aws:billing:cur\")\n| eval renewal_epoch=strptime(renewal_date,\"%Y-%m-%d\"), amount=tonumber(annual_cost_usd)\n| where renewal_epoch > relative_time(now(),\"+30d@d\") AND renewal_epoch < relative_time(now(),\"+365d@d\")\n| eval days_until=round((renewal_epoch-now())/86400,0)\n| stats sum(amount) as renewal_spend by vendor, renewal_date, cost_center\n| sort renewal_date\n```\n\nUnderstanding this SPL\n\n**Subscription Renewal Forecasting** — Forecasting renewal cash-out dates and contract values avoids surprise budget hits and gives procurement time to negotiate or consolidate vendors.\n\nDocumented **Data sources**: `sourcetype=license:usage`, `sourcetype=aws:billing:cur`. **App/TA** (typical add-on context): Contract repository export (ServiceNow SPM, Ariba), marketplace billing. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: contracts, cloud_billing; **sourcetype**: license:usage, aws:billing:cur. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=contracts, index=cloud_billing, sourcetype=\"license:usage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **renewal_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where renewal_epoch > relative_time(now(),\"+30d@d\") AND renewal_epoch < relative_time(now(),\"+365d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_until** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by vendor, renewal_date, cost_center** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Timeline (renewals by quarter), Table (upcoming renewals), Single value (12-month renewal liability).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.4",
              "n": "License Compliance Gap Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Unlicensed use of enterprise software creates legal and financial exposure; continuous gap detection supports proactive remediation.",
              "t": "SAM inventory, Adobe/Microsoft portal exports",
              "d": "`sourcetype=inventory:software`, `sourcetype=license:usage`",
              "q": "index=software sourcetype=\"inventory:software\"\n| search is_licensed=\"false\" OR compliance_status=\"unlicensed\"\n| stats count by host, software_name, version, last_seen\n| lookup license_entitlements software_name OUTPUT entitlement_qty\n| eval breach=if(count>entitlement_qty OR isnull(entitlement_qty),1,0)\n| where breach=1\n| sort -count",
              "m": "Flag installs without matching entitlement rows in a KV store refreshed from purchases. Reconcile named-user products with IdP group membership. Alert on new breaches weekly; assign owners by `cost_center`.",
              "z": "Table (compliance gaps), Single value (open violations), Bar chart (gaps by department).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SAM inventory, Adobe/Microsoft portal exports.\n• Ensure the following data sources are available: `sourcetype=inventory:software`, `sourcetype=license:usage`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nFlag installs without matching entitlement rows in a KV store refreshed from purchases. Reconcile named-user products with IdP group membership. Alert on new breaches weekly; assign owners by `cost_center`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=software sourcetype=\"inventory:software\"\n| search is_licensed=\"false\" OR compliance_status=\"unlicensed\"\n| stats count by host, software_name, version, last_seen\n| lookup license_entitlements software_name OUTPUT entitlement_qty\n| eval breach=if(count>entitlement_qty OR isnull(entitlement_qty),1,0)\n| where breach=1\n| sort -count\n```\n\nUnderstanding this SPL\n\n**License Compliance Gap Detection** — Unlicensed use of enterprise software creates legal and financial exposure; continuous gap detection supports proactive remediation.\n\nDocumented **Data sources**: `sourcetype=inventory:software`, `sourcetype=license:usage`. **App/TA** (typical add-on context): SAM inventory, Adobe/Microsoft portal exports. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: software; **sourcetype**: inventory:software. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=software, sourcetype=\"inventory:software\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, software_name, version, last_seen** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where breach=1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (compliance gaps), Single value (open violations), Bar chart (gaps by department).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.5",
              "n": "Multi-Year Contract Consumption Trending",
              "c": "medium",
              "f": "advanced",
              "v": "Enterprise agreements with committed spend need burn-down tracking; falling behind consumption risks leaving value on the table, while overspending early risks true-up shocks.",
              "t": "AWS/GCP/Azure EA billing, custom commitment tracker",
              "d": "`sourcetype=aws:billing:cur`, `sourcetype=license:usage`",
              "q": "index=cloud_billing sourcetype=\"aws:billing:cur\"\n| eval spend=tonumber(lineItem_UnblendedCost)\n| eval contract_id=coalesce(agreement_id,license_pool_id)\n| bin _time span=1mon\n| stats sum(spend) as monthly_spend by contract_id, _time\n| sort contract_id, _time\n| streamstats sum(monthly_spend) as ytd_spend by contract_id\n| lookup contract_commitments contract_id OUTPUT commit_total_usd\n| eval pct_consumed=round(100*ytd_spend/nullif(commit_total_usd,0),1)\n| table contract_id, _time, ytd_spend, pct_consumed",
              "m": "Map invoices and usage lines to enterprise agreement IDs. Compare cumulative spend to committed totals and contract term. Project end-of-term position with linear or seasonal fit. Alert if consumption is off pace vs. expected curve.",
              "z": "Line chart (% consumed vs time), Gauge (YTD vs commit), Table (contract status).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AWS/GCP/Azure EA billing, custom commitment tracker.\n• Ensure the following data sources are available: `sourcetype=aws:billing:cur`, `sourcetype=license:usage`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap invoices and usage lines to enterprise agreement IDs. Compare cumulative spend to committed totals and contract term. Project end-of-term position with linear or seasonal fit. Alert if consumption is off pace vs. expected curve.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_billing sourcetype=\"aws:billing:cur\"\n| eval spend=tonumber(lineItem_UnblendedCost)\n| eval contract_id=coalesce(agreement_id,license_pool_id)\n| bin _time span=1mon\n| stats sum(spend) as monthly_spend by contract_id, _time\n| sort contract_id, _time\n| streamstats sum(monthly_spend) as ytd_spend by contract_id\n| lookup contract_commitments contract_id OUTPUT commit_total_usd\n| eval pct_consumed=round(100*ytd_spend/nullif(commit_total_usd,0),1)\n| table contract_id, _time, ytd_spend, pct_consumed\n```\n\nUnderstanding this SPL\n\n**Multi-Year Contract Consumption Trending** — Enterprise agreements with committed spend need burn-down tracking; falling behind consumption risks leaving value on the table, while overspending early risks true-up shocks.\n\nDocumented **Data sources**: `sourcetype=aws:billing:cur`, `sourcetype=license:usage`. **App/TA** (typical add-on context): AWS/GCP/Azure EA billing, custom commitment tracker. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_billing; **sourcetype**: aws:billing:cur. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_billing, sourcetype=\"aws:billing:cur\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **spend** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **contract_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by contract_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` computes a running total per row; here it accumulates **ytd_spend by contract_id** so each monthly row carries the cumulative spend to date.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **pct_consumed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Multi-Year Contract Consumption Trending**): table contract_id, _time, ytd_spend, pct_consumed\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (% consumed vs time), Gauge (YTD vs commit), Table (contract status).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity",
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "azure",
                "gcp"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.6",
              "n": "License Pool Allocation Optimization",
              "c": "medium",
              "f": "intermediate",
              "v": "Shifting unused pool capacity between departments or regions avoids buying new seats while one pool sits idle.",
              "t": "ServiceNow SAM, custom pool allocator",
              "d": "`sourcetype=license:usage`",
              "q": "index=saas sourcetype=\"license:usage\"\n| stats sum(assigned) as assigned sum(consumed) as consumed by pool_id, org_unit\n| eval slack=assigned-consumed\n| eventstats sum(slack) as total_slack by pool_id\n| where slack < 0 OR (slack > 50 AND total_slack > 0)\n| sort pool_id, slack",
              "m": "Ingest per–cost-center assignments against shared enterprise pools. Identify negative slack (overallocation) and large positive slack (reclaimable). Recommend transfers using simple optimization rules in a lookup updated monthly.",
              "z": "Heatmap (org × pool utilization), Table (rebalance suggestions), Bar chart (slack by pool).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ServiceNow SAM, custom pool allocator.\n• Ensure the following data sources are available: `sourcetype=license:usage`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest per–cost-center assignments against shared enterprise pools. Identify negative slack (overallocation) and large positive slack (reclaimable). Recommend transfers using simple optimization rules in a lookup updated monthly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=saas sourcetype=\"license:usage\"\n| stats sum(assigned) as assigned sum(consumed) as consumed by pool_id, org_unit\n| eval slack=assigned-consumed\n| eventstats sum(slack) as total_slack by pool_id\n| where slack < 0 OR (slack > 50 AND total_slack > 0)\n| sort pool_id, slack\n```\n\nUnderstanding this SPL\n\n**License Pool Allocation Optimization** — Shifting unused pool capacity between departments or regions avoids buying new seats while one pool sits idle.\n\nDocumented **Data sources**: `sourcetype=license:usage`. **App/TA** (typical add-on context): ServiceNow SAM, custom pool allocator. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: saas; **sourcetype**: license:usage. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=saas, sourcetype=\"license:usage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by pool_id, org_unit** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **slack** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by pool_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where slack < 0 OR (slack > 50 AND total_slack > 0)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (org × pool utilization), Table (rebalance suggestions), Bar chart (slack by pool).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.7",
              "n": "Auto-Renewal Risk Detection",
              "c": "high",
              "f": "beginner",
              "v": "Unwanted auto-renewals lock spend for another term; early visibility lets legal and procurement opt out or renegotiate within notice windows.",
              "t": "CLM webhook, calendar export, `license:usage` metadata",
              "d": "`sourcetype=license:usage`, `sourcetype=contracts:events`",
              "q": "index=contracts (sourcetype=\"license:usage\" OR sourcetype=\"contracts:events\")\n| eval opt_out_deadline=strptime(cancellation_deadline,\"%Y-%m-%d\")\n| eval auto_renew=if(match(lower(renewal_terms),\"auto\"),1,0)\n| where auto_renew=1 AND opt_out_deadline > now() AND opt_out_deadline < relative_time(now(),\"+90d@d\")\n| eval days_to_opt_out=round((opt_out_deadline-now())/86400,0)\n| table vendor, product, renewal_date, cancellation_deadline, days_to_opt_out, owner\n| sort days_to_opt_out",
              "m": "Capture `auto_renew` flags and contractual opt-out dates from vendor metadata or CLM. Alert owners at 90/60/30 days before the cancellation window closes. Track completion of opt-out tickets in ITSM.",
              "z": "Table (upcoming opt-out deadlines), Single value (contracts at risk), Timeline (deadlines).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CLM webhook, calendar export, `license:usage` metadata.\n• Ensure the following data sources are available: `sourcetype=license:usage`, `sourcetype=contracts:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture `auto_renew` flags and contractual opt-out dates from vendor metadata or CLM. Alert owners at 90/60/30 days before the cancellation window closes. Track completion of opt-out tickets in ITSM.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=contracts (sourcetype=\"license:usage\" OR sourcetype=\"contracts:events\")\n| eval opt_out_deadline=strptime(cancellation_deadline,\"%Y-%m-%d\")\n| eval auto_renew=if(match(lower(renewal_terms),\"auto\"),1,0)\n| where auto_renew=1 AND opt_out_deadline > now() AND opt_out_deadline < relative_time(now(),\"+90d@d\")\n| eval days_to_opt_out=round((opt_out_deadline-now())/86400,0)\n| table vendor, product, renewal_date, cancellation_deadline, days_to_opt_out, owner\n| sort days_to_opt_out\n```\n\nUnderstanding this SPL\n\n**Auto-Renewal Risk Detection** — Unwanted auto-renewals lock spend for another term; early visibility lets legal and procurement opt out or renegotiate within notice windows.\n\nDocumented **Data sources**: `sourcetype=license:usage`, `sourcetype=contracts:events`. **App/TA** (typical add-on context): CLM webhook, calendar export, `license:usage` metadata. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: contracts; **sourcetype**: license:usage, contracts:events. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=contracts, sourcetype=\"license:usage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opt_out_deadline** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **auto_renew** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where auto_renew=1 AND opt_out_deadline > now() AND opt_out_deadline < relative_time(now(),\"+90d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_to_opt_out** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Auto-Renewal Risk Detection**): table vendor, product, renewal_date, cancellation_deadline, days_to_opt_out, owner\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (upcoming opt-out deadlines), Single value (contracts at risk), Timeline (deadlines).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Compliance",
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.8",
              "n": "Microsoft 365 Inactive License Harvest Candidates",
              "c": "high",
              "f": "intermediate",
              "v": "Seats assigned to users who have not signed in for months are pure renewal waste. Identifying inactive assignees before true-up converts soft savings into reclaimed licenses and lower E5 or add-on counts at contract signature.",
              "t": "Microsoft Entra ID Add-on, `o365:reporting` HEC export, Graph API scripted input",
              "d": "`index=saas` `sourcetype=\"o365:license_assignment\"`",
              "q": "index=saas sourcetype=\"o365:license_assignment\" earliest=-1d\n| eval last_signin=if(isnotnull(last_signin_epoch), last_signin_epoch, strptime(last_signin,\"%Y-%m-%dT%H:%M:%SZ\"))\n| eval inactive_days=round((now()-last_signin)/86400,0)\n| where isnotnull(assigned_license_sku) AND assigned_license_sku!=\"\" AND (inactive_days>90 OR isnull(last_signin))\n| stats dc(user_upn) as harvest_candidates sum(monthly_seat_cost_usd) as monthly_at_risk by assigned_license_sku, department\n| sort -monthly_at_risk",
              "m": "(1) Ingest daily license assignment with last interactive sign-in from Graph `reports/getOffice365ActivationsUserDetail` or equivalent. (2) Join `monthly_seat_cost_usd` from a procurement lookup by SKU. (3) Open harvest tickets only after manager approval workflow in ITSM.",
              "z": "Table (SKU and department reclaim value), Bar chart (inactive seats by workload), Single value (total monthly at-risk dollars).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Microsoft Entra ID Add-on, `o365:reporting` HEC export, Graph API scripted input.\n• Ensure the following data sources are available: `index=saas` `sourcetype=\"o365:license_assignment\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest daily license assignment with last interactive sign-in from Graph `reports/getOffice365ActivationsUserDetail` or equivalent. (2) Join `monthly_seat_cost_usd` from a procurement lookup by SKU. (3) Open harvest tickets only after manager approval workflow in ITSM.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=saas sourcetype=\"o365:license_assignment\" earliest=-1d\n| eval last_signin=if(isnotnull(last_signin_epoch), last_signin_epoch, strptime(last_signin,\"%Y-%m-%dT%H:%M:%SZ\"))\n| eval inactive_days=round((now()-last_signin)/86400,0)\n| where isnotnull(assigned_license_sku) AND assigned_license_sku!=\"\" AND (inactive_days>90 OR isnull(last_signin))\n| stats dc(user_upn) as harvest_candidates sum(monthly_seat_cost_usd) as monthly_at_risk by assigned_license_sku, department\n| sort -monthly_at_risk\n```\n\nUnderstanding this SPL\n\n**Microsoft 365 Inactive License Harvest Candidates** — Seats assigned to users who have not signed in for months are pure renewal waste. Identifying inactive assignees before true-up converts soft savings into reclaimed licenses and lower E5 or add-on counts at contract signature.\n\nDocumented **Data sources**: `index=saas` `sourcetype=\"o365:license_assignment\"`. **App/TA** (typical add-on context): Microsoft Entra ID Add-on, `o365:reporting` HEC export, Graph API scripted input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: saas; **sourcetype**: o365:license_assignment. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=saas, sourcetype=\"o365:license_assignment\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **last_signin** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **inactive_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(assigned_license_sku) AND assigned_license_sku!=\"\" AND (inactive_days>90 OR isnull(last_signin))` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by assigned_license_sku, department** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (SKU and department reclaim value), Bar chart (inactive seats by workload), Single value (total monthly at-risk dollars).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.9",
              "n": "Salesforce Seat Activity vs Purchased Licenses",
              "c": "medium",
              "f": "intermediate",
              "v": "Salesforce contracts charge per seat tier while many users only log in quarterly for reporting. Comparing thirty-day active logins to purchased seats highlights downgrade and permission-set consolidation opportunities before renewal ramps.",
              "t": "Salesforce Splunk Connector, EventLog from Salesforce Shield (optional)",
              "d": "`index=saas` `sourcetype=\"salesforce:login\"`",
              "q": "index=saas sourcetype=\"salesforce:login\" earliest=-30d\n| eval uid=coalesce(user_id, USER_ID)\n| stats dc(uid) as active_users_30d by org_id\n| join type=left org_id [\n  search index=saas sourcetype=\"salesforce:license_snapshot\" earliest=-1d\n  | stats latest(purchased_seats) as purchased by org_id\n]\n| eval utilization_pct=round(100*active_users_30d/nullif(purchased,0),1)\n| eval slack_seats=purchased-active_users_30d\n| where slack_seats>20 OR utilization_pct<60\n| table org_id, purchased, active_users_30d, utilization_pct, slack_seats",
              "m": "(1) Schedule daily license snapshot via Salesforce REST into `salesforce:license_snapshot`. (2) Deduplicate login events per user per day before `dc`. (3) Feed slack_seats into renewal negotiation talking points.",
              "z": "Gauge (utilization percent), Bar chart (slack seats by org), Table (orgs under sixty percent utilization).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Salesforce Splunk Connector, EventLog from Salesforce Shield (optional).\n• Ensure the following data sources are available: `index=saas` `sourcetype=\"salesforce:login\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Schedule daily license snapshot via Salesforce REST into `salesforce:license_snapshot`. (2) Deduplicate login events per user per day before `dc`. (3) Feed slack_seats into renewal negotiation talking points.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=saas sourcetype=\"salesforce:login\" earliest=-30d\n| eval uid=coalesce(user_id, USER_ID)\n| stats dc(uid) as active_users_30d by org_id\n| join type=left org_id [\n  search index=saas sourcetype=\"salesforce:license_snapshot\" earliest=-1d\n  | stats latest(purchased_seats) as purchased by org_id\n]\n| eval utilization_pct=round(100*active_users_30d/nullif(purchased,0),1)\n| eval slack_seats=purchased-active_users_30d\n| where slack_seats>20 OR utilization_pct<60\n| table org_id, purchased, active_users_30d, utilization_pct, slack_seats\n```\n\nUnderstanding this SPL\n\n**Salesforce Seat Activity vs Purchased Licenses** — Salesforce contracts charge per seat tier while many users only log in quarterly for reporting. Comparing thirty-day active logins to purchased seats highlights downgrade and permission-set consolidation opportunities before renewal ramps.\n\nDocumented **Data sources**: `index=saas` `sourcetype=\"salesforce:login\"`. **App/TA** (typical add-on context): Salesforce Splunk Connector, EventLog from Salesforce Shield (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: saas; **sourcetype**: salesforce:login. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=saas, sourcetype=\"salesforce:login\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **uid** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by org_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **slack_seats** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where slack_seats>20 OR utilization_pct<60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Salesforce Seat Activity vs Purchased Licenses**): table org_id, purchased, active_users_30d, utilization_pct, slack_seats\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (utilization percent), Bar chart (slack seats by org), Table (orgs under sixty percent utilization).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.10",
              "n": "ServiceNow Fulfiller versus Requester License Mix",
              "c": "high",
              "f": "advanced",
              "v": "ITSM platforms often blend fulfiller, approver, and requester licenses; over-purchasing fulfiller seats while requester counts spike drives both waste and audit findings. Aligning provisioned roles to transaction patterns avoids shelfware and compliance gaps.",
              "t": "ServiceNow Splunk Integration, `snow:license` export",
              "d": "`index=itsm` `sourcetype=\"snow:user_role\"`",
              "q": "index=itsm sourcetype=\"snow:user_role\" earliest=-1d\n| eval fulfiller=if(match(lower(roles),\"itil|fulfiller|agent\"),1,0)\n| stats dc(eval(if(fulfiller=1,user_id,null()))) as fulfiller_users\n        dc(user_id) as total_users by instance_name\n| join type=left instance_name [\n  search index=itsm sourcetype=\"snow:transaction\" earliest=-30d\n  | stats dc(opened_by) as active_requesters by instance_name\n]\n| eval fulfiller_ratio=round(100*fulfiller_users/nullif(total_users,0),1)\n| where fulfiller_users>active_requesters*1.5 OR fulfiller_ratio>35\n| table instance_name, fulfiller_users, active_requesters, fulfiller_ratio",
              "m": "(1) Export user-to-role assignments nightly from ServiceNow `sys_user_has_role`. (2) Count distinct requesters from `incident` and `sc_request` over thirty days. (3) Work with process owners when fulfiller count greatly exceeds active requesters.",
              "z": "Scatter plot (fulfiller users versus active requesters), Table (instances over policy ratio), Single value (excess fulfiller seats estimate from lookup).",
              "kfp": "This UC fires on role-assignment data, not on activity, so the most common false-positives are gaps between **role assignment** (what ServiceNow bills on) and **role usability** (what the user actually needs). (a) Quarterly approvers — CAB chair, change_manager, problem_manager — legitimately use ServiceNow once per quarter and will fall into the 90- or 180-day buckets; cross-check `assigned_to` / `opened_by` activity over a 90-day window from `snow:change_request` and `snow:problem` before revoking. (b) SSO last-login lag — when the IdP issues long-lived tokens and reuses sessions, ServiceNow's `last_login_time` lags actual platform interaction by days or weeks; for SSO-authenticated populations, derive activity from `snow:incident.assigned_to` and `snow:change_request.assigned_to` instead of `last_login_time`. (c) Newly provisioned users — anyone whose `sys_user.sys_created_on` is within the past 14 days hasn't had time to log in yet; treat them as a separate `pending_first_login` bucket, not as `01_never_logged_in`, otherwise every fresh hire shows up as waste. (d) Service / integration accounts missing the `web_service_access_only` flag — the provisioning team forgot to set the flag during creation, so an integration account looks like a dormant Fulfiller; cross-check usernames against a `servicenow_service_account_allowlist.csv` lookup, then push a corrective update upstream so the flag is set going forward. (e) Group-inherited roles — if a user holds an `itil`-tier role only via group membership, `snow:sys_user_has_role` may not show it directly; you also need to ingest `snow:sys_group_has_role` and `snow:sys_user_grmember` and union both assignment paths before bucketing.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ServiceNow Splunk Integration, `snow:license` export.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:user_role\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export user-to-role assignments nightly from ServiceNow `sys_user_has_role`. (2) Count distinct requesters from `incident` and `sc_request` over thirty days. (3) Work with process owners when fulfiller count greatly exceeds active requesters.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:user_role\" earliest=-1d\n| eval fulfiller=if(match(lower(roles),\"itil|fulfiller|agent\"),1,0)\n| stats dc(eval(if(fulfiller=1,user_id,null()))) as fulfiller_users\n        dc(user_id) as total_users by instance_name\n| join type=left instance_name [\n  search index=itsm sourcetype=\"snow:transaction\" earliest=-30d\n  | stats dc(opened_by) as active_requesters by instance_name\n]\n| eval fulfiller_ratio=round(100*fulfiller_users/nullif(total_users,0),1)\n| where fulfiller_users>active_requesters*1.5 OR fulfiller_ratio>35\n| table instance_name, fulfiller_users, active_requesters, fulfiller_ratio\n```\n\nUnderstanding this SPL\n\n**ServiceNow Fulfiller versus Requester License Mix** — ITSM platforms often blend fulfiller, approver, and requester licenses; over-purchasing fulfiller seats while requester counts spike drives both waste and audit findings. Aligning provisioned roles to transaction patterns avoids shelfware and compliance gaps.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:user_role\"`. **App/TA** (typical add-on context): ServiceNow Splunk Integration, `snow:license` export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:user_role. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:user_role\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fulfiller** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by instance_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **fulfiller_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fulfiller_users>active_requesters*1.5 OR fulfiller_ratio>35` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ServiceNow Fulfiller versus Requester License Mix**): table instance_name, fulfiller_users, active_requesters, fulfiller_ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter plot (fulfiller users versus active requesters), Table (instances over policy ratio), Single value (excess fulfiller seats estimate from lookup).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We count how many people at the company have the expensive ServiceNow seat — the 'fulfiller' kind that lets you work on tickets, not just submit them — and then we check who hasn't logged in for months. Every dormant fulfiller seat is still on the bill, so we list them out for the team that owns the renewal, before the auditor or the renewal letter does.",
              "mtype": [
                "Cost",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.11",
              "n": "Oracle Database Option Usage versus Entitlements",
              "c": "critical",
              "f": "advanced",
              "v": "Options such as Partitioning, Advanced Compression, and Diagnostics Pack are metered separately from the processor license. Undetected feature use creates audit exposure and six-figure true-up risk; proactive comparison to LMS entitlements funds remediation projects instead of penalties.",
              "t": "`Splunk DB Connect` (DBA_FEATURE_USAGE_STATISTICS), Oracle audit exports",
              "d": "`index=database` `sourcetype=\"oracle:option_usage\"`",
              "q": "index=database sourcetype=\"oracle:option_usage\" earliest=-1d\n| eval detected=if(upper(currently_used)=\"TRUE\" OR currently_used=\"1\",1,0)\n| stats values(product) as option_name max(detected) as in_use by db_name, host\n| join type=left db_name host [\n  search index=licenses sourcetype=\"oracle:entitlement\" earliest=-1d\n  | table db_name, host, product, entitled\n]\n| eval gap=if(in_use=1 AND (entitled=0 OR isnull(entitled)),1,0)\n| where gap=1\n| table db_name, host, option_name, entitled",
              "m": "(1) Ingest weekly DBA_FEATURE_USAGE_STATISTICS via DB Connect with stable `product` names. (2) Maintain `oracle:entitlement` from procurement. (3) Open high-priority changes to disable unused options or purchase entitlements before vendor review.",
              "z": "Table (unentitled options in use), Bar chart (gap count by data center), Single value (databases with violations).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `Splunk DB Connect` (DBA_FEATURE_USAGE_STATISTICS), Oracle audit exports.\n• Ensure the following data sources are available: `index=database` `sourcetype=\"oracle:option_usage\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest weekly DBA_FEATURE_USAGE_STATISTICS via DB Connect with stable `product` names. (2) Maintain `oracle:entitlement` from procurement. (3) Open high-priority changes to disable unused options or purchase entitlements before vendor review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"oracle:option_usage\" earliest=-1d\n| eval detected=if(upper(currently_used)=\"TRUE\" OR currently_used=\"1\",1,0)\n| stats values(product) as option_name max(detected) as in_use by db_name, host\n| join type=left db_name host [\n  search index=licenses sourcetype=\"oracle:entitlement\" earliest=-1d\n  | table db_name, host, product, entitled\n]\n| eval gap=if(in_use=1 AND (entitled=0 OR isnull(entitled)),1,0)\n| where gap=1\n| table db_name, host, option_name, entitled\n```\n\nUnderstanding this SPL\n\n**Oracle Database Option Usage versus Entitlements** — Options such as Partitioning, Advanced Compression, and Diagnostics Pack are metered separately from the processor license. Undetected feature use creates audit exposure and six-figure true-up risk; proactive comparison to LMS entitlements funds remediation projects instead of penalties.\n\nDocumented **Data sources**: `index=database` `sourcetype=\"oracle:option_usage\"`. **App/TA** (typical add-on context): `Splunk DB Connect` (DBA_FEATURE_USAGE_STATISTICS), Oracle audit exports. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: oracle:option_usage. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"oracle:option_usage\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **detected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by db_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Oracle Database Option Usage versus Entitlements**): table db_name, host, option_name, entitled\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unentitled options in use), Bar chart (gap count by data center), Single value (databases with violations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Compliance",
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "oracle"
              ],
              "em": [
                "oracle_oracle_db"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.12",
              "n": "Splunk Enterprise License Pool Usage and Stack Warnings",
              "c": "high",
              "f": "intermediate",
              "v": "Splunk indexing volume directly ties to dollar cost; pools that repeatedly approach quota force soft violations, search degradation, and unplanned license purchases. Watching daily ingestion by pool and stack supports data retirement and routing before finance sees an emergency PO.",
              "t": "Splunk internal telemetry (no TA), Monitoring Console (optional)",
              "d": "`index=_internal` `source=*license_usage.log*` `type=Usage`",
              "q": "index=_internal source=*license_usage.log* type=Usage earliest=-30d\n| eval gb=round(b/1024/1024/1024,4)\n| bin _time span=1d\n| stats sum(gb) as idx_gb by _time, pool, stack\n| eventstats sum(idx_gb) as daily_total by _time\n| lookup splunk_license_quota pool stack OUTPUT quota_gb_per_day\n| eval util_pct=round(100*idx_gb/nullif(quota_gb_per_day,0),1)\n| where util_pct>85\n| sort -util_pct",
              "m": "(1) Build `splunk_license_quota` from your entitlement and stacking plan (GB per day per pool). (2) Alert at eighty-five percent for two consecutive days. (3) Pair with `data model` acceleration and sourcetype-level volume reports to find noisy sources.",
              "z": "Line chart (indexed GB versus quota by pool), Table (pools over threshold), Single value (total daily utilization percent).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk internal telemetry (no TA), Monitoring Console (optional).\n• Ensure the following data sources are available: `index=_internal` `source=*license_usage.log*` `type=Usage`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build `splunk_license_quota` from your entitlement and stacking plan (GB per day per pool). (2) Alert at eighty-five percent for two consecutive days. (3) Pair with `data model` acceleration and sourcetype-level volume reports to find noisy sources.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal source=*license_usage.log* type=Usage earliest=-30d\n| eval gb=round(b/1024/1024/1024,4)\n| bin _time span=1d\n| stats sum(gb) as idx_gb by _time, pool, stack\n| eventstats sum(idx_gb) as daily_total by _time\n| lookup splunk_license_quota pool stack OUTPUT quota_gb_per_day\n| eval util_pct=round(100*idx_gb/nullif(quota_gb_per_day,0),1)\n| where util_pct>85\n| sort -util_pct\n```\n\nUnderstanding this SPL\n\n**Splunk Enterprise License Pool Usage and Stack Warnings** — Splunk indexing volume directly ties to dollar cost; pools that repeatedly approach quota force soft violations, search degradation, and unplanned license purchases. Watching daily ingestion by pool and stack supports data retirement and routing before finance sees an emergency PO.\n\nDocumented **Data sources**: `index=_internal` `source=*license_usage.log*` `type=Usage`. **App/TA** (typical add-on context): Splunk internal telemetry (no TA), Monitoring Console (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, pool, stack** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where util_pct>85` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (indexed GB versus quota by pool), Table (pools over threshold), Single value (total daily utilization percent).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity",
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.13",
              "n": "SAP Named User License versus Concurrent Session Peaks",
              "c": "critical",
              "f": "advanced",
              "v": "SAP named-user contracts are priced on authorized humans while technical users and peak dialog sessions can exceed design points. Comparing entitled named users to measured concurrent dialog peaks highlights indirect access risk and indirect-license optimization programs.",
              "t": "SAProuter or SAP Security Audit Log forwarder, SAP Solution Manager export",
              "d": "`index=sap` `sourcetype=\"sap:sm20\"`",
              "q": "index=sap sourcetype=\"sap:sm20\" earliest=-30d\n| eval user=coalesce(sap_user, user_name)\n| bin _time span=1h\n| stats dc(user) as named_dialog_users by _time, system_id\n| eventstats max(named_dialog_users) as peak_concurrent by system_id\n| join type=left system_id [\n  search index=licenses sourcetype=\"sap:license_position\" earliest=-1d\n  | stats latest(named_users_entitled) as entitled by system_id\n]\n| eval peak_to_entitled_pct=round(100*peak_concurrent/nullif(entitled,0),1)\n| where peak_to_entitled_pct>25 AND peak_concurrent>entitled*0.9\n| table system_id, entitled, peak_concurrent, peak_to_entitled_pct",
              "m": "(1) Ingest SM20 or gateway session logs with one-hour granularity. (2) Refresh `named_users_entitled` from LAW or contract system weekly. (3) Engage SAP measurement team when peaks approach entitlement for sustained business days.",
              "z": "Line chart (concurrent users versus entitlement), Table (systems breaching policy), Single value (peak over entitlement hours per month).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SAProuter or SAP Security Audit Log forwarder, SAP Solution Manager export.\n• Ensure the following data sources are available: `index=sap` `sourcetype=\"sap:sm20\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest SM20 or gateway session logs with one-hour granularity. (2) Refresh `named_users_entitled` from LAW or contract system weekly. (3) Engage SAP measurement team when peaks approach entitlement for sustained business days.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sap sourcetype=\"sap:sm20\" earliest=-30d\n| eval user=coalesce(sap_user, user_name)\n| bin _time span=1h\n| stats dc(user) as named_dialog_users by _time, system_id\n| eventstats max(named_dialog_users) as peak_concurrent by system_id\n| join type=left system_id [\n  search index=licenses sourcetype=\"sap:license_position\" earliest=-1d\n  | stats latest(named_users_entitled) as entitled by system_id\n]\n| eval peak_to_entitled_pct=round(100*peak_concurrent/nullif(entitled,0),1)\n| where peak_to_entitled_pct>25 AND peak_concurrent>entitled*0.9\n| table system_id, entitled, peak_concurrent, peak_to_entitled_pct\n```\n\nUnderstanding this SPL\n\n**SAP Named User License versus Concurrent Session Peaks** — SAP named-user contracts are priced on authorized humans while technical users and peak dialog sessions can exceed design points. Comparing entitled named users to measured concurrent dialog peaks highlights indirect access risk and indirect-license optimization programs.\n\nDocumented **Data sources**: `index=sap` `sourcetype=\"sap:sm20\"`. **App/TA** (typical add-on context): SAProuter or SAP Security Audit Log forwarder, SAP Solution Manager export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sap; **sourcetype**: sap:sm20. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sap, sourcetype=\"sap:sm20\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, system_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by system_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **peak_to_entitled_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where peak_to_entitled_pct>25 AND peak_concurrent>entitled*0.9` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SAP Named User License versus Concurrent Session Peaks**): table system_id, entitled, peak_concurrent, peak_to_entitled_pct\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (concurrent users versus entitlement), Table (systems breaching policy), Single value (peak over entitlement hours per month).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Compliance",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.14",
              "n": "Software License Harvesting Queue from SAM Reclamation",
              "c": "medium",
              "f": "intermediate",
              "v": "Reclamation workflows stall when approvals idle in queues. Tracking candidate seats from assignment to uninstall reduces carry-over into the next renewal cycle and improves realized savings from harvest playbooks.",
              "t": "Flexera IT Visibility, ServiceNow SAM, Snow Inventory",
              "d": "`index=software` `sourcetype=\"sam:reclaim_ticket\"`",
              "q": "index=software sourcetype=\"sam:reclaim_ticket\" earliest=-90d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval closed=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| eval age_days=if(isnotnull(closed), round((closed-opened)/86400,1), round((now()-opened)/86400,1))\n| where status!=\"Closed\" OR age_days>14\n| stats count as tickets avg(age_days) as avg_age sum(potential_savings_usd) as pipeline_savings by owner_team, publisher\n| where tickets>5\n| sort -pipeline_savings",
              "m": "(1) Push reclamation ticket milestones from ITSM when integrated with SAM. (2) Escalate tickets open more than fourteen days. (3) Report pipeline_savings to FinOps monthly for credited harvest dollars.",
              "z": "Bar chart (open reclaim savings by team), Table (stale tickets), Single value (total pipeline savings dollars).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Flexera IT Visibility, ServiceNow SAM, Snow Inventory.\n• Ensure the following data sources are available: `index=software` `sourcetype=\"sam:reclaim_ticket\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Push reclamation ticket milestones from ITSM when integrated with SAM. (2) Escalate tickets open more than fourteen days. (3) Report pipeline_savings to FinOps monthly for credited harvest dollars.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=software sourcetype=\"sam:reclaim_ticket\" earliest=-90d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval closed=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| eval age_days=if(isnotnull(closed), round((closed-opened)/86400,1), round((now()-opened)/86400,1))\n| where status!=\"Closed\" OR age_days>14\n| stats count as tickets avg(age_days) as avg_age sum(potential_savings_usd) as pipeline_savings by owner_team, publisher\n| where tickets>5\n| sort -pipeline_savings\n```\n\nUnderstanding this SPL\n\n**Software License Harvesting Queue from SAM Reclamation** — Reclamation workflows stall when approvals idle in queues. Tracking candidate seats from assignment to uninstall reduces carry-over into the next renewal cycle and improves realized savings from harvest playbooks.\n\nDocumented **Data sources**: `index=software` `sourcetype=\"sam:reclaim_ticket\"`. **App/TA** (typical add-on context): Flexera IT Visibility, ServiceNow SAM, Snow Inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: software; **sourcetype**: sam:reclaim_ticket. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=software, sourcetype=\"sam:reclaim_ticket\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **closed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where status!=\"Closed\" OR age_days>14` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by owner_team, publisher** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where tickets>5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (open reclaim savings by team), Table (stale tickets), Single value (total pipeline savings dollars).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost",
                "Capacity"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.15",
              "n": "GitHub Enterprise Seat Utilization versus Active Contributors",
              "c": "medium",
              "f": "beginner",
              "v": "GitHub Enterprise bills per occupied seat while many accounts represent bots, service users, or former contractors. Comparing billed seats to users with pushes or reviews in the last sixty days supports deprovisioning and org consolidation before annual renewal.",
              "t": "GitHub Audit Log streaming to Splunk, GitHub Enterprise Server TA (optional)",
              "d": "`index=devops` `sourcetype=\"github:audit\"`",
              "q": "index=devops sourcetype=\"github:audit\" earliest=-60d\n| search action=pull_request OR action=push OR action=issue_comment\n| eval actor=coalesce(actor, user, login)\n| stats dc(actor) as active_contributors_60d by enterprise_slug\n| join type=left enterprise_slug [\n  search index=devops sourcetype=\"github:license_snapshot\" earliest=-1d\n  | stats latest(billed_seats) as billed_seats by enterprise_slug\n]\n| eval seat_util_pct=round(100*active_contributors_60d/nullif(billed_seats,0),1)\n| eval dormant_seats=billed_seats-active_contributors_60d\n| where dormant_seats>10 OR seat_util_pct<70\n| table enterprise_slug, billed_seats, active_contributors_60d, seat_util_pct, dormant_seats",
              "m": "(1) Enable audit log streaming with actor and action fields normalized. (2) Ingest nightly seat count from billing API into `github:license_snapshot`. (3) Coordinate dormant seat removal with org owners outside peak release windows.",
              "z": "Gauge (seat utilization percent), Table (enterprises with dormant seats), Bar chart (dormant seats by business unit tag).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub Audit Log streaming to Splunk, GitHub Enterprise Server TA (optional).\n• Ensure the following data sources are available: `index=devops` `sourcetype=\"github:audit\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable audit log streaming with actor and action fields normalized. (2) Ingest nightly seat count from billing API into `github:license_snapshot`. (3) Coordinate dormant seat removal with org owners outside peak release windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype=\"github:audit\" earliest=-60d\n| search action=pull_request OR action=push OR action=issue_comment\n| eval actor=coalesce(actor, user, login)\n| stats dc(actor) as active_contributors_60d by enterprise_slug\n| join type=left enterprise_slug [\n  search index=devops sourcetype=\"github:license_snapshot\" earliest=-1d\n  | stats latest(billed_seats) as billed_seats by enterprise_slug\n]\n| eval seat_util_pct=round(100*active_contributors_60d/nullif(billed_seats,0),1)\n| eval dormant_seats=billed_seats-active_contributors_60d\n| where dormant_seats>10 OR seat_util_pct<70\n| table enterprise_slug, billed_seats, active_contributors_60d, seat_util_pct, dormant_seats\n```\n\nUnderstanding this SPL\n\n**GitHub Enterprise Seat Utilization versus Active Contributors** — GitHub Enterprise bills per occupied seat while many accounts represent bots, service users, or former contractors. Comparing billed seats to users with pushes or reviews in the last sixty days supports deprovisioning and org consolidation before annual renewal.\n\nDocumented **Data sources**: `index=devops` `sourcetype=\"github:audit\"`. **App/TA** (typical add-on context): GitHub Audit Log streaming to Splunk, GitHub Enterprise Server TA (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops; **sourcetype**: github:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, sourcetype=\"github:audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **actor** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by enterprise_slug** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **seat_util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dormant_seats** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where dormant_seats>10 OR seat_util_pct<70` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GitHub Enterprise Seat Utilization versus Active Contributors**): table enterprise_slug, billed_seats, active_contributors_60d, seat_util_pct, dormant_seats\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (seat utilization percent), Table (enterprises with dormant seats), Bar chart (dormant seats by business unit tag).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Cost",
                "Performance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "asterisk",
                "github"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.16",
              "n": "Webex or Zoom Concurrent License Peak versus Subscription",
              "c": "high",
              "f": "intermediate",
              "v": "Meeting platforms often license peak concurrent ports or hosts; a single all-hands can force expensive burst add-ons if baseline is wrong. Tracking measured concurrent peaks against purchased ports prevents both overbuying idle capacity and embarrassing hard caps during executive broadcasts.",
              "t": "Webex Control Hub export, Zoom Operation logs, vendor SCIM usage",
              "d": "`index=collab` `sourcetype=\"meetings:usage\"`",
              "q": "index=collab sourcetype=\"meetings:usage\" earliest=-30d\n| eval concurrent=tonumber(coalesce(concurrent_participants, concurrent_ports, peak_attendees))\n| bin _time span=5m\n| stats max(concurrent) as peak_5m by _time, tenant_id\n| stats max(peak_5m) as month_peak by tenant_id\n| join type=left tenant_id [\n  search index=collab sourcetype=\"meetings:entitlement\" earliest=-1d\n  | stats latest(purchased_concurrent) as purchased by tenant_id\n]\n| eval headroom_pct=round(100*(purchased-month_peak)/nullif(purchased,0),1)\n| where month_peak > purchased*0.85 OR headroom_pct>60\n| table tenant_id, purchased, month_peak, headroom_pct",
              "m": "(1) Ingest five-minute concurrent participant metrics from admin APIs. (2) Store purchased concurrent or port counts in `meetings:entitlement`. (3) Alert when month_peak exceeds eighty-five percent of purchased; review sustained high headroom for downgrade at renewal.",
              "z": "Line chart (five-minute peak trend), Gauge (headroom percent), Table (tenants at risk of cap).",
              "kfp": "Alerts during planned hardware refresh windows, before scheduled capacity adds.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Webex Control Hub export, Zoom Operation logs, vendor SCIM usage.\n• Ensure the following data sources are available: `index=collab` `sourcetype=\"meetings:usage\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest five-minute concurrent participant metrics from admin APIs. (2) Store purchased concurrent or port counts in `meetings:entitlement`. (3) Alert when month_peak exceeds eighty-five percent of purchased; review sustained high headroom for downgrade at renewal.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=collab sourcetype=\"meetings:usage\" earliest=-30d\n| eval concurrent=tonumber(coalesce(concurrent_participants, concurrent_ports, peak_attendees))\n| bin _time span=5m\n| stats max(concurrent) as peak_5m by _time, tenant_id\n| stats max(peak_5m) as month_peak by tenant_id\n| join type=left tenant_id [\n  search index=collab sourcetype=\"meetings:entitlement\" earliest=-1d\n  | stats latest(purchased_concurrent) as purchased by tenant_id\n]\n| eval headroom_pct=round(100*(purchased-month_peak)/nullif(purchased,0),1)\n| where month_peak > purchased*0.85 OR headroom_pct>60\n| table tenant_id, purchased, month_peak, headroom_pct\n```\n\nUnderstanding this SPL\n\n**Webex or Zoom Concurrent License Peak versus Subscription** — Meeting platforms often license peak concurrent ports or hosts; a single all-hands can force expensive burst add-ons if baseline is wrong. Tracking measured concurrent peaks against purchased ports prevents both overbuying idle capacity and embarrassing hard caps during executive broadcasts.\n\nDocumented **Data sources**: `index=collab` `sourcetype=\"meetings:usage\"`. **App/TA** (typical add-on context): Webex Control Hub export, Zoom Operation logs, vendor SCIM usage. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: collab; **sourcetype**: meetings:usage. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=collab, sourcetype=\"meetings:usage\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **concurrent** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, tenant_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` rolls up events into metrics; results are split **by tenant_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **headroom_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where month_peak > purchased*0.85 OR headroom_pct>60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Webex or Zoom Concurrent License Peak versus Subscription**): table tenant_id, purchased, month_peak, headroom_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (five-minute peak trend), Gauge (headroom percent), Table (tenants at risk of cap).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity",
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_webex"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "20.3.17",
              "n": "Citrix Virtual Apps and Desktops Concurrent Session versus License Count",
              "c": "high",
              "f": "intermediate",
              "v": "CVAD licenses are tied to peak concurrent instances or user connections; sustained peaks above entitlement trigger true-up or session denial. Trending peaks against purchased counts informs additional packs, burst cloud burst packs, or rightsizing published apps before renewal.",
              "t": "Citrix Director / Monitor data export, `Splunk Add-on for Citrix`",
              "d": "`index=virtualization` `sourcetype=\"citrix:session\"`",
              "q": "index=virtualization sourcetype=\"citrix:session\" earliest=-30d\n| where session_state=\"Active\" OR session_state=\"Connected\"\n| bin _time span=5m\n| stats dc(session_key) as concurrent_sessions by _time, site_name\n| stats max(concurrent_sessions) as license_peak_30d by site_name\n| lookup citrix_license_entitlement site_name OUTPUT concurrent_license_count\n| eval peak_util_pct=round(100*license_peak_30d/nullif(concurrent_license_count,0),1)\n| where peak_util_pct>85 OR peak_util_pct<40\n| table site_name, concurrent_license_count, license_peak_30d, peak_util_pct",
              "m": "(1) Forward Director OData or Broker session records with stable `session_key`. (2) Maintain `citrix_license_entitlement` from license server or reseller CSV. (3) Plan purchases when peak_util_pct exceeds eighty-five for more than three peak days per month; investigate downsizing when under forty percent.",
              "z": "Area chart (concurrent sessions with license line overlay), Table (sites over or under target), Single value (portfolio peak utilization percent).",
              "kfp": "Spikes from autoscaling events, new workload deployments, batch jobs, or DR drill resources.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Citrix Director / Monitor data export, `Splunk Add-on for Citrix`.\n• Ensure the following data sources are available: `index=virtualization` `sourcetype=\"citrix:session\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Forward Director OData or Broker session records with stable `session_key`. (2) Maintain `citrix_license_entitlement` from license server or reseller CSV. (3) Plan purchases when peak_util_pct exceeds eighty-five for more than three peak days per month; investigate downsizing when under forty percent.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=virtualization sourcetype=\"citrix:session\" earliest=-30d\n| where session_state=\"Active\" OR session_state=\"Connected\"\n| bin _time span=5m\n| stats dc(session_key) as concurrent_sessions by _time, site_name\n| stats max(concurrent_sessions) as license_peak_30d by site_name\n| lookup citrix_license_entitlement site_name OUTPUT concurrent_license_count\n| eval peak_util_pct=round(100*license_peak_30d/nullif(concurrent_license_count,0),1)\n| where peak_util_pct>85 OR peak_util_pct<40\n| table site_name, concurrent_license_count, license_peak_30d, peak_util_pct\n```\n\nUnderstanding this SPL\n\n**Citrix Virtual Apps and Desktops Concurrent Session versus License Count** — CVAD licenses are tied to peak concurrent instances or user connections; sustained peaks above entitlement trigger true-up or session denial. Trending peaks against purchased counts informs additional packs, burst cloud burst packs, or rightsizing published apps before renewal.\n\nDocumented **Data sources**: `index=virtualization` `sourcetype=\"citrix:session\"`. **App/TA** (typical add-on context): Citrix Director / Monitor data export, `Splunk Add-on for Citrix`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: virtualization; **sourcetype**: citrix:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=virtualization, sourcetype=\"citrix:session\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where session_state=\"Active\" OR session_state=\"Connected\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, site_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` rolls up events into metrics; results are split **by site_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **peak_util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where peak_util_pct>85 OR peak_util_pct<40` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Citrix Virtual Apps and Desktops Concurrent Session versus License Count**): table site_name, concurrent_license_count, license_peak_30d, peak_util_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (concurrent sessions with license line overlay), Table (sites over or under target), Single value (portfolio peak utilization percent).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "Cost and capacity tools watch how much we spend on cloud and the data center, and where we have spare or strained capacity.",
              "mtype": [
                "Capacity",
                "Cost"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 17,
            "none": 0
          }
        }
      ],
      "i": 20,
      "n": "Cost & Capacity Management",
      "src": "cat-20-cost-capacity-management.md"
    },
    {
      "s": [
        {
          "i": "21.1",
          "n": "Energy and Utilities",
          "u": [
            {
              "i": "21.1.1",
              "n": "SCADA Alarm Rate Monitoring and Alarm Flooding Detection",
              "c": "critical",
              "f": "intermediate",
              "v": "Alarm storms mask genuine faults and exhaust operator attention; detecting flood rates and shelved alarm backlog prevents missed trips and unsafe operating conditions during grid events.",
              "t": "Splunk OT Intelligence, OT Security Add-on",
              "d": "`index=scada` `sourcetype=\"scada:alarm\"` (alarm_id, priority, shelved flag, substation_id)",
              "q": "index=scada sourcetype=\"scada:alarm\"\n| bin _time span=5m\n| stats count as alarm_count, dc(alarm_id) as distinct_alarms, sum(eval(if(shelved==\"true\" OR shelved==\"1\",1,0))) as shelved_count by substation_id, _time\n| eventstats avg(alarm_count) as baseline_avg, stdev(alarm_count) as baseline_stdev by substation_id\n| eval z_score=if(baseline_stdev>0, (alarm_count-baseline_avg)/baseline_stdev, null)\n| where alarm_count > 50 OR z_score > 3 OR shelved_count > 20\n| table _time, substation_id, alarm_count, distinct_alarms, shelved_count, z_score",
              "m": "Ingest SCADA alarm events via HEC from the EMS or alarm management system; normalize shelved and priority fields in props/transforms. Schedule saved searches for 5-minute windows and route alerts to the control room. Optionally join with OT Intelligence asset context for substation names. **Domain context:** Alarm flooding is a core concern in ISA-18.2 / IEC 62682 alarm management—high alarm rates mask genuine faults and violate “manageable” alarm load targets; track shelved and suppressed alarms explicitly because they still represent operational risk. **Splunk:** Use `scada:alarm` as a custom sourcetype with indexed `substation_id` and `alarm_id`; ensure `_time` is alarm occurrence time (not HEC receipt time). If using Splunk OT Intelligence, map assets so `substation_id` joins to asset hierarchy for drill-downs.",
              "z": "Line chart (alarm rate by substation), Single value (stations in flood state), Area chart (shelved alarm accumulation), Table (top substations by alarm count).",
              "kfp": "Alarm floods during storm restoration, vendor pack updates, NERC or internal exercises, or temporary shelving while alarm-rationalization work is in flight.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Intelligence, OT Security Add-on.\n• Ensure the following data sources are available: `index=scada` `sourcetype=\"scada:alarm\"` (alarm_id, priority, shelved flag, substation_id).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest SCADA alarm events via HEC from the EMS or alarm management system; normalize shelved and priority fields in props/transforms. Schedule saved searches for 5-minute windows and route alerts to the control room. Optionally join with OT Intelligence asset context for substation names. **Domain context:** Alarm flooding is a core concern in ISA-18.2 / IEC 62682 alarm management—high alarm rates mask genuine faults and violate “manageable” alarm load targets; track shelved and suppressed alarm…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=scada sourcetype=\"scada:alarm\"\n| bin _time span=5m\n| stats count as alarm_count, dc(alarm_id) as distinct_alarms, sum(eval(if(shelved==\"true\" OR shelved==\"1\",1,0))) as shelved_count by substation_id, _time\n| eventstats avg(alarm_count) as baseline_avg, stdev(alarm_count) as baseline_stdev by substation_id\n| eval z_score=if(baseline_stdev>0, (alarm_count-baseline_avg)/baseline_stdev, null)\n| where alarm_count > 50 OR z_score > 3 OR shelved_count > 20\n| table _time, substation_id, alarm_count, distinct_alarms, shelved_count, z_score\n```\n\nUnderstanding this SPL\n\n**SCADA Alarm Rate Monitoring and Alarm Flooding Detection** — Alarm storms mask genuine faults and exhaust operator attention; detecting flood rates and shelved alarm backlog prevents missed trips and unsafe operating conditions during grid events.\n\nDocumented **Data sources**: `index=scada` `sourcetype=\"scada:alarm\"` (alarm_id, priority, shelved flag, substation_id). **App/TA** (typical add-on context): Splunk OT Intelligence, OT Security Add-on. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: scada; **sourcetype**: scada:alarm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=scada, sourcetype=\"scada:alarm\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by substation_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by substation_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where alarm_count > 50 OR z_score > 3 OR shelved_count > 20` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SCADA Alarm Rate Monitoring and Alarm Flooding Detection**): table _time, substation_id, alarm_count, distinct_alarms, shelved_count, z_score\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (alarm rate by substation), Single value (stations in flood state), Area chart (shelved alarm accumulation), Table (top substations by alarm count).",
              "script": "",
              "premium": "Splunk OT Intelligence, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch alarm traffic from the grid and substations. We help you notice floods and shelved alarms before real faults get lost in noise.",
              "wv": "crawl",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "21.1.2",
              "n": "Substation RTU Communication Failure",
              "c": "critical",
              "f": "advanced",
              "v": "Silent RTU loss leaves operators blind to field conditions; rapid detection of polling gaps avoids delayed switching decisions and compliance exposure for unmetered assets.",
              "t": "Splunk Edge Hub, Splunk OT Intelligence",
              "d": "`index=scada` `sourcetype=\"scada:rtu\"` (rtu_id, poll_status, response_ms, substation_id)",
              "q": "index=scada sourcetype=\"scada:rtu\"\n| stats latest(_time) as last_poll, latest(poll_status) as last_status, latest(response_ms) as last_ms by rtu_id, substation_id\n| eval age_sec=now()-last_poll\n| where lower(last_status)!=\"ok\" OR last_ms>5000 OR age_sec>600\n| table substation_id, rtu_id, last_status, last_ms, age_sec",
              "m": "Forward RTU poll logs from the SCADA front end or protocol gateway (DNP3/IEC 104) via Edge Hub syslog or file monitor. Map poll success, timeout, and RTU identifiers. Alert when no successful poll is seen for longer than the expected scan period or when consecutive failures exceed threshold. **Domain context:** RTU/IED communication is the field-to-control-room path in IEC 60870-5-104 and IEEE 1815 (DNP3); silent polling loss often precedes visibility gaps during storms or cyber events—compare `age_sec` to your scan table (e.g. 2–10 s for critical RTUs). **Splunk:** Normalize `poll_status` values (OK, TIMEOUT, etc.) to lowercase in transforms; use `latest()` carefully in dashboards—pair with `age_sec` so stale “OK” rows do not hide a dead RTU.",
              "z": "Status grid (RTU × health), Timeline (poll failures), Table (silent RTUs), Line chart (response time trend).",
              "kfp": "RTU gap noise during radio path failovers, clock skew fixes, network ACL changes, or short power transitions the substation log already captured.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub, Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=scada` `sourcetype=\"scada:rtu\"` (rtu_id, poll_status, response_ms, substation_id).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nForward RTU poll logs from the SCADA front end or protocol gateway (DNP3/IEC 104) via Edge Hub syslog or file monitor. Map poll success, timeout, and RTU identifiers. Alert when no successful poll is seen for longer than the expected scan period or when consecutive failures exceed threshold. **Domain context:** RTU/IED communication is the field-to-control-room path in IEC 60870-5-104 and IEEE 1815 (DNP3); silent polling loss often precedes visibility gaps during storms or cyber events—compare `…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=scada sourcetype=\"scada:rtu\"\n| stats latest(_time) as last_poll, latest(poll_status) as last_status, latest(response_ms) as last_ms by rtu_id, substation_id\n| eval age_sec=now()-last_poll\n| where lower(last_status)!=\"ok\" OR last_ms>5000 OR age_sec>600\n| table substation_id, rtu_id, last_status, last_ms, age_sec\n```\n\nUnderstanding this SPL\n\n**Substation RTU Communication Failure** — Silent RTU loss leaves operators blind to field conditions; rapid detection of polling gaps avoids delayed switching decisions and compliance exposure for unmetered assets.\n\nDocumented **Data sources**: `index=scada` `sourcetype=\"scada:rtu\"` (rtu_id, poll_status, response_ms, substation_id). **App/TA** (typical add-on context): Splunk Edge Hub, Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: scada; **sourcetype**: scada:rtu. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=scada, sourcetype=\"scada:rtu\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rtu_id, substation_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **age_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lower(last_status)!=\"ok\" OR last_ms>5000 OR age_sec>600` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Substation RTU Communication Failure**): table substation_id, rtu_id, last_status, last_ms, age_sec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (RTU × health), Timeline (poll failures), Table (silent RTUs), Line chart (response time trend).",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch whether field devices answer on time. We help you catch silent RTU or radio problems before operators lose visibility.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.1.3",
              "n": "Smart Meter Data Gap Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Missing AMI intervals skew billing determinants and load research; finding meters with systematic gaps protects revenue and supports voltage conservation programs.",
              "t": "Custom HEC (MDMS/AMI head-end export)",
              "d": "`index=ami` `sourcetype=\"ami:meter\"` (meter_id, read_timestamp, interval_end, kwh)",
              "q": "index=ami sourcetype=\"ami:meter\"\n| eval read_epoch=strptime(read_timestamp,\"%Y-%m-%d %H:%M:%S\")\n| sort 0 meter_id, read_epoch\n| streamstats current=f window=1 last(read_epoch) as prev_epoch by meter_id\n| eval gap_sec=read_epoch-prev_epoch\n| where isnotnull(gap_sec) AND gap_sec > 900\n| stats count as gap_events, max(gap_sec) as max_gap_sec, avg(gap_sec) as avg_gap_sec by meter_id\n| where gap_events>=3\n| sort - gap_events\n| table meter_id, gap_events, max_gap_sec, avg_gap_sec",
              "m": "Load 15-minute (or hourly) register reads from the MDMS into Splunk via HEC with consistent timestamps. Use a lookup for meter service territory if needed. Schedule daily to flag meters exceeding expected inter-read gap; integrate with work management for field verification. **Domain context:** Utilities often use 15–60 minute interval reads for billing determinants; gaps can indicate meter communication failure, relay issues, or mesh backhaul problems—prioritize revenue-critical meters and regulatory reporting periods. **Splunk:** `strptime` format must match MDMS export exactly; if reads are UTC, document that in props. Consider `summary indexing` for large AMI populations to speed daily gap reports.",
              "z": "Bar chart (gaps per meter), Map (if lat/long in data), Table (worst meters), Single value (meters with gaps %).",
              "kfp": "Meter gaps during mass firmware pushes, truck rolls, weak mesh pockets, or seasonal demand programs the AMI head-end already explains on the work order.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (MDMS/AMI head-end export).\n• Ensure the following data sources are available: `index=ami` `sourcetype=\"ami:meter\"` (meter_id, read_timestamp, interval_end, kwh).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLoad 15-minute (or hourly) register reads from the MDMS into Splunk via HEC with consistent timestamps. Use a lookup for meter service territory if needed. Schedule daily to flag meters exceeding expected inter-read gap; integrate with work management for field verification. **Domain context:** Utilities often use 15–60 minute interval reads for billing determinants; gaps can indicate meter communication failure, relay issues, or mesh backhaul problems—prioritize revenue-critical meters and regu…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ami sourcetype=\"ami:meter\"\n| eval read_epoch=strptime(read_timestamp,\"%Y-%m-%d %H:%M:%S\")\n| sort 0 meter_id, read_epoch\n| streamstats current=f window=1 last(read_epoch) as prev_epoch by meter_id\n| eval gap_sec=read_epoch-prev_epoch\n| where isnotnull(gap_sec) AND gap_sec > 900\n| stats count as gap_events, max(gap_sec) as max_gap_sec, avg(gap_sec) as avg_gap_sec by meter_id\n| where gap_events>=3\n| sort - gap_events\n| table meter_id, gap_events, max_gap_sec, avg_gap_sec\n```\n\nUnderstanding this SPL\n\n**Smart Meter Data Gap Detection** — Missing AMI intervals skew billing determinants and load research; finding meters with systematic gaps protects revenue and supports voltage conservation programs.\n\nDocumented **Data sources**: `index=ami` `sourcetype=\"ami:meter\"` (meter_id, read_timestamp, interval_end, kwh). **App/TA** (typical add-on context): Custom HEC (MDMS/AMI head-end export). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ami; **sourcetype**: ami:meter. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ami, sourcetype=\"ami:meter\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **read_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by meter_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gap_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(gap_sec) AND gap_sec > 900` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by meter_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where gap_events>=3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Smart Meter Data Gap Detection**): table meter_id, gap_events, max_gap_sec, avg_gap_sec\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (gaps per meter), Map (if lat/long in data), Table (worst meters), Single value (meters with gaps %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch smart meter reads from homes and businesses. We help you notice gaps before billing and planning go blind.",
              "mtype": [
                "Availability",
                "Capacity"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [
                "asterisk"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.1.4",
              "n": "Power Quality Event Correlation",
              "c": "high",
              "f": "advanced",
              "v": "Voltage sags, swells, and harmonic distortion drive equipment trips and customer complaints; correlating PQ monitors with SCADA helps prioritize capacitor banks and feeder upgrades.",
              "t": "Splunk OT Intelligence, custom HEC (PQ analyzer)",
              "d": "`index=power` `sourcetype=\"power:quality\"` (site_id, event_type, duration_ms, v_rms, thd_pct)",
              "q": "index=power sourcetype=\"power:quality\"\n| where event_type IN (\"sag\",\"swell\",\"harmonic_limit\")\n| bin _time span=1m\n| stats count as event_count, values(event_type) as types, max(thd_pct) as max_thd, min(v_rms) as min_v, max(v_rms) as max_v by site_id, _time\n| where event_count>=2 OR max_thd>8 OR min_v < 0.9*120 OR max_v > 1.1*120\n| eval severity=case(max_thd>10,\"high\", event_count>=5,\"high\", true(),\"medium\")\n| table _time, site_id, event_count, types, max_thd, min_v, max_v, severity",
              "m": "Stream PQ event records from fixed analyzers or smart relays through HEC; normalize nominal voltage per site via lookup. Use transactions or `stats` by feeder if `feeder_id` is present. Dashboard overlays with SCADA breaker operations for root-cause sessions. **Domain context:** IEEE 1159 / EN 50160 frame categories like sag, swell, and interruption; nominal voltage differs by region (e.g. 120/208 V vs 230/400 V)—the sample thresholds (0.9×/1.1× 120) are illustrative; replace with your nominal voltage lookup. **Splunk:** Index `site_id` and `event_type`; if combining multiple nominal voltages, use `eval nominal_v` from a lookup before comparing `v_rms`.",
              "z": "Timeline (PQ events), Line chart (THD and RMS by site), Heatmap (site × hour event count), Table (severe events).",
              "kfp": "Event bursts during cap switching, large motor inrush, harmonics work, or recloser field tests that planning showed would move PQ indices.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Intelligence, custom HEC (PQ analyzer).\n• Ensure the following data sources are available: `index=power` `sourcetype=\"power:quality\"` (site_id, event_type, duration_ms, v_rms, thd_pct).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStream PQ event records from fixed analyzers or smart relays through HEC; normalize nominal voltage per site via lookup. Use transactions or `stats` by feeder if `feeder_id` is present. Dashboard overlays with SCADA breaker operations for root-cause sessions. **Domain context:** IEEE 1159 / EN 50160 frame categories like sag, swell, and interruption; nominal voltage differs by region (e.g. 120/208 V vs 230/400 V)—the sample thresholds (0.9×/1.1× 120) are illustrative; replace with your nominal v…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=power sourcetype=\"power:quality\"\n| where event_type IN (\"sag\",\"swell\",\"harmonic_limit\")\n| bin _time span=1m\n| stats count as event_count, values(event_type) as types, max(thd_pct) as max_thd, min(v_rms) as min_v, max(v_rms) as max_v by site_id, _time\n| where event_count>=2 OR max_thd>8 OR min_v < 0.9*120 OR max_v > 1.1*120\n| eval severity=case(max_thd>10,\"high\", event_count>=5,\"high\", true(),\"medium\")\n| table _time, site_id, event_count, types, max_thd, min_v, max_v, severity\n```\n\nUnderstanding this SPL\n\n**Power Quality Event Correlation** — Voltage sags, swells, and harmonic distortion drive equipment trips and customer complaints; correlating PQ monitors with SCADA helps prioritize capacitor banks and feeder upgrades.\n\nDocumented **Data sources**: `index=power` `sourcetype=\"power:quality\"` (site_id, event_type, duration_ms, v_rms, thd_pct). **App/TA** (typical add-on context): Splunk OT Intelligence, custom HEC (PQ analyzer). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: power; **sourcetype**: power:quality. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=power, sourcetype=\"power:quality\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where event_type IN (\"sag\",\"swell\",\"harmonic_limit\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by site_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where event_count>=2 OR max_thd>8 OR min_v < 0.9*120 OR max_v > 1.1*120` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Power Quality Event Correlation**): table _time, site_id, event_count, types, max_thd, min_v, max_v, severity\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (PQ events), Line chart (THD and RMS by site), Heatmap (site × hour event count), Table (severe events).",
              "script": "",
              "premium": "Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch voltage sags, swells, and power-quality events. We help you tie symptoms to equipment trips before customers churn.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.1.5",
              "n": "Renewable Generation Forecast vs Actual Deviation",
              "c": "high",
              "f": "advanced",
              "v": "Forecast error increases imbalance costs and reserve deployment; tracking solar and wind deltas supports trading desks and dispatch in markets with renewable penetration.",
              "t": "Splunk OT Intelligence, PI historian HEC",
              "d": "`index=generation` `sourcetype=\"pi:historian\"` (asset_id, mw_actual, mw_forecast, fuel_type)",
              "q": "index=generation sourcetype=\"pi:historian\" fuel_type IN (\"solar\",\"wind\")\n| eval delta_mw=mw_actual-mw_forecast, abs_error_mw=abs(delta_mw), pct_error=if(mw_forecast!=0 AND abs(mw_forecast)>0.01, 100*delta_mw/mw_forecast, null)\n| bin _time span=1h\n| stats avg(mw_actual) as avg_actual, avg(mw_forecast) as avg_fcst, avg(abs_error_mw) as mae_mw, stdev(delta_mw) as delta_stdev by asset_id, fuel_type, _time\n| eval mape_pct=if(avg_fcst!=0, 100*mae_mw/abs(avg_fcst), null)\n| where mape_pct>15 OR mae_mw>5\n| table _time, asset_id, fuel_type, avg_actual, avg_fcst, mae_mw, mape_pct",
              "m": "Ingest both forecast (day-ahead or hour-ahead) and telemetered MW from PI or EMS via HEC with aligned timestamps. Tag fuel type for filtering. Alert when MAPE or absolute error exceeds trading/risk thresholds; feed back to forecasting vendor with Splunk export. **Domain context:** In markets with high renewable penetration, forecast error drives imbalance costs and reserve procurement; align on whether the forecast is system-wide, nodal, or plant-level. **Splunk:** Use `Splunk Add-on for OSIsoft PI` or HEC from PI integrators; if `pi:historian` is custom, ensure `mw_forecast` and `mw_actual` share the same `asset_id` and `_time` grain (e.g. hourly).",
              "z": "Line chart (forecast vs actual), Area chart (error envelope), Bar chart (MAPE by plant), Single value (portfolio forecast error).",
              "kfp": "Forecast error spikes on curtailment days, new site ramp-up, model retunes, or RTO / market data corrections the trading desk reconciled with the ISO view.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Intelligence, PI historian HEC.\n• Ensure the following data sources are available: `index=generation` `sourcetype=\"pi:historian\"` (asset_id, mw_actual, mw_forecast, fuel_type).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest both forecast (day-ahead or hour-ahead) and telemetered MW from PI or EMS via HEC with aligned timestamps. Tag fuel type for filtering. Alert when MAPE or absolute error exceeds trading/risk thresholds; feed back to forecasting vendor with Splunk export. **Domain context:** In markets with high renewable penetration, forecast error drives imbalance costs and reserve procurement; align on whether the forecast is system-wide, nodal, or plant-level. **Splunk:** Use `Splunk Add-on for OSIsoft…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=generation sourcetype=\"pi:historian\" fuel_type IN (\"solar\",\"wind\")\n| eval delta_mw=mw_actual-mw_forecast, abs_error_mw=abs(delta_mw), pct_error=if(mw_forecast!=0 AND abs(mw_forecast)>0.01, 100*delta_mw/mw_forecast, null)\n| bin _time span=1h\n| stats avg(mw_actual) as avg_actual, avg(mw_forecast) as avg_fcst, avg(abs_error_mw) as mae_mw, stdev(delta_mw) as delta_stdev by asset_id, fuel_type, _time\n| eval mape_pct=if(avg_fcst!=0, 100*mae_mw/abs(avg_fcst), null)\n| where mape_pct>15 OR mae_mw>5\n| table _time, asset_id, fuel_type, avg_actual, avg_fcst, mae_mw, mape_pct\n```\n\nUnderstanding this SPL\n\n**Renewable Generation Forecast vs Actual Deviation** — Forecast error increases imbalance costs and reserve deployment; tracking solar and wind deltas supports trading desks and dispatch in markets with renewable penetration.\n\nDocumented **Data sources**: `index=generation` `sourcetype=\"pi:historian\"` (asset_id, mw_actual, mw_forecast, fuel_type). **App/TA** (typical add-on context): Splunk OT Intelligence, PI historian HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: generation; **sourcetype**: pi:historian. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=generation, sourcetype=\"pi:historian\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **delta_mw** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by asset_id, fuel_type, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **mape_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mape_pct>15 OR mae_mw>5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Renewable Generation Forecast vs Actual Deviation**): table _time, asset_id, fuel_type, avg_actual, avg_fcst, mae_mw, mape_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (forecast vs actual), Area chart (error envelope), Bar chart (MAPE by plant), Single value (portfolio forecast error).",
              "script": "",
              "premium": "Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch renewable output against the day-ahead or hour-ahead forecast. We help you catch schedule miss before imbalance and trading costs spike.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.1.6",
              "n": "Distribution Feeder Load Imbalance",
              "c": "medium",
              "f": "intermediate",
              "v": "Phase imbalance causes neutral current, transformer heating, and voltage quality issues; early detection guides switching operations and load redistribution on rural feeders.",
              "t": "Splunk Edge Hub, PI historian HEC",
              "d": "`index=scada` `sourcetype=\"pi:historian\"` (feeder_id, ia_amps, ib_amps, ic_amps)",
              "q": "index=scada sourcetype=\"pi:historian\"\n| eval avg_phase=(ia_amps+ib_amps+ic_amps)/3\n| eval max_i=max(ia_amps, ib_amps, ic_amps), min_i=min(ia_amps, ib_amps, ic_amps)\n| eval imbalance_pct=if(avg_phase>0, 100*(max_i-min_i)/avg_phase, null)\n| where imbalance_pct>10\n| bin _time span=15m\n| stats max(imbalance_pct) as peak_imbalance_pct, latest(ia_amps) as ia, latest(ib_amps) as ib, latest(ic_amps) as ic by feeder_id, _time\n| table _time, feeder_id, peak_imbalance_pct, ia, ib, ic",
              "m": "Ingest per-phase current from DMS or AMI aggregation via historian tags mapped to feeder IDs. Schedule near-real-time checks during peak load. Combine with OMS for customer complaints on the same feeder. Use lookups for feeder voltage class and limits. **Domain context:** Phase imbalance increases neutral current and transformer heating (NESC / utility design practice); thresholds (~10% imbalance) vary by utility—tune from engineering. **Splunk:** If `pi:historian` mixes multiple tag naming schemes, normalize `ia_amps`/`ib_amps`/`ic_amps` in CALC fields or transforms before `eval`.",
              "z": "Line chart (imbalance % over time), Bar chart (feeders over threshold), Gauge (worst feeder imbalance), Table (phase currents).",
              "kfp": "Imbalance blips while switching feeds, reclosing after faults, rebalancing phases, or running planned feeder maintenance the DMS operator signed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub, PI historian HEC.\n• Ensure the following data sources are available: `index=scada` `sourcetype=\"pi:historian\"` (feeder_id, ia_amps, ib_amps, ic_amps).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest per-phase current from DMS or AMI aggregation via historian tags mapped to feeder IDs. Schedule near-real-time checks during peak load. Combine with OMS for customer complaints on the same feeder. Use lookups for feeder voltage class and limits. **Domain context:** Phase imbalance increases neutral current and transformer heating (NESC / utility design practice); thresholds (~10% imbalance) vary by utility—tune from engineering. **Splunk:** If `pi:historian` mixes multiple tag naming sche…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=scada sourcetype=\"pi:historian\"\n| eval avg_phase=(ia_amps+ib_amps+ic_amps)/3\n| eval max_i=max(ia_amps, ib_amps, ic_amps), min_i=min(ia_amps, ib_amps, ic_amps)\n| eval imbalance_pct=if(avg_phase>0, 100*(max_i-min_i)/avg_phase, null)\n| where imbalance_pct>10\n| bin _time span=15m\n| stats max(imbalance_pct) as peak_imbalance_pct, latest(ia_amps) as ia, latest(ib_amps) as ib, latest(ic_amps) as ic by feeder_id, _time\n| table _time, feeder_id, peak_imbalance_pct, ia, ib, ic\n```\n\nUnderstanding this SPL\n\n**Distribution Feeder Load Imbalance** — Phase imbalance causes neutral current, transformer heating, and voltage quality issues; early detection guides switching operations and load redistribution on rural feeders.\n\nDocumented **Data sources**: `index=scada` `sourcetype=\"pi:historian\"` (feeder_id, ia_amps, ib_amps, ic_amps). **App/TA** (typical add-on context): Splunk Edge Hub, PI historian HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: scada; **sourcetype**: pi:historian. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=scada, sourcetype=\"pi:historian\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **avg_phase** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **max_i** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **imbalance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where imbalance_pct>10` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by feeder_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Distribution Feeder Load Imbalance**): table _time, feeder_id, peak_imbalance_pct, ia, ib, ic\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (imbalance % over time), Bar chart (feeders over threshold), Gauge (worst feeder imbalance), Table (phase currents).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch load balance across distribution feeders. We help you catch phase or loading issues before overloads or complaints spread.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.1.7",
              "n": "Transformer Dissolved Gas Analysis (DGA) Trending",
              "c": "critical",
              "f": "advanced",
              "v": "Rising H₂, CH₄, and C₂H₂ indicate insulation breakdown; trending DGA against IEEE/IEC limits prioritizes transformer replacement and avoids in-service failures.",
              "t": "Custom HEC (lab LIMS / asset management)",
              "d": "`index=assets` `sourcetype=\"pi:historian\"` (transformer_id, h2_ppm, ch4_ppm, c2h2_ppm, sample_date)",
              "q": "index=assets sourcetype=\"pi:historian\"\n| eval sample_epoch=if(isnotnull(sample_date), strptime(sample_date,\"%Y-%m-%d\"), _time)\n| sort 0 transformer_id, sample_epoch\n| streamstats current=f window=1 last(ch4_ppm) as prev_ch4 last(h2_ppm) as prev_h2 by transformer_id\n| eval ch4_rate=if(isnotnull(prev_ch4), ch4_ppm-prev_ch4, null)\n| where ch4_ppm>100 OR c2h2_ppm>1 OR h2_ppm>500 OR ch4_rate>50\n| stats latest(ch4_ppm) as ch4, latest(c2h2_ppm) as c2h2, latest(h2_ppm) as h2, latest(ch4_rate) as ch4_delta by transformer_id\n| sort - ch4\n| table transformer_id, ch4, c2h2, h2, ch4_delta",
              "m": "Load lab DGA results via HEC from LIMS or manual CSV ingestion; align `sample_date` with asset IDs. Maintain lookup tables for IEEE/IEC thresholds by oil type if required. Quarterly dashboards for reliability planning; alerts on sudden gas generation rates. **Domain context:** IEEE C57.104 and IEC 60599 guide DGA interpretation; acetylene (`c2h2_ppm`) often indicates arcing, methane/ethane ratios suggest thermal faults—rates of change matter as much as absolute PPM. **Splunk:** Use `streamstats` on sparse lab data; if samples are months apart, consider `delta` over `sample_epoch` rather than only latest values.",
              "z": "Line chart (gas PPM trends per transformer), Table (units exceeding limits), Scatter (CH₄ vs C₂H₂), Single value (transformers in alert).",
              "kfp": "DGA noise after high-load days, long sample turnaround, new sensor break-in, or oil handling during transformer work the reliability team expected.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (lab LIMS / asset management).\n• Ensure the following data sources are available: `index=assets` `sourcetype=\"pi:historian\"` (transformer_id, h2_ppm, ch4_ppm, c2h2_ppm, sample_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nLoad lab DGA results via HEC from LIMS or manual CSV ingestion; align `sample_date` with asset IDs. Maintain lookup tables for IEEE/IEC thresholds by oil type if required. Quarterly dashboards for reliability planning; alerts on sudden gas generation rates. **Domain context:** IEEE C57.104 and IEC 60599 guide DGA interpretation; acetylene (`c2h2_ppm`) often indicates arcing, methane/ethane ratios suggest thermal faults—rates of change matter as much as absolute PPM. **Splunk:** Use `streamstats`…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=assets sourcetype=\"pi:historian\"\n| eval sample_epoch=if(isnotnull(sample_date), strptime(sample_date,\"%Y-%m-%d\"), _time)\n| sort 0 transformer_id, sample_epoch\n| streamstats current=f window=1 last(ch4_ppm) as prev_ch4 last(h2_ppm) as prev_h2 by transformer_id\n| eval ch4_rate=if(isnotnull(prev_ch4), ch4_ppm-prev_ch4, null)\n| where ch4_ppm>100 OR c2h2_ppm>1 OR h2_ppm>500 OR ch4_rate>50\n| stats latest(ch4_ppm) as ch4, latest(c2h2_ppm) as c2h2, latest(h2_ppm) as h2, latest(ch4_rate) as ch4_delta by transformer_id\n| sort - ch4\n| table transformer_id, ch4, c2h2, h2, ch4_delta\n```\n\nUnderstanding this SPL\n\n**Transformer Dissolved Gas Analysis (DGA) Trending** — Rising H₂, CH₄, and C₂H₂ indicate insulation breakdown; trending DGA against IEEE/IEC limits prioritizes transformer replacement and avoids in-service failures.\n\nDocumented **Data sources**: `index=assets` `sourcetype=\"pi:historian\"` (transformer_id, h2_ppm, ch4_ppm, c2h2_ppm, sample_date). **App/TA** (typical add-on context): Custom HEC (lab LIMS / asset management). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: assets; **sourcetype**: pi:historian. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=assets, sourcetype=\"pi:historian\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sample_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats current=f window=1` carries each transformer's previous `ch4_ppm` and `h2_ppm` values onto the current event (**by transformer_id**), so the next `eval` can measure gas change since the prior sample.\n• `eval` defines or adjusts **ch4_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ch4_ppm>100 OR c2h2_ppm>1 OR h2_ppm>500 OR ch4_rate>50` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by transformer_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Transformer Dissolved Gas Analysis (DGA) Trending**): table transformer_id, ch4, c2h2, h2, ch4_delta\n\nThis search runs against raw events only; no data model acceleration or metric index is required for this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (gas PPM trends per transformer), Table (units exceeding limits), Scatter (CH₄ vs C₂H₂), Single value (transformers in alert).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch dissolved gas trends in transformer oil. We help you catch insulation problems before a forced outage or fire risk.",
              "mtype": [
                "Fault",
                "Compliance"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.1.8",
              "n": "Generator Trip Event Correlation",
              "c": "critical",
              "f": "expert",
              "v": "Linking generator trips to relay targets and SCADA analogs shortens root-cause analysis and supports NERC event reporting with defensible timelines.",
              "t": "Splunk OT Intelligence, OT Security Add-on",
              "d": "`index=scada` `sourcetype=\"scada:alarm\"` OR `sourcetype=\"scada:rtu\"` (unit_id, trip, relay_element, mw, hz)",
              "q": "index=scada sourcetype=\"scada:alarm\" (match(_raw,\"(?i)trip|generator\"))\n| bin _time span=1m\n| stats values(relay_element) as relays by unit_id, _time\n| join type=left max=1 unit_id _time [\n    search index=scada sourcetype=\"scada:rtu\"\n    | bin _time span=1m\n    | stats avg(mw) as mw_at_event, avg(hz) as hz_at_event by unit_id, _time\n]\n| eval trip_time=strftime(_time,\"%Y-%m-%d %H:%M:%S\")\n| table unit_id, trip_time, relays, mw_at_event, hz_at_event",
              "m": "Normalize unit and breaker IDs across alarm and RTU streams. Prefer `transaction` or `stats` with `maxspan=120s` on `unit_id` if join cardinality is high. Store NERC reportable fields in summary indexing for audit. Edge Hub can timestamp-align IEC 61850 GOOSE if available. **Domain context:** NERC PRC-005 / regional reporting often requires auditable trip timelines; correlate generator trips with underfrequency, distance relay, or loss-of-field elements where present. **Splunk:** The sample uses `match(_raw,\"(?i)trip|generator\")`—prefer a structured `alarm_type` or `event_code` field from the EMS if available (faster, more accurate). For `join`, add `max=0` or a safe `max` and time-bin both sides to the same `_time` granularity to avoid cardinality blow-ups.",
              "z": "Timeline (trip and relay sequence), Table (correlated events), Line chart (MW and Hz around trip), Sankey (optional cause paths).",
              "kfp": "Trip clustering during black-start tests, protection setting projects, or staged unit rollouts that the plant and transmission planners documented.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Intelligence, OT Security Add-on.\n• Ensure the following data sources are available: `index=scada` `sourcetype=\"scada:alarm\"` OR `sourcetype=\"scada:rtu\"` (unit_id, trip, relay_element, mw, hz).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize unit and breaker IDs across alarm and RTU streams. Prefer `transaction` or `stats` with `maxspan=120s` on `unit_id` if join cardinality is high. Store NERC reportable fields in summary indexing for audit. Edge Hub can timestamp-align IEC 61850 GOOSE if available. **Domain context:** NERC PRC-005 / regional reporting often requires auditable trip timelines; correlate generator trips with underfrequency, distance relay, or loss-of-field elements where present. **Splunk:** The sample uses…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=scada sourcetype=\"scada:alarm\" (match(_raw,\"(?i)trip|generator\"))\n| bin _time span=1m\n| stats values(relay_element) as relays by unit_id, _time\n| join type=left max=1 unit_id _time [\n    search index=scada sourcetype=\"scada:rtu\"\n    | bin _time span=1m\n    | stats avg(mw) as mw_at_event, avg(hz) as hz_at_event by unit_id, _time\n]\n| eval trip_time=strftime(_time,\"%Y-%m-%d %H:%M:%S\")\n| table unit_id, trip_time, relays, mw_at_event, hz_at_event\n```\n\nUnderstanding this SPL\n\n**Generator Trip Event Correlation** — Linking generator trips to relay targets and SCADA analogs shortens root-cause analysis and supports NERC event reporting with defensible timelines.\n\nDocumented **Data sources**: `index=scada` `sourcetype=\"scada:alarm\"` OR `sourcetype=\"scada:rtu\"` (unit_id, trip, relay_element, mw, hz). **App/TA** (typical add-on context): Splunk OT Intelligence, OT Security Add-on. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: scada; **sourcetype**: scada:alarm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=scada, sourcetype=\"scada:alarm\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by unit_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **trip_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Generator Trip Event Correlation**): table unit_id, trip_time, relays, mw_at_event, hz_at_event\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Timeline (trip and relay sequence), Table (correlated events), Line chart (MW and Hz around trip), Sankey (optional cause paths).",
              "script": "",
              "premium": "Splunk OT Intelligence, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch generator trips and nearby grid signals. We help you separate real unit faults from grid or protection noise.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.1.9",
              "n": "Energy Trading Position Reconciliation",
              "c": "critical",
              "f": "advanced",
              "v": "Position mismatches against ISO settlement expose mark-to-market errors and credit risk; automated reconciliation catches booking and tag errors before invoice disputes.",
              "t": "Custom HEC (ETRM / settlement system)",
              "d": "`index=trading` `sourcetype=\"energy:trade\"` (trade_id, product, mw, side, position_internal, position_settlement, trade_date)",
              "q": "index=trading sourcetype=\"energy:trade\"\n| eval diff_mw=abs(position_internal-position_settlement)\n| where diff_mw>0.1\n| stats sum(diff_mw) as total_abs_diff_mw, count as mismatch_trades, values(trade_id) as trade_ids by product, trade_date\n| sort - total_abs_diff_mw\n| table trade_date, product, mismatch_trades, total_abs_diff_mw, trade_ids",
              "m": "Ingest internal ETRM positions and ISO/utility settlement extracts on the same schedule via HEC. Use lookups for product naming alignment. Alert on any non-zero difference above tolerance; restrict dashboards to trading operations roles. **Domain context:** ISO/RTO settlements (e.g. day-ahead/real-time) use defined shadow settlement and true-up timelines—reconciliation tolerances are often contract-specific (MW or $). **Splunk:** Restrict `index=trading` with role-based access; use HEC tokens per environment; never log customer PII in raw trade events.",
              "z": "Table (mismatches by product), Bar chart (diff MW by day), Single value (open reconciliation count), Line chart (reconciliation trend).",
              "kfp": "Reconciliation gaps on RTO true-up days, book transfers, or shadow settlement recalculations the energy trading system audit log still shows as final.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (ETRM / settlement system).\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"energy:trade\"` (trade_id, product, mw, side, position_internal, position_settlement, trade_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest internal ETRM positions and ISO/utility settlement extracts on the same schedule via HEC. Use lookups for product naming alignment. Alert on any non-zero difference above tolerance; restrict dashboards to trading operations roles. **Domain context:** ISO/RTO settlements (e.g. day-ahead/real-time) use defined shadow settlement and true-up timelines—reconciliation tolerances are often contract-specific (MW or $). **Splunk:** Restrict `index=trading` with role-based access; use HEC tokens pe…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"energy:trade\"\n| eval diff_mw=abs(position_internal-position_settlement)\n| where diff_mw>0.1\n| stats sum(diff_mw) as total_abs_diff_mw, count as mismatch_trades, values(trade_id) as trade_ids by product, trade_date\n| sort - total_abs_diff_mw\n| table trade_date, product, mismatch_trades, total_abs_diff_mw, trade_ids\n```\n\nUnderstanding this SPL\n\n**Energy Trading Position Reconciliation** — Position mismatches against ISO settlement expose mark-to-market errors and credit risk; automated reconciliation catches booking and tag errors before invoice disputes.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"energy:trade\"` (trade_id, product, mw, side, position_internal, position_settlement, trade_date). **App/TA** (typical add-on context): Custom HEC (ETRM / settlement system). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: energy:trade. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"energy:trade\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **diff_mw** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where diff_mw>0.1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by product, trade_date** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Energy Trading Position Reconciliation**): table trade_date, product, mismatch_trades, total_abs_diff_mw, trade_ids\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (mismatches by product), Bar chart (diff MW by day), Single value (open reconciliation count), Line chart (reconciliation trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch trading positions against schedules and market data. We help you catch reconciliation breaks before settlement or credit risk grows.",
              "mtype": [
                "Compliance",
                "Change"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.1.10",
              "n": "AMI Mesh Network Health",
              "c": "medium",
              "f": "intermediate",
              "v": "Mesh degradation increases latency and packet loss for critical reads and firmware campaigns; proactive node health reduces truck rolls and extends network life.",
              "t": "Custom HEC (RF mesh head-end)",
              "d": "`index=ami` `sourcetype=\"ami:mesh\"` (node_id, parent_id, rssi_dbm, hop_count, latency_ms, online)",
              "q": "index=ami sourcetype=\"ami:mesh\"\n| where online=\"false\" OR latency_ms>2000 OR hop_count>8 OR rssi_dbm<-115\n| bin _time span=15m\n| stats dc(node_id) as degraded_nodes, avg(latency_ms) as avg_latency, avg(rssi_dbm) as avg_rssi by parent_id, _time\n| where degraded_nodes>=5 OR avg_latency>800\n| table _time, parent_id, degraded_nodes, avg_latency, avg_rssi",
              "m": "Export mesh diagnostics from the RF network manager via scheduled JSON to HEC. Baseline RSSI and hop count per region. Alert on concentration of offline nodes under a collector; map to GIS if coordinates are loaded as lookup. **Domain context:** AMI mesh (e.g. RF mesh under ANSI C12) degrades with foliage, new construction, and concentrator load—degraded `parent_id` clusters often indicate backhaul or relay stress. **Splunk:** Cast `online`, `latency_ms`, and `rssi_dbm` to consistent types at index time; string `\"false\"` vs boolean breaks `where` clauses.",
              "z": "Network graph (optional with app), Table (degraded parents), Heatmap (hour × region latency), Line chart (average hops).",
              "kfp": "Mesh or concentrator issues during head-end cutovers, truck rolls, weak RF pockets, or mass key rotation that the metering group scheduled.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (RF mesh head-end).\n• Ensure the following data sources are available: `index=ami` `sourcetype=\"ami:mesh\"` (node_id, parent_id, rssi_dbm, hop_count, latency_ms, online).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport mesh diagnostics from the RF network manager via scheduled JSON to HEC. Baseline RSSI and hop count per region. Alert on concentration of offline nodes under a collector; map to GIS if coordinates are loaded as lookup. **Domain context:** AMI mesh (e.g. RF mesh under ANSI C12) degrades with foliage, new construction, and concentrator load—degraded `parent_id` clusters often indicate backhaul or relay stress. **Splunk:** Cast `online`, `latency_ms`, and `rssi_dbm` to consistent types at in…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ami sourcetype=\"ami:mesh\"\n| where online=\"false\" OR latency_ms>2000 OR hop_count>8 OR rssi_dbm<-115\n| bin _time span=15m\n| stats dc(node_id) as degraded_nodes, avg(latency_ms) as avg_latency, avg(rssi_dbm) as avg_rssi by parent_id, _time\n| where degraded_nodes>=5 OR avg_latency>800\n| table _time, parent_id, degraded_nodes, avg_latency, avg_rssi\n```\n\nUnderstanding this SPL\n\n**AMI Mesh Network Health** — Mesh degradation increases latency and packet loss for critical reads and firmware campaigns; proactive node health reduces truck rolls and extends network life.\n\nDocumented **Data sources**: `index=ami` `sourcetype=\"ami:mesh\"` (node_id, parent_id, rssi_dbm, hop_count, latency_ms, online). **App/TA** (typical add-on context): Custom HEC (RF mesh head-end). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ami; **sourcetype**: ami:mesh. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ami, sourcetype=\"ami:mesh\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where online=\"false\" OR latency_ms>2000 OR hop_count>8 OR rssi_dbm<-115` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by parent_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where degraded_nodes>=5 OR avg_latency>800` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **AMI Mesh Network Health**): table _time, parent_id, degraded_nodes, avg_latency, avg_rssi\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Network graph (optional with app), Table (degraded parents), Heatmap (hour × region latency), Line chart (average hops).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch the AMI mesh and repeater health. We help you find weak links before meters stop talking in bulk.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.1.11",
              "n": "Demand Response Event Compliance Verification",
              "c": "high",
              "f": "advanced",
              "v": "Program penalties apply when committed load reductions are not achieved; verifying kW response against baselines protects program revenue and customer satisfaction.",
              "t": "Custom HEC (DRMS / DERMS)",
              "d": "`index=dr` `sourcetype=\"dr:event\"` (program_id, site_id, event_start, event_end, baseline_kw, actual_kw)",
              "q": "index=dr sourcetype=\"dr:event\"\n| eval achieved_reduction_kw=baseline_kw-actual_kw\n| eval compliance_pct=if(baseline_kw>0, 100*achieved_reduction_kw/baseline_kw, null)\n| where achieved_reduction_kw < 0 OR compliance_pct < 80\n| stats min(compliance_pct) as min_compliance_pct, avg(achieved_reduction_kw) as avg_reduction_kw by program_id, site_id\n| table program_id, site_id, min_compliance_pct, avg_reduction_kw",
              "m": "Ingest DR event windows and interval load from MDMS or AMI aggregated by site. Align timestamps to local timezone of the program. Use summary indexing for monthly compliance scorecards. HEC from DRMS for event definitions is preferred over manual CSV. **Domain context:** OpenADR and utility DR programs define baseline methodologies (e.g. 10-in-10, day-matching); penalties apply when committed kW reductions are not met—document the baseline rule in Splunk lookups. **Splunk:** Store `event_start`/`event_end` in the program’s local TZ or UTC consistently; use `strftime`/`strptime` tests in QA before alerting.",
              "z": "Bar chart (compliance % by program), Table (non-compliant sites), Line chart (load during event), Gauge (portfolio compliance).",
              "kfp": "Event-count variance during test dispatches, customer opt-outs, partial enrollments, or market-operator drill days that the demand-response desk approved.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (DRMS / DERMS).\n• Ensure the following data sources are available: `index=dr` `sourcetype=\"dr:event\"` (program_id, site_id, event_start, event_end, baseline_kw, actual_kw).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest DR event windows and interval load from MDMS or AMI aggregated by site. Align timestamps to local timezone of the program. Use summary indexing for monthly compliance scorecards. HEC from DRMS for event definitions is preferred over manual CSV. **Domain context:** OpenADR and utility DR programs define baseline methodologies (e.g. 10-in-10, day-matching); penalties apply when committed kW reductions are not met—document the baseline rule in Splunk lookups. **Splunk:** Store `event_start`/…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dr sourcetype=\"dr:event\"\n| eval achieved_reduction_kw=baseline_kw-actual_kw\n| eval compliance_pct=if(baseline_kw>0, 100*achieved_reduction_kw/baseline_kw, null)\n| where achieved_reduction_kw < 0 OR compliance_pct < 80\n| stats min(compliance_pct) as min_compliance_pct, avg(achieved_reduction_kw) as avg_reduction_kw by program_id, site_id\n| table program_id, site_id, min_compliance_pct, avg_reduction_kw\n```\n\nUnderstanding this SPL\n\n**Demand Response Event Compliance Verification** — Program penalties apply when committed load reductions are not achieved; verifying kW response against baselines protects program revenue and customer satisfaction.\n\nDocumented **Data sources**: `index=dr` `sourcetype=\"dr:event\"` (program_id, site_id, event_start, event_end, baseline_kw, actual_kw). **App/TA** (typical add-on context): Custom HEC (DRMS / DERMS). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dr; **sourcetype**: dr:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dr, sourcetype=\"dr:event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **achieved_reduction_kw** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **compliance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where achieved_reduction_kw < 0 OR compliance_pct < 80` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by program_id, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Demand Response Event Compliance Verification**): table program_id, site_id, min_compliance_pct, avg_reduction_kw\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (compliance % by program), Table (non-compliant sites), Line chart (load during event), Gauge (portfolio compliance).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch demand-response enrollments and event performance. We help you prove compliance when programs or regulators ask.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.1.12",
              "n": "Outage Management System vs SCADA State Correlation",
              "c": "critical",
              "f": "expert",
              "v": "Disconnected OMS tickets and energized SCADA devices delay restoration and confuse customers; alignment checks improve switching safety and SAIDI/SAIFI reporting quality.",
              "t": "Splunk OT Intelligence, custom HEC (OMS)",
              "d": "`index=oms` `sourcetype=\"oms:outage\"` (device_id, status), `index=scada` `sourcetype=\"scada:alarm\"` or `sourcetype=\"scada:rtu\"` (device_id, breaker_state)",
              "q": "index=oms sourcetype=\"oms:outage\" status=\"open\"\n| fields device_id, _time, outage_id\n| join type=inner max=1 device_id [\n    search index=scada (sourcetype=\"scada:rtu\" OR sourcetype=\"scada:alarm\")\n    | eval breaker_state=coalesce(breaker_state, device_state)\n    | where lower(breaker_state) IN (\"closed\",\"energized\",\"1\")\n    | stats latest(breaker_state) as scada_state by device_id\n]\n| table outage_id, device_id, scada_state",
              "m": "Maintain a canonical `device_id` lookup between OMS and SCADA naming. Schedule frequent correlation; handle multiphase devices with MVLV mapping table. Alert ETRM/OMS when mismatches exceed zero for active outages. Use Edge Hub only for SCADA side if needed. **Domain context:** OMS–DMS–SCADA alignment is central to safe switching and accurate SAIDI/SAIFI; energized devices during “open” outages are a safety and customer-trust issue. **Splunk:** `join type=inner` only shows matches—add a separate search for OMS open outages with *no* SCADA row (left anti-join pattern) to catch missing SCADA data. Ensure `breaker_state` enumerations match across sources.",
              "z": "Table (mismatched devices), Single value (mismatch count), Timeline (OMS vs SCADA changes), Map (optional).",
              "kfp": "State mismatch right after OMS cutovers, mapping refreshes, storm playbooks, or training simulations the control room ran as a planned exercise.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Intelligence, custom HEC (OMS).\n• Ensure the following data sources are available: `index=oms` `sourcetype=\"oms:outage\"` (device_id, status), `index=scada` `sourcetype=\"scada:alarm\"` or `sourcetype=\"scada:rtu\"` (device_id, breaker_state).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMaintain a canonical `device_id` lookup between OMS and SCADA naming. Schedule frequent correlation; handle multiphase devices with MVLV mapping table. Alert ETRM/OMS when mismatches exceed zero for active outages. Use Edge Hub only for SCADA side if needed. **Domain context:** OMS–DMS–SCADA alignment is central to safe switching and accurate SAIDI/SAIFI; energized devices during “open” outages are a safety and customer-trust issue. **Splunk:** `join type=inner` only shows matches—add a separate…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=oms sourcetype=\"oms:outage\" status=\"open\"\n| fields device_id, _time, outage_id\n| join type=inner max=1 device_id [\n    search index=scada (sourcetype=\"scada:rtu\" OR sourcetype=\"scada:alarm\")\n    | eval breaker_state=coalesce(breaker_state, device_state)\n    | where lower(breaker_state) IN (\"closed\",\"energized\",\"1\")\n    | stats latest(breaker_state) as scada_state by device_id\n]\n| table outage_id, device_id, scada_state\n```\n\nUnderstanding this SPL\n\n**Outage Management System vs SCADA State Correlation** — Disconnected OMS tickets and energized SCADA devices delay restoration and confuse customers; alignment checks improve switching safety and SAIDI/SAIFI reporting quality.\n\nDocumented **Data sources**: `index=oms` `sourcetype=\"oms:outage\"` (device_id, status), `index=scada` `sourcetype=\"scada:alarm\"` or `sourcetype=\"scada:rtu\"` (device_id, breaker_state). **App/TA** (typical add-on context): Splunk OT Intelligence, custom HEC (OMS). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: oms; **sourcetype**: oms:outage. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=oms, sourcetype=\"oms:outage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Outage Management System vs SCADA State Correlation**): table outage_id, device_id, scada_state\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (mismatched devices), Single value (mismatch count), Timeline (OMS vs SCADA changes), Map (optional).",
              "script": "",
              "premium": "Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch whether outage tickets match live SCADA. We help you catch wrong customer comms before trust erodes.",
              "mtype": [
                "Fault",
                "Change"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.1.13",
              "n": "Vegetation Management Work Order Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Unexecuted clearance work correlates with repeat outages; tying work orders to feeder outage history prioritizes trims and documents regulatory readiness.",
              "t": "Custom HEC (GIS / work management)",
              "d": "`index=oms` `sourcetype=\"oms:outage\"` (feeder_id, outage_cause, work_order_id, work_type, completed_date)",
              "q": "index=oms sourcetype=\"oms:outage\"\n| where work_type=\"vegetation\" OR match(lower(outage_cause),\"veg|tree|limb\")\n| eval completed=if(isnotnull(completed_date) AND completed_date!=\"\",1,0)\n| stats count as related_events, sum(completed) as completed_orders, dc(feeder_id) as feeders_affected by work_order_id\n| eval completion_rate=if(related_events>0, 100*completed_orders/related_events, 0)\n| where completed_orders < related_events\n| table work_order_id, related_events, completed_orders, feeders_affected, completion_rate",
              "m": "Ingest vegetation work orders and outage records with shared feeder and span identifiers via HEC from the enterprise work system. Use lookups for span-to-feeder if needed. Monthly reporting for regulatory vegetation cycles; optional join with `fleet:gps` for crew verification. **Domain context:** Many jurisdictions require traceable vegetation clearance cycles near conductors; repeat vegetation-caused outages on uncleared spans are regulatory and wildfire risk signals (where applicable). **Splunk:** `match(lower(outage_cause),\"veg|tree|limb\")` complements structured `work_type`; tune regex for your OMS vocabulary.",
              "z": "Table (open vegetation orders), Bar chart (outages vs completed trims by feeder), Timeline (work order lifecycle), Map (feeder segments).",
              "kfp": "Backlog or cycle-time noise during season trimming surges, contractor mobilization, easement disputes, or new feeder construction that the vegetation program lead tracks separately.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (GIS / work management).\n• Ensure the following data sources are available: `index=oms` `sourcetype=\"oms:outage\"` (feeder_id, outage_cause, work_order_id, work_type, completed_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest vegetation work orders and outage records with shared feeder and span identifiers via HEC from the enterprise work system. Use lookups for span-to-feeder if needed. Monthly reporting for regulatory vegetation cycles; optional join with `fleet:gps` for crew verification. **Domain context:** Many jurisdictions require traceable vegetation clearance cycles near conductors; repeat vegetation-caused outages on uncleared spans are regulatory and wildfire risk signals (where applicable). **Splun…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=oms sourcetype=\"oms:outage\"\n| where work_type=\"vegetation\" OR match(lower(outage_cause),\"veg|tree|limb\")\n| eval completed=if(isnotnull(completed_date) AND completed_date!=\"\",1,0)\n| stats count as related_events, sum(completed) as completed_orders, dc(feeder_id) as feeders_affected by work_order_id\n| eval completion_rate=if(related_events>0, 100*completed_orders/related_events, 0)\n| where completed_orders < related_events\n| table work_order_id, related_events, completed_orders, feeders_affected, completion_rate\n```\n\nUnderstanding this SPL\n\n**Vegetation Management Work Order Tracking** — Unexecuted clearance work correlates with repeat outages; tying work orders to feeder outage history prioritizes trims and documents regulatory readiness.\n\nDocumented **Data sources**: `index=oms` `sourcetype=\"oms:outage\"` (feeder_id, outage_cause, work_order_id, work_type, completed_date). **App/TA** (typical add-on context): Custom HEC (GIS / work management). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: oms; **sourcetype**: oms:outage. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=oms, sourcetype=\"oms:outage\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where work_type=\"vegetation\" OR match(lower(outage_cause),\"veg|tree|limb\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **completed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by work_order_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **completion_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where completed_orders < related_events` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Vegetation Management Work Order Tracking**): table work_order_id, related_events, completed_orders, feeders_affected, completion_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (open vegetation orders), Bar chart (outages vs completed trims by feeder), Timeline (work order lifecycle), Map (feeder segments).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch vegetation work near power lines. We help you see backlog before grow-in threatens reliability.",
              "mtype": [
                "Compliance",
                "Fault"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.1.14",
              "n": "Utility Fleet GPS and Dispatch Optimization",
              "c": "high",
              "f": "intermediate",
              "v": "During storms, knowing crew proximity to open tickets reduces travel time and improves ETR accuracy for public communications and mutual aid billing.",
              "t": "Custom HEC (AVL / dispatch)",
              "d": "`index=fleet` `sourcetype=\"fleet:gps\"` (vehicle_id, crew_id, lat, lon, speed_mph, ticket_id)",
              "q": "index=fleet sourcetype=\"fleet:gps\"\n| eval gps_missing=if(isnull(lat) OR isnull(lon),1,0), speed_anomaly=if(speed_mph>85,1,0)\n| where gps_missing=1 OR speed_anomaly=1\n| bin _time span=1h\n| stats sum(gps_missing) as missing_reports, sum(speed_anomaly) as speed_flags by vehicle_id, crew_id, _time\n| eval issues=missing_reports+speed_flags\n| where issues>=3\n| table _time, vehicle_id, crew_id, missing_reports, speed_flags, issues",
              "m": "Stream AVL points from telematics vendor to HEC with API key authentication. Join to open OMS tickets in a separate search or data model for dispatch boards. Alert on GPS gaps during active storm windows. Respect privacy policies for off-duty masking if required. **Domain context:** Mutual assistance and storm response metrics (crew proximity, ETR) are increasingly scrutinized after major events—GPS gaps during restoration can indicate dead zones or device issues. **Splunk:** HEC ACK settings and token rotation for telematics APIs; hash or role-mask `crew_id` on shared dashboards if required by bargaining agreements.",
              "z": "Map (vehicle positions), Table (crews with stale GPS), Line chart (fleet utilization), Bar chart (response time by district).",
              "kfp": "Route or idle-time spikes during storms, large mutual-aid events, detours for roadwork, or vehicle swaps that fleet and dispatch already aligned on the board.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (AVL / dispatch).\n• Ensure the following data sources are available: `index=fleet` `sourcetype=\"fleet:gps\"` (vehicle_id, crew_id, lat, lon, speed_mph, ticket_id).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStream AVL points from telematics vendor to HEC with API key authentication. Join to open OMS tickets in a separate search or data model for dispatch boards. Alert on GPS gaps during active storm windows. Respect privacy policies for off-duty masking if required. **Domain context:** Mutual assistance and storm response metrics (crew proximity, ETR) are increasingly scrutinized after major events—GPS gaps during restoration can indicate dead zones or device issues. **Splunk:** HEC ACK settings an…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=fleet sourcetype=\"fleet:gps\"\n| eval gps_missing=if(isnull(lat) OR isnull(lon),1,0), speed_anomaly=if(speed_mph>85,1,0)\n| where gps_missing=1 OR speed_anomaly=1\n| bin _time span=1h\n| stats sum(gps_missing) as missing_reports, sum(speed_anomaly) as speed_flags by vehicle_id, crew_id, _time\n| eval issues=missing_reports+speed_flags\n| where issues>=3\n| table _time, vehicle_id, crew_id, missing_reports, speed_flags, issues\n```\n\nUnderstanding this SPL\n\n**Utility Fleet GPS and Dispatch Optimization** — During storms, knowing crew proximity to open tickets reduces travel time and improves ETR accuracy for public communications and mutual aid billing.\n\nDocumented **Data sources**: `index=fleet` `sourcetype=\"fleet:gps\"` (vehicle_id, crew_id, lat, lon, speed_mph, ticket_id). **App/TA** (typical add-on context): Custom HEC (AVL / dispatch). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: fleet; **sourcetype**: fleet:gps. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=fleet, sourcetype=\"fleet:gps\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **gps_missing** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gps_missing=1 OR speed_anomaly=1` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by vehicle_id, crew_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **issues** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where issues>=3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Utility Fleet GPS and Dispatch Optimization**): table _time, vehicle_id, crew_id, missing_reports, speed_flags, issues\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (vehicle positions), Table (crews with stale GPS), Line chart (fleet utilization), Bar chart (response time by district).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch utility trucks and crew routes. We help you catch long silent GPS gaps or odd routes before service windows slip.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.1.15",
              "n": "Customer Billing Exception Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Estimated reads and bill spikes drive complaints and regulatory inquiries; catching exceptions before invoice release protects customer trust and reduces rework.",
              "t": "Custom HEC (CIS/billing)",
              "d": "`index=billing` `sourcetype=\"billing:exception\"` (account_id, bill_cycle, read_type, variance_pct, amount_due, flag_estimated)",
              "q": "index=billing sourcetype=\"billing:exception\"\n| eval estimated=if(lower(flag_estimated) IN (\"true\",\"1\",\"y\"),1,0)\n| where estimated=1 OR abs(variance_pct)>30 OR amount_due>10000\n| stats count as exception_count, sum(amount_due) as total_at_risk by bill_cycle, read_type\n| sort - exception_count\n| table bill_cycle, read_type, exception_count, total_at_risk",
              "m": "CIS exports exception flags and variance versus prior period to Splunk nightly via HEC. Join AMI gap detection (UC-21.1.3) for upstream cause analysis. Route alerts to billing operations before print/mail. Mask PII in dashboards using Splunk field filters or role-based search filters. **Domain context:** Estimated reads and exceptional bills are common PUC complaint drivers; many regulators require investigation SLAs for high-bill disputes. **Splunk:** Do not expose full `account_id` in shared panels—use tokenization or last-4; confirm `variance_pct` uses the same billing period as CIS.",
              "z": "Bar chart (exceptions by cycle), Table (top accounts — masked), Line chart (estimated read rate trend), Single value (exceptions pending review).",
              "kfp": "Billing exception spikes on estimated-read cycles, rate changes, meter test failures, or payment gateway maintenance that the customer operations center announced.",
              "refs": "[Splunk OT Intelligence](https://splunkbase.splunk.com/app/5180)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (CIS/billing).\n• Ensure the following data sources are available: `index=billing` `sourcetype=\"billing:exception\"` (account_id, bill_cycle, read_type, variance_pct, amount_due, flag_estimated).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCIS exports exception flags and variance versus prior period to Splunk nightly via HEC. Join AMI gap detection (UC-21.1.3) for upstream cause analysis. Route alerts to billing operations before print/mail. Mask PII in dashboards using Splunk field filters or role-based search filters. **Domain context:** Estimated reads and exceptional bills are common PUC complaint drivers; many regulators require investigation SLAs for high-bill disputes. **Splunk:** Do not expose full `account_id` in shared p…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=billing sourcetype=\"billing:exception\"\n| eval estimated=if(lower(flag_estimated) IN (\"true\",\"1\",\"y\"),1,0)\n| where estimated=1 OR abs(variance_pct)>30 OR amount_due>10000\n| stats count as exception_count, sum(amount_due) as total_at_risk by bill_cycle, read_type\n| sort - exception_count\n| table bill_cycle, read_type, exception_count, total_at_risk\n```\n\nUnderstanding this SPL\n\n**Customer Billing Exception Monitoring** — Estimated reads and bill spikes drive complaints and regulatory inquiries; catching exceptions before invoice release protects customer trust and reduces rework.\n\nDocumented **Data sources**: `index=billing` `sourcetype=\"billing:exception\"` (account_id, bill_cycle, read_type, variance_pct, amount_due, flag_estimated). **App/TA** (typical add-on context): Custom HEC (CIS/billing). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: billing; **sourcetype**: billing:exception. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=billing, sourcetype=\"billing:exception\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **estimated** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where estimated=1 OR abs(variance_pct)>30 OR amount_due>10000` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by bill_cycle, read_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Customer Billing Exception Monitoring**): table bill_cycle, read_type, exception_count, total_at_risk\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (exceptions by cycle), Table (top accounts — masked), Line chart (estimated read rate trend), Single value (exceptions pending review).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch billing exceptions and odd usage patterns. We help you catch mistakes before wrong bills go out the door.",
              "mtype": [
                "Compliance",
                "Fault"
              ],
              "ind": "Energy and Utilities",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.0,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 14,
            "none": 0
          }
        },
        {
          "i": "21.2",
          "n": "Manufacturing and Process Industry",
          "u": [
            {
              "i": "21.2.1",
              "n": "Overall Equipment Effectiveness (OEE) Calculation",
              "c": "high",
              "f": "advanced",
              "v": "OEE exposes hidden capacity losses across availability, speed, and quality; plant leadership uses it to prioritize capital and lean projects on the constraint line.",
              "t": "Splunk OT Intelligence, Splunk Edge Hub",
              "d": "`index=mfg` `sourcetype=\"mes:production\"` (line_id, planned_time_min, run_time_min, ideal_cycle_sec, units_good, units_total)",
              "q": "index=mfg sourcetype=\"mes:production\"\n| eval availability=if(planned_time_min>0, run_time_min/planned_time_min, null)\n| eval performance=if(run_time_min>0 AND ideal_cycle_sec>0 AND units_total>0, (units_total*ideal_cycle_sec/60)/run_time_min, null)\n| eval quality=if(units_total>0, units_good/units_total, null)\n| eval oee=availability*performance*quality\n| bin _time span=1h\n| stats avg(availability) as avg_a, avg(performance) as avg_p, avg(quality) as avg_q, avg(oee) as avg_oee by line_id, _time\n| where avg_oee < 0.65 OR avg_a < 0.9 OR avg_q < 0.95\n| table _time, line_id, avg_a, avg_p, avg_q, avg_oee",
              "m": "Ingest MES counters from each line via HEC; validate ideal cycle time from routing master data lookup. Edge Hub can supply machine states if MES gaps exist. Schedule hourly OEE and daily rollups for plant reviews. **Domain context:** OEE = Availability × Performance × Quality (ISO 22400 / TPM); world-class plants often target ~85% OEE on bottlenecks—thresholds in the SPL are illustrative. **Splunk:** Ensure `planned_time_min` and `run_time_min` share the same accounting period; exclude changeover if your MES encodes it separately or OEE will be biased.",
              "z": "Line chart (OEE trend), Breakdown bar (A, P, Q), Table (line ranking), Single value (plant OEE).",
              "kfp": "OEE drops during planned changeover, recipe transitions, scheduled maintenance, new product runs, or sensor calibration the line lead logged in the MES.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Intelligence, Splunk Edge Hub.\n• Ensure the following data sources are available: `index=mfg` `sourcetype=\"mes:production\"` (line_id, planned_time_min, run_time_min, ideal_cycle_sec, units_good, units_total).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest MES counters from each line via HEC; validate ideal cycle time from routing master data lookup. Edge Hub can supply machine states if MES gaps exist. Schedule hourly OEE and daily rollups for plant reviews. **Domain context:** OEE = Availability × Performance × Quality (ISO 22400 / TPM); world-class plants often target ~85% OEE on bottlenecks—thresholds in the SPL are illustrative. **Splunk:** Ensure `planned_time_min` and `run_time_min` share the same accounting period; exclude changeove…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mfg sourcetype=\"mes:production\"\n| eval availability=if(planned_time_min>0, run_time_min/planned_time_min, null)\n| eval performance=if(run_time_min>0 AND ideal_cycle_sec>0 AND units_total>0, (units_total*ideal_cycle_sec/60)/run_time_min, null)\n| eval quality=if(units_total>0, units_good/units_total, null)\n| eval oee=availability*performance*quality\n| bin _time span=1h\n| stats avg(availability) as avg_a, avg(performance) as avg_p, avg(quality) as avg_q, avg(oee) as avg_oee by line_id, _time\n| where avg_oee < 0.65 OR avg_a < 0.9 OR avg_q < 0.95\n| table _time, line_id, avg_a, avg_p, avg_q, avg_oee\n```\n\nUnderstanding this SPL\n\n**Overall Equipment Effectiveness (OEE) Calculation** — OEE exposes hidden capacity losses across availability, speed, and quality; plant leadership uses it to prioritize capital and lean projects on the constraint line.\n\nDocumented **Data sources**: `index=mfg` `sourcetype=\"mes:production\"` (line_id, planned_time_min, run_time_min, ideal_cycle_sec, units_good, units_total). **App/TA** (typical add-on context): Splunk OT Intelligence, Splunk Edge Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mfg; **sourcetype**: mes:production. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mfg, sourcetype=\"mes:production\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **availability** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **performance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **quality** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **oee** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by line_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_oee < 0.65 OR avg_a < 0.9 OR avg_q < 0.95` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Overall Equipment Effectiveness (OEE) Calculation**): table _time, line_id, avg_a, avg_p, avg_q, avg_oee\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (OEE trend), Breakdown bar (A, P, Q), Table (line ranking), Single value (plant OEE).",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch line speed, quality, and uptime together. We help you see where the plant loses capacity before orders miss the schedule.",
              "wv": "crawl",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "21.2.2",
              "n": "Unplanned Downtime Root Cause Correlation",
              "c": "critical",
              "f": "expert",
              "v": "Shortening mean time to repair for unplanned stops protects customer OTIF; correlating alarms, work orders, and environmental data speeds RCA across shifts.",
              "t": "Splunk OT Intelligence, CMMS integration",
              "d": "`index=mfg` `sourcetype=\"mes:production\"` (line_id, state), `index=mfg` `sourcetype=\"cmms:workorder\"` (asset_id, wo_id, priority), `index=ot` `sourcetype=\"opc:tag\"` (asset_id, alarm_text)",
              "q": "(index=mfg sourcetype=\"mes:production\" state=\"down\") OR (index=mfg sourcetype=\"cmms:workorder\" priority=\"emergency\") OR (index=ot sourcetype=\"opc:tag\")\n| eval key=coalesce(line_id, asset_id)\n| where sourcetype!=\"opc:tag\" OR match(alarm_text, \"(?i)fault|alarm|trip\")\n| transaction key maxspan=30m maxevents=50\n| eval duration_sec=duration\n| where duration_sec>60\n| table _time, key, duration_sec, state, wo_id, alarm_text",
              "m": "Normalize asset and line keys across MES, CMMS, and OPC-UA. Use `transaction` or `stats` with `maxspan` tuned to line stop characteristics. Ingest OPC alarms via Edge Hub. Store exemplar searches in OT Intelligence workbench for engineers. **Domain context:** ISA-95 and ISO 22400 frame downtime analytics; RCA typically sequences MES state → CMMS work order → OT alarm text—tune `maxspan` to your line’s stop/start cadence. **Splunk:** The base search must not put `match()` inside the initial boolean (invalid); filter `opc:tag` rows with `where` after the first line. If `transaction` is heavy, use `stats` + `maxspan` windowing instead.",
              "z": "Timeline (downtime episodes), Table (correlated WO and alarms), Sankey (cause categories), Bar chart (duration by line).",
              "kfp": "Downtime clusters during changeovers, material starvation, quality holds, or planned PLC stops that the shift report and CMMS work orders already show as expected.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Intelligence, CMMS integration.\n• Ensure the following data sources are available: `index=mfg` `sourcetype=\"mes:production\"` (line_id, state), `index=mfg` `sourcetype=\"cmms:workorder\"` (asset_id, wo_id, priority), `index=ot` `sourcetype=\"opc:tag\"` (asset_id, alarm_text).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize asset and line keys across MES, CMMS, and OPC-UA. Use `transaction` or `stats` with `maxspan` tuned to line stop characteristics. Ingest OPC alarms via Edge Hub. Store exemplar searches in OT Intelligence workbench for engineers. **Domain context:** ISA-95 and ISO 22400 frame downtime analytics; RCA typically sequences MES state → CMMS work order → OT alarm text—tune `maxspan` to your line’s stop/start cadence. **Splunk:** The base search must not put `match()` inside the initial boole…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=mfg sourcetype=\"mes:production\" state=\"down\") OR (index=mfg sourcetype=\"cmms:workorder\" priority=\"emergency\") OR (index=ot sourcetype=\"opc:tag\")\n| eval key=coalesce(line_id, asset_id)\n| where sourcetype!=\"opc:tag\" OR match(alarm_text, \"(?i)fault|alarm|trip\")\n| transaction key maxspan=30m maxevents=50\n| eval duration_sec=duration\n| where duration_sec>60\n| table _time, key, duration_sec, state, wo_id, alarm_text\n```\n\nUnderstanding this SPL\n\n**Unplanned Downtime Root Cause Correlation** — Shortening mean time to repair for unplanned stops protects customer OTIF; correlating alarms, work orders, and environmental data speeds RCA across shifts.\n\nDocumented **Data sources**: `index=mfg` `sourcetype=\"mes:production\"` (line_id, state), `index=mfg` `sourcetype=\"cmms:workorder\"` (asset_id, wo_id, priority), `index=ot` `sourcetype=\"opc:tag\"` (asset_id, alarm_text). **App/TA** (typical add-on context): Splunk OT Intelligence, CMMS integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mfg, ot; **sourcetype**: mes:production, cmms:workorder, opc:tag. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mfg, index=mfg, index=ot, sourcetype=\"mes:production\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where sourcetype!=\"opc:tag\" OR match(alarm_text, \"(?i)fault|alarm|trip\")` — typically the threshold or rule expression for this monitoring goal.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• `eval` defines or adjusts **duration_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where duration_sec>60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Unplanned Downtime Root Cause Correlation**): table _time, key, duration_sec, state, wo_id, alarm_text\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (downtime episodes), Table (correlated WO and alarms), Sankey (cause categories), Bar chart (duration by line).",
              "script": "",
              "premium": "Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch sudden stops and the alerts around them. We help you stitch the story across machines before the shift loses hours.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.2.3",
              "n": "Production Batch Yield Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Batch yield ties material usage to quality outcomes; sustained loss trends trigger recipe or supplier investigations before customer rejects accumulate.",
              "t": "Custom HEC (MES)",
              "d": "`index=mfg` `sourcetype=\"mes:production\"` (batch_id, sku, input_kg, good_kg, scrap_kg)",
              "q": "index=mfg sourcetype=\"mes:production\"\n| eval yield_pct=if((good_kg+scrap_kg)>0, 100*good_kg/(good_kg+scrap_kg), null), loss_pct=100-yield_pct\n| bin _time span=1d\n| stats avg(yield_pct) as avg_yield, stdev(yield_pct) as yield_jitter by sku, batch_id, _time\n| eventstats median(avg_yield) as sku_median by sku\n| where avg_yield < sku_median*0.95 OR yield_jitter>5\n| table _time, sku, batch_id, avg_yield, sku_median, yield_jitter",
              "m": "MES batch completion records to HEC with weights from scales integrated via OPC if needed. Maintain golden batch yield per SKU in a lookup for static targets. Alert on statistically significant drops; integrate with QMS for hold codes. **Domain context:** Yield is central to IATF 16949 / food & pharma traceability—tie `batch_id` to lot genealogy in QMS when required. **Splunk:** `good_kg` + `scrap_kg` should reconcile to total material; if not, fix extraction before trusting `yield_pct`.",
              "z": "Line chart (yield by batch), Bar chart (yield by SKU), Table (worst batches), Control chart (yield with limits).",
              "kfp": "Yield step-changes on trial lots, new ingredients, or startup scrap after a clean-out that the quality release still accepted under the deviated batch record.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (MES).\n• Ensure the following data sources are available: `index=mfg` `sourcetype=\"mes:production\"` (batch_id, sku, input_kg, good_kg, scrap_kg).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMES batch completion records to HEC with weights from scales integrated via OPC if needed. Maintain golden batch yield per SKU in a lookup for static targets. Alert on statistically significant drops; integrate with QMS for hold codes. **Domain context:** Yield is central to IATF 16949 / food & pharma traceability—tie `batch_id` to lot genealogy in QMS when required. **Splunk:** `good_kg` + `scrap_kg` should reconcile to total material; if not, fix extraction before trusting `yield_pct`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mfg sourcetype=\"mes:production\"\n| eval yield_pct=if((good_kg+scrap_kg)>0, 100*good_kg/(good_kg+scrap_kg), null), loss_pct=100-yield_pct\n| bin _time span=1d\n| stats avg(yield_pct) as avg_yield, stdev(yield_pct) as yield_jitter by sku, batch_id, _time\n| eventstats median(avg_yield) as sku_median by sku\n| where avg_yield < sku_median*0.95 OR yield_jitter>5\n| table _time, sku, batch_id, avg_yield, sku_median, yield_jitter\n```\n\nUnderstanding this SPL\n\n**Production Batch Yield Tracking** — Batch yield ties material usage to quality outcomes; sustained loss trends trigger recipe or supplier investigations before customer rejects accumulate.\n\nDocumented **Data sources**: `index=mfg` `sourcetype=\"mes:production\"` (batch_id, sku, input_kg, good_kg, scrap_kg). **App/TA** (typical add-on context): Custom HEC (MES). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mfg; **sourcetype**: mes:production. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mfg, sourcetype=\"mes:production\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **yield_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by sku, batch_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` adds the SKU-level aggregate back onto every row without collapsing them; values are computed **by sku** so each batch can be compared against its SKU median.\n• Filters the current rows with `where avg_yield < sku_median*0.95 OR yield_jitter>5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Production Batch Yield Tracking**): table _time, sku, batch_id, avg_yield, sku_median, yield_jitter\n\nThis search runs directly over raw events, so no Data Model Acceleration is required here; enable DMA (or a metrics index for `mstats`) only if you later convert it to a `tstats`/`mstats` variant over an accelerated model.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (yield by batch), Bar chart (yield by SKU), Table (worst batches), Control chart (yield with limits).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch batch yields against targets. We help you catch drift before scrap and rework pile up.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.2.4",
              "n": "Quality SPC Chart Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Western Electric rules catch process shifts before out-of-spec product is made, supporting ISO 9001 and automotive PPAP evidence.",
              "t": "Custom HEC (QMS/LIMS)",
              "d": "`index=quality` `sourcetype=\"qms:inspection\"` (part_id, characteristic, measured_value, lsl, usl)",
              "q": "index=quality sourcetype=\"qms:inspection\"\n| sort 0 part_id, characteristic, _time\n| streamstats window=20 global=f avg(measured_value) as ma20, stdev(measured_value) as ms20 by part_id, characteristic\n| eval rule1=if(measured_value>usl OR measured_value<lsl,1,0)\n| eval rule2=if(ms20>0 AND (measured_value > ma20+3*ms20 OR measured_value < ma20-3*ms20),1,0)\n| where rule1=1 OR rule2=1\n| table _time, part_id, characteristic, measured_value, lsl, usl, ma20, ms20, rule1, rule2",
              "m": "Stream inspection measurements from CMM or inline gauges via QMS API to HEC. Tune window (e.g., 20 subgroups) per characteristic. Alert quality engineers on rule breaches; archive results for audit. Optional `predict` for advanced drift on stable lines. **Domain context:** Western Electric / Nelson rules are standard SPC; automotive PPAP often requires control plan evidence—retention must match your quality program. **Splunk:** `streamstats` is order-dependent—confirm events arrive time-sorted per `part_id`/`characteristic`; use `sort 0` before `streamstats` as in the search.",
              "z": "Control chart (X-bar style), Table (rule violations), Line chart (measurement trend), Heatmap (characteristic × line).",
              "kfp": "SPC excursions during gauge R&R, new sampling plans, or raw-material variability after a qualified supplier change that engineering authorized.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (QMS/LIMS).\n• Ensure the following data sources are available: `index=quality` `sourcetype=\"qms:inspection\"` (part_id, characteristic, measured_value, lsl, usl).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStream inspection measurements from CMM or inline gauges via QMS API to HEC. Tune window (e.g., 20 subgroups) per characteristic. Alert quality engineers on rule breaches; archive results for audit. Optional `predict` for advanced drift on stable lines. **Domain context:** Western Electric / Nelson rules are standard SPC; automotive PPAP often requires control plan evidence—retention must match your quality program. **Splunk:** `streamstats` is order-dependent—confirm events arrive time-sorted p…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=quality sourcetype=\"qms:inspection\"\n| sort 0 part_id, characteristic, _time\n| streamstats window=20 global=f avg(measured_value) as ma20, stdev(measured_value) as ms20 by part_id, characteristic\n| eval rule1=if(measured_value>usl OR measured_value<lsl,1,0)\n| eval rule2=if(ms20>0 AND (measured_value > ma20+3*ms20 OR measured_value < ma20-3*ms20),1,0)\n| where rule1=1 OR rule2=1\n| table _time, part_id, characteristic, measured_value, lsl, usl, ma20, ms20, rule1, rule2\n```\n\nUnderstanding this SPL\n\n**Quality SPC Chart Monitoring** — Western Electric rules catch process shifts before out-of-spec product is made, supporting ISO 9001 and automotive PPAP evidence.\n\nDocumented **Data sources**: `index=quality` `sourcetype=\"qms:inspection\"` (part_id, characteristic, measured_value, lsl, usl). **App/TA** (typical add-on context): Custom HEC (QMS/LIMS). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: quality; **sourcetype**: qms:inspection. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=quality, sourcetype=\"qms:inspection\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` computes a running mean and standard deviation over the last 20 subgroups per **part_id, characteristic** pair without collapsing rows — this is the moving window the control-chart rules test against.\n• `eval` defines or adjusts **rule1** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **rule2** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where rule1=1 OR rule2=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Quality SPC Chart Monitoring**): table _time, part_id, characteristic, measured_value, lsl, usl, ma20, ms20, rule1, rule2\n\nThis search runs directly over raw events, so no Data Model Acceleration is required here; enable DMA (or a metrics index for `mstats`) only if you later convert it to a `tstats`/`mstats` variant over an accelerated model.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Control chart (X-bar style), Table (rule violations), Line chart (measurement trend), Heatmap (characteristic × line).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch control-chart limits on critical measures. We help you catch trends before parts drift out of spec.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.2.5",
              "n": "Predictive Maintenance Vibration Baseline Drift",
              "c": "medium",
              "f": "advanced",
              "v": "Rising RMS velocity or envelope demodulation on rotating assets precedes bearing failure; early warning avoids unplanned line stops and secondary damage.",
              "t": "Splunk Edge Hub, Splunk OT Intelligence",
              "d": "`index=ot` `sourcetype=\"opc:tag\"` (asset_id, vibration_rms, temperature_c)",
              "q": "index=ot sourcetype=\"opc:tag\"\n| bin _time span=5m\n| stats avg(vibration_rms) as v_rms, avg(temperature_c) as temp by asset_id, _time\n| sort 0 asset_id, _time\n| streamstats window=288 global=f avg(v_rms) as baseline_v, stdev(v_rms) as baseline_sd by asset_id\n| eval z=if(baseline_sd>0, (v_rms-baseline_v)/baseline_sd, null)\n| where z>3 OR v_rms>7.1 OR temp>85\n| table _time, asset_id, v_rms, baseline_v, z, temp",
              "m": "Sample vibration tags from Edge Hub OPC-UA at 1 Hz aggregated to 5-minute RMS in Splunk or at the edge. Establish baselines per asset with seasonal retuning. Integrate CMMS to auto-create work orders when z-score exceeds policy. **Domain context:** ISO 20816 / API 670 family defines vibration severity zones; 7.1 mm/s RMS is a common alert zone for general machinery but depends on machine class—replace with your engineering limits. **Splunk:** Baseline `streamstats` windows (288 = ~24h at 5m) need enough history; cold-start assets may false-positive until history exists.",
              "z": "Line chart (RMS vs baseline), Heatmap (asset × week), Table (assets in alert), Gauge (worst z-score).",
              "kfp": "Vibration drift after bearing lubrication, belt tension service, or speed setpoint changes during a run that the maintenance planner tied to a work order.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub, Splunk OT Intelligence.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"opc:tag\"` (asset_id, vibration_rms, temperature_c).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSample vibration tags from Edge Hub OPC-UA at 1 Hz aggregated to 5-minute RMS in Splunk or at the edge. Establish baselines per asset with seasonal retuning. Integrate CMMS to auto-create work orders when z-score exceeds policy. **Domain context:** ISO 20816 / API 670 family defines vibration severity zones; 7.1 mm/s RMS is a common alert zone for general machinery but depends on machine class—replace with your engineering limits. **Splunk:** Baseline `streamstats` windows (288 = ~24h at 5m) nee…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"opc:tag\"\n| bin _time span=5m\n| stats avg(vibration_rms) as v_rms, avg(temperature_c) as temp by asset_id, _time\n| sort 0 asset_id, _time\n| streamstats window=288 global=f avg(v_rms) as baseline_v, stdev(v_rms) as baseline_sd by asset_id\n| eval z=if(baseline_sd>0, (v_rms-baseline_v)/baseline_sd, null)\n| where z>3 OR v_rms>7.1 OR temp>85\n| table _time, asset_id, v_rms, baseline_v, z, temp\n```\n\nUnderstanding this SPL\n\n**Predictive Maintenance Vibration Baseline Drift** — Rising RMS velocity or envelope demodulation on rotating assets precedes bearing failure; early warning avoids unplanned line stops and secondary damage.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"opc:tag\"` (asset_id, vibration_rms, temperature_c). **App/TA** (typical add-on context): Splunk Edge Hub, Splunk OT Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: opc:tag. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"opc:tag\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by asset_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by asset_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where z>3 OR v_rms>7.1 OR temp>85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Predictive Maintenance Vibration Baseline Drift**): table _time, asset_id, v_rms, baseline_v, z, temp\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (RMS vs baseline), Heatmap (asset × week), Table (assets in alert), Gauge (worst z-score).",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch vibration against healthy baselines. We help you catch bearing or alignment issues before the machine fails.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
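A hedged variant for 21.2.5 that addresses the cold-start caveat in the implementation notes: new assets have too little history for a stable baseline, so the sketch below counts how many 5-minute samples are in the rolling window and only applies the z-score test once the window is full (roughly 24 h at 5-minute buckets). Field names and the z>3 threshold follow the documented search; the gate value of 288 is an assumption to tune.

```spl
index=ot sourcetype="opc:tag"
| bin _time span=5m
| stats avg(vibration_rms) as v_rms by asset_id, _time
| sort 0 asset_id, _time
| streamstats window=288 global=f count as n_baseline, avg(v_rms) as baseline_v, stdev(v_rms) as baseline_sd by asset_id
| eval z=if(baseline_sd>0, (v_rms-baseline_v)/baseline_sd, null())
| where n_baseline>=288 AND z>3
| table _time, asset_id, v_rms, baseline_v, baseline_sd, z
```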
            {
              "i": "21.2.6",
              "n": "Energy Consumption Per Unit Produced",
              "c": "medium",
              "f": "intermediate",
              "v": "Specific energy consumption (kWh per unit) links sustainability KPIs to operations; spikes reveal compressed air leaks, idle equipment, or recipe drift.",
              "t": "Splunk Edge Hub (energy meters), MES HEC",
              "d": "`index=mfg` `sourcetype=\"energy:meter\"` (line_id, kwh), `sourcetype=\"mes:production\"` (line_id, units_good)",
              "q": "index=mfg (sourcetype=\"energy:meter\" OR sourcetype=\"mes:production\")\n| bin _time span=1h\n| stats sum(kwh) as kwh by line_id, _time\n| join type=inner max=1 line_id _time [\n    search index=mfg sourcetype=\"mes:production\"\n    | bin _time span=1h\n    | stats sum(units_good) as units by line_id, _time\n]\n| eval sec_kwh=if(units>0, kwh/units, null)\n| where sec_kwh>0\n| eventstats median(sec_kwh) as med_sec by line_id\n| where sec_kwh > med_sec*1.15\n| table _time, line_id, kwh, units, sec_kwh, med_sec",
              "m": "Align meter rollups to MES production intervals using common `line_id` and time bucketing. Calibrate meters annually and store correction factors in lookup. Dashboard SEC for sustainability reporting and cost per unit. **Domain context:** ISO 50001 energy baselines and CDP/GHG reporting often use kWh per unit—allocate shared plant meters by line if multi-line feeds share one meter. **Splunk:** The sample `join` assumes `line_id` on MES can be renamed to align with meter scope; if meters are plant-level, aggregate MES `units` at plant grain before joining.",
              "z": "Line chart (SEC trend), Bar chart (SEC by line), Table (outliers), Single value (plant kWh per unit).",
              "kfp": "Energy-intensity noise during off-shift cleaning, wide ambient swings, or partial runs that the energy dashboard already tags as non-steady-state.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub (energy meters), MES HEC.\n• Ensure the following data sources are available: `index=mfg` `sourcetype=\"energy:meter\"` (line_id, kwh), `sourcetype=\"mes:production\"` (line_id, units_good).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign meter rollups to MES production intervals using common `line_id` and time bucketing. Calibrate meters annually and store correction factors in lookup. Dashboard SEC for sustainability reporting and cost per unit. **Domain context:** ISO 50001 energy baselines and CDP/GHG reporting often use kWh per unit—allocate shared plant meters by line if multi-line feeds share one meter. **Splunk:** The sample `join` assumes `line_id` on MES can be renamed to align with meter scope; if meters are plan…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mfg (sourcetype=\"energy:meter\" OR sourcetype=\"mes:production\")\n| bin _time span=1h\n| stats sum(kwh) as kwh by line_id, _time\n| join type=inner max=1 line_id _time [\n    search index=mfg sourcetype=\"mes:production\"\n    | bin _time span=1h\n    | stats sum(units_good) as units by line_id, _time\n]\n| eval sec_kwh=if(units>0, kwh/units, null)\n| where sec_kwh>0\n| eventstats median(sec_kwh) as med_sec by line_id\n| where sec_kwh > med_sec*1.15\n| table _time, line_id, kwh, units, sec_kwh, med_sec\n```\n\nUnderstanding this SPL\n\n**Energy Consumption Per Unit Produced** — Specific energy consumption (kWh per unit) links sustainability KPIs to operations; spikes reveal compressed air leaks, idle equipment, or recipe drift.\n\nDocumented **Data sources**: `index=mfg` `sourcetype=\"energy:meter\"` (line_id, kwh), `sourcetype=\"mes:production\"` (line_id, units_good). **App/TA** (typical add-on context): Splunk Edge Hub (energy meters), MES HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mfg; **sourcetype**: energy:meter, mes:production. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mfg, sourcetype=\"energy:meter\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by line_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **sec_kwh** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where sec_kwh>0` — typically the threshold or rule expression for this monitoring goal.\n• `eventstats` rolls up events into metrics; results are split **by line_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where sec_kwh > med_sec*1.15` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Energy Consumption Per Unit Produced**): table _time, line_id, kwh, units, sec_kwh, med_sec\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (SEC trend), Bar chart (SEC by line), Table (outliers), Single value (plant kWh per unit).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch energy per good unit produced. We help you spot waste before power bills and sustainability targets slip.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
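The implementation notes for 21.2.6 flag the `join` assumptions; where both feeds share `line_id` and the same hourly grain, a join-free variant is usually cheaper and avoids subsearch limits. This is a sketch under that assumption: it splits the two sourcetypes with `eval` before a single `stats`, keeping the documented 15% over-median threshold.

```spl
index=mfg (sourcetype="energy:meter" OR sourcetype="mes:production")
| bin _time span=1h
| eval kwh_val=if(sourcetype="energy:meter", kwh, null()),
       units_val=if(sourcetype="mes:production", units_good, null())
| stats sum(kwh_val) as kwh, sum(units_val) as units by line_id, _time
| eval sec_kwh=if(units>0, kwh/units, null())
| eventstats median(sec_kwh) as med_sec by line_id
| where sec_kwh > med_sec*1.15
| table _time, line_id, kwh, units, sec_kwh, med_sec
```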
            {
              "i": "21.2.7",
              "n": "MES Order Completion Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Stalled manufacturing orders threaten delivery dates and WIP cash; milestone visibility enables planner intervention before the constraint is starved.",
              "t": "Custom HEC (MES)",
              "d": "`index=mfg` `sourcetype=\"mes:production\"` (order_id, sku, milestone, status, due_date)",
              "q": "index=mfg sourcetype=\"mes:production\"\n| eval due_epoch=strptime(due_date,\"%Y-%m-%d\")\n| eval late_risk=if(lower(status)!=\"complete\" AND _time>due_epoch-86400,1,0)\n| where late_risk=1 OR lower(status) IN (\"held\",\"blocked\")\n| stats latest(status) as status, latest(milestone) as milestone, min(due_epoch) as due by order_id, sku\n| sort due\n| table order_id, sku, milestone, status, due",
              "m": "Ingest order status transitions from MES via HEC with full milestone model. Join to ERP promise date if synced through nightly batch. Alert planners when orders approach due window without final completion event. **Domain context:** APS/ERP often owns the promise date while MES owns WIP state—misalignment between systems is a common root cause of “phantom” late risk. **Splunk:** `late_risk` compares `_time` to `due_epoch`; ensure MES event time reflects status change, not report generation time.",
              "z": "Gantt-style table (milestones), Bar chart (orders by status), Timeline (order events), Single value (orders at risk).",
              "kfp": "Order completion variance during expedites, short picks, or ERP resequencing on the day a customer changed priority that the planner broadcast on the floor.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (MES).\n• Ensure the following data sources are available: `index=mfg` `sourcetype=\"mes:production\"` (order_id, sku, milestone, status, due_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest order status transitions from MES via HEC with full milestone model. Join to ERP promise date if synced through nightly batch. Alert planners when orders approach due window without final completion event. **Domain context:** APS/ERP often owns the promise date while MES owns WIP state—misalignment between systems is a common root cause of “phantom” late risk. **Splunk:** `late_risk` compares `_time` to `due_epoch`; ensure MES event time reflects status change, not report generation time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mfg sourcetype=\"mes:production\"\n| eval due_epoch=strptime(due_date,\"%Y-%m-%d\")\n| eval late_risk=if(lower(status)!=\"complete\" AND _time>due_epoch-86400,1,0)\n| where late_risk=1 OR lower(status) IN (\"held\",\"blocked\")\n| stats latest(status) as status, latest(milestone) as milestone, min(due_epoch) as due by order_id, sku\n| sort due\n| table order_id, sku, milestone, status, due\n```\n\nUnderstanding this SPL\n\n**MES Order Completion Tracking** — Stalled manufacturing orders threaten delivery dates and WIP cash; milestone visibility enables planner intervention before the constraint is starved.\n\nDocumented **Data sources**: `index=mfg` `sourcetype=\"mes:production\"` (order_id, sku, milestone, status, due_date). **App/TA** (typical add-on context): Custom HEC (MES). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mfg; **sourcetype**: mes:production. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mfg, sourcetype=\"mes:production\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **due_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **late_risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where late_risk=1 OR lower(status) IN (\"held\",\"blocked\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by order_id, sku** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **MES Order Completion Tracking**): table order_id, sku, milestone, status, due\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Gantt-style table (milestones), Bar chart (orders by status), Timeline (order events), Single value (orders at risk).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch MES orders finishing on schedule. We help you see bottlenecks before downstream steps starve.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
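A sketch of the planner alert described for 21.2.7: rather than flagging individual status events, it reduces each order to its latest state and surfaces anything not complete that falls due inside a two-day horizon. The two-day horizon and the 7-day lookback are assumptions to adjust; `due_date`, `status`, and `milestone` come from the documented `mes:production` fields.

```spl
index=mfg sourcetype="mes:production" earliest=-7d
| stats latest(status) as status, latest(milestone) as milestone, latest(due_date) as due_date by order_id, sku
| eval due_epoch=strptime(due_date, "%Y-%m-%d")
| where lower(status)!="complete" AND due_epoch < relative_time(now(), "+2d")
| sort due_epoch
| table order_id, sku, milestone, status, due_date
```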
            {
              "i": "21.2.8",
              "n": "Supply Chain EDI Message Failure Rate",
              "c": "high",
              "f": "intermediate",
              "v": "AS2/EDI failures delay ASNs and invoices, disrupting JIT lines and payment cycles; monitoring failure rates protects supplier scorecards and customer OTIF.",
              "t": "Custom HEC (B2B gateway)",
              "d": "`index=edi` `sourcetype=\"edi:message\"` (partner_id, direction, status, message_type, mdn_status)",
              "q": "index=edi sourcetype=\"edi:message\"\n| eval ok=if(lower(status)=\"success\" AND (isnull(mdn_status) OR lower(mdn_status)=\"processed\"),1,0)\n| bin _time span=1h\n| stats count as total, sum(eval(if(ok=0,1,0))) as fail by partner_id, message_type, _time\n| eval fail_rate=if(total>0, 100*fail/total, null)\n| where fail_rate>2 OR fail>10\n| table _time, partner_id, message_type, total, fail, fail_rate",
              "m": "Export gateway logs with MDN and ACK status to HEC; mask payload content. Per-partner SLOs in lookup table. Page integrations team when failure rate exceeds threshold for two consecutive intervals. **Domain context:** AS2 MDN (message disposition notification) and X12/EDIFACT functional acks are distinct—track both; partner scorecards often use 997/999 or EDIFACT CONTRL success rates. **Splunk:** Never index full EDI payloads (PHI/PII/BOM data); log metadata and correlation IDs only.",
              "z": "Line chart (failure rate by partner), Table (top failing partners), Bar chart (failures by message type), Single value (global EDI health).",
              "kfp": "EDI error bursts while trading partners change schemas, VPN certificates roll, or batch replays after a host outage that the integration team reprocessed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (B2B gateway).\n• Ensure the following data sources are available: `index=edi` `sourcetype=\"edi:message\"` (partner_id, direction, status, message_type, mdn_status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport gateway logs with MDN and ACK status to HEC; mask payload content. Per-partner SLOs in lookup table. Page integrations team when failure rate exceeds threshold for two consecutive intervals. **Domain context:** AS2 MDN (message disposition notification) and X12/EDIFACT functional acks are distinct—track both; partner scorecards often use 997/999 or EDIFACT CONTRL success rates. **Splunk:** Never index full EDI payloads (PHI/PII/BOM data); log metadata and correlation IDs only.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edi sourcetype=\"edi:message\"\n| eval ok=if(lower(status)=\"success\" AND (isnull(mdn_status) OR lower(mdn_status)=\"processed\"),1,0)\n| bin _time span=1h\n| stats count as total, sum(eval(if(ok=0,1,0))) as fail by partner_id, message_type, _time\n| eval fail_rate=if(total>0, 100*fail/total, null)\n| where fail_rate>2 OR fail>10\n| table _time, partner_id, message_type, total, fail, fail_rate\n```\n\nUnderstanding this SPL\n\n**Supply Chain EDI Message Failure Rate** — AS2/EDI failures delay ASNs and invoices, disrupting JIT lines and payment cycles; monitoring failure rates protects supplier scorecards and customer OTIF.\n\nDocumented **Data sources**: `index=edi` `sourcetype=\"edi:message\"` (partner_id, direction, status, message_type, mdn_status). **App/TA** (typical add-on context): Custom HEC (B2B gateway). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edi; **sourcetype**: edi:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edi, sourcetype=\"edi:message\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by partner_id, message_type, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_rate>2 OR fail>10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Supply Chain EDI Message Failure Rate**): table _time, partner_id, message_type, total, fail, fail_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (failure rate by partner), Table (top failing partners), Bar chart (failures by message type), Single value (global EDI health).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch EDI messages between plants and partners. We help you catch failures before shipments or payments stall.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
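The implementation notes for 21.2.8 call for per-partner SLOs in a lookup and paging only after two consecutive bad intervals. A sketch under those assumptions is below; `edi_partner_slo` is a hypothetical lookup definition with `partner_id` and `max_fail_rate` columns that you would maintain yourself, and partners without an entry fall back to the documented 2% threshold.

```spl
index=edi sourcetype="edi:message"
| eval ok=if(lower(status)="success" AND (isnull(mdn_status) OR lower(mdn_status)="processed"), 1, 0)
| bin _time span=1h
| stats count as total, sum(eval(if(ok=0,1,0))) as fail by partner_id, _time
| eval fail_rate=if(total>0, 100*fail/total, null())
| lookup edi_partner_slo partner_id OUTPUT max_fail_rate
| eval breach=if(fail_rate > coalesce(max_fail_rate, 2), 1, 0)
| sort 0 partner_id, _time
| streamstats window=2 global=f sum(breach) as consecutive_breaches by partner_id
| where consecutive_breaches=2
| table _time, partner_id, total, fail, fail_rate, max_fail_rate
```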
            {
              "i": "21.2.9",
              "n": "Bill of Materials (BOM) Discrepancy Detection",
              "c": "medium",
              "f": "advanced",
              "v": "Wrong component consumption breaks costing and traceability; catching BOM mismatches early avoids recalls and ERP reconciliation fire drills.",
              "t": "ERP HEC, MES integration",
              "d": "`index=erp` `sourcetype=\"erp:event\"` (order_id, material_id, qty_planned), `index=mfg` `sourcetype=\"mes:production\"` (order_id, material_id, qty_consumed)",
              "q": "index=erp sourcetype=\"erp:event\" event_type=\"bom\"\n| rename qty_planned as qty_plan\n| join type=outer max=1 order_id material_id [\n    search index=mfg sourcetype=\"mes:production\"\n    | stats sum(qty_consumed) as qty_cons by order_id, material_id\n]\n| fillnull value=0 qty_cons\n| eval delta=qty_plan-qty_cons\n| where abs(delta)>0.5*qty_plan OR abs(delta)>5\n| table order_id, material_id, qty_plan, qty_cons, delta",
              "m": "Publish planned BOM lines from ERP and consumption from MES on shared keys. Schedule hourly reconciliation; handle unit of measure conversion via lookup. Route discrepancies to production control before period close. **Domain context:** BOM accuracy supports traceability (e.g. medical device UDI, automotive PPAP); backflush timing vs physical issue can create false deltas. **Splunk:** `join type=outer` can be expensive—consider `lookup` from a KV store populated by scheduled searches for large order volumes.",
              "z": "Table (BOM variances), Bar chart (variance by material), Line chart (discrepancy count over time), Heatmap (SKU × material).",
              "kfp": "BOM mismatch flags during engineering change effectivity dates, kit substitutions, or scrap reconciliation windows that the MRP controller closed after audit.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ERP HEC, MES integration.\n• Ensure the following data sources are available: `index=erp` `sourcetype=\"erp:event\"` (order_id, material_id, qty_planned), `index=mfg` `sourcetype=\"mes:production\"` (order_id, material_id, qty_consumed).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPublish planned BOM lines from ERP and consumption from MES on shared keys. Schedule hourly reconciliation; handle unit of measure conversion via lookup. Route discrepancies to production control before period close. **Domain context:** BOM accuracy supports traceability (e.g. medical device UDI, automotive PPAP); backflush timing vs physical issue can create false deltas. **Splunk:** `join type=outer` can be expensive—consider `lookup` from a KV store populated by scheduled searches for large o…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=erp sourcetype=\"erp:event\" event_type=\"bom\"\n| rename qty_planned as qty_plan\n| join type=outer max=1 order_id material_id [\n    search index=mfg sourcetype=\"mes:production\"\n    | stats sum(qty_consumed) as qty_cons by order_id, material_id\n]\n| fillnull value=0 qty_cons\n| eval delta=qty_plan-qty_cons\n| where abs(delta)>0.5*qty_plan OR abs(delta)>5\n| table order_id, material_id, qty_plan, qty_cons, delta\n```\n\nUnderstanding this SPL\n\n**Bill of Materials (BOM) Discrepancy Detection** — Wrong component consumption breaks costing and traceability; catching BOM mismatches early avoids recalls and ERP reconciliation fire drills.\n\nDocumented **Data sources**: `index=erp` `sourcetype=\"erp:event\"` (order_id, material_id, qty_planned), `index=mfg` `sourcetype=\"mes:production\"` (order_id, material_id, qty_consumed). **App/TA** (typical add-on context): ERP HEC, MES integration. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: erp; **sourcetype**: erp:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=erp, sourcetype=\"erp:event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where abs(delta)>0.5*qty_plan OR abs(delta)>5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Bill of Materials (BOM) Discrepancy Detection**): table order_id, material_id, qty_plan, qty_cons, delta\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (BOM variances), Bar chart (variance by material), Line chart (discrepancy count over time), Heatmap (SKU × material).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch bills of material across systems. We help you catch mismatches before wrong parts hit the line.",
              "mtype": [
                "Compliance",
                "Change"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
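For the KV-store alternative suggested in the 21.2.9 notes, one pattern is a scheduled rollup that materializes MES consumption into a lookup, plus a main search that enriches ERP BOM lines with `lookup` instead of `join`. Both `mes_consumption_rollup` (a lookup definition you would create, CSV or KV store backed, keyed on `order_id` and `material_id`) and the 24-hour rollup window are assumptions.

Scheduled rollup (run hourly, for example):

```spl
index=mfg sourcetype="mes:production" earliest=-24h
| stats sum(qty_consumed) as qty_cons by order_id, material_id
| outputlookup mes_consumption_rollup
```

Reconciliation without `join`:

```spl
index=erp sourcetype="erp:event" event_type="bom"
| rename qty_planned as qty_plan
| lookup mes_consumption_rollup order_id material_id OUTPUT qty_cons
| fillnull value=0 qty_cons
| eval delta=qty_plan-qty_cons
| where abs(delta)>0.5*qty_plan OR abs(delta)>5
| table order_id, material_id, qty_plan, qty_cons, delta
```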
            {
              "i": "21.2.10",
              "n": "Warehouse Pick-Pack-Ship Cycle Time",
              "c": "medium",
              "f": "intermediate",
              "v": "Long cycle times miss carrier cutoffs and inflate labor cost; SLA dashboards drive slotting and staffing decisions in peak seasons.",
              "t": "Custom HEC (WMS)",
              "d": "`index=wms` `sourcetype=\"wms:order\"` (order_id, pick_start, pack_end, ship_confirm, sla_minutes)",
              "q": "index=wms sourcetype=\"wms:order\"\n| eval pick_epoch=strptime(pick_start,\"%Y-%m-%d %H:%M:%S\"), ship_epoch=strptime(ship_confirm,\"%Y-%m-%d %H:%M:%S\")\n| eval cycle_min=(ship_epoch-pick_epoch)/60\n| where cycle_min > sla_minutes OR isnull(ship_confirm)\n| stats avg(cycle_min) as avg_cycle, perc95(cycle_min) as p95_cycle, count as breach_count by order_id\n| sort - breach_count\n| table order_id, avg_cycle, p95_cycle, sla_minutes, breach_count",
              "m": "WMS event stream to HEC with timestamps at pick, pack, and ship scan. Define SLA per customer tier in lookup. Real-time panel for operations; weekly review of p95 versus staffing model. **Domain context:** OTIF and carrier cutoffs (e.g. parcel pickup windows) drive SLA design—`sla_minutes` should reflect channel (retail vs e-commerce). **Splunk:** Validate `strptime` against actual WMS timestamp format and timezone; null `ship_confirm` rows indicate in-flight orders—exclude or bucket separately in KPIs.",
              "z": "Histogram (cycle time distribution), Line chart (p95 trend), Table (orders breaching SLA), Bar chart (breaches by dock).",
              "kfp": "Pick-pack spikes during promotion waves, slotting projects, or temporary headcount gaps that the warehouse leader still met using overflow labor.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (WMS).\n• Ensure the following data sources are available: `index=wms` `sourcetype=\"wms:order\"` (order_id, pick_start, pack_end, ship_confirm, sla_minutes).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nWMS event stream to HEC with timestamps at pick, pack, and ship scan. Define SLA per customer tier in lookup. Real-time panel for operations; weekly review of p95 versus staffing model. **Domain context:** OTIF and carrier cutoffs (e.g. parcel pickup windows) drive SLA design—`sla_minutes` should reflect channel (retail vs e-commerce). **Splunk:** Validate `strptime` against actual WMS timestamp format and timezone; null `ship_confirm` rows indicate in-flight orders—exclude or bucket separately …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wms sourcetype=\"wms:order\"\n| eval pick_epoch=strptime(pick_start,\"%Y-%m-%d %H:%M:%S\"), ship_epoch=strptime(ship_confirm,\"%Y-%m-%d %H:%M:%S\")\n| eval cycle_min=(ship_epoch-pick_epoch)/60\n| where cycle_min > sla_minutes OR isnull(ship_confirm)\n| stats avg(cycle_min) as avg_cycle, perc95(cycle_min) as p95_cycle, count as breach_count by order_id\n| sort - breach_count\n| table order_id, avg_cycle, p95_cycle, sla_minutes, breach_count\n```\n\nUnderstanding this SPL\n\n**Warehouse Pick-Pack-Ship Cycle Time** — Long cycle times miss carrier cutoffs and inflate labor cost; SLA dashboards drive slotting and staffing decisions in peak seasons.\n\nDocumented **Data sources**: `index=wms` `sourcetype=\"wms:order\"` (order_id, pick_start, pack_end, ship_confirm, sla_minutes). **App/TA** (typical add-on context): Custom HEC (WMS). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wms; **sourcetype**: wms:order. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wms, sourcetype=\"wms:order\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pick_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cycle_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cycle_min > sla_minutes OR isnull(ship_confirm)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by order_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Warehouse Pick-Pack-Ship Cycle Time**): table order_id, avg_cycle, p95_cycle, sla_minutes, breach_count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) 
as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (cycle time distribution), Line chart (p95 trend), Table (orders breaching SLA), Bar chart (breaches by dock).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch pick, pack, and ship times. We help you catch warehouse slowdowns before promises to customers break.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
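A sketch of the per-tier SLA lookup mentioned in the 21.2.10 notes. It assumes a `customer_tier` field on the WMS events (not in the documented field list, so treat it as hypothetical) and a lookup definition `wms_sla_by_tier` with `customer_tier` and `sla_minutes` columns; orders fall back to the event-level `sla_minutes` when no tier SLA exists. Note that the SLA has to be carried through `stats` explicitly to appear in the final table.

```spl
index=wms sourcetype="wms:order"
| lookup wms_sla_by_tier customer_tier OUTPUT sla_minutes as tier_sla
| eval pick_epoch=strptime(pick_start, "%Y-%m-%d %H:%M:%S"),
       ship_epoch=strptime(ship_confirm, "%Y-%m-%d %H:%M:%S")
| eval cycle_min=(ship_epoch-pick_epoch)/60, sla_eff=coalesce(tier_sla, sla_minutes)
| where cycle_min > sla_eff
| stats count as breach_count, avg(cycle_min) as avg_cycle, perc95(cycle_min) as p95_cycle, max(sla_eff) as sla_minutes by customer_tier
| sort - breach_count
```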
            {
              "i": "21.2.11",
              "n": "Robotic Cell Cycle Time Deviation",
              "c": "high",
              "f": "intermediate",
              "v": "Drift from takt-time signals tooling wear or program changes; catching deviation early avoids quality escapes and unplanned maintenance.",
              "t": "Splunk OT Intelligence, robot controller HEC",
              "d": "`index=mfg` `sourcetype=\"robot:cycle\"` (cell_id, program_id, cycle_sec, target_sec)",
              "q": "index=mfg sourcetype=\"robot:cycle\"\n| eval delta_sec=cycle_sec-target_sec, delta_pct=if(target_sec>0, 100*delta_sec/target_sec, null)\n| bin _time span=15m\n| stats avg(cycle_sec) as avg_cycle, avg(target_sec) as avg_target, stdev(cycle_sec) as cycle_jitter by cell_id, program_id, _time\n| eval avg_delta_pct=if(avg_target>0, 100*(avg_cycle-avg_target)/avg_target, null)\n| where avg_delta_pct>10 OR cycle_jitter>3\n| table _time, cell_id, program_id, avg_cycle, avg_target, avg_delta_pct, cycle_jitter",
              "m": "Ingest cycle completion messages from robot OEM or PLC via Edge Hub syslog. Baseline `target_sec` per program revision in lookup when recipes change. Alert maintenance when sustained positive delta exceeds policy. **Domain context:** Robot takt-time drift often tracks EOAT wear, TCP calibration, or upstream material variation—correlate with CMMS on the same `cell_id`. **Splunk:** OEM syslog formats vary; use `props.conf` to extract `cycle_sec`/`program_id` consistently across firmware versions.",
              "z": "Line chart (cycle time vs target), Control chart, Table (cells in deviation), Bar chart (delta by program).",
              "kfp": "Cycle noise during teach-in, light-curtain breaks, or dual-tool changeovers that the robot log shows as normal protective stops, not a fault.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Intelligence, robot controller HEC.\n• Ensure the following data sources are available: `index=mfg` `sourcetype=\"robot:cycle\"` (cell_id, program_id, cycle_sec, target_sec).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest cycle completion messages from robot OEM or PLC via Edge Hub syslog. Baseline `target_sec` per program revision in lookup when recipes change. Alert maintenance when sustained positive delta exceeds policy. **Domain context:** Robot takt-time drift often tracks EOAT wear, TCP calibration, or upstream material variation—correlate with CMMS on the same `cell_id`. **Splunk:** OEM syslog formats vary; use `props.conf` to extract `cycle_sec`/`program_id` consistently across firmware versions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mfg sourcetype=\"robot:cycle\"\n| eval delta_sec=cycle_sec-target_sec, delta_pct=if(target_sec>0, 100*delta_sec/target_sec, null)\n| bin _time span=15m\n| stats avg(cycle_sec) as avg_cycle, avg(target_sec) as avg_target, stdev(cycle_sec) as cycle_jitter by cell_id, program_id, _time\n| eval avg_delta_pct=if(avg_target>0, 100*(avg_cycle-avg_target)/avg_target, null)\n| where avg_delta_pct>10 OR cycle_jitter>3\n| table _time, cell_id, program_id, avg_cycle, avg_target, avg_delta_pct, cycle_jitter\n```\n\nUnderstanding this SPL\n\n**Robotic Cell Cycle Time Deviation** — Drift from takt-time signals tooling wear or program changes; catching deviation early avoids quality escapes and unplanned maintenance.\n\nDocumented **Data sources**: `index=mfg` `sourcetype=\"robot:cycle\"` (cell_id, program_id, cycle_sec, target_sec). **App/TA** (typical add-on context): Splunk OT Intelligence, robot controller HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mfg; **sourcetype**: robot:cycle. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mfg, sourcetype=\"robot:cycle\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **delta_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by cell_id, program_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_delta_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_delta_pct>10 OR cycle_jitter>3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Robotic Cell Cycle Time Deviation**): table _time, cell_id, program_id, avg_cycle, avg_target, avg_delta_pct, cycle_jitter\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (cycle time vs target), Control chart, Table (cells in deviation), Bar chart (delta by program).",
              "script": "",
              "premium": "Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch robot cycle times against normal. We help you catch wear or program drift before throughput drops.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
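The 21.2.11 notes recommend baselining `target_sec` per program revision in a lookup and alerting only on sustained drift. The sketch below assumes a hypothetical lookup definition `robot_program_targets` with `program_id` and `target_sec` columns, falls back to the event-supplied target, and requires four consecutive 15-minute buckets above +10% before flagging a cell.

```spl
index=mfg sourcetype="robot:cycle"
| lookup robot_program_targets program_id OUTPUT target_sec as target_lkp
| eval target_eff=coalesce(target_lkp, target_sec)
| bin _time span=15m
| stats avg(cycle_sec) as avg_cycle, avg(target_eff) as avg_target by cell_id, program_id, _time
| eval avg_delta_pct=if(avg_target>0, 100*(avg_cycle-avg_target)/avg_target, null())
| sort 0 cell_id, program_id, _time
| streamstats window=4 global=f min(avg_delta_pct) as sustained_delta_pct by cell_id, program_id
| where sustained_delta_pct>10
| table _time, cell_id, program_id, avg_cycle, avg_target, avg_delta_pct, sustained_delta_pct
```

Taking the minimum delta over the last four buckets means every bucket in that window exceeded +10%, which approximates the sustained-drift policy; early in a series, fewer than four buckets are available, so expect some noise until history builds.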
            {
              "i": "21.2.12",
              "n": "Conveyor Belt Speed and Jam Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Belt slowdowns and jams starve downstream stations and can damage product; fast detection limits cascade stops across the line.",
              "t": "Splunk Edge Hub",
              "d": "`index=ot` `sourcetype=\"conveyor:sensor\"` (line_id, belt_id, speed_fpm, motor_amps, jam_sensor)",
              "q": "index=ot sourcetype=\"conveyor:sensor\"\n| eval jam=if(lower(jam_sensor)=\"true\" OR jam_sensor=\"1\",1,0)\n| bin _time span=1m\n| stats avg(speed_fpm) as avg_speed, avg(motor_amps) as avg_amps, max(jam) as jam_flag by line_id, belt_id, _time\n| where avg_speed < 10 OR jam_flag=1 OR avg_amps>25\n| table _time, line_id, belt_id, avg_speed, avg_amps, jam_flag",
              "m": "Map photo-eye, encoder, and VFD feedback through OPC-UA into Edge Hub with 1-second sampling aggregated in Splunk. Set nominal speed per SKU from MES lookup if variable. Tie jam events to video timestamp if optional stream metadata is ingested. **Domain context:** Jam cascades are a common OEE loss; nominal speed and amp draw thresholds depend on belt load and SKU—tune `avg_speed`/`avg_amps` limits per line. **Splunk:** Aggregate before alert noise—1m `bin` as shown; lift raw sampling rate only for RCA, not for 24/7 alerting cost.",
              "z": "Line chart (speed and amps), Status timeline (jam), Table (active issues), Single value (lines stopped).",
              "kfp": "Jam or speed variance on wet belts, new SKUs, or after mechanical adjustment during the maintenance window the line documented.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"conveyor:sensor\"` (line_id, belt_id, speed_fpm, motor_amps, jam_sensor).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap photo-eye, encoder, and VFD feedback through OPC-UA into Edge Hub with 1-second sampling aggregated in Splunk. Set nominal speed per SKU from MES lookup if variable. Tie jam events to video timestamp if optional stream metadata is ingested. **Domain context:** Jam cascades are a common OEE loss; nominal speed and amp draw thresholds depend on belt load and SKU—tune `avg_speed`/`avg_amps` limits per line. **Splunk:** Aggregate before alert noise—1m `bin` as shown; lift raw sampling rate only …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"conveyor:sensor\"\n| eval jam=if(lower(jam_sensor)=\"true\" OR jam_sensor=\"1\",1,0)\n| bin _time span=1m\n| stats avg(speed_fpm) as avg_speed, avg(motor_amps) as avg_amps, max(jam) as jam_flag by line_id, belt_id, _time\n| where avg_speed < 10 OR jam_flag=1 OR avg_amps>25\n| table _time, line_id, belt_id, avg_speed, avg_amps, jam_flag\n```\n\nUnderstanding this SPL\n\n**Conveyor Belt Speed and Jam Detection** — Belt slowdowns and jams starve downstream stations and can damage product; fast detection limits cascade stops across the line.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"conveyor:sensor\"` (line_id, belt_id, speed_fpm, motor_amps, jam_sensor). **App/TA** (typical add-on context): Splunk Edge Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: conveyor:sensor. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"conveyor:sensor\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **jam** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by line_id, belt_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_speed < 10 OR jam_flag=1 OR avg_amps>25` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Conveyor Belt Speed and Jam Detection**): table _time, line_id, belt_id, avg_speed, avg_amps, jam_flag\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (speed and amps), Status timeline (jam), Table (active issues), Single value (lines stopped).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch conveyor speed and jam signals. We help you catch mechanical trouble before the line backs up.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
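The 21.2.12 notes suggest sourcing nominal speed (and, by extension, expected amp draw) from a lookup rather than hardcoding thresholds. The sketch below assumes a hypothetical lookup definition `conveyor_nominal` keyed by `line_id` and `belt_id` with `nominal_fpm` and `max_amps` columns; belts missing from the lookup keep the documented fixed thresholds via `coalesce`.

```spl
index=ot sourcetype="conveyor:sensor"
| eval jam=if(lower(jam_sensor)="true" OR jam_sensor="1", 1, 0)
| bin _time span=1m
| stats avg(speed_fpm) as avg_speed, avg(motor_amps) as avg_amps, max(jam) as jam_flag by line_id, belt_id, _time
| lookup conveyor_nominal line_id belt_id OUTPUT nominal_fpm, max_amps
| eval speed_floor=coalesce(0.8*nominal_fpm, 10), amp_ceiling=coalesce(max_amps, 25)
| where avg_speed < speed_floor OR jam_flag=1 OR avg_amps > amp_ceiling
| table _time, line_id, belt_id, avg_speed, speed_floor, avg_amps, amp_ceiling, jam_flag
```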
            {
              "i": "21.2.13",
              "n": "Compressed Air System Leak Detection",
              "c": "medium",
              "f": "advanced",
              "v": "Air leaks are a top energy waste in plants; abnormal specific power (kW per 100 cfm) during non-production indicates leakage or control issues.",
              "t": "Splunk Edge Hub, energy analytics",
              "d": "`index=mfg` `sourcetype=\"air:compressor\"` (plant_id, kw, cfm, run_state), `sourcetype=\"mes:production\"` (plant_id, line_state)",
              "q": "index=mfg sourcetype=\"air:compressor\"\n| bin _time span=15m\n| stats avg(kw) as avg_kw, avg(cfm) as avg_cfm, max(run_state) as run_state by plant_id, _time\n| join type=left max=1 plant_id _time [\n    search index=mfg sourcetype=\"mes:production\"\n    | bin _time span=15m\n    | stats sum(units_good) as units by line_id, _time\n    | rename line_id as plant_id\n]\n| eval idle_compressor=if(lower(run_state)=\"on\" AND (isnull(units) OR units=0),1,0)\n| eval specific_kw=if(avg_cfm>20, avg_kw/(avg_cfm/100), null)\n| where idle_compressor=1 AND specific_kw>22\n| table _time, plant_id, avg_kw, avg_cfm, specific_kw, units",
              "m": "Instrument compressors with power and flow meters; infer non-production from MES aggregate line state. Baseline specific power during known good weekends after leak surveys. Alert facilities when idle load exceeds threshold; track savings from repair campaigns. **Domain context:** Compressed air is often 20–30% of industrial electricity; leak detection programs pair ultrasonic surveys with kW/cfm baselines. **Splunk:** The sample renames `line_id` to `plant_id` for join—only valid if your data model maps lines to plants that way; otherwise join on a real `plant_id` field.",
              "z": "Line chart (specific power trend), Bar chart (plant comparison), Table (suspect compressors), Single value (estimated wasted kW).",
              "kfp": "Leak indicators during dryer regeneration, line blow-downs, or night setback schedules that the utilities engineer treats as non-loss events.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub, energy analytics.\n• Ensure the following data sources are available: `index=mfg` `sourcetype=\"air:compressor\"` (plant_id, kw, cfm, run_state), `sourcetype=\"mes:production\"` (plant_id, line_state).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstrument compressors with power and flow meters; infer non-production from MES aggregate line state. Baseline specific power during known good weekends after leak surveys. Alert facilities when idle load exceeds threshold; track savings from repair campaigns. **Domain context:** Compressed air is often 20–30% of industrial electricity; leak detection programs pair ultrasonic surveys with kW/cfm baselines. **Splunk:** The sample renames `line_id` to `plant_id` for join—only valid if your data m…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mfg sourcetype=\"air:compressor\"\n| bin _time span=15m\n| stats avg(kw) as avg_kw, avg(cfm) as avg_cfm, max(run_state) as run_state by plant_id, _time\n| join type=left max=1 plant_id _time [\n    search index=mfg sourcetype=\"mes:production\"\n    | bin _time span=15m\n    | stats sum(units_good) as units by line_id, _time\n    | rename line_id as plant_id\n]\n| eval idle_compressor=if(lower(run_state)=\"on\" AND (isnull(units) OR units=0),1,0)\n| eval specific_kw=if(avg_cfm>20, avg_kw/(avg_cfm/100), null)\n| where idle_compressor=1 AND specific_kw>22\n| table _time, plant_id, avg_kw, avg_cfm, specific_kw, units\n```\n\nUnderstanding this SPL\n\n**Compressed Air System Leak Detection** — Air leaks are a top energy waste in plants; abnormal specific power (kW per 100 cfm) during non-production indicates leakage or control issues.\n\nDocumented **Data sources**: `index=mfg` `sourcetype=\"air:compressor\"` (plant_id, kw, cfm, run_state), `sourcetype=\"mes:production\"` (plant_id, line_state). **App/TA** (typical add-on context): Splunk Edge Hub, energy analytics. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mfg; **sourcetype**: air:compressor. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mfg, sourcetype=\"air:compressor\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by plant_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **idle_compressor** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **specific_kw** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where idle_compressor=1 AND specific_kw>22` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Compressed Air System Leak Detection**): table _time, plant_id, avg_kw, avg_cfm, specific_kw, units\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (specific power trend), Bar chart (plant comparison), Table (suspect compressors), Single value (estimated wasted kW).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch compressed-air use and signatures of leaks. We help you catch waste before tools lose pressure.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.2.14",
              "n": "Clean-in-Place (CIP) Cycle Validation",
              "c": "critical",
              "f": "advanced",
              "v": "Incomplete CIP risks product contamination and regulatory findings; verifying flow, temperature, and chemical concentration against the recipe protects brand and batch release.",
              "t": "Splunk OT Intelligence, Splunk Edge Hub",
              "d": "`index=mfg` `sourcetype=\"cip:cycle\"` (skid_id, step, flow_lpm, temp_c, conductivity_ms, duration_sec, recipe_id)",
              "q": "index=mfg sourcetype=\"cip:cycle\"\n| eval flow_ok=if(flow_lpm>=30 AND flow_lpm<=80,1,0), temp_ok=if(temp_c>=70 AND temp_c<=85,1,0), chem_ok=if(conductivity_ms>=1.2,1,0)\n| eval step_pass=flow_ok*temp_ok*chem_ok\n| stats min(step_pass) as cycle_pass, sum(duration_sec) as total_duration by skid_id, recipe_id, _time\n| where cycle_pass=0 OR total_duration < 600 OR total_duration > 7200\n| table _time, skid_id, recipe_id, cycle_pass, total_duration",
              "m": "Ingest skid PLC tags via Edge Hub with one event per step or minute rollups. Store recipe limits in lookup keyed by `recipe_id` and `step`. Electronic batch records can consume Splunk alerts for QA hold. Retain data for audit trail per 21 CFR Part 11 policy if applicable. **Domain context:** CIP validation is GMP-critical in pharma and food; recipe bands (flow/temp/conductivity) must match the validated master recipe—replace hardcoded `eval` limits with lookups per `recipe_id`/`step`. **Splunk:** Use `stats`/`eventstats` per step if one event spans multiple steps; ensure time ordering for step sequences.",
              "z": "Timeline (CIP steps), Table (failed cycles), Line chart (temperature and conductivity), Gauge (cycle compliance %).",
              "kfp": "CIP timing drift on chemical strength proof runs, new allergen changeovers, or extra rinse after QA swabs that the validation package allows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Intelligence, Splunk Edge Hub.\n• Ensure the following data sources are available: `index=mfg` `sourcetype=\"cip:cycle\"` (skid_id, step, flow_lpm, temp_c, conductivity_ms, duration_sec, recipe_id).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest skid PLC tags via Edge Hub with one event per step or minute rollups. Store recipe limits in lookup keyed by `recipe_id` and `step`. Electronic batch records can consume Splunk alerts for QA hold. Retain data for audit trail per 21 CFR Part 11 policy if applicable. **Domain context:** CIP validation is GMP-critical in pharma and food; recipe bands (flow/temp/conductivity) must match the validated master recipe—replace hardcoded `eval` limits with lookups per `recipe_id`/`step`. **Splunk:*…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mfg sourcetype=\"cip:cycle\"\n| eval flow_ok=if(flow_lpm>=30 AND flow_lpm<=80,1,0), temp_ok=if(temp_c>=70 AND temp_c<=85,1,0), chem_ok=if(conductivity_ms>=1.2,1,0)\n| eval step_pass=flow_ok*temp_ok*chem_ok\n| stats min(step_pass) as cycle_pass, sum(duration_sec) as total_duration by skid_id, recipe_id, _time\n| where cycle_pass=0 OR total_duration < 600 OR total_duration > 7200\n| table _time, skid_id, recipe_id, cycle_pass, total_duration\n```\n\nUnderstanding this SPL\n\n**Clean-in-Place (CIP) Cycle Validation** — Incomplete CIP risks product contamination and regulatory findings; verifying flow, temperature, and chemical concentration against the recipe protects brand and batch release.\n\nDocumented **Data sources**: `index=mfg` `sourcetype=\"cip:cycle\"` (skid_id, step, flow_lpm, temp_c, conductivity_ms, duration_sec, recipe_id). **App/TA** (typical add-on context): Splunk OT Intelligence, Splunk Edge Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mfg; **sourcetype**: cip:cycle. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mfg, sourcetype=\"cip:cycle\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **flow_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **step_pass** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by skid_id, recipe_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cycle_pass=0 OR total_duration < 600 OR total_duration > 7200` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Clean-in-Place (CIP) Cycle Validation**): table _time, skid_id, recipe_id, cycle_pass, total_duration\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (CIP steps), Table (failed cycles), Line chart (temperature and conductivity), Gauge (cycle compliance %).",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch clean-in-place runs from start to finish. We help you catch skipped steps before hygiene audits fail.",
              "mtype": [
                "Compliance",
                "Fault"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.2.15",
              "n": "Production Shift Handover Report Generation",
              "c": "medium",
              "f": "intermediate",
              "v": "Consistent shift reports reduce tacit knowledge loss and accelerate startup; KPI rollups improve accountability across crews on 24/7 lines.",
              "t": "Splunk OT Intelligence, MES HEC",
              "d": "`index=mfg` `sourcetype=\"mes:production\"` (line_id, units_good, downtime_min, scrap_units, shift_id)",
              "q": "index=mfg sourcetype=\"mes:production\"\n| eval shift=case(hour(_time)>=6 AND hour(_time)<14,\"A\", hour(_time)>=14 AND hour(_time)<22,\"B\", true(),\"C\")\n| bin _time span=8h aligntime=@d+6h\n| stats sum(units_good) as units, sum(downtime_min) as down_m, sum(scrap_units) as scrap by line_id, shift, _time\n| eval scrap_rate=if(units+scrap>0, 100*scrap/(units+scrap), null)\n| sort _time, line_id, shift\n| table _time, line_id, shift, units, down_m, scrap, scrap_rate",
              "m": "Schedule a saved search at shift change to email PDF or post to Teams via alert action. Pull `shift_id` from MES when available instead of inferred buckets. Optional append subsearch for top downtime reasons from OPC alarms. Store outputs in summary index for trend comparison. **Domain context:** Shift handover is a known safety and quality interface in 24/7 operations (ISA-18 / human factors); align shift boundaries to plant calendar and collective agreements. **Splunk:** `hour(_time)` uses indexer TZ unless search TZ is set—use explicit `shift_id` from MES when possible; `aligntime` on `bin` may not match all plants’ shift start times.",
              "z": "Table (shift KPI summary), Column chart (units by shift), Line chart (scrap rate trend), Single value (plant output per shift).",
              "kfp": "Handover metric gaps on holiday calendars, data-entry lag from the floor, or template changes the operations manager rolled out for the new KPI review.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Intelligence, MES HEC.\n• Ensure the following data sources are available: `index=mfg` `sourcetype=\"mes:production\"` (line_id, units_good, downtime_min, scrap_units, shift_id).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSchedule a saved search at shift change to email PDF or post to Teams via alert action. Pull `shift_id` from MES when available instead of inferred buckets. Optional append subsearch for top downtime reasons from OPC alarms. Store outputs in summary index for trend comparison. **Domain context:** Shift handover is a known safety and quality interface in 24/7 operations (ISA-18 / human factors); align shift boundaries to plant calendar and collective agreements. **Splunk:** `hour(_time)` uses ind…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mfg sourcetype=\"mes:production\"\n| eval shift=case(hour(_time)>=6 AND hour(_time)<14,\"A\", hour(_time)>=14 AND hour(_time)<22,\"B\", true(),\"C\")\n| bin _time span=8h aligntime=@d+6h\n| stats sum(units_good) as units, sum(downtime_min) as down_m, sum(scrap_units) as scrap by line_id, shift, _time\n| eval scrap_rate=if(units+scrap>0, 100*scrap/(units+scrap), null)\n| sort _time, line_id, shift\n| table _time, line_id, shift, units, down_m, scrap, scrap_rate\n```\n\nUnderstanding this SPL\n\n**Production Shift Handover Report Generation** — Consistent shift reports reduce tacit knowledge loss and accelerate startup; KPI rollups improve accountability across crews on 24/7 lines.\n\nDocumented **Data sources**: `index=mfg` `sourcetype=\"mes:production\"` (line_id, units_good, downtime_min, scrap_units, shift_id). **App/TA** (typical add-on context): Splunk OT Intelligence, MES HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mfg; **sourcetype**: mes:production. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mfg, sourcetype=\"mes:production\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **shift** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by line_id, shift, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **scrap_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Production Shift Handover Report Generation**): table _time, line_id, shift, units, down_m, scrap, scrap_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) 
as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (shift KPI summary), Column chart (units by shift), Line chart (scrap rate trend), Single value (plant output per shift).",
              "script": "",
              "premium": "Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch what each shift hands to the next. We help you catch holes before safety or quality handoffs fail.",
              "mtype": [
                "Performance",
                "Change"
              ],
              "ind": "Manufacturing",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.7,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 14,
            "none": 0
          }
        },
        {
          "i": "21.3",
          "n": "Healthcare and Life Sciences",
          "u": [
            {
              "i": "21.3.1",
              "n": "EHR System Response Time Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Slow EHR response times directly impact clinical workflow and patient safety. Monitoring response latency enables proactive intervention before clinicians experience degradation.",
              "t": "Epic Hyperspace / Cerner application performance logs via HEC",
              "d": "`index=healthcare` `sourcetype=\"ehr:audit\"` fields `response_time_ms`, `transaction_type`, `server_node`",
              "q": "index=healthcare sourcetype=\"ehr:audit\"\n| bin _time span=5m\n| stats avg(response_time_ms) as avg_rt, perc95(response_time_ms) as p95_rt, count by server_node, _time\n| where p95_rt > 3000\n| table _time, server_node, avg_rt, p95_rt, count",
              "m": "Ingest EHR application performance logs via HEC. Set thresholds based on clinical workflow requirements (typically p95 < 3 seconds). Alert on sustained degradation and correlate with infrastructure metrics. **Domain context:** EHR latency affects clinician cognitive load and patient safety during high-acuity workflows; many organizations align targets with internal clinical SLA committees, not a single vendor default. **Splunk:** Epic/Cerner often emit separate logs per app tier—normalize `transaction_type` and exclude batch jobs from clinician-facing p95. Ensure PHI minimization: aggregate by `server_node`, not patient context in metrics indexes.",
              "z": "Line chart (p95 response time by server), Heatmap (server × time), Single value (current p95).",
              "kfp": "Slow response during patch windows, storage maintenance, end-of-month jobs, flu surges, or EHR go-live rehearse days that the command center already published.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Epic Hyperspace / Cerner application performance logs via HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"ehr:audit\"` fields `response_time_ms`, `transaction_type`, `server_node`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest EHR application performance logs via HEC. Set thresholds based on clinical workflow requirements (typically p95 < 3 seconds). Alert on sustained degradation and correlate with infrastructure metrics. **Domain context:** EHR latency affects clinician cognitive load and patient safety during high-acuity workflows; many organizations align targets with internal clinical SLA committees, not a single vendor default. **Splunk:** Epic/Cerner often emit separate logs per app tier—normalize `trans…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"ehr:audit\"\n| bin _time span=5m\n| stats avg(response_time_ms) as avg_rt, perc95(response_time_ms) as p95_rt, count by server_node, _time\n| where p95_rt > 3000\n| table _time, server_node, avg_rt, p95_rt, count\n```\n\nUnderstanding this SPL\n\n**EHR System Response Time Monitoring** — Slow EHR response times directly impact clinical workflow and patient safety. Monitoring response latency enables proactive intervention before clinicians experience degradation.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"ehr:audit\"` fields `response_time_ms`, `transaction_type`, `server_node`. **App/TA** (typical add-on context): Epic Hyperspace / Cerner application performance logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: ehr:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"ehr:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by server_node, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_rt > 3000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **EHR System Response Time Monitoring**): table _time, server_node, avg_rt, p95_rt, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95 response time by server), Heatmap (server × time), Single value (current p95).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how fast the chart and orders respond for care teams. We help you catch slowdowns before patient care feels the drag.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "21.3.2",
              "n": "Clinical Application Uptime SLA Tracking",
              "c": "critical",
              "f": "intermediate",
              "v": "Clinical applications require 99.9%+ uptime for patient care continuity. Tracking SLA compliance ensures service-level commitments are met and documented.",
              "t": "Synthetic monitoring, application health checks via HEC",
              "d": "`index=healthcare` `sourcetype=\"app:health\"` fields `app_name`, `status`, `response_code`",
              "q": "index=healthcare sourcetype=\"app:health\"\n| eval up=if(status=\"ok\" OR response_code=200, 1, 0)\n| bin _time span=1h\n| stats avg(up) as avail_pct by app_name, _time\n| eval avail_pct=round(avail_pct*100,3)\n| where avail_pct < 99.9\n| table _time, app_name, avail_pct",
              "m": "Deploy health check probes against critical clinical applications. Calculate availability per hour and month. Generate SLA compliance reports for governance. **Domain context:** Healthcare SLAs often distinguish scheduled maintenance vs unplanned outage; exclude planned windows from availability math if your governance requires it. **Splunk:** `avg(up)` per hour is binary uptime only if probes are evenly spaced—prefer explicit downtime events or `sum(up)/count` with fixed probe interval.",
              "z": "Line chart (availability over time), Gauge (monthly SLA %), Table (apps below target).",
              "kfp": "SLA noise during synthetic-monitor blind spots, certificate rotations, VDI image pushes, or disaster-recovery test windows that the service owner called out on status.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Synthetic monitoring, application health checks via HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"app:health\"` fields `app_name`, `status`, `response_code`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy health check probes against critical clinical applications. Calculate availability per hour and month. Generate SLA compliance reports for governance. **Domain context:** Healthcare SLAs often distinguish scheduled maintenance vs unplanned outage; exclude planned windows from availability math if your governance requires it. **Splunk:** `avg(up)` per hour is binary uptime only if probes are evenly spaced—prefer explicit downtime events or `sum(up)/count` with fixed probe interval.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"app:health\"\n| eval up=if(status=\"ok\" OR response_code=200, 1, 0)\n| bin _time span=1h\n| stats avg(up) as avail_pct by app_name, _time\n| eval avail_pct=round(avail_pct*100,3)\n| where avail_pct < 99.9\n| table _time, app_name, avail_pct\n```\n\nUnderstanding this SPL\n\n**Clinical Application Uptime SLA Tracking** — Clinical applications require 99.9%+ uptime for patient care continuity. Tracking SLA compliance ensures service-level commitments are met and documented.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"app:health\"` fields `app_name`, `status`, `response_code`. **App/TA** (typical add-on context): Synthetic monitoring, application health checks via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: app:health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"app:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **up** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by app_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avail_pct < 99.9` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Clinical Application Uptime SLA Tracking**): table _time, app_name, avail_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Line chart (availability over time), Gauge (monthly SLA %), Table (apps below target).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch whether clinical apps stay up for staff. We help you catch outages before wards and clinics lose the tools they need.",
              "mtype": [
                "Availability"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.3",
              "n": "Nurse Call System Response Time",
              "c": "high",
              "f": "beginner",
              "v": "Nurse call response time is a key patient satisfaction and safety metric. Monitoring enables staffing optimization and regulatory compliance.",
              "t": "Nurse call system integration via syslog or HEC",
              "d": "`index=healthcare` `sourcetype=\"nursecall:event\"` fields `call_id`, `call_time`, `response_time`, `unit`",
              "q": "index=healthcare sourcetype=\"nursecall:event\"\n| eval response_sec=response_time\n| bin _time span=1h\n| stats avg(response_sec) as avg_response, perc95(response_sec) as p95_response, count by unit, _time\n| where p95_response > 300\n| table _time, unit, avg_response, p95_response, count",
              "m": "Integrate nurse call system events via syslog or API. Track response times by unit and shift. Alert when p95 exceeds 5 minutes. Report for CMS quality metrics. **Domain context:** Response time is a common patient experience and safety proxy; thresholds vary by acuity level (e.g. ICU vs med-surg)—segment by `unit` or priority if available. **Splunk:** Confirm whether `response_time` is seconds or milliseconds; normalize before `stats`.",
              "z": "Bar chart (avg response by unit), Line chart (trend over shifts), Table (units exceeding threshold).",
              "kfp": "Response gaps during new ward go-live, console firmware, staffing surges, or call-light wiring tests the nursing supervisor still judged within policy.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Nurse call system integration via syslog or HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"nursecall:event\"` fields `call_id`, `call_time`, `response_time`, `unit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate nurse call system events via syslog or API. Track response times by unit and shift. Alert when p95 exceeds 5 minutes. Report for CMS quality metrics. **Domain context:** Response time is a common patient experience and safety proxy; thresholds vary by acuity level (e.g. ICU vs med-surg)—segment by `unit` or priority if available. **Splunk:** Confirm whether `response_time` is seconds or milliseconds; normalize before `stats`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"nursecall:event\"\n| eval response_sec=response_time\n| bin _time span=1h\n| stats avg(response_sec) as avg_response, perc95(response_sec) as p95_response, count by unit, _time\n| where p95_response > 300\n| table _time, unit, avg_response, p95_response, count\n```\n\nUnderstanding this SPL\n\n**Nurse Call System Response Time** — Nurse call response time is a key patient satisfaction and safety metric. Monitoring enables staffing optimization and regulatory compliance.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"nursecall:event\"` fields `call_id`, `call_time`, `response_time`, `unit`. **App/TA** (typical add-on context): Nurse call system integration via syslog or HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: nursecall:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"nursecall:event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **response_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by unit, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_response > 300` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Nurse Call System Response Time**): table _time, unit, avg_response, p95_response, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (avg response by unit), Line chart (trend over shifts), Table (units exceeding threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch nurse call answer times. We help you catch delays before a patient waits too long for help.",
              "mtype": [
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.4",
              "n": "Blood Bank Refrigerator Temperature Compliance",
              "c": "critical",
              "f": "beginner",
              "v": "Blood products must be stored at 1-6°C per AABB standards. Temperature excursions require immediate action and documentation to prevent product waste and patient harm.",
              "t": "Environmental sensors via MQTT/Edge Hub",
              "d": "`index=healthcare` `sourcetype=\"bloodbank:temp\"` fields `unit_id`, `temp_c`, `alarm_status`",
              "q": "index=healthcare sourcetype=\"bloodbank:temp\"\n| where temp_c < 1 OR temp_c > 6\n| stats earliest(_time) as excursion_start, latest(_time) as excursion_end, min(temp_c) as min_temp, max(temp_c) as max_temp by unit_id\n| eval duration_min=round((excursion_end-excursion_start)/60,1)\n| table unit_id, excursion_start, excursion_end, duration_min, min_temp, max_temp",
              "m": "Deploy temperature sensors on all blood storage units with 1-minute sampling. Alert immediately on out-of-range readings. Generate compliance documentation for AABB accreditation. **Domain context:** AABB Standards require controlled storage with documented excursion investigation; cumulative time out of range may matter as much as a single point breach. **Splunk:** Use stateful alerting (e.g. duration of excursion) rather than raw `where temp_c` on every row if sensors are noisy.",
              "z": "Line chart (temperature trend with range bands), Alert timeline, Table (excursions by unit).",
              "kfp": "Temperature excursions during door-open drills, defrost, inventory counts, or temporary power from generator tests that blood bank and facilities coordinated.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Environmental sensors via MQTT/Edge Hub.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"bloodbank:temp\"` fields `unit_id`, `temp_c`, `alarm_status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDeploy temperature sensors on all blood storage units with 1-minute sampling. Alert immediately on out-of-range readings. Generate compliance documentation for AABB accreditation. **Domain context:** AABB Standards require controlled storage with documented excursion investigation; cumulative time out of range may matter as much as a single point breach. **Splunk:** Use stateful alerting (e.g. duration of excursion) rather than raw `where temp_c` on every row if sensors are noisy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"bloodbank:temp\"\n| where temp_c < 1 OR temp_c > 6\n| stats earliest(_time) as excursion_start, latest(_time) as excursion_end, min(temp_c) as min_temp, max(temp_c) as max_temp by unit_id\n| eval duration_min=round((excursion_end-excursion_start)/60,1)\n| table unit_id, excursion_start, excursion_end, duration_min, min_temp, max_temp\n```\n\nUnderstanding this SPL\n\n**Blood Bank Refrigerator Temperature Compliance** — Blood products must be stored at 1-6°C per AABB standards. Temperature excursions require immediate action and documentation to prevent product waste and patient harm.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"bloodbank:temp\"` fields `unit_id`, `temp_c`, `alarm_status`. **App/TA** (typical add-on context): Environmental sensors via MQTT/Edge Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: bloodbank:temp. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"bloodbank:temp\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where temp_c < 1 OR temp_c > 6` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by unit_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Blood Bank Refrigerator Temperature Compliance**): table unit_id, excursion_start, excursion_end, duration_min, min_temp, max_temp\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Line chart (temperature trend with range bands), Alert timeline, Table (excursions by unit).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch blood bank fridge and freezer temperatures. We help you catch excursions before product safety is in doubt.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.5",
              "n": "Pharmaceutical Cold Chain Deviation Alerting",
              "c": "critical",
              "f": "intermediate",
              "v": "Many pharmaceuticals (vaccines, biologics) require strict temperature control. Excursions can render medications ineffective, creating patient safety and financial risk.",
              "t": "Cold chain monitoring sensors via MQTT/Edge Hub",
              "d": "`index=healthcare` `sourcetype=\"coldchain:sensor\"` fields `location`, `temp_c`, `setpoint_c`, `tolerance_c`",
              "q": "index=healthcare sourcetype=\"coldchain:sensor\"\n| eval low=setpoint_c-tolerance_c, high=setpoint_c+tolerance_c\n| eval excursion=if(temp_c < low OR temp_c > high, 1, 0)\n| where excursion=1\n| stats earliest(_time) as start, latest(_time) as end, max(abs(temp_c-setpoint_c)) as max_deviation by location\n| eval duration_min=round((end-start)/60,1)\n| table location, start, end, duration_min, max_deviation",
              "m": "Integrate cold chain sensors across pharmacy, lab, and storage areas. Configure product-specific setpoints and tolerances. Alert pharmacy immediately on excursions for product assessment. **Domain context:** USP <797>/<800> and manufacturer product information define storage; MAP excursions often require QA disposition—Splunk should log excursion duration and product lot if integrated from WMS. **Splunk:** Drive `low`/`high` from a lookup by product SKU rather than a single `setpoint_c` if mixed inventory.",
              "z": "Time series (temperature vs bounds), Table (active excursions), Duration chart.",
              "kfp": "Cold chain alerts during short dock handoffs, aircraft delay tarmac holds, or lane mapping errors the logistics partner rebooked the same day.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cold chain monitoring sensors via MQTT/Edge Hub.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"coldchain:sensor\"` fields `location`, `temp_c`, `setpoint_c`, `tolerance_c`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate cold chain sensors across pharmacy, lab, and storage areas. Configure product-specific setpoints and tolerances. Alert pharmacy immediately on excursions for product assessment. **Domain context:** USP <797>/<800> and manufacturer product information define storage; MAP excursions often require QA disposition—Splunk should log excursion duration and product lot if integrated from WMS. **Splunk:** Drive `low`/`high` from a lookup by product SKU rather than a single `setpoint_c` if mixed…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"coldchain:sensor\"\n| eval low=setpoint_c-tolerance_c, high=setpoint_c+tolerance_c\n| eval excursion=if(temp_c < low OR temp_c > high, 1, 0)\n| where excursion=1\n| stats earliest(_time) as start, latest(_time) as end, max(abs(temp_c-setpoint_c)) as max_deviation by location\n| eval duration_min=round((end-start)/60,1)\n| table location, start, end, duration_min, max_deviation\n```\n\nUnderstanding this SPL\n\n**Pharmaceutical Cold Chain Deviation Alerting** — Many pharmaceuticals (vaccines, biologics) require strict temperature control. Excursions can render medications ineffective, creating patient safety and financial risk.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"coldchain:sensor\"` fields `location`, `temp_c`, `setpoint_c`, `tolerance_c`. **App/TA** (typical add-on context): Cold chain monitoring sensors via MQTT/Edge Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: coldchain:sensor. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"coldchain:sensor\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **low** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **excursion** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where excursion=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Pharmaceutical Cold Chain Deviation Alerting**): table location, start, end, duration_min, max_deviation\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time series (temperature vs bounds), Table (active excursions), Duration chart.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch pharma cold chain in transit and storage. We help you catch breaks before product must be written off.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.6",
              "n": "Lab Information System Result Turnaround Time",
              "c": "high",
              "f": "intermediate",
              "v": "Lab TAT directly impacts clinical decision-making and patient throughput. Tracking TAT by test type identifies bottlenecks in specimen processing and analysis.",
              "t": "LIS integration via HL7 or HEC",
              "d": "`index=healthcare` `sourcetype=\"lis:result\"` fields `order_id`, `test_type`, `collected_time`, `resulted_time`",
              "q": "index=healthcare sourcetype=\"lis:result\"\n| eval tat_min=round((resulted_time-collected_time)/60,1)\n| where tat_min > 0\n| stats avg(tat_min) as avg_tat, perc95(tat_min) as p95_tat, count by test_type\n| sort - p95_tat\n| table test_type, avg_tat, p95_tat, count",
              "m": "Parse HL7 ORM/ORU messages or LIS exports for specimen collection and result times. Track by test type and priority. Alert on stat tests exceeding critical TAT thresholds. **Domain context:** CAP / CLIA programs often set TAT expectations by test class; stat vs routine must be modeled separately. **Splunk:** `collected_time`/`resulted_time` must be epoch seconds or coerced with `strptime`—HL7 timestamps are often local time with timezone.",
              "z": "Bar chart (TAT by test type), Line chart (TAT trend), Table (tests exceeding target).",
              "kfp": "Turnaround noise during instrument calibration, add-on reflex testing, pathologist collaboration blocks, or batch reruns the lab medical director approves in workflow.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: LIS integration via HL7 or HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"lis:result\"` fields `order_id`, `test_type`, `collected_time`, `resulted_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse HL7 ORM/ORU messages or LIS exports for specimen collection and result times. Track by test type and priority. Alert on stat tests exceeding critical TAT thresholds. **Domain context:** CAP / CLIA programs often set TAT expectations by test class; stat vs routine must be modeled separately. **Splunk:** `collected_time`/`resulted_time` must be epoch seconds or coerced with `strptime`—HL7 timestamps are often local time with timezone.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"lis:result\"\n| eval tat_min=round((resulted_time-collected_time)/60,1)\n| where tat_min > 0\n| stats avg(tat_min) as avg_tat, perc95(tat_min) as p95_tat, count by test_type\n| sort - p95_tat\n| table test_type, avg_tat, p95_tat, count\n```\n\nUnderstanding this SPL\n\n**Lab Information System Result Turnaround Time** — Lab TAT directly impacts clinical decision-making and patient throughput. Tracking TAT by test type identifies bottlenecks in specimen processing and analysis.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"lis:result\"` fields `order_id`, `test_type`, `collected_time`, `resulted_time`. **App/TA** (typical add-on context): LIS integration via HL7 or HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: lis:result. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"lis:result\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **tat_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where tat_min > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by test_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Lab Information System Result Turnaround Time**): table test_type, avg_tat, p95_tat, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (TAT by test type), Line chart (TAT trend), Table (tests exceeding target).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how fast lab results return. We help you catch delays before treatment decisions wait on data.",
              "mtype": [
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.7",
              "n": "FDA 21 CFR Part 11 Electronic Signature Audit Trail",
              "c": "high",
              "f": "advanced",
              "v": "FDA regulations require audit trails for electronic records and signatures in pharmaceutical manufacturing and clinical systems. Monitoring ensures continuous compliance.",
              "t": "GxP system audit trail exports via HEC",
              "d": "`index=healthcare` `sourcetype=\"esig:audit\"` fields `system`, `user`, `action`, `record_id`, `signature_valid`",
              "q": "index=healthcare sourcetype=\"esig:audit\"\n| where action IN (\"sign\", \"countersign\", \"approve\", \"reject\")\n| eval sig_issue=if(signature_valid=\"false\" OR isnull(signature_valid), 1, 0)\n| stats count as total_sigs, sum(sig_issue) as sig_issues by system, user\n| where sig_issues > 0\n| table system, user, total_sigs, sig_issues",
              "m": "Configure GxP systems to export electronic signature audit trails to Splunk. Track signature validity, sequential signing, and unauthorized signature attempts. Generate periodic compliance reports. **Domain context:** 21 CFR Part 11 requires unique, attributable signatures, audit trails, and record retention—align Splunk retention with your validated records policy. **Splunk:** Use `validate`/`warrant` searches in non-prod first; restrict access to `esig:audit` index; consider signing exports if used as system of record.",
              "z": "Table (signature audit), Bar chart (issues by system), Timeline (signature events).",
              "kfp": "Signature anomaly noise during e-sign refreshes, witness-co-sign workflows, EHR version upgrades, or break-glass continuity events compliance already tied to a ticket.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GxP system audit trail exports via HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"esig:audit\"` fields `system`, `user`, `action`, `record_id`, `signature_valid`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfigure GxP systems to export electronic signature audit trails to Splunk. Track signature validity, sequential signing, and unauthorized signature attempts. Generate periodic compliance reports. **Domain context:** 21 CFR Part 11 requires unique, attributable signatures, audit trails, and record retention—align Splunk retention with your validated records policy. **Splunk:** Use `validate`/`warrant` searches in non-prod first; restrict access to `esig:audit` index; consider signing exports if…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"esig:audit\"\n| where action IN (\"sign\", \"countersign\", \"approve\", \"reject\")\n| eval sig_issue=if(signature_valid=\"false\" OR isnull(signature_valid), 1, 0)\n| stats count as total_sigs, sum(sig_issue) as sig_issues by system, user\n| where sig_issues > 0\n| table system, user, total_sigs, sig_issues\n```\n\nUnderstanding this SPL\n\n**FDA 21 CFR Part 11 Electronic Signature Audit Trail** — FDA regulations require audit trails for electronic records and signatures in pharmaceutical manufacturing and clinical systems. Monitoring ensures continuous compliance.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"esig:audit\"` fields `system`, `user`, `action`, `record_id`, `signature_valid`. **App/TA** (typical add-on context): GxP system audit trail exports via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: esig:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"esig:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action IN (\"sign\", \"countersign\", \"approve\", \"reject\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **sig_issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by system, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where sig_issues > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **FDA 21 CFR Part 11 Electronic Signature Audit Trail**): table system, user, total_sigs, sig_issues\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (signature audit), Bar chart (issues by system), Timeline (signature events).",
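              "_splSketch": "Hedged, illustrative sketch (auxiliary note, not a documented schema field; the daily span is an assumption): trend signature issues per system to feed the periodic compliance report described in Step 1, alongside the per-user table above.\n\nindex=healthcare sourcetype=\"esig:audit\"\n| eval sig_issue=if(signature_valid=\"false\" OR isnull(signature_valid), 1, 0)\n| timechart span=1d sum(sig_issue) as sig_issues by system",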
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch e-signatures and controlled actions in systems of record. We help you keep audit trails defensible for regulators.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.8",
              "n": "GxP System Change Control Log Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Validated system changes must follow documented change control processes. Monitoring detects unauthorized changes and verifies proper approval workflows.",
              "t": "GxP system change logs via HEC",
              "d": "`index=healthcare` `sourcetype=\"gxp:change\"` fields `system`, `change_type`, `approved`, `approver`, `change_id`",
              "q": "index=healthcare sourcetype=\"gxp:change\"\n| where approved!=\"yes\" OR isnull(approver)\n| stats count by system, change_type, approved\n| sort - count\n| table system, change_type, approved, count",
              "m": "Ingest change control system events. Alert on unapproved changes to validated systems. Cross-reference with change advisory board records. Generate deviation reports for QA review. **Domain context:** GAMP 5 and CSV/CSA practices expect traceability from change ticket to validation evidence—map `change_id` to ITSM. **Splunk:** `approved!=\"yes\"` may miss locale variants (`Y`, `Approved`); normalize with `case`/`lookup`.",
              "z": "Table (unapproved changes), Bar chart (changes by system), Timeline (change events).",
              "kfp": "Unapproved rows during emergency changes, paperwork lag, ticket sync delay, or locale differences in approval fields that QA still accepted with evidence.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GxP system change logs via HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"gxp:change\"` fields `system`, `change_type`, `approved`, `approver`, `change_id`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest change control system events. Alert on unapproved changes to validated systems. Cross-reference with change advisory board records. Generate deviation reports for QA review. **Domain context:** GAMP 5 and CSV/CSA practices expect traceability from change ticket to validation evidence—map `change_id` to ITSM. **Splunk:** `approved!=\"yes\"` may miss locale variants (`Y`, `Approved`); normalize with `case`/`lookup`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"gxp:change\"\n| where approved!=\"yes\" OR isnull(approver)\n| stats count by system, change_type, approved\n| sort - count\n| table system, change_type, approved, count\n```\n\nUnderstanding this SPL\n\n**GxP System Change Control Log Monitoring** — Validated system changes must follow documented change control processes. Monitoring detects unauthorized changes and verifies proper approval workflows.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"gxp:change\"` fields `system`, `change_type`, `approved`, `approver`, `change_id`. **App/TA** (typical add-on context): GxP system change logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: gxp:change. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"gxp:change\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where approved!=\"yes\" OR isnull(approver)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by system, change_type, approved** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **GxP System Change Control Log Monitoring**): table system, change_type, approved, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unapproved changes), Bar chart (changes by system), Timeline (change events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch GxP change records for approved, traceable work. We help you catch unapproved or orphan changes before validation breaks.",
              "mtype": [
                "Change",
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "security",
              "_qs": 45,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.9",
              "n": "Clinical Trial Data Integrity Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "Clinical trial data requires ALCOA+ principles (Attributable, Legible, Contemporaneous, Original, Accurate). Monitoring audit trails ensures data integrity for regulatory submissions.",
              "t": "CTMS/EDC system audit exports via HEC",
              "d": "`index=healthcare` `sourcetype=\"ctms:audit\"` fields `study_id`, `site_id`, `user`, `action`, `field_changed`, `old_value`, `new_value`",
              "q": "index=healthcare sourcetype=\"ctms:audit\"\n| where action IN (\"modify\", \"delete\", \"unblind\")\n| stats count as modifications, dc(field_changed) as fields_changed, dc(user) as users by study_id, site_id\n| where modifications > 100 OR fields_changed > 20\n| sort - modifications\n| table study_id, site_id, modifications, fields_changed, users",
              "m": "Export EDC/CTMS audit trails to Splunk. Flag excessive modifications, unusual data patterns, and retrospective changes. Generate risk-based monitoring reports for clinical operations. **Domain context:** ICH E6(R2) encourages risk-based monitoring; ALCOA+ applies to source data—blind-breaking `unblind` actions require heightened scrutiny. **Splunk:** De-identify subjects in dashboards; use `study_id`/`site_id` only per protocol privacy plan.",
              "z": "Table (sites by modification count), Bar chart (changes by study), Timeline (high-activity periods).",
              "kfp": "Integrity flags on blinded data loads, redaction for export, query replay after an EDC vendor patch, or monitor-driven protocol amendments IRB already approved.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CTMS/EDC system audit exports via HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"ctms:audit\"` fields `study_id`, `site_id`, `user`, `action`, `field_changed`, `old_value`, `new_value`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nExport EDC/CTMS audit trails to Splunk. Flag excessive modifications, unusual data patterns, and retrospective changes. Generate risk-based monitoring reports for clinical operations. **Domain context:** ICH E6(R2) encourages risk-based monitoring; ALCOA+ applies to source data—blind-breaking `unblind` actions require heightened scrutiny. **Splunk:** De-identify subjects in dashboards; use `study_id`/`site_id` only per protocol privacy plan.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"ctms:audit\"\n| where action IN (\"modify\", \"delete\", \"unblind\")\n| stats count as modifications, dc(field_changed) as fields_changed, dc(user) as users by study_id, site_id\n| where modifications > 100 OR fields_changed > 20\n| sort - modifications\n| table study_id, site_id, modifications, fields_changed, users\n```\n\nUnderstanding this SPL\n\n**Clinical Trial Data Integrity Monitoring** — Clinical trial data requires ALCOA+ principles (Attributable, Legible, Contemporaneous, Original, Accurate). Monitoring audit trails ensures data integrity for regulatory submissions.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"ctms:audit\"` fields `study_id`, `site_id`, `user`, `action`, `field_changed`, `old_value`, `new_value`. **App/TA** (typical add-on context): CTMS/EDC system audit exports via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: ctms:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"ctms:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action IN (\"modify\", \"delete\", \"unblind\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by study_id, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where modifications > 100 OR fields_changed > 20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Clinical Trial Data Integrity Monitoring**): table study_id, site_id, modifications, fields_changed, users\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (sites by modification count), Bar chart (changes by study), Timeline (high-activity periods).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch trial data paths for odd edits or gaps. We help you catch integrity issues before a study is questioned.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.10",
              "n": "Radiology Reading Turnaround Time",
              "c": "high",
              "f": "intermediate",
              "v": "Radiology report TAT affects clinical decisions and patient discharge timing. Tracking by modality and priority ensures SLA compliance and identifies workflow bottlenecks.",
              "t": "RIS/PACS integration via HL7 or HEC",
              "d": "`index=healthcare` `sourcetype=\"ris:report\"` fields `accession`, `modality`, `priority`, `exam_complete_time`, `report_finalized_time`",
              "q": "index=healthcare sourcetype=\"ris:report\"\n| eval tat_min=round((report_finalized_time-exam_complete_time)/60,1)\n| where tat_min > 0\n| stats avg(tat_min) as avg_tat, perc95(tat_min) as p95_tat by modality, priority\n| sort priority, -p95_tat\n| table modality, priority, avg_tat, p95_tat",
              "m": "Parse RIS events for exam completion and report finalization timestamps. Track by modality (CT, MR, XR) and priority (stat, urgent, routine). Alert on stat studies exceeding 1-hour TAT. **Domain context:** Joint Commission and operational targets often use modality-specific TAT; peer review and resident reads can skew distributions—segment when possible. **Splunk:** Use `strptime` if times are strings; ensure `exam_complete_time` reflects acquisition end, not order start.",
              "z": "Bar chart (TAT by modality), Line chart (TAT trend), Table (exams exceeding SLA).",
              "kfp": "Reading backlog spikes when the worklist floods after PACS upgrades, telerad overflow, addendum policies, or peer-learning conferences thin staffing.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: RIS/PACS integration via HL7 or HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"ris:report\"` fields `accession`, `modality`, `priority`, `exam_complete_time`, `report_finalized_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse RIS events for exam completion and report finalization timestamps. Track by modality (CT, MR, XR) and priority (stat, urgent, routine). Alert on stat studies exceeding 1-hour TAT. **Domain context:** Joint Commission and operational targets often use modality-specific TAT; peer review and resident reads can skew distributions—segment when possible. **Splunk:** Use `strptime` if times are strings; ensure `exam_complete_time` reflects acquisition end, not order start.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"ris:report\"\n| eval tat_min=round((report_finalized_time-exam_complete_time)/60,1)\n| where tat_min > 0\n| stats avg(tat_min) as avg_tat, perc95(tat_min) as p95_tat by modality, priority\n| sort priority, -p95_tat\n| table modality, priority, avg_tat, p95_tat\n```\n\nUnderstanding this SPL\n\n**Radiology Reading Turnaround Time** — Radiology report TAT affects clinical decisions and patient discharge timing. Tracking by modality and priority ensures SLA compliance and identifies workflow bottlenecks.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"ris:report\"` fields `accession`, `modality`, `priority`, `exam_complete_time`, `report_finalized_time`. **App/TA** (typical add-on context): RIS/PACS integration via HL7 or HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: ris:report. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"ris:report\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **tat_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where tat_min > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by modality, priority** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Radiology Reading Turnaround Time**): table modality, priority, avg_tat, p95_tat\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (TAT by modality), Line chart (TAT trend), Table (exams exceeding SLA).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how quickly imaging gets read. We help you catch backlog before cancer and ED pathways stall.",
              "mtype": [
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.11",
              "n": "Patient Flow and Bed Management Analytics",
              "c": "high",
              "f": "intermediate",
              "v": "Real-time bed occupancy and patient flow data supports throughput optimization, reduces boarding, and improves patient placement decisions.",
              "t": "ADT feed via HL7 or HEC",
              "d": "`index=healthcare` `sourcetype=\"adtflow:event\"` fields `unit`, `event_type`, `bed_id`, `patient_class`",
              "q": "index=healthcare sourcetype=\"adtflow:event\"\n| where event_type IN (\"admit\", \"transfer\", \"discharge\")\n| bin _time span=1h\n| stats dc(bed_id) as beds_used, sum(eval(if(event_type=\"admit\",1,0))) as admits, sum(eval(if(event_type=\"discharge\",1,0))) as discharges by unit, _time\n| eval net_flow=admits-discharges\n| table _time, unit, beds_used, admits, discharges, net_flow",
              "m": "Ingest ADT (Admit-Discharge-Transfer) HL7 messages. Calculate real-time census by unit. Track length of stay distributions and boarding times. Support capacity planning. **Domain context:** ADT feeds drive bed management and surge planning; HL7 v2 ADT^Axx messages must be de-duplicated (ACK/re-send storms). **Splunk:** Use message control ID in `props` to drop duplicates; PHI handling—minimize patient identifiers in analytics indexes.",
              "z": "Stacked area (census by unit), Line chart (admits vs discharges), Table (units at capacity).",
              "kfp": "Bed and flow noise during seasonal flu, diversion hours, new unit openings, or EMR go-live when bed boards lag manual staffing reality briefly.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ADT feed via HL7 or HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"adtflow:event\"` fields `unit`, `event_type`, `bed_id`, `patient_class`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest ADT (Admit-Discharge-Transfer) HL7 messages. Calculate real-time census by unit. Track length of stay distributions and boarding times. Support capacity planning. **Domain context:** ADT feeds drive bed management and surge planning; HL7 v2 ADT^Axx messages must be de-duplicated (ACK/re-send storms). **Splunk:** Use message control ID in `props` to drop duplicates; PHI handling—minimize patient identifiers in analytics indexes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"adtflow:event\"\n| where event_type IN (\"admit\", \"transfer\", \"discharge\")\n| bin _time span=1h\n| stats dc(bed_id) as beds_used, sum(eval(if(event_type=\"admit\",1,0))) as admits, sum(eval(if(event_type=\"discharge\",1,0))) as discharges by unit, _time\n| eval net_flow=admits-discharges\n| table _time, unit, beds_used, admits, discharges, net_flow\n```\n\nUnderstanding this SPL\n\n**Patient Flow and Bed Management Analytics** — Real-time bed occupancy and patient flow data supports throughput optimization, reduces boarding, and improves patient placement decisions.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"adtflow:event\"` fields `unit`, `event_type`, `bed_id`, `patient_class`. **App/TA** (typical add-on context): ADT feed via HL7 or HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: adtflow:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"adtflow:event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where event_type IN (\"admit\", \"transfer\", \"discharge\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by unit, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **net_flow** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Patient Flow and Bed Management Analytics**): table _time, unit, beds_used, admits, discharges, net_flow\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Stacked area (census by unit), Line chart (admits vs discharges), Table (units at capacity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch patient flow, beds, and bottlenecks. We help you catch crowding before care quality and access suffer.",
              "mtype": [
                "Capacity"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.12",
              "n": "Emergency Department Wait Time Tracking",
              "c": "critical",
              "f": "intermediate",
              "v": "ED wait times impact patient outcomes and satisfaction scores. Real-time tracking enables dynamic resource allocation and identifies systemic throughput issues.",
              "t": "EDIS integration via HEC",
              "d": "`index=healthcare` `sourcetype=\"edis:event\"` fields `visit_id`, `triage_time`, `provider_time`, `disposition_time`",
              "q": "index=healthcare sourcetype=\"edis:event\"\n| eval door_to_provider_min=round((provider_time-triage_time)/60,1)\n| eval total_los_min=round((disposition_time-triage_time)/60,1)\n| where door_to_provider_min > 0\n| bin _time span=1h\n| stats avg(door_to_provider_min) as avg_wait, perc95(door_to_provider_min) as p95_wait, avg(total_los_min) as avg_los by _time\n| table _time, avg_wait, p95_wait, avg_los",
              "m": "Integrate EDIS timestamps for triage, provider evaluation, and disposition. Track door-to-provider time as key metric. Alert when wait times exceed CMS benchmarks. Correlate with arrival volume and staffing. **Domain context:** ED crowding metrics tie to quality programs (e.g. left-without-being-seen); triage level (ESI) should filter comparisons. **Splunk:** `triage_time`/`provider_time` must share timezone and epoch representation—validate with `strptime` if string-formatted.",
              "z": "Line chart (wait time trend), Gauge (current avg wait), Bar chart (wait by acuity level).",
              "kfp": "Wait time spikes when ambulances batch arrivals, triage re-stacks during infection control events, or fast-track is paused for a trauma alert the leadership approved.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EDIS integration via HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"edis:event\"` fields `visit_id`, `triage_time`, `provider_time`, `disposition_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate EDIS timestamps for triage, provider evaluation, and disposition. Track door-to-provider time as key metric. Alert when wait times exceed CMS benchmarks. Correlate with arrival volume and staffing. **Domain context:** ED crowding metrics tie to quality programs (e.g. left-without-being-seen); triage level (ESI) should filter comparisons. **Splunk:** `triage_time`/`provider_time` must share timezone and epoch representation—validate with `strptime` if string-formatted.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"edis:event\"\n| eval door_to_provider_min=round((provider_time-triage_time)/60,1)\n| eval total_los_min=round((disposition_time-triage_time)/60,1)\n| where door_to_provider_min > 0\n| bin _time span=1h\n| stats avg(door_to_provider_min) as avg_wait, perc95(door_to_provider_min) as p95_wait, avg(total_los_min) as avg_los by _time\n| table _time, avg_wait, p95_wait, avg_los\n```\n\nUnderstanding this SPL\n\n**Emergency Department Wait Time Tracking** — ED wait times impact patient outcomes and satisfaction scores. Real-time tracking enables dynamic resource allocation and identifies systemic throughput issues.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"edis:event\"` fields `visit_id`, `triage_time`, `provider_time`, `disposition_time`. **App/TA** (typical add-on context): EDIS integration via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: edis:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"edis:event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **door_to_provider_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **total_los_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where door_to_provider_min > 0` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Emergency Department Wait Time Tracking**): table _time, avg_wait, p95_wait, avg_los\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) 
as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (wait time trend), Gauge (current avg wait), Bar chart (wait by acuity level).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch emergency wait times. We help you catch surges before the department is overwhelmed.",
              "mtype": [
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.13",
              "n": "Surgical Suite Utilization and Turnover Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "OR utilization and turnover time directly impact surgical throughput and revenue. Monitoring enables scheduling optimization and identifies delays.",
              "t": "OR scheduling system via HEC",
              "d": "`index=healthcare` `sourcetype=\"or:schedule\"` fields `or_room`, `case_start`, `case_end`, `turnover_start`, `turnover_end`",
              "q": "index=healthcare sourcetype=\"or:schedule\"\n| eval case_duration_min=round((case_end-case_start)/60,1)\n| eval turnover_min=round((turnover_end-turnover_start)/60,1)\n| stats avg(case_duration_min) as avg_case, avg(turnover_min) as avg_turnover, count as cases by or_room\n| eval utilization_pct=round((avg_case*cases)/((avg_case+avg_turnover)*cases)*100,1)\n| table or_room, cases, avg_case, avg_turnover, utilization_pct",
              "m": "Ingest OR scheduling and tracking data. Calculate prime time utilization and turnover metrics. Identify rooms with excessive turnover for process improvement. Report weekly for surgical services leadership. **Domain context:** Block utilization vs staffed hours differs—use `or_room` master data for staffed hours if available. **Splunk:** The sample `utilization_pct` is a rough heuristic; prefer block minutes scheduled vs used from the scheduling system when possible.",
              "z": "Bar chart (utilization by room), Line chart (turnover trend), Table (room performance).",
              "kfp": "Utilization noise while sterile processing runs long, add-on cases slip, or block time is reallocated for emergent add-ons the surgical director prioritized.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OR scheduling system via HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"or:schedule\"` fields `or_room`, `case_start`, `case_end`, `turnover_start`, `turnover_end`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest OR scheduling and tracking data. Calculate prime time utilization and turnover metrics. Identify rooms with excessive turnover for process improvement. Report weekly for surgical services leadership. **Domain context:** Block utilization vs staffed hours differs—use `or_room` master data for staffed hours if available. **Splunk:** The sample `utilization_pct` is a rough heuristic; prefer block minutes scheduled vs used from the scheduling system when possible.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"or:schedule\"\n| eval case_duration_min=round((case_end-case_start)/60,1)\n| eval turnover_min=round((turnover_end-turnover_start)/60,1)\n| stats avg(case_duration_min) as avg_case, avg(turnover_min) as avg_turnover, count as cases by or_room\n| eval utilization_pct=round((avg_case*cases)/((avg_case+avg_turnover)*cases)*100,1)\n| table or_room, cases, avg_case, avg_turnover, utilization_pct\n```\n\nUnderstanding this SPL\n\n**Surgical Suite Utilization and Turnover Monitoring** — OR utilization and turnover time directly impact surgical throughput and revenue. Monitoring enables scheduling optimization and identifies delays.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"or:schedule\"` fields `or_room`, `case_start`, `case_end`, `turnover_start`, `turnover_end`. **App/TA** (typical add-on context): OR scheduling system via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: or:schedule. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"or:schedule\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **case_duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **turnover_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by or_room** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Surgical Suite Utilization and Turnover Monitoring**): table or_room, cases, avg_case, avg_turnover, utilization_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (utilization by room), Line chart (turnover trend), Table (room performance).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch surgical suite use and room turns. We help you catch schedule drag before later cases are bumped.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.14",
              "n": "Biomedical Equipment Preventive Maintenance Compliance",
              "c": "medium",
              "f": "beginner",
              "v": "Joint Commission requires documented PM programs for medical equipment. Tracking compliance prevents accreditation findings and ensures equipment reliability.",
              "t": "CMMS biomedical module via HEC",
              "d": "`index=healthcare` `sourcetype=\"cmms:biomed\"` fields `asset_id`, `pm_due_date`, `pm_completed_date`, `risk_level`",
              "q": "index=healthcare sourcetype=\"cmms:biomed\"\n| where isnull(pm_completed_date) AND pm_due_date < now()\n| eval days_overdue=round((now()-pm_due_date)/86400,0)\n| stats count as overdue_count by risk_level\n| sort - overdue_count\n| table risk_level, overdue_count",
              "m": "Integrate CMMS biomedical work orders. Track PM completion rates by risk level and department. Alert on overdue high-risk equipment. Generate compliance dashboards for accreditation readiness. **Domain context:** Joint Commission EC.02.04.01 and related CMS expectations reference equipment risk stratification and PM completion. **Splunk:** `pm_due_date` comparisons must be epoch-aligned; `now()` uses search-time TZ.",
              "z": "Gauge (PM compliance %), Bar chart (overdue by risk level), Table (overdue equipment).",
              "kfp": "PM compliance dips when service contracts roll, new bio-med interns onboard, or equipment is loaned to sister sites under a written variance.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CMMS biomedical module via HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"cmms:biomed\"` fields `asset_id`, `pm_due_date`, `pm_completed_date`, `risk_level`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate CMMS biomedical work orders. Track PM completion rates by risk level and department. Alert on overdue high-risk equipment. Generate compliance dashboards for accreditation readiness. **Domain context:** Joint Commission EC.02.04.01 and related CMS expectations reference equipment risk stratification and PM completion. **Splunk:** `pm_due_date` comparisons must be epoch-aligned; `now()` uses search-time TZ.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"cmms:biomed\"\n| where isnull(pm_completed_date) AND pm_due_date < now()\n| eval days_overdue=round((now()-pm_due_date)/86400,0)\n| stats count as overdue_count by risk_level\n| sort - overdue_count\n| table risk_level, overdue_count\n```\n\nUnderstanding this SPL\n\n**Biomedical Equipment Preventive Maintenance Compliance** — Joint Commission requires documented PM programs for medical equipment. Tracking compliance prevents accreditation findings and ensures equipment reliability.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"cmms:biomed\"` fields `asset_id`, `pm_due_date`, `pm_completed_date`, `risk_level`. **App/TA** (typical add-on context): CMMS biomedical module via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: cmms:biomed. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"cmms:biomed\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(pm_completed_date) AND pm_due_date < now()` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by risk_level** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Biomedical Equipment Preventive Maintenance Compliance**): table risk_level, overdue_count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (PM compliance %), Bar chart (overdue by risk level), Table (overdue equipment).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch preventive maintenance on clinical devices. We help you catch missed PM before equipment fails in use.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.15",
              "n": "Medication Administration Record Reconciliation",
              "c": "high",
              "f": "advanced",
              "v": "Discrepancies between medication orders and administration records indicate potential medication errors, a leading cause of adverse events. Reconciliation supports patient safety.",
              "t": "EHR MAR/pharmacy integration via HL7 or HEC",
              "d": "`index=healthcare` `sourcetype=\"mar:record\"` fields `order_id`, `med_name`, `scheduled_time`, `admin_time`, `admin_status`",
              "q": "index=healthcare sourcetype=\"mar:record\"\n| eval late_min=round((admin_time-scheduled_time)/60,1)\n| eval issue=case(\n    admin_status=\"missed\", \"missed\",\n    admin_status=\"held\", \"held\",\n    late_min > 60, \"late\",\n    late_min < -30, \"early\",\n    true(), \"on_time\")\n| where issue!=\"on_time\"\n| stats count by issue, med_name\n| sort - count\n| table issue, med_name, count",
              "m": "Parse MAR events from EHR. Compare scheduled vs actual administration times. Flag missed, held, and significantly late administrations. Report to nursing leadership and pharmacy for investigation. **Domain context:** MAR variance is a patient safety signal; align with ISMP guidelines and high-alert medication lists. **Splunk:** Use `lookup` for high-alert meds; avoid exposing patient identifiers in shared dashboards.",
              "z": "Pie chart (issue distribution), Table (top discrepancies by medication), Timeline (missed doses).",
              "kfp": "MAR mismatch noise during med pass redesigns, bar-code label refreshes, Pyxis restocks, or Pyxis/BCMA upgrades that pharmacy and IT staged together.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EHR MAR/pharmacy integration via HL7 or HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"mar:record\"` fields `order_id`, `med_name`, `scheduled_time`, `admin_time`, `admin_status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse MAR events from EHR. Compare scheduled vs actual administration times. Flag missed, held, and significantly late administrations. Report to nursing leadership and pharmacy for investigation. **Domain context:** MAR variance is a patient safety signal; align with ISMP guidelines and high-alert medication lists. **Splunk:** Use `lookup` for high-alert meds; avoid exposing patient identifiers in shared dashboards.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"mar:record\"\n| eval late_min=round((admin_time-scheduled_time)/60,1)\n| eval issue=case(\n    admin_status=\"missed\", \"missed\",\n    admin_status=\"held\", \"held\",\n    late_min > 60, \"late\",\n    late_min < -30, \"early\",\n    true(), \"on_time\")\n| where issue!=\"on_time\"\n| stats count by issue, med_name\n| sort - count\n| table issue, med_name, count\n```\n\nUnderstanding this SPL\n\n**Medication Administration Record Reconciliation** — Discrepancies between medication orders and administration records indicate potential medication errors, a leading cause of adverse events. Reconciliation supports patient safety.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"mar:record\"` fields `order_id`, `med_name`, `scheduled_time`, `admin_time`, `admin_status`. **App/TA** (typical add-on context): EHR MAR/pharmacy integration via HL7 or HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: mar:record. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"mar:record\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **late_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where issue!=\"on_time\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by issue, med_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Medication Administration Record Reconciliation**): table issue, med_name, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) 
as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (issue distribution), Table (top discrepancies by medication), Timeline (missed doses).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch med passes against orders. We help you catch mismatches before a patient gets the wrong dose or drug.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.16",
              "n": "Telehealth Session Quality Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Poor telehealth video/audio quality degrades clinical assessment and patient experience. Monitoring enables proactive intervention and platform optimization.",
              "t": "Telehealth platform API/logs via HEC",
              "d": "`index=healthcare` `sourcetype=\"telehealth:session\"` fields `session_id`, `provider_id`, `video_quality_score`, `audio_quality_score`, `disconnect_count`",
              "q": "index=healthcare sourcetype=\"telehealth:session\"\n| where video_quality_score < 3 OR audio_quality_score < 3 OR disconnect_count > 0\n| stats count as poor_sessions, avg(video_quality_score) as avg_video, avg(audio_quality_score) as avg_audio by provider_id\n| sort - poor_sessions\n| table provider_id, poor_sessions, avg_video, avg_audio",
              "m": "Integrate telehealth platform quality metrics via API. Track session quality by provider and patient location. Correlate with network conditions. Alert on sustained quality degradation. **Domain context:** Audio/video quality and disconnects affect telehealth efficacy; some payers track quality as part of digital health programs. **Splunk:** Aggregate at provider level only; avoid patient location precision that violates privacy.",
              "z": "Line chart (quality scores over time), Bar chart (poor sessions by provider), Table (session details).",
              "kfp": "Session quality flags during home Wi-Fi issues, patient device changes, iOS or Android upgrades, or vendor CDN maintenance that the telehealth runbook names.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telehealth platform API/logs via HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"telehealth:session\"` fields `session_id`, `provider_id`, `video_quality_score`, `audio_quality_score`, `disconnect_count`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate telehealth platform quality metrics via API. Track session quality by provider and patient location. Correlate with network conditions. Alert on sustained quality degradation. **Domain context:** Audio/video quality and disconnects affect telehealth efficacy; some payers track quality as part of digital health programs. **Splunk:** Aggregate at provider level only; avoid patient location precision that violates privacy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"telehealth:session\"\n| where video_quality_score < 3 OR audio_quality_score < 3 OR disconnect_count > 0\n| stats count as poor_sessions, avg(video_quality_score) as avg_video, avg(audio_quality_score) as avg_audio by provider_id\n| sort - poor_sessions\n| table provider_id, poor_sessions, avg_video, avg_audio\n```\n\nUnderstanding this SPL\n\n**Telehealth Session Quality Monitoring** — Poor telehealth video/audio quality degrades clinical assessment and patient experience. Monitoring enables proactive intervention and platform optimization.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"telehealth:session\"` fields `session_id`, `provider_id`, `video_quality_score`, `audio_quality_score`, `disconnect_count`. **App/TA** (typical add-on context): Telehealth platform API/logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: telehealth:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"telehealth:session\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where video_quality_score < 3 OR audio_quality_score < 3 OR disconnect_count > 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by provider_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Telehealth Session Quality Monitoring**): table provider_id, poor_sessions, avg_video, avg_audio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Line chart (quality scores over time), Bar chart (poor sessions by provider), Table (session details).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch telehealth session quality. We help you catch frozen video or bad audio before visits fail.",
              "mtype": [
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.17",
              "n": "Clinical Decision Support Response Time",
              "c": "medium",
              "f": "intermediate",
              "v": "CDS alerts must fire in real-time during clinical workflows. Excessive latency causes clinicians to bypass alerts, reducing patient safety benefit.",
              "t": "CDS engine logs via HEC",
              "d": "`index=healthcare` `sourcetype=\"cds:query\"` fields `rule_id`, `query_time_ms`, `result`, `triggered_alert`",
              "q": "index=healthcare sourcetype=\"cds:query\"\n| bin _time span=1h\n| stats avg(query_time_ms) as avg_ms, perc95(query_time_ms) as p95_ms, count by rule_id, _time\n| where p95_ms > 500\n| table _time, rule_id, avg_ms, p95_ms, count",
              "m": "Instrument CDS engine with response time logging. Track by rule type and clinical context. Optimize slow rules that impact order entry workflows. Alert when latency exceeds clinical usability thresholds. **Domain context:** CDS Hooks / SMART targets sub-second response for interruptive alerts in some workflows; batch analytics rules differ. **Splunk:** Separate interruptive vs background CDS if `rule_id` encodes workflow type.",
              "z": "Line chart (response time by rule), Heatmap (rule × time), Table (slowest rules).",
              "kfp": "CDS delay spikes on knowledge-base refreshes, new order-set rollouts, heavy cohort analytics jobs, or content freezes during a clinical build freeze.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CDS engine logs via HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"cds:query\"` fields `rule_id`, `query_time_ms`, `result`, `triggered_alert`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstrument CDS engine with response time logging. Track by rule type and clinical context. Optimize slow rules that impact order entry workflows. Alert when latency exceeds clinical usability thresholds. **Domain context:** CDS Hooks / SMART targets sub-second response for interruptive alerts in some workflows; batch analytics rules differ. **Splunk:** Separate interruptive vs background CDS if `rule_id` encodes workflow type.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"cds:query\"\n| bin _time span=1h\n| stats avg(query_time_ms) as avg_ms, perc95(query_time_ms) as p95_ms, count by rule_id, _time\n| where p95_ms > 500\n| table _time, rule_id, avg_ms, p95_ms, count\n```\n\nUnderstanding this SPL\n\n**Clinical Decision Support Response Time** — CDS alerts must fire in real-time during clinical workflows. Excessive latency causes clinicians to bypass alerts, reducing patient safety benefit.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"cds:query\"` fields `rule_id`, `query_time_ms`, `result`, `triggered_alert`. **App/TA** (typical add-on context): CDS engine logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: cds:query. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"cds:query\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by rule_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_ms > 500` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Clinical Decision Support Response Time**): table _time, rule_id, avg_ms, p95_ms, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (response time by rule), Heatmap (rule × time), Table (slowest rules).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch clinical decision support response times. We help you catch delays before alerts arrive too late to matter.",
              "mtype": [
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.18",
              "n": "DIPS Arena Application Response Time",
              "c": "critical",
              "f": "intermediate",
              "v": "DIPS Arena is Norway's dominant EHR system, serving approximately 80% of Norwegian hospitals with over 100,000 healthcare professionals and 4.3 million patient records. Slow application response times directly impact clinical workflows — clinicians waiting on patient record loading, order entry, or document retrieval experience cognitive interruption and reduced throughput. Monitoring Arena's .NET/IIS application response times enables proactive intervention before degradation affects patient care delivery.",
              "t": "Splunk Universal Forwarder with IIS/ASP.NET log monitoring, custom HEC input",
              "d": "`index=healthcare` `sourcetype=\"dips:arena:applog\"` fields `response_time_ms`, `request_url`, `http_status`, `server_node`, `user_count`",
              "q": "index=healthcare sourcetype=\"dips:arena:applog\"\n| bin _time span=5m\n| stats avg(response_time_ms) as avg_rt, perc95(response_time_ms) as p95_rt, count as requests by server_node, _time\n| where p95_rt > 3000\n| table _time, server_node, avg_rt, p95_rt, requests",
              "m": "Collect DIPS Arena application performance data from IIS W3C extended logs or .NET application performance counters. Arena runs on .NET with IIS hosting — enable W3C logging with `time-taken` field and forward via Splunk Universal Forwarder. Alternatively, instrument via HEC if Arena provides application-level performance logging. Set thresholds based on clinical workflow requirements: p95 < 3 seconds for interactive clinical screens, p95 < 1 second for patient lookup. Alert on sustained degradation (3 consecutive 5-minute intervals above threshold). Correlate with server resource metrics (CPU, memory, connection pool) for root cause. **Domain context:** DIPS Arena serves all regional health authorities (Helse Nord, Helse Vest, Helse Sør-Øst) — response time SLAs may vary by trust. Exclude background batch processes and API integrations from clinician-facing p95 metrics.",
              "z": "Line chart (p95 response time by server node), Heatmap (server × time), Single value (current p95).",
              "kfp": "Latency noise during DIPS app pool recycling, WAF or reverse-proxy tuning, or SSL renewals the application team published on the change calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder with IIS/ASP.NET log monitoring, custom HEC input.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"dips:arena:applog\"` fields `response_time_ms`, `request_url`, `http_status`, `server_node`, `user_count`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect DIPS Arena application performance data from IIS W3C extended logs or .NET application performance counters. Arena runs on .NET with IIS hosting — enable W3C logging with `time-taken` field and forward via Splunk Universal Forwarder. Alternatively, instrument via HEC if Arena provides application-level performance logging. Set thresholds based on clinical workflow requirements: p95 < 3 seconds for interactive clinical screens, p95 < 1 second for patient lookup. Alert on sustained degrada…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"dips:arena:applog\"\n| bin _time span=5m\n| stats avg(response_time_ms) as avg_rt, perc95(response_time_ms) as p95_rt, count as requests by server_node, _time\n| where p95_rt > 3000\n| table _time, server_node, avg_rt, p95_rt, requests\n```\n\nUnderstanding this SPL\n\n**DIPS Arena Application Response Time** — DIPS Arena is Norway's dominant EHR system, serving approximately 80% of Norwegian hospitals with over 100,000 healthcare professionals and 4.3 million patient records. Slow application response times directly impact clinical workflows — clinicians waiting on patient record loading, order entry, or document retrieval experience cognitive interruption and reduced throughput. Monitoring Arena's .NET/IIS application response times enables proactive intervention before…\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"dips:arena:applog\"` fields `response_time_ms`, `request_url`, `http_status`, `server_node`, `user_count`. **App/TA** (typical add-on context): Splunk Universal Forwarder with IIS/ASP.NET log monitoring, custom HEC input. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: dips:arena:applog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"dips:arena:applog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by server_node, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_rt > 3000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DIPS Arena Application Response Time**): table _time, server_node, avg_rt, p95_rt, requests\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (p95 response time by server node), Heatmap (server × time), Single value (current p95).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch DIPS Arena screen responsiveness. We help you catch slowness before clinicians fight the app under pressure.",
              "mtype": [
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [
                "iis"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.19",
              "n": "DIPS Arena FHIR API Availability and Latency",
              "c": "critical",
              "f": "intermediate",
              "v": "DIPS Arena exposes FHIR REST APIs through the Open DIPS platform, supporting Patient, Appointment, Encounter, Observation, Condition, and many other resource types. Third-party applications, patient portals, and SMART on FHIR apps depend on these APIs being available and responsive. API degradation or outage breaks integrations and can affect clinical workflows that rely on real-time data exchange.",
              "t": "Custom scripted input polling Open DIPS FHIR endpoints",
              "d": "`index=healthcare` `sourcetype=\"dips:arena:api\"` fields `endpoint`, `http_status`, `response_time_ms`, `fhir_resource`, `error_message`",
              "q": "index=healthcare sourcetype=\"dips:arena:api\"\n| bin _time span=5m\n| stats avg(response_time_ms) as avg_rt, perc95(response_time_ms) as p95_rt,\n  sum(eval(if(http_status>=500, 1, 0))) as server_errors,\n  sum(eval(if(http_status>=200 AND http_status<400, 1, 0))) as success,\n  count as total by fhir_resource, _time\n| eval error_pct=round(server_errors/total*100,1)\n| where error_pct > 5 OR p95_rt > 2000\n| table _time, fhir_resource, p95_rt, avg_rt, error_pct, total",
              "m": "Create scripted inputs that perform synthetic health checks against key FHIR endpoints: `GET /fhir/Patient`, `GET /fhir/Appointment`, `GET /fhir/Encounter`, `GET /fhir/Observation`. Authenticate using Client Credentials flow via the DIPS Federation Service (OAuth/OIDC). Also collect real API access logs from the Arena API gateway or IIS reverse proxy. Monitor response times, HTTP status codes, and error rates per FHIR resource type. Alert on: 5xx error rate exceeding 5%, p95 latency exceeding 2 seconds, or complete endpoint unavailability. Cross-reference with the Open DIPS status page (`opendips.statuspage.io`) for known outages.",
              "z": "Line chart (API response time by FHIR resource), Bar chart (error rates by endpoint), Single value (current API availability %).",
              "kfp": "API error or latency noise during client bulk exports, rate-limit adjustments, client-secret rotation, or bulk FHIR loads the integration team throttled on purpose.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom scripted input polling Open DIPS FHIR endpoints.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"dips:arena:api\"` fields `endpoint`, `http_status`, `response_time_ms`, `fhir_resource`, `error_message`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCreate scripted inputs that perform synthetic health checks against key FHIR endpoints: `GET /fhir/Patient`, `GET /fhir/Appointment`, `GET /fhir/Encounter`, `GET /fhir/Observation`. Authenticate using Client Credentials flow via the DIPS Federation Service (OAuth/OIDC). Also collect real API access logs from the Arena API gateway or IIS reverse proxy. Monitor response times, HTTP status codes, and error rates per FHIR resource type. Alert on: 5xx error rate exceeding 5%, p95 latency exceeding 2 …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"dips:arena:api\"\n| bin _time span=5m\n| stats avg(response_time_ms) as avg_rt, perc95(response_time_ms) as p95_rt,\n  sum(eval(if(http_status>=500, 1, 0))) as server_errors,\n  sum(eval(if(http_status>=200 AND http_status<400, 1, 0))) as success,\n  count as total by fhir_resource, _time\n| eval error_pct=round(server_errors/total*100,1)\n| where error_pct > 5 OR p95_rt > 2000\n| table _time, fhir_resource, p95_rt, avg_rt, error_pct, total\n```\n\nUnderstanding this SPL\n\n**DIPS Arena FHIR API Availability and Latency** — DIPS Arena exposes FHIR REST APIs through the Open DIPS platform, supporting Patient, Appointment, Encounter, Observation, Condition, and many other resource types. Third-party applications, patient portals, and SMART on FHIR apps depend on these APIs being available and responsive. API degradation or outage breaks integrations and can affect clinical workflows that rely on real-time data exchange.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"dips:arena:api\"` fields `endpoint`, `http_status`, `response_time_ms`, `fhir_resource`, `error_message`. **App/TA** (typical add-on context): Custom scripted input polling Open DIPS FHIR endpoints. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: dips:arena:api. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"dips:arena:api\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by fhir_resource, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_pct > 5 OR p95_rt > 2000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DIPS Arena FHIR API Availability and Latency**): table _time, fhir_resource, p95_rt, avg_rt, error_pct, total\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (API response time by FHIR resource), Bar chart (error rates by endpoint), Single value (current API availability %).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch FHIR API uptime and latency for DIPS. We help you catch integration pain before mobile and partner apps fail.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.20",
              "n": "DIPS Arena User Authentication and SSO Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "DIPS Arena authentication flows through the DIPS Federation Service using OAuth/OIDC, with support for DIPS credentials, Microsoft Entra ID, and SMART on FHIR SSO. Authentication failures lock clinicians out of the EHR during critical care moments. Monitoring authentication success rates, latency, and failure patterns detects identity infrastructure issues before they cascade into widespread access disruptions.",
              "t": "DIPS Federation Service logs via HEC or Universal Forwarder",
              "d": "`index=healthcare` `sourcetype=\"dips:federation:auth\"` fields `auth_flow`, `result`, `user_type`, `client_id`, `response_time_ms`, `error_code`",
              "q": "index=healthcare sourcetype=\"dips:federation:auth\"\n| bin _time span=10m\n| stats sum(eval(if(result=\"success\", 1, 0))) as success,\n  sum(eval(if(result=\"failure\", 1, 0))) as failures,\n  dc(client_id) as unique_clients,\n  avg(response_time_ms) as avg_auth_ms by auth_flow, _time\n| eval fail_pct=round(failures/(success+failures)*100,1)\n| where fail_pct > 10 OR avg_auth_ms > 5000\n| table _time, auth_flow, success, failures, fail_pct, avg_auth_ms, unique_clients",
              "m": "Collect authentication event logs from the DIPS Federation Service. The service handles Authorization Code Flow (interactive clinician login), Client Credentials Flow (system integrations), and SMART on FHIR (embedded applications with SSO). Parse each event for authentication flow type, result (success/failure), error code, and response time. Alert on: authentication failure rate exceeding 10% within a 10-minute window, SSO response time exceeding 5 seconds (causes UI timeouts), repeated failures from a single client application (broken integration), or complete authentication service unavailability. Correlate with Microsoft Entra ID status if using federated identity.",
              "z": "Timechart (success vs failure by auth flow), Bar chart (failure reasons), Single value (current auth success rate).",
              "kfp": "Auth spikes during password synchronizations, phone reprovisioning, Entra or federation maintenance, or SMART app updates that the identity team announced.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DIPS Federation Service logs via HEC or Universal Forwarder.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"dips:federation:auth\"` fields `auth_flow`, `result`, `user_type`, `client_id`, `response_time_ms`, `error_code`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect authentication event logs from the DIPS Federation Service. The service handles Authorization Code Flow (interactive clinician login), Client Credentials Flow (system integrations), and SMART on FHIR (embedded applications with SSO). Parse each event for authentication flow type, result (success/failure), error code, and response time. Alert on: authentication failure rate exceeding 10% within a 10-minute window, SSO response time exceeding 5 seconds (causes UI timeouts), repeated failur…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"dips:federation:auth\"\n| bin _time span=10m\n| stats sum(eval(if(result=\"success\", 1, 0))) as success,\n  sum(eval(if(result=\"failure\", 1, 0))) as failures,\n  dc(client_id) as unique_clients,\n  avg(response_time_ms) as avg_auth_ms by auth_flow, _time\n| eval fail_pct=round(failures/(success+failures)*100,1)\n| where fail_pct > 10 OR avg_auth_ms > 5000\n| table _time, auth_flow, success, failures, fail_pct, avg_auth_ms, unique_clients\n```\n\nUnderstanding this SPL\n\n**DIPS Arena User Authentication and SSO Monitoring** — DIPS Arena authentication flows through the DIPS Federation Service using OAuth/OIDC, with support for DIPS credentials, Microsoft Entra ID, and SMART on FHIR SSO. Authentication failures lock clinicians out of the EHR during critical care moments. Monitoring authentication success rates, latency, and failure patterns detects identity infrastructure issues before they cascade into widespread access disruptions.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"dips:federation:auth\"` fields `auth_flow`, `result`, `user_type`, `client_id`, `response_time_ms`, `error_code`. **App/TA** (typical add-on context): DIPS Federation Service logs via HEC or Universal Forwarder. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: dips:federation:auth. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"dips:federation:auth\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by auth_flow, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_pct > 10 OR avg_auth_ms > 5000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DIPS Arena User Authentication and SSO Monitoring**): table _time, auth_flow, success, failures, fail_pct, avg_auth_ms, unique_clients\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=10m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DIPS Arena User Authentication and SSO Monitoring** — DIPS Arena authentication flows through the DIPS Federation Service using OAuth/OIDC, with support for DIPS credentials, Microsoft Entra ID, and SMART on FHIR SSO. Authentication failures lock clinicians out of the EHR during critical care moments. Monitoring authentication success rates, latency, and failure patterns detects identity infrastructure issues before they cascade into widespread access disruptions.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"dips:federation:auth\"` fields `auth_flow`, `result`, `user_type`, `client_id`, `response_time_ms`, `error_code`. **App/TA** (typical add-on context): DIPS Federation Service logs via HEC or Universal Forwarder. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (success vs failure by auth flow), Bar chart (failure reasons), Single value (current auth success rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch hospital sign-in and single sign-on health. We help you catch identity problems before clinicians are locked out at the worst moment.",
              "mtype": [
                "Security",
                "Availability"
              ],
              "ind": "Healthcare",
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=10m | sort - count",
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.21",
              "n": "DIPS Arena Database Performance",
              "c": "critical",
              "f": "advanced",
              "v": "DIPS Arena's clinical data store — backed by Oracle or SQL Server — is the foundation of all EHR operations. Slow queries, lock contention, connection pool exhaustion, or tablespace pressure directly translate to sluggish clinical workflows. Monitoring database performance at the query and connection level enables DBAs to intervene before latency propagates to the application tier and affects clinician productivity.",
              "t": "Splunk DB Connect or custom HEC input from database monitoring agent",
              "d": "`index=healthcare` `sourcetype=\"dips:arena:db\"` fields `query_time_ms`, `query_type`, `wait_type`, `active_connections`, `blocking_sessions`, `tablespace_pct`",
              "q": "index=healthcare sourcetype=\"dips:arena:db\"\n| bin _time span=5m\n| stats avg(query_time_ms) as avg_query_ms, max(query_time_ms) as max_query_ms,\n  avg(active_connections) as avg_connections, max(blocking_sessions) as max_blocks,\n  latest(tablespace_pct) as tablespace_used by db_instance, _time\n| where avg_query_ms > 500 OR max_blocks > 3 OR tablespace_used > 85\n| table _time, db_instance, avg_query_ms, max_query_ms, avg_connections, max_blocks, tablespace_used",
              "m": "Collect database performance metrics from the Arena backend database (Oracle or SQL Server). Use Splunk DB Connect for direct polling or deploy a database monitoring agent that forwards metrics via HEC. Key metrics: average and p95 query execution time, active connection count vs pool limit, blocking session count and duration, tablespace/datafile utilization percentage. For Oracle: monitor `V$SESSION`, `V$SYSSTAT`, `DBA_TABLESPACE_USAGE_METRICS`. For SQL Server: monitor `sys.dm_exec_query_stats`, `sys.dm_os_wait_stats`, `sys.dm_exec_connections`. Alert on sustained query latency (avg > 500ms over 15 minutes), blocking chains longer than 30 seconds, connection pool above 80% capacity, or tablespace usage above 85%.",
              "z": "Line chart (query latency over time), Gauge (tablespace utilization), Table (current blocking sessions), Single value (active connections).",
              "kfp": "Database noise during index maintenance, stats refresh, storage migration, or batch report windows that the DBA and Arena admin logged as expected load.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect or custom HEC input from database monitoring agent.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"dips:arena:db\"` fields `query_time_ms`, `query_type`, `wait_type`, `active_connections`, `blocking_sessions`, `tablespace_pct`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect database performance metrics from the Arena backend database (Oracle or SQL Server). Use Splunk DB Connect for direct polling or deploy a database monitoring agent that forwards metrics via HEC. Key metrics: average and p95 query execution time, active connection count vs pool limit, blocking session count and duration, tablespace/datafile utilization percentage. For Oracle: monitor `V$SESSION`, `V$SYSSTAT`, `DBA_TABLESPACE_USAGE_METRICS`. For SQL Server: monitor `sys.dm_exec_query_stats…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"dips:arena:db\"\n| bin _time span=5m\n| stats avg(query_time_ms) as avg_query_ms, max(query_time_ms) as max_query_ms,\n  avg(active_connections) as avg_connections, max(blocking_sessions) as max_blocks,\n  latest(tablespace_pct) as tablespace_used by db_instance, _time\n| where avg_query_ms > 500 OR max_blocks > 3 OR tablespace_used > 85\n| table _time, db_instance, avg_query_ms, max_query_ms, avg_connections, max_blocks, tablespace_used\n```\n\nUnderstanding this SPL\n\n**DIPS Arena Database Performance** — DIPS Arena's clinical data store — backed by Oracle or SQL Server — is the foundation of all EHR operations. Slow queries, lock contention, connection pool exhaustion, or tablespace pressure directly translate to sluggish clinical workflows. Monitoring database performance at the query and connection level enables DBAs to intervene before latency propagates to the application tier and affects clinician productivity.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"dips:arena:db\"` fields `query_time_ms`, `query_type`, `wait_type`, `active_connections`, `blocking_sessions`, `tablespace_pct`. **App/TA** (typical add-on context): Splunk DB Connect or custom HEC input from database monitoring agent. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: dips:arena:db. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"dips:arena:db\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by db_instance, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_query_ms > 500 OR max_blocks > 3 OR tablespace_used > 85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DIPS Arena Database Performance**): table _time, db_instance, avg_query_ms, max_query_ms, avg_connections, max_blocks, tablespace_used\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (query latency over time), Gauge (tablespace utilization), Table (current blocking sessions), Single value (active connections).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch DIPS database performance. We help you catch contention or load before chart access times out across wards.",
              "mtype": [
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.22",
              "n": "DIPS Communicator Message Throughput and Failures",
              "c": "high",
              "f": "intermediate",
              "v": "DIPS Communicator is Norway's leading healthcare messaging solution, supporting over 250 message profiles for exchanging patient-sensitive information between hospitals, primary care, labs, and pharmacies. Message delivery failures or queue backlogs can delay lab results, referral responses, discharge summaries, and prescription notifications — each with direct patient safety implications. Monitoring throughput and failure rates ensures the messaging backbone remains reliable.",
              "t": "DIPS Communicator application logs via Universal Forwarder or HEC",
              "d": "`index=healthcare` `sourcetype=\"dips:communicator\"` fields `message_profile`, `direction`, `status`, `queue_depth`, `processing_time_ms`, `partner_org`",
              "q": "index=healthcare sourcetype=\"dips:communicator\"\n| bin _time span=15m\n| stats sum(eval(if(status=\"delivered\", 1, 0))) as delivered,\n  sum(eval(if(status=\"failed\", 1, 0))) as failed,\n  sum(eval(if(status=\"queued\", 1, 0))) as queued,\n  avg(processing_time_ms) as avg_proc_ms by message_profile, direction, _time\n| eval fail_pct=if((delivered+failed)>0, round(failed/(delivered+failed)*100,1), 0)\n| where failed > 0 OR queued > 50 OR avg_proc_ms > 10000\n| table _time, message_profile, direction, delivered, failed, fail_pct, queued, avg_proc_ms",
              "m": "Collect DIPS Communicator application logs that record message lifecycle events: received, validated, transformed, queued, delivered, failed. Parse by message profile type (e.g., lab results, referrals, discharge summaries, radiology reports, prescriptions) and direction (inbound/outbound). Track queue depth as a leading indicator of throughput issues. Alert on: delivery failure rate exceeding 2% for any message profile, queue depth exceeding 50 messages (backlog), processing time exceeding 10 seconds (transformation bottleneck), or zero messages processed in a 30-minute window during business hours (service outage). Correlate failures with partner organization IDs to identify external connectivity issues vs internal processing failures.",
              "z": "Timechart (message throughput by profile), Bar chart (failures by message profile), Gauge (current queue depth), Table (failed messages with error details).",
              "kfp": "Throughput or failure blips on template pushes, high attachment traffic, or carrier SMS peering hiccups the messaging team absorbed with vendor failover.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DIPS Communicator application logs via Universal Forwarder or HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"dips:communicator\"` fields `message_profile`, `direction`, `status`, `queue_depth`, `processing_time_ms`, `partner_org`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect DIPS Communicator application logs that record message lifecycle events: received, validated, transformed, queued, delivered, failed. Parse by message profile type (e.g., lab results, referrals, discharge summaries, radiology reports, prescriptions) and direction (inbound/outbound). Track queue depth as a leading indicator of throughput issues. Alert on: delivery failure rate exceeding 2% for any message profile, queue depth exceeding 50 messages (backlog), processing time exceeding 10 s…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"dips:communicator\"\n| bin _time span=15m\n| stats sum(eval(if(status=\"delivered\", 1, 0))) as delivered,\n  sum(eval(if(status=\"failed\", 1, 0))) as failed,\n  sum(eval(if(status=\"queued\", 1, 0))) as queued,\n  avg(processing_time_ms) as avg_proc_ms by message_profile, direction, _time\n| eval fail_pct=if((delivered+failed)>0, round(failed/(delivered+failed)*100,1), 0)\n| where failed > 0 OR queued > 50 OR avg_proc_ms > 10000\n| table _time, message_profile, direction, delivered, failed, fail_pct, queued, avg_proc_ms\n```\n\nUnderstanding this SPL\n\n**DIPS Communicator Message Throughput and Failures** — DIPS Communicator is Norway's leading healthcare messaging solution, supporting over 250 message profiles for exchanging patient-sensitive information between hospitals, primary care, labs, and pharmacies. Message delivery failures or queue backlogs can delay lab results, referral responses, discharge summaries, and prescription notifications — each with direct patient safety implications. Monitoring throughput and failure rates ensures the messaging backbone remains…\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"dips:communicator\"` fields `message_profile`, `direction`, `status`, `queue_depth`, `processing_time_ms`, `partner_org`. **App/TA** (typical add-on context): DIPS Communicator application logs via Universal Forwarder or HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: dips:communicator. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"dips:communicator\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by message_profile, direction, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failed > 0 OR queued > 50 OR avg_proc_ms > 10000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DIPS Communicator Message Throughput and Failures**): table _time, message_profile, direction, delivered, failed, fail_pct, queued, avg_proc_ms\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (message throughput by profile), Bar chart (failures by message profile), Gauge (current queue depth), Table (failed messages with error details).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch secure messaging delivery and errors. We help you catch routing failures before care teams miss critical notes.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.23",
              "n": "DIPS Arena Integration Engine Error Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "DIPS Message Broker provides the asynchronous integration interface between DIPS Arena and external systems — lab information systems, pharmacy systems, radiology PACS, patient portals, and regional health network services. Integration errors break data flows that clinicians depend on for complete patient information. Monitoring error rates and patterns across integration channels enables rapid identification and resolution of broken interfaces.",
              "t": "DIPS Message Broker logs via Universal Forwarder or HEC",
              "d": "`index=healthcare` `sourcetype=\"dips:messagebroker\"` fields `channel`, `message_type`, `error_code`, `error_description`, `source_system`, `target_system`",
              "q": "index=healthcare sourcetype=\"dips:messagebroker\" status=\"error\"\n| bin _time span=1h\n| stats count as error_count, dc(error_code) as unique_errors, values(error_description) as error_types by channel, source_system, target_system, _time\n| where error_count > 5\n| sort -error_count\n| table _time, channel, source_system, target_system, error_count, unique_errors, error_types",
              "m": "Collect DIPS Message Broker logs that record integration message processing events across all configured channels. The broker handles multiple protocols and message standards for asynchronous exchange between Arena and external systems. Parse error events with channel identification (lab, pharmacy, radiology, etc.), source/target system names, and error codes. Alert on: sustained errors on any single channel (>5 errors per hour), complete channel silence during business hours (possible connection loss), new error codes not seen before (configuration change), or error spikes correlating with system maintenance windows. Maintain a lookup of expected message patterns by channel for anomaly detection.",
              "z": "Bar chart (errors by channel), Timeline (error patterns over time), Table (error details with system pairs).",
              "kfp": "Engine error bursts during map upgrades, new interface partners, or schema migrations with elevated retries that the interface lead monitored to completion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DIPS Message Broker logs via Universal Forwarder or HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"dips:messagebroker\"` fields `channel`, `message_type`, `error_code`, `error_description`, `source_system`, `target_system`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect DIPS Message Broker logs that record integration message processing events across all configured channels. The broker handles multiple protocols and message standards for asynchronous exchange between Arena and external systems. Parse error events with channel identification (lab, pharmacy, radiology, etc.), source/target system names, and error codes. Alert on: sustained errors on any single channel (>5 errors per hour), complete channel silence during business hours (possible connectio…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"dips:messagebroker\" status=\"error\"\n| bin _time span=1h\n| stats count as error_count, dc(error_code) as unique_errors, values(error_description) as error_types by channel, source_system, target_system, _time\n| where error_count > 5\n| sort -error_count\n| table _time, channel, source_system, target_system, error_count, unique_errors, error_types\n```\n\nUnderstanding this SPL\n\n**DIPS Arena Integration Engine Error Monitoring** — DIPS Message Broker provides the asynchronous integration interface between DIPS Arena and external systems — lab information systems, pharmacy systems, radiology PACS, patient portals, and regional health network services. Integration errors break data flows that clinicians depend on for complete patient information. Monitoring error rates and patterns across integration channels enables rapid identification and resolution of broken interfaces.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"dips:messagebroker\"` fields `channel`, `message_type`, `error_code`, `error_description`, `source_system`, `target_system`. **App/TA** (typical add-on context): DIPS Message Broker logs via Universal Forwarder or HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: dips:messagebroker. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"dips:messagebroker\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by channel, source_system, target_system, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where error_count > 5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **DIPS Arena Integration Engine Error Monitoring**): table _time, channel, source_system, target_system, error_count, unique_errors, error_types\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (errors by channel), Timeline (error patterns over time), Table (error details with system pairs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch integration engine errors into and out of DIPS. We help you catch interface breaks before orders and results stop flowing.",
              "mtype": [
                "Fault"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.24",
              "n": "DIPS Arena Concurrent Session and License Utilization",
              "c": "medium",
              "f": "intermediate",
              "v": "DIPS Arena licenses are typically capacity-based, with limits on concurrent user sessions. Approaching the license ceiling during peak clinical hours (morning rounds, shift changes) causes session rejections that lock clinicians out of the EHR at critical care delivery moments. Trending concurrent session counts against license limits enables proactive capacity planning and license procurement.",
              "t": "DIPS Arena session logs via HEC or IIS logs",
              "d": "`index=healthcare` `sourcetype=\"dips:arena:session\"` fields `session_id`, `user_role`, `department`, `action`",
              "q": "index=healthcare sourcetype=\"dips:arena:session\" (action=\"login\" OR action=\"logout\" OR action=\"timeout\")\n| streamstats sum(eval(if(action=\"login\",1,-1))) as concurrent_sessions\n| bin _time span=5m\n| stats max(concurrent_sessions) as peak_sessions, avg(concurrent_sessions) as avg_sessions by _time\n| eval license_limit=5000\n| eval utilization_pct=round(peak_sessions/license_limit*100,1)\n| where utilization_pct > 80\n| table _time, peak_sessions, avg_sessions, utilization_pct",
              "m": "Collect session lifecycle events (login, logout, session timeout) from DIPS Arena. Calculate concurrent sessions using streamstats to maintain a running count. Set the license limit as a macro or lookup value that administrators update when licenses change. Alert at 80% utilization (capacity planning trigger), 90% (operational warning), and 95% (critical — imminent session rejections). Track peak utilization by time of day and day of week to identify patterns for capacity planning. Segment by department and user role to understand which clinical areas drive peak demand. **Domain context:** Norwegian hospitals experience predictable peaks during morning ward rounds (07:00–09:00), afternoon documentation (14:00–16:00), and shift handover periods.",
              "z": "Area chart (concurrent sessions over time with license limit line), Single value (current utilization %), Heatmap (session count by hour and day of week).",
              "kfp": "License pressure during morning huddle, go-live hypercare, or training classes that the clinical informatics team already booked against the session pool budget.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DIPS Arena session logs via HEC or IIS logs.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"dips:arena:session\"` fields `session_id`, `user_role`, `department`, `action`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect session lifecycle events (login, logout, session timeout) from DIPS Arena. Calculate concurrent sessions using streamstats to maintain a running count. Set the license limit as a macro or lookup value that administrators update when licenses change. Alert at 80% utilization (capacity planning trigger), 90% (operational warning), and 95% (critical — imminent session rejections). Track peak utilization by time of day and day of week to identify patterns for capacity planning. Segment by de…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"dips:arena:session\" (action=\"login\" OR action=\"logout\" OR action=\"timeout\")\n| streamstats sum(eval(if(action=\"login\",1,-1))) as concurrent_sessions\n| bin _time span=5m\n| stats max(concurrent_sessions) as peak_sessions, avg(concurrent_sessions) as avg_sessions by _time\n| eval license_limit=5000\n| eval utilization_pct=round(peak_sessions/license_limit*100,1)\n| where utilization_pct > 80\n| table _time, peak_sessions, avg_sessions, utilization_pct\n```\n\nUnderstanding this SPL\n\n**DIPS Arena Concurrent Session and License Utilization** — DIPS Arena licenses are typically capacity-based, with limits on concurrent user sessions. Approaching the license ceiling during peak clinical hours (morning rounds, shift changes) causes session rejections that lock clinicians out of the EHR at critical care delivery moments. Trending concurrent session counts against license limits enables proactive capacity planning and license procurement.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"dips:arena:session\"` fields `session_id`, `user_role`, `department`, `action`. **App/TA** (typical add-on context): DIPS Arena session logs via HEC or IIS logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: dips:arena:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"dips:arena:session\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `streamstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **license_limit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **utilization_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where utilization_pct > 80` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DIPS Arena Concurrent Session and License Utilization**): table _time, peak_sessions, avg_sessions, utilization_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (concurrent sessions over time with license limit line), Single value (current utilization %), Heatmap (session count by hour and day of week).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch concurrent users and license use. We help you catch capacity limits before the morning rush hits a wall.",
              "mtype": [
                "Capacity"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [
                "iis"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.25",
              "n": "DIPS Arena Clinical Document Generation Latency",
              "c": "medium",
              "f": "intermediate",
              "v": "DIPS Arena generates clinical documents — discharge summaries, referral letters, lab reports, surgical notes — that clinicians need promptly for patient handoff and continuity of care. Document generation involves openEHR archetype composition, template rendering, and PDF creation. Slow document generation delays discharges, referral processing, and inter-organizational communication via DIPS Communicator.",
              "t": "DIPS Arena application logs via HEC or Universal Forwarder",
              "d": "`index=healthcare` `sourcetype=\"dips:arena:applog\"` fields `document_type`, `generation_time_ms`, `template_id`, `page_count`, `archetype_count`",
              "q": "index=healthcare sourcetype=\"dips:arena:applog\" action=\"document_generated\"\n| bin _time span=1h\n| stats avg(generation_time_ms) as avg_gen_ms, perc95(generation_time_ms) as p95_gen_ms, count as doc_count by document_type, _time\n| where p95_gen_ms > 5000\n| table _time, document_type, avg_gen_ms, p95_gen_ms, doc_count",
              "m": "Instrument DIPS Arena document generation events with timing information. Track generation latency by document type (discharge summary, referral, operative note, etc.), template complexity (archetype count), and output size (page count). Alert when p95 generation time exceeds 5 seconds for any document type. Correlate slow generation with database performance (complex AQL queries backing document composition) and server load. **Domain context:** Discharge summaries and referral letters are time-critical for patient flow — Norwegian hospitals are increasingly measured on discharge processing speed.",
              "z": "Line chart (generation latency by document type), Bar chart (document volumes), Table (slowest document types with details).",
              "kfp": "Document latency during template merges, CDA assembly upgrades, e-sign or rendering library updates, or mass fax-to-chart replays the document team requeued.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DIPS Arena application logs via HEC or Universal Forwarder.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"dips:arena:applog\"` fields `document_type`, `generation_time_ms`, `template_id`, `page_count`, `archetype_count`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstrument DIPS Arena document generation events with timing information. Track generation latency by document type (discharge summary, referral, operative note, etc.), template complexity (archetype count), and output size (page count). Alert when p95 generation time exceeds 5 seconds for any document type. Correlate slow generation with database performance (complex AQL queries backing document composition) and server load. **Domain context:** Discharge summaries and referral letters are time-…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"dips:arena:applog\" action=\"document_generated\"\n| bin _time span=1h\n| stats avg(generation_time_ms) as avg_gen_ms, perc95(generation_time_ms) as p95_gen_ms, count as doc_count by document_type, _time\n| where p95_gen_ms > 5000\n| table _time, document_type, avg_gen_ms, p95_gen_ms, doc_count\n```\n\nUnderstanding this SPL\n\n**DIPS Arena Clinical Document Generation Latency** — DIPS Arena generates clinical documents — discharge summaries, referral letters, lab reports, surgical notes — that clinicians need promptly for patient handoff and continuity of care. Document generation involves openEHR archetype composition, template rendering, and PDF creation. Slow document generation delays discharges, referral processing, and inter-organizational communication via DIPS Communicator.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"dips:arena:applog\"` fields `document_type`, `generation_time_ms`, `template_id`, `page_count`, `archetype_count`. **App/TA** (typical add-on context): DIPS Arena application logs via HEC or Universal Forwarder. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: dips:arena:applog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"dips:arena:applog\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by document_type, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_gen_ms > 5000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DIPS Arena Clinical Document Generation Latency**): table _time, document_type, avg_gen_ms, p95_gen_ms, doc_count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (generation latency by document type), Bar chart (document volumes), Table (slowest document types with details).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how long clinical documents take to render. We help you catch slowness before handoffs wait on the chart.",
              "mtype": [
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.26",
              "n": "DIPS Arena Scheduled Job Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "DIPS Arena relies on scheduled background jobs for critical operations: data synchronization between clinical modules, report generation, HL7 message queue processing, archetype validation, and database maintenance. Failed or stalled jobs can cause data inconsistencies, missing reports, and growing message backlogs that eventually affect frontline clinical operations. Monitoring job execution status ensures background processes remain healthy.",
              "t": "DIPS Arena job scheduler logs via Universal Forwarder or HEC",
              "d": "`index=healthcare` `sourcetype=\"dips:arena:jobs\"` fields `job_name`, `status`, `duration_sec`, `records_processed`, `error_message`",
              "q": "index=healthcare sourcetype=\"dips:arena:jobs\"\n| stats latest(status) as last_status, latest(duration_sec) as last_duration, latest(_time) as last_run, latest(error_message) as error by job_name\n| eval hours_since_run=round((now()-last_run)/3600, 1)\n| where last_status=\"failed\" OR last_status=\"timeout\" OR hours_since_run > 25\n| sort -hours_since_run\n| table job_name, last_status, last_duration, hours_since_run, error",
              "m": "Collect job execution logs from the DIPS Arena job scheduler. Each event records job name, start/end time, status (success, failed, timeout, running), records processed, and error details. Alert on: any job failure, jobs that have not run within their expected schedule (e.g., daily jobs not run in 25 hours), and jobs with duration exceeding 2x their historical average (performance regression). Maintain a lookup table mapping job names to expected schedules and maximum allowed durations. Run the monitoring search every 30 minutes.",
              "z": "Table (job status overview), Single value (failed jobs count), Timeline (job execution history).",
              "kfp": "Job failures on DST windows, ETL dependency slips, or full rebuilds after a bad feed day that the operations scheduler re-ran from checkpoint.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DIPS Arena job scheduler logs via Universal Forwarder or HEC.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"dips:arena:jobs\"` fields `job_name`, `status`, `duration_sec`, `records_processed`, `error_message`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCollect job execution logs from the DIPS Arena job scheduler. Each event records job name, start/end time, status (success, failed, timeout, running), records processed, and error details. Alert on: any job failure, jobs that have not run within their expected schedule (e.g., daily jobs not run in 25 hours), and jobs with duration exceeding 2x their historical average (performance regression). Maintain a lookup table mapping job names to expected schedules and maximum allowed durations. Run the …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"dips:arena:jobs\"\n| stats latest(status) as last_status, latest(duration_sec) as last_duration, latest(_time) as last_run, latest(error_message) as error by job_name\n| eval hours_since_run=round((now()-last_run)/3600, 1)\n| where last_status=\"failed\" OR last_status=\"timeout\" OR hours_since_run > 25\n| sort -hours_since_run\n| table job_name, last_status, last_duration, hours_since_run, error\n```\n\nUnderstanding this SPL\n\n**DIPS Arena Scheduled Job Monitoring** — DIPS Arena relies on scheduled background jobs for critical operations: data synchronization between clinical modules, report generation, HL7 message queue processing, archetype validation, and database maintenance. Failed or stalled jobs can cause data inconsistencies, missing reports, and growing message backlogs that eventually affect frontline clinical operations. Monitoring job execution status ensures background processes remain healthy.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"dips:arena:jobs\"` fields `job_name`, `status`, `duration_sec`, `records_processed`, `error_message`. **App/TA** (typical add-on context): DIPS Arena job scheduler logs via Universal Forwarder or HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: dips:arena:jobs. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"dips:arena:jobs\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by job_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since_run** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where last_status=\"failed\" OR last_status=\"timeout\" OR hours_since_run > 25` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **DIPS Arena Scheduled Job Monitoring**): table job_name, last_status, last_duration, hours_since_run, error\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (job status overview), Single value (failed jobs count), Timeline (job execution history).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch overnight and batch jobs in DIPS. We help you catch failures before morning clinics start on stale data.",
              "mtype": [
                "Availability"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.3.27",
              "n": "DIPS Arena openEHR AQL Query Performance",
              "c": "medium",
              "f": "advanced",
              "v": "DIPS Arena is built on the openEHR standard, using Archetype Query Language (AQL) to retrieve structured clinical data from archetype-based compositions. Complex AQL queries — spanning multiple archetypes, deep clinical hierarchies, or large patient populations — can become performance bottlenecks that impact clinical screen loading, reporting, and API response times. Monitoring AQL query performance identifies optimization targets and prevents query-induced degradation.",
              "t": "DIPS Arena EHR Server logs via HEC or Universal Forwarder",
              "d": "`index=healthcare` `sourcetype=\"dips:arena:aql\"` fields `query_hash`, `execution_time_ms`, `archetype_count`, `result_count`, `calling_module`",
              "q": "index=healthcare sourcetype=\"dips:arena:aql\"\n| bin _time span=1h\n| stats avg(execution_time_ms) as avg_exec_ms, perc95(execution_time_ms) as p95_exec_ms,\n  max(execution_time_ms) as max_exec_ms, count as query_count,\n  avg(result_count) as avg_results by query_hash, calling_module, _time\n| where p95_exec_ms > 2000\n| sort -p95_exec_ms\n| table _time, calling_module, query_hash, query_count, avg_exec_ms, p95_exec_ms, max_exec_ms, avg_results",
              "m": "Instrument the DIPS Arena EHR Server (the openEHR service implementation) to log AQL query execution events with timing, a hash of the query text (avoid logging full queries containing patient context), archetype count (complexity indicator), result set size, and calling module (which clinical screen or API triggered the query). Alert on: p95 execution time exceeding 2 seconds for any query pattern, queries returning more than 10,000 results (likely missing constraints), and new query patterns with execution times significantly above the fleet average. Use query hash trending to detect performance regression after Arena upgrades. Correlate with database performance metrics for root cause analysis.",
              "z": "Table (slowest queries by hash with module context), Line chart (AQL query latency over time), Bar chart (query distribution by calling module).",
              "kfp": "AQL slowness after archetype or template releases, very broad queries from research exports, or database statistics jobs that the openEHR admin scheduled off-hours.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DIPS Arena EHR Server logs via HEC or Universal Forwarder.\n• Ensure the following data sources are available: `index=healthcare` `sourcetype=\"dips:arena:aql\"` fields `query_hash`, `execution_time_ms`, `archetype_count`, `result_count`, `calling_module`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstrument the DIPS Arena EHR Server (the openEHR service implementation) to log AQL query execution events with timing, a hash of the query text (avoid logging full queries containing patient context), archetype count (complexity indicator), result set size, and calling module (which clinical screen or API triggered the query). Alert on: p95 execution time exceeding 2 seconds for any query pattern, queries returning more than 10,000 results (likely missing constraints), and new query patterns w…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=healthcare sourcetype=\"dips:arena:aql\"\n| bin _time span=1h\n| stats avg(execution_time_ms) as avg_exec_ms, perc95(execution_time_ms) as p95_exec_ms,\n  max(execution_time_ms) as max_exec_ms, count as query_count,\n  avg(result_count) as avg_results by query_hash, calling_module, _time\n| where p95_exec_ms > 2000\n| sort -p95_exec_ms\n| table _time, calling_module, query_hash, query_count, avg_exec_ms, p95_exec_ms, max_exec_ms, avg_results\n```\n\nUnderstanding this SPL\n\n**DIPS Arena openEHR AQL Query Performance** — DIPS Arena is built on the openEHR standard, using Archetype Query Language (AQL) to retrieve structured clinical data from archetype-based compositions. Complex AQL queries — spanning multiple archetypes, deep clinical hierarchies, or large patient populations — can become performance bottlenecks that impact clinical screen loading, reporting, and API response times. Monitoring AQL query performance identifies optimization targets and prevents query-induced degradation.\n\nDocumented **Data sources**: `index=healthcare` `sourcetype=\"dips:arena:aql\"` fields `query_hash`, `execution_time_ms`, `archetype_count`, `result_count`, `calling_module`. **App/TA** (typical add-on context): DIPS Arena EHR Server logs via HEC or Universal Forwarder. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: healthcare; **sourcetype**: dips:arena:aql. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=healthcare, sourcetype=\"dips:arena:aql\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by query_hash, calling_module, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_exec_ms > 2000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **DIPS Arena openEHR AQL Query Performance**): table _time, calling_module, query_hash, query_count, avg_exec_ms, p95_exec_ms, max_exec_ms, avg_results\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (slowest queries by hash with module context), Line chart (AQL query latency over time), Bar chart (query distribution by calling module).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch openEHR AQL query performance. We help you catch slow extracts before research and apps time out.",
              "mtype": [
                "Performance"
              ],
              "ind": "Healthcare",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.0,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 26,
            "none": 0
          }
        },
        {
          "i": "21.4",
          "n": "Transportation and Logistics",
          "u": [
            {
              "i": "21.4.1",
              "n": "Fleet Vehicle GPS Tracking and Geofence Alerting",
              "c": "high",
              "f": "intermediate",
              "v": "Real-time vehicle tracking and geofence alerts enable theft prevention, route compliance, and efficient dispatch during emergency response.",
              "t": "GPS telematics platform via HEC",
              "d": "`index=logistics` `sourcetype=\"gps:telematics\"` fields `vehicle_id`, `lat`, `lon`, `speed_kmh`, `geofence_id`",
              "q": "index=logistics sourcetype=\"gps:telematics\"\n| lookup geofence_boundaries geofence_id OUTPUT boundary_name allowed\n| where allowed=0 OR isnull(allowed)\n| stats earliest(_time) as entered, latest(_time) as last_seen, avg(speed_kmh) as avg_speed by vehicle_id, boundary_name\n| table vehicle_id, boundary_name, entered, last_seen, avg_speed",
              "m": "Ingest GPS data at 30-60 second intervals. Define geofences as lookup tables. Alert on boundary violations, after-hours movement, and unauthorized stops. Correlate with driver assignment. **Domain context:** DOT hours-of-service and privacy rules (driver personal vehicle use) may limit monitoring scope—align geofences with depots, yards, and job sites per policy. **Splunk:** Maintain `geofence_boundaries` as a KV/lookup with `allowed` and refresh when routes change; `lookup` requires `geofence_id` on each event—compute at index time if the platform only sends lat/long.",
              "z": "Map (vehicle positions with geofences), Table (violations), Timeline (vehicle movements).",
              "kfp": "GPS gaps from tunnels, urban canyons, yard shuffles, or maintenance at the shop that the telematics vendor flags as common coverage issues.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GPS telematics platform via HEC.\n• Ensure the following data sources are available: `index=logistics` `sourcetype=\"gps:telematics\"` fields `vehicle_id`, `lat`, `lon`, `speed_kmh`, `geofence_id`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest GPS data at 30-60 second intervals. Define geofences as lookup tables. Alert on boundary violations, after-hours movement, and unauthorized stops. Correlate with driver assignment. **Domain context:** DOT hours-of-service and privacy rules (driver personal vehicle use) may limit monitoring scope—align geofences with depots, yards, and job sites per policy. **Splunk:** Maintain `geofence_boundaries` as a KV/lookup with `allowed` and refresh when routes change; `lookup` requires `geofence_i…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=logistics sourcetype=\"gps:telematics\"\n| lookup geofence_boundaries geofence_id OUTPUT boundary_name allowed\n| where allowed=0 OR isnull(allowed)\n| stats earliest(_time) as entered, latest(_time) as last_seen, avg(speed_kmh) as avg_speed by vehicle_id, boundary_name\n| table vehicle_id, boundary_name, entered, last_seen, avg_speed\n```\n\nUnderstanding this SPL\n\n**Fleet Vehicle GPS Tracking and Geofence Alerting** — Real-time vehicle tracking and geofence alerts enable theft prevention, route compliance, and efficient dispatch during emergency response.\n\nDocumented **Data sources**: `index=logistics` `sourcetype=\"gps:telematics\"` fields `vehicle_id`, `lat`, `lon`, `speed_kmh`, `geofence_id`. **App/TA** (typical add-on context): GPS telematics platform via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: logistics; **sourcetype**: gps:telematics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=logistics, sourcetype=\"gps:telematics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where allowed=0 OR isnull(allowed)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by vehicle_id, boundary_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Fleet Vehicle GPS Tracking and Geofence Alerting**): table vehicle_id, boundary_name, entered, last_seen, avg_speed\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (vehicle positions with geofences), Table (violations), Timeline (vehicle movements).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch vehicle location and geofence rules. We help you catch long GPS gaps or route breaches before service or safety suffer.",
              "wv": "crawl",
              "mtype": [
                "Security",
                "Performance"
              ],
              "ind": "Transportation and Logistics",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 65,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier"
              ]
            },
            {
              "i": "21.4.2",
              "n": "Driver Behavior Scoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Analyzing harsh braking, speeding, and excessive idling improves fleet safety, reduces fuel costs, and supports insurance and regulatory compliance.",
              "t": "Telematics platform via HEC",
              "d": "`index=logistics` `sourcetype=\"gps:telematics\"` fields `driver_id`, `harsh_brake`, `speed_kmh`, `speed_limit_kmh`, `idle_min`",
              "q": "index=logistics sourcetype=\"gps:telematics\"\n| eval speeding=if(speed_kmh > speed_limit_kmh*1.1, 1, 0)\n| eval harsh=if(harsh_brake=\"true\" OR harsh_brake=1, 1, 0)\n| stats sum(speeding) as speed_events, sum(harsh) as harsh_events, sum(idle_min) as total_idle_min by driver_id\n| eval score=100 - (speed_events*2 + harsh_events*5 + round(total_idle_min/60,0))\n| eval score=if(score<0, 0, score)\n| sort score\n| table driver_id, score, speed_events, harsh_events, total_idle_min",
              "m": "Ingest telematics events with driving behavior flags. Calculate composite scores weekly. Provide driver coaching based on trends. Share top performers and improvement areas. **Domain context:** Fleet safety programs often align with CSA SMS categories (speeding, harsh brake); scoring weights should be validated with risk/legal. **Splunk:** `harsh_brake` string vs numeric—normalize in `props`; scoring is illustrative—tune weights per fleet.",
              "z": "Bar chart (scores by driver), Line chart (score trend), Table (detailed events).",
              "kfp": "Score swings after training rides, new hire onboarding, or route changes that the safety coach uses as coaching moments rather than policy breaches.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Telematics platform via HEC.\n• Ensure the following data sources are available: `index=logistics` `sourcetype=\"gps:telematics\"` fields `driver_id`, `harsh_brake`, `speed_kmh`, `speed_limit_kmh`, `idle_min`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest telematics events with driving behavior flags. Calculate composite scores weekly. Provide driver coaching based on trends. Share top performers and improvement areas. **Domain context:** Fleet safety programs often align with CSA SMS categories (speeding, harsh brake); scoring weights should be validated with risk/legal. **Splunk:** `harsh_brake` string vs numeric—normalize in `props`; scoring is illustrative—tune weights per fleet.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=logistics sourcetype=\"gps:telematics\"\n| eval speeding=if(speed_kmh > speed_limit_kmh*1.1, 1, 0)\n| eval harsh=if(harsh_brake=\"true\" OR harsh_brake=1, 1, 0)\n| stats sum(speeding) as speed_events, sum(harsh) as harsh_events, sum(idle_min) as total_idle_min by driver_id\n| eval score=100 - (speed_events*2 + harsh_events*5 + round(total_idle_min/60,0))\n| eval score=if(score<0, 0, score)\n| sort score\n| table driver_id, score, speed_events, harsh_events, total_idle_min\n```\n\nUnderstanding this SPL\n\n**Driver Behavior Scoring** — Analyzing harsh braking, speeding, and excessive idling improves fleet safety, reduces fuel costs, and supports insurance and regulatory compliance.\n\nDocumented **Data sources**: `index=logistics` `sourcetype=\"gps:telematics\"` fields `driver_id`, `harsh_brake`, `speed_kmh`, `speed_limit_kmh`, `idle_min`. **App/TA** (typical add-on context): Telematics platform via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: logistics; **sourcetype**: gps:telematics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=logistics, sourcetype=\"gps:telematics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **speeding** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **harsh** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by driver_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Driver Behavior Scoring**): table driver_id, score, speed_events, harsh_events, total_idle_min\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (scores by driver), Line chart (score trend), Table (detailed events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch driving scores and harsh events. We help you coach drivers before risk scores and insurance costs jump.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "ind": "Transportation and Logistics",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.4.3",
              "n": "Fuel Consumption Anomaly Detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Unusual fuel consumption patterns indicate theft, mechanical issues (injectors, tires), or inefficient routing. Early detection reduces operating costs.",
              "t": "Fuel management system / telematics via HEC",
              "d": "`index=logistics` `sourcetype=\"fuel:consumption\"` fields `vehicle_id`, `fuel_liters`, `distance_km`",
              "q": "index=logistics sourcetype=\"fuel:consumption\"\n| eval fuel_rate=fuel_liters/nullif(distance_km,0)*100\n| eventstats avg(fuel_rate) as fleet_avg, stdev(fuel_rate) as fleet_stdev by vehicle_type\n| eval z_score=abs(fuel_rate-fleet_avg)/nullif(fleet_stdev,0)\n| where z_score > 2\n| table vehicle_id, fuel_rate, fleet_avg, z_score, distance_km",
              "m": "Normalize fuel consumption by distance and load. Baseline per vehicle type. Alert on sustained deviations. Correlate with maintenance records and route data. **Domain context:** Fuel theft and inefficient routing show as outliers; `vehicle_type` should feed `eventstats`—without it, mixed fleets skew z-scores. **Splunk:** Ensure `distance_km` > 0 excludes idling-only rows or `fuel_rate` explodes.",
              "z": "Scatter plot (fuel rate vs distance), Bar chart (outliers), Line chart (fuel rate trend).",
              "kfp": "Fuel noise during idle for loading, reefer setpoints, high winds, or detours the dispatcher recorded as authorized operational variance.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Fuel management system / telematics via HEC.\n• Ensure the following data sources are available: `index=logistics` `sourcetype=\"fuel:consumption\"` fields `vehicle_id`, `fuel_liters`, `distance_km`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize fuel consumption by distance and load. Baseline per vehicle type. Alert on sustained deviations. Correlate with maintenance records and route data. **Domain context:** Fuel theft and inefficient routing show as outliers; `vehicle_type` should feed `eventstats`—without it, mixed fleets skew z-scores. **Splunk:** Ensure `distance_km` > 0 excludes idling-only rows or `fuel_rate` explodes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=logistics sourcetype=\"fuel:consumption\"\n| eval fuel_rate=fuel_liters/nullif(distance_km,0)*100\n| eventstats avg(fuel_rate) as fleet_avg, stdev(fuel_rate) as fleet_stdev by vehicle_type\n| eval z_score=abs(fuel_rate-fleet_avg)/nullif(fleet_stdev,0)\n| where z_score > 2\n| table vehicle_id, fuel_rate, fleet_avg, z_score, distance_km\n```\n\nUnderstanding this SPL\n\n**Fuel Consumption Anomaly Detection** — Unusual fuel consumption patterns indicate theft, mechanical issues (injectors, tires), or inefficient routing. Early detection reduces operating costs.\n\nDocumented **Data sources**: `index=logistics` `sourcetype=\"fuel:consumption\"` fields `vehicle_id`, `fuel_liters`, `distance_km`. **App/TA** (typical add-on context): Fuel management system / telematics via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: logistics; **sourcetype**: fuel:consumption. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=logistics, sourcetype=\"fuel:consumption\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fuel_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by vehicle_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where z_score > 2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Fuel Consumption Anomaly Detection**): table vehicle_id, fuel_rate, fleet_avg, z_score, distance_km\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Scatter plot (fuel rate vs distance), Bar chart (outliers), Line chart (fuel rate trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch fuel use against trip and load context. We help you catch waste, theft, or engine trouble early.",
              "mtype": [
                "Performance"
              ],
              "ind": "Transportation and Logistics",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.4.4",
              "n": "Vehicle Diagnostic Trouble Code Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "OBD-II diagnostic trouble codes provide early warning of mechanical and emissions issues. Monitoring enables proactive maintenance and prevents roadside failures.",
              "t": "OBD-II / telematics via HEC",
              "d": "`index=logistics` `sourcetype=\"obd2:dtc\"` fields `vehicle_id`, `dtc_code`, `dtc_description`, `severity`",
              "q": "index=logistics sourcetype=\"obd2:dtc\"\n| stats count, latest(_time) as last_seen, values(dtc_description) as descriptions by vehicle_id, dtc_code, severity\n| where severity IN (\"critical\", \"warning\")\n| sort - count\n| table vehicle_id, dtc_code, descriptions, severity, count, last_seen",
              "m": "Ingest DTC codes from fleet telematics. Prioritize by severity. Generate automated maintenance work orders for critical codes. Track recurring DTCs by vehicle and fleet-wide. **Domain context:** SAE J2012 defines OBD-II DTC formats; emissions-related codes may have regulatory implications—map `dtc_code` to OEM descriptions via lookup. **Splunk:** DTCs can latch until cleared—dedupe by `vehicle_id`+`dtc_code` with `latest(_time)` for active fault panels.",
              "z": "Table (active DTCs), Bar chart (DTCs by code), Timeline (DTC occurrences).",
              "kfp": "DTC bursts after emissions-related recalls, OBD port scans at inspection lanes, or telematics firmware that clears after a key cycle the driver performed.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OBD-II / telematics via HEC.\n• Ensure the following data sources are available: `index=logistics` `sourcetype=\"obd2:dtc\"` fields `vehicle_id`, `dtc_code`, `dtc_description`, `severity`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest DTC codes from fleet telematics. Prioritize by severity. Generate automated maintenance work orders for critical codes. Track recurring DTCs by vehicle and fleet-wide. **Domain context:** SAE J2012 defines OBD-II DTC formats; emissions-related codes may have regulatory implications—map `dtc_code` to OEM descriptions via lookup. **Splunk:** DTCs can latch until cleared—dedupe by `vehicle_id`+`dtc_code` with `latest(_time)` for active fault panels.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=logistics sourcetype=\"obd2:dtc\"\n| stats count, latest(_time) as last_seen, values(dtc_description) as descriptions by vehicle_id, dtc_code, severity\n| where severity IN (\"critical\", \"warning\")\n| sort - count\n| table vehicle_id, dtc_code, descriptions, severity, count, last_seen\n```\n\nUnderstanding this SPL\n\n**Vehicle Diagnostic Trouble Code Monitoring** — OBD-II diagnostic trouble codes provide early warning of mechanical and emissions issues. Monitoring enables proactive maintenance and prevents roadside failures.\n\nDocumented **Data sources**: `index=logistics` `sourcetype=\"obd2:dtc\"` fields `vehicle_id`, `dtc_code`, `dtc_description`, `severity`. **App/TA** (typical add-on context): OBD-II / telematics via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: logistics; **sourcetype**: obd2:dtc. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=logistics, sourcetype=\"obd2:dtc\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vehicle_id, dtc_code, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where severity IN (\"critical\", \"warning\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Vehicle Diagnostic Trouble Code Monitoring**): table vehicle_id, dtc_code, descriptions, severity, count, last_seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (active DTCs), Bar chart (DTCs by code), Timeline (DTC occurrences).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch engine and chassis fault codes. We help you catch maintenance needs before a roadside failure.",
              "mtype": [
                "Fault"
              ],
              "ind": "Transportation and Logistics",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.4.5",
              "n": "Port Container Crane Cycle Time Analytics",
              "c": "medium",
              "f": "intermediate",
              "v": "Crane cycle time is a key productivity metric for port operations. Tracking enables operator performance comparison and equipment optimization.",
              "t": "Terminal operating system / crane PLC via HEC",
              "d": "`index=logistics` `sourcetype=\"crane:cycle\"` fields `crane_id`, `cycle_time_sec`, `move_type`, `operator_id`",
              "q": "index=logistics sourcetype=\"crane:cycle\"\n| bin _time span=1h\n| stats avg(cycle_time_sec) as avg_cycle, perc95(cycle_time_sec) as p95_cycle, count as moves by crane_id, _time\n| eval moves_per_hour=moves\n| table _time, crane_id, avg_cycle, p95_cycle, moves_per_hour",
              "m": "Capture crane PLM (Position Location Measurement) data. Calculate gross and net crane rates. Compare operators and shifts. Identify delays from vessel configuration and weather. **Domain context:** Port productivity is often measured in moves per hour (MPH) per crane and ship working rate—weather and twin lifts materially affect cycle time. **Splunk:** `moves_per_hour` in the sample equals `count` per bin—rename to `moves_in_bin` unless you normalize to hourly rate explicitly.",
              "z": "Line chart (cycle time trend), Bar chart (MPH by crane), Table (shift performance).",
              "kfp": "Crane cycle variance from vessel bunching, labor breaks, or spreader retunes during berth maintenance that the terminal ops meeting already expected.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Terminal operating system / crane PLC via HEC.\n• Ensure the following data sources are available: `index=logistics` `sourcetype=\"crane:cycle\"` fields `crane_id`, `cycle_time_sec`, `move_type`, `operator_id`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture crane PLM (Position Location Measurement) data. Calculate gross and net crane rates. Compare operators and shifts. Identify delays from vessel configuration and weather. **Domain context:** Port productivity is often measured in moves per hour (MPH) per crane and ship working rate—weather and twin lifts materially affect cycle time. **Splunk:** `moves_per_hour` in the sample equals `count` per bin—rename to `moves_in_bin` unless you normalize to hourly rate explicitly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=logistics sourcetype=\"crane:cycle\"\n| bin _time span=1h\n| stats avg(cycle_time_sec) as avg_cycle, perc95(cycle_time_sec) as p95_cycle, count as moves by crane_id, _time\n| eval moves_per_hour=moves\n| table _time, crane_id, avg_cycle, p95_cycle, moves_per_hour\n```\n\nUnderstanding this SPL\n\n**Port Container Crane Cycle Time Analytics** — Crane cycle time is a key productivity metric for port operations. Tracking enables operator performance comparison and equipment optimization.\n\nDocumented **Data sources**: `index=logistics` `sourcetype=\"crane:cycle\"` fields `crane_id`, `cycle_time_sec`, `move_type`, `operator_id`. **App/TA** (typical add-on context): Terminal operating system / crane PLC via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: logistics; **sourcetype**: crane:cycle. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=logistics, sourcetype=\"crane:cycle\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by crane_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **moves_per_hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Port Container Crane Cycle Time Analytics**): table _time, crane_id, avg_cycle, p95_cycle, moves_per_hour\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (cycle time trend), Bar chart (MPH by crane), Table (shift performance).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch crane cycle times at the port. We help you catch berth delays before ship schedules slip.",
              "mtype": [
                "Performance"
              ],
              "ind": "Transportation and Logistics",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.4.6",
              "n": "Rail Signaling System Health Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Rail signaling failures cause service disruptions and safety incidents. Monitoring signal system health enables proactive maintenance and reduces delay minutes.",
              "t": "Signaling system logs via syslog/HEC",
              "d": "`index=logistics` `sourcetype=\"rail:signal\"` fields `signal_id`, `status`, `fault_code`, `location`",
              "q": "index=logistics sourcetype=\"rail:signal\"\n| where status!=\"normal\" OR isnotnull(fault_code)\n| stats count, latest(_time) as last_fault, values(fault_code) as fault_codes by signal_id, location\n| sort - count\n| table signal_id, location, fault_codes, count, last_fault",
              "m": "Integrate signaling system diagnostic logs. Alert on persistent faults and degraded modes. Track mean time between failures by signal type. Report for infrastructure maintenance planning. **Domain context:** Rail signaling architectures vary (CBTC, ETCS, legacy relay interlocking); safety-critical alerts belong in the wayside/ATP system of record—Splunk is for diagnostics and trend, not interlocking. **Splunk:** Restrict access; avoid exposing safety-critical topology in shared dashboards.",
              "z": "Map (signal status by location), Table (faulted signals), Timeline (fault history).",
              "kfp": "Signal health flags during interlocking software loads, wayside test intervals, or possession windows that the rail signal maintainer released with paperwork.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Signaling system logs via syslog/HEC.\n• Ensure the following data sources are available: `index=logistics` `sourcetype=\"rail:signal\"` fields `signal_id`, `status`, `fault_code`, `location`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate signaling system diagnostic logs. Alert on persistent faults and degraded modes. Track mean time between failures by signal type. Report for infrastructure maintenance planning. **Domain context:** Rail signaling architectures vary (CBTC, ETCS, legacy relay interlocking); safety-critical alerts belong in the wayside/ATP system of record—Splunk is for diagnostics and trend, not interlocking. **Splunk:** Restrict access; avoid exposing safety-critical topology in shared dashboards.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=logistics sourcetype=\"rail:signal\"\n| where status!=\"normal\" OR isnotnull(fault_code)\n| stats count, latest(_time) as last_fault, values(fault_code) as fault_codes by signal_id, location\n| sort - count\n| table signal_id, location, fault_codes, count, last_fault\n```\n\nUnderstanding this SPL\n\n**Rail Signaling System Health Monitoring** — Rail signaling failures cause service disruptions and safety incidents. Monitoring signal system health enables proactive maintenance and reduces delay minutes.\n\nDocumented **Data sources**: `index=logistics` `sourcetype=\"rail:signal\"` fields `signal_id`, `status`, `fault_code`, `location`. **App/TA** (typical add-on context): Signaling system logs via syslog/HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: logistics; **sourcetype**: rail:signal. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=logistics, sourcetype=\"rail:signal\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status!=\"normal\" OR isnotnull(fault_code)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by signal_id, location** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Rail Signaling System Health Monitoring**): table signal_id, location, fault_codes, count, last_fault\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (signal status by location), Table (faulted signals), Timeline (fault history).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch rail signaling and interlocking health. We help you catch faults before trains are slowed or stopped.",
              "mtype": [
                "Availability"
              ],
              "ind": "Transportation and Logistics",
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.4.7",
              "n": "Airport Baggage Handling System Throughput",
              "c": "high",
              "f": "intermediate",
              "v": "Baggage system throughput impacts flight departure times and passenger satisfaction. Monitoring identifies jams, diversions, and screening bottlenecks.",
              "t": "BHS SCADA / sortation system via HEC",
              "d": "`index=logistics` `sourcetype=\"bhs:throughput\"` fields `lane_id`, `bags_per_hour`, `jam_count`, `divert_count`",
              "q": "index=logistics sourcetype=\"bhs:throughput\"\n| bin _time span=15m\n| stats sum(bags_per_hour) as total_bags, sum(jam_count) as jams, sum(divert_count) as diverts by lane_id, _time\n| where jams > 0 OR diverts > 5\n| table _time, lane_id, total_bags, jams, diverts",
              "m": "Ingest BHS SCADA events for bag counts, jams, and screening diversions. Track throughput by lane and terminal. Alert on sustained throughput drops or jam accumulation. Correlate with flight schedules. **Domain context:** IATA RP 1745 defines BSM messaging; throughput and jam metrics tie to MHB (mishandled baggage) programs. **Splunk:** `bags_per_hour` field naming may be ambiguous—confirm whether source provides hourly aggregates or per-sample rates before summing.",
              "z": "Line chart (throughput trend), Bar chart (jams by lane), Table (performance summary).",
              "kfp": "Throughput dips during irregular operations, sortation or belt maintenance, software releases, or flights held for flow that ramp control already declared.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BHS SCADA / sortation system via HEC.\n• Ensure the following data sources are available: `index=logistics` `sourcetype=\"bhs:throughput\"` fields `lane_id`, `bags_per_hour`, `jam_count`, `divert_count`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest BHS SCADA events for bag counts, jams, and screening diversions. Track throughput by lane and terminal. Alert on sustained throughput drops or jam accumulation. Correlate with flight schedules. **Domain context:** IATA RP 1745 defines BSM messaging; throughput and jam metrics tie to MHB (mishandled baggage) programs. **Splunk:** `bags_per_hour` field naming may be ambiguous—confirm whether source provides hourly aggregates or per-sample rates before summing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=logistics sourcetype=\"bhs:throughput\"\n| bin _time span=15m\n| stats sum(bags_per_hour) as total_bags, sum(jam_count) as jams, sum(divert_count) as diverts by lane_id, _time\n| where jams > 0 OR diverts > 5\n| table _time, lane_id, total_bags, jams, diverts\n```\n\nUnderstanding this SPL\n\n**Airport Baggage Handling System Throughput** — Baggage system throughput impacts flight departure times and passenger satisfaction. Monitoring identifies jams, diversions, and screening bottlenecks.\n\nDocumented **Data sources**: `index=logistics` `sourcetype=\"bhs:throughput\"` fields `lane_id`, `bags_per_hour`, `jam_count`, `divert_count`. **App/TA** (typical add-on context): BHS SCADA / sortation system via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: logistics; **sourcetype**: bhs:throughput. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=logistics, sourcetype=\"bhs:throughput\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by lane_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where jams > 0 OR diverts > 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Airport Baggage Handling System Throughput**): table _time, lane_id, total_bags, jams, diverts\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (throughput trend), Bar chart (jams by lane), Table (performance summary).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch baggage system throughput and misroutes. We help you catch piles and wrong paths before bags miss the plane.",
              "mtype": [
                "Performance"
              ],
              "ind": "Transportation and Logistics",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.4.8",
              "n": "Warehouse Management System Order Accuracy",
              "c": "medium",
              "f": "beginner",
              "v": "Pick accuracy directly impacts customer satisfaction, returns, and operational costs. Tracking accuracy by zone and picker enables targeted training.",
              "t": "WMS application logs via HEC",
              "d": "`index=logistics` `sourcetype=\"wms:order\"` fields `order_id`, `pick_correct`, `zone`, `picker_id`",
              "q": "index=logistics sourcetype=\"wms:order\"\n| eval correct=if(pick_correct=\"true\" OR pick_correct=1, 1, 0)\n| stats avg(correct) as accuracy_pct, count as total_picks by zone, picker_id\n| eval accuracy_pct=round(accuracy_pct*100,2)\n| where accuracy_pct < 99.5\n| sort accuracy_pct\n| table zone, picker_id, accuracy_pct, total_picks",
              "m": "Capture pick confirmation events from WMS. Calculate accuracy rates by zone and picker. Alert on accuracy below threshold. Identify systemic issues vs individual training needs. **Domain context:** Pick accuracy impacts OTIF and returns; voice-pick and scan-to-light systems have different baseline error modes. **Splunk:** `stats avg(correct)` on binary picks yields accuracy per aggregation grain—ensure enough `total_picks` per cell before ranking pickers.",
              "z": "Bar chart (accuracy by zone), Table (picker performance), Gauge (overall accuracy).",
              "kfp": "Accuracy noise during pick-path changes, new SKU cut-ins, or cycle-count blitz weeks that the DC manager reconciled against WMS and physical audits.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: WMS application logs via HEC.\n• Ensure the following data sources are available: `index=logistics` `sourcetype=\"wms:order\"` fields `order_id`, `pick_correct`, `zone`, `picker_id`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCapture pick confirmation events from WMS. Calculate accuracy rates by zone and picker. Alert on accuracy below threshold. Identify systemic issues vs individual training needs. **Domain context:** Pick accuracy impacts OTIF and returns; voice-pick and scan-to-light systems have different baseline error modes. **Splunk:** `stats avg(correct)` on binary picks yields accuracy per aggregation grain—ensure enough `total_picks` per cell before ranking pickers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=logistics sourcetype=\"wms:order\"\n| eval correct=if(pick_correct=\"true\" OR pick_correct=1, 1, 0)\n| stats avg(correct) as accuracy_pct, count as total_picks by zone, picker_id\n| eval accuracy_pct=round(accuracy_pct*100,2)\n| where accuracy_pct < 99.5\n| sort accuracy_pct\n| table zone, picker_id, accuracy_pct, total_picks\n```\n\nUnderstanding this SPL\n\n**Warehouse Management System Order Accuracy** — Pick accuracy directly impacts customer satisfaction, returns, and operational costs. Tracking accuracy by zone and picker enables targeted training.\n\nDocumented **Data sources**: `index=logistics` `sourcetype=\"wms:order\"` fields `order_id`, `pick_correct`, `zone`, `picker_id`. **App/TA** (typical add-on context): WMS application logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: logistics; **sourcetype**: wms:order. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=logistics, sourcetype=\"wms:order\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **correct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by zone, picker_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **accuracy_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where accuracy_pct < 99.5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Warehouse Management System Order Accuracy**): table zone, picker_id, accuracy_pct, total_picks\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Bar chart (accuracy by zone), Table (picker performance), Gauge (overall accuracy).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch warehouse order accuracy. We help you catch mispicks before customers get the wrong product.",
              "mtype": [
                "Performance"
              ],
              "ind": "Transportation and Logistics",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.4.9",
              "n": "Last-Mile Delivery SLA Compliance",
              "c": "high",
              "f": "intermediate",
              "v": "Last-mile delivery performance drives customer experience and contract compliance. Tracking on-time rates enables route optimization and exception management.",
              "t": "Delivery management platform via HEC",
              "d": "`index=logistics` `sourcetype=\"delivery:event\"` fields `delivery_id`, `promised_time`, `actual_time`, `status`",
              "q": "index=logistics sourcetype=\"delivery:event\" status=\"delivered\"\n| eval on_time=if(actual_time <= promised_time, 1, 0)\n| eval late_min=if(on_time=0, round((actual_time-promised_time)/60,1), 0)\n| bin _time span=1d\n| stats avg(on_time) as otd_rate, avg(late_min) as avg_late_min, count as deliveries by _time\n| eval otd_pct=round(otd_rate*100,2)\n| table _time, deliveries, otd_pct, avg_late_min",
              "m": "Integrate delivery completion events with promised timestamps. Calculate on-time delivery percentage daily. Alert on drops below SLA threshold. Analyze late deliveries by route and driver. **Domain context:** Last-mile SLAs often exclude force majeure; “promised_time” may be customer slot vs carrier internal SLA—document definitions. **Splunk:** `actual_time` and `promised_time` must be epoch-consistent; `status=\"delivered\"` should exclude cancelled/returned.",
              "z": "Line chart (OTD trend), Gauge (current OTD %), Bar chart (late deliveries by reason).",
              "kfp": "SLA risk days from weather, address validation quirks, or hub congestion during peak that the last-mile control tower explains with contingency routes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Delivery management platform via HEC.\n• Ensure the following data sources are available: `index=logistics` `sourcetype=\"delivery:event\"` fields `delivery_id`, `promised_time`, `actual_time`, `status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIntegrate delivery completion events with promised timestamps. Calculate on-time delivery percentage daily. Alert on drops below SLA threshold. Analyze late deliveries by route and driver. **Domain context:** Last-mile SLAs often exclude force majeure; “promised_time” may be customer slot vs carrier internal SLA—document definitions. **Splunk:** `actual_time` and `promised_time` must be epoch-consistent; `status=\"delivered\"` should exclude cancelled/returned.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=logistics sourcetype=\"delivery:event\" status=\"delivered\"\n| eval on_time=if(actual_time <= promised_time, 1, 0)\n| eval late_min=if(on_time=0, round((actual_time-promised_time)/60,1), 0)\n| bin _time span=1d\n| stats avg(on_time) as otd_rate, avg(late_min) as avg_late_min, count as deliveries by _time\n| eval otd_pct=round(otd_rate*100,2)\n| table _time, deliveries, otd_pct, avg_late_min\n```\n\nUnderstanding this SPL\n\n**Last-Mile Delivery SLA Compliance** — Last-mile delivery performance drives customer experience and contract compliance. Tracking on-time rates enables route optimization and exception management.\n\nDocumented **Data sources**: `index=logistics` `sourcetype=\"delivery:event\"` fields `delivery_id`, `promised_time`, `actual_time`, `status`. **App/TA** (typical add-on context): Delivery management platform via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: logistics; **sourcetype**: delivery:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=logistics, sourcetype=\"delivery:event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **on_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **late_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **otd_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Last-Mile Delivery SLA Compliance**): table _time, deliveries, otd_pct, avg_late_min\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (OTD trend), Gauge (current OTD %), Bar chart (late deliveries by reason).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch last-mile delivery against promised times. We help you catch late stops before service credits pile up.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "ind": "Transportation and Logistics",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.4.10",
              "n": "Cold Chain Temperature Excursion for Perishable Goods",
              "c": "critical",
              "f": "intermediate",
              "v": "Temperature excursions during transport of perishable goods cause spoilage, regulatory violations, and customer claims. Real-time monitoring enables immediate corrective action.",
              "t": "Cold chain sensors via MQTT/HEC",
              "d": "`index=logistics` `sourcetype=\"coldchain:transit\"` fields `shipment_id`, `temp_c`, `setpoint_c`, `tolerance_c`",
              "q": "index=logistics sourcetype=\"coldchain:transit\"\n| eval low=setpoint_c-tolerance_c, high=setpoint_c+tolerance_c\n| eval excursion=if(temp_c < low OR temp_c > high, 1, 0)\n| where excursion=1\n| stats earliest(_time) as start, latest(_time) as end, max(abs(temp_c-setpoint_c)) as max_dev by shipment_id\n| eval duration_min=round((end-start)/60,1)\n| table shipment_id, start, end, duration_min, max_dev",
              "m": "Tag readings with shipment and setpoint. Escalate by duration and deviation magnitude. Integrate with claims and receiver QA for traceability. **Domain context:** FDA FSMA and EU GDP for pharma/food logistics often require excursion investigation and disposition—retain evidence per policy. **Splunk:** Same pattern as healthcare cold chain—use product-specific lookups for `setpoint_c`/`tolerance_c` when mixed loads.",
              "z": "Time series (temp vs bounds), Table (excursions), Duration chart.",
              "kfp": "Excursions on short handoffs, aircraft holds, reefer pre-cool, or consignee appointment slips that the cold-chain exception report already closed with a release.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cold chain sensors via MQTT/HEC.\n• Ensure the following data sources are available: `index=logistics` `sourcetype=\"coldchain:transit\"` fields `shipment_id`, `temp_c`, `setpoint_c`, `tolerance_c`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag readings with shipment and setpoint. Escalate by duration and deviation magnitude. Integrate with claims and receiver QA for traceability. **Domain context:** FDA FSMA and EU GDP for pharma/food logistics often require excursion investigation and disposition—retain evidence per policy. **Splunk:** Same pattern as healthcare cold chain—use product-specific lookups for `setpoint_c`/`tolerance_c` when mixed loads.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=logistics sourcetype=\"coldchain:transit\"\n| eval low=setpoint_c-tolerance_c, high=setpoint_c+tolerance_c\n| eval excursion=if(temp_c < low OR temp_c > high, 1, 0)\n| where excursion=1\n| stats earliest(_time) as start, latest(_time) as end, max(abs(temp_c-setpoint_c)) as max_dev by shipment_id\n| eval duration_min=round((end-start)/60,1)\n| table shipment_id, start, end, duration_min, max_dev\n```\n\nUnderstanding this SPL\n\n**Cold Chain Temperature Excursion for Perishable Goods** — Temperature excursions during transport of perishable goods cause spoilage, regulatory violations, and customer claims. Real-time monitoring enables immediate corrective action.\n\nDocumented **Data sources**: `index=logistics` `sourcetype=\"coldchain:transit\"` fields `shipment_id`, `temp_c`, `setpoint_c`, `tolerance_c`. **App/TA** (typical add-on context): Cold chain sensors via MQTT/HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: logistics; **sourcetype**: coldchain:transit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=logistics, sourcetype=\"coldchain:transit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **low** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **excursion** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where excursion=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by shipment_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Cold Chain Temperature Excursion for Perishable Goods**): table shipment_id, start, end, duration_min, max_dev\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time series (temp vs bounds), Table (excursions), Duration chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch reefer and cold-chain temperatures. We help you catch excursions before perishable goods spoil.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Transportation and Logistics",
              "a": [
                "N/A"
              ],
              "e": [
                "mqtt"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.4.11",
              "n": "Intermodal Container Dwell Time",
              "c": "medium",
              "f": "intermediate",
              "v": "Excessive container dwell time at terminals increases demurrage costs and reduces asset turns. Tracking dwell enables process improvements and better planning.",
              "t": "Terminal operating system via HEC",
              "d": "`index=logistics` `sourcetype=\"container:dwell\"` fields `container_id`, `dwell_hours`, `facility`",
              "q": "index=logistics sourcetype=\"container:dwell\"\n| where isnotnull(dwell_hours)\n| stats avg(dwell_hours) as avg_dwell, perc95(dwell_hours) as p95_dwell, max(dwell_hours) as max_dwell by facility\n| sort - p95_dwell\n| table facility, avg_dwell, p95_dwell, max_dwell",
              "m": "Define dwell from ingate to outgate. Compare rail vs truck facilities. Target reduction projects at high-p95 sites. **Domain context:** Demurrage/detention charges incentivize dwell reduction; intermodal metrics often feed into shipper scorecards (OTIF). **Splunk:** If `dwell_hours` is snapshot, not event-derived, confirm refresh cadence to avoid stale rankings.",
              "z": "Bar chart (p95 by facility), Histogram (dwell distribution), Trend of avg dwell.",
              "kfp": "Dwell noise from customs exams, rail ramp congestion, chassis shortages, or steamship line rollovers the intermodal desk tracks on a hot list.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Terminal operating system via HEC.\n• Ensure the following data sources are available: `index=logistics` `sourcetype=\"container:dwell\"` fields `container_id`, `dwell_hours`, `facility`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDefine dwell from ingate to outgate. Compare rail vs truck facilities. Target reduction projects at high-p95 sites. **Domain context:** Demurrage/detention charges incentivize dwell reduction; intermodal metrics often feed into shipper scorecards (OTIF). **Splunk:** If `dwell_hours` is snapshot, not event-derived, confirm refresh cadence to avoid stale rankings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=logistics sourcetype=\"container:dwell\"\n| where isnotnull(dwell_hours)\n| stats avg(dwell_hours) as avg_dwell, perc95(dwell_hours) as p95_dwell, max(dwell_hours) as max_dwell by facility\n| sort - p95_dwell\n| table facility, avg_dwell, p95_dwell, max_dwell\n```\n\nUnderstanding this SPL\n\n**Intermodal Container Dwell Time** — Excessive container dwell time at terminals increases demurrage costs and reduces asset turns. Tracking dwell enables process improvements and better planning.\n\nDocumented **Data sources**: `index=logistics` `sourcetype=\"container:dwell\"` fields `container_id`, `dwell_hours`, `facility`. **App/TA** (typical add-on context): Terminal operating system via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: logistics; **sourcetype**: container:dwell. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=logistics, sourcetype=\"container:dwell\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(dwell_hours)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by facility** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Intermodal Container Dwell Time**): table facility, avg_dwell, p95_dwell, max_dwell\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (p95 by facility), Histogram (dwell distribution), Trend of avg dwell.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how long boxes sit in the yard. We help you catch dwell before demurrage and port flow clog.",
              "mtype": [
                "Capacity"
              ],
              "ind": "Transportation and Logistics",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.4.12",
              "n": "Traffic Management System Sensor Availability",
              "c": "high",
              "f": "beginner",
              "v": "Roadside sensor availability impacts traffic management system accuracy. Monitoring sensor uptime enables proactive maintenance and reliable traffic data.",
              "t": "ATMS field device gateway via HEC",
              "d": "`index=logistics` `sourcetype=\"tms:sensor\"` fields `sensor_id`, `last_reading_epoch`, `status`",
              "q": "index=logistics sourcetype=\"tms:sensor\"\n| eval age_sec=now()-last_reading_epoch\n| eval online=if(status=\"active\" AND age_sec < 300, 1, 0)\n| stats avg(online) as avail_frac by sensor_id\n| eval avail_pct=round(avail_frac*100,2)\n| where avail_pct < 95\n| table sensor_id, avail_pct",
              "m": "Monitor sensor heartbeats. Tune stale threshold per sensor class. Alert maintenance when availability drops below target. **Domain context:** ATMS data feeds ITS and traveler information systems—stale speed/occupancy sensors degrade routing algorithms and public-facing travel times. **Splunk:** `stats avg(online)` across all rows per sensor assumes one row per poll—if multiple rows per sensor per bucket, use `latest(online)` or `timechart` instead.",
              "z": "Map (sensor status), Table (offline sensors), Time chart (online %).",
              "kfp": "Sensor or availability gaps while cabinets reboot for firmware, fiber cuts for roadwork, or camera wash-down the traffic management contract covers as planned.",
              "refs": "[Splunk OT Intelligence](https://splunkbase.splunk.com/app/5180), [OT Security Add-on](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ATMS field device gateway via HEC.\n• Ensure the following data sources are available: `index=logistics` `sourcetype=\"tms:sensor\"` fields `sensor_id`, `last_reading_epoch`, `status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMonitor sensor heartbeats. Tune stale threshold per sensor class. Alert maintenance when availability drops below target. **Domain context:** ATMS data feeds ITS and traveler information systems—stale speed/occupancy sensors degrade routing algorithms and public-facing travel times. **Splunk:** `stats avg(online)` across all rows per sensor assumes one row per poll—if multiple rows per sensor per bucket, use `latest(online)` or `timechart` instead.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=logistics sourcetype=\"tms:sensor\"\n| eval age_sec=now()-last_reading_epoch\n| eval online=if(status=\"active\" AND age_sec < 300, 1, 0)\n| stats avg(online) as avail_frac by sensor_id\n| eval avail_pct=round(avail_frac*100,2)\n| where avail_pct < 95\n| table sensor_id, avail_pct\n```\n\nUnderstanding this SPL\n\n**Traffic Management System Sensor Availability** — Roadside sensor availability impacts traffic management system accuracy. Monitoring sensor uptime enables proactive maintenance and reliable traffic data.\n\nDocumented **Data sources**: `index=logistics` `sourcetype=\"tms:sensor\"` fields `sensor_id`, `last_reading_epoch`, `status`. **App/TA** (typical add-on context): ATMS field device gateway via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: logistics; **sourcetype**: tms:sensor. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=logistics, sourcetype=\"tms:sensor\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **online** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by sensor_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avail_pct < 95` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Traffic Management System Sensor Availability**): table sensor_id, avail_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Map (sensor status), Table (offline sensors), Time chart (online %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch traffic and sensor health on the road network. We help you catch blind spots before control centers lose trust in the map.",
              "mtype": [
                "Availability"
              ],
              "ind": "Transportation and Logistics",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.5,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 11,
            "none": 0
          }
        },
        {
          "i": "21.5",
          "n": "Oil, Gas, and Mining",
          "u": [
            {
              "i": "21.5.1",
              "n": "Pipeline Pressure and Flow Rate Anomaly Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Unusual pressure or flow patterns may indicate leaks, blockages, or instrument drift. Early detection prevents environmental incidents and costly shutdowns.",
              "t": "Splunk OT Intelligence, historian via HEC",
              "d": "`index=ot` `sourcetype=\"pipeline:pressure\"` fields `segment_id`, `pressure_psi`, `flow_bbl_h`",
              "q": "index=ot sourcetype=\"pipeline:pressure\"\n| eventstats median(pressure_psi) as med_p median(flow_bbl_h) as med_f by segment_id\n| eval dev_p=abs(pressure_psi-med_p)\n| where dev_p > 15 OR flow_bbl_h < med_f*0.5 OR flow_bbl_h > med_f*1.5\n| stats latest(pressure_psi) as pressure, latest(flow_bbl_h) as flow, latest(med_p) as expected_p by segment_id",
              "m": "Ingest high-resolution historian samples. Tune thresholds per segment. Route findings to the control room. Do not use Splunk alone for automatic shutdown. **Domain context:** Pipeline leak detection often combines pressure/flow with SCADA leak detection systems and regulatory PHMSA reporting—analytics here support operations, not primary safety instrumented functions. **Splunk:** `eventstats median` is sensitive to outliers—consider robust baselines per `segment_id` seasonally; validate field units (PSI vs barg).",
              "z": "Time chart (pressure and flow), Anomaly overlay, Segment map.",
              "kfp": "Pressure/flow noise during pigging, batch pig tracking, or SCADA time sync drift that the pipeline control center compared with the field ticket.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Intelligence, historian via HEC.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"pipeline:pressure\"` fields `segment_id`, `pressure_psi`, `flow_bbl_h`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest high-resolution historian samples. Tune thresholds per segment. Route findings to the control room. Do not use Splunk alone for automatic shutdown. **Domain context:** Pipeline leak detection often combines pressure/flow with SCADA leak detection systems and regulatory PHMSA reporting—analytics here support operations, not primary safety instrumented functions. **Splunk:** `eventstats median` is sensitive to outliers—consider robust baselines per `segment_id` seasonally; validate field un…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"pipeline:pressure\"\n| eventstats median(pressure_psi) as med_p median(flow_bbl_h) as med_f by segment_id\n| eval dev_p=abs(pressure_psi-med_p)\n| where dev_p > 15 OR flow_bbl_h < med_f*0.5 OR flow_bbl_h > med_f*1.5\n| stats latest(pressure_psi) as pressure, latest(flow_bbl_h) as flow, latest(med_p) as expected_p by segment_id\n```\n\nUnderstanding this SPL\n\n**Pipeline Pressure and Flow Rate Anomaly Detection** — Unusual pressure or flow patterns may indicate leaks, blockages, or instrument drift. Early detection prevents environmental incidents and costly shutdowns.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"pipeline:pressure\"` fields `segment_id`, `pressure_psi`, `flow_bbl_h`. **App/TA** (typical add-on context): Splunk OT Intelligence, historian via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: pipeline:pressure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"pipeline:pressure\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eventstats` rolls up events into metrics; results are split **by segment_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dev_p** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where dev_p > 15 OR flow_bbl_h < med_f*0.5 OR flow_bbl_h > med_f*1.5` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by segment_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (pressure and flow), Anomaly overlay, Segment map.",
              "script": "",
              "premium": "Splunk OT Intelligence",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch pipeline pressure and flow. We help you catch leaks, blocks, and odd trends before a release or environmental hit.",
              "mtype": [
                "Fault"
              ],
              "ind": "Oil, Gas, and Mining",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.5.2",
              "n": "Wellhead Telemetry Data Gap Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Missing wellhead measurements indicate communication failures. Quick detection ensures SCADA and field teams can restore data continuity.",
              "t": "Edge Hub / RTU feeds via HEC",
              "d": "`index=ot` `sourcetype=\"wellhead:telemetry\"` fields `well_id`, `expected_interval_sec`",
              "q": "index=ot sourcetype=\"wellhead:telemetry\"\n| stats latest(_time) as last_seen by well_id\n| eval gap_sec=now()-last_seen\n| where gap_sec > 900\n| eval gap_min=round(gap_sec/60,1)\n| sort - gap_sec\n| table well_id, last_seen, gap_min",
              "m": "Include expected interval on each event or use 3x scan rate as threshold. Verify against planned shutdowns via a maintenance lookup. **Domain context:** Wellhead SCADA gaps can mask unsafe operating conditions or production loss—align thresholds with RTU poll schedule (minutes vs seconds). **Splunk:** `now()-last_seen` in scheduled searches uses search time; use `latest(_time)` from a summarization for fleet-wide freshness.",
              "z": "Table (wells by gap), Single value (stale well count), Timeline (last seen).",
              "kfp": "Gap clusters after solar-power swaps on radios, plunger-lift transients, or comms work on the lease road that the pumper and SCADA support annotated.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Edge Hub / RTU feeds via HEC.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"wellhead:telemetry\"` fields `well_id`, `expected_interval_sec`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInclude expected interval on each event or use 3x scan rate as threshold. Verify against planned shutdowns via a maintenance lookup. **Domain context:** Wellhead SCADA gaps can mask unsafe operating conditions or production loss—align thresholds with RTU poll schedule (minutes vs seconds). **Splunk:** `now()-last_seen` in scheduled searches uses search time; use `latest(_time)` from a summarization for fleet-wide freshness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"wellhead:telemetry\"\n| stats latest(_time) as last_seen by well_id\n| eval gap_sec=now()-last_seen\n| where gap_sec > 900\n| eval gap_min=round(gap_sec/60,1)\n| sort - gap_sec\n| table well_id, last_seen, gap_min\n```\n\nUnderstanding this SPL\n\n**Wellhead Telemetry Data Gap Monitoring** — Missing wellhead measurements indicate communication failures. Quick detection ensures SCADA and field teams can restore data continuity.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"wellhead:telemetry\"` fields `well_id`, `expected_interval_sec`. **App/TA** (typical add-on context): Edge Hub / RTU feeds via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: wellhead:telemetry. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"wellhead:telemetry\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by well_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gap_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap_sec > 900` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **gap_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Wellhead Telemetry Data Gap Monitoring**): table well_id, last_seen, gap_min\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (wells by gap), Single value (stale well count), Timeline (last seen).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch wellhead telemetry for gaps. We help you catch dead sensors before production reports go wrong.",
              "mtype": [
                "Availability"
              ],
              "ind": "Oil, Gas, and Mining",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.5.3",
              "n": "Gas Compressor Vibration Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Vibration trending catches mechanical issues during planned outages rather than as unplanned failures, improving equipment reliability and reducing costs.",
              "t": "Vibration monitoring system / historian via HEC",
              "d": "`index=ot` `sourcetype=\"compressor:vibration\"` fields `asset_id`, `vibration_mm_s`, `bearing_location`",
              "q": "index=ot sourcetype=\"compressor:vibration\"\n| timechart span=1h avg(vibration_mm_s) as avg_vib, perc95(vibration_mm_s) as p95_vib by asset_id",
              "m": "Align units with your vibration program. Set rising-rate alerts against maintenance baselines. Compare bearing locations on the same train for imbalance context. **Domain context:** API 670 / ISO 10816 severity zones apply to rotating equipment—gas compressors may have multiple casings; trend relative change, not only absolute mm/s. **Splunk:** `timechart` alone has no alert—wrap in saved search or use `predict`/MLTK for sustained rise if desired.",
              "z": "Multi-series time chart, Heatmap (asset × week), Threshold bands.",
              "kfp": "Vibration noise during surge loading, lube top-offs, or temporary speed changes the compressor control strategy allows during peaking or anti-surge tests.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vibration monitoring system / historian via HEC.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"compressor:vibration\"` fields `asset_id`, `vibration_mm_s`, `bearing_location`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign units with your vibration program. Set rising-rate alerts against maintenance baselines. Compare bearing locations on the same train for imbalance context. **Domain context:** API 670 / ISO 10816 severity zones apply to rotating equipment—gas compressors may have multiple casings; trend relative change, not only absolute mm/s. **Splunk:** `timechart` alone has no alert—wrap in saved search or use `predict`/MLTK for sustained rise if desired.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"compressor:vibration\"\n| timechart span=1h avg(vibration_mm_s) as avg_vib, perc95(vibration_mm_s) as p95_vib by asset_id\n```\n\nUnderstanding this SPL\n\n**Gas Compressor Vibration Trending** — Vibration trending catches mechanical issues during planned outages rather than as unplanned failures, improving equipment reliability and reducing costs.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"compressor:vibration\"` fields `asset_id`, `vibration_mm_s`, `bearing_location`. **App/TA** (typical add-on context): Vibration monitoring system / historian via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: compressor:vibration. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"compressor:vibration\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by asset_id** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Multi-series time chart, Heatmap (asset × week), Threshold bands.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch gas compressor vibration. We help you catch mechanical wear before trip or downtime.",
              "mtype": [
                "Performance"
              ],
              "ind": "Oil, Gas, and Mining",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.5.4",
              "n": "Flare Stack Event Correlation and Emissions Tracking",
              "c": "high",
              "f": "advanced",
              "v": "Flare duration and intensity tracking supports environmental reporting and identifies operational events causing excessive flaring.",
              "t": "Flare meter / CEMS via HEC",
              "d": "`index=ot` `sourcetype=\"flare:event\"` fields `flare_id`, `duration_min`, `rate_mmscfd`, `site_id`",
              "q": "index=ot sourcetype=\"flare:event\"\n| eval day=strftime(_time,\"%Y-%m-%d\")\n| stats sum(duration_min) as total_minutes, count as flare_events by site_id, day\n| eval hours_flared=round(total_minutes/60,2)\n| sort - hours_flared\n| table site_id, day, flare_events, hours_flared",
              "m": "Normalize flare start/stop events. Validate volume methods against regulatory calculation approach. Join wind or process tags for correlation. **Domain context:** Flaring reports often tie to air permits and GHG inventories (e.g. EPA flare monitoring requirements where applicable); mass balance vs duration-based estimates must match compliance methodology. **Splunk:** Deduplicate start/stop pairs; `duration_min` summation assumes one event per period—validate for overlapping flares.",
              "z": "Time chart (flare hours), Stacked bar by site, Correlation panel with permit limits.",
              "kfp": "Flare counts during depressure for maintenance, pilot-light issues in cold weather, or flaring on upset that environmental gave variance credit for in the log.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Flare meter / CEMS via HEC.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"flare:event\"` fields `flare_id`, `duration_min`, `rate_mmscfd`, `site_id`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize flare start/stop events. Validate volume methods against regulatory calculation approach. Join wind or process tags for correlation. **Domain context:** Flaring reports often tie to air permits and GHG inventories (e.g. EPA flare monitoring requirements where applicable); mass balance vs duration-based estimates must match compliance methodology. **Splunk:** Deduplicate start/stop pairs; `duration_min` summation assumes one event per period—validate for overlapping flares.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"flare:event\"\n| eval day=strftime(_time,\"%Y-%m-%d\")\n| stats sum(duration_min) as total_minutes, count as flare_events by site_id, day\n| eval hours_flared=round(total_minutes/60,2)\n| sort - hours_flared\n| table site_id, day, flare_events, hours_flared\n```\n\nUnderstanding this SPL\n\n**Flare Stack Event Correlation and Emissions Tracking** — Flare duration and intensity tracking supports environmental reporting and identifies operational events causing excessive flaring.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"flare:event\"` fields `flare_id`, `duration_min`, `rate_mmscfd`, `site_id`. **App/TA** (typical add-on context): Flare meter / CEMS via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: flare:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"flare:event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **day** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by site_id, day** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_flared** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Flare Stack Event Correlation and Emissions Tracking**): table site_id, day, flare_events, hours_flared\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (flare hours), Stacked bar by site, Correlation panel with permit limits.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch flare events and related trends. We help you connect operations to reporting before emissions or flaring issues surprise leadership.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Oil, Gas, and Mining",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.5.5",
              "n": "Mineral Processing Throughput Optimization",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracking tons per hour against targets balances feed rate with crusher and mill constraints, improving yield and reducing energy waste.",
              "t": "DCS/historian via HEC",
              "d": "`index=ot` `sourcetype=\"process:throughput\"` fields `line_id`, `tph`, `target_tph`",
              "q": "index=ot sourcetype=\"process:throughput\"\n| eval rate_ratio=round(tph/nullif(target_tph,0)*100,1)\n| timechart span=15m avg(tph) as avg_tph, avg(rate_ratio) as pct_of_target by line_id",
              "m": "Align tph with shift plans. Alert when sustained underperformance indicates blockage or wear. Share outputs with metallurgy and maintenance planning. **Domain context:** Comminution circuits are energy-intensive; throughput drops may correlate with ore hardness—join geology blends when available. **Splunk:** `timechart` outputs are visualization-friendly; add `where` in a post-process for alerting on `pct_of_target`.",
              "z": "Time chart (tph vs target), Gauge (utilization), Bar by shift.",
              "kfp": "Throughput noise while ore grade shifts, water chemistry changes, or a float cell is isolated for wear-part replacement the metallurgist expected.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DCS/historian via HEC.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"process:throughput\"` fields `line_id`, `tph`, `target_tph`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign tph with shift plans. Alert when sustained underperformance indicates blockage or wear. Share outputs with metallurgy and maintenance planning. **Domain context:** Comminution circuits are energy-intensive; throughput drops may correlate with ore hardness—join geology blends when available. **Splunk:** `timechart` outputs are visualization-friendly; add `where` in a post-process for alerting on `pct_of_target`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"process:throughput\"\n| eval rate_ratio=round(tph/nullif(target_tph,0)*100,1)\n| timechart span=15m avg(tph) as avg_tph, avg(rate_ratio) as pct_of_target by line_id\n```\n\nUnderstanding this SPL\n\n**Mineral Processing Throughput Optimization** — Tracking tons per hour against targets balances feed rate with crusher and mill constraints, improving yield and reducing energy waste.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"process:throughput\"` fields `line_id`, `tph`, `target_tph`. **App/TA** (typical add-on context): DCS/historian via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: process:throughput. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"process:throughput\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **rate_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by line_id** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (tph vs target), Gauge (utilization), Bar by shift.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch throughput through mills and concentrators. We help you catch bottlenecks before daily metal targets slip.",
              "mtype": [
                "Performance"
              ],
              "ind": "Oil, Gas, and Mining",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.5.6",
              "n": "Haul Truck Fleet Utilization and Payload Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Measuring truck active hours and payload improves fleet sizing, load-pass matching, and reduces per-ton haulage costs.",
              "t": "Fleet management / onboard weighing via HEC",
              "d": "`index=ot` `sourcetype=\"haultruck:telematics\"` fields `truck_id`, `payload_ton`, `engine_hours`, `loaded`",
              "q": "index=ot sourcetype=\"haultruck:telematics\"\n| where loaded=1 OR lower(loaded)=\"true\"\n| stats sum(payload_ton) as total_payload, sum(engine_hours) as total_hours by truck_id\n| eval tons_per_hour=round(total_payload/nullif(total_hours,0),1)\n| sort - total_payload\n| table truck_id, total_payload, total_hours, tons_per_hour",
              "m": "Ingest load cycles with valid payload. Reconcile with scale house periodically. Use engine hours for maintenance scheduling alongside production KPIs. **Domain context:** Mining production metrics often reconcile truck counts with shovel/fleet management systems—empty haul vs loaded haul should be modeled. **Splunk:** `loaded=1 OR lower(loaded)=\"true\"` covers string/boolean variants; confirm with telematics vendor.",
              "z": "Bar chart (payload by truck), Scatter (hours vs tons), Fleet summary.",
              "kfp": "Utilization or payload noise during road bans, shift changes, or crusher downtime that the mine planner offset with alternate benches the same day.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Fleet management / onboard weighing via HEC.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"haultruck:telematics\"` fields `truck_id`, `payload_ton`, `engine_hours`, `loaded`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest load cycles with valid payload. Reconcile with scale house periodically. Use engine hours for maintenance scheduling alongside production KPIs. **Domain context:** Mining production metrics often reconcile truck counts with shovel/fleet management systems—empty haul vs loaded haul should be modeled. **Splunk:** `loaded=1 OR lower(loaded)=\"true\"` covers string/boolean variants; confirm with telematics vendor.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"haultruck:telematics\"\n| where loaded=1 OR lower(loaded)=\"true\"\n| stats sum(payload_ton) as total_payload, sum(engine_hours) as total_hours by truck_id\n| eval tons_per_hour=round(total_payload/nullif(total_hours,0),1)\n| sort - total_payload\n| table truck_id, total_payload, total_hours, tons_per_hour\n```\n\nUnderstanding this SPL\n\n**Haul Truck Fleet Utilization and Payload Tracking** — Measuring truck active hours and payload improves fleet sizing, load-pass matching, and reduces per-ton haulage costs.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"haultruck:telematics\"` fields `truck_id`, `payload_ton`, `engine_hours`, `loaded`. **App/TA** (typical add-on context): Fleet management / onboard weighing via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: haultruck:telematics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"haultruck:telematics\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where loaded=1 OR lower(loaded)=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by truck_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **tons_per_hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Haul Truck Fleet Utilization and Payload Tracking**): table truck_id, total_payload, total_hours, tons_per_hour\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (payload by truck), Scatter (hours vs tons), Fleet summary.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch haul truck loads and use. We help you catch underloads or idle time before cost per ton rises.",
              "mtype": [
                "Capacity"
              ],
              "ind": "Oil, Gas, and Mining",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.5.7",
              "n": "Drill Rig Sensor Health Monitoring",
              "c": "high",
              "f": "beginner",
              "v": "Drill instrumentation must stay online for safe and efficient drilling. Monitoring channel health catches failures before they impact operations.",
              "t": "Rig data logger via HEC",
              "d": "`index=ot` `sourcetype=\"drillrig:sensor\"` fields `rig_id`, `channel`, `status`, `value_age_sec`",
              "q": "index=ot sourcetype=\"drillrig:sensor\"\n| eval ok=if(status=\"ok\" AND value_age_sec < 30, 1, 0)\n| stats avg(ok) as health_frac by rig_id, channel\n| eval health_pct=round(health_frac*100,2)\n| where health_pct < 99\n| table rig_id, channel, health_pct",
              "m": "Tune value_age_sec threshold per channel type. Alert the rig supervisor when channels degrade. Align with planned rig maintenance. **Domain context:** Drilling instrumentation supports MWD/LWD and well control awareness—loss of WOB/RPM/torque channels can indicate surface or downhole sensor faults. **Splunk:** `stats avg(ok)` assumes evenly spaced samples—prefer `timechart` + fill for irregular logger intervals.",
              "z": "Matrix (rig × channel), Time chart (stale count), Table (bad channels).",
              "kfp": "Sensor health noise while mud pumps surge, WOB changes on interbedded formation, or while surface electrical noise is high during storms.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Rig data logger via HEC.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"drillrig:sensor\"` fields `rig_id`, `channel`, `status`, `value_age_sec`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTune value_age_sec threshold per channel type. Alert the rig supervisor when channels degrade. Align with planned rig maintenance. **Domain context:** Drilling instrumentation supports MWD/LWD and well control awareness—loss of WOB/RPM/torque channels can indicate surface or downhole sensor faults. **Splunk:** `stats avg(ok)` assumes evenly spaced samples—prefer `timechart` + fill for irregular logger intervals.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"drillrig:sensor\"\n| eval ok=if(status=\"ok\" AND value_age_sec < 30, 1, 0)\n| stats avg(ok) as health_frac by rig_id, channel\n| eval health_pct=round(health_frac*100,2)\n| where health_pct < 99\n| table rig_id, channel, health_pct\n```\n\nUnderstanding this SPL\n\n**Drill Rig Sensor Health Monitoring** — Drill instrumentation must stay online for safe and efficient drilling. Monitoring channel health catches failures before they impact operations.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"drillrig:sensor\"` fields `rig_id`, `channel`, `status`, `value_age_sec`. **App/TA** (typical add-on context): Rig data logger via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: drillrig:sensor. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"drillrig:sensor\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by rig_id, channel** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **health_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where health_pct < 99` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Drill Rig Sensor Health Monitoring**): table rig_id, channel, health_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (rig × channel), Time chart (stale count), Table (bad channels).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch drill-rig sensor feeds. We help you catch bad data or shocks before a hole or plan goes wrong.",
              "mtype": [
                "Availability"
              ],
              "ind": "Oil, Gas, and Mining",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.5.8",
              "n": "Safety Instrumented System Trip Event Analysis",
              "c": "critical",
              "f": "advanced",
              "v": "SIS trip analysis distinguishes nuisance trips from genuine demand, supporting PHA/MOC review and improving safety system reliability metrics.",
              "t": "SIS / ESD logs via HEC",
              "d": "`index=ot` `sourcetype=\"sis:trip\"` fields `loop_id`, `trip_cause`, `demand_type`",
              "q": "index=ot sourcetype=\"sis:trip\"\n| stats count as trips, dc(loop_id) as loops_affected, values(trip_cause) as causes by demand_type\n| sort - trips\n| table demand_type, trips, loops_affected, causes",
              "m": "Apply strict change control on parsers. Use for post-event analysis and trending trip rates per loop. Never bypass safety systems from analytics. **Domain context:** IEC 61511 / ISA-84 lifecycle applies to SIS; nuisance vs demand trips should be classified for SIL verification—Splunk supports investigation, not SIL calculations alone. **Splunk:** Restrict `sis:trip` data; partner with functional safety team on definitions of `demand_type`.",
              "z": "Bar chart (trips by cause), Timeline, Pareto of loops.",
              "kfp": "Trip review noise during SIF proof tests, process demand upsets, or manual bypass windows performed under a management-of-change with full paperwork.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SIS / ESD logs via HEC.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"sis:trip\"` fields `loop_id`, `trip_cause`, `demand_type`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nApply strict change control on parsers. Use for post-event analysis and trending trip rates per loop. Never bypass safety systems from analytics. **Domain context:** IEC 61511 / ISA-84 lifecycle applies to SIS; nuisance vs demand trips should be classified for SIL verification—Splunk supports investigation, not SIL calculations alone. **Splunk:** Restrict `sis:trip` data; partner with functional safety team on definitions of `demand_type`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"sis:trip\"\n| stats count as trips, dc(loop_id) as loops_affected, values(trip_cause) as causes by demand_type\n| sort - trips\n| table demand_type, trips, loops_affected, causes\n```\n\nUnderstanding this SPL\n\n**Safety Instrumented System Trip Event Analysis** — SIS trip analysis distinguishes nuisance trips from genuine demand, supporting PHA/MOC review and improving safety system reliability metrics.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"sis:trip\"` fields `loop_id`, `trip_cause`, `demand_type`. **App/TA** (typical add-on context): SIS / ESD logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: sis:trip. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"sis:trip\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by demand_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Safety Instrumented System Trip Event Analysis**): table demand_type, trips, loops_affected, causes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (trips by cause), Timeline, Pareto of loops.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch safety-system trips. We help you tell real hazards from nuisance before teams lose faith in shutdowns.",
              "mtype": [
                "Fault"
              ],
              "ind": "Oil, Gas, and Mining",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.5.9",
              "n": "Environmental Compliance Effluent Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Tracking effluent parameters against permit limits enables timely corrective action and audit-ready documentation for environmental regulations.",
              "t": "LIMS / online analyzer via HEC",
              "d": "`index=ot` `sourcetype=\"effluent:monitor\"` fields `outfall_id`, `parameter`, `value_mg_l`, `limit_mg_l`",
              "q": "index=ot sourcetype=\"effluent:monitor\"\n| eval exceed=if(value_mg_l > limit_mg_l, 1, 0)\n| where exceed=1\n| stats earliest(_time) as first_exceed, max(value_mg_l) as peak_value by outfall_id, parameter\n| table outfall_id, parameter, first_exceed, peak_value, limit_mg_l",
              "m": "Align sampling frequency with permit requirements. Add separate searches for daily max vs monthly average if your permit uses both formats. **Domain context:** NPDES-type permits specify limits, averaging periods, and bypass reporting—mirror legal definitions in Splunk (instantaneous vs composite samples). **Splunk:** `exceed=1` per row may over-count—use duration above limit for chronic breaches.",
              "z": "Time chart (value vs limit), Alert table, Gauge (margin to limit).",
              "kfp": "Effluent blips on composite vs grab sampling differences, diurnal plant load, or lab hold times during chain-of-custody handoffs the permit still covers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: LIMS / online analyzer via HEC.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"effluent:monitor\"` fields `outfall_id`, `parameter`, `value_mg_l`, `limit_mg_l`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign sampling frequency with permit requirements. Add separate searches for daily max vs monthly average if your permit uses both formats. **Domain context:** NPDES-type permits specify limits, averaging periods, and bypass reporting—mirror legal definitions in Splunk (instantaneous vs composite samples). **Splunk:** `exceed=1` per row may over-count—use duration above limit for chronic breaches.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"effluent:monitor\"\n| eval exceed=if(value_mg_l > limit_mg_l, 1, 0)\n| where exceed=1\n| stats earliest(_time) as first_exceed, max(value_mg_l) as peak_value by outfall_id, parameter\n| table outfall_id, parameter, first_exceed, peak_value, limit_mg_l\n```\n\nUnderstanding this SPL\n\n**Environmental Compliance Effluent Monitoring** — Tracking effluent parameters against permit limits enables timely corrective action and audit-ready documentation for environmental regulations.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"effluent:monitor\"` fields `outfall_id`, `parameter`, `value_mg_l`, `limit_mg_l`. **App/TA** (typical add-on context): LIMS / online analyzer via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: effluent:monitor. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"effluent:monitor\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **exceed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where exceed=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by outfall_id, parameter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Environmental Compliance Effluent Monitoring**): table outfall_id, parameter, first_exceed, peak_value, limit_mg_l\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (value vs limit), Alert table, Gauge (margin to limit).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch effluent and permit limits. We help you catch excursions before environmental penalties or public concern.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Oil, Gas, and Mining",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.5.10",
              "n": "Tank Farm Level Monitoring and Overflow Prevention",
              "c": "critical",
              "f": "intermediate",
              "v": "Tank overfills cause environmental incidents and safety hazards. Monitoring levels and fill rates enables proactive response and inventory reconciliation.",
              "t": "Tank gauging / DCS via HEC",
              "d": "`index=ot` `sourcetype=\"tankfarm:level\"` fields `tank_id`, `level_pct`, `high_alarm_pct`, `fill_rate_m3_h`",
              "q": "index=ot sourcetype=\"tankfarm:level\"\n| eval risk=if(level_pct >= high_alarm_pct OR (level_pct > 85 AND fill_rate_m3_h > 0), 1, 0)\n| where risk=1\n| stats latest(level_pct) as current_level, latest(fill_rate_m3_h) as fill_rate by tank_id\n| sort - current_level\n| table tank_id, current_level, fill_rate",
              "m": "Keep primary alarms in BPCS/SIS. Splunk provides visibility and trending. Add roof and temperature for floating-roof tanks if available. **Domain context:** API 2350 / EEMUA 159 inform tank overfill prevention; high-high alarms and independent layers remain on the SIS—Splunk is supplementary visibility. **Splunk:** `eval risk` thresholds should match alarm setpoints from the tank table, not arbitrary constants, for consistency.",
              "z": "Tank-style gauge, Time chart (level), Trend (fill rate).",
              "kfp": "Level noise from temperature stratification, agitation after transfer, or radar calibration the tank farm supervisor validated against the manual gauge.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tank gauging / DCS via HEC.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"tankfarm:level\"` fields `tank_id`, `level_pct`, `high_alarm_pct`, `fill_rate_m3_h`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nKeep primary alarms in BPCS/SIS. Splunk provides visibility and trending. Add roof and temperature for floating-roof tanks if available. **Domain context:** API 2350 / EEMUA 159 inform tank overfill prevention; high-high alarms and independent layers remain on the SIS—Splunk is supplementary visibility. **Splunk:** `eval risk` thresholds should match alarm setpoints from the tank table, not arbitrary constants, for consistency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"tankfarm:level\"\n| eval risk=if(level_pct >= high_alarm_pct OR (level_pct > 85 AND fill_rate_m3_h > 0), 1, 0)\n| where risk=1\n| stats latest(level_pct) as current_level, latest(fill_rate_m3_h) as fill_rate by tank_id\n| sort - current_level\n| table tank_id, current_level, fill_rate\n```\n\nUnderstanding this SPL\n\n**Tank Farm Level Monitoring and Overflow Prevention** — Tank overfills cause environmental incidents and safety hazards. Monitoring levels and fill rates enables proactive response and inventory reconciliation.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"tankfarm:level\"` fields `tank_id`, `level_pct`, `high_alarm_pct`, `fill_rate_m3_h`. **App/TA** (typical add-on context): Tank gauging / DCS via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: tankfarm:level. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"tankfarm:level\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where risk=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by tank_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Tank Farm Level Monitoring and Overflow Prevention**): table tank_id, current_level, fill_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Tank-style gauge, Time chart (level), Trend (fill rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch tank levels against safe bands. We help you catch overfill or empty risks before a loss of containment.",
              "mtype": [
                "Fault"
              ],
              "ind": "Oil, Gas, and Mining",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.5.11",
              "n": "Cathodic Protection System Integrity",
              "c": "medium",
              "f": "intermediate",
              "v": "Pipe-to-soil potentials and rectifier state indicate whether corrosion protection is effective along pipelines, supporting regulatory compliance and asset integrity.",
              "t": "CP remote monitoring via HEC",
              "d": "`index=ot` `sourcetype=\"cp:reading\"` fields `test_point_id`, `potential_v`, `min_protect_v`, `rectifier_on`",
              "q": "index=ot sourcetype=\"cp:reading\"\n| where rectifier_on=1\n| eval protected=if(potential_v <= min_protect_v, 1, 0)\n| stats avg(protected) as pct_protected by test_point_id\n| eval pct_protected=round(pct_protected*100,2)\n| where pct_protected < 95\n| table test_point_id, pct_protected",
              "m": "Confirm sign convention for your CP system. Alert on sustained under-protection. Survey intervals may be daily. **Domain context:** NACE/ISO standards define CP criteria (e.g. −850 mV CSE for steel)—criteria vary by coating and environment; engineering must validate `min_protect_v`. **Splunk:** `pct_protected` from `avg(protected)` assumes one row per interval per test point—verify sampling frequency.",
              "z": "Map (test points), Time chart (potential), Table (under-protected sites).",
              "kfp": "CP noise after rectifier service, anode field repairs, or seasonal soil moisture shifts that the corrosion engineer models as seasonal variation.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CP remote monitoring via HEC.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"cp:reading\"` fields `test_point_id`, `potential_v`, `min_protect_v`, `rectifier_on`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nConfirm sign convention for your CP system. Alert on sustained under-protection. Survey intervals may be daily. **Domain context:** NACE/ISO standards define CP criteria (e.g. −850 mV CSE for steel)—criteria vary by coating and environment; engineering must validate `min_protect_v`. **Splunk:** `pct_protected` from `avg(protected)` assumes one row per interval per test point—verify sampling frequency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"cp:reading\"\n| where rectifier_on=1\n| eval protected=if(potential_v <= min_protect_v, 1, 0)\n| stats avg(protected) as pct_protected by test_point_id\n| eval pct_protected=round(pct_protected*100,2)\n| where pct_protected < 95\n| table test_point_id, pct_protected\n```\n\nUnderstanding this SPL\n\n**Cathodic Protection System Integrity** — Pipe-to-soil potentials and rectifier state indicate whether corrosion protection is effective along pipelines, supporting regulatory compliance and asset integrity.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"cp:reading\"` fields `test_point_id`, `potential_v`, `min_protect_v`, `rectifier_on`. **App/TA** (typical add-on context): CP remote monitoring via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: cp:reading. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"cp:reading\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where rectifier_on=1` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **protected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by test_point_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct_protected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct_protected < 95` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cathodic Protection System Integrity**): table test_point_id, pct_protected\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (test points), Time chart (potential), Table (under-protected sites).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch pipe cathodic protection readings. We help you catch coating or rectifier issues before steel corrodes.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Oil, Gas, and Mining",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.5.12",
              "n": "Seismic Monitoring Data Quality Validation",
              "c": "medium",
              "f": "advanced",
              "v": "Verifying seismic trace completeness and signal-to-noise ratio ensures monitoring programs meet technical specifications for reliable subsurface analysis.",
              "t": "Seismic acquisition system via HEC",
              "d": "`index=ot` `sourcetype=\"seismic:data\"` fields `station_id`, `samples_expected`, `samples_received`, `snr_db`",
              "q": "index=ot sourcetype=\"seismic:data\"\n| eval completeness=round(samples_received/nullif(samples_expected,0)*100,2)\n| eval quality_ok=if(completeness >= 99 AND snr_db >= 10, 1, 0)\n| stats avg(quality_ok) as pass_rate by station_id\n| eval pass_pct=round(pass_rate*100,2)\n| where pass_pct < 98\n| table station_id, pass_pct",
              "m": "Baseline SNR thresholds by site noise. Alert on missing traces or repeated gaps. Support field crew dispatch for sensor issues. **Domain context:** Seismic acquisition QC ties to contract specifications for mineral exploration or induced seismicity monitoring—completeness and SNR thresholds are survey-specific. **Splunk:** `pass_rate` from `avg(quality_ok)` needs representative event granularity; dedupe traces if ingested per file.",
              "z": "Time chart (completeness), SNR distribution, Station ranking table.",
              "kfp": "Seismic quality gaps during array maintenance, calibration blasts, high cultural noise, or temporary station removal the monitoring contractor logged.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Seismic acquisition system via HEC.\n• Ensure the following data sources are available: `index=ot` `sourcetype=\"seismic:data\"` fields `station_id`, `samples_expected`, `samples_received`, `snr_db`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline SNR thresholds by site noise. Alert on missing traces or repeated gaps. Support field crew dispatch for sensor issues. **Domain context:** Seismic acquisition QC ties to contract specifications for mineral exploration or induced seismicity monitoring—completeness and SNR thresholds are survey-specific. **Splunk:** `pass_rate` from `avg(quality_ok)` needs representative event granularity; dedupe traces if ingested per file.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"seismic:data\"\n| eval completeness=round(samples_received/nullif(samples_expected,0)*100,2)\n| eval quality_ok=if(completeness >= 99 AND snr_db >= 10, 1, 0)\n| stats avg(quality_ok) as pass_rate by station_id\n| eval pass_pct=round(pass_rate*100,2)\n| where pass_pct < 98\n| table station_id, pass_pct\n```\n\nUnderstanding this SPL\n\n**Seismic Monitoring Data Quality Validation** — Verifying seismic trace completeness and signal-to-noise ratio ensures monitoring programs meet technical specifications for reliable subsurface analysis.\n\nDocumented **Data sources**: `index=ot` `sourcetype=\"seismic:data\"` fields `station_id`, `samples_expected`, `samples_received`, `snr_db`. **App/TA** (typical add-on context): Seismic acquisition system via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: seismic:data. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"seismic:data\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **completeness** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **quality_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by station_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pass_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pass_pct < 98` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Seismic Monitoring Data Quality Validation**): table station_id, pass_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Time chart (completeness), SNR distribution, Station ranking table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch seismic and blast sensor quality. We help you catch noise or dropouts before compliance or safety models fail.",
              "mtype": [
                "Availability"
              ],
              "ind": "Oil, Gas, and Mining",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 12,
            "none": 0
          }
        },
        {
          "i": "21.6",
          "n": "Retail and E-Commerce Operations",
          "u": [
            {
              "i": "21.6.1",
              "n": "POS Terminal Transaction Response Time Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Slow POS responses frustrate customers and extend queue times; tracking latency by terminal and store helps isolate network, host, or payment-processor issues before they impact peak-hour throughput.",
              "t": "Custom HEC (POS gateway / payment middleware)",
              "d": "`index=retail` `sourcetype=\"pos:transaction\"` (store_id, terminal_id, response_ms, txn_status)",
              "q": "index=retail sourcetype=\"pos:transaction\"\n| bin _time span=5m\n| stats avg(response_ms) as avg_ms, perc95(response_ms) as p95_ms, count as txn_count by store_id, terminal_id, _time\n| where avg_ms > 800 OR p95_ms > 2000\n| eval breach=if(p95_ms>2000,\"p95\",\"avg\")\n| table _time, store_id, terminal_id, avg_ms, p95_ms, txn_count, breach",
              "m": "Ingest authorization round-trip times from the POS middleware or switch logs via HEC; normalize milliseconds and exclude void-only events. Schedule alerts for sustained breaches and drill down by VLAN or terminal firmware version using optional lookups. **Domain context:** Card-present authorization latency ties to EMV kernel, host connectivity, and store LAN—PCI DSS scope still applies to log collection (segment networks, minimize PAN in logs). **Splunk:** Exclude or tokenize cardholder data; index `terminal_id` + `store_id` only.",
              "z": "Time chart (avg vs p95 by terminal), Heatmap (store × hour), Table (worst terminals).",
              "kfp": "POS slow-downs during cash-drawer counts, training transactions, gift-card promotions, or payment-host maintenance the store team posted on the status whiteboard.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (POS gateway / payment middleware).\n• Ensure the following data sources are available: `index=retail` `sourcetype=\"pos:transaction\"` (store_id, terminal_id, response_ms, txn_status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest authorization round-trip times from the POS middleware or switch logs via HEC; normalize milliseconds and exclude void-only events. Schedule alerts for sustained breaches and drill down by VLAN or terminal firmware version using optional lookups. **Domain context:** Card-present authorization latency ties to EMV kernel, host connectivity, and store LAN—PCI DSS scope still applies to log collection (segment networks, minimize PAN in logs). **Splunk:** Exclude or tokenize cardholder data; i…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"pos:transaction\"\n| bin _time span=5m\n| stats avg(response_ms) as avg_ms, perc95(response_ms) as p95_ms, count as txn_count by store_id, terminal_id, _time\n| where avg_ms > 800 OR p95_ms > 2000\n| eval breach=if(p95_ms>2000,\"p95\",\"avg\")\n| table _time, store_id, terminal_id, avg_ms, p95_ms, txn_count, breach\n```\n\nUnderstanding this SPL\n\n**POS Terminal Transaction Response Time Monitoring** — Slow POS responses frustrate customers and extend queue times; tracking latency by terminal and store helps isolate network, host, or payment-processor issues before they impact peak-hour throughput.\n\nDocumented **Data sources**: `index=retail` `sourcetype=\"pos:transaction\"` (store_id, terminal_id, response_ms, txn_status). **App/TA** (typical add-on context): Custom HEC (POS gateway / payment middleware). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: pos:transaction. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"pos:transaction\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by store_id, terminal_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_ms > 800 OR p95_ms > 2000` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **POS Terminal Transaction Response Time Monitoring**): table _time, store_id, terminal_id, avg_ms, p95_ms, txn_count, breach\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (avg vs p95 by terminal), Heatmap (store × hour), Table (worst terminals).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch checkout and payment response times. We help you catch slowness before lines back up and sales walk away.",
              "mtype": [
                "Performance",
                "Availability"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.6.2",
              "n": "Self-Checkout Lane Availability and Error Rate",
              "c": "high",
              "f": "intermediate",
              "v": "High self-checkout error rates drive attendant interventions and shrink throughput; correlating lane state with error codes prioritizes hardware refresh and software fixes.",
              "t": "Custom HEC (SCO application / kiosk telemetry)",
              "d": "`index=retail` `sourcetype=\"selfcheckout:event\"` (store_id, lane_id, event_type, error_code)",
              "q": "index=retail sourcetype=\"selfcheckout:event\"\n| bin _time span=15m\n| stats count as total_ev, sum(eval(if(event_type=\"error\" OR (isnotnull(error_code) AND error_code!=\"\" AND error_code!=\"-\"),1,0))) as err_ev by store_id, lane_id, _time\n| eval error_rate=round(100*err_ev/nullif(total_ev,0),2)\n| where error_rate > 8 OR total_ev < 5\n| table _time, store_id, lane_id, total_ev, err_ev, error_rate",
              "m": "Map kiosk heartbeat and transaction error streams into a single sourcetype; use `error_code` only when present. Tune thresholds by store format (grocery vs big-box). Route alerts to store ops and vendor support queues. **Domain context:** SCO interventions are a major labor and shrink discussion point—high `error_rate` often correlates with scale, age-gate, or item recognition issues. **Splunk:** `total_ev < 5` flags low-traffic lanes—tune or filter to avoid noise on quiet stores.",
              "z": "Bar chart (error rate by lane), Stacked area (errors vs total events), Single value (lanes over threshold).",
              "kfp": "SCOs closed for firmware, coin-tower service, endcap remodels, or high attendant interventions during restricted sales the front-end lead expected.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (SCO application / kiosk telemetry).\n• Ensure the following data sources are available: `index=retail` `sourcetype=\"selfcheckout:event\"` (store_id, lane_id, event_type, error_code).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap kiosk heartbeat and transaction error streams into a single sourcetype; use `error_code` only when present. Tune thresholds by store format (grocery vs big-box). Route alerts to store ops and vendor support queues. **Domain context:** SCO interventions are a major labor and shrink discussion point—high `error_rate` often correlates with scale, age-gate, or item recognition issues. **Splunk:** `total_ev < 5` flags low-traffic lanes—tune or filter to avoid noise on quiet stores.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"selfcheckout:event\"\n| bin _time span=15m\n| stats count as total_ev, sum(eval(if(event_type=\"error\" OR (isnotnull(error_code) AND error_code!=\"\" AND error_code!=\"-\"),1,0))) as err_ev by store_id, lane_id, _time\n| eval error_rate=round(100*err_ev/nullif(total_ev,0),2)\n| where error_rate > 8 OR total_ev < 5\n| table _time, store_id, lane_id, total_ev, err_ev, error_rate\n```\n\nUnderstanding this SPL\n\n**Self-Checkout Lane Availability and Error Rate** — High self-checkout error rates drive attendant interventions and shrink throughput; correlating lane state with error codes prioritizes hardware refresh and software fixes.\n\nDocumented **Data sources**: `index=retail` `sourcetype=\"selfcheckout:event\"` (store_id, lane_id, event_type, error_code). **App/TA** (typical add-on context): Custom HEC (SCO application / kiosk telemetry). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: selfcheckout:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"selfcheckout:event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by store_id, lane_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **error_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where error_rate > 8 OR total_ev < 5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Self-Checkout Lane Availability and Error Rate**): table _time, store_id, lane_id, total_ev, err_ev, error_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (error rate by lane), Stacked area (errors vs total events), Single value (lanes over threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch self-checkout lanes and error rates. We help you catch broken lanes or high interventions before the front end chokes.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.6.3",
              "n": "In-Store Wi-Fi and Network Infrastructure Health",
              "c": "high",
              "f": "beginner",
              "v": "mPOS, guest Wi-Fi, and IoT rely on store LAN/WLAN; tracking AP association failures and controller health prevents silent outages during promotions.",
              "t": "Wi-Fi controller syslog / SNMP trap HEC",
              "d": "`index=retail` `sourcetype=\"wifi:controller\"` (store_id, ap_name, client_count, assoc_failures, cpu_pct)",
              "q": "index=retail sourcetype=\"wifi:controller\"\n| bin _time span=5m\n| stats sum(assoc_failures) as fails, avg(client_count) as avg_clients, max(cpu_pct) as max_cpu by store_id, ap_name, _time\n| eval fail_rate=if(avg_clients>0, round(100*fails/avg_clients,2), fails)\n| where fails > 20 OR max_cpu > 85 OR fail_rate > 2\n| table _time, store_id, ap_name, fails, avg_clients, max_cpu, fail_rate",
              "m": "Normalize vendor-specific fields in props/transforms; retain `ap_name` and `store_id` on every sample. Alert on controller CPU and on APs with repeated association failures compared to peers in the same store. **Domain context:** Guest Wi-Fi and mPOS share airtime—high `assoc_failures` during promotions may be RF contention, not controller fault. **Splunk:** `fail_rate` uses fails per avg_clients—validate denominator when client counts are sampled sparsely.",
              "z": "Time chart (assoc failures), Status grid (AP health), Line chart (controller CPU).",
              "kfp": "Wi-Fi noise during controller upgrades, high guest load, or regional VLAN work that the store network change window names on the NOC calendar.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Wi-Fi controller syslog / SNMP trap HEC.\n• Ensure the following data sources are available: `index=retail` `sourcetype=\"wifi:controller\"` (store_id, ap_name, client_count, assoc_failures, cpu_pct).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize vendor-specific fields in props/transforms; retain `ap_name` and `store_id` on every sample. Alert on controller CPU and on APs with repeated association failures compared to peers in the same store. **Domain context:** Guest Wi-Fi and mPOS share airtime—high `assoc_failures` during promotions may be RF contention, not controller fault. **Splunk:** `fail_rate` uses fails per avg_clients—validate denominator when client counts are sampled sparsely.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"wifi:controller\"\n| bin _time span=5m\n| stats sum(assoc_failures) as fails, avg(client_count) as avg_clients, max(cpu_pct) as max_cpu by store_id, ap_name, _time\n| eval fail_rate=if(avg_clients>0, round(100*fails/avg_clients,2), fails)\n| where fails > 20 OR max_cpu > 85 OR fail_rate > 2\n| table _time, store_id, ap_name, fails, avg_clients, max_cpu, fail_rate\n```\n\nUnderstanding this SPL\n\n**In-Store Wi-Fi and Network Infrastructure Health** — mPOS, guest Wi-Fi, and IoT rely on store LAN/WLAN; tracking AP association failures and controller health prevents silent outages during promotions.\n\nDocumented **Data sources**: `index=retail` `sourcetype=\"wifi:controller\"` (store_id, ap_name, client_count, assoc_failures, cpu_pct). **App/TA** (typical add-on context): Wi-Fi controller syslog / SNMP trap HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: wifi:controller. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"wifi:controller\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by store_id, ap_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fails > 20 OR max_cpu > 85 OR fail_rate > 2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **In-Store Wi-Fi and Network Infrastructure Health**): table _time, store_id, ap_name, fails, avg_clients, max_cpu, fail_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Time chart (assoc failures), Status grid (AP health), Line chart (controller CPU).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch in-store Wi-Fi and branch network health. We help you catch pain before staff devices and hand scanners fail.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [
                "snmp",
                "syslog"
              ],
              "em": [
                "snmp_generic"
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.6.4",
              "n": "Foot Traffic Analytics",
              "c": "medium",
              "f": "intermediate",
              "v": "People-counting trends validate staffing and layout changes; anomaly detection on ingress rates highlights sensor drift or blocked entrances.",
              "t": "Retail IoT / people-counting platform via HEC",
              "d": "`index=retail` `sourcetype=\"foottraffic:sensor\"` (store_id, zone_id, inbound_count, outbound_count)",
              "q": "index=retail sourcetype=\"foottraffic:sensor\"\n| eval net_flow=inbound_count-outbound_count\n| bin _time span=1h\n| stats sum(inbound_count) as entries, sum(outbound_count) as exits by store_id, zone_id, _time\n| eval dow=strftime(_time,\"%w\"), hr=strftime(_time,\"%H\")\n| eventstats median(entries) as med_ent by store_id, zone_id, dow, hr\n| eval ratio=if(med_ent>0, entries/med_ent, null)\n| where ratio < 0.5 OR ratio > 1.8\n| table _time, store_id, zone_id, entries, exits, med_ent, ratio",
              "m": "Align counts to store local time and exclude maintenance windows via a lookup. Compare same day-of-week baselines; investigate zones with sudden drops (blocked sensor) or spikes (configuration error). **Domain context:** People-counting is used for labor scheduling and lease performance—GDPR/CCPA may apply if combined with identifiers; use aggregates only. **Splunk:** `eventstats median` needs several weeks of history for stable `med_ent` per `dow`/`hr`.",
              "z": "Time chart (entries by zone), Heatmap (store × hour), Bar chart (week-over-week delta).",
              "kfp": "Traffic dips or surges from local events, school holidays, set-hour changes, or camera masking during remodel that field marketing already factored in.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Retail IoT / people-counting platform via HEC.\n• Ensure the following data sources are available: `index=retail` `sourcetype=\"foottraffic:sensor\"` (store_id, zone_id, inbound_count, outbound_count).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign counts to store local time and exclude maintenance windows via a lookup. Compare same day-of-week baselines; investigate zones with sudden drops (blocked sensor) or spikes (configuration error). **Domain context:** People-counting is used for labor scheduling and lease performance—GDPR/CCPA may apply if combined with identifiers; use aggregates only. **Splunk:** `eventstats median` needs several weeks of history for stable `med_ent` per `dow`/`hr`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"foottraffic:sensor\"\n| eval net_flow=inbound_count-outbound_count\n| bin _time span=1h\n| stats sum(inbound_count) as entries, sum(outbound_count) as exits by store_id, zone_id, _time\n| eval dow=strftime(_time,\"%w\"), hr=strftime(_time,\"%H\")\n| eventstats median(entries) as med_ent by store_id, zone_id, dow, hr\n| eval ratio=if(med_ent>0, entries/med_ent, null)\n| where ratio < 0.5 OR ratio > 1.8\n| table _time, store_id, zone_id, entries, exits, med_ent, ratio\n```\n\nUnderstanding this SPL\n\n**Foot Traffic Analytics** — People-counting trends validate staffing and layout changes; anomaly detection on ingress rates highlights sensor drift or blocked entrances.\n\nDocumented **Data sources**: `index=retail` `sourcetype=\"foottraffic:sensor\"` (store_id, zone_id, inbound_count, outbound_count). **App/TA** (typical add-on context): Retail IoT / people-counting platform via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: foottraffic:sensor. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"foottraffic:sensor\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **net_flow** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by store_id, zone_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dow** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by store_id, zone_id, dow, hr** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ratio < 0.5 OR ratio > 1.8` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Foot Traffic Analytics**): table _time, store_id, zone_id, entries, exits, med_ent, ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (entries by zone), Heatmap (store × hour), Bar chart (week-over-week delta).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch foot traffic patterns in stores. We help you staff and stock smarter before big sales and queues surprise you.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.6.5",
              "n": "Click-and-Collect Order Fulfillment Cycle Time",
              "c": "high",
              "f": "intermediate",
              "v": "BOPIS promises a pickup window; measuring pick-to-ready time exposes backlog in back-of-house systems and reduces customer wait complaints.",
              "t": "OMS / store fulfillment app via HEC",
              "d": "`index=retail` `sourcetype=\"bopis:order\"` (order_id, store_id, placed_epoch, ready_epoch, status)",
              "q": "index=retail sourcetype=\"bopis:order\" status=\"fulfilled\"\n| eval cycle_sec=ready_epoch-placed_epoch\n| where isnotnull(cycle_sec) AND cycle_sec > 0\n| bin _time span=1h\n| stats avg(cycle_sec) as avg_sec, perc95(cycle_sec) as p95_sec, count as orders by store_id, _time\n| eval sla_breach=if(p95_sec > 3600 OR avg_sec > 2400, 1, 0)\n| where sla_breach=1\n| eval avg_min=round(avg_sec/60,1), p95_min=round(p95_sec/60,1)\n| table _time, store_id, orders, avg_min, p95_min",
              "m": "Ensure `placed_epoch` and `ready_epoch` share a common clock (UTC). Join `order_id` to cancellation reasons in a separate search if needed. Tune SLA minutes per retail banner; alert operations when p95 exceeds the published pickup promise. **Domain context:** BOPIS/curbside SLAs are competitive differentiators; peak days need dynamic staffing models not static thresholds. **Splunk:** Filter `status=\"fulfilled\"` excludes cancelled—add parallel panel for cancellations.",
              "z": "Histogram (cycle time distribution), Time chart (avg vs p95), Table (stores breaching SLA).",
              "kfp": "Pick delays from slot cutoffs, short picks, BOPIS code releases, or WMS jobs that the omni-ops bridge call explains as a known issue.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OMS / store fulfillment app via HEC.\n• Ensure the following data sources are available: `index=retail` `sourcetype=\"bopis:order\"` (order_id, store_id, placed_epoch, ready_epoch, status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure `placed_epoch` and `ready_epoch` share a common clock (UTC). Join `order_id` to cancellation reasons in a separate search if needed. Tune SLA minutes per retail banner; alert operations when p95 exceeds the published pickup promise. **Domain context:** BOPIS/curbside SLAs are competitive differentiators; peak days need dynamic staffing models not static thresholds. **Splunk:** Filter `status=\"fulfilled\"` excludes cancelled—add parallel panel for cancellations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"bopis:order\" status=\"fulfilled\"\n| eval cycle_sec=ready_epoch-placed_epoch\n| where isnotnull(cycle_sec) AND cycle_sec > 0\n| bin _time span=1h\n| stats avg(cycle_sec) as avg_sec, perc95(cycle_sec) as p95_sec, count as orders by store_id, _time\n| eval sla_breach=if(p95_sec > 3600 OR avg_sec > 2400, 1, 0)\n| where sla_breach=1\n| eval avg_min=round(avg_sec/60,1), p95_min=round(p95_sec/60,1)\n| table _time, store_id, orders, avg_min, p95_min\n```\n\nUnderstanding this SPL\n\n**Click-and-Collect Order Fulfillment Cycle Time** — BOPIS promises a pickup window; measuring pick-to-ready time exposes backlog in back-of-house systems and reduces customer wait complaints.\n\nDocumented **Data sources**: `index=retail` `sourcetype=\"bopis:order\"` (order_id, store_id, placed_epoch, ready_epoch, status). **App/TA** (typical add-on context): OMS / store fulfillment app via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: bopis:order. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"bopis:order\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cycle_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(cycle_sec) AND cycle_sec > 0` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by store_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **sla_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where sla_breach=1` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **avg_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Click-and-Collect Order Fulfillment Cycle Time**): table _time, store_id, orders, avg_min, p95_min\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (cycle time distribution), Time chart (avg vs p95), Table (stores breaching SLA).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch click-and-collect orders end to end. We help you catch delays before customers show up to empty hands.",
              "mtype": [
                "Performance"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.6.6",
              "n": "E-Commerce Platform Checkout Funnel Latency",
              "c": "high",
              "f": "advanced",
              "v": "Checkout step latency directly impacts cart abandonment; segmenting by step isolates payment gateway, tax service, or session store delays.",
              "t": "E-commerce APM / web tier logs via HEC",
              "d": "`index=retail` `sourcetype=\"ecom:checkout\"` (session_id, step_name, latency_ms, http_status)",
              "q": "index=retail sourcetype=\"ecom:checkout\"\n| where http_status < 400\n| bin _time span=5m\n| stats avg(latency_ms) as avg_ms, perc95(latency_ms) as p95_ms, count as n by step_name, _time\n| where p95_ms > 1500 OR avg_ms > 800\n| sort step_name, _time\n| table _time, step_name, n, avg_ms, p95_ms",
              "m": "Instrument each funnel step with consistent `step_name` values. Filter bots via a flag if present. Use side-by-side panels for web vs mobile app sessions if `channel` exists. **Domain context:** Checkout latency impacts cart abandonment; payment and tax steps often dominate—segment accordingly. **Splunk:** `http_status < 400` removes errors from latency stats—track 5xx in a companion search for completeness.",
              "z": "Time chart (p95 by step), Bar chart (step ranking), Funnel diagram (conversion counts, separate panel).",
              "kfp": "Checkout latency during CDN tests, A/B price experiments, bot traffic, or payment risk queues that the e-commerce status page is already tracking.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: E-commerce APM / web tier logs via HEC.\n• Ensure the following data sources are available: `index=retail` `sourcetype=\"ecom:checkout\"` (session_id, step_name, latency_ms, http_status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nInstrument each funnel step with consistent `step_name` values. Filter bots via a flag if present. Use side-by-side panels for web vs mobile app sessions if `channel` exists. **Domain context:** Checkout latency impacts cart abandonment; payment and tax steps often dominate—segment accordingly. **Splunk:** `http_status < 400` removes errors from latency stats—track 5xx in a companion search for completeness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"ecom:checkout\"\n| where http_status < 400\n| bin _time span=5m\n| stats avg(latency_ms) as avg_ms, perc95(latency_ms) as p95_ms, count as n by step_name, _time\n| where p95_ms > 1500 OR avg_ms > 800\n| sort step_name, _time\n| table _time, step_name, n, avg_ms, p95_ms\n```\n\nUnderstanding this SPL\n\n**E-Commerce Platform Checkout Funnel Latency** — Checkout step latency directly impacts cart abandonment; segmenting by step isolates payment gateway, tax service, or session store delays.\n\nDocumented **Data sources**: `index=retail` `sourcetype=\"ecom:checkout\"` (session_id, step_name, latency_ms, http_status). **App/TA** (typical add-on context): E-commerce APM / web tier logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: ecom:checkout. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"ecom:checkout\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where http_status < 400` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by step_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_ms > 1500 OR avg_ms > 800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **E-Commerce Platform Checkout Funnel Latency**): table _time, step_name, n, avg_ms, p95_ms\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (p95 by step), Bar chart (step ranking), Funnel diagram (conversion counts, separate panel).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch e-commerce checkout steps and delays. We help you catch drop-offs before online revenue leaks.",
              "mtype": [
                "Performance"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.6.7",
              "n": "Inventory Replenishment Trigger Accuracy",
              "c": "medium",
              "f": "intermediate",
              "v": "Reorder points that fire too late cause stockouts; false triggers inflate carrying costs. Comparing suggested orders to actual on-hand movement validates replenishment rules.",
              "t": "IMS / replenishment engine via HEC",
              "d": "`index=retail` `sourcetype=\"inventory:reorder\"` (sku_id, store_id, on_hand_qty, reorder_point, suggested_qty, trigger_time)",
              "q": "index=retail sourcetype=\"inventory:reorder\"\n| eval at_or_below=if(on_hand_qty <= reorder_point, 1, 0)\n| eval overshoot=if(on_hand_qty > reorder_point*1.5 AND suggested_qty > 0, 1, 0)\n| stats sum(at_or_below) as hits, sum(overshoot) as bad_triggers, count as evals by store_id\n| eval hit_rate=round(100*hits/nullif(evals,0),2), bad_pct=round(100*bad_triggers/nullif(evals,0),2)\n| where bad_pct > 10 OR hit_rate < 60\n| table store_id, evals, hits, hit_rate, bad_triggers, bad_pct",
              "m": "Snapshot replenishment evaluations daily or per batch run. Join promotional calendars to explain expected volatility. Feed findings to supply-chain analysts to tune safety stock parameters. **Domain context:** Inventory accuracy (cycle counting) feeds replenishment quality—bad on-hand data drives false `bad_triggers`. **Splunk:** `overshoot` heuristic is illustrative; align with your IMS logic for phantom suggestions.",
              "z": "Scatter (on_hand vs reorder_point), Bar chart (bad trigger % by store), Table (SKUs with repeated misfires).",
              "kfp": "Replenishment noise during seasonal allocation, new DC cut-over, or vendor fill delays the inventory planning team backfills with transfer orders.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: IMS / replenishment engine via HEC.\n• Ensure the following data sources are available: `index=retail` `sourcetype=\"inventory:reorder\"` (sku_id, store_id, on_hand_qty, reorder_point, suggested_qty, trigger_time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nSnapshot replenishment evaluations daily or per batch run. Join promotional calendars to explain expected volatility. Feed findings to supply-chain analysts to tune safety stock parameters. **Domain context:** Inventory accuracy (cycle counting) feeds replenishment quality—bad on-hand data drives false `bad_triggers`. **Splunk:** `overshoot` heuristic is illustrative; align with your IMS logic for phantom suggestions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"inventory:reorder\"\n| eval at_or_below=if(on_hand_qty <= reorder_point, 1, 0)\n| eval overshoot=if(on_hand_qty > reorder_point*1.5 AND suggested_qty > 0, 1, 0)\n| stats sum(at_or_below) as hits, sum(overshoot) as bad_triggers, count as evals by store_id\n| eval hit_rate=round(100*hits/nullif(evals,0),2), bad_pct=round(100*bad_triggers/nullif(evals,0),2)\n| where bad_pct > 10 OR hit_rate < 60\n| table store_id, evals, hits, hit_rate, bad_triggers, bad_pct\n```\n\nUnderstanding this SPL\n\n**Inventory Replenishment Trigger Accuracy** — Reorder points that fire too late cause stockouts; false triggers inflate carrying costs. Comparing suggested orders to actual on-hand movement validates replenishment rules.\n\nDocumented **Data sources**: `index=retail` `sourcetype=\"inventory:reorder\"` (sku_id, store_id, on_hand_qty, reorder_point, suggested_qty, trigger_time). **App/TA** (typical add-on context): IMS / replenishment engine via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: inventory:reorder. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"inventory:reorder\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **at_or_below** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overshoot** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by store_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hit_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where bad_pct > 10 OR hit_rate < 60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Inventory Replenishment Trigger Accuracy**): table store_id, evals, hits, hit_rate, bad_triggers, bad_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter (on_hand vs reorder_point), Bar chart (bad trigger % by store), Table (SKUs with repeated misfires).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch replenishment signals and accuracy. We help you catch wrong triggers before shelves go bare or overstocked.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.6.8",
              "n": "Store HVAC and Energy Consumption Optimization",
              "c": "medium",
              "f": "intermediate",
              "v": "HVAC anomalies increase energy spend and affect cold-chain adjacent zones; trending kWh against occupancy and outdoor air temperature supports sustainability KPIs.",
              "t": "BMS / smart meter HEC",
              "d": "`index=retail` `sourcetype=\"store:energy\"` (store_id, kwh, zone_temp_f, hvac_mode, oa_temp_f)",
              "q": "index=retail sourcetype=\"store:energy\"\n| bin _time span=1h\n| stats sum(kwh) as kwh_h, avg(zone_temp_f) as avg_temp, values(hvac_mode) as modes by store_id, _time\n| eval wday=strftime(_time,\"%w\"), hour=strftime(_time,\"%H\")\n| eventstats median(kwh_h) as med_kwh by store_id, wday, hour\n| eval energy_ratio=if(med_kwh>0, round(kwh_h/med_kwh,2), null)\n| where energy_ratio > 1.35 OR avg_temp < 65 OR avg_temp > 78\n| table _time, store_id, kwh_h, med_kwh, energy_ratio, avg_temp, modes",
              "m": "Align BMS points to store open hours via lookup. Exclude demand-response events if tagged. Pair with foot traffic from `foottraffic:sensor` in a dashboard for joint review. **Domain context:** Retail energy intensity (kWh/ft²) appears in sustainability reporting (GRESB, CDP)—document methodology. **Splunk:** `eventstats median(kwh_h)` by `wday`/`hour` needs seasonal refresh for weather shifts.",
              "z": "Time chart (kWh vs baseline), Line chart (zone temp vs OA temp), Bar chart (energy ratio by store).",
              "kfp": "HVAC and kWh noise during heat waves, setpoint or schedule changes, after-hours cleaning, or equipment commissioning the facilities runbook includes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BMS / smart meter HEC.\n• Ensure the following data sources are available: `index=retail` `sourcetype=\"store:energy\"` (store_id, kwh, zone_temp_f, hvac_mode, oa_temp_f).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign BMS points to store open hours via lookup. Exclude demand-response events if tagged. Pair with foot traffic from `foottraffic:sensor` in a dashboard for joint review. **Domain context:** Retail energy intensity (kWh/ft²) appears in sustainability reporting (GRESB, CDP)—document methodology. **Splunk:** `eventstats median(kwh_h)` by `wday`/`hour` needs seasonal refresh for weather shifts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"store:energy\"\n| bin _time span=1h\n| stats sum(kwh) as kwh_h, avg(zone_temp_f) as avg_temp, values(hvac_mode) as modes by store_id, _time\n| eval wday=strftime(_time,\"%w\"), hour=strftime(_time,\"%H\")\n| eventstats median(kwh_h) as med_kwh by store_id, wday, hour\n| eval energy_ratio=if(med_kwh>0, round(kwh_h/med_kwh,2), null)\n| where energy_ratio > 1.35 OR avg_temp < 65 OR avg_temp > 78\n| table _time, store_id, kwh_h, med_kwh, energy_ratio, avg_temp, modes\n```\n\nUnderstanding this SPL\n\n**Store HVAC and Energy Consumption Optimization** — HVAC anomalies increase energy spend and affect cold-chain adjacent zones; trending kWh against occupancy and outdoor air temperature supports sustainability KPIs.\n\nDocumented **Data sources**: `index=retail` `sourcetype=\"store:energy\"` (store_id, kwh, zone_temp_f, hvac_mode, oa_temp_f). **App/TA** (typical add-on context): BMS / smart meter HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: store:energy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"store:energy\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by store_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **wday** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by store_id, wday, hour** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **energy_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where energy_ratio > 1.35 OR avg_temp < 65 OR avg_temp > 78` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Store HVAC and Energy Consumption Optimization**): table _time, store_id, kwh_h, med_kwh, energy_ratio, avg_temp, modes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (kWh vs baseline), Line chart (zone temp vs OA temp), Bar chart (energy ratio by store).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch store HVAC and energy use. We help you catch waste before comfort and cost targets slip.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.6.9",
              "n": "Digital Signage Content Delivery Health",
              "c": "medium",
              "f": "beginner",
              "v": "Failed content pulls leave blank or stale screens during campaigns; monitoring download success and player heartbeat protects brand and promotional compliance.",
              "t": "Digital signage CMS / player agent via HEC",
              "d": "`index=retail` `sourcetype=\"signage:health\"` (store_id, player_id, content_id, download_status, last_sync_epoch)",
              "q": "index=retail sourcetype=\"signage:health\"\n| eval sync_age_sec=now()-last_sync_epoch\n| eval healthy=if(lower(download_status)=\"ok\" AND sync_age_sec < 900, 1, 0)\n| stats avg(healthy) as ok_frac by store_id, player_id\n| eval health_pct=round(ok_frac*100,2)\n| where health_pct < 95\n| table store_id, player_id, health_pct",
              "m": "Standardize `download_status` across vendors. Alert when players miss sync for longer than the campaign refresh interval. Group by region for NOC-style triage. **Domain context:** Promotional compliance often requires proof-of-play; failed syncs can breach vendor agreements in franchise networks. **Splunk:** `now()-last_sync_epoch` must use the same clock as the player—prefer player-reported epoch to search `now()`.",
              "z": "Status grid (player × store), Single value (unhealthy %), Timeline (failed downloads).",
              "kfp": "Signage health noise during content pushes, media-player firmware, firewall rule updates, or store-hour mismatches the signage team sees in the CMS.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Digital signage CMS / player agent via HEC.\n• Ensure the following data sources are available: `index=retail` `sourcetype=\"signage:health\"` (store_id, player_id, content_id, download_status, last_sync_epoch).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStandardize `download_status` across vendors. Alert when players miss sync for longer than the campaign refresh interval. Group by region for NOC-style triage. **Domain context:** Promotional compliance often requires proof-of-play; failed syncs can breach vendor agreements in franchise networks. **Splunk:** `now()-last_sync_epoch` must use the same clock as the player—prefer player-reported epoch to search `now()`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"signage:health\"\n| eval sync_age_sec=now()-last_sync_epoch\n| eval healthy=if(lower(download_status)=\"ok\" AND sync_age_sec < 900, 1, 0)\n| stats avg(healthy) as ok_frac by store_id, player_id\n| eval health_pct=round(ok_frac*100,2)\n| where health_pct < 95\n| table store_id, player_id, health_pct\n```\n\nUnderstanding this SPL\n\n**Digital Signage Content Delivery Health** — Failed content pulls leave blank or stale screens during campaigns; monitoring download success and player heartbeat protects brand and promotional compliance.\n\nDocumented **Data sources**: `index=retail` `sourcetype=\"signage:health\"` (store_id, player_id, content_id, download_status, last_sync_epoch). **App/TA** (typical add-on context): Digital signage CMS / player agent via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: signage:health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"signage:health\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sync_age_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **healthy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by store_id, player_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **health_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where health_pct < 95` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Digital Signage Content Delivery Health**): table store_id, player_id, health_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Status grid (player × store), Single value (unhealthy %), Timeline (failed downloads).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch signage players and content delivery. We help you catch black screens before campaigns and pricing fail in view.",
              "mtype": [
                "Availability"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.6.10",
              "n": "Mobile POS Device Battery and Connectivity",
              "c": "high",
              "f": "intermediate",
              "v": "mPOS devices that drop offline or run low on battery interrupt line-busting during peaks; proactive swaps reduce abandoned transactions.",
              "t": "MDM / mPOS telemetry via HEC",
              "d": "`index=retail` `sourcetype=\"mpos:device\"` (device_id, store_id, battery_pct, rssi_dbm, last_seen_epoch)",
              "q": "index=retail sourcetype=\"mpos:device\"\n| eval age_sec=now()-last_seen_epoch\n| eval at_risk=if(battery_pct < 20 OR rssi_dbm < -80 OR age_sec > 300, 1, 0)\n| where at_risk=1\n| stats latest(battery_pct) as batt, latest(rssi_dbm) as rssi, max(age_sec) as max_age by store_id, device_id\n| sort store_id, - max_age\n| table store_id, device_id, batt, rssi, max_age",
              "m": "Ingest periodic telemetry from MDM or the payment app SDK. Map `device_id` to assigned associate in a lookup for dispatch. Exclude devices in charging cradles if `docked` is available. **Domain context:** mPOS reliability affects peak-hour throughput; PCI and labor policies may govern device reassignment workflows. **Splunk:** `now()-last_seen_epoch` on stale devices—confirm MDM poll interval to avoid false offline.",
              "z": "Table (at-risk devices), Histogram (battery distribution), Map (store locations if lat/long present).",
              "kfp": "Battery or data gaps on mobile POS during all-day events, cradles faulting, or OS updates the store tech refresh plan rolled out in waves.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: MDM / mPOS telemetry via HEC.\n• Ensure the following data sources are available: `index=retail` `sourcetype=\"mpos:device\"` (device_id, store_id, battery_pct, rssi_dbm, last_seen_epoch).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest periodic telemetry from MDM or the payment app SDK. Map `device_id` to assigned associate in a lookup for dispatch. Exclude devices in charging cradles if `docked` is available. **Domain context:** mPOS reliability affects peak-hour throughput; PCI and labor policies may govern device reassignment workflows. **Splunk:** `now()-last_seen_epoch` on stale devices—confirm MDM poll interval to avoid false offline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"mpos:device\"\n| eval age_sec=now()-last_seen_epoch\n| eval at_risk=if(battery_pct < 20 OR rssi_dbm < -80 OR age_sec > 300, 1, 0)\n| where at_risk=1\n| stats latest(battery_pct) as batt, latest(rssi_dbm) as rssi, max(age_sec) as max_age by store_id, device_id\n| sort store_id, - max_age\n| table store_id, device_id, batt, rssi, max_age\n```\n\nUnderstanding this SPL\n\n**Mobile POS Device Battery and Connectivity** — mPOS devices that drop offline or run low on battery interrupt line-busting during peaks; proactive swaps reduce abandoned transactions.\n\nDocumented **Data sources**: `index=retail` `sourcetype=\"mpos:device\"` (device_id, store_id, battery_pct, rssi_dbm, last_seen_epoch). **App/TA** (typical add-on context): MDM / mPOS telemetry via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: mpos:device. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"mpos:device\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **at_risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where at_risk=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by store_id, device_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Mobile POS Device Battery and Connectivity**): table store_id, device_id, batt, rssi, max_age\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (at-risk devices), Histogram (battery distribution), Map (store locations if lat/long present).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mobile POS battery and connections. We help you catch dead devices before line-busting during peaks.",
              "mtype": [
                "Availability"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.6.11",
              "n": "Loss Prevention Camera System Uptime",
              "c": "high",
              "f": "beginner",
              "v": "Operational visibility into VMS/NVR stream health supports store safety and incident review workflows without duplicating fraud analytics covered elsewhere.",
              "t": "VMS health feed via HEC",
              "d": "`index=retail` `sourcetype=\"camera:status\"` (store_id, camera_id, stream_state, bitrate_kbps, last_frame_epoch)",
              "q": "index=retail sourcetype=\"camera:status\"\n| eval frame_age=now()-last_frame_epoch\n| eval up=if(lower(stream_state)=\"up\" AND frame_age < 60 AND bitrate_kbps > 100, 1, 0)\n| stats avg(up) as up_frac by store_id, camera_id\n| eval uptime_pct=round(up_frac*100,2)\n| where uptime_pct < 99\n| table store_id, camera_id, uptime_pct",
              "m": "Poll or stream VMS health every minute; normalize `stream_state` vocabulary. Exclude planned maintenance windows via lookup. Escalate to LP and facilities when entire aisles show degraded uptime. **Domain context:** Video retention policies (e.g. 30–90 days) are separate from stream health—health proves capability to capture, not retention compliance. **Splunk:** Bitrate thresholds vary by resolution/codec—tune `bitrate_kbps` per camera model.",
              "z": "Heatmap (camera × hour uptime), Table (worst cameras), Single value (stores below target).",
              "kfp": "Camera or stream issues during NVR storage rotation, LPR privacy masking, or network maintenance loss prevention and store security pre-approved.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: VMS health feed via HEC.\n• Ensure the following data sources are available: `index=retail` `sourcetype=\"camera:status\"` (store_id, camera_id, stream_state, bitrate_kbps, last_frame_epoch).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPoll or stream VMS health every minute; normalize `stream_state` vocabulary. Exclude planned maintenance windows via lookup. Escalate to LP and facilities when entire aisles show degraded uptime. **Domain context:** Video retention policies (e.g. 30–90 days) are separate from stream health—health proves capability to capture, not retention compliance. **Splunk:** Bitrate thresholds vary by resolution/codec—tune `bitrate_kbps` per camera model.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"camera:status\"\n| eval frame_age=now()-last_frame_epoch\n| eval up=if(lower(stream_state)=\"up\" AND frame_age < 60 AND bitrate_kbps > 100, 1, 0)\n| stats avg(up) as up_frac by store_id, camera_id\n| eval uptime_pct=round(up_frac*100,2)\n| where uptime_pct < 99\n| table store_id, camera_id, uptime_pct\n```\n\nUnderstanding this SPL\n\n**Loss Prevention Camera System Uptime** — Operational visibility into VMS/NVR stream health supports store safety and incident review workflows without duplicating fraud analytics covered elsewhere.\n\nDocumented **Data sources**: `index=retail` `sourcetype=\"camera:status\"` (store_id, camera_id, stream_state, bitrate_kbps, last_frame_epoch). **App/TA** (typical add-on context): VMS health feed via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: camera:status. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"camera:status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **frame_age** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **up** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by store_id, camera_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uptime_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where uptime_pct < 99` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Loss Prevention Camera System Uptime**): table store_id, camera_id, uptime_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Heatmap (camera × hour uptime), Table (worst cameras), Single value (stores below target).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch loss-prevention camera uptime. We help you catch blind spots before incidents have no video.",
              "mtype": [
                "Availability"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.6.12",
              "n": "Multi-Location Store Infrastructure Comparison",
              "c": "medium",
              "f": "intermediate",
              "v": "Benchmarking composite health scores across stores highlights underperforming sites for capital planning and regional support prioritization.",
              "t": "Aggregated retail operations index",
              "d": "`index=retail` `sourcetype=\"store:infra\"` (store_id, health_score, pos_latency_ms, wifi_issue_count, energy_kwh_day)",
              "q": "index=retail sourcetype=\"store:infra\"\n| bin _time span=1d\n| stats latest(health_score) as health, avg(pos_latency_ms) as avg_pos, sum(wifi_issue_count) as wifi_issues, latest(energy_kwh_day) as kwh by store_id, _time\n| eventstats median(health) as med_health, median(avg_pos) as med_pos by _time\n| eval pos_delta=avg_pos-med_pos\n| where health < med_health*0.9 OR pos_delta > 200 OR wifi_issues > 15\n| sort health\n| table _time, store_id, health, med_health, avg_pos, pos_delta, wifi_issues, kwh",
              "m": "Populate `store:infra` from nightly ETL that rolls up POS, Wi-Fi, and energy KPIs per store. Keep scoring methodology documented for audit. Use for regional scorecards rather than real-time alerting unless scores refresh hourly. **Domain context:** Multi-site benchmarking supports capital allocation (remodel vs close)—ensure `health_score` weighting is agreed with finance and ops. **Splunk:** `eventstats median` across stores on the same `_time` bucket compares peers—watch for missing data skewing medians.",
              "z": "Bar chart (health score by store), Box plot (score distribution), Table (bottom quartile stores).",
              "kfp": "Scoreboard variance from remodels, climate differences, or first weeks after a new store format where the region compares like-for-like on purpose.",
              "refs": "[Airport Ground Operations App](https://splunkbase.splunk.com/app/7793)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Aggregated retail operations index.\n• Ensure the following data sources are available: `index=retail` `sourcetype=\"store:infra\"` (store_id, health_score, pos_latency_ms, wifi_issue_count, energy_kwh_day).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nPopulate `store:infra` from nightly ETL that rolls up POS, Wi-Fi, and energy KPIs per store. Keep scoring methodology documented for audit. Use for regional scorecards rather than real-time alerting unless scores refresh hourly. **Domain context:** Multi-site benchmarking supports capital allocation (remodel vs close)—ensure `health_score` weighting is agreed with finance and ops. **Splunk:** `eventstats median` across stores on the same `_time` bucket compares peers—watch for missing data skewi…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=retail sourcetype=\"store:infra\"\n| bin _time span=1d\n| stats latest(health_score) as health, avg(pos_latency_ms) as avg_pos, sum(wifi_issue_count) as wifi_issues, latest(energy_kwh_day) as kwh by store_id, _time\n| eventstats median(health) as med_health, median(avg_pos) as med_pos by _time\n| eval pos_delta=avg_pos-med_pos\n| where health < med_health*0.9 OR pos_delta > 200 OR wifi_issues > 15\n| sort health\n| table _time, store_id, health, med_health, avg_pos, pos_delta, wifi_issues, kwh\n```\n\nUnderstanding this SPL\n\n**Multi-Location Store Infrastructure Comparison** — Benchmarking composite health scores across stores highlights underperforming sites for capital planning and regional support prioritization.\n\nDocumented **Data sources**: `index=retail` `sourcetype=\"store:infra\"` (store_id, health_score, pos_latency_ms, wifi_issue_count, energy_kwh_day). **App/TA** (typical add-on context): Aggregated retail operations index. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: retail; **sourcetype**: store:infra. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=retail, sourcetype=\"store:infra\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by store_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pos_delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where health < med_health*0.9 OR pos_delta > 200 OR wifi_issues > 15` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Multi-Location Store Infrastructure Comparison**): table _time, store_id, health, med_health, avg_pos, pos_delta, wifi_issues, kwh\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (health score by store), Box plot (score distribution), Table (bottom quartile stores).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch metrics across many stores. We help you see which sites struggle before a quiet problem spreads.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "ind": "Retail and E-Commerce",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 12,
            "none": 0
          }
        },
        {
          "i": "21.7",
          "n": "Aviation and Airport Operations",
          "u": [
            {
              "i": "21.7.1",
              "n": "Baggage Handling System Throughput and Misroute Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Low BHS throughput and rising misroutes delay connections and drive mishandled-bag metrics; correlating belt rates with sort errors focuses maintenance on diverters and scanners.",
              "t": "Airport Ground Operations App, BHS message feed via HEC",
              "d": "`index=airport` `sourcetype=\"airport:baggage\"` (flight_id, bag_tag, sort_destination, actual_destination, belt_id, scan_time)",
              "q": "index=airport sourcetype=\"airport:baggage\"\n| eval misroute=if(sort_destination!=actual_destination,1,0)\n| bin _time span=15m\n| stats count as bags, sum(misroute) as misroutes, dc(bag_tag) as unique_bags by belt_id, _time\n| eval misroute_pct=round(100*misroutes/nullif(bags,0),2)\n| where misroute_pct > 2 OR bags < 50\n| table _time, belt_id, bags, misroutes, misroute_pct",
              "m": "Parse BSM/BUM messages or vendor XML into normalized fields. Validate `sort_destination` against flight plan lookups when available. Alert BHS control for sustained misroute rates above SLA. **Domain context:** IATA RP 1745 baggage messaging; mishandled-bag KPIs drive airline contracts—misroutes also stress rescreening/recirc paths. **Splunk:** `bags < 50` filters low-volume belts—tune for small terminals.",
              "z": "Time chart (bags per belt), Bar chart (misroute %), Sankey (planned vs actual sort, if supported).",
              "kfp": "Misroute or throughput noise during IATA messaging loads, chute or belt maintenance, peak bank bags, or aircraft diversions the BHS log ties to a reason code.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Airport Ground Operations App, BHS message feed via HEC.\n• Ensure the following data sources are available: `index=airport` `sourcetype=\"airport:baggage\"` (flight_id, bag_tag, sort_destination, actual_destination, belt_id, scan_time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParse BSM/BUM messages or vendor XML into normalized fields. Validate `sort_destination` against flight plan lookups when available. Alert BHS control for sustained misroute rates above SLA. **Domain context:** IATA RP 1745 baggage messaging; mishandled-bag KPIs drive airline contracts—misroutes also stress rescreening/recirc paths. **Splunk:** `bags < 50` filters low-volume belts—tune for small terminals.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=airport sourcetype=\"airport:baggage\"\n| eval misroute=if(sort_destination!=actual_destination,1,0)\n| bin _time span=15m\n| stats count as bags, sum(misroute) as misroutes, dc(bag_tag) as unique_bags by belt_id, _time\n| eval misroute_pct=round(100*misroutes/nullif(bags,0),2)\n| where misroute_pct > 2 OR bags < 50\n| table _time, belt_id, bags, misroutes, misroute_pct\n```\n\nUnderstanding this SPL\n\n**Baggage Handling System Throughput and Misroute Detection** — Low BHS throughput and rising misroutes delay connections and drive mishandled-bag metrics; correlating belt rates with sort errors focuses maintenance on diverters and scanners.\n\nDocumented **Data sources**: `index=airport` `sourcetype=\"airport:baggage\"` (flight_id, bag_tag, sort_destination, actual_destination, belt_id, scan_time). **App/TA** (typical add-on context): Airport Ground Operations App, BHS message feed via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: airport; **sourcetype**: airport:baggage. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=airport, sourcetype=\"airport:baggage\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **misroute** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by belt_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **misroute_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where misroute_pct > 2 OR bags < 50` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Baggage Handling System Throughput and Misroute Detection**): table _time, belt_id, bags, misroutes, misroute_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (bags per belt), Bar chart (misroute %), Sankey (planned vs actual sort, if supported).",
              "script": "",
              "premium": "Splunk Airport Ground Operations App",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch bags through sortation. We help you catch jams and wrong routes before bags miss the flight.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "ind": "Aviation",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.7.2",
              "n": "Security Lane Processing Time and Queue Length",
              "c": "critical",
              "f": "intermediate",
              "v": "Passenger screening wait times drive missed flights and terminal congestion; monitoring queue depth and lane throughput supports dynamic staffing.",
              "t": "Queue analytics / security lane sensors via HEC",
              "d": "`index=airport` `sourcetype=\"airport:security\"` (terminal_id, lane_id, queue_depth, wait_time_sec, throughput_pph)",
              "q": "index=airport sourcetype=\"airport:security\"\n| bin _time span=5m\n| stats avg(wait_time_sec) as avg_wait, max(queue_depth) as max_q, avg(throughput_pph) as avg_thr by lane_id, terminal_id, _time\n| where avg_wait > 600 OR max_q > 80 OR avg_thr < 120\n| eval avg_wait_min=round(avg_wait/60,1)\n| table _time, terminal_id, lane_id, avg_wait_min, max_q, avg_thr",
              "m": "Ingest lidar or camera analytics exports with consistent `lane_id`. Align with airport peak bank schedules via CSV lookup. Coordinate alerts with security operations, not for access control decisions alone. **Domain context:** Checkpoint wait standards vary by program (PreCheck vs standard); privacy rules limit retention of video/lidar—prefer aggregated metrics in Splunk.",
              "z": "Time chart (wait time by lane), Area chart (queue depth), Heatmap (terminal × hour).",
              "kfp": "Queue noise during staffing breaks, TSA equipment swaps, or configuration tests that security and airport ops planned as a timed drill.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Queue analytics / security lane sensors via HEC.\n• Ensure the following data sources are available: `index=airport` `sourcetype=\"airport:security\"` (terminal_id, lane_id, queue_depth, wait_time_sec, throughput_pph).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest lidar or camera analytics exports with consistent `lane_id`. Align with airport peak bank schedules via CSV lookup. Coordinate alerts with security operations, not for access control decisions alone. **Domain context:** Checkpoint wait standards vary by program (PreCheck vs standard); privacy rules limit retention of video/lidar—prefer aggregated metrics in Splunk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=airport sourcetype=\"airport:security\"\n| bin _time span=5m\n| stats avg(wait_time_sec) as avg_wait, max(queue_depth) as max_q, avg(throughput_pph) as avg_thr by lane_id, terminal_id, _time\n| where avg_wait > 600 OR max_q > 80 OR avg_thr < 120\n| eval avg_wait_min=round(avg_wait/60,1)\n| table _time, terminal_id, lane_id, avg_wait_min, max_q, avg_thr\n```\n\nUnderstanding this SPL\n\n**Security Lane Processing Time and Queue Length** — Passenger screening wait times drive missed flights and terminal congestion; monitoring queue depth and lane throughput supports dynamic staffing.\n\nDocumented **Data sources**: `index=airport` `sourcetype=\"airport:security\"` (terminal_id, lane_id, queue_depth, wait_time_sec, throughput_pph). **App/TA** (typical add-on context): Queue analytics / security lane sensors via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: airport; **sourcetype**: airport:security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=airport, sourcetype=\"airport:security\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by lane_id, terminal_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_wait > 600 OR max_q > 80 OR avg_thr < 120` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **avg_wait_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Security Lane Processing Time and Queue Length**): table _time, terminal_id, lane_id, avg_wait_min, max_q, avg_thr\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Time chart (wait time by lane), Area chart (queue depth), Heatmap (terminal × hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch security line waits and queue depth. We help you open lanes before missed flights and bad headlines.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "ind": "Aviation",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.7.3",
              "n": "Aircraft Turnaround Time Monitoring (A-CDM)",
              "c": "critical",
              "f": "advanced",
              "v": "A-CDM milestones expose ground-handling delays that compress departure slots; tracking block-on to off-block variance improves OTP and stand utilization.",
              "t": "A-CDM feed via HEC, Airport CIM for Splunk",
              "d": "`index=airport` `sourcetype=\"acdm:turnaround\"` (flight_id, stand_id, block_on, target_off, actual_off, milestone_name, milestone_time)",
              "q": "index=airport sourcetype=\"acdm:turnaround\" milestone_name=\"ACTUAL_OFF_BLOCK\"\n| eval turnaround_sec=actual_off-block_on\n| eval target_turn_sec=target_off-block_on\n| eval variance_sec=turnaround_sec-target_turn_sec\n| where variance_sec > 900\n| stats avg(variance_sec) as avg_var, count as late_turns by stand_id\n| eval avg_var_min=round(avg_var/60,1)\n| sort - avg_var\n| table stand_id, late_turns, avg_var_min",
              "m": "Normalize all timestamps to UTC with clear time zone fields in raw data. Join to airline handler codes if present for accountability. Use for operational review; validate calculations against the airport CDM tool of record. **Domain context:** IATA A-CDM milestones (TOBT/TSAT/ASAT/ATOT) define turnaround—ensure field mapping matches your airport’s CDM implementation. **Splunk:** `block_on`/`actual_off` must be epoch-consistent; `milestone_name` filter should match vendor strings exactly.",
              "z": "Gantt-style timeline (per flight), Histogram (turnaround distribution), Table (stands with chronic variance).",
              "kfp": "Turnaround noise from gate holds, deicing queues, A-CDM re-sequencing, or construction on taxiways the airport operations center is already moving.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: A-CDM feed via HEC, Airport CIM for Splunk.\n• Ensure the following data sources are available: `index=airport` `sourcetype=\"acdm:turnaround\"` (flight_id, stand_id, block_on, target_off, actual_off, milestone_name, milestone_time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize all timestamps to UTC with clear time zone fields in raw data. Join to airline handler codes if present for accountability. Use for operational review; validate calculations against the airport CDM tool of record. **Domain context:** IATA A-CDM milestones (TOBT/TSAT/ASAT/ATOT) define turnaround—ensure field mapping matches your airport’s CDM implementation. **Splunk:** `block_on`/`actual_off` must be epoch-consistent; `milestone_name` filter should match vendor strings exactly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=airport sourcetype=\"acdm:turnaround\" milestone_name=\"ACTUAL_OFF_BLOCK\"\n| eval turnaround_sec=actual_off-block_on\n| eval target_turn_sec=target_off-block_on\n| eval variance_sec=turnaround_sec-target_turn_sec\n| where variance_sec > 900\n| stats avg(variance_sec) as avg_var, count as late_turns by stand_id\n| eval avg_var_min=round(avg_var/60,1)\n| sort - avg_var\n| table stand_id, late_turns, avg_var_min\n```\n\nUnderstanding this SPL\n\n**Aircraft Turnaround Time Monitoring (A-CDM)** — A-CDM milestones expose ground-handling delays that compress departure slots; tracking block-on to off-block variance improves OTP and stand utilization.\n\nDocumented **Data sources**: `index=airport` `sourcetype=\"acdm:turnaround\"` (flight_id, stand_id, block_on, target_off, actual_off, milestone_name, milestone_time). **App/TA** (typical add-on context): A-CDM feed via HEC, Airport CIM for Splunk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: airport; **sourcetype**: acdm:turnaround. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=airport, sourcetype=\"acdm:turnaround\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **turnaround_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **target_turn_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **variance_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where variance_sec > 900` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by stand_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_var_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Aircraft Turnaround Time Monitoring (A-CDM)**): table stand_id, late_turns, avg_var_min\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gantt-style timeline (per flight), Histogram (turnaround distribution), Table (stands with chronic variance).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch turnaround times against the A-CDM plan. We help you catch slippage before the bank and gates clog.",
              "mtype": [
                "Performance"
              ],
              "ind": "Aviation",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.7.4",
              "n": "Airfield Ground Vehicle Tracking and Geofencing",
              "c": "high",
              "f": "advanced",
              "v": "Vehicles breaching movement area boundaries create runway incursion risk; monitoring GPS tracks against geofences supports safety management and audit evidence.",
              "t": "Airfield vehicle telematics via HEC",
              "d": "`index=airport` `sourcetype=\"airfield:vehicle\"` (vehicle_id, lat, lon, speed_kph, geofence_id, breach_flag)",
              "q": "index=airport sourcetype=\"airfield:vehicle\"\n| where breach_flag=1 OR speed_kph > 40\n| bin _time span=1m\n| stats count as events, values(geofence_id) as zones by vehicle_id, _time\n| where events >= 2\n| table _time, vehicle_id, events, zones",
              "m": "Stream or batch position fixes; compute `breach_flag` at the edge if possible for lower latency. Use Splunk for trending and investigation; pair with SMS alerts for real-time incursion systems. Retain maps for post-incident review only with proper access controls. **Domain context:** ICAO Doc 9870 / runway safety areas—incursion risk management is separate from Splunk; this UC supports SMS evidence and training. **Splunk:** Speed threshold (40 km/h) is illustrative—set per movement area policy.",
              "z": "Map (vehicle positions), Timeline (breach events), Table (repeat offenders).",
              "kfp": "Geofence noise during escorted maintenance, after-hours access for vendors, or airfield exercise vehicles on approved routes the badging office cleared.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Airfield vehicle telematics via HEC.\n• Ensure the following data sources are available: `index=airport` `sourcetype=\"airfield:vehicle\"` (vehicle_id, lat, lon, speed_kph, geofence_id, breach_flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nStream or batch position fixes; compute `breach_flag` at the edge if possible for lower latency. Use Splunk for trending and investigation; pair with SMS alerts for real-time incursion systems. Retain maps for post-incident review only with proper access controls. **Domain context:** ICAO Doc 9870 / runway safety areas—incursion risk management is separate from Splunk; this UC supports SMS evidence and training. **Splunk:** Speed threshold (40 km/h) is illustrative—set per movement area policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=airport sourcetype=\"airfield:vehicle\"\n| where breach_flag=1 OR speed_kph > 40\n| bin _time span=1m\n| stats count as events, values(geofence_id) as zones by vehicle_id, _time\n| where events >= 2\n| table _time, vehicle_id, events, zones\n```\n\nUnderstanding this SPL\n\n**Airfield Ground Vehicle Tracking and Geofencing** — Vehicles breaching movement area boundaries create runway incursion risk; monitoring GPS tracks against geofences supports safety management and audit evidence.\n\nDocumented **Data sources**: `index=airport` `sourcetype=\"airfield:vehicle\"` (vehicle_id, lat, lon, speed_kph, geofence_id, breach_flag). **App/TA** (typical add-on context): Airfield vehicle telematics via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: airport; **sourcetype**: airfield:vehicle. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=airport, sourcetype=\"airfield:vehicle\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where breach_flag=1 OR speed_kph > 40` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by vehicle_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where events >= 2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Airfield Ground Vehicle Tracking and Geofencing**): table _time, vehicle_id, events, zones\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (vehicle positions), Timeline (breach events), Table (repeat offenders).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch airside vehicles and geofences. We help you catch incursions and odd moves before tarmac risk grows.",
              "mtype": [
                "Compliance",
                "Fault"
              ],
              "ind": "Aviation",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.7.5",
              "n": "Flight Information Display System (FIDS) Health",
              "c": "high",
              "f": "beginner",
              "v": "Blank or stale FIDS erode passenger trust and increase gate crowding; heartbeat and content sync monitoring ensures displays match the operational data feed.",
              "t": "FIDS CMS / player health via HEC",
              "d": "`index=airport` `sourcetype=\"fids:status\"` (display_id, terminal_id, sync_lag_sec, content_version, online_flag)",
              "q": "index=airport sourcetype=\"fids:status\"\n| eval healthy=if(online_flag=1 AND sync_lag_sec < 120, 1, 0)\n| stats avg(healthy) as ok_frac, max(sync_lag_sec) as max_lag by terminal_id, display_id\n| eval uptime_pct=round(ok_frac*100,2)\n| where uptime_pct < 99 OR max_lag > 300\n| table terminal_id, display_id, uptime_pct, max_lag",
              "m": "Ingest per-display polls every minute; join `content_version` to the master feed version from a KV store or lookup. Page duty manager when entire banks of displays degrade together (network path issue). **Domain context:** AIDX/BIX messages often back FIDS—stale sync can desynchronize gate info from airline systems—correlate with AODB when troubleshooting. **Splunk:** `online_flag` may be string—normalize with `tonumber` or `case`.",
              "z": "Status grid (display × terminal), Time chart (sync lag), Single value (offline count).",
              "kfp": "FIDS noise during CMS content pushes, power blips on a concourse, or display firmware loads the airport digital team validated in staging.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: FIDS CMS / player health via HEC.\n• Ensure the following data sources are available: `index=airport` `sourcetype=\"fids:status\"` (display_id, terminal_id, sync_lag_sec, content_version, online_flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest per-display polls every minute; join `content_version` to the master feed version from a KV store or lookup. Page duty manager when entire banks of displays degrade together (network path issue). **Domain context:** AIDX/BIX messages often back FIDS—stale sync can desynchronize gate info from airline systems—correlate with AODB when troubleshooting. **Splunk:** `online_flag` may be string—normalize with `tonumber` or `case`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=airport sourcetype=\"fids:status\"\n| eval healthy=if(online_flag=1 AND sync_lag_sec < 120, 1, 0)\n| stats avg(healthy) as ok_frac, max(sync_lag_sec) as max_lag by terminal_id, display_id\n| eval uptime_pct=round(ok_frac*100,2)\n| where uptime_pct < 99 OR max_lag > 300\n| table terminal_id, display_id, uptime_pct, max_lag\n```\n\nUnderstanding this SPL\n\n**Flight Information Display System (FIDS) Health** — Blank or stale FIDS erode passenger trust and increase gate crowding; heartbeat and content sync monitoring ensures displays match the operational data feed.\n\nDocumented **Data sources**: `index=airport` `sourcetype=\"fids:status\"` (display_id, terminal_id, sync_lag_sec, content_version, online_flag). **App/TA** (typical add-on context): FIDS CMS / player health via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: airport; **sourcetype**: fids:status. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=airport, sourcetype=\"fids:status\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **healthy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by terminal_id, display_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uptime_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where uptime_pct < 99 OR max_lag > 300` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Flight Information Display System (FIDS) Health**): table terminal_id, display_id, uptime_pct, max_lag\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Status grid (display × terminal), Time chart (sync lag), Single value (offline count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch flight display freshness and errors. We help you catch wrong or stale times before passengers revolt.",
              "mtype": [
                "Availability"
              ],
              "ind": "Aviation",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.7.6",
              "n": "Airport Wi-Fi Capacity and Congestion Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "Passenger Wi-Fi saturation during banks degrades airline apps and airport services; tracking airtime utilization and retries guides AP density and backhaul upgrades.",
              "t": "WLAN controller syslog via HEC",
              "d": "`index=airport` `sourcetype=\"airport:wifi\"` (ap_name, terminal_id, channel_util_pct, client_count, retry_rate_pct)",
              "q": "index=airport sourcetype=\"airport:wifi\"\n| bin _time span=5m\n| stats avg(channel_util_pct) as avg_util, max(client_count) as max_clients, avg(retry_rate_pct) as avg_retry by ap_name, terminal_id, _time\n| where avg_util > 75 OR avg_retry > 15 OR max_clients > 200\n| table _time, terminal_id, ap_name, avg_util, max_clients, avg_retry",
              "m": "Normalize vendor metrics to `channel_util_pct` and `retry_rate_pct`. Compare concourse peers; exclude maintenance SSIDs. Correlate with passenger counts from `terminal:flow` when available. **Domain context:** High-density Wi-Fi (802.11ax) in terminals is interference-limited—`channel_util_pct` above ~75% often correlates with retries and sticky clients. **Splunk:** Vendor-specific metrics may need `eval` scaling (0–1 vs 0–100).",
              "z": "Heatmap (AP × hour utilization), Line chart (client count), Bar chart (top congested APs).",
              "kfp": "Wi-Fi congestion while passenger loads surge, a controller upgrades, or a new band is trialed on the AP plan the wireless lead announced.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: WLAN controller syslog via HEC.\n• Ensure the following data sources are available: `index=airport` `sourcetype=\"airport:wifi\"` (ap_name, terminal_id, channel_util_pct, client_count, retry_rate_pct).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize vendor metrics to `channel_util_pct` and `retry_rate_pct`. Compare concourse peers; exclude maintenance SSIDs. Correlate with passenger counts from `terminal:flow` when available. **Domain context:** High-density Wi-Fi (802.11ax) in terminals is interference-limited—`channel_util_pct` above ~75% often correlates with retries and sticky clients. **Splunk:** Vendor-specific metrics may need `eval` scaling (0–1 vs 0–100).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=airport sourcetype=\"airport:wifi\"\n| bin _time span=5m\n| stats avg(channel_util_pct) as avg_util, max(client_count) as max_clients, avg(retry_rate_pct) as avg_retry by ap_name, terminal_id, _time\n| where avg_util > 75 OR avg_retry > 15 OR max_clients > 200\n| table _time, terminal_id, ap_name, avg_util, max_clients, avg_retry\n```\n\nUnderstanding this SPL\n\n**Airport Wi-Fi Capacity and Congestion Monitoring** — Passenger Wi-Fi saturation during banks degrades airline apps and airport services; tracking airtime utilization and retries guides AP density and backhaul upgrades.\n\nDocumented **Data sources**: `index=airport` `sourcetype=\"airport:wifi\"` (ap_name, terminal_id, channel_util_pct, client_count, retry_rate_pct). **App/TA** (typical add-on context): WLAN controller syslog via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: airport; **sourcetype**: airport:wifi. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=airport, sourcetype=\"airport:wifi\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by ap_name, terminal_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_util > 75 OR avg_retry > 15 OR max_clients > 200` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Airport Wi-Fi Capacity and Congestion Monitoring**): table _time, terminal_id, ap_name, avg_util, max_clients, avg_retry\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (AP × hour utilization), Line chart (client count), Bar chart (top congested APs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch terminal Wi-Fi capacity. We help you catch congestion before travelers cannot connect.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "ind": "Aviation",
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.7.7",
              "n": "Runway and Taxiway Lighting System Status",
              "c": "critical",
              "f": "intermediate",
              "v": "Lighting circuit faults affect night and low-visibility operations; consolidating SCADA alarms with last-known good states speeds electrical maintenance dispatch.",
              "t": "Airfield lighting SCADA via HEC",
              "d": "`index=airport` `sourcetype=\"airfield:lighting\"` (circuit_id, runway_id, intensity_pct, comm_ok, alarm_state)",
              "q": "index=airport sourcetype=\"airfield:lighting\"\n| eval fault=if(comm_ok=0 OR lower(alarm_state)!=\"normal\" OR intensity_pct < 50, 1, 0)\n| where fault=1\n| stats latest(intensity_pct) as intensity, latest(alarm_state) as alarm, latest(comm_ok) as comm_ok by runway_id, circuit_id\n| sort runway_id, circuit_id\n| table runway_id, circuit_id, intensity, alarm, comm_ok",
              "m": "Map SCADA points to runway/taxi identifiers used in NOTAM workflows. Do not replace airfield lighting control systems; Splunk is for visibility and trending. Filter planned test events via maintenance tags. **Domain context:** FAA/EASA lighting requirements tie to LVP/LVO operations—control remains in the field lighting system, not Splunk. **Splunk:** `comm_ok` and `intensity_pct` types must be numeric for comparisons.",
              "z": "Single value (circuits in fault), Table (runway × circuit), Timeline (alarm transitions).",
              "kfp": "Lighting state noise while circuits are tagged out for re-lamp, airfield work, or AGL certification runs that the electrical maintenance package covers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Airfield lighting SCADA via HEC.\n• Ensure the following data sources are available: `index=airport` `sourcetype=\"airfield:lighting\"` (circuit_id, runway_id, intensity_pct, comm_ok, alarm_state).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap SCADA points to runway/taxi identifiers used in NOTAM workflows. Do not replace airfield lighting control systems; Splunk is for visibility and trending. Filter planned test events via maintenance tags. **Domain context:** FAA/EASA lighting requirements tie to LVP/LVO operations—control remains in the field lighting system, not Splunk. **Splunk:** `comm_ok` and `intensity_pct` types must be numeric for comparisons.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=airport sourcetype=\"airfield:lighting\"\n| eval fault=if(comm_ok=0 OR lower(alarm_state)!=\"normal\" OR intensity_pct < 50, 1, 0)\n| where fault=1\n| stats latest(intensity_pct) as intensity, latest(alarm_state) as alarm, latest(comm_ok) as comm_ok by runway_id, circuit_id\n| sort runway_id, circuit_id\n| table runway_id, circuit_id, intensity, alarm, comm_ok\n```\n\nUnderstanding this SPL\n\n**Runway and Taxiway Lighting System Status** — Lighting circuit faults affect night and low-visibility operations; consolidating SCADA alarms with last-known good states speeds electrical maintenance dispatch.\n\nDocumented **Data sources**: `index=airport` `sourcetype=\"airfield:lighting\"` (circuit_id, runway_id, intensity_pct, comm_ok, alarm_state). **App/TA** (typical add-on context): Airfield lighting SCADA via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: airport; **sourcetype**: airfield:lighting. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=airport, sourcetype=\"airfield:lighting\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fault** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fault=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by runway_id, circuit_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Runway and Taxiway Lighting System Status**): table runway_id, circuit_id, intensity, alarm, comm_ok\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (circuits in fault), Table (runway × circuit), Timeline (alarm transitions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch runway and taxi lighting circuits. We help you catch faults before night ops lose guidance.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "ind": "Aviation",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.7.8",
              "n": "Gate Allocation Optimization Analytics",
              "c": "high",
              "f": "intermediate",
              "v": "Gate churn and long towing distances waste ground time; analyzing assigned vs actual gate usage supports stand planning and reduces conflicts.",
              "t": "AODB / gate management via HEC",
              "d": "`index=airport` `sourcetype=\"gate:allocation\"` (flight_id, gate_id, planned_gate, actual_gate, change_count, tow_required_flag)",
              "q": "index=airport sourcetype=\"gate:allocation\"\n| eval gate_mismatch=if(planned_gate!=actual_gate,1,0)\n| stats sum(gate_mismatch) as changes, sum(tow_required_flag) as tows, count as flights by gate_id\n| eval change_rate=round(100*changes/nullif(flights,0),2)\n| where change_rate > 25 OR tows > 10\n| sort - change_rate\n| table gate_id, flights, changes, change_rate, tows",
              "m": "Refresh from AODB snapshots or event stream on gate changes. Join aircraft size class if available to explain constraints. Use monthly reports for planning rather than minute-by-minute alerts. **Domain context:** Gate changes drive towing and turnaround—ICAO Doc 9644 addresses apron management coordination. **Splunk:** `gate_mismatch` counts per `gate_id` may mix cause (airline swap vs airport reassignment)—add `reason_code` if available.",
              "z": "Bar chart (change rate by gate), Stacked bar (tows vs no tow), Table (top volatile gates).",
              "kfp": "Gate assignment variance on irregular ops days, ad hoc aircraft type swaps, or hub bank waves the gate planner optimizes in the live meeting.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AODB / gate management via HEC.\n• Ensure the following data sources are available: `index=airport` `sourcetype=\"gate:allocation\"` (flight_id, gate_id, planned_gate, actual_gate, change_count, tow_required_flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRefresh from AODB snapshots or event stream on gate changes. Join aircraft size class if available to explain constraints. Use monthly reports for planning rather than minute-by-minute alerts. **Domain context:** Gate changes drive towing and turnaround—ICAO Doc 9644 addresses apron management coordination. **Splunk:** `gate_mismatch` counts per `gate_id` may mix cause (airline swap vs airport reassignment)—add `reason_code` if available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=airport sourcetype=\"gate:allocation\"\n| eval gate_mismatch=if(planned_gate!=actual_gate,1,0)\n| stats sum(gate_mismatch) as changes, sum(tow_required_flag) as tows, count as flights by gate_id\n| eval change_rate=round(100*changes/nullif(flights,0),2)\n| where change_rate > 25 OR tows > 10\n| sort - change_rate\n| table gate_id, flights, changes, change_rate, tows\n```\n\nUnderstanding this SPL\n\n**Gate Allocation Optimization Analytics** — Gate churn and long towing distances waste ground time; analyzing assigned vs actual gate usage supports stand planning and reduces conflicts.\n\nDocumented **Data sources**: `index=airport` `sourcetype=\"gate:allocation\"` (flight_id, gate_id, planned_gate, actual_gate, change_count, tow_required_flag). **App/TA** (typical add-on context): AODB / gate management via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: airport; **sourcetype**: gate:allocation. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=airport, sourcetype=\"gate:allocation\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **gate_mismatch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by gate_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **change_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where change_rate > 25 OR tows > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Gate Allocation Optimization Analytics**): table gate_id, flights, changes, change_rate, tows\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (change rate by gate), Stacked bar (tows vs no tow), Table (top volatile gates).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch gate use and turns. We help you catch misparking before turn times and stand plans fail.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "ind": "Aviation",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.7.9",
              "n": "Passenger Flow and Terminal Capacity",
              "c": "high",
              "f": "intermediate",
              "v": "Understanding dwell and flow between checkpoints and gates helps prevent overcrowding and supports staffing during irregular operations.",
              "t": "Wi-Fi probe / Bluetooth / lidar analytics via HEC",
              "d": "`index=airport` `sourcetype=\"terminal:flow\"` (terminal_id, zone_id, occupancy_est, flow_rate_ppm)",
              "q": "index=airport sourcetype=\"terminal:flow\"\n| bin _time span=5m\n| stats avg(occupancy_est) as occ, max(flow_rate_ppm) as peak_flow by terminal_id, zone_id, _time\n| eval date_hour=strftime(_time,\"%H\"), date_wday=strftime(_time,\"%w\")\n| eventstats median(occ) as med_occ by terminal_id, zone_id, date_hour, date_wday\n| eval occ_ratio=if(med_occ>0, round(occ/med_occ,2), null)\n| where occ > 5000 OR occ_ratio > 1.4 OR peak_flow > 120\n| table _time, terminal_id, zone_id, occ, med_occ, occ_ratio, peak_flow",
              "m": "Calibrate occupancy models against manual counts quarterly. Mask precise sensor locations in dashboards if required by security. Combine with `airport:security` wait times for holistic terminal health. **Domain context:** Fire/life safety codes cap occupancy; operational analytics should not be confused with legal occupancy limits. **Splunk:** `occ > 5000` is illustrative—scale thresholds per terminal design.",
              "z": "Heatmap (zone × time occupancy), Line chart (flow rate), Area chart (terminal totals).",
              "kfp": "Passenger flow noise during bus batches, pre-security events, or temporary lane closures the terminal duty manager explains with a passenger communication plan.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Wi-Fi probe / Bluetooth / lidar analytics via HEC.\n• Ensure the following data sources are available: `index=airport` `sourcetype=\"terminal:flow\"` (terminal_id, zone_id, occupancy_est, flow_rate_ppm).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nCalibrate occupancy models against manual counts quarterly. Mask precise sensor locations in dashboards if required by security. Combine with `airport:security` wait times for holistic terminal health. **Domain context:** Fire/life safety codes cap occupancy; operational analytics should not be confused with legal occupancy limits. **Splunk:** `occ > 5000` is illustrative—scale thresholds per terminal design.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=airport sourcetype=\"terminal:flow\"\n| bin _time span=5m\n| stats avg(occupancy_est) as occ, max(flow_rate_ppm) as peak_flow by terminal_id, zone_id, _time\n| eval date_hour=strftime(_time,\"%H\"), date_wday=strftime(_time,\"%w\")\n| eventstats median(occ) as med_occ by terminal_id, zone_id, date_hour, date_wday\n| eval occ_ratio=if(med_occ>0, round(occ/med_occ,2), null)\n| where occ > 5000 OR occ_ratio > 1.4 OR peak_flow > 120\n| table _time, terminal_id, zone_id, occ, med_occ, occ_ratio, peak_flow\n```\n\nUnderstanding this SPL\n\n**Passenger Flow and Terminal Capacity** — Understanding dwell and flow between checkpoints and gates helps prevent overcrowding and supports staffing during irregular operations.\n\nDocumented **Data sources**: `index=airport` `sourcetype=\"terminal:flow\"` (terminal_id, zone_id, occupancy_est, flow_rate_ppm). **App/TA** (typical add-on context): Wi-Fi probe / Bluetooth / lidar analytics via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: airport; **sourcetype**: terminal:flow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=airport, sourcetype=\"terminal:flow\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by terminal_id, zone_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **date_hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by terminal_id, zone_id, date_hour, date_wday** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **occ_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where occ > 5000 OR occ_ratio > 1.4 OR peak_flow > 120` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Passenger Flow and Terminal Capacity**): table _time, terminal_id, zone_id, occ, med_occ, occ_ratio, peak_flow\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (zone × time occupancy), Line chart (flow rate), Area chart (terminal totals).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch passenger crowding in halls and piers. We help you re-stage staff before security and services choke.",
              "mtype": [
                "Capacity"
              ],
              "ind": "Aviation",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.7.10",
              "n": "Airport SCADA Alarm Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Airports rely on SCADA for jet bridges, baggage power, and utilities; alarm floods and unacked critical points risk missed responses during IROPS.",
              "t": "Airport SCADA historian / alarm export via HEC",
              "d": "`index=airport` `sourcetype=\"airport:scada\"` (subsystem, alarm_id, priority, ack_state, description)",
              "q": "index=airport sourcetype=\"airport:scada\"\n| where lower(ack_state)!=\"acked\" AND priority IN (\"1\",\"2\",\"critical\",\"high\")\n| bin _time span=5m\n| stats count as open_alarms, dc(alarm_id) as distinct_points by subsystem, _time\n| where open_alarms > 10\n| table _time, subsystem, open_alarms, distinct_points",
              "m": "Normalize priority enumerations across subsystems. Route critical unacked alarms to facilities NOC; use dedup keys on `alarm_id` to avoid double counting. Pair with maintenance windows lookup to suppress expected noise. **Domain context:** Airport SCADA spans jet bridges, baggage, HVAC, fuel—subsystem taxonomy should map to runbooks for IROPS. **Splunk:** `lower(ack_state)!=\"acked\"` may miss variants (`Acknowledged`); use `match` or lookup normalization.",
              "z": "Timeline (alarm bursts), Bar chart (open alarms by subsystem), Single value (unacked critical count).",
              "kfp": "Alarm bursts during BHS or jet-bridge exercises, utility or fuel system cutovers, airfield lighting certification runs, or fire/life-safety tests that airport facilities logged as supervised work.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Airport SCADA historian / alarm export via HEC.\n• Ensure the following data sources are available: `index=airport` `sourcetype=\"airport:scada\"` (subsystem, alarm_id, priority, ack_state, description).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize priority enumerations across subsystems. Route critical unacked alarms to facilities NOC; use dedup keys on `alarm_id` to avoid double counting. Pair with maintenance windows lookup to suppress expected noise. **Domain context:** Airport SCADA spans jet bridges, baggage, HVAC, fuel—subsystem taxonomy should map to runbooks for IROPS. **Splunk:** `lower(ack_state)!=\"acked\"` may miss variants (`Acknowledged`); use `match` or lookup normalization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=airport sourcetype=\"airport:scada\"\n| where lower(ack_state)!=\"acked\" AND priority IN (\"1\",\"2\",\"critical\",\"high\")\n| bin _time span=5m\n| stats count as open_alarms, dc(alarm_id) as distinct_points by subsystem, _time\n| where open_alarms > 10\n| table _time, subsystem, open_alarms, distinct_points\n```\n\nUnderstanding this SPL\n\n**Airport SCADA Alarm Monitoring** — Airports rely on SCADA for jet bridges, baggage power, and utilities; alarm floods and unacked critical points risk missed responses during IROPS.\n\nDocumented **Data sources**: `index=airport` `sourcetype=\"airport:scada\"` (subsystem, alarm_id, priority, ack_state, description). **App/TA** (typical add-on context): Airport SCADA historian / alarm export via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: airport; **sourcetype**: airport:scada. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=airport, sourcetype=\"airport:scada\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where lower(ack_state)!=\"acked\" AND priority IN (\"1\",\"2\",\"critical\",\"high\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by subsystem, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where open_alarms > 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Airport SCADA Alarm Monitoring**): table _time, subsystem, open_alarms, distinct_points\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Timeline (alarm bursts), Bar chart (open alarms by subsystem), Single value (unacked critical count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch airport facility SCADA alarms. We help you catch HVAC, power, or baggage plant faults before comfort and ops suffer.",
              "mtype": [
                "Fault",
                "Availability"
              ],
              "ind": "Aviation",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 10,
            "none": 0
          }
        },
        {
          "i": "21.8",
          "n": "Telecommunications Operations",
          "u": [
            {
              "i": "21.8.1",
              "n": "RAN Cell Site Availability",
              "c": "critical",
              "f": "intermediate",
              "v": "Cell site outages directly impact subscriber coverage and handover success; tracking up/down transitions and sustained downtime focuses radio access field teams before KPIs degrade across the footprint.",
              "t": "Custom HEC (RAN EMS / element manager export)",
              "d": "`index=telecom` `sourcetype=\"ran:cellsite\"` (site_id, cell_id, operational_state, last_transition_epoch)",
              "q": "index=telecom sourcetype=\"ran:cellsite\"\n| eval up=if(lower(operational_state) IN (\"up\",\"enabled\",\"on\"),1,0)\n| bin _time span=5m\n| stats avg(up) as avail_frac, min(up) as min_state by site_id, cell_id, _time\n| eval avail_pct=round(avail_frac*100,2)\n| where min_state=0 OR avail_pct < 99.5\n| table _time, site_id, cell_id, avail_pct, min_state",
              "m": "Ingest periodic SNMP or EMS polls with normalized `operational_state` strings. Align site identifiers with inventory CMDB. Alert on sustained down segments and flapping (multiple transitions per hour) using a follow-on search on `last_transition_epoch`. **Domain context:** RAN availability is a 3GPP/O-RAN operational KPI; distinguish planned work (RET/antenna) from faults—use `last_transition_epoch` flapping logic for churn. **Splunk:** `avail_pct < 99.5` on 5m buckets is strict—use longer windows for SLA reporting; ensure `operational_state` vocabulary is normalized across vendors.",
              "z": "Time chart (availability % by site), Status grid (cell × site), Single value (sites below SLA).",
              "kfp": "RAN noise during planned retunes, software loads, power tests, or brief dips during weather that the radio NOC shows as self-clearing in the EMS.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (RAN EMS / element manager export).\n• Ensure the following data sources are available: `index=telecom` `sourcetype=\"ran:cellsite\"` (site_id, cell_id, operational_state, last_transition_epoch).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest periodic SNMP or EMS polls with normalized `operational_state` strings. Align site identifiers with inventory CMDB. Alert on sustained down segments and flapping (multiple transitions per hour) using a follow-on search on `last_transition_epoch`. **Domain context:** RAN availability is a 3GPP/O-RAN operational KPI; distinguish planned work (RET/antenna) from faults—use `last_transition_epoch` flapping logic for churn. **Splunk:** `avail_pct < 99.5` on 5m buckets is strict—use longer windo…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=telecom sourcetype=\"ran:cellsite\"\n| eval up=if(lower(operational_state) IN (\"up\",\"enabled\",\"on\"),1,0)\n| bin _time span=5m\n| stats avg(up) as avail_frac, min(up) as min_state by site_id, cell_id, _time\n| eval avail_pct=round(avail_frac*100,2)\n| where min_state=0 OR avail_pct < 99.5\n| table _time, site_id, cell_id, avail_pct, min_state\n```\n\nUnderstanding this SPL\n\n**RAN Cell Site Availability** — Cell site outages directly impact subscriber coverage and handover success; tracking up/down transitions and sustained downtime focuses radio access field teams before KPIs degrade across the footprint.\n\nDocumented **Data sources**: `index=telecom` `sourcetype=\"ran:cellsite\"` (site_id, cell_id, operational_state, last_transition_epoch). **App/TA** (typical add-on context): Custom HEC (RAN EMS / element manager export). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: telecom; **sourcetype**: ran:cellsite. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=telecom, sourcetype=\"ran:cellsite\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **up** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by site_id, cell_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_state=0 OR avail_pct < 99.5` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **RAN Cell Site Availability**): table _time, site_id, cell_id, avail_pct, min_state\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (availability % by site), Status grid (cell × site), Single value (sites below SLA).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch whether cell sites stay on the air. We help you catch outages before dropped calls and data spread.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.8.2",
              "n": "Core Network Element Health (MME, SGW, PGW)",
              "c": "critical",
              "f": "advanced",
              "v": "Core packet gateways and mobility management anchor subscriber sessions; correlating CPU, session load, and alarm states helps capacity and incident teams isolate a degrading blade before mass detach.",
              "t": "Custom HEC (EPC/5GC performance counters)",
              "d": "`index=telecom` `sourcetype=\"core:element\"` (element_id, element_type, cpu_pct, active_sessions, alarm_severity)",
              "q": "index=telecom sourcetype=\"core:element\"\n| eval sev_score=case(lower(alarm_severity) IN (\"critical\",\"1\"),4, lower(alarm_severity) IN (\"major\",\"2\"),3, lower(alarm_severity) IN (\"minor\",\"3\"),2, true(),0)\n| where cpu_pct > 85 OR active_sessions > 800000 OR sev_score >= 3\n| bin _time span=5m\n| stats max(cpu_pct) as max_cpu, max(active_sessions) as max_sess, max(sev_score) as max_alarm by element_id, element_type, _time\n| table _time, element_id, element_type, max_cpu, max_sess, max_alarm",
              "m": "Map vendor counter names into `cpu_pct` and `active_sessions` in transforms. Exclude planned maintenance windows via lookup on `element_id`. Thresholds vary by platform—tune per NE class and license limits. **Domain context:** 4G EPC (MME/SGW/PGW) vs 5GC (SMF/UPF/AMF) differ in counters and scale—tag `element_type` consistently for apples-to-apples dashboards. **Splunk:** `active_sessions > 800000` is illustrative; align with chassis capacity and license entitlements.",
              "z": "Line chart (CPU and sessions by element), Table (top loaded nodes), Single value (elements in alarm).",
              "kfp": "Core element noise during capacity trials, MME pool balancing, law-enforcement or lawful-intercept work, or PGW node maintenance the packet core NOC published.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC (EPC/5GC performance counters).\n• Ensure the following data sources are available: `index=telecom` `sourcetype=\"core:element\"` (element_id, element_type, cpu_pct, active_sessions, alarm_severity).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap vendor counter names into `cpu_pct` and `active_sessions` in transforms. Exclude planned maintenance windows via lookup on `element_id`. Thresholds vary by platform—tune per NE class and license limits. **Domain context:** 4G EPC (MME/SGW/PGW) vs 5GC (SMF/UPF/AMF) differ in counters and scale—tag `element_type` consistently for apples-to-apples dashboards. **Splunk:** `active_sessions > 800000` is illustrative; align with chassis capacity and license entitlements.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=telecom sourcetype=\"core:element\"\n| eval sev_score=case(lower(alarm_severity) IN (\"critical\",\"1\"),4, lower(alarm_severity) IN (\"major\",\"2\"),3, lower(alarm_severity) IN (\"minor\",\"3\"),2, true(),0)\n| where cpu_pct > 85 OR active_sessions > 800000 OR sev_score >= 3\n| bin _time span=5m\n| stats max(cpu_pct) as max_cpu, max(active_sessions) as max_sess, max(sev_score) as max_alarm by element_id, element_type, _time\n| table _time, element_id, element_type, max_cpu, max_sess, max_alarm\n```\n\nUnderstanding this SPL\n\n**Core Network Element Health (MME, SGW, PGW)** — Core packet gateways and mobility management anchor subscriber sessions; correlating CPU, session load, and alarm states helps capacity and incident teams isolate a degrading blade before mass detach.\n\nDocumented **Data sources**: `index=telecom` `sourcetype=\"core:element\"` (element_id, element_type, cpu_pct, active_sessions, alarm_severity). **App/TA** (typical add-on context): Custom HEC (EPC/5GC performance counters). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: telecom; **sourcetype**: core:element. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=telecom, sourcetype=\"core:element\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sev_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where cpu_pct > 85 OR active_sessions > 800000 OR sev_score >= 3` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by element_id, element_type, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Core Network Element Health (MME, SGW, PGW)**): table _time, element_id, element_type, max_cpu, max_sess, max_alarm\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CPU and sessions by element), Table (top loaded nodes), Single value (elements in alarm).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch mobile core node health. We help you catch capacity or fault trends before mobility fails at scale.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.8.3",
              "n": "Subscriber Provisioning Workflow Completion Rate",
              "c": "high",
              "f": "intermediate",
              "v": "Failed SIM activation or profile pushes strand subscribers on support calls; measuring end-to-end workflow success by step exposes orchestration, HLR/HSS, and BSS handoffs without CDR-volume analytics.",
              "t": "OSS provisioning orchestrator via HEC",
              "d": "`index=telecom` `sourcetype=\"provisioning:workflow\"` (workflow_id, msisdn, step_name, status, duration_ms)",
              "q": "index=telecom sourcetype=\"provisioning:workflow\"\n| eval ok=if(lower(status) IN (\"success\",\"completed\",\"ok\"),1,0)\n| stats count as total, sum(ok) as ok_n by step_name\n| eval success_pct=round(100*ok_n/nullif(total,0),2)\n| where success_pct < 98 OR total < 10\n| sort success_pct\n| table step_name, total, ok_n, success_pct",
              "m": "Emit one event per workflow step completion with consistent `workflow_id` for optional transaction tracing. Schedule hourly; drill into `status` values for failure taxonomy. Keep separate from mediation latency (UC-21.8.5). **Domain context:** Provisioning spans HLR/HSS/UDM, PCRF/PCF, and BSS—name `step_name` so failures map to the right team (radio vs core vs billing). **Splunk:** `total < 10` filters low-volume steps—may hide spikes; use `sum` over 24h for rare steps.",
              "z": "Bar chart (success % by step), Time chart (daily success trend), Table (worst steps).",
              "kfp": "Provisioning drop-offs during cut-throttle jobs, HSS/UDM sync, mass SIM profiles, or vendor CRs the provisioning bridge window tracks to completion.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OSS provisioning orchestrator via HEC.\n• Ensure the following data sources are available: `index=telecom` `sourcetype=\"provisioning:workflow\"` (workflow_id, msisdn, step_name, status, duration_ms).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEmit one event per workflow step completion with consistent `workflow_id` for optional transaction tracing. Schedule hourly; drill into `status` values for failure taxonomy. Keep separate from mediation latency (UC-21.8.5). **Domain context:** Provisioning spans HLR/HSS/UDM, PCRF/PCF, and BSS—name `step_name` so failures map to the right team (radio vs core vs billing). **Splunk:** `total < 10` filters low-volume steps—may hide spikes; use `sum` over 24h for rare steps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=telecom sourcetype=\"provisioning:workflow\"\n| eval ok=if(lower(status) IN (\"success\",\"completed\",\"ok\"),1,0)\n| stats count as total, sum(ok) as ok_n by step_name\n| eval success_pct=round(100*ok_n/nullif(total,0),2)\n| where success_pct < 98 OR total < 10\n| sort success_pct\n| table step_name, total, ok_n, success_pct\n```\n\nUnderstanding this SPL\n\n**Subscriber Provisioning Workflow Completion Rate** — Failed SIM activation or profile pushes strand subscribers on support calls; measuring end-to-end workflow success by step exposes orchestration, HLR/HSS, and BSS handoffs without CDR-volume analytics.\n\nDocumented **Data sources**: `index=telecom` `sourcetype=\"provisioning:workflow\"` (workflow_id, msisdn, step_name, status, duration_ms). **App/TA** (typical add-on context): OSS provisioning orchestrator via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: telecom; **sourcetype**: provisioning:workflow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=telecom, sourcetype=\"provisioning:workflow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by step_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **success_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where success_pct < 98 OR total < 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Subscriber Provisioning Workflow Completion Rate**): table step_name, total, ok_n, success_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (success % by step), Time chart (daily success trend), Table (worst steps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch provisioning work finish on time. We help you catch stuck orders before customers cannot get service.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.8.4",
              "n": "Network Capacity Planning (Spectrum Utilization Trending)",
              "c": "high",
              "f": "advanced",
              "v": "Rising PRB or downlink utilization trends drive sector splits and carrier adds; long-window trending supports RF engineering without duplicating CDR-based traffic analytics (Cat 5.12).",
              "t": "RAN performance management feed via HEC",
              "d": "`index=telecom` `sourcetype=\"spectrum:utilization\"` (site_id, cell_id, dl_prb_util_pct, ul_prb_util_pct, sample_period_sec)",
              "q": "index=telecom sourcetype=\"spectrum:utilization\"\n| eval peak_util=max(dl_prb_util_pct, ul_prb_util_pct)\n| bin _time span=1d\n| stats avg(peak_util) as avg_peak, perc95(peak_util) as p95_peak by site_id, cell_id, _time\n| eventstats median(avg_peak) as med_site by site_id\n| eval growth_ratio=if(med_site>0, round(avg_peak/med_site,2), null)\n| where p95_peak > 85 OR growth_ratio > 1.15\n| table _time, site_id, cell_id, avg_peak, p95_peak, growth_ratio",
              "m": "Aggregate busy-hour samples per operator policy; store `sample_period_sec` for weighting. Join sector metadata (band, azimuth) via lookup for planning reports. Use weekly baselines to smooth day-of-week noise. **Domain context:** LTE PRB utilization drives capacity; LTE vs NR reporting periods differ—align `bin` span with PM file granularity. **Splunk:** `eventstats median(avg_peak) as med_site by site_id` compares cells to site median—use `cell_id` if site-level median masks hot sectors.",
              "z": "Line chart (PRB util trend), Heatmap (cell × week), Bar chart (sectors over 85% p95).",
              "kfp": "Spectrum or utilization trend noise during special events, CBRS trials, or temporary sector locks for stadium deployments the RF planning team intended.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: RAN performance management feed via HEC.\n• Ensure the following data sources are available: `index=telecom` `sourcetype=\"spectrum:utilization\"` (site_id, cell_id, dl_prb_util_pct, ul_prb_util_pct, sample_period_sec).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAggregate busy-hour samples per operator policy; store `sample_period_sec` for weighting. Join sector metadata (band, azimuth) via lookup for planning reports. Use weekly baselines to smooth day-of-week noise. **Domain context:** LTE PRB utilization drives capacity; LTE vs NR reporting periods differ—align `bin` span with PM file granularity. **Splunk:** `eventstats median(avg_peak) as med_site by site_id` compares cells to site median—use `cell_id` if site-level median masks hot sectors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=telecom sourcetype=\"spectrum:utilization\"\n| eval peak_util=max(dl_prb_util_pct, ul_prb_util_pct)\n| bin _time span=1d\n| stats avg(peak_util) as avg_peak, perc95(peak_util) as p95_peak by site_id, cell_id, _time\n| eventstats median(avg_peak) as med_site by site_id\n| eval growth_ratio=if(med_site>0, round(avg_peak/med_site,2), null)\n| where p95_peak > 85 OR growth_ratio > 1.15\n| table _time, site_id, cell_id, avg_peak, p95_peak, growth_ratio\n```\n\nUnderstanding this SPL\n\n**Network Capacity Planning (Spectrum Utilization Trending)** — Rising PRB or downlink utilization trends drive sector splits and carrier adds; long-window trending supports RF engineering without duplicating CDR-based traffic analytics (Cat 5.12).\n\nDocumented **Data sources**: `index=telecom` `sourcetype=\"spectrum:utilization\"` (site_id, cell_id, dl_prb_util_pct, ul_prb_util_pct, sample_period_sec). **App/TA** (typical add-on context): RAN performance management feed via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: telecom; **sourcetype**: spectrum:utilization. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=telecom, sourcetype=\"spectrum:utilization\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **peak_util** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by site_id, cell_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **growth_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where p95_peak > 85 OR growth_ratio > 1.15` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Network Capacity Planning (Spectrum Utilization Trending)**): table _time, site_id, cell_id, avg_peak, p95_peak, growth_ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (PRB util trend), Heatmap (cell × week), Bar chart (sectors over 85% p95).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch spectrum and capacity trends. We help you plan growth before cells saturate.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.8.5",
              "n": "Service Activation and Billing Mediation Latency",
              "c": "high",
              "f": "intermediate",
              "v": "Mediation pipelines must deliver rated records to billing within windows; end-to-end latency and backlog depth prevent revenue leakage and rating disputes distinct from raw CDR analytics use cases.",
              "t": "Mediation platform logs via HEC",
              "d": "`index=telecom` `sourcetype=\"mediation:event\"` (batch_id, records_in, records_out, latency_ms, queue_depth)",
              "q": "index=telecom sourcetype=\"mediation:event\"\n| bin _time span=5m\n| stats avg(latency_ms) as avg_lat, perc95(latency_ms) as p95_lat, max(queue_depth) as max_q, sum(records_in) as vol by _time\n| where avg_lat > 120000 OR p95_lat > 300000 OR max_q > 500000\n| eval avg_min=round(avg_lat/60000,2), p95_min=round(p95_lat/60000,2)\n| table _time, avg_min, p95_min, max_q, vol",
              "m": "Normalize timestamps to when batches complete, not file arrival. Alert on sustained queue growth with derivative search on `queue_depth`. Coordinate thresholds with billing close calendar. **Domain context:** Mediation/rating sits between network CDRs and billing—latency spikes near invoice close or rate plan changes are common; separate batch vs streaming pipelines in tagging. **Splunk:** Thresholds `120000`/`300000` ms are examples—tie to billing window SLAs; `avg_lat`/`p95_lat` on 5m buckets can be noisy for hourly batches.",
              "z": "Time chart (latency and queue depth), Area chart (records processed), Single value (backlog breach).",
              "kfp": "Latency noise during mediation batching, tax or rating rule pushes, Diameter peer certificate rolls, or billing cycle catch-up the BSS operations desk owns.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Mediation platform logs via HEC.\n• Ensure the following data sources are available: `index=telecom` `sourcetype=\"mediation:event\"` (batch_id, records_in, records_out, latency_ms, queue_depth).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize timestamps to when batches complete, not file arrival. Alert on sustained queue growth with derivative search on `queue_depth`. Coordinate thresholds with billing close calendar. **Domain context:** Mediation/rating sits between network CDRs and billing—latency spikes near invoice close or rate plan changes are common; separate batch vs streaming pipelines in tagging. **Splunk:** Thresholds `120000`/`300000` ms are examples—tie to billing window SLAs; `avg_lat`/`p95_lat` on 5m buckets …\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=telecom sourcetype=\"mediation:event\"\n| bin _time span=5m\n| stats avg(latency_ms) as avg_lat, perc95(latency_ms) as p95_lat, max(queue_depth) as max_q, sum(records_in) as vol by _time\n| where avg_lat > 120000 OR p95_lat > 300000 OR max_q > 500000\n| eval avg_min=round(avg_lat/60000,2), p95_min=round(p95_lat/60000,2)\n| table _time, avg_min, p95_min, max_q, vol\n```\n\nUnderstanding this SPL\n\n**Service Activation and Billing Mediation Latency** — Mediation pipelines must deliver rated records to billing within windows; end-to-end latency and backlog depth prevent revenue leakage and rating disputes distinct from raw CDR analytics use cases.\n\nDocumented **Data sources**: `index=telecom` `sourcetype=\"mediation:event\"` (batch_id, records_in, records_out, latency_ms, queue_depth). **App/TA** (typical add-on context): Mediation platform logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: telecom; **sourcetype**: mediation:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=telecom, sourcetype=\"mediation:event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_lat > 120000 OR p95_lat > 300000 OR max_q > 500000` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **avg_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Service Activation and Billing Mediation Latency**): table _time, avg_min, p95_min, max_q, vol\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (latency and queue depth), Area chart (records processed), Single value (backlog breach).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch service activation and billing delays. We help you catch handoff breaks before revenue and NPS take a hit.",
              "mtype": [
                "Performance"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.8.6",
              "n": "OSS/BSS System Integration Health",
              "c": "high",
              "f": "intermediate",
              "v": "API and message bus integrations between CRM, inventory, and activation systems fail silently under load; HTTP error rates and timeout counts isolate brittle adapters before orders stall.",
              "t": "API gateway / ESB logs via HEC",
              "d": "`index=telecom` `sourcetype=\"ossbss:integration\"` (interface_id, http_status, latency_ms, error_code)",
              "q": "index=telecom sourcetype=\"ossbss:integration\"\n| eval fail=if(http_status>=500 OR isnotnull(error_code),1,0)\n| bin _time span=5m\n| stats count as calls, sum(fail) as fails, avg(latency_ms) as avg_ms by interface_id, _time\n| eval fail_pct=round(100*fails/nullif(calls,0),2)\n| where fail_pct > 2 OR avg_ms > 3000\n| table _time, interface_id, calls, fails, fail_pct, avg_ms",
              "m": "Tag interfaces in the gateway; exclude health-check paths. Add optional `partner_system` field for drilldown. Pair with synthetic probes where logs alone miss silent drops. **Domain context:** TM Forum Open APIs and eTOM-style process boundaries (CRM → OSS) help name `interface_id` for runbooks—timeouts often indicate partner capacity, not your gateway. **Splunk:** `error_code` may be sparse—use `http_status>=500 OR isnotnull(error_code)` but add `http_status=0` or connection reset patterns if logged.",
              "z": "Time chart (fail % and latency), Bar chart (worst interfaces), Table (error_code top values).",
              "kfp": "Integration noise during SOA/ESB certificate changes, new product catalog syncs, or partner API rate limits the integration factory agreed to throttle for.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: API gateway / ESB logs via HEC.\n• Ensure the following data sources are available: `index=telecom` `sourcetype=\"ossbss:integration\"` (interface_id, http_status, latency_ms, error_code).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nTag interfaces in the gateway; exclude health-check paths. Add optional `partner_system` field for drilldown. Pair with synthetic probes where logs alone miss silent drops. **Domain context:** TM Forum Open APIs and eTOM-style process boundaries (CRM → OSS) help name `interface_id` for runbooks—timeouts often indicate partner capacity, not your gateway. **Splunk:** `error_code` may be sparse—use `http_status>=500 OR isnotnull(error_code)` but add `http_status=0` or connection reset patterns if l…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=telecom sourcetype=\"ossbss:integration\"\n| eval fail=if(http_status>=500 OR isnotnull(error_code),1,0)\n| bin _time span=5m\n| stats count as calls, sum(fail) as fails, avg(latency_ms) as avg_ms by interface_id, _time\n| eval fail_pct=round(100*fails/nullif(calls,0),2)\n| where fail_pct > 2 OR avg_ms > 3000\n| table _time, interface_id, calls, fails, fail_pct, avg_ms\n```\n\nUnderstanding this SPL\n\n**OSS/BSS System Integration Health** — API and message bus integrations between CRM, inventory, and activation systems fail silently under load; HTTP error rates and timeout counts isolate brittle adapters before orders stall.\n\nDocumented **Data sources**: `index=telecom` `sourcetype=\"ossbss:integration\"` (interface_id, http_status, latency_ms, error_code). **App/TA** (typical add-on context): API gateway / ESB logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: telecom; **sourcetype**: ossbss:integration. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=telecom, sourcetype=\"ossbss:integration\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by interface_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_pct > 2 OR avg_ms > 3000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OSS/BSS System Integration Health**): table _time, interface_id, calls, fails, fail_pct, avg_ms\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fail % and latency), Bar chart (worst interfaces), Table (error_code top values).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch CRM-to-OSS interfaces. We help you catch timeouts and errors before partner and retail flows fail.",
              "mtype": [
                "Availability",
                "Fault"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.8.7",
              "n": "Customer Trouble Ticket Mean Time to Resolution",
              "c": "medium",
              "f": "intermediate",
              "v": "MTTR for access and core tickets reflects operational maturity; trending resolution intervals by category and region highlights training gaps and vendor SLA performance.",
              "t": "ITSM / trouble ticket export via HEC",
              "d": "`index=telecom` `sourcetype=\"troubleticket:event\"` (ticket_id, category, region, created_epoch, resolved_epoch, status)",
              "q": "index=telecom sourcetype=\"troubleticket:event\" lower(status)=\"resolved\"\n| eval mtr_h=(resolved_epoch-created_epoch)/3600\n| where isnotnull(mtr_h) AND mtr_h >= 0\n| bin _time span=1d\n| stats avg(mtr_h) as avg_mtr, perc90(mtr_h) as p90_mtr, count as tickets by category, region, _time\n| where avg_mtr > 24 OR p90_mtr > 72\n| eval avg_mtr_r=round(avg_mtr,2), p90_mtr_r=round(p90_mtr,2)\n| table _time, category, region, tickets, avg_mtr_r, p90_mtr_r",
              "m": "Ensure `created_epoch`/`resolved_epoch` are UTC. Exclude cancelled tickets in a separate clause if needed. Refresh from ITSM nightly or near-real-time for NOC dashboards. **Domain context:** MTTR definitions vary (clock vs work hours); align with ITIL reporting and vendor SLA clocks before publishing leadership SLAs. **Splunk:** Resolved tickets may arrive late—use `_time` from resolution event or `resolved_epoch` consistently for bucketing.",
              "z": "Box plot (MTTR distribution), Line chart (trend by region), Bar chart (category comparison).",
              "kfp": "MTTR spikes on mass outage days, NOC bridge calls, or vendor ticket queues the customer-care exec summary already explains to subscribers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ITSM / trouble ticket export via HEC.\n• Ensure the following data sources are available: `index=telecom` `sourcetype=\"troubleticket:event\"` (ticket_id, category, region, created_epoch, resolved_epoch, status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nEnsure `created_epoch`/`resolved_epoch` are UTC. Exclude cancelled tickets in a separate clause if needed. Refresh from ITSM nightly or near-real-time for NOC dashboards. **Domain context:** MTTR definitions vary (clock vs work hours); align with ITIL reporting and vendor SLA clocks before publishing leadership SLAs. **Splunk:** Resolved tickets may arrive late—use `_time` from resolution event or `resolved_epoch` consistently for bucketing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=telecom sourcetype=\"troubleticket:event\" lower(status)=\"resolved\"\n| eval mtr_h=(resolved_epoch-created_epoch)/3600\n| where isnotnull(mtr_h) AND mtr_h >= 0\n| bin _time span=1d\n| stats avg(mtr_h) as avg_mtr, perc90(mtr_h) as p90_mtr, count as tickets by category, region, _time\n| where avg_mtr > 24 OR p90_mtr > 72\n| eval avg_mtr_r=round(avg_mtr,2), p90_mtr_r=round(p90_mtr,2)\n| table _time, category, region, tickets, avg_mtr_r, p90_mtr_r\n```\n\nUnderstanding this SPL\n\n**Customer Trouble Ticket Mean Time to Resolution** — MTTR for access and core tickets reflects operational maturity; trending resolution intervals by category and region highlights training gaps and vendor SLA performance.\n\nDocumented **Data sources**: `index=telecom` `sourcetype=\"troubleticket:event\"` (ticket_id, category, region, created_epoch, resolved_epoch, status). **App/TA** (typical add-on context): ITSM / trouble ticket export via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: telecom; **sourcetype**: troubleticket:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=telecom, sourcetype=\"troubleticket:event\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mtr_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(mtr_h) AND mtr_h >= 0` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by category, region, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_mtr > 24 OR p90_mtr > 72` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **avg_mtr_r** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Customer Trouble Ticket Mean Time to Resolution**): table _time, category, region, tickets, avg_mtr_r, p90_mtr_r\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Box plot (MTTR distribution), Line chart (trend by region), Bar chart (category comparison).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch trouble ticket time to fix. We help you catch backlogs before SLAs and customers boil over.",
              "mtype": [
                "Performance"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.8.8",
              "n": "5G NR gNodeB Performance Monitoring",
              "c": "high",
              "f": "advanced",
              "v": "gNodeB throughput, latency, and drop metrics expose RF and transport issues before subscriber experience scores fall; focuses on RAN KPIs rather than core signaling traces (Cat 5.11).",
              "t": "5G DU/CU performance export via HEC",
              "d": "`index=telecom` `sourcetype=\"gnodeb:metrics\"` (gnb_id, cell_id, dl_throughput_mbps, ul_throughput_mbps, rlc_drop_pct, latency_ms)",
              "q": "index=telecom sourcetype=\"gnodeb:metrics\"\n| bin _time span=15m\n| stats avg(dl_throughput_mbps) as avg_dl, avg(ul_throughput_mbps) as avg_ul, avg(rlc_drop_pct) as avg_drop, avg(latency_ms) as avg_lat by gnb_id, cell_id, _time\n| eventstats median(avg_dl) as med_dl by cell_id\n| eval thr_ratio=if(med_dl>0, round(avg_dl/med_dl,2), null)\n| where avg_drop > 1 OR avg_lat > 40 OR thr_ratio < 0.7\n| table _time, gnb_id, cell_id, avg_dl, avg_ul, avg_drop, avg_lat, thr_ratio",
              "m": "Align PM file periods (15m/5m) and handle DST in `_time`. Use cell-level baselines for `thr_ratio`. Optional: join transport path ID if backhaul congestion is suspected. **Domain context:** 4G/5G KPIs (RLC drops, latency) are vendor-specific—map to 3GPP-style counters where possible for multi-vendor comparisons. **Splunk:** `thr_ratio < 0.7` compares to `med_dl` from `eventstats` on same bucket—ensure enough samples per `cell_id` or median is unstable.",
              "z": "Time chart (throughput and drops), Heatmap (cell × hour), Table (worst cells).",
              "kfp": "gNodeB noise during beam sweep trials, C-RAN re-homing, MIMO parameter tests, or power-saving features that the RAN optimization squad scheduled.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: 5G DU/CU performance export via HEC.\n• Ensure the following data sources are available: `index=telecom` `sourcetype=\"gnodeb:metrics\"` (gnb_id, cell_id, dl_throughput_mbps, ul_throughput_mbps, rlc_drop_pct, latency_ms).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign PM file periods (15m/5m) and handle DST in `_time`. Use cell-level baselines for `thr_ratio`. Optional: join transport path ID if backhaul congestion is suspected. **Domain context:** 4G/5G KPIs (RLC drops, latency) are vendor-specific—map to 3GPP-style counters where possible for multi-vendor comparisons. **Splunk:** `thr_ratio < 0.7` compares to `med_dl` from `eventstats` on same bucket—ensure enough samples per `cell_id` or median is unstable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=telecom sourcetype=\"gnodeb:metrics\"\n| bin _time span=15m\n| stats avg(dl_throughput_mbps) as avg_dl, avg(ul_throughput_mbps) as avg_ul, avg(rlc_drop_pct) as avg_drop, avg(latency_ms) as avg_lat by gnb_id, cell_id, _time\n| eventstats median(avg_dl) as med_dl by cell_id\n| eval thr_ratio=if(med_dl>0, round(avg_dl/med_dl,2), null)\n| where avg_drop > 1 OR avg_lat > 40 OR thr_ratio < 0.7\n| table _time, gnb_id, cell_id, avg_dl, avg_ul, avg_drop, avg_lat, thr_ratio\n```\n\nUnderstanding this SPL\n\n**5G NR gNodeB Performance Monitoring** — gNodeB throughput, latency, and drop metrics expose RF and transport issues before subscriber experience scores fall; focuses on RAN KPIs rather than core signaling traces (Cat 5.11).\n\nDocumented **Data sources**: `index=telecom` `sourcetype=\"gnodeb:metrics\"` (gnb_id, cell_id, dl_throughput_mbps, ul_throughput_mbps, rlc_drop_pct, latency_ms). **App/TA** (typical add-on context): 5G DU/CU performance export via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: telecom; **sourcetype**: gnodeb:metrics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=telecom, sourcetype=\"gnodeb:metrics\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by gnb_id, cell_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by cell_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **thr_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avg_drop > 1 OR avg_lat > 40 OR thr_ratio < 0.7` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **5G NR gNodeB Performance Monitoring**): table _time, gnb_id, cell_id, avg_dl, avg_ul, avg_drop, avg_lat, thr_ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (throughput and drops), Heatmap (cell × hour), Table (worst cells).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch 5G cell throughput and drop metrics. We help you catch a bad site before the neighborhood notices.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.8.9",
              "n": "Network Slice Resource Utilization",
              "c": "high",
              "f": "advanced",
              "v": "Slices carry differentiated QoS commitments; monitoring allocated vs used bandwidth and session counts per slice supports enterprise SLAs and slice redesign.",
              "t": "5GC NSSF/NSMF metrics via HEC",
              "d": "`index=telecom` `sourcetype=\"slice:utilization\"` (slice_id, dnn, committed_mbps, used_mbps, active_sessions)",
              "q": "index=telecom sourcetype=\"slice:utilization\"\n| eval util_pct=if(committed_mbps>0, round(100*used_mbps/committed_mbps,2), null)\n| bin _time span=5m\n| stats avg(util_pct) as avg_util, max(active_sessions) as peak_sess by slice_id, dnn, _time\n| where avg_util > 90 OR peak_sess > 50000\n| table _time, slice_id, dnn, avg_util, peak_sess",
              "m": "Normalize `committed_mbps` from slice templates; refresh when contracts change via KV. Alert enterprise account teams when sustained util > 90%. Distinct from generic core health (UC-21.8.2). **Domain context:** Network slicing (5G) binds QoS to S-NSSAI/DNN—`committed_mbps` should reflect the product template, not link capacity. **Splunk:** `used_mbps` and `committed_mbps` must share units (b/s vs kbps); validate extraction from NSSF/NSMF exports.",
              "z": "Line chart (util % by slice), Stacked area (sessions), Table (slices near exhaustion).",
              "kfp": "Slice usage noise when tenants burst into shared pools during marketing pushes, 5G fixed-wireless surges, or test UEs that the slice owner provisioned for trials.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: 5GC NSSF/NSMF metrics via HEC.\n• Ensure the following data sources are available: `index=telecom` `sourcetype=\"slice:utilization\"` (slice_id, dnn, committed_mbps, used_mbps, active_sessions).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize `committed_mbps` from slice templates; refresh when contracts change via KV. Alert enterprise account teams when sustained util > 90%. Distinct from generic core health (UC-21.8.2). **Domain context:** Network slicing (5G) binds QoS to S-NSSAI/DNN—`committed_mbps` should reflect the product template, not link capacity. **Splunk:** `used_mbps` and `committed_mbps` must share units (b/s vs kbps); validate extraction from NSSF/NSMF exports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=telecom sourcetype=\"slice:utilization\"\n| eval util_pct=if(committed_mbps>0, round(100*used_mbps/committed_mbps,2), null)\n| bin _time span=5m\n| stats avg(util_pct) as avg_util, max(active_sessions) as peak_sess by slice_id, dnn, _time\n| where avg_util > 90 OR peak_sess > 50000\n| table _time, slice_id, dnn, avg_util, peak_sess\n```\n\nUnderstanding this SPL\n\n**Network Slice Resource Utilization** — Slices carry differentiated QoS commitments; monitoring allocated vs used bandwidth and session counts per slice supports enterprise SLAs and slice redesign.\n\nDocumented **Data sources**: `index=telecom` `sourcetype=\"slice:utilization\"` (slice_id, dnn, committed_mbps, used_mbps, active_sessions). **App/TA** (typical add-on context): 5GC NSSF/NSMF metrics via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: telecom; **sourcetype**: slice:utilization. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=telecom, sourcetype=\"slice:utilization\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **util_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by slice_id, dnn, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_util > 90 OR peak_sess > 50000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Network Slice Resource Utilization**): table _time, slice_id, dnn, avg_util, peak_sess\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (util % by slice), Stacked area (sessions), Table (slices near exhaustion).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch slice use against committed rates. We help you catch a hot enterprise slice before other tenants suffer.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.8.10",
              "n": "Content Delivery Network Cache Hit Ratio",
              "c": "medium",
              "f": "intermediate",
              "v": "Low cache hit ratios increase origin load and subscriber latency; trending hit ratio by POP and content type guides cache sizing and TTL policy without analyzing subscriber CDRs.",
              "t": "CDN raw logs / analytics API via HEC",
              "d": "`index=telecom` `sourcetype=\"cdn:performance\"` (pop_id, cache_status, bytes_served, request_count)",
              "q": "index=telecom sourcetype=\"cdn:performance\"\n| eval hit=if(lower(cache_status) IN (\"hit\",\"tcp_hit\",\"mem_hit\"),1,0)\n| bin _time span=1h\n| stats sum(request_count) as reqs, sum(eval(if(hit=1,request_count,0))) as hit_reqs by pop_id, _time\n| eval hit_ratio=round(100*hit_reqs/nullif(reqs,0),2)\n| where hit_ratio < 85 AND reqs > 1000\n| table _time, pop_id, reqs, hit_reqs, hit_ratio",
              "m": "Map vendor cache hit tokens to `cache_status` in props. Exclude purge and error responses from denominator if tagged. Compare POPs to identify misconfigured origins. **Domain context:** CDN hit ratio varies by content type (live vs static) and TTL policy—compare POPs with similar traffic mix. **Splunk:** Extend `hit`/`cache_status` mapping for Fastly/Akamai/CloudFront variants (`MISS`, `STALE`, etc.); `reqs > 1000` avoids noise on low-traffic POPs.",
              "z": "Line chart (hit ratio by POP), Bar chart (worst POPs), Map (if geo coordinates available).",
              "kfp": "Cache hit ratio dips during purges, new origin deployments, DDoS or flash crowds, or TLS session ticket rotation the CDN TAM confirmed as an expected pattern.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CDN raw logs / analytics API via HEC.\n• Ensure the following data sources are available: `index=telecom` `sourcetype=\"cdn:performance\"` (pop_id, cache_status, bytes_served, request_count).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap vendor cache hit tokens to `cache_status` in props. Exclude purge and error responses from denominator if tagged. Compare POPs to identify misconfigured origins. **Domain context:** CDN hit ratio varies by content type (live vs static) and TTL policy—compare POPs with similar traffic mix. **Splunk:** Extend `hit`/`cache_status` mapping for Fastly/Akamai/CloudFront variants (`MISS`, `STALE`, etc.); `reqs > 1000` avoids noise on low-traffic POPs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=telecom sourcetype=\"cdn:performance\"\n| eval hit=if(lower(cache_status) IN (\"hit\",\"tcp_hit\",\"mem_hit\"),1,0)\n| bin _time span=1h\n| stats sum(request_count) as reqs, sum(eval(if(hit=1,request_count,0))) as hit_reqs by pop_id, _time\n| eval hit_ratio=round(100*hit_reqs/nullif(reqs,0),2)\n| where hit_ratio < 85 AND reqs > 1000\n| table _time, pop_id, reqs, hit_reqs, hit_ratio\n```\n\nUnderstanding this SPL\n\n**Content Delivery Network Cache Hit Ratio** — Low cache hit ratios increase origin load and subscriber latency; trending hit ratio by POP and content type guides cache sizing and TTL policy without analyzing subscriber CDRs.\n\nDocumented **Data sources**: `index=telecom` `sourcetype=\"cdn:performance\"` (pop_id, cache_status, bytes_served, request_count). **App/TA** (typical add-on context): CDN raw logs / analytics API via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: telecom; **sourcetype**: cdn:performance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=telecom, sourcetype=\"cdn:performance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by pop_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hit_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hit_ratio < 85 AND reqs > 1000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Content Delivery Network Cache Hit Ratio**): table _time, pop_id, reqs, hit_reqs, hit_ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (hit ratio by POP), Bar chart (worst POPs), Map (if geo coordinates available).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch CDN hit ratios and origin load. We help you catch bad caching before video stalls spike.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "ind": "Telecommunications",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 10,
            "none": 0
          }
        },
        {
          "i": "21.9",
          "n": "Water and Wastewater Utilities",
          "u": [
            {
              "i": "21.9.1",
              "n": "Treatment Plant Process Parameter Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "pH, turbidity, and chlorine residual excursions threaten regulatory permits and public health; continuous trending flags filter or chemical feed issues before grab samples fail.",
              "t": "Splunk Edge Hub, SCADA HEC",
              "d": "`index=water` `sourcetype=\"treatment:process\"` (plant_id, basin_id, ph, turbidity_ntu, chlorine_mg_l, ph_min, ph_max, turbidity_max, chlorine_min)",
              "q": "index=water sourcetype=\"treatment:process\"\n| eval ph_breach=if(ph < ph_min OR ph > ph_max, 1, 0)\n| eval turb_breach=if(turbidity_ntu > turbidity_max, 1, 0)\n| eval chlorine_breach=if(chlorine_mg_l < chlorine_min, 1, 0)\n| where ph_breach=1 OR turb_breach=1 OR chlorine_breach=1\n| stats latest(ph) as ph, latest(turbidity_ntu) as turb, latest(chlorine_mg_l) as cl2, max(ph_breach) as ph_br, max(turb_breach) as tb_br, max(chlorine_breach) as cl_br by plant_id, basin_id\n| table plant_id, basin_id, ph, turb, cl2, ph_br, tb_br, cl_br",
              "m": "Ingest DCS/PLC tags at 1–5 minute intervals; align limits per permit in fields or lookup. Route alerts to plant operators; retain Splunk as supervisory visibility alongside SCADA alarms. **Domain context:** Surface water treatment limits (pH, turbidity, disinfectant residual) are jurisdiction-specific (e.g., U.S. EPA/State primacy rules)—mirror permit limits in lookups, not hardcoded SPL. **Splunk:** `latest()` in `stats` favors last sample in window—use `max(breach)` or time-weighted logic if multiple basins report per event.",
              "z": "Time chart (pH, turbidity, chlorine), Gauge (distance to limit), Table (active breaches).",
              "kfp": "Process-limit noise on jar tests, coagulant or chemical pump calibration, or SCADA time-base drift the plant DCS team verified against the lab bench sheet.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub, SCADA HEC.\n• Ensure the following data sources are available: `index=water` `sourcetype=\"treatment:process\"` (plant_id, basin_id, ph, turbidity_ntu, chlorine_mg_l, ph_min, ph_max, turbidity_max, chlorine_min).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest DCS/PLC tags at 1–5 minute intervals; align limits per permit in fields or lookup. Route alerts to plant operators; retain Splunk as supervisory visibility alongside SCADA alarms. **Domain context:** Surface water treatment limits (pH, turbidity, disinfectant residual) are jurisdiction-specific (e.g., U.S. EPA/State primacy rules)—mirror permit limits in lookups, not hardcoded SPL. **Splunk:** `latest()` in `stats` favors last sample in window—use `max(breach)` or time-weighted logic if m…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=water sourcetype=\"treatment:process\"\n| eval ph_breach=if(ph < ph_min OR ph > ph_max, 1, 0)\n| eval turb_breach=if(turbidity_ntu > turbidity_max, 1, 0)\n| eval chlorine_breach=if(chlorine_mg_l < chlorine_min, 1, 0)\n| where ph_breach=1 OR turb_breach=1 OR chlorine_breach=1\n| stats latest(ph) as ph, latest(turbidity_ntu) as turb, latest(chlorine_mg_l) as cl2, max(ph_breach) as ph_br, max(turb_breach) as tb_br, max(chlorine_breach) as cl_br by plant_id, basin_id\n| table plant_id, basin_id, ph, turb, cl2, ph_br, tb_br, cl_br\n```\n\nUnderstanding this SPL\n\n**Treatment Plant Process Parameter Monitoring** — pH, turbidity, and chlorine residual excursions threaten regulatory permits and public health; continuous trending flags filter or chemical feed issues before grab samples fail.\n\nDocumented **Data sources**: `index=water` `sourcetype=\"treatment:process\"` (plant_id, basin_id, ph, turbidity_ntu, chlorine_mg_l, ph_min, ph_max, turbidity_max, chlorine_min). **App/TA** (typical add-on context): Splunk Edge Hub, SCADA HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: water; **sourcetype**: treatment:process. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=water, sourcetype=\"treatment:process\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ph_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **turb_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **chlorine_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ph_breach=1 OR turb_breach=1 OR chlorine_breach=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by plant_id, basin_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Treatment Plant Process Parameter Monitoring**): table plant_id, basin_id, ph, turb, cl2, ph_br, tb_br, cl_br\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (pH, turbidity, chlorine), Gauge (distance to limit), Table (active breaches).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch treatment plant pH, turbidity, and chlorine. We help you catch excursions before permit and public trust slip.",
              "mtype": [
                "Compliance",
                "Fault"
              ],
              "ind": "Water and Wastewater",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.9.2",
              "n": "Pump Station Run Time and Efficiency Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Excessive run hours or kWh per volume pumped signals impeller wear, valve issues, or wet-well setpoint drift; trending supports maintenance scheduling and energy cost control.",
              "t": "Pump station PLC via Edge Hub",
              "d": "`index=water` `sourcetype=\"pump:station\"` (station_id, pump_id, run_state, flow_m3h, power_kw, runtime_hr_day)",
              "q": "index=water sourcetype=\"pump:station\" run_state=1\n| eval kwh_per_m3=if(flow_m3h>0 AND power_kw>0, power_kw/flow_m3h, null)\n| bin _time span=1d\n| stats sum(runtime_hr_day) as run_hrs, avg(kwh_per_m3) as avg_intensity by station_id, pump_id, _time\n| eventstats median(avg_intensity) as med_int by station_id, pump_id\n| eval ratio=if(med_int>0, round(avg_intensity/med_int,2), null)\n| where run_hrs > 20 OR ratio > 1.2\n| table _time, station_id, pump_id, run_hrs, avg_intensity, ratio",
              "m": "Normalize `run_state` (1=on). Fill gaps in flow with null checks to avoid divide-by-zero. Baseline `kwh_per_m3` seasonally for irrigation-influenced stations. **Domain context:** Specific energy (kWh/m³) is a common efficiency metric for water/wastewater pumping—compare like pump curves and wet-well levels. **Splunk:** `eventstats median(avg_intensity) as med_int by station_id, pump_id` may be thin on single pump—use `streamstats` or weekly baseline lookup.",
              "z": "Line chart (run hours and intensity), Bar chart (stations over baseline), Table (worst pumps).",
              "kfp": "Efficiency or runtime noise on wet-weather inflows, VFD re-tunes, or pump swaps where the run-time ratio still meets the weekly ops review.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Pump station PLC via Edge Hub.\n• Ensure the following data sources are available: `index=water` `sourcetype=\"pump:station\"` (station_id, pump_id, run_state, flow_m3h, power_kw, runtime_hr_day).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize `run_state` (1=on). Fill gaps in flow with null checks to avoid divide-by-zero. Baseline `kwh_per_m3` seasonally for irrigation-influenced stations. **Domain context:** Specific energy (kWh/m³) is a common efficiency metric for water/wastewater pumping—compare like pump curves and wet-well levels. **Splunk:** `eventstats median(avg_intensity) as med_int by station_id, pump_id` may be thin on single pump—use `streamstats` or weekly baseline lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=water sourcetype=\"pump:station\" run_state=1\n| eval kwh_per_m3=if(flow_m3h>0 AND power_kw>0, power_kw/flow_m3h, null)\n| bin _time span=1d\n| stats sum(runtime_hr_day) as run_hrs, avg(kwh_per_m3) as avg_intensity by station_id, pump_id, _time\n| eventstats median(avg_intensity) as med_int by station_id, pump_id\n| eval ratio=if(med_int>0, round(avg_intensity/med_int,2), null)\n| where run_hrs > 20 OR ratio > 1.2\n| table _time, station_id, pump_id, run_hrs, avg_intensity, ratio\n```\n\nUnderstanding this SPL\n\n**Pump Station Run Time and Efficiency Trending** — Excessive run hours or kWh per volume pumped signals impeller wear, valve issues, or wet-well setpoint drift; trending supports maintenance scheduling and energy cost control.\n\nDocumented **Data sources**: `index=water` `sourcetype=\"pump:station\"` (station_id, pump_id, run_state, flow_m3h, power_kw, runtime_hr_day). **App/TA** (typical add-on context): Pump station PLC via Edge Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: water; **sourcetype**: pump:station. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=water, sourcetype=\"pump:station\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **kwh_per_m3** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by station_id, pump_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by station_id, pump_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where run_hrs > 20 OR ratio > 1.2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Pump Station Run Time and Efficiency Trending**): table _time, station_id, pump_id, run_hrs, avg_intensity, ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (run hours and intensity), Bar chart (stations over baseline), Table (worst pumps).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch pump run hours and power per flow. We help you catch efficiency loss before electric bills and failures rise.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "ind": "Water and Wastewater",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.9.3",
              "n": "Distribution System Pressure Zone Monitoring",
              "c": "critical",
              "f": "intermediate",
              "v": "Low pressure risks contamination and service complaints; high pressure stresses mains. Zone-level analytics isolate PRV faults and demand spikes faster than single-point alarms.",
              "t": "SCADA pressure telemetry via HEC",
              "d": "`index=water` `sourcetype=\"pressure:zone\"` (zone_id, pressure_psi, min_target_psi, max_target_psi, sensor_id)",
              "q": "index=water sourcetype=\"pressure:zone\"\n| eval breach=if(pressure_psi < min_target_psi OR pressure_psi > max_target_psi, 1, 0)\n| bin _time span=5m\n| stats min(pressure_psi) as min_p, max(pressure_psi) as max_p, max(breach) as any_breach by zone_id, _time\n| where any_breach=1\n| table _time, zone_id, min_p, max_p, any_breach",
              "m": "Multiple sensors per zone—use `stats` to aggregate. Join PRV asset IDs via lookup for work orders. Pair with demand forecasts during fire flow tests. **Domain context:** Pressure zones must maintain minimum service pressure per utility design standards—breaches can indicate main breaks, PRV faults, or pump/VFD issues. **Splunk:** `min(pressure_psi)` / `max` over 5m hides transient dips—add `max(breach)` or shorter `bin` for critical zones.",
              "z": "Time chart (pressure by zone), Single value (zones in breach), Map (zone centroids if GIS joined).",
              "kfp": "Pressure blips from fire flow, break isolation, PRV or pump maintenance, and overnight demand changes the hydraulic model already expects seasonally.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SCADA pressure telemetry via HEC.\n• Ensure the following data sources are available: `index=water` `sourcetype=\"pressure:zone\"` (zone_id, pressure_psi, min_target_psi, max_target_psi, sensor_id).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMultiple sensors per zone—use `stats` to aggregate. Join PRV asset IDs via lookup for work orders. Pair with demand forecasts during fire flow tests. **Domain context:** Pressure zones must maintain minimum service pressure per utility design standards—breaches can indicate main breaks, PRV faults, or pump/VFD issues. **Splunk:** `min(pressure_psi)` / `max` over 5m hides transient dips—add `max(breach)` or shorter `bin` for critical zones.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=water sourcetype=\"pressure:zone\"\n| eval breach=if(pressure_psi < min_target_psi OR pressure_psi > max_target_psi, 1, 0)\n| bin _time span=5m\n| stats min(pressure_psi) as min_p, max(pressure_psi) as max_p, max(breach) as any_breach by zone_id, _time\n| where any_breach=1\n| table _time, zone_id, min_p, max_p, any_breach\n```\n\nUnderstanding this SPL\n\n**Distribution System Pressure Zone Monitoring** — Low pressure risks contamination and service complaints; high pressure stresses mains. Zone-level analytics isolate PRV faults and demand spikes faster than single-point alarms.\n\nDocumented **Data sources**: `index=water` `sourcetype=\"pressure:zone\"` (zone_id, pressure_psi, min_target_psi, max_target_psi, sensor_id). **App/TA** (typical add-on context): SCADA pressure telemetry via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: water; **sourcetype**: pressure:zone. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=water, sourcetype=\"pressure:zone\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by zone_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where any_breach=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Distribution System Pressure Zone Monitoring**): table _time, zone_id, min_p, max_p, any_breach\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (pressure by zone), Single value (zones in breach), Map (zone centroids if GIS joined).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch pressure in each distribution zone. We help you catch low or high pressure before main breaks or brownouts of service.",
              "mtype": [
                "Availability",
                "Performance"
              ],
              "ind": "Water and Wastewater",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.9.4",
              "n": "Sewer Overflow Early Warning",
              "c": "critical",
              "f": "advanced",
              "v": "Rising wet-well levels during rainfall indicate capacity or blockage risk before SSO events; correlating level rise rate with rain intensity prioritizes inspections.",
              "t": "Sewer SCADA + weather feed via HEC",
              "d": "`index=water` `sourcetype=\"sewer:level\"` (structure_id, level_ft, high_alarm_ft, rainfall_in_hr)",
              "q": "index=water sourcetype=\"sewer:level\"\n| sort 0 structure_id, _time\n| streamstats window=2 global=f current=f last(level_ft) as prev_level last(_time) as prev_t by structure_id\n| eval dt_sec=_time-prev_t\n| eval rise_ft_hr=if(dt_sec>0 AND isnotnull(prev_level), (level_ft-prev_level)*3600/dt_sec, null)\n| eval risk=if(level_ft >= 0.9*high_alarm_ft OR (rainfall_in_hr > 0.5 AND rise_ft_hr > 0.5), 1, 0)\n| where risk=1\n| table _time, structure_id, level_ft, high_alarm_ft, rainfall_in_hr, rise_ft_hr",
              "m": "Align rain gauge timestamps to SCADA time zone. Tune `rise_ft_hr` using 1-minute samples if available. Integrate with CMMS for crew dispatch; document for NPDES reporting workflows. **Domain context:** SSO/CSO events often drive NPDES reporting and consent schedules—Splunk supports early warning, not regulatory submission. **Splunk:** `streamstats window=2` is sensitive to noise—use `trendline` or longer window for stable `rise_ft_hr`; verify `sort 0` order.",
              "z": "Combo chart (level vs rainfall), Timeline (risk flags), Map (structures at risk).",
              "kfp": "Overflow model noise on intense rain, SSO mixing events, or sensor fouling the collections crew cleared after the storm work order.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Sewer SCADA + weather feed via HEC.\n• Ensure the following data sources are available: `index=water` `sourcetype=\"sewer:level\"` (structure_id, level_ft, high_alarm_ft, rainfall_in_hr).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign rain gauge timestamps to SCADA time zone. Tune `rise_ft_hr` using 1-minute samples if available. Integrate with CMMS for crew dispatch; document for NPDES reporting workflows. **Domain context:** SSO/CSO events often drive NPDES reporting and consent schedules—Splunk supports early warning, not regulatory submission. **Splunk:** `streamstats window=2` is sensitive to noise—use `trendline` or longer window for stable `rise_ft_hr`; verify `sort 0` order.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=water sourcetype=\"sewer:level\"\n| sort 0 structure_id, _time\n| streamstats window=2 global=f current=f last(level_ft) as prev_level last(_time) as prev_t by structure_id\n| eval dt_sec=_time-prev_t\n| eval rise_ft_hr=if(dt_sec>0 AND isnotnull(prev_level), (level_ft-prev_level)*3600/dt_sec, null)\n| eval risk=if(level_ft >= 0.9*high_alarm_ft OR (rainfall_in_hr > 0.5 AND rise_ft_hr > 0.5), 1, 0)\n| where risk=1\n| table _time, structure_id, level_ft, high_alarm_ft, rainfall_in_hr, rise_ft_hr\n```\n\nUnderstanding this SPL\n\n**Sewer Overflow Early Warning** — Rising wet-well levels during rainfall indicate capacity or blockage risk before SSO events; correlating level rise rate with rain intensity prioritizes inspections.\n\nDocumented **Data sources**: `index=water` `sourcetype=\"sewer:level\"` (structure_id, level_ft, high_alarm_ft, rainfall_in_hr). **App/TA** (typical add-on context): Sewer SCADA + weather feed via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: water; **sourcetype**: sewer:level. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=water, sourcetype=\"sewer:level\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` carries the previous sample's **level_ft** and **_time** forward per **structure_id** (window=2, current=f), so each event can be compared with the prior reading for the same structure.\n• `eval` defines or adjusts **dt_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **rise_ft_hr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where risk=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Sewer Overflow Early Warning**): table _time, structure_id, level_ft, high_alarm_ft, rainfall_in_hr, rise_ft_hr\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Combo chart (level vs rainfall), Timeline (risk flags), Map (structures at risk).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch sewer levels and rain response. We help you catch surges before overflows hit streets or water bodies.",
              "mtype": [
                "Fault",
                "Compliance"
              ],
              "ind": "Water and Wastewater",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.9.5",
              "n": "Water Quality Compliance Sampling Automation",
              "c": "high",
              "f": "intermediate",
              "v": "Tracking scheduled vs completed samples and lab receipt timestamps ensures permit coverage and reduces missed-route findings during audits.",
              "t": "LIMS / field sampling app via HEC",
              "d": "`index=water` `sourcetype=\"water:compliance\"` (sample_id, site_id, scheduled_epoch, collected_epoch, lab_received_epoch, parameter_set)",
              "q": "index=water sourcetype=\"water:compliance\"\n| eval collected_lag_h=(collected_epoch-scheduled_epoch)/3600\n| eval lab_lag_h=(lab_received_epoch-collected_epoch)/3600\n| where isnull(collected_epoch) OR collected_lag_h > 48 OR lab_lag_h > 72\n| stats count as issues, dc(site_id) as sites_affected by parameter_set\n| table parameter_set, issues, sites_affected",
              "m": "Ingest lifecycle events from LIMS and field mobile apps. Handle partial collections with status fields. Dashboard for compliance team; not a substitute for chain-of-custody systems. **Domain context:** Drinking water compliance sampling schedules are rule-driven (e.g., RTCR/LT2 where applicable)—map `parameter_set` to regulatory minimums. **Splunk:** `collected_epoch` null vs zero—use `isnull` explicitly; `lab_received_epoch` requires lab integration.",
              "z": "Table (overdue samples), Bar chart (issues by parameter set), Calendar heatmap (collection completion).",
              "kfp": "Sampling lifecycle noise on lab backlog days, field hold times, or courier delay that the compliance manager documents as still within the regulatory grace policy.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: LIMS / field sampling app via HEC.\n• Ensure the following data sources are available: `index=water` `sourcetype=\"water:compliance\"` (sample_id, site_id, scheduled_epoch, collected_epoch, lab_received_epoch, parameter_set).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest lifecycle events from LIMS and field mobile apps. Handle partial collections with status fields. Dashboard for compliance team; not a substitute for chain-of-custody systems. **Domain context:** Drinking water compliance sampling schedules are rule-driven (e.g., RTCR/LT2 where applicable)—map `parameter_set` to regulatory minimums. **Splunk:** `collected_epoch` null vs zero—use `isnull` explicitly; `lab_received_epoch` requires lab integration.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=water sourcetype=\"water:compliance\"\n| eval collected_lag_h=(collected_epoch-scheduled_epoch)/3600\n| eval lab_lag_h=(lab_received_epoch-collected_epoch)/3600\n| where isnull(collected_epoch) OR collected_lag_h > 48 OR lab_lag_h > 72\n| stats count as issues, dc(site_id) as sites_affected by parameter_set\n| table parameter_set, issues, sites_affected\n```\n\nUnderstanding this SPL\n\n**Water Quality Compliance Sampling Automation** — Tracking scheduled vs completed samples and lab receipt timestamps ensures permit coverage and reduces missed-route findings during audits.\n\nDocumented **Data sources**: `index=water` `sourcetype=\"water:compliance\"` (sample_id, site_id, scheduled_epoch, collected_epoch, lab_received_epoch, parameter_set). **App/TA** (typical add-on context): LIMS / field sampling app via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: water; **sourcetype**: water:compliance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=water, sourcetype=\"water:compliance\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **collected_lag_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **lab_lag_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(collected_epoch) OR collected_lag_h > 48 OR lab_lag_h > 72` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by parameter_set** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Water Quality Compliance Sampling Automation**): table parameter_set, issues, sites_affected\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue samples), Bar chart (issues by parameter set), Calendar heatmap (collection completion).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch lab sampling and hold times. We help you catch missed or late samples before compliance is at risk.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Water and Wastewater",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.9.6",
              "n": "SCADA RTU Communication Health Across Remote Sites",
              "c": "critical",
              "f": "intermediate",
              "v": "Silent RTU loss leaves operators blind at lift stations and remote wells; age since last good poll drives prioritized truck rolls before overflows.",
              "t": "Splunk Edge Hub, SCADA front-end logs",
              "d": "`index=water` `sourcetype=\"scada:rtu\"` (rtu_id, site_id, poll_ok, response_ms)",
              "q": "index=water sourcetype=\"scada:rtu\"\n| stats latest(_time) as last_ok, latest(response_ms) as last_ms, latest(poll_ok) as last_poll by rtu_id, site_id\n| eval age_sec=now()-last_ok\n| where last_poll=0 OR age_sec > 900 OR last_ms > 5000\n| table site_id, rtu_id, last_poll, last_ms, age_sec",
              "m": "Map protocol timeouts to `poll_ok=0`. Set `age_sec` threshold to 3× expected scan period. Exclude maintenance windows via site lookup. **Domain context:** Water RTUs often use poll/response over radio or cellular—staleness is a common failure mode before SCADA shows “last good” values. **Splunk:** `stats latest(_time)` without time window uses all data—scope with `| where _time>relative_time(now(),\"-24h\")` or use `last` in a time-bounded subsearch for dashboards.",
              "z": "Status grid (RTU × site), Table (oldest staleness), Line chart (response time trend).",
              "kfp": "Telemetry gaps on battery swaps, radio path alignment, or tower climbs that the SCADA maintenance ticket lists with expected offline minutes.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub, SCADA front-end logs.\n• Ensure the following data sources are available: `index=water` `sourcetype=\"scada:rtu\"` (rtu_id, site_id, poll_ok, response_ms).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap protocol timeouts to `poll_ok=0`. Set `age_sec` threshold to 3× expected scan period. Exclude maintenance windows via site lookup. **Domain context:** Water RTUs often use poll/response over radio or cellular—staleness is a common failure mode before SCADA shows “last good” values. **Splunk:** `stats latest(_time)` without time window uses all data—scope with `| where _time>relative_time(now(),\"-24h\")` or use `last` in a time-bounded subsearch for dashboards.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=water sourcetype=\"scada:rtu\"\n| stats latest(_time) as last_ok, latest(response_ms) as last_ms, latest(poll_ok) as last_poll by rtu_id, site_id\n| eval age_sec=now()-last_ok\n| where last_poll=0 OR age_sec > 900 OR last_ms > 5000\n| table site_id, rtu_id, last_poll, last_ms, age_sec\n```\n\nUnderstanding this SPL\n\n**SCADA RTU Communication Health Across Remote Sites** — Silent RTU loss leaves operators blind at lift stations and remote wells; age since last good poll drives prioritized truck rolls before overflows.\n\nDocumented **Data sources**: `index=water` `sourcetype=\"scada:rtu\"` (rtu_id, site_id, poll_ok, response_ms). **App/TA** (typical add-on context): Splunk Edge Hub, SCADA front-end logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: water; **sourcetype**: scada:rtu. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=water, sourcetype=\"scada:rtu\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rtu_id, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **age_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where last_poll=0 OR age_sec > 900 OR last_ms > 5000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SCADA RTU Communication Health Across Remote Sites**): table site_id, rtu_id, last_poll, last_ms, age_sec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Status grid (RTU × site), Table (oldest staleness), Line chart (response time trend).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch remote RTU polls across sites. We help you catch stale or failed sites before SCADA is blind in the field.",
              "mtype": [
                "Availability"
              ],
              "ind": "Water and Wastewater",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.9.7",
              "n": "Water Loss and Non-Revenue Water Detection",
              "c": "high",
              "f": "advanced",
              "v": "Comparing master meter inflows to zone consumption and night minimum flows highlights leaks, unauthorized use, and meter drift—supporting NRW reduction programs.",
              "t": "AMI + district meter HEC",
              "d": "`index=water` `sourcetype=\"water:flow\"` (zone_id, supply_m3_day, billed_m3_day, min_night_flow_m3h)",
              "q": "index=water sourcetype=\"water:flow\"\n| eval nrw_pct=if(supply_m3_day>0, round(100*(supply_m3_day-billed_m3_day)/supply_m3_day,2), null)\n| where nrw_pct > 25 OR min_night_flow_m3h > 10\n| bin _time span=1d\n| stats latest(nrw_pct) as nrw_pct, latest(min_night_flow_m3h) as mnf by zone_id, _time\n| sort - nrw_pct\n| table _time, zone_id, nrw_pct, mnf",
              "m": "Align daily rollups to billing cycles. Use minimum night flow from 2–4 AM window. Join pipe age and material via GIS for remediation prioritization. **Domain context:** NRW (non-revenue water) programs align with IWA/AWWA water balance methods—interpret `nrw_pct` with unmetered consumption and authorized unbilled use in mind. **Splunk:** `supply_m3_day` vs `billed_m3_day` timing must align; AMI lag can inflate apparent NRW.",
              "z": "Choropleth or map (NRW % by zone), Time chart (NRW trend), Bar chart (zones over threshold).",
              "kfp": "NRW noise from unmetered use, main breaks under repair, night-flow studies, or billing-cycle lag the water-audit program treats as a known reconciliation item.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AMI + district meter HEC.\n• Ensure the following data sources are available: `index=water` `sourcetype=\"water:flow\"` (zone_id, supply_m3_day, billed_m3_day, min_night_flow_m3h).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nAlign daily rollups to billing cycles. Use minimum night flow from 2–4 AM window. Join pipe age and material via GIS for remediation prioritization. **Domain context:** NRW (non-revenue water) programs align with IWA/AWWA water balance methods—interpret `nrw_pct` with unmetered consumption and authorized unbilled use in mind. **Splunk:** `supply_m3_day` vs `billed_m3_day` timing must align; AMI lag can inflate apparent NRW.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=water sourcetype=\"water:flow\"\n| eval nrw_pct=if(supply_m3_day>0, round(100*(supply_m3_day-billed_m3_day)/supply_m3_day,2), null)\n| where nrw_pct > 25 OR min_night_flow_m3h > 10\n| bin _time span=1d\n| stats latest(nrw_pct) as nrw_pct, latest(min_night_flow_m3h) as mnf by zone_id, _time\n| sort - nrw_pct\n| table _time, zone_id, nrw_pct, mnf\n```\n\nUnderstanding this SPL\n\n**Water Loss and Non-Revenue Water Detection** — Comparing master meter inflows to zone consumption and night minimum flows highlights leaks, unauthorized use, and meter drift—supporting NRW reduction programs.\n\nDocumented **Data sources**: `index=water` `sourcetype=\"water:flow\"` (zone_id, supply_m3_day, billed_m3_day, min_night_flow_m3h). **App/TA** (typical add-on context): AMI + district meter HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: water; **sourcetype**: water:flow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=water, sourcetype=\"water:flow\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **nrw_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where nrw_pct > 25 OR min_night_flow_m3h > 10` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by zone_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Water Loss and Non-Revenue Water Detection**): table _time, zone_id, nrw_pct, mnf\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Choropleth or map (NRW % by zone), Time chart (NRW trend), Bar chart (zones over threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch supplied versus billed water. We help you find real loss before non-revenue water explodes on the books.",
              "mtype": [
                "Performance",
                "Fault"
              ],
              "ind": "Water and Wastewater",
              "a": [
                "N/A"
              ],
              "e": [
                "asterisk"
              ],
              "em": [],
              "pillar": "both",
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.9.8",
              "n": "Lift Station Failure Prediction",
              "c": "high",
              "f": "advanced",
              "v": "Rising vibration with stable level, or current draw creep before thermal trip, predicts pump bearing wear and wet-well pump failures—reducing emergency callouts.",
              "t": "Vibration and motor VFD telemetry via Edge Hub",
              "d": "`index=water` `sourcetype=\"liftstation:sensor\"` (station_id, pump_id, vibration_mm_s, motor_amps, wet_well_level_ft, running_flag)",
              "q": "index=water sourcetype=\"liftstation:sensor\" running_flag=1\n| bin _time span=1h\n| stats avg(vibration_mm_s) as vib, avg(motor_amps) as amps, avg(wet_well_level_ft) as lvl by station_id, pump_id, _time\n| eventstats median(vib) as med_vib, median(amps) as med_amp by pump_id\n| eval vib_ratio=if(med_vib>0, round(vib/med_vib,2), null), amp_ratio=if(med_amp>0, round(amps/med_amp,2), null)\n| where vib_ratio > 1.5 OR amp_ratio > 1.25\n| table _time, station_id, pump_id, vib, amps, lvl, vib_ratio, amp_ratio",
              "m": "Baseline per pump when healthy; exclude dry-run periods with `running_flag`. Optional: send features to ML Toolkit for supervised models. Maintain safety interlocks in PLC, not Splunk. **Domain context:** Vibration (ISO 10816) and motor current trends support predictive maintenance—confirm sensor mounting and RPM for comparable `vibration_mm_s`. **Splunk:** `eventstats median` by `pump_id` needs history in the search window—use a saved baseline lookup for new pumps.",
              "z": "Time chart (vibration and current), Scatter (vibration vs level), Table (pumps flagged).",
              "kfp": "Lift-station feature noise in wet-weather inflow, VFD or impeller work, or probe maintenance that the pretreatment team annotated on the work order.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vibration and motor VFD telemetry via Edge Hub.\n• Ensure the following data sources are available: `index=water` `sourcetype=\"liftstation:sensor\"` (station_id, pump_id, vibration_mm_s, motor_amps, wet_well_level_ft, running_flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBaseline per pump when healthy; exclude dry-run periods with `running_flag`. Optional: send features to ML Toolkit for supervised models. Maintain safety interlocks in PLC, not Splunk. **Domain context:** Vibration (ISO 10816) and motor current trends support predictive maintenance—confirm sensor mounting and RPM for comparable `vibration_mm_s`. **Splunk:** `eventstats median` by `pump_id` needs history in the search window—use a saved baseline lookup for new pumps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=water sourcetype=\"liftstation:sensor\" running_flag=1\n| bin _time span=1h\n| stats avg(vibration_mm_s) as vib, avg(motor_amps) as amps, avg(wet_well_level_ft) as lvl by station_id, pump_id, _time\n| eventstats median(vib) as med_vib, median(amps) as med_amp by pump_id\n| eval vib_ratio=if(med_vib>0, round(vib/med_vib,2), null), amp_ratio=if(med_amp>0, round(amps/med_amp,2), null)\n| where vib_ratio > 1.5 OR amp_ratio > 1.25\n| table _time, station_id, pump_id, vib, amps, lvl, vib_ratio, amp_ratio\n```\n\nUnderstanding this SPL\n\n**Lift Station Failure Prediction** — Rising vibration with stable level, or current draw creep before thermal trip, predicts pump bearing wear and wet-well pump failures—reducing emergency callouts.\n\nDocumented **Data sources**: `index=water` `sourcetype=\"liftstation:sensor\"` (station_id, pump_id, vibration_mm_s, motor_amps, wet_well_level_ft, running_flag). **App/TA** (typical add-on context): Vibration and motor VFD telemetry via Edge Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: water; **sourcetype**: liftstation:sensor. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=water, sourcetype=\"liftstation:sensor\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by station_id, pump_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` appends per-pump baseline medians (**med_vib**, **med_amp**) to every row, split **by pump_id**, so current readings can be compared against each pump's own history.\n• `eval` defines or adjusts **vib_ratio** and **amp_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where vib_ratio > 1.5 OR amp_ratio > 1.25` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Lift Station Failure Prediction**): table _time, station_id, pump_id, vib, amps, lvl, vib_ratio, amp_ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (vibration and current), Scatter (vibration vs level), Table (pumps flagged).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch lift-station pumps and sensors. We help you catch motor or bearing issues before a wet-weather release.",
              "mtype": [
                "Fault",
                "Performance"
              ],
              "ind": "Water and Wastewater",
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.2,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 8,
            "none": 0
          }
        },
        {
          "i": "21.10",
          "n": "Insurance and Claims Processing",
          "u": [
            {
              "i": "21.10.1",
              "n": "Claims Processing Cycle Time Monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "End-to-end cycle time from FNOL to settlement drives customer satisfaction and loss adjustment expense; segmenting by line of business exposes bottlenecks in adjuster queues and vendor turnaround.",
              "t": "Claims management system via HEC",
              "d": "`index=insurance` `sourcetype=\"claims:lifecycle\"` (claim_id, lob, opened_epoch, settled_epoch, status)",
              "q": "index=insurance sourcetype=\"claims:lifecycle\" lower(status)=\"settled\"\n| eval cycle_d=(settled_epoch-opened_epoch)/86400\n| where isnotnull(cycle_d) AND cycle_d >= 0\n| bin _time span=1w\n| stats avg(cycle_d) as avg_days, perc90(cycle_d) as p90_days, count as claims by lob, _time\n| where avg_days > 30 OR p90_days > 60\n| eval avg_days_r=round(avg_days,1), p90_days_r=round(p90_days,1)\n| table _time, lob, claims, avg_days_r, p90_days_r",
              "m": "Normalize epoch fields from the claims platform; exclude reopened claims with a flag if present. Tune SLAs by LOB. Pair with staffing dashboards for operational planning—not banking fraud (Cat 10.12). **Domain context:** Cycle time KPIs vary by line (auto physical damage vs workers comp vs liability)—publish LOB-specific targets; regulators may review unfair claims practices in some jurisdictions. **Splunk:** `lower(status)=\"settled\"` must match platform values (`Closed`, `Paid`); use a lookup for canonical statuses.",
              "z": "Line chart (cycle time trend), Box plot (by LOB), Table (breaches).",
              "kfp": "Cycle time noise during CAT surges, short-staff days, or schema releases that the claims leadership tracks against the command-center staffing plan.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Claims management system via HEC.\n• Ensure the following data sources are available: `index=insurance` `sourcetype=\"claims:lifecycle\"` (claim_id, lob, opened_epoch, settled_epoch, status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nNormalize epoch fields from the claims platform; exclude reopened claims with a flag if present. Tune SLAs by LOB. Pair with staffing dashboards for operational planning—not banking fraud (Cat 10.12). **Domain context:** Cycle time KPIs vary by line (auto physical damage vs workers comp vs liability)—publish LOB-specific targets; regulators may review unfair claims practices in some jurisdictions. **Splunk:** `lower(status)=\"settled\"` must match platform values (`Closed`, `Paid`); use a lookup f…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=insurance sourcetype=\"claims:lifecycle\" lower(status)=\"settled\"\n| eval cycle_d=(settled_epoch-opened_epoch)/86400\n| where isnotnull(cycle_d) AND cycle_d >= 0\n| bin _time span=1w\n| stats avg(cycle_d) as avg_days, perc90(cycle_d) as p90_days, count as claims by lob, _time\n| where avg_days > 30 OR p90_days > 60\n| eval avg_days_r=round(avg_days,1), p90_days_r=round(p90_days,1)\n| table _time, lob, claims, avg_days_r, p90_days_r\n```\n\nUnderstanding this SPL\n\n**Claims Processing Cycle Time Monitoring** — End-to-end cycle time from FNOL to settlement drives customer satisfaction and loss adjustment expense; segmenting by line of business exposes bottlenecks in adjuster queues and vendor turnaround.\n\nDocumented **Data sources**: `index=insurance` `sourcetype=\"claims:lifecycle\"` (claim_id, lob, opened_epoch, settled_epoch, status). **App/TA** (typical add-on context): Claims management system via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: insurance; **sourcetype**: claims:lifecycle. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=insurance, sourcetype=\"claims:lifecycle\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cycle_d** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(cycle_d) AND cycle_d >= 0` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by lob, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg_days > 30 OR p90_days > 60` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **avg_days_r** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Claims Processing Cycle Time Monitoring**): table _time, lob, claims, avg_days_r, p90_days_r\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (cycle time trend), Box plot (by LOB), Table (breaches).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch how long claims take from open to decision. We help you catch backlogs before members wait too long for answers.",
              "wv": "crawl",
              "mtype": [
                "Performance"
              ],
              "ind": "Insurance and Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "21.10.2",
              "n": "First Notice of Loss Channel Analysis",
              "c": "medium",
              "f": "beginner",
              "v": "Shifts in FNOL volume by web, mobile, IVR, or agent channel indicate digital adoption issues or contact center strain after catastrophe or product changes.",
              "t": "FNOL ingestion service via HEC",
              "d": "`index=insurance` `sourcetype=\"fnol:event\"` (fnol_id, channel, region, ingest_latency_ms, success_flag)",
              "q": "index=insurance sourcetype=\"fnol:event\"\n| eval ok=if(success_flag=1 OR lower(success_flag)=\"true\",1,0)\n| bin _time span=1d\n| stats count as fnols, sum(ok) as ok_n, avg(ingest_latency_ms) as avg_lat by channel, region, _time\n| eval success_pct=round(100*ok_n/nullif(fnols,0),2)\n| where success_pct < 95 OR avg_lat > 3000\n| table _time, channel, region, fnols, success_pct, avg_lat",
              "m": "Map vendor channel codes to a canonical list. Filter bot traffic if tagged. Useful for post-mortems after marketing pushes to digital FNOL. **Domain context:** FNOL channel mix shifts during CAT events and regulatory changes (e.g., digital-first mandates)—segment by product and state where applicable. **Splunk:** `success_flag` may be string or numeric—`eval ok=` pattern covers both; watch for duplicate events per `fnol_id`.",
              "z": "Stacked bar (FNOL volume by channel), Line chart (success %), Heatmap (region × channel).",
              "kfp": "Channel or capture noise during IVR, web, or app releases; bot traffic tests; or marketing pushes that the digital FNOL product owner A/B tests.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: FNOL ingestion service via HEC.\n• Ensure the following data sources are available: `index=insurance` `sourcetype=\"fnol:event\"` (fnol_id, channel, region, ingest_latency_ms, success_flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nMap vendor channel codes to a canonical list. Filter bot traffic if tagged. Useful for post-mortems after marketing pushes to digital FNOL. **Domain context:** FNOL channel mix shifts during CAT events and regulatory changes (e.g., digital-first mandates)—segment by product and state where applicable. **Splunk:** `success_flag` may be string or numeric—`eval ok=` pattern covers both; watch for duplicate events per `fnol_id`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=insurance sourcetype=\"fnol:event\"\n| eval ok=if(success_flag=1 OR lower(success_flag)=\"true\",1,0)\n| bin _time span=1d\n| stats count as fnols, sum(ok) as ok_n, avg(ingest_latency_ms) as avg_lat by channel, region, _time\n| eval success_pct=round(100*ok_n/nullif(fnols,0),2)\n| where success_pct < 95 OR avg_lat > 3000\n| table _time, channel, region, fnols, success_pct, avg_lat\n```\n\nUnderstanding this SPL\n\n**First Notice of Loss Channel Analysis** — Shifts in FNOL volume by web, mobile, IVR, or agent channel indicate digital adoption issues or contact center strain after catastrophe or product changes.\n\nDocumented **Data sources**: `index=insurance` `sourcetype=\"fnol:event\"` (fnol_id, channel, region, ingest_latency_ms, success_flag). **App/TA** (typical add-on context): FNOL ingestion service via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: insurance; **sourcetype**: fnol:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=insurance, sourcetype=\"fnol:event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by channel, region, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **success_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where success_pct < 95 OR avg_lat > 3000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **First Notice of Loss Channel Analysis**): table _time, channel, region, fnols, success_pct, avg_lat\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Stacked bar (FNOL volume by channel), Line chart (success %), Heatmap (region × channel).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch which channels first notice of loss uses. We help you spot bad channels before intake quality varies.",
              "mtype": [
                "Performance",
                "Capacity"
              ],
              "ind": "Insurance and Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.10.3",
              "n": "Claims Adjuster Workload Balancing",
              "c": "high",
              "f": "intermediate",
              "v": "Uneven open-claim counts per adjuster drive delays and quality variance; workload panels support fair distribution and surge staffing during CAT events.",
              "t": "Claims assignment system via HEC",
              "d": "`index=insurance` `sourcetype=\"adjuster:workload\"` (adjuster_id, team, open_claims, new_assignments_day, capacity_target)",
              "q": "index=insurance sourcetype=\"adjuster:workload\"\n| eval load_ratio=if(capacity_target>0, round(open_claims/capacity_target,2), null)\n| where open_claims > capacity_target*1.2 OR load_ratio > 1.3\n| stats max(open_claims) as max_open, avg(load_ratio) as avg_ratio by team, adjuster_id\n| sort team, - max_open\n| table team, adjuster_id, max_open, avg_ratio",
              "m": "Refresh snapshot frequency aligned to work management (hourly/daily). Join `team` to supervisor roster via lookup. Use for operations review; respect labor agreements on monitoring scope. **Domain context:** Open-claim counts ignore complexity/severity—pair with reserves or injury type where possible for fair workload views. **Splunk:** `capacity_target` should be numeric; if missing, `load_ratio` is null—coalesce defaults.",
              "z": "Bar chart (open claims by adjuster), Box plot (team distribution), Single value (adjusters over capacity).",
              "kfp": "Workload noise during CAT mode, new jurisdiction training, or temporary reassignment the claims VP already balanced with an overtime pool.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Claims assignment system via HEC.\n• Ensure the following data sources are available: `index=insurance` `sourcetype=\"adjuster:workload\"` (adjuster_id, team, open_claims, new_assignments_day, capacity_target).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nRefresh snapshot frequency aligned to work management (hourly/daily). Join `team` to supervisor roster via lookup. Use for operations review; respect labor agreements on monitoring scope. **Domain context:** Open-claim counts ignore complexity/severity—pair with reserves or injury type where possible for fair workload views. **Splunk:** `capacity_target` should be numeric; if missing, `load_ratio` is null—coalesce defaults.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=insurance sourcetype=\"adjuster:workload\"\n| eval load_ratio=if(capacity_target>0, round(open_claims/capacity_target,2), null)\n| where open_claims > capacity_target*1.2 OR load_ratio > 1.3\n| stats max(open_claims) as max_open, avg(load_ratio) as avg_ratio by team, adjuster_id\n| sort team, - max_open\n| table team, adjuster_id, max_open, avg_ratio\n```\n\nUnderstanding this SPL\n\n**Claims Adjuster Workload Balancing** — Uneven open-claim counts per adjuster drive delays and quality variance; workload panels support fair distribution and surge staffing during CAT events.\n\nDocumented **Data sources**: `index=insurance` `sourcetype=\"adjuster:workload\"` (adjuster_id, team, open_claims, new_assignments_day, capacity_target). **App/TA** (typical add-on context): Claims assignment system via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: insurance; **sourcetype**: adjuster:workload. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=insurance, sourcetype=\"adjuster:workload\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **load_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where open_claims > capacity_target*1.2 OR load_ratio > 1.3` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by team, adjuster_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Claims Adjuster Workload Balancing**): table team, adjuster_id, max_open, avg_ratio\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Bar chart (open claims by adjuster), Box plot (team distribution), Single value (adjusters over capacity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch adjuster queues and load. We help you rebalance work before some teams drown and others go idle.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "ind": "Insurance and Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.10.4",
              "n": "Subrogation Recovery Tracking",
              "c": "medium",
              "f": "intermediate",
              "v": "Subrogation dollars recovered reduce net loss ratio; tracking recovery rate and aging by claim type validates vendor performance and statute-of-limitations risk.",
              "t": "Subrogation module via HEC",
              "d": "`index=insurance` `sourcetype=\"subrogation:recovery\"` (claim_id, demand_amt, recovered_amt, opened_epoch, closed_epoch, outcome)",
              "q": "index=insurance sourcetype=\"subrogation:recovery\"\n| eval recovery_pct=if(demand_amt>0, round(100*recovered_amt/demand_amt,2), null)\n| eval age_days=round((now()-opened_epoch)/86400,1)\n| where recovery_pct < 30 AND age_days > 180 AND lower(outcome)!=\"closed_no_recovery\"\n| table claim_id, demand_amt, recovered_amt, recovery_pct, age_days, outcome",
              "m": "Clarify accounting for partial payments in `recovered_amt`. Schedule weekly for aged inventory. Legal holds may restrict fields—mask PII per policy. **Domain context:** Subrogation success depends on liability findings and statute of limitations—`recovery_pct` thresholds should be counsel-reviewed, not purely operational. **Splunk:** `now()-opened_epoch` uses search-time clock—prefer `closed_epoch` or event time for aging if backfilled.",
              "z": "Line chart (recovery rate trend), Bar chart (aging buckets), Table (low-recovery claims).",
              "kfp": "Recovery timing noise when arbitration dates slip, new counsel joins, or subro demands batch-upload after a large loss event the legal team sequenced.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Subrogation module via HEC.\n• Ensure the following data sources are available: `index=insurance` `sourcetype=\"subrogation:recovery\"` (claim_id, demand_amt, recovered_amt, opened_epoch, closed_epoch, outcome).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nClarify accounting for partial payments in `recovered_amt`. Schedule weekly for aged inventory. Legal holds may restrict fields—mask PII per policy. **Domain context:** Subrogation success depends on liability findings and statute of limitations—`recovery_pct` thresholds should be counsel-reviewed, not purely operational. **Splunk:** `now()-opened_epoch` uses search-time clock—prefer `closed_epoch` or event time for aging if backfilled.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=insurance sourcetype=\"subrogation:recovery\"\n| eval recovery_pct=if(demand_amt>0, round(100*recovered_amt/demand_amt,2), null)\n| eval age_days=round((now()-opened_epoch)/86400,1)\n| where recovery_pct < 30 AND age_days > 180 AND lower(outcome)!=\"closed_no_recovery\"\n| table claim_id, demand_amt, recovered_amt, recovery_pct, age_days, outcome\n```\n\nUnderstanding this SPL\n\n**Subrogation Recovery Tracking** — Subrogation dollars recovered reduce net loss ratio; tracking recovery rate and aging by claim type validates vendor performance and statute-of-limitations risk.\n\nDocumented **Data sources**: `index=insurance` `sourcetype=\"subrogation:recovery\"` (claim_id, demand_amt, recovered_amt, opened_epoch, closed_epoch, outcome). **App/TA** (typical add-on context): Subrogation module via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: insurance; **sourcetype**: subrogation:recovery. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=insurance, sourcetype=\"subrogation:recovery\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **recovery_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where recovery_pct < 30 AND age_days > 180 AND lower(outcome)!=\"closed_no_recovery\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Subrogation Recovery Tracking**): table claim_id, demand_amt, recovered_amt, recovery_pct, age_days, outcome\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (recovery rate trend), Bar chart (aging buckets), Table (low-recovery claims).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch money recovered from other carriers. We help you catch slow subrogation before cash sits on the table too long.",
              "mtype": [
                "Performance"
              ],
              "ind": "Insurance and Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.10.5",
              "n": "Policy Underwriting Decision Audit Trail",
              "c": "high",
              "f": "intermediate",
              "v": "Immutable-style audit of quote, risk tier, and bind decisions supports regulatory exams and disputes; Splunk supplements the system of record for search and dashboards.",
              "t": "Policy administration / underwriting engine via HEC",
              "d": "`index=insurance` `sourcetype=\"underwriting:audit\"` (policy_id, decision_id, user_id, decision, risk_score, rule_id, epoch)",
              "q": "index=insurance sourcetype=\"underwriting:audit\"\n| eval bind=if(lower(decision) IN (\"bind\",\"approved\",\"accept\"),1,0)\n| bin _time span=1d\n| stats count as decisions, sum(bind) as binds, dc(rule_id) as rules_fired by user_id, _time\n| eval bind_ratio=round(100*binds/nullif(decisions,0),2)\n| where decisions > 50 AND bind_ratio > 85 AND rules_fired < 2\n| table _time, user_id, decisions, binds, bind_ratio, rules_fired",
              "m": "Ingest append-only decision events with tamper-evident hashing if required by compliance. Tune alert for unusual auto-approval patterns; investigate false positives with underwriting leadership. Not a replacement for GRC workflow. **Domain context:** Model governance (SR 11-7 style expectations for larger insurers) requires explainability—`rule_id` diversity alone is a coarse anomaly signal. **Splunk:** Restrict index to compliance roles; hash/lineage often lives in the policy admin system of record.",
              "z": "Timeline (decision volume), Table (suspicious users), Bar chart (bind ratio by rule).",
              "kfp": "Underwriting decision variance during appetite changes, new model go-live, or data vendor refreshes the actuarial committee approved with shadow scoring.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Policy administration / underwriting engine via HEC.\n• Ensure the following data sources are available: `index=insurance` `sourcetype=\"underwriting:audit\"` (policy_id, decision_id, user_id, decision, risk_score, rule_id, epoch).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nIngest append-only decision events with tamper-evident hashing if required by compliance. Tune alert for unusual auto-approval patterns; investigate false positives with underwriting leadership. Not a replacement for GRC workflow. **Domain context:** Model governance (SR 11-7 style expectations for larger insurers) requires explainability—`rule_id` diversity alone is a coarse anomaly signal. **Splunk:** Restrict index to compliance roles; hash/lineage often lives in the policy admin system of re…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=insurance sourcetype=\"underwriting:audit\"\n| eval bind=if(lower(decision) IN (\"bind\",\"approved\",\"accept\"),1,0)\n| bin _time span=1d\n| stats count as decisions, sum(bind) as binds, dc(rule_id) as rules_fired by user_id, _time\n| eval bind_ratio=round(100*binds/nullif(decisions,0),2)\n| where decisions > 50 AND bind_ratio > 85 AND rules_fired < 2\n| table _time, user_id, decisions, binds, bind_ratio, rules_fired\n```\n\nUnderstanding this SPL\n\n**Policy Underwriting Decision Audit Trail** — Immutable-style audit of quote, risk tier, and bind decisions supports regulatory exams and disputes; Splunk supplements the system of record for search and dashboards.\n\nDocumented **Data sources**: `index=insurance` `sourcetype=\"underwriting:audit\"` (policy_id, decision_id, user_id, decision, risk_score, rule_id, epoch). **App/TA** (typical add-on context): Policy administration / underwriting engine via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: insurance; **sourcetype**: underwriting:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=insurance, sourcetype=\"underwriting:audit\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **bind** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by user_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **bind_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where decisions > 50 AND bind_ratio > 85 AND rules_fired < 2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Policy Underwriting Decision Audit Trail**): table _time, user_id, decisions, binds, bind_ratio, rules_fired\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (decision volume), Table (suspicious users), Bar chart (bind ratio by rule).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch underwriting system decisions and changes. We help you keep an auditable story when policies change hands.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Insurance and Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.10.6",
              "n": "Insurance Fraud Ring Detection",
              "c": "critical",
              "f": "advanced",
              "v": "Graph-style links among claimants, body shops, and payees reveal staged-loss rings distinct from generic payment fraud in banking; supports SIU prioritization when combined with Fraud Analytics scores.",
              "t": "Splunk App for Fraud Analytics, graph enrichment via HEC",
              "d": "`index=insurance` `sourcetype=\"fraud:network\"` (claim_id, entity_id, entity_type, edge_type, related_claim_id)",
              "q": "index=insurance sourcetype=\"fraud:network\"\n| stats dc(claim_id) as claim_cnt, dc(related_claim_id) as rel_cnt, values(entity_type) as types by entity_id\n| eval fanout=claim_cnt+rel_cnt\n| where fanout >= 5 AND mvcount(types) >= 2\n| sort - fanout\n| table entity_id, fanout, claim_cnt, rel_cnt, types",
              "m": "Build nightly entity extracts from claims and vendor data; load into Splunk or external graph with summaries back. Coordinate with legal for PII handling. Pair with Behavioral Profiling App scores for triage. **Domain context:** SIU investigations blend fraud scoring with privileged legal strategy—Splunk supports prioritization, not case disposition. **Splunk:** `stats dc(claim_id)` dedupes per entity—tune `fanout` thresholds; consider Splunk App for Fraud Analytics for graph-assisted workflows.",
              "z": "Node-link diagram (external viz or custom), Table (high-fanout entities), Bar chart (claims per entity).",
              "kfp": "Fraud model noise on big marketing campaigns, payment-method tests, or known merchant outage windows the SIU whitelisted as benign clusters.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk App for Fraud Analytics, graph enrichment via HEC.\n• Ensure the following data sources are available: `index=insurance` `sourcetype=\"fraud:network\"` (claim_id, entity_id, entity_type, edge_type, related_claim_id).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nBuild nightly entity extracts from claims and vendor data; load into Splunk or external graph with summaries back. Coordinate with legal for PII handling. Pair with Behavioral Profiling App scores for triage. **Domain context:** SIU investigations blend fraud scoring with privileged legal strategy—Splunk supports prioritization, not case disposition. **Splunk:** `stats dc(claim_id)` dedupes per entity—tune `fanout` thresholds; consider Splunk App for Fraud Analytics for graph-assisted workflows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=insurance sourcetype=\"fraud:network\"\n| stats dc(claim_id) as claim_cnt, dc(related_claim_id) as rel_cnt, values(entity_type) as types by entity_id\n| eval fanout=claim_cnt+rel_cnt\n| where fanout >= 5 AND mvcount(types) >= 2\n| sort - fanout\n| table entity_id, fanout, claim_cnt, rel_cnt, types\n```\n\nUnderstanding this SPL\n\n**Insurance Fraud Ring Detection** — Graph-style links among claimants, body shops, and payees reveal staged-loss rings distinct from generic payment fraud in banking; supports SIU prioritization when combined with Fraud Analytics scores.\n\nDocumented **Data sources**: `index=insurance` `sourcetype=\"fraud:network\"` (claim_id, entity_id, entity_type, edge_type, related_claim_id). **App/TA** (typical add-on context): Splunk App for Fraud Analytics, graph enrichment via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: insurance; **sourcetype**: fraud:network. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=insurance, sourcetype=\"fraud:network\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by entity_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fanout** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fanout >= 5 AND mvcount(types) >= 2` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Insurance Fraud Ring Detection**): table entity_id, fanout, claim_cnt, rel_cnt, types\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Node-link diagram (external viz or custom), Table (high-fanout entities), Bar chart (claims per entity).",
              "script": "",
              "premium": "Splunk App for Fraud Analytics",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch patterns that look like organized fraud. We help you connect rings before many small hits become one big loss.",
              "mtype": [
                "Security",
                "Fault"
              ],
              "ind": "Insurance and Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.10.7",
              "n": "Workers Compensation Return-to-Work Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "RTW milestones reduce indemnity spend and improve outcomes; monitoring days lost and RTW status by employer class highlights case management gaps.",
              "t": "Workers comp claims system via HEC",
              "d": "`index=insurance` `sourcetype=\"workcomp:rtw\"` (claim_id, employer_class, injury_date_epoch, rtw_date_epoch, lost_time_flag)",
              "q": "index=insurance sourcetype=\"workcomp:rtw\" lost_time_flag=1\n| eval days_lost=if(isnotnull(rtw_date_epoch), (rtw_date_epoch-injury_date_epoch)/86400, (now()-injury_date_epoch)/86400)\n| where days_lost > 45\n| stats avg(days_lost) as avg_lost, perc90(days_lost) as p90_lost, count as claims by employer_class\n| eval avg_lost_r=round(avg_lost,1), p90_lost_r=round(p90_lost,1)\n| sort - avg_lost_r\n| table employer_class, claims, avg_lost_r, p90_lost_r",
              "m": "De-identify claimants in dashboards. Handle jurisdictional differences in reporting latency. Integrate with nurse case management milestones if available. **Domain context:** Workers comp is state-regulated—RTW and lost-time definitions vary; PHI/HIPAA and state privacy rules may apply to health fields. **Splunk:** `lost_time_flag=1` must align with carrier coding; `days_lost` for open claims uses `now()`—refresh searches for accuracy.",
              "z": "Histogram (days lost), Bar chart (by employer class), Line chart (RTW trend).",
              "kfp": "Return-to-work noise during clinic backlogs, modified-duty assignments, or state rule changes the WC program office communicated to case managers.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Workers comp claims system via HEC.\n• Ensure the following data sources are available: `index=insurance` `sourcetype=\"workcomp:rtw\"` (claim_id, employer_class, injury_date_epoch, rtw_date_epoch, lost_time_flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nDe-identify claimants in dashboards. Handle jurisdictional differences in reporting latency. Integrate with nurse case management milestones if available. **Domain context:** Workers comp is state-regulated—RTW and lost-time definitions vary; PHI/HIPAA and state privacy rules may apply to health fields. **Splunk:** `lost_time_flag=1` must align with carrier coding; `days_lost` for open claims uses `now()`—refresh searches for accuracy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=insurance sourcetype=\"workcomp:rtw\" lost_time_flag=1\n| eval days_lost=if(isnotnull(rtw_date_epoch), (rtw_date_epoch-injury_date_epoch)/86400, (now()-injury_date_epoch)/86400)\n| where days_lost > 45\n| stats avg(days_lost) as avg_lost, perc90(days_lost) as p90_lost, count as claims by employer_class\n| eval avg_lost_r=round(avg_lost,1), p90_lost_r=round(p90_lost,1)\n| sort - avg_lost_r\n| table employer_class, claims, avg_lost_r, p90_lost_r\n```\n\nUnderstanding this SPL\n\n**Workers Compensation Return-to-Work Tracking** — RTW milestones reduce indemnity spend and improve outcomes; monitoring days lost and RTW status by employer class highlights case management gaps.\n\nDocumented **Data sources**: `index=insurance` `sourcetype=\"workcomp:rtw\"` (claim_id, employer_class, injury_date_epoch, rtw_date_epoch, lost_time_flag). **App/TA** (typical add-on context): Workers comp claims system via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: insurance; **sourcetype**: workcomp:rtw. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=insurance, sourcetype=\"workcomp:rtw\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_lost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_lost > 45` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by employer_class** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_lost_r** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Workers Compensation Return-to-Work Tracking**): table employer_class, claims, avg_lost_r, p90_lost_r\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (days lost), Bar chart (by employer class), Line chart (RTW trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch return-to-work milestones on workers comp. We help you catch stalled cases before cost and care drag on.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "ind": "Insurance and Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "21.10.8",
              "n": "Catastrophe Event Claims Surge Capacity Monitoring",
              "c": "critical",
              "f": "advanced",
              "v": "During hurricanes or wildfires, FNOL and assignment rates can overwhelm contact centers and field adjusters; real-time intake vs staffed capacity guides IVR messaging and temporary adjuster pools.",
              "t": "Claims platform + workforce management via HEC",
              "d": "`index=insurance` `sourcetype=\"cat:surge\"` (cat_event_id, fnol_per_hr, active_adjusters, queue_depth, p95_handle_sec)",
              "q": "index=insurance sourcetype=\"cat:surge\"\n| bin _time span=1h\n| eval capacity_est=active_adjusters*4\n| eval surge_ratio=if(capacity_est>0, round(fnol_per_hr/capacity_est,2), null)\n| stats max(surge_ratio) as max_surge, max(queue_depth) as max_q, max(p95_handle_sec) as max_p95 by cat_event_id, _time\n| where max_surge > 1.2 OR max_q > 500 OR max_p95 > 600\n| table _time, cat_event_id, max_surge, max_q, max_p95",
              "m": "Parameterize `capacity_est` from actual handles-per-hour by channel. Tag `cat_event_id` from peril models. Coordinate with BCP playbooks; thresholds are scenario-specific. **Domain context:** CAT surge blends claims intake, FNOL, and field adjuster capacity—`active_adjusters*4` is a placeholder; replace with WFM actual productivity. **Splunk:** `bin _time span=1h` aligns FNOL rates—ensure `fnol_per_hr` is pre-aggregated or computed per hour consistently.",
              "z": "Time chart (FNOL vs capacity), Area chart (queue depth), Single value (surge ratio).",
              "kfp": "Surge pressure during CAT, regional storms, or vendor batch requeues that the catastrophe operations bridge explains with staged adjuster rosters.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Claims platform + workforce management via HEC.\n• Ensure the following data sources are available: `index=insurance` `sourcetype=\"cat:surge\"` (cat_event_id, fnol_per_hr, active_adjusters, queue_depth, p95_handle_sec).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\nParameterize `capacity_est` from actual handles-per-hour by channel. Tag `cat_event_id` from peril models. Coordinate with BCP playbooks; thresholds are scenario-specific. **Domain context:** CAT surge blends claims intake, FNOL, and field adjuster capacity—`active_adjusters*4` is a placeholder; replace with WFM actual productivity. **Splunk:** `bin _time span=1h` aligns FNOL rates—ensure `fnol_per_hr` is pre-aggregated or computed per hour consistently.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=insurance sourcetype=\"cat:surge\"\n| bin _time span=1h\n| eval capacity_est=active_adjusters*4\n| eval surge_ratio=if(capacity_est>0, round(fnol_per_hr/capacity_est,2), null)\n| stats max(surge_ratio) as max_surge, max(queue_depth) as max_q, max(p95_handle_sec) as max_p95 by cat_event_id, _time\n| where max_surge > 1.2 OR max_q > 500 OR max_p95 > 600\n| table _time, cat_event_id, max_surge, max_q, max_p95\n```\n\nUnderstanding this SPL\n\n**Catastrophe Event Claims Surge Capacity Monitoring** — During hurricanes or wildfires, FNOL and assignment rates can overwhelm contact centers and field adjusters; real-time intake vs staffed capacity guides IVR messaging and temporary adjuster pools.\n\nDocumented **Data sources**: `index=insurance` `sourcetype=\"cat:surge\"` (cat_event_id, fnol_per_hr, active_adjusters, queue_depth, p95_handle_sec). **App/TA** (typical add-on context): Claims platform + workforce management via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: insurance; **sourcetype**: cat:surge. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=insurance, sourcetype=\"cat:surge\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `eval` defines or adjusts **capacity_est** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **surge_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by cat_event_id, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where max_surge > 1.2 OR max_q > 500 OR max_p95 > 600` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Catastrophe Event Claims Surge Capacity Monitoring**): table _time, cat_event_id, max_surge, max_q, max_p95\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (FNOL vs capacity), Area chart (queue depth), Single value (surge ratio).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "",
              "sver": "",
              "rby": "",
              "ge": "We watch claim intake when catastrophe hits. We help you add capacity before phone and web lines melt down.",
              "mtype": [
                "Capacity",
                "Performance"
              ],
              "ind": "Insurance and Financial Services",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 38.1,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 7,
            "none": 0
          }
        }
      ],
      "i": 21,
      "n": "Industry Verticals",
      "src": "cat-21-industry-verticals.md"
    },
    {
      "s": [
        {
          "i": "22.1",
          "n": "GDPR",
          "u": [
            {
              "i": "22.1.1",
              "n": "GDPR PII Detection in Application Log Data (Art. 5/6)",
              "c": "critical",
              "f": "advanced",
              "v": "Detects email, phone, and SSN patterns in indexed application and web logs so controllers can prove technical measures for data minimisation and lawful processing under Arts. 5-6.",
              "t": "Splunk Edge Processor (Splunk Cloud Platform — ingest-time PII rules), Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "`index=main` OR `index=web` OR `index=app` — any high-volume text-bearing sourcetype such as `sourcetype=\"access_combined\"`, `sourcetype=\"log4j\"`, or custom application sourcetypes",
              "q": "(index=main OR index=web OR index=app) earliest=-24h\n| regex _raw=\"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}\"\n| eval pii_type=\"email\"\n| append [\n    search (index=main OR index=web OR index=app) earliest=-24h\n    | regex _raw=\"\\b\\d{3}-\\d{2}-\\d{4}\\b\"\n    | eval pii_type=\"ssn_pattern\"\n  ]\n| stats count by index, sourcetype, host, pii_type\n| sort - count",
              "m": "(1) In Splunk Cloud, configure Edge Processor pipelines with built-in PII detection rules for net-new data to mask at ingest; (2) run this SPL against existing indexes to find residual PII; (3) route hits to a restricted summary index for DPO review; (4) remediate at source (masking, log redaction, field drops in props.conf/transforms.conf) and re-run to verify reduction.",
              "z": "Bar chart (hits by sourcetype/host), Table (top offending sources by PII type), Single value (total PII pattern matches vs prior period).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [
                "T1005"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Processor (Splunk Cloud Platform — ingest-time PII rules), Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: `index=main` OR `index=web` OR `index=app` — any high-volume text-bearing sourcetype such as `sourcetype=\"access_combined\"`, `sourcetype=\"log4j\"`, or custom application sourcetypes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) In Splunk Cloud, configure Edge Processor pipelines with built-in PII detection rules for net-new data to mask at ingest; (2) run this SPL against existing indexes to find residual PII; (3) route hits to a restricted summary index for DPO review; (4) remediate at source (masking, log redaction, field drops in props.conf/transforms.conf) and re-run to verify reduction.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=main OR index=web OR index=app) earliest=-24h\n| regex _raw=\"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}\"\n| eval pii_type=\"email\"\n| append [\n    search (index=main OR index=web OR index=app) earliest=-24h\n    | regex _raw=\"\\b\\d{3}-\\d{2}-\\d{4}\\b\"\n    | eval pii_type=\"ssn_pattern\"\n  ]\n| stats count by index, sourcetype, host, pii_type\n| sort - count\n```\n\nUnderstanding this SPL\n\n**GDPR PII Detection in Application Log Data (Art. 5/6)** — Detects email, phone, and SSN patterns in indexed application and web logs so controllers can prove technical measures for data minimisation and lawful processing under Arts. 5-6.\n\nDocumented **Data sources**: `index=main` OR `index=web` OR `index=app` — any high-volume text-bearing sourcetype such as `sourcetype=\"access_combined\"`, `sourcetype=\"log4j\"`, or custom application sourcetypes. **App/TA** (typical add-on context): Splunk Edge Processor (Splunk Cloud Platform — ingest-time PII rules), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: main, web, app.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=main, index=web, index=app, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters rows matching a pattern with `regex`.\n• `eval` defines or adjusts **pii_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by index, sourcetype, host, pii_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR PII Detection in Application Log Data (Art. 
5/6)** — Detects email, phone, and SSN patterns in indexed application and web logs so controllers can prove technical measures for data minimisation and lawful processing under Arts. 5-6.\n\nDocumented **Data sources**: `index=main` OR `index=web` OR `index=app` — any high-volume text-bearing sourcetype such as `sourcetype=\"access_combined\"`, `sourcetype=\"log4j\"`, or custom application sourcetypes. **App/TA** (typical add-on context): Splunk Edge Processor (Splunk Cloud Platform — ingest-time PII rules), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (hits by sourcetype/host), Table (top offending sources by PII type), Single value (total PII pattern matches vs prior period).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch your application logs for personal information like email addresses, phone numbers, and ID numbers that shouldn't be there, so you can catch privacy risks before they become a problem.",
              "wv": "crawl",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "Web (for `access_combined` when CIM-tagged)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.5 (Principles of processing) is enforced — Splunk UC-22.1.1: GDPR PII Detection in Application Log Data.",
                  "ea": "Saved search 'UC-22.1.1' running on sourcetype access_combined and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.6 (Lawful basis) is enforced — Splunk UC-22.1.1: GDPR PII Detection in Application Log Data.",
                  "ea": "Saved search 'UC-22.1.1' running on sourcetype access_combined and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "22.1.2",
              "n": "GDPR Data Subject Access Request Fulfillment Tracking (Art. 15-22)",
              "c": "critical",
              "f": "intermediate",
              "v": "Measures DSAR ticket lifecycle from opened to closed against a 30-calendar-day SLA so privacy and audit teams can evidence timely handling of access, rectification, erasure, and portability requests.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, cat_item, opened_at, closed_at, state, short_description) or `sourcetype=\"snow:incident\"` (number, category, opened_at, closed_at, short_description, priority)",
              "q": "index=itsm (sourcetype=\"snow:sc_req_item\" OR sourcetype=\"snow:incident\")\n    (cat_item=\"*Subject Access*\" OR short_description=\"*DSAR*\" OR short_description=\"*data subject*\")\n| eval opened_epoch=strptime(opened_at, \"%Y-%m-%d %H:%M:%S\")\n| eval closed_epoch=if(isnotnull(closed_at), strptime(closed_at, \"%Y-%m-%d %H:%M:%S\"), null())\n| eval age_days=round((now()-opened_epoch)/86400, 1)\n| eval sla_met=if(isnotnull(closed_epoch) AND (closed_epoch-opened_epoch)<=2592000, \"Met\", \"Missed\")\n| eval open_breach=if(isnull(closed_epoch) AND age_days>30, \"Open_SLA_Breach\", null())\n| table _time, number, sourcetype, state, age_days, sla_met, open_breach, short_description\n| sort - age_days",
              "m": "(1) Install Splunk Add-on for ServiceNow (1928) with sc_req_item and incident inputs enabled; (2) align `cat_item`/`short_description` filters with your DSAR catalogue naming; (3) confirm timestamp format in `opened_at`/`closed_at` and adjust `strptime` format if needed; (4) schedule daily and alert on `open_breach`.",
              "z": "Column chart (Met vs Missed), Time chart (DSAR volume), Table (open breaches), Single value (% within 30 days).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Ticket Management](https://docs.splunk.com/Documentation/CIM/latest/User/Ticket_Management)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, cat_item, opened_at, closed_at, state, short_description) or `sourcetype=\"snow:incident\"` (number, category, opened_at, closed_at, short_description, priority).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Install Splunk Add-on for ServiceNow (1928) with sc_req_item and incident inputs enabled; (2) align `cat_item`/`short_description` filters with your DSAR catalogue naming; (3) confirm timestamp format in `opened_at`/`closed_at` and adjust `strptime` format if needed; (4) schedule daily and alert on `open_breach`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm (sourcetype=\"snow:sc_req_item\" OR sourcetype=\"snow:incident\")\n    (cat_item=\"*Subject Access*\" OR short_description=\"*DSAR*\" OR short_description=\"*data subject*\")\n| eval opened_epoch=strptime(opened_at, \"%Y-%m-%d %H:%M:%S\")\n| eval closed_epoch=if(isnotnull(closed_at), strptime(closed_at, \"%Y-%m-%d %H:%M:%S\"), null())\n| eval age_days=round((now()-opened_epoch)/86400, 1)\n| eval sla_met=if(isnotnull(closed_epoch) AND (closed_epoch-opened_epoch)<=2592000, \"Met\", \"Missed\")\n| eval open_breach=if(isnull(closed_epoch) AND age_days>30, \"Open_SLA_Breach\", null())\n| table _time, number, sourcetype, state, age_days, sla_met, open_breach, short_description\n| sort - age_days\n```\n\nUnderstanding this SPL\n\n**GDPR Data Subject Access Request Fulfillment Tracking (Art. 15-22)** — Measures DSAR ticket lifecycle from opened to closed against a 30-calendar-day SLA so privacy and audit teams can evidence timely handling of access, rectification, erasure, and portability requests.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, cat_item, opened_at, closed_at, state, short_description) or `sourcetype=\"snow:incident\"` (number, category, opened_at, closed_at, short_description, priority). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:sc_req_item, snow:incident. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:sc_req_item\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **closed_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **open_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **GDPR Data Subject Access Request Fulfillment Tracking (Art. 15-22)**): table _time, number, sourcetype, state, age_days, sla_met, open_breach, short_description\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Data Subject Access Request Fulfillment Tracking (Art. 15-22)** — Measures DSAR ticket lifecycle from opened to closed against a 30-calendar-day SLA so privacy and audit teams can evidence timely handling of access, rectification, erasure, and portability requests.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, cat_item, opened_at, closed_at, state, short_description) or `sourcetype=\"snow:incident\"` (number, category, opened_at, closed_at, short_description, priority). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Ticket_Management.All_Ticket_Management` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (Met vs Missed), Time chart (DSAR volume), Table (open breaches), Single value (% within 30 days).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measure DSAR ticket lifecycle from opened to closed against a 30-calendar-day SLA so privacy and audit teams can evidence timely handling of access, rectification, erasure, and portability requests.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "Ticket_Management (ServiceNow TA mappings)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.15",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.15 (Right of access) is enforced — Splunk UC-22.1.2: GDPR Data Subject Access Request Fulfillment Tracking.",
                  "ea": "Saved search 'UC-22.1.2' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.16",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.16 (Right to rectification) is enforced — Splunk UC-22.1.2: GDPR Data Subject Access Request Fulfillment Tracking.",
                  "ea": "Saved search 'UC-22.1.2' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.17 (Right to erasure) is enforced — Splunk UC-22.1.2: GDPR Data Subject Access Request Fulfillment Tracking.",
                  "ea": "Saved search 'UC-22.1.2' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.18 (Right to restrict processing) is enforced — Splunk UC-22.1.2: GDPR Data Subject Access Request Fulfillment Tracking.",
                  "ea": "Saved search 'UC-22.1.2' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.19",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.19 is enforced — Splunk UC-22.1.2: GDPR Data Subject Access Request Fulfillment Tracking.",
                  "ea": "Saved search 'UC-22.1.2' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.20",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.20 (Right to data portability) is enforced — Splunk UC-22.1.2: GDPR Data Subject Access Request Fulfillment Tracking.",
                  "ea": "Saved search 'UC-22.1.2' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.21",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.21 (Right to object) is enforced — Splunk UC-22.1.2: GDPR Data Subject Access Request Fulfillment Tracking.",
                  "ea": "Saved search 'UC-22.1.2' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.22",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.22 (Automated decision making) is enforced — Splunk UC-22.1.2: GDPR Data Subject Access Request Fulfillment Tracking.",
                  "ea": "Saved search 'UC-22.1.2' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.3",
              "n": "GDPR Breach Notification Timeline Monitoring (Art. 33, 72-hour rule)",
              "c": "critical",
              "f": "intermediate",
              "v": "The key GDPR Art. 33 evidence artifacts are time-to-DPO notification and time-to-supervisory authority filing, not just SOC notable age. This use case tracks both handoff milestones, preventing false compliance comfort from measuring queue time alone.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`` `notable` `` macro (rule_name, urgency, status, owner, status_description, _time)",
              "q": "`notable` status IN (\"New\",\"In Progress\",\"Pending\") earliest=-7d\n| eval hours_since_detection=round((now()-_time)/3600, 2)\n| eval near_deadline=if(hours_since_detection>=60 AND hours_since_detection<72, 1, 0)\n| eval breached_72h=if(hours_since_detection>72, 1, 0)\n| table _time, rule_name, urgency, status, owner, status_description, hours_since_detection, near_deadline, breached_72h\n| sort - breached_72h, - hours_since_detection",
              "m": "(1) Ensure Incident Review workflow populates `owner`, `status`, and `status_description` at each milestone; (2) tag correlation searches that represent personal-data breaches with a `gdpr_relevant` field or label; (3) schedule hourly with alert when `near_deadline=1` or `breached_72h=1`; (4) attach runbook linking to DPO/legal notification steps.",
              "z": "Timeline (notable aging milestones), Table (aging notables), Single value (count past 60h), Alert list (breach candidates).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `` `notable` `` macro (rule_name, urgency, status, owner, status_description, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure Incident Review workflow populates `owner`, `status`, and `status_description` at each milestone; (2) tag correlation searches that represent personal-data breaches with a `gdpr_relevant` field or label; (3) schedule hourly with alert when `near_deadline=1` or `breached_72h=1`; (4) attach runbook linking to DPO/legal notification steps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` status IN (\"New\",\"In Progress\",\"Pending\") earliest=-7d\n| eval hours_since_detection=round((now()-_time)/3600, 2)\n| eval near_deadline=if(hours_since_detection>=60 AND hours_since_detection<72, 1, 0)\n| eval breached_72h=if(hours_since_detection>72, 1, 0)\n| table _time, rule_name, urgency, status, owner, status_description, hours_since_detection, near_deadline, breached_72h\n| sort - breached_72h, - hours_since_detection\n```\n\nUnderstanding this SPL\n\n**GDPR Breach Notification Timeline Monitoring (Art. 33, 72-hour rule)** — The key GDPR Art. 33 evidence artifacts are time-to-DPO notification and time-to-supervisory authority filing, not just SOC notable age. This use case tracks both handoff milestones, preventing false compliance comfort from measuring queue time alone.\n\nDocumented **Data sources**: `` `notable` `` macro (rule_name, urgency, status, owner, status_description, _time). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **hours_since_detection** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **near_deadline** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breached_72h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **GDPR Breach Notification Timeline Monitoring (Art. 33, 72-hour rule)**): table _time, rule_name, urgency, status, owner, status_description, hours_since_detection, near_deadline, breached_72h\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (notable aging milestones), Table (aging notables), Single value (count past 60h), Alert list (breach candidates).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We the key GDPR Art so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.33",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.33 (Breach notification to supervisory authority) is enforced — Splunk UC-22.1.3: GDPR Breach Notification Timeline Monitoring.",
                  "ea": "Saved search 'UC-22.1.3' running on notable macro (rule_name, urgency, status, owner, status_description, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.4",
              "n": "GDPR Data Retention Policy Enforcement (Art. 5(1)(e))",
              "c": "high",
              "f": "intermediate",
              "v": "Audits Splunk index-level retention settings against written data retention policy so personal data in logs is not kept longer than necessary under the storage limitation principle.",
              "t": "Splunk Enterprise / Splunk Cloud Platform (native `| rest` API, no separate TA required)",
              "d": "REST endpoint: `/services/data/indexes` — fields: `title`, `frozenTimePeriodInSecs`, `maxTotalDataSizeMB`, `disabled`",
              "q": "| rest /services/data/indexes splunk_server=local count=0\n| search disabled=0 NOT title IN (\"_*\", \"history\", \"summary\")\n| eval retention_days=round(frozenTimePeriodInSecs/86400, 1)\n| eval policy_max_days=case(\n    match(title,\"^(hr|pii|gdpr)\"), 180,\n    match(title,\"^(security|sec)\"), 365,\n    1=1, 365)\n| eval violation=if(retention_days>policy_max_days, \"Exceeds_Policy\", \"OK\")\n| table title, retention_days, policy_max_days, frozenTimePeriodInSecs, maxTotalDataSizeMB, violation\n| sort - retention_days",
              "m": "(1) Run from a scheduled search on the search head (requires admin capability for REST); (2) replace the `case()` block with a lookup `index_retention_policy.csv` mapping index names to required max retention days; (3) export results to GRC tickets when violations trigger; (4) pair with archive/freeze path review outside Splunk for complete retention evidence.",
              "z": "Table (index, retention, policy, violation), Bar chart (retention by index), Single value (violation count).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1005"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform (native `| rest` API, no separate TA required).\n• Ensure the following data sources are available: REST endpoint: `/services/data/indexes` — fields: `title`, `frozenTimePeriodInSecs`, `maxTotalDataSizeMB`, `disabled`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Run from a scheduled search on the search head (requires admin capability for REST); (2) replace the `case()` block with a lookup `index_retention_policy.csv` mapping index names to required max retention days; (3) export results to GRC tickets when violations trigger; (4) pair with archive/freeze path review outside Splunk for complete retention evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/data/indexes splunk_server=local count=0\n| search disabled=0 NOT title IN (\"_*\", \"history\", \"summary\")\n| eval retention_days=round(frozenTimePeriodInSecs/86400, 1)\n| eval policy_max_days=case(\n    match(title,\"^(hr|pii|gdpr)\"), 180,\n    match(title,\"^(security|sec)\"), 365,\n    1=1, 365)\n| eval violation=if(retention_days>policy_max_days, \"Exceeds_Policy\", \"OK\")\n| table title, retention_days, policy_max_days, frozenTimePeriodInSecs, maxTotalDataSizeMB, violation\n| sort - retention_days\n```\n\nUnderstanding this SPL\n\n**GDPR Data Retention Policy Enforcement (Art. 5(1)(e))** — Audits Splunk index-level retention settings against written data retention policy so personal data in logs is not kept longer than necessary under the storage limitation principle.\n\nDocumented **Data sources**: REST endpoint: `/services/data/indexes` — fields: `title`, `frozenTimePeriodInSecs`, `maxTotalDataSizeMB`, `disabled`. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform (native `| rest` API, no separate TA required). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **retention_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **policy_max_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **violation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **GDPR Data Retention Policy Enforcement (Art. 5(1)(e))**): table title, retention_days, policy_max_days, frozenTimePeriodInSecs, maxTotalDataSizeMB, violation\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (index, retention, policy, violation), Bar chart (retention by index), Single value (violation count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We audits index-level retention settings against written data retention policy so personal data in logs is not kept longer than necessary under the storage limitation principle.",
              "mtype": [
                "Security",
                "Compliance",
                "Capacity"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5(1)(e)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.5(1)(e) is enforced — Splunk UC-22.1.4: GDPR Data Retention Policy Enforcement.",
                  "ea": "Saved search 'UC-22.1.4' running on REST endpoint: /services/data/indexes — fields: title, frozenTimePeriodInSecs, maxTotalDataSizeMB, disabled, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.5",
              "n": "GDPR Consent Management Audit Trail (Art. 7)",
              "c": "high",
              "f": "intermediate",
              "v": "Preserves a searchable trail of consent grant, refuse, and withdraw events from web applications for accountability and consent withdrawal parity requirements.",
              "t": "Splunk Add-on for Apache Web Server (Splunkbase 3186), HTTP Event Collector (HEC — platform capability for structured JSON from consent APIs)",
              "d": "`index=web` `sourcetype=\"access_combined\"` (clientip, uri, method, status, useragent) for consent page interactions; or custom HEC JSON events with explicit consent fields",
              "q": "index=web sourcetype=\"access_combined\" earliest=-7d\n    (uri=\"*/consent*\" OR uri=\"*/privacy-preferences*\")\n| rex field=uri_query \"action=(?<consent_action>[^&]+)\"\n| eval consent_event=coalesce(consent_action, if(status=200, \"page_view\", \"error\"))\n| stats count by clientip, uri, consent_event, status\n| sort - count",
              "m": "(1) Ingest Apache/nginx access logs via TA 3186 or Universal Forwarder file inputs; (2) for richer evidence, emit HEC JSON from the consent microservice with explicit `action`, `purpose_id`, and hashed subject ID fields; (3) map URIs to consent purposes via a lookup `consent_uri_map.csv`; (4) restrict index ACLs to privacy teams; (5) schedule weekly reporting for consent withdrawal ratio monitoring.",
              "z": "Time chart (consent page hits), Stacked bar (grant vs revoke/withdraw), Table (top consent events by URI).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Apache Web Server](https://splunkbase.splunk.com/app/3186), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Apache Web Server (Splunkbase 3186), HTTP Event Collector (HEC — platform capability for structured JSON from consent APIs).\n• Ensure the following data sources are available: `index=web` `sourcetype=\"access_combined\"` (clientip, uri, method, status, useragent) for consent page interactions; or custom HEC JSON events with explicit consent fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest Apache/nginx access logs via TA 3186 or Universal Forwarder file inputs; (2) for richer evidence, emit HEC JSON from the consent microservice with explicit `action`, `purpose_id`, and hashed subject ID fields; (3) map URIs to consent purposes via a lookup `consent_uri_map.csv`; (4) restrict index ACLs to privacy teams; (5) schedule weekly reporting for consent withdrawal ratio monitoring.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\" earliest=-7d\n    (uri=\"*/consent*\" OR uri=\"*/privacy-preferences*\")\n| rex field=uri_query \"action=(?<consent_action>[^&]+)\"\n| eval consent_event=coalesce(consent_action, if(status=200, \"page_view\", \"error\"))\n| stats count by clientip, uri, consent_event, status\n| sort - count\n```\n\nUnderstanding this SPL\n\n**GDPR Consent Management Audit Trail (Art. 7)** — Preserves a searchable trail of consent grant, refuse, and withdraw events from web applications for accountability and consent withdrawal parity requirements.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` (clientip, uri, method, status, useragent) for consent page interactions; or custom HEC JSON events with explicit consent fields. **App/TA** (typical add-on context): Splunk Add-on for Apache Web Server (Splunkbase 3186), HTTP Event Collector (HEC — platform capability for structured JSON from consent APIs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **consent_event** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by clientip, uri, consent_event, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Consent Management Audit Trail (Art. 
7)** — Preserves a searchable trail of consent grant, refuse, and withdraw events from web applications for accountability and consent withdrawal parity requirements.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` (clientip, uri, method, status, useragent) for consent page interactions; or custom HEC JSON events with explicit consent fields. **App/TA** (typical add-on context): Splunk Add-on for Apache Web Server (Splunkbase 3186), HTTP Event Collector (HEC — platform capability for structured JSON from consent APIs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (consent page hits), Stacked bar (grant vs revoke/withdraw), Table (top consent events by URI).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-26",
              "sver": "",
              "rby": "",
              "ge": "We keep an unbroken audit trail of every time someone clicks 'allow' or 'withdraw consent' on the cookie or privacy banner — what they agreed to, what version of the wording they were shown, and when they changed their mind. If the 'withdraw' button stops working as easily as the 'allow' button, the trail tells us before the regulator does.",
              "mtype": [
                "Compliance",
                "Audit"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Web (when CIM-tagged via TA 3186)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status | sort - count",
              "e": [
                "apache"
              ],
              "em": [
                "apache_httpd"
              ],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Per-purpose evidence that every grant carries a parseable TCF v2.2 consent_string and ui_version (Art.7(1) demonstration), and that the rolling withdrawal-to-grant ratio sits inside the privacy-office-signed band evidencing withdrawal-as-easy-as-grant parity (Art.7(3)).",
                  "ea": "Per-purpose row in audit_evidence (7y retention, privacy_office ACL): purpose_id, grants, withdrawals, ratio, grants_missing_consent_string, art7_1_evidence_gap, art7_3_parity_gap. Plus raw CMP records (subject_id_hash, action, purpose_id, tcf_version, consent_string, ui_version, geo, ts_grant/ts_withdraw) in the consent index. Weekly CSV digest to ServiceNow GRC for DPIA review."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.6",
              "n": "GDPR Cross-Border Data Transfer Monitoring (Art. 44-49)",
              "c": "critical",
              "f": "advanced",
              "v": "Highlights outbound traffic volumes to destinations outside the approved EEA/adequacy footprint so transfers can be gated by SCCs, BCRs, TIAs, or blocking controls.",
              "t": "Splunk Common Information Model Add-on (Splunkbase 1621), `Splunk_TA_paloalto` (Splunkbase 2757), `TA-fortinet_fortigate`, `Splunk_TA_cisco-asa`, or equivalent firewall TA populating Network_Traffic data model",
              "d": "CIM `Network_Traffic` data model (`All_Traffic.dest`, `All_Traffic.bytes_out`, `All_Traffic.action`) — backed by sourcetypes such as `sourcetype=\"pan:traffic\"`, `sourcetype=\"cisco:asa\"`, or `sourcetype=\"fortigate_traffic\"`",
              "q": "| tstats summariesonly=t sum(All_Traffic.bytes_out) as bytes_out\n    from datamodel=Network_Traffic.All_Traffic\n    where All_Traffic.action=\"allowed\"\n    by All_Traffic.dest\n| rename All_Traffic.* as *\n| iplocation dest\n| lookup eea_and_adequate_countries.csv Country OUTPUT transfer_basis\n| where isnull(transfer_basis) OR transfer_basis=\"restricted\"\n| eval bytes_gb=round(bytes_out/1073741824, 2)\n| sort 100 - bytes_out\n| head 100\n| table dest, Country, bytes_gb, transfer_basis",
              "m": "(1) Accelerate `Network_Traffic` data model in Settings > Data Models; (2) create `eea_and_adequate_countries.csv` with `Country` values matching `iplocation` output (MaxMind) and your legal team's adequacy list (EEA + UK + other recognised adequacy decisions); (3) add a `transfer_basis` column (e.g. SCC, BCR, adequacy) for approved destinations; (4) tune with CDN/exception lookups by `dest`.",
              "z": "Choropleth (top non-EEA destinations), Bar chart (bytes by country), Table (restricted transfers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [Splunkbase app 2757](https://splunkbase.splunk.com/app/2757), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1048",
                "T1530"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Common Information Model Add-on (Splunkbase 1621), `Splunk_TA_paloalto` (Splunkbase 2757), `TA-fortinet_fortigate`, `Splunk_TA_cisco-asa`, or equivalent firewall TA populating Network_Traffic data model.\n• Ensure the following data sources are available: CIM `Network_Traffic` data model (`All_Traffic.dest`, `All_Traffic.bytes_out`, `All_Traffic.action`) — backed by sourcetypes such as `sourcetype=\"pan:traffic\"`, `sourcetype=\"cisco:asa\"`, or `sourcetype=\"fortigate_traffic\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Accelerate `Network_Traffic` data model in Settings > Data Models; (2) create `eea_and_adequate_countries.csv` with `Country` values matching `iplocation` output (MaxMind) and your legal team's adequacy list (EEA + UK + other recognised adequacy decisions); (3) add a `transfer_basis` column (e.g. SCC, BCR, adequacy) for approved destinations; (4) tune with CDN/exception lookups by `dest`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_out) as bytes_out\n    from datamodel=Network_Traffic.All_Traffic\n    where All_Traffic.action=\"allowed\"\n    by All_Traffic.dest\n| rename All_Traffic.* as *\n| iplocation dest\n| lookup eea_and_adequate_countries.csv Country OUTPUT transfer_basis\n| where isnull(transfer_basis) OR transfer_basis=\"restricted\"\n| eval bytes_gb=round(bytes_out/1073741824, 2)\n| sort 100 - bytes_out\n| head 100\n| table dest, Country, bytes_gb, transfer_basis\n```\n\nUnderstanding this SPL\n\n**GDPR Cross-Border Data Transfer Monitoring (Art. 44-49)** — Highlights outbound traffic volumes to destinations outside the approved EEA/adequacy footprint so transfers can be gated by SCCs, BCRs, TIAs, or blocking controls.\n\nDocumented **Data sources**: CIM `Network_Traffic` data model (`All_Traffic.dest`, `All_Traffic.bytes_out`, `All_Traffic.action`) — backed by sourcetypes such as `sourcetype=\"pan:traffic\"`, `sourcetype=\"cisco:asa\"`, or `sourcetype=\"fortigate_traffic\"`. **App/TA** (typical add-on context): Splunk Common Information Model Add-on (Splunkbase 1621), `Splunk_TA_paloalto` (Splunkbase 2757), `TA-fortinet_fortigate`, `Splunk_TA_cisco-asa`, or equivalent firewall TA populating Network_Traffic data model. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Pipeline stage (see **GDPR Cross-Border Data Transfer Monitoring (Art. 
44-49)**): iplocation dest\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(transfer_basis) OR transfer_basis=\"restricted\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **bytes_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Pipeline stage (see **GDPR Cross-Border Data Transfer Monitoring (Art. 44-49)**): table dest, Country, bytes_gb, transfer_basis\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Cross-Border Data Transfer Monitoring (Art. 44-49)** — Highlights outbound traffic volumes to destinations outside the approved EEA/adequacy footprint so transfers can be gated by SCCs, BCRs, TIAs, or blocking controls.\n\nDocumented **Data sources**: CIM `Network_Traffic` data model (`All_Traffic.dest`, `All_Traffic.bytes_out`, `All_Traffic.action`) — backed by sourcetypes such as `sourcetype=\"pan:traffic\"`, `sourcetype=\"cisco:asa\"`, or `sourcetype=\"fortigate_traffic\"`. **App/TA** (typical add-on context): Splunk Common Information Model Add-on (Splunkbase 1621), `Splunk_TA_paloalto` (Splunkbase 2757), `TA-fortinet_fortigate`, `Splunk_TA_cisco-asa`, or equivalent firewall TA populating Network_Traffic data model. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Choropleth (top non-EEA destinations), Bar chart (bytes by country), Table (restricted transfers).",
              "script": "",
              "premium": "Splunk Enterprise Security (optional, for asset/identity context)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We highlights outbound traffic volumes to destinations outside the approved EEA/adequacy footprint so transfers can be gated by SCCs, BCRs, TIAs, or blocking controls so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [
                "cisco",
                "fortinet",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "fortinet_fortigate",
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.44",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.44 (International transfers — general principle) is enforced — Splunk UC-22.1.6: GDPR Cross-Border Data Transfer Monitoring.",
                  "ea": "Saved search 'UC-22.1.6' running on sourcetype pan:traffic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.45",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.45 (Transfers via adequacy decision) is enforced — Splunk UC-22.1.6: GDPR Cross-Border Data Transfer Monitoring.",
                  "ea": "Saved search 'UC-22.1.6' running on sourcetype pan:traffic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.46 (Transfers subject to safeguards) is enforced — Splunk UC-22.1.6: GDPR Cross-Border Data Transfer Monitoring.",
                  "ea": "Saved search 'UC-22.1.6' running on sourcetype pan:traffic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.47",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.47 is enforced — Splunk UC-22.1.6: GDPR Cross-Border Data Transfer Monitoring.",
                  "ea": "Saved search 'UC-22.1.6' running on sourcetype pan:traffic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.48",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.48 is enforced — Splunk UC-22.1.6: GDPR Cross-Border Data Transfer Monitoring.",
                  "ea": "Saved search 'UC-22.1.6' running on sourcetype pan:traffic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.49",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.49 is enforced — Splunk UC-22.1.6: GDPR Cross-Border Data Transfer Monitoring.",
                  "ea": "Saved search 'UC-22.1.6' running on sourcetype pan:traffic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Cisco Security Cloud",
                  "id": 7404,
                  "url": "https://splunkbase.splunk.com/app/7404",
                  "desc": "Modular dashboards and health checks for Cisco Secure Firewall, Duo, Endpoint",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/37673b17-99fd-4dc1-a5f6-81b9cd15cab7.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/c8eb0ce4-f7bd-412d-950c-63c8c20da0a7.png"
                  ]
                },
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.7",
              "n": "GDPR Security of Processing — Encryption and Pseudonymisation Coverage (Art. 32)",
              "c": "critical",
              "f": "intermediate",
              "v": "Article 32 requires controllers and processors to implement measures ensuring confidentiality, integrity, availability and resilience of processing systems — explicitly calling out pseudonymisation and encryption. This use case continuously monitors encryption-at-rest status for databases holding personal data, TLS enforcement on processing systems, and pseudonymisation coverage — providing the technical evidence that Art. 32 controls are operational, not just documented.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809), Tenable Add-On for Splunk (Splunkbase 4060)",
              "d": "CIM Certificates data model, `index=vulnerability` (crypto-related findings), `index=network` (TLS metadata), database audit logs",
              "q": "| tstats `summariesonly` dc(All_Traffic.dest_port) as ports, count\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.app IN (\"http\",\"ftp\",\"telnet\",\"smtp\") NOT All_Traffic.app IN (\"https\",\"ftps\",\"smtps\",\"ssh\")\n  by All_Traffic.dest All_Traffic.app\n| rename All_Traffic.* as *\n| lookup gdpr_personal_data_systems.csv dest OUTPUT system_name, data_category, contains_pii\n| where contains_pii=\"true\"\n| sort - count\n| table dest, system_name, data_category, app, count",
              "m": "(1) Create `gdpr_personal_data_systems.csv` listing all systems processing personal data (from Art. 30 register); (2) detect unencrypted protocols (HTTP, FTP, Telnet) to those systems; (3) monitor TLS certificate health for personal data endpoints; (4) track pseudonymisation implementation via application audit logs; (5) alert on any unencrypted connection to PII-bearing systems.",
              "z": "Table (PII systems with unencrypted connections), Pie chart (encrypted vs unencrypted traffic to PII systems), Bar chart (unencrypted protocols by system), Single value (% PII systems fully encrypted).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809), [Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1005",
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809), Tenable Add-On for Splunk (Splunkbase 4060).\n• Ensure the following data sources are available: CIM Certificates data model, `index=vulnerability` (crypto-related findings), `index=network` (TLS metadata), database audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create `gdpr_personal_data_systems.csv` listing all systems processing personal data (from Art. 30 register); (2) detect unencrypted protocols (HTTP, FTP, Telnet) to those systems; (3) monitor TLS certificate health for personal data endpoints; (4) track pseudonymisation implementation via application audit logs; (5) alert on any unencrypted connection to PII-bearing systems.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` dc(All_Traffic.dest_port) as ports, count\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.app IN (\"http\",\"ftp\",\"telnet\",\"smtp\") NOT All_Traffic.app IN (\"https\",\"ftps\",\"smtps\",\"ssh\")\n  by All_Traffic.dest All_Traffic.app\n| rename All_Traffic.* as *\n| lookup gdpr_personal_data_systems.csv dest OUTPUT system_name, data_category, contains_pii\n| where contains_pii=\"true\"\n| sort - count\n| table dest, system_name, data_category, app, count\n```\n\nUnderstanding this SPL\n\n**GDPR Security of Processing — Encryption and Pseudonymisation Coverage (Art. 32)** — Article 32 requires controllers and processors to implement measures ensuring confidentiality, integrity, availability and resilience of processing systems — explicitly calling out pseudonymisation and encryption. This use case continuously monitors encryption-at-rest status for databases holding personal data, TLS enforcement on processing systems, and pseudonymisation coverage — providing the technical evidence that Art. 32 controls are operational, not just documented.\n\nDocumented **Data sources**: CIM Certificates data model, `index=vulnerability` (crypto-related findings), `index=network` (TLS metadata), database audit logs. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809), Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where contains_pii=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **GDPR Security of Processing — Encryption and Pseudonymisation Coverage (Art. 
32)**): table dest, system_name, data_category, app, count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Security of Processing — Encryption and Pseudonymisation Coverage (Art. 32)** — Article 32 requires controllers and processors to implement measures ensuring confidentiality, integrity, availability and resilience of processing systems — explicitly calling out pseudonymisation and encryption. This use case continuously monitors encryption-at-rest status for databases holding personal data, TLS enforcement on processing systems, and pseudonymisation coverage — providing the technical evidence that Art. 32 controls are operational, not just documented.\n\nDocumented **Data sources**: CIM Certificates data model, `index=vulnerability` (crypto-related findings), `index=network` (TLS metadata), database audit logs. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809), Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (PII systems with unencrypted connections), Pie chart (encrypted vs unencrypted traffic to PII systems), Bar chart (unencrypted protocols by system), Single value (% PII systems fully encrypted).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of security of processing — encryption and pseudonymisation coverage — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Network_Traffic",
                "Certificates"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.app | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.32 (Security of processing) is enforced — Splunk UC-22.1.7: GDPR Security of Processing — Encryption and Pseudonymisation Coverage.",
                  "ea": "Saved search 'UC-22.1.7' running on index vulnerability and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.8",
              "n": "GDPR Records of Processing Activities Completeness (Art. 30)",
              "c": "high",
              "f": "intermediate",
              "v": "Article 30 requires controllers to maintain documented records of processing activities including purposes, data categories, recipients, transfers, retention periods, and technical measures. This use case validates that the processing register (ROPA) is complete and current by cross-referencing it against observed data flows and systems, surfacing systems that process personal data but are not in the register — a gap regulators frequently cite during inspections.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "CIM Network_Traffic and Authentication data models, `gdpr_ropa_register.csv` (processing activities register), ES Asset Framework",
              "q": "| tstats `summariesonly` dc(Authentication.user) as users, count\n  from datamodel=Authentication.Authentication\n  by Authentication.dest\n| rename Authentication.dest as dest\n| lookup gdpr_ropa_register.csv system_name AS dest OUTPUT processing_activity, data_category, legal_basis, retention_period, last_review_date\n| eval in_register=if(isnotnull(processing_activity), \"Yes\", \"NOT_IN_ROPA\")\n| eval review_overdue=if(isnotnull(last_review_date) AND (now()-strptime(last_review_date,\"%Y-%m-%d\"))/86400 > 365, \"OVERDUE\", \"OK\")\n| where in_register=\"NOT_IN_ROPA\" OR review_overdue=\"OVERDUE\"\n| sort - users\n| table dest, users, count, in_register, processing_activity, legal_basis, review_overdue",
              "m": "(1) Maintain `gdpr_ropa_register.csv` from your DPO's Article 30 register (exported from OneTrust, DataGrail, or manual spreadsheet); (2) compare active systems receiving authentication events against registered systems; (3) alert on systems with user activity not in the ROPA; (4) flag entries with reviews older than 12 months; (5) schedule quarterly for ROPA completeness auditing.",
              "z": "Table (unregistered systems), Single value (ROPA coverage %), Bar chart (users by unregistered system), Timeline (review dates).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: CIM Network_Traffic and Authentication data models, `gdpr_ropa_register.csv` (processing activities register), ES Asset Framework.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `gdpr_ropa_register.csv` from your DPO's Article 30 register (exported from OneTrust, DataGrail, or manual spreadsheet); (2) compare active systems receiving authentication events against registered systems; (3) alert on systems with user activity not in the ROPA; (4) flag entries with reviews older than 12 months; (5) schedule quarterly for ROPA completeness auditing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` dc(Authentication.user) as users, count\n  from datamodel=Authentication.Authentication\n  by Authentication.dest\n| rename Authentication.dest as dest\n| lookup gdpr_ropa_register.csv system_name AS dest OUTPUT processing_activity, data_category, legal_basis, retention_period, last_review_date\n| eval in_register=if(isnotnull(processing_activity), \"Yes\", \"NOT_IN_ROPA\")\n| eval review_overdue=if(isnotnull(last_review_date) AND (now()-strptime(last_review_date,\"%Y-%m-%d\"))/86400 > 365, \"OVERDUE\", \"OK\")\n| where in_register=\"NOT_IN_ROPA\" OR review_overdue=\"OVERDUE\"\n| sort - users\n| table dest, users, count, in_register, processing_activity, legal_basis, review_overdue\n```\n\nUnderstanding this SPL\n\n**GDPR Records of Processing Activities Completeness (Art. 30)** — Article 30 requires controllers to maintain documented records of processing activities including purposes, data categories, recipients, transfers, retention periods, and technical measures. This use case validates that the processing register (ROPA) is complete and current by cross-referencing it against observed data flows and systems, surfacing systems that process personal data but are not in the register — a gap regulators frequently cite during inspections.\n\nDocumented **Data sources**: CIM Network_Traffic and Authentication data models, `gdpr_ropa_register.csv` (processing activities register), ES Asset Framework. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **in_register** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **review_overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where in_register=\"NOT_IN_ROPA\" OR review_overdue=\"OVERDUE\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **GDPR Records of Processing Activities Completeness (Art. 30)**): table dest, users, count, in_register, processing_activity, legal_basis, review_overdue\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Records of Processing Activities Completeness (Art. 30)** — Article 30 requires controllers to maintain documented records of processing activities including purposes, data categories, recipients, transfers, retention periods, and technical measures. This use case validates that the processing register (ROPA) is complete and current by cross-referencing it against observed data flows and systems, surfacing systems that process personal data but are not in the register — a gap regulators frequently cite during inspections.\n\nDocumented **Data sources**: CIM Network_Traffic and Authentication data models, `gdpr_ropa_register.csv` (processing activities register), ES Asset Framework. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (unregistered systems), Single value (ROPA coverage %), Bar chart (users by unregistered system), Timeline (review dates).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of records of processing activities completeness — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.30",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.30 (Records of processing) is enforced — Splunk UC-22.1.8: GDPR Records of Processing Activities Completeness.",
                  "ea": "Saved search 'UC-22.1.8' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.9",
              "n": "GDPR Data Protection by Design — Data Minimisation Validation (Art. 25)",
              "c": "high",
              "f": "advanced",
              "v": "Article 25 requires data protection by design and by default — meaning only personal data necessary for each purpose should be collected. This use case detects systems collecting more personal data fields than their declared purpose requires (over-collection), identifies databases storing data categories beyond their ROPA scope, and monitors for new data collection endpoints appearing without DPIA coverage — catching data minimisation violations before regulators do.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Edge Processor (Splunk Cloud Platform)",
              "d": "Application logs, API access logs, database audit logs, `gdpr_ropa_register.csv`",
              "q": "(index=app OR index=web OR index=api) earliest=-7d\n| rex field=_raw \"(?i)(?:email|e-mail)[\\s=:\\\"]+(?<pii_email>[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,})\"\n| rex field=_raw \"(?i)(?:phone|mobile|tel)[\\s=:\\\"]+(?<pii_phone>[\\+]?\\d[\\d\\s\\-\\(\\)]{7,})\"\n| rex field=_raw \"(?i)(?:dob|birth|born|birthday)[\\s=:\\\"]+(?<pii_dob>\\d{4}[-/]\\d{2}[-/]\\d{2}|\\d{2}[-/]\\d{2}[-/]\\d{4})\"\n| rex field=_raw \"(?i)(?:address|street|postcode|zip)[\\s=:\\\"]+(?<pii_address>[^\\\",}{]{5,})\"\n| eval pii_fields_found=mvappend(\n    if(isnotnull(pii_email),\"email\",null()),\n    if(isnotnull(pii_phone),\"phone\",null()),\n    if(isnotnull(pii_dob),\"date_of_birth\",null()),\n    if(isnotnull(pii_address),\"address\",null()))\n| where isnotnull(pii_fields_found)\n| stats values(pii_fields_found) as pii_types, count by sourcetype, host\n| eval pii_type_count=mvcount(pii_types)\n| lookup gdpr_ropa_register.csv system_name AS host OUTPUT data_category, processing_activity\n| eval excess_collection=if(pii_type_count > 2 AND isnull(processing_activity), \"POSSIBLE_OVER_COLLECTION\", \"Review\")\n| sort - pii_type_count\n| table host, sourcetype, pii_types, pii_type_count, processing_activity, data_category, excess_collection",
              "m": "(1) Run weekly against application and API logs; (2) tune PII detection regex patterns for your data formats; (3) compare detected PII field types against declared ROPA data categories per system; (4) escalate systems collecting PII categories not in their declared purpose; (5) integrate with Edge Processor for ingest-time PII masking on confirmed over-collection sources.",
              "z": "Table (systems with excess PII collection), Bar chart (PII types by system), Heatmap (PII density across sourcetypes), Single value (systems flagged for review).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Edge Processor (Splunk Cloud Platform).\n• Ensure the following data sources are available: Application logs, API access logs, database audit logs, `gdpr_ropa_register.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Run weekly against application and API logs; (2) tune PII detection regex patterns for your data formats; (3) compare detected PII field types against declared ROPA data categories per system; (4) escalate systems collecting PII categories not in their declared purpose; (5) integrate with Edge Processor for ingest-time PII masking on confirmed over-collection sources.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=app OR index=web OR index=api) earliest=-7d\n| rex field=_raw \"(?i)(?:email|e-mail)[\\s=:\\\"]+(?<pii_email>[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,})\"\n| rex field=_raw \"(?i)(?:phone|mobile|tel)[\\s=:\\\"]+(?<pii_phone>[\\+]?\\d[\\d\\s\\-\\(\\)]{7,})\"\n| rex field=_raw \"(?i)(?:dob|birth|born|birthday)[\\s=:\\\"]+(?<pii_dob>\\d{4}[-/]\\d{2}[-/]\\d{2}|\\d{2}[-/]\\d{2}[-/]\\d{4})\"\n| rex field=_raw \"(?i)(?:address|street|postcode|zip)[\\s=:\\\"]+(?<pii_address>[^\\\",}{]{5,})\"\n| eval pii_fields_found=mvappend(\n    if(isnotnull(pii_email),\"email\",null()),\n    if(isnotnull(pii_phone),\"phone\",null()),\n    if(isnotnull(pii_dob),\"date_of_birth\",null()),\n    if(isnotnull(pii_address),\"address\",null()))\n| where isnotnull(pii_fields_found)\n| stats values(pii_fields_found) as pii_types, count by sourcetype, host\n| eval pii_type_count=mvcount(pii_types)\n| lookup gdpr_ropa_register.csv system_name AS host OUTPUT data_category, processing_activity\n| eval excess_collection=if(pii_type_count > 2 AND isnull(processing_activity), \"POSSIBLE_OVER_COLLECTION\", \"Review\")\n| sort - pii_type_count\n| table host, sourcetype, pii_types, pii_type_count, processing_activity, data_category, excess_collection\n```\n\nUnderstanding this SPL\n\n**GDPR Data Protection by Design — Data Minimisation Validation (Art. 25)** — Article 25 requires data protection by design and by default — meaning only personal data necessary for each purpose should be collected. This use case detects systems collecting more personal data fields than their declared purpose requires (over-collection), identifies databases storing data categories beyond their ROPA scope, and monitors for new data collection endpoints appearing without DPIA coverage — catching data minimisation violations before regulators do.\n\nDocumented **Data sources**: Application logs, API access logs, database audit logs, `gdpr_ropa_register.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Edge Processor (Splunk Cloud Platform). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app, web, api.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, index=web, index=api, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• Extracts fields with `rex` (regular expression).\n• `eval` defines or adjusts **pii_fields_found** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(pii_fields_found)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by sourcetype, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pii_type_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **excess_collection** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **GDPR Data Protection by Design — Data Minimisation Validation (Art. 25)**): table host, sourcetype, pii_types, pii_type_count, processing_activity, data_category, excess_collection\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (systems with excess PII collection), Bar chart (PII types by system), Heatmap (PII density across sourcetypes), Single value (systems flagged for review).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of data protection by design — data minimisation validation — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.25",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.25 (Data protection by design and by default) is enforced — Splunk UC-22.1.9: GDPR Data Protection by Design — Data Minimisation Validation.",
                  "ea": "Saved search 'UC-22.1.9' running on Application logs, API access logs, database audit logs, gdpr_ropa_register.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.10",
              "n": "GDPR Privileged Access to Personal Data Stores (Art. 5(1)(f) / Art. 32)",
              "c": "critical",
              "f": "intermediate",
              "v": "Articles 5(1)(f) and 32 require integrity and confidentiality of personal data. Privileged access to databases and file stores containing personal data is the highest-risk vector for both accidental exposure and malicious exfiltration. This use case monitors DBA/admin access to personal data stores, detects bulk data exports, and identifies access outside approved change windows — providing the accountability evidence regulators expect.",
              "t": "Splunk Add-on for Microsoft SQL Server (Splunkbase 2648), Splunk Add-on for Oracle Database (Splunkbase 1910), Splunk DB Connect (Splunkbase 2686)",
              "d": "Database audit logs (`sourcetype=\"mssql:audit\"`, `sourcetype=\"oracle:audit\"`, `sourcetype=\"postgres:csv\"`), `index=dbaudit`",
              "q": "index=dbaudit sourcetype IN (\"mssql:audit\",\"oracle:audit\",\"postgres:csv\",\"mysql:audit\") earliest=-24h\n| eval user=coalesce(server_principal_name, os_username, user_name, db_user)\n| eval action=lower(coalesce(action_id, action_name, statement_type, command_tag))\n| eval is_bulk=if(match(action,\"(?i)select.*into|bulk|export|dump|copy|backup\") OR match(_raw,\"(?i)rows_affected.*[5-9]\\d{3}|rows_affected.*\\d{5,}\"), 1, 0)\n| eval after_hours=if(tonumber(strftime(_time,\"%H\"))<7 OR tonumber(strftime(_time,\"%H\"))>19, 1, 0)\n| lookup gdpr_personal_data_systems.csv dest AS database_name OUTPUT data_category, contains_pii\n| where contains_pii=\"true\" AND (is_bulk=1 OR after_hours=1 OR match(action,\"(?i)grant|alter|drop|truncate|delete\"))\n| table _time, user, database_name, data_category, action, is_bulk, after_hours\n| sort - _time",
              "m": "(1) Enable database audit logging on all systems in the ROPA that contain personal data; (2) forward audit logs via Splunk DB Connect or syslog; (3) create `gdpr_personal_data_systems.csv` with database names, data categories, and PII flags; (4) alert on bulk exports, DDL changes, and after-hours access to personal data stores; (5) require change tickets for planned administrative operations and correlate against change window lookups.",
              "z": "Table (privileged access events), Timeline (access patterns), Bar chart (actions by user), Single value (after-hours access count), Heatmap (user × hour).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft SQL Server](https://splunkbase.splunk.com/app/2648), [Splunk Add-on for Oracle Database](https://splunkbase.splunk.com/app/1910), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft SQL Server (Splunkbase 2648), Splunk Add-on for Oracle Database (Splunkbase 1910), Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: Database audit logs (`sourcetype=\"mssql:audit\"`, `sourcetype=\"oracle:audit\"`, `sourcetype=\"postgres:csv\"`), `index=dbaudit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable database audit logging on all systems in the ROPA that contain personal data; (2) forward audit logs via Splunk DB Connect or syslog; (3) create `gdpr_personal_data_systems.csv` with database names, data categories, and PII flags; (4) alert on bulk exports, DDL changes, and after-hours access to personal data stores; (5) require change tickets for planned administrative operations and correlate against change window lookups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dbaudit sourcetype IN (\"mssql:audit\",\"oracle:audit\",\"postgres:csv\",\"mysql:audit\") earliest=-24h\n| eval user=coalesce(server_principal_name, os_username, user_name, db_user)\n| eval action=lower(coalesce(action_id, action_name, statement_type, command_tag))\n| eval is_bulk=if(match(action,\"(?i)select.*into|bulk|export|dump|copy|backup\") OR match(_raw,\"(?i)rows_affected.*[5-9]\\d{3}|rows_affected.*\\d{5,}\"), 1, 0)\n| eval after_hours=if(tonumber(strftime(_time,\"%H\"))<7 OR tonumber(strftime(_time,\"%H\"))>19, 1, 0)\n| lookup gdpr_personal_data_systems.csv dest AS database_name OUTPUT data_category, contains_pii\n| where contains_pii=\"true\" AND (is_bulk=1 OR after_hours=1 OR match(action,\"(?i)grant|alter|drop|truncate|delete\"))\n| table _time, user, database_name, data_category, action, is_bulk, after_hours\n| sort - _time\n```\n\nUnderstanding this SPL\n\n**GDPR Privileged Access to Personal Data Stores (Art. 5(1)(f) / Art. 32)** — Articles 5(1)(f) and 32 require integrity and confidentiality of personal data. Privileged access to databases and file stores containing personal data is the highest-risk vector for both accidental exposure and malicious exfiltration. This use case monitors DBA/admin access to personal data stores, detects bulk data exports, and identifies access outside approved change windows — providing the accountability evidence regulators expect.\n\nDocumented **Data sources**: Database audit logs (`sourcetype=\"mssql:audit\"`, `sourcetype=\"oracle:audit\"`, `sourcetype=\"postgres:csv\"`), `index=dbaudit`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft SQL Server (Splunkbase 2648), Splunk Add-on for Oracle Database (Splunkbase 1910), Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dbaudit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dbaudit, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_bulk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **after_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where contains_pii=\"true\" AND (is_bulk=1 OR after_hours=1 OR match(action,\"(?i)grant|alter|drop|truncate|delete\"))` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Privileged Access to Personal Data Stores (Art. 5(1)(f) / Art. 32)**): table _time, user, database_name, data_category, action, is_bulk, after_hours\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Privileged Access to Personal Data Stores (Art. 5(1)(f) / Art. 32)** — Articles 5(1)(f) and 32 require integrity and confidentiality of personal data. Privileged access to databases and file stores containing personal data is the highest-risk vector for both accidental exposure and malicious exfiltration. This use case monitors DBA/admin access to personal data stores, detects bulk data exports, and identifies access outside approved change windows — providing the accountability evidence regulators expect.\n\nDocumented **Data sources**: Database audit logs (`sourcetype=\"mssql:audit\"`, `sourcetype=\"oracle:audit\"`, `sourcetype=\"postgres:csv\"`), `index=dbaudit`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft SQL Server (Splunkbase 2648), Splunk Add-on for Oracle Database (Splunkbase 1910), Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (privileged access events), Timeline (access patterns), Bar chart (actions by user), Single value (after-hours access count), Heatmap (user × hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of privileged access to personal data stores — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "Change",
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "db_connect",
                "mssql",
                "oracle"
              ],
              "em": [
                "oracle_oracle_db"
              ],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.32 (Security of processing) is enforced — Splunk UC-22.1.10: GDPR Privileged Access to Personal Data Stores.",
                  "ea": "Saved search 'UC-22.1.10' running on Database audit logs (sourcetype=\"mssql:audit\", sourcetype=\"oracle:audit\", sourcetype=\"postgres:csv\"), index=dbaudit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5(1)(f)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.5(1)(f) is enforced — Splunk UC-22.1.10: GDPR Privileged Access to Personal Data Stores.",
                  "ea": "Saved search 'UC-22.1.10' running on Database audit logs (sourcetype=\"mssql:audit\", sourcetype=\"oracle:audit\", sourcetype=\"postgres:csv\"), index=dbaudit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.11",
              "n": "GDPR Right to Erasure Verification (Art. 17)",
              "c": "critical",
              "f": "advanced",
              "v": "Article 17 gives data subjects the right to erasure (\"right to be forgotten\"). While UC-22.1.2 tracks the DSAR ticket lifecycle, this use case verifies that erasure was actually executed across all systems by searching for residual data subject identifiers after the erasure deadline — catching incomplete deletions before they become regulatory violations.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` (completed erasure requests), all indexed data (post-erasure verification scan)",
              "q": "| inputlookup gdpr_completed_erasures.csv WHERE status=\"completed\"\n| eval erasure_date=strptime(completion_date, \"%Y-%m-%d\")\n| eval days_since_erasure=round((now()-erasure_date)/86400, 0)\n| where days_since_erasure >= 7 AND days_since_erasure <= 90\n| map maxsearches=50 search=\"search index=* earliest=-90d \\\"$$subject_identifier$$\\\" | head 1 | eval subject_id=\\\"$$subject_identifier$$\\\", request_id=\\\"$$request_id$$\\\"\"\n| where isnotnull(subject_id)\n| table subject_id, request_id, index, sourcetype, host, _time",
              "m": "(1) Export completed erasure requests to `gdpr_completed_erasures.csv` with subject identifiers (hashed email, user ID, etc.) and completion dates; (2) run weekly to search for residual occurrences; (3) alert the DPO on any matches — these indicate incomplete erasure; (4) track remediation tickets for each residual finding; (5) exclude Splunk indexes with legitimate legal-hold retention from the scan and document the exception.",
              "z": "Table (residual data findings), Single value (incomplete erasures), Bar chart (residual data by system), Timeline (findings over time).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` (completed erasure requests), all indexed data (post-erasure verification scan).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export completed erasure requests to `gdpr_completed_erasures.csv` with subject identifiers (hashed email, user ID, etc.) and completion dates; (2) run weekly to search for residual occurrences; (3) alert the DPO on any matches — these indicate incomplete erasure; (4) track remediation tickets for each residual finding; (5) exclude Splunk indexes with legitimate legal-hold retention from the scan and document the exception.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_completed_erasures.csv WHERE status=\"completed\"\n| eval erasure_date=strptime(completion_date, \"%Y-%m-%d\")\n| eval days_since_erasure=round((now()-erasure_date)/86400, 0)\n| where days_since_erasure >= 7 AND days_since_erasure <= 90\n| map maxsearches=50 search=\"search index=* earliest=-90d \\\"$$subject_identifier$$\\\" | head 1 | eval subject_id=\\\"$$subject_identifier$$\\\", request_id=\\\"$$request_id$$\\\"\"\n| where isnotnull(subject_id)\n| table subject_id, request_id, index, sourcetype, host, _time\n```\n\nUnderstanding this SPL\n\n**GDPR Right to Erasure Verification (Art. 17)** — Article 17 gives data subjects the right to erasure (\"right to be forgotten\"). While UC-22.1.2 tracks the DSAR ticket lifecycle, this use case verifies that erasure was actually executed across all systems by searching for residual data subject identifiers after the erasure deadline — catching incomplete deletions before they become regulatory violations.\n\nDocumented **Data sources**: `index=itsm` (completed erasure requests), all indexed data (post-erasure verification scan). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **erasure_date** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_since_erasure** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since_erasure >= 7 AND days_since_erasure <= 90` — typically the threshold or rule expression for this monitoring goal.\n• Runs a templated search per row with `map`.\n• Filters the current rows with `where isnotnull(subject_id)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Right to Erasure Verification (Art. 17)**): table subject_id, request_id, index, sourcetype, host, _time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (residual data findings), Single value (incomplete erasures), Bar chart (residual data by system), Timeline (findings over time).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of right to erasure verification — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.17 (Right to erasure) is enforced — Splunk UC-22.1.11: GDPR Right to Erasure Verification.",
                  "ea": "Saved search 'UC-22.1.11' running on index=itsm (completed erasure requests), all indexed data (post-erasure verification scan), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.12",
              "n": "GDPR Breach Scope and Affected Data Subject Quantification (Art. 33(3))",
              "c": "critical",
              "f": "advanced",
              "v": "Article 33(3) requires breach notifications to include the categories and approximate number of affected data subjects. This use case automates breach scoping by correlating incident indicators (compromised hosts, accounts, or data stores) with the personal data register to estimate the number and categories of affected individuals — accelerating the breach assessment that must be completed within the 72-hour notification window.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`` `notable` `` macro, ES Asset Framework, `gdpr_ropa_register.csv`, CIM Authentication data model",
              "q": "`notable` urgency IN (\"high\",\"critical\") status!=\"Closed\" earliest=-7d\n| eval compromised_systems=mvappend(src, dest)\n| mvexpand compromised_systems\n| lookup gdpr_ropa_register.csv system_name AS compromised_systems OUTPUT data_category, processing_activity, estimated_data_subjects\n| where isnotnull(data_category)\n| stats sum(estimated_data_subjects) as total_affected_subjects, values(data_category) as data_categories, dc(compromised_systems) as systems_affected by rule_name, urgency\n| eval notification_required=if(total_affected_subjects > 0, \"LIKELY — Art. 33 notification to DPA\", \"Assess further\")\n| eval individual_notification=if(total_affected_subjects > 0 AND match(mvjoin(data_categories,\",\"),\"(?i)health|financial|special_category|biometric\"), \"LIKELY — Art. 34 notification to subjects\", \"Assess further\")\n| table rule_name, urgency, systems_affected, total_affected_subjects, data_categories, notification_required, individual_notification",
              "m": "(1) Add `estimated_data_subjects` field to `gdpr_ropa_register.csv` (approximate count of records/individuals per system); (2) tag ES notables that represent personal data breaches with relevant compromised hosts; (3) auto-calculate affected scope within minutes of breach detection; (4) use output to pre-populate Art. 33 notification forms; (5) flag incidents involving special category data (Art. 9) for Art. 34 direct notification to affected individuals.",
              "z": "Single value (estimated affected subjects), Table (breach scope by incident), Bar chart (data categories involved), Map (affected systems).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048",
                "T1005"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `` `notable` `` macro, ES Asset Framework, `gdpr_ropa_register.csv`, CIM Authentication data model.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Add `estimated_data_subjects` field to `gdpr_ropa_register.csv` (approximate count of records/individuals per system); (2) tag ES notables that represent personal data breaches with relevant compromised hosts; (3) auto-calculate affected scope within minutes of breach detection; (4) use output to pre-populate Art. 33 notification forms; (5) flag incidents involving special category data (Art. 9) for Art. 34 direct notification to affected individuals.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` urgency IN (\"high\",\"critical\") status!=\"Closed\" earliest=-7d\n| eval compromised_systems=mvappend(src, dest)\n| mvexpand compromised_systems\n| lookup gdpr_ropa_register.csv system_name AS compromised_systems OUTPUT data_category, processing_activity, estimated_data_subjects\n| where isnotnull(data_category)\n| stats sum(estimated_data_subjects) as total_affected_subjects, values(data_category) as data_categories, dc(compromised_systems) as systems_affected by rule_name, urgency\n| eval notification_required=if(total_affected_subjects > 0, \"LIKELY — Art. 33 notification to DPA\", \"Assess further\")\n| eval individual_notification=if(total_affected_subjects > 0 AND match(mvjoin(data_categories,\",\"),\"(?i)health|financial|special_category|biometric\"), \"LIKELY — Art. 34 notification to subjects\", \"Assess further\")\n| table rule_name, urgency, systems_affected, total_affected_subjects, data_categories, notification_required, individual_notification\n```\n\nUnderstanding this SPL\n\n**GDPR Breach Scope and Affected Data Subject Quantification (Art. 33(3))** — Article 33(3) requires breach notifications to include the categories and approximate number of affected data subjects. This use case automates breach scoping by correlating incident indicators (compromised hosts, accounts, or data stores) with the personal data register to estimate the number and categories of affected individuals — accelerating the breach assessment that must be completed within the 72-hour notification window.\n\nDocumented **Data sources**: `` `notable` `` macro, ES Asset Framework, `gdpr_ropa_register.csv`, CIM Authentication data model. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **compromised_systems** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(data_category)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by rule_name, urgency** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **notification_required** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **individual_notification** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **GDPR Breach Scope and Affected Data Subject Quantification (Art. 33(3))**): table rule_name, urgency, systems_affected, total_affected_subjects, data_categories, notification_required, individual_notification\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (estimated affected subjects), Table (breach scope by incident), Bar chart (data categories involved), Map (affected systems).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of breach scope and affected data subject quantification — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.33(3)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.33(3) is enforced — Splunk UC-22.1.12: GDPR Breach Scope and Affected Data Subject Quantification.",
                  "ea": "Saved search 'UC-22.1.12' running on notable macro, ES Asset Framework, gdpr_ropa_register.csv, CIM Authentication data model, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.13",
              "n": "GDPR High-Risk Breach Communication to Data Subjects (Art. 34)",
              "c": "critical",
              "f": "intermediate",
              "v": "Article 34 requires controllers to communicate personal data breaches directly to affected data subjects \"without undue delay\" when the breach is likely to result in a high risk to their rights and freedoms. While Art. 33 covers DPA notification, Art. 34 addresses the often-overlooked obligation to notify individuals. This use case tracks whether high-risk breaches have triggered individual notification workflows and monitors their completion.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`` `notable` `` macro, `index=itsm` (notification workflow tickets), `gdpr_breach_notifications.csv` (KV store)",
              "q": "`notable` urgency=\"critical\" earliest=-30d\n| lookup gdpr_breach_notifications.csv notable_id AS event_id OUTPUT art33_notified, art34_required, art34_notified, art34_date, subjects_notified_count\n| eval art34_status=case(\n    art34_required=\"yes\" AND art34_notified=\"yes\", \"COMPLETE\",\n    art34_required=\"yes\" AND isnull(art34_notified), \"PENDING — notify subjects\",\n    art34_required=\"no\", \"Not required\",\n    isnull(art34_required), \"ASSESS — determine if Art.34 applies\",\n    1=1, \"Unknown\")\n| where art34_status IN (\"PENDING — notify subjects\", \"ASSESS — determine if Art.34 applies\")\n| table _time, rule_name, urgency, owner, art33_notified, art34_required, art34_status\n| sort - _time",
              "m": "(1) Create `gdpr_breach_notifications.csv` KV store linking notable IDs to notification status; (2) when a breach involves special category data, financial data, or affects >1000 subjects, default `art34_required` to \"yes\"; (3) alert the DPO on pending Art. 34 notifications; (4) track notification method (email, letter, public notice) and completion dates; (5) document Art. 34(3) exceptions (encrypted data, mitigation applied, disproportionate effort requiring public communication).",
              "z": "Table (pending individual notifications), Single value (breaches requiring Art. 34 notification), Bar chart (notification status distribution), Timeline (notification lifecycle).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `` `notable` `` macro, `index=itsm` (notification workflow tickets), `gdpr_breach_notifications.csv` (KV store).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create `gdpr_breach_notifications.csv` KV store linking notable IDs to notification status; (2) when a breach involves special category data, financial data, or affects >1000 subjects, default `art34_required` to \"yes\"; (3) alert the DPO on pending Art. 34 notifications; (4) track notification method (email, letter, public notice) and completion dates; (5) document Art. 34(3) exceptions (encrypted data, mitigation applied, disproportionate effort requiring public communication).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` urgency=\"critical\" earliest=-30d\n| lookup gdpr_breach_notifications.csv notable_id AS event_id OUTPUT art33_notified, art34_required, art34_notified, art34_date, subjects_notified_count\n| eval art34_status=case(\n    art34_required=\"yes\" AND art34_notified=\"yes\", \"COMPLETE\",\n    art34_required=\"yes\" AND isnull(art34_notified), \"PENDING — notify subjects\",\n    art34_required=\"no\", \"Not required\",\n    isnull(art34_required), \"ASSESS — determine if Art.34 applies\",\n    1=1, \"Unknown\")\n| where art34_status IN (\"PENDING — notify subjects\", \"ASSESS — determine if Art.34 applies\")\n| table _time, rule_name, urgency, owner, art33_notified, art34_required, art34_status\n| sort - _time\n```\n\nUnderstanding this SPL\n\n**GDPR High-Risk Breach Communication to Data Subjects (Art. 34)** — Article 34 requires controllers to communicate personal data breaches directly to affected data subjects \"without undue delay\" when the breach is likely to result in a high risk to their rights and freedoms. While Art. 33 covers DPA notification, Art. 34 addresses the often-overlooked obligation to notify individuals. This use case tracks whether high-risk breaches have triggered individual notification workflows and monitors their completion.\n\nDocumented **Data sources**: `` `notable` `` macro, `index=itsm` (notification workflow tickets), `gdpr_breach_notifications.csv` (KV store). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **art34_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where art34_status IN (\"PENDING — notify subjects\", \"ASSESS — determine if Art.34 applies\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR High-Risk Breach Communication to Data Subjects (Art. 
34)**): table _time, rule_name, urgency, owner, art33_notified, art34_required, art34_status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (pending individual notifications), Single value (breaches requiring Art. 34 notification), Bar chart (notification status distribution), Timeline (notification lifecycle).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of high-risk breach communication to data subjects — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.34",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.34 (Breach communication to data subjects) is enforced — Splunk UC-22.1.13: GDPR High-Risk Breach Communication to Data Subjects.",
                  "ea": "Saved search 'UC-22.1.13' running on notable macro, index=itsm (notification workflow tickets), gdpr_breach_notifications.csv (KV store), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.14",
              "n": "GDPR Data Protection Impact Assessment Coverage (Art. 35)",
              "c": "high",
              "f": "intermediate",
              "v": "Article 35 requires a Data Protection Impact Assessment (DPIA) for processing likely to result in high risk — including systematic monitoring of public areas, large-scale processing of special categories, and automated decision-making with legal effects. This use case tracks DPIA completion against processing activities identified in the ROPA, flags high-risk processing without DPIA coverage, and monitors for new processing activities that may require a DPIA.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`gdpr_ropa_register.csv`, `gdpr_dpia_register.csv` (DPIA records), ES Asset Framework",
              "q": "| inputlookup gdpr_ropa_register.csv\n| eval high_risk=case(\n    match(lower(processing_activity),\"(?i)profil|automat|scoring|systematic.monitor|biometric|genetic|health|ethnic|political|religious|trade.union|criminal\"), \"HIGH_RISK\",\n    match(lower(data_category),\"(?i)special_category|sensitive|health|biometric|genetic\"), \"HIGH_RISK\",\n    match(lower(scale),\"(?i)large\"), \"HIGH_RISK\",\n    1=1, \"Standard\")\n| where high_risk=\"HIGH_RISK\"\n| lookup gdpr_dpia_register.csv processing_activity OUTPUT dpia_status, dpia_date, dpia_reviewer, residual_risk\n| eval dpia_coverage=case(\n    dpia_status=\"completed\", \"COVERED\",\n    dpia_status=\"in_progress\", \"IN_PROGRESS\",\n    isnull(dpia_status), \"MISSING — DPIA REQUIRED\")\n| eval review_overdue=if(isnotnull(dpia_date) AND (now()-strptime(dpia_date,\"%Y-%m-%d\"))/86400 > 730, \"REVIEW_DUE\", \"OK\")\n| where dpia_coverage=\"MISSING — DPIA REQUIRED\" OR review_overdue=\"REVIEW_DUE\"\n| table processing_activity, system_name, data_category, high_risk, dpia_coverage, dpia_date, review_overdue\n| sort dpia_coverage",
              "m": "(1) Maintain `gdpr_dpia_register.csv` with DPIA records linked to ROPA processing activities; (2) auto-classify high-risk processing using Art. 35(3) criteria and DPA guidance lists; (3) alert the DPO on processing activities lacking DPIA coverage; (4) flag DPIAs not reviewed in >2 years; (5) when new systems appear in authentication logs without ROPA entries (UC-22.1.8), flag them for DPIA assessment.",
              "z": "Table (DPIA coverage gaps), Pie chart (covered vs missing vs in-progress), Single value (missing DPIAs count), Bar chart (high-risk processing by category).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `gdpr_ropa_register.csv`, `gdpr_dpia_register.csv` (DPIA records), ES Asset Framework.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `gdpr_dpia_register.csv` with DPIA records linked to ROPA processing activities; (2) auto-classify high-risk processing using Art. 35(3) criteria and DPA guidance lists; (3) alert the DPO on processing activities lacking DPIA coverage; (4) flag DPIAs not reviewed in >2 years; (5) when new systems appear in authentication logs without ROPA entries (UC-22.1.8), flag them for DPIA assessment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_ropa_register.csv\n| eval high_risk=case(\n    match(lower(processing_activity),\"(?i)profil|automat|scoring|systematic.monitor|biometric|genetic|health|ethnic|political|religious|trade.union|criminal\"), \"HIGH_RISK\",\n    match(lower(data_category),\"(?i)special_category|sensitive|health|biometric|genetic\"), \"HIGH_RISK\",\n    match(lower(scale),\"(?i)large\"), \"HIGH_RISK\",\n    1=1, \"Standard\")\n| where high_risk=\"HIGH_RISK\"\n| lookup gdpr_dpia_register.csv processing_activity OUTPUT dpia_status, dpia_date, dpia_reviewer, residual_risk\n| eval dpia_coverage=case(\n    dpia_status=\"completed\", \"COVERED\",\n    dpia_status=\"in_progress\", \"IN_PROGRESS\",\n    isnull(dpia_status), \"MISSING — DPIA REQUIRED\")\n| eval review_overdue=if(isnotnull(dpia_date) AND (now()-strptime(dpia_date,\"%Y-%m-%d\"))/86400 > 730, \"REVIEW_DUE\", \"OK\")\n| where dpia_coverage=\"MISSING — DPIA REQUIRED\" OR review_overdue=\"REVIEW_DUE\"\n| table processing_activity, system_name, data_category, high_risk, dpia_coverage, dpia_date, review_overdue\n| sort dpia_coverage\n```\n\nUnderstanding this SPL\n\n**GDPR Data Protection Impact Assessment Coverage (Art. 35)** — Article 35 requires a Data Protection Impact Assessment (DPIA) for processing likely to result in high risk — including systematic monitoring of public areas, large-scale processing of special categories, and automated decision-making with legal effects. This use case tracks DPIA completion against processing activities identified in the ROPA, flags high-risk processing without DPIA coverage, and monitors for new processing activities that may require a DPIA.\n\nDocumented **Data sources**: `gdpr_ropa_register.csv`, `gdpr_dpia_register.csv` (DPIA records), ES Asset Framework. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **high_risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where high_risk=\"HIGH_RISK\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **dpia_coverage** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **review_overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where dpia_coverage=\"MISSING — DPIA REQUIRED\" OR review_overdue=\"REVIEW_DUE\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Data Protection Impact Assessment Coverage (Art. 35)**): table processing_activity, system_name, data_category, high_risk, dpia_coverage, dpia_date, review_overdue\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (DPIA coverage gaps), Pie chart (covered vs missing vs in-progress), Single value (missing DPIAs count), Bar chart (high-risk processing by category).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of data protection impact assessment coverage — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.35",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.35 (DPIA) is enforced — Splunk UC-22.1.14: GDPR Data Protection Impact Assessment Coverage.",
                  "ea": "Saved search 'UC-22.1.14' running on gdpr_ropa_register.csv, gdpr_dpia_register.csv (DPIA records), ES Asset Framework, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.15",
              "n": "GDPR Third-Party Processor Compliance Monitoring (Art. 28)",
              "c": "high",
              "f": "intermediate",
              "v": "Article 28 requires controllers to use only processors providing sufficient guarantees and to monitor their compliance. This use case tracks data flows to processors, monitors their security posture indicators, detects data transfers exceeding agreed scope, and flags processors with overdue security assessments — providing continuous evidence that processor oversight is active, not just contractual.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "CIM Network_Traffic data model, proxy/firewall logs, `gdpr_processor_register.csv`",
              "q": "| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out dc(All_Traffic.src) as sources\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"allowed\"\n  by All_Traffic.dest\n| rename All_Traffic.* as *\n| lookup gdpr_processor_register.csv dest_domain AS dest OUTPUT processor_name, dpa_signed, last_audit_date, approved_data_categories, sub_processors\n| where isnotnull(processor_name)\n| eval days_since_audit=if(isnotnull(last_audit_date), round((now()-strptime(last_audit_date,\"%Y-%m-%d\"))/86400,0), 9999)\n| eval audit_status=case(days_since_audit > 365, \"OVERDUE\", days_since_audit > 270, \"DUE_SOON\", 1=1, \"CURRENT\")\n| eval bytes_gb=round(bytes_out/1073741824, 2)\n| sort - bytes_gb\n| table dest, processor_name, dpa_signed, bytes_gb, sources, audit_status, days_since_audit, sub_processors",
              "m": "(1) Build `gdpr_processor_register.csv` from your Art. 28 processor inventory with destination domains/IPs, DPA status, and audit dates; (2) monitor actual data transfer volumes to each processor; (3) alert on processors with overdue audits or unsigned DPAs; (4) detect unexpected volume spikes indicating scope creep; (5) track sub-processor changes reported by primary processors.",
              "z": "Table (processor compliance status), Bar chart (data transfer by processor), Single value (overdue audits count), Timeline (transfer volume trends).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: CIM Network_Traffic data model, proxy/firewall logs, `gdpr_processor_register.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build `gdpr_processor_register.csv` from your Art. 28 processor inventory with destination domains/IPs, DPA status, and audit dates; (2) monitor actual data transfer volumes to each processor; (3) alert on processors with overdue audits or unsigned DPAs; (4) detect unexpected volume spikes indicating scope creep; (5) track sub-processor changes reported by primary processors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out dc(All_Traffic.src) as sources\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"allowed\"\n  by All_Traffic.dest\n| rename All_Traffic.* as *\n| lookup gdpr_processor_register.csv dest_domain AS dest OUTPUT processor_name, dpa_signed, last_audit_date, approved_data_categories, sub_processors\n| where isnotnull(processor_name)\n| eval days_since_audit=if(isnotnull(last_audit_date), round((now()-strptime(last_audit_date,\"%Y-%m-%d\"))/86400,0), 9999)\n| eval audit_status=case(days_since_audit > 365, \"OVERDUE\", days_since_audit > 270, \"DUE_SOON\", 1=1, \"CURRENT\")\n| eval bytes_gb=round(bytes_out/1073741824, 2)\n| sort - bytes_gb\n| table dest, processor_name, dpa_signed, bytes_gb, sources, audit_status, days_since_audit, sub_processors\n```\n\nUnderstanding this SPL\n\n**GDPR Third-Party Processor Compliance Monitoring (Art. 28)** — Article 28 requires controllers to use only processors providing sufficient guarantees and to monitor their compliance. This use case tracks data flows to processors, monitors their security posture indicators, detects data transfers exceeding agreed scope, and flags processors with overdue security assessments — providing continuous evidence that processor oversight is active, not just contractual.\n\nDocumented **Data sources**: CIM Network_Traffic data model, proxy/firewall logs, `gdpr_processor_register.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(processor_name)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_since_audit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **audit_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **bytes_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **GDPR Third-Party Processor Compliance Monitoring (Art. 28)**): table dest, processor_name, dpa_signed, bytes_gb, sources, audit_status, days_since_audit, sub_processors\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Third-Party Processor Compliance Monitoring (Art. 28)** — Article 28 requires controllers to use only processors providing sufficient guarantees and to monitor their compliance. This use case tracks data flows to processors, monitors their security posture indicators, detects data transfers exceeding agreed scope, and flags processors with overdue security assessments — providing continuous evidence that processor oversight is active, not just contractual.\n\nDocumented **Data sources**: CIM Network_Traffic data model, proxy/firewall logs, `gdpr_processor_register.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (processor compliance status), Bar chart (data transfer by processor), Single value (overdue audits count), Timeline (transfer volume trends).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of third-party processor compliance monitoring — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.28",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.28 (Processor obligations) is enforced — Splunk UC-22.1.15: GDPR Third-Party Processor Compliance Monitoring.",
                  "ea": "Saved search 'UC-22.1.15' running on CIM Network_Traffic data model, proxy/firewall logs, gdpr_processor_register.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.16",
              "n": "GDPR Consent Withdrawal Processing Enforcement (Art. 7(3))",
              "c": "high",
              "f": "advanced",
              "v": "Article 7(3) requires that withdrawal of consent be as easy as giving consent, and processing based on consent must stop after withdrawal. While UC-22.1.5 tracks the consent audit trail, this use case verifies that processing actually ceases after a data subject withdraws consent — detecting systems that continue sending marketing communications, tracking cookies, or processing data for purposes that relied on the withdrawn consent.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), HTTP Event Collector (HEC)",
              "d": "`index=consent` (consent management platform events), `index=email` (marketing automation logs), `index=web` (tracking/analytics events), application audit logs",
              "q": "| inputlookup gdpr_consent_withdrawals.csv WHERE withdrawal_date!=\"\"\n| eval withdrawal_epoch=strptime(withdrawal_date, \"%Y-%m-%d\")\n| eval subject_hash=subject_identifier\n| join type=left subject_hash [\n    search (index=email OR index=marketing) earliest=-30d\n    | eval subject_hash=coalesce(recipient_hash, subscriber_id, user_hash)\n    | where isnotnull(subject_hash)\n    | stats count as post_withdrawal_events, latest(_time) as last_processing_time by subject_hash\n]\n| where isnotnull(post_withdrawal_events) AND last_processing_time > withdrawal_epoch\n| eval days_processing_after_withdrawal=round((last_processing_time - withdrawal_epoch)/86400, 1)\n| sort - days_processing_after_withdrawal\n| table subject_hash, withdrawal_date, consent_purpose, post_withdrawal_events, days_processing_after_withdrawal",
              "m": "(1) Export consent withdrawal events from your CMP (OneTrust, Cookiebot, custom) to `gdpr_consent_withdrawals.csv` with hashed subject identifiers and consent purposes; (2) search marketing automation and tracking systems for activity after withdrawal dates; (3) alert on any processing found after withdrawal; (4) escalate to marketing operations for immediate suppression; (5) document remediation for accountability evidence.",
              "z": "Table (subjects with post-withdrawal processing), Single value (violations found), Bar chart (violations by consent purpose), Timeline (violation discovery).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), HTTP Event Collector (HEC).\n• Ensure the following data sources are available: `index=consent` (consent management platform events), `index=email` (marketing automation logs), `index=web` (tracking/analytics events), application audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export consent withdrawal events from your CMP (OneTrust, Cookiebot, custom) to `gdpr_consent_withdrawals.csv` with hashed subject identifiers and consent purposes; (2) search marketing automation and tracking systems for activity after withdrawal dates; (3) alert on any processing found after withdrawal; (4) escalate to marketing operations for immediate suppression; (5) document remediation for accountability evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_consent_withdrawals.csv WHERE withdrawal_date!=\"\"\n| eval withdrawal_epoch=strptime(withdrawal_date, \"%Y-%m-%d\")\n| eval subject_hash=subject_identifier\n| join type=left subject_hash [\n    search (index=email OR index=marketing) earliest=-30d\n    | eval subject_hash=coalesce(recipient_hash, subscriber_id, user_hash)\n    | where isnotnull(subject_hash)\n    | stats count as post_withdrawal_events, latest(_time) as last_processing_time by subject_hash\n]\n| where isnotnull(post_withdrawal_events) AND last_processing_time > withdrawal_epoch\n| eval days_processing_after_withdrawal=round((last_processing_time - withdrawal_epoch)/86400, 1)\n| sort - days_processing_after_withdrawal\n| table subject_hash, withdrawal_date, consent_purpose, post_withdrawal_events, days_processing_after_withdrawal\n```\n\nUnderstanding this SPL\n\n**GDPR Consent Withdrawal Processing Enforcement (Art. 7(3))** — Article 7(3) requires that withdrawal of consent be as easy as giving consent, and processing based on consent must stop after withdrawal. While UC-22.1.5 tracks the consent audit trail, this use case verifies that processing actually ceases after a data subject withdraws consent — detecting systems that continue sending marketing communications, tracking cookies, or processing data for purposes that relied on the withdrawn consent.\n\nDocumented **Data sources**: `index=consent` (consent management platform events), `index=email` (marketing automation logs), `index=web` (tracking/analytics events), application audit logs. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), HTTP Event Collector (HEC). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **withdrawal_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **subject_hash** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnotnull(post_withdrawal_events) AND last_processing_time > withdrawal_epoch` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_processing_after_withdrawal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **GDPR Consent Withdrawal Processing Enforcement (Art. 7(3))**): table subject_hash, withdrawal_date, consent_purpose, post_withdrawal_events, days_processing_after_withdrawal\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (subjects with post-withdrawal processing), Single value (violations found), Bar chart (violations by consent purpose), Timeline (violation discovery).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of consent withdrawal processing enforcement — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.18 (Right to restrict processing) is enforced — Splunk UC-22.1.16: GDPR Consent Withdrawal Processing Enforcement.",
                  "ea": "Saved search 'UC-22.1.16' running on index consent and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679"
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7(3)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.7(3) is enforced — Splunk UC-22.1.16: GDPR Consent Withdrawal Processing Enforcement.",
                  "ea": "Saved search 'UC-22.1.16' running on index consent and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.18 is enforced — Splunk UC-22.1.16: GDPR Consent Withdrawal Processing Enforcement.",
                  "ea": "Saved search 'UC-22.1.16' running on index consent and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.17",
              "n": "GDPR Audit Log Integrity and Tamper Protection (Art. 5(2))",
              "c": "high",
              "f": "intermediate",
              "v": "Article 5(2) requires controllers to demonstrate compliance (accountability principle). Audit logs are the primary evidence mechanism, but logs that can be tampered with have no evidentiary value. This use case monitors Splunk's internal audit trail for suspicious activities — index deletions, user-capability changes, search-time data manipulation, and modifications to retention settings — ensuring the integrity of the very evidence used to prove GDPR compliance.",
              "t": "Splunk Enterprise / Splunk Cloud Platform (native audit capabilities)",
              "d": "`index=_audit` (Splunk audit logs), `index=_internal` (Splunk internal logs)",
              "q": "index=_audit earliest=-24h\n| where match(action,\"(?i)delete|edit_user|change_own_password|edit_roles|search\")\n  AND (match(info,\"(?i)index.*delete|frozen|retire|capabilities|admin\")\n       OR match(action,\"(?i)delete_index_data|edit_index\"))\n| eval risk_signal=case(\n    match(action,\"(?i)delete_index_data\"), \"INDEX_DATA_DELETION\",\n    match(info,\"(?i)capabilities.*admin|role.*admin\"), \"PRIVILEGE_ESCALATION\",\n    match(info,\"(?i)frozen|retire|retention\"), \"RETENTION_CHANGE\",\n    1=1, \"AUDIT_MODIFICATION\")\n| stats count by user, action, risk_signal, info\n| sort - count\n| table _time, user, action, risk_signal, info, count",
              "m": "(1) Enable Splunk audit logging (enabled by default); (2) forward `_audit` events to a separate, restricted index with extended retention; (3) alert on index data deletions, retention setting changes, and admin privilege modifications; (4) restrict `_audit` index access to compliance/security teams only; (5) consider Splunk Cloud's immutable audit trail as the source of truth for regulatory evidence.",
              "z": "Table (suspicious audit events), Timeline (audit modifications), Bar chart (events by risk signal), Single value (high-risk events today).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1070",
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform (native audit capabilities).\n• Ensure the following data sources are available: `index=_audit` (Splunk audit logs), `index=_internal` (Splunk internal logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable Splunk audit logging (enabled by default); (2) forward `_audit` events to a separate, restricted index with extended retention; (3) alert on index data deletions, retention setting changes, and admin privilege modifications; (4) restrict `_audit` index access to compliance/security teams only; (5) consider Splunk Cloud's immutable audit trail as the source of truth for regulatory evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit earliest=-24h\n| where match(action,\"(?i)delete|edit_user|change_own_password|edit_roles|search\")\n  AND (match(info,\"(?i)index.*delete|frozen|retire|capabilities|admin\")\n       OR match(action,\"(?i)delete_index_data|edit_index\"))\n| eval risk_signal=case(\n    match(action,\"(?i)delete_index_data\"), \"INDEX_DATA_DELETION\",\n    match(info,\"(?i)capabilities.*admin|role.*admin\"), \"PRIVILEGE_ESCALATION\",\n    match(info,\"(?i)frozen|retire|retention\"), \"RETENTION_CHANGE\",\n    1=1, \"AUDIT_MODIFICATION\")\n| stats count by user, action, risk_signal, info\n| sort - count\n| table _time, user, action, risk_signal, info, count\n```\n\nUnderstanding this SPL\n\n**GDPR Audit Log Integrity and Tamper Protection (Art. 5(2))** — Article 5(2) requires controllers to demonstrate compliance (accountability principle). Audit logs are the primary evidence mechanism, but logs that can be tampered with have no evidentiary value. This use case monitors Splunk's internal audit trail for suspicious activities — index deletions, user-capability changes, search-time data manipulation, and modifications to retention settings — ensuring the integrity of the very evidence used to prove GDPR compliance.\n\nDocumented **Data sources**: `index=_audit` (Splunk audit logs), `index=_internal` (Splunk internal logs). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform (native audit capabilities). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(action,\"(?i)delete|edit_user|change_own_password|edit_roles|search\")\n  AND (match(info,\"(?i)index.*delete|froze…` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **risk_signal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user, action, risk_signal, info** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **GDPR Audit Log Integrity and Tamper Protection (Art. 5(2))**): table _time, user, action, risk_signal, info, count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user, All_Changes.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Audit Log Integrity and Tamper Protection (Art. 5(2))** — Article 5(2) requires controllers to demonstrate compliance (accountability principle). Audit logs are the primary evidence mechanism, but logs that can be tampered with have no evidentiary value. This use case monitors Splunk's internal audit trail for suspicious activities — index deletions, user-capability changes, search-time data manipulation, and modifications to retention settings — ensuring the integrity of the very evidence used to prove GDPR compliance.\n\nDocumented **Data sources**: `index=_audit` (Splunk audit logs), `index=_internal` (Splunk internal logs). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform (native audit capabilities). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspicious audit events), Timeline (audit modifications), Bar chart (events by risk signal), Single value (high-risk events today).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of audit log integrity and tamper protection — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user, All_Changes.action | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5(2)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.5(2) is enforced — Splunk UC-22.1.17: GDPR Audit Log Integrity and Tamper Protection.",
                  "ea": "Saved search 'UC-22.1.17' running on index=_audit (Splunk audit logs), index=_internal (Splunk internal logs), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.18",
              "n": "GDPR Automated Decision-Making and Profiling Transparency (Art. 22)",
              "c": "high",
              "f": "advanced",
              "v": "Article 22 gives data subjects the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. This use case monitors automated decision-making systems (credit scoring, fraud detection, HR screening, insurance pricing) for transparency — tracking decision volumes, override rates, and appeal/challenge requests to ensure human review is available and exercised.",
              "t": "HTTP Event Collector (HEC — structured decision logs)",
              "d": "`index=decisions` (automated decision system audit logs via HEC), `index=itsm` (appeal/challenge tickets)",
              "q": "index=decisions sourcetype=\"automated_decision\" earliest=-30d\n| eval decision_type=coalesce(decision_type, model_name, system_name)\n| eval human_reviewed=if(match(lower(_raw),\"(?i)manual.*review|human.*override|appeal.*granted|escalat\"), 1, 0)\n| eval adverse=if(match(lower(outcome),\"(?i)denied|rejected|declined|flagged|high.risk\"), 1, 0)\n| stats count as total_decisions, sum(adverse) as adverse_decisions, sum(human_reviewed) as reviewed_by_human by decision_type\n| eval adverse_pct=round(100*adverse_decisions/total_decisions, 1)\n| eval human_review_pct=round(100*reviewed_by_human/total_decisions, 1)\n| eval compliance_risk=if(adverse_pct > 10 AND human_review_pct < 5, \"HIGH — low human review of adverse decisions\", \"Monitor\")\n| table decision_type, total_decisions, adverse_decisions, adverse_pct, reviewed_by_human, human_review_pct, compliance_risk",
              "m": "(1) Instrument automated decision systems to emit structured audit logs via HEC with decision type, outcome, data inputs used, and human review flags; (2) track Art. 22(3) challenge/appeal requests through ITSM; (3) alert when adverse decision rates are high but human review rates are low; (4) report on decision transparency for DPO annual review; (5) ensure data subjects are informed about automated decision-making per Art. 13(2)(f) and Art. 14(2)(g).",
              "z": "Bar chart (decisions by type and outcome), Table (low human review systems), Single value (overall human review rate), Line chart (adverse decision trends).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1565",
                "T1005"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC — structured decision logs).\n• Ensure the following data sources are available: `index=decisions` (automated decision system audit logs via HEC), `index=itsm` (appeal/challenge tickets).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument automated decision systems to emit structured audit logs via HEC with decision type, outcome, data inputs used, and human review flags; (2) track Art. 22(3) challenge/appeal requests through ITSM; (3) alert when adverse decision rates are high but human review rates are low; (4) report on decision transparency for DPO annual review; (5) ensure data subjects are informed about automated decision-making per Art. 13(2)(f) and Art. 14(2)(g).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=decisions sourcetype=\"automated_decision\" earliest=-30d\n| eval decision_type=coalesce(decision_type, model_name, system_name)\n| eval human_reviewed=if(match(lower(_raw),\"(?i)manual.*review|human.*override|appeal.*granted|escalat\"), 1, 0)\n| eval adverse=if(match(lower(outcome),\"(?i)denied|rejected|declined|flagged|high.risk\"), 1, 0)\n| stats count as total_decisions, sum(adverse) as adverse_decisions, sum(human_reviewed) as reviewed_by_human by decision_type\n| eval adverse_pct=round(100*adverse_decisions/total_decisions, 1)\n| eval human_review_pct=round(100*reviewed_by_human/total_decisions, 1)\n| eval compliance_risk=if(adverse_pct > 10 AND human_review_pct < 5, \"HIGH — low human review of adverse decisions\", \"Monitor\")\n| table decision_type, total_decisions, adverse_decisions, adverse_pct, reviewed_by_human, human_review_pct, compliance_risk\n```\n\nUnderstanding this SPL\n\n**GDPR Automated Decision-Making and Profiling Transparency (Art. 22)** — Article 22 gives data subjects the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. This use case monitors automated decision-making systems (credit scoring, fraud detection, HR screening, insurance pricing) for transparency — tracking decision volumes, override rates, and appeal/challenge requests to ensure human review is available and exercised.\n\nDocumented **Data sources**: `index=decisions` (automated decision system audit logs via HEC), `index=itsm` (appeal/challenge tickets). **App/TA** (typical add-on context): HTTP Event Collector (HEC — structured decision logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: decisions; **sourcetype**: automated_decision. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=decisions, sourcetype=\"automated_decision\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **decision_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **human_reviewed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **adverse** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by decision_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **adverse_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **human_review_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **compliance_risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **GDPR Automated Decision-Making and Profiling Transparency (Art. 22)**): table decision_type, total_decisions, adverse_decisions, adverse_pct, reviewed_by_human, human_review_pct, compliance_risk\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (decisions by type and outcome), Table (low human review systems), Single value (overall human review rate), Line chart (adverse decision trends).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of automated decision-making and profiling transparency — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.22",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.22 (Automated decision making) is enforced — Splunk UC-22.1.18: GDPR Automated Decision-Making and Profiling Transparency.",
                  "ea": "Saved search 'UC-22.1.18' running on index=decisions (automated decision system audit logs via HEC), index=itsm (appeal/challenge tickets), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.19",
              "n": "GDPR Data Subject Rights Response SLA Dashboard (Art. 12)",
              "c": "high",
              "f": "beginner",
              "v": "Article 12 requires controllers to respond to data subject rights requests \"without undue delay and in any event within one month,\" extendable by two months for complex requests with explanation to the data subject. While UC-22.1.2 focuses on DSARs specifically, this use case provides an executive dashboard across all rights (access, rectification, erasure, restriction, portability, objection) with SLA tracking, volume trends, and cost-per-request metrics — the compliance KPI view that DPOs and boards need.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:sc_req_item\"` (all data subject rights requests)",
              "q": "index=itsm (sourcetype=\"snow:sc_req_item\" OR sourcetype=\"snow:incident\")\n    (cat_item=\"*Subject*\" OR short_description=\"*DSAR*\" OR short_description=\"*data subject*\" OR short_description=\"*erasure*\" OR short_description=\"*rectification*\" OR short_description=\"*portability*\" OR short_description=\"*objection*\" OR short_description=\"*restriction*\")\n| eval right_type=case(\n    match(lower(short_description),\"(?i)access|dsar|subject.access\"), \"Access (Art.15)\",\n    match(lower(short_description),\"(?i)erasure|forget|deletion\"), \"Erasure (Art.17)\",\n    match(lower(short_description),\"(?i)rectif\"), \"Rectification (Art.16)\",\n    match(lower(short_description),\"(?i)portab\"), \"Portability (Art.20)\",\n    match(lower(short_description),\"(?i)object\"), \"Objection (Art.21)\",\n    match(lower(short_description),\"(?i)restrict\"), \"Restriction (Art.18)\",\n    1=1, \"Other\")\n| eval opened_epoch=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval closed_epoch=if(isnotnull(closed_at), strptime(closed_at,\"%Y-%m-%d %H:%M:%S\"), null())\n| eval response_days=if(isnotnull(closed_epoch), round((closed_epoch-opened_epoch)/86400,1), round((now()-opened_epoch)/86400,1))\n| eval sla_status=case(\n    isnotnull(closed_epoch) AND response_days<=30, \"Met\",\n    isnotnull(closed_epoch) AND response_days<=90, \"Extended (Art.12(3))\",\n    isnotnull(closed_epoch) AND response_days>90, \"BREACHED\",\n    isnull(closed_epoch) AND response_days>25, \"AT_RISK\",\n    1=1, \"In Progress\")\n| stats count by right_type, sla_status\n| chart sum(count) by right_type, sla_status",
              "m": "(1) Standardise ServiceNow catalog items or incident categories for each GDPR right type; (2) train intake teams to classify correctly; (3) schedule daily; (4) alert on AT_RISK requests approaching 30-day deadline; (5) report monthly on volume trends, SLA compliance rates, and average response times by right type.",
              "z": "Stacked bar chart (rights by SLA status), Single value (overall SLA compliance %), Table (at-risk requests), Line chart (monthly volume trend by right type), Pie chart (requests by right type).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (all data subject rights requests).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardise ServiceNow catalog items or incident categories for each GDPR right type; (2) train intake teams to classify correctly; (3) schedule daily; (4) alert on AT_RISK requests approaching 30-day deadline; (5) report monthly on volume trends, SLA compliance rates, and average response times by right type.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm (sourcetype=\"snow:sc_req_item\" OR sourcetype=\"snow:incident\")\n    (cat_item=\"*Subject*\" OR short_description=\"*DSAR*\" OR short_description=\"*data subject*\" OR short_description=\"*erasure*\" OR short_description=\"*rectification*\" OR short_description=\"*portability*\" OR short_description=\"*objection*\" OR short_description=\"*restriction*\")\n| eval right_type=case(\n    match(lower(short_description),\"(?i)access|dsar|subject.access\"), \"Access (Art.15)\",\n    match(lower(short_description),\"(?i)erasure|forget|deletion\"), \"Erasure (Art.17)\",\n    match(lower(short_description),\"(?i)rectif\"), \"Rectification (Art.16)\",\n    match(lower(short_description),\"(?i)portab\"), \"Portability (Art.20)\",\n    match(lower(short_description),\"(?i)object\"), \"Objection (Art.21)\",\n    match(lower(short_description),\"(?i)restrict\"), \"Restriction (Art.18)\",\n    1=1, \"Other\")\n| eval opened_epoch=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval closed_epoch=if(isnotnull(closed_at), strptime(closed_at,\"%Y-%m-%d %H:%M:%S\"), null())\n| eval response_days=if(isnotnull(closed_epoch), round((closed_epoch-opened_epoch)/86400,1), round((now()-opened_epoch)/86400,1))\n| eval sla_status=case(\n    isnotnull(closed_epoch) AND response_days<=30, \"Met\",\n    isnotnull(closed_epoch) AND response_days<=90, \"Extended (Art.12(3))\",\n    isnotnull(closed_epoch) AND response_days>90, \"BREACHED\",\n    isnull(closed_epoch) AND response_days>25, \"AT_RISK\",\n    1=1, \"In Progress\")\n| stats count by right_type, sla_status\n| chart sum(count) by right_type, sla_status\n```\n\nUnderstanding this SPL\n\n**GDPR Data Subject Rights Response SLA Dashboard (Art. 12)** — Article 12 requires controllers to respond to data subject rights requests \"without undue delay and in any event within one month,\" extendable by two months for complex requests with explanation to the data subject. While UC-22.1.2 focuses on DSARs specifically, this use case provides an executive dashboard across all rights (access, rectification, erasure, restriction, portability, objection) with SLA tracking, volume trends, and cost-per-request metrics — the compliance…\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (all data subject rights requests). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:sc_req_item, snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:sc_req_item\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **right_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **opened_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **closed_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **response_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by right_type, sla_status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `chart` builds a categorical visualization, grouping **by right_type, sla_status**.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar chart (rights by SLA status), Single value (overall SLA compliance %), Table (at-risk requests), Line chart (monthly volume trend by right type), Pie chart (requests by right type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of data subject rights response sla dashboard — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.12",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.12 is enforced — Splunk UC-22.1.19: GDPR Data Subject Rights Response SLA Dashboard.",
                  "ea": "Saved search 'UC-22.1.19' running on index=itsm sourcetype=\"snow:sc_req_item\" (all data subject rights requests), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.20",
              "n": "GDPR Legitimate Interest Balancing Test Evidence (Art. 6(1)(f))",
              "c": "high",
              "f": "advanced",
              "v": "Legitimate interest (Art. 6(1)(f)) is the most commonly misused legal basis — regulators have fined LinkedIn €310M and others hundreds of millions for relying on legitimate interest without proper balancing tests. This use case tracks which processing activities use legitimate interest as their legal basis, monitors whether balancing test documentation exists and is current, and detects processing scope creep beyond the original legitimate interest assessment — addressing the enforcement area that generates the highest fines in 2025-2026.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`gdpr_ropa_register.csv`, `gdpr_lia_register.csv` (Legitimate Interest Assessments), CIM Network_Traffic data model",
              "q": "| inputlookup gdpr_ropa_register.csv WHERE legal_basis=\"legitimate_interest\"\n| lookup gdpr_lia_register.csv processing_activity OUTPUT lia_status, lia_date, lia_reviewer, lia_outcome, data_subject_objections\n| eval lia_coverage=case(\n    lia_status=\"completed\" AND lia_outcome=\"justified\", \"JUSTIFIED\",\n    lia_status=\"completed\" AND lia_outcome=\"not_justified\", \"STOP_PROCESSING\",\n    lia_status=\"in_progress\", \"IN_PROGRESS\",\n    isnull(lia_status), \"MISSING — LIA REQUIRED\")\n| eval review_overdue=if(isnotnull(lia_date) AND (now()-strptime(lia_date,\"%Y-%m-%d\"))/86400 > 365, \"REVIEW_DUE\", \"OK\")\n| eval objection_rate=if(isnotnull(data_subject_objections) AND data_subject_objections > 10, \"HIGH_OBJECTIONS — reassess\", \"Normal\")\n| where lia_coverage IN (\"MISSING — LIA REQUIRED\", \"STOP_PROCESSING\") OR review_overdue=\"REVIEW_DUE\" OR objection_rate=\"HIGH_OBJECTIONS — reassess\"\n| table processing_activity, system_name, lia_coverage, lia_date, review_overdue, data_subject_objections, objection_rate\n| sort lia_coverage",
              "m": "(1) Tag ROPA entries using legitimate interest as legal basis; (2) maintain `gdpr_lia_register.csv` linking processing activities to Legitimate Interest Assessments; (3) alert on processing activities relying on legitimate interest without a completed LIA; (4) track data subject objections (Art. 21) per processing activity — high objection rates signal the balancing test may be failing; (5) flag LIAs not reviewed in >12 months; (6) stop processing immediately for any activity where LIA concludes \"not justified.\"",
              "z": "Table (LIA coverage gaps), Pie chart (LIA status distribution), Bar chart (objections by processing activity), Single value (missing LIAs count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `gdpr_ropa_register.csv`, `gdpr_lia_register.csv` (Legitimate Interest Assessments), CIM Network_Traffic data model.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag ROPA entries using legitimate interest as legal basis; (2) maintain `gdpr_lia_register.csv` linking processing activities to Legitimate Interest Assessments; (3) alert on processing activities relying on legitimate interest without a completed LIA; (4) track data subject objections (Art. 21) per processing activity — high objection rates signal the balancing test may be failing; (5) flag LIAs not reviewed in >12 months; (6) stop processing immediately for any activity where LIA concludes…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_ropa_register.csv WHERE legal_basis=\"legitimate_interest\"\n| lookup gdpr_lia_register.csv processing_activity OUTPUT lia_status, lia_date, lia_reviewer, lia_outcome, data_subject_objections\n| eval lia_coverage=case(\n    lia_status=\"completed\" AND lia_outcome=\"justified\", \"JUSTIFIED\",\n    lia_status=\"completed\" AND lia_outcome=\"not_justified\", \"STOP_PROCESSING\",\n    lia_status=\"in_progress\", \"IN_PROGRESS\",\n    isnull(lia_status), \"MISSING — LIA REQUIRED\")\n| eval review_overdue=if(isnotnull(lia_date) AND (now()-strptime(lia_date,\"%Y-%m-%d\"))/86400 > 365, \"REVIEW_DUE\", \"OK\")\n| eval objection_rate=if(isnotnull(data_subject_objections) AND data_subject_objections > 10, \"HIGH_OBJECTIONS — reassess\", \"Normal\")\n| where lia_coverage IN (\"MISSING — LIA REQUIRED\", \"STOP_PROCESSING\") OR review_overdue=\"REVIEW_DUE\" OR objection_rate=\"HIGH_OBJECTIONS — reassess\"\n| table processing_activity, system_name, lia_coverage, lia_date, review_overdue, data_subject_objections, objection_rate\n| sort lia_coverage\n```\n\nUnderstanding this SPL\n\n**GDPR Legitimate Interest Balancing Test Evidence (Art. 6(1)(f))** — Legitimate interest (Art. 6(1)(f)) is the most commonly misused legal basis — regulators have fined LinkedIn €310M and others hundreds of millions for relying on legitimate interest without proper balancing tests. This use case tracks which processing activities use legitimate interest as their legal basis, monitors whether balancing test documentation exists and is current, and detects processing scope creep beyond the original legitimate interest assessment — addressing…\n\nDocumented **Data sources**: `gdpr_ropa_register.csv`, `gdpr_lia_register.csv` (Legitimate Interest Assessments), CIM Network_Traffic data model. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **lia_coverage** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **review_overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **objection_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lia_coverage IN (\"MISSING — LIA REQUIRED\", \"STOP_PROCESSING\") OR review_overdue=\"REVIEW_DUE\" OR objection_rate=\"HIGH_…` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Legitimate Interest Balancing Test Evidence (Art. 6(1)(f))**): table processing_activity, system_name, lia_coverage, lia_date, review_overdue, data_subject_objections, objection_rate\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (LIA coverage gaps), Pie chart (LIA status distribution), Bar chart (objections by processing activity), Single value (missing LIAs count).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We legitimate interest (Art so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.6(1)(f)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.6(1)(f) is enforced — Splunk UC-22.1.20: GDPR Legitimate Interest Balancing Test Evidence.",
                  "ea": "Saved search 'UC-22.1.20' running on gdpr_ropa_register.csv, gdpr_lia_register.csv (Legitimate Interest Assessments), CIM Network_Traffic data model, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.21",
              "n": "GDPR Encryption at Rest Audit Evidence (Art. 32(1)(a))",
              "c": "high",
              "f": "intermediate",
              "v": "Article 32(1)(a) requires pseudonymisation and encryption of personal data where appropriate. This use case aggregates technical evidence that storage volumes, databases, object stores, and backup targets that process personal data are encrypted at rest — surfacing misconfigured buckets, unencrypted snapshots, and hosts lacking disk encryption so the DPO can prove the measure is operational, not aspirational.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Tenable Add-On for Splunk (Splunkbase 4060)",
              "d": "`aws:cloudtrail` (CreateVolume, CreateDBInstance, PutBucketEncryption), Azure Activity / resource graph exports (`mscs:azure:auditlog`), `tenable:vuln` or agent inventory with encryption flags, `gdpr_data_store_register.csv`",
              "q": "| inputlookup gdpr_data_store_register.csv WHERE processes_personal_data=\"true\"\n| lookup cloud_asset_encryption_lookup resource_id OUTPUT encryption_at_rest, last_scan_date, kms_key_arn\n| eval scan_age_days=if(isnotnull(last_scan_date), round((now()-strptime(last_scan_date,\"%Y-%m-%d\"))/86400,0), 9999)\n| eval compliance_status=case(\n    encryption_at_rest=\"true\", \"COMPLIANT\",\n    encryption_at_rest=\"false\", \"NON_COMPLIANT\",\n    isnull(encryption_at_rest), \"UNKNOWN\")\n| where compliance_status!=\"COMPLIANT\" OR scan_age_days > 90\n| table resource_id, system_name, data_category, encryption_at_rest, compliance_status, scan_age_days, kms_key_arn\n| sort compliance_status",
              "m": "(1) Maintain `gdpr_data_store_register.csv` linking cloud resource IDs and on-prem volumes to systems of record for personal data; (2) ingest CloudTrail / Azure audit events that reflect encryption configuration changes; (3) join nightly Tenable or CSPM exports for at-rest encryption posture; (4) alert on `NON_COMPLIANT` or stale `UNKNOWN` assets; (5) retain dashboard exports for Art. 32 accountability evidence packs.",
              "z": "Table (non-compliant stores), Single value (count lacking encryption), Bar chart (violations by data category), Timeline (encryption configuration changes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1005"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Tenable Add-On for Splunk (Splunkbase 4060).\n• Ensure the following data sources are available: `aws:cloudtrail` (CreateVolume, CreateDBInstance, PutBucketEncryption), Azure Activity / resource graph exports (`mscs:azure:auditlog`), `tenable:vuln` or agent inventory with encryption flags, `gdpr_data_store_register.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `gdpr_data_store_register.csv` linking cloud resource IDs and on-prem volumes to systems of record for personal data; (2) ingest CloudTrail / Azure audit events that reflect encryption configuration changes; (3) join nightly Tenable or CSPM exports for at-rest encryption posture; (4) alert on `NON_COMPLIANT` or stale `UNKNOWN` assets; (5) retain dashboard exports for Art. 32 accountability evidence packs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_data_store_register.csv WHERE processes_personal_data=\"true\"\n| lookup cloud_asset_encryption_lookup resource_id OUTPUT encryption_at_rest, last_scan_date, kms_key_arn\n| eval scan_age_days=if(isnotnull(last_scan_date), round((now()-strptime(last_scan_date,\"%Y-%m-%d\"))/86400,0), 9999)\n| eval compliance_status=case(\n    encryption_at_rest=\"true\", \"COMPLIANT\",\n    encryption_at_rest=\"false\", \"NON_COMPLIANT\",\n    isnull(encryption_at_rest), \"UNKNOWN\")\n| where compliance_status!=\"COMPLIANT\" OR scan_age_days > 90\n| table resource_id, system_name, data_category, encryption_at_rest, compliance_status, scan_age_days, kms_key_arn\n| sort compliance_status\n```\n\nUnderstanding this SPL\n\n**GDPR Encryption at Rest Audit Evidence (Art. 32(1)(a))** — Article 32(1)(a) requires pseudonymisation and encryption of personal data where appropriate. This use case aggregates technical evidence that storage volumes, databases, object stores, and backup targets that process personal data are encrypted at rest — surfacing misconfigured buckets, unencrypted snapshots, and hosts lacking disk encryption so the DPO can prove the measure is operational, not aspirational.\n\nDocumented **Data sources**: `aws:cloudtrail` (CreateVolume, CreateDBInstance, PutBucketEncryption), Azure Activity / resource graph exports (`mscs:azure:auditlog`), `tenable:vuln` or agent inventory with encryption flags, `gdpr_data_store_register.csv`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Tenable Add-On for Splunk (Splunkbase 4060). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **scan_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **compliance_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where compliance_status!=\"COMPLIANT\" OR scan_age_days > 90` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Encryption at Rest Audit Evidence (Art. 32(1)(a))**): table resource_id, system_name, data_category, encryption_at_rest, compliance_status, scan_age_days, kms_key_arn\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Encryption at Rest Audit Evidence (Art. 32(1)(a))** — Article 32(1)(a) requires pseudonymisation and encryption of personal data where appropriate. This use case aggregates technical evidence that storage volumes, databases, object stores, and backup targets that process personal data are encrypted at rest — surfacing misconfigured buckets, unencrypted snapshots, and hosts lacking disk encryption so the DPO can prove the measure is operational, not aspirational.\n\nDocumented **Data sources**: `aws:cloudtrail` (CreateVolume, CreateDBInstance, PutBucketEncryption), Azure Activity / resource graph exports (`mscs:azure:auditlog`), `tenable:vuln` or agent inventory with encryption flags, `gdpr_data_store_register.csv`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (non-compliant stores), Single value (count lacking encryption), Bar chart (violations by data category), Timeline (encryption configuration changes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of encryption at rest audit evidence — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32(1)(a)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.32(1)(a) is enforced — Splunk UC-22.1.21: GDPR Encryption at Rest Audit Evidence.",
                  "ea": "Saved search 'UC-22.1.21' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.22",
              "n": "GDPR Access Control Review and Privileged Access Evidence (Art. 32(1)(b))",
              "c": "high",
              "f": "intermediate",
              "v": "Article 32(1)(b) requires measures to ensure ongoing confidentiality, including controlling access to personal data. This use case reviews who can access GDPR-scoped systems — flagging excessive admin grants, dormant privileged accounts, and role changes that expand access beyond ROPA-documented purposes — giving auditors repeatable evidence of access governance.",
              "t": "Splunk Add-on for Okta Identity Cloud (Splunkbase 6553), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Splunk Add-on for CyberArk (Splunkbase 2891)",
              "d": "CIM Authentication data model, `OktaIM2:log`, `ms:aad:signin`, `cyberark:session`, `gdpr_system_access_matrix.csv`",
              "q": "| tstats `summariesonly` latest(Authentication.user) as user latest(Authentication.app) as app\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\" AND earliest=-30d\n  by Authentication.dest\n| rename Authentication.* as *\n| lookup gdpr_system_access_matrix.csv system_hostname AS dest OUTPUT data_classification, approved_roles\n| where data_classification IN (\"personal_data\",\"special_category\")\n| join type=left user [\n    search index=okta sourcetype=\"OktaIM2:log\" eventType=\"group.privilege.grant\" earliest=-30d\n    | stats latest(_time) as last_grant by target_user_id\n    | rename target_user_id as user\n]\n| where isnotnull(last_grant)\n| table dest, user, app, data_classification, approved_roles, last_grant",
              "m": "(1) Map hosts and SaaS apps in `gdpr_system_access_matrix.csv` with approved IAM roles; (2) ingest IdP group and role assignment events; (3) correlate successful authentications to personal-data systems; (4) alert when users access GDPR systems without an approved role or after a recent privilege grant; (5) export monthly access review packets for data owners.",
              "z": "Table (privileged access drift), Bar chart (access by system), Single value (users outside approved roles), Heatmap (logon density to PII systems).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Okta Identity Cloud](https://splunkbase.splunk.com/app/6553), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/2891), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Okta Identity Cloud (Splunkbase 6553), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Splunk Add-on for CyberArk (Splunkbase 2891).\n• Ensure the following data sources are available: CIM Authentication data model, `OktaIM2:log`, `ms:aad:signin`, `cyberark:session`, `gdpr_system_access_matrix.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map hosts and SaaS apps in `gdpr_system_access_matrix.csv` with approved IAM roles; (2) ingest IdP group and role assignment events; (3) correlate successful authentications to personal-data systems; (4) alert when users access GDPR systems without an approved role or after a recent privilege grant; (5) export monthly access review packets for data owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` latest(Authentication.user) as user latest(Authentication.app) as app\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\" AND earliest=-30d\n  by Authentication.dest\n| rename Authentication.* as *\n| lookup gdpr_system_access_matrix.csv system_hostname AS dest OUTPUT data_classification, approved_roles\n| where data_classification IN (\"personal_data\",\"special_category\")\n| join type=left user [\n    search index=okta sourcetype=\"OktaIM2:log\" eventType=\"group.privilege.grant\" earliest=-30d\n    | stats latest(_time) as last_grant by target_user_id\n    | rename target_user_id as user\n]\n| where isnotnull(last_grant)\n| table dest, user, app, data_classification, approved_roles, last_grant\n```\n\nUnderstanding this SPL\n\n**GDPR Access Control Review and Privileged Access Evidence (Art. 32(1)(b))** — Article 32(1)(b) requires measures to ensure ongoing confidentiality, including controlling access to personal data. This use case reviews who can access GDPR-scoped systems — flagging excessive admin grants, dormant privileged accounts, and role changes that expand access beyond ROPA-documented purposes — giving auditors repeatable evidence of access governance.\n\nDocumented **Data sources**: CIM Authentication data model, `OktaIM2:log`, `ms:aad:signin`, `cyberark:session`, `gdpr_system_access_matrix.csv`. **App/TA** (typical add-on context): Splunk Add-on for Okta Identity Cloud (Splunkbase 6553), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Splunk Add-on for CyberArk (Splunkbase 2891). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where data_classification IN (\"personal_data\",\"special_category\")` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnotnull(last_grant)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Access Control Review and Privileged Access Evidence (Art. 32(1)(b))**): table dest, user, app, data_classification, approved_roles, last_grant\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Access Control Review and Privileged Access Evidence (Art. 32(1)(b))** — Article 32(1)(b) requires measures to ensure ongoing confidentiality, including controlling access to personal data. This use case reviews who can access GDPR-scoped systems — flagging excessive admin grants, dormant privileged accounts, and role changes that expand access beyond ROPA-documented purposes — giving auditors repeatable evidence of access governance.\n\nDocumented **Data sources**: CIM Authentication data model, `OktaIM2:log`, `ms:aad:signin`, `cyberark:session`, `gdpr_system_access_matrix.csv`. **App/TA** (typical add-on context): Splunk Add-on for Okta Identity Cloud (Splunkbase 6553), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Splunk Add-on for CyberArk (Splunkbase 2891). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (privileged access drift), Bar chart (access by system), Single value (users outside approved roles), Heatmap (logon density to PII systems).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of access control review and privileged access evidence — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [
                "cyberark",
                "okta"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32(1)(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.32(1)(b) is enforced — Splunk UC-22.1.22: GDPR Access Control Review and Privileged Access Evidence.",
                  "ea": "Saved search 'UC-22.1.22' running on CIM Authentication data model, OktaIM2:log, ms:aad:signin, cyberark:session, gdpr_system_access_matrix.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.23",
              "n": "GDPR Pseudonymisation Validation in Pipelines and Logs (Art. 32(1)(a))",
              "c": "high",
              "f": "advanced",
              "v": "Pseudonymisation reduces risk and supports lawful processing when combined with technical controls. This use case verifies that identifiers in downstream indexes, data lakes, and analytics topics match the organisation's pseudonymisation scheme — detecting raw direct identifiers where only tokenised or hashed substitutes should appear.",
              "t": "Splunk Edge Processor (Splunk Cloud Platform), Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "`index=analytics` OR `index=data_lake_events` (HEC), `gdpr_pseudonymisation_rules.csv`, sample scheduled searches against restricted indexes",
              "q": "index=analytics sourcetype=\"pipeline_audit\" earliest=-24h\n| lookup gdpr_pseudonymisation_rules.csv stream_name OUTPUT expected_token_field, forbidden_raw_fields\n| eval raw_hit=0\n| foreach forbidden_raw_fields [ eval raw_hit=raw_hit + if(isnotnull(<<FIELD>>), 1, 0) ]\n| eval violation=if(raw_hit>0 OR isnull(expected_token_field), \"SCHEMA_VIOLATION\", \"OK\")\n| where violation!=\"OK\"\n| stats count by stream_name, violation, host\n| sort - count",
              "m": "(1) Document per-stream pseudonymisation rules in `gdpr_pseudonymisation_rules.csv`; (2) emit lightweight schema audit events from ETL jobs via HEC; (3) schedule hourly checks for forbidden field presence; (4) route violations to data engineering and DPO; (5) pair with UC-22.1.1 for regex-based residual PII detection.",
              "z": "Table (violating streams), Single value (open violations), Bar chart (violations by owning team).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Processor (Splunk Cloud Platform), Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: `index=analytics` OR `index=data_lake_events` (HEC), `gdpr_pseudonymisation_rules.csv`, sample scheduled searches against restricted indexes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Document per-stream pseudonymisation rules in `gdpr_pseudonymisation_rules.csv`; (2) emit lightweight schema audit events from ETL jobs via HEC; (3) schedule hourly checks for forbidden field presence; (4) route violations to data engineering and DPO; (5) pair with UC-22.1.1 for regex-based residual PII detection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=analytics sourcetype=\"pipeline_audit\" earliest=-24h\n| lookup gdpr_pseudonymisation_rules.csv stream_name OUTPUT expected_token_field, forbidden_raw_fields\n| eval raw_hit=0\n| foreach forbidden_raw_fields [ eval raw_hit=raw_hit + if(isnotnull(<<FIELD>>), 1, 0) ]\n| eval violation=if(raw_hit>0 OR isnull(expected_token_field), \"SCHEMA_VIOLATION\", \"OK\")\n| where violation!=\"OK\"\n| stats count by stream_name, violation, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**GDPR Pseudonymisation Validation in Pipelines and Logs (Art. 32(1)(a))** — Pseudonymisation reduces risk and supports lawful processing when combined with technical controls. This use case verifies that identifiers in downstream indexes, data lakes, and analytics topics match the organisation's pseudonymisation scheme — detecting raw direct identifiers where only tokenised or hashed substitutes should appear.\n\nDocumented **Data sources**: `index=analytics` OR `index=data_lake_events` (HEC), `gdpr_pseudonymisation_rules.csv`, sample scheduled searches against restricted indexes. **App/TA** (typical add-on context): Splunk Edge Processor (Splunk Cloud Platform), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: analytics; **sourcetype**: pipeline_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=analytics, sourcetype=\"pipeline_audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **raw_hit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Iterates over multivalue fields with `foreach`.\n• `eval` defines or adjusts **violation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where violation!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by stream_name, violation, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violating streams), Single value (open violations), Bar chart (violations by owning team).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We pseudonymisation reduces risk and supports lawful processing when combined with technical controls so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32(1)(a)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.32(1)(a) is enforced — Splunk UC-22.1.23: GDPR Pseudonymisation Validation in Pipelines and Logs.",
                  "ea": "Saved search 'UC-22.1.23' running on index analytics and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.24",
              "n": "GDPR Security Testing Evidence Aggregation (Pen Test / Red Team) (Art. 32(1)(d))",
              "c": "high",
              "f": "beginner",
              "v": "Article 32(1)(d) requires a process for regularly testing, assessing, and evaluating the effectiveness of security measures. This use case centralises penetration test, red-team, and vulnerability-assessment completion dates, scope coverage against personal-data systems, and remediation SLAs — proving the testing programme exists and covers GDPR-relevant assets.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), Tenable Add-On for Splunk (Splunkbase 4060)",
              "d": "`gdpr_security_test_register.csv`, `index=itsm` (remediation tickets), `tenable:vuln` (scan completion)",
              "q": "| inputlookup gdpr_security_test_register.csv\n| eval test_date_epoch=strptime(last_test_date,\"%Y-%m-%d\")\n| eval days_since=round((now()-test_date_epoch)/86400,0)\n| eval cadence_days=case(test_type=\"external_pen_test\", 365, test_type=\"internal_va\", 90, 1=1, 180)\n| eval status=case(days_since > cadence_days, \"OVERDUE\", days_since > cadence_days*0.8, \"DUE_SOON\", 1=1, \"CURRENT\")\n| lookup gdpr_ropa_register.csv system_name OUTPUT processing_activity\n| where status!=\"CURRENT\"\n| table system_name, processing_activity, test_type, last_test_date, days_since, cadence_days, status\n| sort status, -days_since",
              "m": "(1) Populate `gdpr_security_test_register.csv` with last test dates and types per in-scope system; (2) join to ROPA for processing-activity context; (3) alert when tests exceed policy cadence; (4) link ServiceNow remediation tickets for critical findings; (5) export annual effectiveness summary for Art. 32 evidence.",
              "z": "Table (overdue tests), Timeline (tests by quarter), Single value (systems past cadence), Bar chart (coverage by business unit).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), Tenable Add-On for Splunk (Splunkbase 4060).\n• Ensure the following data sources are available: `gdpr_security_test_register.csv`, `index=itsm` (remediation tickets), `tenable:vuln` (scan completion).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate `gdpr_security_test_register.csv` with last test dates and types per in-scope system; (2) join to ROPA for processing-activity context; (3) alert when tests exceed policy cadence; (4) link ServiceNow remediation tickets for critical findings; (5) export annual effectiveness summary for Art. 32 evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_security_test_register.csv\n| eval test_date_epoch=strptime(last_test_date,\"%Y-%m-%d\")\n| eval days_since=round((now()-test_date_epoch)/86400,0)\n| eval cadence_days=case(test_type=\"external_pen_test\", 365, test_type=\"internal_va\", 90, 1=1, 180)\n| eval status=case(days_since > cadence_days, \"OVERDUE\", days_since > cadence_days*0.8, \"DUE_SOON\", 1=1, \"CURRENT\")\n| lookup gdpr_ropa_register.csv system_name OUTPUT processing_activity\n| where status!=\"CURRENT\"\n| table system_name, processing_activity, test_type, last_test_date, days_since, cadence_days, status\n| sort status, -days_since\n```\n\nUnderstanding this SPL\n\n**GDPR Security Testing Evidence Aggregation (Pen Test / Red Team) (Art. 32(1)(d))** — Article 32(1)(d) requires a process for regularly testing, assessing, and evaluating the effectiveness of security measures. This use case centralises penetration test, red-team, and vulnerability-assessment completion dates, scope coverage against personal-data systems, and remediation SLAs — proving the testing programme exists and covers GDPR-relevant assets.\n\nDocumented **Data sources**: `gdpr_security_test_register.csv`, `index=itsm` (remediation tickets), `tenable:vuln` (scan completion). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **test_date_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cadence_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where status!=\"CURRENT\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Security Testing Evidence Aggregation (Pen Test / Red Team) (Art. 
32(1)(d))**): table system_name, processing_activity, test_type, last_test_date, days_since, cadence_days, status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue tests), Timeline (tests by quarter), Single value (systems past cadence), Bar chart (coverage by business unit).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of security testing evidence aggregation (pen test / red team) — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32(1)(d)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.32(1)(d) is enforced — Splunk UC-22.1.24: GDPR Security Testing Evidence Aggregation (Pen Test / Red Team).",
                  "ea": "Saved search 'UC-22.1.24' running on gdpr_security_test_register.csv, index=itsm (remediation tickets), tenable:vuln (scan completion), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.25",
              "n": "GDPR Security Incident Tracking Linked to Personal Data Impact (Art. 32(1)(c))",
              "c": "critical",
              "f": "intermediate",
              "v": "Article 32(1)(c) requires the ability to restore availability and access in a timely manner, and incident handling underpins the integrity and resilience of processing. This use case tags security incidents with personal-data impact classification, tracks containment and eradication milestones, and aligns with breach notification workflows — extending generic SOC metrics with GDPR-specific impact labels.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`` `notable` `` macro, `gdpr_system_access_matrix.csv` (personal data flags), ServiceNow incident exports",
              "q": "`notable` earliest=-14d\n| lookup gdpr_system_access_matrix.csv system_hostname AS dest OUTPUT data_classification\n| eval gdpr_impact=case(\n    isnotnull(data_classification) AND match(lower(status_description),\"(?i)pii|personal|breach|exfil\"), \"CONFIRMED_PII\",\n    isnotnull(data_classification), \"POTENTIAL_PII_SYSTEM\",\n    1=1, \"UNKNOWN\")\n| stats latest(status) as status latest(owner) as owner max(_time) as last_event by rule_name, gdpr_impact\n| where gdpr_impact IN (\"CONFIRMED_PII\",\"POTENTIAL_PII_SYSTEM\")\n| table rule_name, gdpr_impact, status, owner, last_event",
              "m": "(1) Enrich notables with asset lookups that flag personal-data processing; (2) require analysts to set `status_description` with PII impact keywords when known; (3) alert DPO workflow when `CONFIRMED_PII`; (4) correlate with UC-22.1.3 for 72h authority notification clocks; (5) retain closed-incident exports for accountability.",
              "z": "Table (GDPR-tagged incidents), Single value (open PII-related incidents), Timeline (milestone progression), Pie chart (impact classification).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048",
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `` `notable` `` macro, `gdpr_system_access_matrix.csv` (personal data flags), ServiceNow incident exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enrich notables with asset lookups that flag personal-data processing; (2) require analysts to set `status_description` with PII impact keywords when known; (3) alert DPO workflow when `CONFIRMED_PII`; (4) correlate with UC-22.1.3 for 72h authority notification clocks; (5) retain closed-incident exports for accountability.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` earliest=-14d\n| lookup gdpr_system_access_matrix.csv system_hostname AS dest OUTPUT data_classification\n| eval gdpr_impact=case(\n    isnotnull(data_classification) AND match(lower(status_description),\"(?i)pii|personal|breach|exfil\"), \"CONFIRMED_PII\",\n    isnotnull(data_classification), \"POTENTIAL_PII_SYSTEM\",\n    1=1, \"UNKNOWN\")\n| stats latest(status) as status latest(owner) as owner max(_time) as last_event by rule_name, gdpr_impact\n| where gdpr_impact IN (\"CONFIRMED_PII\",\"POTENTIAL_PII_SYSTEM\")\n| table rule_name, gdpr_impact, status, owner, last_event\n```\n\nUnderstanding this SPL\n\n**GDPR Security Incident Tracking Linked to Personal Data Impact (Art. 32(1)(c))** — Article 32(1)(c) requires the ability to restore availability and access in a timely manner, and incident handling underpins the integrity and resilience of processing. This use case tags security incidents with personal-data impact classification, tracks containment and eradication milestones, and aligns with breach notification workflows — extending generic SOC metrics with GDPR-specific impact labels.\n\nDocumented **Data sources**: `` `notable` `` macro, `gdpr_system_access_matrix.csv` (personal data flags), ServiceNow incident exports. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **gdpr_impact** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by rule_name, gdpr_impact** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where gdpr_impact IN (\"CONFIRMED_PII\",\"POTENTIAL_PII_SYSTEM\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Security Incident Tracking Linked to Personal Data Impact (Art. 32(1)(c))**): table rule_name, gdpr_impact, status, owner, last_event\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (GDPR-tagged incidents), Single value (open PII-related incidents), Timeline (milestone progression), Pie chart (impact classification).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of security incident tracking linked to personal data impact — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32(1)(c)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.32(1)(c) is enforced — Splunk UC-22.1.25: GDPR Security Incident Tracking Linked to Personal Data Impact.",
                  "ea": "Saved search 'UC-22.1.25' running on notable macro, gdpr_system_access_matrix.csv (personal data flags), ServiceNow incident exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.26",
              "n": "GDPR Resilience and Availability Monitoring for Personal-Data Services (Art. 32(1)(b)(c))",
              "c": "high",
              "f": "intermediate",
              "v": "Integrity, confidentiality, and availability of processing systems are core Art. 32 objectives. This use case monitors uptime, error rates, and recovery objectives for applications and databases documented in the ROPA as processing personal data — evidencing resilience measures such as clustering, failover drills, and synthetic checks.",
              "t": "Splunk ITSI (Splunkbase 1841), Splunk Synthetic Monitoring",
              "d": "`index=itsi_summary` (service health), `index=synthetic` or HEC synthetic results, `gdpr_ropa_register.csv`",
              "q": "| inputlookup gdpr_ropa_register.csv WHERE data_category!=\"N/A\"\n| fields system_name\n| lookup itsi_service_map.csv system_name OUTPUT itsi_service\n| join itsi_service [\n    search index=itsi_summary earliest=-24h\n    | stats latest(health_score) as health_score latest(severity) as severity by itsi_service\n]\n| eval breach_slo=if(health_score < 70 OR match(lower(severity),\"(?i)critical|high\"), 1, 0)\n| where breach_slo=1\n| table system_name, itsi_service, health_score, severity",
              "m": "(1) Map ROPA `system_name` to ITSI services in `itsi_service_map.csv`; (2) define KPI thresholds aligned to internal RPO/RTO for personal-data services; (3) alert when health scores breach SLO; (4) attach failover drill references from `gdpr_security_test_register.csv`; (5) report monthly resilience posture to the DPO.",
              "z": "Single value (services breaching SLO), Table (affected personal-data services), Line chart (health score trend), Heatmap (severity by hour).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (Splunkbase 1841), Splunk Synthetic Monitoring.\n• Ensure the following data sources are available: `index=itsi_summary` (service health), `index=synthetic` or HEC synthetic results, `gdpr_ropa_register.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map ROPA `system_name` to ITSI services in `itsi_service_map.csv`; (2) define KPI thresholds aligned to internal RPO/RTO for personal-data services; (3) alert when health scores breach SLO; (4) attach failover drill references from `gdpr_security_test_register.csv`; (5) report monthly resilience posture to the DPO.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_ropa_register.csv WHERE data_category!=\"N/A\"\n| fields system_name\n| lookup itsi_service_map.csv system_name OUTPUT itsi_service\n| join itsi_service [\n    search index=itsi_summary earliest=-24h\n    | stats latest(health_score) as health_score latest(severity) as severity by itsi_service\n]\n| eval breach_slo=if(health_score < 70 OR match(lower(severity),\"(?i)critical|high\"), 1, 0)\n| where breach_slo=1\n| table system_name, itsi_service, health_score, severity\n```\n\nUnderstanding this SPL\n\n**GDPR Resilience and Availability Monitoring for Personal-Data Services (Art. 32(1)(b)(c))** — Integrity, confidentiality, and availability of processing systems are core Art. 32 objectives. This use case monitors uptime, error rates, and recovery objectives for applications and databases documented in the ROPA as processing personal data — evidencing resilience measures such as clustering, failover drills, and synthetic checks.\n\nDocumented **Data sources**: `index=itsi_summary` (service health), `index=synthetic` or HEC synthetic results, `gdpr_ropa_register.csv`. **App/TA** (typical add-on context): Splunk ITSI (Splunkbase 1841), Splunk Synthetic Monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **breach_slo** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where breach_slo=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Resilience and Availability Monitoring for Personal-Data Services (Art. 32(1)(b)(c))**): table system_name, itsi_service, health_score, severity\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Single value (services breaching SLO), Table (affected personal-data services), Line chart (health score trend), Heatmap (severity by hour).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We integrity, confidentiality, and availability of processing systems are core Art so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32(1)(b)(c)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.32(1)(b)(c) is enforced — Splunk UC-22.1.26: GDPR Resilience and Availability Monitoring for Personal-Data Services.",
                  "ea": "Saved search 'UC-22.1.26' running on index=itsi_summary (service health), index=synthetic or HEC synthetic results, gdpr_ropa_register.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.27",
              "n": "GDPR Processor Compliance Attestation Tracking (Art. 28(3))",
              "c": "high",
              "f": "beginner",
              "v": "Article 28 requires processors to demonstrate compliance and controllers to work only with processors offering sufficient guarantees. This use case tracks ISO 27001, SOC 2, and processor-signed security addenda — surfacing expired attestations and processors without current documentation before contract renewal or procurement renews access.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`gdpr_processor_register.csv` (columns: processor_name, attestation_type, attestation_expiry, dpa_signed), vendor risk platform exports (HEC)",
              "q": "| inputlookup gdpr_processor_register.csv\n| eval expiry_epoch=strptime(attestation_expiry,\"%Y-%m-%d\")\n| eval days_to_expiry=round((expiry_epoch-now())/86400,0)\n| eval attestation_status=case(\n    isnull(attestation_expiry), \"MISSING\",\n    days_to_expiry < 0, \"EXPIRED\",\n    days_to_expiry < 60, \"EXPIRING\",\n    1=1, \"CURRENT\")\n| where attestation_status!=\"CURRENT\" OR dpa_signed!=\"true\"\n| table processor_name, attestation_type, attestation_expiry, days_to_expiry, attestation_status, dpa_signed\n| sort attestation_status, days_to_expiry",
              "m": "(1) Extend processor inventory with attestation types and expiry dates; (2) integrate procurement / ServiceNow vendor records for automated updates; (3) alert procurement and DPO at 60 and 30 days before expiry; (4) block new data integrations in workflow when `EXPIRED`; (5) export quarterly attestation coverage report.",
              "z": "Table (processors at risk), Single value (expired attestations), Bar chart (status by processor tier), Timeline (expiry dates).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `gdpr_processor_register.csv` (columns: processor_name, attestation_type, attestation_expiry, dpa_signed), vendor risk platform exports (HEC).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Extend processor inventory with attestation types and expiry dates; (2) integrate procurement / ServiceNow vendor records for automated updates; (3) alert procurement and DPO at 60 and 30 days before expiry; (4) block new data integrations in workflow when `EXPIRED`; (5) export quarterly attestation coverage report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_processor_register.csv\n| eval expiry_epoch=strptime(attestation_expiry,\"%Y-%m-%d\")\n| eval days_to_expiry=round((expiry_epoch-now())/86400,0)\n| eval attestation_status=case(\n    isnull(attestation_expiry), \"MISSING\",\n    days_to_expiry < 0, \"EXPIRED\",\n    days_to_expiry < 60, \"EXPIRING\",\n    1=1, \"CURRENT\")\n| where attestation_status!=\"CURRENT\" OR dpa_signed!=\"true\"\n| table processor_name, attestation_type, attestation_expiry, days_to_expiry, attestation_status, dpa_signed\n| sort attestation_status, days_to_expiry\n```\n\nUnderstanding this SPL\n\n**GDPR Processor Compliance Attestation Tracking (Art. 28(3))** — Article 28 requires processors to demonstrate compliance and controllers to work only with processors offering sufficient guarantees. This use case tracks ISO 27001, SOC 2, and processor-signed security addenda — surfacing expired attestations and processors without current documentation before contract renewal or procurement renews access.\n\nDocumented **Data sources**: `gdpr_processor_register.csv` (columns: processor_name, attestation_type, attestation_expiry, dpa_signed), vendor risk platform exports (HEC). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **expiry_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **attestation_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where attestation_status!=\"CURRENT\" OR dpa_signed!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Processor Compliance Attestation Tracking (Art. 28(3))**): table processor_name, attestation_type, attestation_expiry, days_to_expiry, attestation_status, dpa_signed\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (processors at risk), Single value (expired attestations), Bar chart (status by processor tier), Timeline (expiry dates).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of processor compliance attestation tracking — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.28(3)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.28(3) is enforced — Splunk UC-22.1.27: GDPR Processor Compliance Attestation Tracking.",
                  "ea": "Saved search 'UC-22.1.27' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.28",
              "n": "GDPR Sub-Processor Change Monitoring (Art. 28(2), Art. 28(4))",
              "c": "high",
              "f": "intermediate",
              "v": "Processors must not engage another processor without prior specific or general written authorisation of the controller. This use case detects new destination domains, SaaS tenants, or infrastructure regions receiving personal data from approved processors — flagging unapproved sub-processing paths inferred from network and SaaS audit telemetry.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "CIM Network_Traffic data model, `index=proxy`, `gdpr_processor_subprocessor_allowlist.csv`",
              "q": "| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out dc(All_Traffic.src) as src_count\n  from datamodel=Network_Traffic.All_Traffic\n  where earliest=-7d All_Traffic.action=\"allowed\"\n  by All_Traffic.dest\n| rename All_Traffic.* as *\n| eval dest_domain=replace(dest,\"^\\d+\\.\\d+\\.\\d+\\.\\d+$\",\"\")\n| lookup gdpr_processor_subprocessor_allowlist.csv dest_domain OUTPUT primary_processor, approval_status\n| where isnotnull(primary_processor) AND (approval_status!=\"approved\" OR isnull(approval_status))\n| sort - bytes_out\n| table dest, dest_domain, primary_processor, approval_status, bytes_out, src_count",
              "m": "(1) Build allowlist of approved processor and sub-processor domains from DPAs; (2) classify egress traffic by processor parent using DNS and TLS SNI metadata; (3) alert on high-volume flows to unlisted destinations; (4) require procurement approval workflow to update allowlist; (5) document investigations for Art. 28 accountability.",
              "z": "Table (unapproved destinations), Sankey (flow from segment to destination), Single value (new destinations this week), Bar chart (bytes by processor).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: CIM Network_Traffic data model, `index=proxy`, `gdpr_processor_subprocessor_allowlist.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build allowlist of approved processor and sub-processor domains from DPAs; (2) classify egress traffic by processor parent using DNS and TLS SNI metadata; (3) alert on high-volume flows to unlisted destinations; (4) require procurement approval workflow to update allowlist; (5) document investigations for Art. 28 accountability.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out dc(All_Traffic.src) as src_count\n  from datamodel=Network_Traffic.All_Traffic\n  where earliest=-7d All_Traffic.action=\"allowed\"\n  by All_Traffic.dest\n| rename All_Traffic.* as *\n| eval dest_domain=replace(dest,\"^\\d+\\.\\d+\\.\\d+\\.\\d+$\",\"\")\n| lookup gdpr_processor_subprocessor_allowlist.csv dest_domain OUTPUT primary_processor, approval_status\n| where isnotnull(primary_processor) AND (approval_status!=\"approved\" OR isnull(approval_status))\n| sort - bytes_out\n| table dest, dest_domain, primary_processor, approval_status, bytes_out, src_count\n```\n\nUnderstanding this SPL\n\n**GDPR Sub-Processor Change Monitoring (Art. 28(2), Art. 28(4))** — Processors must not engage another processor without prior specific or general written authorisation of the controller. This use case detects new destination domains, SaaS tenants, or infrastructure regions receiving personal data from approved processors — flagging unapproved sub-processing paths inferred from network and SaaS audit telemetry.\n\nDocumented **Data sources**: CIM Network_Traffic data model, `index=proxy`, `gdpr_processor_subprocessor_allowlist.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• `eval` defines or adjusts **dest_domain** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(primary_processor) AND (approval_status!=\"approved\" OR isnull(approval_status))` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **GDPR Sub-Processor Change Monitoring (Art. 28(2), Art. 
28(4))**): table dest, dest_domain, primary_processor, approval_status, bytes_out, src_count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Sub-Processor Change Monitoring (Art. 28(2), Art. 28(4))** — Processors must not engage another processor without prior specific or general written authorisation of the controller. This use case detects new destination domains, SaaS tenants, or infrastructure regions receiving personal data from approved processors — flagging unapproved sub-processing paths inferred from network and SaaS audit telemetry.\n\nDocumented **Data sources**: CIM Network_Traffic data model, `index=proxy`, `gdpr_processor_subprocessor_allowlist.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unapproved destinations), Sankey (flow from segment to destination), Single value (new destinations this week), Bar chart (bytes by processor).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We processors must not engage another processor without prior specific or general written authorisation of the controller so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance",
                "Change"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.28(2)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.28(2) is enforced — Splunk UC-22.1.28: GDPR Sub-Processor Change Monitoring.",
                  "ea": "Saved search 'UC-22.1.28' running on CIM Network_Traffic data model, index=proxy, gdpr_processor_subprocessor_allowlist.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.28(4)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.28(4) is enforced — Splunk UC-22.1.28: GDPR Sub-Processor Change Monitoring.",
                  "ea": "Saved search 'UC-22.1.28' running on CIM Network_Traffic data model, index=proxy, gdpr_processor_subprocessor_allowlist.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.29",
              "n": "GDPR Processor Personal Data Breach Notification SLA (Art. 28(3)(f), Art. 33)",
              "c": "critical",
              "f": "intermediate",
              "v": "The processor must notify the controller without undue delay after becoming aware of a personal data breach. This use case measures time from processor-reported incident timestamps to internal acknowledgement tickets — ensuring contractual SLA (often 24–72 hours) is met and evidence exists for supervisory authority files.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC)",
              "d": "`index=itsm` (vendor security incident emails parsed to tickets), `index=vendor_sec` (HEC from processor portals), `gdpr_processor_register.csv`",
              "q": "index=itsm sourcetype=\"snow:incident\" category=\"*Vendor*\" OR short_description=\"*processor*breach*\" earliest=-30d\n| lookup gdpr_processor_register.csv vendor_name OUTPUT notification_sla_hours\n| eval reported=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval acknowledged=strptime(work_start,\"%Y-%m-%d %H:%M:%S\")\n| eval ack_hours=round((acknowledged-reported)/3600,2)\n| eval sla_hours=coalesce(notification_sla_hours, 24)\n| eval sla_breach=if(ack_hours > sla_hours, 1, 0)\n| where sla_breach=1 OR isnull(acknowledged)\n| table number, vendor_name, opened_at, work_start, ack_hours, sla_hours, sla_breach",
              "m": "(1) Standardise processor breach intake in ServiceNow with vendor name and reported time; (2) store per-processor SLA hours in `gdpr_processor_register.csv`; (3) alert legal/DPO on approaching SLA; (4) integrate HEC if processors send structured webhook notifications; (5) retain closed tickets as Art. 28(3)(f) evidence.",
              "z": "Table (SLA breaches), Single value (mean acknowledgement hours), Timeline (processor incidents), Bar chart (breaches by vendor).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Ticket Management](https://docs.splunk.com/Documentation/CIM/latest/User/Ticket_Management)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC).\n• Ensure the following data sources are available: `index=itsm` (vendor security incident emails parsed to tickets), `index=vendor_sec` (HEC from processor portals), `gdpr_processor_register.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardise processor breach intake in ServiceNow with vendor name and reported time; (2) store per-processor SLA hours in `gdpr_processor_register.csv`; (3) alert legal/DPO on approaching SLA; (4) integrate HEC if processors send structured webhook notifications; (5) retain closed tickets as Art. 28(3)(f) evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" category=\"*Vendor*\" OR short_description=\"*processor*breach*\" earliest=-30d\n| lookup gdpr_processor_register.csv vendor_name OUTPUT notification_sla_hours\n| eval reported=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval acknowledged=strptime(work_start,\"%Y-%m-%d %H:%M:%S\")\n| eval ack_hours=round((acknowledged-reported)/3600,2)\n| eval sla_hours=coalesce(notification_sla_hours, 24)\n| eval sla_breach=if(ack_hours > sla_hours, 1, 0)\n| where sla_breach=1 OR isnull(acknowledged)\n| table number, vendor_name, opened_at, work_start, ack_hours, sla_hours, sla_breach\n```\n\nUnderstanding this SPL\n\n**GDPR Processor Personal Data Breach Notification SLA (Art. 28(3)(f), Art. 33)** — The processor must notify the controller without undue delay after becoming aware of a personal data breach. This use case measures time from processor-reported incident timestamps to internal acknowledgement tickets — ensuring contractual SLA (often 24–72 hours) is met and evidence exists for supervisory authority files.\n\nDocumented **Data sources**: `index=itsm` (vendor security incident emails parsed to tickets), `index=vendor_sec` (HEC from processor portals), `gdpr_processor_register.csv`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **reported** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **acknowledged** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ack_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where sla_breach=1 OR isnull(acknowledged)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Processor Personal Data Breach Notification SLA (Art. 28(3)(f), Art. 33)**): table number, vendor_name, opened_at, work_start, ack_hours, sla_hours, sla_breach\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Processor Personal Data Breach Notification SLA (Art. 28(3)(f), Art. 33)** — The processor must notify the controller without undue delay after becoming aware of a personal data breach. This use case measures time from processor-reported incident timestamps to internal acknowledgement tickets — ensuring contractual SLA (often 24–72 hours) is met and evidence exists for supervisory authority files.\n\nDocumented **Data sources**: `index=itsm` (vendor security incident emails parsed to tickets), `index=vendor_sec` (HEC from processor portals), `gdpr_processor_register.csv`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Ticket_Management.All_Ticket_Management` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (SLA breaches), Single value (mean acknowledgement hours), Timeline (processor incidents), Bar chart (breaches by vendor).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We the processor must notify the controller without undue delay after becoming aware of a personal data breach.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "Ticket_Management"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.28(3)(f)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.28(3)(f) is enforced — Splunk UC-22.1.29: GDPR Processor Personal Data Breach Notification SLA.",
                  "ea": "Saved search 'UC-22.1.29' running on index itsm and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.33",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.33 (Breach notification to supervisory authority) is enforced — Splunk UC-22.1.29: GDPR Processor Personal Data Breach Notification SLA.",
                  "ea": "Saved search 'UC-22.1.29' running on index itsm and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.30",
              "n": "GDPR Data Processing Agreement Obligation Control Matrix (Art. 28(3))",
              "c": "high",
              "f": "beginner",
              "v": "Article 28(3) spells out mandatory terms that must bind processors. This use case tracks which DPA clauses (sub-processing, assistance with DSARs, deletion/return, audit rights) are marked as included per processor and flags incomplete DPAs before go-live of new integrations.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`gdpr_dpa_obligation_matrix.csv` (processor_name, clause_id, included, evidence_url), procurement system exports",
              "q": "| inputlookup gdpr_dpa_obligation_matrix.csv\n| stats dc(eval(if(included=\"false\", clause_id, null()))) as missing_clauses,\n        values(eval(if(included=\"false\", clause_id, null()))) as missing_list\n  by processor_name\n| where missing_clauses > 0\n| join processor_name [\n    | inputlookup gdpr_processor_register.csv\n    | fields processor_name, go_live_date, dpa_signed\n]\n| eval live_without_dpa=if(dpa_signed!=\"true\" AND isnotnull(go_live_date), 1, 0)\n| table processor_name, missing_clauses, missing_list, dpa_signed, go_live_date, live_without_dpa",
              "m": "(1) Encode Art. 28(3) obligations as rows in `gdpr_dpa_obligation_matrix.csv`; (2) integrate legal review workflow to set `included` and evidence URLs; (3) alert when any processor has missing mandatory clauses; (4) block go-live in ServiceNow when `live_without_dpa=1`; (5) refresh after each DPA amendment.",
              "z": "Table (incomplete DPAs), Bar chart (missing clauses by processor), Single value (processors with gaps).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `gdpr_dpa_obligation_matrix.csv` (processor_name, clause_id, included, evidence_url), procurement system exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Encode Art. 28(3) obligations as rows in `gdpr_dpa_obligation_matrix.csv`; (2) integrate legal review workflow to set `included` and evidence URLs; (3) alert when any processor has missing mandatory clauses; (4) block go-live in ServiceNow when `live_without_dpa=1`; (5) refresh after each DPA amendment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_dpa_obligation_matrix.csv\n| stats dc(eval(if(included=\"false\", clause_id, null()))) as missing_clauses,\n        values(eval(if(included=\"false\", clause_id, null()))) as missing_list\n  by processor_name\n| where missing_clauses > 0\n| join processor_name [\n    | inputlookup gdpr_processor_register.csv\n    | fields processor_name, go_live_date, dpa_signed\n]\n| eval live_without_dpa=if(dpa_signed!=\"true\" AND isnotnull(go_live_date), 1, 0)\n| table processor_name, missing_clauses, missing_list, dpa_signed, go_live_date, live_without_dpa\n```\n\nUnderstanding this SPL\n\n**GDPR Data Processing Agreement Obligation Control Matrix (Art. 28(3))** — Article 28(3) spells out mandatory terms that must bind processors. This use case tracks which DPA clauses (sub-processing, assistance with DSARs, deletion/return, audit rights) are marked as included per processor and flags incomplete DPAs before go-live of new integrations.\n\nDocumented **Data sources**: `gdpr_dpa_obligation_matrix.csv` (processor_name, clause_id, included, evidence_url), procurement system exports. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `stats` rolls up events into metrics; results are split **by processor_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where missing_clauses > 0` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **live_without_dpa** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **GDPR Data Processing Agreement Obligation Control Matrix (Art. 28(3))**): table processor_name, missing_clauses, missing_list, dpa_signed, go_live_date, live_without_dpa\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (incomplete DPAs), Bar chart (missing clauses by processor), Single value (processors with gaps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of data processing agreement obligation control matrix — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.28(3)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.28(3) is enforced — Splunk UC-22.1.30: GDPR Data Processing Agreement Obligation Control Matrix.",
                  "ea": "Saved search 'UC-22.1.30' running on gdpr_dpa_obligation_matrix.csv (processor_name, clause_id, included, evidence_url), procurement system exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.31",
              "n": "GDPR Processor Audit Evidence — Right to Audit and Inspection Logs (Art. 28(3)(h))",
              "c": "high",
              "f": "intermediate",
              "v": "Controllers must be able to audit processors where contractually agreed. This use case logs scheduled and ad hoc audit events, findings severity, remediation due dates, and closure — creating a defensible audit trail that complements static PDF reports stored elsewhere.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`gdpr_processor_audit_log.csv`, `index=itsm` (audit finding remediation tasks)",
              "q": "| inputlookup gdpr_processor_audit_log.csv\n| eval audit_date_epoch=strptime(audit_date,\"%Y-%m-%d\")\n| eval days_since_audit=round((now()-audit_date_epoch)/86400,0)\n| join processor_name type=left [\n    search index=itsm short_description=\"*processor audit*\" earliest=-365d\n    | stats count as open_findings by vendor_name\n    | rename vendor_name as processor_name\n]\n| eval next_audit_due=if(days_since_audit > 730, \"OVERDUE\", if(days_since_audit > 600, \"DUE_SOON\", \"OK\"))\n| where next_audit_due!=\"OK\" OR open_findings > 0\n| table processor_name, audit_date, days_since_audit, next_audit_due, open_findings",
              "m": "(1) Record each on-site or remote audit in `gdpr_processor_audit_log.csv`; (2) create ServiceNow tasks per finding with linkage to processor; (3) alert when biennial audit cycle is missed; (4) dashboard open findings for steering committee; (5) export evidence package before supervisory inspections.",
              "z": "Table (audit overdue), Timeline (audits by processor), Single value (open critical findings), Bar chart (findings by category).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `gdpr_processor_audit_log.csv`, `index=itsm` (audit finding remediation tasks).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Record each on-site or remote audit in `gdpr_processor_audit_log.csv`; (2) create ServiceNow tasks per finding with linkage to processor; (3) alert when biennial audit cycle is missed; (4) dashboard open findings for steering committee; (5) export evidence package before supervisory inspections.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_processor_audit_log.csv\n| eval audit_date_epoch=strptime(audit_date,\"%Y-%m-%d\")\n| eval days_since_audit=round((now()-audit_date_epoch)/86400,0)\n| join processor_name type=left [\n    search index=itsm short_description=\"*processor audit*\" earliest=-365d\n    | stats count as open_findings by vendor_name\n    | rename vendor_name as processor_name\n]\n| eval next_audit_due=if(days_since_audit > 730, \"OVERDUE\", if(days_since_audit > 600, \"DUE_SOON\", \"OK\"))\n| where next_audit_due!=\"OK\" OR open_findings > 0\n| table processor_name, audit_date, days_since_audit, next_audit_due, open_findings\n```\n\nUnderstanding this SPL\n\n**GDPR Processor Audit Evidence — Right to Audit and Inspection Logs (Art. 28(3)(h))** — Controllers must be able to audit processors where contractually agreed. This use case logs scheduled and ad hoc audit events, findings severity, remediation due dates, and closure — creating a defensible audit trail that complements static PDF reports stored elsewhere.\n\nDocumented **Data sources**: `gdpr_processor_audit_log.csv`, `index=itsm` (audit finding remediation tasks). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **audit_date_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_since_audit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **next_audit_due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where next_audit_due!=\"OK\" OR open_findings > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Processor Audit Evidence — Right to Audit and Inspection Logs (Art. 28(3)(h))**): table processor_name, audit_date, days_since_audit, next_audit_due, open_findings\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (audit overdue), Timeline (audits by processor), Single value (open critical findings), Bar chart (findings by category).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We controllers must be able to audit processors where contractually agreed so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.28(3)(h)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.28(3)(h) is enforced — Splunk UC-22.1.31: GDPR Processor Audit Evidence — Right to Audit and Inspection Logs.",
                  "ea": "Saved search 'UC-22.1.31' running on gdpr_processor_audit_log.csv, index=itsm (audit finding remediation tasks), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.32",
              "n": "GDPR DPIA Completion Tracking Against High-Risk Processing (Art. 35(7))",
              "c": "high",
              "f": "intermediate",
              "v": "Where processing is likely to result in a high risk, the controller shall carry out a DPIA prior to processing. This use case reconciles the ROPA, system go-live dates, and DPIA register to flag processing that started or changed materially without a completed DPIA.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`gdpr_ropa_register.csv`, `gdpr_dpia_register.csv`, change records `snow:change_request`",
              "q": "| inputlookup gdpr_ropa_register.csv WHERE high_risk=\"true\"\n| lookup gdpr_dpia_register.csv processing_activity OUTPUT dpia_status, dpia_completed_date\n| eval dpi_gap=case(\n    dpia_status=\"completed\", \"OK\",\n    dpia_status=\"in_progress\", \"IN_PROGRESS\",\n    1=1, \"MISSING\")\n| join processing_activity type=left [\n    search index=itsm sourcetype=\"snow:change_request\" earliest=-90d\n    | where match(lower(short_description),\"(?i)go.live|launch|migrate\")\n    | stats latest(start_date) as last_major_change by cmdb_ci\n    | rename cmdb_ci as system_name\n]\n| where dpi_gap!=\"OK\"\n| table processing_activity, system_name, dpi_gap, dpia_status, last_major_change",
              "m": "(1) Keep `high_risk` flags on ROPA entries per Art. 35(3); (2) join DPIA register by processing activity; (3) correlate major changes with DPIA refresh policy; (4) alert DPO on `MISSING` before production changes; (5) align with UC-22.1.14 for consolidated DPIA coverage views.",
              "z": "Table (missing DPIAs), Single value (high-risk gaps), Pie chart (status distribution), Timeline (go-live vs DPIA dates).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `gdpr_ropa_register.csv`, `gdpr_dpia_register.csv`, change records `snow:change_request`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Keep `high_risk` flags on ROPA entries per Art. 35(3); (2) join DPIA register by processing activity; (3) correlate major changes with DPIA refresh policy; (4) alert DPO on `MISSING` before production changes; (5) align with UC-22.1.14 for consolidated DPIA coverage views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_ropa_register.csv WHERE high_risk=\"true\"\n| lookup gdpr_dpia_register.csv processing_activity OUTPUT dpia_status, dpia_completed_date\n| eval dpi_gap=case(\n    dpia_status=\"completed\", \"OK\",\n    dpia_status=\"in_progress\", \"IN_PROGRESS\",\n    1=1, \"MISSING\")\n| join processing_activity type=left [\n    search index=itsm sourcetype=\"snow:change_request\" earliest=-90d\n    | where match(lower(short_description),\"(?i)go.live|launch|migrate\")\n    | stats latest(start_date) as last_major_change by cmdb_ci\n    | rename cmdb_ci as system_name\n]\n| where dpi_gap!=\"OK\"\n| table processing_activity, system_name, dpi_gap, dpia_status, last_major_change\n```\n\nUnderstanding this SPL\n\n**GDPR DPIA Completion Tracking Against High-Risk Processing (Art. 35(7))** — Where processing is likely to result in a high risk, the controller shall carry out a DPIA prior to processing. This use case reconciles the ROPA, system go-live dates, and DPIA register to flag processing that started or changed materially without a completed DPIA.\n\nDocumented **Data sources**: `gdpr_ropa_register.csv`, `gdpr_dpia_register.csv`, change records `snow:change_request`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **dpi_gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where dpi_gap!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR DPIA Completion Tracking Against High-Risk Processing (Art. 35(7))**): table processing_activity, system_name, dpi_gap, dpia_status, last_major_change\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR DPIA Completion Tracking Against High-Risk Processing (Art. 
35(7))** — Where processing is likely to result in a high risk, the controller shall carry out a DPIA prior to processing. This use case reconciles the ROPA, system go-live dates, and DPIA register to flag processing that started or changed materially without a completed DPIA.\n\nDocumented **Data sources**: `gdpr_ropa_register.csv`, `gdpr_dpia_register.csv`, change records `snow:change_request`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing DPIAs), Single value (high-risk gaps), Pie chart (status distribution), Timeline (go-live vs DPIA dates).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We where processing is likely to result in a high risk, the controller shall carry out a DPIA prior to processing so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.35(7)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.35(7) is enforced — Splunk UC-22.1.32: GDPR DPIA Completion Tracking Against High-Risk Processing.",
                  "ea": "Saved search 'UC-22.1.32' running on gdpr_ropa_register.csv, gdpr_dpia_register.csv, change records snow:change_request, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.33",
              "n": "GDPR DPIA Residual Risk Scoring and Escalation (Art. 35(7)(b))",
              "c": "high",
              "f": "intermediate",
              "v": "The DPIA must assess risks to rights and freedoms of data subjects. This use case ingests residual risk scores from DPIA tooling or spreadsheets, compares them to risk appetite thresholds, and escalates processing activities where residual risk remains high after mitigations.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`gdpr_dpia_register.csv` (residual_risk_score, mitigation_status), `index=risk` (optional ES risk correlation)",
              "q": "| inputlookup gdpr_dpia_register.csv\n| eval risk_band=case(\n    residual_risk_score>=15, \"UNACCEPTABLE\",\n    residual_risk_score>=10, \"HIGH\",\n    residual_risk_score>=5, \"MEDIUM\",\n    1=1, \"LOW\")\n| where risk_band IN (\"UNACCEPTABLE\",\"HIGH\")\n| eval mitigation_complete=if(mitigation_status=\"complete\",1,0)\n| where mitigation_complete=0 OR risk_band=\"UNACCEPTABLE\"\n| table processing_activity, residual_risk_score, risk_band, mitigation_status, dpia_owner\n| sort - residual_risk_score",
              "m": "(1) Define numeric residual risk scale in DPIA methodology; (2) sync `gdpr_dpia_register.csv` from GRC tool nightly; (3) alert DPO and CISO when `UNACCEPTABLE` persists; (4) require consultation workflow per Art. 36 for highest band; (5) optionally overlay ES `index=risk` for technical corroboration.",
              "z": "Bar chart (residual risk by activity), Table (unmitigated high risk), Single value (count above threshold), Scatter (likelihood vs impact if fields exist).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `gdpr_dpia_register.csv` (residual_risk_score, mitigation_status), `index=risk` (optional ES risk correlation).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define numeric residual risk scale in DPIA methodology; (2) sync `gdpr_dpia_register.csv` from GRC tool nightly; (3) alert DPO and CISO when `UNACCEPTABLE` persists; (4) require consultation workflow per Art. 36 for highest band; (5) optionally overlay ES `index=risk` for technical corroboration.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_dpia_register.csv\n| eval risk_band=case(\n    residual_risk_score>=15, \"UNACCEPTABLE\",\n    residual_risk_score>=10, \"HIGH\",\n    residual_risk_score>=5, \"MEDIUM\",\n    1=1, \"LOW\")\n| where risk_band IN (\"UNACCEPTABLE\",\"HIGH\")\n| eval mitigation_complete=if(mitigation_status=\"complete\",1,0)\n| where mitigation_complete=0 OR risk_band=\"UNACCEPTABLE\"\n| table processing_activity, residual_risk_score, risk_band, mitigation_status, dpia_owner\n| sort - residual_risk_score\n```\n\nUnderstanding this SPL\n\n**GDPR DPIA Residual Risk Scoring and Escalation (Art. 35(7)(b))** — The DPIA must assess risks to rights and freedoms of data subjects. This use case ingests residual risk scores from DPIA tooling or spreadsheets, compares them to risk appetite thresholds, and escalates processing activities where residual risk remains high after mitigations.\n\nDocumented **Data sources**: `gdpr_dpia_register.csv` (residual_risk_score, mitigation_status), `index=risk` (optional ES risk correlation). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **risk_band** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where risk_band IN (\"UNACCEPTABLE\",\"HIGH\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **mitigation_complete** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mitigation_complete=0 OR risk_band=\"UNACCEPTABLE\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR DPIA Residual Risk Scoring and Escalation (Art. 35(7)(b))**): table processing_activity, residual_risk_score, risk_band, mitigation_status, dpia_owner\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Bar chart (residual risk by activity), Table (unmitigated high risk), Single value (count above threshold), Scatter (likelihood vs impact if fields exist).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We the DPIA must assess risks to rights and freedoms of data subjects so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Risk",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.35(7)(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.35(7)(b) is enforced — Splunk UC-22.1.33: GDPR DPIA Residual Risk Scoring and Escalation.",
                  "ea": "Saved search 'UC-22.1.33' running on gdpr_dpia_register.csv (residual_risk_score, mitigation_status), index=risk (optional ES risk correlation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.34",
              "n": "GDPR DPIA Supervisory Authority Consultation Tracking (Art. 36)",
              "c": "high",
              "f": "beginner",
              "v": "Prior consultation with the supervisory authority is required when a DPIA indicates high residual risk unless mitigations reduce it. This use case tracks consultation requests, authority references, response deadlines, and conditions imposed — preventing processing start dates from preceding regulatory clearance.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`gdpr_dpia_consultation_register.csv`, `index=itsm` (legal tasks)",
              "q": "| inputlookup gdpr_dpia_consultation_register.csv\n| eval consultation_opened=strptime(request_date,\"%Y-%m-%d\")\n| eval authority_due=strptime(authority_response_due,\"%Y-%m-%d\")\n| eval processing_start=strptime(planned_processing_start,\"%Y-%m-%d\")\n| eval overdue_response=if(now()>authority_due AND isnull(authority_decision_date),1,0)\n| eval start_before_clearance=if(processing_start < strptime(authority_decision_date,\"%Y-%m-%d\") AND isnull(authority_decision_date),1,0)\n| where overdue_response=1 OR start_before_clearance=1 OR authority_decision=\"prohibited\"\n| table processing_activity, dpa_reference, request_date, authority_response_due, authority_decision, planned_processing_start",
              "m": "(1) Maintain consultation register with planned start dates; (2) integrate legal team updates via ServiceNow or CSV upload; (3) alert when response is overdue or processing is scheduled without decision; (4) archive authority letters as external evidence with links in CSV; (5) review quarterly with DPO.",
              "z": "Table (consultation status), Timeline (request to decision), Single value (overdue consultations), Gantt-style bar (optional in Dashboard Studio).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `gdpr_dpia_consultation_register.csv`, `index=itsm` (legal tasks).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain consultation register with planned start dates; (2) integrate legal team updates via ServiceNow or CSV upload; (3) alert when response is overdue or processing is scheduled without decision; (4) archive authority letters as external evidence with links in CSV; (5) review quarterly with DPO.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_dpia_consultation_register.csv\n| eval consultation_opened=strptime(request_date,\"%Y-%m-%d\")\n| eval authority_due=strptime(authority_response_due,\"%Y-%m-%d\")\n| eval processing_start=strptime(planned_processing_start,\"%Y-%m-%d\")\n| eval overdue_response=if(now()>authority_due AND isnull(authority_decision_date),1,0)\n| eval start_before_clearance=if(processing_start < strptime(authority_decision_date,\"%Y-%m-%d\") AND isnull(authority_decision_date),1,0)\n| where overdue_response=1 OR start_before_clearance=1 OR authority_decision=\"prohibited\"\n| table processing_activity, dpa_reference, request_date, authority_response_due, authority_decision, planned_processing_start\n```\n\nUnderstanding this SPL\n\n**GDPR DPIA Supervisory Authority Consultation Tracking (Art. 36)** — Prior consultation with the supervisory authority is required when a DPIA indicates high residual risk unless mitigations reduce it. This use case tracks consultation requests, authority references, response deadlines, and conditions imposed — preventing processing start dates from preceding regulatory clearance.\n\nDocumented **Data sources**: `gdpr_dpia_consultation_register.csv`, `index=itsm` (legal tasks). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **consultation_opened** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **authority_due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **processing_start** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overdue_response** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **start_before_clearance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overdue_response=1 OR start_before_clearance=1 OR authority_decision=\"prohibited\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR DPIA Supervisory Authority Consultation Tracking (Art. 
36)**): table processing_activity, dpa_reference, request_date, authority_response_due, authority_decision, planned_processing_start\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (consultation status), Timeline (request to decision), Single value (overdue consultations), Gantt-style bar (optional in Dashboard Studio).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We prior consultation with the supervisory authority is required when a DPIA indicates high residual risk unless mitigations reduce it so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.36",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.36 is enforced — Splunk UC-22.1.34: GDPR DPIA Supervisory Authority Consultation Tracking.",
                  "ea": "Saved search 'UC-22.1.34' running on gdpr_dpia_consultation_register.csv, index=itsm (legal tasks), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.35",
              "n": "GDPR DPIA Remediation Monitoring and Mitigation Closure (Art. 35(7)(d))",
              "c": "high",
              "f": "intermediate",
              "v": "The DPIA must include measures to address risks. This use case ties DPIA mitigation items to ITSM work orders, tracks age and overdue status, and verifies technical completion signals (e.g., control deployment logs) where available.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Add-on for GitHub (Splunkbase 5596)",
              "d": "`gdpr_dpia_mitigation_register.csv`, `index=itsm` `sourcetype=\"snow:sc_task\"`",
              "q": "| inputlookup gdpr_dpia_mitigation_register.csv\n| rename mitigation_task as task_number\n| join task_number [\n    search index=itsm sourcetype=\"snow:sc_task\" earliest=-180d\n    | eval done=if(state IN (\"Closed\",\"Resolved\"),1,0)\n    | stats latest(state) as state latest(_time) as last_update max(done) as closed by number\n    | rename number as task_number\n]\n| eval age_days=round((now()-strptime(due_date,\"%Y-%m-%d\"))/86400,0)\n| where closed=0 AND age_days > 0\n| table processing_activity, mitigation_description, task_number, state, due_date, age_days\n| sort - age_days",
              "m": "(1) Export mitigation action list from each approved DPIA into `gdpr_dpia_mitigation_register.csv`; (2) create linked ServiceNow tasks with due dates; (3) alert owners weekly on overdue items; (4) optionally confirm technical controls via GitHub merge events or Terraform apply logs; (5) close DPIA only when all mitigations are verified.",
              "z": "Table (overdue mitigations), Single value (open high-risk mitigations), Burndown chart (open vs closed), Bar chart (age by owner).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5596), [CIM: Ticket Management](https://docs.splunk.com/Documentation/CIM/latest/User/Ticket_Management)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Add-on for GitHub (Splunkbase 5596).\n• Ensure the following data sources are available: `gdpr_dpia_mitigation_register.csv`, `index=itsm` `sourcetype=\"snow:sc_task\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export mitigation action list from each approved DPIA into `gdpr_dpia_mitigation_register.csv`; (2) create linked ServiceNow tasks with due dates; (3) alert owners weekly on overdue items; (4) optionally confirm technical controls via GitHub merge events or Terraform apply logs; (5) close DPIA only when all mitigations are verified.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_dpia_mitigation_register.csv\n| rename mitigation_task as task_number\n| join task_number [\n    search index=itsm sourcetype=\"snow:sc_task\" earliest=-180d\n    | eval done=if(state IN (\"Closed\",\"Resolved\"),1,0)\n    | stats latest(state) as state latest(_time) as last_update max(done) as closed by number\n    | rename number as task_number\n]\n| eval age_days=round((now()-strptime(due_date,\"%Y-%m-%d\"))/86400,0)\n| where closed=0 AND age_days > 0\n| table processing_activity, mitigation_description, task_number, state, due_date, age_days\n| sort - age_days\n```\n\nUnderstanding this SPL\n\n**GDPR DPIA Remediation Monitoring and Mitigation Closure (Art. 35(7)(d))** — The DPIA must include measures to address risks. This use case ties DPIA mitigation items to ITSM work orders, tracks age and overdue status, and verifies technical completion signals (e.g., control deployment logs) where available.\n\nDocumented **Data sources**: `gdpr_dpia_mitigation_register.csv`, `index=itsm` `sourcetype=\"snow:sc_task\"`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Add-on for GitHub (Splunkbase 5596). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where closed=0 AND age_days > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR DPIA Remediation Monitoring and Mitigation Closure (Art. 35(7)(d))**): table processing_activity, mitigation_description, task_number, state, due_date, age_days\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR DPIA Remediation Monitoring and Mitigation Closure (Art. 
35(7)(d))** — The DPIA must include measures to address risks. This use case ties DPIA mitigation items to ITSM work orders, tracks age and overdue status, and verifies technical completion signals (e.g., control deployment logs) where available.\n\nDocumented **Data sources**: `gdpr_dpia_mitigation_register.csv`, `index=itsm` `sourcetype=\"snow:sc_task\"`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Add-on for GitHub (Splunkbase 5596). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Ticket_Management.All_Ticket_Management` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue mitigations), Single value (open high-risk mitigations), Burndown chart (open vs closed), Bar chart (age by owner).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We the DPIA must include measures to address risks so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Ticket_Management"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count",
              "e": [
                "github",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.35(7)(d)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.35(7)(d) is enforced — Splunk UC-22.1.35: GDPR DPIA Remediation Monitoring and Mitigation Closure.",
                  "ea": "Saved search 'UC-22.1.35' running on gdpr_dpia_mitigation_register.csv, index=itsm sourcetype=\"snow:sc_task\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.36",
              "n": "GDPR Transfer Impact Assessment (TIA) Status for Third-Country Transfers (Art. 44-46)",
              "c": "high",
              "f": "advanced",
              "v": "Following Schrems II, controllers must assess supplementary measures for transfers not covered by adequacy. This use case tracks whether a TIA exists, is current, and matches actual data flows observed in network telemetry — highlighting transfers to jurisdictions without completed TIAs.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "`gdpr_transfer_register.csv`, CIM Network_Traffic, geolocation enrichment",
              "q": "| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where earliest=-30d NOT All_Traffic.dest_category IN (\"eu_eea\",\"uk_adequate\")\n  by All_Traffic.dest\n| rename All_Traffic.* as *\n| iplocation dest\n| eval country=Country\n| lookup gdpr_transfer_register.csv dest_domain OUTPUT tia_status, transfer_mechanism, data_categories\n| where isnull(tia_status) OR tia_status!=\"completed\"\n| sort - bytes_out\n| table dest, country, bytes_out, tia_status, transfer_mechanism, data_categories",
              "m": "(1) Build `gdpr_transfer_register.csv` keyed by external service domains with TIA and mechanism fields; (2) enrich egress with country; (3) exclude adequacy-listed countries per legal list; (4) alert on high-volume transfers with missing TIA; (5) integrate with privacy office for TIA workflow updates.",
              "z": "Map (transfer destinations), Table (missing TIAs), Bar chart (volume by country), Single value (domains without TIA).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: `gdpr_transfer_register.csv`, CIM Network_Traffic, geolocation enrichment.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build `gdpr_transfer_register.csv` keyed by external service domains with TIA and mechanism fields; (2) enrich egress with country; (3) exclude adequacy-listed countries per legal list; (4) alert on high-volume transfers with missing TIA; (5) integrate with privacy office for TIA workflow updates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where earliest=-30d NOT All_Traffic.dest_category IN (\"eu_eea\",\"uk_adequate\")\n  by All_Traffic.dest\n| rename All_Traffic.* as *\n| iplocation dest\n| eval country=Country\n| lookup gdpr_transfer_register.csv dest_domain OUTPUT tia_status, transfer_mechanism, data_categories\n| where isnull(tia_status) OR tia_status!=\"completed\"\n| sort - bytes_out\n| table dest, country, bytes_out, tia_status, transfer_mechanism, data_categories\n```\n\nUnderstanding this SPL\n\n**GDPR Transfer Impact Assessment (TIA) Status for Third-Country Transfers (Art. 44-46)** — Following Schrems II, controllers must assess supplementary measures for transfers not covered by adequacy. This use case tracks whether a TIA exists, is current, and matches actual data flows observed in network telemetry — highlighting transfers to jurisdictions without completed TIAs.\n\nDocumented **Data sources**: `gdpr_transfer_register.csv`, CIM Network_Traffic, geolocation enrichment. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Pipeline stage (see **GDPR Transfer Impact Assessment (TIA) Status for Third-Country Transfers (Art. 44-46)**): iplocation dest\n• `eval` defines or adjusts **country** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(tia_status) OR tia_status!=\"completed\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **GDPR Transfer Impact Assessment (TIA) Status for Third-Country Transfers (Art. 
44-46)**): table dest, country, bytes_out, tia_status, transfer_mechanism, data_categories\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Transfer Impact Assessment (TIA) Status for Third-Country Transfers (Art. 44-46)** — Following Schrems II, controllers must assess supplementary measures for transfers not covered by adequacy. This use case tracks whether a TIA exists, is current, and matches actual data flows observed in network telemetry — highlighting transfers to jurisdictions without completed TIAs.\n\nDocumented **Data sources**: `gdpr_transfer_register.csv`, CIM Network_Traffic, geolocation enrichment. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (transfer destinations), Table (missing TIAs), Bar chart (volume by country), Single value (domains without TIA).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We following Schrems II, controllers must assess supplementary measures for transfers not covered by adequacy so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.44",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.44 (International transfers — general principle) is enforced — Splunk UC-22.1.36: GDPR Transfer Impact Assessment (TIA) Status for Third-Country Transfers.",
                  "ea": "Saved search 'UC-22.1.36' running on gdpr_transfer_register.csv, CIM Network_Traffic, geolocation enrichment, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.45",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.45 (Transfers via adequacy decision) is enforced — Splunk UC-22.1.36: GDPR Transfer Impact Assessment (TIA) Status for Third-Country Transfers.",
                  "ea": "Saved search 'UC-22.1.36' running on gdpr_transfer_register.csv, CIM Network_Traffic, geolocation enrichment, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.46 (Transfers subject to safeguards) is enforced — Splunk UC-22.1.36: GDPR Transfer Impact Assessment (TIA) Status for Third-Country Transfers.",
                  "ea": "Saved search 'UC-22.1.36' running on gdpr_transfer_register.csv, CIM Network_Traffic, geolocation enrichment, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.37",
              "n": "GDPR Standard Contractual Clauses (SCCs) Compliance Tracking (Art. 46(2)(c))",
              "c": "high",
              "f": "beginner",
              "v": "SCCs must be implemented and kept current with module and annex accuracy. This use case monitors SCC version, signing dates, linked processing activities, and triggers review when processor subprocessors or transfer countries change materially.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`gdpr_scc_register.csv`, `gdpr_processor_register.csv`",
              "q": "| inputlookup gdpr_scc_register.csv\n| eval signed_epoch=strptime(signed_date,\"%Y-%m-%d\")\n| eval scc_age_days=round((now()-signed_epoch)/86400,0)\n| lookup gdpr_processor_register.csv processor_name OUTPUT sub_processors_last_changed\n| eval sub_change_epoch=strptime(sub_processors_last_changed,\"%Y-%m-%d\")\n| eval review_needed=if(sub_change_epoch > signed_epoch OR scc_age_days > 1095, 1, 0)\n| where review_needed=1 OR scc_version!=\"2021\"\n| table processor_name, scc_version, signed_date, scc_age_days, sub_processors_last_changed, review_needed",
              "m": "(1) Record each SCC with version (2021 modules) and signing date; (2) sync sub-processor change dates from vendor notifications; (3) alert legal when sub-processor changes post-date the SCC signature; (4) schedule triennial SCC review tasks; (5) archive signed PDFs externally with URLs in register.",
              "z": "Table (SCCs needing review), Single value (outdated SCC count), Timeline (signature vs sub-processor changes), Bar chart (by module type).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `gdpr_scc_register.csv`, `gdpr_processor_register.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Record each SCC with version (2021 modules) and signing date; (2) sync sub-processor change dates from vendor notifications; (3) alert legal when sub-processor changes post-date the SCC signature; (4) schedule triennial SCC review tasks; (5) archive signed PDFs externally with URLs in register.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_scc_register.csv\n| eval signed_epoch=strptime(signed_date,\"%Y-%m-%d\")\n| eval scc_age_days=round((now()-signed_epoch)/86400,0)\n| lookup gdpr_processor_register.csv processor_name OUTPUT sub_processors_last_changed\n| eval sub_change_epoch=strptime(sub_processors_last_changed,\"%Y-%m-%d\")\n| eval review_needed=if(sub_change_epoch > signed_epoch OR scc_age_days > 1095, 1, 0)\n| where review_needed=1 OR scc_version!=\"2021\"\n| table processor_name, scc_version, signed_date, scc_age_days, sub_processors_last_changed, review_needed\n```\n\nUnderstanding this SPL\n\n**GDPR Standard Contractual Clauses (SCCs) Compliance Tracking (Art. 46(2)(c))** — SCCs must be implemented and kept current with module and annex accuracy. This use case monitors SCC version, signing dates, linked processing activities, and triggers review when processor subprocessors or transfer countries change materially.\n\nDocumented **Data sources**: `gdpr_scc_register.csv`, `gdpr_processor_register.csv`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **signed_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **scc_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **sub_change_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **review_needed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where review_needed=1 OR scc_version!=\"2021\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Standard Contractual Clauses (SCCs) Compliance Tracking (Art. 46(2)(c))**): table processor_name, scc_version, signed_date, scc_age_days, sub_processors_last_changed, review_needed\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (SCCs needing review), Single value (outdated SCC count), Timeline (signature vs sub-processor changes), Bar chart (by module type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We sCCs must be implemented and kept current with module and annex accuracy so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.46(2)(c)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.46(2)(c) is enforced — Splunk UC-22.1.37: GDPR Standard Contractual Clauses (SCCs) Compliance Tracking.",
                  "ea": "Saved search 'UC-22.1.37' running on gdpr_scc_register.csv, gdpr_processor_register.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.38",
              "n": "GDPR Data Localization Enforcement for Restricted Processing (Art. 44-49)",
              "c": "critical",
              "f": "advanced",
              "v": "Some controllers commit to EEA-only processing or specific regional residency in contracts and DPIAs. This use case detects compute, storage, and support access that originates outside approved regions for systems bound by localization commitments.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110)",
              "d": "`aws:cloudtrail` (region fields), Azure activity logs, VPN and admin session logs, `gdpr_localization_policy.csv`",
              "q": "| inputlookup gdpr_localization_policy.csv WHERE required_region=\"EEA_ONLY\"\n| rename system_id as resource_id\n| join resource_id [\n    search index=cloud (sourcetype=\"aws:cloudtrail\" OR sourcetype=\"mscs:azure:auditlog\") earliest=-7d\n    | eval region=coalesce(region, activity_location)\n    | stats values(region) as regions_seen by resource_id\n]\n| eval breach=if(match(regions_seen,\"(?i)us-|ap-|me-\") OR mvcount(regions_seen)>3, 1, 0)\n| where breach=1\n| table resource_id, processing_activity, required_region, regions_seen",
              "m": "(1) List systems with contractual residency in `gdpr_localization_policy.csv`; (2) ingest multi-region cloud audit events; (3) map regions to EEA vs non-EEA; (4) alert on any non-EEA data plane or support access violating policy; (5) escalate to cloud centre of excellence and DPO.",
              "z": "Table (localization breaches), Map (regions), Single value (breach count), Heatmap (events by region).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110).\n• Ensure the following data sources are available: `aws:cloudtrail` (region fields), Azure activity logs, VPN and admin session logs, `gdpr_localization_policy.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) List systems with contractual residency in `gdpr_localization_policy.csv`; (2) ingest multi-region cloud audit events; (3) map regions to EEA vs non-EEA; (4) alert on any non-EEA data plane or support access violating policy; (5) escalate to cloud centre of excellence and DPO.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_localization_policy.csv WHERE required_region=\"EEA_ONLY\"\n| rename system_id as resource_id\n| join resource_id [\n    search index=cloud (sourcetype=\"aws:cloudtrail\" OR sourcetype=\"mscs:azure:auditlog\") earliest=-7d\n    | eval region=coalesce(region, activity_location)\n    | stats values(region) as regions_seen by resource_id\n]\n| eval breach=if(match(regions_seen,\"(?i)us-|ap-|me-\") OR mvcount(regions_seen)>3, 1, 0)\n| where breach=1\n| table resource_id, processing_activity, required_region, regions_seen\n```\n\nUnderstanding this SPL\n\n**GDPR Data Localization Enforcement for Restricted Processing (Art. 44-49)** — Some controllers commit to EEA-only processing or specific regional residency in contracts and DPIAs. This use case detects compute, storage, and support access that originates outside approved regions for systems bound by localization commitments.\n\nDocumented **Data sources**: `aws:cloudtrail` (region fields), Azure activity logs, VPN and admin session logs, `gdpr_localization_policy.csv`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where breach=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Data Localization Enforcement for Restricted Processing (Art. 44-49)**): table resource_id, processing_activity, required_region, regions_seen\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Data Localization Enforcement for Restricted Processing (Art. 44-49)** — Some controllers commit to EEA-only processing or specific regional residency in contracts and DPIAs. 
This use case detects compute, storage, and support access that originates outside approved regions for systems bound by localization commitments.\n\nDocumented **Data sources**: `aws:cloudtrail` (region fields), Azure activity logs, VPN and admin session logs, `gdpr_localization_policy.csv`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (localization breaches), Map (regions), Single value (breach count), Heatmap (events by region).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We some controllers commit to EEA-only processing or specific regional residency in contracts and DPIAs so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.44",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.44 (International transfers — general principle) is enforced — Splunk UC-22.1.38: GDPR Data Localization Enforcement for Restricted Processing.",
                  "ea": "Saved search 'UC-22.1.38' running on aws:cloudtrail (region fields), Azure activity logs, VPN and admin session logs, gdpr_localization_policy.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.45",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.45 (Transfers via adequacy decision) is enforced — Splunk UC-22.1.38: GDPR Data Localization Enforcement for Restricted Processing.",
                  "ea": "Saved search 'UC-22.1.38' running on aws:cloudtrail (region fields), Azure activity logs, VPN and admin session logs, gdpr_localization_policy.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.46 (Transfers subject to safeguards) is enforced — Splunk UC-22.1.38: GDPR Data Localization Enforcement for Restricted Processing.",
                  "ea": "Saved search 'UC-22.1.38' running on aws:cloudtrail (region fields), Azure activity logs, VPN and admin session logs, gdpr_localization_policy.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.47",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.47 is enforced — Splunk UC-22.1.38: GDPR Data Localization Enforcement for Restricted Processing.",
                  "ea": "Saved search 'UC-22.1.38' running on aws:cloudtrail (region fields), Azure activity logs, VPN and admin session logs, gdpr_localization_policy.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.48",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.48 is enforced — Splunk UC-22.1.38: GDPR Data Localization Enforcement for Restricted Processing.",
                  "ea": "Saved search 'UC-22.1.38' running on aws:cloudtrail (region fields), Azure activity logs, VPN and admin session logs, gdpr_localization_policy.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.49",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.49 is enforced — Splunk UC-22.1.38: GDPR Data Localization Enforcement for Restricted Processing.",
                  "ea": "Saved search 'UC-22.1.38' running on aws:cloudtrail (region fields), Azure activity logs, VPN and admin session logs, gdpr_localization_policy.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.39",
              "n": "GDPR Adequacy Decision and Legal Basis Change Monitoring (Art. 45)",
              "c": "high",
              "f": "beginner",
              "v": "The Commission may adopt adequacy decisions that evolve. This use case maintains a reference list of countries with adequacy, tracks when transfers rely on adequacy vs SCCs, and flags transfers to countries whose adequacy status changed in the legal reference feed — prompting legal revalidation.",
              "t": "HTTP Event Collector (HEC — legal reference updates), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`gdpr_adequacy_reference.csv` (country, adequacy_status, effective_date), `gdpr_transfer_register.csv`",
              "q": "| inputlookup gdpr_adequacy_reference.csv\n| where adequacy_status=\"revoked\" OR adequacy_status=\"pending_review\"\n| join country [\n    | inputlookup gdpr_transfer_register.csv\n    | where transfer_mechanism=\"adequacy\"\n    | stats values(dest_domain) as affected_domains by destination_country\n    | rename destination_country as country\n]\n| table country, adequacy_status, effective_date, affected_domains",
              "m": "(1) Curate `gdpr_adequacy_reference.csv` when Commission decisions publish; (2) optionally ingest EU Official Journal RSS via HEC for awareness; (3) auto-open legal review tasks in ServiceNow on status change; (4) require transfer mechanism update in `gdpr_transfer_register.csv`; (5) communicate to data owners of affected domains.",
              "z": "Table (impacted transfers), Single value (transfers on revoked adequacy), Timeline (decision effective dates).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC — legal reference updates), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `gdpr_adequacy_reference.csv` (country, adequacy_status, effective_date), `gdpr_transfer_register.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Curate `gdpr_adequacy_reference.csv` when Commission decisions publish; (2) optionally ingest EU Official Journal RSS via HEC for awareness; (3) auto-open legal review tasks in ServiceNow on status change; (4) require transfer mechanism update in `gdpr_transfer_register.csv`; (5) communicate to data owners of affected domains.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_adequacy_reference.csv\n| where adequacy_status=\"revoked\" OR adequacy_status=\"pending_review\"\n| join country [\n    | inputlookup gdpr_transfer_register.csv\n    | where transfer_mechanism=\"adequacy\"\n    | stats values(dest_domain) as affected_domains by destination_country\n    | rename destination_country as country\n]\n| table country, adequacy_status, effective_date, affected_domains\n```\n\nUnderstanding this SPL\n\n**GDPR Adequacy Decision and Legal Basis Change Monitoring (Art. 45)** — The Commission may adopt adequacy decisions that evolve. This use case maintains a reference list of countries with adequacy, tracks when transfers rely on adequacy vs SCCs, and flags transfers to countries whose adequacy status changed in the legal reference feed — prompting legal revalidation.\n\nDocumented **Data sources**: `gdpr_adequacy_reference.csv` (country, adequacy_status, effective_date), `gdpr_transfer_register.csv`. **App/TA** (typical add-on context): HTTP Event Collector (HEC — legal reference updates), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Filters the current rows with `where adequacy_status=\"revoked\" OR adequacy_status=\"pending_review\"` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **GDPR Adequacy Decision and Legal Basis Change Monitoring (Art. 45)**): table country, adequacy_status, effective_date, affected_domains\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (impacted transfers), Single value (transfers on revoked adequacy), Timeline (decision effective dates).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We the Commission may adopt adequacy decisions that evolve so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.45",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.45 (Transfers via adequacy decision) is enforced — Splunk UC-22.1.39: GDPR Adequacy Decision and Legal Basis Change Monitoring.",
                  "ea": "Saved search 'UC-22.1.39' running on gdpr_adequacy_reference.csv (country, adequacy_status, effective_date), gdpr_transfer_register.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.40",
              "n": "GDPR Binding Corporate Rules (BCR) Evidence and Intra-Group Transfer Monitoring (Art. 47)",
              "c": "high",
              "f": "intermediate",
              "v": "BCRs require demonstrable enforcement of group-wide policies including transfers between entities. This use case maps intra-group data flows to BCR-covered entities, verifies annual certification of policy acknowledgement, and detects flows involving non-BCR entities.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "`gdpr_bcr_entity_register.csv`, CIM Network_Traffic, Active Directory or HR entity metadata",
              "q": "| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where earliest=-30d match(All_Traffic.src_category,\"(?i)internal\")\n  by All_Traffic.src All_Traffic.dest\n| rename All_Traffic.* as *\n| lookup gdpr_bcr_entity_register.csv entity_id AS src OUTPUT bcr_covered, entity_name\n| lookup gdpr_bcr_entity_register.csv entity_id AS dest OUTPUT bcr_covered AS dest_bcr, entity_name AS dest_name\n| eval bcr_gap=if(bcr_covered=\"true\" AND dest_bcr!=\"true\", \"FLOW_TO_NON_BCR\", \"review\")\n| where bcr_gap=\"FLOW_TO_NON_BCR\"\n| table src, dest, entity_name, dest_name, bytes_out, bcr_gap",
              "m": "(1) Assign `entity_id` to sites and cloud subscriptions in `gdpr_bcr_entity_register.csv`; (2) tag internal IP ranges and cloud accounts; (3) alert when significant volumes flow to entities not listed as BCR members; (4) join HR training data for annual BCR attestation; (5) include in group privacy committee reporting.",
              "z": "Sankey (intra-group flows), Table (non-BCR recipients), Single value (policy acknowledgement %), Bar chart (volume by entity).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: `gdpr_bcr_entity_register.csv`, CIM Network_Traffic, Active Directory or HR entity metadata.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Assign `entity_id` to sites and cloud subscriptions in `gdpr_bcr_entity_register.csv`; (2) tag internal IP ranges and cloud accounts; (3) alert when significant volumes flow to entities not listed as BCR members; (4) join HR training data for annual BCR attestation; (5) include in group privacy committee reporting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where earliest=-30d match(All_Traffic.src_category,\"(?i)internal\")\n  by All_Traffic.src All_Traffic.dest\n| rename All_Traffic.* as *\n| lookup gdpr_bcr_entity_register.csv entity_id AS src OUTPUT bcr_covered, entity_name\n| lookup gdpr_bcr_entity_register.csv entity_id AS dest OUTPUT bcr_covered AS dest_bcr, entity_name AS dest_name\n| eval bcr_gap=if(bcr_covered=\"true\" AND dest_bcr!=\"true\", \"FLOW_TO_NON_BCR\", \"review\")\n| where bcr_gap=\"FLOW_TO_NON_BCR\"\n| table src, dest, entity_name, dest_name, bytes_out, bcr_gap\n```\n\nUnderstanding this SPL\n\n**GDPR Binding Corporate Rules (BCR) Evidence and Intra-Group Transfer Monitoring (Art. 47)** — BCRs require demonstrable enforcement of group-wide policies including transfers between entities. This use case maps intra-group data flows to BCR-covered entities, verifies annual certification of policy acknowledgement, and detects flows involving non-BCR entities.\n\nDocumented **Data sources**: `gdpr_bcr_entity_register.csv`, CIM Network_Traffic, Active Directory or HR entity metadata. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **bcr_gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where bcr_gap=\"FLOW_TO_NON_BCR\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Binding Corporate Rules (BCR) Evidence and Intra-Group Transfer Monitoring (Art. 
47)**): table src, dest, entity_name, dest_name, bytes_out, bcr_gap\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Binding Corporate Rules (BCR) Evidence and Intra-Group Transfer Monitoring (Art. 47)** — BCRs require demonstrable enforcement of group-wide policies including transfers between entities. This use case maps intra-group data flows to BCR-covered entities, verifies annual certification of policy acknowledgement, and detects flows involving non-BCR entities.\n\nDocumented **Data sources**: `gdpr_bcr_entity_register.csv`, CIM Network_Traffic, Active Directory or HR entity metadata. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey (intra-group flows), Table (non-BCR recipients), Single value (policy acknowledgement %), Bar chart (volume by entity).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We bCRs require demonstrable enforcement of group-wide policies including transfers between entities so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.47",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.47 is enforced — Splunk UC-22.1.40: GDPR Binding Corporate Rules (BCR) Evidence and Intra-Group Transfer Monitoring.",
                  "ea": "Saved search 'UC-22.1.40' running on gdpr_bcr_entity_register.csv, CIM Network_Traffic, Active Directory or HR entity metadata, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.41",
              "n": "GDPR Unauthorized Cloud Service Detection (Shadow SaaS) (Art. 5(2), Art. 32)",
              "c": "high",
              "f": "advanced",
              "v": "Personal data processed in unapproved SaaS breaks accountability and security measures. This use case identifies cloud applications from proxy, DNS, and CASB logs that are not in the approved catalogue — prioritising services with upload behaviour and OAuth grants to corporate identities.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Enterprise Security (Splunkbase 263)",
              "d": "`index=proxy`, DNS logs, `ms:o365:management` (OAuth consent), `gdpr_approved_saas_catalog.csv`",
              "q": "index=proxy earliest=-24h\n| stats sum(bytes_out) as upload_bytes dc(user) as users by dest_host\n| lookup gdpr_approved_saas_catalog.csv saas_domain AS dest_host OUTPUT approval_status\n| where isnull(approval_status) OR approval_status!=\"approved\"\n| where upload_bytes > 10485760\n| sort - upload_bytes\n| table dest_host, upload_bytes, users, approval_status",
              "m": "(1) Maintain approved SaaS domain list with data residency notes; (2) baseline proxy categories; (3) alert on high upload to unknown SaaS; (4) correlate with O365 OAuth events for enterprise SSO abuse; (5) feed discoveries into ROPA gap analysis and enterprise architecture review.",
              "z": "Table (unapproved SaaS), Bar chart (upload volume), Word cloud (domains — optional), Single value (new domains this week).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [
                "T1567"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `index=proxy`, DNS logs, `ms:o365:management` (OAuth consent), `gdpr_approved_saas_catalog.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain approved SaaS domain list with data residency notes; (2) baseline proxy categories; (3) alert on high upload to unknown SaaS; (4) correlate with O365 OAuth events for enterprise SSO abuse; (5) feed discoveries into ROPA gap analysis and enterprise architecture review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy earliest=-24h\n| stats sum(bytes_out) as upload_bytes dc(user) as users by dest_host\n| lookup gdpr_approved_saas_catalog.csv saas_domain AS dest_host OUTPUT approval_status\n| where isnull(approval_status) OR approval_status!=\"approved\"\n| where upload_bytes > 10485760\n| sort - upload_bytes\n| table dest_host, upload_bytes, users, approval_status\n```\n\nUnderstanding this SPL\n\n**GDPR Unauthorized Cloud Service Detection (Shadow SaaS) (Art. 5(2), Art. 32)** — Personal data processed in unapproved SaaS breaks accountability and security measures. This use case identifies cloud applications from proxy, DNS, and CASB logs that are not in the approved catalogue — prioritising services with upload behaviour and OAuth grants to corporate identities.\n\nDocumented **Data sources**: `index=proxy`, DNS logs, `ms:o365:management` (OAuth consent), `gdpr_approved_saas_catalog.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest_host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approval_status) OR approval_status!=\"approved\"` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where upload_bytes > 10485760` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **GDPR Unauthorized Cloud Service Detection (Shadow SaaS) (Art. 5(2), Art. 32)**): table dest_host, upload_bytes, users, approval_status\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Unauthorized Cloud Service Detection (Shadow SaaS) (Art. 5(2), Art. 
32)** — Personal data processed in unapproved SaaS breaks accountability and security measures. This use case identifies cloud applications from proxy, DNS, and CASB logs that are not in the approved catalogue — prioritising services with upload behaviour and OAuth grants to corporate identities.\n\nDocumented **Data sources**: `index=proxy`, DNS logs, `ms:o365:management` (OAuth consent), `gdpr_approved_saas_catalog.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unapproved SaaS), Bar chart (upload volume), Word cloud (domains — optional), Single value (new domains this week).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We personal data processed in unapproved SaaS breaks accountability and security measures.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.32 (Security of processing) is enforced — Splunk UC-22.1.41: GDPR Unauthorized Cloud Service Detection (Shadow SaaS).",
                  "ea": "Saved search 'UC-22.1.41' running on index=proxy, DNS logs, ms:o365:management (OAuth consent), gdpr_approved_saas_catalog.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5(2)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.5(2) is enforced — Splunk UC-22.1.41: GDPR Unauthorized Cloud Service Detection (Shadow SaaS).",
                  "ea": "Saved search 'UC-22.1.41' running on index=proxy, DNS logs, ms:o365:management (OAuth consent), gdpr_approved_saas_catalog.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.42",
              "n": "GDPR Shadow IT Personal Data Processing Indicators (Art. 5(2))",
              "c": "high",
              "f": "advanced",
              "v": "Shadow IT often processes personal data without DPIA or DPA coverage. This use case correlates unapproved application usage with DLP events, structured PII patterns in egress, and HR system identifiers — raising confidence that shadow systems are processing personal data, not just unsanctioned leisure traffic.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Edge Processor (Splunk Cloud Platform)",
              "d": "`ms:o365:management` (DLP), `index=proxy`, `gdpr_approved_saas_catalog.csv`",
              "q": "index=proxy OR index=o365 earliest=-24h\n| eval dest_host=coalesce(dest_host, Workload)\n| lookup gdpr_approved_saas_catalog.csv saas_domain AS dest_host OUTPUT approval_status\n| regex _raw=\"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}\"\n| where approval_status!=\"approved\"\n| stats count as pii_hits dc(user) as affected_users by dest_host\n| where pii_hits > 10\n| sort - pii_hits\n| table dest_host, pii_hits, affected_users",
              "m": "(1) Tune regex and DLP correlation to reduce false positives; (2) whitelist known CDNs with host-level exceptions; (3) alert DPO when `affected_users` exceeds threshold; (4) auto-create architecture review tasks; (5) document takedown or formal approval outcomes in ROPA.",
              "z": "Table (high-risk shadow hosts), Single value (distinct shadow apps with PII), Bar chart (hits by department if user field mapped).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Edge Processor (Splunk Cloud Platform).\n• Ensure the following data sources are available: `ms:o365:management` (DLP), `index=proxy`, `gdpr_approved_saas_catalog.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune regex and DLP correlation to reduce false positives; (2) whitelist known CDNs with host-level exceptions; (3) alert DPO when `affected_users` exceeds threshold; (4) auto-create architecture review tasks; (5) document takedown or formal approval outcomes in ROPA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy OR index=o365 earliest=-24h\n| eval dest_host=coalesce(dest_host, Workload)\n| lookup gdpr_approved_saas_catalog.csv saas_domain AS dest_host OUTPUT approval_status\n| regex _raw=\"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}\"\n| where approval_status!=\"approved\"\n| stats count as pii_hits dc(user) as affected_users by dest_host\n| where pii_hits > 10\n| sort - pii_hits\n| table dest_host, pii_hits, affected_users\n```\n\nUnderstanding this SPL\n\n**GDPR Shadow IT Personal Data Processing Indicators (Art. 5(2))** — Shadow IT often processes personal data without DPIA or DPA coverage. This use case correlates unapproved application usage with DLP events, structured PII patterns in egress, and HR system identifiers — raising confidence that shadow systems are processing personal data, not just unsanctioned leisure traffic.\n\nDocumented **Data sources**: `ms:o365:management` (DLP), `index=proxy`, `gdpr_approved_saas_catalog.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Edge Processor (Splunk Cloud Platform). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy, o365.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, index=o365, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **dest_host** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters rows matching a pattern with `regex`.\n• Filters the current rows with `where approval_status!=\"approved\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by dest_host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where pii_hits > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **GDPR Shadow IT Personal Data Processing Indicators (Art. 5(2))**): table dest_host, pii_hits, affected_users\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (high-risk shadow hosts), Single value (distinct shadow apps with PII), Bar chart (hits by department if user field mapped).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We shadow IT often processes personal data without DPIA or DPA coverage.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5(2)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.5(2) is enforced — Splunk UC-22.1.42: GDPR Shadow IT Personal Data Processing Indicators.",
                  "ea": "Saved search 'UC-22.1.42' running on ms:o365:management (DLP), index=proxy, gdpr_approved_saas_catalog.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.43",
              "n": "GDPR Personal Data in Non-Approved Systems (ROPA Drift Detection) (Art. 5(2), Art. 30)",
              "c": "high",
              "f": "intermediate",
              "v": "The records of processing must be accurate. This use case compares databases, SaaS tenants, and data lake zones observed in authentication and connection logs against `gdpr_ropa_register.csv` — flagging systems that handle identity or HR attributes but lack a ROPA entry.",
              "t": "Splunk Add-on for Microsoft SQL Server (Splunkbase 2648), Splunk Add-on for Okta Identity Cloud (Splunkbase 6553)",
              "d": "`mssql:audit` (database_name, client_ip), `OktaIM2:log` (app assignments), `gdpr_ropa_register.csv`",
              "q": "index=mssql sourcetype=\"mssql:audit\" action=\"audit_schema_object_access\" earliest=-7d\n| stats dc(statement) as stmts dc(user) as db_users by database_name\n| lookup gdpr_ropa_register.csv system_name AS database_name OUTPUT processing_activity\n| where isnull(processing_activity)\n| join database_name [\n    search index=okta sourcetype=\"OktaIM2:log\" earliest=-7d\n    | stats dc(user) as okta_users by app_name\n    | rename app_name as database_name\n]\n| table database_name, db_users, okta_users",
              "m": "(1) Normalise system identifiers between CMDB, ROPA, and technical logs; (2) focus on databases with HR or CRM naming patterns; (3) alert data stewards on ROPA drift; (4) update ROPA or decommission shadow databases; (5) quarterly reconcile with enterprise architecture repository.",
              "z": "Table (unregistered systems), Single value (drift count), Bar chart (by environment), Timeline (first seen).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft SQL Server](https://splunkbase.splunk.com/app/2648), [Splunk Add-on for Okta Identity Cloud](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft SQL Server (Splunkbase 2648), Splunk Add-on for Okta Identity Cloud (Splunkbase 6553).\n• Ensure the following data sources are available: `mssql:audit` (database_name, client_ip), `OktaIM2:log` (app assignments), `gdpr_ropa_register.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalise system identifiers between CMDB, ROPA, and technical logs; (2) focus on databases with HR or CRM naming patterns; (3) alert data stewards on ROPA drift; (4) update ROPA or decommission shadow databases; (5) quarterly reconcile with enterprise architecture repository.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mssql sourcetype=\"mssql:audit\" action=\"audit_schema_object_access\" earliest=-7d\n| stats dc(statement) as stmts dc(user) as db_users by database_name\n| lookup gdpr_ropa_register.csv system_name AS database_name OUTPUT processing_activity\n| where isnull(processing_activity)\n| join database_name [\n    search index=okta sourcetype=\"OktaIM2:log\" earliest=-7d\n    | stats dc(user) as okta_users by app_name\n    | rename app_name as database_name\n]\n| table database_name, db_users, okta_users\n```\n\nUnderstanding this SPL\n\n**GDPR Personal Data in Non-Approved Systems (ROPA Drift Detection) (Art. 5(2), Art. 30)** — The records of processing must be accurate. This use case compares databases, SaaS tenants, and data lake zones observed in authentication and connection logs against `gdpr_ropa_register.csv` — flagging systems that handle identity or HR attributes but lack a ROPA entry.\n\nDocumented **Data sources**: `mssql:audit` (database_name, client_ip), `OktaIM2:log` (app assignments), `gdpr_ropa_register.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft SQL Server (Splunkbase 2648), Splunk Add-on for Okta Identity Cloud (Splunkbase 6553). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mssql; **sourcetype**: mssql:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mssql, sourcetype=\"mssql:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by database_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(processing_activity)` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **GDPR Personal Data in Non-Approved Systems (ROPA Drift Detection) (Art. 5(2), Art. 
30)**): table database_name, db_users, okta_users\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(Authentication.user) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Personal Data in Non-Approved Systems (ROPA Drift Detection) (Art. 5(2), Art. 30)** — The records of processing must be accurate. This use case compares databases, SaaS tenants, and data lake zones observed in authentication and connection logs against `gdpr_ropa_register.csv` — flagging systems that handle identity or HR attributes but lack a ROPA entry.\n\nDocumented **Data sources**: `mssql:audit` (database_name, client_ip), `OktaIM2:log` (app assignments), `gdpr_ropa_register.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft SQL Server (Splunkbase 2648), Splunk Add-on for Okta Identity Cloud (Splunkbase 6553). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unregistered systems), Single value (drift count), Bar chart (by environment), Timeline (first seen).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We the records of processing must be accurate so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t dc(Authentication.user) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - agg_value",
              "e": [
                "mssql",
                "okta"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.30",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.30 (Records of processing) is enforced — Splunk UC-22.1.43: GDPR Personal Data in Non-Approved Systems (ROPA Drift Detection).",
                  "ea": "Saved search 'UC-22.1.43' running on mssql:audit (database_name, client_ip), OktaIM2:log (app assignments), gdpr_ropa_register.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5(2)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.5(2) is enforced — Splunk UC-22.1.43: GDPR Personal Data in Non-Approved Systems (ROPA Drift Detection).",
                  "ea": "Saved search 'UC-22.1.43' running on mssql:audit (database_name, client_ip), OktaIM2:log (app assignments), gdpr_ropa_register.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.44",
              "n": "GDPR Cross-Border Personal Data Flow Anomaly Detection (Arts. 44-49)",
              "c": "high",
              "f": "advanced",
              "v": "Sudden shifts in cross-border traffic can indicate misconfigured replication, insider misuse, or compromised accounts — each with transfer-law implications. This use case baselines bytes and session counts by destination country and alerts on statistically rare spikes for systems processing personal data.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk App for Behavoral Profiling (optional)",
              "d": "CIM Network_Traffic, `gdpr_ropa_register.csv`, `iplocation`",
              "q": "| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out count as sessions\n  from datamodel=Network_Traffic.All_Traffic\n  where earliest=-60d latest=now\n  by All_Traffic.dest span=1d\n| rename All_Traffic.* as *\n| iplocation dest\n| lookup gdpr_ropa_register.csv system_name OUTPUT data_category\n| where isnotnull(data_category)\n| eventstats median(bytes_out) as med by Country\n| eval z=round((bytes_out-med)/std(bytes_out),2)\n| where abs(z) > 3\n| table _time, dest, Country, bytes_out, z, sessions",
              "m": "(1) Restrict baseline to GDPR-scoped internal sources via asset tags; (2) tune span and z-score thresholds by business unit; (3) alert SOC and privacy office on spikes to non-EEA countries; (4) integrate with TIA register to rule out approved bulk migrations; (5) retain investigation notes for regulators.",
              "z": "Line chart (bytes baseline vs actual), Table (anomalies), Map (country), Single value (anomalies this week).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk App for Behavoral Profiling (optional).\n• Ensure the following data sources are available: CIM Network_Traffic, `gdpr_ropa_register.csv`, `iplocation`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Restrict baseline to GDPR-scoped internal sources via asset tags; (2) tune span and z-score thresholds by business unit; (3) alert SOC and privacy office on spikes to non-EEA countries; (4) integrate with TIA register to rule out approved bulk migrations; (5) retain investigation notes for regulators.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out count as sessions\n  from datamodel=Network_Traffic.All_Traffic\n  where earliest=-60d latest=now\n  by All_Traffic.dest span=1d\n| rename All_Traffic.* as *\n| iplocation dest\n| lookup gdpr_ropa_register.csv system_name OUTPUT data_category\n| where isnotnull(data_category)\n| eventstats median(bytes_out) as med by Country\n| eval z=round((bytes_out-med)/std(bytes_out),2)\n| where abs(z) > 3\n| table _time, dest, Country, bytes_out, z, sessions\n```\n\nUnderstanding this SPL\n\n**GDPR Cross-Border Personal Data Flow Anomaly Detection (Arts. 44-49)** — Sudden shifts in cross-border traffic can indicate misconfigured replication, insider misuse, or compromised accounts — each with transfer-law implications. This use case baselines bytes and session counts by destination country and alerts on statistically rare spikes for systems processing personal data.\n\nDocumented **Data sources**: CIM Network_Traffic, `gdpr_ropa_register.csv`, `iplocation`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk App for Behavoral Profiling (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Pipeline stage (see **GDPR Cross-Border Personal Data Flow Anomaly Detection (Arts. 44-49)**): iplocation dest\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(data_category)` — typically the threshold or rule expression for this monitoring goal.\n• `eventstats` rolls up events into metrics; results are split **by Country** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where abs(z) > 3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Cross-Border Personal Data Flow Anomaly Detection (Arts. 
44-49)**): table _time, dest, Country, bytes_out, z, sessions\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Cross-Border Personal Data Flow Anomaly Detection (Arts. 44-49)** — Sudden shifts in cross-border traffic can indicate misconfigured replication, insider misuse, or compromised accounts — each with transfer-law implications. This use case baselines bytes and session counts by destination country and alerts on statistically rare spikes for systems processing personal data.\n\nDocumented **Data sources**: CIM Network_Traffic, `gdpr_ropa_register.csv`, `iplocation`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk App for Behavoral Profiling (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (bytes baseline vs actual), Table (anomalies), Map (country), Single value (anomalies this week).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We sudden shifts in cross-border traffic can indicate misconfigured replication, insider misuse, or compromised accounts — each with transfer-law implications so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.44",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.44 (International transfers — general principle) is enforced — Splunk UC-22.1.44: GDPR Cross-Border Personal Data Flow Anomaly Detection.",
                  "ea": "Saved search 'UC-22.1.44' running on CIM Network_Traffic, gdpr_ropa_register.csv, iplocation, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.45",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.45 (Transfers via adequacy decision) is enforced — Splunk UC-22.1.44: GDPR Cross-Border Personal Data Flow Anomaly Detection.",
                  "ea": "Saved search 'UC-22.1.44' running on CIM Network_Traffic, gdpr_ropa_register.csv, iplocation, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.46 (Transfers subject to safeguards) is enforced — Splunk UC-22.1.44: GDPR Cross-Border Personal Data Flow Anomaly Detection.",
                  "ea": "Saved search 'UC-22.1.44' running on CIM Network_Traffic, gdpr_ropa_register.csv, iplocation, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.47",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.47 is enforced — Splunk UC-22.1.44: GDPR Cross-Border Personal Data Flow Anomaly Detection.",
                  "ea": "Saved search 'UC-22.1.44' running on CIM Network_Traffic, gdpr_ropa_register.csv, iplocation, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.48",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.48 is enforced — Splunk UC-22.1.44: GDPR Cross-Border Personal Data Flow Anomaly Detection.",
                  "ea": "Saved search 'UC-22.1.44' running on CIM Network_Traffic, gdpr_ropa_register.csv, iplocation, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.49",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.49 is enforced — Splunk UC-22.1.44: GDPR Cross-Border Personal Data Flow Anomaly Detection.",
                  "ea": "Saved search 'UC-22.1.44' running on CIM Network_Traffic, gdpr_ropa_register.csv, iplocation, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.45",
              "n": "GDPR Privacy Settings Default Validation (Privacy by Design / Default) (Art. 25(2))",
              "c": "high",
              "f": "intermediate",
              "v": "Article 25(2) requires data protection by default — only personal data necessary for each purpose should be processed, with privacy-friendly defaults. This use case audits application configuration events and feature-flag stores for marketing, analytics, and profile-sharing defaults that are not opt-in where required.",
              "t": "HTTP Event Collector (HEC), Splunk Add-on for GitHub (Splunkbase 5596)",
              "d": "`index=app_config` (HEC from feature flag / admin APIs), `index=github` (audit of default config templates)",
              "q": "index=app_config sourcetype=\"feature_flag_audit\" earliest=-24h\n| where match(lower(change_type),\"(?i)default\") AND match(lower(flag_name),\"(?i)market|track|share|personal\")\n| eval compliant_default=if(match(lower(new_value),\"(?i)off|false|opt.in|disabled\"),1,0)\n| where compliant_default=0\n| stats latest(_time) as last_change latest(new_value) as new_value values(user) as changed_by by app_name flag_name\n| table app_name, flag_name, new_value, changed_by, last_change",
              "m": "(1) Instrument admin and feature-flag systems to emit changes via HEC; (2) define per-product policy for safe defaults; (3) alert product owners on non-compliant default changes before release; (4) pair with code review sampling from GitHub default config files; (5) document evidence for Art. 25 DPIAs.",
              "z": "Table (non-compliant defaults), Timeline (changes), Single value (violations), Bar chart (by product line).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5596), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC), Splunk Add-on for GitHub (Splunkbase 5596).\n• Ensure the following data sources are available: `index=app_config` (HEC from feature flag / admin APIs), `index=github` (audit of default config templates).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument admin and feature-flag systems to emit changes via HEC; (2) define per-product policy for safe defaults; (3) alert product owners on non-compliant default changes before release; (4) pair with code review sampling from GitHub default config files; (5) document evidence for Art. 25 DPIAs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app_config sourcetype=\"feature_flag_audit\" earliest=-24h\n| where match(lower(change_type),\"(?i)default\") AND match(lower(flag_name),\"(?i)market|track|share|personal\")\n| eval compliant_default=if(match(lower(new_value),\"(?i)off|false|opt.in|disabled\"),1,0)\n| where compliant_default=0\n| stats latest(_time) as last_change latest(new_value) as new_value values(user) as changed_by by app_name flag_name\n| table app_name, flag_name, new_value, changed_by, last_change\n```\n\nUnderstanding this SPL\n\n**GDPR Privacy Settings Default Validation (Privacy by Design / Default) (Art. 25(2))** — Article 25(2) requires data protection by default — only personal data necessary for each purpose should be processed, with privacy-friendly defaults. This use case audits application configuration events and feature-flag stores for marketing, analytics, and profile-sharing defaults that are not opt-in where required.\n\nDocumented **Data sources**: `index=app_config` (HEC from feature flag / admin APIs), `index=github` (audit of default config templates). **App/TA** (typical add-on context): HTTP Event Collector (HEC), Splunk Add-on for GitHub (Splunkbase 5596). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app_config; **sourcetype**: feature_flag_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app_config, sourcetype=\"feature_flag_audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(change_type),\"(?i)default\") AND match(lower(flag_name),\"(?i)market|track|share|personal\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **compliant_default** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where compliant_default=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by app_name flag_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **GDPR Privacy Settings Default Validation (Privacy by Design / Default) (Art. 
25(2))**): table app_name, flag_name, new_value, changed_by, last_change\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Privacy Settings Default Validation (Privacy by Design / Default) (Art. 25(2))** — Article 25(2) requires data protection by default — only personal data necessary for each purpose should be processed, with privacy-friendly defaults. This use case audits application configuration events and feature-flag stores for marketing, analytics, and profile-sharing defaults that are not opt-in where required.\n\nDocumented **Data sources**: `index=app_config` (HEC from feature flag / admin APIs), `index=github` (audit of default config templates). **App/TA** (typical add-on context): HTTP Event Collector (HEC), Splunk Add-on for GitHub (Splunkbase 5596). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant defaults), Timeline (changes), Single value (violations), Bar chart (by product line).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of privacy settings default validation (privacy by design / default) — so we can show we look after private information the way the privacy rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.25(2)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.25(2) is enforced — Splunk UC-22.1.45: GDPR Privacy Settings Default Validation (Privacy by Design / Default).",
                  "ea": "Saved search 'UC-22.1.45' running on index=app_config (HEC from feature flag / admin APIs), index=github (audit of default config templates), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.46",
              "n": "GDPR Consent Mechanism Audit (Lawful Basis Alignment) (Art. 25(1), Art. 7)",
              "c": "high",
              "f": "intermediate",
              "v": "By design, consent collection must be granular and verifiable. This use case monitors CMP and web telemetry for consent banner versions, non-essential cookies fired before consent, and mismatches between claimed purposes in the CMP and tags observed in the browser stream.",
              "t": "HTTP Event Collector (HEC), Splunk Add-on for Stream (Splunkbase 1809)",
              "d": "`index=consent` (CMP events), `index=web` (tag loading), `gdpr_consent_policy.csv`",
              "q": "(index=web sourcetype=\"js_tag\") earliest=-24h\n| eval before_consent=if(consent_state=\"unknown\" AND match(tag_category,\"(?i)marketing|analytics\"),1,0)\n| stats sum(before_consent) as early_nonessential_tags by site_id tag_vendor\n| where early_nonessential_tags > 100\n| lookup gdpr_consent_policy.csv site_id OUTPUT required_banner_version\n| join site_id [\n    search index=consent earliest=-24h\n    | stats latest(banner_version) as live_banner by site_id\n]\n| where live_banner != required_banner_version\n| table site_id, tag_vendor, early_nonessential_tags, live_banner, required_banner_version",
              "m": "(1) Instrument sites to emit tag and consent state events; (2) define required banner versions per site in policy lookup; (3) alert marketing and privacy when non-essential tags load pre-consent; (4) block releases that downgrade banner version without approval; (5) align with UC-22.1.5/UC-22.1.16 for withdrawal enforcement.",
              "z": "Table (sites with violations), Single value (sites off-policy), Heatmap (violations by region), Bar chart (tag vendors).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC), Splunk Add-on for Stream (Splunkbase 1809).\n• Ensure the following data sources are available: `index=consent` (CMP events), `index=web` (tag loading), `gdpr_consent_policy.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument sites to emit tag and consent state events; (2) define required banner versions per site in policy lookup; (3) alert marketing and privacy when non-essential tags load pre-consent; (4) block releases that downgrade banner version without approval; (5) align with UC-22.1.5/UC-22.1.16 for withdrawal enforcement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=web sourcetype=\"js_tag\") earliest=-24h\n| eval before_consent=if(consent_state=\"unknown\" AND match(tag_category,\"(?i)marketing|analytics\"),1,0)\n| stats sum(before_consent) as early_nonessential_tags by site_id tag_vendor\n| where early_nonessential_tags > 100\n| lookup gdpr_consent_policy.csv site_id OUTPUT required_banner_version\n| join site_id [\n    search index=consent earliest=-24h\n    | stats latest(banner_version) as live_banner by site_id\n]\n| where live_banner != required_banner_version\n| table site_id, tag_vendor, early_nonessential_tags, live_banner, required_banner_version\n```\n\nUnderstanding this SPL\n\n**GDPR Consent Mechanism Audit (Lawful Basis Alignment) (Art. 25(1), Art. 7)** — By design, consent collection must be granular and verifiable. This use case monitors CMP and web telemetry for consent banner versions, non-essential cookies fired before consent, and mismatches between claimed purposes in the CMP and tags observed in the browser stream.\n\nDocumented **Data sources**: `index=consent` (CMP events), `index=web` (tag loading), `gdpr_consent_policy.csv`. **App/TA** (typical add-on context): HTTP Event Collector (HEC), Splunk Add-on for Stream (Splunkbase 1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: js_tag. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"js_tag\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **before_consent** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by site_id tag_vendor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where early_nonessential_tags > 100` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where live_banner != required_banner_version` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Consent Mechanism Audit (Lawful Basis Alignment) (Art. 25(1), Art. 7)**): table site_id, tag_vendor, early_nonessential_tags, live_banner, required_banner_version\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Consent Mechanism Audit (Lawful Basis Alignment) (Art. 25(1), Art. 7)** — By design, consent collection must be granular and verifiable. This use case monitors CMP and web telemetry for consent banner versions, non-essential cookies fired before consent, and mismatches between claimed purposes in the CMP and tags observed in the browser stream.\n\nDocumented **Data sources**: `index=consent` (CMP events), `index=web` (tag loading), `gdpr_consent_policy.csv`. **App/TA** (typical add-on context): HTTP Event Collector (HEC), Splunk Add-on for Stream (Splunkbase 1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sites with violations), Single value (sites off-policy), Heatmap (violations by region), Bar chart (tag vendors).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We by design, consent collection must be granular and verifiable so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.21",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.21 (Right to object) is enforced — Splunk UC-22.1.46: GDPR Consent Mechanism Audit (Lawful Basis Alignment).",
                  "ea": "Saved search 'UC-22.1.46' running on index=consent (CMP events), index=web (tag loading), gdpr_consent_policy.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679"
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.25(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.25(1) is enforced — Splunk UC-22.1.46: GDPR Consent Mechanism Audit (Lawful Basis Alignment).",
                  "ea": "Saved search 'UC-22.1.46' running on index=consent (CMP events), index=web (tag loading), gdpr_consent_policy.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.7 (Conditions for consent) is enforced — Splunk UC-22.1.46: GDPR Consent Mechanism Audit (Lawful Basis Alignment).",
                  "ea": "Saved search 'UC-22.1.46' running on index=consent (CMP events), index=web (tag loading), gdpr_consent_policy.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.21",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.21 is enforced — Splunk UC-22.1.46: GDPR Consent Mechanism Audit (Lawful Basis Alignment).",
                  "ea": "Saved search 'UC-22.1.46' running on index=consent (CMP events), index=web (tag loading), gdpr_consent_policy.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.47",
              "n": "GDPR Data Minimisation Compliance in Logs and APIs (Art. 25(2), Art. 5(1)(c))",
              "c": "high",
              "f": "advanced",
              "v": "Data minimisation requires limiting collection to what is adequate, relevant, and limited. This use case detects API and application log fields classified as excessive for the documented purpose — for example special-category fields in general application logs — using schema metadata and sampling.",
              "t": "Splunk Edge Processor (Splunk Cloud Platform), HTTP Event Collector (HEC)",
              "d": "`index=app` (API request logs), `gdpr_field_classification.csv` (field_name, sensitivity, allowed_purposes)",
              "q": "index=app sourcetype=\"api:json\" earliest=-24h\n| spath path=body{}\n| fields _raw endpoint purpose\n| rex field=_raw max_match=0 \"(?<field>\\\"(ssn|dob|health|biometric)[^\\\"]*\\\")\"\n| lookup gdpr_field_classification.csv field_name OUTPUT sensitivity, allowed_purposes\n| where sensitivity=\"special_category\" AND NOT match(allowed_purposes, purpose)\n| stats count by endpoint, purpose, field_name\n| sort - count",
              "m": "(1) Build `gdpr_field_classification.csv` from data dictionary; (2) tune `spath`/`rex` to your JSON shapes; (3) alert API owners on minimisation violations; (4) route critical patterns to Edge Processor drop rules; (5) feed results into DPIA updates.",
              "z": "Table (violating endpoints), Bar chart (field frequency), Single value (daily violation events), Pie chart (by sensitivity class).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Processor (Splunk Cloud Platform), HTTP Event Collector (HEC).\n• Ensure the following data sources are available: `index=app` (API request logs), `gdpr_field_classification.csv` (field_name, sensitivity, allowed_purposes).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build `gdpr_field_classification.csv` from data dictionary; (2) tune `spath`/`rex` to your JSON shapes; (3) alert API owners on minimisation violations; (4) route critical patterns to Edge Processor drop rules; (5) feed results into DPIA updates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"api:json\" earliest=-24h\n| spath path=body{}\n| fields _raw endpoint purpose\n| rex field=_raw max_match=0 \"(?<field>\\\"(ssn|dob|health|biometric)[^\\\"]*\\\")\"\n| lookup gdpr_field_classification.csv field_name OUTPUT sensitivity, allowed_purposes\n| where sensitivity=\"special_category\" AND NOT match(allowed_purposes, purpose)\n| stats count by endpoint, purpose, field_name\n| sort - count\n```\n\nUnderstanding this SPL\n\n**GDPR Data Minimisation Compliance in Logs and APIs (Art. 25(2), Art. 5(1)(c))** — Data minimisation requires limiting collection to what is adequate, relevant, and limited. This use case detects API and application log fields classified as excessive for the documented purpose — for example special-category fields in general application logs — using schema metadata and sampling.\n\nDocumented **Data sources**: `index=app` (API request logs), `gdpr_field_classification.csv` (field_name, sensitivity, allowed_purposes). **App/TA** (typical add-on context): Splunk Edge Processor (Splunk Cloud Platform), HTTP Event Collector (HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: api:json. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"api:json\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Extracts fields with `rex` (regular expression).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where sensitivity=\"special_category\" AND NOT match(allowed_purposes, purpose)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by endpoint, purpose, field_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violating endpoints), Bar chart (field frequency), Single value (daily violation events), Pie chart (by sensitivity class).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We data minimisation requires limiting collection to what is adequate, relevant, and limited so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.25(2)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.25(2) is enforced — Splunk UC-22.1.47: GDPR Data Minimisation Compliance in Logs and APIs.",
                  "ea": "Saved search 'UC-22.1.47' running on index=app (API request logs), gdpr_field_classification.csv (field_name, sensitivity, allowed_purposes), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5(1)(c)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.5(1)(c) is enforced — Splunk UC-22.1.47: GDPR Data Minimisation Compliance in Logs and APIs.",
                  "ea": "Saved search 'UC-22.1.47' running on index=app (API request logs), gdpr_field_classification.csv (field_name, sensitivity, allowed_purposes), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.48",
              "n": "GDPR Purpose Limitation Enforcement Across Systems (Art. 25(1), Art. 5(1)(b))",
              "c": "high",
              "f": "advanced",
              "v": "Personal data must be collected for specified, explicit, and legitimate purposes and not further processed incompatibly. This use case compares declared processing purposes in ROPA to observed data flows — e.g., HR exports to marketing tools — using purpose tags on integrations where available.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110)",
              "d": "`gdpr_integration_purpose_map.csv`, CIM Network_Traffic, DLP policy labels",
              "q": "| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where earliest=-7d\n  by All_Traffic.src All_Traffic.dest All_Traffic.app\n| rename All_Traffic.* as *\n| lookup gdpr_integration_purpose_map.csv app OUTPUT ropa_purpose dest_category_allowed\n| eval incompatible=if(NOT match(dest_category_allowed, dest_category),1,0)\n| where incompatible=1\n| table src, dest, app, ropa_purpose, dest_category, dest_category_allowed, bytes_out\n| sort - bytes_out",
              "m": "(1) Document each integration's allowed downstream categories in `gdpr_integration_purpose_map.csv`; (2) populate `dest_category` from proxy and CASB; (3) alert when HR or health-tagged sources send data to marketing-class destinations; (4) require DPO sign-off for exceptions; (5) integrate with access reviews.",
              "z": "Sankey (source purpose to destination class), Table (incompatible flows), Single value (open violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110).\n• Ensure the following data sources are available: `gdpr_integration_purpose_map.csv`, CIM Network_Traffic, DLP policy labels.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Document each integration's allowed downstream categories in `gdpr_integration_purpose_map.csv`; (2) populate `dest_category` from proxy and CASB; (3) alert when HR or health-tagged sources send data to marketing-class destinations; (4) require DPO sign-off for exceptions; (5) integrate with access reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out\n  from datamodel=Network_Traffic.All_Traffic\n  where earliest=-7d\n  by All_Traffic.src All_Traffic.dest All_Traffic.app\n| rename All_Traffic.* as *\n| lookup gdpr_integration_purpose_map.csv app OUTPUT ropa_purpose dest_category_allowed\n| eval incompatible=if(NOT match(dest_category_allowed, dest_category),1,0)\n| where incompatible=1\n| table src, dest, app, ropa_purpose, dest_category, dest_category_allowed, bytes_out\n| sort - bytes_out\n```\n\nUnderstanding this SPL\n\n**GDPR Purpose Limitation Enforcement Across Systems (Art. 25(1), Art. 5(1)(b))** — Personal data must be collected for specified, explicit, and legitimate purposes and not further processed incompatibly. This use case compares declared processing purposes in ROPA to observed data flows — e.g., HR exports to marketing tools — using purpose tags on integrations where available.\n\nDocumented **Data sources**: `gdpr_integration_purpose_map.csv`, CIM Network_Traffic, DLP policy labels. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **incompatible** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where incompatible=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Purpose Limitation Enforcement Across Systems (Art. 25(1), Art. 5(1)(b))**): table src, dest, app, ropa_purpose, dest_category, dest_category_allowed, bytes_out\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**GDPR Purpose Limitation Enforcement Across Systems (Art. 25(1), Art. 
5(1)(b))** — Personal data must be collected for specified, explicit, and legitimate purposes and not further processed incompatibly. This use case compares declared processing purposes in ROPA to observed data flows — e.g., HR exports to marketing tools — using purpose tags on integrations where available.\n\nDocumented **Data sources**: `gdpr_integration_purpose_map.csv`, CIM Network_Traffic, DLP policy labels. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey (source purpose to destination class), Table (incompatible flows), Single value (open violations).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We personal data must be collected for specified, explicit, and legitimate purposes and not further processed incompatibly.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.app | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.25(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.25(1) is enforced — Splunk UC-22.1.48: GDPR Purpose Limitation Enforcement Across Systems.",
                  "ea": "Saved search 'UC-22.1.48' running on gdpr_integration_purpose_map.csv, CIM Network_Traffic, DLP policy labels, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5(1)(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.5(1)(b) is enforced — Splunk UC-22.1.48: GDPR Purpose Limitation Enforcement Across Systems.",
                  "ea": "Saved search 'UC-22.1.48' running on gdpr_integration_purpose_map.csv, CIM Network_Traffic, DLP policy labels, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "PT-3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST 800-53 PT-3 (Personally identifiable information processing purposes) is enforced — Splunk UC-22.1.48: GDPR Purpose Limitation Enforcement Across Systems.",
                  "ea": "Saved search 'UC-22.1.48' running on gdpr_integration_purpose_map.csv, CIM Network_Traffic, DLP policy labels, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.49",
              "n": "GDPR Storage Limitation Automation Evidence (Art. 25(2), Art. 5(1)(e))",
              "c": "high",
              "f": "intermediate",
              "v": "Storage limitation requires defined retention and deletion. Beyond index retention (UC-22.1.4), this use case tracks automated purge jobs, API deletion endpoints, and backup rotation for personal-data datasets — proving deletion automation runs successfully.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876), Veeam App for Splunk (Splunkbase 7312)",
              "d": "`index=backup`, `aws:cloudtrail` (DeleteObject lifecycle), application deletion job logs (HEC), `gdpr_retention_schedule.csv`",
              "q": "| inputlookup gdpr_retention_schedule.csv\n| join dataset_name [\n    search index=app sourcetype=\"deletion_job\" earliest=-7d\n    | stats latest(job_status) as last_status latest(_time) as last_run by dataset_name\n]\n| eval overdue=if(last_status!=\"success\" OR (now()-last_run)/86400 > scheduled_interval_days, 1, 0)\n| where overdue=1\n| table dataset_name, retention_days, scheduled_interval_days, last_status, last_run, overdue",
              "m": "(1) Export dataset-level retention policy to `gdpr_retention_schedule.csv`; (2) instrument deletion jobs with structured logs; (3) alert data owners on failed or missed runs; (4) correlate with object storage lifecycle events; (5) include metrics in DPO quarterly report.",
              "z": "Table (failed purges), Single value (datasets overdue), Timeline (job runs), Bar chart (failures by system).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Veeam App for Splunk](https://splunkbase.splunk.com/app/7312)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876), Veeam App for Splunk (Splunkbase 7312).\n• Ensure the following data sources are available: `index=backup`, `aws:cloudtrail` (DeleteObject lifecycle), application deletion job logs (HEC), `gdpr_retention_schedule.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export dataset-level retention policy to `gdpr_retention_schedule.csv`; (2) instrument deletion jobs with structured logs; (3) alert data owners on failed or missed runs; (4) correlate with object storage lifecycle events; (5) include metrics in DPO quarterly report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup gdpr_retention_schedule.csv\n| join dataset_name [\n    search index=app sourcetype=\"deletion_job\" earliest=-7d\n    | stats latest(job_status) as last_status latest(_time) as last_run by dataset_name\n]\n| eval overdue=if(last_status!=\"success\" OR (now()-last_run)/86400 > scheduled_interval_days, 1, 0)\n| where overdue=1\n| table dataset_name, retention_days, scheduled_interval_days, last_status, last_run, overdue\n```\n\nUnderstanding this SPL\n\n**GDPR Storage Limitation Automation Evidence (Art. 25(2), Art. 5(1)(e))** — Storage limitation requires defined retention and deletion. Beyond index retention (UC-22.1.4), this use case tracks automated purge jobs, API deletion endpoints, and backup rotation for personal-data datasets — proving deletion automation runs successfully.\n\nDocumented **Data sources**: `index=backup`, `aws:cloudtrail` (DeleteObject lifecycle), application deletion job logs (HEC), `gdpr_retention_schedule.csv`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), Veeam App for Splunk (Splunkbase 7312). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overdue=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **GDPR Storage Limitation Automation Evidence (Art. 25(2), Art. 5(1)(e))**): table dataset_name, retention_days, scheduled_interval_days, last_status, last_run, overdue\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed purges), Single value (datasets overdue), Timeline (job runs), Bar chart (failures by system).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We storage limitation requires defined retention and deletion so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance",
                "Capacity"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "veeam"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.25(2)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.25(2) is enforced — Splunk UC-22.1.49: GDPR Storage Limitation Automation Evidence.",
                  "ea": "Saved search 'UC-22.1.49' running on index=backup, aws:cloudtrail (DeleteObject lifecycle), application deletion job logs (HEC), gdpr_retention_schedule.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5(1)(e)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.5(1)(e) is enforced — Splunk UC-22.1.49: GDPR Storage Limitation Automation Evidence.",
                  "ea": "Saved search 'UC-22.1.49' running on index=backup, aws:cloudtrail (DeleteObject lifecycle), application deletion job logs (HEC), gdpr_retention_schedule.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Veeam App for Splunk",
                  "id": 7312,
                  "url": "https://splunkbase.splunk.com/app/7312",
                  "desc": "Monitoring and security dashboards for Veeam Backup job statuses and events",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/033805d0-fe3c-11ee-a32e-be99bb517a22.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/0a08b346-fe3c-11ee-9b85-7afc4dbed252.png"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.1.50",
              "n": "GDPR Transparency Notice Completeness and Version Alignment (Art. 12-14, Art. 25(1))",
              "c": "high",
              "f": "beginner",
              "v": "Transparency requires that data subjects receive fair processing information at collection. This use case monitors CMS and web deployments for privacy notice URL availability, version tags, and broken links — catching releases that ship without updated notices after processing changes.",
              "t": "Splunk Synthetic Monitoring, HTTP Event Collector (HEC)",
              "d": "Synthetic check results (`index=synthetic`), `gdpr_notice_registry.csv` (site_id, notice_url, required_version), CI/CD web deploy events",
              "q": "index=synthetic check_type=\"http\" earliest=-24h\n| lookup gdpr_notice_registry.csv site_id OUTPUT notice_url, required_version\n| eval ok=if(http_status=200 AND match(_raw, required_version),1,0)\n| where ok=0\n| stats latest(http_status) as http_status latest(_time) as last_fail by site_id notice_url\n| join site_id [\n    search index=cicd sourcetype=\"web:deploy\" earliest=-7d\n    | stats latest(release_tag) as release_tag by site_id\n]\n| table site_id, notice_url, http_status, release_tag, last_fail",
              "m": "(1) Register each public site with expected notice URL and embedded version string; (2) run synthetic checks every hour from multiple regions; (3) alert privacy counsel when checks fail or version mismatches after deploy; (4) gate production deploys on successful synthetic in mature pipelines; (5) archive passing runs as Art. 12 evidence.",
              "z": "Table (failing sites), Single value (failed checks), Timeline (availability), Bar chart (failures by region).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Synthetic Monitoring, HTTP Event Collector (HEC).\n• Ensure the following data sources are available: Synthetic check results (`index=synthetic`), `gdpr_notice_registry.csv` (site_id, notice_url, required_version), CI/CD web deploy events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Register each public site with expected notice URL and embedded version string; (2) run synthetic checks every hour from multiple regions; (3) alert privacy counsel when checks fail or version mismatches after deploy; (4) gate production deploys on successful synthetic in mature pipelines; (5) archive passing runs as Art. 12 evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=synthetic check_type=\"http\" earliest=-24h\n| lookup gdpr_notice_registry.csv site_id OUTPUT notice_url, required_version\n| eval ok=if(http_status=200 AND match(_raw, required_version),1,0)\n| where ok=0\n| stats latest(http_status) as http_status latest(_time) as last_fail by site_id notice_url\n| join site_id [\n    search index=cicd sourcetype=\"web:deploy\" earliest=-7d\n    | stats latest(release_tag) as release_tag by site_id\n]\n| table site_id, notice_url, http_status, release_tag, last_fail\n```\n\nUnderstanding this SPL\n\n**GDPR Transparency Notice Completeness and Version Alignment (Art. 12-14, Art. 25(1))** — Transparency requires that data subjects receive fair processing information at collection. This use case monitors CMS and web deployments for privacy notice URL availability, version tags, and broken links — catching releases that ship without updated notices after processing changes.\n\nDocumented **Data sources**: Synthetic check results (`index=synthetic`), `gdpr_notice_registry.csv` (site_id, notice_url, required_version), CI/CD web deploy events. **App/TA** (typical add-on context): Splunk Synthetic Monitoring, HTTP Event Collector (HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: synthetic.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=synthetic, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ok=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by site_id notice_url** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **GDPR Transparency Notice Completeness and Version Alignment (Art. 12-14, Art. 25(1))**): table site_id, notice_url, http_status, release_tag, last_fail\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failing sites), Single value (failed checks), Timeline (availability), Bar chart (failures by region).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We transparency requires that data subjects receive fair processing information at collection so we can show we meet our privacy duties when people ask for proof.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.12",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.12 is enforced — Splunk UC-22.1.50: GDPR Transparency Notice Completeness and Version Alignment.",
                  "ea": "Saved search 'UC-22.1.50' running on index synthetic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.13 is enforced — Splunk UC-22.1.50: GDPR Transparency Notice Completeness and Version Alignment.",
                  "ea": "Saved search 'UC-22.1.50' running on index synthetic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.14 is enforced — Splunk UC-22.1.50: GDPR Transparency Notice Completeness and Version Alignment.",
                  "ea": "Saved search 'UC-22.1.50' running on index synthetic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.25(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.25(1) is enforced — Splunk UC-22.1.50: GDPR Transparency Notice Completeness and Version Alignment.",
                  "ea": "Saved search 'UC-22.1.50' running on index synthetic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.0,
          "qd": {
            "gold": 1,
            "silver": 0,
            "bronze": 49,
            "none": 0
          }
        },
        {
          "i": "22.2",
          "n": "NIS2",
          "u": [
            {
              "i": "22.2.1",
              "n": "NIS2 Incident Detection and 24-Hour Early Warning Reporting (Art. 23)",
              "c": "critical",
              "f": "intermediate",
              "v": "Measures detection-to-response progress on high-urgency ES notables to support early-warning obligations and internal crisis reporting within the first 24 hours of awareness.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`` `notable` `` macro (rule_name, urgency, status, owner, status_description, _time)",
              "q": "`notable` urgency IN (\"high\",\"critical\") earliest=-3d\n| eval hours_open=round((now()-_time)/3600, 2)\n| eval t_minus_4h=if(hours_open>=20 AND hours_open<24 AND status!=\"Closed\", 1, 0)\n| eval past_24h_open=if(hours_open>24 AND status!=\"Closed\", 1, 0)\n| table _time, rule_name, urgency, status, owner, status_description, hours_open, t_minus_4h, past_24h_open\n| where t_minus_4h=1 OR past_24h_open=1\n| sort - past_24h_open, - hours_open",
              "m": "(1) Map ES `urgency` values to your NIS2 incident classes; (2) require analysts to transition `status`/`status_description` at acknowledgement and containment; (3) alert on `t_minus_4h` for CSIRT/legal escalation; (4) export `past_24h_open` rows into crisis-management runbooks and regulatory reporting drafts.",
              "z": "Timeline (notable aging), Table (stale high-urgency items), Single value (count approaching 24h).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048",
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `` `notable` `` macro (rule_name, urgency, status, owner, status_description, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map ES `urgency` values to your NIS2 incident classes; (2) require analysts to transition `status`/`status_description` at acknowledgement and containment; (3) alert on `t_minus_4h` for CSIRT/legal escalation; (4) export `past_24h_open` rows into crisis-management runbooks and regulatory reporting drafts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` urgency IN (\"high\",\"critical\") earliest=-3d\n| eval hours_open=round((now()-_time)/3600, 2)\n| eval t_minus_4h=if(hours_open>=20 AND hours_open<24 AND status!=\"Closed\", 1, 0)\n| eval past_24h_open=if(hours_open>24 AND status!=\"Closed\", 1, 0)\n| table _time, rule_name, urgency, status, owner, status_description, hours_open, t_minus_4h, past_24h_open\n| where t_minus_4h=1 OR past_24h_open=1\n| sort - past_24h_open, - hours_open\n```\n\nUnderstanding this SPL\n\n**NIS2 Incident Detection and 24-Hour Early Warning Reporting (Art. 23)** — Measures detection-to-response progress on high-urgency ES notables to support early-warning obligations and internal crisis reporting within the first 24 hours of awareness.\n\nDocumented **Data sources**: `` `notable` `` macro (rule_name, urgency, status, owner, status_description, _time). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **hours_open** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **t_minus_4h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **past_24h_open** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NIS2 Incident Detection and 24-Hour Early Warning Reporting (Art. 23)**): table _time, rule_name, urgency, status, owner, status_description, hours_open, t_minus_4h, past_24h_open\n• Filters the current rows with `where t_minus_4h=1 OR past_24h_open=1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (notable aging), Table (stale high-urgency items), Single value (count approaching 24h).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch every serious cyber alert and start a 24-hour clock the moment it is raised, so the team can warn the regulator before the clock runs out — not after, when it is too late.",
              "wv": "crawl",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Contributes evidence toward the Art.23 incident-reporting lifecycle by monitoring the 24-hour early-warning clock for significant incidents.",
                  "ea": "Saved search UC-22.2.1 row-per-finding output archived to index=audit_evidence sourcetype=evidence:saved_search.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23(4)(a)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Proves that for every significant incident raised in ES, the 24-hour early-warning clock is monitored, an owner is assigned, and the early-warning package is drafted within deadline.",
                  "ea": "Saved search `UC-22.2.1` writing one row per high/critical notable approaching or breaching the 24-hour clock to `index=audit_evidence sourcetype=evidence:saved_search`, signed via RFC 3161 TSA. Daily digest archived to `index=audit_evidence` with 7-year retention. Dashboard panel `NIS2 Art.23 — 24h clock` on the NIS2 posture dashboard.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "22.2.2",
              "n": "NIS2 Supply Chain Security Monitoring (Art. 21(2)(d))",
              "c": "high",
              "f": "advanced",
              "v": "Correlates vendor privileged access sessions (PAM) with threat intelligence on supplier domains to surface abnormal third-party activity affecting essential services.",
              "t": "Splunk Add-on for CyberArk (Splunkbase 2891), Splunk Enterprise Security (Splunkbase 263) for threat intelligence lookups",
              "d": "`index=pam` `sourcetype=\"cyberark:session\"` (user, target_host, target_account, protocol, duration_min, session_id); `index=pam` `sourcetype=\"cyberark:vault\"` (user, account, action, target)",
              "q": "index=pam sourcetype=\"cyberark:session\" earliest=-24h\n| rex field=target_host \"(?<target_domain>[a-zA-Z0-9][a-zA-Z0-9.-]+\\.[a-zA-Z]{2,})$\"\n| stats sum(duration_min) as total_min, dc(session_id) as sessions by user, target_host, target_domain, target_account\n| lookup threat_intel_domain_lookup domain AS target_domain OUTPUT description AS ti_description, weight AS ti_weight\n| where isnotnull(ti_weight) OR total_min>120\n| sort - total_min",
              "m": "(1) Deploy CyberArk TA 2891 and send Vault/PSM session logs to `index=pam`; (2) maintain `threat_intel_domain_lookup` from ES Threat Intelligence exports or STIX/TAXII feeds; (3) tag supplier-owned targets in `vendor_asset_lookup.csv` and join for baseline comparison; (4) alert on TI matches or unusually long sessions.",
              "z": "Table (sessions with TI hits), Bar chart (minutes by supplier), Heatmap (user x hour).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/2891), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CyberArk (Splunkbase 2891), Splunk Enterprise Security (Splunkbase 263) for threat intelligence lookups.\n• Ensure the following data sources are available: `index=pam` `sourcetype=\"cyberark:session\"` (user, target_host, target_account, protocol, duration_min, session_id); `index=pam` `sourcetype=\"cyberark:vault\"` (user, account, action, target).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Deploy CyberArk TA 2891 and send Vault/PSM session logs to `index=pam`; (2) maintain `threat_intel_domain_lookup` from ES Threat Intelligence exports or STIX/TAXII feeds; (3) tag supplier-owned targets in `vendor_asset_lookup.csv` and join for baseline comparison; (4) alert on TI matches or unusually long sessions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:session\" earliest=-24h\n| rex field=target_host \"(?<target_domain>[a-zA-Z0-9][a-zA-Z0-9.-]+\\.[a-zA-Z]{2,})$\"\n| stats sum(duration_min) as total_min, dc(session_id) as sessions by user, target_host, target_domain, target_account\n| lookup threat_intel_domain_lookup domain AS target_domain OUTPUT description AS ti_description, weight AS ti_weight\n| where isnotnull(ti_weight) OR total_min>120\n| sort - total_min\n```\n\nUnderstanding this SPL\n\n**NIS2 Supply Chain Security Monitoring (Art. 21(2)(d))** — Correlates vendor privileged access sessions (PAM) with threat intelligence on supplier domains to surface abnormal third-party activity affecting essential services.\n\nDocumented **Data sources**: `index=pam` `sourcetype=\"cyberark:session\"` (user, target_host, target_account, protocol, duration_min, session_id); `index=pam` `sourcetype=\"cyberark:vault\"` (user, account, action, target). **App/TA** (typical add-on context): Splunk Add-on for CyberArk (Splunkbase 2891), Splunk Enterprise Security (Splunkbase 263) for threat intelligence lookups. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:session\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by user, target_host, target_domain, target_account** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(ti_weight) OR total_min>120` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Supply Chain Security Monitoring (Art. 21(2)(d))** — Correlates vendor privileged access sessions (PAM) with threat intelligence on supplier domains to surface abnormal third-party activity affecting essential services.\n\nDocumented **Data sources**: `index=pam` `sourcetype=\"cyberark:session\"` (user, target_host, target_account, protocol, duration_min, session_id); `index=pam` `sourcetype=\"cyberark:vault\"` (user, account, action, target). **App/TA** (typical add-on context): Splunk Add-on for CyberArk (Splunkbase 2891), Splunk Enterprise Security (Splunkbase 263) for threat intelligence lookups. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sessions with TI hits), Bar chart (minutes by supplier), Heatmap (user x hour).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch who connects into outside companies that keep your critical systems running, and we notice when those connections line up with known bad internet names or when someone stays logged in far longer than normal.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "Authentication (PAM sessions when CIM-mapped via TA)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "cyberark"
              ],
              "em": [
                "snmp_ups"
              ],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(d)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(d) (Supply-chain security) is enforced — Splunk UC-22.2.2: NIS2 Supply Chain Security Monitoring.",
                  "ea": "Saved search 'UC-22.2.2' running on sourcetype cyberark:session and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.3",
              "n": "NIS2 Vulnerability Disclosure and Patch Management Tracking (Art. 21(2)(e))",
              "c": "critical",
              "f": "beginner",
              "v": "Tracks CVE exposure and remediation latency from first detection to fix to demonstrate systematic vulnerability handling for essential and important entities.",
              "t": "Tenable Add-On for Splunk (Splunkbase 4060)",
              "d": "`index=vulnerability` `sourcetype=\"tenable:vuln\"` (cve, severity, plugin_name, host, first_found, last_fixed, state)",
              "q": "index=vulnerability sourcetype=\"tenable:vuln\" state=\"Active\"\n| eval host=coalesce(host, hostname, dns_name)\n| eval first_found=coalesce(first_found, first_seen)\n| eval age_days=round((now()-first_found)/86400, 1)\n| eval sla_days=case(severity=\"Critical\",7, severity=\"High\",30, 1=1,90)\n| eval sla_breach=if(age_days>sla_days, 1, 0)\n| stats count as open_vulns, max(age_days) as max_age by host, severity\n| where sla_breach=1\n| sort - max_age\n| table host, severity, open_vulns, max_age, sla_days",
              "m": "(1) Install Tenable Add-On (4060) and route data to `index=vulnerability`; (2) validate field names (`cve_id` vs `cve`, `first_seen` vs `first_found`) in Data Summary; (3) tune `sla_days` to your security policy; (4) integrate with change/patch tickets for exception tracking.",
              "z": "Table (over-SLA assets), Bar chart (count by severity), Line chart (open critical CVE trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tenable Add-On for Splunk (Splunkbase 4060).\n• Ensure the following data sources are available: `index=vulnerability` `sourcetype=\"tenable:vuln\"` (cve, severity, plugin_name, host, first_found, last_fixed, state).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Install Tenable Add-On (4060) and route data to `index=vulnerability`; (2) validate field names (`cve_id` vs `cve`, `first_seen` vs `first_found`) in Data Summary; (3) tune `sla_days` to your security policy; (4) integrate with change/patch tickets for exception tracking.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vulnerability sourcetype=\"tenable:vuln\" state=\"Active\"\n| eval host=coalesce(host, hostname, dns_name)\n| eval first_found=coalesce(first_found, first_seen)\n| eval age_days=round((now()-first_found)/86400, 1)\n| eval sla_days=case(severity=\"Critical\",7, severity=\"High\",30, 1=1,90)\n| eval sla_breach=if(age_days>sla_days, 1, 0)\n| stats count as open_vulns, max(age_days) as max_age by host, severity\n| where sla_breach=1\n| sort - max_age\n| table host, severity, open_vulns, max_age, sla_days\n```\n\nUnderstanding this SPL\n\n**NIS2 Vulnerability Disclosure and Patch Management Tracking (Art. 21(2)(e))** — Tracks CVE exposure and remediation latency from first detection to fix to demonstrate systematic vulnerability handling for essential and important entities.\n\nDocumented **Data sources**: `index=vulnerability` `sourcetype=\"tenable:vuln\"` (cve, severity, plugin_name, host, first_found, last_fixed, state). **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vulnerability; **sourcetype**: tenable:vuln. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vulnerability, sourcetype=\"tenable:vuln\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **host** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **first_found** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where sla_breach=1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NIS2 Vulnerability Disclosure and Patch Management Tracking (Art. 
21(2)(e))**): table host, severity, open_vulns, max_age, sla_days\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest, Vulnerabilities.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Vulnerability Disclosure and Patch Management Tracking (Art. 21(2)(e))** — Tracks CVE exposure and remediation latency from first detection to fix to demonstrate systematic vulnerability handling for essential and important entities.\n\nDocumented **Data sources**: `index=vulnerability` `sourcetype=\"tenable:vuln\"` (cve, severity, plugin_name, host, first_found, last_fixed, state). **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (over-SLA assets), Bar chart (count by severity), Line chart (open critical CVE trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We measure how long serious security holes have stayed open on each computer. When something important has waited too long, we raise it so the people responsible for fixing it can act before auditors or criminals force the issue.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest, Vulnerabilities.severity | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(e) (Security in acquisition, development and maintenance) is enforced — Splunk UC-22.2.3: NIS2 Vulnerability Disclosure and Patch Management Tracking.",
                  "ea": "Saved search 'UC-22.2.3' running on index=vulnerability sourcetype=\"tenable:vuln\" (cve, severity, plugin_name, host, first_found, last_fixed, state), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.4",
              "n": "NIS2 Business Continuity and Crisis Management Monitoring (Art. 21(2)(c))",
              "c": "critical",
              "f": "intermediate",
              "v": "Uses ITSI service health and KPI breach signals as live evidence that continuity targets (RTO/RPO expressed as service KPIs) are monitored during incidents and crises.",
              "t": "Splunk IT Service Intelligence (Splunkbase 1841)",
              "d": "`index=itsi_summary` (health_score, service_name, kpi_name, severity_value, severity_label, is_service_in_maintenance)",
              "q": "index=itsi_summary is_service_in_maintenance=0 earliest=-24h\n| eval rto_rpo_risk=if(severity_value>=3 OR health_score<70, 1, 0)\n| stats avg(health_score) as avg_health,\n        count(eval(rto_rpo_risk=1)) as breach_events\n    by service_name, kpi_name\n| where breach_events>0 OR avg_health<85\n| sort - breach_events\n| table service_name, kpi_name, avg_health, breach_events",
              "m": "(1) Model each regulated NIS2 service in ITSI with KPIs tied to RTO/RPO (e.g. availability, transaction success, replication lag); (2) set severity thresholds so `severity_value>=3` aligns with crisis playbooks; (3) display on Glass Table / Service Analyzer for NOC/C-level crisis calls; (4) attach episode workflows for major incidents.",
              "z": "Service Analyzer (ITSI), Glass Table, Line chart (health_score over time), Table (KPIs in breach).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk IT Service Intelligence](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk IT Service Intelligence (Splunkbase 1841).\n• Ensure the following data sources are available: `index=itsi_summary` (health_score, service_name, kpi_name, severity_value, severity_label, is_service_in_maintenance).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Model each regulated NIS2 service in ITSI with KPIs tied to RTO/RPO (e.g. availability, transaction success, replication lag); (2) set severity thresholds so `severity_value>=3` aligns with crisis playbooks; (3) display on Glass Table / Service Analyzer for NOC/C-level crisis calls; (4) attach episode workflows for major incidents.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary is_service_in_maintenance=0 earliest=-24h\n| eval rto_rpo_risk=if(severity_value>=3 OR health_score<70, 1, 0)\n| stats avg(health_score) as avg_health,\n        count(eval(rto_rpo_risk=1)) as breach_events\n    by service_name, kpi_name\n| where breach_events>0 OR avg_health<85\n| sort - breach_events\n| table service_name, kpi_name, avg_health, breach_events\n```\n\nUnderstanding this SPL\n\n**NIS2 Business Continuity and Crisis Management Monitoring (Art. 21(2)(c))** — Uses ITSI service health and KPI breach signals as live evidence that continuity targets (RTO/RPO expressed as service KPIs) are monitored during incidents and crises.\n\nDocumented **Data sources**: `index=itsi_summary` (health_score, service_name, kpi_name, severity_value, severity_label, is_service_in_maintenance). **App/TA** (typical add-on context): Splunk IT Service Intelligence (Splunkbase 1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **rto_rpo_risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by service_name, kpi_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where breach_events>0 OR avg_health<85` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NIS2 Business Continuity and Crisis Management Monitoring (Art. 21(2)(c))**): table service_name, kpi_name, avg_health, breach_events\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Service Analyzer (ITSI), Glass Table, Line chart (health_score over time), Table (KPIs in breach).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We check that your emergency plans are reviewed on time, that practice runs are actually recorded, and that when you rehearse a failover the important services still look healthy. If something slips, you see it early instead of during an auditor visit.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(c)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Shows BCM plans, BIA reviews, and failover tests stay on schedule by matching ServiceNow GRC policies plus `category=bcp_review` tasks to `nis2_bcp_inventory`, and surfaces ITSI services that leave 'green' during `is_drill` exercises.",
                  "ea": "Saved search `UC-22.2.4` on `index=itsm` (`sourcetype=snow:sn_grc_policy`, `sourcetype=snow:task`) and `index=itsi` `sourcetype=itsi:service_health`, enriched via `lookup nis2_bcp_inventory` (plan_id, owner, review_cycle_days, last_review_date, last_test_date); exports/PDF to vault with RFC3161. SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.5",
              "n": "NIS2 Network and Information Systems Access Control Audit (Art. 21(2)(i))",
              "c": "critical",
              "f": "intermediate",
              "v": "Audits interactive logon success/failure and special privilege assignment on Windows assets supporting essential services, including after-hours and non-interactive patterns, for access-control assurance.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742)",
              "d": "`index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, Account_Name, Logon_Type, Workstation_Name, Status, dest)",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4624, 4625, 4672) earliest=-24h\n| eval auth_result=case(EventCode=4624,\"success\", EventCode=4625,\"failure\", EventCode=4672,\"special_privileges\", 1=1,\"other\")\n| eval after_hours=if(tonumber(strftime(_time,\"%H\"))<7 OR tonumber(strftime(_time,\"%H\"))>19, 1, 0)\n| stats count by EventCode, auth_result, Account_Name, dest, Logon_Type, after_hours\n| sort -count",
              "m": "(1) Deploy Splunk Add-on for Windows (742) with Security log collection from domain controllers and member servers; (2) enable Group Policy auditing for logon events and special privileges; (3) tune out known service accounts via `lookup service_accounts.csv`; (4) send high-value rows (4625 spikes, 4672 after-hours) to SOAR/ITSM; (5) map to Authentication CIM for ES content.",
              "z": "Time chart (failed logons 4625), Table (privileged logons 4672), Bar chart (after_hours vs business hours).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742).\n• Ensure the following data sources are available: `index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, Account_Name, Logon_Type, Workstation_Name, Status, dest).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Deploy Splunk Add-on for Windows (742) with Security log collection from domain controllers and member servers; (2) enable Group Policy auditing for logon events and special privileges; (3) tune out known service accounts via `lookup service_accounts.csv`; (4) send high-value rows (4625 spikes, 4672 after-hours) to SOAR/ITSM; (5) map to Authentication CIM for ES content.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4624, 4625, 4672) earliest=-24h\n| eval auth_result=case(EventCode=4624,\"success\", EventCode=4625,\"failure\", EventCode=4672,\"special_privileges\", 1=1,\"other\")\n| eval after_hours=if(tonumber(strftime(_time,\"%H\"))<7 OR tonumber(strftime(_time,\"%H\"))>19, 1, 0)\n| stats count by EventCode, auth_result, Account_Name, dest, Logon_Type, after_hours\n| sort -count\n```\n\nUnderstanding this SPL\n\n**NIS2 Network and Information Systems Access Control Audit (Art. 21(2)(i))** — Audits interactive logon success/failure and special privilege assignment on Windows assets supporting essential services, including after-hours and non-interactive patterns, for access-control assurance.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, Account_Name, Logon_Type, Workstation_Name, Status, dest). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **auth_result** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **after_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by EventCode, auth_result, Account_Name, dest, Logon_Type, after_hours** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Network and Information Systems Access Control Audit (Art. 
21(2)(i))** — Audits interactive logon success/failure and special privilege assignment on Windows assets supporting essential services, including after-hours and non-interactive patterns, for access-control assurance.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, Account_Name, Logon_Type, Workstation_Name, Status, dest). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (failed logons 4625), Table (privileged logons 4672), Bar chart (after_hours vs business hours).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep a single diary that shows who got into our important computer systems, whether they used an extra login check when required, and when someone tried after bedtime. That way we can prove to outsiders we watch the doors properly.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(i)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(i) (Human resources and access control) is enforced — Splunk UC-22.2.5: NIS2 Network and Information Systems Access Control Audit.",
                  "ea": "Saved search 'UC-22.2.5' running on index=windows sourcetype=\"WinEventLog:Security\" (EventCode, Account_Name, Logon_Type, Workstation_Name, Status, dest), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.6",
              "n": "NIS2 Risk Analysis and Information System Security Policy Evidence (Art. 21(2)(a))",
              "c": "critical",
              "f": "advanced",
              "v": "Article 21(2)(a) requires documented risk analysis and information security policies. This use case continuously validates that organisational risk posture is tracked, risk treatments are assigned owners, and security policy coverage aligns with critical asset inventory — producing auditable evidence that risk management is an operational process, not a one-time exercise.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "`index=risk` `sourcetype=\"stash\"` (risk_object, risk_object_type, risk_score, source), asset/identity lookups, `_audit` index",
              "q": "index=risk sourcetype=\"stash\" earliest=-30d@d\n| stats latest(risk_score) as current_risk, max(risk_score) as peak_risk, dc(source) as contributing_detections by risk_object, risk_object_type\n| lookup asset_lookup_by_str key AS risk_object OUTPUT category, priority, owner\n| fillnull value=\"UNASSIGNED\" owner category\n| where owner=\"UNASSIGNED\" OR current_risk > 50\n| sort - current_risk\n| table risk_object, risk_object_type, category, owner, current_risk, peak_risk, contributing_detections",
              "m": "(1) Populate ES asset and identity frameworks with NIS2-scoped systems; (2) ensure risk-generating correlation searches are active for all critical asset categories; (3) flag `UNASSIGNED` owners as governance gaps requiring remediation; (4) schedule weekly to generate evidence of continuous risk monitoring; (5) export as PDF for audit evidence pack.",
              "z": "Table (risk objects without owners), Bar chart (risk by category), Single value (unassigned assets), Line chart (risk trend over 30 days).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: `index=risk` `sourcetype=\"stash\"` (risk_object, risk_object_type, risk_score, source), asset/identity lookups, `_audit` index.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate ES asset and identity frameworks with NIS2-scoped systems; (2) ensure risk-generating correlation searches are active for all critical asset categories; (3) flag `UNASSIGNED` owners as governance gaps requiring remediation; (4) schedule weekly to generate evidence of continuous risk monitoring; (5) export as PDF for audit evidence pack.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=risk sourcetype=\"stash\" earliest=-30d@d\n| stats latest(risk_score) as current_risk, max(risk_score) as peak_risk, dc(source) as contributing_detections by risk_object, risk_object_type\n| lookup asset_lookup_by_str key AS risk_object OUTPUT category, priority, owner\n| fillnull value=\"UNASSIGNED\" owner category\n| where owner=\"UNASSIGNED\" OR current_risk > 50\n| sort - current_risk\n| table risk_object, risk_object_type, category, owner, current_risk, peak_risk, contributing_detections\n```\n\nUnderstanding this SPL\n\n**NIS2 Risk Analysis and Information System Security Policy Evidence (Art. 21(2)(a))** — Article 21(2)(a) requires documented risk analysis and information security policies. This use case continuously validates that organisational risk posture is tracked, risk treatments are assigned owners, and security policy coverage aligns with critical asset inventory — producing auditable evidence that risk management is an operational process, not a one-time exercise.\n\nDocumented **Data sources**: `index=risk` `sourcetype=\"stash\"` (risk_object, risk_object_type, risk_score, source), asset/identity lookups, `_audit` index. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: risk; **sourcetype**: stash. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=risk, sourcetype=\"stash\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by risk_object, risk_object_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Fills null values with `fillnull`.\n• Filters the current rows with `where owner=\"UNASSIGNED\" OR current_risk > 50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NIS2 Risk Analysis and Information System Security Policy Evidence (Art. 
21(2)(a))**): table risk_object, risk_object_type, category, owner, current_risk, peak_risk, contributing_detections\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (risk objects without owners), Bar chart (risk by category), Single value (unassigned assets), Line chart (risk trend over 30 days).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on your risk register and official security rulebooks so overdue reviews and half-finished approvals show up in one list. That way the people responsible can fix paperwork before it becomes a crisis.",
              "mtype": [
                "Risk",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(a)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Links live GRC risk-register rows and infosec policy approvals to `nis2_is_security_policies` so overdue reviews, high open scores, and stale approvals surface in one queue.",
                  "ea": "`UC-22.2.6` saved search on `index=grc sourcetype=grc:risk` and `index=itsm sourcetype=snow:sn_grc_policy` with `lookup nis2_is_security_policies` (policy_id, policy_name, approval_status, last_approved, review_cycle_days); exports vaulted under RFC3161. SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.7",
              "n": "NIS2 72-Hour Incident Notification Readiness (Art. 23(2))",
              "c": "critical",
              "f": "intermediate",
              "v": "After the 24-hour early warning (UC-22.2.1), Article 23(2) requires a more detailed incident notification within 72 hours containing initial severity assessment, impact analysis, and indicators of compromise. This use case tracks whether significant incidents have the required enrichment fields populated within the filing window, ensuring the 72-hour notification is substantive rather than a rehash of the early warning.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`` `notable` `` macro (urgency, severity, status, owner, status_description, _time, src, dest, signature)",
              "q": "`notable` urgency IN (\"high\",\"critical\") earliest=-7d\n| eval hours_elapsed=round((now()-_time)/3600, 2)\n| eval has_ioc=if(isnotnull(src) AND isnotnull(dest) AND isnotnull(signature), 1, 0)\n| eval has_severity_assessment=if(isnotnull(status_description) AND len(status_description)>20, 1, 0)\n| eval filing_72h_breach=if(hours_elapsed>72 AND status!=\"Closed\" AND (has_ioc=0 OR has_severity_assessment=0), 1, 0)\n| eval approaching_72h=if(hours_elapsed>=48 AND hours_elapsed<72 AND (has_ioc=0 OR has_severity_assessment=0), 1, 0)\n| where filing_72h_breach=1 OR approaching_72h=1\n| table _time, rule_name, urgency, status, owner, hours_elapsed, has_ioc, has_severity_assessment, filing_72h_breach, approaching_72h\n| sort - filing_72h_breach, - hours_elapsed",
              "m": "(1) Define mandatory enrichment fields for NIS2-classified incidents (IOCs, severity assessment narrative, impact scope); (2) alert at 48h for incidents missing required fields; (3) integrate with SOAR to auto-populate IOCs from investigation; (4) export completed notifications as structured reports matching CSIRT templates.",
              "z": "Table (incidents approaching/breaching 72h), Single value (count missing IOCs), Timeline (enrichment progress).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048",
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `` `notable` `` macro (urgency, severity, status, owner, status_description, _time, src, dest, signature).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define mandatory enrichment fields for NIS2-classified incidents (IOCs, severity assessment narrative, impact scope); (2) alert at 48h for incidents missing required fields; (3) integrate with SOAR to auto-populate IOCs from investigation; (4) export completed notifications as structured reports matching CSIRT templates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` urgency IN (\"high\",\"critical\") earliest=-7d\n| eval hours_elapsed=round((now()-_time)/3600, 2)\n| eval has_ioc=if(isnotnull(src) AND isnotnull(dest) AND isnotnull(signature), 1, 0)\n| eval has_severity_assessment=if(isnotnull(status_description) AND len(status_description)>20, 1, 0)\n| eval filing_72h_breach=if(hours_elapsed>72 AND status!=\"Closed\" AND (has_ioc=0 OR has_severity_assessment=0), 1, 0)\n| eval approaching_72h=if(hours_elapsed>=48 AND hours_elapsed<72 AND (has_ioc=0 OR has_severity_assessment=0), 1, 0)\n| where filing_72h_breach=1 OR approaching_72h=1\n| table _time, rule_name, urgency, status, owner, hours_elapsed, has_ioc, has_severity_assessment, filing_72h_breach, approaching_72h\n| sort - filing_72h_breach, - hours_elapsed\n```\n\nUnderstanding this SPL\n\n**NIS2 72-Hour Incident Notification Readiness (Art. 23(2))** — After the 24-hour early warning (UC-22.2.1), Article 23(2) requires a more detailed incident notification within 72 hours containing initial severity assessment, impact analysis, and indicators of compromise. This use case tracks whether significant incidents have the required enrichment fields populated within the filing window, ensuring the 72-hour notification is substantive rather than a rehash of the early warning.\n\nDocumented **Data sources**: `` `notable` `` macro (urgency, severity, status, owner, status_description, _time, src, dest, signature). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **hours_elapsed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **has_ioc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **has_severity_assessment** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **filing_72h_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **approaching_72h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where filing_72h_breach=1 OR approaching_72h=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 72-Hour Incident Notification Readiness (Art. 
23(2))**): table _time, rule_name, urgency, status, owner, hours_elapsed, has_ioc, has_severity_assessment, filing_72h_breach, approaching_72h\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (incidents approaching/breaching 72h), Single value (count missing IOCs), Timeline (enrichment progress).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "After we tell the regulator about a serious cyber incident in the first 24 hours, we have only three days total to send a more detailed update. This watch alerts us before that 72-hour window is about to close, so the more detailed update goes in on time.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23(2)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Proves that for every significant incident with an Art.23 24h early-warning filed, the 72-hour notification is drafted, contains an initial impact assessment, and is delivered to the CSIRT before the deadline.",
                  "ea": "Saved search `UC-22.2.7` row-per-finding output, archived to `index=audit_evidence sourcetype=evidence:saved_search` (RFC 3161 TSA-signed), with daily PDF digest. Dashboard panel `NIS2 Art.23 — 72h clock` on the NIS2 posture dashboard.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.8",
              "n": "NIS2 One-Month Final Incident Report Tracking (Art. 23(4))",
              "c": "high",
              "f": "intermediate",
              "v": "Article 23(4) requires a comprehensive final report within one month of the incident notification, containing root cause analysis, detailed description, mitigation measures applied, and cross-border impact assessment. This use case tracks whether closed significant incidents have completed post-incident reviews within the mandated timeframe.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`` `notable` `` macro, `index=itsm` `sourcetype=\"snow:incident\"` (PIR/RCA records)",
              "q": "`notable` urgency IN (\"high\",\"critical\") status=\"Closed\" earliest=-60d\n| eval days_since_close=round((now()-_time)/86400, 1)\n| eval final_report_due=if(days_since_close>=30, 1, 0)\n| eval final_report_approaching=if(days_since_close>=21 AND days_since_close<30, 1, 0)\n| lookup pir_completion_lookup notable_id AS event_id OUTPUT pir_status, pir_date, root_cause_documented\n| fillnull value=\"NOT_SUBMITTED\" pir_status\n| where (final_report_due=1 AND pir_status!=\"Complete\") OR final_report_approaching=1\n| table _time, rule_name, urgency, owner, days_since_close, pir_status, root_cause_documented, final_report_due\n| sort - final_report_due, - days_since_close",
              "m": "(1) Create `pir_completion_lookup` linking notable IDs to post-incident review (PIR) records from ServiceNow or Confluence; (2) alert at 21 days for incidents without started PIRs; (3) escalate at 30 days for overdue final reports; (4) require root cause analysis, timeline, mitigation actions, and cross-border assessment fields before marking PIR as complete.",
              "z": "Table (overdue final reports), Single value (PIRs due this week), Bar chart (PIR status distribution), Timeline (incident-to-PIR lifecycle).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `` `notable` `` macro, `index=itsm` `sourcetype=\"snow:incident\"` (PIR/RCA records).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create `pir_completion_lookup` linking notable IDs to post-incident review (PIR) records from ServiceNow or Confluence; (2) alert at 21 days for incidents without started PIRs; (3) escalate at 30 days for overdue final reports; (4) require root cause analysis, timeline, mitigation actions, and cross-border assessment fields before marking PIR as complete.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` urgency IN (\"high\",\"critical\") status=\"Closed\" earliest=-60d\n| eval days_since_close=round((now()-_time)/86400, 1)\n| eval final_report_due=if(days_since_close>=30, 1, 0)\n| eval final_report_approaching=if(days_since_close>=21 AND days_since_close<30, 1, 0)\n| lookup pir_completion_lookup notable_id AS event_id OUTPUT pir_status, pir_date, root_cause_documented\n| fillnull value=\"NOT_SUBMITTED\" pir_status\n| where (final_report_due=1 AND pir_status!=\"Complete\") OR final_report_approaching=1\n| table _time, rule_name, urgency, owner, days_since_close, pir_status, root_cause_documented, final_report_due\n| sort - final_report_due, - days_since_close\n```\n\nUnderstanding this SPL\n\n**NIS2 One-Month Final Incident Report Tracking (Art. 23(4))** — Article 23(4) requires a comprehensive final report within one month of the incident notification, containing root cause analysis, detailed description, mitigation measures applied, and cross-border impact assessment. This use case tracks whether closed significant incidents have completed post-incident reviews within the mandated timeframe.\n\nDocumented **Data sources**: `` `notable` `` macro, `index=itsm` `sourcetype=\"snow:incident\"` (PIR/RCA records). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **days_since_close** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **final_report_due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **final_report_approaching** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Fills null values with `fillnull`.\n• Filters the current rows with `where (final_report_due=1 AND pir_status!=\"Complete\") OR final_report_approaching=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 One-Month Final Incident Report Tracking (Art. 
23(4))**): table _time, rule_name, urgency, owner, days_since_close, pir_status, root_cause_documented, final_report_due\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue final reports), Single value (PIRs due this week), Bar chart (PIR status distribution), Timeline (incident-to-PIR lifecycle).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "After we send the regulator the 24-hour and 72-hour updates about a serious cyber incident, we also have to send a full final report within one month — explaining the cause, what we did, and what we learned. This watch makes sure we never forget that final report.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23(4)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Proves that for every significant incident with a 24h early-warning and 72h notification on file, the 1-month final report is drafted and submitted before the deadline, with a complete record of the cause analysis, mitigation steps, and cross-border impact.",
                  "ea": "Saved search `UC-22.2.8` row-per-finding output, `index=audit_evidence sourcetype=evidence:saved_search` archive (RFC 3161 TSA-signed), dashboard panel `NIS2 Art.23 — Final report clock` on the NIS2 posture dashboard. Final report PDF copies archived under `index=audit_evidence sourcetype=evidence:nis2_artifact stage=final_report`.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
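A minimal sketch of how the `pir_completion_lookup` used above could be refreshed from ServiceNow incident records. The custom fields `u_splunk_notable_id` and `u_root_cause` are hypothetical placeholders; substitute whatever fields your ServiceNow instance uses to link a PIR back to the ES notable, and point a lookup definition named `pir_completion_lookup` at the resulting CSV.

```spl
index=itsm sourcetype="snow:incident" u_splunk_notable_id=* earliest=-90d
| stats latest(state) as snow_state latest(closed_at) as pir_date
        latest(u_root_cause) as root_cause by u_splunk_notable_id
| rename u_splunk_notable_id as notable_id
| eval root_cause_documented=if(isnotnull(root_cause) AND len(root_cause)>0, "yes", "no")
| eval pir_status=if(snow_state="Closed" AND root_cause_documented="yes", "Complete", "In Progress")
| table notable_id, pir_status, pir_date, root_cause_documented
| outputlookup pir_completion_lookup.csv
```

Schedule this ahead of the tracking search so the lookup is never more than a day stale.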
            {
              "i": "22.2.9",
              "n": "NIS2 Effectiveness Assessment of Cybersecurity Measures (Art. 21(2)(f))",
              "c": "high",
              "f": "advanced",
              "v": "Article 21(2)(f) requires policies and procedures to assess the effectiveness of cybersecurity risk-management measures. This use case creates a KPI dashboard tracking operational evidence across all NIS2 control areas — MFA coverage, patch SLA compliance, backup restore success, training completion, and detection efficacy — providing continuous proof that controls work rather than just exist.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk ITSI (Splunkbase 1841), Tenable Add-On for Splunk (Splunkbase 4060)",
              "d": "`index=risk`, `index=vulnerability`, `index=itsi_summary`, `_audit`, `_internal`, CIM data models",
              "q": "| makeresults\n| eval measure=\"MFA_Coverage\"\n| append [\n    search index=_internal sourcetype=splunk_web_access earliest=-24h\n    | stats dc(user) as total_users dc(eval(if(match(_raw,\"(?i)mfa|2fa|totp\"),user,null()))) as mfa_users\n    | eval measure=\"MFA_Coverage\", pct=round(100*mfa_users/total_users,1), target=95\n]\n| append [\n    search index=vulnerability sourcetype=\"tenable:vuln\" state=\"Active\" earliest=-30d\n    | eval sla_days=case(severity=\"Critical\",7, severity=\"High\",30, 1=1,90)\n    | eval age_days=round((now()-first_found)/86400,1)\n    | eval in_sla=if(age_days<=sla_days,1,0)\n    | stats avg(in_sla) as pct_raw\n    | eval measure=\"Patch_SLA_Compliance\", pct=round(pct_raw*100,1), target=90\n]\n| append [\n    search index=itsi_summary is_service_in_maintenance=0 earliest=-7d\n    | stats avg(eval(if(health_score>=70,1,0))) as pct_raw by service_name\n    | stats avg(pct_raw) as pct_raw\n    | eval measure=\"Service_Availability\", pct=round(pct_raw*100,1), target=99\n]\n| where isnotnull(pct)\n| eval status=if(pct>=target,\"PASS\",\"FAIL\")\n| table measure, pct, target, status",
              "m": "(1) Define target KPIs for each NIS2 Article 21 measure area; (2) populate data sources (vulnerability scanner, identity provider MFA reports, ITSI services, training LMS exports); (3) schedule monthly for board/audit reporting; (4) add pen test and tabletop exercise results as manual KV store entries; (5) trend quarter-over-quarter to demonstrate improvement.",
              "z": "KPI tiles (pass/fail per measure), Gauge charts (% vs target), Table (failing measures), Line chart (effectiveness trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk ITSI (Splunkbase 1841), Tenable Add-On for Splunk (Splunkbase 4060).\n• Ensure the following data sources are available: `index=risk`, `index=vulnerability`, `index=itsi_summary`, `_audit`, `_internal`, CIM data models.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define target KPIs for each NIS2 Article 21 measure area; (2) populate data sources (vulnerability scanner, identity provider MFA reports, ITSI services, training LMS exports); (3) schedule monthly for board/audit reporting; (4) add pen test and tabletop exercise results as manual KV store entries; (5) trend quarter-over-quarter to demonstrate improvement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| makeresults\n| eval measure=\"MFA_Coverage\"\n| append [\n    search index=_internal sourcetype=splunk_web_access earliest=-24h\n    | stats dc(user) as total_users dc(eval(if(match(_raw,\"(?i)mfa|2fa|totp\"),user,null()))) as mfa_users\n    | eval measure=\"MFA_Coverage\", pct=round(100*mfa_users/total_users,1), target=95\n]\n| append [\n    search index=vulnerability sourcetype=\"tenable:vuln\" state=\"Active\" earliest=-30d\n    | eval sla_days=case(severity=\"Critical\",7, severity=\"High\",30, 1=1,90)\n    | eval age_days=round((now()-first_found)/86400,1)\n    | eval in_sla=if(age_days<=sla_days,1,0)\n    | stats avg(in_sla) as pct_raw\n    | eval measure=\"Patch_SLA_Compliance\", pct=round(pct_raw*100,1), target=90\n]\n| append [\n    search index=itsi_summary is_service_in_maintenance=0 earliest=-7d\n    | stats avg(eval(if(health_score>=70,1,0))) as pct_raw by service_name\n    | stats avg(pct_raw) as pct_raw\n    | eval measure=\"Service_Availability\", pct=round(pct_raw*100,1), target=99\n]\n| where isnotnull(pct)\n| eval status=if(pct>=target,\"PASS\",\"FAIL\")\n| table measure, pct, target, status\n```\n\nUnderstanding this SPL\n\n**NIS2 Effectiveness Assessment of Cybersecurity Measures (Art. 21(2)(f))** — Article 21(2)(f) requires policies and procedures to assess the effectiveness of cybersecurity risk-management measures. This use case creates a KPI dashboard tracking operational evidence across all NIS2 control areas — MFA coverage, patch SLA compliance, backup restore success, training completion, and detection efficacy — providing continuous proof that controls work rather than just exist.\n\nDocumented **Data sources**: `index=risk`, `index=vulnerability`, `index=itsi_summary`, `_audit`, `_internal`, CIM data models. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk ITSI (Splunkbase 1841), Tenable Add-On for Splunk (Splunkbase 4060). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Generates synthetic events with `makeresults` (tests or scaffolding).\n• `eval` defines or adjusts **measure** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Appends rows from a subsearch with `append`.\n• Appends rows from a subsearch with `append`.\n• Appends rows from a subsearch with `append`.\n• Filters the current rows with `where isnotnull(pct)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NIS2 Effectiveness Assessment of Cybersecurity Measures (Art. 21(2)(f))**): table measure, pct, target, status\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Effectiveness Assessment of Cybersecurity Measures (Art. 21(2)(f))** — Article 21(2)(f) requires policies and procedures to assess the effectiveness of cybersecurity risk-management measures. This use case creates a KPI dashboard tracking operational evidence across all NIS2 control areas — MFA coverage, patch SLA compliance, backup restore success, training completion, and detection efficacy — providing continuous proof that controls work rather than just exist.\n\nDocumented **Data sources**: `index=risk`, `index=vulnerability`, `index=itsi_summary`, `_audit`, `_internal`, CIM data models. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk ITSI (Splunkbase 1841), Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: KPI tiles (pass/fail per measure), Gauge charts (% vs target), Table (failing measures), Line chart (effectiveness trend).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We check a few simple health scores—how fast serious software flaws get fixed, whether important services stay in good shape, and a rough sign that people are signing in the safer way—so we can show outsiders our protections actually work week to week.",
              "mtype": [
                "Risk",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t avg(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - agg_value",
              "e": [
                "itsi",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(f)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(f) (Policies and procedures effectiveness) is enforced — Splunk UC-22.2.9: NIS2 Effectiveness Assessment of Cybersecurity Measures.",
                  "ea": "Saved search 'UC-22.2.9' running on index=risk, index=vulnerability, index=itsi_summary, _audit, _internal, CIM data models, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
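The implementation notes for UC 22.2.9 call for monthly scheduling and quarter-over-quarter trending. One hedged way to do that is to `collect` the KPI table into a summary index and trend from there; the index and source names below are assumptions, so adjust them to your environment.

```spl
index=summary source="nis2_effectiveness_kpis"
| timechart span=1mon avg(pct) as pct by measure
```

To populate it, append `| collect index=summary source="nis2_effectiveness_kpis"` to the scheduled KPI search; the summary index must already exist and be writable by the search owner.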
            {
              "i": "22.2.10",
              "n": "NIS2 Cyber Hygiene and Training Compliance (Art. 21(2)(g))",
              "c": "medium",
              "f": "beginner",
              "v": "Article 21(2)(g) requires basic cyber hygiene practices and cybersecurity training for all staff. This use case tracks training completion rates, identifies overdue personnel, and correlates training gaps with security incident involvement — proving that awareness is delivered, measured, and effective.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), CSV lookup (LMS export)",
              "d": "`index=training` (LMS completion records via CSV/HEC), `index=email` `sourcetype=\"ms:o365:management\"` (phishing simulation results), `` `notable` `` macro",
              "q": "| inputlookup nis2_training_completion.csv\n| eval days_since_training=round((now()-strptime(completion_date,\"%Y-%m-%d\"))/86400, 0)\n| eval overdue=if(days_since_training > 365 OR isnull(completion_date), 1, 0)\n| stats count as total_staff, sum(overdue) as overdue_count, avg(days_since_training) as avg_days_since\n| eval compliance_pct=round(100*(total_staff-overdue_count)/total_staff, 1)\n| table total_staff, overdue_count, compliance_pct, avg_days_since",
              "m": "(1) Export LMS completion data to `nis2_training_completion.csv` via scheduled script or HEC; (2) include all NIS2-scope employees (not just IT); (3) alert when compliance drops below 90%; (4) correlate training-overdue users with notable events to demonstrate risk-based prioritisation; (5) track phishing simulation click rates as effectiveness evidence.",
              "z": "Single value (compliance %), Bar chart (completion by department), Table (overdue staff), Line chart (compliance trend quarterly).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), CSV lookup (LMS export).\n• Ensure the following data sources are available: `index=training` (LMS completion records via CSV/HEC), `index=email` `sourcetype=\"ms:o365:management\"` (phishing simulation results), `` `notable` `` macro.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export LMS completion data to `nis2_training_completion.csv` via scheduled script or HEC; (2) include all NIS2-scope employees (not just IT); (3) alert when compliance drops below 90%; (4) correlate training-overdue users with notable events to demonstrate risk-based prioritisation; (5) track phishing simulation click rates as effectiveness evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_training_completion.csv\n| eval days_since_training=round((now()-strptime(completion_date,\"%Y-%m-%d\"))/86400, 0)\n| eval overdue=if(days_since_training > 365 OR isnull(completion_date), 1, 0)\n| stats count as total_staff, sum(overdue) as overdue_count, avg(days_since_training) as avg_days_since\n| eval compliance_pct=round(100*(total_staff-overdue_count)/total_staff, 1)\n| table total_staff, overdue_count, compliance_pct, avg_days_since\n```\n\nUnderstanding this SPL\n\n**NIS2 Cyber Hygiene and Training Compliance (Art. 21(2)(g))** — Article 21(2)(g) requires basic cyber hygiene practices and cybersecurity training for all staff. This use case tracks training completion rates, identifies overdue personnel, and correlates training gaps with security incident involvement — proving that awareness is delivered, measured, and effective.\n\nDocumented **Data sources**: `index=training` (LMS completion records via CSV/HEC), `index=email` `sourcetype=\"ms:o365:management\"` (phishing simulation results), `` `notable` `` macro. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), CSV lookup (LMS export). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **days_since_training** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **compliance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NIS2 Cyber Hygiene and Training Compliance (Art. 21(2)(g))**): table total_staff, overdue_count, compliance_pct, avg_days_since\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Single value (compliance %), Bar chart (completion by department), Table (overdue staff), Line chart (compliance trend quarterly).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We check a simple list that shows who finished this year’s security lessons and who is late, using the same dates the training website stores. That helps leaders prove everyone gets refresher training, not just the tech team.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(g)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(g) (Cyber-hygiene and training) is enforced — Splunk UC-22.2.10: NIS2 Cyber Hygiene and Training Compliance.",
                  "ea": "Saved search 'UC-22.2.10' running on sourcetype ms:o365:management and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
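Step (4) of UC 22.2.10 suggests correlating training-overdue users with notable events. A sketch, assuming the LMS export carries a `user` column whose values match the `user` field on notables (rename as needed):

```spl
| inputlookup nis2_training_completion.csv
| eval days_since_training=round((now()-strptime(completion_date,"%Y-%m-%d"))/86400, 0)
| where days_since_training > 365 OR isnull(completion_date)
| fields user
| join type=inner user [
    search `notable` earliest=-30d
    | stats count as notable_count values(rule_name) as rules by user
]
| sort - notable_count
| table user, notable_count, rules
```

The rows returned are overdue users who also appear in recent notables, a natural prioritisation list for refresher training.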
            {
              "i": "22.2.11",
              "n": "NIS2 Cryptography and Encryption Policy Monitoring (Art. 21(2)(h))",
              "c": "high",
              "f": "intermediate",
              "v": "Article 21(2)(h) requires policies and procedures for cryptography and encryption. This use case monitors TLS certificate health, identifies weak cipher usage, detects unencrypted protocols on NIS2-scoped networks, and tracks encryption-at-rest status — providing continuous evidence that cryptographic controls are operational and current.",
              "t": "Splunk Add-on for Stream (Splunkbase 1809), Splunk Enterprise Security (Splunkbase 263), Tenable Add-On for Splunk (Splunkbase 4060)",
              "d": "CIM Certificate data model, `index=network` (TLS metadata), `index=vulnerability` (crypto-related findings)",
              "q": "| tstats `summariesonly` count\n  from datamodel=Certificates.All_Certificates\n  where All_Certificates.ssl_end_time < relative_time(now(), \"+30d\")\n  by All_Certificates.ssl_subject_common_name All_Certificates.ssl_end_time All_Certificates.ssl_issuer_common_name\n| rename All_Certificates.* as *\n| eval days_until_expiry=round((ssl_end_time - now())/86400, 0)\n| eval status=case(days_until_expiry < 0, \"EXPIRED\", days_until_expiry < 14, \"CRITICAL\", days_until_expiry < 30, \"WARNING\")\n| sort days_until_expiry\n| table ssl_subject_common_name, ssl_issuer_common_name, days_until_expiry, status",
              "m": "(1) Ingest TLS handshake metadata from Splunk Stream or proxy logs; (2) monitor for deprecated protocols (TLS 1.0/1.1, SSLv3) and weak ciphers (RC4, DES, 3DES, export ciphers); (3) track certificate expiry for NIS2-scoped services; (4) correlate vulnerability scanner findings for crypto weaknesses; (5) report encryption-at-rest coverage for databases and file stores.",
              "z": "Table (expiring/expired certificates), Pie chart (TLS version distribution), Bar chart (weak ciphers by host), Single value (% services using TLS 1.2+).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [CIM: Certificates](https://docs.splunk.com/Documentation/CIM/latest/User/Certificates)",
              "mitre": [
                "T1562",
                "T1005"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Stream (Splunkbase 1809), Splunk Enterprise Security (Splunkbase 263), Tenable Add-On for Splunk (Splunkbase 4060).\n• Ensure the following data sources are available: CIM Certificate data model, `index=network` (TLS metadata), `index=vulnerability` (crypto-related findings).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest TLS handshake metadata from Splunk Stream or proxy logs; (2) monitor for deprecated protocols (TLS 1.0/1.1, SSLv3) and weak ciphers (RC4, DES, 3DES, export ciphers); (3) track certificate expiry for NIS2-scoped services; (4) correlate vulnerability scanner findings for crypto weaknesses; (5) report encryption-at-rest coverage for databases and file stores.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Certificates.All_Certificates\n  where All_Certificates.ssl_end_time < relative_time(now(), \"+30d\")\n  by All_Certificates.ssl_subject_common_name All_Certificates.ssl_end_time All_Certificates.ssl_issuer_common_name\n| rename All_Certificates.* as *\n| eval days_until_expiry=round((ssl_end_time - now())/86400, 0)\n| eval status=case(days_until_expiry < 0, \"EXPIRED\", days_until_expiry < 14, \"CRITICAL\", days_until_expiry < 30, \"WARNING\")\n| sort days_until_expiry\n| table ssl_subject_common_name, ssl_issuer_common_name, days_until_expiry, status\n```\n\nUnderstanding this SPL\n\n**NIS2 Cryptography and Encryption Policy Monitoring (Art. 21(2)(h))** — Article 21(2)(h) requires policies and procedures for cryptography and encryption. This use case monitors TLS certificate health, identifies weak cipher usage, detects unencrypted protocols on NIS2-scoped networks, and tracks encryption-at-rest status — providing continuous evidence that cryptographic controls are operational and current.\n\nDocumented **Data sources**: CIM Certificate data model, `index=network` (TLS metadata), `index=vulnerability` (crypto-related findings). **App/TA** (typical add-on context): Splunk Add-on for Stream (Splunkbase 1809), Splunk Enterprise Security (Splunkbase 263), Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Certificates.All_Certificates` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• `eval` defines or adjusts **days_until_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NIS2 Cryptography and Encryption Policy Monitoring (Art. 
21(2)(h))**): table ssl_subject_common_name, ssl_issuer_common_name, days_until_expiry, status\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Cryptography and Encryption Policy Monitoring (Art. 21(2)(h))** — Article 21(2)(h) requires policies and procedures for cryptography and encryption. This use case monitors TLS certificate health, identifies weak cipher usage, detects unencrypted protocols on NIS2-scoped networks, and tracks encryption-at-rest status — providing continuous evidence that cryptographic controls are operational and current.\n\nDocumented **Data sources**: CIM Certificate data model, `index=network` (TLS metadata), `index=vulnerability` (crypto-related findings). **App/TA** (typical add-on context): Splunk Add-on for Stream (Splunkbase 1809), Splunk Enterprise Security (Splunkbase 263), Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Certificates.All_Certificates` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (expiring/expired certificates), Pie chart (TLS version distribution), Bar chart (weak ciphers by host), Single value (% services using TLS 1.2+).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the electronic locks (certificates) that make websites and internal tools talk privately. When a lock is about to expire or is too old-fashioned, we get a heads-up so nothing suddenly stops working.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "Certificates"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(h)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(h) (Cryptography and encryption) is enforced — Splunk UC-22.2.11: NIS2 Cryptography and Encryption Policy Monitoring.",
                  "ea": "Saved search 'UC-22.2.11' running on CIM Certificate data model, index=network (TLS metadata), index=vulnerability (crypto-related findings), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
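Step (2) of UC 22.2.11 also asks for deprecated-protocol and weak-cipher detection, which the certificate-expiry search above does not cover. A sketch against Splunk Stream TCP metadata; the sourcetype and the `ssl_version`/`ssl_cipher` field names are assumptions and depend on how Stream (or your proxy) is configured:

```spl
index=network sourcetype=stream:tcp ssl_version=* earliest=-24h
| eval weak_protocol=if(match(ssl_version, "(?i)sslv[23]|tls\s?1\.[01]"), 1, 0)
| eval weak_cipher=if(match(ssl_cipher, "(?i)rc4|des|export|null"), 1, 0)
| stats count sum(weak_protocol) as weak_protocol_count sum(weak_cipher) as weak_cipher_count
        values(ssl_version) as versions by dest_ip dest_port
| where weak_protocol_count > 0 OR weak_cipher_count > 0
| sort - weak_protocol_count
```

Tune the regexes to your environment; the DES pattern intentionally also matches 3DES suites.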
            {
              "i": "22.2.12",
              "n": "NIS2 Multi-Factor Authentication and Secure Communications (Art. 21(2)(j))",
              "c": "critical",
              "f": "intermediate",
              "v": "Article 21(2)(j) requires MFA or continuous authentication, secured voice/video/text communications, and emergency communication systems. This use case monitors MFA enforcement across critical systems, detects authentication bypasses, and validates that administrative and emergency access channels are secured — providing evidence of the strongest authentication controls NIS2 mandates.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Splunk Add-on for Okta Identity Cloud (Splunkbase 6553)",
              "d": "`index=auth` (IdP authentication logs), `index=azure` `sourcetype=\"ms:aad:signin\"` (Azure AD sign-ins), `index=okta` `sourcetype=\"OktaIM2:log\"` (Okta system logs)",
              "q": "index=auth OR index=azure OR index=okta earliest=-24h\n  sourcetype IN (\"ms:aad:signin\",\"OktaIM2:log\",\"linux:auth\",\"WinEventLog:Security\")\n| eval user=coalesce(userPrincipalName, actor.alternateId, Account_Name, user)\n| eval mfa_used=case(\n    match(lower(_raw),\"(?i)mfa|multifactor|two.?factor|2fa|totp|fido|webauthn|push.*approve\"), 1,\n    match(lower(authenticationRequirement),\"(?i)multi\"), 1,\n    match(lower(factor),\"(?i)push|totp|sms|call|webauthn\"), 1,\n    1=1, 0)\n| eval is_admin=if(match(lower(user),\"(?i)admin|svc-|service|root|system\") OR match(lower(_raw),\"(?i)privileged|admin.*role\"), 1, 0)\n| stats count sum(mfa_used) as mfa_count by user, is_admin, sourcetype\n| eval mfa_pct=round(100*mfa_count/count, 1)\n| where (is_admin=1 AND mfa_pct < 100) OR (is_admin=0 AND mfa_pct < 80)\n| sort is_admin, mfa_pct\n| table user, is_admin, count, mfa_count, mfa_pct, sourcetype",
              "m": "(1) Ingest IdP authentication logs (Azure AD, Okta, or on-prem AD with ADFS); (2) require 100% MFA for administrative access and 80%+ for general users; (3) alert on admin authentications without MFA; (4) track break-glass account usage separately; (5) validate secure communications by checking Teams/Webex/Signal usage for emergency channels; (6) document emergency communication drill results in a KV store for audit evidence.",
              "z": "Single value (MFA coverage %), Table (users without MFA), Bar chart (MFA by authentication type), Gauge (admin MFA compliance).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [Splunk Add-on for Okta Identity Cloud](https://splunkbase.splunk.com/app/6553), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1556"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Splunk Add-on for Okta Identity Cloud (Splunkbase 6553).\n• Ensure the following data sources are available: `index=auth` (IdP authentication logs), `index=azure` `sourcetype=\"ms:aad:signin\"` (Azure AD sign-ins), `index=okta` `sourcetype=\"OktaIM2:log\"` (Okta system logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest IdP authentication logs (Azure AD, Okta, or on-prem AD with ADFS); (2) require 100% MFA for administrative access and 80%+ for general users; (3) alert on admin authentications without MFA; (4) track break-glass account usage separately; (5) validate secure communications by checking Teams/Webex/Signal usage for emergency channels; (6) document emergency communication drill results in a KV store for audit evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=auth OR index=azure OR index=okta earliest=-24h\n  sourcetype IN (\"ms:aad:signin\",\"OktaIM2:log\",\"linux:auth\",\"WinEventLog:Security\")\n| eval user=coalesce(userPrincipalName, actor.alternateId, Account_Name, user)\n| eval mfa_used=case(\n    match(lower(_raw),\"(?i)mfa|multifactor|two.?factor|2fa|totp|fido|webauthn|push.*approve\"), 1,\n    match(lower(authenticationRequirement),\"(?i)multi\"), 1,\n    match(lower(factor),\"(?i)push|totp|sms|call|webauthn\"), 1,\n    1=1, 0)\n| eval is_admin=if(match(lower(user),\"(?i)admin|svc-|service|root|system\") OR match(lower(_raw),\"(?i)privileged|admin.*role\"), 1, 0)\n| stats count sum(mfa_used) as mfa_count by user, is_admin, sourcetype\n| eval mfa_pct=round(100*mfa_count/count, 1)\n| where (is_admin=1 AND mfa_pct < 100) OR (is_admin=0 AND mfa_pct < 80)\n| sort is_admin, mfa_pct\n| table user, is_admin, count, mfa_count, mfa_pct, sourcetype\n```\n\nUnderstanding this SPL\n\n**NIS2 Multi-Factor Authentication and Secure Communications (Art. 21(2)(j))** — Article 21(2)(j) requires MFA or continuous authentication, secured voice/video/text communications, and emergency communication systems. This use case monitors MFA enforcement across critical systems, detects authentication bypasses, and validates that administrative and emergency access channels are secured — providing evidence of the strongest authentication controls NIS2 mandates.\n\nDocumented **Data sources**: `index=auth` (IdP authentication logs), `index=azure` `sourcetype=\"ms:aad:signin\"` (Azure AD sign-ins), `index=okta` `sourcetype=\"OktaIM2:log\"` (Okta system logs). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Splunk Add-on for Okta Identity Cloud (Splunkbase 6553). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: auth, azure, okta.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=auth, index=azure, index=okta, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mfa_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_admin** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user, is_admin, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **mfa_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (is_admin=1 AND mfa_pct < 100) OR (is_admin=0 AND mfa_pct < 80)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NIS2 Multi-Factor Authentication and Secure Communications (Art. 21(2)(j))**): table user, is_admin, count, mfa_count, mfa_pct, sourcetype\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Multi-Factor Authentication and Secure Communications (Art. 21(2)(j))** — Article 21(2)(j) requires MFA or continuous authentication, secured voice/video/text communications, and emergency communication systems. This use case monitors MFA enforcement across critical systems, detects authentication bypasses, and validates that administrative and emergency access channels are secured — providing evidence of the strongest authentication controls NIS2 mandates.\n\nDocumented **Data sources**: `index=auth` (IdP authentication logs), `index=azure` `sourcetype=\"ms:aad:signin\"` (Azure AD sign-ins), `index=okta` `sourcetype=\"OktaIM2:log\"` (Okta system logs). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Splunk Add-on for Okta Identity Cloud (Splunkbase 6553). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Single value (MFA coverage %), Table (users without MFA), Bar chart (MFA by authentication type), Gauge (admin MFA compliance).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We count how often people prove they are really themselves with a second step when signing into work accounts, especially bosses and technical admins. If that second step is missing too often, we raise a hand so the login rules can be fixed.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "m365",
                "okta"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(j)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIS2 Art.21(2)(j) (MFA and secure communications) is enforced — Splunk UC-22.2.12: NIS2 Multi-Factor Authentication and Secure Communications.",
                  "ea": "Saved search 'UC-22.2.12' running on sourcetype ms:aad:signin and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
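Step (4) of UC 22.2.12 asks for break-glass accounts to be tracked separately. A sketch, assuming a small CSV (a hypothetical `breakglass_accounts.csv` with a single `user` column) listing the emergency accounts:

```spl
| inputlookup breakglass_accounts.csv
| fields user
| join type=inner user [
    search index=auth OR index=azure OR index=okta earliest=-7d
    | eval user=coalesce(userPrincipalName, 'actor.alternateId', Account_Name, user)
    | stats count as logins latest(_time) as last_login values(src) as sources by user
]
| convert ctime(last_login)
| table user, logins, last_login, sources
```

Any row here should correspond to a documented emergency; alert on every match rather than on a threshold.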
            {
              "i": "22.2.13",
              "n": "NIS2 Asset Management and Configuration Baseline (Art. 21(2)(i))",
              "c": "high",
              "f": "intermediate",
              "v": "Article 21(2)(i) includes asset management alongside access control and HR security. This use case validates that the ES asset framework is populated and current for NIS2-scoped systems, detects unknown or unmanaged assets communicating on critical networks, and tracks configuration baseline drift — ensuring the asset inventory that underpins all other NIS2 controls is reliable.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Qualys (Splunkbase 2964)",
              "d": "ES Asset Framework (`asset_lookup_by_str`, `asset_lookup_by_cidr`), CIM Network_Traffic data model, `index=vulnerability` (asset discovery scans)",
              "q": "| tstats `summariesonly` dc(All_Traffic.dest_port) as ports, sum(All_Traffic.bytes) as total_bytes\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src\n| rename All_Traffic.src as src\n| lookup asset_lookup_by_str key AS src OUTPUT category, priority, owner, nt_host\n| where isnull(category) OR category=\"unknown\"\n| sort - total_bytes\n| head 100\n| table src, nt_host, category, owner, ports, total_bytes",
              "m": "(1) Populate ES asset framework from CMDB, vulnerability scanners, and network discovery tools; (2) schedule this search daily to find active IPs not in the asset inventory; (3) classify discovered assets by NIS2 criticality (essential service vs supporting); (4) alert on high-traffic unknown assets; (5) track asset inventory completeness as a NIS2 KPI in UC-22.2.9 effectiveness dashboard.",
              "z": "Table (unknown active assets), Single value (% assets classified), Bar chart (assets by category), Map (asset locations if geo data available).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Qualys](https://splunkbase.splunk.com/app/2964), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Qualys (Splunkbase 2964).\n• Ensure the following data sources are available: ES Asset Framework (`asset_lookup_by_str`, `asset_lookup_by_cidr`), CIM Network_Traffic data model, `index=vulnerability` (asset discovery scans).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate ES asset framework from CMDB, vulnerability scanners, and network discovery tools; (2) schedule this search daily to find active IPs not in the asset inventory; (3) classify discovered assets by NIS2 criticality (essential service vs supporting); (4) alert on high-traffic unknown assets; (5) track asset inventory completeness as a NIS2 KPI in UC-22.2.9 effectiveness dashboard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` dc(All_Traffic.dest_port) as ports, sum(All_Traffic.bytes) as total_bytes\n  from datamodel=Network_Traffic.All_Traffic\n  by All_Traffic.src\n| rename All_Traffic.src as src\n| lookup asset_lookup_by_str key AS src OUTPUT category, priority, owner, nt_host\n| where isnull(category) OR category=\"unknown\"\n| sort - total_bytes\n| head 100\n| table src, nt_host, category, owner, ports, total_bytes\n```\n\nUnderstanding this SPL\n\n**NIS2 Asset Management and Configuration Baseline (Art. 21(2)(i))** — Article 21(2)(i) includes asset management alongside access control and HR security. This use case validates that the ES asset framework is populated and current for NIS2-scoped systems, detects unknown or unmanaged assets communicating on critical networks, and tracks configuration baseline drift — ensuring the asset inventory that underpins all other NIS2 controls is reliable.\n\nDocumented **Data sources**: ES Asset Framework (`asset_lookup_by_str`, `asset_lookup_by_cidr`), CIM Network_Traffic data model, `index=vulnerability` (asset discovery scans). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Qualys (Splunkbase 2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(category) OR category=\"unknown\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Pipeline stage (see **NIS2 Asset Management and Configuration Baseline (Art. 21(2)(i))**): table src, nt_host, category, owner, ports, total_bytes\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Asset Management and Configuration Baseline (Art. 
21(2)(i))** — Article 21(2)(i) includes asset management alongside access control and HR security. This use case validates that the ES asset framework is populated and current for NIS2-scoped systems, detects unknown or unmanaged assets communicating on critical networks, and tracks configuration baseline drift — ensuring the asset inventory that underpins all other NIS2 controls is reliable.\n\nDocumented **Data sources**: ES Asset Framework (`asset_lookup_by_str`, `asset_lookup_by_cidr`), CIM Network_Traffic data model, `index=vulnerability` (asset discovery scans). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Qualys (Splunkbase 2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unknown active assets), Single value (% assets classified), Bar chart (assets by category), Map (asset locations if geo data available).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We compare the address book of company computers with the real traffic on the internal roads. If something is sending lots of data but is not on the list, we point it out so the owners can register or shut it down.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count",
              "e": [
                "qualys"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(i)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(i) (Human resources and access control) is enforced — Splunk UC-22.2.13: NIS2 Asset Management and Configuration Baseline.",
                  "ea": "Saved search 'UC-22.2.13' running on index vulnerability and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
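Step (5) of UC 22.2.13 feeds an asset-inventory-completeness KPI into the UC 22.2.9 effectiveness dashboard. A sketch that reuses the same data model and lookup as the main search and reduces them to a single percentage:

```spl
| tstats `summariesonly` count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src
| rename All_Traffic.src as src
| lookup asset_lookup_by_str key AS src OUTPUT category
| eval in_inventory=if(isnotnull(category) AND category!="unknown", 1, 0)
| stats count as active_hosts sum(in_inventory) as inventoried
| eval inventory_completeness_pct=round(100*inventoried/active_hosts, 1)
| table active_hosts, inventoried, inventory_completeness_pct
```

Collect the resulting percentage alongside the other KPIs on a monthly schedule to trend inventory coverage over time.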
            {
              "i": "22.2.14",
              "n": "NIS2 Human Resources Security — Joiner/Mover/Leaver Process (Art. 21(2)(i))",
              "c": "high",
              "f": "intermediate",
              "v": "Article 21(2)(i) requires human resources security controls including lifecycle-managed access. This use case detects accounts that remain active after employee departure, identifies access rights that persist after role changes, and monitors onboarding completeness — providing evidence that joiner/mover/leaver (JML) processes are enforced and auditable.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=windows` `sourcetype=\"WinEventLog:Security\"` (account disable/delete events), `index=itsm` (ServiceNow HR case records), HR system exports (CSV/HEC), `index=auth` (authentication after termination)",
              "q": "| inputlookup terminated_employees.csv\n| eval term_date=strptime(termination_date, \"%Y-%m-%d\")\n| eval days_since_term=round((now()-term_date)/86400, 0)\n| join type=left user [\n    search index=auth OR index=windows OR index=azure earliest=-30d\n    | eval user=coalesce(Account_Name, userPrincipalName, user)\n    | stats latest(_time) as last_auth count as auth_count by user\n]\n| where isnotnull(last_auth) AND last_auth > term_date\n| eval days_active_after_term=round((last_auth - term_date)/86400, 0)\n| sort - days_active_after_term\n| table user, termination_date, days_since_term, days_active_after_term, auth_count",
              "m": "(1) Export HR termination data to `terminated_employees.csv` via scheduled integration or HEC; (2) run daily to detect orphaned accounts; (3) alert on any authentication activity after termination date; (4) escalate accounts active >3 days post-termination as potential compliance violations; (5) extend to movers by comparing current role vs granted access groups; (6) integrate with ServiceNow for automated deprovisioning ticket creation.",
              "z": "Table (active terminated accounts), Single value (orphaned accounts count), Bar chart (days active after termination), Timeline (post-termination activity).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=windows` `sourcetype=\"WinEventLog:Security\"` (account disable/delete events), `index=itsm` (ServiceNow HR case records), HR system exports (CSV/HEC), `index=auth` (authentication after termination).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export HR termination data to `terminated_employees.csv` via scheduled integration or HEC; (2) run daily to detect orphaned accounts; (3) alert on any authentication activity after termination date; (4) escalate accounts active >3 days post-termination as potential compliance violations; (5) extend to movers by comparing current role vs granted access groups; (6) integrate with ServiceNow for automated deprovisioning ticket creation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup terminated_employees.csv\n| eval term_date=strptime(termination_date, \"%Y-%m-%d\")\n| eval days_since_term=round((now()-term_date)/86400, 0)\n| join type=left user [\n    search index=auth OR index=windows OR index=azure earliest=-30d\n    | eval user=coalesce(Account_Name, userPrincipalName, user)\n    | stats latest(_time) as last_auth count as auth_count by user\n]\n| where isnotnull(last_auth) AND last_auth > term_date\n| eval days_active_after_term=round((last_auth - term_date)/86400, 0)\n| sort - days_active_after_term\n| table user, termination_date, days_since_term, days_active_after_term, auth_count\n```\n\nUnderstanding this SPL\n\n**NIS2 Human Resources Security — Joiner/Mover/Leaver Process (Art. 21(2)(i))** — Article 21(2)(i) requires human resources security controls including lifecycle-managed access. This use case detects accounts that remain active after employee departure, identifies access rights that persist after role changes, and monitors onboarding completeness — providing evidence that joiner/mover/leaver (JML) processes are enforced and auditable.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` (account disable/delete events), `index=itsm` (ServiceNow HR case records), HR system exports (CSV/HEC), `index=auth` (authentication after termination). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for ServiceNow (Splunkbase 1928). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **term_date** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_since_term** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnotnull(last_auth) AND last_auth > term_date` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_active_after_term** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NIS2 Human Resources Security — Joiner/Mover/Leaver Process (Art. 21(2)(i))**): table user, termination_date, days_since_term, days_active_after_term, auth_count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Human Resources Security — Joiner/Mover/Leaver Process (Art. 21(2)(i))** — Article 21(2)(i) requires human resources security controls including lifecycle-managed access. This use case detects accounts that remain active after employee departure, identifies access rights that persist after role changes, and monitors onboarding completeness — providing evidence that joiner/mover/leaver (JML) processes are enforced and auditable.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` (account disable/delete events), `index=itsm` (ServiceNow HR case records), HR system exports (CSV/HEC), `index=auth` (authentication after termination). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (active terminated accounts), Single value (orphaned accounts count), Bar chart (days active after termination), Timeline (post-termination activity).",
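Step 1 assumes `terminated_employees.csv` is refreshed by an HR integration. One way to keep it current from data already in Splunk is a scheduled search; this is a sketch only, and the `snow:hr_termination` sourcetype and its `user`/`termination_date` fields are placeholders for whatever your ServiceNow or HR feed actually emits:

```spl
index=itsm sourcetype="snow:hr_termination" earliest=-1d
| stats max(termination_date) as termination_date by user
| inputlookup append=true terminated_employees.csv
| stats max(termination_date) as termination_date by user
| outputlookup terminated_employees.csv
```

Appending the existing lookup and re-aggregating keeps one row per user while preserving earlier terminations.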
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "When someone stops working for us, we check the electronic keys still are not used afterward. If a former employee’s name still shows up logging in after their last day, we treat that as urgent housekeeping.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "m365",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(i)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(i) (Human resources and access control) is enforced — Splunk UC-22.2.14: NIS2 Human Resources Security — Joiner/Mover/Leaver Process.",
                  "ea": "Saved search 'UC-22.2.14' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                },
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.15",
              "n": "NIS2 Secure System Acquisition and Development Lifecycle (Art. 21(2)(e))",
              "c": "high",
              "f": "intermediate",
              "v": "Article 21(2)(e) extends beyond vulnerability management to require security in system acquisition, development, and maintenance lifecycles. This use case monitors CI/CD pipeline security gates, tracks code scanning results, and validates that change management processes include security review — demonstrating that security is embedded in the development lifecycle rather than bolted on after deployment.",
              "t": "Splunk Add-on for GitHub (Splunkbase 5596), Splunk Add-on for Jira (Splunkbase 1438)",
              "d": "`index=devops` (CI/CD pipeline events from GitHub Actions, GitLab CI, Jenkins), `index=codescan` (SAST/SCA scan results), `index=itsm` (change management records)",
              "q": "index=devops sourcetype IN (\"github:webhook\",\"gitlab:pipeline\",\"jenkins:build\") earliest=-30d\n| eval has_security_scan=if(match(lower(_raw),\"(?i)sast|sca|snyk|sonar|trivy|semgrep|security.scan|code.scan\"), 1, 0)\n| eval deployment=if(match(lower(_raw),\"(?i)deploy|release|prod|production\"), 1, 0)\n| stats count as total_builds, sum(has_security_scan) as scanned_builds, sum(deployment) as deployments by repository\n| eval scan_coverage=round(100*scanned_builds/total_builds, 1)\n| where scan_coverage < 80 OR (deployments > 0 AND scanned_builds = 0)\n| sort scan_coverage\n| table repository, total_builds, scanned_builds, scan_coverage, deployments",
              "m": "(1) Forward CI/CD pipeline events to Splunk via webhooks or HEC; (2) define minimum security gate requirements (SAST scan, dependency check, container image scan); (3) alert on deployments to production without security scan evidence; (4) track scan coverage percentage as a NIS2 KPI; (5) correlate with change management tickets to verify security review sign-off.",
              "z": "Table (repos without scans), Bar chart (scan coverage by repo), Single value (overall scan coverage %), Line chart (coverage trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5596), [Splunk Add-on for Jira](https://splunkbase.splunk.com/app/1438)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for GitHub (Splunkbase 5596), Splunk Add-on for Jira (Splunkbase 1438).\n• Ensure the following data sources are available: `index=devops` (CI/CD pipeline events from GitHub Actions, GitLab CI, Jenkins), `index=codescan` (SAST/SCA scan results), `index=itsm` (change management records).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Forward CI/CD pipeline events to Splunk via webhooks or HEC; (2) define minimum security gate requirements (SAST scan, dependency check, container image scan); (3) alert on deployments to production without security scan evidence; (4) track scan coverage percentage as a NIS2 KPI; (5) correlate with change management tickets to verify security review sign-off.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devops sourcetype IN (\"github:webhook\",\"gitlab:pipeline\",\"jenkins:build\") earliest=-30d\n| eval has_security_scan=if(match(lower(_raw),\"(?i)sast|sca|snyk|sonar|trivy|semgrep|security.scan|code.scan\"), 1, 0)\n| eval deployment=if(match(lower(_raw),\"(?i)deploy|release|prod|production\"), 1, 0)\n| stats count as total_builds, sum(has_security_scan) as scanned_builds, sum(deployment) as deployments by repository\n| eval scan_coverage=round(100*scanned_builds/total_builds, 1)\n| where scan_coverage < 80 OR (deployments > 0 AND scanned_builds = 0)\n| sort scan_coverage\n| table repository, total_builds, scanned_builds, scan_coverage, deployments\n```\n\nUnderstanding this SPL\n\n**NIS2 Secure System Acquisition and Development Lifecycle (Art. 21(2)(e))** — Article 21(2)(e) extends beyond vulnerability management to require security in system acquisition, development, and maintenance lifecycles. This use case monitors CI/CD pipeline security gates, tracks code scanning results, and validates that change management processes include security review — demonstrating that security is embedded in the development lifecycle rather than bolted on after deployment.\n\nDocumented **Data sources**: `index=devops` (CI/CD pipeline events from GitHub Actions, GitLab CI, Jenkins), `index=codescan` (SAST/SCA scan results), `index=itsm` (change management records). **App/TA** (typical add-on context): Splunk Add-on for GitHub (Splunkbase 5596), Splunk Add-on for Jira (Splunkbase 1438). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devops.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devops, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **has_security_scan** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **deployment** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by repository** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **scan_coverage** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where scan_coverage < 80 OR (deployments > 0 AND scanned_builds = 0)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NIS2 Secure System Acquisition and Development Lifecycle (Art. 21(2)(e))**): table repository, total_builds, scanned_builds, scan_coverage, deployments\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (repos without scans), Bar chart (scan coverage by repo), Single value (overall scan coverage %), Line chart (coverage trend).",
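The coverage-trend line chart listed under visualizations can reuse the same classification logic as the main search; a sketch that keeps the catalogue's regex unchanged and simply buckets builds by week:

```spl
index=devops sourcetype IN ("github:webhook","gitlab:pipeline","jenkins:build") earliest=-90d
| eval has_security_scan=if(match(lower(_raw),"(?i)sast|sca|snyk|sonar|trivy|semgrep|security.scan|code.scan"), 1, 0)
| timechart span=1w sum(has_security_scan) as scanned_builds count as total_builds
| eval scan_coverage=round(100*scanned_builds/total_builds, 1)
| fields _time scan_coverage
```

Plot `scan_coverage` as a single line and overlay the 80% threshold used in the alerting clause.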
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We check whether the automatic safety checks ran before new software left the factory floor, so risky changes are caught while they are still easy to undo instead of after customers are affected.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github",
                "jira"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(e) (Security in acquisition, development and maintenance) is enforced — Splunk UC-22.2.15: NIS2 Secure System Acquisition and Development Lifecycle.",
                  "ea": "Saved search 'UC-22.2.15' running on index devops and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.16",
              "n": "NIS2 Supply Chain Third-Party Risk Continuous Monitoring (Art. 21(2)(d))",
              "c": "high",
              "f": "advanced",
              "v": "Beyond vendor session monitoring (UC-22.2.2), Article 21(2)(d) requires continuous security assessment of direct suppliers and service providers. This use case tracks third-party SaaS and API dependency health, monitors supplier security posture indicators, and detects anomalous data flows to supplier networks — providing broader supply chain risk visibility than PAM session monitoring alone.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for AWS (Splunkbase 1876)",
              "d": "CIM Network_Traffic data model, `index=proxy` (web proxy logs), DNS logs, threat intelligence lookups, `index=cloud` (SaaS audit logs)",
              "q": "| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out dc(All_Traffic.src) as internal_sources\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.dest_category=\"supplier\"\n  by All_Traffic.dest All_Traffic.app span=1d\n| rename All_Traffic.* as *\n| lookup supplier_risk_lookup dest OUTPUT supplier_name, risk_tier, last_assessment_date\n| eval days_since_assessment=round((now()-strptime(last_assessment_date,\"%Y-%m-%d\"))/86400, 0)\n| eval assessment_overdue=if(days_since_assessment > 365, 1, 0)\n| where bytes_out > 1073741824 OR assessment_overdue=1 OR internal_sources > 50\n| sort - bytes_out\n| table dest, supplier_name, risk_tier, bytes_out, internal_sources, days_since_assessment, assessment_overdue",
              "m": "(1) Tag supplier destination IPs/domains in ES asset framework with `category=supplier`; (2) maintain `supplier_risk_lookup` with vendor names, risk tiers, and last assessment dates; (3) alert on large data transfers to suppliers (potential exfiltration vector); (4) flag suppliers with overdue security assessments; (5) correlate supplier domains with threat intelligence feeds; (6) report on supply chain risk posture for board-level NIS2 compliance evidence.",
              "z": "Table (supplier risk overview), Bar chart (data transfer by supplier), Single value (overdue assessments), Heatmap (supplier access patterns).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1078",
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for AWS (Splunkbase 1876).\n• Ensure the following data sources are available: CIM Network_Traffic data model, `index=proxy` (web proxy logs), DNS logs, threat intelligence lookups, `index=cloud` (SaaS audit logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag supplier destination IPs/domains in ES asset framework with `category=supplier`; (2) maintain `supplier_risk_lookup` with vendor names, risk tiers, and last assessment dates; (3) alert on large data transfers to suppliers (potential exfiltration vector); (4) flag suppliers with overdue security assessments; (5) correlate supplier domains with threat intelligence feeds; (6) report on supply chain risk posture for board-level NIS2 compliance evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes_out) as bytes_out dc(All_Traffic.src) as internal_sources\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.dest_category=\"supplier\"\n  by All_Traffic.dest All_Traffic.app span=1d\n| rename All_Traffic.* as *\n| lookup supplier_risk_lookup dest OUTPUT supplier_name, risk_tier, last_assessment_date\n| eval days_since_assessment=round((now()-strptime(last_assessment_date,\"%Y-%m-%d\"))/86400, 0)\n| eval assessment_overdue=if(days_since_assessment > 365, 1, 0)\n| where bytes_out > 1073741824 OR assessment_overdue=1 OR internal_sources > 50\n| sort - bytes_out\n| table dest, supplier_name, risk_tier, bytes_out, internal_sources, days_since_assessment, assessment_overdue\n```\n\nUnderstanding this SPL\n\n**NIS2 Supply Chain Third-Party Risk Continuous Monitoring (Art. 21(2)(d))** — Beyond vendor session monitoring (UC-22.2.2), Article 21(2)(d) requires continuous security assessment of direct suppliers and service providers. This use case tracks third-party SaaS and API dependency health, monitors supplier security posture indicators, and detects anomalous data flows to supplier networks — providing broader supply chain risk visibility than PAM session monitoring alone.\n\nDocumented **Data sources**: CIM Network_Traffic data model, `index=proxy` (web proxy logs), DNS logs, threat intelligence lookups, `index=cloud` (SaaS audit logs). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for AWS (Splunkbase 1876). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **days_since_assessment** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **assessment_overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where bytes_out > 1073741824 OR assessment_overdue=1 OR internal_sources > 50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NIS2 Supply Chain Third-Party Risk Continuous Monitoring (Art. 21(2)(d))**): table dest, supplier_name, risk_tier, bytes_out, internal_sources, days_since_assessment, assessment_overdue\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Supply Chain Third-Party Risk Continuous Monitoring (Art. 21(2)(d))** — Beyond vendor session monitoring (UC-22.2.2), Article 21(2)(d) requires continuous security assessment of direct suppliers and service providers. This use case tracks third-party SaaS and API dependency health, monitors supplier security posture indicators, and detects anomalous data flows to supplier networks — providing broader supply chain risk visibility than PAM session monitoring alone.\n\nDocumented **Data sources**: CIM Network_Traffic data model, `index=proxy` (web proxy logs), DNS logs, threat intelligence lookups, `index=cloud` (SaaS audit logs). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (supplier risk overview), Bar chart (data transfer by supplier), Single value (overdue assessments), Heatmap (supplier access patterns).",
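Step 2 of the implementation maintains `supplier_risk_lookup`. A companion report that flags overdue assessments directly from that lookup, without waiting for traffic, might look like the sketch below; it assumes the lookup is readable with `inputlookup` and reuses the same column names as the main search:

```spl
| inputlookup supplier_risk_lookup
| eval days_since_assessment=round((now()-strptime(last_assessment_date,"%Y-%m-%d"))/86400, 0)
| where isnull(last_assessment_date) OR days_since_assessment > 365
| sort - days_since_assessment
| table supplier_name, risk_tier, last_assessment_date, days_since_assessment
```

Suppliers with no recorded assessment date sort alongside the most overdue ones, which keeps missing paperwork visible.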
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on how much information flows out to outside partners and whether anyone forgot to renew the formal safety review for those partners on time.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(d)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(d) (Supply-chain security) is enforced — Splunk UC-22.2.16: NIS2 Supply Chain Third-Party Risk Continuous Monitoring.",
                  "ea": "Saved search 'UC-22.2.16' running on index proxy and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.17",
              "n": "NIS2 Backup Management and Disaster Recovery Verification (Art. 21(2)(c))",
              "c": "critical",
              "f": "intermediate",
              "v": "While UC-22.2.4 monitors live service health during crises, Article 21(2)(c) specifically requires backup management and disaster recovery capabilities. This use case tracks backup job success/failure, validates restore test completion, and monitors RTO/RPO target adherence — providing the operational evidence that NIS2 auditors specifically ask for: \"show me your last successful restore test.\"",
              "t": "Veeam App for Splunk (Splunkbase 7312), Splunk ITSI (Splunkbase 1841)",
              "d": "`index=backup` (backup software logs — Veeam, Commvault, Rubrik, AWS Backup), `index=itsi_summary` (service recovery KPIs)",
              "q": "index=backup sourcetype IN (\"veeam:backup\",\"commvault:job\",\"rubrik:event\",\"aws:backup\") earliest=-7d\n| eval job_status=lower(coalesce(status, result, job_result, state))\n| eval success=if(match(job_status,\"(?i)success|completed|ok\"), 1, 0)\n| eval failed=if(match(job_status,\"(?i)fail|error|warning|missed\"), 1, 0)\n| stats sum(success) as successful, sum(failed) as failed, latest(_time) as last_backup by job_name, target_system\n| eval days_since_backup=round((now()-last_backup)/86400, 1)\n| eval backup_gap=if(days_since_backup > 1, \"OVERDUE\", \"OK\")\n| where failed > 0 OR backup_gap=\"OVERDUE\"\n| sort - days_since_backup\n| table target_system, job_name, successful, failed, days_since_backup, backup_gap",
              "m": "(1) Forward backup software logs via syslog or HEC; (2) define RTO/RPO targets per NIS2-scoped service and validate against actual backup frequency; (3) alert on any failed backup for critical systems; (4) track restore test completion in a KV store (date, system, result, duration); (5) schedule monthly restore tests and report results; (6) include restore test evidence in Article 21(2)(f) effectiveness assessment (UC-22.2.9).",
              "z": "Table (failed/overdue backups), Single value (backup success rate %), Bar chart (failures by system), Timeline (backup history).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Veeam App for Splunk](https://splunkbase.splunk.com/app/7312), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Veeam App for Splunk (Splunkbase 7312), Splunk ITSI (Splunkbase 1841).\n• Ensure the following data sources are available: `index=backup` (backup software logs — Veeam, Commvault, Rubrik, AWS Backup), `index=itsi_summary` (service recovery KPIs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Forward backup software logs via syslog or HEC; (2) define RTO/RPO targets per NIS2-scoped service and validate against actual backup frequency; (3) alert on any failed backup for critical systems; (4) track restore test completion in a KV store (date, system, result, duration); (5) schedule monthly restore tests and report results; (6) include restore test evidence in Article 21(2)(f) effectiveness assessment (UC-22.2.9).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype IN (\"veeam:backup\",\"commvault:job\",\"rubrik:event\",\"aws:backup\") earliest=-7d\n| eval job_status=lower(coalesce(status, result, job_result, state))\n| eval success=if(match(job_status,\"(?i)success|completed|ok\"), 1, 0)\n| eval failed=if(match(job_status,\"(?i)fail|error|warning|missed\"), 1, 0)\n| stats sum(success) as successful, sum(failed) as failed, latest(_time) as last_backup by job_name, target_system\n| eval days_since_backup=round((now()-last_backup)/86400, 1)\n| eval backup_gap=if(days_since_backup > 1, \"OVERDUE\", \"OK\")\n| where failed > 0 OR backup_gap=\"OVERDUE\"\n| sort - days_since_backup\n| table target_system, job_name, successful, failed, days_since_backup, backup_gap\n```\n\nUnderstanding this SPL\n\n**NIS2 Backup Management and Disaster Recovery Verification (Art. 21(2)(c))** — While UC-22.2.4 monitors live service health during crises, Article 21(2)(c) specifically requires backup management and disaster recovery capabilities. This use case tracks backup job success/failure, validates restore test completion, and monitors RTO/RPO target adherence — providing the operational evidence that NIS2 auditors specifically ask for: \"show me your last successful restore test.\"\n\nDocumented **Data sources**: `index=backup` (backup software logs — Veeam, Commvault, Rubrik, AWS Backup), `index=itsi_summary` (service recovery KPIs). **App/TA** (typical add-on context): Veeam App for Splunk (Splunkbase 7312), Splunk ITSI (Splunkbase 1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **job_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **success** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **failed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by job_name, target_system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since_backup** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **backup_gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failed > 0 OR backup_gap=\"OVERDUE\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NIS2 Backup Management and Disaster Recovery Verification (Art. 21(2)(c))**): table target_system, job_name, successful, failed, days_since_backup, backup_gap\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed/overdue backups), Single value (backup success rate %), Bar chart (failures by system), Timeline (backup history).",
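Step 4 tracks restore tests in a KV store. A sketch of the "show me your last successful restore test" report built on that store; the collection name `restore_test_results` and its `target_system`, `test_date` and `result` columns are placeholders for whatever schema you define:

```spl
| inputlookup restore_test_results
| where result="success"
| stats max(test_date) as last_successful_test by target_system
| eval days_since_test=round((now()-strptime(last_successful_test,"%Y-%m-%d"))/86400, 0)
| where days_since_test > 90
| sort - days_since_test
```

Pair it with the main backup search so overdue restore tests and failed backup jobs appear on the same dashboard.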
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch whether your nightly copies of important systems actually finish, and whether practice restores really worked. If a machine slips past the time window you promised, it shows up before you need it in a real emergency.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi",
                "veeam"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(c)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Merges `backup:job_status` runs and `backup:restore_test` outcomes with `nis2_backup_schedule` so failed jobs, missed RPO windows, and failing drills share one timeline.",
                  "ea": "`UC-22.2.17` saved search on `index=backup` (`sourcetype=backup:job_status`, `sourcetype=backup:restore_test`) with `lookup nis2_backup_schedule` (system_name, backup_policy, rpo_hours, rto_hours); exports vaulted under RFC3161. SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Veeam App for Splunk",
                  "id": 7312,
                  "url": "https://splunkbase.splunk.com/app/7312",
                  "desc": "Monitoring and security dashboards for Veeam Backup job statuses and events",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/033805d0-fe3c-11ee-a32e-be99bb517a22.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/0a08b346-fe3c-11ee-9b85-7afc4dbed252.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.18",
              "n": "NIS2 Network Security Monitoring and Anomaly Detection (Art. 21(2)(a))",
              "c": "critical",
              "f": "intermediate",
              "v": "Article 21(2)(a) requires information system security policies to be operational, not just documented. This use case provides continuous network security monitoring — detecting lateral movement, unauthorized network segments, protocol anomalies, and traffic patterns that deviate from baseline — serving as the core evidence that network security policies are enforced through technical controls.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809)",
              "d": "CIM Network_Traffic data model, firewall logs, IDS/IPS events, DNS logs",
              "q": "| tstats `summariesonly` count sum(All_Traffic.bytes) as total_bytes dc(All_Traffic.dest_port) as dest_ports\n  from datamodel=Network_Traffic.All_Traffic\n  where NOT All_Traffic.dest_category IN (\"internal_server\",\"dns\",\"ntp\")\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| rename All_Traffic.* as *\n| where dest_ports > 20 OR (action=\"blocked\" AND count > 100) OR total_bytes > 10737418240\n| lookup asset_lookup_by_str key AS src OUTPUT category as src_category, priority as src_priority\n| eval risk_signal=case(\n    dest_ports > 50, \"port_scan\",\n    action=\"blocked\" AND count > 500, \"brute_force\",\n    total_bytes > 10737418240, \"data_exfiltration\",\n    isnull(src_category), \"unknown_asset\",\n    1=1, \"anomaly\")\n| sort - count\n| table _time, src, dest, src_category, risk_signal, dest_ports, count, total_bytes, action",
              "m": "(1) Ensure CIM Network_Traffic data model is populated from firewall, proxy, and IDS sources; (2) define network segmentation zones and tag in asset framework; (3) alert on port scanning, brute force patterns, and large data transfers; (4) baseline normal traffic patterns and detect deviations; (5) integrate with ES Risk framework to aggregate network anomalies into risk scores for NIS2-scoped assets.",
              "z": "Table (anomalies), Bar chart (risk signals by type), Map (traffic flows), Timeline (blocked connections), Sankey diagram (source to destination flows).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1048",
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809).\n• Ensure the following data sources are available: CIM Network_Traffic data model, firewall logs, IDS/IPS events, DNS logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure CIM Network_Traffic data model is populated from firewall, proxy, and IDS sources; (2) define network segmentation zones and tag in asset framework; (3) alert on port scanning, brute force patterns, and large data transfers; (4) baseline normal traffic patterns and detect deviations; (5) integrate with ES Risk framework to aggregate network anomalies into risk scores for NIS2-scoped assets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` count sum(All_Traffic.bytes) as total_bytes dc(All_Traffic.dest_port) as dest_ports\n  from datamodel=Network_Traffic.All_Traffic\n  where NOT All_Traffic.dest_category IN (\"internal_server\",\"dns\",\"ntp\")\n  by All_Traffic.src All_Traffic.dest All_Traffic.action span=1h\n| rename All_Traffic.* as *\n| where dest_ports > 20 OR (action=\"blocked\" AND count > 100) OR total_bytes > 10737418240\n| lookup asset_lookup_by_str key AS src OUTPUT category as src_category, priority as src_priority\n| eval risk_signal=case(\n    dest_ports > 50, \"port_scan\",\n    action=\"blocked\" AND count > 500, \"brute_force\",\n    total_bytes > 10737418240, \"data_exfiltration\",\n    isnull(src_category), \"unknown_asset\",\n    1=1, \"anomaly\")\n| sort - count\n| table _time, src, dest, src_category, risk_signal, dest_ports, count, total_bytes, action\n```\n\nUnderstanding this SPL\n\n**NIS2 Network Security Monitoring and Anomaly Detection (Art. 21(2)(a))** — Article 21(2)(a) requires information system security policies to be operational, not just documented. This use case provides continuous network security monitoring — detecting lateral movement, unauthorized network segments, protocol anomalies, and traffic patterns that deviate from baseline — serving as the core evidence that network security policies are enforced through technical controls.\n\nDocumented **Data sources**: CIM Network_Traffic data model, firewall logs, IDS/IPS events, DNS logs. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Filters the current rows with `where dest_ports > 20 OR (action=\"blocked\" AND count > 100) OR total_bytes > 10737418240` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **risk_signal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NIS2 Network Security Monitoring and Anomaly Detection (Art. 21(2)(a))**): table _time, src, dest, src_category, risk_signal, dest_ports, count, total_bytes, action\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Network Security Monitoring and Anomaly Detection (Art. 21(2)(a))** — Article 21(2)(a) requires information system security policies to be operational, not just documented. This use case provides continuous network security monitoring — detecting lateral movement, unauthorized network segments, protocol anomalies, and traffic patterns that deviate from baseline — serving as the core evidence that network security policies are enforced through technical controls.\n\nDocumented **Data sources**: CIM Network_Traffic data model, firewall logs, IDS/IPS events, DNS logs. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalies), Bar chart (risk signals by type), Map (traffic flows), Timeline (blocked connections), Sankey diagram (source to destination flows).",
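Step 4 of the implementation calls for baselining normal traffic and detecting deviations. A minimal statistical sketch, assuming the same accelerated Network_Traffic model: it compares each source's hourly byte volume against its own average and flags hours more than three standard deviations above it:

```spl
| tstats summariesonly=t sum(All_Traffic.bytes) as bytes
  from datamodel=Network_Traffic.All_Traffic
  by _time span=1h All_Traffic.src
| rename All_Traffic.src as src
| eventstats avg(bytes) as avg_bytes stdev(bytes) as stdev_bytes by src
| where bytes > avg_bytes + 3 * stdev_bytes
| sort - bytes
```

A longer lookback (for example 30 days) gives a steadier baseline; very new hosts with few samples will need a minimum-event guard before alerting.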
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep an automatic eye on the data pipes in and out of the company. When traffic looks like someone is knocking on every door, hammering a login, or shoving a huge chunk of data somewhere odd, we raise a clear flag so the security team can act—not months later when an audit asks what we actually watched for.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "Network_Traffic",
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(a)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(a) (Risk analysis and information-system security policies) is enforced — Splunk UC-22.2.18: NIS2 Network Security Monitoring and Anomaly Detection.",
                  "ea": "Saved search 'UC-22.2.18' running on CIM Network_Traffic data model, firewall logs, IDS/IPS events, DNS logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.19",
              "n": "NIS2 Cross-Border Incident Impact Assessment (Art. 23(3))",
              "c": "high",
              "f": "advanced",
              "v": "Article 23(3) requires entities to determine whether significant incidents have cross-border impact and to notify CSIRTs in all affected Member States. This use case identifies whether incident-related traffic, compromised assets, or affected users span multiple countries — automating the cross-border impact assessment that NIS2 makes mandatory for multi-jurisdictional operations.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`` `notable` `` macro, CIM Network_Traffic, ES Asset Framework (with geo/country tags), Authentication CIM",
              "q": "`notable` urgency IN (\"high\",\"critical\") status!=\"Closed\" earliest=-7d\n| eval incident_assets=mvappend(src, dest, src_ip, dest_ip)\n| mvexpand incident_assets\n| lookup asset_lookup_by_str key AS incident_assets OUTPUT country, business_unit, category\n| iplocation incident_assets\n| eval asset_country=coalesce(country, Country)\n| where isnotnull(asset_country)\n| stats dc(asset_country) as countries_affected, values(asset_country) as affected_countries, dc(incident_assets) as assets_involved by rule_name, urgency\n| where countries_affected > 1\n| eval cross_border_notification=\"REQUIRED — notify CSIRT in: \" . mvjoin(affected_countries, \", \")\n| table rule_name, urgency, countries_affected, affected_countries, assets_involved, cross_border_notification",
              "m": "(1) Tag assets in ES asset framework with `country` field for all NIS2-scoped systems; (2) enrich IP-based assets with `iplocation`; (3) run against all high/critical notables to assess cross-border scope; (4) auto-generate notification lists for multi-CSIRT reporting; (5) integrate with legal/compliance workflow for mandatory parallel notifications; (6) document cross-border assessment in final report (UC-22.2.8).",
              "z": "Table (cross-border incidents), Map (affected countries), Single value (incidents requiring multi-CSIRT notification), Bar chart (countries involved).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `` `notable` `` macro, CIM Network_Traffic, ES Asset Framework (with geo/country tags), Authentication CIM.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag assets in ES asset framework with `country` field for all NIS2-scoped systems; (2) enrich IP-based assets with `iplocation`; (3) run against all high/critical notables to assess cross-border scope; (4) auto-generate notification lists for multi-CSIRT reporting; (5) integrate with legal/compliance workflow for mandatory parallel notifications; (6) document cross-border assessment in final report (UC-22.2.8).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` urgency IN (\"high\",\"critical\") status!=\"Closed\" earliest=-7d\n| eval incident_assets=mvappend(src, dest, src_ip, dest_ip)\n| mvexpand incident_assets\n| lookup asset_lookup_by_str key AS incident_assets OUTPUT country, business_unit, category\n| iplocation incident_assets\n| eval asset_country=coalesce(country, Country)\n| where isnotnull(asset_country)\n| stats dc(asset_country) as countries_affected, values(asset_country) as affected_countries, dc(incident_assets) as assets_involved by rule_name, urgency\n| where countries_affected > 1\n| eval cross_border_notification=\"REQUIRED — notify CSIRT in: \" . mvjoin(affected_countries, \", \")\n| table rule_name, urgency, countries_affected, affected_countries, assets_involved, cross_border_notification\n```\n\nUnderstanding this SPL\n\n**NIS2 Cross-Border Incident Impact Assessment (Art. 23(3))** — Article 23(3) requires entities to determine whether significant incidents have cross-border impact and to notify CSIRTs in all affected Member States. This use case identifies whether incident-related traffic, compromised assets, or affected users span multiple countries — automating the cross-border impact assessment that NIS2 makes mandatory for multi-jurisdictional operations.\n\nDocumented **Data sources**: `` `notable` `` macro, CIM Network_Traffic, ES Asset Framework (with geo/country tags), Authentication CIM. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **incident_assets** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Pipeline stage (see **NIS2 Cross-Border Incident Impact Assessment (Art. 
23(3))**): iplocation incident_assets\n• `eval` defines or adjusts **asset_country** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(asset_country)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by rule_name, urgency** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where countries_affected > 1` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **cross_border_notification** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NIS2 Cross-Border Incident Impact Assessment (Art. 23(3))**): table rule_name, urgency, countries_affected, affected_countries, assets_involved, cross_border_notification\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (cross-border incidents), Map (affected countries), Single value (incidents requiring multi-CSIRT notification), Bar chart (countries involved).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "When a cyber incident affects systems in more than one country, the regulator wants us to tell the cybersecurity teams in each of those countries. This watch flags every incident with cross-border signs and tracks whether we have completed that cross-country impact check on time.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23(3)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Proves that for every significant incident, cross-border indicators are evaluated and an impact-assessment task is created with an owner, deadline, and completion record.",
                  "ea": "Saved search `UC-22.2.19` row-per-finding output, `index=audit_evidence sourcetype=evidence:saved_search` archive, dashboard panel `NIS2 Art.23 — Cross-border posture`. Per-incident impact assessments archived to `index=audit_evidence sourcetype=evidence:nis2_artifact stage=crossborder_assessment`.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.20",
              "n": "NIS2 Management Body Accountability and Governance Evidence (Art. 20)",
              "c": "high",
              "f": "intermediate",
              "v": "Article 20 requires management bodies to approve and oversee cybersecurity risk-management measures, undergo training, and be personally accountable. This use case aggregates evidence of management engagement — board-level security report generation, policy approval timestamps, executive training completion, and risk acceptance decisions — into a single governance compliance dashboard.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "KV stores (manual governance records), `_audit` (scheduled report execution), `index=training` (executive training records)",
              "q": "| inputlookup nis2_governance_evidence.csv\n| eval evidence_date=strptime(date, \"%Y-%m-%d\")\n| eval days_since=round((now()-evidence_date)/86400, 0)\n| eval status=case(\n    days_since > 365, \"OVERDUE\",\n    days_since > 270, \"DUE_SOON\",\n    1=1, \"CURRENT\")\n| sort - days_since\n| table evidence_type, description, responsible_person, date, days_since, status",
              "m": "(1) Create `nis2_governance_evidence.csv` KV store with evidence types: board_security_briefing, policy_approval, executive_training, risk_acceptance, audit_review, tabletop_exercise; (2) populate manually or via ServiceNow integration; (3) alert when any evidence type is overdue by >365 days; (4) generate quarterly governance report for board; (5) include training completion certificates for management body members.",
              "z": "Table (governance evidence status), Traffic light indicators (current/due/overdue), Timeline (governance activities), Single value (overdue items).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: KV stores (manual governance records), `_audit` (scheduled report execution), `index=training` (executive training records).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create `nis2_governance_evidence.csv` KV store with evidence types: board_security_briefing, policy_approval, executive_training, risk_acceptance, audit_review, tabletop_exercise; (2) populate manually or via ServiceNow integration; (3) alert when any evidence type is overdue by >365 days; (4) generate quarterly governance report for board; (5) include training completion certificates for management body members.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_governance_evidence.csv\n| eval evidence_date=strptime(date, \"%Y-%m-%d\")\n| eval days_since=round((now()-evidence_date)/86400, 0)\n| eval status=case(\n    days_since > 365, \"OVERDUE\",\n    days_since > 270, \"DUE_SOON\",\n    1=1, \"CURRENT\")\n| sort - days_since\n| table evidence_type, description, responsible_person, date, days_since, status\n```\n\nUnderstanding this SPL\n\n**NIS2 Management Body Accountability and Governance Evidence (Art. 20)** — Article 20 requires management bodies to approve and oversee cybersecurity risk-management measures, undergo training, and be personally accountable. This use case aggregates evidence of management engagement — board-level security report generation, policy approval timestamps, executive training completion, and risk acceptance decisions — into a single governance compliance dashboard.\n\nDocumented **Data sources**: KV stores (manual governance records), `_audit` (scheduled report execution), `index=training` (executive training records). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **evidence_date** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NIS2 Management Body Accountability and Governance Evidence (Art. 20)**): table evidence_type, description, responsible_person, date, days_since, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (governance evidence status), Traffic light indicators (current/due/overdue), Timeline (governance activities), Single value (overdue items).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep a checklist of what our leaders must sign off on and which security courses they have finished, and we raise a clear flag when anything on that list is missing or older than a year.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.20",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Proves management and supervisory bodies approve cyber risk measures on cadence, complete role-mapped training, and leave an auditable mix of policy sign-off, LMS completion, and risk acceptance tied to people on the NIS2 management roster.",
                  "ea": "Saved search `UC-22.2.20` over `nis2_governance_evidence.csv` joined to `index=training` (`lms:completion`/`lms:enrollment`, `person_id`, `completion_date`) and `index=itsm` (`snow:sn_grc_policy`/`snow:task`, `policy_name`, `approval_date`, `approved_by`); `_audit` proves schedule runs; `audit_evidence` retains exports keyed by `evidence_id`.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.21",
              "n": "NIS2 Risk Analysis Evidence for Essential Entities (Art. 21(2))",
              "c": "critical",
              "f": "advanced",
              "v": "Essential entities must implement a comprehensive set of risk-management measures proportionate to the risks posed. This use case aggregates residual risk scores, vulnerability exposure, and control maturity for assets tagged as essential — producing continuous evidence that formal risk analysis is refreshed and acted upon, not only performed annually on paper.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Tenable Add-On for Splunk (Splunkbase 4060)",
              "d": "`index=risk` `sourcetype=\"stash\"`, `tenable:vuln`, `nis2_entity_classification.csv` (entity_tier=essential)",
              "q": "| inputlookup nis2_entity_classification.csv WHERE entity_tier=\"essential\"\n| rename asset_id as risk_object\n| join risk_object [\n    search index=risk sourcetype=\"stash\" earliest=-30d\n    | stats latest(risk_score) as residual_risk max(risk_score) as peak_risk by risk_object\n]\n| join risk_object [\n    search index=tenable sourcetype=\"tenable:vuln\" severity IN (\"Critical\",\"High\") earliest=-14d\n    | stats dc(plugin_id) as open_critical_high by host\n    | rename host as risk_object\n]\n| eval analysis_stale=if(residual_risk > 80 OR open_critical_high > 5, \"ESCALATE\", \"Monitor\")\n| table risk_object, business_service, residual_risk, peak_risk, open_critical_high, analysis_stale\n| sort - residual_risk",
              "m": "(1) Tag all in-scope assets with `entity_tier` in `nis2_entity_classification.csv`; (2) ensure ES risk and vuln scans cover essential services; (3) define risk thresholds with CISO sign-off; (4) alert when residual risk or critical findings breach policy; (5) export quarterly risk analysis workbook attachments from dashboard PDFs.",
              "z": "Heatmap (risk by service), Table (essential assets breaching threshold), Single value (mean residual risk), Line chart (trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Tenable Add-On for Splunk (Splunkbase 4060).\n• Ensure the following data sources are available: `index=risk` `sourcetype=\"stash\"`, `tenable:vuln`, `nis2_entity_classification.csv` (entity_tier=essential).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag all in-scope assets with `entity_tier` in `nis2_entity_classification.csv`; (2) ensure ES risk and vuln scans cover essential services; (3) define risk thresholds with CISO sign-off; (4) alert when residual risk or critical findings breach policy; (5) export quarterly risk analysis workbook attachments from dashboard PDFs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_entity_classification.csv WHERE entity_tier=\"essential\"\n| rename asset_id as risk_object\n| join risk_object [\n    search index=risk sourcetype=\"stash\" earliest=-30d\n    | stats latest(risk_score) as residual_risk max(risk_score) as peak_risk by risk_object\n]\n| join risk_object [\n    search index=tenable sourcetype=\"tenable:vuln\" severity IN (\"Critical\",\"High\") earliest=-14d\n    | stats dc(plugin_id) as open_critical_high by host\n    | rename host as risk_object\n]\n| eval analysis_stale=if(residual_risk > 80 OR open_critical_high > 5, \"ESCALATE\", \"Monitor\")\n| table risk_object, business_service, residual_risk, peak_risk, open_critical_high, analysis_stale\n| sort - residual_risk\n```\n\nUnderstanding this SPL\n\n**NIS2 Risk Analysis Evidence for Essential Entities (Art. 21(2))** — Essential entities must implement a comprehensive set of risk-management measures proportionate to the risks posed. This use case aggregates residual risk scores, vulnerability exposure, and control maturity for assets tagged as essential — producing continuous evidence that formal risk analysis is refreshed and acted upon, not only performed annually on paper.\n\nDocumented **Data sources**: `index=risk` `sourcetype=\"stash\"`, `tenable:vuln`, `nis2_entity_classification.csv` (entity_tier=essential). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **analysis_stale** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NIS2 Risk Analysis Evidence for Essential Entities (Art. 
21(2))**): table risk_object, business_service, residual_risk, peak_risk, open_critical_high, analysis_stale\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Risk Analysis Evidence for Essential Entities (Art. 21(2))** — Essential entities must implement a comprehensive set of risk-management measures proportionate to the risks posed. This use case aggregates residual risk scores, vulnerability exposure, and control maturity for assets tagged as essential — producing continuous evidence that formal risk analysis is refreshed and acted upon, not only performed annually on paper.\n\nDocumented **Data sources**: `index=risk` `sourcetype=\"stash\"`, `tenable:vuln`, `nis2_entity_classification.csv` (entity_tier=essential). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (risk by service), Table (essential assets breaching threshold), Single value (mean residual risk), Line chart (trend).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We line up the official \"how risky is this asset\" score with what the vulnerability scanner still sees for the sites labelled most important, so weaker spots do not hide behind paperwork.",
              "mtype": [
                "Security",
                "Risk",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIS2 Art.21(2) is enforced — Splunk UC-22.2.21: NIS2 Risk Analysis Evidence for Essential Entities.",
                  "ea": "Saved search 'UC-22.2.21' running on index=risk sourcetype=\"stash\", tenable:vuln, nis2_entity_classification.csv (entity_tier=essential), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.22",
              "n": "NIS2 Risk Analysis Evidence for Important Entities (Art. 21(2))",
              "c": "high",
              "f": "intermediate",
              "v": "Important entities must implement appropriate and proportionate measures. This use case applies lighter-weight but documented risk indicators — patch latency, authentication anomalies, and backup health — tailored to important-entity thresholds so compliance teams can demonstrate proportionality without over-burdening smaller operators.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Qualys (Splunkbase 2964)",
              "d": "`nis2_entity_classification.csv` (entity_tier=important), `qualys:hostdetection`, authentication CIM",
              "q": "| inputlookup nis2_entity_classification.csv WHERE entity_tier=\"important\"\n| rename asset_hostname as host\n| join host [\n    search index=qualys sourcetype=\"qualys:hostdetection\" earliest=-30d\n    | eval age_days=round((now()-strptime(first_found_datetime,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,0)\n    | where severity IN (\"Critical\",\"High\") AND age_days > 30\n    | stats dc(qid) as aged_findings by host\n]\n| where aged_findings > 0\n| table host, business_service, aged_findings\n| sort - aged_findings",
              "m": "(1) Set proportionate SLAs for important entities (e.g., 30-day critical patch target); (2) ingest Qualys or Tenable with host CMDB mapping; (3) monthly management review of aged findings; (4) pair with UC-22.2.21 for entities that may be reclassified; (5) document risk acceptance where exceptions exist.",
              "z": "Bar chart (aged findings by host), Table (top offenders), Single value (hosts past SLA).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Qualys](https://splunkbase.splunk.com/app/2964), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Qualys (Splunkbase 2964).\n• Ensure the following data sources are available: `nis2_entity_classification.csv` (entity_tier=important), `qualys:hostdetection`, authentication CIM.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Set proportionate SLAs for important entities (e.g., 30-day critical patch target); (2) ingest Qualys or Tenable with host CMDB mapping; (3) monthly management review of aged findings; (4) pair with UC-22.2.21 for entities that may be reclassified; (5) document risk acceptance where exceptions exist.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_entity_classification.csv WHERE entity_tier=\"important\"\n| rename asset_hostname as host\n| join host [\n    search index=qualys sourcetype=\"qualys:hostdetection\" earliest=-30d\n    | eval age_days=round((now()-strptime(first_found_datetime,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,0)\n    | where severity IN (\"Critical\",\"High\") AND age_days > 30\n    | stats dc(qid) as aged_findings by host\n]\n| where aged_findings > 0\n| table host, business_service, aged_findings\n| sort - aged_findings\n```\n\nUnderstanding this SPL\n\n**NIS2 Risk Analysis Evidence for Important Entities (Art. 21(2))** — Important entities must implement appropriate and proportionate measures. This use case applies lighter-weight but documented risk indicators — patch latency, authentication anomalies, and backup health — tailored to important-entity thresholds so compliance teams can demonstrate proportionality without over-burdening smaller operators.\n\nDocumented **Data sources**: `nis2_entity_classification.csv` (entity_tier=important), `qualys:hostdetection`, authentication CIM. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Qualys (Splunkbase 2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where aged_findings > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Risk Analysis Evidence for Important Entities (Art. 21(2))**): table host, business_service, aged_findings\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Risk Analysis Evidence for Important Entities (Art. 21(2))** — Important entities must implement appropriate and proportionate measures. 
This use case applies lighter-weight but documented risk indicators — patch latency, authentication anomalies, and backup health — tailored to important-entity thresholds so compliance teams can demonstrate proportionality without over-burdening smaller operators.\n\nDocumented **Data sources**: `nis2_entity_classification.csv` (entity_tier=important), `qualys:hostdetection`, authentication CIM. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Qualys (Splunkbase 2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (aged findings by host), Table (top offenders), Single value (hosts past SLA).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "For the slightly-smaller-but-still-regulated group, we watch whether serious known software problems are left sitting too long, so we can say our follow-up matches the lighter rules that apply to them.",
              "mtype": [
                "Risk",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest | sort - count",
              "e": [
                "qualys"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIS2 Art.21(2) is enforced — Splunk UC-22.2.22: NIS2 Risk Analysis Evidence for Important Entities.",
                  "ea": "Saved search 'UC-22.2.22' running on nis2_entity_classification.csv (entity_tier=important), qualys:hostdetection, authentication CIM, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.23",
              "n": "NIS2 Incident Handling Procedure Adherence and Playbook Execution (Art. 21(2)(b))",
              "c": "critical",
              "f": "intermediate",
              "v": "Policies on handling incidents must be effective in practice. This use case tracks whether incident tickets traverse required playbook stages (triage, containment, eradication, recovery, lessons learned) within SLA — correlating SOC workflow fields with NIS2 incident classes.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`` `notable` `` macro, `index=itsm` linked incidents, `nis2_playbook_stage_map.csv`",
              "q": "`notable` urgency IN (\"high\",\"critical\") earliest=-30d\n| eval incident_id=rule_name.\"|\"._time\n| join incident_id type=left [\n    search index=itsm sourcetype=\"snow:incident\" earliest=-30d\n    | eval incident_id=correlation_id\n    | eval stages_complete=mvcount(mvappend(\n        if(match(work_notes,\"(?i)contain\"),\"contain\",\"\"),\n        if(match(work_notes,\"(?i)erad\"),\"erad\",\"\"),\n        if(match(work_notes,\"(?i)recover\"),\"recover\",\"\"),\n        if(match(work_notes,\"(?i)lessons\"),\"lessons\",\"\")))\n    | stats latest(stages_complete) as stages_complete by incident_id\n]\n| eval playbook_gap=if(stages_complete < 4, 1, 0)\n| where playbook_gap=1 AND status!=\"Closed\"\n| table _time, rule_name, status, owner, stages_complete",
              "m": "(1) Standardise work-note keywords for each playbook stage; (2) link ES notables to ServiceNow `correlation_id`; (3) alert IR leads on missing stages before closure; (4) include lessons-learned completion in post-incident review; (5) export metrics for supervisory evidence packs.",
              "z": "Funnel chart (stage completion), Table (incomplete incidents), Single value (% full playbook adherence).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `` `notable` `` macro, `index=itsm` linked incidents, `nis2_playbook_stage_map.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardise work-note keywords for each playbook stage; (2) link ES notables to ServiceNow `correlation_id`; (3) alert IR leads on missing stages before closure; (4) include lessons-learned completion in post-incident review; (5) export metrics for supervisory evidence packs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` urgency IN (\"high\",\"critical\") earliest=-30d\n| eval incident_id=rule_name.\"|\"._time\n| join incident_id type=left [\n    search index=itsm sourcetype=\"snow:incident\" earliest=-30d\n    | eval incident_id=correlation_id\n    | eval stages_complete=mvcount(mvappend(\n        if(match(work_notes,\"(?i)contain\"),\"contain\",\"\"),\n        if(match(work_notes,\"(?i)erad\"),\"erad\",\"\"),\n        if(match(work_notes,\"(?i)recover\"),\"recover\",\"\"),\n        if(match(work_notes,\"(?i)lessons\"),\"lessons\",\"\")))\n    | stats latest(stages_complete) as stages_complete by incident_id\n]\n| eval playbook_gap=if(stages_complete < 4, 1, 0)\n| where playbook_gap=1 AND status!=\"Closed\"\n| table _time, rule_name, status, owner, stages_complete\n```\n\nUnderstanding this SPL\n\n**NIS2 Incident Handling Procedure Adherence and Playbook Execution (Art. 21(2)(b))** — Policies on handling incidents must be effective in practice. This use case tracks whether incident tickets traverse required playbook stages (triage, containment, eradication, recovery, lessons learned) within SLA — correlating SOC workflow fields with NIS2 incident classes.\n\nDocumented **Data sources**: `` `notable` `` macro, `index=itsm` linked incidents, `nis2_playbook_stage_map.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **incident_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **playbook_gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where playbook_gap=1 AND status!=\"Closed\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Incident Handling Procedure Adherence and Playbook Execution (Art. 21(2)(b))**): table _time, rule_name, status, owner, stages_complete\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Funnel chart (stage completion), Table (incomplete incidents), Single value (% full playbook adherence).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We don't just have a written playbook for handling cyber incidents — we check that the right one actually ran for every serious incident. If a step was skipped or the wrong recipe was used, this watch tells us in time to fix it before the auditor asks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(b)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Proves that for each significant incident, the mandated SOAR playbook for the relevant urgency tier ran to completion (or to a documented decision point), and that no playbook steps were silently skipped.",
                  "ea": "Saved search `UC-22.2.23` row-per-finding output, `index=audit_evidence sourcetype=evidence:saved_search` archive, dashboard panel `NIS2 Art.21(2)(b) — Playbook adherence`. Per-incident playbook execution traces archived to `index=audit_evidence sourcetype=evidence:soar_playbook_run`.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.24",
              "n": "NIS2 Business Continuity and ICT Continuity Evidence (Art. 21(2)(c))",
              "c": "critical",
              "f": "intermediate",
              "v": "Business continuity, crisis management, and ICT continuity are explicit Article 21 themes. This use case monitors declared RTO/RPO adherence during real incidents and exercises, combining ITSI service degradation windows with BCP test completion records.",
              "t": "Splunk ITSI (Splunkbase 1841), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsi_summary`, `nis2_bcp_test_register.csv`, `index=itsm` (major incidents)",
              "q": "| inputlookup nis2_bcp_test_register.csv\n| eval test_date_epoch=strptime(last_tabletop_date,\"%Y-%m-%d\")\n| eval months_since=round((now()-test_date_epoch)/2592000,1)\n| where months_since > 12\n| join service_name [\n    search index=itsi_summary earliest=-90d\n    | where severity IN (\"Critical\",\"High\")\n    | stats sum(alert_duration) as downtime_seconds by service_name\n]\n| eval rto_seconds=rto_minutes*60\n| eval rto_breach=if(downtime_seconds > rto_seconds, 1, 0)\n| table service_name, months_since, downtime_seconds, rto_seconds, rto_breach",
              "m": "(1) Record BCP/ICT continuity test dates per critical service; (2) map services to ITSI entities; (3) alert when annual tabletop is overdue; (4) measure downtime during sev1 incidents against RTO; (5) integrate with UC-22.2.17 for backup evidence.",
              "z": "Table (overdue tests and RTO breaches), Timeline (tests and incidents), Single value (services breaching RTO).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (Splunkbase 1841), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsi_summary`, `nis2_bcp_test_register.csv`, `index=itsm` (major incidents).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Record BCP/ICT continuity test dates per critical service; (2) map services to ITSI entities; (3) alert when annual tabletop is overdue; (4) measure downtime during sev1 incidents against RTO; (5) integrate with UC-22.2.17 for backup evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_bcp_test_register.csv\n| eval test_date_epoch=strptime(last_tabletop_date,\"%Y-%m-%d\")\n| eval months_since=round((now()-test_date_epoch)/2592000,1)\n| where months_since > 12\n| join service_name [\n    search index=itsi_summary earliest=-90d\n    | where severity IN (\"Critical\",\"High\")\n    | stats sum(alert_duration) as downtime_seconds by service_name\n]\n| eval rto_seconds=rto_minutes*60\n| eval rto_breach=if(downtime_seconds > rto_seconds, 1, 0)\n| table service_name, months_since, downtime_seconds, rto_seconds, rto_breach\n```\n\nUnderstanding this SPL\n\n**NIS2 Business Continuity and ICT Continuity Evidence (Art. 21(2)(c))** — Business continuity, crisis management, and ICT continuity are explicit Article 21 themes. This use case monitors declared RTO/RPO adherence during real incidents and exercises, combining ITSI service degradation windows with BCP test completion records.\n\nDocumented **Data sources**: `index=itsi_summary`, `nis2_bcp_test_register.csv`, `index=itsm` (major incidents). **App/TA** (typical add-on context): Splunk ITSI (Splunkbase 1841), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **test_date_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **months_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where months_since > 12` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **rto_seconds** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **rto_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NIS2 Business Continuity and ICT Continuity Evidence (Art. 21(2)(c))**): table service_name, months_since, downtime_seconds, rto_seconds, rto_breach\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue tests and RTO breaches), Timeline (tests and incidents), Single value (services breaching RTO).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We track two practical things: when we last practiced how to keep important systems running in a crisis, and whether real disruptions stayed inside the time we promised. That way, when oversight asks, we have plain dates and numbers—not frantic emails.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(c)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(c) (Business continuity and crisis management) is enforced — Splunk UC-22.2.24: NIS2 Business Continuity and ICT Continuity Evidence.",
                  "ea": "Saved search 'UC-22.2.24' running on index=itsi_summary, nis2_bcp_test_register.csv, index=itsm (major incidents), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.25",
              "n": "NIS2 Supply Chain Security Assessment Coverage (Art. 21(2)(d))",
              "c": "high",
              "f": "intermediate",
              "v": "Beyond detecting abnormal supplier sessions (UC-22.2.2), entities must assess and document supply chain risk. This use case tracks completion of security questionnaires, on-site assessments, and software bill of materials (SBOM) reviews for suppliers touching NIS2-scoped services.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Add-on for GitHub (Splunkbase 5596)",
              "d": "`nis2_supplier_assessment_register.csv`, `index=github` (Dependabot alerts), `index=itsm`",
              "q": "| inputlookup nis2_supplier_assessment_register.csv\n| eval assessment_due=strptime(next_assessment_date,\"%Y-%m-%d\")\n| eval overdue=if(now()>assessment_due,1,0)\n| join supplier_id [\n    search index=github sourcetype=\"dependabot_alert\" earliest=-30d\n    | stats dc(alert_number) as open_supply_chain_alerts by repo\n    | rename repo as supplier_code_repo\n]\n| where overdue=1 OR open_supply_chain_alerts > 10\n| table supplier_name, tier, next_assessment_date, overdue, open_supply_chain_alerts",
              "m": "(1) Classify suppliers by tier in the register; (2) sync assessment due dates from GRC or ServiceNow; (3) correlate critical repos with supplier-maintained code; (4) escalate overdue tier-1 assessments to procurement and CISO; (5) document compensating controls when assessments slip.",
              "z": "Table (overdue assessments), Bar chart (alerts by supplier), Single value (tier-1 gaps).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5596)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Add-on for GitHub (Splunkbase 5596).\n• Ensure the following data sources are available: `nis2_supplier_assessment_register.csv`, `index=github` (Dependabot alerts), `index=itsm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Classify suppliers by tier in the register; (2) sync assessment due dates from GRC or ServiceNow; (3) correlate critical repos with supplier-maintained code; (4) escalate overdue tier-1 assessments to procurement and CISO; (5) document compensating controls when assessments slip.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_supplier_assessment_register.csv\n| eval assessment_due=strptime(next_assessment_date,\"%Y-%m-%d\")\n| eval overdue=if(now()>assessment_due,1,0)\n| join supplier_id [\n    search index=github sourcetype=\"dependabot_alert\" earliest=-30d\n    | stats dc(alert_number) as open_supply_chain_alerts by repo\n    | rename repo as supplier_code_repo\n]\n| where overdue=1 OR open_supply_chain_alerts > 10\n| table supplier_name, tier, next_assessment_date, overdue, open_supply_chain_alerts\n```\n\nUnderstanding this SPL\n\n**NIS2 Supply Chain Security Assessment Coverage (Art. 21(2)(d))** — Beyond detecting abnormal supplier sessions (UC-22.2.2), entities must assess and document supply chain risk. This use case tracks completion of security questionnaires, on-site assessments, and software bill of materials (SBOM) reviews for suppliers touching NIS2-scoped services.\n\nDocumented **Data sources**: `nis2_supplier_assessment_register.csv`, `index=github` (Dependabot alerts), `index=itsm`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Add-on for GitHub (Splunkbase 5596). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **assessment_due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where overdue=1 OR open_supply_chain_alerts > 10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Supply Chain Security Assessment Coverage (Art. 21(2)(d))**): table supplier_name, tier, next_assessment_date, overdue, open_supply_chain_alerts\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue assessments), Bar chart (alerts by supplier), Single value (tier-1 gaps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We line up the paperwork deadlines for checking outside suppliers with the real open problems reported in the code libraries those suppliers help maintain, so nobody thinks the homework is done when the toolbox is still messy.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(d)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(d) (Supply-chain security) is enforced — Splunk UC-22.2.25: NIS2 Supply Chain Security Assessment Coverage.",
                  "ea": "Saved search 'UC-22.2.25' running on nis2_supplier_assessment_register.csv, index=github (Dependabot alerts), index=itsm, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.26",
              "n": "NIS2 Network Security Monitoring Coverage by Segment (Art. 21(2)(a))",
              "c": "critical",
              "f": "intermediate",
              "v": "Network security policies must be enforced uniformly. This use case measures log source coverage and IDS inspection rates per network segment defined in the security architecture — identifying blind spots where NIS2-scoped traffic is not visible to Splunk.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809)",
              "d": "`nis2_segment_coverage_expectations.csv`, `metadata:indexes`, firewall sourcetype counts, CIM Network_Traffic",
              "q": "| inputlookup nis2_segment_coverage_expectations.csv\n| join segment_id [\n    | tstats `summariesonly` count where index=netfw earliest=-24h by index\n    | eval segment_id=case(\n        match(index,\"(?i)corp\"),\"CORP\",\n        match(index,\"(?i)dmz\"),\"DMZ\",\n        1=1,\"OTHER\")\n    | stats sum(count) as events by segment_id\n]\n| eval expected_eps=expected_events_per_day/86400*3600\n| eval coverage_gap=if(events < expected_eps*0.5, 1, 0)\n| where coverage_gap=1\n| table segment_id, events, expected_eps, nis2_scope",
              "m": "(1) Define minimum event rates per segment from baselines; (2) tag segments as in-scope for NIS2; (3) alert when collectors fail or indexes stop receiving; (4) integrate with ES data source monitoring; (5) quarterly architecture review of blind spots.",
              "z": "Table (segments with gaps), Single value (blind segments), Heatmap (EPS by hour), Map (site coverage).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809).\n• Ensure the following data sources are available: `nis2_segment_coverage_expectations.csv`, `metadata:indexes`, firewall sourcetype counts, CIM Network_Traffic.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define minimum event rates per segment from baselines; (2) tag segments as in-scope for NIS2; (3) alert when collectors fail or indexes stop receiving; (4) integrate with ES data source monitoring; (5) quarterly architecture review of blind spots.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_segment_coverage_expectations.csv\n| join segment_id [\n    | tstats `summariesonly` count where index=netfw earliest=-24h by index\n    | eval segment_id=case(\n        match(index,\"(?i)corp\"),\"CORP\",\n        match(index,\"(?i)dmz\"),\"DMZ\",\n        1=1,\"OTHER\")\n    | stats sum(count) as events by segment_id\n]\n| eval expected_eps=expected_events_per_day/86400*3600\n| eval coverage_gap=if(events < expected_eps*0.5, 1, 0)\n| where coverage_gap=1\n| table segment_id, events, expected_eps, nis2_scope\n```\n\nUnderstanding this SPL\n\n**NIS2 Network Security Monitoring Coverage by Segment (Art. 21(2)(a))** — Network security policies must be enforced uniformly. This use case measures log source coverage and IDS inspection rates per network segment defined in the security architecture — identifying blind spots where NIS2-scoped traffic is not visible to Splunk.\n\nDocumented **Data sources**: `nis2_segment_coverage_expectations.csv`, `metadata:indexes`, firewall sourcetype counts, CIM Network_Traffic. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **expected_eps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **coverage_gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where coverage_gap=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Network Security Monitoring Coverage by Segment (Art. 21(2)(a))**): table segment_id, events, expected_eps, nis2_scope\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Network Security Monitoring Coverage by Segment (Art. 21(2)(a))** — Network security policies must be enforced uniformly. 
This use case measures log source coverage and IDS inspection rates per network segment defined in the security architecture — identifying blind spots where NIS2-scoped traffic is not visible to Splunk.\n\nDocumented **Data sources**: `nis2_segment_coverage_expectations.csv`, `metadata:indexes`, firewall sourcetype counts, CIM Network_Traffic. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (segments with gaps), Single value (blind segments), Heatmap (EPS by hour), Map (site coverage).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We compare how much network traffic record-keeping we expect from each part of the building’s wiring to what actually arrived yesterday. If a whole zone goes strangely quiet, we treat that as a missing security camera—not as success—so nobody can hide a blind spot in the paperwork.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(a)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(a) (Risk analysis and information-system security policies) is enforced — Splunk UC-22.2.26: NIS2 Network Security Monitoring Coverage by Segment.",
                  "ea": "Saved search 'UC-22.2.26' running on nis2_segment_coverage_expectations.csv, metadata:indexes, firewall sourcetype counts, CIM Network_Traffic, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.27",
              "n": "NIS2 Vulnerability Disclosure Policy Operational Signals (Art. 21(2)(e))",
              "c": "high",
              "f": "beginner",
              "v": "Coordinated vulnerability disclosure and cooperation with researchers support systemic resilience. This use case tracks intake of `security@` reports, HackerOne/Bugcrowd submissions, and PGP-encrypted mailbox events — measuring time-to-triage against internal policy.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC)",
              "d": "`index=itsm` (vuln disclosure tickets), `index=email_gateway` (parsed security@), HEC from bug bounty APIs",
              "q": "index=itsm sourcetype=\"snow:incident\" category=\"*Vulnerability Disclosure*\" earliest=-90d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval triaged=strptime(work_start,\"%Y-%m-%d %H:%M:%S\")\n| eval triage_hours=round((triaged-opened)/3600,2)\n| eval sla_breach=if(triage_hours>48 OR isnull(triaged),1,0)\n| stats count as tickets sum(sla_breach) as breaches avg(triage_hours) as avg_triage\n| eval breach_pct=round(100*breaches/tickets,1)\n| table tickets, breaches, breach_pct, avg_triage",
              "m": "(1) Publish disclosure policy URL in security.txt and monitor referrers optionally; (2) auto-create ServiceNow incidents from email and bounty webhooks; (3) alert on SLA breaches; (4) link findings to engineering Jira; (5) annual report on researcher cooperation metrics.",
              "z": "Table (breached tickets), Line chart (triage time trend), Single value (breach %), Bar chart (by product team).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Ticket Management](https://docs.splunk.com/Documentation/CIM/latest/User/Ticket_Management)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC).\n• Ensure the following data sources are available: `index=itsm` (vuln disclosure tickets), `index=email_gateway` (parsed security@), HEC from bug bounty APIs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Publish disclosure policy URL in security.txt and monitor referrers optionally; (2) auto-create ServiceNow incidents from email and bounty webhooks; (3) alert on SLA breaches; (4) link findings to engineering Jira; (5) annual report on researcher cooperation metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" category=\"*Vulnerability Disclosure*\" earliest=-90d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval triaged=strptime(work_start,\"%Y-%m-%d %H:%M:%S\")\n| eval triage_hours=round((triaged-opened)/3600,2)\n| eval sla_breach=if(triage_hours>48 OR isnull(triaged),1,0)\n| stats count as tickets sum(sla_breach) as breaches avg(triage_hours) as avg_triage\n| eval breach_pct=round(100*breaches/tickets,1)\n| table tickets, breaches, breach_pct, avg_triage\n```\n\nUnderstanding this SPL\n\n**NIS2 Vulnerability Disclosure Policy Operational Signals (Art. 21(2)(e))** — Coordinated vulnerability disclosure and cooperation with researchers support systemic resilience. This use case tracks intake of `security@` reports, HackerOne/Bugcrowd submissions, and PGP-encrypted mailbox events — measuring time-to-triage against internal policy.\n\nDocumented **Data sources**: `index=itsm` (vuln disclosure tickets), `index=email_gateway` (parsed security@), HEC from bug bounty APIs. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **triaged** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **triage_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **breach_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NIS2 Vulnerability Disclosure Policy Operational Signals (Art. 
21(2)(e))**): table tickets, breaches, breach_pct, avg_triage\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Vulnerability Disclosure Policy Operational Signals (Art. 21(2)(e))** — Coordinated vulnerability disclosure and cooperation with researchers support systemic resilience. This use case tracks intake of `security@` reports, HackerOne/Bugcrowd submissions, and PGP-encrypted mailbox events — measuring time-to-triage against internal policy.\n\nDocumented **Data sources**: `index=itsm` (vuln disclosure tickets), `index=email_gateway` (parsed security@), HEC from bug bounty APIs. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Ticket_Management.All_Ticket_Management` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (breached tickets), Line chart (triage time trend), Single value (breach %), Bar chart (by product team).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We measure how quickly real people answer when someone emails a security weakness, so you can show the world you treat those messages seriously instead of letting them gather dust.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "Ticket_Management"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(e) (Security in acquisition, development and maintenance) is enforced — Splunk UC-22.2.27: NIS2 Vulnerability Disclosure Policy Operational Signals.",
                  "ea": "Saved search 'UC-22.2.27' running on index=itsm (vuln disclosure tickets), index=email_gateway (parsed security@), HEC from bug bounty APIs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.28",
              "n": "NIS2 Cyber Hygiene Practices — Baseline Control Compliance (Art. 21(2)(g))",
              "c": "high",
              "f": "intermediate",
              "v": "Basic cyber hygiene underpins all Article 21 measures. This use case scores endpoints and servers for MFA enrollment, disk encryption, EDR presence, and secure configuration baselines — rolling up to a hygiene scorecard per business service in NIS2 scope.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Okta Identity Cloud (Splunkbase 6553), Tenable Add-On for Splunk (Splunkbase 4060)",
              "d": "`WinEventLog:Security` (4624 with MFA claims where available), `OktaIM2:log`, `tenable:was` or agent posture, `nis2_asset_hygiene_expectations.csv`",
              "q": "| inputlookup nis2_asset_hygiene_expectations.csv\n| join asset_fqdn [\n    search index=tenable sourcetype=\"tenable:sc:assets\" earliest=-1d\n    | eval mfa_ok=if(match(plugins_text,\"(?i)mfa|okta.verify\"),1,0)\n    | eval edr_ok=if(match(plugins_text,\"(?i)edr|defender|crowdstrike\"),1,0)\n    | stats latest(mfa_ok) as mfa_ok latest(edr_ok) as edr_ok by dns_name\n    | rename dns_name as asset_fqdn\n]\n| eval hygiene_failures=if(mfa_ok==0,1,0) + if(edr_ok==0,1,0)\n| where hygiene_failures > 0\n| table asset_fqdn, business_service, mfa_ok, edr_ok, hygiene_failures",
              "m": "(1) Define mandatory hygiene controls per asset class; (2) normalise hostnames between CMDB and Tenable; (3) weekly remediation campaigns for failing assets; (4) escalate chronic failures to service owners; (5) include scorecard in board security briefing.",
              "z": "Bar chart (failures by control), Table (worst assets), Single value (% compliant fleet), Treemap (by department).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Okta Identity Cloud](https://splunkbase.splunk.com/app/6553), [Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Okta Identity Cloud (Splunkbase 6553), Tenable Add-On for Splunk (Splunkbase 4060).\n• Ensure the following data sources are available: `WinEventLog:Security` (4624 with MFA claims where available), `OktaIM2:log`, `tenable:was` or agent posture, `nis2_asset_hygiene_expectations.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define mandatory hygiene controls per asset class; (2) normalise hostnames between CMDB and Tenable; (3) weekly remediation campaigns for failing assets; (4) escalate chronic failures to service owners; (5) include scorecard in board security briefing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_asset_hygiene_expectations.csv\n| join asset_fqdn [\n    search index=tenable sourcetype=\"tenable:sc:assets\" earliest=-1d\n    | eval mfa_ok=if(match(plugins_text,\"(?i)mfa|okta.verify\"),1,0)\n    | eval edr_ok=if(match(plugins_text,\"(?i)edr|defender|crowdstrike\"),1,0)\n    | stats latest(mfa_ok) as mfa_ok latest(edr_ok) as edr_ok by dns_name\n    | rename dns_name as asset_fqdn\n]\n| eval hygiene_failures=if(mfa_ok==0,1,0) + if(edr_ok==0,1,0)\n| where hygiene_failures > 0\n| table asset_fqdn, business_service, mfa_ok, edr_ok, hygiene_failures\n```\n\nUnderstanding this SPL\n\n**NIS2 Cyber Hygiene Practices — Baseline Control Compliance (Art. 21(2)(g))** — Basic cyber hygiene underpins all Article 21 measures. This use case scores endpoints and servers for MFA enrollment, disk encryption, EDR presence, and secure configuration baselines — rolling up to a hygiene scorecard per business service in NIS2 scope.\n\nDocumented **Data sources**: `WinEventLog:Security` (4624 with MFA claims where available), `OktaIM2:log`, `tenable:was` or agent posture, `nis2_asset_hygiene_expectations.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Okta Identity Cloud (Splunkbase 6553), Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **hygiene_failures** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hygiene_failures > 0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Cyber Hygiene Practices — Baseline Control Compliance (Art. 21(2)(g))**): table asset_fqdn, business_service, mfa_ok, edr_ok, hygiene_failures\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Cyber Hygiene Practices — Baseline Control Compliance (Art. 
21(2)(g))** — Basic cyber hygiene underpins all Article 21 measures. This use case scores endpoints and servers for MFA enrollment, disk encryption, EDR presence, and secure configuration baselines — rolling up to a hygiene scorecard per business service in NIS2 scope.\n\nDocumented **Data sources**: `WinEventLog:Security` (4624 with MFA claims where available), `OktaIM2:log`, `tenable:was` or agent posture, `nis2_asset_hygiene_expectations.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Okta Identity Cloud (Splunkbase 6553), Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (failures by control), Table (worst assets), Single value (% compliant fleet), Treemap (by department).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep a checklist of the basic safety gadgets each important computer should have—like a second login step and protective software—and compare it to what our scanning tools actually see. When something is missing, we list it clearly for the people who fix machines.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "okta",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(g)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(g) (Cyber-hygiene and training) is enforced — Splunk UC-22.2.28: NIS2 Cyber Hygiene Practices — Baseline Control Compliance.",
                  "ea": "Saved search 'UC-22.2.28' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.29",
              "n": "NIS2 Cryptography Policy Compliance — TLS and Certificate Posture (Art. 21(2)(h))",
              "c": "high",
              "f": "intermediate",
              "v": "Appropriate use of cryptography protects network and information system security. This use case monitors TLS versions, weak cipher suites, and certificate expiry for public and internal services supporting NIS2-scoped workloads — aligning with organisational cryptography standards.",
              "t": "Splunk Add-on for Stream (Splunkbase 1809), Splunk Enterprise Security (Splunkbase 263)",
              "d": "TLS metadata from Stream, certificate inventory, `nis2_crypto_policy.csv` (min_tls_version, allowed_ciphers)",
              "q": "index=stream sourcetype=\"stream:tls\" earliest=-24h\n| stats latest(tls_version) as tls_version latest(cipher) as cipher values(expiry_epoch) as cert_expiry by dest_ip dest_port\n| lookup nis2_scoped_service_map dest_ip OUTPUT service_name nis2_in_scope\n| where nis2_in_scope=\"true\"\n| eval weak_tls=if(match(tls_version,\"SSLv|TLSv1\\.0|TLSv1\\.1\"),1,0)\n| eval cert_problem=if(cert_expiry < relative_time(now(),\"+30d@d\"),1,0)\n| where weak_tls=1 OR cert_problem=1\n| table service_name, dest_ip, dest_port, tls_version, cipher, cert_problem",
              "m": "(1) Enable passive TLS inspection on critical VLAN mirrors; (2) map IPs to NIS2 services; (3) alert on weak protocols and imminent expiry; (4) integrate with PKI renewal workflow; (5) document exceptions with compensating controls.",
              "z": "Table (weak TLS services), Timeline (certificate expiry), Single value (critical expiries), Bar chart (by cipher family).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Certificates](https://docs.splunk.com/Documentation/CIM/latest/User/Certificates)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Stream (Splunkbase 1809), Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: TLS metadata from Stream, certificate inventory, `nis2_crypto_policy.csv` (min_tls_version, allowed_ciphers).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable passive TLS inspection on critical VLAN mirrors; (2) map IPs to NIS2 services; (3) alert on weak protocols and imminent expiry; (4) integrate with PKI renewal workflow; (5) document exceptions with compensating controls.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=stream sourcetype=\"stream:tls\" earliest=-24h\n| stats latest(tls_version) as tls_version latest(cipher) as cipher values(expiry_epoch) as cert_expiry by dest_ip dest_port\n| lookup nis2_scoped_service_map dest_ip OUTPUT service_name nis2_in_scope\n| where nis2_in_scope=\"true\"\n| eval weak_tls=if(match(tls_version,\"SSLv|TLSv1\\.0|TLSv1\\.1\"),1,0)\n| eval cert_problem=if(cert_expiry < relative_time(now(),\"+30d@d\"),1,0)\n| where weak_tls=1 OR cert_problem=1\n| table service_name, dest_ip, dest_port, tls_version, cipher, cert_problem\n```\n\nUnderstanding this SPL\n\n**NIS2 Cryptography Policy Compliance — TLS and Certificate Posture (Art. 21(2)(h))** — Appropriate use of cryptography protects network and information system security. This use case monitors TLS versions, weak cipher suites, and certificate expiry for public and internal services supporting NIS2-scoped workloads — aligning with organisational cryptography standards.\n\nDocumented **Data sources**: TLS metadata from Stream, certificate inventory, `nis2_crypto_policy.csv` (min_tls_version, allowed_ciphers). **App/TA** (typical add-on context): Splunk Add-on for Stream (Splunkbase 1809), Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: stream; **sourcetype**: stream:tls. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=stream, sourcetype=\"stream:tls\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest_ip dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where nis2_in_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **weak_tls** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cert_problem** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where weak_tls=1 OR cert_problem=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Cryptography Policy Compliance — TLS and Certificate Posture (Art. 21(2)(h))**): table service_name, dest_ip, dest_port, tls_version, cipher, cert_problem\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Cryptography Policy Compliance — TLS and Certificate Posture (Art. 21(2)(h))** — Appropriate use of cryptography protects network and information system security. This use case monitors TLS versions, weak cipher suites, and certificate expiry for public and internal services supporting NIS2-scoped workloads — aligning with organisational cryptography standards.\n\nDocumented **Data sources**: TLS metadata from Stream, certificate inventory, `nis2_crypto_policy.csv` (min_tls_version, allowed_ciphers). **App/TA** (typical add-on context): Splunk Add-on for Stream (Splunkbase 1809), Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Certificates.All_Certificates` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (weak TLS services), Timeline (certificate expiry), Single value (critical expiries), Bar chart (by cipher family).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We listen quietly to how computers greet each other on the network and check whether they use strong secret handshakes. If a handshake is too old or a security badge is about to run out of date, we flag only the systems the law cares about most.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "Certificates"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(h)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(h) (Cryptography and encryption) is enforced — Splunk UC-22.2.29: NIS2 Cryptography Policy Compliance — TLS and Certificate Posture.",
                  "ea": "Saved search 'UC-22.2.29' running on TLS metadata from Stream, certificate inventory, nis2_crypto_policy.csv (min_tls_version, allowed_ciphers), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.30",
              "n": "NIS2 Human Resources Security Measures Evidence (Art. 21(2)(i))",
              "c": "high",
              "f": "beginner",
              "v": "Policies on human resources security — screening, training, and termination procedures — are part of holistic risk management. This use case ingests LMS completion for privileged-role staff, background-check status flags (non-PII aggregates), and leaver access revocation timing for NIS2-scoped roles.",
              "t": "Splunk Add-on for Okta Identity Cloud (Splunkbase 6553), HTTP Event Collector (HEC — LMS exports)",
              "d": "`index=training` (role, completion_date), `OktaIM2:log` (deprovision events), `nis2_privileged_role_roster.csv`",
              "q": "| inputlookup nis2_privileged_role_roster.csv\n| join user_id [\n    search index=training earliest=-365d\n    | stats latest(completion_date) as last_training by user_id\n]\n| eval training_age=round((now()-strptime(last_training,\"%Y-%m-%d\"))/86400,0)\n| where training_age > 365 OR isnull(last_training)\n| table role_name, user_id, last_training, training_age",
              "m": "(1) Hash user identifiers in training feeds where possible; (2) require annual secure-development and IR training for NIS2 privileged roles; (3) alert managers on overdue training; (4) correlate HR termination dates with Okta deprovision latency (aggregate only); (5) export compliance percentages for audits.",
              "z": "Table (overdue training), Single value (% compliant), Bar chart (by role family), Timeline (completion waves).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Okta Identity Cloud](https://splunkbase.splunk.com/app/6553)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Okta Identity Cloud (Splunkbase 6553), HTTP Event Collector (HEC — LMS exports).\n• Ensure the following data sources are available: `index=training` (role, completion_date), `OktaIM2:log` (deprovision events), `nis2_privileged_role_roster.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Hash user identifiers in training feeds where possible; (2) require annual secure-development and IR training for NIS2 privileged roles; (3) alert managers on overdue training; (4) correlate HR termination dates with Okta deprovision latency (aggregate only); (5) export compliance percentages for audits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_privileged_role_roster.csv\n| join user_id [\n    search index=training earliest=-365d\n    | stats latest(completion_date) as last_training by user_id\n]\n| eval training_age=round((now()-strptime(last_training,\"%Y-%m-%d\"))/86400,0)\n| where training_age > 365 OR isnull(last_training)\n| table role_name, user_id, last_training, training_age\n```\n\nUnderstanding this SPL\n\n**NIS2 Human Resources Security Measures Evidence (Art. 21(2)(i))** — Policies on human resources security — screening, training, and termination procedures — are part of holistic risk management. This use case ingests LMS completion for privileged-role staff, background-check status flags (non-PII aggregates), and leaver access revocation timing for NIS2-scoped roles.\n\nDocumented **Data sources**: `index=training` (role, completion_date), `OktaIM2:log` (deprovision events), `nis2_privileged_role_roster.csv`. **App/TA** (typical add-on context): Splunk Add-on for Okta Identity Cloud (Splunkbase 6553), HTTP Event Collector (HEC — LMS exports). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **training_age** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where training_age > 365 OR isnull(last_training)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Human Resources Security Measures Evidence (Art. 21(2)(i))**): table role_name, user_id, last_training, training_age\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue training), Single value (% compliant), Bar chart (by role family), Timeline (completion waves).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep a fresh list of people with powerful computer access who still owe their yearly safety-and-security lessons, so the right managers can nudge them before it becomes a compliance surprise.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "okta"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(i)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(i) (Human resources and access control) is enforced — Splunk UC-22.2.30: NIS2 Human Resources Security Measures Evidence.",
                  "ea": "Saved search 'UC-22.2.30' running on index=training (role, completion_date), OktaIM2:log (deprovision events), nis2_privileged_role_roster.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.31",
              "n": "NIS2 Entity Classification Validation (Essential vs Important) (Art. 2(1), national transposition)",
              "c": "high",
              "f": "beginner",
              "v": "Incorrect classification drives wrong reporting deadlines and control expectations. This use case reconciles legal entity classification in `nis2_entity_classification.csv` with CMDB ownership, revenue sector tags, and national registry identifiers — flagging assets attached to misclassified entities.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`nis2_entity_classification.csv`, CMDB export (`index=cmdb` or lookup), ServiceNow company records",
              "q": "| inputlookup nis2_entity_classification.csv\n| join legal_entity_id [\n    search index=cmdb earliest=-1d\n    | stats values(nis2_entity_tag) as cmdb_tags by legal_entity_id\n]\n| eval mismatch=if(NOT match(mvjoin(cmdb_tags,\"|\"), entity_tier), 1, 0)\n| where mismatch=1 OR isnull(cmdb_tags)\n| table legal_entity_id, entity_tier, cmdb_tags, mismatch",
              "m": "(1) Establish authoritative classification decisions in legal function; (2) push classification to CMDB nightly; (3) alert enterprise architecture on CMDB vs register mismatches; (4) re-run after M&A events; (5) document appeals and reclassification decisions.",
              "z": "Table (mismatches), Single value (entities needing review), Sankey (entity to asset counts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `nis2_entity_classification.csv`, CMDB export (`index=cmdb` or lookup), ServiceNow company records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Establish authoritative classification decisions in legal function; (2) push classification to CMDB nightly; (3) alert enterprise architecture on CMDB vs register mismatches; (4) re-run after M&A events; (5) document appeals and reclassification decisions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_entity_classification.csv\n| join legal_entity_id [\n    search index=cmdb earliest=-1d\n    | stats values(nis2_entity_tag) as cmdb_tags by legal_entity_id\n]\n| eval mismatch=if(NOT match(mvjoin(cmdb_tags,\"|\"), entity_tier), 1, 0)\n| where mismatch=1 OR isnull(cmdb_tags)\n| table legal_entity_id, entity_tier, cmdb_tags, mismatch\n```\n\nUnderstanding this SPL\n\n**NIS2 Entity Classification Validation (Essential vs Important) (Art. 2(1), national transposition)** — Incorrect classification drives wrong reporting deadlines and control expectations. This use case reconciles legal entity classification in `nis2_entity_classification.csv` with CMDB ownership, revenue sector tags, and national registry identifiers — flagging assets attached to misclassified entities.\n\nDocumented **Data sources**: `nis2_entity_classification.csv`, CMDB export (`index=cmdb` or lookup), ServiceNow company records. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **mismatch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mismatch=1 OR isnull(cmdb_tags)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Entity Classification Validation (Essential vs Important) (Art. 2(1), national transposition)**): table legal_entity_id, entity_tier, cmdb_tags, mismatch\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (mismatches), Single value (entities needing review), Sankey (entity to asset counts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We double-check that the big organisation chart labels—who counts as \"extra supervised\" versus \"regular supervised\"—match what the inventory system thinks, so nobody follows the wrong rulebook by accident.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.2(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIS2 Art.2(1) is enforced — Splunk UC-22.2.31: NIS2 Entity Classification Validation (Essential vs Important).",
                  "ea": "Saved search 'UC-22.2.31' running on nis2_entity_classification.csv, CMDB export (index=cmdb or lookup), ServiceNow company records, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.32",
              "n": "NIS2 Proportional Security Measure Verification by Tier (Art. 21(2))",
              "c": "high",
              "f": "intermediate",
              "v": "Important entities are not required to implement every control at essential-entity intensity. This use case compares deployed detective and preventive controls against a tiered control matrix — surfacing overkill gaps (waste) and under-provision gaps (risk) relative to classification.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`nis2_control_matrix.csv` (control_id, required_for_tier), ES correlation search inventory, `nis2_entity_classification.csv`",
              "q": "| inputlookup nis2_control_matrix.csv\n| join entity_tier [\n    | inputlookup nis2_entity_classification.csv\n    | stats values(entity_tier) as entity_tiers by legal_entity_id\n    | mvexpand entity_tiers\n    | rename entity_tiers as entity_tier\n]\n| where required_for_tier=\"true\"\n| join control_id [\n    | rest /servicesNS/-/SplunkEnterpriseSecurity/saved/searches count=0 splunk_server=local\n    | search title=\"nis2_*\"\n    | eval control_id=replace(title,\"nis2_\",\"\")\n    | stats latest(disabled) as search_disabled by control_id\n]\n| where search_disabled=1 OR isnull(search_disabled)\n| table control_id, entity_tier, search_disabled",
              "m": "(1) Encode which ES searches map to which control IDs; (2) mark controls required per tier; (3) alert when required searches are disabled; (4) review quarterly with risk committee; (5) attach matrix version to audit evidence.",
              "z": "Matrix heatmap (tier vs control), Table (disabled required controls), Single value (coverage %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `nis2_control_matrix.csv` (control_id, required_for_tier), ES correlation search inventory, `nis2_entity_classification.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Encode which ES searches map to which control IDs; (2) mark controls required per tier; (3) alert when required searches are disabled; (4) review quarterly with risk committee; (5) attach matrix version to audit evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_control_matrix.csv\n| join entity_tier [\n    | inputlookup nis2_entity_classification.csv\n    | stats values(entity_tier) as entity_tiers by legal_entity_id\n    | mvexpand entity_tiers\n    | rename entity_tiers as entity_tier\n]\n| where required_for_tier=\"true\"\n| join control_id [\n    | rest /servicesNS/-/SplunkEnterpriseSecurity/saved/searches count=0 splunk_server=local\n    | search title=\"nis2_*\"\n    | eval control_id=replace(title,\"nis2_\",\"\")\n    | stats latest(disabled) as search_disabled by control_id\n]\n| where search_disabled=1 OR isnull(search_disabled)\n| table control_id, entity_tier, search_disabled\n```\n\nUnderstanding this SPL\n\n**NIS2 Proportional Security Measure Verification by Tier (Art. 21(2))** — Important entities are not required to implement every control at essential-entity intensity. This use case compares deployed detective and preventive controls against a tiered control matrix — surfacing overkill gaps (waste) and under-provision gaps (risk) relative to classification.\n\nDocumented **Data sources**: `nis2_control_matrix.csv` (control_id, required_for_tier), ES correlation search inventory, `nis2_entity_classification.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where required_for_tier=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where search_disabled=1 OR isnull(search_disabled)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Proportional Security Measure Verification by Tier (Art. 21(2))**): table control_id, entity_tier, search_disabled\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Matrix heatmap (tier vs control), Table (disabled required controls), Single value (coverage %).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We compare the \"who must have which safety alarm switched on\" spreadsheet to what is actually running inside the security monitoring tool, tier by tier, so nobody accidentally lives with the stricter alarms turned off—or the lighter ones left on waste mode forever.",
              "mtype": [
                "Compliance",
                "Risk"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIS2 Art.21(2) is enforced — Splunk UC-22.2.32: NIS2 Proportional Security Measure Verification by Tier.",
                  "ea": "Saved search 'UC-22.2.32' running on nis2_control_matrix.csv (control_id, required_for_tier), ES correlation search inventory, nis2_entity_classification.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.33",
              "n": "NIS2 Incident Reporting Timeline Compliance (24h Early Warning / 72h Notification) (Art. 23)",
              "c": "critical",
              "f": "intermediate",
              "v": "National transpositions specify early warning and incident notification deadlines. This use case extends UC-22.2.1 by explicitly clocking awareness time, regulatory filing timestamps, and CSIRT handoff fields — separating internal detection latency from legal notification compliance.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`` `notable` `` macro, `index=itsm` (regulatory filing tasks with custom fields)",
              "q": "`notable` urgency IN (\"high\",\"critical\") earliest=-14d\n| eval awareness_time=strptime(status_description,\"(?i)awareness[:\\\\s]+(%Y-%m-%d %H:%M:%S)\")\n| eval early_warning_sent=strptime(status_description,\"(?i)early.warning[:\\\\s]+(%Y-%m-%d %H:%M:%S)\")\n| eval notification_72h=strptime(status_description,\"(?i)72h.notification[:\\\\s]+(%Y-%m-%d %H:%M:%S)\")\n| eval hours_to_early=round((early_warning_sent-awareness_time)/3600,2)\n| eval hours_to_72=round((notification_72h-awareness_time)/3600,2)\n| where hours_to_early > 24 OR hours_to_72 > 72 OR isnull(early_warning_sent)\n| table rule_name, awareness_time, early_warning_sent, notification_72h, hours_to_early, hours_to_72",
              "m": "(1) Train analysts to record structured timestamps in `status_description` or use custom fields via adaptive response; (2) map incident classes to which deadlines apply per national law; (3) alert legal at T-2h before each deadline; (4) retain immutable export of closed incidents; (5) integrate with UC-22.2.19 for cross-border cases.",
              "z": "Timeline (milestones per incident), Table (deadline breaches), Single value (mean hours to early warning).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `` `notable` `` macro, `index=itsm` (regulatory filing tasks with custom fields).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Train analysts to record structured timestamps in `status_description` or use custom fields via adaptive response; (2) map incident classes to which deadlines apply per national law; (3) alert legal at T-2h before each deadline; (4) retain immutable export of closed incidents; (5) integrate with UC-22.2.19 for cross-border cases.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` urgency IN (\"high\",\"critical\") earliest=-14d\n| eval awareness_time=strptime(status_description,\"(?i)awareness[:\\\\s]+(%Y-%m-%d %H:%M:%S)\")\n| eval early_warning_sent=strptime(status_description,\"(?i)early.warning[:\\\\s]+(%Y-%m-%d %H:%M:%S)\")\n| eval notification_72h=strptime(status_description,\"(?i)72h.notification[:\\\\s]+(%Y-%m-%d %H:%M:%S)\")\n| eval hours_to_early=round((early_warning_sent-awareness_time)/3600,2)\n| eval hours_to_72=round((notification_72h-awareness_time)/3600,2)\n| where hours_to_early > 24 OR hours_to_72 > 72 OR isnull(early_warning_sent)\n| table rule_name, awareness_time, early_warning_sent, notification_72h, hours_to_early, hours_to_72\n```\n\nUnderstanding this SPL\n\n**NIS2 Incident Reporting Timeline Compliance (24h Early Warning / 72h Notification) (Art. 23)** — National transpositions specify early warning and incident notification deadlines. This use case extends UC-22.2.1 by explicitly clocking awareness time, regulatory filing timestamps, and CSIRT handoff fields — separating internal detection latency from legal notification compliance.\n\nDocumented **Data sources**: `` `notable` `` macro, `index=itsm` (regulatory filing tasks with custom fields). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **awareness_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **early_warning_sent** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **notification_72h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **hours_to_early** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **hours_to_72** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours_to_early > 24 OR hours_to_72 > 72 OR isnull(early_warning_sent)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Incident Reporting Timeline Compliance (24h Early Warning / 72h Notification) (Art. 
23)**): table rule_name, awareness_time, early_warning_sent, notification_72h, hours_to_early, hours_to_72\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (milestones per incident), Table (deadline breaches), Single value (mean hours to early warning).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "Every quarter, the audit committee asks: 'Did we send the regulator our cyber incident updates on time?' This watch tracks every incident's three deadlines — 24 hours, 72 hours, one month — and produces the percentage of incidents that hit each deadline. It is the report card.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Proves an end-to-end picture of Art.23 reporting health: per significant incident, which deadlines were hit, missed, or extended; and per quarter, the operator's deadline-compliance rate trend.",
                  "ea": "Saved search `UC-22.2.33` summary-indexed into `index=audit_evidence sourcetype=evidence:nis2_art23_rollup`. Quarterly board-pack PDF generated from the dashboard `compliance/nis2-art23-board.xml` and archived to `evidence:board_pack`.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.34",
              "n": "NIS2 Cross-Border Incident Coordination Task Tracking (Art. 23(3))",
              "c": "high",
              "f": "advanced",
              "v": "When incidents affect multiple Member States, parallel coordination tasks multiply. This use case tracks per-country CSIRT notification tickets, language-specific attachments, and follow-up deadlines — complementing UC-22.2.19's technical cross-border detection.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` (child tasks per country), `nis2_cross_border_incident_register.csv`",
              "q": "| inputlookup nis2_cross_border_incident_register.csv\n| join incident_id [\n    search index=itsm sourcetype=\"snow:sc_task\" short_description=\"*CSIRT*\" earliest=-90d\n    | stats dc(short_description) as tasks_created sum(eval(case(state!=\"Closed\",1,0))) as open_tasks by parent_incident\n    | rename parent_incident as incident_id\n]\n| eval expected_tasks=affected_member_states*2\n| where open_tasks > 0 OR tasks_created < expected_tasks\n| table incident_id, affected_member_states, tasks_created, open_tasks, expected_tasks",
              "m": "(1) When cross-border flag is set, auto-spawn tasks per Member State template; (2) track filing confirmations as attachments; (3) alert incident commander on open_tasks; (4) post-incident merge lessons into UC-22.2.23; (5) legal owns register accuracy.",
              "z": "Table (open coordination tasks), Map (Member States), Gantt-style panel (optional), Single value (incidents with open tasks).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Ticket Management](https://docs.splunk.com/Documentation/CIM/latest/User/Ticket_Management)",
              "mitre": [
                "T1090"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` (child tasks per country), `nis2_cross_border_incident_register.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) When cross-border flag is set, auto-spawn tasks per Member State template; (2) track filing confirmations as attachments; (3) alert incident commander on open_tasks; (4) post-incident merge lessons into UC-22.2.23; (5) legal owns register accuracy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_cross_border_incident_register.csv\n| join incident_id [\n    search index=itsm sourcetype=\"snow:sc_task\" short_description=\"*CSIRT*\" earliest=-90d\n    | stats dc(short_description) as tasks_created sum(eval(case(state!=\"Closed\",1,0))) as open_tasks by parent_incident\n    | rename parent_incident as incident_id\n]\n| eval expected_tasks=affected_member_states*2\n| where open_tasks > 0 OR tasks_created < expected_tasks\n| table incident_id, affected_member_states, tasks_created, open_tasks, expected_tasks\n```\n\nUnderstanding this SPL\n\n**NIS2 Cross-Border Incident Coordination Task Tracking (Art. 23(3))** — When incidents affect multiple Member States, parallel coordination tasks multiply. This use case tracks per-country CSIRT notification tickets, language-specific attachments, and follow-up deadlines — complementing UC-22.2.19's technical cross-border detection.\n\nDocumented **Data sources**: `index=itsm` (child tasks per country), `nis2_cross_border_incident_register.csv`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **expected_tasks** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where open_tasks > 0 OR tasks_created < expected_tasks` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Cross-Border Incident Coordination Task Tracking (Art. 23(3))**): table incident_id, affected_member_states, tasks_created, open_tasks, expected_tasks\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Cross-Border Incident Coordination Task Tracking (Art. 23(3))** — When incidents affect multiple Member States, parallel coordination tasks multiply. 
This use case tracks per-country CSIRT notification tickets, language-specific attachments, and follow-up deadlines — complementing UC-22.2.19's technical cross-border detection.\n\nDocumented **Data sources**: `index=itsm` (child tasks per country), `nis2_cross_border_incident_register.csv`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Ticket_Management.All_Ticket_Management` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (open coordination tasks), Map (Member States), Gantt-style panel (optional), Single value (incidents with open tasks).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "When a cyber incident crosses national borders in Europe, we must tell the cybersecurity team in each affected country. This watch tracks a to-do list — one item per country — and makes sure every country's team gets notified and confirms receipt.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "Ticket_Management"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23(3)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Proves that for each cross-border significant incident, a notification task per affected member state was opened, assigned, and closed — with a traceable record of the notification channel used and the acknowledgement received.",
                  "ea": "Saved search `UC-22.2.34` row-per-task output, `index=audit_evidence sourcetype=evidence:saved_search` archive. Dashboard panel `NIS2 Art.23 — Cross-border tasks`.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.35",
              "n": "NIS2 Supervisory Compliance Evidence Pack Readiness (Art. 32-33, national measures)",
              "c": "high",
              "f": "beginner",
              "v": "Supervisory authorities may request evidence of measures taken. This use case checks that scheduled Splunk reports for risk, incidents, training, and testing have run successfully and that outputs landed in the evidence repository — a meta-governance view over compliance automation itself.",
              "t": "Splunk Enterprise / Splunk Cloud Platform (`index=_internal`, `index=_audit`)",
              "d": "`index=_internal` sourcetype=scheduler (skipped, failed), `index=_audit` (savedsearch actions), `nis2_evidence_report_catalog.csv`",
              "q": "| inputlookup nis2_evidence_report_catalog.csv\n| join savedsearch_name [\n    search index=_audit action=search earliest=-7d\n    | stats latest(total_run_time) as last_runtime latest(info) as result by savedsearch_name\n]\n| where isnull(last_runtime) OR match(result,\"(?i)fail|error|skipped\")\n| table savedsearch_name, owner, last_runtime, result",
              "m": "(1) List all compliance-evidence saved searches in catalog; (2) alert owners when searches fail or skip due to disk/quota; (3) mirror PDF outputs to WORM storage via scripted output; (4) monthly attestation that catalog is current; (5) include in supervisory inspection runbooks.",
              "z": "Table (failed evidence jobs), Single value (failed jobs count), Timeline (success rate).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform (`index=_internal`, `index=_audit`).\n• Ensure the following data sources are available: `index=_internal` sourcetype=scheduler (skipped, failed), `index=_audit` (savedsearch actions), `nis2_evidence_report_catalog.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) List all compliance-evidence saved searches in catalog; (2) alert owners when searches fail or skip due to disk/quota; (3) mirror PDF outputs to WORM storage via scripted output; (4) monthly attestation that catalog is current; (5) include in supervisory inspection runbooks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_evidence_report_catalog.csv\n| join savedsearch_name [\n    search index=_audit action=search earliest=-7d\n    | stats latest(total_run_time) as last_runtime latest(info) as result by savedsearch_name\n]\n| where isnull(last_runtime) OR match(result,\"(?i)fail|error|skipped\")\n| table savedsearch_name, owner, last_runtime, result\n```\n\nUnderstanding this SPL\n\n**NIS2 Supervisory Compliance Evidence Pack Readiness (Art. 32-33, national measures)** — Supervisory authorities may request evidence of measures taken. This use case checks that scheduled Splunk reports for risk, incidents, training, and testing have run successfully and that outputs landed in the evidence repository — a meta-governance view over compliance automation itself.\n\nDocumented **Data sources**: `index=_internal` sourcetype=scheduler (skipped, failed), `index=_audit` (savedsearch actions), `nis2_evidence_report_catalog.csv`. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform (`index=_internal`, `index=_audit`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(last_runtime) OR match(result,\"(?i)fail|error|skipped\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Supervisory Compliance Evidence Pack Readiness (Art. 32-33, national measures)**): table savedsearch_name, owner, last_runtime, result\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed evidence jobs), Single value (failed jobs count), Timeline (success rate).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We check whether the scheduled jobs that build your regulator-ready paperwork actually ran on time. If a job failed or never finished, we raise a hand so the right owner can fix it before anyone official asks for the files.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.32",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIS2 Art.32 is enforced — Splunk UC-22.2.35: NIS2 Supervisory Compliance Evidence Pack Readiness.",
                  "ea": "Saved search 'UC-22.2.35' running on sourcetype scheduler and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.33",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIS2 Art.33 is enforced — Splunk UC-22.2.35: NIS2 Supervisory Compliance Evidence Pack Readiness.",
                  "ea": "Saved search 'UC-22.2.35' running on sourcetype scheduler and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.36",
              "n": "NIS2 OT Network Segmentation Validation (Art. 21(2)(a))",
              "c": "critical",
              "f": "advanced",
              "v": "Operational technology environments rely on strict segmentation between IT and OT. This use case detects east-west flows that violate Purdue zone boundaries — for example SMB or RDP from corporate VLANs into PLC subnets documented as prohibited.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809)",
              "d": "Firewall and IDS logs tagged with `zone`, `nis2_ot_segment_policy.csv`",
              "q": "index=netfw earliest=-24h\n| lookup nis2_ot_segment_policy.csv src_zone dest_zone OUTPUT allowed\n| where allowed=\"false\"\n| stats sum(bytes) as bytes count as sessions by src_zone dest_zone dest_port app\n| where sessions > 10\n| sort - bytes\n| table src_zone, dest_zone, dest_port, app, bytes, sessions",
              "m": "(1) Standardise zone naming across OT and IT firewalls; (2) encode allowed zone pairs in policy lookup; (3) alert SOC-OT fusion cell on any denied-pattern traffic that actually occurred (policy violation or mis-tagging); (4) weekly review with plant engineering; (5) document change tickets for approved exceptions.",
              "z": "Sankey (zone to zone), Table (violations), Single value (distinct forbidden paths), Heatmap (sessions by hour).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T0881"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809).\n• Ensure the following data sources are available: Firewall and IDS logs tagged with `zone`, `nis2_ot_segment_policy.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardise zone naming across OT and IT firewalls; (2) encode allowed zone pairs in policy lookup; (3) alert SOC-OT fusion cell on any denied-pattern traffic that actually occurred (policy violation or mis-tagging); (4) weekly review with plant engineering; (5) document change tickets for approved exceptions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw earliest=-24h\n| lookup nis2_ot_segment_policy.csv src_zone dest_zone OUTPUT allowed\n| where allowed=\"false\"\n| stats sum(bytes) as bytes count as sessions by src_zone dest_zone dest_port app\n| where sessions > 10\n| sort - bytes\n| table src_zone, dest_zone, dest_port, app, bytes, sessions\n```\n\nUnderstanding this SPL\n\n**NIS2 OT Network Segmentation Validation (Art. 21(2)(a))** — Operational technology environments rely on strict segmentation between IT and OT. This use case detects east-west flows that violate Purdue zone boundaries — for example SMB or RDP from corporate VLANs into PLC subnets documented as prohibited.\n\nDocumented **Data sources**: Firewall and IDS logs tagged with `zone`, `nis2_ot_segment_policy.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where allowed=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_zone dest_zone dest_port app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where sessions > 10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NIS2 OT Network Segmentation Validation (Art. 21(2)(a))**): table src_zone, dest_zone, dest_port, app, bytes, sessions\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 OT Network Segmentation Validation (Art. 21(2)(a))** — Operational technology environments rely on strict segmentation between IT and OT. 
This use case detects east-west flows that violate Purdue zone boundaries — for example SMB or RDP from corporate VLANs into PLC subnets documented as prohibited.\n\nDocumented **Data sources**: Firewall and IDS logs tagged with `zone`, `nis2_ot_segment_policy.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Stream (Splunkbase 1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey (zone to zone), Table (violations), Single value (distinct forbidden paths), Heatmap (sessions by hour).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "Factories and plants keep fragile equipment on separate network lanes from the regular office Wi‑Fi. We watch the checkpoints between those lanes—the firewalls—and loudly report traffic patterns that behave like someone drove the wrong way through a one-way gate, especially when it keeps happening.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(a)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(a) (Risk analysis and information-system security policies) is enforced — Splunk UC-22.2.36: NIS2 OT Network Segmentation Validation.",
                  "ea": "Saved search 'UC-22.2.36' running on Firewall and IDS logs tagged with zone, nis2_ot_segment_policy.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.37",
              "n": "NIS2 SCADA System Access Monitoring (Art. 21(2)(a))",
              "c": "critical",
              "f": "advanced",
              "v": "Interactive access to SCADA/HMI layers is high risk. This use case monitors engineering workstation logons, HMI tag writes, and remote-desktop sessions to SCADA hosts — correlating with maintenance windows and approved technician identities.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for CyberArk (Splunkbase 2891)",
              "d": "`WinEventLog:Security` EventCode=4624/4648, `cyberark:session`, `nis2_scada_maintenance_windows.csv`",
              "q": "(index=win_sec EventCode=4624 Logon_Type=10 OR Logon_Type=2) OR sourcetype=\"cyberark:session\" earliest=-7d\n| eval host=coalesce(dest_nt_host, ComputerName, target_host)\n| lookup nis2_scada_asset_inventory.csv host OUTPUT scada_role plant_site\n| where scada_role IN (\"HMI\",\"ENG_WORKSTATION\",\"SCADA_SERVER\")\n| join host [\n    | inputlookup nis2_scada_maintenance_windows.csv\n    | eval starttime=strptime(start,\"%Y-%m-%d %H:%M:%S\")\n    | eval endtime=strptime(end,\"%Y-%m-%d %H:%M:%S\")\n]\n| where _time < starttime OR _time > endtime\n| stats values(user) as users count by host plant_site\n| sort - count",
              "m": "(1) Inventory SCADA hosts with roles in `nis2_scada_asset_inventory.csv`; (2) ingest maintenance windows from plant maintenance system; (3) alert on interactive access outside windows; (4) integrate PAM session video references for investigations; (5) align with safety permit-to-work where applicable.",
              "z": "Table (out-of-window access), Timeline (sessions), Single value (distinct technicians), Map (plant sites).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/2891), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T0859",
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for CyberArk (Splunkbase 2891).\n• Ensure the following data sources are available: `WinEventLog:Security` EventCode=4624/4648, `cyberark:session`, `nis2_scada_maintenance_windows.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Inventory SCADA hosts with roles in `nis2_scada_asset_inventory.csv`; (2) ingest maintenance windows from plant maintenance system; (3) alert on interactive access outside windows; (4) integrate PAM session video references for investigations; (5) align with safety permit-to-work where applicable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=win_sec EventCode=4624 Logon_Type=10 OR Logon_Type=2) OR sourcetype=\"cyberark:session\" earliest=-7d\n| eval host=coalesce(dest_nt_host, ComputerName, target_host)\n| lookup nis2_scada_asset_inventory.csv host OUTPUT scada_role plant_site\n| where scada_role IN (\"HMI\",\"ENG_WORKSTATION\",\"SCADA_SERVER\")\n| join host [\n    | inputlookup nis2_scada_maintenance_windows.csv\n    | eval starttime=strptime(start,\"%Y-%m-%d %H:%M:%S\")\n    | eval endtime=strptime(end,\"%Y-%m-%d %H:%M:%S\")\n]\n| where _time < starttime OR _time > endtime\n| stats values(user) as users count by host plant_site\n| sort - count\n```\n\nUnderstanding this SPL\n\n**NIS2 SCADA System Access Monitoring (Art. 21(2)(a))** — Interactive access to SCADA/HMI layers is high risk. This use case monitors engineering workstation logons, HMI tag writes, and remote-desktop sessions to SCADA hosts — correlating with maintenance windows and approved technician identities.\n\nDocumented **Data sources**: `WinEventLog:Security` EventCode=4624/4648, `cyberark:session`, `nis2_scada_maintenance_windows.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for CyberArk (Splunkbase 2891). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: win_sec; **sourcetype**: cyberark:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=win_sec, sourcetype=\"cyberark:session\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **host** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where scada_role IN (\"HMI\",\"ENG_WORKSTATION\",\"SCADA_SERVER\")` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where _time < starttime OR _time > endtime` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host plant_site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 SCADA System Access Monitoring (Art. 21(2)(a))** — Interactive access to SCADA/HMI layers is high risk. This use case monitors engineering workstation logons, HMI tag writes, and remote-desktop sessions to SCADA hosts — correlating with maintenance windows and approved technician identities.\n\nDocumented **Data sources**: `WinEventLog:Security` EventCode=4624/4648, `cyberark:session`, `nis2_scada_maintenance_windows.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for CyberArk (Splunkbase 2891). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (out-of-window access), Timeline (sessions), Single value (distinct technicians), Map (plant sites).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch who connects to the computers that run important plants and factories. When someone shows up who should not be there, or they poke around outside an agreed maintenance window, we highlight it so leaders can ask good questions before harm spreads.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(a)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(a) (Risk analysis and information-system security policies) is enforced — Splunk UC-22.2.37: NIS2 SCADA System Access Monitoring.",
                  "ea": "Saved search 'UC-22.2.37' running on WinEventLog:Security EventCode=4624/4648, cyberark:session, nis2_scada_maintenance_windows.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.38",
              "n": "NIS2 Industrial Control System Patching and Change Evidence (Art. 21(2)(e))",
              "c": "high",
              "f": "intermediate",
              "v": "Patch and change management for ICS must balance availability with security. This use case tracks firmware updates, PLC logic downloads, and antivirus signature updates for OT endpoints — flagging systems past approved patch cycles or changes without linked change records.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC — vendor patch tools)",
              "d": "`index=ot_changes` (HEC), `index=itsm` `sourcetype=\"snow:change_request\"`, `nis2_ot_patch_policy.csv`",
              "q": "index=ot_changes earliest=-90d\n| stats latest(firmware_version) as observed_fw latest(_time) as last_change by asset_id\n| lookup nis2_ot_patch_policy.csv asset_id OUTPUT required_fw_version last_approved_change_epoch\n| eval fw_compliant=if(observed_fw=required_fw_version,1,0)\n| eval change_stale=if(last_change < last_approved_change_epoch,1,0)\n| where fw_compliant=0 OR change_stale=1\n| table asset_id, observed_fw, required_fw_version, last_change, change_stale",
              "m": "(1) Forward vendor-specific OT change events via HEC; (2) require ServiceNow change tickets for logic downloads; (3) alert when observed firmware diverges from approved baseline; (4) coordinate outage windows with operations; (5) retain evidence for sector regulator inspections.",
              "z": "Table (non-compliant assets), Timeline (patch waves), Single value (assets past policy), Bar chart (by vendor).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC — vendor patch tools).\n• Ensure the following data sources are available: `index=ot_changes` (HEC), `index=itsm` `sourcetype=\"snow:change_request\"`, `nis2_ot_patch_policy.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Forward vendor-specific OT change events via HEC; (2) require ServiceNow change tickets for logic downloads; (3) alert when observed firmware diverges from approved baseline; (4) coordinate outage windows with operations; (5) retain evidence for sector regulator inspections.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_changes earliest=-90d\n| stats latest(firmware_version) as observed_fw latest(_time) as last_change by asset_id\n| lookup nis2_ot_patch_policy.csv asset_id OUTPUT required_fw_version last_approved_change_epoch\n| eval fw_compliant=if(observed_fw=required_fw_version,1,0)\n| eval change_stale=if(last_change < last_approved_change_epoch,1,0)\n| where fw_compliant=0 OR change_stale=1\n| table asset_id, observed_fw, required_fw_version, last_change, change_stale\n```\n\nUnderstanding this SPL\n\n**NIS2 Industrial Control System Patching and Change Evidence (Art. 21(2)(e))** — Patch and change management for ICS must balance availability with security. This use case tracks firmware updates, PLC logic downloads, and antivirus signature updates for OT endpoints — flagging systems past approved patch cycles or changes without linked change records.\n\nDocumented **Data sources**: `index=ot_changes` (HEC), `index=itsm` `sourcetype=\"snow:change_request\"`, `nis2_ot_patch_policy.csv`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC — vendor patch tools). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_changes.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_changes, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by asset_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **fw_compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **change_stale** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fw_compliant=0 OR change_stale=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Industrial Control System Patching and Change Evidence (Art. 
21(2)(e))**): table asset_id, observed_fw, required_fw_version, last_change, change_stale\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS2 Industrial Control System Patching and Change Evidence (Art. 21(2)(e))** — Patch and change management for ICS must balance availability with security. This use case tracks firmware updates, PLC logic downloads, and antivirus signature updates for OT endpoints — flagging systems past approved patch cycles or changes without linked change records.\n\nDocumented **Data sources**: `index=ot_changes` (HEC), `index=itsm` `sourcetype=\"snow:change_request\"`, `nis2_ot_patch_policy.csv`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC — vendor patch tools). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant assets), Timeline (patch waves), Single value (assets past policy), Bar chart (by vendor).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We check factory controllers against the approved version list and the official maintenance schedule, so you can prove the sensitive equipment got its fixes on time and nothing silently drifted out of spec.",
              "mtype": [
                "Compliance",
                "Change"
              ],
              "regs": [
                "NIS2"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(e) (Security in acquisition, development and maintenance) is enforced — Splunk UC-22.2.38: NIS2 Industrial Control System Patching and Change Evidence.",
                  "ea": "Saved search 'UC-22.2.38' running on index=ot_changes (HEC), index=itsm sourcetype=\"snow:change_request\", nis2_ot_patch_policy.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.39",
              "n": "NIS2 OT Incident Detection — Process and Protocol Anomalies (Art. 21(2)(f))",
              "c": "critical",
              "f": "advanced",
              "v": "OT incidents may manifest as Modbus function anomalies, unusual OPC-UA browse depth, or historian gaps rather than Windows malware. This use case baselines industrial protocol traffic per cell line and alerts on deviations that suggest manipulation or sensor spoofing.",
              "t": "Splunk Add-on for Stream (Splunkbase 1809), HTTP Event Collector (HEC — edge gateway logs)",
              "d": "`index=ot_protocol` (HEC from Edge Hub or passive taps), `nis2_ot_baseline.csv`",
              "q": "index=ot_protocol sourcetype=\"modbus:session\" earliest=-7d\n| stats count dc(function_code) as fn_variety avg(register_span) as avg_span by plc_name cell_line\n| lookup nis2_ot_baseline.csv cell_line OUTPUT expected_fn_variety expected_span\n| eval anomaly=if(fn_variety > expected_fn_variety*2 OR avg_span > expected_span*3, 1, 0)\n| where anomaly=1\n| table _time, plc_name, cell_line, fn_variety, expected_fn_variety, avg_span, expected_span",
              "m": "(1) Build baselines per production cell during golden runs; (2) ingest Modbus/OPC-UA metadata without disrupting real-time control; (3) tune thresholds with engineering; (4) route alerts to OT SOC; (5) feed confirmed incidents into UC-22.2.1/UC-22.2.33 reporting paths.",
              "z": "Line chart (function code diversity), Table (anomalous cells), Single value (anomaly count), Heatmap (by shift).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809)",
              "mitre": [
                "T0855"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Stream (Splunkbase 1809), HTTP Event Collector (HEC — edge gateway logs).\n• Ensure the following data sources are available: `index=ot_protocol` (HEC from Edge Hub or passive taps), `nis2_ot_baseline.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build baselines per production cell during golden runs; (2) ingest Modbus/OPC-UA metadata without disrupting real-time control; (3) tune thresholds with engineering; (4) route alerts to OT SOC; (5) feed confirmed incidents into UC-22.2.1/UC-22.2.33 reporting paths.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_protocol sourcetype=\"modbus:session\" earliest=-7d\n| stats count dc(function_code) as fn_variety avg(register_span) as avg_span by plc_name cell_line\n| lookup nis2_ot_baseline.csv cell_line OUTPUT expected_fn_variety expected_span\n| eval anomaly=if(fn_variety > expected_fn_variety*2 OR avg_span > expected_span*3, 1, 0)\n| where anomaly=1\n| table _time, plc_name, cell_line, fn_variety, expected_fn_variety, avg_span, expected_span\n```\n\nUnderstanding this SPL\n\n**NIS2 OT Incident Detection — Process and Protocol Anomalies (Art. 21(2)(f))** — OT incidents may manifest as Modbus function anomalies, unusual OPC-UA browse depth, or historian gaps rather than Windows malware. This use case baselines industrial protocol traffic per cell line and alerts on deviations that suggest manipulation or sensor spoofing.\n\nDocumented **Data sources**: `index=ot_protocol` (HEC from Edge Hub or passive taps), `nis2_ot_baseline.csv`. **App/TA** (typical add-on context): Splunk Add-on for Stream (Splunkbase 1809), HTTP Event Collector (HEC — edge gateway logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_protocol; **sourcetype**: modbus:session. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_protocol, sourcetype=\"modbus:session\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by plc_name cell_line** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **anomaly** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where anomaly=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 OT Incident Detection — Process and Protocol Anomalies (Art. 21(2)(f))**): table _time, plc_name, cell_line, fn_variety, expected_fn_variety, avg_span, expected_span\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (function code diversity), Table (anomalous cells), Single value (anomaly count), Heatmap (by shift).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We learn what \"normal\" chatter looks like on the factory floor networks, then we speak up when a line starts talking in an unusual way that could mean trouble—not just when office computers misbehave.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(f)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(f) (Policies and procedures effectiveness) is enforced — Splunk UC-22.2.39: NIS2 OT Incident Detection — Process and Protocol Anomalies.",
                  "ea": "Saved search 'UC-22.2.39' running on index=ot_protocol (HEC from Edge Hub or passive taps), nis2_ot_baseline.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.40",
              "n": "NIS2 Safety System Integrity Monitoring (SIL / SIS Interlocks) (Art. 21(2)(c))",
              "c": "critical",
              "f": "advanced",
              "v": "Cyber-physical incidents can impair safety instrumented functions. This use case monitors bypass events, force bits, interlock disables, and alarm floods from SIS and BMS historians — ensuring safety overrides are short-lived, authorised, and correlated with maintenance work orders.",
              "t": "HTTP Event Collector (HEC — historian exports), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=sis_audit` (HEC), `index=itsm` (permit-to-work), `nis2_sis_critical_tags.csv`",
              "q": "index=sis_audit earliest=-30d\n| lookup nis2_sis_critical_tags.csv tag_id OUTPUT sil_level plant_area\n| where match(lower(event_type),\"(?i)bypass|force|override|interlock.disable\")\n| join tag_id [\n    search index=itsm sourcetype=\"snow:sc_task\" short_description=\"*permit*\" earliest=-30d\n    | stats latest(state) as permit_state by cmdb_ci\n    | rename cmdb_ci as tag_id\n]\n| where permit_state!=\"Closed\" OR isnull(permit_state)\n| stats count by tag_id sil_level plant_area permit_state\n| sort - count",
              "m": "(1) Work with functional safety officers to define critical tags; (2) ingest audit trails without exceeding vendor-supported rates; (3) alert immediately on unapproved bypass; (4) enforce maximum bypass duration timers in secondary logic where possible; (5) quarterly review with process safety and cybersecurity jointly.",
              "z": "Table (unauthorised overrides), Timeline (bypass events), Single value (open safety overrides), Bar chart (by plant area).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC — historian exports), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=sis_audit` (HEC), `index=itsm` (permit-to-work), `nis2_sis_critical_tags.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Work with functional safety officers to define critical tags; (2) ingest audit trails without exceeding vendor-supported rates; (3) alert immediately on unapproved bypass; (4) enforce maximum bypass duration timers in secondary logic where possible; (5) quarterly review with process safety and cybersecurity jointly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sis_audit earliest=-30d\n| lookup nis2_sis_critical_tags.csv tag_id OUTPUT sil_level plant_area\n| where match(lower(event_type),\"(?i)bypass|force|override|interlock.disable\")\n| join tag_id [\n    search index=itsm sourcetype=\"snow:sc_task\" short_description=\"*permit*\" earliest=-30d\n    | stats latest(state) as permit_state by cmdb_ci\n    | rename cmdb_ci as tag_id\n]\n| where permit_state!=\"Closed\" OR isnull(permit_state)\n| stats count by tag_id sil_level plant_area permit_state\n| sort - count\n```\n\nUnderstanding this SPL\n\n**NIS2 Safety System Integrity Monitoring (SIL / SIS Interlocks) (Art. 21(2)(c))** — Cyber-physical incidents can impair safety instrumented functions. This use case monitors bypass events, force bits, interlock disables, and alarm floods from SIS and BMS historians — ensuring safety overrides are short-lived, authorised, and correlated with maintenance work orders.\n\nDocumented **Data sources**: `index=sis_audit` (HEC), `index=itsm` (permit-to-work), `nis2_sis_critical_tags.csv`. **App/TA** (typical add-on context): HTTP Event Collector (HEC — historian exports), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sis_audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sis_audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where match(lower(event_type),\"(?i)bypass|force|override|interlock.disable\")` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where permit_state!=\"Closed\" OR isnull(permit_state)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by tag_id sil_level plant_area permit_state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unauthorised overrides), Timeline (bypass events), Single value (open safety overrides), Bar chart (by plant area).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch the automatic shutoff gear that keeps plants from hurting people. When someone sidesteps that protection without a proper signed maintenance ticket, or when tests fall overdue, we raise a clear flag so experts can fix it before something breaks for real.",
              "mtype": [
                "Safety",
                "Compliance"
              ],
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(c)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(c) (Business continuity and crisis management) is enforced — Splunk UC-22.2.40: NIS2 Safety System Integrity Monitoring (SIL / SIS Interlocks).",
                  "ea": "Saved search 'UC-22.2.40' running on index=sis_audit (HEC), index=itsm (permit-to-work), nis2_sis_critical_tags.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.41",
              "n": "NIS2 Management Body Cybersecurity Training Evidence (Art. 20)",
              "c": "high",
              "f": "beginner",
              "v": "Article 20 requires members of management bodies to undergo training and to oversee cyber risk. This use case tracks board and executive training completions, including NIS2-specific modules, and flags gaps before annual general meeting cycles.",
              "t": "HTTP Event Collector (HEC — LMS CSV ingest)",
              "d": "`index=training` (role=executive|board, course_code, completion_date), `nis2_management_roster.csv`",
              "q": "| inputlookup nis2_management_roster.csv\n| join person_id [\n    search index=training course_code=\"NIS2_EXEC\" earliest=-730d\n    | stats latest(completion_date) as last_completion by person_id\n]\n| eval days_since=if(isnotnull(last_completion), round((now()-strptime(last_completion,\"%Y-%m-%d\"))/86400,0), 9999)\n| where days_since > 365 OR isnull(last_completion)\n| table person_id, role, last_completion, days_since",
              "m": "(1) Maintain roster of persons legally considered management body; (2) map LMS course codes; (3) alert governance committee on gaps; (4) store certificates in HR system with Splunk holding only dates; (5) complement UC-22.2.20 governance dashboard.",
              "z": "Table (missing training), Single value (% trained), Bar chart (by role), Timeline (training completions).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC — LMS CSV ingest).\n• Ensure the following data sources are available: `index=training` (role=executive|board, course_code, completion_date), `nis2_management_roster.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain roster of persons legally considered management body; (2) map LMS course codes; (3) alert governance committee on gaps; (4) store certificates in HR system with Splunk holding only dates; (5) complement UC-22.2.20 governance dashboard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_management_roster.csv\n| join person_id [\n    search index=training course_code=\"NIS2_EXEC\" earliest=-730d\n    | stats latest(completion_date) as last_completion by person_id\n]\n| eval days_since=if(isnotnull(last_completion), round((now()-strptime(last_completion,\"%Y-%m-%d\"))/86400,0), 9999)\n| where days_since > 365 OR isnull(last_completion)\n| table person_id, role, last_completion, days_since\n```\n\nUnderstanding this SPL\n\n**NIS2 Management Body Cybersecurity Training Evidence (Art. 20)** — Article 20 requires members of management bodies to undergo training and to oversee cyber risk. This use case tracks board and executive training completions, including NIS2-specific modules, and flags gaps before annual general meeting cycles.\n\nDocumented **Data sources**: `index=training` (role=executive|board, course_code, completion_date), `nis2_management_roster.csv`. **App/TA** (typical add-on context): HTTP Event Collector (HEC — LMS CSV ingest). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since > 365 OR isnull(last_completion)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Management Body Cybersecurity Training Evidence (Art. 20)**): table person_id, role, last_completion, days_since\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing training), Single value (% trained), Bar chart (by role), Timeline (training completions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We check an official list of leaders against training records so we can see who still needs the yearly cyber lesson the law expects directors to take, before anyone important is left uncovered.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.20",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.20 (Governance) is enforced — Splunk UC-22.2.41: NIS2 Management Body Cybersecurity Training Evidence.",
                  "ea": "Saved search 'UC-22.2.41' running on index=training (role=executive|board, course_code, completion_date), nis2_management_roster.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.42",
              "n": "NIS2 Board-Level Cyber Risk Reporting Distribution Audit (Art. 20)",
              "c": "high",
              "f": "beginner",
              "v": "Oversight requires that risk information actually reaches the board. This use case verifies scheduled executive cyber dashboards were generated, emailed to governance distribution lists, and acknowledged — using Splunk `_audit` and mail gateway metadata (aggregated).",
              "t": "Splunk Enterprise / Splunk Cloud Platform (`index=_audit`)",
              "d": "`index=_audit` (scheduled PDF delivery actions), `index=email_meta` (optional DLP headers only), `nis2_board_report_schedule.csv`",
              "q": "| inputlookup nis2_board_report_schedule.csv\n| join report_name [\n    search index=_audit action=search earliest=-90d\n    | where match(info, \"(?i)sendemail|pdf\")\n    | stats latest(_time) as last_sent by report_name\n]\n| eval expected_cadence_days=case(frequency=\"monthly\",35,frequency=\"quarterly\",100,1=1,10)\n| eval days_since=round((now()-last_sent)/86400,0)\n| where days_since > expected_cadence_days OR isnull(last_sent)\n| table report_name, frequency, last_sent, days_since",
              "m": "(1) Name board cyber reports consistently for `_audit` filtering; (2) avoid ingesting email bodies — metadata only; (3) alert secretariat when distribution misses a cycle; (4) archive PDFs to board portal with hash recorded in KV store; (5) align content with UC-22.2.21/22 risk metrics.",
              "z": "Table (missed distributions), Timeline (report sends), Single value (late reports).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform (`index=_audit`).\n• Ensure the following data sources are available: `index=_audit` (scheduled PDF delivery actions), `index=email_meta` (optional DLP headers only), `nis2_board_report_schedule.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Name board cyber reports consistently for `_audit` filtering; (2) avoid ingesting email bodies — metadata only; (3) alert secretariat when distribution misses a cycle; (4) archive PDFs to board portal with hash recorded in KV store; (5) align content with UC-22.2.21/22 risk metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_board_report_schedule.csv\n| join report_name [\n    search index=_audit action=search earliest=-90d\n    | where match(info, \"(?i)sendemail|pdf\")\n    | stats latest(_time) as last_sent by report_name\n]\n| eval expected_cadence_days=case(frequency=\"monthly\",35,frequency=\"quarterly\",100,1=1,10)\n| eval days_since=round((now()-last_sent)/86400,0)\n| where days_since > expected_cadence_days OR isnull(last_sent)\n| table report_name, frequency, last_sent, days_since\n```\n\nUnderstanding this SPL\n\n**NIS2 Board-Level Cyber Risk Reporting Distribution Audit (Art. 20)** — Oversight requires that risk information actually reaches the board. This use case verifies scheduled executive cyber dashboards were generated, emailed to governance distribution lists, and acknowledged — using Splunk `_audit` and mail gateway metadata (aggregated).\n\nDocumented **Data sources**: `index=_audit` (scheduled PDF delivery actions), `index=email_meta` (optional DLP headers only), `nis2_board_report_schedule.csv`. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform (`index=_audit`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **expected_cadence_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since > expected_cadence_days OR isnull(last_sent)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Board-Level Cyber Risk Reporting Distribution Audit (Art. 20)**): table report_name, frequency, last_sent, days_since\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missed distributions), Timeline (report sends), Single value (late reports).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep a dated log of whether the big cyber summary that is meant for directors was actually sent on schedule, so nobody has to guess if the board saw the risk picture on time.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.20",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.20 (Governance) is enforced — Splunk UC-22.2.42: NIS2 Board-Level Cyber Risk Reporting Distribution Audit.",
                  "ea": "Saved search 'UC-22.2.42' running on index _audit and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.43",
              "n": "NIS2 Annual Security Assessment Completion Tracking (Art. 21(2)(f))",
              "c": "high",
              "f": "beginner",
              "v": "Effectiveness assessments and broader security assessments are recurring obligations. This use case tracks annual independent assessments, red-team summaries, and control testing sign-offs — ensuring each NIS2-scoped entity has a current assessment on file.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`nis2_annual_assessment_register.csv`, `index=itsm` (GRC tasks)",
              "q": "| inputlookup nis2_annual_assessment_register.csv\n| eval due_epoch=strptime(next_assessment_due,\"%Y-%m-%d\")\n| eval overdue=if(now()>due_epoch,1,0)\n| join legal_entity_id [\n    search index=itsm short_description=\"*NIS2*annual*assessment*\" earliest=-400d\n    | stats latest(closed_at) as last_closed by company\n    | rename company as legal_entity_id\n]\n| where overdue=1\n| table legal_entity_id, assessment_type, next_assessment_due, last_closed, overdue",
              "m": "(1) Define assessment types (internal audit, external cert, penetration retest); (2) sync due dates from GRC; (3) alert CFO/legal when entity-level assessment is overdue; (4) attach assessment reports in controlled repository; (5) link findings to remediation in ServiceNow.",
              "z": "Table (overdue assessments), Timeline (by entity), Single value (overdue count), Bar chart (assessment type mix).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `nis2_annual_assessment_register.csv`, `index=itsm` (GRC tasks).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define assessment types (internal audit, external cert, penetration retest); (2) sync due dates from GRC; (3) alert CFO/legal when entity-level assessment is overdue; (4) attach assessment reports in controlled repository; (5) link findings to remediation in ServiceNow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_annual_assessment_register.csv\n| eval due_epoch=strptime(next_assessment_due,\"%Y-%m-%d\")\n| eval overdue=if(now()>due_epoch,1,0)\n| join legal_entity_id [\n    search index=itsm short_description=\"*NIS2*annual*assessment*\" earliest=-400d\n    | stats latest(closed_at) as last_closed by company\n    | rename company as legal_entity_id\n]\n| where overdue=1\n| table legal_entity_id, assessment_type, next_assessment_due, last_closed, overdue\n```\n\nUnderstanding this SPL\n\n**NIS2 Annual Security Assessment Completion Tracking (Art. 21(2)(f))** — Effectiveness assessments and broader security assessments are recurring obligations. This use case tracks annual independent assessments, red-team summaries, and control testing sign-offs — ensuring each NIS2-scoped entity has a current assessment on file.\n\nDocumented **Data sources**: `nis2_annual_assessment_register.csv`, `index=itsm` (GRC tasks). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **due_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where overdue=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Annual Security Assessment Completion Tracking (Art. 21(2)(f))**): table legal_entity_id, assessment_type, next_assessment_due, last_closed, overdue\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue assessments), Timeline (by entity), Single value (overdue count), Bar chart (assessment type mix).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We keep an organised list of when each part of the company must finish its yearly security check-up. If the date has passed and we still cannot see a completed ticket or report, we tell the owner in plain language so it can be fixed calmly.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(f)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(f) (Policies and procedures effectiveness) is enforced — Splunk UC-22.2.43: NIS2 Annual Security Assessment Completion Tracking.",
                  "ea": "Saved search 'UC-22.2.43' running on nis2_annual_assessment_register.csv, index=itsm (GRC tasks), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.44",
              "n": "NIS2 Cooperation Group and Sector Information Sharing Participation (Art. 14)",
              "c": "high",
              "f": "beginner",
              "v": "Cooperation mechanisms and information sharing strengthen collective resilience. This use case logs participation in sector ISACs, national cooperation groups, and Splunk-fed threat intel exchanges — tracking attendance, indicator contributions, and action items.",
              "t": "HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`nis2_cooperation_group_register.csv`, `index=itsm` (meeting tasks), STIX/TAXII ingest metrics (`index=threat_intel`)",
              "q": "| inputlookup nis2_cooperation_group_register.csv\n| join group_id [\n    search index=itsm short_description=\"*ISAC*\" OR short_description=\"*cooperation group*\" earliest=-180d\n    | stats latest(state) as last_meeting_state latest(closed_at) as last_meeting_date by assignment_group\n    | rename assignment_group as group_id\n]\n| eval participation_gap=if(last_meeting_state!=\"Closed\" OR isnull(last_meeting_date),1,0)\n| join group_id [\n    search index=threat_intel earliest=-30d\n    | stats dc(indicator_id) as indicators_consumed by feed_name\n    | rename feed_name as group_id\n]\n| where participation_gap=1 OR indicators_consumed < 1000\n| table group_id, last_meeting_state, last_meeting_date, indicators_consumed, participation_gap",
              "m": "(1) Register each cooperation forum with expected meeting cadence; (2) track attendance via ServiceNow tasks; (3) measure threat feed health from Splunk intel indexes; (4) alert CISO when participation drops; (5) document confidentiality constraints on shared data.",
              "z": "Table (gaps), Timeline (meetings), Single value (active feeds), Bar chart (indicators by feed).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `nis2_cooperation_group_register.csv`, `index=itsm` (meeting tasks), STIX/TAXII ingest metrics (`index=threat_intel`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Register each cooperation forum with expected meeting cadence; (2) track attendance via ServiceNow tasks; (3) measure threat feed health from Splunk intel indexes; (4) alert CISO when participation drops; (5) document confidentiality constraints on shared data.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup nis2_cooperation_group_register.csv\n| join group_id [\n    search index=itsm short_description=\"*ISAC*\" OR short_description=\"*cooperation group*\" earliest=-180d\n    | stats latest(state) as last_meeting_state latest(closed_at) as last_meeting_date by assignment_group\n    | rename assignment_group as group_id\n]\n| eval participation_gap=if(last_meeting_state!=\"Closed\" OR isnull(last_meeting_date),1,0)\n| join group_id [\n    search index=threat_intel earliest=-30d\n    | stats dc(indicator_id) as indicators_consumed by feed_name\n    | rename feed_name as group_id\n]\n| where participation_gap=1 OR indicators_consumed < 1000\n| table group_id, last_meeting_state, last_meeting_date, indicators_consumed, participation_gap\n```\n\nUnderstanding this SPL\n\n**NIS2 Cooperation Group and Sector Information Sharing Participation (Art. 14)** — Cooperation mechanisms and information sharing strengthen collective resilience. This use case logs participation in sector ISACs, national cooperation groups, and Splunk-fed threat intel exchanges — tracking attendance, indicator contributions, and action items.\n\nDocumented **Data sources**: `nis2_cooperation_group_register.csv`, `index=itsm` (meeting tasks), STIX/TAXII ingest metrics (`index=threat_intel`). **App/TA** (typical add-on context): HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **participation_gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where participation_gap=1 OR indicators_consumed < 1000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NIS2 Cooperation Group and Sector Information Sharing Participation (Art. 14)**): table group_id, last_meeting_state, last_meeting_date, indicators_consumed, participation_gap\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (gaps), Timeline (meetings), Single value (active feeds), Bar chart (indicators by feed).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "We watch both your meeting follow-through for industry cooperation groups and whether fresh safety tips are still flowing in from trusted partners. If meetings stall or the tip line goes quiet, we flag it so leaders can nudge the right people.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIS2 Art.14 is enforced — Splunk UC-22.2.44: NIS2 Cooperation Group and Sector Information Sharing Participation.",
                  "ea": "Saved search 'UC-22.2.44' running on nis2_cooperation_group_register.csv, index=itsm (meeting tasks), STIX/TAXII ingest metrics (index=threat_intel), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.2.45",
              "n": "NIS2 CSIRT Notification Compliance and Channel Health (Art. 23)",
              "c": "critical",
              "f": "intermediate",
              "v": "Notifications must reach the competent CSIRT through approved channels. This use case monitors successful delivery of structured incident notifications (webhook/API/email handoff), template versioning, and cryptographic signing validation where used — reducing silent failures at the last mile.",
              "t": "HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=reg_notify` (HEC from notification orchestrator), `index=itsm` (linked regulatory tasks), `nis2_csirt_endpoint_catalog.csv`",
              "q": "index=reg_notify sourcetype=\"csirt:delivery\" earliest=-90d\n| lookup nis2_csirt_endpoint_catalog.csv csirt_id OUTPUT required_template_version\n| eval template_ok=if(template_version=required_template_version,1,0)\n| eval delivery_ok=if(match(lower(delivery_status),\"success|accepted|202\"),1,0)\n| where delivery_ok=0 OR template_ok=0\n| stats count by csirt_id delivery_status template_version required_template_version\n| sort - count",
              "m": "(1) Instrument notification orchestrator to emit delivery events; (2) version PDF/XML templates per national CSIRT schema updates; (3) alert legal/IR on any non-2xx response or signature validation failure; (4) test channels quarterly with synthetic non-production payloads; (5) correlate with UC-22.2.33 deadlines.",
              "z": "Table (failed deliveries), Single value (failure rate), Timeline (delivery attempts), Bar chart (by CSIRT).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [
                "N/A (compliance notification channel health)"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=reg_notify` (HEC from notification orchestrator), `index=itsm` (linked regulatory tasks), `nis2_csirt_endpoint_catalog.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument notification orchestrator to emit delivery events; (2) version PDF/XML templates per national CSIRT schema updates; (3) alert legal/IR on any non-2xx response or signature validation failure; (4) test channels quarterly with synthetic non-production payloads; (5) correlate with UC-22.2.33 deadlines.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=reg_notify sourcetype=\"csirt:delivery\" earliest=-90d\n| lookup nis2_csirt_endpoint_catalog.csv csirt_id OUTPUT required_template_version\n| eval template_ok=if(template_version=required_template_version,1,0)\n| eval delivery_ok=if(match(lower(delivery_status),\"success|accepted|202\"),1,0)\n| where delivery_ok=0 OR template_ok=0\n| stats count by csirt_id delivery_status template_version required_template_version\n| sort - count\n```\n\nUnderstanding this SPL\n\n**NIS2 CSIRT Notification Compliance and Channel Health (Art. 23)** — Notifications must reach the competent CSIRT through approved channels. This use case monitors successful delivery of structured incident notifications (webhook/API/email handoff), template versioning, and cryptographic signing validation where used — reducing silent failures at the last mile.\n\nDocumented **Data sources**: `index=reg_notify` (HEC from notification orchestrator), `index=itsm` (linked regulatory tasks), `nis2_csirt_endpoint_catalog.csv`. **App/TA** (typical add-on context): HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: reg_notify; **sourcetype**: csirt:delivery. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=reg_notify, sourcetype=\"csirt:delivery\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **template_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **delivery_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delivery_ok=0 OR template_ok=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by csirt_id delivery_status template_version required_template_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed deliveries), Single value (failure rate), Timeline (delivery attempts), Bar chart (by CSIRT).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-29",
              "sver": "",
              "rby": "",
              "ge": "Before a cyber emergency hits, we want to know that the phone lines to the regulator actually work. This watch tests the communication channels to each country's cybersecurity team every day, so we find out about a broken line now — not when we desperately need to make that call.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Proves that the CSIRT notification channels used by the operator are tested regularly, reachable, and configured with valid credentials and certificates.",
                  "ea": "Saved search `UC-22.2.45` row-per-channel output, `index=audit_evidence sourcetype=evidence:saved_search` archive. Dashboard panel `NIS2 Art.23 — Channel health`.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.7,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 44,
            "none": 0
          }
        },
        {
          "i": "22.3",
          "n": "DORA",
          "u": [
            {
              "i": "22.3.1",
              "n": "DORA ICT Risk Management Dashboard (Art. 5-16)",
              "c": "critical",
              "f": "advanced",
              "v": "Produces an auditable, continuously refreshed view of residual ICT risk by business entity using the ES risk scoring pipeline, so risk owners can evidence identification, assessment, and monitoring of ICT risk without manual spreadsheet rollups.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "`index=risk` `sourcetype=\"stash\"` (risk_object, risk_object_type, risk_score, source, _time)",
              "q": "index=risk sourcetype=\"stash\" earliest=-30d@d\n| stats latest(risk_score) as residual_risk, max(risk_score) as peak_risk, dc(source) as contributing_sources, values(source) as source_list by risk_object, risk_object_type\n| lookup business_entity_lookup risk_object OUTPUT business_entity\n| fillnull value=\"UNASSIGNED\" business_entity\n| stats avg(residual_risk) as avg_residual, max(residual_risk) as max_residual, sum(contributing_sources) as total_sources by business_entity\n| sort - avg_residual\n| table business_entity, avg_residual, max_residual, total_sources",
              "m": "(1) Ensure ES Risk Notable / risk scoring populates `index=risk`; (2) create KV lookup `business_entity_lookup` keyed by `risk_object` (hosts/users/identities) mapping to `business_entity` from CMDB/ServiceNow export; (3) schedule daily for management reporting; (4) drill down to `risk_object` detail in Dashboard Studio.",
              "z": "Bar chart (avg/max residual risk by entity), Single value KPI tiles (top entity risk), Table with drilldown.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: `index=risk` `sourcetype=\"stash\"` (risk_object, risk_object_type, risk_score, source, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure ES Risk Notable / risk scoring populates `index=risk`; (2) create KV lookup `business_entity_lookup` keyed by `risk_object` (hosts/users/identities) mapping to `business_entity` from CMDB/ServiceNow export; (3) schedule daily for management reporting; (4) drill down to `risk_object` detail in Dashboard Studio.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=risk sourcetype=\"stash\" earliest=-30d@d\n| stats latest(risk_score) as residual_risk, max(risk_score) as peak_risk, dc(source) as contributing_sources, values(source) as source_list by risk_object, risk_object_type\n| lookup business_entity_lookup risk_object OUTPUT business_entity\n| fillnull value=\"UNASSIGNED\" business_entity\n| stats avg(residual_risk) as avg_residual, max(residual_risk) as max_residual, sum(contributing_sources) as total_sources by business_entity\n| sort - avg_residual\n| table business_entity, avg_residual, max_residual, total_sources\n```\n\nUnderstanding this SPL\n\n**DORA ICT Risk Management Dashboard (Art. 5-16)** — Produces an auditable, continuously refreshed view of residual ICT risk by business entity using the ES risk scoring pipeline, so risk owners can evidence identification, assessment, and monitoring of ICT risk without manual spreadsheet rollups.\n\nDocumented **Data sources**: `index=risk` `sourcetype=\"stash\"` (risk_object, risk_object_type, risk_score, source, _time). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: risk; **sourcetype**: stash. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=risk, sourcetype=\"stash\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by risk_object, risk_object_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Fills null values with `fillnull`.\n• `stats` rolls up events into metrics; results are split **by business_entity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **DORA ICT Risk Management Dashboard (Art. 5-16)**): table business_entity, avg_residual, max_residual, total_sources\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (avg/max residual risk by entity), Single value KPI tiles (top entity risk), Table with drilldown.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of ict risk management dashboard — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "wv": "crawl",
              "mtype": [
                "Risk",
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.10",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.10 (Detection) is enforced — Splunk UC-22.3.1: DORA ICT Risk Management Dashboard.",
                  "ea": "Saved search 'UC-22.3.1' running on index=risk sourcetype=\"stash\" (risk_object, risk_object_type, risk_score, source, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.11",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.11 (Response and recovery) is enforced — Splunk UC-22.3.1: DORA ICT Risk Management Dashboard.",
                  "ea": "Saved search 'UC-22.3.1' running on index=risk sourcetype=\"stash\" (risk_object, risk_object_type, risk_score, source, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.12",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.12 (Backup policies and recovery methods) is enforced — Splunk UC-22.3.1: DORA ICT Risk Management Dashboard.",
                  "ea": "Saved search 'UC-22.3.1' running on index=risk sourcetype=\"stash\" (risk_object, risk_object_type, risk_score, source, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.13 is enforced — Splunk UC-22.3.1: DORA ICT Risk Management Dashboard.",
                  "ea": "Saved search 'UC-22.3.1' running on index=risk sourcetype=\"stash\" (risk_object, risk_object_type, risk_score, source, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.14 is enforced — Splunk UC-22.3.1: DORA ICT Risk Management Dashboard.",
                  "ea": "Saved search 'UC-22.3.1' running on index=risk sourcetype=\"stash\" (risk_object, risk_object_type, risk_score, source, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.15",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.15 is enforced — Splunk UC-22.3.1: DORA ICT Risk Management Dashboard.",
                  "ea": "Saved search 'UC-22.3.1' running on index=risk sourcetype=\"stash\" (risk_object, risk_object_type, risk_score, source, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.16",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.16 is enforced — Splunk UC-22.3.1: DORA ICT Risk Management Dashboard.",
                  "ea": "Saved search 'UC-22.3.1' running on index=risk sourcetype=\"stash\" (risk_object, risk_object_type, risk_score, source, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.1: DORA ICT Risk Management Dashboard.",
                  "ea": "Saved search 'UC-22.3.1' running on index=risk sourcetype=\"stash\" (risk_object, risk_object_type, risk_score, source, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.6 (ICT risk-management framework) is enforced — Splunk UC-22.3.1: DORA ICT Risk Management Dashboard.",
                  "ea": "Saved search 'UC-22.3.1' running on index=risk sourcetype=\"stash\" (risk_object, risk_object_type, risk_score, source, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.7 (ICT systems, protocols and tools) is enforced — Splunk UC-22.3.1: DORA ICT Risk Management Dashboard.",
                  "ea": "Saved search 'UC-22.3.1' running on index=risk sourcetype=\"stash\" (risk_object, risk_object_type, risk_score, source, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.8",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.8 (Identification) is enforced — Splunk UC-22.3.1: DORA ICT Risk Management Dashboard.",
                  "ea": "Saved search 'UC-22.3.1' running on index=risk sourcetype=\"stash\" (risk_object, risk_object_type, risk_score, source, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.9 (Protection and prevention) is enforced — Splunk UC-22.3.1: DORA ICT Risk Management Dashboard.",
                  "ea": "Saved search 'UC-22.3.1' running on index=risk sourcetype=\"stash\" (risk_object, risk_object_type, risk_score, source, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "22.3.2",
              "n": "DORA ICT Incident Classification and Reporting (Art. 17-23)",
              "c": "critical",
              "f": "advanced",
              "v": "Maps ES notable urgency/severity to DORA major vs significant classification and computes filing deadline clocks (4h for major, 72h for others) for operational resilience incident workflows.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`` `notable` `` macro (urgency, severity, rule_name, status, owner, _time)",
              "q": "`notable` status IN (\"New\",\"In Progress\") earliest=-7d\n| eval dora_class=case(\n    urgency IN (\"critical\",\"high\") OR severity IN (\"critical\",\"high\"), \"major\",\n    1=1, \"significant_or_other\")\n| eval filing_deadline_h=if(dora_class=\"major\", 4, 72)\n| eval hours_elapsed=round((now()-_time)/3600, 2)\n| eval filing_breach=if(hours_elapsed>filing_deadline_h, 1, 0)\n| table _time, rule_name, urgency, severity, dora_class, filing_deadline_h, hours_elapsed, filing_breach, owner, status\n| sort - filing_breach, - hours_elapsed",
              "m": "(1) Confirm ES notable ingestion and that `urgency`/`severity` are populated; (2) align `dora_class` thresholds to your legal/ops policy; (3) wire alerts for `filing_breach=1` to SOC + resilience comms queues; (4) attach runbook for DORA reporting to competent authority.",
              "z": "Table with conditional formatting on deadline breach, Timeline chart of notables by `dora_class`, Single value (count approaching deadline).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048",
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `` `notable` `` macro (urgency, severity, rule_name, status, owner, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm ES notable ingestion and that `urgency`/`severity` are populated; (2) align `dora_class` thresholds to your legal/ops policy; (3) wire alerts for `filing_breach=1` to SOC + resilience comms queues; (4) attach runbook for DORA reporting to competent authority.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` status IN (\"New\",\"In Progress\") earliest=-7d\n| eval dora_class=case(\n    urgency IN (\"critical\",\"high\") OR severity IN (\"critical\",\"high\"), \"major\",\n    1=1, \"significant_or_other\")\n| eval filing_deadline_h=if(dora_class=\"major\", 4, 72)\n| eval hours_elapsed=round((now()-_time)/3600, 2)\n| eval filing_breach=if(hours_elapsed>filing_deadline_h, 1, 0)\n| table _time, rule_name, urgency, severity, dora_class, filing_deadline_h, hours_elapsed, filing_breach, owner, status\n| sort - filing_breach, - hours_elapsed\n```\n\nUnderstanding this SPL\n\n**DORA ICT Incident Classification and Reporting (Art. 17-23)** — Maps ES notable urgency/severity to DORA major vs significant classification and computes filing deadline clocks (4h for major, 72h for others) for operational resilience incident workflows.\n\nDocumented **Data sources**: `` `notable` `` macro (urgency, severity, rule_name, status, owner, _time). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **dora_class** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **filing_deadline_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **hours_elapsed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **filing_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **DORA ICT Incident Classification and Reporting (Art. 17-23)**): table _time, rule_name, urgency, severity, dora_class, filing_deadline_h, hours_elapsed, filing_breach, owner, status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table with conditional formatting on deadline breach, Timeline chart of notables by `dora_class`, Single value (count approaching deadline).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We maps notable urgency/severity to DORA major vs significant classification and computes filing deadline clocks (4h for major, 72h for others) for operational resilience incident workflows.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.17 (ICT-related incident management process) is enforced — Splunk UC-22.3.2: DORA ICT Incident Classification and Reporting.",
                  "ea": "Saved search 'UC-22.3.2' running on notable macro (urgency, severity, rule_name, status, owner, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.18 (Classification of ICT-related incidents) is enforced — Splunk UC-22.3.2: DORA ICT Incident Classification and Reporting.",
                  "ea": "Saved search 'UC-22.3.2' running on notable macro (urgency, severity, rule_name, status, owner, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.19",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.19 (Reporting of major ICT-related incidents) is enforced — Splunk UC-22.3.2: DORA ICT Incident Classification and Reporting.",
                  "ea": "Saved search 'UC-22.3.2' running on notable macro (urgency, severity, rule_name, status, owner, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.20",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.20 is enforced — Splunk UC-22.3.2: DORA ICT Incident Classification and Reporting.",
                  "ea": "Saved search 'UC-22.3.2' running on notable macro (urgency, severity, rule_name, status, owner, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.21",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.21 is enforced — Splunk UC-22.3.2: DORA ICT Incident Classification and Reporting.",
                  "ea": "Saved search 'UC-22.3.2' running on notable macro (urgency, severity, rule_name, status, owner, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.22",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.22 is enforced — Splunk UC-22.3.2: DORA ICT Incident Classification and Reporting.",
                  "ea": "Saved search 'UC-22.3.2' running on notable macro (urgency, severity, rule_name, status, owner, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.23 is enforced — Splunk UC-22.3.2: DORA ICT Incident Classification and Reporting.",
                  "ea": "Saved search 'UC-22.3.2' running on notable macro (urgency, severity, rule_name, status, owner, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.3",
              "n": "DORA Digital Operational Resilience Testing (Art. 24-27)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks scheduled resilience test outcomes via ITSI KPI breaches and highlights testing gaps (missing runs, failed thresholds) for Board/ICT oversight reporting on digital resilience.",
              "t": "Splunk IT Service Intelligence (Splunkbase 1841)",
              "d": "`index=itsi_summary` (service_name, kpi_name, alert_value, severity_value, is_service_in_maintenance, _time)",
              "q": "index=itsi_summary earliest=-90d is_service_in_maintenance=0\n| eval kpi_l=lower(kpi_name)\n| where match(kpi_l,\"(dr|disaster|resilience|failover|recovery|rto|rpo|backup|restore)\")\n| bin _time span=1d\n| stats latest(alert_value) as last_value, latest(severity_value) as last_severity by _time, service_name, kpi_name\n| eval test_fail=if(last_severity>=4 OR last_value>0, 1, 0)\n| timechart span=7d sum(test_fail) as failed_observations, dc(service_name) as impacted_services",
              "m": "(1) Standardize KPI naming for resilience tests with tokens like `DR`, `Failover`, `Restore` in `kpi_name`; (2) ensure ITSI services represent regulated business services; (3) exclude maintenance noise via `is_service_in_maintenance`; (4) add a lookup of expected test windows and compare expected vs observed runs for gap detection.",
              "z": "Timechart (failed observations), Heatmap (service x week), Table (last failures with drilldown to deep dives).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk IT Service Intelligence](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk IT Service Intelligence (Splunkbase 1841).\n• Ensure the following data sources are available: `index=itsi_summary` (service_name, kpi_name, alert_value, severity_value, is_service_in_maintenance, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardize KPI naming for resilience tests with tokens like `DR`, `Failover`, `Restore` in `kpi_name`; (2) ensure ITSI services represent regulated business services; (3) exclude maintenance noise via `is_service_in_maintenance`; (4) add a lookup of expected test windows and compare expected vs observed runs for gap detection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary earliest=-90d is_service_in_maintenance=0\n| eval kpi_l=lower(kpi_name)\n| where match(kpi_l,\"(dr|disaster|resilience|failover|recovery|rto|rpo|backup|restore)\")\n| bin _time span=1d\n| stats latest(alert_value) as last_value, latest(severity_value) as last_severity by _time, service_name, kpi_name\n| eval test_fail=if(last_severity>=4 OR last_value>0, 1, 0)\n| timechart span=7d sum(test_fail) as failed_observations, dc(service_name) as impacted_services\n```\n\nUnderstanding this SPL\n\n**DORA Digital Operational Resilience Testing (Art. 24-27)** — Tracks scheduled resilience test outcomes via ITSI KPI breaches and highlights testing gaps (missing runs, failed thresholds) for Board/ICT oversight reporting on digital resilience.\n\nDocumented **Data sources**: `index=itsi_summary` (service_name, kpi_name, alert_value, severity_value, is_service_in_maintenance, _time). **App/TA** (typical add-on context): Splunk IT Service Intelligence (Splunkbase 1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **kpi_l** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(kpi_l,\"(dr|disaster|resilience|failover|recovery|rto|rpo|backup|restore)\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, service_name, kpi_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **test_fail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=7d** buckets — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (failed observations), Heatmap (service x week), Table (last failures with drilldown to deep dives).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track scheduled resilience test outcomes via ITSI KPI breaches and highlights testing gaps (missing runs, failed thresholds) for Board/ICT oversight reporting on digital resilience.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.24 (Digital operational-resilience testing) is enforced — Splunk UC-22.3.3: DORA Digital Operational Resilience Testing.",
                  "ea": "Saved search 'UC-22.3.3' running on index=itsi_summary (service_name, kpi_name, alert_value, severity_value, is_service_in_maintenance, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.25",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.25 is enforced — Splunk UC-22.3.3: DORA Digital Operational Resilience Testing.",
                  "ea": "Saved search 'UC-22.3.3' running on index=itsi_summary (service_name, kpi_name, alert_value, severity_value, is_service_in_maintenance, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.26 (Threat-led penetration testing) is enforced — Splunk UC-22.3.3: DORA Digital Operational Resilience Testing.",
                  "ea": "Saved search 'UC-22.3.3' running on index=itsi_summary (service_name, kpi_name, alert_value, severity_value, is_service_in_maintenance, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.27",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.27 is enforced — Splunk UC-22.3.3: DORA Digital Operational Resilience Testing.",
                  "ea": "Saved search 'UC-22.3.3' running on index=itsi_summary (service_name, kpi_name, alert_value, severity_value, is_service_in_maintenance, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.4",
              "n": "DORA Third-Party ICT Provider Concentration Risk (Art. 28-44)",
              "c": "high",
              "f": "intermediate",
              "v": "Quantifies operational dependency on specific cloud providers by measuring API activity concentration across accounts, regions, and services, supporting third-party risk assessments and exit planning.",
              "t": "Splunk Add-on for Amazon Web Services (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110)",
              "d": "`index=aws` `sourcetype=\"aws:cloudtrail\"` (eventSource, eventName, awsRegion, userIdentity.arn, recipientAccountId); `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (operationName, resourceProvider, callerIpAddress, ResourceGroup)",
              "q": "(index=aws sourcetype=\"aws:cloudtrail\") OR (index=azure sourcetype=\"mscs:azure:auditlog\")\n| eval provider=if(sourcetype==\"aws:cloudtrail\", \"AWS\", \"Azure\")\n| eval service=coalesce(eventSource, resourceProvider)\n| eval region=coalesce(awsRegion, ResourceGroup)\n| stats count by provider, service, region\n| eventstats sum(count) as total\n| eval concentration_pct=round(100*count/total, 2)\n| sort - concentration_pct\n| head 50\n| table provider, service, region, count, concentration_pct",
              "m": "(1) Ingest CloudTrail (org trail) into `index=aws` using AWS TA; (2) ingest Azure Activity via Event Hub using Microsoft Cloud Services TA; (3) create a saved search weekly for procurement/third-party governance dashboards; (4) enrich with cloud account tags via lookup (cost center, vendor name).",
              "z": "Treemap (share by service), Stacked bar by provider, Table of top (service, region) pairs.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Amazon Web Services](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Amazon Web Services (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110).\n• Ensure the following data sources are available: `index=aws` `sourcetype=\"aws:cloudtrail\"` (eventSource, eventName, awsRegion, userIdentity.arn, recipientAccountId); `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (operationName, resourceProvider, callerIpAddress, ResourceGroup).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CloudTrail (org trail) into `index=aws` using AWS TA; (2) ingest Azure Activity via Event Hub using Microsoft Cloud Services TA; (3) create a saved search weekly for procurement/third-party governance dashboards; (4) enrich with cloud account tags via lookup (cost center, vendor name).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=aws sourcetype=\"aws:cloudtrail\") OR (index=azure sourcetype=\"mscs:azure:auditlog\")\n| eval provider=if(sourcetype==\"aws:cloudtrail\", \"AWS\", \"Azure\")\n| eval service=coalesce(eventSource, resourceProvider)\n| eval region=coalesce(awsRegion, ResourceGroup)\n| stats count by provider, service, region\n| eventstats sum(count) as total\n| eval concentration_pct=round(100*count/total, 2)\n| sort - concentration_pct\n| head 50\n| table provider, service, region, count, concentration_pct\n```\n\nUnderstanding this SPL\n\n**DORA Third-Party ICT Provider Concentration Risk (Art. 28-44)** — Quantifies operational dependency on specific cloud providers by measuring API activity concentration across accounts, regions, and services, supporting third-party risk assessments and exit planning.\n\nDocumented **Data sources**: `index=aws` `sourcetype=\"aws:cloudtrail\"` (eventSource, eventName, awsRegion, userIdentity.arn, recipientAccountId); `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (operationName, resourceProvider, callerIpAddress, ResourceGroup). **App/TA** (typical add-on context): Splunk Add-on for Amazon Web Services (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, azure; **sourcetype**: aws:cloudtrail, mscs:azure:auditlog. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=azure, sourcetype=\"aws:cloudtrail\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **provider** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **service** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **region** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by provider, service, region** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• `eval` defines or adjusts **concentration_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Pipeline stage (see **DORA Third-Party ICT Provider Concentration Risk (Art. 28-44)**): table provider, service, region, count, concentration_pct\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DORA Third-Party ICT Provider Concentration Risk (Art. 28-44)** — Quantifies operational dependency on specific cloud providers by measuring API activity concentration across accounts, regions, and services, supporting third-party risk assessments and exit planning.\n\nDocumented **Data sources**: `index=aws` `sourcetype=\"aws:cloudtrail\"` (eventSource, eventName, awsRegion, userIdentity.arn, recipientAccountId); `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (operationName, resourceProvider, callerIpAddress, ResourceGroup). **App/TA** (typical add-on context): Splunk Add-on for Amazon Web Services (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Treemap (share by service), Stacked bar by provider, Table of top (service, region) pairs.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We quantifies operational dependency on specific cloud providers by measuring API activity concentration across accounts, regions, and services, supporting third-party risk assessments and exit planning.",
              "mtype": [
                "Risk",
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "Change (CloudTrail often maps via TA)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.28",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.28 (ICT third-party risk) is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.29",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.29 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.30",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.30 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.31",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.31 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.32",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.32 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.33",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.33 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.34",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.34 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.35",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.35 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.36",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.36 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.37",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.37 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.38",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.38 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.39",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.39 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.40",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.40 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.41",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.41 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.42",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.42 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.43",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.43 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.44",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.44 is enforced — Splunk UC-22.3.4: DORA Third-Party ICT Provider Concentration Risk.",
                  "ea": "Saved search 'UC-22.3.4' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.5",
              "n": "DORA Cross-Region Disaster Recovery Compliance (Art. 11-12)",
              "c": "critical",
              "f": "advanced",
              "v": "Demonstrates ongoing cross-region replication and DR operations evidence from cloud provider audit trails combined with ITSI service health across regions.",
              "t": "Splunk Add-on for Amazon Web Services (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Splunk IT Service Intelligence (Splunkbase 1841)",
              "d": "`index=aws` `sourcetype=\"aws:cloudtrail\"` (eventName, awsRegion, requestParameters); `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (operationName, Category, ResourceGroup); `index=itsi_summary` (service_name, health_score, severity_value)",
              "q": "index=itsi_summary is_service_in_maintenance=0 earliest=-24h\n| eval region_tag=coalesce(entity_key, service_name)\n| stats avg(health_score) as avg_health, max(severity_value) as worst_severity by service_name\n| where worst_severity>=3 OR avg_health<80\n| table service_name, avg_health, worst_severity",
              "m": "(1) Ensure CloudTrail includes data-plane events for replication visibility; (2) for Azure, route Activity logs to Event Hub and confirm `mscs:azure:auditlog` parsing; (3) in ITSI, tag entities with `region` and bind KPIs representing DR readiness; (4) combine cloud evidence and ITSI health panels in a single DR compliance dashboard.",
              "z": "Timeline of replication events, Geographic map (counts by region), ITSI service health single values by region.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Amazon Web Services](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [Splunk IT Service Intelligence](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Amazon Web Services (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Splunk IT Service Intelligence (Splunkbase 1841).\n• Ensure the following data sources are available: `index=aws` `sourcetype=\"aws:cloudtrail\"` (eventName, awsRegion, requestParameters); `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (operationName, Category, ResourceGroup); `index=itsi_summary` (service_name, health_score, severity_value).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure CloudTrail includes data-plane events for replication visibility; (2) for Azure, route Activity logs to Event Hub and confirm `mscs:azure:auditlog` parsing; (3) in ITSI, tag entities with `region` and bind KPIs representing DR readiness; (4) combine cloud evidence and ITSI health panels in a single DR compliance dashboard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary is_service_in_maintenance=0 earliest=-24h\n| eval region_tag=coalesce(entity_key, service_name)\n| stats avg(health_score) as avg_health, max(severity_value) as worst_severity by service_name\n| where worst_severity>=3 OR avg_health<80\n| table service_name, avg_health, worst_severity\n```\n\nUnderstanding this SPL\n\n**DORA Cross-Region Disaster Recovery Compliance (Art. 11-12)** — Demonstrates ongoing cross-region replication and DR operations evidence from cloud provider audit trails combined with ITSI service health across regions.\n\nDocumented **Data sources**: `index=aws` `sourcetype=\"aws:cloudtrail\"` (eventName, awsRegion, requestParameters); `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (operationName, Category, ResourceGroup); `index=itsi_summary` (service_name, health_score, severity_value). **App/TA** (typical add-on context): Splunk Add-on for Amazon Web Services (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Splunk IT Service Intelligence (Splunkbase 1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **region_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where worst_severity>=3 OR avg_health<80` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA Cross-Region Disaster Recovery Compliance (Art. 
11-12)**): table service_name, avg_health, worst_severity\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DORA Cross-Region Disaster Recovery Compliance (Art. 11-12)** — Demonstrates ongoing cross-region replication and DR operations evidence from cloud provider audit trails combined with ITSI service health across regions.\n\nDocumented **Data sources**: `index=aws` `sourcetype=\"aws:cloudtrail\"` (eventName, awsRegion, requestParameters); `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (operationName, Category, ResourceGroup); `index=itsi_summary` (service_name, health_score, severity_value). **App/TA** (typical add-on context): Splunk Add-on for Amazon Web Services (Splunkbase 1876), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), Splunk IT Service Intelligence (Splunkbase 1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline of replication events, Geographic map (counts by region), ITSI service health single values by region.",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI) (optional but recommended)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We demonstrates ongoing cross-region replication and DR operations evidence from cloud provider audit trails combined with ITSI service health across regions.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "Change (replication changes)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.11",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.11 (Response and recovery) is enforced — Splunk UC-22.3.5: DORA Cross-Region Disaster Recovery Compliance.",
                  "ea": "Saved search 'UC-22.3.5' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.12",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.12 (Backup policies and recovery methods) is enforced — Splunk UC-22.3.5: DORA Cross-Region Disaster Recovery Compliance.",
                  "ea": "Saved search 'UC-22.3.5' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.6",
              "n": "DORA ICT Change Management and Patch Compliance (Art. 9(4)(e))",
              "c": "critical",
              "f": "intermediate",
              "v": "Article 9(4)(e) requires documented ICT change management policies that are risk-assessed and approved before deployment. This use case tracks change ticket compliance for ICT systems supporting critical functions — detecting unauthorized changes, changes without approval, and emergency changes without post-implementation review — providing evidence that protection and prevention controls are operational.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Add-on for Microsoft Windows (Splunkbase 742)",
              "d": "`index=itsm` `sourcetype=\"snow:change_request\"`, `index=windows` `sourcetype=\"WinEventLog:System\"`, CIM Change data model",
              "q": "| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object All_Changes.command All_Changes.change_type _time span=1d\n| rename All_Changes.* as *\n| lookup dora_critical_systems.csv object AS object OUTPUT critical_function, business_service\n| where isnotnull(critical_function)\n| lookup change_ticket_lookup object AS object, _time OUTPUT ticket_number, approval_status, risk_assessment\n| eval unauthorized=if(isnull(ticket_number), 1, 0)\n| eval unapproved=if(isnotnull(ticket_number) AND approval_status!=\"approved\", 1, 0)\n| stats count, sum(unauthorized) as unauthorized_changes, sum(unapproved) as unapproved_changes by object, critical_function, business_service\n| where unauthorized_changes > 0 OR unapproved_changes > 0\n| sort - unauthorized_changes\n| table object, critical_function, business_service, count, unauthorized_changes, unapproved_changes",
              "m": "(1) Create `dora_critical_systems.csv` mapping ICT assets to critical/important business functions; (2) correlate CIM Change events with ServiceNow change tickets; (3) alert on any change to critical systems without an approved ticket; (4) track emergency changes separately and verify post-implementation review within 5 business days; (5) report change compliance rate as DORA governance KPI.",
              "z": "Table (unauthorized changes), Bar chart (changes by approval status), Single value (change compliance %), Timeline (changes to critical systems).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Add-on for Microsoft Windows (Splunkbase 742).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:change_request\"`, `index=windows` `sourcetype=\"WinEventLog:System\"`, CIM Change data model.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create `dora_critical_systems.csv` mapping ICT assets to critical/important business functions; (2) correlate CIM Change events with ServiceNow change tickets; (3) alert on any change to critical systems without an approved ticket; (4) track emergency changes separately and verify post-implementation review within 5 business days; (5) report change compliance rate as DORA governance KPI.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` count\n  from datamodel=Change.All_Changes\n  by All_Changes.user All_Changes.object All_Changes.command All_Changes.change_type _time span=1d\n| rename All_Changes.* as *\n| lookup dora_critical_systems.csv object AS object OUTPUT critical_function, business_service\n| where isnotnull(critical_function)\n| lookup change_ticket_lookup object AS object, _time OUTPUT ticket_number, approval_status, risk_assessment\n| eval unauthorized=if(isnull(ticket_number), 1, 0)\n| eval unapproved=if(isnotnull(ticket_number) AND approval_status!=\"approved\", 1, 0)\n| stats count, sum(unauthorized) as unauthorized_changes, sum(unapproved) as unapproved_changes by object, critical_function, business_service\n| where unauthorized_changes > 0 OR unapproved_changes > 0\n| sort - unauthorized_changes\n| table object, critical_function, business_service, count, unauthorized_changes, unapproved_changes\n```\n\nUnderstanding this SPL\n\n**DORA ICT Change Management and Patch Compliance (Art. 9(4)(e))** — Article 9(4)(e) requires documented ICT change management policies that are risk-assessed and approved before deployment. This use case tracks change ticket compliance for ICT systems supporting critical functions — detecting unauthorized changes, changes without approval, and emergency changes without post-implementation review — providing evidence that protection and prevention controls are operational.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"`, `index=windows` `sourcetype=\"WinEventLog:System\"`, CIM Change data model. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Add-on for Microsoft Windows (Splunkbase 742). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(critical_function)` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **unauthorized** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **unapproved** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by object, critical_function, business_service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where unauthorized_changes > 0 OR unapproved_changes > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **DORA ICT Change Management and Patch Compliance (Art. 9(4)(e))**): table object, critical_function, business_service, count, unauthorized_changes, unapproved_changes\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.object | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DORA ICT Change Management and Patch Compliance (Art. 9(4)(e))** — Article 9(4)(e) requires documented ICT change management policies that are risk-assessed and approved before deployment. This use case tracks change ticket compliance for ICT systems supporting critical functions — detecting unauthorized changes, changes without approval, and emergency changes without post-implementation review — providing evidence that protection and prevention controls are operational.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"`, `index=windows` `sourcetype=\"WinEventLog:System\"`, CIM Change data model. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unauthorized changes), Bar chart (changes by approval status), Single value (change compliance %), Timeline (changes to critical systems).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of ict change management and patch compliance — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.object | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.9(4)(e)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.9(4)(e) is enforced — Splunk UC-22.3.6: DORA ICT Change Management and Patch Compliance.",
                  "ea": "Saved search 'UC-22.3.6' running on index=itsm sourcetype=\"snow:change_request\", index=windows sourcetype=\"WinEventLog:System\", CIM Change data model, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.7",
              "n": "DORA ICT Anomaly Detection Capabilities (Art. 10)",
              "c": "critical",
              "f": "intermediate",
              "v": "Article 10 requires financial entities to have mechanisms to promptly detect anomalous activities including ICT network performance issues and ICT-related incidents. This use case monitors the health and coverage of detection capabilities themselves — ensuring that correlation searches are running, data sources are flowing, and detection coverage spans all critical ICT systems — proving that detection infrastructure meets DORA requirements.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`_audit` (correlation search execution), `_internal` (data ingestion), ES Content Management",
              "q": "| rest /services/saved/searches splunk_server=local count=0\n| search disabled=0 is_scheduled=1 action.correlationsearch.enabled=1\n| eval last_run=strftime(strptime(next_scheduled_time,\"%Y-%m-%dT%H:%M:%S\"),\"%Y-%m-%d %H:%M\")\n| table title, cron_schedule, next_scheduled_time, action.correlationsearch.label\n| append [\n    search index=_internal sourcetype=splunkd group=per_index_thruput earliest=-4h\n    | stats latest(ev) as last_events by series\n    | where last_events=0\n    | eval title=\"DATA_GAP: \".series, status=\"NO_DATA_4H\"\n    | table title, status\n]\n| sort title",
              "m": "(1) Verify all ES correlation searches for critical ICT functions are enabled and running; (2) monitor data source health — alert when indexes supporting critical detection go silent for >4 hours; (3) map detection coverage against DORA critical functions to identify blind spots; (4) report detection coverage percentage and data source health as DORA Art. 10 evidence.",
              "z": "Table (detection coverage and data gaps), Single value (active correlation searches), Bar chart (data gaps by index), Gauge (detection coverage %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `_audit` (correlation search execution), `_internal` (data ingestion), ES Content Management.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Verify all ES correlation searches for critical ICT functions are enabled and running; (2) monitor data source health — alert when indexes supporting critical detection go silent for >4 hours; (3) map detection coverage against DORA critical functions to identify blind spots; (4) report detection coverage percentage and data source health as DORA Art. 10 evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/saved/searches splunk_server=local count=0\n| search disabled=0 is_scheduled=1 action.correlationsearch.enabled=1\n| eval last_run=strftime(strptime(next_scheduled_time,\"%Y-%m-%dT%H:%M:%S\"),\"%Y-%m-%d %H:%M\")\n| table title, cron_schedule, next_scheduled_time, action.correlationsearch.label\n| append [\n    search index=_internal sourcetype=splunkd group=per_index_thruput earliest=-4h\n    | stats latest(ev) as last_events by series\n    | where last_events=0\n    | eval title=\"DATA_GAP: \".series, status=\"NO_DATA_4H\"\n    | table title, status\n]\n| sort title\n```\n\nUnderstanding this SPL\n\n**DORA ICT Anomaly Detection Capabilities (Art. 10)** — Article 10 requires financial entities to have mechanisms to promptly detect anomalous activities including ICT network performance issues and ICT-related incidents. This use case monitors the health and coverage of detection capabilities themselves — ensuring that correlation searches are running, data sources are flowing, and detection coverage spans all critical ICT systems — proving that detection infrastructure meets DORA requirements.\n\nDocumented **Data sources**: `_audit` (correlation search execution), `_internal` (data ingestion), ES Content Management. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **last_run** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **DORA ICT Anomaly Detection Capabilities (Art. 10)**): table title, cron_schedule, next_scheduled_time, action.correlationsearch.label\n• Appends rows from a subsearch with `append`.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (detection coverage and data gaps), Single value (active correlation searches), Bar chart (data gaps by index), Gauge (detection coverage %).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of ict anomaly detection capabilities — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Security"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.10",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.10 (Detection) is enforced — Splunk UC-22.3.7: DORA ICT Anomaly Detection Capabilities.",
                  "ea": "Saved search 'UC-22.3.7' running on _audit (correlation search execution), _internal (data ingestion), ES Content Management, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.8",
              "n": "DORA ICT Incident Response and Recovery Time Tracking (Art. 11)",
              "c": "critical",
              "f": "intermediate",
              "v": "Article 11 requires financial entities to put in place ICT response and recovery plans for critical functions, including estimated recovery times. This use case measures actual response and recovery times against defined RTO/RPO targets for DORA-regulated services, tracking mean-time-to-detect (MTTD), mean-time-to-respond (MTTR), and mean-time-to-recover (MTTRC) — the operational evidence that response and recovery capabilities are tested and effective.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk ITSI (Splunkbase 1841)",
              "d": "`` `notable` `` macro (incident lifecycle), `index=itsi_summary` (service recovery KPIs)",
              "q": "`notable` urgency IN (\"high\",\"critical\") status=\"Closed\" earliest=-90d\n| eval detect_time=_time\n| eval respond_time=if(isnotnull(status_description) AND match(status_description,\"(?i)acknowledged|triaged|investigating\"), strptime(status_description,\"%Y-%m-%d %H:%M:%S\"), null())\n| eval close_time=if(status=\"Closed\", now(), null())\n| eval mttd_h=round((detect_time - info_min_time)/3600, 2)\n| eval mttr_h=round((close_time - detect_time)/3600, 2)\n| lookup dora_critical_systems.csv dest AS dest OUTPUT critical_function, rto_hours, rpo_hours\n| eval rto_breach=if(isnotnull(rto_hours) AND mttr_h > rto_hours, 1, 0)\n| stats avg(mttd_h) as avg_mttd, avg(mttr_h) as avg_mttr, sum(rto_breach) as rto_breaches, count by critical_function\n| table critical_function, count, avg_mttd, avg_mttr, rto_breaches\n| sort - rto_breaches",
              "m": "(1) Define RTO/RPO per critical function in `dora_critical_systems.csv`; (2) instrument incident workflow to capture timestamps at detection, acknowledgement, containment, and resolution; (3) compare actual recovery times against defined targets; (4) alert on RTO breaches for critical functions; (5) report MTTD/MTTR trends quarterly for management body oversight.",
              "z": "Bar chart (avg MTTR by function), Single value (avg MTTD), Table (RTO breaches), Line chart (MTTR trend over 90 days).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [
                "T1048",
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk ITSI (Splunkbase 1841).\n• Ensure the following data sources are available: `` `notable` `` macro (incident lifecycle), `index=itsi_summary` (service recovery KPIs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define RTO/RPO per critical function in `dora_critical_systems.csv`; (2) instrument incident workflow to capture timestamps at detection, acknowledgement, containment, and resolution; (3) compare actual recovery times against defined targets; (4) alert on RTO breaches for critical functions; (5) report MTTD/MTTR trends quarterly for management body oversight.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` urgency IN (\"high\",\"critical\") status=\"Closed\" earliest=-90d\n| eval detect_time=_time\n| eval respond_time=if(isnotnull(status_description) AND match(status_description,\"(?i)acknowledged|triaged|investigating\"), strptime(status_description,\"%Y-%m-%d %H:%M:%S\"), null())\n| eval close_time=if(status=\"Closed\", now(), null())\n| eval mttd_h=round((detect_time - info_min_time)/3600, 2)\n| eval mttr_h=round((close_time - detect_time)/3600, 2)\n| lookup dora_critical_systems.csv dest AS dest OUTPUT critical_function, rto_hours, rpo_hours\n| eval rto_breach=if(isnotnull(rto_hours) AND mttr_h > rto_hours, 1, 0)\n| stats avg(mttd_h) as avg_mttd, avg(mttr_h) as avg_mttr, sum(rto_breach) as rto_breaches, count by critical_function\n| table critical_function, count, avg_mttd, avg_mttr, rto_breaches\n| sort - rto_breaches\n```\n\nUnderstanding this SPL\n\n**DORA ICT Incident Response and Recovery Time Tracking (Art. 11)** — Article 11 requires financial entities to put in place ICT response and recovery plans for critical functions, including estimated recovery times. This use case measures actual response and recovery times against defined RTO/RPO targets for DORA-regulated services, tracking mean-time-to-detect (MTTD), mean-time-to-respond (MTTR), and mean-time-to-recover (MTTRC) — the operational evidence that response and recovery capabilities are tested and effective.\n\nDocumented **Data sources**: `` `notable` `` macro (incident lifecycle), `index=itsi_summary` (service recovery KPIs). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk ITSI (Splunkbase 1841). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **detect_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **respond_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **close_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mttd_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mttr_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **rto_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by critical_function** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **DORA ICT Incident Response and Recovery Time Tracking (Art. 11)**): table critical_function, count, avg_mttd, avg_mttr, rto_breaches\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (avg MTTR by function), Single value (avg MTTD), Table (RTO breaches), Line chart (MTTR trend over 90 days).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of ict incident response and recovery time tracking — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Security",
                "Availability",
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.11",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.11 (Response and recovery) is enforced — Splunk UC-22.3.8: DORA ICT Incident Response and Recovery Time Tracking.",
                  "ea": "Saved search 'UC-22.3.8' running on notable macro (incident lifecycle), index=itsi_summary (service recovery KPIs), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.9",
              "n": "DORA Backup Completeness and Restoration Testing (Art. 12)",
              "c": "critical",
              "f": "intermediate",
              "v": "Article 12 requires documented backup policies specifying scope and frequency based on criticality, with backup systems physically and logically segregated. Restoration procedures must be periodically tested. This use case tracks backup job success/failure for DORA-regulated systems, validates segregation requirements, and monitors restoration test completion — going beyond cloud DR replication (UC-22.3.5) to cover the full backup lifecycle.",
              "t": "Veeam App for Splunk (Splunkbase 7312), Splunk ITSI (Splunkbase 1841)",
              "d": "`index=backup` (backup software logs), `dora_backup_schedule.csv` (expected backup frequency per system)",
              "q": "index=backup sourcetype IN (\"veeam:backup\",\"commvault:job\",\"rubrik:event\",\"aws:backup\") earliest=-7d\n| eval status=lower(coalesce(status, result, state))\n| eval success=if(match(status,\"(?i)success|completed|ok\"), 1, 0)\n| eval failed=if(match(status,\"(?i)fail|error|warning\"), 1, 0)\n| stats sum(success) as successes, sum(failed) as failures, latest(_time) as last_backup by job_name, target_system\n| lookup dora_critical_systems.csv system_name AS target_system OUTPUT critical_function, backup_frequency_hours\n| where isnotnull(critical_function)\n| eval hours_since_backup=round((now()-last_backup)/3600, 1)\n| eval backup_overdue=if(isnotnull(backup_frequency_hours) AND hours_since_backup > backup_frequency_hours, \"OVERDUE\", \"OK\")\n| eval backup_success_rate=round(100*successes/(successes+failures), 1)\n| where failures > 0 OR backup_overdue=\"OVERDUE\" OR backup_success_rate < 95\n| sort - failures\n| table target_system, critical_function, successes, failures, backup_success_rate, hours_since_backup, backup_overdue",
              "m": "(1) Forward backup software logs via syslog or HEC; (2) define expected backup frequency per DORA-critical system in `dora_backup_schedule.csv`; (3) alert on failed backups for critical functions immediately; (4) track restoration test completion in a KV store — DORA requires periodic testing; (5) validate physical/logical segregation by confirming backup destinations differ from source infrastructure.",
              "z": "Table (backup status per critical system), Single value (backup success rate %), Bar chart (failures by system), Timeline (backup history).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Veeam App for Splunk](https://splunkbase.splunk.com/app/7312), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Veeam App for Splunk (Splunkbase 7312), Splunk ITSI (Splunkbase 1841).\n• Ensure the following data sources are available: `index=backup` (backup software logs), `dora_backup_schedule.csv` (expected backup frequency per system).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Forward backup software logs via syslog or HEC; (2) define expected backup frequency per DORA-critical system in `dora_backup_schedule.csv`; (3) alert on failed backups for critical functions immediately; (4) track restoration test completion in a KV store — DORA requires periodic testing; (5) validate physical/logical segregation by confirming backup destinations differ from source infrastructure.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype IN (\"veeam:backup\",\"commvault:job\",\"rubrik:event\",\"aws:backup\") earliest=-7d\n| eval status=lower(coalesce(status, result, state))\n| eval success=if(match(status,\"(?i)success|completed|ok\"), 1, 0)\n| eval failed=if(match(status,\"(?i)fail|error|warning\"), 1, 0)\n| stats sum(success) as successes, sum(failed) as failures, latest(_time) as last_backup by job_name, target_system\n| lookup dora_critical_systems.csv system_name AS target_system OUTPUT critical_function, backup_frequency_hours\n| where isnotnull(critical_function)\n| eval hours_since_backup=round((now()-last_backup)/3600, 1)\n| eval backup_overdue=if(isnotnull(backup_frequency_hours) AND hours_since_backup > backup_frequency_hours, \"OVERDUE\", \"OK\")\n| eval backup_success_rate=round(100*successes/(successes+failures), 1)\n| where failures > 0 OR backup_overdue=\"OVERDUE\" OR backup_success_rate < 95\n| sort - failures\n| table target_system, critical_function, successes, failures, backup_success_rate, hours_since_backup, backup_overdue\n```\n\nUnderstanding this SPL\n\n**DORA Backup Completeness and Restoration Testing (Art. 12)** — Article 12 requires documented backup policies specifying scope and frequency based on criticality, with backup systems physically and logically segregated. Restoration procedures must be periodically tested. This use case tracks backup job success/failure for DORA-regulated systems, validates segregation requirements, and monitors restoration test completion — going beyond cloud DR replication (UC-22.3.5) to cover the full backup lifecycle.\n\nDocumented **Data sources**: `index=backup` (backup software logs), `dora_backup_schedule.csv` (expected backup frequency per system). **App/TA** (typical add-on context): Veeam App for Splunk (Splunkbase 7312), Splunk ITSI (Splunkbase 1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **success** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **failed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by job_name, target_system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(critical_function)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **hours_since_backup** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **backup_overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **backup_success_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failures > 0 OR backup_overdue=\"OVERDUE\" OR backup_success_rate < 95` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **DORA Backup Completeness and Restoration Testing (Art. 12)**): table target_system, critical_function, successes, failures, backup_success_rate, hours_since_backup, backup_overdue\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (backup status per critical system), Single value (backup success rate %), Bar chart (failures by system), Timeline (backup history).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of backup completeness and restoration testing — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "ind": "Financial Services",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi",
                "veeam"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.12",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.12 (Backup policies and recovery methods) is enforced — Splunk UC-22.3.9: DORA Backup Completeness and Restoration Testing.",
                  "ea": "Saved search 'UC-22.3.9' running on index=backup (backup software logs), dora_backup_schedule.csv (expected backup frequency per system), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Veeam App for Splunk",
                  "id": 7312,
                  "url": "https://splunkbase.splunk.com/app/7312",
                  "desc": "Monitoring and security dashboards for Veeam Backup job statuses and events",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/033805d0-fe3c-11ee-a32e-be99bb517a22.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/0a08b346-fe3c-11ee-9b85-7afc4dbed252.png"
                  ]
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.10",
              "n": "DORA Post-Incident Review and Learning (Art. 13)",
              "c": "high",
              "f": "intermediate",
              "v": "Article 13 requires financial entities to have capabilities and staff to learn from ICT-related incidents, share lessons, and evolve their ICT risk management framework. This use case tracks whether major incidents result in completed post-incident reviews (PIRs), that root causes are documented, and that improvement actions are implemented — providing the \"learning and evolving\" evidence DORA specifically mandates.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`` `notable` `` macro, `index=itsm` (PIR/RCA records), `dora_pir_completion.csv` (KV store)",
              "q": "`notable` urgency IN (\"high\",\"critical\") status=\"Closed\" earliest=-180d\n| lookup dora_pir_completion.csv notable_id AS event_id OUTPUT pir_status, pir_date, root_cause, improvement_actions, actions_completed\n| eval pir_due=if(isnull(pir_status), \"MISSING\", pir_status)\n| eval actions_closed=if(isnotnull(actions_completed) AND actions_completed=\"yes\", \"DONE\", \"OPEN\")\n| stats count by pir_due, actions_closed\n| append [\n    search `notable` urgency IN (\"high\",\"critical\") status=\"Closed\" earliest=-180d\n    | lookup dora_pir_completion.csv notable_id AS event_id OUTPUT pir_status, improvement_actions, actions_completed\n    | where pir_status!=\"completed\" OR isnull(pir_status) OR actions_completed!=\"yes\"\n    | table _time, rule_name, urgency, pir_status, improvement_actions, actions_completed\n    | sort - _time\n]",
              "m": "(1) Create `dora_pir_completion.csv` KV store linking ES notable IDs to PIR records; (2) require PIR completion within 30 days of incident closure; (3) track root cause categories for trend analysis; (4) monitor improvement action implementation status; (5) report on learning cycle completion rates to management body; (6) share anonymised lessons across business units as Art. 13 requires.",
              "z": "Pie chart (PIR completion status), Table (incidents without PIR), Bar chart (root cause categories), Single value (PIR completion rate %), Timeline (PIR lifecycle).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `` `notable` `` macro, `index=itsm` (PIR/RCA records), `dora_pir_completion.csv` (KV store).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create `dora_pir_completion.csv` KV store linking ES notable IDs to PIR records; (2) require PIR completion within 30 days of incident closure; (3) track root cause categories for trend analysis; (4) monitor improvement action implementation status; (5) report on learning cycle completion rates to management body; (6) share anonymised lessons across business units as Art. 13 requires.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` urgency IN (\"high\",\"critical\") status=\"Closed\" earliest=-180d\n| lookup dora_pir_completion.csv notable_id AS event_id OUTPUT pir_status, pir_date, root_cause, improvement_actions, actions_completed\n| eval pir_due=if(isnull(pir_status), \"MISSING\", pir_status)\n| eval actions_closed=if(isnotnull(actions_completed) AND actions_completed=\"yes\", \"DONE\", \"OPEN\")\n| stats count by pir_due, actions_closed\n| append [\n    search `notable` urgency IN (\"high\",\"critical\") status=\"Closed\" earliest=-180d\n    | lookup dora_pir_completion.csv notable_id AS event_id OUTPUT pir_status, improvement_actions, actions_completed\n    | where pir_status!=\"completed\" OR isnull(pir_status) OR actions_completed!=\"yes\"\n    | table _time, rule_name, urgency, pir_status, improvement_actions, actions_completed\n    | sort - _time\n]\n```\n\nUnderstanding this SPL\n\n**DORA Post-Incident Review and Learning (Art. 13)** — Article 13 requires financial entities to have capabilities and staff to learn from ICT-related incidents, share lessons, and evolve their ICT risk management framework. This use case tracks whether major incidents result in completed post-incident reviews (PIRs), that root causes are documented, and that improvement actions are implemented — providing the \"learning and evolving\" evidence DORA specifically mandates.\n\nDocumented **Data sources**: `` `notable` `` macro, `index=itsm` (PIR/RCA records), `dora_pir_completion.csv` (KV store). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **pir_due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **actions_closed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by pir_due, actions_closed** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Appends rows from a subsearch with `append`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (PIR completion status), Table (incidents without PIR), Bar chart (root cause categories), Single value (PIR completion rate %), Timeline (PIR lifecycle).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of post-incident review and learning — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.13 is enforced — Splunk UC-22.3.10: DORA Post-Incident Review and Learning.",
                  "ea": "Saved search 'UC-22.3.10' running on notable macro, index=itsm (PIR/RCA records), dora_pir_completion.csv (KV store), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.11",
              "n": "DORA Major ICT Incident 7-Criteria Classification (Art. 18)",
              "c": "critical",
              "f": "advanced",
              "v": "DORA Article 18 and RTS 2024/1772 define seven classification criteria for determining whether an ICT incident is \"major\" — an incident meeting two or more criteria thresholds triggers mandatory reporting. This use case automates the classification assessment against all seven criteria, replacing manual spreadsheet-based classification and accelerating the 4-hour initial notification deadline.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk ITSI (Splunkbase 1841)",
              "d": "`` `notable` `` macro, `index=itsi_summary`, CIM Authentication and Network_Traffic data models, `dora_service_client_mapping.csv`",
              "q": "`notable` urgency IN (\"high\",\"critical\") status!=\"Closed\" earliest=-48h\n| eval affected_systems=mvappend(src, dest)\n| mvexpand affected_systems\n| lookup dora_service_client_mapping.csv system_name AS affected_systems OUTPUT service_name, client_count, client_pct, countries_served, data_classification\n| stats dc(affected_systems) as systems_hit, max(client_count) as max_clients_affected, max(client_pct) as max_client_pct,\n        dc(countries_served) as geo_spread, values(data_classification) as data_types by rule_name, urgency\n| eval c1_clients=if(max_client_pct > 10 OR max_clients_affected > 500, 1, 0)\n| eval c2_geographic=if(geo_spread > 1, 1, 0)\n| eval c3_duration=\"ASSESS_MANUALLY\"\n| eval c4_data_loss=if(match(mvjoin(data_types,\",\"),\"(?i)confidential|restricted|pii|financial\"), 1, 0)\n| eval criteria_met=c1_clients + c2_geographic + c4_data_loss\n| eval classification=if(criteria_met >= 2, \"MAJOR — mandatory reporting\", \"Significant or below — assess remaining criteria\")\n| eval reporting_deadline=if(classification=\"MAJOR — mandatory reporting\", \"4h initial + 72h intermediate + 1mo final\", \"Monitor\")\n| table rule_name, urgency, systems_hit, max_clients_affected, max_client_pct, geo_spread, data_types, criteria_met, classification, reporting_deadline\n| sort - criteria_met",
              "m": "(1) Build `dora_service_client_mapping.csv` mapping ICT systems to business services, client counts/percentages, geographic reach, and data classifications; (2) automate criteria 1 (clients), 2 (geographic), and 4 (data loss) — criteria 3 (duration), 5-7 require manual assessment initially; (3) alert immediately when criteria_met >= 2 — the 4-hour reporting clock starts; (4) integrate with SOAR for automated notification workflow; (5) document classification rationale for each incident.",
              "z": "Table (incident classification results), Single value (major incidents requiring reporting), Bar chart (criteria met per incident), Traffic light (classification status).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk ITSI (Splunkbase 1841).\n• Ensure the following data sources are available: `` `notable` `` macro, `index=itsi_summary`, CIM Authentication and Network_Traffic data models, `dora_service_client_mapping.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build `dora_service_client_mapping.csv` mapping ICT systems to business services, client counts/percentages, geographic reach, and data classifications; (2) automate criteria 1 (clients), 2 (geographic), and 4 (data loss) — criteria 3 (duration), 5-7 require manual assessment initially; (3) alert immediately when criteria_met >= 2 — the 4-hour reporting clock starts; (4) integrate with SOAR for automated notification workflow; (5) document classification rationale for each incident.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` urgency IN (\"high\",\"critical\") status!=\"Closed\" earliest=-48h\n| eval affected_systems=mvappend(src, dest)\n| mvexpand affected_systems\n| lookup dora_service_client_mapping.csv system_name AS affected_systems OUTPUT service_name, client_count, client_pct, countries_served, data_classification\n| stats dc(affected_systems) as systems_hit, max(client_count) as max_clients_affected, max(client_pct) as max_client_pct,\n        dc(countries_served) as geo_spread, values(data_classification) as data_types by rule_name, urgency\n| eval c1_clients=if(max_client_pct > 10 OR max_clients_affected > 500, 1, 0)\n| eval c2_geographic=if(geo_spread > 1, 1, 0)\n| eval c3_duration=\"ASSESS_MANUALLY\"\n| eval c4_data_loss=if(match(mvjoin(data_types,\",\"),\"(?i)confidential|restricted|pii|financial\"), 1, 0)\n| eval criteria_met=c1_clients + c2_geographic + c4_data_loss\n| eval classification=if(criteria_met >= 2, \"MAJOR — mandatory reporting\", \"Significant or below — assess remaining criteria\")\n| eval reporting_deadline=if(classification=\"MAJOR — mandatory reporting\", \"4h initial + 72h intermediate + 1mo final\", \"Monitor\")\n| table rule_name, urgency, systems_hit, max_clients_affected, max_client_pct, geo_spread, data_types, criteria_met, classification, reporting_deadline\n| sort - criteria_met\n```\n\nUnderstanding this SPL\n\n**DORA Major ICT Incident 7-Criteria Classification (Art. 18)** — DORA Article 18 and RTS 2024/1772 define seven classification criteria for determining whether an ICT incident is \"major\" — an incident meeting two or more criteria thresholds triggers mandatory reporting. This use case automates the classification assessment against all seven criteria, replacing manual spreadsheet-based classification and accelerating the 4-hour initial notification deadline.\n\nDocumented **Data sources**: `` `notable` `` macro, `index=itsi_summary`, CIM Authentication and Network_Traffic data models, `dora_service_client_mapping.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk ITSI (Splunkbase 1841). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **affected_systems** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by rule_name, urgency** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **c1_clients** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **c2_geographic** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **c3_duration** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **c4_data_loss** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **criteria_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **classification** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **reporting_deadline** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **DORA Major ICT Incident 7-Criteria Classification (Art. 18)**): table rule_name, urgency, systems_hit, max_clients_affected, max_client_pct, geo_spread, data_types, criteria_met, classification, report…\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (incident classification results), Single value (major incidents requiring reporting), Bar chart (criteria met per incident), Traffic light (classification status).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We dORA Article 18 and RTS 2024/1772 define seven classification criteria for determining whether an ICT incident is \"major\" — an incident meeting two or more criteria thresholds triggers mandatory reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.18 (Classification of ICT-related incidents) is enforced — Splunk UC-22.3.11: DORA Major ICT Incident 7-Criteria Classification.",
                  "ea": "Saved search 'UC-22.3.11' running on notable macro, index=itsi_summary, CIM Authentication and Network_Traffic data models, dora_service_client_mapping.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.12",
              "n": "DORA ICT Incident Intermediate and Final Report Tracking (Art. 19)",
              "c": "critical",
              "f": "intermediate",
              "v": "Article 19 mandates a three-report lifecycle for major ICT incidents: initial notification (4h), intermediate report (72h), and final report (1 month). UC-22.3.2 covers the initial classification, but this use case tracks the full reporting lifecycle — ensuring intermediate reports include quantified impact, preliminary root cause, and remediation status, and that final reports contain complete root cause analysis, corrective actions, and lessons learned.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`` `notable` `` macro, `dora_incident_reports.csv` (KV store tracking report submissions)",
              "q": "`notable` urgency IN (\"high\",\"critical\") earliest=-60d\n| lookup dora_incident_reports.csv notable_id AS event_id OUTPUT dora_classification, initial_report_time, intermediate_report_time, final_report_time, initial_submitted, intermediate_submitted, final_submitted\n| where dora_classification=\"major\"\n| eval hours_since_detection=round((now()-_time)/3600, 2)\n| eval initial_status=case(initial_submitted=\"yes\",\"SUBMITTED\", hours_since_detection<=4,\"WITHIN_WINDOW\", 1=1,\"OVERDUE\")\n| eval intermediate_status=case(intermediate_submitted=\"yes\",\"SUBMITTED\", hours_since_detection<=72,\"WITHIN_WINDOW\", 1=1,\"OVERDUE\")\n| eval final_status=case(final_submitted=\"yes\",\"SUBMITTED\", hours_since_detection<=(30*24),\"WITHIN_WINDOW\", 1=1,\"OVERDUE\")\n| where initial_status=\"OVERDUE\" OR intermediate_status=\"OVERDUE\" OR final_status=\"OVERDUE\" OR intermediate_status=\"WITHIN_WINDOW\" OR final_status=\"WITHIN_WINDOW\"\n| table _time, rule_name, urgency, hours_since_detection, initial_status, intermediate_status, final_status, owner\n| sort - hours_since_detection",
              "m": "(1) Create `dora_incident_reports.csv` KV store linking classified major incidents to their three report submission timestamps; (2) alert at 3h for initial report, at 48h for intermediate report, and at 21 days for final report; (3) validate intermediate report contains quantified impact and preliminary root cause; (4) validate final report contains complete root cause, corrective actions, and medium/long-term plan; (5) submit to competent authority using mandated templates.",
              "z": "Table (report status per major incident), Traffic lights (report deadlines), Single value (overdue reports), Timeline (reporting lifecycle).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `` `notable` `` macro, `dora_incident_reports.csv` (KV store tracking report submissions).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create `dora_incident_reports.csv` KV store linking classified major incidents to their three report submission timestamps; (2) alert at 3h for initial report, at 48h for intermediate report, and at 21 days for final report; (3) validate intermediate report contains quantified impact and preliminary root cause; (4) validate final report contains complete root cause, corrective actions, and medium/long-term plan; (5) submit to competent authority using mandated templates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` urgency IN (\"high\",\"critical\") earliest=-60d\n| lookup dora_incident_reports.csv notable_id AS event_id OUTPUT dora_classification, initial_report_time, intermediate_report_time, final_report_time, initial_submitted, intermediate_submitted, final_submitted\n| where dora_classification=\"major\"\n| eval hours_since_detection=round((now()-_time)/3600, 2)\n| eval initial_status=case(initial_submitted=\"yes\",\"SUBMITTED\", hours_since_detection<=4,\"WITHIN_WINDOW\", 1=1,\"OVERDUE\")\n| eval intermediate_status=case(intermediate_submitted=\"yes\",\"SUBMITTED\", hours_since_detection<=72,\"WITHIN_WINDOW\", 1=1,\"OVERDUE\")\n| eval final_status=case(final_submitted=\"yes\",\"SUBMITTED\", hours_since_detection<=(30*24),\"WITHIN_WINDOW\", 1=1,\"OVERDUE\")\n| where initial_status=\"OVERDUE\" OR intermediate_status=\"OVERDUE\" OR final_status=\"OVERDUE\" OR intermediate_status=\"WITHIN_WINDOW\" OR final_status=\"WITHIN_WINDOW\"\n| table _time, rule_name, urgency, hours_since_detection, initial_status, intermediate_status, final_status, owner\n| sort - hours_since_detection\n```\n\nUnderstanding this SPL\n\n**DORA ICT Incident Intermediate and Final Report Tracking (Art. 19)** — Article 19 mandates a three-report lifecycle for major ICT incidents: initial notification (4h), intermediate report (72h), and final report (1 month). UC-22.3.2 covers the initial classification, but this use case tracks the full reporting lifecycle — ensuring intermediate reports include quantified impact, preliminary root cause, and remediation status, and that final reports contain complete root cause analysis, corrective actions, and lessons learned.\n\nDocumented **Data sources**: `` `notable` `` macro, `dora_incident_reports.csv` (KV store tracking report submissions). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where dora_classification=\"major\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **hours_since_detection** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **initial_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **intermediate_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **final_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where initial_status=\"OVERDUE\" OR intermediate_status=\"OVERDUE\" OR final_status=\"OVERDUE\" OR intermediate_status=\"WITHIN_WI…` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA ICT Incident Intermediate and Final Report Tracking (Art. 19)**): table _time, rule_name, urgency, hours_since_detection, initial_status, intermediate_status, final_status, owner\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (report status per major incident), Traffic lights (report deadlines), Single value (overdue reports), Timeline (reporting lifecycle).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of ict incident intermediate and final report tracking — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.19",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.19 (Reporting of major ICT-related incidents) is enforced — Splunk UC-22.3.12: DORA ICT Incident Intermediate and Final Report Tracking.",
                  "ea": "Saved search 'UC-22.3.12' running on notable macro, dora_incident_reports.csv (KV store tracking report submissions), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.13",
              "n": "DORA Register of Information for ICT Third-Party Arrangements (Art. 28(3))",
              "c": "high",
              "f": "intermediate",
              "v": "Article 28(3) requires financial entities to maintain and update a register of information on all contractual arrangements with ICT third-party service providers, distinguishing those supporting critical or important functions. This use case validates register completeness by comparing actual ICT provider traffic against the register and detecting unregistered providers or stale entries.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for AWS (Splunkbase 1876)",
              "d": "CIM Network_Traffic data model, DNS logs, `dora_ict_provider_register.csv`",
              "q": "| tstats `summariesonly` sum(All_Traffic.bytes) as total_bytes dc(All_Traffic.src) as internal_sources\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"allowed\" NOT All_Traffic.dest_category IN (\"internal\",\"internal_server\")\n  by All_Traffic.dest\n| rename All_Traffic.* as *\n| iplocation dest\n| lookup dora_ict_provider_register.csv dest_domain AS dest OUTPUT provider_name, contract_id, criticality, exit_plan_status, last_review_date\n| eval in_register=if(isnotnull(provider_name), \"REGISTERED\", \"NOT_IN_REGISTER\")\n| eval review_overdue=if(isnotnull(last_review_date) AND (now()-strptime(last_review_date,\"%Y-%m-%d\"))/86400 > 365, \"OVERDUE\", \"OK\")\n| eval bytes_gb=round(total_bytes/1073741824, 2)\n| where (in_register=\"NOT_IN_REGISTER\" AND bytes_gb > 0.1) OR review_overdue=\"OVERDUE\"\n| sort - bytes_gb\n| table dest, Country, provider_name, in_register, bytes_gb, internal_sources, criticality, exit_plan_status, review_overdue",
              "m": "(1) Maintain `dora_ict_provider_register.csv` with all ICT third-party arrangements per Art. 28(3) requirements; (2) include contract IDs, criticality flags, exit plan status, and review dates; (3) compare actual network traffic destinations against registered providers; (4) flag unregistered high-volume destinations for procurement review; (5) submit register to competent authorities upon request; (6) review annually with management body approval.",
              "z": "Table (unregistered providers), Single value (register coverage %), Bar chart (traffic to unregistered destinations), Pie chart (registered vs unregistered by volume).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for AWS (Splunkbase 1876).\n• Ensure the following data sources are available: CIM Network_Traffic data model, DNS logs, `dora_ict_provider_register.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `dora_ict_provider_register.csv` with all ICT third-party arrangements per Art. 28(3) requirements; (2) include contract IDs, criticality flags, exit plan status, and review dates; (3) compare actual network traffic destinations against registered providers; (4) flag unregistered high-volume destinations for procurement review; (5) submit register to competent authorities upon request; (6) review annually with management body approval.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` sum(All_Traffic.bytes) as total_bytes dc(All_Traffic.src) as internal_sources\n  from datamodel=Network_Traffic.All_Traffic\n  where All_Traffic.action=\"allowed\" NOT All_Traffic.dest_category IN (\"internal\",\"internal_server\")\n  by All_Traffic.dest\n| rename All_Traffic.* as *\n| iplocation dest\n| lookup dora_ict_provider_register.csv dest_domain AS dest OUTPUT provider_name, contract_id, criticality, exit_plan_status, last_review_date\n| eval in_register=if(isnotnull(provider_name), \"REGISTERED\", \"NOT_IN_REGISTER\")\n| eval review_overdue=if(isnotnull(last_review_date) AND (now()-strptime(last_review_date,\"%Y-%m-%d\"))/86400 > 365, \"OVERDUE\", \"OK\")\n| eval bytes_gb=round(total_bytes/1073741824, 2)\n| where (in_register=\"NOT_IN_REGISTER\" AND bytes_gb > 0.1) OR review_overdue=\"OVERDUE\"\n| sort - bytes_gb\n| table dest, Country, provider_name, in_register, bytes_gb, internal_sources, criticality, exit_plan_status, review_overdue\n```\n\nUnderstanding this SPL\n\n**DORA Register of Information for ICT Third-Party Arrangements (Art. 28(3))** — Article 28(3) requires financial entities to maintain and update a register of information on all contractual arrangements with ICT third-party service providers, distinguishing those supporting critical or important functions. This use case validates register completeness by comparing actual ICT provider traffic against the register and detecting unregistered providers or stale entries.\n\nDocumented **Data sources**: CIM Network_Traffic data model, DNS logs, `dora_ict_provider_register.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Pipeline stage (see **DORA Register of Information for ICT Third-Party Arrangements (Art. 
28(3))**): iplocation dest\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **in_register** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **review_overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **bytes_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (in_register=\"NOT_IN_REGISTER\" AND bytes_gb > 0.1) OR review_overdue=\"OVERDUE\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **DORA Register of Information for ICT Third-Party Arrangements (Art. 28(3))**): table dest, Country, provider_name, in_register, bytes_gb, internal_sources, criticality, exit_plan_status, review_overdue\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DORA Register of Information for ICT Third-Party Arrangements (Art. 28(3))** — Article 28(3) requires financial entities to maintain and update a register of information on all contractual arrangements with ICT third-party service providers, distinguishing those supporting critical or important functions. This use case validates register completeness by comparing actual ICT provider traffic against the register and detecting unregistered providers or stale entries.\n\nDocumented **Data sources**: CIM Network_Traffic data model, DNS logs, `dora_ict_provider_register.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unregistered providers), Single value (register coverage %), Bar chart (traffic to unregistered destinations), Pie chart (registered vs unregistered by volume).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of register of information for ict third-party arrangements — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.28(3)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.28(3) is enforced — Splunk UC-22.3.13: DORA Register of Information for ICT Third-Party Arrangements.",
                  "ea": "Saved search 'UC-22.3.13' running on CIM Network_Traffic data model, DNS logs, dora_ict_provider_register.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.14",
              "n": "DORA ICT Third-Party SLA Performance Monitoring (Art. 30)",
              "c": "high",
              "f": "intermediate",
              "v": "Article 30 requires contractual arrangements to include precise quantitative and qualitative performance targets for services supporting critical functions. This use case monitors actual ICT provider performance against SLA targets — availability, response time, throughput — detecting SLA breaches that may indicate degraded operational resilience and trigger contractual remediation or exit procedures.",
              "t": "Splunk ITSI (Splunkbase 1841), Splunk Synthetic Monitoring",
              "d": "`index=itsi_summary` (service/KPI data for provider-dependent services), synthetic monitoring results, `dora_provider_slas.csv`",
              "q": "index=itsi_summary is_service_in_maintenance=0 earliest=-30d\n| lookup dora_provider_slas.csv service_name OUTPUT provider_name, sla_availability_pct, sla_response_ms, contract_id, criticality\n| where isnotnull(provider_name)\n| stats avg(health_score) as avg_health, count(eval(severity_value>=4)) as critical_breaches, count as total_observations by service_name, provider_name, sla_availability_pct, criticality\n| eval actual_availability=round(100*(total_observations - critical_breaches)/total_observations, 2)\n| eval sla_met=if(actual_availability >= sla_availability_pct, \"MET\", \"BREACHED\")\n| where sla_met=\"BREACHED\" OR avg_health < 80\n| sort actual_availability\n| table service_name, provider_name, criticality, sla_availability_pct, actual_availability, sla_met, critical_breaches, avg_health",
              "m": "(1) Map ITSI services to ICT providers and their SLA targets in `dora_provider_slas.csv`; (2) include availability, response time, and throughput SLAs; (3) alert on SLA breaches for critical function providers; (4) escalate repeated breaches for exit strategy activation; (5) report provider performance to management body quarterly.",
              "z": "Table (provider SLA compliance), Gauge (availability per provider), Bar chart (SLA breaches by provider), Line chart (provider health trend).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (Splunkbase 1841), Splunk Synthetic Monitoring.\n• Ensure the following data sources are available: `index=itsi_summary` (service/KPI data for provider-dependent services), synthetic monitoring results, `dora_provider_slas.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map ITSI services to ICT providers and their SLA targets in `dora_provider_slas.csv`; (2) include availability, response time, and throughput SLAs; (3) alert on SLA breaches for critical function providers; (4) escalate repeated breaches for exit strategy activation; (5) report provider performance to management body quarterly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary is_service_in_maintenance=0 earliest=-30d\n| lookup dora_provider_slas.csv service_name OUTPUT provider_name, sla_availability_pct, sla_response_ms, contract_id, criticality\n| where isnotnull(provider_name)\n| stats avg(health_score) as avg_health, count(eval(severity_value>=4)) as critical_breaches, count as total_observations by service_name, provider_name, sla_availability_pct, criticality\n| eval actual_availability=round(100*(total_observations - critical_breaches)/total_observations, 2)\n| eval sla_met=if(actual_availability >= sla_availability_pct, \"MET\", \"BREACHED\")\n| where sla_met=\"BREACHED\" OR avg_health < 80\n| sort actual_availability\n| table service_name, provider_name, criticality, sla_availability_pct, actual_availability, sla_met, critical_breaches, avg_health\n```\n\nUnderstanding this SPL\n\n**DORA ICT Third-Party SLA Performance Monitoring (Art. 30)** — Article 30 requires contractual arrangements to include precise quantitative and qualitative performance targets for services supporting critical functions. This use case monitors actual ICT provider performance against SLA targets — availability, response time, throughput — detecting SLA breaches that may indicate degraded operational resilience and trigger contractual remediation or exit procedures.\n\nDocumented **Data sources**: `index=itsi_summary` (service/KPI data for provider-dependent services), synthetic monitoring results, `dora_provider_slas.csv`. **App/TA** (typical add-on context): Splunk ITSI (Splunkbase 1841), Splunk Synthetic Monitoring. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(provider_name)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by service_name, provider_name, sla_availability_pct, criticality** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **actual_availability** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where sla_met=\"BREACHED\" OR avg_health < 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **DORA ICT Third-Party SLA Performance Monitoring (Art. 30)**): table service_name, provider_name, criticality, sla_availability_pct, actual_availability, sla_met, critical_breaches, avg_health\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (provider SLA compliance), Gauge (availability per provider), Bar chart (SLA breaches by provider), Line chart (provider health trend).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of ict third-party sla performance monitoring — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Performance",
                "Compliance"
              ],
              "ind": "Financial Services",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.30",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.30 is enforced — Splunk UC-22.3.14: DORA ICT Third-Party SLA Performance Monitoring.",
                  "ea": "Saved search 'UC-22.3.14' running on index itsi_summary and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.15",
              "n": "DORA ICT Access Control and Authentication Monitoring (Art. 9(4)(c))",
              "c": "critical",
              "f": "intermediate",
              "v": "Article 9(4)(c) requires policies for digital identity management, access control limiting physical and logical access to ICT assets and data, and strong authentication mechanisms. This use case monitors authentication patterns across DORA-regulated ICT systems — detecting shared accounts, weak authentication, privilege escalation, and access from unauthorized locations — providing the access control evidence DORA mandates for protection and prevention.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110)",
              "d": "CIM Authentication data model, `index=auth`, `index=windows`, `index=azure`",
              "q": "| tstats `summariesonly` count dc(Authentication.src) as src_count dc(Authentication.dest) as dest_count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\"\n  by Authentication.user Authentication.app span=1d\n| rename Authentication.* as *\n| lookup dora_critical_systems.csv system_name AS dest OUTPUT critical_function\n| where isnotnull(critical_function)\n| eval risk_signals=mvappend(\n    if(src_count > 5, \"MULTI_LOCATION\", null()),\n    if(match(lower(user),\"(?i)shared|generic|test|admin[0-9]\"), \"SHARED_ACCOUNT\", null()),\n    if(match(lower(app),\"(?i)password|basic|ntlm\") AND NOT match(lower(app),\"(?i)mfa|2fa|cert|kerberos\"), \"WEAK_AUTH\", null()))\n| where isnotnull(risk_signals)\n| table user, app, critical_function, count, src_count, dest_count, risk_signals\n| sort - count",
              "m": "(1) Ensure CIM Authentication data model is populated from domain controllers, IdPs, and cloud auth sources; (2) tag critical ICT systems in `dora_critical_systems.csv`; (3) alert on shared accounts accessing critical functions; (4) detect authentication without MFA for privileged operations; (5) report access control compliance to management body; (6) correlate with HR data for joiner/mover/leaver validation.",
              "z": "Table (access control risks), Bar chart (risks by type), Single value (shared account usage), Heatmap (user × critical system access).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1110",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110).\n• Ensure the following data sources are available: CIM Authentication data model, `index=auth`, `index=windows`, `index=azure`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure CIM Authentication data model is populated from domain controllers, IdPs, and cloud auth sources; (2) tag critical ICT systems in `dora_critical_systems.csv`; (3) alert on shared accounts accessing critical functions; (4) detect authentication without MFA for privileged operations; (5) report access control compliance to management body; (6) correlate with HR data for joiner/mover/leaver validation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats `summariesonly` count dc(Authentication.src) as src_count dc(Authentication.dest) as dest_count\n  from datamodel=Authentication.Authentication\n  where Authentication.action=\"success\"\n  by Authentication.user Authentication.app span=1d\n| rename Authentication.* as *\n| lookup dora_critical_systems.csv system_name AS dest OUTPUT critical_function\n| where isnotnull(critical_function)\n| eval risk_signals=mvappend(\n    if(src_count > 5, \"MULTI_LOCATION\", null()),\n    if(match(lower(user),\"(?i)shared|generic|test|admin[0-9]\"), \"SHARED_ACCOUNT\", null()),\n    if(match(lower(app),\"(?i)password|basic|ntlm\") AND NOT match(lower(app),\"(?i)mfa|2fa|cert|kerberos\"), \"WEAK_AUTH\", null()))\n| where isnotnull(risk_signals)\n| table user, app, critical_function, count, src_count, dest_count, risk_signals\n| sort - count\n```\n\nUnderstanding this SPL\n\n**DORA ICT Access Control and Authentication Monitoring (Art. 9(4)(c))** — Article 9(4)(c) requires policies for digital identity management, access control limiting physical and logical access to ICT assets and data, and strong authentication mechanisms. This use case monitors authentication patterns across DORA-regulated ICT systems — detecting shared accounts, weak authentication, privilege escalation, and access from unauthorized locations — providing the access control evidence DORA mandates for protection and prevention.\n\nDocumented **Data sources**: CIM Authentication data model, `index=auth`, `index=windows`, `index=azure`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(critical_function)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **risk_signals** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(risk_signals)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA ICT Access Control and Authentication Monitoring (Art. 9(4)(c))**): table user, app, critical_function, count, src_count, dest_count, risk_signals\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DORA ICT Access Control and Authentication Monitoring (Art. 9(4)(c))** — Article 9(4)(c) requires policies for digital identity management, access control limiting physical and logical access to ICT assets and data, and strong authentication mechanisms. This use case monitors authentication patterns across DORA-regulated ICT systems — detecting shared accounts, weak authentication, privilege escalation, and access from unauthorized locations — providing the access control evidence DORA mandates for protection and prevention.\n\nDocumented **Data sources**: CIM Authentication data model, `index=auth`, `index=windows`, `index=azure`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (access control risks), Bar chart (risks by type), Single value (shared account usage), Heatmap (user × critical system access).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of ict access control and authentication monitoring — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.9(4)(c)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.9(4)(c) is enforced — Splunk UC-22.3.15: DORA ICT Access Control and Authentication Monitoring.",
                  "ea": "Saved search 'UC-22.3.15' running on CIM Authentication data model, index=auth, index=windows, index=azure, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.16",
              "n": "DORA Vulnerability Assessment and Penetration Test Tracking (Art. 25)",
              "c": "high",
              "f": "intermediate",
              "v": "Article 25 requires vulnerability assessments, network security assessments, source code reviews, scenario-based tests, and penetration testing for all ICT systems supporting critical functions. This use case tracks test execution, coverage, and finding remediation — ensuring the full testing program required by DORA is executed on schedule and that identified vulnerabilities are remediated within defined SLAs.",
              "t": "Tenable Add-On for Splunk (Splunkbase 4060), Splunk Add-on for Qualys (Splunkbase 2964)",
              "d": "`index=vulnerability` (scan results), `dora_testing_schedule.csv` (planned test calendar), `index=itsm` (remediation tickets)",
              "q": "| inputlookup dora_testing_schedule.csv\n| eval planned_date=strptime(scheduled_date, \"%Y-%m-%d\")\n| eval days_until_due=round((planned_date - now())/86400, 0)\n| eval test_status=case(\n    completed=\"yes\", \"COMPLETED\",\n    days_until_due < 0, \"OVERDUE\",\n    days_until_due <= 30, \"DUE_SOON\",\n    1=1, \"SCHEDULED\")\n| append [\n    search index=vulnerability sourcetype IN (\"tenable:vuln\",\"qualys:hostdetection\") state=\"Active\" earliest=-90d\n    | lookup dora_critical_systems.csv system_name AS host OUTPUT critical_function\n    | where isnotnull(critical_function)\n    | eval sla_days=case(severity=\"Critical\",7, severity=\"High\",30, severity=\"Medium\",90, 1=1,180)\n    | eval age_days=round((now()-first_found)/86400,1)\n    | eval sla_breach=if(age_days > sla_days, 1, 0)\n    | stats sum(sla_breach) as overdue_vulns, count as total_vulns by critical_function\n    | eval test_type=\"Vulnerability_Remediation\", test_status=if(overdue_vulns > 0, \"REMEDIATION_OVERDUE\", \"ON_TRACK\")\n    | table test_type, critical_function, total_vulns, overdue_vulns, test_status\n]\n| table test_type, critical_function, scheduled_date, test_status, total_vulns, overdue_vulns\n| sort test_status",
              "m": "(1) Create `dora_testing_schedule.csv` with all planned vulnerability assessments, pen tests, source code reviews, and scenario tests per Art. 25 requirements; (2) alert when tests are overdue; (3) track vulnerability remediation SLAs for DORA-critical systems; (4) report testing coverage and finding trends to management body; (5) for central securities depositories and central counterparties, ensure pre-deployment vulnerability assessments per Art. 25(2).",
              "z": "Table (test schedule with status), Bar chart (overdue tests by type), Single value (testing coverage %), Pie chart (test status distribution).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [Splunk Add-on for Qualys](https://splunkbase.splunk.com/app/2964), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tenable Add-On for Splunk (Splunkbase 4060), Splunk Add-on for Qualys (Splunkbase 2964).\n• Ensure the following data sources are available: `index=vulnerability` (scan results), `dora_testing_schedule.csv` (planned test calendar), `index=itsm` (remediation tickets).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create `dora_testing_schedule.csv` with all planned vulnerability assessments, pen tests, source code reviews, and scenario tests per Art. 25 requirements; (2) alert when tests are overdue; (3) track vulnerability remediation SLAs for DORA-critical systems; (4) report testing coverage and finding trends to management body; (5) for central securities depositories and central counterparties, ensure pre-deployment vulnerability assessments per Art. 25(2).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup dora_testing_schedule.csv\n| eval planned_date=strptime(scheduled_date, \"%Y-%m-%d\")\n| eval days_until_due=round((planned_date - now())/86400, 0)\n| eval test_status=case(\n    completed=\"yes\", \"COMPLETED\",\n    days_until_due < 0, \"OVERDUE\",\n    days_until_due <= 30, \"DUE_SOON\",\n    1=1, \"SCHEDULED\")\n| append [\n    search index=vulnerability sourcetype IN (\"tenable:vuln\",\"qualys:hostdetection\") state=\"Active\" earliest=-90d\n    | lookup dora_critical_systems.csv system_name AS host OUTPUT critical_function\n    | where isnotnull(critical_function)\n    | eval sla_days=case(severity=\"Critical\",7, severity=\"High\",30, severity=\"Medium\",90, 1=1,180)\n    | eval age_days=round((now()-first_found)/86400,1)\n    | eval sla_breach=if(age_days > sla_days, 1, 0)\n    | stats sum(sla_breach) as overdue_vulns, count as total_vulns by critical_function\n    | eval test_type=\"Vulnerability_Remediation\", test_status=if(overdue_vulns > 0, \"REMEDIATION_OVERDUE\", \"ON_TRACK\")\n    | table test_type, critical_function, total_vulns, overdue_vulns, test_status\n]\n| table test_type, critical_function, scheduled_date, test_status, total_vulns, overdue_vulns\n| sort test_status\n```\n\nUnderstanding this SPL\n\n**DORA Vulnerability Assessment and Penetration Test Tracking (Art. 25)** — Article 25 requires vulnerability assessments, network security assessments, source code reviews, scenario-based tests, and penetration testing for all ICT systems supporting critical functions. This use case tracks test execution, coverage, and finding remediation — ensuring the full testing program required by DORA is executed on schedule and that identified vulnerabilities are remediated within defined SLAs.\n\nDocumented **Data sources**: `index=vulnerability` (scan results), `dora_testing_schedule.csv` (planned test calendar), `index=itsm` (remediation tickets). **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060), Splunk Add-on for Qualys (Splunkbase 2964). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **planned_date** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_until_due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **test_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Appends rows from a subsearch with `append`.\n• Pipeline stage (see **DORA Vulnerability Assessment and Penetration Test Tracking (Art. 25)**): table test_type, critical_function, scheduled_date, test_status, total_vulns, overdue_vulns\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DORA Vulnerability Assessment and Penetration Test Tracking (Art. 25)** — Article 25 requires vulnerability assessments, network security assessments, source code reviews, scenario-based tests, and penetration testing for all ICT systems supporting critical functions. This use case tracks test execution, coverage, and finding remediation — ensuring the full testing program required by DORA is executed on schedule and that identified vulnerabilities are remediated within defined SLAs.\n\nDocumented **Data sources**: `index=vulnerability` (scan results), `dora_testing_schedule.csv` (planned test calendar), `index=itsm` (remediation tickets). **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060), Splunk Add-on for Qualys (Splunkbase 2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (test schedule with status), Bar chart (overdue tests by type), Single value (testing coverage %), Pie chart (test status distribution).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of vulnerability assessment and penetration test tracking — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - agg_value",
              "e": [
                "qualys",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.25",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.25 is enforced — Splunk UC-22.3.16: DORA Vulnerability Assessment and Penetration Test Tracking.",
                  "ea": "Saved search 'UC-22.3.16' running on index=vulnerability (scan results), dora_testing_schedule.csv (planned test calendar), index=itsm (remediation tickets), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.17",
              "n": "DORA Threat-Led Penetration Testing (TLPT) Lifecycle (Art. 26)",
              "c": "high",
              "f": "advanced",
              "v": "Article 26 requires financial entities identified by competent authorities to conduct TLPT at least every three years, following TIBER-EU methodology with qualified external testers. This use case tracks the TLPT lifecycle — from threat intelligence scoping through red team execution to remediation of findings — ensuring the advanced testing programme that DORA mandates for systemically important entities is completed and acted upon.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`dora_tlpt_register.csv` (TLPT engagement tracking), `index=itsm` (remediation tickets), ES Notable events (from purple team exercises)",
              "q": "| inputlookup dora_tlpt_register.csv\n| eval last_tlpt_date=strptime(completion_date, \"%Y-%m-%d\")\n| eval months_since_tlpt=round((now()-last_tlpt_date)/(86400*30), 0)\n| eval next_due=if(months_since_tlpt >= 36, \"OVERDUE\", if(months_since_tlpt >= 30, \"DUE_WITHIN_6_MONTHS\", \"ON_TRACK\"))\n| eval findings_remediated=if(isnotnull(total_findings) AND isnotnull(findings_closed), round(100*findings_closed/total_findings,1), 0)\n| table scope, tester_organization, completion_date, months_since_tlpt, next_due, total_findings, findings_closed, findings_remediated, critical_findings_open\n| sort next_due",
              "m": "(1) Create `dora_tlpt_register.csv` with TLPT engagement records (scope, tester, completion date, finding counts); (2) track three-year cycle per Art. 26; (3) monitor finding remediation — critical findings should be remediated within 90 days; (4) validate tester qualifications per Art. 27; (5) coordinate pooled TLPT with entities sharing the same ICT provider where applicable; (6) report TLPT status to competent authority.",
              "z": "Table (TLPT status), Single value (months since last TLPT), Gauge (finding remediation %), Bar chart (open findings by severity).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `dora_tlpt_register.csv` (TLPT engagement tracking), `index=itsm` (remediation tickets), ES Notable events (from purple team exercises).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create `dora_tlpt_register.csv` with TLPT engagement records (scope, tester, completion date, finding counts); (2) track three-year cycle per Art. 26; (3) monitor finding remediation — critical findings should be remediated within 90 days; (4) validate tester qualifications per Art. 27; (5) coordinate pooled TLPT with entities sharing the same ICT provider where applicable; (6) report TLPT status to competent authority.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup dora_tlpt_register.csv\n| eval last_tlpt_date=strptime(completion_date, \"%Y-%m-%d\")\n| eval months_since_tlpt=round((now()-last_tlpt_date)/(86400*30), 0)\n| eval next_due=if(months_since_tlpt >= 36, \"OVERDUE\", if(months_since_tlpt >= 30, \"DUE_WITHIN_6_MONTHS\", \"ON_TRACK\"))\n| eval findings_remediated=if(isnotnull(total_findings) AND isnotnull(findings_closed), round(100*findings_closed/total_findings,1), 0)\n| table scope, tester_organization, completion_date, months_since_tlpt, next_due, total_findings, findings_closed, findings_remediated, critical_findings_open\n| sort next_due\n```\n\nUnderstanding this SPL\n\n**DORA Threat-Led Penetration Testing (TLPT) Lifecycle (Art. 26)** — Article 26 requires financial entities identified by competent authorities to conduct TLPT at least every three years, following TIBER-EU methodology with qualified external testers. This use case tracks the TLPT lifecycle — from threat intelligence scoping through red team execution to remediation of findings — ensuring the advanced testing programme that DORA mandates for systemically important entities is completed and acted upon.\n\nDocumented **Data sources**: `dora_tlpt_register.csv` (TLPT engagement tracking), `index=itsm` (remediation tickets), ES Notable events (from purple team exercises). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **last_tlpt_date** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **months_since_tlpt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **next_due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **findings_remediated** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **DORA Threat-Led Penetration Testing (TLPT) Lifecycle (Art. 
26)**): table scope, tester_organization, completion_date, months_since_tlpt, next_due, total_findings, findings_closed, findings_remediated, cri…\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (TLPT status), Single value (months since last TLPT), Gauge (finding remediation %), Bar chart (open findings by severity).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of threat-led penetration testing (tlpt) lifecycle — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.26 (Threat-led penetration testing) is enforced — Splunk UC-22.3.17: DORA Threat-Led Penetration Testing (TLPT) Lifecycle.",
                  "ea": "Saved search 'UC-22.3.17' running on index itsm and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.18",
              "n": "DORA ICT Third-Party Exit Strategy Readiness (Art. 28(8))",
              "c": "high",
              "f": "advanced",
              "v": "Article 28(8) requires exit strategies for all ICT arrangements supporting critical or important functions, with tested transition plans. This use case monitors exit strategy readiness by tracking whether exit plans exist, are tested, and have viable alternatives identified — combined with operational dependency metrics showing how deeply the entity relies on each provider, informing realistic transition timelines.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for AWS (Splunkbase 1876)",
              "d": "`dora_ict_provider_register.csv`, CIM Network_Traffic data model, cloud provider audit logs",
              "q": "| inputlookup dora_ict_provider_register.csv WHERE criticality IN (\"critical\",\"important\")\n| eval exit_plan_exists=if(isnotnull(exit_plan_status) AND exit_plan_status!=\"none\", 1, 0)\n| eval exit_plan_tested=if(exit_plan_status=\"tested\", 1, 0)\n| eval alternative_identified=if(isnotnull(alternative_provider), 1, 0)\n| eval last_test_date_epoch=if(isnotnull(exit_plan_test_date), strptime(exit_plan_test_date,\"%Y-%m-%d\"), null())\n| eval months_since_test=if(isnotnull(last_test_date_epoch), round((now()-last_test_date_epoch)/(86400*30),0), 999)\n| eval readiness=case(\n    exit_plan_tested=1 AND alternative_identified=1 AND months_since_test<=12, \"READY\",\n    exit_plan_exists=1 AND alternative_identified=1, \"PARTIAL — test overdue\",\n    exit_plan_exists=1, \"PARTIAL — no alternative\",\n    1=1, \"NOT_READY — plan required\")\n| stats count by readiness\n| append [\n    search | inputlookup dora_ict_provider_register.csv WHERE criticality IN (\"critical\",\"important\")\n    | where exit_plan_status IN (\"none\",\"\") OR isnull(exit_plan_status) OR isnull(alternative_provider)\n    | table provider_name, contract_id, criticality, exit_plan_status, alternative_provider\n    | sort criticality\n]",
              "m": "(1) Extend `dora_ict_provider_register.csv` with exit plan status, test dates, and alternative provider fields; (2) alert on critical function providers without exit plans; (3) require annual exit plan testing; (4) combine with concentration risk data (UC-22.3.4) — providers with high concentration AND no exit plan represent highest risk; (5) report exit strategy readiness to management body annually; (6) validate data return/portability capabilities per Art. 30 contractual requirements.",
              "z": "Pie chart (exit readiness distribution), Table (providers without exit plans), Bar chart (readiness by criticality), Single value (% providers with tested exit plans).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for AWS (Splunkbase 1876).\n• Ensure the following data sources are available: `dora_ict_provider_register.csv`, CIM Network_Traffic data model, cloud provider audit logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Extend `dora_ict_provider_register.csv` with exit plan status, test dates, and alternative provider fields; (2) alert on critical function providers without exit plans; (3) require annual exit plan testing; (4) combine with concentration risk data (UC-22.3.4) — providers with high concentration AND no exit plan represent highest risk; (5) report exit strategy readiness to management body annually; (6) validate data return/portability capabilities per Art. 30 contractual requirements.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup dora_ict_provider_register.csv WHERE criticality IN (\"critical\",\"important\")\n| eval exit_plan_exists=if(isnotnull(exit_plan_status) AND exit_plan_status!=\"none\", 1, 0)\n| eval exit_plan_tested=if(exit_plan_status=\"tested\", 1, 0)\n| eval alternative_identified=if(isnotnull(alternative_provider), 1, 0)\n| eval last_test_date_epoch=if(isnotnull(exit_plan_test_date), strptime(exit_plan_test_date,\"%Y-%m-%d\"), null())\n| eval months_since_test=if(isnotnull(last_test_date_epoch), round((now()-last_test_date_epoch)/(86400*30),0), 999)\n| eval readiness=case(\n    exit_plan_tested=1 AND alternative_identified=1 AND months_since_test<=12, \"READY\",\n    exit_plan_exists=1 AND alternative_identified=1, \"PARTIAL — test overdue\",\n    exit_plan_exists=1, \"PARTIAL — no alternative\",\n    1=1, \"NOT_READY — plan required\")\n| stats count by readiness\n| append [\n    search | inputlookup dora_ict_provider_register.csv WHERE criticality IN (\"critical\",\"important\")\n    | where exit_plan_status IN (\"none\",\"\") OR isnull(exit_plan_status) OR isnull(alternative_provider)\n    | table provider_name, contract_id, criticality, exit_plan_status, alternative_provider\n    | sort criticality\n]\n```\n\nUnderstanding this SPL\n\n**DORA ICT Third-Party Exit Strategy Readiness (Art. 28(8))** — Article 28(8) requires exit strategies for all ICT arrangements supporting critical or important functions, with tested transition plans. This use case monitors exit strategy readiness by tracking whether exit plans exist, are tested, and have viable alternatives identified — combined with operational dependency metrics showing how deeply the entity relies on each provider, informing realistic transition timelines.\n\nDocumented **Data sources**: `dora_ict_provider_register.csv`, CIM Network_Traffic data model, cloud provider audit logs. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for AWS (Splunkbase 1876). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **exit_plan_exists** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exit_plan_tested** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **alternative_identified** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **last_test_date_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **months_since_test** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **readiness** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by readiness** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Appends rows from a subsearch with `append`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (exit readiness distribution), Table (providers without exit plans), Bar chart (readiness by criticality), Single value (% providers with tested exit plans).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of ict third-party exit strategy readiness — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Risk",
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.28(8)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.28(8) is enforced — Splunk UC-22.3.18: DORA ICT Third-Party Exit Strategy Readiness.",
                  "ea": "Saved search 'UC-22.3.18' running on dora_ict_provider_register.csv, CIM Network_Traffic data model, cloud provider audit logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.19",
              "n": "DORA Management Body ICT Governance and Oversight (Art. 5)",
              "c": "high",
              "f": "intermediate",
              "v": "Article 5 places ultimate accountability for ICT risk management on the management body, requiring members to maintain sufficient knowledge and skills, undergo training, and actively oversee the ICT risk framework. This use case aggregates governance evidence — board ICT risk briefings, framework approval dates, training completion, and risk acceptance decisions — into a compliance dashboard proving management body engagement.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`dora_governance_evidence.csv` (KV store), `_audit` (scheduled report execution)",
              "q": "| inputlookup dora_governance_evidence.csv\n| eval evidence_date=strptime(date, \"%Y-%m-%d\")\n| eval days_since=round((now()-evidence_date)/86400, 0)\n| eval status=case(\n    evidence_type=\"board_ict_risk_briefing\" AND days_since > 90, \"OVERDUE\",\n    evidence_type=\"framework_approval\" AND days_since > 365, \"OVERDUE\",\n    evidence_type=\"member_training\" AND days_since > 365, \"OVERDUE\",\n    evidence_type=\"risk_appetite_review\" AND days_since > 365, \"OVERDUE\",\n    evidence_type=\"provider_register_review\" AND days_since > 365, \"OVERDUE\",\n    days_since > 270, \"DUE_SOON\",\n    1=1, \"CURRENT\")\n| sort - days_since\n| table evidence_type, description, responsible_person, date, days_since, status",
              "m": "(1) Create `dora_governance_evidence.csv` KV store with evidence types: board_ict_risk_briefing (quarterly), framework_approval (annual), member_training (annual), risk_appetite_review (annual), provider_register_review (annual), budget_allocation (annual); (2) populate manually or via board secretary integration; (3) alert when any evidence type is overdue; (4) generate quarterly governance report for competent authorities; (5) document management body decisions on ICT risk tolerance and budget.",
              "z": "Table (governance evidence status), Traffic light indicators (current/due/overdue), Timeline (governance activities), Single value (overdue items).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `dora_governance_evidence.csv` (KV store), `_audit` (scheduled report execution).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create `dora_governance_evidence.csv` KV store with evidence types: board_ict_risk_briefing (quarterly), framework_approval (annual), member_training (annual), risk_appetite_review (annual), provider_register_review (annual), budget_allocation (annual); (2) populate manually or via board secretary integration; (3) alert when any evidence type is overdue; (4) generate quarterly governance report for competent authorities; (5) document management body decisions on ICT risk tolerance and budget…\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup dora_governance_evidence.csv\n| eval evidence_date=strptime(date, \"%Y-%m-%d\")\n| eval days_since=round((now()-evidence_date)/86400, 0)\n| eval status=case(\n    evidence_type=\"board_ict_risk_briefing\" AND days_since > 90, \"OVERDUE\",\n    evidence_type=\"framework_approval\" AND days_since > 365, \"OVERDUE\",\n    evidence_type=\"member_training\" AND days_since > 365, \"OVERDUE\",\n    evidence_type=\"risk_appetite_review\" AND days_since > 365, \"OVERDUE\",\n    evidence_type=\"provider_register_review\" AND days_since > 365, \"OVERDUE\",\n    days_since > 270, \"DUE_SOON\",\n    1=1, \"CURRENT\")\n| sort - days_since\n| table evidence_type, description, responsible_person, date, days_since, status\n```\n\nUnderstanding this SPL\n\n**DORA Management Body ICT Governance and Oversight (Art. 5)** — Article 5 places ultimate accountability for ICT risk management on the management body, requiring members to maintain sufficient knowledge and skills, undergo training, and actively oversee the ICT risk framework. This use case aggregates governance evidence — board ICT risk briefings, framework approval dates, training completion, and risk acceptance decisions — into a compliance dashboard proving management body engagement.\n\nDocumented **Data sources**: `dora_governance_evidence.csv` (KV store), `_audit` (scheduled report execution). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **evidence_date** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **DORA Management Body ICT Governance and Oversight (Art. 5)**): table evidence_type, description, responsible_person, date, days_since, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (governance evidence status), Traffic light indicators (current/due/overdue), Timeline (governance activities), Single value (overdue items).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of management body ict governance and oversight — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.19: DORA Management Body ICT Governance and Oversight.",
                  "ea": "Saved search 'UC-22.3.19' running on dora_governance_evidence.csv (KV store), _audit (scheduled report execution), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.20",
              "n": "DORA ICT Crisis Communication Readiness (Art. 14)",
              "c": "high",
              "f": "beginner",
              "v": "Article 14 requires financial entities to have communication plans for ICT-related incidents and vulnerabilities, including disclosure to clients and counterparts, and internal escalation. This use case tracks crisis communication readiness — ensuring communication plans are documented, tested, contact lists are current, and that during active incidents, stakeholder notifications are timely and documented.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`dora_comms_readiness.csv` (KV store), `` `notable` `` macro, `index=itsm` (communication records)",
              "q": "| inputlookup dora_comms_readiness.csv\n| eval last_update=strptime(last_updated, \"%Y-%m-%d\")\n| eval days_since_update=round((now()-last_update)/86400, 0)\n| eval freshness=case(\n    days_since_update > 180, \"STALE — update required\",\n    days_since_update > 90, \"REVIEW_DUE\",\n    1=1, \"CURRENT\")\n| table component, description, owner, last_updated, days_since_update, freshness, last_drill_date\n| sort freshness",
              "m": "(1) Create `dora_comms_readiness.csv` with components: stakeholder_contact_list, client_notification_template, regulator_notification_template, internal_escalation_matrix, media_holding_statement, crisis_call_bridge_details; (2) update at least every 6 months; (3) track communication drill completion; (4) during active major incidents, verify that client and counterparty notifications are sent and documented; (5) validate contact list accuracy by comparing against HR/CRM data.",
              "z": "Table (communication readiness status), Traffic lights (component freshness), Single value (stale components count), Timeline (drill history).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `dora_comms_readiness.csv` (KV store), `` `notable` `` macro, `index=itsm` (communication records).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create `dora_comms_readiness.csv` with components: stakeholder_contact_list, client_notification_template, regulator_notification_template, internal_escalation_matrix, media_holding_statement, crisis_call_bridge_details; (2) update at least every 6 months; (3) track communication drill completion; (4) during active major incidents, verify that client and counterparty notifications are sent and documented; (5) validate contact list accuracy by comparing against HR/CRM data.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup dora_comms_readiness.csv\n| eval last_update=strptime(last_updated, \"%Y-%m-%d\")\n| eval days_since_update=round((now()-last_update)/86400, 0)\n| eval freshness=case(\n    days_since_update > 180, \"STALE — update required\",\n    days_since_update > 90, \"REVIEW_DUE\",\n    1=1, \"CURRENT\")\n| table component, description, owner, last_updated, days_since_update, freshness, last_drill_date\n| sort freshness\n```\n\nUnderstanding this SPL\n\n**DORA ICT Crisis Communication Readiness (Art. 14)** — Article 14 requires financial entities to have communication plans for ICT-related incidents and vulnerabilities, including disclosure to clients and counterparts, and internal escalation. This use case tracks crisis communication readiness — ensuring communication plans are documented, tested, contact lists are current, and that during active incidents, stakeholder notifications are timely and documented.\n\nDocumented **Data sources**: `dora_comms_readiness.csv` (KV store), `` `notable` `` macro, `index=itsm` (communication records). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **last_update** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_since_update** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **freshness** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **DORA ICT Crisis Communication Readiness (Art. 14)**): table component, description, owner, last_updated, days_since_update, freshness, last_drill_date\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (communication readiness status), Traffic lights (component freshness), Single value (stale components count), Timeline (drill history).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stay on top of ict crisis communication readiness — so we can show we govern technology and resilience risk the way financial rules that apply to us require.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.14 is enforced — Splunk UC-22.3.20: DORA ICT Crisis Communication Readiness.",
                  "ea": "Saved search 'UC-22.3.20' running on dora_comms_readiness.csv (KV store), notable macro, index=itsm (communication records), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.21",
              "n": "DORA ICT Concentration — Single-Provider Spend and Workload Share Thresholds",
              "c": "high",
              "f": "intermediate",
              "v": "Concentration in a single ICT provider creates systemic dependency. Tracking spend, contract value, and mapped production workloads against internal limits evidences ongoing measurement of concentration risk for management and supervisors.",
              "t": "Splunk DB Connect (Splunkbase 2686), HTTP Event Collector from FinOps",
              "d": "`index=vendor` `sourcetype=\"finops:invoice\"` (vendor_id, amount_usd, period); `cmdb_provider_map.csv` (ci_id, vendor_id, criticality)",
              "q": "index=vendor sourcetype=\"finops:invoice\" period=strftime(now(),\"%Y-%m\") earliest=-400d\n| stats sum(amount_usd) as spend by vendor_id, period\n| eventstats sum(spend) as total_spend by period\n| eval share_pct=round(100*spend/total_spend,2)\n| where share_pct>25\n| sort period, - share_pct",
              "m": "(1) Set `share_pct` threshold per board risk appetite; (2) join `cmdb_provider_map` to count critical CIs per vendor; (3) quarterly board pack export; (4) reconcile vendor_id to Register of Information keys; (5) document mitigation (secondary vendor) in GRC.",
              "z": "Stacked bar (spend by vendor), Table (breaches), Single value (vendors over threshold).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HTTP Event Collector from FinOps.\n• Ensure the following data sources are available: `index=vendor` `sourcetype=\"finops:invoice\"` (vendor_id, amount_usd, period); `cmdb_provider_map.csv` (ci_id, vendor_id, criticality).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Set `share_pct` threshold per board risk appetite; (2) join `cmdb_provider_map` to count critical CIs per vendor; (3) quarterly board pack export; (4) reconcile vendor_id to Register of Information keys; (5) document mitigation (secondary vendor) in GRC.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vendor sourcetype=\"finops:invoice\" period=strftime(now(),\"%Y-%m\") earliest=-400d\n| stats sum(amount_usd) as spend by vendor_id, period\n| eventstats sum(spend) as total_spend by period\n| eval share_pct=round(100*spend/total_spend,2)\n| where share_pct>25\n| sort period, - share_pct\n```\n\nUnderstanding this SPL\n\n**DORA ICT Concentration — Single-Provider Spend and Workload Share Thresholds** — Concentration in a single ICT provider creates systemic dependency. Tracking spend, contract value, and mapped production workloads against internal limits evidences ongoing measurement of concentration risk for management and supervisors.\n\nDocumented **Data sources**: `index=vendor` `sourcetype=\"finops:invoice\"` (vendor_id, amount_usd, period); `cmdb_provider_map.csv` (ci_id, vendor_id, criticality). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HTTP Event Collector from FinOps. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vendor; **sourcetype**: finops:invoice. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vendor, sourcetype=\"finops:invoice\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vendor_id, period** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by period** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **share_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where share_pct>25` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (spend by vendor), Table (breaches), Single value (vendors over threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We concentration in a single ICT provider creates systemic dependency.",
              "mtype": [
                "Compliance",
                "Risk"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.21: DORA ICT Concentration — Single-Provider Spend and Workload Share Thresholds.",
                  "ea": "Saved search 'UC-22.3.21' running on sourcetype finops:invoice and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.22",
              "n": "DORA ICT Concentration — Critical Service Dependency Fan-In by Provider",
              "c": "critical",
              "f": "advanced",
              "v": "Multiple critical services routing through one provider’s shared edge or identity layer is hidden concentration. Fan-in metrics highlight when too many important business functions share a single third-party control point.",
              "t": "CMDB export via Splunk DB Connect, custom sourcetype",
              "d": "`index=cmdb` `sourcetype=\"cmdb:dependency\"` (service_id, upstream_ci, provider_id, tier)",
              "q": "index=cmdb sourcetype=\"cmdb:dependency\" tier=\"critical\" earliest=-1d\n| stats dc(service_id) as dependent_services by provider_id, upstream_ci\n| where dependent_services>15\n| sort - dependent_services",
              "m": "(1) Normalize `provider_id` to legal entity; (2) refresh dependency graph weekly; (3) cross-check with contractual sub-processors list; (4) alert architecture review board; (5) attach to ICT concentration register annex.",
              "z": "Table (hotspots), Graph visualization (optional), Single value (max fan-in).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CMDB export via Splunk DB Connect, custom sourcetype.\n• Ensure the following data sources are available: `index=cmdb` `sourcetype=\"cmdb:dependency\"` (service_id, upstream_ci, provider_id, tier).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize `provider_id` to legal entity; (2) refresh dependency graph weekly; (3) cross-check with contractual sub-processors list; (4) alert architecture review board; (5) attach to ICT concentration register annex.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cmdb sourcetype=\"cmdb:dependency\" tier=\"critical\" earliest=-1d\n| stats dc(service_id) as dependent_services by provider_id, upstream_ci\n| where dependent_services>15\n| sort - dependent_services\n```\n\nUnderstanding this SPL\n\n**DORA ICT Concentration — Critical Service Dependency Fan-In by Provider** — Multiple critical services routing through one provider’s shared edge or identity layer is hidden concentration. Fan-in metrics highlight when too many important business functions share a single third-party control point.\n\nDocumented **Data sources**: `index=cmdb` `sourcetype=\"cmdb:dependency\"` (service_id, upstream_ci, provider_id, tier). **App/TA** (typical add-on context): CMDB export via Splunk DB Connect, custom sourcetype. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cmdb; **sourcetype**: cmdb:dependency. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cmdb, sourcetype=\"cmdb:dependency\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by provider_id, upstream_ci** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where dependent_services>15` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hotspots), Graph visualization (optional), Single value (max fan-in).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We multiple critical services routing through one provider’s shared edge or identity layer is hidden concentration.",
              "mtype": [
                "Compliance",
                "Risk"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.22: DORA ICT Concentration — Critical Service Dependency Fan-In by Provider.",
                  "ea": "Saved search 'UC-22.3.22' running on index=cmdb sourcetype=\"cmdb:dependency\" (service_id, upstream_ci, provider_id, tier), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.23",
              "n": "DORA ICT Concentration — Regional Provider Outage Correlation Exposure Score",
              "c": "high",
              "f": "advanced",
              "v": "When providers publish incident regions, correlating your asset footprint to those regions quantifies blast radius during external outages—evidence that geographic concentration is monitored, not assumed.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876), provider status RSS/JSON via HEC",
              "d": "`index=external` `sourcetype=\"provider:status\"` (provider_id, region, impact_start, impact_end); `index=cloud` `sourcetype=\"cmdb:cloud_asset\"` (provider_id, region, service_id, criticality)",
              "q": "index=external sourcetype=\"provider:status\" impact_end=\"open\" earliest=-2d\n| join type=left provider_id, region [\n    search index=cloud sourcetype=\"cmdb:cloud_asset\" criticality=\"high\"\n    | stats dc(service_id) as exposed_services by provider_id, region\n  ]\n| where exposed_services>0\n| table provider_id, region, impact_start, exposed_services",
              "m": "(1) Map status page regions to your tagging scheme; (2) tune for multi-region active-active; (3) integrate with major incident process; (4) store historical incidents for TLPT scenarios; (5) feed ICT concentration committee.",
              "z": "Table (live exposure), Map by region, Single value (exposed high services).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876), provider status RSS/JSON via HEC.\n• Ensure the following data sources are available: `index=external` `sourcetype=\"provider:status\"` (provider_id, region, impact_start, impact_end); `index=cloud` `sourcetype=\"cmdb:cloud_asset\"` (provider_id, region, service_id, criticality).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map status page regions to your tagging scheme; (2) tune for multi-region active-active; (3) integrate with major incident process; (4) store historical incidents for TLPT scenarios; (5) feed ICT concentration committee.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=external sourcetype=\"provider:status\" impact_end=\"open\" earliest=-2d\n| join type=left provider_id, region [\n    search index=cloud sourcetype=\"cmdb:cloud_asset\" criticality=\"high\"\n    | stats dc(service_id) as exposed_services by provider_id, region\n  ]\n| where exposed_services>0\n| table provider_id, region, impact_start, exposed_services\n```\n\nUnderstanding this SPL\n\n**DORA ICT Concentration — Regional Provider Outage Correlation Exposure Score** — When providers publish incident regions, correlating your asset footprint to those regions quantifies blast radius during external outages—evidence that geographic concentration is monitored, not assumed.\n\nDocumented **Data sources**: `index=external` `sourcetype=\"provider:status\"` (provider_id, region, impact_start, impact_end); `index=cloud` `sourcetype=\"cmdb:cloud_asset\"` (provider_id, region, service_id, criticality). **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), provider status RSS/JSON via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: external; **sourcetype**: provider:status. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=external, sourcetype=\"provider:status\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where exposed_services>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA ICT Concentration — Regional Provider Outage Correlation Exposure Score**): table provider_id, region, impact_start, exposed_services\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (live exposure), Map by region, Single value (exposed high services).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We when providers publish incident regions, correlating your asset footprint to those regions quantifies blast radius during external outages—evidence that geographic concentration is monitored, not assumed.",
              "mtype": [
                "Compliance",
                "Resilience"
              ],
              "ind": "Financial Services",
              "pillar": "observability",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.17 (ICT-related incident management process) is enforced — Splunk UC-22.3.23: DORA ICT Concentration — Regional Provider Outage Correlation Exposure Score.",
                  "ea": "Saved search 'UC-22.3.23' running on sourcetype provider:status and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.24",
              "n": "DORA ICT Concentration — Substitutability and Secondary Sourcing Readiness Index",
              "c": "medium",
              "f": "intermediate",
              "v": "Concentration risk mitigates when substitutable alternatives exist. Scoring each critical provider on contract portability, data export tested, and runbook completeness gives a defensible substitutability index for the register of information.",
              "t": "KV Store or `inputlookup dora_substitutability.csv`",
              "d": "`dora_substitutability.csv` (provider_id, contract_portable, last_export_test_date, runbook_score, _time)",
              "q": "| inputlookup dora_substitutability.csv\n| eval last_test=strptime(last_export_test_date,\"%Y-%m-%d\")\n| eval test_age_days=round((now()-last_test)/86400,0)\n| eval readiness=contract_portable*0.4 + if(test_age_days<180,0.3,0) + runbook_score*0.3\n| where readiness<0.6\n| table provider_id, readiness, contract_portable, test_age_days, runbook_score\n| sort readiness",
              "m": "(1) Calibrate weights with risk team; (2) refresh after DR tests; (3) link `provider_id` to RoI; (4) annual reassessment workflow; (5) do not embed legal advice—store lawyer sign-off reference externally.",
              "z": "Bar chart (readiness), Table (low scores), Single value (providers below bar).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: KV Store or `inputlookup dora_substitutability.csv`.\n• Ensure the following data sources are available: `dora_substitutability.csv` (provider_id, contract_portable, last_export_test_date, runbook_score, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Calibrate weights with risk team; (2) refresh after DR tests; (3) link `provider_id` to RoI; (4) annual reassessment workflow; (5) do not embed legal advice—store lawyer sign-off reference externally.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup dora_substitutability.csv\n| eval last_test=strptime(last_export_test_date,\"%Y-%m-%d\")\n| eval test_age_days=round((now()-last_test)/86400,0)\n| eval readiness=contract_portable*0.4 + if(test_age_days<180,0.3,0) + runbook_score*0.3\n| where readiness<0.6\n| table provider_id, readiness, contract_portable, test_age_days, runbook_score\n| sort readiness\n```\n\nUnderstanding this SPL\n\n**DORA ICT Concentration — Substitutability and Secondary Sourcing Readiness Index** — Concentration risk mitigates when substitutable alternatives exist. Scoring each critical provider on contract portability, data export tested, and runbook completeness gives a defensible substitutability index for the register of information.\n\nDocumented **Data sources**: `dora_substitutability.csv` (provider_id, contract_portable, last_export_test_date, runbook_score, _time). **App/TA** (typical add-on context): KV Store or `inputlookup dora_substitutability.csv`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **last_test** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **test_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **readiness** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where readiness<0.6` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA ICT Concentration — Substitutability and Secondary Sourcing Readiness Index**): table provider_id, readiness, contract_portable, test_age_days, runbook_score\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (readiness), Table (low scores), Single value (providers below bar).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We concentration risk mitigates when substitutable alternatives exist.",
              "mtype": [
                "Compliance",
                "Risk"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.24: DORA ICT Concentration — Substitutability and Secondary Sourcing Readiness Index.",
                  "ea": "Saved search 'UC-22.3.24' running on dora_substitutability.csv (provider_id, contract_portable, last_export_test_date, runbook_score, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.25",
              "n": "DORA TLPT — Test Planning Milestone and Scope Lock Audit Trail",
              "c": "high",
              "f": "intermediate",
              "v": "Threat-led penetration testing requires disciplined scope definition and change control. Tracking planning milestones and scope-lock timestamps shows the entity controlled test boundaries and prevented ad hoc scope creep that could invalidate results.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), HEC from GRC",
              "d": "`index=grc` `sourcetype=\"tlpt:milestone\"` (engagement_id, milestone, due_date, completed_date, scope_hash)",
              "q": "index=grc sourcetype=\"tlpt:milestone\" earliest=-730d\n| eval due=strptime(due_date,\"%Y-%m-%d\"), done=strptime(completed_date,\"%Y-%m-%d\")\n| where milestone=\"scope_lock\" AND (isnull(done) OR done>due)\n| stats latest(scope_hash) as scope by engagement_id, milestone\n| sort due",
              "m": "(1) Integrate CISO office project template; (2) alert 14 days before `due_date`; (3) store `scope_hash` of in-scope IPs/apps; (4) map engagement to competent authority reporting year; (5) restrict index to red-team and compliance roles.",
              "z": "Gantt-style table, Single value (late milestones), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), HEC from GRC.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"tlpt:milestone\"` (engagement_id, milestone, due_date, completed_date, scope_hash).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Integrate CISO office project template; (2) alert 14 days before `due_date`; (3) store `scope_hash` of in-scope IPs/apps; (4) map engagement to competent authority reporting year; (5) restrict index to red-team and compliance roles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"tlpt:milestone\" earliest=-730d\n| eval due=strptime(due_date,\"%Y-%m-%d\"), done=strptime(completed_date,\"%Y-%m-%d\")\n| where milestone=\"scope_lock\" AND (isnull(done) OR done>due)\n| stats latest(scope_hash) as scope by engagement_id, milestone\n| sort due\n```\n\nUnderstanding this SPL\n\n**DORA TLPT — Test Planning Milestone and Scope Lock Audit Trail** — Threat-led penetration testing requires disciplined scope definition and change control. Tracking planning milestones and scope-lock timestamps shows the entity controlled test boundaries and prevented ad hoc scope creep that could invalidate results.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"tlpt:milestone\"` (engagement_id, milestone, due_date, completed_date, scope_hash). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), HEC from GRC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: tlpt:milestone. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"tlpt:milestone\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where milestone=\"scope_lock\" AND (isnull(done) OR done>due)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by engagement_id, milestone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gantt-style table, Single value (late milestones), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We threat-led penetration testing requires disciplined scope definition and change control.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.24 (Digital operational-resilience testing) is enforced — Splunk UC-22.3.25: DORA TLPT — Test Planning Milestone and Scope Lock Audit Trail.",
                  "ea": "Saved search 'UC-22.3.25' running on index=grc sourcetype=\"tlpt:milestone\" (engagement_id, milestone, due_date, completed_date, scope_hash), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.26",
              "n": "DORA TLPT — Tester Independence and Conflict-of-Interest Attestation Log",
              "c": "high",
              "f": "beginner",
              "v": "Independence of testers from operational teams is a supervisory expectation. A tamper-evident log of tester employer changes, asset holdings declarations, and recusal events supports governance of TLPT quality.",
              "t": "HTTP Event Collector from HR/GRC web form",
              "d": "`index=grc` `sourcetype=\"tlpt:independence\"` (tester_id, engagement_id, attestation_type, signed_date, conflict_flag)",
              "q": "index=grc sourcetype=\"tlpt:independence\" earliest=-400d conflict_flag=true\n| stats latest(signed_date) as last_signed, values(attestation_type) as types by tester_id, engagement_id\n| sort tester_id",
              "m": "(1) Require attestation before VPN tokens issued; (2) integrate MSSP tester roster; (3) alert on `conflict_flag=true` until cleared by legal; (4) retain 7 years per records policy; (5) hash payloads for integrity.",
              "z": "Table (conflicts), Single value (open conflicts), Pie by attestation_type.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector from HR/GRC web form.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"tlpt:independence\"` (tester_id, engagement_id, attestation_type, signed_date, conflict_flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require attestation before VPN tokens issued; (2) integrate MSSP tester roster; (3) alert on `conflict_flag=true` until cleared by legal; (4) retain 7 years per records policy; (5) hash payloads for integrity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"tlpt:independence\" earliest=-400d conflict_flag=true\n| stats latest(signed_date) as last_signed, values(attestation_type) as types by tester_id, engagement_id\n| sort tester_id\n```\n\nUnderstanding this SPL\n\n**DORA TLPT — Tester Independence and Conflict-of-Interest Attestation Log** — Independence of testers from operational teams is a supervisory expectation. A tamper-evident log of tester employer changes, asset holdings declarations, and recusal events supports governance of TLPT quality.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"tlpt:independence\"` (tester_id, engagement_id, attestation_type, signed_date, conflict_flag). **App/TA** (typical add-on context): HTTP Event Collector from HR/GRC web form. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: tlpt:independence. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"tlpt:independence\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by tester_id, engagement_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (conflicts), Single value (open conflicts), Pie by attestation_type.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We independence of testers from operational teams is a supervisory expectation.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.26: DORA TLPT — Tester Independence and Conflict-of-Interest Attestation Log.",
                  "ea": "Saved search 'UC-22.3.26' running on index=grc sourcetype=\"tlpt:independence\" (tester_id, engagement_id, attestation_type, signed_date, conflict_flag), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.27",
              "n": "DORA TLPT — Findings Severity, Remediation Owner, and Due Date Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "TLPT produces material findings. Monitoring remediation SLAs by severity with linkage to ICT third-party providers evidences closed-loop treatment required for resilience testing programs.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), Tenable/attack platform JSON",
              "d": "`index=grc` `sourcetype=\"tlpt:finding\"` (finding_id, engagement_id, severity, owner, due_date, status, related_provider_id)",
              "q": "index=grc sourcetype=\"tlpt:finding\" status!=\"closed\" earliest=-400d\n| eval due=strptime(due_date,\"%Y-%m-%d\")\n| where due<now() OR isnull(owner)\n| stats count by severity, related_provider_id, owner\n| sort severity, - count",
              "m": "(1) Map severity to internal SLAs; (2) auto-create problems in ITSM; (3) escalate critical past-due to management body; (4) join provider_id to RoI; (5) quarterly export for regulator file.",
              "z": "Table (overdue), Column chart by severity, Single value (open critical).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), Tenable/attack platform JSON.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"tlpt:finding\"` (finding_id, engagement_id, severity, owner, due_date, status, related_provider_id).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map severity to internal SLAs; (2) auto-create problems in ITSM; (3) escalate critical past-due to management body; (4) join provider_id to RoI; (5) quarterly export for regulator file.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"tlpt:finding\" status!=\"closed\" earliest=-400d\n| eval due=strptime(due_date,\"%Y-%m-%d\")\n| where due<now() OR isnull(owner)\n| stats count by severity, related_provider_id, owner\n| sort severity, - count\n```\n\nUnderstanding this SPL\n\n**DORA TLPT — Findings Severity, Remediation Owner, and Due Date Tracking** — TLPT produces material findings. Monitoring remediation SLAs by severity with linkage to ICT third-party providers evidences closed-loop treatment required for resilience testing programs.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"tlpt:finding\"` (finding_id, engagement_id, severity, owner, due_date, status, related_provider_id). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), Tenable/attack platform JSON. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: tlpt:finding. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"tlpt:finding\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where due<now() OR isnull(owner)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by severity, related_provider_id, owner** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue), Column chart by severity, Single value (open critical).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tLPT produces material findings.",
              "mtype": [
                "Compliance",
                "Vulnerability"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.24 (Digital operational-resilience testing) is enforced — Splunk UC-22.3.27: DORA TLPT — Findings Severity, Remediation Owner, and Due Date Tracking.",
                  "ea": "Saved search 'UC-22.3.27' running on index=grc sourcetype=\"tlpt:finding\" (finding_id, engagement_id, severity, owner, due_date, status, related_provider_id), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.28",
              "n": "DORA TLPT — Retest and Control Effectiveness Verification Events",
              "c": "high",
              "f": "intermediate",
              "v": "Retesting proves fixes actually work. Correlating retest pass events with original finding IDs closes the assurance loop expected in mature TLPT programs and avoids “fixed on paper” drift.",
              "t": "HTTP Event Collector from attack platform and CI/CD verification jobs",
              "d": "`index=grc` `sourcetype=\"tlpt:retest\"` (finding_id, retest_id, result, executed_at, evidence_url)",
              "q": "index=grc sourcetype=\"tlpt:retest\" earliest=-400d\n| stats latest(result) as last_result, latest(executed_at) as last_exec by finding_id\n| join type=left finding_id [\n    search index=grc sourcetype=\"tlpt:finding\" earliest=-400d\n    | fields finding_id, status, severity\n  ]\n| where status=\"remediated\" AND last_result!=\"pass\"\n| table finding_id, severity, status, last_result, last_exec",
              "m": "(1) Require `finding_id` foreign key; (2) block manual status=remediated without retest event; (3) store `evidence_url` out-of-band; (4) integrate with vulnerability scanners for layered proof; (5) auditor sample pack from this search.",
              "z": "Table (failed retests), Timeline, Single value (count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector from attack platform and CI/CD verification jobs.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"tlpt:retest\"` (finding_id, retest_id, result, executed_at, evidence_url).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require `finding_id` foreign key; (2) block manual status=remediated without retest event; (3) store `evidence_url` out-of-band; (4) integrate with vulnerability scanners for layered proof; (5) auditor sample pack from this search.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"tlpt:retest\" earliest=-400d\n| stats latest(result) as last_result, latest(executed_at) as last_exec by finding_id\n| join type=left finding_id [\n    search index=grc sourcetype=\"tlpt:finding\" earliest=-400d\n    | fields finding_id, status, severity\n  ]\n| where status=\"remediated\" AND last_result!=\"pass\"\n| table finding_id, severity, status, last_result, last_exec\n```\n\nUnderstanding this SPL\n\n**DORA TLPT — Retest and Control Effectiveness Verification Events** — Retesting proves fixes actually work. Correlating retest pass events with original finding IDs closes the assurance loop expected in mature TLPT programs and avoids “fixed on paper” drift.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"tlpt:retest\"` (finding_id, retest_id, result, executed_at, evidence_url). **App/TA** (typical add-on context): HTTP Event Collector from attack platform and CI/CD verification jobs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: tlpt:retest. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"tlpt:retest\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by finding_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where status=\"remediated\" AND last_result!=\"pass\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA TLPT — Retest and Control Effectiveness Verification Events**): table finding_id, severity, status, last_result, last_exec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed retests), Timeline, Single value (count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We retesting proves fixes actually work.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.24 (Digital operational-resilience testing) is enforced — Splunk UC-22.3.28: DORA TLPT — Retest and Control Effectiveness Verification Events.",
                  "ea": "Saved search 'UC-22.3.28' running on index=grc sourcetype=\"tlpt:retest\" (finding_id, retest_id, result, executed_at, evidence_url), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.29",
              "n": "DORA Information Sharing — FINCERT-Style Submission Timeliness and Acknowledgment Log",
              "c": "high",
              "f": "intermediate",
              "v": "Voluntary or mandatory cyber information sharing depends on timely, accurate submissions. Tracking draft, sent, and acknowledged states for intelligence reports demonstrates the entity participates in the financial sector information-sharing ecosystem as required by policy.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), secure email gateway logs via HEC",
              "d": "`index=grc` `sourcetype=\"isac:submission\"` (submission_id, threat_topic, sent_ts, ack_ts, recipient_isac)",
              "q": "index=grc sourcetype=\"isac:submission\" earliest=-180d\n| eval sent=strptime(sent_ts,\"%Y-%m-%dT%H:%M:%SZ\"), ack=strptime(ack_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval lag_h=if(isnotnull(ack), round((ack-sent)/3600,1), null())\n| where isnull(ack) AND now()>relative_time(sent,\"+72h\")\n| table submission_id, threat_topic, sent_ts, recipient_isac",
              "m": "(1) Classify submissions by sensitivity—restrict dashboards; (2) integrate ISAC portal API if available; (3) legal review queue before `sent_ts`; (4) document false negatives separately; (5) map to Art. 45 cooperation themes in internal policy.",
              "z": "Table (pending ack), Line chart (lag_h trend), Single value (stale submissions).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), secure email gateway logs via HEC.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"isac:submission\"` (submission_id, threat_topic, sent_ts, ack_ts, recipient_isac).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Classify submissions by sensitivity—restrict dashboards; (2) integrate ISAC portal API if available; (3) legal review queue before `sent_ts`; (4) document false negatives separately; (5) map to Art. 45 cooperation themes in internal policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"isac:submission\" earliest=-180d\n| eval sent=strptime(sent_ts,\"%Y-%m-%dT%H:%M:%SZ\"), ack=strptime(ack_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval lag_h=if(isnotnull(ack), round((ack-sent)/3600,1), null())\n| where isnull(ack) AND now()>relative_time(sent,\"+72h\")\n| table submission_id, threat_topic, sent_ts, recipient_isac\n```\n\nUnderstanding this SPL\n\n**DORA Information Sharing — FINCERT-Style Submission Timeliness and Acknowledgment Log** — Voluntary or mandatory cyber information sharing depends on timely, accurate submissions. Tracking draft, sent, and acknowledged states for intelligence reports demonstrates the entity participates in the financial sector information-sharing ecosystem as required by policy.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"isac:submission\"` (submission_id, threat_topic, sent_ts, ack_ts, recipient_isac). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), secure email gateway logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: isac:submission. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"isac:submission\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sent** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **lag_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(ack) AND now()>relative_time(sent,\"+72h\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA Information Sharing — FINCERT-Style Submission Timeliness and Acknowledgment Log**): table submission_id, threat_topic, sent_ts, recipient_isac\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (pending ack), Line chart (lag_h trend), Single value (stale submissions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We voluntary or mandatory cyber information sharing depends on timely, accurate submissions.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.29: DORA Information Sharing — FINCERT-Style Submission Timeliness and Acknowledgment Log.",
                  "ea": "Saved search 'UC-22.3.29' running on index=grc sourcetype=\"isac:submission\" (submission_id, threat_topic, sent_ts, ack_ts, recipient_isac), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.30",
              "n": "DORA Information Sharing — Indicator Distribution to Subsidiaries and Branches Coverage",
              "c": "medium",
              "f": "intermediate",
              "v": "Indicators are only valuable when propagated. Measuring which legal entities received each threat feed revision supports defensible evidence that group-wide information sharing obligations were operationally executed.",
              "t": "HTTP Event Collector from SOAR or threat intel platform",
              "d": "`index=threat` `sourcetype=\"ti:bundle_push\"` (bundle_id, entity_code, pushed_at, record_count)",
              "q": "index=threat sourcetype=\"ti:bundle_push\" earliest=-30d\n| stats latest(pushed_at) as last_push, latest(record_count) as records by bundle_id, entity_code\n| join type=left entity_code [\n    | inputlookup dora_group_entities.csv\n    | fields entity_code\n    | eval expected=1\n  ]\n| where isnull(expected)\n| table bundle_id, entity_code, last_push",
              "m": "(1) Maintain `dora_group_entities.csv` authoritative list; (2) alert when entity misses two consecutive bundles; (3) exclude non-EEA branches if policy dictates; (4) correlate with firewall rule update logs optionally; (5) quarterly attestation by CISO.",
              "z": "Table (missed entities), Heatmap entity x bundle, Single value (coverage %).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector from SOAR or threat intel platform.\n• Ensure the following data sources are available: `index=threat` `sourcetype=\"ti:bundle_push\"` (bundle_id, entity_code, pushed_at, record_count).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `dora_group_entities.csv` authoritative list; (2) alert when entity misses two consecutive bundles; (3) exclude non-EEA branches if policy dictates; (4) correlate with firewall rule update logs optionally; (5) quarterly attestation by CISO.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=threat sourcetype=\"ti:bundle_push\" earliest=-30d\n| stats latest(pushed_at) as last_push, latest(record_count) as records by bundle_id, entity_code\n| join type=left entity_code [\n    | inputlookup dora_group_entities.csv\n    | fields entity_code\n    | eval expected=1\n  ]\n| where isnull(expected)\n| table bundle_id, entity_code, last_push\n```\n\nUnderstanding this SPL\n\n**DORA Information Sharing — Indicator Distribution to Subsidiaries and Branches Coverage** — Indicators are only valuable when propagated. Measuring which legal entities received each threat feed revision supports defensible evidence that group-wide information sharing obligations were operationally executed.\n\nDocumented **Data sources**: `index=threat` `sourcetype=\"ti:bundle_push\"` (bundle_id, entity_code, pushed_at, record_count). **App/TA** (typical add-on context): HTTP Event Collector from SOAR or threat intel platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: threat; **sourcetype**: ti:bundle_push. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=threat, sourcetype=\"ti:bundle_push\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by bundle_id, entity_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(expected)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA Information Sharing — Indicator Distribution to Subsidiaries and Branches Coverage**): table bundle_id, entity_code, last_push\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missed entities), Heatmap entity x bundle, Single value (coverage %).",
              "script": "",
              "premium": "Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We indicators are only valuable when propagated.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.30: DORA Information Sharing — Indicator Distribution to Subsidiaries and Branches Coverage.",
                  "ea": "Saved search 'UC-22.3.30' running on index=threat sourcetype=\"ti:bundle_push\" (bundle_id, entity_code, pushed_at, record_count), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.31",
              "n": "DORA Information Sharing — Anonymized Incident TTP Contribution Quality Metrics",
              "c": "medium",
              "f": "advanced",
              "v": "Quality of shared technical details (MITRE technique coverage, IOC richness) determines community usefulness. Internal scoring of contributions before external send improves consistency and evidences a mature sharing culture without leaking sensitive data in Splunk.",
              "t": "Splunk Enterprise Security (Splunkbase 263) for MITRE tagging export",
              "d": "`index=grc` `sourcetype=\"isac:draft_quality\"` (draft_id, technique_count, ioc_count, pii_scrub_score, reviewer)",
              "q": "index=grc sourcetype=\"isac:draft_quality\" earliest=-180d\n| where technique_count<3 OR ioc_count<5 OR pii_scrub_score<0.95\n| stats count by reviewer\n| sort - count",
              "m": "(1) Never store raw customer narratives—use scores only; (2) train reviewers quarterly; (3) integrate STIX export validator; (4) track improvement trend; (5) align with NCSC/FINCERT local guidance.",
              "z": "Scatter (technique vs ioc), Table (low-quality drafts), Single value (reject rate).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263) for MITRE tagging export.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"isac:draft_quality\"` (draft_id, technique_count, ioc_count, pii_scrub_score, reviewer).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Never store raw customer narratives—use scores only; (2) train reviewers quarterly; (3) integrate STIX export validator; (4) track improvement trend; (5) align with NCSC/FINCERT local guidance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"isac:draft_quality\" earliest=-180d\n| where technique_count<3 OR ioc_count<5 OR pii_scrub_score<0.95\n| stats count by reviewer\n| sort - count\n```\n\nUnderstanding this SPL\n\n**DORA Information Sharing — Anonymized Incident TTP Contribution Quality Metrics** — Quality of shared technical details (MITRE technique coverage, IOC richness) determines community usefulness. Internal scoring of contributions before external send improves consistency and evidences a mature sharing culture without leaking sensitive data in Splunk.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"isac:draft_quality\"` (draft_id, technique_count, ioc_count, pii_scrub_score, reviewer). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263) for MITRE tagging export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: isac:draft_quality. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"isac:draft_quality\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where technique_count<3 OR ioc_count<5 OR pii_scrub_score<0.95` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by reviewer** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter (technique vs ioc), Table (low-quality drafts), Single value (reject rate).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We quality of shared technical details ( technique coverage, IOC richness) determines community usefulness.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.17 (ICT-related incident management process) is enforced — Splunk UC-22.3.31: DORA Information Sharing — Anonymized Incident TTP Contribution Quality Metrics.",
                  "ea": "Saved search 'UC-22.3.31' running on index=grc sourcetype=\"isac:draft_quality\" (draft_id, technique_count, ioc_count, pii_scrub_score, reviewer), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.32",
              "n": "DORA Outsourcing Registers — Sub-Processor Notification Lag vs Contractual Notice Period",
              "c": "high",
              "f": "intermediate",
              "v": "Outsourcing arrangements require visibility into sub-processing changes. Comparing provider sub-processor announcement dates to your internal register update timestamps shows contractual notification periods were honored and documented.",
              "t": "Splunk DB Connect, email ingestion (careful PII controls)",
              "d": "`index=vendor` `sourcetype=\"vendor:subproc_notice\"` (provider_id, subprocessor_name, effective_date, notice_received_at); `roi_subprocessor.csv` (provider_id, subprocessor_name, registered_at)",
              "q": "index=vendor sourcetype=\"vendor:subproc_notice\" earliest=-400d\n| eval notice=strptime(notice_received_at,\"%Y-%m-%d\"), eff=strptime(effective_date,\"%Y-%m-%d\")\n| join type=left provider_id, subprocessor_name [\n    | inputlookup roi_subprocessor.csv\n    | eval reg=strptime(registered_at,\"%Y-%m-%d\")\n    | fields provider_id, subprocessor_name, reg\n  ]\n| eval lag_days=round((reg-notice)/86400,1)\n| where lag_days>14 OR isnull(reg)\n| table provider_id, subprocessor_name, notice_received_at, registered_at, lag_days",
              "m": "(1) Set `14` to your contractual days; (2) legal owns register updates—Splunk flags drift; (3) exclude immaterial vendors via lookup; (4) integrate CLM tool when available; (5) export for RoI reconciliation.",
              "z": "Table (lags), Single value (breach count), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect, email ingestion (careful PII controls).\n• Ensure the following data sources are available: `index=vendor` `sourcetype=\"vendor:subproc_notice\"` (provider_id, subprocessor_name, effective_date, notice_received_at); `roi_subprocessor.csv` (provider_id, subprocessor_name, registered_at).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Set `14` to your contractual days; (2) legal owns register updates—Splunk flags drift; (3) exclude immaterial vendors via lookup; (4) integrate CLM tool when available; (5) export for RoI reconciliation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vendor sourcetype=\"vendor:subproc_notice\" earliest=-400d\n| eval notice=strptime(notice_received_at,\"%Y-%m-%d\"), eff=strptime(effective_date,\"%Y-%m-%d\")\n| join type=left provider_id, subprocessor_name [\n    | inputlookup roi_subprocessor.csv\n    | eval reg=strptime(registered_at,\"%Y-%m-%d\")\n    | fields provider_id, subprocessor_name, reg\n  ]\n| eval lag_days=round((reg-notice)/86400,1)\n| where lag_days>14 OR isnull(reg)\n| table provider_id, subprocessor_name, notice_received_at, registered_at, lag_days\n```\n\nUnderstanding this SPL\n\n**DORA Outsourcing Registers — Sub-Processor Notification Lag vs Contractual Notice Period** — Outsourcing arrangements require visibility into sub-processing changes. Comparing provider sub-processor announcement dates to your internal register update timestamps shows contractual notification periods were honored and documented.\n\nDocumented **Data sources**: `index=vendor` `sourcetype=\"vendor:subproc_notice\"` (provider_id, subprocessor_name, effective_date, notice_received_at); `roi_subprocessor.csv` (provider_id, subprocessor_name, registered_at). **App/TA** (typical add-on context): Splunk DB Connect, email ingestion (careful PII controls). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vendor; **sourcetype**: vendor:subproc_notice. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vendor, sourcetype=\"vendor:subproc_notice\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **notice** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **lag_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lag_days>14 OR isnull(reg)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA Outsourcing Registers — Sub-Processor Notification Lag vs Contractual Notice Period**): table provider_id, subprocessor_name, notice_received_at, registered_at, lag_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (lags), Single value (breach count), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We outsourcing arrangements require visibility into sub-processing changes.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.32: DORA Outsourcing Registers — Sub-Processor Notification Lag vs Contractual Notice Period.",
                  "ea": "Saved search 'UC-22.3.32' running on sourcetype vendor:subproc_notice and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.33",
              "n": "DORA Outsourcing Registers — Function Mapping Completeness for Each Outsourced Arrangement",
              "c": "high",
              "f": "intermediate",
              "v": "Registers must reflect which functions depend on each provider. Flagging contracts without mapped internal business functions prevents hollow register entries that fail supervisory scrutiny during inspections.",
              "t": "GRC export via DB Connect, `inputlookup`",
              "d": "`dora_outsourcing_register.csv` (contract_id, provider_id, function_codes, last_validated)",
              "q": "| inputlookup dora_outsourcing_register.csv\n| eval fc_len=len(function_codes)\n| eval validated=strptime(last_validated,\"%Y-%m-%d\")\n| where fc_len<5 OR isnull(function_codes) OR validated<relative_time(now(),\"-365d@d\")\n| table contract_id, provider_id, function_codes, last_validated",
              "m": "(1) Define allowed `function_codes` taxonomy; (2) workflow assigns business owner per contract; (3) integrate procurement go-live events; (4) quarterly validation campaign; (5) tie to BIA criticality.",
              "z": "Table (incomplete), Donut (validation age), Single value (contracts to fix).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GRC export via DB Connect, `inputlookup`.\n• Ensure the following data sources are available: `dora_outsourcing_register.csv` (contract_id, provider_id, function_codes, last_validated).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define allowed `function_codes` taxonomy; (2) workflow assigns business owner per contract; (3) integrate procurement go-live events; (4) quarterly validation campaign; (5) tie to BIA criticality.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup dora_outsourcing_register.csv\n| eval fc_len=len(function_codes)\n| eval validated=strptime(last_validated,\"%Y-%m-%d\")\n| where fc_len<5 OR isnull(function_codes) OR validated<relative_time(now(),\"-365d@d\")\n| table contract_id, provider_id, function_codes, last_validated\n```\n\nUnderstanding this SPL\n\n**DORA Outsourcing Registers — Function Mapping Completeness for Each Outsourced Arrangement** — Registers must reflect which functions depend on each provider. Flagging contracts without mapped internal business functions prevents hollow register entries that fail supervisory scrutiny during inspections.\n\nDocumented **Data sources**: `dora_outsourcing_register.csv` (contract_id, provider_id, function_codes, last_validated). **App/TA** (typical add-on context): GRC export via DB Connect, `inputlookup`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **fc_len** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **validated** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fc_len<5 OR isnull(function_codes) OR validated<relative_time(now(),\"-365d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA Outsourcing Registers — Function Mapping Completeness for Each Outsourced Arrangement**): table contract_id, provider_id, function_codes, last_validated\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (incomplete), Donut (validation age), Single value (contracts to fix).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We registers must reflect which functions depend on each provider.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.33: DORA Outsourcing Registers — Function Mapping Completeness for Each Outsourced Arrangement.",
                  "ea": "Saved search 'UC-22.3.33' running on dora_outsourcing_register.csv (contract_id, provider_id, function_codes, last_validated), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.34",
              "n": "DORA Outsourcing Registers — Data Localization and Cross-Border Transfer Field Validation",
              "c": "critical",
              "f": "advanced",
              "v": "Outsourcing often imposes data residency constraints. Validating that register rows specify processing locations and transfer mechanisms reduces the risk of unlawful transfers and incomplete RoI disclosures.",
              "t": "Splunk DB Connect",
              "d": "`dora_outsourcing_register.csv` (contract_id, processing_regions, transfer_mechanism, data_categories)",
              "q": "| inputlookup dora_outsourcing_register.csv\n| where isnull(processing_regions) OR processing_regions=\"\" OR isnull(transfer_mechanism) OR transfer_mechanism=\"\"\n| lookup restricted_data_categories.csv data_categories OUTPUT requires_scc\n| where requires_scc=\"true\" AND transfer_mechanism!=\"SCC\"\n| table contract_id, processing_regions, transfer_mechanism, data_categories",
              "m": "(1) Legal maintains `restricted_data_categories.csv`; (2) do not store SCC text in Splunk—boolean flags only; (3) alert privacy office; (4) sync with DPIA references; (5) annual RoI sign-off pack.",
              "z": "Table (defects), Map (optional region codes), Single value (count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect.\n• Ensure the following data sources are available: `dora_outsourcing_register.csv` (contract_id, processing_regions, transfer_mechanism, data_categories).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Legal maintains `restricted_data_categories.csv`; (2) do not store SCC text in Splunk—boolean flags only; (3) alert privacy office; (4) sync with DPIA references; (5) annual RoI sign-off pack.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup dora_outsourcing_register.csv\n| where isnull(processing_regions) OR processing_regions=\"\" OR isnull(transfer_mechanism) OR transfer_mechanism=\"\"\n| lookup restricted_data_categories.csv data_categories OUTPUT requires_scc\n| where requires_scc=\"true\" AND transfer_mechanism!=\"SCC\"\n| table contract_id, processing_regions, transfer_mechanism, data_categories\n```\n\nUnderstanding this SPL\n\n**DORA Outsourcing Registers — Data Localization and Cross-Border Transfer Field Validation** — Outsourcing often imposes data residency constraints. Validating that register rows specify processing locations and transfer mechanisms reduces the risk of unlawful transfers and incomplete RoI disclosures.\n\nDocumented **Data sources**: `dora_outsourcing_register.csv` (contract_id, processing_regions, transfer_mechanism, data_categories). **App/TA** (typical add-on context): Splunk DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Filters the current rows with `where isnull(processing_regions) OR processing_regions=\"\" OR isnull(transfer_mechanism) OR transfer_mechanism=\"\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where requires_scc=\"true\" AND transfer_mechanism!=\"SCC\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA Outsourcing Registers — Data Localization and Cross-Border Transfer Field Validation**): table contract_id, processing_regions, transfer_mechanism, data_categories\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (defects), Map (optional region codes), Single value (count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We outsourcing often imposes data residency constraints.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.34: DORA Outsourcing Registers — Data Localization and Cross-Border Transfer Field Validation.",
                  "ea": "Saved search 'UC-22.3.34' running on dora_outsourcing_register.csv (contract_id, processing_regions, transfer_mechanism, data_categories), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.35",
              "n": "DORA Exit Strategy — Alternative Provider Shortlist Currency and RFP Readiness",
              "c": "high",
              "f": "intermediate",
              "v": "Exit strategies require credible alternatives. Tracking age of competitive shortlists, NDA status, and last market scan shows the entity refreshes exit options rather than relying on a stale single name in a slide deck.",
              "t": "KV Store / CSV from procurement",
              "d": "`dora_exit_shortlist.csv` (critical_service, alt_provider_id, last_rfp_contact, nda_expiry, market_scan_date)",
              "q": "| inputlookup dora_exit_shortlist.csv\n| eval nda_exp=strptime(nda_expiry,\"%Y-%m-%d\"), scan=strptime(market_scan_date,\"%Y-%m-%d\")\n| where nda_exp<relative_time(now(),\"+90d@d\") OR scan<relative_time(now(),\"-365d@d\")\n| stats values(alt_provider_id) as providers by critical_service\n| sort critical_service",
              "m": "(1) Link `critical_service` to BIA; (2) procurement owns updates; (3) integrate CLM for NDA dates; (4) exclude services with regulated monopoly constraints via notes field; (5) board risk committee review annually.",
              "z": "Table (stale shortlists), Timeline (market_scan_date), Single value (NDAs expiring).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: KV Store / CSV from procurement.\n• Ensure the following data sources are available: `dora_exit_shortlist.csv` (critical_service, alt_provider_id, last_rfp_contact, nda_expiry, market_scan_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Link `critical_service` to BIA; (2) procurement owns updates; (3) integrate CLM for NDA dates; (4) exclude services with regulated monopoly constraints via notes field; (5) board risk committee review annually.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup dora_exit_shortlist.csv\n| eval nda_exp=strptime(nda_expiry,\"%Y-%m-%d\"), scan=strptime(market_scan_date,\"%Y-%m-%d\")\n| where nda_exp<relative_time(now(),\"+90d@d\") OR scan<relative_time(now(),\"-365d@d\")\n| stats values(alt_provider_id) as providers by critical_service\n| sort critical_service\n```\n\nUnderstanding this SPL\n\n**DORA Exit Strategy — Alternative Provider Shortlist Currency and RFP Readiness** — Exit strategies require credible alternatives. Tracking age of competitive shortlists, NDA status, and last market scan shows the entity refreshes exit options rather than relying on a stale single name in a slide deck.\n\nDocumented **Data sources**: `dora_exit_shortlist.csv` (critical_service, alt_provider_id, last_rfp_contact, nda_expiry, market_scan_date). **App/TA** (typical add-on context): KV Store / CSV from procurement. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **nda_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where nda_exp<relative_time(now(),\"+90d@d\") OR scan<relative_time(now(),\"-365d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by critical_service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale shortlists), Timeline (market_scan_date), Single value (NDAs expiring).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We exit strategies require credible alternatives.",
              "mtype": [
                "Compliance",
                "Risk"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.35: DORA Exit Strategy — Alternative Provider Shortlist Currency and RFP Readiness.",
                  "ea": "Saved search 'UC-22.3.35' running on dora_exit_shortlist.csv (critical_service, alt_provider_id, last_rfp_contact, nda_expiry, market_scan_date), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.36",
              "n": "DORA Exit Strategy — Data Portability Test Evidence and Export Volume Integrity",
              "c": "high",
              "f": "advanced",
              "v": "Exiting a provider is impossible without data. Logging export job byte counts, checksums, and completion times for annual portability drills evidences technical feasibility of contractual exit clauses.",
              "t": "HTTP Event Collector from data movement tools",
              "d": "`index=dataops` `sourcetype=\"exit:export_test\"` (provider_id, dataset, bytes_out, checksum, duration_sec, test_id, _time)",
              "q": "index=dataops sourcetype=\"exit:export_test\" earliest=-400d\n| stats sum(bytes_out) as total_bytes, max(duration_sec) as max_dur by provider_id, test_id\n| join type=left provider_id [\n    search index=dataops sourcetype=\"exit:export_baseline\" earliest=-400d\n    | fields provider_id, expected_min_bytes\n  ]\n| where total_bytes < expected_min_bytes*0.95\n| table provider_id, test_id, total_bytes, expected_min_bytes, max_dur",
              "m": "(1) Establish baselines after first full export; (2) run tests in non-prod mirror where possible; (3) classify logs without customer payloads; (4) store checksum manifests in WORM; (5) map to exit playbook section IDs.",
              "z": "Table (failed tests), Line chart (bytes over tests), Single value (providers failing).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector from data movement tools.\n• Ensure the following data sources are available: `index=dataops` `sourcetype=\"exit:export_test\"` (provider_id, dataset, bytes_out, checksum, duration_sec, test_id, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Establish baselines after first full export; (2) run tests in non-prod mirror where possible; (3) classify logs without customer payloads; (4) store checksum manifests in WORM; (5) map to exit playbook section IDs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dataops sourcetype=\"exit:export_test\" earliest=-400d\n| stats sum(bytes_out) as total_bytes, max(duration_sec) as max_dur by provider_id, test_id\n| join type=left provider_id [\n    search index=dataops sourcetype=\"exit:export_baseline\" earliest=-400d\n    | fields provider_id, expected_min_bytes\n  ]\n| where total_bytes < expected_min_bytes*0.95\n| table provider_id, test_id, total_bytes, expected_min_bytes, max_dur\n```\n\nUnderstanding this SPL\n\n**DORA Exit Strategy — Data Portability Test Evidence and Export Volume Integrity** — Exiting a provider is impossible without data. Logging export job byte counts, checksums, and completion times for annual portability drills evidences technical feasibility of contractual exit clauses.\n\nDocumented **Data sources**: `index=dataops` `sourcetype=\"exit:export_test\"` (provider_id, dataset, bytes_out, checksum, duration_sec, test_id, _time). **App/TA** (typical add-on context): HTTP Event Collector from data movement tools. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dataops; **sourcetype**: exit:export_test. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dataops, sourcetype=\"exit:export_test\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by provider_id, test_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where total_bytes < expected_min_bytes*0.95` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA Exit Strategy — Data Portability Test Evidence and Export Volume Integrity**): table provider_id, test_id, total_bytes, expected_min_bytes, max_dur\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed tests), Line chart (bytes over tests), Single value (providers failing).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We exiting a provider is impossible without data.",
              "mtype": [
                "Compliance",
                "Data Quality"
              ],
              "ind": "Financial Services",
              "pillar": "observability",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.36: DORA Exit Strategy — Data Portability Test Evidence and Export Volume Integrity.",
                  "ea": "Saved search 'UC-22.3.36' running on index=dataops sourcetype=\"exit:export_test\" (provider_id, dataset, bytes_out, checksum, duration_sec, test_id, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.37",
              "n": "DORA Exit Strategy — Runbook Step Completion and Sign-Off SLA for Critical Providers",
              "c": "high",
              "f": "intermediate",
              "v": "Exit strategies are operational documents. Monitoring partial runbook executions, missing sign-offs, and tabletop drill gaps proves the organization treats exit readiness as a lifecycle control, not a one-time PDF.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=grc` `sourcetype=\"exit:runbook_step\"` (provider_id, step_id, due_date, completed_by, completed_date)",
              "q": "index=grc sourcetype=\"exit:runbook_step\" earliest=-400d\n| eval due=strptime(due_date,\"%Y-%m-%d\"), done=strptime(completed_date,\"%Y-%m-%d\")\n| where isnull(completed_by) AND now()>due\n| stats count as overdue_steps, values(step_id) as steps by provider_id\n| sort - overdue_steps",
              "m": "(1) Break runbooks into atomic `step_id`; (2) assign RACI in ServiceNow; (3) quarterly dry-run creates synthetic steps; (4) escalate overdue to outsourcing officer; (5) align with Art. 28(8) exit themes in internal mapping.",
              "z": "Table (providers), Bar chart (overdue steps), Single value (total overdue).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"exit:runbook_step\"` (provider_id, step_id, due_date, completed_by, completed_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Break runbooks into atomic `step_id`; (2) assign RACI in ServiceNow; (3) quarterly dry-run creates synthetic steps; (4) escalate overdue to outsourcing officer; (5) align with Art. 28(8) exit themes in internal mapping.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"exit:runbook_step\" earliest=-400d\n| eval due=strptime(due_date,\"%Y-%m-%d\"), done=strptime(completed_date,\"%Y-%m-%d\")\n| where isnull(completed_by) AND now()>due\n| stats count as overdue_steps, values(step_id) as steps by provider_id\n| sort - overdue_steps\n```\n\nUnderstanding this SPL\n\n**DORA Exit Strategy — Runbook Step Completion and Sign-Off SLA for Critical Providers** — Exit strategies are operational documents. Monitoring partial runbook executions, missing sign-offs, and tabletop drill gaps proves the organization treats exit readiness as a lifecycle control, not a one-time PDF.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"exit:runbook_step\"` (provider_id, step_id, due_date, completed_by, completed_date). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: exit:runbook_step. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"exit:runbook_step\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(completed_by) AND now()>due` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by provider_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (providers), Bar chart (overdue steps), Single value (total overdue).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We exit strategies are operational documents.",
              "mtype": [
                "Compliance",
                "Resilience"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.5 (ICT risk-management governance) is enforced — Splunk UC-22.3.37: DORA Exit Strategy — Runbook Step Completion and Sign-Off SLA for Critical Providers.",
                  "ea": "Saved search 'UC-22.3.37' running on index=grc sourcetype=\"exit:runbook_step\" (provider_id, step_id, due_date, completed_by, completed_date), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.38",
              "n": "DORA ICT Third-Party Risk Register — Inherent vs Residual Risk Score Reconciliation",
              "c": "high",
              "f": "intermediate",
              "v": "Risk registers should show mitigation effect. Detecting rows where residual risk equals inherent without documented controls, or where math is inconsistent, protects the integrity of supervisory reporting derived from the register.",
              "t": "Splunk DB Connect, GRC export",
              "d": "`index=risk` `sourcetype=\"dora:tprr\"` (provider_id, inherent, residual, control_effectiveness, last_review)",
              "q": "index=risk sourcetype=\"dora:tprr\" earliest=-1d\n| where inherent>0 AND residual=inherent AND (isnull(control_effectiveness) OR control_effectiveness=\"unknown\")\n| eval last=strptime(last_review,\"%Y-%m-%d\")\n| where last<relative_time(now(),\"-180d@d\")\n| table provider_id, inherent, residual, control_effectiveness, last_review",
              "m": "(1) Define scoring scale 1–25; (2) require control linkage from GRC; (3) quarterly risk committee review of hits; (4) exclude not-yet-assessed new vendors for 90 days via `onboarding_flag`; (5) export for internal audit.",
              "z": "Table (anomalies), Scatter (inherent vs residual), Single value (count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect, GRC export.\n• Ensure the following data sources are available: `index=risk` `sourcetype=\"dora:tprr\"` (provider_id, inherent, residual, control_effectiveness, last_review).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define scoring scale 1–25; (2) require control linkage from GRC; (3) quarterly risk committee review of hits; (4) exclude not-yet-assessed new vendors for 90 days via `onboarding_flag`; (5) export for internal audit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=risk sourcetype=\"dora:tprr\" earliest=-1d\n| where inherent>0 AND residual=inherent AND (isnull(control_effectiveness) OR control_effectiveness=\"unknown\")\n| eval last=strptime(last_review,\"%Y-%m-%d\")\n| where last<relative_time(now(),\"-180d@d\")\n| table provider_id, inherent, residual, control_effectiveness, last_review\n```\n\nUnderstanding this SPL\n\n**DORA ICT Third-Party Risk Register — Inherent vs Residual Risk Score Reconciliation** — Risk registers should show mitigation effect. Detecting rows where residual risk equals inherent without documented controls, or where math is inconsistent, protects the integrity of supervisory reporting derived from the register.\n\nDocumented **Data sources**: `index=risk` `sourcetype=\"dora:tprr\"` (provider_id, inherent, residual, control_effectiveness, last_review). **App/TA** (typical add-on context): Splunk DB Connect, GRC export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: risk; **sourcetype**: dora:tprr. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=risk, sourcetype=\"dora:tprr\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where inherent>0 AND residual=inherent AND (isnull(control_effectiveness) OR control_effectiveness=\"unknown\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **last** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where last<relative_time(now(),\"-180d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA ICT Third-Party Risk Register — Inherent vs Residual Risk Score Reconciliation**): table provider_id, inherent, residual, control_effectiveness, last_review\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (anomalies), Scatter (inherent vs residual), Single value (count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We risk registers should show mitigation effect.",
              "mtype": [
                "Compliance",
                "Risk"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.19",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.19 (Reporting of major ICT-related incidents) is enforced — Splunk UC-22.3.38: DORA ICT Third-Party Risk Register — Inherent vs Residual Risk Score Reconciliation.",
                  "ea": "Saved search 'UC-22.3.38' running on index=risk sourcetype=\"dora:tprr\" (provider_id, inherent, residual, control_effectiveness, last_review), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.39",
              "n": "DORA ICT Third-Party Risk Register — Control Testing Evidence Freshness by Provider Tier",
              "c": "high",
              "f": "intermediate",
              "v": "Higher-tier providers need fresher assurance. Tracking SOC2/ISO report ingestion dates and on-site test completion per tier enforces differentiated diligence aligned with criticality.",
              "t": "Splunk DB Connect, document metadata via HEC",
              "d": "`index=vendor` `sourcetype=\"dd:assurance\"` (provider_id, tier, report_type, period_end, uploaded_at, onsite_test_date)",
              "q": "index=vendor sourcetype=\"dd:assurance\" earliest=-1d\n| eval uploaded=strptime(uploaded_at,\"%Y-%m-%d\"), period=strptime(period_end,\"%Y-%m-%d\")\n| eval age_days=round((now()-period)/86400,0)\n| eval breach=case(\n    tier=\"1\" AND age_days>120, 1,\n    tier=\"2\" AND age_days>270, 1,\n    tier=\"3\" AND age_days>365, 1,\n    1=1, 0)\n| where breach=1\n| table provider_id, tier, report_type, period_end, age_days",
              "m": "(1) Calibrate tier thresholds with compliance; (2) integrate Vanta/SecurityScorecard only as supplementary; (3) track `onsite_test_date` for tier 1; (4) auto-email vendor for missing reports; (5) map to RoI provider keys.",
              "z": "Table (stale assurance), Bar by tier, Single value (providers overdue).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect, document metadata via HEC.\n• Ensure the following data sources are available: `index=vendor` `sourcetype=\"dd:assurance\"` (provider_id, tier, report_type, period_end, uploaded_at, onsite_test_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Calibrate tier thresholds with compliance; (2) integrate Vanta/SecurityScorecard only as supplementary; (3) track `onsite_test_date` for tier 1; (4) auto-email vendor for missing reports; (5) map to RoI provider keys.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vendor sourcetype=\"dd:assurance\" earliest=-1d\n| eval uploaded=strptime(uploaded_at,\"%Y-%m-%d\"), period=strptime(period_end,\"%Y-%m-%d\")\n| eval age_days=round((now()-period)/86400,0)\n| eval breach=case(\n    tier=\"1\" AND age_days>120, 1,\n    tier=\"2\" AND age_days>270, 1,\n    tier=\"3\" AND age_days>365, 1,\n    1=1, 0)\n| where breach=1\n| table provider_id, tier, report_type, period_end, age_days\n```\n\nUnderstanding this SPL\n\n**DORA ICT Third-Party Risk Register — Control Testing Evidence Freshness by Provider Tier** — Higher-tier providers need fresher assurance. Tracking SOC2/ISO report ingestion dates and on-site test completion per tier enforces differentiated diligence aligned with criticality.\n\nDocumented **Data sources**: `index=vendor` `sourcetype=\"dd:assurance\"` (provider_id, tier, report_type, period_end, uploaded_at, onsite_test_date). **App/TA** (typical add-on context): Splunk DB Connect, document metadata via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vendor; **sourcetype**: dd:assurance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vendor, sourcetype=\"dd:assurance\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **uploaded** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where breach=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA ICT Third-Party Risk Register — Control Testing Evidence Freshness by Provider Tier**): table provider_id, tier, report_type, period_end, age_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale assurance), Bar by tier, Single value (providers overdue).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We higher-tier providers need fresher assurance.",
              "mtype": [
                "Compliance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.24 (Digital operational-resilience testing) is enforced — Splunk UC-22.3.39: DORA ICT Third-Party Risk Register — Control Testing Evidence Freshness by Provider Tier.",
                  "ea": "Saved search 'UC-22.3.39' running on index=vendor sourcetype=\"dd:assurance\" (provider_id, tier, report_type, period_end, uploaded_at, onsite_test_date), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.40",
              "n": "DORA ICT Third-Party Risk Register — Issue Density and Open Finding Trend by Provider",
              "c": "medium",
              "f": "intermediate",
              "v": "A static risk score hides deteriorating behavior. Trending open audit findings, security scan failures, and SLA breaches per provider surfaces emerging third-party risk for the register narrative and escalation triggers.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), Tenable Add-On for Splunk (Splunkbase 4060)",
              "d": "`index=itsm` `sourcetype=\"snow:problem\"` (vendor_id, state, opened_at); `index=vuln` `sourcetype=\"tenable:vuln\"` (dns_name, plugin_id) joined to vendor via CMDB",
              "q": "index=itsm sourcetype=\"snow:problem\" vendor_id=* state!=\"Closed\" earliest=-180d\n| timechart span=30d count by vendor_id limit_use_times=0\n| untable _time vendor_id count\n| eventstats avg(count) as baseline by vendor_id\n| eval spike=if(count > baseline*1.5 AND count>5,1,0)\n| where spike=1\n| table _time, vendor_id, count, baseline",
              "m": "(1) Require `vendor_id` on problems; (2) tune multiplier for low-volume vendors; (3) correlate with contract renewal dates; (4) feed TPR committee; (5) document remediation in GRC—not Splunk narratives with PII.",
              "z": "Line chart (count by vendor), Table (spikes), Single value (vendors spiking).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), Tenable Add-On for Splunk (Splunkbase 4060).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:problem\"` (vendor_id, state, opened_at); `index=vuln` `sourcetype=\"tenable:vuln\"` (dns_name, plugin_id) joined to vendor via CMDB.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require `vendor_id` on problems; (2) tune multiplier for low-volume vendors; (3) correlate with contract renewal dates; (4) feed TPR committee; (5) document remediation in GRC—not Splunk narratives with PII.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:problem\" vendor_id=* state!=\"Closed\" earliest=-180d\n| timechart span=30d count by vendor_id limit_use_times=0\n| untable _time vendor_id count\n| eventstats avg(count) as baseline by vendor_id\n| eval spike=if(count > baseline*1.5 AND count>5,1,0)\n| where spike=1\n| table _time, vendor_id, count, baseline\n```\n\nUnderstanding this SPL\n\n**DORA ICT Third-Party Risk Register — Issue Density and Open Finding Trend by Provider** — A static risk score hides deteriorating behavior. Trending open audit findings, security scan failures, and SLA breaches per provider surfaces emerging third-party risk for the register narrative and escalation triggers.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:problem\"` (vendor_id, state, opened_at); `index=vuln` `sourcetype=\"tenable:vuln\"` (dns_name, plugin_id) joined to vendor via CMDB. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:problem. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:problem\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=30d** buckets with a separate series **by vendor_id limit_use_times=0** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **DORA ICT Third-Party Risk Register — Issue Density and Open Finding Trend by Provider**): untable _time vendor_id count\n• `eventstats` rolls up events into metrics; results are split **by vendor_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **spike** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where spike=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA ICT Third-Party Risk Register — Issue Density and Open Finding Trend by Provider**): table _time, vendor_id, count, baseline\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (count by vendor), Table (spikes), Single value (vendors spiking).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We a static risk score hides deteriorating behavior.",
              "mtype": [
                "Compliance",
                "Risk"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.28",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that DORA Art.28 (ICT third-party risk) is enforced — Splunk UC-22.3.40: DORA ICT Third-Party Risk Register — Issue Density and Open Finding Trend by Provider.",
                  "ea": "Saved search 'UC-22.3.40' running on sourcetype snow:problem and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 36.0,
          "qd": {
            "gold": 1,
            "silver": 0,
            "bronze": 39,
            "none": 0
          }
        },
        {
          "i": "22.4",
          "n": "CCPA",
          "u": [
            {
              "i": "22.4.1",
              "n": "CCPA Consumer Data Access and Deletion Request Tracking (§1798.100-105)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks privacy request fulfillment work items end-to-end and flags requests at risk of missing the 45-day statutory response window.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, opened_at, closed_at, state, cat_item, short_description) or `sourcetype=\"snow:incident\"` (number, opened_at, closed_at, state, short_description)",
              "q": "index=itsm (sourcetype=\"snow:sc_req_item\" OR sourcetype=\"snow:incident\")\n    (cat_item=\"*CCPA*\" OR cat_item=\"*Privacy*\" OR short_description=\"*CCPA*\" OR short_description=\"*Consumer Privacy*\")\n| eval opened_epoch=strptime(opened_at, \"%Y-%m-%d %H:%M:%S\")\n| eval closed_epoch=if(isnotnull(closed_at), strptime(closed_at, \"%Y-%m-%d %H:%M:%S\"), null())\n| eval age_days=round((now()-opened_epoch)/86400, 1)\n| eval sla_days=45\n| eval breach=if(isnull(closed_epoch) AND age_days>sla_days, 1, 0)\n| eval days_remaining=if(isnull(closed_epoch), sla_days-age_days, null())\n| table _time, number, state, age_days, days_remaining, breach, short_description\n| sort - breach, days_remaining",
              "m": "(1) Configure ServiceNow inputs for sc_req_item and/or incidents; (2) normalize catalog item names to match the filter (adjust `cat_item` strings); (3) if CCPA allows extensions, add fields for `extension_days` and update `sla_days` logic; (4) schedule daily with alert on `breach=1`.",
              "z": "Table (open requests with SLA countdown), Histogram (age distribution), Single value (% within 45 days).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Ticket Management](https://docs.splunk.com/Documentation/CIM/latest/User/Ticket_Management)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, opened_at, closed_at, state, cat_item, short_description) or `sourcetype=\"snow:incident\"` (number, opened_at, closed_at, state, short_description).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Configure ServiceNow inputs for sc_req_item and/or incidents; (2) normalize catalog item names to match the filter (adjust `cat_item` strings); (3) if CCPA allows extensions, add fields for `extension_days` and update `sla_days` logic; (4) schedule daily with alert on `breach=1`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm (sourcetype=\"snow:sc_req_item\" OR sourcetype=\"snow:incident\")\n    (cat_item=\"*CCPA*\" OR cat_item=\"*Privacy*\" OR short_description=\"*CCPA*\" OR short_description=\"*Consumer Privacy*\")\n| eval opened_epoch=strptime(opened_at, \"%Y-%m-%d %H:%M:%S\")\n| eval closed_epoch=if(isnotnull(closed_at), strptime(closed_at, \"%Y-%m-%d %H:%M:%S\"), null())\n| eval age_days=round((now()-opened_epoch)/86400, 1)\n| eval sla_days=45\n| eval breach=if(isnull(closed_epoch) AND age_days>sla_days, 1, 0)\n| eval days_remaining=if(isnull(closed_epoch), sla_days-age_days, null())\n| table _time, number, state, age_days, days_remaining, breach, short_description\n| sort - breach, days_remaining\n```\n\nUnderstanding this SPL\n\n**CCPA Consumer Data Access and Deletion Request Tracking (§1798.100-105)** — Tracks privacy request fulfillment work items end-to-end and flags requests at risk of missing the 45-day statutory response window.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, opened_at, closed_at, state, cat_item, short_description) or `sourcetype=\"snow:incident\"` (number, opened_at, closed_at, state, short_description). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:sc_req_item, snow:incident. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:sc_req_item\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **closed_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_remaining** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **CCPA Consumer Data Access and Deletion Request Tracking (§1798.100-105)**): table _time, number, state, age_days, days_remaining, breach, short_description\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CCPA Consumer Data Access and Deletion Request Tracking (§1798.100-105)** — Tracks privacy request fulfillment work items end-to-end and flags requests at risk of missing the 45-day statutory response window.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, opened_at, closed_at, state, cat_item, short_description) or `sourcetype=\"snow:incident\"` (number, opened_at, closed_at, state, short_description). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Ticket_Management.All_Ticket_Management` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (open requests with SLA countdown), Histogram (age distribution), Single value (% within 45 days).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track privacy request fulfillment work items end-to-end and flags requests at risk of missing the 45-day statutory response window.",
              "wv": "crawl",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA"
              ],
              "a": [
                "Ticket_Management"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.100",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.100 (Consumer right to know) is enforced — Splunk UC-22.4.1: CCPA Consumer Data Access and Deletion Request Tracking.",
                  "ea": "Saved search 'UC-22.4.1' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.105",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.105 (Consumer right to delete) is enforced — Splunk UC-22.4.1: CCPA Consumer Data Access and Deletion Request Tracking.",
                  "ea": "Saved search 'UC-22.4.1' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "22.4.2",
              "n": "CCPA Data Sale Opt-Out Enforcement Monitoring (§1798.120)",
              "c": "high",
              "f": "intermediate",
              "v": "Measures consumer interaction with \"Do Not Sell/Share\" flows and detects Global Privacy Control (GPC) signal presence for downstream marketing-system enforcement evidence.",
              "t": "Splunk Add-on for Apache Web Server (Splunkbase 3186) or equivalent web server TA",
              "d": "`index=web` `sourcetype=\"access_combined\"` (clientip, status, method, uri, useragent)",
              "q": "index=web sourcetype=\"access_combined\" earliest=-24h\n| eval uri_l=lower(uri)\n| where match(uri_l, \"/(do-not-sell|dnsmpi|privacy-rights|opt-out)(/|$|\\?)\")\n| eval gpc=if(match(_raw, \"(?i)sec-gpc:\\\\s*1\"), \"GPC_Present\", \"No_GPC\")\n| stats count as page_hits, dc(clientip) as unique_visitors by uri, status, gpc\n| sort - page_hits",
              "m": "(1) Configure web servers to log the GPC header (Apache: `%{Sec-GPC}i` / nginx: `$http_sec_gpc`) in the access log format; (2) ensure load balancers preserve the header to origin logs; (3) schedule daily for privacy team dashboards; (4) create a downstream dataset join with marketing system logs to verify opt-out enforcement.",
              "z": "Timechart (opt-out page hits), Pie chart (GPC present vs not), Table (top URIs by visitor count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Apache Web Server](https://splunkbase.splunk.com/app/3186), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [
                "T1048",
                "T1530"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Apache Web Server (Splunkbase 3186) or equivalent web server TA.\n• Ensure the following data sources are available: `index=web` `sourcetype=\"access_combined\"` (clientip, status, method, uri, useragent).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Configure web servers to log the GPC header (Apache: `%{Sec-GPC}i` / nginx: `$http_sec_gpc`) in the access log format; (2) ensure load balancers preserve the header to origin logs; (3) schedule daily for privacy team dashboards; (4) create a downstream dataset join with marketing system logs to verify opt-out enforcement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\" earliest=-24h\n| eval uri_l=lower(uri)\n| where match(uri_l, \"/(do-not-sell|dnsmpi|privacy-rights|opt-out)(/|$|\\?)\")\n| eval gpc=if(match(_raw, \"(?i)sec-gpc:\\\\s*1\"), \"GPC_Present\", \"No_GPC\")\n| stats count as page_hits, dc(clientip) as unique_visitors by uri, status, gpc\n| sort - page_hits\n```\n\nUnderstanding this SPL\n\n**CCPA Data Sale Opt-Out Enforcement Monitoring (§1798.120)** — Measures consumer interaction with \"Do Not Sell/Share\" flows and detects Global Privacy Control (GPC) signal presence for downstream marketing-system enforcement evidence.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` (clientip, status, method, uri, useragent). **App/TA** (typical add-on context): Splunk Add-on for Apache Web Server (Splunkbase 3186) or equivalent web server TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **uri_l** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(uri_l, \"/(do-not-sell|dnsmpi|privacy-rights|opt-out)(/|$|\\?)\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **gpc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by uri, status, gpc** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CCPA Data Sale Opt-Out Enforcement Monitoring (§1798.120)** — Measures consumer interaction with \"Do Not Sell/Share\" flows and detects Global Privacy Control (GPC) signal presence for downstream marketing-system enforcement evidence.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` (clientip, status, method, uri, useragent). **App/TA** (typical add-on context): Splunk Add-on for Apache Web Server (Splunkbase 3186) or equivalent web server TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (opt-out page hits), Pie chart (GPC present vs not), Table (top URIs by visitor count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measure consumer interaction with \"Do Not Sell/Share\" flows and detects Global Privacy Control (GPC) signal presence for downstream marketing-system enforcement evidence.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status | sort - count",
              "e": [
                "apache"
              ],
              "em": [
                "apache_httpd"
              ],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.120",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.120 is enforced — Splunk UC-22.4.2: CCPA Data Sale Opt-Out Enforcement Monitoring.",
                  "ea": "Saved search 'UC-22.4.2' running on index=web sourcetype=\"access_combined\" (clientip, status, method, uri, useragent), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.3",
              "n": "CCPA Sensitive Personal Information Processing Audit (§1798.121)",
              "c": "critical",
              "f": "advanced",
              "v": "Surfaces DLP policy hits from Microsoft 365 to demonstrate monitoring and limitation controls around sensitive personal information processing.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055)",
              "d": "`index=o365` `sourcetype=\"ms:o365:management\"` (Workload, Operation, PolicyName, UserPrincipalName, SensitiveInfoType, Severity)",
              "q": "# Shared SPL: intentional — see UC-22.8.3\nindex=o365 sourcetype=\"ms:o365:management\" Workload=\"Dlp\"\n| stats count by PolicyName, UserPrincipalName, SensitiveInfoType, Severity, Operation\n| sort - count\n| table PolicyName, UserPrincipalName, SensitiveInfoType, Severity, count, Operation",
              "m": "(1) Enable Office 365 Management Activity inputs in TA 4055 and confirm `Workload=\"Dlp\"` events are ingested; (2) map `SensitiveInfoType` values to your CCPA SPI categories via lookup; (3) alert on high-severity exfil patterns; (4) retain per legal hold requirements.",
              "z": "Bar chart (events by PolicyName), Heatmap (user x SensitiveInfoType), Line chart (daily volume by Severity).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [
                "T1005",
                "T1530"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055).\n• Ensure the following data sources are available: `index=o365` `sourcetype=\"ms:o365:management\"` (Workload, Operation, PolicyName, UserPrincipalName, SensitiveInfoType, Severity).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable Office 365 Management Activity inputs in TA 4055 and confirm `Workload=\"Dlp\"` events are ingested; (2) map `SensitiveInfoType` values to your CCPA SPI categories via lookup; (3) alert on high-severity exfil patterns; (4) retain per legal hold requirements.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n# Shared SPL: intentional — see UC-22.8.3\nindex=o365 sourcetype=\"ms:o365:management\" Workload=\"Dlp\"\n| stats count by PolicyName, UserPrincipalName, SensitiveInfoType, Severity, Operation\n| sort - count\n| table PolicyName, UserPrincipalName, SensitiveInfoType, Severity, count, Operation\n```\n\nUnderstanding this SPL\n\n**CCPA Sensitive Personal Information Processing Audit (§1798.121)** — Surfaces DLP policy hits from Microsoft 365 to demonstrate monitoring and limitation controls around sensitive personal information processing.\n\nDocumented **Data sources**: `index=o365` `sourcetype=\"ms:o365:management\"` (Workload, Operation, PolicyName, UserPrincipalName, SensitiveInfoType, Severity). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by PolicyName, UserPrincipalName, SensitiveInfoType, Severity, Operation** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **CCPA Sensitive Personal Information Processing Audit (§1798.121)**): table PolicyName, UserPrincipalName, SensitiveInfoType, Severity, count, Operation\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by PolicyName), Heatmap (user x SensitiveInfoType), Line chart (daily volume by Severity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surfaces DLP policy hits from Microsoft 365 to demonstrate monitoring and limitation controls around sensitive personal information processing.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.121",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.121 is enforced — Splunk UC-22.4.3: CCPA Sensitive Personal Information Processing Audit.",
                  "ea": "Saved search 'UC-22.4.3' running on sourcetype ms:o365:management and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.4",
              "n": "CCPA Right to Correct Inaccurate Personal Information (§1798.106)",
              "c": "high",
              "f": "intermediate",
              "v": "Businesses must use commercially reasonable efforts to correct inaccurate personal information upon a verifiable request. This search tracks correction tickets from intake through closure so you can prove timely handling and reduce risk of complaints to the California Attorney General for mishandled consumer rights workflows.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, opened_at, closed_at, state, cat_item, short_description, assignment_group)",
              "q": "index=itsm sourcetype=\"snow:sc_req_item\" earliest=-90d\n    (short_description=\"*correct*\" OR short_description=\"*inaccurate*\" OR cat_item=\"*CCPA*Correct*\")\n| eval opened_epoch=strptime(opened_at, \"%Y-%m-%d %H:%M:%S\")\n| eval closed_epoch=if(isnotnull(closed_at), strptime(closed_at, \"%Y-%m-%d %H:%M:%S\"), null())\n| eval age_days=round((now()-opened_epoch)/86400, 1)\n| eval sla_days=45\n| eval at_risk=if(isnull(closed_epoch) AND age_days>(sla_days-7), 1, 0)\n| stats count as requests, sum(at_risk) as nearing_breach by assignment_group, state\n| sort - requests",
              "m": "(1) Align `cat_item` and keyword filters with your ServiceNow CCPA correction catalog items; (2) route agent queues into `assignment_group` for accountability dashboards; (3) join optional `consumer_id_hash` field if present for deduplication; (4) schedule daily and alert when `nearing_breach>0`; (5) export monthly evidence for privacy counsel.",
              "z": "Table (open corrections with age), Bar chart (volume by assignment_group), Single value (requests past 38 days open).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, opened_at, closed_at, state, cat_item, short_description, assignment_group).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `cat_item` and keyword filters with your ServiceNow CCPA correction catalog items; (2) route agent queues into `assignment_group` for accountability dashboards; (3) join optional `consumer_id_hash` field if present for deduplication; (4) schedule daily and alert when `nearing_breach>0`; (5) export monthly evidence for privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:sc_req_item\" earliest=-90d\n    (short_description=\"*correct*\" OR short_description=\"*inaccurate*\" OR cat_item=\"*CCPA*Correct*\")\n| eval opened_epoch=strptime(opened_at, \"%Y-%m-%d %H:%M:%S\")\n| eval closed_epoch=if(isnotnull(closed_at), strptime(closed_at, \"%Y-%m-%d %H:%M:%S\"), null())\n| eval age_days=round((now()-opened_epoch)/86400, 1)\n| eval sla_days=45\n| eval at_risk=if(isnull(closed_epoch) AND age_days>(sla_days-7), 1, 0)\n| stats count as requests, sum(at_risk) as nearing_breach by assignment_group, state\n| sort - requests\n```\n\nUnderstanding this SPL\n\n**CCPA Right to Correct Inaccurate Personal Information (§1798.106)** — Businesses must use commercially reasonable efforts to correct inaccurate personal information upon a verifiable request. This search tracks correction tickets from intake through closure so you can prove timely handling and reduce risk of complaints to the California Attorney General for mishandled consumer rights workflows.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, opened_at, closed_at, state, cat_item, short_description, assignment_group). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:sc_req_item. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:sc_req_item\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **closed_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **at_risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by assignment_group, state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (open corrections with age), Bar chart (volume by assignment_group), Single value (requests past 38 days open).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We businesses must use commercially reasonable efforts to correct inaccurate personal information upon a verifiable request.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "CCPA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.106",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.106 is enforced — Splunk UC-22.4.4: CCPA Right to Correct Inaccurate Personal Information.",
                  "ea": "Saved search 'UC-22.4.4' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.5",
              "n": "CCPA Data Broker Sale Disclosure and Third-Party Sharing Audit (§1798.99.80, §1798.115)",
              "c": "critical",
              "f": "advanced",
              "v": "Consumers must receive meaningful notice about categories of personal information sold or shared and the categories of third parties. Monitoring structured “sale/share” pipeline logs shows operational alignment with disclosure obligations and supports rapid investigation if downstream systems process opted-out households.",
              "t": "Splunk HTTP Event Collector (core platform), Splunk Add-on for AWS (Splunkbase 1876) (optional archive path)",
              "d": "`index=privacy` `sourcetype=\"_json\"` `source=\"http:ccpa_sale_share\"` (event_type, consumer_opted_out, data_category, third_party_id, contract_id, _time)",
              "q": "index=privacy sourcetype=\"_json\" source=\"http:ccpa_sale_share\" earliest=-7d\n| where event_type IN (\"sale_batch\",\"share_batch\",\"broker_feed\")\n| eval violation=if(consumer_opted_out=\"true\" AND match(event_type,\"sale|share|broker\"), 1, 0)\n| stats count as events, sum(violation) as potential_violations, dc(third_party_id) as third_parties by data_category, event_type\n| sort - potential_violations",
              "m": "(1) Instrument CRM, CDP, or data-broker connectors to POST JSON batches to HEC with `consumer_opted_out` resolved from your consent store; (2) map `data_category` labels to your external privacy notice; (3) block or alert on `violation=1`; (4) retain five years or per records-management policy; (5) correlate with web `dnsmpi` hits from UC-22.4.2 for end-to-end proof.",
              "z": "Table (categories x third parties), Column chart (events by event_type), Single value (potential_violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1048",
                "T1530"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk HTTP Event Collector (core platform), Splunk Add-on for AWS (Splunkbase 1876) (optional archive path).\n• Ensure the following data sources are available: `index=privacy` `sourcetype=\"_json\"` `source=\"http:ccpa_sale_share\"` (event_type, consumer_opted_out, data_category, third_party_id, contract_id, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument CRM, CDP, or data-broker connectors to POST JSON batches to HEC with `consumer_opted_out` resolved from your consent store; (2) map `data_category` labels to your external privacy notice; (3) block or alert on `violation=1`; (4) retain five years or per records-management policy; (5) correlate with web `dnsmpi` hits from UC-22.4.2 for end-to-end proof.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"_json\" source=\"http:ccpa_sale_share\" earliest=-7d\n| where event_type IN (\"sale_batch\",\"share_batch\",\"broker_feed\")\n| eval violation=if(consumer_opted_out=\"true\" AND match(event_type,\"sale|share|broker\"), 1, 0)\n| stats count as events, sum(violation) as potential_violations, dc(third_party_id) as third_parties by data_category, event_type\n| sort - potential_violations\n```\n\nUnderstanding this SPL\n\n**CCPA Data Broker Sale Disclosure and Third-Party Sharing Audit (§1798.99.80, §1798.115)** — Consumers must receive meaningful notice about categories of personal information sold or shared and the categories of third parties. Monitoring structured “sale/share” pipeline logs shows operational alignment with disclosure obligations and supports rapid investigation if downstream systems process opted-out households.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=\"_json\"` `source=\"http:ccpa_sale_share\"` (event_type, consumer_opted_out, data_category, third_party_id, contract_id, _time). **App/TA** (typical add-on context): Splunk HTTP Event Collector (core platform), Splunk Add-on for AWS (Splunkbase 1876) (optional archive path). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: _json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"_json\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where event_type IN (\"sale_batch\",\"share_batch\",\"broker_feed\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **violation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by data_category, event_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (categories x third parties), Column chart (events by event_type), Single value (potential_violations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We consumers must receive meaningful notice about categories of personal information sold or shared and the categories of third parties.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "CCPA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.115",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.115 is enforced — Splunk UC-22.4.5: CCPA Data Broker Sale Disclosure and Third-Party Sharing Audit.",
                  "ea": "Saved search 'UC-22.4.5' running on sourcetype _json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.99.80",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.99.80 is enforced — Splunk UC-22.4.5: CCPA Data Broker Sale Disclosure and Third-Party Sharing Audit.",
                  "ea": "Saved search 'UC-22.4.5' running on sourcetype _json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.6",
              "n": "CCPA Global Privacy Control and “Do Not Sell or Share” Signal Enforcement (§1798.120, §1798.135(b))",
              "c": "critical",
              "f": "intermediate",
              "v": "Opt-out preference signals including GPC must be honored where required. Aggregating API-side opt-out application outcomes demonstrates that technical controls downstream of the browser actually suppress sale/share processing, not only that marketing pages were visited.",
              "t": "Splunk HTTP Event Collector (core platform)",
              "d": "`index=privacy` `sourcetype=\"_json\"` `source=\"http:ccpa_optout_apply\"` (profile_id, channel, gpc_received, opt_out_applied, failure_reason, _time)",
              "q": "index=privacy sourcetype=\"_json\" source=\"http:ccpa_optout_apply\" earliest=-24h\n| eval failed=if(opt_out_applied=\"false\" OR isnotnull(failure_reason), 1, 0)\n| stats count as attempts, sum(failed) as failures, sum(eval(gpc_received=\"true\")) as gpc_context by channel\n| eval fail_rate=round(100*failures/attempts, 2)\n| sort - failures",
              "m": "(1) Emit one event per profile/channel when consent middleware finishes applying opt-out; (2) set `gpc_received` from upstream headers; (3) alert if `fail_rate>1` percent for any `channel`; (4) join failures to application logs via `profile_id`; (5) document rollback procedures for bad releases.",
              "z": "Timechart (attempts vs failures), Table (channel, fail_rate), Pie chart (gpc_context ratio).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1565",
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk HTTP Event Collector (core platform).\n• Ensure the following data sources are available: `index=privacy` `sourcetype=\"_json\"` `source=\"http:ccpa_optout_apply\"` (profile_id, channel, gpc_received, opt_out_applied, failure_reason, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Emit one event per profile/channel when consent middleware finishes applying opt-out; (2) set `gpc_received` from upstream headers; (3) alert if `fail_rate>1` percent for any `channel`; (4) join failures to application logs via `profile_id`; (5) document rollback procedures for bad releases.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"_json\" source=\"http:ccpa_optout_apply\" earliest=-24h\n| eval failed=if(opt_out_applied=\"false\" OR isnotnull(failure_reason), 1, 0)\n| stats count as attempts, sum(failed) as failures, sum(eval(gpc_received=\"true\")) as gpc_context by channel\n| eval fail_rate=round(100*failures/attempts, 2)\n| sort - failures\n```\n\nUnderstanding this SPL\n\n**CCPA Global Privacy Control and “Do Not Sell or Share” Signal Enforcement (§1798.120, §1798.135(b))** — Opt-out preference signals including GPC must be honored where required. Aggregating API-side opt-out application outcomes demonstrates that technical controls downstream of the browser actually suppress sale/share processing, not only that marketing pages were visited.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=\"_json\"` `source=\"http:ccpa_optout_apply\"` (profile_id, channel, gpc_received, opt_out_applied, failure_reason, _time). **App/TA** (typical add-on context): Splunk HTTP Event Collector (core platform). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: _json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"_json\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **failed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by channel** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (attempts vs failures), Table (channel, fail_rate), Pie chart (gpc_context ratio).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We opt-out preference signals including GPC must be honored where required.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "CCPA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.120",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.120 is enforced — Splunk UC-22.4.6: CCPA Global Privacy Control and “Do Not Sell or Share” Signal Enforcement.",
                  "ea": "Saved search 'UC-22.4.6' running on sourcetype _json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.135(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.135(b) is enforced — Splunk UC-22.4.6: CCPA Global Privacy Control and “Do Not Sell or Share” Signal Enforcement.",
                  "ea": "Saved search 'UC-22.4.6' running on sourcetype _json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.7",
              "n": "CCPA Financial Incentive Program Consent and Withdrawal Monitoring (§1798.125)",
              "c": "high",
              "f": "intermediate",
              "v": "Financial incentive programs require good-faith estimates of program value, clear opt-in, and easy withdrawal without discriminatory treatment. This use case audits incentive enrollments and withdrawals to evidence fair process and support inquiries about material terms.",
              "t": "Splunk Add-on for Apache Web Server (Splunkbase 3186) or Splunk HTTP Event Collector (core platform)",
              "d": "`index=web` `sourcetype=\"access_combined\"` (method, uri, status, clientip); OR `index=marketing` `sourcetype=\"_json\"` `source=\"http:loyalty_ccpa\"` (action, program_id, material_terms_version, _time)",
              "q": "(index=web sourcetype=\"access_combined\" earliest=-30d uri=\"*/loyalty/ccpa-consent*\" OR uri=\"*/financial-incentive*\")\nOR (index=marketing sourcetype=\"_json\" source=\"http:loyalty_ccpa\" earliest=-30d)\n| eval evt=coalesce(action, method)\n| eval ok=if(status IN (\"200\",\"201\",\"204\") OR match(_raw,\"\\\"success\\\"\\\\s*:\\\\s*true\"), 1, 0)\n| stats count as hits, sum(ok) as successful by uri, program_id, material_terms_version\n| fillnull value=\"web\" program_id\n| sort - hits",
              "m": "(1) Log consent, withdrawal, and material-terms acknowledgment with version IDs in JSON or stable URI patterns; (2) map `program_id` to written estimate documents; (3) alert on spikes in non-200 responses; (4) exclude bot user agents via lookup; (5) quarterly export for legal review of `material_terms_version` mix.",
              "z": "Bar chart (hits by program_id), Table (terms version adoption), Line chart (daily successful consents).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Apache Web Server](https://splunkbase.splunk.com/app/3186), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Apache Web Server (Splunkbase 3186) or Splunk HTTP Event Collector (core platform).\n• Ensure the following data sources are available: `index=web` `sourcetype=\"access_combined\"` (method, uri, status, clientip); OR `index=marketing` `sourcetype=\"_json\"` `source=\"http:loyalty_ccpa\"` (action, program_id, material_terms_version, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Log consent, withdrawal, and material-terms acknowledgment with version IDs in JSON or stable URI patterns; (2) map `program_id` to written estimate documents; (3) alert on spikes in non-200 responses; (4) exclude bot user agents via lookup; (5) quarterly export for legal review of `material_terms_version` mix.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=web sourcetype=\"access_combined\" earliest=-30d uri=\"*/loyalty/ccpa-consent*\" OR uri=\"*/financial-incentive*\")\nOR (index=marketing sourcetype=\"_json\" source=\"http:loyalty_ccpa\" earliest=-30d)\n| eval evt=coalesce(action, method)\n| eval ok=if(status IN (\"200\",\"201\",\"204\") OR match(_raw,\"\\\"success\\\"\\\\s*:\\\\s*true\"), 1, 0)\n| stats count as hits, sum(ok) as successful by uri, program_id, material_terms_version\n| fillnull value=\"web\" program_id\n| sort - hits\n```\n\nUnderstanding this SPL\n\n**CCPA Financial Incentive Program Consent and Withdrawal Monitoring (§1798.125)** — Financial incentive programs require good-faith estimates of program value, clear opt-in, and easy withdrawal without discriminatory treatment. This use case audits incentive enrollments and withdrawals to evidence fair process and support inquiries about material terms.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` (method, uri, status, clientip); OR `index=marketing` `sourcetype=\"_json\"` `source=\"http:loyalty_ccpa\"` (action, program_id, material_terms_version, _time). **App/TA** (typical add-on context): Splunk Add-on for Apache Web Server (Splunkbase 3186) or Splunk HTTP Event Collector (core platform). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web, marketing; **sourcetype**: access_combined, _json. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, index=marketing, sourcetype=\"access_combined\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **evt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by uri, program_id, material_terms_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Fills null values with `fillnull`.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CCPA Financial Incentive Program Consent and Withdrawal Monitoring (§1798.125)** — Financial incentive programs require good-faith estimates of program value, clear opt-in, and easy withdrawal without discriminatory treatment. This use case audits incentive enrollments and withdrawals to evidence fair process and support inquiries about material terms.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` (method, uri, status, clientip); OR `index=marketing` `sourcetype=\"_json\"` `source=\"http:loyalty_ccpa\"` (action, program_id, material_terms_version, _time). **App/TA** (typical add-on context): Splunk Add-on for Apache Web Server (Splunkbase 3186) or Splunk HTTP Event Collector (core platform). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (hits by program_id), Table (terms version adoption), Line chart (daily successful consents).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We financial incentive programs require good-faith estimates of program value, clear opt-in, and easy withdrawal without discriminatory treatment.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "apache"
              ],
              "em": [
                "apache_httpd"
              ],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.125",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.125 is enforced — Splunk UC-22.4.7: CCPA Financial Incentive Program Consent and Withdrawal Monitoring.",
                  "ea": "Saved search 'UC-22.4.7' running on sourcetype access_combined and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
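The implementation notes for UC-22.4.7 also call for alerting on spikes in non-200 responses while excluding bot user agents via a lookup. A minimal companion sketch under stated assumptions: a hypothetical `bot_useragents` lookup keyed on `useragent` returning `is_bot`, and an illustrative threshold of 20 failed requests per hour — rename both to match your environment.

```spl
index=web sourcetype="access_combined" earliest=-24h
    (uri="*/loyalty/ccpa-consent*" OR uri="*/financial-incentive*")
| lookup bot_useragents useragent OUTPUT is_bot
| where isnull(is_bot)
| eval failed=if(tonumber(status)>=400, 1, 0)
| timechart span=1h sum(failed) as failed_requests
| where failed_requests > 20
```

Saved hourly as an alert, this complements the main consent-audit search rather than replacing it.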
            {
              "i": "22.4.8",
              "n": "CCPA Authorized Agent Request Verification and Fulfillment (§1798.140(ah), §1798.145)",
              "c": "medium",
              "f": "advanced",
              "v": "Businesses may require authorized agents to submit proof of signing authority. Tracking agent-submitted tickets with verification outcomes reduces fraud risk and demonstrates consistent authentication before disclosing or deleting consumer data.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, short_description, u_agent_verified, u_power_of_attorney_on_file, state, opened_at)",
              "q": "index=itsm sourcetype=\"snow:sc_req_item\" earliest=-90d\n    (short_description=\"*authorized agent*\" OR short_description=\"*power of attorney*\")\n| eval verified=coalesce(u_agent_verified, \"unknown\")\n| eval fulfilled=if(match(state,\"(?i)closed|resolved|complete\"),1,0)\n| stats count as tickets,\n        sum(eval(verified=\"false\" OR verified=\"unknown\")) as not_verified,\n        sum(fulfilled) as closed\n    by verified\n| eval risk_pct=round(100*not_verified/tickets,1)\n| sort - tickets",
              "m": "(1) Add custom fields on the privacy request form for agent verification and PoA storage references; (2) block fulfillment workflows until `u_agent_verified=true` except where statute allows; (3) schedule weekly review of `not_verified`; (4) integrate DocuSign webhook optional second sourcetype; (5) redact attachments from Splunk—index metadata only.",
              "z": "Table (verification state x counts), Donut chart (verified vs not), Timeline (median days to close by verified).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [
                "T1078",
                "T1110"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, short_description, u_agent_verified, u_power_of_attorney_on_file, state, opened_at).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Add custom fields on the privacy request form for agent verification and PoA storage references; (2) block fulfillment workflows until `u_agent_verified=true` except where statute allows; (3) schedule weekly review of `not_verified`; (4) integrate DocuSign webhook optional second sourcetype; (5) redact attachments from Splunk—index metadata only.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:sc_req_item\" earliest=-90d\n    (short_description=\"*authorized agent*\" OR short_description=\"*power of attorney*\")\n| eval verified=coalesce(u_agent_verified, \"unknown\")\n| eval fulfilled=if(match(state,\"(?i)closed|resolved|complete\"),1,0)\n| stats count as tickets,\n        sum(eval(verified=\"false\" OR verified=\"unknown\")) as not_verified,\n        sum(fulfilled) as closed\n    by verified\n| eval risk_pct=round(100*not_verified/tickets,1)\n| sort - tickets\n```\n\nUnderstanding this SPL\n\n**CCPA Authorized Agent Request Verification and Fulfillment (§1798.140(ah), §1798.145)** — Businesses may require authorized agents to submit proof of signing authority. Tracking agent-submitted tickets with verification outcomes reduces fraud risk and demonstrates consistent authentication before disclosing or deleting consumer data.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, short_description, u_agent_verified, u_power_of_attorney_on_file, state, opened_at). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:sc_req_item. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:sc_req_item\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **verified** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **fulfilled** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by verified** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **risk_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (verification state x counts), Donut chart (verified vs not), Timeline (median days to close by verified).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We businesses may require authorized agents to submit proof of signing authority.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "CCPA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.140",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.140 is enforced — Splunk UC-22.4.8: CCPA Authorized Agent Request Verification and Fulfillment.",
                  "ea": "Saved search 'UC-22.4.8' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.145",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.145 is enforced — Splunk UC-22.4.8: CCPA Authorized Agent Request Verification and Fulfillment.",
                  "ea": "Saved search 'UC-22.4.8' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
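Step (2) of UC-22.4.8's implementation blocks fulfillment until `u_agent_verified=true`. A hedged drift-detection sketch for that control, built only from the fields documented in the data source, lists tickets that reached a closed state without verification:

```spl
index=itsm sourcetype="snow:sc_req_item" earliest=-7d
    (short_description="*authorized agent*" OR short_description="*power of attorney*")
| eval verified=coalesce(u_agent_verified, "unknown")
| where match(state, "(?i)closed|resolved|complete") AND verified!="true"
| table number, short_description, verified, u_power_of_attorney_on_file, state, opened_at
```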
            {
              "i": "22.4.9",
              "n": "CCPA/CPRA — Sensitive PI — Precise Geolocation Collection Stop Signal After User Opt-Out",
              "c": "high",
              "f": "advanced",
              "v": "Sensitive personal information includes precise geolocation. Detecting mobile or web telemetry that continues after an opt-out timestamp demonstrates downstream systems honor consumer choices and reduces enforcement exposure under §1798.121.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876), mobile analytics HEC",
              "d": "`index=privacy` `sourcetype=\"consent:state\"` (profile_id, geo_collection_allowed, updated_ts); `index=app` `sourcetype=\"mobile:location_event\"` (profile_id, lat, lon, accuracy_m, _time)",
              "q": "index=privacy sourcetype=\"consent:state\" geo_collection_allowed=false earliest=-30d\n| eval opt=strptime(updated_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left profile_id [\n    search index=app sourcetype=\"mobile:location_event\" earliest=-30d\n    | where accuracy_m<=50\n    | fields profile_id, _time, accuracy_m\n  ]\n| where _time>opt\n| stats count by profile_id, accuracy_m\n| sort - count",
              "m": "(1) Align `accuracy_m` threshold with your “precise” definition; (2) hash `profile_id` in shared dashboards if required; (3) route to privacy engineering for pipeline fixes; (4) document lawful exceptions (e.g., emergency services) in a lookup; (5) retain evidence per records schedule.",
              "z": "Table (violations), Timeline (`_time` vs opt), Single value (affected profiles).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [
                "T1534"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876), mobile analytics HEC.\n• Ensure the following data sources are available: `index=privacy` `sourcetype=\"consent:state\"` (profile_id, geo_collection_allowed, updated_ts); `index=app` `sourcetype=\"mobile:location_event\"` (profile_id, lat, lon, accuracy_m, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `accuracy_m` threshold with your “precise” definition; (2) hash `profile_id` in shared dashboards if required; (3) route to privacy engineering for pipeline fixes; (4) document lawful exceptions (e.g., emergency services) in a lookup; (5) retain evidence per records schedule.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"consent:state\" geo_collection_allowed=false earliest=-30d\n| eval opt=strptime(updated_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left profile_id [\n    search index=app sourcetype=\"mobile:location_event\" earliest=-30d\n    | where accuracy_m<=50\n    | fields profile_id, _time, accuracy_m\n  ]\n| where _time>opt\n| stats count by profile_id, accuracy_m\n| sort - count\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Sensitive PI — Precise Geolocation Collection Stop Signal After User Opt-Out** — Sensitive personal information includes precise geolocation. Detecting mobile or web telemetry that continues after an opt-out timestamp demonstrates downstream systems honor consumer choices and reduces enforcement exposure under §1798.121.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=\"consent:state\"` (profile_id, geo_collection_allowed, updated_ts); `index=app` `sourcetype=\"mobile:location_event\"` (profile_id, lat, lon, accuracy_m, _time). **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), mobile analytics HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: consent:state. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"consent:state\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where _time>opt` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by profile_id, accuracy_m** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Timeline (`_time` vs opt), Single value (affected profiles).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We sensitive personal information includes precise geolocation.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.121",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.121 is enforced — Splunk UC-22.4.9: CCPA/CPRA — Sensitive PI — Precise Geolocation Collection Stop Signal After User Opt-Out.",
                  "ea": "Saved search 'UC-22.4.9' running on sourcetype consent:state and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
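UC-22.4.9 step (4) suggests documenting lawful exceptions (such as emergency services) in a lookup. One way that could look, assuming a hypothetical `geo_optout_exceptions` lookup keyed on `profile_id` that returns `exception_reason`; the rest mirrors the documented search:

```spl
index=privacy sourcetype="consent:state" geo_collection_allowed=false earliest=-30d
| eval opt=strptime(updated_ts, "%Y-%m-%dT%H:%M:%SZ")
| lookup geo_optout_exceptions profile_id OUTPUT exception_reason
| where isnull(exception_reason)
| join type=left profile_id [
    search index=app sourcetype="mobile:location_event" earliest=-30d
    | where accuracy_m<=50
    | fields profile_id, _time, accuracy_m ]
| where _time>opt
| stats count by profile_id
```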
            {
              "i": "22.4.10",
              "n": "CCPA/CPRA — Sensitive PI — Health Information Field Exposure in Non-PHI Indexes",
              "c": "critical",
              "f": "advanced",
              "v": "Health-related sensitive PI must stay within governed stores. Pattern matches for ICD codes, procedure names, or insurance member IDs in general application logs evidence accidental sprawl that CPRA-sensitive processing rules aim to prevent.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Data Loss Prevention summaries via HEC",
              "d": "`index=app` `sourcetype IN (\"apache:access\",\"nginx:access\",\"app:json\")` — `_raw`",
              "q": "index=app sourcetype IN (\"apache:access\",\"nginx:access\",\"app:json\") earliest=-24h NOT index=phi_approved\n| regex _raw=\"(?i)\\b(icd[- ]?10|procedure code|member[_ ]?id|diagnosis)\\b\"\n| stats count by sourcetype, host\n| sort - count",
              "m": "(1) Tune regex with legal/privacy to reduce false positives; (2) exclude approved `phi_approved` indexes explicitly; (3) auto-notable in ES with restricted perms; (4) trigger log redaction tickets; (5) map findings to DPIA / ROPA updates—not legal advice in Splunk.",
              "z": "Table (hosts), Single value (event count), Treemap by sourcetype.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1005"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Data Loss Prevention summaries via HEC.\n• Ensure the following data sources are available: `index=app` `sourcetype IN (\"apache:access\",\"nginx:access\",\"app:json\")` — `_raw`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune regex with legal/privacy to reduce false positives; (2) exclude approved `phi_approved` indexes explicitly; (3) auto-notable in ES with restricted perms; (4) trigger log redaction tickets; (5) map findings to DPIA / ROPA updates—not legal advice in Splunk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype IN (\"apache:access\",\"nginx:access\",\"app:json\") earliest=-24h NOT index=phi_approved\n| regex _raw=\"(?i)\\b(icd[- ]?10|procedure code|member[_ ]?id|diagnosis)\\b\"\n| stats count by sourcetype, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Sensitive PI — Health Information Field Exposure in Non-PHI Indexes** — Health-related sensitive PI must stay within governed stores. Pattern matches for ICD codes, procedure names, or insurance member IDs in general application logs evidence accidental sprawl that CPRA-sensitive processing rules aim to prevent.\n\nDocumented **Data sources**: `index=app` `sourcetype IN (\"apache:access\",\"nginx:access\",\"app:json\")` — `_raw`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Data Loss Prevention summaries via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app, phi_approved.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, index=phi_approved, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters rows matching a pattern with `regex`.\n• `stats` rolls up events into metrics; results are split **by sourcetype, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hosts), Single value (event count), Treemap by sourcetype.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We health-related sensitive PI must stay within governed stores.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.121",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.121 is enforced — Splunk UC-22.4.10: CCPA/CPRA — Sensitive PI — Health Information Field Exposure in Non-PHI Indexes.",
                  "ea": "Saved search 'UC-22.4.10' running on index=app sourcetype IN (\"apache:access\",\"nginx:access\",\"app:json\") — _raw, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
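Step (1) of UC-22.4.10 asks for regex tuning with legal/privacy to reduce false positives. A sketch of an allowlist-aware alert form, assuming a hypothetical `phi_logging_allowlist` lookup keyed on `host` that returns `approved`, plus an illustrative `count > 5` floor to suppress one-off matches; both are placeholders to adjust:

```spl
index=app sourcetype IN ("apache:access","nginx:access","app:json") earliest=-24h NOT index=phi_approved
| regex _raw="(?i)\b(icd[- ]?10|procedure code|member[_ ]?id|diagnosis)\b"
| lookup phi_logging_allowlist host OUTPUT approved
| where isnull(approved)
| stats count by sourcetype, host
| where count > 5
```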
            {
              "i": "22.4.11",
              "n": "CCPA/CPRA — Sensitive PI — Racial or Ethnic Origin Attributes in Model Training Feature Stores",
              "c": "critical",
              "f": "advanced",
              "v": "Use of sensitive categories for automated decisioning is tightly constrained. Auditing feature-store schemas and ETL logs for prohibited attribute names or proxy variables supports demonstrable limits of processing for sensitive PI.",
              "t": "HTTP Event Collector from MLOps metadata service",
              "d": "`index=ml` `sourcetype=\"feast:feature_view\"` (view_name, feature_name, source_table, created_by, _time)",
              "q": "index=ml sourcetype=\"feast:feature_view\" earliest=-30d\n| where match(feature_name,\"(?i)race|ethnic|religion|union|orientation|immigration|citizen\")\n| stats dc(view_name) as views, values(feature_name) as features by source_table, created_by\n| sort - views",
              "m": "(1) Expand keyword list with privacy counsel; (2) join `source_table` to approved sensitive-processing register; (3) block CI deploys on match in pre-prod mirror; (4) quarterly model governance review; (5) document lawful basis where processing is permitted.",
              "z": "Table (feature views), Bar chart (by owner), Single value (distinct views).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector from MLOps metadata service.\n• Ensure the following data sources are available: `index=ml` `sourcetype=\"feast:feature_view\"` (view_name, feature_name, source_table, created_by, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Expand keyword list with privacy counsel; (2) join `source_table` to approved sensitive-processing register; (3) block CI deploys on match in pre-prod mirror; (4) quarterly model governance review; (5) document lawful basis where processing is permitted.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ml sourcetype=\"feast:feature_view\" earliest=-30d\n| where match(feature_name,\"(?i)race|ethnic|religion|union|orientation|immigration|citizen\")\n| stats dc(view_name) as views, values(feature_name) as features by source_table, created_by\n| sort - views\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Sensitive PI — Racial or Ethnic Origin Attributes in Model Training Feature Stores** — Use of sensitive categories for automated decisioning is tightly constrained. Auditing feature-store schemas and ETL logs for prohibited attribute names or proxy variables supports demonstrable limits of processing for sensitive PI.\n\nDocumented **Data sources**: `index=ml` `sourcetype=\"feast:feature_view\"` (view_name, feature_name, source_table, created_by, _time). **App/TA** (typical add-on context): HTTP Event Collector from MLOps metadata service. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ml; **sourcetype**: feast:feature_view. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ml, sourcetype=\"feast:feature_view\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(feature_name,\"(?i)race|ethnic|religion|union|orientation|immigration|citizen\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by source_table, created_by** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (feature views), Bar chart (by owner), Single value (distinct views).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We use of sensitive categories for automated decisioning is tightly constrained.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.121",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.121 is enforced — Splunk UC-22.4.11: CCPA/CPRA — Sensitive PI — Racial or Ethnic Origin Attributes in Model Training Feature Stores.",
                  "ea": "Saved search 'UC-22.4.11' running on index=ml sourcetype=\"feast:feature_view\" (view_name, feature_name, source_table, created_by, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
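UC-22.4.11 step (2) joins `source_table` to an approved sensitive-processing register. A hedged sketch, assuming a hypothetical `sensitive_processing_register` lookup keyed on `source_table` that returns `approved_basis`, so only unregistered tables surface:

```spl
index=ml sourcetype="feast:feature_view" earliest=-30d
| where match(feature_name, "(?i)race|ethnic|religion|union|orientation|immigration|citizen")
| lookup sensitive_processing_register source_table OUTPUT approved_basis
| where isnull(approved_basis)
| stats dc(view_name) as views, values(feature_name) as features by source_table, created_by
| sort - views
```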
            {
              "i": "22.4.12",
              "n": "CCPA/CPRA — Data Broker Registry — Sale/Share Disclosure Parity vs Published Broker Categories",
              "c": "high",
              "f": "intermediate",
              "v": "Businesses must align disclosures with actual broker-like transfers. Comparing internal “sale/share” flags on data feeds against the firm’s published broker-category statements evidences consistency for AG inquiries under §1798.99.80 themes.",
              "t": "Splunk DB Connect to data catalog, `inputlookup`",
              "d": "`data_contracts.csv` (feed_id, sale_or_share_flag, broker_category_code); `published_broker_disclosure.csv` (broker_category_code, disclosed_flag)",
              "q": "| inputlookup data_contracts.csv\n| lookup published_broker_disclosure.csv broker_category_code OUTPUT disclosed_flag\n| where sale_or_share_flag=\"true\" AND (disclosed_flag!=\"true\" OR isnull(disclosed_flag))\n| table feed_id, broker_category_code, sale_or_share_flag, disclosed_flag",
              "m": "(1) Owner: privacy legal + data governance; (2) refresh contracts when onboarding feeds; (3) alert before annual privacy policy updates; (4) map `broker_category_code` to CPRA registry taxonomy; (5) store public disclosure version id externally, reference here.",
              "z": "Table (gaps), Single value (feeds), Sankey optional feed→category.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect to data catalog, `inputlookup`.\n• Ensure the following data sources are available: `data_contracts.csv` (feed_id, sale_or_share_flag, broker_category_code); `published_broker_disclosure.csv` (broker_category_code, disclosed_flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Owner: privacy legal + data governance; (2) refresh contracts when onboarding feeds; (3) alert before annual privacy policy updates; (4) map `broker_category_code` to CPRA registry taxonomy; (5) store public disclosure version id externally, reference here.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup data_contracts.csv\n| lookup published_broker_disclosure.csv broker_category_code OUTPUT disclosed_flag\n| where sale_or_share_flag=\"true\" AND (disclosed_flag!=\"true\" OR isnull(disclosed_flag))\n| table feed_id, broker_category_code, sale_or_share_flag, disclosed_flag\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Data Broker Registry — Sale/Share Disclosure Parity vs Published Broker Categories** — Businesses must align disclosures with actual broker-like transfers. Comparing internal “sale/share” flags on data feeds against the firm’s published broker-category statements evidences consistency for AG inquiries under §1798.99.80 themes.\n\nDocumented **Data sources**: `data_contracts.csv` (feed_id, sale_or_share_flag, broker_category_code); `published_broker_disclosure.csv` (broker_category_code, disclosed_flag). **App/TA** (typical add-on context): Splunk DB Connect to data catalog, `inputlookup`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where sale_or_share_flag=\"true\" AND (disclosed_flag!=\"true\" OR isnull(disclosed_flag))` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CCPA/CPRA — Data Broker Registry — Sale/Share Disclosure Parity vs Published Broker Categories**): table feed_id, broker_category_code, sale_or_share_flag, disclosed_flag\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (gaps), Single value (feeds), Sankey optional feed→category.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We businesses must align disclosures with actual broker-like transfers.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.99.80",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.99.80 is enforced — Splunk UC-22.4.12: CCPA/CPRA — Data Broker Registry — Sale/Share Disclosure Parity vs Published Broker Categories.",
                  "ea": "Saved search 'UC-22.4.12' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
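A compact alert form of UC-22.4.12, built from the same two lookup files the use case documents, that fires whenever any undisclosed sale/share feed exists (no new fields or sources assumed):

```spl
| inputlookup data_contracts.csv
| lookup published_broker_disclosure.csv broker_category_code OUTPUT disclosed_flag
| where sale_or_share_flag="true" AND (disclosed_flag!="true" OR isnull(disclosed_flag))
| stats count as undisclosed_feeds, values(broker_category_code) as gap_categories
| where undisclosed_feeds > 0
```

Scheduled ahead of the annual privacy policy refresh, this gives the zero-or-alert signal suggested in the implementation notes.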
            {
              "i": "22.4.13",
              "n": "CCPA/CPRA — Data Broker Registry — Opt-Out Propagation Latency to Downstream Data Partners",
              "c": "high",
              "f": "advanced",
              "v": "When consumers opt out of sale/share, partners must stop promptly. Measuring time from opt-out receipt to partner acknowledgment API or file manifest update demonstrates operational effectiveness beyond UI banners.",
              "t": "HTTP Event Collector from consent platform and partner webhooks",
              "d": "`index=privacy` `sourcetype=\"consent:optout\"` (consumer_id, optout_ts, channel); `index=privacy` `sourcetype=\"partner:ack\"` (consumer_id, partner_id, ack_ts)",
              "q": "index=privacy sourcetype=\"consent:optout\" earliest=-90d\n| eval oo=strptime(optout_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left consumer_id [\n    search index=privacy sourcetype=\"partner:ack\" earliest=-90d\n    | eval ak=strptime(ack_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n    | stats min(ak) as first_ack by consumer_id, partner_id\n  ]\n| eval lag_h=if(isnotnull(first_ack), round((first_ack-oo)/3600,2), null())\n| where isnull(first_ack) OR lag_h>72\n| stats count by partner_id\n| sort - count",
              "m": "(1) Set `72` hours per contractual SLA; (2) include GPC batch jobs as `channel`; (3) legal escalation for chronic laggards; (4) exclude partners in wind-down via lookup; (5) evidence for CPRA “do not sell/share” audits.",
              "z": "Table (partners), Histogram (lag_h), Single value (consumers past SLA).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector from consent platform and partner webhooks.\n• Ensure the following data sources are available: `index=privacy` `sourcetype=\"consent:optout\"` (consumer_id, optout_ts, channel); `index=privacy` `sourcetype=\"partner:ack\"` (consumer_id, partner_id, ack_ts).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Set `72` hours per contractual SLA; (2) include GPC batch jobs as `channel`; (3) legal escalation for chronic laggards; (4) exclude partners in wind-down via lookup; (5) evidence for CPRA “do not sell/share” audits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"consent:optout\" earliest=-90d\n| eval oo=strptime(optout_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left consumer_id [\n    search index=privacy sourcetype=\"partner:ack\" earliest=-90d\n    | eval ak=strptime(ack_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n    | stats min(ak) as first_ack by consumer_id, partner_id\n  ]\n| eval lag_h=if(isnotnull(first_ack), round((first_ack-oo)/3600,2), null())\n| where isnull(first_ack) OR lag_h>72\n| stats count by partner_id\n| sort - count\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Data Broker Registry — Opt-Out Propagation Latency to Downstream Data Partners** — When consumers opt out of sale/share, partners must stop promptly. Measuring time from opt-out receipt to partner acknowledgment API or file manifest update demonstrates operational effectiveness beyond UI banners.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=\"consent:optout\"` (consumer_id, optout_ts, channel); `index=privacy` `sourcetype=\"partner:ack\"` (consumer_id, partner_id, ack_ts). **App/TA** (typical add-on context): HTTP Event Collector from consent platform and partner webhooks. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: consent:optout. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"consent:optout\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **oo** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **lag_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(first_ack) OR lag_h>72` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by partner_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (partners), Histogram (lag_h), Single value (consumers past SLA).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We when consumers opt out of sale/share, partners must stop promptly.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.120",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.120 is enforced — Splunk UC-22.4.13: CCPA/CPRA — Data Broker Registry — Opt-Out Propagation Latency to Downstream Data Partners.",
                  "ea": "Saved search 'UC-22.4.13' running on sourcetype consent:optout and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
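For the UC-22.4.13 dashboard, a per-partner latency summary that also honors step (4) (exclude partners in wind-down). This is a sketch under stated assumptions: a hypothetical `partner_winddown` lookup keyed on `partner_id` returning `winddown_status`; `median` and `perc95` supply the SLA view suggested by the histogram panel:

```spl
index=privacy sourcetype="consent:optout" earliest=-90d
| eval oo=strptime(optout_ts, "%Y-%m-%dT%H:%M:%SZ")
| join type=left consumer_id [
    search index=privacy sourcetype="partner:ack" earliest=-90d
    | eval ak=strptime(ack_ts, "%Y-%m-%dT%H:%M:%SZ")
    | stats min(ak) as first_ack by consumer_id, partner_id ]
| lookup partner_winddown partner_id OUTPUT winddown_status
| where isnull(winddown_status)
| eval lag_h=round((first_ack-oo)/3600, 2)
| stats median(lag_h) as median_lag_h, perc95(lag_h) as p95_lag_h, count as consumers by partner_id
| sort - p95_lag_h
```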
            {
              "i": "22.4.14",
              "n": "CCPA/CPRA — Automated Decision Profiling — Opt-Out for Automated Decisioning Honored in Scoring API",
              "c": "high",
              "f": "advanced",
              "v": "Consumers may limit certain automated decisioning. Correlating scoring API calls with consent flags catches services still requesting model scores after an opt-out—reducing risk of statutory breaches tied to profiling choices.",
              "t": "API gateway logs via HEC (Kong, Apigee, AWS API GW)",
              "d": "`index=app` `sourcetype=\"apigee:access\"` (consumer_id, api_product, response_code, _time); `index=privacy` `sourcetype=\"consent:state\"` (consumer_id, limit_automated_decisioning, updated_ts)",
              "q": "index=privacy sourcetype=\"consent:state\" limit_automated_decisioning=true earliest=-30d\n| eval eff=strptime(updated_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left consumer_id [\n    search index=app sourcetype=\"apigee:access\" api_product=\"risk_score_v2\" earliest=-30d\n    | rename developer_app_id as consumer_id\n    | fields consumer_id, _time, response_code\n  ]\n| where _time>eff AND response_code=200\n| stats count by consumer_id, api_product\n| sort - count",
              "m": "(1) Align `consumer_id` keys across systems; (2) return 403 at gateway when possible—Splunk verifies drift; (3) redact payloads—use metadata only; (4) integrate with DSAR tooling; (5) legal defines `api_product` in-scope for “significant” decisioning.",
              "z": "Table (violations), Timeline, Single value (calls post opt-out).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: API gateway logs via HEC (Kong, Apigee, AWS API GW).\n• Ensure the following data sources are available: `index=app` `sourcetype=\"apigee:access\"` (consumer_id, api_product, response_code, _time); `index=privacy` `sourcetype=\"consent:state\"` (consumer_id, limit_automated_decisioning, updated_ts).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `consumer_id` keys across systems; (2) return 403 at gateway when possible—Splunk verifies drift; (3) redact payloads—use metadata only; (4) integrate with DSAR tooling; (5) legal defines `api_product` in-scope for “significant” decisioning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"consent:state\" limit_automated_decisioning=true earliest=-30d\n| eval eff=strptime(updated_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left consumer_id [\n    search index=app sourcetype=\"apigee:access\" api_product=\"risk_score_v2\" earliest=-30d\n    | rename developer_app_id as consumer_id\n    | fields consumer_id, _time, response_code\n  ]\n| where _time>eff AND response_code=200\n| stats count by consumer_id, api_product\n| sort - count\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Automated Decision Profiling — Opt-Out for Automated Decisioning Honored in Scoring API** — Consumers may limit certain automated decisioning. Correlating scoring API calls with consent flags catches services still requesting model scores after an opt-out—reducing risk of statutory breaches tied to profiling choices.\n\nDocumented **Data sources**: `index=app` `sourcetype=\"apigee:access\"` (consumer_id, api_product, response_code, _time); `index=privacy` `sourcetype=\"consent:state\"` (consumer_id, limit_automated_decisioning, updated_ts). **App/TA** (typical add-on context): API gateway logs via HEC (Kong, Apigee, AWS API GW). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: consent:state. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"consent:state\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eff** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where _time>eff AND response_code=200` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by consumer_id, api_product** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Timeline, Single value (calls post opt-out).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We consumers may limit certain automated decisioning.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.120",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.120 is enforced — Splunk UC-22.4.14: CCPA/CPRA — Automated Decision Profiling — Opt-Out for Automated Decisioning Honored in Scoring API.",
                  "ea": "Saved search 'UC-22.4.14' running on sourcetype apigee:access and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
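A trend variant of UC-22.4.14 suited to a daily alert, derived directly from the documented search; the `risk_score_v2` product comes from that search, while the 7-day window and daily `timechart` roll-up are illustrative choices:

```spl
index=privacy sourcetype="consent:state" limit_automated_decisioning=true earliest=-7d
| eval eff=strptime(updated_ts, "%Y-%m-%dT%H:%M:%SZ")
| join type=left consumer_id [
    search index=app sourcetype="apigee:access" api_product="risk_score_v2" earliest=-7d
    | rename developer_app_id as consumer_id
    | fields consumer_id, _time, response_code ]
| where _time>eff AND response_code=200
| timechart span=1d count as post_optout_scores
```

Any non-zero bucket is a candidate notable; the main search then identifies the affected consumers.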
            {
              "i": "22.4.15",
              "n": "CCPA/CPRA — Automated Decision Profiling — Feature Drift Alerts on Consumer-Profile Models",
              "c": "medium",
              "f": "advanced",
              "v": "Profiling systems should not silently change behavior. Tracking population-level distribution shifts in model inputs (e.g., spend deciles) supports governance that consumers receive consistent, explainable outcomes relative to disclosures.",
              "t": "Model observability HEC (`whylogs`, Evidently, or custom)",
              "d": "`index=ml` `sourcetype=\"profile:feature_snapshot\"` (model_name, feature_name, cohort, mean_val, p95_val, snapshot_date)",
              "q": "index=ml sourcetype=\"profile:feature_snapshot\" model_name=\"marketing_propensity\" earliest=-120d\n| eval sd=strptime(snapshot_date,\"%Y-%m-%d\")\n| sort feature_name, sd\n| streamstats window=2 global=f first(mean_val) as prev_mean by feature_name\n| eval delta_pct=if(isnotnull(prev_mean) AND prev_mean!=0, abs(mean_val-prev_mean)/abs(prev_mean)*100, null())\n| where delta_pct>25\n| table feature_name, sd, prev_mean, mean_val, delta_pct",
              "m": "(1) Calibrate `delta_pct` with model risk team; (2) exclude planned campaign seasonality via lookup; (3) trigger model card update workflow; (4) document consumer-facing impact assessment; (5) pair with fairness testing outside Splunk.",
              "z": "Line chart (mean_val over time), Table (drift events), Single value (features drifted).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Model observability HEC (`whylogs`, Evidently, or custom).\n• Ensure the following data sources are available: `index=ml` `sourcetype=\"profile:feature_snapshot\"` (model_name, feature_name, cohort, mean_val, p95_val, snapshot_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Calibrate `delta_pct` with model risk team; (2) exclude planned campaign seasonality via lookup; (3) trigger model card update workflow; (4) document consumer-facing impact assessment; (5) pair with fairness testing outside Splunk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ml sourcetype=\"profile:feature_snapshot\" model_name=\"marketing_propensity\" earliest=-120d\n| eval sd=strptime(snapshot_date,\"%Y-%m-%d\")\n| sort feature_name, sd\n| streamstats window=2 global=f first(mean_val) as prev_mean by feature_name\n| eval delta_pct=if(isnotnull(prev_mean) AND prev_mean!=0, abs(mean_val-prev_mean)/abs(prev_mean)*100, null())\n| where delta_pct>25\n| table feature_name, sd, prev_mean, mean_val, delta_pct\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Automated Decision Profiling — Feature Drift Alerts on Consumer-Profile Models** — Profiling systems should not silently change behavior. Tracking population-level distribution shifts in model inputs (e.g., spend deciles) supports governance that consumers receive consistent, explainable outcomes relative to disclosures.\n\nDocumented **Data sources**: `index=ml` `sourcetype=\"profile:feature_snapshot\"` (model_name, feature_name, cohort, mean_val, p95_val, snapshot_date). **App/TA** (typical add-on context): Model observability HEC (`whylogs`, Evidently, or custom). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ml; **sourcetype**: profile:feature_snapshot. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ml, sourcetype=\"profile:feature_snapshot\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sd** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by feature_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **delta_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delta_pct>25` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CCPA/CPRA — Automated Decision Profiling — Feature Drift Alerts on Consumer-Profile Models**): table feature_name, sd, prev_mean, mean_val, delta_pct\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (mean_val over time), Table (drift events), Single value (features drifted).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We profiling systems should not silently change behavior.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "observability",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.185",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.185 is enforced — Splunk UC-22.4.15: CCPA/CPRA — Automated Decision Profiling — Feature Drift Alerts on Consumer-Profile Models.",
                  "ea": "Saved search 'UC-22.4.15' running on index=ml sourcetype=\"profile:feature_snapshot\" (model_name, feature_name, cohort, mean_val, p95_val, snapshot_date), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.16",
              "n": "CCPA/CPRA — Automated Decision Profiling — Human Review Queue Depth for Adverse Automated Eligibility Decisions",
              "c": "high",
              "f": "intermediate",
              "v": "When automated decisions adversely affect consumers, human review pathways must function. Monitoring queue age for appeals ensures operational readiness beyond policy language—supporting transparency and remedy expectations.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, cat_item, opened_at, state, assignment_group)",
              "q": "index=itsm sourcetype=\"snow:sc_req_item\" cat_item=\"privacy_automated_decision_review\" state!=\"Closed\" earliest=-180d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval age_h=round((now()-opened)/3600,1)\n| where age_h>72\n| stats avg(age_h) as avg_age_h, max(age_h) as max_age_h, count by assignment_group\n| sort - count",
              "m": "(1) Map `cat_item` to your catalog entry; (2) tune SLA hours per CPRA operational policy; (3) executive dashboard for legal; (4) integrate with chat alerts; (5) monthly remediation report.",
              "z": "Table (queues), Single value (oldest age_h), Column chart by group.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, cat_item, opened_at, state, assignment_group).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map `cat_item` to your catalog entry; (2) tune SLA hours per CPRA operational policy; (3) executive dashboard for legal; (4) integrate with chat alerts; (5) monthly remediation report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:sc_req_item\" cat_item=\"privacy_automated_decision_review\" state!=\"Closed\" earliest=-180d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval age_h=round((now()-opened)/3600,1)\n| where age_h>72\n| stats avg(age_h) as avg_age_h, max(age_h) as max_age_h, count by assignment_group\n| sort - count\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Automated Decision Profiling — Human Review Queue Depth for Adverse Automated Eligibility Decisions** — When automated decisions adversely affect consumers, human review pathways must function. Monitoring queue age for appeals ensures operational readiness beyond policy language—supporting transparency and remedy expectations.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, cat_item, opened_at, state, assignment_group). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:sc_req_item. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:sc_req_item\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_h>72` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by assignment_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (queues), Single value (oldest age_h), Column chart by group.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We when automated decisions adversely affect consumers, human review pathways must function.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.185",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.185 is enforced — Splunk UC-22.4.16: CCPA/CPRA — Automated Decision Profiling — Human Review Queue Depth for Adverse Automated Eligibility Decisions.",
                  "ea": "Saved search 'UC-22.4.16' running on index=itsm sourcetype=\"snow:sc_req_item\" (number, cat_item, opened_at, state, assignment_group), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.17",
              "n": "CCPA/CPRA — Minor Consent — Age-Gating API Denials vs Account Creation Success Mismatch",
              "c": "critical",
              "f": "advanced",
              "v": "Selling or sharing minors’ PI requires affirmative authorization under specified conditions. Detecting account creation paths that succeed when age-gate APIs return under-16/under-13 outcomes indicates broken client flows or server-side bypass.",
              "t": "Application logs via HEC",
              "d": "`index=app` `sourcetype=\"auth:signup\"` (session_id, account_created, _time); `index=app` `sourcetype=\"identity:age_check\"` (session_id, age_band, decision, _time)",
              "q": "index=app sourcetype=\"identity:age_check\" decision=\"under16\" earliest=-30d\n| join type=left session_id [\n    search index=app sourcetype=\"auth:signup\" account_created=true earliest=-30d\n    | fields session_id, account_created, _time\n    | rename _time as signup_time\n  ]\n| where isnotnull(account_created)\n| table session_id, decision, signup_time, account_created",
              "m": "(1) Align `session_id` correlation; (2) immediate P1 to product security; (3) block accounts via batch job downstream; (4) legal determines applicable age threshold by business line; (5) never log raw DOB in Splunk—use `age_band` only.",
              "z": "Table (incidents), Single value (count), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Application logs via HEC.\n• Ensure the following data sources are available: `index=app` `sourcetype=\"auth:signup\"` (session_id, account_created, _time); `index=app` `sourcetype=\"identity:age_check\"` (session_id, age_band, decision, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `session_id` correlation; (2) immediate P1 to product security; (3) block accounts via batch job downstream; (4) legal determines applicable age threshold by business line; (5) never log raw DOB in Splunk—use `age_band` only.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"identity:age_check\" decision=\"under16\" earliest=-30d\n| join type=left session_id [\n    search index=app sourcetype=\"auth:signup\" account_created=true earliest=-30d\n    | fields session_id, account_created, _time\n    | rename _time as signup_time\n  ]\n| where isnotnull(account_created)\n| table session_id, decision, signup_time, account_created\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Minor Consent — Age-Gating API Denials vs Account Creation Success Mismatch** — Selling or sharing minors’ PI requires affirmative authorization under specified conditions. Detecting account creation paths that succeed when age-gate APIs return under-16/under-13 outcomes indicates broken client flows or server-side bypass.\n\nDocumented **Data sources**: `index=app` `sourcetype=\"auth:signup\"` (session_id, account_created, _time); `index=app` `sourcetype=\"identity:age_check\"` (session_id, age_band, decision, _time). **App/TA** (typical add-on context): Application logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: identity:age_check. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"identity:age_check\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnotnull(account_created)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CCPA/CPRA — Minor Consent — Age-Gating API Denials vs Account Creation Success Mismatch**): table session_id, decision, signup_time, account_created\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (incidents), Single value (count), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We selling or sharing minors’ PI requires affirmative authorization under specified conditions.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA",
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.120",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.120 is enforced — Splunk UC-22.4.17: CCPA/CPRA — Minor Consent — Age-Gating API Denials vs Account Creation Success Mismatch.",
                  "ea": "Saved search 'UC-22.4.17' running on sourcetype auth:signup and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.18",
              "n": "CCPA/CPRA — Minor Consent — Marketing Cookie Fires Before Parental Consent Timestamp",
              "c": "high",
              "f": "intermediate",
              "v": "Tracking technologies on minor-directed experiences require heightened controls. Tag-manager events that load ad or analytics pixels prior to recorded parental consent timestamps evidence misconfiguration of consent mode integrations.",
              "t": "Tealium/Segment/GTM server-side logs via HEC",
              "d": "`index=web` `sourcetype=\"consent:web\"` (session_id, parental_consent_ts, consent_string); `index=web` `sourcetype=\"tag:fire\"` (session_id, tag_vendor, marketing_flag, _time)",
              "q": "index=web sourcetype=\"consent:web\" earliest=-30d\n| eval pc=strptime(parental_consent_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left session_id [\n    search index=web sourcetype=\"tag:fire\" marketing_flag=true earliest=-30d\n    | fields session_id, tag_vendor, _time\n  ]\n| where _time < pc OR isnull(pc)\n| stats count by tag_vendor, session_id\n| sort - count",
              "m": "(1) Scope to minor-designated site IDs via lookup `minor_site_ids.csv`; (2) partner with marketing ops on tag inventory; (3) GPC and CMP linkage testing in pre-prod; (4) document vendor contracts; (5) evidence for AG inquiries on dark pattern adjacent issues.",
              "z": "Table (sessions), Bar by vendor, Single value (violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tealium/Segment/GTM server-side logs via HEC.\n• Ensure the following data sources are available: `index=web` `sourcetype=\"consent:web\"` (session_id, parental_consent_ts, consent_string); `index=web` `sourcetype=\"tag:fire\"` (session_id, tag_vendor, marketing_flag, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Scope to minor-designated site IDs via lookup `minor_site_ids.csv`; (2) partner with marketing ops on tag inventory; (3) GPC and CMP linkage testing in pre-prod; (4) document vendor contracts; (5) evidence for AG inquiries on dark pattern adjacent issues.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"consent:web\" earliest=-30d\n| eval pc=strptime(parental_consent_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left session_id [\n    search index=web sourcetype=\"tag:fire\" marketing_flag=true earliest=-30d\n    | fields session_id, tag_vendor, _time\n  ]\n| where _time < pc OR isnull(pc)\n| stats count by tag_vendor, session_id\n| sort - count\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Minor Consent — Marketing Cookie Fires Before Parental Consent Timestamp** — Tracking technologies on minor-directed experiences require heightened controls. Tag-manager events that load ad or analytics pixels prior to recorded parental consent timestamps evidence misconfiguration of consent mode integrations.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"consent:web\"` (session_id, parental_consent_ts, consent_string); `index=web` `sourcetype=\"tag:fire\"` (session_id, tag_vendor, marketing_flag, _time). **App/TA** (typical add-on context): Tealium/Segment/GTM server-side logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: consent:web. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"consent:web\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where _time < pc OR isnull(pc)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by tag_vendor, session_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (sessions), Bar by vendor, Single value (violations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracking technologies on minor-directed experiences require heightened controls.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA",
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.120",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.120 is enforced — Splunk UC-22.4.18: CCPA/CPRA — Minor Consent — Marketing Cookie Fires Before Parental Consent Timestamp.",
                  "ea": "Saved search 'UC-22.4.18' running on sourcetype consent:web and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.19",
              "n": "CCPA/CPRA — Dark Patterns — Forced Navigation Loops on “Do Not Sell or Share” Choice Screen",
              "c": "high",
              "f": "advanced",
              "v": "Dark patterns undermine consumer choice. Session paths that repeatedly bounce users away from the opt-out confirmation page without completion suggest manipulative UX—relevant to CPRA enforcement priorities on ease of exercise of rights.",
              "t": "Client-side telemetry HEC (privacy-safe event schema)",
              "d": "`index=web` `sourcetype=\"ux:privacy_flow\"` (session_id, step_name, sequence, _time)",
              "q": "index=web sourcetype=\"ux:privacy_flow\" earliest=-30d\n| sort 0 session_id, _time\n| streamstats window=1 global=f last(step_name) as prev_step by session_id\n| eval backtrack=if(step_name=\"dns_intro\" AND prev_step=\"dns_confirm\",1,0)\n| stats sum(backtrack) as loop_count by session_id\n| where loop_count>3\n| sort - loop_count",
              "m": "(1) Instrument `step_name` consistently across web and app; (2) exclude A/B tests via `experiment_id` lookup; (3) product design reviews on outliers; (4) correlate with conversion metrics carefully—avoid accidental shaming; (5) legal escalation only on sustained campaigns.",
              "z": "Funnel diagram export, Table (sessions), Single value (sessions over threshold).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Client-side telemetry HEC (privacy-safe event schema).\n• Ensure the following data sources are available: `index=web` `sourcetype=\"ux:privacy_flow\"` (session_id, step_name, sequence, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument `step_name` consistently across web and app; (2) exclude A/B tests via `experiment_id` lookup; (3) product design reviews on outliers; (4) correlate with conversion metrics carefully—avoid accidental shaming; (5) legal escalation only on sustained campaigns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"ux:privacy_flow\" earliest=-30d\n| sort 0 session_id, _time\n| streamstats window=1 global=f last(step_name) as prev_step by session_id\n| eval backtrack=if(step_name=\"dns_intro\" AND prev_step=\"dns_confirm\",1,0)\n| stats sum(backtrack) as loop_count by session_id\n| where loop_count>3\n| sort - loop_count\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Dark Patterns — Forced Navigation Loops on “Do Not Sell or Share” Choice Screen** — Dark patterns undermine consumer choice. Session paths that repeatedly bounce users away from the opt-out confirmation page without completion suggest manipulative UX—relevant to CPRA enforcement priorities on ease of exercise of rights.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"ux:privacy_flow\"` (session_id, step_name, sequence, _time). **App/TA** (typical add-on context): Client-side telemetry HEC (privacy-safe event schema). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: ux:privacy_flow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"ux:privacy_flow\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by session_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **backtrack** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by session_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where loop_count>3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Funnel diagram export, Table (sessions), Single value (sessions over threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We dark patterns undermine consumer choice.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.120",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.120 is enforced — Splunk UC-22.4.19: CCPA/CPRA — Dark Patterns — Forced Navigation Loops on “Do Not Sell or Share” Choice Screen.",
                  "ea": "Saved search 'UC-22.4.19' running on index=web sourcetype=\"ux:privacy_flow\" (session_id, step_name, sequence, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.20",
              "n": "CCPA/CPRA — Dark Patterns — Pre-Checked “Financial Incentive” Enrollment on Account Settings Save",
              "c": "high",
              "f": "intermediate",
              "v": "Financial incentive programs require opt-in consent. Server logs showing `marketing_incentive_optin` toggled true without a distinct affirmative action event (checkbox submit) support detection of prohibited default-on patterns.",
              "t": "Application audit logs via HEC",
              "d": "`index=app` `sourcetype=\"profile:update\"` (user_id, field_changed, old_value, new_value, client_event_id, _time); `index=web` `sourcetype=\"ux:form_submit\"` (user_id, form_id, _time)",
              "q": "index=app sourcetype=\"profile:update\" field_changed=\"marketing_incentive_optin\" new_value=true earliest=-30d\n| join type=left user_id [\n    search index=web sourcetype=\"ux:form_submit\" form_id=\"financial_incentive_optin\" earliest=-30d\n    | stats min(_time) as first_submit by user_id\n  ]\n| where isnull(first_submit) OR first_submit > _time\n| table user_id, _time, new_value, first_submit",
              "m": "(1) Require `client_event_id` correlation if available—strengthen join; (2) privacy counsel defines valid proof events; (3) auto-create remediation tickets; (4) exclude internal test accounts; (5) align with §1798.125 documentation.",
              "z": "Table (users), Single value (count), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Application audit logs via HEC.\n• Ensure the following data sources are available: `index=app` `sourcetype=\"profile:update\"` (user_id, field_changed, old_value, new_value, client_event_id, _time); `index=web` `sourcetype=\"ux:form_submit\"` (user_id, form_id, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require `client_event_id` correlation if available—strengthen join; (2) privacy counsel defines valid proof events; (3) auto-create remediation tickets; (4) exclude internal test accounts; (5) align with §1798.125 documentation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"profile:update\" field_changed=\"marketing_incentive_optin\" new_value=true earliest=-30d\n| join type=left user_id [\n    search index=web sourcetype=\"ux:form_submit\" form_id=\"financial_incentive_optin\" earliest=-30d\n    | stats min(_time) as first_submit by user_id\n  ]\n| where isnull(first_submit) OR first_submit > _time\n| table user_id, _time, new_value, first_submit\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Dark Patterns — Pre-Checked “Financial Incentive” Enrollment on Account Settings Save** — Financial incentive programs require opt-in consent. Server logs showing `marketing_incentive_optin` toggled true without a distinct affirmative action event (checkbox submit) support detection of prohibited default-on patterns.\n\nDocumented **Data sources**: `index=app` `sourcetype=\"profile:update\"` (user_id, field_changed, old_value, new_value, client_event_id, _time); `index=web` `sourcetype=\"ux:form_submit\"` (user_id, form_id, _time). **App/TA** (typical add-on context): Application audit logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: profile:update. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"profile:update\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(first_submit) OR first_submit > _time` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CCPA/CPRA — Dark Patterns — Pre-Checked “Financial Incentive” Enrollment on Account Settings Save**): table user_id, _time, new_value, first_submit\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users), Single value (count), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We financial incentive programs require opt-in consent.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.125",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.125 is enforced — Splunk UC-22.4.20: CCPA/CPRA — Dark Patterns — Pre-Checked “Financial Incentive” Enrollment on Account Settings Save.",
                  "ea": "Saved search 'UC-22.4.20' running on sourcetype profile:update and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.21",
              "n": "CCPA/CPRA — Cross-Context Behavioral Advertising — Cross-Site ID Sync Pixel After GPC Signal",
              "c": "high",
              "f": "advanced",
              "v": "Global Privacy Control obligates downstream behavior for covered businesses. Detecting third-party sync pixels or bid requests carrying cross-context identifiers after a GPC-positive session start evidences broken CMP / tag enforcement for “sale/share” pathways.",
              "t": "Server-side advertising telemetry HEC, CDN logs",
              "d": "`index=web` `sourcetype=\"cmp:gpc\"` (session_id, gpc, _time); `index=web` `sourcetype=\"ads:sync\"` (session_id, partner_id, _time, cross_context_flag)",
              "q": "index=web sourcetype=\"cmp:gpc\" gpc=1 earliest=-7d\n| eval gpc_time=_time\n| join type=left session_id [\n    search index=web sourcetype=\"ads:sync\" cross_context_flag=true earliest=-7d\n    | fields session_id, partner_id, _time\n  ]\n| where _time>=gpc_time\n| stats count by partner_id, session_id\n| sort - count",
              "m": "(1) Ensure `gpc_time` uses same clock as ad logs; (2) maintain list of `partner_id` considered cross-context; (3) integrate with tag monitoring remediation; (4) legal review for non-California visitors via geo enrichment optional; (5) document CMP version in release notes.",
              "z": "Table (partner_id), Bar chart, Single value (sessions).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Server-side advertising telemetry HEC, CDN logs.\n• Ensure the following data sources are available: `index=web` `sourcetype=\"cmp:gpc\"` (session_id, gpc, _time); `index=web` `sourcetype=\"ads:sync\"` (session_id, partner_id, _time, cross_context_flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure `gpc_time` uses same clock as ad logs; (2) maintain list of `partner_id` considered cross-context; (3) integrate with tag monitoring remediation; (4) legal review for non-California visitors via geo enrichment optional; (5) document CMP version in release notes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"cmp:gpc\" gpc=1 earliest=-7d\n| eval gpc_time=_time\n| join type=left session_id [\n    search index=web sourcetype=\"ads:sync\" cross_context_flag=true earliest=-7d\n    | fields session_id, partner_id, _time\n  ]\n| where _time>=gpc_time\n| stats count by partner_id, session_id\n| sort - count\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Cross-Context Behavioral Advertising — Cross-Site ID Sync Pixel After GPC Signal** — Global Privacy Control obligates downstream behavior for covered businesses. Detecting third-party sync pixels or bid requests carrying cross-context identifiers after a GPC-positive session start evidences broken CMP / tag enforcement for “sale/share” pathways.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"cmp:gpc\"` (session_id, gpc, _time); `index=web` `sourcetype=\"ads:sync\"` (session_id, partner_id, _time, cross_context_flag). **App/TA** (typical add-on context): Server-side advertising telemetry HEC, CDN logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: cmp:gpc. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"cmp:gpc\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **gpc_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where _time>=gpc_time` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by partner_id, session_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (partner_id), Bar chart, Single value (sessions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We global Privacy Control obligates downstream behavior for covered businesses.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.135",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.135 is enforced — Splunk UC-22.4.21: CCPA/CPRA — Cross-Context Behavioral Advertising — Cross-Site ID Sync Pixel After GPC Signal.",
                  "ea": "Saved search 'UC-22.4.21' running on sourcetype cmp:gpc and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.22",
              "n": "CCPA/CPRA — Cross-Context Behavioral Advertising — SSP Auction Requests After “Limit Use of Sensitive PI” Flag",
              "c": "critical",
              "f": "advanced",
              "v": "Limiting use of sensitive PI for advertising must be operationally enforced. Programmatic auction requests that still attach sensitive audience segments after the limit flag is set indicate pipeline defects with high regulatory salience.",
              "t": "Prebid/SSP server logs via HEC",
              "d": "`index=ads` `sourcetype=\"prebid:auction\"` (user_id, limit_sensitive_ad_use, segments, _time); `index=privacy` `sourcetype=\"consent:state\"` (user_id, limit_sensitive_ad_use as consent_flag, updated_ts)",
              "q": "index=privacy sourcetype=\"consent:state\" limit_sensitive_ad_use=true earliest=-30d\n| eval eff=strptime(updated_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left user_id [\n    search index=ads sourcetype=\"prebid:auction\" earliest=-30d\n    | eval has_sensitive=if(match(segments,\"(?i)(health|finance|precise_geo)\"),1,0)\n    | rename _time as auction_time\n    | fields user_id, auction_time, has_sensitive, segments\n  ]\n| where auction_time>eff AND has_sensitive=1\n| stats count by user_id\n| sort - count",
              "m": "(1) Legal defines `segments` taxonomy; (2) strip segment contents in long-term storage—hash or bucketize for Splunk; (3) block supply in ad server when possible; (4) daily compliance digest; (5) vendor notification workflow.",
              "z": "Table (users), Single value (auction violations), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Prebid/SSP server logs via HEC.\n• Ensure the following data sources are available: `index=ads` `sourcetype=\"prebid:auction\"` (user_id, limit_sensitive_ad_use, segments, _time); `index=privacy` `sourcetype=\"consent:state\"` (user_id, limit_sensitive_ad_use as consent_flag, updated_ts).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Legal defines `segments` taxonomy; (2) strip segment contents in long-term storage—hash or bucketize for Splunk; (3) block supply in ad server when possible; (4) daily compliance digest; (5) vendor notification workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"consent:state\" limit_sensitive_ad_use=true earliest=-30d\n| eval eff=strptime(updated_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left user_id [\n    search index=ads sourcetype=\"prebid:auction\" earliest=-30d\n    | eval has_sensitive=if(match(segments,\"(?i)(health|finance|precise_geo)\"),1,0)\n    | rename _time as auction_time\n    | fields user_id, auction_time, has_sensitive, segments\n  ]\n| where auction_time>eff AND has_sensitive=1\n| stats count by user_id\n| sort - count\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Cross-Context Behavioral Advertising — SSP Auction Requests After “Limit Use of Sensitive PI” Flag** — Limiting use of sensitive PI for advertising must be operationally enforced. Programmatic auction requests that still attach sensitive audience segments after the limit flag is set indicate pipeline defects with high regulatory salience.\n\nDocumented **Data sources**: `index=ads` `sourcetype=\"prebid:auction\"` (user_id, limit_sensitive_ad_use, segments, _time); `index=privacy` `sourcetype=\"consent:state\"` (user_id, limit_sensitive_ad_use as consent_flag, updated_ts). **App/TA** (typical add-on context): Prebid/SSP server logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: consent:state. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"consent:state\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eff** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where auction_time>eff AND has_sensitive=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users), Single value (auction violations), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We limiting use of sensitive PI for advertising must be operationally enforced.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.121",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.121 is enforced — Splunk UC-22.4.22: CCPA/CPRA — Cross-Context Behavioral Advertising — SSP Auction Requests After “Limit Use of Sensitive PI” Flag.",
                  "ea": "Saved search 'UC-22.4.22' running on sourcetype prebid:auction and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.23",
              "n": "CCPA/CPRA — Cross-Context Behavioral Advertising — Household Device Graph Linking Without Aggregated Opt-Out Propagation",
              "c": "high",
              "f": "advanced",
              "v": "Household-level graphs can undermine individual opt-outs if not synchronized. Monitoring graph edges created linking a opted-out device to new cookies without a refreshed household consent record evidences governance gaps in cross-device advertising.",
              "t": "Identity graph vendor logs via HEC",
              "d": "`index=id` `sourcetype=\"graph:edge\"` (household_id, device_a, device_b, created_ts, consent_version); `index=privacy` `sourcetype=\"consent:state\"` (device_id, sale_opt_out, updated_ts)",
              "q": "index=privacy sourcetype=\"consent:state\" sale_opt_out=true earliest=-30d\n| eval eff=strptime(updated_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| rename device_id as device_a\n| join type=left device_a [\n    search index=id sourcetype=\"graph:edge\" earliest=-30d\n    | eval ct=strptime(created_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n    | fields household_id, device_a, device_b, ct, consent_version\n  ]\n| lookup household_consent_floor.csv household_id OUTPUT min_consent_version as required_consent_version\n| where ct>eff AND consent_version!=required_consent_version\n| table household_id, device_a, device_b, ct, consent_version, required_consent_version",
              "m": "(1) Maintain `household_consent_floor.csv` (household_id, min_consent_version) updated when household policies change; (2) pseudonymize `household_id`; (3) vendor SLA for propagation latency; (4) legal defines household scope; (5) integrate with periodic reconciliation job.",
              "z": "Table (edges), Network graph optional restricted, Single value (count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Identity graph vendor logs via HEC.\n• Ensure the following data sources are available: `index=id` `sourcetype=\"graph:edge\"` (household_id, device_a, device_b, created_ts, consent_version); `index=privacy` `sourcetype=\"consent:state\"` (device_id, sale_opt_out, updated_ts).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `household_consent_floor.csv` (household_id, min_consent_version) updated when household policies change; (2) pseudonymize `household_id`; (3) vendor SLA for propagation latency; (4) legal defines household scope; (5) integrate with periodic reconciliation job.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"consent:state\" sale_opt_out=true earliest=-30d\n| eval eff=strptime(updated_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| rename device_id as device_a\n| join type=left device_a [\n    search index=id sourcetype=\"graph:edge\" earliest=-30d\n    | eval ct=strptime(created_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n    | fields household_id, device_a, device_b, ct, consent_version\n  ]\n| lookup household_consent_floor.csv household_id OUTPUT min_consent_version as required_consent_version\n| where ct>eff AND consent_version!=required_consent_version\n| table household_id, device_a, device_b, ct, consent_version, required_consent_version\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Cross-Context Behavioral Advertising — Household Device Graph Linking Without Aggregated Opt-Out Propagation** — Household-level graphs can undermine individual opt-outs if not synchronized. Monitoring graph edges created linking a opted-out device to new cookies without a refreshed household consent record evidences governance gaps in cross-device advertising.\n\nDocumented **Data sources**: `index=id` `sourcetype=\"graph:edge\"` (household_id, device_a, device_b, created_ts, consent_version); `index=privacy` `sourcetype=\"consent:state\"` (device_id, sale_opt_out, updated_ts). **App/TA** (typical add-on context): Identity graph vendor logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: consent:state. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"consent:state\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eff** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where ct>eff AND consent_version!=required_consent_version` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CCPA/CPRA — Cross-Context Behavioral Advertising — Household Device Graph Linking Without Aggregated Opt-Out Propagation**): table household_id, device_a, device_b, ct, consent_version, required_consent_version\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (edges), Network graph optional restricted, Single value (count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We household-level graphs can undermine individual opt-outs if not synchronized.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.120",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.120 is enforced — Splunk UC-22.4.23: CCPA/CPRA — Cross-Context Behavioral Advertising — Household Device Graph Linking Without Aggregated Opt-Out Propagation.",
                  "ea": "Saved search 'UC-22.4.23' running on sourcetype graph:edge and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.24",
              "n": "CCPA/CPRA — Correction/Deletion Verification — Downstream Data Warehouse Row Still Present After Deletion Certificate",
              "c": "critical",
              "f": "advanced",
              "v": "Deletion rights require proof that replicas disappear. Comparing deletion job completion events to late-arriving warehouse snapshots (row presence tests) shows the firm validates end-state—not only API 200 responses from a primary system.",
              "t": "Splunk DB Connect (read-only probes), deletion orchestrator HEC",
              "d": "`index=privacy` `sourcetype=\"privacy:delete_job\"` (subject_id, job_id, completed_ts, systems_touched); `index=dataops` `sourcetype=\"dq:row_probe\"` (subject_id, dataset, found_flag, probe_ts)",
              "q": "index=privacy sourcetype=\"privacy:delete_job\" earliest=-90d\n| eval done=strptime(completed_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left subject_id [\n    search index=dataops sourcetype=\"dq:row_probe\" earliest=-90d\n    | eval pt=strptime(probe_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n    | where found_flag=true\n    | fields subject_id, dataset, found_flag, pt\n  ]\n| where pt>done\n| stats values(dataset) as datasets by subject_id, job_id\n| sort subject_id",
              "m": "(1) Use salted `subject_id` only; (2) probes must be authorized read-only queries; (3) redact outputs; (4) auto-reopen deletion jobs; (5) document in ROPA technical measures.",
              "z": "Table (failed verifications), Single value (subjects), Bar by dataset.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (read-only probes), deletion orchestrator HEC.\n• Ensure the following data sources are available: `index=privacy` `sourcetype=\"privacy:delete_job\"` (subject_id, job_id, completed_ts, systems_touched); `index=dataops` `sourcetype=\"dq:row_probe\"` (subject_id, dataset, found_flag, probe_ts).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Use salted `subject_id` only; (2) probes must be authorized read-only queries; (3) redact outputs; (4) auto-reopen deletion jobs; (5) document in ROPA technical measures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"privacy:delete_job\" earliest=-90d\n| eval done=strptime(completed_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left subject_id [\n    search index=dataops sourcetype=\"dq:row_probe\" earliest=-90d\n    | eval pt=strptime(probe_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n    | where found_flag=true\n    | fields subject_id, dataset, found_flag, pt\n  ]\n| where pt>done\n| stats values(dataset) as datasets by subject_id, job_id\n| sort subject_id\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Correction/Deletion Verification — Downstream Data Warehouse Row Still Present After Deletion Certificate** — Deletion rights require proof that replicas disappear. Comparing deletion job completion events to late-arriving warehouse snapshots (row presence tests) shows the firm validates end-state—not only API 200 responses from a primary system.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=\"privacy:delete_job\"` (subject_id, job_id, completed_ts, systems_touched); `index=dataops` `sourcetype=\"dq:row_probe\"` (subject_id, dataset, found_flag, probe_ts). **App/TA** (typical add-on context): Splunk DB Connect (read-only probes), deletion orchestrator HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: privacy:delete_job. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"privacy:delete_job\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **done** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where pt>done` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by subject_id, job_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed verifications), Single value (subjects), Bar by dataset.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We deletion rights require proof that replicas disappear.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.105",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.105 (Consumer right to delete) is enforced — Splunk UC-22.4.24: CCPA/CPRA — Correction/Deletion Verification — Downstream Data Warehouse Row Still Present After Deletion Certificate.",
                  "ea": "Saved search 'UC-22.4.24' running on sourcetype privacy:delete_job and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.4.25",
              "n": "CCPA/CPRA — Correction/Deletion Verification — Search Index and Cache Purge Lag After Correction Request",
              "c": "high",
              "f": "advanced",
              "v": "Corrections must appear consistently across channels. Measuring lag between CRM correction timestamps and search-index update events (or CDN cache purge) demonstrates you catch stale copies that could frustrate consumer expectations under §1798.106.",
              "t": "Algolia/Elastic/OpenSearch audit logs, CRM HEC",
              "d": "`index=crm` `sourcetype=\"crm:profile_update\"` (consumer_id, corrected_field, corrected_ts); `index=search` `sourcetype=\"search:indexer_audit\"` (consumer_id, index_operation, _time)",
              "q": "index=crm sourcetype=\"crm:profile_update\" corrected_field=\"mailing_address\" earliest=-30d\n| eval ct=strptime(corrected_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left consumer_id [\n    search index=search sourcetype=\"search:indexer_audit\" index_operation IN (\"UPSERT\",\"REINDEX_DOC\") earliest=-30d\n    | rename _time as index_time\n    | fields consumer_id, index_time\n  ]\n| eval lag_h=(index_time-ct)/3600\n| where lag_h>24 OR isnull(index_time)\n| table consumer_id, ct, index_time, lag_h",
              "m": "(1) Align `consumer_id` to search document keys; (2) tune `24` hours to product SLA; (3) include CDN purge events in union if indexed separately; (4) customer support notified on breach; (5) quarterly control testing evidence.",
              "z": "Table (slow updates), Histogram (lag_h), Single value (count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Algolia/Elastic/OpenSearch audit logs, CRM HEC.\n• Ensure the following data sources are available: `index=crm` `sourcetype=\"crm:profile_update\"` (consumer_id, corrected_field, corrected_ts); `index=search` `sourcetype=\"search:indexer_audit\"` (consumer_id, index_operation, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `consumer_id` to search document keys; (2) tune `24` hours to product SLA; (3) include CDN purge events in union if indexed separately; (4) customer support notified on breach; (5) quarterly control testing evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=crm sourcetype=\"crm:profile_update\" corrected_field=\"mailing_address\" earliest=-30d\n| eval ct=strptime(corrected_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left consumer_id [\n    search index=search sourcetype=\"search:indexer_audit\" index_operation IN (\"UPSERT\",\"REINDEX_DOC\") earliest=-30d\n    | rename _time as index_time\n    | fields consumer_id, index_time\n  ]\n| eval lag_h=(index_time-ct)/3600\n| where lag_h>24 OR isnull(index_time)\n| table consumer_id, ct, index_time, lag_h\n```\n\nUnderstanding this SPL\n\n**CCPA/CPRA — Correction/Deletion Verification — Search Index and Cache Purge Lag After Correction Request** — Corrections must appear consistently across channels. Measuring lag between CRM correction timestamps and search-index update events (or CDN cache purge) demonstrates you catch stale copies that could frustrate consumer expectations under §1798.106.\n\nDocumented **Data sources**: `index=crm` `sourcetype=\"crm:profile_update\"` (consumer_id, corrected_field, corrected_ts); `index=search` `sourcetype=\"search:indexer_audit\"` (consumer_id, index_operation, _time). **App/TA** (typical add-on context): Algolia/Elastic/OpenSearch audit logs, CRM HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: crm; **sourcetype**: crm:profile_update. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=crm, sourcetype=\"crm:profile_update\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **lag_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lag_h>24 OR isnull(index_time)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CCPA/CPRA — Correction/Deletion Verification — Search Index and Cache Purge Lag After Correction Request**): table consumer_id, ct, index_time, lag_h\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (slow updates), Histogram (lag_h), Single value (count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We corrections must appear consistently across channels.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "elasticsearch"
              ],
              "em": [
                "elasticsearch_opensearch"
              ],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.105",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA/CPRA §1798.105 (Consumer right to delete) is enforced — Splunk UC-22.4.25: CCPA/CPRA — Correction/Deletion Verification — Search Index and Cache Purge Lag After Correction Request.",
                  "ea": "Saved search 'UC-22.4.25' running on sourcetype crm:profile_update and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.0,
          "qd": {
            "gold": 1,
            "silver": 0,
            "bronze": 24,
            "none": 0
          }
        },
        {
          "i": "22.5",
          "n": "MiFID II",
          "u": [
            {
              "i": "22.5.1",
              "n": "MiFID II Trade and Transaction Reporting Completeness (Art. 26)",
              "c": "critical",
              "f": "advanced",
              "v": "Detects reporting gaps (missing submissions vs expected trading-day volume) and ARM/APA rejection spikes to support completeness and accuracy controls for transaction reporting oversight.",
              "t": "Splunk HTTP Event Collector (core platform) with JSON parsing, Financial Information eXchange (FIX) Log Parsing (Splunkbase 431) (optional)",
              "d": "`index=trading` `sourcetype=\"_json\"` `source=\"http:trx_reporting\"` (transaction_report_id, trade_date, venue, report_status, reject_code)",
              "q": "index=trading sourcetype=\"_json\" source=\"http:trx_reporting\" earliest=-30d@d\n| eval rejected=if(isnotnull(reject_code) AND reject_code!=\"\", 1, 0)\n| eval accepted=if(report_status IN (\"ACCEPTED\",\"ACKED\",\"CONFIRMED\") AND rejected=0, 1, 0)\n| bin _time span=1d\n| stats count as sent, sum(accepted) as accepted, sum(rejected) as rejects, dc(transaction_report_id) as distinct_reports by _time, venue\n| eventstats avg(sent) as baseline by venue\n| eval volume_gap=if(sent<baseline*0.75, 1, 0)\n| where volume_gap=1 OR rejects>0\n| table _time, venue, sent, accepted, rejects, volume_gap\n| sort _time, venue",
              "m": "(1) Send ARM/APA acknowledgements and gateway rejects to HEC with a dedicated token; (2) standardize JSON keys (`reject_code`, `report_status`); (3) baseline \"expected volume\" can be replaced with a lookup of expected daily counts by `venue` and instrument class; (4) schedule daily for compliance desk review.",
              "z": "Timechart (accepted vs rejects), Single value (gap days counter), Table (worst venues by reject rate).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Log Parsing](https://splunkbase.splunk.com/app/431)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk HTTP Event Collector (core platform) with JSON parsing, Financial Information eXchange (FIX) Log Parsing (Splunkbase 431) (optional).\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"_json\"` `source=\"http:trx_reporting\"` (transaction_report_id, trade_date, venue, report_status, reject_code).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Send ARM/APA acknowledgements and gateway rejects to HEC with a dedicated token; (2) standardize JSON keys (`reject_code`, `report_status`); (3) baseline \"expected volume\" can be replaced with a lookup of expected daily counts by `venue` and instrument class; (4) schedule daily for compliance desk review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"_json\" source=\"http:trx_reporting\" earliest=-30d@d\n| eval rejected=if(isnotnull(reject_code) AND reject_code!=\"\", 1, 0)\n| eval accepted=if(report_status IN (\"ACCEPTED\",\"ACKED\",\"CONFIRMED\") AND rejected=0, 1, 0)\n| bin _time span=1d\n| stats count as sent, sum(accepted) as accepted, sum(rejected) as rejects, dc(transaction_report_id) as distinct_reports by _time, venue\n| eventstats avg(sent) as baseline by venue\n| eval volume_gap=if(sent<baseline*0.75, 1, 0)\n| where volume_gap=1 OR rejects>0\n| table _time, venue, sent, accepted, rejects, volume_gap\n| sort _time, venue\n```\n\nUnderstanding this SPL\n\n**MiFID II Trade and Transaction Reporting Completeness (Art. 26)** — Detects reporting gaps (missing submissions vs expected trading-day volume) and ARM/APA rejection spikes to support completeness and accuracy controls for transaction reporting oversight.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"_json\"` `source=\"http:trx_reporting\"` (transaction_report_id, trade_date, venue, report_status, reject_code). **App/TA** (typical add-on context): Splunk HTTP Event Collector (core platform) with JSON parsing, Financial Information eXchange (FIX) Log Parsing (Splunkbase 431) (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: _json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"_json\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **rejected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **accepted** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, venue** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by venue** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **volume_gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where volume_gap=1 OR rejects>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MiFID II Trade and Transaction Reporting Completeness (Art. 26)**): table _time, venue, sent, accepted, rejects, volume_gap\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (accepted vs rejects), Single value (gap days counter), Table (worst venues by reject rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We look for reporting gaps (missing submissions vs expected trading-day volume) and ARM/APA rejection spikes to support completeness and accuracy controls for transaction reporting oversight.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "exchange"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.26 is enforced — Splunk UC-22.5.1: MiFID II Trade and Transaction Reporting Completeness.",
                  "ea": "Saved search 'UC-22.5.1' running on sourcetype _json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.2",
              "n": "MiFID II Communications Recording and Retention Audit (Art. 16(7))",
              "c": "critical",
              "f": "advanced",
              "v": "Correlates collaboration recording signals (Webex) with telephony metadata (CUCM CDR) to evidence recording coverage and catch missing/failed capture patterns across communication channels.",
              "t": "Cisco WebEx Meetings Add-on for Splunk (Splunkbase 4991), Cisco CDR Reporting and Analytics (Splunkbase 669)",
              "d": "`index=collab` `sourcetype=\"cisco:webex:meetings:history:recordaccesshistory\"` (creationTime, meetingId, hostWebexID); `index=voip` `sourcetype=\"cisco:ucm:cdr\"` (callingPartyNumber, calledPartyNumber, duration, dateTimeOrigination, origCause_value)",
              "q": "index=voip sourcetype=\"cisco:ucm:cdr\" earliest=-30d\n| eval call_duration_min=round(duration/60, 1)\n| stats count as calls, avg(call_duration_min) as avg_duration, sum(eval(if(origCause_value!=\"0\" AND origCause_value!=\"16\", 1, 0))) as failed_calls by callingPartyNumber\n| where failed_calls>0 OR calls>100\n| sort - calls",
              "m": "(1) Install Webex Meetings inputs from TA 4991; (2) ingest CUCM CDR files into `index=voip` with `cisco:ucm:cdr` sourcetype via TA 669; (3) define retention dashboards using your legal minimum (e.g. 5 years for MiFID II) via lookups tied to meeting/call identifiers; (4) alert on recording failures or gaps.",
              "z": "Timechart (recording events), CDR duration distribution, Table (failed calls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Cisco WebEx Meetings Add-on for Splunk](https://splunkbase.splunk.com/app/4991), [Cisco CDR Reporting and Analytics](https://splunkbase.splunk.com/app/669)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cisco WebEx Meetings Add-on for Splunk (Splunkbase 4991), Cisco CDR Reporting and Analytics (Splunkbase 669).\n• Ensure the following data sources are available: `index=collab` `sourcetype=\"cisco:webex:meetings:history:recordaccesshistory\"` (creationTime, meetingId, hostWebexID); `index=voip` `sourcetype=\"cisco:ucm:cdr\"` (callingPartyNumber, calledPartyNumber, duration, dateTimeOrigination, origCause_value).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Install Webex Meetings inputs from TA 4991; (2) ingest CUCM CDR files into `index=voip` with `cisco:ucm:cdr` sourcetype via TA 669; (3) define retention dashboards using your legal minimum (e.g. 5 years for MiFID II) via lookups tied to meeting/call identifiers; (4) alert on recording failures or gaps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=voip sourcetype=\"cisco:ucm:cdr\" earliest=-30d\n| eval call_duration_min=round(duration/60, 1)\n| stats count as calls, avg(call_duration_min) as avg_duration, sum(eval(if(origCause_value!=\"0\" AND origCause_value!=\"16\", 1, 0))) as failed_calls by callingPartyNumber\n| where failed_calls>0 OR calls>100\n| sort - calls\n```\n\nUnderstanding this SPL\n\n**MiFID II Communications Recording and Retention Audit (Art. 16(7))** — Correlates collaboration recording signals (Webex) with telephony metadata (CUCM CDR) to evidence recording coverage and catch missing/failed capture patterns across communication channels.\n\nDocumented **Data sources**: `index=collab` `sourcetype=\"cisco:webex:meetings:history:recordaccesshistory\"` (creationTime, meetingId, hostWebexID); `index=voip` `sourcetype=\"cisco:ucm:cdr\"` (callingPartyNumber, calledPartyNumber, duration, dateTimeOrigination, origCause_value). **App/TA** (typical add-on context): Cisco WebEx Meetings Add-on for Splunk (Splunkbase 4991), Cisco CDR Reporting and Analytics (Splunkbase 669). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: voip; **sourcetype**: cisco:ucm:cdr. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=voip, sourcetype=\"cisco:ucm:cdr\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **call_duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by callingPartyNumber** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failed_calls>0 OR calls>100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (recording events), CDR duration distribution, Table (failed calls).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect collaboration recording signals (Webex) with telephony metadata (CUCM CDR) to evidence recording coverage and catch missing/failed capture patterns across communication channels.",
              "mtype": [
                "Compliance",
                "Audit"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_webex"
              ],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.16(7)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.16(7) (Record keeping of communications) is enforced — Splunk UC-22.5.2: MiFID II Communications Recording and Retention Audit.",
                  "ea": "Saved search 'UC-22.5.2' running on sourcetype cisco:webex:meetings:history:recordaccesshistory and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.3",
              "n": "MiFID II Best Execution Monitoring (Art. 27)",
              "c": "high",
              "f": "advanced",
              "v": "Compares execution quality and routing latency across venues (price improvement, fees, speed) using structured order/execution JSON from OMS/EMS to support best execution oversight.",
              "t": "Splunk HTTP Event Collector (core platform) with JSON parsing, Financial Information eXchange (FIX) Log Parsing (Splunkbase 431) (optional)",
              "d": "`index=trading` `sourcetype=\"_json\"` `source=\"http:bestex\"` (order_id, exec_id, venue, symbol, last_px, effective_spread_bps, fee_bps, exec_latency_ms, decision_time, report_time)",
              "q": "index=trading sourcetype=\"_json\" source=\"http:bestex\" earliest=-7d\n| eval all_in_bps=effective_spread_bps+fee_bps\n| stats median(all_in_bps) as p50_cost, median(exec_latency_ms) as p50_latency, count as fills by venue, symbol\n| eventstats median(p50_cost) as global_p50 by symbol\n| eval venue_delta=round(p50_cost-global_p50, 2)\n| sort symbol, venue_delta\n| table symbol, venue, fills, p50_cost, p50_latency, venue_delta",
              "m": "(1) Publish execution reports to HEC with consistent timestamps and normalized units (`effective_spread_bps`, `fee_bps`); (2) refresh baselines weekly; (3) exclude auctions/halts using flags in the JSON; (4) quarterly export for RTS 28 reporting.",
              "z": "Scatter (latency vs cost), Leaderboard table by venue, Box-style panels via stats.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Log Parsing](https://splunkbase.splunk.com/app/431)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk HTTP Event Collector (core platform) with JSON parsing, Financial Information eXchange (FIX) Log Parsing (Splunkbase 431) (optional).\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"_json\"` `source=\"http:bestex\"` (order_id, exec_id, venue, symbol, last_px, effective_spread_bps, fee_bps, exec_latency_ms, decision_time, report_time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Publish execution reports to HEC with consistent timestamps and normalized units (`effective_spread_bps`, `fee_bps`); (2) refresh baselines weekly; (3) exclude auctions/halts using flags in the JSON; (4) quarterly export for RTS 28 reporting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"_json\" source=\"http:bestex\" earliest=-7d\n| eval all_in_bps=effective_spread_bps+fee_bps\n| stats median(all_in_bps) as p50_cost, median(exec_latency_ms) as p50_latency, count as fills by venue, symbol\n| eventstats median(p50_cost) as global_p50 by symbol\n| eval venue_delta=round(p50_cost-global_p50, 2)\n| sort symbol, venue_delta\n| table symbol, venue, fills, p50_cost, p50_latency, venue_delta\n```\n\nUnderstanding this SPL\n\n**MiFID II Best Execution Monitoring (Art. 27)** — Compares execution quality and routing latency across venues (price improvement, fees, speed) using structured order/execution JSON from OMS/EMS to support best execution oversight.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"_json\"` `source=\"http:bestex\"` (order_id, exec_id, venue, symbol, last_px, effective_spread_bps, fee_bps, exec_latency_ms, decision_time, report_time). **App/TA** (typical add-on context): Splunk HTTP Event Collector (core platform) with JSON parsing, Financial Information eXchange (FIX) Log Parsing (Splunkbase 431) (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: _json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"_json\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **all_in_bps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by venue, symbol** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by symbol** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **venue_delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **MiFID II Best Execution Monitoring (Art. 
27)**): table symbol, venue, fills, p50_cost, p50_latency, venue_delta\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter (latency vs cost), Leaderboard table by venue, Box-style panels via stats.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compares execution quality and routing latency across venues (price improvement, fees, speed) using structured order/execution JSON from OMS/EMS to support best execution oversight.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "exchange"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.27",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.27 is enforced — Splunk UC-22.5.3: MiFID II Best Execution Monitoring.",
                  "ea": "Saved search 'UC-22.5.3' running on sourcetype _json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.4",
              "n": "MiFID II Transaction Reporting Timeliness and Rejection Root-Cause (RTS 22, Art. 26)",
              "c": "critical",
              "f": "advanced",
              "v": "Competent authorities expect complete, accurate, and timely transaction reports. Measuring submit-to-acknowledgement latency and clustering rejection codes supports proactive fixes before regulatory breaches and demonstrates surveillance over ARM/APA gateway health.",
              "t": "Splunk HTTP Event Collector (core platform) with JSON parsing",
              "d": "`index=trading` `sourcetype=\"_json\"` `source=\"http:trx_reporting\"` (transaction_report_id, submit_epoch_ms, ack_epoch_ms, reject_code, venue, instrument_id)",
              "q": "index=trading sourcetype=\"_json\" source=\"http:trx_reporting\" earliest=-7d\n| eval latency_ms=ack_epoch_ms-submit_epoch_ms\n| eval late=if(latency_ms>600000 OR isnull(ack_epoch_ms), 1, 0)\n| stats count as reports, sum(late) as late_or_open, dc(reject_code) as distinct_reject_codes by venue, reject_code\n| sort - late_or_open",
              "m": "(1) Normalize clocks with NTP on reporting hosts and store epoch milliseconds; (2) treat missing `ack_epoch_ms` after T+1 as open submissions; (3) maintain `reject_code_meanings.csv` lookup for narrative dashboards; (4) alert if `late_or_open/reports>0.01`; (5) feed monthly summary to compliance committee.",
              "z": "Histogram (latency_ms), Table (venue, reject_code, counts), Single value (late_or_open).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk HTTP Event Collector (core platform) with JSON parsing.\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"_json\"` `source=\"http:trx_reporting\"` (transaction_report_id, submit_epoch_ms, ack_epoch_ms, reject_code, venue, instrument_id).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize clocks with NTP on reporting hosts and store epoch milliseconds; (2) treat missing `ack_epoch_ms` after T+1 as open submissions; (3) maintain `reject_code_meanings.csv` lookup for narrative dashboards; (4) alert if `late_or_open/reports>0.01`; (5) feed monthly summary to compliance committee.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"_json\" source=\"http:trx_reporting\" earliest=-7d\n| eval latency_ms=ack_epoch_ms-submit_epoch_ms\n| eval late=if(latency_ms>600000 OR isnull(ack_epoch_ms), 1, 0)\n| stats count as reports, sum(late) as late_or_open, dc(reject_code) as distinct_reject_codes by venue, reject_code\n| sort - late_or_open\n```\n\nUnderstanding this SPL\n\n**MiFID II Transaction Reporting Timeliness and Rejection Root-Cause (RTS 22, Art. 26)** — Competent authorities expect complete, accurate, and timely transaction reports. Measuring submit-to-acknowledgement latency and clustering rejection codes supports proactive fixes before regulatory breaches and demonstrates surveillance over ARM/APA gateway health.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"_json\"` `source=\"http:trx_reporting\"` (transaction_report_id, submit_epoch_ms, ack_epoch_ms, reject_code, venue, instrument_id). **App/TA** (typical add-on context): Splunk HTTP Event Collector (core platform) with JSON parsing. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: _json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"_json\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **late** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by venue, reject_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (latency_ms), Table (venue, reject_code, counts), Single value (late_or_open).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We competent authorities expect complete, accurate, and timely transaction reports.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.26 is enforced — Splunk UC-22.5.4: MiFID II Transaction Reporting Timeliness and Rejection Root-Cause.",
                  "ea": "Saved search 'UC-22.5.4' running on sourcetype _json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.5",
              "n": "MiFID II Product Governance and Target Market Appropriateness Evidence (Art. 9(3) MiFIR, Art. 16(3) MiFID II)",
              "c": "high",
              "f": "intermediate",
              "v": "Manufacturers and distributors must maintain product-approval processes and identify target markets. Tracking workflow completion for new product launches evidences governance discipline and helps detect rushed approvals or missing distributor notifications.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, cat_item, state, opened_at, closed_at, u_product_isin, u_target_market_signed_off)",
              "q": "index=itsm sourcetype=\"snow:sc_req_item\" earliest=-365d\n    (cat_item=\"*Product Governance*\" OR cat_item=\"*MiFID Product*\")\n| eval signed_off=coalesce(u_target_market_signed_off, \"false\")\n| eval is_closed=if(match(state,\"(?i)closed|resolved|complete\"),1,0)\n| stats count as launches,\n        sum(eval(is_closed=0)) as open_approvals,\n        sum(eval(signed_off=\"false\" AND is_closed=1)) as closed_without_signoff\n    by cat_item\n| where open_approvals>0 OR closed_without_signoff>0\n| sort - open_approvals",
              "m": "(1) Model ServiceNow catalog items for product approval with mandatory `u_target_market_signed_off`; (2) require `u_product_isin` or internal SKU; (3) alert on `closed_without_signoff>0`; (4) join marketing distribution lists optional via lookup; (5) archive closed items quarterly for NCAs.",
              "z": "Table (catalog item health), Bar chart (open_approvals), Single value (closed_without_signoff).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, cat_item, state, opened_at, closed_at, u_product_isin, u_target_market_signed_off).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Model ServiceNow catalog items for product approval with mandatory `u_target_market_signed_off`; (2) require `u_product_isin` or internal SKU; (3) alert on `closed_without_signoff>0`; (4) join marketing distribution lists optional via lookup; (5) archive closed items quarterly for NCAs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:sc_req_item\" earliest=-365d\n    (cat_item=\"*Product Governance*\" OR cat_item=\"*MiFID Product*\")\n| eval signed_off=coalesce(u_target_market_signed_off, \"false\")\n| eval is_closed=if(match(state,\"(?i)closed|resolved|complete\"),1,0)\n| stats count as launches,\n        sum(eval(is_closed=0)) as open_approvals,\n        sum(eval(signed_off=\"false\" AND is_closed=1)) as closed_without_signoff\n    by cat_item\n| where open_approvals>0 OR closed_without_signoff>0\n| sort - open_approvals\n```\n\nUnderstanding this SPL\n\n**MiFID II Product Governance and Target Market Appropriateness Evidence (Art. 9(3) MiFIR, Art. 16(3) MiFID II)** — Manufacturers and distributors must maintain product-approval processes and identify target markets. Tracking workflow completion for new product launches evidences governance discipline and helps detect rushed approvals or missing distributor notifications.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, cat_item, state, opened_at, closed_at, u_product_isin, u_target_market_signed_off). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:sc_req_item. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:sc_req_item\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **signed_off** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_closed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by cat_item** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where open_approvals>0 OR closed_without_signoff>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (catalog item health), Bar chart (open_approvals), Single value (closed_without_signoff).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We manufacturers and distributors must maintain product-approval processes and identify target markets.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.16",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.16 is enforced — Splunk UC-22.5.5: MiFID II Product Governance and Target Market Appropriateness Evidence.",
                  "ea": "Saved search 'UC-22.5.5' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.9 is enforced — Splunk UC-22.5.5: MiFID II Product Governance and Target Market Appropriateness Evidence.",
                  "ea": "Saved search 'UC-22.5.5' running on sourcetype snow:sc_req_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.6",
              "n": "MiFID II Order and Decision Data Record Integrity (Art. 25)",
              "c": "critical",
              "f": "advanced",
              "v": "Investment firms must keep orderly records of services and transactions. Detecting gaps or duplicate order IDs in OMS/EMS journals supports defensible reconstruction of the decision chain during regulatory reconstruction exercises.",
              "t": "Financial Information eXchange (FIX) Log Parsing (Splunkbase 431), Splunk HTTP Event Collector (core platform)",
              "d": "`index=trading` `sourcetype=\"_json\"` `source=\"http:order_lifecycle\"` (order_id, cl_ord_id, event_type, venue, decision_time_ms, _time)",
              "q": "index=trading sourcetype=\"_json\" source=\"http:order_lifecycle\" earliest=-1d\n| where event_type IN (\"NewOrderSingle\",\"OrderCancelReplaceRequest\",\"ExecutionReport\",\"OrderCancelRequest\")\n| stats min(_time) as first_seen, max(_time) as last_seen, dc(event_type) as event_types, count as events by order_id\n| eval span_sec=last_seen-first_seen\n| eventstats dc(order_id) as total_orders\n| eventstats sum(events) as sum_events\n| eval dup_ratio=round(events/sum_events, 6)\n| where events<2 OR span_sec<0.001\n| sort order_id",
              "m": "(1) Ensure every order emits at least creation and terminal state events; (2) hash sensitive client fields before indexing; (3) alert on `events<2` for statuses that should be terminal within T day; (4) tune `span_sec` threshold for high-frequency desks; (5) export samples for internal audit replay tools.",
              "z": "Table (suspect orders), Column chart (events per order_id distribution via `bin events`), Single value (count of orders with events<2).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Log Parsing](https://splunkbase.splunk.com/app/431)",
              "mitre": [
                "T1565",
                "T1005"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Financial Information eXchange (FIX) Log Parsing (Splunkbase 431), Splunk HTTP Event Collector (core platform).\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"_json\"` `source=\"http:order_lifecycle\"` (order_id, cl_ord_id, event_type, venue, decision_time_ms, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure every order emits at least creation and terminal state events; (2) hash sensitive client fields before indexing; (3) alert on `events<2` for statuses that should be terminal within T day; (4) tune `span_sec` threshold for high-frequency desks; (5) export samples for internal audit replay tools.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"_json\" source=\"http:order_lifecycle\" earliest=-1d\n| where event_type IN (\"NewOrderSingle\",\"OrderCancelReplaceRequest\",\"ExecutionReport\",\"OrderCancelRequest\")\n| stats min(_time) as first_seen, max(_time) as last_seen, dc(event_type) as event_types, count as events by order_id\n| eval span_sec=last_seen-first_seen\n| eventstats dc(order_id) as total_orders\n| eventstats sum(events) as sum_events\n| eval dup_ratio=round(events/sum_events, 6)\n| where events<2 OR span_sec<0.001\n| sort order_id\n```\n\nUnderstanding this SPL\n\n**MiFID II Order and Decision Data Record Integrity (Art. 25)** — Investment firms must keep orderly records of services and transactions. Detecting gaps or duplicate order IDs in OMS/EMS journals supports defensible reconstruction of the decision chain during regulatory reconstruction exercises.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"_json\"` `source=\"http:order_lifecycle\"` (order_id, cl_ord_id, event_type, venue, decision_time_ms, _time). **App/TA** (typical add-on context): Financial Information eXchange (FIX) Log Parsing (Splunkbase 431), Splunk HTTP Event Collector (core platform). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: _json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"_json\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where event_type IN (\"NewOrderSingle\",\"OrderCancelReplaceRequest\",\"ExecutionReport\",\"OrderCancelRequest\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by order_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **span_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• `eval` defines or adjusts **dup_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where events<2 OR span_sec<0.001` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspect orders), Column chart (events per order_id distribution via `bin events`), Single value (count of orders with events<2).",
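The value statement also calls out duplicate order IDs, which the gap-focused search above does not surface directly. A minimal companion sketch, assuming each `cl_ord_id` should map to exactly one `order_id` per trading day:

```spl
index=trading sourcetype="_json" source="http:order_lifecycle" earliest=-1d
    event_type="NewOrderSingle"
| stats dc(order_id) as distinct_orders, count as new_order_events by cl_ord_id
| where distinct_orders>1 OR new_order_events>1
| sort - distinct_orders
```

Rows returned here indicate client order IDs reused across OMS orders, worth reconciling against the gap report before an Art. 25 reconstruction exercise.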
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We investment firms must keep orderly records of services and transactions.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "exchange"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.25",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.25 is enforced — Splunk UC-22.5.6: MiFID II Order and Decision Data Record Integrity.",
                  "ea": "Saved search 'UC-22.5.6' running on sourcetype _json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.7",
              "n": "MiFID II Clock Synchronization and Timestamp Quality for Reporting (RTS 25)",
              "c": "critical",
              "f": "expert",
              "v": "RTS 25 mandates traceable UTC synchronization for algorithmic and high-frequency activity reporting. Comparing application-reported event time to Splunk ingestion `_time` highlights skewed hosts or malformed timestamps that could invalidate best execution and transaction reports.",
              "t": "Splunk Universal Forwarder (core platform), Splunk Add-on for Unix and Linux (Splunkbase 833)",
              "d": "`index=os` `sourcetype=\"Unix:Version\"` OR `sourcetype=\"chrony:tracking\"` (host, SystemTime, LeapStatus, LastOffset, RMSOffset); `index=trading` `sourcetype=\"_json\"` `source=\"http:order_lifecycle\"` (host, reported_event_epoch_ms, _time)",
              "q": "index=trading sourcetype=\"_json\" source=\"http:order_lifecycle\" earliest=-4h isnotnull(reported_event_epoch_ms)\n| eval skew_ms=abs((reported_event_epoch_ms/1000)-_time)*1000\n| stats median(skew_ms) as p50_skew, perc95(skew_ms) as p95_skew, max(skew_ms) as max_skew by host\n| where p95_skew>250 OR max_skew>1000\n| sort - p95_skew",
              "m": "(1) Forward `chrony` or `ntpq` telemetry from trading servers; (2) align JSON `reported_event_epoch_ms` with exchange event time definitions; (3) alert on hosts breaching your documented max skew (e.g. 250 ms); (4) exclude batch backfills with a `ingest_mode` flag; (5) document remediation in runbooks tied to RTS 25 testing.",
              "z": "Table (host skew stats), Timechart (median skew by host), Single value (hosts breaching threshold).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder (core platform), Splunk Add-on for Unix and Linux (Splunkbase 833).\n• Ensure the following data sources are available: `index=os` `sourcetype=\"Unix:Version\"` OR `sourcetype=\"chrony:tracking\"` (host, SystemTime, LeapStatus, LastOffset, RMSOffset); `index=trading` `sourcetype=\"_json\"` `source=\"http:order_lifecycle\"` (host, reported_event_epoch_ms, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Forward `chrony` or `ntpq` telemetry from trading servers; (2) align JSON `reported_event_epoch_ms` with exchange event time definitions; (3) alert on hosts breaching your documented max skew (e.g. 250 ms); (4) exclude batch backfills with a `ingest_mode` flag; (5) document remediation in runbooks tied to RTS 25 testing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"_json\" source=\"http:order_lifecycle\" earliest=-4h isnotnull(reported_event_epoch_ms)\n| eval skew_ms=abs((reported_event_epoch_ms/1000)-_time)*1000\n| stats median(skew_ms) as p50_skew, perc95(skew_ms) as p95_skew, max(skew_ms) as max_skew by host\n| where p95_skew>250 OR max_skew>1000\n| sort - p95_skew\n```\n\nUnderstanding this SPL\n\n**MiFID II Clock Synchronization and Timestamp Quality for Reporting (RTS 25)** — RTS 25 mandates traceable UTC synchronization for algorithmic and high-frequency activity reporting. Comparing application-reported event time to Splunk ingestion `_time` highlights skewed hosts or malformed timestamps that could invalidate best execution and transaction reports.\n\nDocumented **Data sources**: `index=os` `sourcetype=\"Unix:Version\"` OR `sourcetype=\"chrony:tracking\"` (host, SystemTime, LeapStatus, LastOffset, RMSOffset); `index=trading` `sourcetype=\"_json\"` `source=\"http:order_lifecycle\"` (host, reported_event_epoch_ms, _time). **App/TA** (typical add-on context): Splunk Universal Forwarder (core platform), Splunk Add-on for Unix and Linux (Splunkbase 833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: _json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"_json\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **skew_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_skew>250 OR max_skew>1000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host skew stats), Timechart (median skew by host), Single value (hosts breaching threshold).",
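The documented data sources also include `chrony:tracking`, which the search above does not consume. A minimal sketch for the NTP-side view, assuming `LastOffset` and `RMSOffset` arrive in seconds (chrony's native unit) and using an illustrative 100 ms threshold:

```spl
index=os sourcetype="chrony:tracking" earliest=-4h
| eval offset_ms=abs(LastOffset)*1000, rms_ms=abs(RMSOffset)*1000
| stats max(offset_ms) as max_offset_ms, avg(rms_ms) as avg_rms_ms,
        latest(LeapStatus) as leap_status by host
| where max_offset_ms>100 OR leap_status!="Normal"
| sort - max_offset_ms
```

Correlating this host-level view with the application skew search helps separate OS clock drift from malformed application timestamps.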
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We rTS 25 mandates traceable UTC synchronization for algorithmic and high-frequency activity reporting.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.25",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.25 is enforced — Splunk UC-22.5.7: MiFID II Clock Synchronization and Timestamp Quality for Reporting.",
                  "ea": "Saved search 'UC-22.5.7' running on sourcetype Unix:Version and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.8",
              "n": "MiFID II Algorithmic Trading Strategy Limits and Kill-Switch Audit (Art. 17)",
              "c": "critical",
              "f": "advanced",
              "v": "Firms must have effective systems and risk controls for algorithmic trading, including thresholds and kill switches. Auditing throttle breaches and manual halts evidences governance and supports supervisory questions after market stress events.",
              "t": "Splunk HTTP Event Collector (core platform)",
              "d": "`index=trading` `sourcetype=\"_json\"` `source=\"http:algo_controls\"` (strategy_id, event_type, notional_limit_usd, notional_observed_usd, kill_switch_actor, _time)",
              "q": "index=trading sourcetype=\"_json\" source=\"http:algo_controls\" earliest=-30d\n| where event_type IN (\"limit_breach\",\"kill_switch_activated\",\"kill_switch_reset\",\"parameter_change\")\n| eval breach=if(event_type=\"limit_breach\" OR (isnotnull(notional_observed_usd) AND notional_observed_usd>notional_limit_usd), 1, 0)\n| stats count as events, sum(breach) as breaches, dc(strategy_id) as strategies_affected by event_type, kill_switch_actor\n| sort - breaches",
              "m": "(1) Emit structured events from risk gateways when limits are approached, breached, or when kill switches fire; (2) require `kill_switch_actor` for manual actions; (3) correlate with market data halts optional; (4) retain immutable copy to WORM storage per policy; (5) quarterly tabletop review of top `strategies_affected`.",
              "z": "Timeline (events by strategy_id), Table (event_type totals), Bar chart (breaches by strategy).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1562",
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk HTTP Event Collector (core platform).\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"_json\"` `source=\"http:algo_controls\"` (strategy_id, event_type, notional_limit_usd, notional_observed_usd, kill_switch_actor, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Emit structured events from risk gateways when limits are approached, breached, or when kill switches fire; (2) require `kill_switch_actor` for manual actions; (3) correlate with market data halts optional; (4) retain immutable copy to WORM storage per policy; (5) quarterly tabletop review of top `strategies_affected`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"_json\" source=\"http:algo_controls\" earliest=-30d\n| where event_type IN (\"limit_breach\",\"kill_switch_activated\",\"kill_switch_reset\",\"parameter_change\")\n| eval breach=if(event_type=\"limit_breach\" OR (isnotnull(notional_observed_usd) AND notional_observed_usd>notional_limit_usd), 1, 0)\n| stats count as events, sum(breach) as breaches, dc(strategy_id) as strategies_affected by event_type, kill_switch_actor\n| sort - breaches\n```\n\nUnderstanding this SPL\n\n**MiFID II Algorithmic Trading Strategy Limits and Kill-Switch Audit (Art. 17)** — Firms must have effective systems and risk controls for algorithmic trading, including thresholds and kill switches. Auditing throttle breaches and manual halts evidences governance and supports supervisory questions after market stress events.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"_json\"` `source=\"http:algo_controls\"` (strategy_id, event_type, notional_limit_usd, notional_observed_usd, kill_switch_actor, _time). **App/TA** (typical add-on context): Splunk HTTP Event Collector (core platform). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: _json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"_json\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where event_type IN (\"limit_breach\",\"kill_switch_activated\",\"kill_switch_reset\",\"parameter_change\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by event_type, kill_switch_actor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (events by strategy_id), Table (event_type totals), Bar chart (breaches by strategy).",
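For the quarterly tabletop review in step (5), a per-strategy rollup can feed the discussion pack. A sketch over the same `http:algo_controls` feed; the 90-day window and top-20 cut are assumptions:

```spl
index=trading sourcetype="_json" source="http:algo_controls" earliest=-90d
    event_type IN ("limit_breach","kill_switch_activated")
| stats count as events,
        sum(eval(if(event_type="limit_breach",1,0))) as breaches,
        sum(eval(if(event_type="kill_switch_activated",1,0))) as kill_switch_fires,
        max(_time) as last_event
  by strategy_id
| eval last_event=strftime(last_event,"%Y-%m-%d %H:%M:%S")
| sort - breaches
| head 20
```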
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We firms must have effective systems and risk controls for algorithmic trading, including thresholds and kill switches.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "ind": "Financial Services",
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.17 (Algorithmic trading controls) is enforced — Splunk UC-22.5.8: MiFID II Algorithmic Trading Strategy Limits and Kill-Switch Audit.",
                  "ea": "Saved search 'UC-22.5.8' running on sourcetype _json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.9",
              "n": "MiFID II Algo Trading — Per-Instrument Circuit Breaker Trigger Frequency and Cooling-Off Compliance",
              "c": "critical",
              "f": "advanced",
              "v": "Circuit breakers protect market integrity. Monitoring how often firm algos hit exchange or broker breaker thresholds, and whether mandatory cooling-off periods were observed before re-enable, supports Art. 17 governance of algorithmic activity.",
              "t": "HTTP Event Collector from OMS/EMS, market access gateway logs",
              "d": "`index=trading` `sourcetype=\"oms:circuit\"` (algo_id, symbol, venue, trigger_ts, breaker_type, cooldown_until, resumed_ts)",
              "q": "index=trading sourcetype=\"oms:circuit\" earliest=-90d\n| eval cool=strptime(cooldown_until,\"%Y-%m-%dT%H:%M:%SZ\"), res=strptime(resumed_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| where isnotnull(res) AND res<cool\n| stats count as early_resumes by algo_id, venue, symbol\n| sort - early_resumes",
              "m": "(1) Normalize timestamps to UTC; (2) map `algo_id` to registered strategy in inventory; (3) escalate to compliance officer on any early resume; (4) exclude test venues via lookup; (5) retain for regulatory interview packs.",
              "z": "Table (violations), Timechart (triggers per symbol), Single value (early resumes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector from OMS/EMS, market access gateway logs.\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"oms:circuit\"` (algo_id, symbol, venue, trigger_ts, breaker_type, cooldown_until, resumed_ts).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize timestamps to UTC; (2) map `algo_id` to registered strategy in inventory; (3) escalate to compliance officer on any early resume; (4) exclude test venues via lookup; (5) retain for regulatory interview packs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"oms:circuit\" earliest=-90d\n| eval cool=strptime(cooldown_until,\"%Y-%m-%dT%H:%M:%SZ\"), res=strptime(resumed_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| where isnotnull(res) AND res<cool\n| stats count as early_resumes by algo_id, venue, symbol\n| sort - early_resumes\n```\n\nUnderstanding this SPL\n\n**MiFID II Algo Trading — Per-Instrument Circuit Breaker Trigger Frequency and Cooling-Off Compliance** — Circuit breakers protect market integrity. Monitoring how often firm algos hit exchange or broker breaker thresholds, and whether mandatory cooling-off periods were observed before re-enable, supports Art. 17 governance of algorithmic activity.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"oms:circuit\"` (algo_id, symbol, venue, trigger_ts, breaker_type, cooldown_until, resumed_ts). **App/TA** (typical add-on context): HTTP Event Collector from OMS/EMS, market access gateway logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: oms:circuit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"oms:circuit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cool** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(res) AND res<cool` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by algo_id, venue, symbol** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Timechart (triggers per symbol), Single value (early resumes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We circuit breakers protect market integrity.",
              "mtype": [
                "Compliance",
                "Risk"
              ],
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.17 (Algorithmic trading controls) is enforced — Splunk UC-22.5.9: MiFID II Algo Trading — Per-Instrument Circuit Breaker Trigger Frequency and Cooling-Off Compliance.",
                  "ea": "Saved search 'UC-22.5.9' running on index=trading sourcetype=\"oms:circuit\" (algo_id, symbol, venue, trigger_ts, breaker_type, cooldown_until, resumed_ts), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.10",
              "n": "MiFID II Algo Trading — Kill-Switch Activation Audit Trail and Dual Authorization",
              "c": "critical",
              "f": "advanced",
              "v": "Kill-switches must be reliable and controlled. Logging who armed, who confirmed, and which child strategies halted proves the firm can stop propagation of erroneous orders—a core supervisory expectation for algorithmic traders.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), OMS audit via HEC",
              "d": "`index=trading` `sourcetype=\"oms:killswitch\"` (event_id, actor, action, strategy_group, require_second_approver, second_actor, _time)",
              "q": "index=trading sourcetype=\"oms:killswitch\" action=\"FIRE\" earliest=-400d\n| eval need_second=if(require_second_approver IN (\"true\",\"1\",\"yes\"),1,0)\n| where need_second=1 AND (isnull(second_actor) OR second_actor=\"\")\n| table _time, event_id, strategy_group, actor, second_actor, require_second_approver",
              "m": "(1) Require OMS to emit one immutable row per `FIRE` with `require_second_approver` populated from policy; (2) integrate CyberArk or ServiceNow approval id as `second_actor` where applicable; (3) page markets compliance on any hit; (4) exclude dry-run environments via `index` or `host` filter; (5) map to RTS 6 and internal algo register identifiers.",
              "z": "Table (policy breaches), Timeline (`_time` by event_id), Single value (count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), OMS audit via HEC.\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"oms:killswitch\"` (event_id, actor, action, strategy_group, require_second_approver, second_actor, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require OMS to emit one immutable row per `FIRE` with `require_second_approver` populated from policy; (2) integrate CyberArk or ServiceNow approval id as `second_actor` where applicable; (3) page markets compliance on any hit; (4) exclude dry-run environments via `index` or `host` filter; (5) map to RTS 6 and internal algo register identifiers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"oms:killswitch\" action=\"FIRE\" earliest=-400d\n| eval need_second=if(require_second_approver IN (\"true\",\"1\",\"yes\"),1,0)\n| where need_second=1 AND (isnull(second_actor) OR second_actor=\"\")\n| table _time, event_id, strategy_group, actor, second_actor, require_second_approver\n```\n\nUnderstanding this SPL\n\n**MiFID II Algo Trading — Kill-Switch Activation Audit Trail and Dual Authorization** — Kill-switches must be reliable and controlled. Logging who armed, who confirmed, and which child strategies halted proves the firm can stop propagation of erroneous orders—a core supervisory expectation for algorithmic traders.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"oms:killswitch\"` (event_id, actor, action, strategy_group, require_second_approver, second_actor, _time). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), OMS audit via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: oms:killswitch. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"oms:killswitch\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **need_second** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where need_second=1 AND (isnull(second_actor) OR second_actor=\"\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MiFID II Algo Trading — Kill-Switch Activation Audit Trail and Dual Authorization**): table _time, event_id, strategy_group, actor, second_actor, require_second_approver\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (policy breaches), Timeline (`_time` by event_id), Single value (count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We kill-switches must be reliable and controlled.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.17 (Algorithmic trading controls) is enforced — Splunk UC-22.5.10: MiFID II Algo Trading — Kill-Switch Activation Audit Trail and Dual Authorization.",
                  "ea": "Saved search 'UC-22.5.10' running on sourcetype oms:killswitch and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.11",
              "n": "MiFID II Algo Trading — Message Rate Throttle Breaches vs Exchange Limits",
              "c": "high",
              "f": "intermediate",
              "v": "Throttle breaches can indicate malfunctioning algos or reckless co-location usage. Continuous comparison of outbound order rates to venue-specific caps evidences real-time surveillance aligned with Art. 17 controls.",
              "t": "Gateway logs (FIX, binary protocol) via HEC",
              "d": "`index=trading` `sourcetype=\"gw:rate\"` (session_id, venue, orders_per_sec, venue_limit_ops, breach_flag, _time)",
              "q": "index=trading sourcetype=\"gw:rate\" breach_flag=true earliest=-30d\n| stats count as breaches, perc95(orders_per_sec) as p95_ops by venue, session_id\n| lookup algo_session_map.csv session_id OUTPUT algo_id, desk\n| sort - breaches",
              "m": "(1) Maintain `venue_limit_ops` from exchange rule files; (2) map sessions to `algo_id`; (3) auto-throttle at gateway where possible—Splunk is detective; (4) alert desk head; (5) weekly summary to market conduct.",
              "z": "Table (sessions), Bar by venue, Single value (distinct sessions breaching).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Gateway logs (FIX, binary protocol) via HEC.\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"gw:rate\"` (session_id, venue, orders_per_sec, venue_limit_ops, breach_flag, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `venue_limit_ops` from exchange rule files; (2) map sessions to `algo_id`; (3) auto-throttle at gateway where possible—Splunk is detective; (4) alert desk head; (5) weekly summary to market conduct.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"gw:rate\" breach_flag=true earliest=-30d\n| stats count as breaches, perc95(orders_per_sec) as p95_ops by venue, session_id\n| lookup algo_session_map.csv session_id OUTPUT algo_id, desk\n| sort - breaches\n```\n\nUnderstanding this SPL\n\n**MiFID II Algo Trading — Message Rate Throttle Breaches vs Exchange Limits** — Throttle breaches can indicate malfunctioning algos or reckless co-location usage. Continuous comparison of outbound order rates to venue-specific caps evidences real-time surveillance aligned with Art. 17 controls.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"gw:rate\"` (session_id, venue, orders_per_sec, venue_limit_ops, breach_flag, _time). **App/TA** (typical add-on context): Gateway logs (FIX, binary protocol) via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: gw:rate. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"gw:rate\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by venue, session_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sessions), Bar by venue, Single value (distinct sessions breaching).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We throttle breaches can indicate malfunctioning algos or reckless co-location usage.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "pillar": "observability",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.17 (Algorithmic trading controls) is enforced — Splunk UC-22.5.11: MiFID II Algo Trading — Message Rate Throttle Breaches vs Exchange Limits.",
                  "ea": "Saved search 'UC-22.5.11' running on index=trading sourcetype=\"gw:rate\" (session_id, venue, orders_per_sec, venue_limit_ops, breach_flag, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.12",
              "n": "MiFID II Client Suitability — Know-Your-Client Refresh Overdue by Risk Segment",
              "c": "high",
              "f": "intermediate",
              "v": "Suitability depends on current client facts. Flagging high-risk clients whose KYC refresh date passed without completion reduces the risk of recommending instruments inconsistent with updated circumstances.",
              "t": "Splunk DB Connect to CRM/KYC warehouse",
              "d": "`index=crm` `sourcetype=\"kyc:client\"` (client_id, risk_segment, next_review_date, last_review_date, status)",
              "q": "index=crm sourcetype=\"kyc:client\" risk_segment IN (\"high\",\"PEP\",\"complex_product_user\") earliest=-1d\n| eval nrd=strptime(next_review_date,\"%Y-%m-%d\")\n| where nrd<now() OR status!=\"current\"\n| stats count by risk_segment, status\n| sort - count",
              "m": "(1) Align segments with internal policy; (2) exclude closed accounts; (3) route to advisory supervision queue; (4) integrate with order blocking flags optionally; (5) document remediation in suitability file—not raw Splunk.",
              "z": "Table (clients at risk), Column chart by segment, Single value (count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect to CRM/KYC warehouse.\n• Ensure the following data sources are available: `index=crm` `sourcetype=\"kyc:client\"` (client_id, risk_segment, next_review_date, last_review_date, status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align segments with internal policy; (2) exclude closed accounts; (3) route to advisory supervision queue; (4) integrate with order blocking flags optionally; (5) document remediation in suitability file—not raw Splunk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=crm sourcetype=\"kyc:client\" risk_segment IN (\"high\",\"PEP\",\"complex_product_user\") earliest=-1d\n| eval nrd=strptime(next_review_date,\"%Y-%m-%d\")\n| where nrd<now() OR status!=\"current\"\n| stats count by risk_segment, status\n| sort - count\n```\n\nUnderstanding this SPL\n\n**MiFID II Client Suitability — Know-Your-Client Refresh Overdue by Risk Segment** — Suitability depends on current client facts. Flagging high-risk clients whose KYC refresh date passed without completion reduces the risk of recommending instruments inconsistent with updated circumstances.\n\nDocumented **Data sources**: `index=crm` `sourcetype=\"kyc:client\"` (client_id, risk_segment, next_review_date, last_review_date, status). **App/TA** (typical add-on context): Splunk DB Connect to CRM/KYC warehouse. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: crm; **sourcetype**: kyc:client. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=crm, sourcetype=\"kyc:client\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **nrd** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where nrd<now() OR status!=\"current\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by risk_segment, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (clients at risk), Column chart by segment, Single value (count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We suitability depends on current client facts.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.25",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.25 is enforced — Splunk UC-22.5.12: MiFID II Client Suitability — Know-Your-Client Refresh Overdue by Risk Segment.",
                  "ea": "Saved search 'UC-22.5.12' running on index=crm sourcetype=\"kyc:client\" (client_id, risk_segment, next_review_date, last_review_date, status), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.13",
              "n": "MiFID II Client Suitability — Appropriateness Test Pass Required Before Complex Product Orders",
              "c": "critical",
              "f": "advanced",
              "v": "Complex products demand evidence of knowledge and experience. Correlating order events with latest appropriateness test outcomes catches sequencing errors where orders slip through before tests complete.",
              "t": "OMS and LMS via HEC",
              "d": "`index=trading` `sourcetype=\"oms:order\"` (client_id, symbol, product_tier, order_ts, order_id); `index=lms` `sourcetype=\"lms:apptest\"` (client_id, passed, completed_ts, product_tier)",
              "q": "index=trading sourcetype=\"oms:order\" product_tier=\"complex\" earliest=-30d\n| eval ot=strptime(order_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| lookup order_apptest_state.csv order_id OUTPUT latest_pass_ts\n| eval pass_ok=if(isnotnull(latest_pass_ts) AND latest_pass_ts<=ot, 1, 0)\n| where pass_ok=0\n| stats count by client_id, symbol\n| sort - count",
              "m": "(1) Nightly scheduled search joins `oms:order` to `lms:apptest` and writes `order_apptest_state.csv` with `latest_pass_ts` = max test completion where `completed_ts<=order_ts` per `order_id`; (2) block orders in OMS where possible—Splunk remains detective backfill; (3) compliance reviews all `pass_ok=0` rows; (4) map `product_tier` to the firm’s MiFID complex-product inventory; (5) retain lookup snapshots quarterly for audit.",
              "z": "Table (violations), Single value (orders), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OMS and LMS via HEC.\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"oms:order\"` (client_id, symbol, product_tier, order_ts, order_id); `index=lms` `sourcetype=\"lms:apptest\"` (client_id, passed, completed_ts, product_tier).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Nightly scheduled search joins `oms:order` to `lms:apptest` and writes `order_apptest_state.csv` with `latest_pass_ts` = max test completion where `completed_ts<=order_ts` per `order_id`; (2) block orders in OMS where possible—Splunk remains detective backfill; (3) compliance reviews all `pass_ok=0` rows; (4) map `product_tier` to the firm’s MiFID complex-product inventory; (5) retain lookup snapshots quarterly for audit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"oms:order\" product_tier=\"complex\" earliest=-30d\n| eval ot=strptime(order_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| lookup order_apptest_state.csv order_id OUTPUT latest_pass_ts\n| eval pass_ok=if(isnotnull(latest_pass_ts) AND latest_pass_ts<=ot, 1, 0)\n| where pass_ok=0\n| stats count by client_id, symbol\n| sort - count\n```\n\nUnderstanding this SPL\n\n**MiFID II Client Suitability — Appropriateness Test Pass Required Before Complex Product Orders** — Complex products demand evidence of knowledge and experience. Correlating order events with latest appropriateness test outcomes catches sequencing errors where orders slip through before tests complete.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"oms:order\"` (client_id, symbol, product_tier, order_ts, order_id); `index=lms` `sourcetype=\"lms:apptest\"` (client_id, passed, completed_ts, product_tier). **App/TA** (typical add-on context): OMS and LMS via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: oms:order. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"oms:order\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ot** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **pass_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pass_ok=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by client_id, symbol** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Single value (orders), Timeline.",
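Step (1) describes a nightly search that materializes `order_apptest_state.csv`. One possible sketch, assuming both feeds share `client_id`, that `passed="true"` marks a passing test, and that per-client pass volumes stay inside subsearch/join limits (for larger volumes, replace the `join` with a `stats`-based correlation):

```spl
index=trading sourcetype="oms:order" product_tier="complex" earliest=-30d
| eval ot=strptime(order_ts,"%Y-%m-%dT%H:%M:%SZ")
| fields order_id, client_id, ot
| join type=left client_id
    [ search index=lms sourcetype="lms:apptest" passed="true" product_tier="complex" earliest=-2y
      | eval pass_ts=strptime(completed_ts,"%Y-%m-%dT%H:%M:%SZ")
      | stats values(pass_ts) as pass_ts by client_id ]
| mvexpand pass_ts
| eval pass_ts=tonumber(pass_ts)
| where pass_ts<=ot
| stats max(pass_ts) as latest_pass_ts by order_id
| outputlookup order_apptest_state.csv
```

Schedule it shortly before the detection search so `latest_pass_ts` reflects the most recent completed test at or before each order timestamp.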
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We complex products demand evidence of knowledge and experience.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.25",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.25 is enforced — Splunk UC-22.5.13: MiFID II Client Suitability — Appropriateness Test Pass Required Before Complex Product Orders.",
                  "ea": "Saved search 'UC-22.5.13' running on sourcetype oms:order and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.14",
              "n": "MiFID II Client Suitability — Investment Objective Mismatch Alerts vs Held Positions",
              "c": "high",
              "f": "advanced",
              "v": "Ongoing suitability requires consistency between stated objectives (income vs growth) and position risk. Detecting concentrated speculative exposures for conservative profiles supports conduct supervision and product governance linkage.",
              "t": "Portfolio system export via DB Connect",
              "d": "`index=wealth` `sourcetype=\"pm:position\"` (client_id, symbol, asset_class, notional, risk_score); `index=crm` `sourcetype=\"crm:profile\"` (client_id, objective_tier)",
              "q": "index=wealth sourcetype=\"pm:position\" earliest=-1d\n| stats sum(notional) as exp, sum(eval(if(asset_class=\"derivatives\",notional,0))) as deriv_exp by client_id\n| lookup crm_profile_objective.csv client_id OUTPUT objective_tier\n| eval deriv_ratio=if(exp>0, deriv_exp/exp, 0)\n| where objective_tier=\"conservative\" AND deriv_ratio>0.2\n| table client_id, objective_tier, deriv_ratio, exp",
              "m": "(1) Calibrate `deriv_ratio` threshold with product committee; (2) refresh profiles daily; (3) adviser workflow in CRM; (4) document false positives (hedging); (5) quarterly board conduct metrics.",
              "z": "Table (mismatches), Bar chart, Single value (client count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Portfolio system export via DB Connect.\n• Ensure the following data sources are available: `index=wealth` `sourcetype=\"pm:position\"` (client_id, symbol, asset_class, notional, risk_score); `index=crm` `sourcetype=\"crm:profile\"` (client_id, objective_tier).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Calibrate `deriv_ratio` threshold with product committee; (2) refresh profiles daily; (3) adviser workflow in CRM; (4) document false positives (hedging); (5) quarterly board conduct metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wealth sourcetype=\"pm:position\" earliest=-1d\n| stats sum(notional) as exp, sum(eval(if(asset_class=\"derivatives\",notional,0))) as deriv_exp by client_id\n| lookup crm_profile_objective.csv client_id OUTPUT objective_tier\n| eval deriv_ratio=if(exp>0, deriv_exp/exp, 0)\n| where objective_tier=\"conservative\" AND deriv_ratio>0.2\n| table client_id, objective_tier, deriv_ratio, exp\n```\n\nUnderstanding this SPL\n\n**MiFID II Client Suitability — Investment Objective Mismatch Alerts vs Held Positions** — Ongoing suitability requires consistency between stated objectives (income vs growth) and position risk. Detecting concentrated speculative exposures for conservative profiles supports conduct supervision and product governance linkage.\n\nDocumented **Data sources**: `index=wealth` `sourcetype=\"pm:position\"` (client_id, symbol, asset_class, notional, risk_score); `index=crm` `sourcetype=\"crm:profile\"` (client_id, objective_tier). **App/TA** (typical add-on context): Portfolio system export via DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wealth; **sourcetype**: pm:position. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wealth, sourcetype=\"pm:position\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by client_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **deriv_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where objective_tier=\"conservative\" AND deriv_ratio>0.2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MiFID II Client Suitability — Investment Objective Mismatch Alerts vs Held Positions**): table client_id, objective_tier, deriv_ratio, exp\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (mismatches), Bar chart, Single value (client count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We ongoing suitability requires consistency between stated objectives (income vs growth) and position risk.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.25",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.25 is enforced — Splunk UC-22.5.14: MiFID II Client Suitability — Investment Objective Mismatch Alerts vs Held Positions.",
                  "ea": "Saved search 'UC-22.5.14' running on sourcetype pm:position and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.15",
              "n": "MiFID II Conflicts of Interest — Personal Account Dealing Near Client Block Trades",
              "c": "critical",
              "f": "advanced",
              "v": "Conflicts policies require surveillance of staff dealing. Correlating employee PA trades with adjacent client block orders in the same instruments surfaces potential front-running or misuse of confidential information.",
              "t": "Compliance trade surveillance platform via HEC",
              "d": "`index=surveillance` `sourcetype=\"pad:trade\"` (staff_id, symbol, side, qty, trade_ts); `index=trading` `sourcetype=\"oms:block\"` (symbol, side, qty, block_ts, client_segment)",
              "q": "index=surveillance sourcetype=\"pad:trade\" earliest=-90d\n| rename side as pad_side\n| eval t=strptime(trade_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left symbol [\n    search index=trading sourcetype=\"oms:block\" earliest=-90d\n    | rename side as block_side\n    | eval bt=strptime(block_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n    | fields symbol, bt, block_side, qty\n  ]\n| eval delta=abs(t-bt)\n| where delta<900 AND pad_side=block_side\n| table staff_id, symbol, pad_side, t, bt, delta, qty",
              "m": "(1) Tune the 900-second proximity window per liquidity and venue; (2) same-side matching surfaces potential front-running—adjust to opposite-side if your policy targets different abuse patterns; (3) enforce strict RBAC on `staff_id`; (4) escalate hits to compliance investigations with underlying fills retained under legal hold outside Splunk; (5) document methodology in the conflicts register.",
              "z": "Table (hits), Timeline, Single value (staff_id count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Compliance trade surveillance platform via HEC.\n• Ensure the following data sources are available: `index=surveillance` `sourcetype=\"pad:trade\"` (staff_id, symbol, side, qty, trade_ts); `index=trading` `sourcetype=\"oms:block\"` (symbol, side, qty, block_ts, client_segment).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune the 900-second proximity window per liquidity and venue; (2) same-side matching surfaces potential front-running—adjust to opposite-side if your policy targets different abuse patterns; (3) enforce strict RBAC on `staff_id`; (4) escalate hits to compliance investigations with underlying fills retained under legal hold outside Splunk; (5) document methodology in the conflicts register.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=surveillance sourcetype=\"pad:trade\" earliest=-90d\n| rename side as pad_side\n| eval t=strptime(trade_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left symbol [\n    search index=trading sourcetype=\"oms:block\" earliest=-90d\n    | rename side as block_side\n    | eval bt=strptime(block_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n    | fields symbol, bt, block_side, qty\n  ]\n| eval delta=abs(t-bt)\n| where delta<900 AND pad_side=block_side\n| table staff_id, symbol, pad_side, t, bt, delta, qty\n```\n\nUnderstanding this SPL\n\n**MiFID II Conflicts of Interest — Personal Account Dealing Near Client Block Trades** — Conflicts policies require surveillance of staff dealing. Correlating employee PA trades with adjacent client block orders in the same instruments surfaces potential front-running or misuse of confidential information.\n\nDocumented **Data sources**: `index=surveillance` `sourcetype=\"pad:trade\"` (staff_id, symbol, side, qty, trade_ts); `index=trading` `sourcetype=\"oms:block\"` (symbol, side, qty, block_ts, client_segment). **App/TA** (typical add-on context): Compliance trade surveillance platform via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: surveillance; **sourcetype**: pad:trade. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=surveillance, sourcetype=\"pad:trade\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Renames fields with `rename` for clarity or joins.\n• `eval` defines or adjusts **t** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delta<900 AND pad_side=block_side` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MiFID II Conflicts of Interest — Personal Account Dealing Near Client Block Trades**): table staff_id, symbol, pad_side, t, bt, delta, qty\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hits), Timeline, Single value (staff_id count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We conflicts policies require surveillance of staff dealing.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.17 (Algorithmic trading controls) is enforced — Splunk UC-22.5.15: MiFID II Conflicts of Interest — Personal Account Dealing Near Client Block Trades.",
                  "ea": "Saved search 'UC-22.5.15' running on sourcetype pad:trade and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.16",
              "n": "MiFID II Conflicts of Interest — Research Analyst vs Trading Desk Information Barrier Violations",
              "c": "critical",
              "f": "advanced",
              "v": "Chinese walls protect research independence. Detecting shared workspace file activity, IM channels, or email between restricted research and trading groups supports conflicts management beyond policy PDFs.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), DLP sourcetype",
              "d": "`index=o365` `sourcetype=\"ms:o365:management\"` (Operation, UserId, Workload); `research_wall_groups.csv` (user_id, wall_side)",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Workload IN (\"MicrosoftTeams\",\"SharePoint\") earliest=-30d\n| lookup research_wall_groups.csv user_id as UserId OUTPUT wall_side\n| stats values(wall_side) as sides by Operation, ObjectId\n| where mvcount(sides)>1\n| stats count",
              "m": "(1) Populate `research_wall_groups.csv` from HR; (2) tune ObjectId granularity; (3) heavy false positive review—use as triage; (4) integrate with eComms archive; (5) document controls in conflicts register.",
              "z": "Table (shared objects), Single value (incidents), Bar by Workload.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), DLP sourcetype.\n• Ensure the following data sources are available: `index=o365` `sourcetype=\"ms:o365:management\"` (Operation, UserId, Workload); `research_wall_groups.csv` (user_id, wall_side).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate `research_wall_groups.csv` from HR; (2) tune ObjectId granularity; (3) heavy false positive review—use as triage; (4) integrate with eComms archive; (5) document controls in conflicts register.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Workload IN (\"MicrosoftTeams\",\"SharePoint\") earliest=-30d\n| lookup research_wall_groups.csv user_id as UserId OUTPUT wall_side\n| stats values(wall_side) as sides by Operation, ObjectId\n| where mvcount(sides)>1\n| stats count\n```\n\nUnderstanding this SPL\n\n**MiFID II Conflicts of Interest — Research Analyst vs Trading Desk Information Barrier Violations** — Chinese walls protect research independence. Detecting shared workspace file activity, IM channels, or email between restricted research and trading groups supports conflicts management beyond policy PDFs.\n\nDocumented **Data sources**: `index=o365` `sourcetype=\"ms:o365:management\"` (Operation, UserId, Workload); `research_wall_groups.csv` (user_id, wall_side). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), DLP sourcetype. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by Operation, ObjectId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mvcount(sides)>1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (shared objects), Single value (incidents), Bar by Workload.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We chinese walls protect research independence.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.17 (Algorithmic trading controls) is enforced — Splunk UC-22.5.16: MiFID II Conflicts of Interest — Research Analyst vs Trading Desk Information Barrier Violations.",
                  "ea": "Saved search 'UC-22.5.16' running on index=o365 sourcetype=\"ms:o365:management\" (Operation, UserId, Workload); research_wall_groups.csv (user_id, wall_side), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.17",
              "n": "MiFID II Conflicts of Interest — Gift and Entertainment Threshold Breach Trending",
              "c": "medium",
              "f": "beginner",
              "v": "Inducements create conflicts. Aggregating declared gifts per employee and counterparty against policy caps evidences proactive monitoring of inducement risk alongside gifts-and-entertainment registers.",
              "t": "Splunk DB Connect to ethics register, or `inputlookup`",
              "d": "`index=compliance` `sourcetype=\"ethics:gift\"` (employee_id, counterparty_id, value_eur, gift_date, approved_flag)",
              "q": "index=compliance sourcetype=\"ethics:gift\" earliest=-365d\n| eval gd=strptime(gift_date,\"%Y-%m-%d\")\n| bin gd span=90d\n| stats sum(value_eur) as rolling_value by employee_id, gd\n| where rolling_value>1000 AND approved_flag!=\"true\"\n| sort - rolling_value",
              "m": "(1) Set EUR cap per policy; (2) multi-currency conversion upstream; (3) route to COO office; (4) quarterly attestations; (5) map to inducements policy section.",
              "z": "Table (breaches), Column chart by quarter, Single value (employees over cap).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect to ethics register, or `inputlookup`.\n• Ensure the following data sources are available: `index=compliance` `sourcetype=\"ethics:gift\"` (employee_id, counterparty_id, value_eur, gift_date, approved_flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Set EUR cap per policy; (2) multi-currency conversion upstream; (3) route to COO office; (4) quarterly attestations; (5) map to inducements policy section.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=compliance sourcetype=\"ethics:gift\" earliest=-365d\n| eval gd=strptime(gift_date,\"%Y-%m-%d\")\n| bin gd span=90d\n| stats sum(value_eur) as rolling_value by employee_id, gd\n| where rolling_value>1000 AND approved_flag!=\"true\"\n| sort - rolling_value\n```\n\nUnderstanding this SPL\n\n**MiFID II Conflicts of Interest — Gift and Entertainment Threshold Breach Trending** — Inducements create conflicts. Aggregating declared gifts per employee and counterparty against policy caps evidences proactive monitoring of inducement risk alongside gifts-and-entertainment registers.\n\nDocumented **Data sources**: `index=compliance` `sourcetype=\"ethics:gift\"` (employee_id, counterparty_id, value_eur, gift_date, approved_flag). **App/TA** (typical add-on context): Splunk DB Connect to ethics register, or `inputlookup`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: compliance; **sourcetype**: ethics:gift. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=compliance, sourcetype=\"ethics:gift\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **gd** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by employee_id, gd** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where rolling_value>1000 AND approved_flag!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (breaches), Column chart by quarter, Single value (employees over cap).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We inducements create conflicts.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.17 (Algorithmic trading controls) is enforced — Splunk UC-22.5.17: MiFID II Conflicts of Interest — Gift and Entertainment Threshold Breach Trending.",
                  "ea": "Saved search 'UC-22.5.17' running on index=compliance sourcetype=\"ethics:gift\" (employee_id, counterparty_id, value_eur, gift_date, approved_flag), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.18",
              "n": "MiFID II Market Abuse — Layering and Spoofing Pattern Scores by Trader",
              "c": "critical",
              "f": "advanced",
              "v": "MAR surveillance expectations extend to member firms’ own algos and desks. Scoring rapid cancel-to-fill ratios and quote flicker rates supports detection of manipulative patterns for SMF16 oversight.",
              "t": "Market data + OMS via HEC, or vendor surveillance feed",
              "d": "`index=trading` `sourcetype=\"oms:order_lifecycle\"` (order_id, trader_id, symbol, event, _time)",
              "q": "index=trading sourcetype=\"oms:order_lifecycle\" earliest=-1d\n| eval is_cancel=if(event=\"cancel\",1,0), is_fill=if(event=\"fill\",1,0)\n| stats sum(is_cancel) as cancels, sum(is_fill) as fills by trader_id, symbol\n| eval c2f=if(fills>0, cancels/fills, cancels)\n| where c2f>50 AND fills>10\n| sort - c2f",
              "m": "(1) Calibrate c2f by instrument liquidity; (2) exclude market-making agreements via lookup; (3) integrate with STOR workflow; (4) strict data segregation; (5) model governance sign-off.",
              "z": "Table (scores), Scatter (cancels vs fills), Single value (traders flagged).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Market data + OMS via HEC, or vendor surveillance feed.\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"oms:order_lifecycle\"` (order_id, trader_id, symbol, event, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Calibrate c2f by instrument liquidity; (2) exclude market-making agreements via lookup; (3) integrate with STOR workflow; (4) strict data segregation; (5) model governance sign-off.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"oms:order_lifecycle\" earliest=-1d\n| eval is_cancel=if(event=\"cancel\",1,0), is_fill=if(event=\"fill\",1,0)\n| stats sum(is_cancel) as cancels, sum(is_fill) as fills by trader_id, symbol\n| eval c2f=if(fills>0, cancels/fills, cancels)\n| where c2f>50 AND fills>10\n| sort - c2f\n```\n\nUnderstanding this SPL\n\n**MiFID II Market Abuse — Layering and Spoofing Pattern Scores by Trader** — MAR surveillance expectations extend to member firms’ own algos and desks. Scoring rapid cancel-to-fill ratios and quote flicker rates supports detection of manipulative patterns for SMF16 oversight.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"oms:order_lifecycle\"` (order_id, trader_id, symbol, event, _time). **App/TA** (typical add-on context): Market data + OMS via HEC, or vendor surveillance feed. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: oms:order_lifecycle. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"oms:order_lifecycle\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_cancel** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trader_id, symbol** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **c2f** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where c2f>50 AND fills>10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (scores), Scatter (cancels vs fills), Single value (traders flagged).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We mAR surveillance expectations extend to member firms’ own algos and desks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.17 (Algorithmic trading controls) is enforced — Splunk UC-22.5.18: MiFID II Market Abuse — Layering and Spoofing Pattern Scores by Trader.",
                  "ea": "Saved search 'UC-22.5.18' running on index=trading sourcetype=\"oms:order_lifecycle\" (order_id, trader_id, symbol, event, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.19",
              "n": "MiFID II Market Abuse — Insider List Access Log Correlation Before Price-Sensitive Events",
              "c": "critical",
              "f": "advanced",
              "v": "MAR requires insider lists. Correlating document or deal-room access with corporate calendar events surfaces potential leaks when access clusters immediately before sensitive milestones.",
              "t": "Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), DLP",
              "d": "`index=o365` `sourcetype=\"ms:o365:management\"` (UserId, ObjectId, Operation); `index=corporate` `sourcetype=\"deal:insider_list\"` (project_id, user_id, added_date); `index=corporate` `sourcetype=\"deal:milestone\"` (project_id, milestone_type, milestone_date)",
              "q": "index=corporate sourcetype=\"deal:milestone\" milestone_type=\"earnings_release\" earliest=-180d\n| eval ms=strptime(milestone_date,\"%Y-%m-%d\")\n| join type=left project_id [\n    search index=o365 sourcetype=\"ms:o365:management\" Operation=\"FileDownloaded\" earliest=-180d\n    | rename UserId as user_id\n    | stats earliest(_time) as first_dl by user_id, ObjectId\n  ]\n| where first_dl<relative_time(ms,\"-14d@d\") AND first_dl>relative_time(ms,\"-60d@d\")\n| table project_id, milestone_date, user_id, first_dl",
              "m": "(1) Heuristic only—human investigation required; (2) align ObjectId to deal dataroom IDs; (3) legal privilege workflow; (4) integrate insider list membership via `lookup`; (5) document SMF16 review cadence.",
              "z": "Table (correlations), Timeline, Single value (count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), DLP.\n• Ensure the following data sources are available: `index=o365` `sourcetype=\"ms:o365:management\"` (UserId, ObjectId, Operation); `index=corporate` `sourcetype=\"deal:insider_list\"` (project_id, user_id, added_date); `index=corporate` `sourcetype=\"deal:milestone\"` (project_id, milestone_type, milestone_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Heuristic only—human investigation required; (2) align ObjectId to deal dataroom IDs; (3) legal privilege workflow; (4) integrate insider list membership via `lookup`; (5) document SMF16 review cadence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=corporate sourcetype=\"deal:milestone\" milestone_type=\"earnings_release\" earliest=-180d\n| eval ms=strptime(milestone_date,\"%Y-%m-%d\")\n| join type=left project_id [\n    search index=o365 sourcetype=\"ms:o365:management\" Operation=\"FileDownloaded\" earliest=-180d\n    | rename UserId as user_id\n    | stats earliest(_time) as first_dl by user_id, ObjectId\n  ]\n| where first_dl<relative_time(ms,\"-14d@d\") AND first_dl>relative_time(ms,\"-60d@d\")\n| table project_id, milestone_date, user_id, first_dl\n```\n\nUnderstanding this SPL\n\n**MiFID II Market Abuse — Insider List Access Log Correlation Before Price-Sensitive Events** — MAR requires insider lists. Correlating document or deal-room access with corporate calendar events surfaces potential leaks when access clusters immediately before sensitive milestones.\n\nDocumented **Data sources**: `index=o365` `sourcetype=\"ms:o365:management\"` (UserId, ObjectId, Operation); `index=corporate` `sourcetype=\"deal:insider_list\"` (project_id, user_id, added_date); `index=corporate` `sourcetype=\"deal:milestone\"` (project_id, milestone_type, milestone_date). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110), DLP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: corporate; **sourcetype**: deal:milestone. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=corporate, sourcetype=\"deal:milestone\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where first_dl<relative_time(ms,\"-14d@d\") AND first_dl>relative_time(ms,\"-60d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MiFID II Market Abuse — Insider List Access Log Correlation Before Price-Sensitive Events**): table project_id, milestone_date, user_id, first_dl\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (correlations), Timeline, Single value (count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We mAR requires insider lists.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.17 (Algorithmic trading controls) is enforced — Splunk UC-22.5.19: MiFID II Market Abuse — Insider List Access Log Correlation Before Price-Sensitive Events.",
                  "ea": "Saved search 'UC-22.5.19' running on sourcetype ms:o365:management and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.20",
              "n": "MiFID II Market Abuse — Cross-Venue Wash Trade Risk Linking by Beneficial Owner",
              "c": "critical",
              "f": "advanced",
              "v": "Wash trades may split across accounts and venues. Linking executions by `beneficial_owner_id` and near-equal opposing volumes within short windows supports cross-venue surveillance beyond single-broker silos.",
              "t": "Consolidated CAT-like internal store via HEC",
              "d": "`index=trading` `sourcetype=\"ems:fill\"` (fill_id, bo_id, symbol, side, qty, price, fill_ts, venue)",
              "q": "index=trading sourcetype=\"ems:fill\" earliest=-1d\n| eval t=strptime(fill_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| bin t span=5s\n| stats sum(eval(if(side=\"B\",qty,0))) as buy_qty, sum(eval(if(side=\"S\",qty,0))) as sell_qty by bo_id, symbol, t, venue\n| eval imbalance=abs(buy_qty-sell_qty)\n| where buy_qty>0 AND sell_qty>0 AND imbalance< (buy_qty+sell_qty)*0.05\n| sort - buy_qty",
              "m": "(1) Tune time bin and imbalance; (2) enrich `bo_id` from KYC; (3) exclude bona fide internalizers per policy; (4) STOR escalation path; (5) coordinate with surveillance vendor rules.",
              "z": "Table (clusters), Sankey optional, Single value (bo_id count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Consolidated CAT-like internal store via HEC.\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"ems:fill\"` (fill_id, bo_id, symbol, side, qty, price, fill_ts, venue).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune time bin and imbalance; (2) enrich `bo_id` from KYC; (3) exclude bona fide internalizers per policy; (4) STOR escalation path; (5) coordinate with surveillance vendor rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"ems:fill\" earliest=-1d\n| eval t=strptime(fill_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| bin t span=5s\n| stats sum(eval(if(side=\"B\",qty,0))) as buy_qty, sum(eval(if(side=\"S\",qty,0))) as sell_qty by bo_id, symbol, t, venue\n| eval imbalance=abs(buy_qty-sell_qty)\n| where buy_qty>0 AND sell_qty>0 AND imbalance< (buy_qty+sell_qty)*0.05\n| sort - buy_qty\n```\n\nUnderstanding this SPL\n\n**MiFID II Market Abuse — Cross-Venue Wash Trade Risk Linking by Beneficial Owner** — Wash trades may split across accounts and venues. Linking executions by `beneficial_owner_id` and near-equal opposing volumes within short windows supports cross-venue surveillance beyond single-broker silos.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"ems:fill\"` (fill_id, bo_id, symbol, side, qty, price, fill_ts, venue). **App/TA** (typical add-on context): Consolidated CAT-like internal store via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: ems:fill. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"ems:fill\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **t** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by bo_id, symbol, t, venue** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **imbalance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where buy_qty>0 AND sell_qty>0 AND imbalance< (buy_qty+sell_qty)*0.05` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (clusters), Sankey optional, Single value (bo_id count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We wash trades may split across accounts and venues.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.17 (Algorithmic trading controls) is enforced — Splunk UC-22.5.20: MiFID II Market Abuse — Cross-Venue Wash Trade Risk Linking by Beneficial Owner.",
                  "ea": "Saved search 'UC-22.5.20' running on index=trading sourcetype=\"ems:fill\" (fill_id, bo_id, symbol, side, qty, price, fill_ts, venue), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.21",
              "n": "MiFID II Best Execution — Venue Quality of Execution Report Ingestion Completeness",
              "c": "high",
              "f": "intermediate",
              "v": "RTS 27/28 obligations produce periodic reports. Detecting missing files per venue or late uploads demonstrates control over the reporting supply chain used to evidence best execution monitoring.",
              "t": "Splunk Universal Forwarder on SFTP landing zone, or S3 + SQS",
              "d": "`index=compliance` `sourcetype=\"qoe:file_manifest\"` (venue, report_month, file_name, received_ts, expected_flag)",
              "q": "index=compliance sourcetype=\"qoe:file_manifest\" earliest=-400d\n| stats latest(received_ts) as last_recv by venue, report_month\n| eval expected=strptime(report_month,\"%Y-%m\")\n| where isnull(last_recv) OR last_recv > relative_time(expected,\"+35d@d\")\n| table venue, report_month, last_recv",
              "m": "(1) Adjust `+35d` to regulatory calendar; (2) alert compliance operations; (3) vendor SLA tracking; (4) store checksums; (5) map to annual best ex board review.",
              "z": "Table (missing/late), Heatmap venue x month, Single value (gaps).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder on SFTP landing zone, or S3 + SQS.\n• Ensure the following data sources are available: `index=compliance` `sourcetype=\"qoe:file_manifest\"` (venue, report_month, file_name, received_ts, expected_flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Adjust `+35d` to regulatory calendar; (2) alert compliance operations; (3) vendor SLA tracking; (4) store checksums; (5) map to annual best ex board review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=compliance sourcetype=\"qoe:file_manifest\" earliest=-400d\n| stats latest(received_ts) as last_recv by venue, report_month\n| eval expected=strptime(report_month,\"%Y-%m\")\n| where isnull(last_recv) OR last_recv > relative_time(expected,\"+35d@d\")\n| table venue, report_month, last_recv\n```\n\nUnderstanding this SPL\n\n**MiFID II Best Execution — Venue Quality of Execution Report Ingestion Completeness** — RTS 27/28 obligations produce periodic reports. Detecting missing files per venue or late uploads demonstrates control over the reporting supply chain used to evidence best execution monitoring.\n\nDocumented **Data sources**: `index=compliance` `sourcetype=\"qoe:file_manifest\"` (venue, report_month, file_name, received_ts, expected_flag). **App/TA** (typical add-on context): Splunk Universal Forwarder on SFTP landing zone, or S3 + SQS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: compliance; **sourcetype**: qoe:file_manifest. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=compliance, sourcetype=\"qoe:file_manifest\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by venue, report_month** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **expected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(last_recv) OR last_recv > relative_time(expected,\"+35d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **MiFID II Best Execution — Venue Quality of Execution Report Ingestion Completeness**): table venue, report_month, last_recv\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing/late), Heatmap venue x month, Single value (gaps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We rTS 27/28 obligations produce periodic reports.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "observability",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.27",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.27 is enforced — Splunk UC-22.5.21: MiFID II Best Execution — Venue Quality of Execution Report Ingestion Completeness.",
                  "ea": "Saved search 'UC-22.5.21' running on index=compliance sourcetype=\"qoe:file_manifest\" (venue, report_month, file_name, received_ts, expected_flag), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.22",
              "n": "MiFID II Best Execution — Slippage vs Reference Price by Client Segment and Instrument Class",
              "c": "high",
              "f": "advanced",
              "v": "Best execution is measured by outcomes. Trending slippage against a consolidated reference price by retail vs professional clients evidences differentiated quality monitoring required for policy tuning and disclosures.",
              "t": "OMS fills + market data snapshot service",
              "d": "`index=trading` `sourcetype=\"ems:fill\"` (client_segment, instrument_class, fill_ts, price, ref_price, symbol)",
              "q": "index=trading sourcetype=\"ems:fill\" earliest=-30d\n| eval slip_bps=abs(price-ref_price)/ref_price*10000\n| stats median(slip_bps) as med_slip, perc95(slip_bps) as p95_slip by client_segment, instrument_class, symbol\n| where client_segment=\"retail\" AND p95_slip>50\n| sort - p95_slip",
              "m": "(1) Define `ref_price` methodology consistently; (2) exclude auctions; (3) monthly best ex committee pack; (4) integrate with order routing analysis; (5) document outliers investigated.",
              "z": "Box plot by segment, Table (worst symbols), Single value (breach count).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OMS fills + market data snapshot service.\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"ems:fill\"` (client_segment, instrument_class, fill_ts, price, ref_price, symbol).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define `ref_price` methodology consistently; (2) exclude auctions; (3) monthly best ex committee pack; (4) integrate with order routing analysis; (5) document outliers investigated.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"ems:fill\" earliest=-30d\n| eval slip_bps=abs(price-ref_price)/ref_price*10000\n| stats median(slip_bps) as med_slip, perc95(slip_bps) as p95_slip by client_segment, instrument_class, symbol\n| where client_segment=\"retail\" AND p95_slip>50\n| sort - p95_slip\n```\n\nUnderstanding this SPL\n\n**MiFID II Best Execution — Slippage vs Reference Price by Client Segment and Instrument Class** — Best execution is measured by outcomes. Trending slippage against a consolidated reference price by retail vs professional clients evidences differentiated quality monitoring required for policy tuning and disclosures.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"ems:fill\"` (client_segment, instrument_class, fill_ts, price, ref_price, symbol). **App/TA** (typical add-on context): OMS fills + market data snapshot service. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: ems:fill. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"ems:fill\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **slip_bps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by client_segment, instrument_class, symbol** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where client_segment=\"retail\" AND p95_slip>50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Box plot by segment, Table (worst symbols), Single value (breach count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We best execution is measured by outcomes.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "pillar": "observability",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.27",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.27 is enforced — Splunk UC-22.5.22: MiFID II Best Execution — Slippage vs Reference Price by Client Segment and Instrument Class.",
                  "ea": "Saved search 'UC-22.5.22' running on index=trading sourcetype=\"ems:fill\" (client_segment, instrument_class, fill_ts, price, ref_price, symbol), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.23",
              "n": "MiFID II Best Execution — Client Limit Order Price Improvement vs Top of Book",
              "c": "medium",
              "f": "advanced",
              "v": "Limit orders should receive fair treatment relative to displayed liquidity. Measuring frequency and magnitude of price improvement versus top-of-book at execution time supports conduct narratives for retail protection.",
              "t": "EMS and market data via HEC",
              "d": "`index=trading` `sourcetype=\"ems:fill\"` (order_type, limit_px, best_bid, best_ask, exec_px, side, client_segment)",
              "q": "index=trading sourcetype=\"ems:fill\" order_type=\"limit\" client_segment=\"retail\" earliest=-30d\n| eval improvement_bps=if(side=\"B\",(best_ask-exec_px)/best_ask*10000,(exec_px-best_bid)/best_bid*10000)\n| where improvement_bps < 0\n| stats count as worse_than_tob, median(improvement_bps) as med_imp by venue\n| sort - worse_than_tob",
              "m": "(1) Validate `best_bid/ask` timestamps align with fill; (2) handle inverted books; (3) retail-only scope; (4) investigation workflow; (5) link to smart order router version.",
              "z": "Histogram (improvement_bps), Table (venues), Single value (worse fills).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: EMS and market data via HEC.\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"ems:fill\"` (order_type, limit_px, best_bid, best_ask, exec_px, side, client_segment).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Validate `best_bid/ask` timestamps align with fill; (2) handle inverted books; (3) retail-only scope; (4) investigation workflow; (5) link to smart order router version.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"ems:fill\" order_type=\"limit\" client_segment=\"retail\" earliest=-30d\n| eval improvement_bps=if(side=\"B\",(best_ask-exec_px)/best_ask*10000,(exec_px-best_bid)/best_bid*10000)\n| where improvement_bps < 0\n| stats count as worse_than_tob, median(improvement_bps) as med_imp by venue\n| sort - worse_than_tob\n```\n\nUnderstanding this SPL\n\n**MiFID II Best Execution — Client Limit Order Price Improvement vs Top of Book** — Limit orders should receive fair treatment relative to displayed liquidity. Measuring frequency and magnitude of price improvement versus top-of-book at execution time supports conduct narratives for retail protection.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"ems:fill\"` (order_type, limit_px, best_bid, best_ask, exec_px, side, client_segment). **App/TA** (typical add-on context): EMS and market data via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: ems:fill. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"ems:fill\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **improvement_bps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where improvement_bps < 0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by venue** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (improvement_bps), Table (venues), Single value (worse fills).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We limit orders should receive fair treatment relative to displayed liquidity.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "observability",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.27",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.27 is enforced — Splunk UC-22.5.23: MiFID II Best Execution — Client Limit Order Price Improvement vs Top of Book.",
                  "ea": "Saved search 'UC-22.5.23' running on index=trading sourcetype=\"ems:fill\" (order_type, limit_px, best_bid, best_ask, exec_px, side, client_segment), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.24",
              "n": "MiFID II Transaction Reporting — Field Population Quality Scorecards by Counterparty",
              "c": "high",
              "f": "intermediate",
              "v": "EMIR/MiFIR reporting quality depends on complete fields. Scoring rejects and nulls in critical RTS 22 fields by executing broker or desk prevents silent degradation of report quality after system upgrades.",
              "t": "ARM platform logs via HEC",
              "d": "`index=reg` `sourcetype=\"arm:submission\"` (trade_id, venue, counterparty, reject_reason, field_nulls_json, _time)",
              "q": "index=reg sourcetype=\"arm:submission\" earliest=-30d\n| spath path=field_nulls_json\n| eval null_cnt=mvcount('field_nulls_json{}')\n| where isnotnull(reject_reason) OR null_cnt>3\n| stats count as bad_msgs, avg(null_cnt) as avg_nulls by counterparty\n| sort - bad_msgs",
              "m": "(1) Normalize `field_nulls_json` to multivalue; (2) threshold per message type; (3) join to release calendar; (4) weekly quality stand-up; (5) evidence for NCAs inquiries.",
              "z": "Table (counterparties), Line chart (bad_msgs over time), Single value (total).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ARM platform logs via HEC.\n• Ensure the following data sources are available: `index=reg` `sourcetype=\"arm:submission\"` (trade_id, venue, counterparty, reject_reason, field_nulls_json, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize `field_nulls_json` to multivalue; (2) threshold per message type; (3) join to release calendar; (4) weekly quality stand-up; (5) evidence for NCAs inquiries.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=reg sourcetype=\"arm:submission\" earliest=-30d\n| spath path=field_nulls_json\n| eval null_cnt=mvcount('field_nulls_json{}')\n| where isnotnull(reject_reason) OR null_cnt>3\n| stats count as bad_msgs, avg(null_cnt) as avg_nulls by counterparty\n| sort - bad_msgs\n```\n\nUnderstanding this SPL\n\n**MiFID II Transaction Reporting — Field Population Quality Scorecards by Counterparty** — EMIR/MiFIR reporting quality depends on complete fields. Scoring rejects and nulls in critical RTS 22 fields by executing broker or desk prevents silent degradation of report quality after system upgrades.\n\nDocumented **Data sources**: `index=reg` `sourcetype=\"arm:submission\"` (trade_id, venue, counterparty, reject_reason, field_nulls_json, _time). **App/TA** (typical add-on context): ARM platform logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: reg; **sourcetype**: arm:submission. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=reg, sourcetype=\"arm:submission\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• `eval` defines or adjusts **null_cnt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(reject_reason) OR null_cnt>3` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by counterparty** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (counterparties), Line chart (bad_msgs over time), Single value (total).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We eMIR/MiFIR reporting quality depends on complete fields.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.26 is enforced — Splunk UC-22.5.24: MiFID II Transaction Reporting — Field Population Quality Scorecards by Counterparty.",
                  "ea": "Saved search 'UC-22.5.24' running on index=reg sourcetype=\"arm:submission\" (trade_id, venue, counterparty, reject_reason, field_nulls_json, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.5.25",
              "n": "MiFID II Transaction Reporting — End-to-End Latency from Execution to ARM Accept Acknowledgment",
              "c": "high",
              "f": "advanced",
              "v": "Timeliness reduces backlogs that cause late reports. Measuring latency from internal execution capture to ARM acceptance ACK supports operational resilience of the reporting chain beyond static “file sent” checks.",
              "t": "OMS + ARM via correlated trace IDs in HEC",
              "d": "`index=trading` `sourcetype=\"oms:exec\"` (exec_id, exec_ts); `index=reg` `sourcetype=\"arm:ack\"` (exec_id, ack_ts, status)",
              "q": "index=trading sourcetype=\"oms:exec\" earliest=-7d\n| eval et=strptime(exec_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left exec_id [\n    search index=reg sourcetype=\"arm:ack\" status=\"ACCEPTED\" earliest=-7d\n    | eval at=strptime(ack_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n    | stats min(at) as first_ack by exec_id\n  ]\n| eval latency_ms=(first_ack-et)*1000\n| where latency_ms>5000 OR isnull(first_ack)\n| stats perc95(latency_ms) as p95_ms, count as breaches",
              "m": "(1) Clock sync NTP monitoring separately; (2) tune SLA ms by instrument; (3) alert reporting ops; (4) dashboard by venue session; (5) retain for audit trail.",
              "z": "Timechart (p95_ms), Table (slow exec_id), Single value (breaches).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OMS + ARM via correlated trace IDs in HEC.\n• Ensure the following data sources are available: `index=trading` `sourcetype=\"oms:exec\"` (exec_id, exec_ts); `index=reg` `sourcetype=\"arm:ack\"` (exec_id, ack_ts, status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Clock sync NTP monitoring separately; (2) tune SLA ms by instrument; (3) alert reporting ops; (4) dashboard by venue session; (5) retain for audit trail.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trading sourcetype=\"oms:exec\" earliest=-7d\n| eval et=strptime(exec_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left exec_id [\n    search index=reg sourcetype=\"arm:ack\" status=\"ACCEPTED\" earliest=-7d\n    | eval at=strptime(ack_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n    | stats min(at) as first_ack by exec_id\n  ]\n| eval latency_ms=(first_ack-et)*1000\n| where latency_ms>5000 OR isnull(first_ack)\n| stats perc95(latency_ms) as p95_ms, count as breaches\n```\n\nUnderstanding this SPL\n\n**MiFID II Transaction Reporting — End-to-End Latency from Execution to ARM Accept Acknowledgment** — Timeliness reduces backlogs that cause late reports. Measuring latency from internal execution capture to ARM acceptance ACK supports operational resilience of the reporting chain beyond static “file sent” checks.\n\nDocumented **Data sources**: `index=trading` `sourcetype=\"oms:exec\"` (exec_id, exec_ts); `index=reg` `sourcetype=\"arm:ack\"` (exec_id, ack_ts, status). **App/TA** (typical add-on context): OMS + ARM via correlated trace IDs in HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trading; **sourcetype**: oms:exec. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trading, sourcetype=\"oms:exec\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **et** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **latency_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where latency_ms>5000 OR isnull(first_ack)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (p95_ms), Table (slow exec_id), Single value (breaches).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We timeliness reduces backlogs that cause late reports.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "pillar": "observability",
              "regs": [
                "MiFID II"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "MiFID II",
                  "v": "Directive 2014/65/EU",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MiFID II Art.26 is enforced — Splunk UC-22.5.25: MiFID II Transaction Reporting — End-to-End Latency from Execution to ARM Accept Acknowledgment.",
                  "ea": "Saved search 'UC-22.5.25' running on index=trading sourcetype=\"oms:exec\" (exec_id, exec_ts); index=reg sourcetype=\"arm:ack\" (exec_id, ack_ts, status), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 25,
            "none": 0
          }
        },
        {
          "i": "22.6",
          "n": "ISO 27001",
          "u": [
            {
              "i": "22.6.1",
              "n": "ISO 27001 Annex A Control Effectiveness Monitoring",
              "c": "critical",
              "f": "advanced",
              "v": "Proves that detective controls implemented as ES correlation searches actually execute, complete, and produce hits — mapped to Annex A control IDs — so auditors see operating effectiveness, not only documented intent.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`index=_internal` `source=*scheduler.log*` (savedsearch_name, run_time, status, skip_reason); `` `notable` `` macro (rule_name, urgency, _time); CSV lookup `iso27001_annex_a_es_rule_control_lookup` (correlation_search_short, annex_a_control_id, control_title)",
              "q": "index=_internal source=*scheduler.log* savedsearch_name=\"*Correlation*\" earliest=-30d\n| stats count as executions,\n        avg(run_time) as avg_run_time_sec,\n        sum(eval(if(status==\"skipped\",1,0))) as skipped_runs\n    by savedsearch_name\n| eval correlation_search_short=replace(savedsearch_name, \"(?i)^.*Correlation Search\\\\s*-\\\\s*\", \"\")\n| lookup iso27001_annex_a_es_rule_control_lookup correlation_search_short OUTPUT annex_a_control_id, control_title\n| join type=left max=0 correlation_search_short [\n    search `notable` earliest=-90d\n    | stats count as notable_hits by rule_name\n    | rename rule_name as correlation_search_short\n  ]\n| eval reliability_pct=round(100*(executions-skipped_runs)/executions, 1)\n| fillnull value=0 notable_hits\n| table annex_a_control_id, control_title, savedsearch_name, executions, skipped_runs, reliability_pct, notable_hits\n| sort annex_a_control_id",
              "m": "(1) Build `iso27001_annex_a_es_rule_control_lookup.csv` on the ES search head: `correlation_search_short` must match ES `rule_name` as shown in Incident Review; (2) map each row to `annex_a_control_id` (e.g. A.12.4.1) and `control_title`; (3) ensure `_internal` scheduler data is available on the SH; (4) schedule weekly for control-owner review; alert on `skipped_runs` spikes or zero `notable_hits` for critical controls.",
              "z": "Table (control x rule health), Column chart (reliability_pct by rule), Single value (total skipped runs).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `index=_internal` `source=*scheduler.log*` (savedsearch_name, run_time, status, skip_reason); `` `notable` `` macro (rule_name, urgency, _time); CSV lookup `iso27001_annex_a_es_rule_control_lookup` (correlation_search_short, annex_a_control_id, control_title).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build `iso27001_annex_a_es_rule_control_lookup.csv` on the ES search head: `correlation_search_short` must match ES `rule_name` as shown in Incident Review; (2) map each row to `annex_a_control_id` (e.g. A.12.4.1) and `control_title`; (3) ensure `_internal` scheduler data is available on the SH; (4) schedule weekly for control-owner review; alert on `skipped_runs` spikes or zero `notable_hits` for critical controls.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal source=*scheduler.log* savedsearch_name=\"*Correlation*\" earliest=-30d\n| stats count as executions,\n        avg(run_time) as avg_run_time_sec,\n        sum(eval(if(status==\"skipped\",1,0))) as skipped_runs\n    by savedsearch_name\n| eval correlation_search_short=replace(savedsearch_name, \"(?i)^.*Correlation Search\\\\s*-\\\\s*\", \"\")\n| lookup iso27001_annex_a_es_rule_control_lookup correlation_search_short OUTPUT annex_a_control_id, control_title\n| join type=left max=0 correlation_search_short [\n    search `notable` earliest=-90d\n    | stats count as notable_hits by rule_name\n    | rename rule_name as correlation_search_short\n  ]\n| eval reliability_pct=round(100*(executions-skipped_runs)/executions, 1)\n| fillnull value=0 notable_hits\n| table annex_a_control_id, control_title, savedsearch_name, executions, skipped_runs, reliability_pct, notable_hits\n| sort annex_a_control_id\n```\n\nUnderstanding this SPL\n\n**ISO 27001 Annex A Control Effectiveness Monitoring** — Proves that detective controls implemented as ES correlation searches actually execute, complete, and produce hits — mapped to Annex A control IDs — so auditors see operating effectiveness, not only documented intent.\n\nDocumented **Data sources**: `index=_internal` `source=*scheduler.log*` (savedsearch_name, run_time, status, skip_reason); `` `notable` `` macro (rule_name, urgency, _time); CSV lookup `iso27001_annex_a_es_rule_control_lookup` (correlation_search_short, annex_a_control_id, control_title). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by savedsearch_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **correlation_search_short** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **reliability_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Fills null values with `fillnull`.\n• Pipeline stage (see **ISO 27001 Annex A Control Effectiveness Monitoring**): table annex_a_control_id, control_title, savedsearch_name, executions, skipped_runs, reliability_pct, notable_hits\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (control x rule health), Column chart (reliability_pct by rule), Single value (total skipped runs).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Annex A Control Effectiveness Monitoring. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.16",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.16 (Monitoring activities) is enforced — Splunk UC-22.6.1: ISO 27001 Annex A Control Effectiveness Monitoring.",
                  "ea": "Saved search 'UC-22.6.1' running on index _internal and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.2",
              "n": "ISO 27001 Information Security Event Log Review Compliance (A.12.4)",
              "c": "high",
              "f": "intermediate",
              "v": "Produces auditor-ready evidence that named users routinely query security data in Splunk (log review activity), including who reviewed which index classes and how often — not merely that logs exist.",
              "t": "Splunk Enterprise core auditing (`_audit` index, no separate TA required)",
              "d": "`index=_audit` `action=search` (user, search, info, result_count, total_run_time, _time)",
              "q": "index=_audit action=search info=completed user!=\"splunk-system-user\" earliest=-30d\n| where match(search, \"(?i)index\\\\s*=\\\\s*(security|notable|wineventlog|proxy|dns|firewall|ids)\")\n| bucket _time span=1d as review_day\n| stats dc(user) as distinct_reviewers,\n        count as review_searches,\n        sum(result_count) as rows_examined,\n        values(user) as sample_users\n    by review_day\n| eval cadence_met=if(distinct_reviewers>=1 AND review_searches>=1, 1, 0)\n| sort - review_day",
              "m": "(1) Confirm audit logging is enabled for search activity and `_audit` retention meets policy; (2) edit the `match()` index list to your real security index names; (3) exclude service accounts via `user!=` or lookup; (4) monthly PDF/CSV export for ISO evidence packs; (5) tune minimum thresholds to your documented log review frequency.",
              "z": "Time chart (review_searches by day), Table (review_day, reviewers, cadence_met), Single value (rolling 30d cadence percentage).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise core auditing (`_audit` index, no separate TA required).\n• Ensure the following data sources are available: `index=_audit` `action=search` (user, search, info, result_count, total_run_time, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm audit logging is enabled for search activity and `_audit` retention meets policy; (2) edit the `match()` index list to your real security index names; (3) exclude service accounts via `user!=` or lookup; (4) monthly PDF/CSV export for ISO evidence packs; (5) tune minimum thresholds to your documented log review frequency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit action=search info=completed user!=\"splunk-system-user\" earliest=-30d\n| where match(search, \"(?i)index\\\\s*=\\\\s*(security|notable|wineventlog|proxy|dns|firewall|ids)\")\n| bucket _time span=1d as review_day\n| stats dc(user) as distinct_reviewers,\n        count as review_searches,\n        sum(result_count) as rows_examined,\n        values(user) as sample_users\n    by review_day\n| eval cadence_met=if(distinct_reviewers>=1 AND review_searches>=1, 1, 0)\n| sort - review_day\n```\n\nUnderstanding this SPL\n\n**ISO 27001 Information Security Event Log Review Compliance (A.12.4)** — Produces auditor-ready evidence that named users routinely query security data in Splunk (log review activity), including who reviewed which index classes and how often — not merely that logs exist.\n\nDocumented **Data sources**: `index=_audit` `action=search` (user, search, info, result_count, total_run_time, _time). **App/TA** (typical add-on context): Splunk Enterprise core auditing (`_audit` index, no separate TA required). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(search, \"(?i)index\\\\s*=\\\\s*(security|notable|wineventlog|proxy|dns|firewall|ids)\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by review_day** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **cadence_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Time chart (review_searches by day), Table (review_day, reviewers, cadence_met), Single value (rolling 30d cadence percentage).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Information Security Event Log Review Compliance. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.12.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.12.4 is enforced — Splunk UC-22.6.2: ISO 27001 Information Security Event Log Review Compliance.",
                  "ea": "Saved search 'UC-22.6.2' running on index=_audit action=search (user, search, info, result_count, total_run_time, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2013",
                  "cl": "A.12.4.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.12.4.1 (Event logging (2013)) is enforced — Splunk UC-22.6.2: ISO 27001 Information Security Event Log Review Compliance.",
                  "ea": "Saved search 'UC-22.6.2' running on index=_audit action=search (user, search, info, result_count, total_run_time, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/54534.html"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.3",
              "n": "ISO 27001 Access Rights Review and Recertification (A.9.2.5)",
              "c": "critical",
              "f": "advanced",
              "v": "Captures group membership changes (on-prem AD or Entra ID) for access recertification evidence and detective alerting on privileged group churn.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110)",
              "d": "`index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, SubjectUserName, MemberName, Group_Name, ComputerName); `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (activityDisplayName, targetResources, initiatedBy)",
              "q": "index=azure sourcetype=\"mscs:azure:auditlog\"\n    activityDisplayName IN (\"Add member to group\",\"Remove member from group\")\n| table _time, activityDisplayName, initiatedBy.user.userPrincipalName, targetResources{}.displayName\n| sort - _time",
              "m": "(1) Install Splunk_TA_windows on DCs or use Windows Event Collector; enable Advanced Audit Policy for Security Group Management; (2) for cloud, configure Microsoft Cloud Services TA for Entra ID audit events; (3) maintain `privileged_ad_groups.csv` keyed on `Group_Name` and `lookup` to flag high-risk groups; (4) feed quarterly CSV to IAM recertification; (5) alert on changes to privileged groups outside CAB windows.",
              "z": "Table (evidence export), Time chart (changes per day), Bar chart (changes by Group_Name).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110).\n• Ensure the following data sources are available: `index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, SubjectUserName, MemberName, Group_Name, ComputerName); `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (activityDisplayName, targetResources, initiatedBy).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Install Splunk_TA_windows on DCs or use Windows Event Collector; enable Advanced Audit Policy for Security Group Management; (2) for cloud, configure Microsoft Cloud Services TA for Entra ID audit events; (3) maintain `privileged_ad_groups.csv` keyed on `Group_Name` and `lookup` to flag high-risk groups; (4) feed quarterly CSV to IAM recertification; (5) alert on changes to privileged groups outside CAB windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:auditlog\"\n    activityDisplayName IN (\"Add member to group\",\"Remove member from group\")\n| table _time, activityDisplayName, initiatedBy.user.userPrincipalName, targetResources{}.displayName\n| sort - _time\n```\n\nUnderstanding this SPL\n\n**ISO 27001 Access Rights Review and Recertification (A.9.2.5)** — Captures group membership changes (on-prem AD or Entra ID) for access recertification evidence and detective alerting on privileged group churn.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, SubjectUserName, MemberName, Group_Name, ComputerName); `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (activityDisplayName, targetResources, initiatedBy). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:auditlog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:auditlog\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **ISO 27001 Access Rights Review and Recertification (A.9.2.5)**): table _time, activityDisplayName, initiatedBy.user.userPrincipalName, targetResources{}.displayName\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ISO 27001 Access Rights Review and Recertification (A.9.2.5)** — Captures group membership changes (on-prem AD or Entra ID) for access recertification evidence and detective alerting on privileged group churn.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, SubjectUserName, MemberName, Group_Name, ComputerName); `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (activityDisplayName, targetResources, initiatedBy). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (evidence export), Time chart (changes per day), Bar chart (changes by Group_Name).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Access Rights Review and Recertification. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001"
              ],
              "a": [
                "Authentication",
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.9.2.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.9.2.5 is enforced — Splunk UC-22.6.3: ISO 27001 Access Rights Review and Recertification.",
                  "ea": "Saved search 'UC-22.6.3' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2013",
                  "cl": "A.9.2.5",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.9.2.5 (Review of user access rights (2013)) is enforced — Splunk UC-22.6.3: ISO 27001 Access Rights Review and Recertification.",
                  "ea": "Saved search 'UC-22.6.3' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/54534.html"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.4",
              "n": "ISO 27001 Information Labelling and Media Handling via DLP (A.8.2.3)",
              "c": "high",
              "f": "intermediate",
              "v": "Annex A expects procedures for labelling information according to protection needs and handling removable media and transfers consistently. Microsoft 365 DLP events evidence that confidentiality labels and policies are enforced in practice, not only in the ISMS manual.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055)",
              "d": "`index=o365` `sourcetype=\"ms:o365:management\"` (Workload, PolicyName, Operation, UserPrincipalName, SensitiveInfoType, FileName, Severity)",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Workload=\"Dlp\" earliest=-7d\n| where Operation IN (\"DlpRuleMatch\",\"DlpRuleUndo\") OR match(PolicyName,\"(?i)label|confidential|restricted\")\n| stats count by PolicyName, Operation, SensitiveInfoType, Severity\n| sort - count",
              "m": "(1) Map `PolicyName` to your classification scheme in `info_classification_lookup.csv`; (2) exclude benign test accounts; (3) alert on high-severity outbound matches to personal domains if field present; (4) align retention with A.18.1 legal holds; (5) include panel in annual internal audit evidence pack.",
              "z": "Heatmap (PolicyName x SensitiveInfoType), Table (top users optional via `stats by UserPrincipalName`), Bar chart (count by Severity).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [
                "T1005",
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055).\n• Ensure the following data sources are available: `index=o365` `sourcetype=\"ms:o365:management\"` (Workload, PolicyName, Operation, UserPrincipalName, SensitiveInfoType, FileName, Severity).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map `PolicyName` to your classification scheme in `info_classification_lookup.csv`; (2) exclude benign test accounts; (3) alert on high-severity outbound matches to personal domains if field present; (4) align retention with A.18.1 legal holds; (5) include panel in annual internal audit evidence pack.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Workload=\"Dlp\" earliest=-7d\n| where Operation IN (\"DlpRuleMatch\",\"DlpRuleUndo\") OR match(PolicyName,\"(?i)label|confidential|restricted\")\n| stats count by PolicyName, Operation, SensitiveInfoType, Severity\n| sort - count\n```\n\nUnderstanding this SPL\n\n**ISO 27001 Information Labelling and Media Handling via DLP (A.8.2.3)** — Annex A expects procedures for labelling information according to protection needs and handling removable media and transfers consistently. Microsoft 365 DLP events evidence that confidentiality labels and policies are enforced in practice, not only in the ISMS manual.\n\nDocumented **Data sources**: `index=o365` `sourcetype=\"ms:o365:management\"` (Workload, PolicyName, Operation, UserPrincipalName, SensitiveInfoType, FileName, Severity). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where Operation IN (\"DlpRuleMatch\",\"DlpRuleUndo\") OR match(PolicyName,\"(?i)label|confidential|restricted\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by PolicyName, Operation, SensitiveInfoType, Severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (PolicyName x SensitiveInfoType), Table (top users optional via `stats by UserPrincipalName`), Bar chart (count by Severity).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Information Labelling and Media Handling via DLP. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.2.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.2.3 is enforced — Splunk UC-22.6.4: ISO 27001 Information Labelling and Media Handling via DLP.",
                  "ea": "Saved search 'UC-22.6.4' running on sourcetype ms:o365:management and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.5",
              "n": "ISO 27001 Cryptographic Key and Certificate Lifecycle Monitoring (A.10.1.2)",
              "c": "critical",
              "f": "advanced",
              "v": "Key management requires defined lifecycles and protection of secret keys. Tracking upcoming TLS certificate expirations and vault key-rotation events demonstrates operational control over cryptography supporting confidentiality and integrity commitments.",
              "t": "Splunk Add-on for OpenTelemetry Collector (Splunkbase 7125) or certificate scan TA, Splunk HTTP Event Collector (core platform)",
              "d": "`index=security` `sourcetype=\"cert:inventory\"` (cn, san, not_after, issuer, host); `index=secrets` `sourcetype=\"_json\"` `source=\"http:vault_audit\"` (path, operation, _time) for asymmetric key rotations",
              "q": "index=secrets sourcetype=\"_json\" source=\"http:vault_audit\" operation=\"rotate\" earliest=-30d\n| stats count as rotations by path\n| sort - rotations",
              "m": "(1) Ingest nightly cert inventory from ACM or Venafi with `not_after` in consistent UTC format; (2) forward HashiCorp Vault audit or cloud KMS rotation webhooks to `index=secrets` for signing keys; (3) alert at 30/14/7 days on TLS `min_days`; (4) document owners in lookup `cert_owner.csv`; (5) tie renewals to change tickets for A.12.1.",
              "z": "Table (expiring certs), Timeline (rotation events), Single value (count expiring <14 days).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for OpenTelemetry Collector](https://splunkbase.splunk.com/app/7125)",
              "mitre": [
                "T1098",
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for OpenTelemetry Collector (Splunkbase 7125) or certificate scan TA, Splunk HTTP Event Collector (core platform).\n• Ensure the following data sources are available: `index=security` `sourcetype=\"cert:inventory\"` (cn, san, not_after, issuer, host); `index=secrets` `sourcetype=\"_json\"` `source=\"http:vault_audit\"` (path, operation, _time) for asymmetric key rotations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest nightly cert inventory from ACM or Venafi with `not_after` in consistent UTC format; (2) forward HashiCorp Vault audit or cloud KMS rotation webhooks to `index=secrets` for signing keys; (3) alert at 30/14/7 days on TLS `min_days`; (4) document owners in lookup `cert_owner.csv`; (5) tie renewals to change tickets for A.12.1.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=secrets sourcetype=\"_json\" source=\"http:vault_audit\" operation=\"rotate\" earliest=-30d\n| stats count as rotations by path\n| sort - rotations\n```\n\nUnderstanding this SPL\n\n**ISO 27001 Cryptographic Key and Certificate Lifecycle Monitoring (A.10.1.2)** — Key management requires defined lifecycles and protection of secret keys. Tracking upcoming TLS certificate expirations and vault key-rotation events demonstrates operational control over cryptography supporting confidentiality and integrity commitments.\n\nDocumented **Data sources**: `index=security` `sourcetype=\"cert:inventory\"` (cn, san, not_after, issuer, host); `index=secrets` `sourcetype=\"_json\"` `source=\"http:vault_audit\"` (path, operation, _time) for asymmetric key rotations. **App/TA** (typical add-on context): Splunk Add-on for OpenTelemetry Collector (Splunkbase 7125) or certificate scan TA, Splunk HTTP Event Collector (core platform). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: secrets; **sourcetype**: _json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=secrets, sourcetype=\"_json\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by path** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (expiring certs), Timeline (rotation events), Single value (count expiring <14 days).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Cryptographic Key and Certificate Lifecycle Monitoring. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "opentelemetry"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.10.1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.10.1.2 is enforced — Splunk UC-22.6.5: ISO 27001 Cryptographic Key and Certificate Lifecycle Monitoring.",
                  "ea": "Saved search 'UC-22.6.5' running on sourcetype cert:inventory and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.6",
              "n": "ISO 27001 Network Security — Segmentation and Firewall Deny Baseline (A.13.1.1)",
              "c": "high",
              "f": "intermediate",
              "v": "Networks must be segregated and filtered according to business requirements. Trending denied flows against approved baselines highlights misconfigurations or lateral movement precursors, supporting Annex A evidence for technical network controls.",
              "t": "Splunk Add-on for Palo Alto Networks (Splunkbase 2757) or vendor firewall TA",
              "d": "`index=network` `sourcetype=\"pan:traffic\"` (action, src, dest, dest_port, rule, app)",
              "q": "index=network sourcetype=\"pan:traffic\" action=deny earliest=-24h\n| stats count as denies, dc(src) as sources, values(dest_port) as ports by dest, rule\n| lookup expected_perimeter_denies.csv dest dest_port OUTPUT is_expected\n| where isnull(is_expected) OR is_expected=\"false\"\n| sort - denies",
              "m": "(1) Normalize firewall CIM or vendor fields to `action`, `src`, `dest`, `dest_port`; (2) seed `expected_perimeter_denies.csv` with known scanner noise; (3) alert on sudden `denies` spikes vs 30-day baseline using `anomalydetection` optional; (4) map `rule` to change records; (5) monthly review with network architecture team.",
              "z": "Map or table (top denied dest), Timechart (denies by rule), Bar chart (sources).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1562.007"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (Splunkbase 2757) or vendor firewall TA.\n• Ensure the following data sources are available: `index=network` `sourcetype=\"pan:traffic\"` (action, src, dest, dest_port, rule, app).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize firewall CIM or vendor fields to `action`, `src`, `dest`, `dest_port`; (2) seed `expected_perimeter_denies.csv` with known scanner noise; (3) alert on sudden `denies` spikes vs 30-day baseline using `anomalydetection` optional; (4) map `rule` to change records; (5) monthly review with network architecture team.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:traffic\" action=deny earliest=-24h\n| stats count as denies, dc(src) as sources, values(dest_port) as ports by dest, rule\n| lookup expected_perimeter_denies.csv dest dest_port OUTPUT is_expected\n| where isnull(is_expected) OR is_expected=\"false\"\n| sort - denies\n```\n\nUnderstanding this SPL\n\n**ISO 27001 Network Security — Segmentation and Firewall Deny Baseline (A.13.1.1)** — Networks must be segregated and filtered according to business requirements. Trending denied flows against approved baselines highlights misconfigurations or lateral movement precursors, supporting Annex A evidence for technical network controls.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"pan:traffic\"` (action, src, dest, dest_port, rule, app). **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (Splunkbase 2757) or vendor firewall TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:traffic\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest, rule** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(is_expected) OR is_expected=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ISO 27001 Network Security — Segmentation and Firewall Deny Baseline (A.13.1.1)** — Networks must be segregated and filtered according to business requirements. 
Trending denied flows against approved baselines highlights misconfigurations or lateral movement precursors, supporting Annex A evidence for technical network controls.\n\nDocumented **Data sources**: `index=network` `sourcetype=\"pan:traffic\"` (action, src, dest, dest_port, rule, app). **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (Splunkbase 2757) or vendor firewall TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map or table (top denied dest), Timechart (denies by rule), Bar chart (sources).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Network Security — Segmentation and Firewall Deny Baseline. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.13.1.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.13.1.1 is enforced — Splunk UC-22.6.6: ISO 27001 Network Security — Segmentation and Firewall Deny Baseline.",
                  "ea": "Saved search 'UC-22.6.6' running on index=network sourcetype=\"pan:traffic\" (action, src, dest, dest_port, rule, app), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.7",
              "n": "ISO 27001 Supplier IAM and SaaS Integration Change Surveillance (A.15.1.2)",
              "c": "high",
              "f": "advanced",
              "v": "Supplier relationships must address information security in agreements and monitor service changes. Cloud IAM audit logs for OAuth app consent and service principal changes surface high-risk supplier integrations that could bypass on-prem controls.",
              "t": "Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110)",
              "d": "`index=azure` `sourcetype=\"mscs:azure:auditlog\"` (activityDisplayName, initiatedBy, targetResources, result, _time)",
              "q": "index=azure sourcetype=\"mscs:azure:auditlog\" earliest=-30d\n    (activityDisplayName=\"*consent*\" OR activityDisplayName=\"*Add app*\" OR activityDisplayName=\"*service principal*\")\n| eval actor=coalesce(initiatedBy.user.userPrincipalName, initiatedBy.app.displayName, \"unknown\")\n| stats count by activityDisplayName, actor, result\n| sort - count",
              "m": "(1) Scope to tenant IDs for production directories; (2) enrich with `saas_vendor_risk.csv` keyed on `targetResources{}.displayName`; (3) alert on failed `result` spikes or new high-risk vendors; (4) require CAB reference in ServiceNow optional via lookup; (5) quarterly supplier review slide export.",
              "z": "Table (activity x actor), Bar chart (consent events by vendor), Single value (distinct actors).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1098",
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110).\n• Ensure the following data sources are available: `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (activityDisplayName, initiatedBy, targetResources, result, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Scope to tenant IDs for production directories; (2) enrich with `saas_vendor_risk.csv` keyed on `targetResources{}.displayName`; (3) alert on failed `result` spikes or new high-risk vendors; (4) require CAB reference in ServiceNow optional via lookup; (5) quarterly supplier review slide export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:auditlog\" earliest=-30d\n    (activityDisplayName=\"*consent*\" OR activityDisplayName=\"*Add app*\" OR activityDisplayName=\"*service principal*\")\n| eval actor=coalesce(initiatedBy.user.userPrincipalName, initiatedBy.app.displayName, \"unknown\")\n| stats count by activityDisplayName, actor, result\n| sort - count\n```\n\nUnderstanding this SPL\n\n**ISO 27001 Supplier IAM and SaaS Integration Change Surveillance (A.15.1.2)** — Supplier relationships must address information security in agreements and monitor service changes. Cloud IAM audit logs for OAuth app consent and service principal changes surface high-risk supplier integrations that could bypass on-prem controls.\n\nDocumented **Data sources**: `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (activityDisplayName, initiatedBy, targetResources, result, _time). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:auditlog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:auditlog\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **actor** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by activityDisplayName, actor, result** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ISO 27001 Supplier IAM and SaaS Integration Change Surveillance (A.15.1.2)** — Supplier relationships must address information security in agreements and monitor service changes. 
Cloud IAM audit logs for OAuth app consent and service principal changes surface high-risk supplier integrations that could bypass on-prem controls.\n\nDocumented **Data sources**: `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (activityDisplayName, initiatedBy, targetResources, result, _time). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (activity x actor), Bar chart (consent events by vendor), Single value (distinct actors).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Supplier IAM and SaaS Integration Change Surveillance. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.15.1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.15.1.2 is enforced — Splunk UC-22.6.7: ISO 27001 Supplier IAM and SaaS Integration Change Surveillance.",
                  "ea": "Saved search 'UC-22.6.7' running on index=azure sourcetype=\"mscs:azure:auditlog\" (activityDisplayName, initiatedBy, targetResources, result, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.8",
              "n": "ISO 27001 Segregation of Duties — Privileged Splunk Knowledge Object Changes (A.5.3)",
              "c": "critical",
              "f": "advanced",
              "v": "Conflicting duties must be separated to reduce fraud and error. Auditing edits to high-risk Splunk objects by the same accounts that can execute destructive searches evidences compensating detective controls where native SoD is limited.",
              "t": "Splunk Enterprise core auditing (`_audit` index)",
              "d": "`index=_audit` (object_type, action, user, object, info)",
              "q": "index=_audit object_type IN (\"savedsearch\",\"alert_actions\",\"transforms\",\"props\",\"authorize\") action IN (\"create\",\"update\",\"delete\") earliest=-30d\n| eval object_name=coalesce(object, info)\n| stats count as changes, values(action) as actions, dc(object_type) as object_types by user\n| lookup splunk_privileged_users.csv user OUTPUT is_breakglass\n| where is_breakglass=\"true\" OR changes>25\n| sort - changes",
              "m": "(1) Maintain `splunk_privileged_users.csv` for admin and break-glass IDs; (2) forward `_audit` from all search heads in the cluster; (3) alert on deletes to `authorize` or `props`; (4) pair with change tickets via `user`→`snow_sys_id` lookup; (5) include in role recertification for A.5.3 (segregation of duties).",
              "z": "Table (user, changes, actions), Timeline (edits by object_type), Single value (delete actions count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1098",
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise core auditing (`_audit` index).\n• Ensure the following data sources are available: `index=_audit` (object_type, action, user, object, info).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `splunk_privileged_users.csv` for admin and break-glass IDs; (2) forward `_audit` from all search heads in the cluster; (3) alert on deletes to `authorize` or `props`; (4) pair with change tickets via `user`→`snow_sys_id` lookup; (5) include in role recertification for A.5.3 (segregation of duties).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit object_type IN (\"savedsearch\",\"alert_actions\",\"transforms\",\"props\",\"authorize\") action IN (\"create\",\"update\",\"delete\") earliest=-30d\n| eval object_name=coalesce(object, info)\n| stats count as changes, values(action) as actions, dc(object_type) as object_types by user\n| lookup splunk_privileged_users.csv user OUTPUT is_breakglass\n| where is_breakglass=\"true\" OR changes>25\n| sort - changes\n```\n\nUnderstanding this SPL\n\n**ISO 27001 Segregation of Duties — Privileged Splunk Knowledge Object Changes (A.5.3)** — Conflicting duties must be separated to reduce fraud and error. Auditing edits to high-risk Splunk objects by the same accounts that can execute destructive searches evidences compensating detective controls where native SoD is limited.\n\nDocumented **Data sources**: `index=_audit` (object_type, action, user, object, info). **App/TA** (typical add-on context): Splunk Enterprise core auditing (`_audit` index). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **object_name** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where is_breakglass=\"true\" OR changes>25` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, changes, actions), Timeline (edits by object_type), Single value (delete actions count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Segregation of Duties — Privileged Splunk Knowledge Object Changes. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001",
                "SOX"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.5.3 is enforced — Splunk UC-22.6.8: ISO 27001 Segregation of Duties — Privileged Splunk Knowledge Object Changes.",
                  "ea": "Saved search 'UC-22.6.8' running on index=_audit (object_type, action, user, object, info), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.9",
              "n": "ISMS Policy Acknowledgment and Version Drift in Confluence or SharePoint (A.5.1)",
              "c": "high",
              "f": "intermediate",
              "v": "Shows when information security policy documents were last updated and accessed so management direction stays current and employees can evidence review.",
              "t": "Splunk Add-on for Atlassian Jira/Confluence HEC or O365",
              "d": "`confluence:audit` or `ms:o365:management` SharePoint",
              "q": "index=collab sourcetype=confluence:audit earliest=-365d object_type=page (space_key=ISMS OR match(object_name,\"(?i)information.security.policy\"))\n| stats latest(_time) as last_change, count as views by object_name\n| eval days_since_change=round((now()-last_change)/86400,0)",
              "m": "(1) Map document IDs to Annex A scope; (2) set alert if no update in 365 days; (3) export for management review minutes.",
              "z": "Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Atlassian Jira/Confluence HEC or O365.\n• Ensure the following data sources are available: `confluence:audit` or `ms:o365:management` SharePoint.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map document IDs to Annex A scope; (2) set alert if no update in 365 days; (3) export for management review minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=collab sourcetype=confluence:audit earliest=-365d object_type=page (space_key=ISMS OR match(object_name,\"(?i)information.security.policy\"))\n| stats latest(_time) as last_change, count as views by object_name\n| eval days_since_change=round((now()-last_change)/86400,0)\n```\n\nUnderstanding this SPL\n\n**ISMS Policy Acknowledgment and Version Drift in Confluence or SharePoint (A.5.1)** — Shows when information security policy documents were last updated and accessed so management direction stays current and employees can evidence review.\n\nDocumented **Data sources**: `confluence:audit` or `ms:o365:management` SharePoint. **App/TA** (typical add-on context): Splunk Add-on for Atlassian Jira/Confluence HEC or O365. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: collab; **sourcetype**: confluence:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=collab, sourcetype=confluence:audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by object_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since_change** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for ISMS Policy Acknowledgment and Version Drift in Confluence or SharePoint. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "jira"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.5.1 is enforced — Splunk UC-22.6.9: ISMS Policy Acknowledgment and Version Drift in Confluence or SharePoint.",
                  "ea": "Saved search 'UC-22.6.9' running on confluence:audit or ms:o365:management SharePoint, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.10",
              "n": "Security Role Changes vs RACI in ServiceNow CMDB Ownership (A.5.2)",
              "c": "medium",
              "f": "advanced",
              "v": "Correlates CI ownership role changes with expected RACI contacts so responsibilities for assets remain traceable.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`snow:cmdb_ci`",
              "q": "index=itsm sourcetype=snow:cmdb_ci earliest=-90d\n| where match(name,\"(?i)prod)\")\n| stats latest(sys_updated_on) as last_update, values(managed_by) as owners by name\n| lookup security_raci.csv ci_name AS name OUTPUT expected_owner\n| where NOT match(mvjoin(owners,\"|\"), expected_owner)",
              "m": "(1) Maintain RACI lookup from ISO SoA; (2) normalize CMDB fields; (3) quarterly access governance review.",
              "z": "Table, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `snow:cmdb_ci`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain RACI lookup from ISO SoA; (2) normalize CMDB fields; (3) quarterly access governance review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=snow:cmdb_ci earliest=-90d\n| where match(name,\"(?i)prod)\")\n| stats latest(sys_updated_on) as last_update, values(managed_by) as owners by name\n| lookup security_raci.csv ci_name AS name OUTPUT expected_owner\n| where NOT match(mvjoin(owners,\"|\"), expected_owner)\n```\n\nUnderstanding this SPL\n\n**Security Role Changes vs RACI in ServiceNow CMDB Ownership (A.5.2)** — Correlates CI ownership role changes with expected RACI contacts so responsibilities for assets remain traceable.\n\nDocumented **Data sources**: `snow:cmdb_ci`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_ci. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=snow:cmdb_ci, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(name,\"(?i)prod)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where NOT match(mvjoin(owners,\"|\"), expected_owner)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Security Role Changes vs RACI in ServiceNow CMDB Ownership. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.5.2 is enforced — Splunk UC-22.6.10: Security Role Changes vs RACI in ServiceNow CMDB Ownership.",
                  "ea": "Saved search 'UC-22.6.10' running on snow:cmdb_ci, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.11",
              "n": "Threat Intelligence Feed Freshness and STIX Object Ingest Gaps (A.5.7)",
              "c": "high",
              "f": "advanced",
              "v": "Measures age of threat intel objects and ingest errors so external threat information supports decisions per Annex A.",
              "t": "Splunk Enterprise Security (Splunkbase 263) or custom HEC",
              "d": "`threat_intel_domain.csv` lookup",
              "q": "| inputlookup threat_intel_domain.csv\n| eval age_days=(now()-strptime(last_modified,\"%Y-%m-%d\"))/86400\n| where age_days>30\n| stats count by source, age_days",
              "m": "(1) Align with TI subscription SLAs; (2) alert stale feeds; (3) document in threat intel procedure.",
              "z": "Table, Column chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263) or custom HEC.\n• Ensure the following data sources are available: `threat_intel_domain.csv` lookup.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align with TI subscription SLAs; (2) alert stale feeds; (3) document in threat intel procedure.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup threat_intel_domain.csv\n| eval age_days=(now()-strptime(last_modified,\"%Y-%m-%d\"))/86400\n| where age_days>30\n| stats count by source, age_days\n```\n\nUnderstanding this SPL\n\n**Threat Intelligence Feed Freshness and STIX Object Ingest Gaps (A.5.7)** — Measures age of threat intel objects and ingest errors so external threat information supports decisions per Annex A.\n\nDocumented **Data sources**: `threat_intel_domain.csv` lookup. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263) or custom HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>30` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by source, age_days** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Column chart",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Threat Intelligence Feed Freshness and STIX Object Ingest Gaps. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.5.7 (Threat intelligence (2022 new)) is enforced — Splunk UC-22.6.11: Threat Intelligence Feed Freshness and STIX Object Ingest Gaps.",
                  "ea": "Saved search 'UC-22.6.11' running on threat_intel_domain.csv lookup, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.12",
              "n": "Project Security Gate — Production Deploys Without Security CAB Tag (A.5.8)",
              "c": "medium",
              "f": "intermediate",
              "v": "Flags production deployments missing required security project gate metadata for new or changed systems.",
              "t": "Splunk Add-on for GitHub or Jenkins, ServiceNow change",
              "d": "`github:workflow` or `jenkins:build`; `snow:change_request`",
              "q": "index=cicd sourcetype=github:workflow earliest=-30d conclusion=success branch=main\n| lookup snow_change_tags.csv deploy_sha OUTPUT security_gate\n| where isnull(security_gate) OR security_gate!='approved'",
              "m": "(1) Require change correlation ID in pipeline; (2) block deploy on missing tag optional; (3) map to secure SDLC KPIs.",
              "z": "Table, Time chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for GitHub or Jenkins, ServiceNow change.\n• Ensure the following data sources are available: `github:workflow` or `jenkins:build`; `snow:change_request`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require change correlation ID in pipeline; (2) block deploy on missing tag optional; (3) map to secure SDLC KPIs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=github:workflow earliest=-30d conclusion=success branch=main\n| lookup snow_change_tags.csv deploy_sha OUTPUT security_gate\n| where isnull(security_gate) OR security_gate!='approved'\n```\n\nUnderstanding this SPL\n\n**Project Security Gate — Production Deploys Without Security CAB Tag (A.5.8)** — Flags production deployments missing required security project gate metadata for new or changed systems.\n\nDocumented **Data sources**: `github:workflow` or `jenkins:build`; `snow:change_request`. **App/TA** (typical add-on context): Splunk Add-on for GitHub or Jenkins, ServiceNow change. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: github:workflow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=github:workflow, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(security_gate) OR security_gate!='approved'` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Project Security Gate — Production Deploys Without Security CAB Tag (A.5.8)** — Flags production deployments missing required security project gate metadata for new or changed systems.\n\nDocumented **Data sources**: `github:workflow` or `jenkins:build`; `snow:change_request`. **App/TA** (typical add-on context): Splunk Add-on for GitHub or Jenkins, ServiceNow change. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Project Security Gate — Production Deploys Without Security CAB Tag. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "github",
                "jenkins",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.8",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.5.8 is enforced — Splunk UC-22.6.12: Project Security Gate — Production Deploys Without Security CAB Tag.",
                  "ea": "Saved search 'UC-22.6.12' running on github:workflow or jenkins:build; snow:change_request, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                },
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.13",
              "n": "Cloud Shared Responsibility Control Coverage Map (A.5.23)",
              "c": "high",
              "f": "advanced",
              "v": "Joins cloud configuration findings to a shared-responsibility matrix so use of cloud services shows which Annex controls are organization-operated.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876), Splunk App for AWS Security Hub",
              "d": "`aws:securityhub_finding`",
              "q": "index=aws sourcetype=aws:securityhub_finding earliest=-7d RecordState=ACTIVE\n| lookup csr_shared_responsibility.csv control_id OUTPUT annex_owner\n| stats count by annex_owner, Severity.Label",
              "m": "(1) Populate lookup from CSP matrix; (2) prioritize CRITICAL; (3) supplier review input.",
              "z": "Heatmap, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876), Splunk App for AWS Security Hub.\n• Ensure the following data sources are available: `aws:securityhub_finding`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate lookup from CSP matrix; (2) prioritize CRITICAL; (3) supplier review input.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=aws:securityhub_finding earliest=-7d RecordState=ACTIVE\n| lookup csr_shared_responsibility.csv control_id OUTPUT annex_owner\n| stats count by annex_owner, Severity.Label\n```\n\nUnderstanding this SPL\n\n**Cloud Shared Responsibility Control Coverage Map (A.5.23)** — Joins cloud configuration findings to a shared-responsibility matrix so use of cloud services shows which Annex controls are organization-operated.\n\nDocumented **Data sources**: `aws:securityhub_finding`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), Splunk App for AWS Security Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:securityhub_finding. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=aws:securityhub_finding, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by annex_owner, Severity.Label** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cloud Shared Responsibility Control Coverage Map (A.5.23)** — Joins cloud configuration findings to a shared-responsibility matrix so use of cloud services shows which Annex controls are organization-operated.\n\nDocumented **Data sources**: `aws:securityhub_finding`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), Splunk App for AWS Security Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Cloud Shared Responsibility Control Coverage Map. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Alerts"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.23",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.5.23 (Information security in cloud services (2022 new)) is enforced — Splunk UC-22.6.13: Cloud Shared Responsibility Control Coverage Map.",
                  "ea": "Saved search 'UC-22.6.13' running on aws:securityhub_finding, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.14",
              "n": "Business Continuity — RTO Breach Signals from ITSI Service Degradation (A.5.29)",
              "c": "medium",
              "f": "intermediate",
              "v": "Uses service health episodes breaching RTO thresholds as objective continuity monitoring beyond paper plans.",
              "t": "Splunk ITSI (Splunkbase 1841)",
              "d": "`itsi_summary`",
              "q": "index=itsi_summary earliest=-90d is_service_max_severity_event=1\n| lookup service_rto_minutes.csv serviceid OUTPUT rto_minutes\n| eval breach=if(severity>=5 AND duration>rto_minutes,1,0)\n| where breach=1\n| stats count by serviceid",
              "m": "(1) Calibrate duration field to your ITSI KPIs; (2) align with BIA tiers; (3) BC manager monthly review.",
              "z": "Bar chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (Splunkbase 1841).\n• Ensure the following data sources are available: `itsi_summary`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Calibrate duration field to your ITSI KPIs; (2) align with BIA tiers; (3) BC manager monthly review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary earliest=-90d is_service_max_severity_event=1\n| lookup service_rto_minutes.csv serviceid OUTPUT rto_minutes\n| eval breach=if(severity>=5 AND duration>rto_minutes,1,0)\n| where breach=1\n| stats count by serviceid\n```\n\nUnderstanding this SPL\n\n**Business Continuity — RTO Breach Signals from ITSI Service Degradation (A.5.29)** — Uses service health episodes breaching RTO thresholds as objective continuity monitoring beyond paper plans.\n\nDocumented **Data sources**: `itsi_summary`. **App/TA** (typical add-on context): Splunk ITSI (Splunkbase 1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where breach=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by serviceid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Business Continuity — RTO Breach Signals from ITSI Service Degradation (A.5.29)** — Uses service health episodes breaching RTO thresholds as objective continuity monitoring beyond paper plans.\n\nDocumented **Data sources**: `itsi_summary`. **App/TA** (typical add-on context): Splunk ITSI (Splunkbase 1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart, Table",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Business Continuity — RTO Breach Signals from ITSI Service Degradation. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.29",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.5.29 is enforced — Splunk UC-22.6.14: Business Continuity — RTO Breach Signals from ITSI Service Degradation.",
                  "ea": "Saved search 'UC-22.6.14' running on itsi_summary, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.15",
              "n": "ICT Readiness for BC — Backup Window Overruns vs RPO (A.5.30)",
              "c": "high",
              "f": "advanced",
              "v": "Compares backup job duration and completion time against RPO windows for ICT supporting business continuity.",
              "t": "NetBackup or vendor backup sourcetype",
              "d": "`netbackup:job`",
              "q": "index=backup sourcetype=netbackup:job earliest=-14d\n| lookup app_rpo_hours.csv policy_name OUTPUT rpo_hours\n| eval end_epoch=_time+duration_sec\n| eval overrun=if((end_epoch-_time)/3600>rpo_hours,1,0)\n| where status!=\"success\" OR overrun=1\n| stats count by policy_name, client_name",
              "m": "(1) Normalize duration fields; (2) tune for incremental chains; (3) feed BC tests.",
              "z": "Table, Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: NetBackup or vendor backup sourcetype.\n• Ensure the following data sources are available: `netbackup:job`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize duration fields; (2) tune for incremental chains; (3) feed BC tests.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=netbackup:job earliest=-14d\n| lookup app_rpo_hours.csv policy_name OUTPUT rpo_hours\n| eval end_epoch=_time+duration_sec\n| eval overrun=if((end_epoch-_time)/3600>rpo_hours,1,0)\n| where status!=\"success\" OR overrun=1\n| stats count by policy_name, client_name\n```\n\nUnderstanding this SPL\n\n**ICT Readiness for BC — Backup Window Overruns vs RPO (A.5.30)** — Compares backup job duration and completion time against RPO windows for ICT supporting business continuity.\n\nDocumented **Data sources**: `netbackup:job`. **App/TA** (typical add-on context): NetBackup or vendor backup sourcetype. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: netbackup:job. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=netbackup:job, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **end_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overrun** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where status!=\"success\" OR overrun=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by policy_name, client_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for ICT Readiness for BC — Backup Window Overruns vs RPO. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.30",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.5.30 is enforced — Splunk UC-22.6.15: ICT Readiness for BC — Backup Window Overruns vs RPO.",
                  "ea": "Saved search 'UC-22.6.15' running on netbackup:job, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.16",
              "n": "Compliance with Policies — Splunk Search Head Knowledge Object Violations (A.5.36)",
              "c": "medium",
              "f": "advanced",
              "v": "Detects saved searches referencing non-approved indexes or missing required index= constraint for enforced search policies.",
              "t": "Splunk Enterprise (`_audit`, REST)",
              "d": "| rest /services/saved/searches",
              "q": "| rest /services/saved/searches splunk_server=local count=0\n| search disabled=0 is_scheduled=1\n| where !match(search,\"(?i)index\\s*=\\s*\") OR match(search,\"(?i)index\\s*=\\s*all\")\n| table title, eai:acl.owner, search",
              "m": "(1) Define approved index list separately; (2) exclude system apps; (3) quarterly SoA evidence for tooling policies.",
              "z": "Table, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise (`_audit`, REST).\n• Ensure the following data sources are available: | rest /services/saved/searches.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define approved index list separately; (2) exclude system apps; (3) quarterly SoA evidence for tooling policies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/saved/searches splunk_server=local count=0\n| search disabled=0 is_scheduled=1\n| where !match(search,\"(?i)index\\s*=\\s*\") OR match(search,\"(?i)index\\s*=\\s*all\")\n| table title, eai:acl.owner, search\n```\n\nUnderstanding this SPL\n\n**Compliance with Policies — Splunk Search Head Knowledge Object Violations (A.5.36)** — Detects saved searches referencing non-approved indexes or missing required index= constraint for enforced search policies.\n\nDocumented **Data sources**: | rest /services/saved/searches. **App/TA** (typical add-on context): Splunk Enterprise (`_audit`, REST). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Applies an explicit `search` filter to narrow the current result set.\n• Filters the current rows with `where !match(search,\"(?i)index\\s*=\\s*\") OR match(search,\"(?i)index\\s*=\\s*all\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Compliance with Policies — Splunk Search Head Knowledge Object Violations (A.5.36)**): table title, eai:acl.owner, search\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Compliance with Policies — Splunk Search Head Knowledge Object Violations. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.36",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.5.36 is enforced — Splunk UC-22.6.16: Compliance with Policies — Splunk Search Head Knowledge Object Violations.",
                  "ea": "Saved search 'UC-22.6.16' running on | rest /services/saved/searches, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.17",
              "n": "Personnel Screening — Contractor Badge Activations Before Background Check Complete (A.6.1)",
              "c": "high",
              "f": "intermediate",
              "v": "Joins physical access grant events to HR screening completion dates to reduce insider risk from premature access.",
              "t": "Lenel/CCure HEC, HR CSV",
              "d": "`pacs:access`; `hr_screening.csv`",
              "q": "index=physical sourcetype=pacs:access action=grant earliest=-90d\n| lookup hr_screening.csv badge_id OUTPUT screening_complete_date\n| eval screen_epoch=strptime(screening_complete_date,\"%Y-%m-%d\")\n| where isnull(screen_epoch) OR _time < screen_epoch",
              "m": "(1) Align badge_id keys; (2) emergency access workflow exception lookup; (3) annual ISO internal audit evidence.",
              "z": "Table, Time chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Lenel/CCure HEC, HR CSV.\n• Ensure the following data sources are available: `pacs:access`; `hr_screening.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align badge_id keys; (2) emergency access workflow exception lookup; (3) annual ISO internal audit evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=pacs:access action=grant earliest=-90d\n| lookup hr_screening.csv badge_id OUTPUT screening_complete_date\n| eval screen_epoch=strptime(screening_complete_date,\"%Y-%m-%d\")\n| where isnull(screen_epoch) OR _time < screen_epoch\n```\n\nUnderstanding this SPL\n\n**Personnel Screening — Contractor Badge Activations Before Background Check Complete (A.6.1)** — Joins physical access grant events to HR screening completion dates to reduce insider risk from premature access.\n\nDocumented **Data sources**: `pacs:access`; `hr_screening.csv`. **App/TA** (typical add-on context): Lenel/CCure HEC, HR CSV. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: pacs:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=pacs:access, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **screen_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(screen_epoch) OR _time < screen_epoch` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Personnel Screening — Contractor Badge Activations Before Background Check Complete. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.6.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.6.1 is enforced — Splunk UC-22.6.17: Personnel Screening — Contractor Badge Activations Before Background Check Complete.",
                  "ea": "Saved search 'UC-22.6.17' running on pacs:access; hr_screening.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.18",
              "n": "Security Awareness Completion Rate by Department (A.6.3)",
              "c": "medium",
              "f": "advanced",
              "v": "Aggregates LMS training completions tagged as security awareness for workforce coverage metrics.",
              "t": "LMS CSV or Workday Learning HEC",
              "d": "`lms:completion`",
              "q": "index=hr sourcetype=lms:completion course_tag=security_awareness earliest=-365d\n| stats latest(status) as last_status by user_id, department\n| eval complete=if(last_status=\"passed\",1,0)\n| stats sum(complete) as passed, count as enrolled by department\n| eval pct=round(100*passed/enrolled,1)\n| where pct<95",
              "m": "(1) Refresh department from HR nightly; (2) escalate managers below threshold; (3) document exceptions.",
              "z": "Bar chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: LMS CSV or Workday Learning HEC.\n• Ensure the following data sources are available: `lms:completion`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Refresh department from HR nightly; (2) escalate managers below threshold; (3) document exceptions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=lms:completion course_tag=security_awareness earliest=-365d\n| stats latest(status) as last_status by user_id, department\n| eval complete=if(last_status=\"passed\",1,0)\n| stats sum(complete) as passed, count as enrolled by department\n| eval pct=round(100*passed/enrolled,1)\n| where pct<95\n```\n\nUnderstanding this SPL\n\n**Security Awareness Completion Rate by Department (A.6.3)** — Aggregates LMS training completions tagged as security awareness for workforce coverage metrics.\n\nDocumented **Data sources**: `lms:completion`. **App/TA** (typical add-on context): LMS CSV or Workday Learning HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: lms:completion. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=lms:completion, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user_id, department** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **complete** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by department** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct<95` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Security Awareness Completion Rate by Department. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.6.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.6.3 is enforced — Splunk UC-22.6.18: Security Awareness Completion Rate by Department.",
                  "ea": "Saved search 'UC-22.6.18' running on lms:completion, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.19",
              "n": "Disciplinary Process Triggers — HR Case Codes Correlated with Security Incidents (A.6.4)",
              "c": "high",
              "f": "advanced",
              "v": "Correlates anonymized HR case creation timestamps with security investigations to evidence disciplinary process alignment without exposing PII in dashboards.",
              "t": "ServiceNow HR case HEC (restricted index)",
              "d": "`snow:hr_case` (restricted)",
              "q": "index=hr_restricted sourcetype=snow:hr_case earliest=-180d category=conduct\n| eval day=round(_time/86400,0)\n| join type=inner day [| search index=itsm sourcetype=snow:incident category=security earliest=-180d | eval day=round(_time/86400,0) | stats dc(number) as sec_cases by day]\n| stats count by day",
              "m": "(1) Strict RBAC on index; (2) legal privacy review; (3) aggregate only; (4) document data minimization.",
              "z": "Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ServiceNow HR case HEC (restricted index).\n• Ensure the following data sources are available: `snow:hr_case` (restricted).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Strict RBAC on index; (2) legal privacy review; (3) aggregate only; (4) document data minimization.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr_restricted sourcetype=snow:hr_case earliest=-180d category=conduct\n| eval day=round(_time/86400,0)\n| join type=inner day [| search index=itsm sourcetype=snow:incident category=security earliest=-180d | eval day=round(_time/86400,0) | stats dc(number) as sec_cases by day]\n| stats count by day\n```\n\nUnderstanding this SPL\n\n**Disciplinary Process Triggers — HR Case Codes Correlated with Security Incidents (A.6.4)** — Correlates anonymized HR case creation timestamps with security investigations to evidence disciplinary process alignment without exposing PII in dashboards.\n\nDocumented **Data sources**: `snow:hr_case` (restricted). **App/TA** (typical add-on context): ServiceNow HR case HEC (restricted index). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr_restricted; **sourcetype**: snow:hr_case. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr_restricted, sourcetype=snow:hr_case, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **day** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `stats` rolls up events into metrics; results are split **by day** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Disciplinary Process Triggers — HR Case Codes Correlated with Security Incidents. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.6.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.6.4 is enforced — Splunk UC-22.6.19: Disciplinary Process Triggers — HR Case Codes Correlated with Security Incidents.",
                  "ea": "Saved search 'UC-22.6.19' running on snow:hr_case (restricted), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.20",
              "n": "Remote Working — VPN Split Tunnel and Sensitive App Access (A.6.7)",
              "c": "medium",
              "f": "intermediate",
              "v": "Detects VPN sessions advertising split tunnel with subsequent access to finance or HR apps from same user session.",
              "t": "Palo Alto GlobalProtect or AnyConnect logs",
              "d": "`globalprotect` or `vpn:session`",
              "q": "index=vpn sourcetype=globalprotect earliest=-24h\n| eval split=if(match(config,\"split\"),1,0)\n| stats max(split) as used_split by user, private_ip\n| join user [| search index=proxy sourcetype=bluecoat:proxy earliest=-24h | stats values(url) as urls by user | where match(urls,\"(?i)/finance/|/hr/\") ]\n| where used_split=1",
              "m": "(1) Normalize user identifiers; (2) tune URL patterns; (3) align with remote work policy; (4) user education.",
              "z": "Table, Map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Palo Alto GlobalProtect or AnyConnect logs.\n• Ensure the following data sources are available: `globalprotect` or `vpn:session`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize user identifiers; (2) tune URL patterns; (3) align with remote work policy; (4) user education.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=globalprotect earliest=-24h\n| eval split=if(match(config,\"split\"),1,0)\n| stats max(split) as used_split by user, private_ip\n| join user [| search index=proxy sourcetype=bluecoat:proxy earliest=-24h | stats values(url) as urls by user | where match(urls,\"(?i)/finance/|/hr/\") ]\n| where used_split=1\n```\n\nUnderstanding this SPL\n\n**Remote Working — VPN Split Tunnel and Sensitive App Access (A.6.7)** — Detects VPN sessions advertising split tunnel with subsequent access to finance or HR apps from same user session.\n\nDocumented **Data sources**: `globalprotect` or `vpn:session`. **App/TA** (typical add-on context): Palo Alto GlobalProtect or AnyConnect logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: globalprotect. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=globalprotect, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **split** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user, private_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where used_split=1` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Remote Working — VPN Split Tunnel and Sensitive App Access (A.6.7)** — Detects VPN sessions advertising split tunnel with subsequent access to finance or HR apps from same user session.\n\nDocumented **Data sources**: `globalprotect` or `vpn:session`. **App/TA** (typical add-on context): Palo Alto GlobalProtect or AnyConnect logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Map",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Remote Working — VPN Split Tunnel and Sensitive App Access. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_globalprotect",
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.6.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.6.7 is enforced — Splunk UC-22.6.20: Remote Working — VPN Split Tunnel and Sensitive App Access.",
                  "ea": "Saved search 'UC-22.6.20' running on globalprotect or vpn:session, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.21",
              "n": "Physical Perimeter — After-Hours Badge Swipes Without Matching Shift (A.7.1)",
              "c": "high",
              "f": "advanced",
              "v": "Flags access to secured areas outside scheduled shift windows for perimeter control monitoring.",
              "t": "PACS sourcetype",
              "d": "`pacs:access`",
              "q": "index=physical sourcetype=pacs:access earliest=-30d result=granted\n| lookup shift_schedule.csv badge_id day_of_week OUTPUT shift_start, shift_end\n| eval tod=strftime(_time,\"%H:%M\")\n| where tod < shift_start OR tod > shift_end\n| stats count by location, badge_id",
              "m": "(1) Populate shift schedules; (2) guard tour exceptions; (3) SOC physical security watchlist.",
              "z": "Heatmap, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: PACS sourcetype.\n• Ensure the following data sources are available: `pacs:access`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate shift schedules; (2) guard tour exceptions; (3) SOC physical security watchlist.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=pacs:access earliest=-30d result=granted\n| lookup shift_schedule.csv badge_id day_of_week OUTPUT shift_start, shift_end\n| eval tod=strftime(_time,\"%H:%M\")\n| where tod < shift_start OR tod > shift_end\n| stats count by location, badge_id\n```\n\nUnderstanding this SPL\n\n**Physical Perimeter — After-Hours Badge Swipes Without Matching Shift (A.7.1)** — Flags access to secured areas outside scheduled shift windows for perimeter control monitoring.\n\nDocumented **Data sources**: `pacs:access`. **App/TA** (typical add-on context): PACS sourcetype. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: pacs:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=pacs:access, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **tod** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where tod < shift_start OR tod > shift_end` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by location, badge_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Physical Perimeter — After-Hours Badge Swipes Without Matching Shift. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.7.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.7.1 is enforced — Splunk UC-22.6.21: Physical Perimeter — After-Hours Badge Swipes Without Matching Shift.",
                  "ea": "Saved search 'UC-22.6.21' running on pacs:access, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.22",
              "n": "Physical Security Monitoring — Camera NVR Offline or Disk Full Events (A.7.4)",
              "c": "medium",
              "f": "intermediate",
              "v": "Surveillance system health events show physical monitoring controls remain operational.",
              "t": "Genetec/Milestone syslog",
              "d": "`cctv:nvr`",
              "q": "index=physical sourcetype=cctv:nvr earliest=-7d (message=\"*offline*\" OR message=\"*disk full*\" OR severity>4)\n| stats count by site, device_id, message",
              "m": "(1) Map devices to zones; (2) alert on any offline >15m; (3) maintenance SLA tracking.",
              "z": "Table, Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Genetec/Milestone syslog.\n• Ensure the following data sources are available: `cctv:nvr`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map devices to zones; (2) alert on any offline >15m; (3) maintenance SLA tracking.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=cctv:nvr earliest=-7d (message=\"*offline*\" OR message=\"*disk full*\" OR severity>4)\n| stats count by site, device_id, message\n```\n\nUnderstanding this SPL\n\n**Physical Security Monitoring — Camera NVR Offline or Disk Full Events (A.7.4)** — Surveillance system health events show physical monitoring controls remain operational.\n\nDocumented **Data sources**: `cctv:nvr`. **App/TA** (typical add-on context): Genetec/Milestone syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: cctv:nvr. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=cctv:nvr, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by site, device_id, message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Physical Security Monitoring — Camera NVR Offline or Disk Full Events. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.7.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.7.4 is enforced — Splunk UC-22.6.22: Physical Security Monitoring — Camera NVR Offline or Disk Full Events.",
                  "ea": "Saved search 'UC-22.6.22' running on cctv:nvr, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.23",
              "n": "Removable Storage — USB Mount Events on Engineering Workstations (A.7.10)",
              "c": "high",
              "f": "advanced",
              "v": "Uses endpoint logs for removable media mounts in restricted VLANs to enforce media handling procedures.",
              "t": "Sysmon or CrowdStrike",
              "d": "Sysmon EventCode=22",
              "q": "index=endpoint sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational EventCode=22 earliest=-30d\n| lookup restricted_subnets.csv src_ip OUTPUT zone\n| where zone=\"engineering_secure\"\n| stats count by ComputerName, Image",
              "m": "(1) USB policy allowlist; (2) DLP correlation optional; (3) quarterly sample for auditors.",
              "z": "Table, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Sysmon or CrowdStrike.\n• Ensure the following data sources are available: Sysmon EventCode=22.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) USB policy allowlist; (2) DLP correlation optional; (3) quarterly sample for auditors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational EventCode=22 earliest=-30d\n| lookup restricted_subnets.csv src_ip OUTPUT zone\n| where zone=\"engineering_secure\"\n| stats count by ComputerName, Image\n```\n\nUnderstanding this SPL\n\n**Removable Storage — USB Mount Events on Engineering Workstations (A.7.10)** — Uses endpoint logs for removable media mounts in restricted VLANs to enforce media handling procedures.\n\nDocumented **Data sources**: Sysmon EventCode=22. **App/TA** (typical add-on context): Sysmon or CrowdStrike. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: XmlWinEventLog:Microsoft-Windows-Sysmon/Operational. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where zone=\"engineering_secure\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ComputerName, Image** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Removable Storage — USB Mount Events on Engineering Workstations (A.7.10)** — Uses endpoint logs for removable media mounts in restricted VLANs to enforce media handling procedures.\n\nDocumented **Data sources**: Sysmon EventCode=22. **App/TA** (typical add-on context): Sysmon or CrowdStrike. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Removable Storage — USB Mount Events on Engineering Workstations. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.7.10",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.7.10 is enforced — Splunk UC-22.6.23: Removable Storage — USB Mount Events on Engineering Workstations.",
                  "ea": "Saved search 'UC-22.6.23' running on Sysmon EventCode=22, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.24",
              "n": "Secure Disposal — Asset Decommission Wipe Confirmation Before CMDB Retire (A.7.14)",
              "c": "medium",
              "f": "advanced",
              "v": "Requires wipe or crypto-erase confirmation before CMDB retirement timestamp for media disposal chain of custody.",
              "t": "Intune, ServiceNow CMDB",
              "d": "`intune:device_action`; `snow:cmdb_ci`",
              "q": "index=itsm sourcetype=snow:cmdb_ci operational_status=Retired earliest=-180d\n| eval retire_epoch=strptime(sys_updated_on,\"%Y-%m-%d %H:%M:%S\")\n| join name [| search index=mdm sourcetype=intune:device_action action=wipe | stats latest(_time) as wipe_time by device_name | rename device_name as name]\n| where isnull(wipe_time) OR wipe_time > retire_epoch",
              "m": "(1) Align name keys; (2) legal hold exceptions; (3) asset disposal vendor attestation upload.",
              "z": "Table, Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Intune, ServiceNow CMDB.\n• Ensure the following data sources are available: `intune:device_action`; `snow:cmdb_ci`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align name keys; (2) legal hold exceptions; (3) asset disposal vendor attestation upload.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=snow:cmdb_ci operational_status=Retired earliest=-180d\n| eval retire_epoch=strptime(sys_updated_on,\"%Y-%m-%d %H:%M:%S\")\n| join name [| search index=mdm sourcetype=intune:device_action action=wipe | stats latest(_time) as wipe_time by device_name | rename device_name as name]\n| where isnull(wipe_time) OR wipe_time > retire_epoch\n```\n\nUnderstanding this SPL\n\n**Secure Disposal — Asset Decommission Wipe Confirmation Before CMDB Retire (A.7.14)** — Requires wipe or crypto-erase confirmation before CMDB retirement timestamp for media disposal chain of custody.\n\nDocumented **Data sources**: `intune:device_action`; `snow:cmdb_ci`. **App/TA** (typical add-on context): Intune, ServiceNow CMDB. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_ci. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=snow:cmdb_ci, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **retire_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(wipe_time) OR wipe_time > retire_epoch` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Secure Disposal — Asset Decommission Wipe Confirmation Before CMDB Retire. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.7.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.7.14 is enforced — Splunk UC-22.6.24: Secure Disposal — Asset Decommission Wipe Confirmation Before CMDB Retire.",
                  "ea": "Saved search 'UC-22.6.24' running on intune:device_action; snow:cmdb_ci, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.25",
              "n": "User Endpoint Patch Latency Beyond SLA (A.8.1)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks missing critical patches on user endpoints to evidence endpoint hardening for user devices.",
              "t": "Microsoft Intune or Tanium",
              "d": "`intune:device_compliance`",
              "q": "index=mdm sourcetype=intune:device_compliance earliest=-7d\n| where critical_updates_pending>0 AND days_since_contact<3\n| stats sum(critical_updates_pending) as pending by device_name, os",
              "m": "(1) Define SLA days; (2) auto-ticket top offenders; (3) executive patch scorecard.",
              "z": "Bar chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Microsoft Intune or Tanium.\n• Ensure the following data sources are available: `intune:device_compliance`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define SLA days; (2) auto-ticket top offenders; (3) executive patch scorecard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mdm sourcetype=intune:device_compliance earliest=-7d\n| where critical_updates_pending>0 AND days_since_contact<3\n| stats sum(critical_updates_pending) as pending by device_name, os\n```\n\nUnderstanding this SPL\n\n**User Endpoint Patch Latency Beyond SLA (A.8.1)** — Tracks missing critical patches on user endpoints to evidence endpoint hardening for user devices.\n\nDocumented **Data sources**: `intune:device_compliance`. **App/TA** (typical add-on context): Microsoft Intune or Tanium. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mdm; **sourcetype**: intune:device_compliance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mdm, sourcetype=intune:device_compliance, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where critical_updates_pending>0 AND days_since_contact<3` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by device_name, os** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**User Endpoint Patch Latency Beyond SLA (A.8.1)** — Tracks missing critical patches on user endpoints to evidence endpoint hardening for user devices.\n\nDocumented **Data sources**: `intune:device_compliance`. **App/TA** (typical add-on context): Microsoft Intune or Tanium. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for User Endpoint Patch Latency Beyond SLA. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.1 is enforced — Splunk UC-22.6.25: User Endpoint Patch Latency Beyond SLA.",
                  "ea": "Saved search 'UC-22.6.25' running on intune:device_compliance, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.26",
              "n": "Privileged Access — Sudo and RunAs Usage Outside PAM Session (A.8.2)",
              "c": "medium",
              "f": "advanced",
              "v": "Surfaces local privilege elevation without corresponding PAM session ID for privileged access monitoring.",
              "t": "CyberArk, Linux audit",
              "d": "`linux:audit` type=USER_CMD",
              "q": "index=os sourcetype=linux:audit type=USER_CMD earliest=-24h\n| where match(exe,\"(?i)sudo|su|pkexec\")\n| lookup pam_sessions.csv user, _time OUTPUT pam_session_id\n| where isnull(pam_session_id)",
              "m": "(1) Clock sync with PAM; (2) break-glass accounts excluded; (3) monthly privileged user review.",
              "z": "Table, Time chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CyberArk, Linux audit.\n• Ensure the following data sources are available: `linux:audit` type=USER_CMD.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Clock sync with PAM; (2) break-glass accounts excluded; (3) monthly privileged user review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux:audit type=USER_CMD earliest=-24h\n| where match(exe,\"(?i)sudo|su|pkexec\")\n| lookup pam_sessions.csv user, _time OUTPUT pam_session_id\n| where isnull(pam_session_id)\n```\n\nUnderstanding this SPL\n\n**Privileged Access — Sudo and RunAs Usage Outside PAM Session (A.8.2)** — Surfaces local privilege elevation without corresponding PAM session ID for privileged access monitoring.\n\nDocumented **Data sources**: `linux:audit` type=USER_CMD. **App/TA** (typical add-on context): CyberArk, Linux audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux:audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(exe,\"(?i)sudo|su|pkexec\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(pam_session_id)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privileged Access — Sudo and RunAs Usage Outside PAM Session (A.8.2)** — Surfaces local privilege elevation without corresponding PAM session ID for privileged access monitoring.\n\nDocumented **Data sources**: `linux:audit` type=USER_CMD. **App/TA** (typical add-on context): CyberArk, Linux audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Privileged Access — Sudo and RunAs Usage Outside PAM Session. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.2 (Privileged access rights) is enforced — Splunk UC-22.6.26: Privileged Access — Sudo and RunAs Usage Outside PAM Session.",
                  "ea": "Saved search 'UC-22.6.26' running on linux:audit type=USER_CMD, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2013",
                  "cl": "A.12.4.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.12.4.3 (Administrator and operator logs (2013)) is enforced — Splunk UC-22.6.26: Privileged Access — Sudo and RunAs Usage Outside PAM Session.",
                  "ea": "Saved search 'UC-22.6.26' running on linux:audit type=USER_CMD, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/54534.html"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.27",
              "n": "Information Access Restriction — SharePoint Anonymous Link Creation Blocked vs Attempted (A.8.3)",
              "c": "high",
              "f": "advanced",
              "v": "Compares anonymous sharing attempts to blocked operations for access restriction effectiveness.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055)",
              "d": "`ms:o365:management`",
              "q": "index=o365 sourcetype=ms:o365:management Workload=SharePoint earliest=-30d Operation=\"*AnonymousLink*\"\n| eval outcome=if(match(Operation,\"(?i)create\") AND !match(Operation,\"(?i)block\"), \"allowed_or_pending\", \"blocked_or_removed\")\n| stats count by outcome, UserPrincipalName",
              "m": "(1) Map Operations to your tenant vocabulary; (2) alert on unexpected allows; (3) DLP alignment.",
              "z": "Column chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055).\n• Ensure the following data sources are available: `ms:o365:management`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map Operations to your tenant vocabulary; (2) alert on unexpected allows; (3) DLP alignment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:management Workload=SharePoint earliest=-30d Operation=\"*AnonymousLink*\"\n| eval outcome=if(match(Operation,\"(?i)create\") AND !match(Operation,\"(?i)block\"), \"allowed_or_pending\", \"blocked_or_removed\")\n| stats count by outcome, UserPrincipalName\n```\n\nUnderstanding this SPL\n\n**Information Access Restriction — SharePoint Anonymous Link Creation Blocked vs Attempted (A.8.3)** — Compares anonymous sharing attempts to blocked operations for access restriction effectiveness.\n\nDocumented **Data sources**: `ms:o365:management`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:management, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **outcome** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by outcome, UserPrincipalName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Information Access Restriction — SharePoint Anonymous Link Creation Blocked vs Attempted. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.3 is enforced — Splunk UC-22.6.27: Information Access Restriction — SharePoint Anonymous Link Creation Blocked vs Attempted.",
                  "ea": "Saved search 'UC-22.6.27' running on ms:o365:management, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.28",
              "n": "Secure Authentication — Password Spray Pattern in Entra Sign-Ins (A.8.5)",
              "c": "medium",
              "f": "intermediate",
              "v": "Detects many distinct usernames from one IP with high failure ratio supporting secure authentication monitoring.",
              "t": "Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110)",
              "d": "`mscs:azure:signinlog`",
              "q": "index=azure sourcetype=mscs:azure:signinlog earliest=-24h\n| bin _time span=10m\n| stats dc(userPrincipalName) as users, count as attempts, sum(eval(if(status.errorCode!=0,1,0))) as fails by _time, IPAddress\n| eval fail_rate=fails/attempts\n| where users>20 AND fail_rate>0.7",
              "m": "(1) Tune thresholds; (2) integrate with risk-based CA; (3) block IP at perimeter optional.",
              "z": "Table, Scatter",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110).\n• Ensure the following data sources are available: `mscs:azure:signinlog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune thresholds; (2) integrate with risk-based CA; (3) block IP at perimeter optional.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=mscs:azure:signinlog earliest=-24h\n| bin _time span=10m\n| stats dc(userPrincipalName) as users, count as attempts, sum(eval(if(status.errorCode!=0,1,0))) as fails by _time, IPAddress\n| eval fail_rate=fails/attempts\n| where users>20 AND fail_rate>0.7\n```\n\nUnderstanding this SPL\n\n**Secure Authentication — Password Spray Pattern in Entra Sign-Ins (A.8.5)** — Detects many distinct usernames from one IP with high failure ratio supporting secure authentication monitoring.\n\nDocumented **Data sources**: `mscs:azure:signinlog`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:signinlog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=mscs:azure:signinlog, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, IPAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where users>20 AND fail_rate>0.7` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=10m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Secure Authentication — Password Spray Pattern in Entra Sign-Ins (A.8.5)** — Detects many distinct usernames from one IP with high failure ratio supporting secure authentication monitoring.\n\nDocumented **Data sources**: `mscs:azure:signinlog`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Scatter",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Secure Authentication — Password Spray Pattern in Entra Sign-Ins. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=10m | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.5 is enforced — Splunk UC-22.6.28: Secure Authentication — Password Spray Pattern in Entra Sign-Ins.",
                  "ea": "Saved search 'UC-22.6.28' running on mscs:azure:signinlog, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.29",
              "n": "Capacity Management — Disk Utilization Forecast Breach in 14 Days (A.8.6)",
              "c": "high",
              "f": "advanced",
              "v": "Projects disk full dates from growth rate for servers supporting ISMS scope to meet capacity management.",
              "t": "Splunk Infrastructure monitoring or `os:df`",
              "d": "`os:df` or `metrics`",
              "q": "index=os sourcetype=os:df (mount=\"/var\" OR mount=\"/data\") earliest=-30d\n| timechart span=1d latest(UsePercent) as pct by host\n| predict pct as future algorithm=LLP future_timespan=14",
              "m": "(1) Collect daily samples; (2) alert predicted pct>90; (3) capacity committee review.",
              "z": "Line chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Infrastructure monitoring or `os:df`.\n• Ensure the following data sources are available: `os:df` or `metrics`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Collect daily samples; (2) alert predicted pct>90; (3) capacity committee review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=os:df (mount=\"/var\" OR mount=\"/data\") earliest=-30d\n| timechart span=1d latest(UsePercent) as pct by host\n| predict pct as future algorithm=LLP future_timespan=14\n```\n\nUnderstanding this SPL\n\n**Capacity Management — Disk Utilization Forecast Breach in 14 Days (A.8.6)** — Projects disk full dates from growth rate for servers supporting ISMS scope to meet capacity management.\n\nDocumented **Data sources**: `os:df` or `metrics`. **App/TA** (typical add-on context): Splunk Infrastructure monitoring or `os:df`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: os:df. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=os:df, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by host** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Capacity Management — Disk Utilization Forecast Breach in 14 Days (A.8.6)**): predict pct as future algorithm=LLP future_timespan=14\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.Storage by Performance.host span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Capacity Management — Disk Utilization Forecast Breach in 14 Days (A.8.6)** — Projects disk full dates from growth rate for servers supporting ISMS scope to meet capacity management.\n\nDocumented **Data sources**: `os:df` or `metrics`. **App/TA** (typical add-on context): Splunk Infrastructure monitoring or `os:df`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.Storage` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Capacity Management — Disk Utilization Forecast Breach in 14 Days. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.Storage by Performance.host span=1d | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.6 is enforced — Splunk UC-22.6.29: Capacity Management — Disk Utilization Forecast Breach in 14 Days.",
                  "ea": "Saved search 'UC-22.6.29' running on os:df or metrics, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.30",
              "n": "Malware Protection — AV Engine Disabled or Out of Date Events (A.8.7)",
              "c": "medium",
              "f": "advanced",
              "v": "Flags endpoints reporting disabled protection or stale pattern files for malware control assurance.",
              "t": "Microsoft Defender, CrowdStrike",
              "d": "`defender:health` or `crowdstrike:sensor`",
              "q": "index=edr sourcetype=defender:health earliest=-1d\n| where protection_status!=\"Active\" OR signature_age_days>7\n| stats latest(protection_status) as status, max(signature_age_days) as sig_age by host",
              "m": "(1) Normalize health vocabulary; (2) auto-remediate via SOAR; (3) weekly compliance export.",
              "z": "Table, Map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Microsoft Defender, CrowdStrike.\n• Ensure the following data sources are available: `defender:health` or `crowdstrike:sensor`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize health vocabulary; (2) auto-remediate via SOAR; (3) weekly compliance export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=defender:health earliest=-1d\n| where protection_status!=\"Active\" OR signature_age_days>7\n| stats latest(protection_status) as status, max(signature_age_days) as sig_age by host\n```\n\nUnderstanding this SPL\n\n**Malware Protection — AV Engine Disabled or Out of Date Events (A.8.7)** — Flags endpoints reporting disabled protection or stale pattern files for malware control assurance.\n\nDocumented **Data sources**: `defender:health` or `crowdstrike:sensor`. **App/TA** (typical add-on context): Microsoft Defender, CrowdStrike. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: defender:health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=defender:health, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where protection_status!=\"Active\" OR signature_age_days>7` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Malware Protection — AV Engine Disabled or Out of Date Events (A.8.7)** — Flags endpoints reporting disabled protection or stale pattern files for malware control assurance.\n\nDocumented **Data sources**: `defender:health` or `crowdstrike:sensor`. **App/TA** (typical add-on context): Microsoft Defender, CrowdStrike. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Map",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Malware Protection — AV Engine Disabled or Out of Date Events. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest | sort - count",
              "e": [
                "crowdstrike",
                "defender"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.7 is enforced — Splunk UC-22.6.30: Malware Protection — AV Engine Disabled or Out of Date Events.",
                  "ea": "Saved search 'UC-22.6.30' running on defender:health or crowdstrike:sensor, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.31",
              "n": "Technical Vulnerability Management — Exploitable CVEs with Public PoC on In-Scope Hosts (A.8.8)",
              "c": "high",
              "f": "intermediate",
              "v": "Joins vulnerability scanner output to exploit availability tags to prioritize technical vulnerability response.",
              "t": "Tenable Add-On for Splunk (Splunkbase 4060)",
              "d": "`tenable:vuln`",
              "q": "index=vuln sourcetype=tenable:vuln earliest=-7d severity IN (\"Critical\",\"High\")\n| lookup exploit_available.csv cve OUTPUT has_public_poc\n| where has_public_poc=\"true\"\n| stats dc(host) as hosts, values(cve) as cves by plugin_name\n| sort - hosts",
              "m": "(1) Maintain exploit feed; (2) map to patch CAB; (3) document risk acceptance in SoA.",
              "z": "Table, Heatmap",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tenable Add-On for Splunk (Splunkbase 4060).\n• Ensure the following data sources are available: `tenable:vuln`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain exploit feed; (2) map to patch CAB; (3) document risk acceptance in SoA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=tenable:vuln earliest=-7d severity IN (\"Critical\",\"High\")\n| lookup exploit_available.csv cve OUTPUT has_public_poc\n| where has_public_poc=\"true\"\n| stats dc(host) as hosts, values(cve) as cves by plugin_name\n| sort - hosts\n```\n\nUnderstanding this SPL\n\n**Technical Vulnerability Management — Exploitable CVEs with Public PoC on In-Scope Hosts (A.8.8)** — Joins vulnerability scanner output to exploit availability tags to prioritize technical vulnerability response.\n\nDocumented **Data sources**: `tenable:vuln`. **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: tenable:vuln. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=tenable:vuln, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where has_public_poc=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by plugin_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Technical Vulnerability Management — Exploitable CVEs with Public PoC on In-Scope Hosts (A.8.8)** — Joins vulnerability scanner output to exploit availability tags to prioritize technical vulnerability response.\n\nDocumented **Data sources**: `tenable:vuln`. **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Heatmap",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Technical Vulnerability Management — Exploitable CVEs with Public PoC on In-Scope Hosts. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.8",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.8 is enforced — Splunk UC-22.6.31: Technical Vulnerability Management — Exploitable CVEs with Public PoC on In-Scope Hosts.",
                  "ea": "Saved search 'UC-22.6.31' running on tenable:vuln, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.32",
              "n": "Configuration Management — Drift on CIS Hardening Parameters for Web Tier (A.8.9)",
              "c": "medium",
              "f": "advanced",
              "v": "Compares daily config snapshots to golden baseline for web servers in scope.",
              "t": "Chef Inspec or Ansible HEC",
              "d": "`inspec:report`",
              "q": "index=config sourcetype=inspec:report control_profile=web_cis earliest=-7d\n| where status=\"failed\"\n| stats count as failed_controls by host, control_id\n| where failed_controls>0",
              "m": "(1) Version baselines; (2) emergency change exceptions; (3) quarterly config review.",
              "z": "Heatmap, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Chef Inspec or Ansible HEC.\n• Ensure the following data sources are available: `inspec:report`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Version baselines; (2) emergency change exceptions; (3) quarterly config review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=config sourcetype=inspec:report control_profile=web_cis earliest=-7d\n| where status=\"failed\"\n| stats count as failed_controls by host, control_id\n| where failed_controls>0\n```\n\nUnderstanding this SPL\n\n**Configuration Management — Drift on CIS Hardening Parameters for Web Tier (A.8.9)** — Compares daily config snapshots to golden baseline for web servers in scope.\n\nDocumented **Data sources**: `inspec:report`. **App/TA** (typical add-on context): Chef Inspec or Ansible HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: config; **sourcetype**: inspec:report. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=config, sourcetype=inspec:report, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where status=\"failed\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, control_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failed_controls>0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Configuration Management — Drift on CIS Hardening Parameters for Web Tier (A.8.9)** — Compares daily config snapshots to golden baseline for web servers in scope.\n\nDocumented **Data sources**: `inspec:report`. **App/TA** (typical add-on context): Chef Inspec or Ansible HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Configuration Management — Drift on CIS Hardening Parameters for Web Tier. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count",
              "e": [
                "ansible"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.9",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.9 (Configuration management (2022 new)) is enforced — Splunk UC-22.6.32: Configuration Management — Drift on CIS Hardening Parameters for Web Tier.",
                  "ea": "Saved search 'UC-22.6.32' running on inspec:report, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.33",
              "n": "Information Deletion — S3 Object Delete Storm Outside Retention Workflow (A.8.10)",
              "c": "high",
              "f": "advanced",
              "v": "Detects mass delete operations not tagged with approved legal-hold workflow ID for controlled deletion.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876)",
              "d": "`aws:cloudtrail` eventName=DeleteObject",
              "q": "index=aws sourcetype=aws:cloudtrail eventName=DeleteObject earliest=-24h\n| stats count as deletes by userIdentity.arn, bucketName\n| where deletes>1000\n| lookup approved_bulk_delete.csv userIdentity.arn, bucketName OUTPUT approved\n| where isnull(approved)",
              "m": "(1) Tag lifecycle jobs in lookup; (2) correlate with backup jobs; (3) insider risk review.",
              "z": "Table, Time chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876).\n• Ensure the following data sources are available: `aws:cloudtrail` eventName=DeleteObject.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag lifecycle jobs in lookup; (2) correlate with backup jobs; (3) insider risk review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=aws:cloudtrail eventName=DeleteObject earliest=-24h\n| stats count as deletes by userIdentity.arn, bucketName\n| where deletes>1000\n| lookup approved_bulk_delete.csv userIdentity.arn, bucketName OUTPUT approved\n| where isnull(approved)\n```\n\nUnderstanding this SPL\n\n**Information Deletion — S3 Object Delete Storm Outside Retention Workflow (A.8.10)** — Detects mass delete operations not tagged with approved legal-hold workflow ID for controlled deletion.\n\nDocumented **Data sources**: `aws:cloudtrail` eventName=DeleteObject. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=aws:cloudtrail, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn, bucketName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where deletes>1000` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user, All_Changes.object | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Information Deletion — S3 Object Delete Storm Outside Retention Workflow (A.8.10)** — Detects mass delete operations not tagged with approved legal-hold workflow ID for controlled deletion.\n\nDocumented **Data sources**: `aws:cloudtrail` eventName=DeleteObject. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Information Deletion — S3 Object Delete Storm Outside Retention Workflow. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user, All_Changes.object | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.10",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.10 is enforced — Splunk UC-22.6.33: Information Deletion — S3 Object Delete Storm Outside Retention Workflow.",
                  "ea": "Saved search 'UC-22.6.33' running on aws:cloudtrail eventName=DeleteObject, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.34",
              "n": "Data Masking — Sampled PII Pattern Hits in Non-Production Test Indexes (A.8.11)",
              "c": "medium",
              "f": "intermediate",
              "v": "Scans a controlled non-production sample for raw SSN-like patterns to validate masking and access restrictions before production promotion.",
              "t": "Splunk Enterprise search",
              "d": "`index=pii_validation`",
              "q": "index=pii_validation sourcetype=* earliest=-24h\n| head 50000\n| regex _raw=\"(?i)\\b\\d{3}-\\d{2}-\\d{4}\\b\"\n| stats count as ssn_like_hits by sourcetype, host\n| where ssn_like_hits>0",
              "m": "(1) Use synthetic data only in lab; (2) expand regex to local PII taxonomy; (3) document masking standard.",
              "z": "Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise search.\n• Ensure the following data sources are available: `index=pii_validation`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Use synthetic data only in lab; (2) expand regex to local PII taxonomy; (3) document masking standard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pii_validation sourcetype=* earliest=-24h\n| head 50000\n| regex _raw=\"(?i)\\b\\d{3}-\\d{2}-\\d{4}\\b\"\n| stats count as ssn_like_hits by sourcetype, host\n| where ssn_like_hits>0\n```\n\nUnderstanding this SPL\n\n**Data Masking — Sampled PII Pattern Hits in Non-Production Test Indexes (A.8.11)** — Scans a controlled non-production sample for raw SSN-like patterns to validate masking and access restrictions before production promotion.\n\nDocumented **Data sources**: `index=pii_validation`. **App/TA** (typical add-on context): Splunk Enterprise search. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pii_validation; **sourcetype**: *. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pii_validation, sourcetype=*, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Limits the number of rows with `head`.\n• Filters rows matching a pattern with `regex`.\n• `stats` rolls up events into metrics; results are split **by sourcetype, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ssn_like_hits>0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Data Masking — Sampled PII Pattern Hits in Non-Production Test Indexes. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR",
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.11",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.11 is enforced — Splunk UC-22.6.34: Data Masking — Sampled PII Pattern Hits in Non-Production Test Indexes.",
                  "ea": "Saved search 'UC-22.6.34' running on index=pii_validation, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.35",
              "n": "Data Leakage Prevention — High Volume Print to PDF on HR Workstations (A.8.12)",
              "c": "high",
              "f": "advanced",
              "v": "Monitors print spooler or DLP events for unusual PDF creation volume from HR VLAN.",
              "t": "Windows Print Service",
              "d": "`WinEventLog:Microsoft-Windows-PrintService/Operational`",
              "q": "index=windows sourcetype=WinEventLog:Microsoft-Windows-PrintService/Operational EventCode=307 earliest=-7d\n| lookup hr_workstation_subnets.csv src_ip OUTPUT vlan\n| where vlan=\"HR\"\n| stats count as prints by ComputerName, UserName\n| where prints>50",
              "m": "(1) Align with acceptable use policy; (2) correlate with DLP; (3) privacy-safe investigation.",
              "z": "Bar chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows Print Service.\n• Ensure the following data sources are available: `WinEventLog:Microsoft-Windows-PrintService/Operational`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align with acceptable use policy; (2) correlate with DLP; (3) privacy-safe investigation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=WinEventLog:Microsoft-Windows-PrintService/Operational EventCode=307 earliest=-7d\n| lookup hr_workstation_subnets.csv src_ip OUTPUT vlan\n| where vlan=\"HR\"\n| stats count as prints by ComputerName, UserName\n| where prints>50\n```\n\nUnderstanding this SPL\n\n**Data Leakage Prevention — High Volume Print to PDF on HR Workstations (A.8.12)** — Monitors print spooler or DLP events for unusual PDF creation volume from HR VLAN.\n\nDocumented **Data sources**: `WinEventLog:Microsoft-Windows-PrintService/Operational`. **App/TA** (typical add-on context): Windows Print Service. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Microsoft-Windows-PrintService/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=WinEventLog:Microsoft-Windows-PrintService/Operational, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where vlan=\"HR\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ComputerName, UserName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where prints>50` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Data Leakage Prevention — High Volume Print to PDF on HR Workstations (A.8.12)** — Monitors print spooler or DLP events for unusual PDF creation volume from HR VLAN.\n\nDocumented **Data sources**: `WinEventLog:Microsoft-Windows-PrintService/Operational`. **App/TA** (typical add-on context): Windows Print Service. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Data Leakage Prevention — High Volume Print to PDF on HR Workstations. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.12",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.12 (Data leakage prevention) is enforced — Splunk UC-22.6.35: Data Leakage Prevention — High Volume Print to PDF on HR Workstations.",
                  "ea": "Saved search 'UC-22.6.35' running on WinEventLog:Microsoft-Windows-PrintService/Operational, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.36",
              "n": "Information Backup — Immutable Backup Bucket Policy Change Attempts (A.8.13)",
              "c": "medium",
              "f": "advanced",
              "v": "Alerts on API calls that would weaken object-lock or immutability on backup storage.",
              "t": "AWS CloudTrail",
              "d": "`aws:cloudtrail`",
              "q": "index=aws sourcetype=aws:cloudtrail earliest=-30d (eventName=\"PutBucketObjectLockConfiguration\" OR eventName=\"DeleteObjectLockConfiguration\" OR eventName=\"PutLifecycleConfiguration\")\n| stats count by userIdentity.arn, eventName\n| sort - count",
              "m": "(1) Scope backup accounts; (2) SOC sev-1 playbook; (3) map to backup integrity SoA control.",
              "z": "Table, Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: AWS CloudTrail.\n• Ensure the following data sources are available: `aws:cloudtrail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Scope backup accounts; (2) SOC sev-1 playbook; (3) map to backup integrity SoA control.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=aws:cloudtrail earliest=-30d (eventName=\"PutBucketObjectLockConfiguration\" OR eventName=\"DeleteObjectLockConfiguration\" OR eventName=\"PutLifecycleConfiguration\")\n| stats count by userIdentity.arn, eventName\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Information Backup — Immutable Backup Bucket Policy Change Attempts (A.8.13)** — Alerts on API calls that would weaken object-lock or immutability on backup storage.\n\nDocumented **Data sources**: `aws:cloudtrail`. **App/TA** (typical add-on context): AWS CloudTrail. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=aws:cloudtrail, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn, eventName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Information Backup — Immutable Backup Bucket Policy Change Attempts (A.8.13)** — Alerts on API calls that would weaken object-lock or immutability on backup storage.\n\nDocumented **Data sources**: `aws:cloudtrail`. **App/TA** (typical add-on context): AWS CloudTrail. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Information Backup — Immutable Backup Bucket Policy Change Attempts. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.13",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.13 is enforced — Splunk UC-22.6.36: Information Backup — Immutable Backup Bucket Policy Change Attempts.",
                  "ea": "Saved search 'UC-22.6.36' running on aws:cloudtrail, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.37",
              "n": "Redundancy of IT — Cluster Node Loss Events for Critical Databases (A.8.14)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks loss of quorum or node eviction in database clusters supporting redundancy requirements.",
              "t": "SQL Server AG logs",
              "d": "`mssql:ag`",
              "q": "index=database sourcetype=mssql:ag earliest=-90d\n| where match(message,\"(?i)evict|offline|lost.quorum\")\n| stats count by ag_name, replica_server_name",
              "m": "(1) Map AG to business service; (2) ITSI optional correlation; (3) DR failover evidence.",
              "z": "Time chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SQL Server AG logs.\n• Ensure the following data sources are available: `mssql:ag`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map AG to business service; (2) ITSI optional correlation; (3) DR failover evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=mssql:ag earliest=-90d\n| where match(message,\"(?i)evict|offline|lost.quorum\")\n| stats count by ag_name, replica_server_name\n```\n\nUnderstanding this SPL\n\n**Redundancy of IT — Cluster Node Loss Events for Critical Databases (A.8.14)** — Tracks loss of quorum or node eviction in database clusters supporting redundancy requirements.\n\nDocumented **Data sources**: `mssql:ag`. **App/TA** (typical add-on context): SQL Server AG logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: mssql:ag. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=mssql:ag, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(message,\"(?i)evict|offline|lost.quorum\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ag_name, replica_server_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Redundancy of IT — Cluster Node Loss Events for Critical Databases. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mssql"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.14 is enforced — Splunk UC-22.6.37: Redundancy of IT — Cluster Node Loss Events for Critical Databases.",
                  "ea": "Saved search 'UC-22.6.37' running on mssql:ag, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.38",
              "n": "Logging — Forwarder Stopped or CrashLoop on Security-Relevant Hosts (A.8.15)",
              "c": "medium",
              "f": "advanced",
              "v": "Detects universal forwarder or agent stopped states on in-scope systems so logging obligations stay met.",
              "t": "Splunk `_internal`",
              "d": "`splunkd` logs",
              "q": "index=_internal source=*splunkd.log* earliest=-24h\n| rex field=_raw \"(?i)Stopping all processes|crashed\"\n| stats count by host\n| lookup security_tier_hosts.csv host OUTPUT tier\n| where tier IN (\"tier0\",\"tier1\")",
              "m": "(1) Maintain tier lookup; (2) auto-restart playbook; (3) ISO evidence for logging availability.",
              "z": "Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk `_internal`.\n• Ensure the following data sources are available: `splunkd` logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain tier lookup; (2) auto-restart playbook; (3) ISO evidence for logging availability.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal source=*splunkd.log* earliest=-24h\n| rex field=_raw \"(?i)Stopping all processes|crashed\"\n| stats count by host\n| lookup security_tier_hosts.csv host OUTPUT tier\n| where tier IN (\"tier0\",\"tier1\")\n```\n\nUnderstanding this SPL\n\n**Logging — Forwarder Stopped or CrashLoop on Security-Relevant Hosts (A.8.15)** — Detects universal forwarder or agent stopped states on in-scope systems so logging obligations stay met.\n\nDocumented **Data sources**: `splunkd` logs. **App/TA** (typical add-on context): Splunk `_internal`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where tier IN (\"tier0\",\"tier1\")` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Logging — Forwarder Stopped or CrashLoop on Security-Relevant Hosts. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.15",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.15 (Logging) is enforced — Splunk UC-22.6.38: Logging — Forwarder Stopped or CrashLoop on Security-Relevant Hosts.",
                  "ea": "Saved search 'UC-22.6.38' running on splunkd logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2013",
                  "cl": "A.12.4.2",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of ISO/IEC 27001 A.12.4.2 (Protection of log information (2013)) — Splunk UC-22.6.38: Logging — Forwarder Stopped or CrashLoop on Security-Relevant Hosts.",
                  "ea": "Saved search 'UC-22.6.38' running on splunkd logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/54534.html"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.39",
              "n": "Monitoring Activities — SOC Queue Depth vs On-Shift Analyst Headcount (A.8.16)",
              "c": "high",
              "f": "intermediate",
              "v": "Compares open notable or case backlog to scheduled SOC roster to evidence monitoring resourcing adequacy.",
              "t": "Splunk Enterprise Security, workforce CSV",
              "d": "`notable` macro; `soc_shift_roster.csv`",
              "q": "search `notable` status!=closed earliest=-1d\n| stats dc(rule_id) as open_notables\n| append [| inputlookup soc_shift_roster.csv | where date=strftime(now(),\"%Y-%m-%d\") | stats sum(analysts_on_shift) as headcount]\n| stats max(open_notables) as on max(headcount) as hc\n| eval overload=if(on>hc*25,1,0)",
              "m": "(1) Calibrate ratio to your SOC model; (2) management review; (3) hiring plan input.",
              "z": "Single value, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security, workforce CSV.\n• Ensure the following data sources are available: `notable` macro; `soc_shift_roster.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Calibrate ratio to your SOC model; (2) management review; (3) hiring plan input.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsearch `notable` status!=closed earliest=-1d\n| stats dc(rule_id) as open_notables\n| append [| inputlookup soc_shift_roster.csv | where date=strftime(now(),\"%Y-%m-%d\") | stats sum(analysts_on_shift) as headcount]\n| stats max(open_notables) as on max(headcount) as hc\n| eval overload=if(on>hc*25,1,0)\n```\n\nUnderstanding this SPL\n\n**Monitoring Activities — SOC Queue Depth vs On-Shift Analyst Headcount (A.8.16)** — Compares open notable or case backlog to scheduled SOC roster to evidence monitoring resourcing adequacy.\n\nDocumented **Data sources**: `notable` macro; `soc_shift_roster.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security, workforce CSV. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Appends rows from a subsearch with `append`.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **overload** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value, Table",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Monitoring Activities — SOC Queue Depth vs On-Shift Analyst Headcount. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.16",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.16 (Monitoring activities) is enforced — Splunk UC-22.6.39: Monitoring Activities — SOC Queue Depth vs On-Shift Analyst Headcount.",
                  "ea": "Saved search 'UC-22.6.39' running on notable macro; soc_shift_roster.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2013",
                  "cl": "A.16.1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 A.16.1.2 (Reporting information security events (2013)) is enforced — Splunk UC-22.6.39: Monitoring Activities — SOC Queue Depth vs On-Shift Analyst Headcount.",
                  "ea": "Saved search 'UC-22.6.39' running on notable macro; soc_shift_roster.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/54534.html"
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.40",
              "n": "Clock Synchronization — Kerberos Clock Skew Related Authentication Failures (A.8.17)",
              "c": "medium",
              "f": "advanced",
              "v": "Surfaces authentication failures that indicate clock skew risk for evidence integrity and Kerberos stability.",
              "t": "Windows Security",
              "d": "`WinEventLog:Security`",
              "q": "index=windows sourcetype=WinEventLog:Security EventCode=4776 earliest=-24h\n| where match(Status,\"0xC000006A\") OR match(Status,\"0xC000052\") OR match(_raw,\"(?i)time|skew|clock\")\n| stats count by ComputerName, Status",
              "m": "(1) Tune Status codes for your domain; (2) NTP remediation; (3) forensic timestamp trust.",
              "z": "Time chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows Security.\n• Ensure the following data sources are available: `WinEventLog:Security`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune Status codes for your domain; (2) NTP remediation; (3) forensic timestamp trust.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=WinEventLog:Security EventCode=4776 earliest=-24h\n| where match(Status,\"0xC000006A\") OR match(Status,\"0xC000052\") OR match(_raw,\"(?i)time|skew|clock\")\n| stats count by ComputerName, Status\n```\n\nUnderstanding this SPL\n\n**Clock Synchronization — Kerberos Clock Skew Related Authentication Failures (A.8.17)** — Surfaces authentication failures that indicate clock skew risk for evidence integrity and Kerberos stability.\n\nDocumented **Data sources**: `WinEventLog:Security`. **App/TA** (typical add-on context): Windows Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=WinEventLog:Security, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(Status,\"0xC000006A\") OR match(Status,\"0xC000052\") OR match(_raw,\"(?i)time|skew|clock\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ComputerName, Status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Clock Synchronization — Kerberos Clock Skew Related Authentication Failures (A.8.17)** — Surfaces authentication failures that indicate clock skew risk for evidence integrity and Kerberos stability.\n\nDocumented **Data sources**: `WinEventLog:Security`. **App/TA** (typical add-on context): Windows Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Clock Synchronization — Kerberos Clock Skew Related Authentication Failures. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.17",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.17 (Clock synchronisation) is enforced — Splunk UC-22.6.40: Clock Synchronization — Kerberos Clock Skew Related Authentication Failures.",
                  "ea": "Saved search 'UC-22.6.40' running on WinEventLog:Security, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.41",
              "n": "Network Security — East-West Firewall Deny Spike on Server VLAN (A.8.20)",
              "c": "high",
              "f": "advanced",
              "v": "Trends east-west denies between server tiers for unexpected lateral movement or misconfiguration.",
              "t": "Palo Alto Networks TA (Splunkbase 2757)",
              "d": "`pan:traffic`",
              "q": "index=network sourcetype=pan:traffic action=deny earliest=-24h src_zone=servers dest_zone=servers\n| timechart span=1h count",
              "m": "(1) Baseline business hours; (2) alert vs anomaly; (3) microseg project backlog.",
              "z": "Time chart, Column chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Palo Alto Networks TA](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Palo Alto Networks TA (Splunkbase 2757).\n• Ensure the following data sources are available: `pan:traffic`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Baseline business hours; (2) alert vs anomaly; (3) microseg project backlog.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=pan:traffic action=deny earliest=-24h src_zone=servers dest_zone=servers\n| timechart span=1h count\n```\n\nUnderstanding this SPL\n\n**Network Security — East-West Firewall Deny Spike on Server VLAN (A.8.20)** — Trends east-west denies between server tiers for unexpected lateral movement or misconfiguration.\n\nDocumented **Data sources**: `pan:traffic`. **App/TA** (typical add-on context): Palo Alto Networks TA (Splunkbase 2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=pan:traffic, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network Security — East-West Firewall Deny Spike on Server VLAN (A.8.20)** — Trends east-west denies between server tiers for unexpected lateral movement or misconfiguration.\n\nDocumented **Data sources**: `pan:traffic`. **App/TA** (typical add-on context): Palo Alto Networks TA (Splunkbase 2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Time chart, Column chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Network Security — East-West Firewall Deny Spike on Server VLAN. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1h | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.20",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.20 is enforced — Splunk UC-22.6.41: Network Security — East-West Firewall Deny Spike on Server VLAN.",
                  "ea": "Saved search 'UC-22.6.41' running on pan:traffic, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.42",
              "n": "Web and Email Filtering — Denied High-Risk Categories Toward Young Domains (A.8.21 / A.8.23)",
              "c": "medium",
              "f": "intermediate",
              "v": "Combines proxy and secure email gateway style denies for risky categories and young domains to cover web and messaging filtering controls.",
              "t": "Zscaler or Proofpoint HEC",
              "d": "`zscaler:web` or `proofpoint:message`",
              "q": "index=proxy sourcetype=zscaler:web action=blocked earliest=-24h\n| lookup threat_intel_domain domain as hostname OUTPUT first_seen\n| eval young=if((now()-first_seen)/86400<30,1,0)\n| where young=1 AND match(category,\"(?i)malware|phishing|newly.registered\")\n| stats count by hostname, user",
              "m": "(1) Feed consistent category taxonomy; (2) user awareness tie-in; (3) monthly filtering effectiveness review.",
              "z": "Table, Map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Zscaler or Proofpoint HEC.\n• Ensure the following data sources are available: `zscaler:web` or `proofpoint:message`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Feed consistent category taxonomy; (2) user awareness tie-in; (3) monthly filtering effectiveness review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=zscaler:web action=blocked earliest=-24h\n| lookup threat_intel_domain domain as hostname OUTPUT first_seen\n| eval young=if((now()-first_seen)/86400<30,1,0)\n| where young=1 AND match(category,\"(?i)malware|phishing|newly.registered\")\n| stats count by hostname, user\n```\n\nUnderstanding this SPL\n\n**Web and Email Filtering — Denied High-Risk Categories Toward Young Domains (A.8.21 / A.8.23)** — Combines proxy and secure email gateway style denies for risky categories and young domains to cover web and messaging filtering controls.\n\nDocumented **Data sources**: `zscaler:web` or `proofpoint:message`. **App/TA** (typical add-on context): Zscaler or Proofpoint HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: zscaler:web. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=zscaler:web, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **young** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where young=1 AND match(category,\"(?i)malware|phishing|newly.registered\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by hostname, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Web and Email Filtering — Denied High-Risk Categories Toward Young Domains (A.8.21 / A.8.23)** — Combines proxy and secure email gateway style denies for risky categories and young domains to cover web and messaging filtering controls.\n\nDocumented **Data sources**: `zscaler:web` or `proofpoint:message`. **App/TA** (typical add-on context): Zscaler or Proofpoint HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Map",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Web and Email Filtering — Denied High-Risk Categories Toward Young Domains. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [
                "proofpoint",
                "zscaler"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.21",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.21 is enforced — Splunk UC-22.6.42: Web and Email Filtering — Denied High-Risk Categories Toward Young Domains.",
                  "ea": "Saved search 'UC-22.6.42' running on zscaler:web or proofpoint:message, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.23",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.23 (Web filtering (2022 new)) is enforced — Splunk UC-22.6.42: Web and Email Filtering — Denied High-Risk Categories Toward Young Domains.",
                  "ea": "Saved search 'UC-22.6.42' running on zscaler:web or proofpoint:message, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.43",
              "n": "Network Segmentation — Cross-VLAN RDP Allowed by Mis-Tuned ACL (A.8.22)",
              "c": "high",
              "f": "advanced",
              "v": "Flags RDP sessions crossing VLAN boundaries that should be blocked by segmentation policy.",
              "t": "Palo Alto Networks TA",
              "d": "`pan:traffic`",
              "q": "index=network sourcetype=pan:traffic earliest=-24h app=rdp action=allow\n| where src_zone!=dest_zone AND match(dest_zone,\"(?i)pci|prod_db\")\n| stats count by src, dest, rule",
              "m": "(1) Validate zone tags; (2) emergency ACL review; (3) document compensating jump hosts.",
              "z": "Table, Heatmap",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Palo Alto Networks TA.\n• Ensure the following data sources are available: `pan:traffic`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Validate zone tags; (2) emergency ACL review; (3) document compensating jump hosts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=pan:traffic earliest=-24h app=rdp action=allow\n| where src_zone!=dest_zone AND match(dest_zone,\"(?i)pci|prod_db\")\n| stats count by src, dest, rule\n```\n\nUnderstanding this SPL\n\n**Network Segmentation — Cross-VLAN RDP Allowed by Mis-Tuned ACL (A.8.22)** — Flags RDP sessions crossing VLAN boundaries that should be blocked by segmentation policy.\n\nDocumented **Data sources**: `pan:traffic`. **App/TA** (typical add-on context): Palo Alto Networks TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=pan:traffic, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where src_zone!=dest_zone AND match(dest_zone,\"(?i)pci|prod_db\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, rule** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network Segmentation — Cross-VLAN RDP Allowed by Mis-Tuned ACL (A.8.22)** — Flags RDP sessions crossing VLAN boundaries that should be blocked by segmentation policy.\n\nDocumented **Data sources**: `pan:traffic`. **App/TA** (typical add-on context): Palo Alto Networks TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Heatmap",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Network Segmentation — Cross-VLAN RDP Allowed by Mis-Tuned ACL. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.22",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.22 is enforced — Splunk UC-22.6.43: Network Segmentation — Cross-VLAN RDP Allowed by Mis-Tuned ACL.",
                  "ea": "Saved search 'UC-22.6.43' running on pan:traffic, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.44",
              "n": "Use of Cryptography — Weak Cipher Suites Negotiated on Public Load Balancer (A.8.24)",
              "c": "medium",
              "f": "advanced",
              "v": "Parses TLS handshake logs for deprecated ciphers on internet-facing VIPs supporting cryptography policy.",
              "t": "F5 LTM logs",
              "d": "`f5:ltm`",
              "q": "index=lb sourcetype=f5:ltm earliest=-7d\n| where clientssl_cipher IN (\"RC4-SHA\",\"DES-CBC3-SHA\") OR match(clientssl_cipher,\"TLS_RSA\")\n| stats count by virtual_name, clientssl_cipher",
              "m": "(1) Align with crypto standard; (2) phased deprecation plan; (3) external scan cross-check.",
              "z": "Bar chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: F5 LTM logs.\n• Ensure the following data sources are available: `f5:ltm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align with crypto standard; (2) phased deprecation plan; (3) external scan cross-check.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=lb sourcetype=f5:ltm earliest=-7d\n| where clientssl_cipher IN (\"RC4-SHA\",\"DES-CBC3-SHA\") OR match(clientssl_cipher,\"TLS_RSA\")\n| stats count by virtual_name, clientssl_cipher\n```\n\nUnderstanding this SPL\n\n**Use of Cryptography — Weak Cipher Suites Negotiated on Public Load Balancer (A.8.24)** — Parses TLS handshake logs for deprecated ciphers on internet-facing VIPs supporting cryptography policy.\n\nDocumented **Data sources**: `f5:ltm`. **App/TA** (typical add-on context): F5 LTM logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: lb; **sourcetype**: f5:ltm. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=lb, sourcetype=f5:ltm, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where clientssl_cipher IN (\"RC4-SHA\",\"DES-CBC3-SHA\") OR match(clientssl_cipher,\"TLS_RSA\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by virtual_name, clientssl_cipher** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Use of Cryptography — Weak Cipher Suites Negotiated on Public Load Balancer (A.8.24)** — Parses TLS handshake logs for deprecated ciphers on internet-facing VIPs supporting cryptography policy.\n\nDocumented **Data sources**: `f5:ltm`. **App/TA** (typical add-on context): F5 LTM logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Use of Cryptography — Weak Cipher Suites Negotiated on Public Load Balancer. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "f5"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.24",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.24 is enforced — Splunk UC-22.6.44: Use of Cryptography — Weak Cipher Suites Negotiated on Public Load Balancer.",
                  "ea": "Saved search 'UC-22.6.44' running on f5:ltm, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.45",
              "n": "Secure SDLC, App Security Requirements, and Secure Coding Pipeline Gates (A.8.25 / A.8.26 / A.8.28)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks static analysis, dependency scan, and branch protection outcomes in one pipeline view for development security controls.",
              "t": "GitHub Advanced Security, Snyk HEC",
              "d": "`github:code_scanning`, `snyk:issues`",
              "q": "index=cicd (sourcetype=github:code_scanning OR sourcetype=snyk:issues) earliest=-30d\n| eval failed=if(severity IN (\"critical\",\"high\") AND state!=\"fixed\",1,0)\n| stats sum(failed) as open_findings by repo\n| where open_findings>0",
              "m": "(1) Gate release branches; (2) map findings to secure coding training; (3) SoA linkage for outsourced dev.",
              "z": "Table, Column chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub Advanced Security, Snyk HEC.\n• Ensure the following data sources are available: `github:code_scanning`, `snyk:issues`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Gate release branches; (2) map findings to secure coding training; (3) SoA linkage for outsourced dev.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd (sourcetype=github:code_scanning OR sourcetype=snyk:issues) earliest=-30d\n| eval failed=if(severity IN (\"critical\",\"high\") AND state!=\"fixed\",1,0)\n| stats sum(failed) as open_findings by repo\n| where open_findings>0\n```\n\nUnderstanding this SPL\n\n**Secure SDLC, App Security Requirements, and Secure Coding Pipeline Gates (A.8.25 / A.8.26 / A.8.28)** — Tracks static analysis, dependency scan, and branch protection outcomes in one pipeline view for development security controls.\n\nDocumented **Data sources**: `github:code_scanning`, `snyk:issues`. **App/TA** (typical add-on context): GitHub Advanced Security, Snyk HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: github:code_scanning, snyk:issues. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=github:code_scanning, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **failed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by repo** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where open_findings>0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Column chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Secure SDLC, App Security Requirements, and Secure Coding Pipeline Gates. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001:2022"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.25",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.25 (Secure development life cycle) is enforced — Splunk UC-22.6.45: Secure SDLC, App Security Requirements, and Secure Coding Pipeline Gates.",
                  "ea": "Saved search 'UC-22.6.45' running on github:code_scanning, snyk:issues, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.26",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.26 is enforced — Splunk UC-22.6.45: Secure SDLC, App Security Requirements, and Secure Coding Pipeline Gates.",
                  "ea": "Saved search 'UC-22.6.45' running on github:code_scanning, snyk:issues, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.28",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.8.28 (Secure coding (2022 new)) is enforced — Splunk UC-22.6.45: Secure SDLC, App Security Requirements, and Secure Coding Pipeline Gates.",
                  "ea": "Saved search 'UC-22.6.45' running on github:code_scanning, snyk:issues, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 34.9,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 45,
            "none": 0
          }
        },
        {
          "i": "22.7",
          "n": "NIST CSF",
          "u": [
            {
              "i": "22.7.1",
              "n": "NIST CSF Maturity Posture Dashboard (Identify/Protect/Detect/Respond/Recover)",
              "c": "high",
              "f": "advanced",
              "v": "Maps enabled ES correlation searches and risk scoring volume to NIST CSF functions for a defensible, data-driven maturity snapshot rather than a static policy diagram.",
              "t": "Splunk Enterprise Security (Splunkbase 263), CIM Risk data model",
              "d": "`| rest /services/saved/searches` (title, disabled, eai:acl.app); `| from datamodel Risk.All_Risk` (search_name, risk_score, _time); CSV lookup `nist_csf_es_function_mapping` (correlation_search_name, nist_csf_function)",
              "q": "| from datamodel Risk.All_Risk\n| timechart span=7d sum(risk_score) as weekly_risk_points, dc(search_name) as distinct_risk_rules",
              "m": "(1) Create `nist_csf_es_function_mapping.csv` with `correlation_search_name` = full saved-search title and `nist_csf_function` in {Identify, Protect, Detect, Respond, Recover}; (2) adjust `eai:acl.app` if your ES app name differs; (3) refresh the REST panel after content upgrades; (4) document CSF tier targets separately in narrative.",
              "z": "Bar chart (enabled_detections by CSF function), Area chart (weekly_risk_points), Table (raw mapping for assessors).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), CIM Risk data model.\n• Ensure the following data sources are available: `| rest /services/saved/searches` (title, disabled, eai:acl.app); `| from datamodel Risk.All_Risk` (search_name, risk_score, _time); CSV lookup `nist_csf_es_function_mapping` (correlation_search_name, nist_csf_function).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create `nist_csf_es_function_mapping.csv` with `correlation_search_name` = full saved-search title and `nist_csf_function` in {Identify, Protect, Detect, Respond, Recover}; (2) adjust `eai:acl.app` if your ES app name differs; (3) refresh the REST panel after content upgrades; (4) document CSF tier targets separately in narrative.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| from datamodel Risk.All_Risk\n| timechart span=7d sum(risk_score) as weekly_risk_points, dc(search_name) as distinct_risk_rules\n```\n\nUnderstanding this SPL\n\n**NIST CSF Maturity Posture Dashboard (Identify/Protect/Detect/Respond/Recover)** — Maps enabled ES correlation searches and risk scoring volume to NIST CSF functions for a defensible, data-driven maturity snapshot rather than a static policy diagram.\n\nDocumented **Data sources**: `| rest /services/saved/searches` (title, disabled, eai:acl.app); `| from datamodel Risk.All_Risk` (search_name, risk_score, _time); CSV lookup `nist_csf_es_function_mapping` (correlation_search_name, nist_csf_function). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), CIM Risk data model. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `from` (dataset / Federated Search) — verify dataset availability and permissions.\n• `timechart` plots the metric over time using **span=7d** buckets — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (enabled_detections by CSF function), Area chart (weekly_risk_points), Table (raw mapping for assessors).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Maturity Posture Dashboard. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "ID.AM-01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF ID.AM-01 (Asset inventory) is enforced — Splunk UC-22.7.1: NIST CSF Maturity Posture Dashboard.",
                  "ea": "Saved search 'UC-22.7.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.2",
              "n": "NIST CSF Detect Function Coverage Gap Analysis (MITRE ATT&CK)",
              "c": "high",
              "f": "advanced",
              "v": "Highlights MITRE techniques with no mapped correlation search or no recent notable fires, focusing detection engineering on true gaps in the Detect function.",
              "t": "Splunk Enterprise Security (Splunkbase 263), ES MITRE ATT&CK lookups",
              "d": "`| inputlookup mitre_attack_all_techniques` (technique_id, technique_name); `| rest /services/saved/searches`; `| inputlookup mitre_user_rule_technique_lookup` (correlation_search_name, technique_id); `` `notable` `` macro (rule_name, annotations.mitre_attack.mitre_attack_id)",
              "q": "| inputlookup mitre_attack_all_techniques\n| fields technique_id, technique_name\n| join type=left max=0 technique_id [\n    | rest /services/saved/searches splunk_server=local count=0\n    | search disabled=0 title=\"*Correlation Search*\"\n    | lookup mitre_user_rule_technique_lookup correlation_search_name AS title OUTPUT technique_id\n    | stats dc(title) as enabled_rules by technique_id\n  ]\n| join type=left max=0 technique_id [\n    search `notable` earliest=-90d\n    | mvexpand annotations.mitre_attack.mitre_attack_id limit=500\n    | rename annotations.mitre_attack.mitre_attack_id as technique_id\n    | stats dc(rule_name) as rules_with_fires by technique_id\n  ]\n| fillnull value=0 enabled_rules, rules_with_fires\n| eval gap=case(\n    enabled_rules=0, \"no_mapped_rule\",\n    enabled_rules>0 AND rules_with_fires=0, \"no_recent_notable\",\n    1=1, \"active_signal\")\n| where gap!=\"active_signal\"\n| sort technique_id\n| table technique_id, technique_name, enabled_rules, rules_with_fires, gap",
              "m": "(1) Confirm lookup names on your ES build (`mitre_attack_all_techniques` vs `mitre_attack_techniques`); (2) populate `mitre_user_rule_technique_lookup` (ES documents user mapping of correlation searches to techniques); (3) review quarterly and export gap list to detection engineering backlog.",
              "z": "Table (technique, rules, fires, gap), Column chart (gap counts by category).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), ES MITRE ATT&CK lookups.\n• Ensure the following data sources are available: `| inputlookup mitre_attack_all_techniques` (technique_id, technique_name); `| rest /services/saved/searches`; `| inputlookup mitre_user_rule_technique_lookup` (correlation_search_name, technique_id); `` `notable` `` macro (rule_name, annotations.mitre_attack.mitre_attack_id).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm lookup names on your ES build (`mitre_attack_all_techniques` vs `mitre_attack_techniques`); (2) populate `mitre_user_rule_technique_lookup` (ES documents user mapping of correlation searches to techniques); (3) review quarterly and export gap list to detection engineering backlog.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup mitre_attack_all_techniques\n| fields technique_id, technique_name\n| join type=left max=0 technique_id [\n    | rest /services/saved/searches splunk_server=local count=0\n    | search disabled=0 title=\"*Correlation Search*\"\n    | lookup mitre_user_rule_technique_lookup correlation_search_name AS title OUTPUT technique_id\n    | stats dc(title) as enabled_rules by technique_id\n  ]\n| join type=left max=0 technique_id [\n    search `notable` earliest=-90d\n    | mvexpand annotations.mitre_attack.mitre_attack_id limit=500\n    | rename annotations.mitre_attack.mitre_attack_id as technique_id\n    | stats dc(rule_name) as rules_with_fires by technique_id\n  ]\n| fillnull value=0 enabled_rules, rules_with_fires\n| eval gap=case(\n    enabled_rules=0, \"no_mapped_rule\",\n    enabled_rules>0 AND rules_with_fires=0, \"no_recent_notable\",\n    1=1, \"active_signal\")\n| where gap!=\"active_signal\"\n| sort technique_id\n| table technique_id, technique_name, enabled_rules, rules_with_fires, gap\n```\n\nUnderstanding this SPL\n\n**NIST CSF Detect Function Coverage Gap Analysis (MITRE ATT&CK)** — Highlights MITRE techniques with no mapped correlation search or no recent notable fires, focusing detection engineering on true gaps in the Detect function.\n\nDocumented **Data sources**: `| inputlookup mitre_attack_all_techniques` (technique_id, technique_name); `| rest /services/saved/searches`; `| inputlookup mitre_user_rule_technique_lookup` (correlation_search_name, technique_id); `` `notable` `` macro (rule_name, annotations.mitre_attack.mitre_attack_id). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), ES MITRE ATT&CK lookups. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap!=\"active_signal\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **NIST CSF Detect Function Coverage Gap Analysis (MITRE ATT&CK)**): table technique_id, technique_name, enabled_rules, rules_with_fires, gap\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (technique, rules, fires, gap), Column chart (gap counts by category).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Detect Function Coverage Gap Analysis. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [
                "snmp_ups"
              ],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "1.1",
                  "cl": "DE.AE-3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST CSF DE.AE-3 (Event data collection and correlation) is enforced — Splunk UC-22.7.2: NIST CSF Detect Function Coverage Gap Analysis.",
                  "ea": "Saved search 'UC-22.7.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nist.gov/cyberframework/framework"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.3",
              "n": "NIST CSF Identify — Asset Inventory Coverage and Shadow SaaS Signals (ID.AM-2)",
              "c": "high",
              "f": "intermediate",
              "v": "The Identify function requires software platforms and applications to be inventoried. Comparing cloud audit OAuth consent events against an approved SaaS catalog highlights shadow applications that consume corporate identity before they appear in the CMDB.",
              "t": "Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110)",
              "d": "`index=azure` `sourcetype=\"mscs:azure:auditlog\"` (activityDisplayName, targetResources, initiatedBy, _time); `approved_saas_apps.csv` lookup (app_display_name, approved_tier)",
              "q": "index=azure sourcetype=\"mscs:azure:auditlog\" earliest=-30d activityDisplayName=\"Add OAuth2PermissionGrant\"\n| eval app=mvindex(targetResources{}.displayName,0)\n| lookup approved_saas_apps.csv app AS app_display_name OUTPUT approved_tier\n| eval shadow=if(isnull(approved_tier), 1, 0)\n| stats count as grants, sum(shadow) as unapproved_app_hits by app, approved_tier\n| sort - unapproved_app_hits",
              "m": "(1) Build `approved_saas_apps.csv` from enterprise architecture; (2) tune `activityDisplayName` for your IdP (Google Workspace equivalent sourcetype optional); (3) alert when `unapproved_app_hits>0` for production tenants; (4) feed discoveries into asset intake workflow; (5) refresh catalog monthly.",
              "z": "Table (app, grants, approved_tier), Bar chart (shadow vs approved), Pie chart (grant volume).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110).\n• Ensure the following data sources are available: `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (activityDisplayName, targetResources, initiatedBy, _time); `approved_saas_apps.csv` lookup (app_display_name, approved_tier).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build `approved_saas_apps.csv` from enterprise architecture; (2) tune `activityDisplayName` for your IdP (Google Workspace equivalent sourcetype optional); (3) alert when `unapproved_app_hits>0` for production tenants; (4) feed discoveries into asset intake workflow; (5) refresh catalog monthly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:auditlog\" earliest=-30d activityDisplayName=\"Add OAuth2PermissionGrant\"\n| eval app=mvindex(targetResources{}.displayName,0)\n| lookup approved_saas_apps.csv app AS app_display_name OUTPUT approved_tier\n| eval shadow=if(isnull(approved_tier), 1, 0)\n| stats count as grants, sum(shadow) as unapproved_app_hits by app, approved_tier\n| sort - unapproved_app_hits\n```\n\nUnderstanding this SPL\n\n**NIST CSF Identify — Asset Inventory Coverage and Shadow SaaS Signals (ID.AM-2)** — The Identify function requires software platforms and applications to be inventoried. Comparing cloud audit OAuth consent events against an approved SaaS catalog highlights shadow applications that consume corporate identity before they appear in the CMDB.\n\nDocumented **Data sources**: `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (activityDisplayName, targetResources, initiatedBy, _time); `approved_saas_apps.csv` lookup (app_display_name, approved_tier). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:auditlog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:auditlog\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **app** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **shadow** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by app, approved_tier** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIST CSF Identify — Asset Inventory Coverage and Shadow SaaS Signals (ID.AM-2)** — The Identify function requires software platforms and applications to be inventoried. Comparing cloud audit OAuth consent events against an approved SaaS catalog highlights shadow applications that consume corporate identity before they appear in the CMDB.\n\nDocumented **Data sources**: `index=azure` `sourcetype=\"mscs:azure:auditlog\"` (activityDisplayName, targetResources, initiatedBy, _time); `approved_saas_apps.csv` lookup (app_display_name, approved_tier). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (app, grants, approved_tier), Bar chart (shadow vs approved), Pie chart (grant volume).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Asset Inventory Coverage and Shadow SaaS Signals. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "1.1",
                  "cl": "ID.AM-1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST CSF ID.AM-1 (Physical devices inventory) is enforced — Splunk UC-22.7.3: NIST CSF Identify — Asset Inventory Coverage and Shadow SaaS Signals.",
                  "ea": "Saved search 'UC-22.7.3' running on sourcetype mscs:azure:auditlog and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nist.gov/cyberframework/framework"
                },
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "ID.AM-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF ID.AM-2 is enforced — Splunk UC-22.7.3: NIST CSF Identify — Asset Inventory Coverage and Shadow SaaS Signals.",
                  "ea": "Saved search 'UC-22.7.3' running on sourcetype mscs:azure:auditlog and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.4",
              "n": "NIST CSF Protect — Identity Authentication Hardening and MFA Gaps (PR.AC-1)",
              "c": "critical",
              "f": "intermediate",
              "v": "PR.AC-1 expects identities and credentials to be managed for authorized devices and users. Surfacing interactive logons without MFA claim presence from Windows or Entra sign-in logs supports remediation of weak authentication before credential attacks succeed.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110)",
              "d": "`index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, TargetUserName, WorkstationName, AuthenticationPackageName); `index=azure` `sourcetype=\"mscs:azure:signinlog\"` (userPrincipalName, authenticationRequirement, conditionalAccessStatus, appDisplayName, status.errorCode)",
              "q": "index=azure sourcetype=\"mscs:azure:signinlog\" earliest=-24h status.errorCode=0\n| where isnull(authenticationRequirement) OR lower(authenticationRequirement)!=\"multifactorauthentication\"\n| stats count by userPrincipalName, authenticationRequirement, conditionalAccessStatus, appDisplayName\n| sort - count",
              "m": "(1) Ingest Entra ID sign-in logs with Microsoft Cloud Services TA; (2) exclude break-glass accounts via lookup; (3) correlate with Conditional Access policy changes; (4) drive remediation tickets to IAM; (5) track rolling MFA coverage percent in executive dashboard.",
              "z": "Table (users and apps lacking MFA), Bar chart (count by appDisplayName), Single value (events last 24h).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1556"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110).\n• Ensure the following data sources are available: `index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, TargetUserName, WorkstationName, AuthenticationPackageName); `index=azure` `sourcetype=\"mscs:azure:signinlog\"` (userPrincipalName, authenticationRequirement, conditionalAccessStatus, appDisplayName, status.errorCode).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest Entra ID sign-in logs with Microsoft Cloud Services TA; (2) exclude break-glass accounts via lookup; (3) correlate with Conditional Access policy changes; (4) drive remediation tickets to IAM; (5) track rolling MFA coverage percent in executive dashboard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=\"mscs:azure:signinlog\" earliest=-24h status.errorCode=0\n| where isnull(authenticationRequirement) OR lower(authenticationRequirement)!=\"multifactorauthentication\"\n| stats count by userPrincipalName, authenticationRequirement, conditionalAccessStatus, appDisplayName\n| sort - count\n```\n\nUnderstanding this SPL\n\n**NIST CSF Protect — Identity Authentication Hardening and MFA Gaps (PR.AC-1)** — PR.AC-1 expects identities and credentials to be managed for authorized devices and users. Surfacing interactive logons without MFA claim presence from Windows or Entra sign-in logs supports remediation of weak authentication before credential attacks succeed.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, TargetUserName, WorkstationName, AuthenticationPackageName); `index=azure` `sourcetype=\"mscs:azure:signinlog\"` (userPrincipalName, authenticationRequirement, conditionalAccessStatus, appDisplayName, status.errorCode). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:signinlog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=\"mscs:azure:signinlog\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(authenticationRequirement) OR lower(authenticationRequirement)!=\"multifactorauthentication\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by userPrincipalName, authenticationRequirement, conditionalAccessStatus, appDisplayName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIST CSF Protect — Identity Authentication Hardening and MFA Gaps (PR.AC-1)** — PR.AC-1 expects identities and credentials to be managed for authorized devices and users. Surfacing interactive logons without MFA claim presence from Windows or Entra sign-in logs supports remediation of weak authentication before credential attacks succeed.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, TargetUserName, WorkstationName, AuthenticationPackageName); `index=azure` `sourcetype=\"mscs:azure:signinlog\"` (userPrincipalName, authenticationRequirement, conditionalAccessStatus, appDisplayName, status.errorCode). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users and apps lacking MFA), Bar chart (count by appDisplayName), Single value (events last 24h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Identity Authentication Hardening and MFA Gaps. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "1.1",
                  "cl": "PR.AC-1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST CSF PR.AC-1 (Identities and credentials managed) is enforced — Splunk UC-22.7.4: NIST CSF Protect — Identity Authentication Hardening and MFA Gaps.",
                  "ea": "Saved search 'UC-22.7.4' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nist.gov/cyberframework/framework"
                },
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "PR.AA-05",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST CSF PR.AA-05 (Access permissions) is enforced — Splunk UC-22.7.4: NIST CSF Protect — Identity Authentication Hardening and MFA Gaps.",
                  "ea": "Saved search 'UC-22.7.4' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nist.gov/cyberframework"
                },
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "PR.AC-1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF PR.AC-1 is enforced — Splunk UC-22.7.4: NIST CSF Protect — Identity Authentication Hardening and MFA Gaps.",
                  "ea": "Saved search 'UC-22.7.4' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.5",
              "n": "NIST CSF Detect — Continuous Vulnerability Exposure Drift on Critical Servers (DE.CM-7)",
              "c": "high",
              "f": "intermediate",
              "v": "DE.CM-7 expects monitoring that surfaces unauthorized or risky software and configuration states, including exploitable weaknesses. Tracking critical and high CVE counts on in-scope assets over time demonstrates that vulnerability findings are continuously visible—not only during annual scan windows—and drives timely remediation aligned to risk tolerance.",
              "t": "Tenable Add-On for Splunk (Splunkbase 4060)",
              "d": "`index=vuln` `sourcetype=\"tenable:vuln\"` (host, plugin_id, severity, first_seen, last_seen, cve)",
              "q": "index=vuln sourcetype=\"tenable:vuln\" earliest=-14d severity IN (\"Critical\",\"High\")\n| lookup pci_in_scope_hosts.csv host OUTPUT in_scope\n| where in_scope=\"true\"\n| stats dc(cve) as distinct_cves, dc(plugin_id) as distinct_plugins, values(severity) as severities by host\n| sort - distinct_cves",
              "m": "(1) Maintain `pci_in_scope_hosts.csv` or generic `critical_asset_hosts.csv`; (2) normalize `host` to FQDN used in CMDB; (3) alert when `distinct_cves` increases week over week; (4) join patch tickets from ServiceNow optional; (5) document SLAs in CSF tier narrative.",
              "z": "Table (host exposure), Column chart (distinct_cves), Line chart (weekly trend via appendcols or summary index).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tenable Add-On for Splunk (Splunkbase 4060).\n• Ensure the following data sources are available: `index=vuln` `sourcetype=\"tenable:vuln\"` (host, plugin_id, severity, first_seen, last_seen, cve).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `pci_in_scope_hosts.csv` or generic `critical_asset_hosts.csv`; (2) normalize `host` to FQDN used in CMDB; (3) alert when `distinct_cves` increases week over week; (4) join patch tickets from ServiceNow optional; (5) document SLAs in CSF tier narrative.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=\"tenable:vuln\" earliest=-14d severity IN (\"Critical\",\"High\")\n| lookup pci_in_scope_hosts.csv host OUTPUT in_scope\n| where in_scope=\"true\"\n| stats dc(cve) as distinct_cves, dc(plugin_id) as distinct_plugins, values(severity) as severities by host\n| sort - distinct_cves\n```\n\nUnderstanding this SPL\n\n**NIST CSF Detect — Continuous Vulnerability Exposure Drift on Critical Servers (DE.CM-7)** — DE.CM-7 expects monitoring that surfaces unauthorized or risky software and configuration states, including exploitable weaknesses. Tracking critical and high CVE counts on in-scope assets over time demonstrates that vulnerability findings are continuously visible—not only during annual scan windows—and drives timely remediation aligned to risk tolerance.\n\nDocumented **Data sources**: `index=vuln` `sourcetype=\"tenable:vuln\"` (host, plugin_id, severity, first_seen, last_seen, cve). **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: tenable:vuln. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=\"tenable:vuln\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIST CSF Detect — Continuous Vulnerability Exposure Drift on Critical Servers (DE.CM-7)** — DE.CM-7 expects monitoring that surfaces unauthorized or risky software and configuration states, including exploitable weaknesses. 
Tracking critical and high CVE counts on in-scope assets over time demonstrates that vulnerability findings are continuously visible—not only during annual scan windows—and drives timely remediation aligned to risk tolerance.\n\nDocumented **Data sources**: `index=vuln` `sourcetype=\"tenable:vuln\"` (host, plugin_id, severity, first_seen, last_seen, cve). **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host exposure), Column chart (distinct_cves), Line chart (weekly trend via appendcols or summary index).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Continuous Vulnerability Exposure Drift on Critical Servers. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "DE.CM-09",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST CSF DE.CM-09 (Environment monitoring) is enforced — Splunk UC-22.7.5: NIST CSF Detect — Continuous Vulnerability Exposure Drift on Critical Servers.",
                  "ea": "Saved search 'UC-22.7.5' running on index=vuln sourcetype=\"tenable:vuln\" (host, plugin_id, severity, first_seen, last_seen, cve), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nist.gov/cyberframework"
                },
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "DE.CM-7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF DE.CM-7 is enforced — Splunk UC-22.7.5: NIST CSF Detect — Continuous Vulnerability Exposure Drift on Critical Servers.",
                  "ea": "Saved search 'UC-22.7.5' running on index=vuln sourcetype=\"tenable:vuln\" (host, plugin_id, severity, first_seen, last_seen, cve), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.6",
              "n": "NIST CSF Respond — Incident Response Playbook Execution and Stage Timestamps (RS.RP-1)",
              "c": "critical",
              "f": "advanced",
              "v": "RS.RP-1 requires response processes to be executed during and after an incident. Measuring Splunk SOAR incident-response playbook success rates alongside ServiceNow security-incident stage timing proves that documented response procedures actually run and that closure timelines are measurable for after-action review.",
              "t": "Splunk SOAR, Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=soar` `sourcetype=\"phantom:playbook_run\"` (playbook_name, status, _time); `index=itsm` `sourcetype=\"snow:incident\"` (number, u_ir_stage, state, work_start, resolved_at, short_description)",
              "q": "index=itsm sourcetype=\"snow:incident\" earliest=-90d short_description=\"*security incident*\"\n| eval start_epoch=strptime(work_start, \"%Y-%m-%d %H:%M:%S\")\n| eval end_epoch=strptime(resolved_at, \"%Y-%m-%d %H:%M:%S\")\n| eval duration_sec=if(isnotnull(start_epoch) AND isnotnull(end_epoch), end_epoch-start_epoch, null())\n| stats count as incidents, median(duration_sec) as median_duration_sec by u_ir_stage, state\n| eval median_duration_h=round(median_duration_sec/3600,2)\n| sort - incidents",
              "m": "(1) Standardize Splunk SOAR playbook names for major incident classes and map `status` vocabulary to success/failed; (2) if using ServiceNow, map `u_ir_stage` values to NIST IR phases and require `work_start`/`resolved_at` for MTTR; (3) alert on SOAR `status` failure spikes; (4) exclude test containers with non-prod labels; (5) quarterly export for tabletop lessons learned.",
              "z": "Table (playbook_name, runs, successes, failures, success_pct), Bar chart (failures by playbook), Table (ServiceNow stages with median_duration_h).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [
                "T1048",
                "T1070"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk SOAR, Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=soar` `sourcetype=\"phantom:playbook_run\"` (playbook_name, status, _time); `index=itsm` `sourcetype=\"snow:incident\"` (number, u_ir_stage, state, work_start, resolved_at, short_description).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardize Splunk SOAR playbook names for major incident classes and map `status` vocabulary to success/failed; (2) if using ServiceNow, map `u_ir_stage` values to NIST IR phases and require `work_start`/`resolved_at` for MTTR; (3) alert on SOAR `status` failure spikes; (4) exclude test containers with non-prod labels; (5) quarterly export for tabletop lessons learned.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" earliest=-90d short_description=\"*security incident*\"\n| eval start_epoch=strptime(work_start, \"%Y-%m-%d %H:%M:%S\")\n| eval end_epoch=strptime(resolved_at, \"%Y-%m-%d %H:%M:%S\")\n| eval duration_sec=if(isnotnull(start_epoch) AND isnotnull(end_epoch), end_epoch-start_epoch, null())\n| stats count as incidents, median(duration_sec) as median_duration_sec by u_ir_stage, state\n| eval median_duration_h=round(median_duration_sec/3600,2)\n| sort - incidents\n```\n\nUnderstanding this SPL\n\n**NIST CSF Respond — Incident Response Playbook Execution and Stage Timestamps (RS.RP-1)** — RS.RP-1 requires response processes to be executed during and after an incident. Measuring Splunk SOAR incident-response playbook success rates alongside ServiceNow security-incident stage timing proves that documented response procedures actually run and that closure timelines are measurable for after-action review.\n\nDocumented **Data sources**: `index=soar` `sourcetype=\"phantom:playbook_run\"` (playbook_name, status, _time); `index=itsm` `sourcetype=\"snow:incident\"` (number, u_ir_stage, state, work_start, resolved_at, short_description). **App/TA** (typical add-on context): Splunk SOAR, Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **start_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **end_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **duration_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by u_ir_stage, state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **median_duration_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (playbook_name, runs, successes, failures, success_pct), Bar chart (failures by playbook), Table (ServiceNow stages with median_duration_h).",
              "script": "",
              "premium": "Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Incident Response Playbook Execution and Stage Timestamps. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RS.AN-03",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST CSF RS.AN-03 (Incident analysis) is enforced — Splunk UC-22.7.6: NIST CSF Respond — Incident Response Playbook Execution and Stage Timestamps.",
                  "ea": "Saved search 'UC-22.7.6' running on sourcetype phantom:playbook_run and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nist.gov/cyberframework"
                },
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RS.RP-1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF RS.RP-1 is enforced — Splunk UC-22.7.6: NIST CSF Respond — Incident Response Playbook Execution and Stage Timestamps.",
                  "ea": "Saved search 'UC-22.7.6' running on sourcetype phantom:playbook_run and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.7",
              "n": "NIST CSF Recover — Backup Job Success and RTO Readiness for Critical Databases (RC.RP-1)",
              "c": "critical",
              "f": "intermediate",
              "v": "RC.RP-1 expects recovery planning and improvements after incidents. Monitoring backup completion for databases tied to RTO tiers evidences that restoration inputs are healthy and flags silent failures that would block ransomware recovery.",
              "t": "Veritas Data Protection Add-On (Splunkbase 7593) or Commvault/Cohesity TA, Splunk DB Connect (Splunkbase 2686) optional",
              "d": "`index=backup` `sourcetype=\"netbackup:job\"` (client_name, policy_name, status, kb_written, _time); `rto_tier_lookup.csv` (client_name, rto_hours)",
              "q": "index=backup sourcetype=\"netbackup:job\" earliest=-7d\n| lookup rto_tier_lookup.csv client_name OUTPUT rto_hours\n| where isnotnull(rto_hours)\n| eval failed=if(match(status,\"(?i)fail|error|partial\"),1,0)\n| stats count as jobs, sum(failed) as failed_jobs by client_name, policy_name, rto_hours\n| eval fail_pct=round(100*failed_jobs/jobs,2)\n| where fail_pct>0 OR failed_jobs>0\n| sort - failed_jobs",
              "m": "(1) Ingest backup product logs with stable `status` vocabulary; (2) align `client_name` to database hostnames; (3) alert on any failed job for tier-0; (4) validate against storage dedupe errors in secondary sourcetype; (5) tie to BC/DR test calendar for RC.IM improvements.",
              "z": "Table (client, fail_pct), Single value (failed_jobs), Timechart (daily success rate by tier).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Veritas Data Protection Add-On](https://splunkbase.splunk.com/app/7593), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Veritas Data Protection Add-On (Splunkbase 7593) or Commvault/Cohesity TA, Splunk DB Connect (Splunkbase 2686) optional.\n• Ensure the following data sources are available: `index=backup` `sourcetype=\"netbackup:job\"` (client_name, policy_name, status, kb_written, _time); `rto_tier_lookup.csv` (client_name, rto_hours).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest backup product logs with stable `status` vocabulary; (2) align `client_name` to database hostnames; (3) alert on any failed job for tier-0; (4) validate against storage dedupe errors in secondary sourcetype; (5) tie to BC/DR test calendar for RC.IM improvements.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"netbackup:job\" earliest=-7d\n| lookup rto_tier_lookup.csv client_name OUTPUT rto_hours\n| where isnotnull(rto_hours)\n| eval failed=if(match(status,\"(?i)fail|error|partial\"),1,0)\n| stats count as jobs, sum(failed) as failed_jobs by client_name, policy_name, rto_hours\n| eval fail_pct=round(100*failed_jobs/jobs,2)\n| where fail_pct>0 OR failed_jobs>0\n| sort - failed_jobs\n```\n\nUnderstanding this SPL\n\n**NIST CSF Recover — Backup Job Success and RTO Readiness for Critical Databases (RC.RP-1)** — RC.RP-1 expects recovery planning and improvements after incidents. Monitoring backup completion for databases tied to RTO tiers evidences that restoration inputs are healthy and flags silent failures that would block ransomware recovery.\n\nDocumented **Data sources**: `index=backup` `sourcetype=\"netbackup:job\"` (client_name, policy_name, status, kb_written, _time); `rto_tier_lookup.csv` (client_name, rto_hours). **App/TA** (typical add-on context): Veritas Data Protection Add-On (Splunkbase 7593) or Commvault/Cohesity TA, Splunk DB Connect (Splunkbase 2686) optional. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: netbackup:job. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"netbackup:job\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(rto_hours)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **failed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by client_name, policy_name, rto_hours** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_pct>0 OR failed_jobs>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (client, fail_pct), Single value (failed_jobs), Timechart (daily success rate by tier).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Backup Job Success and RTO Readiness for Critical Databases. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "commvault",
                "db_connect",
                "hashicorp"
              ],
              "em": [
                "hashicorp_vault"
              ],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RC.RP-1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF RC.RP-1 is enforced — Splunk UC-22.7.7: NIST CSF Recover — Backup Job Success and RTO Readiness for Critical Databases.",
                  "ea": "Saved search 'UC-22.7.7' running on sourcetype netbackup:job and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.8",
              "n": "Governance Context — Business Critical Services Mapped to IT Assets (GV.OC-01)",
              "c": "high",
              "f": "intermediate",
              "v": "Maps leadership-defined critical business outcomes to in-scope hosts and applications so organizational context in CSF Govern is evidenced in Splunk dashboards.",
              "t": "Splunk ITSI (Splunkbase 1841), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsi_summary` (serviceid, severity); `snow:cmdb_ci` (name, operational_status); `critical_services_lookup.csv` (service_name, cmdb_ci)",
              "q": "| inputlookup critical_services_lookup.csv\n| lookup snow:cmdb_ci service_name AS name OUTPUT operational_status\n| stats count by operational_status, service_name",
              "m": "(1) Maintain CSV of board-approved critical services; (2) join CMDB operational status; (3) refresh after DR tests; (4) export for GV.OC evidence.",
              "z": "Table, Pie chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (Splunkbase 1841), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsi_summary` (serviceid, severity); `snow:cmdb_ci` (name, operational_status); `critical_services_lookup.csv` (service_name, cmdb_ci).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain CSV of board-approved critical services; (2) join CMDB operational status; (3) refresh after DR tests; (4) export for GV.OC evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup critical_services_lookup.csv\n| lookup snow:cmdb_ci service_name AS name OUTPUT operational_status\n| stats count by operational_status, service_name\n```\n\nUnderstanding this SPL\n\n**Governance Context — Business Critical Services Mapped to IT Assets (GV.OC-01)** — Maps leadership-defined critical business outcomes to in-scope hosts and applications so organizational context in CSF Govern is evidenced in Splunk dashboards.\n\nDocumented **Data sources**: `index=itsi_summary` (serviceid, severity); `snow:cmdb_ci` (name, operational_status); `critical_services_lookup.csv` (service_name, cmdb_ci). **App/TA** (typical add-on context): Splunk ITSI (Splunkbase 1841), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by operational_status, service_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Pie chart",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Governance Context — Business Critical Services Mapped to IT Assets. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "GV.OC-01",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST CSF GV.OC-01 (Organisational context) is enforced — Splunk UC-22.7.8: Governance Context — Business Critical Services Mapped to IT Assets.",
                  "ea": "Saved search 'UC-22.7.8' running on index itsi_summary and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.9",
              "n": "External Stakeholder Dependencies — Third-Party SaaS in Auth Flows (GV.OC-02)",
              "c": "high",
              "f": "advanced",
              "v": "Surfaces which external identity and SaaS dependencies participate in authentication so organizational context includes supply-side and partner risks.",
              "t": "Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110)",
              "d": "`index=azure` `sourcetype=\"mscs:azure:signinlog\"` (resourceDisplayName, appDisplayName)",
              "q": "index=azure sourcetype=mscs:azure:signinlog earliest=-30d status.errorCode=0\n| stats dc(userPrincipalName) as users, count as signins by appDisplayName, resourceDisplayName\n| sort - signins",
              "m": "(1) Scope production tenants; (2) classify appDisplayName via lookup; (3) quarterly review for board risk committee; (4) feed vendor risk register.",
              "z": "Table, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110).\n• Ensure the following data sources are available: `index=azure` `sourcetype=\"mscs:azure:signinlog\"` (resourceDisplayName, appDisplayName).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Scope production tenants; (2) classify appDisplayName via lookup; (3) quarterly review for board risk committee; (4) feed vendor risk register.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=mscs:azure:signinlog earliest=-30d status.errorCode=0\n| stats dc(userPrincipalName) as users, count as signins by appDisplayName, resourceDisplayName\n| sort - signins\n```\n\nUnderstanding this SPL\n\n**External Stakeholder Dependencies — Third-Party SaaS in Auth Flows (GV.OC-02)** — Surfaces which external identity and SaaS dependencies participate in authentication so organizational context includes supply-side and partner risks.\n\nDocumented **Data sources**: `index=azure` `sourcetype=\"mscs:azure:signinlog\"` (resourceDisplayName, appDisplayName). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:signinlog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=mscs:azure:signinlog, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by appDisplayName, resourceDisplayName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**External Stakeholder Dependencies — Third-Party SaaS in Auth Flows (GV.OC-02)** — Surfaces which external identity and SaaS dependencies participate in authentication so organizational context includes supply-side and partner risks.\n\nDocumented **Data sources**: `index=azure` `sourcetype=\"mscs:azure:signinlog\"` (resourceDisplayName, appDisplayName). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for External Stakeholder Dependencies — Third-Party SaaS in Auth Flows. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "GV.OC-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF GV.OC-02 is enforced — Splunk UC-22.7.9: External Stakeholder Dependencies — Third-Party SaaS in Auth Flows.",
                  "ea": "Saved search 'UC-22.7.9' running on index=azure sourcetype=\"mscs:azure:signinlog\" (resourceDisplayName, appDisplayName), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.10",
              "n": "Enterprise Risk Appetite vs Open Critical Vulnerabilities (GV.RM-01)",
              "c": "medium",
              "f": "intermediate",
              "v": "Compares counts of open critical findings against a documented risk tolerance threshold so risk management strategy is operationalized, not only slide-deck text.",
              "t": "Tenable Add-On for Splunk (Splunkbase 4060)",
              "d": "`index=vuln` `sourcetype=\"tenable:vuln\"` (severity, host); `risk_appetite_thresholds.csv`",
              "q": "index=vuln sourcetype=tenable:vuln severity=Critical earliest=-7d\n| stats dc(host) as affected_hosts, dc(cve) as open_cves\n| append [| inputlookup risk_appetite_thresholds.csv | fields max_critical_hosts | head 1 | eval join=1 ]\n| stats max(affected_hosts) as ah max(max_critical_hosts) as lim\n| eval breach=if(ah>lim,1,0)",
              "m": "(1) Set max_critical_hosts in lookup from risk committee; (2) normalize hostnames; (3) alert on breach=1; (4) document exceptions with compensating controls.",
              "z": "Single value, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tenable Add-On for Splunk (Splunkbase 4060).\n• Ensure the following data sources are available: `index=vuln` `sourcetype=\"tenable:vuln\"` (severity, host); `risk_appetite_thresholds.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Set max_critical_hosts in lookup from risk committee; (2) normalize hostnames; (3) alert on breach=1; (4) document exceptions with compensating controls.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=tenable:vuln severity=Critical earliest=-7d\n| stats dc(host) as affected_hosts, dc(cve) as open_cves\n| append [| inputlookup risk_appetite_thresholds.csv | fields max_critical_hosts | head 1 | eval join=1 ]\n| stats max(affected_hosts) as ah max(max_critical_hosts) as lim\n| eval breach=if(ah>lim,1,0)\n```\n\nUnderstanding this SPL\n\n**Enterprise Risk Appetite vs Open Critical Vulnerabilities (GV.RM-01)** — Compares counts of open critical findings against a documented risk tolerance threshold so risk management strategy is operationalized, not only slide-deck text.\n\nDocumented **Data sources**: `index=vuln` `sourcetype=\"tenable:vuln\"` (severity, host); `risk_appetite_thresholds.csv`. **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: tenable:vuln. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=tenable:vuln, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Appends rows from a subsearch with `append`.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Enterprise Risk Appetite vs Open Critical Vulnerabilities (GV.RM-01)** — Compares counts of open critical findings against a documented risk tolerance threshold so risk management strategy is operationalized, not only slide-deck text.\n\nDocumented **Data sources**: `index=vuln` `sourcetype=\"tenable:vuln\"` (severity, host); `risk_appetite_thresholds.csv`. **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Enterprise Risk Appetite vs Open Critical Vulnerabilities. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "GV.RM-01",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST CSF GV.RM-01 (Risk management strategy) is enforced — Splunk UC-22.7.10: Enterprise Risk Appetite vs Open Critical Vulnerabilities.",
                  "ea": "Saved search 'UC-22.7.10' running on index=vuln sourcetype=\"tenable:vuln\" (severity, host); risk_appetite_thresholds.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.11",
              "n": "Security Role Attestation — RBAC Changes vs HR Start Dates (GV.RR-01)",
              "c": "high",
              "f": "advanced",
              "v": "Links IAM role grants to HR hire events to show responsibilities are assigned only when legitimate employment exists, supporting GV.RR.",
              "t": "Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110)",
              "d": "`mscs:azure:auditlog` (Add member to role); `hr_employees.csv`",
              "q": "index=azure sourcetype=mscs:azure:auditlog activityDisplayName=\"Add member to role\" earliest=-30d\n| eval upn=mvindex(targetResources{}.userPrincipalName,0)\n| lookup hr_employees.csv upn OUTPUT hire_date\n| eval hire_epoch=strptime(hire_date,\"%Y-%m-%d\")\n| where isnotnull(hire_epoch) AND _time < hire_epoch",
              "m": "(1) Ingest HR daily snapshot; (2) align UPN keys; (3) investigate hits; (4) update joiner process documentation.",
              "z": "Table, Time chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110).\n• Ensure the following data sources are available: `mscs:azure:auditlog` (Add member to role); `hr_employees.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest HR daily snapshot; (2) align UPN keys; (3) investigate hits; (4) update joiner process documentation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=mscs:azure:auditlog activityDisplayName=\"Add member to role\" earliest=-30d\n| eval upn=mvindex(targetResources{}.userPrincipalName,0)\n| lookup hr_employees.csv upn OUTPUT hire_date\n| eval hire_epoch=strptime(hire_date,\"%Y-%m-%d\")\n| where isnotnull(hire_epoch) AND _time < hire_epoch\n```\n\nUnderstanding this SPL\n\n**Security Role Attestation — RBAC Changes vs HR Start Dates (GV.RR-01)** — Links IAM role grants to HR hire events to show responsibilities are assigned only when legitimate employment exists, supporting GV.RR.\n\nDocumented **Data sources**: `mscs:azure:auditlog` (Add member to role); `hr_employees.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:auditlog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=mscs:azure:auditlog, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **upn** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **hire_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(hire_epoch) AND _time < hire_epoch` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security Role Attestation — RBAC Changes vs HR Start Dates (GV.RR-01)** — Links IAM role grants to HR hire events to show responsibilities are assigned only when legitimate employment exists, supporting GV.RR.\n\nDocumented **Data sources**: `mscs:azure:auditlog` (Add member to role); `hr_employees.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Security Role Attestation — RBAC Changes vs HR Start Dates. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "GV.RR-01",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST CSF GV.RR-01 (Organisational leadership) is enforced — Splunk UC-22.7.11: Security Role Attestation — RBAC Changes vs HR Start Dates.",
                  "ea": "Saved search 'UC-22.7.11' running on mscs:azure:auditlog (Add member to role); hr_employees.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.12",
              "n": "Policy Exception Tracking — Conditional Access Exclusion Groups (GV.PO-01)",
              "c": "high",
              "f": "intermediate",
              "v": "Detects growth in exclusion groups that bypass conditional access policies, evidencing policy governance drift between approved exceptions and reality.",
              "t": "Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110)",
              "d": "`mscs:azure:auditlog` (group, member changes)",
              "q": "index=azure sourcetype=mscs:azure:auditlog earliest=-30d activityDisplayName=\"Add member to group\"\n| eval glist=mvjoin(targetResources{}.displayName,\" \")\n| where match(glist,\"(?i)CA_Exclude|CAPolicyExempt\")\n| stats count by glist",
              "m": "(1) Name exclusion groups with consistent prefix; (2) CAB for each add; (3) monthly attestation export; (4) map to GV.PO policy reviews.",
              "z": "Bar chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110).\n• Ensure the following data sources are available: `mscs:azure:auditlog` (group, member changes).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Name exclusion groups with consistent prefix; (2) CAB for each add; (3) monthly attestation export; (4) map to GV.PO policy reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=mscs:azure:auditlog earliest=-30d activityDisplayName=\"Add member to group\"\n| eval glist=mvjoin(targetResources{}.displayName,\" \")\n| where match(glist,\"(?i)CA_Exclude|CAPolicyExempt\")\n| stats count by glist\n```\n\nUnderstanding this SPL\n\n**Policy Exception Tracking — Conditional Access Exclusion Groups (GV.PO-01)** — Detects growth in exclusion groups that bypass conditional access policies, evidencing policy governance drift between approved exceptions and reality.\n\nDocumented **Data sources**: `mscs:azure:auditlog` (group, member changes). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:auditlog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=mscs:azure:auditlog, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **glist** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(glist,\"(?i)CA_Exclude|CAPolicyExempt\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by glist** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Policy Exception Tracking — Conditional Access Exclusion Groups (GV.PO-01)** — Detects growth in exclusion groups that bypass conditional access policies, evidencing policy governance drift between approved exceptions and reality.\n\nDocumented **Data sources**: `mscs:azure:auditlog` (group, member changes). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Policy Exception Tracking — Conditional Access Exclusion Groups. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "GV.PO-01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF GV.PO-01 is enforced — Splunk UC-22.7.12: Policy Exception Tracking — Conditional Access Exclusion Groups.",
                  "ea": "Saved search 'UC-22.7.12' running on mscs:azure:auditlog (group, member changes), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.13",
              "n": "Documented Baseline Drift — Firewall Rule Adds Outside CAB Window (GV.PO-02)",
              "c": "medium",
              "f": "advanced",
              "v": "Flags new allow rules created outside change windows to prove security policies embedded in change management are monitored.",
              "t": "Splunk Add-on for Palo Alto Networks (Splunkbase 2757)",
              "d": "`pan:config` or syslog config events (rule, admin, cmd)",
              "q": "index=network sourcetype=pan:config earliest=-30d cmd=\"set\" object=\"rule\"\n| lookup change_calendar.csv _time OUTPUT cab_window\n| where cab_window!=\"open\" OR isnull(cab_window)",
              "m": "(1) Normalize config sourcetype for your firewall; (2) ingest change calendar; (3) tune for automation accounts; (4) attach to policy exception workflow.",
              "z": "Table, Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (Splunkbase 2757).\n• Ensure the following data sources are available: `pan:config` or syslog config events (rule, admin, cmd).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize config sourcetype for your firewall; (2) ingest change calendar; (3) tune for automation accounts; (4) attach to policy exception workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=pan:config earliest=-30d cmd=\"set\" object=\"rule\"\n| lookup change_calendar.csv _time OUTPUT cab_window\n| where cab_window!=\"open\" OR isnull(cab_window)\n```\n\nUnderstanding this SPL\n\n**Documented Baseline Drift — Firewall Rule Adds Outside CAB Window (GV.PO-02)** — Flags new allow rules created outside change windows to prove security policies embedded in change management are monitored.\n\nDocumented **Data sources**: `pan:config` or syslog config events (rule, admin, cmd). **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (Splunkbase 2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:config. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=pan:config, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cab_window!=\"open\" OR isnull(cab_window)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Documented Baseline Drift — Firewall Rule Adds Outside CAB Window (GV.PO-02)** — Flags new allow rules created outside change windows to prove security policies embedded in change management are monitored.\n\nDocumented **Data sources**: `pan:config` or syslog config events (rule, admin, cmd). **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (Splunkbase 2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Documented Baseline Drift — Firewall Rule Adds Outside CAB Window. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "GV.PO-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF GV.PO-02 is enforced — Splunk UC-22.7.13: Documented Baseline Drift — Firewall Rule Adds Outside CAB Window.",
                  "ea": "Saved search 'UC-22.7.13' running on pan:config or syslog config events (rule, admin, cmd), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.14",
              "n": "Executive Oversight Dashboard — Mean Time to Acknowledge Critical Alerts (GV.OV-01)",
              "c": "high",
              "f": "intermediate",
              "v": "Measures how quickly executives or duty managers acknowledge sev-1 security alerts, evidencing GV.OV oversight of outcomes.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`notable` macro (urgency, status, owner, _time)",
              "q": "search `notable` urgency=critical earliest=-90d\n| stats earliest(_time) as first_seen, latest(_time) as last_touch by rule_name, owner\n| eval window_sec=last_touch-first_seen",
              "m": "(1) Define critical urgency mapping; (2) use notable status transitions if available; (3) monthly governance review; (4) document remediation for outliers.",
              "z": "Column chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `notable` macro (urgency, status, owner, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define critical urgency mapping; (2) use notable status transitions if available; (3) monthly governance review; (4) document remediation for outliers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsearch `notable` urgency=critical earliest=-90d\n| stats earliest(_time) as first_seen, latest(_time) as last_touch by rule_name, owner\n| eval window_sec=last_touch-first_seen\n```\n\nUnderstanding this SPL\n\n**Executive Oversight Dashboard — Mean Time to Acknowledge Critical Alerts (GV.OV-01)** — Measures how quickly executives or duty managers acknowledge sev-1 security alerts, evidencing GV.OV oversight of outcomes.\n\nDocumented **Data sources**: `notable` macro (urgency, status, owner, _time). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by rule_name, owner** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **window_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart, Table",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Executive Oversight Dashboard — Mean Time to Acknowledge Critical Alerts. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "GV.OV-01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF GV.OV-01 is enforced — Splunk UC-22.7.14: Executive Oversight Dashboard — Mean Time to Acknowledge Critical Alerts.",
                  "ea": "Saved search 'UC-22.7.14' running on notable macro (urgency, status, owner, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.15",
              "n": "Supply Chain — New Package Installs in CI Against Approved Registry (GV.SC-01)",
              "c": "medium",
              "f": "advanced",
              "v": "Monitors CI pipelines for package installs that bypass internal registry mirrors, aligning with GV.SC supply chain risk management.",
              "t": "GitHub Enterprise audit or Jenkins",
              "d": "`github:audit` or `jenkins:console`",
              "q": "index=cicd (sourcetype=github:audit OR sourcetype=jenkins:console) earliest=-7d \"npm install\" OR \"pip install\"\n| rex field=_raw \"(?i)(npm|pip)\\s+install\\s+(?<pkg>[^\\s]+)\"\n| lookup approved_build_packages.csv pkg OUTPUT approved\n| where isnull(approved)",
              "m": "(1) Seed approved internal registry mirror packages; (2) scope to release branches; (3) block on alert in pipeline optional; (4) quarterly supplier software review.",
              "z": "Table, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: GitHub Enterprise audit or Jenkins.\n• Ensure the following data sources are available: `github:audit` or `jenkins:console`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Seed approved internal registry mirror packages; (2) scope to release branches; (3) block on alert in pipeline optional; (4) quarterly supplier software review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd (sourcetype=github:audit OR sourcetype=jenkins:console) earliest=-7d \"npm install\" OR \"pip install\"\n| rex field=_raw \"(?i)(npm|pip)\\s+install\\s+(?<pkg>[^\\s]+)\"\n| lookup approved_build_packages.csv pkg OUTPUT approved\n| where isnull(approved)\n```\n\nUnderstanding this SPL\n\n**Supply Chain — New Package Installs in CI Against Approved Registry (GV.SC-01)** — Monitors CI pipelines for package installs that bypass internal registry mirrors, aligning with GV.SC supply chain risk management.\n\nDocumented **Data sources**: `github:audit` or `jenkins:console`. **App/TA** (typical add-on context): GitHub Enterprise audit or Jenkins. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: github:audit, jenkins:console. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=github:audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Supply Chain — New Package Installs in CI Against Approved Registry. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github",
                "jenkins"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "GV.SC-01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF GV.SC-01 is enforced — Splunk UC-22.7.15: Supply Chain — New Package Installs in CI Against Approved Registry.",
                  "ea": "Saved search 'UC-22.7.15' running on github:audit or jenkins:console, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.16",
              "n": "Hardware Asset Coverage — Agents Missing on In-Scope Servers (ID.AM-01)",
              "c": "high",
              "f": "intermediate",
              "v": "Shows servers in CMDB without forwarder heartbeat so asset management inventories are complete for CSF Identify.",
              "t": "Splunk Universal Forwarder `_internal`, CMDB lookup",
              "d": "`_internal` metrics; `cmdb_servers.csv`",
              "q": "| metadata type=hosts index=*\n| lookup cmdb_servers.csv host OUTPUT in_scope\n| where in_scope=\"true\"\n| join type=left host [| search index=_internal source=*metrics.log* group=tcpin_connections earliest=-24h | stats dc(sourceHost) as reporting by sourceHost | rename sourceHost as host]\n| where isnull(reporting)",
              "m": "(1) Align host naming; (2) weekly diff to CMDB owners; (3) track closure in ID.AM improvement backlog.",
              "z": "Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder `_internal`, CMDB lookup.\n• Ensure the following data sources are available: `_internal` metrics; `cmdb_servers.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align host naming; (2) weekly diff to CMDB owners; (3) track closure in ID.AM improvement backlog.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| metadata type=hosts index=*\n| lookup cmdb_servers.csv host OUTPUT in_scope\n| where in_scope=\"true\"\n| join type=left host [| search index=_internal source=*metrics.log* group=tcpin_connections earliest=-24h | stats dc(sourceHost) as reporting by sourceHost | rename sourceHost as host]\n| where isnull(reporting)\n```\n\nUnderstanding this SPL\n\n**Hardware Asset Coverage — Agents Missing on In-Scope Servers (ID.AM-01)** — Shows servers in CMDB without forwarder heartbeat so asset management inventories are complete for CSF Identify.\n\nDocumented **Data sources**: `_internal` metrics; `cmdb_servers.csv`. **App/TA** (typical add-on context): Splunk Universal Forwarder `_internal`, CMDB lookup. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: *.\n\n**Pipeline walkthrough**\n\n• Uses `metadata`/`metasearch` to inspect indexes, sources, hosts, or sourcetypes (not raw events).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(reporting)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Hardware Asset Coverage — Agents Missing on In-Scope Servers. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "ID.AM-01",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST CSF ID.AM-01 (Asset inventory) is enforced — Splunk UC-22.7.16: Hardware Asset Coverage — Agents Missing on In-Scope Servers.",
                  "ea": "Saved search 'UC-22.7.16' running on _internal metrics; cmdb_servers.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.17",
              "n": "Software Bill of Materials Signals — Container Image Digests (ID.AM-02)",
              "c": "medium",
              "f": "advanced",
              "v": "Correlates Kubernetes workload image digests to approved golden images for containerized asset inventory depth.",
              "t": "OTel k8s attributes or `kube:audit`",
              "d": "`kube:*` sourcetype",
              "q": "index=containers sourcetype=kube:* earliest=-6h\n| stats latest(image_id) as image by pod, namespace\n| lookup approved_container_images.csv image OUTPUT tier\n| where isnull(tier)",
              "m": "(1) Export digest list from artifact registry; (2) alert on unknown digests; (3) feed asset register; (4) map to business owner namespace labels.",
              "z": "Table, Heatmap",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: OTel k8s attributes or `kube:audit`.\n• Ensure the following data sources are available: `kube:*` sourcetype.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export digest list from artifact registry; (2) alert on unknown digests; (3) feed asset register; (4) map to business owner namespace labels.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=containers sourcetype=kube:* earliest=-6h\n| stats latest(image_id) as image by pod, namespace\n| lookup approved_container_images.csv image OUTPUT tier\n| where isnull(tier)\n```\n\nUnderstanding this SPL\n\n**Software Bill of Materials Signals — Container Image Digests (ID.AM-02)** — Correlates Kubernetes workload image digests to approved golden images for containerized asset inventory depth.\n\nDocumented **Data sources**: `kube:*` sourcetype. **App/TA** (typical add-on context): OTel k8s attributes or `kube:audit`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: containers; **sourcetype**: kube:*. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=containers, sourcetype=kube:*, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by pod, namespace** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(tier)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Heatmap",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Software Bill of Materials Signals — Container Image Digests. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "ID.AM-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF ID.AM-02 is enforced — Splunk UC-22.7.17: Software Bill of Materials Signals — Container Image Digests.",
                  "ea": "Saved search 'UC-22.7.17' running on kube:* sourcetype, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.18",
              "n": "Data Asset Classification — Sensitive Columns in Query Logs (ID.AM-03)",
              "c": "high",
              "f": "intermediate",
              "v": "Uses database audit logs tagged with sensitivity labels to prove data assets are known and classified for risk treatment prioritization.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`db:audit` (table_name, column_name, user)",
              "q": "index=database sourcetype=db:audit earliest=-24h\n| lookup sensitive_columns.csv table_name column_name OUTPUT classification\n| where classification=\"restricted\"\n| stats count by user, table_name",
              "m": "(1) Maintain sensitive column catalog; (2) exclude ETL service accounts; (3) quarterly access review input; (4) align to data governance council minutes.",
              "z": "Table, Time chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `db:audit` (table_name, column_name, user).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain sensitive column catalog; (2) exclude ETL service accounts; (3) quarterly access review input; (4) align to data governance council minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=db:audit earliest=-24h\n| lookup sensitive_columns.csv table_name column_name OUTPUT classification\n| where classification=\"restricted\"\n| stats count by user, table_name\n```\n\nUnderstanding this SPL\n\n**Data Asset Classification — Sensitive Columns in Query Logs (ID.AM-03)** — Uses database audit logs tagged with sensitivity labels to prove data assets are known and classified for risk treatment prioritization.\n\nDocumented **Data sources**: `db:audit` (table_name, column_name, user). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: db:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=db:audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where classification=\"restricted\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, table_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Data Asset Classification — Sensitive Columns in Query Logs. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "ID.AM-03",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF ID.AM-03 is enforced — Splunk UC-22.7.18: Data Asset Classification — Sensitive Columns in Query Logs.",
                  "ea": "Saved search 'UC-22.7.18' running on db:audit (table_name, column_name, user), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.19",
              "n": "Business Process Impact — Incidents by Critical Application (ID.RA-01)",
              "c": "high",
              "f": "advanced",
              "v": "Aggregates security incidents by business-critical application to inform risk assessment prioritization under Identify.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`notable`; `app_criticality_lookup.csv`",
              "q": "search `notable` earliest=-90d\n| lookup app_criticality_lookup.csv dest OUTPUT criticality\n| stats count by criticality, dest\n| sort - count",
              "m": "(1) Populate dest-to-app mapping; (2) refresh after architecture changes; (3) present in risk workshop; (4) tie to BIA refresh cycle.",
              "z": "Bar chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `notable`; `app_criticality_lookup.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate dest-to-app mapping; (2) refresh after architecture changes; (3) present in risk workshop; (4) tie to BIA refresh cycle.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsearch `notable` earliest=-90d\n| lookup app_criticality_lookup.csv dest OUTPUT criticality\n| stats count by criticality, dest\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Business Process Impact — Incidents by Critical Application (ID.RA-01)** — Aggregates security incidents by business-critical application to inform risk assessment prioritization under Identify.\n\nDocumented **Data sources**: `notable`; `app_criticality_lookup.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Applies an explicit `search` filter to narrow the current result set.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by criticality, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart, Table",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Business Process Impact — Incidents by Critical Application. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "ID.RA-01",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST CSF ID.RA-01 (Risk assessment) is enforced — Splunk UC-22.7.19: Business Process Impact — Incidents by Critical Application.",
                  "ea": "Saved search 'UC-22.7.19' running on notable; app_criticality_lookup.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.20",
              "n": "Control Weakness Heatmap — Failed CIS Benchmark Checks (ID.RA-02)",
              "c": "medium",
              "f": "intermediate",
              "v": "Surfaces hosts with repeated failed configuration benchmark controls to quantify inherent risk before exploitation.",
              "t": "Tenable.sc compliance or OSCAP",
              "d": "`tenable:sc:compliance`",
              "q": "index=compliance sourcetype=tenable:sc:compliance earliest=-14d result=\"Failed\"\n| stats count as failures by host, check_name\n| where failures>3",
              "m": "(1) Map checks to CSF subcategories; (2) assign remediation owners; (3) track burn-down; (4) document residual risk acceptance.",
              "z": "Heatmap, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tenable.sc compliance or OSCAP.\n• Ensure the following data sources are available: `tenable:sc:compliance`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map checks to CSF subcategories; (2) assign remediation owners; (3) track burn-down; (4) document residual risk acceptance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=compliance sourcetype=tenable:sc:compliance earliest=-14d result=\"Failed\"\n| stats count as failures by host, check_name\n| where failures>3\n```\n\nUnderstanding this SPL\n\n**Control Weakness Heatmap — Failed CIS Benchmark Checks (ID.RA-02)** — Surfaces hosts with repeated failed configuration benchmark controls to quantify inherent risk before exploitation.\n\nDocumented **Data sources**: `tenable:sc:compliance`. **App/TA** (typical add-on context): Tenable.sc compliance or OSCAP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: compliance; **sourcetype**: tenable:sc:compliance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=compliance, sourcetype=tenable:sc:compliance, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, check_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failures>3` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Control Weakness Heatmap — Failed CIS Benchmark Checks. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "ID.RA-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF ID.RA-02 is enforced — Splunk UC-22.7.20: Control Weakness Heatmap — Failed CIS Benchmark Checks.",
                  "ea": "Saved search 'UC-22.7.20' running on tenable:sc:compliance, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.21",
              "n": "Lessons Learned — Post-Incident Analyst Search Activity (ID.IM-01)",
              "c": "high",
              "f": "advanced",
              "v": "Tracks analyst searches for IOCs after incidents to evidence continuous improvement and knowledge incorporation.",
              "t": "Splunk Enterprise (`_audit`)",
              "d": "`index=_audit` action=search",
              "q": "index=_audit action=search info=completed earliest=-90d\n| where match(search,\"(?i)ioc|threat_intel|notable\")\n| bucket _time span=7d as week\n| stats dc(user) as analysts, count by week",
              "m": "(1) Tune regex to incident naming convention; (2) exclude bots; (3) include in AAR pack; (4) map to training gaps.",
              "z": "Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise (`_audit`).\n• Ensure the following data sources are available: `index=_audit` action=search.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune regex to incident naming convention; (2) exclude bots; (3) include in AAR pack; (4) map to training gaps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit action=search info=completed earliest=-90d\n| where match(search,\"(?i)ioc|threat_intel|notable\")\n| bucket _time span=7d as week\n| stats dc(user) as analysts, count by week\n```\n\nUnderstanding this SPL\n\n**Lessons Learned — Post-Incident Analyst Search Activity (ID.IM-01)** — Tracks analyst searches for IOCs after incidents to evidence continuous improvement and knowledge incorporation.\n\nDocumented **Data sources**: `index=_audit` action=search. **App/TA** (typical add-on context): Splunk Enterprise (`_audit`). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(search,\"(?i)ioc|threat_intel|notable\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by week** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Lessons Learned — Post-Incident Analyst Search Activity. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "ID.IM-01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF ID.IM-01 is enforced — Splunk UC-22.7.21: Lessons Learned — Post-Incident Analyst Search Activity.",
                  "ea": "Saved search 'UC-22.7.21' running on index=_audit action=search, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.22",
              "n": "Process KPI — Median Days to Remediate High and Critical CVEs (ID.IM-02)",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures remediation velocity for highs and criticals to show risk assessment feedback loop improves over time.",
              "t": "Tenable Add-On for Splunk (Splunkbase 4060)",
              "d": "`tenable:vuln` (first_seen, last_fixed)",
              "q": "index=vuln sourcetype=tenable:vuln severity IN (\"High\",\"Critical\") earliest=-180d\n| eval fix_lag=if(isnotnull(last_fixed), (strptime(last_fixed,\"%Y-%m-%d %H:%M:%S\")-strptime(first_seen,\"%Y-%m-%d %H:%M:%S\"))/86400, null())\n| stats median(fix_lag) as median_days by severity",
              "m": "(1) Confirm timestamp fields from scanner; (2) segment by line of business; (3) quarterly trend to leadership; (4) adjust scanning frequency per ID.IM.",
              "z": "Line chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tenable Add-On for Splunk (Splunkbase 4060).\n• Ensure the following data sources are available: `tenable:vuln` (first_seen, last_fixed).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm timestamp fields from scanner; (2) segment by line of business; (3) quarterly trend to leadership; (4) adjust scanning frequency per ID.IM.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=tenable:vuln severity IN (\"High\",\"Critical\") earliest=-180d\n| eval fix_lag=if(isnotnull(last_fixed), (strptime(last_fixed,\"%Y-%m-%d %H:%M:%S\")-strptime(first_seen,\"%Y-%m-%d %H:%M:%S\"))/86400, null())\n| stats median(fix_lag) as median_days by severity\n```\n\nUnderstanding this SPL\n\n**Process KPI — Median Days to Remediate High and Critical CVEs (ID.IM-02)** — Measures remediation velocity for highs and criticals to show risk assessment feedback loop improves over time.\n\nDocumented **Data sources**: `tenable:vuln` (first_seen, last_fixed). **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: tenable:vuln. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=tenable:vuln, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fix_lag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Process KPI — Median Days to Remediate High and Critical CVEs (ID.IM-02)** — Measures remediation velocity for highs and criticals to show risk assessment feedback loop improves over time.\n\nDocumented **Data sources**: `tenable:vuln` (first_seen, last_fixed). **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Process KPI — Median Days to Remediate High and Critical CVEs. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "ID.IM-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF ID.IM-02 is enforced — Splunk UC-22.7.22: Process KPI — Median Days to Remediate High and Critical CVEs.",
                  "ea": "Saved search 'UC-22.7.22' running on tenable:vuln (first_seen, last_fixed), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.23",
              "n": "Privileged Path — PAM JIT Elevation vs Standing Admin Logons (PR.AA-01)",
              "c": "high",
              "f": "advanced",
              "v": "Compares PAM JIT elevation events to always-on admin logons to reduce standing privilege in line with PR.AA.",
              "t": "CyberArk HEC, Windows Security",
              "d": "`WinEventLog:Security` EventCode=4672; PAM sourcetype",
              "q": "index=windows sourcetype=WinEventLog:Security EventCode=4672 earliest=-24h\n| lookup pam_jit_users.csv user AS TargetUserName OUTPUT jit_enabled\n| eval standing=if(isnull(jit_enabled),1,0)\n| stats sum(standing) as standing_admin_logons by TargetUserName",
              "m": "(1) Ingest PAM session logs if available; (2) map service accounts; (3) drive removal of standing admin; (4) document break-glass exceptions.",
              "z": "Bar chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CyberArk HEC, Windows Security.\n• Ensure the following data sources are available: `WinEventLog:Security` EventCode=4672; PAM sourcetype.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest PAM session logs if available; (2) map service accounts; (3) drive removal of standing admin; (4) document break-glass exceptions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=WinEventLog:Security EventCode=4672 earliest=-24h\n| lookup pam_jit_users.csv user AS TargetUserName OUTPUT jit_enabled\n| eval standing=if(isnull(jit_enabled),1,0)\n| stats sum(standing) as standing_admin_logons by TargetUserName\n```\n\nUnderstanding this SPL\n\n**Privileged Path — PAM JIT Elevation vs Standing Admin Logons (PR.AA-01)** — Compares PAM JIT elevation events to always-on admin logons to reduce standing privilege in line with PR.AA.\n\nDocumented **Data sources**: `WinEventLog:Security` EventCode=4672; PAM sourcetype. **App/TA** (typical add-on context): CyberArk HEC, Windows Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=WinEventLog:Security, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **standing** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by TargetUserName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privileged Path — PAM JIT Elevation vs Standing Admin Logons (PR.AA-01)** — Compares PAM JIT elevation events to always-on admin logons to reduce standing privilege in line with PR.AA.\n\nDocumented **Data sources**: `WinEventLog:Security` EventCode=4672; PAM sourcetype. **App/TA** (typical add-on context): CyberArk HEC, Windows Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Privileged Path — PAM JIT Elevation vs Standing Admin Logons. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "PR.AA-01",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST CSF PR.AA-01 (Authentication) is enforced — Splunk UC-22.7.23: Privileged Path — PAM JIT Elevation vs Standing Admin Logons.",
                  "ea": "Saved search 'UC-22.7.23' running on WinEventLog:Security EventCode=4672; PAM sourcetype, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.24",
              "n": "Non-Human Identity — Service Principal Secret and Certificate Adds (PR.AA-02)",
              "c": "medium",
              "f": "intermediate",
              "v": "Alerts on new service principal credentials that expand machine identity attack surface under PR.AA.",
              "t": "Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110)",
              "d": "`mscs:azure:auditlog`",
              "q": "index=azure sourcetype=mscs:azure:auditlog earliest=-30d activityDisplayName IN (\"Add application\",\"Update application – Certificates and secrets management\")\n| stats count by activityDisplayName, initiatedBy.user.userPrincipalName",
              "m": "(1) Require app registration workflow ticket in lookup; (2) rotate secrets per policy; (3) monthly IAM review.",
              "z": "Table, Time chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110).\n• Ensure the following data sources are available: `mscs:azure:auditlog`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require app registration workflow ticket in lookup; (2) rotate secrets per policy; (3) monthly IAM review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=azure sourcetype=mscs:azure:auditlog earliest=-30d activityDisplayName IN (\"Add application\",\"Update application – Certificates and secrets management\")\n| stats count by activityDisplayName, initiatedBy.user.userPrincipalName\n```\n\nUnderstanding this SPL\n\n**Non-Human Identity — Service Principal Secret and Certificate Adds (PR.AA-02)** — Alerts on new service principal credentials that expand machine identity attack surface under PR.AA.\n\nDocumented **Data sources**: `mscs:azure:auditlog`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: azure; **sourcetype**: mscs:azure:auditlog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=azure, sourcetype=mscs:azure:auditlog, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by activityDisplayName, initiatedBy.user.userPrincipalName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Non-Human Identity — Service Principal Secret and Certificate Adds (PR.AA-02)** — Alerts on new service principal credentials that expand machine identity attack surface under PR.AA.\n\nDocumented **Data sources**: `mscs:azure:auditlog`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Cloud Services (Splunkbase 3110). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Non-Human Identity — Service Principal Secret and Certificate Adds. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "PR.AA-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF PR.AA-02 is enforced — Splunk UC-22.7.24: Non-Human Identity — Service Principal Secret and Certificate Adds.",
                  "ea": "Saved search 'UC-22.7.24' running on mscs:azure:auditlog, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.25",
              "n": "Phishing Simulation Clicks vs Security Awareness Completion (PR.AT-01)",
              "c": "high",
              "f": "intermediate",
              "v": "Joins simulated phishing results with LMS completion records to evidence awareness training effectiveness for PR.AT.",
              "t": "KnowBe4 or O365 ingest",
              "d": "`knowbe4:phish`",
              "q": "index=security sourcetype=knowbe4:phish earliest=-90d\n| stats latest(clicked) as clicked by user\n| lookup lms_training_completion.csv user OUTPUT sec_aware_complete\n| eval gap=if(clicked=\"Yes\" AND sec_aware_complete!=\"true\",1,0)\n| where gap=1",
              "m": "(1) Normalize user keys; (2) assign remedial training; (3) export for HR compliance; (4) refresh after campaigns.",
              "z": "Table, Column chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: KnowBe4 or O365 ingest.\n• Ensure the following data sources are available: `knowbe4:phish`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize user keys; (2) assign remedial training; (3) export for HR compliance; (4) refresh after campaigns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=knowbe4:phish earliest=-90d\n| stats latest(clicked) as clicked by user\n| lookup lms_training_completion.csv user OUTPUT sec_aware_complete\n| eval gap=if(clicked=\"Yes\" AND sec_aware_complete!=\"true\",1,0)\n| where gap=1\n```\n\nUnderstanding this SPL\n\n**Phishing Simulation Clicks vs Security Awareness Completion (PR.AT-01)** — Joins simulated phishing results with LMS completion records to evidence awareness training effectiveness for PR.AT.\n\nDocumented **Data sources**: `knowbe4:phish`. **App/TA** (typical add-on context): KnowBe4 or O365 ingest. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: knowbe4:phish. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=knowbe4:phish, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap=1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Column chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Phishing Simulation Clicks vs Security Awareness Completion. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "PR.AT-01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF PR.AT-01 is enforced — Splunk UC-22.7.25: Phishing Simulation Clicks vs Security Awareness Completion.",
                  "ea": "Saved search 'UC-22.7.25' running on knowbe4:phish, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.26",
              "n": "Encryption in Transit — Deprecated TLS on Internal APIs (PR.DS-01)",
              "c": "medium",
              "f": "advanced",
              "v": "Parses load balancer or proxy access logs for deprecated TLS handshakes supporting data security requirements.",
              "t": "nginx, F5, or AWS ALB access logs",
              "d": "`nginx:access` ssl_protocol",
              "q": "index=proxy sourcetype=nginx:access earliest=-24h\n| where ssl_protocol IN (\"TLSv1\",\"TLSv1.1\",\"SSLv3\")\n| stats count by src_ip, ssl_protocol, uri",
              "m": "(1) Confirm field extraction; (2) emergency patch plan for legacy clients; (3) document compensating network segmentation.",
              "z": "Table, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: nginx, F5, or AWS ALB access logs.\n• Ensure the following data sources are available: `nginx:access` ssl_protocol.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extraction; (2) emergency patch plan for legacy clients; (3) document compensating network segmentation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=nginx:access earliest=-24h\n| where ssl_protocol IN (\"TLSv1\",\"TLSv1.1\",\"SSLv3\")\n| stats count by src_ip, ssl_protocol, uri\n```\n\nUnderstanding this SPL\n\n**Encryption in Transit — Deprecated TLS on Internal APIs (PR.DS-01)** — Parses load balancer or proxy access logs for deprecated TLS handshakes supporting data security requirements.\n\nDocumented **Data sources**: `nginx:access` ssl_protocol. **App/TA** (typical add-on context): nginx, F5, or AWS ALB access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: nginx:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=nginx:access, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where ssl_protocol IN (\"TLSv1\",\"TLSv1.1\",\"SSLv3\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_ip, ssl_protocol, uri** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Encryption in Transit — Deprecated TLS on Internal APIs (PR.DS-01)** — Parses load balancer or proxy access logs for deprecated TLS handshakes supporting data security requirements.\n\nDocumented **Data sources**: `nginx:access` ssl_protocol. **App/TA** (typical add-on context): nginx, F5, or AWS ALB access logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Encryption in Transit — Deprecated TLS on Internal APIs. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count",
              "e": [
                "aws",
                "f5",
                "nginx"
              ],
              "em": [
                "nginx_open"
              ],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "PR.DS-01",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST CSF PR.DS-01 (Data-at-rest protection) is enforced — Splunk UC-22.7.26: Encryption in Transit — Deprecated TLS on Internal APIs.",
                  "ea": "Saved search 'UC-22.7.26' running on nginx:access ssl_protocol, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.27",
              "n": "DLP — Blocked Exfil to Personal Email Domains (PR.DS-02)",
              "c": "high",
              "f": "intermediate",
              "v": "Counts blocked DLP actions toward personal domains proving technical data protection operates.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055)",
              "d": "`ms:o365:management` DLP",
              "q": "index=o365 sourcetype=ms:o365:management Workload=Dlp earliest=-7d\n| where match(Operation,\"(?i)block\") AND match(UserPrincipalName,\"@(gmail|yahoo|hotmail)\\.com\")\n| stats count by PolicyName, UserPrincipalName",
              "m": "(1) Tune allowed domains; (2) HR escalation path; (3) evidence for data handling procedures.",
              "z": "Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055).\n• Ensure the following data sources are available: `ms:o365:management` DLP.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune allowed domains; (2) HR escalation path; (3) evidence for data handling procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:management Workload=Dlp earliest=-7d\n| where match(Operation,\"(?i)block\") AND match(UserPrincipalName,\"@(gmail|yahoo|hotmail)\\.com\")\n| stats count by PolicyName, UserPrincipalName\n```\n\nUnderstanding this SPL\n\n**DLP — Blocked Exfil to Personal Email Domains (PR.DS-02)** — Counts blocked DLP actions toward personal domains proving technical data protection operates.\n\nDocumented **Data sources**: `ms:o365:management` DLP. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:management, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(Operation,\"(?i)block\") AND match(UserPrincipalName,\"@(gmail|yahoo|hotmail)\\.com\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by PolicyName, UserPrincipalName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for DLP — Blocked Exfil to Personal Email Domains. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "PR.DS-02",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST CSF PR.DS-02 (Data-in-transit protection) is enforced — Splunk UC-22.7.27: DLP — Blocked Exfil to Personal Email Domains.",
                  "ea": "Saved search 'UC-22.7.27' running on ms:o365:management DLP, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.28",
              "n": "Platform Integrity — sudoers or nsswitch Changes on Linux (PR.PS-01)",
              "c": "medium",
              "f": "advanced",
              "v": "Monitors file integrity events on authentication stack files for Linux platform security.",
              "t": "Splunk Add-on for Unix and Linux (Splunkbase 833)",
              "d": "`linux:audit`",
              "q": "index=os sourcetype=linux:audit type=PATH earliest=-7d\n| where match(name,\"(?i)/etc/(sudoers|nsswitch.conf|passwd)\")\n| table _time, name, exe, uid",
              "m": "(1) Deploy auditd rules; (2) whitelist automation; (3) alert within minutes; (4) map to change records.",
              "z": "Table, Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Unix and Linux (Splunkbase 833).\n• Ensure the following data sources are available: `linux:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Deploy auditd rules; (2) whitelist automation; (3) alert within minutes; (4) map to change records.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux:audit type=PATH earliest=-7d\n| where match(name,\"(?i)/etc/(sudoers|nsswitch.conf|passwd)\")\n| table _time, name, exe, uid\n```\n\nUnderstanding this SPL\n\n**Platform Integrity — sudoers or nsswitch Changes on Linux (PR.PS-01)** — Monitors file integrity events on authentication stack files for Linux platform security.\n\nDocumented **Data sources**: `linux:audit`. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (Splunkbase 833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux:audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(name,\"(?i)/etc/(sudoers|nsswitch.conf|passwd)\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Platform Integrity — sudoers or nsswitch Changes on Linux (PR.PS-01)**): table _time, name, exe, uid\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Platform Integrity — sudoers or nsswitch Changes on Linux (PR.PS-01)** — Monitors file integrity events on authentication stack files for Linux platform security.\n\nDocumented **Data sources**: `linux:audit`. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (Splunkbase 833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Platform Integrity — sudoers or nsswitch Changes on Linux. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "PR.PS-01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF PR.PS-01 is enforced — Splunk UC-22.7.28: Platform Integrity — sudoers or nsswitch Changes on Linux.",
                  "ea": "Saved search 'UC-22.7.28' running on linux:audit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.29",
              "n": "DNS Resolver Error Rate SLO for Internal Resolvers (PR.IR-01)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks DNS infrastructure error rates as resilience signal for critical name resolution dependency.",
              "t": "BIND or CoreDNS logs",
              "d": "`dns:query`",
              "q": "index=dns sourcetype=dns:query earliest=-24h\n| eval fail=if(match(_raw,\"(?i)SERVFAIL|REFUSED\"),1,0)\n| timechart span=1h sum(fail) as failures",
              "m": "(1) Set SLO thresholds; (2) correlate with DDoS; (3) include in DR tabletop; (4) document failover.",
              "z": "Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: BIND or CoreDNS logs.\n• Ensure the following data sources are available: `dns:query`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Set SLO thresholds; (2) correlate with DDoS; (3) include in DR tabletop; (4) document failover.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns sourcetype=dns:query earliest=-24h\n| eval fail=if(match(_raw,\"(?i)SERVFAIL|REFUSED\"),1,0)\n| timechart span=1h sum(fail) as failures\n```\n\nUnderstanding this SPL\n\n**DNS Resolver Error Rate SLO for Internal Resolvers (PR.IR-01)** — Tracks DNS infrastructure error rates as resilience signal for critical name resolution dependency.\n\nDocumented **Data sources**: `dns:query`. **App/TA** (typical add-on context): BIND or CoreDNS logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns; **sourcetype**: dns:query. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns, sourcetype=dns:query, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1h | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DNS Resolver Error Rate SLO for Internal Resolvers (PR.IR-01)** — Tracks DNS infrastructure error rates as resilience signal for critical name resolution dependency.\n\nDocumented **Data sources**: `dns:query`. **App/TA** (typical add-on context): BIND or CoreDNS logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) 
as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for DNS Resolver Error Rate SLO for Internal Resolvers. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1h | sort - agg_value",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "PR.IR-01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF PR.IR-01 is enforced — Splunk UC-22.7.29: DNS Resolver Error Rate SLO for Internal Resolvers.",
                  "ea": "Saved search 'UC-22.7.29' running on dns:query, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.30",
              "n": "Storage Path — Cluster Failover or Multipath Events (PR.IR-02)",
              "c": "medium",
              "f": "advanced",
              "v": "Surfaces storage or hypervisor failover events evidencing infrastructure resilience monitoring.",
              "t": "VMware or SAN logs",
              "d": "`vmware:inv`",
              "q": "index=infra sourcetype=vmware:inv earliest=-7d failover OR \"path down\"\n| stats count by host, Message",
              "m": "(1) Normalize messages per vendor; (2) alert spikes; (3) tie to capacity planning.",
              "z": "Table, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: VMware or SAN logs.\n• Ensure the following data sources are available: `vmware:inv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize messages per vendor; (2) alert spikes; (3) tie to capacity planning.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=infra sourcetype=vmware:inv earliest=-7d failover OR \"path down\"\n| stats count by host, Message\n```\n\nUnderstanding this SPL\n\n**Storage Path — Cluster Failover or Multipath Events (PR.IR-02)** — Surfaces storage or hypervisor failover events evidencing infrastructure resilience monitoring.\n\nDocumented **Data sources**: `vmware:inv`. **App/TA** (typical add-on context): VMware or SAN logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: infra; **sourcetype**: vmware:inv. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=infra, sourcetype=vmware:inv, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, Message** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Storage Path — Cluster Failover or Multipath Events. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "vmware"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "PR.IR-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF PR.IR-02 is enforced — Splunk UC-22.7.30: Storage Path — Cluster Failover or Multipath Events.",
                  "ea": "Saved search 'UC-22.7.30' running on vmware:inv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.31",
              "n": "EDR Heartbeat Gap Beyond Policy SLA (DE.CM-01)",
              "c": "high",
              "f": "intermediate",
              "v": "Detects endpoints without EDR telemetry within SLA window for continuous monitoring coverage.",
              "t": "CrowdStrike or Defender HEC",
              "d": "`crowdstrike:hosts`",
              "q": "index=edr sourcetype=crowdstrike:hosts earliest=-1d\n| eval last_seen_epoch=strptime(last_seen,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval gap_hours=(now()-last_seen_epoch)/3600\n| where gap_hours>24",
              "m": "(1) Align with policy hours; (2) auto-ticket stale agents; (3) map to CMDB ownership.",
              "z": "Table, Map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: CrowdStrike or Defender HEC.\n• Ensure the following data sources are available: `crowdstrike:hosts`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align with policy hours; (2) auto-ticket stale agents; (3) map to CMDB ownership.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=crowdstrike:hosts earliest=-1d\n| eval last_seen_epoch=strptime(last_seen,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval gap_hours=(now()-last_seen_epoch)/3600\n| where gap_hours>24\n```\n\nUnderstanding this SPL\n\n**EDR Heartbeat Gap Beyond Policy SLA (DE.CM-01)** — Detects endpoints without EDR telemetry within SLA window for continuous monitoring coverage.\n\nDocumented **Data sources**: `crowdstrike:hosts`. **App/TA** (typical add-on context): CrowdStrike or Defender HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: crowdstrike:hosts. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=crowdstrike:hosts, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **last_seen_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **gap_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap_hours>24` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**EDR Heartbeat Gap Beyond Policy SLA (DE.CM-01)** — Detects endpoints without EDR telemetry within SLA window for continuous monitoring coverage.\n\nDocumented **Data sources**: `crowdstrike:hosts`. **App/TA** (typical add-on context): CrowdStrike or Defender HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Map",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for EDR Heartbeat Gap Beyond Policy SLA. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "DE.CM-01",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST CSF DE.CM-01 (Network monitoring) is enforced — Splunk UC-22.7.31: EDR Heartbeat Gap Beyond Policy SLA.",
                  "ea": "Saved search 'UC-22.7.31' running on crowdstrike:hosts, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.32",
              "n": "Administrative API Logging Volume Drop vs Baseline (DE.CM-02)",
              "c": "medium",
              "f": "advanced",
              "v": "Alerts when CloudTrail-style administrative logging volume drops suddenly, indicating pipeline or configuration tampering.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876)",
              "d": "`aws:cloudtrail`",
              "q": "index=aws sourcetype=aws:cloudtrail earliest=-14d\n| bucket _time span=1h as h\n| stats count by h\n| eventstats median(count) as med\n| eval ratio=count/med\n| where ratio < 0.2",
              "m": "(1) Seasonal adjust for holidays; (2) exclude account suspensions; (3) incident response if confirmed.",
              "z": "Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876).\n• Ensure the following data sources are available: `aws:cloudtrail`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Seasonal adjust for holidays; (2) exclude account suspensions; (3) incident response if confirmed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=aws:cloudtrail earliest=-14d\n| bucket _time span=1h as h\n| stats count by h\n| eventstats median(count) as med\n| eval ratio=count/med\n| where ratio < 0.2\n```\n\nUnderstanding this SPL\n\n**Administrative API Logging Volume Drop vs Baseline (DE.CM-02)** — Alerts when CloudTrail-style administrative logging volume drops suddenly, indicating pipeline or configuration tampering.\n\nDocumented **Data sources**: `aws:cloudtrail`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=aws:cloudtrail, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by h** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ratio < 0.2` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Administrative API Logging Volume Drop vs Baseline (DE.CM-02)** — Alerts when CloudTrail-style administrative logging volume drops suddenly, indicating pipeline or configuration tampering.\n\nDocumented **Data sources**: `aws:cloudtrail`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Administrative API Logging Volume Drop vs Baseline. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "DE.CM-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF DE.CM-02 is enforced — Splunk UC-22.7.32: Administrative API Logging Volume Drop vs Baseline.",
                  "ea": "Saved search 'UC-22.7.32' running on aws:cloudtrail, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "PR.PS-04",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of NIST CSF PR.PS-04 (Log generation) — Splunk UC-22.7.32: Administrative API Logging Volume Drop vs Baseline.",
                  "ea": "Saved search 'UC-22.7.32' running on aws:cloudtrail, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.nist.gov/cyberframework"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.33",
              "n": "Proxy Denies Toward Young Threat-Intel Domains (DE.CM-03)",
              "c": "high",
              "f": "intermediate",
              "v": "Uses threat intel first-seen dates in denied web requests for continuous monitoring of emerging C2.",
              "t": "Splunk ES Threat Intelligence",
              "d": "`proxy` sourcetype; threat intel lookup",
              "q": "index=proxy sourcetype=bluecoat:proxy action=BLOCK earliest=-24h\n| lookup threat_intel_domain domain as dest_host OUTPUT first_seen\n| eval domain_age_days=(now()-first_seen)/86400\n| where domain_age_days<7",
              "m": "(1) Refresh intel feeds; (2) tune for CDNs; (3) escalate high-volume denies.",
              "z": "Table, Column chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ES Threat Intelligence.\n• Ensure the following data sources are available: `proxy` sourcetype; threat intel lookup.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Refresh intel feeds; (2) tune for CDNs; (3) escalate high-volume denies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=bluecoat:proxy action=BLOCK earliest=-24h\n| lookup threat_intel_domain domain as dest_host OUTPUT first_seen\n| eval domain_age_days=(now()-first_seen)/86400\n| where domain_age_days<7\n```\n\nUnderstanding this SPL\n\n**Proxy Denies Toward Young Threat-Intel Domains (DE.CM-03)** — Uses threat intel first-seen dates in denied web requests for continuous monitoring of emerging C2.\n\nDocumented **Data sources**: `proxy` sourcetype; threat intel lookup. **App/TA** (typical add-on context): Splunk ES Threat Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: bluecoat:proxy. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=bluecoat:proxy, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **domain_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where domain_age_days<7` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Proxy Denies Toward Young Threat-Intel Domains (DE.CM-03)** — Uses threat intel first-seen dates in denied web requests for continuous monitoring of emerging C2.\n\nDocumented **Data sources**: `proxy` sourcetype; threat intel lookup. **App/TA** (typical add-on context): Splunk ES Threat Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Column chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Proxy Denies Toward Young Threat-Intel Domains. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "DE.CM-03",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST CSF DE.CM-03 (Personnel activity monitoring) is enforced — Splunk UC-22.7.33: Proxy Denies Toward Young Threat-Intel Domains.",
                  "ea": "Saved search 'UC-22.7.33' running on proxy sourcetype; threat intel lookup, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.34",
              "n": "Database Connection Storm from Application Service Account (DE.CM-04)",
              "c": "medium",
              "f": "advanced",
              "v": "Statistical spike in DB connections by application service account for compromised app or insider detection.",
              "t": "MySQL audit",
              "d": "`db:mysql:audit`",
              "q": "index=database sourcetype=db:mysql:audit earliest=-6h\n| stats count by user, src\n| eventstats median(count) as med by user\n| where count > med*10 AND count>1000",
              "m": "(1) Baseline per app; (2) correlate with deploys; (3) block at WAF optional.",
              "z": "Table, Time chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: MySQL audit.\n• Ensure the following data sources are available: `db:mysql:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Baseline per app; (2) correlate with deploys; (3) block at WAF optional.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=db:mysql:audit earliest=-6h\n| stats count by user, src\n| eventstats median(count) as med by user\n| where count > med*10 AND count>1000\n```\n\nUnderstanding this SPL\n\n**Database Connection Storm from Application Service Account (DE.CM-04)** — Statistical spike in DB connections by application service account for compromised app or insider detection.\n\nDocumented **Data sources**: `db:mysql:audit`. **App/TA** (typical add-on context): MySQL audit. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: db:mysql:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=db:mysql:audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count > med*10 AND count>1000` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Database Connection Storm from Application Service Account. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mysql"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "DE.CM-04",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF DE.CM-04 is enforced — Splunk UC-22.7.34: Database Connection Storm from Application Service Account.",
                  "ea": "Saved search 'UC-22.7.34' running on db:mysql:audit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.35",
              "n": "Certificate Transparency — New Public Cert for Corporate Brand (DE.CM-05)",
              "c": "high",
              "f": "intermediate",
              "v": "Ingests CT log events for certificates issued for company domains to catch shadow issuance.",
              "t": "Certstream or vendor HEC",
              "d": "`ct:log`",
              "q": "index=ct sourcetype=ct:log earliest=-7d\n| where match(names,\"(?i)mycompany\\.(com|net)\") AND !match(names,\"\\*\\.cdn\")\n| stats earliest(_time) as first by names, issuer",
              "m": "(1) Allowlist approved CAs; (2) revoke fraudulent; (3) feed brand protection.",
              "z": "Table, Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Certstream or vendor HEC.\n• Ensure the following data sources are available: `ct:log`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Allowlist approved CAs; (2) revoke fraudulent; (3) feed brand protection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ct sourcetype=ct:log earliest=-7d\n| where match(names,\"(?i)mycompany\\.(com|net)\") AND !match(names,\"\\*\\.cdn\")\n| stats earliest(_time) as first by names, issuer\n```\n\nUnderstanding this SPL\n\n**Certificate Transparency — New Public Cert for Corporate Brand (DE.CM-05)** — Ingests CT log events for certificates issued for company domains to catch shadow issuance.\n\nDocumented **Data sources**: `ct:log`. **App/TA** (typical add-on context): Certstream or vendor HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ct; **sourcetype**: ct:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ct, sourcetype=ct:log, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(names,\"(?i)mycompany\\.(com|net)\") AND !match(names,\"\\*\\.cdn\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by names, issuer** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Certificate Transparency — New Public Cert for Corporate Brand. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "DE.CM-05",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF DE.CM-05 is enforced — Splunk UC-22.7.35: Certificate Transparency — New Public Cert for Corporate Brand.",
                  "ea": "Saved search 'UC-22.7.35' running on ct:log, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.36",
              "n": "Lateral Movement Chain — Auth, RDP, and Process Create Same Src (DE.AE-01)",
              "c": "medium",
              "f": "advanced",
              "v": "Joins authentication, RDP, and process events within a window for adverse event analysis.",
              "t": "Windows Security, Sysmon",
              "d": "`WinEventLog:Security`, Sysmon",
              "q": "index=windows earliest=-24h (sourcetype=WinEventLog:Security EventCode=4624 OR sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational EventCode=1)\n| eval src=coalesce(src_ip, IpAddress)\n| transaction src maxspan=300s maxevents=50\n| where mvcount(EventCode)>2",
              "m": "(1) Tune for VDI noise; (2) map to ATT&CK; (3) auto-notable optional.",
              "z": "Timeline, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows Security, Sysmon.\n• Ensure the following data sources are available: `WinEventLog:Security`, Sysmon.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune for VDI noise; (2) map to ATT&CK; (3) auto-notable optional.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows earliest=-24h (sourcetype=WinEventLog:Security EventCode=4624 OR sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational EventCode=1)\n| eval src=coalesce(src_ip, IpAddress)\n| transaction src maxspan=300s maxevents=50\n| where mvcount(EventCode)>2\n```\n\nUnderstanding this SPL\n\n**Lateral Movement Chain — Auth, RDP, and Process Create Same Src (DE.AE-01)** — Joins authentication, RDP, and process events within a window for adverse event analysis.\n\nDocumented **Data sources**: `WinEventLog:Security`, Sysmon. **App/TA** (typical add-on context): Windows Security, Sysmon. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security, XmlWinEventLog:Microsoft-Windows-Sysmon/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=WinEventLog:Security, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **src** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• Filters the current rows with `where mvcount(EventCode)>2` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Lateral Movement Chain — Auth, RDP, and Process Create Same Src (DE.AE-01)** — Joins authentication, RDP, and process events within a window for adverse event analysis.\n\nDocumented **Data sources**: `WinEventLog:Security`, Sysmon. **App/TA** (typical add-on context): Windows Security, Sysmon. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Lateral Movement Chain — Auth, RDP, and Process Create Same Src. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "DE.AE-01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF DE.AE-01 is enforced — Splunk UC-22.7.36: Lateral Movement Chain — Auth, RDP, and Process Create Same Src.",
                  "ea": "Saved search 'UC-22.7.36' running on WinEventLog:Security, Sysmon, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.37",
              "n": "Anomaly on Outbound Bytes from Database Subnet (DE.AE-02)",
              "c": "high",
              "f": "intermediate",
              "v": "Uses statistical anomaly detection on east-west bytes for adverse event triage.",
              "t": "Zeek `conn`",
              "d": "`zeek:conn`",
              "q": "index=network sourcetype=zeek:conn id.orig_h=10.50.* earliest=-7d\n| timechart span=1h sum(orig_bytes) as b\n| anomalydetection b action=annotate",
              "m": "(1) Label DB VLANs; (2) suppress backup windows; (3) integrate with IR runbook.",
              "z": "Time chart, Anomaly overlay",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Zeek `conn`.\n• Ensure the following data sources are available: `zeek:conn`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Label DB VLANs; (2) suppress backup windows; (3) integrate with IR runbook.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=zeek:conn id.orig_h=10.50.* earliest=-7d\n| timechart span=1h sum(orig_bytes) as b\n| anomalydetection b action=annotate\n```\n\nUnderstanding this SPL\n\n**Anomaly on Outbound Bytes from Database Subnet (DE.AE-02)** — Uses statistical anomaly detection on east-west bytes for adverse event triage.\n\nDocumented **Data sources**: `zeek:conn`. **App/TA** (typical add-on context): Zeek `conn`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: zeek:conn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=zeek:conn, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Anomaly on Outbound Bytes from Database Subnet (DE.AE-02)**): anomalydetection b action=annotate\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1h | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Anomaly on Outbound Bytes from Database Subnet (DE.AE-02)** — Uses statistical anomaly detection on east-west bytes for adverse event triage.\n\nDocumented **Data sources**: `zeek:conn`. **App/TA** (typical add-on context): Zeek `conn`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Time chart, Anomaly overlay",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Anomaly on Outbound Bytes from Database Subnet. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1h | sort - agg_value",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "DE.AE-02",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST CSF DE.AE-02 (Anomalies and events analysis) is enforced — Splunk UC-22.7.37: Anomaly on Outbound Bytes from Database Subnet.",
                  "ea": "Saved search 'UC-22.7.37' running on zeek:conn, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.38",
              "n": "Risk Index Spike for Privileged Accounts (DE.AE-03)",
              "c": "medium",
              "f": "advanced",
              "v": "Surfaces elevated Splunk ES risk scores for privileged users for layered adverse event analysis.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`from datamodel Risk.All_Risk`",
              "q": "| from datamodel Risk.All_Risk earliest=-7d\n| search risk_object_type=user risk_object=*admin*\n| stats sum(risk_score) as score by risk_object\n| where score>50",
              "m": "(1) Calibrate thresholds; (2) enrich with geo and MFA; (3) pair with analyst workflow.",
              "z": "Table, Scatter",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `from datamodel Risk.All_Risk`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Calibrate thresholds; (2) enrich with geo and MFA; (3) pair with analyst workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| from datamodel Risk.All_Risk earliest=-7d\n| search risk_object_type=user risk_object=*admin*\n| stats sum(risk_score) as score by risk_object\n| where score>50\n```\n\nUnderstanding this SPL\n\n**Risk Index Spike for Privileged Accounts (DE.AE-03)** — Surfaces elevated Splunk ES risk scores for privileged users for layered adverse event analysis.\n\nDocumented **Data sources**: `from datamodel Risk.All_Risk`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `from` (dataset / Federated Search) — verify dataset availability and permissions.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by risk_object** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where score>50` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Scatter",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Risk Index Spike for Privileged Accounts. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "DE.AE-03",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF DE.AE-03 is enforced — Splunk UC-22.7.38: Risk Index Spike for Privileged Accounts.",
                  "ea": "Saved search 'UC-22.7.38' running on from datamodel Risk.All_Risk, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.39",
              "n": "IR Ticket Stuck in Containment Beyond SLA (RS.MA-01)",
              "c": "high",
              "f": "intermediate",
              "v": "Measures security incidents stuck in containment state beyond SLA for incident management effectiveness.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`snow:incident`",
              "q": "index=itsm sourcetype=snow:incident category=security earliest=-30d state=\"3\"\n| eval hours_in_state=(now()-strptime(u_containment_start,\"%Y-%m-%d %H:%M:%S\"))/3600\n| where isnotnull(u_containment_start) AND hours_in_state>8",
              "m": "(1) Map state numbers to your instance; (2) page IR lead; (3) document blockers.",
              "z": "Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `snow:incident`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map state numbers to your instance; (2) page IR lead; (3) document blockers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=snow:incident category=security earliest=-30d state=\"3\"\n| eval hours_in_state=(now()-strptime(u_containment_start,\"%Y-%m-%d %H:%M:%S\"))/3600\n| where isnotnull(u_containment_start) AND hours_in_state>8\n```\n\nUnderstanding this SPL\n\n**IR Ticket Stuck in Containment Beyond SLA (RS.MA-01)** — Measures security incidents stuck in containment state beyond SLA for incident management effectiveness.\n\nDocumented **Data sources**: `snow:incident`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=snow:incident, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hours_in_state** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(u_containment_start) AND hours_in_state>8` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for IR Ticket Stuck in Containment Beyond SLA. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RS.MA-01",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST CSF RS.MA-01 (Incident management) is enforced — Splunk UC-22.7.39: IR Ticket Stuck in Containment Beyond SLA.",
                  "ea": "Saved search 'UC-22.7.39' running on snow:incident, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.40",
              "n": "SOAR Case Backlog Aging by Severity (RS.MA-02)",
              "c": "medium",
              "f": "advanced",
              "v": "Lists open Splunk SOAR cases older than threshold by severity for incident management hygiene.",
              "t": "Splunk SOAR",
              "d": "`phantom:artifact`",
              "q": "index=soar sourcetype=phantom:artifact status=open earliest=-90d\n| eval age_days=(now()-_time)/86400\n| where age_days>7\n| stats values(name) as cases by severity",
              "m": "(1) Align severity with policy; (2) weekly war-room review; (3) auto-escalation rules.",
              "z": "Table, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk SOAR.\n• Ensure the following data sources are available: `phantom:artifact`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align severity with policy; (2) weekly war-room review; (3) auto-escalation rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=soar sourcetype=phantom:artifact status=open earliest=-90d\n| eval age_days=(now()-_time)/86400\n| where age_days>7\n| stats values(name) as cases by severity\n```\n\nUnderstanding this SPL\n\n**SOAR Case Backlog Aging by Severity (RS.MA-02)** — Lists open Splunk SOAR cases older than threshold by severity for incident management hygiene.\n\nDocumented **Data sources**: `phantom:artifact`. **App/TA** (typical add-on context): Splunk SOAR. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: soar; **sourcetype**: phantom:artifact. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=soar, sourcetype=phantom:artifact, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>7` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar chart",
              "script": "",
              "premium": "Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for SOAR Case Backlog Aging by Severity. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RS.MA-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF RS.MA-02 is enforced — Splunk UC-22.7.40: SOAR Case Backlog Aging by Severity.",
                  "ea": "Saved search 'UC-22.7.40' running on phantom:artifact, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.41",
              "n": "Root Cause Field Completeness on Closed Incidents (RS.AN-01)",
              "c": "high",
              "f": "advanced",
              "v": "Ensures analysts filled root-cause codes when closing incidents for after-action analysis quality.",
              "t": "ServiceNow",
              "d": "`snow:incident`",
              "q": "index=itsm sourcetype=snow:incident earliest=-90d state=7\n| where isnull(u_root_cause_code) OR u_root_cause_code=\"\"\n| stats count by assignment_group",
              "m": "(1) Make field mandatory in workflow; (2) train SOC; (3) export for RC.IM.",
              "z": "Bar chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ServiceNow.\n• Ensure the following data sources are available: `snow:incident`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Make field mandatory in workflow; (2) train SOC; (3) export for RC.IM.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=snow:incident earliest=-90d state=7\n| where isnull(u_root_cause_code) OR u_root_cause_code=\"\"\n| stats count by assignment_group\n```\n\nUnderstanding this SPL\n\n**Root Cause Field Completeness on Closed Incidents (RS.AN-01)** — Ensures analysts filled root-cause codes when closing incidents for after-action analysis quality.\n\nDocumented **Data sources**: `snow:incident`. **App/TA** (typical add-on context): ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=snow:incident, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(u_root_cause_code) OR u_root_cause_code=\"\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by assignment_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Root Cause Field Completeness on Closed Incidents. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RS.AN-01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF RS.AN-01 is enforced — Splunk UC-22.7.41: Root Cause Field Completeness on Closed Incidents.",
                  "ea": "Saved search 'UC-22.7.41' running on snow:incident, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.42",
              "n": "Composite Timeline — Notable, AV, and Proxy Same Host One Hour (RS.AN-02)",
              "c": "medium",
              "f": "intermediate",
              "v": "Builds composite timeline per host joining multiple data sources for incident analysis depth.",
              "t": "ES, Defender, proxy",
              "d": "multiple indexes",
              "q": "search `notable` earliest=-1d\n| stats earliest(_time) as t1 by dest\n| join dest [| search index=av sourcetype=defender:alert earliest=-1d | stats earliest(_time) as t2 by ComputerName | rename ComputerName as dest ]\n| eval delta=abs(t1-t2)\n| where delta<3600",
              "m": "(1) Normalize host field; (2) use tstats for scale at enterprise; (3) save as investigation macro.",
              "z": "Timeline, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: ES, Defender, proxy.\n• Ensure the following data sources are available: multiple indexes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize host field; (2) use tstats for scale at enterprise; (3) save as investigation macro.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsearch `notable` earliest=-1d\n| stats earliest(_time) as t1 by dest\n| join dest [| search index=av sourcetype=defender:alert earliest=-1d | stats earliest(_time) as t2 by ComputerName | rename ComputerName as dest ]\n| eval delta=abs(t1-t2)\n| where delta<3600\n```\n\nUnderstanding this SPL\n\n**Composite Timeline — Notable, AV, and Proxy Same Host One Hour (RS.AN-02)** — Builds composite timeline per host joining multiple data sources for incident analysis depth.\n\nDocumented **Data sources**: multiple indexes. **App/TA** (typical add-on context): ES, Defender, proxy. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delta<3600` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Composite Timeline — Notable, AV, and Proxy Same Host One Hour (RS.AN-02)** — Builds composite timeline per host joining multiple data sources for incident analysis depth.\n\nDocumented **Data sources**: multiple indexes. **App/TA** (typical add-on context): ES, Defender, proxy. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Composite Timeline — Notable, AV, and Proxy Same Host One Hour. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RS.AN-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF RS.AN-02 is enforced — Splunk UC-22.7.42: Composite Timeline — Notable, AV, and Proxy Same Host One Hour.",
                  "ea": "Saved search 'UC-22.7.42' running on multiple indexes, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.43",
              "n": "Executive Paging Lag After Sev-1 Playbook Start (RS.CO-01)",
              "c": "medium",
              "f": "advanced",
              "v": "Verifies paging actions fired within policy window after high-severity SOAR playbook start.",
              "t": "Splunk SOAR, PagerDuty",
              "d": "`phantom:action_run`",
              "q": "index=soar sourcetype=phantom:action_run earliest=-90d action=\"send pager\"\n| stats earliest(_time) as page_time by playbook_run_id\n| join playbook_run_id [| search index=soar sourcetype=phantom:playbook_run severity=1 | rename _time as pb_time | fields playbook_run_id, pb_time]\n| eval lag_sec=page_time-pb_time\n| where lag_sec>1800",
              "m": "(1) Standardize severity; (2) test quarterly; (3) document comms tree.",
              "z": "Table, Histogram",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk SOAR, PagerDuty.\n• Ensure the following data sources are available: `phantom:action_run`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardize severity; (2) test quarterly; (3) document comms tree.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=soar sourcetype=phantom:action_run earliest=-90d action=\"send pager\"\n| stats earliest(_time) as page_time by playbook_run_id\n| join playbook_run_id [| search index=soar sourcetype=phantom:playbook_run severity=1 | rename _time as pb_time | fields playbook_run_id, pb_time]\n| eval lag_sec=page_time-pb_time\n| where lag_sec>1800\n```\n\nUnderstanding this SPL\n\n**Executive Paging Lag After Sev-1 Playbook Start (RS.CO-01)** — Verifies paging actions fired within policy window after high-severity SOAR playbook start.\n\nDocumented **Data sources**: `phantom:action_run`. **App/TA** (typical add-on context): Splunk SOAR, PagerDuty. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: soar; **sourcetype**: phantom:action_run. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=soar, sourcetype=phantom:action_run, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by playbook_run_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **lag_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lag_sec>1800` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Histogram",
              "script": "",
              "premium": "Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Executive Paging Lag After Sev-1 Playbook Start. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "pagerduty"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RS.CO-01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF RS.CO-01 is enforced — Splunk UC-22.7.43: Executive Paging Lag After Sev-1 Playbook Start.",
                  "ea": "Saved search 'UC-22.7.43' running on phantom:action_run, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.44",
              "n": "Legal Hold Population — Elevated File Export Activity (RS.CO-02)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks downloads for users on legal hold to align insider communication risk with coordinated response procedures.",
              "t": "O365 management",
              "d": "`ms:o365:management`",
              "q": "index=o365 sourcetype=ms:o365:management Operation=FileDownloaded earliest=-30d\n| lookup legal_hold_users.csv UserPrincipalName OUTPUT on_hold\n| where on_hold=\"true\"\n| bucket _time span=1d\n| stats count by UserPrincipalName, _time",
              "m": "(1) Coordinate with legal; (2) strict access to dashboard; (3) align to RS.CO procedures.",
              "z": "Table, Time chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: O365 management.\n• Ensure the following data sources are available: `ms:o365:management`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Coordinate with legal; (2) strict access to dashboard; (3) align to RS.CO procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:management Operation=FileDownloaded earliest=-30d\n| lookup legal_hold_users.csv UserPrincipalName OUTPUT on_hold\n| where on_hold=\"true\"\n| bucket _time span=1d\n| stats count by UserPrincipalName, _time\n```\n\nUnderstanding this SPL\n\n**Legal Hold Population — Elevated File Export Activity (RS.CO-02)** — Tracks downloads for users on legal hold to align insider communication risk with coordinated response procedures.\n\nDocumented **Data sources**: `ms:o365:management`. **App/TA** (typical add-on context): O365 management. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:management, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where on_hold=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by UserPrincipalName, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Legal Hold Population — Elevated File Export Activity. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RS.CO-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF RS.CO-02 is enforced — Splunk UC-22.7.44: Legal Hold Population — Elevated File Export Activity.",
                  "ea": "Saved search 'UC-22.7.44' running on ms:o365:management, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.45",
              "n": "EDR Host Isolation Action Success Rate (RS.MI-01)",
              "c": "high",
              "f": "advanced",
              "v": "Measures isolate-host action success rate as mitigation effectiveness evidence.",
              "t": "Microsoft Defender TA",
              "d": "`defender:action`",
              "q": "index=edr sourcetype=defender:action action=isolate earliest=-90d\n| stats count as attempts, count(eval(status=\"success\")) as ok by host\n| eval rate=round(100*ok/attempts,1)\n| where rate<95",
              "m": "(1) Retry logic review; (2) offline host handling; (3) map to RS.MI.",
              "z": "Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Microsoft Defender TA.\n• Ensure the following data sources are available: `defender:action`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Retry logic review; (2) offline host handling; (3) map to RS.MI.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=defender:action action=isolate earliest=-90d\n| stats count as attempts, count(eval(status=\"success\")) as ok by host\n| eval rate=round(100*ok/attempts,1)\n| where rate<95\n```\n\nUnderstanding this SPL\n\n**EDR Host Isolation Action Success Rate (RS.MI-01)** — Measures isolate-host action success rate as mitigation effectiveness evidence.\n\nDocumented **Data sources**: `defender:action`. **App/TA** (typical add-on context): Microsoft Defender TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: defender:action. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=defender:action, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where rate<95` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for EDR Host Isolation Action Success Rate. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "defender"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RS.MI-01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF RS.MI-01 is enforced — Splunk UC-22.7.45: EDR Host Isolation Action Success Rate.",
                  "ea": "Saved search 'UC-22.7.45' running on defender:action, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.46",
              "n": "Scheduled Restore Test Outcomes vs Policy Frequency (RC.RP-01)",
              "c": "high",
              "f": "advanced",
              "v": "Tracks restore test jobs for recovery plan execution evidence beyond daily backup success.",
              "t": "Cohesity or Commvault",
              "d": "`cohesity:restore_test`",
              "q": "index=backup sourcetype=cohesity:restore_test earliest=-365d\n| stats latest(status) as last_status, latest(_time) as last_run by policy_name\n| eval days_since=(now()-last_run)/86400\n| where days_since>90 OR last_status!=\"success\"",
              "m": "(1) Align to RTO tiers; (2) document failures; (3) executive attestation.",
              "z": "Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Cohesity or Commvault.\n• Ensure the following data sources are available: `cohesity:restore_test`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align to RTO tiers; (2) document failures; (3) executive attestation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=cohesity:restore_test earliest=-365d\n| stats latest(status) as last_status, latest(_time) as last_run by policy_name\n| eval days_since=(now()-last_run)/86400\n| where days_since>90 OR last_status!=\"success\"\n```\n\nUnderstanding this SPL\n\n**Scheduled Restore Test Outcomes vs Policy Frequency (RC.RP-01)** — Tracks restore test jobs for recovery plan execution evidence beyond daily backup success.\n\nDocumented **Data sources**: `cohesity:restore_test`. **App/TA** (typical add-on context): Cohesity or Commvault. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: cohesity:restore_test. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=cohesity:restore_test, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by policy_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since>90 OR last_status!=\"success\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Scheduled Restore Test Outcomes vs Policy Frequency. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "commvault",
                "hashicorp"
              ],
              "em": [
                "hashicorp_vault"
              ],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RC.RP-01",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST CSF RC.RP-01 (Recovery plan execution) is enforced — Splunk UC-22.7.46: Scheduled Restore Test Outcomes vs Policy Frequency.",
                  "ea": "Saved search 'UC-22.7.46' running on cohesity:restore_test, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.47",
              "n": "AD Forest Recovery Drill — Directory Service Restore Events (RC.RP-02)",
              "c": "medium",
              "f": "intermediate",
              "v": "Captures authoritative restore events in lab forest drills for recovery execution proof.",
              "t": "Windows Directory Service log",
              "d": "`WinEventLog:Directory Service`",
              "q": "index=windows sourcetype=\"WinEventLog:Directory Service\" earliest=-180d (EventCode=1109 OR EventCode=1110)\n| table _time, ComputerName, EventCode, Message",
              "m": "(1) Tag lab vs prod; (2) store evidence pack; (3) update RC.RP narrative.",
              "z": "Table, Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows Directory Service log.\n• Ensure the following data sources are available: `WinEventLog:Directory Service`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag lab vs prod; (2) store evidence pack; (3) update RC.RP narrative.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Directory Service\" earliest=-180d (EventCode=1109 OR EventCode=1110)\n| table _time, ComputerName, EventCode, Message\n```\n\nUnderstanding this SPL\n\n**AD Forest Recovery Drill — Directory Service Restore Events (RC.RP-02)** — Captures authoritative restore events in lab forest drills for recovery execution proof.\n\nDocumented **Data sources**: `WinEventLog:Directory Service`. **App/TA** (typical add-on context): Windows Directory Service log. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Directory Service. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Directory Service\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **AD Forest Recovery Drill — Directory Service Restore Events (RC.RP-02)**): table _time, ComputerName, EventCode, Message\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for AD Forest Recovery Drill — Directory Service Restore Events. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RC.RP-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF RC.RP-02 is enforced — Splunk UC-22.7.47: AD Forest Recovery Drill — Directory Service Restore Events.",
                  "ea": "Saved search 'UC-22.7.47' running on WinEventLog:Directory Service, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.48",
              "n": "Multi-Region Failover — Health-Check Driven DNS Answer Changes (RC.RP-03)",
              "c": "high",
              "f": "advanced",
              "v": "Monitors GSLB or Route53 answer sets proving multi-region recovery execution.",
              "t": "Route53 query logs",
              "d": "`route53`",
              "q": "index=dns sourcetype=route53 earliest=-30d\n| where match(query_name,\"www\\.prod\\.example\\.com\") AND message_type=\"RESPONSE\"\n| stats values(rdata) as answers by bin(_time,5m)\n| where mvcount(answers)>1",
              "m": "(1) Document expected flip patterns; (2) correlate with incident tickets; (3) tabletop validation.",
              "z": "Time chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Route53 query logs.\n• Ensure the following data sources are available: `route53`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Document expected flip patterns; (2) correlate with incident tickets; (3) tabletop validation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dns sourcetype=route53 earliest=-30d\n| where match(query_name,\"www\\.prod\\.example\\.com\") AND message_type=\"RESPONSE\"\n| stats values(rdata) as answers by bin(_time,5m)\n| where mvcount(answers)>1\n```\n\nUnderstanding this SPL\n\n**Multi-Region Failover — Health-Check Driven DNS Answer Changes (RC.RP-03)** — Monitors GSLB or Route53 answer sets proving multi-region recovery execution.\n\nDocumented **Data sources**: `route53`. **App/TA** (typical add-on context): Route53 query logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dns; **sourcetype**: route53. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dns, sourcetype=route53, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(query_name,\"www\\.prod\\.example\\.com\") AND message_type=\"RESPONSE\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by bin(_time,5m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mvcount(answers)>1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Multi-Region Failover — Health-Check Driven DNS Answer Changes. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RC.RP-03",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF RC.RP-03 is enforced — Splunk UC-22.7.48: Multi-Region Failover — Health-Check Driven DNS Answer Changes.",
                  "ea": "Saved search 'UC-22.7.48' running on route53, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.49",
              "n": "Crisis Email Blast Size to All Employees (RC.CO-01)",
              "c": "medium",
              "f": "intermediate",
              "v": "Detects unusual internal blast emails that may indicate crisis communication during recovery.",
              "t": "O365 message trace",
              "d": "`ms:o365:message_trace`",
              "q": "index=o365 sourcetype=ms:o365:message_trace earliest=-90d\n| stats count as msgs by SenderAddress, bin(_time,1h)\n| where msgs>5000",
              "m": "(1) Allowlist known HR broadcasts; (2) align with comms playbook; (3) privacy review.",
              "z": "Time chart, Table",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: O365 message trace.\n• Ensure the following data sources are available: `ms:o365:message_trace`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Allowlist known HR broadcasts; (2) align with comms playbook; (3) privacy review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:message_trace earliest=-90d\n| stats count as msgs by SenderAddress, bin(_time,1h)\n| where msgs>5000\n```\n\nUnderstanding this SPL\n\n**Crisis Email Blast Size to All Employees (RC.CO-01)** — Detects unusual internal blast emails that may indicate crisis communication during recovery.\n\nDocumented **Data sources**: `ms:o365:message_trace`. **App/TA** (typical add-on context): O365 message trace. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:message_trace. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:message_trace, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by SenderAddress, bin(_time,1h)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where msgs>5000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Crisis Email Blast Size to All Employees. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RC.CO-01",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF RC.CO-01 is enforced — Splunk UC-22.7.49: Crisis Email Blast Size to All Employees.",
                  "ea": "Saved search 'UC-22.7.49' running on ms:o365:message_trace, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.7.50",
              "n": "Status Page Update Cadence During Major Incident (RC.CO-02)",
              "c": "high",
              "f": "advanced",
              "v": "Ingests statuspage.io updates to prove recovery communications cadence met customer commitments.",
              "t": "HEC `_json`",
              "d": "`statuspage:incident`",
              "q": "index=comms sourcetype=statuspage:incident earliest=-180d\n| stats count as updates, earliest(_time) as start by incident_id\n| eval duration_h=(now()-start)/3600\n| where duration_h>1 AND updates<2",
              "m": "(1) Map to customer SLAs; (2) training for incident comms role; (3) post-mortem metric.",
              "z": "Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC `_json`.\n• Ensure the following data sources are available: `statuspage:incident`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map to customer SLAs; (2) training for incident comms role; (3) post-mortem metric.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=comms sourcetype=statuspage:incident earliest=-180d\n| stats count as updates, earliest(_time) as start by incident_id\n| eval duration_h=(now()-start)/3600\n| where duration_h>1 AND updates<2\n```\n\nUnderstanding this SPL\n\n**Status Page Update Cadence During Major Incident (RC.CO-02)** — Ingests statuspage.io updates to prove recovery communications cadence met customer commitments.\n\nDocumented **Data sources**: `statuspage:incident`. **App/TA** (typical add-on context): HEC `_json`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: comms; **sourcetype**: statuspage:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=comms, sourcetype=statuspage:incident, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by incident_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **duration_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where duration_h>1 AND updates<2` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Status Page Update Cadence During Major Incident. The NIST Cybersecurity Framework tells us what good practice looks like, and we keep evidence we match it.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST CSF 2.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "RC.CO-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF RC.CO-02 is enforced — Splunk UC-22.7.50: Status Page Update Cadence During Major Incident.",
                  "ea": "Saved search 'UC-22.7.50' running on statuspage:incident, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.1,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 50,
            "none": 0
          }
        },
        {
          "i": "22.8",
          "n": "SOC 2",
          "u": [
            {
              "i": "22.8.1",
              "n": "SOC 2 Trust Services Criteria Continuous Control Monitoring (CC6-CC8)",
              "c": "critical",
              "f": "advanced",
              "v": "Continuous evidence for logical access (CC6), security monitoring and incident handling (CC7), and change management visibility (CC8) using CIM-normalized authentication data, ES notables, and Splunk audit telemetry.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742) or identity TAs feeding CIM, Splunk Enterprise Security (Splunkbase 263)",
              "d": "CIM `Authentication` data model (user, action, src, app); `` `notable` `` macro (status, urgency, owner, rule_name); `index=_audit` (object_type, action, user)",
              "q": "index=_audit object_type IN (\"savedsearch\",\"lookup\") action IN (\"edit\",\"create\",\"delete\",\"update\")\n| stats count by user, object_type, action",
              "m": "(1) Ensure identity data (AD, IdP, VPN) is CIM-tagged to `Authentication`; (2) train analysts to set `status`/`owner` on notables for CC7 closure evidence; (3) scope `_audit` to production SHC for CC8 change evidence; (4) map panels explicitly to CC6.1-CC6.7, CC7.2-CC7.5, CC8.1 in your control matrix.",
              "z": "Area chart (Authentication volume/denied ratio), Bar chart (cc7_open by urgency), Table (CC8 changes by user).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742) or identity TAs feeding CIM, Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: CIM `Authentication` data model (user, action, src, app); `` `notable` `` macro (status, urgency, owner, rule_name); `index=_audit` (object_type, action, user).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure identity data (AD, IdP, VPN) is CIM-tagged to `Authentication`; (2) train analysts to set `status`/`owner` on notables for CC7 closure evidence; (3) scope `_audit` to production SHC for CC8 change evidence; (4) map panels explicitly to CC6.1-CC6.7, CC7.2-CC7.5, CC8.1 in your control matrix.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit object_type IN (\"savedsearch\",\"lookup\") action IN (\"edit\",\"create\",\"delete\",\"update\")\n| stats count by user, object_type, action\n```\n\nUnderstanding this SPL\n\n**SOC 2 Trust Services Criteria Continuous Control Monitoring (CC6-CC8)** — Continuous evidence for logical access (CC6), security monitoring and incident handling (CC7), and change management visibility (CC8) using CIM-normalized authentication data, ES notables, and Splunk audit telemetry.\n\nDocumented **Data sources**: CIM `Authentication` data model (user, action, src, app); `` `notable` `` macro (status, urgency, owner, rule_name); `index=_audit` (object_type, action, user). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742) or identity TAs feeding CIM, Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, object_type, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 Trust Services Criteria Continuous Control Monitoring (CC6-CC8)** — Continuous evidence for logical access (CC6), security monitoring and incident handling (CC7), and change management visibility (CC8) using CIM-normalized authentication data, ES notables, and Splunk audit telemetry.\n\nDocumented **Data sources**: CIM `Authentication` data model (user, action, src, app); `` `notable` `` macro (status, urgency, owner, rule_name); `index=_audit` (object_type, action, user). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742) or identity TAs feeding CIM, Splunk Enterprise Security (Splunkbase 263). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Area chart (Authentication volume/denied ratio), Bar chart (cc7_open by urgency), Table (CC8 changes by user).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Trust Services Criteria Continuous Control Monitoring. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "wv": "crawl",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action span=1d | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC6.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC6.1 (Logical access controls) is enforced — Splunk UC-22.8.1: SOC 2 Trust Services Criteria Continuous Control Monitoring.",
                  "ea": "Saved search 'UC-22.8.1' running on index _audit and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC7.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC7.1 (System operations monitoring) is enforced — Splunk UC-22.8.1: SOC 2 Trust Services Criteria Continuous Control Monitoring.",
                  "ea": "Saved search 'UC-22.8.1' running on index _audit and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC8.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC8.1 (Change management) is enforced — Splunk UC-22.8.1: SOC 2 Trust Services Criteria Continuous Control Monitoring.",
                  "ea": "Saved search 'UC-22.8.1' running on index _audit and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "22.8.2",
              "n": "SOC 2 System Availability and Incident Response Evidence Collection (A1)",
              "c": "critical",
              "f": "advanced",
              "v": "Pairs ITSI service health KPI time series with ES notable closure MTTR for availability plus incident-response effectiveness in one evidence trail.",
              "t": "Splunk IT Service Intelligence (Splunkbase 1841), Splunk Enterprise Security (Splunkbase 263)",
              "d": "`index=itsi_summary` (service_name, alert_value, health_score, severity_value, is_service_in_maintenance, _time); `` `notable` `` macro (status, closed_time, rule_name, _time)",
              "q": "`notable` status=\"Closed\" isnotnull(closed_time)\n| eval mttr_sec=closed_time-_time\n| stats avg(mttr_sec) as avg_mttr, perc95(mttr_sec) as p95_mttr, count as closed_incidents by rule_name\n| eval avg_mttr_hours=round(avg_mttr/3600, 2)\n| table rule_name, closed_incidents, avg_mttr_hours, p95_mttr",
              "m": "(1) Model production services in ITSI with KPIs tied to SLIs; (2) keep `itsi_summary` retention aligned with audit window; (3) validate `closed_time` field on notables (`| fieldsummary closed_time`); (4) pair A1 uptime panels with incident MTTR for the same services via lookup; (5) document maintenance windows with `is_service_in_maintenance`.",
              "z": "Line chart (health_score by service), Bar chart (avg_mttr_hours by rule), Single value (peak_severity), Table (closed incidents).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk IT Service Intelligence](https://splunkbase.splunk.com/app/1841), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk IT Service Intelligence (Splunkbase 1841), Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `index=itsi_summary` (service_name, alert_value, health_score, severity_value, is_service_in_maintenance, _time); `` `notable` `` macro (status, closed_time, rule_name, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Model production services in ITSI with KPIs tied to SLIs; (2) keep `itsi_summary` retention aligned with audit window; (3) validate `closed_time` field on notables (`| fieldsummary closed_time`); (4) pair A1 uptime panels with incident MTTR for the same services via lookup; (5) document maintenance windows with `is_service_in_maintenance`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` status=\"Closed\" isnotnull(closed_time)\n| eval mttr_sec=closed_time-_time\n| stats avg(mttr_sec) as avg_mttr, perc95(mttr_sec) as p95_mttr, count as closed_incidents by rule_name\n| eval avg_mttr_hours=round(avg_mttr/3600, 2)\n| table rule_name, closed_incidents, avg_mttr_hours, p95_mttr\n```\n\nUnderstanding this SPL\n\n**SOC 2 System Availability and Incident Response Evidence Collection (A1)** — Pairs ITSI service health KPI time series with ES notable closure MTTR for availability plus incident-response effectiveness in one evidence trail.\n\nDocumented **Data sources**: `index=itsi_summary` (service_name, alert_value, health_score, severity_value, is_service_in_maintenance, _time); `` `notable` `` macro (status, closed_time, rule_name, _time). **App/TA** (typical add-on context): Splunk IT Service Intelligence (Splunkbase 1841), Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **mttr_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by rule_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_mttr_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **SOC 2 System Availability and Incident Response Evidence Collection (A1)**): table rule_name, closed_incidents, avg_mttr_hours, p95_mttr\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (health_score by service), Bar chart (avg_mttr_hours by rule), Single value (peak_severity), Table (closed incidents).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI), Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for System Availability and Incident Response Evidence Collection. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Availability",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "A1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 A1 is enforced — Splunk UC-22.8.2: SOC 2 System Availability and Incident Response Evidence Collection.",
                  "ea": "Saved search 'UC-22.8.2' running on index itsi_summary and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.3",
              "n": "SOC 2 Confidentiality Classification and DLP Event Audit (C1)",
              "c": "high",
              "f": "intermediate",
              "v": "Audits Microsoft 365 DLP policy matches with actor, policy, and sensitive information types for confidentiality control testing and breach-readiness reporting.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055)",
              "d": "`index=o365` `sourcetype=\"ms:o365:management\"` (Workload, PolicyName, UserPrincipalName, SensitiveInfoType, Severity, Operation)",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Workload=\"Dlp\"\n| timechart span=1d count by Severity",
              "m": "(1) Enable Office 365 Management Activity inputs in TA 4055 and confirm `Workload=\"Dlp\"` events are ingested; (2) map `SensitiveInfoType` values to your data classification scheme via lookup `classification_tier.csv`; (3) alert on high-severity or high-volume exfil patterns; (4) retain per legal hold requirements; (5) optionally route to ES as correlation-search input.",
              "z": "Bar chart (events by PolicyName), Heatmap (user x SensitiveInfoType), Line chart (daily volume by Severity), Table (sample evidence).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [
                "T1005",
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055).\n• Ensure the following data sources are available: `index=o365` `sourcetype=\"ms:o365:management\"` (Workload, PolicyName, UserPrincipalName, SensitiveInfoType, Severity, Operation).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable Office 365 Management Activity inputs in TA 4055 and confirm `Workload=\"Dlp\"` events are ingested; (2) map `SensitiveInfoType` values to your data classification scheme via lookup `classification_tier.csv`; (3) alert on high-severity or high-volume exfil patterns; (4) retain per legal hold requirements; (5) optionally route to ES as correlation-search input.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Workload=\"Dlp\"\n| timechart span=1d count by Severity\n```\n\nUnderstanding this SPL\n\n**SOC 2 Confidentiality Classification and DLP Event Audit (C1)** — Audits Microsoft 365 DLP policy matches with actor, policy, and sensitive information types for confidentiality control testing and breach-readiness reporting.\n\nDocumented **Data sources**: `index=o365` `sourcetype=\"ms:o365:management\"` (Workload, PolicyName, UserPrincipalName, SensitiveInfoType, Severity, Operation). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by Severity** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by PolicyName), Heatmap (user x SensitiveInfoType), Line chart (daily volume by Severity), Table (sample evidence).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Confidentiality Classification and DLP Event Audit. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "C1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 C1 is enforced — Splunk UC-22.8.3: SOC 2 Confidentiality Classification and DLP Event Audit.",
                  "ea": "Saved search 'UC-22.8.3' running on sourcetype ms:o365:management and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.4",
              "n": "SOC 2 Control Environment and Board-Level Attestation Workflow (CC1.2, CC2.1)",
              "c": "medium",
              "f": "beginner",
              "v": "CC1 and CC2 require communication and information about roles, responsibilities, and performance to support functioning of internal control. Tracking completion of quarterly control-owner attestations in ITSM demonstrates tone-at-the-top processes are operationalized with timestamps.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, cat_item, state, closed_at, opened_at, assigned_to, u_control_id)",
              "q": "index=itsm sourcetype=\"snow:sc_req_item\" earliest=-120d (cat_item=\"*SOC2*Attestation*\" OR short_description=\"*control owner attestation*\")\n| eval is_closed=if(match(state,\"(?i)closed|resolved|complete\"),1,0)\n| stats count as tasks, sum(is_closed) as completed, dc(assigned_to) as owners by u_control_id, cat_item\n| eval completion_pct=round(100*completed/tasks,1)\n| where completion_pct<100\n| sort u_control_id",
              "m": "(1) Create catalog items per SOC2 control family with `u_control_id` matching your CCM matrix; (2) schedule quarterly auto-open tasks; (3) escalate open items after 14 days; (4) export CSV for external auditors; (5) map `assigned_to` to job titles for CC1.3 HR evidence optional.",
              "z": "Table (control completion), Bar chart (open vs closed), Single value (tasks past due).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, cat_item, state, closed_at, opened_at, assigned_to, u_control_id).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create catalog items per SOC2 control family with `u_control_id` matching your CCM matrix; (2) schedule quarterly auto-open tasks; (3) escalate open items after 14 days; (4) export CSV for external auditors; (5) map `assigned_to` to job titles for CC1.3 HR evidence optional.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:sc_req_item\" earliest=-120d (cat_item=\"*SOC2*Attestation*\" OR short_description=\"*control owner attestation*\")\n| eval is_closed=if(match(state,\"(?i)closed|resolved|complete\"),1,0)\n| stats count as tasks, sum(is_closed) as completed, dc(assigned_to) as owners by u_control_id, cat_item\n| eval completion_pct=round(100*completed/tasks,1)\n| where completion_pct<100\n| sort u_control_id\n```\n\nUnderstanding this SPL\n\n**SOC 2 Control Environment and Board-Level Attestation Workflow (CC1.2, CC2.1)** — CC1 and CC2 require communication and information about roles, responsibilities, and performance to support functioning of internal control. Tracking completion of quarterly control-owner attestations in ITSM demonstrates tone-at-the-top processes are operationalized with timestamps.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:sc_req_item\"` (number, cat_item, state, closed_at, opened_at, assigned_to, u_control_id). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:sc_req_item. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:sc_req_item\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_closed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by u_control_id, cat_item** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **completion_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where completion_pct<100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (control completion), Bar chart (open vs closed), Single value (tasks past due).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Control Environment and Board-Level Attestation Workflow. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC1.2 is enforced — Splunk UC-22.8.4: SOC 2 Control Environment and Board-Level Attestation Workflow.",
                  "ea": "Saved search 'UC-22.8.4' running on index=itsm sourcetype=\"snow:sc_req_item\" (number, cat_item, state, closed_at, opened_at, assigned_to, u_control_id), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC2.1 (Internal communication) is enforced — Splunk UC-22.8.4: SOC 2 Control Environment and Board-Level Attestation Workflow.",
                  "ea": "Saved search 'UC-22.8.4' running on index=itsm sourcetype=\"snow:sc_req_item\" (number, cat_item, state, closed_at, opened_at, assigned_to, u_control_id), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.5",
              "n": "SOC 2 Risk Assessment — Change-Induced Emergency Pattern Monitoring (CC3.2, CC3.3)",
              "c": "high",
              "f": "intermediate",
              "v": "CC3 expects risk identification and analysis, including changes that significantly affect the system. Correlating production emergency changes with recent deployments highlights process breakdowns where velocity outpaces risk assessment.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:change_request\"` (number, type, risk, start_date, end_date, state, short_description, u_emergency_flag)",
              "q": "index=itsm sourcetype=\"snow:change_request\" earliest=-90d state=\"Closed\"\n| eval emergency=coalesce(u_emergency_flag, if(match(type,\"(?i)emergency\"),\"true\",\"false\"))\n| where emergency=\"true\"\n| eval duration_h=(strptime(end_date,\"%Y-%m-%d %H:%M:%S\")-strptime(start_date,\"%Y-%m-%d %H:%M:%S\"))/3600\n| stats count as emergency_changes, median(duration_h) as median_duration by risk, cmdb_ci\n| sort - emergency_changes",
              "m": "(1) Ensure `u_emergency_flag` or `type` differentiates emergencies; (2) join `cmdb_ci` to service tier lookup; (3) alert when `emergency_changes` spikes vs baseline; (4) require post-implementation review field completeness; (5) feed results into quarterly risk committee deck.",
              "z": "Bar chart (emergency_changes by service), Table (risk, median_duration), Timechart (weekly emergency volume).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:change_request\"` (number, type, risk, start_date, end_date, state, short_description, u_emergency_flag).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure `u_emergency_flag` or `type` differentiates emergencies; (2) join `cmdb_ci` to service tier lookup; (3) alert when `emergency_changes` spikes vs baseline; (4) require post-implementation review field completeness; (5) feed results into quarterly risk committee deck.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" earliest=-90d state=\"Closed\"\n| eval emergency=coalesce(u_emergency_flag, if(match(type,\"(?i)emergency\"),\"true\",\"false\"))\n| where emergency=\"true\"\n| eval duration_h=(strptime(end_date,\"%Y-%m-%d %H:%M:%S\")-strptime(start_date,\"%Y-%m-%d %H:%M:%S\"))/3600\n| stats count as emergency_changes, median(duration_h) as median_duration by risk, cmdb_ci\n| sort - emergency_changes\n```\n\nUnderstanding this SPL\n\n**SOC 2 Risk Assessment — Change-Induced Emergency Pattern Monitoring (CC3.2, CC3.3)** — CC3 expects risk identification and analysis, including changes that significantly affect the system. Correlating production emergency changes with recent deployments highlights process breakdowns where velocity outpaces risk assessment.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"` (number, type, risk, start_date, end_date, state, short_description, u_emergency_flag). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **emergency** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where emergency=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **duration_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by risk, cmdb_ci** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 Risk Assessment — Change-Induced Emergency Pattern Monitoring (CC3.2, CC3.3)** — CC3 expects risk identification and analysis, including changes that significantly affect the system. Correlating production emergency changes with recent deployments highlights process breakdowns where velocity outpaces risk assessment.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"` (number, type, risk, start_date, end_date, state, short_description, u_emergency_flag). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (emergency_changes by service), Table (risk, median_duration), Timechart (weekly emergency volume).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Risk Assessment — Change-Induced Emergency Pattern Monitoring. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC3.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC3.2 is enforced — Splunk UC-22.8.5: SOC 2 Risk Assessment — Change-Induced Emergency Pattern Monitoring.",
                  "ea": "Saved search 'UC-22.8.5' running on sourcetype snow:change_request and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC3.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC3.3 is enforced — Splunk UC-22.8.5: SOC 2 Risk Assessment — Change-Induced Emergency Pattern Monitoring.",
                  "ea": "Saved search 'UC-22.8.5' running on sourcetype snow:change_request and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.6",
              "n": "SOC 2 Processing Integrity — Financial Batch Job Reconciliation Exceptions (PI1.3)",
              "c": "critical",
              "f": "advanced",
              "v": "Processing integrity criteria require system processing to be complete, valid, accurate, timely, and authorized. Monitoring ETL or billing batch exception counts versus totals provides continuous evidence that automated controls detect and surface out-of-balance conditions.",
              "t": "Splunk HTTP Event Collector (core platform), Splunk DB Connect (Splunkbase 2686) optional",
              "d": "`index=finance` `sourcetype=\"_json\"` `source=\"http:batch_recon\"` (batch_id, records_total, records_failed, amount_total, amount_exception, pipeline, _time)",
              "q": "index=finance sourcetype=\"_json\" source=\"http:batch_recon\" earliest=-7d\n| eval fail_rate=round(100*records_failed/records_total,4)\n| eval amt_exc_rate=if(amount_total>0, round(100*amount_exception/amount_total,4), 0)\n| stats sum(records_total) as rows, sum(records_failed) as failed_rows, max(fail_rate) as peak_fail_rate by pipeline, batch_id\n| where failed_rows>0 OR peak_fail_rate>0.01\n| sort - failed_rows",
              "m": "(1) Publish one JSON event per batch completion from orchestration (Airflow, Control-M, mainframe bridge); (2) define materiality thresholds per `pipeline`; (3) alert on `peak_fail_rate` breaches; (4) retain hash of source file name for audit trail; (5) map `pipeline` to SOC subservice description in the system description.",
              "z": "Table (batch exceptions), Line chart (fail_rate over time by pipeline), Single value (failed_rows).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [
                "T1565"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk HTTP Event Collector (core platform), Splunk DB Connect (Splunkbase 2686) optional.\n• Ensure the following data sources are available: `index=finance` `sourcetype=\"_json\"` `source=\"http:batch_recon\"` (batch_id, records_total, records_failed, amount_total, amount_exception, pipeline, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Publish one JSON event per batch completion from orchestration (Airflow, Control-M, mainframe bridge); (2) define materiality thresholds per `pipeline`; (3) alert on `peak_fail_rate` breaches; (4) retain hash of source file name for audit trail; (5) map `pipeline` to SOC subservice description in the system description.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=finance sourcetype=\"_json\" source=\"http:batch_recon\" earliest=-7d\n| eval fail_rate=round(100*records_failed/records_total,4)\n| eval amt_exc_rate=if(amount_total>0, round(100*amount_exception/amount_total,4), 0)\n| stats sum(records_total) as rows, sum(records_failed) as failed_rows, max(fail_rate) as peak_fail_rate by pipeline, batch_id\n| where failed_rows>0 OR peak_fail_rate>0.01\n| sort - failed_rows\n```\n\nUnderstanding this SPL\n\n**SOC 2 Processing Integrity — Financial Batch Job Reconciliation Exceptions (PI1.3)** — Processing integrity criteria require system processing to be complete, valid, accurate, timely, and authorized. Monitoring ETL or billing batch exception counts versus totals provides continuous evidence that automated controls detect and surface out-of-balance conditions.\n\nDocumented **Data sources**: `index=finance` `sourcetype=\"_json\"` `source=\"http:batch_recon\"` (batch_id, records_total, records_failed, amount_total, amount_exception, pipeline, _time). **App/TA** (typical add-on context): Splunk HTTP Event Collector (core platform), Splunk DB Connect (Splunkbase 2686) optional. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: finance; **sourcetype**: _json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=finance, sourcetype=\"_json\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **amt_exc_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by pipeline, batch_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failed_rows>0 OR peak_fail_rate>0.01` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (batch exceptions), Line chart (fail_rate over time by pipeline), Single value (failed_rows).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Processing Integrity — Financial Batch Job Reconciliation Exceptions. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "PI1.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 PI1.3 is enforced — Splunk UC-22.8.6: SOC 2 Processing Integrity — Financial Batch Job Reconciliation Exceptions.",
                  "ea": "Saved search 'UC-22.8.6' running on sourcetype _json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.7",
              "n": "SOC 2 Privacy — Consent Log Integrity and Downstream Propagation Checks (P4.2, P4.3)",
              "c": "high",
              "f": "advanced",
              "v": "Privacy criteria expect notices about processing and consent to be accurate and current. Comparing consent-store updates to marketing execution logs detects cases where email or SMS campaigns run after withdrawal timestamps—an integrity failure with reputational and regulatory impact.",
              "t": "Splunk HTTP Event Collector (core platform)",
              "d": "`index=privacy` `sourcetype=\"_json\"` `source=\"http:consent_store\"` (profile_id, consent_email_marketing, updated_epoch_ms); `index=marketing` `sourcetype=\"_json\"` `source=\"http:campaign_send\"` (profile_id, channel, send_epoch_ms, campaign_id)",
              "q": "index=marketing sourcetype=\"_json\" source=\"http:campaign_send\" earliest=-7d channel IN (\"email\",\"sms\")\n| join type=left profile_id [\n    search index=privacy sourcetype=\"_json\" source=\"http:consent_store\" earliest=-30d\n    | eval withdraw=if(consent_email_marketing=\"false\", updated_epoch_ms, null())\n    | stats latest(withdraw) as last_withdraw_ms by profile_id\n  ]\n| eval send_ms=send_epoch_ms\n| where isnotnull(last_withdraw_ms) AND send_ms>last_withdraw_ms\n| stats count as sends_after_withdrawal by campaign_id, channel\n| sort - sends_after_withdrawal",
              "m": "(1) Ensure epoch fields share UTC basis; (2) use `profile_id` as stable key; (3) alert on any `sends_after_withdrawal>0`; (4) cap join window for performance using `subsearch` time range; (5) document corrective action in privacy incident register.",
              "z": "Table (violating campaigns), Bar chart (sends_after_withdrawal), Single value (distinct profiles affected).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [
                "T1565",
                "T1005"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk HTTP Event Collector (core platform).\n• Ensure the following data sources are available: `index=privacy` `sourcetype=\"_json\"` `source=\"http:consent_store\"` (profile_id, consent_email_marketing, updated_epoch_ms); `index=marketing` `sourcetype=\"_json\"` `source=\"http:campaign_send\"` (profile_id, channel, send_epoch_ms, campaign_id).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure epoch fields share UTC basis; (2) use `profile_id` as stable key; (3) alert on any `sends_after_withdrawal>0`; (4) cap join window for performance using `subsearch` time range; (5) document corrective action in privacy incident register.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=marketing sourcetype=\"_json\" source=\"http:campaign_send\" earliest=-7d channel IN (\"email\",\"sms\")\n| join type=left profile_id [\n    search index=privacy sourcetype=\"_json\" source=\"http:consent_store\" earliest=-30d\n    | eval withdraw=if(consent_email_marketing=\"false\", updated_epoch_ms, null())\n    | stats latest(withdraw) as last_withdraw_ms by profile_id\n  ]\n| eval send_ms=send_epoch_ms\n| where isnotnull(last_withdraw_ms) AND send_ms>last_withdraw_ms\n| stats count as sends_after_withdrawal by campaign_id, channel\n| sort - sends_after_withdrawal\n```\n\nUnderstanding this SPL\n\n**SOC 2 Privacy — Consent Log Integrity and Downstream Propagation Checks (P4.2, P4.3)** — Privacy criteria expect notices about processing and consent to be accurate and current. Comparing consent-store updates to marketing execution logs detects cases where email or SMS campaigns run after withdrawal timestamps—an integrity failure with reputational and regulatory impact.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=\"_json\"` `source=\"http:consent_store\"` (profile_id, consent_email_marketing, updated_epoch_ms); `index=marketing` `sourcetype=\"_json\"` `source=\"http:campaign_send\"` (profile_id, channel, send_epoch_ms, campaign_id). **App/TA** (typical add-on context): Splunk HTTP Event Collector (core platform). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: marketing; **sourcetype**: _json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=marketing, sourcetype=\"_json\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **send_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(last_withdraw_ms) AND send_ms>last_withdraw_ms` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by campaign_id, channel** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violating campaigns), Bar chart (sends_after_withdrawal), Single value (distinct profiles affected).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Privacy — Consent Log Integrity and Downstream Propagation Checks. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "GDPR",
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "P4.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 P4.2 is enforced — Splunk UC-22.8.7: SOC 2 Privacy — Consent Log Integrity and Downstream Propagation Checks.",
                  "ea": "Saved search 'UC-22.8.7' running on sourcetype _json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "P4.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 P4.3 is enforced — Splunk UC-22.8.7: SOC 2 Privacy — Consent Log Integrity and Downstream Propagation Checks.",
                  "ea": "Saved search 'UC-22.8.7' running on sourcetype _json and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.8",
              "n": "SOC 2 Fraud Risk and Anomalous Privileged Activity Correlation (CC9.2)",
              "c": "critical",
              "f": "advanced",
              "v": "CC9.2 addresses risks of fraud, including fraud due to management override. Correlating after-hours privileged logons with Splunk `_audit` searches touching high-sensitivity indexes surfaces potential override paths for investigator review without presuming guilt.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Enterprise core auditing (`_audit` index)",
              "d": "`index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, TargetUserName, _time, LogonType); `index=_audit` `action=search` (user, search, _time)",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" EventCode=4624 LogonType=10 earliest=-7d\n| eval hour=strftime(_time,\"%H\")\n| where hour<6 OR hour>22\n| lookup domain_admins.csv TargetUserName OUTPUT is_privileged\n| where is_privileged=\"true\"\n| stats earliest(_time) as first_priv, latest(_time) as last_priv by TargetUserName\n| join type=left max=0 TargetUserName [\n    search index=_audit action=search info=completed earliest=-7d\n    | where match(search, \"(?i)index\\\\s*=\\\\s*(pci|hr|finance)\")\n    | stats earliest(_time) as first_search by user\n    | rename user as TargetUserName\n  ]\n| eval suspicious=if(isnotnull(first_search) AND first_search>=first_priv AND first_search<=relative_time(last_priv,\"+2h\"), 1, 0)\n| where suspicious=1\n| table TargetUserName, first_priv, first_search, suspicious",
              "m": "(1) Tune RDP (`LogonType=10`) vs your jump host patterns; (2) maintain `domain_admins.csv`; (3) adjust sensitive index regex to your taxonomy; (4) route hits to SOC insider-threat queue; (5) document investigation outcomes for CC4 monitoring activities.",
              "z": "Table (correlated events), Timeline (first_priv vs first_search), Single value (suspicious sessions).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Enterprise core auditing (`_audit` index).\n• Ensure the following data sources are available: `index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, TargetUserName, _time, LogonType); `index=_audit` `action=search` (user, search, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune RDP (`LogonType=10`) vs your jump host patterns; (2) maintain `domain_admins.csv`; (3) adjust sensitive index regex to your taxonomy; (4) route hits to SOC insider-threat queue; (5) document investigation outcomes for CC4 monitoring activities.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Security\" EventCode=4624 LogonType=10 earliest=-7d\n| eval hour=strftime(_time,\"%H\")\n| where hour<6 OR hour>22\n| lookup domain_admins.csv TargetUserName OUTPUT is_privileged\n| where is_privileged=\"true\"\n| stats earliest(_time) as first_priv, latest(_time) as last_priv by TargetUserName\n| join type=left max=0 TargetUserName [\n    search index=_audit action=search info=completed earliest=-7d\n    | where match(search, \"(?i)index\\\\s*=\\\\s*(pci|hr|finance)\")\n    | stats earliest(_time) as first_search by user\n    | rename user as TargetUserName\n  ]\n| eval suspicious=if(isnotnull(first_search) AND first_search>=first_priv AND first_search<=relative_time(last_priv,\"+2h\"), 1, 0)\n| where suspicious=1\n| table TargetUserName, first_priv, first_search, suspicious\n```\n\nUnderstanding this SPL\n\n**SOC 2 Fraud Risk and Anomalous Privileged Activity Correlation (CC9.2)** — CC9.2 addresses risks of fraud, including fraud due to management override. Correlating after-hours privileged logons with Splunk `_audit` searches touching high-sensitivity indexes surfaces potential override paths for investigator review without presuming guilt.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, TargetUserName, _time, LogonType); `index=_audit` `action=search` (user, search, _time). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Enterprise core auditing (`_audit` index). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hour<6 OR hour>22` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where is_privileged=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by TargetUserName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **suspicious** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where suspicious=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 Fraud Risk and Anomalous Privileged Activity Correlation (CC9.2)**): table TargetUserName, first_priv, first_search, suspicious\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 Fraud Risk and Anomalous Privileged Activity Correlation (CC9.2)** — CC9.2 addresses risks of fraud, including fraud due to management override. Correlating after-hours privileged logons with Splunk `_audit` searches touching high-sensitivity indexes surfaces potential override paths for investigator review without presuming guilt.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` (EventCode, TargetUserName, _time, LogonType); `index=_audit` `action=search` (user, search, _time). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Enterprise core auditing (`_audit` index). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (correlated events), Timeline (first_priv vs first_search), Single value (suspicious sessions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Fraud Risk and Anomalous Privileged Activity Correlation. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC9.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC9.2 is enforced — Splunk UC-22.8.8: SOC 2 Fraud Risk and Anomalous Privileged Activity Correlation.",
                  "ea": "Saved search 'UC-22.8.8' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.9",
              "n": "SOC 2 CC1 — Board and Committee ICT Oversight Evidence Trail",
              "c": "high",
              "f": "intermediate",
              "v": "CC1 expects governance bodies to exercise oversight of system risks. Indexing attestations, meeting minutes extracts, and risk-committee action items proves that board-level review occurred on schedule and that open items were tracked to closure.",
              "t": "Splunk DB Connect (Splunkbase 2686), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=governance` `sourcetype=\"grc:board_evidence\"` (evidence_id, committee, topic, decision, evidence_date, owner, status); optional `index=itsm` `sourcetype=\"snow:task\"` for remediation tasks linked by `evidence_id`",
              "q": "index=governance sourcetype=\"grc:board_evidence\" topic IN (\"ICT\",\"cyber\",\"resilience\",\"third_party\") earliest=-400d\n| eval evidence_ts=strptime(evidence_date,\"%Y-%m-%d\")\n| eval days_open=if(status!=\"closed\", round((now()-evidence_ts)/86400,0), null())\n| where isnull(days_open) OR days_open>90\n| stats values(committee) as committees, max(days_open) as max_days_open, latest(decision) as last_decision by evidence_id, topic, owner, status\n| sort - max_days_open",
              "m": "(1) Map `topic` values to your risk taxonomy; (2) require `evidence_id` on every quarterly pack upload; (3) alert when `status!=closed` and `days_open>90`; (4) restrict index to compliance/GRC roles; (5) export quarterly PDF evidence list from the same search for auditors.",
              "z": "Table (open evidence), Timeline (evidence_ts by committee), Single value (open items count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=governance` `sourcetype=\"grc:board_evidence\"` (evidence_id, committee, topic, decision, evidence_date, owner, status); optional `index=itsm` `sourcetype=\"snow:task\"` for remediation tasks linked by `evidence_id`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map `topic` values to your risk taxonomy; (2) require `evidence_id` on every quarterly pack upload; (3) alert when `status!=closed` and `days_open>90`; (4) restrict index to compliance/GRC roles; (5) export quarterly PDF evidence list from the same search for auditors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=governance sourcetype=\"grc:board_evidence\" topic IN (\"ICT\",\"cyber\",\"resilience\",\"third_party\") earliest=-400d\n| eval evidence_ts=strptime(evidence_date,\"%Y-%m-%d\")\n| eval days_open=if(status!=\"closed\", round((now()-evidence_ts)/86400,0), null())\n| where isnull(days_open) OR days_open>90\n| stats values(committee) as committees, max(days_open) as max_days_open, latest(decision) as last_decision by evidence_id, topic, owner, status\n| sort - max_days_open\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC1 — Board and Committee ICT Oversight Evidence Trail** — CC1 expects governance bodies to exercise oversight of system risks. Indexing attestations, meeting minutes extracts, and risk-committee action items proves that board-level review occurred on schedule and that open items were tracked to closure.\n\nDocumented **Data sources**: `index=governance` `sourcetype=\"grc:board_evidence\"` (evidence_id, committee, topic, decision, evidence_date, owner, status); optional `index=itsm` `sourcetype=\"snow:task\"` for remediation tasks linked by `evidence_id`. **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: governance; **sourcetype**: grc:board_evidence. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=governance, sourcetype=\"grc:board_evidence\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **evidence_ts** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_open** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(days_open) OR days_open>90` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by evidence_id, topic, owner, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (open evidence), Timeline (evidence_ts by committee), Single value (open items count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC1 — Board and Committee ICT Oversight Evidence Trail. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Governance"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC1.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC1.1 (Integrity and ethical values) is enforced — Splunk UC-22.8.9: SOC 2 CC1 — Board and Committee ICT Oversight Evidence Trail.",
                  "ea": "Saved search 'UC-22.8.9' running on sourcetype grc:board_evidence and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.10",
              "n": "SOC 2 CC2 — Ethical Conduct and Acceptable-Use Violation Monitoring",
              "c": "medium",
              "f": "intermediate",
              "v": "CC2 addresses integrity and ethical values. Trending acceptable-use policy breaches—harassment keywords, unauthorized software installs, or policy bypass attempts—shows the tone at the top is reinforced with detective controls and corrective action.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Microsoft Windows (Splunkbase 742)",
              "d": "`index=o365` `sourcetype=\"ms:o365:management\"` (Operation, Workload, UserId); `index=endpoint` `sourcetype IN (\"WinEventLog:Security\",\"crowdstrike:json\")` for policy-related detections",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Workload IN (\"Security\",\"Compliance\") earliest=-90d\n| where like(Operation,\"%policy%\") OR like(Operation,\"%DLP%\")\n| stats count as events by UserId, Operation\n| eventstats sum(events) as user_total\n| where user_total>25\n| sort - user_total",
              "m": "(1) Align `Operation` filters with HR/legal acceptable-use categories; (2) pseudonymize `UserId` in shared dashboards where required; (3) route high-volume users to People Ops for coaching before escalation; (4) retain 13 months for annual SOC evidence; (5) document disciplinary outcomes separately in HRIS—link only by ticket id in Splunk notes.",
              "z": "Bar chart (events by Operation), Table (top users), Single value (distinct users over threshold).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Microsoft Windows (Splunkbase 742).\n• Ensure the following data sources are available: `index=o365` `sourcetype=\"ms:o365:management\"` (Operation, Workload, UserId); `index=endpoint` `sourcetype IN (\"WinEventLog:Security\",\"crowdstrike:json\")` for policy-related detections.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `Operation` filters with HR/legal acceptable-use categories; (2) pseudonymize `UserId` in shared dashboards where required; (3) route high-volume users to People Ops for coaching before escalation; (4) retain 13 months for annual SOC evidence; (5) document disciplinary outcomes separately in HRIS—link only by ticket id in Splunk notes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Workload IN (\"Security\",\"Compliance\") earliest=-90d\n| where like(Operation,\"%policy%\") OR like(Operation,\"%DLP%\")\n| stats count as events by UserId, Operation\n| eventstats sum(events) as user_total\n| where user_total>25\n| sort - user_total\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC2 — Ethical Conduct and Acceptable-Use Violation Monitoring** — CC2 addresses integrity and ethical values. Trending acceptable-use policy breaches—harassment keywords, unauthorized software installs, or policy bypass attempts—shows the tone at the top is reinforced with detective controls and corrective action.\n\nDocumented **Data sources**: `index=o365` `sourcetype=\"ms:o365:management\"` (Operation, Workload, UserId); `index=endpoint` `sourcetype IN (\"WinEventLog:Security\",\"crowdstrike:json\")` for policy-related detections. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where like(Operation,\"%policy%\") OR like(Operation,\"%DLP%\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by UserId, Operation** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
across the result set and writes the aggregate back onto every row without reducing the row count (unlike `stats`); summing with `by UserId` makes the threshold that follows apply per user rather than to the grand total.\n• Filters the current rows with `where user_total>25` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 CC2 — Ethical Conduct and Acceptable-Use Violation Monitoring** — CC2 addresses integrity and ethical values. Trending acceptable-use policy breaches—harassment keywords, unauthorized software installs, or policy bypass attempts—shows the tone at the top is reinforced with detective controls and corrective action.\n\nDocumented **Data sources**: `index=o365` `sourcetype=\"ms:o365:management\"` (Operation, Workload, UserId); `index=endpoint` `sourcetype IN (\"WinEventLog:Security\",\"crowdstrike:json\")` for policy-related detections. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by Operation), Table (top users), Single value (distinct users over threshold).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC2 — Ethical Conduct and Acceptable-Use Violation Monitoring. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC2.1 (Internal communication) is enforced — Splunk UC-22.8.10: SOC 2 CC2 — Ethical Conduct and Acceptable-Use Violation Monitoring.",
                  "ea": "Saved search 'UC-22.8.10' running on sourcetype ms:o365:management and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
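A per-user variant of the UC 22.8.10 search, sketched under the same assumptions (`index=o365`, `sourcetype="ms:o365:management"`); the `by UserId` clause makes the 25-event threshold apply to each user's total rather than to the grand total, and the threshold itself stays illustrative:

```spl
index=o365 sourcetype="ms:o365:management" Workload IN ("Security","Compliance") earliest=-90d
| where like(Operation,"%policy%") OR like(Operation,"%DLP%")
| stats count as events by UserId, Operation
| eventstats sum(events) as user_total by UserId
| where user_total>25
| sort - user_total, - events
```

Splitting `eventstats` by `UserId` keeps the per-Operation breakdown in each row while letting the threshold evaluate each user's overall volume.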
            {
              "i": "22.8.11",
              "n": "SOC 2 CC2 — Organizational Structure and Segregation-of-Duties Validation",
              "c": "high",
              "f": "advanced",
              "v": "CC2 includes assignment of authority and responsibility. Comparing HR job codes to privileged group membership surfaces users whose structural role should not carry admin rights—supporting clean org charts in the system description.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), LDAP/HR CSV via `inputlookup`",
              "d": "`index=windows` `sourcetype=\"WinEventLog:Security\"` EventCode IN (4728,4732,4756) (MemberName, Group_Name); `hr_job_roles.csv` (samaccountname, job_code, job_family, sod_restricted)",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4728,4732,4756) earliest=-30d\n| stats latest(_time) as last_add by MemberName, Group_Name\n| lookup hr_job_roles.csv samaccountname as MemberName OUTPUT job_code, job_family, sod_restricted\n| where sod_restricted=\"true\" AND like(Group_Name,\"%Admin%\")\n| table MemberName, Group_Name, job_code, job_family, last_add",
              "m": "(1) Refresh `hr_job_roles.csv` daily from authoritative HR feed; (2) map `sod_restricted` for finance, audit, and security families; (3) auto-create remediation tasks in ITSM with `MemberName`; (4) exclude break-glass groups via lookup `exempt_groups.csv`; (5) snapshot monthly for SOC working papers.",
              "z": "Table (violations), Single value (count), Treemap (by job_family).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), LDAP/HR CSV via `inputlookup`.\n• Ensure the following data sources are available: `index=windows` `sourcetype=\"WinEventLog:Security\"` EventCode IN (4728,4732,4756) (MemberName, Group_Name); `hr_job_roles.csv` (samaccountname, job_code, job_family, sod_restricted).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Refresh `hr_job_roles.csv` daily from authoritative HR feed; (2) map `sod_restricted` for finance, audit, and security families; (3) auto-create remediation tasks in ITSM with `MemberName`; (4) exclude break-glass groups via lookup `exempt_groups.csv`; (5) snapshot monthly for SOC working papers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4728,4732,4756) earliest=-30d\n| stats latest(_time) as last_add by MemberName, Group_Name\n| lookup hr_job_roles.csv samaccountname as MemberName OUTPUT job_code, job_family, sod_restricted\n| where sod_restricted=\"true\" AND like(Group_Name,\"%Admin%\")\n| table MemberName, Group_Name, job_code, job_family, last_add\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC2 — Organizational Structure and Segregation-of-Duties Validation** — CC2 includes assignment of authority and responsibility. Comparing HR job codes to privileged group membership surfaces users whose structural role should not carry admin rights—supporting clean org charts in the system description.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` EventCode IN (4728,4732,4756) (MemberName, Group_Name); `hr_job_roles.csv` (samaccountname, job_code, job_family, sod_restricted). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), LDAP/HR CSV via `inputlookup`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by MemberName, Group_Name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where sod_restricted=\"true\" AND like(Group_Name,\"%Admin%\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 CC2 — Organizational Structure and Segregation-of-Duties Validation**): table MemberName, Group_Name, job_code, job_family, last_add\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 CC2 — Organizational Structure and Segregation-of-Duties Validation** — CC2 includes assignment of authority and responsibility. Comparing HR job codes to privileged group membership surfaces users whose structural role should not carry admin rights—supporting clean org charts in the system description.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` EventCode IN (4728,4732,4756) (MemberName, Group_Name); `hr_job_roles.csv` (samaccountname, job_code, job_family, sod_restricted). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), LDAP/HR CSV via `inputlookup`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Single value (count), Treemap (by job_family).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC2 — Organizational Structure and Segregation-of-Duties Validation. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC2.1 (Internal communication) is enforced — Splunk UC-22.8.11: SOC 2 CC2 — Organizational Structure and Segregation-of-Duties Validation.",
                  "ea": "Saved search 'UC-22.8.11' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
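Step (4) of UC 22.8.11 excludes break-glass groups via `exempt_groups.csv`. A minimal sketch of that exclusion, assuming the lookup carries `Group_Name` and an `exempt` flag column (both column names are assumptions; match them to your actual file):

```spl
index=windows sourcetype="WinEventLog:Security" EventCode IN (4728,4732,4756) earliest=-30d
| stats latest(_time) as last_add by MemberName, Group_Name
| lookup exempt_groups.csv Group_Name OUTPUT exempt
| where isnull(exempt) OR exempt!="true"
| lookup hr_job_roles.csv samaccountname as MemberName OUTPUT job_code, job_family, sod_restricted
| where sod_restricted="true" AND like(Group_Name,"%Admin%")
| table MemberName, Group_Name, job_code, job_family, last_add
```

Filtering on the exemption lookup before the HR lookup keeps break-glass groups out of the violation table without touching the segregation-of-duties logic.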
            {
              "i": "22.8.12",
              "n": "SOC 2 CC3 — Management Accountability for Control Deficiency Remediation SLAs",
              "c": "high",
              "f": "intermediate",
              "v": "CC3 expects management to identify and respond to risks. Tracking internal-control deficiencies from identification through management sign-off demonstrates accountability and timely remediation—core evidence for COSO principle 17 narratives.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:problem\"` or `sourcetype=\"snow:sc_req_item\"` (number, opened_at, closed_at, state, assignment_group, u_control_id, u_severity)",
              "q": "index=itsm sourcetype IN (\"snow:problem\",\"snow:sc_req_item\") u_control_id=* earliest=-180d\n| eval open_epoch=if(isnotnull(closed_at), strptime(closed_at,\"%Y-%m-%d %H:%M:%S\"), now())\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval age_days=round((open_epoch-opened)/86400,1)\n| eval breach=case(\n    u_severity=\"material\" AND age_days>60, 1,\n    u_severity=\"significant\" AND age_days>45, 1,\n    u_severity=\"other\" AND age_days>90, 1,\n    1=1, 0)\n| where state!=\"Closed\" OR breach=1\n| stats latest(state) as state, max(age_days) as max_age by number, u_control_id, assignment_group, u_severity\n| sort - max_age",
              "m": "(1) Define severity-to-SLA mapping in a macro `deficiency_sla_days(severity)`; (2) require `u_control_id` linking to your control library; (3) weekly digest to CFO/CISO for material items; (4) closed tickets must populate `closed_at`; (5) correlate with external audit findings by shared `u_control_id`.",
              "z": "Table (aging deficiencies), Column chart (count by assignment_group), Single value (SLA breaches).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:problem\"` or `sourcetype=\"snow:sc_req_item\"` (number, opened_at, closed_at, state, assignment_group, u_control_id, u_severity).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define severity-to-SLA mapping in a macro `deficiency_sla_days(severity)`; (2) require `u_control_id` linking to your control library; (3) weekly digest to CFO/CISO for material items; (4) closed tickets must populate `closed_at`; (5) correlate with external audit findings by shared `u_control_id`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype IN (\"snow:problem\",\"snow:sc_req_item\") u_control_id=* earliest=-180d\n| eval open_epoch=if(isnotnull(closed_at), strptime(closed_at,\"%Y-%m-%d %H:%M:%S\"), now())\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval age_days=round((open_epoch-opened)/86400,1)\n| eval breach=case(\n    u_severity=\"material\" AND age_days>60, 1,\n    u_severity=\"significant\" AND age_days>45, 1,\n    u_severity=\"other\" AND age_days>90, 1,\n    1=1, 0)\n| where state!=\"Closed\" OR breach=1\n| stats latest(state) as state, max(age_days) as max_age by number, u_control_id, assignment_group, u_severity\n| sort - max_age\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC3 — Management Accountability for Control Deficiency Remediation SLAs** — CC3 expects management to identify and respond to risks. Tracking internal-control deficiencies from identification through management sign-off demonstrates accountability and timely remediation—core evidence for COSO principle 17 narratives.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:problem\"` or `sourcetype=\"snow:sc_req_item\"` (number, opened_at, closed_at, state, assignment_group, u_control_id, u_severity). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **open_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **opened** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where state!=\"Closed\" OR breach=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by number, u_control_id, assignment_group, u_severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (aging deficiencies), Column chart (count by assignment_group), Single value (SLA breaches).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC3 — Management Accountability for Control Deficiency Remediation SLAs. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC3.1 (Risk assessment) is enforced — Splunk UC-22.8.12: SOC 2 CC3 — Management Accountability for Control Deficiency Remediation SLAs.",
                  "ea": "Saved search 'UC-22.8.12' running on sourcetype snow:problem and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
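Step (1) of UC 22.8.12 centralizes the severity-to-SLA mapping in a `deficiency_sla_days(severity)` macro. One possible `macros.conf` stanza, sketched with the same 60/45/90-day thresholds used in the SPL above (the thresholds and stanza are assumptions; adjust to your own policy):

```conf
# Expands to an eval expression: the argument is treated as a field reference
[deficiency_sla_days(1)]
args = severity
definition = case('$severity$'=="material", 60, '$severity$'=="significant", 45, true(), 90)
```

With the macro in place, the breach test in the search can collapse to `` | eval breach=if(age_days > `deficiency_sla_days(u_severity)`, 1, 0) `` so the SLA values live in a single knowledge object.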
            {
              "i": "22.8.13",
              "n": "SOC 2 CC4 — Enterprise Risk Register Ingestion and Coverage Gaps",
              "c": "high",
              "f": "intermediate",
              "v": "CC4 requires consideration of risks to objectives. Monitoring the risk register for stale reviews, missing owners, or systems without mapped risks shows the assessment process is operational—not a static spreadsheet export.",
              "t": "Splunk DB Connect (Splunkbase 2686), HTTP Event Collector",
              "d": "`index=risk` `sourcetype=\"grc:risk_register\"` (risk_id, system_id, owner, inherent_score, residual_score, last_review_date, status)",
              "q": "index=risk sourcetype=\"grc:risk_register\" earliest=-1d\n| eval last_rev=strptime(last_review_date,\"%Y-%m-%d\")\n| eval days_since_review=round((now()-last_rev)/86400,0)\n| eval gap=case(\n    isnull(system_id) OR system_id=\"\", \"NO_SYSTEM_MAPPING\",\n    days_since_review>365, \"STALE_REVIEW\",\n    status!=\"active\" AND residual_score>12, \"INACTIVE_HIGH_RESIDUAL\",\n    1=1, \"OK\")\n| where gap!=\"OK\"\n| stats count by gap, owner\n| sort - count",
              "m": "(1) Normalize `system_id` to CMDB keys; (2) integrate GRC export nightly; (3) alert on `NO_SYSTEM_MAPPING` for production systems from `cmdb:production_systems.csv`; (4) tie residual_score scale to your GRC methodology; (5) quarterly attestation that zero high risks lack owners.",
              "z": "Table (gaps), Pie chart (gap types), Single value (stale reviews).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HTTP Event Collector.\n• Ensure the following data sources are available: `index=risk` `sourcetype=\"grc:risk_register\"` (risk_id, system_id, owner, inherent_score, residual_score, last_review_date, status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize `system_id` to CMDB keys; (2) integrate GRC export nightly; (3) alert on `NO_SYSTEM_MAPPING` for production systems from `cmdb:production_systems.csv`; (4) tie residual_score scale to your GRC methodology; (5) quarterly attestation that zero high risks lack owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=risk sourcetype=\"grc:risk_register\" earliest=-1d\n| eval last_rev=strptime(last_review_date,\"%Y-%m-%d\")\n| eval days_since_review=round((now()-last_rev)/86400,0)\n| eval gap=case(\n    isnull(system_id) OR system_id=\"\", \"NO_SYSTEM_MAPPING\",\n    days_since_review>365, \"STALE_REVIEW\",\n    status!=\"active\" AND residual_score>12, \"INACTIVE_HIGH_RESIDUAL\",\n    1=1, \"OK\")\n| where gap!=\"OK\"\n| stats count by gap, owner\n| sort - count\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC4 — Enterprise Risk Register Ingestion and Coverage Gaps** — CC4 requires consideration of risks to objectives. Monitoring the risk register for stale reviews, missing owners, or systems without mapped risks shows the assessment process is operational—not a static spreadsheet export.\n\nDocumented **Data sources**: `index=risk` `sourcetype=\"grc:risk_register\"` (risk_id, system_id, owner, inherent_score, residual_score, last_review_date, status). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HTTP Event Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: risk; **sourcetype**: grc:risk_register. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=risk, sourcetype=\"grc:risk_register\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **last_rev** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_since_review** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by gap, owner** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (gaps), Pie chart (gap types), Single value (stale reviews).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC4 — Enterprise Risk Register Ingestion and Coverage Gaps. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Risk"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC7.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC7.2 (System monitoring for anomalies) is enforced — Splunk UC-22.8.13: SOC 2 CC4 — Enterprise Risk Register Ingestion and Coverage Gaps.",
                  "ea": "Saved search 'UC-22.8.13' running on sourcetype grc:risk_register and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
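Step (3) of UC 22.8.13 alerts on production systems with no mapped risk. A minimal coverage sketch, assuming a `production_systems.csv` lookup keyed by `system_id` (the lookup name and column are assumptions; align them with your CMDB export) and the same `grc:risk_register` feed:

```spl
index=risk sourcetype="grc:risk_register" earliest=-1d system_id=*
| stats count as mapped_risks by system_id
| inputlookup append=t production_systems.csv
| stats sum(mapped_risks) as mapped_risks by system_id
| where isnull(mapped_risks)
| table system_id
```

The `inputlookup append=t` plus `stats` pattern avoids a `join`; systems that appear only in the CMDB list surface with a null `mapped_risks`, which is exactly the NO_SYSTEM_MAPPING gap.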
            {
              "i": "22.8.14",
              "n": "SOC 2 CC4 — Fraud Risk Scenario Testing Evidence from Anomaly Correlation",
              "c": "critical",
              "f": "advanced",
              "v": "CC4 includes fraud risk. Documenting that specific fraud scenarios (payroll diversion, vendor master change, refund abuse) are exercised and that monitoring fired in tabletop or purple-team events satisfies assessors that fraud is in scope.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`` `notable` `` with `tag=fraud_scenario`; `index=security` `sourcetype=\"purple_team:event\"` (scenario_id, phase, result)",
              "q": "index=security sourcetype=\"purple_team:event\" earliest=-180d\n| stats earliest(_time) as first_seen, latest(_time) as last_seen, values(result) as results by scenario_id, phase\n| join type=left scenario_id [\n    search `notable` earliest=-180d tag=fraud_scenario\n    | stats count as correlated_notables by rule_name\n    | rename rule_name as scenario_id\n  ]\n| eval evidence_ok=if(isnotnull(correlated_notables) AND correlated_notables>0,1,0)\n| where evidence_ok=0 AND phase=\"detection_validation\"\n| table scenario_id, phase, first_seen, last_seen, correlated_notables",
              "m": "(1) Maintain `scenario_id` alignment between purple team and ES rules; (2) tag notables used for fraud drills; (3) store facilitator sign-off outside Splunk, link `scenario_id`; (4) rerun after control changes; (5) red-team sensitive details in dashboards.",
              "z": "Table (gaps), Timeline (phases), Single value (scenarios without detection).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `` `notable` `` with `tag=fraud_scenario`; `index=security` `sourcetype=\"purple_team:event\"` (scenario_id, phase, result).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `scenario_id` alignment between purple team and ES rules; (2) tag notables used for fraud drills; (3) store facilitator sign-off outside Splunk, link `scenario_id`; (4) rerun after control changes; (5) red-team sensitive details in dashboards.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security sourcetype=\"purple_team:event\" earliest=-180d\n| stats earliest(_time) as first_seen, latest(_time) as last_seen, values(result) as results by scenario_id, phase\n| join type=left scenario_id [\n    search `notable` earliest=-180d tag=fraud_scenario\n    | stats count as correlated_notables by rule_name\n    | rename rule_name as scenario_id\n  ]\n| eval evidence_ok=if(isnotnull(correlated_notables) AND correlated_notables>0,1,0)\n| where evidence_ok=0 AND phase=\"detection_validation\"\n| table scenario_id, phase, first_seen, last_seen, correlated_notables\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC4 — Fraud Risk Scenario Testing Evidence from Anomaly Correlation** — CC4 includes fraud risk. Documenting that specific fraud scenarios (payroll diversion, vendor master change, refund abuse) are exercised and that monitoring fired in tabletop or purple-team events satisfies assessors that fraud is in scope.\n\nDocumented **Data sources**: `` `notable` `` with `tag=fraud_scenario`; `index=security` `sourcetype=\"purple_team:event\"` (scenario_id, phase, result). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security; **sourcetype**: purple_team:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security, sourcetype=\"purple_team:event\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by scenario_id, phase** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **evidence_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where evidence_ok=0 AND phase=\"detection_validation\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 CC4 — Fraud Risk Scenario Testing Evidence from Anomaly Correlation**): table scenario_id, phase, first_seen, last_seen, correlated_notables\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (gaps), Timeline (phases), Single value (scenarios without detection).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC4 — Fraud Risk Scenario Testing Evidence from Anomaly Correlation. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC7.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC7.2 (System monitoring for anomalies) is enforced — Splunk UC-22.8.14: SOC 2 CC4 — Fraud Risk Scenario Testing Evidence from Anomaly Correlation.",
                  "ea": "Saved search 'UC-22.8.14' running on notable with tag=fraud_scenario; index=security sourcetype=\"purple_team:event\" (scenario_id, phase, result), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
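The walkthrough for UC 22.8.14 flags that `join` can silently truncate. An equivalent correlation without `join`, sketched under the same assumptions (purple-team events and ES notables share the `scenario_id` / `rule_name` naming, and the ES `` `notable` `` macro is available):

```spl
index=security sourcetype="purple_team:event" phase="detection_validation" earliest=-180d
| stats earliest(_time) as first_seen, latest(_time) as last_seen by scenario_id
| append [ search `notable` tag=fraud_scenario earliest=-180d
    | stats count as correlated_notables by rule_name
    | rename rule_name as scenario_id ]
| stats values(first_seen) as first_seen, values(last_seen) as last_seen, sum(correlated_notables) as correlated_notables by scenario_id
| where isnotnull(first_seen) AND (isnull(correlated_notables) OR correlated_notables=0)
| table scenario_id, first_seen, last_seen, correlated_notables
```

This sidesteps the `join` row-matching behavior the walkthrough warns about; subsearch result limits still apply to the `append`, and notables with no matching drill events drop out via the `isnotnull(first_seen)` guard.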
            {
              "i": "22.8.15",
              "n": "SOC 2 CC5 — Change Impact Analysis Completeness for Production Releases",
              "c": "high",
              "f": "intermediate",
              "v": "CC5 expects consideration of fraud and other changes. Verifying every production change record carries impact analysis, rollback plan, and security review flags demonstrates change-related risks were evaluated before deployment.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:change_request\"` (number, state, u_impact_analysis, u_rollback_plan, u_security_review, start_date, cmdb_ci)",
              "q": "index=itsm sourcetype=\"snow:change_request\" state IN (\"Implement\",\"Closed\") earliest=-60d\n| eval missing=mvjoin(\n    if(isnull(u_impact_analysis) OR u_impact_analysis=\"\",\"impact\",null()),\n    if(isnull(u_rollback_plan) OR u_rollback_plan=\"\",\"rollback\",null()),\n    if(isnull(u_security_review) OR u_security_review=\"false\",\"sec_review\",null()),\n    \",\")\n| where missing!=\"\"\n| stats count by missing, cmdb_ci\n| sort - count",
              "m": "(1) Map Snow fields to your CMDB CI classes requiring analysis; (2) exclude standard changes via `chg_model=standard`; (3) integrate with CAB workflow to block closure in source system, Splunk mirrors drift; (4) monthly metric for % changes with complete analysis; (5) attach search export to change management KPI deck.",
              "z": "Table (defective changes), Bar chart (missing field counts), Single value (% complete inverse).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:change_request\"` (number, state, u_impact_analysis, u_rollback_plan, u_security_review, start_date, cmdb_ci).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map Snow fields to your CMDB CI classes requiring analysis; (2) exclude standard changes via `chg_model=standard`; (3) integrate with CAB workflow to block closure in source system, Splunk mirrors drift; (4) monthly metric for % changes with complete analysis; (5) attach search export to change management KPI deck.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:change_request\" state IN (\"Implement\",\"Closed\") earliest=-60d\n| eval missing=mvjoin(\n    if(isnull(u_impact_analysis) OR u_impact_analysis=\"\",\"impact\",null()),\n    if(isnull(u_rollback_plan) OR u_rollback_plan=\"\",\"rollback\",null()),\n    if(isnull(u_security_review) OR u_security_review=\"false\",\"sec_review\",null()),\n    \",\")\n| where missing!=\"\"\n| stats count by missing, cmdb_ci\n| sort - count\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC5 — Change Impact Analysis Completeness for Production Releases** — CC5 expects consideration of fraud and other changes. Verifying every production change record carries impact analysis, rollback plan, and security review flags demonstrates change-related risks were evaluated before deployment.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"` (number, state, u_impact_analysis, u_rollback_plan, u_security_review, start_date, cmdb_ci). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:change_request\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **missing** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where missing!=\"\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by missing, cmdb_ci** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 CC5 — Change Impact Analysis Completeness for Production Releases** — CC5 expects consideration of fraud and other changes. 
Verifying every production change record carries impact analysis, rollback plan, and security review flags demonstrates change-related risks were evaluated before deployment.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:change_request\"` (number, state, u_impact_analysis, u_rollback_plan, u_security_review, start_date, cmdb_ci). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (defective changes), Bar chart (missing field counts), Single value (% complete inverse).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC5 — Change Impact Analysis Completeness for Production Releases. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Change"
              ],
              "pillar": "observability",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC5.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC5.1 (Control activities) is enforced — Splunk UC-22.8.15: SOC 2 CC5 — Change Impact Analysis Completeness for Production Releases.",
                  "ea": "Saved search 'UC-22.8.15' running on sourcetype snow:change_request and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
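Step (4) of UC 22.8.15 reports a monthly percentage of changes with complete analysis. A minimal sketch against the same `snow:change_request` feed (the completeness test mirrors the detection logic above; the custom `u_*` field names follow the documented data sources and may need adjusting to your instance):

```spl
index=itsm sourcetype="snow:change_request" state IN ("Implement","Closed") earliest=-6mon@mon
| eval complete=if(isnotnull(u_impact_analysis) AND u_impact_analysis!="" AND isnotnull(u_rollback_plan) AND u_rollback_plan!="" AND u_security_review="true", 1, 0)
| timechart span=1mon sum(complete) as complete_changes, count as total_changes
| eval pct_complete=round(100*complete_changes/total_changes, 1)
```

Trend `pct_complete` on the change-management KPI deck and alert if it drops below the agreed target.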
            {
              "i": "22.8.16",
              "n": "SOC 2 CC6 — Credential Lifecycle — Orphan and Contractor Account Detection",
              "c": "high",
              "f": "advanced",
              "v": "CC6 requires logical access be restricted. Finding accounts without HR sponsorship, expired contractors still enabled, or shared break-glass IDs without rotation evidence supports least-privilege and lifecycle assertions.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), HR feed via `inputlookup`",
              "d": "`index=identity` `sourcetype=\"idm:account_snapshot\"` (samaccountname, account_type, enabled, manager_id, sponsor_id, expiry_date); `hr_active_workers.csv` (employee_id, status)",
              "q": "| inputlookup idm_account_snapshot.csv\n| lookup hr_active_workers.csv employee_id as sponsor_id OUTPUT status as sponsor_status\n| eval expired_contractor=if(account_type=\"contractor\" AND enabled=\"true\" AND (isnull(expiry_date) OR now()>strptime(expiry_date,\"%Y-%m-%d\")),1,0)\n| eval orphan=if(enabled=\"true\" AND (isnull(sponsor_id) OR sponsor_status!=\"active\"),1,0)\n| where orphan=1 OR expired_contractor=1\n| stats values(eval(if(orphan=1,\"orphan\",null()))) as flags, values(eval(if(expired_contractor=1,\"expired_contractor\",null()))) as flags2 by samaccountname, account_type, manager_id\n| eval issue=mvappend(flags,flags2)\n| table samaccountname, account_type, issue, manager_id",
              "m": "(1) Replace CSV with live IdM export; (2) define `account_type` taxonomy; (3) auto-ticket IAM for `expired_contractor=1`; (4) weekly digest for `orphan`; (5) exclude service accounts via `svc_exclusions.csv`.",
              "z": "Table (accounts), Single value (orphan count), Donut (account_type).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078",
                "T1136"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), HR feed via `inputlookup`.\n• Ensure the following data sources are available: `index=identity` `sourcetype=\"idm:account_snapshot\"` (samaccountname, account_type, enabled, manager_id, sponsor_id, expiry_date); `hr_active_workers.csv` (employee_id, status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Replace CSV with live IdM export; (2) define `account_type` taxonomy; (3) auto-ticket IAM for `expired_contractor=1`; (4) weekly digest for `orphan`; (5) exclude service accounts via `svc_exclusions.csv`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup idm_account_snapshot.csv\n| lookup hr_active_workers.csv employee_id as sponsor_id OUTPUT status as sponsor_status\n| eval expired_contractor=if(account_type=\"contractor\" AND enabled=\"true\" AND (isnull(expiry_date) OR now()>strptime(expiry_date,\"%Y-%m-%d\")),1,0)\n| eval orphan=if(enabled=\"true\" AND (isnull(sponsor_id) OR sponsor_status!=\"active\"),1,0)\n| where orphan=1 OR expired_contractor=1\n| stats values(eval(if(orphan=1,\"orphan\",null()))) as flags, values(eval(if(expired_contractor=1,\"expired_contractor\",null()))) as flags2 by samaccountname, account_type, manager_id\n| eval issue=mvappend(flags,flags2)\n| table samaccountname, account_type, issue, manager_id\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC6 — Credential Lifecycle — Orphan and Contractor Account Detection** — CC6 requires logical access be restricted. Finding accounts without HR sponsorship, expired contractors still enabled, or shared break-glass IDs without rotation evidence supports least-privilege and lifecycle assertions.\n\nDocumented **Data sources**: `index=identity` `sourcetype=\"idm:account_snapshot\"` (samaccountname, account_type, enabled, manager_id, sponsor_id, expiry_date); `hr_active_workers.csv` (employee_id, status). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), HR feed via `inputlookup`. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **expired_contractor** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **orphan** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where orphan=1 OR expired_contractor=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by samaccountname, account_type, manager_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **issue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **SOC 2 CC6 — Credential Lifecycle — Orphan and Contractor Account Detection**): table samaccountname, account_type, issue, manager_id\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 CC6 — Credential Lifecycle — Orphan and Contractor Account Detection** — CC6 requires logical access be restricted. Finding accounts without HR sponsorship, expired contractors still enabled, or shared break-glass IDs without rotation evidence supports least-privilege and lifecycle assertions.\n\nDocumented **Data sources**: `index=identity` `sourcetype=\"idm:account_snapshot\"` (samaccountname, account_type, enabled, manager_id, sponsor_id, expiry_date); `hr_active_workers.csv` (employee_id, status). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), HR feed via `inputlookup`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (accounts), Single value (orphan count), Donut (account_type).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC6 — Credential Lifecycle — Orphan and Contractor Account Detection. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC6.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC6.1 (Logical access controls) is enforced — Splunk UC-22.8.16: SOC 2 CC6 — Credential Lifecycle — Orphan and Contractor Account Detection.",
                  "ea": "Saved search 'UC-22.8.16' running on sourcetype idm:account_snapshot and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
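Step (5) of UC 22.8.16 excludes service accounts via `svc_exclusions.csv`. A minimal sketch of that exclusion ahead of the orphan logic, assuming the lookup is keyed by `samaccountname` with an `excluded` flag (the column name is an assumption):

```spl
| inputlookup idm_account_snapshot.csv
| lookup svc_exclusions.csv samaccountname OUTPUT excluded
| where isnull(excluded) OR excluded!="true"
| lookup hr_active_workers.csv employee_id as sponsor_id OUTPUT status as sponsor_status
| eval orphan=if(enabled="true" AND (isnull(sponsor_id) OR sponsor_status!="active"),1,0)
| where orphan=1
| table samaccountname, account_type, manager_id
```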
            {
              "i": "22.8.17",
              "n": "SOC 2 CC6 — Physical Access Review Exception Tracking for Sensitive Facilities",
              "c": "high",
              "f": "intermediate",
              "v": "CC6 includes physical protections. Correlating badge reader logs with quarterly access certification exceptions proves reviews occurred and lingering physical access was explicitly accepted or revoked.",
              "t": "Custom HEC or CSV from PACS, Splunk DB Connect",
              "d": "`index=physical` `sourcetype=\"pacs:access\"` (badge_id, site_id, door_id, granted, _time); `physical_access_cert.csv` (badge_id, review_quarter, decision, reviewer, expires_on)",
              "q": "index=physical sourcetype=\"pacs:access\" granted=true earliest=-30d site_id IN (\"DC1\",\"DC2\",\"HQ_SEC\")\n| stats latest(_time) as last_physical by badge_id, site_id\n| inputlookup append=t physical_access_cert.csv\n| stats latest(review_quarter) as rq, latest(decision) as dec, latest(expires_on) as exp, max(last_physical) as last_physical by badge_id, site_id\n| eval exp_epoch=strptime(expires_on,\"%Y-%m-%d\")\n| where isnull(exp_epoch) OR exp_epoch<now() OR isnull(dec)\n| table badge_id, site_id, last_physical, rq, dec, exp",
              "m": "(1) Map `badge_id` to human identity via secure lookup; (2) restrict site_id list to SOC-relevant facilities; (3) alert when access after `exp` without new cert row; (4) PII-minimize dashboards; (5) retain aligned with physical security policy (often 1 year).",
              "z": "Table (exceptions), Map or bar by site_id, Single value (uncertified active badges).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC or CSV from PACS, Splunk DB Connect.\n• Ensure the following data sources are available: `index=physical` `sourcetype=\"pacs:access\"` (badge_id, site_id, door_id, granted, _time); `physical_access_cert.csv` (badge_id, review_quarter, decision, reviewer, expires_on).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map `badge_id` to human identity via secure lookup; (2) restrict site_id list to SOC-relevant facilities; (3) alert when access after `exp` without new cert row; (4) PII-minimize dashboards; (5) retain aligned with physical security policy (often 1 year).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"pacs:access\" granted=true earliest=-30d site_id IN (\"DC1\",\"DC2\",\"HQ_SEC\")\n| stats latest(_time) as last_physical by badge_id, site_id\n| inputlookup append=t physical_access_cert.csv\n| stats latest(review_quarter) as rq, latest(decision) as dec, latest(expires_on) as exp, max(last_physical) as last_physical by badge_id, site_id\n| eval exp_epoch=strptime(expires_on,\"%Y-%m-%d\")\n| where isnull(exp_epoch) OR exp_epoch<now() OR isnull(dec)\n| table badge_id, site_id, last_physical, rq, dec, exp\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC6 — Physical Access Review Exception Tracking for Sensitive Facilities** — CC6 includes physical protections. Correlating badge reader logs with quarterly access certification exceptions proves reviews occurred and lingering physical access was explicitly accepted or revoked.\n\nDocumented **Data sources**: `index=physical` `sourcetype=\"pacs:access\"` (badge_id, site_id, door_id, granted, _time); `physical_access_cert.csv` (badge_id, review_quarter, decision, reviewer, expires_on). **App/TA** (typical add-on context): Custom HEC or CSV from PACS, Splunk DB Connect. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: pacs:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"pacs:access\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by badge_id, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `stats` rolls up events into metrics; results are split **by badge_id, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(exp_epoch) OR exp_epoch<now() OR isnull(dec)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 CC6 — Physical Access Review Exception Tracking for Sensitive Facilities**): table badge_id, site_id, last_physical, rq, dec, exp\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (exceptions), Map or bar by site_id, Single value (uncertified active badges).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC6 — Physical Access Review Exception Tracking for Sensitive Facilities. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Physical Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC7.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC7.2 (System monitoring for anomalies) is enforced — Splunk UC-22.8.17: SOC 2 CC6 — Physical Access Review Exception Tracking for Sensitive Facilities.",
                  "ea": "Saved search 'UC-22.8.17' running on sourcetype pacs:access and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.18",
              "n": "SOC 2 CC6 — Encryption in Transit Policy Enforcement for Admin and API Paths",
              "c": "critical",
              "f": "advanced",
              "v": "CC6 expects encryption of data in transmission. Detecting TLS policy regressions—weak cipher suites, certificate expirations, or plaintext admin sessions—provides continuous evidence that cryptographic controls remain effective.",
              "t": "Splunk Add-on for Zeek (Splunkbase 1617), Splunk Add-on for Nginx or F5 LTM (as applicable)",
              "d": "`index=network` `sourcetype IN (\"zeek:ssl\",\"nginx:access\")` (server_name, ssl_version, cipher, cert_expires); tag admin paths via `uri_path`",
              "q": "index=network sourcetype=\"zeek:ssl\" earliest=-24h\n| where ssl_version IN (\"TLSv1\",\"TLSv1.0\",\"SSLv3\") OR like(cipher,\"%NULL%\") OR like(cipher,\"%EXPORT%\")\n| stats count by server_name, ssl_version, cipher\n| sort - count",
              "m": "(1) Expand to internal services behind load balancers using `server_name`; (2) join cert inventory for `cert_expires` within 30 days; (3) separate customer-facing vs admin VIPs; (4) integrate with vulnerability SLA for weak TLS; (5) document compensating controls for legacy apps with approved exceptions lookup.",
              "z": "Table (weak sessions), Line chart (trend of weak cipher hits), Single value (distinct servers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Zeek](https://splunkbase.splunk.com/app/1617), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [
                "T1557"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Zeek (Splunkbase 1617), Splunk Add-on for Nginx or F5 LTM (as applicable).\n• Ensure the following data sources are available: `index=network` `sourcetype IN (\"zeek:ssl\",\"nginx:access\")` (server_name, ssl_version, cipher, cert_expires); tag admin paths via `uri_path`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Expand to internal services behind load balancers using `server_name`; (2) join cert inventory for `cert_expires` within 30 days; (3) separate customer-facing vs admin VIPs; (4) integrate with vulnerability SLA for weak TLS; (5) document compensating controls for legacy apps with approved exceptions lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"zeek:ssl\" earliest=-24h\n| where ssl_version IN (\"TLSv1\",\"TLSv1.0\",\"SSLv3\") OR like(cipher,\"%NULL%\") OR like(cipher,\"%EXPORT%\")\n| stats count by server_name, ssl_version, cipher\n| sort - count\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC6 — Encryption in Transit Policy Enforcement for Admin and API Paths** — CC6 expects encryption of data in transmission. Detecting TLS policy regressions—weak cipher suites, certificate expirations, or plaintext admin sessions—provides continuous evidence that cryptographic controls remain effective.\n\nDocumented **Data sources**: `index=network` `sourcetype IN (\"zeek:ssl\",\"nginx:access\")` (server_name, ssl_version, cipher, cert_expires); tag admin paths via `uri_path`. **App/TA** (typical add-on context): Splunk Add-on for Zeek (Splunkbase 1617), Splunk Add-on for Nginx or F5 LTM (as applicable). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: zeek:ssl. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"zeek:ssl\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where ssl_version IN (\"TLSv1\",\"TLSv1.0\",\"SSLv3\") OR like(cipher,\"%NULL%\") OR like(cipher,\"%EXPORT%\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by server_name, ssl_version, cipher** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 CC6 — Encryption in Transit Policy Enforcement for Admin and API Paths** — CC6 expects encryption of data in transmission. 
Detecting TLS policy regressions—weak cipher suites, certificate expirations, or plaintext admin sessions—provides continuous evidence that cryptographic controls remain effective.\n\nDocumented **Data sources**: `index=network` `sourcetype IN (\"zeek:ssl\",\"nginx:access\")` (server_name, ssl_version, cipher, cert_expires); tag admin paths via `uri_path`. **App/TA** (typical add-on context): Splunk Add-on for Zeek (Splunkbase 1617), Splunk Add-on for Nginx or F5 LTM (as applicable). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (weak sessions), Line chart (trend of weak cipher hits), Single value (distinct servers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC6 — Encryption in Transit Policy Enforcement for Admin and API Paths. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "f5",
                "nginx"
              ],
              "em": [
                "nginx_open"
              ],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC6.6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC6.6 (Encryption in transit) is enforced — Splunk UC-22.8.18: SOC 2 CC6 — Encryption in Transit Policy Enforcement for Admin and API Paths.",
                  "ea": "Saved search 'UC-22.8.18' running on index network and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.19",
              "n": "SOC 2 CC6 — Timeliness of Access Removal After HR Termination Events",
              "c": "critical",
              "f": "advanced",
              "v": "CC6 expects removal of access when no longer required. Measuring lag between HR termination timestamp and last successful disablement event across IdM and key SaaS apps proves offboarding SLAs are met—high-risk evidence for every SOC examination.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), Workday/Okta logs via HEC",
              "d": "`index=hr` `sourcetype=\"workday:termination\"` (employee_id, term_effective_ts); `index=identity` `sourcetype=\"okta:system\"` (eventType, target_user_id, published)",
              "q": "index=hr sourcetype=\"workday:termination\" earliest=-30d\n| eval term=strptime(term_effective_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left employee_id [\n    search index=identity sourcetype=\"okta:system\" eventType=\"user.lifecycle.deactivate\" earliest=-30d\n    | eval deactivated=strptime(published,\"%Y-%m-%dT%H:%M:%SZ\")\n    | stats min(deactivated) as first_deact by target_user_id\n    | rename target_user_id as employee_id\n  ]\n| eval lag_sec=first_deact-term\n| eval lag_min=round(lag_sec/60,1)\n| where lag_min>120 OR isnull(first_deact)\n| table employee_id, term, first_deact, lag_min",
              "m": "(1) Align `employee_id` across HR and IdM; (2) tune 120 minutes to contractual SLA; (3) add parallel tracks for AD disable and VPN revoke; (4) page IAM on breach; (5) monthly median lag KPI for management review.",
              "z": "Table (breaches), Histogram (lag_min), Single value (p95 lag).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), Workday/Okta logs via HEC.\n• Ensure the following data sources are available: `index=hr` `sourcetype=\"workday:termination\"` (employee_id, term_effective_ts); `index=identity` `sourcetype=\"okta:system\"` (eventType, target_user_id, published).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `employee_id` across HR and IdM; (2) tune 120 minutes to contractual SLA; (3) add parallel tracks for AD disable and VPN revoke; (4) page IAM on breach; (5) monthly median lag KPI for management review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=\"workday:termination\" earliest=-30d\n| eval term=strptime(term_effective_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left employee_id [\n    search index=identity sourcetype=\"okta:system\" eventType=\"user.lifecycle.deactivate\" earliest=-30d\n    | eval deactivated=strptime(published,\"%Y-%m-%dT%H:%M:%SZ\")\n    | stats min(deactivated) as first_deact by target_user_id\n    | rename target_user_id as employee_id\n  ]\n| eval lag_sec=first_deact-term\n| eval lag_min=round(lag_sec/60,1)\n| where lag_min>120 OR isnull(first_deact)\n| table employee_id, term, first_deact, lag_min\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC6 — Timeliness of Access Removal After HR Termination Events** — CC6 expects removal of access when no longer required. Measuring lag between HR termination timestamp and last successful disablement event across IdM and key SaaS apps proves offboarding SLAs are met—high-risk evidence for every SOC examination.\n\nDocumented **Data sources**: `index=hr` `sourcetype=\"workday:termination\"` (employee_id, term_effective_ts); `index=identity` `sourcetype=\"okta:system\"` (eventType, target_user_id, published). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Workday/Okta logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: workday:termination. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=\"workday:termination\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **term** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **lag_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **lag_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lag_min>120 OR isnull(first_deact)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 CC6 — Timeliness of Access Removal After HR Termination Events**): table employee_id, term, first_deact, lag_min\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 CC6 — Timeliness of Access Removal After HR Termination Events** — CC6 expects removal of access when no longer required. Measuring lag between HR termination timestamp and last successful disablement event across IdM and key SaaS apps proves offboarding SLAs are met—high-risk evidence for every SOC examination.\n\nDocumented **Data sources**: `index=hr` `sourcetype=\"workday:termination\"` (employee_id, term_effective_ts); `index=identity` `sourcetype=\"okta:system\"` (eventType, target_user_id, published). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Workday/Okta logs via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (breaches), Histogram (lag_min), Single value (p95 lag).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC6 — Timeliness of Access Removal After HR Termination Events. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "okta"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC3.1 (Risk assessment) is enforced — Splunk UC-22.8.19: SOC 2 CC6 — Timeliness of Access Removal After HR Termination Events.",
                  "ea": "Saved search 'UC-22.8.19' running on sourcetype workday:termination and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.20",
              "n": "SOC 2 CC7 — Unauthorized Production Configuration Change Detection",
              "c": "critical",
              "f": "advanced",
              "v": "CC7 addresses system operations. Detecting changes to firewall rules, cloud security groups, or database parameters without a matching approved change ticket demonstrates monitoring over unauthorized modifications.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for Microsoft Windows or Palo Alto Networks (as applicable)",
              "d": "`index=aws` `sourcetype=\"aws:cloudtrail\"` (eventName, requestParameters, userIdentity.arn, _time); `index=itsm` `sourcetype=\"snow:change_request\"` (number, start_date, cmdb_ci)",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventName IN (\"AuthorizeSecurityGroupIngress\",\"RevokeSecurityGroupIngress\",\"ModifyDBClusterParameterGroup\") earliest=-7d\n| eval change_key=md5(_raw)\n| join type=left change_key [\n    search index=itsm sourcetype=\"snow:change_request\" earliest=-14d\n    | eval change_key=md5(number+cmdb_ci)\n    | fields number\n  ]\n| where isnull(number)\n| stats count by eventName, userIdentity.arn\n| sort - count",
              "m": "(1) Replace simplified join with CMDB-linked correlation on `requestParameters.groupId`; (2) allow emergency change workflow via `userIdentity.arn` lookup `breakglass_arns.csv`; (3) send to SOC and ITIL process owner; (4) retain CloudTrail in WORM bucket; (5) map events to CC7-2 subservice in system description.",
              "z": "Table (unmatched changes), Timeline, Single value (count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1565",
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for Microsoft Windows or Palo Alto Networks (as applicable).\n• Ensure the following data sources are available: `index=aws` `sourcetype=\"aws:cloudtrail\"` (eventName, requestParameters, userIdentity.arn, _time); `index=itsm` `sourcetype=\"snow:change_request\"` (number, start_date, cmdb_ci).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Replace simplified join with CMDB-linked correlation on `requestParameters.groupId`; (2) allow emergency change workflow via `userIdentity.arn` lookup `breakglass_arns.csv`; (3) send to SOC and ITIL process owner; (4) retain CloudTrail in WORM bucket; (5) map events to CC7-2 subservice in system description.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventName IN (\"AuthorizeSecurityGroupIngress\",\"RevokeSecurityGroupIngress\",\"ModifyDBClusterParameterGroup\") earliest=-7d\n| eval change_key=md5(_raw)\n| join type=left change_key [\n    search index=itsm sourcetype=\"snow:change_request\" earliest=-14d\n    | eval change_key=md5(number+cmdb_ci)\n    | fields number\n  ]\n| where isnull(number)\n| stats count by eventName, userIdentity.arn\n| sort - count\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC7 — Unauthorized Production Configuration Change Detection** — CC7 addresses system operations. Detecting changes to firewall rules, cloud security groups, or database parameters without a matching approved change ticket demonstrates monitoring over unauthorized modifications.\n\nDocumented **Data sources**: `index=aws` `sourcetype=\"aws:cloudtrail\"` (eventName, requestParameters, userIdentity.arn, _time); `index=itsm` `sourcetype=\"snow:change_request\"` (number, start_date, cmdb_ci). **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for Microsoft Windows or Palo Alto Networks (as applicable). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **change_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(number)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by eventName, userIdentity.arn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 CC7 — Unauthorized Production Configuration Change Detection** — CC7 addresses system operations. Detecting changes to firewall rules, cloud security groups, or database parameters without a matching approved change ticket demonstrates monitoring over unauthorized modifications.\n\nDocumented **Data sources**: `index=aws` `sourcetype=\"aws:cloudtrail\"` (eventName, requestParameters, userIdentity.arn, _time); `index=itsm` `sourcetype=\"snow:change_request\"` (number, start_date, cmdb_ci). **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for Microsoft Windows or Palo Alto Networks (as applicable). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unmatched changes), Timeline, Single value (count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC7 — Unauthorized Production Configuration Change Detection. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "aws",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC7.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC7.2 (System monitoring for anomalies) is enforced — Splunk UC-22.8.20: SOC 2 CC7 — Unauthorized Production Configuration Change Detection.",
                  "ea": "Saved search 'UC-22.8.20' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.21",
              "n": "SOC 2 CC7 — Incident Classification Consistency and Severity Drift Audit",
              "c": "high",
              "f": "intermediate",
              "v": "CC7 expects defined incident response categories. Tracking reclassification events (severity lowered post-facto, category changes near month-end) supports integrity of incident metrics used for management reporting and customer notifications.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:incident\"` (number, severity, category, sys_updated_on); `snow:incident_audit` or `sys_audit` replica with field changes",
              "q": "index=itsm sourcetype=\"snow:incident_audit\" field_name IN (\"severity\",\"category\") earliest=-90d\n| streamstats window=1 global=f latest(new_value) as prev_val by number, field_name\n| where isnotnull(prev_val) AND new_value!=prev_val\n| eval downgrade=if(field_name=\"severity\" AND tonumber(new_value)>tonumber(prev_val),1,0)\n| stats count as changes, sum(downgrade) as severity_downgrades by number\n| where severity_downgrades>0 OR changes>3\n| sort - changes",
              "m": "(1) Ensure audit subflow indexes to Splunk; (2) map numeric severity scale; (3) route downgrades to IR quality review; (4) correlate with customer breach notification timestamps; (5) quarterly governance review of outliers.",
              "z": "Table (incidents), Bar chart (downgrades by month), Single value (distinct incidents).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:incident\"` (number, severity, category, sys_updated_on); `snow:incident_audit` or `sys_audit` replica with field changes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure audit subflow indexes to Splunk; (2) map numeric severity scale; (3) route downgrades to IR quality review; (4) correlate with customer breach notification timestamps; (5) quarterly governance review of outliers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident_audit\" field_name IN (\"severity\",\"category\") earliest=-90d\n| streamstats window=1 global=f latest(new_value) as prev_val by number, field_name\n| where isnotnull(prev_val) AND new_value!=prev_val\n| eval downgrade=if(field_name=\"severity\" AND tonumber(new_value)>tonumber(prev_val),1,0)\n| stats count as changes, sum(downgrade) as severity_downgrades by number\n| where severity_downgrades>0 OR changes>3\n| sort - changes\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC7 — Incident Classification Consistency and Severity Drift Audit** — CC7 expects defined incident response categories. Tracking reclassification events (severity lowered post-facto, category changes near month-end) supports integrity of incident metrics used for management reporting and customer notifications.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (number, severity, category, sys_updated_on); `snow:incident_audit` or `sys_audit` replica with field changes. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident_audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `streamstats` rolls up events into metrics; results are split **by number, field_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where isnotnull(prev_val) AND new_value!=prev_val` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **downgrade** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by number** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where severity_downgrades>0 OR changes>3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (incidents), Bar chart (downgrades by month), Single value (distinct incidents).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC7 — Incident Classification Consistency and Severity Drift Audit. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC8.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC8.1 (Change management) is enforced — Splunk UC-22.8.21: SOC 2 CC7 — Incident Classification Consistency and Severity Drift Audit.",
                  "ea": "Saved search 'UC-22.8.21' running on sourcetype snow:incident and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.22",
              "n": "SOC 2 CC7 — Operational Anomaly Detection on Critical Batch and API SLOs",
              "c": "high",
              "f": "advanced",
              "v": "CC7 includes monitoring for anomalies. Statistical deviation on transaction volumes, error rates, or job durations for financially relevant pipelines shows detective monitoring beyond static thresholds—supporting availability and processing-integrity narratives.",
              "t": "Splunk IT Service Intelligence (Splunkbase 1841) or MLTK",
              "d": "`index=app` `sourcetype=\"api:gateway\"` (service, status, latency_ms, _time); or `sourcetype=\"batch:summary\"` (job_name, duration_sec, records_out)",
              "q": "index=app sourcetype=\"api:gateway\" service=\"payments_api\" earliest=-60d\n| timechart span=1h perc95(latency_ms) as p95_lat, sum(eval(if(status>=500,1,0))) as err, count as vol\n| eventstats median(vol) as med_vol, stdev(vol) as sd_vol\n| eval z=if(sd_vol>0, (vol-med_vol)/sd_vol, 0)\n| where abs(z)>4\n| table _time, vol, med_vol, z, p95_lat, err",
              "m": "(1) Scope `service` to in-scope subsystems; (2) tune z-score window; (3) suppress known marketing events via lookup; (4) feed notable to on-call; (5) document model refresh cadence for auditors.",
              "z": "Line chart (vol with bands), Table (anomaly hours), Single value (count last 7d).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk IT Service Intelligence](https://splunkbase.splunk.com/app/1841), [CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk IT Service Intelligence (Splunkbase 1841) or MLTK.\n• Ensure the following data sources are available: `index=app` `sourcetype=\"api:gateway\"` (service, status, latency_ms, _time); or `sourcetype=\"batch:summary\"` (job_name, duration_sec, records_out).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Scope `service` to in-scope subsystems; (2) tune z-score window; (3) suppress known marketing events via lookup; (4) feed notable to on-call; (5) document model refresh cadence for auditors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"api:gateway\" service=\"payments_api\" earliest=-60d\n| timechart span=1h perc95(latency_ms) as p95_lat, sum(eval(if(status>=500,1,0))) as err, count as vol\n| eventstats median(vol) as med_vol, stdev(vol) as sd_vol\n| eval z=if(sd_vol>0, (vol-med_vol)/sd_vol, 0)\n| where abs(z)>4\n| table _time, vol, med_vol, z, p95_lat, err\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC7 — Operational Anomaly Detection on Critical Batch and API SLOs** — CC7 includes monitoring for anomalies. Statistical deviation on transaction volumes, error rates, or job durations for financially relevant pipelines shows detective monitoring beyond static thresholds—supporting availability and processing-integrity narratives.\n\nDocumented **Data sources**: `index=app` `sourcetype=\"api:gateway\"` (service, status, latency_ms, _time); or `sourcetype=\"batch:summary\"` (job_name, duration_sec, records_out). **App/TA** (typical add-on context): Splunk IT Service Intelligence (Splunkbase 1841) or MLTK. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: api:gateway. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"api:gateway\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **z** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where abs(z)>4` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 CC7 — Operational Anomaly Detection on Critical Batch and API SLOs**): table _time, vol, med_vol, z, p95_lat, err\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=1h | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 CC7 — Operational Anomaly Detection on Critical Batch and API SLOs** — CC7 includes monitoring for anomalies. 
Statistical deviation on transaction volumes, error rates, or job durations for financially relevant pipelines shows detective monitoring beyond static thresholds—supporting availability and processing-integrity narratives.\n\nDocumented **Data sources**: `index=app` `sourcetype=\"api:gateway\"` (service, status, latency_ms, _time); or `sourcetype=\"batch:summary\"` (job_name, duration_sec, records_out). **App/TA** (typical add-on context): Splunk IT Service Intelligence (Splunkbase 1841) or MLTK. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (vol with bands), Table (anomaly hours), Single value (count last 7d).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC7 — Operational Anomaly Detection on Critical Batch and API SLOs. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "pillar": "observability",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t sum(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=1h | sort - agg_value",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "A1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 A1.2 (Availability commitments) is enforced — Splunk UC-22.8.22: SOC 2 CC7 — Operational Anomaly Detection on Critical Batch and API SLOs.",
                  "ea": "Saved search 'UC-22.8.22' running on sourcetype api:gateway and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.23",
              "n": "SOC 2 CC7 — Vulnerability Management SLA and Exception Expiry Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "CC7 expects vulnerability management. Splunk consolidates scanner output with SLAs and risk-acceptance expirations so overdue criticals and stale exceptions cannot hide in siloed tools—direct evidence for patch-management controls.",
              "t": "Tenable Add-On for Splunk (Splunkbase 4060), Qualys Technology Add-on (Splunkbase 2964)",
              "d": "`index=vuln` `sourcetype IN (\"tenable:vuln\",\"qualys:host_vm_detection\")` (plugin_id, severity, first_found, patch_due_date, host_fqdn); `vuln_exceptions.csv` (plugin_id, host_fqdn, expires_on, approver)",
              "q": "index=vuln sourcetype=\"tenable:vuln\" severity IN (\"Critical\",\"High\") earliest=-1d\n| eval due=strptime(patch_due_date,\"%Y-%m-%d\")\n| where isnull(due) OR due<now()\n| lookup vuln_exceptions.csv plugin_id host_fqdn OUTPUT expires_on\n| eval exp=strptime(expires_on,\"%Y-%m-%d\")\n| where isnull(exp) OR exp<now()\n| stats dc(host_fqdn) as hosts, values(plugin_id) as plugins by severity\n| sort severity",
              "m": "(1) Normalize severity strings; (2) integrate exception workflow approvals; (3) auto-expire exceptions in GRC; (4) map hosts to owners via CMDB; (5) weekly leadership metric: overdue critical count.",
              "z": "Table (overdue), Single value (host count), Treemap by business unit from CMDB join.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [Qualys Technology Add-on](https://splunkbase.splunk.com/app/2964), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tenable Add-On for Splunk (Splunkbase 4060), Qualys Technology Add-on (Splunkbase 2964).\n• Ensure the following data sources are available: `index=vuln` `sourcetype IN (\"tenable:vuln\",\"qualys:host_vm_detection\")` (plugin_id, severity, first_found, patch_due_date, host_fqdn); `vuln_exceptions.csv` (plugin_id, host_fqdn, expires_on, approver).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize severity strings; (2) integrate exception workflow approvals; (3) auto-expire exceptions in GRC; (4) map hosts to owners via CMDB; (5) weekly leadership metric: overdue critical count.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=\"tenable:vuln\" severity IN (\"Critical\",\"High\") earliest=-1d\n| eval due=strptime(patch_due_date,\"%Y-%m-%d\")\n| where isnull(due) OR due<now()\n| lookup vuln_exceptions.csv plugin_id host_fqdn OUTPUT expires_on\n| eval exp=strptime(expires_on,\"%Y-%m-%d\")\n| where isnull(exp) OR exp<now()\n| stats dc(host_fqdn) as hosts, values(plugin_id) as plugins by severity\n| sort severity\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC7 — Vulnerability Management SLA and Exception Expiry Tracking** — CC7 expects vulnerability management. Splunk consolidates scanner output with SLAs and risk-acceptance expirations so overdue criticals and stale exceptions cannot hide in siloed tools—direct evidence for patch-management controls.\n\nDocumented **Data sources**: `index=vuln` `sourcetype IN (\"tenable:vuln\",\"qualys:host_vm_detection\")` (plugin_id, severity, first_found, patch_due_date, host_fqdn); `vuln_exceptions.csv` (plugin_id, host_fqdn, expires_on, approver). **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060), Qualys Technology Add-on (Splunkbase 2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: tenable:vuln. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=\"tenable:vuln\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(due) OR due<now()` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(exp) OR exp<now()` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 CC7 — Vulnerability Management SLA and Exception Expiry Tracking** — CC7 expects vulnerability management. Splunk consolidates scanner output with SLAs and risk-acceptance expirations so overdue criticals and stale exceptions cannot hide in siloed tools—direct evidence for patch-management controls.\n\nDocumented **Data sources**: `index=vuln` `sourcetype IN (\"tenable:vuln\",\"qualys:host_vm_detection\")` (plugin_id, severity, first_found, patch_due_date, host_fqdn); `vuln_exceptions.csv` (plugin_id, host_fqdn, expires_on, approver). **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060), Qualys Technology Add-on (Splunkbase 2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue), Single value (host count), Treemap by business unit from CMDB join.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC7 — Vulnerability Management SLA and Exception Expiry Tracking. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Vulnerability"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity | sort - count",
              "e": [
                "qualys",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC3.1 (Risk assessment) is enforced — Splunk UC-22.8.23: SOC 2 CC7 — Vulnerability Management SLA and Exception Expiry Tracking.",
                  "ea": "Saved search 'UC-22.8.23' running on index vuln and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
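Supplementary SLA-aging sketch for UC-22.8.23. It uses only the fields this entry documents on `tenable:vuln` events (plugin_id, severity, patch_due_date, host_fqdn); the `%Y-%m-%d` date format and the 7/30-day overdue buckets are illustrative assumptions, so adjust them to your scanner export and patch policy.

```spl
index=vuln sourcetype="tenable:vuln" severity IN ("Critical","High") earliest=-1d
| eval due=strptime(patch_due_date,"%Y-%m-%d")
| eval days_overdue=if(isnull(due), null(), round((now()-due)/86400,0))
| eval sla_bucket=case(isnull(due),"no due date", days_overdue<=0,"within SLA", days_overdue<=7,"overdue <=7d", days_overdue<=30,"overdue <=30d", 1=1,"overdue >30d")
| stats dc(host_fqdn) as hosts, dc(plugin_id) as findings by severity, sla_bucket
| sort severity, sla_bucket
```

A table like this gives the weekly leadership metric from Step 1 age context instead of a single overdue count.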
            {
              "i": "22.8.24",
              "n": "SOC 2 CC8 — Infrastructure-as-Code Drift vs Approved Terraform Modules",
              "c": "high",
              "f": "advanced",
              "v": "CC8 governs change to infrastructure. Comparing live cloud resource tags and configurations to approved module signatures detects shadow infrastructure that bypassed pipeline controls—supporting change integrity for cloud environments.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876), AWS Config events via EventBridge HEC",
              "d": "`index=cloud` `sourcetype=\"aws:config_snapshot\"` (resourceId, resourceType, configuration, tags); `approved_modules.csv` (resourceType, required_tag_module, golden_hash)",
              "q": "index=cloud sourcetype=\"aws:config_snapshot\" resourceType=\"AWS::EC2::Instance\" earliest=-1d\n| spath path=configuration path=output\n| eval module_tag=mvindex('tags{}.value', mvfind('tags{}.key', \"TerraformModule\"))\n| lookup approved_modules.csv resourceType OUTPUT required_tag_module\n| where isnull(module_tag) OR module_tag!=required_tag_module\n| stats count by module_tag, resourceType, awsAccountId\n| sort - count",
              "m": "(1) Adjust `spath` to your Config JSON shape; (2) maintain golden baseline per account; (3) integrate with Terraform Cloud audit API for module version; (4) route to cloud governance; (5) document exceptions in change register only.",
              "z": "Table (drift), Bar by account, Single value (instances noncompliant).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1190"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876), AWS Config events via EventBridge HEC.\n• Ensure the following data sources are available: `index=cloud` `sourcetype=\"aws:config_snapshot\"` (resourceId, resourceType, configuration, tags); `approved_modules.csv` (resourceType, required_tag_module, golden_hash).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Adjust `spath` to your Config JSON shape; (2) maintain golden baseline per account; (3) integrate with Terraform Cloud audit API for module version; (4) route to cloud governance; (5) document exceptions in change register only.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"aws:config_snapshot\" resourceType=\"AWS::EC2::Instance\" earliest=-1d\n| spath path=configuration path=output\n| eval module_tag=mvindex('tags{}.value', mvfind('tags{}.key', \"TerraformModule\"))\n| lookup approved_modules.csv resourceType OUTPUT required_tag_module\n| where isnull(module_tag) OR module_tag!=required_tag_module\n| stats count by module_tag, resourceType, awsAccountId\n| sort - count\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC8 — Infrastructure-as-Code Drift vs Approved Terraform Modules** — CC8 governs change to infrastructure. Comparing live cloud resource tags and configurations to approved module signatures detects shadow infrastructure that bypassed pipeline controls—supporting change integrity for cloud environments.\n\nDocumented **Data sources**: `index=cloud` `sourcetype=\"aws:config_snapshot\"` (resourceId, resourceType, configuration, tags); `approved_modules.csv` (resourceType, required_tag_module, golden_hash). **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), AWS Config events via EventBridge HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:config_snapshot, AWS::EC2::Instance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"aws:config_snapshot\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• `eval` defines or adjusts **module_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(module_tag) OR module_tag!=required_tag_module` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by module_tag, resourceType, awsAccountId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 CC8 — Infrastructure-as-Code Drift vs Approved Terraform Modules** — CC8 governs change to infrastructure. Comparing live cloud resource tags and configurations to approved module signatures detects shadow infrastructure that bypassed pipeline controls—supporting change integrity for cloud environments.\n\nDocumented **Data sources**: `index=cloud` `sourcetype=\"aws:config_snapshot\"` (resourceId, resourceType, configuration, tags); `approved_modules.csv` (resourceType, required_tag_module, golden_hash). **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), AWS Config events via EventBridge HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (drift), Bar by account, Single value (instances noncompliant).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC8 — Infrastructure-as-Code Drift vs Approved Terraform Modules. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Security",
                "Compliance",
                "Change"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC7.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC7.2 (System monitoring for anomalies) is enforced — Splunk UC-22.8.24: SOC 2 CC8 — Infrastructure-as-Code Drift vs Approved Terraform Modules.",
                  "ea": "Saved search 'UC-22.8.24' running on sourcetype aws:config_snapshot and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
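Supplementary sketch for UC-22.8.24: one way to seed `approved_modules.csv` from resources that already carry the `TerraformModule` tag. It assumes the same `aws:config_snapshot` feed as the main search; review the output before writing the lookup, since the drift search expects one approved module per resourceType, and the `golden_hash` column is omitted here and would come from your module registry.

```spl
index=cloud sourcetype="aws:config_snapshot" resourceType="AWS::EC2::Instance" earliest=-1d
| spath input=configuration
| eval required_tag_module=mvindex('tags{}.value', mvfind('tags{}.key', "TerraformModule"))
| where isnotnull(required_tag_module)
| stats count by resourceType, required_tag_module
| fields resourceType, required_tag_module
| outputlookup approved_modules.csv
```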
            {
              "i": "22.8.25",
              "n": "SOC 2 CC8 — Software Development Lifecycle Control Gates from CI/CD Telemetry",
              "c": "high",
              "f": "advanced",
              "v": "CC8 extends to software changes. Ingesting pipeline events for SAST, dependency scan, and peer review gates shows releases without required checks are rare and investigated—evidence that SDLC controls operate in production paths.",
              "t": "HTTP Event Collector from GitHub Actions, GitLab, or Jenkins",
              "d": "`index=cicd` `sourcetype=\"cicd:pipeline\"` (repo, commit, stage, status, build_id, _time)",
              "q": "index=cicd sourcetype=\"cicd:pipeline\" earliest=-30d stage=\"deploy_prod\"\n| stats values(stage) as stages by repo, commit, build_id\n| where !like(stages,\"(?i)sast_pass\") OR !like(stages,\"(?i)deps_scan_pass\")\n| stats count by repo\n| sort - count",
              "m": "(1) Normalize stage names in ingestion; (2) require immutable `commit` in deploy events; (3) block deploy in CI, use Splunk for detective drift; (4) map `repo` to in-scope system; (5) monthly % deploys with all gates green.",
              "z": "Table (repos with gaps), Single value (failed gate deploy attempts), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector from GitHub Actions, GitLab, or Jenkins.\n• Ensure the following data sources are available: `index=cicd` `sourcetype=\"cicd:pipeline\"` (repo, commit, stage, status, build_id, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize stage names in ingestion; (2) require immutable `commit` in deploy events; (3) block deploy in CI, use Splunk for detective drift; (4) map `repo` to in-scope system; (5) monthly % deploys with all gates green.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype=\"cicd:pipeline\" earliest=-30d stage=\"deploy_prod\"\n| stats values(stage) as stages by repo, commit, build_id\n| where !like(stages,\"(?i)sast_pass\") OR !like(stages,\"(?i)deps_scan_pass\")\n| stats count by repo\n| sort - count\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC8 — Software Development Lifecycle Control Gates from CI/CD Telemetry** — CC8 extends to software changes. Ingesting pipeline events for SAST, dependency scan, and peer review gates shows releases without required checks are rare and investigated—evidence that SDLC controls operate in production paths.\n\nDocumented **Data sources**: `index=cicd` `sourcetype=\"cicd:pipeline\"` (repo, commit, stage, status, build_id, _time). **App/TA** (typical add-on context): HTTP Event Collector from GitHub Actions, GitLab, or Jenkins. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd; **sourcetype**: cicd:pipeline. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, sourcetype=\"cicd:pipeline\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by repo, commit, build_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where !like(stages,\"(?i)sast_pass\") OR !like(stages,\"(?i)deps_scan_pass\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by repo** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (repos with gaps), Single value (failed gate deploy attempts), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC8 — Software Development Lifecycle Control Gates from CI/CD Telemetry. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "DevSecOps"
              ],
              "pillar": "observability",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github",
                "gitlab",
                "jenkins"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC7.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC7.2 (System monitoring for anomalies) is enforced — Splunk UC-22.8.25: SOC 2 CC8 — Software Development Lifecycle Control Gates from CI/CD Telemetry.",
                  "ea": "Saved search 'UC-22.8.25' running on index=cicd sourcetype=\"cicd:pipeline\" (repo, commit, stage, status, build_id, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Jenkins",
                  "id": 3332,
                  "url": "https://splunkbase.splunk.com/app/3332",
                  "desc": "Dashboards for Jenkins job and build status, console logs, test results",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
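Supplementary sketch for UC-22.8.25: the monthly "all gates green" percentage called for in Step 1, built only from the documented `cicd:pipeline` fields and assuming the same normalized stage names (`deploy_prod`, `sast_pass`, `deps_scan_pass`).

```spl
index=cicd sourcetype="cicd:pipeline" earliest=-180d
| stats values(stage) as stages, max(_time) as last_event by repo, commit, build_id
| where isnotnull(mvfind(stages,"(?i)^deploy_prod$"))
| eval gates_green=if(isnotnull(mvfind(stages,"(?i)sast_pass")) AND isnotnull(mvfind(stages,"(?i)deps_scan_pass")),1,0)
| eval month=strftime(last_event,"%Y-%m")
| stats count as deploys, sum(gates_green) as clean_deploys by month
| eval pct_gates_green=round(100*clean_deploys/deploys,1)
| sort month
```

Trending this percentage alongside the gap table gives auditors both the exceptions and the overall operating rate of the control.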
            {
              "i": "22.8.26",
              "n": "SOC 2 CC9 — Change Authorization Dual-Control on Privileged Cloud Roles",
              "c": "critical",
              "f": "advanced",
              "v": "CC9 addresses mitigation of fraud and management override. Ensuring IAM role attachments and policy updates have dual approval in PAM or ticketing prevents single-person override of segregation of duties—key for change authorization narratives.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=aws` `sourcetype=\"aws:cloudtrail\"` eventName IN (\"AttachRolePolicy\",\"PutRolePolicy\") (requestParameters, userIdentity.arn, _time); `index=itsm` `sourcetype=\"snow:change_request\"` with `u_dual_approval=true`",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" eventName IN (\"AttachRolePolicy\",\"PutRolePolicy\") earliest=-7d\n| eval ticket=mvindex(split(requestParameters.policyArn,\"/\"),-1)\n| join type=left userIdentity.arn [\n    search index=itsm sourcetype=\"snow:change_request\" u_dual_approval=true earliest=-14d\n    | rename opened_by as approver1, u_second_approver as approver2\n    | eval join_key=opened_by\n    | table number, join_key\n  ]\n| where isnull(number)\n| stats count by userIdentity.arn, eventName\n| sort - count",
              "m": "(1) Replace join with robust ticket id extracted from CloudTrail `requestParameters` or session context; (2) integrate CyberArk session id where available; (3) emergency break-glass via lookup with post-incident review flag; (4) SOX-style quarterly sampling export; (5) align with CC6 logical access monitoring.",
              "z": "Table (unauthorized IAM changes), Single value (count), Timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [
                "T1078",
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=aws` `sourcetype=\"aws:cloudtrail\"` eventName IN (\"AttachRolePolicy\",\"PutRolePolicy\") (requestParameters, userIdentity.arn, _time); `index=itsm` `sourcetype=\"snow:change_request\"` with `u_dual_approval=true`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Replace join with robust ticket id extracted from CloudTrail `requestParameters` or session context; (2) integrate CyberArk session id where available; (3) emergency break-glass via lookup with post-incident review flag; (4) SOX-style quarterly sampling export; (5) align with CC6 logical access monitoring.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" eventName IN (\"AttachRolePolicy\",\"PutRolePolicy\") earliest=-7d\n| eval ticket=mvindex(split(requestParameters.policyArn,\"/\"),-1)\n| join type=left userIdentity.arn [\n    search index=itsm sourcetype=\"snow:change_request\" u_dual_approval=true earliest=-14d\n    | rename opened_by as approver1, u_second_approver as approver2\n    | eval join_key=opened_by\n    | table number, join_key\n  ]\n| where isnull(number)\n| stats count by userIdentity.arn, eventName\n| sort - count\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC9 — Change Authorization Dual-Control on Privileged Cloud Roles** — CC9 addresses mitigation of fraud and management override. Ensuring IAM role attachments and policy updates have dual approval in PAM or ticketing prevents single-person override of segregation of duties—key for change authorization narratives.\n\nDocumented **Data sources**: `index=aws` `sourcetype=\"aws:cloudtrail\"` eventName IN (\"AttachRolePolicy\",\"PutRolePolicy\") (requestParameters, userIdentity.arn, _time); `index=itsm` `sourcetype=\"snow:change_request\"` with `u_dual_approval=true`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ticket** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(number)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn, eventName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 CC9 — Change Authorization Dual-Control on Privileged Cloud Roles** — CC9 addresses mitigation of fraud and management override. Ensuring IAM role attachments and policy updates have dual approval in PAM or ticketing prevents single-person override of segregation of duties—key for change authorization narratives.\n\nDocumented **Data sources**: `index=aws` `sourcetype=\"aws:cloudtrail\"` eventName IN (\"AttachRolePolicy\",\"PutRolePolicy\") (requestParameters, userIdentity.arn, _time); `index=itsm` `sourcetype=\"snow:change_request\"` with `u_dual_approval=true`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unauthorized IAM changes), Single value (count), Timeline.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC9 — Change Authorization Dual-Control on Privileged Cloud Roles. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "aws",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC8.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC8.1 (Change management) is enforced — Splunk UC-22.8.26: SOC 2 CC9 — Change Authorization Dual-Control on Privileged Cloud Roles.",
                  "ea": "Saved search 'UC-22.8.26' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
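Supplementary sketch for UC-22.8.26, covering the break-glass path noted in Step 1. The lookup name `breakglass_accounts.csv` and its `arn` / `review_ticket` columns are hypothetical placeholders rather than catalogue fields; point them at whatever register your incident process maintains.

```spl
index=aws sourcetype="aws:cloudtrail" eventName IN ("AttachRolePolicy","PutRolePolicy") earliest=-7d
| lookup breakglass_accounts.csv arn AS "userIdentity.arn" OUTPUT review_ticket
| eval approval_path=if(isnotnull(review_ticket),"break-glass (post-incident review)","standard dual approval required")
| stats count by approval_path, userIdentity.arn, eventName
| sort - count
```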
            {
              "i": "22.8.27",
              "n": "SOC 2 A1 — Capacity Planning Signals for In-Scope Production Services",
              "c": "medium",
              "f": "intermediate",
              "v": "A1 addresses availability commitments. Forecasting CPU, memory, queue depth, or connection pool utilization against committed headroom demonstrates proactive capacity management—not reactive firefighting after outages.",
              "t": "Splunk Observability Cloud / Infrastructure monitoring, or `metrics` index",
              "d": "`index=metrics` `metric_name IN (\"cpu.util\",\"mem.util\",\"queue.depth\")` `entity=prod_payments_*` earliest=-90d",
              "q": "index=metrics metric_name=\"cpu.util\" entity=prod_payments_* earliest=-90d\n| timechart span=1d avg(_value) as avg_cpu\n| predict avg_cpu as forecast algorithm=LLP future_timespan=14 period=7\n| where forecast>85\n| table _time, avg_cpu, forecast",
              "m": "(1) Set threshold to contractual headroom; (2) join with change calendar for planned traffic spikes; (3) feed capacity committee slide deck; (4) document assumptions in system description; (5) alert infra 30 days before sustained breach predicted.",
              "z": "Line chart (avg_cpu + forecast), Single value (days until threshold), Table.",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Observability Cloud / Infrastructure monitoring, or `metrics` index.\n• Ensure the following data sources are available: `index=metrics` `metric_name IN (\"cpu.util\",\"mem.util\",\"queue.depth\")` `entity=prod_payments_*` earliest=-90d.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Set threshold to contractual headroom; (2) join with change calendar for planned traffic spikes; (3) feed capacity committee slide deck; (4) document assumptions in system description; (5) alert infra 30 days before sustained breach predicted.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=metrics metric_name=\"cpu.util\" entity=prod_payments_* earliest=-90d\n| timechart span=1d avg(_value) as avg_cpu\n| predict avg_cpu as forecast algorithm=LLP future_timespan=14 period=7\n| where forecast>85\n| table _time, avg_cpu, forecast\n```\n\nUnderstanding this SPL\n\n**SOC 2 A1 — Capacity Planning Signals for In-Scope Production Services** — A1 addresses availability commitments. Forecasting CPU, memory, queue depth, or connection pool utilization against committed headroom demonstrates proactive capacity management—not reactive firefighting after outages.\n\nDocumented **Data sources**: `index=metrics` `metric_name IN (\"cpu.util\",\"mem.util\",\"queue.depth\")` `entity=prod_payments_*` earliest=-90d. **App/TA** (typical add-on context): Splunk Observability Cloud / Infrastructure monitoring, or `metrics` index. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: metrics.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=metrics, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **SOC 2 A1 — Capacity Planning Signals for In-Scope Production Services**): predict avg_cpu as forecast algorithm=LLP future_timespan=14 period=7\n• Filters the current rows with `where forecast>85` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 A1 — Capacity Planning Signals for In-Scope Production Services**): table _time, avg_cpu, forecast\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=1d | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SOC 2 A1 — Capacity Planning Signals for In-Scope Production Services** — A1 addresses availability commitments. Forecasting CPU, memory, queue depth, or connection pool utilization against committed headroom demonstrates proactive capacity management—not reactive firefighting after outages.\n\nDocumented **Data sources**: `index=metrics` `metric_name IN (\"cpu.util\",\"mem.util\",\"queue.depth\")` `entity=prod_payments_*` earliest=-90d. **App/TA** (typical add-on context): Splunk Observability Cloud / Infrastructure monitoring, or `metrics` index. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (avg_cpu + forecast), Single value (days until threshold), Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for A1 — Capacity Planning Signals for In-Scope Production Services. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Performance"
              ],
              "pillar": "observability",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t avg(Performance.cpu_load_percent) as agg_value from datamodel=Performance.CPU by Performance.host span=1d | sort - agg_value",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "A1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 A1.2 (Availability commitments) is enforced — Splunk UC-22.8.27: SOC 2 A1 — Capacity Planning Signals for In-Scope Production Services.",
                  "ea": "Saved search 'UC-22.8.27' running on index=metrics metric_name IN (\"cpu.util\",\"mem.util\",\"queue.depth\") entity=prod_payments_* earliest=-90d, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
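Supplementary sketch for UC-22.8.27. The main search treats `metrics` as an event index carrying `metric_name` and `_value` fields; if it is actually a metrics-type index, a plain `index=` search returns nothing and an `mstats` form is needed instead. A minimal variant under that assumption, keeping the same metric and entity names:

```spl
| mstats avg(_value) as avg_cpu WHERE index=metrics AND metric_name="cpu.util" AND entity=prod_payments_* earliest=-90d span=1d
| predict avg_cpu as forecast algorithm=LLP future_timespan=14 period=7
| where forecast>85
| table _time, avg_cpu, forecast
```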
            {
              "i": "22.8.28",
              "n": "SOC 2 A1 — Disaster Recovery Test Execution and Evidence Timestamps",
              "c": "high",
              "f": "intermediate",
              "v": "A1 expects recovery commitments to be tested. Indexing DR runbook executions, RTO/RPO measurements, and failover test results with immutable timestamps proves tests occurred at the required frequency and outcomes were recorded.",
              "t": "HTTP Event Collector, Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=resilience` `sourcetype=\"dr:test_result\"` (test_id, region_pair, rto_min, rpo_min, pass_fail, executed_by, _time); optional `snow:cmdb_ci` for scope",
              "q": "index=resilience sourcetype=\"dr:test_result\" earliest=-400d\n| bin _time span=90d as quarter\n| stats latest(pass_fail) as outcome, latest(rto_min) as rto, latest(rpo_min) as rpo by quarter, region_pair\n| eval ok=if(outcome=\"pass\",1,0)\n| eventstats sum(ok) as passes, count as quarters by region_pair\n| where passes<4\n| table region_pair, quarter, outcome, rto, rpo",
              "m": "(1) Require one test per in-scope region_pair per quarter minimum—tune to policy; (2) integrate orchestration tool webhooks; (3) store raw logs in WORM; (4) failed `pass_fail` opens severity-2 problem; (5) map to customer SLA annex.",
              "z": "Timeline (tests), Table (gaps), Single value (region_pairs behind).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector, Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=resilience` `sourcetype=\"dr:test_result\"` (test_id, region_pair, rto_min, rpo_min, pass_fail, executed_by, _time); optional `snow:cmdb_ci` for scope.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require one test per in-scope region_pair per quarter minimum—tune to policy; (2) integrate orchestration tool webhooks; (3) store raw logs in WORM; (4) failed `pass_fail` opens severity-2 problem; (5) map to customer SLA annex.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=resilience sourcetype=\"dr:test_result\" earliest=-400d\n| bin _time span=90d as quarter\n| stats latest(pass_fail) as outcome, latest(rto_min) as rto, latest(rpo_min) as rpo by quarter, region_pair\n| eval ok=if(outcome=\"pass\",1,0)\n| eventstats sum(ok) as passes, count as quarters by region_pair\n| where passes<4\n| table region_pair, quarter, outcome, rto, rpo\n```\n\nUnderstanding this SPL\n\n**SOC 2 A1 — Disaster Recovery Test Execution and Evidence Timestamps** — A1 expects recovery commitments to be tested. Indexing DR runbook executions, RTO/RPO measurements, and failover test results with immutable timestamps proves tests occurred at the required frequency and outcomes were recorded.\n\nDocumented **Data sources**: `index=resilience` `sourcetype=\"dr:test_result\"` (test_id, region_pair, rto_min, rpo_min, pass_fail, executed_by, _time); optional `snow:cmdb_ci` for scope. **App/TA** (typical add-on context): HTTP Event Collector, Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: resilience; **sourcetype**: dr:test_result. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=resilience, sourcetype=\"dr:test_result\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by quarter, region_pair** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by region_pair** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where passes<4` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 A1 — Disaster Recovery Test Execution and Evidence Timestamps**): table region_pair, quarter, outcome, rto, rpo\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (tests), Table (gaps), Single value (region_pairs behind).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for A1 — Disaster Recovery Test Execution and Evidence Timestamps. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Resilience"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "A1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 A1.2 (Availability commitments) is enforced — Splunk UC-22.8.28: SOC 2 A1 — Disaster Recovery Test Execution and Evidence Timestamps.",
                  "ea": "Saved search 'UC-22.8.28' running on sourcetype dr:test_result and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
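Supplementary sketch for UC-22.8.28: a narrower alert for the severity-2 escalation in Step 1 that flags region pairs whose latest DR test in the current quarter did not pass. It uses only the documented `dr:test_result` fields; region pairs with no test events at all will not appear unless you append your expected inventory.

```spl
index=resilience sourcetype="dr:test_result" earliest=-90d
| stats latest(pass_fail) as latest_outcome, latest(rto_min) as rto, latest(rpo_min) as rpo, max(_time) as last_test by region_pair
| eval last_test=strftime(last_test,"%Y-%m-%d")
| where latest_outcome!="pass"
| table region_pair, latest_outcome, rto, rpo, last_test
```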
            {
              "i": "22.8.29",
              "n": "SOC 2 C1 — Confidential Information Disposal and Secure Destruction Evidence",
              "c": "high",
              "f": "intermediate",
              "v": "C1 addresses confidential information. Tracking destruction certificates, crypto-shredding events, and DLP “secure delete” workflows demonstrates media and logical disposal met policy when assets retired or contracts ended.",
              "t": "Splunk DB Connect, vendor APIs (Iron Mountain, secure erase tools) via HEC",
              "d": "`index=compliance` `sourcetype=\"disposal:cert\"` (asset_id, method, cert_number, destroyed_ts, witness); `index=endpoint` BitLocker or secure erase tool logs",
              "q": "index=compliance sourcetype=\"disposal:cert\" earliest=-365d\n| eval destroyed=strptime(destroyed_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left asset_id [\n    search index=cmdb sourcetype=\"cmdb:asset\" earliest=-400d\n    | fields asset_id, retired_date, classification\n    | eval retired=strptime(retired_date,\"%Y-%m-%d\")\n  ]\n| where retired>relative_time(destroyed,\"+30d@d\")\n| table asset_id, classification, retired, destroyed, cert_number",
              "m": "(1) Flip logic per policy: flag retirements without cert within 30 days; (2) restrict `classification=confidential` scope; (3) integrate ITAD vendor feed; (4) legal hold exceptions via `legal_hold_assets.csv`; (5) annual sample for auditors with cert PDF links (out of band).",
              "z": "Table (late disposal), Single value (assets past SLA), Bar by classification.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect, vendor APIs (Iron Mountain, secure erase tools) via HEC.\n• Ensure the following data sources are available: `index=compliance` `sourcetype=\"disposal:cert\"` (asset_id, method, cert_number, destroyed_ts, witness); `index=endpoint` BitLocker or secure erase tool logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Flip logic per policy: flag retirements without cert within 30 days; (2) restrict `classification=confidential` scope; (3) integrate ITAD vendor feed; (4) legal hold exceptions via `legal_hold_assets.csv`; (5) annual sample for auditors with cert PDF links (out of band).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=compliance sourcetype=\"disposal:cert\" earliest=-365d\n| eval destroyed=strptime(destroyed_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| join type=left asset_id [\n    search index=cmdb sourcetype=\"cmdb:asset\" earliest=-400d\n    | fields asset_id, retired_date, classification\n    | eval retired=strptime(retired_date,\"%Y-%m-%d\")\n  ]\n| where retired>relative_time(destroyed,\"+30d@d\")\n| table asset_id, classification, retired, destroyed, cert_number\n```\n\nUnderstanding this SPL\n\n**SOC 2 C1 — Confidential Information Disposal and Secure Destruction Evidence** — C1 addresses confidential information. Tracking destruction certificates, crypto-shredding events, and DLP “secure delete” workflows demonstrates media and logical disposal met policy when assets retired or contracts ended.\n\nDocumented **Data sources**: `index=compliance` `sourcetype=\"disposal:cert\"` (asset_id, method, cert_number, destroyed_ts, witness); `index=endpoint` BitLocker or secure erase tool logs. **App/TA** (typical add-on context): Splunk DB Connect, vendor APIs (Iron Mountain, secure erase tools) via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: compliance; **sourcetype**: disposal:cert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=compliance, sourcetype=\"disposal:cert\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **destroyed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where retired>relative_time(destroyed,\"+30d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 C1 — Confidential Information Disposal and Secure Destruction Evidence**): table asset_id, classification, retired, destroyed, cert_number\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (late disposal), Single value (assets past SLA), Bar by classification.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for C1 — Confidential Information Disposal and Secure Destruction Evidence. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "C1.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 C1.1 (Confidentiality) is enforced — Splunk UC-22.8.29: SOC 2 C1 — Confidential Information Disposal and Secure Destruction Evidence.",
                  "ea": "Saved search 'UC-22.8.29' running on sourcetype disposal:cert and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
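Supplementary sketch for UC-22.8.29, flipping the logic as suggested in Step 1: start from confidential asset retirements in the CMDB and flag those with no destruction certificate on file. It reuses the `cmdb:asset` and `disposal:cert` feeds from the main search, and the 30-day grace period mirrors the threshold already cited there.

```spl
index=cmdb sourcetype="cmdb:asset" classification="confidential" earliest=-400d
| eval retired=strptime(retired_date,"%Y-%m-%d")
| where isnotnull(retired) AND retired<relative_time(now(),"-30d@d")
| join type=left asset_id [
    search index=compliance sourcetype="disposal:cert" earliest=-400d
    | fields asset_id, cert_number
  ]
| where isnull(cert_number)
| table asset_id, classification, retired_date
```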
            {
              "i": "22.8.30",
              "n": "SOC 2 PI1 — Processing Completeness Validation Across Multi-Stage Pipelines",
              "c": "high",
              "f": "advanced",
              "v": "PI1 expects processing to be complete, valid, accurate, timely, and authorized. Reconciling record counts and checksums between ingest, transform, and load stages catches silent drops that would undermine financial or subscriber reporting integrity.",
              "t": "HTTP Event Collector from orchestration (Airflow, dbt, mainframe bridge)",
              "d": "`index=dataops` `sourcetype=\"pipeline:reconcile\"` (pipeline_id, stage, batch_id, row_count, checksum, _time)",
              "q": "index=dataops sourcetype=\"pipeline:reconcile\" earliest=-7d\n| stats latest(eval(if(stage=\"ingest\",row_count,null()))) as ingest_rows,\n        latest(eval(if(stage=\"load\",row_count,null()))) as load_rows,\n        latest(eval(if(stage=\"ingest\",checksum,null()))) as c_in,\n        latest(eval(if(stage=\"load\",checksum,null()))) as c_out\n  by batch_id, pipeline_id\n| eval row_match=if(ingest_rows=load_rows,1,0), chk_match=if(c_in=c_out,1,0)\n| where row_match=0 OR chk_match=0\n| table pipeline_id, batch_id, ingest_rows, load_rows, c_in, c_out",
              "m": "(1) Emit reconcile events at stage boundaries; (2) allow known shrinkage rules via `pipeline_tolerance.csv`; (3) alert data owners on mismatch; (4) retain batch_id for auditor replay; (5) map `pipeline_id` to PI1 subservice in documentation.",
              "z": "Table (mismatches), Line chart (ingest vs load over time), Single value (failed batches).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector from orchestration (Airflow, dbt, mainframe bridge).\n• Ensure the following data sources are available: `index=dataops` `sourcetype=\"pipeline:reconcile\"` (pipeline_id, stage, batch_id, row_count, checksum, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Emit reconcile events at stage boundaries; (2) allow known shrinkage rules via `pipeline_tolerance.csv`; (3) alert data owners on mismatch; (4) retain batch_id for auditor replay; (5) map `pipeline_id` to PI1 subservice in documentation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dataops sourcetype=\"pipeline:reconcile\" earliest=-7d\n| stats latest(eval(if(stage=\"ingest\",row_count,null()))) as ingest_rows,\n        latest(eval(if(stage=\"load\",row_count,null()))) as load_rows,\n        latest(eval(if(stage=\"ingest\",checksum,null()))) as c_in,\n        latest(eval(if(stage=\"load\",checksum,null()))) as c_out\n  by batch_id, pipeline_id\n| eval row_match=if(ingest_rows=load_rows,1,0), chk_match=if(c_in=c_out,1,0)\n| where row_match=0 OR chk_match=0\n| table pipeline_id, batch_id, ingest_rows, load_rows, c_in, c_out\n```\n\nUnderstanding this SPL\n\n**SOC 2 PI1 — Processing Completeness Validation Across Multi-Stage Pipelines** — PI1 expects processing to be complete, valid, accurate, timely, and authorized. Reconciling record counts and checksums between ingest, transform, and load stages catches silent drops that would undermine financial or subscriber reporting integrity.\n\nDocumented **Data sources**: `index=dataops` `sourcetype=\"pipeline:reconcile\"` (pipeline_id, stage, batch_id, row_count, checksum, _time). **App/TA** (typical add-on context): HTTP Event Collector from orchestration (Airflow, dbt, mainframe bridge). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dataops; **sourcetype**: pipeline:reconcile. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dataops, sourcetype=\"pipeline:reconcile\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by batch_id, pipeline_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **row_match** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where row_match=0 OR chk_match=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 PI1 — Processing Completeness Validation Across Multi-Stage Pipelines**): table pipeline_id, batch_id, ingest_rows, load_rows, c_in, c_out\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (mismatches), Line chart (ingest vs load over time), Single value (failed batches).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for PI1 — Processing Completeness Validation Across Multi-Stage Pipelines. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Data Quality"
              ],
              "pillar": "observability",
              "regs": [
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC7.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC7.2 (System monitoring for anomalies) is enforced — Splunk UC-22.8.30: SOC 2 PI1 — Processing Completeness Validation Across Multi-Stage Pipelines.",
                  "ea": "Saved search 'UC-22.8.30' running on index=dataops sourcetype=\"pipeline:reconcile\" (pipeline_id, stage, batch_id, row_count, checksum, _time), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.2,
          "qd": {
            "gold": 1,
            "silver": 0,
            "bronze": 29,
            "none": 0
          }
        },
        {
          "i": "22.9",
          "n": "Compliance Trending",
          "u": [
            {
              "i": "22.9.1",
              "n": "Compliance Posture Score Trending",
              "c": "high",
              "f": "advanced",
              "v": "Rolling quarterly posture scores across NIST CSF, ISO 27001, and SOC 2 show whether investments in controls are improving measurable outcomes—not just checkbox activity—so executives and regulators see trajectory, not a one-time snapshot.",
              "t": "Splunk DB Connect (GRC export), custom HTTP poller, or indexed CSV from Archer/ServiceNow GRC",
              "d": "`index=compliance` OR `index=grc` `sourcetype IN (\"compliance:framework_score\",\"grc:posture\")` — fields `framework`, `overall_score`, `_time`",
              "q": "index=compliance OR index=grc sourcetype IN (\"compliance:framework_score\",\"grc:posture\") earliest=-730d\n| eval fw=case(like(framework,\"%NIST%\"),\"NIST_CSF\",like(framework,\"%ISO%\"),\"ISO27001\",like(framework,\"%SOC%\"),\"SOC2\",1=1,framework)\n| timechart span=3mon avg(overall_score) as avg_score by fw\n| trendline sma2(NIST_CSF) as roll_NIST_CSF sma2(ISO27001) as roll_ISO27001 sma2(SOC2) as roll_SOC2",
              "m": "(1) Land GRC or continuous-control scores into `compliance` or `grc` with stable `framework` labels and numeric `overall_score` (0–100); (2) align calendar quarters to your audit cycle (`span=90d` vs fiscal); (3) schedule weekly and snapshot results to a summary index for year-over-year evidence; (4) validate `predict` against low-volume series—disable or widen `period` if confidence bands explode; (5) pair the portfolio panel with the by-framework panel for board-ready trending.",
              "z": "Line or area chart (portfolio score, `sma_posture`, `posture_forecast`), multiseries line (scores by `fw`), overlay confidence bands from `predict`.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (GRC export), custom HTTP poller, or indexed CSV from Archer/ServiceNow GRC.\n• Ensure the following data sources are available: `index=compliance` OR `index=grc` `sourcetype IN (\"compliance:framework_score\",\"grc:posture\")` — fields `framework`, `overall_score`, `_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Land GRC or continuous-control scores into `compliance` or `grc` with stable `framework` labels and numeric `overall_score` (0–100); (2) align calendar quarters to your audit cycle (`span=90d` vs fiscal); (3) schedule weekly and snapshot results to a summary index for year-over-year evidence; (4) validate `predict` against low-volume series—disable or widen `period` if confidence bands explode; (5) pair the portfolio panel with the by-framework panel for board-ready trending.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=compliance OR index=grc sourcetype IN (\"compliance:framework_score\",\"grc:posture\") earliest=-730d\n| eval fw=case(like(framework,\"%NIST%\"),\"NIST_CSF\",like(framework,\"%ISO%\"),\"ISO27001\",like(framework,\"%SOC%\"),\"SOC2\",1=1,framework)\n| timechart span=3mon avg(overall_score) as avg_score by fw\n| trendline sma2(NIST_CSF) as roll_NIST_CSF sma2(ISO27001) as roll_ISO27001 sma2(SOC2) as roll_SOC2\n```\n\nUnderstanding this SPL\n\n**Compliance Posture Score Trending** — Rolling quarterly posture scores across NIST CSF, ISO 27001, and SOC 2 show whether investments in controls are improving measurable outcomes—not just checkbox activity—so executives and regulators see trajectory, not a one-time snapshot.\n\nDocumented **Data sources**: `index=compliance` OR `index=grc` `sourcetype IN (\"compliance:framework_score\",\"grc:posture\")` — fields `framework`, `overall_score`, `_time`. **App/TA** (typical add-on context): Splunk DB Connect (GRC export), custom HTTP poller, or indexed CSV from Archer/ServiceNow GRC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: compliance, grc.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=compliance, index=grc, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **fw** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=3mon** buckets with a separate series **by fw** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Compliance Posture Score Trending**): trendline sma2(NIST_CSF) as roll_NIST_CSF sma2(ISO27001) as roll_ISO27001 sma2(SOC2) as roll_SOC2\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Line or area chart (portfolio score, `sma_posture`, `posture_forecast`), multiseries line (scores by `fw`), overlay confidence bands from `predict`.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Compliance Posture Score Trending. We watch trends so leaders see improvement, not a one-time snapshot.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.36",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.5.36 is enforced — Splunk UC-22.9.1: Compliance Posture Score Trending.",
                  "ea": "Saved search 'UC-22.9.1' running on index compliance and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "GV.OV-03",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF GV.OV-03 is enforced — Splunk UC-22.9.1: Compliance Posture Score Trending.",
                  "ea": "Saved search 'UC-22.9.1' running on index compliance and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.9.2",
              "n": "Audit Finding Closure Rate Trending",
              "c": "high",
              "f": "intermediate",
              "v": "Trending open versus closed audit findings over ninety days makes backlog and closure velocity visible before external audits, while mean time to remediate highlights whether remediation playbooks and ownership are working.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk DB Connect for GRC findings, or custom `grc:audit_finding` HEC",
              "d": "`index=grc` OR `index=compliance` `sourcetype=\"grc:audit_finding\"` — `finding_id`, `status`, `closed_time` or `remediated_time`, `_time`",
              "q": "index=grc OR index=compliance sourcetype=\"grc:audit_finding\" earliest=-90d\n| eval is_closed=if(match(status,\"(?i)closed|resolved|verified\"),1,0)\n| where is_closed=1\n| eval mttr_days=if(isnotnull(closed_time),(closed_time-_time)/86400,\n    if(isnotnull(remediated_time),(remediated_time-_time)/86400,null()))\n| where isnum(mttr_days) AND mttr_days>=0 AND mttr_days<365\n| timechart span=1w avg(mttr_days) as mean_mttr_days perc95(mttr_days) as p95_mttr_days\n| trendline sma4(mean_mttr_days) as mttr_trend\n| streamstats window=2 global=f first(mean_mttr_days) as prev_w_mttr\n| eval wow_mttr_pct=if(isnotnull(prev_w_mttr) AND prev_w_mttr>0,round(100*(mean_mttr_days-prev_w_mttr)/prev_w_mttr,1),null())\n| predict mean_mttr_days as mttr_forecast algorithm=LLP future_timespan=2",
              "m": "(1) Ensure each finding emits at least one event with stable `finding_key` and transition events or daily snapshots for `status`; (2) normalize `closed_time` to epoch seconds; (3) if only snapshot data exists, switch `dc()` panels to `latest()` by key in a summary index; (4) alert when `open_forecast` rises week over week or when `mean_mttr_days` exceeds your SLA; (5) export MTTR trend for audit workpapers.",
              "z": "Dual-axis line (open vs closed counts), area chart (`open_smoothed`), line chart (`mean_mttr_days`, `mttr_trend`, `mttr_forecast`), single value (`wow_mttr_pct`).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk DB Connect for GRC findings, or custom `grc:audit_finding` HEC.\n• Ensure the following data sources are available: `index=grc` OR `index=compliance` `sourcetype=\"grc:audit_finding\"` — `finding_id`, `status`, `closed_time` or `remediated_time`, `_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure each finding emits at least one event with stable `finding_key` and transition events or daily snapshots for `status`; (2) normalize `closed_time` to epoch seconds; (3) if only snapshot data exists, switch `dc()` panels to `latest()` by key in a summary index; (4) alert when `open_forecast` rises week over week or when `mean_mttr_days` exceeds your SLA; (5) export MTTR trend for audit workpapers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc OR index=compliance sourcetype=\"grc:audit_finding\" earliest=-90d\n| eval is_closed=if(match(status,\"(?i)closed|resolved|verified\"),1,0)\n| where is_closed=1\n| eval mttr_days=if(isnotnull(closed_time),(closed_time-_time)/86400,\n    if(isnotnull(remediated_time),(remediated_time-_time)/86400,null()))\n| where isnum(mttr_days) AND mttr_days>=0 AND mttr_days<365\n| timechart span=1w avg(mttr_days) as mean_mttr_days perc95(mttr_days) as p95_mttr_days\n| trendline sma4(mean_mttr_days) as mttr_trend\n| streamstats window=2 global=f first(mean_mttr_days) as prev_w_mttr\n| eval wow_mttr_pct=if(isnotnull(prev_w_mttr) AND prev_w_mttr>0,round(100*(mean_mttr_days-prev_w_mttr)/prev_w_mttr,1),null())\n| predict mean_mttr_days as mttr_forecast algorithm=LLP future_timespan=2\n```\n\nUnderstanding this SPL\n\n**Audit Finding Closure Rate Trending** — Trending open versus closed audit findings over ninety days makes backlog and closure velocity visible before external audits, while mean time to remediate highlights whether remediation playbooks and ownership are working.\n\nDocumented **Data sources**: `index=grc` OR `index=compliance` `sourcetype=\"grc:audit_finding\"` — `finding_id`, `status`, `closed_time` or `remediated_time`, `_time`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk DB Connect for GRC findings, or custom `grc:audit_finding` HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc, compliance; **sourcetype**: grc:audit_finding. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, index=compliance, sourcetype=\"grc:audit_finding\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_closed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_closed=1` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **mttr_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnum(mttr_days) AND mttr_days>=0 AND mttr_days<365` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1w** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Audit Finding Closure Rate Trending**): trendline sma4(mean_mttr_days) as mttr_trend\n• `streamstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **wow_mttr_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Audit Finding Closure Rate Trending**): predict mean_mttr_days as mttr_forecast algorithm=LLP future_timespan=2\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Dual-axis line (open vs closed counts), area chart (`open_smoothed`), line chart (`mean_mttr_days`, `mttr_trend`, `mttr_forecast`), single value (`wow_mttr_pct`).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Audit Finding Closure Rate Trending. We watch trends so leaders see improvement, not a one-time snapshot.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.36",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.5.36 is enforced — Splunk UC-22.9.2: Audit Finding Closure Rate Trending.",
                  "ea": "Saved search 'UC-22.9.2' running on index=grc OR index=compliance sourcetype=\"grc:audit_finding\" — finding_id, status, closed_time or remediated_time, _time, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "GV.OV-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF GV.OV-02 is enforced — Splunk UC-22.9.2: Audit Finding Closure Rate Trending.",
                  "ea": "Saved search 'UC-22.9.2' running on index=grc OR index=compliance sourcetype=\"grc:audit_finding\" — finding_id, status, closed_time or remediated_time, _time, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.9.3",
              "n": "Control Effectiveness Trending",
              "c": "high",
              "f": "advanced",
              "v": "A ninety-day pass ratio by control domain exposes domains where tests are failing more often or trending down—so you prioritize control owners, evidence collection, and automation before a failed external assessment.",
              "t": "Splunk Add-on for Tenable (Splunkbase 4060) for scan-based control checks, DB Connect for ITGC spreadsheets, or `compliance:control_test` automation feeds",
              "d": "`index=compliance` `sourcetype IN (\"compliance:control_test\",\"nessus:sc:compliance\",\"qualys:policy\")` — `control_domain`, `test_result` or `status`, `_time`",
              "q": "index=compliance sourcetype IN (\"compliance:control_test\",\"nessus:sc:compliance\") earliest=-90d\n| eval outcome=if(match(coalesce(test_result,status),\"(?i)pass|passed|success\"),1,0)\n| bin _time span=7d\n| stats count as tests, sum(outcome) as passes by _time\n| eval org_pass_ratio=if(tests>0, round(100*passes/tests,2), null())\n| sort _time\n| trendline sma3(org_pass_ratio) as ratio_trend\n| eventstats avg(org_pass_ratio) as portfolio_mean\n| eval gap=round(org_pass_ratio-portfolio_mean,2)\n| predict org_pass_ratio as ratio_forecast algorithm=LLP future_timespan=2 period=6",
              "m": "(1) Map vendor fields (`pluginFamily`, Qualys title) to internal `control_domain` via lookup `control_domain_map.csv`; (2) dedupe repeated tests per asset/control daily to avoid skew; (3) review domains where `eff_*` slopes negative for four consecutive buckets; (4) tune `span=7d` to match test frequency; (5) store weekly CSV exports for assessors.",
              "z": "Multiseries line or area (pass_ratio by domain), heatmap (domain x week), line (`org_pass_ratio`, `ratio_trend`, `ratio_forecast`).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Tenable](https://splunkbase.splunk.com/app/4060), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Tenable (Splunkbase 4060) for scan-based control checks, DB Connect for ITGC spreadsheets, or `compliance:control_test` automation feeds.\n• Ensure the following data sources are available: `index=compliance` `sourcetype IN (\"compliance:control_test\",\"nessus:sc:compliance\",\"qualys:policy\")` — `control_domain`, `test_result` or `status`, `_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map vendor fields (`pluginFamily`, Qualys title) to internal `control_domain` via lookup `control_domain_map.csv`; (2) dedupe repeated tests per asset/control daily to avoid skew; (3) review domains where `eff_*` slopes negative for four consecutive buckets; (4) tune `span=7d` to match test frequency; (5) store weekly CSV exports for assessors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=compliance sourcetype IN (\"compliance:control_test\",\"nessus:sc:compliance\") earliest=-90d\n| eval outcome=if(match(coalesce(test_result,status),\"(?i)pass|passed|success\"),1,0)\n| bin _time span=7d\n| stats count as tests, sum(outcome) as passes by _time\n| eval org_pass_ratio=if(tests>0, round(100*passes/tests,2), null())\n| sort _time\n| trendline sma3(org_pass_ratio) as ratio_trend\n| eventstats avg(org_pass_ratio) as portfolio_mean\n| eval gap=round(org_pass_ratio-portfolio_mean,2)\n| predict org_pass_ratio as ratio_forecast algorithm=LLP future_timespan=2 period=6\n```\n\nUnderstanding this SPL\n\n**Control Effectiveness Trending** — A ninety-day pass ratio by control domain exposes domains where tests are failing more often or trending down—so you prioritize control owners, evidence collection, and automation before a failed external assessment.\n\nDocumented **Data sources**: `index=compliance` `sourcetype IN (\"compliance:control_test\",\"nessus:sc:compliance\",\"qualys:policy\")` — `control_domain`, `test_result` or `status`, `_time`. **App/TA** (typical add-on context): Splunk Add-on for Tenable (Splunkbase 4060) for scan-based control checks, DB Connect for ITGC spreadsheets, or `compliance:control_test` automation feeds. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: compliance.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=compliance, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **outcome** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **org_pass_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Control Effectiveness Trending**): trendline sma3(org_pass_ratio) as ratio_trend\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Control Effectiveness Trending**): predict org_pass_ratio as ratio_forecast algorithm=LLP future_timespan=2 period=6\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest span=7d | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Control Effectiveness Trending** — A ninety-day pass ratio by control domain exposes domains where tests are failing more often or trending down—so you prioritize control owners, evidence collection, and automation before a failed external assessment.\n\nDocumented **Data sources**: `index=compliance` `sourcetype IN (\"compliance:control_test\",\"nessus:sc:compliance\",\"qualys:policy\")` — `control_domain`, `test_result` or `status`, `_time`. **App/TA** (typical add-on context): Splunk Add-on for Tenable (Splunkbase 4060) for scan-based control checks, DB Connect for ITGC spreadsheets, or `compliance:control_test` automation feeds. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Multiseries line or area (pass_ratio by domain), heatmap (domain x week), line (`org_pass_ratio`, `ratio_trend`, `ratio_forecast`).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Control Effectiveness Trending. We watch trends so leaders see improvement, not a one-time snapshot.",
              "mtype": [
                "Compliance"
              ],
              "a": [
                "Vulnerabilities (when Tenable/Qualys maps to CIM); otherwise N/A"
              ],
              "qs": "| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest span=7d | sort - agg_value",
              "e": [
                "db_connect",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.36",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.5.36 is enforced — Splunk UC-22.9.3: Control Effectiveness Trending.",
                  "ea": "Saved search 'UC-22.9.3' running on index compliance and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "ID.IM-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF ID.IM-02 is enforced — Splunk UC-22.9.3: Control Effectiveness Trending.",
                  "ea": "Saved search 'UC-22.9.3' running on index compliance and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.9.4",
              "n": "Regulatory Incident Response Time Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Mean time to resolve compliance-tagged incidents by quarter proves that regulatory and policy breaches are handled with discipline—supporting supervisory expectations and internal KPIs beyond generic IT MTTR.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`` `notable` `` — filter with `tag`/`category`/`rule_name` for compliance/regulatory work; fields `closed_time`, `_time`",
              "q": "`notable` earliest=-730d (tag=\"compliance\" OR category=\"*compliance*\" OR rule_name=\"*regulatory*\" OR like(rule_name,\"%compliance%\"))\n| eval mttr_sec=if(isnotnull(closed_time) AND closed_time>_time, closed_time-_time, null())\n| where isnotnull(mttr_sec)\n| timechart span=90d avg(mttr_sec) as avg_mttr_sec perc95(mttr_sec) as p95_mttr_sec\n| eval avg_mttr_h=avg_mttr_sec/3600\n| trendline sma2(avg_mttr_sec) as mttr_trend\n| streamstats window=2 global=f first(avg_mttr_sec) as prev_q_mttr\n| eval vs_prev_q_pct=if(isnotnull(prev_q_mttr) AND prev_q_mttr>0,round(100*(avg_mttr_sec-prev_q_mttr)/prev_q_mttr,1),null())\n| predict avg_mttr_sec as mttr_forecast algorithm=LLP future_timespan=2 period=4",
              "m": "(1) Define a consistent ES tag or naming convention for regulatory notables; (2) confirm `closed_time` population for closed incidents (`| fieldsummary closed_time`); (3) exclude false positives with a lookup of excluded `rule_name` values; (4) compare quarterly MTTR to IT-wide MTTR in a separate panel for context; (5) document scope (which jurisdictions or policies) in the dashboard subtitle.",
              "z": "Line chart (`avg_mttr_h` or `avg_mttr_sec`, `mttr_trend`, `mttr_forecast`), column chart (`p95_mttr_sec` by quarter), single value (`vs_prev_q_pct`).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [
                "T1048",
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `` `notable` `` — filter with `tag`/`category`/`rule_name` for compliance/regulatory work; fields `closed_time`, `_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define a consistent ES tag or naming convention for regulatory notables; (2) confirm `closed_time` population for closed incidents (`| fieldsummary closed_time`); (3) exclude false positives with a lookup of excluded `rule_name` values; (4) compare quarterly MTTR to IT-wide MTTR in a separate panel for context; (5) document scope (which jurisdictions or policies) in the dashboard subtitle.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` earliest=-730d (tag=\"compliance\" OR category=\"*compliance*\" OR rule_name=\"*regulatory*\" OR like(rule_name,\"%compliance%\"))\n| eval mttr_sec=if(isnotnull(closed_time) AND closed_time>_time, closed_time-_time, null())\n| where isnotnull(mttr_sec)\n| timechart span=90d avg(mttr_sec) as avg_mttr_sec perc95(mttr_sec) as p95_mttr_sec\n| eval avg_mttr_h=avg_mttr_sec/3600\n| trendline sma2(avg_mttr_sec) as mttr_trend\n| streamstats window=2 global=f first(avg_mttr_sec) as prev_q_mttr\n| eval vs_prev_q_pct=if(isnotnull(prev_q_mttr) AND prev_q_mttr>0,round(100*(avg_mttr_sec-prev_q_mttr)/prev_q_mttr,1),null())\n| predict avg_mttr_sec as mttr_forecast algorithm=LLP future_timespan=2 period=4\n```\n\nUnderstanding this SPL\n\n**Regulatory Incident Response Time Trending** — Mean time to resolve compliance-tagged incidents by quarter proves that regulatory and policy breaches are handled with discipline—supporting supervisory expectations and internal KPIs beyond generic IT MTTR.\n\nDocumented **Data sources**: `` `notable` `` — filter with `tag`/`category`/`rule_name` for compliance/regulatory work; fields `closed_time`, `_time`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **mttr_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(mttr_sec)` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=90d** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **avg_mttr_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Regulatory Incident Response Time Trending**): trendline sma2(avg_mttr_sec) as mttr_trend\n• `streamstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• `eval` defines or adjusts **vs_prev_q_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Regulatory Incident Response Time Trending**): predict avg_mttr_sec as mttr_forecast algorithm=LLP future_timespan=2 period=4\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (`avg_mttr_h` or `avg_mttr_sec`, `mttr_trend`, `mttr_forecast`), column chart (`p95_mttr_sec` by quarter), single value (`vs_prev_q_pct`).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Regulatory Incident Response Time Trending. We watch trends so leaders see improvement, not a one-time snapshot.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.33",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.33 (Breach notification to supervisory authority) is enforced — Splunk UC-22.9.4: Regulatory Incident Response Time Trending.",
                  "ea": "Saved search 'UC-22.9.4' running on notable — filter with tag/category/rule_name for compliance/regulatory work; fields closed_time, _time, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIS2 Art.23 (Reporting obligations) is enforced — Splunk UC-22.9.4: Regulatory Incident Response Time Trending.",
                  "ea": "Saved search 'UC-22.9.4' running on notable — filter with tag/category/rule_name for compliance/regulatory work; fields closed_time, _time, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.9.5",
              "n": "Policy Violation Volume Trending",
              "c": "medium",
              "f": "intermediate",
              "v": "Quarterly violation counts by category—data handling, access, encryption—show whether policy drift, training gaps, or technical misconfigurations are improving or worsening, which steers awareness campaigns and control investments.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Windows (Splunkbase 742), Enterprise Security data models",
              "d": "`index=compliance` OR `index=sec` `sourcetype IN (\"dlp:violation\",\"policy:enforcement\",\"ms:o365:management\")` — `violation_category`, `PolicyName`, `Workload`; optional `sourcetype=\"qualys:*\"` / `sourcetype=\"nessus:*\"` for encryption posture drift correlated to policy",
              "q": "index=vm OR index=compliance sourcetype IN (\"nessus:sc:*\",\"qualys:host\") earliest=-730d\n| eval enc_gap=if(match(_raw,\"(?i)ssl|tls|cipher|encrypt\") AND match(_raw,\"(?i)fail|weak|deprecated\"),1,0)\n| timechart span=90d sum(enc_gap) as encryption_policy_gaps\n| trendline sma3(encryption_policy_gaps) as enc_trend\n| predict encryption_policy_gaps as enc_fcst algorithm=LLP future_timespan=2",
              "m": "(1) Normalize DLP and CASB events into shared `violation_category` values via `case` or lookup; (2) align `span=90d` to fiscal or regulatory reporting quarters; (3) correlate spikes with change tickets (`index=itsm`) using `join` on `_time` windows; (4) use the total-violations panel (`o365_violation_total`) when category columns are too sparse for `predict`; (5) retain quarterly PDF snapshots for compliance archives.",
              "z": "Stacked column or area (counts by `cat` over time), line (`o365_violation_total`, `o365_trend`, `o365_fcst`), line (`encryption_policy_gaps`, `enc_trend`), heatmap (category x quarter).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [Splunk Add-on for Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [
                "T1078",
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Windows (Splunkbase 742), Enterprise Security data models.\n• Ensure the following data sources are available: `index=compliance` OR `index=sec` `sourcetype IN (\"dlp:violation\",\"policy:enforcement\",\"ms:o365:management\")` — `violation_category`, `PolicyName`, `Workload`; optional `sourcetype=\"qualys:*\"` / `sourcetype=\"nessus:*\"` for encryption posture drift correlated to policy.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize DLP and CASB events into shared `violation_category` values via `case` or lookup; (2) align `span=90d` to fiscal or regulatory reporting quarters; (3) correlate spikes with change tickets (`index=itsm`) using `join` on `_time` windows; (4) use the total-violations panel (`o365_violation_total`) when category columns are too sparse for `predict`; (5) retain quarterly PDF snapshots for compliance archives.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vm OR index=compliance sourcetype IN (\"nessus:sc:*\",\"qualys:host\") earliest=-730d\n| eval enc_gap=if(match(_raw,\"(?i)ssl|tls|cipher|encrypt\") AND match(_raw,\"(?i)fail|weak|deprecated\"),1,0)\n| timechart span=90d sum(enc_gap) as encryption_policy_gaps\n| trendline sma3(encryption_policy_gaps) as enc_trend\n| predict encryption_policy_gaps as enc_fcst algorithm=LLP future_timespan=2\n```\n\nUnderstanding this SPL\n\n**Policy Violation Volume Trending** — Quarterly violation counts by category—data handling, access, encryption—show whether policy drift, training gaps, or technical misconfigurations are improving or worsening, which steers awareness campaigns and control investments.\n\nDocumented **Data sources**: `index=compliance` OR `index=sec` `sourcetype IN (\"dlp:violation\",\"policy:enforcement\",\"ms:o365:management\")` — `violation_category`, `PolicyName`, `Workload`; optional `sourcetype=\"qualys:*\"` / `sourcetype=\"nessus:*\"` for encryption posture drift correlated to policy. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Windows (Splunkbase 742), Enterprise Security data models. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vm, compliance.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vm, index=compliance, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **enc_gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=90d** buckets — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Policy Violation Volume Trending**): trendline sma3(encryption_policy_gaps) as enc_trend\n• Pipeline stage (see **Policy Violation Volume Trending**): predict encryption_policy_gaps as enc_fcst algorithm=LLP future_timespan=2\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked column or area (counts by `cat` over time), line (`o365_violation_total`, `o365_trend`, `o365_fcst`), line (`encryption_policy_gaps`, `enc_trend`), heatmap (category x quarter).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Policy Violation Volume Trending. We watch trends so leaders see improvement, not a one-time snapshot.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.36",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 A.5.36 is enforced — Splunk UC-22.9.5: Policy Violation Volume Trending.",
                  "ea": "Saved search 'UC-22.9.5' running on sourcetype qualys: and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST CSF",
                  "v": "2.0",
                  "cl": "GV.PO-02",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST CSF GV.PO-02 is enforced — Splunk UC-22.9.5: Policy Violation Volume Trending.",
                  "ea": "Saved search 'UC-22.9.5' running on sourcetype qualys: and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.9.6",
              "n": "Compliance Trending — SOC 2 Control Test Pass Rate vs Prior Quarter Baseline",
              "c": "medium",
              "f": "intermediate",
              "v": "Framework-specific trending shows whether SOC 2 Type II control tests are improving or eroding compared to your own prior quarter baseline—not just a static “percent pass today”—so leadership sees trajectory for trust services commitments.",
              "t": "Splunk DB Connect (GRC export), or `index=compliance` HEC from Archer/ServiceNow GRC",
              "d": "`index=compliance` `sourcetype=\"grc:control_test\"` (framework, control_id, test_result, test_date, period_label)",
              "q": "index=compliance sourcetype=\"grc:control_test\" framework=\"SOC2\" earliest=-400d\n| eval pass=if(lower(test_result) IN (\"pass\",\"effective\",\"no_exception\"),1,0)\n| eval test_epoch=strptime(test_date,\"%Y-%m-%d\")\n| bin test_epoch span=90d as period\n| stats avg(pass) as pass_rate by period\n| sort period\n| streamstats window=1 global=f latest(pass_rate) as prev_pass_rate by framework\n| eval delta_pp=round(100*(pass_rate-prev_pass_rate),2)\n| table period, pass_rate, prev_pass_rate, delta_pp",
              "m": "(1) Normalize `test_result` vocabulary via lookup; (2) align `period` to fiscal calendar if needed; (3) exclude duplicate tester reruns via `dedup control_id test_date`; (4) annotate dashboard with SOC subservice scope; (5) export quarterly screenshot for audit committee pack.",
              "z": "Line chart (pass_rate), Column chart (delta_pp), Single value (latest pass_rate).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (GRC export), or `index=compliance` HEC from Archer/ServiceNow GRC.\n• Ensure the following data sources are available: `index=compliance` `sourcetype=\"grc:control_test\"` (framework, control_id, test_result, test_date, period_label).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize `test_result` vocabulary via lookup; (2) align `period` to fiscal calendar if needed; (3) exclude duplicate tester reruns via `dedup control_id test_date`; (4) annotate dashboard with SOC subservice scope; (5) export quarterly screenshot for audit committee pack.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=compliance sourcetype=\"grc:control_test\" framework=\"SOC2\" earliest=-400d\n| eval pass=if(lower(test_result) IN (\"pass\",\"effective\",\"no_exception\"),1,0)\n| eval test_epoch=strptime(test_date,\"%Y-%m-%d\")\n| bin test_epoch span=90d as period\n| stats avg(pass) as pass_rate by period\n| sort period\n| streamstats window=1 global=f latest(pass_rate) as prev_pass_rate by framework\n| eval delta_pp=round(100*(pass_rate-prev_pass_rate),2)\n| table period, pass_rate, prev_pass_rate, delta_pp\n```\n\nUnderstanding this SPL\n\n**Compliance Trending — SOC 2 Control Test Pass Rate vs Prior Quarter Baseline** — Framework-specific trending shows whether SOC 2 Type II control tests are improving or eroding compared to your own prior quarter baseline—not just a static “percent pass today”—so leadership sees trajectory for trust services commitments.\n\nDocumented **Data sources**: `index=compliance` `sourcetype=\"grc:control_test\"` (framework, control_id, test_result, test_date, period_label). **App/TA** (typical add-on context): Splunk DB Connect (GRC export), or `index=compliance` HEC from Archer/ServiceNow GRC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: compliance; **sourcetype**: grc:control_test. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=compliance, sourcetype=\"grc:control_test\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pass** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **test_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by period** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by framework** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **delta_pp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Compliance Trending — SOC 2 Control Test Pass Rate vs Prior Quarter Baseline**): table period, pass_rate, prev_pass_rate, delta_pp\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (pass_rate), Column chart (delta_pp), Single value (latest pass_rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Compliance Trending — SOC 2 Control Test Pass Rate vs Prior Quarter Baseline. We watch trends so leaders see improvement, not a one-time snapshot.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Multiple"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "9.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 9.1 (Monitoring, measurement, analysis, evaluation) is enforced — Splunk UC-22.9.6: Compliance Trending — SOC 2 Control Test Pass Rate vs Prior Quarter Baseline.",
                  "ea": "Saved search 'UC-22.9.6' running on index=compliance sourcetype=\"grc:control_test\" (framework, control_id, test_result, test_date, period_label), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC7.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC7.2 (System monitoring for anomalies) is enforced — Splunk UC-22.9.6: Compliance Trending — SOC 2 Control Test Pass Rate vs Prior Quarter Baseline.",
                  "ea": "Saved search 'UC-22.9.6' running on index=compliance sourcetype=\"grc:control_test\" (framework, control_id, test_result, test_date, period_label), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.9.7",
              "n": "Compliance Trending — ISO 27001 Statement of Applicability Control Exception Burn-Down",
              "c": "high",
              "f": "intermediate",
              "v": "ISO trending should reflect SoA reality. Tracking open control exceptions (compensating controls, overdue remediation) by Annex A theme shows whether your ISO program is actually burning down risk or accumulating deferred work ahead of certification surveillance.",
              "t": "Splunk DB Connect, `index=grc` HEC",
              "d": "`index=grc` `sourcetype=\"iso:soa_exception\"` (annex_control, exception_id, opened_date, target_close_date, status, framework=\"ISO27001\")",
              "q": "index=grc sourcetype=\"iso:soa_exception\" status!=\"closed\" earliest=-730d\n| eval tgt=strptime(target_close_date,\"%Y-%m-%d\")\n| eval overdue=if(now()>tgt,1,0)\n| timechart span=30d sum(overdue) as overdue_exceptions by annex_control",
              "m": "(1) Map `annex_control` to SoA rows; (2) require `target_close_date` on all non-closed exceptions; (3) monthly ISO steering committee view; (4) correlate with internal audit themes; (5) keep framework tag explicit for multi-framework tenants.",
              "z": "Stacked area (overdue_exceptions), Table (current backlog), Single value (total overdue).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect, `index=grc` HEC.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"iso:soa_exception\"` (annex_control, exception_id, opened_date, target_close_date, status, framework=\"ISO27001\").\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map `annex_control` to SoA rows; (2) require `target_close_date` on all non-closed exceptions; (3) monthly ISO steering committee view; (4) correlate with internal audit themes; (5) keep framework tag explicit for multi-framework tenants.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"iso:soa_exception\" status!=\"closed\" earliest=-730d\n| eval tgt=strptime(target_close_date,\"%Y-%m-%d\")\n| eval overdue=if(now()>tgt,1,0)\n| timechart span=30d sum(overdue) as overdue_exceptions by annex_control\n```\n\nUnderstanding this SPL\n\n**Compliance Trending — ISO 27001 Statement of Applicability Control Exception Burn-Down** — ISO trending should reflect SoA reality. Tracking open control exceptions (compensating controls, overdue remediation) by Annex A theme shows whether your ISO program is actually burning down risk or accumulating deferred work ahead of certification surveillance.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"iso:soa_exception\"` (annex_control, exception_id, opened_date, target_close_date, status, framework=\"ISO27001\"). **App/TA** (typical add-on context): Splunk DB Connect, `index=grc` HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: iso:soa_exception. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"iso:soa_exception\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **tgt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=30d** buckets with a separate series **by annex_control** — ideal for trending and alerting on this use case.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area (overdue_exceptions), Table (current backlog), Single value (total overdue).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Compliance Trending — ISO 27001 Statement of Applicability Control Exception Burn-Down. We watch trends so leaders see improvement, not a one-time snapshot.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Multiple"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "8.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 8.2 (Information-security risk assessment) is enforced — Splunk UC-22.9.7: Compliance Trending — ISO 27001 Statement of Applicability Control Exception Burn-Down.",
                  "ea": "Saved search 'UC-22.9.7' running on sourcetype iso:soa_exception and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "9.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 9.1 (Monitoring, measurement, analysis, evaluation) is enforced — Splunk UC-22.9.7: Compliance Trending — ISO 27001 Statement of Applicability Control Exception Burn-Down.",
                  "ea": "Saved search 'UC-22.9.7' running on sourcetype iso:soa_exception and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.9.8",
              "n": "Compliance Trending — Auditor Evidence Pack Generation Volume and Deficiency Rate",
              "c": "medium",
              "f": "beginner",
              "v": "External audits consume evidence packs. Trending how many packs were produced, how often they were rejected or re-requested for gaps, and median assembly time shows whether the organization is maturing its evidence factory—not scrambling at year-end.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), document system webhooks",
              "d": "`index=grc` `sourcetype=\"audit:evidence_pack\"` (engagement_id, pack_id, created_ts, sent_ts, deficiency_flag, reviewer_comment_len)",
              "q": "index=grc sourcetype=\"audit:evidence_pack\" earliest=-730d\n| eval created=strptime(created_ts,\"%Y-%m-%dT%H:%M:%SZ\"), sent=strptime(sent_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval assemble_h=round((sent-created)/3600,1)\n| bin sent span=90d as quarter\n| stats count as packs, sum(deficiency_flag) as deficient, median(assemble_h) as med_assemble_h by quarter\n| eval deficiency_rate=round(100*deficient/packs,1)\n| sort quarter",
              "m": "(1) Define `deficiency_flag` from auditor workflow state; (2) redact `reviewer_comment_len` only—never store full comments with PII; (3) integrate with secure file transfer metrics optionally; (4) target downward trend on `deficiency_rate`; (5) map `engagement_id` to regulator vs commercial audit classes.",
              "z": "Combo chart (packs + deficiency_rate), Line (med_assemble_h), Table by quarter.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), document system webhooks.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"audit:evidence_pack\"` (engagement_id, pack_id, created_ts, sent_ts, deficiency_flag, reviewer_comment_len).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define `deficiency_flag` from auditor workflow state; (2) redact `reviewer_comment_len` only—never store full comments with PII; (3) integrate with secure file transfer metrics optionally; (4) target downward trend on `deficiency_rate`; (5) map `engagement_id` to regulator vs commercial audit classes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"audit:evidence_pack\" earliest=-730d\n| eval created=strptime(created_ts,\"%Y-%m-%dT%H:%M:%SZ\"), sent=strptime(sent_ts,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval assemble_h=round((sent-created)/3600,1)\n| bin sent span=90d as quarter\n| stats count as packs, sum(deficiency_flag) as deficient, median(assemble_h) as med_assemble_h by quarter\n| eval deficiency_rate=round(100*deficient/packs,1)\n| sort quarter\n```\n\nUnderstanding this SPL\n\n**Compliance Trending — Auditor Evidence Pack Generation Volume and Deficiency Rate** — External audits consume evidence packs. Trending how many packs were produced, how often they were rejected or re-requested for gaps, and median assembly time shows whether the organization is maturing its evidence factory—not scrambling at year-end.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"audit:evidence_pack\"` (engagement_id, pack_id, created_ts, sent_ts, deficiency_flag, reviewer_comment_len). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), document system webhooks. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: audit:evidence_pack. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"audit:evidence_pack\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **created** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **assemble_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by quarter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **deficiency_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Combo chart (packs + deficiency_rate), Line (med_assemble_h), Table by quarter.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Compliance Trending — Auditor Evidence Pack Generation Volume and Deficiency Rate. We watch trends so leaders see improvement, not a one-time snapshot.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Multiple"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC5.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC5.1 (Control activities) is enforced — Splunk UC-22.9.8: Compliance Trending — Auditor Evidence Pack Generation Volume and Deficiency Rate.",
                  "ea": "Saved search 'UC-22.9.8' running on sourcetype audit:evidence_pack and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Logging.Continuity",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.Logging.Continuity (Audit trail completeness) is enforced — Splunk UC-22.9.8: Compliance Trending — Auditor Evidence Pack Generation Volume and Deficiency Rate.",
                  "ea": "Saved search 'UC-22.9.8' running on sourcetype audit:evidence_pack and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.9.9",
              "n": "Compliance Trending — Regulatory Change Feed Impact Score on In-Scope Controls",
              "c": "high",
              "f": "advanced",
              "v": "Laws and supervisory guidance change continuously. Scoring incoming regulatory change items by how many in-scope controls they touch—and trending that score—helps compliance prioritize policy updates, training, and control amendments before effective dates.",
              "t": "HTTP Event Collector from regulatory intelligence vendor or manual curator tool",
              "d": "`index=grc` `sourcetype=\"reg:change_item\"` (item_id, jurisdiction, effective_date, summary_tokens, impacted_control_ids, impact_score, _time)",
              "q": "index=grc sourcetype=\"reg:change_item\" earliest=-730d\n| eval eff=strptime(effective_date,\"%Y-%m-%d\")\n| where eff>now() OR eff>relative_time(now(),\"-90d@d\")\n| timechart span=30d sum(impact_score) as rolling_impact by jurisdiction\n| trendline sma3(rolling_impact) as impact_trend",
              "m": "(1) Maintain mapping table from `impacted_control_ids` to owners; (2) human validates `impact_score`—do not fully automate legal severity; (3) integrate effective-date countdown alerts; (4) deduplicate vendor spam with `item_id`; (5) cross-link to policy management tickets.",
              "z": "Line chart (rolling_impact + impact_trend), Table (upcoming items), Single value (next-90d sum).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector from regulatory intelligence vendor or manual curator tool.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"reg:change_item\"` (item_id, jurisdiction, effective_date, summary_tokens, impacted_control_ids, impact_score, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain mapping table from `impacted_control_ids` to owners; (2) human validates `impact_score`—do not fully automate legal severity; (3) integrate effective-date countdown alerts; (4) deduplicate vendor spam with `item_id`; (5) cross-link to policy management tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"reg:change_item\" earliest=-730d\n| eval eff=strptime(effective_date,\"%Y-%m-%d\")\n| where eff>now() OR eff>relative_time(now(),\"-90d@d\")\n| timechart span=30d sum(impact_score) as rolling_impact by jurisdiction\n| trendline sma3(rolling_impact) as impact_trend\n```\n\nUnderstanding this SPL\n\n**Compliance Trending — Regulatory Change Feed Impact Score on In-Scope Controls** — Laws and supervisory guidance change continuously. Scoring incoming regulatory change items by how many in-scope controls they touch—and trending that score—helps compliance prioritize policy updates, training, and control amendments before effective dates.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"reg:change_item\"` (item_id, jurisdiction, effective_date, summary_tokens, impacted_control_ids, impact_score, _time). **App/TA** (typical add-on context): HTTP Event Collector from regulatory intelligence vendor or manual curator tool. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: reg:change_item. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"reg:change_item\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eff** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where eff>now() OR eff>relative_time(now(),\"-90d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=30d** buckets with a separate series **by jurisdiction** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Compliance Trending — Regulatory Change Feed Impact Score on In-Scope Controls**): trendline sma3(rolling_impact) as impact_trend\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (rolling_impact + impact_trend), Table (upcoming items), Single value (next-90d sum).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Compliance Trending — Regulatory Change Feed Impact Score on In-Scope Controls. We watch trends so leaders see improvement, not a one-time snapshot.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Multiple"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "9.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 9.1 (Monitoring, measurement, analysis, evaluation) is enforced — Splunk UC-22.9.9: Compliance Trending — Regulatory Change Feed Impact Score on In-Scope Controls.",
                  "ea": "Saved search 'UC-22.9.9' running on sourcetype reg:change_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC3.1 (Risk assessment) is enforced — Splunk UC-22.9.9: Compliance Trending — Regulatory Change Feed Impact Score on In-Scope Controls.",
                  "ea": "Saved search 'UC-22.9.9' running on sourcetype reg:change_item and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.9.10",
              "n": "Compliance Trending — Weighted Compliance Posture Composite and Driver Attribution",
              "c": "high",
              "f": "advanced",
              "v": "A single posture score communicates status to executives while driver attribution explains why it moved. Weighting domains (identity, data, resilience, third party) by materiality yields a composite that trends over time and complements—not replaces—framework-specific panels.",
              "t": "Splunk DB Connect, `index=compliance` normalized score feeds",
              "d": "`index=compliance` `sourcetype=\"posture:domain_score\"` (domain, score_0_100, weight, as_of_date)",
              "q": "index=compliance sourcetype=\"posture:domain_score\" earliest=-400d\n| eval d=strptime(as_of_date,\"%Y-%m-%d\")\n| bin d span=7d as week\n| stats avg(score_0_100) as avg_score, latest(weight) as weight by week, domain\n| eval contrib=avg_score*weight\n| stats sum(contrib) as composite by week\n| sort week\n| streamstats window=2 global=f first(composite) as prev_composite\n| eval wow_delta=round(composite-prev_composite,2)\n| join type=left week [\n    search index=compliance sourcetype=\"posture:domain_score\" earliest=-400d\n    | eval d=strptime(as_of_date,\"%Y-%m-%d\")\n    | bin d span=7d as week\n    | stats avg(score_0_100) as s by week, domain\n    | sort 0 week -s\n    | dedup week\n    | rename domain as top_driver_domain, s as top_driver_score\n    | fields week, top_driver_domain, top_driver_score\n  ]\n| table week, composite, wow_delta, top_driver_domain, top_driver_score",
              "m": "(1) Normalize `weight` to sum to 1 per week in ETL if not already; (2) simplify driver logic in production with explicit `driver_domain` from upstream model instead of `dedup` shortcut; (3) governance committee approves weights annually; (4) document formula in methodology PDF; (5) never use this score as sole regulatory metric.",
              "z": "Line chart (composite), Overlay (wow_delta), Single value (latest composite), Table (driver).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect, `index=compliance` normalized score feeds.\n• Ensure the following data sources are available: `index=compliance` `sourcetype=\"posture:domain_score\"` (domain, score_0_100, weight, as_of_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize `weight` to sum to 1 per week in ETL if not already; (2) simplify driver logic in production with explicit `driver_domain` from upstream model instead of `dedup` shortcut; (3) governance committee approves weights annually; (4) document formula in methodology PDF; (5) never use this score as sole regulatory metric.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=compliance sourcetype=\"posture:domain_score\" earliest=-400d\n| eval d=strptime(as_of_date,\"%Y-%m-%d\")\n| bin d span=7d as week\n| stats avg(score_0_100) as avg_score, latest(weight) as weight by week, domain\n| eval contrib=avg_score*weight\n| stats sum(contrib) as composite by week\n| sort week\n| streamstats window=2 global=f first(composite) as prev_composite\n| eval wow_delta=round(composite-prev_composite,2)\n| join type=left week [\n    search index=compliance sourcetype=\"posture:domain_score\" earliest=-400d\n    | eval d=strptime(as_of_date,\"%Y-%m-%d\")\n    | bin d span=7d as week\n    | stats avg(score_0_100) as s by week, domain\n    | sort 0 week -s\n    | dedup week\n    | rename domain as top_driver_domain, s as top_driver_score\n    | fields week, top_driver_domain, top_driver_score\n  ]\n| table week, composite, wow_delta, top_driver_domain, top_driver_score\n```\n\nUnderstanding this SPL\n\n**Compliance Trending — Weighted Compliance Posture Composite and Driver Attribution** — A single posture score communicates status to executives while driver attribution explains why it moved. Weighting domains (identity, data, resilience, third party) by materiality yields a composite that trends over time and complements—not replaces—framework-specific panels.\n\nDocumented **Data sources**: `index=compliance` `sourcetype=\"posture:domain_score\"` (domain, score_0_100, weight, as_of_date). **App/TA** (typical add-on context): Splunk DB Connect, `index=compliance` normalized score feeds. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: compliance; **sourcetype**: posture:domain_score. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=compliance, sourcetype=\"posture:domain_score\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **d** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by week, domain** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **contrib** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by week** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` adds a running or windowed calculation to each row without reducing the row count; here it carries the prior week's composite forward as **prev_composite**.\n• `eval` defines or adjusts **wow_delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Compliance Trending — Weighted Compliance Posture Composite and Driver Attribution**): table week, composite, wow_delta, top_driver_domain, top_driver_score\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (composite), Overlay (wow_delta), Single value (latest composite), Table (driver).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Compliance Trending — Weighted Compliance Posture Composite and Driver Attribution. We watch trends so leaders see improvement, not a one-time snapshot.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Multiple"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "9.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that ISO 27001 9.1 (Monitoring, measurement, analysis, evaluation) is enforced — Splunk UC-22.9.10: Compliance Trending — Weighted Compliance Posture Composite and Driver Attribution.",
                  "ea": "Saved search 'UC-22.9.10' running on index=compliance sourcetype=\"posture:domain_score\" (domain, score_0_100, weight, as_of_date), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC 2 CC2.1 (Internal communication) is enforced — Splunk UC-22.9.10: Compliance Trending — Weighted Compliance Posture Composite and Driver Attribution.",
                  "ea": "Saved search 'UC-22.9.10' running on index=compliance sourcetype=\"posture:domain_score\" (domain, score_0_100, weight, as_of_date), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 10,
            "none": 0
          }
        },
        {
          "i": "22.11",
          "n": "PCI DSS v4.0",
          "u": [
            {
              "i": "22.11.1",
              "n": "Scheduled Firewall Rule Review Evidence for CDE NSCs (PCI DSS Req 1.1.6, 1.2.8)",
              "c": "high",
              "f": "intermediate",
              "v": "Produces a repeatable inventory of firewall policy changes touching the cardholder data environment so assessors can see that network security controls are formally reviewed at least every six months.",
              "t": "Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263)",
              "d": "`index=netfw` `sourcetype IN (\"pan:config\",\"pan:system\")` — commits, admin user, rule object names; optional `sourcetype=\"cisco:asa\"` configuration events",
              "q": "index=netfw sourcetype IN (\"pan:config\",\"pan:system\") earliest=-185d\n| where match(_raw,\"(?i)commit|policy|rule|security|nat\")\n| lookup pci_cde_device_groups.csv device_group OUTPUT in_cde\n| where in_cde=\"true\"\n| rex field=_raw \"(?i)admin@?(?<fw_admin>\\\\S+)\"\n| bin _time span=30d\n| stats values(fw_admin) as reviewers, dc(_raw) as change_events by _time, host\n| sort - _time",
              "m": "(1) Maintain `pci_cde_device_groups.csv` mapping Panorama device groups or firewall hostnames to `in_cde`. (2) Align the search window to your documented six-month review cycle. (3) Export the stats table monthly and attach the export ID to the GRC ticket. (4) Exclude template pushes that do not alter enforced policy using a `commit_id` dedupe if available.",
              "z": "Column chart (change_events by month), table (reviewers per device group), single value (distinct review months with activity).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=netfw` `sourcetype IN (\"pan:config\",\"pan:system\")` — commits, admin user, rule object names; optional `sourcetype=\"cisco:asa\"` configuration events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `pci_cde_device_groups.csv` mapping Panorama device groups or firewall hostnames to `in_cde`. (2) Align the search window to your documented six-month review cycle. (3) Export the stats table monthly and attach the export ID to the GRC ticket. (4) Exclude template pushes that do not alter enforced policy using a `commit_id` dedupe if available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype IN (\"pan:config\",\"pan:system\") earliest=-185d\n| where match(_raw,\"(?i)commit|policy|rule|security|nat\")\n| lookup pci_cde_device_groups.csv device_group OUTPUT in_cde\n| where in_cde=\"true\"\n| rex field=_raw \"(?i)admin@?(?<fw_admin>\\\\S+)\"\n| bin _time span=30d\n| stats values(fw_admin) as reviewers, dc(_raw) as change_events by _time, host\n| sort - _time\n```\n\nUnderstanding this SPL\n\n**Scheduled Firewall Rule Review Evidence for CDE NSCs (PCI DSS Req 1.1.6, 1.2.8)** — Produces a repeatable inventory of firewall policy changes touching the cardholder data environment so assessors can see that network security controls are formally reviewed at least every six months.\n\nDocumented **Data sources**: `index=netfw` `sourcetype IN (\"pan:config\",\"pan:system\")` — commits, admin user, rule object names; optional `sourcetype=\"cisco:asa\"` configuration events. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(_raw,\"(?i)commit|policy|rule|security|nat\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_cde=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Extracts fields with `rex` (regular expression).\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest span=30d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Scheduled Firewall Rule Review Evidence for CDE NSCs (PCI DSS Req 1.1.6, 1.2.8)** — Produces a repeatable inventory of firewall policy changes touching the cardholder data environment so assessors can see that network security controls are formally reviewed at least every six months.\n\nDocumented **Data sources**: `index=netfw` `sourcetype IN (\"pan:config\",\"pan:system\")` — commits, admin user, rule object names; optional `sourcetype=\"cisco:asa\"` configuration events. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (change_events by month), table (reviewers per device group), single value (distinct review months with activity).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We produce a repeatable inventory of firewall policy changes touching the cardholder data environment so assessors can see that network security controls are formally reviewed at least every six months. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest span=30d | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "1.2.8",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 1.2.8 is enforced — Splunk UC-22.11.1: Scheduled Firewall Rule Review Evidence for CDE NSCs.",
                  "ea": "Saved search 'UC-22.11.1' running on sourcetype cisco:asa and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.2",
              "n": "NSC Configuration Change Correlation to Change Tickets (PCI DSS Req 1.1.2, 1.2.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Links live firewall configuration changes to approved change records, evidencing that modifications to network security controls follow change management and are traceable for PCI interviews.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=netfw` `sourcetype=\"pan:config\"`; `index=itsm` `sourcetype=\"snow:change_request\"` or `sourcetype=\"jira:changelog\"` with change number in description",
              "q": "index=netfw sourcetype=\"pan:config\" earliest=-7d\n| eval chg_hint=if(match(_raw,\"CHG\\\\d{7}\"),replace(_raw,\".*?(CHG\\\\d{7}).*\",\"\\\\1\"),null())\n| where isnotnull(chg_hint)\n| join type=left max=1 chg_hint [\n    search index=itsm sourcetype=\"snow:change_request\" earliest=-14d\n    | rename number as chg_hint\n    | fields chg_hint, state, start_date, end_date, assigned_to\n  ]\n| eval missing_ticket=if(isnull(state),\"no_matching_chg\",state)\n| table _time, host, chg_hint, missing_ticket, assigned_to\n| sort _time",
              "m": "(1) Require engineers to embed `CHG#######` in commit descriptions or Panorama notes. (2) Tune `rex` for your ITSM number format. (3) Alert on `missing_ticket=\"no_matching_chg\"` for CDE-tagged hosts. (4) Retain joined results in a summary index for QSA sampling.",
              "z": "Table (exceptions first), pie (matched vs missing), timeline (_time vs change state).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=netfw` `sourcetype=\"pan:config\"`; `index=itsm` `sourcetype=\"snow:change_request\"` or `sourcetype=\"jira:changelog\"` with change number in description.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require engineers to embed `CHG#######` in commit descriptions or Panorama notes. (2) Tune `rex` for your ITSM number format. (3) Alert on `missing_ticket=\"no_matching_chg\"` for CDE-tagged hosts. (4) Retain joined results in a summary index for QSA sampling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype=\"pan:config\" earliest=-7d\n| eval chg_hint=if(match(_raw,\"CHG\\\\d{7}\"),replace(_raw,\".*?(CHG\\\\d{7}).*\",\"\\\\1\"),null())\n| where isnotnull(chg_hint)\n| join type=left max=1 chg_hint [\n    search index=itsm sourcetype=\"snow:change_request\" earliest=-14d\n    | rename number as chg_hint\n    | fields chg_hint, state, start_date, end_date, assigned_to\n  ]\n| eval missing_ticket=if(isnull(state),\"no_matching_chg\",state)\n| table _time, host, chg_hint, missing_ticket, assigned_to\n| sort _time\n```\n\nUnderstanding this SPL\n\n**NSC Configuration Change Correlation to Change Tickets (PCI DSS Req 1.1.2, 1.2.1)** — Links live firewall configuration changes to approved change records, evidencing that modifications to network security controls follow change management and are traceable for PCI interviews.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:config\"`; `index=itsm` `sourcetype=\"snow:change_request\"` or `sourcetype=\"jira:changelog\"` with change number in description. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw; **sourcetype**: pan:config. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, sourcetype=\"pan:config\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **chg_hint** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(chg_hint)` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **missing_ticket** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NSC Configuration Change Correlation to Change Tickets (PCI DSS Req 1.1.2, 1.2.1)**): table _time, host, chg_hint, missing_ticket, assigned_to\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NSC Configuration Change Correlation to Change Tickets (PCI DSS Req 1.1.2, 1.2.1)** — Links live firewall configuration changes to approved change records, evidencing that modifications to network security controls follow change management and are traceable for PCI interviews.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:config\"`; `index=itsm` `sourcetype=\"snow:change_request\"` or `sourcetype=\"jira:changelog\"` with change number in description. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (exceptions first), pie (matched vs missing), timeline (_time vs change state).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We links live firewall configuration changes to approved change records, evidencing that modifications to network security controls follow change management and are traceable for PCI interviews. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "1.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 1.2.1 is enforced — Splunk UC-22.11.2: NSC Configuration Change Correlation to Change Tickets.",
                  "ea": "Saved search 'UC-22.11.2' running on sourcetype pan:config and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.3",
              "n": "CDE Boundary Traffic — Unexpected Corporate-to-Payment Flows (PCI DSS Req 1.2.3, 1.3.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Highlights allowed sessions from general corporate segments into payment subnets so segmentation and NSC rules can be validated between assessments, not only during annual penetration tests.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=netfw` `sourcetype=\"pan:traffic\"` — `src`, `dest`, `src_zone`, `dest_zone`, `action`, `rule`; lookups `pci_corp_zones.csv`, `pci_payment_subnets.csv`",
              "q": "index=netfw sourcetype=\"pan:traffic\" action=allowed earliest=-24h\n| lookup pci_corp_zones.csv zone AS src_zone OUTPUT corp_zone\n| lookup pci_payment_subnets.csv cidr AS dest OUTPUT payment_tier\n| where corp_zone=\"true\" AND payment_tier=\"card_processing\"\n| stats count, values(dest_port) as ports, values(app) as apps by src, dest, rule\n| where count>5\n| sort - count",
              "m": "(1) Populate CIDR lookups from network diagrams approved for PCI. (2) Whitelist jump-host rules via `where NOT match(rule,\"(?i)bastion|jump\")`. (3) Feed hits to the segmentation validation runbook. (4) Map findings to Req 11 segmentation testing follow-up.",
              "z": "Heatmap (src_zone x dest_zone counts), table (top flows), Sankey optional for zone paths.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=netfw` `sourcetype=\"pan:traffic\"` — `src`, `dest`, `src_zone`, `dest_zone`, `action`, `rule`; lookups `pci_corp_zones.csv`, `pci_payment_subnets.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate CIDR lookups from network diagrams approved for PCI. (2) Whitelist jump-host rules via `where NOT match(rule,\"(?i)bastion|jump\")`. (3) Feed hits to the segmentation validation runbook. (4) Map findings to Req 11 segmentation testing follow-up.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype=\"pan:traffic\" action=allowed earliest=-24h\n| lookup pci_corp_zones.csv zone AS src_zone OUTPUT corp_zone\n| lookup pci_payment_subnets.csv cidr AS dest OUTPUT payment_tier\n| where corp_zone=\"true\" AND payment_tier=\"card_processing\"\n| stats count, values(dest_port) as ports, values(app) as apps by src, dest, rule\n| where count>5\n| sort - count\n```\n\nUnderstanding this SPL\n\n**CDE Boundary Traffic — Unexpected Corporate-to-Payment Flows (PCI DSS Req 1.2.3, 1.3.1)** — Highlights allowed sessions from general corporate segments into payment subnets so segmentation and NSC rules can be validated between assessments, not only during annual penetration tests.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:traffic\"` — `src`, `dest`, `src_zone`, `dest_zone`, `action`, `rule`; lookups `pci_corp_zones.csv`, `pci_payment_subnets.csv`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, sourcetype=\"pan:traffic\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where corp_zone=\"true\" AND payment_tier=\"card_processing\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, rule** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CDE Boundary Traffic — Unexpected Corporate-to-Payment Flows (PCI DSS Req 1.2.3, 1.3.1)** — Highlights allowed sessions from general corporate segments into payment subnets so segmentation and NSC rules can be validated between assessments, not only during annual penetration tests.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:traffic\"` — `src`, `dest`, `src_zone`, `dest_zone`, `action`, `rule`; lookups `pci_corp_zones.csv`, `pci_payment_subnets.csv`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (src_zone x dest_zone counts), table (top flows), Sankey optional for zone paths.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We highlights allowed sessions from general corporate segments into payment subnets so segmentation and NSC rules can be validated between assessments, not only during annual penetration tests. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "1.3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 1.3.1 is enforced — Splunk UC-22.11.3: CDE Boundary Traffic — Unexpected Corporate-to-Payment Flows.",
                  "ea": "Saved search 'UC-22.11.3' running on sourcetype pan:traffic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.4",
              "n": "Denied Inbound Attempts to Payment Application Ports (PCI DSS Req 1.2.7, 1.3.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Surfaces blocked connection attempts targeting card-processing listeners so you can prove NSCs are actively enforced and tune rules before attackers find a misconfiguration.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=netfw` `sourcetype=\"pan:traffic\"` with `action` in (deny, drop, reset-client)",
              "q": "index=netfw sourcetype=\"pan:traffic\" earliest=-24h\n| where action IN (\"deny\",\"drop\",\"reset-client\",\"reset-server\")\n| lookup pci_payment_listener_ports.csv dest_port OUTPUT sensitive_listener\n| where sensitive_listener=\"true\"\n| stats count by src, dest, dest_port, app, rule\n| where count>=50\n| sort - count\n| head 100",
              "m": "(1) Define listener ports (database, queue, admin API) in the lookup. (2) Enrich `src` with threat intel in ES if available. (3) Set threshold based on baseline scanning noise. (4) Archive weekly top-100 for operational review evidence.",
              "z": "Map or geomap (if GeoIP enabled), bar chart (dest_port), table (top sources).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=netfw` `sourcetype=\"pan:traffic\"` with `action` in (deny, drop, reset-client).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define listener ports (database, queue, admin API) in the lookup. (2) Enrich `src` with threat intel in ES if available. (3) Set threshold based on baseline scanning noise. (4) Archive weekly top-100 for operational review evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype=\"pan:traffic\" earliest=-24h\n| where action IN (\"deny\",\"drop\",\"reset-client\",\"reset-server\")\n| lookup pci_payment_listener_ports.csv dest_port OUTPUT sensitive_listener\n| where sensitive_listener=\"true\"\n| stats count by src, dest, dest_port, app, rule\n| where count>=50\n| sort - count\n| head 100\n```\n\nUnderstanding this SPL\n\n**Denied Inbound Attempts to Payment Application Ports (PCI DSS Req 1.2.7, 1.3.2)** — Surfaces blocked connection attempts targeting card-processing listeners so you can prove NSCs are actively enforced and tune rules before attackers find a misconfiguration.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:traffic\"` with `action` in (deny, drop, reset-client). **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, sourcetype=\"pan:traffic\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where action IN (\"deny\",\"drop\",\"reset-client\",\"reset-server\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where sensitive_listener=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port, app, rule** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>=50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port, All_Traffic.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Denied Inbound Attempts to Payment Application Ports (PCI DSS Req 1.2.7, 1.3.2)** — Surfaces blocked connection attempts targeting card-processing listeners so you can prove NSCs are actively enforced and tune rules before attackers find a misconfiguration.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:traffic\"` with `action` in (deny, drop, reset-client). **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map or geomap (if GeoIP enabled), bar chart (dest_port), table (top sources).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface blocked connection attempts targeting card-processing listeners so you can prove NSCs are actively enforced and tune rules before attackers find a misconfiguration. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port, All_Traffic.app | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "1.3.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 1.3.2 is enforced — Splunk UC-22.11.4: Denied Inbound Attempts to Payment Application Ports.",
                  "ea": "Saved search 'UC-22.11.4' running on index=netfw sourcetype=\"pan:traffic\" with action in (deny, drop, reset-client), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.5",
              "n": "DMZ Originated Sessions Hitting CDE Internal Segments (PCI DSS Req 1.3.4, 1.3.7)",
              "c": "critical",
              "f": "expert",
              "v": "Detects allowed paths from DMZ assets into internal CDE networks that violate one-way DMZ design, reducing the chance a compromised web tier pivots into systems that store or process account data.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=netfw` `sourcetype=\"pan:traffic\"`; asset lookup `pci_dmz_assets.csv`",
              "q": "index=netfw sourcetype=\"pan:traffic\" action=allowed earliest=-7d\n| lookup pci_dmz_assets.csv ip AS src OUTPUT in_dmz\n| lookup pci_cde_internal.csv ip AS dest OUTPUT in_cde_internal\n| where in_dmz=\"true\" AND in_cde_internal=\"true\"\n| bin _time span=1h\n| stats count by _time, src, dest, app, dest_port\n| eventstats perc95(count) as p95 globally\n| where count>p95 AND p95>0\n| sort - count",
              "m": "(1) Refresh DMZ and CDE internal inventories weekly from CMDB. (2) Validate expected admin SSH paths. (3) Create ES notable for sustained spikes. (4) Document compensating controls if architecture requires exceptions.",
              "z": "Line chart (hourly counts), bubble chart (src size by count), table (spike windows).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=netfw` `sourcetype=\"pan:traffic\"`; asset lookup `pci_dmz_assets.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Refresh DMZ and CDE internal inventories weekly from CMDB. (2) Validate expected admin SSH paths. (3) Create ES notable for sustained spikes. (4) Document compensating controls if architecture requires exceptions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype=\"pan:traffic\" action=allowed earliest=-7d\n| lookup pci_dmz_assets.csv ip AS src OUTPUT in_dmz\n| lookup pci_cde_internal.csv ip AS dest OUTPUT in_cde_internal\n| where in_dmz=\"true\" AND in_cde_internal=\"true\"\n| bin _time span=1h\n| stats count by _time, src, dest, app, dest_port\n| eventstats perc95(count) as p95 globally\n| where count>p95 AND p95>0\n| sort - count\n```\n\nUnderstanding this SPL\n\n**DMZ Originated Sessions Hitting CDE Internal Segments (PCI DSS Req 1.3.4, 1.3.7)** — Detects allowed paths from DMZ assets into internal CDE networks that violate one-way DMZ design, reducing the chance a compromised web tier pivots into systems that store or process account data.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:traffic\"`; asset lookup `pci_dmz_assets.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, sourcetype=\"pan:traffic\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_dmz=\"true\" AND in_cde_internal=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, src, dest, app, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• Filters the current rows with `where count>p95 AND p95>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.app, All_Traffic.dest_port span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DMZ Originated Sessions Hitting CDE Internal Segments (PCI DSS Req 1.3.4, 1.3.7)** — Detects allowed paths from DMZ assets into internal CDE networks that violate one-way DMZ design, reducing the chance a compromised web tier pivots into systems that store or process account data.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:traffic\"`; asset lookup `pci_dmz_assets.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (hourly counts), bubble chart (src size by count), table (spike windows).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect allowed paths from DMZ assets into internal CDE networks that violate one-way DMZ design, reducing the chance a compromised web tier pivots into systems that store or process account data. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.app, All_Traffic.dest_port span=1h | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "1.3.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 1.3.7 is enforced — Splunk UC-22.11.5: DMZ Originated Sessions Hitting CDE Internal Segments.",
                  "ea": "Saved search 'UC-22.11.5' running on index=netfw sourcetype=\"pan:traffic\"; asset lookup pci_dmz_assets.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.6",
              "n": "Wireless Client Pools Reaching CDE Hosts (PCI DSS Req 1.2.3, 2.2.4)",
              "c": "high",
              "f": "advanced",
              "v": "Shows successful flows from wireless DHCP scopes to CDE destinations, supporting evidence that wireless is segmented from cardholder environments or that compensating detective controls exist.",
              "t": "Splunk Add-on for Palo Alto Networks (2757), Splunk Stream (1809)",
              "d": "`index=netfw` `sourcetype=\"pan:traffic\"`; optional `index=stream` for decrypted metadata tagging",
              "q": "index=netfw sourcetype=\"pan:traffic\" action=allowed earliest=-24h\n| lookup pci_wlan_dhcp_scopes.csv subnet AS src OUTPUT wlan_ssid\n| lookup pci_cde_subnets.csv subnet AS dest OUTPUT in_cde\n| where isnotnull(wlan_ssid) AND in_cde=\"true\"\n| stats earliest(_time) as first_seen, latest(_time) as last_seen, count by wlan_ssid, src, dest, app\n| eval session_span_sec=last_seen-first_seen\n| sort - count",
              "m": "(1) Export DHCP superscopes per SSID from IPAM. (2) Exclude corporate VPN concentrator egress if pooled with WLAN. (3) Pair with 802.1X authentication logs when available. (4) Report zero-result runs as quarterly attestation attachment.",
              "z": "Table (sessions), pie (by wlan_ssid), timeline (first_seen/last_seen).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [Splunk Stream](https://splunkbase.splunk.com/app/1809), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757), Splunk Stream (1809).\n• Ensure the following data sources are available: `index=netfw` `sourcetype=\"pan:traffic\"`; optional `index=stream` for decrypted metadata tagging.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export DHCP superscopes per SSID from IPAM. (2) Exclude corporate VPN concentrator egress if pooled with WLAN. (3) Pair with 802.1X authentication logs when available. (4) Report zero-result runs as quarterly attestation attachment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype=\"pan:traffic\" action=allowed earliest=-24h\n| lookup pci_wlan_dhcp_scopes.csv subnet AS src OUTPUT wlan_ssid\n| lookup pci_cde_subnets.csv subnet AS dest OUTPUT in_cde\n| where isnotnull(wlan_ssid) AND in_cde=\"true\"\n| stats earliest(_time) as first_seen, latest(_time) as last_seen, count by wlan_ssid, src, dest, app\n| eval session_span_sec=last_seen-first_seen\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Wireless Client Pools Reaching CDE Hosts (PCI DSS Req 1.2.3, 2.2.4)** — Shows successful flows from wireless DHCP scopes to CDE destinations, supporting evidence that wireless is segmented from cardholder environments or that compensating detective controls exist.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:traffic\"`; optional `index=stream` for decrypted metadata tagging. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Stream (1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, sourcetype=\"pan:traffic\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(wlan_ssid) AND in_cde=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by wlan_ssid, src, dest, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **session_span_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Wireless Client Pools Reaching CDE Hosts (PCI DSS Req 1.2.3, 2.2.4)** — Shows successful flows from wireless DHCP scopes to CDE destinations, supporting evidence that wireless is segmented from cardholder environments or that compensating detective controls exist.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:traffic\"`; optional `index=stream` for decrypted metadata tagging. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Stream (1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sessions), pie (by wlan_ssid), timeline (first_seen/last_seen).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We shows successful flows from wireless DHCP scopes to CDE destinations, supporting evidence that wireless is segmented from cardholder environments or that compensating detective controls exist. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.app | sort - count",
              "e": [
                "paloalto",
                "stream"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "2.2.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 2.2.4 is enforced — Splunk UC-22.11.6: Wireless Client Pools Reaching CDE Hosts.",
                  "ea": "Saved search 'UC-22.11.6' running on index=netfw sourcetype=\"pan:traffic\"; optional index=stream for decrypted metadata tagging, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.7",
              "n": "Outbound Service Allow-List Violations from CDE Servers (PCI DSS Req 1.2.1, 1.2.6)",
              "c": "medium",
              "f": "intermediate",
              "v": "Compares observed egress from in-scope servers against the documented approved service list, proving that outbound traffic from the CDE is limited to what the standard permits.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=netfw` `sourcetype=\"pan:traffic\"`; `lookup pci_cde_server_subnets.csv`, `lookup pci_approved_egress.csv`",
              "q": "index=netfw sourcetype=\"pan:traffic\" action=allowed earliest=-7d\n| lookup pci_cde_server_subnets.csv subnet AS src OUTPUT cde_server\n| where cde_server=\"true\"\n| eval svc=lower(concat(dest_port,\"#\",app))\n| lookup pci_approved_egress.csv svc OUTPUT approved\n| where isnull(approved) OR approved=\"false\"\n| stats count by src, dest, dest_port, app, dest_zone\n| sort - count",
              "m": "(1) Build the egress lookup from the PCI network standard appendix. (2) Refresh when new SaaS dependencies are onboarded. (3) Weekly ops review of the stats output. (4) Store signed CSV with each policy version.",
              "z": "Treemap (by app), bar (dest_port), table (detail).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=netfw` `sourcetype=\"pan:traffic\"`; `lookup pci_cde_server_subnets.csv`, `lookup pci_approved_egress.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build the egress lookup from the PCI network standard appendix. (2) Refresh when new SaaS dependencies are onboarded. (3) Weekly ops review of the stats output. (4) Store signed CSV with each policy version.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype=\"pan:traffic\" action=allowed earliest=-7d\n| lookup pci_cde_server_subnets.csv subnet AS src OUTPUT cde_server\n| where cde_server=\"true\"\n| eval svc=lower(concat(dest_port,\"#\",app))\n| lookup pci_approved_egress.csv svc OUTPUT approved\n| where isnull(approved) OR approved=\"false\"\n| stats count by src, dest, dest_port, app, dest_zone\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Outbound Service Allow-List Violations from CDE Servers (PCI DSS Req 1.2.1, 1.2.6)** — Compares observed egress from in-scope servers against the documented approved service list, proving that outbound traffic from the CDE is limited to what the standard permits.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:traffic\"`; `lookup pci_cde_server_subnets.csv`, `lookup pci_approved_egress.csv`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, sourcetype=\"pan:traffic\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cde_server=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **svc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved) OR approved=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port, app, dest_zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port, All_Traffic.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Outbound Service Allow-List Violations from CDE Servers (PCI DSS Req 1.2.1, 1.2.6)** — Compares observed egress from in-scope servers against the documented approved service list, proving that outbound traffic from the CDE is limited to what the standard permits.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:traffic\"`; `lookup pci_cde_server_subnets.csv`, `lookup pci_approved_egress.csv`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Treemap (by app), bar (dest_port), table (detail).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare observed egress from in-scope servers against the documented approved service list, proving that outbound traffic from the CDE is limited to what the standard permits. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port, All_Traffic.app | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "1.2.6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 1.2.6 is enforced — Splunk UC-22.11.7: Outbound Service Allow-List Violations from CDE Servers.",
                  "ea": "Saved search 'UC-22.11.7' running on index=netfw sourcetype=\"pan:traffic\"; lookup pci_cde_server_subnets.csv, lookup pci_approved_egress.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.8",
              "n": "Default and Vendor Account Authentications on In-Scope Systems (PCI DSS Req 2.2.2)",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects interactive use of factory or vendor default accounts on systems in the CDE, directly supporting the requirement to change all vendor defaults before production use.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833)",
              "d": "`index=windows` `sourcetype=\"WinEventLog:Security\"` EventCode 4624/4625; `index=linux` `sourcetype=\"linux:secure\"` or `auditd`",
              "q": "(index=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4625))\n    OR (index=linux (sourcetype=\"linux:secure\" OR sourcetype=\"linux:auditd\")) earliest=-24h\n| eval acct=coalesce(TargetUserName, acct, audit_user)\n| lookup pci_default_accounts.csv account AS acct OUTPUT is_default\n| lookup pci_in_scope_hosts.csv host OUTPUT in_scope\n| where is_default=\"true\" AND in_scope=\"true\"\n| stats count by host, acct, EventCode, action\n| sort - count",
              "m": "(1) Seed the default-account lookup from CIS benchmarks and vendor lists. (2) Map `host` to PCI inventory. (3) Alert on any successful 4624 for default accounts. (4) Track remediation tickets to closure.",
              "z": "Table (hits), single value (distinct hosts), column chart (by account name).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833).\n• Ensure the following data sources are available: `index=windows` `sourcetype=\"WinEventLog:Security\"` EventCode 4624/4625; `index=linux` `sourcetype=\"linux:secure\"` or `auditd`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Seed the default-account lookup from CIS benchmarks and vendor lists. (2) Map `host` to PCI inventory. (3) Alert on any successful 4624 for default accounts. (4) Track remediation tickets to closure.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4625))\n    OR (index=linux (sourcetype=\"linux:secure\" OR sourcetype=\"linux:auditd\")) earliest=-24h\n| eval acct=coalesce(TargetUserName, acct, audit_user)\n| lookup pci_default_accounts.csv account AS acct OUTPUT is_default\n| lookup pci_in_scope_hosts.csv host OUTPUT in_scope\n| where is_default=\"true\" AND in_scope=\"true\"\n| stats count by host, acct, EventCode, action\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Default and Vendor Account Authentications on In-Scope Systems (PCI DSS Req 2.2.2)** — Detects interactive use of factory or vendor default accounts on systems in the CDE, directly supporting the requirement to change all vendor defaults before production use.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` EventCode 4624/4625; `index=linux` `sourcetype=\"linux:secure\"` or `auditd`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, linux; **sourcetype**: WinEventLog:Security, linux:secure, linux:auditd. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, index=linux, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **acct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where is_default=\"true\" AND in_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, acct, EventCode, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest, Authentication.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Default and Vendor Account Authentications on In-Scope Systems (PCI DSS Req 2.2.2)** — Detects interactive use of factory or vendor default accounts on systems in the CDE, directly supporting the requirement to change all vendor defaults before production use.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"WinEventLog:Security\"` EventCode 4624/4625; `index=linux` `sourcetype=\"linux:secure\"` or `auditd`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hits), single value (distinct hosts), column chart (by account name).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect interactive use of factory or vendor default accounts on systems in the CDE, directly supporting the requirement to change all vendor defaults before production use. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest, Authentication.action | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "2.2.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 2.2.2 is enforced — Splunk UC-22.11.8: Default and Vendor Account Authentications on In-Scope Systems.",
                  "ea": "Saved search 'UC-22.11.8' running on index=windows sourcetype=\"WinEventLog:Security\" EventCode 4624/4625; index=linux sourcetype=\"linux:secure\" or auditd, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.9",
              "n": "Configuration Drift vs CIS Hardening Benchmark on Windows CDE Members (PCI DSS Req 2.2.1, 2.2.3)",
              "c": "high",
              "f": "advanced",
              "v": "Aggregates configuration assessment findings against a documented hardening standard so you can show continuous alignment with industry-accepted secure configurations for in-scope Windows systems.",
              "t": "Splunk Add-on for Tenable (4060), Splunk Add-on for Qualys (2964)",
              "d": "`index=vm` `sourcetype IN (\"tenable:sc:compliance\",\"qualys:policy\")` — `plugin_name`, `host_fqdn`, `compliance_status`",
              "q": "index=vm sourcetype IN (\"tenable:sc:compliance\",\"qualys:policy\") earliest=-7d\n| lookup pci_windows_cde_assets.csv fqdn AS host_fqdn OUTPUT cde_member\n| where cde_member=\"true\"\n| eval fail=if(match(compliance_status,\"(?i)failed|error|critical\"),1,0)\n| stats sum(fail) as failed_checks, count as total_checks by host_fqdn\n| eval fail_pct=if(total_checks>0, round(100*failed_checks/total_checks,2), null())\n| where fail_pct>5\n| sort - fail_pct",
              "m": "(1) Map scanner asset names to PCI inventory FQDNs. (2) Scope policies to CIS Windows Server benchmarks used in your standard. (3) Track `fail_pct` trend weekly. (4) Attach exports to secure configuration evidence repository.",
              "z": "Bar chart (fail_pct by host), heatmap (plugin families), single value (mean fail_pct across CDE).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Tenable](https://splunkbase.splunk.com/app/4060), [Splunk Add-on for Qualys](https://splunkbase.splunk.com/app/2964), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Tenable (4060), Splunk Add-on for Qualys (2964).\n• Ensure the following data sources are available: `index=vm` `sourcetype IN (\"tenable:sc:compliance\",\"qualys:policy\")` — `plugin_name`, `host_fqdn`, `compliance_status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map scanner asset names to PCI inventory FQDNs. (2) Scope policies to CIS Windows Server benchmarks used in your standard. (3) Track `fail_pct` trend weekly. (4) Attach exports to secure configuration evidence repository.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vm sourcetype IN (\"tenable:sc:compliance\",\"qualys:policy\") earliest=-7d\n| lookup pci_windows_cde_assets.csv fqdn AS host_fqdn OUTPUT cde_member\n| where cde_member=\"true\"\n| eval fail=if(match(compliance_status,\"(?i)failed|error|critical\"),1,0)\n| stats sum(fail) as failed_checks, count as total_checks by host_fqdn\n| eval fail_pct=if(total_checks>0, round(100*failed_checks/total_checks,2), null())\n| where fail_pct>5\n| sort - fail_pct\n```\n\nUnderstanding this SPL\n\n**Configuration Drift vs CIS Hardening Benchmark on Windows CDE Members (PCI DSS Req 2.2.1, 2.2.3)** — Aggregates configuration assessment findings against a documented hardening standard so you can show continuous alignment with industry-accepted secure configurations for in-scope Windows systems.\n\nDocumented **Data sources**: `index=vm` `sourcetype IN (\"tenable:sc:compliance\",\"qualys:policy\")` — `plugin_name`, `host_fqdn`, `compliance_status`. **App/TA** (typical add-on context): Splunk Add-on for Tenable (4060), Splunk Add-on for Qualys (2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vm.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vm, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cde_member=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **fail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host_fqdn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fail_pct>5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Configuration Drift vs CIS Hardening Benchmark on Windows CDE Members (PCI DSS Req 2.2.1, 2.2.3)** — Aggregates configuration assessment findings against a documented hardening standard so you can show continuous alignment with industry-accepted secure configurations for in-scope Windows systems.\n\nDocumented **Data sources**: `index=vm` `sourcetype IN (\"tenable:sc:compliance\",\"qualys:policy\")` — `plugin_name`, `host_fqdn`, `compliance_status`. **App/TA** (typical add-on context): Splunk Add-on for Tenable (4060), Splunk Add-on for Qualys (2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (fail_pct by host), heatmap (plugin families), single value (mean fail_pct across CDE).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We aggregates configuration assessment findings against a documented hardening standard so you can show continuous alignment with industry-accepted secure configurations for in-scope Windows systems. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - agg_value",
              "e": [
                "qualys",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "2.2.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 2.2.3 is enforced — Splunk UC-22.11.9: Configuration Drift vs CIS Hardening Benchmark on Windows CDE Members.",
                  "ea": "Saved search 'UC-22.11.9' running on index=vm sourcetype IN (\"tenable:sc:compliance\",\"qualys:policy\") — plugin_name, host_fqdn, compliance_status, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.10",
              "n": "Listening Services and Daemons on Linux Payment Middleware (PCI DSS Req 2.2.4, 2.2.5)",
              "c": "medium",
              "f": "intermediate",
              "v": "Surfaces unexpected listening ports reported by endpoint telemetry on payment middleware hosts, supporting the requirement to enable only necessary services and to document business justification for each.",
              "t": "Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Unix and Linux (833)",
              "d": "`index=edr` `sourcetype=\"crowdstrike:hosts\"` Network listening metadata if forwarded; or `index=linux` scripted `netstat`/`ss` HEC sourcetype `linux:network_listeners`",
              "q": "index=linux sourcetype=\"linux:network_listeners\" earliest=-24h\n| lookup pci_payment_middleware.csv host OUTPUT tier\n| where tier=\"middleware\"\n| lookup pci_allowed_listeners.csv proto_port OUTPUT allowed\n| where isnull(allowed)\n| stats latest(_time) as last_seen, values(process) as processes by host, proto_port\n| sort host, proto_port",
              "m": "(1) Schedule a scripted input to emit `proto_port` and owning `process` every hour. (2) Maintain allowed listener lookup per role. (3) Auto-open tickets for new tuples. (4) Review quarterly for stale rules.",
              "z": "Table (host, proto_port, processes), mosaic or heatmap (host x port), single value (new listeners per day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5994)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Unix and Linux (833).\n• Ensure the following data sources are available: `index=edr` `sourcetype=\"crowdstrike:hosts\"` Network listening metadata if forwarded; or `index=linux` scripted `netstat`/`ss` HEC sourcetype `linux:network_listeners`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Schedule a scripted input to emit `proto_port` and owning `process` every hour. (2) Maintain allowed listener lookup per role. (3) Auto-open tickets for new tuples. (4) Review quarterly for stale rules.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=linux sourcetype=\"linux:network_listeners\" earliest=-24h\n| lookup pci_payment_middleware.csv host OUTPUT tier\n| where tier=\"middleware\"\n| lookup pci_allowed_listeners.csv proto_port OUTPUT allowed\n| where isnull(allowed)\n| stats latest(_time) as last_seen, values(process) as processes by host, proto_port\n| sort host, proto_port\n```\n\nUnderstanding this SPL\n\n**Listening Services and Daemons on Linux Payment Middleware (PCI DSS Req 2.2.4, 2.2.5)** — Surfaces unexpected listening ports reported by endpoint telemetry on payment middleware hosts, supporting the requirement to enable only necessary services and to document business justification for each.\n\nDocumented **Data sources**: `index=edr` `sourcetype=\"crowdstrike:hosts\"` Network listening metadata if forwarded; or `index=linux` scripted `netstat`/`ss` HEC sourcetype `linux:network_listeners`. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: linux; **sourcetype**: linux:network_listeners. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=linux, sourcetype=\"linux:network_listeners\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where tier=\"middleware\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(allowed)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, proto_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, proto_port, processes), mosaic or heatmap (host x port), single value (new listeners per day).\n\nScripted input (example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as the following (the sourcetype, index and hourly interval here match the data source documented above; adjust to your environment):\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 3600\nsourcetype = linux:network_listeners\nindex = linux\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md
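\n\nIn addition to the generic pattern above, a hypothetical collector sketch for this specific use case could look like the following (it is not part of any add-on; it assumes the iproute2 `ss` utility is available and that its whitespace-separated columns match your distribution, so adjust the parsing as needed):\n\n```bash\n#!/usr/bin/env bash\n# Emit one key=value event per listening socket: protocol/port plus the owning process\nts=$(date +%s)\nss -H -tulpn 2>/dev/null | while read -r proto state recvq sendq laddr peer proc; do\n  echo \"timestamp=$ts proto_port=${proto}/${laddr##*:} process=${proc}\"\ndone\n```\n\nEach emitted line becomes one event in the `linux:network_listeners` sourcetype, so the `proto_port` and `process` fields line up with the lookup and stats stages in the SPL above.",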
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface unexpected listening ports reported by endpoint telemetry on payment middleware hosts, supporting the requirement to enable only necessary services and to document business justification for each. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "2.2.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 2.2.5 is enforced — Splunk UC-22.11.10: Listening Services and Daemons on Linux Payment Middleware.",
                  "ea": "Saved search 'UC-22.11.10' running on sourcetype crowdstrike:hosts and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.11",
              "n": "System Component Inventory Reconciliation — New In-Scope Hosts (PCI DSS Req 2.1.1, 2.1.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Flags hosts observed in CDE traffic or authentication logs that are missing from the authoritative inventory so the documented list of system components stays accurate between annual reviews.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=netfw` `sourcetype=\"pan:traffic\"`; `lookup pci_component_inventory.csv` (host, in_scope, owner)",
              "q": "index=netfw sourcetype=\"pan:traffic\" earliest=-7d\n| lookup pci_cde_subnets.csv subnet AS dest OUTPUT in_cde\n| where in_cde=\"true\"\n| stats dc(dest) as uniq_dest by dest\n| rename dest as ip\n| lookup pci_component_inventory.csv ip OUTPUT inventory_id\n| where isnull(inventory_id)\n| stats count by ip\n| sort - count",
              "m": "(1) Normalize internal IP inventory to one row per address. (2) Deduplicate NAT pools using a second lookup. (3) Route new IPs to asset management within five business days. (4) Snapshot weekly results for PCI evidence.",
              "z": "Table (unknown IPs), single value (count new vs prior week), map if subnets geolocated internally.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=netfw` `sourcetype=\"pan:traffic\"`; `lookup pci_component_inventory.csv` (host, in_scope, owner).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize internal IP inventory to one row per address. (2) Deduplicate NAT pools using a second lookup. (3) Route new IPs to asset management within five business days. (4) Snapshot weekly results for PCI evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype=\"pan:traffic\" earliest=-7d\n| lookup pci_cde_subnets.csv subnet AS dest OUTPUT in_cde\n| where in_cde=\"true\"\n| stats dc(dest) as uniq_dest by dest\n| rename dest as ip\n| lookup pci_component_inventory.csv ip OUTPUT inventory_id\n| where isnull(inventory_id)\n| stats count by ip\n| sort - count\n```\n\nUnderstanding this SPL\n\n**System Component Inventory Reconciliation — New In-Scope Hosts (PCI DSS Req 2.1.1, 2.1.2)** — Flags hosts observed in CDE traffic or authentication logs that are missing from the authoritative inventory so the documented list of system components stays accurate between annual reviews.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:traffic\"`; `lookup pci_component_inventory.csv` (host, in_scope, owner). **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, sourcetype=\"pan:traffic\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_cde=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(inventory_id)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(All_Traffic.dest) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**System Component Inventory Reconciliation — New In-Scope Hosts (PCI DSS Req 2.1.1, 2.1.2)** — Flags hosts observed in CDE traffic or authentication logs that are missing from the authoritative inventory so the documented list of system components stays accurate between annual reviews.\n\nDocumented **Data sources**: `index=netfw` `sourcetype=\"pan:traffic\"`; `lookup pci_component_inventory.csv` (host, in_scope, owner). **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unknown IPs), single value (count new vs prior week), map if subnets geolocated internally.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We flag hosts observed in CDE traffic or authentication logs that are missing from the authoritative inventory so the documented list of system components stays accurate between annual reviews. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t dc(All_Traffic.dest) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - agg_value",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "2.1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 2.1.2 is enforced — Splunk UC-22.11.11: System Component Inventory Reconciliation — New In-Scope Hosts.",
                  "ea": "Saved search 'UC-22.11.11' running on index=netfw sourcetype=\"pan:traffic\"; lookup pci_component_inventory.csv (host, in_scope, owner), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.12",
              "n": "Removal of Vendor Default SNMP and Community Strings (PCI DSS Req 2.2.2, 2.2.4)",
              "c": "high",
              "f": "intermediate",
              "v": "Surfaces SNMP or other management protocols still using well-known community names on devices that touch the CDE, supporting timely removal of insecure defaults from network infrastructure.",
              "t": "Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Unix and Linux (833)",
              "d": "`index=netops` `sourcetype IN (\"snmp:trap\",\"syslog:cisco\")` or `index=linux` `sourcetype=\"linux:auditd\"` for snmpd edits",
              "q": "index=netops sourcetype IN (\"snmp:trap\",\"syslog:cisco\") earliest=-24h\n| lookup pci_network_in_scope.csv device OUTPUT in_pci_scope\n| where in_pci_scope=\"true\"\n| where match(_raw,\"(?i)public|private|community\\\\s*=\\\\s*public\")\n| stats count by device, _raw\n| sort - count",
              "m": "(1) Forward SNMP-related syslog to a dedicated index. (2) Tune for encrypted SNMPv3 success noise. (3) Correlate with CMDB management IP. (4) Track remediation in change system.",
              "z": "Table (device, sample _raw), column chart (hits by device model if extracted).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Unix and Linux (833).\n• Ensure the following data sources are available: `index=netops` `sourcetype IN (\"snmp:trap\",\"syslog:cisco\")` or `index=linux` `sourcetype=\"linux:auditd\"` for snmpd edits.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Forward SNMP-related syslog to a dedicated index. (2) Tune for encrypted SNMPv3 success noise. (3) Correlate with CMDB management IP. (4) Track remediation in change system.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netops sourcetype IN (\"snmp:trap\",\"syslog:cisco\") earliest=-24h\n| lookup pci_network_in_scope.csv device OUTPUT in_pci_scope\n| where in_pci_scope=\"true\"\n| where match(_raw,\"(?i)public|private|community\\\\s*=\\\\s*public\")\n| stats count by device, _raw\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Removal of Vendor Default SNMP and Community Strings (PCI DSS Req 2.2.2, 2.2.4)** — Surfaces SNMP or other management protocols still using well-known community names on devices that touch the CDE, supporting timely removal of insecure defaults from network infrastructure.\n\nDocumented **Data sources**: `index=netops` `sourcetype IN (\"snmp:trap\",\"syslog:cisco\")` or `index=linux` `sourcetype=\"linux:auditd\"` for snmpd edits. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netops.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netops, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_pci_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(_raw,\"(?i)public|private|community\\\\s*=\\\\s*public\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by device, _raw** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, sample _raw), column chart (hits by device model if extracted).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface SNMP or other management protocols still using well-known community names on devices that touch the CDE, supporting timely removal of insecure defaults from network infrastructure. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "2.2.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 2.2.4 is enforced — Splunk UC-22.11.12: Removal of Vendor Default SNMP and Community Strings.",
                  "ea": "Saved search 'UC-22.11.12' running on index=netops sourcetype IN (\"snmp:trap\",\"syslog:cisco\") or index=linux sourcetype=\"linux:auditd\" for snmpd edits, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.13",
              "n": "Security Parameter Drift on In-Scope Routers from Gold Config Hash (PCI DSS Req 2.2.1, 2.2.3)",
              "c": "medium",
              "f": "advanced",
              "v": "Compares periodic configuration snapshots to an approved gold hash per device so unintended drift in security-relevant parameters is detected between formal audits.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=compliance` `sourcetype=\"compliance:network_config_digest\"` — `device`, `config_hash`, `baseline_hash`",
              "q": "index=compliance sourcetype=\"compliance:network_config_digest\" earliest=-48h\n| lookup pci_perimeter_devices.csv device OUTPUT pci_tier\n| where pci_tier IN (\"cde_border\",\"dmz\")\n| eval drift=if(config_hash!=baseline_hash,1,0)\n| where drift=1\n| stats latest(_time) as last_drift by device, config_hash, baseline_hash\n| sort device",
              "m": "(1) Nightly job pushes SHA-256 of normalized running-config. (2) Store `baseline_hash` only after CAB approval. (3) Auto-create incident on drift. (4) Retain hash history 13 months for PCI logging retention alignment.",
              "z": "Table (drifted devices), sparkline if hashes trend over time in summary index, single value (drift count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=compliance` `sourcetype=\"compliance:network_config_digest\"` — `device`, `config_hash`, `baseline_hash`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Nightly job pushes SHA-256 of normalized running-config. (2) Store `baseline_hash` only after CAB approval. (3) Auto-create incident on drift. (4) Retain hash history 13 months for PCI logging retention alignment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=compliance sourcetype=\"compliance:network_config_digest\" earliest=-48h\n| lookup pci_perimeter_devices.csv device OUTPUT pci_tier\n| where pci_tier IN (\"cde_border\",\"dmz\")\n| eval drift=if(config_hash!=baseline_hash,1,0)\n| where drift=1\n| stats latest(_time) as last_drift by device, config_hash, baseline_hash\n| sort device\n```\n\nUnderstanding this SPL\n\n**Security Parameter Drift on In-Scope Routers from Gold Config Hash (PCI DSS Req 2.2.1, 2.2.3)** — Compares periodic configuration snapshots to an approved gold hash per device so unintended drift in security-relevant parameters is detected between formal audits.\n\nDocumented **Data sources**: `index=compliance` `sourcetype=\"compliance:network_config_digest\"` — `device`, `config_hash`, `baseline_hash`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: compliance; **sourcetype**: compliance:network_config_digest. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=compliance, sourcetype=\"compliance:network_config_digest\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where pci_tier IN (\"cde_border\",\"dmz\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **drift** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drift=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by device, config_hash, baseline_hash** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security Parameter Drift on In-Scope Routers from Gold Config Hash (PCI DSS Req 2.2.1, 2.2.3)** — Compares periodic configuration snapshots to an approved gold hash per device so unintended drift in security-relevant parameters is detected between formal audits.\n\nDocumented **Data sources**: `index=compliance` `sourcetype=\"compliance:network_config_digest\"` — `device`, `config_hash`, `baseline_hash`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (drifted devices), sparkline if hashes trend over time in summary index, single value (drift count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare periodic configuration snapshots to an approved gold hash per device so unintended drift in security-relevant parameters is detected between formal audits. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "2.2.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 2.2.3 is enforced — Splunk UC-22.11.13: Security Parameter Drift on In-Scope Routers from Gold Config Hash.",
                  "ea": "Saved search 'UC-22.11.13' running on index=compliance sourcetype=\"compliance:network_config_digest\" — device, config_hash, baseline_hash, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.14",
              "n": "Primary Account Number Pattern Discovery in Application Indexes (PCI DSS Req 3.3.1, 3.4.1)",
              "c": "critical",
              "f": "expert",
              "v": "Identifies log fields or raw text that resemble payment card numbers in non-tokenization indexes so PAN can be purged or masked before it violates protection of stored account data requirements.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=app` OR `index=web` high-volume text; restrict to CDE-related `sourcetype` list",
              "q": "(index=app OR index=web) sourcetype IN (\"payment:api:json\",\"java:log4j\",\"nginx:access\") earliest=-24h\n| regex _raw=\"\\\\b(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|3[47][0-9]{13}|6(?:011|5[0-9]{2})[0-9]{12})\\\\b\"\n| eval pan_like=replace(_raw,\".*?(\\\\b(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14})\\\\b).*\",\"\\\\1\")\n| eval pan_redacted=replace(pan_like,\"(\\\\d{6})\\\\d+(\\\\d{4})\",\"\\\\1******\\\\2\")\n| stats count by index, sourcetype, host\n| sort - count",
              "m": "(1) Run in a restricted role; mask outputs in dashboards. (2) Validate with Luhn in a external script if false positives are high. (3) Route hits to DLP response. (4) Never store full PAN in summary indexes—store counts only. (5) Document sampling methodology for QSA.",
              "z": "Bar (by sourcetype), table (host counts only), single value (total matches week over week).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=app` OR `index=web` high-volume text; restrict to CDE-related `sourcetype` list.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Run in a restricted role; mask outputs in dashboards. (2) Validate with Luhn in a external script if false positives are high. (3) Route hits to DLP response. (4) Never store full PAN in summary indexes—store counts only. (5) Document sampling methodology for QSA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=app OR index=web) sourcetype IN (\"payment:api:json\",\"java:log4j\",\"nginx:access\") earliest=-24h\n| regex _raw=\"\\\\b(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|3[47][0-9]{13}|6(?:011|5[0-9]{2})[0-9]{12})\\\\b\"\n| eval pan_like=replace(_raw,\".*?(\\\\b(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14})\\\\b).*\",\"\\\\1\")\n| eval pan_redacted=replace(pan_like,\"(\\\\d{6})\\\\d+(\\\\d{4})\",\"\\\\1******\\\\2\")\n| stats count by index, sourcetype, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Primary Account Number Pattern Discovery in Application Indexes (PCI DSS Req 3.3.1, 3.4.1)** — Identifies log fields or raw text that resemble payment card numbers in non-tokenization indexes so PAN can be purged or masked before it violates protection of stored account data requirements.\n\nDocumented **Data sources**: `index=app` OR `index=web` high-volume text; restrict to CDE-related `sourcetype` list. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app, web.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, index=web, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters rows matching a pattern with `regex`.\n• `eval` defines or adjusts **pan_like** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **pan_redacted** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by index, sourcetype, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Primary Account Number Pattern Discovery in Application Indexes (PCI DSS Req 3.3.1, 3.4.1)** — Identifies log fields or raw text that resemble payment card numbers in non-tokenization indexes so PAN can be purged or masked before it violates protection of stored account data requirements.\n\nDocumented **Data sources**: `index=app` OR `index=web` high-volume text; restrict to CDE-related `sourcetype` list. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar (by sourcetype), table (host counts only), single value (total matches week over week).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We identify log fields or raw text that resemble payment card numbers in non-tokenization indexes so PAN can be purged or masked before it violates protection of stored account data requirements. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "3.4.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 3.4.1 is enforced — Splunk UC-22.11.14: Primary Account Number Pattern Discovery in Application Indexes.",
                  "ea": "Saved search 'UC-22.11.14' running on index=app OR index=web high-volume text; restrict to CDE-related sourcetype list, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.15",
              "n": "Key Management Operations from HSM and KMS Audit Trails (PCI DSS Req 3.5.1, 3.6.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Centralizes HSM partition and cloud KMS administrative events so cryptographic key custodians can prove dual-control and logging for keys that protect stored account data.",
              "t": "Splunk Add-on for AWS (1876), custom HSM syslog parser",
              "d": "`index=aws` `sourcetype=\"aws:cloudtrail\"` (`eventName` CreateKey, DisableKey, ScheduleKeyDeletion); `index=hsm` `sourcetype=\"thales:luna:audit\"`",
              "q": "(index=aws sourcetype=\"aws:cloudtrail\" eventSource=\"kms.amazonaws.com\" earliest=-24h)\n OR (index=hsm sourcetype=\"thales:luna:audit\" earliest=-24h)\n| eval op=coalesce(eventName, hsm_command, operation)\n| where match(op,\"(?i)delete|disable|import|unwrap|clone|partition\")\n| stats count by user, op, src_ip, host\n| sort - count",
              "m": "(1) Map IAM user/role to human custodian IDs. (2) Require ticket ID in CloudTrail session context where possible. (3) Alert on `ScheduleKeyDeletion` without CAB record. (4) Retain 12+ months in compliant storage tier.",
              "z": "Timeline (operations), table (custodian, op), pie (op mix).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (1876), custom HSM syslog parser.\n• Ensure the following data sources are available: `index=aws` `sourcetype=\"aws:cloudtrail\"` (`eventName` CreateKey, DisableKey, ScheduleKeyDeletion); `index=hsm` `sourcetype=\"thales:luna:audit\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map IAM user/role to human custodian IDs. (2) Require ticket ID in CloudTrail session context where possible. (3) Alert on `ScheduleKeyDeletion` without CAB record. (4) Retain 12+ months in compliant storage tier.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=aws sourcetype=\"aws:cloudtrail\" eventSource=\"kms.amazonaws.com\" earliest=-24h)\n OR (index=hsm sourcetype=\"thales:luna:audit\" earliest=-24h)\n| eval op=coalesce(eventName, hsm_command, operation)\n| where match(op,\"(?i)delete|disable|import|unwrap|clone|partition\")\n| stats count by user, op, src_ip, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Key Management Operations from HSM and KMS Audit Trails (PCI DSS Req 3.5.1, 3.6.1)** — Centralizes HSM partition and cloud KMS administrative events so cryptographic key custodians can prove dual-control and logging for keys that protect stored account data.\n\nDocumented **Data sources**: `index=aws` `sourcetype=\"aws:cloudtrail\"` (`eventName` CreateKey, DisableKey, ScheduleKeyDeletion); `index=hsm` `sourcetype=\"thales:luna:audit\"`. **App/TA** (typical add-on context): Splunk Add-on for AWS (1876), custom HSM syslog parser. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, hsm; **sourcetype**: aws:cloudtrail, thales:luna:audit. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=hsm, sourcetype=\"aws:cloudtrail\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **op** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(op,\"(?i)delete|disable|import|unwrap|clone|partition\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, op, src_ip, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src, Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Key Management Operations from HSM and KMS Audit Trails (PCI DSS Req 3.5.1, 3.6.1)** — Centralizes HSM partition and cloud KMS administrative events so cryptographic key custodians can prove dual-control and logging for keys that protect stored account data.\n\nDocumented **Data sources**: `index=aws` `sourcetype=\"aws:cloudtrail\"` (`eventName` CreateKey, DisableKey, ScheduleKeyDeletion); `index=hsm` `sourcetype=\"thales:luna:audit\"`. **App/TA** (typical add-on context): Splunk Add-on for AWS (1876), custom HSM syslog parser. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (operations), table (custodian, op), pie (op mix).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We centralizes HSM partition and cloud KMS administrative events so cryptographic key custodians can prove dual-control and logging for keys that protect stored account data. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src, Authentication.dest | sort - count",
              "e": [
                "aws",
                "syslog"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "3.6.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 3.6.1 is enforced — Splunk UC-22.11.15: Key Management Operations from HSM and KMS Audit Trails.",
                  "ea": "Saved search 'UC-22.11.15' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.16",
              "n": "Data Retention Job Failures for Cardholder Data Stores (PCI DSS Req 3.2.1, 3.3.1)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks scheduled purge and truncation jobs for databases that held PAN so retention limits are enforced operationally, not only on paper.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833)",
              "d": "`index=app` `sourcetype=\"sqlagent:job\"` or `index=linux` cron logs, `sourcetype=\"batch:retention_job\"`",
              "q": "index=app sourcetype=\"batch:retention_job\" earliest=-7d\n| where match(job_name,\"(?i)pci|pan|retention|purge|truncate\")\n| eval success=if(match(status,\"(?i)success|completed\"),1,0)\n| stats latest(_time) as last_run, sum(success) as ok_runs, count as total_runs by host, job_name\n| eval ok_rate=if(total_runs>0, round(100*ok_runs/total_runs,2), null())\n| where ok_rate<100 OR total_runs=0\n| sort job_name",
              "m": "(1) Instrument retention scripts to emit JSON lines with `job_name` and `status`. (2) Align job list with data retention standard table. (3) Page on two consecutive failures. (4) Attach weekly success rate to compliance dashboard.",
              "z": "Single value (aggregate ok_rate), table (failing jobs), column chart (runs per day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833).\n• Ensure the following data sources are available: `index=app` `sourcetype=\"sqlagent:job\"` or `index=linux` cron logs, `sourcetype=\"batch:retention_job\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument retention scripts to emit JSON lines with `job_name` and `status`. (2) Align job list with data retention standard table. (3) Page on two consecutive failures. (4) Attach weekly success rate to compliance dashboard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"batch:retention_job\" earliest=-7d\n| where match(job_name,\"(?i)pci|pan|retention|purge|truncate\")\n| eval success=if(match(status,\"(?i)success|completed\"),1,0)\n| stats latest(_time) as last_run, sum(success) as ok_runs, count as total_runs by host, job_name\n| eval ok_rate=if(total_runs>0, round(100*ok_runs/total_runs,2), null())\n| where ok_rate<100 OR total_runs=0\n| sort job_name\n```\n\nUnderstanding this SPL\n\n**Data Retention Job Failures for Cardholder Data Stores (PCI DSS Req 3.2.1, 3.3.1)** — Tracks scheduled purge and truncation jobs for databases that held PAN so retention limits are enforced operationally, not only on paper.\n\nDocumented **Data sources**: `index=app` `sourcetype=\"sqlagent:job\"` or `index=linux` cron logs, `sourcetype=\"batch:retention_job\"`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: batch:retention_job. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"batch:retention_job\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(job_name,\"(?i)pci|pan|retention|purge|truncate\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **success** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, job_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ok_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ok_rate<100 OR total_runs=0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Single value (aggregate ok_rate), table (failing jobs), column chart (runs per day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track scheduled purge and truncation jobs for databases that held PAN so retention limits are enforced operationally, not only on paper. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR",
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "3.3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 3.3.1 is enforced — Splunk UC-22.11.16: Data Retention Job Failures for Cardholder Data Stores.",
                  "ea": "Saved search 'UC-22.11.16' running on index=app sourcetype=\"sqlagent:job\" or index=linux cron logs, sourcetype=\"batch:retention_job\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.17",
              "n": "Cryptographic Erasure Verification After Decommission (PCI DSS Req 3.2.1, 3.5.1)",
              "c": "high",
              "f": "advanced",
              "v": "Correlates disk wipe or crypto-shred completion events with asset retirement tickets so media and VMs that held account data cannot return to production without verified erasure evidence.",
              "t": "Splunk Add-on for AWS (1876), Splunk Add-on for Windows (742)",
              "d": "`index=sec` `sourcetype=\"disk:wipe:log\"`; `index=aws` `eventName` DeleteVolume; `index=itsm` retirement tickets",
              "q": "index=itsm sourcetype=\"snow:cmdb_ci\" earliest=-30d\n| where match(short_description,\"(?i)decommission|retire\") AND match(short_description,\"(?i)CDE|PCI|payment\")\n| rename sys_id as asset_id, number as chg_ticket\n| join type=left max=0 asset_id [\n    search index=sec sourcetype=\"disk:wipe:log\" earliest=-30d\n    | stats max(wipe_status) as wipe_ok by asset_id\n  ]\n| where isnull(wipe_ok) OR wipe_ok!=\"VERIFIED\"\n| table chg_ticket, asset_id, short_description, wipe_ok",
              "m": "(1) Ensure wipe tooling emits `asset_id` matching CMDB. (2) Define `wipe_status` enumeration with `VERIFIED`. (3) Escalate open rows weekly. (4) Store closure export for PCI media controls.",
              "z": "Table (exceptions), gauge (% verified), timeline (ticket opened vs wipe event).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (1876), Splunk Add-on for Windows (742).\n• Ensure the following data sources are available: `index=sec` `sourcetype=\"disk:wipe:log\"`; `index=aws` `eventName` DeleteVolume; `index=itsm` retirement tickets.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure wipe tooling emits `asset_id` matching CMDB. (2) Define `wipe_status` enumeration with `VERIFIED`. (3) Escalate open rows weekly. (4) Store closure export for PCI media controls.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:cmdb_ci\" earliest=-30d\n| where match(short_description,\"(?i)decommission|retire\") AND match(short_description,\"(?i)CDE|PCI|payment\")\n| rename sys_id as asset_id, number as chg_ticket\n| join type=left max=0 asset_id [\n    search index=sec sourcetype=\"disk:wipe:log\" earliest=-30d\n    | stats max(wipe_status) as wipe_ok by asset_id\n  ]\n| where isnull(wipe_ok) OR wipe_ok!=\"VERIFIED\"\n| table chg_ticket, asset_id, short_description, wipe_ok\n```\n\nUnderstanding this SPL\n\n**Cryptographic Erasure Verification After Decommission (PCI DSS Req 3.2.1, 3.5.1)** — Correlates disk wipe or crypto-shred completion events with asset retirement tickets so media and VMs that held account data cannot return to production without verified erasure evidence.\n\nDocumented **Data sources**: `index=sec` `sourcetype=\"disk:wipe:log\"`; `index=aws` `eventName` DeleteVolume; `index=itsm` retirement tickets. **App/TA** (typical add-on context): Splunk Add-on for AWS (1876), Splunk Add-on for Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:cmdb_ci. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:cmdb_ci\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(short_description,\"(?i)decommission|retire\") AND match(short_description,\"(?i)CDE|PCI|payment\")` — typically the threshold or rule expression for this monitoring goal.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(wipe_ok) OR wipe_ok!=\"VERIFIED\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cryptographic Erasure Verification After Decommission (PCI DSS Req 3.2.1, 3.5.1)**): table chg_ticket, asset_id, short_description, wipe_ok\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (exceptions), gauge (% verified), timeline (ticket opened vs wipe event).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We correlate disk wipe or crypto-shred completion events with asset retirement tickets so media and VMs that held account data cannot return to production without verified erasure evidence. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "3.5.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 3.5.1 is enforced — Splunk UC-22.11.17: Cryptographic Erasure Verification After Decommission.",
                  "ea": "Saved search 'UC-22.11.17' running on index=sec sourcetype=\"disk:wipe:log\"; index=aws eventName DeleteVolume; index=itsm retirement tickets, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.18",
              "n": "PAN Masking Validation in Point-of-Sale and Web Receipt Logs (PCI DSS Req 3.3.3, 3.4.1)",
              "c": "medium",
              "f": "intermediate",
              "v": "Confirms that customer-visible channels emit only first six and last four digits, catching accidental logging of full PAN in receipt or audit trails.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=pos` `sourcetype=\"pos:receipt\"`; `index=web` `sourcetype=\"access_combined\"`",
              "q": "(index=pos sourcetype=\"pos:receipt\") OR (index=web sourcetype=\"access_combined\") earliest=-24h\n| where match(_raw,\"\\\\b[0-9]{13,19}\\\\b\")\n| where NOT match(_raw,\"\\\\b[0-9]{6}[\\\\*Xx-]{3,}[0-9]{4}\\\\b\")\n| stats count by index, sourcetype, host, uri\n| sort - count",
              "m": "(1) Tune URI exclusions for internal test environments. (2) Pair with secure SDLC defect workflow. (3) Redact `_raw` in dashboards. (4) Monthly compliance review of zero vs non-zero results.",
              "z": "Table (top URIs), single value (violations), trendline week over week.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=pos` `sourcetype=\"pos:receipt\"`; `index=web` `sourcetype=\"access_combined\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune URI exclusions for internal test environments. (2) Pair with secure SDLC defect workflow. (3) Redact `_raw` in dashboards. (4) Monthly compliance review of zero vs non-zero results.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=pos sourcetype=\"pos:receipt\") OR (index=web sourcetype=\"access_combined\") earliest=-24h\n| where match(_raw,\"\\\\b[0-9]{13,19}\\\\b\")\n| where NOT match(_raw,\"\\\\b[0-9]{6}[\\\\*Xx-]{3,}[0-9]{4}\\\\b\")\n| stats count by index, sourcetype, host, uri\n| sort - count\n```\n\nUnderstanding this SPL\n\n**PAN Masking Validation in Point-of-Sale and Web Receipt Logs (PCI DSS Req 3.3.3, 3.4.1)** — Confirms that customer-visible channels emit only first six and last four digits, catching accidental logging of full PAN in receipt or audit trails.\n\nDocumented **Data sources**: `index=pos` `sourcetype=\"pos:receipt\"`; `index=web` `sourcetype=\"access_combined\"`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pos, web; **sourcetype**: pos:receipt, access_combined. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pos, index=web, sourcetype=\"pos:receipt\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(_raw,\"\\\\b[0-9]{13,19}\\\\b\")` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where NOT match(_raw,\"\\\\b[0-9]{6}[\\\\*Xx-]{3,}[0-9]{4}\\\\b\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by index, sourcetype, host, uri** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**PAN Masking Validation in Point-of-Sale and Web Receipt Logs (PCI DSS Req 3.3.3, 3.4.1)** — Confirms that customer-visible channels emit only first six and last four digits, catching accidental logging of full PAN in receipt or audit trails.\n\nDocumented **Data sources**: `index=pos` `sourcetype=\"pos:receipt\"`; `index=web` `sourcetype=\"access_combined\"`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top URIs), single value (violations), trendline week over week.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We confirms that customer-visible channels emit only first six and last four digits, catching accidental logging of full PAN in receipt or audit trails. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "3.4.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 3.4.1 is enforced — Splunk UC-22.11.18: PAN Masking Validation in Point-of-Sale and Web Receipt Logs.",
                  "ea": "Saved search 'UC-22.11.18' running on index=pos sourcetype=\"pos:receipt\"; index=web sourcetype=\"access_combined\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v3.2.1",
                  "cl": "3.4",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 3.4 (PAN rendering unreadable) is enforced — Splunk UC-22.11.18: PAN Masking Validation in Point-of-Sale and Web Receipt Logs.",
                  "ea": "Saved search 'UC-22.11.18' running on index=pos sourcetype=\"pos:receipt\"; index=web sourcetype=\"access_combined\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.19",
              "n": "Sensitive Authentication Data (SAD) in Auth Broker Logs (PCI DSS Req 3.3.1, 3.2.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Detects track data, CVC, or PIN block patterns in authentication or tokenization service logs so SAD is not retained after authorization, per stored data prohibitions.",
              "t": "Splunk Enterprise Security (263), Splunk Stream (1809)",
              "d": "`index=app` `sourcetype=\"oauth:gateway:json\"`; `index=stream` TLS payloads only where policy allows mirroring metadata",
              "q": "index=app sourcetype=\"oauth:gateway:json\" earliest=-24h\n| where match(_raw,\"(?i)track[12]|;\\\\d{15,}|%B\\\\d{15,}|pin\\\\s*block|cvc2?\\\\s*[:=]\\\\s*\\\\d{3,4}\")\n| rex max_match=0 field=_raw \"(?i)(?<redacted>track[12][^\\\\n]{0,40})\"\n| stats count by host, endpoint, ip\n| sort - count",
              "m": "(1) Restrict search to security admin role. (2) Integrate with DLP blocking on the log path. (3) Mandatory incident for any positive hit. (4) Legal/privacy review of regex scope annually.",
              "z": "Table (aggregated only), single value (incident count), map of source IP if needed.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Stream](https://splunkbase.splunk.com/app/1809), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Stream (1809).\n• Ensure the following data sources are available: `index=app` `sourcetype=\"oauth:gateway:json\"`; `index=stream` TLS payloads only where policy allows mirroring metadata.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Restrict search to security admin role. (2) Integrate with DLP blocking on the log path. (3) Mandatory incident for any positive hit. (4) Legal/privacy review of regex scope annually.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"oauth:gateway:json\" earliest=-24h\n| where match(_raw,\"(?i)track[12]|;\\\\d{15,}|%B\\\\d{15,}|pin\\\\s*block|cvc2?\\\\s*[:=]\\\\s*\\\\d{3,4}\")\n| rex max_match=0 field=_raw \"(?i)(?<redacted>track[12][^\\\\n]{0,40})\"\n| stats count by host, endpoint, ip\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Sensitive Authentication Data (SAD) in Auth Broker Logs (PCI DSS Req 3.3.1, 3.2.1)** — Detects track data, CVC, or PIN block patterns in authentication or tokenization service logs so SAD is not retained after authorization, per stored data prohibitions.\n\nDocumented **Data sources**: `index=app` `sourcetype=\"oauth:gateway:json\"`; `index=stream` TLS payloads only where policy allows mirroring metadata. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Stream (1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: oauth:gateway:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"oauth:gateway:json\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(_raw,\"(?i)track[12]|;\\\\d{15,}|%B\\\\d{15,}|pin\\\\s*block|cvc2?\\\\s*[:=]\\\\s*\\\\d{3,4}\")` — typically the threshold or rule expression for this monitoring goal.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host, endpoint, ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Sensitive Authentication Data (SAD) in Auth Broker Logs (PCI DSS Req 3.3.1, 3.2.1)** — Detects track data, CVC, or PIN block patterns in authentication or tokenization service logs so SAD is not retained after authorization, per stored data prohibitions.\n\nDocumented **Data sources**: `index=app` `sourcetype=\"oauth:gateway:json\"`; `index=stream` TLS payloads only where policy allows mirroring metadata. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Stream (1809). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (aggregated only), single value (incident count), map of source IP if needed.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect track data, CVC, or PIN block patterns in authentication or tokenization service logs so SAD is not retained after authorization, per stored data prohibitions. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [
                "stream"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "3.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 3.2.1 is enforced — Splunk UC-22.11.19: Sensitive Authentication Data (SAD) in Auth Broker Logs.",
                  "ea": "Saved search 'UC-22.11.19' running on index=app sourcetype=\"oauth:gateway:json\"; index=stream TLS payloads only where policy allows mirroring metadata, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.20",
              "n": "Hash and Truncation Method Changes on Tokenization Database (PCI DSS Req 3.3.1, 3.5.1)",
              "c": "high",
              "f": "advanced",
              "v": "Alerts when DDL or configuration changes alter hashing algorithms or truncation rules applied to account data, preserving integrity of non-reversible rendering approaches approved by the assessor.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk DB Connect (if DB audit ingested)",
              "d": "`index=db` `sourcetype=\"mssql:audit\"` or `oracle:audit` — `object_name`, `statement`, `action`",
              "q": "index=db sourcetype IN (\"mssql:audit\",\"oracle:audit\") earliest=-7d\n| where match(statement,\"(?i)alter\\\\s+table|create\\\\s+index|hashbytes|truncate\\\\s+column|token\")\n| lookup pci_token_db_objects.csv object_name OUTPUT in_token_vault\n| where in_token_vault=\"true\"\n| stats earliest(_time) as first_seen, latest(_time) as last_seen, values(statement) as stmts by object_name, db_user\n| sort last_seen",
              "m": "(1) Enable fine-grained audit on token vault schema. (2) Map `db_user` to break-glass accounts only. (3) Require pre-approved DDL ticket in correlation. (4) Retain statements hashed if raw SQL is sensitive.",
              "z": "Timeline (DDL events), table (object_name, db_user), heatmap (hour of day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk DB Connect (if DB audit ingested).\n• Ensure the following data sources are available: `index=db` `sourcetype=\"mssql:audit\"` or `oracle:audit` — `object_name`, `statement`, `action`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable fine-grained audit on token vault schema. (2) Map `db_user` to break-glass accounts only. (3) Require pre-approved DDL ticket in correlation. (4) Retain statements hashed if raw SQL is sensitive.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=db sourcetype IN (\"mssql:audit\",\"oracle:audit\") earliest=-7d\n| where match(statement,\"(?i)alter\\\\s+table|create\\\\s+index|hashbytes|truncate\\\\s+column|token\")\n| lookup pci_token_db_objects.csv object_name OUTPUT in_token_vault\n| where in_token_vault=\"true\"\n| stats earliest(_time) as first_seen, latest(_time) as last_seen, values(statement) as stmts by object_name, db_user\n| sort last_seen\n```\n\nUnderstanding this SPL\n\n**Hash and Truncation Method Changes on Tokenization Database (PCI DSS Req 3.3.1, 3.5.1)** — Alerts when DDL or configuration changes alter hashing algorithms or truncation rules applied to account data, preserving integrity of non-reversible rendering approaches approved by the assessor.\n\nDocumented **Data sources**: `index=db` `sourcetype=\"mssql:audit\"` or `oracle:audit` — `object_name`, `statement`, `action`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk DB Connect (if DB audit ingested). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: db.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=db, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(statement,\"(?i)alter\\\\s+table|create\\\\s+index|hashbytes|truncate\\\\s+column|token\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_token_vault=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by object_name, db_user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (DDL events), table (object_name, db_user), heatmap (hour of day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We alerts when DDL or configuration changes alter hashing algorithms or truncation rules applied to account data, preserving integrity of non-reversible rendering approaches approved by the assessor. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "3.5.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 3.5.1 is enforced — Splunk UC-22.11.20: Hash and Truncation Method Changes on Tokenization Database.",
                  "ea": "Saved search 'UC-22.11.20' running on index=db sourcetype=\"mssql:audit\" or oracle:audit — object_name, statement, action, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.21",
              "n": "Cryptographic Key Rotation and Custodian Acknowledgement Trail (PCI DSS Req 3.6.1, 3.6.4)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks scheduled key rotation events and custodian acknowledgements so the organization can demonstrate keys are replaced on policy cadence and custodianship is documented.",
              "t": "Splunk Add-on for AWS (1876), Splunk Add-on for Unix and Linux (833)",
              "d": "`index=aws` `sourcetype=\"aws:cloudtrail\"` KMS `RotateKey`; `index=sec` `sourcetype=\"vault:audit\"`; `index=itsm` CAB tasks",
              "q": "(index=aws sourcetype=\"aws:cloudtrail\" eventName=\"RotateKey\" earliest=-365d)\n OR (index=sec sourcetype=\"vault:audit\" operation=\"rotate\" earliest=-365d)\n| eval key_id=coalesce(requestParameters.keyId, key_id)\n| bin _time span=90d\n| stats count as rotations by _time, key_id\n| lookup pci_key_rotation_policy.csv key_id OUTPUT expected_days\n| eval expected_rotations=365/expected_days\n| where rotations<1\n| table _time, key_id, rotations, expected_rotations",
              "m": "(1) Populate `expected_days` per key from the cryptographic procedures. (2) Align `span` to rotation policy (annual vs quarterly). (3) Append manual HSM rotation logs via HEC with same `key_id`. (4) Archive quarterly attestation PDFs externally.",
              "z": "Line chart (rotations per bucket), table (keys under-rotated), single value (count of keys needing action).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (1876), Splunk Add-on for Unix and Linux (833).\n• Ensure the following data sources are available: `index=aws` `sourcetype=\"aws:cloudtrail\"` KMS `RotateKey`; `index=sec` `sourcetype=\"vault:audit\"`; `index=itsm` CAB tasks.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate `expected_days` per key from the cryptographic procedures. (2) Align `span` to rotation policy (annual vs quarterly). (3) Append manual HSM rotation logs via HEC with same `key_id`. (4) Archive quarterly attestation PDFs externally.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=aws sourcetype=\"aws:cloudtrail\" eventName=\"RotateKey\" earliest=-365d)\n OR (index=sec sourcetype=\"vault:audit\" operation=\"rotate\" earliest=-365d)\n| eval key_id=coalesce(requestParameters.keyId, key_id)\n| bin _time span=90d\n| stats count as rotations by _time, key_id\n| lookup pci_key_rotation_policy.csv key_id OUTPUT expected_days\n| eval expected_rotations=365/expected_days\n| where rotations<1\n| table _time, key_id, rotations, expected_rotations\n```\n\nUnderstanding this SPL\n\n**Cryptographic Key Rotation and Custodian Acknowledgement Trail (PCI DSS Req 3.6.1, 3.6.4)** — Tracks scheduled key rotation events and custodian acknowledgements so the organization can demonstrate keys are replaced on policy cadence and custodianship is documented.\n\nDocumented **Data sources**: `index=aws` `sourcetype=\"aws:cloudtrail\"` KMS `RotateKey`; `index=sec` `sourcetype=\"vault:audit\"`; `index=itsm` CAB tasks. **App/TA** (typical add-on context): Splunk Add-on for AWS (1876), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws, sec; **sourcetype**: aws:cloudtrail, vault:audit. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, index=sec, sourcetype=\"aws:cloudtrail\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **key_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, key_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **expected_rotations** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where rotations<1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cryptographic Key Rotation and Custodian Acknowledgement Trail (PCI DSS Req 3.6.1, 3.6.4)**): table _time, key_id, rotations, expected_rotations\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (rotations per bucket), table (keys under-rotated), single value (count of keys needing action).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track scheduled key rotation events and custodian acknowledgements so the organization can demonstrate keys are replaced on policy cadence and custodianship is documented. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "3.6.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 3.6.4 is enforced — Splunk UC-22.11.21: Cryptographic Key Rotation and Custodian Acknowledgement Trail.",
                  "ea": "Saved search 'UC-22.11.21' running on index=aws sourcetype=\"aws:cloudtrail\" KMS RotateKey; index=sec sourcetype=\"vault:audit\"; index=itsm CAB tasks, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.22",
              "n": "TLS 1.2 Minimum Version Violations on Payment APIs (PCI DSS Req 4.1.1, 4.2.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "Surfaces handshakes negotiated below TLS 1.2 on in-scope listeners so weak transmission protections are remediated before external vulnerability scanning deadlines.",
              "t": "Splunk Stream (1809), Splunk Add-on for AWS (1876)",
              "d": "`index=stream` `sourcetype=\"stream:tls\"`; `index=aws` `sourcetype=\"aws:elb:accesslogs\"` with `ssl_protocol`",
              "q": "(index=stream sourcetype=\"stream:tls\" earliest=-24h)\n OR (index=aws sourcetype=\"aws:elb:accesslogs\" earliest=-24h)\n| eval tls_ver=coalesce(tls_version, ssl_protocol)\n| lookup pci_payment_endpoints.csv dest_ip OUTPUT in_scope\n| where in_scope=\"true\"\n| where tls_ver IN (\"TLSv1\",\"TLSv1.0\",\"TLSv1.1\",\"SSLv3\") OR match(tls_ver,\"(?i)^ssl\")\n| stats count by tls_ver, dest_ip, src_ip, server_name\n| sort - count",
              "m": "(1) Normalize TLS version strings per data source. (2) Map `dest_ip` to payment VLANs. (3) Exclude lab subnets via lookup. (4) Feed to vulnerability management for ASV correlation.",
              "z": "Bar chart (by tls_ver), table (top client IPs), geomap optional.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Stream](https://splunkbase.splunk.com/app/1809), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Certificates](https://docs.splunk.com/Documentation/CIM/latest/User/Certificates)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Stream (1809), Splunk Add-on for AWS (1876).\n• Ensure the following data sources are available: `index=stream` `sourcetype=\"stream:tls\"`; `index=aws` `sourcetype=\"aws:elb:accesslogs\"` with `ssl_protocol`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize TLS version strings per data source. (2) Map `dest_ip` to payment VLANs. (3) Exclude lab subnets via lookup. (4) Feed to vulnerability management for ASV correlation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=stream sourcetype=\"stream:tls\" earliest=-24h)\n OR (index=aws sourcetype=\"aws:elb:accesslogs\" earliest=-24h)\n| eval tls_ver=coalesce(tls_version, ssl_protocol)\n| lookup pci_payment_endpoints.csv dest_ip OUTPUT in_scope\n| where in_scope=\"true\"\n| where tls_ver IN (\"TLSv1\",\"TLSv1.0\",\"TLSv1.1\",\"SSLv3\") OR match(tls_ver,\"(?i)^ssl\")\n| stats count by tls_ver, dest_ip, src_ip, server_name\n| sort - count\n```\n\nUnderstanding this SPL\n\n**TLS 1.2 Minimum Version Violations on Payment APIs (PCI DSS Req 4.1.1, 4.2.1)** — Surfaces handshakes negotiated below TLS 1.2 on in-scope listeners so weak transmission protections are remediated before external vulnerability scanning deadlines.\n\nDocumented **Data sources**: `index=stream` `sourcetype=\"stream:tls\"`; `index=aws` `sourcetype=\"aws:elb:accesslogs\"` with `ssl_protocol`. **App/TA** (typical add-on context): Splunk Stream (1809), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: stream, aws; **sourcetype**: stream:tls, aws:elb:accesslogs. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=stream, index=aws, sourcetype=\"stream:tls\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **tls_ver** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where tls_ver IN (\"TLSv1\",\"TLSv1.0\",\"TLSv1.1\",\"SSLv3\") OR match(tls_ver,\"(?i)^ssl\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by tls_ver, dest_ip, src_ip, server_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**TLS 1.2 Minimum Version Violations on Payment APIs (PCI DSS Req 4.1.1, 4.2.1)** — Surfaces handshakes negotiated below TLS 1.2 on in-scope listeners so weak transmission protections are remediated before external vulnerability scanning deadlines.\n\nDocumented **Data sources**: `index=stream` `sourcetype=\"stream:tls\"`; `index=aws` `sourcetype=\"aws:elb:accesslogs\"` with `ssl_protocol`. **App/TA** (typical add-on context): Splunk Stream (1809), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Certificates.All_Certificates` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (by tls_ver), table (top client IPs), geomap optional.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface handshakes negotiated below TLS 1.2 on in-scope listeners so weak transmission protections are remediated before external vulnerability scanning deadlines. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Certificates"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.dest | sort - count",
              "e": [
                "aws",
                "stream"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "4.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 4.2.1 is enforced — Splunk UC-22.11.22: TLS 1.2 Minimum Version Violations on Payment APIs.",
                  "ea": "Saved search 'UC-22.11.22' running on index=stream sourcetype=\"stream:tls\"; index=aws sourcetype=\"aws:elb:accesslogs\" with ssl_protocol, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.23",
              "n": "Weak Cipher Suites Offered by Internal TLS Terminators (PCI DSS Req 4.1.1, 4.2.1)",
              "c": "high",
              "f": "advanced",
              "v": "Highlights servers advertising RC4, 3DES, or NULL ciphers on CDE-facing services, supporting the strong cryptography requirement for transmissions of PAN and connection data.",
              "t": "Splunk Stream (1809), Splunk Add-on for Qualys (2964)",
              "d": "`index=stream` `stream:tls` (`cipher_suite`); `index=vm` `qualys:host` SSL QIDs",
              "q": "index=stream sourcetype=\"stream:tls\" earliest=-24h\n| lookup pci_tls_terminators.csv server_ip OUTPUT pci_listener\n| where pci_listener=\"true\"\n| where match(cipher_suite,\"(?i)RC4|3DES|NULL|EXPORT|anon\")\n| stats count by server_ip, cipher_suite, tls_version\n| sort - count",
              "m": "(1) Ingest Stream metadata from SPAN or tap on DMZ switches. (2) Refresh listener inventory when load balancers change. (3) Pair with Qualys SSL scan for corroboration. (4) Track remediation SLA in ITSM.",
              "z": "Heatmap (server_ip x cipher_suite), pie (weak vs strong), table detail.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Stream](https://splunkbase.splunk.com/app/1809), [Splunk Add-on for Qualys](https://splunkbase.splunk.com/app/2964), [CIM: Certificates](https://docs.splunk.com/Documentation/CIM/latest/User/Certificates)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Stream (1809), Splunk Add-on for Qualys (2964).\n• Ensure the following data sources are available: `index=stream` `stream:tls` (`cipher_suite`); `index=vm` `qualys:host` SSL QIDs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest Stream metadata from SPAN or tap on DMZ switches. (2) Refresh listener inventory when load balancers change. (3) Pair with Qualys SSL scan for corroboration. (4) Track remediation SLA in ITSM.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=stream sourcetype=\"stream:tls\" earliest=-24h\n| lookup pci_tls_terminators.csv server_ip OUTPUT pci_listener\n| where pci_listener=\"true\"\n| where match(cipher_suite,\"(?i)RC4|3DES|NULL|EXPORT|anon\")\n| stats count by server_ip, cipher_suite, tls_version\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Weak Cipher Suites Offered by Internal TLS Terminators (PCI DSS Req 4.1.1, 4.2.1)** — Highlights servers advertising RC4, 3DES, or NULL ciphers on CDE-facing services, supporting the strong cryptography requirement for transmissions of PAN and connection data.\n\nDocumented **Data sources**: `index=stream` `stream:tls` (`cipher_suite`); `index=vm` `qualys:host` SSL QIDs. **App/TA** (typical add-on context): Splunk Stream (1809), Splunk Add-on for Qualys (2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: stream; **sourcetype**: stream:tls. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=stream, sourcetype=\"stream:tls\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where pci_listener=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(cipher_suite,\"(?i)RC4|3DES|NULL|EXPORT|anon\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by server_ip, cipher_suite, tls_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Weak Cipher Suites Offered by Internal TLS Terminators (PCI DSS Req 4.1.1, 4.2.1)** — Highlights servers advertising RC4, 3DES, or NULL ciphers on CDE-facing services, supporting the strong cryptography requirement for transmissions of PAN and connection data.\n\nDocumented **Data sources**: `index=stream` `stream:tls` (`cipher_suite`); `index=vm` `qualys:host` SSL QIDs. 
**App/TA** (typical add-on context): Splunk Stream (1809), Splunk Add-on for Qualys (2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Certificates.All_Certificates` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (server_ip x cipher_suite), pie (weak vs strong), table detail.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We highlights servers advertising RC4, 3DES, or NULL ciphers on CDE-facing services, supporting the strong cryptography requirement for transmissions of PAN and connection data. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Certificates"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count",
              "e": [
                "qualys",
                "stream"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "4.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 4.2.1 is enforced — Splunk UC-22.11.23: Weak Cipher Suites Offered by Internal TLS Terminators.",
                  "ea": "Saved search 'UC-22.11.23' running on index=stream stream:tls (cipher_suite); index=vm qualys:host SSL QIDs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.24",
              "n": "Certificate Expiry Risk for Public-Facing Payment Hostnames (PCI DSS Req 4.2.1, 4.2.1.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Counts days-to-expiry for certificates protecting CHD transmission paths so renewals happen before outages or downgrade attacks during renewal gaps.",
              "t": "Splunk Add-on for Tenable (4060), Splunk Add-on for Qualys (2964)",
              "d": "`index=vm` `tenable:sc:vuln` plugin 51192; `qualys:host` cert fields if mapped",
              "q": "index=vm sourcetype=\"tenable:sc:vuln\" earliest=-24h\n| where pluginID=51192 OR match(plugin_name,\"(?i)certificate.*expir\")\n| lookup pci_public_payment_hosts.csv dns_name AS dns OUTPUT in_business_scope\n| where in_business_scope=\"true\"\n| eval days_left=round((notAfter_epoch-now())/86400,1)\n| where days_left<45 AND days_left>0\n| stats min(days_left) as soonest_expiry by dns, ip\n| sort soonest_expiry",
              "m": "(1) Map Tenable `dns` to customer-facing FQDNs. (2) Ingest `notAfter_epoch` via TA field alias. (3) Page at 30/14/7 days thresholds. (4) Document emergency renewal procedure linkage.",
              "z": "Table (dns, soonest_expiry), column chart (certs per expiry week), single value (min days_left).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Tenable](https://splunkbase.splunk.com/app/4060), [Splunk Add-on for Qualys](https://splunkbase.splunk.com/app/2964), [CIM: Certificates](https://docs.splunk.com/Documentation/CIM/latest/User/Certificates)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Tenable (4060), Splunk Add-on for Qualys (2964).\n• Ensure the following data sources are available: `index=vm` `tenable:sc:vuln` plugin 51192; `qualys:host` cert fields if mapped.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map Tenable `dns` to customer-facing FQDNs. (2) Ingest `notAfter_epoch` via TA field alias. (3) Page at 30/14/7 days thresholds. (4) Document emergency renewal procedure linkage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vm sourcetype=\"tenable:sc:vuln\" earliest=-24h\n| where pluginID=51192 OR match(plugin_name,\"(?i)certificate.*expir\")\n| lookup pci_public_payment_hosts.csv dns_name AS dns OUTPUT in_business_scope\n| where in_business_scope=\"true\"\n| eval days_left=round((notAfter_epoch-now())/86400,1)\n| where days_left<45 AND days_left>0\n| stats min(days_left) as soonest_expiry by dns, ip\n| sort soonest_expiry\n```\n\nUnderstanding this SPL\n\n**Certificate Expiry Risk for Public-Facing Payment Hostnames (PCI DSS Req 4.2.1, 4.2.1.2)** — Counts days-to-expiry for certificates protecting CHD transmission paths so renewals happen before outages or downgrade attacks during renewal gaps.\n\nDocumented **Data sources**: `index=vm` `tenable:sc:vuln` plugin 51192; `qualys:host` cert fields if mapped. **App/TA** (typical add-on context): Splunk Add-on for Tenable (4060), Splunk Add-on for Qualys (2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vm; **sourcetype**: tenable:sc:vuln. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vm, sourcetype=\"tenable:sc:vuln\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where pluginID=51192 OR match(plugin_name,\"(?i)certificate.*expir\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_business_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left<45 AND days_left>0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by dns, ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Certificate Expiry Risk for Public-Facing Payment Hostnames (PCI DSS Req 4.2.1, 4.2.1.2)** — Counts days-to-expiry for certificates protecting CHD transmission paths so renewals happen before outages or downgrade attacks during renewal gaps.\n\nDocumented **Data sources**: `index=vm` `tenable:sc:vuln` plugin 51192; `qualys:host` cert fields if mapped. **App/TA** (typical add-on context): Splunk Add-on for Tenable (4060), Splunk Add-on for Qualys (2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Certificates.All_Certificates` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (dns, soonest_expiry), column chart (certs per expiry week), single value (min days_left).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We counts days-to-expiry for certificates protecting CHD transmission paths so renewals happen before outages or downgrade attacks during renewal gaps. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Operations"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Certificates"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count",
              "e": [
                "qualys",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "4.2.1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 4.2.1.2 is enforced — Splunk UC-22.11.24: Certificate Expiry Risk for Public-Facing Payment Hostnames.",
                  "ea": "Saved search 'UC-22.11.24' running on index=vm tenable:sc:vuln plugin 51192; qualys:host cert fields if mapped, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.25",
              "n": "Cleartext PAN Indicators in HTTP Headers or Query Strings (PCI DSS Req 4.1.1, 3.4.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Detects PAN-like sequences in unencrypted HTTP request lines so cardholder data is not sent in cleartext over open networks, satisfying encryption and rendering unreadable expectations.",
              "t": "Splunk Stream (1809), Splunk Enterprise Security (263)",
              "d": "`index=stream` `sourcetype=\"stream:http\"`; `index=web` `sourcetype=\"access_combined\"` where scheme=http",
              "q": "(index=stream sourcetype=\"stream:http\" earliest=-24h uri=*)\n OR (index=web sourcetype=\"access_combined\" earliest=-24h)\n| where match(coalesce(uri,url,_raw), \"http://\")\n| where match(coalesce(uri,url,_raw), \"\\\\b(?:4[0-9]{12}(?:[0-9]{3})?)\\\\b\")\n| stats count by src_ip, dest_ip, coalesce(uri,url) as request_line\n| sort - count",
              "m": "(1) Block at WAF based on same patterns where possible. (2) Restrict Splunk role for this search. (3) Immediate incident for positives. (4) Redact dashboard fields.",
              "z": "Table (aggregated URI patterns only), single value (violations), timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Stream](https://splunkbase.splunk.com/app/1809), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Stream (1809), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=stream` `sourcetype=\"stream:http\"`; `index=web` `sourcetype=\"access_combined\"` where scheme=http.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Block at WAF based on same patterns where possible. (2) Restrict Splunk role for this search. (3) Immediate incident for positives. (4) Redact dashboard fields.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=stream sourcetype=\"stream:http\" earliest=-24h uri=*)\n OR (index=web sourcetype=\"access_combined\" earliest=-24h)\n| where match(coalesce(uri,url,_raw), \"http://\")\n| where match(coalesce(uri,url,_raw), \"\\\\b(?:4[0-9]{12}(?:[0-9]{3})?)\\\\b\")\n| stats count by src_ip, dest_ip, coalesce(uri,url) as request_line\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Cleartext PAN Indicators in HTTP Headers or Query Strings (PCI DSS Req 4.1.1, 3.4.1)** — Detects PAN-like sequences in unencrypted HTTP request lines so cardholder data is not sent in cleartext over open networks, satisfying encryption and rendering unreadable expectations.\n\nDocumented **Data sources**: `index=stream` `sourcetype=\"stream:http\"`; `index=web` `sourcetype=\"access_combined\"` where scheme=http. **App/TA** (typical add-on context): Splunk Stream (1809), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: stream, web; **sourcetype**: stream:http, access_combined. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=stream, index=web, sourcetype=\"stream:http\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(coalesce(uri,url,_raw), \"http://\")` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(coalesce(uri,url,_raw), \"\\\\b(?:4[0-9]{12}(?:[0-9]{3})?)\\\\b\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_ip, dest_ip, coalesce(uri,url) as request_line** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.src, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cleartext PAN Indicators in HTTP Headers or Query Strings (PCI DSS Req 4.1.1, 3.4.1)** — Detects PAN-like sequences in unencrypted HTTP request lines so cardholder data is not sent in cleartext over open networks, satisfying encryption and rendering unreadable expectations.\n\nDocumented **Data sources**: `index=stream` `sourcetype=\"stream:http\"`; `index=web` `sourcetype=\"access_combined\"` where scheme=http. **App/TA** (typical add-on context): Splunk Stream (1809), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (aggregated URI patterns only), single value (violations), timeline.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect PAN-like sequences in unencrypted HTTP request lines so cardholder data is not sent in cleartext over open networks, satisfying encryption and rendering unreadable expectations. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.src, Web.dest | sort - count",
              "e": [
                "stream"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "3.4.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 3.4.1 is enforced — Splunk UC-22.11.25: Cleartext PAN Indicators in HTTP Headers or Query Strings.",
                  "ea": "Saved search 'UC-22.11.25' running on index=stream sourcetype=\"stream:http\"; index=web sourcetype=\"access_combined\" where scheme=http, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.26",
              "n": "Wireless Link Encryption Downgrade for Store WLAN Carrying Payment Terminals (PCI DSS Req 4.1.1, 2.2.4)",
              "c": "medium",
              "f": "advanced",
              "v": "Surfaces association frames or controller logs showing legacy WPA or open auth on SSIDs used by payment terminals, evidencing review of wireless encryption for account data environments.",
              "t": "Splunk Add-on for Cisco (WLAN syslog), custom Meraki `meraki:radio` if used",
              "d": "`index=wireless` `sourcetype=\"wlan:controller\"` — SSID, security mode, AP name",
              "q": "index=wireless sourcetype=\"wlan:controller\" earliest=-24h\n| lookup pci_store_ssid.csv ssid OUTPUT carries_pos\n| where carries_pos=\"true\"\n| where match(_raw,\"(?i)open|OWE|WEP|WPA[^2]|TKIP\")\n| stats count by ap_name, ssid, security_mode\n| sort - count",
              "m": "(1) Classify SSIDs that can see payment devices. (2) Exclude guest SSIDs with captive portal if physically isolated. (3) Integrate with store opening checklist. (4) Weekly store ops digest.",
              "z": "Table (SSID, security_mode), map of APs if geo available, bar chart (by store region).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Cisco (WLAN syslog), custom Meraki `meraki:radio` if used.\n• Ensure the following data sources are available: `index=wireless` `sourcetype=\"wlan:controller\"` — SSID, security mode, AP name.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Classify SSIDs that can see payment devices. (2) Exclude guest SSIDs with captive portal if physically isolated. (3) Integrate with store opening checklist. (4) Weekly store ops digest.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wireless sourcetype=\"wlan:controller\" earliest=-24h\n| lookup pci_store_ssid.csv ssid OUTPUT carries_pos\n| where carries_pos=\"true\"\n| where match(_raw,\"(?i)open|OWE|WEP|WPA[^2]|TKIP\")\n| stats count by ap_name, ssid, security_mode\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Wireless Link Encryption Downgrade for Store WLAN Carrying Payment Terminals (PCI DSS Req 4.1.1, 2.2.4)** — Surfaces association frames or controller logs showing legacy WPA or open auth on SSIDs used by payment terminals, evidencing review of wireless encryption for account data environments.\n\nDocumented **Data sources**: `index=wireless` `sourcetype=\"wlan:controller\"` — SSID, security mode, AP name. **App/TA** (typical add-on context): Splunk Add-on for Cisco (WLAN syslog), custom Meraki `meraki:radio` if used. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wireless; **sourcetype**: wlan:controller. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wireless, sourcetype=\"wlan:controller\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where carries_pos=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(_raw,\"(?i)open|OWE|WEP|WPA[^2]|TKIP\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ap_name, ssid, security_mode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (SSID, security_mode), map of APs if geo available, bar chart (by store region).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface association frames or controller logs showing legacy WPA or open auth on SSIDs used by payment terminals, evidencing review of wireless encryption for account data environments. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_meraki"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "2.2.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 2.2.4 is enforced — Splunk UC-22.11.26: Wireless Link Encryption Downgrade for Store WLAN Carrying Payment Terminals.",
                  "ea": "Saved search 'UC-22.11.26' running on index=wireless sourcetype=\"wlan:controller\" — SSID, security mode, AP name, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.27",
              "n": "Anti-Malware Agent Coverage Gaps on CDE Windows Servers (PCI DSS Req 5.2.1, 5.3.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "Compares CrowdStrike or legacy AV presence signals to the PCI server inventory so every in-scope system is actively protected against malware as required for all system components.",
              "t": "Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=edr` `sourcetype=\"crowdstrike:hosts\"`; `index=windows` `WinEventLog:System` for legacy AV vendor",
              "q": "index=edr sourcetype=\"crowdstrike:hosts\" earliest=-24h\n| lookup pci_windows_cde_assets.csv hostname OUTPUT cde_member\n| where cde_member=\"true\"\n| eval protected=if(match(agent_status,\"(?i)normal|protected|enabled\"),1,0)\n| where protected=0 OR isnull(agent_status)\n| stats values(agent_status) as status_vals by hostname, local_ip\n| sort hostname",
              "m": "(1) Normalize hostname case with `lower()`. (2) Reconcile VMs cloned without sensor. (3) Auto-ticket missing agents within SLA. (4) Monthly attestation export for Req 5.",
              "z": "Table (unprotected hosts), single value (% coverage), pie (protected vs not).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5994), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=edr` `sourcetype=\"crowdstrike:hosts\"`; `index=windows` `WinEventLog:System` for legacy AV vendor.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize hostname case with `lower()`. (2) Reconcile VMs cloned without sensor. (3) Auto-ticket missing agents within SLA. (4) Monthly attestation export for Req 5.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=\"crowdstrike:hosts\" earliest=-24h\n| lookup pci_windows_cde_assets.csv hostname OUTPUT cde_member\n| where cde_member=\"true\"\n| eval protected=if(match(agent_status,\"(?i)normal|protected|enabled\"),1,0)\n| where protected=0 OR isnull(agent_status)\n| stats values(agent_status) as status_vals by hostname, local_ip\n| sort hostname\n```\n\nUnderstanding this SPL\n\n**Anti-Malware Agent Coverage Gaps on CDE Windows Servers (PCI DSS Req 5.2.1, 5.3.1)** — Compares CrowdStrike or legacy AV presence signals to the PCI server inventory so every in-scope system is actively protected against malware as required for all system components.\n\nDocumented **Data sources**: `index=edr` `sourcetype=\"crowdstrike:hosts\"`; `index=windows` `WinEventLog:System` for legacy AV vendor. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: crowdstrike:hosts. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=\"crowdstrike:hosts\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cde_member=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **protected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where protected=0 OR isnull(agent_status)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by hostname, local_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Anti-Malware Agent Coverage Gaps on CDE Windows Servers (PCI DSS Req 5.2.1, 5.3.1)** — Compares CrowdStrike or legacy AV presence signals to the PCI server inventory so every in-scope system is actively protected against malware as required for all system components.\n\nDocumented **Data sources**: `index=edr` `sourcetype=\"crowdstrike:hosts\"`; `index=windows` `WinEventLog:System` for legacy AV vendor. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unprotected hosts), single value (% coverage), pie (protected vs not).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare CrowdStrike or legacy AV presence signals to the PCI server inventory so every in-scope system is actively protected against malware as required for all system components. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "5.3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 5.3.1 is enforced — Splunk UC-22.11.27: Anti-Malware Agent Coverage Gaps on CDE Windows Servers.",
                  "ea": "Saved search 'UC-22.11.27' running on index=edr sourcetype=\"crowdstrike:hosts\"; index=windows WinEventLog:System for legacy AV vendor, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.28",
              "n": "Malware Definition and Sensor Policy Update Lag (PCI DSS Req 5.2.2, 5.3.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Measures hours since last successful policy or definition update on CDE endpoints so protection stays current and assessors see operational discipline beyond initial installation.",
              "t": "Splunk Add-on for CrowdStrike FDR (5082)",
              "d": "`index=edr` `crowdstrike:hosts` — `last_seen`, `policy_id`, `version`",
              "q": "index=edr sourcetype=\"crowdstrike:hosts\" earliest=-24h\n| lookup pci_windows_cde_assets.csv hostname OUTPUT cde_member\n| where cde_member=\"true\"\n| eval lag_hours=round((now()-last_seen)/3600,2)\n| where lag_hours>48\n| stats max(lag_hours) as max_lag_h by hostname, platform_name, version\n| sort - max_lag_h",
              "m": "(1) Tune `lag_hours` to vendor SLA. (2) Exclude maintenance windows via lookup. (3) Correlate with network outages. (4) Weekly compliance report attachment.",
              "z": "Bar chart (max_lag_h by host), table, single value (hosts breaching 48h).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5994), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CrowdStrike FDR (5082).\n• Ensure the following data sources are available: `index=edr` `crowdstrike:hosts` — `last_seen`, `policy_id`, `version`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune `lag_hours` to vendor SLA. (2) Exclude maintenance windows via lookup. (3) Correlate with network outages. (4) Weekly compliance report attachment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=\"crowdstrike:hosts\" earliest=-24h\n| lookup pci_windows_cde_assets.csv hostname OUTPUT cde_member\n| where cde_member=\"true\"\n| eval lag_hours=round((now()-last_seen)/3600,2)\n| where lag_hours>48\n| stats max(lag_hours) as max_lag_h by hostname, platform_name, version\n| sort - max_lag_h\n```\n\nUnderstanding this SPL\n\n**Malware Definition and Sensor Policy Update Lag (PCI DSS Req 5.2.2, 5.3.2)** — Measures hours since last successful policy or definition update on CDE endpoints so protection stays current and assessors see operational discipline beyond initial installation.\n\nDocumented **Data sources**: `index=edr` `crowdstrike:hosts` — `last_seen`, `policy_id`, `version`. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike FDR (5082). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: crowdstrike:hosts. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=\"crowdstrike:hosts\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cde_member=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **lag_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lag_hours>48` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by hostname, platform_name, version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Malware Definition and Sensor Policy Update Lag (PCI DSS Req 5.2.2, 5.3.2)** — Measures hours since last successful policy or definition update on CDE endpoints so protection stays current and assessors see operational discipline beyond initial installation.\n\nDocumented **Data sources**: `index=edr` `crowdstrike:hosts` — `last_seen`, `policy_id`, `version`. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike FDR (5082). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (max_lag_h by host), table, single value (hosts breaching 48h).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measures hours since last successful policy or definition update on CDE endpoints so protection stays current and assessors see operational discipline beyond initial installation. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "5.3.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 5.3.2 is enforced — Splunk UC-22.11.28: Malware Definition and Sensor Policy Update Lag.",
                  "ea": "Saved search 'UC-22.11.28' running on index=edr crowdstrike:hosts — last_seen, policy_id, version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.29",
              "n": "Scheduled Malware Scan or On-Demand Scan Failures (PCI DSS Req 5.3.1, 5.3.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Captures failed or skipped AV scans on systems that can affect account data so periodic and real-time detection requirements remain demonstrable during operational reviews.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=windows` `WinEventLog:Application` vendor AV EventCode; `sourcetype=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\"`",
              "q": "index=windows sourcetype=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\" earliest=-7d\n| lookup pci_windows_cde_assets.csv ComputerName OUTPUT cde_member\n| where cde_member=\"true\"\n| where EventCode IN (1008,1010,5001,5007)\n| stats count by ComputerName, EventCode, EventDescription\n| sort - count",
              "m": "(1) Map Defender EventCodes to human-readable descriptions via lookup. (2) Include legacy AV sourcetypes in append. (3) Alert on any failure in CDE. (4) Track re-scan completion in ITSM.",
              "z": "Table (host, EventCode), column chart (failures by day), single value (open failures).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=windows` `WinEventLog:Application` vendor AV EventCode; `sourcetype=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map Defender EventCodes to human-readable descriptions via lookup. (2) Include legacy AV sourcetypes in append. (3) Alert on any failure in CDE. (4) Track re-scan completion in ITSM.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\" earliest=-7d\n| lookup pci_windows_cde_assets.csv ComputerName OUTPUT cde_member\n| where cde_member=\"true\"\n| where EventCode IN (1008,1010,5001,5007)\n| stats count by ComputerName, EventCode, EventDescription\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Scheduled Malware Scan or On-Demand Scan Failures (PCI DSS Req 5.3.1, 5.3.2)** — Captures failed or skipped AV scans on systems that can affect account data so periodic and real-time detection requirements remain demonstrable during operational reviews.\n\nDocumented **Data sources**: `index=windows` `WinEventLog:Application` vendor AV EventCode; `sourcetype=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\"`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Microsoft-Windows-Windows Defender/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cde_member=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where EventCode IN (1008,1010,5001,5007)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ComputerName, EventCode, EventDescription** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Scheduled Malware Scan or On-Demand Scan Failures (PCI DSS Req 5.3.1, 5.3.2)** — Captures failed or skipped AV scans on systems that can affect account data so periodic and real-time detection requirements remain demonstrable during operational reviews.\n\nDocumented **Data sources**: `index=windows` `WinEventLog:Application` vendor AV EventCode; `sourcetype=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\"`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, EventCode), column chart (failures by day), single value (open failures).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We captures failed or skipped AV scans on systems that can affect account data so periodic and real-time detection requirements remain demonstrable during operational reviews. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "5.3.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 5.3.2 is enforced — Splunk UC-22.11.29: Scheduled Malware Scan or On-Demand Scan Failures.",
                  "ea": "Saved search 'UC-22.11.29' running on sourcetype WinEventLog:Microsoft and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.30",
              "n": "Malware Detection Volume Trend by Store and Server Tier (PCI DSS Req 5.2.1, 5.3.1)",
              "c": "medium",
              "f": "intermediate",
              "v": "Trends malware blocks and quarantines where payment systems exist so regional infection campaigns are visible before they spread into the CDE.",
              "t": "Splunk Add-on for CrowdStrike FDR (5082), Splunk Enterprise Security (263)",
              "d": "`index=edr` `sourcetype=\"crowdstrike:detection\"`; ES `malware` data model if accelerated",
              "q": "index=edr sourcetype=\"crowdstrike:detection\" earliest=-30d\n| lookup pci_site_codes.csv aid OUTPUT store_region, pci_tier\n| where pci_tier IN (\"pos\",\"store_server\",\"corp_payment\")\n| bin _time span=1d\n| stats count by _time, store_region, pci_tier\n| eventstats sum(count) as daily_total by _time\n| eval share_pct=if(daily_total>0, round(100*count/daily_total,2), null())\n| sort _time, store_region",
              "m": "(1) Map CrowdStrike `aid` to store metadata. (2) Filter out benign test detections via `severity` threshold. (3) Compare to seasonal retail baseline. (4) Include in monthly security committee pack.",
              "z": "Stacked area (count by pci_tier), line (`share_pct` by region), heatmap (region x day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5994), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CrowdStrike FDR (5082), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=edr` `sourcetype=\"crowdstrike:detection\"`; ES `malware` data model if accelerated.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map CrowdStrike `aid` to store metadata. (2) Filter out benign test detections via `severity` threshold. (3) Compare to seasonal retail baseline. (4) Include in monthly security committee pack.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=\"crowdstrike:detection\" earliest=-30d\n| lookup pci_site_codes.csv aid OUTPUT store_region, pci_tier\n| where pci_tier IN (\"pos\",\"store_server\",\"corp_payment\")\n| bin _time span=1d\n| stats count by _time, store_region, pci_tier\n| eventstats sum(count) as daily_total by _time\n| eval share_pct=if(daily_total>0, round(100*count/daily_total,2), null())\n| sort _time, store_region\n```\n\nUnderstanding this SPL\n\n**Malware Detection Volume Trend by Store and Server Tier (PCI DSS Req 5.2.1, 5.3.1)** — Trends malware blocks and quarantines where payment systems exist so regional infection campaigns are visible before they spread into the CDE.\n\nDocumented **Data sources**: `index=edr` `sourcetype=\"crowdstrike:detection\"`; ES `malware` data model if accelerated. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike FDR (5082), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: crowdstrike:detection. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=\"crowdstrike:detection\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where pci_tier IN (\"pos\",\"store_server\",\"corp_payment\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, store_region, pci_tier** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **share_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Malware Detection Volume Trend by Store and Server Tier (PCI DSS Req 5.2.1, 5.3.1)** — Trends malware blocks and quarantines where payment systems exist so regional infection campaigns are visible before they spread into the CDE.\n\nDocumented **Data sources**: `index=edr` `sourcetype=\"crowdstrike:detection\"`; ES `malware` data model if accelerated. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike FDR (5082), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area (count by pci_tier), line (`share_pct` by region), heatmap (region x day).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We trends malware blocks and quarantines where payment systems exist so regional infection campaigns are visible before they spread into the CDE. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest span=1d | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "5.3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 5.3.1 is enforced — Splunk UC-22.11.30: Malware Detection Volume Trend by Store and Server Tier.",
                  "ea": "Saved search 'UC-22.11.30' running on index=edr sourcetype=\"crowdstrike:detection\"; ES malware data model if accelerated, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.31",
              "n": "Phishing Simulation Click-Through Rates for Users with CDE Access (PCI DSS Req 5.3.3, 12.6.1)",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures phishing exercise outcomes for staff who can reach payment systems so security awareness training effectiveness supports malware defense and human-layer controls.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055) if simulations logged; or `sourcetype=\"phishme:results\"`",
              "d": "`index=sec_aware` `sourcetype=\"phishme:results\"` — `campaign`, `user`, `clicked`, `reported`",
              "q": "index=sec_aware sourcetype=\"phishme:results\" earliest=-90d\n| lookup pci_cde_users.csv email AS user OUTPUT has_cde_access\n| where has_cde_access=\"true\"\n| eval clicked_flag=if(match(clicked,\"(?i)yes|true|1\"),1,0)\n| stats sum(clicked_flag) as clicks, count as sent by campaign\n| eval click_rate=if(sent>0, round(100*clicks/sent,2), null())\n| where click_rate>15\n| sort - click_rate",
              "m": "(1) Sync HR email to `pci_cde_users.csv` after access reviews. (2) De-duplicate multiple sends per user per campaign. (3) Trigger mandatory remedial training over threshold. (4) Quarterly leadership summary.",
              "z": "Bar (click_rate by campaign), table (campaign, sent, clicks), gauge (CDE user click rate).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055) if simulations logged; or `sourcetype=\"phishme:results\"`.\n• Ensure the following data sources are available: `index=sec_aware` `sourcetype=\"phishme:results\"` — `campaign`, `user`, `clicked`, `reported`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Sync HR email to `pci_cde_users.csv` after access reviews. (2) De-duplicate multiple sends per user per campaign. (3) Trigger mandatory remedial training over threshold. (4) Quarterly leadership summary.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sec_aware sourcetype=\"phishme:results\" earliest=-90d\n| lookup pci_cde_users.csv email AS user OUTPUT has_cde_access\n| where has_cde_access=\"true\"\n| eval clicked_flag=if(match(clicked,\"(?i)yes|true|1\"),1,0)\n| stats sum(clicked_flag) as clicks, count as sent by campaign\n| eval click_rate=if(sent>0, round(100*clicks/sent,2), null())\n| where click_rate>15\n| sort - click_rate\n```\n\nUnderstanding this SPL\n\n**Phishing Simulation Click-Through Rates for Users with CDE Access (PCI DSS Req 5.3.3, 12.6.1)** — Measures phishing exercise outcomes for staff who can reach payment systems so security awareness training effectiveness supports malware defense and human-layer controls.\n\nDocumented **Data sources**: `index=sec_aware` `sourcetype=\"phishme:results\"` — `campaign`, `user`, `clicked`, `reported`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055) if simulations logged; or `sourcetype=\"phishme:results\"`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sec_aware; **sourcetype**: phishme:results. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sec_aware, sourcetype=\"phishme:results\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where has_cde_access=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **clicked_flag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by campaign** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **click_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where click_rate>15` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar (click_rate by campaign), table (campaign, sent, clicks), gauge (CDE user click rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measures phishing exercise outcomes for staff who can reach payment systems so security awareness training effectiveness supports malware defense and human-layer controls. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.6.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 12.6.1 is enforced — Splunk UC-22.11.31: Phishing Simulation Click-Through Rates for Users with CDE Access.",
                  "ea": "Saved search 'UC-22.11.31' running on index=sec_aware sourcetype=\"phishme:results\" — campaign, user, clicked, reported, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.32",
              "n": "Anti-Malware Tamper and Bypass Attempt Telemetry (PCI DSS Req 5.2.1, 5.3.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Surfaces attempts to disable real-time protection or unload drivers on payment-facing hosts, evidencing detective controls when malware defenses are targeted.",
              "t": "Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=edr` `crowdstrike:detection` with technique “defense evasion”; `index=windows` Defender operational events for tamper protection",
              "q": "(index=edr sourcetype=\"crowdstrike:detection\" earliest=-7d tactic=\"Defense Evasion\")\n OR (index=windows sourcetype=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\" EventCode=5001 earliest=-7d)\n| lookup pci_windows_cde_assets.csv coalesce(hostname,ComputerName) as host_key OUTPUT cde_member\n| where cde_member=\"true\"\n| stats earliest(_time) as first, latest(_time) as last, values(description) as titles by host_key\n| sort first",
              "m": "(1) Tune out known patch reboot sequences. (2) Map to insider-threat procedure if user context exists. (3) Immediate SOC escalation. (4) Retain raw events 12 months.",
              "z": "Timeline (first/last), table (host_key, titles), single value (distinct hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5994), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=edr` `crowdstrike:detection` with technique “defense evasion”; `index=windows` Defender operational events for tamper protection.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune out known patch reboot sequences. (2) Map to insider-threat procedure if user context exists. (3) Immediate SOC escalation. (4) Retain raw events 12 months.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=edr sourcetype=\"crowdstrike:detection\" earliest=-7d tactic=\"Defense Evasion\")\n OR (index=windows sourcetype=\"WinEventLog:Microsoft-Windows-Windows Defender/Operational\" EventCode=5001 earliest=-7d)\n| lookup pci_windows_cde_assets.csv coalesce(hostname,ComputerName) as host_key OUTPUT cde_member\n| where cde_member=\"true\"\n| stats earliest(_time) as first, latest(_time) as last, values(description) as titles by host_key\n| sort first\n```\n\nUnderstanding this SPL\n\n**Anti-Malware Tamper and Bypass Attempt Telemetry (PCI DSS Req 5.2.1, 5.3.1)** — Surfaces attempts to disable real-time protection or unload drivers on payment-facing hosts, evidencing detective controls when malware defenses are targeted.\n\nDocumented **Data sources**: `index=edr` `crowdstrike:detection` with technique “defense evasion”; `index=windows` Defender operational events for tamper protection. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr, windows; **sourcetype**: crowdstrike:detection, WinEventLog:Microsoft-Windows-Windows Defender/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, index=windows, sourcetype=\"crowdstrike:detection\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cde_member=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host_key** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Anti-Malware Tamper and Bypass Attempt Telemetry (PCI DSS Req 5.2.1, 5.3.1)** — Surfaces attempts to disable real-time protection or unload drivers on payment-facing hosts, evidencing detective controls when malware defenses are targeted.\n\nDocumented **Data sources**: `index=edr` `crowdstrike:detection` with technique “defense evasion”; `index=windows` Defender operational events for tamper protection. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (first/last), table (host_key, titles), single value (distinct hosts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface attempts to disable real-time protection or unload drivers on payment-facing hosts, evidencing detective controls when malware defenses are targeted. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.action, Malware_Attacks.signature, Malware_Attacks.dest | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "5.3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 5.3.1 is enforced — Splunk UC-22.11.32: Anti-Malware Tamper and Bypass Attempt Telemetry.",
                  "ea": "Saved search 'UC-22.11.32' running on index edr and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.33",
              "n": "Critical and High Vulnerabilities on Payment Application Servers (PCI DSS Req 6.3.1, 11.3.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "Prioritizes open critical and high findings on CDE application tiers from authenticated scans so secure development and vulnerability management requirements stay aligned with PCI risk ranking.",
              "t": "Splunk Add-on for Tenable (4060), Splunk Add-on for Qualys (2964)",
              "d": "`index=vm` `sourcetype IN (\"tenable:sc:vuln\",\"qualys:host\")` — `severity`, `plugin_name`, `host_fqdn`",
              "q": "index=vm sourcetype IN (\"tenable:sc:vuln\",\"qualys:host\") earliest=-24h\n| lookup pci_app_tier.csv fqdn AS host_fqdn OUTPUT tier\n| where tier=\"payment_app\"\n| where severity IN (\"Critical\",\"High\",4,5)\n| where match(state,\"(?i)open|active|reopened)\")\n| stats dc(plugin_id) as open_plugins, values(plugin_name) as sample_plugins by host_fqdn\n| sort - open_plugins",
              "m": "(1) Normalize severity strings across vendors. (2) Deduplicate by `plugin_id`+asset. (3) Sync with patch CAB weekly. (4) Export for ASV/internal scan evidence crosswalk.",
              "z": "Table (hosts with counts), bar (open_plugins), treemap by business unit lookup.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Tenable](https://splunkbase.splunk.com/app/4060), [Splunk Add-on for Qualys](https://splunkbase.splunk.com/app/2964), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Tenable (4060), Splunk Add-on for Qualys (2964).\n• Ensure the following data sources are available: `index=vm` `sourcetype IN (\"tenable:sc:vuln\",\"qualys:host\")` — `severity`, `plugin_name`, `host_fqdn`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize severity strings across vendors. (2) Deduplicate by `plugin_id`+asset. (3) Sync with patch CAB weekly. (4) Export for ASV/internal scan evidence crosswalk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vm sourcetype IN (\"tenable:sc:vuln\",\"qualys:host\") earliest=-24h\n| lookup pci_app_tier.csv fqdn AS host_fqdn OUTPUT tier\n| where tier=\"payment_app\"\n| where severity IN (\"Critical\",\"High\",4,5)\n| where match(state,\"(?i)open|active|reopened)\")\n| stats dc(plugin_id) as open_plugins, values(plugin_name) as sample_plugins by host_fqdn\n| sort - open_plugins\n```\n\nUnderstanding this SPL\n\n**Critical and High Vulnerabilities on Payment Application Servers (PCI DSS Req 6.3.1, 11.3.1)** — Prioritizes open critical and high findings on CDE application tiers from authenticated scans so secure development and vulnerability management requirements stay aligned with PCI risk ranking.\n\nDocumented **Data sources**: `index=vm` `sourcetype IN (\"tenable:sc:vuln\",\"qualys:host\")` — `severity`, `plugin_name`, `host_fqdn`. **App/TA** (typical add-on context): Splunk Add-on for Tenable (4060), Splunk Add-on for Qualys (2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vm.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vm, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where tier=\"payment_app\"` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where severity IN (\"Critical\",\"High\",4,5)` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(state,\"(?i)open|active|reopened)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host_fqdn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Critical and High Vulnerabilities on Payment Application Servers (PCI DSS Req 6.3.1, 11.3.1)** — Prioritizes open critical and high findings on CDE application tiers from authenticated scans so secure development and vulnerability management requirements stay aligned with PCI risk ranking.\n\nDocumented **Data sources**: `index=vm` `sourcetype IN (\"tenable:sc:vuln\",\"qualys:host\")` — `severity`, `plugin_name`, `host_fqdn`. **App/TA** (typical add-on context): Splunk Add-on for Tenable (4060), Splunk Add-on for Qualys (2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hosts with counts), bar (open_plugins), treemap by business unit lookup.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We prioritizes open critical and high findings on CDE application tiers from authenticated scans so secure development and vulnerability management requirements stay aligned with PCI risk ranking. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "qualys",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "11.3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 11.3.1 is enforced — Splunk UC-22.11.33: Critical and High Vulnerabilities on Payment Application Servers.",
                  "ea": "Saved search 'UC-22.11.33' running on index=vm sourcetype IN (\"tenable:sc:vuln\",\"qualys:host\") — severity, plugin_name, host_fqdn, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.34",
              "n": "Critical CVE Remediation SLA Breach Tracking (PCI DSS Req 6.3.1, 6.3.3)",
              "c": "critical",
              "f": "advanced",
              "v": "Computes age in days for critical vulnerabilities against the organization’s documented remediation SLA so PCI expectations for timely flaw correction are measurable.",
              "t": "Splunk Add-on for Tenable (4060)",
              "d": "`index=vm` `tenable:sc:vuln` — `first_found`, `last_found`, `severity`, `host_fqdn`",
              "q": "index=vm sourcetype=\"tenable:sc:vuln\" severity=\"Critical\" earliest=-90d\n| lookup pci_in_scope_hosts.csv fqdn AS host_fqdn OUTPUT in_scope\n| where in_scope=\"true\"\n| where match(state,\"(?i)open|reopened)\")\n| eval age_days=round((now()-first_found)/86400,1)\n| eval sla_days=30\n| where age_days>sla_days\n| stats max(age_days) as max_age by host_fqdn, plugin_name\n| sort - max_age",
              "m": "(1) Adjust `sla_days` per risk policy (15 for internet-facing). (2) Join patch ticket state from ITSM to exclude in-progress. (3) Weekly governance email. (4) Document exceptions with compensating controls.",
              "z": "Bar (max_age by host), table (plugin_name), single value (count of SLA breaches).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Tenable](https://splunkbase.splunk.com/app/4060), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Tenable (4060).\n• Ensure the following data sources are available: `index=vm` `tenable:sc:vuln` — `first_found`, `last_found`, `severity`, `host_fqdn`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Adjust `sla_days` per risk policy (15 for internet-facing). (2) Join patch ticket state from ITSM to exclude in-progress. (3) Weekly governance email. (4) Document exceptions with compensating controls.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vm sourcetype=\"tenable:sc:vuln\" severity=\"Critical\" earliest=-90d\n| lookup pci_in_scope_hosts.csv fqdn AS host_fqdn OUTPUT in_scope\n| where in_scope=\"true\"\n| where match(state,\"(?i)open|reopened)\")\n| eval age_days=round((now()-first_found)/86400,1)\n| eval sla_days=30\n| where age_days>sla_days\n| stats max(age_days) as max_age by host_fqdn, plugin_name\n| sort - max_age\n```\n\nUnderstanding this SPL\n\n**Critical CVE Remediation SLA Breach Tracking (PCI DSS Req 6.3.1, 6.3.3)** — Computes age in days for critical vulnerabilities against the organization’s documented remediation SLA so PCI expectations for timely flaw correction are measurable.\n\nDocumented **Data sources**: `index=vm` `tenable:sc:vuln` — `first_found`, `last_found`, `severity`, `host_fqdn`. **App/TA** (typical add-on context): Splunk Add-on for Tenable (4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vm; **sourcetype**: tenable:sc:vuln. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vm, sourcetype=\"tenable:sc:vuln\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(state,\"(?i)open|reopened)\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>sla_days` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host_fqdn, plugin_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Critical CVE Remediation SLA Breach Tracking (PCI DSS Req 6.3.1, 6.3.3)** — Computes age in days for critical vulnerabilities against the organization’s documented remediation SLA so PCI expectations for timely flaw correction are measurable.\n\nDocumented **Data sources**: `index=vm` `tenable:sc:vuln` — `first_found`, `last_found`, `severity`, `host_fqdn`. **App/TA** (typical add-on context): Splunk Add-on for Tenable (4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar (max_age by host), table (plugin_name), single value (count of SLA breaches).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We computes age in days for critical vulnerabilities against the organization’s documented remediation SLA so PCI expectations for timely flaw correction are measurable. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "6.3.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 6.3.3 is enforced — Splunk UC-22.11.34: Critical CVE Remediation SLA Breach Tracking.",
                  "ea": "Saved search 'UC-22.11.34' running on index=vm tenable:sc:vuln — first_found, last_found, severity, host_fqdn, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.35",
              "n": "Web Application Firewall Blocks and Anomalies on Checkout URIs (PCI DSS Req 6.4.1, 6.4.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Aggregates ModSecurity, AWS WAF, or commercial WAF denies on payment paths so public-facing web applications retain active attack detection between code releases.",
              "t": "Splunk Add-on for AWS (1876), Splunk Enterprise Security (263)",
              "d": "`index=waf` `sourcetype=\"aws:waf:logs\"`; `sourcetype=\"modsec:json\"`",
              "q": "index=waf (sourcetype=\"aws:waf:logs\" OR sourcetype=\"modsec:json\") earliest=-24h\n| where match(coalesce(uri,httpRequest.uri),\"(?i)/checkout|/payment|/cart|/token\")\n| where action IN (\"BLOCK\",\"block\",\"deny\",\"403\")\n| stats count by terminatingRuleId, coalesce(uri,httpRequest.uri) as path, clientIp\n| sort - count\n| head 200",
              "m": "(1) Normalize field names with CIM aliases. (2) Enrich `clientIp` with threat intel. (3) Tune out scanner noise using rate limits. (4) Feed spikes to application on-call.",
              "z": "Column chart (by rule ID), geomap (clientIp), table (path, count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (1876), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=waf` `sourcetype=\"aws:waf:logs\"`; `sourcetype=\"modsec:json\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field names with CIM aliases. (2) Enrich `clientIp` with threat intel. (3) Tune out scanner noise using rate limits. (4) Feed spikes to application on-call.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=waf (sourcetype=\"aws:waf:logs\" OR sourcetype=\"modsec:json\") earliest=-24h\n| where match(coalesce(uri,httpRequest.uri),\"(?i)/checkout|/payment|/cart|/token\")\n| where action IN (\"BLOCK\",\"block\",\"deny\",\"403\")\n| stats count by terminatingRuleId, coalesce(uri,httpRequest.uri) as path, clientIp\n| sort - count\n| head 200\n```\n\nUnderstanding this SPL\n\n**Web Application Firewall Blocks and Anomalies on Checkout URIs (PCI DSS Req 6.4.1, 6.4.2)** — Aggregates ModSecurity, AWS WAF, or commercial WAF denies on payment paths so public-facing web applications retain active attack detection between code releases.\n\nDocumented **Data sources**: `index=waf` `sourcetype=\"aws:waf:logs\"`; `sourcetype=\"modsec:json\"`. **App/TA** (typical add-on context): Splunk Add-on for AWS (1876), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: waf; **sourcetype**: aws:waf:logs, modsec:json. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=waf, sourcetype=\"aws:waf:logs\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(coalesce(uri,httpRequest.uri),\"(?i)/checkout|/payment|/cart|/token\")` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where action IN (\"BLOCK\",\"block\",\"deny\",\"403\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by terminatingRuleId, coalesce(uri,httpRequest.uri) as path, clientIp** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Web Application Firewall Blocks and Anomalies on Checkout URIs (PCI DSS Req 6.4.1, 6.4.2)** — Aggregates ModSecurity, AWS WAF, or commercial WAF denies on payment paths so public-facing web applications retain active attack detection between code releases.\n\nDocumented **Data sources**: `index=waf` `sourcetype=\"aws:waf:logs\"`; `sourcetype=\"modsec:json\"`. 
**App/TA** (typical add-on context): Splunk Add-on for AWS (1876), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (by rule ID), geomap (clientIp), table (path, count).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We aggregates ModSecurity, AWS WAF, or commercial WAF denies on payment paths so public-facing web applications retain active attack detection between code releases. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "6.4.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 6.4.2 is enforced — Splunk UC-22.11.35: Web Application Firewall Blocks and Anomalies on Checkout URIs.",
                  "ea": "Saved search 'UC-22.11.35' running on index=waf sourcetype=\"aws:waf:logs\"; sourcetype=\"modsec:json\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.36",
              "n": "Pull-Request and Code Review Evidence for Payment Microservices (PCI DSS Req 6.2.4, 6.3.1)",
              "c": "medium",
              "f": "advanced",
              "v": "Ingests Git platform webhooks to verify merges to payment repositories include required reviewers, supporting secure software engineering practices for custom payment software.",
              "t": "Custom HEC `sourcetype=\"github:pull_request\"` or Splunk Add-on for Bitbucket if used",
              "d": "`index=devsecops` `sourcetype=\"github:pull_request\"` — `repo`, `merged`, `review_count`, `labels`",
              "q": "index=devsecops sourcetype=\"github:pull_request\" earliest=-7d\n| lookup pci_payment_repos.csv repo OUTPUT pci_repo\n| where pci_repo=\"true\" AND merged=\"true\"\n| eval review_ok=if(review_count>=1 AND match(labels,\"(?i)security-reviewed\"),1,0)\n| where review_ok=0\n| table _time, repo, user, review_count, labels, url\n| sort _time",
              "m": "(1) Configure org-wide webhooks with HEC token. (2) Align label policy with secure SDLC. (3) Block deploy pipeline on Splunk summary alert if desired. (4) Monthly export for PCI evidence binder.",
              "z": "Table (exceptions), pie (review_ok ratio overall), timeline (merge times).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom HEC `sourcetype=\"github:pull_request\"` or Splunk Add-on for Bitbucket if used.\n• Ensure the following data sources are available: `index=devsecops` `sourcetype=\"github:pull_request\"` — `repo`, `merged`, `review_count`, `labels`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Configure org-wide webhooks with HEC token. (2) Align label policy with secure SDLC. (3) Block deploy pipeline on Splunk summary alert if desired. (4) Monthly export for PCI evidence binder.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devsecops sourcetype=\"github:pull_request\" earliest=-7d\n| lookup pci_payment_repos.csv repo OUTPUT pci_repo\n| where pci_repo=\"true\" AND merged=\"true\"\n| eval review_ok=if(review_count>=1 AND match(labels,\"(?i)security-reviewed\"),1,0)\n| where review_ok=0\n| table _time, repo, user, review_count, labels, url\n| sort _time\n```\n\nUnderstanding this SPL\n\n**Pull-Request and Code Review Evidence for Payment Microservices (PCI DSS Req 6.2.4, 6.3.1)** — Ingests Git platform webhooks to verify merges to payment repositories include required reviewers, supporting secure software engineering practices for custom payment software.\n\nDocumented **Data sources**: `index=devsecops` `sourcetype=\"github:pull_request\"` — `repo`, `merged`, `review_count`, `labels`. **App/TA** (typical add-on context): Custom HEC `sourcetype=\"github:pull_request\"` or Splunk Add-on for Bitbucket if used. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devsecops; **sourcetype**: github:pull_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devsecops, sourcetype=\"github:pull_request\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where pci_repo=\"true\" AND merged=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **review_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where review_ok=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Pull-Request and Code Review Evidence for Payment Microservices (PCI DSS Req 6.2.4, 6.3.1)**): table _time, repo, user, review_count, labels, url\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (exceptions), pie (review_ok ratio overall), timeline (merge times).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We ingests Git platform webhooks to verify merges to payment repositories include required reviewers, supporting secure software engineering practices for custom payment software. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "6.3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 6.3.1 is enforced — Splunk UC-22.11.36: Pull-Request and Code Review Evidence for Payment Microservices.",
                  "ea": "Saved search 'UC-22.11.36' running on index=devsecops sourcetype=\"github:pull_request\" — repo, merged, review_count, labels, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.37",
              "n": "Change Control Completeness for Production Payment Releases (PCI DSS Req 6.5.1, 6.5.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Joins deployment pipeline events to approved change tickets so every production change affecting payment software follows formal change control as PCI expects for secure systems.",
              "t": "Splunk Add-on for AWS (1876), Splunk Add-on for ServiceNow (1928) if available",
              "d": "`index=ci` `sourcetype=\"jenkins:build\"` or `sourcetype=\"azuredevops:release\"`; `index=itsm` `snow:change_request`",
              "q": "index=ci sourcetype=\"jenkins:build\" earliest=-7d\n| where match(job,\"(?i)payment|checkout|token\")\n| where result=\"SUCCESS\" AND like(build_url,\"%prod%\")\n| eval chg_hint=replace(build_parameters,\".*?(CHG\\\\d{7}).*\",\"\\\\1\")\n| join type=left max=1 chg_hint [\n    search index=itsm sourcetype=\"snow:change_request\" earliest=-14d\n    | rename number as chg_hint\n    | fields chg_hint, state\n  ]\n| where isnull(state) OR NOT match(state,\"(?i)closed|implemented\")\n| table _time, job, build_url, chg_hint, state",
              "m": "(1) Require pipeline to inject `CHG` token parameter. (2) Add Git branch allow-list. (3) Alert on null `state`. (4) Retain build logs 13 months.",
              "z": "Table (unlinked deploys), column chart (violations by week), single value (count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (1876), Splunk Add-on for ServiceNow (1928) if available.\n• Ensure the following data sources are available: `index=ci` `sourcetype=\"jenkins:build\"` or `sourcetype=\"azuredevops:release\"`; `index=itsm` `snow:change_request`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require pipeline to inject `CHG` token parameter. (2) Add Git branch allow-list. (3) Alert on null `state`. (4) Retain build logs 13 months.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ci sourcetype=\"jenkins:build\" earliest=-7d\n| where match(job,\"(?i)payment|checkout|token\")\n| where result=\"SUCCESS\" AND like(build_url,\"%prod%\")\n| eval chg_hint=replace(build_parameters,\".*?(CHG\\\\d{7}).*\",\"\\\\1\")\n| join type=left max=1 chg_hint [\n    search index=itsm sourcetype=\"snow:change_request\" earliest=-14d\n    | rename number as chg_hint\n    | fields chg_hint, state\n  ]\n| where isnull(state) OR NOT match(state,\"(?i)closed|implemented\")\n| table _time, job, build_url, chg_hint, state\n```\n\nUnderstanding this SPL\n\n**Change Control Completeness for Production Payment Releases (PCI DSS Req 6.5.1, 6.5.2)** — Joins deployment pipeline events to approved change tickets so every production change affecting payment software follows formal change control as PCI expects for secure systems.\n\nDocumented **Data sources**: `index=ci` `sourcetype=\"jenkins:build\"` or `sourcetype=\"azuredevops:release\"`; `index=itsm` `snow:change_request`. **App/TA** (typical add-on context): Splunk Add-on for AWS (1876), Splunk Add-on for ServiceNow (1928) if available. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ci; **sourcetype**: jenkins:build. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ci, sourcetype=\"jenkins:build\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(job,\"(?i)payment|checkout|token\")` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where result=\"SUCCESS\" AND like(build_url,\"%prod%\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **chg_hint** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(state) OR NOT match(state,\"(?i)closed|implemented\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Change Control Completeness for Production Payment Releases (PCI DSS Req 6.5.1, 6.5.2)**): table _time, job, build_url, chg_hint, state\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Change Control Completeness for Production Payment Releases (PCI DSS Req 6.5.1, 6.5.2)** — Joins deployment pipeline events to approved change tickets so every production change affecting payment software follows formal change control as PCI expects for secure systems.\n\nDocumented **Data sources**: `index=ci` `sourcetype=\"jenkins:build\"` or `sourcetype=\"azuredevops:release\"`; `index=itsm` `snow:change_request`. **App/TA** (typical add-on context): Splunk Add-on for AWS (1876), Splunk Add-on for ServiceNow (1928) if available. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unlinked deploys), column chart (violations by week), single value (count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We joins deployment pipeline events to approved change tickets so every production change affecting payment software follows formal change control as PCI expects for secure systems. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Change"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "6.5.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 6.5.2 is enforced — Splunk UC-22.11.37: Change Control Completeness for Production Payment Releases.",
                  "ea": "Saved search 'UC-22.11.37' running on index=ci sourcetype=\"jenkins:build\" or sourcetype=\"azuredevops:release\"; index=itsm snow:change_request, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.38",
              "n": "DAST and SAST Finding Density Before Payment Service Releases (PCI DSS Req 6.3.1, 6.3.2)",
              "c": "high",
              "f": "advanced",
              "v": "Tracks high-severity static and dynamic scan findings per build of payment APIs so security testing is evidenced prior to promoting code that could affect account data.",
              "t": "Custom `sourcetype=\"veracode:scan\"` / `checkmarx:scan\"` HEC normalizers",
              "d": "`index=devsecops` `sourcetype IN (\"veracode:scan\",\"checkmarx:scan\")` — `app_name`, `build_id`, `severity`, `status`",
              "q": "index=devsecops sourcetype IN (\"veracode:scan\",\"checkmarx:scan\") earliest=-30d\n| lookup pci_payment_apps.csv app_name OUTPUT pci_app\n| where pci_app=\"true\"\n| where severity IN (\"High\",\"Very High\",\"Critical\")\n| where match(status,\"(?i)open|new)\")\n| stats count by app_name, build_id, scan_type\n| where count>0\n| sort - count",
              "m": "(1) Map scanner project names to internal `app_name`. (2) Gate release on `count=0` for Critical if policy mandates. (3) Correlate with UC-22.11.37 for same `build_id`. (4) Quarterly trending for PCI interview.",
              "z": "Stacked bar (open highs by app), table (build_id), line (count over time).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom `sourcetype=\"veracode:scan\"` / `checkmarx:scan\"` HEC normalizers.\n• Ensure the following data sources are available: `index=devsecops` `sourcetype IN (\"veracode:scan\",\"checkmarx:scan\")` — `app_name`, `build_id`, `severity`, `status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map scanner project names to internal `app_name`. (2) Gate release on `count=0` for Critical if policy mandates. (3) Correlate with UC-22.11.37 for same `build_id`. (4) Quarterly trending for PCI interview.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=devsecops sourcetype IN (\"veracode:scan\",\"checkmarx:scan\") earliest=-30d\n| lookup pci_payment_apps.csv app_name OUTPUT pci_app\n| where pci_app=\"true\"\n| where severity IN (\"High\",\"Very High\",\"Critical\")\n| where match(status,\"(?i)open|new)\")\n| stats count by app_name, build_id, scan_type\n| where count>0\n| sort - count\n```\n\nUnderstanding this SPL\n\n**DAST and SAST Finding Density Before Payment Service Releases (PCI DSS Req 6.3.1, 6.3.2)** — Tracks high-severity static and dynamic scan findings per build of payment APIs so security testing is evidenced prior to promoting code that could affect account data.\n\nDocumented **Data sources**: `index=devsecops` `sourcetype IN (\"veracode:scan\",\"checkmarx:scan\")` — `app_name`, `build_id`, `severity`, `status`. **App/TA** (typical add-on context): Custom `sourcetype=\"veracode:scan\"` / `checkmarx:scan\"` HEC normalizers. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: devsecops.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=devsecops, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where pci_app=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where severity IN (\"High\",\"Very High\",\"Critical\")` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(status,\"(?i)open|new)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by app_name, build_id, scan_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DAST and SAST Finding Density Before Payment Service Releases (PCI DSS Req 6.3.1, 6.3.2)** — Tracks high-severity static and dynamic scan findings per build of payment APIs so security testing is evidenced prior to promoting code that could affect account data.\n\nDocumented **Data sources**: `index=devsecops` `sourcetype IN (\"veracode:scan\",\"checkmarx:scan\")` — `app_name`, `build_id`, `severity`, `status`. **App/TA** (typical add-on context): Custom `sourcetype=\"veracode:scan\"` / `checkmarx:scan\"` HEC normalizers. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (open highs by app), table (build_id), line (count over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track high-severity static and dynamic scan findings per build of payment APIs so security testing is evidenced prior to promoting code that could affect account data. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "6.3.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 6.3.2 is enforced — Splunk UC-22.11.38: DAST and SAST Finding Density Before Payment Service Releases.",
                  "ea": "Saved search 'UC-22.11.38' running on index=devsecops sourcetype IN (\"veracode:scan\",\"checkmarx:scan\") — app_name, build_id, severity, status, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.39",
              "n": "Public-Facing Payment Web Tier Patch and Library Drift (PCI DSS Req 6.3.3, 6.2.4)",
              "c": "high",
              "f": "intermediate",
              "v": "Surfaces outdated frameworks (Log4j, OpenSSL) reported by container image scanners for checkout services, tying technical debt to protection of public-facing web applications.",
              "t": "Splunk Add-on for Qualys (2964), Splunk Add-on for Tenable (4060)",
              "d": "`index=vm` `qualys:container` or `tenable:sc:vuln` container plugins",
              "q": "index=vm sourcetype=\"qualys:container\" earliest=-7d\n| where match(image_repo,\"(?i)checkout|payment|gw|token\")\n| where match(_raw,\"(?i)log4j|openssl|struts|spring4shell)\")\n| stats latest(_time) as last_seen by image_repo, image_tag, dq\n| sort image_repo, image_tag",
              "m": "(1) Ingest CI-generated image tags with digest. (2) Map repo names to PCI asset IDs. (3) Auto-create patch stories under SLA. (4) Document accepted risk for deferred base image updates.",
              "z": "Table (image_repo, tag, dq), single value (images with findings), timeline (last_seen).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Qualys](https://splunkbase.splunk.com/app/2964), [Splunk Add-on for Tenable](https://splunkbase.splunk.com/app/4060), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Qualys (2964), Splunk Add-on for Tenable (4060).\n• Ensure the following data sources are available: `index=vm` `qualys:container` or `tenable:sc:vuln` container plugins.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CI-generated image tags with digest. (2) Map repo names to PCI asset IDs. (3) Auto-create patch stories under SLA. (4) Document accepted risk for deferred base image updates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vm sourcetype=\"qualys:container\" earliest=-7d\n| where match(image_repo,\"(?i)checkout|payment|gw|token\")\n| where match(_raw,\"(?i)log4j|openssl|struts|spring4shell)\")\n| stats latest(_time) as last_seen by image_repo, image_tag, dq\n| sort image_repo, image_tag\n```\n\nUnderstanding this SPL\n\n**Public-Facing Payment Web Tier Patch and Library Drift (PCI DSS Req 6.3.3, 6.2.4)** — Surfaces outdated frameworks (Log4j, OpenSSL) reported by container image scanners for checkout services, tying technical debt to protection of public-facing web applications.\n\nDocumented **Data sources**: `index=vm` `qualys:container` or `tenable:sc:vuln` container plugins. **App/TA** (typical add-on context): Splunk Add-on for Qualys (2964), Splunk Add-on for Tenable (4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vm; **sourcetype**: qualys:container. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vm, sourcetype=\"qualys:container\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(image_repo,\"(?i)checkout|payment|gw|token\")` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(_raw,\"(?i)log4j|openssl|struts|spring4shell)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by image_repo, image_tag, dq** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Public-Facing Payment Web Tier Patch and Library Drift (PCI DSS Req 6.3.3, 6.2.4)** — Surfaces outdated frameworks (Log4j, OpenSSL) reported by container image scanners for checkout services, tying technical debt to protection of public-facing web applications.\n\nDocumented **Data sources**: `index=vm` `qualys:container` or `tenable:sc:vuln` container plugins. **App/TA** (typical add-on context): Splunk Add-on for Qualys (2964), Splunk Add-on for Tenable (4060). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (image_repo, tag, dq), single value (images with findings), timeline (last_seen).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface outdated frameworks (Log4j, OpenSSL) reported by container image scanners for checkout services, tying technical debt to protection of public-facing web applications. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "qualys",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "6.2.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 6.2.4 is enforced — Splunk UC-22.11.39: Public-Facing Payment Web Tier Patch and Library Drift.",
                  "ea": "Saved search 'UC-22.11.39' running on index=vm qualys:container or tenable:sc:vuln container plugins, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.40",
              "n": "Role-Based Group Membership Drift for Active Directory CDE OU (PCI DSS Req 7.2.1, 7.2.2)",
              "c": "high",
              "f": "advanced",
              "v": "Detects additions to privileged groups tied to the CDE organizational unit so access restrictions remain aligned with documented roles and least privilege between quarterly access reviews.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263)",
              "d": "`index=windows` `WinEventLog:Security` EventCode 4728,4732,4756; `sourcetype=\"msad:change\"` if Splunk AD TA",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4728,4732,4756) earliest=-24h\n| lookup pci_cde_ad_groups.csv GroupName OUTPUT cde_group\n| where cde_group=\"true\"\n| stats earliest(_time) as added_time, values(MemberName) as members by GroupName, SubjectUserName\n| sort - added_time",
              "m": "(1) Maintain `pci_cde_ad_groups.csv` for Domain Admins subset used in CDE. (2) Correlate `SubjectUserName` with privileged account inventory. (3) Immediate alert to IAM. (4) Snapshot daily to summary index for reviews.",
              "z": "Timeline (4728 events), table (GroupName, members), single value (daily adds).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=windows` `WinEventLog:Security` EventCode 4728,4732,4756; `sourcetype=\"msad:change\"` if Splunk AD TA.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `pci_cde_ad_groups.csv` for Domain Admins subset used in CDE. (2) Correlate `SubjectUserName` with privileged account inventory. (3) Immediate alert to IAM. (4) Snapshot daily to summary index for reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4728,4732,4756) earliest=-24h\n| lookup pci_cde_ad_groups.csv GroupName OUTPUT cde_group\n| where cde_group=\"true\"\n| stats earliest(_time) as added_time, values(MemberName) as members by GroupName, SubjectUserName\n| sort - added_time\n```\n\nUnderstanding this SPL\n\n**Role-Based Group Membership Drift for Active Directory CDE OU (PCI DSS Req 7.2.1, 7.2.2)** — Detects additions to privileged groups tied to the CDE organizational unit so access restrictions remain aligned with documented roles and least privilege between quarterly access reviews.\n\nDocumented **Data sources**: `index=windows` `WinEventLog:Security` EventCode 4728,4732,4756; `sourcetype=\"msad:change\"` if Splunk AD TA. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cde_group=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by GroupName, SubjectUserName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Role-Based Group Membership Drift for Active Directory CDE OU (PCI DSS Req 7.2.1, 7.2.2)** — Detects additions to privileged groups tied to the CDE organizational unit so access restrictions remain aligned with documented roles and least privilege between quarterly access reviews.\n\nDocumented **Data sources**: `index=windows` `WinEventLog:Security` EventCode 4728,4732,4756; `sourcetype=\"msad:change\"` if Splunk AD TA. 
**App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (4728 events), table (GroupName, members), single value (daily adds).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect additions to privileged groups tied to the CDE organizational unit so access restrictions remain aligned with documented roles and least privilege between quarterly access reviews. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "7.2.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 7.2.2 is enforced — Splunk UC-22.11.40: Role-Based Group Membership Drift for Active Directory CDE OU.",
                  "ea": "Saved search 'UC-22.11.40' running on index=windows WinEventLog:Security EventCode 4728,4732,4756; sourcetype=\"msad:change\" if Splunk AD TA, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.41",
              "n": "Excessive Database Grants on Schemas Storing Tokenized PAN (PCI DSS Req 7.2.1, 7.2.5)",
              "c": "critical",
              "f": "advanced",
              "v": "Highlights service accounts or users with `db_owner` or `SELECT` on token vault schemas beyond the documented roster, evidencing least privilege for access to systems with account data.",
              "t": "Splunk DB Connect, `sourcetype=\"mssql:audit\"` or scripted `sql:grants` HEC",
              "d": "`index=db` `sourcetype=\"sql:grants\"` — `principal`, `permission`, `schema_name`, `host`",
              "q": "index=db sourcetype=\"sql:grants\" earliest=-24h\n| where match(schema_name,\"(?i)token|vault|payment|cardholder)\")\n| lookup pci_db_allowed_principals.csv principal OUTPUT allowed\n| where isnull(allowed) OR allowed=\"false\"\n| stats values(permission) as perms by host, principal, schema_name\n| sort host, schema_name",
              "m": "(1) Export grants nightly via SQL job to HEC. (2) Refresh allowed principal list after each access review. (3) Auto-ticket violations. (4) Store weekly CSV for PCI Req 7 workpapers.",
              "z": "Table (violations), heatmap (principal x schema), single value (distinct excessive grants).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect, `sourcetype=\"mssql:audit\"` or scripted `sql:grants` HEC.\n• Ensure the following data sources are available: `index=db` `sourcetype=\"sql:grants\"` — `principal`, `permission`, `schema_name`, `host`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export grants nightly via SQL job to HEC. (2) Refresh allowed principal list after each access review. (3) Auto-ticket violations. (4) Store weekly CSV for PCI Req 7 workpapers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=db sourcetype=\"sql:grants\" earliest=-24h\n| where match(schema_name,\"(?i)token|vault|payment|cardholder)\")\n| lookup pci_db_allowed_principals.csv principal OUTPUT allowed\n| where isnull(allowed) OR allowed=\"false\"\n| stats values(permission) as perms by host, principal, schema_name\n| sort host, schema_name\n```\n\nUnderstanding this SPL\n\n**Excessive Database Grants on Schemas Storing Tokenized PAN (PCI DSS Req 7.2.1, 7.2.5)** — Highlights service accounts or users with `db_owner` or `SELECT` on token vault schemas beyond the documented roster, evidencing least privilege for access to systems with account data.\n\nDocumented **Data sources**: `index=db` `sourcetype=\"sql:grants\"` — `principal`, `permission`, `schema_name`, `host`. **App/TA** (typical add-on context): Splunk DB Connect, `sourcetype=\"mssql:audit\"` or scripted `sql:grants` HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: db; **sourcetype**: sql:grants. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=db, sourcetype=\"sql:grants\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(schema_name,\"(?i)token|vault|payment|cardholder)\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(allowed) OR allowed=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, principal, schema_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), heatmap (principal x schema), single value (distinct excessive grants).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We highlights service accounts or users with `db_owner` or `SELECT` on token vault schemas beyond the documented roster, evidencing least privilege for access to systems with account data. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "7.2.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 7.2.5 is enforced — Splunk UC-22.11.41: Excessive Database Grants on Schemas Storing Tokenized PAN.",
                  "ea": "Saved search 'UC-22.11.41' running on index=db sourcetype=\"sql:grants\" — principal, permission, schema_name, host, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.42",
              "n": "Access Request and Approval Completeness for CDE VPN Accounts (PCI DSS Req 7.2.3, 8.2.1)",
              "c": "high",
              "f": "intermediate",
              "v": "Verifies each new VPN entitlement for CDE users has a matching approved access request record so access is granted only with documented business justification.",
              "t": "Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for ServiceNow (1928)",
              "d": "`index=netfw` GlobalProtect or `sourcetype=\"pan:globalprotect\"`; `index=itsm` `snow:sc_req_item`",
              "q": "index=netfw sourcetype=\"pan:globalprotect\" subtype=\"gateway-connected\" earliest=-7d\n| lookup pci_cde_vpn_profiles.csv portal OUTPUT cde_profile\n| where cde_profile=\"true\"\n| eval user=lower(user)\n| join type=left max=1 user [\n    search index=itsm sourcetype=\"snow:sc_req_item\" earliest=-30d short_description=\"*VPN*CDE*\"\n    | eval user=lower(caller_id)\n    | stats latest(state) as req_state by user\n  ]\n| where isnull(req_state) OR NOT match(req_state,\"(?i)closed|fulfilled\")\n| stats dc(_time) as connect_events by user, source\n| sort - connect_events",
              "m": "(1) Normalize `user` to corporate UPN. (2) Align RITM naming with join filter. (3) Alert IAM on missing ticket within 24h of first connect. (4) Monthly reconciliation report.",
              "z": "Table (users without tickets), column chart (connect_events), pie (fulfilled vs missing).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for ServiceNow (1928).\n• Ensure the following data sources are available: `index=netfw` GlobalProtect or `sourcetype=\"pan:globalprotect\"`; `index=itsm` `snow:sc_req_item`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize `user` to corporate UPN. (2) Align RITM naming with join filter. (3) Alert IAM on missing ticket within 24h of first connect. (4) Monthly reconciliation report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype=\"pan:globalprotect\" subtype=\"gateway-connected\" earliest=-7d\n| lookup pci_cde_vpn_profiles.csv portal OUTPUT cde_profile\n| where cde_profile=\"true\"\n| eval user=lower(user)\n| join type=left max=1 user [\n    search index=itsm sourcetype=\"snow:sc_req_item\" earliest=-30d short_description=\"*VPN*CDE*\"\n    | eval user=lower(caller_id)\n    | stats latest(state) as req_state by user\n  ]\n| where isnull(req_state) OR NOT match(req_state,\"(?i)closed|fulfilled\")\n| stats dc(_time) as connect_events by user, source\n| sort - connect_events\n```\n\nUnderstanding this SPL\n\n**Access Request and Approval Completeness for CDE VPN Accounts (PCI DSS Req 7.2.3, 8.2.1)** — Verifies each new VPN entitlement for CDE users has a matching approved access request record so access is granted only with documented business justification.\n\nDocumented **Data sources**: `index=netfw` GlobalProtect or `sourcetype=\"pan:globalprotect\"`; `index=itsm` `snow:sc_req_item`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for ServiceNow (1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw; **sourcetype**: pan:globalprotect. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, sourcetype=\"pan:globalprotect\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cde_profile=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(req_state) OR NOT match(req_state,\"(?i)closed|fulfilled\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, source** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Access Request and Approval Completeness for CDE VPN Accounts (PCI DSS Req 7.2.3, 8.2.1)** — Verifies each new VPN entitlement for CDE users has a matching approved access request record so access is granted only with documented business justification.\n\nDocumented **Data sources**: `index=netfw` GlobalProtect or `sourcetype=\"pan:globalprotect\"`; `index=itsm` `snow:sc_req_item`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for ServiceNow (1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users without tickets), column chart (connect_events), pie (fulfilled vs missing).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We verifies each new VPN entitlement for CDE users has a matching approved access request record so access is granted only with documented business justification. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "paloalto",
                "servicenow"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "8.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 8.2.1 is enforced — Splunk UC-22.11.42: Access Request and Approval Completeness for CDE VPN Accounts.",
                  "ea": "Saved search 'UC-22.11.42' running on index=netfw GlobalProtect or sourcetype=\"pan:globalprotect\"; index=itsm snow:sc_req_item, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                },
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.43",
              "n": "Least-Privilege Validation — Interactive Logons to Database Tier from Workstations (PCI DSS Req 7.2.5, 8.2.1)",
              "c": "high",
              "f": "advanced",
              "v": "Surfaces database interactive logins from non-administrator workstations that may violate least privilege for personnel access to cardholder data stores.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft SQL Server (2648) if audit ingested",
              "d": "`index=db` `mssql:audit` action=\"AUDIT SESSION\" or `sourcetype=\"oracle:audit\"`; `lookup pci_jump_hosts.csv` client IP",
              "q": "index=db sourcetype=\"mssql:audit\" earliest=-24h\n| where match(action,\"(?i)login|authentication\")\n| lookup pci_jump_hosts.csv ip AS client_ip OUTPUT is_jump\n| where isnull(is_jump) OR is_jump=\"false\"\n| lookup pci_db_tier.csv server_name OUTPUT tier\n| where tier=\"carddata_db\"\n| stats count by client_ip, server_name, database_principal_name\n| sort - count",
              "m": "(1) Populate jump host egress IPs. (2) Exclude application service accounts via lookup. (3) Pair with bastion session logs for false positives. (4) Quarterly access review input.",
              "z": "Table (client_ip, principal), geomap, bar (count by server_name).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft SQL Server](https://splunkbase.splunk.com/app/2648), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft SQL Server (2648) if audit ingested.\n• Ensure the following data sources are available: `index=db` `mssql:audit` action=\"AUDIT SESSION\" or `sourcetype=\"oracle:audit\"`; `lookup pci_jump_hosts.csv` client IP.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate jump host egress IPs. (2) Exclude application service accounts via lookup. (3) Pair with bastion session logs for false positives. (4) Quarterly access review input.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=db sourcetype=\"mssql:audit\" earliest=-24h\n| where match(action,\"(?i)login|authentication\")\n| lookup pci_jump_hosts.csv ip AS client_ip OUTPUT is_jump\n| where isnull(is_jump) OR is_jump=\"false\"\n| lookup pci_db_tier.csv server_name OUTPUT tier\n| where tier=\"carddata_db\"\n| stats count by client_ip, server_name, database_principal_name\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Least-Privilege Validation — Interactive Logons to Database Tier from Workstations (PCI DSS Req 7.2.5, 8.2.1)** — Surfaces database interactive logins from non-administrator workstations that may violate least privilege for personnel access to cardholder data stores.\n\nDocumented **Data sources**: `index=db` `mssql:audit` action=\"AUDIT SESSION\" or `sourcetype=\"oracle:audit\"`; `lookup pci_jump_hosts.csv` client IP. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft SQL Server (2648) if audit ingested. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: db; **sourcetype**: mssql:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=db, sourcetype=\"mssql:audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(action,\"(?i)login|authentication\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(is_jump) OR is_jump=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where tier=\"carddata_db\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by client_ip, server_name, database_principal_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Least-Privilege Validation — Interactive Logons to Database Tier from Workstations (PCI DSS Req 7.2.5, 8.2.1)** — Surfaces database interactive logins from non-administrator workstations that may violate least privilege for personnel access to cardholder data stores.\n\nDocumented **Data sources**: `index=db` `mssql:audit` action=\"AUDIT SESSION\" or `sourcetype=\"oracle:audit\"`; `lookup pci_jump_hosts.csv` client IP. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft SQL Server (2648) if audit ingested. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (client_ip, principal), geomap, bar (count by server_name).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface database interactive logins from non-administrator workstations that may violate least privilege for personnel access to cardholder data stores. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count",
              "e": [
                "mssql"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "8.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 8.2.1 is enforced — Splunk UC-22.11.43: Least-Privilege Validation — Interactive Logons to Database Tier from Workstations.",
                  "ea": "Saved search 'UC-22.11.43' running on index=db mssql:audit action=\"AUDIT SESSION\" or sourcetype=\"oracle:audit\"; lookup pci_jump_hosts.csv client IP, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.44",
              "n": "Privilege Escalation Chains on CDE Windows Servers (PCI DSS Req 7.2.5, 10.2.2)",
              "c": "critical",
              "f": "expert",
              "v": "Correlates token elevation, service installation, and logon type sequences on payment servers so unauthorized privilege escalation is detected and investigated under access restriction obligations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=windows` `WinEventLog:Security` EventCode 4672,4698,7045; `lookup pci_payment_servers.csv`",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4672,4698,7045) earliest=-24h\n| lookup pci_payment_servers.csv ComputerName OUTPUT pci_host\n| where pci_host=\"true\"\n| transaction ComputerName maxspan=300m maxevents=20 startswith=EventCode=4672 endswith=EventCode=7045\n| where eventcount>=3\n| table duration, ComputerName, EventCode, Security_ID, Account_Name",
              "m": "(1) Tune `maxspan` to environment. (2) Add known patch service names to allow list. (3) Create ES correlation search from this pattern. (4) Retain transactions in summary index with redaction.",
              "z": "Timeline (transaction), table (suspicious hosts), single value (transaction count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=windows` `WinEventLog:Security` EventCode 4672,4698,7045; `lookup pci_payment_servers.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune `maxspan` to environment. (2) Add known patch service names to allow list. (3) Create ES correlation search from this pattern. (4) Retain transactions in summary index with redaction.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4672,4698,7045) earliest=-24h\n| lookup pci_payment_servers.csv ComputerName OUTPUT pci_host\n| where pci_host=\"true\"\n| transaction ComputerName maxspan=300m maxevents=20 startswith=EventCode=4672 endswith=EventCode=7045\n| where eventcount>=3\n| table duration, ComputerName, EventCode, Security_ID, Account_Name\n```\n\nUnderstanding this SPL\n\n**Privilege Escalation Chains on CDE Windows Servers (PCI DSS Req 7.2.5, 10.2.2)** — Correlates token elevation, service installation, and logon type sequences on payment servers so unauthorized privilege escalation is detected and investigated under access restriction obligations.\n\nDocumented **Data sources**: `index=windows` `WinEventLog:Security` EventCode 4672,4698,7045; `lookup pci_payment_servers.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where pci_host=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• Filters the current rows with `where eventcount>=3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Privilege Escalation Chains on CDE Windows Servers (PCI DSS Req 7.2.5, 10.2.2)**): table duration, ComputerName, EventCode, Security_ID, Account_Name\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privilege Escalation Chains on CDE Windows Servers (PCI DSS Req 7.2.5, 10.2.2)** — Correlates token elevation, service installation, and logon type sequences on payment servers so unauthorized privilege escalation is detected and investigated under access restriction obligations.\n\nDocumented **Data sources**: `index=windows` `WinEventLog:Security` EventCode 4672,4698,7045; `lookup pci_payment_servers.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (transaction), table (suspicious hosts), single value (transaction count).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We correlate token elevation, service installation, and logon type sequences on payment servers so unauthorized privilege escalation is detected and investigated under access restriction obligations. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.2.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 10.2.2 is enforced — Splunk UC-22.11.44: Privilege Escalation Chains on CDE Windows Servers.",
                  "ea": "Saved search 'UC-22.11.44' running on index=windows WinEventLog:Security EventCode 4672,4698,7045; lookup pci_payment_servers.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.45",
              "n": "Shared and Break-Glass Account Usage on Payment Infrastructure (PCI DSS Req 8.2.6, 8.6.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "Tracks authentications to shared admin or break-glass accounts on in-scope systems so unique identification requirements are enforced except for documented emergency scenarios.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833)",
              "d": "`index=windows` EventCode 4624; `index=linux` `linux:secure`",
              "q": "(index=windows sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-24h)\n OR (index=linux sourcetype=\"linux:secure\" earliest=-24h)\n| eval acct=coalesce(TargetUserName,user)\n| lookup pci_shared_accounts.csv account AS acct OUTPUT shared_flag\n| where shared_flag=\"true\"\n| lookup pci_in_scope_hosts.csv host OUTPUT in_scope\n| where in_scope=\"true\"\n| stats values(src_ip) as src_ips, count by acct, host\n| sort - count",
              "m": "(1) Inventory all shared accounts with approvers. (2) Require ticket number in logon script where possible. (3) Alert on concurrent src_ips. (4) Quarterly review of `src_ips` diversity.",
              "z": "Table (acct, host, src_ips), chord or link chart (acct to src_ip), single value (distinct shared logons).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833).\n• Ensure the following data sources are available: `index=windows` EventCode 4624; `index=linux` `linux:secure`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Inventory all shared accounts with approvers. (2) Require ticket number in logon script where possible. (3) Alert on concurrent src_ips. (4) Quarterly review of `src_ips` diversity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=windows sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-24h)\n OR (index=linux sourcetype=\"linux:secure\" earliest=-24h)\n| eval acct=coalesce(TargetUserName,user)\n| lookup pci_shared_accounts.csv account AS acct OUTPUT shared_flag\n| where shared_flag=\"true\"\n| lookup pci_in_scope_hosts.csv host OUTPUT in_scope\n| where in_scope=\"true\"\n| stats values(src_ip) as src_ips, count by acct, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Shared and Break-Glass Account Usage on Payment Infrastructure (PCI DSS Req 8.2.6, 8.6.1)** — Tracks authentications to shared admin or break-glass accounts on in-scope systems so unique identification requirements are enforced except for documented emergency scenarios.\n\nDocumented **Data sources**: `index=windows` EventCode 4624; `index=linux` `linux:secure`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, linux; **sourcetype**: WinEventLog:Security, linux:secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, index=linux, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **acct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where shared_flag=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by acct, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Shared and Break-Glass Account Usage on Payment Infrastructure (PCI DSS Req 8.2.6, 8.6.1)** — Tracks authentications to shared admin or break-glass accounts on in-scope systems so unique identification requirements are enforced except for documented emergency scenarios.\n\nDocumented **Data sources**: `index=windows` EventCode 4624; `index=linux` `linux:secure`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (acct, host, src_ips), chord or link chart (acct to src_ip), single value (distinct shared logons).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track authentications to shared admin or break-glass accounts on in-scope systems so unique identification requirements are enforced except for documented emergency scenarios. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "8.6.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 8.6.1 is enforced — Splunk UC-22.11.45: Shared and Break-Glass Account Usage on Payment Infrastructure.",
                  "ea": "Saved search 'UC-22.11.45' running on index=windows EventCode 4624; index=linux linux:secure, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.46",
              "n": "Vendor Remote Access Sessions into CDE Jump Hosts (PCI DSS Req 7.2.5, 12.8.1)",
              "c": "high",
              "f": "advanced",
              "v": "Lists vendor identities and source IPs using remote access paths into the CDE so third-party access is monitored consistently with service provider due diligence expectations.",
              "t": "Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=windows` EventCode 4624 LogonType 10; `index=netfw` GlobalProtect logs with vendor realm",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" EventCode=4624 LogonType=10 earliest=-7d\n| lookup pci_jump_hosts.csv ComputerName OUTPUT jump_host\n| where jump_host=\"true\"\n| lookup pci_vendor_accounts.csv TargetUserName OUTPUT vendor_name\n| where isnotnull(vendor_name)\n| stats earliest(_time) as first_seen, latest(_time) as last_seen by TargetUserName, vendor_name, WorkstationName, IpAddress\n| sort vendor_name",
              "m": "(1) Map vendor accounts to contracts in GRC. (2) Cross-check active maintenance windows. (3) Alert outside window. (4) Include in third-party access quarterly attestations.",
              "z": "Timeline (sessions), table (vendor_name, user), map (IpAddress).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=windows` EventCode 4624 LogonType 10; `index=netfw` GlobalProtect logs with vendor realm.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map vendor accounts to contracts in GRC. (2) Cross-check active maintenance windows. (3) Alert outside window. (4) Include in third-party access quarterly attestations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Security\" EventCode=4624 LogonType=10 earliest=-7d\n| lookup pci_jump_hosts.csv ComputerName OUTPUT jump_host\n| where jump_host=\"true\"\n| lookup pci_vendor_accounts.csv TargetUserName OUTPUT vendor_name\n| where isnotnull(vendor_name)\n| stats earliest(_time) as first_seen, latest(_time) as last_seen by TargetUserName, vendor_name, WorkstationName, IpAddress\n| sort vendor_name\n```\n\nUnderstanding this SPL\n\n**Vendor Remote Access Sessions into CDE Jump Hosts (PCI DSS Req 7.2.5, 12.8.1)** — Lists vendor identities and source IPs using remote access paths into the CDE so third-party access is monitored consistently with service provider due diligence expectations.\n\nDocumented **Data sources**: `index=windows` EventCode 4624 LogonType 10; `index=netfw` GlobalProtect logs with vendor realm. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where jump_host=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(vendor_name)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by TargetUserName, vendor_name, WorkstationName, IpAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Vendor Remote Access Sessions into CDE Jump Hosts (PCI DSS Req 7.2.5, 12.8.1)** — Lists vendor identities and source IPs using remote access paths into the CDE so third-party access is monitored consistently with service provider due diligence expectations.\n\nDocumented **Data sources**: `index=windows` EventCode 4624 LogonType 10; `index=netfw` GlobalProtect logs with vendor realm. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (sessions), table (vendor_name, user), map (IpAddress).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We lists vendor identities and source IPs using remote access paths into the CDE so third-party access is monitored consistently with service provider due diligence expectations. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.8.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 12.8.1 is enforced — Splunk UC-22.11.46: Vendor Remote Access Sessions into CDE Jump Hosts.",
                  "ea": "Saved search 'UC-22.11.46' running on index=windows EventCode 4624 LogonType 10; index=netfw GlobalProtect logs with vendor realm, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.47",
              "n": "MFA Gap Detection for CDE Interactive and Remote Access (PCI DSS Req 8.4.1, 8.4.2)",
              "c": "critical",
              "f": "advanced",
              "v": "Surfaces successful logons to CDE systems where MFA claim or step-up evidence is absent, supporting multi-factor authentication for all access into the cardholder environment.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=windows` EventCode 4624 with `AuthenticationPackageName`; Azure AD sign-in logs if hybrid `sourcetype=\"azure:aad:signin\"`",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-24h\n| lookup pci_cde_servers.csv ComputerName OUTPUT in_cde\n| where in_cde=\"true\"\n| where LogonType IN (10,3)\n| eval mfa_ok=if(match(AuthenticationPackageName,\"(?i)Negotiate\") AND isnotnull(TransmittedServices),1,0)\n| where mfa_ok=0\n| stats count by ComputerName, TargetUserName, LogonType, WorkstationName\n| sort - count",
              "m": "(1) Enrich with ADFS/Azure token claims via forwarder if available—replace `mfa_ok` logic accordingly. (2) Exclude machine accounts. (3) High-fidelity alert for human users only. (4) Document compensating VPN MFA at perimeter if host MFA absent.",
              "z": "Table (gaps), pie (MFA_ok vs not), single value (distinct users affected).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=windows` EventCode 4624 with `AuthenticationPackageName`; Azure AD sign-in logs if hybrid `sourcetype=\"azure:aad:signin\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enrich with ADFS/Azure token claims via forwarder if available—replace `mfa_ok` logic accordingly. (2) Exclude machine accounts. (3) High-fidelity alert for human users only. (4) Document compensating VPN MFA at perimeter if host MFA absent.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-24h\n| lookup pci_cde_servers.csv ComputerName OUTPUT in_cde\n| where in_cde=\"true\"\n| where LogonType IN (10,3)\n| eval mfa_ok=if(match(AuthenticationPackageName,\"(?i)Negotiate\") AND isnotnull(TransmittedServices),1,0)\n| where mfa_ok=0\n| stats count by ComputerName, TargetUserName, LogonType, WorkstationName\n| sort - count\n```\n\nUnderstanding this SPL\n\n**MFA Gap Detection for CDE Interactive and Remote Access (PCI DSS Req 8.4.1, 8.4.2)** — Surfaces successful logons to CDE systems where MFA claim or step-up evidence is absent, supporting multi-factor authentication for all access into the cardholder environment.\n\nDocumented **Data sources**: `index=windows` EventCode 4624 with `AuthenticationPackageName`; Azure AD sign-in logs if hybrid `sourcetype=\"azure:aad:signin\"`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_cde=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where LogonType IN (10,3)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **mfa_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mfa_ok=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ComputerName, TargetUserName, LogonType, WorkstationName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**MFA Gap Detection for CDE Interactive and Remote Access (PCI DSS Req 8.4.1, 8.4.2)** — Surfaces successful logons to CDE systems where MFA claim or step-up evidence is absent, supporting multi-factor authentication for all access into the cardholder environment.\n\nDocumented **Data sources**: `index=windows` EventCode 4624 with `AuthenticationPackageName`; Azure AD sign-in logs if hybrid `sourcetype=\"azure:aad:signin\"`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (gaps), pie (MFA_ok vs not), single value (distinct users affected).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface successful logons to CDE systems where MFA claim or step-up evidence is absent, supporting multi-factor authentication for all access into the cardholder environment. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "8.4.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 8.4.2 is enforced — Splunk UC-22.11.47: MFA Gap Detection for CDE Interactive and Remote Access.",
                  "ea": "Saved search 'UC-22.11.47' running on sourcetype azure:aad:signin and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.48",
              "n": "Domain Password Policy Compliance via Resultant Set of Policy Events (PCI DSS Req 8.3.6, 8.3.7)",
              "c": "medium",
              "f": "intermediate",
              "v": "Ingests periodic exports of password policy settings to confirm minimum length, complexity, and history requirements for accounts with access to system components in scope for PCI.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=windows` `sourcetype=\"ad:password_policy\"` from scripted LDAP dump to HEC",
              "q": "index=windows sourcetype=\"ad:password_policy\" earliest=-7d\n| eval meets_pci=if(minPwdLength>=12 AND pwdHistoryLength>=4 AND complexityRequired=\"TRUE\",1,0)\n| where meets_pci=0\n| stats latest(_time) as last_report by domain_dn, minPwdLength, pwdHistoryLength, complexityRequired\n| sort domain_dn",
              "m": "(1) Schedule PowerShell `Get-ADDefaultDomainPasswordPolicy` hourly to HEC. (2) Map fine-grained password policies per CDE OU if used. (3) Alert GPO team on failure. (4) Store compliant snapshots quarterly.",
              "z": "Table (non-compliant domains), gauge (meets_pci %), timeline (policy changes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=windows` `sourcetype=\"ad:password_policy\"` from scripted LDAP dump to HEC.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Schedule PowerShell `Get-ADDefaultDomainPasswordPolicy` hourly to HEC. (2) Map fine-grained password policies per CDE OU if used. (3) Alert GPO team on failure. (4) Store compliant snapshots quarterly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"ad:password_policy\" earliest=-7d\n| eval meets_pci=if(minPwdLength>=12 AND pwdHistoryLength>=4 AND complexityRequired=\"TRUE\",1,0)\n| where meets_pci=0\n| stats latest(_time) as last_report by domain_dn, minPwdLength, pwdHistoryLength, complexityRequired\n| sort domain_dn\n```\n\nUnderstanding this SPL\n\n**Domain Password Policy Compliance via Resultant Set of Policy Events (PCI DSS Req 8.3.6, 8.3.7)** — Ingests periodic exports of password policy settings to confirm minimum length, complexity, and history requirements for accounts with access to system components in scope for PCI.\n\nDocumented **Data sources**: `index=windows` `sourcetype=\"ad:password_policy\"` from scripted LDAP dump to HEC. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: ad:password_policy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"ad:password_policy\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **meets_pci** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where meets_pci=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by domain_dn, minPwdLength, pwdHistoryLength, complexityRequired** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant domains), gauge (meets_pci %), timeline (policy changes).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. 
key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We ingests periodic exports of password policy settings to confirm minimum length, complexity, and history requirements for accounts with access to system components in scope for PCI. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "8.3.7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 8.3.7 is enforced — Splunk UC-22.11.48: Domain Password Policy Compliance via Resultant Set of Policy Events.",
                  "ea": "Saved search 'UC-22.11.48' running on index=windows sourcetype=\"ad:password_policy\" from scripted LDAP dump to HEC, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v3.2.1",
                  "cl": "8.2.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 8.2.3 (Strong password parameters) is enforced — Splunk UC-22.11.48: Domain Password Policy Compliance via Resultant Set of Policy Events.",
                  "ea": "Saved search 'UC-22.11.48' running on index=windows sourcetype=\"ad:password_policy\" from scripted LDAP dump to HEC, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.49",
              "n": "Failed Authentication Burst Detection on Payment Gateway Accounts (PCI DSS Req 8.3.4, 10.2.4)",
              "c": "high",
              "f": "intermediate",
              "v": "Trends failed logon attempts against payment application service accounts and operator IDs so brute-force activity is visible and aligned with monitoring of invalid access attempts.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833)",
              "d": "`index=windows` EventCode 4625; `index=linux` `linux:secure` `failed password`",
              "q": "(index=windows sourcetype=\"WinEventLog:Security\" EventCode=4625 earliest=-24h)\n OR (index=linux sourcetype=\"linux:secure\" \"failed password\" earliest=-24h)\n| eval acct=coalesce(TargetUserName,user)\n| lookup pci_payment_accounts.csv account AS acct OUTPUT gateway_acct\n| where gateway_acct=\"true\"\n| bin _time span=5m\n| stats count by _time, acct, src_ip\n| where count>25\n| sort - count",
              "m": "(1) Populate payment service account list. (2) Whitelist vulnerability scanner IPs. (3) Auto-block at firewall for sustained bursts. (4) Daily SOC review metrics.",
              "z": "Timechart (failed logons), table (acct, src_ip, count), heatmap (acct x hour).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833).\n• Ensure the following data sources are available: `index=windows` EventCode 4625; `index=linux` `linux:secure` `failed password`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate payment service account list. (2) Whitelist vulnerability scanner IPs. (3) Auto-block at firewall for sustained bursts. (4) Daily SOC review metrics.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=windows sourcetype=\"WinEventLog:Security\" EventCode=4625 earliest=-24h)\n OR (index=linux sourcetype=\"linux:secure\" \"failed password\" earliest=-24h)\n| eval acct=coalesce(TargetUserName,user)\n| lookup pci_payment_accounts.csv account AS acct OUTPUT gateway_acct\n| where gateway_acct=\"true\"\n| bin _time span=5m\n| stats count by _time, acct, src_ip\n| where count>25\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Failed Authentication Burst Detection on Payment Gateway Accounts (PCI DSS Req 8.3.4, 10.2.4)** — Trends failed logon attempts against payment application service accounts and operator IDs so brute-force activity is visible and aligned with monitoring of invalid access attempts.\n\nDocumented **Data sources**: `index=windows` EventCode 4625; `index=linux` `linux:secure` `failed password`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, linux; **sourcetype**: WinEventLog:Security, linux:secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, index=linux, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **acct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where gateway_acct=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, acct, src_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>25` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Failed Authentication Burst Detection on Payment Gateway Accounts (PCI DSS Req 8.3.4, 10.2.4)** — Trends failed logon attempts against payment application service accounts and operator IDs so brute-force activity is visible and aligned with monitoring of invalid access attempts.\n\nDocumented **Data sources**: `index=windows` EventCode 4625; `index=linux` `linux:secure` `failed password`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart (failed logons), table (acct, src_ip, count), heatmap (acct x hour).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We trends failed logon attempts against payment application service accounts and operator IDs so brute-force activity is visible and aligned with monitoring of invalid access attempts. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src span=5m | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.2.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 10.2.4 is enforced — Splunk UC-22.11.49: Failed Authentication Burst Detection on Payment Gateway Accounts.",
                  "ea": "Saved search 'UC-22.11.49' running on index=windows EventCode 4625; index=linux linux:secure failed password, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.50",
              "n": "Account Lifecycle — New AD Users with Immediate CDE Group Assignment (PCI DSS Req 8.2.1, 8.2.4)",
              "c": "high",
              "f": "intermediate",
              "v": "Detects user creations followed within a short window by addition to CDE-privileged groups so provisioning follows lifecycle controls and management approval for access to account data.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=windows` EventCode 4720 (user created), 4728 (member added to security-enabled global group)",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4720,4728) earliest=-24h\n| transaction TargetUserName maxspan=60m startswith=EventCode=4720 endswith=EventCode=4728\n| lookup pci_cde_ad_groups.csv GroupName OUTPUT cde_group\n| where cde_group=\"true\"\n| table duration, TargetUserName, GroupName, SubjectUserName",
              "m": "(1) Map `TargetUserName` alignment between events—may require `rename` on 4720 field `TargetUserName` as new user and 4728 `MemberName`. (2) Adjust `transaction` keys per environment testing. (3) Alert IAM for out-of-band adds. (4) Evidence for user identification procedures.",
              "z": "Timeline (transactions), table (new users to CDE groups), single value (count per day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=windows` EventCode 4720 (user created), 4728 (member added to security-enabled global group).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map `TargetUserName` alignment between events—may require `rename` on 4720 field `TargetUserName` as new user and 4728 `MemberName`. (2) Adjust `transaction` keys per environment testing. (3) Alert IAM for out-of-band adds. (4) Evidence for user identification procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4720,4728) earliest=-24h\n| transaction TargetUserName maxspan=60m startswith=EventCode=4720 endswith=EventCode=4728\n| lookup pci_cde_ad_groups.csv GroupName OUTPUT cde_group\n| where cde_group=\"true\"\n| table duration, TargetUserName, GroupName, SubjectUserName\n```\n\nUnderstanding this SPL\n\n**Account Lifecycle — New AD Users with Immediate CDE Group Assignment (PCI DSS Req 8.2.1, 8.2.4)** — Detects user creations followed within a short window by addition to CDE-privileged groups so provisioning follows lifecycle controls and management approval for access to account data.\n\nDocumented **Data sources**: `index=windows` EventCode 4720 (user created), 4728 (member added to security-enabled global group). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cde_group=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Account Lifecycle — New AD Users with Immediate CDE Group Assignment (PCI DSS Req 8.2.1, 8.2.4)**): table duration, TargetUserName, GroupName, SubjectUserName\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Account Lifecycle — New AD Users with Immediate CDE Group Assignment (PCI DSS Req 8.2.1, 8.2.4)** — Detects user creations followed within a short window by addition to CDE-privileged groups so provisioning follows lifecycle controls and management approval for access to account data.\n\nDocumented **Data sources**: `index=windows` EventCode 4720 (user created), 4728 (member added to security-enabled global group). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (transactions), table (new users to CDE groups), single value (count per day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect user creations followed within a short window by addition to CDE-privileged groups so provisioning follows lifecycle controls and management approval for access to account data. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "8.2.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 8.2.4 is enforced — Splunk UC-22.11.50: Account Lifecycle — New AD Users with Immediate CDE Group Assignment.",
                  "ea": "Saved search 'UC-22.11.50' running on index=windows EventCode 4720 (user created), 4728 (member added to security-enabled global group), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.51",
              "n": "Inactive Human Accounts Still Entitled to CDE Groups (PCI DSS Req 8.2.5, 8.2.6)",
              "c": "high",
              "f": "intermediate",
              "v": "Joins last successful logon timestamps to current CDE group membership so dormant identities are removed before they become an easy target for misuse of credentials.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "Daily `sourcetype=\"ad:user_snapshot\"` HEC; `lookup pci_cde_ad_groups.csv`",
              "q": "index=windows sourcetype=\"ad:user_snapshot\" earliest=-2d\n| lookup pci_cde_ad_groups.csv memberOf OUTPUT in_cde_group\n| where in_cde_group=\"true\"\n| eval idle_days=round((now()-lastLogonTimestamp)/86400,1)\n| where idle_days>90 AND NOT match(samAccountName,\"(?i)^svc-\")\n| table samAccountName, idle_days, lastLogonTimestamp, department, manager\n| sort - idle_days",
              "m": "(1) Build LDAP snapshot forwarder with `lastLogonTimestamp`. (2) Align 90-day threshold to corporate policy. (3) Auto-disable workflow in IAM integration. (4) Monthly PCI access review attachment.",
              "z": "Table (idle users), bar (idle_days), single value (count stale CDE users).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: Daily `sourcetype=\"ad:user_snapshot\"` HEC; `lookup pci_cde_ad_groups.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build LDAP snapshot forwarder with `lastLogonTimestamp`. (2) Align 90-day threshold to corporate policy. (3) Auto-disable workflow in IAM integration. (4) Monthly PCI access review attachment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"ad:user_snapshot\" earliest=-2d\n| lookup pci_cde_ad_groups.csv memberOf OUTPUT in_cde_group\n| where in_cde_group=\"true\"\n| eval idle_days=round((now()-lastLogonTimestamp)/86400,1)\n| where idle_days>90 AND NOT match(samAccountName,\"(?i)^svc-\")\n| table samAccountName, idle_days, lastLogonTimestamp, department, manager\n| sort - idle_days\n```\n\nUnderstanding this SPL\n\n**Inactive Human Accounts Still Entitled to CDE Groups (PCI DSS Req 8.2.5, 8.2.6)** — Joins last successful logon timestamps to current CDE group membership so dormant identities are removed before they become an easy target for misuse of credentials.\n\nDocumented **Data sources**: Daily `sourcetype=\"ad:user_snapshot\"` HEC; `lookup pci_cde_ad_groups.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: ad:user_snapshot. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"ad:user_snapshot\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_cde_group=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **idle_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where idle_days>90 AND NOT match(samAccountName,\"(?i)^svc-\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Inactive Human Accounts Still Entitled to CDE Groups (PCI DSS Req 8.2.5, 8.2.6)**): table samAccountName, idle_days, lastLogonTimestamp, department, manager\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (idle users), bar (idle_days), single value (count stale CDE users).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We joins last successful logon timestamps to current CDE group membership so dormant identities are removed before they become an easy target for misuse of credentials. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "8.2.6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 8.2.6 is enforced — Splunk UC-22.11.51: Inactive Human Accounts Still Entitled to CDE Groups.",
                  "ea": "Saved search 'UC-22.11.51' running on Daily sourcetype=\"ad:user_snapshot\" HEC; lookup pci_cde_ad_groups.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.52",
              "n": "Generic Account Prohibition — `admin` / `root` Interactive Success on CDE (PCI DSS Req 8.2.6, 2.2.2)",
              "c": "critical",
              "f": "intermediate",
              "v": "Alerts on successful interactive use of generic superuser accounts on in-scope Linux payment hosts, enforcing unique IDs and eliminating shared root for routine administration.",
              "t": "Splunk Add-on for Unix and Linux (833)",
              "d": "`index=linux` `sourcetype=\"linux:secure\"` or `auditd` — `user`, `tty`, `src_ip`",
              "q": "index=linux sourcetype=\"linux:secure\" earliest=-24h\n| where match(_raw,\"Accepted password|Accepted publickey\")\n| rex \"for (?<acct>\\\\S+) from (?<src_ip>\\\\d+\\\\.\\\\d+\\\\.\\\\d+\\\\.\\\\d+)\"\n| where acct IN (\"root\",\"admin\",\"Administrator\")\n| lookup pci_linux_payment_hosts.csv host OUTPUT pci_host\n| where pci_host=\"true\"\n| stats count by host, acct, src_ip\n| sort - count",
              "m": "(1) Prefer `auditd` TYPE=USER_LOGIN for richer fields if available. (2) Allow break-glass from bastion subnet only via `where`. (3) Immediate SOC page. (4) Quarterly attestation of zero violations goal.",
              "z": "Table (violations), map (src_ip), single value (weekly violation count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Unix and Linux (833).\n• Ensure the following data sources are available: `index=linux` `sourcetype=\"linux:secure\"` or `auditd` — `user`, `tty`, `src_ip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Prefer `auditd` TYPE=USER_LOGIN for richer fields if available. (2) Allow break-glass from bastion subnet only via `where`. (3) Immediate SOC page. (4) Quarterly attestation of zero violations goal.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=linux sourcetype=\"linux:secure\" earliest=-24h\n| where match(_raw,\"Accepted password|Accepted publickey\")\n| rex \"for (?<acct>\\\\S+) from (?<src_ip>\\\\d+\\\\.\\\\d+\\\\.\\\\d+\\\\.\\\\d+)\"\n| where acct IN (\"root\",\"admin\",\"Administrator\")\n| lookup pci_linux_payment_hosts.csv host OUTPUT pci_host\n| where pci_host=\"true\"\n| stats count by host, acct, src_ip\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Generic Account Prohibition — `admin` / `root` Interactive Success on CDE (PCI DSS Req 8.2.6, 2.2.2)** — Alerts on successful interactive use of generic superuser accounts on in-scope Linux payment hosts, enforcing unique IDs and eliminating shared root for routine administration.\n\nDocumented **Data sources**: `index=linux` `sourcetype=\"linux:secure\"` or `auditd` — `user`, `tty`, `src_ip`. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: linux; **sourcetype**: linux:secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=linux, sourcetype=\"linux:secure\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(_raw,\"Accepted password|Accepted publickey\")` — typically the threshold or rule expression for this monitoring goal.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where acct IN (\"root\",\"admin\",\"Administrator\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where pci_host=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, acct, src_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Generic Account Prohibition — `admin` / `root` Interactive Success on CDE (PCI DSS Req 8.2.6, 2.2.2)** — Alerts on successful interactive use of generic superuser accounts on in-scope Linux payment hosts, enforcing unique IDs and eliminating shared root for routine administration.\n\nDocumented **Data sources**: `index=linux` `sourcetype=\"linux:secure\"` or `auditd` — `user`, `tty`, `src_ip`. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), map (src_ip), single value (weekly violation count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We alerts on successful interactive use of generic superuser accounts on in-scope Linux payment hosts, enforcing unique IDs and eliminating shared root for routine administration. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "2.2.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 2.2.2 is enforced — Splunk UC-22.11.52: Generic Account Prohibition — `admin` / `root` Interactive Success on CDE.",
                  "ea": "Saved search 'UC-22.11.52' running on index=linux sourcetype=\"linux:secure\" or auditd — user, tty, src_ip, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.53",
              "n": "Remote Access MFA Evidence Correlation — VPN Success Without Step-Up Token (PCI DSS Req 8.4.2, 8.5.1)",
              "c": "critical",
              "f": "expert",
              "v": "Joins VPN authentication logs with identity provider MFA success events to prove remote entry into the CDE used multi-factor authentication for each session where policy requires it.",
              "t": "Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Microsoft Office 365 (4055) for AAD sign-ins",
              "d": "`index=netfw` `pan:globalprotect`; `index=o365` `azure:aad:signin`",
              "q": "index=netfw sourcetype=\"pan:globalprotect\" subtype=\"gateway-connected\" earliest=-24h\n| eval session=coalesce(session_id,gp_session_id)\n| eval user=lower(user)\n| join type=left max=1 user [\n    search index=o365 sourcetype=\"azure:aad:signin\" earliest=-24h\n    | eval user=lower(userPrincipalName)\n    | where authenticationRequirement=\"multiFactorAuthentication\" OR mfaDetail!=\"[]\"\n    | stats latest(_time) as mfa_time by user\n  ]\n| where isnull(mfa_time) OR mfa_time > _time + 300\n| stats count by user, src, portal\n| sort - count",
              "m": "(1) Clock-skew normalize `_time` fields across sources. (2) Tune join window; `mfa_time` should precede VPN connect. (3) Replace logic if Okta/other IdP. (4) Document compensating SAML MFA at IdP only.",
              "z": "Table (sessions missing MFA join), timeline, single value (violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Microsoft Office 365 (4055) for AAD sign-ins.\n• Ensure the following data sources are available: `index=netfw` `pan:globalprotect`; `index=o365` `azure:aad:signin`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Clock-skew normalize `_time` fields across sources. (2) Tune join window; `mfa_time` should precede VPN connect. (3) Replace logic if Okta/other IdP. (4) Document compensating SAML MFA at IdP only.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype=\"pan:globalprotect\" subtype=\"gateway-connected\" earliest=-24h\n| eval session=coalesce(session_id,gp_session_id)\n| eval user=lower(user)\n| join type=left max=1 user [\n    search index=o365 sourcetype=\"azure:aad:signin\" earliest=-24h\n    | eval user=lower(userPrincipalName)\n    | where authenticationRequirement=\"multiFactorAuthentication\" OR mfaDetail!=\"[]\"\n    | stats latest(_time) as mfa_time by user\n  ]\n| where isnull(mfa_time) OR mfa_time > _time + 300\n| stats count by user, src, portal\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Remote Access MFA Evidence Correlation — VPN Success Without Step-Up Token (PCI DSS Req 8.4.2, 8.5.1)** — Joins VPN authentication logs with identity provider MFA success events to prove remote entry into the CDE used multi-factor authentication for each session where policy requires it.\n\nDocumented **Data sources**: `index=netfw` `pan:globalprotect`; `index=o365` `azure:aad:signin`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Microsoft Office 365 (4055) for AAD sign-ins. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw; **sourcetype**: pan:globalprotect. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, sourcetype=\"pan:globalprotect\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **session** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(mfa_time) OR mfa_time > _time + 300` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, src, portal** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Remote Access MFA Evidence Correlation — VPN Success Without Step-Up Token (PCI DSS Req 8.4.2, 8.5.1)** — Joins VPN authentication logs with identity provider MFA success events to prove remote entry into the CDE used multi-factor authentication for each session where policy requires it.\n\nDocumented **Data sources**: `index=netfw` `pan:globalprotect`; `index=o365` `azure:aad:signin`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Microsoft Office 365 (4055) for AAD sign-ins. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sessions missing MFA join), timeline, single value (violations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We joins VPN authentication logs with identity provider MFA success events to prove remote entry into the CDE used multi-factor authentication for each session where policy requires it. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src | sort - count",
              "e": [
                "m365",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "8.5.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 8.5.1 is enforced — Splunk UC-22.11.53: Remote Access MFA Evidence Correlation — VPN Success Without Step-Up Token.",
                  "ea": "Saved search 'UC-22.11.53' running on index=netfw pan:globalprotect; index=o365 azure:aad:signin, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                },
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.54",
              "n": "Service Account Inventory Reconciliation — Unexpected SPN or Delegation Changes (PCI DSS Req 8.2.1, 8.6.1)",
              "c": "high",
              "f": "advanced",
              "v": "Tracks modifications to Kerberos SPNs and constrained delegation on payment-related service accounts so non-human identities stay inventoried and cannot silently gain broader authentication rights.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=windows` EventCode 4742,4738,5136 (Directory Service Changes for `servicePrincipalName`)",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4742,4738) earliest=-7d\n| where match(_raw,\"(?i)servicePrincipalName|TrustedToAuthForDelegation|msDS-AllowedToDelegateTo)\")\n| lookup pci_service_accounts.csv SamAccountName OUTPUT pci_svc\n| where pci_svc=\"true\"\n| stats earliest(_time) as first_change, values(EventCode) as codes by SamAccountName, SubjectUserName\n| sort - first_change",
              "m": "(1) Maintain `pci_service_accounts.csv` from CMDB. (2) Map EventCode descriptions for auditors. (3) Require CAB for SPN edits. (4) Daily diff email to identity team.",
              "z": "Table (changes), timeline, single value (weekly change count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=windows` EventCode 4742,4738,5136 (Directory Service Changes for `servicePrincipalName`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `pci_service_accounts.csv` from CMDB. (2) Map EventCode descriptions for auditors. (3) Require CAB for SPN edits. (4) Daily diff email to identity team.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Security\" EventCode IN (4742,4738) earliest=-7d\n| where match(_raw,\"(?i)servicePrincipalName|TrustedToAuthForDelegation|msDS-AllowedToDelegateTo)\")\n| lookup pci_service_accounts.csv SamAccountName OUTPUT pci_svc\n| where pci_svc=\"true\"\n| stats earliest(_time) as first_change, values(EventCode) as codes by SamAccountName, SubjectUserName\n| sort - first_change\n```\n\nUnderstanding this SPL\n\n**Service Account Inventory Reconciliation — Unexpected SPN or Delegation Changes (PCI DSS Req 8.2.1, 8.6.1)** — Tracks modifications to Kerberos SPNs and constrained delegation on payment-related service accounts so non-human identities stay inventoried and cannot silently gain broader authentication rights.\n\nDocumented **Data sources**: `index=windows` EventCode 4742,4738,5136 (Directory Service Changes for `servicePrincipalName`). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(_raw,\"(?i)servicePrincipalName|TrustedToAuthForDelegation|msDS-AllowedToDelegateTo)\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where pci_svc=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by SamAccountName, SubjectUserName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Service Account Inventory Reconciliation — Unexpected SPN or Delegation Changes (PCI DSS Req 8.2.1, 8.6.1)** — Tracks modifications to Kerberos SPNs and constrained delegation on payment-related service accounts so non-human identities stay inventoried and cannot silently gain broader authentication rights.\n\nDocumented **Data sources**: `index=windows` EventCode 4742,4738,5136 (Directory Service Changes for `servicePrincipalName`). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (changes), timeline, single value (weekly change count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track modifications to Kerberos SPNs and constrained delegation on payment-related service accounts so non-human identities stay inventoried and cannot silently gain broader authentication rights. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "8.6.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 8.6.1 is enforced — Splunk UC-22.11.54: Service Account Inventory Reconciliation — Unexpected SPN or Delegation Changes.",
                  "ea": "Saved search 'UC-22.11.54' running on index=windows EventCode 4742,4738,5136 (Directory Service Changes for servicePrincipalName), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.55",
              "n": "Session Timeout Enforcement on Payment Web Admin Consoles (PCI DSS Req 8.2.8)",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures idle session duration before logout on administrative payment portals using HTTP session cookies or app logs so inactive sessions re-authenticate within policy limits.",
              "t": "Splunk Add-on for Stream (1809), `index=web` `access_combined`",
              "d": "`index=web` `sourcetype=\"access_combined\"` for `/admin` URIs with `session_id`",
              "q": "index=web sourcetype=\"access_combined\" earliest=-24h\n| where match(uri,\"(?i)/admin|/portal/manage\")\n| transaction session_id maxpause=30m maxspan=8h\n| eval idle_mins=round(duration/60,2)\n| where idle_mins>15\n| stats max(idle_mins) as max_idle by session_id, clientip\n| sort - max_idle",
              "m": "(1) Align `maxpause` with configured server session timeout. (2) Mask `session_id` in dashboards. (3) Tune for long-running report downloads. (4) Pair with WAF session fixation checks.",
              "z": "Histogram (idle_mins), table (longest sessions), single value (sessions exceeding policy).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Stream (1809), `index=web` `access_combined`.\n• Ensure the following data sources are available: `index=web` `sourcetype=\"access_combined\"` for `/admin` URIs with `session_id`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `maxpause` with configured server session timeout. (2) Mask `session_id` in dashboards. (3) Tune for long-running report downloads. (4) Pair with WAF session fixation checks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\" earliest=-24h\n| where match(uri,\"(?i)/admin|/portal/manage\")\n| transaction session_id maxpause=30m maxspan=8h\n| eval idle_mins=round(duration/60,2)\n| where idle_mins>15\n| stats max(idle_mins) as max_idle by session_id, clientip\n| sort - max_idle\n```\n\nUnderstanding this SPL\n\n**Session Timeout Enforcement on Payment Web Admin Consoles (PCI DSS Req 8.2.8)** — Measures idle session duration before logout on administrative payment portals using HTTP session cookies or app logs so inactive sessions re-authenticate within policy limits.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` for `/admin` URIs with `session_id`. **App/TA** (typical add-on context): Splunk Add-on for Stream (1809), `index=web` `access_combined`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(uri,\"(?i)/admin|/portal/manage\")` — typically the threshold or rule expression for this monitoring goal.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• `eval` defines or adjusts **idle_mins** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where idle_mins>15` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by session_id, clientip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Session Timeout Enforcement on Payment Web Admin Consoles (PCI DSS Req 8.2.8)** — Measures idle session duration before logout on administrative payment portals using HTTP session cookies or app logs so inactive sessions re-authenticate within policy limits.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` for `/admin` URIs with `session_id`. **App/TA** (typical add-on context): Splunk Add-on for Stream (1809), `index=web` `access_combined`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (idle_mins), table (longest sessions), single value (sessions exceeding policy).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measures idle session duration before logout on administrative payment portals using HTTP session cookies or app logs so inactive sessions re-authenticate within policy limits. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "8.2.8",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 8.2.8 is enforced — Splunk UC-22.11.55: Session Timeout Enforcement on Payment Web Admin Consoles.",
                  "ea": "Saved search 'UC-22.11.55' running on index=web sourcetype=\"access_combined\" for /admin URIs with session_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.56",
              "n": "Physical Badge Access to Data Center Containing Cardholder Systems (PCI DSS Req 9.2.1, 9.4.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks granted and denied door reads for suites housing payment hardware so physical entry to sensitive areas is logged and reviewable for onsite PCI assessments.",
              "t": "Custom `sourcetype=\"pacs:ccure\"` or `lenel:access\"` CSV/HEC",
              "d": "`index=physical` `sourcetype=\"pacs:door\"` — `reader_id`, `badge_id`, `result`, `site`",
              "q": "index=physical sourcetype=\"pacs:door\" earliest=-7d\n| lookup pci_dc_readers.csv reader_id OUTPUT cde_dc_zone\n| where cde_dc_zone=\"true\"\n| eval denied=if(match(result,\"(?i)deny|reject|unknown\"),1,0)\n| timechart span=1d sum(denied) as denied_reads, count as total_reads\n| eval deny_rate=if(total_reads>0, round(100*denied_reads/total_reads,2), null())",
              "m": "(1) Map reader IDs to PCI zones from facility diagrams. (2) De-identify `badge_id` in dashboards if required. (3) Alert on sustained deny spikes (tailgating). (4) Quarterly reviewer export.",
              "z": "Line (deny_rate), column (denied_reads), table (top denied badges hashed).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom `sourcetype=\"pacs:ccure\"` or `lenel:access\"` CSV/HEC.\n• Ensure the following data sources are available: `index=physical` `sourcetype=\"pacs:door\"` — `reader_id`, `badge_id`, `result`, `site`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map reader IDs to PCI zones from facility diagrams. (2) De-identify `badge_id` in dashboards if required. (3) Alert on sustained deny spikes (tailgating). (4) Quarterly reviewer export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"pacs:door\" earliest=-7d\n| lookup pci_dc_readers.csv reader_id OUTPUT cde_dc_zone\n| where cde_dc_zone=\"true\"\n| eval denied=if(match(result,\"(?i)deny|reject|unknown\"),1,0)\n| timechart span=1d sum(denied) as denied_reads, count as total_reads\n| eval deny_rate=if(total_reads>0, round(100*denied_reads/total_reads,2), null())\n```\n\nUnderstanding this SPL\n\n**Physical Badge Access to Data Center Containing Cardholder Systems (PCI DSS Req 9.2.1, 9.4.2)** — Tracks granted and denied door reads for suites housing payment hardware so physical entry to sensitive areas is logged and reviewable for onsite PCI assessments.\n\nDocumented **Data sources**: `index=physical` `sourcetype=\"pacs:door\"` — `reader_id`, `badge_id`, `result`, `site`. **App/TA** (typical add-on context): Custom `sourcetype=\"pacs:ccure\"` or `lenel:access\"` CSV/HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: pacs:door. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"pacs:door\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cde_dc_zone=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **denied** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **deny_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line (deny_rate), column (denied_reads), table (top denied badges hashed).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track granted and denied door reads for suites housing payment hardware so physical entry to sensitive areas is logged and reviewable for onsite PCI assessments. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Physical Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "9.4.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 9.4.2 is enforced — Splunk UC-22.11.56: Physical Badge Access to Data Center Containing Cardholder Systems.",
                  "ea": "Saved search 'UC-22.11.56' running on index=physical sourcetype=\"pacs:door\" — reader_id, badge_id, result, site, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.57",
              "n": "Visitor Log Completeness for Data Center Escorted Access (PCI DSS Req 9.4.1, 9.4.4)",
              "c": "medium",
              "f": "intermediate",
              "v": "Compares visitor check-in events from front desk systems to escorted badge-in events inside the CDE suite so visitor management procedures are complete and auditable.",
              "t": "Splunk DB Connect or `sourcetype=\"visitor:checkin\"` HEC",
              "d": "`index=physical` `visitor:checkin`; `pacs:door` escort reader pairings",
              "q": "index=physical sourcetype=\"visitor:checkin\" earliest=-7d\n| eval visit_key=md5(concat(visitor_id,date_hour,site))\n| join type=left max=0 visit_key [\n    search index=physical sourcetype=\"pacs:door\" earliest=-7d escort_flag=\"true\"\n    | eval visit_key=md5(concat(visitor_id,date_hour,site))\n    | stats count as escort_swipes by visit_key\n  ]\n| where isnull(escort_swipes) OR escort_swipes=0\n| table _time, visitor_id, site, host_escort_expected\n| sort _time",
              "m": "(1) Align `visitor_id` keys between systems. (2) Define mandatory escort readers. (3) Daily exception report to facilities. (4) Store signed visitor logs for assessor samples.",
              "z": "Table (missing escorts), single value (open exceptions), column chart (by site).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect or `sourcetype=\"visitor:checkin\"` HEC.\n• Ensure the following data sources are available: `index=physical` `visitor:checkin`; `pacs:door` escort reader pairings.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `visitor_id` keys between systems. (2) Define mandatory escort readers. (3) Daily exception report to facilities. (4) Store signed visitor logs for assessor samples.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"visitor:checkin\" earliest=-7d\n| eval visit_key=md5(concat(visitor_id,date_hour,site))\n| join type=left max=0 visit_key [\n    search index=physical sourcetype=\"pacs:door\" earliest=-7d escort_flag=\"true\"\n    | eval visit_key=md5(concat(visitor_id,date_hour,site))\n    | stats count as escort_swipes by visit_key\n  ]\n| where isnull(escort_swipes) OR escort_swipes=0\n| table _time, visitor_id, site, host_escort_expected\n| sort _time\n```\n\nUnderstanding this SPL\n\n**Visitor Log Completeness for Data Center Escorted Access (PCI DSS Req 9.4.1, 9.4.4)** — Compares visitor check-in events from front desk systems to escorted badge-in events inside the CDE suite so visitor management procedures are complete and auditable.\n\nDocumented **Data sources**: `index=physical` `visitor:checkin`; `pacs:door` escort reader pairings. **App/TA** (typical add-on context): Splunk DB Connect or `sourcetype=\"visitor:checkin\"` HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: visitor:checkin. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"visitor:checkin\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **visit_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(escort_swipes) OR escort_swipes=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Visitor Log Completeness for Data Center Escorted Access (PCI DSS Req 9.4.1, 9.4.4)**): table _time, visitor_id, site, host_escort_expected\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing escorts), single value (open exceptions), column chart (by site).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare visitor check-in events from front desk systems to escorted badge-in events inside the CDE suite so visitor management procedures are complete and auditable. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "9.4.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 9.4.4 is enforced — Splunk UC-22.11.57: Visitor Log Completeness for Data Center Escorted Access.",
                  "ea": "Saved search 'UC-22.11.57' running on index=physical visitor:checkin; pacs:door escort reader pairings, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.58",
              "n": "Secure Media Destruction Workflow Completion for Backup Tapes with CHD (PCI DSS Req 9.5.1, 3.2.1)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks chain-of-custody and destruction certificates for media classified as containing account data so classified media is destroyed when no longer needed for business or legal reasons.",
              "t": "Splunk Add-on for ServiceNow (1928) or `sourcetype=\"ironmountain:destruction\"`",
              "d": "`index=logistics` `sourcetype=\"media:chain\"` — `barcode`, `status`, `destroyed_time`",
              "q": "index=logistics sourcetype=\"media:chain\" earliest=-90d\n| where classification=\"PCI_CHD\"\n| where match(status,\"(?i)in_transit|awaiting_destroy|stored\")\n| eval age_days=round((now()-created_time)/86400,1)\n| where age_days>120\n| stats max(age_days) as oldest_days by barcode, status, vault_site\n| sort - oldest_days",
              "m": "(1) Ingest vendor destruction webhook as status `destroyed`. (2) Align retention to legal hold exceptions via lookup. (3) Escalate aged media weekly. (4) Attach certificates to GRC record.",
              "z": "Table (oldest media), gauge (avg age_days), timeline (status transitions).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (1928) or `sourcetype=\"ironmountain:destruction\"`.\n• Ensure the following data sources are available: `index=logistics` `sourcetype=\"media:chain\"` — `barcode`, `status`, `destroyed_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest vendor destruction webhook as status `destroyed`. (2) Align retention to legal hold exceptions via lookup. (3) Escalate aged media weekly. (4) Attach certificates to GRC record.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=logistics sourcetype=\"media:chain\" earliest=-90d\n| where classification=\"PCI_CHD\"\n| where match(status,\"(?i)in_transit|awaiting_destroy|stored\")\n| eval age_days=round((now()-created_time)/86400,1)\n| where age_days>120\n| stats max(age_days) as oldest_days by barcode, status, vault_site\n| sort - oldest_days\n```\n\nUnderstanding this SPL\n\n**Secure Media Destruction Workflow Completion for Backup Tapes with CHD (PCI DSS Req 9.5.1, 3.2.1)** — Tracks chain-of-custody and destruction certificates for media classified as containing account data so classified media is destroyed when no longer needed for business or legal reasons.\n\nDocumented **Data sources**: `index=logistics` `sourcetype=\"media:chain\"` — `barcode`, `status`, `destroyed_time`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (1928) or `sourcetype=\"ironmountain:destruction\"`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: logistics; **sourcetype**: media:chain. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=logistics, sourcetype=\"media:chain\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where classification=\"PCI_CHD\"` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(status,\"(?i)in_transit|awaiting_destroy|stored\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>120` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by barcode, status, vault_site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (oldest media), gauge (avg age_days), timeline (status transitions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track chain-of-custody and destruction certificates for media classified as containing account data so classified media is destroyed when no longer needed for business or legal reasons. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "3.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 3.2.1 is enforced — Splunk UC-22.11.58: Secure Media Destruction Workflow Completion for Backup Tapes with CHD.",
                  "ea": "Saved search 'UC-22.11.58' running on index=logistics sourcetype=\"media:chain\" — barcode, status, destroyed_time, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.59",
              "n": "POS Terminal Tamper and Intrusion Switch Alerts (PCI DSS Req 9.5.1, 9.3.2)",
              "c": "critical",
              "f": "intermediate",
              "v": "Centralizes tamper switches, lid open, and unexpected reboot signals from retail payment terminals so physical device integrity controls for card-present environments are actively monitored.",
              "t": "Custom POS syslog `sourcetype=\"verifone:vxp\"` or `ingenico:telium\"` HEC",
              "d": "`index=pos` `sourcetype=\"pos:device_health\"` — `terminal_id`, `alert_type`, `severity`",
              "q": "index=pos sourcetype=\"pos:device_health\" earliest=-7d\n| where match(alert_type,\"(?i)tamper|lid|intrusion|case|seal|debug_port\")\n| lookup pci_store_terminals.csv terminal_id OUTPUT store_id\n| stats earliest(_time) as first_alert, latest(_time) as last_alert, values(alert_type) as types by terminal_id, store_id\n| sort first_alert",
              "m": "(1) Map terminal IDs to merchant locations. (2) Correlate with technician dispatch tickets. (3) Page on `severity=CRITICAL`. (4) Monthly store tamper rollup for PCI onsite interviews.",
              "z": "Map (store_id), table (terminal_id, types), timeline (alerts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom POS syslog `sourcetype=\"verifone:vxp\"` or `ingenico:telium\"` HEC.\n• Ensure the following data sources are available: `index=pos` `sourcetype=\"pos:device_health\"` — `terminal_id`, `alert_type`, `severity`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map terminal IDs to merchant locations. (2) Correlate with technician dispatch tickets. (3) Page on `severity=CRITICAL`. (4) Monthly store tamper rollup for PCI onsite interviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pos sourcetype=\"pos:device_health\" earliest=-7d\n| where match(alert_type,\"(?i)tamper|lid|intrusion|case|seal|debug_port\")\n| lookup pci_store_terminals.csv terminal_id OUTPUT store_id\n| stats earliest(_time) as first_alert, latest(_time) as last_alert, values(alert_type) as types by terminal_id, store_id\n| sort first_alert\n```\n\nUnderstanding this SPL\n\n**POS Terminal Tamper and Intrusion Switch Alerts (PCI DSS Req 9.5.1, 9.3.2)** — Centralizes tamper switches, lid open, and unexpected reboot signals from retail payment terminals so physical device integrity controls for card-present environments are actively monitored.\n\nDocumented **Data sources**: `index=pos` `sourcetype=\"pos:device_health\"` — `terminal_id`, `alert_type`, `severity`. **App/TA** (typical add-on context): Custom POS syslog `sourcetype=\"verifone:vxp\"` or `ingenico:telium\"` HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pos; **sourcetype**: pos:device_health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pos, sourcetype=\"pos:device_health\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(alert_type,\"(?i)tamper|lid|intrusion|case|seal|debug_port\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by terminal_id, store_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (store_id), table (terminal_id, types), timeline (alerts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We centralizes tamper switches, lid open, and unexpected reboot signals from retail payment terminals so physical device integrity controls for card-present environments are actively monitored. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "9.3.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 9.3.2 is enforced — Splunk UC-22.11.59: POS Terminal Tamper and Intrusion Switch Alerts.",
                  "ea": "Saved search 'UC-22.11.59' running on index=pos sourcetype=\"pos:device_health\" — terminal_id, alert_type, severity, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.60",
              "n": "Quarterly Physical Access List Review Exception Tracking (PCI DSS Req 9.2.4, 12.1.1)",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks completion of quarterly access roster reviews for personnel with keys or badges to CDE facilities, tying physical security policy to measurable workflow deadlines.",
              "t": "Splunk Add-on for ServiceNow (1928), `sourcetype=\"grc:control_test\"`",
              "d": "`index=grc` `sourcetype=\"grc:control_test\"` — `control_id`, `test_status`, `due_date`, `owner`",
              "q": "index=grc sourcetype=\"grc:control_test\" earliest=-120d\n| where control_id IN (\"PCI-9.2.4-PHYS-REVIEW\",\"PCI-9.2.4\")\n| eval overdue=if(now()>due_date AND NOT match(test_status,\"(?i)pass|complete\"),1,0)\n| stats latest(test_status) as status, latest(due_date) as due by owner\n| where overdue=1 OR isnull(status)\n| sort due",
              "m": "(1) Create recurring GRC control with quarterly `due_date`. (2) Push completion events from ServiceNow attestation. (3) Escalate to facility security lead. (4) Export closed tests for PCI policy evidence (Req 12 linkage).",
              "z": "Table (overdue owners), timeline (due_date), single value (overdue count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (1928), `sourcetype=\"grc:control_test\"`.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"grc:control_test\"` — `control_id`, `test_status`, `due_date`, `owner`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create recurring GRC control with quarterly `due_date`. (2) Push completion events from ServiceNow attestation. (3) Escalate to facility security lead. (4) Export closed tests for PCI policy evidence (Req 12 linkage).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"grc:control_test\" earliest=-120d\n| where control_id IN (\"PCI-9.2.4-PHYS-REVIEW\",\"PCI-9.2.4\")\n| eval overdue=if(now()>due_date AND NOT match(test_status,\"(?i)pass|complete\"),1,0)\n| stats latest(test_status) as status, latest(due_date) as due by owner\n| where overdue=1 OR isnull(status)\n| sort due\n```\n\nUnderstanding this SPL\n\n**Quarterly Physical Access List Review Exception Tracking (PCI DSS Req 9.2.4, 12.1.1)** — Tracks completion of quarterly access roster reviews for personnel with keys or badges to CDE facilities, tying physical security policy to measurable workflow deadlines.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"grc:control_test\"` — `control_id`, `test_status`, `due_date`, `owner`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (1928), `sourcetype=\"grc:control_test\"`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: grc:control_test. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"grc:control_test\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where control_id IN (\"PCI-9.2.4-PHYS-REVIEW\",\"PCI-9.2.4\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by owner** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where overdue=1 OR isnull(status)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue owners), timeline (due_date), single value (overdue count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track completion of quarterly access roster reviews for personnel with keys or badges to CDE facilities, tying physical security policy to measurable workflow deadlines. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.1.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 12.1.1 is enforced — Splunk UC-22.11.60: Quarterly Physical Access List Review Exception Tracking.",
                  "ea": "Saved search 'UC-22.11.60' running on index=grc sourcetype=\"grc:control_test\" — control_id, test_status, due_date, owner, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.61",
              "n": "Audit Log Source Completeness — Missing Windows Security Events per CDE Host (PCI DSS Req 10.2.1, 10.3.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "Identifies in-scope Windows servers that stopped sending Security event logs so audit trails required for reconstructing access to system components remain complete and continuous.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263)",
              "d": "`index=windows` `WinEventLog:Security`; `lookup pci_windows_cde_assets.csv`",
              "q": "| tstats prestats=t count WHERE index=windows sourcetype=\"WinEventLog:Security\" earliest=-25h latest=-1h BY host\n| stats count by host\n| inputlookup pci_windows_cde_assets.csv\n| fields hostname\n| rename hostname as host\n| join type=left host [\n    search | tstats prestats=t count WHERE index=windows sourcetype=\"WinEventLog:Security\" earliest=-25h latest=-1h BY host\n    | stats count by host\n  ]\n| eval has_logs=if(isnotnull(count) AND count>0,1,0)\n| where has_logs=0\n| table host",
              "m": "(1) Validate `tstats` summaries are enabled for `windows` index or use metadata `| metadata type=hosts` alternative if needed. (2) Page if any CDE host missing. (3) Check forwarder connectivity. (4) Daily compliance health dashboard.",
              "z": "Table (missing hosts), single value (gap count), choropleth by data center if tagged.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=windows` `WinEventLog:Security`; `lookup pci_windows_cde_assets.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Validate `tstats` summaries are enabled for `windows` index or use metadata `| metadata type=hosts` alternative if needed. (2) Page if any CDE host missing. (3) Check forwarder connectivity. (4) Daily compliance health dashboard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats prestats=t count WHERE index=windows sourcetype=\"WinEventLog:Security\" earliest=-25h latest=-1h BY host\n| stats count by host\n| inputlookup pci_windows_cde_assets.csv\n| fields hostname\n| rename hostname as host\n| join type=left host [\n    search | tstats prestats=t count WHERE index=windows sourcetype=\"WinEventLog:Security\" earliest=-25h latest=-1h BY host\n    | stats count by host\n  ]\n| eval has_logs=if(isnotnull(count) AND count>0,1,0)\n| where has_logs=0\n| table host\n```\n\nUnderstanding this SPL\n\n**Audit Log Source Completeness — Missing Windows Security Events per CDE Host (PCI DSS Req 10.2.1, 10.3.1)** — Identifies in-scope Windows servers that stopped sending Security event logs so audit trails required for reconstructing access to system components remain complete and continuous.\n\nDocumented **Data sources**: `index=windows` `WinEventLog:Security`; `lookup pci_windows_cde_assets.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against precomputed summaries; ensure the referenced data model is accelerated.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **has_logs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where has_logs=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Audit Log Source Completeness — Missing Windows Security Events per CDE Host (PCI DSS Req 10.2.1, 10.3.1)**): table host\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing hosts), single value (gap count), choropleth by data center if tagged.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We identify in-scope Windows servers that stopped sending Security event logs so audit trails required for reconstructing access to system components remain complete and continuous. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 10.3.1 is enforced — Splunk UC-22.11.61: Audit Log Source Completeness — Missing Windows Security Events per CDE Host.",
                  "ea": "Saved search 'UC-22.11.61' running on index=windows WinEventLog:Security; lookup pci_windows_cde_assets.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v3.2.1",
                  "cl": "10.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 10.2 (Audit events required to be logged) is enforced — Splunk UC-22.11.61: Audit Log Source Completeness — Missing Windows Security Events per CDE Host.",
                  "ea": "Saved search 'UC-22.11.61' running on index=windows WinEventLog:Security; lookup pci_windows_cde_assets.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.62",
              "n": "Log Ingestion Pipeline Lag and Parser Error Rate for PCI Indexes (PCI DSS Req 10.2.1, 10.5.1)",
              "c": "high",
              "f": "advanced",
              "v": "Monitors indexing latency and `date_parser` failures on indexes holding CDE evidence so log pipeline health does not silently break forensic reconstruction timelines.",
              "t": "Splunk internal monitoring, `_internal` sourcetypes",
              "d": "`index=_internal` `sourcetype=splunkd` / `metrics.log`; `index=_introspection`",
              "q": "index=_internal sourcetype=splunkd component=TailReader OR component=AggregatorMiningProcessor earliest=-24h\n| eval pci_idx=if(match(log_message,\"(?i)pci|cde|payment\"),1,0)\n| where pci_idx=1 OR index IN (\"pci\",\"cde\",\"payment\")\n| bin _time span=15m\n| stats count by _time, component\n| sort _time",
              "m": "(1) Tag forwarders with `pci_index` in `serverclass.conf`. (2) Pair with `index=_internal` `log_index_processor` errors filtered by dest index. (3) Alert on sustained zero events for high-volume sources. (4) SLO dashboard for SOC manager sign-off.",
              "z": "Line (events per 15m), overlay parser error series, single value (error count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk internal monitoring, `_internal` sourcetypes.\n• Ensure the following data sources are available: `index=_internal` `sourcetype=splunkd` / `metrics.log`; `index=_introspection`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag forwarders with `pci_index` in `serverclass.conf`. (2) Pair with `index=_internal` `log_index_processor` errors filtered by dest index. (3) Alert on sustained zero events for high-volume sources. (4) SLO dashboard for SOC manager sign-off.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd component=TailReader OR component=AggregatorMiningProcessor earliest=-24h\n| eval pci_idx=if(match(log_message,\"(?i)pci|cde|payment\"),1,0)\n| where pci_idx=1 OR index IN (\"pci\",\"cde\",\"payment\")\n| bin _time span=15m\n| stats count by _time, component\n| sort _time\n```\n\nUnderstanding this SPL\n\n**Log Ingestion Pipeline Lag and Parser Error Rate for PCI Indexes (PCI DSS Req 10.2.1, 10.5.1)** — Monitors indexing latency and `date_parser` failures on indexes holding CDE evidence so log pipeline health does not silently break forensic reconstruction timelines.\n\nDocumented **Data sources**: `index=_internal` `sourcetype=splunkd` / `metrics.log`; `index=_introspection`. **App/TA** (typical add-on context): Splunk internal monitoring, `_internal` sourcetypes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pci_idx** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pci_idx=1 OR index IN (\"pci\",\"cde\",\"payment\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, component** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line (events per 15m), overlay parser error series, single value (error count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We monitor indexing latency and `date_parser` failures on indexes holding CDE evidence so log pipeline health does not silently break forensic reconstruction timelines. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Operations",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.5.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 10.5.1 is enforced — Splunk UC-22.11.62: Log Ingestion Pipeline Lag and Parser Error Rate for PCI Indexes.",
                  "ea": "Saved search 'UC-22.11.62' running on index=_internal sourcetype=splunkd / metrics.log; index=_introspection, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.63",
              "n": "Daily Log Review Workflow — PCI Queue Ticket Closure SLA (PCI DSS Req 10.4.1, 10.4.1.1)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks time from automated PCI log review finding to analyst disposition so periodic log reviews are not only scheduled but completed with auditable closure records.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`` `notable` `` filtered by `rule_name` like `PCI*` or `tag=\"pci\"`; fields `status`, `closed_time`, `_time`",
              "q": "`notable` (tag=\"pci\" OR like(rule_name,\"%PCI%\")) earliest=-7d\n| eval age_h=if(isnotnull(closed_time), (closed_time-_time)/3600, (now()-_time)/3600)\n| eval breached=if(isnull(closed_time) AND age_h>24,1,0)\n| stats sum(breached) as open_breach, count as total by rule_name\n| sort - open_breach",
              "m": "(1) Tag PCI-related correlation searches consistently. (2) Align 24h SLA to policy. (3) Supervisor dashboard for aging. (4) Export weekly metrics for PCI log review evidence.",
              "z": "Table (rule_name, open_breach), column chart (aging buckets), single value (% closed within SLA).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `` `notable` `` filtered by `rule_name` like `PCI*` or `tag=\"pci\"`; fields `status`, `closed_time`, `_time`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag PCI-related correlation searches consistently. (2) Align 24h SLA to policy. (3) Supervisor dashboard for aging. (4) Export weekly metrics for PCI log review evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` (tag=\"pci\" OR like(rule_name,\"%PCI%\")) earliest=-7d\n| eval age_h=if(isnotnull(closed_time), (closed_time-_time)/3600, (now()-_time)/3600)\n| eval breached=if(isnull(closed_time) AND age_h>24,1,0)\n| stats sum(breached) as open_breach, count as total by rule_name\n| sort - open_breach\n```\n\nUnderstanding this SPL\n\n**Daily Log Review Workflow — PCI Queue Ticket Closure SLA (PCI DSS Req 10.4.1, 10.4.1.1)** — Tracks time from automated PCI log review finding to analyst disposition so periodic log reviews are not only scheduled but completed with auditable closure records.\n\nDocumented **Data sources**: `` `notable` `` filtered by `rule_name` like `PCI*` or `tag=\"pci\"`; fields `status`, `closed_time`, `_time`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **age_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breached** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by rule_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rule_name, open_breach), column chart (aging buckets), single value (% closed within SLA).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track time from automated PCI log review finding to analyst disposition so periodic log reviews are not only scheduled but completed with auditable closure records. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.4.1.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 10.4.1.1 is enforced — Splunk UC-22.11.63: Daily Log Review Workflow — PCI Queue Ticket Closure SLA.",
                  "ea": "Saved search 'UC-22.11.63' running on notable filtered by rule_name like PCI* or tag=\"pci\"; fields status, closed_time, _time, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v3.2.1",
                  "cl": "10.6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 10.6 (Log review) is enforced — Splunk UC-22.11.63: Daily Log Review Workflow — PCI Queue Ticket Closure SLA.",
                  "ea": "Saved search 'UC-22.11.63' running on notable filtered by rule_name like PCI* or tag=\"pci\"; fields status, closed_time, _time, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf"
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.64",
              "n": "NTP Stratum and Sync Failure Events on Payment Switches and Firewalls (PCI DSS Req 10.6.1, 10.6.1.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Surfaces NTP loss of sync or excessive stratum on devices whose timestamps anchor audit logs for network security controls, preserving accurate time for all critical systems.",
              "t": "Splunk Add-on for Palo Alto Networks (2757), Cisco syslog TA",
              "d": "`index=netops` `sourcetype IN (\"syslog:cisco\",\"pan:system\")` — NTP peer loss messages",
              "q": "index=netops sourcetype IN (\"syslog:cisco\",\"pan:system\") earliest=-24h\n| where match(_raw,\"(?i)ntp|clock|stratum|unsync|time\\\\s+set)\")\n| lookup pci_time_critical_devices.csv host OUTPUT requires_ntp\n| where requires_ntp=\"true\"\n| stats count by host, sourcetype\n| sort - count",
              "m": "(1) Populate devices that must never drift per network standard. (2) Correlate with leap-second maintenance windows. (3) Page network ops on spikes. (4) Attach monthly zero-result report when clean.",
              "z": "Table (device, hits), timeline, single value (distinct devices with issues).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757), Cisco syslog TA.\n• Ensure the following data sources are available: `index=netops` `sourcetype IN (\"syslog:cisco\",\"pan:system\")` — NTP peer loss messages.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate devices that must never drift per network standard. (2) Correlate with leap-second maintenance windows. (3) Page network ops on spikes. (4) Attach monthly zero-result report when clean.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netops sourcetype IN (\"syslog:cisco\",\"pan:system\") earliest=-24h\n| where match(_raw,\"(?i)ntp|clock|stratum|unsync|time\\\\s+set)\")\n| lookup pci_time_critical_devices.csv host OUTPUT requires_ntp\n| where requires_ntp=\"true\"\n| stats count by host, sourcetype\n| sort - count\n```\n\nUnderstanding this SPL\n\n**NTP Stratum and Sync Failure Events on Payment Switches and Firewalls (PCI DSS Req 10.6.1, 10.6.1.2)** — Surfaces NTP loss of sync or excessive stratum on devices whose timestamps anchor audit logs for network security controls, preserving accurate time for all critical systems.\n\nDocumented **Data sources**: `index=netops` `sourcetype IN (\"syslog:cisco\",\"pan:system\")` — NTP peer loss messages. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Cisco syslog TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netops.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netops, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(_raw,\"(?i)ntp|clock|stratum|unsync|time\\\\s+set)\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where requires_ntp=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (device, hits), timeline, single value (distinct devices with issues).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface NTP loss of sync or excessive stratum on devices whose timestamps anchor audit logs for network security controls, preserving accurate time for all critical systems. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Operations"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cisco",
                "paloalto",
                "syslog"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.6.1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 10.6.1.2 is enforced — Splunk UC-22.11.64: NTP Stratum and Sync Failure Events on Payment Switches and Firewalls.",
                  "ea": "Saved search 'UC-22.11.64' running on index=netops sourcetype IN (\"syslog:cisco\",\"pan:system\") — NTP peer loss messages, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.65",
              "n": "Splunk `_audit` Tamper Indicators — Saved Search Deletes and Role Changes (PCI DSS Req 10.5.1, 10.2.1.2)",
              "c": "critical",
              "f": "advanced",
              "v": "Alerts on privileged actions that could weaken log integrity—such as deleting saved searches that power PCI dashboards or shrinking roles—supporting protection of audit trails from alteration.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=_audit` `action IN (\"delete\",\"edit_user\",\"edit_roles\")` — `user`, `object`, `operation`",
              "q": "index=_audit (action=\"delete\" OR operation=\"delete\") earliest=-24h\n| where match(object,\"(?i)savedsearch|alert|correlation|pci|cde)\")\n| stats earliest(_time) as first, values(user) as actors by object\n| sort first",
              "m": "(1) Restrict `_audit` visibility to security admin roles. (2) Forward `_audit` to immutable SIEM archive if required. (3) Map `user` to named individuals only. (4) Incident response playbook for suspected tampering.",
              "z": "Table (object, actors), timeline (first), single value (daily delete count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=_audit` `action IN (\"delete\",\"edit_user\",\"edit_roles\")` — `user`, `object`, `operation`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Restrict `_audit` visibility to security admin roles. (2) Forward `_audit` to immutable SIEM archive if required. (3) Map `user` to named individuals only. (4) Incident response playbook for suspected tampering.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit (action=\"delete\" OR operation=\"delete\") earliest=-24h\n| where match(object,\"(?i)savedsearch|alert|correlation|pci|cde)\")\n| stats earliest(_time) as first, values(user) as actors by object\n| sort first\n```\n\nUnderstanding this SPL\n\n**Splunk `_audit` Tamper Indicators — Saved Search Deletes and Role Changes (PCI DSS Req 10.5.1, 10.2.1.2)** — Alerts on privileged actions that could weaken log integrity—such as deleting saved searches that power PCI dashboards or shrinking roles—supporting protection of audit trails from alteration.\n\nDocumented **Data sources**: `index=_audit` `action IN (\"delete\",\"edit_user\",\"edit_roles\")` — `user`, `object`, `operation`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(object,\"(?i)savedsearch|alert|correlation|pci|cde)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by object** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (object, actors), timeline (first), single value (daily delete count).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We alerts on privileged actions that could weaken log integrity—such as deleting saved searches that power PCI dashboards or shrinking roles—supporting protection of audit trails from alteration. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.2.1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 10.2.1.2 is enforced — Splunk UC-22.11.65: Splunk `_audit` Tamper Indicators — Saved Search Deletes and Role Changes.",
                  "ea": "Saved search 'UC-22.11.65' running on index=_audit action IN (\"delete\",\"edit_user\",\"edit_roles\") — user, object, operation, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v3.2.1",
                  "cl": "10.5",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of PCI-DSS 10.5 (Log integrity) — Splunk UC-22.11.65: Splunk `_audit` Tamper Indicators — Saved Search Deletes and Role Changes.",
                  "ea": "Saved search 'UC-22.11.65' running on index=_audit action IN (\"delete\",\"edit_user\",\"edit_roles\") — user, object, operation, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.66",
              "n": "Critical System Clock Skew Between Database and Application Tier (PCI DSS Req 10.6.1, 10.6.1.3)",
              "c": "high",
              "f": "advanced",
              "v": "Compares `_time` deltas between paired application and database events sharing a `correlation_id` so forensic timelines for CHD access stay coherent across tiers.",
              "t": "Custom application JSON logs with `correlation_id`",
              "d": "`index=app` `sourcetype=\"payment:api:json\"`; `index=db` `oracle:audit` with same `correlation_id` if injected",
              "q": "(index=app sourcetype=\"payment:api:json\" earliest=-1h) OR (index=db sourcetype=\"oracle:audit\" earliest=-1h)\n| where isnotnull(correlation_id)\n| eval src=if(index=\"app\",\"app\",\"db\")\n| stats earliest(eval(if(src=\"app\",_time,null()))) as t_app, earliest(eval(if(src=\"db\",_time,null()))) as t_db by correlation_id\n| eval skew_sec=abs(t_app-t_db)\n| where skew_sec>5 AND isnotnull(t_app) AND isnotnull(t_db)\n| stats perc95(skew_sec) as p95_skew, count by correlation_id\n| where p95_skew>5\n| sort - p95_skew\n| head 100",
              "m": "(1) Require apps to propagate `correlation_id` into DB audit. (2) Tune threshold per transaction latency SLA. (3) Pair with NTP health UC. (4) Weekly ops review.",
              "z": "Histogram (skew_sec), table (worst correlation_id), single value (count high skew).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom application JSON logs with `correlation_id`.\n• Ensure the following data sources are available: `index=app` `sourcetype=\"payment:api:json\"`; `index=db` `oracle:audit` with same `correlation_id` if injected.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require apps to propagate `correlation_id` into DB audit. (2) Tune threshold per transaction latency SLA. (3) Pair with NTP health UC. (4) Weekly ops review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=app sourcetype=\"payment:api:json\" earliest=-1h) OR (index=db sourcetype=\"oracle:audit\" earliest=-1h)\n| where isnotnull(correlation_id)\n| eval src=if(index=\"app\",\"app\",\"db\")\n| stats earliest(eval(if(src=\"app\",_time,null()))) as t_app, earliest(eval(if(src=\"db\",_time,null()))) as t_db by correlation_id\n| eval skew_sec=abs(t_app-t_db)\n| where skew_sec>5 AND isnotnull(t_app) AND isnotnull(t_db)\n| stats perc95(skew_sec) as p95_skew, count by correlation_id\n| where p95_skew>5\n| sort - p95_skew\n| head 100\n```\n\nUnderstanding this SPL\n\n**Critical System Clock Skew Between Database and Application Tier (PCI DSS Req 10.6.1, 10.6.1.3)** — Compares `_time` deltas between paired application and database events sharing a `correlation_id` so forensic timelines for CHD access stay coherent across tiers.\n\nDocumented **Data sources**: `index=app` `sourcetype=\"payment:api:json\"`; `index=db` `oracle:audit` with same `correlation_id` if injected. **App/TA** (typical add-on context): Custom application JSON logs with `correlation_id`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app, db; **sourcetype**: payment:api:json, oracle:audit. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, index=db, sourcetype=\"payment:api:json\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(correlation_id)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **src** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by correlation_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **skew_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where skew_sec>5 AND isnotnull(t_app) AND isnotnull(t_db)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by correlation_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_skew>5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (skew_sec), table (worst correlation_id), single value (count high skew).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare `_time` deltas between paired application and database events sharing a `correlation_id` so forensic timelines for CHD access stay coherent across tiers. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Operations"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.6.1.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 10.6.1.3 is enforced — Splunk UC-22.11.66: Critical System Clock Skew Between Database and Application Tier.",
                  "ea": "Saved search 'UC-22.11.66' running on index=app sourcetype=\"payment:api:json\"; index=db oracle:audit with same correlation_id if injected, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.67",
              "n": "Comprehensive Audit Trail for Successful CDE Administrator Logons (PCI DSS Req 10.2.1, 10.2.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Produces a consolidated view of successful interactive and remote logons to CDE admin jump accounts including user ID, time, origin, and object accessed for all individual user access to cardholder systems.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833)",
              "d": "`index=windows` EventCode 4624; `index=linux` `auditd` type=USER_LOGIN",
              "q": "(index=windows sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-24h)\n OR (index=linux sourcetype=\"linux:audit\" type=\"USER_LOGIN\" earliest=-24h)\n| lookup pci_cde_servers.csv coalesce(ComputerName,host) as host OUTPUT in_cde\n| where in_cde=\"true\"\n| eval user_id=coalesce(TargetUserName,acct)\n| eval src=coalesce(IpAddress,addr)\n| table _time, host, user_id, LogonType, src, TerminalSessionId\n| sort - _time",
              "m": "(1) Ensure 4624 includes workstation name and IP fields populated. (2) Mask sensitive columns in shared dashboards. (3) Long retention index or archive for 12+ months. (4) Map to quarterly access review samples.",
              "z": "Table (sortable audit view), timeline, pivot by user_id.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833).\n• Ensure the following data sources are available: `index=windows` EventCode 4624; `index=linux` `auditd` type=USER_LOGIN.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure 4624 includes workstation name and IP fields populated. (2) Mask sensitive columns in shared dashboards. (3) Long retention index or archive for 12+ months. (4) Map to quarterly access review samples.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=windows sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-24h)\n OR (index=linux sourcetype=\"linux:audit\" type=\"USER_LOGIN\" earliest=-24h)\n| lookup pci_cde_servers.csv coalesce(ComputerName,host) as host OUTPUT in_cde\n| where in_cde=\"true\"\n| eval user_id=coalesce(TargetUserName,acct)\n| eval src=coalesce(IpAddress,addr)\n| table _time, host, user_id, LogonType, src, TerminalSessionId\n| sort - _time\n```\n\nUnderstanding this SPL\n\n**Comprehensive Audit Trail for Successful CDE Administrator Logons (PCI DSS Req 10.2.1, 10.2.2)** — Produces a consolidated view of successful interactive and remote logons to CDE admin jump accounts including user ID, time, origin, and object accessed for all individual user access to cardholder systems.\n\nDocumented **Data sources**: `index=windows` EventCode 4624; `index=linux` `auditd` type=USER_LOGIN. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, linux; **sourcetype**: WinEventLog:Security, linux:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, index=linux, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_cde=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **user_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **src** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Comprehensive Audit Trail for Successful CDE Administrator Logons (PCI DSS Req 10.2.1, 10.2.2)**): table _time, host, user_id, LogonType, src, TerminalSessionId\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Comprehensive Audit Trail for Successful CDE Administrator Logons (PCI DSS Req 10.2.1, 10.2.2)** — Produces a consolidated view of successful interactive and remote logons to CDE admin jump accounts including user ID, time, origin, and object accessed for all individual user access to cardholder systems.\n\nDocumented **Data sources**: `index=windows` EventCode 4624; `index=linux` `auditd` type=USER_LOGIN. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sortable audit view), timeline, pivot by user_id.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We produce a consolidated view of successful interactive and remote logons to CDE admin jump accounts including user ID, time, origin, and object accessed for all individual user access to cardholder systems. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.2.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 10.2.2 is enforced — Splunk UC-22.11.67: Comprehensive Audit Trail for Successful CDE Administrator Logons.",
                  "ea": "Saved search 'UC-22.11.67' running on index=windows EventCode 4624; index=linux auditd type=USER_LOGIN, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v3.2.1",
                  "cl": "10.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 10.1 (Audit trail linking access to user) is enforced — Splunk UC-22.11.67: Comprehensive Audit Trail for Successful CDE Administrator Logons.",
                  "ea": "Saved search 'UC-22.11.67' running on index=windows EventCode 4624; index=linux auditd type=USER_LOGIN, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.68",
              "n": "Log Retention Index Frozen Bucket Age Compliance (PCI DSS Req 10.5.1, 10.5.1.2)",
              "c": "high",
              "f": "advanced",
              "v": "Reports oldest searchable bucket age for indexes storing PCI audit evidence so at least twelve months of immediately available history exists before cold or frozen archives.",
              "t": "`rest` search against Splunk cluster manager or monitoring console",
              "d": "`| rest /services/data/indexes` — `title`, `frozenTimePeriodInSecs`, `totalEventCount`",
              "q": "| rest /services/data/indexes splunk_server=local count=0\n| where match(title,\"^(pci|cde|payment)_\")\n| eval retain_days=round(frozenTimePeriodInSecs/86400,2)\n| where retain_days<395\n| table title, retain_days, totalEventCount, disabled\n| sort retain_days",
              "m": "(1) Adjust `frozenTimePeriodInSecs` to ≥395 days per policy (allow leap buffer). (2) Document frozen/archive retrieval for older periods. (3) Alert if `disabled=1` on PCI indexes. (4) Quarterly compliance sign-off screenshot.",
              "z": "Table (index, retain_days), gauge (min retain_days), single value (indexes under policy).",
              "kfp": "Planned maintenance, backups, or batch jobs can drive metrics outside normal bands — correlate with change management windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `rest` search against Splunk cluster manager or monitoring console.\n• Ensure the following data sources are available: `| rest /services/data/indexes` — `title`, `frozenTimePeriodInSecs`, `totalEventCount`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Adjust `frozenTimePeriodInSecs` to ≥395 days per policy (allow leap buffer). (2) Document frozen/archive retrieval for older periods. (3) Alert if `disabled=1` on PCI indexes. (4) Quarterly compliance sign-off screenshot.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/data/indexes splunk_server=local count=0\n| where match(title,\"^(pci|cde|payment)_\")\n| eval retain_days=round(frozenTimePeriodInSecs/86400,2)\n| where retain_days<395\n| table title, retain_days, totalEventCount, disabled\n| sort retain_days\n```\n\nUnderstanding this SPL\n\n**Log Retention Index Frozen Bucket Age Compliance (PCI DSS Req 10.5.1, 10.5.1.2)** — Reports oldest searchable bucket age for indexes storing PCI audit evidence so at least twelve months of immediately available history exists before cold or frozen archives.\n\nDocumented **Data sources**: `| rest /services/data/indexes` — `title`, `frozenTimePeriodInSecs`, `totalEventCount`. **App/TA** (typical add-on context): `rest` search against Splunk cluster manager or monitoring console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Filters the current rows with `where match(title,\"^(pci|cde|payment)_\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **retain_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where retain_days<395` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Log Retention Index Frozen Bucket Age Compliance (PCI DSS Req 10.5.1, 10.5.1.2)**): table title, retain_days, totalEventCount, disabled\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (index, retain_days), gauge (min retain_days), single value (indexes under policy).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We reports oldest searchable bucket age for indexes storing PCI audit evidence so at least twelve months of immediately available history exists before cold or frozen archives. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Capacity"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.5.1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 10.5.1.2 is enforced — Splunk UC-22.11.68: Log Retention Index Frozen Bucket Age Compliance.",
                  "ea": "Saved search 'UC-22.11.68' running on | rest /services/data/indexes — title, frozenTimePeriodInSecs, totalEventCount, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.69",
              "n": "Automated Log Review — Correlation of Firewall Deny Bursts with IDS Signatures (PCI DSS Req 10.4.1, 11.4.1)",
              "c": "high",
              "f": "expert",
              "v": "Joins short-window firewall denies with IDS alerts touching the same `src_ip` so automated reviews surface coordinated attack patterns without waiting for manual cross-index pivoting.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=netfw` `pan:traffic`; `index=ids` `sourcetype=\"suricata:alert\"` or `pan:threat`",
              "q": "index=netfw sourcetype=\"pan:traffic\" action=deny earliest=-1h\n| stats count by src, dest\n| where count>500\n| join type=left max=0 src [\n    search index=ids (sourcetype=\"suricata:alert\" OR sourcetype=\"pan:threat\") earliest=-1h\n    | rename src_ip as src\n    | stats dc(signature) as sigs by src\n  ]\n| where isnotnull(sigs) AND sigs>0\n| table src, dest, count, sigs",
              "m": "(1) Tune `map` limits for performance; consider `join` with subsearch capped. (2) Replace with ES notable linking if preferred. (3) Document tuning for false positive rates. (4) Daily automated report to SOC leads.",
              "z": "Table (correlated src), bubble chart (count vs sigs), timeline.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=netfw` `pan:traffic`; `index=ids` `sourcetype=\"suricata:alert\"` or `pan:threat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune `map` limits for performance; consider `join` with subsearch capped. (2) Replace with ES notable linking if preferred. (3) Document tuning for false positive rates. (4) Daily automated report to SOC leads.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype=\"pan:traffic\" action=deny earliest=-1h\n| stats count by src, dest\n| where count>500\n| join type=left max=0 src [\n    search index=ids (sourcetype=\"suricata:alert\" OR sourcetype=\"pan:threat\") earliest=-1h\n    | rename src_ip as src\n    | stats dc(signature) as sigs by src\n  ]\n| where isnotnull(sigs) AND sigs>0\n| table src, dest, count, sigs\n```\n\nUnderstanding this SPL\n\n**Automated Log Review — Correlation of Firewall Deny Bursts with IDS Signatures (PCI DSS Req 10.4.1, 11.4.1)** — Joins short-window firewall denies with IDS alerts touching the same `src_ip` so automated reviews surface coordinated attack patterns without waiting for manual cross-index pivoting.\n\nDocumented **Data sources**: `index=netfw` `pan:traffic`; `index=ids` `sourcetype=\"suricata:alert\"` or `pan:threat`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, sourcetype=\"pan:traffic\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>500` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnotnull(sigs) AND sigs>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Automated Log Review — Correlation of Firewall Deny Bursts with IDS Signatures (PCI DSS Req 10.4.1, 11.4.1)**): table src, dest, count, sigs\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(IDS_Attacks.signature) as agg_value from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src, IDS_Attacks.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Automated Log Review — Correlation of Firewall Deny Bursts with IDS Signatures (PCI DSS Req 10.4.1, 11.4.1)** — Joins short-window firewall denies with IDS alerts touching the same `src_ip` so automated reviews surface coordinated attack patterns without waiting for manual cross-index pivoting.\n\nDocumented **Data sources**: `index=netfw` `pan:traffic`; `index=ids` `sourcetype=\"suricata:alert\"` or `pan:threat`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (correlated src), bubble chart (count vs sigs), timeline.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We joins short-window firewall denies with IDS alerts touching the same `src_ip` so automated reviews surface coordinated attack patterns without waiting for manual cross-index pivoting. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t dc(IDS_Attacks.signature) as agg_value from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src, IDS_Attacks.dest | sort - agg_value",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "11.4.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 11.4.1 is enforced — Splunk UC-22.11.69: Automated Log Review — Correlation of Firewall Deny Bursts with IDS Signatures.",
                  "ea": "Saved search 'UC-22.11.69' running on index=netfw pan:traffic; index=ids sourcetype=\"suricata:alert\" or pan:threat, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.70",
              "n": "File Integrity Monitoring Alerts on Payment Web Roots (PCI DSS Req 10.5.2, 11.5.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "Surfaces Splunk Universal Forwarder or OSSEC/AIDE change alerts on directories serving checkout static content so unauthorized modification of critical files is detected promptly.",
              "t": "Splunk Add-on for Unix and Linux (833), Splunk Enterprise Security (263)",
              "d": "`index=os` `sourcetype=\"ossec\"` or `auditd` paths; `sourcetype=\"stash\"` from `monitor://` with `hash` in `change_management`",
              "q": "index=os (sourcetype=\"linux:audit\" OR sourcetype=\"ossec\") earliest=-24h\n| where match(name,\"(?i)/var/www/payment|/opt/checkout/dist)\")\n| where match(key,\"(?i)write|attrib|chmod|unlink|create)\")\n| stats earliest(_time) as first_change, values(key) as ops by host, name\n| sort - first_change",
              "m": "(1) Scope FIM to PCI web roots only to reduce noise. (2) Correlate with deploy pipeline user. (3) Page on off-hours changes. (4) Store weekly digest for change-detection evidence (Req 11 overlap).",
              "z": "Table (file paths), timeline (first_change), heatmap (host x hour).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Unix and Linux (833), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=os` `sourcetype=\"ossec\"` or `auditd` paths; `sourcetype=\"stash\"` from `monitor://` with `hash` in `change_management`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Scope FIM to PCI web roots only to reduce noise. (2) Correlate with deploy pipeline user. (3) Page on off-hours changes. (4) Store weekly digest for change-detection evidence (Req 11 overlap).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os (sourcetype=\"linux:audit\" OR sourcetype=\"ossec\") earliest=-24h\n| where match(name,\"(?i)/var/www/payment|/opt/checkout/dist)\")\n| where match(key,\"(?i)write|attrib|chmod|unlink|create)\")\n| stats earliest(_time) as first_change, values(key) as ops by host, name\n| sort - first_change\n```\n\nUnderstanding this SPL\n\n**File Integrity Monitoring Alerts on Payment Web Roots (PCI DSS Req 10.5.2, 11.5.1)** — Surfaces Splunk Universal Forwarder or OSSEC/AIDE change alerts on directories serving checkout static content so unauthorized modification of critical files is detected promptly.\n\nDocumented **Data sources**: `index=os` `sourcetype=\"ossec\"` or `auditd` paths; `sourcetype=\"stash\"` from `monitor://` with `hash` in `change_management`. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (833), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux:audit, ossec. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=\"linux:audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(name,\"(?i)/var/www/payment|/opt/checkout/dist)\")` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(key,\"(?i)write|attrib|chmod|unlink|create)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**File Integrity Monitoring Alerts on Payment Web Roots (PCI DSS Req 10.5.2, 11.5.1)** — Surfaces Splunk Universal Forwarder or OSSEC/AIDE change alerts on directories serving checkout static content so unauthorized modification of critical files is detected promptly.\n\nDocumented **Data sources**: `index=os` `sourcetype=\"ossec\"` or `auditd` paths; `sourcetype=\"stash\"` from `monitor://` with `hash` in `change_management`. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (833), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (file paths), timeline (first_change), heatmap (host x hour).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface Splunk Universal Forwarder or OSSEC/AIDE change alerts on directories serving checkout static content so unauthorized modification of critical files is detected promptly. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "11.5.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 11.5.1 is enforced — Splunk UC-22.11.70: File Integrity Monitoring Alerts on Payment Web Roots.",
                  "ea": "Saved search 'UC-22.11.70' running on index=os sourcetype=\"ossec\" or auditd paths; sourcetype=\"stash\" from monitor:// with hash in change_management, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.71",
              "n": "Security Event Correlation — Payment API Errors with Concurrent Admin Logons (PCI DSS Req 10.4.1, 10.2.2)",
              "c": "high",
              "f": "expert",
              "v": "Correlates spikes in payment API `5xx` responses with privileged session starts in the same window so automated reviews flag potential tampering or misconfiguration during maintenance abuse.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=app` `payment:api:json`; `index=windows` EventCode 4624 LogonType 10",
              "q": "index=app sourcetype=\"payment:api:json\" status>=500 earliest=-4h\n| bin _time span=5m\n| stats count as err5xx by _time, host\n| where err5xx>100\n| join type=inner _time [\n    search index=windows sourcetype=\"WinEventLog:Security\" EventCode=4624 LogonType=10 earliest=-4h\n    | lookup pci_privileged_users.csv TargetUserName OUTPUT priv\n    | where priv=\"true\"\n    | bin _time span=5m\n    | stats dc(TargetUserName) as priv_logons by _time\n  ]\n| table _time, host, err5xx, priv_logons\n| sort _time",
              "m": "(1) Align `span` to traffic profile. (2) Maintain privileged user list from IAM. (3) Tune err5xx threshold per service. (4) Store notable-worthy rows in summary index.",
              "z": "Timeline (err5xx vs priv_logons), dual-axis line, table.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=app` `payment:api:json`; `index=windows` EventCode 4624 LogonType 10.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `span` to traffic profile. (2) Maintain privileged user list from IAM. (3) Tune err5xx threshold per service. (4) Store notable-worthy rows in summary index.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"payment:api:json\" status>=500 earliest=-4h\n| bin _time span=5m\n| stats count as err5xx by _time, host\n| where err5xx>100\n| join type=inner _time [\n    search index=windows sourcetype=\"WinEventLog:Security\" EventCode=4624 LogonType=10 earliest=-4h\n    | lookup pci_privileged_users.csv TargetUserName OUTPUT priv\n    | where priv=\"true\"\n    | bin _time span=5m\n    | stats dc(TargetUserName) as priv_logons by _time\n  ]\n| table _time, host, err5xx, priv_logons\n| sort _time\n```\n\nUnderstanding this SPL\n\n**Security Event Correlation — Payment API Errors with Concurrent Admin Logons (PCI DSS Req 10.4.1, 10.2.2)** — Correlates spikes in payment API `5xx` responses with privileged session starts in the same window so automated reviews flag potential tampering or misconfiguration during maintenance abuse.\n\nDocumented **Data sources**: `index=app` `payment:api:json`; `index=windows` EventCode 4624 LogonType 10. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: payment:api:json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"payment:api:json\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where err5xx>100` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Security Event Correlation — Payment API Errors with Concurrent Admin Logons (PCI DSS Req 10.4.1, 10.2.2)**): table _time, host, err5xx, priv_logons\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.dest span=5m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security Event Correlation — Payment API Errors with Concurrent Admin Logons (PCI DSS Req 10.4.1, 10.2.2)** — Correlates spikes in payment API `5xx` responses with privileged session starts in the same window so automated reviews flag potential tampering or misconfiguration during maintenance abuse.\n\nDocumented **Data sources**: `index=app` `payment:api:json`; `index=windows` EventCode 4624 LogonType 10. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (err5xx vs priv_logons), dual-axis line, table.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We correlate spikes in payment API `5xx` responses with privileged session starts in the same window so automated reviews flag potential tampering or misconfiguration during maintenance abuse. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.dest span=5m | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.2.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 10.2.2 is enforced — Splunk UC-22.11.71: Security Event Correlation — Payment API Errors with Concurrent Admin Logons.",
                  "ea": "Saved search 'UC-22.11.71' running on index=app payment:api:json; index=windows EventCode 4624 LogonType 10, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.72",
              "n": "Log Source Coverage Gaps — Expected Sourcetypes with Zero Events (PCI DSS Req 10.2.1, 12.10.1)",
              "c": "high",
              "f": "advanced",
              "v": "Compares a reference list of required sourcetypes for PCI monitoring to last-hour ingestion so missing log categories are detected before an audit sample fails completeness tests.",
              "t": "Splunk Enterprise Security (263), metadata searches",
              "d": "`| metadata type=sourcetypes index=pci OR index=cde`; lookup `pci_required_sourcetypes.csv`",
              "q": "| inputlookup pci_required_sourcetypes.csv\n| fields required_sourcetype, required_index\n| join type=left required_sourcetype [\n    | metadata type=sourcetypes index=pci index=cde\n    | rename sourcetype as required_sourcetype\n    | eval seen_recent=if(recentTime>relative_time(now(),\"-2h\"),1,0)\n    | fields required_sourcetype, seen_recent, totalCount\n  ]\n| where isnull(seen_recent) OR seen_recent=0\n| table required_index, required_sourcetype, seen_recent, totalCount",
              "m": "(1) Curate lookup with owner contact per sourcetype. (2) Run hourly; exclude maintenance windows via secondary lookup. (3) Auto-ticket to platform ops. (4) Attach zero-gap monthly PDF for PCI evidence.",
              "z": "Table (missing sources), single value (missing count), heatmap (by required_index).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), metadata searches.\n• Ensure the following data sources are available: `| metadata type=sourcetypes index=pci OR index=cde`; lookup `pci_required_sourcetypes.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Curate lookup with owner contact per sourcetype. (2) Run hourly; exclude maintenance windows via secondary lookup. (3) Auto-ticket to platform ops. (4) Attach zero-gap monthly PDF for PCI evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup pci_required_sourcetypes.csv\n| fields required_sourcetype, required_index\n| join type=left required_sourcetype [\n    | metadata type=sourcetypes index=pci index=cde\n    | rename sourcetype as required_sourcetype\n    | eval seen_recent=if(recentTime>relative_time(now(),\"-2h\"),1,0)\n    | fields required_sourcetype, seen_recent, totalCount\n  ]\n| where isnull(seen_recent) OR seen_recent=0\n| table required_index, required_sourcetype, seen_recent, totalCount\n```\n\nUnderstanding this SPL\n\n**Log Source Coverage Gaps — Expected Sourcetypes with Zero Events (PCI DSS Req 10.2.1, 12.10.1)** — Compares a reference list of required sourcetypes for PCI monitoring to last-hour ingestion so missing log categories are detected before an audit sample fails completeness tests.\n\nDocumented **Data sources**: `| metadata type=sourcetypes index=pci OR index=cde`; lookup `pci_required_sourcetypes.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), metadata searches. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(seen_recent) OR seen_recent=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Log Source Coverage Gaps — Expected Sourcetypes with Zero Events (PCI DSS Req 10.2.1, 12.10.1)**): table required_index, required_sourcetype, seen_recent, totalCount\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing sources), single value (missing count), heatmap (by required_index).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare a reference list of required sourcetypes for PCI monitoring to last-hour ingestion so missing log categories are detected before an audit sample fails completeness tests. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Operations"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.10.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 12.10.1 is enforced — Splunk UC-22.11.72: Log Source Coverage Gaps — Expected Sourcetypes with Zero Events.",
                  "ea": "Saved search 'UC-22.11.72' running on | metadata type=sourcetypes index=pci OR index=cde; lookup pci_required_sourcetypes.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.73",
              "n": "ASV External Scan Failure and Non-Compliant Finding Trend (PCI DSS Req 11.3.1, 11.3.2)",
              "c": "critical",
              "f": "intermediate",
              "v": "Tracks approved scanning vendor run outcomes and residual high-risk findings on Internet-facing payment hosts so quarterly external vulnerability scanning evidence stays current and remediated.",
              "t": "Splunk Add-on for Qualys (2964), Splunk Add-on for Tenable (4060)",
              "d": "`index=vm` `qualys:was` or `tenable:io:vm` with `scan_type=\"ASV\"` if tagged",
              "q": "index=vm sourcetype=\"qualys:was\" earliest=-120d\n| where match(scan_name,\"(?i)ASV|PCI\\\\s*External)\")\n| eval failed_scan=if(match(status,\"(?i)error|failed|aborted\"),1,0)\n| timechart span=30d sum(failed_scan) as failed_runs, count as total_runs\n| eval fail_pct=if(total_runs>0, round(100*failed_runs/total_runs,2), null())",
              "m": "(1) Tag ASV scans distinctly from internal VM. (2) Join open findings `severity>=3` on same `fqdn`. (3) Alert if `fail_pct>0` two quarters in a row. (4) Store PDF reports in GRC with Splunk deep link.",
              "z": "Line (`fail_pct`), column (failed_runs), table (last scan status by fqdn).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Qualys](https://splunkbase.splunk.com/app/2964), [Splunk Add-on for Tenable](https://splunkbase.splunk.com/app/4060), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Qualys (2964), Splunk Add-on for Tenable (4060).\n• Ensure the following data sources are available: `index=vm` `qualys:was` or `tenable:io:vm` with `scan_type=\"ASV\"` if tagged.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag ASV scans distinctly from internal VM. (2) Join open findings `severity>=3` on same `fqdn`. (3) Alert if `fail_pct>0` two quarters in a row. (4) Store PDF reports in GRC with Splunk deep link.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vm sourcetype=\"qualys:was\" earliest=-120d\n| where match(scan_name,\"(?i)ASV|PCI\\\\s*External)\")\n| eval failed_scan=if(match(status,\"(?i)error|failed|aborted\"),1,0)\n| timechart span=30d sum(failed_scan) as failed_runs, count as total_runs\n| eval fail_pct=if(total_runs>0, round(100*failed_runs/total_runs,2), null())\n```\n\nUnderstanding this SPL\n\n**ASV External Scan Failure and Non-Compliant Finding Trend (PCI DSS Req 11.3.1, 11.3.2)** — Tracks approved scanning vendor run outcomes and residual high-risk findings on Internet-facing payment hosts so quarterly external vulnerability scanning evidence stays current and remediated.\n\nDocumented **Data sources**: `index=vm` `qualys:was` or `tenable:io:vm` with `scan_type=\"ASV\"` if tagged. **App/TA** (typical add-on context): Splunk Add-on for Qualys (2964), Splunk Add-on for Tenable (4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vm; **sourcetype**: qualys:was. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vm, sourcetype=\"qualys:was\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(scan_name,\"(?i)ASV|PCI\\\\s*External)\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **failed_scan** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=30d** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **fail_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest span=30d | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ASV External Scan Failure and Non-Compliant Finding Trend (PCI DSS Req 11.3.1, 11.3.2)** — Tracks approved scanning vendor run outcomes and residual high-risk findings on Internet-facing payment hosts so quarterly external vulnerability scanning evidence stays current and remediated.\n\nDocumented **Data sources**: `index=vm` `qualys:was` or `tenable:io:vm` with `scan_type=\"ASV\"` if tagged. 
**App/TA** (typical add-on context): Splunk Add-on for Qualys (2964), Splunk Add-on for Tenable (4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line (`fail_pct`), column (failed_runs), table (last scan status by fqdn).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track approved scanning vendor run outcomes and residual high-risk findings on Internet-facing payment hosts so quarterly external vulnerability scanning evidence stays current and remediated. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest span=30d | sort - agg_value",
              "e": [
                "qualys",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "11.3.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 11.3.2 is enforced — Splunk UC-22.11.73: ASV External Scan Failure and Non-Compliant Finding Trend.",
                  "ea": "Saved search 'UC-22.11.73' running on index=vm qualys:was or tenable:io:vm with scan_type=\"ASV\" if tagged, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.74",
              "n": "Internal Authenticated Vulnerability Scan Coverage by CDE Subnet (PCI DSS Req 11.3.1, 11.3.1.3)",
              "c": "high",
              "f": "intermediate",
              "v": "Verifies each CDE /24 received at least one credentialed scan in the rolling quarter so internal vulnerability scanning frequency and scope requirements are met with machine evidence.",
              "t": "Splunk Add-on for Tenable (4060)",
              "d": "`index=vm` `tenable:sc:vuln` with `scan_name`, `host_ip`",
              "q": "index=vm sourcetype=\"tenable:sc:vuln\" earliest=-90d\n| where match(scan_name,\"(?i)internal|credentialed|CDE)\")\n| eval subnet=replace(host_ip,\"^(\\\\d+\\\\.\\\\d+\\\\.\\\\d+)\\\\.\\\\d+$\",\"\\\\1.0/24\")\n| lookup pci_cde_subnets_summary.csv subnet OUTPUT required_scan\n| where required_scan=\"true\"\n| stats latest(_time) as last_scan by subnet\n| eval days_since=round((now()-last_scan)/86400,1)\n| where days_since>90 OR isnull(last_scan)\n| sort days_since",
              "m": "(1) Normalize IP field names from Tenable TA. (2) Adjust `subnet` extraction for IPv6 if in scope. (3) Alert subnets over 90 days. (4) Map exceptions to documented outage windows.",
              "z": "Table (subnet, days_since), choropleth if subnets mapped to sites, gauge (worst gap).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Tenable](https://splunkbase.splunk.com/app/4060), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Tenable (4060).\n• Ensure the following data sources are available: `index=vm` `tenable:sc:vuln` with `scan_name`, `host_ip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize IP field names from Tenable TA. (2) Adjust `subnet` extraction for IPv6 if in scope. (3) Alert subnets over 90 days. (4) Map exceptions to documented outage windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vm sourcetype=\"tenable:sc:vuln\" earliest=-90d\n| where match(scan_name,\"(?i)internal|credentialed|CDE)\")\n| eval subnet=replace(host_ip,\"^(\\\\d+\\\\.\\\\d+\\\\.\\\\d+)\\\\.\\\\d+$\",\"\\\\1.0/24\")\n| lookup pci_cde_subnets_summary.csv subnet OUTPUT required_scan\n| where required_scan=\"true\"\n| stats latest(_time) as last_scan by subnet\n| eval days_since=round((now()-last_scan)/86400,1)\n| where days_since>90 OR isnull(last_scan)\n| sort days_since\n```\n\nUnderstanding this SPL\n\n**Internal Authenticated Vulnerability Scan Coverage by CDE Subnet (PCI DSS Req 11.3.1, 11.3.1.3)** — Verifies each CDE /24 received at least one credentialed scan in the rolling quarter so internal vulnerability scanning frequency and scope requirements are met with machine evidence.\n\nDocumented **Data sources**: `index=vm` `tenable:sc:vuln` with `scan_name`, `host_ip`. **App/TA** (typical add-on context): Splunk Add-on for Tenable (4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vm; **sourcetype**: tenable:sc:vuln. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vm, sourcetype=\"tenable:sc:vuln\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(scan_name,\"(?i)internal|credentialed|CDE)\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **subnet** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where required_scan=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by subnet** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since>90 OR isnull(last_scan)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Internal Authenticated Vulnerability Scan Coverage by CDE Subnet (PCI DSS Req 11.3.1, 11.3.1.3)** — Verifies each CDE /24 received at least one credentialed scan in the rolling quarter so internal vulnerability scanning frequency and scope requirements are met with machine evidence.\n\nDocumented **Data sources**: `index=vm` `tenable:sc:vuln` with `scan_name`, `host_ip`. **App/TA** (typical add-on context): Splunk Add-on for Tenable (4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (subnet, days_since), choropleth if subnets mapped to sites, gauge (worst gap).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We verifies each CDE /24 received at least one credentialed scan in the rolling quarter so internal vulnerability scanning frequency and scope requirements are met with machine evidence. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "11.3.1.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 11.3.1.3 is enforced — Splunk UC-22.11.74: Internal Authenticated Vulnerability Scan Coverage by CDE Subnet.",
                  "ea": "Saved search 'UC-22.11.74' running on index=vm tenable:sc:vuln with scan_name, host_ip, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.75",
              "n": "Unauthorized Wireless Access Point Detection by SSID and BSSID (PCI DSS Req 11.3.1, 1.2.3)",
              "c": "high",
              "f": "advanced",
              "v": "Lists rogue or unknown BSSIDs observed on store wireless surveys so unauthorized wireless that could bypass segmentation is investigated under the wireless scanning requirement.",
              "t": "Custom `sourcetype=\"ekahau:survey\"` or `aruba:airwave:rogue\"`",
              "d": "`index=wireless` `sourcetype=\"wifi:rogue_ap\"` — `bssid`, `ssid`, `classification`",
              "q": "index=wireless sourcetype=\"wifi:rogue_ap\" earliest=-7d\n| lookup pci_approved_bssids.csv bssid OUTPUT approved\n| where isnull(approved) OR approved=\"false\"\n| stats earliest(_time) as first_seen, latest(_time) as last_seen, values(classification) as cls by bssid, ssid, site_id\n| sort first_seen",
              "m": "(1) Import approved corporate BSSIDs after each AP refresh. (2) Correlate with technician work orders. (3) Physical inspection workflow for persistent rogues. (4) Quarterly export for onsite assessor.",
              "z": "Table (rogues), map (site_id), timeline (first_seen).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Custom `sourcetype=\"ekahau:survey\"` or `aruba:airwave:rogue\"`.\n• Ensure the following data sources are available: `index=wireless` `sourcetype=\"wifi:rogue_ap\"` — `bssid`, `ssid`, `classification`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import approved corporate BSSIDs after each AP refresh. (2) Correlate with technician work orders. (3) Physical inspection workflow for persistent rogues. (4) Quarterly export for onsite assessor.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wireless sourcetype=\"wifi:rogue_ap\" earliest=-7d\n| lookup pci_approved_bssids.csv bssid OUTPUT approved\n| where isnull(approved) OR approved=\"false\"\n| stats earliest(_time) as first_seen, latest(_time) as last_seen, values(classification) as cls by bssid, ssid, site_id\n| sort first_seen\n```\n\nUnderstanding this SPL\n\n**Unauthorized Wireless Access Point Detection by SSID and BSSID (PCI DSS Req 11.3.1, 1.2.3)** — Lists rogue or unknown BSSIDs observed on store wireless surveys so unauthorized wireless that could bypass segmentation is investigated under the wireless scanning requirement.\n\nDocumented **Data sources**: `index=wireless` `sourcetype=\"wifi:rogue_ap\"` — `bssid`, `ssid`, `classification`. **App/TA** (typical add-on context): Custom `sourcetype=\"ekahau:survey\"` or `aruba:airwave:rogue\"`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wireless; **sourcetype**: wifi:rogue_ap. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wireless, sourcetype=\"wifi:rogue_ap\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved) OR approved=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by bssid, ssid, site_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rogues), map (site_id), timeline (first_seen).",
              "script": "",
              "premium": "Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We lists rogue or unknown BSSIDs observed on store wireless surveys so unauthorized wireless that could bypass segmentation is investigated under the wireless scanning requirement. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "1.2.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 1.2.3 is enforced — Splunk UC-22.11.75: Unauthorized Wireless Access Point Detection by SSID and BSSID.",
                  "ea": "Saved search 'UC-22.11.75' running on index=wireless sourcetype=\"wifi:rogue_ap\" — bssid, ssid, classification, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.76",
              "n": "Penetration Test Finding Severity and Re-test Status (PCI DSS Req 11.4.1, 11.4.5)",
              "c": "critical",
              "f": "intermediate",
              "v": "Tracks open penetration test findings and re-validation dates so critical issues from annual or incremental tests are remediated within agreed timelines and evidence is ready for PCI validation.",
              "t": "`sourcetype=\"grc:penetration_test\"` HEC from reporting portal",
              "d": "`index=grc` `sourcetype=\"grc:penetration_test\"` — `finding_id`, `severity`, `status`, `retest_date`",
              "q": "index=grc sourcetype=\"grc:penetration_test\" earliest=-365d\n| where match(engagement_scope,\"(?i)CDE|payment|PCI)\")\n| where match(status,\"(?i)open|accepted_risk\")\n| eval overdue_retest=if(isnotnull(retest_date) AND now()>retest_date,1,0)\n| stats values(severity) as sev, max(overdue_retest) as overdue by finding_id, title\n| where match(mvjoin(sev,\",\"),\"(?i)critical|high\") OR overdue=1\n| sort finding_id",
              "m": "(1) Normalize severity strings. (2) Link `finding_id` to Jira epics. (3) Monthly steering committee view. (4) Store signed pentest report hash in GRC with Splunk snapshot.",
              "z": "Kanban-style table (status), timeline (retest_date), single value (open critical count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `sourcetype=\"grc:penetration_test\"` HEC from reporting portal.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"grc:penetration_test\"` — `finding_id`, `severity`, `status`, `retest_date`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize severity strings. (2) Link `finding_id` to Jira epics. (3) Monthly steering committee view. (4) Store signed pentest report hash in GRC with Splunk snapshot.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"grc:penetration_test\" earliest=-365d\n| where match(engagement_scope,\"(?i)CDE|payment|PCI)\")\n| where match(status,\"(?i)open|accepted_risk\")\n| eval overdue_retest=if(isnotnull(retest_date) AND now()>retest_date,1,0)\n| stats values(severity) as sev, max(overdue_retest) as overdue by finding_id, title\n| where match(mvjoin(sev,\",\"),\"(?i)critical|high\") OR overdue=1\n| sort finding_id\n```\n\nUnderstanding this SPL\n\n**Penetration Test Finding Severity and Re-test Status (PCI DSS Req 11.4.1, 11.4.5)** — Tracks open penetration test findings and re-validation dates so critical issues from annual or incremental tests are remediated within agreed timelines and evidence is ready for PCI validation.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"grc:penetration_test\"` — `finding_id`, `severity`, `status`, `retest_date`. **App/TA** (typical add-on context): `sourcetype=\"grc:penetration_test\"` HEC from reporting portal. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: grc:penetration_test. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"grc:penetration_test\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(engagement_scope,\"(?i)CDE|payment|PCI)\")` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(status,\"(?i)open|accepted_risk\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **overdue_retest** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by finding_id, title** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where match(mvjoin(sev,\",\"),\"(?i)critical|high\") OR overdue=1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Kanban-style table (status), timeline (retest_date), single value (open critical count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track open penetration test findings and re-validation dates so critical issues from annual or incremental tests are remediated within agreed timelines and evidence is ready for PCI validation. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "11.4.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 11.4.5 is enforced — Splunk UC-22.11.76: Penetration Test Finding Severity and Re-test Status.",
                  "ea": "Saved search 'UC-22.11.76' running on index=grc sourcetype=\"grc:penetration_test\" — finding_id, severity, status, retest_date, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.77",
              "n": "IDS and IPS Alert Volume Baseline and Spike Detection (PCI DSS Req 11.4.1, 11.4.4)",
              "c": "high",
              "f": "advanced",
              "v": "Applies moving baselines to IDS/IPS alert rates on CDE perimeter sensors so intrusion-detection failures or sudden drops in alerting—often a sign of sensor issues—are visible alongside spikes.",
              "t": "Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263)",
              "d": "`index=ids` `pan:threat`; `suricata:alert`",
              "q": "index=ids sourcetype IN (\"pan:threat\",\"suricata:alert\") earliest=-30d\n| lookup pci_sensor_zones.csv sensor OUTPUT cde_adjacent\n| where cde_adjacent=\"true\"\n| bin _time span=1d\n| stats count by _time, signature\n| eventstats median(count) as med by signature\n| eval spike=if(count>med*4 AND med>10,1,0)\n| where spike=1\n| table _time, signature, count, med\n| sort - count",
              "m": "(1) Tune `med>10` for low-noise signatures. (2) Pair drops with health checks. (3) Feed spikes to SOC shift lead. (4) Include in monthly IDS tuning minutes.",
              "z": "Line (count by signature top-N), table (spike rows), single value (spike days).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=ids` `pan:threat`; `suricata:alert`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune `med>10` for low-noise signatures. (2) Pair drops with health checks. (3) Feed spikes to SOC shift lead. (4) Include in monthly IDS tuning minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ids sourcetype IN (\"pan:threat\",\"suricata:alert\") earliest=-30d\n| lookup pci_sensor_zones.csv sensor OUTPUT cde_adjacent\n| where cde_adjacent=\"true\"\n| bin _time span=1d\n| stats count by _time, signature\n| eventstats median(count) as med by signature\n| eval spike=if(count>med*4 AND med>10,1,0)\n| where spike=1\n| table _time, signature, count, med\n| sort - count\n```\n\nUnderstanding this SPL\n\n**IDS and IPS Alert Volume Baseline and Spike Detection (PCI DSS Req 11.4.1, 11.4.4)** — Applies moving baselines to IDS/IPS alert rates on CDE perimeter sensors so intrusion-detection failures or sudden drops in alerting—often a sign of sensor issues—are visible alongside spikes.\n\nDocumented **Data sources**: `index=ids` `pan:threat`; `suricata:alert`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ids.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ids, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cde_adjacent=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, signature** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by signature** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **spike** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where spike=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **IDS and IPS Alert Volume Baseline and Spike Detection (PCI DSS Req 11.4.1, 11.4.4)**): table _time, signature, count, med\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.signature span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IDS and IPS Alert Volume Baseline and Spike Detection (PCI DSS Req 11.4.1, 11.4.4)** — Applies moving baselines to IDS/IPS alert rates on CDE perimeter sensors so intrusion-detection failures or sudden drops in alerting—often a sign of sensor issues—are visible alongside spikes.\n\nDocumented **Data sources**: `index=ids` `pan:threat`; `suricata:alert`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line (count by signature top-N), table (spike rows), single value (spike days).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We applies moving baselines to IDS/IPS alert rates on CDE perimeter sensors so intrusion-detection failures or sudden drops in alerting—often a sign of sensor issues—are visible alongside spikes. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.signature span=1d | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "11.4.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 11.4.4 is enforced — Splunk UC-22.11.77: IDS and IPS Alert Volume Baseline and Spike Detection.",
                  "ea": "Saved search 'UC-22.11.77' running on index=ids pan:threat; suricata:alert, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v3.2.1",
                  "cl": "11.4",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 11.4 (Intrusion detection) is enforced — Splunk UC-22.11.77: IDS and IPS Alert Volume Baseline and Spike Detection.",
                  "ea": "Saved search 'UC-22.11.77' running on index=ids pan:threat; suricata:alert, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.78",
              "n": "Segmentation Control Validation — Scanner IP Blocked at CDE Border as Expected (PCI DSS Req 11.4.1, 1.3.3)",
              "c": "medium",
              "f": "advanced",
              "v": "Confirms that active segmentation tests from non-CDE scanner IPs are denied at the NSC while still logged, producing evidence that untrusted networks cannot reach CHD without passing controls.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=netfw` `pan:traffic` with `src` in `pci_segmentation_scanner_ips.csv`",
              "q": "index=netfw sourcetype=\"pan:traffic\" earliest=-24h\n| lookup pci_segmentation_scanner_ips.csv ip AS src OUTPUT is_scanner\n| lookup pci_cde_subnets.csv subnet AS dest OUTPUT in_cde\n| where is_scanner=\"true\" AND in_cde=\"true\"\n| stats count by action, rule, src, dest\n| eval expected_deny=if(action IN (\"deny\",\"drop\",\"reset-client\"),1,0)\n| stats sum(expected_deny) as deny_hits, count as total by src\n| eval deny_ratio=if(total>0, round(100*deny_hits/total,2), null())\n| where deny_ratio<95\n| sort src",
              "m": "(1) Coordinate scanner IP list with annual segmentation test vendor. (2) Alert if `deny_ratio` drops (misconfiguration). (3) Attach results to PCI evidence library. (4) Run after major firewall pushes.",
              "z": "Table (src, deny_ratio), bar (action mix), single value (unexpected allows).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=netfw` `pan:traffic` with `src` in `pci_segmentation_scanner_ips.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Coordinate scanner IP list with annual segmentation test vendor. (2) Alert if `deny_ratio` drops (misconfiguration). (3) Attach results to PCI evidence library. (4) Run after major firewall pushes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype=\"pan:traffic\" earliest=-24h\n| lookup pci_segmentation_scanner_ips.csv ip AS src OUTPUT is_scanner\n| lookup pci_cde_subnets.csv subnet AS dest OUTPUT in_cde\n| where is_scanner=\"true\" AND in_cde=\"true\"\n| stats count by action, rule, src, dest\n| eval expected_deny=if(action IN (\"deny\",\"drop\",\"reset-client\"),1,0)\n| stats sum(expected_deny) as deny_hits, count as total by src\n| eval deny_ratio=if(total>0, round(100*deny_hits/total,2), null())\n| where deny_ratio<95\n| sort src\n```\n\nUnderstanding this SPL\n\n**Segmentation Control Validation — Scanner IP Blocked at CDE Border as Expected (PCI DSS Req 11.4.1, 1.3.3)** — Confirms that active segmentation tests from non-CDE scanner IPs are denied at the NSC while still logged, producing evidence that untrusted networks cannot reach CHD without passing controls.\n\nDocumented **Data sources**: `index=netfw` `pan:traffic` with `src` in `pci_segmentation_scanner_ips.csv`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, sourcetype=\"pan:traffic\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where is_scanner=\"true\" AND in_cde=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by action, rule, src, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **expected_deny** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **deny_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where deny_ratio<95` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Segmentation Control Validation — Scanner IP Blocked at CDE Border as Expected (PCI DSS Req 11.4.1, 1.3.3)** — Confirms that active segmentation tests from non-CDE scanner IPs are denied at the NSC while still logged, producing evidence that untrusted networks cannot reach CHD without passing controls.\n\nDocumented **Data sources**: `index=netfw` `pan:traffic` with `src` in `pci_segmentation_scanner_ips.csv`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (src, deny_ratio), bar (action mix), single value (unexpected allows).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We confirms that active segmentation tests from non-CDE scanner IPs are denied at the NSC while still logged, producing evidence that untrusted networks cannot reach CHD without passing controls. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest | sort - agg_value",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "1.3.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 1.3.3 is enforced — Splunk UC-22.11.78: Segmentation Control Validation — Scanner IP Blocked at CDE Border as Expected.",
                  "ea": "Saved search 'UC-22.11.78' running on index=netfw pan:traffic with src in pci_segmentation_scanner_ips.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.79",
              "n": "Quarterly Internal Scan Remediation Aging Buckets (PCI DSS Req 11.3.1.1, 6.3.1)",
              "c": "high",
              "f": "intermediate",
              "v": "Buckets open internal scan findings by age so quarterly rescan and fix cycles for high-risk issues remain on schedule and visible to management dashboards.",
              "t": "Splunk Add-on for Qualys (2964)",
              "d": "`index=vm` `qualys:host` with `FIRST_FOUND_DATETIME`, `STATUS`",
              "q": "index=vm sourcetype=\"qualys:host\" earliest=-120d\n| where match(TAGS,\"(?i)PCI|CDE)\")\n| where match(STATUS,\"(?i)active|new|reopened)\")\n| eval age_days=round((now()-strptime(FIRST_FOUND_DATETIME,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,1)\n| eval bucket=case(age_days<=30,\"0-30d\", age_days<=60,\"31-60d\", age_days<=90,\"61-90d\",1=1,\"90+d\")\n| stats count by bucket, SEVERITY_LEVEL\n| sort bucket",
              "m": "(1) Confirm Qualys timestamp format for `strptime`. (2) Map `SEVERITY_LEVEL` to PCI risk. (3) Executive stack chart weekly. (4) Align `90+d` alerts with remediation charter.",
              "z": "Stacked column (bucket x severity), pie (90+d share), table (raw host list drilldown).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Qualys](https://splunkbase.splunk.com/app/2964), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Qualys (2964).\n• Ensure the following data sources are available: `index=vm` `qualys:host` with `FIRST_FOUND_DATETIME`, `STATUS`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm Qualys timestamp format for `strptime`. (2) Map `SEVERITY_LEVEL` to PCI risk. (3) Executive stack chart weekly. (4) Align `90+d` alerts with remediation charter.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vm sourcetype=\"qualys:host\" earliest=-120d\n| where match(TAGS,\"(?i)PCI|CDE)\")\n| where match(STATUS,\"(?i)active|new|reopened)\")\n| eval age_days=round((now()-strptime(FIRST_FOUND_DATETIME,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,1)\n| eval bucket=case(age_days<=30,\"0-30d\", age_days<=60,\"31-60d\", age_days<=90,\"61-90d\",1=1,\"90+d\")\n| stats count by bucket, SEVERITY_LEVEL\n| sort bucket\n```\n\nUnderstanding this SPL\n\n**Quarterly Internal Scan Remediation Aging Buckets (PCI DSS Req 11.3.1.1, 6.3.1)** — Buckets open internal scan findings by age so quarterly rescan and fix cycles for high-risk issues remain on schedule and visible to management dashboards.\n\nDocumented **Data sources**: `index=vm` `qualys:host` with `FIRST_FOUND_DATETIME`, `STATUS`. **App/TA** (typical add-on context): Splunk Add-on for Qualys (2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vm; **sourcetype**: qualys:host. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vm, sourcetype=\"qualys:host\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(TAGS,\"(?i)PCI|CDE)\")` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(STATUS,\"(?i)active|new|reopened)\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **bucket** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by bucket, SEVERITY_LEVEL** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Quarterly Internal Scan Remediation Aging Buckets (PCI DSS Req 11.3.1.1, 6.3.1)** — Buckets open internal scan findings by age so quarterly rescan and fix cycles for high-risk issues remain on schedule and visible to management dashboards.\n\nDocumented **Data sources**: `index=vm` `qualys:host` with `FIRST_FOUND_DATETIME`, `STATUS`. **App/TA** (typical add-on context): Splunk Add-on for Qualys (2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked column (bucket x severity), pie (90+d share), table (raw host list drilldown).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We buckets open internal scan findings by age so quarterly rescan and fix cycles for high-risk issues remain on schedule and visible to management dashboards. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "qualys"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "6.3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 6.3.1 is enforced — Splunk UC-22.11.79: Quarterly Internal Scan Remediation Aging Buckets.",
                  "ea": "Saved search 'UC-22.11.79' running on index=vm qualys:host with FIRST_FOUND_DATETIME, STATUS, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.80",
              "n": "External Network Scan Job Failures and Timeout Trending (PCI DSS Req 11.3.2, 6.3.1)",
              "c": "medium",
              "f": "intermediate",
              "v": "Trends scan job timeouts and authentication failures so external vulnerability scanning remains reliable and gaps do not cause missed quarterly compliance windows.",
              "t": "Splunk Add-on for Qualys (2964)",
              "d": "`index=vm` `qualys:scan_summary` or vendor API log `qualys:scan_target`",
              "q": "index=vm sourcetype=\"qualys:scan_summary\" earliest=-180d\n| where match(scan_title,\"(?i)external|perimeter|PCI)\")\n| eval failed=if(match(status,\"(?i)error|cancel|timeout|auth\\\\s*fail\"),1,0)\n| timechart span=30d sum(failed) as failed_jobs, count as jobs\n| eval fail_rate=if(jobs>0, round(100*failed_jobs/jobs,2), null())\n| trendline sma2(fail_rate) as fail_trend",
              "m": "(1) Deduplicate rescheduled runs by `scan_id`. (2) Correlate auth fails with credential vault rotations. (3) Alert when `fail_rate` exceeds 10%. (4) Document compensating manual scans if needed.",
              "z": "Line (`fail_rate`, `fail_trend`), column (`failed_jobs`), table (last error text truncated).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Qualys](https://splunkbase.splunk.com/app/2964), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Qualys (2964).\n• Ensure the following data sources are available: `index=vm` `qualys:scan_summary` or vendor API log `qualys:scan_target`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Deduplicate rescheduled runs by `scan_id`. (2) Correlate auth fails with credential vault rotations. (3) Alert when `fail_rate` exceeds 10%. (4) Document compensating manual scans if needed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vm sourcetype=\"qualys:scan_summary\" earliest=-180d\n| where match(scan_title,\"(?i)external|perimeter|PCI)\")\n| eval failed=if(match(status,\"(?i)error|cancel|timeout|auth\\\\s*fail\"),1,0)\n| timechart span=30d sum(failed) as failed_jobs, count as jobs\n| eval fail_rate=if(jobs>0, round(100*failed_jobs/jobs,2), null())\n| trendline sma2(fail_rate) as fail_trend\n```\n\nUnderstanding this SPL\n\n**External Network Scan Job Failures and Timeout Trending (PCI DSS Req 11.3.2, 6.3.1)** — Trends scan job timeouts and authentication failures so external vulnerability scanning remains reliable and gaps do not cause missed quarterly compliance windows.\n\nDocumented **Data sources**: `index=vm` `qualys:scan_summary` or vendor API log `qualys:scan_target`. **App/TA** (typical add-on context): Splunk Add-on for Qualys (2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vm; **sourcetype**: qualys:scan_summary. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vm, sourcetype=\"qualys:scan_summary\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(scan_title,\"(?i)external|perimeter|PCI)\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **failed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=30d** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **External Network Scan Job Failures and Timeout Trending (PCI DSS Req 11.3.2, 6.3.1)**): trendline sma2(fail_rate) as fail_trend\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest span=30d | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**External Network Scan Job Failures and Timeout Trending (PCI DSS Req 11.3.2, 6.3.1)** — Trends scan job timeouts and authentication failures so external vulnerability scanning remains reliable and gaps do not cause missed quarterly compliance windows.\n\nDocumented **Data sources**: `index=vm` `qualys:scan_summary` or vendor API log `qualys:scan_target`. 
**App/TA** (typical add-on context): Splunk Add-on for Qualys (2964). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line (`fail_rate`, `fail_trend`), column (`failed_jobs`), table (last error text truncated).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We trends scan job timeouts and authentication failures so external vulnerability scanning remains reliable and gaps do not cause missed quarterly compliance windows. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Operations"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest span=30d | sort - agg_value",
              "e": [
                "qualys"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "6.3.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 6.3.1 is enforced — Splunk UC-22.11.80: External Network Scan Job Failures and Timeout Trending.",
                  "ea": "Saved search 'UC-22.11.80' running on index=vm qualys:scan_summary or vendor API log qualys:scan_target, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.81",
              "n": "Critical Payment Application Binary and Config File Integrity Alerts (PCI DSS Req 11.5.1, 11.5.1.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Monitors change-detection alerts on payment service binaries and configuration files so unauthorized modifications to critical systems are caught in line with intrusion-detection expectations for file integrity.",
              "t": "Splunk Add-on for Unix and Linux (833), Splunk Add-on for CrowdStrike FDR (5082)",
              "d": "`index=os` `sourcetype=\"linux:audit\"` paths `/opt/payment/bin`, `crowdstrike:filevantage` if available",
              "q": "index=os sourcetype=\"linux:audit\" type=\"PATH\" earliest=-24h\n| where match(name,\"(?i)/opt/payment/bin/|/etc/payment/[^/]+\\\\.conf$)\")\n| where match(nametype,\"(?i)create|delete|normal\") AND NOT match(tostring(auid),\"(?i)unset|^\\\\s*$\") AND auid!=\"4294967295\"\n| stats earliest(_time) as t0, values(comm) as process by host, name, auid\n| join type=left max=0 host [\n    search index=ci sourcetype=\"jenkins:build\" earliest=-26h result=\"SUCCESS\"\n    | rename NODE_NAME as host\n    | stats latest(_time) as deploy_t by host\n  ]\n| eval near_deploy=if(isnotnull(deploy_t) AND abs(t0-deploy_t)<=600,1,0)\n| where near_deploy=0\n| table t0, host, name, process, auid, deploy_t",
              "m": "(1) Tune `near_deploy` window to pipeline length. (2) Require signed deploy artifacts with known hashes. (3) Page SOC on `near_deploy=0`. (4) Map to Req 10 FIM evidence where overlapping.",
              "z": "Table (unexpected changes), timeline (t0), single value (off-deploy changes per week).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5994), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Unix and Linux (833), Splunk Add-on for CrowdStrike FDR (5082).\n• Ensure the following data sources are available: `index=os` `sourcetype=\"linux:audit\"` paths `/opt/payment/bin`, `crowdstrike:filevantage` if available.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune `near_deploy` window to pipeline length. (2) Require signed deploy artifacts with known hashes. (3) Page SOC on `near_deploy=0`. (4) Map to Req 10 FIM evidence where overlapping.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=\"linux:audit\" type=\"PATH\" earliest=-24h\n| where match(name,\"(?i)/opt/payment/bin/|/etc/payment/[^/]+\\\\.conf$)\")\n| where match(nametype,\"(?i)create|delete|normal\") AND NOT match(tostring(auid),\"(?i)unset|^\\\\s*$\") AND auid!=\"4294967295\"\n| stats earliest(_time) as t0, values(comm) as process by host, name, auid\n| join type=left max=0 host [\n    search index=ci sourcetype=\"jenkins:build\" earliest=-26h result=\"SUCCESS\"\n    | rename NODE_NAME as host\n    | stats latest(_time) as deploy_t by host\n  ]\n| eval near_deploy=if(isnotnull(deploy_t) AND abs(t0-deploy_t)<=600,1,0)\n| where near_deploy=0\n| table t0, host, name, process, auid, deploy_t\n```\n\nUnderstanding this SPL\n\n**Critical Payment Application Binary and Config File Integrity Alerts (PCI DSS Req 11.5.1, 11.5.1.1)** — Monitors change-detection alerts on payment service binaries and configuration files so unauthorized modifications to critical systems are caught in line with intrusion-detection expectations for file integrity.\n\nDocumented **Data sources**: `index=os` `sourcetype=\"linux:audit\"` paths `/opt/payment/bin`, `crowdstrike:filevantage` if available. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (833), Splunk Add-on for CrowdStrike FDR (5082). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=\"linux:audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(name,\"(?i)/opt/payment/bin/|/etc/payment/[^/]+\\\\.conf$)\")` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(nametype,\"(?i)create|delete|normal\") AND NOT match(tostring(auid),\"(?i)unset|^\\\\s*$\") AND auid!=\"4294967295\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, name, auid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **near_deploy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where near_deploy=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Critical Payment Application Binary and Config File Integrity Alerts (PCI DSS Req 11.5.1, 11.5.1.1)**): table t0, host, name, process, auid, deploy_t\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Critical Payment Application Binary and Config File Integrity Alerts (PCI DSS Req 11.5.1, 11.5.1.1)** — Monitors change-detection alerts on payment service binaries and configuration files so unauthorized modifications to critical systems are caught in line with intrusion-detection expectations for file integrity.\n\nDocumented **Data sources**: `index=os` `sourcetype=\"linux:audit\"` paths `/opt/payment/bin`, `crowdstrike:filevantage` if available. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (833), Splunk Add-on for CrowdStrike FDR (5082). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unexpected changes), timeline (t0), single value (off-deploy changes per week).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We monitor change-detection alerts on payment service binaries and configuration files so unauthorized modifications to critical systems are caught in line with intrusion-detection expectations for file integrity. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.dest | sort - agg_value",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "11.5.1.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 11.5.1.1 is enforced — Splunk UC-22.11.81: Critical Payment Application Binary and Config File Integrity Alerts.",
                  "ea": "Saved search 'UC-22.11.81' running on index=os sourcetype=\"linux:audit\" paths /opt/payment/bin, crowdstrike:filevantage if available, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.82",
              "n": "Network Topology Drift — New L3 Adjacency or BGP Peer on CDE Perimeter (PCI DSS Req 11.4.1, 1.2.1)",
              "c": "high",
              "f": "expert",
              "v": "Detects new BGP neighbors or unexpected VLAN routing adjacencies on routers bordering the CDE so network diagrams and segmentation assumptions stay accurate after infrastructure changes.",
              "t": "Splunk Add-on for Palo Alto Networks (2757), Cisco IOS syslog",
              "d": "`index=netops` `syslog:cisco` — `%BGP-5-ADJCHANGE`; `pan:system` interface/route events",
              "q": "index=netops sourcetype=\"syslog:cisco\" earliest=-7d\n| where match(_raw,\"(?i)BGP-5-ADJCHANGE|neighbor\\\\s+[0-9.]+\")\n| lookup pci_perimeter_devices.csv host OUTPUT pci_border\n| where pci_border=\"true\"\n| rex \"neighbor (?<nbr>\\\\d+\\\\.\\\\d+\\\\.\\\\d+\\\\.\\\\d+)\"\n| lookup pci_approved_bgp_peers.csv nbr OUTPUT approved\n| where isnull(approved) OR approved=\"false\"\n| stats earliest(_time) as seen by host, nbr\n| sort seen",
              "m": "(1) Maintain approved peer list per POP. (2) Include maintenance windows in lookup. (3) Immediate network architecture review. (4) Update diagram repository on each approved add.",
              "z": "Table (new neighbors), network diagram export link, timeline (seen).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757), Cisco IOS syslog.\n• Ensure the following data sources are available: `index=netops` `syslog:cisco` — `%BGP-5-ADJCHANGE`; `pan:system` interface/route events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain approved peer list per POP. (2) Include maintenance windows in lookup. (3) Immediate network architecture review. (4) Update diagram repository on each approved add.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netops sourcetype=\"syslog:cisco\" earliest=-7d\n| where match(_raw,\"(?i)BGP-5-ADJCHANGE|neighbor\\\\s+[0-9.]+\")\n| lookup pci_perimeter_devices.csv host OUTPUT pci_border\n| where pci_border=\"true\"\n| rex \"neighbor (?<nbr>\\\\d+\\\\.\\\\d+\\\\.\\\\d+\\\\.\\\\d+)\"\n| lookup pci_approved_bgp_peers.csv nbr OUTPUT approved\n| where isnull(approved) OR approved=\"false\"\n| stats earliest(_time) as seen by host, nbr\n| sort seen\n```\n\nUnderstanding this SPL\n\n**Network Topology Drift — New L3 Adjacency or BGP Peer on CDE Perimeter (PCI DSS Req 11.4.1, 1.2.1)** — Detects new BGP neighbors or unexpected VLAN routing adjacencies on routers bordering the CDE so network diagrams and segmentation assumptions stay accurate after infrastructure changes.\n\nDocumented **Data sources**: `index=netops` `syslog:cisco` — `%BGP-5-ADJCHANGE`; `pan:system` interface/route events. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Cisco IOS syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netops; **sourcetype**: syslog:cisco. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netops, sourcetype=\"syslog:cisco\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(_raw,\"(?i)BGP-5-ADJCHANGE|neighbor\\\\s+[0-9.]+\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where pci_border=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Extracts fields with `rex` (regular expression).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved) OR approved=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, nbr** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network Topology Drift — New L3 Adjacency or BGP Peer on CDE Perimeter (PCI DSS Req 11.4.1, 1.2.1)** — Detects new BGP neighbors or unexpected VLAN routing adjacencies on routers bordering the CDE so network diagrams and segmentation assumptions stay accurate after infrastructure changes.\n\nDocumented **Data sources**: `index=netops` `syslog:cisco` — `%BGP-5-ADJCHANGE`; `pan:system` interface/route events. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Cisco IOS syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (new neighbors), network diagram export link, timeline (seen).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect new BGP neighbors or unexpected VLAN routing adjacencies on routers bordering the CDE so network diagrams and segmentation assumptions stay accurate after infrastructure changes. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [
                "cisco",
                "paloalto",
                "syslog"
              ],
              "em": [
                "cisco_ios",
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "1.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 1.2.1 is enforced — Splunk UC-22.11.82: Network Topology Drift — New L3 Adjacency or BGP Peer on CDE Perimeter.",
                  "ea": "Saved search 'UC-22.11.82' running on index=netops syslog:cisco — %BGP-5-ADJCHANGE; pan:system interface/route events, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.83",
              "n": "Security Awareness Training Completion for Personnel with CDE Access (PCI DSS Req 12.6.1, 12.6.3)",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures annual or periodic training completion for users who can access the cardholder environment so the information security awareness program is evidenced by actual completion rates, not policy alone.",
              "t": "LMS HEC `sourcetype=\"knowbe4:training\"` or Workday `workday:learning`",
              "d": "`index=hr` `sourcetype=\"knowbe4:training\"` — `user`, `course`, `completed_date`, `status`",
              "q": "index=hr sourcetype=\"knowbe4:training\" earliest=-400d\n| lookup pci_cde_users.csv email AS user OUTPUT has_cde_access\n| where has_cde_access=\"true\"\n| where match(course,\"(?i)PCI|security\\\\s*awareness|phishing)\")\n| eval completed=if(isnotnull(completed_date) OR match(status,\"(?i)completed\"),1,0)\n| stats sum(completed) as done, count as assigned by user\n| eval pct=if(assigned>0, round(100*done/assigned,2), null())\n| where pct<100\n| sort pct, user",
              "m": "(1) Sync HR email with IAM quarterly. (2) Auto-remind via SOAR when `pct<100`. (3) Store annual completion CSV for PCI. (4) Exclude contractors with separate policy if applicable via lookup flag.",
              "z": "Table (incomplete users), gauge (org completion %), bar (by department from lookup join).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: LMS HEC `sourcetype=\"knowbe4:training\"` or Workday `workday:learning`.\n• Ensure the following data sources are available: `index=hr` `sourcetype=\"knowbe4:training\"` — `user`, `course`, `completed_date`, `status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Sync HR email with IAM quarterly. (2) Auto-remind via SOAR when `pct<100`. (3) Store annual completion CSV for PCI. (4) Exclude contractors with separate policy if applicable via lookup flag.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=\"knowbe4:training\" earliest=-400d\n| lookup pci_cde_users.csv email AS user OUTPUT has_cde_access\n| where has_cde_access=\"true\"\n| where match(course,\"(?i)PCI|security\\\\s*awareness|phishing)\")\n| eval completed=if(isnotnull(completed_date) OR match(status,\"(?i)completed\"),1,0)\n| stats sum(completed) as done, count as assigned by user\n| eval pct=if(assigned>0, round(100*done/assigned,2), null())\n| where pct<100\n| sort pct, user\n```\n\nUnderstanding this SPL\n\n**Security Awareness Training Completion for Personnel with CDE Access (PCI DSS Req 12.6.1, 12.6.3)** — Measures annual or periodic training completion for users who can access the cardholder environment so the information security awareness program is evidenced by actual completion rates, not policy alone.\n\nDocumented **Data sources**: `index=hr` `sourcetype=\"knowbe4:training\"` — `user`, `course`, `completed_date`, `status`. **App/TA** (typical add-on context): LMS HEC `sourcetype=\"knowbe4:training\"` or Workday `workday:learning`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: knowbe4:training. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=\"knowbe4:training\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where has_cde_access=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Filters the current rows with `where match(course,\"(?i)PCI|security\\\\s*awareness|phishing)\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **completed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct<100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (incomplete users), gauge (org completion %), bar (by department from lookup join).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measures annual or periodic training completion for users who can access the cardholder environment so the information security awareness program is evidenced by actual completion rates, not policy alone. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.6.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 12.6.3 is enforced — Splunk UC-22.11.83: Security Awareness Training Completion for Personnel with CDE Access.",
                  "ea": "Saved search 'UC-22.11.83' running on index=hr sourcetype=\"knowbe4:training\" — user, course, completed_date, status, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.84",
              "n": "Incident Response Plan Tabletop and Live Test Execution Logging (PCI DSS Req 12.10.1, 12.10.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Indexes IR test outcomes and scenario coverage so annual (or more frequent) response exercises for payment breaches are tracked with timestamps, participants, and lessons learned for PCI readiness reviews.",
              "t": "Splunk Add-on for ServiceNow (1928), `sourcetype=\"grc:ir_test\"`",
              "d": "`index=grc` `sourcetype=\"grc:ir_test\"` — `test_id`, `scenario`, `executed_date`, `result`",
              "q": "index=grc sourcetype=\"grc:ir_test\" earliest=-400d\n| where match(scenario,\"(?i)payment|breach|PAN|CDE|PCI)\")\n| stats latest(executed_date) as last_run, latest(result) as outcome by test_id\n| eval days_since=round((now()-strptime(last_run,\"%Y-%m-%d\"))/86400,1)\n| where days_since>365 OR isnull(last_run)\n| table test_id, last_run, days_since, outcome\n| sort days_since",
              "m": "(1) Push test completion from GRC after each tabletop. (2) Adjust `days_since` threshold if policy requires semi-annual tests. (3) Link `test_id` to remediation tickets. (4) Board briefing attachment optional.",
              "z": "Timeline (last_run), table (overdue tests), single value (tests past due).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (1928), `sourcetype=\"grc:ir_test\"`.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"grc:ir_test\"` — `test_id`, `scenario`, `executed_date`, `result`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Push test completion from GRC after each tabletop. (2) Adjust `days_since` threshold if policy requires semi-annual tests. (3) Link `test_id` to remediation tickets. (4) Board briefing attachment optional.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"grc:ir_test\" earliest=-400d\n| where match(scenario,\"(?i)payment|breach|PAN|CDE|PCI)\")\n| stats latest(executed_date) as last_run, latest(result) as outcome by test_id\n| eval days_since=round((now()-strptime(last_run,\"%Y-%m-%d\"))/86400,1)\n| where days_since>365 OR isnull(last_run)\n| table test_id, last_run, days_since, outcome\n| sort days_since\n```\n\nUnderstanding this SPL\n\n**Incident Response Plan Tabletop and Live Test Execution Logging (PCI DSS Req 12.10.1, 12.10.2)** — Indexes IR test outcomes and scenario coverage so annual (or more frequent) response exercises for payment breaches are tracked with timestamps, participants, and lessons learned for PCI readiness reviews.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"grc:ir_test\"` — `test_id`, `scenario`, `executed_date`, `result`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (1928), `sourcetype=\"grc:ir_test\"`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: grc:ir_test. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"grc:ir_test\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(scenario,\"(?i)payment|breach|PAN|CDE|PCI)\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by test_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since>365 OR isnull(last_run)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Incident Response Plan Tabletop and Live Test Execution Logging (PCI DSS Req 12.10.1, 12.10.2)**): table test_id, last_run, days_since, outcome\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (last_run), table (overdue tests), single value (tests past due).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We indexes IR test outcomes and scenario coverage so annual (or more frequent) response exercises for payment breaches are tracked with timestamps, participants, and lessons learned for PCI readiness reviews. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.10.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 12.10.2 is enforced — Splunk UC-22.11.84: Incident Response Plan Tabletop and Live Test Execution Logging.",
                  "ea": "Saved search 'UC-22.11.84' running on index=grc sourcetype=\"grc:ir_test\" — test_id, scenario, executed_date, result, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.85",
              "n": "Formal Risk Assessment Evidence and Residual Risk Score Trend (PCI DSS Req 12.3.1, 12.3.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks annual enterprise risk assessment submissions including PCI-scoped risk items and residual scores so targeted risk analyses required by the standard are current and leadership-reviewed.",
              "t": "Splunk DB Connect or `sourcetype=\"grc:risk_register\"`",
              "d": "`index=grc` `sourcetype=\"grc:risk_register\"` — `risk_id`, `pci_relevant`, `residual_score`, `review_date`",
              "q": "index=grc sourcetype=\"grc:risk_register\" earliest=-730d\n| where pci_relevant=\"true\"\n| eval overdue=if(now()>strptime(review_date,\"%Y-%m-%d\"),1,0)\n| stats latest(residual_score) as score, max(overdue) as past_due by risk_id, title\n| where past_due=1 OR score>15\n| sort score",
              "m": "(1) Align `review_date` to annual cycle. (2) Map `score` scale 1–25 consistently. (3) Escalate `past_due` to risk committee. (4) Export closed reviews for PCI policy pack.",
              "z": "Line (residual_score trend in summary index), table (past_due risks), heatmap (domain x score).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect or `sourcetype=\"grc:risk_register\"`.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"grc:risk_register\"` — `risk_id`, `pci_relevant`, `residual_score`, `review_date`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `review_date` to annual cycle. (2) Map `score` scale 1–25 consistently. (3) Escalate `past_due` to risk committee. (4) Export closed reviews for PCI policy pack.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"grc:risk_register\" earliest=-730d\n| where pci_relevant=\"true\"\n| eval overdue=if(now()>strptime(review_date,\"%Y-%m-%d\"),1,0)\n| stats latest(residual_score) as score, max(overdue) as past_due by risk_id, title\n| where past_due=1 OR score>15\n| sort score\n```\n\nUnderstanding this SPL\n\n**Formal Risk Assessment Evidence and Residual Risk Score Trend (PCI DSS Req 12.3.1, 12.3.2)** — Tracks annual enterprise risk assessment submissions including PCI-scoped risk items and residual scores so targeted risk analyses required by the standard are current and leadership-reviewed.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"grc:risk_register\"` — `risk_id`, `pci_relevant`, `residual_score`, `review_date`. **App/TA** (typical add-on context): Splunk DB Connect or `sourcetype=\"grc:risk_register\"`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: grc:risk_register. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"grc:risk_register\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where pci_relevant=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by risk_id, title** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where past_due=1 OR score>15` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line (residual_score trend in summary index), table (past_due risks), heatmap (domain x score).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track annual enterprise risk assessment submissions including PCI-scoped risk items and residual scores so targeted risk analyses required by the standard are current and leadership-reviewed. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.3.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 12.3.2 is enforced — Splunk UC-22.11.85: Formal Risk Assessment Evidence and Residual Risk Score Trend.",
                  "ea": "Saved search 'UC-22.11.85' running on index=grc sourcetype=\"grc:risk_register\" — risk_id, pci_relevant, residual_score, review_date, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.86",
              "n": "Third-Party Service Provider Compliance Scorecard Ingest (PCI DSS Req 12.8.1, 12.8.2)",
              "c": "high",
              "f": "advanced",
              "v": "Centralizes SOC reports and PCI AOC freshness dates for processors and cloud providers that affect account data so service-provider due diligence and monitoring obligations are continuously visible.",
              "t": "`sourcetype=\"vendor:compliance_scorecard\"` HEC from TPRM platform",
              "d": "`index=vendor` `sourcetype=\"vendor:compliance_scorecard\"` — `vendor`, `pci_aoc_expiry`, `soc2_expiry`, `open_findings`",
              "q": "index=vendor sourcetype=\"vendor:compliance_scorecard\" earliest=-1d\n| eval pci_exp_epoch=strptime(pci_aoc_expiry,\"%Y-%m-%d\")\n| eval days_to_pci=round((pci_exp_epoch-now())/86400,1)\n| where days_to_pci<60 OR open_findings>0\n| table vendor, pci_aoc_expiry, days_to_pci, soc2_expiry, open_findings\n| sort days_to_pci",
              "m": "(1) Automate monthly vendor portal scrape to HEC. (2) Map vendors to in-scope services in lookup. (3) Legal review when `days_to_pci<30`. (4) Attach artifacts in document store with Splunk link.",
              "z": "Table (vendors at risk), timeline (expiry dates), single value (count expiring <60d).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `sourcetype=\"vendor:compliance_scorecard\"` HEC from TPRM platform.\n• Ensure the following data sources are available: `index=vendor` `sourcetype=\"vendor:compliance_scorecard\"` — `vendor`, `pci_aoc_expiry`, `soc2_expiry`, `open_findings`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Automate monthly vendor portal scrape to HEC. (2) Map vendors to in-scope services in lookup. (3) Legal review when `days_to_pci<30`. (4) Attach artifacts in document store with Splunk link.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vendor sourcetype=\"vendor:compliance_scorecard\" earliest=-1d\n| eval pci_exp_epoch=strptime(pci_aoc_expiry,\"%Y-%m-%d\")\n| eval days_to_pci=round((pci_exp_epoch-now())/86400,1)\n| where days_to_pci<60 OR open_findings>0\n| table vendor, pci_aoc_expiry, days_to_pci, soc2_expiry, open_findings\n| sort days_to_pci\n```\n\nUnderstanding this SPL\n\n**Third-Party Service Provider Compliance Scorecard Ingest (PCI DSS Req 12.8.1, 12.8.2)** — Centralizes SOC reports and PCI AOC freshness dates for processors and cloud providers that affect account data so service-provider due diligence and monitoring obligations are continuously visible.\n\nDocumented **Data sources**: `index=vendor` `sourcetype=\"vendor:compliance_scorecard\"` — `vendor`, `pci_aoc_expiry`, `soc2_expiry`, `open_findings`. **App/TA** (typical add-on context): `sourcetype=\"vendor:compliance_scorecard\"` HEC from TPRM platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vendor; **sourcetype**: vendor:compliance_scorecard. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vendor, sourcetype=\"vendor:compliance_scorecard\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pci_exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_pci** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_pci<60 OR open_findings>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Third-Party Service Provider Compliance Scorecard Ingest (PCI DSS Req 12.8.1, 12.8.2)**): table vendor, pci_aoc_expiry, days_to_pci, soc2_expiry, open_findings\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (vendors at risk), timeline (expiry dates), single value (count expiring <60d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We centralizes SOC reports and PCI AOC freshness dates for processors and cloud providers that affect account data so service-provider due diligence and monitoring obligations are continuously visible. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.8.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 12.8.2 is enforced — Splunk UC-22.11.86: Third-Party Service Provider Compliance Scorecard Ingest.",
                  "ea": "Saved search 'UC-22.11.86' running on index=vendor sourcetype=\"vendor:compliance_scorecard\" — vendor, pci_aoc_expiry, soc2_expiry, open_findings, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.87",
              "n": "Acceptable Use Policy Annual Attestation Completion (PCI DSS Req 12.1, 12.6.1)",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks which employees have electronically signed the acceptable use policy each year so security policies acknowledged by users are provable for HR and PCI policy interviews.",
              "t": "Workday / ServiceNow attestation HEC `sourcetype=\"hr:policy_attestation\"`",
              "d": "`index=hr` `sourcetype=\"hr:policy_attestation\"` — `policy_code`, `user`, `signed_date`",
              "q": "index=hr sourcetype=\"hr:policy_attestation\" policy_code=\"AUP_V2026\" earliest=-400d\n| stats latest(signed_date) as last_signed by user\n| eval signed_epoch=strptime(last_signed,\"%Y-%m-%d\")\n| eval stale=if(signed_epoch<relative_time(now(),\"-365d@d\"),1,0)\n| where stale=1 OR isnull(last_signed)\n| table user, last_signed, stale\n| sort user",
              "m": "(1) Version `policy_code` each revision. (2) Join workforce roster to find never-signed. (3) Manager digest weekly until 100%. (4) Export completion CSV for PCI annual evidence.",
              "z": "Gauge (% current), table (stale users), bar (by department).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Workday / ServiceNow attestation HEC `sourcetype=\"hr:policy_attestation\"`.\n• Ensure the following data sources are available: `index=hr` `sourcetype=\"hr:policy_attestation\"` — `policy_code`, `user`, `signed_date`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Version `policy_code` each revision. (2) Join workforce roster to find never-signed. (3) Manager digest weekly until 100%. (4) Export completion CSV for PCI annual evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=\"hr:policy_attestation\" policy_code=\"AUP_V2026\" earliest=-400d\n| stats latest(signed_date) as last_signed by user\n| eval signed_epoch=strptime(last_signed,\"%Y-%m-%d\")\n| eval stale=if(signed_epoch<relative_time(now(),\"-365d@d\"),1,0)\n| where stale=1 OR isnull(last_signed)\n| table user, last_signed, stale\n| sort user\n```\n\nUnderstanding this SPL\n\n**Acceptable Use Policy Annual Attestation Completion (PCI DSS Req 12.1, 12.6.1)** — Tracks which employees have electronically signed the acceptable use policy each year so security policies acknowledged by users are provable for HR and PCI policy interviews.\n\nDocumented **Data sources**: `index=hr` `sourcetype=\"hr:policy_attestation\"` — `policy_code`, `user`, `signed_date`. **App/TA** (typical add-on context): Workday / ServiceNow attestation HEC `sourcetype=\"hr:policy_attestation\"`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: hr:policy_attestation. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=\"hr:policy_attestation\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **signed_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **stale** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where stale=1 OR isnull(last_signed)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Acceptable Use Policy Annual Attestation Completion (PCI DSS Req 12.1, 12.6.1)**): table user, last_signed, stale\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge (% current), table (stale users), bar (by department).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track which employees have electronically signed the acceptable use policy each year so security policies acknowledged by users are provable for HR and PCI policy interviews. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.6.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 12.6.1 is enforced — Splunk UC-22.11.87: Acceptable Use Policy Annual Attestation Completion.",
                  "ea": "Saved search 'UC-22.11.87' running on index=hr sourcetype=\"hr:policy_attestation\" — policy_code, user, signed_date, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.88",
              "n": "Security Roles and Responsibilities Assignment Completeness (PCI DSS Req 12.1.3, 12.5.1)",
              "c": "medium",
              "f": "intermediate",
              "v": "Verifies each PCI-defined security role in the responsibility matrix has a named primary and backup assignee in the HR or IAM feed so accountability for security activities is not ambiguous during audits.",
              "t": "`sourcetype=\"iam:pci_role_assignment\"` HEC from IAM",
              "d": "`index=hr` `sourcetype=\"iam:pci_role_assignment\"` — `role_name`, `primary_user`, `backup_user`, `effective_date`",
              "q": "index=hr sourcetype=\"iam:pci_role_assignment\" earliest=-1d\n| where match(role_name,\"(?i)CISO|PCI\\\\s*Lead|IR\\\\s*Lead|NetSec|DBA\\\\s*CHD)\")\n| eval missing_primary=isnull(primary_user) OR primary_user=\"\"\n| eval missing_backup=isnull(backup_user) OR backup_user=\"\"\n| where missing_primary=1 OR missing_backup=1\n| table role_name, primary_user, backup_user, effective_date\n| sort role_name",
              "m": "(1) Align `role_name` strings with PCI responsibility matrix. (2) Daily sync from Workday/Azure AD groups. (3) Page HRIS on gaps. (4) Quarterly executive sign-off stored in GRC.",
              "z": "Table (gaps), matrix visual (role x assignee), single value (open gaps).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `sourcetype=\"iam:pci_role_assignment\"` HEC from IAM.\n• Ensure the following data sources are available: `index=hr` `sourcetype=\"iam:pci_role_assignment\"` — `role_name`, `primary_user`, `backup_user`, `effective_date`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `role_name` strings with PCI responsibility matrix. (2) Daily sync from Workday/Azure AD groups. (3) Page HRIS on gaps. (4) Quarterly executive sign-off stored in GRC.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=\"iam:pci_role_assignment\" earliest=-1d\n| where match(role_name,\"(?i)CISO|PCI\\\\s*Lead|IR\\\\s*Lead|NetSec|DBA\\\\s*CHD)\")\n| eval missing_primary=isnull(primary_user) OR primary_user=\"\"\n| eval missing_backup=isnull(backup_user) OR backup_user=\"\"\n| where missing_primary=1 OR missing_backup=1\n| table role_name, primary_user, backup_user, effective_date\n| sort role_name\n```\n\nUnderstanding this SPL\n\n**Security Roles and Responsibilities Assignment Completeness (PCI DSS Req 12.1.3, 12.5.1)** — Verifies each PCI-defined security role in the responsibility matrix has a named primary and backup assignee in the HR or IAM feed so accountability for security activities is not ambiguous during audits.\n\nDocumented **Data sources**: `index=hr` `sourcetype=\"iam:pci_role_assignment\"` — `role_name`, `primary_user`, `backup_user`, `effective_date`. **App/TA** (typical add-on context): `sourcetype=\"iam:pci_role_assignment\"` HEC from IAM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: iam:pci_role_assignment. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=\"iam:pci_role_assignment\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(role_name,\"(?i)CISO|PCI\\\\s*Lead|IR\\\\s*Lead|NetSec|DBA\\\\s*CHD)\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **missing_primary** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **missing_backup** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where missing_primary=1 OR missing_backup=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Security Roles and Responsibilities Assignment Completeness (PCI DSS Req 12.1.3, 12.5.1)**): table role_name, primary_user, backup_user, effective_date\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (gaps), matrix visual (role x assignee), single value (open gaps).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We verifies each PCI-defined security role in the responsibility matrix has a named primary and backup assignee in the HR or IAM feed so accountability for security activities is not ambiguous during audits. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.5.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 12.5.1 is enforced — Splunk UC-22.11.88: Security Roles and Responsibilities Assignment Completeness.",
                  "ea": "Saved search 'UC-22.11.88' running on index=hr sourcetype=\"iam:pci_role_assignment\" — role_name, primary_user, backup_user, effective_date, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.89",
              "n": "Technology Acceptable Use — USB Mass Storage on CDE Workstations (PCI DSS Req 12.3.2, 2.2.4)",
              "c": "high",
              "f": "intermediate",
              "v": "Surfaces device plug events for removable media on in-scope workstations so technology usage policies restricting untrusted storage near account data are enforced and auditable.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=windows` `Microsoft-Windows-Kernel-PnP/Configuration` or `WinEventLog:Security` EventCode 6416",
              "q": "index=windows sourcetype=\"WinEventLog:Microsoft-Windows-Kernel-PnP/Configuration\" earliest=-7d\n| where match(ClassName,\"(?i)USBSTOR|diskdrive)\")\n| lookup pci_cde_workstations.csv ComputerName OUTPUT in_cde_ws\n| where in_cde_ws=\"true\"\n| stats earliest(_time) as first_insert, values(DeviceDescription) as dev by ComputerName, InstanceId\n| sort - first_insert",
              "m": "(1) Enable PnP operational log forwarding on CDE laptops if any exist. (2) Block at GPO where possible; use this as detective control. (3) Investigate each event. (4) Map to acceptable use training module completion.",
              "z": "Table (devices), timeline (first_insert), single value (weekly USB attempts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=windows` `Microsoft-Windows-Kernel-PnP/Configuration` or `WinEventLog:Security` EventCode 6416.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable PnP operational log forwarding on CDE laptops if any exist. (2) Block at GPO where possible; use this as detective control. (3) Investigate each event. (4) Map to acceptable use training module completion.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Microsoft-Windows-Kernel-PnP/Configuration\" earliest=-7d\n| where match(ClassName,\"(?i)USBSTOR|diskdrive)\")\n| lookup pci_cde_workstations.csv ComputerName OUTPUT in_cde_ws\n| where in_cde_ws=\"true\"\n| stats earliest(_time) as first_insert, values(DeviceDescription) as dev by ComputerName, InstanceId\n| sort - first_insert\n```\n\nUnderstanding this SPL\n\n**Technology Acceptable Use — USB Mass Storage on CDE Workstations (PCI DSS Req 12.3.2, 2.2.4)** — Surfaces device plug events for removable media on in-scope workstations so technology usage policies restricting untrusted storage near account data are enforced and auditable.\n\nDocumented **Data sources**: `index=windows` `Microsoft-Windows-Kernel-PnP/Configuration` or `WinEventLog:Security` EventCode 6416. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Microsoft-Windows-Kernel-PnP/Configuration. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Microsoft-Windows-Kernel-PnP/Configuration\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(ClassName,\"(?i)USBSTOR|diskdrive\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_cde_ws=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ComputerName, InstanceId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Technology Acceptable Use — USB Mass Storage on CDE Workstations (PCI DSS Req 12.3.2, 2.2.4)** — Surfaces device plug events for removable media on in-scope workstations so technology usage policies restricting untrusted storage near account data are enforced and auditable.\n\nDocumented **Data sources**: `index=windows` `Microsoft-Windows-Kernel-PnP/Configuration` or `WinEventLog:Security` EventCode 6416. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (devices), timeline (first_insert), single value (weekly USB attempts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface device plug events for removable media on in-scope workstations so technology usage policies restricting untrusted storage near account data are enforced and auditable. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "2.2.4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 2.2.4 is enforced — Splunk UC-22.11.89: Technology Acceptable Use — USB Mass Storage on CDE Workstations.",
                  "ea": "Saved search 'UC-22.11.89' running on index=windows Microsoft-Windows-Kernel-PnP/Configuration or WinEventLog:Security EventCode 6416, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.90",
              "n": "Annual Information Security Policy Review and Approval Workflow (PCI DSS Req 12.1.1, 12.1.2)",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks policy document versions, review due dates, and approver signatures so the security policy is reviewed at least annually and management acknowledges changes affecting the CDE.",
              "t": "Splunk Add-on for ServiceNow (1928) or `sourcetype=\"grc:policy_review\"`",
              "d": "`index=grc` `sourcetype=\"grc:policy_review\"` — `policy_id`, `version`, `next_review_date`, `approver`, `status`",
              "q": "index=grc sourcetype=\"grc:policy_review\" earliest=-730d\n| where match(policy_id,\"(?i)PCI|INFOSEC|ACCESS|NETWORK)\")\n| eval due_epoch=strptime(next_review_date,\"%Y-%m-%d\")\n| eval overdue=if(now()>due_epoch AND NOT match(status,\"(?i)approved|published\"),1,0)\n| stats latest(version) as ver, latest(status) as st, max(overdue) as od by policy_id\n| where od=1\n| table policy_id, ver, st, next_review_date\n| sort policy_id",
              "m": "(1) Push policy metadata whenever GRC workflow updates. (2) Link to document repository deep links. (3) Alert compliance manager 60 days before `next_review_date`. (4) Archive approved PDF hash in same event.",
              "z": "Table (overdue policies), timeline (next_review_date), gauge (% policies current).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (1928) or `sourcetype=\"grc:policy_review\"`.\n• Ensure the following data sources are available: `index=grc` `sourcetype=\"grc:policy_review\"` — `policy_id`, `version`, `next_review_date`, `approver`, `status`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Push policy metadata whenever GRC workflow updates. (2) Link to document repository deep links. (3) Alert compliance manager 60 days before `next_review_date`. (4) Archive approved PDF hash in same event.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"grc:policy_review\" earliest=-730d\n| where match(policy_id,\"(?i)PCI|INFOSEC|ACCESS|NETWORK)\")\n| eval due_epoch=strptime(next_review_date,\"%Y-%m-%d\")\n| eval overdue=if(now()>due_epoch AND NOT match(status,\"(?i)approved|published\"),1,0)\n| stats latest(version) as ver, latest(status) as st, max(overdue) as od by policy_id\n| where od=1\n| table policy_id, ver, st, next_review_date\n| sort policy_id\n```\n\nUnderstanding this SPL\n\n**Annual Information Security Policy Review and Approval Workflow (PCI DSS Req 12.1.1, 12.1.2)** — Tracks policy document versions, review due dates, and approver signatures so the security policy is reviewed at least annually and management acknowledges changes affecting the CDE.\n\nDocumented **Data sources**: `index=grc` `sourcetype=\"grc:policy_review\"` — `policy_id`, `version`, `next_review_date`, `approver`, `status`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (1928) or `sourcetype=\"grc:policy_review\"`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: grc:policy_review. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"grc:policy_review\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(policy_id,\"(?i)PCI|INFOSEC|ACCESS|NETWORK)\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **due_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by policy_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where od=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Annual Information Security Policy Review and Approval Workflow (PCI DSS Req 12.1.1, 12.1.2)**): table policy_id, ver, st, next_review_date\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue policies), timeline (next_review_date), gauge (% policies current).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track policy document versions, review due dates, and approver signatures so the security policy is reviewed at least annually and management acknowledges changes affecting the CDE. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PCI DSS",
                "PCI DSS v4.0"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.1.2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that PCI DSS 12.1.2 is enforced — Splunk UC-22.11.90: Annual Information Security Policy Review and Approval Workflow.",
                  "ea": "Saved search 'UC-22.11.90' running on index=grc sourcetype=\"grc:policy_review\" — policy_id, version, next_review_date, approver, status, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 90,
            "none": 0
          }
        },
        {
          "i": "22.10",
          "n": "HIPAA",
          "u": [
            {
              "i": "22.10.1",
              "n": "HIPAA Risk Analysis Evidence — Asset & ePHI System Inventory (§164.308(a)(1))",
              "c": "critical",
              "f": "advanced",
              "v": "Correlates systems that store or transmit ePHI with vulnerability and patch posture so security leaders can evidence an ongoing, documented risk analysis aligned with the Security Rule.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=assets` (CMDB/ES assets), `index=vulnerability` (Tenable), `index=epic` `sourcetype=epic:integration` (interface engine hosts), Windows Security 4688/5156 for clinical app servers",
              "q": "(index=assets sourcetype IN (\"cmdb:server\",\"es:assets\")) OR (index=vulnerability sourcetype=\"tenable:vuln\")\n| eval ephi_system=if(match(lower(_raw),\"epic|cerner|clarity|caboodle|emr|ehr|phi|hipaa\"),1,0)\n| stats latest(_time) as last_seen values(sourcetype) as sources dc(host) as hosts by system_name, ip\n| lookup ephi_system_register.csv ip OUTPUT contains_ephi, data_owner, baa_id\n| where contains_ephi=\"true\" OR ephi_system=1\n| sort - last_seen",
              "m": "(1) Maintain `ephi_system_register.csv` with IP/CIDR, owner, and BAA ID; (2) normalize host/IP fields across CMDB and vuln feeds; (3) schedule weekly and route gaps to GRC for formal risk register updates; (4) attach CVSS and compensating controls in ticketing.",
              "z": "Table (system, owner, last vuln scan), Bar chart (critical findings by ePHI tier), Single value (unscanned ePHI hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=assets` (CMDB/ES assets), `index=vulnerability` (Tenable), `index=epic` `sourcetype=epic:integration` (interface engine hosts), Windows Security 4688/5156 for clinical app servers.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `ephi_system_register.csv` with IP/CIDR, owner, and BAA ID; (2) normalize host/IP fields across CMDB and vuln feeds; (3) schedule weekly and route gaps to GRC for formal risk register updates; (4) attach CVSS and compensating controls in ticketing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=assets sourcetype IN (\"cmdb:server\",\"es:assets\")) OR (index=vulnerability sourcetype=\"tenable:vuln\")\n| eval ephi_system=if(match(lower(_raw),\"epic|cerner|clarity|caboodle|emr|ehr|phi|hipaa\"),1,0)\n| stats latest(_time) as last_seen values(sourcetype) as sources dc(host) as hosts by system_name, ip\n| lookup ephi_system_register.csv ip OUTPUT contains_ephi, data_owner, baa_id\n| where contains_ephi=\"true\" OR ephi_system=1\n| sort - last_seen\n```\n\nUnderstanding this SPL\n\n**HIPAA Risk Analysis Evidence — Asset & ePHI System Inventory (§164.308(a)(1))** — Correlates systems that store or transmit ePHI with vulnerability and patch posture so security leaders can evidence an ongoing, documented risk analysis aligned with the Security Rule.\n\nDocumented **Data sources**: `index=assets` (CMDB/ES assets), `index=vulnerability` (Tenable), `index=epic` `sourcetype=epic:integration` (interface engine hosts), Windows Security 4688/5156 for clinical app servers. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: assets, vulnerability; **sourcetype**: tenable:vuln. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=assets, index=vulnerability, sourcetype=\"tenable:vuln\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ephi_system** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by system_name, ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where contains_ephi=\"true\" OR ephi_system=1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(Vulnerabilities.dest) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**HIPAA Risk Analysis Evidence — Asset & ePHI System Inventory (§164.308(a)(1))** — Correlates systems that store or transmit ePHI with vulnerability and patch posture so security leaders can evidence an ongoing, documented risk analysis aligned with the Security Rule.\n\nDocumented **Data sources**: `index=assets` (CMDB/ES assets), `index=vulnerability` (Tenable), `index=epic` `sourcetype=epic:integration` (interface engine hosts), Windows Security 4688/5156 for clinical app servers. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (system, owner, last vuln scan), Bar chart (critical findings by ePHI tier), Single value (unscanned ePHI hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We correlate systems that store or transmit ePHI with vulnerability and patch posture so security leaders can evidence an ongoing, documented risk analysis aligned with the Security Rule. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Vulnerabilities (when Tenable is CIM-tagged)",
                "N/A for mixed CMDB"
              ],
              "qs": "| tstats summariesonly=t dc(Vulnerabilities.dest) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - agg_value",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Security §164.308(a)(1) (Security management process) is enforced — Splunk UC-22.10.1: HIPAA Risk Analysis Evidence — Asset & ePHI System Inventory.",
                  "ea": "Saved search 'UC-22.10.1' running on sourcetype epic:integration and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.2",
              "n": "HIPAA Risk Management — Control Deficiency Tracking (§164.308(a)(1))",
              "c": "critical",
              "f": "intermediate",
              "v": "Tracks open HIPAA control gaps from assessments and pen tests through remediation so risk is reduced to reasonable and appropriate levels, not just documented once.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`` `notable` `` or `index=grc` HEC JSON (`control_id`, `framework`, `status`, `due_date`, `owner`) from GRC exports",
              "q": "index=grc sourcetype=\"grc:hipaa_control\" earliest=-90d framework=\"HIPAA_Security_Rule\"\n| eval overdue=if(status!=\"Closed\" AND now()>strptime(due_date,\"%Y-%m-%d\"),1,0)\n| stats count as open_controls sum(overdue) as overdue_controls by owner, control_family\n| sort - overdue_controls",
              "m": "(1) Define control_family values mapping to 164.308/310/312 families; (2) ingest weekly GRC CSV/HEC; (3) alert on `overdue_controls>0`; (4) require closure notes referencing implemented safeguard.",
              "z": "Table (owner, open, overdue), Column chart (by control family), Timeline (median days to close).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `` `notable` `` or `index=grc` HEC JSON (`control_id`, `framework`, `status`, `due_date`, `owner`) from GRC exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define control_family values mapping to 164.308/310/312 families; (2) ingest weekly GRC CSV/HEC; (3) alert on `overdue_controls>0`; (4) require closure notes referencing implemented safeguard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"grc:hipaa_control\" earliest=-90d framework=\"HIPAA_Security_Rule\"\n| eval overdue=if(status!=\"Closed\" AND now()>strptime(due_date,\"%Y-%m-%d\"),1,0)\n| stats count as open_controls sum(overdue) as overdue_controls by owner, control_family\n| sort - overdue_controls\n```\n\nUnderstanding this SPL\n\n**HIPAA Risk Management — Control Deficiency Tracking (§164.308(a)(1))** — Tracks open HIPAA control gaps from assessments and pen tests through remediation so risk is reduced to reasonable and appropriate levels, not just documented once.\n\nDocumented **Data sources**: `` `notable` `` or `index=grc` HEC JSON (`control_id`, `framework`, `status`, `due_date`, `owner`) from GRC exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: grc:hipaa_control. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"grc:hipaa_control\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by owner, control_family** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (owner, open, overdue), Column chart (by control family), Timeline (median days to close).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track open HIPAA control gaps from assessments and pen tests through remediation so risk is reduced to reasonable and appropriate levels, not just documented once. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Security §164.308(a)(1) (Security management process) is enforced — Splunk UC-22.10.2: HIPAA Risk Management — Control Deficiency Tracking.",
                  "ea": "Saved search 'UC-22.10.2' running on notable or index=grc HEC JSON (control_id, framework, status, due_date, owner) from GRC exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.3",
              "n": "Information System Activity Review — Cross-Source ePHI Access Summary (§164.308(a)(1)(ii)(D))",
              "c": "critical",
              "f": "advanced",
              "v": "Produces a daily leadership-ready rollup of who touched ePHI across EHR, VPN, and databases so workforce activity review obligations are operationally met, not ad hoc.",
              "t": "Splunk DB Connect (2686), Splunk Add-on for Citrix (2757), Splunk Common Information Model Add-on (1621)",
              "d": "`index=epic` `sourcetype=epic:audit`, `index=citrix` `sourcetype=citrix:session`, `index=vpn` `sourcetype=paloalto:globalprotect`",
              "q": "(index=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\") OR (index=cerner sourcetype=\"cerner:audit\" Action=\"PatientChartView\") OR (index=citrix sourcetype=\"citrix:session\" Application=\"*Hyperspace*\") OR (index=vpn sourcetype=\"paloalto:globalprotect\" status=\"success\")\n| eval user=coalesce(USER_ID, PractitionerId, user, src_user)\n| eval PAT_ID=coalesce(PAT_ID, PatientId)\n| bin _time span=1d\n| stats dc(PAT_ID) as patients_touched dc(host) as systems by _time, user\n| sort - patients_touched",
              "m": "(1) Align user ID formats via `identity_lookup.csv`; (2) restrict index ACLs to privacy/security; (3) schedule daily PDF/dashboard to HIPAA committee; (4) retain per legal record retention policy.",
              "z": "Heatmap (users x day), Table (top reviewers), Single value (distinct users accessing charts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Citrix](https://splunkbase.splunk.com/app/2757), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (2686), Splunk Add-on for Citrix (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=epic` `sourcetype=epic:audit`, `index=citrix` `sourcetype=citrix:session`, `index=vpn` `sourcetype=paloalto:globalprotect`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align user ID formats via `identity_lookup.csv`; (2) restrict index ACLs to privacy/security; (3) schedule daily PDF/dashboard to HIPAA committee; (4) retain per legal record retention policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\") OR (index=cerner sourcetype=\"cerner:audit\" Action=\"PatientChartView\") OR (index=citrix sourcetype=\"citrix:session\" Application=\"*Hyperspace*\") OR (index=vpn sourcetype=\"paloalto:globalprotect\" status=\"success\")\n| eval user=coalesce(USER_ID, PractitionerId, user, src_user)\n| eval PAT_ID=coalesce(PAT_ID, PatientId)\n| bin _time span=1d\n| stats dc(PAT_ID) as patients_touched dc(host) as systems by _time, user\n| sort - patients_touched\n```\n\nUnderstanding this SPL\n\n**Information System Activity Review — Cross-Source ePHI Access Summary (§164.308(a)(1)(ii)(D))** — Produces a daily leadership-ready rollup of who touched ePHI across EHR, VPN, and databases so workforce activity review obligations are operationally met, not ad hoc.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:audit`, `index=citrix` `sourcetype=citrix:session`, `index=vpn` `sourcetype=paloalto:globalprotect`. **App/TA** (typical add-on context): Splunk DB Connect (2686), Splunk Add-on for Citrix (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: epic, cerner, citrix, vpn; **sourcetype**: epic:audit, cerner:audit, citrix:session, paloalto:globalprotect. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=epic, index=cerner, index=citrix, index=vpn…. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **PAT_ID** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(Authentication.dest) as agg_value from datamodel=Authentication.Authentication by Authentication.user span=1d | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Information System Activity Review — Cross-Source ePHI Access Summary (§164.308(a)(1)(ii)(D))** — Produces a daily leadership-ready rollup of who touched ePHI across EHR, VPN, and databases so workforce activity review obligations are operationally met, not ad hoc.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:audit`, `index=citrix` `sourcetype=citrix:session`, `index=vpn` `sourcetype=paloalto:globalprotect`. **App/TA** (typical add-on context): Splunk DB Connect (2686), Splunk Add-on for Citrix (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (users x day), Table (top reviewers), Single value (distinct users accessing charts).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We produce a daily leadership-ready rollup of who touched ePHI across EHR, VPN, and databases so workforce activity review obligations are operationally met, not ad hoc. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Authentication (VPN when CIM-tagged)",
                "N/A for Epic native"
              ],
              "qs": "| tstats summariesonly=t dc(Authentication.dest) as agg_value from datamodel=Authentication.Authentication by Authentication.user span=1d | sort - agg_value",
              "e": [
                "citrix",
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(1)(ii)(D)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.308(a)(1)(ii)(D) is enforced — Splunk UC-22.10.3: Information System Activity Review — Cross-Source ePHI Access Summary.",
                  "ea": "Saved search 'UC-22.10.3' running on index=epic sourcetype=epic:audit, index=citrix sourcetype=citrix:session, index=vpn sourcetype=paloalto:globalprotect, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.4",
              "n": "Workforce Clearance — HR Hire vs AD Account Creation (§164.308(a)(3))",
              "c": "high",
              "f": "intermediate",
              "v": "Flags Active Directory accounts created or enabled before HR clearance effective date, supporting workforce clearance procedures for roles with ePHI access.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`WinEventLog:Security` EventCode IN (4720,4722,4738), `index=hr` `sourcetype=workday:worker` or CSV HEC (`employee_id`, `clearance_status`, `effective_date`)",
              "q": "(index=windows EventCode IN (\"4720\",\"4722\") TargetUserName=\"*\") OR (index=hr sourcetype=\"workday:worker\")\n| eval emp_id=coalesce(employee_id, TargetUserName)\n| eval event_type=if(EventCode=\"4720\",\"AccountCreated\",if(EventCode=\"4722\",\"AccountEnabled\",\"HRRecord\"))\n| stats earliest(_time) as first_ad latest(_time) as last_hr by emp_id, event_type\n| join type=outer emp_id [\n    search index=hr sourcetype=\"workday:worker\" earliest=-180d\n    | eval clearance_epoch=strptime(effective_date,\"%Y-%m-%d\")\n    | table employee_id, clearance_status, clearance_epoch\n    | rename employee_id as emp_id ]\n| eval violation=if(first_ad < clearance_epoch AND clearance_status=\"Cleared\",\"PreClearance_AD\",null())\n| where isnotnull(violation)",
              "m": "(1) Normalize timestamps to UTC; (2) map service accounts to exclusion lookup; (3) route violations to IAM and HR; (4) quarterly attestation report.",
              "z": "Table (account, violation, timestamps), Single value (violations 30d).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `WinEventLog:Security` EventCode IN (4720,4722,4738), `index=hr` `sourcetype=workday:worker` or CSV HEC (`employee_id`, `clearance_status`, `effective_date`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize timestamps to UTC; (2) map service accounts to exclusion lookup; (3) route violations to IAM and HR; (4) quarterly attestation report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=windows EventCode IN (\"4720\",\"4722\") TargetUserName=\"*\") OR (index=hr sourcetype=\"workday:worker\")\n| eval emp_id=coalesce(employee_id, TargetUserName)\n| eval event_type=if(EventCode=\"4720\",\"AccountCreated\",if(EventCode=\"4722\",\"AccountEnabled\",\"HRRecord\"))\n| stats earliest(_time) as first_ad latest(_time) as last_hr by emp_id, event_type\n| join type=outer emp_id [\n    search index=hr sourcetype=\"workday:worker\" earliest=-180d\n    | eval clearance_epoch=strptime(effective_date,\"%Y-%m-%d\")\n    | table employee_id, clearance_status, clearance_epoch\n    | rename employee_id as emp_id ]\n| eval violation=if(first_ad < clearance_epoch AND clearance_status=\"Cleared\",\"PreClearance_AD\",null())\n| where isnotnull(violation)\n```\n\nUnderstanding this SPL\n\n**Workforce Clearance — HR Hire vs AD Account Creation (§164.308(a)(3))** — Flags Active Directory accounts created or enabled before HR clearance effective date, supporting workforce clearance procedures for roles with ePHI access.\n\nDocumented **Data sources**: `WinEventLog:Security` EventCode IN (4720,4722,4738), `index=hr` `sourcetype=workday:worker` or CSV HEC (`employee_id`, `clearance_status`, `effective_date`). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, hr; **sourcetype**: workday:worker. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, index=hr, sourcetype=\"workday:worker\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **emp_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **event_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by emp_id, event_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **violation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(violation)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Workforce Clearance — HR Hire vs AD Account Creation (§164.308(a)(3))** — Flags Active Directory accounts created or enabled before HR clearance effective date, supporting workforce clearance procedures for roles with ePHI access.\n\nDocumented **Data sources**: `WinEventLog:Security` EventCode IN (4720,4722,4738), `index=hr` `sourcetype=workday:worker` or CSV HEC (`employee_id`, `clearance_status`, `effective_date`). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (account, violation, timestamps), Single value (violations 30d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We flag Active Directory accounts created or enabled before HR clearance effective date, supporting workforce clearance procedures for roles with ePHI access. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Authentication (optional for 4624 correlation)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(3)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Security §164.308(a)(3) (Workforce security) is enforced — Splunk UC-22.10.4: Workforce Clearance — HR Hire vs AD Account Creation.",
                  "ea": "Saved search 'UC-22.10.4' running on sourcetype workday:worker and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.5",
              "n": "Termination — Access Revocation Within Policy Window (§164.308(a)(3)(ii)(C))",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects logons, VPN sessions, or EHR access after HR termination timestamp so access revocation SLAs are provably enforced.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Citrix (2757)",
              "d": "`WinEventLog:Security` EventCode=4624, `index=citrix`, `index=epic` `sourcetype=epic:audit`, `index=hr` terminations (`term_date`, `samaccountname`)",
              "q": "index=hr sourcetype=\"workday:termination\" earliest=-30d\n| eval term_epoch=strptime(term_date,\"%Y-%m-%d %H:%M:%S\")\n| table samaccountname, term_epoch\n| rename samaccountname as user\n| join type=inner user [\n    search index=windows EventCode=\"4624\" earliest=-30d\n    | eval user=mvindex(Security_ID,0)\n    | rex field=user \"\\\\\\\\(?<sam>[^\\\\\\\\]+)$\"\n    | eval user=coalesce(sam,user)\n    | table _time, user, WorkstationName, Logon_Type ]\n| where _time > term_epoch\n| eval hours_after_term=round((_time-term_epoch)/3600,2)",
              "m": "(1) Ingest authoritative termination feed within 15 minutes SLA; (2) tune out break-glass shared accounts; (3) auto-disable via SOAR optional; (4) weekly HIPAA access committee digest.",
              "z": "Table (user, hours_after_term, workstation), Timeline (violations), Single value (count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Citrix](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Citrix (2757).\n• Ensure the following data sources are available: `WinEventLog:Security` EventCode=4624, `index=citrix`, `index=epic` `sourcetype=epic:audit`, `index=hr` terminations (`term_date`, `samaccountname`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest authoritative termination feed within 15 minutes SLA; (2) tune out break-glass shared accounts; (3) auto-disable via SOAR optional; (4) weekly HIPAA access committee digest.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=\"workday:termination\" earliest=-30d\n| eval term_epoch=strptime(term_date,\"%Y-%m-%d %H:%M:%S\")\n| table samaccountname, term_epoch\n| rename samaccountname as user\n| join type=inner user [\n    search index=windows EventCode=\"4624\" earliest=-30d\n    | eval user=mvindex(Security_ID,0)\n    | rex field=user \"\\\\\\\\(?<sam>[^\\\\\\\\]+)$\"\n    | eval user=coalesce(sam,user)\n    | table _time, user, WorkstationName, Logon_Type ]\n| where _time > term_epoch\n| eval hours_after_term=round((_time-term_epoch)/3600,2)\n```\n\nUnderstanding this SPL\n\n**Termination — Access Revocation Within Policy Window (§164.308(a)(3)(ii)(C))** — Detects logons, VPN sessions, or EHR access after HR termination timestamp so access revocation SLAs are provably enforced.\n\nDocumented **Data sources**: `WinEventLog:Security` EventCode=4624, `index=citrix`, `index=epic` `sourcetype=epic:audit`, `index=hr` terminations (`term_date`, `samaccountname`). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Citrix (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: workday:termination. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=\"workday:termination\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **term_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Termination — Access Revocation Within Policy Window (§164.308(a)(3)(ii)(C))**): table samaccountname, term_epoch\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where _time > term_epoch` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **hours_after_term** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Termination — Access Revocation Within Policy Window (§164.308(a)(3)(ii)(C))** — Detects logons, VPN sessions, or EHR access after HR termination timestamp so access revocation SLAs are provably enforced.\n\nDocumented **Data sources**: `WinEventLog:Security` EventCode=4624, `index=citrix`, `index=epic` `sourcetype=epic:audit`, `index=hr` terminations (`term_date`, `samaccountname`). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Citrix (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, hours_after_term, workstation), Timeline (violations), Single value (count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect logons, VPN sessions, or EHR access after HR termination timestamp so access revocation SLAs are provably enforced. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "citrix"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(3)(ii)(C)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.308(a)(3)(ii)(C) is enforced — Splunk UC-22.10.5: Termination — Access Revocation Within Policy Window.",
                  "ea": "Saved search 'UC-22.10.5' running on sourcetype epic:audit and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "PS-4",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 PS-4 (Personnel termination) is enforced — Splunk UC-22.10.5: Termination — Access Revocation Within Policy Window.",
                  "ea": "Saved search 'UC-22.10.5' running on sourcetype epic:audit and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.6",
              "n": "Security Awareness & Training — Phish Simulation Failure to ePHI Risk (§164.308(a)(5))",
              "c": "high",
              "f": "intermediate",
              "v": "Links repeated phishing test failures to subsequent risky ePHI handling patterns so awareness gaps become targeted coaching, not checkbox training.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Office 365 (4055)",
              "d": "`index=security_awareness` HEC (`campaign_id`, `user`, `result`), `index=o365` `sourcetype=ms:o365:management` (DLP/alerts), `index=epic` chart exports",
              "q": "index=security_awareness sourcetype=\"phish:sim\" result=\"Clicked\" earliest=-90d\n| stats count as fail_clicks by user\n| where fail_clicks>=3\n| map maxsearches=50 search=\"index=epic sourcetype=\\\"epic:audit\\\" USER_ID=\\\"$user$\\\" AccessType IN (\\\"Report\\\",\\\"Export\\\") earliest=-90d | stats count as risky_epic_events by USER_ID\"\n| sort - risky_epic_events",
              "m": "(1) Prefer `lookup` + subsearch limits for scale; (2) de-identify in dashboards; (3) HR/privacy joint review; (4) document coaching in LMS export.",
              "z": "Table (user, fails, risky events), Bar chart (failures by department from HR lookup).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Office 365 (4055).\n• Ensure the following data sources are available: `index=security_awareness` HEC (`campaign_id`, `user`, `result`), `index=o365` `sourcetype=ms:o365:management` (DLP/alerts), `index=epic` chart exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Prefer `lookup` + subsearch limits for scale; (2) de-identify in dashboards; (3) HR/privacy joint review; (4) document coaching in LMS export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security_awareness sourcetype=\"phish:sim\" result=\"Clicked\" earliest=-90d\n| stats count as fail_clicks by user\n| where fail_clicks>=3\n| map maxsearches=50 search=\"index=epic sourcetype=\\\"epic:audit\\\" USER_ID=\\\"$user$\\\" AccessType IN (\\\"Report\\\",\\\"Export\\\") earliest=-90d | stats count as risky_epic_events by USER_ID\"\n| sort - risky_epic_events\n```\n\nUnderstanding this SPL\n\n**Security Awareness & Training — Phish Simulation Failure to ePHI Risk (§164.308(a)(5))** — Links repeated phishing test failures to subsequent risky ePHI handling patterns so awareness gaps become targeted coaching, not checkbox training.\n\nDocumented **Data sources**: `index=security_awareness` HEC (`campaign_id`, `user`, `result`), `index=o365` `sourcetype=ms:o365:management` (DLP/alerts), `index=epic` chart exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Office 365 (4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security_awareness; **sourcetype**: phish:sim. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security_awareness, sourcetype=\"phish:sim\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fail_clicks>=3` — typically the threshold or rule expression for this monitoring goal.\n• Runs a templated search per row with `map`.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, fails, risky events), Bar chart (failures by department from HR lookup).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We links repeated phishing test failures to subsequent risky ePHI handling patterns so awareness gaps become targeted coaching, not checkbox training. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(5)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.308(a)(5) (Security awareness and training) is enforced — Splunk UC-22.10.6: Security Awareness & Training — Phish Simulation Failure to ePHI Risk.",
                  "ea": "Saved search 'UC-22.10.6' running on sourcetype ms:o365:management and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.7",
              "n": "Login Monitoring & Security Incident Procedures — Brute Force to Clinical SSO (§164.308(a)(6))",
              "c": "critical",
              "f": "intermediate",
              "v": "Surfaces credential stuffing and password spray against Hyperspace/SSO endpoints so incidents are detected and handled under HIPAA security incident procedures.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=ad` `sourcetype=WinEventLog:Security` EventCode=4625, `index=okta` `sourcetype=OktaIM2:log` (if used), `index=citrix` AAA failures",
              "q": "index=ad sourcetype=\"WinEventLog:Security\" EventCode=\"4625\" earliest=-24h\n| eval user=TargetUserName\n| eval src=src_ip\n| stats count values(EventCode) as codes dc(ComputerName) as dc_used by user, src\n| where count>=15 AND dc_used>=3\n| lookup hipaa_clinical_apps.csv src OUTPUT app_name\n| sort - count",
              "m": "(1) Baseline legitimate auth patterns during go-live; (2) integrate with ES notable or SOAR playbooks; (3) document IR steps for suspected credential compromise affecting ePHI.",
              "z": "Choropleth (src geo if iplocation enabled), Table (user, src, count), Single value (distinct locked accounts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=ad` `sourcetype=WinEventLog:Security` EventCode=4625, `index=okta` `sourcetype=OktaIM2:log` (if used), `index=citrix` AAA failures.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Baseline legitimate auth patterns during go-live; (2) integrate with ES notable or SOAR playbooks; (3) document IR steps for suspected credential compromise affecting ePHI.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ad sourcetype=\"WinEventLog:Security\" EventCode=\"4625\" earliest=-24h\n| eval user=TargetUserName\n| eval src=src_ip\n| stats count values(EventCode) as codes dc(ComputerName) as dc_used by user, src\n| where count>=15 AND dc_used>=3\n| lookup hipaa_clinical_apps.csv src OUTPUT app_name\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Login Monitoring & Security Incident Procedures — Brute Force to Clinical SSO (§164.308(a)(6))** — Surfaces credential stuffing and password spray against Hyperspace/SSO endpoints so incidents are detected and handled under HIPAA security incident procedures.\n\nDocumented **Data sources**: `index=ad` `sourcetype=WinEventLog:Security` EventCode=4625, `index=okta` `sourcetype=OktaIM2:log` (if used), `index=citrix` AAA failures. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ad; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ad, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **src** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>=15 AND dc_used>=3` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Login Monitoring & Security Incident Procedures — Brute Force to Clinical SSO (§164.308(a)(6))** — Surfaces credential stuffing and password spray against Hyperspace/SSO endpoints so incidents are detected and handled under HIPAA security incident procedures.\n\nDocumented **Data sources**: `index=ad` `sourcetype=WinEventLog:Security` EventCode=4625, `index=okta` `sourcetype=OktaIM2:log` (if used), `index=citrix` AAA failures. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Choropleth (src geo if iplocation enabled), Table (user, src, count), Single value (distinct locked accounts).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface credential stuffing and password spray against Hyperspace/SSO endpoints so incidents are detected and handled under HIPAA security incident procedures. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(6)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.308(a)(6) (Security incident procedures) is enforced — Splunk UC-22.10.7: Login Monitoring & Security Incident Procedures — Brute Force to Clinical SSO.",
                  "ea": "Saved search 'UC-22.10.7' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.8",
              "n": "Contingency Plan — Backup Job Success for ePHI Databases (§164.308(a)(7))",
              "c": "critical",
              "f": "intermediate",
              "v": "Monitors backup completion for Clarity/Caboodle and clinical data stores so availability safeguards in the contingency plan are continuously evidenced.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk DB Connect (2686)",
              "d": "`index=backup` `sourcetype=commvault:job` or `rubrik:events`, SQL `msdb.dbo.backupset` via DBX, Epic Clarity backup logs if forwarded",
              "q": "index=backup sourcetype=\"commvault:job\" earliest=-48h (database=\"CLARITY\" OR database=\"CABOODLE\" OR app=\"Epic*\")\n| eval success=if(match(lower(status),\"success|completed\"),1,0)\n| stats sum(success) as ok_jobs count as total_jobs latest(_time) as last_job by database, client_name\n| eval health=if(ok_jobs=total_jobs,\"OK\",\"FAIL\")\n| where health!=\"OK\"",
              "m": "(1) Tag databases containing ePHI in backup catalog; (2) page DBAs on failure; (3) quarterly restore test outcomes ingested as separate sourcetype for correlation.",
              "z": "Single value (failed jobs), Table (DB, last success), Timeline (job duration anomalies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=backup` `sourcetype=commvault:job` or `rubrik:events`, SQL `msdb.dbo.backupset` via DBX, Epic Clarity backup logs if forwarded.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag databases containing ePHI in backup catalog; (2) page DBAs on failure; (3) quarterly restore test outcomes ingested as separate sourcetype for correlation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"commvault:job\" earliest=-48h (database=\"CLARITY\" OR database=\"CABOODLE\" OR app=\"Epic*\")\n| eval success=if(match(lower(status),\"success|completed\"),1,0)\n| stats sum(success) as ok_jobs count as total_jobs latest(_time) as last_job by database, client_name\n| eval health=if(ok_jobs=total_jobs,\"OK\",\"FAIL\")\n| where health!=\"OK\"\n```\n\nUnderstanding this SPL\n\n**Contingency Plan — Backup Job Success for ePHI Databases (§164.308(a)(7))** — Monitors backup completion for Clarity/Caboodle and clinical data stores so availability safeguards in the contingency plan are continuously evidenced.\n\nDocumented **Data sources**: `index=backup` `sourcetype=commvault:job` or `rubrik:events`, SQL `msdb.dbo.backupset` via DBX, Epic Clarity backup logs if forwarded. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: commvault:job. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"commvault:job\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **success** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by database, client_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where health!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (failed jobs), Table (DB, last success), Timeline (job duration anomalies).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We monitor backup completion for Clarity/Caboodle and clinical data stores so availability safeguards in the contingency plan are continuously evidenced. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(7)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Security §164.308(a)(7) (Contingency plan) is enforced — Splunk UC-22.10.8: Contingency Plan — Backup Job Success for ePHI Databases.",
                  "ea": "Saved search 'UC-22.10.8' running on sourcetype commvault:job and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.9",
              "n": "Periodic Evaluation — Control Test Evidence Ingest (§164.308(a)(8))",
              "c": "high",
              "f": "beginner",
              "v": "Centralizes periodic technical test results (MFA coverage, encryption checks) with timestamps for HIPAA periodic technical and non-technical evaluation evidence.",
              "t": "Splunk Enterprise / Splunk Cloud (HEC), Splunk ITSI (1841)",
              "d": "`index=grc` `sourcetype=hipaa:periodic_test` JSON lines from Qualys/Azure Policy/AWS Config exporters",
              "q": "index=grc sourcetype=\"hipaa:periodic_test\" earliest=-365d test_family=\"HIPAA_Technical\"\n| eval period_month=strftime(_time,\"%Y-%m\")\n| stats latest(result) as last_result latest(_time) as last_run by control_id, period_month\n| where last_result!=\"Pass\"\n| sort period_month, control_id",
              "m": "(1) Standardize `result` enum Pass/Fail/Partial; (2) join to asset inventory; (3) annual HIPAA evaluation deck exports from this dataset.",
              "z": "Matrix (control x month), Table (failing controls), Column chart (pass rate trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud (HEC), Splunk ITSI (1841).\n• Ensure the following data sources are available: `index=grc` `sourcetype=hipaa:periodic_test` JSON lines from Qualys/Azure Policy/AWS Config exporters.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardize `result` enum Pass/Fail/Partial; (2) join to asset inventory; (3) annual HIPAA evaluation deck exports from this dataset.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"hipaa:periodic_test\" earliest=-365d test_family=\"HIPAA_Technical\"\n| eval period_month=strftime(_time,\"%Y-%m\")\n| stats latest(result) as last_result latest(_time) as last_run by control_id, period_month\n| where last_result!=\"Pass\"\n| sort period_month, control_id\n```\n\nUnderstanding this SPL\n\n**Periodic Evaluation — Control Test Evidence Ingest (§164.308(a)(8))** — Centralizes periodic technical test results (MFA coverage, encryption checks) with timestamps for HIPAA periodic technical and non-technical evaluation evidence.\n\nDocumented **Data sources**: `index=grc` `sourcetype=hipaa:periodic_test` JSON lines from Qualys/Azure Policy/AWS Config exporters. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud (HEC), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: hipaa:periodic_test. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"hipaa:periodic_test\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **period_month** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by control_id, period_month** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where last_result!=\"Pass\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (control x month), Table (failing controls), Column chart (pass rate trend).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We centralizes periodic technical test results (MFA coverage, encryption checks) with timestamps for HIPAA periodic technical and non-technical evaluation evidence. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(8)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Security §164.308(a)(8) (Evaluation) is enforced — Splunk UC-22.10.9: Periodic Evaluation — Control Test Evidence Ingest.",
                  "ea": "Saved search 'UC-22.10.9' running on index=grc sourcetype=hipaa:periodic_test JSON lines from Qualys/Azure Policy/AWS Config exporters, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.10",
              "n": "Business Associate Contracts — BAA Coverage for Connected Systems (§164.308(a)(1)(ii)(B))",
              "c": "high",
              "f": "intermediate",
              "v": "Highlights new interfaces or SaaS tenants talking to ePHI systems without a registered BAA so legal can close contract gaps before production traffic carries PHI.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686)",
              "d": "`index=epic` `sourcetype=epic:interface` (remote host, interface name), `lookup baa_register.csv` (`vendor`, `hostname`, `baa_status`)",
              "q": "index=epic sourcetype=\"epic:interface\" earliest=-7d\n| stats latest(_time) as last_seen values(interface_name) as ifaces by remote_host\n| lookup baa_register.csv hostname as remote_host OUTPUT vendor, baa_status, baa_expiry\n| where isnull(baa_status) OR baa_status!=\"Active\"\n| eval days_since=round((now()-last_seen)/86400,1)\n| sort - last_seen",
              "m": "(1) Populate `baa_register.csv` from contract repository; (2) integrate ServiceNow vendor risk tasks; (3) block firewall exceptions until BAA recorded (process outside Splunk).",
              "z": "Table (host, vendor, BAA gap), Bar chart (new hosts per week without BAA).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=epic` `sourcetype=epic:interface` (remote host, interface name), `lookup baa_register.csv` (`vendor`, `hostname`, `baa_status`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate `baa_register.csv` from contract repository; (2) integrate ServiceNow vendor risk tasks; (3) block firewall exceptions until BAA recorded (process outside Splunk).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=epic sourcetype=\"epic:interface\" earliest=-7d\n| stats latest(_time) as last_seen values(interface_name) as ifaces by remote_host\n| lookup baa_register.csv hostname as remote_host OUTPUT vendor, baa_status, baa_expiry\n| where isnull(baa_status) OR baa_status!=\"Active\"\n| eval days_since=round((now()-last_seen)/86400,1)\n| sort - last_seen\n```\n\nUnderstanding this SPL\n\n**Business Associate Contracts — BAA Coverage for Connected Systems (§164.308(a)(1)(ii)(B))** — Highlights new interfaces or SaaS tenants talking to ePHI systems without a registered BAA so legal can close contract gaps before production traffic carries PHI.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:interface` (remote host, interface name), `lookup baa_register.csv` (`vendor`, `hostname`, `baa_status`). **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: epic; **sourcetype**: epic:interface. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=epic, sourcetype=\"epic:interface\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by remote_host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(baa_status) OR baa_status!=\"Active\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, vendor, BAA gap), Bar chart (new hosts per week without BAA).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We highlights new interfaces or SaaS tenants talking to ePHI systems without a registered BAA so legal can close contract gaps before production traffic carries PHI. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(1)(ii)(B)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.308(a)(1)(ii)(B) is enforced — Splunk UC-22.10.10: Business Associate Contracts — BAA Coverage for Connected Systems.",
                  "ea": "Saved search 'UC-22.10.10' running on sourcetype epic:interface and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.11",
              "n": "Business Associate Agreements — Expiry & Auto-Renewal Monitoring (§164.308(b)(1))",
              "c": "high",
              "f": "beginner",
              "v": "Prevents lapsed BAAs for vendors that process ePHI by tracking expiry, auto-renew flags, and required amendment workflows.",
              "t": "Splunk Enterprise / Splunk Cloud (CSV/HEC)",
              "d": "`index=legal` `sourcetype=ironclad:contract` or scheduled CSV `baa_master.csv` (`vendor_id`, `baa_end`, `auto_renew`)",
              "q": "| inputlookup baa_master.csv\n| eval days_to_end=round((strptime(baa_end,\"%Y-%m-%d\")-now())/86400,0)\n| eval alert_tier=case(days_to_end<0,\"EXPIRED\",days_to_end<=30,\"CRITICAL\",days_to_end<=90,\"WARN\",1=1,\"OK\")\n| where alert_tier!=\"OK\"\n| table vendor_id, vendor_name, baa_end, auto_renew, alert_tier, data_categories\n| sort days_to_end",
              "m": "(1) Nightly refresh from CLM API; (2) route CRITICAL to legal ops; (3) map vendor_id to technical assets for coverage attestation.",
              "z": "Single value (expired count), Table (upcoming renewals), Timeline (signature dates).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud (CSV/HEC).\n• Ensure the following data sources are available: `index=legal` `sourcetype=ironclad:contract` or scheduled CSV `baa_master.csv` (`vendor_id`, `baa_end`, `auto_renew`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Nightly refresh from CLM API; (2) route CRITICAL to legal ops; (3) map vendor_id to technical assets for coverage attestation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup baa_master.csv\n| eval days_to_end=round((strptime(baa_end,\"%Y-%m-%d\")-now())/86400,0)\n| eval alert_tier=case(days_to_end<0,\"EXPIRED\",days_to_end<=30,\"CRITICAL\",days_to_end<=90,\"WARN\",1=1,\"OK\")\n| where alert_tier!=\"OK\"\n| table vendor_id, vendor_name, baa_end, auto_renew, alert_tier, data_categories\n| sort days_to_end\n```\n\nUnderstanding this SPL\n\n**Business Associate Agreements — Expiry & Auto-Renewal Monitoring (§164.308(b)(1))** — Prevents lapsed BAAs for vendors that process ePHI by tracking expiry, auto-renew flags, and required amendment workflows.\n\nDocumented **Data sources**: `index=legal` `sourcetype=ironclad:contract` or scheduled CSV `baa_master.csv` (`vendor_id`, `baa_end`, `auto_renew`). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud (CSV/HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **days_to_end** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **alert_tier** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where alert_tier!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Business Associate Agreements — Expiry & Auto-Renewal Monitoring (§164.308(b)(1))**): table vendor_id, vendor_name, baa_end, auto_renew, alert_tier, data_categories\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (expired count), Table (upcoming renewals), Timeline (signature dates).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We prevents lapsed BAAs for vendors that process ePHI by tracking expiry, auto-renew flags, and required amendment workflows. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Compliance",
                "Audit"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(b)(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.308(b)(1) is enforced — Splunk UC-22.10.11: Business Associate Agreements — Expiry & Auto-Renewal Monitoring.",
                  "ea": "Saved search 'UC-22.10.11' running on index=legal sourcetype=ironclad:contract or scheduled CSV baa_master.csv (vendor_id, baa_end, auto_renew), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.12",
              "n": "Sanction Policy — Privileged Abuse on EHR Audit Logs (§164.308(a)(1)(ii)(C))",
              "c": "critical",
              "f": "advanced",
              "v": "Detects workforce sanctions triggers such as repeated self-family lookups or audit log tampering attempts by IT admins supporting HIPAA sanction policy enforcement.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=epic` `sourcetype=epic:audit` (`USER_ID`, `PAT_ID`, `AccessType`), HR relationship file `employee_dependents.csv` (hashed IDs)",
              "q": "index=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-30d\n| lookup employee_dependents.csv employee_id as USER_ID OUTPUT dependent_patient_hash\n| eval pat_hash=sha256(PAT_ID)\n| where pat_hash=dependent_patient_hash\n| stats count by USER_ID, PAT_ID\n| where count>=1",
              "m": "(1) Use salted hashes only; (2) legal/privacy approval for dependent mapping; (3) integrate HR sanctions workflow; (4) false positive review for legitimate care team overlap.",
              "z": "Table (user, event count), Link analysis graph (user-patient edges).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=epic` `sourcetype=epic:audit` (`USER_ID`, `PAT_ID`, `AccessType`), HR relationship file `employee_dependents.csv` (hashed IDs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Use salted hashes only; (2) legal/privacy approval for dependent mapping; (3) integrate HR sanctions workflow; (4) false positive review for legitimate care team overlap.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-30d\n| lookup employee_dependents.csv employee_id as USER_ID OUTPUT dependent_patient_hash\n| eval pat_hash=sha256(PAT_ID)\n| where pat_hash=dependent_patient_hash\n| stats count by USER_ID, PAT_ID\n| where count>=1\n```\n\nUnderstanding this SPL\n\n**Sanction Policy — Privileged Abuse on EHR Audit Logs (§164.308(a)(1)(ii)(C))** — Detects workforce sanctions triggers such as repeated self-family lookups or audit log tampering attempts by IT admins supporting HIPAA sanction policy enforcement.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:audit` (`USER_ID`, `PAT_ID`, `AccessType`), HR relationship file `employee_dependents.csv` (hashed IDs). **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: epic; **sourcetype**: epic:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=epic, sourcetype=\"epic:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **pat_hash** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pat_hash=dependent_patient_hash` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by USER_ID, PAT_ID** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>=1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, event count), Link analysis graph (user-patient edges).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect workforce sanctions triggers such as repeated self-family lookups or audit log tampering attempts by IT admins supporting HIPAA sanction policy enforcement. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(1)(ii)(C)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.308(a)(1)(ii)(C) is enforced — Splunk UC-22.10.12: Sanction Policy — Privileged Abuse on EHR Audit Logs.",
                  "ea": "Saved search 'UC-22.10.12' running on sourcetype epic:audit and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.13",
              "n": "Unique User Identification — Shared or Generic EHR Accounts (§164.312(a)(2)(i))",
              "c": "critical",
              "f": "intermediate",
              "v": "Finds authentication and EHR events tied to shared kiosk or interface accounts so every action can be attributed to an identified user as required for ePHI access.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Citrix (2757)",
              "d": "`index=epic` `sourcetype=epic:audit`, `index=citrix` `sourcetype=citrix:session`, `lookup forbidden_shared_accounts.csv`",
              "q": "(index=epic sourcetype=\"epic:audit\") OR (index=citrix sourcetype=\"citrix:session\")\n| eval account=upper(coalesce(USER_ID, UserName))\n| lookup forbidden_shared_accounts.csv account OUTPUT policy\n| where isnotnull(policy)\n| bin _time span=1h\n| stats dc(PAT_ID) as patients dc(client_ip) as src_ips count as events by account, _time\n| where events>10\n| sort - events",
              "m": "(1) Maintain list of prohibited generic accounts; (2) push exceptions only via CAB-approved break-glass IDs; (3) correlate Citrix `ClientName` for kiosk attribution where possible.",
              "z": "Table (account, events, patients), Timeline (shared account spikes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Citrix](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Citrix (2757).\n• Ensure the following data sources are available: `index=epic` `sourcetype=epic:audit`, `index=citrix` `sourcetype=citrix:session`, `lookup forbidden_shared_accounts.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain list of prohibited generic accounts; (2) push exceptions only via CAB-approved break-glass IDs; (3) correlate Citrix `ClientName` for kiosk attribution where possible.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=epic sourcetype=\"epic:audit\") OR (index=citrix sourcetype=\"citrix:session\")\n| eval account=upper(coalesce(USER_ID, UserName))\n| lookup forbidden_shared_accounts.csv account OUTPUT policy\n| where isnotnull(policy)\n| bin _time span=1h\n| stats dc(PAT_ID) as patients dc(client_ip) as src_ips count as events by account, _time\n| where events>10\n| sort - events\n```\n\nUnderstanding this SPL\n\n**Unique User Identification — Shared or Generic EHR Accounts (§164.312(a)(2)(i))** — Finds authentication and EHR events tied to shared kiosk or interface accounts so every action can be attributed to an identified user as required for ePHI access.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:audit`, `index=citrix` `sourcetype=citrix:session`, `lookup forbidden_shared_accounts.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Citrix (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: epic, citrix; **sourcetype**: epic:audit, citrix:session. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=epic, index=citrix, sourcetype=\"epic:audit\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **account** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(policy)` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by account, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where events>10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(Authentication.src) as agg_value from datamodel=Authentication.Authentication by Authentication.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Unique User Identification — Shared or Generic EHR Accounts (§164.312(a)(2)(i))** — Finds authentication and EHR events tied to shared kiosk or interface accounts so every action can be attributed to an identified user as required for ePHI access.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:audit`, `index=citrix` `sourcetype=citrix:session`, `lookup forbidden_shared_accounts.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Citrix (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (account, events, patients), Timeline (shared account spikes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We finds authentication and EHR events tied to shared kiosk or interface accounts so every action can be attributed to an identified user as required for ePHI access. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Authentication (Citrix/AD when CIM-tagged)"
              ],
              "qs": "| tstats summariesonly=t dc(Authentication.src) as agg_value from datamodel=Authentication.Authentication by Authentication.user | sort - agg_value",
              "e": [
                "citrix"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(a)(2)(i)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(a)(2)(i) is enforced — Splunk UC-22.10.13: Unique User Identification — Shared or Generic EHR Accounts.",
                  "ea": "Saved search 'UC-22.10.13' running on index=epic sourcetype=epic:audit, index=citrix sourcetype=citrix:session, lookup forbidden_shared_accounts.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.14",
              "n": "Emergency Access Procedure — Downtime / Break-Glass Account Usage (§164.312(a)(2)(ii))",
              "c": "critical",
              "f": "advanced",
              "v": "Audits emergency and downtime accounts touching production ePHI so procedural safeguards around emergency access are reviewable and time-bound.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263)",
              "d": "`WinEventLog:Security` 4624 for `svc_epic_emergency*`, `index=epic` `sourcetype=epic:audit` `AccessContext=\"Downtime\"`",
              "q": "(index=windows EventCode=\"4622\" OR EventCode=\"4624\") Account_Name=\"svc_epic_emergency*\" earliest=-30d\n| append [\n    search index=epic sourcetype=\"epic:audit\" AccessContext=\"*\" earliest=-30d\n    | rex field=AccessContext \"(?<ctx>Downtime|Emergency)\"\n    | where ctx IN (\"Downtime\",\"Emergency\") ]\n| eval principal=coalesce(Account_Name, USER_ID)\n| stats earliest(_time) as first latest(_time) as last dc(host) as systems by principal, ctx\n| eval duration_h=round((last-first)/3600,2)\n| sort - last",
              "m": "(1) Require post-event attestation form ID in separate ticketing sourcetype and join; (2) auto-expire passwords via IAM; (3) weekly privacy review of all uses.",
              "z": "Table (principal, duration, systems), Single value (uses per month).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `WinEventLog:Security` 4624 for `svc_epic_emergency*`, `index=epic` `sourcetype=epic:audit` `AccessContext=\"Downtime\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require post-event attestation form ID in separate ticketing sourcetype and join; (2) auto-expire passwords via IAM; (3) weekly privacy review of all uses.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=windows EventCode=\"4622\" OR EventCode=\"4624\") Account_Name=\"svc_epic_emergency*\" earliest=-30d\n| append [\n    search index=epic sourcetype=\"epic:audit\" AccessContext=\"*\" earliest=-30d\n    | rex field=AccessContext \"(?<ctx>Downtime|Emergency)\"\n    | where ctx IN (\"Downtime\",\"Emergency\") ]\n| eval principal=coalesce(Account_Name, USER_ID)\n| stats earliest(_time) as first latest(_time) as last dc(host) as systems by principal, ctx\n| eval duration_h=round((last-first)/3600,2)\n| sort - last\n```\n\nUnderstanding this SPL\n\n**Emergency Access Procedure — Downtime / Break-Glass Account Usage (§164.312(a)(2)(ii))** — Audits emergency and downtime accounts touching production ePHI so procedural safeguards around emergency access are reviewable and time-bound.\n\nDocumented **Data sources**: `WinEventLog:Security` 4624 for `svc_epic_emergency*`, `index=epic` `sourcetype=epic:audit` `AccessContext=\"Downtime\"`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Appends rows from a subsearch with `append`.\n• `eval` defines or adjusts **principal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by principal, ctx** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **duration_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(Authentication.dest) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Emergency Access Procedure — Downtime / Break-Glass Account Usage (§164.312(a)(2)(ii))** — Audits emergency and downtime accounts touching production ePHI so procedural safeguards around emergency access are reviewable and time-bound.\n\nDocumented **Data sources**: `WinEventLog:Security` 4624 for `svc_epic_emergency*`, `index=epic` `sourcetype=epic:audit` `AccessContext=\"Downtime\"`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (principal, duration, systems), Single value (uses per month).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We audits emergency and downtime accounts touching production ePHI so procedural safeguards around emergency access are reviewable and time-bound. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t dc(Authentication.dest) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - agg_value",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(a)(2)(ii)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(a)(2)(ii) is enforced — Splunk UC-22.10.14: Emergency Access Procedure — Downtime / Break-Glass Account Usage.",
                  "ea": "Saved search 'UC-22.10.14' running on WinEventLog:Security 4624 for svc_epic_emergency*, index=epic sourcetype=epic:audit AccessContext=\"Downtime\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.15",
              "n": "Automatic Logoff — Stale Clinical Workstation Sessions (§164.312(a)(2)(iii))",
              "c": "high",
              "f": "intermediate",
              "v": "Identifies unusually long Citrix/VDI sessions and unlocked Windows consoles in patient care areas where automatic logoff may be misconfigured.",
              "t": "Splunk Add-on for Citrix (2757), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=citrix` session start/end, `WinEventLog:Security` 4800/4801 (workstation lock)",
              "q": "index=citrix sourcetype=\"citrix:session\" earliest=-7d\n| eval start=strptime(SessionStart,\"%Y-%m-%d %H:%M:%S\")\n| eval end=if(isnotnull(SessionEnd), strptime(SessionEnd,\"%Y-%m-%d %H:%M:%S\"), now())\n| eval session_hours=round((end-start)/3600,2)\n| where session_hours>12 AND match(Application,\"Hyperspace|Epic\")\n| stats max(session_hours) as max_hrs by UserName, client_ip, PublishedApplication\n| sort - max_hrs",
              "m": "(1) Align with published idle timeout policy (e.g., 30 min); (2) tune for legitimate long shifts using department lookup; (3) feed findings to desktop engineering for GPO fixes.",
              "z": "Bar chart (max session by dept), Table (user, app, hours).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Citrix](https://splunkbase.splunk.com/app/2757), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix (2757), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=citrix` session start/end, `WinEventLog:Security` 4800/4801 (workstation lock).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align with published idle timeout policy (e.g., 30 min); (2) tune for legitimate long shifts using department lookup; (3) feed findings to desktop engineering for GPO fixes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=citrix sourcetype=\"citrix:session\" earliest=-7d\n| eval start=strptime(SessionStart,\"%Y-%m-%d %H:%M:%S\")\n| eval end=if(isnotnull(SessionEnd), strptime(SessionEnd,\"%Y-%m-%d %H:%M:%S\"), now())\n| eval session_hours=round((end-start)/3600,2)\n| where session_hours>12 AND match(Application,\"Hyperspace|Epic\")\n| stats max(session_hours) as max_hrs by UserName, client_ip, PublishedApplication\n| sort - max_hrs\n```\n\nUnderstanding this SPL\n\n**Automatic Logoff — Stale Clinical Workstation Sessions (§164.312(a)(2)(iii))** — Identifies unusually long Citrix/VDI sessions and unlocked Windows consoles in patient care areas where automatic logoff may be misconfigured.\n\nDocumented **Data sources**: `index=citrix` session start/end, `WinEventLog:Security` 4800/4801 (workstation lock). **App/TA** (typical add-on context): Splunk Add-on for Citrix (2757), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: citrix; **sourcetype**: citrix:session. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=citrix, sourcetype=\"citrix:session\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **start** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **end** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **session_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where session_hours>12 AND match(Application,\"Hyperspace|Epic\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by UserName, client_ip, PublishedApplication** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Bar chart (max session by dept), Table (user, app, hours).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We identify unusually long Citrix/VDI sessions and unlocked Windows consoles in patient care areas where automatic logoff may be misconfigured. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(a)(2)(iii)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(a)(2)(iii) is enforced — Splunk UC-22.10.15: Automatic Logoff — Stale Clinical Workstation Sessions.",
                  "ea": "Saved search 'UC-22.10.15' running on index=citrix session start/end, WinEventLog:Security 4800/4801 (workstation lock), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.16",
              "n": "Encryption of ePHI at Rest — TDE / BitLocker Status for Clinical Datastores (§164.312(a)(2)(iv))",
              "c": "critical",
              "f": "advanced",
              "v": "Surfaces SQL Server instances hosting Clarity that lack Transparent Data Encryption or hosts missing full-disk encryption so at-rest ePHI protections meet the addressable implementation specification evidence bar.",
              "t": "Splunk Add-on for Microsoft SQL Server (Splunkbase 2648), Splunk Add-on for Microsoft Windows (742)",
              "d": "`mssql:audit` or scripted input `sqlserver:tde_status`, `WinEventLog:Microsoft-Windows-BitLocker/Operational`",
              "q": "index=database sourcetype=\"sqlserver:tde_status\" earliest=-1d\n| eval encrypted=if(match(lower(encryption_state),\"encrypted|on\"),1,0)\n| lookup ephi_sql_instances.csv instance_name OUTPUT contains_phi, owner_team\n| where contains_phi=\"true\" AND encrypted=0\n| stats latest(_time) as last_check by instance_name, database_name, owner_team",
              "m": "(1) Collect TDE via DB Connect scheduled statement against `sys.databases`; (2) ingest BitLocker compliance from SCCM/Intune if available; (3) track exceptions with approved risk acceptance IDs.",
              "z": "Table (instance, DB, encryption state), Single value (non-compliant count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft SQL Server](https://splunkbase.splunk.com/app/2648), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft SQL Server (Splunkbase 2648), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `mssql:audit` or scripted input `sqlserver:tde_status`, `WinEventLog:Microsoft-Windows-BitLocker/Operational`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Collect TDE via DB Connect scheduled statement against `sys.databases`; (2) ingest BitLocker compliance from SCCM/Intune if available; (3) track exceptions with approved risk acceptance IDs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=database sourcetype=\"sqlserver:tde_status\" earliest=-1d\n| eval encrypted=if(match(lower(encryption_state),\"encrypted|on\"),1,0)\n| lookup ephi_sql_instances.csv instance_name OUTPUT contains_phi, owner_team\n| where contains_phi=\"true\" AND encrypted=0\n| stats latest(_time) as last_check by instance_name, database_name, owner_team\n```\n\nUnderstanding this SPL\n\n**Encryption of ePHI at Rest — TDE / BitLocker Status for Clinical Datastores (§164.312(a)(2)(iv))** — Surfaces SQL Server instances hosting Clarity that lack Transparent Data Encryption or hosts missing full-disk encryption so at-rest ePHI protections meet the addressable implementation specification evidence bar.\n\nDocumented **Data sources**: `mssql:audit` or scripted input `sqlserver:tde_status`, `WinEventLog:Microsoft-Windows-BitLocker/Operational`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft SQL Server (Splunkbase 2648), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: database; **sourcetype**: sqlserver:tde_status. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=database, sourcetype=\"sqlserver:tde_status\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **encrypted** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where contains_phi=\"true\" AND encrypted=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by instance_name, database_name, owner_team** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (instance, DB, encryption state), Single value (non-compliant count).\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface SQL Server instances hosting Clarity that lack Transparent Data Encryption or hosts missing full-disk encryption so at-rest ePHI protections meet the addressable implementation specification evidence bar. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "mssql"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(a)(2)(iv)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(a)(2)(iv) (Encryption and decryption) is enforced — Splunk UC-22.10.16: Encryption of ePHI at Rest — TDE / BitLocker Status for Clinical Datastores.",
                  "ea": "Saved search 'UC-22.10.16' running on mssql:audit or scripted input sqlserver:tde_status, WinEventLog:Microsoft-Windows-BitLocker/Operational, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.17",
              "n": "Audit Controls — High-Volume ePHI Read Baseline & Anomaly (§164.312(b))",
              "c": "critical",
              "f": "advanced",
              "v": "Uses statistical baselines on audited ePHI reads to detect abnormal access volumes indicative of snooping or compromised credentials while proving audit control operation.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=epic` `sourcetype=epic:audit` (`USER_ID`, `PAT_ID`, `AccessType`)",
              "q": "index=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-90d\n| bin _time span=1d\n| stats dc(PAT_ID) as daily_patients by USER_ID, _time\n| eventstats median(daily_patients) as med by USER_ID\n| eval z=(daily_patients-med)/sqrt(med+0.001)\n| where z>4 AND daily_patients>30\n| sort - z",
              "m": "(1) Use `anomalydetection` command alternatively for streaming; (2) exclude legitimate roles via `hipaa_role_exclusions.csv`; (3) route to privacy investigations queue.",
              "z": "Time chart (daily patients by user), Table (spikes), Scatter (median vs spike).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=epic` `sourcetype=epic:audit` (`USER_ID`, `PAT_ID`, `AccessType`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Use `anomalydetection` command alternatively for streaming; (2) exclude legitimate roles via `hipaa_role_exclusions.csv`; (3) route to privacy investigations queue.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-90d\n| bin _time span=1d\n| stats dc(PAT_ID) as daily_patients by USER_ID, _time\n| eventstats median(daily_patients) as med by USER_ID\n| eval z=(daily_patients-med)/sqrt(med+0.001)\n| where z>4 AND daily_patients>30\n| sort - z\n```\n\nUnderstanding this SPL\n\n**Audit Controls — High-Volume ePHI Read Baseline & Anomaly (§164.312(b))** — Uses statistical baselines on audited ePHI reads to detect abnormal access volumes indicative of snooping or compromised credentials while proving audit control operation.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:audit` (`USER_ID`, `PAT_ID`, `AccessType`). **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: epic; **sourcetype**: epic:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=epic, sourcetype=\"epic:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by USER_ID, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by USER_ID** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where z>4 AND daily_patients>30` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Time chart (daily patients by user), Table (spikes), Scatter (median vs spike).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We use statistical baselines on audited ePHI reads to detect abnormal access volumes indicative of snooping or compromised credentials while proving audit control operation. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(b) (Audit controls) is enforced — Splunk UC-22.10.17: Audit Controls — High-Volume ePHI Read Baseline & Anomaly.",
                  "ea": "Saved search 'UC-22.10.17' running on index=epic sourcetype=epic:audit (USER_ID, PAT_ID, AccessType), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.18",
              "n": "Integrity — Unexpected UPDATE/DELETE on PHI Tables (Clarity/Caboodle) (§164.312(c)(1))",
              "c": "critical",
              "f": "advanced",
              "v": "Alerts on bulk or off-hours SQL changes to tables holding patient demographics or clinical facts so improper alteration of ePHI is detected and investigated.",
              "t": "Splunk DB Connect (2686), Splunk Add-on for Microsoft SQL Server (2648)",
              "d": "`index=mssql` `sourcetype=mssql:audit` (`statement`, `schema_name`, `object_name`, `application_name`, `session_server_principal_name`)",
              "q": "index=mssql sourcetype=\"mssql:audit\" earliest=-24h action_class=\"DML\"\n| where match(lower(object_name),\"pat|patient|phi|epic\") AND match(lower(statement),\"update|delete\")\n| eval is_bulk=if(match(statement,\"(?i)where\\s+1=1|truncate|delete\\s+from\"),1,0)\n| where is_bulk=1 OR (match(lower(application_name),\"microsoft sql server management studio\") AND _time < relative_time(now(),\"@d+18h\"))\n| stats count by session_server_principal_name, object_name, application_name\n| sort - count",
              "m": "(1) Enable SQL Server Audit to SIEM; (2) map ETL service accounts to allow list; (3) pair with change tickets from ServiceNow; (4) retain immutable copy to WORM storage if required.",
              "z": "Table (principal, object, count), Timeline (bulk ops).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft SQL Server](https://splunkbase.splunk.com/app/2648)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (2686), Splunk Add-on for Microsoft SQL Server (2648).\n• Ensure the following data sources are available: `index=mssql` `sourcetype=mssql:audit` (`statement`, `schema_name`, `object_name`, `application_name`, `session_server_principal_name`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable SQL Server Audit to SIEM; (2) map ETL service accounts to allow list; (3) pair with change tickets from ServiceNow; (4) retain immutable copy to WORM storage if required.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mssql sourcetype=\"mssql:audit\" earliest=-24h action_class=\"DML\"\n| where match(lower(object_name),\"pat|patient|phi|epic\") AND match(lower(statement),\"update|delete\")\n| eval is_bulk=if(match(statement,\"(?i)where\\s+1=1|truncate|delete\\s+from\"),1,0)\n| where is_bulk=1 OR (match(lower(application_name),\"microsoft sql server management studio\") AND _time < relative_time(now(),\"@d+18h\"))\n| stats count by session_server_principal_name, object_name, application_name\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Integrity — Unexpected UPDATE/DELETE on PHI Tables (Clarity/Caboodle) (§164.312(c)(1))** — Alerts on bulk or off-hours SQL changes to tables holding patient demographics or clinical facts so improper alteration of ePHI is detected and investigated.\n\nDocumented **Data sources**: `index=mssql` `sourcetype=mssql:audit` (`statement`, `schema_name`, `object_name`, `application_name`, `session_server_principal_name`). **App/TA** (typical add-on context): Splunk DB Connect (2686), Splunk Add-on for Microsoft SQL Server (2648). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mssql; **sourcetype**: mssql:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mssql, sourcetype=\"mssql:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(object_name),\"pat|patient|phi|epic\") AND match(lower(statement),\"update|delete\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **is_bulk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_bulk=1 OR (match(lower(application_name),\"microsoft sql server management studio\") AND _time < relative_time(now()…` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by session_server_principal_name, object_name, application_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (principal, object, count), Timeline (bulk ops).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We alerts on bulk or off-hours SQL changes to tables holding patient demographics or clinical facts so improper alteration of ePHI is detected and investigated. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "mssql"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(c)(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(c)(1) (Integrity) is enforced — Splunk UC-22.10.18: Integrity — Unexpected UPDATE/DELETE on PHI Tables (Clarity/Caboodle).",
                  "ea": "Saved search 'UC-22.10.18' running on sourcetype mssql:audit and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.19",
              "n": "Entity Authentication — Smart Card / Certificate Logon Failures (§164.312(d))",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks failed certificate-based logons to clinical workstations and VDI pools so weak or expired credentials cannot silently undermine strong authentication for ePHI.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`WinEventLog:Security` EventCode IN (4625,4776) with `Authentication_Package=Negotiate` and `Key_Length>0`, PKI OCSP events if available",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" EventCode=\"4625\" earliest=-24h\n| search Logon_Process=\"Kerberos\" OR Authentication_Package=\"Negotiate\"\n| eval fail_reason=Status, Sub_Status\n| stats count values(WorkstationName) as workstations by TargetUserName, fail_reason, src_ip\n| where count>=5\n| sort - count",
              "m": "(1) Enrich with AD `userCertificate` expiry via scripted lookup; (2) alert on clusters before smart card lockouts impact patient care; (3) document PKI renewal procedures.",
              "z": "Table (user, reason, count), Choropleth (src_ip).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `WinEventLog:Security` EventCode IN (4625,4776) with `Authentication_Package=Negotiate` and `Key_Length>0`, PKI OCSP events if available.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enrich with AD `userCertificate` expiry via scripted lookup; (2) alert on clusters before smart card lockouts impact patient care; (3) document PKI renewal procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Security\" EventCode=\"4625\" earliest=-24h\n| search Logon_Process=\"Kerberos\" OR Authentication_Package=\"Negotiate\"\n| eval fail_reason=Status, Sub_Status\n| stats count values(WorkstationName) as workstations by TargetUserName, fail_reason, src_ip\n| where count>=5\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Entity Authentication — Smart Card / Certificate Logon Failures (§164.312(d))** — Tracks failed certificate-based logons to clinical workstations and VDI pools so weak or expired credentials cannot silently undermine strong authentication for ePHI.\n\nDocumented **Data sources**: `WinEventLog:Security` EventCode IN (4625,4776) with `Authentication_Package=Negotiate` and `Key_Length>0`, PKI OCSP events if available. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **fail_reason** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by TargetUserName, fail_reason, src_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>=5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Entity Authentication — Smart Card / Certificate Logon Failures (§164.312(d))** — Tracks failed certificate-based logons to clinical workstations and VDI pools so weak or expired credentials cannot silently undermine strong authentication for ePHI.\n\nDocumented **Data sources**: `WinEventLog:Security` EventCode IN (4625,4776) with `Authentication_Package=Negotiate` and `Key_Length>0`, PKI OCSP events if available. 
**App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, reason, count), Choropleth (src_ip).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track failed certificate-based logons to clinical workstations and VDI pools so weak or expired credentials cannot silently undermine strong authentication for ePHI. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(d)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(d) (Person or entity authentication) is enforced — Splunk UC-22.10.19: Entity Authentication — Smart Card / Certificate Logon Failures.",
                  "ea": "Saved search 'UC-22.10.19' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.20",
              "n": "Transmission Security — TLS 1.0/1.1 Deprecation for EHR Integrations (§164.312(e)(1))",
              "c": "high",
              "f": "advanced",
              "v": "Highlights legacy TLS usage on connections carrying HL7/HTTPS to integration engines so transmission safeguards for ePHI meet current encryption standards.",
              "t": "Splunk Add-on for Stream (1809) or Palo Alto Networks Add-on for Splunk (`Splunk_TA_paloalto`), Splunk Common Information Model Add-on (1621)",
              "d": "`index=network` `sourcetype=pan:decrypted` OR `stream:tls` fields (`ssl_version`, `dest_ip`, `app`)",
              "q": "index=network (sourcetype=\"pan:decrypted\" OR sourcetype=\"stream:tls\") earliest=-7d\n| eval tls=coalesce(ssl_version, tls_version)\n| where tls IN (\"TLSv1\",\"TLSv1.0\",\"TLSv1.1\",\"SSLv3\")\n| lookup ephi_integration_subnets.csv dest_ip OUTPUT integration_name, contains_ephi\n| where contains_ephi=\"true\"\n| stats count by dest_ip, integration_name, tls, app\n| sort - count",
              "m": "(1) Populate integration subnet lookup from interface inventory; (2) partner with integration team on phased TLS upgrades; (3) verify after change with same search zero results.",
              "z": "Bar chart (legacy TLS hits by integration), Table (src/dest/app).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [Splunk_TA_paloalto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Stream (1809) or Palo Alto Networks Add-on for Splunk (`Splunk_TA_paloalto`), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=network` `sourcetype=pan:decrypted` OR `stream:tls` fields (`ssl_version`, `dest_ip`, `app`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate integration subnet lookup from interface inventory; (2) partner with integration team on phased TLS upgrades; (3) verify after change with same search zero results.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=\"pan:decrypted\" OR sourcetype=\"stream:tls\") earliest=-7d\n| eval tls=coalesce(ssl_version, tls_version)\n| where tls IN (\"TLSv1\",\"TLSv1.0\",\"TLSv1.1\",\"SSLv3\")\n| lookup ephi_integration_subnets.csv dest_ip OUTPUT integration_name, contains_ephi\n| where contains_ephi=\"true\"\n| stats count by dest_ip, integration_name, tls, app\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Transmission Security — TLS 1.0/1.1 Deprecation for EHR Integrations (§164.312(e)(1))** — Highlights legacy TLS usage on connections carrying HL7/HTTPS to integration engines so transmission safeguards for ePHI meet current encryption standards.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:decrypted` OR `stream:tls` fields (`ssl_version`, `dest_ip`, `app`). **App/TA** (typical add-on context): Splunk Add-on for Stream (1809) or Palo Alto Networks Add-on for Splunk (`Splunk_TA_paloalto`), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:decrypted, stream:tls. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:decrypted\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **tls** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where tls IN (\"TLSv1\",\"TLSv1.0\",\"TLSv1.1\",\"SSLv3\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where contains_ephi=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by dest_ip, integration_name, tls, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest, All_Traffic.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Transmission Security — TLS 1.0/1.1 Deprecation for EHR Integrations (§164.312(e)(1))** — Highlights legacy TLS usage on connections carrying HL7/HTTPS to integration engines so transmission safeguards for ePHI meet current encryption standards.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:decrypted` OR `stream:tls` fields (`ssl_version`, `dest_ip`, `app`). **App/TA** (typical add-on context): Splunk Add-on for Stream (1809) or Palo Alto Networks Add-on for Splunk (`Splunk_TA_paloalto`), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (legacy TLS hits by integration), Table (src/dest/app).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We highlights legacy TLS usage on connections carrying HL7/HTTPS to integration engines so transmission safeguards for ePHI meet current encryption standards. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Network_Traffic (when mapped)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest, All_Traffic.app | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(e)(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(e)(1) (Transmission security) is enforced — Splunk UC-22.10.20: Transmission Security — TLS 1.0/1.1 Deprecation for EHR Integrations.",
                  "ea": "Saved search 'UC-22.10.20' running on index=network sourcetype=pan:decrypted OR stream:tls fields (ssl_version, dest_ip, app), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "Palo Alto Networks Add-on for Splunk",
                "id": 2757,
                "url": "https://splunkbase.splunk.com/app/2757"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.21",
              "n": "Access Control — Role-Based Violations (Coder Accessing Medication Admin) (§164.312(a)(1))",
              "c": "critical",
              "f": "advanced",
              "v": "Detects EHR workspace or activity codes inconsistent with HR job role, supporting least-privilege access control for ePHI workflows.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=epic` `sourcetype=epic:audit` (`Activity`, `USER_ID`), `lookup epic_user_roles.csv` (`USER_ID`, `hr_role`, `allowed_activities`)",
              "q": "index=epic sourcetype=\"epic:audit\" earliest=-24h\n| lookup epic_user_roles.csv USER_ID OUTPUT hr_role, allowed_activities mvsep=\"|\"\n| eval allowed=if(isnull(allowed_activities), \"*\", allowed_activities)\n| where isnotnull(hr_role) AND NOT match(Activity, allowed)\n| stats count values(Activity) as bad_activities by USER_ID, hr_role\n| sort - count",
              "m": "(1) Refresh role mapping nightly from IAM; (2) tune with clinical float pools; (3) integrate with provisioning QA before go-live.",
              "z": "Table (user, role, violations), Heatmap (activity vs role).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=epic` `sourcetype=epic:audit` (`Activity`, `USER_ID`), `lookup epic_user_roles.csv` (`USER_ID`, `hr_role`, `allowed_activities`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Refresh role mapping nightly from IAM; (2) tune with clinical float pools; (3) integrate with provisioning QA before go-live.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=epic sourcetype=\"epic:audit\" earliest=-24h\n| lookup epic_user_roles.csv USER_ID OUTPUT hr_role, allowed_activities mvsep=\"|\"\n| eval allowed=if(isnull(allowed_activities), \"*\", allowed_activities)\n| where isnotnull(hr_role) AND NOT match(Activity, allowed)\n| stats count values(Activity) as bad_activities by USER_ID, hr_role\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Access Control — Role-Based Violations (Coder Accessing Medication Admin) (§164.312(a)(1))** — Detects EHR workspace or activity codes inconsistent with HR job role, supporting least-privilege access control for ePHI workflows.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:audit` (`Activity`, `USER_ID`), `lookup epic_user_roles.csv` (`USER_ID`, `hr_role`, `allowed_activities`). **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: epic; **sourcetype**: epic:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=epic, sourcetype=\"epic:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **allowed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(hr_role) AND NOT match(Activity, allowed)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by USER_ID, hr_role** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, role, violations), Heatmap (activity vs role).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect EHR workspace or activity codes inconsistent with HR job role, supporting least-privilege access control for ePHI workflows. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(a)(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(a)(1) (Access control) is enforced — Splunk UC-22.10.21: Access Control — Role-Based Violations (Coder Accessing Medication Admin).",
                  "ea": "Saved search 'UC-22.10.21' running on index=epic sourcetype=epic:audit (Activity, USER_ID), lookup epic_user_roles.csv (USER_ID, hr_role, allowed_activities), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security Rule",
                  "v": "2013-final",
                  "cl": "§164.308(a)(4)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA Security Rule §164.308(a)(4) (Information access management) is enforced — Splunk UC-22.10.21: Access Control — Role-Based Violations (Coder Accessing Medication Admin).",
                  "ea": "Saved search 'UC-22.10.21' running on index=epic sourcetype=epic:audit (Activity, USER_ID), lookup epic_user_roles.csv (USER_ID, hr_role, allowed_activities), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.hhs.gov/hipaa/for-professionals/security/laws-regulations/index.html"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.22",
              "n": "Remote ePHI Access — MFA Gap for VPN + O365 Clinical Mail (§164.312(e)(1) / §164.308(a)(1))",
              "c": "critical",
              "f": "intermediate",
              "v": "Surfaces successful VPN or Exchange Online logons without MFA claim for users in ePHI-privileged groups so remote access meets organizational authentication policy.",
              "t": "Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)",
              "d": "`index=o365` `sourcetype=ms:o365:reporting` or `Audit.Exchange` / `AzureActiveDirectory` JSON, `index=vpn` GlobalProtect logs",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Workload=\"AzureActiveDirectory\" Operation=\"UserLoggedIn\" earliest=-24h\n| eval mfa=if(match(ExtendedProperties,\"MFA\\s*completed|multi-factor\"),1,0)\n| lookup hipaa_ephi_users.csv user OUTPUT in_ephi_group\n| where in_ephi_group=\"true\" AND mfa=0\n| stats count by user, ClientIP, ApplicationId\n| sort - count",
              "m": "(1) Confirm `ExtendedProperties` field names for your tenant; (2) include Conditional Access exclusions report; (3) feed to IAM for forced enrollment.",
              "z": "Table (user, IP, count), Map (ClientIP), Single value (non-MFA logons).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=o365` `sourcetype=ms:o365:reporting` or `Audit.Exchange` / `AzureActiveDirectory` JSON, `index=vpn` GlobalProtect logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm `ExtendedProperties` field names for your tenant; (2) include Conditional Access exclusions report; (3) feed to IAM for forced enrollment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Workload=\"AzureActiveDirectory\" Operation=\"UserLoggedIn\" earliest=-24h\n| eval mfa=if(match(ExtendedProperties,\"MFA\\s*completed|multi-factor\"),1,0)\n| lookup hipaa_ephi_users.csv user OUTPUT in_ephi_group\n| where in_ephi_group=\"true\" AND mfa=0\n| stats count by user, ClientIP, ApplicationId\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Remote ePHI Access — MFA Gap for VPN + O365 Clinical Mail (§164.312(e)(1) / §164.308(a)(1))** — Surfaces successful VPN or Exchange Online logons without MFA claim for users in ePHI-privileged groups so remote access meets organizational authentication policy.\n\nDocumented **Data sources**: `index=o365` `sourcetype=ms:o365:reporting` or `Audit.Exchange` / `AzureActiveDirectory` JSON, `index=vpn` GlobalProtect logs. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mfa** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_ephi_group=\"true\" AND mfa=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, ClientIP, ApplicationId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Remote ePHI Access — MFA Gap for VPN + O365 Clinical Mail (§164.312(e)(1) / §164.308(a)(1))** — Surfaces successful VPN or Exchange Online logons without MFA claim for users in ePHI-privileged groups so remote access meets organizational authentication policy.\n\nDocumented **Data sources**: `index=o365` `sourcetype=ms:o365:reporting` or `Audit.Exchange` / `AzureActiveDirectory` JSON, `index=vpn` GlobalProtect logs. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, IP, count), Map (ClientIP), Single value (non-MFA logons).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface successful VPN or Exchange Online logons without MFA claim for users in ePHI-privileged groups so remote access meets organizational authentication policy. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Authentication (if O365 CIM-tagged)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Security §164.308(a)(1) (Security management process) is enforced — Splunk UC-22.10.22: Remote ePHI Access — MFA Gap for VPN + O365 Clinical Mail.",
                  "ea": "Saved search 'UC-22.10.22' running on index=o365 sourcetype=ms:o365:reporting or Audit.Exchange / AzureActiveDirectory JSON, index=vpn GlobalProtect logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(e)(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(e)(1) (Transmission security) is enforced — Splunk UC-22.10.22: Remote ePHI Access — MFA Gap for VPN + O365 Clinical Mail.",
                  "ea": "Saved search 'UC-22.10.22' running on index=o365 sourcetype=ms:o365:reporting or Audit.Exchange / AzureActiveDirectory JSON, index=vpn GlobalProtect logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.23",
              "n": "FHIR / SMART on FHIR App Access to ePHI Scopes (§164.312(d))",
              "c": "high",
              "f": "advanced",
              "v": "Monitors OAuth token grants and FHIR resource access so only authorized applications and scopes can read or write clinical data exposed via interoperability APIs.",
              "t": "Splunk Add-on for Microsoft Windows (742), HTTP Event Collector (HEC)",
              "d": "`index=interop` `sourcetype=epic:fhir_audit` or Azure API Management / Apigee logs (`client_id`, `scope`, `patient`, `status_code`)",
              "q": "index=interop sourcetype=\"epic:fhir_audit\" earliest=-24h\n| eval scopes=split(scope,\" \")\n| mvexpand scopes\n| lookup fhir_app_register.csv client_id OUTPUT approved_scopes_mv\n| eval approved=if(isnotnull(mvfind(approved_scopes_mv,scopes)),1,0)\n| where approved=0 AND status_code IN (\"200\",\"201\")\n| stats count by client_id, scopes, user_id, patient_id\n| sort - count",
              "m": "(1) Ingest authorization server logs with scope strings; (2) maintain app registry with approved scopes; (3) block or revoke tokens via IAM integration for critical hits.",
              "z": "Table (client, scope, hits), Sankey (client → resource type).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), HTTP Event Collector (HEC).\n• Ensure the following data sources are available: `index=interop` `sourcetype=epic:fhir_audit` or Azure API Management / Apigee logs (`client_id`, `scope`, `patient`, `status_code`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest authorization server logs with scope strings; (2) maintain app registry with approved scopes; (3) block or revoke tokens via IAM integration for critical hits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=interop sourcetype=\"epic:fhir_audit\" earliest=-24h\n| eval scopes=split(scope,\" \")\n| mvexpand scopes\n| lookup fhir_app_register.csv client_id OUTPUT approved_scopes_mv\n| eval approved=if(isnotnull(mvfind(approved_scopes_mv,scopes)),1,0)\n| where approved=0 AND status_code IN (\"200\",\"201\")\n| stats count by client_id, scopes, user_id, patient_id\n| sort - count\n```\n\nUnderstanding this SPL\n\n**FHIR / SMART on FHIR App Access to ePHI Scopes (§164.312(d))** — Monitors OAuth token grants and FHIR resource access so only authorized applications and scopes can read or write clinical data exposed via interoperability APIs.\n\nDocumented **Data sources**: `index=interop` `sourcetype=epic:fhir_audit` or Azure API Management / Apigee logs (`client_id`, `scope`, `patient`, `status_code`). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), HTTP Event Collector (HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: interop; **sourcetype**: epic:fhir_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=interop, sourcetype=\"epic:fhir_audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **scopes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **approved** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where approved=0 AND status_code IN (\"200\",\"201\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by client_id, scopes, user_id, patient_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (client, scope, hits), Sankey (client → resource type).",
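              "_altSpl": "Hedged variant (untested sketch; the limit=50 cap and the patients>=5 threshold are illustrative assumptions, not catalog values): caps `mvexpand` row explosion as the walkthrough suggests and adds a per-client distinct-patient threshold suitable for alerting.\n\n```spl\nindex=interop sourcetype=\"epic:fhir_audit\" earliest=-24h\n| eval scopes=split(scope,\" \")\n| mvexpand scopes limit=50\n| lookup fhir_app_register.csv client_id OUTPUT approved_scopes_mv\n| eval approved=if(isnotnull(mvfind(approved_scopes_mv,scopes)),1,0)\n| where approved=0 AND status_code IN (\"200\",\"201\")\n| stats dc(patient_id) as patients count by client_id, scopes\n| where patients>=5\n```",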
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We monitor OAuth token grants and FHIR resource access so only authorized applications and scopes can read or write clinical data exposed via interoperability APIs. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(d)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(d) (Person or entity authentication) is enforced — Splunk UC-22.10.23: FHIR / SMART on FHIR App Access to ePHI Scopes.",
                  "ea": "Saved search 'UC-22.10.23' running on index=interop sourcetype=epic:fhir_audit or Azure API Management / Apigee logs (client_id, scope, patient, status_code), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.24",
              "n": "Medical Device Integration — Unapproved HL7 Feeds to EMPI (§164.312(a)(1))",
              "c": "high",
              "f": "advanced",
              "v": "Detects new sending facilities or IP addresses pushing ADT/ORM traffic into the integration engine without change control, protecting access control around device-mediated ePHI flows.",
              "t": "Splunk Add-on for Stream (1809) or `Splunk_TA_f5` / load balancer logs, Splunk Enterprise Security (263)",
              "d": "`index=hl7` `sourcetype=hl7:receive` (`MSH-4`, `MSH-3`, `sending_ip`), `lookup approved_hl7_senders.csv`",
              "q": "index=hl7 sourcetype=\"hl7:receive\" earliest=-7d\n| rex field=_raw \"^MSH\\\\|[^|]*\\\\|[^|]*\\\\|(?<sending_app>[^|]*)\\\\|(?<sending_facility>[^|]*)\"\n| stats earliest(_time) as first_seen latest(_time) as last_seen dc(PID) as patients by sending_ip, sending_facility, sending_app\n| lookup approved_hl7_senders.csv sending_ip sending_facility OUTPUT approved\n| where isnull(approved) OR approved=\"false\"\n| sort - patients",
              "m": "(1) Normalize HL7 with TA or `rex` consistently; (2) integrate Mirth/Interface engine connection table; (3) auto-ticket network security for rogue feeds.",
              "z": "Table (IP, facility, patients), Node graph (sender → engine).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Stream (1809) or `Splunk_TA_f5` / load balancer logs, Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=hl7` `sourcetype=hl7:receive` (`MSH-4`, `MSH-3`, `sending_ip`), `lookup approved_hl7_senders.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize HL7 with TA or `rex` consistently; (2) integrate Mirth/Interface engine connection table; (3) auto-ticket network security for rogue feeds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hl7 sourcetype=\"hl7:receive\" earliest=-7d\n| rex field=_raw \"^MSH\\\\|[^|]*\\\\|[^|]*\\\\|(?<sending_app>[^|]*)\\\\|(?<sending_facility>[^|]*)\"\n| stats earliest(_time) as first_seen latest(_time) as last_seen dc(PID) as patients by sending_ip, sending_facility, sending_app\n| lookup approved_hl7_senders.csv sending_ip sending_facility OUTPUT approved\n| where isnull(approved) OR approved=\"false\"\n| sort - patients\n```\n\nUnderstanding this SPL\n\n**Medical Device Integration — Unapproved HL7 Feeds to EMPI (§164.312(a)(1))** — Detects new sending facilities or IP addresses pushing ADT/ORM traffic into the integration engine without change control, protecting access control around device-mediated ePHI flows.\n\nDocumented **Data sources**: `index=hl7` `sourcetype=hl7:receive` (`MSH-4`, `MSH-3`, `sending_ip`), `lookup approved_hl7_senders.csv`. **App/TA** (typical add-on context): Splunk Add-on for Stream (1809) or `Splunk_TA_f5` / load balancer logs, Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hl7; **sourcetype**: hl7:receive. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hl7, sourcetype=\"hl7:receive\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by sending_ip, sending_facility, sending_app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved) OR approved=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (IP, facility, patients), Node graph (sender → engine).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect new sending facilities or IP addresses pushing ADT/ORM traffic into the integration engine without change control, protecting access control around device-mediated ePHI flows. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "f5"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(a)(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(a)(1) (Access control) is enforced — Splunk UC-22.10.24: Medical Device Integration — Unapproved HL7 Feeds to EMPI.",
                  "ea": "Saved search 'UC-22.10.24' running on index=hl7 sourcetype=hl7:receive (MSH-4, MSH-3, sending_ip), lookup approved_hl7_senders.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.25",
              "n": "Endpoint Controls — ePHI Clipboard/Print from VDI (§164.312(a)(1))",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks clipboard redirection and client drive mapping events on clinical VDI where policies should block exfiltration paths for ePHI.",
              "t": "Splunk Add-on for Citrix (2757), Splunk Add-on for CrowdStrike FDR (5082)",
              "d": "`index=citrix` `sourcetype=citrix:session` ICA channel flags, `index=endpoint` `sourcetype=crowdstrike:hosts` or `ProcessRollup2` with clipboard tools",
              "q": "index=citrix sourcetype=\"citrix:session\" earliest=-7d ClientDrive=\"On\" OR Clipboard=\"ClientToServer\"\n| lookup hipaa_epic_users.csv UserName OUTPUT is_clinical_user\n| where is_clinical_user=\"true\"\n| stats count by UserName, client_ip, PublishedApplication, ClientDrive, Clipboard\n| sort - count",
              "m": "(1) Align Citrix policy names with log fields; (2) validate expected exceptions (transcription vendors); (3) remediate via Citrix policy sets.",
              "z": "Table (user, flags, count), Bar chart (policy violations by site).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Citrix](https://splunkbase.splunk.com/app/2757), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5994), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix (2757), Splunk Add-on for CrowdStrike FDR (5082).\n• Ensure the following data sources are available: `index=citrix` `sourcetype=citrix:session` ICA channel flags, `index=endpoint` `sourcetype=crowdstrike:hosts` or `ProcessRollup2` with clipboard tools.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align Citrix policy names with log fields; (2) validate expected exceptions (transcription vendors); (3) remediate via Citrix policy sets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=citrix sourcetype=\"citrix:session\" earliest=-7d ClientDrive=\"On\" OR Clipboard=\"ClientToServer\"\n| lookup hipaa_epic_users.csv UserName OUTPUT is_clinical_user\n| where is_clinical_user=\"true\"\n| stats count by UserName, client_ip, PublishedApplication, ClientDrive, Clipboard\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Endpoint Controls — ePHI Clipboard/Print from VDI (§164.312(a)(1))** — Tracks clipboard redirection and client drive mapping events on clinical VDI where policies should block exfiltration paths for ePHI.\n\nDocumented **Data sources**: `index=citrix` `sourcetype=citrix:session` ICA channel flags, `index=endpoint` `sourcetype=crowdstrike:hosts` or `ProcessRollup2` with clipboard tools. **App/TA** (typical add-on context): Splunk Add-on for Citrix (2757), Splunk Add-on for CrowdStrike FDR (5082). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: citrix; **sourcetype**: citrix:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=citrix, sourcetype=\"citrix:session\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where is_clinical_user=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by UserName, client_ip, PublishedApplication, ClientDrive, Clipboard** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Endpoint Controls — ePHI Clipboard/Print from VDI (§164.312(a)(1))** — Tracks clipboard redirection and client drive mapping events on clinical VDI where policies should block exfiltration paths for ePHI.\n\nDocumented **Data sources**: `index=citrix` `sourcetype=citrix:session` ICA channel flags, `index=endpoint` `sourcetype=crowdstrike:hosts` or `ProcessRollup2` with clipboard tools. **App/TA** (typical add-on context): Splunk Add-on for Citrix (2757), Splunk Add-on for CrowdStrike FDR (5082). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, flags, count), Bar chart (policy violations by site).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track clipboard redirection and client drive mapping events on clinical VDI where policies should block exfiltration paths for ePHI. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Endpoint (CrowdStrike when CIM-tagged)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user | sort - count",
              "e": [
                "citrix",
                "crowdstrike"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(a)(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(a)(1) (Access control) is enforced — Splunk UC-22.10.25: Endpoint Controls — ePHI Clipboard/Print from VDI.",
                  "ea": "Saved search 'UC-22.10.25' running on sourcetype citrix:session and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.26",
              "n": "Transmission Security — Unencrypted SMTP with PHI Patterns (§164.312(e)(1))",
              "c": "critical",
              "f": "advanced",
              "v": "Alerts when internal mail relays accept messages on port 25 without TLS that contain medical record number or DOB patterns, evidencing safeguards for ePHI in motion.",
              "t": "Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for Stream (1809)",
              "d": "`index=mail` `sourcetype=proofpoint:mta` or `ms:o365:reporting` MessageTrace, `stream:smtp` metadata",
              "q": "index=mail sourcetype=\"proofpoint:mta\" earliest=-24h\n| eval tls_used=if(match(lower(smtp_tls_status),\"none|no|plaintext\"),0,1)\n| regex subject=\"(?i)\\b(MRN|medical record|DOB|date of birth)\\b\"\n| where tls_used=0\n| stats count values(rcpt) as recipients by sender, subject, client_ip\n| sort - count",
              "m": "(1) Prefer DLP classification fields when available instead of regex on body; (2) route to messaging team for forced TLS connectors; (3) document exceptions for legacy lab devices with compensating controls.",
              "z": "Table (sender, subject snippet, recipients), Single value (plaintext PHI candidates).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for Stream (1809).\n• Ensure the following data sources are available: `index=mail` `sourcetype=proofpoint:mta` or `ms:o365:reporting` MessageTrace, `stream:smtp` metadata.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Prefer DLP classification fields when available instead of regex on body; (2) route to messaging team for forced TLS connectors; (3) document exceptions for legacy lab devices with compensating controls.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mail sourcetype=\"proofpoint:mta\" earliest=-24h\n| eval tls_used=if(match(lower(smtp_tls_status),\"none|no|plaintext\"),0,1)\n| regex subject=\"(?i)\\b(MRN|medical record|DOB|date of birth)\\b\"\n| where tls_used=0\n| stats count values(rcpt) as recipients by sender, subject, client_ip\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Transmission Security — Unencrypted SMTP with PHI Patterns (§164.312(e)(1))** — Alerts when internal mail relays accept messages on port 25 without TLS that contain medical record number or DOB patterns, evidencing safeguards for ePHI in motion.\n\nDocumented **Data sources**: `index=mail` `sourcetype=proofpoint:mta` or `ms:o365:reporting` MessageTrace, `stream:smtp` metadata. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for Stream (1809). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mail; **sourcetype**: proofpoint:mta. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mail, sourcetype=\"proofpoint:mta\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **tls_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters rows matching a pattern with `regex`.\n• Filters the current rows with `where tls_used=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by sender, subject, client_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (sender, subject snippet, recipients), Single value (plaintext PHI candidates).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We alerts when internal mail relays accept messages on port 25 without TLS that contain medical record number or DOB patterns, evidencing safeguards for ePHI in motion. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(e)(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(e)(1) (Transmission security) is enforced — Splunk UC-22.10.26: Transmission Security — Unencrypted SMTP with PHI Patterns.",
                  "ea": "Saved search 'UC-22.10.26' running on index=mail sourcetype=proofpoint:mta or ms:o365:reporting MessageTrace, stream:smtp metadata, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.27",
              "n": "Integrity — Caboodle / Clarity ETL Job Failures & Partial Loads (§164.312(c)(1))",
              "c": "high",
              "f": "intermediate",
              "v": "Detects failed or partial analytics warehouse loads that could silently skew quality reporting built on ePHI, supporting integrity of data used for care and compliance analytics.",
              "t": "Splunk DB Connect (2686), Splunk ITSI (1841)",
              "d": "`index=etl` `sourcetype=sqlserver:agent_job` or Informatica/SSIS logs, `index=epic` Caboodle job status HEC",
              "q": "index=etl sourcetype=\"sqlserver:agent_job\" earliest=-48h (job_name=\"*CABOODLE*\" OR job_name=\"*CLARITY*\")\n| eval ok=if(match(lower(run_status),\"succeeded|success\"),1,0)\n| stats latest(run_status) as last_status latest(_time) as last_run sum(ok) as ok_runs count as runs by job_name, server_name\n| eval health=if(ok_runs=runs,\"OK\",\"DEGRADED\")\n| where health!=\"OK\"\n| sort last_run",
              "m": "(1) Map job names per environment; (2) create ITSI KPI for critical ETL health; (3) correlate with downstream dashboard freshness checks.",
              "z": "Single value (failed jobs), Table (job, server, status), Timeline (duration).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (2686), Splunk ITSI (1841).\n• Ensure the following data sources are available: `index=etl` `sourcetype=sqlserver:agent_job` or Informatica/SSIS logs, `index=epic` Caboodle job status HEC.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map job names per environment; (2) create ITSI KPI for critical ETL health; (3) correlate with downstream dashboard freshness checks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=etl sourcetype=\"sqlserver:agent_job\" earliest=-48h (job_name=\"*CABOODLE*\" OR job_name=\"*CLARITY*\")\n| eval ok=if(match(lower(run_status),\"succeeded|success\"),1,0)\n| stats latest(run_status) as last_status latest(_time) as last_run sum(ok) as ok_runs count as runs by job_name, server_name\n| eval health=if(ok_runs=runs,\"OK\",\"DEGRADED\")\n| where health!=\"OK\"\n| sort last_run\n```\n\nUnderstanding this SPL\n\n**Integrity — Caboodle / Clarity ETL Job Failures & Partial Loads (§164.312(c)(1))** — Detects failed or partial analytics warehouse loads that could silently skew quality reporting built on ePHI, supporting integrity of data used for care and compliance analytics.\n\nDocumented **Data sources**: `index=etl` `sourcetype=sqlserver:agent_job` or Informatica/SSIS logs, `index=epic` Caboodle job status HEC. **App/TA** (typical add-on context): Splunk DB Connect (2686), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: etl; **sourcetype**: sqlserver:agent_job. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=etl, sourcetype=\"sqlserver:agent_job\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by job_name, server_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where health!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (failed jobs), Table (job, server, status), Timeline (duration).",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect failed or partial analytics warehouse loads that could silently skew quality reporting built on ePHI, supporting integrity of data used for care and compliance analytics. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(c)(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(c)(1) (Integrity) is enforced — Splunk UC-22.10.27: Integrity — Caboodle / Clarity ETL Job Failures & Partial Loads.",
                  "ea": "Saved search 'UC-22.10.27' running on index=etl sourcetype=sqlserver:agent_job or Informatica/SSIS logs, index=epic Caboodle job status HEC, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.28",
              "n": "Workstation Security — Unattended Unlocked Sessions in Clinical Pods (§164.310(b))",
              "c": "high",
              "f": "intermediate",
              "v": "Correlates long gaps between workstation unlock events in nursing stations with active Hyperspace sessions to reduce shoulder-surfing and unauthorized viewing risks.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`WinEventLog:Security` 4801 (unlock), 4800 (lock), `index=epic` `sourcetype=epic:audit` from same `Workstation_ID`",
              "q": "index=windows sourcetype=\"WinEventLog:Security\" EventCode IN (\"4800\",\"4801\") earliest=-7d\n| eval action=if(EventCode=\"4800\",\"lock\",\"unlock\")\n| transaction WorkstationName startswith=eval(action=\"unlock\") endswith=eval(action=\"lock\") maxspan=8h\n| eval idle_during=if(duration>3600 AND eventcount<3,1,0)\n| where idle_during=1\n| table WorkstationName, duration, user, _time",
              "m": "(1) Deploy only on designated clinical OU to limit noise; (2) pair with physical security rounds checklist; (3) educate on clean desk + lock policy.",
              "z": "Histogram (session durations), Table (worst workstations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `WinEventLog:Security` 4801 (unlock), 4800 (lock), `index=epic` `sourcetype=epic:audit` from same `Workstation_ID`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Deploy only on designated clinical OU to limit noise; (2) pair with physical security rounds checklist; (3) educate on clean desk + lock policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Security\" EventCode IN (\"4800\",\"4801\") earliest=-7d\n| eval action=if(EventCode=\"4800\",\"lock\",\"unlock\")\n| transaction WorkstationName startswith=eval(action=\"unlock\") endswith=eval(action=\"lock\") maxspan=8h\n| eval idle_during=if(duration>3600 AND eventcount<3,1,0)\n| where idle_during=1\n| table WorkstationName, duration, user, _time\n```\n\nUnderstanding this SPL\n\n**Workstation Security — Unattended Unlocked Sessions in Clinical Pods (§164.310(b))** — Correlates long gaps between workstation unlock events in nursing stations with active Hyperspace sessions to reduce shoulder-surfing and unauthorized viewing risks.\n\nDocumented **Data sources**: `WinEventLog:Security` 4801 (unlock), 4800 (lock), `index=epic` `sourcetype=epic:audit` from same `Workstation_ID`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• `eval` defines or adjusts **idle_during** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where idle_during=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Workstation Security — Unattended Unlocked Sessions in Clinical Pods (§164.310(b))**): table WorkstationName, duration, user, _time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (session durations), Table (worst workstations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We correlate long gaps between workstation unlock events in nursing stations with active Hyperspace sessions to reduce shoulder-surfing and unauthorized viewing risks. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.310(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.310(b) is enforced — Splunk UC-22.10.28: Workstation Security — Unattended Unlocked Sessions in Clinical Pods.",
                  "ea": "Saved search 'UC-22.10.28' running on WinEventLog:Security 4801 (unlock), 4800 (lock), index=epic sourcetype=epic:audit from same Workstation_ID, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.29",
              "n": "Device & Media Controls — USB Mass Storage on ePHI Workstations (§164.310(d)(1))",
              "c": "critical",
              "f": "intermediate",
              "v": "Captures removable media mount attempts on endpoints that access Hyperspace so device and media control policies for ePHI workstations are enforceable and auditable.",
              "t": "Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Windows (742)",
              "d": "`WinEventLog:Microsoft-Windows-Kernel-PnP/Configuration` 400, `index=endpoint` `sourcetype=crowdstrike:usb` if available",
              "q": "index=windows sourcetype=\"WinEventLog:Microsoft-Windows-Kernel-PnP/Configuration\" EventCode=\"400\" earliest=-7d\n| where match(ClassName,\"USBSTOR|Disk&Ven_\")\n| lookup clinical_workstations.csv ComputerName OUTPUT contains_ephi_workstation\n| where contains_ephi_workstation=\"true\"\n| stats count by ComputerName, ClassName, DeviceID\n| sort - count",
              "m": "(1) Enforce BitLocker + DLP on exceptions; (2) require temporary USB grants via ServiceNow with ticket ID in log annotation optional; (3) quarterly exception review.",
              "z": "Table (host, device class, count), Single value (violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5994), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `WinEventLog:Microsoft-Windows-Kernel-PnP/Configuration` 400, `index=endpoint` `sourcetype=crowdstrike:usb` if available.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enforce BitLocker + DLP on exceptions; (2) require temporary USB grants via ServiceNow with ticket ID in log annotation optional; (3) quarterly exception review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Microsoft-Windows-Kernel-PnP/Configuration\" EventCode=\"400\" earliest=-7d\n| where match(ClassName,\"USBSTOR|Disk&Ven_\")\n| lookup clinical_workstations.csv ComputerName OUTPUT contains_ephi_workstation\n| where contains_ephi_workstation=\"true\"\n| stats count by ComputerName, ClassName, DeviceID\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Device & Media Controls — USB Mass Storage on ePHI Workstations (§164.310(d)(1))** — Captures removable media mount attempts on endpoints that access Hyperspace so device and media control policies for ePHI workstations are enforceable and auditable.\n\nDocumented **Data sources**: `WinEventLog:Microsoft-Windows-Kernel-PnP/Configuration` 400, `index=endpoint` `sourcetype=crowdstrike:usb` if available. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Microsoft-Windows-Kernel-PnP/Configuration. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Microsoft-Windows-Kernel-PnP/Configuration\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(ClassName,\"USBSTOR|Disk&Ven_\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where contains_ephi_workstation=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ComputerName, ClassName, DeviceID** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Device & Media Controls — USB Mass Storage on ePHI Workstations (§164.310(d)(1))** — Captures removable media mount attempts on endpoints that access Hyperspace so device and media control policies for ePHI workstations are enforceable and auditable.\n\nDocumented **Data sources**: `WinEventLog:Microsoft-Windows-Kernel-PnP/Configuration` 400, `index=endpoint` `sourcetype=crowdstrike:usb` if available. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (host, device class, count), Single value (violations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We captures removable media mount attempts on endpoints that access Hyperspace so device and media control policies for ePHI workstations are enforceable and auditable. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.310(d)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Security §164.310(d)(1) (Device and media controls) is enforced — Splunk UC-22.10.29: Device & Media Controls — USB Mass Storage on ePHI Workstations.",
                  "ea": "Saved search 'UC-22.10.29' running on WinEventLog:Microsoft-Windows-Kernel-PnP/Configuration 400, index=endpoint sourcetype=crowdstrike:usb if available, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.30",
              "n": "Media Controls — Large PHI Print Jobs to Non-Secure Printers (§164.310(d)(2))",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors unusually large chart print jobs routed to non-secure print queues so physical disclosure risks from uncontrolled hardcopy are reduced.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`WinEventLog:Microsoft-Windows-PrintService/Operational` 307 (`Document`, `Pages`, `PrinterName`, `UserName`)",
              "q": "index=windows sourcetype=\"WinEventLog:Microsoft-Windows-PrintService/Operational\" EventCode=\"307\" earliest=-7d\n| eval pages=tonumber(Pages)\n| lookup secure_print_queues.csv PrinterName OUTPUT secure_queue\n| where (isnull(secure_queue) OR secure_queue=\"false\") AND pages>=50\n| stats sum(pages) as total_pages count as jobs by UserName, PrinterName\n| sort - total_pages",
              "m": "(1) Classify printers by site/zone; (2) integrate Pharos/Equitrac if used; (3) educate on minimum necessary printing.",
              "z": "Table (user, printer, pages), Bar chart (jobs by location).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `WinEventLog:Microsoft-Windows-PrintService/Operational` 307 (`Document`, `Pages`, `PrinterName`, `UserName`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Classify printers by site/zone; (2) integrate Pharos/Equitrac if used; (3) educate on minimum necessary printing.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=\"WinEventLog:Microsoft-Windows-PrintService/Operational\" EventCode=\"307\" earliest=-7d\n| eval pages=tonumber(Pages)\n| lookup secure_print_queues.csv PrinterName OUTPUT secure_queue\n| where (isnull(secure_queue) OR secure_queue=\"false\") AND pages>=50\n| stats sum(pages) as total_pages count as jobs by UserName, PrinterName\n| sort - total_pages\n```\n\nUnderstanding this SPL\n\n**Media Controls — Large PHI Print Jobs to Non-Secure Printers (§164.310(d)(2))** — Monitors unusually large chart print jobs routed to non-secure print queues so physical disclosure risks from uncontrolled hardcopy are reduced.\n\nDocumented **Data sources**: `WinEventLog:Microsoft-Windows-PrintService/Operational` 307 (`Document`, `Pages`, `PrinterName`, `UserName`). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Microsoft-Windows-PrintService/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=\"WinEventLog:Microsoft-Windows-PrintService/Operational\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pages** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where (isnull(secure_queue) OR secure_queue=\"false\") AND pages>=50` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by UserName, PrinterName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, printer, pages), Bar chart (jobs by location).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We monitor unusually large chart print jobs routed to non-secure print queues so physical disclosure risks from uncontrolled hardcopy are reduced. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.310(d)(2)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.310(d)(2) is enforced — Splunk UC-22.10.30: Media Controls — Large PHI Print Jobs to Non-Secure Printers.",
                  "ea": "Saved search 'UC-22.10.30' running on WinEventLog:Microsoft-Windows-PrintService/Operational 307 (Document, Pages, PrinterName, UserName), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.31",
              "n": "Facility vs Logical Access — Badge-In Without VPN/SSO for Remote Roles (§164.310(a)(1))",
              "c": "high",
              "f": "advanced",
              "v": "Correlates physical badge events with absence of corresponding VPN or SSO session for users tagged as onsite-only, highlighting impossible travel or credential sharing.",
              "t": "Splunk Enterprise Security (263), HTTP Event Collector (HEC)",
              "d": "`index=pacs` or `index=physical` `sourcetype=genetec:access` (`badge_id`, `reader`, `result`), `index=vpn` GlobalProtect, `lookup badge_to_user.csv`",
              "q": "index=physical sourcetype=\"genetec:access\" result=\"Granted\" earliest=-1d\n| lookup badge_to_user.csv badge_id OUTPUT samaccountname, work_mode\n| where work_mode=\"OnsiteOnly\"\n| eval day=strftime(_time,\"%Y-%m-%d\")\n| join type=left samaccountname, day [\n    search index=vpn sourcetype=\"paloalto:globalprotect\" earliest=-1d\n    | eval day=strftime(_time,\"%Y-%m-%d\")\n    | stats earliest(_time) as vpn_first by src_user, day\n    | rename src_user as samaccountname ]\n| where isnull(vpn_first)\n| stats count by samaccountname, reader, day",
              "m": "(1) Normalize time zones for multi-campus; (2) tune for vendor remote workers; (3) integrate HR work location attribute.",
              "z": "Table (user, reader, days), Map (reader sites).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), HTTP Event Collector (HEC).\n• Ensure the following data sources are available: `index=pacs` or `index=physical` `sourcetype=genetec:access` (`badge_id`, `reader`, `result`), `index=vpn` GlobalProtect, `lookup badge_to_user.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize time zones for multi-campus; (2) tune for vendor remote workers; (3) integrate HR work location attribute.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"genetec:access\" result=\"Granted\" earliest=-1d\n| lookup badge_to_user.csv badge_id OUTPUT samaccountname, work_mode\n| where work_mode=\"OnsiteOnly\"\n| eval day=strftime(_time,\"%Y-%m-%d\")\n| join type=left samaccountname, day [\n    search index=vpn sourcetype=\"paloalto:globalprotect\" earliest=-1d\n    | eval day=strftime(_time,\"%Y-%m-%d\")\n    | stats earliest(_time) as vpn_first by src_user, day\n    | rename src_user as samaccountname ]\n| where isnull(vpn_first)\n| stats count by samaccountname, reader, day\n```\n\nUnderstanding this SPL\n\n**Facility vs Logical Access — Badge-In Without VPN/SSO for Remote Roles (§164.310(a)(1))** — Correlates physical badge events with absence of corresponding VPN or SSO session for users tagged as onsite-only, highlighting impossible travel or credential sharing.\n\nDocumented **Data sources**: `index=pacs` or `index=physical` `sourcetype=genetec:access` (`badge_id`, `reader`, `result`), `index=vpn` GlobalProtect, `lookup badge_to_user.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), HTTP Event Collector (HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: genetec:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"genetec:access\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where work_mode=\"OnsiteOnly\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **day** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(vpn_first)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by samaccountname, reader, day** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, reader, days), Map (reader sites).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We correlate physical badge events with absence of corresponding VPN or SSO session for users tagged as onsite-only, highlighting impossible travel or credential sharing. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.310(a)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Security §164.310(a)(1) (Facility access controls) is enforced — Splunk UC-22.10.31: Facility vs Logical Access — Badge-In Without VPN/SSO for Remote Roles.",
                  "ea": "Saved search 'UC-22.10.31' running on sourcetype genetec:access and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.32",
              "n": "Workstation Use — After-Hours Login from Non-Clinical IP Space (§164.310(b) / §164.310(c))",
              "c": "high",
              "f": "intermediate",
              "v": "Flags Citrix/VDI launches for inpatient CPOE roles originating from residential ISP ASNs outside hospital guest WiFi, supporting workstation use policies in shared clinical environments.",
              "t": "Splunk Add-on for Citrix (2757), Splunk Common Information Model Add-on (1621)",
              "d": "`index=citrix` `sourcetype=citrix:session`, MaxMind `iplocation`",
              "q": "index=citrix sourcetype=\"citrix:session\" earliest=-7d\n| eval hour=strftime(_time,\"%H\")\n| where (hour<6 OR hour>22)\n| iplocation client_ip\n| lookup clinical_role_users.csv UserName OUTPUT role\n| where role IN (\"RN_INPATIENT\",\"MD_INPATIENT\") AND NOT match(Org,\"*Health System*\")\n| stats count by UserName, client_ip, Country, City, Org\n| sort - count",
              "m": "(1) Replace `Org` heuristic with ASN lookup table; (2) allow VPN concentrator ranges; (3) route to privacy for workforce policy review.",
              "z": "Map (client_ip), Table (user, ASN), Timeline (after-hours spikes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Citrix](https://splunkbase.splunk.com/app/2757), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=citrix` `sourcetype=citrix:session`, MaxMind `iplocation`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Replace `Org` heuristic with ASN lookup table; (2) allow VPN concentrator ranges; (3) route to privacy for workforce policy review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=citrix sourcetype=\"citrix:session\" earliest=-7d\n| eval hour=strftime(_time,\"%H\")\n| where (hour<6 OR hour>22)\n| iplocation client_ip\n| lookup clinical_role_users.csv UserName OUTPUT role\n| where role IN (\"RN_INPATIENT\",\"MD_INPATIENT\") AND NOT match(Org,\"*Health System*\")\n| stats count by UserName, client_ip, Country, City, Org\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Workstation Use — After-Hours Login from Non-Clinical IP Space (§164.310(b) / §164.310(c))** — Flags Citrix/VDI launches for inpatient CPOE roles originating from residential ISP ASNs outside hospital guest WiFi, supporting workstation use policies in shared clinical environments.\n\nDocumented **Data sources**: `index=citrix` `sourcetype=citrix:session`, MaxMind `iplocation`. **App/TA** (typical add-on context): Splunk Add-on for Citrix (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: citrix; **sourcetype**: citrix:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=citrix, sourcetype=\"citrix:session\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (hour<6 OR hour>22)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Workstation Use — After-Hours Login from Non-Clinical IP Space (§164.310(b) / §164.310(c))**): iplocation client_ip\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where role IN (\"RN_INPATIENT\",\"MD_INPATIENT\") AND NOT match(Org,\"*Health System*\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by UserName, client_ip, Country, City, Org** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Workstation Use — After-Hours Login from Non-Clinical IP Space (§164.310(b) / §164.310(c))** — Flags Citrix/VDI launches for inpatient CPOE roles originating from residential ISP ASNs outside hospital guest WiFi, supporting workstation use policies in shared clinical environments.\n\nDocumented **Data sources**: `index=citrix` `sourcetype=citrix:session`, MaxMind `iplocation`. **App/TA** (typical add-on context): Splunk Add-on for Citrix (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (client_ip), Table (user, ASN), Timeline (after-hours spikes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We flag Citrix/VDI launches for inpatient CPOE roles originating from residential ISP ASNs outside hospital guest WiFi, supporting workstation use policies in shared clinical environments. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Network_Traffic (optional enrichment)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count",
              "e": [
                "citrix"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.310(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.310(b) is enforced — Splunk UC-22.10.32: Workstation Use — After-Hours Login from Non-Clinical IP Space.",
                  "ea": "Saved search 'UC-22.10.32' running on index=citrix sourcetype=citrix:session, MaxMind iplocation, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.310(c)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.310(c) is enforced — Splunk UC-22.10.32: Workstation Use — After-Hours Login from Non-Clinical IP Space.",
                  "ea": "Saved search 'UC-22.10.32' running on index=citrix sourcetype=citrix:session, MaxMind iplocation, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.33",
              "n": "Minimum Necessary — Access Outside Active Care Team (§164.502(b) / §164.514(d))",
              "c": "critical",
              "f": "advanced",
              "v": "Compares chart opens to on-service treatment team membership so accesses without an active treatment relationship are surfaced for minimum necessary review.",
              "t": "Splunk DB Connect (2686), Splunk Enterprise Security (263)",
              "d": "`index=epic` `sourcetype=epic:audit` (`USER_ID`, `PAT_ID`, `AccessType`), nightly roster `epic_active_careteam.csv` (`PAT_ID`, `member_id`, `relation`)",
              "q": "index=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-24h\n| lookup epic_active_careteam.csv PAT_ID USER_ID as member_id OUTPUT relation\n| where isnull(relation)\n| stats count dc(PAT_ID) as patients by USER_ID\n| where count>=5\n| sort - count",
              "m": "(1) Build care team extract from Clarity `PAT_ENC`/`TEAM` tables into lookup; (2) exclude HIM release of information users; (3) integrate privacy queue workflow.",
              "z": "Table (user, chart opens, patients), Heatmap (department vs violation rate).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (2686), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=epic` `sourcetype=epic:audit` (`USER_ID`, `PAT_ID`, `AccessType`), nightly roster `epic_active_careteam.csv` (`PAT_ID`, `member_id`, `relation`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build care team extract from Clarity `PAT_ENC`/`TEAM` tables into lookup; (2) exclude HIM release of information users; (3) integrate privacy queue workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-24h\n| lookup epic_active_careteam.csv PAT_ID USER_ID as member_id OUTPUT relation\n| where isnull(relation)\n| stats count dc(PAT_ID) as patients by USER_ID\n| where count>=5\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Minimum Necessary — Access Outside Active Care Team (§164.502(b) / §164.514(d))** — Compares chart opens to on-service treatment team membership so accesses without an active treatment relationship are surfaced for minimum necessary review.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:audit` (`USER_ID`, `PAT_ID`, `AccessType`), nightly roster `epic_active_careteam.csv` (`PAT_ID`, `member_id`, `relation`). **App/TA** (typical add-on context): Splunk DB Connect (2686), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: epic; **sourcetype**: epic:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=epic, sourcetype=\"epic:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(relation)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by USER_ID** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>=5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, chart opens, patients), Heatmap (department vs violation rate).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare chart opens to on-service treatment team membership so accesses without an active treatment relationship are surfaced for minimum necessary review. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.502(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.502(b) is enforced — Splunk UC-22.10.33: Minimum Necessary — Access Outside Active Care Team.",
                  "ea": "Saved search 'UC-22.10.33' running on sourcetype epic:audit and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.514(d)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.514(d) is enforced — Splunk UC-22.10.33: Minimum Necessary — Access Outside Active Care Team.",
                  "ea": "Saved search 'UC-22.10.33' running on sourcetype epic:audit and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.34",
              "n": "Break-Glass — Emergency Access Reason Codes & Post-Review (§164.502(a)(2)(ii) / policy)",
              "c": "critical",
              "f": "advanced",
              "v": "Ensures every break-glass chart override includes a reason code and is followed by supervisor attestation within policy timelines.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686)",
              "d": "`index=epic` `sourcetype=epic:breakglass` (`USER_ID`, `PAT_ID`, `reason_code`, `session_id`), `index=itsm` ServiceNow tasks (`short_description`, `parent_incident`)",
              "q": "index=epic sourcetype=\"epic:breakglass\" earliest=-30d\n| eval has_reason=if(isnotnull(reason_code) AND reason_code!=\"\",\"OK\",\"MISSING\")\n| join type=left session_id [\n    search index=itsm sourcetype=\"snow:incident\" earliest=-30d short_description=\"*break*glass*\"\n    | rename number as attestation_ticket, sys_id as session_id ]\n| eval reviewed=if(isnotnull(attestation_ticket),\"Yes\",\"No\")\n| where has_reason!=\"OK\" OR reviewed=\"No\"\n| table _time, USER_ID, PAT_ID, reason_code, has_reason, reviewed",
              "m": "(1) Map `session_id` join keys to your ticketing design; (2) 24h alert on missing reason; (3) 72h alert on missing attestation; (4) quarterly privacy committee rollup.",
              "z": "Timeline (open attestations), Table (violations), Single value (% reviewed).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=epic` `sourcetype=epic:breakglass` (`USER_ID`, `PAT_ID`, `reason_code`, `session_id`), `index=itsm` ServiceNow tasks (`short_description`, `parent_incident`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map `session_id` join keys to your ticketing design; (2) 24h alert on missing reason; (3) 72h alert on missing attestation; (4) quarterly privacy committee rollup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=epic sourcetype=\"epic:breakglass\" earliest=-30d\n| eval has_reason=if(isnotnull(reason_code) AND reason_code!=\"\",\"OK\",\"MISSING\")\n| join type=left session_id [\n    search index=itsm sourcetype=\"snow:incident\" earliest=-30d short_description=\"*break*glass*\"\n    | rename number as attestation_ticket, sys_id as session_id ]\n| eval reviewed=if(isnotnull(attestation_ticket),\"Yes\",\"No\")\n| where has_reason!=\"OK\" OR reviewed=\"No\"\n| table _time, USER_ID, PAT_ID, reason_code, has_reason, reviewed\n```\n\nUnderstanding this SPL\n\n**Break-Glass — Emergency Access Reason Codes & Post-Review (§164.502(a)(2)(ii) / policy)** — Ensures every break-glass chart override includes a reason code and is followed by supervisor attestation within policy timelines.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:breakglass` (`USER_ID`, `PAT_ID`, `reason_code`, `session_id`), `index=itsm` ServiceNow tasks (`short_description`, `parent_incident`). **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: epic; **sourcetype**: epic:breakglass. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=epic, sourcetype=\"epic:breakglass\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **has_reason** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **reviewed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where has_reason!=\"OK\" OR reviewed=\"No\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Break-Glass — Emergency Access Reason Codes & Post-Review (§164.502(a)(2)(ii) / policy)**): table _time, USER_ID, PAT_ID, reason_code, has_reason, reviewed\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Timeline (open attestations), Table (violations), Single value (% reviewed).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We ensures every break-glass chart override includes a reason code and is followed by supervisor attestation within policy timelines. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.502(a)(2)(ii)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.502(a)(2)(ii) is enforced — Splunk UC-22.10.34: Break-Glass — Emergency Access Reason Codes & Post-Review.",
                  "ea": "Saved search 'UC-22.10.34' running on sourcetype epic:breakglass and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.35",
              "n": "Non-Treating Provider — Specialty Mismatch Chart Access (§164.502(a)(1))",
              "c": "high",
              "f": "advanced",
              "v": "Highlights when users in non-clinical specialties repeatedly open inpatient charts outside their documented specialty cohort, a common snooping pattern.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=epic` `sourcetype=epic:audit`, `lookup provider_specialty.csv` (`USER_ID`, `specialty_group`), Clarity-derived `patient_unit_map.csv` (`PAT_ID`, `unit_service_line`)",
              "q": "index=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-30d\n| lookup provider_specialty.csv USER_ID OUTPUT specialty_group\n| lookup patient_unit_map.csv PAT_ID OUTPUT unit_service_line\n| eval mismatch=if(isnotnull(specialty_group) AND NOT match(unit_service_line, specialty_group),1,0)\n| where mismatch=1 AND specialty_group IN (\"BILLING\",\"CODING\",\"IT\",\"HR\")\n| stats dc(PAT_ID) as patients by USER_ID, specialty_group\n| where patients>=10\n| sort - patients",
              "m": "(1) Refresh patient service line nightly; (2) tune for legitimate cross-coverage; (3) route to privacy investigations with manager context.",
              "z": "Table (user, specialty, patients), Bar chart (violations by department).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=epic` `sourcetype=epic:audit`, `lookup provider_specialty.csv` (`USER_ID`, `specialty_group`), Clarity-derived `patient_unit_map.csv` (`PAT_ID`, `unit_service_line`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Refresh patient service line nightly; (2) tune for legitimate cross-coverage; (3) route to privacy investigations with manager context.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-30d\n| lookup provider_specialty.csv USER_ID OUTPUT specialty_group\n| lookup patient_unit_map.csv PAT_ID OUTPUT unit_service_line\n| eval mismatch=if(isnotnull(specialty_group) AND NOT match(unit_service_line, specialty_group),1,0)\n| where mismatch=1 AND specialty_group IN (\"BILLING\",\"CODING\",\"IT\",\"HR\")\n| stats dc(PAT_ID) as patients by USER_ID, specialty_group\n| where patients>=10\n| sort - patients\n```\n\nUnderstanding this SPL\n\n**Non-Treating Provider — Specialty Mismatch Chart Access (§164.502(a)(1))** — Highlights when users in non-clinical specialties repeatedly open inpatient charts outside their documented specialty cohort, a common snooping pattern.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:audit`, `lookup provider_specialty.csv` (`USER_ID`, `specialty_group`), Clarity-derived `patient_unit_map.csv` (`PAT_ID`, `unit_service_line`). **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: epic; **sourcetype**: epic:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=epic, sourcetype=\"epic:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **mismatch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mismatch=1 AND specialty_group IN (\"BILLING\",\"CODING\",\"IT\",\"HR\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by USER_ID, specialty_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where patients>=10` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, specialty, patients), Bar chart (violations by department).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We highlights when users in non-clinical specialties repeatedly open inpatient charts outside their documented specialty cohort, a common snooping pattern. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.502(a)(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.502(a)(1) is enforced — Splunk UC-22.10.35: Non-Treating Provider — Specialty Mismatch Chart Access.",
                  "ea": "Saved search 'UC-22.10.35' running on sourcetype epic:audit and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.36",
              "n": "Bulk ePHI Export — Clarity SQL / Caboodle Extract Volume Spike (§164.502(b) / §164.312(b))",
              "c": "critical",
              "f": "advanced",
              "v": "Detects unusually large SELECT/BCP operations against patient fact tables that may indicate unauthorized bulk download of ePHI for exfiltration.",
              "t": "Splunk DB Connect (2686), Splunk Add-on for Microsoft SQL Server (2648)",
              "d": "`index=mssql` `sourcetype=mssql:audit` (`statement`, `rows`, `session_server_principal_name`)",
              "q": "index=mssql sourcetype=\"mssql:audit\" earliest=-24h action_class=\"SELECT\"\n| where match(lower(statement),\"from\\s+clarity\\.(pat|patient|episodes)\") OR match(lower(statement),\"from\\s+caboodle\")\n| eval row_estimate=coalesce(rows,0)\n| bin _time span=1h\n| stats sum(row_estimate) as est_rows by session_server_principal_name, _time, client_ip\n| eventstats median(est_rows) as med by session_server_principal_name\n| where est_rows > med*20 AND est_rows>100000\n| sort - est_rows",
              "m": "(1) Map ETL service accounts to allow list; (2) require data request tickets with correlation ID in SQL comment where feasible; (3) integrate DLP for flat file exports.",
              "z": "Time chart (rows/hour by principal), Table (top spikes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft SQL Server](https://splunkbase.splunk.com/app/2648)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (2686), Splunk Add-on for Microsoft SQL Server (2648).\n• Ensure the following data sources are available: `index=mssql` `sourcetype=mssql:audit` (`statement`, `rows`, `session_server_principal_name`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map ETL service accounts to allow list; (2) require data request tickets with correlation ID in SQL comment where feasible; (3) integrate DLP for flat file exports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mssql sourcetype=\"mssql:audit\" earliest=-24h action_class=\"SELECT\"\n| where match(lower(statement),\"from\\s+clarity\\.(pat|patient|episodes)\") OR match(lower(statement),\"from\\s+caboodle\")\n| eval row_estimate=coalesce(rows,0)\n| bin _time span=1h\n| stats sum(row_estimate) as est_rows by session_server_principal_name, _time, client_ip\n| eventstats median(est_rows) as med by session_server_principal_name\n| where est_rows > med*20 AND est_rows>100000\n| sort - est_rows\n```\n\nUnderstanding this SPL\n\n**Bulk ePHI Export — Clarity SQL / Caboodle Extract Volume Spike (§164.502(b) / §164.312(b))** — Detects unusually large SELECT/BCP operations against patient fact tables that may indicate unauthorized bulk download of ePHI for exfiltration.\n\nDocumented **Data sources**: `index=mssql` `sourcetype=mssql:audit` (`statement`, `rows`, `session_server_principal_name`). **App/TA** (typical add-on context): Splunk DB Connect (2686), Splunk Add-on for Microsoft SQL Server (2648). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mssql; **sourcetype**: mssql:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mssql, sourcetype=\"mssql:audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(lower(statement),\"from\\s+clarity\\.(pat|patient|episodes)\") OR match(lower(statement),\"from\\s+caboodle\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **row_estimate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by session_server_principal_name, _time, client_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` computes the median of **est_rows** per **session_server_principal_name** and appends it to every row as **med**, so each hourly total is judged against that account's own baseline rather than a single global threshold.\n• Filters the current rows with `where est_rows > med*20 AND est_rows>100000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (rows/hour by principal), Table (top spikes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect unusually large SELECT/BCP operations against patient fact tables that may indicate unauthorized bulk download of ePHI for exfiltration. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "mssql"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(b) (Audit controls) is enforced — Splunk UC-22.10.36: Bulk ePHI Export — Clarity SQL / Caboodle Extract Volume Spike.",
                  "ea": "Saved search 'UC-22.10.36' running on index=mssql sourcetype=mssql:audit (statement, rows, session_server_principal_name), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.502(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.502(b) is enforced — Splunk UC-22.10.36: Bulk ePHI Export — Clarity SQL / Caboodle Extract Volume Spike.",
                  "ea": "Saved search 'UC-22.10.36' running on index=mssql sourcetype=mssql:audit (statement, rows, session_server_principal_name), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.37",
              "n": "After-Hours ePHI Access — Billing Users on Inpatient Charts (§164.502(b))",
              "c": "high",
              "f": "intermediate",
              "v": "Surfaces patient chart access during overnight windows by revenue cycle roles where business need is uncommon, supporting workforce training and sanctions workflows.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=epic` `sourcetype=epic:audit`, `lookup hipaa_department_roles.csv` (`USER_ID`, `dept`, `after_hours_allowed`)",
              "q": "index=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-30d\n| eval hour=tonumber(strftime(_time,\"%H\"))\n| where hour>=22 OR hour<=5\n| lookup hipaa_department_roles.csv USER_ID OUTPUT dept, after_hours_allowed\n| where dept=\"REVENUE_CYCLE\" AND after_hours_allowed!=\"true\"\n| stats dc(PAT_ID) as patients by USER_ID, dept\n| sort - patients",
              "m": "(1) Align dept codes with HR feed; (2) add on-call exceptions; (3) integrate with privacy investigation templates.",
              "z": "Heatmap (hour x user group), Table (users, patients).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=epic` `sourcetype=epic:audit`, `lookup hipaa_department_roles.csv` (`USER_ID`, `dept`, `after_hours_allowed`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align dept codes with HR feed; (2) add on-call exceptions; (3) integrate with privacy investigation templates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-30d\n| eval hour=tonumber(strftime(_time,\"%H\"))\n| where hour>=22 OR hour<=5\n| lookup hipaa_department_roles.csv USER_ID OUTPUT dept, after_hours_allowed\n| where dept=\"REVENUE_CYCLE\" AND after_hours_allowed!=\"true\"\n| stats dc(PAT_ID) as patients by USER_ID, dept\n| sort - patients\n```\n\nUnderstanding this SPL\n\n**After-Hours ePHI Access — Billing Users on Inpatient Charts (§164.502(b))** — Surfaces patient chart access during overnight windows by revenue cycle roles where business need is uncommon, supporting workforce training and sanctions workflows.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:audit`, `lookup hipaa_department_roles.csv` (`USER_ID`, `dept`, `after_hours_allowed`). **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: epic; **sourcetype**: epic:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=epic, sourcetype=\"epic:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hour>=22 OR hour<=5` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where dept=\"REVENUE_CYCLE\" AND after_hours_allowed!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by USER_ID, dept** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (hour x user group), Table (users, patients).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface patient chart access during overnight windows by revenue cycle roles where business need is uncommon, supporting workforce training and sanctions workflows. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.502(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.502(b) is enforced — Splunk UC-22.10.37: After-Hours ePHI Access — Billing Users on Inpatient Charts.",
                  "ea": "Saved search 'UC-22.10.37' running on index=epic sourcetype=epic:audit, lookup hipaa_department_roles.csv (USER_ID, dept, after_hours_allowed), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.38",
              "n": "Deceased Patient Records — Access After Death Date (§164.502(f) / policy)",
              "c": "high",
              "f": "advanced",
              "v": "Monitors chart access after documented death timestamps for users not on approved post-mortem workflows (HIM, organ donation, medical examiner).",
              "t": "Splunk DB Connect (2686)",
              "d": "`index=epic` `sourcetype=epic:audit`, `lookup patient_death_index.csv` (`PAT_ID`, `death_dttm`, `restricted_flag`)",
              "q": "index=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-90d\n| lookup patient_death_index.csv PAT_ID OUTPUT death_dttm, restricted_flag\n| where isnotnull(death_dttm) AND _time > strptime(death_dttm,\"%Y-%m-%d %H:%M:%S\")\n| lookup post_mortem_allowed_roles.csv USER_ID OUTPUT allowed_role\n| where isnull(allowed_role)\n| stats count by USER_ID, PAT_ID\n| sort - count",
              "m": "(1) Refresh death index from ADT feed within SLA; (2) legal guidance on state law vs HIPAA; (3) sensitive dashboard ACLs.",
              "z": "Table (user, patient, events), Single value (restricted accesses).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=epic` `sourcetype=epic:audit`, `lookup patient_death_index.csv` (`PAT_ID`, `death_dttm`, `restricted_flag`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Refresh death index from ADT feed within SLA; (2) legal guidance on state law vs HIPAA; (3) sensitive dashboard ACLs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-90d\n| lookup patient_death_index.csv PAT_ID OUTPUT death_dttm, restricted_flag\n| where isnotnull(death_dttm) AND _time > strptime(death_dttm,\"%Y-%m-%d %H:%M:%S\")\n| lookup post_mortem_allowed_roles.csv USER_ID OUTPUT allowed_role\n| where isnull(allowed_role)\n| stats count by USER_ID, PAT_ID\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Deceased Patient Records — Access After Death Date (§164.502(f) / policy)** — Monitors chart access after documented death timestamps for users not on approved post-mortem workflows (HIM, organ donation, medical examiner).\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:audit`, `lookup patient_death_index.csv` (`PAT_ID`, `death_dttm`, `restricted_flag`). **App/TA** (typical add-on context): Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: epic; **sourcetype**: epic:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=epic, sourcetype=\"epic:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(death_dttm) AND _time > strptime(death_dttm,\"%Y-%m-%d %H:%M:%S\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(allowed_role)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by USER_ID, PAT_ID** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, patient, events), Single value (restricted accesses).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We monitor chart access after documented death timestamps for users not on approved post-mortem workflows (HIM, organ donation, medical examiner). HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.502(f)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.502(f) is enforced — Splunk UC-22.10.38: Deceased Patient Records — Access After Death Date.",
                  "ea": "Saved search 'UC-22.10.38' running on index=epic sourcetype=epic:audit, lookup patient_death_index.csv (PAT_ID, death_dttm, restricted_flag), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.39",
              "n": "VIP / High-Profile Patient — Elevated Access Monitoring (§164.502(a) / policy)",
              "c": "critical",
              "f": "intermediate",
              "v": "Applies extra scrutiny to any user accessing patients on the VIP/high-profile flag list to deter and detect snooping.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=epic` `sourcetype=epic:audit`, `lookup vip_patients.csv` (`PAT_ID`, `vip_level`, `start_date`, `end_date`)",
              "q": "index=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-30d\n| lookup vip_patients.csv PAT_ID OUTPUT vip_level, start_date, end_date\n| where isnotnull(vip_level) AND _time>=strptime(start_date,\"%Y-%m-%d\") AND _time<=strptime(end_date,\"%Y-%m-%d\")\n| lookup vip_care_team.csv PAT_ID USER_ID OUTPUT is_authorized\n| where isnull(is_authorized)\n| table _time, USER_ID, PAT_ID, vip_level, Activity",
              "m": "(1) VIP list restricted to small privacy group managing lookup; (2) real-time alert to privacy on-call; (3) never place plaintext celebrity names in Splunk—use surrogate IDs only.",
              "z": "Table (masked IDs only), Single value (unauthorized VIP touches).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=epic` `sourcetype=epic:audit`, `lookup vip_patients.csv` (`PAT_ID`, `vip_level`, `start_date`, `end_date`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) VIP list restricted to small privacy group managing lookup; (2) real-time alert to privacy on-call; (3) never place plaintext celebrity names in Splunk—use surrogate IDs only.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=epic sourcetype=\"epic:audit\" AccessType=\"PatientChart\" earliest=-30d\n| lookup vip_patients.csv PAT_ID OUTPUT vip_level, start_date, end_date\n| where isnotnull(vip_level) AND _time>=strptime(start_date,\"%Y-%m-%d\") AND _time<=strptime(end_date,\"%Y-%m-%d\")\n| lookup vip_care_team.csv PAT_ID USER_ID OUTPUT is_authorized\n| where isnull(is_authorized)\n| table _time, USER_ID, PAT_ID, vip_level, Activity\n```\n\nUnderstanding this SPL\n\n**VIP / High-Profile Patient — Elevated Access Monitoring (§164.502(a) / policy)** — Applies extra scrutiny to any user accessing patients on the VIP/high-profile flag list to deter and detect snooping.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:audit`, `lookup vip_patients.csv` (`PAT_ID`, `vip_level`, `start_date`, `end_date`). **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: epic; **sourcetype**: epic:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=epic, sourcetype=\"epic:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(vip_level) AND _time>=strptime(start_date,\"%Y-%m-%d\") AND _time<=strptime(end_date,\"%Y-%m-%d\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(is_authorized)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **VIP / High-Profile Patient — Elevated Access Monitoring (§164.502(a) / policy)**): table _time, USER_ID, PAT_ID, vip_level, Activity\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (masked IDs only), Single value (unauthorized VIP touches).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We applies extra scrutiny to any user accessing patients on the VIP/high-profile flag list to deter and detect snooping. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.502(a)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.502(a) is enforced — Splunk UC-22.10.39: VIP / High-Profile Patient — Elevated Access Monitoring.",
                  "ea": "Saved search 'UC-22.10.39' running on index=epic sourcetype=epic:audit, lookup vip_patients.csv (PAT_ID, vip_level, start_date, end_date), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.40",
              "n": "Research Access — Chart Views Without Active IRB Consent Flag (§164.502(a)(1) / §164.512(i))",
              "c": "critical",
              "f": "advanced",
              "v": "Correlates research coordinator EHR access with IRB-approved patient rosters to catch out-of-protocol chart reviews.",
              "t": "Splunk DB Connect (2686), Splunk Enterprise Security (263)",
              "d": "`index=epic` `sourcetype=epic:audit`, `lookup irb_active_subjects.csv` (`STUDY_ID`, `PAT_ID`, `consent_status`, `window_end`)",
              "q": "index=epic sourcetype=\"epic:audit\" Activity=\"*Research*\" OR Context=\"*Study*\" earliest=-30d\n| rex field=Context \"STUDY:(?<STUDY_ID>[^|]+)\"\n| lookup irb_active_subjects.csv STUDY_ID PAT_ID OUTPUT consent_status, window_end\n| eval in_window=if(_time<strptime(window_end,\"%Y-%m-%d\"),1,0)\n| where consent_status!=\"Active\" OR in_window=0 OR isnull(consent_status)\n| stats values(STUDY_ID) as studies dc(PAT_ID) as patients by USER_ID\n| sort - patients",
              "m": "(1) Standardize study context logging in Hyperspace build; (2) IRB office owns roster refresh; (3) integrate with Office for Human Research Protections policies.",
              "z": "Table (user, studies, patients), Bar chart (violations by study).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (2686), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=epic` `sourcetype=epic:audit`, `lookup irb_active_subjects.csv` (`STUDY_ID`, `PAT_ID`, `consent_status`, `window_end`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardize study context logging in Hyperspace build; (2) IRB office owns roster refresh; (3) integrate with Office for Human Research Protections policies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=epic sourcetype=\"epic:audit\" Activity=\"*Research*\" OR Context=\"*Study*\" earliest=-30d\n| rex field=Context \"STUDY:(?<STUDY_ID>[^|]+)\"\n| lookup irb_active_subjects.csv STUDY_ID PAT_ID OUTPUT consent_status, window_end\n| eval in_window=if(_time<strptime(window_end,\"%Y-%m-%d\"),1,0)\n| where consent_status!=\"Active\" OR in_window=0 OR isnull(consent_status)\n| stats values(STUDY_ID) as studies dc(PAT_ID) as patients by USER_ID\n| sort - patients\n```\n\nUnderstanding this SPL\n\n**Research Access — Chart Views Without Active IRB Consent Flag (§164.502(a)(1) / §164.512(i))** — Correlates research coordinator EHR access with IRB-approved patient rosters to catch out-of-protocol chart reviews.\n\nDocumented **Data sources**: `index=epic` `sourcetype=epic:audit`, `lookup irb_active_subjects.csv` (`STUDY_ID`, `PAT_ID`, `consent_status`, `window_end`). **App/TA** (typical add-on context): Splunk DB Connect (2686), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: epic; **sourcetype**: epic:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=epic, sourcetype=\"epic:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **in_window** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where consent_status!=\"Active\" OR in_window=0 OR isnull(consent_status)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by USER_ID** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, studies, patients), Bar chart (violations by study).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We correlate research coordinator EHR access with IRB-approved patient rosters to catch out-of-protocol chart reviews. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR",
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.502(a)(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.502(a)(1) is enforced — Splunk UC-22.10.40: Research Access — Chart Views Without Active IRB Consent Flag.",
                  "ea": "Saved search 'UC-22.10.40' running on index=epic sourcetype=epic:audit, lookup irb_active_subjects.csv (STUDY_ID, PAT_ID, consent_status, window_end), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.512(i)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.512(i) is enforced — Splunk UC-22.10.40: Research Access — Chart Views Without Active IRB Consent Flag.",
                  "ea": "Saved search 'UC-22.10.40' running on index=epic sourcetype=epic:audit, lookup irb_active_subjects.csv (STUDY_ID, PAT_ID, consent_status, window_end), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.41",
              "n": "Accounting of Disclosures — Registry vs EHR-Logged Disclosures (§164.528)",
              "c": "high",
              "f": "advanced",
              "v": "Reconciles disclosure events recorded in the accounting of disclosures repository with interface and manual disclosure logs so patients receive accurate histories.",
              "t": "Splunk Enterprise / HEC, Splunk DB Connect (2686)",
              "d": "`index=privacy` `sourcetype=hipaa:disclosure_log` (`disclosure_id`, `PAT_ID`, `recipient_class`), `index=epic` `sourcetype=epic:disclosure_export`",
              "q": "index=privacy sourcetype=\"hipaa:disclosure_log\" earliest=-30d\n| stats count by disclosure_id, PAT_ID, recipient_class\n| join type=left PAT_ID [\n    search index=epic sourcetype=\"epic:disclosure_export\" earliest=-30d\n    | stats latest(_time) as epic_export by PAT_ID ]\n| where isnull(epic_export) AND recipient_class IN (\"LawEnforcement\",\"Insurance\",\"Employer\")\n| table disclosure_id, PAT_ID, recipient_class, count",
              "m": "(1) Define authoritative system of record; (2) fix integration gaps when join misses; (3) legal review before patient-facing export.",
              "z": "Table (missing reconciliations), Column chart (disclosures by class).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / HEC, Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=privacy` `sourcetype=hipaa:disclosure_log` (`disclosure_id`, `PAT_ID`, `recipient_class`), `index=epic` `sourcetype=epic:disclosure_export`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define authoritative system of record; (2) fix integration gaps when join misses; (3) legal review before patient-facing export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"hipaa:disclosure_log\" earliest=-30d\n| stats count by disclosure_id, PAT_ID, recipient_class\n| join type=left PAT_ID [\n    search index=epic sourcetype=\"epic:disclosure_export\" earliest=-30d\n    | stats latest(_time) as epic_export by PAT_ID ]\n| where isnull(epic_export) AND recipient_class IN (\"LawEnforcement\",\"Insurance\",\"Employer\")\n| table disclosure_id, PAT_ID, recipient_class, count\n```\n\nUnderstanding this SPL\n\n**Accounting of Disclosures — Registry vs EHR-Logged Disclosures (§164.528)** — Reconciles disclosure events recorded in the accounting of disclosures repository with interface and manual disclosure logs so patients receive accurate histories.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=hipaa:disclosure_log` (`disclosure_id`, `PAT_ID`, `recipient_class`), `index=epic` `sourcetype=epic:disclosure_export`. **App/TA** (typical add-on context): Splunk Enterprise / HEC, Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: hipaa:disclosure_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"hipaa:disclosure_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by disclosure_id, PAT_ID, recipient_class** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(epic_export) AND recipient_class IN (\"LawEnforcement\",\"Insurance\",\"Employer\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Accounting of Disclosures — Registry vs EHR-Logged Disclosures (§164.528)**): table disclosure_id, PAT_ID, recipient_class, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing reconciliations), Column chart (disclosures by class).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We reconciles disclosure events recorded in the accounting of disclosures repository with interface and manual disclosure logs so patients receive accurate histories. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Compliance",
                "Audit"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.528",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.528 is enforced — Splunk UC-22.10.41: Accounting of Disclosures — Registry vs EHR-Logged Disclosures.",
                  "ea": "Saved search 'UC-22.10.41' running on sourcetype hipaa:disclosure_log and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.42",
              "n": "Patient Portal — Suspicious MyChart Password Reset & MFA Changes (§164.312(d) / §164.530(c))",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects takeover patterns on patient portals that could expose ePHI and undermine patient trust and access controls.",
              "t": "HTTP Event Collector (HEC), Splunk Add-on for Microsoft Office 365 (4055)",
              "d": "`index=mychart` `sourcetype=epic:mychart_audit` (`action`, `user_mrn`, `src_ip`), optional Okta/Azure AD B2C logs",
              "q": "index=mychart sourcetype=\"epic:mychart_audit\" earliest=-7d (action=\"PasswordReset\" OR action=\"MFAEnrollmentChanged\" OR action=\"EmailChanged\")\n| stats earliest(_time) as first latest(_time) as last dc(src_ip) as src_dc values(action) as actions by user_mrn\n| where src_dc>=3 OR (last-first)<300 AND mvcount(actions)>=2\n| sort - src_dc",
              "m": "(1) Mask MRN in non-production; (2) integrate fraud scoring from vendor if available; (3) patient outreach playbook for suspected takeover.",
              "z": "Table (MRN hash, actions, IPs), Timeline (burst resets).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC), Splunk Add-on for Microsoft Office 365 (4055).\n• Ensure the following data sources are available: `index=mychart` `sourcetype=epic:mychart_audit` (`action`, `user_mrn`, `src_ip`), optional Okta/Azure AD B2C logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Mask MRN in non-production; (2) integrate fraud scoring from vendor if available; (3) patient outreach playbook for suspected takeover.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mychart sourcetype=\"epic:mychart_audit\" earliest=-7d (action=\"PasswordReset\" OR action=\"MFAEnrollmentChanged\" OR action=\"EmailChanged\")\n| stats earliest(_time) as first latest(_time) as last dc(src_ip) as src_dc values(action) as actions by user_mrn\n| where src_dc>=3 OR (last-first)<300 AND mvcount(actions)>=2\n| sort - src_dc\n```\n\nUnderstanding this SPL\n\n**Patient Portal — Suspicious MyChart Password Reset & MFA Changes (§164.312(d) / §164.530(c))** — Detects takeover patterns on patient portals that could expose ePHI and undermine patient trust and access controls.\n\nDocumented **Data sources**: `index=mychart` `sourcetype=epic:mychart_audit` (`action`, `user_mrn`, `src_ip`), optional Okta/Azure AD B2C logs. **App/TA** (typical add-on context): HTTP Event Collector (HEC), Splunk Add-on for Microsoft Office 365 (4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mychart; **sourcetype**: epic:mychart_audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mychart, sourcetype=\"epic:mychart_audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user_mrn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where src_dc>=3 OR (last-first)<300 AND mvcount(actions)>=2` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Patient Portal — Suspicious MyChart Password Reset & MFA Changes (§164.312(d) / §164.530(c))** — Detects takeover patterns on patient portals that could expose ePHI and undermine patient trust and access controls.\n\nDocumented **Data sources**: `index=mychart` `sourcetype=epic:mychart_audit` (`action`, `user_mrn`, `src_ip`), optional Okta/Azure AD B2C logs. **App/TA** (typical add-on context): HTTP Event Collector (HEC), Splunk Add-on for Microsoft Office 365 (4055). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (MRN hash, actions, IPs), Timeline (burst resets).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect takeover patterns on patient portals that could expose ePHI and undermine patient trust and access controls. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Authentication (if federated IdP logs are CIM-tagged)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(d)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.312(d) (Person or entity authentication) is enforced — Splunk UC-22.10.42: Patient Portal — Suspicious MyChart Password Reset & MFA Changes.",
                  "ea": "Saved search 'UC-22.10.42' running on index=mychart sourcetype=epic:mychart_audit (action, user_mrn, src_ip), optional Okta/Azure AD B2C logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.530(c)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.530(c) is enforced — Splunk UC-22.10.42: Patient Portal — Suspicious MyChart Password Reset & MFA Changes.",
                  "ea": "Saved search 'UC-22.10.42' running on index=mychart sourcetype=epic:mychart_audit (action, user_mrn, src_ip), optional Okta/Azure AD B2C logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.43",
              "n": "Breach Discovery — Time-to-Detect from First PHI Indicator (§164.404)",
              "c": "critical",
              "f": "advanced",
              "v": "Measures elapsed time between earliest technical indicator of a potential PHI incident and formal incident creation so discovery timelines support Breach Notification Rule evidence.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`` `notable` `` (`_time`, `rule_name`, `status`), `index=dlp` `sourcetype=ms:o365:management` or `ms:o365:dlp` (`IncidentId`, `CreatedTime`)",
              "q": "`notable` HIPAA_Breach_Candidate=1 earliest=-90d\n| eval detect_time=_time\n| join IncidentId type=left [\n    search index=dlp sourcetype=\"o365:dlp\" earliest=-90d\n    | eval first_indicator=strptime(CreatedTime,\"%Y-%m-%dT%H:%M:%SZ\")\n    | stats min(first_indicator) as first_dlp by IncidentId ]\n| eval hours_to_detect=round((detect_time-first_dlp)/3600,2)\n| table IncidentId, first_dlp, detect_time, hours_to_detect, rule_name, status\n| sort - hours_to_detect",
              "m": "(1) Tag correlation searches that create HIPAA candidate notables; (2) normalize UTC; (3) executive dashboard for legal discovery SLA.",
              "z": "Histogram (hours_to_detect), Table (incidents), Single value (median detect hours).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `` `notable` `` (`_time`, `rule_name`, `status`), `index=dlp` `sourcetype=ms:o365:management` or `ms:o365:dlp` (`IncidentId`, `CreatedTime`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag correlation searches that create HIPAA candidate notables; (2) normalize UTC; (3) executive dashboard for legal discovery SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` HIPAA_Breach_Candidate=1 earliest=-90d\n| eval detect_time=_time\n| join IncidentId type=left [\n    search index=dlp sourcetype=\"o365:dlp\" earliest=-90d\n    | eval first_indicator=strptime(CreatedTime,\"%Y-%m-%dT%H:%M:%SZ\")\n    | stats min(first_indicator) as first_dlp by IncidentId ]\n| eval hours_to_detect=round((detect_time-first_dlp)/3600,2)\n| table IncidentId, first_dlp, detect_time, hours_to_detect, rule_name, status\n| sort - hours_to_detect\n```\n\nUnderstanding this SPL\n\n**Breach Discovery — Time-to-Detect from First PHI Indicator (§164.404)** — Measures elapsed time between earliest technical indicator of a potential PHI incident and formal incident creation so discovery timelines support Breach Notification Rule evidence.\n\nDocumented **Data sources**: `` `notable` `` (`_time`, `rule_name`, `status`), `index=dlp` `sourcetype=ms:o365:management` or `ms:o365:dlp` (`IncidentId`, `CreatedTime`). **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **detect_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **hours_to_detect** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Breach Discovery — Time-to-Detect from First PHI Indicator (§164.404)**): table IncidentId, first_dlp, detect_time, hours_to_detect, rule_name, status\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (hours_to_detect), Table (incidents), Single value (median detect hours).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measures elapsed time between earliest technical indicator of a potential PHI incident and formal incident creation so discovery timelines support Breach Notification Rule evidence. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.404",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.404 is enforced — Splunk UC-22.10.43: Breach Discovery — Time-to-Detect from First PHI Indicator.",
                  "ea": "Saved search 'UC-22.10.43' running on notable (_time, rule_name, status), index=dlp sourcetype=ms:o365:management or ms:o365:dlp (IncidentId, CreatedTime), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.44",
              "n": "Breach Risk Assessment — Four-Factor Documentation Tracking (§164.402 / §164.404)",
              "c": "critical",
              "f": "intermediate",
              "v": "Stores structured answers for nature/extent of PHI, unauthorized recipient, actual acquisition, and mitigation so breach vs non-breach determinations are auditable.",
              "t": "Splunk Enterprise / HEC",
              "d": "`index=privacy` `sourcetype=hipaa:breach_ra` JSON (`incident_id`, `factor1`, `factor2`, `factor3`, `factor4`, `preliminary_result`, `reviewer`)",
              "q": "index=privacy sourcetype=\"hipaa:breach_ra\" earliest=-365d\n| eval complete=if(isnotnull(factor1) AND isnotnull(factor2) AND isnotnull(factor3) AND isnotnull(factor4),\"Yes\",\"No\")\n| where complete=\"No\" OR isnull(preliminary_result)\n| eval days_open=round((now()-_time)/86400,1)\n| table incident_id, days_open, reviewer, factor1, factor2, factor3, factor4\n| sort - days_open",
              "m": "(1) Emit JSON from GRC web form via HEC; (2) retain immutable snapshots on decision changes; (3) legal owns field definitions.",
              "z": "Table (open assessments), Single value (incomplete RA count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / HEC.\n• Ensure the following data sources are available: `index=privacy` `sourcetype=hipaa:breach_ra` JSON (`incident_id`, `factor1`, `factor2`, `factor3`, `factor4`, `preliminary_result`, `reviewer`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Emit JSON from GRC web form via HEC; (2) retain immutable snapshots on decision changes; (3) legal owns field definitions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"hipaa:breach_ra\" earliest=-365d\n| eval complete=if(isnotnull(factor1) AND isnotnull(factor2) AND isnotnull(factor3) AND isnotnull(factor4),\"Yes\",\"No\")\n| where complete=\"No\" OR isnull(preliminary_result)\n| eval days_open=round((now()-_time)/86400,1)\n| table incident_id, days_open, reviewer, factor1, factor2, factor3, factor4\n| sort - days_open\n```\n\nUnderstanding this SPL\n\n**Breach Risk Assessment — Four-Factor Documentation Tracking (§164.402 / §164.404)** — Stores structured answers for nature/extent of PHI, unauthorized recipient, actual acquisition, and mitigation so breach vs non-breach determinations are auditable.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=hipaa:breach_ra` JSON (`incident_id`, `factor1`, `factor2`, `factor3`, `factor4`, `preliminary_result`, `reviewer`). **App/TA** (typical add-on context): Splunk Enterprise / HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: hipaa:breach_ra. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"hipaa:breach_ra\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **complete** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where complete=\"No\" OR isnull(preliminary_result)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_open** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Breach Risk Assessment — Four-Factor Documentation Tracking (§164.402 / §164.404)**): table incident_id, days_open, reviewer, factor1, factor2, factor3, factor4\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (open assessments), Single value (incomplete RA count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We stores structured answers for nature/extent of PHI, unauthorized recipient, actual acquisition, and mitigation so breach vs non-breach determinations are auditable. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Compliance",
                "Audit"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.402",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.402 is enforced — Splunk UC-22.10.44: Breach Risk Assessment — Four-Factor Documentation Tracking.",
                  "ea": "Saved search 'UC-22.10.44' running on sourcetype hipaa:breach_ra and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.404",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.404 is enforced — Splunk UC-22.10.44: Breach Risk Assessment — Four-Factor Documentation Tracking.",
                  "ea": "Saved search 'UC-22.10.44' running on sourcetype hipaa:breach_ra and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.45",
              "n": "Individual Notification — Letter Generation & Mailing Evidence (§164.404(b), (d)(1))",
              "c": "critical",
              "f": "intermediate",
              "v": "Tracks generation, print, and certified mail tracking numbers for breach letters within the 60-day window to support individual notification compliance.",
              "t": "HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928) if tickets drive mailhouse",
              "d": "`index=privacy` `sourcetype=hipaa:breach_notice` (`incident_id`, `patient_token`, `letter_status`, `mail_tracking_id`, `mailed_dttm`)",
              "q": "index=privacy sourcetype=\"hipaa:breach_notice\" earliest=-120d\n| eval mailed_epoch=strptime(mailed_dttm,\"%Y-%m-%d %H:%M:%S\")\n| eval breach_discovered=strptime(breach_discovered_dttm,\"%Y-%m-%d %H:%M:%S\")\n| eval days_to_mail=round((mailed_epoch-breach_discovered)/86400,1)\n| where letter_status!=\"Mailed\" OR days_to_mail>60 OR isnull(mail_tracking_id)\n| stats count by incident_id, letter_status\n| sort - count",
              "m": "(1) Integrate print vendor webhooks; (2) never index full SSN—use tokenized IDs; (3) legal sign-off field in same event.",
              "z": "Gantt-style timeline (discovered → mailed), Table (exceptions).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928) if tickets drive mailhouse.\n• Ensure the following data sources are available: `index=privacy` `sourcetype=hipaa:breach_notice` (`incident_id`, `patient_token`, `letter_status`, `mail_tracking_id`, `mailed_dttm`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Integrate print vendor webhooks; (2) never index full SSN—use tokenized IDs; (3) legal sign-off field in same event.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"hipaa:breach_notice\" earliest=-120d\n| eval mailed_epoch=strptime(mailed_dttm,\"%Y-%m-%d %H:%M:%S\")\n| eval breach_discovered=strptime(breach_discovered_dttm,\"%Y-%m-%d %H:%M:%S\")\n| eval days_to_mail=round((mailed_epoch-breach_discovered)/86400,1)\n| where letter_status!=\"Mailed\" OR days_to_mail>60 OR isnull(mail_tracking_id)\n| stats count by incident_id, letter_status\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Individual Notification — Letter Generation & Mailing Evidence (§164.404(b), (d)(1))** — Tracks generation, print, and certified mail tracking numbers for breach letters within the 60-day window to support individual notification compliance.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=hipaa:breach_notice` (`incident_id`, `patient_token`, `letter_status`, `mail_tracking_id`, `mailed_dttm`). **App/TA** (typical add-on context): HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928) if tickets drive mailhouse. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: hipaa:breach_notice. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"hipaa:breach_notice\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mailed_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach_discovered** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_mail** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where letter_status!=\"Mailed\" OR days_to_mail>60 OR isnull(mail_tracking_id)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by incident_id, letter_status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Gantt-style timeline (discovered → mailed), Table (exceptions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track generation, print, and certified mail tracking numbers for breach letters within the 60-day window to support individual notification compliance. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Compliance",
                "Audit"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.404(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.404(b) is enforced — Splunk UC-22.10.45: Individual Notification — Letter Generation & Mailing Evidence.",
                  "ea": "Saved search 'UC-22.10.45' running on index=privacy sourcetype=hipaa:breach_notice (incident_id, patient_token, letter_status, mail_tracking_id, mailed_dttm), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.46",
              "n": "HHS Secretary Notification — 500+ Individuals Threshold Watch (§164.408)",
              "c": "critical",
              "f": "beginner",
              "v": "Rolling count of affected individuals per incident versus the 500-person statutory threshold for timely HHS portal submission.",
              "t": "Splunk Enterprise / HEC",
              "d": "`index=privacy` `sourcetype=hipaa:breach_subject` (`incident_id`, `subject_token`, `included_in_count`)",
              "q": "index=privacy sourcetype=\"hipaa:breach_subject\" earliest=-365d\n| where included_in_count=\"true\"\n| stats dc(subject_token) as affected by incident_id\n| eval hhs_required=if(affected>=500,\"Yes\",\"No\")\n| join incident_id [\n    search index=privacy sourcetype=\"hipaa:hhs_filing\" earliest=-365d\n    | stats latest(filed_dttm) as hhs_filed by incident_id ]\n| where hhs_required=\"Yes\" AND isnull(hhs_filed)\n| table incident_id, affected, hhs_required",
              "m": "(1) Dedupe subjects carefully per legal guidance; (2) separate incidents vs linked incidents; (3) alert at 450 rolling to create runway.",
              "z": "Single value (open >500 without filing), Table (incident, affected).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / HEC.\n• Ensure the following data sources are available: `index=privacy` `sourcetype=hipaa:breach_subject` (`incident_id`, `subject_token`, `included_in_count`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Dedupe subjects carefully per legal guidance; (2) separate incidents vs linked incidents; (3) alert at 450 rolling to create runway.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"hipaa:breach_subject\" earliest=-365d\n| where included_in_count=\"true\"\n| stats dc(subject_token) as affected by incident_id\n| eval hhs_required=if(affected>=500,\"Yes\",\"No\")\n| join incident_id [\n    search index=privacy sourcetype=\"hipaa:hhs_filing\" earliest=-365d\n    | stats latest(filed_dttm) as hhs_filed by incident_id ]\n| where hhs_required=\"Yes\" AND isnull(hhs_filed)\n| table incident_id, affected, hhs_required\n```\n\nUnderstanding this SPL\n\n**HHS Secretary Notification — 500+ Individuals Threshold Watch (§164.408)** — Rolling count of affected individuals per incident versus the 500-person statutory threshold for timely HHS portal submission.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=hipaa:breach_subject` (`incident_id`, `subject_token`, `included_in_count`). **App/TA** (typical add-on context): Splunk Enterprise / HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: hipaa:breach_subject. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"hipaa:breach_subject\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where included_in_count=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by incident_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hhs_required** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where hhs_required=\"Yes\" AND isnull(hhs_filed)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HHS Secretary Notification — 500+ Individuals Threshold Watch (§164.408)**): table incident_id, affected, hhs_required\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (open >500 without filing), Table (incident, affected).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We rolling count of affected individuals per incident versus the 500-person statutory threshold for timely HHS portal submission. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Compliance",
                "Audit"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.408",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.408 is enforced — Splunk UC-22.10.46: HHS Secretary Notification — 500+ Individuals Threshold Watch.",
                  "ea": "Saved search 'UC-22.10.46' running on index=privacy sourcetype=hipaa:breach_subject (incident_id, subject_token, included_in_count), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.47",
              "n": "Media Notification — Large-State Resident Threshold Tracking (§164.406(c))",
              "c": "critical",
              "f": "intermediate",
              "v": "Computes per-state affected resident counts to trigger prominent media notice obligations when thresholds in a jurisdiction or overall breach size demand public notice.",
              "t": "Splunk Enterprise / HEC",
              "d": "`index=privacy` `sourcetype=hipaa:breach_subject` (`incident_id`, `state`, `media_required`), `lookup state_media_threshold.csv`",
              "q": "index=privacy sourcetype=\"hipaa:breach_subject\" earliest=-365d included_in_count=\"true\"\n| stats dc(subject_token) as affected by incident_id, state\n| lookup state_media_threshold.csv state OUTPUT threshold\n| eval media_needed=if(affected>=threshold,\"Yes\",\"No\")\n| where media_needed=\"Yes\"\n| join incident_id [\n    search index=privacy sourcetype=\"hipaa:media_notice\" earliest=-365d\n    | stats latest(published_dttm) as published by incident_id, state ]\n| where isnull(published)\n| table incident_id, state, affected, threshold",
              "m": "(1) Legal provides threshold table including state mini-HIPAA laws; (2) do not store full addresses in Splunk; (3) coordinate with comms team workflow IDs.",
              "z": "Choropleth (affected by state), Table (missing media publishes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / HEC.\n• Ensure the following data sources are available: `index=privacy` `sourcetype=hipaa:breach_subject` (`incident_id`, `state`, `media_required`), `lookup state_media_threshold.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Legal provides threshold table including state mini-HIPAA laws; (2) do not store full addresses in Splunk; (3) coordinate with comms team workflow IDs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"hipaa:breach_subject\" earliest=-365d included_in_count=\"true\"\n| stats dc(subject_token) as affected by incident_id, state\n| lookup state_media_threshold.csv state OUTPUT threshold\n| eval media_needed=if(affected>=threshold,\"Yes\",\"No\")\n| where media_needed=\"Yes\"\n| join incident_id [\n    search index=privacy sourcetype=\"hipaa:media_notice\" earliest=-365d\n    | stats latest(published_dttm) as published by incident_id, state ]\n| where isnull(published)\n| table incident_id, state, affected, threshold\n```\n\nUnderstanding this SPL\n\n**Media Notification — Large-State Resident Threshold Tracking (§164.406(c))** — Computes per-state affected resident counts to trigger prominent media notice obligations when thresholds in a jurisdiction or overall breach size demand public notice.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=hipaa:breach_subject` (`incident_id`, `state`, `media_required`), `lookup state_media_threshold.csv`. **App/TA** (typical add-on context): Splunk Enterprise / HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: hipaa:breach_subject. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"hipaa:breach_subject\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by incident_id, state** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **media_needed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where media_needed=\"Yes\"` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(published)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Media Notification — Large-State Resident Threshold Tracking (§164.406(c))**): table incident_id, state, affected, threshold\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Choropleth (affected by state), Table (missing media publishes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We computes per-state affected resident counts to trigger prominent media notice obligations when thresholds in a jurisdiction or overall breach size demand public notice. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Compliance",
                "Audit"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.406(c)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.406(c) is enforced — Splunk UC-22.10.47: Media Notification — Large-State Resident Threshold Tracking.",
                  "ea": "Saved search 'UC-22.10.47' running on index=privacy sourcetype=hipaa:breach_subject (incident_id, state, media_required), lookup state_media_threshold.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.48",
              "n": "Breach Log / Incident Register — Immutable Chronological Record (§164.402 / policy)",
              "c": "high",
              "f": "beginner",
              "v": "Maintains a searchable chronological register of all breach-suspected and confirmed incidents with status transitions for OCR-style examinations.",
              "t": "Splunk Enterprise / HEC",
              "d": "`index=privacy` `sourcetype=hipaa:breach_register` (`register_id`, `incident_title`, `status`, `severity`, `owner`)",
              "q": "index=privacy sourcetype=\"hipaa:breach_register\" earliest=-365d\n| stats latest(status) as last_status latest(_time) as last_update earliest(_time) as opened latest(incident_title) as incident_title values(owner) as owner by register_id\n| eval days_open=round((now()-opened)/86400,0)\n| where last_status IN (\"Draft\",\"Investigating\") AND days_open>30\n| table register_id, incident_title, last_status, owner, days_open, last_update\n| sort - days_open",
              "m": "(1) Append-only sourcetype from GRC; (2) restrict to privacy/legal roles; (3) export annually to offline counsel archive.",
              "z": "Timeline (status transitions), Table (stale investigations), Single value (open incidents >30d).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / HEC.\n• Ensure the following data sources are available: `index=privacy` `sourcetype=hipaa:breach_register` (`register_id`, `incident_title`, `status`, `severity`, `owner`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Append-only sourcetype from GRC; (2) restrict to privacy/legal roles; (3) export annually to offline counsel archive.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"hipaa:breach_register\" earliest=-365d\n| stats latest(status) as last_status latest(_time) as last_update earliest(_time) as opened latest(incident_title) as incident_title values(owner) as owner by register_id\n| eval days_open=round((now()-opened)/86400,0)\n| where last_status IN (\"Draft\",\"Investigating\") AND days_open>30\n| table register_id, incident_title, last_status, owner, days_open, last_update\n| sort - days_open\n```\n\nUnderstanding this SPL\n\n**Breach Log / Incident Register — Immutable Chronological Record (§164.402 / policy)** — Maintains a searchable chronological register of all breach-suspected and confirmed incidents with status transitions for OCR-style examinations.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=hipaa:breach_register` (`register_id`, `incident_title`, `status`, `severity`, `owner`). **App/TA** (typical add-on context): Splunk Enterprise / HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: hipaa:breach_register. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"hipaa:breach_register\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by register_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_open** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where last_status IN (\"Draft\",\"Investigating\") AND days_open>30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Breach Log / Incident Register — Immutable Chronological Record (§164.402 / policy)**): table register_id, incident_title, last_status, owner, days_open, last_update\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (status transitions), Table (stale investigations), Single value (open incidents >30d).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We maintains a searchable chronological register of all breach-suspected and confirmed incidents with status transitions for OCR-style examinations. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Compliance",
                "Audit"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.402",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.402 is enforced — Splunk UC-22.10.48: Breach Log / Incident Register — Immutable Chronological Record.",
                  "ea": "Saved search 'UC-22.10.48' running on index=privacy sourcetype=hipaa:breach_register (register_id, incident_title, status, severity, owner), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.49",
              "n": "Breach Remediation — Control Implementation Evidence Post-Incident (§164.308(a)(1)(ii)(A))",
              "c": "high",
              "f": "intermediate",
              "v": "Links post-breach corrective actions (MFA rollout, DLP rules) to measurable control telemetry so remediation is demonstrable, not narrative-only.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Office 365 (4055)",
              "d": "`index=privacy` `sourcetype=hipaa:remediation_task` (`incident_id`, `control_id`, `target_date`), `index=o365` DLP rule hits, `index=ad` MFA coverage scripted metrics",
              "q": "index=privacy sourcetype=\"hipaa:remediation_task\" earliest=-180d status!=\"Closed\"\n| eval due=strptime(target_date,\"%Y-%m-%d\")\n| join control_id [\n    search index=metrics sourcetype=\"hipaa:control_metric\" earliest=-1d\n    | stats latest(metric_value) as current_value by control_id ]\n| eval behind=if(now()>due AND current_value<100,1,0)\n| where behind=1\n| table incident_id, control_id, target_date, current_value, owner",
              "m": "(1) Define `metric_value` as percent complete (e.g., MFA enrolled / total); (2) nightly scripted input; (3) board-level dashboard after major breaches.",
              "z": "Bullet chart (current vs target), Table (behind tasks).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Office 365 (4055).\n• Ensure the following data sources are available: `index=privacy` `sourcetype=hipaa:remediation_task` (`incident_id`, `control_id`, `target_date`), `index=o365` DLP rule hits, `index=ad` MFA coverage scripted metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define `metric_value` as percent complete (e.g., MFA enrolled / total); (2) nightly scripted input; (3) board-level dashboard after major breaches.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"hipaa:remediation_task\" earliest=-180d status!=\"Closed\"\n| eval due=strptime(target_date,\"%Y-%m-%d\")\n| join control_id [\n    search index=metrics sourcetype=\"hipaa:control_metric\" earliest=-1d\n    | stats latest(metric_value) as current_value by control_id ]\n| eval behind=if(now()>due AND current_value<100,1,0)\n| where behind=1\n| table incident_id, control_id, target_date, current_value, owner\n```\n\nUnderstanding this SPL\n\n**Breach Remediation — Control Implementation Evidence Post-Incident (§164.308(a)(1)(ii)(A))** — Links post-breach corrective actions (MFA rollout, DLP rules) to measurable control telemetry so remediation is demonstrable, not narrative-only.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=hipaa:remediation_task` (`incident_id`, `control_id`, `target_date`), `index=o365` DLP rule hits, `index=ad` MFA coverage scripted metrics. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Office 365 (4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: hipaa:remediation_task. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"hipaa:remediation_task\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **behind** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where behind=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Breach Remediation — Control Implementation Evidence Post-Incident (§164.308(a)(1)(ii)(A))**): table incident_id, control_id, target_date, current_value, owner\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Bullet chart (current vs target), Table (behind tasks).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We links post-breach corrective actions (MFA rollout, DLP rules) to measurable control telemetry so remediation is demonstrable, not narrative-only. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(1)(ii)(A)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.308(a)(1)(ii)(A) is enforced — Splunk UC-22.10.49: Breach Remediation — Control Implementation Evidence Post-Incident.",
                  "ea": "Saved search 'UC-22.10.49' running on sourcetype hipaa:remediation_task and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.50",
              "n": "Annual Breach Reporting — Trend of Affected Individuals & Root Cause (§164.408 / OCR reporting)",
              "c": "high",
              "f": "intermediate",
              "v": "Aggregates year-over-year breach statistics and root causes for leadership and regulatory annual summaries.",
              "t": "Splunk Enterprise / HEC",
              "d": "`index=privacy` `sourcetype=hipaa:breach_closeout` (`close_date`, `affected_count`, `root_cause`, `vector`)",
              "q": "index=privacy sourcetype=\"hipaa:breach_closeout\" earliest=-800d confirmed_breach=\"true\"\n| eval year=strftime(strptime(close_date,\"%Y-%m-%d\"),\"%Y\")\n| stats sum(affected_count) as individuals count as incidents by year, root_cause, vector\n| sort year, - individuals",
              "m": "(1) Standardize `root_cause` taxonomy (misdelivery, theft, hacking); (2) align with OCR data request fields; (3) scrub narrative PHI from free-text fields.",
              "z": "Stacked column (individuals by cause per year), Line chart (incident count trend).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / HEC.\n• Ensure the following data sources are available: `index=privacy` `sourcetype=hipaa:breach_closeout` (`close_date`, `affected_count`, `root_cause`, `vector`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardize `root_cause` taxonomy (misdelivery, theft, hacking); (2) align with OCR data request fields; (3) scrub narrative PHI from free-text fields.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"hipaa:breach_closeout\" earliest=-800d confirmed_breach=\"true\"\n| eval year=strftime(strptime(close_date,\"%Y-%m-%d\"),\"%Y\")\n| stats sum(affected_count) as individuals count as incidents by year, root_cause, vector\n| sort year, - individuals\n```\n\nUnderstanding this SPL\n\n**Annual Breach Reporting — Trend of Affected Individuals & Root Cause (§164.408 / OCR reporting)** — Aggregates year-over-year breach statistics and root causes for leadership and regulatory annual summaries.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=hipaa:breach_closeout` (`close_date`, `affected_count`, `root_cause`, `vector`). **App/TA** (typical add-on context): Splunk Enterprise / HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: hipaa:breach_closeout. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"hipaa:breach_closeout\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **year** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by year, root_cause, vector** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked column (individuals by cause per year), Line chart (incident count trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We aggregates year-over-year breach statistics and root causes for leadership and regulatory annual summaries. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Compliance",
                "Audit"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.408",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.408 is enforced — Splunk UC-22.10.50: Annual Breach Reporting — Trend of Affected Individuals & Root Cause.",
                  "ea": "Saved search 'UC-22.10.50' running on index=privacy sourcetype=hipaa:breach_closeout (close_date, affected_count, root_cause, vector), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.51",
              "n": "Business Associate Access — VPN/SSO Sessions Originating from BA Address Space (§164.308(b) / §164.502(e))",
              "c": "high",
              "f": "advanced",
              "v": "Surfaces new source ASNs or IP ranges used by business associates connecting into hosted EHR jump boxes so BA access stays within contracted technical boundaries.",
              "t": "Splunk Add-on for Citrix (2757), Splunk Enterprise Security (263)",
              "d": "`index=citrix` `sourcetype=citrix:session`, CIDR lookup `ba_ip_ranges` in `transforms.conf` (`vendor_name`, `baa_id`) or `ba_ip_ranges.csv` with one row per approved egress IP",
              "q": "index=citrix sourcetype=\"citrix:session\" earliest=-7d PublishedApplication=\"*Epic*BAA*\"\n| lookup ba_ip_ranges client_ip OUTPUT vendor_name, baa_id\n| where isnull(vendor_name)\n| iplocation client_ip\n| stats count dc(UserName) as distinct_users by client_ip, Country, Org, PublishedApplication\n| sort - count",
              "m": "(1) Define `ba_ip_ranges` in `transforms.conf` as a CIDR lookup on `client_ip`, or maintain one row per approved egress IP in `ba_ip_ranges.csv`; (2) refresh when BA contracts change VPN concentrators; (3) alert on first-seen IP before allowlisting; (4) optional threat intel enrichment on `client_ip`.",
              "z": "Table (client IP, users, sessions), Map (geo), Single value (sessions from non-allowlisted BA egress).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Citrix](https://splunkbase.splunk.com/app/2757), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Citrix (2757), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=citrix` `sourcetype=citrix:session`, CIDR lookup `ba_ip_ranges` in `transforms.conf` (`vendor_name`, `baa_id`) or `ba_ip_ranges.csv` with one row per approved egress IP.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define `ba_ip_ranges` in `transforms.conf` as a CIDR lookup on `client_ip`, or maintain one row per approved egress IP in `ba_ip_ranges.csv`; (2) refresh when BA contracts change VPN concentrators; (3) alert on first-seen IP before allowlisting; (4) optional threat intel enrichment on `client_ip`.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=citrix sourcetype=\"citrix:session\" earliest=-7d PublishedApplication=\"*Epic*BAA*\"\n| lookup ba_ip_ranges client_ip OUTPUT vendor_name, baa_id\n| where isnull(vendor_name)\n| iplocation client_ip\n| stats count dc(UserName) as distinct_users by client_ip, Country, Org, PublishedApplication\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Business Associate Access — VPN/SSO Sessions Originating from BA Address Space (§164.308(b) / §164.502(e))** — Surfaces new source ASNs or IP ranges used by business associates connecting into hosted EHR jump boxes so BA access stays within contracted technical boundaries.\n\nDocumented **Data sources**: `index=citrix` `sourcetype=citrix:session`, CIDR lookup `ba_ip_ranges` in `transforms.conf` (`vendor_name`, `baa_id`) or `ba_ip_ranges.csv` with one row per approved egress IP. **App/TA** (typical add-on context): Splunk Add-on for Citrix (2757), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: citrix; **sourcetype**: citrix:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=citrix, sourcetype=\"citrix:session\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(vendor_name)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Business Associate Access — VPN/SSO Sessions Originating from BA Address Space (§164.308(b) / §164.502(e))**): iplocation client_ip\n• `stats` rolls up events into metrics; results are split **by client_ip, Country, Org, PublishedApplication** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table (client IP, users, sessions), Map (geo), Single value (sessions from non-allowlisted BA egress).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface new source ASNs or IP ranges used by business associates connecting into hosted EHR jump boxes so BA access stays within contracted technical boundaries. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "citrix"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.308(b) is enforced — Splunk UC-22.10.51: Business Associate Access — VPN/SSO Sessions Originating from BA Address Space.",
                  "ea": "Saved search 'UC-22.10.51' running on sourcetype citrix:session and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.502(e)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.502(e) is enforced — Splunk UC-22.10.51: Business Associate Access — VPN/SSO Sessions Originating from BA Address Space.",
                  "ea": "Saved search 'UC-22.10.51' running on sourcetype citrix:session and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.52",
              "n": "BAA Compliance Evidence — Control Attestations vs Technical Telemetry (§164.308(b)(3) / §164.502(e))",
              "c": "high",
              "f": "intermediate",
              "v": "Joins annual BA questionnaire answers (encryption, logging) with Splunk-ingested technical proof points so subcontractor compliance is evidence-based, not checkbox-only.",
              "t": "Splunk Enterprise / HEC, Splunk Enterprise Security (263)",
              "d": "`index=privacy` `sourcetype=hipaa:baa_attestation` (`vendor_id`, `control_id`, `answer`), `index=metrics` `sourcetype=hipaa:ba_control_metric` (`vendor_id`, `control_id`, `measured_value`)",
              "q": "index=privacy sourcetype=\"hipaa:baa_attestation\" earliest=-365d\n| where answer IN (\"Yes\",\"FullyImplemented\")\n| join vendor_id, control_id [\n    search index=metrics sourcetype=\"hipaa:ba_control_metric\" earliest=-30d\n    | stats latest(measured_value) as measured by vendor_id, control_id ]\n| eval gap=if(isnull(measured) OR measured<95,\"Evidence_Gap\",\"OK\")\n| where gap!=\"OK\"\n| table vendor_id, control_id, answer, measured, gap",
              "m": "(1) Map each attestation row to an automated metric (for example TLS coverage percent from vulnerability scans); (2) legal sets materiality thresholds; (3) feed gaps into vendor risk reviews.",
              "z": "Matrix (vendor x control), Table (gaps only), Column chart (percent evidence coverage).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / HEC, Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=privacy` `sourcetype=hipaa:baa_attestation` (`vendor_id`, `control_id`, `answer`), `index=metrics` `sourcetype=hipaa:ba_control_metric` (`vendor_id`, `control_id`, `measured_value`).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map each attestation row to an automated metric (for example TLS coverage percent from vulnerability scans); (2) legal sets materiality thresholds; (3) feed gaps into vendor risk reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"hipaa:baa_attestation\" earliest=-365d\n| where answer IN (\"Yes\",\"FullyImplemented\")\n| join vendor_id, control_id [\n    search index=metrics sourcetype=\"hipaa:ba_control_metric\" earliest=-30d\n    | stats latest(measured_value) as measured by vendor_id, control_id ]\n| eval gap=if(isnull(measured) OR measured<95,\"Evidence_Gap\",\"OK\")\n| where gap!=\"OK\"\n| table vendor_id, control_id, answer, measured, gap\n```\n\nUnderstanding this SPL\n\n**BAA Compliance Evidence — Control Attestations vs Technical Telemetry (§164.308(b)(3) / §164.502(e))** — Joins annual BA questionnaire answers (encryption, logging) with Splunk-ingested technical proof points so subcontractor compliance is evidence-based, not checkbox-only.\n\nDocumented **Data sources**: `index=privacy` `sourcetype=hipaa:baa_attestation` (`vendor_id`, `control_id`, `answer`), `index=metrics` `sourcetype=hipaa:ba_control_metric` (`vendor_id`, `control_id`, `measured_value`). **App/TA** (typical add-on context): Splunk Enterprise / HEC, Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: hipaa:baa_attestation. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"hipaa:baa_attestation\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where answer IN (\"Yes\",\"FullyImplemented\")` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **BAA Compliance Evidence — Control Attestations vs Technical Telemetry (§164.308(b)(3) / §164.502(e))**): table vendor_id, control_id, answer, measured, gap\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (vendor x control), Table (gaps only), Column chart (percent evidence coverage).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We joins annual BA questionnaire answers (encryption, logging) with Splunk-ingested technical proof points so subcontractor compliance is evidence-based, not checkbox-only. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Compliance",
                "Audit"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(b)(3)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.308(b)(3) is enforced — Splunk UC-22.10.52: BAA Compliance Evidence — Control Attestations vs Technical Telemetry.",
                  "ea": "Saved search 'UC-22.10.52' running on sourcetype hipaa:baa_attestation and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.502(e)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.502(e) is enforced — Splunk UC-22.10.52: BAA Compliance Evidence — Control Attestations vs Technical Telemetry.",
                  "ea": "Saved search 'UC-22.10.52' running on sourcetype hipaa:baa_attestation and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.53",
              "n": "Subcontractor Access — Downstream API Keys Touching ePHI Interfaces (§164.502(e) / BAA chain)",
              "c": "critical",
              "f": "advanced",
              "v": "Monitors first-seen API keys or OAuth clients used by BA subcontractors against an approved key registry so the extended BAA chain does not introduce unreviewed ePHI paths.",
              "t": "HTTP Event Collector (HEC), Splunk Enterprise Security (263)",
              "d": "`index=interop` `sourcetype=apigee:proxy` or AWS API Gateway logs (`client_id`, `api_key`, `consumer`), `lookup ba_subcontractor_clients.csv`",
              "q": "index=interop sourcetype=\"apigee:proxy\" uri_path=\"/fhir/*\" earliest=-30d\n| stats earliest(_time) as first_seen latest(_time) as last_seen dc(status) as status_codes by client_id, api_key\n| eval key_or_client=coalesce(client_id, api_key)\n| lookup ba_subcontractor_clients.csv key_or_client OUTPUT approved_vendor, subcontractor_name\n| where isnull(approved_vendor)\n| sort first_seen",
              "m": "(1) Hash static API keys at ingest with `sha256(api_key)` for display fields; (2) require procurement to register subcontractor keys before production; (3) auto-revoke via API management integration on critical alerts.",
              "z": "Table (first_seen, key hash, path), Timeline (new clients per week).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=interop` `sourcetype=apigee:proxy` or AWS API Gateway logs (`client_id`, `api_key`, `consumer`), `lookup ba_subcontractor_clients.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Hash static API keys at ingest with `sha256(api_key)` for display fields; (2) require procurement to register subcontractor keys before production; (3) auto-revoke via API management integration on critical alerts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=interop sourcetype=\"apigee:proxy\" uri_path=\"/fhir/*\" earliest=-30d\n| stats earliest(_time) as first_seen latest(_time) as last_seen dc(status) as status_codes by client_id, api_key\n| eval key_or_client=coalesce(client_id, api_key)\n| lookup ba_subcontractor_clients.csv key_or_client OUTPUT approved_vendor, subcontractor_name\n| where isnull(approved_vendor)\n| sort first_seen\n```\n\nUnderstanding this SPL\n\n**Subcontractor Access — Downstream API Keys Touching ePHI Interfaces (§164.502(e) / BAA chain)** — Monitors first-seen API keys or OAuth clients used by BA subcontractors against an approved key registry so the extended BAA chain does not introduce unreviewed ePHI paths.\n\nDocumented **Data sources**: `index=interop` `sourcetype=apigee:proxy` or AWS API Gateway logs (`client_id`, `api_key`, `consumer`), `lookup ba_subcontractor_clients.csv`. **App/TA** (typical add-on context): HTTP Event Collector (HEC), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: interop; **sourcetype**: apigee:proxy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=interop, sourcetype=\"apigee:proxy\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by client_id, api_key** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **key_or_client** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved_vendor)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (first_seen, key hash, path), Timeline (new clients per week).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We monitor first-seen API keys or OAuth clients used by BA subcontractors against an approved key registry so the extended BAA chain does not introduce unreviewed ePHI paths. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.502(e)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.502(e) is enforced — Splunk UC-22.10.53: Subcontractor Access — Downstream API Keys Touching ePHI Interfaces.",
                  "ea": "Saved search 'UC-22.10.53' running on sourcetype apigee:proxy and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.54",
              "n": "Third-Party Data Sharing — O365 Sharing Links to External Domains on PHI Libraries (§164.502(b) / §164.514(e))",
              "c": "critical",
              "f": "intermediate",
              "v": "Surfaces SharePoint or OneDrive anonymous or company links created in sites labeled for clinical operations that resolve to recipients outside the covered entity tenant.",
              "t": "Splunk Add-on for Microsoft Office 365 (4055)",
              "d": "`index=o365` `sourcetype=ms:o365:management` SharePointFileOperation or SharingSet operations, Purview sensitivity labels when present",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Workload=SharePoint earliest=-7d Operation IN (\"SharingSet\",\"AnonymousLinkCreated\",\"CompanyLinkCreated\")\n| eval site_label=coalesce(SensitivityLabel, SiteSensitivity)\n| where match(lower(site_label),\"clinical|phi|hipaa|restricted\")\n| eval sharee_lower=lower(Sharee)\n| eval external=if(match(sharee_lower,\"@yourhealthorg\\.com$\"),0,1)\n| where external=1\n| stats count values(Sharee) as sharees by UserId, ObjectId, Operation, site_label\n| sort - count",
              "m": "(1) Replace `@yourhealthorg\\.com` with your primary tenant SMTP domain; (2) align `site_label` with Purview labels actually deployed; (3) tune for legitimate research collaborators via domain allow list; (4) integrate with insider risk workflows.",
              "z": "Table (user, object, sharees), Bar chart (external shares by label), Single value (anonymous links).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (4055).\n• Ensure the following data sources are available: `index=o365` `sourcetype=ms:o365:management` SharePointFileOperation or SharingSet operations, Purview sensitivity labels when present.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Replace `@yourhealthorg\\.com` with your primary tenant SMTP domain; (2) align `site_label` with Purview labels actually deployed; (3) tune for legitimate research collaborators via domain allow list; (4) integrate with insider risk workflows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Workload=SharePoint earliest=-7d Operation IN (\"SharingSet\",\"AnonymousLinkCreated\",\"CompanyLinkCreated\")\n| eval site_label=coalesce(SensitivityLabel, SiteSensitivity)\n| where match(lower(site_label),\"clinical|phi|hipaa|restricted\")\n| eval sharee_lower=lower(Sharee)\n| eval external=if(match(sharee_lower,\"@yourhealthorg\\.com$\"),0,1)\n| where external=1\n| stats count values(Sharee) as sharees by UserId, ObjectId, Operation, site_label\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Third-Party Data Sharing — O365 Sharing Links to External Domains on PHI Libraries (§164.502(b) / §164.514(e))** — Surfaces SharePoint or OneDrive anonymous or company links created in sites labeled for clinical operations that resolve to recipients outside the covered entity tenant.\n\nDocumented **Data sources**: `index=o365` `sourcetype=ms:o365:management` SharePointFileOperation or SharingSet operations, Purview sensitivity labels when present. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **site_label** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(lower(site_label),\"clinical|phi|hipaa|restricted\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **sharee_lower** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **external** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where external=1` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by UserId, ObjectId, Operation, site_label** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (user, object, sharees), Bar chart (external shares by label), Single value (anonymous links).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface SharePoint or OneDrive anonymous or company links created in sites labeled for clinical operations that resolve to recipients outside the covered entity tenant. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.502(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.502(b) is enforced — Splunk UC-22.10.54: Third-Party Data Sharing — O365 Sharing Links to External Domains on PHI Libraries.",
                  "ea": "Saved search 'UC-22.10.54' running on sourcetype ms:o365:management and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.514(e)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.514(e) is enforced — Splunk UC-22.10.54: Third-Party Data Sharing — O365 Sharing Links to External Domains on PHI Libraries.",
                  "ea": "Saved search 'UC-22.10.54' running on sourcetype ms:o365:management and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.10.55",
              "n": "Cloud Service Provider — ePHI Hosting Admin Actions in AWS or Azure Audit (§164.308(a)(1) / §164.502(e))",
              "c": "critical",
              "f": "advanced",
              "v": "Tracks CSP administrator and break-glass activity against accounts or subscriptions tagged as hosting replicated ePHI backups or analytics, supporting BA and cloud subcontractor oversight.",
              "t": "Splunk Add-on for Amazon Web Services (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Enterprise Security (263)",
              "d": "`index=aws` `sourcetype=aws:cloudtrail` (`userIdentity.arn`, `eventName`, `requestParameters`), Azure Activity Log (`azure:monitor`), CMDB lookup `aws_account_ephi_tags.csv`",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" earliest=-7d\n| rename userIdentity.arn as user_arn\n| eval account=coalesce(recipientAccountId, aws_account_id)\n| lookup aws_account_ephi_tags.csv account OUTPUT contains_ephi_backup\n| where contains_ephi_backup=\"true\"\n| search eventName IN (\"AssumeRole\",\"GetObject\",\"PutObject\",\"DeleteObject\",\"CreateSnapshot\",\"RunInstances\",\"ModifyDBInstance\")\n| eval principal=mvindex(split(user_arn,\"/\"),-1)\n| stats count values(eventName) as actions by principal, account, sourceIPAddress\n| sort - count",
              "m": "(1) Tag AWS accounts in `aws_account_ephi_tags.csv` from enterprise architecture; (2) use organization CloudTrail with integrity validation; (3) route root and break-glass usage to SOC and privacy; (4) map ARNs to named individuals via IAM Identity Center export.",
              "z": "Table (principal, actions, account), Timeline (snapshot or delete spikes), Single value (break-glass events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Amazon Web Services](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Amazon Web Services (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=aws` `sourcetype=aws:cloudtrail` (`userIdentity.arn`, `eventName`, `requestParameters`), Azure Activity Log (`azure:monitor`), CMDB lookup `aws_account_ephi_tags.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag AWS accounts in `aws_account_ephi_tags.csv` from enterprise architecture; (2) use organization CloudTrail with integrity validation; (3) route root and break-glass usage to SOC and privacy; (4) map ARNs to named individuals via IAM Identity Center export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" earliest=-7d\n| rename userIdentity.arn as user_arn\n| eval account=coalesce(recipientAccountId, aws_account_id)\n| lookup aws_account_ephi_tags.csv account OUTPUT contains_ephi_backup\n| where contains_ephi_backup=\"true\"\n| search eventName IN (\"AssumeRole\",\"GetObject\",\"PutObject\",\"DeleteObject\",\"CreateSnapshot\",\"RunInstances\",\"ModifyDBInstance\")\n| eval principal=mvindex(split(user_arn,\"/\"),-1)\n| stats count values(eventName) as actions by principal, account, sourceIPAddress\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Cloud Service Provider — ePHI Hosting Admin Actions in AWS or Azure Audit (§164.308(a)(1) / §164.502(e))** — Tracks CSP administrator and break-glass activity against accounts or subscriptions tagged as hosting replicated ePHI backups or analytics, supporting BA and cloud subcontractor oversight.\n\nDocumented **Data sources**: `index=aws` `sourcetype=aws:cloudtrail` (`userIdentity.arn`, `eventName`, `requestParameters`), Azure Activity Log (`azure:monitor`), CMDB lookup `aws_account_ephi_tags.csv`. **App/TA** (typical add-on context): Splunk Add-on for Amazon Web Services (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Renames fields with `rename` for clarity or joins.\n• `eval` defines or adjusts **account** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where contains_ephi_backup=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **principal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by principal, account, sourceIPAddress** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cloud Service Provider — ePHI Hosting Admin Actions in AWS or Azure Audit (§164.308(a)(1) / §164.502(e))** — Tracks CSP administrator and break-glass activity against accounts or subscriptions tagged as hosting replicated ePHI backups or analytics, supporting BA and cloud subcontractor oversight.\n\nDocumented **Data sources**: `index=aws` `sourcetype=aws:cloudtrail` (`userIdentity.arn`, `eventName`, `requestParameters`), Azure Activity Log (`azure:monitor`), CMDB lookup `aws_account_ephi_tags.csv`. **App/TA** (typical add-on context): Splunk Add-on for Amazon Web Services (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (principal, actions, account), Timeline (snapshot or delete spikes), Single value (break-glass events).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track CSP administrator and break-glass activity against accounts or subscriptions tagged as hosting replicated ePHI backups or analytics, supporting BA and cloud subcontractor oversight. HIPAA is about keeping health records private and secure when we use them in care and operations.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA"
              ],
              "a": [
                "Change (CloudTrail when CIM-tagged)",
                "N/A for raw CloudTrail"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Security §164.308(a)(1) (Security management process) is enforced — Splunk UC-22.10.55: Cloud Service Provider — ePHI Hosting Admin Actions in AWS or Azure Audit.",
                  "ea": "Saved search 'UC-22.10.55' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.502(e)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that HIPAA Security §164.502(e) is enforced — Splunk UC-22.10.55: Cloud Service Provider — ePHI Hosting Admin Actions in AWS or Azure Audit.",
                  "ea": "Saved search 'UC-22.10.55' running on sourcetype aws:cloudtrail and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 55,
            "none": 0
          }
        },
        {
          "i": "22.13",
          "n": "NERC CIP",
          "u": [
            {
              "i": "22.13.1",
              "n": "BES Cyber Asset Inventory Reconciliation Against Telemetry (CIP-002-6 R1 Part 1.1)",
              "c": "critical",
              "f": "advanced",
              "v": "We find assets appearing in security telemetry that are missing from the BES Cyber System inventory before auditors do, closing registration gaps that undermine categorization.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=wineventlog` `sourcetype=WinEventLog:Security`, `index=network` `sourcetype=pan:traffic`, lookup `bes_cyber_asset_inventory.csv`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4768) earliest=-24h\n| eval asset=lower(coalesce(dest_nt_host, WorkstationName))\n| stats dc(asset) as hits by asset\n| append [\n    search index=network sourcetype=pan:traffic bes_zone=\"ESP\" earliest=-24h action=allowed\n    | eval asset=lower(dest)\n    | stats count by asset\n  ]\n| stats sum(count) as pan_hits max(hits) as win_hits by asset\n| lookup bes_cyber_asset_inventory.csv asset OUTPUT bes_system impact_rating\n| where isnull(bes_system)\n| eval evidence=coalesce(win_hits, pan_hits)\n| sort - evidence",
              "m": "(1) Maintain `bes_cyber_asset_inventory.csv` from the official BES Cyber System list. (2) Normalize hostnames and IP keys consistently. (3) Schedule weekly and route gaps to asset governance. (4) Close tickets by CMDB update or correct Splunk tagging.",
              "z": "Table (unregistered assets), Bar chart (hits by index), Single value (gap count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=wineventlog` `sourcetype=WinEventLog:Security`, `index=network` `sourcetype=pan:traffic`, lookup `bes_cyber_asset_inventory.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `bes_cyber_asset_inventory.csv` from the official BES Cyber System list. (2) Normalize hostnames and IP keys consistently. (3) Schedule weekly and route gaps to asset governance. (4) Close tickets by CMDB update or correct Splunk tagging.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4768) earliest=-24h\n| eval asset=lower(coalesce(dest_nt_host, WorkstationName))\n| stats dc(asset) as hits by asset\n| append [\n    search index=network sourcetype=pan:traffic bes_zone=\"ESP\" earliest=-24h action=allowed\n    | eval asset=lower(dest)\n    | stats count by asset\n  ]\n| stats sum(count) as pan_hits max(hits) as win_hits by asset\n| lookup bes_cyber_asset_inventory.csv asset OUTPUT bes_system impact_rating\n| where isnull(bes_system)\n| eval evidence=coalesce(win_hits, pan_hits)\n| sort - evidence\n```\n\nUnderstanding this SPL\n\n**BES Cyber Asset Inventory Reconciliation Against Telemetry (CIP-002-6 R1 Part 1.1)** — We find assets appearing in security telemetry that are missing from the BES Cyber System inventory before auditors do, closing registration gaps that undermine categorization.\n\nDocumented **Data sources**: `index=wineventlog` `sourcetype=WinEventLog:Security`, `index=network` `sourcetype=pan:traffic`, lookup `bes_cyber_asset_inventory.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **asset** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by asset** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by asset** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **evidence** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**BES Cyber Asset Inventory Reconciliation Against Telemetry (CIP-002-6 R1 Part 1.1)** — We find assets appearing in security telemetry that are missing from the BES Cyber System inventory before auditors do, closing registration gaps that undermine categorization.\n\nDocumented **Data sources**: `index=wineventlog` `sourcetype=WinEventLog:Security`, `index=network` `sourcetype=pan:traffic`, lookup `bes_cyber_asset_inventory.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unregistered assets), Bar chart (hits by index), Single value (gap count)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We find assets appearing in security telemetry that are missing from the BES Cyber System inventory before auditors do, closing registration gaps that undermine categorization. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Authentication",
                "Network_Traffic (when CIM-tagged)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-002-6 R1 Part 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-002-6 R1 Part 1.1 is enforced — Splunk UC-22.13.1: BES Cyber Asset Inventory Reconciliation Against Telemetry.",
                  "ea": "Saved search 'UC-22.13.1' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.2",
              "n": "BES Cyber System Impact Rating Drift Detection (CIP-002-6 R2)",
              "c": "high",
              "f": "intermediate",
              "v": "We highlight when operational tags on hosts disagree with the registered high or medium impact rating so impact determinations stay defensible.",
              "t": "Splunk Enterprise Security (263), Splunk OT Security Add-on (5151)",
              "d": "`index=ot_security` asset records, `bes_cyber_system_categorization.csv`",
              "q": "| inputlookup bes_cyber_system_categorization.csv\n| eval key=lower(bes_asset_hostname)\n| join type=left key [\n    search index=ot_security sourcetype=\"ot:asset\" earliest=-24h\n    | eval key=lower(hostname)\n    | stats latest(impact_tag) as observed_impact by key\n  ]\n| where lower(observed_impact)!=lower(impact_rating)\n| table bes_system, impact_rating, observed_impact, key",
              "m": "(1) Populate `impact_tag` from OT Security Add-on or CMDB sync. (2) Tune mapping for staging assets. (3) Monthly governance report. (4) Drive formal CIP-002 review updates when drift persists.",
              "z": "Table (mismatches), Single value (drift count), Column chart (by site)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk OT Security Add-on (5151).\n• Ensure the following data sources are available: `index=ot_security` asset records, `bes_cyber_system_categorization.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate `impact_tag` from OT Security Add-on or CMDB sync. (2) Tune mapping for staging assets. (3) Monthly governance report. (4) Drive formal CIP-002 review updates when drift persists.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup bes_cyber_system_categorization.csv\n| eval key=lower(bes_asset_hostname)\n| join type=left key [\n    search index=ot_security sourcetype=\"ot:asset\" earliest=-24h\n    | eval key=lower(hostname)\n    | stats latest(impact_tag) as observed_impact by key\n  ]\n| where lower(observed_impact)!=lower(impact_rating)\n| table bes_system, impact_rating, observed_impact, key\n```\n\nUnderstanding this SPL\n\n**BES Cyber System Impact Rating Drift Detection (CIP-002-6 R2)** — We highlight when operational tags on hosts disagree with the registered high or medium impact rating so impact determinations stay defensible.\n\nDocumented **Data sources**: `index=ot_security` asset records, `bes_cyber_system_categorization.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk OT Security Add-on (5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where lower(observed_impact)!=lower(impact_rating)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **BES Cyber System Impact Rating Drift Detection (CIP-002-6 R2)**): table bes_system, impact_rating, observed_impact, key\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (mismatches), Single value (drift count), Column chart (by site)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We highlight when operational tags on hosts disagree with the registered high or medium impact rating so impact determinations stay defensible. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-002-6 R2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-002-6 R2 is enforced — Splunk UC-22.13.2: BES Cyber System Impact Rating Drift Detection.",
                  "ea": "Saved search 'UC-22.13.2' running on index=ot_security asset records, bes_cyber_system_categorization.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.3",
              "n": "ESP Boundary Change Detection via Firewall Configuration Events (CIP-002-6 R1 Part 1.2)",
              "c": "critical",
              "f": "advanced",
              "v": "We tie ESP security rule, zone, and interface edits to change records so boundary changes to BES Cyber Systems are visible and attributable.",
              "t": "Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263)",
              "d": "`index=network` `sourcetype=pan:config` OR `sourcetype=pan:system`, `index=itsm` `sourcetype=chg:firewall`",
              "q": "index=network (sourcetype=pan:config OR sourcetype=pan:system) earliest=-7d\n| where match(object,\"(?i)security|zone|interface\") OR match(cmd,\"(?i)commit\")\n| lookup esp_object_inventory object OUTPUT esp_segment_id\n| join type=left max=0 object [\n    search index=itsm sourcetype=chg:firewall earliest=-30d\n    | stats latest(approved) as approved latest(change_id) as change_id by firewall_object\n    | rename firewall_object as object\n  ]\n| where approved!=\"true\" OR isnull(change_id)\n| table _time, admin, object, esp_segment_id, change_id, approved",
              "m": "(1) Forward Panorama or device config logs. (2) Map changed objects to ESP segments in `esp_object_inventory`. (3) Require ITSM linkage for production commits. (4) Weekly compliance dashboard export.",
              "z": "Timeline (changes), Table (unauthorized edits), Bar chart (by ESP segment)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=network` `sourcetype=pan:config` OR `sourcetype=pan:system`, `index=itsm` `sourcetype=chg:firewall`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Forward Panorama or device config logs. (2) Map changed objects to ESP segments in `esp_object_inventory`. (3) Require ITSM linkage for production commits. (4) Weekly compliance dashboard export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network (sourcetype=pan:config OR sourcetype=pan:system) earliest=-7d\n| where match(object,\"(?i)security|zone|interface\") OR match(cmd,\"(?i)commit\")\n| lookup esp_object_inventory object OUTPUT esp_segment_id\n| join type=left max=0 object [\n    search index=itsm sourcetype=chg:firewall earliest=-30d\n    | stats latest(approved) as approved latest(change_id) as change_id by firewall_object\n    | rename firewall_object as object\n  ]\n| where approved!=\"true\" OR isnull(change_id)\n| table _time, admin, object, esp_segment_id, change_id, approved\n```\n\nUnderstanding this SPL\n\n**ESP Boundary Change Detection via Firewall Configuration Events (CIP-002-6 R1 Part 1.2)** — We tie ESP security rule, zone, and interface edits to change records so boundary changes to BES Cyber Systems are visible and attributable.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:config` OR `sourcetype=pan:system`, `index=itsm` `sourcetype=chg:firewall`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:config, pan:system. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=pan:config, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(object,\"(?i)security|zone|interface\") OR match(cmd,\"(?i)commit\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where approved!=\"true\" OR isnull(change_id)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ESP Boundary Change Detection via Firewall Configuration Events (CIP-002-6 R1 Part 1.2)**): table _time, admin, object, esp_segment_id, change_id, approved\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ESP Boundary Change Detection via Firewall Configuration Events (CIP-002-6 R1 Part 1.2)** — We tie ESP security rule, zone, and interface edits to change records so boundary changes to BES Cyber Systems are visible and attributable.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:config` OR `sourcetype=pan:system`, `index=itsm` `sourcetype=chg:firewall`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (changes), Table (unauthorized edits), Bar chart (by ESP segment)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tie ESP security rule, zone, and interface edits to change records so boundary changes to BES Cyber Systems are visible and attributable. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Change (when CIM-tagged)"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-002-6 R1 Part 1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-002-6 R1 Part 1.2 is enforced — Splunk UC-22.13.3: ESP Boundary Change Detection via Firewall Configuration Events.",
                  "ea": "Saved search 'UC-22.13.3' running on index=network sourcetype=pan:config OR sourcetype=pan:system, index=itsm sourcetype=chg:firewall, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.4",
              "n": "Annual CIP-002 Categorization Review Evidence Package (CIP-002-6 R4)",
              "c": "high",
              "f": "intermediate",
              "v": "We timestamp inventory validation searches and signed review workflows so annual categorization review evidence is searchable and complete.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=itsm` `sourcetype=cip:review_ticket`, `index=cip_evidence` `sourcetype=cip:search_evidence`",
              "q": "index=itsm sourcetype=cip:review_ticket review_type=\"CIP-002_Categorization\" earliest=-400d\n| eval signed_epoch=strptime(signed_at,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval annual_due=relative_time(now(),\"-1y@d\")\n| eval signed_ok=if(signed_epoch>=annual_due,1,0)\n| append [\n    search index=cip_evidence sourcetype=cip:search_evidence search_name=\"CIP002_inventory_recon\" earliest=-400d\n    | stats latest(_time) as last_proof by search_name\n  ]\n| table review_type, owner, signed_at, signed_ok, last_proof",
              "m": "(1) Emit HEC when the responsible manager signs the annual review. (2) Schedule the inventory recon search to write `cip:search_evidence` events. (3) Alert if `signed_ok=0` approaching the compliance date. (4) PDF the dashboard for the audit binder.",
              "z": "Single value (review current), Timeline (signatures vs proof runs), Table (owners)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=cip:review_ticket`, `index=cip_evidence` `sourcetype=cip:search_evidence`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Emit HEC when the responsible manager signs the annual review. (2) Schedule the inventory recon search to write `cip:search_evidence` events. (3) Alert if `signed_ok=0` approaching the compliance date. (4) PDF the dashboard for the audit binder.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=cip:review_ticket review_type=\"CIP-002_Categorization\" earliest=-400d\n| eval signed_epoch=strptime(signed_at,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval annual_due=relative_time(now(),\"-1y@d\")\n| eval signed_ok=if(signed_epoch>=annual_due,1,0)\n| append [\n    search index=cip_evidence sourcetype=cip:search_evidence search_name=\"CIP002_inventory_recon\" earliest=-400d\n    | stats latest(_time) as last_proof by search_name\n  ]\n| table review_type, owner, signed_at, signed_ok, last_proof\n```\n\nUnderstanding this SPL\n\n**Annual CIP-002 Categorization Review Evidence Package (CIP-002-6 R4)** — We timestamp inventory validation searches and signed review workflows so annual categorization review evidence is searchable and complete.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=cip:review_ticket`, `index=cip_evidence` `sourcetype=cip:search_evidence`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: cip:review_ticket. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=cip:review_ticket, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **signed_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **annual_due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **signed_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Appends rows from a subsearch with `append`.\n• Pipeline stage (see **Annual CIP-002 Categorization Review Evidence Package (CIP-002-6 R4)**): table review_type, owner, signed_at, signed_ok, last_proof\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (review current), Timeline (signatures vs proof runs), Table (owners)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We timestamp inventory validation searches and signed review workflows so annual categorization review evidence is searchable and complete. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-002-6 R4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-002-6 R4 is enforced — Splunk UC-22.13.4: Annual CIP-002 Categorization Review Evidence Package.",
                  "ea": "Saved search 'UC-22.13.4' running on index=itsm sourcetype=cip:review_ticket, index=cip_evidence sourcetype=cip:search_evidence, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.5",
              "n": "Security Policy Exception Register with Expiration Tracking (CIP-003-8 R1 Part 1.2)",
              "c": "high",
              "f": "intermediate",
              "v": "We prevent policy exceptions from silently exceeding approved dates so security management controls stay within delegated risk acceptance.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=grc` `sourcetype=cip:policy_exception` (exception_id, control_ref, expires_on, status)",
              "q": "index=grc sourcetype=cip:policy_exception status=\"active\" earliest=-120d\n| eval exp_epoch=strptime(expires_on,\"%Y-%m-%d\")\n| eval expired=if(now()>exp_epoch,1,0)\n| eval days_left=round((exp_epoch-now())/86400,1)\n| where expired=1 OR days_left<14\n| table exception_id, control_ref, owner, expires_on, expired, days_left",
              "m": "(1) Ingest the exception register via HEC or DB Connect on change. (2) Map `control_ref` to CIP-003 policies. (3) Alert at fourteen days and on expiration. (4) Require closure events to flip status.",
              "z": "Table (near-expired), Single value (expired count), Time chart (open exceptions)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=grc` `sourcetype=cip:policy_exception` (exception_id, control_ref, expires_on, status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest the exception register via HEC or DB Connect on change. (2) Map `control_ref` to CIP-003 policies. (3) Alert at fourteen days and on expiration. (4) Require closure events to flip status.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=cip:policy_exception status=\"active\" earliest=-120d\n| eval exp_epoch=strptime(expires_on,\"%Y-%m-%d\")\n| eval expired=if(now()>exp_epoch,1,0)\n| eval days_left=round((exp_epoch-now())/86400,1)\n| where expired=1 OR days_left<14\n| table exception_id, control_ref, owner, expires_on, expired, days_left\n```\n\nUnderstanding this SPL\n\n**Security Policy Exception Register with Expiration Tracking (CIP-003-8 R1 Part 1.2)** — We prevent policy exceptions from silently exceeding approved dates so security management controls stay within delegated risk acceptance.\n\nDocumented **Data sources**: `index=grc` `sourcetype=cip:policy_exception` (exception_id, control_ref, expires_on, status). **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: cip:policy_exception. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=cip:policy_exception, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **expired** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where expired=1 OR days_left<14` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Security Policy Exception Register with Expiration Tracking (CIP-003-8 R1 Part 1.2)**): table exception_id, control_ref, owner, expires_on, expired, days_left\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (near-expired), Single value (expired count), Time chart (open exceptions)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We prevent policy exceptions from silently exceeding approved dates so security management controls stay within delegated risk acceptance. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-003-8 R1 Part 1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-003-8 R1 Part 1.2 is enforced — Splunk UC-22.13.5: Security Policy Exception Register with Expiration Tracking.",
                  "ea": "Saved search 'UC-22.13.5' running on index=grc sourcetype=cip:policy_exception (exception_id, control_ref, expires_on, status), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.6",
              "n": "Cyber Security Plan Technical Control Attestation — MFA for Privileged Access (CIP-003-8 R2)",
              "c": "critical",
              "f": "advanced",
              "v": "We compare multifactor authentication signals for privileged BES accounts against the cyber security plan so documented controls match authentication reality.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263)",
              "d": "`index=wineventlog` `EventCode=4624`, lookup `bes_privileged_users.csv`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-24h\n| lookup bes_privileged_users.csv Account_Name AS user OUTPUT requires_mfa bes_system\n| where requires_mfa=\"true\"\n| eval strong_pkg=if(match(AuthenticationPackageName,\"(?i)Negotiate|Kerberos\") AND LogonType IN (\"10\",\"3\"),1,0)\n| stats count as logons sum(strong_pkg) as strong_signals by user, bes_system\n| eval ratio=round(strong_signals/logons,3)\n| where ratio < 0.9",
              "m": "(1) Enrich with IdP-specific fields if forwarded via HEC. (2) Tune `strong_pkg` logic with your MFA architecture. (3) Monthly report to the security working group. (4) Track remediation in ITSM.",
              "z": "Bar chart (ratio by user), Table (exceptions), Single value (users below threshold)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=wineventlog` `EventCode=4624`, lookup `bes_privileged_users.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enrich with IdP-specific fields if forwarded via HEC. (2) Tune `strong_pkg` logic with your MFA architecture. (3) Monthly report to the security working group. (4) Track remediation in ITSM.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-24h\n| lookup bes_privileged_users.csv Account_Name AS user OUTPUT requires_mfa bes_system\n| where requires_mfa=\"true\"\n| eval strong_pkg=if(match(AuthenticationPackageName,\"(?i)Negotiate|Kerberos\") AND LogonType IN (\"10\",\"3\"),1,0)\n| stats count as logons sum(strong_pkg) as strong_signals by user, bes_system\n| eval ratio=round(strong_signals/logons,3)\n| where ratio < 0.9\n```\n\nUnderstanding this SPL\n\n**Cyber Security Plan Technical Control Attestation — MFA for Privileged Access (CIP-003-8 R2)** — We compare multifactor authentication signals for privileged BES accounts against the cyber security plan so documented controls match authentication reality.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode=4624`, lookup `bes_privileged_users.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where requires_mfa=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **strong_pkg** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user, bes_system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ratio < 0.9` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cyber Security Plan Technical Control Attestation — MFA for Privileged Access (CIP-003-8 R2)** — We compare multifactor authentication signals for privileged BES accounts against the cyber security plan so documented controls match authentication reality.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode=4624`, lookup `bes_privileged_users.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (ratio by user), Table (exceptions), Single value (users below threshold)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare multifactor authentication signals for privileged BES accounts against the cyber security plan so documented controls match authentication reality. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-003-8 R2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-003-8 R2 is enforced — Splunk UC-22.13.6: Cyber Security Plan Technical Control Attestation — MFA for Privileged Access.",
                  "ea": "Saved search 'UC-22.13.6' running on index=wineventlog EventCode=4624, lookup bes_privileged_users.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.7",
              "n": "Delegated Authority Register Change History (CIP-003-8 R3)",
              "c": "high",
              "f": "intermediate",
              "v": "We retain an auditable history when delegated authority for cyber security duties is granted or revoked, supporting management control evidence.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=grc` `sourcetype=cip:delegation_register`",
              "q": "index=grc sourcetype=cip:delegation_register earliest=-730d\n| eval eff=strptime(effective_date,\"%Y-%m-%d\")\n| eval rev=if(isnotnull(revoked_date),strptime(revoked_date,\"%Y-%m-%d\"),null())\n| eval active=if(isnull(rev) OR rev>now(),1,0)\n| stats latest(scope) as scope latest(effective_date) as effective latest(revoked_date) as revoked by delegate\n| eval status=if(isnull(revoked) OR revoked=\"\",\"active\",\"revoked\")\n| sort delegate",
              "m": "(1) Push delegation CSV rows append-only through a scripted input or HEC. (2) Align fields with the official register. (3) Quarterly PDF export for legal retention. (4) Correlate revocations with HR events.",
              "z": "Table (delegates), Timeline (effective dates), Single value (active delegates)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=grc` `sourcetype=cip:delegation_register`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Push delegation CSV rows append-only through a scripted input or HEC. (2) Align fields with the official register. (3) Quarterly PDF export for legal retention. (4) Correlate revocations with HR events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=cip:delegation_register earliest=-730d\n| eval eff=strptime(effective_date,\"%Y-%m-%d\")\n| eval rev=if(isnotnull(revoked_date),strptime(revoked_date,\"%Y-%m-%d\"),null())\n| eval active=if(isnull(rev) OR rev>now(),1,0)\n| stats latest(scope) as scope latest(effective_date) as effective latest(revoked_date) as revoked by delegate\n| eval status=if(isnull(revoked) OR revoked=\"\",\"active\",\"revoked\")\n| sort delegate\n```\n\nUnderstanding this SPL\n\n**Delegated Authority Register Change History (CIP-003-8 R3)** — We retain an auditable history when delegated authority for cyber security duties is granted or revoked, supporting management control evidence.\n\nDocumented **Data sources**: `index=grc` `sourcetype=cip:delegation_register`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: cip:delegation_register. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=cip:delegation_register, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eff** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **rev** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **active** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by delegate** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (delegates), Timeline (effective dates), Single value (active delegates)\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
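              "_spl_sketch": "Optional validation sketch (Step 3), assuming the register is pushed at least daily: alert when no `cip:delegation_register` events arrive within 24 hours, so a broken feed is not mistaken for an unchanged register.\n\n```spl\nindex=grc sourcetype=cip:delegation_register earliest=-24h\n| stats count as events_24h\n| where events_24h=0\n```",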
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We retain an auditable history when delegated authority for cyber security duties is granted or revoked, supporting management control evidence. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-003-8 R3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-003-8 R3 is enforced — Splunk UC-22.13.7: Delegated Authority Register Change History.",
                  "ea": "Saved search 'UC-22.13.7' running on index=grc sourcetype=cip:delegation_register, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.8",
              "n": "Low-Impact BES Cyber System Electronic Access Path Enforcement (CIP-003-8 R4)",
              "c": "high",
              "f": "intermediate",
              "v": "We catch remote access sessions to low-impact assets that bypass approved intermediaries or applications, reducing undocumented electronic access paths.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=network` `sourcetype=pan:traffic`, `bes_low_impact_hosts.csv`, `cip003_approved_remote_paths.csv`",
              "q": "index=network sourcetype=pan:traffic action=allowed earliest=-24h\n| lookup bes_low_impact_hosts.csv ip AS dest OUTPUT asset_id impact\n| where impact=\"low\"\n| lookup cip003_approved_remote_paths.csv app src_zone dest_zone OUTPUT approved\n| where isnull(approved)\n| stats sum(bytes_out) as bytes_out count by dest, app, rule, user\n| sort - bytes_out",
              "m": "(1) Maintain low-impact asset IP list from CIP-002 outputs. (2) Define approved app and zone tuples. (3) Daily alert on violations. (4) Feed confirmed exceptions to the CIP-003 exception workflow.",
              "z": "Table (violations), Bar chart (disallowed apps), Single value (distinct assets)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=network` `sourcetype=pan:traffic`, `bes_low_impact_hosts.csv`, `cip003_approved_remote_paths.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain low-impact asset IP list from CIP-002 outputs. (2) Define approved app and zone tuples. (3) Daily alert on violations. (4) Feed confirmed exceptions to the CIP-003 exception workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=pan:traffic action=allowed earliest=-24h\n| lookup bes_low_impact_hosts.csv ip AS dest OUTPUT asset_id impact\n| where impact=\"low\"\n| lookup cip003_approved_remote_paths.csv app src_zone dest_zone OUTPUT approved\n| where isnull(approved)\n| stats sum(bytes_out) as bytes_out count by dest, app, rule, user\n| sort - bytes_out\n```\n\nUnderstanding this SPL\n\n**Low-Impact BES Cyber System Electronic Access Path Enforcement (CIP-003-8 R4)** — We catch remote access sessions to low-impact assets that bypass approved intermediaries or applications, reducing undocumented electronic access paths.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:traffic`, `bes_low_impact_hosts.csv`, `cip003_approved_remote_paths.csv`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=pan:traffic, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where impact=\"low\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by dest, app, rule, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_out) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest, All_Traffic.app | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Low-Impact BES Cyber System Electronic Access Path Enforcement (CIP-003-8 R4)** — We catch remote access sessions to low-impact assets that bypass approved intermediaries or applications, reducing undocumented electronic access paths.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:traffic`, `bes_low_impact_hosts.csv`, `cip003_approved_remote_paths.csv`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Bar chart (disallowed apps), Single value (distinct assets)",
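              "_spl_sketch": "Optional seeding sketch: one way to populate `cip003_approved_remote_paths.csv` with the columns the lookup above matches on (`app`, `src_zone`, `dest_zone`) plus the `approved` flag it returns. The application and zone values are placeholders for illustration; use the tuples from your EAP register.\n\n```spl\n| makeresults\n| eval app=\"ssl\", src_zone=\"corp-vpn\", dest_zone=\"esp-dmz\", approved=\"true\"\n| table app src_zone dest_zone approved\n| outputlookup cip003_approved_remote_paths.csv append=true\n```",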
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We catch remote access sessions to low-impact assets that bypass approved intermediaries or applications, reducing undocumented electronic access paths. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_out) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest, All_Traffic.app | sort - agg_value",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-003-8 R4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-003-8 R4 is enforced — Splunk UC-22.13.8: Low-Impact BES Cyber System Electronic Access Path Enforcement.",
                  "ea": "Saved search 'UC-22.13.8' running on index=network sourcetype=pan:traffic, bes_low_impact_hosts.csv, cip003_approved_remote_paths.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.9",
              "n": "Security Awareness Training Completion for BES Personnel (CIP-004-6 R1 Part 1.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We list personnel with active BES access who have not completed the current awareness module so training evidence is current before audits.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=hr` `sourcetype=lms:completion`, `bes_privileged_users.csv`",
              "q": "| inputlookup bes_privileged_users.csv\n| join type=left employee_id [\n    search index=hr sourcetype=lms:completion course_id=\"CIP_SECURITY_AWARENESS\" earliest=-400d\n    | stats latest(completed_at) as completed_at by employee_id\n  ]\n| eval completed_epoch=strptime(completed_at,\"%Y-%m-%d\")\n| eval stale=if(isnull(completed_epoch) OR completed_epoch<relative_time(now(),\"-350d@d\"),1,0)\n| where stale=1\n| table employee_id, role, completed_at",
              "m": "(1) Sync LMS completions nightly. (2) Align `employee_id` with directory accounts. (3) Alert managers fourteen days before the annual due wave. (4) Export CSV for the training coordinator.",
              "z": "Table (incomplete), Single value (percent complete), Column chart (by department)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=hr` `sourcetype=lms:completion`, `bes_privileged_users.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Sync LMS completions nightly. (2) Align `employee_id` with directory accounts. (3) Alert managers fourteen days before the annual due wave. (4) Export CSV for the training coordinator.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup bes_privileged_users.csv\n| join type=left employee_id [\n    search index=hr sourcetype=lms:completion course_id=\"CIP_SECURITY_AWARENESS\" earliest=-400d\n    | stats latest(completed_at) as completed_at by employee_id\n  ]\n| eval completed_epoch=strptime(completed_at,\"%Y-%m-%d\")\n| eval stale=if(isnull(completed_epoch) OR completed_epoch<relative_time(now(),\"-350d@d\"),1,0)\n| where stale=1\n| table employee_id, role, completed_at\n```\n\nUnderstanding this SPL\n\n**Security Awareness Training Completion for BES Personnel (CIP-004-6 R1 Part 1.1)** — We list personnel with active BES access who have not completed the current awareness module so training evidence is current before audits.\n\nDocumented **Data sources**: `index=hr` `sourcetype=lms:completion`, `bes_privileged_users.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **completed_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **stale** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where stale=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Security Awareness Training Completion for BES Personnel (CIP-004-6 R1 Part 1.1)**): table employee_id, role, completed_at\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (incomplete), Single value (percent complete), Column chart (by department)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We list personnel with active BES access who have not completed the current awareness module so training evidence is current before audits. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-004-6 R1 Part 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-004-6 R1 Part 1.1 is enforced — Splunk UC-22.13.9: Security Awareness Training Completion for BES Personnel.",
                  "ea": "Saved search 'UC-22.13.9' running on index=hr sourcetype=lms:completion, bes_privileged_users.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.10",
              "n": "Personnel Risk Assessment Due-Date Monitoring (CIP-004-6 R2)",
              "c": "high",
              "f": "intermediate",
              "v": "We highlight access holders whose personnel risk assessment is overdue while they still touch BES systems, reducing stale risk decisions.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=hr` `sourcetype=hr:pra`, `index=wineventlog` privileged logons",
              "q": "index=hr sourcetype=hr:pra earliest=-730d\n| eval next_epoch=strptime(next_due,\"%Y-%m-%d\")\n| rename employee_id as user\n| join type=left user [\n    search index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-30d\n    | lookup bes_privileged_users.csv Account_Name AS user OUTPUT employee_id\n    | stats dc(ComputerName) as systems by user\n  ]\n| where next_epoch < now() OR isnull(next_due)\n| table user, assessment_date, next_due, systems",
              "m": "(1) Ingest PRA due dates from HR or GRC. (2) Correlate with active privileged users only. (3) Auto-ticket overdue users with recent BES logons. (4) Quarterly leadership rollup.",
              "z": "Table (overdue), Timeline (assessment dates), Single value (overdue count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=hr` `sourcetype=hr:pra`, `index=wineventlog` privileged logons.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest PRA due dates from HR or GRC. (2) Correlate with active privileged users only. (3) Auto-ticket overdue users with recent BES logons. (4) Quarterly leadership rollup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=hr:pra earliest=-730d\n| eval next_epoch=strptime(next_due,\"%Y-%m-%d\")\n| rename employee_id as user\n| join type=left user [\n    search index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-30d\n    | lookup bes_privileged_users.csv Account_Name AS user OUTPUT employee_id\n    | stats dc(ComputerName) as systems by user\n  ]\n| where next_epoch < now() OR isnull(next_due)\n| table user, assessment_date, next_due, systems\n```\n\nUnderstanding this SPL\n\n**Personnel Risk Assessment Due-Date Monitoring (CIP-004-6 R2)** — We highlight access holders whose personnel risk assessment is overdue while they still touch BES systems, reducing stale risk decisions.\n\nDocumented **Data sources**: `index=hr` `sourcetype=hr:pra`, `index=wineventlog` privileged logons. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: hr:pra. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=hr:pra, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **next_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where next_epoch < now() OR isnull(next_due)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Personnel Risk Assessment Due-Date Monitoring (CIP-004-6 R2)**): table user, assessment_date, next_due, systems\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue), Timeline (assessment dates), Single value (overdue count)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We highlight access holders whose personnel risk assessment is overdue while they still touch BES systems, reducing stale risk decisions. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-004-6 R2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-004-6 R2 is enforced — Splunk UC-22.13.10: Personnel Risk Assessment Due-Date Monitoring.",
                  "ea": "Saved search 'UC-22.13.10' running on index=hr sourcetype=hr:pra, index=wineventlog privileged logons, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.11",
              "n": "Electronic Access Authorization Record Coverage for PAM Sessions (CIP-004-6 R3)",
              "c": "critical",
              "f": "intermediate",
              "v": "We verify CyberArk sessions target systems that have a valid, unexpired access authorization on file for the user.",
              "t": "Splunk Add-on for CyberArk (4295)",
              "d": "`index=pam` `sourcetype=cyberark:ptalog`, `cip004_access_authorizations.csv`",
              "q": "index=pam sourcetype=cyberark:ptalog action=\"Logon\" earliest=-24h\n| lookup cip004_access_authorizations.csv user, address OUTPUT auth_id expires\n| eval exp_epoch=strptime(expires,\"%Y-%m-%d\")\n| where isnull(auth_id) OR now()>exp_epoch\n| stats count by user, address, safe\n| sort - count",
              "m": "(1) Normalize `address` to the same key as the authorization register. (2) Nightly reconciliation job. (3) Feed hits to access governance. (4) Attach export to quarterly access review.",
              "z": "Table (missing or expired auth), Bar chart (by safe), Single value (sessions)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CyberArk (4295).\n• Ensure the following data sources are available: `index=pam` `sourcetype=cyberark:ptalog`, `cip004_access_authorizations.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize `address` to the same key as the authorization register. (2) Nightly reconciliation job. (3) Feed hits to access governance. (4) Attach export to quarterly access review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=cyberark:ptalog action=\"Logon\" earliest=-24h\n| lookup cip004_access_authorizations.csv user, address OUTPUT auth_id expires\n| eval exp_epoch=strptime(expires,\"%Y-%m-%d\")\n| where isnull(auth_id) OR now()>exp_epoch\n| stats count by user, address, safe\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Electronic Access Authorization Record Coverage for PAM Sessions (CIP-004-6 R3)** — We verify CyberArk sessions target systems that have a valid, unexpired access authorization on file for the user.\n\nDocumented **Data sources**: `index=pam` `sourcetype=cyberark:ptalog`, `cip004_access_authorizations.csv`. **App/TA** (typical add-on context): Splunk Add-on for CyberArk (4295). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:ptalog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=cyberark:ptalog, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(auth_id) OR now()>exp_epoch` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, address, safe** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing or expired auth), Bar chart (by safe), Single value (sessions)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We verify CyberArk sessions target systems that have a valid, unexpired access authorization on file for the user. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cyberark"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-004-6 R3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-004-6 R3 is enforced — Splunk UC-22.13.11: Electronic Access Authorization Record Coverage for PAM Sessions.",
                  "ea": "Saved search 'UC-22.13.11' running on index=pam sourcetype=cyberark:ptalog, cip004_access_authorizations.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.12",
              "n": "Post-Termination Access Revocation within 24 Hours (CIP-004-6 R4 Part 4.2)",
              "c": "critical",
              "f": "advanced",
              "v": "We measure hours from HR termination timestamp to Active Directory account disable for BES personnel, evidencing the one-day removal expectation.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=hr` `sourcetype=hr:termination`, `index=wineventlog` `EventCode=4726`",
              "q": "index=hr sourcetype=hr:termination earliest=-14d\n| eval term_epoch=strptime(term_time,\"%Y-%m-%d %H:%M:%S\")\n| rename employee_id as user\n| join type=left user [\n    search index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4726 earliest=-14d\n    | stats earliest(_time) as disable_epoch by TargetUserName\n    | rename TargetUserName as user\n  ]\n| eval delta_h=round((disable_epoch-term_epoch)/3600,2)\n| where isnull(disable_epoch) OR delta_h>24\n| table user, term_time, disable_epoch, delta_h",
              "m": "(1) Ensure 4726 from all in-scope domains. (2) Normalize time zones to UTC. (3) Page IAM when SLA missed. (4) Optionally join CyberArk disable events as supplemental proof.",
              "z": "Table (SLA misses), Column chart (delta hours), Single value (miss count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=hr` `sourcetype=hr:termination`, `index=wineventlog` `EventCode=4726`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure 4726 from all in-scope domains. (2) Normalize time zones to UTC. (3) Page IAM when SLA missed. (4) Optionally join CyberArk disable events as supplemental proof.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=hr:termination earliest=-14d\n| eval term_epoch=strptime(term_time,\"%Y-%m-%d %H:%M:%S\")\n| rename employee_id as user\n| join type=left user [\n    search index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4726 earliest=-14d\n    | stats earliest(_time) as disable_epoch by TargetUserName\n    | rename TargetUserName as user\n  ]\n| eval delta_h=round((disable_epoch-term_epoch)/3600,2)\n| where isnull(disable_epoch) OR delta_h>24\n| table user, term_time, disable_epoch, delta_h\n```\n\nUnderstanding this SPL\n\n**Post-Termination Access Revocation within 24 Hours (CIP-004-6 R4 Part 4.2)** — We measure hours from HR termination timestamp to Active Directory account disable for BES personnel, evidencing the one-day removal expectation.\n\nDocumented **Data sources**: `index=hr` `sourcetype=hr:termination`, `index=wineventlog` `EventCode=4726`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: hr:termination. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=hr:termination, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **term_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **delta_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(disable_epoch) OR delta_h>24` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Post-Termination Access Revocation within 24 Hours (CIP-004-6 R4 Part 4.2)**): table user, term_time, disable_epoch, delta_h\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Post-Termination Access Revocation within 24 Hours (CIP-004-6 R4 Part 4.2)** — We measure hours from HR termination timestamp to Active Directory account disable for BES personnel, evidencing the one-day removal expectation.\n\nDocumented **Data sources**: `index=hr` `sourcetype=hr:termination`, `index=wineventlog` `EventCode=4726`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (SLA misses), Column chart (delta hours), Single value (miss count)",
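              "_spl_sketch": "Optional reporting sketch: the documented join without the miss filter, rolled up into a 90-day compliance rate for leadership reporting; field names match the main search.\n\n```spl\nindex=hr sourcetype=hr:termination earliest=-90d\n| eval term_epoch=strptime(term_time,\"%Y-%m-%d %H:%M:%S\")\n| rename employee_id as user\n| join type=left user [\n    search index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4726 earliest=-90d\n    | stats earliest(_time) as disable_epoch by TargetUserName\n    | rename TargetUserName as user\n  ]\n| eval delta_h=round((disable_epoch-term_epoch)/3600,2)\n| eval sla_met=if(isnotnull(disable_epoch) AND delta_h<=24,1,0)\n| stats count as terminations sum(sla_met) as within_sla\n| eval pct_within_sla=round(100*within_sla/terminations,1)\n```",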
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measure hours from HR termination timestamp to Active Directory account disable for BES personnel, evidencing the one-day removal expectation. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-004-6 R4 Part 4.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-004-6 R4 Part 4.2 is enforced — Splunk UC-22.13.12: Post-Termination Access Revocation within 24 Hours.",
                  "ea": "Saved search 'UC-22.13.12' running on index=hr sourcetype=hr:termination, index=wineventlog EventCode=4726, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.13",
              "n": "Quarterly Access Review Dataset — Interactive Sessions by System (CIP-004-6 R5)",
              "c": "high",
              "f": "intermediate",
              "v": "We assemble quarterly interactive access by user and BES asset to accelerate access certification and attestation.",
              "t": "Splunk Add-on for CyberArk (4295), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=pam` `sourcetype=cyberark:ptalog`, `bes_cyber_asset_inventory.csv`",
              "q": "index=pam sourcetype=cyberark:ptalog action=\"Logon\" earliest=-90d@d latest=@d\n| lookup bes_cyber_asset_inventory.csv address AS address OUTPUT bes_system impact_rating\n| where isnotnull(bes_system)\n| stats earliest(_time) as first latest(_time) as last dc(address) as targets by user, bes_system, impact_rating\n| sort impact_rating, user",
              "m": "(1) Define quarter macros for reproducible windows. (2) Export to GRC certification tool. (3) Restrict index ACLs. (4) Store signed attestation back via HEC if desired.",
              "z": "Table (matrix), Pivot (counts), Bar chart (top users)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CyberArk (4295), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=pam` `sourcetype=cyberark:ptalog`, `bes_cyber_asset_inventory.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define quarter macros for reproducible windows. (2) Export to GRC certification tool. (3) Restrict index ACLs. (4) Store signed attestation back via HEC if desired.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=cyberark:ptalog action=\"Logon\" earliest=-90d@d latest=@d\n| lookup bes_cyber_asset_inventory.csv address AS address OUTPUT bes_system impact_rating\n| where isnotnull(bes_system)\n| stats earliest(_time) as first latest(_time) as last dc(address) as targets by user, bes_system, impact_rating\n| sort impact_rating, user\n```\n\nUnderstanding this SPL\n\n**Quarterly Access Review Dataset — Interactive Sessions by System (CIP-004-6 R5)** — We assemble quarterly interactive access by user and BES asset to accelerate access certification and attestation.\n\nDocumented **Data sources**: `index=pam` `sourcetype=cyberark:ptalog`, `bes_cyber_asset_inventory.csv`. **App/TA** (typical add-on context): Splunk Add-on for CyberArk (4295), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:ptalog. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=cyberark:ptalog, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, bes_system, impact_rating** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (matrix), Pivot (counts), Bar chart (top users)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We assemble quarterly interactive access by user and BES asset to accelerate access certification and attestation. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cyberark"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-004-6 R5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-004-6 R5 is enforced — Splunk UC-22.13.13: Quarterly Access Review Dataset — Interactive Sessions by System.",
                  "ea": "Saved search 'UC-22.13.13' running on index=pam sourcetype=cyberark:ptalog, bes_cyber_asset_inventory.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.14",
              "n": "Background Investigation Recency for ESP and PAM Users (CIP-004-6 R6)",
              "c": "high",
              "f": "intermediate",
              "v": "We flag active PAM users whose background investigation date exceeds policy limits so personnel suitability evidence stays current.",
              "t": "Splunk Add-on for CyberArk (4295)",
              "d": "`index=hr` `sourcetype=hr:background`, `index=pam` CyberArk logons",
              "q": "| inputlookup esp_privileged_personnel.csv\n| lookup hr_background employee_id OUTPUT investigation_date\n| eval inv_epoch=strptime(investigation_date,\"%Y-%m-%d\")\n| eval age_days=round((now()-inv_epoch)/86400,0)\n| join type=left employee_id [\n    search index=pam sourcetype=cyberark:ptalog action=\"Logon\" earliest=-30d\n    | stats latest(_time) as last_pam by user\n    | rename user as employee_id\n  ]\n| where age_days>2555 AND isnotnull(last_pam)\n| table employee_id, investigation_date, age_days, last_pam",
              "m": "(1) Replace `2555` with your policy threshold lookup. (2) Weekly sync from HR. (3) Suspend access per procedure when triggered. (4) Document waivers in CIP-003 exceptions.",
              "z": "Table (stale investigations), Single value (count), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CyberArk (4295).\n• Ensure the following data sources are available: `index=hr` `sourcetype=hr:background`, `index=pam` CyberArk logons.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Replace `2555` with your policy threshold lookup. (2) Weekly sync from HR. (3) Suspend access per procedure when triggered. (4) Document waivers in CIP-003 exceptions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup esp_privileged_personnel.csv\n| lookup hr_background employee_id OUTPUT investigation_date\n| eval inv_epoch=strptime(investigation_date,\"%Y-%m-%d\")\n| eval age_days=round((now()-inv_epoch)/86400,0)\n| join type=left employee_id [\n    search index=pam sourcetype=cyberark:ptalog action=\"Logon\" earliest=-30d\n    | stats latest(_time) as last_pam by user\n    | rename user as employee_id\n  ]\n| where age_days>2555 AND isnotnull(last_pam)\n| table employee_id, investigation_date, age_days, last_pam\n```\n\nUnderstanding this SPL\n\n**Background Investigation Recency for ESP and PAM Users (CIP-004-6 R6)** — We flag active PAM users whose background investigation date exceeds policy limits so personnel suitability evidence stays current.\n\nDocumented **Data sources**: `index=hr` `sourcetype=hr:background`, `index=pam` CyberArk logons. **App/TA** (typical add-on context): Splunk Add-on for CyberArk (4295). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **inv_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where age_days>2555 AND isnotnull(last_pam)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Background Investigation Recency for ESP and PAM Users (CIP-004-6 R6)**): table employee_id, investigation_date, age_days, last_pam\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale investigations), Single value (count), Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We flag active PAM users whose background investigation date exceeds policy limits so personnel suitability evidence stays current. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "cyberark"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-004-6 R6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-004-6 R6 is enforced — Splunk UC-22.13.14: Background Investigation Recency for ESP and PAM Users.",
                  "ea": "Saved search 'UC-22.13.14' running on index=hr sourcetype=hr:background, index=pam CyberArk logons, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.15",
              "n": "ESP Inbound First-Seen External Source Detection (CIP-005-6 R1 Part 1.3)",
              "c": "critical",
              "f": "advanced",
              "v": "We detect new external sources allowed through ESP entry rules so undocumented inbound paths are reviewed before they become normal traffic.",
              "t": "Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263)",
              "d": "`index=network` `sourcetype=pan:traffic`, lookup `esp_inbound_baseline.csv`",
              "q": "index=network sourcetype=pan:traffic dest_zone=\"ESP_DMZ\" action=allowed earliest=-24h\n| eval ext_src=if(cidrmatch(\"10.0.0.0/8\",src) OR cidrmatch(\"172.16.0.0/12\",src) OR cidrmatch(\"192.168.0.0/16\",src),null(),src)\n| where isnotnull(ext_src)\n| lookup esp_inbound_baseline.csv ext_src AS ext_src OUTPUT approved_vendor\n| where isnull(approved_vendor)\n| stats earliest(_time) as first_seen values(dest) as targets by ext_src, rule\n| sort - first_seen",
              "m": "(1) Seed `esp_inbound_baseline.csv` with known vendor ranges. (2) Promote reviewed sources into the baseline. (3) Create ES notable for first-seen. (4) Map rules to EAP register fields.",
              "z": "Table (new sources), Map (geo), Time chart (first-seen rate)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=network` `sourcetype=pan:traffic`, lookup `esp_inbound_baseline.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Seed `esp_inbound_baseline.csv` with known vendor ranges. (2) Promote reviewed sources into the baseline. (3) Create ES notable for first-seen. (4) Map rules to EAP register fields.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=pan:traffic dest_zone=\"ESP_DMZ\" action=allowed earliest=-24h\n| eval ext_src=if(cidrmatch(\"10.0.0.0/8\",src) OR cidrmatch(\"172.16.0.0/12\",src) OR cidrmatch(\"192.168.0.0/16\",src),null(),src)\n| where isnotnull(ext_src)\n| lookup esp_inbound_baseline.csv ext_src AS ext_src OUTPUT approved_vendor\n| where isnull(approved_vendor)\n| stats earliest(_time) as first_seen values(dest) as targets by ext_src, rule\n| sort - first_seen\n```\n\nUnderstanding this SPL\n\n**ESP Inbound First-Seen External Source Detection (CIP-005-6 R1 Part 1.3)** — We detect new external sources allowed through ESP entry rules so undocumented inbound paths are reviewed before they become normal traffic.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:traffic`, lookup `esp_inbound_baseline.csv`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=pan:traffic, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ext_src** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(ext_src)` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved_vendor)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ext_src, rule** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ESP Inbound First-Seen External Source Detection (CIP-005-6 R1 Part 1.3)** — We detect new external sources allowed through ESP entry rules so undocumented inbound paths are reviewed before they become normal traffic.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:traffic`, lookup `esp_inbound_baseline.csv`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (new sources), Map (geo), Time chart (first-seen rate)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect new external sources allowed through ESP entry rules so undocumented inbound paths are reviewed before they become normal traffic. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-6 R1 Part 1.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-005-6 R1 Part 1.3 is enforced — Splunk UC-22.13.15: ESP Inbound First-Seen External Source Detection.",
                  "ea": "Saved search 'UC-22.13.15' running on index=network sourcetype=pan:traffic, lookup esp_inbound_baseline.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.16",
              "n": "ESP Outbound Bytes Spike versus Rolling Baseline (CIP-005-6 R2 Part 2.1)",
              "c": "high",
              "f": "advanced",
              "v": "We surface unusual outbound volume from ESP-internal segments that may indicate data exfiltration or misconfigured services.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=network` `sourcetype=pan:traffic` (bytes_out, src, dest, src_zone)",
              "q": "index=network sourcetype=pan:traffic src_zone=\"ESP_INTERNAL\" action=allowed earliest=-30d\n| eval internal_dest=cidrmatch(\"10.0.0.0/8\",dest) OR cidrmatch(\"172.16.0.0/12\",dest) OR cidrmatch(\"192.168.0.0/16\",dest)\n| where internal_dest=0\n| bin _time span=1d\n| stats sum(bytes_out) as b_out by src, dest, _time\n| eventstats median(b_out) as med by src, dest\n| eval spike=if(b_out>med*8 AND med>0,1,0)\n| where spike=1\n| table _time, src, dest, b_out, med",
              "m": "(1) Tune multiplier and internal CIDRs. (2) Exclude CDN and update destinations via lookup. (3) Tag BES Cyber Assets on `src`. (4) Pair with PCAP workflow if available.",
              "z": "Time chart (bytes), Table (spikes), Single value (spike count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=network` `sourcetype=pan:traffic` (bytes_out, src, dest, src_zone).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune multiplier and internal CIDRs. (2) Exclude CDN and update destinations via lookup. (3) Tag BES Cyber Assets on `src`. (4) Pair with PCAP workflow if available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=pan:traffic src_zone=\"ESP_INTERNAL\" action=allowed earliest=-30d\n| eval internal_dest=cidrmatch(\"10.0.0.0/8\",dest) OR cidrmatch(\"172.16.0.0/12\",dest) OR cidrmatch(\"192.168.0.0/16\",dest)\n| where internal_dest=0\n| bin _time span=1d\n| stats sum(bytes_out) as b_out by src, dest, _time\n| eventstats median(b_out) as med by src, dest\n| eval spike=if(b_out>med*8 AND med>0,1,0)\n| where spike=1\n| table _time, src, dest, b_out, med\n```\n\nUnderstanding this SPL\n\n**ESP Outbound Bytes Spike versus Rolling Baseline (CIP-005-6 R2 Part 2.1)** — We surface unusual outbound volume from ESP-internal segments that may indicate data exfiltration or misconfigured services.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:traffic` (bytes_out, src, dest, src_zone). **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=pan:traffic, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **internal_dest** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where internal_dest=0` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by src, dest, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by src, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **spike** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where spike=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ESP Outbound Bytes Spike versus Rolling Baseline (CIP-005-6 R2 Part 2.1)**): table _time, src, dest, b_out, med\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_out) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest span=1d | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ESP Outbound Bytes Spike versus Rolling Baseline (CIP-005-6 R2 Part 2.1)** — We surface unusual outbound volume from ESP-internal segments that may indicate data exfiltration or misconfigured services.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:traffic` (bytes_out, src, dest, src_zone). **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (bytes), Table (spikes), Single value (spike count)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface unusual outbound volume from ESP-internal segments that may indicate data exfiltration or misconfigured services. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_out) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest span=1d | sort - agg_value",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-6 R2 Part 2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-005-6 R2 Part 2.1 is enforced — Splunk UC-22.13.16: ESP Outbound Bytes Spike versus Rolling Baseline.",
                  "ea": "Saved search 'UC-22.13.16' running on index=network sourcetype=pan:traffic (bytes_out, src, dest, src_zone), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.17",
              "n": "Denied ESP Traversal Bursts Followed by Allowed Sessions (CIP-005-6 R2 Part 2.3)",
              "c": "critical",
              "f": "advanced",
              "v": "We correlate dense deny activity at the ESP with subsequent allows to prioritize perimeter probing that may indicate misconfiguration or attack.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=network` `sourcetype=pan:traffic`",
              "q": "index=network sourcetype=pan:traffic dest_zone=\"ESP_PERIMETER\" earliest=-4h\n| eval outcome=if(action=\"deny\",\"deny\",\"allow\")\n| bin _time span=5m\n| stats sum(eval(if(outcome==\"deny\",1,0))) as denies sum(eval(if(outcome==\"allow\",1,0))) as allows by src, _time\n| where denies>40 AND allows>0\n| sort - denies",
              "m": "(1) Calibrate deny thresholds per ESP. (2) Enrich with threat log if separate sourcetype. (3) Feed ES correlation. (4) Document benign scanner ranges in lookup.",
              "z": "Timeline (deny vs allow), Table (src clusters), Single value (burst windows)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=network` `sourcetype=pan:traffic`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Calibrate deny thresholds per ESP. (2) Enrich with threat log if separate sourcetype. (3) Feed ES correlation. (4) Document benign scanner ranges in lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=pan:traffic dest_zone=\"ESP_PERIMETER\" earliest=-4h\n| eval outcome=if(action=\"deny\",\"deny\",\"allow\")\n| bin _time span=5m\n| stats sum(eval(if(outcome==\"deny\",1,0))) as denies sum(eval(if(outcome==\"allow\",1,0))) as allows by src, _time\n| where denies>40 AND allows>0\n| sort - denies\n```\n\nUnderstanding this SPL\n\n**Denied ESP Traversal Bursts Followed by Allowed Sessions (CIP-005-6 R2 Part 2.3)** — We correlate dense deny activity at the ESP with subsequent allows to prioritize perimeter probing that may indicate misconfiguration or attack.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:traffic`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=pan:traffic, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **outcome** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by src, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where denies>40 AND allows>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src span=5m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Denied ESP Traversal Bursts Followed by Allowed Sessions (CIP-005-6 R2 Part 2.3)** — We correlate dense deny activity at the ESP with subsequent allows to prioritize perimeter probing that may indicate misconfiguration or attack.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:traffic`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (deny vs allow), Table (src clusters), Single value (burst windows)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We correlate dense deny activity at the ESP with subsequent allows to prioritize perimeter probing that may indicate misconfiguration or attack. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src span=5m | sort - agg_value",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-6 R2 Part 2.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-005-6 R2 Part 2.3 is enforced — Splunk UC-22.13.17: Denied ESP Traversal Bursts Followed by Allowed Sessions.",
                  "ea": "Saved search 'UC-22.13.17' running on index=network sourcetype=pan:traffic, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.18",
              "n": "Interactive Remote Access Session Duration on Jump Hosts (CIP-005-6 R3)",
              "c": "high",
              "f": "intermediate",
              "v": "We track RDP or vendor remote sessions through registered jump hosts so interactive remote access is visible for ESP monitoring.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295)",
              "d": "`index=wineventlog` `EventCode` 4624/4634, `bes_jump_hosts.csv`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4634) LogonType=10 earliest=-24h\n| lookup bes_jump_hosts.csv ComputerName OUTPUT jump_id esp_segment\n| where isnotnull(jump_id)\n| transaction Account_Name maxspan=7200 startswith=(EventCode=4624) endswith=(EventCode=4634)\n| eval duration_min=round(duration/60,2)\n| table _time, Account_Name, ComputerName, esp_segment, duration_min",
              "m": "(1) Maintain jump host list aligned with CIP-005 EAP documentation. (2) Ensure logoff events are audited. (3) Cross-check CyberArk session metadata. (4) Alert on sessions exceeding policy without active work order.",
              "z": "Histogram (duration), Table (sessions), Single value (sessions over threshold)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295).\n• Ensure the following data sources are available: `index=wineventlog` `EventCode` 4624/4634, `bes_jump_hosts.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain jump host list aligned with CIP-005 EAP documentation. (2) Ensure logoff events are audited. (3) Cross-check CyberArk session metadata. (4) Alert on sessions exceeding policy without active work order.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4634) LogonType=10 earliest=-24h\n| lookup bes_jump_hosts.csv ComputerName OUTPUT jump_id esp_segment\n| where isnotnull(jump_id)\n| transaction Account_Name maxspan=7200 startswith=(EventCode=4624) endswith=(EventCode=4634)\n| eval duration_min=round(duration/60,2)\n| table _time, Account_Name, ComputerName, esp_segment, duration_min\n```\n\nUnderstanding this SPL\n\n**Interactive Remote Access Session Duration on Jump Hosts (CIP-005-6 R3)** — We track RDP or vendor remote sessions through registered jump hosts so interactive remote access is visible for ESP monitoring.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode` 4624/4634, `bes_jump_hosts.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(jump_id)` — typically the threshold or rule expression for this monitoring goal.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Interactive Remote Access Session Duration on Jump Hosts (CIP-005-6 R3)**): table _time, Account_Name, ComputerName, esp_segment, duration_min\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Interactive Remote Access Session Duration on Jump Hosts (CIP-005-6 R3)** — We track RDP or vendor remote sessions through registered jump hosts so interactive remote access is visible for ESP monitoring.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode` 4624/4634, `bes_jump_hosts.csv`. 
**App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (duration), Table (sessions), Single value (sessions over threshold)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track RDP or vendor remote sessions through registered jump hosts so interactive remote access is visible for ESP monitoring. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-6 R3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-005-6 R3 is enforced — Splunk UC-22.13.18: Interactive Remote Access Session Duration on Jump Hosts.",
                  "ea": "Saved search 'UC-22.13.18' running on index=wineventlog EventCode 4624/4634, bes_jump_hosts.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.19",
              "n": "EACMS and IRA Authentication Failure Concentration (CIP-005-6 R4 Part 4.2)",
              "c": "critical",
              "f": "intermediate",
              "v": "We detect brute-force patterns against Electronic Access Control and Monitoring Systems that gate ESP remote access.",
              "t": "Splunk Add-on for Unix and Linux (833), Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=network` `sourcetype=eacms:auth`, `index=os` `sourcetype=linux:secure`",
              "q": "(index=network sourcetype=eacms:auth result=\"failure\") OR (index=os sourcetype=linux:secure \"Failed password\")\nearliest=-24h\n| eval user=coalesce(user, account)\n| lookup eacms_asset_map host OUTPUT eacms_id\n| stats count by user, src_ip, host, eacms_id\n| where count>25\n| sort - count",
              "m": "(1) Normalize vendor syslog into `eacms:auth`. (2) Distinguish IRA accounts from vendors. (3) Integrate with ES adaptive response. (4) Retain raw events per records retention.",
              "z": "Table (top sources), Time chart (failures), Single value (locked-out systems)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Unix and Linux (833), Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=network` `sourcetype=eacms:auth`, `index=os` `sourcetype=linux:secure`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize vendor syslog into `eacms:auth`. (2) Distinguish IRA accounts from vendors. (3) Integrate with ES adaptive response. (4) Retain raw events per records retention.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=network sourcetype=eacms:auth result=\"failure\") OR (index=os sourcetype=linux:secure \"Failed password\")\nearliest=-24h\n| eval user=coalesce(user, account)\n| lookup eacms_asset_map host OUTPUT eacms_id\n| stats count by user, src_ip, host, eacms_id\n| where count>25\n| sort - count\n```\n\nUnderstanding this SPL\n\n**EACMS and IRA Authentication Failure Concentration (CIP-005-6 R4 Part 4.2)** — We detect brute-force patterns against Electronic Access Control and Monitoring Systems that gate ESP remote access.\n\nDocumented **Data sources**: `index=network` `sourcetype=eacms:auth`, `index=os` `sourcetype=linux:secure`. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (833), Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network, os; **sourcetype**: eacms:auth, linux:secure. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, index=os, sourcetype=eacms:auth, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by user, src_ip, host, eacms_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>25` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src, Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**EACMS and IRA Authentication Failure Concentration (CIP-005-6 R4 Part 4.2)** — We detect brute-force patterns against Electronic Access Control and Monitoring Systems that gate ESP remote access.\n\nDocumented **Data sources**: `index=network` `sourcetype=eacms:auth`, `index=os` `sourcetype=linux:secure`. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (833), Splunk Add-on for Palo Alto Networks (2757). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top sources), Time chart (failures), Single value (locked-out systems)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect brute-force patterns against Electronic Access Control and Monitoring Systems that gate ESP remote access. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src, Authentication.dest | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-6 R4 Part 4.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-005-6 R4 Part 4.2 is enforced — Splunk UC-22.13.19: EACMS and IRA Authentication Failure Concentration.",
                  "ea": "Saved search 'UC-22.13.19' running on index=network sourcetype=eacms:auth, index=os sourcetype=linux:secure, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.20",
              "n": "ESP Security Rule Commit Without Approved Change Record (CIP-005-6 R1 Part 1.4)",
              "c": "critical",
              "f": "intermediate",
              "v": "We flag firewall commits affecting ESP rules when no linked change ticket exists, supporting controlled configuration of the security perimeter.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=network` `sourcetype=pan:config`, `index=itsm` `sourcetype=chg:firewall`",
              "q": "index=network sourcetype=pan:config earliest=-7d\n| where match(cmd,\"(?i)commit\")\n| join type=left max=0 admin [\n    search index=itsm sourcetype=chg:firewall earliest=-30d\n    | stats latest(change_id) as change_id by submitter\n    | rename submitter as admin\n  ]\n| where isnull(change_id)\n| table _time, admin, object, cmd, change_id",
              "m": "(1) Map Panorama admin IDs to ITSM submitter IDs. (2) Require change_id in commit messages where supported. (3) Weekly governance report. (4) Escalate repeat offenders.",
              "z": "Table (unapproved commits), Timeline, Bar chart (by admin)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=network` `sourcetype=pan:config`, `index=itsm` `sourcetype=chg:firewall`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map Panorama admin IDs to ITSM submitter IDs. (2) Require change_id in commit messages where supported. (3) Weekly governance report. (4) Escalate repeat offenders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=pan:config earliest=-7d\n| where match(cmd,\"(?i)commit\")\n| join type=left max=0 admin [\n    search index=itsm sourcetype=chg:firewall earliest=-30d\n    | stats latest(change_id) as change_id by submitter\n    | rename submitter as admin\n  ]\n| where isnull(change_id)\n| table _time, admin, object, cmd, change_id\n```\n\nUnderstanding this SPL\n\n**ESP Security Rule Commit Without Approved Change Record (CIP-005-6 R1 Part 1.4)** — We flag firewall commits affecting ESP rules when no linked change ticket exists, supporting controlled configuration of the security perimeter.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:config`, `index=itsm` `sourcetype=chg:firewall`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:config. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=pan:config, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(cmd,\"(?i)commit\")` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(change_id)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ESP Security Rule Commit Without Approved Change Record (CIP-005-6 R1 Part 1.4)**): table _time, admin, object, cmd, change_id\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ESP Security Rule Commit Without Approved Change Record (CIP-005-6 R1 Part 1.4)** — We flag firewall commits affecting ESP rules when no linked change ticket exists, supporting controlled configuration of the security perimeter.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:config`, `index=itsm` `sourcetype=chg:firewall`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unapproved commits), Timeline, Bar chart (by admin)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We flag firewall commits affecting ESP rules when no linked change ticket exists, supporting controlled configuration of the security perimeter. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-6 R1 Part 1.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-005-6 R1 Part 1.4 is enforced — Splunk UC-22.13.20: ESP Security Rule Commit Without Approved Change Record.",
                  "ea": "Saved search 'UC-22.13.20' running on index=network sourcetype=pan:config, index=itsm sourcetype=chg:firewall, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.21",
              "n": "Vendor Interactive Remote Access — Encryption and Session Recording Evidence (CIP-005-6 R3 Part 3.2)",
              "c": "critical",
              "f": "intermediate",
              "v": "We verify vendor interactive remote access uses approved encryption (for example TLS versions on VPN or SSH) and that PAM session recording identifiers exist for the same sessions.",
              "t": "Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=network` `sourcetype=pan:globalprotect` or SSL VPN logs, `index=pam` `sourcetype=cyberark:ptalog`, `sourcetype=cyberark:session_recording`",
              "q": "index=network sourcetype=pan:globalprotect earliest=-24h\n| eval tls_version=coalesce(tls_version,ssl_tls_version)\n| eval tls_ok=if(match(tls_version,\"TLSv1\\.[23]\"),1,0)\n| where tls_ok=0 AND isnotnull(user)\n| stats count by user, public_ip, tls_version, gateway\n| sort - count",
              "m": "(1) Map GlobalProtect or SSL VPN field names to `tls_version`. (2) Run a companion saved search on `index=pam` for vendor sessions missing `recording_id` in `cyberark:session_recording`. (3) Alert on weak TLS or missing recording for vendor users. (4) Document compensating controls for legacy jump hosts.",
              "z": "Table (encryption and recording gaps), Timeline, Single value (violation count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=network` `sourcetype=pan:globalprotect` or SSL VPN logs, `index=pam` `sourcetype=cyberark:ptalog`, `sourcetype=cyberark:session_recording`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map GlobalProtect or SSL VPN field names to `tls_version`. (2) Run a companion saved search on `index=pam` for vendor sessions missing `recording_id` in `cyberark:session_recording`. (3) Alert on weak TLS or missing recording for vendor users. (4) Document compensating controls for legacy jump hosts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=pan:globalprotect earliest=-24h\n| eval tls_version=coalesce(tls_version,ssl_tls_version)\n| eval tls_ok=if(match(tls_version,\"TLSv1\\.[23]\"),1,0)\n| where tls_ok=0 AND isnotnull(user)\n| stats count by user, public_ip, tls_version, gateway\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Vendor Interactive Remote Access — Encryption and Session Recording Evidence (CIP-005-6 R3 Part 3.2)** — We verify vendor interactive remote access uses approved encryption (for example TLS versions on VPN or SSH) and that PAM session recording identifiers exist for the same sessions.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:globalprotect` or SSL VPN logs, `index=pam` `sourcetype=cyberark:ptalog`, `sourcetype=cyberark:session_recording`. **App/TA** (typical add-on context): Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:globalprotect. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=pan:globalprotect, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **tls_version** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **tls_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where tls_ok=0 AND isnotnull(user)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, public_ip, tls_version, gateway** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Vendor Interactive Remote Access — Encryption and Session Recording Evidence (CIP-005-6 R3 Part 3.2)** — We verify vendor interactive remote access uses approved encryption (for example TLS versions on VPN or SSH) and that PAM session recording identifiers exist for the same sessions.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:globalprotect` or SSL VPN logs, `index=pam` `sourcetype=cyberark:ptalog`, `sourcetype=cyberark:session_recording`. **App/TA** (typical add-on context): Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (encryption and recording gaps), Timeline, Single value (violation count)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We verify vendor interactive remote access uses approved encryption (for example TLS versions on VPN or SSH) and that PAM session recording identifiers exist for the same sessions. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-6 R3 Part 3.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-005-6 R3 Part 3.2 is enforced — Splunk UC-22.13.21: Vendor Interactive Remote Access — Encryption and Session Recording Evidence.",
                  "ea": "Saved search 'UC-22.13.21' running on sourcetype pan:globalprotect and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.22",
              "n": "Dial-Up or Serial Out-of-Band Access on ESP-Adjacent Segments (CIP-005-6 R2 Part 2.4)",
              "c": "high",
              "f": "advanced",
              "v": "We detect modem, PPP, or serial console activity that could bypass documented ESP protections.",
              "t": "Splunk Add-on for Unix and Linux (833)",
              "d": "`index=os` `sourcetype=linux:secure`, `sourcetype=serial_concentrator`",
              "q": "index=os sourcetype=linux:secure earliest=-14d (pppd OR ttyS OR ttyUSB OR \"serial console\")\n| lookup bes_network_segments.csv host OUTPUT segment\n| where segment=\"ESP_adjacent\"\n| stats count by host, process, user\n| sort - count",
              "m": "(1) Inventory any remaining dial or OOB paths. (2) Forward concentrator logs. (3) Alert on any production hit unless maintenance window lookup matches. (4) Document removal in CIP-010.",
              "z": "Table (events), Single value (distinct hosts), Time chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Unix and Linux (833).\n• Ensure the following data sources are available: `index=os` `sourcetype=linux:secure`, `sourcetype=serial_concentrator`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Inventory any remaining dial or OOB paths. (2) Forward concentrator logs. (3) Alert on any production hit unless maintenance window lookup matches. (4) Document removal in CIP-010.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux:secure earliest=-14d (pppd OR ttyS OR ttyUSB OR \"serial console\")\n| lookup bes_network_segments.csv host OUTPUT segment\n| where segment=\"ESP_adjacent\"\n| stats count by host, process, user\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Dial-Up or Serial Out-of-Band Access on ESP-Adjacent Segments (CIP-005-6 R2 Part 2.4)** — We detect modem, PPP, or serial console activity that could bypass documented ESP protections.\n\nDocumented **Data sources**: `index=os` `sourcetype=linux:secure`, `sourcetype=serial_concentrator`. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux:secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux:secure, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where segment=\"ESP_adjacent\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, process, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (events), Single value (distinct hosts), Time chart",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect modem, PPP, or serial console activity that could bypass documented ESP protections. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-005-6 R2 Part 2.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-005-6 R2 Part 2.4 is enforced — Splunk UC-22.13.22: Dial-Up or Serial Out-of-Band Access on ESP-Adjacent Segments.",
                  "ea": "Saved search 'UC-22.13.22' running on index=os sourcetype=linux:secure, sourcetype=serial_concentrator, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.23",
              "n": "Physical Access Outside Approved Maintenance Window at PSP (CIP-006-6 R1 Part 1.2)",
              "c": "high",
              "f": "intermediate",
              "v": "We alert on badge-granted entries at Physical Security Perimeters outside documented maintenance windows for follow-up with operations.",
              "t": "Splunk OT Security Add-on (5151)",
              "d": "`index=physical` `sourcetype=pacs:access`",
              "q": "index=physical sourcetype=pacs:access result=\"granted\" earliest=-7d\n| lookup cip_psp_register reader_id OUTPUT psp_name maint_cron_ok\n| eval hour=strftime(_time,\"%H\")\n| eval dow=strftime(_time,\"%w\")\n| lookup psp_maint_windows.csv psp_name dow hour OUTPUT in_window\n| where in_window!=\"true\"\n| table _time, employee_id, psp_name, reader_id, in_window",
              "m": "(1) Encode maintenance windows in `psp_maint_windows.csv`. (2) Add holiday exceptions. (3) Route alerts to physical security. (4) Correlate with work orders.",
              "z": "Timeline (after-hours), Table (events), Single value (count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151).\n• Ensure the following data sources are available: `index=physical` `sourcetype=pacs:access`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Encode maintenance windows in `psp_maint_windows.csv`. (2) Add holiday exceptions. (3) Route alerts to physical security. (4) Correlate with work orders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=pacs:access result=\"granted\" earliest=-7d\n| lookup cip_psp_register reader_id OUTPUT psp_name maint_cron_ok\n| eval hour=strftime(_time,\"%H\")\n| eval dow=strftime(_time,\"%w\")\n| lookup psp_maint_windows.csv psp_name dow hour OUTPUT in_window\n| where in_window!=\"true\"\n| table _time, employee_id, psp_name, reader_id, in_window\n```\n\nUnderstanding this SPL\n\n**Physical Access Outside Approved Maintenance Window at PSP (CIP-006-6 R1 Part 1.2)** — We alert on badge-granted entries at Physical Security Perimeters outside documented maintenance windows for follow-up with operations.\n\nDocumented **Data sources**: `index=physical` `sourcetype=pacs:access`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: pacs:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=pacs:access, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dow** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_window!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Physical Access Outside Approved Maintenance Window at PSP (CIP-006-6 R1 Part 1.2)**): table _time, employee_id, psp_name, reader_id, in_window\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (after-hours), Table (events), Single value (count)",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We alert on badge-granted entries at Physical Security Perimeters outside documented maintenance windows for follow-up with operations. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-006-6 R1 Part 1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-006-6 R1 Part 1.2 is enforced — Splunk UC-22.13.23: Physical Access Outside Approved Maintenance Window at PSP.",
                  "ea": "Saved search 'UC-22.13.23' running on index=physical sourcetype=pacs:access, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.24",
              "n": "Unauthorized Physical Access Attempts at PSP (CIP-006-6 R2)",
              "c": "critical",
              "f": "intermediate",
              "v": "We aggregate denied, forced, and tailgate physical access events for rapid triage and evidence of physical perimeter monitoring.",
              "t": "Splunk OT Security Add-on (5151)",
              "d": "`index=physical` `sourcetype=pacs:access`",
              "q": "index=physical sourcetype=pacs:access earliest=-24h\n    (result=\"denied\" OR event_type IN (\"forced\",\"tailgate\",\"passback\"))\n| lookup cip_psp_register reader_id OUTPUT psp_name bes_site\n| stats count by event_type, psp_name, bes_site\n| sort - count",
              "m": "(1) Map readers to PSP inventory. (2) Push high severity to SOC bridge. (3) Optional join to VMS syslog. (4) Weekly trending for reliability.",
              "z": "Stacked bar (event types), Table (raw), Single value (tailgates)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151).\n• Ensure the following data sources are available: `index=physical` `sourcetype=pacs:access`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map readers to PSP inventory. (2) Push high severity to SOC bridge. (3) Optional join to VMS syslog. (4) Weekly trending for reliability.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=pacs:access earliest=-24h\n    (result=\"denied\" OR event_type IN (\"forced\",\"tailgate\",\"passback\"))\n| lookup cip_psp_register reader_id OUTPUT psp_name bes_site\n| stats count by event_type, psp_name, bes_site\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Unauthorized Physical Access Attempts at PSP (CIP-006-6 R2)** — We aggregate denied, forced, and tailgate physical access events for rapid triage and evidence of physical perimeter monitoring.\n\nDocumented **Data sources**: `index=physical` `sourcetype=pacs:access`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: pacs:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=pacs:access, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by event_type, psp_name, bes_site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (event types), Table (raw), Single value (tailgates)",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We aggregate denied, forced, and tailgate physical access events for rapid triage and evidence of physical perimeter monitoring. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-006-6 R2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-006-6 R2 is enforced — Splunk UC-22.13.24: Unauthorized Physical Access Attempts at PSP.",
                  "ea": "Saved search 'UC-22.13.24' running on index=physical sourcetype=pacs:access, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.25",
              "n": "Visitor Badge Without Escort Within Policy Window (CIP-006-6 R3 Part 3.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We find visitor badge reads that lack a proximate escort badge swipe, supporting visitor control procedures.",
              "t": "Splunk OT Security Add-on (5151)",
              "d": "`index=physical` `sourcetype=pacs:access`",
              "q": "index=physical sourcetype=pacs:access badge_type=\"visitor\" earliest=-7d\n| where isnull(escort_badge_id) OR escort_present=\"false\"\n| lookup cip_psp_register reader_id OUTPUT psp_name\n| stats count by visitor_id, reader_id, psp_name\n| where count>=1\n| table visitor_id, reader_id, psp_name, count",
              "m": "(1) Fix join logic per PACS export (example simplified). (2) Use `transaction` on reader_id if vendor provides visit_id. (3) Daily report to reception. (4) Tune window to site policy.",
              "z": "Table (violations), Single value (rate), Time chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151).\n• Ensure the following data sources are available: `index=physical` `sourcetype=pacs:access`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Fix join logic per PACS export (example simplified). (2) Use `transaction` on reader_id if vendor provides visit_id. (3) Daily report to reception. (4) Tune window to site policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=pacs:access badge_type=\"visitor\" earliest=-7d\n| where isnull(escort_badge_id) OR escort_present=\"false\"\n| lookup cip_psp_register reader_id OUTPUT psp_name\n| stats count by visitor_id, reader_id, psp_name\n| where count>=1\n| table visitor_id, reader_id, psp_name, count\n```\n\nUnderstanding this SPL\n\n**Visitor Badge Without Escort Within Policy Window (CIP-006-6 R3 Part 3.1)** — We find visitor badge reads that lack a proximate escort badge swipe, supporting visitor control procedures.\n\nDocumented **Data sources**: `index=physical` `sourcetype=pacs:access`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: pacs:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=pacs:access, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(escort_badge_id) OR escort_present=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by visitor_id, reader_id, psp_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Visitor Badge Without Escort Within Policy Window (CIP-006-6 R3 Part 3.1)**): table visitor_id, reader_id, psp_name, count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Single value (rate), Time chart",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We find visitor badge reads that lack a proximate escort badge swipe, supporting visitor control procedures. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-006-6 R3 Part 3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-006-6 R3 Part 3.1 is enforced — Splunk UC-22.13.25: Visitor Badge Without Escort Within Policy Window.",
                  "ea": "Saved search 'UC-22.13.25' running on index=physical sourcetype=pacs:access, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.26",
              "n": "Physical Access Log Ingest Continuity for Retention Evidence (CIP-006-6 R4)",
              "c": "high",
              "f": "intermediate",
              "v": "We detect multi-day gaps in PACS indexing that could undermine evidence of physical access monitoring and review.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "`index=physical` `sourcetype=pacs:access`, `_internal`",
              "q": "index=physical sourcetype=pacs:access earliest=-60d\n| timechart span=1d count as events\n| streamstats window=7 avg(events) as baseline\n| eval gap=if(events<baseline*0.15 OR events=0,1,0)\n| where gap=1\n| table _time, events, baseline",
              "m": "(1) Exclude planned outages via lookup. (2) Alert physical security and Splunk admins. (3) Track forwarder uptime. (4) Export gap analysis for audits.",
              "z": "Time chart (daily volume), Table (gap days), Single value (max gap hours)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: `index=physical` `sourcetype=pacs:access`, `_internal`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Exclude planned outages via lookup. (2) Alert physical security and Splunk admins. (3) Track forwarder uptime. (4) Export gap analysis for audits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=pacs:access earliest=-60d\n| timechart span=1d count as events\n| streamstats window=7 avg(events) as baseline\n| eval gap=if(events<baseline*0.15 OR events=0,1,0)\n| where gap=1\n| table _time, events, baseline\n```\n\nUnderstanding this SPL\n\n**Physical Access Log Ingest Continuity for Retention Evidence (CIP-006-6 R4)** — We detect multi-day gaps in PACS indexing that could undermine evidence of physical access monitoring and review.\n\nDocumented **Data sources**: `index=physical` `sourcetype=pacs:access`, `_internal`. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: pacs:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=pacs:access, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets — ideal for trending and alerting on this use case.\n• `streamstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Physical Access Log Ingest Continuity for Retention Evidence (CIP-006-6 R4)**): table _time, events, baseline\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (daily volume), Table (gap days), Single value (max gap hours)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect multi-day gaps in PACS indexing that could undermine evidence of physical access monitoring and review. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-006-6 R4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-006-6 R4 is enforced — Splunk UC-22.13.26: Physical Access Log Ingest Continuity for Retention Evidence.",
                  "ea": "Saved search 'UC-22.13.26' running on index=physical sourcetype=pacs:access, _internal, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.27",
              "n": "PSP Forced Door with Concurrent ESP Interactive Logon (CIP-006-6 R3 Part 3.2)",
              "c": "critical",
              "f": "advanced",
              "v": "We time-align severe physical alarms with interactive logons at the same site to support combined cyber-physical incident analysis.",
              "t": "Splunk OT Security Add-on (5151), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=physical` forced events, `index=wineventlog` RDP logons",
              "q": "index=physical sourcetype=pacs:access event_type=\"forced\" earliest=-6h\n| eval ptime=_time, site=bes_site\n| join type=left site [\n    search index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 LogonType=10 earliest=-6h\n    | lookup bes_site_map.csv ComputerName OUTPUT bes_site as site\n    | stats count by Account_Name, ComputerName, site\n  ]\n| where isnotnull(Account_Name)\n| table ptime, site, employee_id, Account_Name, ComputerName, count",
              "m": "(1) Maintain `bes_site_map.csv` from facility records. (2) Tune time bucketing (example uses join on site). (3) Prefer summary indexing for scale. (4) Attach results to CIP-008 preliminary review.",
              "z": "Timeline (overlay), Table (correlated pairs), Single value (pair count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=physical` forced events, `index=wineventlog` RDP logons.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `bes_site_map.csv` from facility records. (2) Tune time bucketing (example uses join on site). (3) Prefer summary indexing for scale. (4) Attach results to CIP-008 preliminary review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=pacs:access event_type=\"forced\" earliest=-6h\n| eval ptime=_time, site=bes_site\n| join type=left site [\n    search index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 LogonType=10 earliest=-6h\n    | lookup bes_site_map.csv ComputerName OUTPUT bes_site as site\n    | stats count by Account_Name, ComputerName, site\n  ]\n| where isnotnull(Account_Name)\n| table ptime, site, employee_id, Account_Name, ComputerName, count\n```\n\nUnderstanding this SPL\n\n**PSP Forced Door with Concurrent ESP Interactive Logon (CIP-006-6 R3 Part 3.2)** — We time-align severe physical alarms with interactive logons at the same site to support combined cyber-physical incident analysis.\n\nDocumented **Data sources**: `index=physical` forced events, `index=wineventlog` RDP logons. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: pacs:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=pacs:access, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ptime** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnotnull(Account_Name)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PSP Forced Door with Concurrent ESP Interactive Logon (CIP-006-6 R3 Part 3.2)**): table ptime, site, employee_id, Account_Name, ComputerName, count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**PSP Forced Door with Concurrent ESP Interactive Logon (CIP-006-6 R3 Part 3.2)** — We time-align severe physical alarms with interactive logons at the same site to support combined cyber-physical incident analysis.\n\nDocumented **Data sources**: `index=physical` forced events, `index=wineventlog` RDP logons. 
**App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (overlay), Table (correlated pairs), Single value (pair count)",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We time-align severe physical alarms with interactive logons at the same site to support combined cyber-physical incident analysis. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-006-6 R3 Part 3.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-006-6 R3 Part 3.2 is enforced — Splunk UC-22.13.27: PSP Forced Door with Concurrent ESP Interactive Logon.",
                  "ea": "Saved search 'UC-22.13.27' running on index=physical forced events, index=wineventlog RDP logons, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.28",
              "n": "Listening Port and Service Baseline Deviation on BES Servers (CIP-007-6 R1 Part 1.2)",
              "c": "high",
              "f": "advanced",
              "v": "We compare live listening ports from host telemetry to an approved baseline so unauthorized services on BES Cyber Assets are detected quickly.",
              "t": "Splunk Add-on for Unix and Linux (833), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=os` `sourcetype=netstat` or scripted listening ports, `bes_port_baseline.csv`",
              "q": "index=os (sourcetype=netstat OR sourcetype=listening_ports) earliest=-24h\n| lookup bes_cyber_asset_inventory.csv host OUTPUT bes_system\n| where isnotnull(bes_system)\n| stats latest(port) as port latest(proto) as proto by host, port\n| lookup bes_port_baseline.csv host, port OUTPUT approved\n| where isnull(approved)\n| table host, port, proto, bes_system",
              "m": "(1) Collect listening port data via UF scripted input. (2) Build baseline from golden images. (3) Alert on new ports for seven days. (4) Feed changes into CIP-010 change tickets.",
              "z": "Table (new ports), Bar chart (by host), Single value (distinct deviations)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Unix and Linux (833), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=os` `sourcetype=netstat` or scripted listening ports, `bes_port_baseline.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Collect listening port data via UF scripted input. (2) Build baseline from golden images. (3) Alert on new ports for seven days. (4) Feed changes into CIP-010 change tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os (sourcetype=netstat OR sourcetype=listening_ports) earliest=-24h\n| lookup bes_cyber_asset_inventory.csv host OUTPUT bes_system\n| where isnotnull(bes_system)\n| stats latest(port) as port latest(proto) as proto by host, port\n| lookup bes_port_baseline.csv host, port OUTPUT approved\n| where isnull(approved)\n| table host, port, proto, bes_system\n```\n\nUnderstanding this SPL\n\n**Listening Port and Service Baseline Deviation on BES Servers (CIP-007-6 R1 Part 1.2)** — We compare live listening ports from host telemetry to an approved baseline so unauthorized services on BES Cyber Assets are detected quickly.\n\nDocumented **Data sources**: `index=os` `sourcetype=netstat` or scripted listening ports, `bes_port_baseline.csv`. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (833), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: netstat, listening_ports. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=netstat, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Listening Port and Service Baseline Deviation on BES Servers (CIP-007-6 R1 Part 1.2)**): table host, port, proto, bes_system\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (new ports), Bar chart (by host), Single value (distinct deviations)\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare live listening ports from host telemetry to an approved baseline so unauthorized services on BES Cyber Assets are detected quickly. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R1 Part 1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-007-6 R1 Part 1.2 is enforced — Splunk UC-22.13.28: Listening Port and Service Baseline Deviation on BES Servers.",
                  "ea": "Saved search 'UC-22.13.28' running on index=os sourcetype=netstat or scripted listening ports, bes_port_baseline.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.29",
              "n": "Security Patch Evaluation Completion Tracking (CIP-007-6 R2 Part 2.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We track that each published critical patch for BES assets received a documented evaluation record within policy time.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=patch` `sourcetype=wsus:summary` or `sourcetype=tenable:plugin`, `patch_evaluation_register.csv`",
              "q": "index=patch sourcetype=wsus:summary earliest=-35d classification=\"Critical\"\n| lookup bes_cyber_asset_inventory.csv ComputerName AS host OUTPUT bes_system\n| where isnotnull(bes_system)\n| join type=left kb_id [\n    search index=grc sourcetype=cip:patch_evaluation kb_id=* earliest=-35d\n    | stats latest(evaluated_at) as evaluated_at by kb_id\n  ]\n| where isnull(evaluated_at)\n| stats dc(host) as hosts_missing_eval by kb_id, title",
              "m": "(1) Ingest patch catalog with release dates. (2) Require evaluation HEC event per KB. (3) Alert before internal evaluation SLA. (4) Export for CIP-007 evidence binders.",
              "z": "Table (unevaluated KBs), Single value (count), Time chart (evaluations)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=patch` `sourcetype=wsus:summary` or `sourcetype=tenable:plugin`, `patch_evaluation_register.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest patch catalog with release dates. (2) Require evaluation HEC event per KB. (3) Alert before internal evaluation SLA. (4) Export for CIP-007 evidence binders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=patch sourcetype=wsus:summary earliest=-35d classification=\"Critical\"\n| lookup bes_cyber_asset_inventory.csv ComputerName AS host OUTPUT bes_system\n| where isnotnull(bes_system)\n| join type=left kb_id [\n    search index=grc sourcetype=cip:patch_evaluation kb_id=* earliest=-35d\n    | stats latest(evaluated_at) as evaluated_at by kb_id\n  ]\n| where isnull(evaluated_at)\n| stats dc(host) as hosts_missing_eval by kb_id, title\n```\n\nUnderstanding this SPL\n\n**Security Patch Evaluation Completion Tracking (CIP-007-6 R2 Part 2.1)** — We track that each published critical patch for BES assets received a documented evaluation record within policy time.\n\nDocumented **Data sources**: `index=patch` `sourcetype=wsus:summary` or `sourcetype=tenable:plugin`, `patch_evaluation_register.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: patch; **sourcetype**: wsus:summary. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=patch, sourcetype=wsus:summary, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(evaluated_at)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by kb_id, title** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unevaluated KBs), Single value (count), Time chart (evaluations)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track that each published critical patch for BES assets received a documented evaluation record within policy time. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R2 Part 2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-007-6 R2 Part 2.1 is enforced — Splunk UC-22.13.29: Security Patch Evaluation Completion Tracking.",
                  "ea": "Saved search 'UC-22.13.29' running on index=patch sourcetype=wsus:summary or sourcetype=tenable:plugin, patch_evaluation_register.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.30",
              "n": "Patch Installation or Mitigation within 35-Day Window (CIP-007-6 R2 Part 2.3)",
              "c": "critical",
              "f": "intermediate",
              "v": "We highlight applicable patches past thirty-five days from completed evaluation without apply or approved mitigation plan.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=patch` `sourcetype=patch:status`, `cip007_mitigation_plans.csv`",
              "q": "index=patch sourcetype=patch:status bes_asset=1 earliest=-120d\n| eval eval_epoch=strptime(eval_completed,\"%Y-%m-%d\")\n| eval due_epoch=eval_epoch+(35*86400)\n| eval applied=if(match(status,\"(?i)installed\"),1,0)\n| lookup cip007_mitigation_plans.csv patch_id OUTPUT plan_id approved_until\n| eval mitigated=if(isnotnull(plan_id),1,0)\n| where applied=0 AND mitigated=0 AND now()>due_epoch\n| table host, patch_id, eval_completed, due_epoch, status",
              "m": "(1) Align `eval_completed` with official evaluation logs. (2) Import mitigation plans from GRC. (3) Daily alert to server owners. (4) Ticket exceptions through CIP-003.",
              "z": "Table (overdue patches), Column chart (by team), Single value (asset count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=patch` `sourcetype=patch:status`, `cip007_mitigation_plans.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align `eval_completed` with official evaluation logs. (2) Import mitigation plans from GRC. (3) Daily alert to server owners. (4) Ticket exceptions through CIP-003.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=patch sourcetype=patch:status bes_asset=1 earliest=-120d\n| eval eval_epoch=strptime(eval_completed,\"%Y-%m-%d\")\n| eval due_epoch=eval_epoch+(35*86400)\n| eval applied=if(match(status,\"(?i)installed\"),1,0)\n| lookup cip007_mitigation_plans.csv patch_id OUTPUT plan_id approved_until\n| eval mitigated=if(isnotnull(plan_id),1,0)\n| where applied=0 AND mitigated=0 AND now()>due_epoch\n| table host, patch_id, eval_completed, due_epoch, status\n```\n\nUnderstanding this SPL\n\n**Patch Installation or Mitigation within 35-Day Window (CIP-007-6 R2 Part 2.3)** — We highlight applicable patches past thirty-five days from completed evaluation without apply or approved mitigation plan.\n\nDocumented **Data sources**: `index=patch` `sourcetype=patch:status`, `cip007_mitigation_plans.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: patch; **sourcetype**: patch:status. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=patch, sourcetype=patch:status, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eval_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **due_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **applied** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **mitigated** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where applied=0 AND mitigated=0 AND now()>due_epoch` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Patch Installation or Mitigation within 35-Day Window (CIP-007-6 R2 Part 2.3)**): table host, patch_id, eval_completed, due_epoch, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue patches), Column chart (by team), Single value (asset count)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We highlight applicable patches past thirty-five days from completed evaluation without apply or approved mitigation plan. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R2 Part 2.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-007-6 R2 Part 2.3 is enforced — Splunk UC-22.13.30: Patch Installation or Mitigation within 35-Day Window.",
                  "ea": "Saved search 'UC-22.13.30' running on index=patch sourcetype=patch:status, cip007_mitigation_plans.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.31",
              "n": "Malware Prevention Agent Coverage on BES Cyber Assets (CIP-007-6 R3 Part 3.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We list in-scope hosts that have not reported antivirus or EDR health within the required interval.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=endpoint` `sourcetype=WinDefender:Status` or `sourcetype=crowdstrike:hosts`",
              "q": "| inputlookup bes_cyber_asset_inventory.csv\n| rename hostname as host\n| join type=left host [\n    search index=endpoint (sourcetype=WinDefender:Status OR sourcetype=crowdstrike:hosts) earliest=-48h\n    | stats latest(_time) as last_report by host\n  ]\n| eval stale=if(isnull(last_report) OR (now()-last_report)>172800,1,0)\n| where stale=1\n| table host, bes_system, last_report",
              "m": "(1) Normalize host naming between inventory and endpoint tools. (2) Set stale threshold per vendor guidance. (3) Weekly report to operations. (4) Track remediation tickets.",
              "z": "Table (uncovered hosts), Single value (percent reporting), Map (optional site)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=WinDefender:Status` or `sourcetype=crowdstrike:hosts`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize host naming between inventory and endpoint tools. (2) Set stale threshold per vendor guidance. (3) Weekly report to operations. (4) Track remediation tickets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup bes_cyber_asset_inventory.csv\n| rename hostname as host\n| join type=left host [\n    search index=endpoint (sourcetype=WinDefender:Status OR sourcetype=crowdstrike:hosts) earliest=-48h\n    | stats latest(_time) as last_report by host\n  ]\n| eval stale=if(isnull(last_report) OR (now()-last_report)>172800,1,0)\n| where stale=1\n| table host, bes_system, last_report\n```\n\nUnderstanding this SPL\n\n**Malware Prevention Agent Coverage on BES Cyber Assets (CIP-007-6 R3 Part 3.1)** — We list in-scope hosts that have not reported antivirus or EDR health within the required interval.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=WinDefender:Status` or `sourcetype=crowdstrike:hosts`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **stale** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where stale=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Malware Prevention Agent Coverage on BES Cyber Assets (CIP-007-6 R3 Part 3.1)**): table host, bes_system, last_report\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (uncovered hosts), Single value (percent reporting), Map (optional site)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We list in-scope hosts that have not reported antivirus or EDR health within the required interval. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R3 Part 3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-007-6 R3 Part 3.1 is enforced — Splunk UC-22.13.31: Malware Prevention Agent Coverage on BES Cyber Assets.",
                  "ea": "Saved search 'UC-22.13.31' running on index=endpoint sourcetype=WinDefender:Status or sourcetype=crowdstrike:hosts, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.32",
              "n": "Malware Detection Events on BES Cyber Systems (CIP-007-6 R4 Part 4.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "We centralize detected-malicious-code events from BES assets so alerts and evidence meet system security management expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=endpoint` Defender or EDR sourcetypes, `bes_cyber_asset_inventory.csv`",
              "q": "index=endpoint (sourcetype=WinDefender:Threat OR sourcetype=\"ms:defender:alert\") earliest=-7d\n| lookup bes_cyber_asset_inventory.csv host OUTPUT bes_system impact_rating\n| where isnotnull(bes_system)\n| stats count earliest(_time) as first latest(_time) as last by host, threat_name, action\n| sort - count",
              "m": "(1) Fix sourcetype list to your EDR. (2) Map actions to blocked vs quarantined. (3) Create ES notable with CIP-008 link. (4) Retain chain-of-custody fields if exported.",
              "z": "Table (threats), Time chart (detections), Single value (distinct hosts)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=endpoint` Defender or EDR sourcetypes, `bes_cyber_asset_inventory.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Fix sourcetype list to your EDR. (2) Map actions to blocked vs quarantined. (3) Create ES notable with CIP-008 link. (4) Retain chain-of-custody fields if exported.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint (sourcetype=WinDefender:Threat OR sourcetype=\"ms:defender:alert\") earliest=-7d\n| lookup bes_cyber_asset_inventory.csv host OUTPUT bes_system impact_rating\n| where isnotnull(bes_system)\n| stats count earliest(_time) as first latest(_time) as last by host, threat_name, action\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Malware Detection Events on BES Cyber Systems (CIP-007-6 R4 Part 4.1)** — We centralize detected-malicious-code events from BES assets so alerts and evidence meet system security management expectations.\n\nDocumented **Data sources**: `index=endpoint` Defender or EDR sourcetypes, `bes_cyber_asset_inventory.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: WinDefender:Threat, ms:defender:alert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=WinDefender:Threat, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, threat_name, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest, Malware_Attacks.signature, Malware_Attacks.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Malware Detection Events on BES Cyber Systems (CIP-007-6 R4 Part 4.1)** — We centralize detected-malicious-code events from BES assets so alerts and evidence meet system security management expectations.\n\nDocumented **Data sources**: `index=endpoint` Defender or EDR sourcetypes, `bes_cyber_asset_inventory.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (threats), Time chart (detections), Single value (distinct hosts)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We centralize detected-malicious-code events from BES assets so alerts and evidence meet system security management expectations. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Malware (when CIM-tagged)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest, Malware_Attacks.signature, Malware_Attacks.action | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R4 Part 4.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-007-6 R4 Part 4.1 is enforced — Splunk UC-22.13.32: Malware Detection Events on BES Cyber Systems.",
                  "ea": "Saved search 'UC-22.13.32' running on index=endpoint Defender or EDR sourcetypes, bes_cyber_asset_inventory.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.33",
              "n": "Security Event Log Generation Validation for Windows BES Hosts (CIP-007-6 R4 Part 4.2)",
              "c": "high",
              "f": "intermediate",
              "v": "We detect Windows systems where the Security channel stopped forwarding while the host remains online, risking missing auditable events.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=wineventlog` `sourcetype=WinEventLog:Security`, `_internal` forwarder metrics",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" earliest=-48h\n| lookup bes_cyber_asset_inventory.csv ComputerName AS host OUTPUT bes_system\n| where isnotnull(bes_system)\n| stats latest(_time) as last_sec by host\n| eval gap_h=round((now()-last_sec)/3600,2)\n| where gap_h>24\n| table host, bes_system, last_sec, gap_h",
              "m": "(1) Tune gap threshold per risk. (2) Exclude maintenance windows. (3) Alert server owners and Splunk admins. (4) Track ticket closure with log resume proof.",
              "z": "Table (stale log sources), Single value (host count), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=wineventlog` `sourcetype=WinEventLog:Security`, `_internal` forwarder metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune gap threshold per risk. (2) Exclude maintenance windows. (3) Alert server owners and Splunk admins. (4) Track ticket closure with log resume proof.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" earliest=-48h\n| lookup bes_cyber_asset_inventory.csv ComputerName AS host OUTPUT bes_system\n| where isnotnull(bes_system)\n| stats latest(_time) as last_sec by host\n| eval gap_h=round((now()-last_sec)/3600,2)\n| where gap_h>24\n| table host, bes_system, last_sec, gap_h\n```\n\nUnderstanding this SPL\n\n**Security Event Log Generation Validation for Windows BES Hosts (CIP-007-6 R4 Part 4.2)** — We detect Windows systems where the Security channel stopped forwarding while the host remains online, risking missing auditable events.\n\nDocumented **Data sources**: `index=wineventlog` `sourcetype=WinEventLog:Security`, `_internal` forwarder metrics. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gap_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap_h>24` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Security Event Log Generation Validation for Windows BES Hosts (CIP-007-6 R4 Part 4.2)**): table host, bes_system, last_sec, gap_h\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale log sources), Single value (host count), Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect Windows systems where the Security channel stopped forwarding while the host remains online, risking missing auditable events. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R4 Part 4.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-007-6 R4 Part 4.2 is enforced — Splunk UC-22.13.33: Security Event Log Generation Validation for Windows BES Hosts.",
                  "ea": "Saved search 'UC-22.13.33' running on index=wineventlog sourcetype=WinEventLog:Security, _internal forwarder metrics, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.34",
              "n": "Failed and Successful Login Attempt Monitoring on BES Assets (CIP-007-6 R5 Part 5.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We summarize authentication successes and failures on BES Cyber Assets to support account lockout and misuse investigations.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=wineventlog` `EventCode` 4624/4625, `bes_cyber_asset_inventory.csv`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4625) earliest=-24h\n| lookup bes_cyber_asset_inventory.csv ComputerName AS host OUTPUT bes_system\n| where isnotnull(bes_system)\n| eval outcome=if(EventCode==4625,\"fail\",\"success\")\n| stats count by Account_Name, ComputerName, outcome, bes_system\n| sort - count",
              "m": "(1) Ensure 4625 auditing enabled. (2) Add threshold alerts for failures. (3) Correlate with password spray use cases. (4) Retain per entity log policy.",
              "z": "Stacked bar (success vs fail), Table (top accounts), Single value (failure rate)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=wineventlog` `EventCode` 4624/4625, `bes_cyber_asset_inventory.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure 4625 auditing enabled. (2) Add threshold alerts for failures. (3) Correlate with password spray use cases. (4) Retain per entity log policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4625) earliest=-24h\n| lookup bes_cyber_asset_inventory.csv ComputerName AS host OUTPUT bes_system\n| where isnotnull(bes_system)\n| eval outcome=if(EventCode==4625,\"fail\",\"success\")\n| stats count by Account_Name, ComputerName, outcome, bes_system\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Failed and Successful Login Attempt Monitoring on BES Assets (CIP-007-6 R5 Part 5.1)** — We summarize authentication successes and failures on BES Cyber Assets to support account lockout and misuse investigations.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode` 4624/4625, `bes_cyber_asset_inventory.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **outcome** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by Account_Name, ComputerName, outcome, bes_system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Failed and Successful Login Attempt Monitoring on BES Assets (CIP-007-6 R5 Part 5.1)** — We summarize authentication successes and failures on BES Cyber Assets to support account lockout and misuse investigations.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode` 4624/4625, `bes_cyber_asset_inventory.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (success vs fail), Table (top accounts), Single value (failure rate)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We summarize authentication successes and failures on BES Cyber Assets to support account lockout and misuse investigations. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R5 Part 5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-007-6 R5 Part 5.1 is enforced — Splunk UC-22.13.34: Failed and Successful Login Attempt Monitoring on BES Assets.",
                  "ea": "Saved search 'UC-22.13.34' running on index=wineventlog EventCode 4624/4625, bes_cyber_asset_inventory.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.35",
              "n": "Default, Built-In, or Generic Account Usage on BES Hosts (CIP-007-6 R5 Part 5.2)",
              "c": "critical",
              "f": "intermediate",
              "v": "We flag logons using default or generic accounts that should be disabled or renamed per system hardening standards.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=wineventlog` `EventCode=4624`, `generic_accounts_blocklist.csv`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-7d\n| lookup bes_cyber_asset_inventory.csv ComputerName AS host OUTPUT bes_system\n| where isnotnull(bes_system)\n| lookup generic_accounts_blocklist.csv Account_Name OUTPUT blocked_account\n| where isnotnull(blocked_account)\n| stats count by Account_Name, ComputerName, bes_system\n| sort - count",
              "m": "(1) Populate blocklist with Administrator, Guest, vendor defaults. (2) Tune for renamed admin accounts via SID lookup if available. (3) Page IR on hits. (4) Document approved break-glass in lookup.",
              "z": "Table (violations), Single value (event count), Bar chart (by account)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=wineventlog` `EventCode=4624`, `generic_accounts_blocklist.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate blocklist with Administrator, Guest, vendor defaults. (2) Tune for renamed admin accounts via SID lookup if available. (3) Page IR on hits. (4) Document approved break-glass in lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-7d\n| lookup bes_cyber_asset_inventory.csv ComputerName AS host OUTPUT bes_system\n| where isnotnull(bes_system)\n| lookup generic_accounts_blocklist.csv Account_Name OUTPUT blocked_account\n| where isnotnull(blocked_account)\n| stats count by Account_Name, ComputerName, bes_system\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Default, Built-In, or Generic Account Usage on BES Hosts (CIP-007-6 R5 Part 5.2)** — We flag logons using default or generic accounts that should be disabled or renamed per system hardening standards.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode=4624`, `generic_accounts_blocklist.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(blocked_account)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by Account_Name, ComputerName, bes_system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Default, Built-In, or Generic Account Usage on BES Hosts (CIP-007-6 R5 Part 5.2)** — We flag logons using default or generic accounts that should be disabled or renamed per system hardening standards.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode=4624`, `generic_accounts_blocklist.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Single value (event count), Bar chart (by account)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We flag logons using default or generic accounts that should be disabled or renamed per system hardening standards. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R5 Part 5.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-007-6 R5 Part 5.2 is enforced — Splunk UC-22.13.35: Default, Built-In, or Generic Account Usage on BES Hosts.",
                  "ea": "Saved search 'UC-22.13.35' running on index=wineventlog EventCode=4624, generic_accounts_blocklist.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.36",
              "n": "Shared Account Justification Review Queue (CIP-007-6 R5 Part 5.3)",
              "c": "high",
              "f": "intermediate",
              "v": "We track shared or functional accounts that logged onto BES assets against a justification register due for periodic review.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=wineventlog` `EventCode=4624`, `shared_accounts_register.csv`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-30d\n| lookup shared_accounts_register.csv Account_Name OUTPUT shared_owner review_due\n| where isnotnull(shared_owner)\n| eval due_epoch=strptime(review_due,\"%Y-%m-%d\")\n| where due_epoch < now() OR isnull(review_due)\n| stats dc(ComputerName) as systems values(bes_system) as sites by Account_Name, shared_owner, review_due",
              "m": "(1) Maintain register with review due dates. (2) Quarterly export for access governance. (3) Require re-approval tickets. (4) Remove retired shared accounts from register.",
              "z": "Table (accounts overdue), Timeline (reviews), Single value (count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=wineventlog` `EventCode=4624`, `shared_accounts_register.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain register with review due dates. (2) Quarterly export for access governance. (3) Require re-approval tickets. (4) Remove retired shared accounts from register.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4624 earliest=-30d\n| lookup shared_accounts_register.csv Account_Name OUTPUT shared_owner review_due\n| where isnotnull(shared_owner)\n| eval due_epoch=strptime(review_due,\"%Y-%m-%d\")\n| where due_epoch < now() OR isnull(review_due)\n| stats dc(ComputerName) as systems values(bes_system) as sites by Account_Name, shared_owner, review_due\n```\n\nUnderstanding this SPL\n\n**Shared Account Justification Review Queue (CIP-007-6 R5 Part 5.3)** — We track shared or functional accounts that logged onto BES assets against a justification register due for periodic review.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode=4624`, `shared_accounts_register.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(shared_owner)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **due_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where due_epoch < now() OR isnull(review_due)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by Account_Name, shared_owner, review_due** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Shared Account Justification Review Queue (CIP-007-6 R5 Part 5.3)** — We track shared or functional accounts that logged onto BES assets against a justification register due for periodic review.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode=4624`, `shared_accounts_register.csv`. 
**App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (accounts overdue), Timeline (reviews), Single value (count)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track shared or functional accounts that logged onto BES assets against a justification register due for periodic review. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R5 Part 5.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-007-6 R5 Part 5.3 is enforced — Splunk UC-22.13.36: Shared Account Justification Review Queue.",
                  "ea": "Saved search 'UC-22.13.36' running on index=wineventlog EventCode=4624, shared_accounts_register.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.37",
              "n": "Password or Passphrase Policy Violations from Domain Controller Auditing (CIP-007-6 R5 Part 5.4)",
              "c": "high",
              "f": "advanced",
              "v": "We surface directory events indicating weak password changes blocked by policy or Kerberos pre-authentication failures tied to account hygiene.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=wineventlog` `EventCode` 4723/4724/4771, BES OUs",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4723,4724,4771) earliest=-7d\n| lookup bes_ad_ou_map.csv TargetUserName OUTPUT in_bes_ou\n| where in_bes_ou=\"true\"\n| stats count by EventCode, TargetUserName, ComputerName\n| sort - count",
              "m": "(1) Enable appropriate DC auditing. (2) Map accounts to BES OUs. (3) Tune noise from service accounts. (4) Pair with IAM for password reset workflow.",
              "z": "Table (events), Bar chart (by EventCode), Single value (distinct users)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=wineventlog` `EventCode` 4723/4724/4771, BES OUs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable appropriate DC auditing. (2) Map accounts to BES OUs. (3) Tune noise from service accounts. (4) Pair with IAM for password reset workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4723,4724,4771) earliest=-7d\n| lookup bes_ad_ou_map.csv TargetUserName OUTPUT in_bes_ou\n| where in_bes_ou=\"true\"\n| stats count by EventCode, TargetUserName, ComputerName\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Password or Passphrase Policy Violations from Domain Controller Auditing (CIP-007-6 R5 Part 5.4)** — We surface directory events indicating weak password changes blocked by policy or Kerberos pre-authentication failures tied to account hygiene.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode` 4723/4724/4771, BES OUs. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_bes_ou=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by EventCode, TargetUserName, ComputerName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Password or Passphrase Policy Violations from Domain Controller Auditing (CIP-007-6 R5 Part 5.4)** — We surface directory events indicating weak password changes blocked by policy or Kerberos pre-authentication failures tied to account hygiene.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode` 4723/4724/4771, BES OUs. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (events), Bar chart (by EventCode), Single value (distinct users)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surface directory events indicating weak password changes blocked by policy or Kerberos pre-authentication failures tied to account hygiene. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-007-6 R5 Part 5.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-007-6 R5 Part 5.4 is enforced — Splunk UC-22.13.37: Password or Passphrase Policy Violations from Domain Controller Auditing.",
                  "ea": "Saved search 'UC-22.13.37' running on index=wineventlog EventCode 4723/4724/4771, BES OUs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.38",
              "n": "Cyber Security Incident Identification from Correlated ESP and Endpoint Alerts (CIP-008-6 R1 Part 1.1)",
              "c": "critical",
              "f": "advanced",
              "v": "We group perimeter, authentication, and malware signals affecting BES assets to support timely identification of reportable cyber security incidents.",
              "t": "Splunk Enterprise Security (263)",
              "d": "Splunk ES `` `notable` `` macro (or `index=notable`), BES asset tags",
              "q": "`notable` earliest=-7d\n| lookup bes_cyber_asset_inventory.csv dest as asset OUTPUT bes_system\n| where isnotnull(bes_system)\n| stats earliest(_time) as detected_time values(rule_name) as rules dc(asset) as assets by urgency\n| sort - detected_time",
              "m": "(1) Tag ES notables with BES context at correlation time. (2) Define which rules imply CIP-008 review. (3) Integrate with incident command workflow. (4) Preserve search artifacts per policy.",
              "z": "Table (candidate incidents), Timeline, Single value (open BES-tagged notables)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: Splunk ES `` `notable` `` macro (or `index=notable`), BES asset tags.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag ES notables with BES context at correlation time. (2) Define which rules imply CIP-008 review. (3) Integrate with incident command workflow. (4) Preserve search artifacts per policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` earliest=-7d\n| lookup bes_cyber_asset_inventory.csv dest as asset OUTPUT bes_system\n| where isnotnull(bes_system)\n| stats earliest(_time) as detected_time values(rule_name) as rules dc(asset) as assets by urgency\n| sort - detected_time\n```\n\nUnderstanding this SPL\n\n**Cyber Security Incident Identification from Correlated ESP and Endpoint Alerts (CIP-008-6 R1 Part 1.1)** — We group perimeter, authentication, and malware signals affecting BES assets to support timely identification of reportable cyber security incidents.\n\nDocumented **Data sources**: Splunk ES `` `notable` `` macro (or `index=notable`), BES asset tags. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by urgency** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (candidate incidents), Timeline, Single value (open BES-tagged notables)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We group perimeter, authentication, and malware signals affecting BES assets to support timely identification of reportable cyber security incidents. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-008-6 R1 Part 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-008-6 R1 Part 1.1 is enforced — Splunk UC-22.13.38: Cyber Security Incident Identification from Correlated ESP and Endpoint Alerts.",
                  "ea": "Saved search 'UC-22.13.38' running on Splunk ES notable macro (or index=notable), BES asset tags, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.39",
              "n": "Reportable Incident Classification — BES Reliability Impact Signals (CIP-008-6 R2 Part 2.1)",
              "c": "critical",
              "f": "advanced",
              "v": "We correlate OT historian or EMS quality flags with cyber alerts to help classify incidents that may affect BES reliability or operations.",
              "t": "Splunk Industrial Asset Intelligence, Splunk OT Security Add-on (5151)",
              "d": "`index=ot_hist` `sourcetype=scada:point`, `index=ot_security` security events",
              "q": "index=ot_hist sourcetype=scada:point earliest=-24h (quality=\"BAD\" OR quality=\"UNCERTAIN\")\n| bin _time span=15m\n| stats count by substation, point_name, _time\n| join type=left substation [\n    search index=ot_security sourcetype=\"ot:alert\" earliest=-24h\n    | stats latest(description) as cyber_alert by substation\n  ]\n| where isnotnull(cyber_alert)\n| table _time, substation, point_name, count, cyber_alert",
              "m": "(1) Normalize substation keys between historians and security tools. (2) Train analysts on EOP impact criteria. (3) Document classification decisions in ITSM. (4) Retain historian extracts per legal hold guidance.",
              "z": "Timeline (quality vs alerts), Table (joined events), Single value (windows)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Industrial Asset Intelligence, Splunk OT Security Add-on (5151).\n• Ensure the following data sources are available: `index=ot_hist` `sourcetype=scada:point`, `index=ot_security` security events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize substation keys between historians and security tools. (2) Train analysts on EOP impact criteria. (3) Document classification decisions in ITSM. (4) Retain historian extracts per legal hold guidance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_hist sourcetype=scada:point earliest=-24h (quality=\"BAD\" OR quality=\"UNCERTAIN\")\n| bin _time span=15m\n| stats count by substation, point_name, _time\n| join type=left substation [\n    search index=ot_security sourcetype=\"ot:alert\" earliest=-24h\n    | stats latest(description) as cyber_alert by substation\n  ]\n| where isnotnull(cyber_alert)\n| table _time, substation, point_name, count, cyber_alert\n```\n\nUnderstanding this SPL\n\n**Reportable Incident Classification — BES Reliability Impact Signals (CIP-008-6 R2 Part 2.1)** — We correlate OT historian or EMS quality flags with cyber alerts to help classify incidents that may affect BES reliability or operations.\n\nDocumented **Data sources**: `index=ot_hist` `sourcetype=scada:point`, `index=ot_security` security events. **App/TA** (typical add-on context): Splunk Industrial Asset Intelligence, Splunk OT Security Add-on (5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_hist; **sourcetype**: scada:point. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_hist, sourcetype=scada:point, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by substation, point_name, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnotnull(cyber_alert)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Reportable Incident Classification — BES Reliability Impact Signals (CIP-008-6 R2 Part 2.1)**): table _time, substation, point_name, count, cyber_alert\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (quality vs alerts), Table (joined events), Single value (windows)",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We correlate OT historian or EMS quality flags with cyber alerts to help classify incidents that may affect BES reliability or operations. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-008-6 R2 Part 2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-008-6 R2 Part 2.1 is enforced — Splunk UC-22.13.39: Reportable Incident Classification — BES Reliability Impact Signals.",
                  "ea": "Saved search 'UC-22.13.39' running on index=ot_hist sourcetype=scada:point, index=ot_security security events, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.40",
              "n": "Incident Response Timeline Milestone Tracking (CIP-008-6 R3 Part 3.2)",
              "c": "critical",
              "f": "intermediate",
              "v": "We track detection, containment, eradication, and recovery timestamps from ITSM to evidence structured incident response progression.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=itsm` `sourcetype=incident:milestone` or ServiceNow custom sourcetype",
              "q": "index=itsm sourcetype=incident:milestone category=\"CIP008\" earliest=-90d\n| eval det=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval con=strptime(contained_at,\"%Y-%m-%d %H:%M:%S\")\n| eval rec=strptime(recovered_at,\"%Y-%m-%d %H:%M:%S\")\n| eval contain_h=round((con-det)/3600,2)\n| eval recover_h=round((rec-con)/3600,2)\n| table incident_id, detected_at, contained_at, recovered_at, contain_h, recover_h",
              "m": "(1) Require milestone fields on CIP-tagged incidents. (2) Validate timestamps in UTC. (3) Dashboard for leadership. (4) Export for after-action reviews.",
              "z": "Timeline (milestones), Table (SLAs), Column chart (mean contain hours)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=incident:milestone` or ServiceNow custom sourcetype.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require milestone fields on CIP-tagged incidents. (2) Validate timestamps in UTC. (3) Dashboard for leadership. (4) Export for after-action reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=incident:milestone category=\"CIP008\" earliest=-90d\n| eval det=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval con=strptime(contained_at,\"%Y-%m-%d %H:%M:%S\")\n| eval rec=strptime(recovered_at,\"%Y-%m-%d %H:%M:%S\")\n| eval contain_h=round((con-det)/3600,2)\n| eval recover_h=round((rec-con)/3600,2)\n| table incident_id, detected_at, contained_at, recovered_at, contain_h, recover_h\n```\n\nUnderstanding this SPL\n\n**Incident Response Timeline Milestone Tracking (CIP-008-6 R3 Part 3.2)** — We track detection, containment, eradication, and recovery timestamps from ITSM to evidence structured incident response progression.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=incident:milestone` or ServiceNow custom sourcetype. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: incident:milestone. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=incident:milestone, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **det** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **con** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **rec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **contain_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **recover_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Incident Response Timeline Milestone Tracking (CIP-008-6 R3 Part 3.2)**): table incident_id, detected_at, contained_at, recovered_at, contain_h, recover_h\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (milestones), Table (SLAs), Column chart (mean contain hours)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track detection, containment, eradication, and recovery timestamps from ITSM to evidence structured incident response progression. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-008-6 R3 Part 3.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-008-6 R3 Part 3.2 is enforced — Splunk UC-22.13.40: Incident Response Timeline Milestone Tracking.",
                  "ea": "Saved search 'UC-22.13.40' running on index=itsm sourcetype=incident:milestone or ServiceNow custom sourcetype, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.41",
              "n": "NERC Filing Deadline Compliance Countdown (CIP-008-6 R4 Part 4.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "We compute hours remaining until regulatory filing deadlines from the official incident discovery time to reduce missed submissions.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=itsm` `sourcetype=cip008_incident` (discovered_at, filing_deadline, filed_at)",
              "q": "index=itsm sourcetype=cip008_incident earliest=-60d\n| eval disc=strptime(discovered_at,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval due=strptime(filing_deadline,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval hours_left=round((due-now())/3600,2)\n| eval filed=if(isnotnull(filed_at),1,0)\n| where filed=0\n| table incident_id, discovered_at, filing_deadline, hours_left, owner\n| sort hours_left",
              "m": "(1) Populate deadlines per NERC filing category from legal. (2) Alert at seventy-two, twenty-four, and six hours. (3) Attach filing confirmation artifact. (4) Restrict access to sensitive fields.",
              "z": "Single value (min hours left), Table (open filings), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=cip008_incident` (discovered_at, filing_deadline, filed_at).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate deadlines per NERC filing category from legal. (2) Alert at seventy-two, twenty-four, and six hours. (3) Attach filing confirmation artifact. (4) Restrict access to sensitive fields.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=cip008_incident earliest=-60d\n| eval disc=strptime(discovered_at,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval due=strptime(filing_deadline,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval hours_left=round((due-now())/3600,2)\n| eval filed=if(isnotnull(filed_at),1,0)\n| where filed=0\n| table incident_id, discovered_at, filing_deadline, hours_left, owner\n| sort hours_left\n```\n\nUnderstanding this SPL\n\n**NERC Filing Deadline Compliance Countdown (CIP-008-6 R4 Part 4.1)** — We compute hours remaining until regulatory filing deadlines from the official incident discovery time to reduce missed submissions.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=cip008_incident` (discovered_at, filing_deadline, filed_at). **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: cip008_incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=cip008_incident, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **disc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **hours_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **filed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where filed=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **NERC Filing Deadline Compliance Countdown (CIP-008-6 R4 Part 4.1)**): table incident_id, discovered_at, filing_deadline, hours_left, owner\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min hours left), Table (open filings), Timeline",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compute hours remaining until regulatory filing deadlines from the official incident discovery time to reduce missed submissions. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-008-6 R4 Part 4.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-008-6 R4 Part 4.1 is enforced — Splunk UC-22.13.41: NERC Filing Deadline Compliance Countdown.",
                  "ea": "Saved search 'UC-22.13.41' running on index=itsm sourcetype=cip008_incident (discovered_at, filing_deadline, filed_at), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.42",
              "n": "Incident Evidence Preservation — Splunk Search Artifact Export Audit (CIP-008-6 R5 Part 5.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We audit exports of saved searches and large result downloads during active incidents to support evidence preservation and chain-of-custody discipline.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "`_audit` index `action=search` `info=granted`",
              "q": "index=_audit action=search info=granted earliest=-30d total_count>50000\n| lookup cip008_active_incidents.csv search_name OUTPUT incident_id\n| where isnotnull(incident_id)\n| stats sum(total_count) as rows values(user) as users by incident_id, search_name\n| sort - rows",
              "m": "(1) Maintain active incident lookup updated by IR. (2) Tune `total_count` threshold. (3) Pair with proxy DLP if used. (4) Retain audit logs per records program.",
              "z": "Table (large searches), Bar chart (by user), Single value (export events)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: `_audit` index `action=search` `info=granted`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain active incident lookup updated by IR. (2) Tune `total_count` threshold. (3) Pair with proxy DLP if used. (4) Retain audit logs per records program.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit action=search info=granted earliest=-30d total_count>50000\n| lookup cip008_active_incidents.csv search_name OUTPUT incident_id\n| where isnotnull(incident_id)\n| stats sum(total_count) as rows values(user) as users by incident_id, search_name\n| sort - rows\n```\n\nUnderstanding this SPL\n\n**Incident Evidence Preservation — Splunk Search Artifact Export Audit (CIP-008-6 R5 Part 5.1)** — We audit exports of saved searches and large result downloads during active incidents to support evidence preservation and chain-of-custody discipline.\n\nDocumented **Data sources**: `_audit` index `action=search` `info=granted`. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(incident_id)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by incident_id, search_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (large searches), Bar chart (by user), Single value (export events)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We audit exports of saved searches and large result downloads during active incidents to support evidence preservation and chain-of-custody discipline. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-008-6 R5 Part 5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-008-6 R5 Part 5.1 is enforced — Splunk UC-22.13.42: Incident Evidence Preservation — Splunk Search Artifact Export Audit.",
                  "ea": "Saved search 'UC-22.13.42' running on _audit index action=search info=granted, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.43",
              "n": "Post-Incident Lessons Learned Action Item Tracking (CIP-008-6 R6)",
              "c": "high",
              "f": "intermediate",
              "v": "We track closure of lessons-learned tasks after CIP-008 incidents so corrective actions do not stall after the initial response.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=itsm` `sourcetype=post_incident_action`",
              "q": "index=itsm sourcetype=post_incident_action cip008=1 earliest=-365d\n| eval due_epoch=strptime(due_date,\"%Y-%m-%d\")\n| eval open=if(match(status,\"(?i)closed\"),0,1)\n| where open=1 AND due_epoch < now()\n| table action_id, incident_id, owner, due_date, status, description",
              "m": "(1) Create ticket template for post-incident actions. (2) Weekly reminder digest. (3) Escalate overdue items to CISO office. (4) Link evidence in Splunk dashboards.",
              "z": "Table (overdue actions), Single value (count), Column chart (by owner)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=post_incident_action`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create ticket template for post-incident actions. (2) Weekly reminder digest. (3) Escalate overdue items to CISO office. (4) Link evidence in Splunk dashboards.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=post_incident_action cip008=1 earliest=-365d\n| eval due_epoch=strptime(due_date,\"%Y-%m-%d\")\n| eval open=if(match(status,\"(?i)closed\"),0,1)\n| where open=1 AND due_epoch < now()\n| table action_id, incident_id, owner, due_date, status, description\n```\n\nUnderstanding this SPL\n\n**Post-Incident Lessons Learned Action Item Tracking (CIP-008-6 R6)** — We track closure of lessons-learned tasks after CIP-008 incidents so corrective actions do not stall after the initial response.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=post_incident_action`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: post_incident_action. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=post_incident_action, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **due_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **open** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where open=1 AND due_epoch < now()` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Post-Incident Lessons Learned Action Item Tracking (CIP-008-6 R6)**): table action_id, incident_id, owner, due_date, status, description\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue actions), Single value (count), Column chart (by owner)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track closure of lessons-learned tasks after CIP-008 incidents so corrective actions do not stall after the initial response. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-008-6 R6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-008-6 R6 is enforced — Splunk UC-22.13.43: Post-Incident Lessons Learned Action Item Tracking.",
                  "ea": "Saved search 'UC-22.13.43' running on index=itsm sourcetype=post_incident_action, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.44",
              "n": "Backup Job Success and Failure for BES Databases and Configuration Stores (CIP-009-6 R1 Part 1.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "We verify backup jobs for systems supporting BES recovery completed successfully so recovery planning rests on reliable restore points.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=backup` `sourcetype=veeam:job` or `commvault:job` or `rubrik:events`",
              "q": "index=backup (sourcetype=veeam:job OR sourcetype=commvault:job) earliest=-48h\n| lookup bes_cyber_asset_inventory.csv job_target AS host OUTPUT bes_system\n| where isnotnull(bes_system)\n| stats latest(result) as last_result latest(_time) as last_run by host, job_name\n| where last_result!=\"Success\"\n| table host, job_name, last_result, last_run, bes_system",
              "m": "(1) Map backup job targets to BES inventory. (2) Alert on failures and missed schedules. (3) Pair with ticket automation. (4) Retain success summaries for auditors.",
              "z": "Table (failed jobs), Single value (failure count), Time chart (success rate)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=backup` `sourcetype=veeam:job` or `commvault:job` or `rubrik:events`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map backup job targets to BES inventory. (2) Alert on failures and missed schedules. (3) Pair with ticket automation. (4) Retain success summaries for auditors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup (sourcetype=veeam:job OR sourcetype=commvault:job) earliest=-48h\n| lookup bes_cyber_asset_inventory.csv job_target AS host OUTPUT bes_system\n| where isnotnull(bes_system)\n| stats latest(result) as last_result latest(_time) as last_run by host, job_name\n| where last_result!=\"Success\"\n| table host, job_name, last_result, last_run, bes_system\n```\n\nUnderstanding this SPL\n\n**Backup Job Success and Failure for BES Databases and Configuration Stores (CIP-009-6 R1 Part 1.1)** — We verify backup jobs for systems supporting BES recovery completed successfully so recovery planning rests on reliable restore points.\n\nDocumented **Data sources**: `index=backup` `sourcetype=veeam:job` or `commvault:job` or `rubrik:events`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: veeam:job, commvault:job. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=veeam:job, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, job_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where last_result!=\"Success\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Backup Job Success and Failure for BES Databases and Configuration Stores (CIP-009-6 R1 Part 1.1)**): table host, job_name, last_result, last_run, bes_system\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed jobs), Single value (failure count), Time chart (success rate)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We verify backup jobs for systems supporting BES recovery completed successfully so recovery planning rests on reliable restore points. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-009-6 R1 Part 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-009-6 R1 Part 1.1 is enforced — Splunk UC-22.13.44: Backup Job Success and Failure for BES Databases and Configuration Stores.",
                  "ea": "Saved search 'UC-22.13.44' running on index=backup sourcetype=veeam:job or commvault:job or rubrik:events, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.45",
              "n": "Backup Media Integrity Test Results (CIP-009-6 R2 Part 2.2)",
              "c": "high",
              "f": "intermediate",
              "v": "We track restore-test and checksum results for backup media containing BES configuration or security data.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=backup` `sourcetype=backup:integrity_test`",
              "q": "index=backup sourcetype=backup:integrity_test bes_scope=1 earliest=-180d\n| eval test_epoch=strptime(test_date,\"%Y-%m-%d\")\n| eval age_days=round((now()-test_epoch)/86400,0)\n| where match(result,\"(?i)fail\") OR age_days>120\n| table media_id, test_date, result, age_days, vault",
              "m": "(1) Emit HEC when integrity tests complete. (2) Define maximum age without re-test. (3) Alert storage team. (4) Document failures in CIP-008 if data loss risk.",
              "z": "Table (stale or failed tests), Timeline, Single value (failed)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=backup` `sourcetype=backup:integrity_test`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Emit HEC when integrity tests complete. (2) Define maximum age without re-test. (3) Alert storage team. (4) Document failures in CIP-008 if data loss risk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=backup:integrity_test bes_scope=1 earliest=-180d\n| eval test_epoch=strptime(test_date,\"%Y-%m-%d\")\n| eval age_days=round((now()-test_epoch)/86400,0)\n| where match(result,\"(?i)fail\") OR age_days>120\n| table media_id, test_date, result, age_days, vault\n```\n\nUnderstanding this SPL\n\n**Backup Media Integrity Test Results (CIP-009-6 R2 Part 2.2)** — We track restore-test and checksum results for backup media containing BES configuration or security data.\n\nDocumented **Data sources**: `index=backup` `sourcetype=backup:integrity_test`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: backup:integrity_test. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=backup:integrity_test, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **test_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(result,\"(?i)fail\") OR age_days>120` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Backup Media Integrity Test Results (CIP-009-6 R2 Part 2.2)**): table media_id, test_date, result, age_days, vault\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale or failed tests), Timeline, Single value (failed)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track restore-test and checksum results for backup media containing BES configuration or security data. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-009-6 R2 Part 2.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-009-6 R2 Part 2.2 is enforced — Splunk UC-22.13.45: Backup Media Integrity Test Results.",
                  "ea": "Saved search 'UC-22.13.45' running on index=backup sourcetype=backup:integrity_test, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.46",
              "n": "Recovery Plan Exercise Attendance and Scenario Evidence (CIP-009-6 R3 Part 3.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We log who ran tabletop or functional recovery exercises and which BES systems were exercised, supporting recovery plan testing evidence.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=grc` `sourcetype=cip009_exercise`",
              "q": "index=grc sourcetype=cip009_exercise earliest=-400d\n| eval ex_epoch=strptime(exercise_date,\"%Y-%m-%d\")\n| eval annual_cutoff=relative_time(now(),\"-1y@d\")\n| stats latest(exercise_type) as last_type latest(exercise_date) as last_date values(participant) as people by bes_system\n| eval compliant=if(strptime(last_date,\"%Y-%m-%d\")>=annual_cutoff,1,0)\n| where compliant=0\n| table bes_system, last_date, last_type, people",
              "m": "(1) Push exercise completion events from GRC. (2) Align `bes_system` with CIP-002 list. (3) Alert before annual due. (4) Attach scenario documents out-of-band with links in events.",
              "z": "Table (systems needing exercise), Single value (non-compliant count), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=grc` `sourcetype=cip009_exercise`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Push exercise completion events from GRC. (2) Align `bes_system` with CIP-002 list. (3) Alert before annual due. (4) Attach scenario documents out-of-band with links in events.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=cip009_exercise earliest=-400d\n| eval ex_epoch=strptime(exercise_date,\"%Y-%m-%d\")\n| eval annual_cutoff=relative_time(now(),\"-1y@d\")\n| stats latest(exercise_type) as last_type latest(exercise_date) as last_date values(participant) as people by bes_system\n| eval compliant=if(strptime(last_date,\"%Y-%m-%d\")>=annual_cutoff,1,0)\n| where compliant=0\n| table bes_system, last_date, last_type, people\n```\n\nUnderstanding this SPL\n\n**Recovery Plan Exercise Attendance and Scenario Evidence (CIP-009-6 R3 Part 3.1)** — We log who ran tabletop or functional recovery exercises and which BES systems were exercised, supporting recovery plan testing evidence.\n\nDocumented **Data sources**: `index=grc` `sourcetype=cip009_exercise`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: cip009_exercise. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=cip009_exercise, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ex_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **annual_cutoff** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by bes_system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where compliant=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Recovery Plan Exercise Attendance and Scenario Evidence (CIP-009-6 R3 Part 3.1)**): table bes_system, last_date, last_type, people\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (systems needing exercise), Single value (non-compliant count), Timeline",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We log who ran tabletop or functional recovery exercises and which BES systems were exercised, supporting recovery plan testing evidence. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-009-6 R3 Part 3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-009-6 R3 Part 3.1 is enforced — Splunk UC-22.13.46: Recovery Plan Exercise Attendance and Scenario Evidence.",
                  "ea": "Saved search 'UC-22.13.46' running on index=grc sourcetype=cip009_exercise, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.47",
              "n": "Data Preservation During Recovery Activities (CIP-009-6 R4 Part 4.1)",
              "c": "critical",
              "f": "advanced",
              "v": "We monitor mass deletes or index resets during declared recovery windows that could destroy forensic or compliance evidence.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "`_audit` index destructive actions, `index=itsm` maintenance windows",
              "q": "index=_audit (action=delete OR match(action,\"(?i)clean\")) earliest=-7d\n| lookup bes_splunk_indexes.csv index OUTPUT bes_evidence_index\n| where bes_evidence_index=\"true\"\n| join type=left user [\n    search index=itsm sourcetype=maint_window earliest=-14d\n    | eval mw=1\n    | fields user mw\n  ]\n| where isnull(mw)\n| table _time, user, action, index, object",
              "m": "(1) Tag evidence-bearing indexes in lookup. (2) Require maintenance window tickets for bulk deletes. (3) Page IR and compliance on hits. (4) Pair with bucket freeze procedures.",
              "z": "Table (events), Timeline, Single value (violations)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: `_audit` index destructive actions, `index=itsm` maintenance windows.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag evidence-bearing indexes in lookup. (2) Require maintenance window tickets for bulk deletes. (3) Page IR and compliance on hits. (4) Pair with bucket freeze procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit (action=delete OR match(action,\"(?i)clean\")) earliest=-7d\n| lookup bes_splunk_indexes.csv index OUTPUT bes_evidence_index\n| where bes_evidence_index=\"true\"\n| join type=left user [\n    search index=itsm sourcetype=maint_window earliest=-14d\n    | eval mw=1\n    | fields user mw\n  ]\n| where isnull(mw)\n| table _time, user, action, index, object\n```\n\nUnderstanding this SPL\n\n**Data Preservation During Recovery Activities (CIP-009-6 R4 Part 4.1)** — We monitor mass deletes or index resets during declared recovery windows that could destroy forensic or compliance evidence.\n\nDocumented **Data sources**: `_audit` index destructive actions, `index=itsm` maintenance windows. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where bes_evidence_index=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(mw)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Data Preservation During Recovery Activities (CIP-009-6 R4 Part 4.1)**): table _time, user, action, index, object\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (events), Timeline, Single value (violations)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We monitor mass deletes or index resets during declared recovery windows that could destroy forensic or compliance evidence. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-009-6 R4 Part 4.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-009-6 R4 Part 4.1 is enforced — Splunk UC-22.13.47: Data Preservation During Recovery Activities.",
                  "ea": "Saved search 'UC-22.13.47' running on _audit index destructive actions, index=itsm maintenance windows, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.48",
              "n": "BES Cyber System Restore Drill — RTO Measurement (CIP-009-6 R5 Part 5.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We compute restore-time metrics from drill tickets and application availability pings to evidence recovery capability testing.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=itsm` `sourcetype=restore_drill`, `index=apps` `sourcetype=http:ping`",
              "q": "index=itsm sourcetype=restore_drill bes_system=* earliest=-400d\n| eval start=strptime(started_at,\"%Y-%m-%d %H:%M:%S\")\n| eval end=strptime(restored_at,\"%Y-%m-%d %H:%M:%S\")\n| eval rto_min=round((end-start)/60,2)\n| lookup cip009_rto_targets.csv bes_system OUTPUT target_min\n| where rto_min > target_min\n| table drill_id, bes_system, rto_min, target_min, restored_at",
              "m": "(1) Populate RTO targets from recovery plans. (2) Validate timestamps from orchestration tools. (3) Track remediation for misses. (4) Annual leadership briefing export.",
              "z": "Column chart (RTO vs target), Table (misses), Single value (worst RTO)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=restore_drill`, `index=apps` `sourcetype=http:ping`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate RTO targets from recovery plans. (2) Validate timestamps from orchestration tools. (3) Track remediation for misses. (4) Annual leadership briefing export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=restore_drill bes_system=* earliest=-400d\n| eval start=strptime(started_at,\"%Y-%m-%d %H:%M:%S\")\n| eval end=strptime(restored_at,\"%Y-%m-%d %H:%M:%S\")\n| eval rto_min=round((end-start)/60,2)\n| lookup cip009_rto_targets.csv bes_system OUTPUT target_min\n| where rto_min > target_min\n| table drill_id, bes_system, rto_min, target_min, restored_at\n```\n\nUnderstanding this SPL\n\n**BES Cyber System Restore Drill — RTO Measurement (CIP-009-6 R5 Part 5.1)** — We compute restore-time metrics from drill tickets and application availability pings to evidence recovery capability testing.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=restore_drill`, `index=apps` `sourcetype=http:ping`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: restore_drill. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=restore_drill, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **start** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **end** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **rto_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where rto_min > target_min` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **BES Cyber System Restore Drill — RTO Measurement (CIP-009-6 R5 Part 5.1)**): table drill_id, bes_system, rto_min, target_min, restored_at\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (RTO vs target), Table (misses), Single value (worst RTO)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compute restore-time metrics from drill tickets and application availability pings to evidence recovery capability testing. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-009-6 R5 Part 5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-009-6 R5 Part 5.1 is enforced — Splunk UC-22.13.48: BES Cyber System Restore Drill — RTO Measurement.",
                  "ea": "Saved search 'UC-22.13.48' running on index=itsm sourcetype=restore_drill, index=apps sourcetype=http:ping, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.49",
              "n": "Baseline Configuration Deviation — Key OS and Application Settings (CIP-010-4 R1 Part 1.1)",
              "c": "high",
              "f": "advanced",
              "v": "We compare configuration snapshots from servers and appliances to approved baselines so unauthorized drift on BES Cyber Assets is visible.",
              "t": "Splunk Add-on for Unix and Linux (833), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=config` `sourcetype=config:snapshot`, `bes_config_baseline.csv`",
              "q": "index=config sourcetype=config:snapshot earliest=-24h\n| lookup bes_cyber_asset_inventory.csv host OUTPUT bes_system\n| where isnotnull(bes_system)\n| lookup bes_config_baseline.csv host, setting_name OUTPUT expected_value\n| where expected_value!=current_value\n| table host, setting_name, expected_value, current_value, bes_system",
              "m": "(1) Collect snapshots via scripted input or Chef/Ansible exports. (2) Version baselines with change tickets. (3) Alert on critical settings only first. (4) Feed drift into change review board.",
              "z": "Table (drift rows), Bar chart (by setting), Single value (host count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Unix and Linux (833), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=config` `sourcetype=config:snapshot`, `bes_config_baseline.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Collect snapshots via scripted input or Chef/Ansible exports. (2) Version baselines with change tickets. (3) Alert on critical settings only first. (4) Feed drift into change review board.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=config sourcetype=config:snapshot earliest=-24h\n| lookup bes_cyber_asset_inventory.csv host OUTPUT bes_system\n| where isnotnull(bes_system)\n| lookup bes_config_baseline.csv host, setting_name OUTPUT expected_value\n| where expected_value!=current_value\n| table host, setting_name, expected_value, current_value, bes_system\n```\n\nUnderstanding this SPL\n\n**Baseline Configuration Deviation — Key OS and Application Settings (CIP-010-4 R1 Part 1.1)** — We compare configuration snapshots from servers and appliances to approved baselines so unauthorized drift on BES Cyber Assets is visible.\n\nDocumented **Data sources**: `index=config` `sourcetype=config:snapshot`, `bes_config_baseline.csv`. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (833), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: config; **sourcetype**: config:snapshot. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=config, sourcetype=config:snapshot, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where expected_value!=current_value` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Baseline Configuration Deviation — Key OS and Application Settings (CIP-010-4 R1 Part 1.1)**): table host, setting_name, expected_value, current_value, bes_system\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (drift rows), Bar chart (by setting), Single value (host count)\n\nScripted input (generic example)\nThis use case relies on a scripted input. 
In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare configuration snapshots from servers and appliances to approved baselines so unauthorized drift on BES Cyber Assets is visible. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R1 Part 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-010-4 R1 Part 1.1 is enforced — Splunk UC-22.13.49: Baseline Configuration Deviation — Key OS and Application Settings.",
                  "ea": "Saved search 'UC-22.13.49' running on index=config sourcetype=config:snapshot, bes_config_baseline.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.50",
              "n": "Unauthorized Software Installation on BES Windows Servers (CIP-010-4 R2 Part 2.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "We detect new software inventory entries not tied to an approved package list on in-scope Windows hosts.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=endpoint` `sourcetype=inventory:software`",
              "q": "index=endpoint sourcetype=inventory:software earliest=-7d\n| lookup bes_cyber_asset_inventory.csv host OUTPUT bes_system\n| where isnotnull(bes_system)\n| lookup approved_software_catalog.csv product_name OUTPUT approved\n| where isnull(approved)\n| stats earliest(_time) as first_seen by host, product_name, version\n| sort - first_seen",
              "m": "(1) Populate approved catalog from change management. (2) Deduplicate version strings. (3) Weekly review with server owners. (4) Auto-ticket unapproved installs.",
              "z": "Table (new software), Bar chart (by product), Single value (distinct hosts)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=inventory:software`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate approved catalog from change management. (2) Deduplicate version strings. (3) Weekly review with server owners. (4) Auto-ticket unapproved installs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=inventory:software earliest=-7d\n| lookup bes_cyber_asset_inventory.csv host OUTPUT bes_system\n| where isnotnull(bes_system)\n| lookup approved_software_catalog.csv product_name OUTPUT approved\n| where isnull(approved)\n| stats earliest(_time) as first_seen by host, product_name, version\n| sort - first_seen\n```\n\nUnderstanding this SPL\n\n**Unauthorized Software Installation on BES Windows Servers (CIP-010-4 R2 Part 2.1)** — We detect new software inventory entries not tied to an approved package list on in-scope Windows hosts.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=inventory:software`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: inventory:software. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=inventory:software, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, product_name, version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (new software), Bar chart (by product), Single value (distinct hosts)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect new software inventory entries not tied to an approved package list on in-scope Windows hosts. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R2 Part 2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-010-4 R2 Part 2.1 is enforced — Splunk UC-22.13.50: Unauthorized Software Installation on BES Windows Servers.",
                  "ea": "Saved search 'UC-22.13.50' running on index=endpoint sourcetype=inventory:software, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.51",
              "n": "Configuration Change Authorization Linkage for Network Devices (CIP-010-4 R3 Part 3.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "We require network configuration commits on ESP devices to reference an approved change identifier in syslog or commit comments.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=network` `sourcetype=pan:config`, `index=itsm`",
              "q": "index=network sourcetype=pan:config earliest=-7d\n| rex field=cmd \"(?i)CHG(?<chg>[0-9]{6,10})\"\n| lookup esp_device_inventory.csv hostname AS host OUTPUT in_scope\n| where in_scope=\"true\"\n| join type=left chg [\n    search index=itsm sourcetype=chg:network earliest=-30d\n    | rename number as chg\n    | stats latest(state) as ticket_state by chg\n  ]\n| where isnull(ticket_state) OR ticket_state!=\"Closed\"\n| table _time, host, cmd, chg, ticket_state",
              "m": "(1) Standardize change number format in commit templates. (2) Map devices to ESP scope. (3) Daily alert on missing tickets. (4) Escalate repeat violators.",
              "z": "Table (missing linkage), Bar chart (by device), Single value (count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=network` `sourcetype=pan:config`, `index=itsm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardize change number format in commit templates. (2) Map devices to ESP scope. (3) Daily alert on missing tickets. (4) Escalate repeat violators.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=pan:config earliest=-7d\n| rex field=cmd \"(?i)CHG(?<chg>[0-9]{6,10})\"\n| lookup esp_device_inventory.csv hostname AS host OUTPUT in_scope\n| where in_scope=\"true\"\n| join type=left chg [\n    search index=itsm sourcetype=chg:network earliest=-30d\n    | rename number as chg\n    | stats latest(state) as ticket_state by chg\n  ]\n| where isnull(ticket_state) OR ticket_state!=\"Closed\"\n| table _time, host, cmd, chg, ticket_state\n```\n\nUnderstanding this SPL\n\n**Configuration Change Authorization Linkage for Network Devices (CIP-010-4 R3 Part 3.1)** — We require network configuration commits on ESP devices to reference an approved change identifier in syslog or commit comments.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:config`, `index=itsm`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:config. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=pan:config, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(ticket_state) OR ticket_state!=\"Closed\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Configuration Change Authorization Linkage for Network Devices (CIP-010-4 R3 Part 3.1)**): table _time, host, cmd, chg, ticket_state\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Configuration Change Authorization Linkage for Network Devices (CIP-010-4 R3 Part 3.1)** — We require network configuration commits on ESP devices to reference an approved change identifier in syslog or commit comments.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:config`, `index=itsm`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing linkage), Bar chart (by device), Single value (count)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We require network configuration commits on ESP devices to reference an approved change identifier in syslog or commit comments. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R3 Part 3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-010-4 R3 Part 3.1 is enforced — Splunk UC-22.13.51: Configuration Change Authorization Linkage for Network Devices.",
                  "ea": "Saved search 'UC-22.13.51' running on index=network sourcetype=pan:config, index=itsm, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.52",
              "n": "Vulnerability Assessment After Material Configuration Changes (CIP-010-4 R4 Part 4.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We find material firewall or OS changes without a subsequent vulnerability scan within the required follow-up window.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=itsm` material changes, `index=vuln` Tenable or Qualys",
              "q": "index=itsm sourcetype=chg:network material=1 earliest=-35d\n| eval chg_time=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| rename number as change_id\n| join type=left change_id [\n    search index=vuln sourcetype=tenable:scans earliest=-35d\n    | rename change_ticket as change_id\n    | stats latest(finished_at) as scan_done by change_id\n  ]\n| eval scan_epoch=strptime(scan_done,\"%Y-%m-%d %H:%M:%S\")\n| where isnull(scan_epoch) OR (scan_epoch-chg_time)>1209600\n| table change_id, closed_at, scan_done",
              "m": "(1) Flag material changes in ITSM. (2) Require scanner to tag `change_ticket`. (3) Alert at fourteen days without scan. (4) Document risk acceptance if delayed.",
              "z": "Table (missing scans), Timeline, Single value (open gaps)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=itsm` material changes, `index=vuln` Tenable or Qualys.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Flag material changes in ITSM. (2) Require scanner to tag `change_ticket`. (3) Alert at fourteen days without scan. (4) Document risk acceptance if delayed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=chg:network material=1 earliest=-35d\n| eval chg_time=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| rename number as change_id\n| join type=left change_id [\n    search index=vuln sourcetype=tenable:scans earliest=-35d\n    | rename change_ticket as change_id\n    | stats latest(finished_at) as scan_done by change_id\n  ]\n| eval scan_epoch=strptime(scan_done,\"%Y-%m-%d %H:%M:%S\")\n| where isnull(scan_epoch) OR (scan_epoch-chg_time)>1209600\n| table change_id, closed_at, scan_done\n```\n\nUnderstanding this SPL\n\n**Vulnerability Assessment After Material Configuration Changes (CIP-010-4 R4 Part 4.1)** — We find material firewall or OS changes without a subsequent vulnerability scan within the required follow-up window.\n\nDocumented **Data sources**: `index=itsm` material changes, `index=vuln` Tenable or Qualys. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: chg:network. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=chg:network, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **chg_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **scan_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(scan_epoch) OR (scan_epoch-chg_time)>1209600` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Vulnerability Assessment After Material Configuration Changes (CIP-010-4 R4 Part 4.1)**): table change_id, closed_at, scan_done\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Vulnerability Assessment After Material Configuration Changes (CIP-010-4 R4 Part 4.1)** — We find material firewall or OS changes without a subsequent vulnerability scan within the required follow-up window.\n\nDocumented **Data sources**: `index=itsm` material changes, `index=vuln` Tenable or Qualys. 
**App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing scans), Timeline, Single value (open gaps)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We find material firewall or OS changes without a subsequent vulnerability scan within the required follow-up window. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Vulnerabilities (when CIM-tagged)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R4 Part 4.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-010-4 R4 Part 4.1 is enforced — Splunk UC-22.13.52: Vulnerability Assessment After Material Configuration Changes.",
                  "ea": "Saved search 'UC-22.13.52' running on index=itsm material changes, index=vuln Tenable or Qualys, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.53",
              "n": "Transient Cyber Asset Connection Logging to ESP Jump Zones (CIP-010-4 R5 Part 5.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We log and summarize laptops or vendor devices connecting transiently inside ESP jump zones for transient cyber asset visibility.",
              "t": "Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=network` `sourcetype=pan:traffic`, DHCP logs",
              "q": "index=network sourcetype=pan:traffic dest_zone=\"ESP_JUMP\" earliest=-24h\n| eval tca=if(match(src,\"dhcp_pool\"),1,0)\n| lookup dhcp_leases.csv mac AS mac OUTPUT hostname\n| stats earliest(_time) as first latest(_time) as last sum(bytes) as bytes by src, hostname, user\n| sort - last",
              "m": "(1) Correlate DHCP MAC to user if available. (2) Maintain TCA register with expected devices. (3) Alert on unknown TCA over one hour. (4) Scan-before-connect workflow tracked separately.",
              "z": "Table (TCA sessions), Timeline, Map (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=network` `sourcetype=pan:traffic`, DHCP logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Correlate DHCP MAC to user if available. (2) Maintain TCA register with expected devices. (3) Alert on unknown TCA over one hour. (4) Scan-before-connect workflow tracked separately.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=pan:traffic dest_zone=\"ESP_JUMP\" earliest=-24h\n| eval tca=if(match(src,\"dhcp_pool\"),1,0)\n| lookup dhcp_leases.csv mac AS mac OUTPUT hostname\n| stats earliest(_time) as first latest(_time) as last sum(bytes) as bytes by src, hostname, user\n| sort - last\n```\n\nUnderstanding this SPL\n\n**Transient Cyber Asset Connection Logging to ESP Jump Zones (CIP-010-4 R5 Part 5.1)** — We log and summarize laptops or vendor devices connecting transiently inside ESP jump zones for transient cyber asset visibility.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:traffic`, DHCP logs. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=pan:traffic, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **tca** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by src, hostname, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Transient Cyber Asset Connection Logging to ESP Jump Zones (CIP-010-4 R5 Part 5.1)** — We log and summarize laptops or vendor devices connecting transiently inside ESP jump zones for transient cyber asset visibility.\n\nDocumented **Data sources**: `index=network` `sourcetype=pan:traffic`, DHCP logs. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (TCA sessions), Timeline, Map (optional)",
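              "_spl_sketch": "A minimal sketch of the 'alert on unknown TCA over one hour' step from the implementation notes, assuming a TCA register lookup exists; `tca_register.csv` and its `src`/`expected` columns are hypothetical names, so rename them to match the register you maintain. Index, sourcetype, and dest_zone follow the documented search.\n\nindex=network sourcetype=pan:traffic dest_zone=\"ESP_JUMP\" earliest=-60m\n| stats earliest(_time) as first_seen latest(_time) as last_seen sum(bytes) as bytes by src\n| lookup tca_register.csv src OUTPUT expected\n``` hypothetical register lookup above; keep only devices it does not list ```\n| where isnull(expected)",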
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We log and summarize laptops or vendor devices connecting transiently inside ESP jump zones for transient cyber asset visibility. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - agg_value",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R5 Part 5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-010-4 R5 Part 5.1 is enforced — Splunk UC-22.13.53: Transient Cyber Asset Connection Logging to ESP Jump Zones.",
                  "ea": "Saved search 'UC-22.13.53' running on index=network sourcetype=pan:traffic, DHCP logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.54",
              "n": "Removable Media Mount Events on Engineering Workstations (CIP-010-4 R6 Part 6.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We detect USB or removable media usage on BES engineering workstations to enforce transient media controls.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=wineventlog` `EventCode` 6416 or Sysmon EventCode 12/1",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=6416 earliest=-7d\n| lookup bes_engineering_hosts.csv ComputerName OUTPUT bes_system\n| where isnotnull(bes_system)\n| stats count by ComputerName, DeviceName, ClassName, bes_system\n| sort - count",
              "m": "(1) Enable device installation auditing. (2) Optionally deploy Sysmon for richer context. (3) Weekly review with OT engineering. (4) Document approved maintenance media in lookup.",
              "z": "Table (mounts), Bar chart (by host), Single value (distinct devices)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=wineventlog` `EventCode` 6416 or Sysmon EventCode 12/1.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable device installation auditing. (2) Optionally deploy Sysmon for richer context. (3) Weekly review with OT engineering. (4) Document approved maintenance media in lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=6416 earliest=-7d\n| lookup bes_engineering_hosts.csv ComputerName OUTPUT bes_system\n| where isnotnull(bes_system)\n| stats count by ComputerName, DeviceName, ClassName, bes_system\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Removable Media Mount Events on Engineering Workstations (CIP-010-4 R6 Part 6.1)** — We detect USB or removable media usage on BES engineering workstations to enforce transient media controls.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode` 6416 or Sysmon EventCode 12/1. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ComputerName, DeviceName, ClassName, bes_system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Removable Media Mount Events on Engineering Workstations (CIP-010-4 R6 Part 6.1)** — We detect USB or removable media usage on BES engineering workstations to enforce transient media controls.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode` 6416 or Sysmon EventCode 12/1. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (mounts), Bar chart (by host), Single value (distinct devices)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect USB or removable media usage on BES engineering workstations to enforce transient media controls. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Endpoint (when CIM-tagged)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R6 Part 6.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-010-4 R6 Part 6.1 is enforced — Splunk UC-22.13.54: Removable Media Mount Events on Engineering Workstations.",
                  "ea": "Saved search 'UC-22.13.54' running on index=wineventlog EventCode 6416 or Sysmon EventCode 12/1, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.55",
              "n": "TCA Scan-Before-Connect Compliance from NAC or Agent Logs (CIP-010-4 R5 Part 5.3)",
              "c": "critical",
              "f": "intermediate",
              "v": "We verify transient cyber assets received a successful posture or malware scan before being granted ESP network access.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=network` `sourcetype=cisco:ise:radius` or `sourcetype=pan:globalprotect`",
              "q": "index=network sourcetype=cisco:ise:radius earliest=-24h Acct-Status-Type=Start\n| lookup tca_device_registry.csv calling_station_id OUTPUT asset_tag\n| where isnotnull(asset_tag)\n| join type=left calling_station_id [\n    search index=endpoint sourcetype=defender:scan earliest=-24h\n    | rename device_id as calling_station_id\n    | stats latest(scan_result) as scan by calling_station_id\n  ]\n| where scan!=\"clean\" OR isnull(scan)\n| table _time, calling_station_id, user, scan, asset_tag",
              "m": "(1) Align RADIUS calling-station IDs with endpoint inventory keys. (2) Tune for your NAC vendor sourcetype. (3) Block or quarantine integration outside Splunk. (4) Retain pass/fail for audits.",
              "z": "Table (non-compliant TCA), Single value (count), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=network` `sourcetype=cisco:ise:radius` or `sourcetype=pan:globalprotect`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align RADIUS calling-station IDs with endpoint inventory keys. (2) Tune for your NAC vendor sourcetype. (3) Block or quarantine integration outside Splunk. (4) Retain pass/fail for audits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=cisco:ise:radius earliest=-24h Acct-Status-Type=Start\n| lookup tca_device_registry.csv calling_station_id OUTPUT asset_tag\n| where isnotnull(asset_tag)\n| join type=left calling_station_id [\n    search index=endpoint sourcetype=defender:scan earliest=-24h\n    | rename device_id as calling_station_id\n    | stats latest(scan_result) as scan by calling_station_id\n  ]\n| where scan!=\"clean\" OR isnull(scan)\n| table _time, calling_station_id, user, scan, asset_tag\n```\n\nUnderstanding this SPL\n\n**TCA Scan-Before-Connect Compliance from NAC or Agent Logs (CIP-010-4 R5 Part 5.3)** — We verify transient cyber assets received a successful posture or malware scan before being granted ESP network access.\n\nDocumented **Data sources**: `index=network` `sourcetype=cisco:ise:radius` or `sourcetype=pan:globalprotect`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: cisco:ise:radius. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=cisco:ise:radius, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(asset_tag)` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where scan!=\"clean\" OR isnull(scan)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **TCA Scan-Before-Connect Compliance from NAC or Agent Logs (CIP-010-4 R5 Part 5.3)**): table _time, calling_station_id, user, scan, asset_tag\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**TCA Scan-Before-Connect Compliance from NAC or Agent Logs (CIP-010-4 R5 Part 5.3)** — We verify transient cyber assets received a successful posture or malware scan before being granted ESP network access.\n\nDocumented **Data sources**: `index=network` `sourcetype=cisco:ise:radius` or `sourcetype=pan:globalprotect`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant TCA), Single value (count), Timeline",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We verify transient cyber assets received a successful posture or malware scan before being granted ESP network access. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R5 Part 5.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-010-4 R5 Part 5.3 is enforced — Splunk UC-22.13.55: TCA Scan-Before-Connect Compliance from NAC or Agent Logs.",
                  "ea": "Saved search 'UC-22.13.55' running on index=network sourcetype=cisco:ise:radius or sourcetype=pan:globalprotect, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.56",
              "n": "Baseline Update Documentation — Config Baseline Version Changes (CIP-010-4 R1 Part 1.3)",
              "c": "high",
              "f": "intermediate",
              "v": "We audit when official configuration baseline versions change and require an authorizing record for each update.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=grc` `sourcetype=cip010_baseline_version`",
              "q": "index=grc sourcetype=cip010_baseline_version earliest=-365d\n| sort 0 - _time\n| streamstats current=f latest(baseline_version) as prev_version by platform\n| where baseline_version!=prev_version\n| eval has_auth=if(isnotnull(auth_record_id),1,0)\n| where has_auth=0\n| table _time, platform, baseline_version, prev_version, owner, auth_record_id",
              "m": "(1) Emit version bump events from CMDB or GitOps. (2) Require `auth_record_id` from CAB approval. (3) Alert compliance on missing linkage. (4) Archive prior baselines offline.",
              "z": "Timeline (version bumps), Table (missing auth), Single value (violations)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=grc` `sourcetype=cip010_baseline_version`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Emit version bump events from CMDB or GitOps. (2) Require `auth_record_id` from CAB approval. (3) Alert compliance on missing linkage. (4) Archive prior baselines offline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=cip010_baseline_version earliest=-365d\n| sort 0 - _time\n| streamstats current=f latest(baseline_version) as prev_version by platform\n| where baseline_version!=prev_version\n| eval has_auth=if(isnotnull(auth_record_id),1,0)\n| where has_auth=0\n| table _time, platform, baseline_version, prev_version, owner, auth_record_id\n```\n\nUnderstanding this SPL\n\n**Baseline Update Documentation — Config Baseline Version Changes (CIP-010-4 R1 Part 1.3)** — We audit when official configuration baseline versions change and require an authorizing record for each update.\n\nDocumented **Data sources**: `index=grc` `sourcetype=cip010_baseline_version`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: cip010_baseline_version. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=cip010_baseline_version, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by platform** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where baseline_version!=prev_version` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **has_auth** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where has_auth=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Baseline Update Documentation — Config Baseline Version Changes (CIP-010-4 R1 Part 1.3)**): table _time, platform, baseline_version, prev_version, owner, auth_record_id\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (version bumps), Table (missing auth), Single value (violations)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We audit when official configuration baseline versions change and require an authorizing record for each update. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-010-4 R1 Part 1.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-010-4 R1 Part 1.3 is enforced — Splunk UC-22.13.56: Baseline Update Documentation — Config Baseline Version Changes.",
                  "ea": "Saved search 'UC-22.13.56' running on index=grc sourcetype=cip010_baseline_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.57",
              "n": "BES Cyber System Information Access via Share and Web Downloads (CIP-011-3 R1 Part 1.1)",
              "c": "critical",
              "f": "advanced",
              "v": "We monitor access to file shares and web paths classified as BES Cyber System Information for inappropriate bulk retrieval.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=wineventlog` `EventCode=5145` or proxy logs, `bcsi_path_tags.csv`",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5145 earliest=-24h\n| lookup bcsi_path_tags.csv ShareName, RelativeTargetName OUTPUT classification\n| where classification=\"BCSI\"\n| stats sum(TransferredBytes) as bytes by Account_Name, ShareName, ComputerName\n| where bytes > 104857600\n| sort - bytes",
              "m": "(1) Tag BCSI paths in lookup from information protection register. (2) Enable detailed file share auditing. (3) Alert on high-byte transfers. (4) Integrate with insider threat program.",
              "z": "Table (large transfers), Bar chart (by user), Single value (events)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=wineventlog` `EventCode=5145` or proxy logs, `bcsi_path_tags.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag BCSI paths in lookup from information protection register. (2) Enable detailed file share auditing. (3) Alert on high-byte transfers. (4) Integrate with insider threat program.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=5145 earliest=-24h\n| lookup bcsi_path_tags.csv ShareName, RelativeTargetName OUTPUT classification\n| where classification=\"BCSI\"\n| stats sum(TransferredBytes) as bytes by Account_Name, ShareName, ComputerName\n| where bytes > 104857600\n| sort - bytes\n```\n\nUnderstanding this SPL\n\n**BES Cyber System Information Access via Share and Web Downloads (CIP-011-3 R1 Part 1.1)** — We monitor access to file shares and web paths classified as BES Cyber System Information for inappropriate bulk retrieval.\n\nDocumented **Data sources**: `index=wineventlog` `EventCode=5145` or proxy logs, `bcsi_path_tags.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where classification=\"BCSI\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by Account_Name, ShareName, ComputerName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where bytes > 104857600` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (large transfers), Bar chart (by user), Single value (events)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We monitor access to file shares and web paths classified as BES Cyber System Information for inappropriate bulk retrieval. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-011-3 R1 Part 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-011-3 R1 Part 1.1 is enforced — Splunk UC-22.13.57: BES Cyber System Information Access via Share and Web Downloads.",
                  "ea": "Saved search 'UC-22.13.57' running on index=wineventlog EventCode=5145 or proxy logs, bcsi_path_tags.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.58",
              "n": "BCSI Storage Location Inventory vs. Observed Disk Paths (CIP-011-3 R2 Part 2.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We compare discovered storage paths containing BCSI keywords against the approved storage location inventory.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=inventory` `sourcetype=filesystem:scan`, `bcsi_approved_locations.csv`",
              "q": "index=inventory sourcetype=filesystem:scan earliest=-7d match(path,\"(?i)one-line|relay|settings|cyber\")\n| lookup bes_cyber_asset_inventory.csv host OUTPUT bes_system\n| where isnotnull(bes_system)\n| lookup bcsi_approved_locations.csv host, path_prefix OUTPUT approved\n| where isnull(approved)\n| stats values(path) as paths by host, bes_system",
              "m": "(1) Run constrained filesystem scans with legal approval. (2) Tune regex to reduce false positives. (3) Investigate unknown paths quickly. (4) Update inventory when validated.",
              "z": "Table (unapproved paths), Single value (host count), Bar chart (by site)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=inventory` `sourcetype=filesystem:scan`, `bcsi_approved_locations.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Run constrained filesystem scans with legal approval. (2) Tune regex to reduce false positives. (3) Investigate unknown paths quickly. (4) Update inventory when validated.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=inventory sourcetype=filesystem:scan earliest=-7d match(path,\"(?i)one-line|relay|settings|cyber\")\n| lookup bes_cyber_asset_inventory.csv host OUTPUT bes_system\n| where isnotnull(bes_system)\n| lookup bcsi_approved_locations.csv host, path_prefix OUTPUT approved\n| where isnull(approved)\n| stats values(path) as paths by host, bes_system\n```\n\nUnderstanding this SPL\n\n**BCSI Storage Location Inventory vs. Observed Disk Paths (CIP-011-3 R2 Part 2.1)** — We compare discovered storage paths containing BCSI keywords against the approved storage location inventory.\n\nDocumented **Data sources**: `index=inventory` `sourcetype=filesystem:scan`, `bcsi_approved_locations.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: inventory; **sourcetype**: filesystem:scan. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=inventory, sourcetype=filesystem:scan, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(bes_system)` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, bes_system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unapproved paths), Single value (host count), Bar chart (by site)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare discovered storage paths containing BCSI keywords against the approved storage location inventory. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-011-3 R2 Part 2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-011-3 R2 Part 2.1 is enforced — Splunk UC-22.13.58: BCSI Storage Location Inventory vs. Observed Disk Paths.",
                  "ea": "Saved search 'UC-22.13.58' running on index=inventory sourcetype=filesystem:scan, bcsi_approved_locations.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.59",
              "n": "Information Handling Procedure Compliance — Email DLP for BCSI (CIP-011-3 R3 Part 3.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "We track Data Loss Prevention blocks on messages tagged as BCSI to show handling rules are enforced in email channels.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=email` `sourcetype=mimecast:dlp` or `ms:o365:dlp`",
              "q": "index=email sourcetype=mimecast:dlp earliest=-7d policy=\"BCSI\"\n| stats count by action, sender, recipient\n| sort - count",
              "m": "(1) Map DLP policies to BCSI categories. (2) Differentiate block vs notify. (3) Weekly compliance metrics. (4) Investigate repeated violators via HR process.",
              "z": "Stacked bar (action), Table (top senders), Single value (blocked count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=email` `sourcetype=mimecast:dlp` or `ms:o365:dlp`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map DLP policies to BCSI categories. (2) Differentiate block vs notify. (3) Weekly compliance metrics. (4) Investigate repeated violators via HR process.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=email sourcetype=mimecast:dlp earliest=-7d policy=\"BCSI\"\n| stats count by action, sender, recipient\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Information Handling Procedure Compliance — Email DLP for BCSI (CIP-011-3 R3 Part 3.1)** — We track Data Loss Prevention blocks on messages tagged as BCSI to show handling rules are enforced in email channels.\n\nDocumented **Data sources**: `index=email` `sourcetype=mimecast:dlp` or `ms:o365:dlp`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: email; **sourcetype**: mimecast:dlp. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=email, sourcetype=mimecast:dlp, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by action, sender, recipient** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (action), Table (top senders), Single value (blocked count)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track Data Loss Prevention blocks on messages tagged as BCSI to show handling rules are enforced in email channels. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-011-3 R3 Part 3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-011-3 R3 Part 3.1 is enforced — Splunk UC-22.13.59: Information Handling Procedure Compliance — Email DLP for BCSI.",
                  "ea": "Saved search 'UC-22.13.59' running on index=email sourcetype=mimecast:dlp or ms:o365:dlp, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.60",
              "n": "BCSI Destruction and Sanitization Evidence from ITAD Tickets (CIP-011-3 R4 Part 4.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We correlate IT asset disposal tickets with wipe certificates indexed as structured events for sanitization evidence.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=itsm` `sourcetype=itad:disposal`, `sourcetype=itad:wipe_cert`",
              "q": "index=itsm sourcetype=itad:disposal status=\"completed\" earliest=-365d\n| rename asset_serial as serial\n| join type=left serial [\n    search index=itsm sourcetype=itad:wipe_cert earliest=-365d\n    | stats latest(wipe_method) as method latest(cert_id) as cert by serial\n  ]\n| where bcsi_stored=\"true\" AND isnull(cert)\n| table disposal_id, serial, asset_tag, completed_at, cert",
              "m": "(1) Require vendors to upload certificate metadata to ITSM. (2) Alert when disposal closes without cert. (3) Store PDF hashes out-of-band if needed. (4) Annual sampling by internal audit.",
              "z": "Table (missing certs), Single value (gap count), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=itad:disposal`, `sourcetype=itad:wipe_cert`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require vendors to upload certificate metadata to ITSM. (2) Alert when disposal closes without cert. (3) Store PDF hashes out-of-band if needed. (4) Annual sampling by internal audit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=itad:disposal status=\"completed\" earliest=-365d\n| rename asset_serial as serial\n| join type=left serial [\n    search index=itsm sourcetype=itad:wipe_cert earliest=-365d\n    | stats latest(wipe_method) as method latest(cert_id) as cert by serial\n  ]\n| where bcsi_stored=\"true\" AND isnull(cert)\n| table disposal_id, serial, asset_tag, completed_at, cert\n```\n\nUnderstanding this SPL\n\n**BCSI Destruction and Sanitization Evidence from ITAD Tickets (CIP-011-3 R4 Part 4.1)** — We correlate IT asset disposal tickets with wipe certificates indexed as structured events for sanitization evidence.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=itad:disposal`, `sourcetype=itad:wipe_cert`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: itad:disposal. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=itad:disposal, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where bcsi_stored=\"true\" AND isnull(cert)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **BCSI Destruction and Sanitization Evidence from ITAD Tickets (CIP-011-3 R4 Part 4.1)**): table disposal_id, serial, asset_tag, completed_at, cert\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing certs), Single value (gap count), Timeline",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We correlate IT asset disposal tickets with wipe certificates indexed as structured events for sanitization evidence. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-011-3 R4 Part 4.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-011-3 R4 Part 4.1 is enforced — Splunk UC-22.13.60: BCSI Destruction and Sanitization Evidence from ITAD Tickets.",
                  "ea": "Saved search 'UC-22.13.60' running on index=itsm sourcetype=itad:disposal, sourcetype=itad:wipe_cert, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.61",
              "n": "Control Center Real-Time Assessment TLS Cipher and Certificate Health (CIP-012-1 R1 Part 1.1)",
              "c": "critical",
              "f": "advanced",
              "v": "We monitor TLS handshakes or certificate metadata for inter-control-center assessment links to catch weak ciphers or expiring certificates.",
              "t": "Splunk Add-on for Stream (optional) or certificate inventory feeds, Splunk Common Information Model Add-on (1621)",
              "d": "`index=cert` `sourcetype=cert:inventory`, `control_center_links.csv`",
              "q": "index=cert sourcetype=cert:inventory earliest=-7d\n| lookup control_center_links.csv fqdn OUTPUT link_name cip012_scope\n| where cip012_scope=\"true\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%d\")\n| eval days_left=round((exp_epoch-now())/86400,0)\n| where days_left < 45 OR match(cipher_suite,\"(?i)RC4|DES\")\n| table link_name, fqdn, days_left, cipher_suite, issuer",
              "m": "(1) Populate inventory from scanners or ACME logs. (2) Alert at forty-five and fourteen days to expiry. (3) Track cipher suite deprecations. (4) Pair with firewall allowlist verification.",
              "z": "Table (at-risk links), Single value (expiring count), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Certificates](https://docs.splunk.com/Documentation/CIM/latest/User/Certificates)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Stream (optional) or certificate inventory feeds, Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=cert` `sourcetype=cert:inventory`, `control_center_links.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate inventory from scanners or ACME logs. (2) Alert at forty-five and fourteen days to expiry. (3) Track cipher suite deprecations. (4) Pair with firewall allowlist verification.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cert sourcetype=cert:inventory earliest=-7d\n| lookup control_center_links.csv fqdn OUTPUT link_name cip012_scope\n| where cip012_scope=\"true\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%d\")\n| eval days_left=round((exp_epoch-now())/86400,0)\n| where days_left < 45 OR match(cipher_suite,\"(?i)RC4|DES\")\n| table link_name, fqdn, days_left, cipher_suite, issuer\n```\n\nUnderstanding this SPL\n\n**Control Center Real-Time Assessment TLS Cipher and Certificate Health (CIP-012-1 R1 Part 1.1)** — We monitor TLS handshakes or certificate metadata for inter-control-center assessment links to catch weak ciphers or expiring certificates.\n\nDocumented **Data sources**: `index=cert` `sourcetype=cert:inventory`, `control_center_links.csv`. **App/TA** (typical add-on context): Splunk Add-on for Stream (optional) or certificate inventory feeds, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cert; **sourcetype**: cert:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cert, sourcetype=cert:inventory, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cip012_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left < 45 OR match(cipher_suite,\"(?i)RC4|DES\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Control Center Real-Time Assessment TLS Cipher and Certificate Health (CIP-012-1 R1 Part 1.1)**): table link_name, fqdn, days_left, cipher_suite, issuer\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Control Center Real-Time Assessment TLS Cipher and Certificate Health (CIP-012-1 R1 Part 1.1)** — We monitor TLS handshakes or certificate metadata for inter-control-center assessment links to catch weak ciphers or expiring certificates.\n\nDocumented **Data sources**: `index=cert` `sourcetype=cert:inventory`, `control_center_links.csv`. **App/TA** (typical add-on context): Splunk Add-on for Stream (optional) or certificate inventory feeds, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Certificates.All_Certificates` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (at-risk links), Single value (expiring count), Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We monitor TLS handshakes or certificate metadata for inter-control-center assessment links to catch weak ciphers or expiring certificates. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Certificates (when CIM-tagged)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-012-1 R1 Part 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-012-1 R1 Part 1.1 is enforced — Splunk UC-22.13.61: Control Center Real-Time Assessment TLS Cipher and Certificate Health.",
                  "ea": "Saved search 'UC-22.13.61' running on index=cert sourcetype=cert:inventory, control_center_links.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.62",
              "n": "Inter-Control-Center Link Availability from Synthetic Tests (CIP-012-1 R2 Part 2.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "We measure synthetic ping or application handshake success rates between control centers to evidence communications availability.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=synthetic` `sourcetype=http:ping` or `sourcetype=tcp:check`",
              "q": "index=synthetic (sourcetype=http:ping OR sourcetype=tcp:check) cip012_link=1 earliest=-24h\n| stats count(eval(match(status,\"(?i)success\"))) as ok count as total by link_id\n| eval pct=round(100*ok/total,2)\n| where pct < 99.5\n| lookup control_center_links.csv link_id OUTPUT src_cc dest_cc\n| table link_id, src_cc, dest_cc, pct, ok, total",
              "m": "(1) Deploy lightweight synthetic from each center. (2) Tune SLA percentage. (3) Page network operations on breach. (4) Retain raw samples for monthly report.",
              "z": "Time chart (availability), Single value (worst link), Table (breaches)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=synthetic` `sourcetype=http:ping` or `sourcetype=tcp:check`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Deploy lightweight synthetic from each center. (2) Tune SLA percentage. (3) Page network operations on breach. (4) Retain raw samples for monthly report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=synthetic (sourcetype=http:ping OR sourcetype=tcp:check) cip012_link=1 earliest=-24h\n| stats count(eval(match(status,\"(?i)success\"))) as ok count as total by link_id\n| eval pct=round(100*ok/total,2)\n| where pct < 99.5\n| lookup control_center_links.csv link_id OUTPUT src_cc dest_cc\n| table link_id, src_cc, dest_cc, pct, ok, total\n```\n\nUnderstanding this SPL\n\n**Inter-Control-Center Link Availability from Synthetic Tests (CIP-012-1 R2 Part 2.1)** — We measure synthetic ping or application handshake success rates between control centers to evidence communications availability.\n\nDocumented **Data sources**: `index=synthetic` `sourcetype=http:ping` or `sourcetype=tcp:check`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: synthetic; **sourcetype**: http:ping, tcp:check. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=synthetic, sourcetype=http:ping, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by link_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct < 99.5` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Pipeline stage (see **Inter-Control-Center Link Availability from Synthetic Tests (CIP-012-1 R2 Part 2.1)**): table link_id, src_cc, dest_cc, pct, ok, total\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (availability), Single value (worst link), Table (breaches)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measure synthetic ping or application handshake success rates between control centers to evidence communications availability. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-012-1 R2 Part 2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-012-1 R2 Part 2.1 is enforced — Splunk UC-22.13.62: Inter-Control-Center Link Availability from Synthetic Tests.",
                  "ea": "Saved search 'UC-22.13.62' running on index=synthetic sourcetype=http:ping or sourcetype=tcp:check, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.63",
              "n": "Communication Path Integrity — Unexpected Route or ASN Changes (CIP-012-1 R3 Part 3.1)",
              "c": "high",
              "f": "advanced",
              "v": "We detect when BGP or path monitoring shows control center interconnects traversing unexpected autonomous systems or countries.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=netflow` `sourcetype=flow:internet` or `sourcetype=thousandeyes:tests`",
              "q": "index=netflow sourcetype=flow:internet cip012_path=1 earliest=-24h\n| stats values(dest_asn) as asn_list by src_cc, dest_cc, hour\n| lookup cip012_approved_asn.csv src_cc, dest_cc, dest_asn OUTPUT approved\n| mvexpand asn_list\n| rename asn_list as dest_asn\n| lookup cip012_approved_asn.csv src_cc, dest_cc, dest_asn OUTPUT approved\n| where isnull(approved)\n| table src_cc, dest_cc, dest_asn, hour",
              "m": "(1) Build approved ASN matrix with carriers. (2) Enrich flows with GeoIP cautiously. (3) Alert NOC on drift. (4) Document maintenance reroutes in lookup.",
              "z": "Table (unexpected ASNs), Map (optional), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=netflow` `sourcetype=flow:internet` or `sourcetype=thousandeyes:tests`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build approved ASN matrix with carriers. (2) Enrich flows with GeoIP cautiously. (3) Alert NOC on drift. (4) Document maintenance reroutes in lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netflow sourcetype=flow:internet cip012_path=1 earliest=-24h\n| stats values(dest_asn) as asn_list by src_cc, dest_cc, hour\n| lookup cip012_approved_asn.csv src_cc, dest_cc, dest_asn OUTPUT approved\n| mvexpand asn_list\n| rename asn_list as dest_asn\n| lookup cip012_approved_asn.csv src_cc, dest_cc, dest_asn OUTPUT approved\n| where isnull(approved)\n| table src_cc, dest_cc, dest_asn, hour\n```\n\nUnderstanding this SPL\n\n**Communication Path Integrity — Unexpected Route or ASN Changes (CIP-012-1 R3 Part 3.1)** — We detect when BGP or path monitoring shows control center interconnects traversing unexpected autonomous systems or countries.\n\nDocumented **Data sources**: `index=netflow` `sourcetype=flow:internet` or `sourcetype=thousandeyes:tests`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netflow; **sourcetype**: flow:internet. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netflow, sourcetype=flow:internet, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_cc, dest_cc, hour** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Communication Path Integrity — Unexpected Route or ASN Changes (CIP-012-1 R3 Part 3.1)**): table src_cc, dest_cc, dest_asn, hour\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Communication Path Integrity — Unexpected Route or ASN Changes (CIP-012-1 R3 Part 3.1)** — We detect when BGP or path monitoring shows control center interconnects traversing unexpected autonomous systems or countries.\n\nDocumented **Data sources**: `index=netflow` `sourcetype=flow:internet` or `sourcetype=thousandeyes:tests`. 
**App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unexpected ASNs), Map (optional), Timeline",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect when BGP or path monitoring shows control center interconnects traversing unexpected autonomous systems or countries. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-012-1 R3 Part 3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-012-1 R3 Part 3.1 is enforced — Splunk UC-22.13.63: Communication Path Integrity — Unexpected Route or ASN Changes.",
                  "ea": "Saved search 'UC-22.13.63' running on index=netflow sourcetype=flow:internet or sourcetype=thousandeyes:tests, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.64",
              "n": "Vendor Risk Assessment Due Dates for Critical Suppliers (CIP-013-1 R1 Part 1.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We track vendor security risk assessments for suppliers touching BES Cyber Systems and alert before assessments expire.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=grc` `sourcetype=vendor:risk`",
              "q": "index=grc sourcetype=vendor:risk critical_vendor=1 earliest=-730d\n| eval due=strptime(assessment_due,\"%Y-%m-%d\")\n| eval days_left=round((due-now())/86400,0)\n| where days_left < 30 OR isnull(assessment_completed)\n| table vendor_id, vendor_name, assessment_due, days_left, owner\n| sort days_left",
              "m": "(1) Sync vendor register from TPRM tool. (2) Alert at ninety, thirty, and seven days. (3) Block new access for expired high-risk vendors per policy. (4) Export for supply chain committee.",
              "z": "Table (due soon), Single value (expired count), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=grc` `sourcetype=vendor:risk`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Sync vendor register from TPRM tool. (2) Alert at ninety, thirty, and seven days. (3) Block new access for expired high-risk vendors per policy. (4) Export for supply chain committee.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=vendor:risk critical_vendor=1 earliest=-730d\n| eval due=strptime(assessment_due,\"%Y-%m-%d\")\n| eval days_left=round((due-now())/86400,0)\n| where days_left < 30 OR isnull(assessment_completed)\n| table vendor_id, vendor_name, assessment_due, days_left, owner\n| sort days_left\n```\n\nUnderstanding this SPL\n\n**Vendor Risk Assessment Due Dates for Critical Suppliers (CIP-013-1 R1 Part 1.1)** — We track vendor security risk assessments for suppliers touching BES Cyber Systems and alert before assessments expire.\n\nDocumented **Data sources**: `index=grc` `sourcetype=vendor:risk`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: vendor:risk. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=vendor:risk, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_left** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_left < 30 OR isnull(assessment_completed)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Vendor Risk Assessment Due Dates for Critical Suppliers (CIP-013-1 R1 Part 1.1)**): table vendor_id, vendor_name, assessment_due, days_left, owner\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (due soon), Single value (expired count), Timeline",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track vendor security risk assessments for suppliers touching BES Cyber Systems and alert before assessments expire. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-013-1 R1 Part 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-013-1 R1 Part 1.1 is enforced — Splunk UC-22.13.64: Vendor Risk Assessment Due Dates for Critical Suppliers.",
                  "ea": "Saved search 'UC-22.13.64' running on index=grc sourcetype=vendor:risk, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.65",
              "n": "Software Package Integrity — Signed Installer Verification Failures (CIP-013-1 R2 Part 2.1)",
              "c": "critical",
              "f": "advanced",
              "v": "We capture endpoint or packaging pipeline events where installers fail signature or hash verification for software deployed to BES environments.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=endpoint` `sourcetype=ms:defender:behavior` or `sourcetype=package:verify`",
              "q": "index=endpoint sourcetype=package:verify bes_scope=1 earliest=-30d\n| where match(result,\"(?i)fail|invalid|unsigned\")\n| stats count by package_name, host, signer, result\n| sort - count",
              "m": "(1) Instrument package repos to emit verification events. (2) Block deployment outside pipeline. (3) Page security on any failure. (4) Map to CIP-010 unauthorized software use case.",
              "z": "Table (failures), Bar chart (by package), Single value (hosts)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=ms:defender:behavior` or `sourcetype=package:verify`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument package repos to emit verification events. (2) Block deployment outside pipeline. (3) Page security on any failure. (4) Map to CIP-010 unauthorized software use case.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=package:verify bes_scope=1 earliest=-30d\n| where match(result,\"(?i)fail|invalid|unsigned\")\n| stats count by package_name, host, signer, result\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Software Package Integrity — Signed Installer Verification Failures (CIP-013-1 R2 Part 2.1)** — We capture endpoint or packaging pipeline events where installers fail signature or hash verification for software deployed to BES environments.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=ms:defender:behavior` or `sourcetype=package:verify`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: package:verify. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=package:verify, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(result,\"(?i)fail|invalid|unsigned\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by package_name, host, signer, result** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Software Package Integrity — Signed Installer Verification Failures (CIP-013-1 R2 Part 2.1)** — We capture endpoint or packaging pipeline events where installers fail signature or hash verification for software deployed to BES environments.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=ms:defender:behavior` or `sourcetype=package:verify`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failures), Bar chart (by package), Single value (hosts)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We capture endpoint or packaging pipeline events where installers fail signature or hash verification for software deployed to BES environments. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Malware (when CIM-tagged)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-013-1 R2 Part 2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-013-1 R2 Part 2.1 is enforced — Splunk UC-22.13.65: Software Package Integrity — Signed Installer Verification Failures.",
                  "ea": "Saved search 'UC-22.13.65' running on index=endpoint sourcetype=ms:defender:behavior or sourcetype=package:verify, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.66",
              "n": "Vendor Security Incident Notification Receipt Tracking (CIP-013-1 R3 Part 3.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We log inbound vendor security bulletins and track acknowledgement SLAs for incidents that may affect BES software or services.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=email` `sourcetype=vendor:bulletin` or HEC JSON",
              "q": "index=email sourcetype=vendor:bulletin earliest=-90d severity IN (\"high\",\"critical\")\n| eval recv=strptime(received_at,\"%Y-%m-%d %H:%M:%S\")\n| eval ack_epoch=strptime(acknowledged_at,\"%Y-%m-%d %H:%M:%S\")\n| eval ack_h=round((ack_epoch-recv)/3600,2)\n| where isnull(acknowledged_at) OR ack_h>72\n| table bulletin_id, vendor, received_at, acknowledged_at, ack_h",
              "m": "(1) Parse vendor mailing lists into structured events. (2) Define acknowledgement workflow in ITSM. (3) Alert procurement and security. (4) Link to CIP-008 if operational impact.",
              "z": "Table (late acks), Single value (open critical), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=email` `sourcetype=vendor:bulletin` or HEC JSON.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Parse vendor mailing lists into structured events. (2) Define acknowledgement workflow in ITSM. (3) Alert procurement and security. (4) Link to CIP-008 if operational impact.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=email sourcetype=vendor:bulletin earliest=-90d severity IN (\"high\",\"critical\")\n| eval recv=strptime(received_at,\"%Y-%m-%d %H:%M:%S\")\n| eval ack_epoch=strptime(acknowledged_at,\"%Y-%m-%d %H:%M:%S\")\n| eval ack_h=round((ack_epoch-recv)/3600,2)\n| where isnull(acknowledged_at) OR ack_h>72\n| table bulletin_id, vendor, received_at, acknowledged_at, ack_h\n```\n\nUnderstanding this SPL\n\n**Vendor Security Incident Notification Receipt Tracking (CIP-013-1 R3 Part 3.1)** — We log inbound vendor security bulletins and track acknowledgement SLAs for incidents that may affect BES software or services.\n\nDocumented **Data sources**: `index=email` `sourcetype=vendor:bulletin` or HEC JSON. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: email; **sourcetype**: vendor:bulletin. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=email, sourcetype=vendor:bulletin, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **recv** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ack_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ack_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(acknowledged_at) OR ack_h>72` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Vendor Security Incident Notification Receipt Tracking (CIP-013-1 R3 Part 3.1)**): table bulletin_id, vendor, received_at, acknowledged_at, ack_h\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (late acks), Single value (open critical), Timeline",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We log inbound vendor security bulletins and track acknowledgement SLAs for incidents that may affect BES software or services. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-013-1 R3 Part 3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-013-1 R3 Part 3.1 is enforced — Splunk UC-22.13.66: Vendor Security Incident Notification Receipt Tracking.",
                  "ea": "Saved search 'UC-22.13.66' running on index=email sourcetype=vendor:bulletin or HEC JSON, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.67",
              "n": "Supply Chain Incident Response Task Tracking (CIP-013-1 R4 Part 4.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "We monitor open tasks for supply chain incidents affecting registered vendor components used in BES Cyber Systems.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=itsm` `sourcetype=supply_chain_incident`",
              "q": "index=itsm sourcetype=supply_chain_incident earliest=-180d status!=\"Closed\"\n| eval due=strptime(remediation_due,\"%Y-%m-%d\")\n| where due < now()\n| stats values(task) as open_tasks by incident_id, vendor, component\n| sort - incident_id",
              "m": "(1) Create ticket type for supply chain cases. (2) Link tasks to asset inventory rows. (3) Daily stand-up dashboard. (4) Close with compensating control evidence if needed.",
              "z": "Table (overdue incidents), Column chart (by vendor), Single value (open)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=supply_chain_incident`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create ticket type for supply chain cases. (2) Link tasks to asset inventory rows. (3) Daily stand-up dashboard. (4) Close with compensating control evidence if needed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=supply_chain_incident earliest=-180d status!=\"Closed\"\n| eval due=strptime(remediation_due,\"%Y-%m-%d\")\n| where due < now()\n| stats values(task) as open_tasks by incident_id, vendor, component\n| sort - incident_id\n```\n\nUnderstanding this SPL\n\n**Supply Chain Incident Response Task Tracking (CIP-013-1 R4 Part 4.1)** — We monitor open tasks for supply chain incidents affecting registered vendor components used in BES Cyber Systems.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=supply_chain_incident`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: supply_chain_incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=supply_chain_incident, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where due < now()` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by incident_id, vendor, component** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue incidents), Column chart (by vendor), Single value (open)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We monitor open tasks for supply chain incidents affecting registered vendor components used in BES Cyber Systems. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-013-1 R4 Part 4.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-013-1 R4 Part 4.1 is enforced — Splunk UC-22.13.67: Supply Chain Incident Response Task Tracking.",
                  "ea": "Saved search 'UC-22.13.67' running on index=itsm sourcetype=supply_chain_incident, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.68",
              "n": "Transmission Station Threat Assessment Evidence Indexing (CIP-014-3 R1 Part 1.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We index completed threat assessment metadata for transmission stations so evidence dates and reviewers are searchable for CIP-014 reviews.",
              "t": "Splunk Enterprise Security (263)",
              "d": "`index=grc` `sourcetype=cip014_threat_assessment`",
              "q": "index=grc sourcetype=cip014_threat_assessment earliest=-730d\n| eval completed=strptime(completed_date,\"%Y-%m-%d\")\n| eval due_review=relative_time(completed,\"+3y@d\")\n| where now()>due_review\n| table station_id, completed_date, reviewer, due_review, next_assessment_scheduled",
              "m": "(1) Push assessment completions via HEC when signed. (2) Set review cadence per program (example three years). (3) Alert owners before due. (4) Attach PDF hash externally if required.",
              "z": "Table (stations due), Timeline, Single value (count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=grc` `sourcetype=cip014_threat_assessment`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Push assessment completions via HEC when signed. (2) Set review cadence per program (example three years). (3) Alert owners before due. (4) Attach PDF hash externally if required.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=cip014_threat_assessment earliest=-730d\n| eval completed=strptime(completed_date,\"%Y-%m-%d\")\n| eval due_review=relative_time(completed,\"+3y@d\")\n| where now()>due_review\n| table station_id, completed_date, reviewer, due_review, next_assessment_scheduled\n```\n\nUnderstanding this SPL\n\n**Transmission Station Threat Assessment Evidence Indexing (CIP-014-3 R1 Part 1.1)** — We index completed threat assessment metadata for transmission stations so evidence dates and reviewers are searchable for CIP-014 reviews.\n\nDocumented **Data sources**: `index=grc` `sourcetype=cip014_threat_assessment`. **App/TA** (typical add-on context): Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: cip014_threat_assessment. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=cip014_threat_assessment, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **completed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **due_review** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where now()>due_review` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Transmission Station Threat Assessment Evidence Indexing (CIP-014-3 R1 Part 1.1)**): table station_id, completed_date, reviewer, due_review, next_assessment_scheduled\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stations due), Timeline, Single value (count)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We index completed threat assessment metadata for transmission stations so evidence dates and reviewers are searchable for CIP-014 reviews. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-014-3 R1 Part 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-014-3 R1 Part 1.1 is enforced — Splunk UC-22.13.68: Transmission Station Threat Assessment Evidence Indexing.",
                  "ea": "Saved search 'UC-22.13.68' running on index=grc sourcetype=cip014_threat_assessment, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.69",
              "n": "Transmission Physical Security Plan Control Checklist Status (CIP-014-3 R2 Part 2.1)",
              "c": "high",
              "f": "intermediate",
              "v": "We track periodic checklist completions for transmission station physical security plan controls such as lighting, fencing, and cameras.",
              "t": "Splunk OT Security Add-on (5151)",
              "d": "`index=grc` `sourcetype=cip014_control_check`",
              "q": "index=grc sourcetype=cip014_control_check earliest=-400d\n| eval performed=strptime(performed_date,\"%Y-%m-%d\")\n| eval overdue=if(now()-performed>7776000,1,0)\n| where overdue=1 AND result!=\"pass\"\n| table station_id, control_id, performed_date, result, owner",
              "m": "(1) Encode ninety-day inspection default (tune per plan). (2) Mobile form integration to Splunk via HEC. (3) Escalate failed controls to field ops. (4) Photo evidence stored outside Splunk with URL field.",
              "z": "Table (failed/overdue), Column chart (by control), Single value (stations)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151).\n• Ensure the following data sources are available: `index=grc` `sourcetype=cip014_control_check`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Encode ninety-day inspection default (tune per plan). (2) Mobile form integration to Splunk via HEC. (3) Escalate failed controls to field ops. (4) Photo evidence stored outside Splunk with URL field.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=cip014_control_check earliest=-400d\n| eval performed=strptime(performed_date,\"%Y-%m-%d\")\n| eval overdue=if(now()-performed>7776000,1,0)\n| where overdue=1 AND result!=\"pass\"\n| table station_id, control_id, performed_date, result, owner\n```\n\nUnderstanding this SPL\n\n**Transmission Physical Security Plan Control Checklist Status (CIP-014-3 R2 Part 2.1)** — We track periodic checklist completions for transmission station physical security plan controls such as lighting, fencing, and cameras.\n\nDocumented **Data sources**: `index=grc` `sourcetype=cip014_control_check`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: cip014_control_check. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=cip014_control_check, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **performed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overdue=1 AND result!=\"pass\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Transmission Physical Security Plan Control Checklist Status (CIP-014-3 R2 Part 2.1)**): table station_id, control_id, performed_date, result, owner\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed/overdue), Column chart (by control), Single value (stations)",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track periodic checklist completions for transmission station physical security plan controls such as lighting, fencing, and cameras. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-014-3 R2 Part 2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-014-3 R2 Part 2.1 is enforced — Splunk UC-22.13.69: Transmission Physical Security Plan Control Checklist Status.",
                  "ea": "Saved search 'UC-22.13.69' running on index=grc sourcetype=cip014_control_check, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.13.70",
              "n": "Unplanned Physical Security Incidents at Transmission Sites (CIP-014-3 R3 Part 3.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "We centralize alarms for intrusion, cut fence, or camera loss at transmission facilities for timely response and regulatory awareness.",
              "t": "Splunk OT Security Add-on (5151)",
              "d": "`index=physical` `sourcetype=transmission:perimeter`",
              "q": "index=physical sourcetype=transmission:perimeter earliest=-7d (alarm_type=\"intrusion\" OR alarm_type=\"fence_cut\" OR alarm_type=\"video_loss\")\n| lookup cip014_station_register station_id OUTPUT criticality\n| stats count earliest(_time) as first latest(_time) as last by station_id, alarm_type, criticality\n| sort - count",
              "m": "(1) Map alarm feeds to station registry. (2) Page security dispatch on critical sites. (3) Correlate with law enforcement case numbers in ITSM. (4) Feed significant events to CIP-008 evaluation.",
              "z": "Timeline (alarms), Table (by station), Single value (critical alarms)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151).\n• Ensure the following data sources are available: `index=physical` `sourcetype=transmission:perimeter`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map alarm feeds to station registry. (2) Page security dispatch on critical sites. (3) Correlate with law enforcement case numbers in ITSM. (4) Feed significant events to CIP-008 evaluation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=transmission:perimeter earliest=-7d (alarm_type=\"intrusion\" OR alarm_type=\"fence_cut\" OR alarm_type=\"video_loss\")\n| lookup cip014_station_register station_id OUTPUT criticality\n| stats count earliest(_time) as first latest(_time) as last by station_id, alarm_type, criticality\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Unplanned Physical Security Incidents at Transmission Sites (CIP-014-3 R3 Part 3.1)** — We centralize alarms for intrusion, cut fence, or camera loss at transmission facilities for timely response and regulatory awareness.\n\nDocumented **Data sources**: `index=physical` `sourcetype=transmission:perimeter`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: transmission:perimeter. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=transmission:perimeter, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by station_id, alarm_type, criticality** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (alarms), Table (by station), Single value (critical alarms)",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We centralize alarms for intrusion, cut fence, or camera loss at transmission facilities for timely response and regulatory awareness. That is important for the big power grid, where proof needs to line up with what the rules expect.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-014-3 R3 Part 3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-014-3 R3 Part 3.1 is enforced — Splunk UC-22.13.70: Unplanned Physical Security Incidents at Transmission Sites.",
                  "ea": "Saved search 'UC-22.13.70' running on index=physical sourcetype=transmission:perimeter, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 70,
            "none": 0
          }
        },
        {
          "i": "22.14",
          "n": "NIST 800-53 Rev. 5",
          "u": [
            {
              "i": "22.14.1",
              "n": "Centralized Audit Event Logging Policy Coverage (AU-2)",
              "c": "critical",
              "f": "advanced",
              "v": "Shows which systems and sourcetypes are forwarding security-relevant events so auditors can evidence that logging policy scope matches AU-2 event types.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "`index=sec` OR `index=os` — Windows Security, Linux auditd, cloud control planes (sourcetypes vary by TA)",
              "q": "index IN (sec, os, cloud) earliest=-24h\n| stats dc(host) as hosts, dc(sourcetype) as sourcetypes, count as events by index, sourcetype\n| lookup nist_au2_event_coverage.csv sourcetype OUTPUT au2_event_family, required\n| eval gap=if(required=\"true\" AND events=0, \"No_Data\", \"OK\")\n| sort index, sourcetype",
              "m": "(1) Maintain `nist_au2_event_coverage.csv` mapping sourcetypes to AU-2 event families; (2) schedule daily and alert on sourcetypes with `required=true` and zero volume; (3) onboard missing forwarders or enable missing channels; (4) attach evidence export for assessors.",
              "z": "Table (coverage by sourcetype), Single value (percent required sources reporting), Bar chart (event volume by family)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: `index=sec` OR `index=os` — Windows Security, Linux auditd, cloud control planes (sourcetypes vary by TA).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `nist_au2_event_coverage.csv` mapping sourcetypes to AU-2 event families; (2) schedule daily and alert on sourcetypes with `required=true` and zero volume; (3) onboard missing forwarders or enable missing channels; (4) attach evidence export for assessors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex IN (sec, os, cloud) earliest=-24h\n| stats dc(host) as hosts, dc(sourcetype) as sourcetypes, count as events by index, sourcetype\n| lookup nist_au2_event_coverage.csv sourcetype OUTPUT au2_event_family, required\n| eval gap=if(required=\"true\" AND events=0, \"No_Data\", \"OK\")\n| sort index, sourcetype\n```\n\nUnderstanding this SPL\n\n**Centralized Audit Event Logging Policy Coverage (AU-2)** — Shows which systems and sourcetypes are forwarding security-relevant events so auditors can evidence that logging policy scope matches AU-2 event types.\n\nDocumented **Data sources**: `index=sec` OR `index=os` — Windows Security, Linux auditd, cloud control planes (sourcetypes vary by TA). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Scopes the data: time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by index, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Centralized Audit Event Logging Policy Coverage (AU-2)** — Shows which systems and sourcetypes are forwarding security-relevant events so auditors can evidence that logging policy scope matches AU-2 event types.\n\nDocumented **Data sources**: `index=sec` OR `index=os` — Windows Security, Linux auditd, cloud control planes (sourcetypes vary by TA). **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Common Information Model Add-on (Splunkbase 1621). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (coverage by sourcetype), Single value (percent required sources reporting), Bar chart (event volume by family)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show which systems and sourcetypes are forwarding security-relevant events so auditors can evidence that logging policy scope matches AU-2 event types. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication",
                "Endpoint",
                "Change — where CIM-tagged"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-2 (Event logging) is enforced — Splunk UC-22.14.1: Centralized Audit Event Logging Policy Coverage.",
                  "ea": "Saved search 'UC-22.14.1' running on index=sec OR index=os — Windows Security, Linux auditd, cloud control planes (sourcetypes vary by TA), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.2",
              "n": "Audit Record Content Completeness for Privileged Actions (AU-3)",
              "c": "critical",
              "f": "advanced",
              "v": "Validates that privileged account activity records include who, what, when, where, and outcome fields required for AU-3 content of audit records.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "`WinEventLog:Security` OR CIM `Authentication` — EventCode 4672, 4728, 4732, 4756, etc.",
              "q": "index=sec sourcetype=WinEventLog:Security earliest=-24h EventCode IN (4672,4728,4732,4756,4767,4720,4722,4725)\n| eval missing_user=if(isnull(user) OR user=\"\",1,0)\n| eval missing_dest=if(isnull(dest) OR dest=\"\",1,0)\n| eval missing_outcome=if(isnull(signature) OR signature=\"\",1,0)\n| eval au3_gap=missing_user+missing_dest+missing_outcome\n| stats count as events, sum(au3_gap) as incomplete by EventCode, host\n| where incomplete>0",
              "m": "(1) Normalize Windows events into CIM Authentication fields via props/transforms; (2) tune extractions for workgroup vs domain naming; (3) alert when `incomplete>0` for Tier0 hosts; (4) pair with AD schema docs for field mapping evidence.",
              "z": "Table (EventCode, host, incomplete counts), Bar chart (gaps by host), Single value (incomplete rate)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: `WinEventLog:Security` OR CIM `Authentication` — EventCode 4672, 4728, 4732, 4756, etc..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize Windows events into CIM Authentication fields via props/transforms; (2) tune extractions for workgroup vs domain naming; (3) alert when `incomplete>0` for Tier0 hosts; (4) pair with AD schema docs for field mapping evidence.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sec sourcetype=WinEventLog:Security earliest=-24h EventCode IN (4672,4728,4732,4756,4767,4720,4722,4725)\n| eval missing_user=if(isnull(user) OR user=\"\",1,0)\n| eval missing_dest=if(isnull(dest) OR dest=\"\",1,0)\n| eval missing_outcome=if(isnull(signature) OR signature=\"\",1,0)\n| eval au3_gap=missing_user+missing_dest+missing_outcome\n| stats count as events, sum(au3_gap) as incomplete by EventCode, host\n| where incomplete>0\n```\n\nUnderstanding this SPL\n\n**Audit Record Content Completeness for Privileged Actions (AU-3)** — Validates that privileged account activity records include who, what, when, where, and outcome fields required for AU-3 content of audit records.\n\nDocumented **Data sources**: `WinEventLog:Security` OR CIM `Authentication` — EventCode 4672, 4728, 4732, 4756, etc. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sec; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sec, sourcetype=WinEventLog:Security, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **missing_user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **missing_dest** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **missing_outcome** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **au3_gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by EventCode, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where incomplete>0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Audit Record Content Completeness for Privileged Actions (AU-3)** — Validates that privileged account activity records include who, what, when, where, and outcome fields required for AU-3 content of audit records.\n\nDocumented **Data sources**: `WinEventLog:Security` OR CIM `Authentication` — EventCode 4672, 4728, 4732, 4756, etc. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (EventCode, host, incomplete counts), Bar chart (gaps by host), Single value (incomplete rate)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We check that privileged account activity records include who, what, when, where, and outcome fields required for AU-3 content of audit records. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-3 (Content of audit records) is enforced — Splunk UC-22.14.2: Audit Record Content Completeness for Privileged Actions.",
                  "ea": "Saved search 'UC-22.14.2' running on WinEventLog:Security OR CIM Authentication — EventCode 4672, 4728, 4732, 4756, etc., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.3",
              "n": "Audit Storage Capacity and Index Growth Guardrails (AU-4)",
              "c": "high",
              "f": "intermediate",
              "v": "Detects indexes approaching disk or license limits that could cause loss of audit evidence, supporting AU-4 audit storage capacity management.",
              "t": "Splunk Enterprise / Splunk Cloud Platform (REST, DMC)",
              "d": "`_internal` (metrics.log license), `/services/data/indexes` (index size, frozenTimePeriodInSecs)",
              "q": "| rest /services/data/indexes splunk_server=local count=0\n| search disabled=0 NOT title IN (\"_*\")\n| eval size_gb=currentDBSizeMB/1024\n| eval retention_days=round(frozenTimePeriodInSecs/86400,1)\n| table title, size_gb, maxTotalDataSizeMB, retention_days, frozenTimePeriodInSecs\n| sort - size_gb",
              "m": "(1) Run from a scheduled search with `rest` rights; (2) compare `size_gb` to business thresholds by index class; (3) integrate license usage from `_internal` for combined risk scoring; (4) open change requests before forced rolling.",
              "z": "Single value (indexes over threshold), Table (top consumers), Column chart (size by index)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform (REST, DMC).\n• Ensure the following data sources are available: `_internal` (metrics.log license), `/services/data/indexes` (index size, frozenTimePeriodInSecs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Run from a scheduled search with `rest` rights; (2) compare `size_gb` to business thresholds by index class; (3) integrate license usage from `_internal` for combined risk scoring; (4) open change requests before forced rolling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/data/indexes splunk_server=local count=0\n| search disabled=0 NOT title IN (\"_*\")\n| eval size_gb=currentDBSizeMB/1024\n| eval retention_days=round(frozenTimePeriodInSecs/86400,1)\n| table title, size_gb, maxTotalDataSizeMB, retention_days, frozenTimePeriodInSecs\n| sort - size_gb\n```\n\nUnderstanding this SPL\n\n**Audit Storage Capacity and Index Growth Guardrails (AU-4)** — Detects indexes approaching disk or license limits that could cause loss of audit evidence, supporting AU-4 audit storage capacity management.\n\nDocumented **Data sources**: `_internal` (metrics.log license), `/services/data/indexes` (index size, frozenTimePeriodInSecs). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform (REST, DMC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **size_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **retention_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Audit Storage Capacity and Index Growth Guardrails (AU-4)**): table title, size_gb, maxTotalDataSizeMB, retention_days, frozenTimePeriodInSecs\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (indexes over threshold), Table (top consumers), Column chart (size by index)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects indexes approaching disk or license limits that could cause loss of audit evidence, supporting AU-4 audit storage capacity management. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-4 is enforced — Splunk UC-22.14.3: Audit Storage Capacity and Index Growth Guardrails.",
                  "ea": "Saved search 'UC-22.14.3' running on _internal (metrics.log license), /services/data/indexes (index size, frozenTimePeriodInSecs), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.4",
              "n": "Response to Audit Logging Failures and Forwarder Gaps (AU-5)",
              "c": "critical",
              "f": "intermediate",
              "v": "Correlates forwarder offline events, TCP routing failures, and HEC 4xx/5xx spikes so operators can respond before AU-5 audit logging failures silently drop evidence.",
              "t": "Splunk Universal Forwarder / Splunk Cloud Platform",
              "d": "`_internal` sourcetype=`splunkd` OR `http_event_collector_metrics`, deployment server phonehome metrics",
              "q": "index=_internal earliest=-4h (sourcetype=splunkd OR sourcetype=http_event_collector_metrics)\n| bin _time span=15m\n| stats count(eval(match(_raw,\"(?i)blocked|pause|queue|full|failure\"))) as failure_signals,\n        dc(host) as hosts by _time, sourcetype\n| where failure_signals>0\n| sort - failure_signals",
              "m": "(1) Enable verbose forwarder logging only on troubleshooting windows to avoid noise; (2) route `_internal` to a security operations index with RBAC; (3) alert on sustained `failure_signals`; (4) document compensating controls when maintenance pauses logging.",
              "z": "Time chart (failure signals), Table (top hosts), Alert list (spikes)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder / Splunk Cloud Platform.\n• Ensure the following data sources are available: `_internal` sourcetype=`splunkd` OR `http_event_collector_metrics`, deployment server phonehome metrics.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable verbose forwarder logging only on troubleshooting windows to avoid noise; (2) route `_internal` to a security operations index with RBAC; (3) alert on sustained `failure_signals`; (4) document compensating controls when maintenance pauses logging.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal earliest=-4h (sourcetype=splunkd OR sourcetype=http_event_collector_metrics)\n| bin _time span=15m\n| stats count(eval(match(_raw,\"(?i)blocked|pause|queue|full|failure\"))) as failure_signals,\n        dc(host) as hosts by _time, sourcetype\n| where failure_signals>0\n| sort - failure_signals\n```\n\nUnderstanding this SPL\n\n**Response to Audit Logging Failures and Forwarder Gaps (AU-5)** — Correlates forwarder offline events, TCP routing failures, and HEC 4xx/5xx spikes so operators can respond before AU-5 audit logging failures silently drop evidence.\n\nDocumented **Data sources**: `_internal` sourcetype=`splunkd` OR `http_event_collector_metrics`, deployment server phonehome metrics. **App/TA** (typical add-on context): Splunk Universal Forwarder / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd, http_event_collector_metrics. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failure_signals>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (failure signals), Table (top hosts), Alert list (spikes)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect forwarder offline events, TCP routing failures, and HEC 4xx/5xx spikes so operators can respond before AU-5 audit logging failures silently drop evidence. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-5 is enforced — Splunk UC-22.14.4: Response to Audit Logging Failures and Forwarder Gaps.",
                  "ea": "Saved search 'UC-22.14.4' running on _internal sourcetype=splunkd OR http_event_collector_metrics, deployment server phonehome metrics, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.5",
              "n": "Audit Review, Analysis, and Reporting for Privileged Users (AU-6)",
              "c": "critical",
              "f": "advanced",
              "v": "Produces repeatable privileged-user activity summaries for periodic AU-6 audit review and management attestation.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "ES assets/identities (optional), `index=sec` Authentication-tagged events",
              "q": "index=sec earliest=-7d\n| search `cim_Authentication_indexes` (user=\"*admin*\" OR user=\"*svc_*\" OR privileged=1)\n| stats count by user, action, dest, app\n| sort - count\n| head 200",
              "m": "(1) Define `privileged=1` via `identities.csv` or ES asset priority; (2) replace macro with your CIM index macro; (3) schedule weekly PDF/CSV to GRC; (4) store signed exports for assessor sampling.",
              "z": "Table (top privileged actions), Bar chart (by user), Sankey or pivot (optional ES)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: ES assets/identities (optional), `index=sec` Authentication-tagged events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define `privileged=1` via `identities.csv` or ES asset priority; (2) replace macro with your CIM index macro; (3) schedule weekly PDF/CSV to GRC; (4) store signed exports for assessor sampling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sec earliest=-7d\n| search `cim_Authentication_indexes` (user=\"*admin*\" OR user=\"*svc_*\" OR privileged=1)\n| stats count by user, action, dest, app\n| sort - count\n| head 200\n```\n\nUnderstanding this SPL\n\n**Audit Review, Analysis, and Reporting for Privileged Users (AU-6)** — Produces repeatable privileged-user activity summaries for periodic AU-6 audit review and management attestation.\n\nDocumented **Data sources**: ES assets/identities (optional), `index=sec` Authentication-tagged events. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sec.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sec, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, action, dest, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.action, Authentication.dest, Authentication.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Audit Review, Analysis, and Reporting for Privileged Users (AU-6)** — Produces repeatable privileged-user activity summaries for periodic AU-6 audit review and management attestation.\n\nDocumented **Data sources**: ES assets/identities (optional), `index=sec` Authentication-tagged events. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top privileged actions), Bar chart (by user), Sankey or pivot (optional ES)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We produces repeatable privileged-user activity summaries for periodic AU-6 audit review and management attestation. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.action, Authentication.dest, Authentication.app | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-6 (Audit review, analysis, and reporting) is enforced — Splunk UC-22.14.5: Audit Review, Analysis, and Reporting for Privileged Users.",
                  "ea": "Saved search 'UC-22.14.5' running on ES assets/identities (optional), index=sec Authentication-tagged events, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.6",
              "n": "Audit Reduction and Report Generation Integrity (AU-7)",
              "c": "high",
              "f": "advanced",
              "v": "Tracks scheduled searches that materialize audit reports to ensure AU-7 reduction and aggregation does not remove required contextual fields.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "`_audit` (search, savedsearch_name, info), summary indexes `index=summary`",
              "q": "index=_audit action=search info=completed earliest=-24h\n| search savedsearch_name=\"*AU7*\" OR savedsearch_name=\"*audit_report*\"\n| stats avg(total_run_time) as avg_runtime, max(total_run_time) as max_runtime, count by savedsearch_name, user\n| eval sla_breach=if(avg_runtime>300,1,0)",
              "m": "(1) Prefix audit-export saved searches with a standard name; (2) validate fields in summary vs raw sample monthly; (3) alert on `sla_breach`; (4) document field retention in the audit reduction procedure.",
              "z": "Table (saved search performance), Single value (breach count), Time chart (runtime trend)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: `_audit` (search, savedsearch_name, info), summary indexes `index=summary`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Prefix audit-export saved searches with a standard name; (2) validate fields in summary vs raw sample monthly; (3) alert on `sla_breach`; (4) document field retention in the audit reduction procedure.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit action=search info=completed earliest=-24h\n| search savedsearch_name=\"*AU7*\" OR savedsearch_name=\"*audit_report*\"\n| stats avg(total_run_time) as avg_runtime, max(total_run_time) as max_runtime, count by savedsearch_name, user\n| eval sla_breach=if(avg_runtime>300,1,0)\n```\n\nUnderstanding this SPL\n\n**Audit Reduction and Report Generation Integrity (AU-7)** — Tracks scheduled searches that materialize audit reports to ensure AU-7 reduction and aggregation does not remove required contextual fields.\n\nDocumented **Data sources**: `_audit` (search, savedsearch_name, info), summary indexes `index=summary`. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by savedsearch_name, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **sla_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (saved search performance), Single value (breach count), Time chart (runtime trend)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracks scheduled searches that materialize audit reports to ensure AU-7 reduction and aggregation does not remove required contextual fields. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-7 is enforced — Splunk UC-22.14.6: Audit Reduction and Report Generation Integrity.",
                  "ea": "Saved search 'UC-22.14.6' running on _audit (search, savedsearch_name, info), summary indexes index=summary, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.7",
              "n": "Time Synchronization and Clock Skew for Audit Timestamps (AU-8)",
              "c": "high",
              "f": "intermediate",
              "v": "Surfaces hosts whose event `_time` diverges from indexer time beyond tolerance, evidencing AU-8 time stamp synchronization.",
              "t": "Splunk Universal Forwarder, Splunk Add-on for Microsoft Windows (Splunkbase 742)",
              "d": "`WinEventLog:System` (W32Time), `linux:secure` / `ntp` logs, `_indextime` vs `_time` delta",
              "q": "index=os earliest=-24h\n| eval skew_sec=abs(_indextime-_time)\n| stats perc95(skew_sec) as p95_skew, max(skew_sec) as max_skew by host, sourcetype\n| where p95_skew>120 OR max_skew>300",
              "m": "(1) Standardize NTP/chrony on all forwarders; (2) tune thresholds per sourcetype (batch logs vs real-time); (3) alert Tier0 assets on breach; (4) attach remediation runbook for clock drift.",
              "z": "Table (worst skew by host), Single value (hosts breaching threshold), Map (optional geo context)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder, Splunk Add-on for Microsoft Windows (Splunkbase 742).\n• Ensure the following data sources are available: `WinEventLog:System` (W32Time), `linux:secure` / `ntp` logs, `_indextime` vs `_time` delta.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardize NTP/chrony on all forwarders; (2) tune thresholds per sourcetype (batch logs vs real-time); (3) alert Tier0 assets on breach; (4) attach remediation runbook for clock drift.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os earliest=-24h\n| eval skew_sec=abs(_indextime-_time)\n| stats perc95(skew_sec) as p95_skew, max(skew_sec) as max_skew by host, sourcetype\n| where p95_skew>120 OR max_skew>300\n```\n\nUnderstanding this SPL\n\n**Time Synchronization and Clock Skew for Audit Timestamps (AU-8)** — Surfaces hosts whose event `_time` diverges from indexer time beyond tolerance, evidencing AU-8 time stamp synchronization.\n\nDocumented **Data sources**: `WinEventLog:System` (W32Time), `linux:secure` / `ntp` logs, `_indextime` vs `_time` delta. **App/TA** (typical add-on context): Splunk Universal Forwarder, Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **skew_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p95_skew>120 OR max_skew>300` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Time Synchronization and Clock Skew for Audit Timestamps (AU-8)** — Surfaces hosts whose event `_time` diverges from indexer time beyond tolerance, evidencing AU-8 time stamp synchronization.\n\nDocumented **Data sources**: `WinEventLog:System` (W32Time), `linux:secure` / `ntp` logs, `_indextime` vs `_time` delta. **App/TA** (typical add-on context): Splunk Universal Forwarder, Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (worst skew by host), Single value (hosts breaching threshold), Map (optional geo context)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surfaces hosts whose event `_time` diverges from indexer time beyond tolerance, evidencing AU-8 time stamp synchronization. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-8",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-8 (Time stamps) is enforced — Splunk UC-22.14.7: Time Synchronization and Clock Skew for Audit Timestamps.",
                  "ea": "Saved search 'UC-22.14.7' running on WinEventLog:System (W32Time), linux:secure / ntp logs, _indextime vs _time delta, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.8",
              "n": "Protection of Audit Information — Tamper Detection on Audit Indexes (AU-9)",
              "c": "critical",
              "f": "advanced",
              "v": "Monitors `_audit` for destructive actions against searches, indexes, and roles that could undermine AU-9 protection of audit information.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "`index=_audit` (delete, edit, permissions changes)",
              "q": "index=_audit earliest=-24h\n| search object=\"*savedsearches*\" OR object=\"*indexes*\" OR object=\"*authorize*\"\n| stats values(action) as actions, values(user) as users, count by object\n| sort - count",
              "m": "(1) Ensure `_audit` is indexed and retained per policy; (2) restrict who can delete `_audit`; (3) forward `_audit` to immutable store if required; (4) integrate alerts with SOC for Splunk admin changes.",
              "z": "Table (object, actions), Timeline (admin changes), Single value (critical changes per day)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: `index=_audit` (delete, edit, permissions changes).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure `_audit` is indexed and retained per policy; (2) restrict who can delete `_audit`; (3) forward `_audit` to immutable store if required; (4) integrate alerts with SOC for Splunk admin changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit earliest=-24h\n| search object=\"*savedsearches*\" OR object=\"*indexes*\" OR object=\"*authorize*\"\n| stats values(action) as actions, values(user) as users, count by object\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Protection of Audit Information — Tamper Detection on Audit Indexes (AU-9)** — Monitors `_audit` for destructive actions against searches, indexes, and roles that could undermine AU-9 protection of audit information.\n\nDocumented **Data sources**: `index=_audit` (delete, edit, permissions changes). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by object** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (object, actions), Timeline (admin changes), Single value (critical changes per day)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch `_audit` for destructive actions against searches, indexes, and roles that could undermine AU-9 protection of audit information. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-9",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-9 (Protection of audit information) is enforced — Splunk UC-22.14.8: Protection of Audit Information — Tamper Detection on Audit Indexes.",
                  "ea": "Saved search 'UC-22.14.8' running on index=_audit (delete, edit, permissions changes), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.9",
              "n": "Non-Repudiation Evidence for Sensitive Transactions (AU-10)",
              "c": "high",
              "f": "advanced",
              "v": "Chains application user actions to authentication events and source IPs to support AU-10 non-repudiation for high-risk transactions.",
              "t": "Splunk DB Connect (Splunkbase 2686), Splunk Add-on for Microsoft Windows (Splunkbase 742)",
              "d": "Application DB audit tables (via DBX), `WinEventLog:Security` logons",
              "q": "index=app sourcetype=app:txn_audit earliest=-24h status=success risk_tier=\"HIGH\"\n| eval join_key=md5(user.src_ip)\n| join type=left join_key [\n    search index=sec sourcetype=WinEventLog:Security EventCode=4624 earliest=-24h\n    | eval join_key=md5(user.src_ip)\n    | stats latest(_time) as last_logon by join_key, user, src_ip\n  ]\n| where isnull(last_logon)\n| table _time, user, src_ip, txn_id, action",
              "m": "(1) Prefer `transaction` or `stats` over `join` at scale — prototype first; (2) hash PII keys consistently; (3) validate clock sync (AU-8); (4) document limitations where shared accounts break non-repudiation.",
              "z": "Table (orphaned high-risk txns), Timeline (txn vs logon), Stats (match rate)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), Splunk Add-on for Microsoft Windows (Splunkbase 742).\n• Ensure the following data sources are available: Application DB audit tables (via DBX), `WinEventLog:Security` logons.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Prefer `transaction` or `stats` over `join` at scale — prototype first; (2) hash PII keys consistently; (3) validate clock sync (AU-8); (4) document limitations where shared accounts break non-repudiation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=app:txn_audit earliest=-24h status=success risk_tier=\"HIGH\"\n| eval join_key=md5(user.src_ip)\n| join type=left join_key [\n    search index=sec sourcetype=WinEventLog:Security EventCode=4624 earliest=-24h\n    | eval join_key=md5(user.src_ip)\n    | stats latest(_time) as last_logon by join_key, user, src_ip\n  ]\n| where isnull(last_logon)\n| table _time, user, src_ip, txn_id, action\n```\n\nUnderstanding this SPL\n\n**Non-Repudiation Evidence for Sensitive Transactions (AU-10)** — Chains application user actions to authentication events and source IPs to support AU-10 non-repudiation for high-risk transactions.\n\nDocumented **Data sources**: Application DB audit tables (via DBX), `WinEventLog:Security` logons. **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: app:txn_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=app:txn_audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **join_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(last_logon)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Non-Repudiation Evidence for Sensitive Transactions (AU-10)**): table _time, user, src_ip, txn_id, action\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Non-Repudiation Evidence for Sensitive Transactions (AU-10)** — Chains application user actions to authentication events and source IPs to support AU-10 non-repudiation for high-risk transactions.\n\nDocumented **Data sources**: Application DB audit tables (via DBX), `WinEventLog:Security` logons. **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), Splunk Add-on for Microsoft Windows (Splunkbase 742). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (orphaned high-risk txns), Timeline (txn vs logon), Stats (match rate)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We chains application user actions to authentication events and source IPs to support AU-10 non-repudiation for high-risk transactions. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication (for Windows side)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-10",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-10 is enforced — Splunk UC-22.14.9: Non-Repudiation Evidence for Sensitive Transactions.",
                  "ea": "Saved search 'UC-22.14.9' running on Application DB audit tables (via DBX), WinEventLog:Security logons, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.10",
              "n": "Audit Record Retention Compliance vs Policy (AU-11)",
              "c": "high",
              "f": "intermediate",
              "v": "Compares configured frozen retention to policy targets for systems of record logs supporting AU-11 audit record retention.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "`/services/data/indexes`, lookup `nist_retention_policy.csv`",
              "q": "| rest /services/data/indexes splunk_server=local count=0\n| search disabled=0\n| eval retention_days=round(frozenTimePeriodInSecs/86400,1)\n| lookup nist_retention_policy.csv title OUTPUT required_min_days, system_category\n| eval compliant=if(retention_days+0>=required_min_days+0,\"Yes\",\"No\")\n| where compliant=\"No\" OR isnull(required_min_days)",
              "m": "(1) Build `nist_retention_policy.csv` with index or prefix to required days; (2) reconcile with offline archive retention; (3) alert on non-compliance; (4) export for POA&M tracking.",
              "z": "Table (index, retention, required), Bar chart (gap days), Single value (non-compliant count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: `/services/data/indexes`, lookup `nist_retention_policy.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build `nist_retention_policy.csv` with index or prefix to required days; (2) reconcile with offline archive retention; (3) alert on non-compliance; (4) export for POA&M tracking.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/data/indexes splunk_server=local count=0\n| search disabled=0\n| eval retention_days=round(frozenTimePeriodInSecs/86400,1)\n| lookup nist_retention_policy.csv title OUTPUT required_min_days, system_category\n| eval compliant=if(retention_days+0>=required_min_days+0,\"Yes\",\"No\")\n| where compliant=\"No\" OR isnull(required_min_days)\n```\n\nUnderstanding this SPL\n\n**Audit Record Retention Compliance vs Policy (AU-11)** — Compares configured frozen retention to policy targets for systems of record logs supporting AU-11 audit record retention.\n\nDocumented **Data sources**: `/services/data/indexes`, lookup `nist_retention_policy.csv`. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **retention_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where compliant=\"No\" OR isnull(required_min_days)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (index, retention, required), Bar chart (gap days), Single value (non-compliant count)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare configured frozen retention to policy targets for systems of record logs supporting AU-11 audit record retention. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-11",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-11 is enforced — Splunk UC-22.14.10: Audit Record Retention Compliance vs Policy.",
                  "ea": "Saved search 'UC-22.14.10' running on /services/data/indexes, lookup nist_retention_policy.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.11",
              "n": "Audit Generation Coverage for Critical Network Controls (AU-12)",
              "c": "critical",
              "f": "intermediate",
              "v": "Verifies that boundary devices continue generating audit records for allow/deny decisions, evidencing AU-12 audit generation for network events.",
              "t": "Palo Alto Networks Add-on for Splunk (Splunkbase 2757), Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "`pan:traffic`, `pan:threat` or CIM Network_Traffic / Intrusion_Detection sources",
              "q": "index=net (sourcetype=pan:traffic OR sourcetype=pan:threat) earliest=-24h\n| stats earliest(_time) as first, latest(_time) as last, count by host, sourcetype\n| eval gap_minutes=round((now()-strptime(last,\"%Y-%m-%d %H:%M:%S %z\"))/60,1)\n| eval gap_minutes=if(isnull(gap_minutes), round((now()-last)/60,1), gap_minutes)\n| where gap_minutes>30",
              "m": "(1) Fix timestamp parsing per TA — sample `last` field; (2) tune gap threshold per HA pair; (3) alert if firewall stops logging; (4) map to firewall inventory CM-8.",
              "z": "Table (firewall, gap), Single value (devices silent), Time chart (EPS trend)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Palo Alto Networks Add-on for Splunk](https://splunkbase.splunk.com/app/2757), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Palo Alto Networks Add-on for Splunk (Splunkbase 2757), Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: `pan:traffic`, `pan:threat` or CIM Network_Traffic / Intrusion_Detection sources.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Fix timestamp parsing per TA — sample `last` field; (2) tune gap threshold per HA pair; (3) alert if firewall stops logging; (4) map to firewall inventory CM-8.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=net (sourcetype=pan:traffic OR sourcetype=pan:threat) earliest=-24h\n| stats earliest(_time) as first, latest(_time) as last, count by host, sourcetype\n| eval gap_minutes=round((now()-strptime(last,\"%Y-%m-%d %H:%M:%S %z\"))/60,1)\n| eval gap_minutes=if(isnull(gap_minutes), round((now()-last)/60,1), gap_minutes)\n| where gap_minutes>30\n```\n\nUnderstanding this SPL\n\n**Audit Generation Coverage for Critical Network Controls (AU-12)** — Verifies that boundary devices continue generating audit records for allow/deny decisions, evidencing AU-12 audit generation for network events.\n\nDocumented **Data sources**: `pan:traffic`, `pan:threat` or CIM Network_Traffic / Intrusion_Detection sources. **App/TA** (typical add-on context): Palo Alto Networks Add-on for Splunk (Splunkbase 2757), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: net; **sourcetype**: pan:traffic, pan:threat. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=net, sourcetype=pan:traffic, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **gap_minutes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **gap_minutes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap_minutes>30` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Audit Generation Coverage for Critical Network Controls (AU-12)** — Verifies that boundary devices continue generating audit records for allow/deny decisions, evidencing AU-12 audit generation for network events.\n\nDocumented **Data sources**: `pan:traffic`, `pan:threat` or CIM Network_Traffic / Intrusion_Detection sources. **App/TA** (typical add-on context): Palo Alto Networks Add-on for Splunk (Splunkbase 2757), Splunk Common Information Model Add-on (Splunkbase 1621). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (firewall, gap), Single value (devices silent), Time chart (EPS trend)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We verifies that boundary devices continue generating audit records for allow/deny decisions, evidencing AU-12 audit generation for network events. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Network_Traffic",
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-12",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-12 (Audit record generation) is enforced — Splunk UC-22.14.11: Audit Generation Coverage for Critical Network Controls.",
                  "ea": "Saved search 'UC-22.14.11' running on pan:traffic, pan:threat or CIM Network_Traffic / Intrusion_Detection sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.12",
              "n": "Monitoring for Information Disclosure via DLP and Web Exfil Patterns (AU-13)",
              "c": "high",
              "f": "advanced",
              "v": "Correlates DLP policy violations and large outbound transfers to evidence AU-13 monitoring for information disclosure to unauthorized parties.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "`ms:o365:management` DLP workloads, `pan:traffic` bytes_out",
              "q": "index=o365 sourcetype=ms:o365:management Workload=\"DLP\" earliest=-24h\n| stats count by Operation, UserId, ObjectId\n| sort - count\n| head 100",
              "m": "(1) Ensure unified audit ingestion includes DLP operations; (2) enrich with `UserId` to HR directory via lookup; (3) threshold alerts for rare operations; (4) cross-check with firewall `bytes_out` spikes for same user within 1h (separate correlation).",
              "z": "Table (top DLP operations), Time chart (violations), Link to firewall dashboard",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: `ms:o365:management` DLP workloads, `pan:traffic` bytes_out.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure unified audit ingestion includes DLP operations; (2) enrich with `UserId` to HR directory via lookup; (3) threshold alerts for rare operations; (4) cross-check with firewall `bytes_out` spikes for same user within 1h (separate correlation).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:management Workload=\"DLP\" earliest=-24h\n| stats count by Operation, UserId, ObjectId\n| sort - count\n| head 100\n```\n\nUnderstanding this SPL\n\n**Monitoring for Information Disclosure via DLP and Web Exfil Patterns (AU-13)** — Correlates DLP policy violations and large outbound transfers to evidence AU-13 monitoring for information disclosure to unauthorized parties.\n\nDocumented **Data sources**: `ms:o365:management` DLP workloads, `pan:traffic` bytes_out. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:management, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Operation, UserId, ObjectId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Monitoring for Information Disclosure via DLP and Web Exfil Patterns (AU-13)** — Correlates DLP policy violations and large outbound transfers to evidence AU-13 monitoring for information disclosure to unauthorized parties.\n\nDocumented **Data sources**: `ms:o365:management` DLP workloads, `pan:traffic` bytes_out. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top DLP operations), Time chart (violations), Link to firewall dashboard",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect DLP policy violations and large outbound transfers to evidence AU-13 monitoring for information disclosure to unauthorized parties. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A (O365) + Network_Traffic for firewall side"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-13",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-13 is enforced — Splunk UC-22.14.12: Monitoring for Information Disclosure via DLP and Web Exfil Patterns.",
                  "ea": "Saved search 'UC-22.14.12' running on ms:o365:management DLP workloads, pan:traffic bytes_out, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.13",
              "n": "Session Audit for Privileged Interactive Access (AU-14)",
              "c": "high",
              "f": "advanced",
              "v": "Captures SSH/RDP session telemetry (logon, channel, disconnect) for AU-14 session audit on administrative jump hosts.",
              "t": "Splunk Add-on for Unix and Linux (Splunkbase 833), Splunk Add-on for Microsoft Windows (Splunkbase 742)",
              "d": "`linux:secure` (sshd), `WinEventLog:Security` (4624/4647/4778), Sysmon if available",
              "q": "index=os earliest=-24h (sourcetype=linux:secure OR sourcetype=WinEventLog:Security)\n| search process=sshd OR EventCode IN (4624,4647,4778)\n| eval session_actor=coalesce(user, Account_Name)\n| stats earliest(_time) as start, latest(_time) as end, values(process) as processes by session_actor, src_ip, dest\n| eval duration_sec=end-start",
              "m": "(1) Enable verbose sshd logging per policy; (2) map EventCode meanings in a lookup; (3) alert on long sessions to sensitive assets; (4) redact passwords never present in logs.",
              "z": "Timeline (session spans), Table (duration by admin), Bar chart (sessions by jump host)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Unix and Linux (Splunkbase 833), Splunk Add-on for Microsoft Windows (Splunkbase 742).\n• Ensure the following data sources are available: `linux:secure` (sshd), `WinEventLog:Security` (4624/4647/4778), Sysmon if available.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable verbose sshd logging per policy; (2) map EventCode meanings in a lookup; (3) alert on long sessions to sensitive assets; (4) redact passwords never present in logs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os earliest=-24h (sourcetype=linux:secure OR sourcetype=WinEventLog:Security)\n| search process=sshd OR EventCode IN (4624,4647,4778)\n| eval session_actor=coalesce(user, Account_Name)\n| stats earliest(_time) as start, latest(_time) as end, values(process) as processes by session_actor, src_ip, dest\n| eval duration_sec=end-start\n```\n\nUnderstanding this SPL\n\n**Session Audit for Privileged Interactive Access (AU-14)** — Captures SSH/RDP session telemetry (logon, channel, disconnect) for AU-14 session audit on administrative jump hosts.\n\nDocumented **Data sources**: `linux:secure` (sshd), `WinEventLog:Security` (4624/4647/4778), Sysmon if available. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (Splunkbase 833), Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux:secure, WinEventLog:Security. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux:secure, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **session_actor** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by session_actor, src_ip, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **duration_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src, Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Session Audit for Privileged Interactive Access (AU-14)** — Captures SSH/RDP session telemetry (logon, channel, disconnect) for AU-14 session audit on administrative jump hosts.\n\nDocumented **Data sources**: `linux:secure` (sshd), `WinEventLog:Security` (4624/4647/4778), Sysmon if available. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (Splunkbase 833), Splunk Add-on for Microsoft Windows (Splunkbase 742). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (session spans), Table (duration by admin), Bar chart (sessions by jump host)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We captures SSH/RDP session telemetry (logon, channel, disconnect) for AU-14 session audit on administrative jump hosts. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src, Authentication.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-14 is enforced — Splunk UC-22.14.13: Session Audit for Privileged Interactive Access.",
                  "ea": "Saved search 'UC-22.14.13' running on linux:secure (sshd), WinEventLog:Security (4624/4647/4778), Sysmon if available, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.14",
              "n": "Alternate Audit Capability During Control Outages (AU-15)",
              "c": "high",
              "f": "intermediate",
              "v": "Detects when primary logging pipelines fail and compensating syslog or packet capture taps increase, evidencing AU-15 alternate audit capability activation.",
              "t": "Splunk Add-on for Stream (Splunkbase 1809), Splunk Connect for Syslog",
              "d": "`_internal` pipeline errors, `index=net` tap sourcetypes",
              "q": "index=_internal OR index=net earliest=-24h\n| eval channel=case(match(sourcetype,\"stream\"),\"PCAP_TAP\", match(sourcetype,\"syslog\"),\"SYSLOG_ALT\",1=1,\"PRIMARY\")\n| bin _time span=15m\n| stats sum(eval(if(channel=\"PRIMARY\",1,0))) as primary_hits,\n        sum(eval(if(channel!=\"PRIMARY\",1,0))) as alt_hits by _time, host\n| eval alt_ratio=round(alt_hits/(primary_hits+alt_hits+0.001),3)\n| where alt_ratio>0.3",
              "m": "(1) Label alternate paths consistently; (2) tune ratio for your architecture; (3) require ticket linkage when alt_ratio spikes; (4) review monthly with contingency tests (CP-10).",
              "z": "Time chart (alt_ratio), Table (hosts), Single value (minutes in compensating mode)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Stream (Splunkbase 1809), Splunk Connect for Syslog.\n• Ensure the following data sources are available: `_internal` pipeline errors, `index=net` tap sourcetypes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Label alternate paths consistently; (2) tune ratio for your architecture; (3) require ticket linkage when alt_ratio spikes; (4) review monthly with contingency tests (CP-10).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal OR index=net earliest=-24h\n| eval channel=case(match(sourcetype,\"stream\"),\"PCAP_TAP\", match(sourcetype,\"syslog\"),\"SYSLOG_ALT\",1=1,\"PRIMARY\")\n| bin _time span=15m\n| stats sum(eval(if(channel=\"PRIMARY\",1,0))) as primary_hits,\n        sum(eval(if(channel!=\"PRIMARY\",1,0))) as alt_hits by _time, host\n| eval alt_ratio=round(alt_hits/(primary_hits+alt_hits+0.001),3)\n| where alt_ratio>0.3\n```\n\nUnderstanding this SPL\n\n**Alternate Audit Capability During Control Outages (AU-15)** — Detects when primary logging pipelines fail and compensating syslog or packet capture taps increase, evidencing AU-15 alternate audit capability activation.\n\nDocumented **Data sources**: `_internal` pipeline errors, `index=net` tap sourcetypes. **App/TA** (typical add-on context): Splunk Add-on for Stream (Splunkbase 1809), Splunk Connect for Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal, net.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, index=net, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **channel** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **alt_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where alt_ratio>0.3` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest span=15m | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Alternate Audit Capability During Control Outages (AU-15)** — Detects when primary logging pipelines fail and compensating syslog or packet capture taps increase, evidencing AU-15 alternate audit capability activation.\n\nDocumented **Data sources**: `_internal` pipeline errors, `index=net` tap sourcetypes. **App/TA** (typical add-on context): Splunk Add-on for Stream (Splunkbase 1809), Splunk Connect for Syslog. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (alt_ratio), Table (hosts), Single value (minutes in compensating mode)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects when primary logging pipelines fail and compensating syslog or packet capture taps increase, evidencing AU-15 alternate audit capability activation. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Network_Traffic (Stream)"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest span=15m | sort - agg_value",
              "e": [
                "syslog"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-15",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-15 is enforced — Splunk UC-22.14.14: Alternate Audit Capability During Control Outages.",
                  "ea": "Saved search 'UC-22.14.14' running on _internal pipeline errors, index=net tap sourcetypes, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.15",
              "n": "Cross-Organizational Audit Forwarding Health to SIEM (AU-16)",
              "c": "high",
              "f": "advanced",
              "v": "Validates receipt rates from partner or subsidiary syslog streams against SLAs for AU-16 cross-organizational auditing agreements.",
              "t": "Splunk Enterprise / Splunk Cloud Platform, Splunk Connect for Syslog",
              "d": "`index=partner*` OR `host=partner_*` sourcetype=syslog or HEC tokens per org",
              "q": "index=partner earliest=-24h\n| bin _time span=1h\n| stats count by _time, org_id, sourcetype\n| eventstats median(count) as med by org_id, sourcetype\n| eval drop=if(count < med*0.5, 1, 0)\n| where drop=1",
              "m": "(1) Tag events with `org_id` at ingest; (2) baseline median EPS per hour/day; (3) alert on sustained drops; (4) document contractual EPS and retention in the interconnection agreement (CA-9).",
              "z": "Time chart (EPS by org), Table (drop windows), Single value (orgs in breach)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform, Splunk Connect for Syslog.\n• Ensure the following data sources are available: `index=partner*` OR `host=partner_*` sourcetype=syslog or HEC tokens per org.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag events with `org_id` at ingest; (2) baseline median EPS per hour/day; (3) alert on sustained drops; (4) document contractual EPS and retention in the interconnection agreement (CA-9).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=partner earliest=-24h\n| bin _time span=1h\n| stats count by _time, org_id, sourcetype\n| eventstats median(count) as med by org_id, sourcetype\n| eval drop=if(count < med*0.5, 1, 0)\n| where drop=1\n```\n\nUnderstanding this SPL\n\n**Cross-Organizational Audit Forwarding Health to SIEM (AU-16)** — Validates receipt rates from partner or subsidiary syslog streams against SLAs for AU-16 cross-organizational auditing agreements.\n\nDocumented **Data sources**: `index=partner*` OR `host=partner_*` sourcetype=syslog or HEC tokens per org. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform, Splunk Connect for Syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: partner.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=partner, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, org_id, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by org_id, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **drop** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drop=1` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (EPS by org), Table (drop windows), Single value (orgs in breach)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We check receipt rates from partner or subsidiary syslog streams against SLAs for AU-16 cross-organizational auditing agreements. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-16",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AU-16 is enforced — Splunk UC-22.14.15: Cross-Organizational Audit Forwarding Health to SIEM.",
                  "ea": "Saved search 'UC-22.14.15' running on index=partner* OR host=partner_* sourcetype=syslog or HEC tokens per org, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.16",
              "n": "Account Management — Orphan and Stale Privileged Accounts (AC-2)",
              "c": "critical",
              "f": "intermediate",
              "v": "Flags accounts not seen in authentication logs within policy windows for AC-2 account management lifecycle reviews.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Microsoft Windows (Splunkbase 742)",
              "d": "`WinEventLog:Security` (4720/4726), `ms:o365:management` sign-in logs, AD export via scripted input",
              "q": "index=sec sourcetype=WinEventLog:Security EventCode IN (4720,4726,4738) earliest=-90d\n| stats latest(_time) as last_change by Account_Name\n| outputlookup ad_account_last_change.csv",
              "m": "(1) Pair with `inputlookup` of all privileged SAMAccountNames; (2) compare `last_change` to HR termination dates; (3) alert stale admin accounts; (4) feed IAM remediation queue.",
              "z": "Table (stale accounts), Time chart (creations/modifications), Single value (orphans)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Microsoft Windows (Splunkbase 742).\n• Ensure the following data sources are available: `WinEventLog:Security` (4720/4726), `ms:o365:management` sign-in logs, AD export via scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Pair with `inputlookup` of all privileged SAMAccountNames; (2) compare `last_change` to HR termination dates; (3) alert stale admin accounts; (4) feed IAM remediation queue.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sec sourcetype=WinEventLog:Security EventCode IN (4720,4726,4738) earliest=-90d\n| stats latest(_time) as last_change by Account_Name\n| outputlookup ad_account_last_change.csv\n```\n\nUnderstanding this SPL\n\n**Account Management — Orphan and Stale Privileged Accounts (AC-2)** — Flags accounts not seen in authentication logs within policy windows for AC-2 account management lifecycle reviews.\n\nDocumented **Data sources**: `WinEventLog:Security` (4720/4726), `ms:o365:management` sign-in logs, AD export via scripted input. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sec; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sec, sourcetype=WinEventLog:Security, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Account_Name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Writes results to a lookup with `outputlookup` (permissions and retention apply).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Account Management — Orphan and Stale Privileged Accounts (AC-2)** — Flags accounts not seen in authentication logs within policy windows for AC-2 account management lifecycle reviews.\n\nDocumented **Data sources**: `WinEventLog:Security` (4720/4726), `ms:o365:management` sign-in logs, AD export via scripted input. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale accounts), Time chart (creations/modifications), Single value (orphans)\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We flags accounts not seen in authentication logs within policy windows for AC-2 account management lifecycle reviews. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AC-2 (Account management) is enforced — Splunk UC-22.14.16: Account Management — Orphan and Stale Privileged Accounts.",
                  "ea": "Saved search 'UC-22.14.16' running on WinEventLog:Security (4720/4726), ms:o365:management sign-in logs, AD export via scripted input, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.17",
              "n": "Access Enforcement — Unauthorized Access Attempts to Sensitive Shares (AC-3)",
              "c": "critical",
              "f": "intermediate",
              "v": "Surfaces repeated failed object access attempts that indicate AC-3 access enforcement gaps or misconfigured ACLs.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742)",
              "d": "`WinEventLog:Security` EventCode 4656/4663 (object access) when auditing enabled",
              "q": "index=sec sourcetype=WinEventLog:Security EventCode IN (4656,4663) earliest=-24h\n| search Object_Type=\"File\" ShareName=\"\\\\*SECRET*\"\n| stats count by user, ShareName, ObjectName, host\n| where count>25",
              "m": "(1) Enable granular SACLs on crown-jewel shares; (2) normalize `ShareName` fields; (3) tune threshold per sensitivity; (4) integrate with case management for access reviews.",
              "z": "Table (top requesters), Bar chart (by share), Heatmap (hour x user optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742).\n• Ensure the following data sources are available: `WinEventLog:Security` EventCode 4656/4663 (object access) when auditing enabled.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable granular SACLs on crown-jewel shares; (2) normalize `ShareName` fields; (3) tune threshold per sensitivity; (4) integrate with case management for access reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sec sourcetype=WinEventLog:Security EventCode IN (4656,4663) earliest=-24h\n| search Object_Type=\"File\" ShareName=\"\\\\*SECRET*\"\n| stats count by user, ShareName, ObjectName, host\n| where count>25\n```\n\nUnderstanding this SPL\n\n**Access Enforcement — Unauthorized Access Attempts to Sensitive Shares (AC-3)** — Surfaces repeated failed object access attempts that indicate AC-3 access enforcement gaps or misconfigured ACLs.\n\nDocumented **Data sources**: `WinEventLog:Security` EventCode 4656/4663 (object access) when auditing enabled. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sec; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sec, sourcetype=WinEventLog:Security, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, ShareName, ObjectName, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>25` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Access Enforcement — Unauthorized Access Attempts to Sensitive Shares (AC-3)** — Surfaces repeated failed object access attempts that indicate AC-3 access enforcement gaps or misconfigured ACLs.\n\nDocumented **Data sources**: `WinEventLog:Security` EventCode 4656/4663 (object access) when auditing enabled. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top requesters), Bar chart (by share), Heatmap (hour x user optional)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surfaces repeated failed object access attempts that indicate AC-3 access enforcement gaps or misconfigured ACLs. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AC-3 (Access enforcement) is enforced — Splunk UC-22.14.17: Access Enforcement — Unauthorized Access Attempts to Sensitive Shares.",
                  "ea": "Saved search 'UC-22.14.17' running on WinEventLog:Security EventCode 4656/4663 (object access) when auditing enabled, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.18",
              "n": "Separation of Duties Violations in Change Tickets (AC-5)",
              "c": "high",
              "f": "intermediate",
              "v": "Detects the same identity approving and implementing changes for AC-5 separation of duties.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`snow:change_request` with requested_by, assigned_to, closed_by",
              "q": "index=itsm sourcetype=snow:change_request earliest=-30d\n| eval same_actor=if(requested_by=assigned_to OR requested_by=closed_by,1,0)\n| where same_actor=1 AND priority IN (\"1\",\"2\")\n| table number, short_description, requested_by, assigned_to, closed_by, priority, _time",
              "m": "(1) Confirm field names match your SNOW transform; (2) exclude emergency changes via `chg_model` lookup; (3) alert weekly summary to ITGC; (4) document approved break-glass codes.",
              "z": "Table (violations), Pie chart (SoD pass/fail), Single value (count open)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `snow:change_request` with requested_by, assigned_to, closed_by.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field names match your SNOW transform; (2) exclude emergency changes via `chg_model` lookup; (3) alert weekly summary to ITGC; (4) document approved break-glass codes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=snow:change_request earliest=-30d\n| eval same_actor=if(requested_by=assigned_to OR requested_by=closed_by,1,0)\n| where same_actor=1 AND priority IN (\"1\",\"2\")\n| table number, short_description, requested_by, assigned_to, closed_by, priority, _time\n```\n\nUnderstanding this SPL\n\n**Separation of Duties Violations in Change Tickets (AC-5)** — Detects the same identity approving and implementing changes for AC-5 separation of duties.\n\nDocumented **Data sources**: `snow:change_request` with requested_by, assigned_to, closed_by. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=snow:change_request, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **same_actor** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where same_actor=1 AND priority IN (\"1\",\"2\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Separation of Duties Violations in Change Tickets (AC-5)**): table number, short_description, requested_by, assigned_to, closed_by, priority, _time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Pie chart (SoD pass/fail), Single value (count open)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects the same identity approving and implementing changes for AC-5 separation of duties. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AC-5 is enforced — Splunk UC-22.14.18: Separation of Duties Violations in Change Tickets.",
                  "ea": "Saved search 'UC-22.14.18' running on snow:change_request with requested_by, assigned_to, closed_by, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.19",
              "n": "Least Privilege — Excessive Cloud IAM Permissions (AC-6)",
              "c": "critical",
              "f": "advanced",
              "v": "Reviews AWS CloudTrail `AttachUserPolicy` events for high-risk managed policies supporting AC-6 least privilege.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876)",
              "d": "`aws:cloudtrail` management events",
              "q": "index=cloud sourcetype=aws:cloudtrail earliest=-7d eventName IN (\"AttachUserPolicy\",\"PutUserPolicy\",\"AttachRolePolicy\")\n| spath path=eventSource output=eventSource\n| spath path=requestParameters output=requestParameters\n| search requestParameters=\"*AdministratorAccess*\" OR requestParameters=\"*IAMFullAccess*\"\n| stats count by userName, eventName, requestParameters, aws_account_id",
              "m": "(1) Use `spath` or JSON KV mode consistently; (2) maintain deny-list of risky policy ARNs; (3) auto-ticket attachments; (4) quarterly access review export.",
              "z": "Table (who attached what), Time chart (high-risk grants), Bar chart (by account)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876).\n• Ensure the following data sources are available: `aws:cloudtrail` management events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Use `spath` or JSON KV mode consistently; (2) maintain deny-list of risky policy ARNs; (3) auto-ticket attachments; (4) quarterly access review export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=aws:cloudtrail earliest=-7d eventName IN (\"AttachUserPolicy\",\"PutUserPolicy\",\"AttachRolePolicy\")\n| spath path=eventSource output=eventSource\n| spath path=requestParameters output=requestParameters\n| search requestParameters=\"*AdministratorAccess*\" OR requestParameters=\"*IAMFullAccess*\"\n| stats count by userName, eventName, requestParameters, aws_account_id\n```\n\nUnderstanding this SPL\n\n**Least Privilege — Excessive Cloud IAM Permissions (AC-6)** — Reviews AWS CloudTrail `AttachUserPolicy` events for high-risk managed policies supporting AC-6 least privilege.\n\nDocumented **Data sources**: `aws:cloudtrail` management events. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=aws:cloudtrail, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by userName, eventName, requestParameters, aws_account_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Least Privilege — Excessive Cloud IAM Permissions (AC-6)** — Reviews AWS CloudTrail `AttachUserPolicy` events for high-risk managed policies supporting AC-6 least privilege.\n\nDocumented **Data sources**: `aws:cloudtrail` management events. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (who attached what), Time chart (high-risk grants), Bar chart (by account)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We reviews AWS CloudTrail `AttachUserPolicy` events for high-risk managed policies supporting AC-6 least privilege. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AC-6 (Least privilege) is enforced — Splunk UC-22.14.19: Least Privilege — Excessive Cloud IAM Permissions.",
                  "ea": "Saved search 'UC-22.14.19' running on aws:cloudtrail management events, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.20",
              "n": "Unsuccessful Logon Attempts and Account Lockout Patterns (AC-7)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks brute-force clusters and lockouts for AC-7 unsuccessful logon attempts limits.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Office 365 (Splunkbase 4055)",
              "d": "`WinEventLog:Security` 4625, `ms:o365:management` Sign-in failures",
              "q": "index=sec sourcetype=WinEventLog:Security EventCode=4625 earliest=-24h\n| stats count by src_ip, user, dest\n| where count>20",
              "m": "(1) Enrich `src_ip` with threat intel; (2) baseline VPN concentrators separately; (3) alert on distributed password spray; (4) validate lockout thresholds in AD GPO match monitoring thresholds.",
              "z": "Cluster map (src_ip), Table (top sources), Time chart (failures)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Office 365 (Splunkbase 4055).\n• Ensure the following data sources are available: `WinEventLog:Security` 4625, `ms:o365:management` Sign-in failures.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enrich `src_ip` with threat intel; (2) baseline VPN concentrators separately; (3) alert on distributed password spray; (4) validate lockout thresholds in AD GPO match monitoring thresholds.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sec sourcetype=WinEventLog:Security EventCode=4625 earliest=-24h\n| stats count by src_ip, user, dest\n| where count>20\n```\n\nUnderstanding this SPL\n\n**Unsuccessful Logon Attempts and Account Lockout Patterns (AC-7)** — Tracks brute-force clusters and lockouts for AC-7 unsuccessful logon attempts limits.\n\nDocumented **Data sources**: `WinEventLog:Security` 4625, `ms:o365:management` Sign-in failures. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sec; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sec, sourcetype=WinEventLog:Security, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_ip, user, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>20` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src, Authentication.user, Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Unsuccessful Logon Attempts and Account Lockout Patterns (AC-7)** — Tracks brute-force clusters and lockouts for AC-7 unsuccessful logon attempts limits.\n\nDocumented **Data sources**: `WinEventLog:Security` 4625, `ms:o365:management` Sign-in failures. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Cluster map (src_ip), Table (top sources), Time chart (failures)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracks brute-force clusters and lockouts for AC-7 unsuccessful logon attempts limits. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src, Authentication.user, Authentication.dest | sort - count",
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AC-7 is enforced — Splunk UC-22.14.20: Unsuccessful Logon Attempts and Account Lockout Patterns.",
                  "ea": "Saved search 'UC-22.14.20' running on WinEventLog:Security 4625, ms:o365:management Sign-in failures, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.21",
              "n": "System Use Notification Banner Acceptance in SSH Sessions (AC-8)",
              "c": "high",
              "f": "beginner",
              "v": "Verifies SSH banner or interactive MOTD markers appear in session logs for AC-8 system use notification where technically feasible.",
              "t": "Splunk Add-on for Unix and Linux (Splunkbase 833)",
              "d": "`linux:secure` sshd session lines",
              "q": "index=os sourcetype=linux:secure earliest=-7d process=sshd\n| search \"Accepted\" OR \"session opened\"\n| rex \"(?i)banner=(?<banner_ack>yes|no)\"\n| stats count by banner_ack, dest\n| where banner_ack=\"no\" OR isnull(banner_ack)",
              "m": "(1) Standardize sshd_config `Banner` path; (2) optionally emit structured JSON from bastion; (3) do not log full banner text if classified; (4) report exceptions monthly.",
              "z": "Table (hosts missing ack), Bar chart (by datacenter), Single value (percent compliant)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Unix and Linux (Splunkbase 833).\n• Ensure the following data sources are available: `linux:secure` sshd session lines.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardize sshd_config `Banner` path; (2) optionally emit structured JSON from bastion; (3) do not log full banner text if classified; (4) report exceptions monthly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=linux:secure earliest=-7d process=sshd\n| search \"Accepted\" OR \"session opened\"\n| rex \"(?i)banner=(?<banner_ack>yes|no)\"\n| stats count by banner_ack, dest\n| where banner_ack=\"no\" OR isnull(banner_ack)\n```\n\nUnderstanding this SPL\n\n**System Use Notification Banner Acceptance in SSH Sessions (AC-8)** — Verifies SSH banner or interactive MOTD markers appear in session logs for AC-8 system use notification where technically feasible.\n\nDocumented **Data sources**: `linux:secure` sshd session lines. **App/TA** (typical add-on context): Splunk Add-on for Unix and Linux (Splunkbase 833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: linux:secure. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=linux:secure, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by banner_ack, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where banner_ack=\"no\" OR isnull(banner_ack)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hosts missing ack), Bar chart (by datacenter), Single value (percent compliant)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We verifies SSH banner or interactive MOTD markers appear in session logs for AC-8 system use notification where technically feasible. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-8",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AC-8 is enforced — Splunk UC-22.14.21: System Use Notification Banner Acceptance in SSH Sessions.",
                  "ea": "Saved search 'UC-22.14.21' running on linux:secure sshd session lines, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.22",
              "n": "Session Lock Events for Workstation Inactivity Policy (AC-11)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors workstation lock (4800) and unlock (4801) ratio for AC-11 session lock enforcement.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742)",
              "d": "`WinEventLog:Security` 4800/4801",
              "q": "index=sec sourcetype=WinEventLog:Security EventCode IN (4800,4801) earliest=-7d\n| eval evt=if(EventCode=4800,\"lock\",\"unlock\")\n| stats count by evt, ComputerName\n| xyseries ComputerName evt count\n| eval lock_ratio=round(lock/(lock+unlock+0.001),3)\n| where lock_ratio<0.2",
              "m": "(1) Validate ComputerName field extraction; (2) exclude kiosk OU via lookup; (3) sample manual validation on outliers; (4) integrate with MDM compliance scores if available.",
              "z": "Bar chart (lock_ratio by OU via lookup), Table (lowest hosts)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Monitors workstation lock](https://splunkbase.splunk.com/app/4800), [and unlock](https://splunkbase.splunk.com/app/4801), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742).\n• Ensure the following data sources are available: `WinEventLog:Security` 4800/4801.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Validate ComputerName field extraction; (2) exclude kiosk OU via lookup; (3) sample manual validation on outliers; (4) integrate with MDM compliance scores if available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sec sourcetype=WinEventLog:Security EventCode IN (4800,4801) earliest=-7d\n| eval evt=if(EventCode=4800,\"lock\",\"unlock\")\n| stats count by evt, ComputerName\n| xyseries ComputerName evt count\n| eval lock_ratio=round(lock/(lock+unlock+0.001),3)\n| where lock_ratio<0.2\n```\n\nUnderstanding this SPL\n\n**Session Lock Events for Workstation Inactivity Policy (AC-11)** — Monitors workstation lock (4800) and unlock (4801) ratio for AC-11 session lock enforcement.\n\nDocumented **Data sources**: `WinEventLog:Security` 4800/4801. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sec; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sec, sourcetype=WinEventLog:Security, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **evt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by evt, ComputerName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pivots fields for charting with `xyseries`.\n• `eval` defines or adjusts **lock_ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where lock_ratio<0.2` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Session Lock Events for Workstation Inactivity Policy (AC-11)** — Monitors workstation lock (4800) and unlock (4801) ratio for AC-11 session lock enforcement.\n\nDocumented **Data sources**: `WinEventLog:Security` 4800/4801. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (lock_ratio by OU via lookup), Table (lowest hosts)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch workstation lock (4800) and unlock (4801) ratio for AC-11 session lock enforcement. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-11",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AC-11 is enforced — Splunk UC-22.14.22: Session Lock Events for Workstation Inactivity Policy.",
                  "ea": "Saved search 'UC-22.14.22' running on WinEventLog:Security 4800/4801, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.23",
              "n": "Session Termination on Logoff and VPN Disconnect (AC-12)",
              "c": "high",
              "f": "intermediate",
              "v": "Correlates VPN connect/disconnect with OS logoff events to evidence AC-12 session termination when sessions end.",
              "t": "Splunk Add-on for Cisco ASA (Splunkbase 1620), Splunk Add-on for Microsoft Windows (Splunkbase 742)",
              "d": "`cisco:asa` VPN teardown, `WinEventLog:Security` 4634",
              "q": "index=net OR index=sec earliest=-24h (sourcetype=cisco:asa OR sourcetype=WinEventLog:Security)\n| eval et=case(EventCode=4634,\"win_logoff\", match(_raw,\"Teardown\"),\"vpn_teardown\",1=1,null())\n| stats count by et, user, src_ip\n| sort - count",
              "m": "(1) Normalize VPN user field to match AD `user`; (2) alert on long VPN sessions without logoff (heuristic); (3) document remote access policy mapping to AC-17; (4) tune for split tunnel false positives.",
              "z": "Timeline (VPN vs logoff), Table (long sessions), Single value (disconnect anomalies)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Cisco ASA](https://splunkbase.splunk.com/app/1620), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Cisco ASA (Splunkbase 1620), Splunk Add-on for Microsoft Windows (Splunkbase 742).\n• Ensure the following data sources are available: `cisco:asa` VPN teardown, `WinEventLog:Security` 4634.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize VPN user field to match AD `user`; (2) alert on long VPN sessions without logoff (heuristic); (3) document remote access policy mapping to AC-17; (4) tune for split tunnel false positives.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=net OR index=sec earliest=-24h (sourcetype=cisco:asa OR sourcetype=WinEventLog:Security)\n| eval et=case(EventCode=4634,\"win_logoff\", match(_raw,\"Teardown\"),\"vpn_teardown\",1=1,null())\n| stats count by et, user, src_ip\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Session Termination on Logoff and VPN Disconnect (AC-12)** — Correlates VPN connect/disconnect with OS logoff events to evidence AC-12 session termination when sessions end.\n\nDocumented **Data sources**: `cisco:asa` VPN teardown, `WinEventLog:Security` 4634. **App/TA** (typical add-on context): Splunk Add-on for Cisco ASA (Splunkbase 1620), Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: net, sec; **sourcetype**: cisco:asa, WinEventLog:Security. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=net, index=sec, sourcetype=cisco:asa, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **et** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by et, user, src_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Session Termination on Logoff and VPN Disconnect (AC-12)** — Correlates VPN connect/disconnect with OS logoff events to evidence AC-12 session termination when sessions end.\n\nDocumented **Data sources**: `cisco:asa` VPN teardown, `WinEventLog:Security` 4634. **App/TA** (typical add-on context): Splunk Add-on for Cisco ASA (Splunkbase 1620), Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (VPN vs logoff), Table (long sessions), Single value (disconnect anomalies)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect VPN connect/disconnect with OS logoff events to evidence AC-12 session termination when sessions end. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Network_Traffic",
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_asa"
              ],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-12",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AC-12 is enforced — Splunk UC-22.14.23: Session Termination on Logoff and VPN Disconnect.",
                  "ea": "Saved search 'UC-22.14.23' running on cisco:asa VPN teardown, WinEventLog:Security 4634, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ASA",
                "id": 1621,
                "url": "https://splunkbase.splunk.com/app/1621"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.24",
              "n": "Remote Access Anomalies — Geo-Velocity and Off-Hours VPN (AC-17)",
              "c": "critical",
              "f": "advanced",
              "v": "Detects impossible travel and off-hours remote access for AC-17 remote access monitoring.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Cisco ASA (Splunkbase 1620)",
              "d": "`ms:o365:management` SignIns, `cisco:asa` VPN logs",
              "q": "index=o365 sourcetype=ms:o365:management Workload=AzureActiveDirectory Operation=UserLoggedIn earliest=-24h\n| iplocation extended_field=ClientIP\n| stats earliest(_time) as t1, latest(_time) as t2, values(Country) as countries by UserId, ClientIP\n| eval delta_h=round((t2-t1)/3600,2)\n| where mvcount(countries)>1 AND delta_h<2",
              "m": "(1) Validate `UserLoggedIn` operation naming for tenant; (2) pair with MFA status field if present; (3) reduce noise with corporate travel calendar lookup optional; (4) document tuning in SSP.",
              "z": "Map (country hops), Table (impossible travel), Time chart (after-hours VPN)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [Splunk Add-on for Cisco ASA](https://splunkbase.splunk.com/app/1620), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Cisco ASA (Splunkbase 1620).\n• Ensure the following data sources are available: `ms:o365:management` SignIns, `cisco:asa` VPN logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Validate `UserLoggedIn` operation naming for tenant; (2) pair with MFA status field if present; (3) reduce noise with corporate travel calendar lookup optional; (4) document tuning in SSP.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:management Workload=AzureActiveDirectory Operation=UserLoggedIn earliest=-24h\n| iplocation extended_field=ClientIP\n| stats earliest(_time) as t1, latest(_time) as t2, values(Country) as countries by UserId, ClientIP\n| eval delta_h=round((t2-t1)/3600,2)\n| where mvcount(countries)>1 AND delta_h<2\n```\n\nUnderstanding this SPL\n\n**Remote Access Anomalies — Geo-Velocity and Off-Hours VPN (AC-17)** — Detects impossible travel and off-hours remote access for AC-17 remote access monitoring.\n\nDocumented **Data sources**: `ms:o365:management` SignIns, `cisco:asa` VPN logs. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Cisco ASA (Splunkbase 1620). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:management, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Remote Access Anomalies — Geo-Velocity and Off-Hours VPN (AC-17)**): iplocation extended_field=ClientIP\n• `stats` rolls up events into metrics; results are split **by UserId, ClientIP** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **delta_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mvcount(countries)>1 AND delta_h<2` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Remote Access Anomalies — Geo-Velocity and Off-Hours VPN (AC-17)** — Detects impossible travel and off-hours remote access for AC-17 remote access monitoring.\n\nDocumented **Data sources**: `ms:o365:management` SignIns, `cisco:asa` VPN logs. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Cisco ASA (Splunkbase 1620). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (country hops), Table (impossible travel), Time chart (after-hours VPN)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects impossible travel and off-hours remote access for AC-17 remote access monitoring. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cisco",
                "m365"
              ],
              "em": [
                "cisco_asa"
              ],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-17",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AC-17 is enforced — Splunk UC-22.14.24: Remote Access Anomalies — Geo-Velocity and Off-Hours VPN.",
                  "ea": "Saved search 'UC-22.14.24' running on ms:o365:management SignIns, cisco:asa VPN logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ASA",
                "id": 1621,
                "url": "https://splunkbase.splunk.com/app/1621"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.25",
              "n": "Use of External Systems — Unmanaged SaaS OAuth Grants (AC-20)",
              "c": "high",
              "f": "advanced",
              "v": "Highlights high-risk OAuth consent grants to external applications for AC-20 use of external systems governance.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055)",
              "d": "`ms:o365:management` OAuth2 / consent operations",
              "q": "index=o365 sourcetype=ms:o365:management earliest=-30d Operation=\"Consent to application\"\n| stats values(UserId) as users, latest(_time) as last_seen by ObjectId\n| lookup risky_oauth_apps.csv ObjectId OUTPUT app_risk\n| where app_risk=\"high\" OR match(ObjectId,\"unknown\")",
              "m": "(1) Build `risky_oauth_apps.csv` from security reviews; (2) integrate CASB signals if forwarded; (3) quarterly user attestation; (4) auto-revoke workflow outside Splunk with ticket link.",
              "z": "Table (risky consents), Bar chart (by app), Time chart (trend)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055).\n• Ensure the following data sources are available: `ms:o365:management` OAuth2 / consent operations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build `risky_oauth_apps.csv` from security reviews; (2) integrate CASB signals if forwarded; (3) quarterly user attestation; (4) auto-revoke workflow outside Splunk with ticket link.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:management earliest=-30d Operation=\"Consent to application\"\n| stats values(UserId) as users, latest(_time) as last_seen by ObjectId\n| lookup risky_oauth_apps.csv ObjectId OUTPUT app_risk\n| where app_risk=\"high\" OR match(ObjectId,\"unknown\")\n```\n\nUnderstanding this SPL\n\n**Use of External Systems — Unmanaged SaaS OAuth Grants (AC-20)** — Highlights high-risk OAuth consent grants to external applications for AC-20 use of external systems governance.\n\nDocumented **Data sources**: `ms:o365:management` OAuth2 / consent operations. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:management, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ObjectId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where app_risk=\"high\" OR match(ObjectId,\"unknown\")` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (risky consents), Bar chart (by app), Time chart (trend)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We highlights high-risk OAuth consent grants to external applications for AC-20 use of external systems governance. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-20",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 AC-20 is enforced — Splunk UC-22.14.25: Use of External Systems — Unmanaged SaaS OAuth Grants.",
                  "ea": "Saved search 'UC-22.14.25' running on ms:o365:management OAuth2 / consent operations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.26",
              "n": "Multifactor Authentication Gaps for Interactive Sign-Ins (IA-2)",
              "c": "critical",
              "f": "intermediate",
              "v": "Surfaces cloud sign-ins that succeed without strong authentication methods for IA-2 identification and authentication (MFA).",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Google Workspace (Splunkbase 5556)",
              "d": "`ms:o365:management` SignIn logs, `google:workspace:login`",
              "q": "index=o365 sourcetype=ms:o365:management Workload=AzureActiveDirectory earliest=-24h\n| search Operation=\"UserLoggedIn\" OR Operation=\"Sign-in activity*\"\n| eval mfa=if(match(AuthenticationMethod,\"*MFA*\") OR match(AuthenticationMethod,\"*multifactor*\"),1,0)\n| where mfa=0 AND ResultStatus=\"Success\"\n| stats count by UserId, ClientIP, AuthenticationMethod, ApplicationId",
              "m": "(1) Normalize `AuthenticationMethod` values for your tenant; (2) exclude break-glass accounts via lookup; (3) alert on admin roles without MFA; (4) map to Conditional Access rollout milestones.",
              "z": "Table (users without MFA), Bar chart (by app), Time chart (trend)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [Splunk Add-on for Google Workspace](https://splunkbase.splunk.com/app/5556), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Google Workspace (Splunkbase 5556).\n• Ensure the following data sources are available: `ms:o365:management` SignIn logs, `google:workspace:login`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize `AuthenticationMethod` values for your tenant; (2) exclude break-glass accounts via lookup; (3) alert on admin roles without MFA; (4) map to Conditional Access rollout milestones.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:management Workload=AzureActiveDirectory earliest=-24h\n| search Operation=\"UserLoggedIn\" OR Operation=\"Sign-in activity*\"\n| eval mfa=if(match(AuthenticationMethod,\"*MFA*\") OR match(AuthenticationMethod,\"*multifactor*\"),1,0)\n| where mfa=0 AND ResultStatus=\"Success\"\n| stats count by UserId, ClientIP, AuthenticationMethod, ApplicationId\n```\n\nUnderstanding this SPL\n\n**Multifactor Authentication Gaps for Interactive Sign-Ins (IA-2)** — Surfaces cloud sign-ins that succeed without strong authentication methods for IA-2 identification and authentication (MFA).\n\nDocumented **Data sources**: `ms:o365:management` SignIn logs, `google:workspace:login`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Google Workspace (Splunkbase 5556). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:management, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **mfa** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mfa=0 AND ResultStatus=\"Success\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by UserId, ClientIP, AuthenticationMethod, ApplicationId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Multifactor Authentication Gaps for Interactive Sign-Ins (IA-2)** — Surfaces cloud sign-ins that succeed without strong authentication methods for IA-2 identification and authentication (MFA).\n\nDocumented **Data sources**: `ms:o365:management` SignIn logs, `google:workspace:login`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Google Workspace (Splunkbase 5556). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (users without MFA), Bar chart (by app), Time chart (trend)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surfaces cloud sign-ins that succeed without strong authentication methods for IA-2 identification and authentication (MFA). That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IA-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IA-2 (Identification and authentication (users)) is enforced — Splunk UC-22.14.26: Multifactor Authentication Gaps for Interactive Sign-Ins.",
                  "ea": "Saved search 'UC-22.14.26' running on ms:o365:management SignIn logs, google:workspace:login, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.27",
              "n": "Device Identification for Corporate-Managed Endpoints (IA-3)",
              "c": "high",
              "f": "advanced",
              "v": "Correlates certificate- or MDM-based device IDs with authentication events for IA-3 device identification and compliance.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Microsoft Windows (Splunkbase 742)",
              "d": "`ms:o365:management` device fields, `WinEventLog:Security` 4768 (Kerberos) with device claims if enabled",
              "q": "index=o365 sourcetype=ms:o365:management earliest=-24h DeviceId=*\n| stats latest(OperatingSystem) as os, dc(UserId) as users by DeviceId\n| lookup corp_managed_devices.csv DeviceId OUTPUT owner, compliance_state\n| where compliance_state!=\"compliant\" OR isnull(compliance_state)",
              "m": "(1) Build `corp_managed_devices.csv` from Intune/JAMF export; (2) refresh nightly; (3) alert on unknown DeviceId with privileged users; (4) document attribute mapping in SSP.",
              "z": "Table (non-compliant devices), Pie chart (compliance mix), Single value (unknown devices)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Microsoft Windows (Splunkbase 742).\n• Ensure the following data sources are available: `ms:o365:management` device fields, `WinEventLog:Security` 4768 (Kerberos) with device claims if enabled.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build `corp_managed_devices.csv` from Intune/JAMF export; (2) refresh nightly; (3) alert on unknown DeviceId with privileged users; (4) document attribute mapping in SSP.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:management earliest=-24h DeviceId=*\n| stats latest(OperatingSystem) as os, dc(UserId) as users by DeviceId\n| lookup corp_managed_devices.csv DeviceId OUTPUT owner, compliance_state\n| where compliance_state!=\"compliant\" OR isnull(compliance_state)\n```\n\nUnderstanding this SPL\n\n**Device Identification for Corporate-Managed Endpoints (IA-3)** — Correlates certificate- or MDM-based device IDs with authentication events for IA-3 device identification and compliance.\n\nDocumented **Data sources**: `ms:o365:management` device fields, `WinEventLog:Security` 4768 (Kerberos) with device claims if enabled. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:management, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by DeviceId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where compliance_state!=\"compliant\" OR isnull(compliance_state)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Device Identification for Corporate-Managed Endpoints (IA-3)** — Correlates certificate- or MDM-based device IDs with authentication events for IA-3 device identification and compliance.\n\nDocumented **Data sources**: `ms:o365:management` device fields, `WinEventLog:Security` 4768 (Kerberos) with device claims if enabled. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055), Splunk Add-on for Microsoft Windows (Splunkbase 742). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant devices), Pie chart (compliance mix), Single value (unknown devices)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect certificate- or MDM-based device IDs with authentication events for IA-3 device identification and compliance. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IA-3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IA-3 is enforced — Splunk UC-22.14.27: Device Identification for Corporate-Managed Endpoints.",
                  "ea": "Saved search 'UC-22.14.27' running on ms:o365:management device fields, WinEventLog:Security 4768 (Kerberos) with device claims if enabled, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.28",
              "n": "Identifier Management — Non-Human Service Account Sprawl (IA-4)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks creation of service principals and machine accounts for IA-4 identifier management discipline.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for AWS (Splunkbase 1876)",
              "d": "`WinEventLog:Security` 4741/4742, `aws:cloudtrail` `CreateUser`/`CreateRole`",
              "q": "index=sec sourcetype=WinEventLog:Security EventCode IN (4741,4742) earliest=-30d\n| stats latest(_time) as last_seen by Account_Name, EventCode\n| lookup svc_account_registry.csv Account_Name OUTPUT owner_team, approved\n| where approved!=\"true\" OR isnull(approved)",
              "m": "(1) Maintain authoritative registry CSV or KVStore; (2) integrate IAM ticketing for approvals; (3) monthly cleanup report; (4) align naming standards with CM-8 tags.",
              "z": "Table (unregistered service accounts), Time chart (creations), Bar chart (by OU)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for AWS (Splunkbase 1876).\n• Ensure the following data sources are available: `WinEventLog:Security` 4741/4742, `aws:cloudtrail` `CreateUser`/`CreateRole`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain authoritative registry CSV or KVStore; (2) integrate IAM ticketing for approvals; (3) monthly cleanup report; (4) align naming standards with CM-8 tags.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sec sourcetype=WinEventLog:Security EventCode IN (4741,4742) earliest=-30d\n| stats latest(_time) as last_seen by Account_Name, EventCode\n| lookup svc_account_registry.csv Account_Name OUTPUT owner_team, approved\n| where approved!=\"true\" OR isnull(approved)\n```\n\nUnderstanding this SPL\n\n**Identifier Management — Non-Human Service Account Sprawl (IA-4)** — Tracks creation of service principals and machine accounts for IA-4 identifier management discipline.\n\nDocumented **Data sources**: `WinEventLog:Security` 4741/4742, `aws:cloudtrail` `CreateUser`/`CreateRole`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sec; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sec, sourcetype=WinEventLog:Security, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Account_Name, EventCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where approved!=\"true\" OR isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Identifier Management — Non-Human Service Account Sprawl (IA-4)** — Tracks creation of service principals and machine accounts for IA-4 identifier management discipline.\n\nDocumented **Data sources**: `WinEventLog:Security` 4741/4742, `aws:cloudtrail` `CreateUser`/`CreateRole`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unregistered service accounts), Time chart (creations), Bar chart (by OU)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracks creation of service principals and machine accounts for IA-4 identifier management discipline. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IA-4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IA-4 is enforced — Splunk UC-22.14.28: Identifier Management — Non-Human Service Account Sprawl.",
                  "ea": "Saved search 'UC-22.14.28' running on WinEventLog:Security 4741/4742, aws:cloudtrail CreateUser/CreateRole, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.29",
              "n": "Authenticator Management — Password Age and Rotation Anomalies (IA-5)",
              "c": "high",
              "f": "intermediate",
              "v": "Detects accounts with old passwords or sudden mass resets indicative of IA-5 authenticator management issues or attacks.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742)",
              "d": "`WinEventLog:Security` 4723/4724 (password reset/change)",
              "q": "index=sec sourcetype=WinEventLog:Security EventCode IN (4723,4724) earliest=-7d\n| stats count by Account_Name, EventCode, src\n| eventstats sum(count) as resets by Account_Name\n| where resets>3",
              "m": "(1) Enrich with last password set from AD export join; (2) alert on helpdesk spikes; (3) correlate with impossible travel; (4) document self-service vs admin resets.",
              "z": "Table (burst resets), Time chart (4723 volume), Single value (unique accounts)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742).\n• Ensure the following data sources are available: `WinEventLog:Security` 4723/4724 (password reset/change).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enrich with last password set from AD export join; (2) alert on helpdesk spikes; (3) correlate with impossible travel; (4) document self-service vs admin resets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sec sourcetype=WinEventLog:Security EventCode IN (4723,4724) earliest=-7d\n| stats count by Account_Name, EventCode, src\n| eventstats sum(count) as resets by Account_Name\n| where resets>3\n```\n\nUnderstanding this SPL\n\n**Authenticator Management — Password Age and Rotation Anomalies (IA-5)** — Detects accounts with old passwords or sudden mass resets indicative of IA-5 authenticator management issues or attacks.\n\nDocumented **Data sources**: `WinEventLog:Security` 4723/4724 (password reset/change). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sec; **sourcetype**: WinEventLog:Security. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sec, sourcetype=WinEventLog:Security, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Account_Name, EventCode, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by Account_Name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where resets>3` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Authenticator Management — Password Age and Rotation Anomalies (IA-5)** — Detects accounts with old passwords or sudden mass resets indicative of IA-5 authenticator management issues or attacks.\n\nDocumented **Data sources**: `WinEventLog:Security` 4723/4724 (password reset/change). **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (burst resets), Time chart (4723 volume), Single value (unique accounts)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects accounts with old passwords or sudden mass resets indicative of IA-5 authenticator management issues or attacks. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IA-5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IA-5 (Authenticator management) is enforced — Splunk UC-22.14.29: Authenticator Management — Password Age and Rotation Anomalies.",
                  "ea": "Saved search 'UC-22.14.29' running on WinEventLog:Security 4723/4724 (password reset/change), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.30",
              "n": "Authentication Feedback — Credential Stuffing via Login Failures (IA-6)",
              "c": "medium",
              "f": "intermediate",
              "v": "Uses failed authentication ratios without exposing secrets to tune IA-6 authentication feedback monitoring at the aggregate level.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055)",
              "d": "`ms:o365:management` sign-in failures",
              "q": "index=o365 sourcetype=ms:o365:management earliest=-24h ResultStatus=\"Failed\"\n| stats count by ClientIP, UserId\n| where count>50\n| sort - count",
              "m": "(1) Do not log raw passwords — rely on platform signals only; (2) integrate threat intel on ClientIP; (3) rate-limit alerts; (4) feed SOAR password spray playbooks.",
              "z": "Map (ClientIP), Table (top IPs), Bar chart (failure codes if extracted)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055).\n• Ensure the following data sources are available: `ms:o365:management` sign-in failures.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Do not log raw passwords — rely on platform signals only; (2) integrate threat intel on ClientIP; (3) rate-limit alerts; (4) feed SOAR password spray playbooks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:management earliest=-24h ResultStatus=\"Failed\"\n| stats count by ClientIP, UserId\n| where count>50\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Authentication Feedback — Credential Stuffing via Login Failures (IA-6)** — Uses failed authentication ratios without exposing secrets to tune IA-6 authentication feedback monitoring at the aggregate level.\n\nDocumented **Data sources**: `ms:o365:management` sign-in failures. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:management, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ClientIP, UserId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>50` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Authentication Feedback — Credential Stuffing via Login Failures (IA-6)** — Uses failed authentication ratios without exposing secrets to tune IA-6 authentication feedback monitoring at the aggregate level.\n\nDocumented **Data sources**: `ms:o365:management` sign-in failures. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (ClientIP), Table (top IPs), Bar chart (failure codes if extracted)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We uses failed authentication ratios without exposing secrets to tune IA-6 authentication feedback monitoring at the aggregate level. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IA-6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IA-6 is enforced — Splunk UC-22.14.30: Authentication Feedback — Credential Stuffing via Login Failures.",
                  "ea": "Saved search 'UC-22.14.30' running on ms:o365:management sign-in failures, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.31",
              "n": "Identification of Non-Organization Users in Collaboration Tools (IA-8)",
              "c": "high",
              "f": "intermediate",
              "v": "Lists guest and external identities accessing tenant resources for IA-8 identification and authentication of non-organizational users.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055)",
              "d": "`ms:o365:management` guest invitations and logons",
              "q": "index=o365 sourcetype=ms:o365:management earliest=-30d (UserType=\"Guest\" OR Operation=\"Add member to group\")\n| stats latest(_time) as last_seen, values(Workload) as workloads by UserId, ObjectId\n| sort - last_seen",
              "m": "(1) Validate `UserType` field availability; (2) cross-check with approved guest domains CSV; (3) alert on guests added to privileged groups; (4) quarterly access review export.",
              "z": "Table (guest activity), Time chart (invitations), Bar chart (by domain extracted from UserId)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055).\n• Ensure the following data sources are available: `ms:o365:management` guest invitations and logons.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Validate `UserType` field availability; (2) cross-check with approved guest domains CSV; (3) alert on guests added to privileged groups; (4) quarterly access review export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:management earliest=-30d (UserType=\"Guest\" OR Operation=\"Add member to group\")\n| stats latest(_time) as last_seen, values(Workload) as workloads by UserId, ObjectId\n| sort - last_seen\n```\n\nUnderstanding this SPL\n\n**Identification of Non-Organization Users in Collaboration Tools (IA-8)** — Lists guest and external identities accessing tenant resources for IA-8 identification and authentication of non-organizational users.\n\nDocumented **Data sources**: `ms:o365:management` guest invitations and logons. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:management, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by UserId, ObjectId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (guest activity), Time chart (invitations), Bar chart (by domain extracted from UserId)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We lists guest and external identities accessing tenant resources for IA-8 identification and authentication of non-organizational users. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IA-8",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IA-8 is enforced — Splunk UC-22.14.31: Identification of Non-Organization Users in Collaboration Tools.",
                  "ea": "Saved search 'UC-22.14.31' running on ms:o365:management guest invitations and logons, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.32",
              "n": "Re-Authentication for Sensitive Application Roles (IA-11)",
              "c": "high",
              "f": "advanced",
              "v": "Measures time since last strong authentication for high-risk app roles supporting IA-11 re-authentication requirements.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055)",
              "d": "`ms:o365:management` sign-ins with `AuthenticationRequirement` / `SessionId` if present",
              "q": "index=o365 sourcetype=ms:o365:management earliest=-24h\n| search ApplicationId IN (\"*high_risk_app_guid_1*\",\"*high_risk_app_guid_2*\")\n| stats earliest(_time) as first_seen, latest(_time) as last_seen by UserId, SessionId\n| eval session_age_min=round((now()-last_seen)/60,1)\n| where session_age_min>480",
              "m": "(1) Replace ApplicationId tokens with production GUIDs; (2) align session length to policy; (3) pair with CASB step-up events if available; (4) document exceptions for legacy apps.",
              "z": "Table (long sessions), Single value (policy breaches), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055).\n• Ensure the following data sources are available: `ms:o365:management` sign-ins with `AuthenticationRequirement` / `SessionId` if present.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Replace ApplicationId tokens with production GUIDs; (2) align session length to policy; (3) pair with CASB step-up events if available; (4) document exceptions for legacy apps.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:management earliest=-24h\n| search ApplicationId IN (\"*high_risk_app_guid_1*\",\"*high_risk_app_guid_2*\")\n| stats earliest(_time) as first_seen, latest(_time) as last_seen by UserId, SessionId\n| eval session_age_min=round((now()-last_seen)/60,1)\n| where session_age_min>480\n```\n\nUnderstanding this SPL\n\n**Re-Authentication for Sensitive Application Roles (IA-11)** — Measures time since last strong authentication for high-risk app roles supporting IA-11 re-authentication requirements.\n\nDocumented **Data sources**: `ms:o365:management` sign-ins with `AuthenticationRequirement` / `SessionId` if present. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:management, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by UserId, SessionId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **session_age_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where session_age_min>480` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Re-Authentication for Sensitive Application Roles (IA-11)** — Measures time since last strong authentication for high-risk app roles supporting IA-11 re-authentication requirements.\n\nDocumented **Data sources**: `ms:o365:management` sign-ins with `AuthenticationRequirement` / `SessionId` if present. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (long sessions), Single value (policy breaches), Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measures time since last strong authentication for high-risk app roles supporting IA-11 re-authentication requirements. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IA-11",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IA-11 is enforced — Splunk UC-22.14.32: Re-Authentication for Sensitive Application Roles.",
                  "ea": "Saved search 'UC-22.14.32' running on ms:o365:management sign-ins with AuthenticationRequirement / SessionId if present, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.33",
              "n": "Identity Proofing Evidence for HR Onboarding Events (IA-12)",
              "c": "high",
              "f": "beginner",
              "v": "Tracks completion timestamps of identity proofing checkpoints from HR systems for IA-12 identity evidence (process control, not PII in Splunk).",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC)",
              "d": "`snow:sc_req_item` onboarding catalog, or HEC JSON `id_proof_status` without government ID numbers",
              "q": "index=itsm sourcetype=snow:sc_req_item earliest=-90d short_description=\"*Onboard*\"\n| eval proof_complete=if(state=\"Closed\" AND match(short_description,\"(?i)id proof\"),1,0)\n| stats latest(_time) as closed_time, values(state) as states by number, opened_at\n| where proof_complete=0 AND relative_time(now(),opened_at) < 2592000",
              "m": "(1) Never index raw ID document numbers — use status flags only; (2) align `short_description` filters with catalog; (3) alert HR on overdue proofing; (4) map retention to AU-11.",
              "z": "Table (open onboarding), Time chart (cycle time), Single value (SLA breaches)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC).\n• Ensure the following data sources are available: `snow:sc_req_item` onboarding catalog, or HEC JSON `id_proof_status` without government ID numbers.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Never index raw ID document numbers — use status flags only; (2) align `short_description` filters with catalog; (3) alert HR on overdue proofing; (4) map retention to AU-11.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=snow:sc_req_item earliest=-90d short_description=\"*Onboard*\"\n| eval proof_complete=if(state=\"Closed\" AND match(short_description,\"(?i)id proof\"),1,0)\n| stats latest(_time) as closed_time, values(state) as states by number, opened_at\n| where proof_complete=0 AND relative_time(now(),opened_at) < 2592000\n```\n\nUnderstanding this SPL\n\n**Identity Proofing Evidence for HR Onboarding Events (IA-12)** — Tracks completion timestamps of identity proofing checkpoints from HR systems for IA-12 identity evidence (process control, not PII in Splunk).\n\nDocumented **Data sources**: `snow:sc_req_item` onboarding catalog, or HEC JSON `id_proof_status` without government ID numbers. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), HTTP Event Collector (HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:sc_req_item. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=snow:sc_req_item, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **proof_complete** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by number, opened_at** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where proof_complete=0 AND relative_time(now(),opened_at) < 2592000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (open onboarding), Time chart (cycle time), Single value (SLA breaches)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracks completion timestamps of identity proofing checkpoints from HR systems for IA-12 identity evidence (process control, not PII in Splunk). That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IA-12",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IA-12 is enforced — Splunk UC-22.14.33: Identity Proofing Evidence for HR Onboarding Events.",
                  "ea": "Saved search 'UC-22.14.33' running on snow:sc_req_item onboarding catalog, or HEC JSON id_proof_status without government ID numbers, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.34",
              "n": "Flaw Remediation SLA Tracking from Vulnerability Scans (SI-2)",
              "c": "critical",
              "f": "intermediate",
              "v": "Measures age of open critical vulnerabilities for SI-2 flaw remediation timeliness.",
              "t": "Tenable Add-On for Splunk (Splunkbase 4060), Splunk Enterprise Security (Splunkbase 263)",
              "d": "`tenable:sc` or `tenable:vuln` plugin outputs, ES `index=vulnerabilities` if normalized",
              "q": "index=vulns sourcetype=tenable:vuln earliest=-7d severity IN (\"Critical\",\"High\")\n| eval age_days=round((now()-strptime(first_found,\"%Y-%m-%d\"))/86400,1)\n| where age_days>30 AND state!=\"Fixed\"\n| stats max(age_days) as oldest_open, values(plugin_id) as plugins by host, severity",
              "m": "(1) Normalize `first_found` timestamp format from your TA; (2) join CMDB owner; (3) alert on SLA breach; (4) feed POA&M (CA-5).",
              "z": "Table (oldest vulns), Bar chart (by owner via lookup), Single value (count past SLA)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tenable Add-On for Splunk (Splunkbase 4060), Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `tenable:sc` or `tenable:vuln` plugin outputs, ES `index=vulnerabilities` if normalized.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize `first_found` timestamp format from your TA; (2) join CMDB owner; (3) alert on SLA breach; (4) feed POA&M (CA-5).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vulns sourcetype=tenable:vuln earliest=-7d severity IN (\"Critical\",\"High\")\n| eval age_days=round((now()-strptime(first_found,\"%Y-%m-%d\"))/86400,1)\n| where age_days>30 AND state!=\"Fixed\"\n| stats max(age_days) as oldest_open, values(plugin_id) as plugins by host, severity\n```\n\nUnderstanding this SPL\n\n**Flaw Remediation SLA Tracking from Vulnerability Scans (SI-2)** — Measures age of open critical vulnerabilities for SI-2 flaw remediation timeliness.\n\nDocumented **Data sources**: `tenable:sc` or `tenable:vuln` plugin outputs, ES `index=vulnerabilities` if normalized. **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060), Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vulns; **sourcetype**: tenable:vuln. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vulns, sourcetype=tenable:vuln, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>30 AND state!=\"Fixed\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest, Vulnerabilities.severity | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Flaw Remediation SLA Tracking from Vulnerability Scans (SI-2)** — Measures age of open critical vulnerabilities for SI-2 flaw remediation timeliness.\n\nDocumented **Data sources**: `tenable:sc` or `tenable:vuln` plugin outputs, ES `index=vulnerabilities` if normalized. **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060), Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (oldest vulns), Bar chart (by owner via lookup), Single value (count past SLA)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measures age of open critical vulnerabilities for SI-2 flaw remediation timeliness. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest, Vulnerabilities.severity | sort - count",
              "e": [
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SI-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SI-2 is enforced — Splunk UC-22.14.34: Flaw Remediation SLA Tracking from Vulnerability Scans.",
                  "ea": "Saved search 'UC-22.14.34' running on tenable:sc or tenable:vuln plugin outputs, ES index=vulnerabilities if normalized, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.35",
              "n": "Malicious Code Protection — EDR / AV Detections Volume and Gaps (SI-3)",
              "c": "critical",
              "f": "intermediate",
              "v": "Monitors endpoint malware detection pipelines for silence or surges for SI-3 malicious code protection effectiveness.",
              "t": "Splunk Add-on for Microsoft Sysmon (community/TA), CrowdStrike Falcon Event Streams (TA), or CIM Malware sources",
              "d": "`XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode=1 with AV vendor logs, `malware` CIM-tagged data",
              "q": "index=endpoint earliest=-24h (sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational OR tag=malware)\n| bin _time span=1h\n| stats count by _time, host\n| eventstats median(count) as med by host\n| eval silent=if(count < med*0.1 AND med>10,1,0)\n| where silent=1",
              "m": "(1) Baseline per host class; (2) exclude offline laptops with lookup; (3) alert on sustained silence; (4) cross-check agent health from MDM.",
              "z": "Time chart (detections per host), Table (silent hosts), Single value (agents anomalous)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Malware](https://docs.splunk.com/Documentation/CIM/latest/User/Malware)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Sysmon (community/TA), CrowdStrike Falcon Event Streams (TA), or CIM Malware sources.\n• Ensure the following data sources are available: `XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode=1 with AV vendor logs, `malware` CIM-tagged data.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Baseline per host class; (2) exclude offline laptops with lookup; (3) alert on sustained silence; (4) cross-check agent health from MDM.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint earliest=-24h (sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational OR tag=malware)\n| bin _time span=1h\n| stats count by _time, host\n| eventstats median(count) as med by host\n| eval silent=if(count < med*0.1 AND med>10,1,0)\n| where silent=1\n```\n\nUnderstanding this SPL\n\n**Malicious Code Protection — EDR / AV Detections Volume and Gaps (SI-3)** — Monitors endpoint malware detection pipelines for silence or surges for SI-3 malicious code protection effectiveness.\n\nDocumented **Data sources**: `XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode=1 with AV vendor logs, `malware` CIM-tagged data. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Sysmon (community/TA), CrowdStrike Falcon Event Streams (TA), or CIM Malware sources. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: XmlWinEventLog:Microsoft-Windows-Sysmon/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational, time bounds, tags. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **silent** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where silent=1` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Malicious Code Protection — EDR / AV Detections Volume and Gaps (SI-3)** — Monitors endpoint malware detection pipelines for silence or surges for SI-3 malicious code protection effectiveness.\n\nDocumented **Data sources**: `XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode=1 with AV vendor logs, `malware` CIM-tagged data. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Sysmon (community/TA), CrowdStrike Falcon Event Streams (TA), or CIM Malware sources. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Malware.Malware_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (detections per host), Table (silent hosts), Single value (agents anomalous)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch endpoint malware detection pipelines for silence or surges for SI-3 malicious code protection effectiveness. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Malware"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Malware.Malware_Attacks by Malware_Attacks.dest span=1h | sort - count",
              "e": [
                "crowdstrike",
                "dell_emc"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SI-3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SI-3 is enforced — Splunk UC-22.14.35: Malicious Code Protection — EDR / AV Detections Volume and Gaps.",
                  "ea": "Saved search 'UC-22.14.35' running on XmlWinEventLog:Microsoft-Windows-Sysmon/Operational EventCode=1 with AV vendor logs, malware CIM-tagged data, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.36",
              "n": "System Monitoring — Host Instrumentation Coverage vs Inventory (SI-4)",
              "c": "critical",
              "f": "advanced",
              "v": "Compares active forwarders to CMDB inventory for SI-4 system monitoring coverage.",
              "t": "Splunk Enterprise Deployment Server / Splunk Cloud Monitoring Console",
              "d": "`_internal` `splunkd` connection metrics, lookup `cmdb_hosts.csv`",
              "q": "index=_internal sourcetype=splunkd component=Metrics group=tcp, connections earliest=-4h\n| stats latest(kb) as kb, dc(source) as peers by hostname\n| rename hostname as host\n| inputlookup append=t cmdb_hosts.csv\n| stats values(in_scope) as in_scope, max(peers) as reporting by host\n| where in_scope=\"true\" AND isnull(reporting)",
              "m": "(1) Replace internal metric approach with `| metadata type=hosts` nightly if simpler; (2) reconcile host naming; (3) alert missing Tier0; (4) document agent deployment standard (CM-7).",
              "z": "Table (missing monitoring), Single value (coverage %), Choropleth (by site from lookup)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Deployment Server / Splunk Cloud Monitoring Console.\n• Ensure the following data sources are available: `_internal` `splunkd` connection metrics, lookup `cmdb_hosts.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Replace internal metric approach with `| metadata type=hosts` nightly if simpler; (2) reconcile host naming; (3) alert missing Tier0; (4) document agent deployment standard (CM-7).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal sourcetype=splunkd component=Metrics group=tcp, connections earliest=-4h\n| stats latest(kb) as kb, dc(source) as peers by hostname\n| rename hostname as host\n| inputlookup append=t cmdb_hosts.csv\n| stats values(in_scope) as in_scope, max(peers) as reporting by host\n| where in_scope=\"true\" AND isnull(reporting)\n```\n\nUnderstanding this SPL\n\n**System Monitoring — Host Instrumentation Coverage vs Inventory (SI-4)** — Compares active forwarders to CMDB inventory for SI-4 system monitoring coverage.\n\nDocumented **Data sources**: `_internal` `splunkd` connection metrics, lookup `cmdb_hosts.csv`. **App/TA** (typical add-on context): Splunk Enterprise Deployment Server / Splunk Cloud Monitoring Console. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal; **sourcetype**: splunkd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, sourcetype=splunkd, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Renames fields with `rename` for clarity or joins.\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where in_scope=\"true\" AND isnull(reporting)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**System Monitoring — Host Instrumentation Coverage vs Inventory (SI-4)** — Compares active forwarders to CMDB inventory for SI-4 system monitoring coverage.\n\nDocumented **Data sources**: `_internal` `splunkd` connection metrics, lookup `cmdb_hosts.csv`. **App/TA** (typical add-on context): Splunk Enterprise Deployment Server / Splunk Cloud Monitoring Console. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing monitoring), Single value (coverage %), Choropleth (by site from lookup)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare active forwarders to CMDB inventory for SI-4 system monitoring coverage. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SI-4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SI-4 (System monitoring) is enforced — Splunk UC-22.14.36: System Monitoring — Host Instrumentation Coverage vs Inventory.",
                  "ea": "Saved search 'UC-22.14.36' running on _internal splunkd connection metrics, lookup cmdb_hosts.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.37",
              "n": "Security Alerts Ingestion Health from Vendor Feeds (SI-5)",
              "c": "high",
              "f": "intermediate",
              "v": "Ensures security advisory and vendor alert feeds continue arriving for SI-5 security alerts, advisories, and directives.",
              "t": "Splunk Add-on for TAXII (if used), vendor JSON via HEC, RSS parser custom",
              "d": "`index=threat_intel` OR `sourcetype=vendor:security_advisory`",
              "q": "index=threat_intel earliest=-7d\n| stats latest(_time) as last_event, count by sourcetype, feed_name\n| eval hours_since=round((now()-last_event)/3600,1)\n| where hours_since>48",
              "m": "(1) Standardize `feed_name` at ingest; (2) alert per feed SLA; (3) document alternate notification path; (4) map to IR-4 playbooks when vendor declares incident.",
              "z": "Table (stale feeds), Time chart (events per feed), Single value (feeds in breach)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for TAXII (if used), vendor JSON via HEC, RSS parser custom.\n• Ensure the following data sources are available: `index=threat_intel` OR `sourcetype=vendor:security_advisory`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardize `feed_name` at ingest; (2) alert per feed SLA; (3) document alternate notification path; (4) map to IR-4 playbooks when vendor declares incident.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=threat_intel earliest=-7d\n| stats latest(_time) as last_event, count by sourcetype, feed_name\n| eval hours_since=round((now()-last_event)/3600,1)\n| where hours_since>48\n```\n\nUnderstanding this SPL\n\n**Security Alerts Ingestion Health from Vendor Feeds (SI-5)** — Ensures security advisory and vendor alert feeds continue arriving for SI-5 security alerts, advisories, and directives.\n\nDocumented **Data sources**: `index=threat_intel` OR `sourcetype=vendor:security_advisory`. **App/TA** (typical add-on context): Splunk Add-on for TAXII (if used), vendor JSON via HEC, RSS parser custom. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: threat_intel.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=threat_intel, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sourcetype, feed_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours_since>48` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale feeds), Time chart (events per feed), Single value (feeds in breach)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We make sure security advisory and vendor alert feeds continue arriving for SI-5 security alerts, advisories, and directives. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SI-5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SI-5 is enforced — Splunk UC-22.14.37: Security Alerts Ingestion Health from Vendor Feeds.",
                  "ea": "Saved search 'UC-22.14.37' running on index=threat_intel OR sourcetype=vendor:security_advisory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.38",
              "n": "Security Function Verification — Forwarder Config Change Auditing (SI-6)",
              "c": "high",
              "f": "advanced",
              "v": "Detects unauthorized changes to inputs and props that affect log integrity checks for SI-6 security function verification concepts at the logging layer.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "`index=_audit` object paths under `inputs`/`props`",
              "q": "index=_audit earliest=-24h object=\"*/inputs.conf\" OR object=\"*/props.conf\"\n| stats values(action) as actions, values(user) as users by object, _time\n| search NOT user IN (\"splunk_system_user\",\"deployment_service\")",
              "m": "(1) Integrate with change tickets; (2) alert on non-service accounts; (3) snapshot approved configs in git; (4) quarterly drift review with CM-3.",
              "z": "Timeline (config changes), Table (who changed what)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: `index=_audit` object paths under `inputs`/`props`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Integrate with change tickets; (2) alert on non-service accounts; (3) snapshot approved configs in git; (4) quarterly drift review with CM-3.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit earliest=-24h object=\"*/inputs.conf\" OR object=\"*/props.conf\"\n| stats values(action) as actions, values(user) as users by object, _time\n| search NOT user IN (\"splunk_system_user\",\"deployment_service\")\n```\n\nUnderstanding this SPL\n\n**Security Function Verification — Forwarder Config Change Auditing (SI-6)** — Detects unauthorized changes to inputs and props that affect log integrity checks for SI-6 security function verification concepts at the logging layer.\n\nDocumented **Data sources**: `index=_audit` object paths under `inputs`/`props`. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by object, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (config changes), Table (who changed what)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects unauthorized changes to inputs and props that affect log integrity checks for SI-6 security function verification concepts at the logging layer. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SI-6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SI-6 is enforced — Splunk UC-22.14.38: Security Function Verification — Forwarder Config Change Auditing.",
                  "ea": "Saved search 'UC-22.14.38' running on index=_audit object paths under inputs/props, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.39",
              "n": "Software and Firmware Integrity — Unexpected Driver Loads (SI-7)",
              "c": "critical",
              "f": "advanced",
              "v": "Flags rare Sysmon driver loads and kernel module events for SI-7 software and firmware integrity monitoring.",
              "t": "Splunk Add-on for Microsoft Sysmon",
              "d": "`XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode 6",
              "q": "index=endpoint sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational EventCode=6 earliest=-7d\n| stats count by ImageLoaded, host\n| eventstats median(count) as med global=t\n| where count < med*0.01 OR match(ImageLoaded,\"(?i)temp|users\\\\public\")",
              "m": "(1) Enrich with driver signer if captured; (2) allowlist known AV drivers via lookup; (3) alert on temp-path loads; (4) integrate with binary reputation service.",
              "z": "Table (rare drivers), Bar chart (by host), Scatter (prevalence vs first_seen)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Sysmon.\n• Ensure the following data sources are available: `XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode 6.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enrich with driver signer if captured; (2) allowlist known AV drivers via lookup; (3) alert on temp-path loads; (4) integrate with binary reputation service.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational EventCode=6 earliest=-7d\n| stats count by ImageLoaded, host\n| eventstats median(count) as med global=t\n| where count < med*0.01 OR match(ImageLoaded,\"(?i)temp|users\\\\public\")\n```\n\nUnderstanding this SPL\n\n**Software and Firmware Integrity — Unexpected Driver Loads (SI-7)** — Flags rare Sysmon driver loads and kernel module events for SI-7 software and firmware integrity monitoring.\n\nDocumented **Data sources**: `XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode 6. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Sysmon. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: XmlWinEventLog:Microsoft-Windows-Sysmon/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ImageLoaded, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where count < med*0.01 OR match(ImageLoaded,\"(?i)temp|users\\\\public\")` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Software and Firmware Integrity — Unexpected Driver Loads (SI-7)** — Flags rare Sysmon driver loads and kernel module events for SI-7 software and firmware integrity monitoring.\n\nDocumented **Data sources**: `XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode 6. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Sysmon. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (rare drivers), Bar chart (by host), Scatter (prevalence vs first_seen)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We flags rare Sysmon driver loads and kernel module events for SI-7 software and firmware integrity monitoring. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SI-7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SI-7 is enforced — Splunk UC-22.14.39: Software and Firmware Integrity — Unexpected Driver Loads.",
                  "ea": "Saved search 'UC-22.14.39' running on XmlWinEventLog:Microsoft-Windows-Sysmon/Operational EventCode 6, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.40",
              "n": "Information Input Validation — Web Parameter Anomalies (SI-10)",
              "c": "high",
              "f": "advanced",
              "v": "Surfaces WAF or app logs showing validation failures and injection patterns for SI-10 information input validation.",
              "t": "Splunk Add-on for F5 BIG-IP ASM (TA), ModSecurity, or `access_combined` with WAF headers",
              "d": "`sourcetype=f5:asm:*` or `sourcetype=apache:access` with `mod_security` message route",
              "q": "index=web sourcetype=f5:asm:* OR sourcetype=modsec_json earliest=-24h\n| search severity IN (\"high\",\"critical\") OR response_code=406\n| stats count by uri, src_ip, attack_type\n| sort - count\n| head 100",
              "m": "(1) Normalize field names per TA; (2) tune out scanner noise with bot lookup; (3) feed IR-4 for confirmed exploitation attempts; (4) map to RA-5 findings.",
              "z": "Table (top URIs), Map (src_ip), Time chart (attack types)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for F5 BIG-IP ASM (TA), ModSecurity, or `access_combined` with WAF headers.\n• Ensure the following data sources are available: `sourcetype=f5:asm:*` or `sourcetype=apache:access` with `mod_security` message route.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field names per TA; (2) tune out scanner noise with bot lookup; (3) feed IR-4 for confirmed exploitation attempts; (4) map to RA-5 findings.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=f5:asm:* OR sourcetype=modsec_json earliest=-24h\n| search severity IN (\"high\",\"critical\") OR response_code=406\n| stats count by uri, src_ip, attack_type\n| sort - count\n| head 100\n```\n\nUnderstanding this SPL\n\n**Information Input Validation — Web Parameter Anomalies (SI-10)** — Surfaces WAF or app logs showing validation failures and injection patterns for SI-10 information input validation.\n\nDocumented **Data sources**: `sourcetype=f5:asm:*` or `sourcetype=apache:access` with `mod_security` message route. **App/TA** (typical add-on context): Splunk Add-on for F5 BIG-IP ASM (TA), ModSecurity, or `access_combined` with WAF headers. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: f5:asm:*, modsec_json. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=f5:asm:*, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by uri, src_ip, attack_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Information Input Validation — Web Parameter Anomalies (SI-10)** — Surfaces WAF or app logs showing validation failures and injection patterns for SI-10 information input validation.\n\nDocumented **Data sources**: `sourcetype=f5:asm:*` or `sourcetype=apache:access` with `mod_security` message route. **App/TA** (typical add-on context): Splunk Add-on for F5 BIG-IP ASM (TA), ModSecurity, or `access_combined` with WAF headers. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top URIs), Map (src_ip), Time chart (attack types)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surfaces WAF or app logs showing validation failures and injection patterns for SI-10 information input validation. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.src | sort - count",
              "e": [
                "f5"
              ],
              "em": [
                "f5_asm",
                "f5_bigip"
              ],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SI-10",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SI-10 is enforced — Splunk UC-22.14.40: Information Input Validation — Web Parameter Anomalies.",
                  "ea": "Saved search 'UC-22.14.40' running on sourcetype=f5:asm:* or sourcetype=apache:access with mod_security message route, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.41",
              "n": "Error Handling — Application Stack Traces Exposing Internals (SI-11)",
              "c": "medium",
              "f": "intermediate",
              "v": "Finds logs containing stack traces or SQL fragments that may violate SI-11 error handling by revealing sensitive implementation details.",
              "t": "HTTP Event Collector (HEC), Splunk Add-on for Java logging",
              "d": "`index=app` log4j/json sourcetypes",
              "q": "index=app earliest=-24h log_level IN (\"ERROR\",\"FATAL\")\n| regex _raw=\"(?i)(stack trace|SQLException|traceback|ORA-\\d{5})\"\n| stats count by sourcetype, service, host\n| sort - count",
              "m": "(1) Route samples to restricted index; (2) work with dev teams to scrub patterns; (3) alert on increases post-release; (4) validate no secrets in messages via regex vault patterns.",
              "z": "Table (noisy services), Time chart (error spikes), Single value (matches)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC), Splunk Add-on for Java logging.\n• Ensure the following data sources are available: `index=app` log4j/json sourcetypes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Route samples to restricted index; (2) work with dev teams to scrub patterns; (3) alert on increases post-release; (4) validate no secrets in messages via regex vault patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app earliest=-24h log_level IN (\"ERROR\",\"FATAL\")\n| regex _raw=\"(?i)(stack trace|SQLException|traceback|ORA-\\d{5})\"\n| stats count by sourcetype, service, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Error Handling — Application Stack Traces Exposing Internals (SI-11)** — Finds logs containing stack traces or SQL fragments that may violate SI-11 error handling by revealing sensitive implementation details.\n\nDocumented **Data sources**: `index=app` log4j/json sourcetypes. **App/TA** (typical add-on context): HTTP Event Collector (HEC), Splunk Add-on for Java logging. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters rows matching a pattern with `regex`.\n• `stats` rolls up events into metrics; results are split **by sourcetype, service, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (noisy services), Time chart (error spikes), Single value (matches)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We find logs containing stack traces or SQL fragments that may violate SI-11 error handling by revealing sensitive implementation details. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SI-11",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SI-11 is enforced — Splunk UC-22.14.41: Error Handling — Application Stack Traces Exposing Internals.",
                  "ea": "Saved search 'UC-22.14.41' running on index=app log4j/json sourcetypes, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.42",
              "n": "Information Management — Sensitive Fields in Unapproved Indexes (SI-12)",
              "c": "high",
              "f": "advanced",
              "v": "Scans high-volume indexes for patterns resembling government IDs or payment cards to support SI-12 information management and handling.",
              "t": "Splunk Edge Processor (Splunk Cloud Platform), Splunk Enterprise Security (Splunkbase 263)",
              "d": "`index=*` excluding known-safe indexes — run in restricted search mode",
              "q": "index=app sourcetype=app:json earliest=-24h\n| regex _raw=\"(?i)\\b(?:4[0-9]{12}(?:[0-9]{3})?)\\b\"\n| stats count by index, sourcetype, host\n| sort - count",
              "m": "(1) Run from isolated role with limited indexes; (2) mask at ingest after discovery; (3) never broadly expose results; (4) document data handling per SI-12 procedure.",
              "z": "Table (hits by index), Single value (unique hosts)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Processor (Splunk Cloud Platform), Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `index=*` excluding known-safe indexes — run in restricted search mode.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Run from isolated role with limited indexes; (2) mask at ingest after discovery; (3) never broadly expose results; (4) document data handling per SI-12 procedure.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=app:json earliest=-24h\n| regex _raw=\"(?i)\\b(?:4[0-9]{12}(?:[0-9]{3})?)\\b\"\n| stats count by index, sourcetype, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Information Management — Sensitive Fields in Unapproved Indexes (SI-12)** — Scans high-volume indexes for patterns resembling government IDs or payment cards to support SI-12 information management and handling.\n\nDocumented **Data sources**: `index=*` excluding known-safe indexes — run in restricted search mode. **App/TA** (typical add-on context): Splunk Edge Processor (Splunk Cloud Platform), Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: app:json. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=app:json, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters rows matching a pattern with `regex`.\n• `stats` rolls up events into metrics; results are split **by index, sourcetype, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hits by index), Single value (unique hosts)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We scans high-volume indexes for patterns resembling government IDs or payment cards to support SI-12 information management and handling. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SI-12",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SI-12 is enforced — Splunk UC-22.14.42: Information Management — Sensitive Fields in Unapproved Indexes.",
                  "ea": "Saved search 'UC-22.14.42' running on index=* excluding known-safe indexes — run in restricted search mode, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.43",
              "n": "Memory Protection Signals — Exploitation Primitives in EDR Telemetry (SI-16)",
              "c": "critical",
              "f": "advanced",
              "v": "Uses EDR or Sysmon events indicating process hollowing or suspicious memory protections for SI-16 memory protection.",
              "t": "Splunk Add-on for Microsoft Sysmon",
              "d": "`XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode 8/10",
              "q": "index=endpoint sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational EventCode IN (8,10) earliest=-24h\n| stats count by SourceImage, TargetImage, host\n| sort - count",
              "m": "(1) Tune for developer VMs; (2) enrich with parent-child chain 1/3; (3) alert on LSASS-targeting patterns; (4) integrate with IR-4 containment playbooks.",
              "z": "Table (top technique pairs), Timeline, Graph (optional ES Risk)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Sysmon.\n• Ensure the following data sources are available: `XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode 8/10.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune for developer VMs; (2) enrich with parent-child chain 1/3; (3) alert on LSASS-targeting patterns; (4) integrate with IR-4 containment playbooks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational EventCode IN (8,10) earliest=-24h\n| stats count by SourceImage, TargetImage, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Memory Protection Signals — Exploitation Primitives in EDR Telemetry (SI-16)** — Uses EDR or Sysmon events indicating process hollowing or suspicious memory protections for SI-16 memory protection.\n\nDocumented **Data sources**: `XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode 8/10. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Sysmon. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: XmlWinEventLog:Microsoft-Windows-Sysmon/Operational. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=XmlWinEventLog:Microsoft-Windows-Sysmon/Operational, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by SourceImage, TargetImage, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Memory Protection Signals — Exploitation Primitives in EDR Telemetry (SI-16)** — Uses EDR or Sysmon events indicating process hollowing or suspicious memory protections for SI-16 memory protection.\n\nDocumented **Data sources**: `XmlWinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode 8/10. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Sysmon. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top technique pairs), Timeline, Graph (optional ES Risk)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We uses EDR or Sysmon events indicating process hollowing or suspicious memory protections for SI-16 memory protection. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SI-16",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SI-16 is enforced — Splunk UC-22.14.43: Memory Protection Signals — Exploitation Primitives in EDR Telemetry.",
                  "ea": "Saved search 'UC-22.14.43' running on XmlWinEventLog:Microsoft-Windows-Sysmon/Operational EventCode 8/10, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.44",
              "n": "Incident Response Training Completion Tracking (IR-2)",
              "c": "medium",
              "f": "beginner",
              "v": "Joins LMS completion events with HR role for IR-2 incident response training evidence.",
              "t": "HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`lms:completion` HEC JSON or `snow:training_task` if modeled",
              "q": "index=hr_training sourcetype=lms:completion earliest=-365d course_id=\"IR-101\"\n| stats latest(completed_at) as completed by user_id\n| inputlookup append=t ir_required_roles.csv\n| stats first(completed) as completed, values(role) as roles by user_id\n| where isnull(completed) AND roles=\"SOC\"",
              "m": "(1) Hash user identifiers consistently; (2) refresh role lookup from IdM; (3) quarterly compliance report; (4) store exports for assessors.",
              "z": "Table (missing training), Single value (% complete), Bar chart (by team)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `lms:completion` HEC JSON or `snow:training_task` if modeled.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Hash user identifiers consistently; (2) refresh role lookup from IdM; (3) quarterly compliance report; (4) store exports for assessors.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr_training sourcetype=lms:completion earliest=-365d course_id=\"IR-101\"\n| stats latest(completed_at) as completed by user_id\n| inputlookup append=t ir_required_roles.csv\n| stats first(completed) as completed, values(role) as roles by user_id\n| where isnull(completed) AND roles=\"SOC\"\n```\n\nUnderstanding this SPL\n\n**Incident Response Training Completion Tracking (IR-2)** — Joins LMS completion events with HR role for IR-2 incident response training evidence.\n\nDocumented **Data sources**: `lms:completion` HEC JSON or `snow:training_task` if modeled. **App/TA** (typical add-on context): HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr_training; **sourcetype**: lms:completion. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr_training, sourcetype=lms:completion, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `stats` rolls up events into metrics; results are split **by user_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where isnull(completed) AND roles=\"SOC\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (missing training), Single value (% complete), Bar chart (by team)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We joins LMS completion events with HR role for IR-2 incident response training evidence. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IR-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IR-2 is enforced — Splunk UC-22.14.44: Incident Response Training Completion Tracking.",
                  "ea": "Saved search 'UC-22.14.44' running on lms:completion HEC JSON or snow:training_task if modeled, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.45",
              "n": "Incident Handling Stage Timestamps from Case Management (IR-4)",
              "c": "critical",
              "f": "intermediate",
              "v": "Measures time between detection, containment, and eradication milestones for IR-4 incident handling.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`` `notable` `` or `snow:incident` with custom fields for IR stages",
              "q": "index=itsm sourcetype=snow:incident category=\"Security\" earliest=-90d\n| eval detect=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval contain=strptime(u_contained_at,\"%Y-%m-%d %H:%M:%S\")\n| eval eradicate=strptime(u_eradicated_at,\"%Y-%m-%d %H:%M:%S\")\n| eval contain_h=round((contain-detect)/3600,2)\n| where contain_h>24 OR isnull(contain)",
              "m": "(1) Add custom fields in SNOW for stage timestamps; (2) validate `strptime` format; (3) alert leadership on SLA misses; (4) post-incident review linkage IR-8.",
              "z": "Table (aging incidents), Time chart (mean time to contain), Single value (breach count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `` `notable` `` or `snow:incident` with custom fields for IR stages.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Add custom fields in SNOW for stage timestamps; (2) validate `strptime` format; (3) alert leadership on SLA misses; (4) post-incident review linkage IR-8.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=snow:incident category=\"Security\" earliest=-90d\n| eval detect=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval contain=strptime(u_contained_at,\"%Y-%m-%d %H:%M:%S\")\n| eval eradicate=strptime(u_eradicated_at,\"%Y-%m-%d %H:%M:%S\")\n| eval contain_h=round((contain-detect)/3600,2)\n| where contain_h>24 OR isnull(contain)\n```\n\nUnderstanding this SPL\n\n**Incident Handling Stage Timestamps from Case Management (IR-4)** — Measures time between detection, containment, and eradication milestones for IR-4 incident handling.\n\nDocumented **Data sources**: `` `notable` `` or `snow:incident` with custom fields for IR stages. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=snow:incident, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **detect** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **contain** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **eradicate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **contain_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where contain_h>24 OR isnull(contain)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (aging incidents), Time chart (mean time to contain), Single value (breach count)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measures time between detection, containment, and eradication milestones for IR-4 incident handling. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IR-4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IR-4 (Incident handling) is enforced — Splunk UC-22.14.45: Incident Handling Stage Timestamps from Case Management.",
                  "ea": "Saved search 'UC-22.14.45' running on notable or snow:incident with custom fields for IR stages, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.46",
              "n": "Incident Monitoring — SOC Queue Depth and Severity Mix (IR-5)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks open incident volume and severity for IR-5 incident monitoring (SOC operations).",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`` `notable` `` macro fields",
              "q": "`notable` earliest=-7d status IN (\"New\",\"In Progress\",\"Pending\")\n| stats count by urgency, status, owner\n| sort - count",
              "m": "(1) Ensure owners populated; (2) alert on `New` backlog thresholds; (3) weekly management review dashboard; (4) map to staffing model.",
              "z": "Stacked bar (severity x status), Table (owner workload), Single value (open critical)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `` `notable` `` macro fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure owners populated; (2) alert on `New` backlog thresholds; (3) weekly management review dashboard; (4) map to staffing model.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` earliest=-7d status IN (\"New\",\"In Progress\",\"Pending\")\n| stats count by urgency, status, owner\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Incident Monitoring — SOC Queue Depth and Severity Mix (IR-5)** — Tracks open incident volume and severity for IR-5 incident monitoring (SOC operations).\n\nDocumented **Data sources**: `` `notable` `` macro fields. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `stats` rolls up events into metrics; results are split **by urgency, status, owner** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (severity x status), Table (owner workload), Single value (open critical)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracks open incident volume and severity for IR-5 incident monitoring (SOC operations). That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IR-5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IR-5 is enforced — Splunk UC-22.14.46: Incident Monitoring — SOC Queue Depth and Severity Mix.",
                  "ea": "Saved search 'UC-22.14.46' running on notable macro fields, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.47",
              "n": "Incident Reporting to Authorities — Regulatory Timer Watch (IR-6)",
              "c": "critical",
              "f": "intermediate",
              "v": "Adds timer fields for regulatory reporting obligations tied to incidents for IR-6 incident reporting.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`` `notable` `` with `category` or `tag` for regulated incidents",
              "q": "`notable` earliest=-30d tag=\"regulated_breach\"\n| eval hours_open=round((now()-_time)/3600,1)\n| where hours_open>72 AND status!=\"Closed\"\n| table _time, rule_name, owner, status, hours_open",
              "m": "(1) Standardize tags for regulated incidents; (2) integrate legal workflow fields; (3) alert before deadlines; (4) document reporting chains per jurisdiction.",
              "z": "Timeline, Table (aging regulated incidents), Single value (past threshold)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `` `notable` `` with `category` or `tag` for regulated incidents.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardize tags for regulated incidents; (2) integrate legal workflow fields; (3) alert before deadlines; (4) document reporting chains per jurisdiction.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` earliest=-30d tag=\"regulated_breach\"\n| eval hours_open=round((now()-_time)/3600,1)\n| where hours_open>72 AND status!=\"Closed\"\n| table _time, rule_name, owner, status, hours_open\n```\n\nUnderstanding this SPL\n\n**Incident Reporting to Authorities — Regulatory Timer Watch (IR-6)** — Adds timer fields for regulatory reporting obligations tied to incidents for IR-6 incident reporting.\n\nDocumented **Data sources**: `` `notable` `` with `category` or `tag` for regulated incidents. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **hours_open** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours_open>72 AND status!=\"Closed\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Incident Reporting to Authorities — Regulatory Timer Watch (IR-6)**): table _time, rule_name, owner, status, hours_open\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline, Table (aging regulated incidents), Single value (past threshold)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We adds timer fields for regulatory reporting obligations tied to incidents for IR-6 incident reporting. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IR-6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IR-6 is enforced — Splunk UC-22.14.47: Incident Reporting to Authorities — Regulatory Timer Watch.",
                  "ea": "Saved search 'UC-22.14.47' running on notable with category or tag for regulated incidents, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.48",
              "n": "Incident Response Assistance — External IR Firm Access Auditing (IR-7)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors VPN and jump host logons for third-party IR accounts for IR-7 incident response assistance access.",
              "t": "Splunk Add-on for Cisco ASA (Splunkbase 1620), Splunk Add-on for Unix and Linux (Splunkbase 833)",
              "d": "`cisco:asa` VPN, `linux:secure` sshd",
              "q": "index=net sourcetype=cisco:asa earliest=-30d group_policy=\"IR_VENDOR\"\n| stats earliest(_time) as first, latest(_time) as last, values(src_ip) as sources by user, dest\n| eval duration_h=round((last-first)/3600,2)",
              "m": "(1) Tag IR vendor accounts in IdM and forward to Splunk; (2) alert on access outside incident windows; (3) session recording references in tickets; (4) revoke after IR-4 closure.",
              "z": "Table (vendor sessions), Map (sources), Time chart (duration)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Cisco ASA](https://splunkbase.splunk.com/app/1620), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Cisco ASA (Splunkbase 1620), Splunk Add-on for Unix and Linux (Splunkbase 833).\n• Ensure the following data sources are available: `cisco:asa` VPN, `linux:secure` sshd.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag IR vendor accounts in IdM and forward to Splunk; (2) alert on access outside incident windows; (3) session recording references in tickets; (4) revoke after IR-4 closure.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=net sourcetype=cisco:asa earliest=-30d group_policy=\"IR_VENDOR\"\n| stats earliest(_time) as first, latest(_time) as last, values(src_ip) as sources by user, dest\n| eval duration_h=round((last-first)/3600,2)\n```\n\nUnderstanding this SPL\n\n**Incident Response Assistance — External IR Firm Access Auditing (IR-7)** — Monitors VPN and jump host logons for third-party IR accounts for IR-7 incident response assistance access.\n\nDocumented **Data sources**: `cisco:asa` VPN, `linux:secure` sshd. **App/TA** (typical add-on context): Splunk Add-on for Cisco ASA (Splunkbase 1620), Splunk Add-on for Unix and Linux (Splunkbase 833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: net; **sourcetype**: cisco:asa. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=net, sourcetype=cisco:asa, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **duration_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Incident Response Assistance — External IR Firm Access Auditing (IR-7)** — Monitors VPN and jump host logons for third-party IR accounts for IR-7 incident response assistance access.\n\nDocumented **Data sources**: `cisco:asa` VPN, `linux:secure` sshd. **App/TA** (typical add-on context): Splunk Add-on for Cisco ASA (Splunkbase 1620), Splunk Add-on for Unix and Linux (Splunkbase 833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (vendor sessions), Map (sources), Time chart (duration)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch VPN and jump host logons for third-party IR accounts for IR-7 incident response assistance access. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication",
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.dest | sort - count",
              "e": [
                "cisco"
              ],
              "em": [
                "cisco_asa"
              ],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IR-7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IR-7 is enforced — Splunk UC-22.14.48: Incident Response Assistance — External IR Firm Access Auditing.",
                  "ea": "Saved search 'UC-22.14.48' running on cisco:asa VPN, linux:secure sshd, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "Splunk Add-on for Cisco ASA",
                "id": 1621,
                "url": "https://splunkbase.splunk.com/app/1621"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.49",
              "n": "Incident Response Plan Test Evidence from Scheduled Tabletop Tags (IR-8)",
              "c": "medium",
              "f": "beginner",
              "v": "Aggregates tagged tabletop exercise events for IR-8 incident response plan testing evidence.",
              "t": "HTTP Event Collector (HEC)",
              "d": "`index=grc` `sourcetype=ir:tabletop` JSON",
              "q": "index=grc sourcetype=ir:tabletop earliest=-365d\n| stats latest(exercise_date) as last_ex, values(scenario_id) as scenarios by plan_version, participant_team\n| eval overdue=if(now()-strptime(last_ex,\"%Y-%m-%d\")>15552000,1,0)\n| where overdue=1",
              "m": "(1) Emit structured HEC after each exercise; (2) define semi-annual threshold; (3) alert GRC owners; (4) attach lessons learned links.",
              "z": "Table (plans needing test), Timeline (exercises), Single value (days since last)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC).\n• Ensure the following data sources are available: `index=grc` `sourcetype=ir:tabletop` JSON.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Emit structured HEC after each exercise; (2) define semi-annual threshold; (3) alert GRC owners; (4) attach lessons learned links.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=ir:tabletop earliest=-365d\n| stats latest(exercise_date) as last_ex, values(scenario_id) as scenarios by plan_version, participant_team\n| eval overdue=if(now()-strptime(last_ex,\"%Y-%m-%d\")>15552000,1,0)\n| where overdue=1\n```\n\nUnderstanding this SPL\n\n**Incident Response Plan Test Evidence from Scheduled Tabletop Tags (IR-8)** — Aggregates tagged tabletop exercise events for IR-8 incident response plan testing evidence.\n\nDocumented **Data sources**: `index=grc` `sourcetype=ir:tabletop` JSON. **App/TA** (typical add-on context): HTTP Event Collector (HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: ir:tabletop. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=ir:tabletop, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by plan_version, participant_team** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overdue=1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (plans needing test), Timeline (exercises), Single value (days since last)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We aggregates tagged tabletop exercise events for IR-8 incident response plan testing evidence. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IR-8",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IR-8 is enforced — Splunk UC-22.14.49: Incident Response Plan Test Evidence from Scheduled Tabletop Tags.",
                  "ea": "Saved search 'UC-22.14.49' running on index=grc sourcetype=ir:tabletop JSON, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.50",
              "n": "Information Spillage — DLP High-Severity Exfil Indicators (IR-9)",
              "c": "critical",
              "f": "advanced",
              "v": "Prioritizes DLP incidents that may require IR-9 information spillage response.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055)",
              "d": "`ms:o365:management` DLP events",
              "q": "index=o365 sourcetype=ms:o365:management Workload=\"DLP\" earliest=-7d Severity=High\n| stats values(UserId) as users, latest(_time) as last by IncidentId\n| sort - last",
              "m": "(1) Map Severity values to tenant schema; (2) auto-create IR ticket; (3) preserve chain-of-custody outside Splunk; (4) coordinate with legal/privacy.",
              "z": "Table (incidents), Time chart (severity mix), Single value (open high)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055).\n• Ensure the following data sources are available: `ms:o365:management` DLP events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map Severity values to tenant schema; (2) auto-create IR ticket; (3) preserve chain-of-custody outside Splunk; (4) coordinate with legal/privacy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:management Workload=\"DLP\" earliest=-7d Severity=High\n| stats values(UserId) as users, latest(_time) as last by IncidentId\n| sort - last\n```\n\nUnderstanding this SPL\n\n**Information Spillage — DLP High-Severity Exfil Indicators (IR-9)** — Prioritizes DLP incidents that may require IR-9 information spillage response.\n\nDocumented **Data sources**: `ms:o365:management` DLP events. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:management, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by IncidentId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (incidents), Time chart (severity mix), Single value (open high)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We prioritizes DLP incidents that may require IR-9 information spillage response. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IR-9",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IR-9 is enforced — Splunk UC-22.14.50: Information Spillage — DLP High-Severity Exfil Indicators.",
                  "ea": "Saved search 'UC-22.14.50' running on ms:o365:management DLP events, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.51",
              "n": "Integrated Information Security Analysis Team Handoffs (IR-10)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks cross-team comments and assignments between SOC and CTI for IR-10 integrated infosec analysis team collaboration.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`snow:incident` with assignment_group transitions",
              "q": "index=itsm sourcetype=snow:incident category=\"Security\" earliest=-30d\n| transaction number maxspan=24h startswith=\"assignment_group=SOC\" endswith=\"assignment_group=CTI\"\n| where duration>3600\n| table number, duration, assignment_group, short_description",
              "m": "(1) Validate `transaction` performance — limit indexes; (2) alternatively use `streamstats` on sorted events; (3) KPI for handoff latency; (4) monthly leadership review.",
              "z": "Table (slow handoffs), Time chart (median duration)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `snow:incident` with assignment_group transitions.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Validate `transaction` performance — limit indexes; (2) alternatively use `streamstats` on sorted events; (3) KPI for handoff latency; (4) monthly leadership review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=snow:incident category=\"Security\" earliest=-30d\n| transaction number maxspan=24h startswith=\"assignment_group=SOC\" endswith=\"assignment_group=CTI\"\n| where duration>3600\n| table number, duration, assignment_group, short_description\n```\n\nUnderstanding this SPL\n\n**Integrated Information Security Analysis Team Handoffs (IR-10)** — Tracks cross-team comments and assignments between SOC and CTI for IR-10 integrated infosec analysis team collaboration.\n\nDocumented **Data sources**: `snow:incident` with assignment_group transitions. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=snow:incident, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• Filters the current rows with `where duration>3600` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Integrated Information Security Analysis Team Handoffs (IR-10)**): table number, duration, assignment_group, short_description\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (slow handoffs), Time chart (median duration)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracks cross-team comments and assignments between SOC and CTI for IR-10 integrated infosec analysis team collaboration. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IR-10",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IR-10 is enforced — Splunk UC-22.14.51: Integrated Information Security Analysis Team Handoffs.",
                  "ea": "Saved search 'UC-22.14.51' running on snow:incident with assignment_group transitions, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.52",
              "n": "Baseline Configuration Drift vs Gold Build (CM-2)",
              "c": "high",
              "f": "advanced",
              "v": "Compares endpoint configuration snapshots to approved baselines for CM-2 baseline configuration.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), custom scripted baseline forwarder",
              "d": "`WinEventLog:Microsoft-Windows-GroupPolicy/Operational` or inventory JSON",
              "q": "index=os sourcetype=win_build_inventory earliest=-24h\n| lookup gold_windows_build.csv build_number OUTPUT approved_patch_level\n| eval drift=if(patch_level<approved_patch_level,\"behind\",\"ok\")\n| where drift!=\"ok\"",
              "m": "(1) Create periodic inventory scripted input; (2) refresh gold CSV from SOE team; (3) alert on drift; (4) map to CM-6 parameters.",
              "z": "Table (non-compliant hosts), Bar chart (by build), Single value (drift %)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), custom scripted baseline forwarder.\n• Ensure the following data sources are available: `WinEventLog:Microsoft-Windows-GroupPolicy/Operational` or inventory JSON.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Create periodic inventory scripted input; (2) refresh gold CSV from SOE team; (3) alert on drift; (4) map to CM-6 parameters.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=win_build_inventory earliest=-24h\n| lookup gold_windows_build.csv build_number OUTPUT approved_patch_level\n| eval drift=if(patch_level<approved_patch_level,\"behind\",\"ok\")\n| where drift!=\"ok\"\n```\n\nUnderstanding this SPL\n\n**Baseline Configuration Drift vs Gold Build (CM-2)** — Compares endpoint configuration snapshots to approved baselines for CM-2 baseline configuration.\n\nDocumented **Data sources**: `WinEventLog:Microsoft-Windows-GroupPolicy/Operational` or inventory JSON. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), custom scripted baseline forwarder. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: win_build_inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=win_build_inventory, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **drift** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drift!=\"ok\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Baseline Configuration Drift vs Gold Build (CM-2)** — Compares endpoint configuration snapshots to approved baselines for CM-2 baseline configuration.\n\nDocumented **Data sources**: `WinEventLog:Microsoft-Windows-GroupPolicy/Operational` or inventory JSON. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), custom scripted baseline forwarder. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-compliant hosts), Bar chart (by build), Single value (drift %)\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare endpoint configuration snapshots to approved baselines for CM-2 baseline configuration. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CM-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CM-2 (Baseline configuration) is enforced — Splunk UC-22.14.52: Baseline Configuration Drift vs Gold Build.",
                  "ea": "Saved search 'UC-22.14.52' running on WinEventLog:Microsoft-Windows-GroupPolicy/Operational or inventory JSON, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.53",
              "n": "Configuration Change Control — Unauthorized Firewall Rule Adds (CM-3)",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects firewall commits outside maintenance windows without change tickets for CM-3 configuration change control.",
              "t": "Palo Alto Networks Add-on for Splunk (Splunkbase 2757), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`pan:system` or `pan:config` logs, `snow:change_request`",
              "q": "index=net sourcetype=pan:system earliest=-7d (status=\"commit\" OR match(_raw,\"(?i)commit\"))\n| eval change_window=if(_time % 86400 > 72000 OR _time % 86400 < 18000,1,0)\n| lookup recent_changes.csv device commit_id OUTPUT ticket\n| where isnull(ticket) AND change_window=0",
              "m": "(1) Normalize PAN commit events; (2) build `recent_changes.csv` from SNOW scheduled output; (3) alert on orphan commits; (4) emergency change tagging.",
              "z": "Table (orphan commits), Timeline, Single value (count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Palo Alto Networks Add-on for Splunk](https://splunkbase.splunk.com/app/2757), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Palo Alto Networks Add-on for Splunk (Splunkbase 2757), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `pan:system` or `pan:config` logs, `snow:change_request`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize PAN commit events; (2) build `recent_changes.csv` from SNOW scheduled output; (3) alert on orphan commits; (4) emergency change tagging.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=net sourcetype=pan:system earliest=-7d (status=\"commit\" OR match(_raw,\"(?i)commit\"))\n| eval change_window=if(_time % 86400 > 72000 OR _time % 86400 < 18000,1,0)\n| lookup recent_changes.csv device commit_id OUTPUT ticket\n| where isnull(ticket) AND change_window=0\n```\n\nUnderstanding this SPL\n\n**Configuration Change Control — Unauthorized Firewall Rule Adds (CM-3)** — Detects firewall commits outside maintenance windows without change tickets for CM-3 configuration change control.\n\nDocumented **Data sources**: `pan:system` or `pan:config` logs, `snow:change_request`. **App/TA** (typical add-on context): Palo Alto Networks Add-on for Splunk (Splunkbase 2757), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: net; **sourcetype**: pan:system. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=net, sourcetype=pan:system, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **change_window** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(ticket) AND change_window=0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Configuration Change Control — Unauthorized Firewall Rule Adds (CM-3)** — Detects firewall commits outside maintenance windows without change tickets for CM-3 configuration change control.\n\nDocumented **Data sources**: `pan:system` or `pan:config` logs, `snow:change_request`. **App/TA** (typical add-on context): Palo Alto Networks Add-on for Splunk (Splunkbase 2757), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (orphan commits), Timeline, Single value (count)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects firewall commits outside maintenance windows without change tickets for CM-3 configuration change control. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "paloalto",
                "servicenow"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CM-3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CM-3 is enforced — Splunk UC-22.14.53: Configuration Change Control — Unauthorized Firewall Rule Adds.",
                  "ea": "Saved search 'UC-22.14.53' running on pan:system or pan:config logs, snow:change_request, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                },
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.54",
              "n": "Security Impact Analysis Signals for Emergency Changes (CM-4)",
              "c": "high",
              "f": "intermediate",
              "v": "Flags emergency changes with subsequent security alerts for CM-4 security impact analysis feedback.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Enterprise Security (Splunkbase 263)",
              "d": "`snow:change_request` type=emergency, `` `notable` ``",
              "q": "index=itsm sourcetype=snow:change_request type=emergency earliest=-30d\n| eval cid=number\n| join type=left cid [\n    search `notable` earliest=-30d\n    | rex field=annotations \"(?i)chg:(?<cid>CHG[0-9]+)\"\n    | stats count by cid\n  ]\n| where count>0",
              "m": "(1) Require CHG id in ES annotations via workflow; (2) tune join window; (3) monthly emergency change review deck; (4) document false positive handling.",
              "z": "Table (changes with notables), Bar chart (alert types)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `snow:change_request` type=emergency, `` `notable` ``.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require CHG id in ES annotations via workflow; (2) tune join window; (3) monthly emergency change review deck; (4) document false positive handling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=snow:change_request type=emergency earliest=-30d\n| eval cid=number\n| join type=left cid [\n    search `notable` earliest=-30d\n    | rex field=annotations \"(?i)chg:(?<cid>CHG[0-9]+)\"\n    | stats count by cid\n  ]\n| where count>0\n```\n\nUnderstanding this SPL\n\n**Security Impact Analysis Signals for Emergency Changes (CM-4)** — Flags emergency changes with subsequent security alerts for CM-4 security impact analysis feedback.\n\nDocumented **Data sources**: `snow:change_request` type=emergency, `` `notable` ``. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:change_request. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=snow:change_request, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cid** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where count>0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (changes with notables), Bar chart (alert types)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We flags emergency changes with subsequent security alerts for CM-4 security impact analysis feedback. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CM-4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CM-4 is enforced — Splunk UC-22.14.54: Security Impact Analysis Signals for Emergency Changes.",
                  "ea": "Saved search 'UC-22.14.54' running on snow:change_request type=emergency, notable , archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.55",
              "n": "Access Restrictions for Change — Privileged Route Changes (CM-5)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors BGP or routing table changes from privileged accounts for CM-5 access restrictions for change.",
              "t": "Network device TAs (Cisco IOS, Juniper), syslog",
              "d": "`cisco:ios` routing syslog messages",
              "q": "index=net sourcetype=cisco:ios earliest=-7d\n| search \"%BGP-5-ADJCHANGE\" OR \"%SYS-5-CONFIG_I\" OR \"neighbor\"\n| stats count by user, host, command\n| where count>0 AND NOT user IN (\"svc_net_automation\")",
              "m": "(1) Enable AAA accounting to syslog; (2) parse `user` field; (3) alert on human-initiated changes without ticket keyword in command comment if used; (4) integrate RANCID/Oxidized diffs optional.",
              "z": "Table (commands), Timeline, Single value (human changes)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Network device TAs (Cisco IOS, Juniper), syslog.\n• Ensure the following data sources are available: `cisco:ios` routing syslog messages.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable AAA accounting to syslog; (2) parse `user` field; (3) alert on human-initiated changes without ticket keyword in command comment if used; (4) integrate RANCID/Oxidized diffs optional.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=net sourcetype=cisco:ios earliest=-7d\n| search \"%BGP-5-ADJCHANGE\" OR \"%SYS-5-CONFIG_I\" OR \"neighbor\"\n| stats count by user, host, command\n| where count>0 AND NOT user IN (\"svc_net_automation\")\n```\n\nUnderstanding this SPL\n\n**Access Restrictions for Change — Privileged Route Changes (CM-5)** — Monitors BGP or routing table changes from privileged accounts for CM-5 access restrictions for change.\n\nDocumented **Data sources**: `cisco:ios` routing syslog messages. **App/TA** (typical add-on context): Network device TAs (Cisco IOS, Juniper), syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: net; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=net, sourcetype=cisco:ios, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, host, command** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>0 AND NOT user IN (\"svc_net_automation\")` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user, All_Changes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Access Restrictions for Change — Privileged Route Changes (CM-5)** — Monitors BGP or routing table changes from privileged accounts for CM-5 access restrictions for change.\n\nDocumented **Data sources**: `cisco:ios` routing syslog messages. **App/TA** (typical add-on context): Network device TAs (Cisco IOS, Juniper), syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (commands), Timeline, Single value (human changes)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch BGP or routing table changes from privileged accounts for CM-5 access restrictions for change. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user, All_Changes.dest | sort - count",
              "e": [
                "cisco",
                "syslog"
              ],
              "em": [
                "cisco_ios"
              ],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CM-5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CM-5 is enforced — Splunk UC-22.14.55: Access Restrictions for Change — Privileged Route Changes.",
                  "ea": "Saved search 'UC-22.14.55' running on cisco:ios routing syslog messages, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
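A companion sketch for UC-22.14.55 above: its implementation step suggests alerting on human-initiated changes that lack a ticket reference while excluding automation accounts. This is only a sketch of that idea in SPL; the `change_tickets.csv` lookup, its `ticket_id` / `window_end` columns, and the `rex` that derives a user from `%SYS-5-CONFIG_I` messages are assumptions, not part of the documented data sources, so rename them to match your environment.

```spl
index=net sourcetype=cisco:ios ("%SYS-5-CONFIG_I" OR "%BGP-5-ADJCHANGE") earliest=-24h
| rex field=_raw "Configured from .+? by (?<config_user>\S+)"
| eval user=coalesce(user, config_user)
| search NOT user IN ("svc_net_automation")
| lookup change_tickets.csv host OUTPUT ticket_id, window_end
| eval ticketed=if(isnotnull(ticket_id) AND _time <= strptime(window_end, "%Y-%m-%dT%H:%M:%S"), 1, 0)
| where ticketed=0
| stats count earliest(_time) as first_seen latest(_time) as last_seen by user, host
| convert ctime(first_seen) ctime(last_seen)
```

Routing messages such as `%BGP-5-ADJCHANGE` usually carry no username, so rows without a `user` value drop out of the final `stats`; keep the base search from the use case if you want those events reported separately.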
            {
              "i": "22.14.56",
              "n": "Configuration Settings Compliance — CIS Benchmark Control Checks (CM-6)",
              "c": "high",
              "f": "advanced",
              "v": "Ingests configuration assessment scan results for CM-6 configuration settings.",
              "t": "Tenable Add-On for Splunk (Splunkbase 4060), SCAP/OVAL via HEC",
              "d": "`tenable:sc` compliance exports or custom `cis:assessment` sourcetype",
              "q": "index=compliance sourcetype=cis:assessment earliest=-7d control_result=\"FAIL\" benchmark=\"CIS_Microsoft_Windows_Server_2022\"\n| stats count by hostname, control_id, severity\n| sort - severity, - count",
              "m": "(1) Normalize control_id to CIS references; (2) join system owner lookup; (3) SLA remediations; (4) map to SC-7 for boundary controls when network-related.",
              "z": "Table (failed controls), Bar chart (by benchmark), Single value (fail count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tenable Add-On for Splunk (Splunkbase 4060), SCAP/OVAL via HEC.\n• Ensure the following data sources are available: `tenable:sc` compliance exports or custom `cis:assessment` sourcetype.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize control_id to CIS references; (2) join system owner lookup; (3) SLA remediations; (4) map to SC-7 for boundary controls when network-related.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=compliance sourcetype=cis:assessment earliest=-7d control_result=\"FAIL\" benchmark=\"CIS_Microsoft_Windows_Server_2022\"\n| stats count by hostname, control_id, severity\n| sort - severity, - count\n```\n\nUnderstanding this SPL\n\n**Configuration Settings Compliance — CIS Benchmark Control Checks (CM-6)** — Ingests configuration assessment scan results for CM-6 configuration settings.\n\nDocumented **Data sources**: `tenable:sc` compliance exports or custom `cis:assessment` sourcetype. **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060), SCAP/OVAL via HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: compliance; **sourcetype**: cis:assessment. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=compliance, sourcetype=cis:assessment, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname, control_id, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed controls), Bar chart (by benchmark), Single value (fail count)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We ingests configuration assessment scan results for CM-6 configuration settings. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CM-6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CM-6 (Configuration settings) is enforced — Splunk UC-22.14.56: Configuration Settings Compliance — CIS Benchmark Control Checks.",
                  "ea": "Saved search 'UC-22.14.56' running on tenable:sc compliance exports or custom cis:assessment sourcetype, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
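For UC-22.14.56 above, the implementation notes mention joining a system owner lookup and tracking remediation SLAs. A minimal sketch of that enrichment, assuming a `system_owners.csv` lookup with `owner` and `environment` columns and per-severity SLA days chosen arbitrarily here (7/30/90); both are assumptions to adapt, not part of the documented feed.

```spl
index=compliance sourcetype=cis:assessment control_result="FAIL" earliest=-30d
| stats earliest(_time) as first_failed latest(_time) as last_seen count by hostname, control_id, severity
| lookup system_owners.csv hostname OUTPUT owner, environment
| eval days_open=round((now()-first_failed)/86400, 1)
| eval sla_days=case(severity="critical", 7, severity="high", 30, true(), 90)
| eval sla_breached=if(days_open > sla_days, "yes", "no")
| sort - sla_breached, - days_open
```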
            {
              "i": "22.14.57",
              "n": "Least Functionality — Unexpected Listening Ports (CM-7)",
              "c": "high",
              "f": "advanced",
              "v": "Compares listening ports to approved service catalog for CM-7 least functionality.",
              "t": "Splunk Universal Forwarder with `ss`/`netstat` scripted input or EDR network telemetry",
              "d": "`index=os` `sourcetype=netstat:listener`",
              "q": "index=os sourcetype=netstat:listener earliest=-24h state=\"LISTEN\"\n| lookup approved_listening_ports.csv port protocol OUTPUT approved\n| where approved!=\"true\" OR isnull(approved)",
              "m": "(1) Refresh approved ports CSV by environment; (2) exclude ephemeral ranges with care; (3) alert on new listeners on servers; (4) pair with CM-8 inventory.",
              "z": "Table (unexpected listeners), Bar chart (by port), Single value (hosts)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Universal Forwarder with `ss`/`netstat` scripted input or EDR network telemetry.\n• Ensure the following data sources are available: `index=os` `sourcetype=netstat:listener`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Refresh approved ports CSV by environment; (2) exclude ephemeral ranges with care; (3) alert on new listeners on servers; (4) pair with CM-8 inventory.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=netstat:listener earliest=-24h state=\"LISTEN\"\n| lookup approved_listening_ports.csv port protocol OUTPUT approved\n| where approved!=\"true\" OR isnull(approved)\n```\n\nUnderstanding this SPL\n\n**Least Functionality — Unexpected Listening Ports (CM-7)** — Compares listening ports to approved service catalog for CM-7 least functionality.\n\nDocumented **Data sources**: `index=os` `sourcetype=netstat:listener`. **App/TA** (typical add-on context): Splunk Universal Forwarder with `ss`/`netstat` scripted input or EDR network telemetry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: netstat:listener. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=netstat:listener, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where approved!=\"true\" OR isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Least Functionality — Unexpected Listening Ports (CM-7)** — Compares listening ports to approved service catalog for CM-7 least functionality.\n\nDocumented **Data sources**: `index=os` `sourcetype=netstat:listener`. **App/TA** (typical add-on context): Splunk Universal Forwarder with `ss`/`netstat` scripted input or EDR network telemetry. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unexpected listeners), Bar chart (by port), Single value (hosts)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare listening ports to approved service catalog for CM-7 least functionality. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CM-7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CM-7 is enforced — Splunk UC-22.14.57: Least Functionality — Unexpected Listening Ports.",
                  "ea": "Saved search 'UC-22.14.57' running on index=os sourcetype=netstat:listener, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
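UC-22.14.57's implementation steps call for alerting on new listeners on servers, not just any unapproved port. One way to sketch that is to treat a (host, port, protocol) tuple first observed within the last day of a 30-day window as new; the `process_name` field is an assumption about the scripted input's output and may differ in your deployment.

```spl
index=os sourcetype=netstat:listener state="LISTEN" earliest=-30d
| stats min(_time) as first_seen max(_time) as last_seen values(process_name) as process by host, port, protocol
| where first_seen >= relative_time(now(), "-24h@h")
| lookup approved_listening_ports.csv port protocol OUTPUT approved
| where approved!="true" OR isnull(approved)
| convert ctime(first_seen) ctime(last_seen)
```

Running the baseline over 30 days keeps a listener that merely restarted from being re-reported as new.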
            {
              "i": "22.14.58",
              "n": "System Component Inventory vs Observed Network Assets (CM-8)",
              "c": "high",
              "f": "intermediate",
              "v": "Reconciles CMDB-managed systems with active DHCP/DNS or scan observations for CM-8 system component inventory.",
              "t": "Splunk Add-on for Infoblox (TA) or DHCP logs, CMDB lookup",
              "d": "`infoblox:dhcp` or `msdns:debug`, `cmdb_hosts.csv`",
              "q": "index=net sourcetype=infoblox:dhcp earliest=-24h\n| stats latest(mac) as mac, latest(fingerprint) as fp by hostname, ip\n| lookup cmdb_hosts.csv hostname OUTPUT owner, classification\n| where isnull(owner) AND match(classification,\"unknown\")",
              "m": "(1) Normalize hostname case; (2) dedupe NAT; (3) feed discovery to CMDB; (4) monthly rogue asset report.",
              "z": "Table (unknown assets), Map (subnet), Single value (count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Infoblox (TA) or DHCP logs, CMDB lookup.\n• Ensure the following data sources are available: `infoblox:dhcp` or `msdns:debug`, `cmdb_hosts.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize hostname case; (2) dedupe NAT; (3) feed discovery to CMDB; (4) monthly rogue asset report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=net sourcetype=infoblox:dhcp earliest=-24h\n| stats latest(mac) as mac, latest(fingerprint) as fp by hostname, ip\n| lookup cmdb_hosts.csv hostname OUTPUT owner, classification\n| where isnull(owner) AND match(classification,\"unknown\")\n```\n\nUnderstanding this SPL\n\n**System Component Inventory vs Observed Network Assets (CM-8)** — Reconciles CMDB-managed systems with active DHCP/DNS or scan observations for CM-8 system component inventory.\n\nDocumented **Data sources**: `infoblox:dhcp` or `msdns:debug`, `cmdb_hosts.csv`. **App/TA** (typical add-on context): Splunk Add-on for Infoblox (TA) or DHCP logs, CMDB lookup. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: net; **sourcetype**: infoblox:dhcp. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=net, sourcetype=infoblox:dhcp, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname, ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(owner) AND match(classification,\"unknown\")` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unknown assets), Map (subnet), Single value (count)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We reconciles CMDB-managed systems with active DHCP/DNS or scan observations for CM-8 system component inventory. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "infoblox"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CM-8",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CM-8 is enforced — Splunk UC-22.14.58: System Component Inventory vs Observed Network Assets.",
                  "ea": "Saved search 'UC-22.14.58' running on infoblox:dhcp or msdns:debug, cmdb_hosts.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
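UC-22.14.58 above mentions normalizing hostname case and producing a monthly rogue asset report. A rough monthly rollup along those lines, assuming `cmdb_hosts.csv` has at least `hostname` and `owner` columns as documented; the `distinct_ips` column is just a convenience for spotting NAT or DHCP churn.

```spl
index=net sourcetype=infoblox:dhcp earliest=-30d
| eval hostname=lower(trim(hostname))
| stats latest(ip) as last_ip latest(mac) as mac dc(ip) as distinct_ips max(_time) as last_seen by hostname
| lookup cmdb_hosts.csv hostname OUTPUT owner, classification
| where isnull(owner)
| convert ctime(last_seen)
| sort - distinct_ips
```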
            {
              "i": "22.14.59",
              "n": "User-Installed Software Detections on Corporate Images (CM-11)",
              "c": "medium",
              "f": "intermediate",
              "v": "Lists unauthorized software packages reported by inventory agents for CM-11 user-installed software.",
              "t": "Microsoft SCCM / Intune logs via HEC, Jamf logs",
              "d": "`index=endpoint` `sourcetype=intune:device_inventory`",
              "q": "index=endpoint sourcetype=intune:device_inventory earliest=-7d\n| search publisher!=\"Microsoft Corporation\"\n| lookup approved_software_catalog.csv name OUTPUT allowed\n| where allowed!=\"true\" OR isnull(allowed)",
              "m": "(1) Curate catalog with security exceptions; (2) version-aware matching; (3) self-service remediation tickets; (4) integrate with software blocklist.",
              "z": "Table (unapproved titles), Bar chart (by publisher), Single value (devices affected)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Microsoft SCCM / Intune logs via HEC, Jamf logs.\n• Ensure the following data sources are available: `index=endpoint` `sourcetype=intune:device_inventory`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Curate catalog with security exceptions; (2) version-aware matching; (3) self-service remediation tickets; (4) integrate with software blocklist.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=endpoint sourcetype=intune:device_inventory earliest=-7d\n| search publisher!=\"Microsoft Corporation\"\n| lookup approved_software_catalog.csv name OUTPUT allowed\n| where allowed!=\"true\" OR isnull(allowed)\n```\n\nUnderstanding this SPL\n\n**User-Installed Software Detections on Corporate Images (CM-11)** — Lists unauthorized software packages reported by inventory agents for CM-11 user-installed software.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=intune:device_inventory`. **App/TA** (typical add-on context): Microsoft SCCM / Intune logs via HEC, Jamf logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: endpoint; **sourcetype**: intune:device_inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=endpoint, sourcetype=intune:device_inventory, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where allowed!=\"true\" OR isnull(allowed)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**User-Installed Software Detections on Corporate Images (CM-11)** — Lists unauthorized software packages reported by inventory agents for CM-11 user-installed software.\n\nDocumented **Data sources**: `index=endpoint` `sourcetype=intune:device_inventory`. **App/TA** (typical add-on context): Microsoft SCCM / Intune logs via HEC, Jamf logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unapproved titles), Bar chart (by publisher), Single value (devices affected)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We lists unauthorized software packages reported by inventory agents for CM-11 user-installed software. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CM-11",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CM-11 is enforced — Splunk UC-22.14.59: User-Installed Software Detections on Corporate Images.",
                  "ea": "Saved search 'UC-22.14.59' running on index=endpoint sourcetype=intune:device_inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
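For UC-22.14.59, the remediation-ticket step benefits from a per-title rollup rather than raw device rows. A sketch assuming the Intune inventory events carry `name`, `publisher`, `version`, and `device_name` fields; those names are guesses about the feed and should be checked against your actual field extractions.

```spl
index=endpoint sourcetype=intune:device_inventory earliest=-7d
| eval name=lower(trim(name))
| search publisher!="Microsoft Corporation"
| lookup approved_software_catalog.csv name OUTPUT allowed
| where allowed!="true" OR isnull(allowed)
| stats dc(device_name) as devices_affected values(version) as observed_versions by name, publisher
| sort - devices_affected
```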
            {
              "i": "22.14.60",
              "n": "Control Assessment Findings Ingest and Aging (CA-2)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks open assessment findings for CA-2 control assessments and independent reviews.",
              "t": "HTTP Event Collector (HEC) from GRC tool",
              "d": "`index=grc` `sourcetype=assessment:finding`",
              "q": "index=grc sourcetype=assessment:finding status=\"open\" earliest=-365d\n| eval age_days=round((now()-strptime(opened_date,\"%Y-%m-%d\"))/86400,1)\n| where age_days>90\n| stats max(age_days) as oldest by control_set, finding_id, owner",
              "m": "(1) Standardize control_set to NIST families; (2) alert owners; (3) export for assessor closeout; (4) map to CA-7 dashboards.",
              "z": "Table (stale findings), Bar chart (by family), Single value (open >90d)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC) from GRC tool.\n• Ensure the following data sources are available: `index=grc` `sourcetype=assessment:finding`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardize control_set to NIST families; (2) alert owners; (3) export for assessor closeout; (4) map to CA-7 dashboards.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=assessment:finding status=\"open\" earliest=-365d\n| eval age_days=round((now()-strptime(opened_date,\"%Y-%m-%d\"))/86400,1)\n| where age_days>90\n| stats max(age_days) as oldest by control_set, finding_id, owner\n```\n\nUnderstanding this SPL\n\n**Control Assessment Findings Ingest and Aging (CA-2)** — Tracks open assessment findings for CA-2 control assessments and independent reviews.\n\nDocumented **Data sources**: `index=grc` `sourcetype=assessment:finding`. **App/TA** (typical add-on context): HTTP Event Collector (HEC) from GRC tool. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: assessment:finding. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=assessment:finding, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>90` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by control_set, finding_id, owner** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale findings), Bar chart (by family), Single value (open >90d)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracks open assessment findings for CA-2 control assessments and independent reviews. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CA-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CA-2 is enforced — Splunk UC-22.14.60: Control Assessment Findings Ingest and Aging.",
                  "ea": "Saved search 'UC-22.14.60' running on index=grc sourcetype=assessment:finding, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
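UC-22.14.60 suggests alerting finding owners. A sketch of an owner-level aging digest built from the same `assessment:finding` events; the bucket boundaries (30/90/180 days) are illustrative, not mandated by CA-2.

```spl
index=grc sourcetype=assessment:finding status="open" earliest=-365d
| eval age_days=round((now()-strptime(opened_date, "%Y-%m-%d"))/86400, 0)
| eval age_bucket=case(age_days<=30, "0-30", age_days<=90, "31-90", age_days<=180, "91-180", true(), "181+")
| chart count over owner by age_bucket
| addtotals fieldname=total_open
| sort - total_open
```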
            {
              "i": "22.14.61",
              "n": "Information Exchange Agreements — Data Share Volume Anomalies (CA-3)",
              "c": "high",
              "f": "advanced",
              "v": "Monitors API traffic to partner enclaves for CA-3 information exchange agreement compliance.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876) API Gateway, or reverse proxy logs",
              "d": "`aws:apigateway` or `sourcetype=nginx:plus:access` with partner route prefixes",
              "q": "index=cloud sourcetype=aws:apigateway earliest=-24h resourcePath=\"/partner/*\"\n| bin _time span=1h\n| stats sum(bytes) as bytes by _time, partner_id\n| eventstats median(bytes) as med by partner_id\n| eval spike=if(bytes > med*5,1,0)\n| where spike=1",
              "m": "(1) Tag `partner_id` at API GW stage; (2) baseline weekly seasonality; (3) alert on spikes; (4) document agreement limits.",
              "z": "Time chart (bytes by partner), Table (spike windows)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876) API Gateway, or reverse proxy logs.\n• Ensure the following data sources are available: `aws:apigateway` or `sourcetype=nginx:plus:access` with partner route prefixes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag `partner_id` at API GW stage; (2) baseline weekly seasonality; (3) alert on spikes; (4) document agreement limits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=aws:apigateway earliest=-24h resourcePath=\"/partner/*\"\n| bin _time span=1h\n| stats sum(bytes) as bytes by _time, partner_id\n| eventstats median(bytes) as med by partner_id\n| eval spike=if(bytes > med*5,1,0)\n| where spike=1\n```\n\nUnderstanding this SPL\n\n**Information Exchange Agreements — Data Share Volume Anomalies (CA-3)** — Monitors API traffic to partner enclaves for CA-3 information exchange agreement compliance.\n\nDocumented **Data sources**: `aws:apigateway` or `sourcetype=nginx:plus:access` with partner route prefixes. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876) API Gateway, or reverse proxy logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:apigateway. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=aws:apigateway, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, partner_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by partner_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **spike** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where spike=1` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Information Exchange Agreements — Data Share Volume Anomalies (CA-3)** — Monitors API traffic to partner enclaves for CA-3 information exchange agreement compliance.\n\nDocumented **Data sources**: `aws:apigateway` or `sourcetype=nginx:plus:access` with partner route prefixes. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876) API Gateway, or reverse proxy logs. 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (bytes by partner), Table (spike windows)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch API traffic to partner enclaves for CA-3 information exchange agreement compliance. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=1h | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CA-3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CA-3 is enforced — Splunk UC-22.14.61: Information Exchange Agreements — Data Share Volume Anomalies.",
                  "ea": "Saved search 'UC-22.14.61' running on aws:apigateway or sourcetype=nginx:plus:access with partner route prefixes, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
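UC-22.14.61's steps mention baselining weekly seasonality before alerting on spikes. A simplified hour-of-week baseline sketch: it compares the last 24 hours against a 28-day average per partner and hour-of-week. Note that the baseline here still includes the candidate hours themselves, which a production search would exclude, and the 3-sigma threshold is an assumption.

```spl
index=cloud sourcetype=aws:apigateway resourcePath="/partner/*" earliest=-28d
| bin _time span=1h
| stats sum(bytes) as bytes by _time, partner_id
| eval hour_of_week=strftime(_time, "%w-%H")
| eventstats avg(bytes) as baseline_avg stdev(bytes) as baseline_stdev by partner_id, hour_of_week
| where _time >= relative_time(now(), "-24h") AND bytes > baseline_avg + 3 * baseline_stdev
| convert ctime(_time)
```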
            {
              "i": "22.14.62",
              "n": "Plan of Action and Milestones Open Items Past Due (CA-5)",
              "c": "high",
              "f": "beginner",
              "v": "Lists overdue POA&M items for CA-5 plans of action and milestones.",
              "t": "Splunk DB Connect (Splunkbase 2686) or CSV lookup updated by GRC",
              "d": "`inputlookup poam_open.csv`",
              "q": "| inputlookup poam_open.csv\n| eval due_epoch=strptime(due_date,\"%Y-%m-%d\")\n| where due_epoch < now() AND status!=\"Closed\"\n| table poam_id, control, owner, due_date, status, risk_rating\n| sort due_date",
              "m": "(1) Automate CSV refresh from authoritative GRC; (2) RBAC the lookup; (3) weekly leadership digest; (4) integrate CAP tracking.",
              "z": "Table (overdue POA&M), Single value (count), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686) or CSV lookup updated by GRC.\n• Ensure the following data sources are available: `inputlookup poam_open.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Automate CSV refresh from authoritative GRC; (2) RBAC the lookup; (3) weekly leadership digest; (4) integrate CAP tracking.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup poam_open.csv\n| eval due_epoch=strptime(due_date,\"%Y-%m-%d\")\n| where due_epoch < now() AND status!=\"Closed\"\n| table poam_id, control, owner, due_date, status, risk_rating\n| sort due_date\n```\n\nUnderstanding this SPL\n\n**Plan of Action and Milestones Open Items Past Due (CA-5)** — Lists overdue POA&M items for CA-5 plans of action and milestones.\n\nDocumented **Data sources**: `inputlookup poam_open.csv`. **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686) or CSV lookup updated by GRC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **due_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where due_epoch < now() AND status!=\"Closed\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Plan of Action and Milestones Open Items Past Due (CA-5)**): table poam_id, control, owner, due_date, status, risk_rating\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue POA&M), Single value (count), Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We lists overdue POA&M items for CA-5 plans of action and milestones. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CA-5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CA-5 is enforced — Splunk UC-22.14.62: Plan of Action and Milestones Open Items Past Due.",
                  "ea": "Saved search 'UC-22.14.62' running on inputlookup poam_open.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
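For the weekly leadership digest mentioned in UC-22.14.62, an owner-level summary of the same `poam_open.csv` lookup may be more useful than the raw item table; this assumes the CSV carries `owner` and `risk_rating` columns, as the documented query already implies.

```spl
| inputlookup poam_open.csv
| eval due_epoch=strptime(due_date, "%Y-%m-%d")
| where due_epoch < now() AND status!="Closed"
| eval days_overdue=round((now()-due_epoch)/86400, 0)
| stats count as overdue_items max(days_overdue) as worst_overdue values(risk_rating) as risk_ratings by owner
| sort - worst_overdue
```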
            {
              "i": "22.14.63",
              "n": "Continuous Monitoring Control Health Scores (CA-7)",
              "c": "critical",
              "f": "advanced",
              "v": "Rolls up data source freshness, detection coverage, and vuln SLA into a CA-7 continuous monitoring scorecard.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "Multiple summary indexes: `summary:ingest_health`, `summary:vuln_sla`",
              "q": "index=summary sourcetype=ca7:kpi earliest=-1d\n| stats latest(ingest_score) as ingest, latest(detection_score) as det, latest(vuln_score) as vuln by _time env\n| eval overall=round((ingest+det+vuln)/3,1)\n| where overall < 85",
              "m": "(1) Populate `ca7:kpi` via scheduled searches; (2) weight factors by risk; (3) monthly AO briefing; (4) tie to CA-8 test windows.",
              "z": "Single value (overall score), Radial gauge, Table (by env)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: Multiple summary indexes: `summary:ingest_health`, `summary:vuln_sla`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate `ca7:kpi` via scheduled searches; (2) weight factors by risk; (3) monthly AO briefing; (4) tie to CA-8 test windows.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=summary sourcetype=ca7:kpi earliest=-1d\n| stats latest(ingest_score) as ingest, latest(detection_score) as det, latest(vuln_score) as vuln by _time env\n| eval overall=round((ingest+det+vuln)/3,1)\n| where overall < 85\n```\n\nUnderstanding this SPL\n\n**Continuous Monitoring Control Health Scores (CA-7)** — Rolls up data source freshness, detection coverage, and vuln SLA into a CA-7 continuous monitoring scorecard.\n\nDocumented **Data sources**: Multiple summary indexes: `summary:ingest_health`, `summary:vuln_sla`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: summary; **sourcetype**: ca7:kpi. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=summary, sourcetype=ca7:kpi, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by _time env** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **overall** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overall < 85` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (overall score), Radial gauge, Table (by env)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We rolls up data source freshness, detection coverage, and vuln SLA into a CA-7 continuous monitoring scorecard. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CA-7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CA-7 is enforced — Splunk UC-22.14.63: Continuous Monitoring Control Health Scores.",
                  "ea": "Saved search 'UC-22.14.63' running on Multiple summary indexes: summary:ingest_health, summary:vuln_sla, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
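UC-22.14.63 depends on scheduled searches populating the `ca7:kpi` summary sourcetype. A hypothetical feeder sketch for just the ingest-freshness component; the 24-hour freshness cutoff, the hard-coded `env`, and writing the score with `collect` are all assumptions (and collecting with a non-default sourcetype typically counts against license).

```spl
| tstats max(_time) as last_event where index=* by index, sourcetype
| eval hours_stale=round((now()-last_event)/3600, 1)
| eval fresh=if(hours_stale <= 24, 1, 0)
| stats avg(fresh) as fresh_ratio
| eval ingest_score=round(fresh_ratio*100, 1), env="prod", _time=now()
| table _time, env, ingest_score
| collect index=summary sourcetype=ca7:kpi
```

Equivalent feeders for the detection and vulnerability scores would follow the same pattern, and the roll-up query can weight the three factors (for example 0.4/0.35/0.25) instead of taking a plain average.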
            {
              "i": "22.14.64",
              "n": "Penetration Test Windows and Detected Activities (CA-8)",
              "c": "high",
              "f": "advanced",
              "v": "Correlates scheduled pen-test IP ranges with IDS alerts for CA-8 penetration testing visibility.",
              "t": "Splunk Enterprise Security (Splunkbase 263), IDS sourcetypes",
              "d": "`notable` OR `suricata`, `snort`, `pan:threat`",
              "q": "index=sec (sourcetype=suricata OR sourcetype=pan:threat) earliest=-30d\n| lookup pentest_windows.csv src_ip OUTPUT engagement_id, tester\n| where isnotnull(engagement_id)\n| stats count by engagement_id, signature, dest\n| sort - count",
              "m": "(1) Maintain `pentest_windows.csv` with IPs and dates; (2) reduce false IR-4 by tagging engagements; (3) store ROE in GRC; (4) after-action detection gap review.",
              "z": "Table (detections during tests), Time chart (activity), Single value (coverage)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), IDS sourcetypes.\n• Ensure the following data sources are available: `notable` OR `suricata`, `snort`, `pan:threat`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `pentest_windows.csv` with IPs and dates; (2) reduce false IR-4 by tagging engagements; (3) store ROE in GRC; (4) after-action detection gap review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sec (sourcetype=suricata OR sourcetype=pan:threat) earliest=-30d\n| lookup pentest_windows.csv src_ip OUTPUT engagement_id, tester\n| where isnotnull(engagement_id)\n| stats count by engagement_id, signature, dest\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Penetration Test Windows and Detected Activities (CA-8)** — Correlates scheduled pen-test IP ranges with IDS alerts for CA-8 penetration testing visibility.\n\nDocumented **Data sources**: `notable` OR `suricata`, `snort`, `pan:threat`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), IDS sourcetypes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sec; **sourcetype**: suricata, pan:threat. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sec, sourcetype=suricata, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(engagement_id)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by engagement_id, signature, dest** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.signature, IDS_Attacks.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Penetration Test Windows and Detected Activities (CA-8)** — Correlates scheduled pen-test IP ranges with IDS alerts for CA-8 penetration testing visibility.\n\nDocumented **Data sources**: `notable` OR `suricata`, `snort`, `pan:threat`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), IDS sourcetypes. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (detections during tests), Time chart (activity), Single value (coverage)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect scheduled pen-test IP ranges with IDS alerts for CA-8 penetration testing visibility. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.signature, IDS_Attacks.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CA-8",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CA-8 is enforced — Splunk UC-22.14.64: Penetration Test Windows and Detected Activities.",
                  "ea": "Saved search 'UC-22.14.64' running on notable OR suricata, snort, pan:threat, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.65",
              "n": "Internal System Connections — East-West New Service Relationships (CA-9)",
              "c": "high",
              "f": "advanced",
              "v": "Discovers new internal connection pairs not seen in baseline for CA-9 internal system connections authorization.",
              "t": "Splunk Common Information Model Add-on (Splunkbase 1621)",
              "d": "CIM `Network_Traffic` accelerated",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic where earliest=-7d latest=now by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port\n| rename All_Traffic.* as *\n| eval pair=src.\"->\".dest.\":\".dest_port\n| inputlookup append=t internal_connection_allowlist.csv\n| stats first(allowed) as allowed, sum(count) as hits by pair, src, dest, dest_port\n| where allowed!=\"true\" OR isnull(allowed)",
              "m": "(1) Build allowlist from CMDB application dependency matrix; (2) tune minimum hit threshold; (3) alert on new database ports; (4) document approval workflow.",
              "z": "Table (new pairs), Sankey (optional), Graph (ES)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Common Information Model Add-on (Splunkbase 1621).\n• Ensure the following data sources are available: CIM `Network_Traffic` accelerated.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build allowlist from CMDB application dependency matrix; (2) tune minimum hit threshold; (3) alert on new database ports; (4) document approval workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic where earliest=-7d latest=now by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port\n| rename All_Traffic.* as *\n| eval pair=src.\"->\".dest.\":\".dest_port\n| inputlookup append=t internal_connection_allowlist.csv\n| stats first(allowed) as allowed, sum(count) as hits by pair, src, dest, dest_port\n| where allowed!=\"true\" OR isnull(allowed)\n```\n\nUnderstanding this SPL\n\n**Internal System Connections — East-West New Service Relationships (CA-9)** — Discovers new internal connection pairs not seen in baseline for CA-9 internal system connections authorization.\n\nDocumented **Data sources**: CIM `Network_Traffic` accelerated. **App/TA** (typical add-on context): Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• `eval` defines or adjusts **pair** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `stats` rolls up events into metrics; results are split **by pair, src, dest, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where allowed!=\"true\" OR isnull(allowed)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Internal System Connections — East-West New Service Relationships (CA-9)** — Discovers new internal connection pairs not seen in baseline for CA-9 internal system connections authorization.\n\nDocumented **Data sources**: CIM `Network_Traffic` accelerated. **App/TA** (typical add-on context): Splunk Common Information Model Add-on (Splunkbase 1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (new pairs), Sankey (optional), Graph (ES)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We discovers new internal connection pairs not seen in baseline for CA-9 internal system connections authorization. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CA-9",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CA-9 is enforced — Splunk UC-22.14.65: Internal System Connections — East-West New Service Relationships.",
                  "ea": "Saved search 'UC-22.14.65' running on CIM Network_Traffic accelerated, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.66",
              "n": "Residual Information in Shared Cloud Object Stores (SC-4)",
              "c": "high",
              "f": "advanced",
              "v": "Monitors public ACL changes on object storage for SC-4 information in shared resources.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876)",
              "d": "`aws:cloudtrail` `PutBucketPolicy` `PutBucketAcl`",
              "q": "index=cloud sourcetype=aws:cloudtrail earliest=-7d eventName IN (\"PutBucketAcl\",\"PutBucketPolicy\",\"PutObjectAcl\")\n| spath path=requestParameters output=requestParameters\n| search requestParameters=\"*\"public\"*\" OR requestParameters=\"*AllUsers*\"\n| stats count by userName, requestParameters, aws_account_id",
              "m": "(1) Use `spath` carefully on large events; (2) integrate AWS Config snapshots optional; (3) auto revert tickets; (4) map to CP-9 backup separation.",
              "z": "Table (public exposure changes), Time chart (trend), Single value (accounts)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876).\n• Ensure the following data sources are available: `aws:cloudtrail` `PutBucketPolicy` `PutBucketAcl`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Use `spath` carefully on large events; (2) integrate AWS Config snapshots optional; (3) auto revert tickets; (4) map to CP-9 backup separation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=aws:cloudtrail earliest=-7d eventName IN (\"PutBucketAcl\",\"PutBucketPolicy\",\"PutObjectAcl\")\n| spath path=requestParameters output=requestParameters\n| search requestParameters=\"*\"public\"*\" OR requestParameters=\"*AllUsers*\"\n| stats count by userName, requestParameters, aws_account_id\n```\n\nUnderstanding this SPL\n\n**Residual Information in Shared Cloud Object Stores (SC-4)** — Monitors public ACL changes on object storage for SC-4 information in shared resources.\n\nDocumented **Data sources**: `aws:cloudtrail` `PutBucketPolicy` `PutBucketAcl`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=aws:cloudtrail, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by userName, requestParameters, aws_account_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Residual Information in Shared Cloud Object Stores (SC-4)** — Monitors public ACL changes on object storage for SC-4 information in shared resources.\n\nDocumented **Data sources**: `aws:cloudtrail` `PutBucketPolicy` `PutBucketAcl`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (public exposure changes), Time chart (trend), Single value (accounts)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch public ACL changes on object storage for SC-4 information in shared resources. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SC-4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SC-4 is enforced — Splunk UC-22.14.66: Residual Information in Shared Cloud Object Stores.",
                  "ea": "Saved search 'UC-22.14.66' running on aws:cloudtrail PutBucketPolicy PutBucketAcl, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.67",
              "n": "Boundary Protection — Firewall Deny Burst to Sensitive Segments (SC-7)",
              "c": "critical",
              "f": "intermediate",
              "v": "Highlights deny spikes targeting protected VLANs for SC-7 boundary protection.",
              "t": "Palo Alto Networks Add-on for Splunk (Splunkbase 2757)",
              "d": "`pan:traffic` with action=deny",
              "q": "index=net sourcetype=pan:traffic action=deny earliest=-24h\n| lookup protected_segments.csv dest_zone OUTPUT sensitivity\n| where sensitivity=\"high\"\n| stats count by src, dest, dest_port, app\n| where count>1000",
              "m": "(1) Normalize zone names; (2) tune thresholds per segment; (3) integrate threat intel on src; (4) document inter-zone matrix.",
              "z": "Map (src), Table (top denies), Time chart (volume)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Palo Alto Networks Add-on for Splunk](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Palo Alto Networks Add-on for Splunk (Splunkbase 2757).\n• Ensure the following data sources are available: `pan:traffic` with action=deny.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize zone names; (2) tune thresholds per segment; (3) integrate threat intel on src; (4) document inter-zone matrix.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=net sourcetype=pan:traffic action=deny earliest=-24h\n| lookup protected_segments.csv dest_zone OUTPUT sensitivity\n| where sensitivity=\"high\"\n| stats count by src, dest, dest_port, app\n| where count>1000\n```\n\nUnderstanding this SPL\n\n**Boundary Protection — Firewall Deny Burst to Sensitive Segments (SC-7)** — Highlights deny spikes targeting protected VLANs for SC-7 boundary protection.\n\nDocumented **Data sources**: `pan:traffic` with action=deny. **App/TA** (typical add-on context): Palo Alto Networks Add-on for Splunk (Splunkbase 2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: net; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=net, sourcetype=pan:traffic, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where sensitivity=\"high\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>1000` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port, All_Traffic.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Boundary Protection — Firewall Deny Burst to Sensitive Segments (SC-7)** — Highlights deny spikes targeting protected VLANs for SC-7 boundary protection.\n\nDocumented **Data sources**: `pan:traffic` with action=deny. **App/TA** (typical add-on context): Palo Alto Networks Add-on for Splunk (Splunkbase 2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Map (src), Table (top denies), Time chart (volume)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We highlights deny spikes targeting protected VLANs for SC-7 boundary protection. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port, All_Traffic.app | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SC-7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SC-7 (Boundary protection) is enforced — Splunk UC-22.14.67: Boundary Protection — Firewall Deny Burst to Sensitive Segments.",
                  "ea": "Saved search 'UC-22.14.67' running on pan:traffic with action=deny, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.68",
              "n": "Transmission Confidentiality and Integrity — TLS Policy Downgrades (SC-8)",
              "c": "critical",
              "f": "advanced",
              "v": "Detects servers offering weak TLS versions or cipher suites for SC-8 transmission confidentiality and integrity.",
              "t": "Splunk Add-on for Stream (Splunkbase 1809) or certificate inventory",
              "d": "`stream:tls` or `tenable:` plugin SSL findings",
              "q": "index=net sourcetype=stream:tls earliest=-7d version IN (\"SSLv3\",\"TLS1.0\",\"TLS1.1\")\n| stats dc(src_ip) as clients, values(cipher) as ciphers by dest_ip, dest_port\n| sort - clients",
              "m": "(1) Enrich with asset owner; (2) emergency exceptions with expiry; (3) track remediation in RA-7; (4) validate after patch window.",
              "z": "Table (weak TLS services), Bar chart (by cipher), Single value (affected IPs)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Stream](https://splunkbase.splunk.com/app/1809), [CIM: Certificates](https://docs.splunk.com/Documentation/CIM/latest/User/Certificates)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Stream (Splunkbase 1809) or certificate inventory.\n• Ensure the following data sources are available: `stream:tls` or `tenable:` plugin SSL findings.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enrich with asset owner; (2) emergency exceptions with expiry; (3) track remediation in RA-7; (4) validate after patch window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=net sourcetype=stream:tls earliest=-7d version IN (\"SSLv3\",\"TLS1.0\",\"TLS1.1\")\n| stats dc(src_ip) as clients, values(cipher) as ciphers by dest_ip, dest_port\n| sort - clients\n```\n\nUnderstanding this SPL\n\n**Transmission Confidentiality and Integrity — TLS Policy Downgrades (SC-8)** — Detects servers offering weak TLS versions or cipher suites for SC-8 transmission confidentiality and integrity.\n\nDocumented **Data sources**: `stream:tls` or `tenable:` plugin SSL findings. **App/TA** (typical add-on context): Splunk Add-on for Stream (Splunkbase 1809) or certificate inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: net; **sourcetype**: stream:tls. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=net, sourcetype=stream:tls, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest_ip, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Transmission Confidentiality and Integrity — TLS Policy Downgrades (SC-8)** — Detects servers offering weak TLS versions or cipher suites for SC-8 transmission confidentiality and integrity.\n\nDocumented **Data sources**: `stream:tls` or `tenable:` plugin SSL findings. **App/TA** (typical add-on context): Splunk Add-on for Stream (Splunkbase 1809) or certificate inventory. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Certificates.All_Certificates` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (weak TLS services), Bar chart (by cipher), Single value (affected IPs)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects servers offering weak TLS versions or cipher suites for SC-8 transmission confidentiality and integrity. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Certificates"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SC-8",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SC-8 (Transmission confidentiality and integrity) is enforced — Splunk UC-22.14.68: Transmission Confidentiality and Integrity — TLS Policy Downgrades.",
                  "ea": "Saved search 'UC-22.14.68' running on stream:tls or tenable: plugin SSL findings, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.69",
              "n": "Network Disconnect for Inactive Sessions on Admin Services (SC-10)",
              "c": "high",
              "f": "intermediate",
              "v": "Measures idle durations on administrative listeners for SC-10 network disconnect controls (proxy perspective).",
              "t": "F5 BIG-IP APM or Zscaler logs via TA",
              "d": "`sourcetype=zscaler:web` or `f5:apm:session`",
              "q": "index=proxy sourcetype=zscaler:web earliest=-24h app_class=\"Admin\"\n| transaction user maxpause=30m maxspan=24h\n| eval idle_mins=round(duration/60,1)\n| where idle_mins>240",
              "m": "(1) Validate `transaction` keys; (2) align idle timeout to policy minutes; (3) alert on excessive admin idle tunnels; (4) document VPN vs ZTNA differences.",
              "z": "Table (long idle admin sessions), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: F5 BIG-IP APM or Zscaler logs via TA.\n• Ensure the following data sources are available: `sourcetype=zscaler:web` or `f5:apm:session`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Validate `transaction` keys; (2) align idle timeout to policy minutes; (3) alert on excessive admin idle tunnels; (4) document VPN vs ZTNA differences.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=zscaler:web earliest=-24h app_class=\"Admin\"\n| transaction user maxpause=30m maxspan=24h\n| eval idle_mins=round(duration/60,1)\n| where idle_mins>240\n```\n\nUnderstanding this SPL\n\n**Network Disconnect for Inactive Sessions on Admin Services (SC-10)** — Measures idle durations on administrative listeners for SC-10 network disconnect controls (proxy perspective).\n\nDocumented **Data sources**: `sourcetype=zscaler:web` or `f5:apm:session`. **App/TA** (typical add-on context): F5 BIG-IP APM or Zscaler logs via TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: zscaler:web. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=zscaler:web, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• `eval` defines or adjusts **idle_mins** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where idle_mins>240` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network Disconnect for Inactive Sessions on Admin Services (SC-10)** — Measures idle durations on administrative listeners for SC-10 network disconnect controls (proxy perspective).\n\nDocumented **Data sources**: `sourcetype=zscaler:web` or `f5:apm:session`. **App/TA** (typical add-on context): F5 BIG-IP APM or Zscaler logs via TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (long idle admin sessions), Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measures idle durations on administrative listeners for SC-10 network disconnect controls (proxy perspective). That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "f5",
                "zscaler"
              ],
              "em": [
                "f5_bigip"
              ],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SC-10",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SC-10 is enforced — Splunk UC-22.14.69: Network Disconnect for Inactive Sessions on Admin Services.",
                  "ea": "Saved search 'UC-22.14.69' running on sourcetype=zscaler:web or f5:apm:session, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Zscaler Splunk App",
                  "id": 3866,
                  "url": "https://splunkbase.splunk.com/app/3866",
                  "desc": "Dashboards for Zscaler web usage, threat intelligence, DLP, and remote access",
                  "screenshots": []
                },
                {
                  "name": "Splunk Add-on for F5 BIG-IP",
                  "id": 2680,
                  "url": "https://splunkbase.splunk.com/app/2680",
                  "desc": "Collects network traffic, system logs and performance metrics from F5 BIG-IP",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "Zscaler Add-on for Splunk",
                "id": 3865,
                "url": "https://splunkbase.splunk.com/app/3865"
              },
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.70",
              "n": "Cryptographic Key Management Events from Cloud KMS (SC-12)",
              "c": "critical",
              "f": "advanced",
              "v": "Audits KMS key disable/delete operations for SC-12 cryptographic key establishment and management.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for GCP (TA)",
              "d": "`aws:cloudtrail` KMS events",
              "q": "index=cloud sourcetype=aws:cloudtrail earliest=-30d eventSource=\"kms.amazonaws.com\" eventName IN (\"DisableKey\",\"ScheduleKeyDeletion\",\"DeleteImportedKeyMaterial\")\n| stats values(eventName) as actions, latest(_time) as last by userName, requestParameters",
              "m": "(1) Parse `requestParameters` for key ARNs into fields; (2) alert outside maintenance CAB; (3) dual-control evidence from SNOW comments optional join; (4) map to SC-13.",
              "z": "Table (KMS changes), Timeline, Single value (destructive ops)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for GCP (TA).\n• Ensure the following data sources are available: `aws:cloudtrail` KMS events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Parse `requestParameters` for key ARNs into fields; (2) alert outside maintenance CAB; (3) dual-control evidence from SNOW comments optional join; (4) map to SC-13.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=aws:cloudtrail earliest=-30d eventSource=\"kms.amazonaws.com\" eventName IN (\"DisableKey\",\"ScheduleKeyDeletion\",\"DeleteImportedKeyMaterial\")\n| stats values(eventName) as actions, latest(_time) as last by userName, requestParameters\n```\n\nUnderstanding this SPL\n\n**Cryptographic Key Management Events from Cloud KMS (SC-12)** — Audits KMS key disable/delete operations for SC-12 cryptographic key establishment and management.\n\nDocumented **Data sources**: `aws:cloudtrail` KMS events. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for GCP (TA). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=aws:cloudtrail, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userName, requestParameters** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cryptographic Key Management Events from Cloud KMS (SC-12)** — Audits KMS key disable/delete operations for SC-12 cryptographic key establishment and management.\n\nDocumented **Data sources**: `aws:cloudtrail` KMS events. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876), Splunk Add-on for GCP (TA). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (KMS changes), Timeline, Single value (destructive ops)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We audits KMS key disable/delete operations for SC-12 cryptographic key establishment and management. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.user | sort - agg_value",
              "e": [
                "aws",
                "gcp"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SC-12",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SC-12 is enforced — Splunk UC-22.14.70: Cryptographic Key Management Events from Cloud KMS.",
                  "ea": "Saved search 'UC-22.14.70' running on aws:cloudtrail KMS events, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.71",
              "n": "Cryptographic Protection — BitLocker or Disk Encryption Status Drops (SC-13)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks endpoint encryption compliance for SC-13 cryptographic protection.",
              "t": "Splunk Add-on for Microsoft Windows (Splunkbase 742), Intune HEC",
              "d": "`WinEventLog:Microsoft-Windows-BitLocker/Operational` or inventory JSON",
              "q": "index=os sourcetype=win:bitlocker_status earliest=-24h protection_status=\"Off\"\n| stats count by host, volume, reason\n| sort - count",
              "m": "(1) Normalize status strings; (2) exclude expected lab hosts via lookup; (3) auto-quarantine workflow optional; (4) map to SC-28 overlap for at-rest.",
              "z": "Table (unencrypted volumes), Single value (non-compliant hosts)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (Splunkbase 742), Intune HEC.\n• Ensure the following data sources are available: `WinEventLog:Microsoft-Windows-BitLocker/Operational` or inventory JSON.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize status strings; (2) exclude expected lab hosts via lookup; (3) auto-quarantine workflow optional; (4) map to SC-28 overlap for at-rest.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=win:bitlocker_status earliest=-24h protection_status=\"Off\"\n| stats count by host, volume, reason\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Cryptographic Protection — BitLocker or Disk Encryption Status Drops (SC-13)** — Tracks endpoint encryption compliance for SC-13 cryptographic protection.\n\nDocumented **Data sources**: `WinEventLog:Microsoft-Windows-BitLocker/Operational` or inventory JSON. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Intune HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: win:bitlocker_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=win:bitlocker_status, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, volume, reason** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cryptographic Protection — BitLocker or Disk Encryption Status Drops (SC-13)** — Tracks endpoint encryption compliance for SC-13 cryptographic protection.\n\nDocumented **Data sources**: `WinEventLog:Microsoft-Windows-BitLocker/Operational` or inventory JSON. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (Splunkbase 742), Intune HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unencrypted volumes), Single value (non-compliant hosts)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracks endpoint encryption compliance for SC-13 cryptographic protection. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Endpoint"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SC-13",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SC-13 (Cryptographic protection) is enforced — Splunk UC-22.14.71: Cryptographic Protection — BitLocker or Disk Encryption Status Drops.",
                  "ea": "Saved search 'UC-22.14.71' running on WinEventLog:Microsoft-Windows-BitLocker/Operational or inventory JSON, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.72",
              "n": "Session Authenticity — Token Replay Across Geographies (SC-23)",
              "c": "critical",
              "f": "advanced",
              "v": "Detects same refresh token used from distant geolocations within short windows for SC-23 session authenticity.",
              "t": "Splunk Add-on for Microsoft Office 365 (Splunkbase 4055)",
              "d": "`ms:o365:management` token events if available, or proxy OAuth logs",
              "q": "index=o365 sourcetype=ms:o365:management earliest=-24d Operation=\"Token issued*\"\n| iplocation ClientIP\n| stats earliest(_time) as t1, latest(_time) as t2, values(Country) as countries, values(ClientIP) as ips by UserId, SessionId\n| eval dt=t2-t1\n| where mvcount(countries)>1 AND dt < 7200",
              "m": "(1) Confirm operation strings; (2) reduce noise with corporate travel; (3) integrate with IR-4; (4) force re-auth playbooks.",
              "z": "Table (suspected replay), Map (IPs), Timeline",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Office 365 (Splunkbase 4055).\n• Ensure the following data sources are available: `ms:o365:management` token events if available, or proxy OAuth logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm operation strings; (2) reduce noise with corporate travel; (3) integrate with IR-4; (4) force re-auth playbooks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=ms:o365:management earliest=-24d Operation=\"Token issued*\"\n| iplocation ClientIP\n| stats earliest(_time) as t1, latest(_time) as t2, values(Country) as countries, values(ClientIP) as ips by UserId, SessionId\n| eval dt=t2-t1\n| where mvcount(countries)>1 AND dt < 7200\n```\n\nUnderstanding this SPL\n\n**Session Authenticity — Token Replay Across Geographies (SC-23)** — Detects same refresh token used from distant geolocations within short windows for SC-23 session authenticity.\n\nDocumented **Data sources**: `ms:o365:management` token events if available, or proxy OAuth logs. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=ms:o365:management, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Session Authenticity — Token Replay Across Geographies (SC-23)**): iplocation ClientIP\n• `stats` rolls up events into metrics; results are split **by UserId, SessionId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dt** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mvcount(countries)>1 AND dt < 7200` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Session Authenticity — Token Replay Across Geographies (SC-23)** — Detects same refresh token used from distant geolocations within short windows for SC-23 session authenticity.\n\nDocumented **Data sources**: `ms:o365:management` token events if available, or proxy OAuth logs. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Office 365 (Splunkbase 4055). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (suspected replay), Map (IPs), Timeline",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects same refresh token used from distant geolocations within short windows for SC-23 session authenticity. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SC-23",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SC-23 is enforced — Splunk UC-22.14.72: Session Authenticity — Token Replay Across Geographies.",
                  "ea": "Saved search 'UC-22.14.72' running on ms:o365:management token events if available, or proxy OAuth logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.73",
              "n": "Protection of Information at Rest — Storage Encryption Misconfigurations (SC-28)",
              "c": "critical",
              "f": "intermediate",
              "v": "Finds newly created volumes or buckets without encryption-at-rest enabled for SC-28 protection of information at rest.",
              "t": "Splunk Add-on for AWS (Splunkbase 1876)",
              "d": "`aws:cloudtrail` `CreateVolume` `CreateBucket`",
              "q": "index=cloud sourcetype=aws:cloudtrail earliest=-7d eventName IN (\"CreateVolume\",\"CreateBucket\")\n| spath path=responseElements output=responseElements\n| regex responseElements=\"(?i)encrypted.?false|Encryption:?None|null\"\n| stats count by userName, eventName, aws_account_id",
              "m": "(1) Tune regex to your JSON shape; (2) auto-open CM-3 change if intentional; (3) integrate AWS Config rules for continuous check; (4) evidence for assessor sampling.",
              "z": "Table (non-encrypted creates), Time chart, Single value (count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (Splunkbase 1876).\n• Ensure the following data sources are available: `aws:cloudtrail` `CreateVolume` `CreateBucket`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tune regex to your JSON shape; (2) auto-open CM-3 change if intentional; (3) integrate AWS Config rules for continuous check; (4) evidence for assessor sampling.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=aws:cloudtrail earliest=-7d eventName IN (\"CreateVolume\",\"CreateBucket\")\n| spath path=responseElements output=responseElements\n| regex responseElements=\"(?i)encrypted.?false|Encryption:?None|null\"\n| stats count by userName, eventName, aws_account_id\n```\n\nUnderstanding this SPL\n\n**Protection of Information at Rest — Storage Encryption Misconfigurations (SC-28)** — Finds newly created volumes or buckets without encryption-at-rest enabled for SC-28 protection of information at rest.\n\nDocumented **Data sources**: `aws:cloudtrail` `CreateVolume` `CreateBucket`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: aws:cloudtrail. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=aws:cloudtrail, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Filters rows matching a pattern with `regex`.\n• `stats` rolls up events into metrics; results are split **by userName, eventName, aws_account_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Protection of Information at Rest — Storage Encryption Misconfigurations (SC-28)** — Finds newly created volumes or buckets without encryption-at-rest enabled for SC-28 protection of information at rest.\n\nDocumented **Data sources**: `aws:cloudtrail` `CreateVolume` `CreateBucket`. **App/TA** (typical add-on context): Splunk Add-on for AWS (Splunkbase 1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (non-encrypted creates), Time chart, Single value (count)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We find newly created volumes or buckets without encryption-at-rest enabled for SC-28 protection of information at rest. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SC-28",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 SC-28 is enforced — Splunk UC-22.14.73: Protection of Information at Rest — Storage Encryption Misconfigurations.",
                  "ea": "Saved search 'UC-22.14.73' running on aws:cloudtrail CreateVolume CreateBucket, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.74",
              "n": "Risk Assessment Inputs — Control Deficiency Hotspots (RA-3)",
              "c": "high",
              "f": "intermediate",
              "v": "Aggregates multiple weak signals (failed controls, vulns, policy violations) by system for RA-3 risk assessment prioritization.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`index=compliance`, `index=vulns`, `index=sec` — use summary joins",
              "q": "index=compliance sourcetype=cis:assessment control_result=\"FAIL\" earliest=-30d\n| stats count as ctrl_fails by hostname\n| join type=left hostname [\n    search index=vulns sourcetype=tenable:vuln severity=\"Critical\" earliest=-30d\n    | stats count as crit_vulns by host\n    | rename host as hostname\n  ]\n| eval risk_score=ctrl_fails*2 + crit_vulns*5\n| sort - risk_score\n| head 200",
              "m": "(1) Replace `join` with `lookup` at scale; (2) calibrate weights annually; (3) export to enterprise risk register; (4) document assumptions.",
              "z": "Table (top systems), Bubble chart (optional), Single value (mean score)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `index=compliance`, `index=vulns`, `index=sec` — use summary joins.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Replace `join` with `lookup` at scale; (2) calibrate weights annually; (3) export to enterprise risk register; (4) document assumptions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=compliance sourcetype=cis:assessment control_result=\"FAIL\" earliest=-30d\n| stats count as ctrl_fails by hostname\n| join type=left hostname [\n    search index=vulns sourcetype=tenable:vuln severity=\"Critical\" earliest=-30d\n    | stats count as crit_vulns by host\n    | rename host as hostname\n  ]\n| eval risk_score=ctrl_fails*2 + crit_vulns*5\n| sort - risk_score\n| head 200\n```\n\nUnderstanding this SPL\n\n**Risk Assessment Inputs — Control Deficiency Hotspots (RA-3)** — Aggregates multiple weak signals (failed controls, vulns, policy violations) by system for RA-3 risk assessment prioritization.\n\nDocumented **Data sources**: `index=compliance`, `index=vulns`, `index=sec` — use summary joins. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: compliance; **sourcetype**: cis:assessment. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=compliance, sourcetype=cis:assessment, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **risk_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Risk Assessment Inputs — Control Deficiency Hotspots (RA-3)** — Aggregates multiple weak signals (failed controls, vulns, policy violations) by system for RA-3 risk assessment prioritization.\n\nDocumented **Data sources**: `index=compliance`, `index=vulns`, `index=sec` — use summary joins. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top systems), Bubble chart (optional), Single value (mean score)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We aggregates multiple weak signals (failed controls, vulns, policy violations) by system for RA-3 risk assessment prioritization. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "RA-3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 RA-3 is enforced — Splunk UC-22.14.74: Risk Assessment Inputs — Control Deficiency Hotspots.",
                  "ea": "Saved search 'UC-22.14.74' running on index=compliance, index=vulns, index=sec — use summary joins, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.75",
              "n": "Vulnerability Monitoring — Exploitable in the Wild Prioritization (RA-5)",
              "c": "critical",
              "f": "advanced",
              "v": "Joins vuln scan data with threat intel tags like KEV for RA-5 vulnerability monitoring.",
              "t": "Tenable Add-On for Splunk (Splunkbase 4060), threat intel lookups",
              "d": "`tenable:vuln` with `plugin_id`, `kev_catalog.csv`",
              "q": "index=vulns sourcetype=tenable:vuln earliest=-7d state!=\"Fixed\"\n| lookup kev_catalog.csv plugin_id OUTPUT in_kev, due_date\n| where in_kev=\"true\"\n| stats max(patch_due_sla_days) as sla by host, plugin_id",
              "m": "(1) Sync KEV catalog regularly; (2) align SLA to policy; (3) alert owners; (4) integrate patch change windows CM-3.",
              "z": "Table (KEV findings), Single value (count), Time chart (mean age)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Tenable Add-On for Splunk](https://splunkbase.splunk.com/app/4060), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Tenable Add-On for Splunk (Splunkbase 4060), threat intel lookups.\n• Ensure the following data sources are available: `tenable:vuln` with `plugin_id`, `kev_catalog.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Sync KEV catalog regularly; (2) align SLA to policy; (3) alert owners; (4) integrate patch change windows CM-3.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vulns sourcetype=tenable:vuln earliest=-7d state!=\"Fixed\"\n| lookup kev_catalog.csv plugin_id OUTPUT in_kev, due_date\n| where in_kev=\"true\"\n| stats max(patch_due_sla_days) as sla by host, plugin_id\n```\n\nUnderstanding this SPL\n\n**Vulnerability Monitoring — Exploitable in the Wild Prioritization (RA-5)** — Joins vuln scan data with threat intel tags like KEV for RA-5 vulnerability monitoring.\n\nDocumented **Data sources**: `tenable:vuln` with `plugin_id`, `kev_catalog.csv`. **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060), threat intel lookups. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vulns; **sourcetype**: tenable:vuln. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vulns, sourcetype=tenable:vuln, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_kev=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, plugin_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Vulnerability Monitoring — Exploitable in the Wild Prioritization (RA-5)** — Joins vuln scan data with threat intel tags like KEV for RA-5 vulnerability monitoring.\n\nDocumented **Data sources**: `tenable:vuln` with `plugin_id`, `kev_catalog.csv`. **App/TA** (typical add-on context): Tenable Add-On for Splunk (Splunkbase 4060), threat intel lookups. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (KEV findings), Single value (count), Time chart (mean age)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We joins vuln scan data with threat intel tags like KEV for RA-5 vulnerability monitoring. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest | sort - count",
              "e": [
                "tenable"
              ],
              "em": [
                "snmp_ups"
              ],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "RA-5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 RA-5 (Vulnerability scanning) is enforced — Splunk UC-22.14.75: Vulnerability Monitoring — Exploitable in the Wild Prioritization.",
                  "ea": "Saved search 'UC-22.14.75' running on tenable:vuln with plugin_id, kev_catalog.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.76",
              "n": "Risk Response Effectiveness After Control Changes (RA-7)",
              "c": "medium",
              "f": "advanced",
              "v": "Compares alert rates before and after control deployments for RA-7 risk response monitoring.",
              "t": "Splunk Enterprise Security (Splunkbase 263)",
              "d": "`` `notable` `` with `rule_name` tags for control initiatives",
              "q": "`notable` earliest=-60d tag=\"control_initiative_X\"\n| bin _time span=7d\n| stats count by _time\n| eventstats first(count) as baseline, latest(count) as latest\n| eval reduction_pct=round(100*(baseline-latest)/baseline,1)",
              "m": "(1) Tag rules consistently during pilots; (2) seasonally adjust baselines; (3) leadership narrative in monthly report; (4) document residual risk.",
              "z": "Time chart (notables per week), Single value (reduction %)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263).\n• Ensure the following data sources are available: `` `notable` `` with `rule_name` tags for control initiatives.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag rules consistently during pilots; (2) seasonally adjust baselines; (3) leadership narrative in monthly report; (4) document residual risk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` earliest=-60d tag=\"control_initiative_X\"\n| bin _time span=7d\n| stats count by _time\n| eventstats first(count) as baseline, latest(count) as latest\n| eval reduction_pct=round(100*(baseline-latest)/baseline,1)\n```\n\nUnderstanding this SPL\n\n**Risk Response Effectiveness After Control Changes (RA-7)** — Compares alert rates before and after control deployments for RA-7 risk response monitoring.\n\nDocumented **Data sources**: `` `notable` `` with `rule_name` tags for control initiatives. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **reduction_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (notables per week), Single value (reduction %)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare alert rates before and after control deployments for RA-7 risk response monitoring. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "RA-7",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 RA-7 is enforced — Splunk UC-22.14.76: Risk Response Effectiveness After Control Changes.",
                  "ea": "Saved search 'UC-22.14.76' running on notable with rule_name tags for control initiatives, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.77",
              "n": "Threat Hunting Outcomes Logged for Repeatable Hunts (RA-10)",
              "c": "high",
              "f": "advanced",
              "v": "Structures hunt hypothesis, queries, and findings for RA-10 threat hunting (as organizational capability evidence).",
              "t": "Splunk Enterprise Security (Splunkbase 263) — Enterprise Security Content Update or custom KVStore",
              "d": "`index=thunt` `sourcetype=hunt:record` HEC JSON",
              "q": "index=thunt sourcetype=hunt:record earliest=-90d\n| stats values(hypothesis) as hyps, latest(outcome) as outcome, max(duration_min) as duration by hunt_id, hunter\n| where outcome=\"inconclusive\" OR duration>480",
              "m": "(1) Require hunt closure fields; (2) link to IR tickets when escalated; (3) quarterly metrics for leadership; (4) library of proven SPL in repo outside Splunk.",
              "z": "Table (hunt backlog), Bar chart (outcomes), Single value (inconclusive %)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263) — Enterprise Security Content Update or custom KVStore.\n• Ensure the following data sources are available: `index=thunt` `sourcetype=hunt:record` HEC JSON.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require hunt closure fields; (2) link to IR tickets when escalated; (3) quarterly metrics for leadership; (4) library of proven SPL in repo outside Splunk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=thunt sourcetype=hunt:record earliest=-90d\n| stats values(hypothesis) as hyps, latest(outcome) as outcome, max(duration_min) as duration by hunt_id, hunter\n| where outcome=\"inconclusive\" OR duration>480\n```\n\nUnderstanding this SPL\n\n**Threat Hunting Outcomes Logged for Repeatable Hunts (RA-10)** — Structures hunt hypothesis, queries, and findings for RA-10 threat hunting (as organizational capability evidence).\n\nDocumented **Data sources**: `index=thunt` `sourcetype=hunt:record` HEC JSON. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263) — Enterprise Security Content Update or custom KVStore. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: thunt; **sourcetype**: hunt:record. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=thunt, sourcetype=hunt:record, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hunt_id, hunter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where outcome=\"inconclusive\" OR duration>480` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (hunt backlog), Bar chart (outcomes), Single value (inconclusive %)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We structures hunt hypothesis, queries, and findings for RA-10 threat hunting (as organizational capability evidence). That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "RA-10",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 RA-10 is enforced — Splunk UC-22.14.77: Threat Hunting Outcomes Logged for Repeatable Hunts.",
                  "ea": "Saved search 'UC-22.14.77' running on index=thunt sourcetype=hunt:record HEC JSON, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.78",
              "n": "Contingency Plan Tabletop and Activation Logging (CP-2)",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks formal contingency plan activations and tests for CP-2 contingency plan maintenance.",
              "t": "HTTP Event Collector (HEC)",
              "d": "`index=grc` `sourcetype=cp:activation`",
              "q": "index=grc sourcetype=cp:activation earliest=-365d\n| stats latest(activation_time) as last_act, count by plan_name, scenario_type\n| eval months_since=round((now()-strptime(last_act,\"%Y-%m-%d %H:%M:%S\"))/2628000,2)\n| where months_since>12",
              "m": "(1) Log both tests and real activations distinctly; (2) alert owners; (3) align with BIA refresh cycles; (4) map dependencies CA-9.",
              "z": "Table (stale plans), Timeline, Single value (plans overdue)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC).\n• Ensure the following data sources are available: `index=grc` `sourcetype=cp:activation`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Log both tests and real activations distinctly; (2) alert owners; (3) align with BIA refresh cycles; (4) map dependencies CA-9.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=cp:activation earliest=-365d\n| stats latest(activation_time) as last_act, count by plan_name, scenario_type\n| eval months_since=round((now()-strptime(last_act,\"%Y-%m-%d %H:%M:%S\"))/2628000,2)\n| where months_since>12\n```\n\nUnderstanding this SPL\n\n**Contingency Plan Tabletop and Activation Logging (CP-2)** — Tracks formal contingency plan activations and tests for CP-2 contingency plan maintenance.\n\nDocumented **Data sources**: `index=grc` `sourcetype=cp:activation`. **App/TA** (typical add-on context): HTTP Event Collector (HEC). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: cp:activation. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=cp:activation, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by plan_name, scenario_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **months_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where months_since>12` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale plans), Timeline, Single value (plans overdue)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracks formal contingency plan activations and tests for CP-2 contingency plan maintenance. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CP-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CP-2 is enforced — Splunk UC-22.14.78: Contingency Plan Tabletop and Activation Logging.",
                  "ea": "Saved search 'UC-22.14.78' running on index=grc sourcetype=cp:activation, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.79",
              "n": "System Backup Success and RPO Violations (CP-9)",
              "c": "critical",
              "f": "intermediate",
              "v": "Monitors backup job completions and anomalies for CP-9 system backup.",
              "t": "Splunk Add-on for NetBackup / Commvault / Veeam TAs or HEC from backup software",
              "d": "`index=backup` `sourcetype=veeam:job`",
              "q": "index=backup sourcetype=veeam:job earliest=-7d\n| stats latest(job_end) as last_end, latest(job_result) as result, latest(data_read_gb) as gb by job_name, host\n| eval hours_since=round((now()-strptime(last_end,\"%Y-%m-%d %H:%M:%S\"))/3600,1)\n| where result!=\"Success\" OR hours_since>26",
              "m": "(1) Adjust `strptime` to vendor format; (2) define RPO per tier in lookup; (3) alert backup admins + IR-4 if ransomware context; (4) test restores log separately CP-10.",
              "z": "Table (failed/stale jobs), Single value (SLA breaches), Time chart (duration)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for NetBackup / Commvault / Veeam TAs or HEC from backup software.\n• Ensure the following data sources are available: `index=backup` `sourcetype=veeam:job`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Adjust `strptime` to vendor format; (2) define RPO per tier in lookup; (3) alert backup admins + IR-4 if ransomware context; (4) test restores log separately CP-10.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=veeam:job earliest=-7d\n| stats latest(job_end) as last_end, latest(job_result) as result, latest(data_read_gb) as gb by job_name, host\n| eval hours_since=round((now()-strptime(last_end,\"%Y-%m-%d %H:%M:%S\"))/3600,1)\n| where result!=\"Success\" OR hours_since>26\n```\n\nUnderstanding this SPL\n\n**System Backup Success and RPO Violations (CP-9)** — Monitors backup job completions and anomalies for CP-9 system backup.\n\nDocumented **Data sources**: `index=backup` `sourcetype=veeam:job`. **App/TA** (typical add-on context): Splunk Add-on for NetBackup / Commvault / Veeam TAs or HEC from backup software. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: veeam:job. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=veeam:job, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by job_name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where result!=\"Success\" OR hours_since>26` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed/stale jobs), Single value (SLA breaches), Time chart (duration)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch backup job completions and anomalies for CP-9 system backup. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "commvault",
                "hashicorp",
                "veeam"
              ],
              "em": [
                "hashicorp_vault"
              ],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CP-9",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CP-9 (System backup) is enforced — Splunk UC-22.14.79: System Backup Success and RPO Violations.",
                  "ea": "Saved search 'UC-22.14.79' running on index=backup sourcetype=veeam:job, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Veeam App for Splunk",
                  "id": 7312,
                  "url": "https://splunkbase.splunk.com/app/7312",
                  "desc": "Monitoring and security dashboards for Veeam Backup job statuses and events",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/033805d0-fe3c-11ee-a32e-be99bb517a22.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/0a08b346-fe3c-11ee-9b85-7afc4dbed252.png"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.14.80",
              "n": "System Recovery Time Objective Tracking from DR Drills (CP-10)",
              "c": "critical",
              "f": "intermediate",
              "v": "Captures measured recovery time metrics from DR drills for CP-10 system recovery and reconstitution.",
              "t": "HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=grc` `sourcetype=dr:drill` or `snow:change_request` DR template",
              "q": "index=grc sourcetype=dr:drill earliest=-365d\n| eval rto_min=recovery_end_epoch - failover_start_epoch\n| lookup app_tiers.csv application OUTPUT rto_target_min\n| where rto_min > rto_target_min\n| table drill_id, application, rto_min, rto_target_min, notes",
              "m": "(1) Ensure epoch fields populated at drill completion; (2) store raw drill packets in secure share; (3) feed POA&M if missed; (4) integrate with IR-8 updates.",
              "z": "Table (RTO breaches), Bar chart (by app), Single value (worst RTO)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=grc` `sourcetype=dr:drill` or `snow:change_request` DR template.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure epoch fields populated at drill completion; (2) store raw drill packets in secure share; (3) feed POA&M if missed; (4) integrate with IR-8 updates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=dr:drill earliest=-365d\n| eval rto_min=recovery_end_epoch - failover_start_epoch\n| lookup app_tiers.csv application OUTPUT rto_target_min\n| where rto_min > rto_target_min\n| table drill_id, application, rto_min, rto_target_min, notes\n```\n\nUnderstanding this SPL\n\n**System Recovery Time Objective Tracking from DR Drills (CP-10)** — Captures measured recovery time metrics from DR drills for CP-10 system recovery and reconstitution.\n\nDocumented **Data sources**: `index=grc` `sourcetype=dr:drill` or `snow:change_request` DR template. **App/TA** (typical add-on context): HTTP Event Collector (HEC), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: dr:drill. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=dr:drill, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **rto_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where rto_min > rto_target_min` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **System Recovery Time Objective Tracking from DR Drills (CP-10)**): table drill_id, application, rto_min, rto_target_min, notes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (RTO breaches), Bar chart (by app), Single value (worst RTO)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We captures measured recovery time metrics from DR drills for CP-10 system recovery and reconstitution. That helps federal and agency security reviews match what the systems actually do day to day.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CP-10",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CP-10 is enforced — Splunk UC-22.14.80: System Recovery Time Objective Tracking from DR Drills.",
                  "ea": "Saved search 'UC-22.14.80' running on index=grc sourcetype=dr:drill or snow:change_request DR template, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.2,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 80,
            "none": 0
          }
        },
        {
          "i": "22.15",
          "n": "IEC 62443",
          "u": [
            {
              "i": "22.15.1",
              "n": "OT Security Policy Control Evidence from Log Review (IEC 62443-2-1 / 4.2.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Correlates policy-required controls (access, change, remote sessions) to indexed evidence so the IACS security program can demonstrate ongoing compliance with documented requirements.",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263)",
              "d": "`index=ot_sec` — `sourcetype=\"nozomi:alerts\"`, `sourcetype=\"claroty:alert\"`, `sourcetype=\"pan:system\"`; lookup `iec62443_policy_controls.csv`",
              "q": "index=ot_sec earliest=-30d (sourcetype=\"nozomi:alerts\" OR sourcetype=\"claroty:alert\" OR sourcetype=\"pan:system\")\n| eval control_family=case(match(_raw,\"(?i)remote\"),\"Remote_Access\", match(_raw,\"(?i)change|config\"),\"Change_Management\", true(),\"General_Security_Event\")\n| lookup iec62443_policy_controls.csv control_family OUTPUT policy_clause\n| stats count by control_family, policy_clause, sourcetype\n| sort - count",
              "m": "(1) Maintain lookup mapping control families to 2-1 policy clauses; (2) normalize sourcetypes with CIM tags where applicable; (3) schedule weekly PDF for program reviews; (4) route uncategorized families to GRC.",
              "z": "Stacked bar (events by control family), Table (clause coverage), Single value (distinct clauses with hits).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [CIM: Alerts](https://docs.splunk.com/Documentation/CIM/latest/User/Alerts)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=ot_sec` — `sourcetype=\"nozomi:alerts\"`, `sourcetype=\"claroty:alert\"`, `sourcetype=\"pan:system\"`; lookup `iec62443_policy_controls.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain lookup mapping control families to 2-1 policy clauses; (2) normalize sourcetypes with CIM tags where applicable; (3) schedule weekly PDF for program reviews; (4) route uncategorized families to GRC.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_sec earliest=-30d (sourcetype=\"nozomi:alerts\" OR sourcetype=\"claroty:alert\" OR sourcetype=\"pan:system\")\n| eval control_family=case(match(_raw,\"(?i)remote\"),\"Remote_Access\", match(_raw,\"(?i)change|config\"),\"Change_Management\", true(),\"General_Security_Event\")\n| lookup iec62443_policy_controls.csv control_family OUTPUT policy_clause\n| stats count by control_family, policy_clause, sourcetype\n| sort - count\n```\n\nUnderstanding this SPL\n\n**OT Security Policy Control Evidence from Log Review (IEC 62443-2-1 / 4.2.2)** — Correlates policy-required controls (access, change, remote sessions) to indexed evidence so the IACS security program can demonstrate ongoing compliance with documented requirements.\n\nDocumented **Data sources**: `index=ot_sec` — `sourcetype=\"nozomi:alerts\"`, `sourcetype=\"claroty:alert\"`, `sourcetype=\"pan:system\"`; lookup `iec62443_policy_controls.csv`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_sec; **sourcetype**: nozomi:alerts, claroty:alert, pan:system. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_sec, sourcetype=\"nozomi:alerts\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **control_family** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by control_family, policy_clause, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OT Security Policy Control Evidence from Log Review (IEC 62443-2-1 / 4.2.2)** — Correlates policy-required controls (access, change, remote sessions) to indexed evidence so the IACS security program can demonstrate ongoing compliance with documented requirements.\n\nDocumented **Data sources**: `index=ot_sec` — `sourcetype=\"nozomi:alerts\"`, `sourcetype=\"claroty:alert\"`, `sourcetype=\"pan:system\"`; lookup `iec62443_policy_controls.csv`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Alerts.Alerts` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (events by control family), Table (clause coverage), Single value (distinct clauses with hits).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect policy-required controls (access, change, remote sessions) to indexed evidence so the IACS security program can demonstrate ongoing compliance with documented requirements. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Alerts (when CIM-tagged)",
                "N/A otherwise"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Alerts.Alerts by Alerts.severity, Alerts.signature, Alerts.app | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "4.2.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 4.2.2 is enforced — Splunk UC-22.15.1: OT Security Policy Control Evidence from Log Review.",
                  "ea": "Saved search 'UC-22.15.1' running on sourcetype nozomi:alerts and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.2",
              "n": "IACS Cyber Risk Assessment — Critical OT Asset Exposure (IEC 62443-2-1 / 4.2.3)",
              "c": "critical",
              "f": "advanced",
              "v": "Surfaces internet-facing or high-CVSS OT assets to inform documented risk treatment and residual risk acceptance for the IACS.",
              "t": "Splunk Industrial Asset Intelligence, Splunk DB Connect (2686) for CMMS",
              "d": "`index=ot_assets`, `index=ot_vuln`, optional `ot_dmz_exposure.csv`",
              "q": "index=ot_assets OR index=ot_vuln earliest=-7d\n| eval asset_id=coalesce(device_id, asset_id, host)\n| stats values(cve) as cves, max(cvss_base_score) as max_cvss, latest(criticality) as tier by asset_id\n| lookup ot_dmz_exposure.csv asset_id OUTPUT exposed_to_untrusted\n| where max_cvss>=7.0 OR exposed_to_untrusted=\"true\"\n| eval risk_note=case(isnotnull(cves) AND exposed_to_untrusted=\"true\",\"Treat first: exposure + CVE\", exposed_to_untrusted=\"true\",\"Network exposure\", true(),\"Critical vulnerability concentration\")\n| table asset_id, tier, max_cvss, exposed_to_untrusted, risk_note, cves",
              "m": "(1) Ingest vulnerability exports with OT asset keys; (2) refresh exposure lookup from Claroty/Nozomi topology; (3) align tiers with SL-T; (4) attach results to formal RA register.",
              "z": "Scatter (CVSS vs count), Table (top assets), Single value (dual-risk count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Industrial Asset Intelligence, Splunk DB Connect (2686) for CMMS.\n• Ensure the following data sources are available: `index=ot_assets`, `index=ot_vuln`, optional `ot_dmz_exposure.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest vulnerability exports with OT asset keys; (2) refresh exposure lookup from Claroty/Nozomi topology; (3) align tiers with SL-T; (4) attach results to formal RA register.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_assets OR index=ot_vuln earliest=-7d\n| eval asset_id=coalesce(device_id, asset_id, host)\n| stats values(cve) as cves, max(cvss_base_score) as max_cvss, latest(criticality) as tier by asset_id\n| lookup ot_dmz_exposure.csv asset_id OUTPUT exposed_to_untrusted\n| where max_cvss>=7.0 OR exposed_to_untrusted=\"true\"\n| eval risk_note=case(isnotnull(cves) AND exposed_to_untrusted=\"true\",\"Treat first: exposure + CVE\", exposed_to_untrusted=\"true\",\"Network exposure\", true(),\"Critical vulnerability concentration\")\n| table asset_id, tier, max_cvss, exposed_to_untrusted, risk_note, cves\n```\n\nUnderstanding this SPL\n\n**IACS Cyber Risk Assessment — Critical OT Asset Exposure (IEC 62443-2-1 / 4.2.3)** — Surfaces internet-facing or high-CVSS OT assets to inform documented risk treatment and residual risk acceptance for the IACS.\n\nDocumented **Data sources**: `index=ot_assets`, `index=ot_vuln`, optional `ot_dmz_exposure.csv`. **App/TA** (typical add-on context): Splunk Industrial Asset Intelligence, Splunk DB Connect (2686) for CMMS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_assets, ot_vuln.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_assets, index=ot_vuln, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **asset_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by asset_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where max_cvss>=7.0 OR exposed_to_untrusted=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **risk_note** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **IACS Cyber Risk Assessment — Critical OT Asset Exposure (IEC 62443-2-1 / 4.2.3)**): table asset_id, tier, max_cvss, exposed_to_untrusted, risk_note, cves\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IACS Cyber Risk Assessment — Critical OT Asset Exposure (IEC 62443-2-1 / 4.2.3)** — Surfaces internet-facing or high-CVSS OT assets to inform documented risk treatment and residual risk acceptance for the IACS.\n\nDocumented **Data sources**: `index=ot_assets`, `index=ot_vuln`, optional `ot_dmz_exposure.csv`. **App/TA** (typical add-on context): Splunk Industrial Asset Intelligence, Splunk DB Connect (2686) for CMMS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter (CVSS vs count), Table (top assets), Single value (dual-risk count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surfaces internet-facing or high-CVSS OT assets to inform documented risk treatment and residual risk acceptance for the IACS. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Vulnerabilities (when mapped)",
                "N/A"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "4.2.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 4.2.3 is enforced — Splunk UC-22.15.2: IACS Cyber Risk Assessment — Critical OT Asset Exposure.",
                  "ea": "Saved search 'UC-22.15.2' running on index=ot_assets, index=ot_vuln, optional ot_dmz_exposure.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.3",
              "n": "OT Security Awareness Training Completion (IEC 62443-2-1 / 4.3.2.6)",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures completion of OT-specific training by role so awareness obligations for personnel working on IACS are auditable.",
              "t": "Splunk DB Connect (2686) or HEC from LMS",
              "d": "`index=hr_lms` `sourcetype=\"lms:ot_training\"`",
              "q": "index=hr_lms sourcetype=\"lms:ot_training\" earliest=-400d\n| where like(lower(course_code),\"%otsec%\") OR like(lower(course_code),\"%ics%\")\n| eval done_epoch=strptime(completed_date,\"%Y-%m-%d\")\n| stats latest(done_epoch) as last_done by employee_id, role\n| eval days_since=round((now()-last_done)/86400,0)\n| where isnull(last_done) OR days_since>365\n| sort - days_since",
              "m": "(1) Ingest LMS extracts with privacy minimization; (2) map mandatory courses per role in policy; (3) join to AD only on approved fields; (4) alert managers for overdue learners.",
              "z": "Table (overdue), Bar (completions/month), Single value (% current).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (2686) or HEC from LMS.\n• Ensure the following data sources are available: `index=hr_lms` `sourcetype=\"lms:ot_training\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest LMS extracts with privacy minimization; (2) map mandatory courses per role in policy; (3) join to AD only on approved fields; (4) alert managers for overdue learners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr_lms sourcetype=\"lms:ot_training\" earliest=-400d\n| where like(lower(course_code),\"%otsec%\") OR like(lower(course_code),\"%ics%\")\n| eval done_epoch=strptime(completed_date,\"%Y-%m-%d\")\n| stats latest(done_epoch) as last_done by employee_id, role\n| eval days_since=round((now()-last_done)/86400,0)\n| where isnull(last_done) OR days_since>365\n| sort - days_since\n```\n\nUnderstanding this SPL\n\n**OT Security Awareness Training Completion (IEC 62443-2-1 / 4.3.2.6)** — Measures completion of OT-specific training by role so awareness obligations for personnel working on IACS are auditable.\n\nDocumented **Data sources**: `index=hr_lms` `sourcetype=\"lms:ot_training\"`. **App/TA** (typical add-on context): Splunk DB Connect (2686) or HEC from LMS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr_lms; **sourcetype**: lms:ot_training. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr_lms, sourcetype=\"lms:ot_training\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where like(lower(course_code),\"%otsec%\") OR like(lower(course_code),\"%ics%\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **done_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by employee_id, role** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(last_done) OR days_since>365` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (overdue), Bar (completions/month), Single value (% current).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measures completion of OT-specific training by role so awareness obligations for personnel working on IACS are auditable. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "4.3.2.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 4.3.2.6 is enforced — Splunk UC-22.15.3: OT Security Awareness Training Completion.",
                  "ea": "Saved search 'UC-22.15.3' running on index=hr_lms sourcetype=\"lms:ot_training\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.4",
              "n": "Privileged OT Account Changes vs Personnel Screening (IEC 62443-2-1 / 4.3.3.4)",
              "c": "high",
              "f": "intermediate",
              "v": "Flags new or elevated OT privileged accounts that lack matching screening status, supporting personnel security for IACS access paths.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295)",
              "d": "`WinEventLog:Security` 4720/4728/4732; `index=cyberark` PTA/CPM",
              "q": "((index=windows EventCode IN (\"4720\",\"4728\",\"4732\") Message=\"*OT_*\") OR (index=cyberark sourcetype IN (\"cyberark:pta\",\"cyberark:cpm\")))\n| where index=\"windows\" OR match(_raw,\"(?i)OT_|PLC_|SCADA\")\n| eval account=coalesce(Account_Name, user, dest_user)\n| stats earliest(_time) as first_evt, latest(_time) as last_evt by account, EventCode, host\n| join type=left account [ search index=hr sourcetype=\"hr:screening\" | stats latest(status) as screening by samaccountname | rename samaccountname as account ]\n| where screening!=\"Cleared\" OR isnull(screening)\n| table first_evt, last_evt, account, EventCode, screening, host",
              "m": "(1) Enable AD auditing for sensitive OT groups; (2) ingest CyberArk for break-glass and vendor IDs; (3) HR feed with legal basis documented; (4) remediate within SLA.",
              "z": "Timeline, Table (exceptions), Single value (open items).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295).\n• Ensure the following data sources are available: `WinEventLog:Security` 4720/4728/4732; `index=cyberark` PTA/CPM.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable AD auditing for sensitive OT groups; (2) ingest CyberArk for break-glass and vendor IDs; (3) HR feed with legal basis documented; (4) remediate within SLA.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n((index=windows EventCode IN (\"4720\",\"4728\",\"4732\") Message=\"*OT_*\") OR (index=cyberark sourcetype IN (\"cyberark:pta\",\"cyberark:cpm\")))\n| where index=\"windows\" OR match(_raw,\"(?i)OT_|PLC_|SCADA\")\n| eval account=coalesce(Account_Name, user, dest_user)\n| stats earliest(_time) as first_evt, latest(_time) as last_evt by account, EventCode, host\n| join type=left account [ search index=hr sourcetype=\"hr:screening\" | stats latest(status) as screening by samaccountname | rename samaccountname as account ]\n| where screening!=\"Cleared\" OR isnull(screening)\n| table first_evt, last_evt, account, EventCode, screening, host\n```\n\nUnderstanding this SPL\n\n**Privileged OT Account Changes vs Personnel Screening (IEC 62443-2-1 / 4.3.3.4)** — Flags new or elevated OT privileged accounts that lack matching screening status, supporting personnel security for IACS access paths.\n\nDocumented **Data sources**: `WinEventLog:Security` 4720/4728/4732; `index=cyberark` PTA/CPM. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, cyberark.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, index=cyberark. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where index=\"windows\" OR match(_raw,\"(?i)OT_|PLC_|SCADA\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **account** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by account, EventCode, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where screening!=\"Cleared\" OR isnull(screening)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Privileged OT Account Changes vs Personnel Screening (IEC 62443-2-1 / 4.3.3.4)**): table first_evt, last_evt, account, EventCode, screening, host\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privileged OT Account Changes vs Personnel Screening (IEC 62443-2-1 / 4.3.3.4)** — Flags new or elevated OT privileged accounts that lack matching screening status, supporting personnel security for IACS access paths.\n\nDocumented **Data sources**: `WinEventLog:Security` 4720/4728/4732; `index=cyberark` PTA/CPM. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline, Table (exceptions), Single value (open items).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We flags new or elevated OT privileged accounts that lack matching screening status, supporting personnel security for IACS access paths. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.dest | sort - count",
              "e": [
                "cyberark"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "4.3.3.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 4.3.3.4 is enforced — Splunk UC-22.15.4: Privileged OT Account Changes vs Personnel Screening.",
                  "ea": "Saved search 'UC-22.15.4' running on WinEventLog:Security 4720/4728/4732; index=cyberark PTA/CPM, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.5",
              "n": "IACS Incident Response Containment Interval (IEC 62443-2-1 / 4.3.4.5)",
              "c": "critical",
              "f": "advanced",
              "v": "Quantifies detection-to-containment for OT cyber incidents and separates drills from production to evidence an exercised IR plan.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for ServiceNow (1928)",
              "d": "`index=ot_incidents` ServiceNow `snow:incident` or SOAR with `contained_at`",
              "q": "index=ot_incidents sourcetype=\"snow:incident\" earliest=-180d\n| where category=\"OT/Cyber\" OR match(short_description,\"(?i)ics|scada|plc\")\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\"), contained=strptime(contained_at,\"%Y-%m-%d %H:%M:%S\")\n| eval is_drill=if(match(tag,\"drill\") OR match(short_description,\"(?i)drill|tabletop\"),1,0)\n| eval contain_min=round((contained-opened)/60,1)\n| where isnotnull(contain_min)\n| stats median(contain_min) as p50, perc95(contain_min) as p95, count by is_drill",
              "m": "(1) Require containment milestone in workflow; (2) align timestamps to OT SOC clock; (3) compare drill vs live separately; (4) feed p95 into program KPIs.",
              "z": "Box plot, Table (recent), Single value (p95 live).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [CIM: Ticket Management](https://docs.splunk.com/Documentation/CIM/latest/User/Ticket_Management)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for ServiceNow (1928).\n• Ensure the following data sources are available: `index=ot_incidents` ServiceNow `snow:incident` or SOAR with `contained_at`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require containment milestone in workflow; (2) align timestamps to OT SOC clock; (3) compare drill vs live separately; (4) feed p95 into program KPIs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_incidents sourcetype=\"snow:incident\" earliest=-180d\n| where category=\"OT/Cyber\" OR match(short_description,\"(?i)ics|scada|plc\")\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\"), contained=strptime(contained_at,\"%Y-%m-%d %H:%M:%S\")\n| eval is_drill=if(match(tag,\"drill\") OR match(short_description,\"(?i)drill|tabletop\"),1,0)\n| eval contain_min=round((contained-opened)/60,1)\n| where isnotnull(contain_min)\n| stats median(contain_min) as p50, perc95(contain_min) as p95, count by is_drill\n```\n\nUnderstanding this SPL\n\n**IACS Incident Response Containment Interval (IEC 62443-2-1 / 4.3.4.5)** — Quantifies detection-to-containment for OT cyber incidents and separates drills from production to evidence an exercised IR plan.\n\nDocumented **Data sources**: `index=ot_incidents` ServiceNow `snow:incident` or SOAR with `contained_at`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for ServiceNow (1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_incidents; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_incidents, sourcetype=\"snow:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where category=\"OT/Cyber\" OR match(short_description,\"(?i)ics|scada|plc\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **opened** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_drill** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **contain_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(contain_min)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by is_drill** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IACS Incident Response Containment Interval (IEC 62443-2-1 / 4.3.4.5)** — Quantifies detection-to-containment for OT cyber incidents and separates drills from production to evidence an exercised IR plan.\n\nDocumented **Data sources**: `index=ot_incidents` ServiceNow `snow:incident` or SOAR with `contained_at`. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for ServiceNow (1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Ticket_Management.All_Ticket_Management` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Box plot, Table (recent), Single value (p95 live).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We quantifies detection-to-containment for OT cyber incidents and separates drills from production to evidence an exercised IR plan. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Ticket_Management"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Ticket_Management.All_Ticket_Management by All_Ticket_Management.status, All_Ticket_Management.priority, All_Ticket_Management.category | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "4.3.4.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 4.3.4.5 is enforced — Splunk UC-22.15.5: IACS Incident Response Containment Interval.",
                  "ea": "Saved search 'UC-22.15.5' running on index=ot_incidents ServiceNow snow:incident or SOAR with contained_at, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.6",
              "n": "Control System Continuity — Historian Export and OT Backup Health (IEC 62443-2-1 / 4.3.4.3)",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects missed historian snapshots or failed OT-tier backups that undermine recovery time objectives for control systems.",
              "t": "Universal Forwarder on backup/historian bridge hosts",
              "d": "`index=ot_hist` export status; `index=backup` Commvault/Veeam",
              "q": "index=ot_hist sourcetype=\"hist:export_status\" earliest=-72h\n| stats latest(_time) as last_ok, latest(status) as st by plant, historian_id\n| eval hours_since=round((now()-last_ok)/3600,1)\n| where st!=\"SUCCESS\" OR hours_since>26\n| append [ search index=backup sourcetype=\"veeam:JobSession\" OR sourcetype=\"commvault:job\" earliest=-72h | where match(JobName,\"(?i)OT|Historian|VMware_OT\") | stats latest(Result) as st by JobName | where st!=\"Success\" ]\n| sort - hours_since",
              "m": "(1) Emit structured export rows from each historian interface; (2) tag backup jobs covering OT VMs; (3) tune thresholds to RPO; (4) link runbook for cold standby DCS.",
              "z": "Table (gaps), Single value (failed jobs), Time chart (export lag).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Universal Forwarder on backup/historian bridge hosts.\n• Ensure the following data sources are available: `index=ot_hist` export status; `index=backup` Commvault/Veeam.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Emit structured export rows from each historian interface; (2) tag backup jobs covering OT VMs; (3) tune thresholds to RPO; (4) link runbook for cold standby DCS.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_hist sourcetype=\"hist:export_status\" earliest=-72h\n| stats latest(_time) as last_ok, latest(status) as st by plant, historian_id\n| eval hours_since=round((now()-last_ok)/3600,1)\n| where st!=\"SUCCESS\" OR hours_since>26\n| append [ search index=backup sourcetype=\"veeam:JobSession\" OR sourcetype=\"commvault:job\" earliest=-72h | where match(JobName,\"(?i)OT|Historian|VMware_OT\") | stats latest(Result) as st by JobName | where st!=\"Success\" ]\n| sort - hours_since\n```\n\nUnderstanding this SPL\n\n**Control System Continuity — Historian Export and OT Backup Health (IEC 62443-2-1 / 4.3.4.3)** — Detects missed historian snapshots or failed OT-tier backups that undermine recovery time objectives for control systems.\n\nDocumented **Data sources**: `index=ot_hist` export status; `index=backup` Commvault/Veeam. **App/TA** (typical add-on context): Universal Forwarder on backup/historian bridge hosts. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_hist; **sourcetype**: hist:export_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_hist, sourcetype=\"hist:export_status\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by plant, historian_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where st!=\"SUCCESS\" OR hours_since>26` — typically the threshold or rule expression for this monitoring goal.\n• Appends rows from a subsearch with `append`.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (gaps), Single value (failed jobs), Time chart (export lag).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects missed historian snapshots or failed OT-tier backups that undermine recovery time objectives for control systems. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "4.3.4.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 4.3.4.3 is enforced — Splunk UC-22.15.6: Control System Continuity — Historian Export and OT Backup Health.",
                  "ea": "Saved search 'UC-22.15.6' running on index=ot_hist export status; index=backup Commvault/Veeam, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.7",
              "n": "OT Security Audit Population Sampling Dashboard (IEC 62443-2-1 / 4.4.3)",
              "c": "high",
              "f": "intermediate",
              "v": "Delivers repeatable audit samples of OT security-relevant events across zones for internal and external assessments of the security program.",
              "t": "Splunk OT Security Add-on (5151), CIM Add-on (1621)",
              "d": "`index=ot_audit` unified; `ot_zones.csv`",
              "q": "index=ot_audit earliest=-90d\n| lookup ot_zones.csv cidr OUTPUT zone, sl_t\n| stats count as evt, dc(user) as users, dc(dest_ip) as peers by zone, sourcetype\n| sort zone, - evt\n| head 300",
              "m": "(1) Land firewall, HMI, PAM, IDS into one auditable index with RBAC; (2) maintain CIDR→zone lookup; (3) export monthly CSV to auditors; (4) document sampling method in scope letter.",
              "z": "Heatmap (zone × sourcetype), Table, Bar (evt by zone).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), CIM Add-on (1621).\n• Ensure the following data sources are available: `index=ot_audit` unified; `ot_zones.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Land firewall, HMI, PAM, IDS into one auditable index with RBAC; (2) maintain CIDR→zone lookup; (3) export monthly CSV to auditors; (4) document sampling method in scope letter.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_audit earliest=-90d\n| lookup ot_zones.csv cidr OUTPUT zone, sl_t\n| stats count as evt, dc(user) as users, dc(dest_ip) as peers by zone, sourcetype\n| sort zone, - evt\n| head 300\n```\n\nUnderstanding this SPL\n\n**OT Security Audit Population Sampling Dashboard (IEC 62443-2-1 / 4.4.3)** — Delivers repeatable audit samples of OT security-relevant events across zones for internal and external assessments of the security program.\n\nDocumented **Data sources**: `index=ot_audit` unified; `ot_zones.csv`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), CIM Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by zone, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(All_Changes.dest) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OT Security Audit Population Sampling Dashboard (IEC 62443-2-1 / 4.4.3)** — Delivers repeatable audit samples of OT security-relevant events across zones for internal and external assessments of the security program.\n\nDocumented **Data sources**: `index=ot_audit` unified; `ot_zones.csv`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), CIM Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (zone × sourcetype), Table, Bar (evt by zone).",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We delivers repeatable audit samples of OT security-relevant events across zones for internal and external assessments of the security program. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Change",
                "Authentication when tagged"
              ],
              "qs": "| tstats summariesonly=t dc(All_Changes.dest) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "4.4.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 4.4.3 is enforced — Splunk UC-22.15.7: OT Security Audit Population Sampling Dashboard.",
                  "ea": "Saved search 'UC-22.15.7' running on index=ot_audit unified; ot_zones.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.8",
              "n": "OT Change Management — PLC Download Without CM Ticket (IEC 62443-2-1 / 4.3.4.4)",
              "c": "critical",
              "f": "advanced",
              "v": "Correlates controller downloads or online edits with change tickets to enforce controlled maintenance for IACS devices.",
              "t": "Splunk OT Security Add-on (5151), ServiceNow TA (1928)",
              "d": "`index=ot_eng` vendor PLC audit; `snow:change_request`",
              "q": "index=ot_eng sourcetype=\"plc:download\" OR sourcetype=\"siemens:audit\" OR sourcetype=\"ab:audit\" earliest=-14d\n| eval ck=coalesce(project_id, program_name, concat(\"PLC-\",controller))\n| eval t0=_time\n| join type=left ck [ search index=itsm sourcetype=\"snow:change_request\" earliest=-30d | eval ck=coalesce(u_ot_project, number) | where state IN (\"Closed Complete\",\"Implement\") | fields number, ck, opened_at ]\n| where isnull(number)\n| stats earliest(_time) as first_dl, latest(user) as users, values(src_ip) as src by ck, controller",
              "m": "(1) Normalize vendor fields to shared `ck`; (2) mandate CM tickets carry project keys; (3) tune time join if needed via subsearch; (4) weekly engineering review of unmatched.",
              "z": "Table (unmatched), Timeline, Single value (count/day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [ServiceNow TA](https://splunkbase.splunk.com/app/1928), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), ServiceNow TA (1928).\n• Ensure the following data sources are available: `index=ot_eng` vendor PLC audit; `snow:change_request`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize vendor fields to shared `ck`; (2) mandate CM tickets carry project keys; (3) tune time join if needed via subsearch; (4) weekly engineering review of unmatched.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_eng sourcetype=\"plc:download\" OR sourcetype=\"siemens:audit\" OR sourcetype=\"ab:audit\" earliest=-14d\n| eval ck=coalesce(project_id, program_name, concat(\"PLC-\",controller))\n| eval t0=_time\n| join type=left ck [ search index=itsm sourcetype=\"snow:change_request\" earliest=-30d | eval ck=coalesce(u_ot_project, number) | where state IN (\"Closed Complete\",\"Implement\") | fields number, ck, opened_at ]\n| where isnull(number)\n| stats earliest(_time) as first_dl, latest(user) as users, values(src_ip) as src by ck, controller\n```\n\nUnderstanding this SPL\n\n**OT Change Management — PLC Download Without CM Ticket (IEC 62443-2-1 / 4.3.4.4)** — Correlates controller downloads or online edits with change tickets to enforce controlled maintenance for IACS devices.\n\nDocumented **Data sources**: `index=ot_eng` vendor PLC audit; `snow:change_request`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), ServiceNow TA (1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_eng; **sourcetype**: plc:download, siemens:audit, ab:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_eng, sourcetype=\"plc:download\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ck** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **t0** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(number)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by ck, controller** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OT Change Management — PLC Download Without CM Ticket (IEC 62443-2-1 / 4.3.4.4)** — Correlates controller downloads or online edits with change tickets to enforce controlled maintenance for IACS devices.\n\nDocumented **Data sources**: `index=ot_eng` vendor PLC audit; `snow:change_request`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), ServiceNow TA (1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unmatched), Timeline, Single value (count/day).",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect controller downloads or online edits with change tickets to enforce controlled maintenance for IACS devices. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "4.3.4.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 4.3.4.4 is enforced — Splunk UC-22.15.8: OT Change Management — PLC Download Without CM Ticket.",
                  "ea": "Saved search 'UC-22.15.8' running on index=ot_eng vendor PLC audit; snow:change_request, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.9",
              "n": "Physical Access vs OT Network First-Seen Asset (IEC 62443-2-1 / 4.3.3.3)",
              "c": "high",
              "f": "intermediate",
              "v": "Relates physical access events to new or unknown OT assets appearing on protected VLANs to strengthen environmental controls narrative.",
              "t": "Nozomi Networks TA (6905), HEC from PACS",
              "d": "`index=pacs` `sourcetype=\"pacs:door\"`; `index=ot_net` asset updates",
              "q": "index=pacs sourcetype=\"pacs:door\" door_id IN (\"OT-MCC-A\",\"OT-SERVER-1\") earliest=-24h\n| eval t_phys=_time, who=badge_id\n| join max=0 type=inner who [\n  search index=ot_net sourcetype=\"nozomi:assets\" earliest=-24h\n  | where first_seen_time>_time-86400\n  | eval who=owner\n  | stats min(first_seen_time) as fs by asset_name, vlan, who\n]\n| eval delta_sec=fs-t_phys\n| where delta_sec>120 OR isnull(who)\n| table _time, door_id, badge_id, asset_name, vlan, delta_sec",
              "m": "(1) Sync clocks between PACS and NMS; (2) enrich asset owner field; (3) filter tailgating test doors; (4) integrate physical security escalation path.",
              "z": "Table, Map (optional), Time chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Nozomi Networks TA](https://splunkbase.splunk.com/app/6905)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Nozomi Networks TA (6905), HEC from PACS.\n• Ensure the following data sources are available: `index=pacs` `sourcetype=\"pacs:door\"`; `index=ot_net` asset updates.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Sync clocks between PACS and NMS; (2) enrich asset owner field; (3) filter tailgating test doors; (4) integrate physical security escalation path.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pacs sourcetype=\"pacs:door\" door_id IN (\"OT-MCC-A\",\"OT-SERVER-1\") earliest=-24h\n| eval t_phys=_time, who=badge_id\n| join max=0 type=inner who [\n  search index=ot_net sourcetype=\"nozomi:assets\" earliest=-24h\n  | where first_seen_time>_time-86400\n  | eval who=owner\n  | stats min(first_seen_time) as fs by asset_name, vlan, who\n]\n| eval delta_sec=fs-t_phys\n| where delta_sec>120 OR isnull(who)\n| table _time, door_id, badge_id, asset_name, vlan, delta_sec\n```\n\nUnderstanding this SPL\n\n**Physical Access vs OT Network First-Seen Asset (IEC 62443-2-1 / 4.3.3.3)** — Relates physical access events to new or unknown OT assets appearing on protected VLANs to strengthen environmental controls narrative.\n\nDocumented **Data sources**: `index=pacs` `sourcetype=\"pacs:door\"`; `index=ot_net` asset updates. **App/TA** (typical add-on context): Nozomi Networks TA (6905), HEC from PACS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pacs; **sourcetype**: pacs:door. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pacs, sourcetype=\"pacs:door\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **t_phys** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **delta_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delta_sec>120 OR isnull(who)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Physical Access vs OT Network First-Seen Asset (IEC 62443-2-1 / 4.3.3.3)**): table _time, door_id, badge_id, asset_name, vlan, delta_sec\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Map (optional), Time chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We relates physical access events to new or unknown OT assets appearing on protected VLANs to strengthen environmental controls narrative. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "4.3.3.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 4.3.3.3 is enforced — Splunk UC-22.15.9: Physical Access vs OT Network First-Seen Asset.",
                  "ea": "Saved search 'UC-22.15.9' running on index=pacs sourcetype=\"pacs:door\"; index=ot_net asset updates, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.10",
              "n": "Vendor Remote Maintenance Window and Dual-Control Evidence (IEC 62443-2-1 / 4.4.3.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors vendor VPN and PSM sessions against contracted hours and approval records to evidence supplier risk management for OT maintenance.",
              "t": "Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for CyberArk (4295)",
              "d": "`pan:globalprotect` or `pan:sslvpn`; `cyberark:psm`",
              "q": "index=vpn sourcetype=\"pan:globalprotect\" earliest=-14d\n| lookup vendor_access.csv user OUTPUT vendor, window\n| eval hod=strftime(_time,\"%H\")\n| eval off_window=case(window=\"business\" AND (hod<7 OR hod>18),1, window=\"24x7\",0, true(),0)\n| join type=left user [ search index=cyberark sourcetype=\"cyberark:psm\" earliest=-14d | stats count as psm_hits by user ]\n| where vendor!=\"Internal\" AND (off_window=1 OR psm_hits=0)\n| stats count by vendor, user, off_window",
              "m": "(1) Populate vendor contract lookup; (2) force vendor IDs through PSM; (3) alert security + procurement on violations; (4) quarterly supplier review export.",
              "z": "Bar (by vendor), Table, Single value (weekly violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for CyberArk (4295).\n• Ensure the following data sources are available: `pan:globalprotect` or `pan:sslvpn`; `cyberark:psm`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Populate vendor contract lookup; (2) force vendor IDs through PSM; (3) alert security + procurement on violations; (4) quarterly supplier review export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"pan:globalprotect\" earliest=-14d\n| lookup vendor_access.csv user OUTPUT vendor, window\n| eval hod=strftime(_time,\"%H\")\n| eval off_window=case(window=\"business\" AND (hod<7 OR hod>18),1, window=\"24x7\",0, true(),0)\n| join type=left user [ search index=cyberark sourcetype=\"cyberark:psm\" earliest=-14d | stats count as psm_hits by user ]\n| where vendor!=\"Internal\" AND (off_window=1 OR psm_hits=0)\n| stats count by vendor, user, off_window\n```\n\nUnderstanding this SPL\n\n**Vendor Remote Maintenance Window and Dual-Control Evidence (IEC 62443-2-1 / 4.4.3.2)** — Monitors vendor VPN and PSM sessions against contracted hours and approval records to evidence supplier risk management for OT maintenance.\n\nDocumented **Data sources**: `pan:globalprotect` or `pan:sslvpn`; `cyberark:psm`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for CyberArk (4295). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: pan:globalprotect. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"pan:globalprotect\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **hod** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **off_window** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where vendor!=\"Internal\" AND (off_window=1 OR psm_hits=0)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by vendor, user, off_window** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Vendor Remote Maintenance Window and Dual-Control Evidence (IEC 62443-2-1 / 4.4.3.2)** — Monitors vendor VPN and PSM sessions against contracted hours and approval records to evidence supplier risk management for OT maintenance.\n\nDocumented **Data sources**: `pan:globalprotect` or `pan:sslvpn`; `cyberark:psm`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757), Splunk Add-on for CyberArk (4295). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar (by vendor), Table, Single value (weekly violations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch vendor VPN and PSM sessions against contracted hours and approval records to evidence supplier risk management for OT maintenance. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "4.4.3.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 4.4.3.2 is enforced — Splunk UC-22.15.10: Vendor Remote Maintenance Window and Dual-Control Evidence.",
                  "ea": "Saved search 'UC-22.15.10' running on pan:globalprotect or pan:sslvpn; cyberark:psm, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.11",
              "n": "HMI and Jump Host Human User Identification Gaps (IEC 62443-3-3 / SR 1.1)",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects interactive OT logons missing identifiable human user mapping to support unique identification of people accessing IACS UIs.",
              "t": "Splunk Add-on for CyberArk (4295), Windows TA (742)",
              "d": "`index=pam` CyberArk PSM; `WinEventLog:Security` 4624 on jump hosts",
              "q": "index=pam sourcetype=\"cyberark:psm\" earliest=-7d\n| eval human=coalesce(suser, user, src_user)\n| where isnull(human) OR human=\"-\" OR match(human,\"^svc_\")\n| stats count by dest_host, application, protocol\n| sort - count",
              "m": "(1) Enforce personal accounts on PSM; (2) block shared HMI logins at source; (3) integrate LDAP attribute for service vs human; (4) remediate top dest_hosts first.",
              "z": "Table (top gaps), Single value (events with no human id).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [Windows TA](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CyberArk (4295), Windows TA (742).\n• Ensure the following data sources are available: `index=pam` CyberArk PSM; `WinEventLog:Security` 4624 on jump hosts.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enforce personal accounts on PSM; (2) block shared HMI logins at source; (3) integrate LDAP attribute for service vs human; (4) remediate top dest_hosts first.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=\"cyberark:psm\" earliest=-7d\n| eval human=coalesce(suser, user, src_user)\n| where isnull(human) OR human=\"-\" OR match(human,\"^svc_\")\n| stats count by dest_host, application, protocol\n| sort - count\n```\n\nUnderstanding this SPL\n\n**HMI and Jump Host Human User Identification Gaps (IEC 62443-3-3 / SR 1.1)** — Detects interactive OT logons missing identifiable human user mapping to support unique identification of people accessing IACS UIs.\n\nDocumented **Data sources**: `index=pam` CyberArk PSM; `WinEventLog:Security` 4624 on jump hosts. **App/TA** (typical add-on context): Splunk Add-on for CyberArk (4295), Windows TA (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:psm. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=\"cyberark:psm\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **human** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(human) OR human=\"-\" OR match(human,\"^svc_\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by dest_host, application, protocol** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**HMI and Jump Host Human User Identification Gaps (IEC 62443-3-3 / SR 1.1)** — Detects interactive OT logons missing identifiable human user mapping to support unique identification of people accessing IACS UIs.\n\nDocumented **Data sources**: `index=pam` CyberArk PSM; `WinEventLog:Security` 4624 on jump hosts. **App/TA** (typical add-on context): Splunk Add-on for CyberArk (4295), Windows TA (742). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (top gaps), Single value (events with no human id).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects interactive OT logons missing identifiable human user mapping to support unique identification of people accessing IACS UIs. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 1.1 (Human user identification and authentication) is enforced — Splunk UC-22.15.11: HMI and Jump Host Human User Identification Gaps.",
                  "ea": "Saved search 'UC-22.15.11' running on index=pam CyberArk PSM; WinEventLog:Security 4624 on jump hosts, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.12",
              "n": "Non-Interactive Industrial Protocol Clients Without Device Identity (IEC 62443-3-3 / SR 1.2)",
              "c": "high",
              "f": "advanced",
              "v": "Finds Modbus/OPC-UA sessions where client identity cannot be tied to an approved automation asset, weakening software process and device identification.",
              "t": "Splunk Edge Hub, Splunk OT Security Add-on (5151)",
              "d": "`index=ot_protocol` Edge Hub `edge_hub_modbus`, `edge_hub_opcua`",
              "q": "index=ot_protocol (sourcetype=\"edge_hub_modbus\" OR sourcetype=\"edge_hub_opcua\") earliest=-24h\n| eval client_key=coalesce(client_ip, opcua_client_appuri, modbus_unit_id)\n| lookup ot_approved_clients.csv client_key OUTPUT asset_name, owner\n| where isnull(asset_name)\n| stats dc(function_code) as fc_dc, dc(node_id) as nodes, count by client_key, sourcetype\n| sort - count",
              "m": "(1) Build authoritative client inventory from engineering CMDB; (2) enrich Edge Hub events with IP→asset; (3) alert on new clients; (4) certificate-pin OPC-UA where possible.",
              "z": "Table (unknown clients), Map (optional), Time chart (first seen).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub, Splunk OT Security Add-on (5151).\n• Ensure the following data sources are available: `index=ot_protocol` Edge Hub `edge_hub_modbus`, `edge_hub_opcua`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build authoritative client inventory from engineering CMDB; (2) enrich Edge Hub events with IP→asset; (3) alert on new clients; (4) certificate-pin OPC-UA where possible.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_protocol (sourcetype=\"edge_hub_modbus\" OR sourcetype=\"edge_hub_opcua\") earliest=-24h\n| eval client_key=coalesce(client_ip, opcua_client_appuri, modbus_unit_id)\n| lookup ot_approved_clients.csv client_key OUTPUT asset_name, owner\n| where isnull(asset_name)\n| stats dc(function_code) as fc_dc, dc(node_id) as nodes, count by client_key, sourcetype\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Non-Interactive Industrial Protocol Clients Without Device Identity (IEC 62443-3-3 / SR 1.2)** — Finds Modbus/OPC-UA sessions where client identity cannot be tied to an approved automation asset, weakening software process and device identification.\n\nDocumented **Data sources**: `index=ot_protocol` Edge Hub `edge_hub_modbus`, `edge_hub_opcua`. **App/TA** (typical add-on context): Splunk Edge Hub, Splunk OT Security Add-on (5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_protocol; **sourcetype**: edge_hub_modbus, edge_hub_opcua. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_protocol, sourcetype=\"edge_hub_modbus\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **client_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(asset_name)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by client_key, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (unknown clients), Map (optional), Time chart (first seen).",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We find Modbus/OPC-UA sessions where client identity cannot be tied to an approved automation asset, weakening software process and device identification. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 1.2 is enforced — Splunk UC-22.15.12: Non-Interactive Industrial Protocol Clients Without Device Identity.",
                  "ea": "Saved search 'UC-22.15.12' running on index=ot_protocol Edge Hub edge_hub_modbus, edge_hub_opcua, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.13",
              "n": "Stale or Shared OT Service Accounts (IEC 62443-3-3 / SR 1.3)",
              "c": "high",
              "f": "intermediate",
              "v": "Surfaces OT accounts with weak rotation, excessive group breadth, or shared naming patterns inconsistent with IACS account management practice.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`WinEventLog:Security` password changes 4723/4724; AD snapshot via scripted input",
              "q": "index=windows EventCode IN (\"4723\",\"4724\") earliest=-180d\n| eval account=Target_User_Name\n| where match(account,\"(?i)scada|plc|hmi|svc_ot|batch\")\n| stats latest(_time) as last_pw_change by account\n| eval days_since=round((now()-last_pw_change)/86400,0)\n| where days_since>90 OR isnull(last_pw_change)\n| join type=left account [ search index=ad_snapshot sourcetype=\"ad:user\" | fields account, memberOf | eval grp_count=mvcount(memberOf) ]\n| table account, days_since, grp_count",
              "m": "(1) Ingest periodic LDAP snapshots to CSV; (2) define password rotation SLAs by tier; (3) eliminate shared accounts; (4) track exceptions with expiry dates.",
              "z": "Table, Bar (aging buckets), Single value (stale count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `WinEventLog:Security` password changes 4723/4724; AD snapshot via scripted input.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest periodic LDAP snapshots to CSV; (2) define password rotation SLAs by tier; (3) eliminate shared accounts; (4) track exceptions with expiry dates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows EventCode IN (\"4723\",\"4724\") earliest=-180d\n| eval account=Target_User_Name\n| where match(account,\"(?i)scada|plc|hmi|svc_ot|batch\")\n| stats latest(_time) as last_pw_change by account\n| eval days_since=round((now()-last_pw_change)/86400,0)\n| where days_since>90 OR isnull(last_pw_change)\n| join type=left account [ search index=ad_snapshot sourcetype=\"ad:user\" | fields account, memberOf | eval grp_count=mvcount(memberOf) ]\n| table account, days_since, grp_count\n```\n\nUnderstanding this SPL\n\n**Stale or Shared OT Service Accounts (IEC 62443-3-3 / SR 1.3)** — Surfaces OT accounts with weak rotation, excessive group breadth, or shared naming patterns inconsistent with IACS account management practice.\n\nDocumented **Data sources**: `WinEventLog:Security` password changes 4723/4724; AD snapshot via scripted input. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **account** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(account,\"(?i)scada|plc|hmi|svc_ot|batch\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by account** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since>90 OR isnull(last_pw_change)` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Stale or Shared OT Service Accounts (IEC 62443-3-3 / SR 1.3)**): table account, days_since, grp_count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Stale or Shared OT Service Accounts (IEC 62443-3-3 / SR 1.3)** — Surfaces OT accounts with weak rotation, excessive group breadth, or shared naming patterns inconsistent with IACS account management practice.\n\nDocumented **Data sources**: `WinEventLog:Security` password changes 4723/4724; AD snapshot via scripted input. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar (aging buckets), Single value (stale count).\n\nScripted input (generic example)\nThis use case relies on a scripted input. In the app's local/inputs.conf add a stanza such as:\n\n```ini\n[script://$SPLUNK_HOME/etc/apps/YourApp/bin/collect.sh]\ninterval = 300\nsourcetype = your_sourcetype\nindex = main\ndisabled = 0\n```\n\nThe script should print one event per line (e.g. key=value). 
Example minimal script (bash):\n\n```bash\n#!/usr/bin/env bash\n# Output metrics or events, one per line\necho \"metric=value timestamp=$(date +%s)\"\n```\n\nFor full details (paths, scheduling, permissions), see the Implementation guide: docs/implementation-guide.md",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surfaces OT accounts with weak rotation, excessive group breadth, or shared naming patterns inconsistent with IACS account management practice. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 1.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 1.3 is enforced — Splunk UC-22.15.13: Stale or Shared OT Service Accounts.",
                  "ea": "Saved search 'UC-22.15.13' running on WinEventLog:Security password changes 4723/4724; AD snapshot via scripted input, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.14",
              "n": "Weak OTP or Missing MFA Step for OT VPN (IEC 62443-3-3 / SR 1.5)",
              "c": "critical",
              "f": "intermediate",
              "v": "Identifies VPN authentications that skip second factor or reuse legacy token methods for paths into OT jump zones.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`pan:globalprotect` or `pan:auth`",
              "q": "index=vpn sourcetype=\"pan:globalprotect\" earliest=-7d\n| eval mfa=coalesce(factor_type, auth_method, mfa_mechanism)\n| where stage=\"login\" AND (isnull(mfa) OR mfa=\"none\" OR mfa=\"password-only\")\n| lookup ot_vpn_profiles.csv portal OUTPUT requires_mfa\n| where requires_mfa=\"true\"\n| stats count by user, portal, public_ip, host\n| sort - count",
              "m": "(1) Map portals to MFA policy; (2) upgrade GP app versions enforcing MFA; (3) block legacy profiles; (4) weekly exception report to IAM.",
              "z": "Table, Single value (violations), Time chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `pan:globalprotect` or `pan:auth`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map portals to MFA policy; (2) upgrade GP app versions enforcing MFA; (3) block legacy profiles; (4) weekly exception report to IAM.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"pan:globalprotect\" earliest=-7d\n| eval mfa=coalesce(factor_type, auth_method, mfa_mechanism)\n| where stage=\"login\" AND (isnull(mfa) OR mfa=\"none\" OR mfa=\"password-only\")\n| lookup ot_vpn_profiles.csv portal OUTPUT requires_mfa\n| where requires_mfa=\"true\"\n| stats count by user, portal, public_ip, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Weak OTP or Missing MFA Step for OT VPN (IEC 62443-3-3 / SR 1.5)** — Identifies VPN authentications that skip second factor or reuse legacy token methods for paths into OT jump zones.\n\nDocumented **Data sources**: `pan:globalprotect` or `pan:auth`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: pan:globalprotect. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"pan:globalprotect\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mfa** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where stage=\"login\" AND (isnull(mfa) OR mfa=\"none\" OR mfa=\"password-only\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where requires_mfa=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, portal, public_ip, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Weak OTP or Missing MFA Step for OT VPN (IEC 62443-3-3 / SR 1.5)** — Identifies VPN authentications that skip second factor or reuse legacy token methods for paths into OT jump zones.\n\nDocumented **Data sources**: `pan:globalprotect` or `pan:auth`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value (violations), Time chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We spot VPN authentications that skip second factor or reuse legacy token methods for paths into OT jump zones. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.dest | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 1.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 1.5 is enforced — Splunk UC-22.15.14: Weak OTP or Missing MFA Step for OT VPN.",
                  "ea": "Saved search 'UC-22.15.14' running on pan:globalprotect or pan:auth, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.15",
              "n": "Short or Default Password Patterns in OT Authentication Logs (IEC 62443-3-3 / SR 1.7)",
              "c": "high",
              "f": "advanced",
              "v": "Uses policy checks on authentication metadata (where available) and heuristics on failed logon bursts to catch weak password hygiene in OT accounts.",
              "t": "Windows TA (742), FortiGate TA",
              "d": "`WinEventLog:Security` 4625; `fortigate_traffic` or `sslvpn` auth failures",
              "q": "(index=windows EventCode=\"4625\" Status!=\"0x0\") OR (index=fw sourcetype=\"fortigate_event\" subtype=\"vpn\")\n| eval user=coalesce(Account_Name, user, vpnuser)\n| where match(user,\"(?i)hmi|operator|ot_\")\n| bin _time span=1h\n| stats count as fails by _time, user, src_ip\n| where fails>30\n| sort - fails",
              "m": "(1) Never log cleartext passwords—use failure reason codes only; (2) pair with AD password policy audit offline; (3) enforce lockout thresholds on jump hosts; (4) reset compromised accounts via PAM.",
              "z": "Time chart (fail spikes), Table (user, src), Single value (distinct users).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Windows TA](https://splunkbase.splunk.com/app/742), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows TA (742), FortiGate TA.\n• Ensure the following data sources are available: `WinEventLog:Security` 4625; `fortigate_traffic` or `sslvpn` auth failures.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Never log cleartext passwords—use failure reason codes only; (2) pair with AD password policy audit offline; (3) enforce lockout thresholds on jump hosts; (4) reset compromised accounts via PAM.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=windows EventCode=\"4625\" Status!=\"0x0\") OR (index=fw sourcetype=\"fortigate_event\" subtype=\"vpn\")\n| eval user=coalesce(Account_Name, user, vpnuser)\n| where match(user,\"(?i)hmi|operator|ot_\")\n| bin _time span=1h\n| stats count as fails by _time, user, src_ip\n| where fails>30\n| sort - fails\n```\n\nUnderstanding this SPL\n\n**Short or Default Password Patterns in OT Authentication Logs (IEC 62443-3-3 / SR 1.7)** — Uses policy checks on authentication metadata (where available) and heuristics on failed logon bursts to catch weak password hygiene in OT accounts.\n\nDocumented **Data sources**: `WinEventLog:Security` 4625; `fortigate_traffic` or `sslvpn` auth failures. **App/TA** (typical add-on context): Windows TA (742), FortiGate TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, fw; **sourcetype**: fortigate_event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, index=fw, sourcetype=\"fortigate_event\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **user** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where match(user,\"(?i)hmi|operator|ot_\")` — typically the threshold or rule expression for this monitoring goal.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, user, src_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>30` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Short or Default Password Patterns in OT Authentication Logs (IEC 62443-3-3 / SR 1.7)** — Uses policy checks on authentication metadata (where available) and heuristics on failed logon bursts to catch weak password hygiene in OT accounts.\n\nDocumented **Data sources**: `WinEventLog:Security` 4625; `fortigate_traffic` or `sslvpn` auth failures. 
**App/TA** (typical add-on context): Windows TA (742), FortiGate TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fail spikes), Table (user, src), Single value (distinct users).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We uses policy checks on authentication metadata (where available) and heuristics on failed logon bursts to catch weak password hygiene in OT accounts. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.src span=1h | sort - count",
              "e": [
                "fortinet"
              ],
              "em": [
                "fortinet_fortigate"
              ],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 1.7",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 1.7 is enforced — Splunk UC-22.15.15: Short or Default Password Patterns in OT Authentication Logs.",
                  "ea": "Saved search 'UC-22.15.15' running on WinEventLog:Security 4625; fortigate_traffic or sslvpn auth failures, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Fortinet FortiGate App for Splunk",
                  "id": 2800,
                  "url": "https://splunkbase.splunk.com/app/2800",
                  "desc": "Threat visualizations and analytics for FortiGate firewall and UTM data",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/aa2b4e52-3252-11e5-84c5-02e61222c923.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/480c26fe-3254-11e5-a6ef-063854888a19.png"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.16",
              "n": "Certificate-Based OPC-UA Logon Failures and Weak Trust Stores (IEC 62443-3-3 / SR 1.9)",
              "c": "high",
              "f": "advanced",
              "v": "Tracks OPC-UA secure channel failures and untrusted issuer patterns to validate strength of public-key authentication for industrial servers.",
              "t": "Splunk Edge Hub / unified `index=ot_protocol`",
              "d": "`edge_hub_opcua` or unified sourcetype with `security_policy`",
              "q": "index=ot_protocol sourcetype=\"edge_hub_opcua\" earliest=-7d\n| search status!=\"Good\" OR match(_raw,\"(?i)BadCertificate|issuer|untrusted\")\n| eval server=coalesce(opcua_server_endpoint, dest_host)\n| stats count by server, status, opcua_security_mode, opcua_user_token_type\n| sort - count",
              "m": "(1) Deploy UA GDS or centralized PKI; (2) map status codes to runbooks; (3) alert on spike per server; (4) rotate OPC-UA app certs on schedule.",
              "z": "Table, Time chart, Single value (failed channels).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub / unified `index=ot_protocol`.\n• Ensure the following data sources are available: `edge_hub_opcua` or unified sourcetype with `security_policy`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Deploy UA GDS or centralized PKI; (2) map status codes to runbooks; (3) alert on spike per server; (4) rotate OPC-UA app certs on schedule.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_protocol sourcetype=\"edge_hub_opcua\" earliest=-7d\n| search status!=\"Good\" OR match(_raw,\"(?i)BadCertificate|issuer|untrusted\")\n| eval server=coalesce(opcua_server_endpoint, dest_host)\n| stats count by server, status, opcua_security_mode, opcua_user_token_type\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Certificate-Based OPC-UA Logon Failures and Weak Trust Stores (IEC 62443-3-3 / SR 1.9)** — Tracks OPC-UA secure channel failures and untrusted issuer patterns to validate strength of public-key authentication for industrial servers.\n\nDocumented **Data sources**: `edge_hub_opcua` or unified sourcetype with `security_policy`. **App/TA** (typical add-on context): Splunk Edge Hub / unified `index=ot_protocol`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_protocol; **sourcetype**: edge_hub_opcua. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_protocol, sourcetype=\"edge_hub_opcua\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **server** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by server, status, opcua_security_mode, opcua_user_token_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Time chart, Single value (failed channels).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracks OPC-UA secure channel failures and untrusted issuer patterns to validate strength of public-key authentication for industrial servers. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 1.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 1.9 is enforced — Splunk UC-22.15.16: Certificate-Based OPC-UA Logon Failures and Weak Trust Stores.",
                  "ea": "Saved search 'UC-22.15.16' running on edge_hub_opcua or unified sourcetype with security_policy, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.17",
              "n": "OT Access from Untrusted Networks Without Split Tunnel Block (IEC 62443-3-3 / SR 1.13)",
              "c": "critical",
              "f": "intermediate",
              "v": "Flags VPN or agent sessions originating from high-risk geos or residential ASNs while accessing OT jump portals.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`pan:globalprotect` with `public_ip`",
              "q": "index=vpn sourcetype=\"pan:globalprotect\" earliest=-7d\n| iplocation public_ip\n| lookup ot_trusted_country.csv Country OUTPUT trust\n| where trust!=\"trusted\" OR like(ASN,\"*RESIDENTIAL*\")\n| lookup ot_jump_portals.csv portal OUTPUT sl_t\n| stats earliest(_time) as first, latest(_time) as last, values(public_ip) as ips by user, portal, Country\n| sort - last",
              "m": "(1) Maintain country allowlist aligned to business; (2) require hardware token from untrusted regions; (3) integrate threat intel ASN feed; (4) auto-disable accounts on repeat hits.",
              "z": "Choropleth, Table, Single value (sessions).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic when joined to firewall](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic_when_joined_to_firewall)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `pan:globalprotect` with `public_ip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain country allowlist aligned to business; (2) require hardware token from untrusted regions; (3) integrate threat intel ASN feed; (4) auto-disable accounts on repeat hits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"pan:globalprotect\" earliest=-7d\n| iplocation public_ip\n| lookup ot_trusted_country.csv Country OUTPUT trust\n| where trust!=\"trusted\" OR like(ASN,\"*RESIDENTIAL*\")\n| lookup ot_jump_portals.csv portal OUTPUT sl_t\n| stats earliest(_time) as first, latest(_time) as last, values(public_ip) as ips by user, portal, Country\n| sort - last\n```\n\nUnderstanding this SPL\n\n**OT Access from Untrusted Networks Without Split Tunnel Block (IEC 62443-3-3 / SR 1.13)** — Flags VPN or agent sessions originating from high-risk geos or residential ASNs while accessing OT jump portals.\n\nDocumented **Data sources**: `pan:globalprotect` with `public_ip`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: pan:globalprotect. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"pan:globalprotect\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **OT Access from Untrusted Networks Without Split Tunnel Block (IEC 62443-3-3 / SR 1.13)**): iplocation public_ip\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where trust!=\"trusted\" OR like(ASN,\"*RESIDENTIAL*\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by user, portal, Country** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OT Access from Untrusted Networks Without Split Tunnel Block (IEC 62443-3-3 / SR 1.13)** — Flags VPN or agent sessions originating from high-risk geos or residential ASNs while accessing OT jump portals.\n\nDocumented **Data sources**: `pan:globalprotect` with `public_ip`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Choropleth, Table, Single value (sessions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We flags VPN or agent sessions originating from high-risk geos or residential ASNs while accessing OT jump portals. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Network_Traffic when joined to firewall"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 1.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 1.13 is enforced — Splunk UC-22.15.17: OT Access from Untrusted Networks Without Split Tunnel Block.",
                  "ea": "Saved search 'UC-22.15.17' running on pan:globalprotect with public_ip, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.18",
              "n": "Unauthorized Writes to Modbus Holding Registers (IEC 62443-3-3 / SR 2.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Detects Modbus function codes that modify process values from hosts not authorized for write operations.",
              "t": "Splunk Edge Hub",
              "d": "`edge_hub_modbus` with `function_code`, `src_ip`",
              "q": "index=ot_protocol sourcetype=\"edge_hub_modbus\" earliest=-24h\n| where function_code IN (5,6,15,16,22,23)\n| lookup modbus_write_acl.csv src_ip dest_plc OUTPUT role\n| where role!=\"engineer_approved\" OR isnull(role)\n| stats values(register_start) as regs, count by src_ip, dest_plc, user\n| sort - count",
              "m": "(1) Build ACL from approved EWS IPs; (2) segment read-only analytics clients; (3) test with HIL bench; (4) integrate with SOAR for host isolation.",
              "z": "Table, Heatmap (IP×PLC), Time chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub.\n• Ensure the following data sources are available: `edge_hub_modbus` with `function_code`, `src_ip`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build ACL from approved EWS IPs; (2) segment read-only analytics clients; (3) test with HIL bench; (4) integrate with SOAR for host isolation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_protocol sourcetype=\"edge_hub_modbus\" earliest=-24h\n| where function_code IN (5,6,15,16,22,23)\n| lookup modbus_write_acl.csv src_ip dest_plc OUTPUT role\n| where role!=\"engineer_approved\" OR isnull(role)\n| stats values(register_start) as regs, count by src_ip, dest_plc, user\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Unauthorized Writes to Modbus Holding Registers (IEC 62443-3-3 / SR 2.1)** — Detects Modbus function codes that modify process values from hosts not authorized for write operations.\n\nDocumented **Data sources**: `edge_hub_modbus` with `function_code`, `src_ip`. **App/TA** (typical add-on context): Splunk Edge Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_protocol; **sourcetype**: edge_hub_modbus. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_protocol, sourcetype=\"edge_hub_modbus\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where function_code IN (5,6,15,16,22,23)` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where role!=\"engineer_approved\" OR isnull(role)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_ip, dest_plc, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Heatmap (IP×PLC), Time chart.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects Modbus function codes that modify process values from hosts not authorized for write operations. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 2.1 is enforced — Splunk UC-22.15.18: Unauthorized Writes to Modbus Holding Registers.",
                  "ea": "Saved search 'UC-22.15.18' running on edge_hub_modbus with function_code, src_ip, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.19",
              "n": "Unsigned Macros or Scripts Executed on Engineering Workstation (IEC 62443-3-3 / SR 2.4)",
              "c": "high",
              "f": "advanced",
              "v": "Monitors execution of script engines and office macros from OT engineering hosts where mobile code should be restricted.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`WinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode 1",
              "q": "index=sysmon EventCode=1 host IN (\"OT-EWS-*\") earliest=-7d\n| where (Image LIKE \"%\\powershell.exe\" OR Image LIKE \"%\\mshta.exe\" OR Image LIKE \"%\\cscript.exe\")\n    AND NOT(CommandLine LIKE \"%-nop -enc%\")\n| lookup ews_signed_scripts.csv CommandLine OUTPUT approved\n| where approved!=\"true\"\n| stats count by host, Image, CommandLine\n| sort - count",
              "m": "(1) Deploy Sysmon with tuned config for EWS; (2) Approve known vendor installers via hash lookup; (3) block VBA macros at policy layer; (4) review weekly top commands.",
              "z": "Table, Sankey (parent→child optional), Time chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `WinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode 1.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Deploy Sysmon with tuned config for EWS; (2) Approve known vendor installers via hash lookup; (3) block VBA macros at policy layer; (4) review weekly top commands.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sysmon EventCode=1 host IN (\"OT-EWS-*\") earliest=-7d\n| where (Image LIKE \"%\\powershell.exe\" OR Image LIKE \"%\\mshta.exe\" OR Image LIKE \"%\\cscript.exe\")\n    AND NOT(CommandLine LIKE \"%-nop -enc%\")\n| lookup ews_signed_scripts.csv CommandLine OUTPUT approved\n| where approved!=\"true\"\n| stats count by host, Image, CommandLine\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Unsigned Macros or Scripts Executed on Engineering Workstation (IEC 62443-3-3 / SR 2.4)** — Monitors execution of script engines and office macros from OT engineering hosts where mobile code should be restricted.\n\nDocumented **Data sources**: `WinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode 1. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sysmon.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sysmon, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where (Image LIKE \"%\\powershell.exe\" OR Image LIKE \"%\\mshta.exe\" OR Image LIKE \"%\\cscript.exe\")\n    AND NOT(CommandLine LIK…` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where approved!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, Image, CommandLine** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Unsigned Macros or Scripts Executed on Engineering Workstation (IEC 62443-3-3 / SR 2.4)** — Monitors execution of script engines and office macros from OT engineering hosts where mobile code should be restricted.\n\nDocumented **Data sources**: `WinEventLog:Microsoft-Windows-Sysmon/Operational` EventCode 1. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Sankey (parent→child optional), Time chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch execution of script engines and office macros from OT engineering hosts where mobile code should be restricted. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Endpoint (Sysmon when mapped)"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 2.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 2.4 is enforced — Splunk UC-22.15.19: Unsigned Macros or Scripts Executed on Engineering Workstation.",
                  "ea": "Saved search 'UC-22.15.19' running on WinEventLog:Microsoft-Windows-Sysmon/Operational EventCode 1, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.20",
              "n": "HMI Session Lock Bypass — Long Idle Operator Consoles (IEC 62443-3-3 / SR 2.5)",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures continuous HMI operator sessions without screen-lock events to evidence session lock policy effectiveness.",
              "t": "Windows TA or HMI vendor audit forwarder",
              "d": "`index=hmi_audit` `sourcetype=\"winlogon\"` on HMI thin clients",
              "q": "index=hmi_audit (sourcetype=\"winlogon\" OR sourcetype=\"hmi:session\") earliest=-7d\n| eval action=coalesce(EventCode, event_type)\n| transaction user host maxspan=8h maxpause=15m\n| eval duration_min=round(duration/60,1)\n| where duration_min>240 AND mvcount(action)=1\n| table starttime, endtime, user, host, duration_min",
              "m": "(1) Enforce GPO screen lock on HMI OS; (2) reduce `maxpause` per plant standard; (3) alert on transactions > policy; (4) physical walk-through for exceptions.",
              "z": "Timeline, Table, Single value (long sessions).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows TA or HMI vendor audit forwarder.\n• Ensure the following data sources are available: `index=hmi_audit` `sourcetype=\"winlogon\"` on HMI thin clients.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enforce GPO screen lock on HMI OS; (2) reduce `maxpause` per plant standard; (3) alert on transactions > policy; (4) physical walk-through for exceptions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hmi_audit (sourcetype=\"winlogon\" OR sourcetype=\"hmi:session\") earliest=-7d\n| eval action=coalesce(EventCode, event_type)\n| transaction user host maxspan=8h maxpause=15m\n| eval duration_min=round(duration/60,1)\n| where duration_min>240 AND mvcount(action)=1\n| table starttime, endtime, user, host, duration_min\n```\n\nUnderstanding this SPL\n\n**HMI Session Lock Bypass — Long Idle Operator Consoles (IEC 62443-3-3 / SR 2.5)** — Measures continuous HMI operator sessions without screen-lock events to evidence session lock policy effectiveness.\n\nDocumented **Data sources**: `index=hmi_audit` `sourcetype=\"winlogon\"` on HMI thin clients. **App/TA** (typical add-on context): Windows TA or HMI vendor audit forwarder. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hmi_audit; **sourcetype**: winlogon, hmi:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hmi_audit, sourcetype=\"winlogon\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• `eval` defines or adjusts **duration_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where duration_min>240 AND mvcount(action)=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HMI Session Lock Bypass — Long Idle Operator Consoles (IEC 62443-3-3 / SR 2.5)**): table starttime, endtime, user, host, duration_min\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline, Table, Single value (long sessions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measures continuous HMI operator sessions without screen-lock events to evidence session lock policy effectiveness. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 2.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 2.5 is enforced — Splunk UC-22.15.20: HMI Session Lock Bypass — Long Idle Operator Consoles.",
                  "ea": "Saved search 'UC-22.15.20' running on index=hmi_audit sourcetype=\"winlogon\" on HMI thin clients, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.21",
              "n": "Forced Remote Engineering Session Termination After Hours (IEC 62443-3-3 / SR 2.6)",
              "c": "high",
              "f": "intermediate",
              "v": "Ensures vendor and internal remote sessions to OT are closed at shift end by correlating still-active tunnels with schedule.",
              "t": "Palo Alto (2757), CyberArk (4295)",
              "d": "`pan:traffic` app=ssl,sslvpn; `cyberark:psm` session end",
              "q": "index=fw sourcetype=\"pan:traffic\" earliest=-2d\n| where app IN (\"ssl\",\"sslvpn\",\"gp-gateway\") AND dest_zone=\"OT-Jump\"\n| eval sess_key=concat(src,\"-\",dest_ip)\n| stats min(_time) as start, max(_time) as end by sess_key, user\n| eval span_h=(end-start)/3600\n| eval ended_after_shift=if((strftime(end,\"%H\")>\"22\" OR strftime(end,\"%H\")<\"05\") AND span_h>4,1,0)\n| where ended_after_shift=1\n| table start, end, span_h, sess_key, user",
              "m": "(1) Configure firewall idle timeouts; (2) CyberArk max session duration; (3) validate OT maintenance windows; (4) alert if sessions exceed 8h.",
              "z": "Timeline, Table, Bar (duration buckets).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Palo Alto](https://splunkbase.splunk.com/app/2757), [CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Palo Alto (2757), CyberArk (4295).\n• Ensure the following data sources are available: `pan:traffic` app=ssl,sslvpn; `cyberark:psm` session end.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Configure firewall idle timeouts; (2) CyberArk max session duration; (3) validate OT maintenance windows; (4) alert if sessions exceed 8h.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=fw sourcetype=\"pan:traffic\" earliest=-2d\n| where app IN (\"ssl\",\"sslvpn\",\"gp-gateway\") AND dest_zone=\"OT-Jump\"\n| eval sess_key=concat(src,\"-\",dest_ip)\n| stats min(_time) as start, max(_time) as end by sess_key, user\n| eval span_h=(end-start)/3600\n| eval ended_after_shift=if((strftime(end,\"%H\")>\"22\" OR strftime(end,\"%H\")<\"05\") AND span_h>4,1,0)\n| where ended_after_shift=1\n| table start, end, span_h, sess_key, user\n```\n\nUnderstanding this SPL\n\n**Forced Remote Engineering Session Termination After Hours (IEC 62443-3-3 / SR 2.6)** — Ensures vendor and internal remote sessions to OT are closed at shift end by correlating still-active tunnels with schedule.\n\nDocumented **Data sources**: `pan:traffic` app=ssl,sslvpn; `cyberark:psm` session end. **App/TA** (typical add-on context): Palo Alto (2757), CyberArk (4295). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: fw; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=fw, sourcetype=\"pan:traffic\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where app IN (\"ssl\",\"sslvpn\",\"gp-gateway\") AND dest_zone=\"OT-Jump\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **sess_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by sess_key, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **span_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ended_after_shift** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ended_after_shift=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Forced Remote Engineering Session Termination After Hours (IEC 62443-3-3 / SR 2.6)**): table start, end, span_h, sess_key, user\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Forced Remote Engineering Session Termination After Hours (IEC 62443-3-3 / SR 2.6)** — Ensures vendor and internal remote sessions to OT are closed at shift end by correlating still-active tunnels with schedule.\n\nDocumented **Data sources**: `pan:traffic` app=ssl,sslvpn; `cyberark:psm` session end. **App/TA** (typical add-on context): Palo Alto (2757), CyberArk (4295). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline, Table, Bar (duration buckets).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We make sure vendor and internal remote sessions to OT are closed at shift end by correlating still-active tunnels with schedule. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 2.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 2.6 is enforced — Splunk UC-22.15.21: Forced Remote Engineering Session Termination After Hours.",
                  "ea": "Saved search 'UC-22.15.21' running on pan:traffic app=ssl,sslvpn; cyberark:psm session end, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.22",
              "n": "OT Auditable Event Coverage by Zone and Source (IEC 62443-3-3 / SR 2.8)",
              "c": "high",
              "f": "intermediate",
              "v": "Compares expected auditable sources per IEC 62443 zone to actual indexed event rates to find blind spots.",
              "t": "Splunk OT Security Add-on (5151)",
              "d": "`metadata` sources; `index=ot_audit`",
              "q": "| tstats count where index=ot_audit by sourcetype, host prefix=ot\n| lookup ot_expected_audit_sources.csv sourcetype OUTPUT zone, required\n| eval gap=if(required=\"true\" AND count<100,\"low_volume\",\"ok\")\n| where gap!=\"ok\"\n| table sourcetype, host, count, zone",
              "m": "(1) Define required sourcetypes per Purdue level; (2) onboard missing forwarders; (3) validate license throttling not dropping; (4) quarterly coverage attestation.",
              "z": "Bar (actual vs expected), Table, Single value (gaps).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151).\n• Ensure the following data sources are available: `metadata` sources; `index=ot_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define required sourcetypes per Purdue level; (2) onboard missing forwarders; (3) validate license throttling not dropping; (4) quarterly coverage attestation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats count where index=ot_audit by sourcetype, host prefix=ot\n| lookup ot_expected_audit_sources.csv sourcetype OUTPUT zone, required\n| eval gap=if(required=\"true\" AND count<100,\"low_volume\",\"ok\")\n| where gap!=\"ok\"\n| table sourcetype, host, count, zone\n```\n\nUnderstanding this SPL\n\n**OT Auditable Event Coverage by Zone and Source (IEC 62443-3-3 / SR 2.8)** — Compares expected auditable sources per IEC 62443 zone to actual indexed event rates to find blind spots.\n\nDocumented **Data sources**: `metadata` sources; `index=ot_audit`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_audit.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against precomputed summaries; ensure the referenced data model is accelerated.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap!=\"ok\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **OT Auditable Event Coverage by Zone and Source (IEC 62443-3-3 / SR 2.8)**): table sourcetype, host, count, zone\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar (actual vs expected), Table, Single value (gaps).",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare expected auditable sources per IEC 62443 zone to actual indexed event rates to find blind spots. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 2.8",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 2.8 (Auditable events) is enforced — Splunk UC-22.15.22: OT Auditable Event Coverage by Zone and Source.",
                  "ea": "Saved search 'UC-22.15.22' running on metadata sources; index=ot_audit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.23",
              "n": "Audit Index Growth vs Licensed Retention (Capacity Risk) (IEC 62443-3-3 / SR 2.9)",
              "c": "medium",
              "f": "intermediate",
              "v": "Forecasts `ot_audit` disk usage to avoid silent loss of security audit records before policy retention is met.",
              "t": "Splunk Enterprise / Cloud REST",
              "d": "`/services/data/indexes` for `ot_audit`",
              "q": "| rest /services/data/indexes splunk_server=local count=0 title=ot_audit\n| eval gb=currentDBSizeMB/1024, days=round(frozenTimePeriodInSecs/86400,1)\n| eval daily_gb=gb/30\n| eval days_to_full=if(daily_gb>0, round((maxTotalDataSizeMB/1024)/daily_gb,1), null())\n| table title, gb, days, daily_gb, days_to_full",
              "m": "(1) Set `maxTotalDataSizeMB` with headroom; (2) route verbose protocols to warm/cold tier; (3) expand frozen path; (4) alert if `days_to_full`<90.",
              "z": "Single value, Gauge, Table.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Cloud REST.\n• Ensure the following data sources are available: `/services/data/indexes` for `ot_audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Set `maxTotalDataSizeMB` with headroom; (2) route verbose protocols to warm/cold tier; (3) expand frozen path; (4) alert if `days_to_full`<90.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/data/indexes splunk_server=local count=0 title=ot_audit\n| eval gb=currentDBSizeMB/1024, days=round(frozenTimePeriodInSecs/86400,1)\n| eval daily_gb=gb/30\n| eval days_to_full=if(daily_gb>0, round((maxTotalDataSizeMB/1024)/daily_gb,1), null())\n| table title, gb, days, daily_gb, days_to_full\n```\n\nUnderstanding this SPL\n\n**Audit Index Growth vs Licensed Retention (Capacity Risk) (IEC 62443-3-3 / SR 2.9)** — Forecasts `ot_audit` disk usage to avoid silent loss of security audit records before policy retention is met.\n\nDocumented **Data sources**: `/services/data/indexes` for `ot_audit`. **App/TA** (typical add-on context): Splunk Enterprise / Cloud REST. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• `eval` defines or adjusts **gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **daily_gb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_full** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Audit Index Growth vs Licensed Retention (Capacity Risk) (IEC 62443-3-3 / SR 2.9)**): table title, gb, days, daily_gb, days_to_full\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value, Gauge, Table.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We forecasts `ot_audit` disk usage to avoid silent loss of security audit records before policy retention is met. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 2.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 2.9 (Audit storage capacity) is enforced — Splunk UC-22.15.23: Audit Index Growth vs Licensed Retention (Capacity Risk).",
                  "ea": "Saved search 'UC-22.15.23' running on /services/data/indexes for ot_audit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.24",
              "n": "Forwarder Stops — Audit Pipeline Failure Response (IEC 62443-3-3 / SR 2.10)",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects universal forwarder or OT collector outages that stop security-relevant events reaching the SIEM.",
              "t": "Splunk Monitoring Console / `_internal`",
              "d": "`_internal` metrics.log, `splunkd` health",
              "q": "index=_internal source=*metrics.log group=tcpout_connections earliest=-4h\n| stats latest(kb) as kb by host\n| join type=left host [ search index=_internal sourcetype=splunkd earliest=-4h \"Closing tcp\" OR \"ERROR TcpOutputFd\" | stats count as errs by host ]\n| where isnull(kb) OR errs>10\n| table host, kb, errs",
              "m": "(1) Enable MC alerts; (2) map hosts to OT sites; (3) redundant intermediate HF; (4) local buffering on UF until restored.",
              "z": "Table (affected sites), Single value (hosts down), Time chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Monitoring Console / `_internal`.\n• Ensure the following data sources are available: `_internal` metrics.log, `splunkd` health.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable MC alerts; (2) map hosts to OT sites; (3) redundant intermediate HF; (4) local buffering on UF until restored.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_internal source=*metrics.log group=tcpout_connections earliest=-4h\n| stats latest(kb) as kb by host\n| join type=left host [ search index=_internal sourcetype=splunkd earliest=-4h \"Closing tcp\" OR \"ERROR TcpOutputFd\" | stats count as errs by host ]\n| where isnull(kb) OR errs>10\n| table host, kb, errs\n```\n\nUnderstanding this SPL\n\n**Forwarder Stops — Audit Pipeline Failure Response (IEC 62443-3-3 / SR 2.10)** — Detects universal forwarder or OT collector outages that stop security-relevant events reaching the SIEM.\n\nDocumented **Data sources**: `_internal` metrics.log, `splunkd` health. **App/TA** (typical add-on context): Splunk Monitoring Console / `_internal`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_internal, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(kb) OR errs>10` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Forwarder Stops — Audit Pipeline Failure Response (IEC 62443-3-3 / SR 2.10)**): table host, kb, errs\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (affected sites), Single value (hosts down), Time chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects universal forwarder or OT collector outages that stop security-relevant events reaching the SIEM. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 2.10",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 2.10 is enforced — Splunk UC-22.15.24: Forwarder Stops — Audit Pipeline Failure Response.",
                  "ea": "Saved search 'UC-22.15.24' running on _internal metrics.log, splunkd health, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.25",
              "n": "Clock Skew on OT Hosts vs NTP Stratum (IEC 62443-3-3 / SR 2.11)",
              "c": "high",
              "f": "intermediate",
              "v": "Surfaces OT servers and RTUs with large `_indextime` vs `_time` deltas undermining trustworthy audit timestamps.",
              "t": "Universal Forwarder time debug or vendor syslog",
              "d": "`index=ot_infra` with `_time` and `orig_time` if extracted",
              "q": "index=ot_infra earliest=-24h\n| eval skew=abs(_indextime-_time)\n| where skew>120\n| stats max(skew) as max_skew, perc95(skew) as p95_skew by host, sourcetype\n| sort - p95_skew",
              "m": "(1) Enforce PTP/NTP on all IEDs; (2) fix dual-homed syslog paths; (3) alert >120s; (4) document stratum hierarchy per site.",
              "z": "Table, Single value (worst skew), Bar by site.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Universal Forwarder time debug or vendor syslog.\n• Ensure the following data sources are available: `index=ot_infra` with `_time` and `orig_time` if extracted.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enforce PTP/NTP on all IEDs; (2) fix dual-homed syslog paths; (3) alert >120s; (4) document stratum hierarchy per site.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_infra earliest=-24h\n| eval skew=abs(_indextime-_time)\n| where skew>120\n| stats max(skew) as max_skew, perc95(skew) as p95_skew by host, sourcetype\n| sort - p95_skew\n```\n\nUnderstanding this SPL\n\n**Clock Skew on OT Hosts vs NTP Stratum (IEC 62443-3-3 / SR 2.11)** — Surfaces OT servers and RTUs with large `_indextime` vs `_time` deltas undermining trustworthy audit timestamps.\n\nDocumented **Data sources**: `index=ot_infra` with `_time` and `orig_time` if extracted. **App/TA** (typical add-on context): Universal Forwarder time debug or vendor syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_infra.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_infra, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **skew** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where skew>120` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value (worst skew), Bar by site.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surfaces OT servers and RTUs with large `_indextime` vs `_time` deltas undermining trustworthy audit timestamps. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 2.11",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 2.11 is enforced — Splunk UC-22.15.25: Clock Skew on OT Hosts vs NTP Stratum.",
                  "ea": "Saved search 'UC-22.15.25' running on index=ot_infra with _time and orig_time if extracted, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.26",
              "n": "Dual-Signature PLC Program Change Without Four-Eyes Ticket (IEC 62443-3-3 / SR 2.12)",
              "c": "critical",
              "f": "advanced",
              "v": "Looks for PLC program uploads/downloads signed by only one engineering identity when policy requires dual control.",
              "t": "Vendor PLC audit, ServiceNow",
              "d": "`index=ot_eng` `sourcetype=\"plc:audit\"`",
              "q": "index=ot_eng sourcetype=\"plc:audit\" action=\"PROGRAM_CHANGE\" earliest=-30d\n| eval change_id=coalesce(change_ticket, work_order)\n| stats values(user) as signers, dc(user) as signer_count by change_id, controller\n| where signer_count<2 AND len(change_id)>4\n| table change_id, controller, signers, signer_count",
              "m": "(1) Require second approval in CM tool; (2) map PLC audit user fields; (3) integrate e-signature if available; (4) weekly compliance report.",
              "z": "Table, Timeline, Single value (violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Vendor PLC audit, ServiceNow.\n• Ensure the following data sources are available: `index=ot_eng` `sourcetype=\"plc:audit\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require second approval in CM tool; (2) map PLC audit user fields; (3) integrate e-signature if available; (4) weekly compliance report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_eng sourcetype=\"plc:audit\" action=\"PROGRAM_CHANGE\" earliest=-30d\n| eval change_id=coalesce(change_ticket, work_order)\n| stats values(user) as signers, dc(user) as signer_count by change_id, controller\n| where signer_count<2 AND len(change_id)>4\n| table change_id, controller, signers, signer_count\n```\n\nUnderstanding this SPL\n\n**Dual-Signature PLC Program Change Without Four-Eyes Ticket (IEC 62443-3-3 / SR 2.12)** — Looks for PLC program uploads/downloads signed by only one engineering identity when policy requires dual control.\n\nDocumented **Data sources**: `index=ot_eng` `sourcetype=\"plc:audit\"`. **App/TA** (typical add-on context): Vendor PLC audit, ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_eng; **sourcetype**: plc:audit. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_eng, sourcetype=\"plc:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **change_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by change_id, controller** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where signer_count<2 AND len(change_id)>4` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Dual-Signature PLC Program Change Without Four-Eyes Ticket (IEC 62443-3-3 / SR 2.12)**): table change_id, controller, signers, signer_count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(All_Changes.user) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Dual-Signature PLC Program Change Without Four-Eyes Ticket (IEC 62443-3-3 / SR 2.12)** — Looks for PLC program uploads/downloads signed by only one engineering identity when policy requires dual control.\n\nDocumented **Data sources**: `index=ot_eng` `sourcetype=\"plc:audit\"`. **App/TA** (typical add-on context): Vendor PLC audit, ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline, Single value (violations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We looks for PLC program uploads/downloads signed by only one engineering identity when policy requires dual control. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t dc(All_Changes.user) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 2.12",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 2.12 is enforced — Splunk UC-22.15.26: Dual-Signature PLC Program Change Without Four-Eyes Ticket.",
                  "ea": "Saved search 'UC-22.15.26' running on index=ot_eng sourcetype=\"plc:audit\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.27",
              "n": "OPC-UA Message Tampering Indicators via Signature Failures (IEC 62443-3-3 / SR 3.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Raises alerts when OPC-UA signing/encryption modes downgrade or signature checks fail, indicating integrity issues on industrial sessions.",
              "t": "Splunk Edge Hub",
              "d": "`edge_hub_opcua` security headers",
              "q": "index=ot_protocol sourcetype=\"edge_hub_opcua\" earliest=-24h\n| eval mode=coalesce(opcua_security_mode,\"None\")\n| stats count by opcua_server_endpoint, mode, opcua_security_policy_uri\n| eventstats sum(count) as tot by opcua_server_endpoint\n| eval pct=round(100*count/tot,2)\n| where mode=\"None\" AND pct>1\n| sort - pct",
              "m": "(1) Disable None except lab VLANs; (2) baseline each server; (3) alert on new None percentage; (4) correlate with firewall TLS inspection if used.",
              "z": "Stacked bar (mode mix), Table, Single value (servers with None).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub.\n• Ensure the following data sources are available: `edge_hub_opcua` security headers.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Disable None except lab VLANs; (2) baseline each server; (3) alert on new None percentage; (4) correlate with firewall TLS inspection if used.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_protocol sourcetype=\"edge_hub_opcua\" earliest=-24h\n| eval mode=coalesce(opcua_security_mode,\"None\")\n| stats count by opcua_server_endpoint, mode, opcua_security_policy_uri\n| eventstats sum(count) as tot by opcua_server_endpoint\n| eval pct=round(100*count/tot,2)\n| where mode=\"None\" AND pct>1\n| sort - pct\n```\n\nUnderstanding this SPL\n\n**OPC-UA Message Tampering Indicators via Signature Failures (IEC 62443-3-3 / SR 3.1)** — Raises alerts when OPC-UA signing/encryption modes downgrade or signature checks fail, indicating integrity issues on industrial sessions.\n\nDocumented **Data sources**: `edge_hub_opcua` security headers. **App/TA** (typical add-on context): Splunk Edge Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_protocol; **sourcetype**: edge_hub_opcua. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_protocol, sourcetype=\"edge_hub_opcua\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **mode** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by opcua_server_endpoint, mode, opcua_security_policy_uri** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by opcua_server_endpoint** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mode=\"None\" AND pct>1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (mode mix), Table, Single value (servers with None).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We raises alerts when OPC-UA signing/encryption modes downgrade or signature checks fail, indicating integrity issues on industrial sessions. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 3.1 is enforced — Splunk UC-22.15.27: OPC-UA Message Tampering Indicators via Signature Failures.",
                  "ea": "Saved search 'UC-22.15.27' running on edge_hub_opcua security headers, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.28",
              "n": "PLC Firmware Verification Job Failures (IEC 62443-3-3 / SR 3.3)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks scheduled integrity or attestation jobs on controllers that fail or never run, weakening verification of security functionality.",
              "t": "Asset intelligence CSV or vendor API HEC",
              "d": "`index=ot_integrity` vendor attestation logs",
              "q": "index=ot_integrity sourcetype=\"plc:integrity_scan\" earliest=-30d\n| stats latest(result) as last_res, latest(_time) as last_run by controller, firmware_version\n| eval days_since=round((now()-last_run)/86400,0)\n| where last_res!=\"PASS\" OR days_since>30\n| table controller, firmware_version, last_res, days_since",
              "m": "(1) Schedule vendor integrity checks post-patch; (2) ingest pass/fail JSON; (3) integrate with patch CMDB; (4) block production promotion on FAIL.",
              "z": "Table, Single value (controllers overdue), Time chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Asset intelligence CSV or vendor API HEC.\n• Ensure the following data sources are available: `index=ot_integrity` vendor attestation logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Schedule vendor integrity checks post-patch; (2) ingest pass/fail JSON; (3) integrate with patch CMDB; (4) block production promotion on FAIL.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_integrity sourcetype=\"plc:integrity_scan\" earliest=-30d\n| stats latest(result) as last_res, latest(_time) as last_run by controller, firmware_version\n| eval days_since=round((now()-last_run)/86400,0)\n| where last_res!=\"PASS\" OR days_since>30\n| table controller, firmware_version, last_res, days_since\n```\n\nUnderstanding this SPL\n\n**PLC Firmware Verification Job Failures (IEC 62443-3-3 / SR 3.3)** — Tracks scheduled integrity or attestation jobs on controllers that fail or never run, weakening verification of security functionality.\n\nDocumented **Data sources**: `index=ot_integrity` vendor attestation logs. **App/TA** (typical add-on context): Asset intelligence CSV or vendor API HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_integrity; **sourcetype**: plc:integrity_scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_integrity, sourcetype=\"plc:integrity_scan\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by controller, firmware_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where last_res!=\"PASS\" OR days_since>30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PLC Firmware Verification Job Failures (IEC 62443-3-3 / SR 3.3)**): table controller, firmware_version, last_res, days_since\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value (controllers overdue), Time chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracks scheduled integrity or attestation jobs on controllers that fail or never run, weakening verification of security functionality. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 3.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 3.3 is enforced — Splunk UC-22.15.28: PLC Firmware Verification Job Failures.",
                  "ea": "Saved search 'UC-22.15.28' running on index=ot_integrity vendor attestation logs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.29",
              "n": "Unexpected PLC Logic CRC Change (IEC 62443-3-3 / SR 3.4)",
              "c": "critical",
              "f": "advanced",
              "v": "Compares reported logic checksums or project CRCs to last approved baseline to detect unauthorized software changes.",
              "t": "Historian or engineering export",
              "d": "`index=ot_eng` CRC events; `approved_plc_crc.csv`",
              "q": "index=ot_eng sourcetype=\"plc:crc\" earliest=-7d\n| lookup approved_plc_crc.csv controller program_name OUTPUT expected_crc\n| where isnotnull(expected_crc) AND crc!=expected_crc\n| stats earliest(_time) as first_bad by controller, program_name, crc, expected_crc, user\n| sort - first_bad",
              "m": "(1) Export CRC after each validated download; (2) store gold images securely; (3) auto-ticket on mismatch; (4) offline forensic for first event.",
              "z": "Table, Timeline, Single value (mismatches/day).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Historian or engineering export.\n• Ensure the following data sources are available: `index=ot_eng` CRC events; `approved_plc_crc.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export CRC after each validated download; (2) store gold images securely; (3) auto-ticket on mismatch; (4) offline forensic for first event.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_eng sourcetype=\"plc:crc\" earliest=-7d\n| lookup approved_plc_crc.csv controller program_name OUTPUT expected_crc\n| where isnotnull(expected_crc) AND crc!=expected_crc\n| stats earliest(_time) as first_bad by controller, program_name, crc, expected_crc, user\n| sort - first_bad\n```\n\nUnderstanding this SPL\n\n**Unexpected PLC Logic CRC Change (IEC 62443-3-3 / SR 3.4)** — Compares reported logic checksums or project CRCs to last approved baseline to detect unauthorized software changes.\n\nDocumented **Data sources**: `index=ot_eng` CRC events; `approved_plc_crc.csv`. **App/TA** (typical add-on context): Historian or engineering export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_eng; **sourcetype**: plc:crc. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_eng, sourcetype=\"plc:crc\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(expected_crc) AND crc!=expected_crc` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by controller, program_name, crc, expected_crc, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Unexpected PLC Logic CRC Change (IEC 62443-3-3 / SR 3.4)** — Compares reported logic checksums or project CRCs to last approved baseline to detect unauthorized software changes.\n\nDocumented **Data sources**: `index=ot_eng` CRC events; `approved_plc_crc.csv`. **App/TA** (typical add-on context): Historian or engineering export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline, Single value (mismatches/day).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compare reported logic checksums or project CRCs to last approved baseline to detect unauthorized software changes. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 3.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 3.4 is enforced — Splunk UC-22.15.29: Unexpected PLC Logic CRC Change.",
                  "ea": "Saved search 'UC-22.15.29' running on index=ot_eng CRC events; approved_plc_crc.csv, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.30",
              "n": "Out-of-Range SCADA Setpoints and Invalid Control Commands (IEC 62443-3-3 / SR 3.5)",
              "c": "critical",
              "f": "advanced",
              "v": "Validates control writes against engineering limits and recipe tables to catch dangerous setpoint or mode commands.",
              "t": "Historian HEC, DCS export",
              "d": "`index=scada` `sourcetype=\"hist:control\"`",
              "q": "index=scada sourcetype=\"hist:control\" earliest=-24h\n| lookup setpoint_limits.csv tag OUTPUT min_sp, max_sp, allowed_modes\n| eval bad=if(value<min_sp OR value>max_sp OR !mvfind(allowed_modes, mode),1,0)\n| where bad=1\n| table _time, tag, value, min_sp, max_sp, mode, operator",
              "m": "(1) Publish limits from P&ID digital twin; (2) test alarms in shadow mode; (3) integrate with MES recipes; (4) alert operations supervisor.",
              "z": "Time chart, Table, Single value (bad commands).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Historian HEC, DCS export.\n• Ensure the following data sources are available: `index=scada` `sourcetype=\"hist:control\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Publish limits from P&ID digital twin; (2) test alarms in shadow mode; (3) integrate with MES recipes; (4) alert operations supervisor.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=scada sourcetype=\"hist:control\" earliest=-24h\n| lookup setpoint_limits.csv tag OUTPUT min_sp, max_sp, allowed_modes\n| eval bad=if(value<min_sp OR value>max_sp OR !mvfind(allowed_modes, mode),1,0)\n| where bad=1\n| table _time, tag, value, min_sp, max_sp, mode, operator\n```\n\nUnderstanding this SPL\n\n**Out-of-Range SCADA Setpoints and Invalid Control Commands (IEC 62443-3-3 / SR 3.5)** — Validates control writes against engineering limits and recipe tables to catch dangerous setpoint or mode commands.\n\nDocumented **Data sources**: `index=scada` `sourcetype=\"hist:control\"`. **App/TA** (typical add-on context): Historian HEC, DCS export. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: scada; **sourcetype**: hist:control. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=scada, sourcetype=\"hist:control\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **bad** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where bad=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Out-of-Range SCADA Setpoints and Invalid Control Commands (IEC 62443-3-3 / SR 3.5)**): table _time, tag, value, min_sp, max_sp, mode, operator\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value (bad commands).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We check control writes against engineering limits and recipe tables to catch dangerous setpoint or mode commands. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 3.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 3.5 is enforced — Splunk UC-22.15.30: Out-of-Range SCADA Setpoints and Invalid Control Commands.",
                  "ea": "Saved search 'UC-22.15.30' running on index=scada sourcetype=\"hist:control\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.31",
              "n": "Cleartext Industrial Credentials in PCAP-Derived Metadata (IEC 62443-3-3 / SR 4.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Scans IDS metadata for username or password fields in cleartext industrial protocols where confidentiality must be preserved.",
              "t": "Claroty xDome TA, Nozomi TA",
              "d": "`claroty:alert` or `nozomi:packets_meta` (example sourcetype)",
              "q": "index=ot_sec (sourcetype=\"claroty:alert\" OR sourcetype=\"nozomi:alerts\") earliest=-7d\n| where match(_raw,\"(?i)password=|passwd=|user=.*pass\") AND NOT match(_raw,\"(?i)\\\\*\\\\*\\\\*\")\n| rex field=_raw \"(?i)(?<proto>modbus|opcua|dnp3).*\"\n| stats count by proto, src_ip, dest_ip, signature\n| sort - count",
              "m": "(1) Never index full payload in prod—use IDS summaries; (2) rotate exposed creds; (3) enable TLS where vendor supports; (4) segment sniffers.",
              "z": "Table, Single value (hits), Bar by protocol.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Intrusion_Detection when CIM-tagged](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection_when_CIM-tagged)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Claroty xDome TA, Nozomi TA.\n• Ensure the following data sources are available: `claroty:alert` or `nozomi:packets_meta` (example sourcetype).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Never index full payload in prod—use IDS summaries; (2) rotate exposed creds; (3) enable TLS where vendor supports; (4) segment sniffers.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_sec (sourcetype=\"claroty:alert\" OR sourcetype=\"nozomi:alerts\") earliest=-7d\n| where match(_raw,\"(?i)password=|passwd=|user=.*pass\") AND NOT match(_raw,\"(?i)\\\\*\\\\*\\\\*\")\n| rex field=_raw \"(?i)(?<proto>modbus|opcua|dnp3).*\"\n| stats count by proto, src_ip, dest_ip, signature\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Cleartext Industrial Credentials in PCAP-Derived Metadata (IEC 62443-3-3 / SR 4.1)** — Scans IDS metadata for username or password fields in cleartext industrial protocols where confidentiality must be preserved.\n\nDocumented **Data sources**: `claroty:alert` or `nozomi:packets_meta` (example sourcetype). **App/TA** (typical add-on context): Claroty xDome TA, Nozomi TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_sec; **sourcetype**: claroty:alert, nozomi:alerts. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_sec, sourcetype=\"claroty:alert\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where match(_raw,\"(?i)password=|passwd=|user=.*pass\") AND NOT match(_raw,\"(?i)\\\\*\\\\*\\\\*\")` — typically the threshold or rule expression for this monitoring goal.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by proto, src_ip, dest_ip, signature** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src, IDS_Attacks.dest, IDS_Attacks.signature | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cleartext Industrial Credentials in PCAP-Derived Metadata (IEC 62443-3-3 / SR 4.1)** — Scans IDS metadata for username or password fields in cleartext industrial protocols where confidentiality must be preserved.\n\nDocumented **Data sources**: `claroty:alert` or `nozomi:packets_meta` (example sourcetype). **App/TA** (typical add-on context): Claroty xDome TA, Nozomi TA. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value (hits), Bar by protocol.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We scans IDS metadata for username or password fields in cleartext industrial protocols where confidentiality must be preserved. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Intrusion_Detection when CIM-tagged"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src, IDS_Attacks.dest, IDS_Attacks.signature | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 4.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 4.1 is enforced — Splunk UC-22.15.31: Cleartext Industrial Credentials in PCAP-Derived Metadata.",
                  "ea": "Saved search 'UC-22.15.31' running on claroty:alert or nozomi:packets_meta (example sourcetype), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.32",
              "n": "Deprecated TLS or SSH Algorithms on OT Jump Hosts (IEC 62443-3-3 / SR 4.3)",
              "c": "high",
              "f": "intermediate",
              "v": "Lists sessions negotiating weak ciphers for OT administrative channels where cryptography policy must be enforced.",
              "t": "Palo Alto (2757) decrypted metadata or Stream",
              "d": "`pan:ssl` or Splunk Stream TLS fields",
              "q": "index=fw sourcetype=\"pan:ssl\" dest_zone=\"OT-Jump\" earliest=-7d\n| where cipher IN (\"RC4\",\"3DES\",\"DES\") OR tls_version=\"TLSv1.0\" OR tls_version=\"TLSv1.1\"\n| stats count by dest_ip, cipher, tls_version, app\n| sort - count",
              "m": "(1) Disable legacy TLS on servers; (2) upgrade PanOS profiles; (3) exception process for legacy PLC web UIs; (4) monthly compliance scan.",
              "z": "Table, Bar (cipher), Single value (sessions).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Palo Alto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Palo Alto (2757) decrypted metadata or Stream.\n• Ensure the following data sources are available: `pan:ssl` or Splunk Stream TLS fields.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Disable legacy TLS on servers; (2) upgrade PanOS profiles; (3) exception process for legacy PLC web UIs; (4) monthly compliance scan.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=fw sourcetype=\"pan:ssl\" dest_zone=\"OT-Jump\" earliest=-7d\n| where cipher IN (\"RC4\",\"3DES\",\"DES\") OR tls_version=\"TLSv1.0\" OR tls_version=\"TLSv1.1\"\n| stats count by dest_ip, cipher, tls_version, app\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Deprecated TLS or SSH Algorithms on OT Jump Hosts (IEC 62443-3-3 / SR 4.3)** — Lists sessions negotiating weak ciphers for OT administrative channels where cryptography policy must be enforced.\n\nDocumented **Data sources**: `pan:ssl` or Splunk Stream TLS fields. **App/TA** (typical add-on context): Palo Alto (2757) decrypted metadata or Stream. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: fw; **sourcetype**: pan:ssl. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=fw, sourcetype=\"pan:ssl\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where cipher IN (\"RC4\",\"3DES\",\"DES\") OR tls_version=\"TLSv1.0\" OR tls_version=\"TLSv1.1\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by dest_ip, cipher, tls_version, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest, All_Traffic.app | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Deprecated TLS or SSH Algorithms on OT Jump Hosts (IEC 62443-3-3 / SR 4.3)** — Lists sessions negotiating weak ciphers for OT administrative channels where cryptography policy must be enforced.\n\nDocumented **Data sources**: `pan:ssl` or Splunk Stream TLS fields. **App/TA** (typical add-on context): Palo Alto (2757) decrypted metadata or Stream. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar (cipher), Single value (sessions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We lists sessions negotiating weak ciphers for OT administrative channels where cryptography policy must be enforced. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest, All_Traffic.app | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 4.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 4.3 is enforced — Splunk UC-22.15.32: Deprecated TLS or SSH Algorithms on OT Jump Hosts.",
                  "ea": "Saved search 'UC-22.15.32' running on pan:ssl or Splunk Stream TLS fields, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.33",
              "n": "East-West OT Traffic Crossing Intended Purdue Levels (IEC 62443-3-3 / SR 5.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Detects flows between OT zones that should communicate only through defined conduits, evidencing segmentation policy.",
              "t": "Palo Alto (2757), NetFlow",
              "d": "`pan:traffic` with `src_zone`, `dest_zone`",
              "q": "index=fw sourcetype=\"pan:traffic\" earliest=-24h\n| lookup ot_zone_pairs.csv src_zone dest_zone OUTPUT allowed_conduit\n| where isnull(allowed_conduit) OR allowed_conduit=\"false\"\n| stats sum(bytes) as bytes by src_zone, dest_zone, app\n| sort - bytes",
              "m": "(1) Maintain zone pair matrix from ISA-62443 drawings; (2) log intra-zone noise baseline; (3) alert new apps; (4) feed exceptions with expiry.",
              "z": "Sankey (zone flows), Table, Single value (new pairs/week).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Palo Alto](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Palo Alto (2757), NetFlow.\n• Ensure the following data sources are available: `pan:traffic` with `src_zone`, `dest_zone`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain zone pair matrix from ISA-62443 drawings; (2) log intra-zone noise baseline; (3) alert new apps; (4) feed exceptions with expiry.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=fw sourcetype=\"pan:traffic\" earliest=-24h\n| lookup ot_zone_pairs.csv src_zone dest_zone OUTPUT allowed_conduit\n| where isnull(allowed_conduit) OR allowed_conduit=\"false\"\n| stats sum(bytes) as bytes by src_zone, dest_zone, app\n| sort - bytes\n```\n\nUnderstanding this SPL\n\n**East-West OT Traffic Crossing Intended Purdue Levels (IEC 62443-3-3 / SR 5.1)** — Detects flows between OT zones that should communicate only through defined conduits, evidencing segmentation policy.\n\nDocumented **Data sources**: `pan:traffic` with `src_zone`, `dest_zone`. **App/TA** (typical add-on context): Palo Alto (2757), NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: fw; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=fw, sourcetype=\"pan:traffic\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(allowed_conduit) OR allowed_conduit=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_zone, dest_zone, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.app | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**East-West OT Traffic Crossing Intended Purdue Levels (IEC 62443-3-3 / SR 5.1)** — Detects flows between OT zones that should communicate only through defined conduits, evidencing segmentation policy.\n\nDocumented **Data sources**: `pan:traffic` with `src_zone`, `dest_zone`. **App/TA** (typical add-on context): Palo Alto (2757), NetFlow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey (zone flows), Table, Single value (new pairs/week).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects flows between OT zones that should communicate only through defined conduits, evidencing segmentation policy. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.app | sort - agg_value",
              "e": [
                "netflow",
                "paloalto"
              ],
              "em": [
                "netflow_netflow",
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 5.1 is enforced — Splunk UC-22.15.33: East-West OT Traffic Crossing Intended Purdue Levels.",
                  "ea": "Saved search 'UC-22.15.33' running on pan:traffic with src_zone, dest_zone, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.34",
              "n": "Denied Exploit Attempts at OT DMZ Boundary (IEC 62443-3-3 / SR 5.2)",
              "c": "critical",
              "f": "intermediate",
              "v": "Aggregates blocked threats at OT firewall and IDS boundaries to prove active zone boundary protection.",
              "t": "Splunk OT Security Add-on (5151), ES (optional)",
              "d": "`pan:threat`, `nozomi:alerts`, `claroty:alert`",
              "q": "index=ot_sec earliest=-24h (sourcetype=\"pan:threat\" OR sourcetype=\"nozomi:alerts\" OR sourcetype=\"claroty:alert\")\n| eval severity=coalesce(severity, priority, level)\n| where action=\"blocked\" OR disposition=\"blocked\" OR threat_id!=\"none\"\n| stats count by signature, src_ip, dest_zone, sourcetype\n| sort - count",
              "m": "(1) Normalize severity scales; (2) enrich with MITRE if desired; (3) tune noisy signatures; (4) daily SOC rollup for management.",
              "z": "Bar (top signatures), Map (src geo), Table.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Intrusion_Detection](https://docs.splunk.com/Documentation/CIM/latest/User/Intrusion_Detection)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), ES (optional).\n• Ensure the following data sources are available: `pan:threat`, `nozomi:alerts`, `claroty:alert`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize severity scales; (2) enrich with MITRE if desired; (3) tune noisy signatures; (4) daily SOC rollup for management.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_sec earliest=-24h (sourcetype=\"pan:threat\" OR sourcetype=\"nozomi:alerts\" OR sourcetype=\"claroty:alert\")\n| eval severity=coalesce(severity, priority, level)\n| where action=\"blocked\" OR disposition=\"blocked\" OR threat_id!=\"none\"\n| stats count by signature, src_ip, dest_zone, sourcetype\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Denied Exploit Attempts at OT DMZ Boundary (IEC 62443-3-3 / SR 5.2)** — Aggregates blocked threats at OT firewall and IDS boundaries to prove active zone boundary protection.\n\nDocumented **Data sources**: `pan:threat`, `nozomi:alerts`, `claroty:alert`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), ES (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_sec; **sourcetype**: pan:threat, nozomi:alerts, claroty:alert. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_sec, sourcetype=\"pan:threat\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where action=\"blocked\" OR disposition=\"blocked\" OR threat_id!=\"none\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by signature, src_ip, dest_zone, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.signature, IDS_Attacks.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Denied Exploit Attempts at OT DMZ Boundary (IEC 62443-3-3 / SR 5.2)** — Aggregates blocked threats at OT firewall and IDS boundaries to prove active zone boundary protection.\n\nDocumented **Data sources**: `pan:threat`, `nozomi:alerts`, `claroty:alert`. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), ES (optional). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Intrusion_Detection.IDS_Attacks` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar (top signatures), Map (src geo), Table.",
              "script": "",
              "premium": "Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We aggregates blocked threats at OT firewall and IDS boundaries to prove active zone boundary protection. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.signature, IDS_Attacks.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 5.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 5.2 is enforced — Splunk UC-22.15.34: Denied Exploit Attempts at OT DMZ Boundary.",
                  "ea": "Saved search 'UC-22.15.34' running on pan:threat, nozomi:alerts, claroty:alert, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.35",
              "n": "Mixed Safety and Non-Safety Traffic on Same VLAN Evidence (IEC 62443-3-3 / SR 5.4)",
              "c": "high",
              "f": "advanced",
              "v": "Correlates asset roles from inventory with L2/L3 observations to highlight application partitioning violations between safety and non-safety systems.",
              "t": "Industrial Asset Intelligence, Claroty/Nozomi",
              "d": "`ot_assets` role; `ot_net` vlan observations",
              "q": "index=ot_net sourcetype=\"claroty:asset\" OR sourcetype=\"nozomi:assets\" earliest=-7d\n| eval vlan=vlan_id\n| join type=left name [ search index=ot_assets | rename asset_name as name | fields name, safety_related, vlan_expected ]\n| where safety_related=\"true\" AND vlan!=vlan_expected\n| stats values(vlan) as vlans_seen by name, safety_related",
              "m": "(1) Tag SIS vs BPCS in CMDB; (2) enforce separate VLANs; (3) migration plan for mixed assets; (4) document compensating controls if any.",
              "z": "Table, Diagram export, Single value (violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Industrial Asset Intelligence, Claroty/Nozomi.\n• Ensure the following data sources are available: `ot_assets` role; `ot_net` vlan observations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag SIS vs BPCS in CMDB; (2) enforce separate VLANs; (3) migration plan for mixed assets; (4) document compensating controls if any.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_net sourcetype=\"claroty:asset\" OR sourcetype=\"nozomi:assets\" earliest=-7d\n| eval vlan=vlan_id\n| join type=left name [ search index=ot_assets | rename asset_name as name | fields name, safety_related, vlan_expected ]\n| where safety_related=\"true\" AND vlan!=vlan_expected\n| stats values(vlan) as vlans_seen by name, safety_related\n```\n\nUnderstanding this SPL\n\n**Mixed Safety and Non-Safety Traffic on Same VLAN Evidence (IEC 62443-3-3 / SR 5.4)** — Correlates asset roles from inventory with L2/L3 observations to highlight application partitioning violations between safety and non-safety systems.\n\nDocumented **Data sources**: `ot_assets` role; `ot_net` vlan observations. **App/TA** (typical add-on context): Industrial Asset Intelligence, Claroty/Nozomi. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_net; **sourcetype**: claroty:asset, nozomi:assets. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_net, sourcetype=\"claroty:asset\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **vlan** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where safety_related=\"true\" AND vlan!=vlan_expected` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by name, safety_related** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Diagram export, Single value (violations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect asset roles from inventory with L2/L3 observations to highlight application partitioning violations between safety and non-safety systems. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 5.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 5.4 is enforced — Splunk UC-22.15.35: Mixed Safety and Non-Safety Traffic on Same VLAN Evidence.",
                  "ea": "Saved search 'UC-22.15.35' running on ot_assets role; ot_net vlan observations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.36",
              "n": "PLC Local Console Login Without Corporate Identity (IEC 62443-4-2 / CR 1.1)",
              "c": "high",
              "f": "intermediate",
              "v": "Finds controller-local or maintenance port logins that cannot be mapped to a named human, weakening component-level identification.",
              "t": "Rockwell/Siemens vendor forwarders → `index=ot_eng`",
              "d": "`sourcetype=\"plc:access\"` or vendor-specific",
              "q": "index=ot_eng sourcetype=\"plc:access\" earliest=-30d\n| eval user_id=coalesce(user, operator_slot)\n| where user_id IN (\"0\",\"anonymous\",\"MAINT\",\"\") OR match(user_id, \"^\\*+$\")\n| stats count by controller, user_id, src_interface\n| sort - count",
              "m": "(1) Enable user management on intelligent devices; (2) disable default maintenance accounts; (3) physical locks on USB; (4) quarterly audit export.",
              "z": "Table, Bar, Single value (anonymous logins).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Rockwell/Siemens vendor forwarders → `index=ot_eng`.\n• Ensure the following data sources are available: `sourcetype=\"plc:access\"` or vendor-specific.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable user management on intelligent devices; (2) disable default maintenance accounts; (3) physical locks on USB; (4) quarterly audit export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_eng sourcetype=\"plc:access\" earliest=-30d\n| eval user_id=coalesce(user, operator_slot)\n| where user_id IN (\"0\",\"anonymous\",\"MAINT\",\"\") OR match(user_id, \"^\\*+$\")\n| stats count by controller, user_id, src_interface\n| sort - count\n```\n\nUnderstanding this SPL\n\n**PLC Local Console Login Without Corporate Identity (IEC 62443-4-2 / CR 1.1)** — Finds controller-local or maintenance port logins that cannot be mapped to a named human, weakening component-level identification.\n\nDocumented **Data sources**: `sourcetype=\"plc:access\"` or vendor-specific. **App/TA** (typical add-on context): Rockwell/Siemens vendor forwarders → `index=ot_eng`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_eng; **sourcetype**: plc:access. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_eng, sourcetype=\"plc:access\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **user_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where user_id IN (\"0\",\"anonymous\",\"MAINT\",\"\") OR match(user_id, \"^\\*+$\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by controller, user_id, src_interface** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**PLC Local Console Login Without Corporate Identity (IEC 62443-4-2 / CR 1.1)** — Finds controller-local or maintenance port logins that cannot be mapped to a named human, weakening component-level identification.\n\nDocumented **Data sources**: `sourcetype=\"plc:access\"` or vendor-specific. **App/TA** (typical add-on context): Rockwell/Siemens vendor forwarders → `index=ot_eng`. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar, Single value (anonymous logins).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We find controller-local or maintenance port logins that cannot be mapped to a named human, weakening component-level identification. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "CR 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 CR 1.1 is enforced — Splunk UC-22.15.36: PLC Local Console Login Without Corporate Identity.",
                  "ea": "Saved search 'UC-22.15.36' running on sourcetype=\"plc:access\" or vendor-specific, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.37",
              "n": "Unauthorized Firmware Upload Attempt on RTU (IEC 62443-4-2 / CR 2.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Detects firmware or OS image transfer sessions to RTUs from non-engineering hosts, testing authorization enforcement on components.",
              "t": "Dragos/Claroty alerts + firewall",
              "d": "`claroty:alert` category=firmware; `pan:traffic` ftp/scp",
              "q": "index=ot_sec sourcetype=\"claroty:alert\" category=\"firmware\" earliest=-14d\n| append [ search index=fw sourcetype=\"pan:traffic\" app IN (\"ftp\",\"scp\",\"sftp\") dest_zone=\"OT-RTU\" earliest=-14d\n    | lookup ot_engineering_nets.csv src OUTPUT role | where role!=\"ews\" ]\n| stats earliest(_time) as t0, values(src) as sources by dest, signature, sourcetype\n| sort - t0",
              "m": "(1) Block firmware protocols except via jump host; (2) require signed images; (3) alert any ftp to field; (4) integrate vendor secure boot logs if available.",
              "z": "Timeline, Table, Single value (events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Dragos/Claroty alerts + firewall.\n• Ensure the following data sources are available: `claroty:alert` category=firmware; `pan:traffic` ftp/scp.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Block firmware protocols except via jump host; (2) require signed images; (3) alert any ftp to field; (4) integrate vendor secure boot logs if available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_sec sourcetype=\"claroty:alert\" category=\"firmware\" earliest=-14d\n| append [ search index=fw sourcetype=\"pan:traffic\" app IN (\"ftp\",\"scp\",\"sftp\") dest_zone=\"OT-RTU\" earliest=-14d\n    | lookup ot_engineering_nets.csv src OUTPUT role | where role!=\"ews\" ]\n| stats earliest(_time) as t0, values(src) as sources by dest, signature, sourcetype\n| sort - t0\n```\n\nUnderstanding this SPL\n\n**Unauthorized Firmware Upload Attempt on RTU (IEC 62443-4-2 / CR 2.1)** — Detects firmware or OS image transfer sessions to RTUs from non-engineering hosts, testing authorization enforcement on components.\n\nDocumented **Data sources**: `claroty:alert` category=firmware; `pan:traffic` ftp/scp. **App/TA** (typical add-on context): Dragos/Claroty alerts + firewall. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_sec; **sourcetype**: claroty:alert. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_sec, sourcetype=\"claroty:alert\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by dest, signature, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Unauthorized Firmware Upload Attempt on RTU (IEC 62443-4-2 / CR 2.1)** — Detects firmware or OS image transfer sessions to RTUs from non-engineering hosts, testing authorization enforcement on components.\n\nDocumented **Data sources**: `claroty:alert` category=firmware; `pan:traffic` ftp/scp. **App/TA** (typical add-on context): Dragos/Claroty alerts + firewall. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline, Table, Single value (events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects firmware or OS image transfer sessions to RTUs from non-engineering hosts, testing authorization enforcement on components. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "CR 2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 CR 2.1 is enforced — Splunk UC-22.15.37: Unauthorized Firmware Upload Attempt on RTU.",
                  "ea": "Saved search 'UC-22.15.37' running on claroty:alert category=firmware; pan:traffic ftp/scp, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.38",
              "n": "Embedded PLC Audit Log Forwarding Gaps (IEC 62443-4-2 / CR 2.8)",
              "c": "high",
              "f": "intermediate",
              "v": "Identifies PLCs that stopped emitting security audit events to Splunk compared to a seven-day baseline.",
              "t": "`metadata` or custom heartbeat",
              "d": "`index=ot_eng` per-controller heartbeat",
              "q": "index=ot_eng sourcetype=\"plc:audit\" earliest=-14d\n| bin _time span=1d\n| stats count by _time, controller\n| eventstats avg(count) as baseline by controller\n| where _time>relative_time(now(),\"-3d@d\") AND count < baseline*0.2\n| table _time, controller, count, baseline",
              "m": "(1) Enable syslog/SNMP trap from PLC if supported; (2) UF on gateway concentrator; (3) alert on 24h silence; (4) spares swap procedure.",
              "z": "Time chart (count by PLC), Table, Single value (silent PLCs).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: `metadata` or custom heartbeat.\n• Ensure the following data sources are available: `index=ot_eng` per-controller heartbeat.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable syslog/SNMP trap from PLC if supported; (2) UF on gateway concentrator; (3) alert on 24h silence; (4) spares swap procedure.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_eng sourcetype=\"plc:audit\" earliest=-14d\n| bin _time span=1d\n| stats count by _time, controller\n| eventstats avg(count) as baseline by controller\n| where _time>relative_time(now(),\"-3d@d\") AND count < baseline*0.2\n| table _time, controller, count, baseline\n```\n\nUnderstanding this SPL\n\n**Embedded PLC Audit Log Forwarding Gaps (IEC 62443-4-2 / CR 2.8)** — Identifies PLCs that stopped emitting security audit events to Splunk compared to a seven-day baseline.\n\nDocumented **Data sources**: `index=ot_eng` per-controller heartbeat. **App/TA** (typical add-on context): `metadata` or custom heartbeat. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_eng; **sourcetype**: plc:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_eng, sourcetype=\"plc:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, controller** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by controller** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where _time>relative_time(now(),\"-3d@d\") AND count < baseline*0.2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Embedded PLC Audit Log Forwarding Gaps (IEC 62443-4-2 / CR 2.8)**): table _time, controller, count, baseline\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (count by PLC), Table, Single value (silent PLCs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We spot PLCs that stopped emitting security audit events to Splunk compared to a seven-day baseline. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "CR 2.8",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 CR 2.8 is enforced — Splunk UC-22.15.38: Embedded PLC Audit Log Forwarding Gaps.",
                  "ea": "Saved search 'UC-22.15.38' running on index=ot_eng per-controller heartbeat, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.39",
              "n": "Secure Boot or Image Hash Mismatch on Industrial Appliance (IEC 62443-4-2 / CR 3.4)",
              "c": "critical",
              "f": "advanced",
              "v": "Correlates component attestation failures with subsequent process alarms to prioritize firmware integrity incidents.",
              "t": "HEC from appliance vendor or Edge Hub",
              "d": "`index=ot_edge` `sourcetype=\"edge:attest\"`",
              "q": "index=ot_edge sourcetype=\"edge:attest\" earliest=-7d\n| where result!=\"OK\" OR match(_raw,\"(?i)hash mismatch|secure boot fail\")\n| join type=left device_id [ search index=ot_hist sourcetype=\"hist:alarm\" earliest=-7d | stats count as alarms by device_id ]\n| table _time, device_id, result, alarms",
              "m": "(1) Collect TPM/UEFI events from industrial gateways; (2) isolate device on mismatch; (3) compare to gold hash registry; (4) vendor RMA workflow.",
              "z": "Table, Timeline, Single value (failed devices).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC from appliance vendor or Edge Hub.\n• Ensure the following data sources are available: `index=ot_edge` `sourcetype=\"edge:attest\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Collect TPM/UEFI events from industrial gateways; (2) isolate device on mismatch; (3) compare to gold hash registry; (4) vendor RMA workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_edge sourcetype=\"edge:attest\" earliest=-7d\n| where result!=\"OK\" OR match(_raw,\"(?i)hash mismatch|secure boot fail\")\n| join type=left device_id [ search index=ot_hist sourcetype=\"hist:alarm\" earliest=-7d | stats count as alarms by device_id ]\n| table _time, device_id, result, alarms\n```\n\nUnderstanding this SPL\n\n**Secure Boot or Image Hash Mismatch on Industrial Appliance (IEC 62443-4-2 / CR 3.4)** — Correlates component attestation failures with subsequent process alarms to prioritize firmware integrity incidents.\n\nDocumented **Data sources**: `index=ot_edge` `sourcetype=\"edge:attest\"`. **App/TA** (typical add-on context): HEC from appliance vendor or Edge Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_edge; **sourcetype**: edge:attest. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_edge, sourcetype=\"edge:attest\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where result!=\"OK\" OR match(_raw,\"(?i)hash mismatch|secure boot fail\")` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Secure Boot or Image Hash Mismatch on Industrial Appliance (IEC 62443-4-2 / CR 3.4)**): table _time, device_id, result, alarms\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline, Single value (failed devices).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect component attestation failures with subsequent process alarms to prioritize firmware integrity incidents. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "CR 3.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 CR 3.4 is enforced — Splunk UC-22.15.39: Secure Boot or Image Hash Mismatch on Industrial Appliance.",
                  "ea": "Saved search 'UC-22.15.39' running on index=ot_edge sourcetype=\"edge:attest\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.40",
              "n": "Modbus Serial Gateway Exposing Registers to Wrong TCP Subnet (IEC 62443-4-2 / CR 4.1)",
              "c": "high",
              "f": "intermediate",
              "v": "Flags TCP-side Modbus requests from subnets that should not see serial-segment registers, supporting confidentiality on component channels.",
              "t": "Edge Hub Modbus TCP capture",
              "d": "`edge_hub_modbus`",
              "q": "index=ot_protocol sourcetype=\"edge_hub_modbus\" transport=\"tcp\" earliest=-24h\n| lookup serial_gateway_allowlist.csv dest_ip src_subnet OUTPUT permitted\n| where permitted!=\"true\"\n| stats dc(function_code) as fcs, sum(register_count) as regs by src_ip, dest_ip\n| sort - regs",
              "m": "(1) ACL on gateway Ethernet port; (2) VPN overlay for remote; (3) alert new src subnets; (4) document data diodes where used.",
              "z": "Table, Heatmap, Time chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Edge Hub Modbus TCP capture.\n• Ensure the following data sources are available: `edge_hub_modbus`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) ACL on gateway Ethernet port; (2) VPN overlay for remote; (3) alert new src subnets; (4) document data diodes where used.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_protocol sourcetype=\"edge_hub_modbus\" transport=\"tcp\" earliest=-24h\n| lookup serial_gateway_allowlist.csv dest_ip src_subnet OUTPUT permitted\n| where permitted!=\"true\"\n| stats dc(function_code) as fcs, sum(register_count) as regs by src_ip, dest_ip\n| sort - regs\n```\n\nUnderstanding this SPL\n\n**Modbus Serial Gateway Exposing Registers to Wrong TCP Subnet (IEC 62443-4-2 / CR 4.1)** — Flags TCP-side Modbus requests from subnets that should not see serial-segment registers, supporting confidentiality on component channels.\n\nDocumented **Data sources**: `edge_hub_modbus`. **App/TA** (typical add-on context): Edge Hub Modbus TCP capture. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_protocol; **sourcetype**: edge_hub_modbus. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_protocol, sourcetype=\"edge_hub_modbus\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where permitted!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_ip, dest_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Heatmap, Time chart.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We flags TCP-side Modbus requests from subnets that should not see serial-segment registers, supporting confidentiality on component channels. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "modbus"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "CR 4.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 CR 4.1 is enforced — Splunk UC-22.15.40: Modbus Serial Gateway Exposing Registers to Wrong TCP Subnet.",
                  "ea": "Saved search 'UC-22.15.40' running on edge_hub_modbus, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.41",
              "n": "Duplicate IP or MAC on OT Switch — Segmentation Violation (IEC 62443-4-2 / CR 5.1)",
              "c": "high",
              "f": "intermediate",
              "v": "Surfaces ARP/ND anomalies that indicate bridging or misconfiguration undermining component-level network segmentation.",
              "t": "Switch syslog, Claroty",
              "d": "`cisco:ios` ARP, `claroty:alert`",
              "q": "index=net sourcetype=\"cisco:ios\" \"%Duplicate address%\" OR \"%ARP collision%\" earliest=-7d\n| append [ search index=ot_sec sourcetype=\"claroty:alert\" signature=\"*ARP*\" earliest=-7d ]\n| stats count by host, vlan, signature, src_ip\n| sort - count",
              "m": "(1) Enable DHCP snooping; (2) BPDU guard on access ports; (3) correlate with change tickets; (4) physical inspection for rogue bridges.",
              "z": "Table, Topology panel (if ITSI), Time chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Switch syslog, Claroty.\n• Ensure the following data sources are available: `cisco:ios` ARP, `claroty:alert`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable DHCP snooping; (2) BPDU guard on access ports; (3) correlate with change tickets; (4) physical inspection for rogue bridges.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=net sourcetype=\"cisco:ios\" \"%Duplicate address%\" OR \"%ARP collision%\" earliest=-7d\n| append [ search index=ot_sec sourcetype=\"claroty:alert\" signature=\"*ARP*\" earliest=-7d ]\n| stats count by host, vlan, signature, src_ip\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Duplicate IP or MAC on OT Switch — Segmentation Violation (IEC 62443-4-2 / CR 5.1)** — Surfaces ARP/ND anomalies that indicate bridging or misconfiguration undermining component-level network segmentation.\n\nDocumented **Data sources**: `cisco:ios` ARP, `claroty:alert`. **App/TA** (typical add-on context): Switch syslog, Claroty. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: net; **sourcetype**: cisco:ios. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=net, sourcetype=\"cisco:ios\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Appends rows from a subsearch with `append`.\n• `stats` rolls up events into metrics; results are split **by host, vlan, signature, src_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest, All_Traffic.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Duplicate IP or MAC on OT Switch — Segmentation Violation (IEC 62443-4-2 / CR 5.1)** — Surfaces ARP/ND anomalies that indicate bridging or misconfiguration undermining component-level network segmentation.\n\nDocumented **Data sources**: `cisco:ios` ARP, `claroty:alert`. **App/TA** (typical add-on context): Switch syslog, Claroty. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Topology panel (if ITSI), Time chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We surfaces ARP/ND anomalies that indicate bridging or misconfiguration undermining component-level network segmentation. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest, All_Traffic.src | sort - count",
              "e": [
                "syslog"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "CR 5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 CR 5.1 is enforced — Splunk UC-22.15.41: Duplicate IP or MAC on OT Switch — Segmentation Violation.",
                  "ea": "Saved search 'UC-22.15.41' running on cisco:ios ARP, claroty:alert, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.42",
              "n": "High-Rate Modbus Exception Responses (Potential DoS) (IEC 62443-4-2 / CR 6.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Detects exception storms targeting PLCs that may indicate denial-of-service attacks against control I/O.",
              "t": "Splunk Edge Hub",
              "d": "`edge_hub_modbus` with `exception_code`",
              "q": "index=ot_protocol sourcetype=\"edge_hub_modbus\" isnotnull(exception_code) earliest=-4h\n| bin _time span=1m\n| stats count by _time, dest_plc, src_ip, exception_code\n| where count>500\n| sort - count",
              "m": "(1) Rate-limit at gateway; (2) ACL offending IP; (3) correlate with IDS; (4) verify PLC CPU load via vendor API.",
              "z": "Time chart, Table, Single value (peaks/min).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Edge Hub.\n• Ensure the following data sources are available: `edge_hub_modbus` with `exception_code`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Rate-limit at gateway; (2) ACL offending IP; (3) correlate with IDS; (4) verify PLC CPU load via vendor API.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_protocol sourcetype=\"edge_hub_modbus\" isnotnull(exception_code) earliest=-4h\n| bin _time span=1m\n| stats count by _time, dest_plc, src_ip, exception_code\n| where count>500\n| sort - count\n```\n\nUnderstanding this SPL\n\n**High-Rate Modbus Exception Responses (Potential DoS) (IEC 62443-4-2 / CR 6.1)** — Detects exception storms targeting PLCs that may indicate denial-of-service attacks against control I/O.\n\nDocumented **Data sources**: `edge_hub_modbus` with `exception_code`. **App/TA** (typical add-on context): Splunk Edge Hub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_protocol; **sourcetype**: edge_hub_modbus. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_protocol, sourcetype=\"edge_hub_modbus\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, dest_plc, src_ip, exception_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>500` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value (peaks/min).",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects exception storms targeting PLCs that may indicate denial-of-service attacks against control I/O. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "CR 6.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 CR 6.1 is enforced — Splunk UC-22.15.42: High-Rate Modbus Exception Responses (Potential DoS).",
                  "ea": "Saved search 'UC-22.15.42' running on edge_hub_modbus with exception_code, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.43",
              "n": "PLC CPU Load and Scan Cycle Degradation (IEC 62443-4-2 / CR 6.2)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors embedded controller resource metrics to catch resource exhaustion from misconfiguration or attack precursors.",
              "t": "SNMP from OT NMS or vendor API HEC",
              "d": "`index=ot_perf` `sourcetype=\"snmp:plc\"`",
              "q": "index=ot_perf sourcetype=\"snmp:plc\" earliest=-24h metric_name IN (\"cpuUtil\",\"scanCycleMs\",\"taskOverload\")\n| timechart span=15m perc95(metric_value) by host, metric_name\n| untable _time metric_series metric_value\n| where metric_value>85 AND like(metric_series,\"%cpuUtil%\")",
              "m": "(1) Poll at safe intervals; (2) baseline per process; (3) alert above threshold; (4) tie to recent downloads.",
              "z": "Time chart, Table (worst hosts), Single value (breach minutes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Performance](https://docs.splunk.com/Documentation/CIM/latest/User/Performance)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: SNMP from OT NMS or vendor API HEC.\n• Ensure the following data sources are available: `index=ot_perf` `sourcetype=\"snmp:plc\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Poll at safe intervals; (2) baseline per process; (3) alert above threshold; (4) tie to recent downloads.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_perf sourcetype=\"snmp:plc\" earliest=-24h metric_name IN (\"cpuUtil\",\"scanCycleMs\",\"taskOverload\")\n| timechart span=15m perc95(metric_value) by host, metric_name\n| untable _time metric_series metric_value\n| where metric_value>85 AND like(metric_series,\"%cpuUtil%\")\n```\n\nUnderstanding this SPL\n\n**PLC CPU Load and Scan Cycle Degradation (IEC 62443-4-2 / CR 6.2)** — Monitors embedded controller resource metrics to catch resource exhaustion from misconfiguration or attack precursors.\n\nDocumented **Data sources**: `index=ot_perf` `sourcetype=\"snmp:plc\"`. **App/TA** (typical add-on context): SNMP from OT NMS or vendor API HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_perf; **sourcetype**: snmp:plc. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_perf, sourcetype=\"snmp:plc\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by host, metric_name** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **PLC CPU Load and Scan Cycle Degradation (IEC 62443-4-2 / CR 6.2)**): untable _time metric_series metric_value\n• Filters the current rows with `where metric_value>85 AND like(metric_series,\"%cpuUtil%\")` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host span=15m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**PLC CPU Load and Scan Cycle Degradation (IEC 62443-4-2 / CR 6.2)** — Monitors embedded controller resource metrics to catch resource exhaustion from misconfiguration or attack precursors.\n\nDocumented **Data sources**: `index=ot_perf` `sourcetype=\"snmp:plc\"`. **App/TA** (typical add-on context): SNMP from OT NMS or vendor API HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Performance.CPU` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table (worst hosts), Single value (breach minutes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch embedded controller resource metrics to catch resource exhaustion from misconfiguration or attack precursors. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Performance"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Performance.CPU by Performance.host span=15m | sort - count",
              "e": [
                "snmp"
              ],
              "em": [
                "snmp_generic"
              ],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "CR 6.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 CR 6.2 is enforced — Splunk UC-22.15.43: PLC CPU Load and Scan Cycle Degradation.",
                  "ea": "Saved search 'UC-22.15.43' running on index=ot_perf sourcetype=\"snmp:plc\", archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.44",
              "n": "Redundant Controller Failover During Attack Window (IEC 62443-4-2 / CR 7.1)",
              "c": "critical",
              "f": "advanced",
              "v": "Correlates IDS alerts with DCS redundancy state changes to evidence availability protections via backup paths.",
              "t": "DCS alarm export, IDS",
              "d": "`index=hist_alarms` redundancy; `nozomi:alerts`",
              "q": "(index=hist_alarms sourcetype=\"dcs:redundancy\" message=\"*failover*\") OR (index=ot_sec sourcetype=\"nozomi:alerts\" (severity=\"high\" OR severity=\"critical\" OR match(severity,\"(?i)^high\")))\n| eval key=coalesce(controller_pair, dest_ip)\n| transaction key maxspan=10m\n| where mvcount(sourcetype)>1\n| table _time, key, duration, sourcetype",
              "m": "(1) Normalize timestamps; (2) validate expected failover tests; (3) exclude maintenance windows; (4) post-incident review template.",
              "z": "Timeline, Table, Single value (correlated events).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: DCS alarm export, IDS.\n• Ensure the following data sources are available: `index=hist_alarms` redundancy; `nozomi:alerts`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize timestamps; (2) validate expected failover tests; (3) exclude maintenance windows; (4) post-incident review template.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=hist_alarms sourcetype=\"dcs:redundancy\" message=\"*failover*\") OR (index=ot_sec sourcetype=\"nozomi:alerts\" (severity=\"high\" OR severity=\"critical\" OR match(severity,\"(?i)^high\")))\n| eval key=coalesce(controller_pair, dest_ip)\n| transaction key maxspan=10m\n| where mvcount(sourcetype)>1\n| table _time, key, duration, sourcetype\n```\n\nUnderstanding this SPL\n\n**Redundant Controller Failover During Attack Window (IEC 62443-4-2 / CR 7.1)** — Correlates IDS alerts with DCS redundancy state changes to evidence availability protections via backup paths.\n\nDocumented **Data sources**: `index=hist_alarms` redundancy; `nozomi:alerts`. **App/TA** (typical add-on context): DCS alarm export, IDS. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hist_alarms, ot_sec; **sourcetype**: dcs:redundancy, nozomi:alerts. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hist_alarms, index=ot_sec, sourcetype=\"dcs:redundancy\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• Filters the current rows with `where mvcount(sourcetype)>1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Redundant Controller Failover During Attack Window (IEC 62443-4-2 / CR 7.1)**): table _time, key, duration, sourcetype\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline, Table, Single value (correlated events).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect IDS alerts with DCS redundancy state changes to evidence availability protections via backup paths. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "CR 7.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 CR 7.1 is enforced — Splunk UC-22.15.44: Redundant Controller Failover During Attack Window.",
                  "ea": "Saved search 'UC-22.15.44' running on index=hist_alarms redundancy; nozomi:alerts, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.45",
              "n": "Historian Query Saturation During Incident (IEC 62443-4-2 / CR 7.2)",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks historian and OPC query rates to ensure analytical consumers do not starve operators during sustained attacks or mis-tuned reports.",
              "t": "Historian ODBC/API logs",
              "d": "`index=hist_queries`",
              "q": "index=hist_queries sourcetype=\"hist:sql\" earliest=-24h\n| bin _time span=5m\n| stats count as qps by _time, client_app, historian_cluster\n| eventstats perc95(qps) as p95_global\n| where qps > p95_global*3\n| sort - qps",
              "m": "(1) Separate reporting VLAN; (2) cap concurrent ODBC; (3) throttle BI tools; (4) burst alerts to ops.",
              "z": "Time chart, Table, Single value (saturated windows).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Historian ODBC/API logs.\n• Ensure the following data sources are available: `index=hist_queries`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Separate reporting VLAN; (2) cap concurrent ODBC; (3) throttle BI tools; (4) burst alerts to ops.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hist_queries sourcetype=\"hist:sql\" earliest=-24h\n| bin _time span=5m\n| stats count as qps by _time, client_app, historian_cluster\n| eventstats perc95(qps) as p95_global\n| where qps > p95_global*3\n| sort - qps\n```\n\nUnderstanding this SPL\n\n**Historian Query Saturation During Incident (IEC 62443-4-2 / CR 7.2)** — Tracks historian and OPC query rates to ensure analytical consumers do not starve operators during sustained attacks or mis-tuned reports.\n\nDocumented **Data sources**: `index=hist_queries`. **App/TA** (typical add-on context): Historian ODBC/API logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hist_queries; **sourcetype**: hist:sql. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hist_queries, sourcetype=\"hist:sql\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, client_app, historian_cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Filters the current rows with `where qps > p95_global*3` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value (saturated windows).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracks historian and OPC query rates to ensure analytical consumers do not starve operators during sustained attacks or mis-tuned reports. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "CR 7.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 CR 7.2 is enforced — Splunk UC-22.15.45: Historian Query Saturation During Incident.",
                  "ea": "Saved search 'UC-22.15.45' running on index=hist_queries, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.46",
              "n": "Zone Boundary Traffic Monitoring — Anomalous Inter-Zone Volume (IEC 62443-3-2 / Zone boundary traffic monitoring)",
              "c": "high",
              "f": "intermediate",
              "v": "Baselines permitted bytes and sessions between OT zones and flags spikes that may indicate policy drift, covert channels, or misrouted traffic at conduits.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=fw` `sourcetype=\"pan:traffic\"` with `src_zone`, `dest_zone`, `bytes`",
              "q": "index=fw sourcetype=\"pan:traffic\" earliest=-14d\n| eval zone_pair=concat(src_zone,\"->\",dest_zone)\n| bin _time span=1h\n| stats sum(bytes) as b, dc(dest_port) as ports by _time, zone_pair, app\n| eventstats median(b) as med by zone_pair, app\n| eval ratio=if(med>0, round(b/med,2), null())\n| where ratio>8 AND b>10485760\n| sort - b",
              "m": "(1) Build `ot_zone_pairs_expected.csv` for pairs that should exist; (2) exclude known maintenance windows; (3) alert on new `app` on sensitive pairs; (4) correlate with change records.",
              "z": "Time chart (bytes by zone_pair), Heatmap (zone_pair × hour), Table (top spikes).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=fw` `sourcetype=\"pan:traffic\"` with `src_zone`, `dest_zone`, `bytes`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build `ot_zone_pairs_expected.csv` for pairs that should exist; (2) exclude known maintenance windows; (3) alert on new `app` on sensitive pairs; (4) correlate with change records.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=fw sourcetype=\"pan:traffic\" earliest=-14d\n| eval zone_pair=concat(src_zone,\"->\",dest_zone)\n| bin _time span=1h\n| stats sum(bytes) as b, dc(dest_port) as ports by _time, zone_pair, app\n| eventstats median(b) as med by zone_pair, app\n| eval ratio=if(med>0, round(b/med,2), null())\n| where ratio>8 AND b>10485760\n| sort - b\n```\n\nUnderstanding this SPL\n\n**Zone Boundary Traffic Monitoring — Anomalous Inter-Zone Volume (IEC 62443-3-2 / Zone boundary traffic monitoring)** — Baselines permitted bytes and sessions between OT zones and flags spikes that may indicate policy drift, covert channels, or misrouted traffic at conduits.\n\nDocumented **Data sources**: `index=fw` `sourcetype=\"pan:traffic\"` with `src_zone`, `dest_zone`, `bytes`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: fw; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=fw, sourcetype=\"pan:traffic\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **zone_pair** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, zone_pair, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by zone_pair, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ratio>8 AND b>10485760` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t dc(All_Traffic.dest_port) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.app span=1h | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Zone Boundary Traffic Monitoring — Anomalous Inter-Zone Volume (IEC 62443-3-2 / Zone boundary traffic monitoring)** — Baselines permitted bytes and sessions between OT zones and flags spikes that may indicate policy drift, covert channels, or misrouted traffic at conduits.\n\nDocumented **Data sources**: `index=fw` `sourcetype=\"pan:traffic\"` with `src_zone`, `dest_zone`, `bytes`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (bytes by zone_pair), Heatmap (zone_pair × hour), Table (top spikes).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We baselines permitted bytes and sessions between OT zones and flags spikes that may indicate policy drift, covert channels, or misrouted traffic at conduits. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t dc(All_Traffic.dest_port) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.app span=1h | sort - agg_value",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 5.1 is enforced — Splunk UC-22.15.46: Zone Boundary Traffic Monitoring — Anomalous Inter-Zone Volume.",
                  "ea": "Saved search 'UC-22.15.46' running on index=fw sourcetype=\"pan:traffic\" with src_zone, dest_zone, bytes, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.47",
              "n": "Conduit Traffic Allowlist Enforcement — Unexpected DNP3 Functions (IEC 62443-3-2 / Conduit traffic allowlist enforcement)",
              "c": "critical",
              "f": "advanced",
              "v": "Detects DNP3 function codes on encrypted OT tunnels that are not in the approved conduit function matrix.",
              "t": "Edge Hub or serial concentrator logs",
              "d": "`edge_hub_dnp3` or `dnp3:frame`",
              "q": "index=ot_protocol sourcetype=\"edge_hub_dnp3\" conduit_id=\"C-DMZ-OT1\" earliest=-24h\n| lookup dnp3_conduit_matrix.csv conduit_id function_code OUTPUT allowed\n| where allowed!=\"true\"\n| stats count by src_ip, dest_ip, function_code, conduit_id\n| sort - count",
              "m": "(1) Document matrix per conduit; (2) test in lab; (3) alert and optional block at gateway; (4) annual matrix review with engineering.",
              "z": "Heatmap (function×conduit), Table, Time chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Edge Hub or serial concentrator logs.\n• Ensure the following data sources are available: `edge_hub_dnp3` or `dnp3:frame`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Document matrix per conduit; (2) test in lab; (3) alert and optional block at gateway; (4) annual matrix review with engineering.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_protocol sourcetype=\"edge_hub_dnp3\" conduit_id=\"C-DMZ-OT1\" earliest=-24h\n| lookup dnp3_conduit_matrix.csv conduit_id function_code OUTPUT allowed\n| where allowed!=\"true\"\n| stats count by src_ip, dest_ip, function_code, conduit_id\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Conduit Traffic Allowlist Enforcement — Unexpected DNP3 Functions (IEC 62443-3-2 / Conduit traffic allowlist enforcement)** — Detects DNP3 function codes on encrypted OT tunnels that are not in the approved conduit function matrix.\n\nDocumented **Data sources**: `edge_hub_dnp3` or `dnp3:frame`. **App/TA** (typical add-on context): Edge Hub or serial concentrator logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_protocol; **sourcetype**: edge_hub_dnp3. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_protocol, sourcetype=\"edge_hub_dnp3\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where allowed!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_ip, dest_ip, function_code, conduit_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (function×conduit), Table, Time chart.",
              "script": "",
              "premium": "Splunk Edge Hub",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects DNP3 function codes on encrypted OT tunnels that are not in the approved conduit function matrix. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "edge_hub",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 5.1 is enforced — Splunk UC-22.15.47: Conduit Traffic Allowlist Enforcement — Unexpected DNP3 Functions.",
                  "ea": "Saved search 'UC-22.15.47' running on edge_hub_dnp3 or dnp3:frame, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.48",
              "n": "Cross-Zone Protocol Anomaly Detection — IEC 61850 GOOSE Floods (IEC 62443-3-2 / Cross-zone protocol anomaly detection)",
              "c": "critical",
              "f": "advanced",
              "v": "Spots unusual GOOSE/AppID volumes between substation zones indicating mis-wiring, VLAN bleed, or malicious injection.",
              "t": "Substation PCAP summary feed",
              "d": "`iec61850:goose` HEC sourcetype",
              "q": "index=ot_sub sourcetype=\"iec61850:goose\" earliest=-24h\n| stats count by app_id, src_mac, vlan, bay\n| eventstats median(count) as med by app_id\n| eval ratio=round(count/med,2)\n| where ratio>5 OR count>10000\n| sort - ratio",
              "m": "(1) Baseline per bay during commissioning; (2) segregate process bus; (3) integrate relay vendor validation; (4) NERC/IEC dual use where applicable.",
              "z": "Time chart, Table, Single value (anomalous APPIDs).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Substation PCAP summary feed.\n• Ensure the following data sources are available: `iec61850:goose` HEC sourcetype.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Baseline per bay during commissioning; (2) segregate process bus; (3) integrate relay vendor validation; (4) NERC/IEC dual use where applicable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_sub sourcetype=\"iec61850:goose\" earliest=-24h\n| stats count by app_id, src_mac, vlan, bay\n| eventstats median(count) as med by app_id\n| eval ratio=round(count/med,2)\n| where ratio>5 OR count>10000\n| sort - ratio\n```\n\nUnderstanding this SPL\n\n**Cross-Zone Protocol Anomaly Detection — IEC 61850 GOOSE Floods (IEC 62443-3-2 / Cross-zone protocol anomaly detection)** — Spots unusual GOOSE/AppID volumes between substation zones indicating mis-wiring, VLAN bleed, or malicious injection.\n\nDocumented **Data sources**: `iec61850:goose` HEC sourcetype. **App/TA** (typical add-on context): Substation PCAP summary feed. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_sub; **sourcetype**: iec61850:goose. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_sub, sourcetype=\"iec61850:goose\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_id, src_mac, vlan, bay** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by app_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ratio>5 OR count>10000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value (anomalous APPIDs).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We spots unusual GOOSE/AppID volumes between substation zones indicating mis-wiring, VLAN bleed, or malicious injection. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "FR 6.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 FR 6.2 (Continuous monitoring) is enforced — Splunk UC-22.15.48: Cross-Zone Protocol Anomaly Detection — IEC 61850 GOOSE Floods.",
                  "ea": "Saved search 'UC-22.15.48' running on iec61850:goose HEC sourcetype, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.49",
              "n": "Safety Zone Isolation Verification — Non-Safety OPC on SIS VLAN (IEC 62443-3-2 / Safety zone isolation verification)",
              "c": "critical",
              "f": "advanced",
              "v": "Identifies OPC-UA or HMI sessions sourced from assets tagged non-safety on safety-controller VLANs.",
              "t": "Claroty/Nozomi + CMDB",
              "d": "`claroty:session` style events",
              "q": "index=ot_sec sourcetype=\"claroty:session\" earliest=-7d\n| lookup asset_safety_class.csv asset_id OUTPUT safety_class\n| lookup vlan_role.csv vlan OUTPUT vlan_role\n| where vlan_role=\"SIS\" AND safety_class!=\"SIS\"\n| stats dc(asset_id) as assets, count by vlan, protocol\n| sort - count",
              "m": "(1) Align CMDB safety flags; (2) re-cable or re-ACL; (3) monthly scan; (4) document compensating monitoring.",
              "z": "Table, Bar, Single value (sessions).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Claroty/Nozomi + CMDB.\n• Ensure the following data sources are available: `claroty:session` style events.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align CMDB safety flags; (2) re-cable or re-ACL; (3) monthly scan; (4) document compensating monitoring.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_sec sourcetype=\"claroty:session\" earliest=-7d\n| lookup asset_safety_class.csv asset_id OUTPUT safety_class\n| lookup vlan_role.csv vlan OUTPUT vlan_role\n| where vlan_role=\"SIS\" AND safety_class!=\"SIS\"\n| stats dc(asset_id) as assets, count by vlan, protocol\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Safety Zone Isolation Verification — Non-Safety OPC on SIS VLAN (IEC 62443-3-2 / Safety zone isolation verification)** — Identifies OPC-UA or HMI sessions sourced from assets tagged non-safety on safety-controller VLANs.\n\nDocumented **Data sources**: `claroty:session` style events. **App/TA** (typical add-on context): Claroty/Nozomi + CMDB. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_sec; **sourcetype**: claroty:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_sec, sourcetype=\"claroty:session\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where vlan_role=\"SIS\" AND safety_class!=\"SIS\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by vlan, protocol** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Bar, Single value (sessions).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We spot OPC-UA or HMI sessions sourced from assets tagged non-safety on safety-controller VLANs. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 5.1 is enforced — Splunk UC-22.15.49: Safety Zone Isolation Verification — Non-Safety OPC on SIS VLAN.",
                  "ea": "Saved search 'UC-22.15.49' running on claroty:session style events, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "Nozomi Networks Universal Add-on",
                "id": 6905,
                "url": "https://splunkbase.splunk.com/app/6905"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.50",
              "n": "DMZ Integrity Between IT and OT — Corporate Browser to OT HMI (IEC 62443-3-2 / DMZ integrity between IT and OT)",
              "c": "high",
              "f": "intermediate",
              "v": "Finds IT-originated web traffic reaching OT HMIs that should only be reachable via bastion or reverse proxy in the DMZ.",
              "t": "Palo Alto (2757)",
              "d": "`pan:traffic` url_category, zones",
              "q": "index=fw sourcetype=\"pan:traffic\" earliest=-7d\n| where dest_zone=\"OT-Web-HMI\" AND src_zone=\"Corporate-Users\" AND app=\"web-browsing\"\n| lookup ot_hmi_exposure.csv dest_ip OUTPUT exposure\n| where exposure!=\"via_dmz_proxy\"\n| stats count by src_ip, dest_ip, url_category, user\n| sort - count",
              "m": "(1) Enforce explicit proxy paths; (2) split DNS views; (3) microsegment HMI; (4) user awareness for bookmarked IPs.",
              "z": "Table, Map (src), Time chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Palo Alto](https://splunkbase.splunk.com/app/2757), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Palo Alto (2757).\n• Ensure the following data sources are available: `pan:traffic` url_category, zones.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enforce explicit proxy paths; (2) split DNS views; (3) microsegment HMI; (4) user awareness for bookmarked IPs.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=fw sourcetype=\"pan:traffic\" earliest=-7d\n| where dest_zone=\"OT-Web-HMI\" AND src_zone=\"Corporate-Users\" AND app=\"web-browsing\"\n| lookup ot_hmi_exposure.csv dest_ip OUTPUT exposure\n| where exposure!=\"via_dmz_proxy\"\n| stats count by src_ip, dest_ip, url_category, user\n| sort - count\n```\n\nUnderstanding this SPL\n\n**DMZ Integrity Between IT and OT — Corporate Browser to OT HMI (IEC 62443-3-2 / DMZ integrity between IT and OT)** — Finds IT-originated web traffic reaching OT HMIs that should only be reachable via bastion or reverse proxy in the DMZ.\n\nDocumented **Data sources**: `pan:traffic` url_category, zones. **App/TA** (typical add-on context): Palo Alto (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: fw; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=fw, sourcetype=\"pan:traffic\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where dest_zone=\"OT-Web-HMI\" AND src_zone=\"Corporate-Users\" AND app=\"web-browsing\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where exposure!=\"via_dmz_proxy\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_ip, dest_ip, url_category, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.src, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DMZ Integrity Between IT and OT — Corporate Browser to OT HMI (IEC 62443-3-2 / DMZ integrity between IT and OT)** — Finds IT-originated web traffic reaching OT HMIs that should only be reachable via bastion or reverse proxy in the DMZ.\n\nDocumented **Data sources**: `pan:traffic` url_category, zones. **App/TA** (typical add-on context): Palo Alto (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Map (src), Time chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We find IT-originated web traffic reaching OT HMIs that should only be reachable via bastion or reverse proxy in the DMZ. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.src, Web.dest | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "FR 6.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 FR 6.2 (Continuous monitoring) is enforced — Splunk UC-22.15.50: DMZ Integrity Between IT and OT — Corporate Browser to OT HMI.",
                  "ea": "Saved search 'UC-22.15.50' running on pan:traffic url_category, zones, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.51",
              "n": "Engineering Workstation Access Control — RDP Skips Jump Tier (IEC 62443-3-2 / Engineering workstation access control)",
              "c": "critical",
              "f": "intermediate",
              "v": "Alerts when RDP/WinRM sessions skip the jump tier and land directly on low-Purdue controllers or Windows I/O servers.",
              "t": "Windows + firewall",
              "d": "`WinEventLog:Security` 4624 type=10; `pan:traffic` rdp",
              "q": "(index=windows EventCode=4624 Logon_Type=10 host=\"PLC-WIN-IO-*\") OR (index=fw sourcetype=\"pan:traffic\" app=\"rdp\" dest_zone=\"OT-L1\")\n| eval session_key=coalesce(src_ip, src)\n| lookup ot_jump_required.csv dest_ip OUTPUT jump_required\n| where jump_required=\"true\"\n| stats count by user, dest_ip, host\n| sort - count",
              "m": "(1) Disable RDP on controllers; (2) enforce PSM-only path; (3) GPO firewall; (4) weekly exception report.",
              "z": "Table, Timeline, Single value (violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Windows + firewall.\n• Ensure the following data sources are available: `WinEventLog:Security` 4624 type=10; `pan:traffic` rdp.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Disable RDP on controllers; (2) enforce PSM-only path; (3) GPO firewall; (4) weekly exception report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=windows EventCode=4624 Logon_Type=10 host=\"PLC-WIN-IO-*\") OR (index=fw sourcetype=\"pan:traffic\" app=\"rdp\" dest_zone=\"OT-L1\")\n| eval session_key=coalesce(src_ip, src)\n| lookup ot_jump_required.csv dest_ip OUTPUT jump_required\n| where jump_required=\"true\"\n| stats count by user, dest_ip, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Engineering Workstation Access Control — RDP Skips Jump Tier (IEC 62443-3-2 / Engineering workstation access control)** — Alerts when RDP/WinRM sessions skip the jump tier and land directly on low-Purdue controllers or Windows I/O servers.\n\nDocumented **Data sources**: `WinEventLog:Security` 4624 type=10; `pan:traffic` rdp. **App/TA** (typical add-on context): Windows + firewall. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, fw; **sourcetype**: pan:traffic; **host** filter: PLC-WIN-IO-*. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, index=fw, sourcetype=\"pan:traffic\". Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **session_key** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where jump_required=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, dest_ip, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Engineering Workstation Access Control — RDP Skips Jump Tier (IEC 62443-3-2 / Engineering workstation access control)** — Alerts when RDP/WinRM sessions skip the jump tier and land directly on low-Purdue controllers or Windows I/O servers.\n\nDocumented **Data sources**: `WinEventLog:Security` 4624 type=10; `pan:traffic` rdp. **App/TA** (typical add-on context): Windows + firewall. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Timeline, Single value (violations).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We alerts when RDP/WinRM sessions skip the jump tier and land directly on low-Purdue controllers or Windows I/O servers. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 1.1 (Human user identification and authentication) is enforced — Splunk UC-22.15.51: Engineering Workstation Access Control — RDP Skips Jump Tier.",
                  "ea": "Saved search 'UC-22.15.51' running on WinEventLog:Security 4624 type=10; pan:traffic rdp, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.52",
              "n": "Historian-to-Corporate Data Flow Audit — Large ODBC Without SPN (IEC 62443-3-2 / Historian-to-corporate data flow audit)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks large ODBC/JDBC pulls from OT historian to corporate subnets missing service principal authentication.",
              "t": "Historian vendor logs",
              "d": "`hist:sql` or `pi:audit`",
              "q": "index=hist_queries sourcetype=\"hist:sql\" earliest=-7d\n| eval bytes_mb=round(bytes/1048576,2)\n| where bytes_mb>500 AND match(client_ip,\"^10\\.12\\.\") AND isnull(service_principal)\n| stats sum(bytes_mb) as total_mb by client_app, client_ip, historian_cluster\n| sort - total_mb",
              "m": "(1) Require Kerberos or cert auth for service accounts; (2) throttle BI extracts; (3) DLP review for tag exports; (4) document data classification.",
              "z": "Bar (apps), Table, Single value (GB moved).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Historian vendor logs.\n• Ensure the following data sources are available: `hist:sql` or `pi:audit`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require Kerberos or cert auth for service accounts; (2) throttle BI extracts; (3) DLP review for tag exports; (4) document data classification.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hist_queries sourcetype=\"hist:sql\" earliest=-7d\n| eval bytes_mb=round(bytes/1048576,2)\n| where bytes_mb>500 AND match(client_ip,\"^10\\.12\\.\") AND isnull(service_principal)\n| stats sum(bytes_mb) as total_mb by client_app, client_ip, historian_cluster\n| sort - total_mb\n```\n\nUnderstanding this SPL\n\n**Historian-to-Corporate Data Flow Audit — Large ODBC Without SPN (IEC 62443-3-2 / Historian-to-corporate data flow audit)** — Tracks large ODBC/JDBC pulls from OT historian to corporate subnets missing service principal authentication.\n\nDocumented **Data sources**: `hist:sql` or `pi:audit`. **App/TA** (typical add-on context): Historian vendor logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hist_queries; **sourcetype**: hist:sql. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hist_queries, sourcetype=\"hist:sql\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **bytes_mb** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where bytes_mb>500 AND match(client_ip,\"^10\\.12\\.\") AND isnull(service_principal)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by client_app, client_ip, historian_cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar (apps), Table, Single value (GB moved).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We tracks large ODBC/JDBC pulls from OT historian to corporate subnets missing service principal authentication. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 2.8",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 2.8 (Auditable events) is enforced — Splunk UC-22.15.52: Historian-to-Corporate Data Flow Audit — Large ODBC Without SPN.",
                  "ea": "Saved search 'UC-22.15.52' running on hist:sql or pi:audit, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.53",
              "n": "Wireless Zone Security Monitoring — Unknown Clients on OT SSID (IEC 62443-3-2 / Wireless zone security monitoring)",
              "c": "high",
              "f": "intermediate",
              "v": "Lists Wi-Fi associations to OT WLANs from unknown MAC vendors or non-instrumentation OUI prefixes.",
              "t": "WLAN controller syslog",
              "d": "`meraki:airmarshal` or `cisco:wlc`",
              "q": "index=wireless sourcetype=\"meraki:association\" OR sourcetype=\"cisco:wlc\" ssid=\"OT-PROD-*\" earliest=-24h\n| iplocation src_ip\n| lookup ot_wireless_allowlist.csv client_mac OUTPUT asset_class\n| where asset_class!=\"instrumentation\"\n| stats earliest(_time) as first_seen, latest(_time) as last_seen by client_mac, ssid, ap_name\n| sort - last_seen",
              "m": "(1) WPA3-Enterprise with cert; (2) dynamic PSK per device class; (3) block consumer OUIs; (4) RF site survey after changes.",
              "z": "Table, Map, Time chart.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[CIM: Network_Sessions](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Sessions)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: WLAN controller syslog.\n• Ensure the following data sources are available: `meraki:airmarshal` or `cisco:wlc`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) WPA3-Enterprise with cert; (2) dynamic PSK per device class; (3) block consumer OUIs; (4) RF site survey after changes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wireless sourcetype=\"meraki:association\" OR sourcetype=\"cisco:wlc\" ssid=\"OT-PROD-*\" earliest=-24h\n| iplocation src_ip\n| lookup ot_wireless_allowlist.csv client_mac OUTPUT asset_class\n| where asset_class!=\"instrumentation\"\n| stats earliest(_time) as first_seen, latest(_time) as last_seen by client_mac, ssid, ap_name\n| sort - last_seen\n```\n\nUnderstanding this SPL\n\n**Wireless Zone Security Monitoring — Unknown Clients on OT SSID (IEC 62443-3-2 / Wireless zone security monitoring)** — Lists Wi-Fi associations to OT WLANs from unknown MAC vendors or non-instrumentation OUI prefixes.\n\nDocumented **Data sources**: `meraki:airmarshal` or `cisco:wlc`. **App/TA** (typical add-on context): WLAN controller syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wireless; **sourcetype**: meraki:association, cisco:wlc. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wireless, sourcetype=\"meraki:association\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Wireless Zone Security Monitoring — Unknown Clients on OT SSID (IEC 62443-3-2 / Wireless zone security monitoring)**): iplocation src_ip\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where asset_class!=\"instrumentation\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by client_mac, ssid, ap_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.action, All_Sessions.src, All_Sessions.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Wireless Zone Security Monitoring — Unknown Clients on OT SSID (IEC 62443-3-2 / Wireless zone security monitoring)** — Lists Wi-Fi associations to OT WLANs from unknown MAC vendors or non-instrumentation OUI prefixes.\n\nDocumented **Data sources**: `meraki:airmarshal` or `cisco:wlc`. **App/TA** (typical add-on context): WLAN controller syslog. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Sessions.All_Sessions` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Map, Time chart.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We lists Wi-Fi associations to OT WLANs from unknown MAC vendors or non-instrumentation OUI prefixes. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Network_Sessions"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Sessions.All_Sessions by All_Sessions.action, All_Sessions.src, All_Sessions.user | sort - count",
              "e": [
                "syslog"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 1.1 (Human user identification and authentication) is enforced — Splunk UC-22.15.53: Wireless Zone Security Monitoring — Unknown Clients on OT SSID.",
                  "ea": "Saved search 'UC-22.15.53' running on meraki:airmarshal or cisco:wlc, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.54",
              "n": "Remote Access Conduit Integrity — Split Tunnel on OT VPN Portal (IEC 62443-3-2 / Remote access conduit integrity)",
              "c": "critical",
              "f": "intermediate",
              "v": "Detects GlobalProtect configurations that expose split routing while OT portals should force tunnel-all.",
              "t": "Palo Alto (2757)",
              "d": "`pan:globalprotect` config fields if logged",
              "q": "index=vpn sourcetype=\"pan:globalprotect\" earliest=-7d\n| eval split=coalesce(split_tunnel,\"unknown\")\n| lookup ot_vpn_profiles.csv portal OUTPUT tunnel_mode_required\n| where tunnel_mode_required=\"full\" AND split!=\"disabled\"\n| stats dc(public_ip) as distinct_exit, count by user, portal, split",
              "m": "(1) Push portal config from Panorama; (2) block local subnet access lists; (3) monthly config export diff; (4) user re-education.",
              "z": "Table, Single value (misconfigured sessions), Bar.",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Palo Alto](https://splunkbase.splunk.com/app/2757), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Palo Alto (2757).\n• Ensure the following data sources are available: `pan:globalprotect` config fields if logged.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Push portal config from Panorama; (2) block local subnet access lists; (3) monthly config export diff; (4) user re-education.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vpn sourcetype=\"pan:globalprotect\" earliest=-7d\n| eval split=coalesce(split_tunnel,\"unknown\")\n| lookup ot_vpn_profiles.csv portal OUTPUT tunnel_mode_required\n| where tunnel_mode_required=\"full\" AND split!=\"disabled\"\n| stats dc(public_ip) as distinct_exit, count by user, portal, split\n```\n\nUnderstanding this SPL\n\n**Remote Access Conduit Integrity — Split Tunnel on OT VPN Portal (IEC 62443-3-2 / Remote access conduit integrity)** — Detects GlobalProtect configurations that expose split routing while OT portals should force tunnel-all.\n\nDocumented **Data sources**: `pan:globalprotect` config fields if logged. **App/TA** (typical add-on context): Palo Alto (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vpn; **sourcetype**: pan:globalprotect. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vpn, sourcetype=\"pan:globalprotect\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **split** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where tunnel_mode_required=\"full\" AND split!=\"disabled\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, portal, split** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Remote Access Conduit Integrity — Split Tunnel on OT VPN Portal (IEC 62443-3-2 / Remote access conduit integrity)** — Detects GlobalProtect configurations that expose split routing while OT portals should force tunnel-all.\n\nDocumented **Data sources**: `pan:globalprotect` config fields if logged. **App/TA** (typical add-on context): Palo Alto (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Single value (misconfigured sessions), Bar.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detects GlobalProtect configurations that expose split routing while OT portals should force tunnel-all. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 5.1 is enforced — Splunk UC-22.15.54: Remote Access Conduit Integrity — Split Tunnel on OT VPN Portal.",
                  "ea": "Saved search 'UC-22.15.54' running on pan:globalprotect config fields if logged, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.15.55",
              "n": "Zone Trust Level Verification — SL-T vs Observed Purdue Layer (IEC 62443-3-2 / Zone trust level verification)",
              "c": "high",
              "f": "intermediate",
              "v": "Joins SL-T/Criticality from inventory to observed Purdue level to find assets operating below required trust level for their criticality.",
              "t": "Industrial Asset Intelligence",
              "d": "`ot_assets` with `sl_t`; `ot_topology` observed layer",
              "q": "index=ot_assets OR index=ot_topology earliest=-1d\n| eval id=coalesce(asset_id, device_id)\n| stats latest(sl_t) as required_tier, latest(observed_purdue_level) as observed by id\n| eval mismatch=case(required_tier=\"SL-3\" AND observed>2,1, required_tier=\"SL-2\" AND observed>3,1, true(),0)\n| where mismatch=1\n| table id, required_tier, observed",
              "m": "(1) Refresh topology weekly from IDS; (2) reconcile moves with CMDB; (3) migration plan to correct zone; (4) risk acceptance workflow for exceptions.",
              "z": "Table, Scatter (tier vs level), Single value (count).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Industrial Asset Intelligence.\n• Ensure the following data sources are available: `ot_assets` with `sl_t`; `ot_topology` observed layer.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Refresh topology weekly from IDS; (2) reconcile moves with CMDB; (3) migration plan to correct zone; (4) risk acceptance workflow for exceptions.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot_assets OR index=ot_topology earliest=-1d\n| eval id=coalesce(asset_id, device_id)\n| stats latest(sl_t) as required_tier, latest(observed_purdue_level) as observed by id\n| eval mismatch=case(required_tier=\"SL-3\" AND observed>2,1, required_tier=\"SL-2\" AND observed>3,1, true(),0)\n| where mismatch=1\n| table id, required_tier, observed\n```\n\nUnderstanding this SPL\n\n**Zone Trust Level Verification — SL-T vs Observed Purdue Layer (IEC 62443-3-2 / Zone trust level verification)** — Joins SL-T/Criticality from inventory to observed Purdue level to find assets operating below required trust level for their criticality.\n\nDocumented **Data sources**: `ot_assets` with `sl_t`; `ot_topology` observed layer. **App/TA** (typical add-on context): Industrial Asset Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot_assets, ot_topology.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot_assets, index=ot_topology, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **mismatch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mismatch=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Zone Trust Level Verification — SL-T vs Observed Purdue Layer (IEC 62443-3-2 / Zone trust level verification)**): table id, required_tier, observed\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table, Scatter (tier vs level), Single value (count).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We joins SL-T/Criticality from inventory to observed Purdue level to find assets operating below required trust level for their criticality. That helps protect factories, plants, and the gear that runs them from mistakes and bad actors the standards call out.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IEC 62443"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "IEC 62443",
                  "v": "2013-ongoing",
                  "cl": "SR 1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IEC 62443 SR 1.1 (Human user identification and authentication) is enforced — Splunk UC-22.15.55: Zone Trust Level Verification — SL-T vs Observed Purdue Layer.",
                  "ea": "Saved search 'UC-22.15.55' running on ot_assets with sl_t; ot_topology observed layer, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 55,
            "none": 0
          }
        },
        {
          "i": "22.12",
          "n": "SOX / ITGC",
          "u": [
            {
              "i": "22.12.1",
              "n": "User provisioning evidence tied to financial application accounts (SOX §404 / COSO)",
              "c": "high",
              "f": "beginner",
              "v": "Supports SOX IT general and application controls evidence for \"User provisioning evidence tied to financial application accounts\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 / COSO attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**User provisioning evidence tied to financial application accounts (SOX §404 / COSO)** — Supports SOX IT general and application controls evidence for \"User provisioning evidence tied to financial application accounts\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 / COSO attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We make sure new financial application accounts show up the way our provisioning and control owners expect, using the same tickets and trails auditors review. SOX is about making sure our financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.Provisioning",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.AccessMgmt.Provisioning (User provisioning) is enforced — Splunk UC-22.12.1: User provisioning evidence tied to financial application accounts.",
                  "ea": "Saved search 'UC-22.12.1' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.2",
              "n": "Privileged access review completion and aging for financial systems (SOX §404)",
              "c": "medium",
              "f": "expert",
              "v": "Supports SOX IT general and application controls evidence for \"Privileged access review completion and aging for financial systems\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Privileged access review completion and aging for financial systems (SOX §404)** — Supports SOX IT general and application controls evidence for \"Privileged access review completion and aging for financial systems\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for privileged access review completion and aging for financial systems the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.Privileged",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.AccessMgmt.Privileged (Privileged access) is enforced — Splunk UC-22.12.2: Privileged access review completion and aging for financial systems.",
                  "ea": "Saved search 'UC-22.12.2' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.3",
              "n": "Segregation of duties conflicts across SAP / Oracle financial roles (SOX §404)",
              "c": "low",
              "f": "advanced",
              "v": "Supports SOX IT general and application controls evidence for \"Segregation of duties conflicts across SAP / Oracle financial roles\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Segregation of duties conflicts across SAP / Oracle financial roles (SOX §404)** — Supports SOX IT general and application controls evidence for \"Segregation of duties conflicts across SAP / Oracle financial roles\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for segregation of duties conflicts across SAP / Oracle financial roles the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX",
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.SOD",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.AccessMgmt.SOD (Segregation of duties) is enforced — Splunk UC-22.12.3: Segregation of duties conflicts across SAP / Oracle financial roles.",
                  "ea": "Saved search 'UC-22.12.3' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.4",
              "n": "Administrator and break-glass usage on production financial hosts (SOX §404)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports SOX IT general and application controls evidence for \"Administrator and break-glass usage on production financial hosts\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t dc(Authentication.dest) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - agg_value",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t dc(Authentication.dest) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - agg_value\n```\n\nUnderstanding this SPL\n\n**Administrator and break-glass usage on production financial hosts (SOX §404)** — Supports SOX IT general and application controls evidence for \"Administrator and break-glass usage on production financial hosts\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for administrator and break-glass usage on production financial hosts the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.4: Administrator and break-glass usage on production financial hosts.",
                  "ea": "Saved search 'UC-22.12.4' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.5",
              "n": "Terminated-user authentication after HR termination date (SOX §404)",
              "c": "high",
              "f": "beginner",
              "v": "Supports SOX IT general and application controls evidence for \"Terminated-user authentication after HR termination date\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Terminated-user authentication after HR termination date (SOX §404)** — Supports SOX IT general and application controls evidence for \"Terminated-user authentication after HR termination date\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for terminated-user authentication after HR termination date the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.Termination",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.AccessMgmt.Termination (Timely deprovisioning) is enforced — Splunk UC-22.12.5: Terminated-user authentication after HR termination date.",
                  "ea": "Saved search 'UC-22.12.5' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.6",
              "n": "Periodic access certification exceptions for in-scope applications (SOX §404)",
              "c": "medium",
              "f": "expert",
              "v": "Supports SOX IT general and application controls evidence for \"Periodic access certification exceptions for in-scope applications\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Periodic access certification exceptions for in-scope applications (SOX §404)** — Supports SOX IT general and application controls evidence for \"Periodic access certification exceptions for in-scope applications\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for periodic access certification exceptions for in-scope applications the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.6: Periodic access certification exceptions for in-scope applications.",
                  "ea": "Saved search 'UC-22.12.6' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.7",
              "n": "Orphaned and dormant accounts with recent interactive activity (SOX §404)",
              "c": "low",
              "f": "advanced",
              "v": "Supports SOX IT general and application controls evidence for \"Orphaned and dormant accounts with recent interactive activity\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Orphaned and dormant accounts with recent interactive activity (SOX §404)** — Supports SOX IT general and application controls evidence for \"Orphaned and dormant accounts with recent interactive activity\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX §404 attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for orphaned and dormant accounts with recent interactive activity the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.7: Orphaned and dormant accounts with recent interactive activity.",
                  "ea": "Saved search 'UC-22.12.7' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.8",
              "n": "Emergency change retrospective documentation completeness (SOX ITGC)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports SOX IT general and application controls evidence for \"Emergency change retrospective documentation completeness\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Emergency change retrospective documentation completeness (SOX ITGC)** — Supports SOX IT general and application controls evidence for \"Emergency change retrospective documentation completeness\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for emergency change retrospective documentation completeness the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.8: Emergency change retrospective documentation completeness.",
                  "ea": "Saved search 'UC-22.12.8' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.9",
              "n": "Production configuration drift without matching approved change (SOX ITGC)",
              "c": "high",
              "f": "beginner",
              "v": "Supports SOX IT general and application controls evidence for \"Production configuration drift without matching approved change\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest span=1h | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest span=1h | sort - count\n```\n\nUnderstanding this SPL\n\n**Production configuration drift without matching approved change (SOX ITGC)** — Supports SOX IT general and application controls evidence for \"Production configuration drift without matching approved change\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for production configuration drift without matching approved change the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.9: Production configuration drift without matching approved change.",
                  "ea": "Saved search 'UC-22.12.9' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.10",
              "n": "Change approval workflow evidence for financially material CIs (SOX ITGC)",
              "c": "medium",
              "f": "expert",
              "v": "Supports SOX IT general and application controls evidence for \"Change approval workflow evidence for financially material CIs\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Change approval workflow evidence for financially material CIs (SOX ITGC)** — Supports SOX IT general and application controls evidence for \"Change approval workflow evidence for financially material CIs\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for change approval workflow evidence for financially material CIs the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Approval",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Approval (Change approved) is enforced — Splunk UC-22.12.10: Change approval workflow evidence for financially material CIs.",
                  "ea": "Saved search 'UC-22.12.10' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.11",
              "n": "CAB evidence and high-risk change documentation gaps (SOX ITGC)",
              "c": "low",
              "f": "advanced",
              "v": "Supports SOX IT general and application controls evidence for \"CAB evidence and high-risk change documentation gaps\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**CAB evidence and high-risk change documentation gaps (SOX ITGC)** — Supports SOX IT general and application controls evidence for \"CAB evidence and high-risk change documentation gaps\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for cAB evidence and high-risk change documentation gaps the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Approval",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Approval (Change approved) is enforced — Splunk UC-22.12.11: CAB evidence and high-risk change documentation gaps.",
                  "ea": "Saved search 'UC-22.12.11' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.12",
              "n": "Production change volume during financial close windows (SOX close)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports SOX IT general and application controls evidence for \"Production change volume during financial close windows\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX close attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=1w | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=1w | sort - count\n```\n\nUnderstanding this SPL\n\n**Production change volume during financial close windows (SOX close)** — Supports SOX IT general and application controls evidence for \"Production change volume during financial close windows\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX close attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for production change volume during financial close windows the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.12: Production change volume during financial close windows.",
                  "ea": "Saved search 'UC-22.12.12' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.13",
              "n": "Failed change rollback and backout evidence tracking (SOX ITGC)",
              "c": "high",
              "f": "beginner",
              "v": "Supports SOX IT general and application controls evidence for \"Failed change rollback and backout evidence tracking\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Failed change rollback and backout evidence tracking (SOX ITGC)** — Supports SOX IT general and application controls evidence for \"Failed change rollback and backout evidence tracking\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for failed change rollback and backout evidence tracking the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.13: Failed change rollback and backout evidence tracking.",
                  "ea": "Saved search 'UC-22.12.13' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.14",
              "n": "Changes executed outside approved maintenance windows (SOX ITGC)",
              "c": "medium",
              "f": "expert",
              "v": "Supports SOX IT general and application controls evidence for \"Changes executed outside approved maintenance windows\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Changes executed outside approved maintenance windows (SOX ITGC)** — Supports SOX IT general and application controls evidence for \"Changes executed outside approved maintenance windows\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for changes executed outside approved maintenance windows the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.14: Changes executed outside approved maintenance windows.",
                  "ea": "Saved search 'UC-22.12.14' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.15",
              "n": "Financial close batch job failures and runtime SLA breaches (SOX close)",
              "c": "low",
              "f": "advanced",
              "v": "Supports SOX IT general and application controls evidence for \"Financial close batch job failures and runtime SLA breaches\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX close attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Financial close batch job failures and runtime SLA breaches (SOX close)** — Supports SOX IT general and application controls evidence for \"Financial close batch job failures and runtime SLA breaches\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX close attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for financial close batch job failures and runtime SLA breaches the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.15: Financial close batch job failures and runtime SLA breaches.",
                  "ea": "Saved search 'UC-22.12.15' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.16",
              "n": "General ledger database backup success within policy windows (SOX ITGC)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports SOX IT general and application controls evidence for \"General ledger database backup success within policy windows\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=1d | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=1d | sort - count\n```\n\nUnderstanding this SPL\n\n**General ledger database backup success within policy windows (SOX ITGC)** — Supports SOX IT general and application controls evidence for \"General ledger database backup success within policy windows\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for general ledger database backup success within policy windows the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.16: General ledger database backup success within policy windows.",
                  "ea": "Saved search 'UC-22.12.16' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.17",
              "n": "Unauthorized batch schedule or dependency modifications (SOX ITGC)",
              "c": "high",
              "f": "beginner",
              "v": "Supports SOX IT general and application controls evidence for \"Unauthorized batch schedule or dependency modifications\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Unauthorized batch schedule or dependency modifications (SOX ITGC)** — Supports SOX IT general and application controls evidence for \"Unauthorized batch schedule or dependency modifications\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX ITGC attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for unauthorized batch schedule or dependency modifications the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.17: Unauthorized batch schedule or dependency modifications.",
                  "ea": "Saved search 'UC-22.12.17' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.18",
              "n": "ITSI service health for financial reporting dependency chain (SOX availability)",
              "c": "medium",
              "f": "expert",
              "v": "Supports SOX IT general and application controls evidence for \"ITSI service health for financial reporting dependency chain\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX availability attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=1h | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=1h | sort - count\n```\n\nUnderstanding this SPL\n\n**ITSI service health for financial reporting dependency chain (SOX availability)** — Supports SOX IT general and application controls evidence for \"ITSI service health for financial reporting dependency chain\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX availability attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for iTSI service health for financial reporting dependency chain the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.18: ITSI service health for financial reporting dependency chain.",
                  "ea": "Saved search 'UC-22.12.18' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.19",
              "n": "Close-processing cluster CPU saturation during peak windows (SOX performance)",
              "c": "low",
              "f": "advanced",
              "v": "Supports SOX IT general and application controls evidence for \"Close-processing cluster CPU saturation during peak windows\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX performance attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**Close-processing cluster CPU saturation during peak windows (SOX performance)** — Supports SOX IT general and application controls evidence for \"Close-processing cluster CPU saturation during peak windows\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX performance attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for close-processing cluster CPU saturation during peak windows the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.19: Close-processing cluster CPU saturation during peak windows.",
                  "ea": "Saved search 'UC-22.12.19' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.20",
              "n": "Disaster recovery test execution and evidence correlation (SOX DR)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports SOX IT general and application controls evidence for \"Disaster recovery test execution and evidence correlation\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX DR attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Disaster recovery test execution and evidence correlation (SOX DR)** — Supports SOX IT general and application controls evidence for \"Disaster recovery test execution and evidence correlation\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX DR attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for disaster recovery test execution and evidence correlation the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.20: Disaster recovery test execution and evidence correlation.",
                  "ea": "Saved search 'UC-22.12.20' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.21",
              "n": "Priority incident aging for finance-critical configuration items (SOX operations)",
              "c": "high",
              "f": "beginner",
              "v": "Supports SOX IT general and application controls evidence for \"Priority incident aging for finance-critical configuration items\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX operations attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Priority incident aging for finance-critical configuration items (SOX operations)** — Supports SOX IT general and application controls evidence for \"Priority incident aging for finance-critical configuration items\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX operations attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for priority incident aging for finance-critical configuration items the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.21: Priority incident aging for finance-critical configuration items.",
                  "ea": "Saved search 'UC-22.12.21' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.22",
              "n": "Financial close checklist task completion by owner (SOX close)",
              "c": "medium",
              "f": "expert",
              "v": "Supports SOX IT general and application controls evidence for \"Financial close checklist task completion by owner\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX close attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Financial close checklist task completion by owner (SOX close)** — Supports SOX IT general and application controls evidence for \"Financial close checklist task completion by owner\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX close attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for financial close checklist task completion by owner the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.22: Financial close checklist task completion by owner.",
                  "ea": "Saved search 'UC-22.12.22' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.23",
              "n": "After-hours and high-value journal entry concentration (SOX JE)",
              "c": "low",
              "f": "advanced",
              "v": "Supports SOX IT general and application controls evidence for \"After-hours and high-value journal entry concentration\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX JE attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**After-hours and high-value journal entry concentration (SOX JE)** — Supports SOX IT general and application controls evidence for \"After-hours and high-value journal entry concentration\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX JE attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for after-hours and high-value journal entry concentration the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.23: After-hours and high-value journal entry concentration.",
                  "ea": "Saved search 'UC-22.12.23' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.24",
              "n": "Sequential ERP document number gap detection (SOX audit trail)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports SOX IT general and application controls evidence for \"Sequential ERP document number gap detection\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX audit trail attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Sequential ERP document number gap detection (SOX audit trail)** — Supports SOX IT general and application controls evidence for \"Sequential ERP document number gap detection\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX audit trail attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for sequential ERP document number gap detection the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.24: Sequential ERP document number gap detection.",
                  "ea": "Saved search 'UC-22.12.24' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.25",
              "n": "Duplicate disbursement pattern detection in AP subledger (SOX cash)",
              "c": "high",
              "f": "beginner",
              "v": "Supports SOX IT general and application controls evidence for \"Duplicate disbursement pattern detection in AP subledger\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX cash attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Duplicate disbursement pattern detection in AP subledger (SOX cash)** — Supports SOX IT general and application controls evidence for \"Duplicate disbursement pattern detection in AP subledger\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX cash attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for duplicate disbursement pattern detection in AP subledger the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.25: Duplicate disbursement pattern detection in AP subledger.",
                  "ea": "Saved search 'UC-22.12.25' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.26",
              "n": "Sensitive management financial report access and export (SOX reporting)",
              "c": "medium",
              "f": "expert",
              "v": "Supports SOX IT general and application controls evidence for \"Sensitive management financial report access and export\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX reporting attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.action | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.action | sort - count\n```\n\nUnderstanding this SPL\n\n**Sensitive management financial report access and export (SOX reporting)** — Supports SOX IT general and application controls evidence for \"Sensitive management financial report access and export\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX reporting attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for sensitive management financial report access and export the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.Review",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.AccessMgmt.Review (Periodic access review) is enforced — Splunk UC-22.12.26: Sensitive management financial report access and export.",
                  "ea": "Saved search 'UC-22.12.26' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.27",
              "n": "Subledger-to-general-ledger reconciliation variance monitoring (SOX reconciliation)",
              "c": "low",
              "f": "advanced",
              "v": "Supports SOX IT general and application controls evidence for \"Subledger-to-general-ledger reconciliation variance monitoring\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX reconciliation attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Subledger-to-general-ledger reconciliation variance monitoring (SOX reconciliation)** — Supports SOX IT general and application controls evidence for \"Subledger-to-general-ledger reconciliation variance monitoring\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX reconciliation attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for subledger-to-general-ledger reconciliation variance monitoring the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.27: Subledger-to-general-ledger reconciliation variance monitoring.",
                  "ea": "Saved search 'UC-22.12.27' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.28",
              "n": "Quarterly privileged ERP role population for sign-off (SOX access)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports SOX IT general and application controls evidence for \"Quarterly privileged ERP role population for sign-off\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX access attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Quarterly privileged ERP role population for sign-off (SOX access)** — Supports SOX IT general and application controls evidence for \"Quarterly privileged ERP role population for sign-off\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX access attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for quarterly privileged ERP role population for sign-off the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.Privileged",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.AccessMgmt.Privileged (Privileged access) is enforced — Splunk UC-22.12.28: Quarterly privileged ERP role population for sign-off.",
                  "ea": "Saved search 'UC-22.12.28' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.29",
              "n": "IT control testing sample evidence retrieval by control ID (SOX testing)",
              "c": "high",
              "f": "beginner",
              "v": "Supports SOX IT general and application controls evidence for \"IT control testing sample evidence retrieval by control ID\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX testing attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**IT control testing sample evidence retrieval by control ID (SOX testing)** — Supports SOX IT general and application controls evidence for \"IT control testing sample evidence retrieval by control ID\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX testing attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for iT control testing sample evidence retrieval by control ID the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.29: IT control testing sample evidence retrieval by control ID.",
                  "ea": "Saved search 'UC-22.12.29' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.30",
              "n": "Open IT control exception aging and escalation tiers (SOX exceptions)",
              "c": "medium",
              "f": "expert",
              "v": "Supports SOX IT general and application controls evidence for \"Open IT control exception aging and escalation tiers\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX exceptions attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Open IT control exception aging and escalation tiers (SOX exceptions)** — Supports SOX IT general and application controls evidence for \"Open IT control exception aging and escalation tiers\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX exceptions attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for open IT control exception aging and escalation tiers the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.30: Open IT control exception aging and escalation tiers.",
                  "ea": "Saved search 'UC-22.12.30' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.31",
              "n": "Audit finding remediation milestone and due-date risk (SOX remediation)",
              "c": "low",
              "f": "advanced",
              "v": "Supports SOX IT general and application controls evidence for \"Audit finding remediation milestone and due-date risk\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX remediation attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Audit finding remediation milestone and due-date risk (SOX remediation)** — Supports SOX IT general and application controls evidence for \"Audit finding remediation milestone and due-date risk\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX remediation attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for audit finding remediation milestone and due-date risk the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.31: Audit finding remediation milestone and due-date risk.",
                  "ea": "Saved search 'UC-22.12.31' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.32",
              "n": "External audit IT finding closure and retest documentation (SOX audit)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports SOX IT general and application controls evidence for \"External audit IT finding closure and retest documentation\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX audit attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**External audit IT finding closure and retest documentation (SOX audit)** — Supports SOX IT general and application controls evidence for \"External audit IT finding closure and retest documentation\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX audit attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for external audit IT finding closure and retest documentation the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.32: External audit IT finding closure and retest documentation.",
                  "ea": "Saved search 'UC-22.12.32' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.33",
              "n": "IT control self-assessment questionnaire completion rates (SOX CSA)",
              "c": "high",
              "f": "beginner",
              "v": "Supports SOX IT general and application controls evidence for \"IT control self-assessment questionnaire completion rates\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX CSA attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**IT control self-assessment questionnaire completion rates (SOX CSA)** — Supports SOX IT general and application controls evidence for \"IT control self-assessment questionnaire completion rates\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX CSA attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for iT control self-assessment questionnaire completion rates the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.33: IT control self-assessment questionnaire completion rates.",
                  "ea": "Saved search 'UC-22.12.33' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.34",
              "n": "IT risk register residual score movement for financial reporting risks (SOX risk)",
              "c": "medium",
              "f": "expert",
              "v": "Supports SOX IT general and application controls evidence for \"IT risk register residual score movement for financial reporting risks\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX risk attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**IT risk register residual score movement for financial reporting risks (SOX risk)** — Supports SOX IT general and application controls evidence for \"IT risk register residual score movement for financial reporting risks\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX risk attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for iT risk register residual score movement for financial reporting risks the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.34: IT risk register residual score movement for financial reporting risks.",
                  "ea": "Saved search 'UC-22.12.34' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.35",
              "n": "Monthly ITGC KPI pack for management review evidence (SOX management review)",
              "c": "low",
              "f": "advanced",
              "v": "Supports SOX IT general and application controls evidence for \"Monthly ITGC KPI pack for management review evidence\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX management review attestation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Bar chart",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Active Directory](https://splunkbase.splunk.com/app/3207), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Monthly ITGC KPI pack for management review evidence (SOX management review)** — Supports SOX IT general and application controls evidence for \"Monthly ITGC KPI pack for management review evidence\" by correlating security, change, ERP, and ITSM telemetry with control ownership records, strengthening SOX management review attestation.\n\nDocumented **Data sources**: `WinEventLog:Security`, Active Directory replication/audit, `snow:*`, SAP/Oracle audit, `mssql:audit`, PAM, backup jobs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Active Directory (3207), Splunk DB Connect (2686), Splunk Add-on for ServiceNow (1928), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Bar chart",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for monthly ITGC KPI pack for management review evidence the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SOX / ITGC"
              ],
              "a": [
                "Authentication",
                "Change",
                "Ticket_Management- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.35: Monthly ITGC KPI pack for management review evidence.",
                  "ea": "Saved search 'UC-22.12.35' running on WinEventLog:Security, Active Directory replication/audit, snow:*, SAP/Oracle audit, mssql:audit, PAM, backup jobs, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 30.4,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 35,
            "none": 0
          }
        },
        {
          "i": "22.16",
          "n": "TSA Pipeline Security",
          "u": [
            {
              "i": "22.16.1",
              "n": "IT/OT boundary deny vs allow ratio by zone pair (TSA Pipeline Security)",
              "c": "high",
              "f": "beginner",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"IT/OT boundary deny vs allow ratio by zone pair\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.app | sort - agg_value",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.app | sort - agg_value\n```\n\nUnderstanding this SPL\n\n**IT/OT boundary deny vs allow ratio by zone pair (TSA Pipeline Security)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"IT/OT boundary deny vs allow ratio by zone pair\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"IT/OT boundary deny vs allow ratio by zone pair\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.1: IT/OT boundary deny vs allow ratio by zone pair.",
                  "ea": "Saved search 'UC-22.16.1' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.2",
              "n": "Cross-zone traffic volume spike vs baseline (TSA segmentation)",
              "c": "medium",
              "f": "expert",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Cross-zone traffic volume spike vs baseline\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - agg_value",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - agg_value\n```\n\nUnderstanding this SPL\n\n**Cross-zone traffic volume spike vs baseline (TSA segmentation)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Cross-zone traffic volume spike vs baseline\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Cross-zone traffic volume spike vs baseline\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.2: Cross-zone traffic volume spike vs baseline.",
                  "ea": "Saved search 'UC-22.16.2' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.3",
              "n": "Lateral authentication chains across OT VLANs (TSA IR readiness)",
              "c": "low",
              "f": "advanced",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Lateral authentication chains across OT VLANs\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Lateral authentication chains across OT VLANs (TSA IR readiness)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Lateral authentication chains across OT VLANs\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Lateral authentication chains across OT VLANs\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.3: Lateral authentication chains across OT VLANs.",
                  "ea": "Saved search 'UC-22.16.3' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.4",
              "n": "Unexpected IT-style applications in OT enclaves (TSA segmentation)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Unexpected IT-style applications in OT enclaves\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.app | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.app | sort - count\n```\n\nUnderstanding this SPL\n\n**Unexpected IT-style applications in OT enclaves (TSA segmentation)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Unexpected IT-style applications in OT enclaves\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Unexpected IT-style applications in OT enclaves\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.4: Unexpected IT-style applications in OT enclaves.",
                  "ea": "Saved search 'UC-22.16.4' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.5",
              "n": "DMZ jump host concurrent multi-segment sessions (TSA architecture)",
              "c": "high",
              "f": "beginner",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"DMZ jump host concurrent multi-segment sessions\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**DMZ jump host concurrent multi-segment sessions (TSA architecture)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"DMZ jump host concurrent multi-segment sessions\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"DMZ jump host concurrent multi-segment sessions\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.5: DMZ jump host concurrent multi-segment sessions.",
                  "ea": "Saved search 'UC-22.16.5' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.6",
              "n": "Unauthorized MAC appearances on OT uplink ports (TSA physical/logical)",
              "c": "medium",
              "f": "expert",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Unauthorized MAC appearances on OT uplink ports\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**Unauthorized MAC appearances on OT uplink ports (TSA physical/logical)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Unauthorized MAC appearances on OT uplink ports\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Unauthorized MAC appearances on OT uplink ports\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.6: Unauthorized MAC appearances on OT uplink ports.",
                  "ea": "Saved search 'UC-22.16.6' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.7",
              "n": "Interactive logons to pipeline engineering workstations (TSA access)",
              "c": "low",
              "f": "advanced",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Interactive logons to pipeline engineering workstations\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Interactive logons to pipeline engineering workstations (TSA access)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Interactive logons to pipeline engineering workstations\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Interactive logons to pipeline engineering workstations\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.7: Interactive logons to pipeline engineering workstations.",
                  "ea": "Saved search 'UC-22.16.7' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.8",
              "n": "Privileged remote maintenance with MFA and recording correlation (TSA access)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Privileged remote maintenance with MFA and recording correlation\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Privileged remote maintenance with MFA and recording correlation (TSA access)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Privileged remote maintenance with MFA and recording correlation\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Privileged remote maintenance with MFA and recording correlation\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.8: Privileged remote maintenance with MFA and recording correlation.",
                  "ea": "Saved search 'UC-22.16.8' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.9",
              "n": "Contractor MFA gaps to OT bastion and VPN portals (TSA access)",
              "c": "high",
              "f": "beginner",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Contractor MFA gaps to OT bastion and VPN portals\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Contractor MFA gaps to OT bastion and VPN portals (TSA access)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Contractor MFA gaps to OT bastion and VPN portals\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Contractor MFA gaps to OT bastion and VPN portals\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.9: Contractor MFA gaps to OT bastion and VPN portals.",
                  "ea": "Saved search 'UC-22.16.9' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.10",
              "n": "Vendor account usage outside approved maintenance windows (TSA vendor access)",
              "c": "medium",
              "f": "expert",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Vendor account usage outside approved maintenance windows\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Vendor account usage outside approved maintenance windows (TSA vendor access)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Vendor account usage outside approved maintenance windows\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Vendor account usage outside approved maintenance windows\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.10: Vendor account usage outside approved maintenance windows.",
                  "ea": "Saved search 'UC-22.16.10' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.11",
              "n": "Shared OT maintenance account attribution via PAM (TSA access)",
              "c": "low",
              "f": "advanced",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Shared OT maintenance account attribution via PAM\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Shared OT maintenance account attribution via PAM (TSA access)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Shared OT maintenance account attribution via PAM\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Shared OT maintenance account attribution via PAM\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.11: Shared OT maintenance account attribution via PAM.",
                  "ea": "Saved search 'UC-22.16.11' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.12",
              "n": "Break-glass vault usage correlated to active P1 incidents (TSA emergency access)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Break-glass vault usage correlated to active P1 incidents\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Break-glass vault usage correlated to active P1 incidents (TSA emergency access)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Break-glass vault usage correlated to active P1 incidents\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Break-glass vault usage correlated to active P1 incidents\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.12: Break-glass vault usage correlated to active P1 incidents.",
                  "ea": "Saved search 'UC-22.16.12' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.13",
              "n": "OT alert volume by NIST CSF-style category (TSA incident response)",
              "c": "high",
              "f": "beginner",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"OT alert volume by NIST CSF-style category\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**OT alert volume by NIST CSF-style category (TSA incident response)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"OT alert volume by NIST CSF-style category\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"OT alert volume by NIST CSF-style category\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.13: OT alert volume by NIST CSF-style category.",
                  "ea": "Saved search 'UC-22.16.13' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.14",
              "n": "Cybersecurity plan milestone on-time completion (TSA SD)",
              "c": "medium",
              "f": "expert",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Cybersecurity plan milestone on-time completion\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value\n```\n\nUnderstanding this SPL\n\n**Cybersecurity plan milestone on-time completion (TSA SD)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Cybersecurity plan milestone on-time completion\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Cybersecurity plan milestone on-time completion\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.14: Cybersecurity plan milestone on-time completion.",
                  "ea": "Saved search 'UC-22.16.14' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.15",
              "n": "Composite severity from OT alerts and SCADA health (TSA IR)",
              "c": "low",
              "f": "advanced",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Composite severity from OT alerts and SCADA health\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Composite severity from OT alerts and SCADA health (TSA IR)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Composite severity from OT alerts and SCADA health\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Composite severity from OT alerts and SCADA health\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.15: Composite severity from OT alerts and SCADA health.",
                  "ea": "Saved search 'UC-22.16.15' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.16",
              "n": "Regulatory notification task checklist aging (TSA reporting)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Regulatory notification task checklist aging\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Regulatory notification task checklist aging (TSA reporting)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Regulatory notification task checklist aging\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Regulatory notification task checklist aging\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.16: Regulatory notification task checklist aging.",
                  "ea": "Saved search 'UC-22.16.16' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.17",
              "n": "PLC mode changes during active OT incidents (TSA containment)",
              "c": "high",
              "f": "beginner",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"PLC mode changes during active OT incidents\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**PLC mode changes during active OT incidents (TSA containment)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"PLC mode changes during active OT incidents\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"PLC mode changes during active OT incidents\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.17: PLC mode changes during active OT incidents.",
                  "ea": "Saved search 'UC-22.16.17' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.18",
              "n": "Post-incident recovery test evidence in problem records (TSA recovery)",
              "c": "medium",
              "f": "expert",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Post-incident recovery test evidence in problem records\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Post-incident recovery test evidence in problem records (TSA recovery)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Post-incident recovery test evidence in problem records\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Post-incident recovery test evidence in problem records\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.18: Post-incident recovery test evidence in problem records.",
                  "ea": "Saved search 'UC-22.16.18' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.19",
              "n": "Cybersecurity implementation plan artifact versioning (TSA plan)",
              "c": "low",
              "f": "advanced",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Cybersecurity implementation plan artifact versioning\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Cybersecurity implementation plan artifact versioning (TSA plan)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Cybersecurity implementation plan artifact versioning\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Cybersecurity implementation plan artifact versioning\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.19: Cybersecurity implementation plan artifact versioning.",
                  "ea": "Saved search 'UC-22.16.19' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.20",
              "n": "Documented subnets vs observed NetFlow peers (TSA architecture)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Documented subnets vs observed NetFlow peers\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - agg_value\n```\n\nUnderstanding this SPL\n\n**Documented subnets vs observed NetFlow peers (TSA architecture)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Documented subnets vs observed NetFlow peers\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Documented subnets vs observed NetFlow peers\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.20: Documented subnets vs observed NetFlow peers.",
                  "ea": "Saved search 'UC-22.16.20' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.21",
              "n": "Control-loop latency before vs after hardening window (TSA effectiveness)",
              "c": "high",
              "f": "beginner",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Control-loop latency before vs after hardening window\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Control-loop latency before vs after hardening window (TSA effectiveness)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Control-loop latency before vs after hardening window\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Control-loop latency before vs after hardening window\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.21: Control-loop latency before vs after hardening window.",
                  "ea": "Saved search 'UC-22.16.21' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.22",
              "n": "PLC/RTU inventory without expected monitoring agent (TSA inventory)",
              "c": "medium",
              "f": "expert",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"PLC/RTU inventory without expected monitoring agent\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**PLC/RTU inventory without expected monitoring agent (TSA inventory)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"PLC/RTU inventory without expected monitoring agent\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"PLC/RTU inventory without expected monitoring agent\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.22: PLC/RTU inventory without expected monitoring agent.",
                  "ea": "Saved search 'UC-22.16.22' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.23",
              "n": "ICS advisory affected firmware still installed (TSA vulnerability)",
              "c": "low",
              "f": "advanced",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"ICS advisory affected firmware still installed\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**ICS advisory affected firmware still installed (TSA vulnerability)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"ICS advisory affected firmware still installed\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"ICS advisory affected firmware still installed\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.23: ICS advisory affected firmware still installed.",
                  "ea": "Saved search 'UC-22.16.23' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.24",
              "n": "Qualitative OT cyber risk scenario roll-up (TSA risk)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Qualitative OT cyber risk scenario roll-up\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Qualitative OT cyber risk scenario roll-up (TSA risk)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Qualitative OT cyber risk scenario roll-up\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Qualitative OT cyber risk scenario roll-up\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.24: Qualitative OT cyber risk scenario roll-up.",
                  "ea": "Saved search 'UC-22.16.24' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.25",
              "n": "SCADA master redundancy failover duration (TSA monitoring)",
              "c": "high",
              "f": "beginner",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"SCADA master redundancy failover duration\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**SCADA master redundancy failover duration (TSA monitoring)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"SCADA master redundancy failover duration\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"SCADA master redundancy failover duration\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.25: SCADA master redundancy failover duration.",
                  "ea": "Saved search 'UC-22.16.25' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.26",
              "n": "Blocked unauthorized PLC logic downloads (TSA integrity)",
              "c": "medium",
              "f": "expert",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Blocked unauthorized PLC logic downloads\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Blocked unauthorized PLC logic downloads (TSA integrity)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Blocked unauthorized PLC logic downloads\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Blocked unauthorized PLC logic downloads\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.26: Blocked unauthorized PLC logic downloads.",
                  "ea": "Saved search 'UC-22.16.26' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.27",
              "n": "OT configuration changes without ITSM change record (TSA change)",
              "c": "low",
              "f": "advanced",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"OT configuration changes without ITSM change record\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**OT configuration changes without ITSM change record (TSA change)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"OT configuration changes without ITSM change record\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"OT configuration changes without ITSM change record\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.27: OT configuration changes without ITSM change record.",
                  "ea": "Saved search 'UC-22.16.27' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.28",
              "n": "Monthly OT security posture score trend by site (TSA monitoring)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Monthly OT security posture score trend by site\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1mon | sort - agg_value",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t avg(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port span=1mon | sort - agg_value\n```\n\nUnderstanding this SPL\n\n**Monthly OT security posture score trend by site (TSA monitoring)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Monthly OT security posture score trend by site\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Monthly OT security posture score trend by site\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.28: Monthly OT security posture score trend by site.",
                  "ea": "Saved search 'UC-22.16.28' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.29",
              "n": "First-seen OT patch-repository binary hashes (TSA supply chain)",
              "c": "high",
              "f": "beginner",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"First-seen OT patch-repository binary hashes\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**First-seen OT patch-repository binary hashes (TSA supply chain)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"First-seen OT patch-repository binary hashes\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"First-seen OT patch-repository binary hashes\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.29: First-seen OT patch-repository binary hashes.",
                  "ea": "Saved search 'UC-22.16.29' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.16.30",
              "n": "Threat-intel hits on OT DNS forwarder queries (TSA threat intel)",
              "c": "medium",
              "f": "expert",
              "v": "Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Threat-intel hits on OT DNS forwarder queries\".",
              "t": "Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621).",
              "d": "`pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Choropleth or site bar, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Palo Alto Networks](https://splunkbase.splunk.com/app/2757), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Threat-intel hits on OT DNS forwarder queries (TSA threat intel)** — Supports pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Threat-intel hits on OT DNS forwarder queries\".\n\nDocumented **Data sources**: `pan:traffic`, SCADA/HMI audit, PAM, `nozomi:alert`, NetFlow, ITSM, passive OT asset discovery. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Enterprise Security (263), Splunk Add-on for Palo Alto Networks (2757), Splunk Industrial Asset Intelligence, Splunk Add-on for CyberArk (4295), Splunk Edge Hub, Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Choropleth or site bar, Table, Single value",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We support pipeline cybersecurity program oversight under TSA Pipeline Security Directive by correlating operational telemetry with governance records for \"Threat-intel hits on OT DNS forwarder queries\". That lines up with pipeline security expectations and helps the team show its work to regulators and leadership.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "TSA Pipeline Security Directive"
              ],
              "a": [
                "Network_Traffic",
                "Authentication",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "TSA SD",
                  "v": "SD02C",
                  "cl": "III.A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that TSA SD III.A (Cybersecurity plan) is enforced — Splunk UC-22.16.30: Threat-intel hits on OT DNS forwarder queries.",
                  "ea": "Saved search 'UC-22.16.30' running on pan:traffic, SCADA/HMI audit, PAM, nozomi:alert, NetFlow, ITSM, passive OT asset discovery, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 30.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 30,
            "none": 0
          }
        },
        {
          "i": "22.18",
          "n": "API 1164 Pipeline SCADA Security",
          "u": [
            {
              "i": "22.18.1",
              "n": "FactoryTalk excessive operator login sessions (API RP 1164)",
              "c": "high",
              "f": "beginner",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**FactoryTalk excessive operator login sessions (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.1: RTU/HMI access control — control point 1.",
                  "ea": "Saved search 'UC-22.18.1' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.2",
              "n": "FactoryTalk compressor-area role mismatch (API RP 1164)",
              "c": "medium",
              "f": "expert",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**FactoryTalk compressor-area role mismatch (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.2: RTU/HMI access control — control point 2.",
                  "ea": "Saved search 'UC-22.18.2' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.3",
              "n": "OPC-UA Write method without named approver (API RP 1164)",
              "c": "low",
              "f": "advanced",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**OPC-UA Write method without named approver (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.3: RTU/HMI access control — control point 3.",
                  "ea": "Saved search 'UC-22.18.3' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.4",
              "n": "FactoryTalk rejected open/close commands (API RP 1164)",
              "c": "critical",
              "f": "intermediate",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**FactoryTalk rejected open/close commands (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.4: RTU/HMI access control — control point 4.",
                  "ea": "Saved search 'UC-22.18.4' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.5",
              "n": "FactoryTalk operator sessions idle over 2 h (API RP 1164)",
              "c": "high",
              "f": "beginner",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**FactoryTalk operator sessions idle over 2 h (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.5: RTU/HMI access control — control point 5.",
                  "ea": "Saved search 'UC-22.18.5' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.6",
              "n": "Vendor or field-tech Windows logons outside depot hours (API RP 1164)",
              "c": "medium",
              "f": "expert",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Vendor or field-tech Windows logons outside depot hours (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.6: RTU/HMI access control — control point 6.",
                  "ea": "Saved search 'UC-22.18.6' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.7",
              "n": "Pipeline HMI app running on jailbroken mobile (API RP 1164)",
              "c": "low",
              "f": "advanced",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Pipeline HMI app running on jailbroken mobile (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.7: RTU/HMI access control — control point 7.",
                  "ea": "Saved search 'UC-22.18.7' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.8",
              "n": "DNP3 high-volume direct-operate commands (API RP 1164)",
              "c": "critical",
              "f": "intermediate",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**DNP3 high-volume direct-operate commands (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.8: SCADA command authentication — control point 1.",
                  "ea": "Saved search 'UC-22.18.8' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.9",
              "n": "PI-AF setpoint changes beyond 15 percent (API RP 1164)",
              "c": "high",
              "f": "beginner",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**PI-AF setpoint changes beyond 15 percent (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.9: SCADA command authentication — control point 2.",
                  "ea": "Saved search 'UC-22.18.9' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.10",
              "n": "Modbus coil writes on SIL-rated registers (API RP 1164)",
              "c": "medium",
              "f": "expert",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**Modbus coil writes on SIL-rated registers (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.10: SCADA command authentication — control point 3.",
                  "ea": "Saved search 'UC-22.18.10' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.11",
              "n": "Ignition pump actions originating from scripts (API RP 1164)",
              "c": "low",
              "f": "advanced",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action | sort - count\n```\n\nUnderstanding this SPL\n\n**Ignition pump actions originating from scripts (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.11: SCADA command authentication — control point 4.",
                  "ea": "Saved search 'UC-22.18.11' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.12",
              "n": "ESD or shutdown alarm acknowledgements (API RP 1164)",
              "c": "critical",
              "f": "intermediate",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**ESD or shutdown alarm acknowledgements (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.12: SCADA command authentication — control point 5.",
                  "ea": "Saved search 'UC-22.18.12' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.13",
              "n": "Rockwell controller program download or upload by vendor (API RP 1164)",
              "c": "high",
              "f": "beginner",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Rockwell controller program download or upload by vendor (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.13: SCADA command authentication — control point 6.",
                  "ea": "Saved search 'UC-22.18.13' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.14",
              "n": "OPC-UA unsigned program downloads (API RP 1164)",
              "c": "medium",
              "f": "expert",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**OPC-UA unsigned program downloads (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "5.3",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 5.3 (Access control) is enforced — Splunk UC-22.18.14: SCADA command authentication — control point 7.",
                  "ea": "Saved search 'UC-22.18.14' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.15",
              "n": "FIELD zone to SCADA-DMZ unexpected bytes (API RP 1164)",
              "c": "low",
              "f": "advanced",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.app | sort - agg_value",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.app | sort - agg_value\n```\n\nUnderstanding this SPL\n\n**FIELD zone to SCADA-DMZ unexpected bytes (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.15: Pipeline SCADA network segmentation — control point 1.",
                  "ea": "Saved search 'UC-22.18.15' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.16",
              "n": "ENTERPRISE to SCADA-DMZ flows (API RP 1164)",
              "c": "critical",
              "f": "intermediate",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.app | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src, All_Traffic.dest, All_Traffic.app | sort - count\n```\n\nUnderstanding this SPL\n\n**ENTERPRISE to SCADA-DMZ flows (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.16: Pipeline SCADA network segmentation — control point 2.",
                  "ea": "Saved search 'UC-22.18.16' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.17",
              "n": "DNP3 traffic on non-standard ports (API RP 1164)",
              "c": "high",
              "f": "beginner",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count\n```\n\nUnderstanding this SPL\n\n**DNP3 traffic on non-standard ports (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.17: Pipeline SCADA network segmentation — control point 3.",
                  "ea": "Saved search 'UC-22.18.17' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.18",
              "n": "Pipeline-field WiFi without WPA3-Enterprise (API RP 1164)",
              "c": "medium",
              "f": "expert",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Pipeline-field WiFi without WPA3-Enterprise (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.18: Pipeline SCADA network segmentation — control point 4.",
                  "ea": "Saved search 'UC-22.18.18' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.19",
              "n": "Edge Modbus gateway exposing over 200 unit IDs (API RP 1164)",
              "c": "low",
              "f": "advanced",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Edge Modbus gateway exposing over 200 unit IDs (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.19: Pipeline SCADA network segmentation — control point 5.",
                  "ea": "Saved search 'UC-22.18.19' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.20",
              "n": "OT-PLC TLSv1.0 connections (API RP 1164)",
              "c": "critical",
              "f": "intermediate",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.src | sort - count\n```\n\nUnderstanding this SPL\n\n**OT-PLC TLSv1.0 connections (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.20: Pipeline SCADA network segmentation — control point 6.",
                  "ea": "Saved search 'UC-22.18.20' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.21",
              "n": "Vendor GlobalProtect jump from non-standard image (API RP 1164)",
              "c": "high",
              "f": "beginner",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Vendor GlobalProtect jump from non-standard image (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.21: Pipeline SCADA network segmentation — control point 7.",
                  "ea": "Saved search 'UC-22.18.21' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.22",
              "n": "Field devices on firmware behind ICS-CERT required version (API RP 1164)",
              "c": "medium",
              "f": "expert",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Field devices on firmware behind ICS-CERT required version (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.22: Field device integrity — control point 1.",
                  "ea": "Saved search 'UC-22.18.22' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.23",
              "n": "Schneider PLC logic changes by user span (API RP 1164)",
              "c": "low",
              "f": "advanced",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Schneider PLC logic changes by user span (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.23: Field device integrity — control point 2.",
                  "ea": "Saved search 'UC-22.18.23' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.24",
              "n": "Modbus CRC success rate below 99.5 percent (API RP 1164)",
              "c": "critical",
              "f": "intermediate",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**Modbus CRC success rate below 99.5 percent (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.24: Field device integrity — control point 3.",
                  "ea": "Saved search 'UC-22.18.24' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.25",
              "n": "Wonderware flow/pressure tag jumps over 50 percent (API RP 1164)",
              "c": "high",
              "f": "beginner",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Wonderware flow/pressure tag jumps over 50 percent (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.25: Field device integrity — control point 4.",
                  "ea": "Saved search 'UC-22.18.25' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.26",
              "n": "RTU-ROW-12 off-role Genetec badge swipes (API RP 1164)",
              "c": "medium",
              "f": "expert",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**RTU-ROW-12 off-role Genetec badge swipes (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.26: Field device integrity — control point 5.",
                  "ea": "Saved search 'UC-22.18.26' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.27",
              "n": "DNP3 sequence-number gaps (API RP 1164)",
              "c": "low",
              "f": "advanced",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**DNP3 sequence-number gaps (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.27: Field device integrity — control point 6.",
                  "ea": "Saved search 'UC-22.18.27' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.28",
              "n": "Claroty devices with unverified integrity state (API RP 1164)",
              "c": "critical",
              "f": "intermediate",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Claroty devices with unverified integrity state (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.28: Field device integrity — control point 7.",
                  "ea": "Saved search 'UC-22.18.28' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.29",
              "n": "Pipeline cyber incident MTTR tracking (API RP 1164)",
              "c": "high",
              "f": "beginner",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Pipeline cyber incident MTTR tracking (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.29: API 1164 incident and compliance — control point 1.",
                  "ea": "Saved search 'UC-22.18.29' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.30",
              "n": "API 1164 domain-score regression year-over-year (API RP 1164)",
              "c": "medium",
              "f": "expert",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**API 1164 domain-score regression year-over-year (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.30: API 1164 incident and compliance — control point 2.",
                  "ea": "Saved search 'UC-22.18.30' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.31",
              "n": "Critical SCADA vulnerabilities by Tenable plugin (API RP 1164)",
              "c": "low",
              "f": "advanced",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Critical SCADA vulnerabilities by Tenable plugin (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.31: API 1164 incident and compliance — control point 3.",
                  "ea": "Saved search 'UC-22.18.31' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.32",
              "n": "SCADA tabletop exercises missing evidence (API RP 1164)",
              "c": "critical",
              "f": "intermediate",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**SCADA tabletop exercises missing evidence (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.32: API 1164 incident and compliance — control point 4.",
                  "ea": "Saved search 'UC-22.18.32' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.33",
              "n": "Pipeline SCADA risks with open treatment (API RP 1164)",
              "c": "high",
              "f": "beginner",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Pipeline SCADA risks with open treatment (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.33: API 1164 incident and compliance — control point 5.",
                  "ea": "Saved search 'UC-22.18.33' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.34",
              "n": "Pipeline cyber training overdue (API RP 1164)",
              "c": "medium",
              "f": "expert",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**Pipeline cyber training overdue (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.34: API 1164 incident and compliance — control point 6.",
                  "ea": "Saved search 'UC-22.18.34' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.18.35",
              "n": "API 1164 regulatory reports past due (API RP 1164)",
              "c": "low",
              "f": "advanced",
              "v": "Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.",
              "t": "Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621).",
              "d": "RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Node-link or sankey (optional)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Nozomi Networks TA for Splunk](https://splunkbase.splunk.com/app/6905), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this SPL\n\n**API 1164 regulatory reports past due (API RP 1164)** — Evidences API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources.\n\nDocumented **Data sources**: RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds. **App/TA** (typical add-on context): Splunk OT Security Add-on (5151), Splunk Industrial Asset Intelligence, Splunk Enterprise Security (263), Splunk Edge Hub, Nozomi Networks TA for Splunk (6905), Splunk Add-on for CyberArk (4295), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Node-link or sankey (optional)",
              "script": "",
              "premium": "Splunk Edge Hub, Splunk Enterprise Security, Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show evidence that API RP 1164 cybersecurity practices for pipeline SCADA using OT and engineering audit sources. That lines up with safe pipeline control practice and keeps the operation story clear when people ask who did what.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "API RP 1164"
              ],
              "a": [
                "Network_Traffic",
                "Endpoint",
                "Operational Telemetry- **CIM SPL:**"
              ],
              "e": [
                "cyberark",
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "API RP 1164",
                  "v": "3rd edition",
                  "cl": "6.2.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that API RP 1164 6.2.1 (Logging and monitoring) is enforced — Splunk UC-22.18.35: API 1164 incident and compliance — control point 7.",
                  "ea": "Saved search 'UC-22.18.35' running on RTU/PLC, DNP3, Modbus OPC-UA, Wonderware/Ignition, historian, OT firewall, wireless sensor feeds, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 30.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 35,
            "none": 0
          }
        },
        {
          "i": "22.17",
          "n": "FDA 21 CFR Part 11",
          "u": [
            {
              "i": "22.17.1",
              "n": "LIMS audit entries missing reason codes (21 CFR Part 11)",
              "c": "high",
              "f": "beginner",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**LIMS audit entries missing reason codes (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for electronic records integrity.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.1: Electronic records integrity — control theme 1.",
                  "ea": "Saved search 'UC-22.17.1' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.2",
              "n": "LIMS excessive record modifications per batch (21 CFR Part 11)",
              "c": "medium",
              "f": "expert",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**LIMS excessive record modifications per batch (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for electronic records integrity.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.2: Electronic records integrity — control theme 2.",
                  "ea": "Saved search 'UC-22.17.2' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.3",
              "n": "MES batch clock-skew vs generated timestamp (21 CFR Part 11)",
              "c": "low",
              "f": "advanced",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**MES batch clock-skew vs generated timestamp (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for electronic records integrity.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.3: Electronic records integrity — control theme 3.",
                  "ea": "Saved search 'UC-22.17.3' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.4",
              "n": "Veeva document hash mismatch (21 CFR Part 11)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**Veeva document hash mismatch (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for electronic records integrity.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.4: Electronic records integrity — control theme 4.",
                  "ea": "Saved search 'UC-22.17.4' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.5",
              "n": "LIMS records past retention without disposition (21 CFR Part 11)",
              "c": "high",
              "f": "beginner",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**LIMS records past retention without disposition (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for electronic records integrity.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.5: Electronic records integrity — control theme 5.",
                  "ea": "Saved search 'UC-22.17.5' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.6",
              "n": "ELN signatures beyond delegated authority (21 CFR Part 11)",
              "c": "medium",
              "f": "expert",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**ELN signatures beyond delegated authority (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show that electronic signatures in your validated lab and manufacturing systems line up with how you said they would work, so food and drug inspections can trust the record.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.200",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.200 (Electronic signatures) is enforced — Splunk UC-22.17.6: Electronic signatures — control theme 1.",
                  "ea": "Saved search 'UC-22.17.6' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.7",
              "n": "ELN signatures missing certificate or hash binding (21 CFR Part 11)",
              "c": "low",
              "f": "advanced",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**ELN signatures missing certificate or hash binding (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show that electronic signatures in your validated lab and manufacturing systems line up with how you said they would work, so food and drug inspections can trust the record.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.200",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.200 (Electronic signatures) is enforced — Splunk UC-22.17.7: Electronic signatures — control theme 2.",
                  "ea": "Saved search 'UC-22.17.7' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.8",
              "n": "ELN logins without FIDO2 or X.509 credential (21 CFR Part 11)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**ELN logins without FIDO2 or X.509 credential (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show that electronic signatures in your validated lab and manufacturing systems line up with how you said they would work, so food and drug inspections can trust the record.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.200",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.200 (Electronic signatures) is enforced — Splunk UC-22.17.8: Electronic signatures — control theme 3.",
                  "ea": "Saved search 'UC-22.17.8' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.9",
              "n": "ELN signatures missing meaning code (21 CFR Part 11)",
              "c": "high",
              "f": "beginner",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**ELN signatures missing meaning code (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show that electronic signatures in your validated lab and manufacturing systems line up with how you said they would work, so food and drug inspections can trust the record.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.200",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.200 (Electronic signatures) is enforced — Splunk UC-22.17.9: Electronic signatures — control theme 4.",
                  "ea": "Saved search 'UC-22.17.9' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.10",
              "n": "ELN release signatures bypassing multi-step flow (21 CFR Part 11)",
              "c": "medium",
              "f": "expert",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**ELN release signatures bypassing multi-step flow (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show that electronic signatures in your validated lab and manufacturing systems line up with how you said they would work, so food and drug inspections can trust the record.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.200",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.200 (Electronic signatures) is enforced — Splunk UC-22.17.10: Electronic signatures — control theme 5.",
                  "ea": "Saved search 'UC-22.17.10' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.11",
              "n": "CDS injections with too few audit entries (21 CFR Part 11)",
              "c": "low",
              "f": "advanced",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**CDS injections with too few audit entries (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for audit trails.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.11: Audit trails — control theme 1.",
                  "ea": "Saved search 'UC-22.17.11' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.12",
              "n": "LIMS sample touched by multiple actors (21 CFR Part 11)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**LIMS sample touched by multiple actors (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for audit trails.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.12: Audit trails — control theme 2.",
                  "ea": "Saved search 'UC-22.17.12' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.13",
              "n": "MES record UPDATE without change reason (21 CFR Part 11)",
              "c": "high",
              "f": "beginner",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**MES record UPDATE without change reason (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for audit trails.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.13: Audit trails — control theme 3.",
                  "ea": "Saved search 'UC-22.17.13' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.14",
              "n": "HPLC NTP drift over 500 ms (21 CFR Part 11)",
              "c": "medium",
              "f": "expert",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**HPLC NTP drift over 500 ms (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for audit trails.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.14: Audit trails — control theme 4.",
                  "ea": "Saved search 'UC-22.17.14' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.15",
              "n": "Veeam LIMS database backup failures (21 CFR Part 11)",
              "c": "low",
              "f": "advanced",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**Veeam LIMS database backup failures (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for audit trails.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.15: Audit trails — control theme 5.",
                  "ea": "Saved search 'UC-22.17.15' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.16",
              "n": "MES batch entries missing ALCOA who/when/what/why fields (21 CFR Part 11)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**MES batch entries missing ALCOA who/when/what/why fields (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for alcoa+ data integrity.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.16: ALCOA+ data integrity — control theme 1.",
                  "ea": "Saved search 'UC-22.17.16' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.17",
              "n": "Commvault MES subclient backups not completed (21 CFR Part 11)",
              "c": "high",
              "f": "beginner",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this SPL\n\n**Commvault MES subclient backups not completed (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for alcoa+ data integrity.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.17: ALCOA+ data integrity — control theme 2.",
                  "ea": "Saved search 'UC-22.17.17' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.18",
              "n": "LIMS COPY action without independent witness (21 CFR Part 11)",
              "c": "medium",
              "f": "expert",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**LIMS COPY action without independent witness (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for alcoa+ data integrity.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.18: ALCOA+ data integrity — control theme 3.",
                  "ea": "Saved search 'UC-22.17.18' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.19",
              "n": "CDS raw vs processed chromatogram file mismatch (21 CFR Part 11)",
              "c": "low",
              "f": "advanced",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**CDS raw vs processed chromatogram file mismatch (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for alcoa+ data integrity.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.19: ALCOA+ data integrity — control theme 4.",
                  "ea": "Saved search 'UC-22.17.19' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.20",
              "n": "Lab instrument integrity-check failures (21 CFR Part 11)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**Lab instrument integrity-check failures (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for alcoa+ data integrity.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.20: ALCOA+ data integrity — control theme 5.",
                  "ea": "Saved search 'UC-22.17.20' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.21",
              "n": "LIMS-PROD PQ sign-offs incomplete (21 CFR Part 11)",
              "c": "high",
              "f": "beginner",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**LIMS-PROD PQ sign-offs incomplete (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for gxp computer system validation.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.21: GxP computer system validation — control theme 1.",
                  "ea": "Saved search 'UC-22.17.21' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.22",
              "n": "LIMS change requests without CSV risk assessment (21 CFR Part 11)",
              "c": "medium",
              "f": "expert",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**LIMS change requests without CSV risk assessment (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for gxp computer system validation.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.22: GxP computer system validation — control theme 2.",
                  "ea": "Saved search 'UC-22.17.22' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.23",
              "n": "Periodic system reviews overdue (21 CFR Part 11)",
              "c": "low",
              "f": "advanced",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**Periodic system reviews overdue (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for gxp computer system validation.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.23: GxP computer system validation — control theme 3.",
                  "ea": "Saved search 'UC-22.17.23' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.24",
              "n": "GxP workstation Windows account changes (21 CFR Part 11)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**GxP workstation Windows account changes (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for gxp computer system validation.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.24: GxP computer system validation — control theme 4.",
                  "ea": "Saved search 'UC-22.17.24' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.17.25",
              "n": "Overdue GxP computer-systems training by course (21 CFR Part 11)",
              "c": "high",
              "f": "beginner",
              "v": "Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).",
              "d": "LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations",
              "q": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this SPL\n\n**Overdue GxP computer-systems training by course (21 CFR Part 11)** — Supports Part 11 technical controls evidence by correlating validated system audit trails with identity and change records.\n\nDocumented **Data sources**: LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect audit trails from lab and plant systems with who did what, so we can show food and drug rules are met for gxp computer system validation.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FDA 21 CFR Part 11"
              ],
              "a": [
                "Change",
                "Authentication- **CIM SPL:**"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FDA Part 11",
                  "v": "current",
                  "cl": "§11.10(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FDA Part 11 §11.10(e) (Audit trails) is enforced — Splunk UC-22.17.25: GxP computer system validation — control theme 5.",
                  "ea": "Saved search 'UC-22.17.25' running on LIMS, MES, ELN, CDS, QMS/ERP validation logs, Windows security for GxP workstations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 30.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 25,
            "none": 0
          }
        },
        {
          "i": "22.19",
          "n": "FISMA / FedRAMP",
          "u": [
            {
              "i": "22.19.1",
              "n": "CloudTrail high-volume mutating actions (FISMA / FedRAMP)",
              "c": "high",
              "f": "beginner",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t dc(Authentication.user) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - agg_value",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t dc(Authentication.user) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - agg_value\n```\n\nUnderstanding this SPL\n\n**CloudTrail high-volume mutating actions (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.1: Continuous monitoring — indicator 1.",
                  "ea": "Saved search 'UC-22.19.1' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.2",
              "n": "Tenable FedRAMP compliance failures (FISMA / FedRAMP)",
              "c": "medium",
              "f": "expert",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t dc(Authentication.dest) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - agg_value",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t dc(Authentication.dest) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - agg_value\n```\n\nUnderstanding this SPL\n\n**Tenable FedRAMP compliance failures (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.2: Continuous monitoring — indicator 2.",
                  "ea": "Saved search 'UC-22.19.2' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.3",
              "n": "STIG file-integrity hash mismatch (FISMA / FedRAMP)",
              "c": "low",
              "f": "advanced",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP by detecting drift between deployed file-integrity hashes and the STIG baseline across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "index=os sourcetype=stash:file_integrity earliest=-7d\n| eval uc_tag=\"22.19.3\"\n| lookup stig_baseline.csv path OUTPUT expected_hash\n| where sha256!=expected_hash\n| stats count by host, path",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=stash:file_integrity earliest=-7d\n| eval uc_tag=\"22.19.3\"\n| lookup stig_baseline.csv path OUTPUT expected_hash\n| where sha256!=expected_hash\n| stats count by host, path\n```\n\nUnderstanding this SPL\n\n**STIG file-integrity hash mismatch (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP by detecting drift between deployed file-integrity hashes and the STIG baseline across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: stash:file_integrity. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=stash:file_integrity, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **uc_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where sha256!=expected_hash` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host, path** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**STIG file-integrity hash mismatch (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP by detecting drift between deployed file-integrity hashes and the STIG baseline across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP by detecting drift between deployed file-integrity hashes and the STIG baseline across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Change",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count",
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.3: STIG file-integrity hash mismatch (FISMA / FedRAMP).",
                  "ea": "Saved search 'UC-22.19.3' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.4",
              "n": "WSUS patch coverage below 95 percent (FISMA / FedRAMP)",
              "c": "critical",
              "f": "intermediate",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**WSUS patch coverage below 95 percent (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.4: Continuous monitoring — indicator 4.",
                  "ea": "Saved search 'UC-22.19.4' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.5",
              "n": "FedRAMP servers not discovered in 30 days (FISMA / FedRAMP)",
              "c": "high",
              "f": "beginner",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**FedRAMP servers not discovered in 30 days (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.5: Continuous monitoring — indicator 5.",
                  "ea": "Saved search 'UC-22.19.5' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.6",
              "n": "GovCloud SSP sections incomplete (FISMA / FedRAMP)",
              "c": "medium",
              "f": "expert",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**GovCloud SSP sections incomplete (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.19.6: Authorization and boundary — indicator 1.",
                  "ea": "Saved search 'UC-22.19.6' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.7",
              "n": "FedRAMP POA&M items past planned finish (FISMA / FedRAMP)",
              "c": "low",
              "f": "advanced",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**FedRAMP POA&M items past planned finish (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.19.7: Authorization and boundary — indicator 2.",
                  "ea": "Saved search 'UC-22.19.7' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.8",
              "n": "Risk acceptances past review date (FISMA / FedRAMP)",
              "c": "critical",
              "f": "intermediate",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Risk acceptances past review date (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.19.8: Authorization and boundary — indicator 3.",
                  "ea": "Saved search 'UC-22.19.8' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.9",
              "n": "Azure VNet peerings outside approved list (FISMA / FedRAMP)",
              "c": "high",
              "f": "beginner",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Azure VNet peerings outside approved list (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.19.9: Authorization and boundary — indicator 4.",
                  "ea": "Saved search 'UC-22.19.9' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.10",
              "n": "AWS SG ingress opened to 0.0.0.0/0 (FISMA / FedRAMP)",
              "c": "medium",
              "f": "expert",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this SPL\n\n**AWS SG ingress opened to 0.0.0.0/0 (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.19.10: Authorization and boundary — indicator 5.",
                  "ea": "Saved search 'UC-22.19.10' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.11",
              "n": "FedRAMP notable events unactioned over 8 h (FISMA / FedRAMP)",
              "c": "low",
              "f": "advanced",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**FedRAMP notable events unactioned over 8 h (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.19.11: Federal incident handling — indicator 1.",
                  "ea": "Saved search 'UC-22.19.11' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.12",
              "n": "US-CERT or CISA incidents unresolved (FISMA / FedRAMP)",
              "c": "critical",
              "f": "intermediate",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**US-CERT or CISA incidents unresolved (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.19.12: Federal incident handling — indicator 2.",
                  "ea": "Saved search 'UC-22.19.12' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.13",
              "n": "Phantom high-severity containers off NIST DE.CM (FISMA / FedRAMP)",
              "c": "high",
              "f": "beginner",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Phantom high-severity containers off NIST DE.CM (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.19.13: Federal incident handling — indicator 3.",
                  "ea": "Saved search 'UC-22.19.13' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.14",
              "n": "FedRAMP hosts with cleared Windows Security log (FISMA / FedRAMP)",
              "c": "medium",
              "f": "expert",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**FedRAMP hosts with cleared Windows Security log (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.19.14: Federal incident handling — indicator 4.",
                  "ea": "Saved search 'UC-22.19.14' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.15",
              "n": "Federal IR lessons-learned not published (FISMA / FedRAMP)",
              "c": "low",
              "f": "advanced",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Federal IR lessons-learned not published (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.19.15: Federal incident handling — indicator 5.",
                  "ea": "Saved search 'UC-22.19.15' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.16",
              "n": "Fed apps accepting single-factor authentication (FISMA / FedRAMP)",
              "c": "critical",
              "f": "intermediate",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Fed apps accepting single-factor authentication (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.16: Privileged and remote access — indicator 1.",
                  "ea": "Saved search 'UC-22.19.16' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.17",
              "n": "CyberArk Fed-Admin safe checkout surge (FISMA / FedRAMP)",
              "c": "high",
              "f": "beginner",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t dc(Authentication.user) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - agg_value",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t dc(Authentication.user) as agg_value from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - agg_value\n```\n\nUnderstanding this SPL\n\n**CyberArk Fed-Admin safe checkout surge (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.17: Privileged and remote access — indicator 2.",
                  "ea": "Saved search 'UC-22.19.17' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.18",
              "n": "Dormant privileged accounts beyond 90 days (FISMA / FedRAMP)",
              "c": "medium",
              "f": "expert",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**Dormant privileged accounts beyond 90 days (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.18: Privileged and remote access — indicator 3.",
                  "ea": "Saved search 'UC-22.19.18' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.19",
              "n": "Fed-VDP VPN from unexpected private subnets (FISMA / FedRAMP)",
              "c": "low",
              "f": "advanced",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this SPL\n\n**Fed-VDP VPN from unexpected private subnets (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.19: Privileged and remote access — indicator 4.",
                  "ea": "Saved search 'UC-22.19.19' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.20",
              "n": "SAP users with excessive role stacking (FISMA / FedRAMP)",
              "c": "critical",
              "f": "intermediate",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**SAP users with excessive role stacking (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.20: Privileged and remote access — indicator 5.",
                  "ea": "Saved search 'UC-22.19.20' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.21",
              "n": "FedRAMP 2026 control assessments failed (FISMA / FedRAMP)",
              "c": "high",
              "f": "beginner",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**FedRAMP 2026 control assessments failed (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.21: Assessment and FedRAMP evidence — indicator 1.",
                  "ea": "Saved search 'UC-22.19.21' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.22",
              "n": "Open 3PAO findings by severity (FISMA / FedRAMP)",
              "c": "medium",
              "f": "expert",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Open 3PAO findings by severity (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.22: Assessment and FedRAMP evidence — indicator 2.",
                  "ea": "Saved search 'UC-22.19.22' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.23",
              "n": "CDM devices without hardware root of trust (FISMA / FedRAMP)",
              "c": "low",
              "f": "advanced",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**CDM devices without hardware root of trust (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.23: Assessment and FedRAMP evidence — indicator 3.",
                  "ea": "Saved search 'UC-22.19.23' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.24",
              "n": "Risk score above 80 on CUI systems (FISMA / FedRAMP)",
              "c": "critical",
              "f": "intermediate",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**Risk score above 80 on CUI systems (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.24: Assessment and FedRAMP evidence — indicator 4.",
                  "ea": "Saved search 'UC-22.19.24' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.19.25",
              "n": "FedRAMP marketplace listings not active (FISMA / FedRAMP)",
              "c": "high",
              "f": "beginner",
              "v": "Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621).",
              "d": "CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory",
              "q": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Time chart, Table, Single value, Heat map",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this SPL\n\n**FedRAMP marketplace listings not active (FISMA / FedRAMP)** — Strengthens agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds.\n\nDocumented **Data sources**: CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk Add-on for Tenable (4060), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart, Table, Single value, Heat map",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We strengthen agency or cloud service continuous authorization evidence under FISMA / FedRAMP across cloud, endpoint, and GRC feeds. That backs trust in the cloud and federal program posture with data, not just answers on a checklist.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA / FedRAMP"
              ],
              "a": [
                "Authentication",
                "Change",
                "Vulnerabilities- **CIM SPL:**"
              ],
              "e": [
                "aws",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.19.25: Assessment and FedRAMP evidence — indicator 5.",
                  "ea": "Saved search 'UC-22.19.25' running on CloudTrail, Azure Monitor, Tenable/STIG scans, POA&M exports, AAD/O365 sign-in, inventory, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 30.4,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 25,
            "none": 0
          }
        },
        {
          "i": "22.20",
          "n": "CMMC 2.0",
          "u": [
            {
              "i": "22.20.1",
              "n": "CMMC Level 2 practice evidence — CUI control area 1 (CMMC 2.0 Level 2)",
              "c": "high",
              "f": "beginner",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 1\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 2 practice evidence — CUI control area 1 (CMMC 2.0 Level 2)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 1\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CUI control area 1. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "AC.L2-3.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC AC.L2-3.1.1 (Authorized access to systems) is enforced — Splunk UC-22.20.1: CMMC Level 2 practice evidence — CUI control area 1.",
                  "ea": "Saved search 'UC-22.20.1' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://dodcio.defense.gov/CMMC/"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.2",
              "n": "CMMC Level 2 practice evidence — CUI control area 2 (CMMC 2.0 Level 2)",
              "c": "medium",
              "f": "expert",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 2\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 2 practice evidence — CUI control area 2 (CMMC 2.0 Level 2)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 2\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CUI control area 2. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "AC.L2-3.1.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC AC.L2-3.1.5 (Least privilege) is enforced — Splunk UC-22.20.2: CMMC Level 2 practice evidence — CUI control area 2.",
                  "ea": "Saved search 'UC-22.20.2' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://dodcio.defense.gov/CMMC/"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.3",
              "n": "CMMC Level 2 practice evidence — CUI control area 3 (CMMC 2.0 Level 2)",
              "c": "low",
              "f": "advanced",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 3\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t dc(Processes.dest) as agg_value from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - agg_value",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t dc(Processes.dest) as agg_value from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - agg_value\n```\n\nUnderstanding this SPL\n\n**CMMC Level 2 practice evidence — CUI control area 3 (CMMC 2.0 Level 2)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 3\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for AU.L2-3.3.1 — Audit record creation verification on CUI systems. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "AU.L2-3.3.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that CMMC AU.L2-3.3.1 (Create audit records) is enforced — Splunk UC-22.20.3: CMMC Level 2 practice evidence — CUI control area 3.",
                  "ea": "Saved search 'UC-22.20.3' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://dodcio.defense.gov/CMMC/"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.4",
              "n": "CMMC Level 2 practice evidence — CUI control area 4 (CMMC 2.0 Level 2)",
              "c": "critical",
              "f": "intermediate",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 4\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 2 practice evidence — CUI control area 4 (CMMC 2.0 Level 2)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 4\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for AU.L2-3.3.2 — User-to-action traceability on CUI systems. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "AU.L2-3.3.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that CMMC AU.L2-3.3.2 (Ensure unique user traceability) is enforced — Splunk UC-22.20.4: CMMC Level 2 practice evidence — CUI control area 4.",
                  "ea": "Saved search 'UC-22.20.4' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://dodcio.defense.gov/CMMC/"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.5",
              "n": "CMMC Level 2 practice evidence — CUI control area 5 (CMMC 2.0 Level 2)",
              "c": "high",
              "f": "beginner",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 5\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 2 practice evidence — CUI control area 5 (CMMC 2.0 Level 2)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 5\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CUI control area 5. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "AU.L2-3.3.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC AU.L2-3.3.5 (Audit reporting and correlation) is enforced — Splunk UC-22.20.5: CMMC Level 2 practice evidence — CUI control area 5.",
                  "ea": "Saved search 'UC-22.20.5' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://dodcio.defense.gov/CMMC/"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.6",
              "n": "CMMC Level 2 practice evidence — CUI control area 6 (CMMC 2.0 Level 2)",
              "c": "medium",
              "f": "expert",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 6\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 2 practice evidence — CUI control area 6 (CMMC 2.0 Level 2)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 6\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CM.L2-3.4.1 — Baseline configuration drift detection on CUI systems. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "CM.L2-3.4.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that CMMC CM.L2-3.4.1 (Baseline configurations) is enforced — Splunk UC-22.20.6: CMMC Level 2 practice evidence — CUI control area 6.",
                  "ea": "Saved search 'UC-22.20.6' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://dodcio.defense.gov/CMMC/"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.7",
              "n": "CMMC Level 2 practice evidence — CUI control area 7 (CMMC 2.0 Level 2)",
              "c": "low",
              "f": "advanced",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 7\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 2 practice evidence — CUI control area 7 (CMMC 2.0 Level 2)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 7\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for IR.L2-3.6.1 — Incident response lifecycle tracking for CUI incidents. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "IR.L2-3.6.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that CMMC IR.L2-3.6.1 (Incident handling capability) is enforced — Splunk UC-22.20.7: CMMC Level 2 practice evidence — CUI control area 7.",
                  "ea": "Saved search 'UC-22.20.7' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://dodcio.defense.gov/CMMC/"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.8",
              "n": "CMMC Level 2 practice evidence — CUI control area 8 (CMMC 2.0 Level 2)",
              "c": "critical",
              "f": "intermediate",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 8\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 2 practice evidence — CUI control area 8 (CMMC 2.0 Level 2)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 8\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for SC.L2-3.13.8 — Cryptographic protection of CUI in transit. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "SC.L2-3.13.8",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that CMMC SC.L2-3.13.8 (Cryptographic mechanisms for CUI in transit) is enforced — Splunk UC-22.20.8: CMMC Level 2 practice evidence — CUI control area 8.",
                  "ea": "Saved search 'UC-22.20.8' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://dodcio.defense.gov/CMMC/"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.9",
              "n": "CMMC Level 2 practice evidence — CUI control area 9 (CMMC 2.0 Level 2)",
              "c": "high",
              "f": "beginner",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 9\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 2 practice evidence — CUI control area 9 (CMMC 2.0 Level 2)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 9\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for SI.L2-3.14.6 — Real-time attack monitoring on CUI systems. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "SI.L2-3.14.6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that CMMC SI.L2-3.14.6 (Monitor for attacks) is enforced — Splunk UC-22.20.9: CMMC Level 2 practice evidence — CUI control area 9.",
                  "ea": "Saved search 'UC-22.20.9' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://dodcio.defense.gov/CMMC/"
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.10",
              "n": "CMMC Level 2 practice evidence — CUI control area 10 (CMMC 2.0 Level 2)",
              "c": "medium",
              "f": "expert",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 10\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 2 practice evidence — CUI control area 10 (CMMC 2.0 Level 2)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 2 practice evidence — CUI control area 10\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 2 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CUI control area 10. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "CM.L2-3.4.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC CM.L2-3.4.1 is enforced — Splunk UC-22.20.10.",
                  "ea": "Saved search 'UC-22.20.10' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.11",
              "n": "CMMC Level 3 enhanced practice — threat scenario 1 (CMMC 2.0 Level 3)",
              "c": "low",
              "f": "advanced",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 3 enhanced practice — threat scenario 1\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 3 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 3 enhanced practice — threat scenario 1 (CMMC 2.0 Level 3)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 3 enhanced practice — threat scenario 1\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 3 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Level 3 enhanced practice — threat scenario 1. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "SI.L2-3.14.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC SI.L2-3.14.6 is enforced — Splunk UC-22.20.11.",
                  "ea": "Saved search 'UC-22.20.11' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.12",
              "n": "CMMC Level 3 enhanced practice — threat scenario 2 (CMMC 2.0 Level 3)",
              "c": "critical",
              "f": "intermediate",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 3 enhanced practice — threat scenario 2\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 3 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 3 enhanced practice — threat scenario 2 (CMMC 2.0 Level 3)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 3 enhanced practice — threat scenario 2\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 3 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Level 3 enhanced practice — threat scenario 2. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "SI.L2-3.14.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC SI.L2-3.14.6 is enforced — Splunk UC-22.20.12.",
                  "ea": "Saved search 'UC-22.20.12' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.13",
              "n": "CMMC Level 3 enhanced practice — threat scenario 3 (CMMC 2.0 Level 3)",
              "c": "high",
              "f": "beginner",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 3 enhanced practice — threat scenario 3\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 3 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 3 enhanced practice — threat scenario 3 (CMMC 2.0 Level 3)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 3 enhanced practice — threat scenario 3\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 3 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Level 3 enhanced practice — threat scenario 3. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "SI.L2-3.14.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC SI.L2-3.14.6 is enforced — Splunk UC-22.20.13.",
                  "ea": "Saved search 'UC-22.20.13' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.14",
              "n": "CMMC Level 3 enhanced practice — threat scenario 4 (CMMC 2.0 Level 3)",
              "c": "medium",
              "f": "expert",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 3 enhanced practice — threat scenario 4\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 3 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 3 enhanced practice — threat scenario 4 (CMMC 2.0 Level 3)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 3 enhanced practice — threat scenario 4\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 3 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Level 3 enhanced practice — threat scenario 4. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "AC.L2-3.1.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC AC.L2-3.1.5 is enforced — Splunk UC-22.20.14.",
                  "ea": "Saved search 'UC-22.20.14' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.15",
              "n": "CMMC Level 3 enhanced practice — threat scenario 5 (CMMC 2.0 Level 3)",
              "c": "low",
              "f": "advanced",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC Level 3 enhanced practice — threat scenario 5\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 3 expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC Level 3 enhanced practice — threat scenario 5 (CMMC 2.0 Level 3)** — Demonstrates contractor cybersecurity hygiene for \"CMMC Level 3 enhanced practice — threat scenario 5\" using CUI-scoped telemetry mapped to CMMC 2.0 Level 3 expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for Level 3 enhanced practice — threat scenario 5. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "SI.L2-3.14.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC SI.L2-3.14.6 is enforced — Splunk UC-22.20.15.",
                  "ea": "Saved search 'UC-22.20.15' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.16",
              "n": "CMMC assessment readiness — artifact 1 (CMMC 2.0 Assessment)",
              "c": "critical",
              "f": "intermediate",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC assessment readiness — artifact 1\" using CUI-scoped telemetry mapped to CMMC 2.0 Assessment expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC assessment readiness — artifact 1 (CMMC 2.0 Assessment)** — Demonstrates contractor cybersecurity hygiene for \"CMMC assessment readiness — artifact 1\" using CUI-scoped telemetry mapped to CMMC 2.0 Assessment expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for assessment readiness — artifact 1. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "AU.L2-3.3.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC AU.L2-3.3.5 is enforced — Splunk UC-22.20.16.",
                  "ea": "Saved search 'UC-22.20.16' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.17",
              "n": "CMMC assessment readiness — artifact 2 (CMMC 2.0 Assessment)",
              "c": "high",
              "f": "beginner",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC assessment readiness — artifact 2\" using CUI-scoped telemetry mapped to CMMC 2.0 Assessment expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC assessment readiness — artifact 2 (CMMC 2.0 Assessment)** — Demonstrates contractor cybersecurity hygiene for \"CMMC assessment readiness — artifact 2\" using CUI-scoped telemetry mapped to CMMC 2.0 Assessment expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for assessment readiness — artifact 2. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "CM.L2-3.4.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC CM.L2-3.4.1 is enforced — Splunk UC-22.20.17.",
                  "ea": "Saved search 'UC-22.20.17' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.18",
              "n": "CMMC assessment readiness — artifact 3 (CMMC 2.0 Assessment)",
              "c": "medium",
              "f": "expert",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC assessment readiness — artifact 3\" using CUI-scoped telemetry mapped to CMMC 2.0 Assessment expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC assessment readiness — artifact 3 (CMMC 2.0 Assessment)** — Demonstrates contractor cybersecurity hygiene for \"CMMC assessment readiness — artifact 3\" using CUI-scoped telemetry mapped to CMMC 2.0 Assessment expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for assessment readiness — artifact 3. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "AU.L2-3.3.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC AU.L2-3.3.5 is enforced — Splunk UC-22.20.18.",
                  "ea": "Saved search 'UC-22.20.18' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.19",
              "n": "CMMC assessment readiness — artifact 4 (CMMC 2.0 Assessment)",
              "c": "low",
              "f": "advanced",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC assessment readiness — artifact 4\" using CUI-scoped telemetry mapped to CMMC 2.0 Assessment expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC assessment readiness — artifact 4 (CMMC 2.0 Assessment)** — Demonstrates contractor cybersecurity hygiene for \"CMMC assessment readiness — artifact 4\" using CUI-scoped telemetry mapped to CMMC 2.0 Assessment expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for assessment readiness — artifact 4. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "IR.L2-3.6.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC IR.L2-3.6.1 is enforced — Splunk UC-22.20.19.",
                  "ea": "Saved search 'UC-22.20.19' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.20.20",
              "n": "CMMC assessment readiness — artifact 5 (CMMC 2.0 Assessment)",
              "c": "critical",
              "f": "intermediate",
              "v": "Demonstrates contractor cybersecurity hygiene for \"CMMC assessment readiness — artifact 5\" using CUI-scoped telemetry mapped to CMMC 2.0 Assessment expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621).",
              "d": "`WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports",
              "q": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "m": "(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.",
              "z": "Stacked bar, Table, Time chart, Single value",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621)..\n• Ensure the following data sources are available: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Confirm field extractions and CIM tags for the sourcetypes in scope; (2) Load or maintain the referenced lookups/KVStore collections with owner attestation dates; (3) Schedule the search at an interval aligned to the control’s materiality; (4) Route positive findings to the compliance ticketing queue with required evidence fields; (5) Retain scheduled search exports per records management and legal hold procedures.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this SPL\n\n**CMMC assessment readiness — artifact 5 (CMMC 2.0 Assessment)** — Demonstrates contractor cybersecurity hygiene for \"CMMC assessment readiness — artifact 5\" using CUI-scoped telemetry mapped to CMMC 2.0 Assessment expectations.\n\nDocumented **Data sources**: `WinEventLog:Security`, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk Add-on for Microsoft Office 365 (4055), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar, Table, Time chart, Single value",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for assessment readiness — artifact 5. CMMC sets the bar for protecting sensitive information, and we keep evidence for the level we target.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC 2.0"
              ],
              "a": [
                "Endpoint",
                "Authentication",
                "Email- **CIM SPL:**"
              ],
              "e": [
                "crowdstrike",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "AU.L2-3.3.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC AU.L2-3.3.5 is enforced — Splunk UC-22.20.20.",
                  "ea": "Saved search 'UC-22.20.20' running on WinEventLog:Security, CrowdStrike FDR, Defender for Office 365, NetFlow, encryption posture exports, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 30.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 20,
            "none": 0
          }
        },
        {
          "i": "22.21",
          "n": "EU AI Act",
          "u": [
            {
              "i": "22.21.1",
              "n": "Automatic Recording Events for High-Risk AI (Art. 12) (EU AI Act)",
              "c": "critical",
              "f": "advanced",
              "v": "Demonstrates automatic recording of operation events for high-risk AI systems under Art. 12(1), including retention-ready correlation of model_id, decision_id, and subject_reference hash.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=ai_ops` `sourcetype=mlflow:audit` OR `sourcetype=vertex:prediction` OR HEC `sourcetype=ai:high_risk:record` — fields: model_id, decision_id, subject_ref_hash, risk_class",
              "q": "index=ai_ops (sourcetype=\"mlflow:audit\" OR sourcetype=\"vertex:prediction\" OR sourcetype=\"ai:high_risk:record\") earliest=-24h risk_class=\"high\"\n| where isnotnull(model_id) AND isnotnull(decision_id)\n| eval record_complete=if(match(_raw,\"subject_ref_hash\"),1,0)\n| stats count as events, sum(record_complete) as with_subject_hash, dc(model_id) as models by sourcetype, host\n| eval coverage_pct=round(100*with_subject_hash/events,2)\n| sort - events",
              "m": "(1) Normalize high-risk serving logs to include model_id, decision_id, and hashed subject reference; (2) tag sourcetype with risk_class; (3) schedule hourly and alert when coverage_pct < 99.5; (4) route breaches to compliance queue in ES.",
              "z": "Bar chart (events by sourcetype), Single value (coverage_pct), Table (models with gaps)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=ai_ops` `sourcetype=mlflow:audit` OR `sourcetype=vertex:prediction` OR HEC `sourcetype=ai:high_risk:record` — fields: model_id, decision_id, subject_ref_hash, risk_class.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize high-risk serving logs to include model_id, decision_id, and hashed subject reference; (2) tag sourcetype with risk_class; (3) schedule hourly and alert when coverage_pct < 99.5; (4) route breaches to compliance queue in ES.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_ops (sourcetype=\"mlflow:audit\" OR sourcetype=\"vertex:prediction\" OR sourcetype=\"ai:high_risk:record\") earliest=-24h risk_class=\"high\"\n| where isnotnull(model_id) AND isnotnull(decision_id)\n| eval record_complete=if(match(_raw,\"subject_ref_hash\"),1,0)\n| stats count as events, sum(record_complete) as with_subject_hash, dc(model_id) as models by sourcetype, host\n| eval coverage_pct=round(100*with_subject_hash/events,2)\n| sort - events\n```\n\nUnderstanding this SPL\n\n**Automatic Recording Events for High-Risk AI (Art. 12) (EU AI Act)** — Demonstrates automatic recording of operation events for high-risk AI systems under Art. 12(1), including retention-ready correlation of model_id, decision_id, and subject_reference hash.\n\nDocumented **Data sources**: `index=ai_ops` `sourcetype=mlflow:audit` OR `sourcetype=vertex:prediction` OR HEC `sourcetype=ai:high_risk:record` — fields: model_id, decision_id, subject_ref_hash, risk_class. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_ops; **sourcetype**: mlflow:audit, vertex:prediction, ai:high_risk:record. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_ops, sourcetype=\"mlflow:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(model_id) AND isnotnull(decision_id)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **record_complete** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by sourcetype, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by sourcetype), Single value (coverage_pct), Table (models with gaps)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We demonstrates automatic recording of operation events for high-risk AI systems under Art.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.26 (High-risk AI obligations for deployers) is enforced — Splunk UC-22.21.1: Automatic Recording Events for High-Risk AI (Art. 12).",
                  "ea": "Saved search 'UC-22.21.1' running on sourcetype mlflow:audit and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.2",
              "n": "Model Input/Output Logging Integrity (Art. 12) (EU AI Act)",
              "c": "critical",
              "f": "intermediate",
              "v": "Verifies that each inference event carries paired input feature hash and output label/probability for Art. 12 traceability and post-incident reconstruction.",
              "t": "Splunk Machine Learning Toolkit (2890), Splunk Common Information Model Add-on (1621)",
              "d": "`index=ai_serving` HEC JSON — input_feature_hash, output_label, output_prob, model_version",
              "q": "index=ai_serving sourcetype=\"ai:inference:json\" earliest=-4h\n| eval has_in=if(isnotnull(input_feature_hash) AND len(input_feature_hash)>=16,1,0)\n| eval has_out=if(isnotnull(output_label) OR isnotnull(output_prob),1,0)\n| eval pair_ok=if(has_in=1 AND has_out=1,1,0)\n| stats count as n, sum(pair_ok) as paired by model_version, deployment\n| eval pair_rate=round(100*paired/n,2)\n| where pair_rate < 100 OR n>0\n| sort + pair_rate",
              "m": "(1) Require inference middleware to emit paired fields; (2) add props.conf FIELDALIAS if vendor names differ; (3) alert on pair_rate drops > 0.1% in 15m; (4) retain raw JSON in restricted index.",
              "z": "Line chart (pair_rate over time), Table (worst deployments), Single value (global pair_rate)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Machine Learning Toolkit](https://splunkbase.splunk.com/app/2890), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (2890), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=ai_serving` HEC JSON — input_feature_hash, output_label, output_prob, model_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require inference middleware to emit paired fields; (2) add props.conf FIELDALIAS if vendor names differ; (3) alert on pair_rate drops > 0.1% in 15m; (4) retain raw JSON in restricted index.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_serving sourcetype=\"ai:inference:json\" earliest=-4h\n| eval has_in=if(isnotnull(input_feature_hash) AND len(input_feature_hash)>=16,1,0)\n| eval has_out=if(isnotnull(output_label) OR isnotnull(output_prob),1,0)\n| eval pair_ok=if(has_in=1 AND has_out=1,1,0)\n| stats count as n, sum(pair_ok) as paired by model_version, deployment\n| eval pair_rate=round(100*paired/n,2)\n| where pair_rate < 100 OR n>0\n| sort + pair_rate\n```\n\nUnderstanding this SPL\n\n**Model Input/Output Logging Integrity (Art. 12) (EU AI Act)** — Verifies that each inference event carries paired input feature hash and output label/probability for Art. 12 traceability and post-incident reconstruction.\n\nDocumented **Data sources**: `index=ai_serving` HEC JSON — input_feature_hash, output_label, output_prob, model_version. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (2890), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_serving; **sourcetype**: ai:inference:json. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_serving, sourcetype=\"ai:inference:json\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **has_in** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **has_out** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **pair_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by model_version, deployment** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pair_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pair_rate < 100 OR n>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (pair_rate over time), Table (worst deployments), Single value (global pair_rate)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We verifies that each inference event carries paired input feature hash and output label/probability for Art.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.19",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.19 (Automatically generated logs) is enforced — Splunk UC-22.21.2: Model Input/Output Logging Integrity (Art. 12).",
                  "ea": "Saved search 'UC-22.21.2' running on index=ai_serving HEC JSON — input_feature_hash, output_label, output_prob, model_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.3",
              "n": "Automated Decision Audit Trail Reconstruction (Art. 12) (EU AI Act)",
              "c": "high",
              "f": "advanced",
              "v": "Chains decision_id across orchestrator, model gateway, and business rules engine to evidence end-to-end decision audit trail required for high-risk ADM logging.",
              "t": "Splunk ITSI (1841), Splunk Enterprise Security (263)",
              "d": "`index=ai_decision` sourcetypes: `airflow:task`, `ai:gateway`, `rules:engine` — shared decision_id",
              "q": "index=ai_decision earliest=-24h decision_id=*\n| eval hop=case(sourcetype=\"airflow:task\",1,sourcetype=\"ai:gateway\",2,sourcetype=\"rules:engine\",3,true(),0)\n| stats values(sourcetype) as path, min(_time) as t0, max(_time) as t1, range(_time) as span_sec by decision_id\n| eval complete=if(mvcount(path)>=3,1,0)\n| where complete=0\n| eval span_min=round(span_sec/60,2)\n| table decision_id, path, span_min, t0, t1",
              "m": "(1) Propagate decision_id across pipeline stages; (2) create macro `ai_decision_index`; (3) schedule every 30m for incomplete chains; (4) attach ITSI episode template for cross-team triage.",
              "z": "Sankey or sequence table (path), Histogram (span_sec), Table (incomplete decision_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (1841), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=ai_decision` sourcetypes: `airflow:task`, `ai:gateway`, `rules:engine` — shared decision_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Propagate decision_id across pipeline stages; (2) create macro `ai_decision_index`; (3) schedule every 30m for incomplete chains; (4) attach ITSI episode template for cross-team triage.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_decision earliest=-24h decision_id=*\n| eval hop=case(sourcetype=\"airflow:task\",1,sourcetype=\"ai:gateway\",2,sourcetype=\"rules:engine\",3,true(),0)\n| stats values(sourcetype) as path, min(_time) as t0, max(_time) as t1, range(_time) as span_sec by decision_id\n| eval complete=if(mvcount(path)>=3,1,0)\n| where complete=0\n| eval span_min=round(span_sec/60,2)\n| table decision_id, path, span_min, t0, t1\n```\n\nUnderstanding this SPL\n\n**Automated Decision Audit Trail Reconstruction (Art. 12) (EU AI Act)** — Chains decision_id across orchestrator, model gateway, and business rules engine to evidence end-to-end decision audit trail required for high-risk ADM logging.\n\nDocumented **Data sources**: `index=ai_decision` sourcetypes: `airflow:task`, `ai:gateway`, `rules:engine` — shared decision_id. **App/TA** (typical add-on context): Splunk ITSI (1841), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_decision.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_decision, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hop** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by decision_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **complete** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where complete=0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **span_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Automated Decision Audit Trail Reconstruction (Art. 12) (EU AI Act)**): table decision_id, path, span_min, t0, t1\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey or sequence table (path), Histogram (span_sec), Table (incomplete decision_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We chains decision_id across orchestrator, model gateway, and business rules engine to evidence end-to-end decision audit trail required for high-risk ADM logging.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.19",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.19 (Automatically generated logs) is enforced — Splunk UC-22.21.3: Automated Decision Audit Trail Reconstruction (Art. 12).",
                  "ea": "Saved search 'UC-22.21.3' running on index=ai_decision sourcetypes: airflow:task, ai:gateway, rules:engine — shared decision_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.4",
              "n": "Model Performance Degradation Detection (Art. 12) (EU AI Act)",
              "c": "high",
              "f": "intermediate",
              "v": "Detects sudden drops in accuracy/auroc vs baseline to support operational logging and risk management for high-risk AI performance stability.",
              "t": "Splunk Machine Learning Toolkit (2890), Splunk ITSI (1841)",
              "d": "`index=ai_metrics` `sourcetype=ai:model:kpi` — model_id, metric_name, metric_value, window",
              "q": "index=ai_metrics sourcetype=\"ai:model:kpi\" metric_name=\"accuracy\" earliest=-7d\n| timechart span=1h perc95(metric_value) as p95_acc by model_id\n| untable _time model_id p95_acc\n| eventstats median(p95_acc) as baseline by model_id\n| eval delta=p95_acc-baseline\n| where delta < -0.02\n| table _time, model_id, p95_acc, baseline, delta",
              "m": "(1) Ingest batch evaluation KPIs hourly; (2) tune -0.02 threshold per model risk file; (3) create ITSI KPI thresholding; (4) document rollback criteria in QMS.",
              "z": "Time chart (p95_acc vs baseline), Table (negative deltas), Single value (models in alert)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Machine Learning Toolkit](https://splunkbase.splunk.com/app/2890), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (2890), Splunk ITSI (1841).\n• Ensure the following data sources are available: `index=ai_metrics` `sourcetype=ai:model:kpi` — model_id, metric_name, metric_value, window.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest batch evaluation KPIs hourly; (2) tune -0.02 threshold per model risk file; (3) create ITSI KPI thresholding; (4) document rollback criteria in QMS.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_metrics sourcetype=\"ai:model:kpi\" metric_name=\"accuracy\" earliest=-7d\n| timechart span=1h perc95(metric_value) as p95_acc by model_id\n| untable _time model_id p95_acc\n| eventstats median(p95_acc) as baseline by model_id\n| eval delta=p95_acc-baseline\n| where delta < -0.02\n| table _time, model_id, p95_acc, baseline, delta\n```\n\nUnderstanding this SPL\n\n**Model Performance Degradation Detection (Art. 12) (EU AI Act)** — Detects sudden drops in accuracy/auroc vs baseline to support operational logging and risk management for high-risk AI performance stability.\n\nDocumented **Data sources**: `index=ai_metrics` `sourcetype=ai:model:kpi` — model_id, metric_name, metric_value, window. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (2890), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_metrics; **sourcetype**: ai:model:kpi. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_metrics, sourcetype=\"ai:model:kpi\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by model_id** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Model Performance Degradation Detection (Art. 12) (EU AI Act)**): untable _time model_id p95_acc\n• `eventstats` rolls up events into metrics; results are split **by model_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where delta < -0.02` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Model Performance Degradation Detection (Art. 12) (EU AI Act)**): table _time, model_id, p95_acc, baseline, delta\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (p95_acc vs baseline), Table (negative deltas), Single value (models in alert)",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We look for sudden drops in accuracy/auroc vs baseline to support operational logging and risk management for high-risk AI performance stability.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.15 (Accuracy, robustness, cybersecurity) is enforced — Splunk UC-22.21.4: Model Performance Degradation Detection (Art. 12).",
                  "ea": "Saved search 'UC-22.21.4' running on index=ai_metrics sourcetype=ai:model:kpi — model_id, metric_name, metric_value, window, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.5",
              "n": "Bias Drift Monitoring Across Protected Attributes (Art. 12) (EU AI Act)",
              "c": "critical",
              "f": "expert",
              "v": "Tracks disparate impact ratio across demographic buckets over sliding windows to evidence ongoing bias monitoring aligned with Art. 12 and Annex IV risk management.",
              "t": "Splunk Machine Learning Toolkit (2890), Splunk DB Connect (2686)",
              "d": "`index=ai_fairness` `sourcetype=ai:bias:eval` — attribute, bucket, selection_rate, approval_rate",
              "q": "index=ai_fairness sourcetype=\"ai:bias:eval\" earliest=-48h\n| stats latest(selection_rate) as sr by attribute, bucket, model_id\n| eventstats max(sr) as sr_max, min(sr) as sr_min by attribute, model_id\n| eval dipr=round(sr_max/nullif(sr_min,0),3)\n| where dipr > 1.25 OR sr_min < 0.01\n| sort attribute, -dipr\n| table model_id, attribute, bucket, sr, dipr",
              "m": "(1) Ensure fairness jobs emit non-identifying buckets; (2) join reference rates via DB Connect if needed; (3) alert when DIPR exceeds policy; (4) store model card link in lookup.",
              "z": "Heatmap (bucket vs selection_rate), Bar chart (dipr by attribute), Table (violations)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Machine Learning Toolkit](https://splunkbase.splunk.com/app/2890), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (2890), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=ai_fairness` `sourcetype=ai:bias:eval` — attribute, bucket, selection_rate, approval_rate.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure fairness jobs emit non-identifying buckets; (2) join reference rates via DB Connect if needed; (3) alert when DIPR exceeds policy; (4) store model card link in lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_fairness sourcetype=\"ai:bias:eval\" earliest=-48h\n| stats latest(selection_rate) as sr by attribute, bucket, model_id\n| eventstats max(sr) as sr_max, min(sr) as sr_min by attribute, model_id\n| eval dipr=round(sr_max/nullif(sr_min,0),3)\n| where dipr > 1.25 OR sr_min < 0.01\n| sort attribute, -dipr\n| table model_id, attribute, bucket, sr, dipr\n```\n\nUnderstanding this SPL\n\n**Bias Drift Monitoring Across Protected Attributes (Art. 12) (EU AI Act)** — Tracks disparate impact ratio across demographic buckets over sliding windows to evidence ongoing bias monitoring aligned with Art. 12 and Annex IV risk management.\n\nDocumented **Data sources**: `index=ai_fairness` `sourcetype=ai:bias:eval` — attribute, bucket, selection_rate, approval_rate. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (2890), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_fairness; **sourcetype**: ai:bias:eval. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_fairness, sourcetype=\"ai:bias:eval\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by attribute, bucket, model_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by attribute, model_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dipr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where dipr > 1.25 OR sr_min < 0.01` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Bias Drift Monitoring Across Protected Attributes (Art. 12) (EU AI Act)**): table model_id, attribute, bucket, sr, dipr\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (bucket vs selection_rate), Bar chart (dipr by attribute), Table (violations)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track disparate impact ratio across demographic buckets over sliding windows to evidence ongoing bias monitoring aligned with Art.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.5: Bias Drift Monitoring Across Protected Attributes (Art. 12).",
                  "ea": "Saved search 'UC-22.21.5' running on index=ai_fairness sourcetype=ai:bias:eval — attribute, bucket, selection_rate, approval_rate, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.6",
              "n": "Transparency — Explainability Payload Presence (Art. 13) (EU AI Act)",
              "c": "high",
              "f": "intermediate",
              "v": "Confirms user-facing responses include required explanation fields (summary, confidence, limitations) for transparency obligations under Art. 13.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=ai_ux` `sourcetype=ai:chat:response` — fields: explanation_summary, confidence, limitations",
              "q": "index=ai_ux sourcetype=\"ai:chat:response\" earliest=-24h\n| eval has_sum=if(len(explanation_summary)>20,1,0)\n| eval has_conf=if(isnum(confidence),1,0)\n| eval has_lim=if(len(limitations)>10,1,0)\n| eval art13_ok=if(has_sum+has_conf+has_lim=3,1,0)\n| stats count as n, sum(art13_ok) as ok by product, model_id\n| eval ok_pct=round(100*ok/n,2)\n| where ok_pct < 100\n| sort + ok_pct",
              "m": "(1) Instrument API gateway to log structured explainability fields; (2) redact PII in explanations; (3) weekly report to product/legal; (4) tie to conformity documentation version.",
              "z": "Stacked bar (ok vs n), Table (failing product/model_id), Single value (ok_pct)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=ai_ux` `sourcetype=ai:chat:response` — fields: explanation_summary, confidence, limitations.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument API gateway to log structured explainability fields; (2) redact PII in explanations; (3) weekly report to product/legal; (4) tie to conformity documentation version.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_ux sourcetype=\"ai:chat:response\" earliest=-24h\n| eval has_sum=if(len(explanation_summary)>20,1,0)\n| eval has_conf=if(isnum(confidence),1,0)\n| eval has_lim=if(len(limitations)>10,1,0)\n| eval art13_ok=if(has_sum+has_conf+has_lim=3,1,0)\n| stats count as n, sum(art13_ok) as ok by product, model_id\n| eval ok_pct=round(100*ok/n,2)\n| where ok_pct < 100\n| sort + ok_pct\n```\n\nUnderstanding this SPL\n\n**Transparency — Explainability Payload Presence (Art. 13) (EU AI Act)** — Confirms user-facing responses include required explanation fields (summary, confidence, limitations) for transparency obligations under Art. 13.\n\nDocumented **Data sources**: `index=ai_ux` `sourcetype=ai:chat:response` — fields: explanation_summary, confidence, limitations. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_ux; **sourcetype**: ai:chat:response. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_ux, sourcetype=\"ai:chat:response\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **has_sum** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **has_conf** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **has_lim** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **art13_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product, model_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ok_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ok_pct < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) 
as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (ok vs n), Table (failing product/model_id), Single value (ok_pct)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We confirms user-facing responses include required explanation fields (summary, confidence, limitations) for transparency obligations under Art.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.6: Transparency — Explainability Payload Presence (Art. 13).",
                  "ea": "Saved search 'UC-22.21.6' running on index=ai_ux sourcetype=ai:chat:response — fields: explanation_summary, confidence, limitations, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.7",
              "n": "Model Version Tracking and Immutable Version IDs (Art. 13) (EU AI Act)",
              "c": "high",
              "f": "beginner",
              "v": "Detects production traffic served from non-semver or missing model_version to support traceability and change control evidence.",
              "t": "Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "`index=ai_serving` — model_version, deployment, traffic_pct",
              "q": "index=ai_serving sourcetype=\"ai:gateway:access\" earliest=-4h\n| eval semver_ok=if(match(model_version,\"^v?\\d+\\.\\d+\\.\\d+\"),1,0)\n| stats count as reqs, sum(semver_ok) as semver_hits by deployment, model_version\n| eval semver_pct=round(100*semver_hits/reqs,2)\n| where isnull(model_version) OR model_version=\"\" OR semver_ok=0\n| sort - reqs",
              "m": "(1) Enforce semver in model registry webhook; (2) block deploy if version missing; (3) alert on null model_version > 10/min; (4) map deployment to ITSI entity.",
              "z": "Pie chart (semver_ok), Table (bad versions), Column chart (reqs by deployment)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=ai_serving` — model_version, deployment, traffic_pct.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enforce semver in model registry webhook; (2) block deploy if version missing; (3) alert on null model_version > 10/min; (4) map deployment to ITSI entity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_serving sourcetype=\"ai:gateway:access\" earliest=-4h\n| eval semver_ok=if(match(model_version,\"^v?\\d+\\.\\d+\\.\\d+\"),1,0)\n| stats count as reqs, sum(semver_ok) as semver_hits by deployment, model_version\n| eval semver_pct=round(100*semver_hits/reqs,2)\n| where isnull(model_version) OR model_version=\"\" OR semver_ok=0\n| sort - reqs\n```\n\nUnderstanding this SPL\n\n**Model Version Tracking and Immutable Version IDs (Art. 13) (EU AI Act)** — Detects production traffic served from non-semver or missing model_version to support traceability and change control evidence.\n\nDocumented **Data sources**: `index=ai_serving` — model_version, deployment, traffic_pct. **App/TA** (typical add-on context): Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_serving; **sourcetype**: ai:gateway:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_serving, sourcetype=\"ai:gateway:access\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **semver_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by deployment, model_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **semver_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(model_version) OR model_version=\"\" OR semver_ok=0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (semver_ok), Table (bad versions), Column chart (reqs by deployment)",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We look for production traffic served from non-semver or missing model_version to support traceability and change control evidence.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.7: Model Version Tracking and Immutable Version IDs (Art. 13).",
                  "ea": "Saved search 'UC-22.21.7' running on index=ai_serving — model_version, deployment, traffic_pct, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.8",
              "n": "Training Data Lineage Event Completeness (Art. 13) (EU AI Act)",
              "c": "critical",
              "f": "advanced",
              "v": "Validates dataset_id, snapshot_hash, and license_tag on each training run completion for dataset lineage traceability.",
              "t": "Splunk DB Connect (2686), Splunk Machine Learning Toolkit (2890)",
              "d": "`index=ml_train` `sourcetype=kubeflow:run` OR `sourcetype=mlflow:run`",
              "q": "index=ml_train (sourcetype=\"kubeflow:run\" OR sourcetype=\"mlflow:run\") event_type=\"completed\" earliest=-7d\n| eval lineage_ok=if(isnotnull(dataset_id) AND isnotnull(snapshot_hash) AND isnotnull(license_tag),1,0)\n| stats count as runs, sum(lineage_ok) as ok by experiment, pipeline\n| eval ok_pct=round(100*ok/runs,2)\n| where ok_pct < 100\n| sort + ok_pct",
              "m": "(1) Add lineage fields at pipeline DSL; (2) join catalog table via DB Connect for license_tag validation; (3) block promotion if lineage_ok=0; (4) export monthly compliance CSV.",
              "z": "Bar chart (ok_pct by pipeline), Table (failed runs), Single value (runs missing license_tag)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Machine Learning Toolkit](https://splunkbase.splunk.com/app/2890)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (2686), Splunk Machine Learning Toolkit (2890).\n• Ensure the following data sources are available: `index=ml_train` `sourcetype=kubeflow:run` OR `sourcetype=mlflow:run`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Add lineage fields at pipeline DSL; (2) join catalog table via DB Connect for license_tag validation; (3) block promotion if lineage_ok=0; (4) export monthly compliance CSV.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ml_train (sourcetype=\"kubeflow:run\" OR sourcetype=\"mlflow:run\") event_type=\"completed\" earliest=-7d\n| eval lineage_ok=if(isnotnull(dataset_id) AND isnotnull(snapshot_hash) AND isnotnull(license_tag),1,0)\n| stats count as runs, sum(lineage_ok) as ok by experiment, pipeline\n| eval ok_pct=round(100*ok/runs,2)\n| where ok_pct < 100\n| sort + ok_pct\n```\n\nUnderstanding this SPL\n\n**Training Data Lineage Event Completeness (Art. 13) (EU AI Act)** — Validates dataset_id, snapshot_hash, and license_tag on each training run completion for dataset lineage traceability.\n\nDocumented **Data sources**: `index=ml_train` `sourcetype=kubeflow:run` OR `sourcetype=mlflow:run`. **App/TA** (typical add-on context): Splunk DB Connect (2686), Splunk Machine Learning Toolkit (2890). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ml_train; **sourcetype**: kubeflow:run, mlflow:run. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ml_train, sourcetype=\"kubeflow:run\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **lineage_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by experiment, pipeline** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ok_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ok_pct < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (ok_pct by pipeline), Table (failed runs), Single value (runs missing license_tag)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We validates dataset_id, snapshot_hash, and license_tag on each training run completion for dataset lineage traceability.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.8: Training Data Lineage Event Completeness (Art. 13).",
                  "ea": "Saved search 'UC-22.21.8' running on index=ml_train sourcetype=kubeflow:run OR sourcetype=mlflow:run, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.9",
              "n": "Feature Importance Audit Snapshot Correlation (Art. 13) (EU AI Act)",
              "c": "medium",
              "f": "advanced",
              "v": "Correlates SHAP export job completion with model_version to retain feature importance audit snapshots for transparency reviews.",
              "t": "Splunk Machine Learning Toolkit (2890), Splunk Common Information Model Add-on (1621)",
              "d": "`index=ml_explain` `sourcetype=ai:shap:export` — job_id, model_version, artifact_uri",
              "q": "index=ml_explain sourcetype=\"ai:shap:export\" earliest=-30d\n| stats latest(_time) as last_export by model_version\n| join type=left model_version [\n    search index=ai_serving sourcetype=\"ai:gateway:access\" earliest=-30d\n    | stats dc(model_version) as versions_seen by model_version\n  ]\n| eval stale=if(isnull(last_export) OR (now()-last_export)>2592000,1,0)\n| where stale=1\n| table model_version, last_export, versions_seen",
              "m": "(1) Schedule SHAP export on each release; (2) store artifact_uri in object storage with WORM; (3) alert if model_version in prod without export in 7d; (4) document in technical file.",
              "z": "Timeline (exports), Table (stale models), Single value (stale count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Machine Learning Toolkit](https://splunkbase.splunk.com/app/2890), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (2890), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=ml_explain` `sourcetype=ai:shap:export` — job_id, model_version, artifact_uri.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Schedule SHAP export on each release; (2) store artifact_uri in object storage with WORM; (3) alert if model_version in prod without export in 7d; (4) document in technical file.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ml_explain sourcetype=\"ai:shap:export\" earliest=-30d\n| stats latest(_time) as last_export by model_version\n| join type=left model_version [\n    search index=ai_serving sourcetype=\"ai:gateway:access\" earliest=-30d\n    | stats dc(model_version) as versions_seen by model_version\n  ]\n| eval stale=if(isnull(last_export) OR (now()-last_export)>2592000,1,0)\n| where stale=1\n| table model_version, last_export, versions_seen\n```\n\nUnderstanding this SPL\n\n**Feature Importance Audit Snapshot Correlation (Art. 13) (EU AI Act)** — Correlates SHAP export job completion with model_version to retain feature importance audit snapshots for transparency reviews.\n\nDocumented **Data sources**: `index=ml_explain` `sourcetype=ai:shap:export` — job_id, model_version, artifact_uri. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (2890), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ml_explain; **sourcetype**: ai:shap:export. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ml_explain, sourcetype=\"ai:shap:export\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by model_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **stale** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where stale=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Feature Importance Audit Snapshot Correlation (Art. 13) (EU AI Act)**): table model_version, last_export, versions_seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (exports), Table (stale models), Single value (stale count)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We connect SHAP export job completion with model_version to retain feature importance audit snapshots for transparency reviews.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.9: Feature Importance Audit Snapshot Correlation (Art. 13).",
                  "ea": "Saved search 'UC-22.21.9' running on index=ml_explain sourcetype=ai:shap:export — job_id, model_version, artifact_uri, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.10",
              "n": "Algorithm Change Documentation Linkage (Art. 13) (EU AI Act)",
              "c": "high",
              "f": "intermediate",
              "v": "Ensures each production model_version has linked change_ticket and approver for algorithm change documentation.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686)",
              "d": "`index=ai_governance` HEC — model_version, change_ticket, approver, deployed_at",
              "q": "index=ai_governance sourcetype=\"ai:deploy:gov\" earliest=-90d\n| stats latest(deployed_at) as deployed_at, values(approver) as approvers by model_version, change_ticket\n| eval ticket_ok=if(match(change_ticket,\"^(CHG|CR)-\\d+\"),1,0)\n| where ticket_ok=0 OR mvcount(approvers)<1\n| sort - deployed_at",
              "m": "(1) Require CI/CD to emit governance event; (2) validate ticket regex against ITSM; (3) create lookup fallback from DB Connect ITSM export; (4) quarterly audit sample.",
              "z": "Table (violations), Bar chart (ticket_ok rate by month), Single value (open violations)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=ai_governance` HEC — model_version, change_ticket, approver, deployed_at.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require CI/CD to emit governance event; (2) validate ticket regex against ITSM; (3) create lookup fallback from DB Connect ITSM export; (4) quarterly audit sample.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_governance sourcetype=\"ai:deploy:gov\" earliest=-90d\n| stats latest(deployed_at) as deployed_at, values(approver) as approvers by model_version, change_ticket\n| eval ticket_ok=if(match(change_ticket,\"^(CHG|CR)-\\d+\"),1,0)\n| where ticket_ok=0 OR mvcount(approvers)<1\n| sort - deployed_at\n```\n\nUnderstanding this SPL\n\n**Algorithm Change Documentation Linkage (Art. 13) (EU AI Act)** — Ensures each production model_version has linked change_ticket and approver for algorithm change documentation.\n\nDocumented **Data sources**: `index=ai_governance` HEC — model_version, change_ticket, approver, deployed_at. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_governance; **sourcetype**: ai:deploy:gov. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_governance, sourcetype=\"ai:deploy:gov\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by model_version, change_ticket** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ticket_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ticket_ok=0 OR mvcount(approvers)<1` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (violations), Bar chart (ticket_ok rate by month), Single value (open violations)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We ensures each production model_version has linked change_ticket and approver for algorithm change documentation.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.10: Algorithm Change Documentation Linkage (Art. 13).",
                  "ea": "Saved search 'UC-22.21.10' running on index=ai_governance HEC — model_version, change_ticket, approver, deployed_at, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.11",
              "n": "Human Override Action Logging Coverage (Art. 14) (EU AI Act)",
              "c": "critical",
              "f": "intermediate",
              "v": "Measures fraction of automated decisions overridden by human operators with mandatory reason codes for Art. 14 human oversight.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=ai_hil` `sourcetype=ai:human:override` — decision_id, operator_id, reason_code",
              "q": "index=ai_hil sourcetype=\"ai:human:override\" earliest=-24h\n| eval reason_ok=if(match(reason_code,\"^[A-Z]{2,4}_\\d{2,4}$\"),1,0)\n| stats count as overrides, sum(reason_ok) as with_valid_reason by application\n| eval reason_pct=round(100*with_valid_reason/overrides,2)\n| where overrides>0 AND reason_pct < 100\n| sort + reason_pct",
              "m": "(1) Enforce reason_code enum in UI; (2) map operator_id to HR directory via lookup; (3) alert on missing reason_code; (4) retain 24-month archive per policy.",
              "z": "Bar chart (overrides by application), Table (invalid reason), Single value (reason_pct)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=ai_hil` `sourcetype=ai:human:override` — decision_id, operator_id, reason_code.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enforce reason_code enum in UI; (2) map operator_id to HR directory via lookup; (3) alert on missing reason_code; (4) retain 24-month archive per policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_hil sourcetype=\"ai:human:override\" earliest=-24h\n| eval reason_ok=if(match(reason_code,\"^[A-Z]{2,4}_\\d{2,4}$\"),1,0)\n| stats count as overrides, sum(reason_ok) as with_valid_reason by application\n| eval reason_pct=round(100*with_valid_reason/overrides,2)\n| where overrides>0 AND reason_pct < 100\n| sort + reason_pct\n```\n\nUnderstanding this SPL\n\n**Human Override Action Logging Coverage (Art. 14) (EU AI Act)** — Measures fraction of automated decisions overridden by human operators with mandatory reason codes for Art. 14 human oversight.\n\nDocumented **Data sources**: `index=ai_hil` `sourcetype=ai:human:override` — decision_id, operator_id, reason_code. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_hil; **sourcetype**: ai:human:override. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_hil, sourcetype=\"ai:human:override\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **reason_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by application** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **reason_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overrides>0 AND reason_pct < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (overrides by application), Table (invalid reason), Single value (reason_pct)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measure fraction of automated decisions overridden by human operators with mandatory reason codes for Art.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.14 (Human oversight) is enforced — Splunk UC-22.21.11: Human Override Action Logging Coverage (Art. 14).",
                  "ea": "Saved search 'UC-22.21.11' running on index=ai_hil sourcetype=ai:human:override — decision_id, operator_id, reason_code, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.12",
              "n": "Escalation to Human Review Queue Latency (Art. 14) (EU AI Act)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks SLA from model_low_confidence event to human_assigned for timely human review per oversight procedures.",
              "t": "Splunk ITSI (1841), Splunk Enterprise Security (263)",
              "d": "`index=ai_hil` — event=low_confidence | event=human_assigned — decision_id",
              "q": "index=ai_hil sourcetype=\"ai:human:queue\" earliest=-7d (event=\"low_confidence\" OR event=\"human_assigned\")\n| eval t=if(event=\"low_confidence\",_time,null())\n| eval t2=if(event=\"human_assigned\",_time,null())\n| stats min(t) as t_lc, min(t2) as t_ha by decision_id\n| eval sla_sec=t_ha-t_lc\n| where isnotnull(t_lc) AND (isnull(t_ha) OR sla_sec>900)\n| eval breach=if(isnull(t_ha),\"unassigned\",if(sla_sec>900,\"late\",\"ok\"))\n| table decision_id, t_lc, t_ha, sla_sec, breach",
              "m": "(1) Unify timestamps to UTC; (2) set SLA based on risk class; (3) ITSI threshold on median sla_sec; (4) playbook for reassignment.",
              "z": "Histogram (sla_sec), Table (breaches), Single value (pct within SLA)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (1841), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=ai_hil` — event=low_confidence | event=human_assigned — decision_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Unify timestamps to UTC; (2) set SLA based on risk class; (3) ITSI threshold on median sla_sec; (4) playbook for reassignment.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_hil sourcetype=\"ai:human:queue\" earliest=-7d (event=\"low_confidence\" OR event=\"human_assigned\")\n| eval t=if(event=\"low_confidence\",_time,null())\n| eval t2=if(event=\"human_assigned\",_time,null())\n| stats min(t) as t_lc, min(t2) as t_ha by decision_id\n| eval sla_sec=t_ha-t_lc\n| where isnotnull(t_lc) AND (isnull(t_ha) OR sla_sec>900)\n| eval breach=if(isnull(t_ha),\"unassigned\",if(sla_sec>900,\"late\",\"ok\"))\n| table decision_id, t_lc, t_ha, sla_sec, breach\n```\n\nUnderstanding this SPL\n\n**Escalation to Human Review Queue Latency (Art. 14) (EU AI Act)** — Tracks SLA from model_low_confidence event to human_assigned for timely human review per oversight procedures.\n\nDocumented **Data sources**: `index=ai_hil` — event=low_confidence | event=human_assigned — decision_id. **App/TA** (typical add-on context): Splunk ITSI (1841), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_hil; **sourcetype**: ai:human:queue. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_hil, sourcetype=\"ai:human:queue\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **t** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **t2** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by decision_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **sla_sec** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(t_lc) AND (isnull(t_ha) OR sla_sec>900)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Escalation to Human Review Queue Latency (Art. 14) (EU AI Act)**): table decision_id, t_lc, t_ha, sla_sec, breach\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (sla_sec), Table (breaches), Single value (pct within SLA)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track SLA from model_low_confidence event to human_assigned for timely human review per oversight procedures.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.14 (Human oversight) is enforced — Splunk UC-22.21.12: Escalation to Human Review Queue Latency (Art. 14).",
                  "ea": "Saved search 'UC-22.21.12' running on index=ai_hil — event=low_confidence | event=human_assigned — decision_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.13",
              "n": "Autonomous Decision Threshold Breach Monitoring (Art. 14) (EU AI Act)",
              "c": "critical",
              "f": "advanced",
              "v": "Alerts when decisions are auto-approved above configured risk_score without human gate, evidencing threshold enforcement.",
              "t": "Splunk Enterprise Security (263), Splunk Machine Learning Toolkit (2890)",
              "d": "`index=ai_policy` `sourcetype=ai:decision:policy` — risk_score, auto_approve, human_gate",
              "q": "index=ai_policy sourcetype=\"ai:decision:policy\" earliest=-24h auto_approve=true\n| lookup ai_risk_thresholds.csv model_id OUTPUT max_auto_risk\n| eval breach=if(risk_score > max_auto_risk,1,0)\n| where breach=1 OR (auto_approve=true AND human_gate=false AND risk_score>0.6)\n| stats count by model_id, user_segment, breach\n| sort - count",
              "m": "(1) Maintain thresholds lookup by jurisdiction; (2) correlate with credit/product policies; (3) notable in ES on breach; (4) monthly threshold review attestation.",
              "z": "Time chart (breach count), Table (top model_id), Single value (breaches/1k decisions)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Machine Learning Toolkit](https://splunkbase.splunk.com/app/2890)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Machine Learning Toolkit (2890).\n• Ensure the following data sources are available: `index=ai_policy` `sourcetype=ai:decision:policy` — risk_score, auto_approve, human_gate.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain thresholds lookup by jurisdiction; (2) correlate with credit/product policies; (3) notable in ES on breach; (4) monthly threshold review attestation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_policy sourcetype=\"ai:decision:policy\" earliest=-24h auto_approve=true\n| lookup ai_risk_thresholds.csv model_id OUTPUT max_auto_risk\n| eval breach=if(risk_score > max_auto_risk,1,0)\n| where breach=1 OR (auto_approve=true AND human_gate=false AND risk_score>0.6)\n| stats count by model_id, user_segment, breach\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Autonomous Decision Threshold Breach Monitoring (Art. 14) (EU AI Act)** — Alerts when decisions are auto-approved above configured risk_score without human gate, evidencing threshold enforcement.\n\nDocumented **Data sources**: `index=ai_policy` `sourcetype=ai:decision:policy` — risk_score, auto_approve, human_gate. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Machine Learning Toolkit (2890). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_policy; **sourcetype**: ai:decision:policy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_policy, sourcetype=\"ai:decision:policy\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where breach=1 OR (auto_approve=true AND human_gate=false AND risk_score>0.6)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by model_id, user_segment, breach** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (breach count), Table (top model_id), Single value (breaches/1k decisions)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We alerts when decisions are auto-approved above configured risk_score without human gate, evidencing threshold enforcement.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.13: Autonomous Decision Threshold Breach Monitoring (Art. 14).",
                  "ea": "Saved search 'UC-22.21.13' running on index=ai_policy sourcetype=ai:decision:policy — risk_score, auto_approve, human_gate, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.14",
              "n": "Human-in-the-Loop Intervention Effectiveness (Art. 14) (EU AI Act)",
              "c": "medium",
              "f": "advanced",
              "v": "Compares downstream outcomes after override vs baseline auto path to evidence intervention effectiveness for governance reviews.",
              "t": "Splunk Machine Learning Toolkit (2890), Splunk DB Connect (2686)",
              "d": "`index=ai_outcomes` — decision_id, path (override|auto), outcome_score, days_to_outcome",
              "q": "index=ai_outcomes sourcetype=\"ai:outcome:track\" earliest=-90d\n| stats median(outcome_score) as med_score, median(days_to_outcome) as med_days by path, product_line\n| eval uplift=if(path=\"override\", med_score, null())\n| eventstats max(eval(if(path=\"auto\", med_score, null()))) as base by product_line\n| eval delta=if(path=\"override\", med_score-base, null())\n| where path=\"override\"\n| table product_line, med_score, base, delta, med_days",
              "m": "(1) Align outcome_score definition with business KPI; (2) ensure ethical review sign-off on metrics; (3) quarterly dashboard to board risk committee; (4) document limitations.",
              "z": "Box plot (simulated via table), Bar chart (delta by product_line), Table (medians)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Machine Learning Toolkit](https://splunkbase.splunk.com/app/2890), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (2890), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=ai_outcomes` — decision_id, path (override|auto), outcome_score, days_to_outcome.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align outcome_score definition with business KPI; (2) ensure ethical review sign-off on metrics; (3) quarterly dashboard to board risk committee; (4) document limitations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_outcomes sourcetype=\"ai:outcome:track\" earliest=-90d\n| stats median(outcome_score) as med_score, median(days_to_outcome) as med_days by path, product_line\n| eval uplift=if(path=\"override\", med_score, null())\n| eventstats max(eval(if(path=\"auto\", med_score, null()))) as base by product_line\n| eval delta=if(path=\"override\", med_score-base, null())\n| where path=\"override\"\n| table product_line, med_score, base, delta, med_days\n```\n\nUnderstanding this SPL\n\n**Human-in-the-Loop Intervention Effectiveness (Art. 14) (EU AI Act)** — Compares downstream outcomes after override vs baseline auto path to evidence intervention effectiveness for governance reviews.\n\nDocumented **Data sources**: `index=ai_outcomes` — decision_id, path (override|auto), outcome_score, days_to_outcome. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (2890), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_outcomes; **sourcetype**: ai:outcome:track. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_outcomes, sourcetype=\"ai:outcome:track\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by path, product_line** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uplift** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by product_line** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where path=\"override\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Human-in-the-Loop Intervention Effectiveness (Art. 14) (EU AI Act)**): table product_line, med_score, base, delta, med_days\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Box plot (simulated via table), Bar chart (delta by product_line), Table (medians)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We compares downstream outcomes after override vs baseline auto path to evidence intervention effectiveness for governance reviews.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.14: Human-in-the-Loop Intervention Effectiveness (Art. 14).",
                  "ea": "Saved search 'UC-22.21.14' running on index=ai_outcomes — decision_id, path (override|auto), outcome_score, days_to_outcome, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.15",
              "n": "Operator Session Segregation for High-Risk AI Consoles (Art. 14) (EU AI Act)",
              "c": "high",
              "f": "intermediate",
              "v": "Detects concurrent high-risk console sessions per operator_id to support segregation of duties and oversight controls.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=ai_console` `sourcetype=ai:admin:session` — operator_id, session_id, action",
              "q": "index=ai_console sourcetype=\"ai:admin:session\" earliest=-24h\n| bin _time span=5m\n| stats dc(session_id) as concurrent_sessions by _time, operator_id\n| where concurrent_sessions > 1\n| timechart span=1h max(concurrent_sessions) by operator_id",
              "m": "(1) Ingest IdP session logs if console lacks them; (2) tune threshold per role; (3) ES correlation for shared accounts; (4) policy attestation annually.",
              "z": "Time chart (concurrent_sessions), Table (operator_id peaks), Single value (distinct operators affected)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=ai_console` `sourcetype=ai:admin:session` — operator_id, session_id, action.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest IdP session logs if console lacks them; (2) tune threshold per role; (3) ES correlation for shared accounts; (4) policy attestation annually.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_console sourcetype=\"ai:admin:session\" earliest=-24h\n| bin _time span=5m\n| stats dc(session_id) as concurrent_sessions by _time, operator_id\n| where concurrent_sessions > 1\n| timechart span=1h max(concurrent_sessions) by operator_id\n```\n\nUnderstanding this SPL\n\n**Operator Session Segregation for High-Risk AI Consoles (Art. 14) (EU AI Act)** — Detects concurrent high-risk console sessions per operator_id to support segregation of duties and oversight controls.\n\nDocumented **Data sources**: `index=ai_console` `sourcetype=ai:admin:session` — operator_id, session_id, action. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_console; **sourcetype**: ai:admin:session. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_console, sourcetype=\"ai:admin:session\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, operator_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where concurrent_sessions > 1` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1h** buckets with a separate series **by operator_id** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Operator Session Segregation for High-Risk AI Consoles (Art. 14) (EU AI Act)** — Detects concurrent high-risk console sessions per operator_id to support segregation of duties and oversight controls.\n\nDocumented **Data sources**: `index=ai_console` `sourcetype=ai:admin:session` — operator_id, session_id, action. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (concurrent_sessions), Table (operator_id peaks), Single value (distinct operators affected)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We look for concurrent high-risk console sessions per operator_id to support segregation of duties and oversight controls.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=1h | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.14 (Human oversight) is enforced — Splunk UC-22.21.15: Operator Session Segregation for High-Risk AI Consoles (Art. 14).",
                  "ea": "Saved search 'UC-22.21.15' running on index=ai_console sourcetype=ai:admin:session — operator_id, session_id, action, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.16",
              "n": "Quality Management System Control Evidence Freshness (Art. 17) (EU AI Act)",
              "c": "high",
              "f": "beginner",
              "v": "Monitors last_review date for each QMS control_id to ensure Art. 17 quality management system remains current.",
              "t": "Splunk DB Connect (2686), Splunk Enterprise Security (263)",
              "d": "`index=qms` CSV via DB Connect or `sourcetype=qms:control:event` — control_id, last_review, owner",
              "q": "index=qms sourcetype=\"qms:control:event\" earliest=-400d\n| stats latest(last_review) as lr_epoch by control_id, owner\n| eval days_since=round((now()-lr_epoch)/86400,1)\n| where days_since > 365 OR isnull(lr_epoch)\n| sort - days_since\n| table control_id, owner, days_since",
              "m": "(1) Ingest QMS export nightly; (2) map owners to RACI; (3) alert >365d; (4) attach evidence artifact hash field when available.",
              "z": "Table (stale controls), Bar chart (days_since distribution), Single value (stale count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (2686), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=qms` CSV via DB Connect or `sourcetype=qms:control:event` — control_id, last_review, owner.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest QMS export nightly; (2) map owners to RACI; (3) alert >365d; (4) attach evidence artifact hash field when available.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=qms sourcetype=\"qms:control:event\" earliest=-400d\n| stats latest(last_review) as lr_epoch by control_id, owner\n| eval days_since=round((now()-lr_epoch)/86400,1)\n| where days_since > 365 OR isnull(lr_epoch)\n| sort - days_since\n| table control_id, owner, days_since\n```\n\nUnderstanding this SPL\n\n**Quality Management System Control Evidence Freshness (Art. 17) (EU AI Act)** — Monitors last_review date for each QMS control_id to ensure Art. 17 quality management system remains current.\n\nDocumented **Data sources**: `index=qms` CSV via DB Connect or `sourcetype=qms:control:event` — control_id, last_review, owner. **App/TA** (typical add-on context): Splunk DB Connect (2686), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: qms; **sourcetype**: qms:control:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=qms, sourcetype=\"qms:control:event\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by control_id, owner** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since > 365 OR isnull(lr_epoch)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Quality Management System Control Evidence Freshness (Art. 17) (EU AI Act)**): table control_id, owner, days_since\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (stale controls), Bar chart (days_since distribution), Single value (stale count)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for last_review date for each QMS control_id to ensure Art.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.16: Quality Management System Control Evidence Freshness (Art. 17).",
                  "ea": "Saved search 'UC-22.21.16' running on index=qms CSV via DB Connect or sourcetype=qms:control:event — control_id, last_review, owner, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.17",
              "n": "Technical Documentation Completeness Checklist (Art. 11/Annex IV) (EU AI Act)",
              "c": "critical",
              "f": "intermediate",
              "v": "Scores required documentation sections present per model_id for Annex IV technical documentation completeness.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=ai_docs` HEC — model_id, section_id, present",
              "q": "index=ai_docs sourcetype=\"ai:tdoc:status\" earliest=-7d\n| stats values(section_id) as sections, sum(present) as present_cnt by model_id\n| eval required=12\n| eval completeness=round(100*present_cnt/required,2)\n| where completeness < 100\n| sort + completeness",
              "m": "(1) Define canonical section_id list; (2) integrate CMS webhook; (3) block EU deploy if completeness < 100; (4) store PDF hashes externally.",
              "z": "Gauge-style single value (avg completeness), Table (missing sections), Bar chart (by model_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=ai_docs` HEC — model_id, section_id, present.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define canonical section_id list; (2) integrate CMS webhook; (3) block EU deploy if completeness < 100; (4) store PDF hashes externally.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_docs sourcetype=\"ai:tdoc:status\" earliest=-7d\n| stats values(section_id) as sections, sum(present) as present_cnt by model_id\n| eval required=12\n| eval completeness=round(100*present_cnt/required,2)\n| where completeness < 100\n| sort + completeness\n```\n\nUnderstanding this SPL\n\n**Technical Documentation Completeness Checklist (Art. 11/Annex IV) (EU AI Act)** — Scores required documentation sections present per model_id for Annex IV technical documentation completeness.\n\nDocumented **Data sources**: `index=ai_docs` HEC — model_id, section_id, present. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_docs; **sourcetype**: ai:tdoc:status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_docs, sourcetype=\"ai:tdoc:status\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by model_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **required** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **completeness** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where completeness < 100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Gauge-style single value (avg completeness), Table (missing sections), Bar chart (by model_id)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We scores required documentation sections present per model_id for Annex IV technical documentation completeness.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.17: Technical Documentation Completeness Checklist (Art. 11/Annex IV).",
                  "ea": "Saved search 'UC-22.21.17' running on index=ai_docs HEC — model_id, section_id, present, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.18",
              "n": "EU Database Registration Event Audit (Art. 49) (EU AI Act)",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks registration_id, status, and error codes from EU database API client logs for conformity registration evidence.",
              "t": "Splunk Add-on for AWS (optional), HTTP Event Collector",
              "d": "`index=ai_registry` `sourcetype=eu:ai:db:client` — http_status, registration_id, operation",
              "q": "index=ai_registry sourcetype=\"eu:ai:db:client\" earliest=-30d\n| stats count as calls, sum(eval(http_status>=400)) as errors by operation, registration_id\n| eval err_rate=round(100*errors/calls,2)\n| where err_rate>5 OR errors>0 AND calls<20\n| sort - err_rate",
              "m": "(1) Mask secrets in _raw; (2) alert on sustained 5xx; (3) reconcile with legal registry export weekly; (4) retain API idempotency keys.",
              "z": "Time chart (errors), Table (registration_id with errors), Single value (global err_rate)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for AWS (optional), HTTP Event Collector.\n• Ensure the following data sources are available: `index=ai_registry` `sourcetype=eu:ai:db:client` — http_status, registration_id, operation.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Mask secrets in _raw; (2) alert on sustained 5xx; (3) reconcile with legal registry export weekly; (4) retain API idempotency keys.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_registry sourcetype=\"eu:ai:db:client\" earliest=-30d\n| stats count as calls, sum(eval(http_status>=400)) as errors by operation, registration_id\n| eval err_rate=round(100*errors/calls,2)\n| where err_rate>5 OR errors>0 AND calls<20\n| sort - err_rate\n```\n\nUnderstanding this SPL\n\n**EU Database Registration Event Audit (Art. 49) (EU AI Act)** — Tracks registration_id, status, and error codes from EU database API client logs for conformity registration evidence.\n\nDocumented **Data sources**: `index=ai_registry` `sourcetype=eu:ai:db:client` — http_status, registration_id, operation. **App/TA** (typical add-on context): Splunk Add-on for AWS (optional), HTTP Event Collector. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_registry; **sourcetype**: eu:ai:db:client. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_registry, sourcetype=\"eu:ai:db:client\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by operation, registration_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_rate>5 OR errors>0 AND calls<20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (errors), Table (registration_id with errors), Single value (global err_rate)",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track registration_id, status, and error codes from EU database API client logs for conformity registration evidence.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.18: EU Database Registration Event Audit (Art. 49).",
                  "ea": "Saved search 'UC-22.21.18' running on index=ai_registry sourcetype=eu:ai:db:client — http_status, registration_id, operation, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.19",
              "n": "CE Marking Evidence — Test Report Coverage (Art. 48) (EU AI Act)",
              "c": "high",
              "f": "intermediate",
              "v": "Joins product_sku to test_report_id coverage from test lab feeds for CE marking evidentiary chain.",
              "t": "Splunk DB Connect (2686), Splunk ITSI (1841)",
              "d": "`index=product` `sourcetype=plm:sku` joined with `index=testlab` `sourcetype=lab:report`",
              "q": "index=plm sourcetype=\"plm:sku\" earliest=-1d\n| fields product_sku, model_family, hw_rev\n| join type=left product_sku [\n    search index=testlab sourcetype=\"lab:report\" earliest=-400d\n    | stats latest(report_id) as test_report_id by product_sku\n  ]\n| eval covered=if(isnotnull(test_report_id),1,0)\n| stats sum(covered) as ok, count as total\n| eval coverage_pct=round(100*ok/total,2)",
              "m": "(1) Normalize SKU keys; (2) nightly DB Connect sync; (3) alert coverage_pct < 100 for EU SKUs; (4) map to notified body references outside Splunk.",
              "z": "Single value (coverage_pct), Table (uncovered SKU), Pie chart (covered vs not)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (2686), Splunk ITSI (1841).\n• Ensure the following data sources are available: `index=product` `sourcetype=plm:sku` joined with `index=testlab` `sourcetype=lab:report`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize SKU keys; (2) nightly DB Connect sync; (3) alert coverage_pct < 100 for EU SKUs; (4) map to notified body references outside Splunk.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=plm sourcetype=\"plm:sku\" earliest=-1d\n| fields product_sku, model_family, hw_rev\n| join type=left product_sku [\n    search index=testlab sourcetype=\"lab:report\" earliest=-400d\n    | stats latest(report_id) as test_report_id by product_sku\n  ]\n| eval covered=if(isnotnull(test_report_id),1,0)\n| stats sum(covered) as ok, count as total\n| eval coverage_pct=round(100*ok/total,2)\n```\n\nUnderstanding this SPL\n\n**CE Marking Evidence — Test Report Coverage (Art. 48) (EU AI Act)** — Joins product_sku to test_report_id coverage from test lab feeds for CE marking evidentiary chain.\n\nDocumented **Data sources**: `index=product` `sourcetype=plm:sku` joined with `index=testlab` `sourcetype=lab:report`. **App/TA** (typical add-on context): Splunk DB Connect (2686), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: plm; **sourcetype**: plm:sku. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=plm, sourcetype=\"plm:sku\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **covered** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (coverage_pct), Table (uncovered SKU), Pie chart (covered vs not)",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We joins product_sku to test_report_id coverage from test lab feeds for CE marking evidentiary chain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.19: CE Marking Evidence — Test Report Coverage (Art. 48).",
                  "ea": "Saved search 'UC-22.21.19' running on index=product sourcetype=plm:sku joined with index=testlab sourcetype=lab:report, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.20",
              "n": "Risk Management System — Hazard Log Update SLA (Art. 9/Annex IV) (EU AI Act)",
              "c": "critical",
              "f": "intermediate",
              "v": "Measures time from new_hazard to hazard_reviewed in risk register stream for RMS operational effectiveness.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841)",
              "d": "`index=ai_risk` `sourcetype=rms:hazard` — hazard_id, status, transition",
              "q": "index=ai_risk sourcetype=\"rms:hazard\" earliest=-90d\n| eval opened=if(status=\"new\",_time,null())\n| eval reviewed=if(status=\"reviewed\",_time,null())\n| stats min(opened) as t_open, max(reviewed) as t_rev by hazard_id\n| eval review_hours=round((t_rev-t_open)/3600,2)\n| where isnull(t_rev) OR review_hours>168\n| sort - review_hours",
              "m": "(1) Ensure hazard_id stable; (2) escalate unreviewed >7d; (3) link to incident index; (4) quarterly RMS committee export.",
              "z": "Histogram (review_hours), Table (open hazards), Single value (mean review_hours)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841).\n• Ensure the following data sources are available: `index=ai_risk` `sourcetype=rms:hazard` — hazard_id, status, transition.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure hazard_id stable; (2) escalate unreviewed >7d; (3) link to incident index; (4) quarterly RMS committee export.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_risk sourcetype=\"rms:hazard\" earliest=-90d\n| eval opened=if(status=\"new\",_time,null())\n| eval reviewed=if(status=\"reviewed\",_time,null())\n| stats min(opened) as t_open, max(reviewed) as t_rev by hazard_id\n| eval review_hours=round((t_rev-t_open)/3600,2)\n| where isnull(t_rev) OR review_hours>168\n| sort - review_hours\n```\n\nUnderstanding this SPL\n\n**Risk Management System — Hazard Log Update SLA (Art. 9/Annex IV) (EU AI Act)** — Measures time from new_hazard to hazard_reviewed in risk register stream for RMS operational effectiveness.\n\nDocumented **Data sources**: `index=ai_risk` `sourcetype=rms:hazard` — hazard_id, status, transition. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_risk; **sourcetype**: rms:hazard. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_risk, sourcetype=\"rms:hazard\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **reviewed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by hazard_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **review_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(t_rev) OR review_hours>168` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram (review_hours), Table (open hazards), Single value (mean review_hours)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We measure time from new_hazard to hazard_reviewed in risk register stream for RMS operational effectiveness.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.20: Risk Management System — Hazard Log Update SLA (Art. 9/Annex IV).",
                  "ea": "Saved search 'UC-22.21.20' running on index=ai_risk sourcetype=rms:hazard — hazard_id, status, transition, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.21",
              "n": "Post-Market Surveillance Signal Detection (Art. 72) (EU AI Act)",
              "c": "high",
              "f": "advanced",
              "v": "Uses z-score on daily serious_complaint signals per model_id for early post-market surveillance escalation.",
              "t": "Splunk Machine Learning Toolkit (2890), Splunk ITSI (1841)",
              "d": "`index=ai_pms` `sourcetype=ai:complaint:serious` — model_id, region",
              "q": "index=ai_pms sourcetype=\"ai:complaint:serious\" earliest=-120d\n| timechart span=1d count by model_id\n| untable _time model_id count\n| eventstats avg(count) as mu, stdev(count) as sd by model_id\n| eval z=(count-mu)/nullif(sd,0)\n| where z>3\n| table _time, model_id, count, mu, z",
              "m": "(1) Define serious_complaint taxonomy; (2) dedupe by ticket_id; (3) alert z>3 for 2 consecutive days; (4) feed pharmacovigilance-style review where applicable.",
              "z": "Time chart (count with z overlay), Table (spikes), Single value (models in spike)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Machine Learning Toolkit](https://splunkbase.splunk.com/app/2890), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Machine Learning Toolkit (2890), Splunk ITSI (1841).\n• Ensure the following data sources are available: `index=ai_pms` `sourcetype=ai:complaint:serious` — model_id, region.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define serious_complaint taxonomy; (2) dedupe by ticket_id; (3) alert z>3 for 2 consecutive days; (4) feed pharmacovigilance-style review where applicable.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_pms sourcetype=\"ai:complaint:serious\" earliest=-120d\n| timechart span=1d count by model_id\n| untable _time model_id count\n| eventstats avg(count) as mu, stdev(count) as sd by model_id\n| eval z=(count-mu)/nullif(sd,0)\n| where z>3\n| table _time, model_id, count, mu, z\n```\n\nUnderstanding this SPL\n\n**Post-Market Surveillance Signal Detection (Art. 72) (EU AI Act)** — Uses z-score on daily serious_complaint signals per model_id for early post-market surveillance escalation.\n\nDocumented **Data sources**: `index=ai_pms` `sourcetype=ai:complaint:serious` — model_id, region. **App/TA** (typical add-on context): Splunk Machine Learning Toolkit (2890), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_pms; **sourcetype**: ai:complaint:serious. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_pms, sourcetype=\"ai:complaint:serious\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by model_id** — ideal for trending and alerting on this use case.\n• Pipeline stage (see **Post-Market Surveillance Signal Detection (Art. 72) (EU AI Act)**): untable _time model_id count\n• `eventstats` rolls up events into metrics; results are split **by model_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **z** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where z>3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Post-Market Surveillance Signal Detection (Art. 72) (EU AI Act)**): table _time, model_id, count, mu, z\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (count with z overlay), Table (spikes), Single value (models in spike)",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We uses z-score on daily serious_complaint signals per model_id for early post-market surveillance escalation.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.21: Post-Market Surveillance Signal Detection (Art. 72).",
                  "ea": "Saved search 'UC-22.21.21' running on index=ai_pms sourcetype=ai:complaint:serious — model_id, region, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.22",
              "n": "Serious Incident Report Pack Timeliness (Art. 73) (EU AI Act)",
              "c": "critical",
              "f": "expert",
              "v": "Tracks time from serious_incident_detected to regulator_packet_submitted for serious incident reporting workflows.",
              "t": "Splunk Enterprise Security (263), Splunk DB Connect (2686)",
              "d": "`index=ai_incident` — incident_id, milestone, _time",
              "q": "index=ai_incident sourcetype=\"ai:serious:incident\" earliest=-180d\n| eval m_detect=if(milestone=\"detected\",_time,null())\n| eval m_submit=if(milestone=\"regulator_submitted\",_time,null())\n| stats min(m_detect) as t0, min(m_submit) as t1 by incident_id\n| eval hours_to_submit=round((t1-t0)/3600,2)\n| where isnull(t1) OR hours_to_submit>72\n| table incident_id, t0, t1, hours_to_submit",
              "m": "(1) Legal defines milestones; (2) immutable append-only index ACL; (3) alert at 60h; (4) attach packet hash in external DMS.",
              "z": "Timeline (milestones), Table (late incidents), Single value (open past 72h)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=ai_incident` — incident_id, milestone, _time.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Legal defines milestones; (2) immutable append-only index ACL; (3) alert at 60h; (4) attach packet hash in external DMS.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_incident sourcetype=\"ai:serious:incident\" earliest=-180d\n| eval m_detect=if(milestone=\"detected\",_time,null())\n| eval m_submit=if(milestone=\"regulator_submitted\",_time,null())\n| stats min(m_detect) as t0, min(m_submit) as t1 by incident_id\n| eval hours_to_submit=round((t1-t0)/3600,2)\n| where isnull(t1) OR hours_to_submit>72\n| table incident_id, t0, t1, hours_to_submit\n```\n\nUnderstanding this SPL\n\n**Serious Incident Report Pack Timeliness (Art. 73) (EU AI Act)** — Tracks time from serious_incident_detected to regulator_packet_submitted for serious incident reporting workflows.\n\nDocumented **Data sources**: `index=ai_incident` — incident_id, milestone, _time. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_incident; **sourcetype**: ai:serious:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_incident, sourcetype=\"ai:serious:incident\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **m_detect** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **m_submit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by incident_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_to_submit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(t1) OR hours_to_submit>72` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Serious Incident Report Pack Timeliness (Art. 73) (EU AI Act)**): table incident_id, t0, t1, hours_to_submit\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (milestones), Table (late incidents), Single value (open past 72h)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We track time from serious_incident_detected to regulator_packet_submitted for serious incident reporting workflows.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.22: Serious Incident Report Pack Timeliness (Art. 73).",
                  "ea": "Saved search 'UC-22.21.22' running on index=ai_incident — incident_id, milestone, _time, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.23",
              "n": "Model Withdrawal / Recall Execution Audit (Art. 12/72) (EU AI Act)",
              "c": "critical",
              "f": "intermediate",
              "v": "Verifies recall_command acknowledged by each edge node within SLA for model withdrawal evidence.",
              "t": "Splunk ITSI (1841), Splunk Enterprise Security (263)",
              "d": "`index=ai_edge` `sourcetype=ai:recall:ack` — recall_id, node_id, ack_time",
              "q": "index=ai_edge sourcetype=\"ai:recall:ack\" earliest=-30d\n| stats min(ack_time) as first_ack, dc(node_id) as nodes_acked by recall_id\n| join type=left recall_id [\n    search index=ai_edge sourcetype=\"ai:recall:target\" earliest=-30d\n    | stats dc(node_id) as nodes_total by recall_id\n  ]\n| eval pct=round(100*nodes_acked/nodes_total,2)\n| where pct < 100\n| table recall_id, nodes_acked, nodes_total, pct, first_ack",
              "m": "(1) Maintain authoritative node inventory; (2) alert pct < 100 after 4h; (3) ITSI episode per recall_id; (4) store signed ack externally.",
              "z": "Bar chart (pct by recall_id), Table (lagging nodes), Single value (min pct)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (1841), Splunk Enterprise Security (263).\n• Ensure the following data sources are available: `index=ai_edge` `sourcetype=ai:recall:ack` — recall_id, node_id, ack_time.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain authoritative node inventory; (2) alert pct < 100 after 4h; (3) ITSI episode per recall_id; (4) store signed ack externally.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_edge sourcetype=\"ai:recall:ack\" earliest=-30d\n| stats min(ack_time) as first_ack, dc(node_id) as nodes_acked by recall_id\n| join type=left recall_id [\n    search index=ai_edge sourcetype=\"ai:recall:target\" earliest=-30d\n    | stats dc(node_id) as nodes_total by recall_id\n  ]\n| eval pct=round(100*nodes_acked/nodes_total,2)\n| where pct < 100\n| table recall_id, nodes_acked, nodes_total, pct, first_ack\n```\n\nUnderstanding this SPL\n\n**Model Withdrawal / Recall Execution Audit (Art. 12/72) (EU AI Act)** — Verifies recall_command acknowledged by each edge node within SLA for model withdrawal evidence.\n\nDocumented **Data sources**: `index=ai_edge` `sourcetype=ai:recall:ack` — recall_id, node_id, ack_time. **App/TA** (typical add-on context): Splunk ITSI (1841), Splunk Enterprise Security (263). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_edge; **sourcetype**: ai:recall:ack. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_edge, sourcetype=\"ai:recall:ack\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by recall_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pct < 100` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Model Withdrawal / Recall Execution Audit (Art. 12/72) (EU AI Act)**): table recall_id, nodes_acked, nodes_total, pct, first_ack\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (pct by recall_id), Table (lagging nodes), Single value (min pct)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We verifies recall_command acknowledged by each edge node within SLA for model withdrawal evidence.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.23: Model Withdrawal / Recall Execution Audit (Art. 12/72).",
                  "ea": "Saved search 'UC-22.21.23' running on index=ai_edge sourcetype=ai:recall:ack — recall_id, node_id, ack_time, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.24",
              "n": "User Feedback Collection Pipeline Health (Art. 72) (EU AI Act)",
              "c": "medium",
              "f": "beginner",
              "v": "Monitors volume and error rate of in-app feedback submissions for post-market user feedback collection integrity.",
              "t": "Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "`index=app` `sourcetype=app:feedback` — http_status, session_id",
              "q": "index=app sourcetype=\"app:feedback\" earliest=-7d\n| eval ok=if(http_status>=200 AND http_status<300,1,0)\n| timechart span=1h sum(ok) as ok_ct, count as total\n| eval ok_rate=round(100*ok_ct/total,2)\n| where ok_rate < 95",
              "m": "(1) Add HEC token per environment; (2) dedupe by feedback_id; (3) alert drops >20% vs baseline; (4) map to product owner.",
              "z": "Time chart (ok_rate), Single value (7d avg ok_rate), Table (hours below 95%)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=app` `sourcetype=app:feedback` — http_status, session_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Add HEC token per environment; (2) dedupe by feedback_id; (3) alert drops >20% vs baseline; (4) map to product owner.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"app:feedback\" earliest=-7d\n| eval ok=if(http_status>=200 AND http_status<300,1,0)\n| timechart span=1h sum(ok) as ok_ct, count as total\n| eval ok_rate=round(100*ok_ct/total,2)\n| where ok_rate < 95\n```\n\nUnderstanding this SPL\n\n**User Feedback Collection Pipeline Health (Art. 72) (EU AI Act)** — Monitors volume and error rate of in-app feedback submissions for post-market user feedback collection integrity.\n\nDocumented **Data sources**: `index=app` `sourcetype=app:feedback` — http_status, session_id. **App/TA** (typical add-on context): Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: app:feedback. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"app:feedback\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=1h** buckets — ideal for trending and alerting on this use case.\n• `eval` defines or adjusts **ok_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ok_rate < 95` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=1h | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**User Feedback Collection Pipeline Health (Art. 72) (EU AI Act)** — Monitors volume and error rate of in-app feedback submissions for post-market user feedback collection integrity.\n\nDocumented **Data sources**: `index=app` `sourcetype=app:feedback` — http_status, session_id. **App/TA** (typical add-on context): Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. 
Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (ok_rate), Single value (7d avg ok_rate), Table (hours below 95%)",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for volume and error rate of in-app feedback submissions for post-market user feedback collection integrity.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest span=1h | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.24: User Feedback Collection Pipeline Health (Art. 72).",
                  "ea": "Saved search 'UC-22.21.24' running on index=app sourcetype=app:feedback — http_status, session_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.21.25",
              "n": "Regulatory Authority Notification Delivery Confirmation (Art. 73) (EU AI Act)",
              "c": "critical",
              "f": "intermediate",
              "v": "Confirms secure notification channel message_id acknowledged by authority mailbox/API within policy window.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Office 365 (4055) or generic mail gateway logs",
              "d": "`index=mail` `sourcetype=o365:messaging` OR `sourcetype=sendgrid:event` — message_id, event, recipient_domain",
              "q": "index=mail (sourcetype=\"o365:messaging\" OR sourcetype=\"sendgrid:event\") earliest=-30d recipient_domain=\"*authority*\"\n| stats earliest(eval(if(event=\"queued\",_time,null()))) as t_q,\n        earliest(eval(if(event=\"delivered\",_time,null()))) as t_d,\n        earliest(eval(if(match(event,\"bounce|dropped\"),_time,null()))) as t_fail\n    by message_id\n| eval deliver_h=round((t_d-t_q)/3600,3)\n| where isnull(t_d) OR isnotnull(t_fail)\n| table message_id, t_q, t_d, t_fail, deliver_h",
              "m": "(1) Tag regulatory messages with dedicated header; (2) alert on bounce; (3) retain DKIM pass evidence; (4) legal owns recipient allowlist.",
              "z": "Table (failed deliveries), Time chart (deliver_h), Single value (undelivered count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Office 365 (4055) or generic mail gateway logs.\n• Ensure the following data sources are available: `index=mail` `sourcetype=o365:messaging` OR `sourcetype=sendgrid:event` — message_id, event, recipient_domain.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag regulatory messages with dedicated header; (2) alert on bounce; (3) retain DKIM pass evidence; (4) legal owns recipient allowlist.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=mail (sourcetype=\"o365:messaging\" OR sourcetype=\"sendgrid:event\") earliest=-30d recipient_domain=\"*authority*\"\n| stats earliest(eval(if(event=\"queued\",_time,null()))) as t_q,\n        earliest(eval(if(event=\"delivered\",_time,null()))) as t_d,\n        earliest(eval(if(match(event,\"bounce|dropped\"),_time,null()))) as t_fail\n    by message_id\n| eval deliver_h=round((t_d-t_q)/3600,3)\n| where isnull(t_d) OR isnotnull(t_fail)\n| table message_id, t_q, t_d, t_fail, deliver_h\n```\n\nUnderstanding this SPL\n\n**Regulatory Authority Notification Delivery Confirmation (Art. 73) (EU AI Act)** — Confirms secure notification channel message_id acknowledged by authority mailbox/API within policy window.\n\nDocumented **Data sources**: `index=mail` `sourcetype=o365:messaging` OR `sourcetype=sendgrid:event` — message_id, event, recipient_domain. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Office 365 (4055) or generic mail gateway logs. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: mail; **sourcetype**: o365:messaging, sendgrid:event. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=mail, sourcetype=\"o365:messaging\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by message_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **deliver_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(t_d) OR isnotnull(t_fail)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Regulatory Authority Notification Delivery Confirmation (Art. 73) (EU AI Act)**): table message_id, t_q, t_d, t_fail, deliver_h\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (failed deliveries), Time chart (deliver_h), Single value (undelivered count)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We confirms secure notification channel message_id acknowledged by authority mailbox/API within policy window.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AI Act"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AI Act",
                  "v": "Regulation (EU) 2024/1689",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AI Act Art.13 (Transparency and information) is enforced — Splunk UC-22.21.25: Regulatory Authority Notification Delivery Confirmation (Art. 73).",
                  "ea": "Saved search 'UC-22.21.25' running on index=mail sourcetype=o365:messaging OR sourcetype=sendgrid:event — message_id, event, recipient_domain, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 25,
            "none": 0
          }
        },
        {
          "i": "22.22",
          "n": "PSD2 / Payment Services",
          "u": [
            {
              "i": "22.22.1",
              "n": "SCA Challenge Rate Monitoring (RTS on SCA & CSC) (PSD2 RTS on SCA & CSC)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method",
              "q": "index=payments sourcetype=\"psd2:sca\" earliest=-24h\n| eval sca_dim=\"challenge\"\n| eval challenge_flag=if(challenge=\"true\" OR challenge=\"1\",1,0)\n| stats count as authn, sum(eval(outcome=\"success\")) as ok, sum(challenge_flag) as ch, dc(psu_id) as psu by aspsp_id, sca_dim\n| lookup psd2_sca_baseline.csv aspsp_id OUTPUT challenge_rate_baseline AS baseline\n| eval challenge_rate=round(100*ch/authn,3)\n| eval delta=round(challenge_rate-baseline,3)\n| eval uc_id=\"22.22.1\"\n| where authn>100 AND (delta>1 OR delta<-1)\n| table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (challenge_rate), Bar chart (delta by aspsp_id), Table (top outliers)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"psd2:sca\" earliest=-24h\n| eval sca_dim=\"challenge\"\n| eval challenge_flag=if(challenge=\"true\" OR challenge=\"1\",1,0)\n| stats count as authn, sum(eval(outcome=\"success\")) as ok, sum(challenge_flag) as ch, dc(psu_id) as psu by aspsp_id, sca_dim\n| lookup psd2_sca_baseline.csv aspsp_id OUTPUT challenge_rate_baseline AS baseline\n| eval challenge_rate=round(100*ch/authn,3)\n| eval delta=round(challenge_rate-baseline,3)\n| eval uc_id=\"22.22.1\"\n| where authn>100 AND (delta>1 OR delta<-1)\n| table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta\n```\n\nUnderstanding this SPL\n\n**SCA Challenge Rate Monitoring (RTS on SCA & CSC) (PSD2 RTS on SCA & CSC)** — Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.\n\nDocumented **Data sources**: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: psd2:sca. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"psd2:sca\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sca_dim** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **challenge_flag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, sca_dim** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **challenge_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where authn>100 AND (delta>1 OR delta<-1)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SCA Challenge Rate Monitoring (RTS on SCA & CSC) (PSD2 RTS on SCA & CSC)**): table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SCA Challenge Rate Monitoring (RTS on SCA & CSC) (PSD2 RTS on SCA & CSC)** — Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.\n\nDocumented **Data sources**: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (challenge_rate), Bar chart (delta by aspsp_id), Table (top outliers)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-22.22.1: SCA Challenge Rate Monitoring (RTS on SCA & CSC).",
                  "ea": "Saved search 'UC-22.22.1' running on index=payments sourcetype=psd2:sca — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.2",
              "n": "SCA Exemption Usage Tracking (RTS on SCA & CSC) (PSD2 RTS on SCA & CSC)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method",
              "q": "index=payments sourcetype=\"psd2:sca\" earliest=-24h\n| eval sca_dim=\"exemption\"\n| eval challenge_flag=if(challenge=\"true\" OR challenge=\"1\",1,0)\n| stats count as authn, sum(eval(outcome=\"success\")) as ok, sum(challenge_flag) as ch, dc(psu_id) as psu by aspsp_id, sca_dim\n| lookup psd2_sca_baseline.csv aspsp_id OUTPUT challenge_rate_baseline AS baseline\n| eval challenge_rate=round(100*ch/authn,3)\n| eval delta=round(challenge_rate-baseline,3)\n| eval uc_id=\"22.22.2\"\n| where authn>100 AND (delta>2 OR delta<-2)\n| table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (challenge_rate), Bar chart (delta by aspsp_id), Table (top outliers)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"psd2:sca\" earliest=-24h\n| eval sca_dim=\"exemption\"\n| eval challenge_flag=if(challenge=\"true\" OR challenge=\"1\",1,0)\n| stats count as authn, sum(eval(outcome=\"success\")) as ok, sum(challenge_flag) as ch, dc(psu_id) as psu by aspsp_id, sca_dim\n| lookup psd2_sca_baseline.csv aspsp_id OUTPUT challenge_rate_baseline AS baseline\n| eval challenge_rate=round(100*ch/authn,3)\n| eval delta=round(challenge_rate-baseline,3)\n| eval uc_id=\"22.22.2\"\n| where authn>100 AND (delta>2 OR delta<-2)\n| table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta\n```\n\nUnderstanding this SPL\n\n**SCA Exemption Usage Tracking (RTS on SCA & CSC) (PSD2 RTS on SCA & CSC)** — Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.\n\nDocumented **Data sources**: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: psd2:sca. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"psd2:sca\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sca_dim** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **challenge_flag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, sca_dim** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **challenge_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where authn>100 AND (delta>2 OR delta<-2)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SCA Exemption Usage Tracking (RTS on SCA & CSC) (PSD2 RTS on SCA & CSC)**): table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SCA Exemption Usage Tracking (RTS on SCA & CSC) (PSD2 RTS on SCA & CSC)** — Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.\n\nDocumented **Data sources**: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (challenge_rate), Bar chart (delta by aspsp_id), Table (top outliers)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-22.22.2: SCA Exemption Usage Tracking (RTS on SCA & CSC).",
                  "ea": "Saved search 'UC-22.22.2' running on index=payments sourcetype=psd2:sca — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.3",
              "n": "Dynamic Linking Verification for Payee/Amount (RTS Art. 5) (PSD2 RTS on SCA & CSC)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method",
              "q": "index=payments sourcetype=\"psd2:sca\" earliest=-24h\n| eval sca_dim=\"dynamic_link\"\n| eval challenge_flag=if(challenge=\"true\" OR challenge=\"1\",1,0)\n| stats count as authn, sum(eval(outcome=\"success\")) as ok, sum(challenge_flag) as ch, dc(psu_id) as psu by aspsp_id, sca_dim\n| lookup psd2_sca_baseline.csv aspsp_id OUTPUT challenge_rate_baseline AS baseline\n| eval challenge_rate=round(100*ch/authn,3)\n| eval delta=round(challenge_rate-baseline,3)\n| eval uc_id=\"22.22.3\"\n| where authn>100 AND (delta>3 OR delta<-3)\n| table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (challenge_rate), Bar chart (delta by aspsp_id), Table (top outliers)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"psd2:sca\" earliest=-24h\n| eval sca_dim=\"dynamic_link\"\n| eval challenge_flag=if(challenge=\"true\" OR challenge=\"1\",1,0)\n| stats count as authn, sum(eval(outcome=\"success\")) as ok, sum(challenge_flag) as ch, dc(psu_id) as psu by aspsp_id, sca_dim\n| lookup psd2_sca_baseline.csv aspsp_id OUTPUT challenge_rate_baseline AS baseline\n| eval challenge_rate=round(100*ch/authn,3)\n| eval delta=round(challenge_rate-baseline,3)\n| eval uc_id=\"22.22.3\"\n| where authn>100 AND (delta>3 OR delta<-3)\n| table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta\n```\n\nUnderstanding this SPL\n\n**Dynamic Linking Verification for Payee/Amount (RTS Art. 5) (PSD2 RTS on SCA & CSC)** — Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.\n\nDocumented **Data sources**: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: psd2:sca. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"psd2:sca\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sca_dim** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **challenge_flag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, sca_dim** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **challenge_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where authn>100 AND (delta>3 OR delta<-3)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Dynamic Linking Verification for Payee/Amount (RTS Art. 5) (PSD2 RTS on SCA & CSC)**): table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Dynamic Linking Verification for Payee/Amount (RTS Art. 5) (PSD2 RTS on SCA & CSC)** — Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.\n\nDocumented **Data sources**: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (challenge_rate), Bar chart (delta by aspsp_id), Table (top outliers)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-22.22.3: Dynamic Linking Verification for Payee/Amount (RTS Art. 5).",
                  "ea": "Saved search 'UC-22.22.3' running on index=payments sourcetype=psd2:sca — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.4",
              "n": "Authentication Factor Validation Failure Trends (PSD2 RTS on SCA & CSC)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method",
              "q": "index=payments sourcetype=\"psd2:sca\" earliest=-24h\n| eval sca_dim=\"factor_fail\"\n| eval challenge_flag=if(challenge=\"true\" OR challenge=\"1\",1,0)\n| stats count as authn, sum(eval(outcome=\"success\")) as ok, sum(challenge_flag) as ch, dc(psu_id) as psu by aspsp_id, sca_dim\n| lookup psd2_sca_baseline.csv aspsp_id OUTPUT challenge_rate_baseline AS baseline\n| eval challenge_rate=round(100*ch/authn,3)\n| eval delta=round(challenge_rate-baseline,3)\n| eval uc_id=\"22.22.4\"\n| where authn>100 AND (delta>4 OR delta<-4)\n| table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (challenge_rate), Bar chart (delta by aspsp_id), Table (top outliers)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"psd2:sca\" earliest=-24h\n| eval sca_dim=\"factor_fail\"\n| eval challenge_flag=if(challenge=\"true\" OR challenge=\"1\",1,0)\n| stats count as authn, sum(eval(outcome=\"success\")) as ok, sum(challenge_flag) as ch, dc(psu_id) as psu by aspsp_id, sca_dim\n| lookup psd2_sca_baseline.csv aspsp_id OUTPUT challenge_rate_baseline AS baseline\n| eval challenge_rate=round(100*ch/authn,3)\n| eval delta=round(challenge_rate-baseline,3)\n| eval uc_id=\"22.22.4\"\n| where authn>100 AND (delta>4 OR delta<-4)\n| table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta\n```\n\nUnderstanding this SPL\n\n**Authentication Factor Validation Failure Trends (PSD2 RTS on SCA & CSC)** — Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.\n\nDocumented **Data sources**: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: psd2:sca. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"psd2:sca\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sca_dim** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **challenge_flag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, sca_dim** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **challenge_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where authn>100 AND (delta>4 OR delta<-4)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Authentication Factor Validation Failure Trends (PSD2 RTS on SCA & CSC)**): table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Authentication Factor Validation Failure Trends (PSD2 RTS on SCA & CSC)** — Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.\n\nDocumented **Data sources**: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (challenge_rate), Bar chart (delta by aspsp_id), Table (top outliers)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-22.22.4: Authentication Factor Validation Failure Trends.",
                  "ea": "Saved search 'UC-22.22.4' running on index=payments sourcetype=psd2:sca — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.5",
              "n": "Biometric SCA Attempt and Liveness Failure Monitoring (PSD2 RTS on SCA & CSC)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method",
              "q": "index=payments sourcetype=\"psd2:sca\" earliest=-24h\n| eval sca_dim=\"biometric\"\n| eval challenge_flag=if(challenge=\"true\" OR challenge=\"1\",1,0)\n| stats count as authn, sum(eval(outcome=\"success\")) as ok, sum(challenge_flag) as ch, dc(psu_id) as psu by aspsp_id, sca_dim\n| lookup psd2_sca_baseline.csv aspsp_id OUTPUT challenge_rate_baseline AS baseline\n| eval challenge_rate=round(100*ch/authn,3)\n| eval delta=round(challenge_rate-baseline,3)\n| eval uc_id=\"22.22.5\"\n| where authn>100 AND (delta>5 OR delta<-5)\n| table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (challenge_rate), Bar chart (delta by aspsp_id), Table (top outliers)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"psd2:sca\" earliest=-24h\n| eval sca_dim=\"biometric\"\n| eval challenge_flag=if(challenge=\"true\" OR challenge=\"1\",1,0)\n| stats count as authn, sum(eval(outcome=\"success\")) as ok, sum(challenge_flag) as ch, dc(psu_id) as psu by aspsp_id, sca_dim\n| lookup psd2_sca_baseline.csv aspsp_id OUTPUT challenge_rate_baseline AS baseline\n| eval challenge_rate=round(100*ch/authn,3)\n| eval delta=round(challenge_rate-baseline,3)\n| eval uc_id=\"22.22.5\"\n| where authn>100 AND (delta>5 OR delta<-5)\n| table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta\n```\n\nUnderstanding this SPL\n\n**Biometric SCA Attempt and Liveness Failure Monitoring (PSD2 RTS on SCA & CSC)** — Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.\n\nDocumented **Data sources**: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: psd2:sca. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"psd2:sca\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sca_dim** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **challenge_flag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, sca_dim** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **challenge_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where authn>100 AND (delta>5 OR delta<-5)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Biometric SCA Attempt and Liveness Failure Monitoring (PSD2 RTS on SCA & CSC)**): table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Biometric SCA Attempt and Liveness Failure Monitoring (PSD2 RTS on SCA & CSC)** — Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.\n\nDocumented **Data sources**: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (challenge_rate), Bar chart (delta by aspsp_id), Table (top outliers)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-22.22.5: Biometric SCA Attempt and Liveness Failure Monitoring.",
                  "ea": "Saved search 'UC-22.22.5' running on index=payments sourcetype=psd2:sca — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.6",
              "n": "SCA Fallback Channel Abuse (SMS OTP / Voice OTP) (PSD2 RTS on SCA & CSC)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.",
              "t": "Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621)",
              "d": "`index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method",
              "q": "index=payments sourcetype=\"psd2:sca\" earliest=-24h\n| eval sca_dim=\"fallback\"\n| eval challenge_flag=if(challenge=\"true\" OR challenge=\"1\",1,0)\n| stats count as authn, sum(eval(outcome=\"success\")) as ok, sum(challenge_flag) as ch, dc(psu_id) as psu by aspsp_id, sca_dim\n| lookup psd2_sca_baseline.csv aspsp_id OUTPUT challenge_rate_baseline AS baseline\n| eval challenge_rate=round(100*ch/authn,3)\n| eval delta=round(challenge_rate-baseline,3)\n| eval uc_id=\"22.22.6\"\n| where authn>100 AND (delta>6 OR delta<-6)\n| table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (challenge_rate), Bar chart (delta by aspsp_id), Table (top outliers)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Common Information Model Add-on](https://splunkbase.splunk.com/app/1621), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"psd2:sca\" earliest=-24h\n| eval sca_dim=\"fallback\"\n| eval challenge_flag=if(challenge=\"true\" OR challenge=\"1\",1,0)\n| stats count as authn, sum(eval(outcome=\"success\")) as ok, sum(challenge_flag) as ch, dc(psu_id) as psu by aspsp_id, sca_dim\n| lookup psd2_sca_baseline.csv aspsp_id OUTPUT challenge_rate_baseline AS baseline\n| eval challenge_rate=round(100*ch/authn,3)\n| eval delta=round(challenge_rate-baseline,3)\n| eval uc_id=\"22.22.6\"\n| where authn>100 AND (delta>6 OR delta<-6)\n| table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta\n```\n\nUnderstanding this SPL\n\n**SCA Fallback Channel Abuse (SMS OTP / Voice OTP) (PSD2 RTS on SCA & CSC)** — Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.\n\nDocumented **Data sources**: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: psd2:sca. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"psd2:sca\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **sca_dim** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **challenge_flag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, sca_dim** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **challenge_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where authn>100 AND (delta>6 OR delta<-6)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SCA Fallback Channel Abuse (SMS OTP / Voice OTP) (PSD2 RTS on SCA & CSC)**): table uc_id, aspsp_id, sca_dim, authn, challenge_rate, baseline, delta\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SCA Fallback Channel Abuse (SMS OTP / Voice OTP) (PSD2 RTS on SCA & CSC)** — Monitors PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.\n\nDocumented **Data sources**: `index=payments` `sourcetype=psd2:sca` — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (challenge_rate), Bar chart (delta by aspsp_id), Table (top outliers)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for PSD2 strong customer authentication telemetry against documented baselines to evidence RTS compliance and proportionate exemption usage.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-22.22.6: SCA Fallback Channel Abuse (SMS OTP / Voice OTP).",
                  "ea": "Saved search 'UC-22.22.6' running on index=payments sourcetype=psd2:sca — aspsp_id, psu_id, challenge, sca_exemption, outcome, authentication_method, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.7",
              "n": "Transaction Fraud Detection — Score and Rule Hits (PSD2 / EBA fraud guidelines)",
              "c": "high",
              "f": "advanced",
              "v": "Supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841)",
              "d": "index=fraud `fdm:score` — user_id, amount_eur, score, outcome, merchant_id, corridor",
              "q": "index=fraud sourcetype=\"fdm:score\" earliest=-24h\n| eval psd2_fraud_tag=\"fraud\"\n| eval high_value=if(amount_eur>5000 OR amount>5000,1,0)\n| stats count as events, dc(user_id) as users, sum(high_value) as hv, median(eval(if(isnum(score),score,null()))) as med_score by psd2_fraud_tag, bin(_time,15m)\n| eval uc_id=\"22.22.7\"\n| where events > 55\n| sort - events\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Heatmap (events by time bin), Table (top corridors/merchants), Single value (hv count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841).\n• Ensure the following data sources are available: index=fraud `fdm:score` — user_id, amount_eur, score, outcome, merchant_id, corridor.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=fraud sourcetype=\"fdm:score\" earliest=-24h\n| eval psd2_fraud_tag=\"fraud\"\n| eval high_value=if(amount_eur>5000 OR amount>5000,1,0)\n| stats count as events, dc(user_id) as users, sum(high_value) as hv, median(eval(if(isnum(score),score,null()))) as med_score by psd2_fraud_tag, bin(_time,15m)\n| eval uc_id=\"22.22.7\"\n| where events > 55\n| sort - events\n| head 200\n```\n\nUnderstanding this SPL\n\n**Transaction Fraud Detection — Score and Rule Hits (PSD2 / EBA fraud guidelines)** — Supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.\n\nDocumented **Data sources**: index=fraud `fdm:score` — user_id, amount_eur, score, outcome, merchant_id, corridor. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: fraud; **sourcetype**: fdm:score. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=fraud, sourcetype=\"fdm:score\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **psd2_fraud_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **high_value** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by psd2_fraud_tag, bin(_time,15m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where events > 55` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (events by time bin), Table (top corridors/merchants), Single value (hv count)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-22.22.7: Transaction Fraud Detection — Score and Rule Hits.",
                  "ea": "Saved search 'UC-22.22.7' running on index=fraud fdm:score — user_id, amount_eur, score, outcome, merchant_id, corridor, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.8",
              "n": "Unauthorized Transaction Claims and Dispute Intake Velocity (PSD2 / EBA fraud guidelines)",
              "c": "critical",
              "f": "advanced",
              "v": "Supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841)",
              "d": "index=claims `dispute:unauth` — user_id, amount_eur, score, outcome, merchant_id, corridor",
              "q": "index=claims sourcetype=\"dispute:unauth\" earliest=-24h\n| eval psd2_fraud_tag=\"claims\"\n| eval high_value=if(amount_eur>5000 OR amount>5000,1,0)\n| stats count as events, dc(user_id) as users, sum(high_value) as hv, median(eval(if(isnum(score),score,null()))) as med_score by psd2_fraud_tag, bin(_time,15m)\n| eval uc_id=\"22.22.8\"\n| where events > 60\n| sort - events\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Heatmap (events by time bin), Table (top corridors/merchants), Single value (hv count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841).\n• Ensure the following data sources are available: index=claims `dispute:unauth` — user_id, amount_eur, score, outcome, merchant_id, corridor.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=claims sourcetype=\"dispute:unauth\" earliest=-24h\n| eval psd2_fraud_tag=\"claims\"\n| eval high_value=if(amount_eur>5000 OR amount>5000,1,0)\n| stats count as events, dc(user_id) as users, sum(high_value) as hv, median(eval(if(isnum(score),score,null()))) as med_score by psd2_fraud_tag, bin(_time,15m)\n| eval uc_id=\"22.22.8\"\n| where events > 60\n| sort - events\n| head 200\n```\n\nUnderstanding this SPL\n\n**Unauthorized Transaction Claims and Dispute Intake Velocity (PSD2 / EBA fraud guidelines)** — Supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.\n\nDocumented **Data sources**: index=claims `dispute:unauth` — user_id, amount_eur, score, outcome, merchant_id, corridor. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: claims; **sourcetype**: dispute:unauth. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=claims, sourcetype=\"dispute:unauth\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **psd2_fraud_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **high_value** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by psd2_fraud_tag, bin(_time,15m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where events > 60` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (events by time bin), Table (top corridors/merchants), Single value (hv count)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-22.22.8: Unauthorized Transaction Claims and Dispute Intake Velocity.",
                  "ea": "Saved search 'UC-22.22.8' running on index=claims dispute:unauth — user_id, amount_eur, score, outcome, merchant_id, corridor, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.9",
              "n": "Merchant Fraud Trending by MCC and Acquirer (PSD2 / EBA fraud guidelines)",
              "c": "high",
              "f": "advanced",
              "v": "Supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841)",
              "d": "index=acq `acquirer:txn` — user_id, amount_eur, score, outcome, merchant_id, corridor",
              "q": "index=acq sourcetype=\"acquirer:txn\" earliest=-24h\n| eval psd2_fraud_tag=\"merchant\"\n| eval high_value=if(amount_eur>5000 OR amount>5000,1,0)\n| stats count as events, dc(user_id) as users, sum(high_value) as hv, median(eval(if(isnum(score),score,null()))) as med_score by psd2_fraud_tag, bin(_time,15m)\n| eval uc_id=\"22.22.9\"\n| where events > 65\n| sort - events\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Heatmap (events by time bin), Table (top corridors/merchants), Single value (hv count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841).\n• Ensure the following data sources are available: index=acq `acquirer:txn` — user_id, amount_eur, score, outcome, merchant_id, corridor.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=acq sourcetype=\"acquirer:txn\" earliest=-24h\n| eval psd2_fraud_tag=\"merchant\"\n| eval high_value=if(amount_eur>5000 OR amount>5000,1,0)\n| stats count as events, dc(user_id) as users, sum(high_value) as hv, median(eval(if(isnum(score),score,null()))) as med_score by psd2_fraud_tag, bin(_time,15m)\n| eval uc_id=\"22.22.9\"\n| where events > 65\n| sort - events\n| head 200\n```\n\nUnderstanding this SPL\n\n**Merchant Fraud Trending by MCC and Acquirer (PSD2 / EBA fraud guidelines)** — Supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.\n\nDocumented **Data sources**: index=acq `acquirer:txn` — user_id, amount_eur, score, outcome, merchant_id, corridor. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: acq; **sourcetype**: acquirer:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=acq, sourcetype=\"acquirer:txn\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **psd2_fraud_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **high_value** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by psd2_fraud_tag, bin(_time,15m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where events > 65` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (events by time bin), Table (top corridors/merchants), Single value (hv count)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-22.22.9: Merchant Fraud Trending by MCC and Acquirer.",
                  "ea": "Saved search 'UC-22.22.9' running on index=acq acquirer:txn — user_id, amount_eur, score, outcome, merchant_id, corridor, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.10",
              "n": "Real-Time Fraud Scoring Latency and Engine Errors (PSD2 / EBA fraud guidelines)",
              "c": "high",
              "f": "advanced",
              "v": "Supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841)",
              "d": "index=fraud `fdm:engine` — user_id, amount_eur, score, outcome, merchant_id, corridor",
              "q": "index=fraud sourcetype=\"fdm:engine\" earliest=-24h\n| eval psd2_fraud_tag=\"rt\"\n| eval high_value=if(amount_eur>5000 OR amount>5000,1,0)\n| stats count as events, dc(user_id) as users, sum(high_value) as hv, median(eval(if(isnum(score),score,null()))) as med_score by psd2_fraud_tag, bin(_time,15m)\n| eval uc_id=\"22.22.10\"\n| where events > 70\n| sort - events\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Heatmap (events by time bin), Table (top corridors/merchants), Single value (hv count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841).\n• Ensure the following data sources are available: index=fraud `fdm:engine` — user_id, amount_eur, score, outcome, merchant_id, corridor.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=fraud sourcetype=\"fdm:engine\" earliest=-24h\n| eval psd2_fraud_tag=\"rt\"\n| eval high_value=if(amount_eur>5000 OR amount>5000,1,0)\n| stats count as events, dc(user_id) as users, sum(high_value) as hv, median(eval(if(isnum(score),score,null()))) as med_score by psd2_fraud_tag, bin(_time,15m)\n| eval uc_id=\"22.22.10\"\n| where events > 70\n| sort - events\n| head 200\n```\n\nUnderstanding this SPL\n\n**Real-Time Fraud Scoring Latency and Engine Errors (PSD2 / EBA fraud guidelines)** — Supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.\n\nDocumented **Data sources**: index=fraud `fdm:engine` — user_id, amount_eur, score, outcome, merchant_id, corridor. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: fraud; **sourcetype**: fdm:engine. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=fraud, sourcetype=\"fdm:engine\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **psd2_fraud_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **high_value** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by psd2_fraud_tag, bin(_time,15m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where events > 70` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (events by time bin), Table (top corridors/merchants), Single value (hv count)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-22.22.10: Real-Time Fraud Scoring Latency and Engine Errors.",
                  "ea": "Saved search 'UC-22.22.10' running on index=fraud fdm:engine — user_id, amount_eur, score, outcome, merchant_id, corridor, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.11",
              "n": "Cross-Border Payment Anomaly Detection (Corridor Concentration) (PSD2 / EBA fraud guidelines)",
              "c": "medium",
              "f": "advanced",
              "v": "Supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841)",
              "d": "index=swift `cbpr:mt103` — user_id, amount_eur, score, outcome, merchant_id, corridor",
              "q": "index=swift sourcetype=\"cbpr:mt103\" earliest=-24h\n| eval psd2_fraud_tag=\"xb\"\n| eval high_value=if(amount_eur>5000 OR amount>5000,1,0)\n| stats count as events, dc(user_id) as users, sum(high_value) as hv, median(eval(if(isnum(score),score,null()))) as med_score by psd2_fraud_tag, bin(_time,15m)\n| eval uc_id=\"22.22.11\"\n| where events > 75\n| sort - events\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Heatmap (events by time bin), Table (top corridors/merchants), Single value (hv count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841).\n• Ensure the following data sources are available: index=swift `cbpr:mt103` — user_id, amount_eur, score, outcome, merchant_id, corridor.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=swift sourcetype=\"cbpr:mt103\" earliest=-24h\n| eval psd2_fraud_tag=\"xb\"\n| eval high_value=if(amount_eur>5000 OR amount>5000,1,0)\n| stats count as events, dc(user_id) as users, sum(high_value) as hv, median(eval(if(isnum(score),score,null()))) as med_score by psd2_fraud_tag, bin(_time,15m)\n| eval uc_id=\"22.22.11\"\n| where events > 75\n| sort - events\n| head 200\n```\n\nUnderstanding this SPL\n\n**Cross-Border Payment Anomaly Detection (Corridor Concentration) (PSD2 / EBA fraud guidelines)** — Supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.\n\nDocumented **Data sources**: index=swift `cbpr:mt103` — user_id, amount_eur, score, outcome, merchant_id, corridor. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: swift; **sourcetype**: cbpr:mt103. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=swift, sourcetype=\"cbpr:mt103\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **psd2_fraud_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **high_value** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by psd2_fraud_tag, bin(_time,15m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where events > 75` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (events by time bin), Table (top corridors/merchants), Single value (hv count)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-22.22.11: Cross-Border Payment Anomaly Detection (Corridor Concentration).",
                  "ea": "Saved search 'UC-22.22.11' running on index=swift cbpr:mt103 — user_id, amount_eur, score, outcome, merchant_id, corridor, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.12",
              "n": "Account Takeover Signals Linked to Payment Instrument Changes (PSD2 / EBA fraud guidelines)",
              "c": "critical",
              "f": "advanced",
              "v": "Supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841)",
              "d": "index=auth `login:risk` — user_id, amount_eur, score, outcome, merchant_id, corridor",
              "q": "index=auth sourcetype=\"login:risk\" earliest=-24h\n| eval psd2_fraud_tag=\"ato\"\n| eval high_value=if(amount_eur>5000 OR amount>5000,1,0)\n| stats count as events, dc(user_id) as users, sum(high_value) as hv, median(eval(if(isnum(score),score,null()))) as med_score by psd2_fraud_tag, bin(_time,15m)\n| eval uc_id=\"22.22.12\"\n| where events > 80\n| sort - events\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Heatmap (events by time bin), Table (top corridors/merchants), Single value (hv count)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841).\n• Ensure the following data sources are available: index=auth `login:risk` — user_id, amount_eur, score, outcome, merchant_id, corridor.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=auth sourcetype=\"login:risk\" earliest=-24h\n| eval psd2_fraud_tag=\"ato\"\n| eval high_value=if(amount_eur>5000 OR amount>5000,1,0)\n| stats count as events, dc(user_id) as users, sum(high_value) as hv, median(eval(if(isnum(score),score,null()))) as med_score by psd2_fraud_tag, bin(_time,15m)\n| eval uc_id=\"22.22.12\"\n| where events > 80\n| sort - events\n| head 200\n```\n\nUnderstanding this SPL\n\n**Account Takeover Signals Linked to Payment Instrument Changes (PSD2 / EBA fraud guidelines)** — Supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.\n\nDocumented **Data sources**: index=auth `login:risk` — user_id, amount_eur, score, outcome, merchant_id, corridor. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: auth; **sourcetype**: login:risk. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=auth, sourcetype=\"login:risk\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **psd2_fraud_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **high_value** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by psd2_fraud_tag, bin(_time,15m)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where events > 80` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (events by time bin), Table (top corridors/merchants), Single value (hv count)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports PSD2 security objectives and EBA fraud monitoring expectations by correlating payment, authentication, and fraud-engine telemetry for typology coverage.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-22.22.12: Account Takeover Signals Linked to Payment Instrument Changes.",
                  "ea": "Saved search 'UC-22.22.12' running on index=auth login:risk — user_id, amount_eur, score, outcome, merchant_id, corridor, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.13",
              "n": "TPP Access Monitoring for AIS/PIS Traffic (RTS APIs) (PSD2 / RTS on SCA & CSC)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876)",
              "d": "`index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out",
              "q": "index=openbanking sourcetype=\"ob:api:access\" earliest=-24h\n| eval api_tag=\"tpp\"\n| eval is_prod=if(match(environment,\"(?i)prod\"),1,0)\n| eval scope_mismatch=if(isnotnull(consent_scope) AND isnotnull(accessed_scope) AND consent_scope!=accessed_scope,1,0)\n| stats count as calls, sum(eval(http_status>=400)) as err, sum(scope_mismatch) as sm, dc(tpp_id) as tpp by aspsp_id, api_tag, is_prod\n| eval uc_id=\"22.22.13\"\n| eval err_rate=round(100*err/calls,2)\n| where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)\n| sort - err_rate",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (err_rate), Bar chart (calls by tpp_id), Table (scope_mismatch samples)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876).\n• Ensure the following data sources are available: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openbanking sourcetype=\"ob:api:access\" earliest=-24h\n| eval api_tag=\"tpp\"\n| eval is_prod=if(match(environment,\"(?i)prod\"),1,0)\n| eval scope_mismatch=if(isnotnull(consent_scope) AND isnotnull(accessed_scope) AND consent_scope!=accessed_scope,1,0)\n| stats count as calls, sum(eval(http_status>=400)) as err, sum(scope_mismatch) as sm, dc(tpp_id) as tpp by aspsp_id, api_tag, is_prod\n| eval uc_id=\"22.22.13\"\n| eval err_rate=round(100*err/calls,2)\n| where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)\n| sort - err_rate\n```\n\nUnderstanding this SPL\n\n**TPP Access Monitoring for AIS/PIS Traffic (RTS APIs) (PSD2 / RTS on SCA & CSC)** — Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.\n\nDocumented **Data sources**: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openbanking; **sourcetype**: ob:api:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openbanking, sourcetype=\"ob:api:access\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **api_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_prod** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **scope_mismatch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, api_tag, is_prod** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**TPP Access Monitoring for AIS/PIS Traffic (RTS APIs) (PSD2 / RTS on SCA & CSC)** — Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.\n\nDocumented **Data sources**: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (err_rate), Bar chart (calls by tpp_id), Table (scope_mismatch samples)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.95",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.95 (Management of operational and security risks) is enforced — Splunk UC-22.22.13: TPP Access Monitoring for AIS/PIS Traffic (RTS APIs).",
                  "ea": "Saved search 'UC-22.22.13' running on index=openbanking — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.14",
              "n": "Open Banking API Rate Limiting and Throttle Breaches (PSD2 / RTS on SCA & CSC)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876)",
              "d": "`index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out",
              "q": "index=openbanking sourcetype=\"ob:api:access\" earliest=-24h\n| eval api_tag=\"rate\"\n| eval is_prod=if(match(environment,\"(?i)prod\"),1,0)\n| eval scope_mismatch=if(isnotnull(consent_scope) AND isnotnull(accessed_scope) AND consent_scope!=accessed_scope,1,0)\n| stats count as calls, sum(eval(http_status>=400)) as err, sum(scope_mismatch) as sm, dc(tpp_id) as tpp by aspsp_id, api_tag, is_prod\n| eval uc_id=\"22.22.14\"\n| eval err_rate=round(100*err/calls,2)\n| where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)\n| sort - err_rate",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (err_rate), Bar chart (calls by tpp_id), Table (scope_mismatch samples)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876).\n• Ensure the following data sources are available: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openbanking sourcetype=\"ob:api:access\" earliest=-24h\n| eval api_tag=\"rate\"\n| eval is_prod=if(match(environment,\"(?i)prod\"),1,0)\n| eval scope_mismatch=if(isnotnull(consent_scope) AND isnotnull(accessed_scope) AND consent_scope!=accessed_scope,1,0)\n| stats count as calls, sum(eval(http_status>=400)) as err, sum(scope_mismatch) as sm, dc(tpp_id) as tpp by aspsp_id, api_tag, is_prod\n| eval uc_id=\"22.22.14\"\n| eval err_rate=round(100*err/calls,2)\n| where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)\n| sort - err_rate\n```\n\nUnderstanding this SPL\n\n**Open Banking API Rate Limiting and Throttle Breaches (PSD2 / RTS on SCA & CSC)** — Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.\n\nDocumented **Data sources**: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openbanking; **sourcetype**: ob:api:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openbanking, sourcetype=\"ob:api:access\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **api_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_prod** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **scope_mismatch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, api_tag, is_prod** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Open Banking API Rate Limiting and Throttle Breaches (PSD2 / RTS on SCA & CSC)** — Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.\n\nDocumented **Data sources**: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (err_rate), Bar chart (calls by tpp_id), Table (scope_mismatch samples)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.95",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.95 (Management of operational and security risks) is enforced — Splunk UC-22.22.14: Open Banking API Rate Limiting and Throttle Breaches.",
                  "ea": "Saved search 'UC-22.22.14' running on index=openbanking — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.15",
              "n": "Consent Record Alignment to Accessed Account Scopes (PSD2)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876)",
              "d": "`index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out",
              "q": "index=openbanking sourcetype=\"ob:consent:event\" earliest=-24h\n| eval api_tag=\"consent\"\n| eval is_prod=if(match(environment,\"(?i)prod\"),1,0)\n| eval scope_mismatch=if(isnotnull(consent_scope) AND isnotnull(accessed_scope) AND consent_scope!=accessed_scope,1,0)\n| stats count as calls, sum(eval(http_status>=400)) as err, sum(scope_mismatch) as sm, dc(tpp_id) as tpp by aspsp_id, api_tag, is_prod\n| eval uc_id=\"22.22.15\"\n| eval err_rate=round(100*err/calls,2)\n| where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)\n| sort - err_rate",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (err_rate), Bar chart (calls by tpp_id), Table (scope_mismatch samples)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876).\n• Ensure the following data sources are available: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openbanking sourcetype=\"ob:consent:event\" earliest=-24h\n| eval api_tag=\"consent\"\n| eval is_prod=if(match(environment,\"(?i)prod\"),1,0)\n| eval scope_mismatch=if(isnotnull(consent_scope) AND isnotnull(accessed_scope) AND consent_scope!=accessed_scope,1,0)\n| stats count as calls, sum(eval(http_status>=400)) as err, sum(scope_mismatch) as sm, dc(tpp_id) as tpp by aspsp_id, api_tag, is_prod\n| eval uc_id=\"22.22.15\"\n| eval err_rate=round(100*err/calls,2)\n| where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)\n| sort - err_rate\n```\n\nUnderstanding this SPL\n\n**Consent Record Alignment to Accessed Account Scopes (PSD2)** — Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.\n\nDocumented **Data sources**: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openbanking; **sourcetype**: ob:consent:event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openbanking, sourcetype=\"ob:consent:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **api_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_prod** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **scope_mismatch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, api_tag, is_prod** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Consent Record Alignment to Accessed Account Scopes (PSD2)** — Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.\n\nDocumented **Data sources**: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (err_rate), Bar chart (calls by tpp_id), Table (scope_mismatch samples)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR",
                "PSD2"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.95",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.95 (Management of operational and security risks) is enforced — Splunk UC-22.22.15: Consent Record Alignment to Accessed Account Scopes.",
                  "ea": "Saved search 'UC-22.22.15' running on index=openbanking — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.16",
              "n": "Data Minimization Checks for TPP Response Payload Sizes (PSD2)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876)",
              "d": "`index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out",
              "q": "index=openbanking sourcetype=\"ob:api:access\" earliest=-24h\n| eval api_tag=\"min\"\n| eval is_prod=if(match(environment,\"(?i)prod\"),1,0)\n| eval scope_mismatch=if(isnotnull(consent_scope) AND isnotnull(accessed_scope) AND consent_scope!=accessed_scope,1,0)\n| stats count as calls, sum(eval(http_status>=400)) as err, sum(scope_mismatch) as sm, dc(tpp_id) as tpp by aspsp_id, api_tag, is_prod\n| eval uc_id=\"22.22.16\"\n| eval err_rate=round(100*err/calls,2)\n| where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)\n| sort - err_rate",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (err_rate), Bar chart (calls by tpp_id), Table (scope_mismatch samples)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876).\n• Ensure the following data sources are available: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openbanking sourcetype=\"ob:api:access\" earliest=-24h\n| eval api_tag=\"min\"\n| eval is_prod=if(match(environment,\"(?i)prod\"),1,0)\n| eval scope_mismatch=if(isnotnull(consent_scope) AND isnotnull(accessed_scope) AND consent_scope!=accessed_scope,1,0)\n| stats count as calls, sum(eval(http_status>=400)) as err, sum(scope_mismatch) as sm, dc(tpp_id) as tpp by aspsp_id, api_tag, is_prod\n| eval uc_id=\"22.22.16\"\n| eval err_rate=round(100*err/calls,2)\n| where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)\n| sort - err_rate\n```\n\nUnderstanding this SPL\n\n**Data Minimization Checks for TPP Response Payload Sizes (PSD2)** — Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.\n\nDocumented **Data sources**: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openbanking; **sourcetype**: ob:api:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openbanking, sourcetype=\"ob:api:access\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **api_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_prod** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **scope_mismatch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, api_tag, is_prod** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Data Minimization Checks for TPP Response Payload Sizes (PSD2)** — Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.\n\nDocumented **Data sources**: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (err_rate), Bar chart (calls by tpp_id), Table (scope_mismatch samples)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.95",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.95 (Management of operational and security risks) is enforced — Splunk UC-22.22.16: Data Minimization Checks for TPP Response Payload Sizes.",
                  "ea": "Saved search 'UC-22.22.16' running on index=openbanking — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.17",
              "n": "Sandbox vs Production Traffic Separation and Leakage (PSD2)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876)",
              "d": "`index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out",
              "q": "index=openbanking sourcetype=\"ob:api:access\" earliest=-24h\n| eval api_tag=\"env\"\n| eval is_prod=if(match(environment,\"(?i)prod\"),1,0)\n| eval scope_mismatch=if(isnotnull(consent_scope) AND isnotnull(accessed_scope) AND consent_scope!=accessed_scope,1,0)\n| stats count as calls, sum(eval(http_status>=400)) as err, sum(scope_mismatch) as sm, dc(tpp_id) as tpp by aspsp_id, api_tag, is_prod\n| eval uc_id=\"22.22.17\"\n| eval err_rate=round(100*err/calls,2)\n| where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)\n| sort - err_rate",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (err_rate), Bar chart (calls by tpp_id), Table (scope_mismatch samples)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876).\n• Ensure the following data sources are available: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openbanking sourcetype=\"ob:api:access\" earliest=-24h\n| eval api_tag=\"env\"\n| eval is_prod=if(match(environment,\"(?i)prod\"),1,0)\n| eval scope_mismatch=if(isnotnull(consent_scope) AND isnotnull(accessed_scope) AND consent_scope!=accessed_scope,1,0)\n| stats count as calls, sum(eval(http_status>=400)) as err, sum(scope_mismatch) as sm, dc(tpp_id) as tpp by aspsp_id, api_tag, is_prod\n| eval uc_id=\"22.22.17\"\n| eval err_rate=round(100*err/calls,2)\n| where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)\n| sort - err_rate\n```\n\nUnderstanding this SPL\n\n**Sandbox vs Production Traffic Separation and Leakage (PSD2)** — Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.\n\nDocumented **Data sources**: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openbanking; **sourcetype**: ob:api:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openbanking, sourcetype=\"ob:api:access\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **api_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_prod** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **scope_mismatch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, api_tag, is_prod** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Sandbox vs Production Traffic Separation and Leakage (PSD2)** — Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.\n\nDocumented **Data sources**: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (err_rate), Bar chart (calls by tpp_id), Table (scope_mismatch samples)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.95",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.95 (Management of operational and security risks) is enforced — Splunk UC-22.22.17: Sandbox vs Production Traffic Separation and Leakage.",
                  "ea": "Saved search 'UC-22.22.17' running on index=openbanking — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.18",
              "n": "Mutual TLS and OAuth Client Certificate Authentication Audits (PSD2 / EBA guidelines)",
              "c": "high",
              "f": "intermediate",
              "v": "Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for AWS (1876)",
              "d": "`index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out",
              "q": "index=openbanking sourcetype=\"ob:api:access\" earliest=-24h\n| eval api_tag=\"mtls\"\n| eval is_prod=if(match(environment,\"(?i)prod\"),1,0)\n| eval scope_mismatch=if(isnotnull(consent_scope) AND isnotnull(accessed_scope) AND consent_scope!=accessed_scope,1,0)\n| stats count as calls, sum(eval(http_status>=400)) as err, sum(scope_mismatch) as sm, dc(tpp_id) as tpp by aspsp_id, api_tag, is_prod\n| eval uc_id=\"22.22.18\"\n| eval err_rate=round(100*err/calls,2)\n| where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)\n| sort - err_rate",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (err_rate), Bar chart (calls by tpp_id), Table (scope_mismatch samples)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Web](https://docs.splunk.com/Documentation/CIM/latest/User/Web)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for AWS (1876).\n• Ensure the following data sources are available: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=openbanking sourcetype=\"ob:api:access\" earliest=-24h\n| eval api_tag=\"mtls\"\n| eval is_prod=if(match(environment,\"(?i)prod\"),1,0)\n| eval scope_mismatch=if(isnotnull(consent_scope) AND isnotnull(accessed_scope) AND consent_scope!=accessed_scope,1,0)\n| stats count as calls, sum(eval(http_status>=400)) as err, sum(scope_mismatch) as sm, dc(tpp_id) as tpp by aspsp_id, api_tag, is_prod\n| eval uc_id=\"22.22.18\"\n| eval err_rate=round(100*err/calls,2)\n| where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)\n| sort - err_rate\n```\n\nUnderstanding this SPL\n\n**Mutual TLS and OAuth Client Certificate Authentication Audits (PSD2 / EBA guidelines)** — Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.\n\nDocumented **Data sources**: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: openbanking; **sourcetype**: ob:api:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=openbanking, sourcetype=\"ob:api:access\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **api_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_prod** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **scope_mismatch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, api_tag, is_prod** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **err_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where err_rate>2 OR sm>0 OR (api_tag=\"env\" AND is_prod=0 AND tpp>50)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Mutual TLS and OAuth Client Certificate Authentication Audits (PSD2 / EBA guidelines)** — Monitors third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.\n\nDocumented **Data sources**: `index=openbanking` — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for AWS (1876). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Web.Web` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (err_rate), Bar chart (calls by tpp_id), Table (scope_mismatch samples)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for third-party provider access patterns, consent alignment, and API security controls required for PSD2 open banking operational resilience.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "Web"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Web.Web by Web.status, Web.http_method, Web.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.97",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.97 (Strong customer authentication) is enforced — Splunk UC-22.22.18: Mutual TLS and OAuth Client Certificate Authentication Audits.",
                  "ea": "Saved search 'UC-22.22.18' running on index=openbanking — tpp_id, consent_id, consent_scope, accessed_scope, http_status, environment, bytes_out, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.19",
              "n": "Payment Processing Integrity — Duplicate Authorization IDs (PSD2)",
              "c": "high",
              "f": "intermediate",
              "v": "Provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.",
              "t": "Splunk ITSI (1841), Splunk DB Connect (2686)",
              "d": "`index=payments` `paygw:txn` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag",
              "q": "index=payments sourcetype=\"paygw:txn\" earliest=-24h\n| eval txn_tag=\"dup\"\n| eval eur_equiv=coalesce(amount_eur, amount*fx_rate)\n| stats count as txns, dc(authorization_id) as auth_dc, sum(eval(eur_equiv>100000)) as very_high by merchant_id, txn_tag, bin(_time,1h)\n| eval dup_suspect=if(txns>auth_dc AND txn_tag=\"dup\", txns-auth_dc, 0)\n| eval uc_id=\"22.22.19\"\n| where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"\n| sort - very_high\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Column chart (very_high), Table (merchant_id), Single value (dup_suspect)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (1841), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=payments` `paygw:txn` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"paygw:txn\" earliest=-24h\n| eval txn_tag=\"dup\"\n| eval eur_equiv=coalesce(amount_eur, amount*fx_rate)\n| stats count as txns, dc(authorization_id) as auth_dc, sum(eval(eur_equiv>100000)) as very_high by merchant_id, txn_tag, bin(_time,1h)\n| eval dup_suspect=if(txns>auth_dc AND txn_tag=\"dup\", txns-auth_dc, 0)\n| eval uc_id=\"22.22.19\"\n| where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"\n| sort - very_high\n| head 200\n```\n\nUnderstanding this SPL\n\n**Payment Processing Integrity — Duplicate Authorization IDs (PSD2)** — Provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.\n\nDocumented **Data sources**: `index=payments` `paygw:txn` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag. **App/TA** (typical add-on context): Splunk ITSI (1841), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: paygw:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"paygw:txn\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **txn_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **eur_equiv** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by merchant_id, txn_tag, bin(_time,1h)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dup_suspect** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (very_high), Table (merchant_id), Single value (dup_suspect)",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.95",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.95 (Management of operational and security risks) is enforced — Splunk UC-22.22.19: Payment Processing Integrity — Duplicate Authorization IDs.",
                  "ea": "Saved search 'UC-22.22.19' running on index=payments paygw:txn — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.20",
              "n": "Settlement Reconciliation Exceptions (Acquirer vs Issuer) (PSD2)",
              "c": "high",
              "f": "intermediate",
              "v": "Provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.",
              "t": "Splunk ITSI (1841), Splunk DB Connect (2686)",
              "d": "`index=payments` `pay:settlement` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag",
              "q": "index=payments sourcetype=\"pay:settlement\" earliest=-24h\n| eval txn_tag=\"settle\"\n| eval eur_equiv=coalesce(amount_eur, amount*fx_rate)\n| stats count as txns, dc(authorization_id) as auth_dc, sum(eval(eur_equiv>100000)) as very_high by merchant_id, txn_tag, bin(_time,1h)\n| eval dup_suspect=if(txns>auth_dc AND txn_tag=\"dup\", txns-auth_dc, 0)\n| eval uc_id=\"22.22.20\"\n| where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"\n| sort - very_high\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Column chart (very_high), Table (merchant_id), Single value (dup_suspect)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (1841), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=payments` `pay:settlement` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"pay:settlement\" earliest=-24h\n| eval txn_tag=\"settle\"\n| eval eur_equiv=coalesce(amount_eur, amount*fx_rate)\n| stats count as txns, dc(authorization_id) as auth_dc, sum(eval(eur_equiv>100000)) as very_high by merchant_id, txn_tag, bin(_time,1h)\n| eval dup_suspect=if(txns>auth_dc AND txn_tag=\"dup\", txns-auth_dc, 0)\n| eval uc_id=\"22.22.20\"\n| where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"\n| sort - very_high\n| head 200\n```\n\nUnderstanding this SPL\n\n**Settlement Reconciliation Exceptions (Acquirer vs Issuer) (PSD2)** — Provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.\n\nDocumented **Data sources**: `index=payments` `pay:settlement` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag. **App/TA** (typical add-on context): Splunk ITSI (1841), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: pay:settlement. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"pay:settlement\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **txn_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **eur_equiv** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by merchant_id, txn_tag, bin(_time,1h)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dup_suspect** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (very_high), Table (merchant_id), Single value (dup_suspect)",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.95",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.95 (Management of operational and security risks) is enforced — Splunk UC-22.22.20: Settlement Reconciliation Exceptions (Acquirer vs Issuer).",
                  "ea": "Saved search 'UC-22.22.20' running on index=payments pay:settlement — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.21",
              "n": "High-Value Transfer Monitoring vs Internal Policy Limits (PSD2)",
              "c": "high",
              "f": "intermediate",
              "v": "Provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.",
              "t": "Splunk ITSI (1841), Splunk DB Connect (2686)",
              "d": "`index=payments` `paygw:txn` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag",
              "q": "index=payments sourcetype=\"paygw:txn\" earliest=-24h\n| eval txn_tag=\"hv\"\n| eval eur_equiv=coalesce(amount_eur, amount*fx_rate)\n| stats count as txns, dc(authorization_id) as auth_dc, sum(eval(eur_equiv>100000)) as very_high by merchant_id, txn_tag, bin(_time,1h)\n| eval dup_suspect=if(txns>auth_dc AND txn_tag=\"dup\", txns-auth_dc, 0)\n| eval uc_id=\"22.22.21\"\n| where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"\n| sort - very_high\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Column chart (very_high), Table (merchant_id), Single value (dup_suspect)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (1841), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=payments` `paygw:txn` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"paygw:txn\" earliest=-24h\n| eval txn_tag=\"hv\"\n| eval eur_equiv=coalesce(amount_eur, amount*fx_rate)\n| stats count as txns, dc(authorization_id) as auth_dc, sum(eval(eur_equiv>100000)) as very_high by merchant_id, txn_tag, bin(_time,1h)\n| eval dup_suspect=if(txns>auth_dc AND txn_tag=\"dup\", txns-auth_dc, 0)\n| eval uc_id=\"22.22.21\"\n| where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"\n| sort - very_high\n| head 200\n```\n\nUnderstanding this SPL\n\n**High-Value Transfer Monitoring vs Internal Policy Limits (PSD2)** — Provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.\n\nDocumented **Data sources**: `index=payments` `paygw:txn` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag. **App/TA** (typical add-on context): Splunk ITSI (1841), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: paygw:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"paygw:txn\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **txn_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **eur_equiv** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by merchant_id, txn_tag, bin(_time,1h)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dup_suspect** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (very_high), Table (merchant_id), Single value (dup_suspect)",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.95",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.95 (Management of operational and security risks) is enforced — Splunk UC-22.22.21: High-Value Transfer Monitoring vs Internal Policy Limits.",
                  "ea": "Saved search 'UC-22.22.21' running on index=payments paygw:txn — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.22",
              "n": "Refund and Reversal Spike Detection by Merchant (PSD2)",
              "c": "high",
              "f": "intermediate",
              "v": "Provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.",
              "t": "Splunk ITSI (1841), Splunk DB Connect (2686)",
              "d": "`index=payments` `paygw:txn` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag",
              "q": "index=payments sourcetype=\"paygw:txn\" earliest=-24h\n| eval txn_tag=\"refund\"\n| eval eur_equiv=coalesce(amount_eur, amount*fx_rate)\n| stats count as txns, dc(authorization_id) as auth_dc, sum(eval(eur_equiv>100000)) as very_high by merchant_id, txn_tag, bin(_time,1h)\n| eval dup_suspect=if(txns>auth_dc AND txn_tag=\"dup\", txns-auth_dc, 0)\n| eval uc_id=\"22.22.22\"\n| where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"\n| sort - very_high\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Column chart (very_high), Table (merchant_id), Single value (dup_suspect)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (1841), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=payments` `paygw:txn` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"paygw:txn\" earliest=-24h\n| eval txn_tag=\"refund\"\n| eval eur_equiv=coalesce(amount_eur, amount*fx_rate)\n| stats count as txns, dc(authorization_id) as auth_dc, sum(eval(eur_equiv>100000)) as very_high by merchant_id, txn_tag, bin(_time,1h)\n| eval dup_suspect=if(txns>auth_dc AND txn_tag=\"dup\", txns-auth_dc, 0)\n| eval uc_id=\"22.22.22\"\n| where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"\n| sort - very_high\n| head 200\n```\n\nUnderstanding this SPL\n\n**Refund and Reversal Spike Detection by Merchant (PSD2)** — Provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.\n\nDocumented **Data sources**: `index=payments` `paygw:txn` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag. **App/TA** (typical add-on context): Splunk ITSI (1841), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: paygw:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"paygw:txn\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **txn_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **eur_equiv** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by merchant_id, txn_tag, bin(_time,1h)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dup_suspect** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (very_high), Table (merchant_id), Single value (dup_suspect)",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.95",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.95 (Management of operational and security risks) is enforced — Splunk UC-22.22.22: Refund and Reversal Spike Detection by Merchant.",
                  "ea": "Saved search 'UC-22.22.22' running on index=payments paygw:txn — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.23",
              "n": "Dormant Account Activity After Long Inactivity (PSD2)",
              "c": "high",
              "f": "intermediate",
              "v": "Provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.",
              "t": "Splunk ITSI (1841), Splunk DB Connect (2686)",
              "d": "`index=payments` `core:account:activity` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag",
              "q": "index=payments sourcetype=\"core:account:activity\" earliest=-24h\n| eval txn_tag=\"dormant\"\n| eval eur_equiv=coalesce(amount_eur, amount*fx_rate)\n| stats count as txns, dc(authorization_id) as auth_dc, sum(eval(eur_equiv>100000)) as very_high by merchant_id, txn_tag, bin(_time,1h)\n| eval dup_suspect=if(txns>auth_dc AND txn_tag=\"dup\", txns-auth_dc, 0)\n| eval uc_id=\"22.22.23\"\n| where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"\n| sort - very_high\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Column chart (very_high), Table (merchant_id), Single value (dup_suspect)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (1841), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=payments` `core:account:activity` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"core:account:activity\" earliest=-24h\n| eval txn_tag=\"dormant\"\n| eval eur_equiv=coalesce(amount_eur, amount*fx_rate)\n| stats count as txns, dc(authorization_id) as auth_dc, sum(eval(eur_equiv>100000)) as very_high by merchant_id, txn_tag, bin(_time,1h)\n| eval dup_suspect=if(txns>auth_dc AND txn_tag=\"dup\", txns-auth_dc, 0)\n| eval uc_id=\"22.22.23\"\n| where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"\n| sort - very_high\n| head 200\n```\n\nUnderstanding this SPL\n\n**Dormant Account Activity After Long Inactivity (PSD2)** — Provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.\n\nDocumented **Data sources**: `index=payments` `core:account:activity` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag. **App/TA** (typical add-on context): Splunk ITSI (1841), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: core:account:activity. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"core:account:activity\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **txn_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **eur_equiv** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by merchant_id, txn_tag, bin(_time,1h)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dup_suspect** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (very_high), Table (merchant_id), Single value (dup_suspect)",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.95",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.95 (Management of operational and security risks) is enforced — Splunk UC-22.22.23: Dormant Account Activity After Long Inactivity.",
                  "ea": "Saved search 'UC-22.22.23' running on index=payments core:account:activity — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.24",
              "n": "Cross-Currency Transaction Monitoring and FX Spread Anomalies (PSD2)",
              "c": "high",
              "f": "intermediate",
              "v": "Provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.",
              "t": "Splunk ITSI (1841), Splunk DB Connect (2686)",
              "d": "`index=payments` `paygw:txn` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag",
              "q": "index=payments sourcetype=\"paygw:txn\" earliest=-24h\n| eval txn_tag=\"fx\"\n| eval eur_equiv=coalesce(amount_eur, amount*fx_rate)\n| stats count as txns, dc(authorization_id) as auth_dc, sum(eval(eur_equiv>100000)) as very_high by merchant_id, txn_tag, bin(_time,1h)\n| eval dup_suspect=if(txns>auth_dc AND txn_tag=\"dup\", txns-auth_dc, 0)\n| eval uc_id=\"22.22.24\"\n| where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"\n| sort - very_high\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Column chart (very_high), Table (merchant_id), Single value (dup_suspect)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk ITSI (1841), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=payments` `paygw:txn` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"paygw:txn\" earliest=-24h\n| eval txn_tag=\"fx\"\n| eval eur_equiv=coalesce(amount_eur, amount*fx_rate)\n| stats count as txns, dc(authorization_id) as auth_dc, sum(eval(eur_equiv>100000)) as very_high by merchant_id, txn_tag, bin(_time,1h)\n| eval dup_suspect=if(txns>auth_dc AND txn_tag=\"dup\", txns-auth_dc, 0)\n| eval uc_id=\"22.22.24\"\n| where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"\n| sort - very_high\n| head 200\n```\n\nUnderstanding this SPL\n\n**Cross-Currency Transaction Monitoring and FX Spread Anomalies (PSD2)** — Provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.\n\nDocumented **Data sources**: `index=payments` `paygw:txn` — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag. **App/TA** (typical add-on context): Splunk ITSI (1841), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: paygw:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"paygw:txn\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **txn_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **eur_equiv** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by merchant_id, txn_tag, bin(_time,1h)** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dup_suspect** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where very_high>0 OR dup_suspect>5 OR txn_tag=\"refund\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (very_high), Table (merchant_id), Single value (dup_suspect)",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We provides transaction-level integrity and monitoring evidence aligned with PSD2 operational and security expectations for ASPSPs and payment processors.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.95",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.95 (Management of operational and security risks) is enforced — Splunk UC-22.22.24: Cross-Currency Transaction Monitoring and FX Spread Anomalies.",
                  "ea": "Saved search 'UC-22.22.24' running on index=payments paygw:txn — authorization_id, merchant_id, amount_eur, fx_rate, reversal_flag, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.25",
              "n": "Major Incident Notification Readiness to NCA (Payment Service) (PSD2 / NCA operational expectations)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841)",
              "d": "`index=payments` `pay:incident:milestone` — incident_id, milestone, severity, impacted_psu_estimate",
              "q": "index=payments sourcetype=\"pay:incident:milestone\" earliest=-90d\n| eval inc_tag=\"nca\"\n| stats earliest(eval(if(milestone==\"detected\",_time,null()))) as t0,\n        earliest(eval(if(milestone==\"nca_notified\",_time,null()))) as t1,\n        dc(incident_id) as incidents by inc_tag, severity\n| eval hours_to_nca=round((t1-t0)/3600,2)\n| eval uc_id=\"22.22.25\"\n| where inc_tag=\"nca\" AND (isnull(t1) OR hours_to_nca>24)\n| sort - hours_to_nca\n| table uc_id, severity, incidents, t0, t1, hours_to_nca",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Timeline (milestones), Table (late NCA notifications), Single value (mismatches)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841).\n• Ensure the following data sources are available: `index=payments` `pay:incident:milestone` — incident_id, milestone, severity, impacted_psu_estimate.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"pay:incident:milestone\" earliest=-90d\n| eval inc_tag=\"nca\"\n| stats earliest(eval(if(milestone==\"detected\",_time,null()))) as t0,\n        earliest(eval(if(milestone==\"nca_notified\",_time,null()))) as t1,\n        dc(incident_id) as incidents by inc_tag, severity\n| eval hours_to_nca=round((t1-t0)/3600,2)\n| eval uc_id=\"22.22.25\"\n| where inc_tag=\"nca\" AND (isnull(t1) OR hours_to_nca>24)\n| sort - hours_to_nca\n| table uc_id, severity, incidents, t0, t1, hours_to_nca\n```\n\nUnderstanding this SPL\n\n**Major Incident Notification Readiness to NCA (Payment Service) (PSD2 / NCA operational expectations)** — Supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.\n\nDocumented **Data sources**: `index=payments` `pay:incident:milestone` — incident_id, milestone, severity, impacted_psu_estimate. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: pay:incident:milestone. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"pay:incident:milestone\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **inc_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by inc_tag, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_to_nca** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where inc_tag=\"nca\" AND (isnull(t1) OR hours_to_nca>24)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Major Incident Notification Readiness to NCA (Payment Service) (PSD2 / NCA operational expectations)**): table uc_id, severity, incidents, t0, t1, hours_to_nca\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (milestones), Table (late NCA notifications), Single value (mismatches)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.96",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.96 (Incident reporting) is enforced — Splunk UC-22.22.25: Major Incident Notification Readiness to NCA (Payment Service).",
                  "ea": "Saved search 'UC-22.22.25' running on index=payments pay:incident:milestone — incident_id, milestone, severity, impacted_psu_estimate, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.26",
              "n": "Operational vs Security Incident Classification Consistency (PSD2)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841)",
              "d": "`index=payments` `pay:incident:milestone` — incident_id, milestone, severity, impacted_psu_estimate",
              "q": "index=payments sourcetype=\"pay:incident:milestone\" earliest=-90d\n| eval inc_tag=\"class\"\n| where isnotnull(security_class) AND isnotnull(ops_class)\n| eval class_mismatch=if(security_class!=ops_class,1,0)\n| stats sum(class_mismatch) as mismatches, count as events by aspsp_id, severity\n| eval uc_id=\"22.22.26\"\n| where mismatches>0\n| sort - mismatches\n| table uc_id, aspsp_id, severity, mismatches, events",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Timeline (milestones), Table (late NCA notifications), Single value (mismatches)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841).\n• Ensure the following data sources are available: `index=payments` `pay:incident:milestone` — incident_id, milestone, severity, impacted_psu_estimate.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"pay:incident:milestone\" earliest=-90d\n| eval inc_tag=\"class\"\n| where isnotnull(security_class) AND isnotnull(ops_class)\n| eval class_mismatch=if(security_class!=ops_class,1,0)\n| stats sum(class_mismatch) as mismatches, count as events by aspsp_id, severity\n| eval uc_id=\"22.22.26\"\n| where mismatches>0\n| sort - mismatches\n| table uc_id, aspsp_id, severity, mismatches, events\n```\n\nUnderstanding this SPL\n\n**Operational vs Security Incident Classification Consistency (PSD2)** — Supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.\n\nDocumented **Data sources**: `index=payments` `pay:incident:milestone` — incident_id, milestone, severity, impacted_psu_estimate. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: pay:incident:milestone. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"pay:incident:milestone\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **inc_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(security_class) AND isnotnull(ops_class)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **class_mismatch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mismatches>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Operational vs Security Incident Classification Consistency (PSD2)**): table uc_id, aspsp_id, severity, mismatches, events\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (milestones), Table (late NCA notifications), Single value (mismatches)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.96",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.96 (Incident reporting) is enforced — Splunk UC-22.22.26: Operational vs Security Incident Classification Consistency.",
                  "ea": "Saved search 'UC-22.22.26' running on index=payments pay:incident:milestone — incident_id, milestone, severity, impacted_psu_estimate, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.27",
              "n": "Customer Impact Assessment Coverage for Incidents (PSD2)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841)",
              "d": "`index=payments` `pay:incident:milestone` — incident_id, milestone, severity, impacted_psu_estimate",
              "q": "index=payments sourcetype=\"pay:incident:milestone\" earliest=-90d\n| eval inc_tag=\"cust\"\n| where milestone=\"impact_assessed\"\n| eval impact_missing=if(isnull(impacted_psu_estimate) OR impacted_psu_estimate<0,1,0)\n| stats sum(impact_missing) as missing, dc(incident_id) as incidents by aspsp_id, severity\n| eval uc_id=\"22.22.27\"\n| where missing>0\n| sort - missing\n| table uc_id, aspsp_id, severity, missing, incidents",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Timeline (milestones), Table (late NCA notifications), Single value (mismatches)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841).\n• Ensure the following data sources are available: `index=payments` `pay:incident:milestone` — incident_id, milestone, severity, impacted_psu_estimate.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"pay:incident:milestone\" earliest=-90d\n| eval inc_tag=\"cust\"\n| where milestone=\"impact_assessed\"\n| eval impact_missing=if(isnull(impacted_psu_estimate) OR impacted_psu_estimate<0,1,0)\n| stats sum(impact_missing) as missing, dc(incident_id) as incidents by aspsp_id, severity\n| eval uc_id=\"22.22.27\"\n| where missing>0\n| sort - missing\n| table uc_id, aspsp_id, severity, missing, incidents\n```\n\nUnderstanding this SPL\n\n**Customer Impact Assessment Coverage for Incidents (PSD2)** — Supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.\n\nDocumented **Data sources**: `index=payments` `pay:incident:milestone` — incident_id, milestone, severity, impacted_psu_estimate. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: pay:incident:milestone. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"pay:incident:milestone\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **inc_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where milestone=\"impact_assessed\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **impact_missing** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where missing>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Customer Impact Assessment Coverage for Incidents (PSD2)**): table uc_id, aspsp_id, severity, missing, incidents\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (milestones), Table (late NCA notifications), Single value (mismatches)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.96",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.96 (Incident reporting) is enforced — Splunk UC-22.22.27: Customer Impact Assessment Coverage for Incidents.",
                  "ea": "Saved search 'UC-22.22.27' running on index=payments pay:incident:milestone — incident_id, milestone, severity, impacted_psu_estimate, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.28",
              "n": "Payment Service Availability SLO Breach Tracking (PSD2)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841)",
              "d": "`index=payments` `pay:health:probe` — incident_id, milestone, severity, impacted_psu_estimate",
              "q": "index=payments sourcetype=\"pay:health:probe\" earliest=-7d\n| eval inc_tag=\"slo\"\n| bin _time span=5m\n| stats avg(availability_pct) as avgp, min(availability_pct) as min_av by aspsp_id, payment_scheme, _time\n| eval uc_id=\"22.22.28\"\n| where avgp < 99.5 OR min_av < 99.0\n| sort _time\n| table uc_id, _time, aspsp_id, payment_scheme, avgp, min_av",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Timeline (milestones), Table (late NCA notifications), Single value (mismatches)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841).\n• Ensure the following data sources are available: `index=payments` `pay:health:probe` — incident_id, milestone, severity, impacted_psu_estimate.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"pay:health:probe\" earliest=-7d\n| eval inc_tag=\"slo\"\n| bin _time span=5m\n| stats avg(availability_pct) as avgp, min(availability_pct) as min_av by aspsp_id, payment_scheme, _time\n| eval uc_id=\"22.22.28\"\n| where avgp < 99.5 OR min_av < 99.0\n| sort _time\n| table uc_id, _time, aspsp_id, payment_scheme, avgp, min_av\n```\n\nUnderstanding this SPL\n\n**Payment Service Availability SLO Breach Tracking (PSD2)** — Supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.\n\nDocumented **Data sources**: `index=payments` `pay:health:probe` — incident_id, milestone, severity, impacted_psu_estimate. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: pay:health:probe. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"pay:health:probe\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **inc_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, payment_scheme, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where avgp < 99.5 OR min_av < 99.0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Payment Service Availability SLO Breach Tracking (PSD2)**): table uc_id, _time, aspsp_id, payment_scheme, avgp, min_av\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (milestones), Table (late NCA notifications), Single value (mismatches)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.96",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.96 (Incident reporting) is enforced — Splunk UC-22.22.28: Payment Service Availability SLO Breach Tracking.",
                  "ea": "Saved search 'UC-22.22.28' running on index=payments pay:health:probe — incident_id, milestone, severity, impacted_psu_estimate, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.29",
              "n": "Fraud Loss Reporting Aggregation by Product Line (PSD2 / EBA fraud reporting)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841)",
              "d": "`index=payments` `pay:fraud:loss` — incident_id, milestone, severity, impacted_psu_estimate",
              "q": "index=payments sourcetype=\"pay:fraud:loss\" earliest=-400d\n| eval inc_tag=\"loss\"\n| stats sum(loss_eur) as total_loss, dc(case_id) as cases by product_line, reporting_quarter\n| eventstats median(total_loss) as med_loss by reporting_quarter\n| eval uc_id=\"22.22.29\"\n| eval uplift=round(total_loss-med_loss,2)\n| where uplift>10000 OR total_loss>250000\n| sort - total_loss\n| table uc_id, reporting_quarter, product_line, cases, total_loss, med_loss, uplift",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Timeline (milestones), Table (late NCA notifications), Single value (mismatches)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841).\n• Ensure the following data sources are available: `index=payments` `pay:fraud:loss` — incident_id, milestone, severity, impacted_psu_estimate.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"pay:fraud:loss\" earliest=-400d\n| eval inc_tag=\"loss\"\n| stats sum(loss_eur) as total_loss, dc(case_id) as cases by product_line, reporting_quarter\n| eventstats median(total_loss) as med_loss by reporting_quarter\n| eval uc_id=\"22.22.29\"\n| eval uplift=round(total_loss-med_loss,2)\n| where uplift>10000 OR total_loss>250000\n| sort - total_loss\n| table uc_id, reporting_quarter, product_line, cases, total_loss, med_loss, uplift\n```\n\nUnderstanding this SPL\n\n**Fraud Loss Reporting Aggregation by Product Line (PSD2 / EBA fraud reporting)** — Supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.\n\nDocumented **Data sources**: `index=payments` `pay:fraud:loss` — incident_id, milestone, severity, impacted_psu_estimate. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: pay:fraud:loss. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"pay:fraud:loss\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **inc_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_line, reporting_quarter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` computes the quarterly median (**by reporting_quarter**) and appends it to every row without collapsing the results, so each product line can be compared against its quarter's baseline.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uplift** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where uplift>10000 OR total_loss>250000` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Fraud Loss Reporting Aggregation by Product Line (PSD2 / EBA fraud reporting)**): table uc_id, reporting_quarter, product_line, cases, total_loss, med_loss, uplift\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (milestones), Table (late NCA notifications), Single value (mismatches)",
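Optional reporting panel (sketch only, assuming the documented `loss_eur`, `product_line`, and `reporting_quarter` fields): a plain quarterly breakdown of fraud losses per product line complements the outlier-focused alert above when assembling the EBA reporting pack.

```spl
index=payments sourcetype="pay:fraud:loss" earliest=-400d
| chart sum(loss_eur) as total_loss over reporting_quarter by product_line
```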
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.96",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.96 (Incident reporting) is enforced — Splunk UC-22.22.29: Fraud Loss Reporting Aggregation by Product Line.",
                  "ea": "Saved search 'UC-22.22.29' running on index=payments pay:fraud:loss — incident_id, milestone, severity, impacted_psu_estimate, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.22.30",
              "n": "Quarterly Statistical Reporting Dataset Reconciliation (PSD2 / EBA reporting)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841)",
              "d": "`index=payments` `pay:reg:stats` — incident_id, milestone, severity, impacted_psu_estimate",
              "q": "index=payments sourcetype=\"pay:reg:stats\" earliest=-400d\n| eval inc_tag=\"stats\"\n| stats sum(eval(mismatch=\"true\")) as mismatches, count as rows by aspsp_id, reporting_quarter\n| eval uc_id=\"22.22.30\"\n| eval mismatch_rate=round(100*mismatches/rows,2)\n| where mismatches>0 OR mismatch_rate>0.5\n| sort - mismatches\n| table uc_id, aspsp_id, reporting_quarter, mismatches, rows, mismatch_rate",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Timeline (milestones), Table (late NCA notifications), Single value (mismatches)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841).\n• Ensure the following data sources are available: `index=payments` `pay:reg:stats` — incident_id, milestone, severity, impacted_psu_estimate.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"pay:reg:stats\" earliest=-400d\n| eval inc_tag=\"stats\"\n| stats sum(eval(mismatch=\"true\")) as mismatches, count as rows by aspsp_id, reporting_quarter\n| eval uc_id=\"22.22.30\"\n| eval mismatch_rate=round(100*mismatches/rows,2)\n| where mismatches>0 OR mismatch_rate>0.5\n| sort - mismatches\n| table uc_id, aspsp_id, reporting_quarter, mismatches, rows, mismatch_rate\n```\n\nUnderstanding this SPL\n\n**Quarterly Statistical Reporting Dataset Reconciliation (PSD2 / EBA reporting)** — Supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.\n\nDocumented **Data sources**: `index=payments` `pay:reg:stats` — incident_id, milestone, severity, impacted_psu_estimate. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: pay:reg:stats. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"pay:reg:stats\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **inc_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aspsp_id, reporting_quarter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mismatch_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mismatches>0 OR mismatch_rate>0.5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Quarterly Statistical Reporting Dataset Reconciliation (PSD2 / EBA reporting)**): table uc_id, aspsp_id, reporting_quarter, mismatches, rows, mismatch_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Timeline (milestones), Table (late NCA notifications), Single value (mismatches)",
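Optional drill-down (a sketch that assumes `mismatch` is extracted as the string "true"/"false"): when the reconciliation alert fires, showing where the mismatching rows concentrate helps the reporting team correct the dataset before submission.

```spl
index=payments sourcetype="pay:reg:stats" earliest=-400d mismatch="true"
| stats count as mismatched_rows by aspsp_id, reporting_quarter
| sort - mismatched_rows
```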
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports incident governance, customer communications evidence, and supervisory reporting workflows associated with payment service continuity and fraud statistics.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PSD2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PSD2",
                  "v": "Directive (EU) 2015/2366",
                  "cl": "Art.96",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PSD2 Art.96 (Incident reporting) is enforced — Splunk UC-22.22.30: Quarterly Statistical Reporting Dataset Reconciliation.",
                  "ea": "Saved search 'UC-22.22.30' running on index=payments pay:reg:stats — incident_id, milestone, severity, impacted_psu_estimate, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.2,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 30,
            "none": 0
          }
        },
        {
          "i": "22.23",
          "n": "EU Cyber Resilience Act (CRA)",
          "u": [
            {
              "i": "22.23.1",
              "n": "Security-by-Default Configuration Evidence (CRA Art. 10(2)) (EU CRA)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `product:telemetry:security` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"product:telemetry:security\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"default_cfg\"\n| eval uc_id=\"22.23.1\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `product:telemetry:security` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"product:telemetry:security\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"default_cfg\"\n| eval uc_id=\"22.23.1\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Security-by-Default Configuration Evidence (CRA Art. 10(2)) (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `product:telemetry:security` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: product:telemetry:security, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"product:telemetry:security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.1: Security-by-Default Configuration Evidence (CRA Art. 10(2)).",
                  "ea": "Saved search 'UC-22.23.1' running on index=product_security product:telemetry:security — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.2",
              "n": "Default Credential and Hardcoded Secret Detection in Releases (EU CRA)",
              "c": "high",
              "f": "advanced",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `cicd:build:log` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"cicd:build:log\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"secrets\"\n| eval uc_id=\"22.23.2\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `cicd:build:log` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"cicd:build:log\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"secrets\"\n| eval uc_id=\"22.23.2\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Default Credential and Hardcoded Secret Detection in Releases (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `cicd:build:log` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: cicd:build:log, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"cicd:build:log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.2: Default Credential and Hardcoded Secret Detection in Releases.",
                  "ea": "Saved search 'UC-22.23.2' running on index=product_security cicd:build:log — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.3",
              "n": "Attack Surface Minimization — Exposed Admin Interfaces (EU CRA)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `product:telemetry:security` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"product:telemetry:security\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"surface\"\n| eval uc_id=\"22.23.3\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `product:telemetry:security` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"product:telemetry:security\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"surface\"\n| eval uc_id=\"22.23.3\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Attack Surface Minimization — Exposed Admin Interfaces (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `product:telemetry:security` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: product:telemetry:security, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"product:telemetry:security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.3: Attack Surface Minimization — Exposed Admin Interfaces.",
                  "ea": "Saved search 'UC-22.23.3' running on index=product_security product:telemetry:security — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.4",
              "n": "Secure Update Mechanism Integrity (Signed Update Verification) (EU CRA)",
              "c": "high",
              "f": "advanced",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `product:fw:update` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"product:fw:update\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"update\"\n| eval uc_id=\"22.23.4\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `product:fw:update` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"product:fw:update\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"update\"\n| eval uc_id=\"22.23.4\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Secure Update Mechanism Integrity (Signed Update Verification) (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `product:fw:update` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: product:fw:update, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"product:fw:update\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.4: Secure Update Mechanism Integrity (Signed Update Verification).",
                  "ea": "Saved search 'UC-22.23.4' running on index=product_security product:fw:update — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.5",
              "n": "Data Protection in Product — Local Storage Encryption Flags (EU CRA)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `product:telemetry:security` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"product:telemetry:security\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"data\"\n| eval uc_id=\"22.23.5\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `product:telemetry:security` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"product:telemetry:security\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"data\"\n| eval uc_id=\"22.23.5\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Data Protection in Product — Local Storage Encryption Flags (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `product:telemetry:security` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: product:telemetry:security, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"product:telemetry:security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "EU Cyber Resilience Act (CRA)",
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.5: Data Protection in Product — Local Storage Encryption Flags.",
                  "ea": "Saved search 'UC-22.23.5' running on index=product_security product:telemetry:security — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.6",
              "n": "Coordinated Disclosure Process SLA (Reporter Acknowledgement) (EU CRA)",
              "c": "high",
              "f": "advanced",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `sec:vuln:case` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"sec:vuln:case\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"disclosure\"\n| eval uc_id=\"22.23.6\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `sec:vuln:case` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"sec:vuln:case\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"disclosure\"\n| eval uc_id=\"22.23.6\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Coordinated Disclosure Process SLA (Reporter Acknowledgement) (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `sec:vuln:case` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: sec:vuln:case, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"sec:vuln:case\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.6: Coordinated Disclosure Process SLA (Reporter Acknowledgement).",
                  "ea": "Saved search 'UC-22.23.6' running on index=product_security sec:vuln:case — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.7",
              "n": "External Vulnerability Intelligence Correlation (KEV/EPSS) (EU CRA)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `sec:vuln:intel` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"sec:vuln:intel\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"intel\"\n| eval uc_id=\"22.23.7\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `sec:vuln:intel` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"sec:vuln:intel\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"intel\"\n| eval uc_id=\"22.23.7\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**External Vulnerability Intelligence Correlation (KEV/EPSS) (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `sec:vuln:intel` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: sec:vuln:intel, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"sec:vuln:intel\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.7: External Vulnerability Intelligence Correlation (KEV/EPSS).",
                  "ea": "Saved search 'UC-22.23.7' running on index=product_security sec:vuln:intel — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.8",
              "n": "Patch Timeline Compliance vs Vendor SLA (EU CRA)",
              "c": "high",
              "f": "advanced",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `sec:vuln:case` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"sec:vuln:case\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"patch\"\n| eval uc_id=\"22.23.8\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `sec:vuln:case` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"sec:vuln:case\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"patch\"\n| eval uc_id=\"22.23.8\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Patch Timeline Compliance vs Vendor SLA (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `sec:vuln:case` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: sec:vuln:case, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"sec:vuln:case\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.8: Patch Timeline Compliance vs Vendor SLA.",
                  "ea": "Saved search 'UC-22.23.8' running on index=product_security sec:vuln:case — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.9",
              "n": "End-of-Support Notification Delivery Audit (EU CRA)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `product:lifecycle:event` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"product:lifecycle:event\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"eos\"\n| eval uc_id=\"22.23.9\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `product:lifecycle:event` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"product:lifecycle:event\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"eos\"\n| eval uc_id=\"22.23.9\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**End-of-Support Notification Delivery Audit (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `product:lifecycle:event` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: product:lifecycle:event, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"product:lifecycle:event\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.9: End-of-Support Notification Delivery Audit.",
                  "ea": "Saved search 'UC-22.23.9' running on index=product_security product:lifecycle:event — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.10",
              "n": "Vulnerability Severity Assessment Consistency (CVSS vector completeness) (EU CRA)",
              "c": "high",
              "f": "advanced",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `sec:vuln:case` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"sec:vuln:case\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"cvss\"\n| eval uc_id=\"22.23.10\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `sec:vuln:case` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"sec:vuln:case\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"cvss\"\n| eval uc_id=\"22.23.10\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Vulnerability Severity Assessment Consistency (CVSS vector completeness) (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `sec:vuln:case` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: sec:vuln:case, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"sec:vuln:case\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.10: Vulnerability Severity Assessment Consistency (CVSS vector completeness).",
                  "ea": "Saved search 'UC-22.23.10' running on index=product_security sec:vuln:case — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.11",
              "n": "SBOM Generation Job Success and Artifact Hash Registry (EU CRA)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `cicd:sbom:job` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"cicd:sbom:job\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"sbom_gen\"\n| eval uc_id=\"22.23.11\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `cicd:sbom:job` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"cicd:sbom:job\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"sbom_gen\"\n| eval uc_id=\"22.23.11\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**SBOM Generation Job Success and Artifact Hash Registry (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `cicd:sbom:job` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: cicd:sbom:job, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"cicd:sbom:job\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.11: SBOM Generation Job Success and Artifact Hash Registry.",
                  "ea": "Saved search 'UC-22.23.11' running on index=product_security cicd:sbom:job — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.12",
              "n": "Component Vulnerability Tracking from SBOM to CVE (EU CRA)",
              "c": "high",
              "f": "advanced",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `cicd:sbom:analysis` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"cicd:sbom:analysis\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"sbom_cve\"\n| eval uc_id=\"22.23.12\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `cicd:sbom:analysis` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"cicd:sbom:analysis\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"sbom_cve\"\n| eval uc_id=\"22.23.12\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Component Vulnerability Tracking from SBOM to CVE (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `cicd:sbom:analysis` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: cicd:sbom:analysis, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"cicd:sbom:analysis\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.12: Component Vulnerability Tracking from SBOM to CVE.",
                  "ea": "Saved search 'UC-22.23.12' running on index=product_security cicd:sbom:analysis — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.13",
              "n": "Open Source License Compliance Drift in Dependencies (EU CRA)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `cicd:sbom:analysis` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"cicd:sbom:analysis\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"license\"\n| eval uc_id=\"22.23.13\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `cicd:sbom:analysis` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"cicd:sbom:analysis\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"license\"\n| eval uc_id=\"22.23.13\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Open Source License Compliance Drift in Dependencies (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `cicd:sbom:analysis` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: cicd:sbom:analysis, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"cicd:sbom:analysis\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.13: Open Source License Compliance Drift in Dependencies.",
                  "ea": "Saved search 'UC-22.23.13' running on index=product_security cicd:sbom:analysis — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.14",
              "n": "Dependency Update Monitoring and Stale Component Age (EU CRA)",
              "c": "high",
              "f": "advanced",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `cicd:sbom:analysis` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"cicd:sbom:analysis\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"stale\"\n| eval uc_id=\"22.23.14\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `cicd:sbom:analysis` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"cicd:sbom:analysis\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"stale\"\n| eval uc_id=\"22.23.14\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Dependency Update Monitoring and Stale Component Age (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `cicd:sbom:analysis` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: cicd:sbom:analysis, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"cicd:sbom:analysis\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.14: Dependency Update Monitoring and Stale Component Age.",
                  "ea": "Saved search 'UC-22.23.14' running on index=product_security cicd:sbom:analysis — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.15",
              "n": "Actively Exploited Vulnerability Notification Window (24h evidence) (EU CRA)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `sec:incident:notify` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security sourcetype=\"sec:incident:notify\" actively_exploited=true earliest=-30d\n| eval cra_tag=\"active24\"\n| eval uc_id=\"22.23.15\"\n| stats earliest(_time) as t_detect, latest(notification_sent_epoch) as t_sent by product_id, cve_id\n| eval hours_to_notify=if(isnotnull(t_sent), round((t_sent-t_detect)/3600,2), null())\n| where isnull(t_sent) OR hours_to_notify>24\n| sort - hours_to_notify\n| table uc_id, product_id, cve_id, t_detect, t_sent, hours_to_notify",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `sec:incident:notify` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security sourcetype=\"sec:incident:notify\" actively_exploited=true earliest=-30d\n| eval cra_tag=\"active24\"\n| eval uc_id=\"22.23.15\"\n| stats earliest(_time) as t_detect, latest(notification_sent_epoch) as t_sent by product_id, cve_id\n| eval hours_to_notify=if(isnotnull(t_sent), round((t_sent-t_detect)/3600,2), null())\n| where isnull(t_sent) OR hours_to_notify>24\n| sort - hours_to_notify\n| table uc_id, product_id, cve_id, t_detect, t_sent, hours_to_notify\n```\n\nUnderstanding this SPL\n\n**Actively Exploited Vulnerability Notification Window (24h evidence) (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `sec:incident:notify` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: sec:incident:notify. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"sec:incident:notify\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cve_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **hours_to_notify** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(t_sent) OR hours_to_notify>24` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Actively Exploited Vulnerability Notification Window (24h evidence) (EU CRA)**): table uc_id, product_id, cve_id, t_detect, t_sent, hours_to_notify\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.15: Actively Exploited Vulnerability Notification Window (24h evidence).",
                  "ea": "Saved search 'UC-22.23.15' running on index=product_security sec:incident:notify — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.16",
              "n": "Incident Notification to ENISA — Delivery and Acknowledgement (EU CRA)",
              "c": "critical",
              "f": "advanced",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `sec:incident:notify` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"sec:incident:notify\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"enisa\"\n| eval uc_id=\"22.23.16\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `sec:incident:notify` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"sec:incident:notify\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"enisa\"\n| eval uc_id=\"22.23.16\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Incident Notification to ENISA — Delivery and Acknowledgement (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `sec:incident:notify` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: sec:incident:notify, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"sec:incident:notify\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.16: Incident Notification to ENISA — Delivery and Acknowledgement.",
                  "ea": "Saved search 'UC-22.23.16' running on index=product_security sec:incident:notify — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.17",
              "n": "User Notification for Material Product Security Issues (EU CRA)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `sec:incident:notify` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"sec:incident:notify\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"user\"\n| eval uc_id=\"22.23.17\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `sec:incident:notify` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"sec:incident:notify\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"user\"\n| eval uc_id=\"22.23.17\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**User Notification for Material Product Security Issues (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `sec:incident:notify` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: sec:incident:notify, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"sec:incident:notify\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.17: User Notification for Material Product Security Issues.",
                  "ea": "Saved search 'UC-22.23.17' running on index=product_security sec:incident:notify — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.18",
              "n": "Security Testing Evidence in CI Gates (SAST/DAST/SCA) (EU CRA)",
              "c": "high",
              "f": "advanced",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `cicd:security:gate` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"cicd:security:gate\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"test\"\n| eval uc_id=\"22.23.18\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `cicd:security:gate` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"cicd:security:gate\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"test\"\n| eval uc_id=\"22.23.18\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Security Testing Evidence in CI Gates (SAST/DAST/SCA) (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `cicd:security:gate` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: cicd:security:gate, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"cicd:security:gate\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.18: Security Testing Evidence in CI Gates (SAST/DAST/SCA).",
                  "ea": "Saved search 'UC-22.23.18' running on index=product_security cicd:security:gate — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.19",
              "n": "Threat Modeling Artifact Presence by Release Train (EU CRA)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `sec:threatmodel:doc` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"sec:threatmodel:doc\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"tm\"\n| eval uc_id=\"22.23.19\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `sec:threatmodel:doc` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"sec:threatmodel:doc\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"tm\"\n| eval uc_id=\"22.23.19\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Threat Modeling Artifact Presence by Release Train (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `sec:threatmodel:doc` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: sec:threatmodel:doc, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"sec:threatmodel:doc\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.19: Threat Modeling Artifact Presence by Release Train.",
                  "ea": "Saved search 'UC-22.23.19' running on index=product_security sec:threatmodel:doc — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.23.20",
              "n": "Code Review and Pull Request Approval Evidence for Security Changes (EU CRA)",
              "c": "high",
              "f": "advanced",
              "v": "Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765)",
              "d": "`index=product_security` `cicd:vcs:pr` — product_id, release_id, cve_id, status, result, enisa_ticket_id",
              "q": "index=product_security (sourcetype=\"cicd:vcs:pr\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"review\"\n| eval uc_id=\"22.23.20\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (fails), Table (product_id), Single value (releases with fails)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for GitHub](https://splunkbase.splunk.com/app/5765), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765).\n• Ensure the following data sources are available: `index=product_security` `cicd:vcs:pr` — product_id, release_id, cve_id, status, result, enisa_ticket_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=product_security (sourcetype=\"cicd:vcs:pr\" OR sourcetype=\"github:actions\") earliest=-30d\n| eval cra_tag=\"review\"\n| eval uc_id=\"22.23.20\"\n| stats count as events, sum(eval(status=\"fail\" OR result=\"fail\")) as fails, dc(release_id) as releases by product_id, cra_tag\n| where fails>0\n| sort - fails\n| head 200\n```\n\nUnderstanding this SPL\n\n**Code Review and Pull Request Approval Evidence for Security Changes (EU CRA)** — Supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.\n\nDocumented **Data sources**: `index=product_security` `cicd:vcs:pr` — product_id, release_id, cve_id, status, result, enisa_ticket_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for GitHub (5765). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: product_security; **sourcetype**: cicd:vcs:pr, github:actions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=product_security, sourcetype=\"cicd:vcs:pr\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **cra_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by product_id, cra_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fails>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (fails), Table (product_id), Single value (releases with fails)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports CRA product security, vulnerability handling, and transparency obligations with auditable telemetry from engineering and incident processes.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU Cyber Resilience Act (CRA)"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "github"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that EU CRA Art.14 (Reporting of actively exploited vulnerabilities) is enforced — Splunk UC-22.23.20: Code Review and Pull Request Approval Evidence for Security Changes.",
                  "ea": "Saved search 'UC-22.23.20' running on index=product_security cicd:vcs:pr — product_id, release_id, cve_id, status, result, enisa_ticket_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 20,
            "none": 0
          }
        },
        {
          "i": "22.24",
          "n": "eIDAS 2.0 / Trust Services",
          "u": [
            {
              "i": "22.24.1",
              "n": "Qualified Certificate Issuance and Revocation Audit Trail (eIDAS / ETSI)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `qtsp:cert:event` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"qtsp:cert:event\" earliest=-7d\n| eval eidas_tag=\"iss\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.1\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Certificates](https://docs.splunk.com/Documentation/CIM/latest/User/Certificates)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `qtsp:cert:event` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"qtsp:cert:event\" earliest=-7d\n| eval eidas_tag=\"iss\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.1\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**Qualified Certificate Issuance and Revocation Audit Trail (eIDAS / ETSI)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `qtsp:cert:event` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: qtsp:cert:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"qtsp:cert:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Qualified Certificate Issuance and Revocation Audit Trail (eIDAS / ETSI)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `qtsp:cert:event` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Certificates.All_Certificates` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "Certificates"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.1: Qualified Certificate Issuance and Revocation Audit Trail.",
                  "ea": "Saved search 'UC-22.24.1' running on index=trust qtsp:cert:event — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.24.2",
              "n": "Qualified Timestamp Integrity and Clock Synchronization Checks (eIDAS / ETSI)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `qtsp:timestamp:log` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"qtsp:timestamp:log\" earliest=-7d\n| eval eidas_tag=\"tsa\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.2\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `qtsp:timestamp:log` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"qtsp:timestamp:log\" earliest=-7d\n| eval eidas_tag=\"tsa\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.2\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**Qualified Timestamp Integrity and Clock Synchronization Checks (eIDAS / ETSI)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `qtsp:timestamp:log` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: qtsp:timestamp:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"qtsp:timestamp:log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.2: Qualified Timestamp Integrity and Clock Synchronization Checks.",
                  "ea": "Saved search 'UC-22.24.2' running on index=trust qtsp:timestamp:log — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
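The evidence search above keys on certificate expiry and HTTP error counts. For the clock-synchronization angle of UC 22.24.2, a minimal drift check can be sketched, assuming a hypothetical `tst_gen_time` field (the genTime taken from the timestamp token) is extracted from `qtsp:timestamp:log`; rename it to match your extraction. Note that `_time` reflects the logged event time, so ingestion lag will inflate the skew.

```spl
index=trust sourcetype="qtsp:timestamp:log" earliest=-24h
| eval tst_epoch=strptime(tst_gen_time,"%Y-%m-%dT%H:%M:%SZ")
| eval skew_s=abs(_time - tst_epoch)
| stats max(skew_s) as max_skew_s, avg(skew_s) as avg_skew_s, count as tokens by trust_service
| where max_skew_s > 1
```

A one-second threshold is a common starting point for qualified timestamp accuracy; align it with the accuracy declared in your QTSP practice statement.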
            {
              "i": "22.24.3",
              "n": "Trust Service Availability and Error Rate Monitoring (eIDAS)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `qtsp:service:health` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"qtsp:service:health\" earliest=-7d\n| eval eidas_tag=\"avail\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.3\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `qtsp:service:health` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"qtsp:service:health\" earliest=-7d\n| eval eidas_tag=\"avail\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.3\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**Trust Service Availability and Error Rate Monitoring (eIDAS)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `qtsp:service:health` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: qtsp:service:health. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"qtsp:service:health\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.3: Trust Service Availability and Error Rate Monitoring.",
                  "ea": "Saved search 'UC-22.24.3' running on index=trust qtsp:service:health — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
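The template above only counts HTTP errors. A per-service error-rate sketch for UC 22.24.3, using only the documented `http_status` field (the 5% threshold and 15-minute span are placeholders):

```spl
index=trust sourcetype="qtsp:service:health" earliest=-24h
| bin _time span=15m
| stats count as total, sum(eval(http_status>=400)) as errors by _time, trust_service
| eval error_rate_pct=round(100*errors/total,2)
| where error_rate_pct > 5
```

Save as an alert that fires when any row is returned, or drop the `where` clause and feed the result to a time chart for the availability dashboard.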
            {
              "i": "22.24.4",
              "n": "Conformity Assessment Evidence Index Freshness (eIDAS)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `qtsp:conformity:record` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"qtsp:conformity:record\" earliest=-7d\n| eval eidas_tag=\"ca\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.4\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `qtsp:conformity:record` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"qtsp:conformity:record\" earliest=-7d\n| eval eidas_tag=\"ca\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.4\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**Conformity Assessment Evidence Index Freshness (eIDAS)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `qtsp:conformity:record` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: qtsp:conformity:record. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"qtsp:conformity:record\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.4: Conformity Assessment Evidence Index Freshness.",
                  "ea": "Saved search 'UC-22.24.4' running on index=trust qtsp:conformity:record — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
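A freshness-specific sketch for UC 22.24.4 that needs no extra field extractions; it flags trust services whose most recent conformity record is older than a placeholder 30 days:

```spl
index=trust sourcetype="qtsp:conformity:record" earliest=-90d
| stats latest(_time) as last_record by trust_service
| eval days_since_record=round((now()-last_record)/86400,1)
| where days_since_record > 30
| sort - days_since_record
```

Tune the lookback window and threshold to your conformity assessment cadence, and consider an inventory lookup so services that stop reporting entirely also surface.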
            {
              "i": "22.24.5",
              "n": "EU Digital Identity Wallet Issuance Event Completeness (eIDAS 2.0)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `eudi:wallet:issuance` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"eudi:wallet:issuance\" earliest=-7d\n| eval eidas_tag=\"wiss\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.5\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `eudi:wallet:issuance` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"eudi:wallet:issuance\" earliest=-7d\n| eval eidas_tag=\"wiss\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.5\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**EU Digital Identity Wallet Issuance Event Completeness (eIDAS 2.0)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `eudi:wallet:issuance` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: eudi:wallet:issuance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"eudi:wallet:issuance\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.5: EU Digital Identity Wallet Issuance Event Completeness.",
                  "ea": "Saved search 'UC-22.24.5' running on index=trust eudi:wallet:issuance — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
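A completeness-specific sketch for UC 22.24.5, built only from the documented fields; it reports how often issuance events arrive without `wallet_id`, `serial`, or `trust_service`:

```spl
index=trust sourcetype="eudi:wallet:issuance" earliest=-7d
| eval missing=mvappend(if(isnull(wallet_id),"wallet_id",null()), if(isnull(serial),"serial",null()), if(isnull(trust_service),"trust_service",null()))
| where mvcount(missing) > 0
| stats count as incomplete_events by missing
| rename missing as missing_field
| sort - incomplete_events
```

If issuance events are expected at a steady rate, pair this with a volume baseline so silent gaps (no events at all) also surface.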
            {
              "i": "22.24.6",
              "n": "Wallet Credential Presentation Audit (RP relying party) (eIDAS 2.0)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `eudi:wallet:presentation` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"eudi:wallet:presentation\" earliest=-7d\n| eval eidas_tag=\"pres\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.6\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `eudi:wallet:presentation` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"eudi:wallet:presentation\" earliest=-7d\n| eval eidas_tag=\"pres\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.6\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**Wallet Credential Presentation Audit (RP relying party) (eIDAS 2.0)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `eudi:wallet:presentation` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: eudi:wallet:presentation. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"eudi:wallet:presentation\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.6: Wallet Credential Presentation Audit (RP relying party).",
                  "ea": "Saved search 'UC-22.24.6' running on index=trust eudi:wallet:presentation — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
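For the relying-party angle of UC 22.24.6, a volume-anomaly sketch, assuming a hypothetical `relying_party` field is extracted from the presentation events (the three-sigma rule is a placeholder):

```spl
index=trust sourcetype="eudi:wallet:presentation" earliest=-30d
| bin _time span=1d
| stats count as presentations by _time, relying_party
| eventstats avg(presentations) as baseline, stdev(presentations) as sd by relying_party
| where presentations > baseline + 3*sd
```

Unusual presentation volume against a single relying party can indicate scripted credential harvesting or an integration fault; review alongside the audit table produced by the main search.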
            {
              "i": "22.24.7",
              "n": "Selective Disclosure Attribute Set Minimization Monitoring (eIDAS 2.0)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `eudi:wallet:presentation` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"eudi:wallet:presentation\" earliest=-7d\n| eval eidas_tag=\"sel\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.7\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `eudi:wallet:presentation` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"eudi:wallet:presentation\" earliest=-7d\n| eval eidas_tag=\"sel\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.7\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**Selective Disclosure Attribute Set Minimization Monitoring (eIDAS 2.0)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `eudi:wallet:presentation` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: eudi:wallet:presentation. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"eudi:wallet:presentation\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.7: Selective Disclosure Attribute Set Minimization Monitoring.",
                  "ea": "Saved search 'UC-22.24.7' running on index=trust eudi:wallet:presentation — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
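For the data-minimization angle of UC 22.24.7, a sketch that counts attributes released per presentation, assuming a hypothetical comma-separated `disclosed_attributes` field (if it is already multivalue, use `mvcount(disclosed_attributes)` directly); the threshold of 5 attributes is a placeholder to be set per relying-party agreement:

```spl
index=trust sourcetype="eudi:wallet:presentation" earliest=-7d
| eval attr_count=mvcount(split(disclosed_attributes,","))
| stats avg(attr_count) as avg_attrs, max(attr_count) as max_attrs, count as presentations by trust_service
| where max_attrs > 5
```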
            {
              "i": "22.24.8",
              "n": "Wallet Suspension and Revocation Propagation Latency (eIDAS 2.0)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `eudi:wallet:lifecycle` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"eudi:wallet:lifecycle\" earliest=-7d\n| eval eidas_tag=\"susp\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.8\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `eudi:wallet:lifecycle` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"eudi:wallet:lifecycle\" earliest=-7d\n| eval eidas_tag=\"susp\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.8\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**Wallet Suspension and Revocation Propagation Latency (eIDAS 2.0)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `eudi:wallet:lifecycle` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: eudi:wallet:lifecycle. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"eudi:wallet:lifecycle\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.8: Wallet Suspension and Revocation Propagation Latency.",
                  "ea": "Saved search 'UC-22.24.8' running on index=trust eudi:wallet:lifecycle — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
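A latency-specific sketch for UC 22.24.8, assuming a hypothetical `lifecycle_event` field whose values include `revocation_requested` and `revocation_propagated`; the one-hour threshold is a placeholder for your propagation SLA:

```spl
index=trust sourcetype="eudi:wallet:lifecycle" earliest=-7d (lifecycle_event="revocation_requested" OR lifecycle_event="revocation_propagated")
| stats range(_time) as latency_s, dc(lifecycle_event) as stages by wallet_id
| where stages=2 AND latency_s > 3600
| eval latency_min=round(latency_s/60,1)
| sort - latency_min
```

`range(_time)` assumes at most one request/propagation pair per wallet in the window; if wallets can be revoked repeatedly, switch to `transaction` or a `streamstats`-based pairing.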
            {
              "i": "22.24.9",
              "n": "Qualified Timestamp Accuracy vs Stratum / NTP Offset (eIDAS / ETSI)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `qtsp:timestamp:log` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"qtsp:timestamp:log\" earliest=-7d\n| eval eidas_tag=\"acc\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.9\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `qtsp:timestamp:log` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"qtsp:timestamp:log\" earliest=-7d\n| eval eidas_tag=\"acc\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.9\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**Qualified Timestamp Accuracy vs Stratum / NTP Offset (eIDAS / ETSI)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `qtsp:timestamp:log` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: qtsp:timestamp:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"qtsp:timestamp:log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.9: Qualified Timestamp Accuracy vs Stratum / NTP Offset.",
                  "ea": "Saved search 'UC-22.24.9' running on index=trust qtsp:timestamp:log — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
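For the accuracy angle of UC 22.24.9, a sketch assuming a hypothetical `ntp_offset_ms` field is reported in the timestamp logs (or joined in from your NTP monitoring feed); the 500 ms threshold is a placeholder, to be aligned with the accuracy declared for the timestamping unit:

```spl
index=trust sourcetype="qtsp:timestamp:log" earliest=-24h
| eval abs_offset_ms=abs(ntp_offset_ms)
| stats max(abs_offset_ms) as max_offset_ms, avg(abs_offset_ms) as avg_offset_ms, count as samples by trust_service
| where max_offset_ms > 500
```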
            {
              "i": "22.24.10",
              "n": "Timestamp Source Diversity and Failover Evidence (eIDAS / ETSI)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `qtsp:timestamp:log` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"qtsp:timestamp:log\" earliest=-7d\n| eval eidas_tag=\"src\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.10\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `qtsp:timestamp:log` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"qtsp:timestamp:log\" earliest=-7d\n| eval eidas_tag=\"src\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.10\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**Timestamp Source Diversity and Failover Evidence (eIDAS / ETSI)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `qtsp:timestamp:log` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: qtsp:timestamp:log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"qtsp:timestamp:log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.10: Timestamp Source Diversity and Failover Evidence.",
                  "ea": "Saved search 'UC-22.24.10' running on index=trust qtsp:timestamp:log — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.24.11",
              "n": "Long-Term Validation Evidence for Signature Time Stamps (eIDAS)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `qtsp:archive:ltv` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"qtsp:archive:ltv\" earliest=-7d\n| eval eidas_tag=\"ltv\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.11\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `qtsp:archive:ltv` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"qtsp:archive:ltv\" earliest=-7d\n| eval eidas_tag=\"ltv\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.11\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**Long-Term Validation Evidence for Signature Time Stamps (eIDAS)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `qtsp:archive:ltv` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: qtsp:archive:ltv. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"qtsp:archive:ltv\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.11: Long-Term Validation Evidence for Signature Time Stamps.",
                  "ea": "Saved search 'UC-22.24.11' running on index=trust qtsp:archive:ltv — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.24.12",
              "n": "Archive Timestamp Chain Continuity Checks (eIDAS)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `qtsp:archive:ltv` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"qtsp:archive:ltv\" earliest=-7d\n| eval eidas_tag=\"chain\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.12\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `qtsp:archive:ltv` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"qtsp:archive:ltv\" earliest=-7d\n| eval eidas_tag=\"chain\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.12\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**Archive Timestamp Chain Continuity Checks (eIDAS)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `qtsp:archive:ltv` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: qtsp:archive:ltv. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"qtsp:archive:ltv\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) 
as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.12: Archive Timestamp Chain Continuity Checks.",
                  "ea": "Saved search 'UC-22.24.12' running on index=trust qtsp:archive:ltv — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.24.13",
              "n": "Certificate Validity Window Monitoring (Not Before / Not After) (eIDAS)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `qtsp:cert:event` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"qtsp:cert:event\" earliest=-7d\n| eval eidas_tag=\"val\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.13\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Certificates](https://docs.splunk.com/Documentation/CIM/latest/User/Certificates)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `qtsp:cert:event` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"qtsp:cert:event\" earliest=-7d\n| eval eidas_tag=\"val\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.13\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**Certificate Validity Window Monitoring (Not Before / Not After) (eIDAS)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `qtsp:cert:event` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: qtsp:cert:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"qtsp:cert:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Certificate Validity Window Monitoring (Not Before / Not After) (eIDAS)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `qtsp:cert:event` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Certificates.All_Certificates` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "Certificates"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.13: Certificate Validity Window Monitoring (Not Before / Not After).",
                  "ea": "Saved search 'UC-22.24.13' running on index=trust qtsp:cert:event — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.24.14",
              "n": "CRL and OCSP Response Freshness and HTTP Status Audits (eIDAS)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `pki:ocsp:crl:probe` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"pki:ocsp:crl:probe\" earliest=-7d\n| eval eidas_tag=\"crl\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.14\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [CIM: Certificates](https://docs.splunk.com/Documentation/CIM/latest/User/Certificates)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `pki:ocsp:crl:probe` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"pki:ocsp:crl:probe\" earliest=-7d\n| eval eidas_tag=\"crl\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.14\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**CRL and OCSP Response Freshness and HTTP Status Audits (eIDAS)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `pki:ocsp:crl:probe` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: pki:ocsp:crl:probe. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"pki:ocsp:crl:probe\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CRL and OCSP Response Freshness and HTTP Status Audits (eIDAS)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `pki:ocsp:crl:probe` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Certificates.All_Certificates` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "Certificates"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.14: CRL and OCSP Response Freshness and HTTP Status Audits.",
                  "ea": "Saved search 'UC-22.24.14' running on index=trust pki:ocsp:crl:probe — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.24.15",
              "n": "Qualified Certificate Attribute Verification Against Subscriber Records (eIDAS)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=trust` `qtsp:cert:event` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id",
              "q": "index=trust sourcetype=\"qtsp:cert:event\" earliest=-7d\n| eval eidas_tag=\"attr\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.15\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Single value (min_days), Table (trust_service), Time chart (http_err)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Certificates](https://docs.splunk.com/Documentation/CIM/latest/User/Certificates)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=trust` `qtsp:cert:event` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=trust sourcetype=\"qtsp:cert:event\" earliest=-7d\n| eval eidas_tag=\"attr\"\n| eval exp_epoch=strptime(not_after,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval days_to_exp=round((exp_epoch-now())/86400,1)\n| stats count as events, min(days_to_exp) as min_days, sum(eval(http_status>=400)) as http_err by trust_service, eidas_tag\n| eval uc_id=\"22.24.15\"\n| where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"\n| sort min_days\n| head 200\n```\n\nUnderstanding this SPL\n\n**Qualified Certificate Attribute Verification Against Subscriber Records (eIDAS)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `qtsp:cert:event` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: trust; **sourcetype**: qtsp:cert:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=trust, sourcetype=\"qtsp:cert:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eidas_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_exp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by trust_service, eidas_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where min_days < 30 OR http_err>0 OR eidas_tag=\"susp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Qualified Certificate Attribute Verification Against Subscriber Records (eIDAS)** — Supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.\n\nDocumented **Data sources**: `index=trust` `qtsp:cert:event` — trust_service, serial, not_after, ocsp_url, http_status, wallet_id. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Certificates.All_Certificates` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (min_days), Table (trust_service), Time chart (http_err)",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We supports qualified trust service auditing, wallet security monitoring, and PKI integrity evidence expected under eIDAS trust frameworks.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "eIDAS 2.0 / EU trust services"
              ],
              "a": [
                "Certificates"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Certificates.All_Certificates by All_Certificates.ssl_subject_common_name, All_Certificates.dest | sort - count",
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "eIDAS",
                  "v": "Regulation (EU) 2024/1183",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that eIDAS Art.24 (Requirements for qualified trust service providers) is enforced — Splunk UC-22.24.15: Qualified Certificate Attribute Verification Against Subscriber Records.",
                  "ea": "Saved search 'UC-22.24.15' running on index=trust qtsp:cert:event — trust_service, serial, not_after, ocsp_url, http_status, wallet_id, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 15,
            "none": 0
          }
        },
        {
          "i": "22.25",
          "n": "AML / CFT (Anti-Money Laundering)",
          "u": [
            {
              "i": "22.25.1",
              "n": "Structuring and Smurfing Pattern Detection (Just Below Thresholds) (EU AMLD / national law)",
              "c": "high",
              "f": "expert",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"struct\"\n| eval uc_id=\"22.25.1\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"struct\"\n| eval uc_id=\"22.25.1\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Structuring and Smurfing Pattern Detection (Just Below Thresholds) (EU AMLD / national law)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: core:wire:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"core:wire:txn\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.1: Structuring and Smurfing Pattern Detection (Just Below Thresholds).",
                  "ea": "Saved search 'UC-22.25.1' running on index=aml core:wire:txn — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.2",
              "n": "Rapid Movement of Funds Through Layering Accounts (EU AMLD / national law)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"layer\"\n| eval uc_id=\"22.25.2\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"layer\"\n| eval uc_id=\"22.25.2\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Rapid Movement of Funds Through Layering Accounts (EU AMLD / national law)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: core:wire:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"core:wire:txn\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.2: Rapid Movement of Funds Through Layering Accounts.",
                  "ea": "Saved search 'UC-22.25.2' running on index=aml core:wire:txn — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.3",
              "n": "Round-Trip Transaction Detection (Circular Flows) (EU AMLD / national law)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"round\"\n| eval uc_id=\"22.25.3\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"round\"\n| eval uc_id=\"22.25.3\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Round-Trip Transaction Detection (Circular Flows) (EU AMLD / national law)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: core:wire:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"core:wire:txn\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` sets the static tags **aml_tag** and **uc_id** so alerts and exported evidence can be traced back to this use case.\n• `bin _time span=1h` buckets events into hourly windows.\n• `stats` aggregates each window (event count, screening hits, median screening latency) **by customer_id, aml_tag, _time**, producing one row per customer per hour.\n• `eventstats` adds the 95th-percentile amount per **aml_tag** as a benchmark column without collapsing rows; carry the amount through the preceding `stats` (for example `max(amount_eur) as amount_eur`) so this benchmark and the amount test in `where` have a value to evaluate.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the rule: amounts just under the 10,000 EUR threshold (structuring variant only), any screening hit, or median screening latency above 800 ms.\n• `sort - events` and `head 300` keep the busiest 300 rows for triage.\n\nThis search reads raw events directly, so Data Model Acceleration is not required. If you later convert it to `tstats` or `mstats` over an accelerated data model or metric index, enable acceleration first; otherwise the converted search may return no results from summaries.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results (see the optional data-presence check below). Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
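\n\nOptional data-presence check for Step 3 (a minimal sketch; field names come from the documented data source, so adjust index, sourcetype and time range to your environment):\n\n```spl\nindex=aml sourcetype=\"core:wire:txn\" earliest=-24h\n| stats count as events, dc(customer_id) as customers, count(eval(isnotnull(amount_eur))) as with_amount, count(eval(isnotnull(screening_result))) as with_screening\n```\n\nIf this returns zero events, or the field-coverage counts sit far below the event count, fix ingestion and field extractions before enabling the alert.",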
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.3: Round-Trip Transaction Detection (Circular Flows).",
                  "ea": "Saved search 'UC-22.25.3' running on index=aml core:wire:txn — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.4",
              "n": "Unusual Transaction Pattern Deviation vs Customer Profile (EU AMLD / national law)",
              "c": "high",
              "f": "advanced",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"unusual\"\n| eval uc_id=\"22.25.4\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"unusual\"\n| eval uc_id=\"22.25.4\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Unusual Transaction Pattern Deviation vs Customer Profile (EU AMLD / national law)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: core:wire:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"core:wire:txn\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` sets the static tags **aml_tag** and **uc_id** so alerts and exported evidence can be traced back to this use case.\n• `bin _time span=1h` buckets events into hourly windows.\n• `stats` aggregates each window (event count, screening hits, median screening latency) **by customer_id, aml_tag, _time**, producing one row per customer per hour.\n• `eventstats` adds the 95th-percentile amount per **aml_tag** as a benchmark column without collapsing rows; carry the amount through the preceding `stats` (for example `max(amount_eur) as amount_eur`) so this benchmark and the amount test in `where` have a value to evaluate.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the rule: amounts just under the 10,000 EUR threshold (structuring variant only), any screening hit, or median screening latency above 800 ms.\n• `sort - events` and `head 300` keep the busiest 300 rows for triage.\n\nThis search reads raw events directly, so Data Model Acceleration is not required. If you later convert it to `tstats` or `mstats` over an accelerated data model or metric index, enable acceleration first; otherwise the converted search may return no results from summaries.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results (see the optional data-presence check below). Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
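\n\nOptional data-presence check for Step 3 (a minimal sketch; field names come from the documented data source, so adjust index, sourcetype and time range to your environment):\n\n```spl\nindex=aml sourcetype=\"core:wire:txn\" earliest=-24h\n| stats count as events, dc(customer_id) as customers, count(eval(isnotnull(amount_eur))) as with_amount, count(eval(isnotnull(screening_result))) as with_screening\n```\n\nIf this returns zero events, or the field-coverage counts sit far below the event count, fix ingestion and field extractions before enabling the alert.",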
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.4: Unusual Transaction Pattern Deviation vs Customer Profile.",
                  "ea": "Saved search 'UC-22.25.4' running on index=aml core:wire:txn — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.5",
              "n": "Dormant Account Reactivation with High Outbound Velocity (EU AMLD / national law)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `core:account:activity` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"core:account:activity\" earliest=-7d\n| eval aml_tag=\"dorm\"\n| eval uc_id=\"22.25.5\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `core:account:activity` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"core:account:activity\" earliest=-7d\n| eval aml_tag=\"dorm\"\n| eval uc_id=\"22.25.5\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Dormant Account Reactivation with High Outbound Velocity (EU AMLD / national law)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `core:account:activity` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: core:account:activity. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"core:account:activity\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` sets the static tags **aml_tag** and **uc_id** so alerts and exported evidence can be traced back to this use case.\n• `bin _time span=1h` buckets events into hourly windows.\n• `stats` aggregates each window (event count, screening hits, median screening latency) **by customer_id, aml_tag, _time**, producing one row per customer per hour.\n• `eventstats` adds the 95th-percentile amount per **aml_tag** as a benchmark column without collapsing rows; carry the amount through the preceding `stats` (for example `max(amount_eur) as amount_eur`) so this benchmark and the amount test in `where` have a value to evaluate.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the rule: amounts just under the 10,000 EUR threshold (structuring variant only), any screening hit, or median screening latency above 800 ms.\n• `sort - events` and `head 300` keep the busiest 300 rows for triage.\n\nThis search reads raw events directly, so Data Model Acceleration is not required. If you later convert it to `tstats` or `mstats` over an accelerated data model or metric index, enable acceleration first; otherwise the converted search may return no results from summaries.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results (see the optional data-presence check below). Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
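\n\nOptional data-presence check for Step 3 (a minimal sketch; field names come from the documented data source, so adjust index, sourcetype and time range to your environment):\n\n```spl\nindex=aml sourcetype=\"core:account:activity\" earliest=-24h\n| stats count as events, dc(customer_id) as customers, count(eval(isnotnull(amount_eur))) as with_amount, count(eval(isnotnull(screening_result))) as with_screening\n```\n\nIf this returns zero events, or the field-coverage counts sit far below the event count, fix ingestion and field extractions before enabling the alert.",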
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.5: Dormant Account Reactivation with High Outbound Velocity.",
                  "ea": "Saved search 'UC-22.25.5' running on index=aml core:account:activity — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.6",
              "n": "Cash-Intensive Business Monitoring vs Sector Benchmarks (EU AMLD / national law)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `core:branch:cash` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"core:branch:cash\" earliest=-7d\n| eval aml_tag=\"cash\"\n| eval uc_id=\"22.25.6\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `core:branch:cash` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"core:branch:cash\" earliest=-7d\n| eval aml_tag=\"cash\"\n| eval uc_id=\"22.25.6\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Cash-Intensive Business Monitoring vs Sector Benchmarks (EU AMLD / national law)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `core:branch:cash` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: core:branch:cash. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"core:branch:cash\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` sets the static tags **aml_tag** and **uc_id** so alerts and exported evidence can be traced back to this use case.\n• `bin _time span=1h` buckets events into hourly windows.\n• `stats` aggregates each window (event count, screening hits, median screening latency) **by customer_id, aml_tag, _time**, producing one row per customer per hour.\n• `eventstats` adds the 95th-percentile amount per **aml_tag** as a benchmark column without collapsing rows; carry the amount through the preceding `stats` (for example `max(amount_eur) as amount_eur`) so this benchmark and the amount test in `where` have a value to evaluate.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the rule: amounts just under the 10,000 EUR threshold (structuring variant only), any screening hit, or median screening latency above 800 ms.\n• `sort - events` and `head 300` keep the busiest 300 rows for triage.\n\nThis search reads raw events directly, so Data Model Acceleration is not required. If you later convert it to `tstats` or `mstats` over an accelerated data model or metric index, enable acceleration first; otherwise the converted search may return no results from summaries.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results (see the optional data-presence check below). Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
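\n\nOptional data-presence check for Step 3 (a minimal sketch; field names come from the documented data source, so adjust index, sourcetype and time range to your environment):\n\n```spl\nindex=aml sourcetype=\"core:branch:cash\" earliest=-24h\n| stats count as events, dc(customer_id) as customers, count(eval(isnotnull(amount_eur))) as with_amount, count(eval(isnotnull(screening_result))) as with_screening\n```\n\nIf this returns zero events, or the field-coverage counts sit far below the event count, fix ingestion and field extractions before enabling the alert.",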
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.6: Cash-Intensive Business Monitoring vs Sector Benchmarks.",
                  "ea": "Saved search 'UC-22.25.6' running on index=aml core:branch:cash — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.7",
              "n": "Cross-Border Transaction Threshold and Corridor Risk Scoring (EU AMLD / national law)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"xb\"\n| eval uc_id=\"22.25.7\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"xb\"\n| eval uc_id=\"22.25.7\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Cross-Border Transaction Threshold and Corridor Risk Scoring (EU AMLD / national law)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: core:wire:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"core:wire:txn\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` sets the static tags **aml_tag** and **uc_id** so alerts and exported evidence can be traced back to this use case.\n• `bin _time span=1h` buckets events into hourly windows.\n• `stats` aggregates each window (event count, screening hits, median screening latency) **by customer_id, aml_tag, _time**, producing one row per customer per hour.\n• `eventstats` adds the 95th-percentile amount per **aml_tag** as a benchmark column without collapsing rows; carry the amount through the preceding `stats` (for example `max(amount_eur) as amount_eur`) so this benchmark and the amount test in `where` have a value to evaluate.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the rule: amounts just under the 10,000 EUR threshold (structuring variant only), any screening hit, or median screening latency above 800 ms.\n• `sort - events` and `head 300` keep the busiest 300 rows for triage.\n\nThis search reads raw events directly, so Data Model Acceleration is not required. If you later convert it to `tstats` or `mstats` over an accelerated data model or metric index, enable acceleration first; otherwise the converted search may return no results from summaries.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results (see the optional data-presence check below). Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
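\n\nOptional data-presence check for Step 3 (a minimal sketch; field names come from the documented data source, so adjust index, sourcetype and time range to your environment):\n\n```spl\nindex=aml sourcetype=\"core:wire:txn\" earliest=-24h\n| stats count as events, dc(customer_id) as customers, count(eval(isnotnull(amount_eur))) as with_amount, count(eval(isnotnull(screening_result))) as with_screening\n```\n\nIf this returns zero events, or the field-coverage counts sit far below the event count, fix ingestion and field extractions before enabling the alert.",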
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.18 (Customer due diligence) is enforced — Splunk UC-22.25.7: Cross-Border Transaction Threshold and Corridor Risk Scoring.",
                  "ea": "Saved search 'UC-22.25.7' running on index=aml core:wire:txn — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.8",
              "n": "Suspicious Transaction Report (STR) Generation Workflow Audits (EU AMLD / national FIAML)",
              "c": "critical",
              "f": "expert",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `aml:case:workflow` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"aml:case:workflow\" earliest=-7d\n| eval aml_tag=\"str_gen\"\n| eval uc_id=\"22.25.8\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `aml:case:workflow` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"aml:case:workflow\" earliest=-7d\n| eval aml_tag=\"str_gen\"\n| eval uc_id=\"22.25.8\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Suspicious Transaction Report (STR) Generation Workflow Audits (EU AMLD / national FIAML)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `aml:case:workflow` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: aml:case:workflow. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"aml:case:workflow\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` sets the static tags **aml_tag** and **uc_id** so alerts and exported evidence can be traced back to this use case.\n• `bin _time span=1h` buckets events into hourly windows.\n• `stats` aggregates each window (event count, screening hits, median screening latency) **by customer_id, aml_tag, _time**, producing one row per customer per hour.\n• `eventstats` adds the 95th-percentile amount per **aml_tag** as a benchmark column without collapsing rows; carry the amount through the preceding `stats` (for example `max(amount_eur) as amount_eur`) so this benchmark and the amount test in `where` have a value to evaluate.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the rule: amounts just under the 10,000 EUR threshold (structuring variant only), any screening hit, or median screening latency above 800 ms.\n• `sort - events` and `head 300` keep the busiest 300 rows for triage.\n\nThis search reads raw events directly, so Data Model Acceleration is not required. If you later convert it to `tstats` or `mstats` over an accelerated data model or metric index, enable acceleration first; otherwise the converted search may return no results from summaries.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results (see the optional data-presence check below). Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
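\n\nOptional data-presence check for Step 3 (a minimal sketch; field names come from the documented data source, so adjust index, sourcetype and time range to your environment):\n\n```spl\nindex=aml sourcetype=\"aml:case:workflow\" earliest=-24h\n| stats count as events, dc(customer_id) as customers, count(eval(isnotnull(amount_eur))) as with_amount, count(eval(isnotnull(screening_result))) as with_screening\n```\n\nIf this returns zero events, or the field-coverage counts sit far below the event count, fix ingestion and field extractions before enabling the alert.",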
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.8: Suspicious Transaction Report (STR) Generation Workflow Audits.",
                  "ea": "Saved search 'UC-22.25.8' running on index=aml aml:case:workflow — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.9",
              "n": "SAR Filing Timeline Compliance vs Regulatory Cutoffs (EU AMLD / national FIAML)",
              "c": "critical",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `aml:sar:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"aml:sar:event\" earliest=-7d\n| eval aml_tag=\"sar_time\"\n| eval uc_id=\"22.25.9\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `aml:sar:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"aml:sar:event\" earliest=-7d\n| eval aml_tag=\"sar_time\"\n| eval uc_id=\"22.25.9\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**SAR Filing Timeline Compliance vs Regulatory Cutoffs (EU AMLD / national FIAML)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `aml:sar:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: aml:sar:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"aml:sar:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` sets the static tags **aml_tag** and **uc_id** so alerts and exported evidence can be traced back to this use case.\n• `bin _time span=1h` buckets events into hourly windows.\n• `stats` aggregates each window (event count, screening hits, median screening latency) **by customer_id, aml_tag, _time**, producing one row per customer per hour.\n• `eventstats` adds the 95th-percentile amount per **aml_tag** as a benchmark column without collapsing rows; carry the amount through the preceding `stats` (for example `max(amount_eur) as amount_eur`) so this benchmark and the amount test in `where` have a value to evaluate.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the rule: amounts just under the 10,000 EUR threshold (structuring variant only), any screening hit, or median screening latency above 800 ms.\n• `sort - events` and `head 300` keep the busiest 300 rows for triage.\n\nThis search reads raw events directly, so Data Model Acceleration is not required. If you later convert it to `tstats` or `mstats` over an accelerated data model or metric index, enable acceleration first; otherwise the converted search may return no results from summaries.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results (see the optional data-presence check below). Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
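\n\nOptional data-presence check for Step 3 (a minimal sketch; field names come from the documented data source, so adjust index, sourcetype and time range to your environment):\n\n```spl\nindex=aml sourcetype=\"aml:sar:event\" earliest=-24h\n| stats count as events, dc(customer_id) as customers, count(eval(isnotnull(amount_eur))) as with_amount, count(eval(isnotnull(screening_result))) as with_screening\n```\n\nIf this returns zero events, or the field-coverage counts sit far below the event count, fix ingestion and field extractions before enabling the alert.",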
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.9: SAR Filing Timeline Compliance vs Regulatory Cutoffs.",
                  "ea": "Saved search 'UC-22.25.9' running on index=aml aml:sar:event — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.10",
              "n": "SAR Quality Assurance Sampling — Narrative Length and Fields (EU AMLD / national FIAML)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `aml:sar:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"aml:sar:event\" earliest=-7d\n| eval aml_tag=\"sar_qa\"\n| eval uc_id=\"22.25.10\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `aml:sar:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"aml:sar:event\" earliest=-7d\n| eval aml_tag=\"sar_qa\"\n| eval uc_id=\"22.25.10\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**SAR Quality Assurance Sampling — Narrative Length and Fields (EU AMLD / national FIAML)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `aml:sar:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: aml:sar:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"aml:sar:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• The two `eval` stages set **aml_tag** and **uc_id** to fixed labels so that downstream stages can group on the tag and alert output stays traceable to this catalog entry.\n• `bin _time span=1h` buckets events into hourly windows, turning the `stats` output into a per-hour series.\n• `stats` rolls events up into **events** (count), **hits** (screening hits), and **lat** (median screening latency), split **by customer_id, aml_tag, _time**, so each row is one customer in one hourly bucket.\n• `eventstats` appends **p95_amt** (the 95th percentile of `amount_eur`) per **aml_tag** to every row without collapsing rows. Because `amount_eur` is not carried through the preceding `stats`, **p95_amt** is empty in this pipeline unless the percentile is computed inside `stats`; see the optional variant at the end of this section.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the alert condition. Because `amount_eur` is no longer available after `stats`, the structuring branch cannot match here, so the filter effectively reduces to `hits>0 OR lat>800`.\n• `sort - events` orders rows by event volume and `head 300` caps the output for review queues and dashboards.\n\nThis search reads raw events directly and does not use `tstats`, `mstats`, or a data model, so Data Model Acceleration is not required for it; enable DMA (and metric indexes for `mstats`) only if you adopt a CIM or accelerated variant of this use case.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
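\n\nOptional hardened variant (a sketch, not the documented search): it keeps the index, sourcetype, labels, and thresholds from the SPL above, folds the 95th-percentile calculation into `stats` so `amount_eur` is still present when it is computed, counts hits with the `count(eval(...))` idiom rather than summing a boolean comparison, and drops the structuring branch that cannot match here. Validate it against your own data before replacing the documented search.\n\n```spl\nindex=aml sourcetype=\"aml:sar:event\" earliest=-7d\n| eval aml_tag=\"sar_qa\", uc_id=\"22.25.10\"\n| bin _time span=1h\n| stats count as events,\n    count(eval(screening_result==\"hit\")) as hits,\n    median(screening_latency_ms) as lat,\n    perc95(amount_eur) as p95_amt\n  by customer_id, aml_tag, _time\n| where hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nThe `hits>0 OR lat>800` thresholds are carried over unchanged from the documented search; tune them to your screening volumes before alerting on this variant.",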
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.10: SAR Quality Assurance Sampling — Narrative Length and Fields.",
                  "ea": "Saved search 'UC-22.25.10' running on index=aml aml:sar:event — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.11",
              "n": "SAR Feedback Loop Tracking from Supervisor to Front Office (EU AMLD / national FIAML)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `aml:sar:feedback` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"aml:sar:feedback\" earliest=-7d\n| eval aml_tag=\"sar_fb\"\n| eval uc_id=\"22.25.11\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `aml:sar:feedback` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"aml:sar:feedback\" earliest=-7d\n| eval aml_tag=\"sar_fb\"\n| eval uc_id=\"22.25.11\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**SAR Feedback Loop Tracking from Supervisor to Front Office (EU AMLD / national FIAML)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `aml:sar:feedback` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: aml:sar:feedback. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"aml:sar:feedback\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• The two `eval` stages set **aml_tag** and **uc_id** to fixed labels so that downstream stages can group on the tag and alert output stays traceable to this catalog entry.\n• `bin _time span=1h` buckets events into hourly windows, turning the `stats` output into a per-hour series.\n• `stats` rolls events up into **events** (count), **hits** (screening hits), and **lat** (median screening latency), split **by customer_id, aml_tag, _time**, so each row is one customer in one hourly bucket.\n• `eventstats` appends **p95_amt** (the 95th percentile of `amount_eur`) per **aml_tag** to every row without collapsing rows. Because `amount_eur` is not carried through the preceding `stats`, **p95_amt** is empty in this pipeline unless the percentile is computed inside `stats`; see the optional variant at the end of this section.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the alert condition. Because `amount_eur` is no longer available after `stats`, the structuring branch cannot match here, so the filter effectively reduces to `hits>0 OR lat>800`.\n• `sort - events` orders rows by event volume and `head 300` caps the output for review queues and dashboards.\n\nThis search reads raw events directly and does not use `tstats`, `mstats`, or a data model, so Data Model Acceleration is not required for it; enable DMA (and metric indexes for `mstats`) only if you adopt a CIM or accelerated variant of this use case.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
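\n\nOptional hardened variant (a sketch, not the documented search): it keeps the index, sourcetype, labels, and thresholds from the SPL above, folds the 95th-percentile calculation into `stats` so `amount_eur` is still present when it is computed, counts hits with the `count(eval(...))` idiom rather than summing a boolean comparison, and drops the structuring branch that cannot match here. Validate it against your own data before replacing the documented search.\n\n```spl\nindex=aml sourcetype=\"aml:sar:feedback\" earliest=-7d\n| eval aml_tag=\"sar_fb\", uc_id=\"22.25.11\"\n| bin _time span=1h\n| stats count as events,\n    count(eval(screening_result==\"hit\")) as hits,\n    median(screening_latency_ms) as lat,\n    perc95(amount_eur) as p95_amt\n  by customer_id, aml_tag, _time\n| where hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nThe `hits>0 OR lat>800` thresholds are carried over unchanged from the documented search; tune them to your screening volumes before alerting on this variant.",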
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.11: SAR Feedback Loop Tracking from Supervisor to Front Office.",
                  "ea": "Saved search 'UC-22.25.11' running on index=aml aml:sar:feedback — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.12",
              "n": "Regulatory Examination Evidence — STR/SAR Retrieval Completeness (EU AMLD / national FIAML)",
              "c": "high",
              "f": "advanced",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `aml:sar:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"aml:sar:event\" earliest=-7d\n| eval aml_tag=\"sar_ex\"\n| eval uc_id=\"22.25.12\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `aml:sar:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"aml:sar:event\" earliest=-7d\n| eval aml_tag=\"sar_ex\"\n| eval uc_id=\"22.25.12\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Regulatory Examination Evidence — STR/SAR Retrieval Completeness (EU AMLD / national FIAML)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `aml:sar:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: aml:sar:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"aml:sar:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• The two `eval` stages set **aml_tag** and **uc_id** to fixed labels so that downstream stages can group on the tag and alert output stays traceable to this catalog entry.\n• `bin _time span=1h` buckets events into hourly windows, turning the `stats` output into a per-hour series.\n• `stats` rolls events up into **events** (count), **hits** (screening hits), and **lat** (median screening latency), split **by customer_id, aml_tag, _time**, so each row is one customer in one hourly bucket.\n• `eventstats` appends **p95_amt** (the 95th percentile of `amount_eur`) per **aml_tag** to every row without collapsing rows. Because `amount_eur` is not carried through the preceding `stats`, **p95_amt** is empty in this pipeline unless the percentile is computed inside `stats`; see the optional variant at the end of this section.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the alert condition. Because `amount_eur` is no longer available after `stats`, the structuring branch cannot match here, so the filter effectively reduces to `hits>0 OR lat>800`.\n• `sort - events` orders rows by event volume and `head 300` caps the output for review queues and dashboards.\n\nThis search reads raw events directly and does not use `tstats`, `mstats`, or a data model, so Data Model Acceleration is not required for it; enable DMA (and metric indexes for `mstats`) only if you adopt a CIM or accelerated variant of this use case.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
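\n\nOptional hardened variant (a sketch, not the documented search): it keeps the index, sourcetype, labels, and thresholds from the SPL above, folds the 95th-percentile calculation into `stats` so `amount_eur` is still present when it is computed, counts hits with the `count(eval(...))` idiom rather than summing a boolean comparison, and drops the structuring branch that cannot match here. Validate it against your own data before replacing the documented search.\n\n```spl\nindex=aml sourcetype=\"aml:sar:event\" earliest=-7d\n| eval aml_tag=\"sar_ex\", uc_id=\"22.25.12\"\n| bin _time span=1h\n| stats count as events,\n    count(eval(screening_result==\"hit\")) as hits,\n    median(screening_latency_ms) as lat,\n    perc95(amount_eur) as p95_amt\n  by customer_id, aml_tag, _time\n| where hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nThe `hits>0 OR lat>800` thresholds are carried over unchanged from the documented search; tune them to your screening volumes before alerting on this variant.",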
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.12: Regulatory Examination Evidence — STR/SAR Retrieval Completeness.",
                  "ea": "Saved search 'UC-22.25.12' running on index=aml aml:sar:event — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.13",
              "n": "Customer Due Diligence (CDD) Completion Before Account Use (EU AMLD / national law)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `kyc:onboarding:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"kyc:onboarding:event\" earliest=-7d\n| eval aml_tag=\"cdd\"\n| eval uc_id=\"22.25.13\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `kyc:onboarding:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"kyc:onboarding:event\" earliest=-7d\n| eval aml_tag=\"cdd\"\n| eval uc_id=\"22.25.13\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Customer Due Diligence (CDD) Completion Before Account Use (EU AMLD / national law)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `kyc:onboarding:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: kyc:onboarding:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"kyc:onboarding:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• The two `eval` stages set **aml_tag** and **uc_id** to fixed labels so that downstream stages can group on the tag and alert output stays traceable to this catalog entry.\n• `bin _time span=1h` buckets events into hourly windows, turning the `stats` output into a per-hour series.\n• `stats` rolls events up into **events** (count), **hits** (screening hits), and **lat** (median screening latency), split **by customer_id, aml_tag, _time**, so each row is one customer in one hourly bucket.\n• `eventstats` appends **p95_amt** (the 95th percentile of `amount_eur`) per **aml_tag** to every row without collapsing rows. Because `amount_eur` is not carried through the preceding `stats`, **p95_amt** is empty in this pipeline unless the percentile is computed inside `stats`; see the optional variant at the end of this section.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the alert condition. Because `amount_eur` is no longer available after `stats`, the structuring branch cannot match here, so the filter effectively reduces to `hits>0 OR lat>800`.\n• `sort - events` orders rows by event volume and `head 300` caps the output for review queues and dashboards.\n\nThis search reads raw events directly and does not use `tstats`, `mstats`, or a data model, so Data Model Acceleration is not required for it; enable DMA (and metric indexes for `mstats`) only if you adopt a CIM or accelerated variant of this use case.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
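\n\nOptional hardened variant (a sketch, not the documented search): it keeps the index, sourcetype, labels, and thresholds from the SPL above, folds the 95th-percentile calculation into `stats` so `amount_eur` is still present when it is computed, counts hits with the `count(eval(...))` idiom rather than summing a boolean comparison, and drops the structuring branch that cannot match here. Validate it against your own data before replacing the documented search.\n\n```spl\nindex=aml sourcetype=\"kyc:onboarding:event\" earliest=-7d\n| eval aml_tag=\"cdd\", uc_id=\"22.25.13\"\n| bin _time span=1h\n| stats count as events,\n    count(eval(screening_result==\"hit\")) as hits,\n    median(screening_latency_ms) as lat,\n    perc95(amount_eur) as p95_amt\n  by customer_id, aml_tag, _time\n| where hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nThe `hits>0 OR lat>800` thresholds are carried over unchanged from the documented search; tune them to your screening volumes before alerting on this variant.",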
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.18 (Customer due diligence) is enforced — Splunk UC-22.25.13: Customer Due Diligence (CDD) Completion Before Account Use.",
                  "ea": "Saved search 'UC-22.25.13' running on index=aml kyc:onboarding:event — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.14",
              "n": "Enhanced Due Diligence (EDD) Trigger and Approval Tracking (EU AMLD / national law)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `kyc:onboarding:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"kyc:onboarding:event\" earliest=-7d\n| eval aml_tag=\"edd\"\n| eval uc_id=\"22.25.14\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `kyc:onboarding:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"kyc:onboarding:event\" earliest=-7d\n| eval aml_tag=\"edd\"\n| eval uc_id=\"22.25.14\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Enhanced Due Diligence (EDD) Trigger and Approval Tracking (EU AMLD / national law)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `kyc:onboarding:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: kyc:onboarding:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"kyc:onboarding:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• The two `eval` stages set **aml_tag** and **uc_id** to fixed labels so that downstream stages can group on the tag and alert output stays traceable to this catalog entry.\n• `bin _time span=1h` buckets events into hourly windows, turning the `stats` output into a per-hour series.\n• `stats` rolls events up into **events** (count), **hits** (screening hits), and **lat** (median screening latency), split **by customer_id, aml_tag, _time**, so each row is one customer in one hourly bucket.\n• `eventstats` appends **p95_amt** (the 95th percentile of `amount_eur`) per **aml_tag** to every row without collapsing rows. Because `amount_eur` is not carried through the preceding `stats`, **p95_amt** is empty in this pipeline unless the percentile is computed inside `stats`; see the optional variant at the end of this section.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the alert condition. Because `amount_eur` is no longer available after `stats`, the structuring branch cannot match here, so the filter effectively reduces to `hits>0 OR lat>800`.\n• `sort - events` orders rows by event volume and `head 300` caps the output for review queues and dashboards.\n\nThis search reads raw events directly and does not use `tstats`, `mstats`, or a data model, so Data Model Acceleration is not required for it; enable DMA (and metric indexes for `mstats`) only if you adopt a CIM or accelerated variant of this use case.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
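\n\nOptional hardened variant (a sketch, not the documented search): it keeps the index, sourcetype, labels, and thresholds from the SPL above, folds the 95th-percentile calculation into `stats` so `amount_eur` is still present when it is computed, counts hits with the `count(eval(...))` idiom rather than summing a boolean comparison, and drops the structuring branch that cannot match here. Validate it against your own data before replacing the documented search.\n\n```spl\nindex=aml sourcetype=\"kyc:onboarding:event\" earliest=-7d\n| eval aml_tag=\"edd\", uc_id=\"22.25.14\"\n| bin _time span=1h\n| stats count as events,\n    count(eval(screening_result==\"hit\")) as hits,\n    median(screening_latency_ms) as lat,\n    perc95(amount_eur) as p95_amt\n  by customer_id, aml_tag, _time\n| where hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nThe `hits>0 OR lat>800` thresholds are carried over unchanged from the documented search; tune them to your screening volumes before alerting on this variant.",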
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.18 (Customer due diligence) is enforced — Splunk UC-22.25.14: Enhanced Due Diligence (EDD) Trigger and Approval Tracking.",
                  "ea": "Saved search 'UC-22.25.14' running on index=aml kyc:onboarding:event — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.15",
              "n": "Beneficial Ownership Verification Completeness (UBO) (EU AMLD / national law)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `kyc:ubo:verification` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"kyc:ubo:verification\" earliest=-7d\n| eval aml_tag=\"ubo\"\n| eval uc_id=\"22.25.15\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `kyc:ubo:verification` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"kyc:ubo:verification\" earliest=-7d\n| eval aml_tag=\"ubo\"\n| eval uc_id=\"22.25.15\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Beneficial Ownership Verification Completeness (UBO) (EU AMLD / national law)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `kyc:ubo:verification` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: kyc:ubo:verification. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"kyc:ubo:verification\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• The two `eval` stages set **aml_tag** and **uc_id** to fixed labels so that downstream stages can group on the tag and alert output stays traceable to this catalog entry.\n• `bin _time span=1h` buckets events into hourly windows, turning the `stats` output into a per-hour series.\n• `stats` rolls events up into **events** (count), **hits** (screening hits), and **lat** (median screening latency), split **by customer_id, aml_tag, _time**, so each row is one customer in one hourly bucket.\n• `eventstats` appends **p95_amt** (the 95th percentile of `amount_eur`) per **aml_tag** to every row without collapsing rows. Because `amount_eur` is not carried through the preceding `stats`, **p95_amt** is empty in this pipeline unless the percentile is computed inside `stats`; see the optional variant at the end of this section.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the alert condition. Because `amount_eur` is no longer available after `stats`, the structuring branch cannot match here, so the filter effectively reduces to `hits>0 OR lat>800`.\n• `sort - events` orders rows by event volume and `head 300` caps the output for review queues and dashboards.\n\nThis search reads raw events directly and does not use `tstats`, `mstats`, or a data model, so Data Model Acceleration is not required for it; enable DMA (and metric indexes for `mstats`) only if you adopt a CIM or accelerated variant of this use case.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
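\n\nOptional hardened variant (a sketch, not the documented search): it keeps the index, sourcetype, labels, and thresholds from the SPL above, folds the 95th-percentile calculation into `stats` so `amount_eur` is still present when it is computed, counts hits with the `count(eval(...))` idiom rather than summing a boolean comparison, and drops the structuring branch that cannot match here. Validate it against your own data before replacing the documented search.\n\n```spl\nindex=aml sourcetype=\"kyc:ubo:verification\" earliest=-7d\n| eval aml_tag=\"ubo\", uc_id=\"22.25.15\"\n| bin _time span=1h\n| stats count as events,\n    count(eval(screening_result==\"hit\")) as hits,\n    median(screening_latency_ms) as lat,\n    perc95(amount_eur) as p95_amt\n  by customer_id, aml_tag, _time\n| where hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nThe `hits>0 OR lat>800` thresholds are carried over unchanged from the documented search; tune them to your screening volumes before alerting on this variant.",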
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.18 (Customer due diligence) is enforced — Splunk UC-22.25.15: Beneficial Ownership Verification Completeness (UBO).",
                  "ea": "Saved search 'UC-22.25.15' running on index=aml kyc:ubo:verification — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.16",
              "n": "Customer Risk Scoring Model Output Drift Monitoring (EU AMLD / national law)",
              "c": "high",
              "f": "expert",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `kyc:risk:score` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"kyc:risk:score\" earliest=-7d\n| eval aml_tag=\"risk\"\n| eval uc_id=\"22.25.16\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `kyc:risk:score` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"kyc:risk:score\" earliest=-7d\n| eval aml_tag=\"risk\"\n| eval uc_id=\"22.25.16\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Customer Risk Scoring Model Output Drift Monitoring (EU AMLD / national law)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `kyc:risk:score` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: kyc:risk:score. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"kyc:risk:score\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• The two `eval` stages set **aml_tag** and **uc_id** to fixed labels so that downstream stages can group on the tag and alert output stays traceable to this catalog entry.\n• `bin _time span=1h` buckets events into hourly windows, turning the `stats` output into a per-hour series.\n• `stats` rolls events up into **events** (count), **hits** (screening hits), and **lat** (median screening latency), split **by customer_id, aml_tag, _time**, so each row is one customer in one hourly bucket.\n• `eventstats` appends **p95_amt** (the 95th percentile of `amount_eur`) per **aml_tag** to every row without collapsing rows. Because `amount_eur` is not carried through the preceding `stats`, **p95_amt** is empty in this pipeline unless the percentile is computed inside `stats`; see the optional variant at the end of this section.\n• `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` applies the alert condition. Because `amount_eur` is no longer available after `stats`, the structuring branch cannot match here, so the filter effectively reduces to `hits>0 OR lat>800`.\n• `sort - events` orders rows by event volume and `head 300` caps the output for review queues and dashboards.\n\nThis search reads raw events directly and does not use `tstats`, `mstats`, or a data model, so Data Model Acceleration is not required for it; enable DMA (and metric indexes for `mstats`) only if you adopt a CIM or accelerated variant of this use case.\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)
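\n\nOptional hardened variant (a sketch, not the documented search): it keeps the index, sourcetype, labels, and thresholds from the SPL above, folds the 95th-percentile calculation into `stats` so `amount_eur` is still present when it is computed, counts hits with the `count(eval(...))` idiom rather than summing a boolean comparison, and drops the structuring branch that cannot match here. Validate it against your own data before replacing the documented search.\n\n```spl\nindex=aml sourcetype=\"kyc:risk:score\" earliest=-7d\n| eval aml_tag=\"risk\", uc_id=\"22.25.16\"\n| bin _time span=1h\n| stats count as events,\n    count(eval(screening_result==\"hit\")) as hits,\n    median(screening_latency_ms) as lat,\n    perc95(amount_eur) as p95_amt\n  by customer_id, aml_tag, _time\n| where hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nThe `hits>0 OR lat>800` thresholds are carried over unchanged from the documented search; tune them to your screening volumes before alerting on this variant.",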
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.18 (Customer due diligence) is enforced — Splunk UC-22.25.16: Customer Risk Scoring Model Output Drift Monitoring.",
                  "ea": "Saved search 'UC-22.25.16' running on index=aml kyc:risk:score — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.17",
              "n": "Periodic KYC Review Compliance and Overdue Reviews (EU AMLD / national law)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `kyc:review:schedule` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"kyc:review:schedule\" earliest=-7d\n| eval aml_tag=\"pkr\"\n| eval uc_id=\"22.25.17\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `kyc:review:schedule` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"kyc:review:schedule\" earliest=-7d\n| eval aml_tag=\"pkr\"\n| eval uc_id=\"22.25.17\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Periodic KYC Review Compliance and Overdue Reviews (EU AMLD / national law)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `kyc:review:schedule` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: kyc:review:schedule. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"kyc:review:schedule\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.18 (Customer due diligence) is enforced — Splunk UC-22.25.17: Periodic KYC Review Compliance and Overdue Reviews.",
                  "ea": "Saved search 'UC-22.25.17' running on index=aml kyc:review:schedule — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.18",
              "n": "Real-Time Sanctions Screening Hit Rate and Latency (EU AMLD / sanctions regulations)",
              "c": "critical",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `sanctions:screening` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"sanctions:screening\" earliest=-7d\n| eval aml_tag=\"san_rt\"\n| eval uc_id=\"22.25.18\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `sanctions:screening` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"sanctions:screening\" earliest=-7d\n| eval aml_tag=\"san_rt\"\n| eval uc_id=\"22.25.18\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Real-Time Sanctions Screening Hit Rate and Latency (EU AMLD / sanctions regulations)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `sanctions:screening` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: sanctions:screening. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"sanctions:screening\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.18: Real-Time Sanctions Screening Hit Rate and Latency.",
                  "ea": "Saved search 'UC-22.25.18' running on index=aml sanctions:screening — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.19",
              "n": "Sanctions List Update Monitoring (OFAC/EU/UN Feeds) (EU sanctions regulations)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `sanctions:list:update` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"sanctions:list:update\" earliest=-7d\n| eval aml_tag=\"list_up\"\n| eval uc_id=\"22.25.19\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `sanctions:list:update` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"sanctions:list:update\" earliest=-7d\n| eval aml_tag=\"list_up\"\n| eval uc_id=\"22.25.19\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Sanctions List Update Monitoring (OFAC/EU/UN Feeds) (EU sanctions regulations)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `sanctions:list:update` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: sanctions:list:update. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"sanctions:list:update\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.19: Sanctions List Update Monitoring (OFAC/EU/UN Feeds).",
                  "ea": "Saved search 'UC-22.25.19' running on index=aml sanctions:list:update — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.20",
              "n": "False Positive Management — Analyst Override Quality (EU AMLD / internal policy)",
              "c": "high",
              "f": "advanced",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `sanctions:case` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"sanctions:case\" earliest=-7d\n| eval aml_tag=\"fp\"\n| eval uc_id=\"22.25.20\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `sanctions:case` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"sanctions:case\" earliest=-7d\n| eval aml_tag=\"fp\"\n| eval uc_id=\"22.25.20\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**False Positive Management — Analyst Override Quality (EU AMLD / internal policy)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `sanctions:case` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: sanctions:case. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"sanctions:case\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.20: False Positive Management — Analyst Override Quality.",
                  "ea": "Saved search 'UC-22.25.20' running on index=aml sanctions:case — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.21",
              "n": "Secondary Sanctions and Sectoral Sanctions Exposure Mapping (EU/US sanctions policy)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `sanctions:screening` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"sanctions:screening\" earliest=-7d\n| eval aml_tag=\"sec2\"\n| eval uc_id=\"22.25.21\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `sanctions:screening` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"sanctions:screening\" earliest=-7d\n| eval aml_tag=\"sec2\"\n| eval uc_id=\"22.25.21\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Secondary Sanctions and Sectoral Sanctions Exposure Mapping (EU/US sanctions policy)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `sanctions:screening` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: sanctions:screening. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"sanctions:screening\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.21: Secondary Sanctions and Sectoral Sanctions Exposure Mapping.",
                  "ea": "Saved search 'UC-22.25.21' running on index=aml sanctions:screening — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.22",
              "n": "Correspondent Banking Sanctions Screening Coverage (EU AMLD / Wolfsberg)",
              "c": "critical",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `corr:swift:message` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"corr:swift:message\" earliest=-7d\n| eval aml_tag=\"corr\"\n| eval uc_id=\"22.25.22\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `corr:swift:message` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"corr:swift:message\" earliest=-7d\n| eval aml_tag=\"corr\"\n| eval uc_id=\"22.25.22\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Correspondent Banking Sanctions Screening Coverage (EU AMLD / Wolfsberg)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `corr:swift:message` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: corr:swift:message. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"corr:swift:message\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.22: Correspondent Banking Sanctions Screening Coverage.",
                  "ea": "Saved search 'UC-22.25.22' running on index=aml corr:swift:message — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.23",
              "n": "Trade Embargo Compliance for Goods and Destination Checks (EU sanctions / trade controls)",
              "c": "critical",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `trade:finance:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"trade:finance:txn\" earliest=-7d\n| eval aml_tag=\"emb\"\n| eval uc_id=\"22.25.23\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `trade:finance:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"trade:finance:txn\" earliest=-7d\n| eval aml_tag=\"emb\"\n| eval uc_id=\"22.25.23\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Trade Embargo Compliance for Goods and Destination Checks (EU sanctions / trade controls)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `trade:finance:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: trade:finance:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"trade:finance:txn\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.23: Trade Embargo Compliance for Goods and Destination Checks.",
                  "ea": "Saved search 'UC-22.25.23' running on index=aml trade:finance:txn — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.24",
              "n": "PEP Identification and Classification Coverage at Onboarding (EU AMLD / FATF)",
              "c": "high",
              "f": "advanced",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `kyc:pep:check` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"kyc:pep:check\" earliest=-7d\n| eval aml_tag=\"pep_on\"\n| eval uc_id=\"22.25.24\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `kyc:pep:check` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"kyc:pep:check\" earliest=-7d\n| eval aml_tag=\"pep_on\"\n| eval uc_id=\"22.25.24\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**PEP Identification and Classification Coverage at Onboarding (EU AMLD / FATF)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `kyc:pep:check` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: kyc:pep:check. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"kyc:pep:check\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.18 (Customer due diligence) is enforced — Splunk UC-22.25.24: PEP Identification and Classification Coverage at Onboarding.",
                  "ea": "Saved search 'UC-22.25.24' running on index=aml kyc:pep:check — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.25",
              "n": "PEP Transaction Monitoring — Elevated Monitoring Rules (EU AMLD / FATF)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"pep_txn\"\n| eval uc_id=\"22.25.25\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"pep_txn\"\n| eval uc_id=\"22.25.25\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**PEP Transaction Monitoring — Elevated Monitoring Rules (EU AMLD / FATF)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: core:wire:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"core:wire:txn\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.18 (Customer due diligence) is enforced — Splunk UC-22.25.25: PEP Transaction Monitoring — Elevated Monitoring Rules.",
                  "ea": "Saved search 'UC-22.25.25' running on index=aml core:wire:txn — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.26",
              "n": "PEP Relationship Mapping and Network Expansion Alerts (EU AMLD / FATF)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `kyc:pep:graph` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"kyc:pep:graph\" earliest=-7d\n| eval aml_tag=\"pep_net\"\n| eval uc_id=\"22.25.26\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `kyc:pep:graph` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"kyc:pep:graph\" earliest=-7d\n| eval aml_tag=\"pep_net\"\n| eval uc_id=\"22.25.26\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**PEP Relationship Mapping and Network Expansion Alerts (EU AMLD / FATF)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `kyc:pep:graph` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: kyc:pep:graph. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"kyc:pep:graph\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.18 (Customer due diligence) is enforced — Splunk UC-22.25.26: PEP Relationship Mapping and Network Expansion Alerts.",
                  "ea": "Saved search 'UC-22.25.26' running on index=aml kyc:pep:graph — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.27",
              "n": "Source of Wealth (SoW) Verification Evidence Completeness (EU AMLD / FATF)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `kyc:sow:document` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"kyc:sow:document\" earliest=-7d\n| eval aml_tag=\"sow\"\n| eval uc_id=\"22.25.27\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `kyc:sow:document` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"kyc:sow:document\" earliest=-7d\n| eval aml_tag=\"sow\"\n| eval uc_id=\"22.25.27\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Source of Wealth (SoW) Verification Evidence Completeness (EU AMLD / FATF)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `kyc:sow:document` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: kyc:sow:document. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"kyc:sow:document\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.27: Source of Wealth (SoW) Verification Evidence Completeness.",
                  "ea": "Saved search 'UC-22.25.27' running on index=aml kyc:sow:document — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.28",
              "n": "Political Exposure Change Detection (Ongoing Screening) (EU AMLD / FATF)",
              "c": "high",
              "f": "advanced",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `kyc:pep:rescreen` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"kyc:pep:rescreen\" earliest=-7d\n| eval aml_tag=\"pep_chg\"\n| eval uc_id=\"22.25.28\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `kyc:pep:rescreen` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"kyc:pep:rescreen\" earliest=-7d\n| eval aml_tag=\"pep_chg\"\n| eval uc_id=\"22.25.28\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Political Exposure Change Detection (Ongoing Screening) (EU AMLD / FATF)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `kyc:pep:rescreen` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: kyc:pep:rescreen. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"kyc:pep:rescreen\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.28: Political Exposure Change Detection (Ongoing Screening).",
                  "ea": "Saved search 'UC-22.25.28' running on index=aml kyc:pep:rescreen — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.29",
              "n": "ML/TF National Risk Assessment Control Mapping Evidence (EU AMLD / NRA)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `aml:risk:register` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"aml:risk:register\" earliest=-7d\n| eval aml_tag=\"nra\"\n| eval uc_id=\"22.25.29\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `aml:risk:register` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"aml:risk:register\" earliest=-7d\n| eval aml_tag=\"nra\"\n| eval uc_id=\"22.25.29\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**ML/TF National Risk Assessment Control Mapping Evidence (EU AMLD / NRA)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `aml:risk:register` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: aml:risk:register. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"aml:risk:register\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.29: ML/TF National Risk Assessment Control Mapping Evidence.",
                  "ea": "Saved search 'UC-22.25.29' running on index=aml aml:risk:register — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.30",
              "n": "Institution-Wide Risk Assessment (IWRA) Control Testing Samples (EU AMLD)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `aml:risk:register` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"aml:risk:register\" earliest=-7d\n| eval aml_tag=\"iwra\"\n| eval uc_id=\"22.25.30\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `aml:risk:register` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"aml:risk:register\" earliest=-7d\n| eval aml_tag=\"iwra\"\n| eval uc_id=\"22.25.30\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Institution-Wide Risk Assessment (IWRA) Control Testing Samples (EU AMLD)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `aml:risk:register` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: aml:risk:register. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"aml:risk:register\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.30: Institution-Wide Risk Assessment (IWRA) Control Testing Samples.",
                  "ea": "Saved search 'UC-22.25.30' running on index=aml aml:risk:register — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.31",
              "n": "Product and Service Risk Rating Changes and Approvals (EU AMLD)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `aml:product:risk` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"aml:product:risk\" earliest=-7d\n| eval aml_tag=\"prod_r\"\n| eval uc_id=\"22.25.31\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `aml:product:risk` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"aml:product:risk\" earliest=-7d\n| eval aml_tag=\"prod_r\"\n| eval uc_id=\"22.25.31\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Product and Service Risk Rating Changes and Approvals (EU AMLD)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `aml:product:risk` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: aml:product:risk. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"aml:product:risk\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.31: Product and Service Risk Rating Changes and Approvals.",
                  "ea": "Saved search 'UC-22.25.31' running on index=aml aml:product:risk — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.32",
              "n": "Geographic Risk Monitoring — High-Risk Jurisdiction Concentration (EU AMLD / FATF)",
              "c": "high",
              "f": "advanced",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"geo\"\n| eval uc_id=\"22.25.32\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"core:wire:txn\" earliest=-7d\n| eval aml_tag=\"geo\"\n| eval uc_id=\"22.25.32\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Geographic Risk Monitoring — High-Risk Jurisdiction Concentration (EU AMLD / FATF)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `core:wire:txn` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: core:wire:txn. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"core:wire:txn\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.32: Geographic Risk Monitoring — High-Risk Jurisdiction Concentration.",
                  "ea": "Saved search 'UC-22.25.32' running on index=aml core:wire:txn — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.33",
              "n": "Delivery Channel Risk — Digital Onboarding Fraud Uplift (EU AMLD)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `kyc:onboarding:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"kyc:onboarding:event\" earliest=-7d\n| eval aml_tag=\"deliv\"\n| eval uc_id=\"22.25.33\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `kyc:onboarding:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"kyc:onboarding:event\" earliest=-7d\n| eval aml_tag=\"deliv\"\n| eval uc_id=\"22.25.33\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Delivery Channel Risk — Digital Onboarding Fraud Uplift (EU AMLD)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `kyc:onboarding:event` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: kyc:onboarding:event. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"kyc:onboarding:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.18",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.18 (Customer due diligence) is enforced — Splunk UC-22.25.33: Delivery Channel Risk — Digital Onboarding Fraud Uplift.",
                  "ea": "Saved search 'UC-22.25.33' running on index=aml kyc:onboarding:event — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.34",
              "n": "New Technology Risk Assessment (VA/VASP/Instant Payments) (EU AMLD / MiCA intersection)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `aml:tech:risk` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"aml:tech:risk\" earliest=-7d\n| eval aml_tag=\"tech\"\n| eval uc_id=\"22.25.34\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `aml:tech:risk` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"aml:tech:risk\" earliest=-7d\n| eval aml_tag=\"tech\"\n| eval uc_id=\"22.25.34\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**New Technology Risk Assessment (VA/VASP/Instant Payments) (EU AMLD / MiCA intersection)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `aml:tech:risk` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: aml:tech:risk. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"aml:tech:risk\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.34: New Technology Risk Assessment (VA/VASP/Instant Payments).",
                  "ea": "Saved search 'UC-22.25.34' running on index=aml aml:tech:risk — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.25.35",
              "n": "Regulatory Change Impact Assessment Tracking for AML Program (EU AMLD)",
              "c": "high",
              "f": "intermediate",
              "v": "Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.",
              "t": "Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686)",
              "d": "`index=aml` `aml:reg:change` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version",
              "q": "index=aml sourcetype=\"aml:reg:change\" earliest=-7d\n| eval aml_tag=\"chg\"\n| eval uc_id=\"22.25.35\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300",
              "m": "(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.",
              "z": "Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk UBA](https://splunkbase.splunk.com/app/2941), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686).\n• Ensure the following data sources are available: `index=aml` `aml:reg:change` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize field extractions and CIM tags; (2) schedule hourly or near-real-time; (3) route alerts to SOC/compliance queues; (4) export evidence packs for supervisory review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aml sourcetype=\"aml:reg:change\" earliest=-7d\n| eval aml_tag=\"chg\"\n| eval uc_id=\"22.25.35\"\n| bin _time span=1h\n| stats count as events, sum(eval(screening_result=\"hit\")) as hits, median(screening_latency_ms) as lat by customer_id, aml_tag, _time\n| eventstats perc95(amount_eur) as p95_amt by aml_tag\n| where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800\n| sort - events\n| head 300\n```\n\nUnderstanding this SPL\n\n**Regulatory Change Impact Assessment Tracking for AML Program (EU AMLD)** — Strengthens anti-money laundering and counter-terrorist financing monitoring, investigations, and supervisory examination readiness across transactions, customers, and list screening.\n\nDocumented **Data sources**: `index=aml` `aml:reg:change` — customer_id, amount_eur, screening_result, screening_latency_ms, list_version. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk UBA (2941), Splunk DB Connect (2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aml; **sourcetype**: aml:reg:change. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aml, sourcetype=\"aml:reg:change\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **aml_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **uc_id** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by customer_id, aml_tag, _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by aml_tag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where (aml_tag=\"struct\" AND amount_eur>9000 AND amount_eur<10000) OR hits>0 OR lat>800` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (hits/lat), Histogram (amount_eur), Table (customer_id)",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk User Behavior Analytics (UBA)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you spot patterns in payments and customers that look like money-laundering risk, so the bank can file what it must and stand up to regulator questions.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "EU AML/CFT framework"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "EU AML",
                  "v": "6AMLD / AMLR 2024",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU AML Art.9 (Internal policies and controls) is enforced — Splunk UC-22.25.35: Regulatory Change Impact Assessment Tracking for AML Program.",
                  "ea": "Saved search 'UC-22.25.35' running on index=aml aml:reg:change — customer_id, amount_eur, screening_result, screening_latency_ms, list_version, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 35,
            "none": 0
          }
        },
        {
          "i": "22.26",
          "n": "Norwegian Regulatory Framework",
          "u": [
            {
              "i": "22.26.1",
              "n": "Classified information system monitoring (NSM RUT)",
              "c": "high",
              "f": "expert",
              "v": "Detects privileged logons and policy changes on hosts designated for classified processing to evidence continuous monitoring aligned with national security system expectations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" earliest=-24h EventCode IN (4624,4625,4672)\n| search ComputerName=\"GOV-CL-*\" OR host=\"GOV-CL-*\"\n| eval logon_type=coalesce(Logon_Type, LogonType)\n| stats count, values(IpAddress) as src_ips by user, ComputerName, logon_type\n| where count>50 OR logon_type IN (\"10\",\"3\")\n| sort - count",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" earliest=-24h EventCode IN (4624,4625,4672)\n| search ComputerName=\"GOV-CL-*\" OR host=\"GOV-CL-*\"\n| eval logon_type=coalesce(Logon_Type, LogonType)\n| stats count, values(IpAddress) as src_ips by user, ComputerName, logon_type\n| where count>50 OR logon_type IN (\"10\",\"3\")\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Classified information system monitoring (NSM RUT)** — Detects privileged logons and policy changes on hosts designated for classified processing to evidence continuous monitoring aligned with national security system expectations.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **logon_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user, ComputerName, logon_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>50 OR logon_type IN (\"10\",\"3\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Classified information system monitoring (NSM RUT)** — Detects privileged logons and policy changes on hosts designated for classified processing to evidence continuous monitoring aligned with national security system expectations.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Sikkerhetsloven; NSM veiledning"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Sikkerhetsloven",
                  "v": "2018",
                  "cl": "§5-3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Sikkerhetsloven §5-3 (Risk assessment and documentation of security level) is enforced — Splunk UC-22.26.1: Classified information system monitoring.",
                  "ea": "Saved search 'UC-22.26.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.2",
              "n": "Security clearance tracking (Sikkerhetsloven)",
              "c": "medium",
              "f": "advanced",
              "v": "Joins authentication success events to HR clearance attributes to highlight accounts active after clearance expiry or role change.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=hr_security OR index=identity sourcetype IN (\"clearance:hr\",\"okta:user_lifecycle\") earliest=-7d\n| eval upn=lower(coalesce(user, profile_login))\n| stats latest(clearance_level) as clr, latest(_time) as hr_ts by upn\n| join type=left upn [| search index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4634) earliest=-7d | eval upn=lower(user) | stats latest(_time) as last_auth by upn]\n| where clr IN (\"None\",\"Revoked\") AND last_auth>hr_ts\n| table upn, clr, hr_ts, last_auth",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr_security OR index=identity sourcetype IN (\"clearance:hr\",\"okta:user_lifecycle\") earliest=-7d\n| eval upn=lower(coalesce(user, profile_login))\n| stats latest(clearance_level) as clr, latest(_time) as hr_ts by upn\n| join type=left upn [| search index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4634) earliest=-7d | eval upn=lower(user) | stats latest(_time) as last_auth by upn]\n| where clr IN (\"None\",\"Revoked\") AND last_auth>hr_ts\n| table upn, clr, hr_ts, last_auth\n```\n\nUnderstanding this SPL\n\n**Security clearance tracking (Sikkerhetsloven)** — Joins authentication success events to HR clearance attributes to highlight accounts active after clearance expiry or role change.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr_security, identity.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr_security, index=identity, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **upn** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by upn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where clr IN (\"None\",\"Revoked\") AND last_auth>hr_ts` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Security clearance tracking (Sikkerhetsloven)**): table upn, clr, hr_ts, last_auth\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security clearance tracking (Sikkerhetsloven)** — Joins authentication success events to HR clearance attributes to highlight accounts active after clearance expiry or role change.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Sikkerhetsloven; NSM veiledning"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Sikkerhetsloven",
                  "v": "2018",
                  "cl": "§5-3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Sikkerhetsloven §5-3 (Risk assessment and documentation of security level) is enforced — Splunk UC-22.26.2: Security clearance tracking.",
                  "ea": "Saved search 'UC-22.26.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.3",
              "n": "NSM reporting compliance (NSM IKT)",
              "c": "low",
              "f": "intermediate",
              "v": "Measures incident queue aging against internal NSM-aligned reporting deadlines including evidence attachments.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "`notable` (tag=\"nsm_reportable\" OR rule_name=\"*national*security*\") earliest=-30d\n| eval detected=_time\n| eval reported=strptime(reported_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval hours=round((reported-detected)/3600,2)\n| eval breach=if(isnull(reported) AND (now()-detected)>86400,1,if(hours>24,1,0))\n| table _time, rule_name, owner, status, hours, breach\n| sort - breach, - hours",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` (tag=\"nsm_reportable\" OR rule_name=\"*national*security*\") earliest=-30d\n| eval detected=_time\n| eval reported=strptime(reported_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval hours=round((reported-detected)/3600,2)\n| eval breach=if(isnull(reported) AND (now()-detected)>86400,1,if(hours>24,1,0))\n| table _time, rule_name, owner, status, hours, breach\n| sort - breach, - hours\n```\n\nUnderstanding this SPL\n\n**NSM reporting compliance (NSM IKT)** — Measures incident queue aging against internal NSM-aligned reporting deadlines including evidence attachments.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **detected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **reported** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NSM reporting compliance (NSM IKT)**): table _time, rule_name, owner, status, hours, breach\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NSM reporting compliance (NSM IKT)** — Measures incident queue aging against internal NSM-aligned reporting deadlines including evidence attachments.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Sikkerhetsloven; NSM veiledning"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Sikkerhetsloven",
                  "v": "2018",
                  "cl": "§5-3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Sikkerhetsloven §5-3 (Risk assessment and documentation of security level) is enforced — Splunk UC-22.26.3: NSM reporting compliance.",
                  "ea": "Saved search 'UC-22.26.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.4",
              "n": "Protective security measures (NSM beskyttelse)",
              "c": "critical",
              "f": "beginner",
              "v": "Detects hardening drift (new services, weak protocols) on systems under protective security baselines.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4768,4769,4776) earliest=-24h\n| stats dc(ComputerName) as hosts, count as krb by user\n| join type=left user [| inputlookup mfa_required_nsm_users.csv | fields user mfa_enforced]\n| where (isnull(mfa_enforced) OR mfa_enforced=\"false\") AND krb>200\n| sort - krb",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4768,4769,4776) earliest=-24h\n| stats dc(ComputerName) as hosts, count as krb by user\n| join type=left user [| inputlookup mfa_required_nsm_users.csv | fields user mfa_enforced]\n| where (isnull(mfa_enforced) OR mfa_enforced=\"false\") AND krb>200\n| sort - krb\n```\n\nUnderstanding this SPL\n\n**Protective security measures (NSM beskyttelse)** — Detects hardening drift (new services, weak protocols) on systems under protective security baselines.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where (isnull(mfa_enforced) OR mfa_enforced=\"false\") AND krb>200` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Protective security measures (NSM beskyttelse)** — Detects hardening drift (new services, weak protocols) on systems under protective security baselines.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Sikkerhetsloven; NSM veiledning"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Sikkerhetsloven",
                  "v": "2018",
                  "cl": "§5-3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Sikkerhetsloven §5-3 (Risk assessment and documentation of security level) is enforced — Splunk UC-22.26.4: Protective security measures.",
                  "ea": "Saved search 'UC-22.26.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.5",
              "n": "Information system accreditation evidence (NSM akkreditering)",
              "c": "high",
              "f": "expert",
              "v": "Aggregates control tests, exceptions, and approvals into a quarterly accreditation evidence view.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=change sourcetype IN (\"snow:chg\",\"servicenow:change\") earliest=-30d (short_description=\"*accred*\" OR cmdb_ci=\"*CL-*\")\n| stats values(state) as states, min(_time) as opened, max(_time) as updated by number, cmdb_ci\n| eval reopened=if(mvcount(mvfilter(match(states,\"(Implement|Reopened)\"))) > 0, 1, 0)\n| table number, cmdb_ci, opened, updated, reopened\n| sort - updated",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=change sourcetype IN (\"snow:chg\",\"servicenow:change\") earliest=-30d (short_description=\"*accred*\" OR cmdb_ci=\"*CL-*\")\n| stats values(state) as states, min(_time) as opened, max(_time) as updated by number, cmdb_ci\n| eval reopened=if(mvcount(mvfilter(match(states,\"(Implement|Reopened)\"))) > 0, 1, 0)\n| table number, cmdb_ci, opened, updated, reopened\n| sort - updated\n```\n\nUnderstanding this SPL\n\n**Information system accreditation evidence (NSM akkreditering)** — Aggregates control tests, exceptions, and approvals into a quarterly accreditation evidence view.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: change.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=change, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by number, cmdb_ci** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **reopened** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Information system accreditation evidence (NSM akkreditering)**): table number, cmdb_ci, opened, updated, reopened\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Information system accreditation evidence (NSM akkreditering)** — Aggregates control tests, exceptions, and approvals into a quarterly accreditation evidence view.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Sikkerhetsloven; NSM veiledning"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Sikkerhetsloven",
                  "v": "2018",
                  "cl": "§6-2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Sikkerhetsloven §6-2 (Protection of classified / security-graded information) is enforced — Splunk UC-22.26.5: Information system accreditation evidence.",
                  "ea": "Saved search 'UC-22.26.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.6",
              "n": "Power system availability monitoring (Kraftberedskap)",
              "c": "medium",
              "f": "advanced",
              "v": "Correlates SCADA-linked outages and restorations to evidence availability monitoring for electricity preparedness.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=ot sourcetype IN (\"opcua:session\",\"scada:hmi:login\") asset=\"GRID-*\" earliest=-24h\n| bin span=15m _time\n| stats dc(session_id) as sessions by _time, asset, user\n| eventstats median(sessions) as med by asset\n| where sessions > med*3 AND sessions>4\n| sort - sessions",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype IN (\"opcua:session\",\"scada:hmi:login\") asset=\"GRID-*\" earliest=-24h\n| bin span=15m _time\n| stats dc(session_id) as sessions by _time, asset, user\n| eventstats median(sessions) as med by asset\n| where sessions > med*3 AND sessions>4\n| sort - sessions\n```\n\nUnderstanding this SPL\n\n**Power system availability monitoring (Kraftberedskap)** — Correlates SCADA-linked outages and restorations to evidence availability monitoring for electricity preparedness.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, asset, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by asset** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where sessions > med*3 AND sessions>4` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Power system availability monitoring (Kraftberedskap)** — Correlates SCADA-linked outages and restorations to evidence availability monitoring for electricity preparedness.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Kraftberedskapsforskriften; NVE"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO KBF",
                  "v": "2012 as amended",
                  "cl": "§6-1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO KBF §6-1 (Informasjonssikkerhet) is enforced — Splunk UC-22.26.6: Power system availability monitoring.",
                  "ea": "Saved search 'UC-22.26.6' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.7",
              "n": "Grid SCADA access control (NVE)",
              "c": "critical",
              "f": "intermediate",
              "v": "Tracks jump-host and VPN sessions targeting SCADA enclaves for least-privilege and session integrity evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=network sourcetype=\"pan:traffic\" earliest=-24h\n| eval src_zone=coalesce(src_zone,\"IT-Unclassified\"), dest_zone=coalesce(dest_zone,\"Unknown\")\n| search src_zone=\"IT-Corporate\" AND dest_zone=\"OT-SCADA\" AND action=\"allow\"\n| lookup ot_segmentation_allowlist.csv src dest dest_port OUTPUT approved\n| where isnull(approved) OR approved=\"false\"\n| stats sum(bytes) as bytes, values(app) as apps by src, dest, dest_port\n| sort - bytes",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:traffic\" earliest=-24h\n| eval src_zone=coalesce(src_zone,\"IT-Unclassified\"), dest_zone=coalesce(dest_zone,\"Unknown\")\n| search src_zone=\"IT-Corporate\" AND dest_zone=\"OT-SCADA\" AND action=\"allow\"\n| lookup ot_segmentation_allowlist.csv src dest dest_port OUTPUT approved\n| where isnull(approved) OR approved=\"false\"\n| stats sum(bytes) as bytes, values(app) as apps by src, dest, dest_port\n| sort - bytes\n```\n\nUnderstanding this SPL\n\n**Grid SCADA access control (NVE)** — Tracks jump-host and VPN sessions targeting SCADA enclaves for least-privilege and session integrity evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:traffic\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **src_zone** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Applies an explicit `search` filter to narrow the current result set.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved) OR approved=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src, dest, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src, Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Grid SCADA access control (NVE)** — Tracks jump-host and VPN sessions targeting SCADA enclaves for least-privilege and session integrity evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Kraftberedskapsforskriften; NVE"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src, Authentication.dest | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO KBF",
                  "v": "2012 as amended",
                  "cl": "§6-1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO KBF §6-1 (Informasjonssikkerhet) is enforced — Splunk UC-22.26.7: Grid SCADA access control.",
                  "ea": "Saved search 'UC-22.26.7' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.8",
              "n": "Emergency preparedness evidence (Kraftberedskap)",
              "c": "critical",
              "f": "beginner",
              "v": "Ingests drill timelines, inject acknowledgements, and recovery checkpoints as auditable preparedness records.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=comms sourcetype IN (\"drill:inject\",\"drill:ack\") earliest=-120d\n| stats earliest(eval(if(match(_raw,\"inject\"),_time,null()))) as t_inject, earliest(eval(if(match(_raw,\"ack\"),_time,null()))) as t_ack by drill_id\n| eval ack_min=round((t_ack-t_inject)/60,1)\n| where isnull(t_ack) OR ack_min>15\n| table drill_id, t_inject, t_ack, ack_min",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=comms sourcetype IN (\"drill:inject\",\"drill:ack\") earliest=-120d\n| stats earliest(eval(if(match(_raw,\"inject\"),_time,null()))) as t_inject, earliest(eval(if(match(_raw,\"ack\"),_time,null()))) as t_ack by drill_id\n| eval ack_min=round((t_ack-t_inject)/60,1)\n| where isnull(t_ack) OR ack_min>15\n| table drill_id, t_inject, t_ack, ack_min\n```\n\nUnderstanding this SPL\n\n**Emergency preparedness evidence (Kraftberedskap)** — Ingests drill timelines, inject acknowledgements, and recovery checkpoints as auditable preparedness records.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: comms.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=comms, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by drill_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ack_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(t_ack) OR ack_min>15` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Emergency preparedness evidence (Kraftberedskap)**): table drill_id, t_inject, t_ack, ack_min\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Emergency preparedness evidence (Kraftberedskap)** — Ingests drill timelines, inject acknowledgements, and recovery checkpoints as auditable preparedness records.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Kraftberedskapsforskriften; NVE"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO KBF",
                  "v": "2012 as amended",
                  "cl": "§6-1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO KBF §6-1 (Informasjonssikkerhet) is enforced — Splunk UC-22.26.8: Emergency preparedness evidence.",
                  "ea": "Saved search 'UC-22.26.8' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.9",
              "n": "NVE reporting compliance (NVE)",
              "c": "high",
              "f": "expert",
              "v": "Validates scheduled regulatory submissions and checksum metadata for completeness against expected reporting calendar.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=proxy sourcetype IN (\"zscaler:web\",\"bluecoat:proxysg:access\") earliest=-30d\n| search url=\"*nve.no*\" OR url=\"*altinn*nve*\"\n| stats dc(user) as users, values(url) as urls by src, action\n| lookup nve_authorized_subnets.csv subnet AS src OUTPUT authorized\n| where authorized!=\"true\" AND action=\"allowed\"\n| sort - users",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype IN (\"zscaler:web\",\"bluecoat:proxysg:access\") earliest=-30d\n| search url=\"*nve.no*\" OR url=\"*altinn*nve*\"\n| stats dc(user) as users, values(url) as urls by src, action\n| lookup nve_authorized_subnets.csv subnet AS src OUTPUT authorized\n| where authorized!=\"true\" AND action=\"allowed\"\n| sort - users\n```\n\nUnderstanding this SPL\n\n**NVE reporting compliance (NVE)** — Validates scheduled regulatory submissions and checksum metadata for completeness against expected reporting calendar.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by src, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where authorized!=\"true\" AND action=\"allowed\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src, Authentication.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NVE reporting compliance (NVE)** — Validates scheduled regulatory submissions and checksum metadata for completeness against expected reporting calendar.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Kraftberedskapsforskriften; NVE"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.src, Authentication.action | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO KBF",
                  "v": "2012 as amended",
                  "cl": "§6-1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO KBF §6-1 (Informasjonssikkerhet) is enforced — Splunk UC-22.26.9: NVE reporting compliance.",
                  "ea": "Saved search 'UC-22.26.9' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.10",
              "n": "Critical infrastructure resilience (RME/KI)",
              "c": "medium",
              "f": "advanced",
              "v": "Links penetration test findings to remediation tickets to show resilience improvement cycles for electricity infrastructure.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=itsi_summary source=\"itsi_grouped_alerts\" earliest=-30d service_title=\"*Grid*\"\n| eval start=strptime(first_event_time,\"%Y-%m-%d %H:%M:%S\")\n| eval end=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| eval mttr_min=round((end-start)/60,1)\n| stats avg(mttr_min) as avg_mttr, perc95(mttr_min) as p95_mttr, count by service_title\n| sort - p95_mttr",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary source=\"itsi_grouped_alerts\" earliest=-30d service_title=\"*Grid*\"\n| eval start=strptime(first_event_time,\"%Y-%m-%d %H:%M:%S\")\n| eval end=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| eval mttr_min=round((end-start)/60,1)\n| stats avg(mttr_min) as avg_mttr, perc95(mttr_min) as p95_mttr, count by service_title\n| sort - p95_mttr\n```\n\nUnderstanding this SPL\n\n**Critical infrastructure resilience (RME/KI)** — Links penetration test findings to remediation tickets to show resilience improvement cycles for electricity infrastructure.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **start** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **end** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mttr_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by service_title** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Critical infrastructure resilience (RME/KI)** — Links penetration test findings to remediation tickets to show resilience improvement cycles for electricity infrastructure.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Kraftberedskapsforskriften; NVE"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO KBF",
                  "v": "2012 as amended",
                  "cl": "§6-1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO KBF §6-1 (Informasjonssikkerhet) is enforced — Splunk UC-22.26.10: Critical infrastructure resilience.",
                  "ea": "Saved search 'UC-22.26.10' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.11",
              "n": "Offshore platform system monitoring (PSA)",
              "c": "low",
              "f": "intermediate",
              "v": "Monitors historian and controller tag changes that could impact process safety on offshore installations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=ot sourcetype=\"engineering:plc\" asset=\"NOR-OFF-*\" earliest=-7d\n| eval wo=coalesce(work_order,\"NONE\")\n| join type=left asset [| search index=change sourcetype=\"snow:chg\" earliest=-30d | rename cmdb_ci as asset | stats latest(number) as chg by asset]\n| where wo=\"NONE\" AND isnull(chg)\n| stats latest(program_hash) as phash by asset, program_name, user",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"engineering:plc\" asset=\"NOR-OFF-*\" earliest=-7d\n| eval wo=coalesce(work_order,\"NONE\")\n| join type=left asset [| search index=change sourcetype=\"snow:chg\" earliest=-30d | rename cmdb_ci as asset | stats latest(number) as chg by asset]\n| where wo=\"NONE\" AND isnull(chg)\n| stats latest(program_hash) as phash by asset, program_name, user\n```\n\nUnderstanding this SPL\n\n**Offshore platform system monitoring (PSA)** — Monitors historian and controller tag changes that could impact process safety on offshore installations.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: engineering:plc. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"engineering:plc\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **wo** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where wo=\"NONE\" AND isnull(chg)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by asset, program_name, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Offshore platform system monitoring (PSA)** — Monitors historian and controller tag changes that could impact process safety on offshore installations.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Petroleumsforskriften; PSA"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Petroleumsforskriften",
                  "v": "1997 as amended",
                  "cl": "§15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Petroleumsforskriften §15 (Health, safety and environmental (HSE) management requirements) is enforced — Splunk UC-22.26.11: Offshore platform system monitoring.",
                  "ea": "Saved search 'UC-22.26.11' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.12",
              "n": "Safety-critical system integrity (Petroleumsforskriften)",
              "c": "critical",
              "f": "beginner",
              "v": "Flags overdue security patches and unsigned firmware changes on safety-related hosts.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=os sourcetype IN (\"drilling:app:status\",\"WinEventLog:Application\") host=\"DRILL-*\" earliest=-24h\n| rex field=_raw \"Version=(?<app_ver>\\d+\\.\\d+\\.\\d+)\"\n| stats latest(app_ver) as observed by host\n| lookup drilling_control_baseline.csv host OUTPUT expected_ver\n| where isnotnull(expected_ver) AND observed!=expected_ver",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype IN (\"drilling:app:status\",\"WinEventLog:Application\") host=\"DRILL-*\" earliest=-24h\n| rex field=_raw \"Version=(?<app_ver>\\d+\\.\\d+\\.\\d+)\"\n| stats latest(app_ver) as observed by host\n| lookup drilling_control_baseline.csv host OUTPUT expected_ver\n| where isnotnull(expected_ver) AND observed!=expected_ver\n```\n\nUnderstanding this SPL\n\n**Safety-critical system integrity (Petroleumsforskriften)** — Flags overdue security patches and unsigned firmware changes on safety-related hosts.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **host** filter: DRILL-*.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(expected_ver) AND observed!=expected_ver` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Safety-critical system integrity (Petroleumsforskriften)** — Flags overdue security patches and unsigned firmware changes on safety-related hosts.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Petroleumsforskriften; PSA"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.dest | sort - agg_value",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Petroleumsforskriften",
                  "v": "1997 as amended",
                  "cl": "§15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Petroleumsforskriften §15 (Health, safety and environmental (HSE) management requirements) is enforced — Splunk UC-22.26.12: Safety-critical system integrity.",
                  "ea": "Saved search 'UC-22.26.12' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.13",
              "n": "PSA compliance evidence (PSA)",
              "c": "high",
              "f": "expert",
              "v": "Packages access reviews, alarm management changes, and temporary bypasses for PSA inspection readiness.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=app sourcetype=\"eptw:audit\" site=\"NOR-OIL-*\" earliest=-30d\n| transaction permit_id maxspan=2h maxpause=10m\n| eval mfa_ok=if(mvfind(_raw,\"mfa_result=success\")>=0,1,0)\n| stats min(mfa_ok) as chain_ok by permit_id\n| where chain_ok=0\n| table permit_id",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"eptw:audit\" site=\"NOR-OIL-*\" earliest=-30d\n| transaction permit_id maxspan=2h maxpause=10m\n| eval mfa_ok=if(mvfind(_raw,\"mfa_result=success\")>=0,1,0)\n| stats min(mfa_ok) as chain_ok by permit_id\n| where chain_ok=0\n| table permit_id\n```\n\nUnderstanding this SPL\n\n**PSA compliance evidence (PSA)** — Packages access reviews, alarm management changes, and temporary bypasses for PSA inspection readiness.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: eptw:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"eptw:audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Groups related events into transactions — prefer `maxspan`/`maxpause`/`maxevents` for bounded memory.\n• `eval` defines or adjusts **mfa_ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by permit_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where chain_ok=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PSA compliance evidence (PSA)**): table permit_id\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**PSA compliance evidence (PSA)** — Packages access reviews, alarm management changes, and temporary bypasses for PSA inspection readiness.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Petroleumsforskriften; PSA"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Petroleumsforskriften",
                  "v": "1997 as amended",
                  "cl": "§15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Petroleumsforskriften §15 (Health, safety and environmental (HSE) management requirements) is enforced — Splunk UC-22.26.13: PSA compliance evidence.",
                  "ea": "Saved search 'UC-22.26.13' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.14",
              "n": "HSE system audit trails (HSE)",
              "c": "critical",
              "f": "advanced",
              "v": "Tracks privileged edits to occupational health and safety systems tied to offshore operations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=app sourcetype=\"hse:incident\" earliest=-180d status=\"closed\"\n| join type=inner incident_id [| search index=app sourcetype=\"hse:audit\" action=\"update\" | stats max(_time) as audit_t by incident_id]\n| eval closed_epoch=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| where audit_t > closed_epoch\n| table incident_id, closed_at, audit_t, actor",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"hse:incident\" earliest=-180d status=\"closed\"\n| join type=inner incident_id [| search index=app sourcetype=\"hse:audit\" action=\"update\" | stats max(_time) as audit_t by incident_id]\n| eval closed_epoch=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| where audit_t > closed_epoch\n| table incident_id, closed_at, audit_t, actor\n```\n\nUnderstanding this SPL\n\n**HSE system audit trails (HSE)** — Tracks privileged edits to occupational health and safety systems tied to offshore operations.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: hse:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"hse:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **closed_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where audit_t > closed_epoch` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **HSE system audit trails (HSE)**): table incident_id, closed_at, audit_t, actor\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**HSE system audit trails (HSE)** — Tracks privileged edits to occupational health and safety systems tied to offshore operations.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Petroleumsforskriften; PSA"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Petroleumsforskriften",
                  "v": "1997 as amended",
                  "cl": "§15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Petroleumsforskriften §15 (Health, safety and environmental (HSE) management requirements) is enforced — Splunk UC-22.26.14: HSE system audit trails.",
                  "ea": "Saved search 'UC-22.26.14' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.15",
              "n": "Petroleum facility access control (Petroleumsforskriften)",
              "c": "low",
              "f": "intermediate",
              "v": "Correlates physical badge events with logical sessions into engineering workstations for facility access control.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=pacs sourcetype=\"badge:swipe\" site=\"NOR-OIL-*\" earliest=-24h\n| eval day=strftime(_time,\"%Y-%m-%d\"), user=lower(user)\n| stats earliest(_time) as first_in, latest(_time) as last_in by user, day, site\n| join type=left user, day [| search index=vpn sourcetype=\"paloalto:globalprotect\" log_type=\"connect\" earliest=-2d\n    | eval day=strftime(_time,\"%Y-%m-%d\"), user=lower(user)\n    | stats dc(public_ip) as vpn_ips by user, day]\n| where isnull(vpn_ips) AND (last_in-first_in)>3600\n| table user, day, site, first_in, last_in, vpn_ips",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pacs sourcetype=\"badge:swipe\" site=\"NOR-OIL-*\" earliest=-24h\n| eval day=strftime(_time,\"%Y-%m-%d\"), user=lower(user)\n| stats earliest(_time) as first_in, latest(_time) as last_in by user, day, site\n| join type=left user, day [| search index=vpn sourcetype=\"paloalto:globalprotect\" log_type=\"connect\" earliest=-2d\n    | eval day=strftime(_time,\"%Y-%m-%d\"), user=lower(user)\n    | stats dc(public_ip) as vpn_ips by user, day]\n| where isnull(vpn_ips) AND (last_in-first_in)>3600\n| table user, day, site, first_in, last_in, vpn_ips\n```\n\nUnderstanding this SPL\n\n**Petroleum facility access control (Petroleumsforskriften)** — Correlates physical badge events with logical sessions into engineering workstations for facility access control.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pacs; **sourcetype**: badge:swipe. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pacs, sourcetype=\"badge:swipe\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **day** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by user, day, site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(vpn_ips) AND (last_in-first_in)>3600` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Petroleum facility access control (Petroleumsforskriften)**): table user, day, site, first_in, last_in, vpn_ips\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Petroleum facility access control (Petroleumsforskriften)** — Correlates physical badge events with logical sessions into engineering workstations for facility access control.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Petroleumsforskriften; PSA"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Petroleumsforskriften",
                  "v": "1997 as amended",
                  "cl": "§15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Petroleumsforskriften §15 (Health, safety and environmental (HSE) management requirements) is enforced — Splunk UC-22.26.15: Petroleum facility access control.",
                  "ea": "Saved search 'UC-22.26.15' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.16",
              "n": "Datatilsynet breach reporting (Personopplysningsloven)",
              "c": "critical",
              "f": "beginner",
              "v": "Tracks breach assessment milestones and Datatilsynet notification timestamps for supervisory evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=privacy sourcetype=\"breach:ticket\" jurisdiction=\"NO\" earliest=-180d\n| eval confirmed=strptime(confirmed_at,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval filed=strptime(datatilsynet_filed_at,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval hours=round((filed-confirmed)/3600,2)\n| eval breach_sla=if(isnull(filed) OR hours>72,1,0)\n| table breach_id, confirmed_at, datatilsynet_filed_at, hours, breach_sla",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"breach:ticket\" jurisdiction=\"NO\" earliest=-180d\n| eval confirmed=strptime(confirmed_at,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval filed=strptime(datatilsynet_filed_at,\"%Y-%m-%dT%H:%M:%SZ\")\n| eval hours=round((filed-confirmed)/3600,2)\n| eval breach_sla=if(isnull(filed) OR hours>72,1,0)\n| table breach_id, confirmed_at, datatilsynet_filed_at, hours, breach_sla\n```\n\nUnderstanding this SPL\n\n**Datatilsynet breach reporting (Personopplysningsloven)** — Tracks breach assessment milestones and Datatilsynet notification timestamps for supervisory evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: breach:ticket. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"breach:ticket\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **confirmed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **filed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach_sla** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Datatilsynet breach reporting (Personopplysningsloven)**): table breach_id, confirmed_at, datatilsynet_filed_at, hours, breach_sla\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Datatilsynet breach reporting (Personopplysningsloven)** — Tracks breach assessment milestones and Datatilsynet notification timestamps for supervisory evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Personopplysningsloven; Datatilsynet"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Personopplysningsloven",
                  "v": "2018",
                  "cl": "§8",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Personopplysningsloven §8 (Processing of special categories of personal data) is enforced — Splunk UC-22.26.16: Datatilsynet breach reporting.",
                  "ea": "Saved search 'UC-22.26.16' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.17",
              "n": "Altinn-based data handling compliance (Altinn)",
              "c": "high",
              "f": "expert",
              "v": "Monitors API usage to national digital channels for purpose limitation and logging of national identifier references.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=integration sourcetype=\"http:json\" uri=\"*/altinn/*\" earliest=-7d\n| spath input=_raw path=body.fields{} output=fields_mv\n| eval field_count=mvcount(fields_mv)\n| stats max(field_count) as max_fields by correlation_id, service\n| lookup altinn_schema_allowlist.csv service OUTPUT max_fields as allowed_max\n| where max_fields > allowed_max",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=integration sourcetype=\"http:json\" uri=\"*/altinn/*\" earliest=-7d\n| spath input=_raw path=body.fields{} output=fields_mv\n| eval field_count=mvcount(fields_mv)\n| stats max(field_count) as max_fields by correlation_id, service\n| lookup altinn_schema_allowlist.csv service OUTPUT max_fields as allowed_max\n| where max_fields > allowed_max\n```\n\nUnderstanding this SPL\n\n**Altinn-based data handling compliance (Altinn)** — Monitors API usage to national digital channels for purpose limitation and logging of national identifier references.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: integration; **sourcetype**: http:json. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=integration, sourcetype=\"http:json\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• `eval` defines or adjusts **field_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by correlation_id, service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where max_fields > allowed_max` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Altinn-based data handling compliance (Altinn)** — Monitors API usage to national digital channels for purpose limitation and logging of national identifier references.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Personopplysningsloven; Datatilsynet"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Personopplysningsloven",
                  "v": "2018",
                  "cl": "§8",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Personopplysningsloven §8 (Processing of special categories of personal data) is enforced — Splunk UC-22.26.17: Altinn-based data handling compliance.",
                  "ea": "Saved search 'UC-22.26.17' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.18",
              "n": "National ID number protection (fødselsnummer)",
              "c": "medium",
              "f": "advanced",
              "v": "Detects national identifier patterns in application and web logs to trigger immediate redaction workflows.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "(index=app OR index=web) earliest=-24h\n| regex _raw=\"\\b(0[1-9]|[12]\\d|3[01])(0[1-9]|1[0-2])\\d{7}\\b\"\n| stats count by index, sourcetype, host\n| sort - count",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n(index=app OR index=web) earliest=-24h\n| regex _raw=\"\\b(0[1-9]|[12]\\d|3[01])(0[1-9]|1[0-2])\\d{7}\\b\"\n| stats count by index, sourcetype, host\n| sort - count\n```\n\nUnderstanding this SPL\n\n**National ID number protection (fødselsnummer)** — Detects national identifier patterns in application and web logs to trigger immediate redaction workflows.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app, web.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, index=web, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters rows matching a pattern with `regex`.\n• `stats` rolls up events into metrics; results are split **by index, sourcetype, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**National ID number protection (fødselsnummer)** — Detects national identifier patterns in application and web logs to trigger immediate redaction workflows.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. 
**App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Personopplysningsloven; Datatilsynet"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Personopplysningsloven",
                  "v": "2018",
                  "cl": "§8",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Personopplysningsloven §8 (Processing of special categories of personal data) is enforced — Splunk UC-22.26.18: National ID number protection.",
                  "ea": "Saved search 'UC-22.26.18' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.19",
              "n": "Sector-specific processing rules (Personopplysningsloven)",
              "c": "low",
              "f": "intermediate",
              "v": "Maps processing activities to sector codes via lookup to evidence sector-specific obligations.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=emr sourcetype IN (\"epic:access\",\"cerner:audit\") earliest=-24h\n| lookup clinical_shift_roster.csv user OUTPUT shift_start, shift_end, ward\n| eval on_shift=if(_time>=shift_start AND _time<=shift_end,1,0)\n| where on_shift=0 AND action=\"READ\" AND patient_unit!=ward\n| stats count by user, patient_unit, ward",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=emr sourcetype IN (\"epic:access\",\"cerner:audit\") earliest=-24h\n| lookup clinical_shift_roster.csv user OUTPUT shift_start, shift_end, ward\n| eval on_shift=if(_time>=shift_start AND _time<=shift_end,1,0)\n| where on_shift=0 AND action=\"READ\" AND patient_unit!=ward\n| stats count by user, patient_unit, ward\n```\n\nUnderstanding this SPL\n\n**Sector-specific processing rules (Personopplysningsloven)** — Maps processing activities to sector codes via lookup to evidence sector-specific obligations.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: emr.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=emr, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **on_shift** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where on_shift=0 AND action=\"READ\" AND patient_unit!=ward` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, patient_unit, ward** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Sector-specific processing rules (Personopplysningsloven)** — Maps processing activities to sector codes via lookup to evidence sector-specific obligations.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Personopplysningsloven; Datatilsynet"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Personopplysningsloven",
                  "v": "2018",
                  "cl": "§8",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Personopplysningsloven §8 (Processing of special categories of personal data) is enforced — Splunk UC-22.26.19: Sector-specific processing rules.",
                  "ea": "Saved search 'UC-22.26.19' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.26.20",
              "n": "Cross-border transfer to non-EU/EEA (Personopplysningsloven)",
              "c": "critical",
              "f": "beginner",
              "v": "Validates transfer tickets include documented safeguards before data leaves the EEA boundary.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports",
              "q": "index=proxy sourcetype=\"zscaler:web\" earliest=-24h\n| iplocation dest_host\n| lookup eea_countries.csv Country OUTPUT in_eea\n| where in_eea!=\"true\"\n| lookup transfer_register.csv dest_host OUTPUT mechanism\n| where isnull(mechanism) OR mechanism=\"pending\"\n| stats count by dest_host, Country, mechanism",
              "m": "(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.",
              "z": "Bar chart (events by host), Table (top users), Single value (policy violations).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Unix and Linux](https://splunkbase.splunk.com/app/833), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain supporting lookups (classified hosts, sectors, transfers) aligned to Norwegian regulatory language. (2) Schedule searches with Europe/Oslo time semantics for reporting cutoffs. (3) Route ES notables to the correct national security, OT, or privacy queues. (4) Quarterly false-positive tuning with HR, operations, and legal stakeholders.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"zscaler:web\" earliest=-24h\n| iplocation dest_host\n| lookup eea_countries.csv Country OUTPUT in_eea\n| where in_eea!=\"true\"\n| lookup transfer_register.csv dest_host OUTPUT mechanism\n| where isnull(mechanism) OR mechanism=\"pending\"\n| stats count by dest_host, Country, mechanism\n```\n\nUnderstanding this SPL\n\n**Cross-border transfer to non-EU/EEA (Personopplysningsloven)** — Validates transfer tickets include documented safeguards before data leaves the EEA boundary.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: zscaler:web. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"zscaler:web\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Pipeline stage (see **Cross-border transfer to non-EU/EEA (Personopplysningsloven)**): iplocation dest_host\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_eea!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(mechanism) OR mechanism=\"pending\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by dest_host, Country, mechanism** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cross-border transfer to non-EU/EEA (Personopplysningsloven)** — Validates transfer tickets include documented safeguards before data leaves the EEA boundary.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, OT/ICS system logs, power grid SCADA, petroleum control systems, government system audit trails, identity management systems, security incident reports. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (events by host), Table (top users), Single value (policy violations).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national security and critical-sector monitoring rules are met with the events you keep in one place for review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Personopplysningsloven; Datatilsynet"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Personopplysningsloven",
                  "v": "2018",
                  "cl": "§8",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Personopplysningsloven §8 (Processing of special categories of personal data) is enforced — Splunk UC-22.26.20: Cross-border transfer to non-EU/EEA.",
                  "ea": "Saved search 'UC-22.26.20' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 20,
            "none": 0
          }
        },
        {
          "i": "22.27",
          "n": "UK Regulations (NIS + FCA/PRA)",
          "u": [
            {
              "i": "22.27.1",
              "n": "OES security measures monitoring (UK NIS Regulations)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for oes security measures monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" earliest=-34h EventCode IN (4624,4625,4648)\n| lookup uk_oes_assets.csv host OUTPUT oes_tier\n| where isnotnull(oes_tier)\n| stats dc(IpAddress) as src_ips, count by user, host, oes_tier\n| where count>100\n| sort - count",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" earliest=-34h EventCode IN (4624,4625,4648)\n| lookup uk_oes_assets.csv host OUTPUT oes_tier\n| where isnotnull(oes_tier)\n| stats dc(IpAddress) as src_ips, count by user, host, oes_tier\n| where count>100\n| sort - count\n```\n\nUnderstanding this SPL\n\n**OES security measures monitoring (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for oes security measures monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(oes_tier)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user, host, oes_tier** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>100` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user, All_Changes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OES security measures monitoring (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for oes security measures monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS Regulations (UK)"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user, All_Changes.dest | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "UK NIS",
                  "v": "2018",
                  "cl": "Reg.10",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK NIS Reg.10 (OES security duties) is enforced — Splunk UC-22.27.1: OES security measures monitoring.",
                  "ea": "Saved search 'UC-22.27.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.2",
              "n": "Digital service provider compliance checks (UK NIS Regulations)",
              "c": "medium",
              "f": "beginner",
              "v": "Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for digital service provider compliance checks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=cloudflare sourcetype=\"cloudflare:audit\" OR index=aws sourcetype=\"aws:cloudtrail\" earliest=-24h\n| eval dsp=if(match(source,\"cloudflare\"),\"CF\",\"AWS\")\n| stats count by dsp, action, user\n| lookup uk_dsp_register.csv service OUTPUT registered\n| where registered=\"true\" AND action IN (\"Create\",\"Delete\",\"Modify\")\n| sort - count",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloudflare sourcetype=\"cloudflare:audit\" OR index=aws sourcetype=\"aws:cloudtrail\" earliest=-24h\n| eval dsp=if(match(source,\"cloudflare\"),\"CF\",\"AWS\")\n| stats count by dsp, action, user\n| lookup uk_dsp_register.csv service OUTPUT registered\n| where registered=\"true\" AND action IN (\"Create\",\"Delete\",\"Modify\")\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Digital service provider compliance checks (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for digital service provider compliance checks.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloudflare, aws; **sourcetype**: cloudflare:audit, aws:cloudtrail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloudflare, index=aws, sourcetype=\"cloudflare:audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **dsp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by dsp, action, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where registered=\"true\" AND action IN (\"Create\",\"Delete\",\"Modify\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Digital service provider compliance checks (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for digital service provider compliance checks.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS Regulations (UK)"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "UK NIS",
                  "v": "2018",
                  "cl": "Reg.10",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK NIS Reg.10 (OES security duties) is enforced — Splunk UC-22.27.2: Digital service provider compliance checks.",
                  "ea": "Saved search 'UC-22.27.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.3",
              "n": "NCSC CAF outcome evidence tagging (UK NIS Regulations)",
              "c": "low",
              "f": "expert",
              "v": "Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for ncsc caf outcome evidence tagging.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=grc sourcetype=\"caf:control_status\" earliest=-90d\n| stats latest(status) as st, latest(evidence_ref) as ev by control_id, system\n| where st IN (\"Partial\",\"NotMet\")\n| lookup ncsc_caf_control_map.csv control_id OUTPUT outcome\n| stats count by outcome, st",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"caf:control_status\" earliest=-90d\n| stats latest(status) as st, latest(evidence_ref) as ev by control_id, system\n| where st IN (\"Partial\",\"NotMet\")\n| lookup ncsc_caf_control_map.csv control_id OUTPUT outcome\n| stats count by outcome, st\n```\n\nUnderstanding this SPL\n\n**NCSC CAF outcome evidence tagging (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for ncsc caf outcome evidence tagging.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: caf:control_status. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"caf:control_status\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by control_id, system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st IN (\"Partial\",\"NotMet\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by outcome, st** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NCSC CAF outcome evidence tagging (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for ncsc caf outcome evidence tagging.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS Regulations (UK)"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "UK NIS",
                  "v": "2018",
                  "cl": "Reg.10",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK NIS Reg.10 (OES security duties) is enforced — Splunk UC-22.27.3: NCSC CAF outcome evidence tagging.",
                  "ea": "Saved search 'UC-22.27.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.4",
              "n": "Network and information systems incident reporting (UK NIS Regulations)",
              "c": "critical",
              "f": "advanced",
              "v": "Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for network and information systems incident reporting.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "`notable` (rule_name=\"*NIS*\" OR tag=\"uk_nis_incident\") earliest=-30d\n| eval hours=round((now()-_time)/3600,2)\n| eval overdue=if(status IN (\"New\",\"In Progress\") AND hours>12,1,0)\n| table _time, rule_name, owner, status, hours, overdue\n| sort - overdue, - hours",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n`notable` (rule_name=\"*NIS*\" OR tag=\"uk_nis_incident\") earliest=-30d\n| eval hours=round((now()-_time)/3600,2)\n| eval overdue=if(status IN (\"New\",\"In Progress\") AND hours>12,1,0)\n| table _time, rule_name, owner, status, hours, overdue\n| sort - overdue, - hours\n```\n\nUnderstanding this SPL\n\n**Network and information systems incident reporting (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for network and information systems incident reporting.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Invokes macro `notable` — in Search, use the UI or expand to inspect the underlying SPL.\n• `eval` defines or adjusts **hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Network and information systems incident reporting (UK NIS Regulations)**): table _time, rule_name, owner, status, hours, overdue\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Network and information systems incident reporting (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for network and information systems incident reporting.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS Regulations (UK)"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "UK NIS",
                  "v": "2018",
                  "cl": "Reg.11",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK NIS Reg.11 (Incident reporting) is enforced — Splunk UC-22.27.4: Network and information systems incident reporting.",
                  "ea": "Saved search 'UC-22.27.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.5",
              "n": "Security of essential services mapping (UK NIS Regulations)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for security of essential services mapping.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=cmdb sourcetype=\"service:dependency\" OR index=itsi_grouped_alerts earliest=-7d\n| stats values(parent_service) as upstream, values(child_service) as downstream by business_service\n| search business_service=\"*essential*\"\n| mvexpand downstream\n| stats dc(downstream) as dep_count by business_service\n| sort - dep_count",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cmdb sourcetype=\"service:dependency\" OR index=itsi_grouped_alerts earliest=-7d\n| stats values(parent_service) as upstream, values(child_service) as downstream by business_service\n| search business_service=\"*essential*\"\n| mvexpand downstream\n| stats dc(downstream) as dep_count by business_service\n| sort - dep_count\n```\n\nUnderstanding this SPL\n\n**Security of essential services mapping (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for security of essential services mapping.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cmdb, itsi_grouped_alerts; **sourcetype**: service:dependency. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cmdb, index=itsi_grouped_alerts, sourcetype=\"service:dependency\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by business_service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• `stats` rolls up events into metrics; results are split **by business_service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security of essential services mapping (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for security of essential services mapping.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS Regulations (UK)"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "UK NIS",
                  "v": "2018",
                  "cl": "Reg.10",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK NIS Reg.10 (OES security duties) is enforced — Splunk UC-22.27.5: Security of essential services mapping.",
                  "ea": "Saved search 'UC-22.27.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.6",
              "n": "Threat intelligence sharing ingestion for OES (UK NIS Regulations)",
              "c": "medium",
              "f": "beginner",
              "v": "Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for threat intelligence sharing ingestion for oes.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=stix sourcetype=\"taxii:feed\" OR index=threat sourcetype=\"misp:alert\" earliest=-7d\n| stats count by feed_name, indicator_type, severity\n| lookup uk_oes_threat_sharing.csv feed_name OUTPUT oes_subscriber\n| where isnotnull(oes_subscriber)\n| timechart span=1d sum(count) by feed_name",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=stix sourcetype=\"taxii:feed\" OR index=threat sourcetype=\"misp:alert\" earliest=-7d\n| stats count by feed_name, indicator_type, severity\n| lookup uk_oes_threat_sharing.csv feed_name OUTPUT oes_subscriber\n| where isnotnull(oes_subscriber)\n| timechart span=1d sum(count) by feed_name\n```\n\nUnderstanding this SPL\n\n**Threat intelligence sharing ingestion for OES (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for threat intelligence sharing ingestion for oes.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: stix, threat; **sourcetype**: taxii:feed, misp:alert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=stix, index=threat, sourcetype=\"taxii:feed\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by feed_name, indicator_type, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(oes_subscriber)` — typically the threshold or rule expression for this monitoring goal.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by feed_name** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Threat intelligence sharing ingestion for OES (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for threat intelligence sharing ingestion for oes.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS Regulations (UK)"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=1d | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "UK NIS",
                  "v": "2018",
                  "cl": "Reg.10",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK NIS Reg.10 (OES security duties) is enforced — Splunk UC-22.27.6: Threat intelligence sharing ingestion for OES.",
                  "ea": "Saved search 'UC-22.27.6' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.7",
              "n": "Supply chain security for OES dependencies (UK NIS Regulations)",
              "c": "critical",
              "f": "expert",
              "v": "Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for supply chain security for oes dependencies.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" earliest=-30d eventName IN (\"PutBucketPolicy\",\"AuthorizeSecurityGroupIngress\")\n| stats values(requestParameters) as rp by userIdentity.arn, eventSource\n| lookup uk_oes_supplier_arn.csv arn AS `userIdentity.arn` OUTPUT supplier_name\n| where isnotnull(supplier_name)\n| eval risk_len=len(rp)\n| where risk_len>400",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" earliest=-30d eventName IN (\"PutBucketPolicy\",\"AuthorizeSecurityGroupIngress\")\n| stats values(requestParameters) as rp by userIdentity.arn, eventSource\n| lookup uk_oes_supplier_arn.csv arn AS `userIdentity.arn` OUTPUT supplier_name\n| where isnotnull(supplier_name)\n| eval risk_len=len(rp)\n| where risk_len>400\n```\n\nUnderstanding this SPL\n\n**Supply chain security for OES dependencies (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for supply chain security for oes dependencies.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn, eventSource** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(supplier_name)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **risk_len** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where risk_len>400` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Supply chain security for OES dependencies (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for supply chain security for oes dependencies.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS Regulations (UK)"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "UK NIS",
                  "v": "2018",
                  "cl": "Reg.10",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK NIS Reg.10 (OES security duties) is enforced — Splunk UC-22.27.7: Supply chain security for OES dependencies.",
                  "ea": "Saved search 'UC-22.27.7' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.8",
              "n": "Capacity and resilience headroom tracking (UK NIS Regulations)",
              "c": "critical",
              "f": "advanced",
              "v": "Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for capacity and resilience headroom tracking.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=itsi_summary source=service_health_monitor earliest=-7d\n| where service_title=\"*OES*\"\n| eval headroom=100-health_score\n| timechart span=15m perc95(headroom) by service_title",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary source=service_health_monitor earliest=-7d\n| where service_title=\"*OES*\"\n| eval headroom=100-health_score\n| timechart span=15m perc95(headroom) by service_title\n```\n\nUnderstanding this SPL\n\n**Capacity and resilience headroom tracking (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for capacity and resilience headroom tracking.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where service_title=\"*OES*\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **headroom** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `timechart` plots the metric over time using **span=15m** buckets with a separate series **by service_title** — ideal for trending and alerting on this use case.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=15m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Capacity and resilience headroom tracking (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for capacity and resilience headroom tracking.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS Regulations (UK)"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=15m | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "UK NIS",
                  "v": "2018",
                  "cl": "Reg.10",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK NIS Reg.10 (OES security duties) is enforced — Splunk UC-22.27.8: Capacity and resilience headroom tracking.",
                  "ea": "Saved search 'UC-22.27.8' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.9",
              "n": "Competent authority notification completeness (UK NIS Regulations)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for competent authority notification completeness.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=grc sourcetype=\"nis:notification\" earliest=-365d\n| eval filed=strptime(filed_competent_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval delta_h=round((filed-detected)/3600,2)\n| where isnull(filed) OR delta_h>72\n| table incident_ref, detected_at, filed_competent_authority_at, delta_h",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"nis:notification\" earliest=-365d\n| eval filed=strptime(filed_competent_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval delta_h=round((filed-detected)/3600,2)\n| where isnull(filed) OR delta_h>72\n| table incident_ref, detected_at, filed_competent_authority_at, delta_h\n```\n\nUnderstanding this SPL\n\n**Competent authority notification completeness (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for competent authority notification completeness.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: nis:notification. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"nis:notification\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **filed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **detected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **delta_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(filed) OR delta_h>72` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Competent authority notification completeness (UK NIS Regulations)**): table incident_ref, detected_at, filed_competent_authority_at, delta_h\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Competent authority notification completeness (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for competent authority notification completeness.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS Regulations (UK)"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "UK NIS",
                  "v": "2018",
                  "cl": "Reg.10",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK NIS Reg.10 (OES security duties) is enforced — Splunk UC-22.27.9: Competent authority notification completeness.",
                  "ea": "Saved search 'UC-22.27.9' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.10",
              "n": "NIS audit evidence correlation (UK NIS Regulations)",
              "c": "medium",
              "f": "beginner",
              "v": "Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for nis audit evidence correlation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=audit sourcetype IN (\"nessus:scan\",\"qualys:vm\",\"snow:chg\") earliest=-30d\n| eval family=case(match(sourcetype,\"nessus|qualys\"),\"Vuln\",match(sourcetype,\"chg\"),\"Change\",1=1,\"Other\")\n| stats count by family, dest_host\n| lookup uk_nis_audit_scope.csv dest_host OUTPUT in_scope\n| where in_scope=\"true\"\n| sort - count",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype IN (\"nessus:scan\",\"qualys:vm\",\"snow:chg\") earliest=-30d\n| eval family=case(match(sourcetype,\"nessus|qualys\"),\"Vuln\",match(sourcetype,\"chg\"),\"Change\",1=1,\"Other\")\n| stats count by family, dest_host\n| lookup uk_nis_audit_scope.csv dest_host OUTPUT in_scope\n| where in_scope=\"true\"\n| sort - count\n```\n\nUnderstanding this SPL\n\n**NIS audit evidence correlation (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for nis audit evidence correlation.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **family** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by family, dest_host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NIS audit evidence correlation (UK NIS Regulations)** — Supports UK UK NIS Regulations supervisory expectations by correlating technical telemetry with governance records for nis audit evidence correlation.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS Regulations (UK)"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "UK NIS",
                  "v": "2018",
                  "cl": "Reg.10",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK NIS Reg.10 (OES security duties) is enforced — Splunk UC-22.27.10: NIS audit evidence correlation.",
                  "ea": "Saved search 'UC-22.27.10' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.11",
              "n": "Important business service mapping health (FCA operational resilience)",
              "c": "low",
              "f": "expert",
              "v": "Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for important business service mapping health.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=itsi_summary service_title=\"*IBS*\" earliest=-30d\n| stats latest(health_score) as hs, latest(service_description) as desc by service_title\n| eval missing=if(isnull(desc) OR desc=\"\",1,0)\n| where missing=1 OR hs<70\n| table service_title, hs, missing",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary service_title=\"*IBS*\" earliest=-30d\n| stats latest(health_score) as hs, latest(service_description) as desc by service_title\n| eval missing=if(isnull(desc) OR desc=\"\",1,0)\n| where missing=1 OR hs<70\n| table service_title, hs, missing\n```\n\nUnderstanding this SPL\n\n**Important business service mapping health (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for important business service mapping health.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by service_title** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **missing** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where missing=1 OR hs<70` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Important business service mapping health (FCA operational resilience)**): table service_title, hs, missing\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Important business service mapping health (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for important business service mapping health.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FCA SS1/21 operational resilience"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SS1/21",
                  "v": "2021",
                  "cl": "§1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SS1/21 §1.1 (Identify important business services) is enforced — Splunk UC-22.27.11: Important business service mapping health.",
                  "ea": "Saved search 'UC-22.27.11' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.12",
              "n": "Impact tolerance testing evidence (FCA operational resilience)",
              "c": "critical",
              "f": "advanced",
              "v": "Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for impact tolerance testing evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=test sourcetype=\"resilience:scenario\" earliest=-365d\n| eval tol=strptime(impact_tolerance_breach_at,\"%Y-%m-%d %H:%M:%S\")\n| stats count(eval(breach=\"true\")) as breaches, count as runs by ibs_name\n| eval breach_rate=round(100*breaches/runs,2)\n| where breach_rate>0\n| sort - breach_rate",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=test sourcetype=\"resilience:scenario\" earliest=-365d\n| eval tol=strptime(impact_tolerance_breach_at,\"%Y-%m-%d %H:%M:%S\")\n| stats count(eval(breach=\"true\")) as breaches, count as runs by ibs_name\n| eval breach_rate=round(100*breaches/runs,2)\n| where breach_rate>0\n| sort - breach_rate\n```\n\nUnderstanding this SPL\n\n**Impact tolerance testing evidence (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for impact tolerance testing evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: test; **sourcetype**: resilience:scenario. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=test, sourcetype=\"resilience:scenario\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **tol** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by ibs_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **breach_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where breach_rate>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Impact tolerance testing evidence (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for impact tolerance testing evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FCA SS1/21 operational resilience"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SS1/21",
                  "v": "2021",
                  "cl": "§2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SS1/21 §2.1 (Set impact tolerances) is enforced — Splunk UC-22.27.12: Impact tolerance testing evidence.",
                  "ea": "Saved search 'UC-22.27.12' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.13",
              "n": "Third-party dependency concentration monitoring (FCA operational resilience)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for third-party dependency concentration monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=cmdb sourcetype=\"vendor:dependency\" earliest=-7d\n| stats dc(critical_service) as fanout by vendor_name\n| eventstats max(fanout) as maxf\n| eval concentration=round(100*fanout/maxf,2)\n| where fanout>=5\n| sort - fanout",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cmdb sourcetype=\"vendor:dependency\" earliest=-7d\n| stats dc(critical_service) as fanout by vendor_name\n| eventstats max(fanout) as maxf\n| eval concentration=round(100*fanout/maxf,2)\n| where fanout>=5\n| sort - fanout\n```\n\nUnderstanding this SPL\n\n**Third-party dependency concentration monitoring (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for third-party dependency concentration monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cmdb; **sourcetype**: vendor:dependency. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cmdb, sourcetype=\"vendor:dependency\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vendor_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• `eval` defines or adjusts **concentration** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where fanout>=5` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Third-party dependency concentration monitoring (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for third-party dependency concentration monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FCA SS1/21 operational resilience"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SS1/21",
                  "v": "2021",
                  "cl": "§1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SS1/21 §1.1 (Identify important business services) is enforced — Splunk UC-22.27.13: Third-party dependency concentration monitoring.",
                  "ea": "Saved search 'UC-22.27.13' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.14",
              "n": "Scenario testing record capture (FCA operational resilience)",
              "c": "critical",
              "f": "beginner",
              "v": "Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for scenario testing record capture.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=grc sourcetype=\"scenario:test\" earliest=-180d regulator=\"FCA\"\n| stats latest(result) as res, latest(_time) as t by scenario_id, ibs\n| where res!=\"Pass\"\n| table scenario_id, ibs, res, t",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"scenario:test\" earliest=-180d regulator=\"FCA\"\n| stats latest(result) as res, latest(_time) as t by scenario_id, ibs\n| where res!=\"Pass\"\n| table scenario_id, ibs, res, t\n```\n\nUnderstanding this SPL\n\n**Scenario testing record capture (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for scenario testing record capture.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: scenario:test. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"scenario:test\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by scenario_id, ibs** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where res!=\"Pass\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Scenario testing record capture (FCA operational resilience)**): table scenario_id, ibs, res, t\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Scenario testing record capture (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for scenario testing record capture.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FCA SS1/21 operational resilience"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SS1/21",
                  "v": "2021",
                  "cl": "§3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SS1/21 §3.1 (Scenario testing) is enforced — Splunk UC-22.27.14: Scenario testing record capture.",
                  "ea": "Saved search 'UC-22.27.14' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.15",
              "n": "Communication and information systems resilience (FCA operational resilience)",
              "c": "low",
              "f": "expert",
              "v": "Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for communication and information systems resilience.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Workload=Exchange OR Workload=MicrosoftTeams earliest=-24h\n| stats count by Operation, UserId, ResultStatus\n| search Operation=\"*Set-*\" OR Operation=\"*Update*\"\n| lookup uk_ibs_comms_systems.csv Workload OUTPUT ibs_tagged\n| where ibs_tagged=\"true\"\n| sort - count",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Workload=Exchange OR Workload=MicrosoftTeams earliest=-24h\n| stats count by Operation, UserId, ResultStatus\n| search Operation=\"*Set-*\" OR Operation=\"*Update*\"\n| lookup uk_ibs_comms_systems.csv Workload OUTPUT ibs_tagged\n| where ibs_tagged=\"true\"\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Communication and information systems resilience (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for communication and information systems resilience.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Operation, UserId, ResultStatus** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where ibs_tagged=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Communication and information systems resilience (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for communication and information systems resilience.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FCA SS1/21 operational resilience"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SS1/21",
                  "v": "2021",
                  "cl": "§1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SS1/21 §1.1 (Identify important business services) is enforced — Splunk UC-22.27.15: Communication and information systems resilience.",
                  "ea": "Saved search 'UC-22.27.15' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.16",
              "n": "Self-assessment compliance scoring (FCA operational resilience)",
              "c": "critical",
              "f": "advanced",
              "v": "Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for self-assessment compliance scoring.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=grc sourcetype=\"fca:ors:selfassess\" earliest=-365d\n| stats latest(score) as s, latest(status) as st by entity, assessment_version\n| where s<80 OR st!=\"Complete\"\n| table entity, assessment_version, s, st",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"fca:ors:selfassess\" earliest=-365d\n| stats latest(score) as s, latest(status) as st by entity, assessment_version\n| where s<80 OR st!=\"Complete\"\n| table entity, assessment_version, s, st\n```\n\nUnderstanding this SPL\n\n**Self-assessment compliance scoring (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for self-assessment compliance scoring.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: fca:ors:selfassess. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"fca:ors:selfassess\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by entity, assessment_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where s<80 OR st!=\"Complete\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Self-assessment compliance scoring (FCA operational resilience)**): table entity, assessment_version, s, st\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Self-assessment compliance scoring (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for self-assessment compliance scoring.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FCA SS1/21 operational resilience"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SS1/21",
                  "v": "2021",
                  "cl": "§1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SS1/21 §1.1 (Identify important business services) is enforced — Splunk UC-22.27.16: Self-assessment compliance scoring.",
                  "ea": "Saved search 'UC-22.27.16' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.17",
              "n": "FCA notification timeline tracking (FCA operational resilience)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for fca notification timeline tracking.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=grc sourcetype=\"fca:notification\" earliest=-730d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval filed=strptime(filed_regulator_at,\"%Y-%m-%d %H:%M:%S\")\n| eval hours=round((filed-opened)/3600,2)\n| where hours>24 OR isnull(filed)\n| table case_id, opened_at, filed_regulator_at, hours",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"fca:notification\" earliest=-730d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval filed=strptime(filed_regulator_at,\"%Y-%m-%d %H:%M:%S\")\n| eval hours=round((filed-opened)/3600,2)\n| where hours>24 OR isnull(filed)\n| table case_id, opened_at, filed_regulator_at, hours\n```\n\nUnderstanding this SPL\n\n**FCA notification timeline tracking (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for fca notification timeline tracking.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: fca:notification. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"fca:notification\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **filed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours>24 OR isnull(filed)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **FCA notification timeline tracking (FCA operational resilience)**): table case_id, opened_at, filed_regulator_at, hours\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**FCA notification timeline tracking (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for fca notification timeline tracking.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FCA SS1/21 operational resilience"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SS1/21",
                  "v": "2021",
                  "cl": "§1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SS1/21 §1.1 (Identify important business services) is enforced — Splunk UC-22.27.17: FCA notification timeline tracking.",
                  "ea": "Saved search 'UC-22.27.17' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.18",
              "n": "Outsourcing operational resilience oversight (FCA operational resilience)",
              "c": "medium",
              "f": "beginner",
              "v": "Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for outsourcing operational resilience oversight.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=snow sourcetype=\"snow:chg\" OR index=vendor sourcetype=\"vendor:risk\" earliest=-30d\n| eval vendor=coalesce(u_vendor, vendor_name)\n| stats count by vendor, category\n| lookup fca_material_outsource.csv vendor OUTPUT material\n| where material=\"true\"\n| sort - count",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=snow sourcetype=\"snow:chg\" OR index=vendor sourcetype=\"vendor:risk\" earliest=-30d\n| eval vendor=coalesce(u_vendor, vendor_name)\n| stats count by vendor, category\n| lookup fca_material_outsource.csv vendor OUTPUT material\n| where material=\"true\"\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Outsourcing operational resilience oversight (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for outsourcing operational resilience oversight.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: snow, vendor; **sourcetype**: snow:chg, vendor:risk. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=snow, index=vendor, sourcetype=\"snow:chg\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **vendor** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by vendor, category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where material=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Outsourcing operational resilience oversight (FCA operational resilience)** — Supports UK FCA operational resilience supervisory expectations by correlating technical telemetry with governance records for outsourcing operational resilience oversight.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FCA SS1/21 operational resilience"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SS1/21",
                  "v": "2021",
                  "cl": "§1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SS1/21 §1.1 (Identify important business services) is enforced — Splunk UC-22.27.18: Outsourcing operational resilience oversight.",
                  "ea": "Saved search 'UC-22.27.18' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
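The UC-22.27.18 search only returns rows once `fca_material_outsource.csv` exists with `vendor` and `material` columns, as referenced in its lookup. Below is a minimal sketch for seeding that lookup from the Search bar; the vendor names are placeholders, and in practice the file would be exported from your outsourcing register rather than typed by hand.

```spl
| makeresults count=2
| streamstats count as row
| eval vendor=case(row=1, "ExampleCloudCo", row=2, "ExamplePayrollCo"),
       material=case(row=1, "true", row=2, "false")
| table vendor material
| outputlookup fca_material_outsource.csv
```

With the lookup in place, the main search keeps only vendors flagged material="true", which is the materiality filter this use case depends on.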
            {
              "i": "22.27.19",
              "n": "Material outsourcing register drift detection (PRA outsourcing)",
              "c": "low",
              "f": "expert",
              "v": "Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for material outsourcing register drift detection.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=snow sourcetype=\"snow:cmdb_rel\" earliest=-7d\n| search relationship_type=\"depends_on\" AND parent_class=\"Outsourced Service\"\n| stats dc(child_ci) as downstream_ci by parent_ci\n| where downstream_ci>20\n| sort - downstream_ci",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=snow sourcetype=\"snow:cmdb_rel\" earliest=-7d\n| search relationship_type=\"depends_on\" AND parent_class=\"Outsourced Service\"\n| stats dc(child_ci) as downstream_ci by parent_ci\n| where downstream_ci>20\n| sort - downstream_ci\n```\n\nUnderstanding this SPL\n\n**Material outsourcing register drift detection (PRA outsourcing)** — Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for material outsourcing register drift detection.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: snow; **sourcetype**: snow:cmdb_rel. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=snow, sourcetype=\"snow:cmdb_rel\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by parent_ci** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where downstream_ci>20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Material outsourcing register drift detection (PRA outsourcing)** — Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for material outsourcing register drift detection.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PRA SS2/21 outsourcing"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PRA SS2/21",
                  "v": "2021",
                  "cl": "§3.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PRA SS2/21 §3.2 (Proportionality) is enforced — Splunk UC-22.27.19: Material outsourcing register drift detection.",
                  "ea": "Saved search 'UC-22.27.19' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
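UC-22.27.19 flags outsourced services with more than 20 downstream CIs, but register "drift" is often easier to evidence as a week-over-week change. A hedged sketch of that comparison is below, reusing the same `snow:cmdb_rel` feed; the 14-day window split and the drift threshold of 5 CIs are illustrative values, not figures from the catalogue.

```spl
index=snow sourcetype="snow:cmdb_rel" earliest=-14d relationship_type="depends_on" parent_class="Outsourced Service"
| eval window=if(_time >= relative_time(now(), "-7d@d"), "current", "previous")
| stats dc(child_ci) as downstream_ci by parent_ci, window
| xyseries parent_ci window downstream_ci
| fillnull value=0 current previous
| eval drift=current - previous
| where abs(drift) > 5
| sort - drift
```

Large positive drift suggests a dependency footprint that grew without a register update; negative drift can point at decommissioning that was never reflected in the register.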
            {
              "i": "22.27.20",
              "n": "Outsourcing concentration risk monitoring (PRA outsourcing)",
              "c": "critical",
              "f": "advanced",
              "v": "Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for outsourcing concentration risk monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=risk sourcetype=\"vendor:concentration\" regulator=\"PRA\" earliest=-30d\n| stats sum(exposure_pct) as exp by vendor_name, service_line\n| where exp>15\n| sort - exp",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=risk sourcetype=\"vendor:concentration\" regulator=\"PRA\" earliest=-30d\n| stats sum(exposure_pct) as exp by vendor_name, service_line\n| where exp>15\n| sort - exp\n```\n\nUnderstanding this SPL\n\n**Outsourcing concentration risk monitoring (PRA outsourcing)** — Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for outsourcing concentration risk monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: risk; **sourcetype**: vendor:concentration. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=risk, sourcetype=\"vendor:concentration\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vendor_name, service_line** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where exp>15` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Outsourcing concentration risk monitoring (PRA outsourcing)** — Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for outsourcing concentration risk monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PRA SS2/21 outsourcing"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PRA SS2/21",
                  "v": "2021",
                  "cl": "§3.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PRA SS2/21 §3.2 (Proportionality) is enforced — Splunk UC-22.27.20: Outsourcing concentration risk monitoring.",
                  "ea": "Saved search 'UC-22.27.20' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
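The visualization field for UC-22.27.20 suggests a heatmap of service against control. A small sketch that reshapes the same `vendor:concentration` data into a heatmap-friendly matrix (vendors as rows, service lines as columns) is shown below; thresholds and panel styling are left to the dashboard.

```spl
index=risk sourcetype="vendor:concentration" regulator="PRA" earliest=-30d
| stats sum(exposure_pct) as exp by vendor_name, service_line
| xyseries vendor_name service_line exp
| fillnull value=0
```

Cells above the threshold used in the main search (15) are the candidates for PRA SS2/21 concentration-risk discussion.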
            {
              "i": "22.27.21",
              "n": "Exit strategy test evidence (PRA outsourcing)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for exit strategy test evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=test sourcetype=\"exit_drill\" earliest=-365d\n| stats latest(result) as exit_ok, latest(_time) as t by outsource_id\n| where exit_ok!=\"success\"\n| lookup pra_material_outsource.csv outsource_id OUTPUT vendor\n| table vendor, outsource_id, exit_ok, t",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=test sourcetype=\"exit_drill\" earliest=-365d\n| stats latest(result) as exit_ok, latest(_time) as t by outsource_id\n| where exit_ok!=\"success\"\n| lookup pra_material_outsource.csv outsource_id OUTPUT vendor\n| table vendor, outsource_id, exit_ok, t\n```\n\nUnderstanding this SPL\n\n**Exit strategy test evidence (PRA outsourcing)** — Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for exit strategy test evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: test; **sourcetype**: exit_drill. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=test, sourcetype=\"exit_drill\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by outsource_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where exit_ok!=\"success\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Pipeline stage (see **Exit strategy test evidence (PRA outsourcing)**): table vendor, outsource_id, exit_ok, t\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Exit strategy test evidence (PRA outsourcing)** — Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for exit strategy test evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PRA SS2/21 outsourcing"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PRA SS2/21",
                  "v": "2021",
                  "cl": "§9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PRA SS2/21 §9 (Business continuity & exit plans) is enforced — Splunk UC-22.27.21: Exit strategy test evidence.",
                  "ea": "Saved search 'UC-22.27.21' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
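The UC-22.27.21 search only sees services that have at least one `exit_drill` event in the last year, so a registered service that was never tested is invisible to it. The sketch below starts from the register instead and marks untested entries. It assumes `pra_material_outsource.csv` carries `outsource_id` and `vendor` columns, as implied by the lookup in the main search, and it uses a subsearch, so the usual subsearch result limits apply.

```spl
| inputlookup pra_material_outsource.csv
| table outsource_id vendor
| join type=left outsource_id
    [ search index=test sourcetype="exit_drill" earliest=-365d
      | stats latest(result) as exit_ok latest(_time) as last_drill by outsource_id ]
| fillnull value="never tested" exit_ok
| where exit_ok!="success"
| table vendor outsource_id exit_ok last_drill
```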
            {
              "i": "22.27.22",
              "n": "Sub-outsourcing chain visibility (PRA outsourcing)",
              "c": "medium",
              "f": "beginner",
              "v": "Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for sub-outsourcing chain visibility.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=vendor sourcetype=\"subprocessor:register\" earliest=-30d\n| stats values(subprocessor) as chain by prime_vendor, contract_id\n| eval depth=mvcount(chain)\n| where depth>=4\n| sort - depth",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vendor sourcetype=\"subprocessor:register\" earliest=-30d\n| stats values(subprocessor) as chain by prime_vendor, contract_id\n| eval depth=mvcount(chain)\n| where depth>=4\n| sort - depth\n```\n\nUnderstanding this SPL\n\n**Sub-outsourcing chain visibility (PRA outsourcing)** — Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for sub-outsourcing chain visibility.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vendor; **sourcetype**: subprocessor:register. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vendor, sourcetype=\"subprocessor:register\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by prime_vendor, contract_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **depth** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where depth>=4` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Sub-outsourcing chain visibility (PRA outsourcing)** — Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for sub-outsourcing chain visibility.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PRA SS2/21 outsourcing"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PRA SS2/21",
                  "v": "2021",
                  "cl": "§3.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PRA SS2/21 §3.2 (Proportionality) is enforced — Splunk UC-22.27.22: Sub-outsourcing chain visibility.",
                  "ea": "Saved search 'UC-22.27.22' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
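For reviewers, the multivalue `chain` field produced by UC-22.27.22 is easier to read as a single ordered string. A small variant of the same search is sketched below; the arrow separator is arbitrary, and the depth threshold of 4 simply mirrors the main search.

```spl
index=vendor sourcetype="subprocessor:register" earliest=-30d
| stats values(subprocessor) as chain by prime_vendor, contract_id
| eval depth=mvcount(chain), chain_list=mvjoin(mvsort(chain), " -> ")
| where depth>=4
| table prime_vendor contract_id depth chain_list
| sort - depth
```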
            {
              "i": "22.27.23",
              "n": "Cloud outsourcing compliance validation (PRA outsourcing)",
              "c": "low",
              "f": "expert",
              "v": "Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for cloud outsourcing compliance validation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" earliest=-30d eventSource=\"ec2.amazonaws.com\"\n| stats dc(awsRegion) as regions, dc(requestParameters.instanceType) as shapes by userIdentity.arn\n| lookup pra_cloud_material_functions.csv arn AS `userIdentity.arn` OUTPUT function_name\n| where isnotnull(function_name) AND regions>2",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" earliest=-30d eventSource=\"ec2.amazonaws.com\"\n| stats dc(awsRegion) as regions, dc(requestParameters.instanceType) as shapes by userIdentity.arn\n| lookup pra_cloud_material_functions.csv arn AS `userIdentity.arn` OUTPUT function_name\n| where isnotnull(function_name) AND regions>2\n```\n\nUnderstanding this SPL\n\n**Cloud outsourcing compliance validation (PRA outsourcing)** — Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for cloud outsourcing compliance validation.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(function_name) AND regions>2` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cloud outsourcing compliance validation (PRA outsourcing)** — Supports UK PRA outsourcing supervisory expectations by correlating technical telemetry with governance records for cloud outsourcing compliance validation.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PRA SS2/21 outsourcing"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PRA SS2/21",
                  "v": "2021",
                  "cl": "§3.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PRA SS2/21 §3.2 (Proportionality) is enforced — Splunk UC-22.27.23: Cloud outsourcing compliance validation.",
                  "ea": "Saved search 'UC-22.27.23' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
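A related check that reuses the UC-22.27.23 lookup is to flag CloudTrail activity for material functions outside the regions declared to the PRA. The sketch below assumes eu-west-1 and eu-west-2 are the approved regions (substitute your own list), and that `pra_cloud_material_functions.csv` with `arn` and `function_name` columns already exists, as in the main search.

```spl
index=aws sourcetype="aws:cloudtrail" earliest=-30d eventSource="ec2.amazonaws.com" NOT awsRegion IN ("eu-west-1", "eu-west-2")
| lookup pra_cloud_material_functions.csv arn AS userIdentity.arn OUTPUT function_name
| where isnotnull(function_name)
| stats count by userIdentity.arn, function_name, awsRegion
| sort - count
```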
            {
              "i": "22.27.24",
              "n": "Senior manager responsibility mapping (SM&CR)",
              "c": "critical",
              "f": "advanced",
              "v": "Supports UK SM&CR supervisory expectations by correlating technical telemetry with governance records for senior manager responsibility mapping.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=hr sourcetype=\"smcr:smf\" earliest=-90d\n| stats values(smf_role) as roles, dc(department) as depts by senior_manager\n| mvexpand roles\n| lookup smcr_prescribed_responsibilities.csv smf_role OUTPUT required\n| where required=\"true\" AND depts>3\n| table senior_manager, roles, depts",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=\"smcr:smf\" earliest=-90d\n| stats values(smf_role) as roles, dc(department) as depts by senior_manager\n| mvexpand roles\n| lookup smcr_prescribed_responsibilities.csv smf_role OUTPUT required\n| where required=\"true\" AND depts>3\n| table senior_manager, roles, depts\n```\n\nUnderstanding this SPL\n\n**Senior manager responsibility mapping (SM&CR)** — Supports UK SM&CR supervisory expectations by correlating technical telemetry with governance records for senior manager responsibility mapping.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: smcr:smf. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=\"smcr:smf\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by senior_manager** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where required=\"true\" AND depts>3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Senior manager responsibility mapping (SM&CR)**): table senior_manager, roles, depts\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Senior manager responsibility mapping (SM&CR)** — Supports UK SM&CR supervisory expectations by correlating technical telemetry with governance records for senior manager responsibility mapping.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FCA SM&CR"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SM&CR",
                  "v": "current",
                  "cl": "SMR 1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SM&CR SMR 1 (Senior Management Functions, Statements of Responsibilities) is enforced — Splunk UC-22.27.24: Senior manager responsibility mapping.",
                  "ea": "Saved search 'UC-22.27.24' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.25",
              "n": "Certification regime compliance monitoring (SM&CR)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports UK SM&CR supervisory expectations by correlating technical telemetry with governance records for certification regime compliance monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=learning sourcetype=\"smcr:certification\" earliest=-180d\n| stats latest(cert_status) as st, latest(expiry) as ex by employee_id, cert_type\n| eval exp_epoch=strptime(ex,\"%Y-%m-%d\")\n| where st!=\"Active\" OR exp_epoch<relative_time(now(),\"+60d@d\")\n| table employee_id, cert_type, st, ex",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=learning sourcetype=\"smcr:certification\" earliest=-180d\n| stats latest(cert_status) as st, latest(expiry) as ex by employee_id, cert_type\n| eval exp_epoch=strptime(ex,\"%Y-%m-%d\")\n| where st!=\"Active\" OR exp_epoch<relative_time(now(),\"+60d@d\")\n| table employee_id, cert_type, st, ex\n```\n\nUnderstanding this SPL\n\n**Certification regime compliance monitoring (SM&CR)** — Supports UK SM&CR supervisory expectations by correlating technical telemetry with governance records for certification regime compliance monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: learning; **sourcetype**: smcr:certification. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=learning, sourcetype=\"smcr:certification\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by employee_id, cert_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **exp_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where st!=\"Active\" OR exp_epoch<relative_time(now(),\"+60d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Certification regime compliance monitoring (SM&CR)**): table employee_id, cert_type, st, ex\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Certification regime compliance monitoring (SM&CR)** — Supports UK SM&CR supervisory expectations by correlating technical telemetry with governance records for certification regime compliance monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FCA SM&CR"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SM&CR",
                  "v": "current",
                  "cl": "SMR 1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SM&CR SMR 1 (Senior Management Functions, Statements of Responsibilities) is enforced — Splunk UC-22.27.25: Certification regime compliance monitoring.",
                  "ea": "Saved search 'UC-22.27.25' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.26",
              "n": "Conduct rule breach monitoring (SM&CR)",
              "c": "medium",
              "f": "beginner",
              "v": "Supports UK SM&CR supervisory expectations by correlating technical telemetry with governance records for conduct rule breach monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=comm sourcetype=\"conduct:breach\" earliest=-365d\n| stats count by rule_name, employee_class\n| where count>3\n| lookup smcr_conduct_rules.csv rule_name OUTPUT tier\n| search tier=\"Core\"\n| sort - count",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=comm sourcetype=\"conduct:breach\" earliest=-365d\n| stats count by rule_name, employee_class\n| where count>3\n| lookup smcr_conduct_rules.csv rule_name OUTPUT tier\n| search tier=\"Core\"\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Conduct rule breach monitoring (SM&CR)** — Supports UK SM&CR supervisory expectations by correlating technical telemetry with governance records for conduct rule breach monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: comm; **sourcetype**: conduct:breach. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=comm, sourcetype=\"conduct:breach\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by rule_name, employee_class** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>3` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Applies an explicit `search` filter to narrow the current result set.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.signature | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Conduct rule breach monitoring (SM&CR)** — Supports UK SM&CR supervisory expectations by correlating technical telemetry with governance records for conduct rule breach monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FCA SM&CR"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.signature | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SM&CR",
                  "v": "current",
                  "cl": "COCON 2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SM&CR COCON 2 (Individual Conduct Rules (including acting with integrity/due care)) is enforced — Splunk UC-22.27.26: Conduct rule breach monitoring.",
                  "ea": "Saved search 'UC-22.27.26' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.27",
              "n": "Reasonable steps evidence aggregation (SM&CR)",
              "c": "low",
              "f": "expert",
              "v": "Supports UK SM&CR supervisory expectations by correlating technical telemetry with governance records for reasonable steps evidence aggregation.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=audit sourcetype=\"smcr:reasonable_steps\" earliest=-365d smf_id=*\n| stats values(evidence_type) as ev, latest(_time) as t by smf_id, control_id\n| eval ev_count=mvcount(ev)\n| where ev_count<3\n| table smf_id, control_id, ev_count, t",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"smcr:reasonable_steps\" earliest=-365d smf_id=*\n| stats values(evidence_type) as ev, latest(_time) as t by smf_id, control_id\n| eval ev_count=mvcount(ev)\n| where ev_count<3\n| table smf_id, control_id, ev_count, t\n```\n\nUnderstanding this SPL\n\n**Reasonable steps evidence aggregation (SM&CR)** — Supports UK SM&CR supervisory expectations by correlating technical telemetry with governance records for reasonable steps evidence aggregation.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: smcr:reasonable_steps. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"smcr:reasonable_steps\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by smf_id, control_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ev_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ev_count<3` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Reasonable steps evidence aggregation (SM&CR)**): table smf_id, control_id, ev_count, t\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Reasonable steps evidence aggregation (SM&CR)** — Supports UK SM&CR supervisory expectations by correlating technical telemetry with governance records for reasonable steps evidence aggregation.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FCA SM&CR"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SM&CR",
                  "v": "current",
                  "cl": "SMR 1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SM&CR SMR 1 (Senior Management Functions, Statements of Responsibilities) is enforced — Splunk UC-22.27.27: Reasonable steps evidence aggregation.",
                  "ea": "Saved search 'UC-22.27.27' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.28",
              "n": "Cyber Essentials boundary firewall configuration evidence (Cyber Essentials)",
              "c": "critical",
              "f": "advanced",
              "v": "Supports UK Cyber Essentials supervisory expectations by correlating technical telemetry with governance records for cyber essentials boundary firewall configuration evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=network sourcetype=\"pan:config\" OR index=checkpoint sourcetype=\"cp_log\" earliest=-7d\n| search (command=\"set rule\" OR object=\"firewall\") OR match(_raw,\"policy\")\n| stats dc(device) as touched by admin, action\n| lookup cyberessentials_scope.csv device OUTPUT in_scope\n| where in_scope=\"true\"\n| sort - touched",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:config\" OR index=checkpoint sourcetype=\"cp_log\" earliest=-7d\n| search (command=\"set rule\" OR object=\"firewall\") OR match(_raw,\"policy\")\n| stats dc(device) as touched by admin, action\n| lookup cyberessentials_scope.csv device OUTPUT in_scope\n| where in_scope=\"true\"\n| sort - touched\n```\n\nUnderstanding this SPL\n\n**Cyber Essentials boundary firewall configuration evidence (Cyber Essentials)** — Supports UK Cyber Essentials supervisory expectations by correlating technical telemetry with governance records for cyber essentials boundary firewall configuration evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network, checkpoint; **sourcetype**: pan:config, cp_log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, index=checkpoint, sourcetype=\"pan:config\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by admin, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cyber Essentials boundary firewall configuration evidence (Cyber Essentials)** — Supports UK Cyber Essentials supervisory expectations by correlating technical telemetry with governance records for cyber essentials boundary firewall configuration evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Cyber Essentials"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "Cyber Essentials",
                  "v": "Montpellier (2025)",
                  "cl": "CE.BF.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that Cyber Essentials CE.BF.1 (Boundary firewalls) is enforced — Splunk UC-22.27.28: Cyber Essentials boundary firewall configuration evidence.",
                  "ea": "Saved search 'UC-22.27.28' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.29",
              "n": "Cyber Essentials Plus secure configuration baseline drift (Cyber Essentials)",
              "c": "high",
              "f": "intermediate",
              "v": "Supports UK Cyber Essentials supervisory expectations by correlating technical telemetry with governance records for cyber essentials plus secure configuration baseline drift.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=os sourcetype IN (\"Unix:Audit\",\"WinEventLog:Security\") earliest=-24h\n| eval baseline_tag=host.\"|\".coalesce(process,Image)\n| stats earliest(_time) as first by baseline_tag, host\n| lookup ce_secure_config_baseline.csv baseline_tag OUTPUT expected_hash\n| where isnotnull(expected_hash)\n| sort - first",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype IN (\"Unix:Audit\",\"WinEventLog:Security\") earliest=-24h\n| eval baseline_tag=host.\"|\".coalesce(process,Image)\n| stats earliest(_time) as first by baseline_tag, host\n| lookup ce_secure_config_baseline.csv baseline_tag OUTPUT expected_hash\n| where isnotnull(expected_hash)\n| sort - first\n```\n\nUnderstanding this SPL\n\n**Cyber Essentials Plus secure configuration baseline drift (Cyber Essentials)** — Supports UK Cyber Essentials supervisory expectations by correlating technical telemetry with governance records for cyber essentials plus secure configuration baseline drift.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **baseline_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by baseline_tag, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(expected_hash)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cyber Essentials Plus secure configuration baseline drift (Cyber Essentials)** — Supports UK Cyber Essentials supervisory expectations by correlating technical telemetry with governance records for cyber essentials plus secure configuration baseline drift.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Cyber Essentials"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "Cyber Essentials",
                  "v": "Montpellier (2025)",
                  "cl": "CE.SAU.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that Cyber Essentials CE.SAU.1 (Secure authentication & access) is enforced — Splunk UC-22.27.29: Cyber Essentials Plus secure configuration baseline drift.",
                  "ea": "Saved search 'UC-22.27.29' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.27.30",
              "n": "Malware protection and update compliance for CE scope (Cyber Essentials)",
              "c": "medium",
              "f": "beginner",
              "v": "Supports UK Cyber Essentials supervisory expectations by correlating technical telemetry with governance records for malware protection and update compliance for ce scope.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence",
              "q": "index=defender sourcetype=\"DefenderATP:Alert\" OR index=crowdstrike sourcetype=\"crowdstrike:hosts\" earliest=-7d\n| stats latest(last_seen) as ls, latest(rtr_state) as rtr by hostname\n| eval stale=if((now()-ls)>604800,1,0)\n| lookup cyberessentials_endpoints.csv hostname OUTPUT in_scope\n| where in_scope=\"true\" AND (stale=1 OR rtr!=\"enabled\")\n| table hostname, ls, rtr, stale",
              "m": "(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.",
              "z": "Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `uk_oes_inventory.csv` with OES identifiers, essential services, and CAF mappings. (2) Tag correlation searches with `nis_relevant` or `fca_relevant` for workflow routing. (3) Integrate IR ticketing milestones for competent authority and FCA clocks. (4) Monthly review with compliance, technology risk, and resilience forums.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=defender sourcetype=\"DefenderATP:Alert\" OR index=crowdstrike sourcetype=\"crowdstrike:hosts\" earliest=-7d\n| stats latest(last_seen) as ls, latest(rtr_state) as rtr by hostname\n| eval stale=if((now()-ls)>604800,1,0)\n| lookup cyberessentials_endpoints.csv hostname OUTPUT in_scope\n| where in_scope=\"true\" AND (stale=1 OR rtr!=\"enabled\")\n| table hostname, ls, rtr, stale\n```\n\nUnderstanding this SPL\n\n**Malware protection and update compliance for CE scope (Cyber Essentials)** — Supports UK Cyber Essentials supervisory expectations by correlating technical telemetry with governance records for malware protection and update compliance for ce scope.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: defender, crowdstrike; **sourcetype**: DefenderATP:Alert, crowdstrike:hosts. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=defender, index=crowdstrike, sourcetype=\"DefenderATP:Alert\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **stale** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"true\" AND (stale=1 OR rtr!=\"enabled\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Malware protection and update compliance for CE scope (Cyber Essentials)**): table hostname, ls, rtr, stale\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Malware protection and update compliance for CE scope (Cyber Essentials)** — Supports UK Cyber Essentials supervisory expectations by correlating technical telemetry with governance records for malware protection and update compliance for ce scope.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails (AWS, Azure), network infrastructure logs, financial application logs, identity management systems, vulnerability scan outputs, incident response logs, operational resilience evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart (event volume), Heatmap (service x control), Table (top risk drivers).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show United Kingdom financial and operational resilience rules are met with clear evidence from your live systems.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Cyber Essentials"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [
                "aws",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "Cyber Essentials",
                  "v": "Montpellier (2025)",
                  "cl": "CE.SAU.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that Cyber Essentials CE.SAU.1 (Secure authentication & access) is enforced — Splunk UC-22.27.30: Malware protection and update compliance for CE scope.",
                  "ea": "Saved search 'UC-22.27.30' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 30,
            "none": 0
          }
        },
        {
          "i": "22.28",
          "n": "German KRITIS / BSI",
          "u": [
            {
              "i": "22.28.1",
              "n": "KRITIS operator reporting evidence (IT-SiG 2.0; BSI)",
              "c": "medium",
              "f": "expert",
              "v": "Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for kritis operator reporting evidence using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=grc sourcetype=\"bsi:kritis:report\" earliest=-365d\n| eval submitted=strptime(submitted_bsi_at,\"%Y-%m-%d %H:%M:%S\")\n| eval due=strptime(reporting_deadline,\"%Y-%m-%d %H:%M:%S\")\n| where isnull(submitted) OR submitted>due\n| table operator_id, reporting_deadline, submitted_bsi_at",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"bsi:kritis:report\" earliest=-365d\n| eval submitted=strptime(submitted_bsi_at,\"%Y-%m-%d %H:%M:%S\")\n| eval due=strptime(reporting_deadline,\"%Y-%m-%d %H:%M:%S\")\n| where isnull(submitted) OR submitted>due\n| table operator_id, reporting_deadline, submitted_bsi_at\n```\n\nUnderstanding this SPL\n\n**KRITIS operator reporting evidence (IT-SiG 2.0; BSI)** — Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for kritis operator reporting evidence using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: bsi:kritis:report. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"bsi:kritis:report\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **submitted** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(submitted) OR submitted>due` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **KRITIS operator reporting evidence (IT-SiG 2.0; BSI)**): table operator_id, reporting_deadline, submitted_bsi_at\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**KRITIS operator reporting evidence (IT-SiG 2.0; BSI)** — Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for kritis operator reporting evidence using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with IT-Sicherheitsgesetz 2.0 expectations for kritis operator reporting evidence using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IT-SiG 2.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IT-SiG 2.0",
                  "v": "2021",
                  "cl": "§8b",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IT-SiG 2.0 §8b (National IT situation centre notification) is enforced — Splunk UC-22.28.1: KRITIS operator reporting evidence.",
                  "ea": "Saved search 'UC-22.28.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.2",
              "n": "KRITIS registration and asset scope compliance (IT-SiG 2.0; BSI)",
              "c": "low",
              "f": "advanced",
              "v": "Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for kritis registration and asset scope compliance using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=cmdb sourcetype=\"kritis:asset\" earliest=-7d\n| stats dc(asset_id) as cnt by sector, site\n| lookup kritis_registration.csv sector OUTPUT registered_count\n| where cnt!=registered_count\n| table sector, site, cnt, registered_count",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cmdb sourcetype=\"kritis:asset\" earliest=-7d\n| stats dc(asset_id) as cnt by sector, site\n| lookup kritis_registration.csv sector OUTPUT registered_count\n| where cnt!=registered_count\n| table sector, site, cnt, registered_count\n```\n\nUnderstanding this SPL\n\n**KRITIS registration and asset scope compliance (IT-SiG 2.0; BSI)** — Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for kritis registration and asset scope compliance using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cmdb; **sourcetype**: kritis:asset. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cmdb, sourcetype=\"kritis:asset\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sector, site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cnt!=registered_count` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **KRITIS registration and asset scope compliance (IT-SiG 2.0; BSI)**): table sector, site, cnt, registered_count\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**KRITIS registration and asset scope compliance (IT-SiG 2.0; BSI)** — Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for kritis registration and asset scope compliance using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with IT-Sicherheitsgesetz 2.0 expectations for kritis registration and asset scope compliance using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IT-SiG 2.0"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IT-SiG 2.0",
                  "v": "2021",
                  "cl": "§8a",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IT-SiG 2.0 §8a (Security measures for KRITIS operators) is enforced — Splunk UC-22.28.2: KRITIS registration and asset scope compliance.",
                  "ea": "Saved search 'UC-22.28.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.3",
              "n": "Systems for attack detection (SzA) alert fidelity (IT-SiG 2.0; BSI)",
              "c": "critical",
              "f": "intermediate",
              "v": "Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for systems for attack detection (sza) alert fidelity using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=ids sourcetype IN (\"suricata:event\",\"zeek:conn\") tag=kritis earliest=-24h\n| stats count by dest_ip, signature\n| where count>50\n| lookup kritis_operator_networks.csv dest_ip OUTPUT operator\n| where isnotnull(operator)\n| sort - count",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ids sourcetype IN (\"suricata:event\",\"zeek:conn\") tag=kritis earliest=-24h\n| stats count by dest_ip, signature\n| where count>50\n| lookup kritis_operator_networks.csv dest_ip OUTPUT operator\n| where isnotnull(operator)\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Systems for attack detection (SzA) alert fidelity (IT-SiG 2.0; BSI)** — Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for systems for attack detection (sza) alert fidelity using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ids.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ids, time bounds, tags. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest_ip, signature** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>50` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(operator)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest, Vulnerabilities.signature | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Systems for attack detection (SzA) alert fidelity (IT-SiG 2.0; BSI)** — Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for systems for attack detection (sza) alert fidelity using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with IT-Sicherheitsgesetz 2.0 expectations for systems for attack detection (sza) alert fidelity using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IT-SiG 2.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest, Vulnerabilities.signature | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IT-SiG 2.0",
                  "v": "2021",
                  "cl": "§8a",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IT-SiG 2.0 §8a (Security measures for KRITIS operators) is enforced — Splunk UC-22.28.3: Systems for attack detection (SzA) alert fidelity.",
                  "ea": "Saved search 'UC-22.28.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.4",
              "n": "BSI notification within 24 hours tracking (IT-SiG 2.0; BSI)",
              "c": "high",
              "f": "beginner",
              "v": "Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for bsi notification within 24 hours tracking using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=incident sourcetype=\"bsi:notification\" earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval bsi_sent=strptime(bsi_notified_at,\"%Y-%m-%d %H:%M:%S\")\n| eval hours=round((bsi_sent-detected)/3600,2)\n| where hours>24 OR isnull(bsi_sent)\n| table incident_id, hours",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=incident sourcetype=\"bsi:notification\" earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval bsi_sent=strptime(bsi_notified_at,\"%Y-%m-%d %H:%M:%S\")\n| eval hours=round((bsi_sent-detected)/3600,2)\n| where hours>24 OR isnull(bsi_sent)\n| table incident_id, hours\n```\n\nUnderstanding this SPL\n\n**BSI notification within 24 hours tracking (IT-SiG 2.0; BSI)** — Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for bsi notification within 24 hours tracking using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: incident; **sourcetype**: bsi:notification. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=incident, sourcetype=\"bsi:notification\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **detected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **bsi_sent** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours>24 OR isnull(bsi_sent)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **BSI notification within 24 hours tracking (IT-SiG 2.0; BSI)**): table incident_id, hours\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**BSI notification within 24 hours tracking (IT-SiG 2.0; BSI)** — Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for bsi notification within 24 hours tracking using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with IT-Sicherheitsgesetz 2.0 expectations for bsi notification within 24 hours tracking using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IT-SiG 2.0"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IT-SiG 2.0",
                  "v": "2021",
                  "cl": "§8b",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IT-SiG 2.0 §8b (National IT situation centre notification) is enforced — Splunk UC-22.28.4: BSI notification within 24 hours tracking.",
                  "ea": "Saved search 'UC-22.28.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.5",
              "n": "Annual BSI audit evidence aggregation (IT-SiG 2.0; BSI)",
              "c": "medium",
              "f": "expert",
              "v": "Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for annual bsi audit evidence aggregation using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=audit sourcetype=\"bsi:annual_audit\" earliest=-730d\n| stats latest(finding_count) as fc, latest(remediation_pct) as rp by operator, audit_year\n| where fc>0 AND rp<90\n| sort - fc",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"bsi:annual_audit\" earliest=-730d\n| stats latest(finding_count) as fc, latest(remediation_pct) as rp by operator, audit_year\n| where fc>0 AND rp<90\n| sort - fc\n```\n\nUnderstanding this SPL\n\n**Annual BSI audit evidence aggregation (IT-SiG 2.0; BSI)** — Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for annual bsi audit evidence aggregation using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: bsi:annual_audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"bsi:annual_audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by operator, audit_year** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where fc>0 AND rp<90` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Annual BSI audit evidence aggregation (IT-SiG 2.0; BSI)** — Demonstrates alignment with IT-Sicherheitsgesetz 2.0 expectations for annual bsi audit evidence aggregation using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with IT-Sicherheitsgesetz 2.0 expectations for annual bsi audit evidence aggregation using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IT-SiG 2.0"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IT-SiG 2.0",
                  "v": "2021",
                  "cl": "§8a",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IT-SiG 2.0 §8a (Security measures for KRITIS operators) is enforced — Splunk UC-22.28.5: Annual BSI audit evidence aggregation.",
                  "ea": "Saved search 'UC-22.28.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.6",
              "n": "Sector threshold and scope monitoring (BSI-KritisV)",
              "c": "critical",
              "f": "advanced",
              "v": "Demonstrates alignment with BSI-KritisV expectations for sector threshold and scope monitoring using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=metrics sourcetype=\"kritis:threshold\" earliest=-90d\n| timechart span=1d max(utilization_pct) as u by sector\n| where u>85",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=metrics sourcetype=\"kritis:threshold\" earliest=-90d\n| timechart span=1d max(utilization_pct) as u by sector\n| where u>85\n```\n\nUnderstanding this SPL\n\n**Sector threshold and scope monitoring (BSI-KritisV)** — Demonstrates alignment with BSI-KritisV expectations for sector threshold and scope monitoring using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: metrics; **sourcetype**: kritis:threshold. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=metrics, sourcetype=\"kritis:threshold\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=1d** buckets with a separate series **by sector** — ideal for trending and alerting on this use case.\n• Filters the current rows with `where u>85` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user span=1d | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Sector threshold and scope monitoring (BSI-KritisV)** — Demonstrates alignment with BSI-KritisV expectations for sector threshold and scope monitoring using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BSI-KritisV expectations for sector threshold and scope monitoring using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "BSI-KritisV"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user span=1d | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "BSI-KritisV",
                  "v": "2021 (as amended)",
                  "cl": "§8a",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that BSI-KritisV §8a (Security in IT systems) is enforced — Splunk UC-22.28.6: Sector threshold and scope monitoring.",
                  "ea": "Saved search 'UC-22.28.6' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.7",
              "n": "KRITIS asset inventory completeness (BSI-KritisV)",
              "c": "critical",
              "f": "intermediate",
              "v": "Demonstrates alignment with BSI-KritisV expectations for kritis asset inventory completeness using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=cmdb sourcetype=\"kritis:inventory\" earliest=-7d\n| stats values(system_role) as roles by asset_id, operator\n| eval role_count=mvcount(roles)\n| where role_count>6\n| sort - role_count",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cmdb sourcetype=\"kritis:inventory\" earliest=-7d\n| stats values(system_role) as roles by asset_id, operator\n| eval role_count=mvcount(roles)\n| where role_count>6\n| sort - role_count\n```\n\nUnderstanding this SPL\n\n**KRITIS asset inventory completeness (BSI-KritisV)** — Demonstrates alignment with BSI-KritisV expectations for kritis asset inventory completeness using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cmdb; **sourcetype**: kritis:inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cmdb, sourcetype=\"kritis:inventory\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by asset_id, operator** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **role_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where role_count>6` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**KRITIS asset inventory completeness (BSI-KritisV)** — Demonstrates alignment with BSI-KritisV expectations for kritis asset inventory completeness using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BSI-KritisV expectations for kritis asset inventory completeness using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "BSI-KritisV"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "BSI-KritisV",
                  "v": "2021 (as amended)",
                  "cl": "§8a",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that BSI-KritisV §8a (Security in IT systems) is enforced — Splunk UC-22.28.7: KRITIS asset inventory completeness.",
                  "ea": "Saved search 'UC-22.28.7' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.8",
              "n": "Minimum security standard compliance (BSI-KritisV)",
              "c": "high",
              "f": "beginner",
              "v": "Demonstrates alignment with BSI-KritisV expectations for minimum security standard compliance using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=vulns sourcetype IN (\"tenable:sc\",\"qualys:vm\") tag=kritis earliest=-14d\n| stats max(severity_score) as max_sev by host\n| lookup bsi_minimum_standard.csv control OUTPUT required_score\n| where max_sev>required_score\n| table host, max_sev, required_score",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vulns sourcetype IN (\"tenable:sc\",\"qualys:vm\") tag=kritis earliest=-14d\n| stats max(severity_score) as max_sev by host\n| lookup bsi_minimum_standard.csv control OUTPUT required_score\n| where max_sev>required_score\n| table host, max_sev, required_score\n```\n\nUnderstanding this SPL\n\n**Minimum security standard compliance (BSI-KritisV)** — Demonstrates alignment with BSI-KritisV expectations for minimum security standard compliance using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vulns.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vulns, time bounds, tags. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where max_sev>required_score` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Minimum security standard compliance (BSI-KritisV)**): table host, max_sev, required_score\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Minimum security standard compliance (BSI-KritisV)** — Demonstrates alignment with BSI-KritisV expectations for minimum security standard compliance using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BSI-KritisV expectations for minimum security standard compliance using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "BSI-KritisV"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "BSI-KritisV",
                  "v": "2021 (as amended)",
                  "cl": "§8a",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that BSI-KritisV §8a (Security in IT systems) is enforced — Splunk UC-22.28.8: Minimum security standard compliance.",
                  "ea": "Saved search 'UC-22.28.8' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.9",
              "n": "Interface security between operators (BSI-KritisV)",
              "c": "medium",
              "f": "expert",
              "v": "Demonstrates alignment with BSI-KritisV expectations for interface security between operators using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=network sourcetype=\"pan:traffic\" earliest=-24h tag=kritis_interface\n| stats sum(bytes) as bytes by src_zone, dest_zone, app\n| where src_zone!=dest_zone AND app=\"unknown-tcp\"\n| sort - bytes",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:traffic\" earliest=-24h tag=kritis_interface\n| stats sum(bytes) as bytes by src_zone, dest_zone, app\n| where src_zone!=dest_zone AND app=\"unknown-tcp\"\n| sort - bytes\n```\n\nUnderstanding this SPL\n\n**Interface security between operators (BSI-KritisV)** — Demonstrates alignment with BSI-KritisV expectations for interface security between operators using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:traffic\", time bounds, tags. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_zone, dest_zone, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where src_zone!=dest_zone AND app=\"unknown-tcp\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Interface security between operators (BSI-KritisV)** — Demonstrates alignment with BSI-KritisV expectations for interface security between operators using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BSI-KritisV expectations for interface security between operators using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "BSI-KritisV"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t sum(Vulnerabilities.cvss) as agg_value from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - agg_value",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "BSI-KritisV",
                  "v": "2021 (as amended)",
                  "cl": "§8a",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that BSI-KritisV §8a (Security in IT systems) is enforced — Splunk UC-22.28.9: Interface security between operators.",
                  "ea": "Saved search 'UC-22.28.9' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.10",
              "n": "Disruption impact assessment evidence (BSI-KritisV)",
              "c": "low",
              "f": "advanced",
              "v": "Demonstrates alignment with BSI-KritisV expectations for disruption impact assessment evidence using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=risk sourcetype=\"kritis:bia\" earliest=-365d\n| stats latest(impact_score) as imp, latest(likelihood) as lik by scenario_id\n| eval risk=imp*lik\n| where risk>5000\n| table scenario_id, imp, lik, risk",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=risk sourcetype=\"kritis:bia\" earliest=-365d\n| stats latest(impact_score) as imp, latest(likelihood) as lik by scenario_id\n| eval risk=imp*lik\n| where risk>5000\n| table scenario_id, imp, lik, risk\n```\n\nUnderstanding this SPL\n\n**Disruption impact assessment evidence (BSI-KritisV)** — Demonstrates alignment with BSI-KritisV expectations for disruption impact assessment evidence using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: risk; **sourcetype**: kritis:bia. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=risk, sourcetype=\"kritis:bia\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by scenario_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where risk>5000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Disruption impact assessment evidence (BSI-KritisV)**): table scenario_id, imp, lik, risk\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Disruption impact assessment evidence (BSI-KritisV)** — Demonstrates alignment with BSI-KritisV expectations for disruption impact assessment evidence using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BSI-KritisV expectations for disruption impact assessment evidence using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "BSI-KritisV"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "BSI-KritisV",
                  "v": "2021 (as amended)",
                  "cl": "§8a",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that BSI-KritisV §8a (Security in IT systems) is enforced — Splunk UC-22.28.10: Disruption impact assessment evidence.",
                  "ea": "Saved search 'UC-22.28.10' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.11",
              "n": "BSI module compliance tracking (IT-Grundschutz)",
              "c": "critical",
              "f": "intermediate",
              "v": "Demonstrates alignment with BSI IT-Grundschutz expectations for bsi module compliance tracking using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=grc sourcetype=\"grundschutz:module\" earliest=-730d\n| stats latest(status) as st by module_id, system\n| where st IN (\"Open\",\"Partial\")\n| lookup bsi_module_catalog.csv module_id OUTPUT layer\n| stats count by layer, st",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"grundschutz:module\" earliest=-730d\n| stats latest(status) as st by module_id, system\n| where st IN (\"Open\",\"Partial\")\n| lookup bsi_module_catalog.csv module_id OUTPUT layer\n| stats count by layer, st\n```\n\nUnderstanding this SPL\n\n**BSI module compliance tracking (IT-Grundschutz)** — Demonstrates alignment with BSI IT-Grundschutz expectations for bsi module compliance tracking using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: grundschutz:module. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"grundschutz:module\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by module_id, system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st IN (\"Open\",\"Partial\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `stats` rolls up events into metrics; results are split **by layer, st** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**BSI module compliance tracking (IT-Grundschutz)** — Demonstrates alignment with BSI IT-Grundschutz expectations for bsi module compliance tracking using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BSI IT-Grundschutz expectations for bsi module compliance tracking using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IT-Grundschutz"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IT-Grundschutz",
                  "v": "2023 Edition",
                  "cl": "OPS.1.1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IT-Grundschutz OPS.1.1.2 (Ordered ICT operation) is enforced — Splunk UC-22.28.11: BSI module compliance tracking.",
                  "ea": "Saved search 'UC-22.28.11' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.12",
              "n": "Risk analysis per BSI methodology (IT-Grundschutz)",
              "c": "high",
              "f": "beginner",
              "v": "Demonstrates alignment with BSI IT-Grundschutz expectations for risk analysis per bsi methodology using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=risk sourcetype=\"bsi:bia\" earliest=-365d\n| stats latest(risk_score) as rs by asset_group, threat\n| where rs>75\n| lookup grundschutz_ba_scope.csv asset_group OUTPUT in_scope\n| where in_scope=\"true\"",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=risk sourcetype=\"bsi:bia\" earliest=-365d\n| stats latest(risk_score) as rs by asset_group, threat\n| where rs>75\n| lookup grundschutz_ba_scope.csv asset_group OUTPUT in_scope\n| where in_scope=\"true\"\n```\n\nUnderstanding this SPL\n\n**Risk analysis per BSI methodology (IT-Grundschutz)** — Demonstrates alignment with BSI IT-Grundschutz expectations for risk analysis per bsi methodology using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: risk; **sourcetype**: bsi:bia. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=risk, sourcetype=\"bsi:bia\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by asset_group, threat** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where rs>75` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Risk analysis per BSI methodology (IT-Grundschutz)** — Demonstrates alignment with BSI IT-Grundschutz expectations for risk analysis per bsi methodology using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BSI IT-Grundschutz expectations for risk analysis per bsi methodology using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IT-Grundschutz"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IT-Grundschutz",
                  "v": "2023 Edition",
                  "cl": "OPS.1.1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IT-Grundschutz OPS.1.1.2 (Ordered ICT operation) is enforced — Splunk UC-22.28.12: Risk analysis per BSI methodology.",
                  "ea": "Saved search 'UC-22.28.12' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.13",
              "n": "Security concept documentation changes (IT-Grundschutz)",
              "c": "critical",
              "f": "expert",
              "v": "Demonstrates alignment with BSI IT-Grundschutz expectations for security concept documentation changes using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=wiki sourcetype=\"confluence:audit\" OR index=git sourcetype=\"gitlab:audit\" earliest=-30d\n| search (object=\"*security concept*\" OR path=\"*konzept*\")\n| stats dc(author) as authors, count by object\n| where authors>5",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wiki sourcetype=\"confluence:audit\" OR index=git sourcetype=\"gitlab:audit\" earliest=-30d\n| search (object=\"*security concept*\" OR path=\"*konzept*\")\n| stats dc(author) as authors, count by object\n| where authors>5\n```\n\nUnderstanding this SPL\n\n**Security concept documentation changes (IT-Grundschutz)** — Demonstrates alignment with BSI IT-Grundschutz expectations for security concept documentation changes using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wiki, git; **sourcetype**: confluence:audit, gitlab:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wiki, index=git, sourcetype=\"confluence:audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by object** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where authors>5` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security concept documentation changes (IT-Grundschutz)** — Demonstrates alignment with BSI IT-Grundschutz expectations for security concept documentation changes using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BSI IT-Grundschutz expectations for security concept documentation changes using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IT-Grundschutz"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IT-Grundschutz",
                  "v": "2023 Edition",
                  "cl": "OPS.1.1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IT-Grundschutz OPS.1.1.2 (Ordered ICT operation) is enforced — Splunk UC-22.28.13: Security concept documentation changes.",
                  "ea": "Saved search 'UC-22.28.13' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.14",
              "n": "Penetration test compliance and remediation (IT-Grundschutz)",
              "c": "low",
              "f": "advanced",
              "v": "Demonstrates alignment with BSI IT-Grundschutz expectations for penetration test compliance and remediation using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=vulns sourcetype=\"nessus:scan\" OR index=test sourcetype=\"pentest:report\" earliest=-180d\n| stats latest(status) as st by engagement_id, host\n| where st!=\"remediated\" AND _time<relative_time(now(),\"-30d@d\")\n| table engagement_id, host, st",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vulns sourcetype=\"nessus:scan\" OR index=test sourcetype=\"pentest:report\" earliest=-180d\n| stats latest(status) as st by engagement_id, host\n| where st!=\"remediated\" AND _time<relative_time(now(),\"-30d@d\")\n| table engagement_id, host, st\n```\n\nUnderstanding this SPL\n\n**Penetration test compliance and remediation (IT-Grundschutz)** — Demonstrates alignment with BSI IT-Grundschutz expectations for penetration test compliance and remediation using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vulns, test; **sourcetype**: nessus:scan, pentest:report. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vulns, index=test, sourcetype=\"nessus:scan\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by engagement_id, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st!=\"remediated\" AND _time<relative_time(now(),\"-30d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Penetration test compliance and remediation (IT-Grundschutz)**): table engagement_id, host, st\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Penetration test compliance and remediation (IT-Grundschutz)** — Demonstrates alignment with BSI IT-Grundschutz expectations for penetration test compliance and remediation using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BSI IT-Grundschutz expectations for penetration test compliance and remediation using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IT-Grundschutz"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.dest | sort - agg_value",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IT-Grundschutz",
                  "v": "2023 Edition",
                  "cl": "OPS.1.1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IT-Grundschutz OPS.1.1.2 (Ordered ICT operation) is enforced — Splunk UC-22.28.14: Penetration test compliance and remediation.",
                  "ea": "Saved search 'UC-22.28.14' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.15",
              "n": "Grundschutz certification evidence pack (IT-Grundschutz)",
              "c": "critical",
              "f": "intermediate",
              "v": "Demonstrates alignment with BSI IT-Grundschutz expectations for grundschutz certification evidence pack using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=audit sourcetype=\"grundschutz:cert\" earliest=-365d\n| stats latest(expiry) as ex, latest(level) as lvl by scope_id\n| eval ex_epoch=strptime(ex,\"%Y-%m-%d\")\n| where ex_epoch<relative_time(now(),\"+90d@d\")\n| table scope_id, lvl, ex",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"grundschutz:cert\" earliest=-365d\n| stats latest(expiry) as ex, latest(level) as lvl by scope_id\n| eval ex_epoch=strptime(ex,\"%Y-%m-%d\")\n| where ex_epoch<relative_time(now(),\"+90d@d\")\n| table scope_id, lvl, ex\n```\n\nUnderstanding this SPL\n\n**Grundschutz certification evidence pack (IT-Grundschutz)** — Demonstrates alignment with BSI IT-Grundschutz expectations for grundschutz certification evidence pack using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: grundschutz:cert. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"grundschutz:cert\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by scope_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ex_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ex_epoch<relative_time(now(),\"+90d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Grundschutz certification evidence pack (IT-Grundschutz)**): table scope_id, lvl, ex\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Grundschutz certification evidence pack (IT-Grundschutz)** — Demonstrates alignment with BSI IT-Grundschutz expectations for grundschutz certification evidence pack using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BSI IT-Grundschutz expectations for grundschutz certification evidence pack using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "IT-Grundschutz"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "IT-Grundschutz",
                  "v": "2023 Edition",
                  "cl": "OPS.1.1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that IT-Grundschutz OPS.1.1.2 (Ordered ICT operation) is enforced — Splunk UC-22.28.15: Grundschutz certification evidence pack.",
                  "ea": "Saved search 'UC-22.28.15' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.16",
              "n": "Information risk management for financial institutions (BAIT/KAIT)",
              "c": "high",
              "f": "beginner",
              "v": "Demonstrates alignment with BAIT/KAIT expectations for information risk management for financial institutions using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=risk sourcetype=\"bait:irm\" earliest=-365d entity=\"*bank*\"\n| stats latest(inherent_risk) as ir, latest(residual_risk) as rr by process_id\n| where rr>ir*0.6\n| table process_id, ir, rr",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=risk sourcetype=\"bait:irm\" earliest=-365d entity=\"*bank*\"\n| stats latest(inherent_risk) as ir, latest(residual_risk) as rr by process_id\n| where rr>ir*0.6\n| table process_id, ir, rr\n```\n\nUnderstanding this SPL\n\n**Information risk management for financial institutions (BAIT/KAIT)** — Demonstrates alignment with BAIT/KAIT expectations for information risk management for financial institutions using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: risk; **sourcetype**: bait:irm. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=risk, sourcetype=\"bait:irm\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by process_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where rr>ir*0.6` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Information risk management for financial institutions (BAIT/KAIT)**): table process_id, ir, rr\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Information risk management for financial institutions (BAIT/KAIT)** — Demonstrates alignment with BAIT/KAIT expectations for information risk management for financial institutions using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BAIT/KAIT expectations for information risk management for financial institutions using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "BAIT/KAIT"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "BAIT/KAIT",
                  "v": "Aug 2021",
                  "cl": "§5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that BAIT/KAIT §5 (Identity & access management) is enforced — Splunk UC-22.28.16: Information risk management for financial institutions.",
                  "ea": "Saved search 'UC-22.28.16' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.17",
              "n": "IT operations governance evidence (BAIT/KAIT)",
              "c": "medium",
              "f": "expert",
              "v": "Demonstrates alignment with BAIT/KAIT expectations for it operations governance evidence using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=itsi_summary OR index=ops sourcetype=\"linux:audit\" earliest=-24h\n| stats count by host, sourcetype\n| lookup bait_it_ops_scope.csv host OUTPUT critical\n| where critical=\"true\" AND count<10",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary OR index=ops sourcetype=\"linux:audit\" earliest=-24h\n| stats count by host, sourcetype\n| lookup bait_it_ops_scope.csv host OUTPUT critical\n| where critical=\"true\" AND count<10\n```\n\nUnderstanding this SPL\n\n**IT operations governance evidence (BAIT/KAIT)** — Demonstrates alignment with BAIT/KAIT expectations for it operations governance evidence using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary, ops; **sourcetype**: linux:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary, index=ops, sourcetype=\"linux:audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where critical=\"true\" AND count<10` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IT operations governance evidence (BAIT/KAIT)** — Demonstrates alignment with BAIT/KAIT expectations for it operations governance evidence using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BAIT/KAIT expectations for it operations governance evidence using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "BAIT/KAIT"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.dest | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "BAIT/KAIT",
                  "v": "Aug 2021",
                  "cl": "§9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that BAIT/KAIT §9 (ICT operations management) is enforced — Splunk UC-22.28.17: IT operations governance evidence.",
                  "ea": "Saved search 'UC-22.28.17' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.18",
              "n": "Outsourcing management for banks and insurers (BAIT/KAIT)",
              "c": "low",
              "f": "advanced",
              "v": "Demonstrates alignment with BAIT/KAIT expectations for outsourcing management for banks and insurers using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=snow sourcetype=\"snow:chg\" category=\"Outsourcing\" earliest=-30d\n| stats values(state) as st by number, vendor\n| where mvcount(mvfilter(match(st,\"Closed\"))) = 0 AND now() > relative_time(_time,\"+30d@d\")\n| table number, vendor, st",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=snow sourcetype=\"snow:chg\" category=\"Outsourcing\" earliest=-30d\n| stats values(state) as st by number, vendor\n| where mvcount(mvfilter(match(st,\"Closed\"))) = 0 AND now() > relative_time(_time,\"+30d@d\")\n| table number, vendor, st\n```\n\nUnderstanding this SPL\n\n**Outsourcing management for banks and insurers (BAIT/KAIT)** — Demonstrates alignment with BAIT/KAIT expectations for outsourcing management for banks and insurers using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: snow; **sourcetype**: snow:chg. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=snow, sourcetype=\"snow:chg\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by number, vendor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mvcount(mvfilter(match(st,\"Closed\"))) = 0 AND now() > relative_time(_time,\"+30d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Outsourcing management for banks and insurers (BAIT/KAIT)**): table number, vendor, st\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Outsourcing management for banks and insurers (BAIT/KAIT)** — Demonstrates alignment with BAIT/KAIT expectations for outsourcing management for banks and insurers using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BAIT/KAIT expectations for outsourcing management for banks and insurers using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "BAIT/KAIT"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "BAIT/KAIT",
                  "v": "Aug 2021",
                  "cl": "§5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that BAIT/KAIT §5 (Identity & access management) is enforced — Splunk UC-22.28.18: Outsourcing management for banks and insurers.",
                  "ea": "Saved search 'UC-22.28.18' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.19",
              "n": "User access management for banking systems (BAIT/KAIT)",
              "c": "critical",
              "f": "intermediate",
              "v": "Demonstrates alignment with BAIT/KAIT expectations for user access management for banking systems using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4720,4738) earliest=-24h\n| stats dc(ComputerName) as hosts by Account_Name, Privilege_List\n| lookup bait_user_access_scope.csv Account_Name OUTPUT role\n| where isnotnull(role) AND hosts>15",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Vulnerabilities](https://docs.splunk.com/Documentation/CIM/latest/User/Vulnerabilities)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4720,4738) earliest=-24h\n| stats dc(ComputerName) as hosts by Account_Name, Privilege_List\n| lookup bait_user_access_scope.csv Account_Name OUTPUT role\n| where isnotnull(role) AND hosts>15\n```\n\nUnderstanding this SPL\n\n**User access management for banking systems (BAIT/KAIT)** — Demonstrates alignment with BAIT/KAIT expectations for user access management for banking systems using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Account_Name, Privilege_List** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(role) AND hosts>15` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**User access management for banking systems (BAIT/KAIT)** — Demonstrates alignment with BAIT/KAIT expectations for user access management for banking systems using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Vulnerabilities.Vulnerabilities` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BAIT/KAIT expectations for user access management for banking systems using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "BAIT/KAIT"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Vulnerabilities.Vulnerabilities by Vulnerabilities.severity, Vulnerabilities.signature, Vulnerabilities.dest | sort - count",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "BAIT/KAIT",
                  "v": "Aug 2021",
                  "cl": "§5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that BAIT/KAIT §5 (Identity & access management) is enforced — Splunk UC-22.28.19: User access management for banking systems.",
                  "ea": "Saved search 'UC-22.28.19' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.28.20",
              "n": "Critical infrastructure reporting for insurance sector (BAIT/KAIT)",
              "c": "critical",
              "f": "beginner",
              "v": "Demonstrates alignment with BAIT/KAIT expectations for critical infrastructure reporting for insurance sector using auditable Splunk evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence",
              "q": "index=grc sourcetype=\"kritis:finance_report\" earliest=-365d\n| stats latest(submitted) as sub by institution_id, period\n| where sub!=\"true\"\n| table institution_id, period, sub",
              "m": "(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.",
              "z": "Matrix (host x module), Bar chart (severity), Table (controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk OT Security Add-on](https://splunkbase.splunk.com/app/5151), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `de_kritis_assets.csv` with sector classification and KRITIS identifiers. (2) Link ITSI services to KRITIS objects for business impact overlays. (3) Export monthly control posture snapshots for BSI engagement. (4) Tune thresholds with enterprise risk management and OT security owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"kritis:finance_report\" earliest=-365d\n| stats latest(submitted) as sub by institution_id, period\n| where sub!=\"true\"\n| table institution_id, period, sub\n```\n\nUnderstanding this SPL\n\n**Critical infrastructure reporting for insurance sector (BAIT/KAIT)** — Demonstrates alignment with BAIT/KAIT expectations for critical infrastructure reporting for insurance sector using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: kritis:finance_report. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"kritis:finance_report\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by institution_id, period** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where sub!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Critical infrastructure reporting for insurance sector (BAIT/KAIT)**): table institution_id, period, sub\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Critical infrastructure reporting for insurance sector (BAIT/KAIT)** — Demonstrates alignment with BAIT/KAIT expectations for critical infrastructure reporting for insurance sector using auditable Splunk evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, critical infrastructure system logs, BSI IT-Grundschutz compliance scans, SIEM correlation events, vulnerability scan outputs, change management records, incident response logs, security audit evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk OT Security Add-on (5151), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix (host x module), Bar chart (severity), Table (controls).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI), Splunk OT Security Add-on",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We show alignment with BAIT/KAIT expectations for critical infrastructure reporting for insurance sector using auditable Splunk evidence. That matches the critical-infrastructure and reporting work we are asked to do in the region we operate in.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "BAIT/KAIT"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "BAIT/KAIT",
                  "v": "Aug 2021",
                  "cl": "§5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that BAIT/KAIT §5 (Identity & access management) is enforced — Splunk UC-22.28.20: Critical infrastructure reporting for insurance sector.",
                  "ea": "Saved search 'UC-22.28.20' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "ta_link": {
                "name": "OT Security Add-on for Splunk",
                "id": 5151,
                "url": "https://splunkbase.splunk.com/app/5151"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 20,
            "none": 0
          }
        },
        {
          "i": "22.29",
          "n": "APAC Data Protection",
          "u": [
            {
              "i": "22.29.1",
              "n": "China PIPL Art.38 localization boundary monitoring (PIPL Art.38; ASEAN CBPR)",
              "c": "critical",
              "f": "advanced",
              "v": "Provides APAC privacy supervisory evidence for china pipl art.38 localization boundary monitoring aligned with PIPL Art.38; ASEAN CBPR.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=db_audit sourcetype=\"oracle:audit\" Action=\"SELECT\" earliest=-36h\n| stats count by DBUserName, Obj_Name, client_ip\n| iplocation client_ip\n| where Country!=\"China\" AND match(Obj_Name,\"(?i)resident\")\n| sort - count",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=db_audit sourcetype=\"oracle:audit\" Action=\"SELECT\" earliest=-36h\n| stats count by DBUserName, Obj_Name, client_ip\n| iplocation client_ip\n| where Country!=\"China\" AND match(Obj_Name,\"(?i)resident\")\n| sort - count\n```\n\nUnderstanding this SPL\n\n**China PIPL Art.38 localization boundary monitoring (PIPL Art.38; ASEAN CBPR)** — Provides APAC privacy supervisory evidence for china pipl art.38 localization boundary monitoring aligned with PIPL Art.38; ASEAN CBPR.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: db_audit; **sourcetype**: oracle:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=db_audit, sourcetype=\"oracle:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by DBUserName, Obj_Name, client_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **China PIPL Art.38 localization boundary monitoring (PIPL Art.38; ASEAN CBPR)**): iplocation client_ip\n• Filters the current rows with `where Country!=\"China\" AND match(Obj_Name,\"(?i)resident\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PIPL Art.38"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.38",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.38 (Cross-border transfer conditions) is enforced — Splunk UC-22.29.1: China PIPL Art.38 localization boundary monitoring.",
                  "ea": "Saved search 'UC-22.29.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.2",
              "n": "Cross-border transfer impact assessment logging (PIPL Art.38; ASEAN CBPR)",
              "c": "high",
              "f": "intermediate",
              "v": "Provides APAC privacy supervisory evidence for cross-border transfer impact assessment logging aligned with PIPL Art.38; ASEAN CBPR.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=app sourcetype=\"api:gateway\" region=\"APAC\" earliest=-37h\n| stats sum(bytes_out) as ob by dest_region, tenant\n| where dest_region=\"cn-north-1\" AND tenant!=\"localization_ok\"\n| eval pipl_art38_review=\"required\"\n| sort - ob",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"api:gateway\" region=\"APAC\" earliest=-37h\n| stats sum(bytes_out) as ob by dest_region, tenant\n| where dest_region=\"cn-north-1\" AND tenant!=\"localization_ok\"\n| eval pipl_art38_review=\"required\"\n| sort - ob\n```\n\nUnderstanding this SPL\n\n**Cross-border transfer impact assessment logging (PIPL Art.38; ASEAN CBPR)** — Provides APAC privacy supervisory evidence for cross-border transfer impact assessment logging aligned with PIPL Art.38; ASEAN CBPR.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: api:gateway. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"api:gateway\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest_region, tenant** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where dest_region=\"cn-north-1\" AND tenant!=\"localization_ok\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **pipl_art38_review** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PIPL Art.38"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.38",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.38 (Cross-border transfer conditions) is enforced — Splunk UC-22.29.2: Cross-border transfer impact assessment logging.",
                  "ea": "Saved search 'UC-22.29.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.3",
              "n": "ASEAN CBPR participation evidence (PIPL Art.38; ASEAN CBPR)",
              "c": "medium",
              "f": "beginner",
              "v": "Provides APAC privacy supervisory evidence for asean cbpr participation evidence aligned with PIPL Art.38; ASEAN CBPR.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=s3 sourcetype=\"aws:cloudtrail\" earliest=-30d eventName=\"PutObject\"\n| spath path=requestParameters.key\n| search key=\"*apac-pii*\"\n| stats dc(awsRegion) as regions by userIdentity.arn\n| where regions>1",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=s3 sourcetype=\"aws:cloudtrail\" earliest=-30d eventName=\"PutObject\"\n| spath path=requestParameters.key\n| search key=\"*apac-pii*\"\n| stats dc(awsRegion) as regions by userIdentity.arn\n| where regions>1\n```\n\nUnderstanding this SPL\n\n**ASEAN CBPR participation evidence (PIPL Art.38; ASEAN CBPR)** — Provides APAC privacy supervisory evidence for asean cbpr participation evidence aligned with PIPL Art.38; ASEAN CBPR.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: s3; **sourcetype**: aws:cloudtrail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=s3, sourcetype=\"aws:cloudtrail\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts structured paths (JSON/XML) with `spath`.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where regions>1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PIPL Art.38"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.38",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.38 (Cross-border transfer conditions) is enforced — Splunk UC-22.29.3: ASEAN CBPR participation evidence.",
                  "ea": "Saved search 'UC-22.29.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.4",
              "n": "Transfer mechanism validation before export (PIPL Art.38; ASEAN CBPR)",
              "c": "critical",
              "f": "expert",
              "v": "Provides APAC privacy supervisory evidence for transfer mechanism validation before export aligned with PIPL Art.38; ASEAN CBPR.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=dlp sourcetype=\"symantec:dlp\" OR sourcetype=\"ms:o365:dlp\" earliest=-7d\n| stats count by policy_name, destination, action\n| lookup asean_cbpr_participants.csv destination OUTPUT participant\n| where action=\"BLOCK\" AND isnull(participant)",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dlp sourcetype=\"symantec:dlp\" OR sourcetype=\"ms:o365:dlp\" earliest=-7d\n| stats count by policy_name, destination, action\n| lookup asean_cbpr_participants.csv destination OUTPUT participant\n| where action=\"BLOCK\" AND isnull(participant)\n```\n\nUnderstanding this SPL\n\n**Transfer mechanism validation before export (PIPL Art.38; ASEAN CBPR)** — Provides APAC privacy supervisory evidence for transfer mechanism validation before export aligned with PIPL Art.38; ASEAN CBPR.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dlp; **sourcetype**: symantec:dlp, ms:o365:dlp. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dlp, sourcetype=\"symantec:dlp\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by policy_name, destination, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where action=\"BLOCK\" AND isnull(participant)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) 
as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PIPL Art.38"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.38",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.38 (Cross-border transfer conditions) is enforced — Splunk UC-22.29.4: Transfer mechanism validation before export.",
                  "ea": "Saved search 'UC-22.29.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.5",
              "n": "Data residency monitoring for regulated datasets (PIPL Art.38; ASEAN CBPR)",
              "c": "critical",
              "f": "advanced",
              "v": "Provides APAC privacy supervisory evidence for data residency monitoring for regulated datasets aligned with PIPL Art.38; ASEAN CBPR.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=db_connect sourcetype=\"sql:transfer\" earliest=-24h\n| stats sum(rowcount) as rows by src_cluster, dest_cluster\n| lookup data_residency_matrix.csv dest_cluster OUTPUT allowed_from_apac\n| where allowed_from_apac=\"false\"\n| sort - rows",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=db_connect sourcetype=\"sql:transfer\" earliest=-24h\n| stats sum(rowcount) as rows by src_cluster, dest_cluster\n| lookup data_residency_matrix.csv dest_cluster OUTPUT allowed_from_apac\n| where allowed_from_apac=\"false\"\n| sort - rows\n```\n\nUnderstanding this SPL\n\n**Data residency monitoring for regulated datasets (PIPL Art.38; ASEAN CBPR)** — Provides APAC privacy supervisory evidence for data residency monitoring for regulated datasets aligned with PIPL Art.38; ASEAN CBPR.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: db_connect; **sourcetype**: sql:transfer. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=db_connect, sourcetype=\"sql:transfer\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_cluster, dest_cluster** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where allowed_from_apac=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PIPL Art.38"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.38",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.38 (Cross-border transfer conditions) is enforced — Splunk UC-22.29.5: Data residency monitoring for regulated datasets.",
                  "ea": "Saved search 'UC-22.29.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.6",
              "n": "Third-country adequacy decision tracking (PIPL Art.38; ASEAN CBPR)",
              "c": "high",
              "f": "intermediate",
              "v": "Provides APAC privacy supervisory evidence for third-country adequacy decision tracking aligned with PIPL Art.38; ASEAN CBPR.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=grc sourcetype=\"transfer:mechanism\" earliest=-365d\n| stats latest(mechanism) as mech by vendor, dest_country\n| lookup adequacy_decisions_apac.csv dest_country OUTPUT adequate\n| where adequate!=\"true\" AND mech!=\"SCC\"",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"transfer:mechanism\" earliest=-365d\n| stats latest(mechanism) as mech by vendor, dest_country\n| lookup adequacy_decisions_apac.csv dest_country OUTPUT adequate\n| where adequate!=\"true\" AND mech!=\"SCC\"\n```\n\nUnderstanding this SPL\n\n**Third-country adequacy decision tracking (PIPL Art.38; ASEAN CBPR)** — Provides APAC privacy supervisory evidence for third-country adequacy decision tracking aligned with PIPL Art.38; ASEAN CBPR.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: transfer:mechanism. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"transfer:mechanism\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vendor, dest_country** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where adequate!=\"true\" AND mech!=\"SCC\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PIPL Art.38"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.38",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.38 (Cross-border transfer conditions) is enforced — Splunk UC-22.29.6: Third-country adequacy decision tracking.",
                  "ea": "Saved search 'UC-22.29.6' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.7",
              "n": "Breach discovery and severity classification (APAC breach laws)",
              "c": "medium",
              "f": "beginner",
              "v": "Provides APAC privacy supervisory evidence for breach discovery and severity classification aligned with APAC breach laws.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=privacy sourcetype=\"breach:event\" jurisdiction IN (\"SG\",\"TH\",\"JP\",\"KR\",\"CN\") earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval authority=strptime(notified_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval individuals=strptime(notified_individuals_at,\"%Y-%m-%d %H:%M:%S\")\n| eval sla_h=case(jurisdiction=\"SG\",72,jurisdiction=\"TH\",72,jurisdiction=\"JP\",120,1=1,24+7)\n| eval breach=if(isnull(authority) AND (now()-detected)>sla_h*3600,1,0)\n| eval register_line=\"7\"\n| table breach_id, jurisdiction, breach, register_line",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"breach:event\" jurisdiction IN (\"SG\",\"TH\",\"JP\",\"KR\",\"CN\") earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval authority=strptime(notified_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval individuals=strptime(notified_individuals_at,\"%Y-%m-%d %H:%M:%S\")\n| eval sla_h=case(jurisdiction=\"SG\",72,jurisdiction=\"TH\",72,jurisdiction=\"JP\",120,1=1,24+7)\n| eval breach=if(isnull(authority) AND (now()-detected)>sla_h*3600,1,0)\n| eval register_line=\"7\"\n| table breach_id, jurisdiction, breach, register_line\n```\n\nUnderstanding this SPL\n\n**Breach discovery and severity classification (APAC breach laws)** — Provides APAC privacy supervisory evidence for breach discovery and severity classification aligned with APAC breach laws.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: breach:event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"breach:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **detected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **authority** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **individuals** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **register_line** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Breach discovery and severity classification (APAC breach laws)**): table breach_id, jurisdiction, breach, register_line\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APAC breach laws"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.26 (Leakage reporting) is enforced — Splunk UC-22.29.7: Breach discovery and severity classification.",
                  "ea": "Saved search 'UC-22.29.7' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "§26WK",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act §26WK (NDB — notifiable data breach) is enforced — Splunk UC-22.29.7: Breach discovery and severity classification.",
                  "ea": "Saved search 'UC-22.29.7' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.15 is enforced — Splunk UC-22.29.7: Breach discovery and severity classification.",
                  "ea": "Saved search 'UC-22.29.7' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§26A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SG PDPA §26A (Data breach notification) is enforced — Splunk UC-22.29.7: Breach discovery and severity classification.",
                  "ea": "Saved search 'UC-22.29.7' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.8",
              "n": "Regulator notification timeline compliance by jurisdiction (APAC breach laws)",
              "c": "low",
              "f": "expert",
              "v": "Provides APAC privacy supervisory evidence for regulator notification timeline compliance by jurisdiction aligned with APAC breach laws.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=privacy sourcetype=\"breach:event\" jurisdiction IN (\"SG\",\"TH\",\"JP\",\"KR\",\"CN\") earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval authority=strptime(notified_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval individuals=strptime(notified_individuals_at,\"%Y-%m-%d %H:%M:%S\")\n| eval sla_h=case(jurisdiction=\"SG\",72,jurisdiction=\"TH\",72,jurisdiction=\"JP\",120,1=1,24+8)\n| eval breach=if(isnull(authority) AND (now()-detected)>sla_h*3600,1,0)\n| eval register_line=\"8\"\n| table breach_id, jurisdiction, breach, register_line",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"breach:event\" jurisdiction IN (\"SG\",\"TH\",\"JP\",\"KR\",\"CN\") earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval authority=strptime(notified_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval individuals=strptime(notified_individuals_at,\"%Y-%m-%d %H:%M:%S\")\n| eval sla_h=case(jurisdiction=\"SG\",72,jurisdiction=\"TH\",72,jurisdiction=\"JP\",120,1=1,24+8)\n| eval breach=if(isnull(authority) AND (now()-detected)>sla_h*3600,1,0)\n| eval register_line=\"8\"\n| table breach_id, jurisdiction, breach, register_line\n```\n\nUnderstanding this SPL\n\n**Regulator notification timeline compliance by jurisdiction (APAC breach laws)** — Provides APAC privacy supervisory evidence for regulator notification timeline compliance by jurisdiction aligned with APAC breach laws.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: breach:event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"breach:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **detected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **authority** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **individuals** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **register_line** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Regulator notification timeline compliance by jurisdiction (APAC breach laws)**): table breach_id, jurisdiction, breach, register_line\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APAC breach laws"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.26 (Leakage reporting) is enforced — Splunk UC-22.29.8: Regulator notification timeline compliance by jurisdiction.",
                  "ea": "Saved search 'UC-22.29.8' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "§26WK",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act §26WK (NDB — notifiable data breach) is enforced — Splunk UC-22.29.8: Regulator notification timeline compliance by jurisdiction.",
                  "ea": "Saved search 'UC-22.29.8' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.15 is enforced — Splunk UC-22.29.8: Regulator notification timeline compliance by jurisdiction.",
                  "ea": "Saved search 'UC-22.29.8' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§26A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SG PDPA §26A (Data breach notification) is enforced — Splunk UC-22.29.8: Regulator notification timeline compliance by jurisdiction.",
                  "ea": "Saved search 'UC-22.29.8' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.9",
              "n": "Affected individual notification evidence (APAC breach laws)",
              "c": "critical",
              "f": "advanced",
              "v": "Provides APAC privacy supervisory evidence for affected individual notification evidence aligned with APAC breach laws.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=privacy sourcetype=\"breach:event\" jurisdiction IN (\"SG\",\"TH\",\"JP\",\"KR\",\"CN\") earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval authority=strptime(notified_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval individuals=strptime(notified_individuals_at,\"%Y-%m-%d %H:%M:%S\")\n| eval sla_h=case(jurisdiction=\"SG\",72,jurisdiction=\"TH\",72,jurisdiction=\"JP\",120,1=1,24+9)\n| eval breach=if(isnull(authority) AND (now()-detected)>sla_h*3600,1,0)\n| eval register_line=\"9\"\n| table breach_id, jurisdiction, breach, register_line",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"breach:event\" jurisdiction IN (\"SG\",\"TH\",\"JP\",\"KR\",\"CN\") earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval authority=strptime(notified_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval individuals=strptime(notified_individuals_at,\"%Y-%m-%d %H:%M:%S\")\n| eval sla_h=case(jurisdiction=\"SG\",72,jurisdiction=\"TH\",72,jurisdiction=\"JP\",120,1=1,24+9)\n| eval breach=if(isnull(authority) AND (now()-detected)>sla_h*3600,1,0)\n| eval register_line=\"9\"\n| table breach_id, jurisdiction, breach, register_line\n```\n\nUnderstanding this SPL\n\n**Affected individual notification evidence (APAC breach laws)** — Provides APAC privacy supervisory evidence for affected individual notification evidence aligned with APAC breach laws.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: breach:event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"breach:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **detected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **authority** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **individuals** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **register_line** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Affected individual notification evidence (APAC breach laws)**): table breach_id, jurisdiction, breach, register_line\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APAC breach laws"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "§26WK",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act §26WK (NDB — notifiable data breach) is enforced — Splunk UC-22.29.9: Affected individual notification evidence.",
                  "ea": "Saved search 'UC-22.29.9' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.15 is enforced — Splunk UC-22.29.9: Affected individual notification evidence.",
                  "ea": "Saved search 'UC-22.29.9' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§26A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SG PDPA §26A (Data breach notification) is enforced — Splunk UC-22.29.9: Affected individual notification evidence.",
                  "ea": "Saved search 'UC-22.29.9' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.10",
              "n": "Authority reporting package completeness (APAC breach laws)",
              "c": "high",
              "f": "intermediate",
              "v": "Provides APAC privacy supervisory evidence for authority reporting package completeness aligned with APAC breach laws.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=privacy sourcetype=\"breach:event\" jurisdiction IN (\"SG\",\"TH\",\"JP\",\"KR\",\"CN\") earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval authority=strptime(notified_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval individuals=strptime(notified_individuals_at,\"%Y-%m-%d %H:%M:%S\")\n| eval sla_h=case(jurisdiction=\"SG\",72,jurisdiction=\"TH\",72,jurisdiction=\"JP\",120,1=1,24+10)\n| eval breach=if(isnull(authority) AND (now()-detected)>sla_h*3600,1,0)\n| eval register_line=\"10\"\n| table breach_id, jurisdiction, breach, register_line",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"breach:event\" jurisdiction IN (\"SG\",\"TH\",\"JP\",\"KR\",\"CN\") earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval authority=strptime(notified_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval individuals=strptime(notified_individuals_at,\"%Y-%m-%d %H:%M:%S\")\n| eval sla_h=case(jurisdiction=\"SG\",72,jurisdiction=\"TH\",72,jurisdiction=\"JP\",120,1=1,24+10)\n| eval breach=if(isnull(authority) AND (now()-detected)>sla_h*3600,1,0)\n| eval register_line=\"10\"\n| table breach_id, jurisdiction, breach, register_line\n```\n\nUnderstanding this SPL\n\n**Authority reporting package completeness (APAC breach laws)** — Provides APAC privacy supervisory evidence for authority reporting package completeness aligned with APAC breach laws.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: breach:event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"breach:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **detected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **authority** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **individuals** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **register_line** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Authority reporting package completeness (APAC breach laws)**): table breach_id, jurisdiction, breach, register_line\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APAC breach laws"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.26 (Leakage reporting) is enforced — Splunk UC-22.29.10: Authority reporting package completeness.",
                  "ea": "Saved search 'UC-22.29.10' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "§26WK",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act §26WK (NDB — notifiable data breach) is enforced — Splunk UC-22.29.10: Authority reporting package completeness.",
                  "ea": "Saved search 'UC-22.29.10' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§26A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SG PDPA §26A (Data breach notification) is enforced — Splunk UC-22.29.10: Authority reporting package completeness.",
                  "ea": "Saved search 'UC-22.29.10' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.11",
              "n": "Breach register maintenance and linkage (APAC breach laws)",
              "c": "critical",
              "f": "beginner",
              "v": "Provides APAC privacy supervisory evidence for breach register maintenance and linkage aligned with APAC breach laws.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=privacy sourcetype=\"breach:event\" jurisdiction IN (\"SG\",\"TH\",\"JP\",\"KR\",\"CN\") earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval authority=strptime(notified_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval individuals=strptime(notified_individuals_at,\"%Y-%m-%d %H:%M:%S\")\n| eval sla_h=case(jurisdiction=\"SG\",72,jurisdiction=\"TH\",72,jurisdiction=\"JP\",120,1=1,24+11)\n| eval breach=if(isnull(authority) AND (now()-detected)>sla_h*3600,1,0)\n| eval register_line=\"11\"\n| table breach_id, jurisdiction, breach, register_line",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"breach:event\" jurisdiction IN (\"SG\",\"TH\",\"JP\",\"KR\",\"CN\") earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval authority=strptime(notified_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval individuals=strptime(notified_individuals_at,\"%Y-%m-%d %H:%M:%S\")\n| eval sla_h=case(jurisdiction=\"SG\",72,jurisdiction=\"TH\",72,jurisdiction=\"JP\",120,1=1,24+11)\n| eval breach=if(isnull(authority) AND (now()-detected)>sla_h*3600,1,0)\n| eval register_line=\"11\"\n| table breach_id, jurisdiction, breach, register_line\n```\n\nUnderstanding this SPL\n\n**Breach register maintenance and linkage (APAC breach laws)** — Provides APAC privacy supervisory evidence for breach register maintenance and linkage aligned with APAC breach laws.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: breach:event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"breach:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **detected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **authority** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **individuals** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **register_line** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Breach register maintenance and linkage (APAC breach laws)**): table breach_id, jurisdiction, breach, register_line\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APAC breach laws"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.26 (Leakage reporting) is enforced — Splunk UC-22.29.11: Breach register maintenance and linkage.",
                  "ea": "Saved search 'UC-22.29.11' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "§26WK",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act §26WK (NDB — notifiable data breach) is enforced — Splunk UC-22.29.11: Breach register maintenance and linkage.",
                  "ea": "Saved search 'UC-22.29.11' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.15 is enforced — Splunk UC-22.29.11: Breach register maintenance and linkage.",
                  "ea": "Saved search 'UC-22.29.11' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.12",
              "n": "Remediation and root-cause tracking (APAC breach laws)",
              "c": "low",
              "f": "expert",
              "v": "Provides APAC privacy supervisory evidence for remediation and root-cause tracking aligned with APAC breach laws.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=privacy sourcetype=\"breach:event\" jurisdiction IN (\"SG\",\"TH\",\"JP\",\"KR\",\"CN\") earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval authority=strptime(notified_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval individuals=strptime(notified_individuals_at,\"%Y-%m-%d %H:%M:%S\")\n| eval sla_h=case(jurisdiction=\"SG\",72,jurisdiction=\"TH\",72,jurisdiction=\"JP\",120,1=1,24+12)\n| eval breach=if(isnull(authority) AND (now()-detected)>sla_h*3600,1,0)\n| eval register_line=\"12\"\n| table breach_id, jurisdiction, breach, register_line",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"breach:event\" jurisdiction IN (\"SG\",\"TH\",\"JP\",\"KR\",\"CN\") earliest=-180d\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| eval authority=strptime(notified_authority_at,\"%Y-%m-%d %H:%M:%S\")\n| eval individuals=strptime(notified_individuals_at,\"%Y-%m-%d %H:%M:%S\")\n| eval sla_h=case(jurisdiction=\"SG\",72,jurisdiction=\"TH\",72,jurisdiction=\"JP\",120,1=1,24+12)\n| eval breach=if(isnull(authority) AND (now()-detected)>sla_h*3600,1,0)\n| eval register_line=\"12\"\n| table breach_id, jurisdiction, breach, register_line\n```\n\nUnderstanding this SPL\n\n**Remediation and root-cause tracking (APAC breach laws)** — Provides APAC privacy supervisory evidence for remediation and root-cause tracking aligned with APAC breach laws.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: breach:event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"breach:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **detected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **authority** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **individuals** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **register_line** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Remediation and root-cause tracking (APAC breach laws)**): table breach_id, jurisdiction, breach, register_line\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APAC breach laws"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "§26WK",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act §26WK (NDB — notifiable data breach) is enforced — Splunk UC-22.29.12: Remediation and root-cause tracking.",
                  "ea": "Saved search 'UC-22.29.12' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.15 is enforced — Splunk UC-22.29.12: Remediation and root-cause tracking.",
                  "ea": "Saved search 'UC-22.29.12' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.13",
              "n": "Reasonable security measures per Singapore PDPA (PDPA SG; PIPL; K-ISMS)",
              "c": "critical",
              "f": "advanced",
              "v": "Provides APAC privacy supervisory evidence for reasonable security measures per singapore pdpa aligned with PDPA SG; PIPL; K-ISMS.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=os sourcetype=\"WinEventLog:Security\" EventCode=4663 earliest=-24h\n| stats count by Object_Name, user\n| lookup pdpa_sg_technical_measure.csv Object_Name OUTPUT requires_encryption\n| where requires_encryption=\"true\" AND count>50",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=\"WinEventLog:Security\" EventCode=4663 earliest=-24h\n| stats count by Object_Name, user\n| lookup pdpa_sg_technical_measure.csv Object_Name OUTPUT requires_encryption\n| where requires_encryption=\"true\" AND count>50\n```\n\nUnderstanding this SPL\n\n**Reasonable security measures per Singapore PDPA (PDPA SG; PIPL; K-ISMS)** — Provides APAC privacy supervisory evidence for reasonable security measures per singapore pdpa aligned with PDPA SG; PIPL; K-ISMS.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=\"WinEventLog:Security\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Object_Name, user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where requires_encryption=\"true\" AND count>50` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PDPA SG"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SG PDPA §24 (Protection of personal data obligation) is enforced — Splunk UC-22.29.13: Reasonable security measures per Singapore PDPA.",
                  "ea": "Saved search 'UC-22.29.13' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.14",
              "n": "PIPL cybersecurity protection obligations monitoring (PDPA SG; PIPL; K-ISMS)",
              "c": "high",
              "f": "intermediate",
              "v": "Provides APAC privacy supervisory evidence for pipl cybersecurity protection obligations monitoring aligned with PDPA SG; PIPL; K-ISMS.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Workload=SharePoint earliest=-7d\n| stats dc(SiteUrl) as sites by UserId, Operation\n| where Operation=\"FileDownloaded\" AND sites>40",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Workload=SharePoint earliest=-7d\n| stats dc(SiteUrl) as sites by UserId, Operation\n| where Operation=\"FileDownloaded\" AND sites>40\n```\n\nUnderstanding this SPL\n\n**PIPL cybersecurity protection obligations monitoring (PDPA SG; PIPL; K-ISMS)** — Provides APAC privacy supervisory evidence for pipl cybersecurity protection obligations monitoring aligned with PDPA SG; PIPL; K-ISMS.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by UserId, Operation** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where Operation=\"FileDownloaded\" AND sites>40` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PDPA SG"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SG PDPA §24 (Protection of personal data obligation) is enforced — Splunk UC-22.29.14: PIPL cybersecurity protection obligations monitoring.",
                  "ea": "Saved search 'UC-22.29.14' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.15",
              "n": "K-ISMS certification maintenance evidence (PDPA SG; PIPL; K-ISMS)",
              "c": "medium",
              "f": "beginner",
              "v": "Provides APAC privacy supervisory evidence for k-isms certification maintenance evidence aligned with PDPA SG; PIPL; K-ISMS.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=grc sourcetype=\"kisms:evidence\" earliest=-365d\n| stats latest(status) as st by control_id, system\n| where st!=\"EvidenceUploaded\" AND match(control_id,\"AC-[0-9]+\")",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"kisms:evidence\" earliest=-365d\n| stats latest(status) as st by control_id, system\n| where st!=\"EvidenceUploaded\" AND match(control_id,\"AC-[0-9]+\")\n```\n\nUnderstanding this SPL\n\n**K-ISMS certification maintenance evidence (PDPA SG; PIPL; K-ISMS)** — Provides APAC privacy supervisory evidence for k-isms certification maintenance evidence aligned with PDPA SG; PIPL; K-ISMS.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: kisms:evidence. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"kisms:evidence\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by control_id, system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st!=\"EvidenceUploaded\" AND match(control_id,\"AC-[0-9]+\")` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PDPA SG"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SG PDPA §24 (Protection of personal data obligation) is enforced — Splunk UC-22.29.15: K-ISMS certification maintenance evidence.",
                  "ea": "Saved search 'UC-22.29.15' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.16",
              "n": "Technical measures adequacy review (PDPA SG; PIPL; K-ISMS)",
              "c": "low",
              "f": "expert",
              "v": "Provides APAC privacy supervisory evidence for technical measures adequacy review aligned with PDPA SG; PIPL; K-ISMS.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=edr sourcetype=\"crowdstrike:hosts\" OR sourcetype=\"DefenderATP:Host\" earliest=-24h\n| stats latest(EncryptionStatus) as enc by hostname, os_version\n| where enc!=\"Encrypted\"",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=\"crowdstrike:hosts\" OR sourcetype=\"DefenderATP:Host\" earliest=-24h\n| stats latest(EncryptionStatus) as enc by hostname, os_version\n| where enc!=\"Encrypted\"\n```\n\nUnderstanding this SPL\n\n**Technical measures adequacy review (PDPA SG; PIPL; K-ISMS)** — Provides APAC privacy supervisory evidence for technical measures adequacy review aligned with PDPA SG; PIPL; K-ISMS.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: crowdstrike:hosts, DefenderATP:Host. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=\"crowdstrike:hosts\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname, os_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where enc!=\"Encrypted\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PDPA SG"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SG PDPA §24 (Protection of personal data obligation) is enforced — Splunk UC-22.29.16: Technical measures adequacy review.",
                  "ea": "Saved search 'UC-22.29.16' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.17",
              "n": "Encryption and pseudonymization control evidence (PDPA SG; PIPL; K-ISMS)",
              "c": "critical",
              "f": "advanced",
              "v": "Provides APAC privacy supervisory evidence for encryption and pseudonymization control evidence aligned with PDPA SG; PIPL; K-ISMS.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=kms sourcetype=\"vault:audit\" earliest=-7d\n| stats count by key_path, operation\n| where operation=\"decrypt\" AND match(key_path,\"(?i)pii\")\n| sort - count",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kms sourcetype=\"vault:audit\" earliest=-7d\n| stats count by key_path, operation\n| where operation=\"decrypt\" AND match(key_path,\"(?i)pii\")\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Encryption and pseudonymization control evidence (PDPA SG; PIPL; K-ISMS)** — Provides APAC privacy supervisory evidence for encryption and pseudonymization control evidence aligned with PDPA SG; PIPL; K-ISMS.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kms; **sourcetype**: vault:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kms, sourcetype=\"vault:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by key_path, operation** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where operation=\"decrypt\" AND match(key_path,\"(?i)pii\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR",
                "PDPA SG"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SG PDPA §24 (Protection of personal data obligation) is enforced — Splunk UC-22.29.17: Encryption and pseudonymization control evidence.",
                  "ea": "Saved search 'UC-22.29.17' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.18",
              "n": "Access control validation for personal data stores (PDPA SG; PIPL; K-ISMS)",
              "c": "critical",
              "f": "intermediate",
              "v": "Provides APAC privacy supervisory evidence for access control validation for personal data stores aligned with PDPA SG; PIPL; K-ISMS.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=linux sourcetype=\"linux:secure\" earliest=-24h\n| search appname=\"sudo\"\n| stats count by user, command\n| lookup apac_privileged_cmd_allowlist.csv command OUTPUT allowed\n| where allowed!=\"true\"",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=linux sourcetype=\"linux:secure\" earliest=-24h\n| search appname=\"sudo\"\n| stats count by user, command\n| lookup apac_privileged_cmd_allowlist.csv command OUTPUT allowed\n| where allowed!=\"true\"\n```\n\nUnderstanding this SPL\n\n**Access control validation for personal data stores (PDPA SG; PIPL; K-ISMS)** — Provides APAC privacy supervisory evidence for access control validation for personal data stores aligned with PDPA SG; PIPL; K-ISMS.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: linux; **sourcetype**: linux:secure. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=linux, sourcetype=\"linux:secure\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, command** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where allowed!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR",
                "PDPA SG"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "SG PDPA",
                  "v": "2020 amended",
                  "cl": "§24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SG PDPA §24 (Protection of personal data obligation) is enforced — Splunk UC-22.29.18: Access control validation for personal data stores.",
                  "ea": "Saved search 'UC-22.29.18' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.19",
              "n": "DPO appointment and coverage compliance (APPI; PDPA Thailand)",
              "c": "medium",
              "f": "beginner",
              "v": "Provides APAC privacy supervisory evidence for dpo appointment and coverage compliance aligned with APPI; PDPA Thailand.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=hr sourcetype=\"privacy:dpo\" earliest=-730d\n| stats latest(appointed) as ap by entity, jurisdiction\n| where isnull(ap) AND employee_count>5000",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=\"privacy:dpo\" earliest=-730d\n| stats latest(appointed) as ap by entity, jurisdiction\n| where isnull(ap) AND employee_count>5000\n```\n\nUnderstanding this SPL\n\n**DPO appointment and coverage compliance (APPI; PDPA Thailand)** — Provides APAC privacy supervisory evidence for dpo appointment and coverage compliance aligned with APPI; PDPA Thailand.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: privacy:dpo. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=\"privacy:dpo\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by entity, jurisdiction** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where isnull(ap) AND employee_count>5000` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APPI"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.23 (Security control action) is enforced — Splunk UC-22.29.19: DPO appointment and coverage compliance.",
                  "ea": "Saved search 'UC-22.29.19' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.20",
              "n": "DPIA completion for high-risk processing (APPI; PDPA Thailand)",
              "c": "low",
              "f": "expert",
              "v": "Provides APAC privacy supervisory evidence for dpia completion for high-risk processing aligned with APPI; PDPA Thailand.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=privacy sourcetype=\"dpia:register\" earliest=-730d risk_score>6\n| stats latest(status) as st by processing_activity_id\n| where st!=\"Completed\"",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"dpia:register\" earliest=-730d risk_score>6\n| stats latest(status) as st by processing_activity_id\n| where st!=\"Completed\"\n```\n\nUnderstanding this SPL\n\n**DPIA completion for high-risk processing (APPI; PDPA Thailand)** — Provides APAC privacy supervisory evidence for dpia completion for high-risk processing aligned with APPI; PDPA Thailand.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: dpia:register. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"dpia:register\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by processing_activity_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st!=\"Completed\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APPI"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.23 (Security control action) is enforced — Splunk UC-22.29.20: DPIA completion for high-risk processing.",
                  "ea": "Saved search 'UC-22.29.20' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.21",
              "n": "Privacy impact assessment tracking (APPI; PDPA Thailand)",
              "c": "critical",
              "f": "advanced",
              "v": "Provides APAC privacy supervisory evidence for privacy impact assessment tracking aligned with APPI; PDPA Thailand.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=grc sourcetype=\"pia:workflow\" earliest=-365d\n| stats min(_time) as opened, max(_time) as updated by pia_id\n| eval duration_days=round((updated-opened)/86400,1)\n| where duration_days>90",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"pia:workflow\" earliest=-365d\n| stats min(_time) as opened, max(_time) as updated by pia_id\n| eval duration_days=round((updated-opened)/86400,1)\n| where duration_days>90\n```\n\nUnderstanding this SPL\n\n**Privacy impact assessment tracking (APPI; PDPA Thailand)** — Provides APAC privacy supervisory evidence for privacy impact assessment tracking aligned with APPI; PDPA Thailand.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: pia:workflow. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"pia:workflow\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by pia_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **duration_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where duration_days>90` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APPI"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.23 (Security control action) is enforced — Splunk UC-22.29.21: Privacy impact assessment tracking.",
                  "ea": "Saved search 'UC-22.29.21' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.22",
              "n": "DPO activity reporting metrics (APPI; PDPA Thailand)",
              "c": "high",
              "f": "intermediate",
              "v": "Provides APAC privacy supervisory evidence for dpo activity reporting metrics aligned with APPI; PDPA Thailand.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=privacy sourcetype=\"dpo:activity\" earliest=-90d\n| stats count by dpo_id, activity_type\n| where activity_type=\"RegulatorConsultation\" AND count<1",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"dpo:activity\" earliest=-90d\n| stats count by dpo_id, activity_type\n| where activity_type=\"RegulatorConsultation\" AND count<1\n```\n\nUnderstanding this SPL\n\n**DPO activity reporting metrics (APPI; PDPA Thailand)** — Provides APAC privacy supervisory evidence for dpo activity reporting metrics aligned with APPI; PDPA Thailand.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: dpo:activity. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"dpo:activity\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dpo_id, activity_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where activity_type=\"RegulatorConsultation\" AND count<1` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APPI"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.23 (Security control action) is enforced — Splunk UC-22.29.22: DPO activity reporting metrics.",
                  "ea": "Saved search 'UC-22.29.22' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.23",
              "n": "Regulatory consultation trigger monitoring (APPI; PDPA Thailand)",
              "c": "medium",
              "f": "beginner",
              "v": "Provides APAC privacy supervisory evidence for regulatory consultation trigger monitoring aligned with APPI; PDPA Thailand.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=ticketing sourcetype=\"snow:sc_req_item\" cat_item=\"*DPIA*\" earliest=-180d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval closed=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| eval age=round((now()-opened)/86400,0)\n| where isnull(closed) AND age>60",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ticketing sourcetype=\"snow:sc_req_item\" cat_item=\"*DPIA*\" earliest=-180d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval closed=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| eval age=round((now()-opened)/86400,0)\n| where isnull(closed) AND age>60\n```\n\nUnderstanding this SPL\n\n**Regulatory consultation trigger monitoring (APPI; PDPA Thailand)** — Provides APAC privacy supervisory evidence for regulatory consultation trigger monitoring aligned with APPI; PDPA Thailand.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ticketing; **sourcetype**: snow:sc_req_item. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ticketing, sourcetype=\"snow:sc_req_item\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **closed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(closed) AND age>60` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APPI"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.23 (Security control action) is enforced — Splunk UC-22.29.23: Regulatory consultation trigger monitoring.",
                  "ea": "Saved search 'UC-22.29.23' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.24",
              "n": "Annual privacy program assessment evidence (APPI; PDPA Thailand)",
              "c": "low",
              "f": "expert",
              "v": "Provides APAC privacy supervisory evidence for annual privacy program assessment evidence aligned with APPI; PDPA Thailand.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=audit sourcetype=\"privacy:annual_review\" earliest=-400d\n| stats latest(score) as s by org_unit, year\n| where s<85",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"privacy:annual_review\" earliest=-400d\n| stats latest(score) as s by org_unit, year\n| where s<85\n```\n\nUnderstanding this SPL\n\n**Annual privacy program assessment evidence (APPI; PDPA Thailand)** — Provides APAC privacy supervisory evidence for annual privacy program assessment evidence aligned with APPI; PDPA Thailand.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: privacy:annual_review. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"privacy:annual_review\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by org_unit, year** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where s<85` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APPI"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.23 (Security control action) is enforced — Splunk UC-22.29.24: Annual privacy program assessment evidence.",
                  "ea": "Saved search 'UC-22.29.24' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.25",
              "n": "Separate consent for sensitive personal information (PIPL; PDPA)",
              "c": "critical",
              "f": "advanced",
              "v": "Provides APAC privacy supervisory evidence for separate consent for sensitive personal information aligned with PIPL; PDPA.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=cmp sourcetype=\"consent:log\" jurisdiction=\"CN\" earliest=-30d\n| stats latest(consent_type) as ct by subject_id, purpose\n| where match(purpose,\"(?i)biometric|sensitive\") AND ct!=\"explicit\"",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cmp sourcetype=\"consent:log\" jurisdiction=\"CN\" earliest=-30d\n| stats latest(consent_type) as ct by subject_id, purpose\n| where match(purpose,\"(?i)biometric|sensitive\") AND ct!=\"explicit\"\n```\n\nUnderstanding this SPL\n\n**Separate consent for sensitive personal information (PIPL; PDPA)** — Provides APAC privacy supervisory evidence for separate consent for sensitive personal information aligned with PIPL; PDPA.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cmp; **sourcetype**: consent:log. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cmp, sourcetype=\"consent:log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by subject_id, purpose** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where match(purpose,\"(?i)biometric|sensitive\") AND ct!=\"explicit\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR",
                "PIPL"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.29",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.29 is enforced — Splunk UC-22.29.25: Separate consent for sensitive personal information.",
                  "ea": "Saved search 'UC-22.29.25' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.26",
              "n": "Consent withdrawal processing audit (PIPL; PDPA)",
              "c": "high",
              "f": "intermediate",
              "v": "Provides APAC privacy supervisory evidence for consent withdrawal processing audit aligned with PIPL; PDPA.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=app sourcetype=\"marketing:preference\" earliest=-7d\n| stats dc(channel) as channels, values(preference_state) as states by subject_id\n| where channels<2",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"marketing:preference\" earliest=-7d\n| stats dc(channel) as channels, values(preference_state) as states by subject_id\n| where channels<2\n```\n\nUnderstanding this SPL\n\n**Consent withdrawal processing audit (PIPL; PDPA)** — Provides APAC privacy supervisory evidence for consent withdrawal processing audit aligned with PIPL; PDPA.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: marketing:preference. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"marketing:preference\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by subject_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where channels<2` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR",
                "PIPL"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.15 is enforced — Splunk UC-22.29.26: Consent withdrawal processing audit.",
                  "ea": "Saved search 'UC-22.29.26' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.27",
              "n": "Opt-in and opt-out preference tracking (PIPL; PDPA)",
              "c": "medium",
              "f": "beginner",
              "v": "Provides APAC privacy supervisory evidence for opt-in and opt-out preference tracking aligned with PIPL; PDPA.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=web sourcetype=\"access_combined\" earliest=-24h\n| search uri=\"*optin*\" OR uri=\"*opt-out*\"\n| stats count by uri, status\n| sort - count",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\" earliest=-24h\n| search uri=\"*optin*\" OR uri=\"*opt-out*\"\n| stats count by uri, status\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Opt-in and opt-out preference tracking (PIPL; PDPA)** — Provides APAC privacy supervisory evidence for opt-in and opt-out preference tracking aligned with PIPL; PDPA.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by uri, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PIPL"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.14 is enforced — Splunk UC-22.29.27: Opt-in and opt-out preference tracking.",
                  "ea": "Saved search 'UC-22.29.27' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.28",
              "n": "Purpose limitation enforcement in APIs (PIPL; PDPA)",
              "c": "low",
              "f": "expert",
              "v": "Provides APAC privacy supervisory evidence for purpose limitation enforcement in apis aligned with PIPL; PDPA.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=crm sourcetype=\"salesforce:audit\" earliest=-30d\n| search Field=\"Purpose__c\"\n| stats dc(NewValue) as purposes by ParentId\n| where purposes>8",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=crm sourcetype=\"salesforce:audit\" earliest=-30d\n| search Field=\"Purpose__c\"\n| stats dc(NewValue) as purposes by ParentId\n| where purposes>8\n```\n\nUnderstanding this SPL\n\n**Purpose limitation enforcement in APIs (PIPL; PDPA)** — Provides APAC privacy supervisory evidence for purpose limitation enforcement in apis aligned with PIPL; PDPA.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: crm; **sourcetype**: salesforce:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=crm, sourcetype=\"salesforce:audit\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by ParentId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where purposes>8` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "PIPL"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.6 is enforced — Splunk UC-22.29.28: Purpose limitation enforcement in APIs.",
                  "ea": "Saved search 'UC-22.29.28' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.29",
              "n": "Consent record retention compliance (PIPL; PDPA)",
              "c": "critical",
              "f": "advanced",
              "v": "Provides APAC privacy supervisory evidence for consent record retention compliance aligned with PIPL; PDPA.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=cmp sourcetype=\"consent:archive\" earliest=-730d\n| stats min(_time) as first, max(_time) as last by consent_id\n| eval retained_days=round((last-first)/86400,0)\n| where retained_days>2555",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cmp sourcetype=\"consent:archive\" earliest=-730d\n| stats min(_time) as first, max(_time) as last by consent_id\n| eval retained_days=round((last-first)/86400,0)\n| where retained_days>2555\n```\n\nUnderstanding this SPL\n\n**Consent record retention compliance (PIPL; PDPA)** — Provides APAC privacy supervisory evidence for consent record retention compliance aligned with PIPL; PDPA.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cmp; **sourcetype**: consent:archive. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cmp, sourcetype=\"consent:archive\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by consent_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **retained_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where retained_days>2555` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR",
                "PIPL"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.14 is enforced — Splunk UC-22.29.29: Consent record retention compliance.",
                  "ea": "Saved search 'UC-22.29.29' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.29.30",
              "n": "Minor consent and guardian verification monitoring (PIPL; PDPA)",
              "c": "high",
              "f": "intermediate",
              "v": "Provides APAC privacy supervisory evidence for minor consent and guardian verification monitoring aligned with PIPL; PDPA.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records",
              "q": "index=app sourcetype=\"parental:consent\" jurisdiction=\"KR\" earliest=-365d\n| stats latest(verified) as v by minor_subject_id\n| where v!=\"guardian_verified\"",
              "m": "(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.",
              "z": "Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for Microsoft Office 365](https://splunkbase.splunk.com/app/4055)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_transfer_register.csv` and consent or DPIA lookups referenced in runbooks. (2) Map technical identifiers to records of processing activities with legal review. (3) Restrict indexes to regional privacy roles with least privilege. (4) Quarterly DPIA and consent linkage review with local counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"parental:consent\" jurisdiction=\"KR\" earliest=-365d\n| stats latest(verified) as v by minor_subject_id\n| where v!=\"guardian_verified\"\n```\n\nUnderstanding this SPL\n\n**Minor consent and guardian verification monitoring (PIPL; PDPA)** — Provides APAC privacy supervisory evidence for minor consent and guardian verification monitoring aligned with PIPL; PDPA.\n\nDocumented **Data sources**: Windows Security Event Logs, cloud audit trails, DLP system outputs, consent management platforms, data classification tools, cross-border transfer logs, privacy impact assessments, breach notification records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Microsoft Office 365 (4055), Splunk Add-on for AWS (1876), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: parental:consent. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"parental:consent\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by minor_subject_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where v!=\"guardian_verified\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (jurisdiction x action), Table (open reviews), Single value (exports without mechanism).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show Asia-Pacific data-protection duties are met, including access and handling people can prove in an audit.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR",
                "PIPL"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "aws",
                "m365"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.14",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.14 is enforced — Splunk UC-22.29.30: Minor consent and guardian verification monitoring.",
                  "ea": "Saved search 'UC-22.29.30' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Microsoft 365 App for Splunk",
                  "id": 3786,
                  "url": "https://splunkbase.splunk.com/app/3786",
                  "desc": "Dashboards for Azure AD, Defender 365, Exchange, SharePoint, Teams, Power BI",
                  "screenshots": [
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/47862910-b938-11ec-bed4-4a49cc3b8a38.png",
                    "https://cdn.splunkbase.splunk.com/media/public/screenshots/4aa6a7a0-b938-11ec-a542-32c4f9dd13a0.jpeg"
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 30,
            "none": 0
          }
        },
        {
          "i": "22.30",
          "n": "APAC Financial Regulation",
          "u": [
            {
              "i": "22.30.1",
              "n": "Technology risk management governance metrics (MAS TRM)",
              "c": "high",
              "f": "beginner",
              "v": "Supports MAS TRM supervisory examination readiness for technology risk management governance metrics.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=bank sourcetype=\"core:audit\" regulator=\"MAS\" earliest=-37h\n| stats count by function_id, user, action\n| lookup mas_trm_critical_functions.csv function_id OUTPUT tier\n| where tier=\"Material\" AND action=\"FUNDS_TRANSFER\"\n| sort - count",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=bank sourcetype=\"core:audit\" regulator=\"MAS\" earliest=-37h\n| stats count by function_id, user, action\n| lookup mas_trm_critical_functions.csv function_id OUTPUT tier\n| where tier=\"Material\" AND action=\"FUNDS_TRANSFER\"\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Technology risk management governance metrics (MAS TRM)** — Supports MAS TRM supervisory examination readiness for technology risk management governance metrics.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: bank; **sourcetype**: core:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=bank, sourcetype=\"core:audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by function_id, user, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where tier=\"Material\" AND action=\"FUNDS_TRANSFER\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user, All_Changes.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Technology risk management governance metrics (MAS TRM)** — Supports MAS TRM supervisory examination readiness for technology risk management governance metrics.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "MAS TRM"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user, All_Changes.action | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MAS TRM",
                  "v": "2021",
                  "cl": "§4.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MAS TRM §4.1.1 (Technology risk governance) is enforced — Splunk UC-22.30.1: Technology risk management governance metrics.",
                  "ea": "Saved search 'UC-22.30.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.2",
              "n": "System availability monitoring against internal SLOs (MAS TRM)",
              "c": "medium",
              "f": "expert",
              "v": "Supports MAS TRM supervisory examination readiness for system availability monitoring against internal slos.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=itsi_summary service_title=\"*Payments*\" earliest=-7d\n| timechart span=15m avg(health_score) as hs\n| where hs<85",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsi_summary service_title=\"*Payments*\" earliest=-7d\n| timechart span=15m avg(health_score) as hs\n| where hs<85\n```\n\nUnderstanding this SPL\n\n**System availability monitoring against internal SLOs (MAS TRM)** — Supports MAS TRM supervisory examination readiness for system availability monitoring against internal slos.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsi_summary.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsi_summary, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `timechart` plots the metric over time using **span=15m** buckets — ideal for trending and alerting on this use case.\n• Filters the current rows with `where hs<85` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=15m | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**System availability monitoring against internal SLOs (MAS TRM)** — Supports MAS TRM supervisory examination readiness for system availability monitoring against internal slos.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "MAS TRM"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src span=15m | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MAS TRM",
                  "v": "2021",
                  "cl": "§4.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MAS TRM §4.1.1 (Technology risk governance) is enforced — Splunk UC-22.30.2: System availability monitoring against internal SLOs.",
                  "ea": "Saved search 'UC-22.30.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.3",
              "n": "Privileged access management review evidence (MAS TRM)",
              "c": "low",
              "f": "advanced",
              "v": "Supports MAS TRM supervisory examination readiness for privileged access management review evidence.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=bank sourcetype=\"pam:session\" regulator=\"MAS\" earliest=-24h\n| stats dc(src_ip) as ips by privileged_user, target_system\n| where ips>5",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=bank sourcetype=\"pam:session\" regulator=\"MAS\" earliest=-24h\n| stats dc(src_ip) as ips by privileged_user, target_system\n| where ips>5\n```\n\nUnderstanding this SPL\n\n**Privileged access management review evidence (MAS TRM)** — Supports MAS TRM supervisory examination readiness for privileged access management review evidence.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: bank; **sourcetype**: pam:session. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=bank, sourcetype=\"pam:session\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by privileged_user, target_system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where ips>5` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privileged access management review evidence (MAS TRM)** — Supports MAS TRM supervisory examination readiness for privileged access management review evidence.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "MAS TRM"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MAS TRM",
                  "v": "2021",
                  "cl": "§4.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MAS TRM §4.1.1 (Technology risk governance) is enforced — Splunk UC-22.30.3: Privileged access management review evidence.",
                  "ea": "Saved search 'UC-22.30.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.4",
              "n": "Patch management compliance and overdue systems (MAS TRM)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports MAS TRM supervisory examination readiness for patch management compliance and overdue systems.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=vulns sourcetype=\"tenable:sc\" tag=mas_trm earliest=-14d\n| stats max(severity_score) as sev by host\n| where sev>=9",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vulns sourcetype=\"tenable:sc\" tag=mas_trm earliest=-14d\n| stats max(severity_score) as sev by host\n| where sev>=9\n```\n\nUnderstanding this SPL\n\n**Patch management compliance and overdue systems (MAS TRM)** — Supports MAS TRM supervisory examination readiness for patch management compliance and overdue systems.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vulns; **sourcetype**: tenable:sc. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vulns, sourcetype=\"tenable:sc\", time bounds, tags. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where sev>=9` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Patch management compliance and overdue systems (MAS TRM)** — Supports MAS TRM supervisory examination readiness for patch management compliance and overdue systems.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. 
**App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "MAS TRM"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MAS TRM",
                  "v": "2021",
                  "cl": "§4.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MAS TRM §4.1.1 (Technology risk governance) is enforced — Splunk UC-22.30.4: Patch management compliance and overdue systems.",
                  "ea": "Saved search 'UC-22.30.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.5",
              "n": "Security testing evidence aggregation (MAS TRM)",
              "c": "high",
              "f": "beginner",
              "v": "Supports MAS TRM supervisory examination readiness for security testing evidence aggregation.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=grc sourcetype=\"mas:incident\" earliest=-365d\n| eval delta_hours=(now()-strptime(detected_at,\"%Y-%m-%d %H:%M:%S\"))/3600\n| where isnull(notified_mas_at) AND delta_hours>4",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"mas:incident\" earliest=-365d\n| eval delta_hours=(now()-strptime(detected_at,\"%Y-%m-%d %H:%M:%S\"))/3600\n| where isnull(notified_mas_at) AND delta_hours>4\n```\n\nUnderstanding this SPL\n\n**Security testing evidence aggregation (MAS TRM)** — Supports MAS TRM supervisory examination readiness for security testing evidence aggregation.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: mas:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"mas:incident\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **delta_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(notified_mas_at) AND delta_hours>4` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security testing evidence aggregation (MAS TRM)** — Supports MAS TRM supervisory examination readiness for security testing evidence aggregation.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. 
**App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "MAS TRM"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MAS TRM",
                  "v": "2021",
                  "cl": "§4.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MAS TRM §4.1.1 (Technology risk governance) is enforced — Splunk UC-22.30.5: Security testing evidence aggregation.",
                  "ea": "Saved search 'UC-22.30.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.6",
              "n": "Incident notification to MAS timeline tracking (MAS TRM)",
              "c": "medium",
              "f": "expert",
              "v": "Supports MAS TRM supervisory examination readiness for incident notification to mas timeline tracking.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=test sourcetype=\"mas:security_test\" earliest=-730d\n| stats latest(result) as r by test_id, application\n| where r!=\"Pass\"",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=test sourcetype=\"mas:security_test\" earliest=-730d\n| stats latest(result) as r by test_id, application\n| where r!=\"Pass\"\n```\n\nUnderstanding this SPL\n\n**Incident notification to MAS timeline tracking (MAS TRM)** — Supports MAS TRM supervisory examination readiness for incident notification to mas timeline tracking.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: test; **sourcetype**: mas:security_test. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=test, sourcetype=\"mas:security_test\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by test_id, application** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where r!=\"Pass\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Incident notification to MAS timeline tracking (MAS TRM)** — Supports MAS TRM supervisory examination readiness for incident notification to mas timeline tracking.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "MAS TRM"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MAS TRM",
                  "v": "2021",
                  "cl": "§8.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MAS TRM §8.1.1 (IT operations — incident mgmt) is enforced — Splunk UC-22.30.6: Incident notification to MAS timeline tracking.",
                  "ea": "Saved search 'UC-22.30.6' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.7",
              "n": "Outsourcing arrangements risk monitoring (MAS TRM)",
              "c": "critical",
              "f": "advanced",
              "v": "Supports MAS TRM supervisory examination readiness for outsourcing arrangements risk monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=vendor sourcetype=\"mas:outsourcing\" earliest=-180d\n| stats latest(risk_rating) as rr by vendor_name\n| where rr IN (\"High\",\"Severe\")",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vendor sourcetype=\"mas:outsourcing\" earliest=-180d\n| stats latest(risk_rating) as rr by vendor_name\n| where rr IN (\"High\",\"Severe\")\n```\n\nUnderstanding this SPL\n\n**Outsourcing arrangements risk monitoring (MAS TRM)** — Supports MAS TRM supervisory examination readiness for outsourcing arrangements risk monitoring.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vendor; **sourcetype**: mas:outsourcing. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vendor, sourcetype=\"mas:outsourcing\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vendor_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where rr IN (\"High\",\"Severe\")` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Outsourcing arrangements risk monitoring (MAS TRM)** — Supports MAS TRM supervisory examination readiness for outsourcing arrangements risk monitoring.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "MAS TRM"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MAS TRM",
                  "v": "2021",
                  "cl": "§4.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MAS TRM §4.1.1 (Technology risk governance) is enforced — Splunk UC-22.30.7: Outsourcing arrangements risk monitoring.",
                  "ea": "Saved search 'UC-22.30.7' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.8",
              "n": "Technology governance board reporting evidence (HKMA TM-G-2)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports HKMA TM-G-2 supervisory examination readiness for technology governance board reporting evidence.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=hkma sourcetype=\"tm_g2:control_test\" domain=\"Governance\" earliest=-365d\n| stats latest(result) as res by control_id\n| where res!=\"Satisfactory\"",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hkma sourcetype=\"tm_g2:control_test\" domain=\"Governance\" earliest=-365d\n| stats latest(result) as res by control_id\n| where res!=\"Satisfactory\"\n```\n\nUnderstanding this SPL\n\n**Technology governance board reporting evidence (HKMA TM-G-2)** — Supports HKMA TM-G-2 supervisory examination readiness for technology governance board reporting evidence.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hkma; **sourcetype**: tm_g2:control_test. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hkma, sourcetype=\"tm_g2:control_test\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by control_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where res!=\"Satisfactory\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Technology governance board reporting evidence (HKMA TM-G-2)** — Supports HKMA TM-G-2 supervisory examination readiness for technology governance board reporting evidence.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HKMA TM-G-2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HKMA TM-G-2",
                  "v": "current",
                  "cl": "§3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HKMA TM-G-2 §3 (Governance of technology risk) is enforced — Splunk UC-22.30.8: Technology governance board reporting evidence.",
                  "ea": "Saved search 'UC-22.30.8' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.9",
              "n": "Cybersecurity assessment scoring and trends (HKMA TM-G-2)",
              "c": "high",
              "f": "beginner",
              "v": "Supports HKMA TM-G-2 supervisory examination readiness for cybersecurity assessment scoring and trends.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=bank sourcetype=\"hkma:cyber:selfassess\" earliest=-365d\n| stats latest(score) as s by domain\n| where s<80",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=bank sourcetype=\"hkma:cyber:selfassess\" earliest=-365d\n| stats latest(score) as s by domain\n| where s<80\n```\n\nUnderstanding this SPL\n\n**Cybersecurity assessment scoring and trends (HKMA TM-G-2)** — Supports HKMA TM-G-2 supervisory examination readiness for cybersecurity assessment scoring and trends.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: bank; **sourcetype**: hkma:cyber:selfassess. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=bank, sourcetype=\"hkma:cyber:selfassess\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by domain** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where s<80` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.dest | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cybersecurity assessment scoring and trends (HKMA TM-G-2)** — Supports HKMA TM-G-2 supervisory examination readiness for cybersecurity assessment scoring and trends.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HKMA TM-G-2"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.dest | sort - agg_value",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HKMA TM-G-2",
                  "v": "current",
                  "cl": "§3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HKMA TM-G-2 §3 (Governance of technology risk) is enforced — Splunk UC-22.30.9: Cybersecurity assessment scoring and trends.",
                  "ea": "Saved search 'UC-22.30.9' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.10",
              "n": "Third-party management control monitoring (HKMA TM-G-2)",
              "c": "medium",
              "f": "expert",
              "v": "Supports HKMA TM-G-2 supervisory examination readiness for third-party management control monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=snow sourcetype=\"snow:chg\" category=\"Third Party\" country=\"HK\" earliest=-30d\n| stats values(state) as st by vendor\n| where mvcount(mvfilter(match(st,\"Closed\")))=0",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=snow sourcetype=\"snow:chg\" category=\"Third Party\" country=\"HK\" earliest=-30d\n| stats values(state) as st by vendor\n| where mvcount(mvfilter(match(st,\"Closed\")))=0\n```\n\nUnderstanding this SPL\n\n**Third-party management control monitoring (HKMA TM-G-2)** — Supports HKMA TM-G-2 supervisory examination readiness for third-party management control monitoring.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: snow; **sourcetype**: snow:chg. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=snow, sourcetype=\"snow:chg\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vendor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mvcount(mvfilter(match(st,\"Closed\")))=0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Third-party management control monitoring (HKMA TM-G-2)** — Supports HKMA TM-G-2 supervisory examination readiness for third-party management control monitoring.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HKMA TM-G-2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HKMA TM-G-2",
                  "v": "current",
                  "cl": "§3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HKMA TM-G-2 §3 (Governance of technology risk) is enforced — Splunk UC-22.30.10: Third-party management control monitoring.",
                  "ea": "Saved search 'UC-22.30.10' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.11",
              "n": "Internet banking security event monitoring (HKMA TM-G-2)",
              "c": "low",
              "f": "advanced",
              "v": "Supports HKMA TM-G-2 supervisory examination readiness for internet banking security event monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=web sourcetype=\"ib:access\" earliest=-24h\n| stats count by uri, user, geo\n| search uri=\"*/ib/*\" AND geo!=\"HK\"\n| sort - count",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"ib:access\" earliest=-24h\n| stats count by uri, user, geo\n| search uri=\"*/ib/*\" AND geo!=\"HK\"\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Internet banking security event monitoring (HKMA TM-G-2)** — Supports HKMA TM-G-2 supervisory examination readiness for internet banking security event monitoring.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: ib:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"ib:access\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by uri, user, geo** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Internet banking security event monitoring (HKMA TM-G-2)** — Supports HKMA TM-G-2 supervisory examination readiness for internet banking security event monitoring.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HKMA TM-G-2"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.user | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HKMA TM-G-2",
                  "v": "current",
                  "cl": "§3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HKMA TM-G-2 §3 (Governance of technology risk) is enforced — Splunk UC-22.30.11: Internet banking security event monitoring.",
                  "ea": "Saved search 'UC-22.30.11' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.12",
              "n": "HKMA incident reporting timeline compliance (HKMA TM-G-2)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports HKMA TM-G-2 supervisory examination readiness for hkma incident reporting timeline compliance.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=grc sourcetype=\"hkma:incident\" earliest=-180d\n| eval hrs=(now()-strptime(detected_at,\"%Y-%m-%d %H:%M:%S\"))/3600\n| where isnull(hkma_filed_at) AND hrs>2",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"hkma:incident\" earliest=-180d\n| eval hrs=(now()-strptime(detected_at,\"%Y-%m-%d %H:%M:%S\"))/3600\n| where isnull(hkma_filed_at) AND hrs>2\n```\n\nUnderstanding this SPL\n\n**HKMA incident reporting timeline compliance (HKMA TM-G-2)** — Supports HKMA TM-G-2 supervisory examination readiness for hkma incident reporting timeline compliance.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: hkma:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"hkma:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(hkma_filed_at) AND hrs>2` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**HKMA incident reporting timeline compliance (HKMA TM-G-2)** — Supports HKMA TM-G-2 supervisory examination readiness for hkma incident reporting timeline compliance.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HKMA TM-G-2"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HKMA TM-G-2",
                  "v": "current",
                  "cl": "§3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HKMA TM-G-2 §3 (Governance of technology risk) is enforced — Splunk UC-22.30.12: HKMA incident reporting timeline compliance.",
                  "ea": "Saved search 'UC-22.30.12' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.13",
              "n": "Cyber security framework compliance monitoring (RBI cyber security framework)",
              "c": "high",
              "f": "beginner",
              "v": "Supports RBI cyber security framework supervisory examination readiness for cyber security framework compliance monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=rbi sourcetype=\"bank:cyber:event\" earliest=-30d\n| eval reported=strptime(reported_cert_in_at,\"%Y-%m-%d %H:%M:%S\")\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| where isnull(reported) OR (reported-detected)>6*3600",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=rbi sourcetype=\"bank:cyber:event\" earliest=-30d\n| eval reported=strptime(reported_cert_in_at,\"%Y-%m-%d %H:%M:%S\")\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| where isnull(reported) OR (reported-detected)>6*3600\n```\n\nUnderstanding this SPL\n\n**Cyber security framework compliance monitoring (RBI cyber security framework)** — Supports RBI cyber security framework supervisory examination readiness for cyber security framework compliance monitoring.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: rbi; **sourcetype**: bank:cyber:event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=rbi, sourcetype=\"bank:cyber:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **reported** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **detected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(reported) OR (reported-detected)>6*3600` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cyber security framework compliance monitoring (RBI cyber security framework)** — Supports RBI cyber security framework supervisory examination readiness for cyber security framework compliance monitoring.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "RBI cyber security framework"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "RBI Cyber",
                  "v": "2016 (as amended)",
                  "cl": "Annex-A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that RBI Cyber Annex-A (Baseline cyber-security controls) is enforced — Splunk UC-22.30.13: Cyber security framework compliance monitoring.",
                  "ea": "Saved search 'UC-22.30.13' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.14",
              "n": "IT governance for banks evidence (RBI cyber security framework)",
              "c": "critical",
              "f": "expert",
              "v": "Supports RBI cyber security framework supervisory examination readiness for it governance for banks evidence.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=bank sourcetype=\"rbi:it_governance\" earliest=-365d\n| stats latest(status) as st by policy_id\n| where st!=\"Approved\"",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=bank sourcetype=\"rbi:it_governance\" earliest=-365d\n| stats latest(status) as st by policy_id\n| where st!=\"Approved\"\n```\n\nUnderstanding this SPL\n\n**IT governance for banks evidence (RBI cyber security framework)** — Supports RBI cyber security framework supervisory examination readiness for it governance for banks evidence.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: bank; **sourcetype**: rbi:it_governance. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=bank, sourcetype=\"rbi:it_governance\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by policy_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st!=\"Approved\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**IT governance for banks evidence (RBI cyber security framework)** — Supports RBI cyber security framework supervisory examination readiness for it governance for banks evidence.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "RBI cyber security framework"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "RBI Cyber",
                  "v": "2016 (as amended)",
                  "cl": "Annex-A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that RBI Cyber Annex-A (Baseline cyber-security controls) is enforced — Splunk UC-22.30.14: IT governance for banks evidence.",
                  "ea": "Saved search 'UC-22.30.14' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.15",
              "n": "Electronic payment channel security monitoring (RBI cyber security framework)",
              "c": "low",
              "f": "advanced",
              "v": "Supports RBI cyber security framework supervisory examination readiness for electronic payment channel security monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=payments sourcetype=\"npci:neft\" OR sourcetype=\"upi:txn\" earliest=-24h\n| stats sum(amount) as amt by merchant, risk_flag\n| where risk_flag=\"true\" AND amt>1000000",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=payments sourcetype=\"npci:neft\" OR sourcetype=\"upi:txn\" earliest=-24h\n| stats sum(amount) as amt by merchant, risk_flag\n| where risk_flag=\"true\" AND amt>1000000\n```\n\nUnderstanding this SPL\n\n**Electronic payment channel security monitoring (RBI cyber security framework)** — Supports RBI cyber security framework supervisory examination readiness for electronic payment channel security monitoring.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: payments; **sourcetype**: npci:neft, upi:txn. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=payments, sourcetype=\"npci:neft\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by merchant, risk_flag** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where risk_flag=\"true\" AND amt>1000000` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Electronic payment channel security monitoring (RBI cyber security framework)** — Supports RBI cyber security framework supervisory examination readiness for electronic payment channel security monitoring.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "RBI cyber security framework"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "RBI Cyber",
                  "v": "2016 (as amended)",
                  "cl": "Annex-A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that RBI Cyber Annex-A (Baseline cyber-security controls) is enforced — Splunk UC-22.30.15: Electronic payment channel security monitoring.",
                  "ea": "Saved search 'UC-22.30.15' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.16",
              "n": "Outsourcing and vendor management evidence (RBI cyber security framework)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports RBI cyber security framework supervisory examination readiness for outsourcing and vendor management evidence.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=vendor sourcetype=\"rbi:vendor_risk\" earliest=-90d\n| stats latest(tier) as t by vendor_id\n| where t=\"Critical\"",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vendor sourcetype=\"rbi:vendor_risk\" earliest=-90d\n| stats latest(tier) as t by vendor_id\n| where t=\"Critical\"\n```\n\nUnderstanding this SPL\n\n**Outsourcing and vendor management evidence (RBI cyber security framework)** — Supports RBI cyber security framework supervisory examination readiness for outsourcing and vendor management evidence.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vendor; **sourcetype**: rbi:vendor_risk. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vendor, sourcetype=\"rbi:vendor_risk\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vendor_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where t=\"Critical\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Outsourcing and vendor management evidence (RBI cyber security framework)** — Supports RBI cyber security framework supervisory examination readiness for outsourcing and vendor management evidence.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "RBI cyber security framework"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "RBI Cyber",
                  "v": "2016 (as amended)",
                  "cl": "Annex-A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that RBI Cyber Annex-A (Baseline cyber-security controls) is enforced — Splunk UC-22.30.16: Outsourcing and vendor management evidence.",
                  "ea": "Saved search 'UC-22.30.16' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.17",
              "n": "CERT-In incident reporting tracking (RBI cyber security framework)",
              "c": "high",
              "f": "beginner",
              "v": "Supports RBI cyber security framework supervisory examination readiness for cert-in incident reporting tracking.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=grc sourcetype=\"cert_in:report\" earliest=-30d\n| where isnull(submitted_at) AND severity IN (\"High\",\"Critical\")",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"cert_in:report\" earliest=-30d\n| where isnull(submitted_at) AND severity IN (\"High\",\"Critical\")\n```\n\nUnderstanding this SPL\n\n**CERT-In incident reporting tracking (RBI cyber security framework)** — Supports RBI cyber security framework supervisory examination readiness for cert-in incident reporting tracking.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: cert_in:report. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"cert_in:report\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(submitted_at) AND severity IN (\"High\",\"Critical\")` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CERT-In incident reporting tracking (RBI cyber security framework)** — Supports RBI cyber security framework supervisory examination readiness for cert-in incident reporting tracking.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. 
**App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "RBI cyber security framework"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "RBI Cyber",
                  "v": "2016 (as amended)",
                  "cl": "Annex-B",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that RBI Cyber Annex-B (Cyber-crisis management plan) is enforced — Splunk UC-22.30.17: CERT-In incident reporting tracking.",
                  "ea": "Saved search 'UC-22.30.17' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.18",
              "n": "Business continuity management testing evidence (RBI cyber security framework)",
              "c": "medium",
              "f": "expert",
              "v": "Supports RBI cyber security framework supervisory examination readiness for business continuity management testing evidence.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=bank sourcetype=\"rbi:bcp:test\" earliest=-730d\n| stats latest(result) as r by scenario_id\n| where r!=\"Success\"",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=bank sourcetype=\"rbi:bcp:test\" earliest=-730d\n| stats latest(result) as r by scenario_id\n| where r!=\"Success\"\n```\n\nUnderstanding this SPL\n\n**Business continuity management testing evidence (RBI cyber security framework)** — Supports RBI cyber security framework supervisory examination readiness for business continuity management testing evidence.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: bank; **sourcetype**: rbi:bcp:test. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=bank, sourcetype=\"rbi:bcp:test\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by scenario_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where r!=\"Success\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Business continuity management testing evidence (RBI cyber security framework)** — Supports RBI cyber security framework supervisory examination readiness for business continuity management testing evidence.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "RBI cyber security framework"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "RBI Cyber",
                  "v": "2016 (as amended)",
                  "cl": "Annex-A",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that RBI Cyber Annex-A (Baseline cyber-security controls) is enforced — Splunk UC-22.30.18: Business continuity management testing evidence.",
                  "ea": "Saved search 'UC-22.30.18' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.19",
              "n": "Information security capability assessment evidence (APRA CPS 234)",
              "c": "low",
              "f": "advanced",
              "v": "Supports APRA CPS 234 supervisory examination readiness for information security capability assessment evidence.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=bank sourcetype=\"cps234:control_test\" earliest=-90d\n| stats latest(result) as r, latest(severity) as sev by control_id, entity\n| where r!=\"Effective\" AND sev IN (\"High\",\"Critical\")",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=bank sourcetype=\"cps234:control_test\" earliest=-90d\n| stats latest(result) as r, latest(severity) as sev by control_id, entity\n| where r!=\"Effective\" AND sev IN (\"High\",\"Critical\")\n```\n\nUnderstanding this SPL\n\n**Information security capability assessment evidence (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for information security capability assessment evidence.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: bank; **sourcetype**: cps234:control_test. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=bank, sourcetype=\"cps234:control_test\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by control_id, entity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where r!=\"Effective\" AND sev IN (\"High\",\"Critical\")` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Information security capability assessment evidence (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for information security capability assessment evidence.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APRA CPS 234"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 23 (Incident management) is enforced — Splunk UC-22.30.19: Information security capability assessment evidence.",
                  "ea": "Saved search 'UC-22.30.19' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.20",
              "n": "Information asset classification drift detection (APRA CPS 234)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports APRA CPS 234 supervisory examination readiness for information asset classification drift detection.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=cmdb sourcetype=\"information_asset\" region=\"AU\" earliest=-7d\n| stats count by classification, owner\n| where classification=\"UNKNOWN\" AND count>0",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cmdb sourcetype=\"information_asset\" region=\"AU\" earliest=-7d\n| stats count by classification, owner\n| where classification=\"UNKNOWN\" AND count>0\n```\n\nUnderstanding this SPL\n\n**Information asset classification drift detection (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for information asset classification drift detection.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cmdb; **sourcetype**: information_asset. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cmdb, sourcetype=\"information_asset\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by classification, owner** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where classification=\"UNKNOWN\" AND count>0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Information asset classification drift detection (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for information asset classification drift detection.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APRA CPS 234"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 23 (Incident management) is enforced — Splunk UC-22.30.20: Information asset classification drift detection.",
                  "ea": "Saved search 'UC-22.30.20' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.21",
              "n": "Policy framework compliance monitoring (APRA CPS 234)",
              "c": "critical",
              "f": "beginner",
              "v": "Supports APRA CPS 234 supervisory examination readiness for policy framework compliance monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=audit sourcetype=\"cps234:policy_ack\" earliest=-365d\n| stats latest(ack) as a by policy_id, board_member\n| where a!=\"true\"",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"cps234:policy_ack\" earliest=-365d\n| stats latest(ack) as a by policy_id, board_member\n| where a!=\"true\"\n```\n\nUnderstanding this SPL\n\n**Policy framework compliance monitoring (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for policy framework compliance monitoring.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: cps234:policy_ack. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"cps234:policy_ack\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by policy_id, board_member** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where a!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Policy framework compliance monitoring (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for policy framework compliance monitoring.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APRA CPS 234"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 15 (Policy framework) is enforced — Splunk UC-22.30.21: Policy framework compliance monitoring.",
                  "ea": "Saved search 'UC-22.30.21' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.22",
              "n": "Incident notification within 72 hours tracking (APRA CPS 234)",
              "c": "medium",
              "f": "expert",
              "v": "Supports APRA CPS 234 supervisory examination readiness for incident notification within 72 hours tracking.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=grc sourcetype=\"cps234:incident\" earliest=-30d\n| eval hrs=(now()-strptime(detected_at,\"%Y-%m-%d %H:%M:%S\"))/3600\n| where isnull(board_notified_at) AND hrs>12",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"cps234:incident\" earliest=-30d\n| eval hrs=(now()-strptime(detected_at,\"%Y-%m-%d %H:%M:%S\"))/3600\n| where isnull(board_notified_at) AND hrs>12\n```\n\nUnderstanding this SPL\n\n**Incident notification within 72 hours tracking (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for incident notification within 72 hours tracking.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: cps234:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"cps234:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(board_notified_at) AND hrs>12` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Incident notification within 72 hours tracking (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for incident notification within 72 hours tracking.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APRA CPS 234"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "36",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 36 (Notification of incidents) is enforced — Splunk UC-22.30.22: Incident notification within 72 hours tracking.",
                  "ea": "Saved search 'UC-22.30.22' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.23",
              "n": "Security control testing outcomes aggregation (APRA CPS 234)",
              "c": "low",
              "f": "advanced",
              "v": "Supports APRA CPS 234 supervisory examination readiness for security control testing outcomes aggregation.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=test sourcetype=\"cps234:penetration\" earliest=-730d\n| stats latest(remediation_pct) as p by engagement_id\n| where p<100",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=test sourcetype=\"cps234:penetration\" earliest=-730d\n| stats latest(remediation_pct) as p by engagement_id\n| where p<100\n```\n\nUnderstanding this SPL\n\n**Security control testing outcomes aggregation (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for security control testing outcomes aggregation.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: test; **sourcetype**: cps234:penetration. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=test, sourcetype=\"cps234:penetration\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by engagement_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p<100` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security control testing outcomes aggregation (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for security control testing outcomes aggregation.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APRA CPS 234"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 23 (Incident management) is enforced — Splunk UC-22.30.23: Security control testing outcomes aggregation.",
                  "ea": "Saved search 'UC-22.30.23' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.24",
              "n": "Third-party information security assessment tracking (APRA CPS 234)",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports APRA CPS 234 supervisory examination readiness for third-party information security assessment tracking.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=vendor sourcetype=\"cps234:third_party\" earliest=-180d\n| stats latest(assessment) as a by vendor_name\n| where a!=\"Complete\"",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vendor sourcetype=\"cps234:third_party\" earliest=-180d\n| stats latest(assessment) as a by vendor_name\n| where a!=\"Complete\"\n```\n\nUnderstanding this SPL\n\n**Third-party information security assessment tracking (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for third-party information security assessment tracking.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vendor; **sourcetype**: cps234:third_party. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vendor, sourcetype=\"cps234:third_party\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vendor_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where a!=\"Complete\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Third-party information security assessment tracking (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for third-party information security assessment tracking.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APRA CPS 234"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 23 (Incident management) is enforced — Splunk UC-22.30.24: Third-party information security assessment tracking.",
                  "ea": "Saved search 'UC-22.30.24' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.30.25",
              "n": "Board-level information security reporting pack (APRA CPS 234)",
              "c": "high",
              "f": "beginner",
              "v": "Supports APRA CPS 234 supervisory examination readiness for board-level information security reporting pack.",
              "t": "Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621)",
              "d": "Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records",
              "q": "index=board sourcetype=\"cps234:board_report\" earliest=-400d\n| stats latest(submitted) as s by quarter\n| where s!=\"true\"",
              "m": "(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.",
              "z": "Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `apac_material_services.csv` for important business services and regulator tags. (2) Send weekly digests to CRO and technology risk committees. (3) Validate DB Connect concurrency for audit peak windows. (4) Archive results to a restricted summary index for retention governance.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=board sourcetype=\"cps234:board_report\" earliest=-400d\n| stats latest(submitted) as s by quarter\n| where s!=\"true\"\n```\n\nUnderstanding this SPL\n\n**Board-level information security reporting pack (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for board-level information security reporting pack.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: board; **sourcetype**: cps234:board_report. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=board, sourcetype=\"cps234:board_report\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by quarter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where s!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Board-level information security reporting pack (APRA CPS 234)** — Supports APRA CPS 234 supervisory examination readiness for board-level information security reporting pack.\n\nDocumented **Data sources**: Banking system audit trails, cloud infrastructure logs, vulnerability scan outputs, privileged access management logs, change management records, business continuity evidence, incident response logs, outsourcing management records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk ITSI (1841), Splunk Add-on for AWS (1876), Splunk DB Connect (2686), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Column chart (events by regulator), Table (privileged actions), Timeline (notification milestones).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show technology risk management and financial-sector technology evidence your regulator expects, with dated activity you can hand over.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APRA CPS 234"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws",
                "db_connect",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 23 (Incident management) is enforced — Splunk UC-22.30.25: Board-level information security reporting pack.",
                  "ea": "Saved search 'UC-22.30.25' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 25,
            "none": 0
          }
        },
        {
          "i": "22.31",
          "n": "Australia & New Zealand",
          "u": [
            {
              "i": "22.31.1",
              "n": "Notifiable data breach assessment workflow timing (Privacy Act 1988 (Cth); NDB)",
              "c": "high",
              "f": "intermediate",
              "v": "Evidences Privacy Act 1988 (Cth); NDB obligations for notifiable data breach assessment workflow timing using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=privacy sourcetype=\"oaic:ndb\" earliest=-365d\n| eval eligible=if(serious_harm=\"true\",1,0)\n| eval filed=strptime(filed_oaic_at,\"%Y-%m-%d %H:%M:%S\")\n| where eligible=1 AND isnull(filed)",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"oaic:ndb\" earliest=-365d\n| eval eligible=if(serious_harm=\"true\",1,0)\n| eval filed=strptime(filed_oaic_at,\"%Y-%m-%d %H:%M:%S\")\n| where eligible=1 AND isnull(filed)\n```\n\nUnderstanding this SPL\n\n**Notifiable data breach assessment workflow timing (Privacy Act 1988 (Cth); NDB)** — Evidences Privacy Act 1988 (Cth); NDB obligations for notifiable data breach assessment workflow timing using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: oaic:ndb. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"oaic:ndb\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **eligible** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **filed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where eligible=1 AND isnull(filed)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Notifiable data breach assessment workflow timing (Privacy Act 1988 (Cth); NDB)** — Evidences Privacy Act 1988 (Cth); NDB obligations for notifiable data breach assessment workflow timing using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Privacy Act 1988 (Cth); NDB"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "§26WK",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act §26WK (NDB — notifiable data breach) is enforced — Splunk UC-22.31.1: Notifiable data breach assessment workflow timing.",
                  "ea": "Saved search 'UC-22.31.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.2",
              "n": "OAIC reporting compliance for eligible breaches (Privacy Act 1988 (Cth); NDB)",
              "c": "medium",
              "f": "beginner",
              "v": "Evidences Privacy Act 1988 (Cth); NDB obligations for oaic reporting compliance for eligible breaches using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=grc sourcetype=\"breach:assessment\" region=\"AU\" earliest=-180d\n| stats latest(decision) as d by breach_id\n| where d=\"notifiable\" AND isnull(oaic_reference)",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"breach:assessment\" region=\"AU\" earliest=-180d\n| stats latest(decision) as d by breach_id\n| where d=\"notifiable\" AND isnull(oaic_reference)\n```\n\nUnderstanding this SPL\n\n**OAIC reporting compliance for eligible breaches (Privacy Act 1988 (Cth); NDB)** — Evidences Privacy Act 1988 (Cth); NDB obligations for oaic reporting compliance for eligible breaches using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: breach:assessment. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"breach:assessment\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by breach_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where d=\"notifiable\" AND isnull(oaic_reference)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**OAIC reporting compliance for eligible breaches (Privacy Act 1988 (Cth); NDB)** — Evidences Privacy Act 1988 (Cth); NDB obligations for oaic reporting compliance for eligible breaches using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Privacy Act 1988 (Cth); NDB"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "§26WK",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act §26WK (NDB — notifiable data breach) is enforced — Splunk UC-22.31.2: OAIC reporting compliance for eligible breaches.",
                  "ea": "Saved search 'UC-22.31.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.3",
              "n": "Australian Privacy Principles compliance monitoring (Privacy Act 1988 (Cth); NDB)",
              "c": "low",
              "f": "expert",
              "v": "Evidences Privacy Act 1988 (Cth); NDB obligations for australian privacy principles compliance monitoring using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=app sourcetype=\"crm:audit\" region=\"AU\" earliest=-30d\n| stats count by object, Operation\n| search Operation=\"Export\" AND NOT match(object,\"(?i)de_identified\")",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"crm:audit\" region=\"AU\" earliest=-30d\n| stats count by object, Operation\n| search Operation=\"Export\" AND NOT match(object,\"(?i)de_identified\")\n```\n\nUnderstanding this SPL\n\n**Australian Privacy Principles compliance monitoring (Privacy Act 1988 (Cth); NDB)** — Evidences Privacy Act 1988 (Cth); NDB obligations for australian privacy principles compliance monitoring using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: crm:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"crm:audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by object, Operation** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Australian Privacy Principles compliance monitoring (Privacy Act 1988 (Cth); NDB)** — Evidences Privacy Act 1988 (Cth); NDB obligations for australian privacy principles compliance monitoring using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Privacy Act 1988 (Cth); NDB"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "APP 1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act APP 1 (Open and transparent management of personal info) is enforced — Splunk UC-22.31.3: Australian Privacy Principles compliance monitoring.",
                  "ea": "Saved search 'UC-22.31.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "§26WK",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act §26WK (NDB — notifiable data breach) is enforced — Splunk UC-22.31.3: Australian Privacy Principles compliance monitoring.",
                  "ea": "Saved search 'UC-22.31.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.4",
              "n": "Privacy impact assessment register tracking (Privacy Act 1988 (Cth); NDB)",
              "c": "critical",
              "f": "advanced",
              "v": "Evidences Privacy Act 1988 (Cth); NDB obligations for privacy impact assessment register tracking using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=privacy sourcetype=\"pia:app\" earliest=-730d\n| stats latest(status) as st by app_id\n| where st!=\"Complete\" AND data_categories=\"health\"",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"pia:app\" earliest=-730d\n| stats latest(status) as st by app_id\n| where st!=\"Complete\" AND data_categories=\"health\"\n```\n\nUnderstanding this SPL\n\n**Privacy impact assessment register tracking (Privacy Act 1988 (Cth); NDB)** — Evidences Privacy Act 1988 (Cth); NDB obligations for privacy impact assessment register tracking using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: pia:app. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"pia:app\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st!=\"Complete\" AND data_categories=\"health\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privacy impact assessment register tracking (Privacy Act 1988 (Cth); NDB)** — Evidences Privacy Act 1988 (Cth); NDB obligations for privacy impact assessment register tracking using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Privacy Act 1988 (Cth); NDB"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "APP 1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act APP 1 (Open and transparent management of personal info) is enforced — Splunk UC-22.31.4: Privacy impact assessment register tracking.",
                  "ea": "Saved search 'UC-22.31.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "§26WK",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act §26WK (NDB — notifiable data breach) is enforced — Splunk UC-22.31.4: Privacy impact assessment register tracking.",
                  "ea": "Saved search 'UC-22.31.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.5",
              "n": "Cross-border disclosure tracking under APPs (Privacy Act 1988 (Cth); NDB)",
              "c": "high",
              "f": "intermediate",
              "v": "Evidences Privacy Act 1988 (Cth); NDB obligations for cross-border disclosure tracking under apps using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=proxy sourcetype=\"zscaler:web\" user_country=\"AU\" earliest=-24h\n| stats sum(bytes) as b by dest_country, app_class\n| where dest_country!=\"AU\" AND app_class=\"SaaS\"",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=proxy sourcetype=\"zscaler:web\" user_country=\"AU\" earliest=-24h\n| stats sum(bytes) as b by dest_country, app_class\n| where dest_country!=\"AU\" AND app_class=\"SaaS\"\n```\n\nUnderstanding this SPL\n\n**Cross-border disclosure tracking under APPs (Privacy Act 1988 (Cth); NDB)** — Evidences Privacy Act 1988 (Cth); NDB obligations for cross-border disclosure tracking under apps using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: proxy; **sourcetype**: zscaler:web. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=proxy, sourcetype=\"zscaler:web\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest_country, app_class** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where dest_country!=\"AU\" AND app_class=\"SaaS\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cross-border disclosure tracking under APPs (Privacy Act 1988 (Cth); NDB)** — Evidences Privacy Act 1988 (Cth); NDB obligations for cross-border disclosure tracking under apps using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Privacy Act 1988 (Cth); NDB"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "APP 1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act APP 1 (Open and transparent management of personal info) is enforced — Splunk UC-22.31.5: Cross-border disclosure tracking under APPs.",
                  "ea": "Saved search 'UC-22.31.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "§26WK",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act §26WK (NDB — notifiable data breach) is enforced — Splunk UC-22.31.5: Cross-border disclosure tracking under APPs.",
                  "ea": "Saved search 'UC-22.31.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.6",
              "n": "Application control allowlist drift monitoring (ASD Essential Eight)",
              "c": "medium",
              "f": "beginner",
              "v": "Evidences ASD Essential Eight obligations for application control allowlist drift monitoring using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4688 earliest=-24h\n| stats count by New_Process_Name, host\n| lookup asd_e8_scope.csv host OUTPUT in_scope\n| where in_scope=\"true\" AND NOT match(New_Process_Name,\"(?i)\\\\\\\\Program Files\")",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4688 earliest=-24h\n| stats count by New_Process_Name, host\n| lookup asd_e8_scope.csv host OUTPUT in_scope\n| where in_scope=\"true\" AND NOT match(New_Process_Name,\"(?i)\\\\\\\\Program Files\")\n```\n\nUnderstanding this SPL\n\n**Application control allowlist drift monitoring (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for application control allowlist drift monitoring using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by New_Process_Name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"true\" AND NOT match(New_Process_Name,\"(?i)\\\\\\\\Program Files\")` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Application control allowlist drift monitoring (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for application control allowlist drift monitoring using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ASD Essential Eight"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.01",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.01 (Application control) is enforced — Splunk UC-22.31.6: Application control allowlist drift monitoring.",
                  "ea": "Saved search 'UC-22.31.6' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.7",
              "n": "Microsoft Office macro security baseline compliance (ASD Essential Eight)",
              "c": "critical",
              "f": "expert",
              "v": "Evidences ASD Essential Eight obligations for microsoft office macro security baseline compliance using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=winregistry sourcetype=\"WinRegistry\" reg_path=\"*Office*Security*VBAWarnings*\" earliest=-7d\n| stats latest(reg_value_data) as v by host\n| where v!=\"3\"",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=winregistry sourcetype=\"WinRegistry\" reg_path=\"*Office*Security*VBAWarnings*\" earliest=-7d\n| stats latest(reg_value_data) as v by host\n| where v!=\"3\"\n```\n\nUnderstanding this SPL\n\n**Microsoft Office macro security baseline compliance (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for microsoft office macro security baseline compliance using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: winregistry; **sourcetype**: WinRegistry. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=winregistry, sourcetype=\"WinRegistry\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where v!=\"3\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Microsoft Office macro security baseline compliance (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for microsoft office macro security baseline compliance using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ASD Essential Eight"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.03",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.03 (Configure MS Office macro settings) is enforced — Splunk UC-22.31.7: Microsoft Office macro security baseline compliance.",
                  "ea": "Saved search 'UC-22.31.7' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.8",
              "n": "User application hardening compliance (ASD Essential Eight)",
              "c": "critical",
              "f": "advanced",
              "v": "Evidences ASD Essential Eight obligations for user application hardening compliance using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4673 earliest=-24h\n| stats count by ProcessName, ServiceName\n| lookup asd_e8_scope.csv host OUTPUT in_scope\n| where in_scope=\"true\"",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4673 earliest=-24h\n| stats count by ProcessName, ServiceName\n| lookup asd_e8_scope.csv host OUTPUT in_scope\n| where in_scope=\"true\"\n```\n\nUnderstanding this SPL\n\n**User application hardening compliance (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for user application hardening compliance using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ProcessName, ServiceName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**User application hardening compliance (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for user application hardening compliance using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ASD Essential Eight"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.02",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.02 is enforced — Splunk UC-22.31.8: User application hardening compliance.",
                  "ea": "Saved search 'UC-22.31.8' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.9",
              "n": "Administrative privilege restriction enforcement (ASD Essential Eight)",
              "c": "high",
              "f": "intermediate",
              "v": "Evidences ASD Essential Eight obligations for administrative privilege restriction enforcement using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4672 earliest=-24h\n| stats count by Account_Name, ComputerName\n| where count>20",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4672 earliest=-24h\n| stats count by Account_Name, ComputerName\n| where count>20\n```\n\nUnderstanding this SPL\n\n**Administrative privilege restriction enforcement (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for administrative privilege restriction enforcement using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by Account_Name, ComputerName** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>20` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Administrative privilege restriction enforcement (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for administrative privilege restriction enforcement using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ASD Essential Eight"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.05",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.05 (Restrict administrative privileges) is enforced — Splunk UC-22.31.9: Administrative privilege restriction enforcement.",
                  "ea": "Saved search 'UC-22.31.9' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.10",
              "n": "Operating system patching latency monitoring (ASD Essential Eight)",
              "c": "medium",
              "f": "beginner",
              "v": "Evidences ASD Essential Eight obligations for operating system patching latency monitoring using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=os sourcetype=package host=\"AU-*\" earliest=-24h\n| stats latest(version) as ver by package, host\n| lookup au_patch_baseline.csv package OUTPUT min_version\n| where ver<min_version",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=package host=\"AU-*\" earliest=-24h\n| stats latest(version) as ver by package, host\n| lookup au_patch_baseline.csv package OUTPUT min_version\n| where ver<min_version\n```\n\nUnderstanding this SPL\n\n**Operating system patching latency monitoring (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for operating system patching latency monitoring using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os; **sourcetype**: package; **host** filter: AU-*. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, sourcetype=package, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by package, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where ver<min_version` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Operating system patching latency monitoring (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for operating system patching latency monitoring using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ASD Essential Eight"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.06",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.06 (Patch operating systems) is enforced — Splunk UC-22.31.10: Operating system patching latency monitoring.",
                  "ea": "Saved search 'UC-22.31.10' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.11",
              "n": "Multi-factor authentication coverage and failures (ASD Essential Eight)",
              "c": "low",
              "f": "expert",
              "v": "Evidences ASD Essential Eight obligations for multi-factor authentication coverage and failures using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=o365 sourcetype=\"ms:o365:management\" Operation=\"Set-MultiFactorAuthenticationRequirements\" earliest=-7d\n| stats latest(ResultStatus) as rs by UserId\n| where rs!=\"Success\"",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=o365 sourcetype=\"ms:o365:management\" Operation=\"Set-MultiFactorAuthenticationRequirements\" earliest=-7d\n| stats latest(ResultStatus) as rs by UserId\n| where rs!=\"Success\"\n```\n\nUnderstanding this SPL\n\n**Multi-factor authentication coverage and failures (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for multi-factor authentication coverage and failures using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: o365; **sourcetype**: ms:o365:management. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=o365, sourcetype=\"ms:o365:management\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by UserId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where rs!=\"Success\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Multi-factor authentication coverage and failures (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for multi-factor authentication coverage and failures using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ASD Essential Eight"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.07",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.07 is enforced — Splunk UC-22.31.11: Multi-factor authentication coverage and failures.",
                  "ea": "Saved search 'UC-22.31.11' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.12",
              "n": "Daily backup verification and restore test evidence (ASD Essential Eight)",
              "c": "critical",
              "f": "advanced",
              "v": "Evidences ASD Essential Eight obligations for daily backup verification and restore test evidence using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=backup sourcetype=\"veeam:job\" OR sourcetype=\"rubrik:backup\" earliest=-2d\n| stats latest(job_status) as js by host, job_name\n| where js!=\"Success\"",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=\"veeam:job\" OR sourcetype=\"rubrik:backup\" earliest=-2d\n| stats latest(job_status) as js by host, job_name\n| where js!=\"Success\"\n```\n\nUnderstanding this SPL\n\n**Daily backup verification and restore test evidence (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for daily backup verification and restore test evidence using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: veeam:job, rubrik:backup. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=\"veeam:job\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host, job_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where js!=\"Success\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Daily backup verification and restore test evidence (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for daily backup verification and restore test evidence using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ASD Essential Eight"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.08",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.08 (Regular backups) is enforced — Splunk UC-22.31.12: Daily backup verification and restore test evidence.",
                  "ea": "Saved search 'UC-22.31.12' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.13",
              "n": "Office macro execution policy violation detection (ASD Essential Eight)",
              "c": "high",
              "f": "intermediate",
              "v": "Evidences ASD Essential Eight obligations for office macro execution policy violation detection using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4660 earliest=-24h\n| search Object_Name=\"*.xlsm\"\n| stats count by user, Object_Name",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4660 earliest=-24h\n| search Object_Name=\"*.xlsm\"\n| stats count by user, Object_Name\n```\n\nUnderstanding this SPL\n\n**Office macro execution policy violation detection (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for office macro execution policy violation detection using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, Object_Name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Office macro execution policy violation detection (ASD Essential Eight)** — Evidences ASD Essential Eight obligations for office macro execution policy violation detection using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ASD Essential Eight"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.user | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ASD E8",
                  "v": "Nov 2023",
                  "cl": "E8.03",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ASD E8 E8.03 (Configure MS Office macro settings) is enforced — Splunk UC-22.31.13: Office macro execution policy violation detection.",
                  "ea": "Saved search 'UC-22.31.13' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.14",
              "n": "Information security roles and responsibilities attestations (APRA CPS 234)",
              "c": "critical",
              "f": "beginner",
              "v": "Evidences APRA CPS 234 obligations for information security roles and responsibilities attestations using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=audit sourcetype=\"cps234:test\" control_family=\"Roles\" earliest=-180d\n| stats latest(result) as r by control_id, third_party\n| where r!=\"Pass\"",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"cps234:test\" control_family=\"Roles\" earliest=-180d\n| stats latest(result) as r by control_id, third_party\n| where r!=\"Pass\"\n```\n\nUnderstanding this SPL\n\n**Information security roles and responsibilities attestations (APRA CPS 234)** — Evidences APRA CPS 234 obligations for information security roles and responsibilities attestations using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: cps234:test. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"cps234:test\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by control_id, third_party** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where r!=\"Pass\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Information security roles and responsibilities attestations (APRA CPS 234)** — Evidences APRA CPS 234 obligations for information security roles and responsibilities attestations using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APRA CPS 234"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 23 (Incident management) is enforced — Splunk UC-22.31.14: Information security roles and responsibilities attestations.",
                  "ea": "Saved search 'UC-22.31.14' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.15",
              "n": "Control testing compliance evidence (APRA CPS 234)",
              "c": "low",
              "f": "expert",
              "v": "Evidences APRA CPS 234 obligations for control testing compliance evidence using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=test sourcetype=\"cps234:control_testing\" earliest=-365d\n| stats latest(effective) as e by control_id\n| where e=\"false\"",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=test sourcetype=\"cps234:control_testing\" earliest=-365d\n| stats latest(effective) as e by control_id\n| where e=\"false\"\n```\n\nUnderstanding this SPL\n\n**Control testing compliance evidence (APRA CPS 234)** — Evidences APRA CPS 234 obligations for control testing compliance evidence using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: test; **sourcetype**: cps234:control_testing. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=test, sourcetype=\"cps234:control_testing\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by control_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where e=\"false\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Control testing compliance evidence (APRA CPS 234)** — Evidences APRA CPS 234 obligations for control testing compliance evidence using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APRA CPS 234"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 23 (Incident management) is enforced — Splunk UC-22.31.15: Control testing compliance evidence.",
                  "ea": "Saved search 'UC-22.31.15' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.16",
              "n": "Incident notification workflow within regulatory expectations (APRA CPS 234)",
              "c": "critical",
              "f": "advanced",
              "v": "Evidences APRA CPS 234 obligations for incident notification workflow within regulatory expectations using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=grc sourcetype=\"cps234:incident_register\" earliest=-30d\n| eval elapsed=(now()-strptime(detected_at,\"%Y-%m-%d %H:%M:%S\"))/3600\n| where isnull(apra_notification_at) AND elapsed>60",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"cps234:incident_register\" earliest=-30d\n| eval elapsed=(now()-strptime(detected_at,\"%Y-%m-%d %H:%M:%S\"))/3600\n| where isnull(apra_notification_at) AND elapsed>60\n```\n\nUnderstanding this SPL\n\n**Incident notification workflow within regulatory expectations (APRA CPS 234)** — Evidences APRA CPS 234 obligations for incident notification workflow within regulatory expectations using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: cps234:incident_register. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"cps234:incident_register\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **elapsed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(apra_notification_at) AND elapsed>60` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Incident notification workflow within regulatory expectations (APRA CPS 234)** — Evidences APRA CPS 234 obligations for incident notification workflow within regulatory expectations using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APRA CPS 234"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "36",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 36 (Notification of incidents) is enforced — Splunk UC-22.31.16: Incident notification workflow within regulatory expectations.",
                  "ea": "Saved search 'UC-22.31.16' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.17",
              "n": "Third-party information security assessment tracking (APRA CPS 234)",
              "c": "high",
              "f": "intermediate",
              "v": "Evidences APRA CPS 234 obligations for third-party information security assessment tracking using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=vendor sourcetype=\"cps234:third_party\" earliest=-180d\n| stats latest(assessment_score) as s by vendor_name\n| where s<75",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vendor sourcetype=\"cps234:third_party\" earliest=-180d\n| stats latest(assessment_score) as s by vendor_name\n| where s<75\n```\n\nUnderstanding this SPL\n\n**Third-party information security assessment tracking (APRA CPS 234)** — Evidences APRA CPS 234 obligations for third-party information security assessment tracking using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vendor; **sourcetype**: cps234:third_party. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vendor, sourcetype=\"cps234:third_party\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vendor_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where s<75` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Third-party information security assessment tracking (APRA CPS 234)** — Evidences APRA CPS 234 obligations for third-party information security assessment tracking using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "APRA CPS 234"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "APRA CPS 234",
                  "v": "current",
                  "cl": "23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APRA CPS 234 23 (Incident management) is enforced — Splunk UC-22.31.17: Third-party information security assessment tracking.",
                  "ea": "Saved search 'UC-22.31.17' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.18",
              "n": "NZ ISM control effectiveness monitoring (NZISM)",
              "c": "medium",
              "f": "beginner",
              "v": "Evidences NZISM obligations for nz ism control effectiveness monitoring using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=grc sourcetype=\"nzism:control\" classification=\"CONFIDENTIAL\" earliest=-365d\n| stats latest(status) as st by control_id\n| where st!=\"Implemented\"",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"nzism:control\" classification=\"CONFIDENTIAL\" earliest=-365d\n| stats latest(status) as st by control_id\n| where st!=\"Implemented\"\n```\n\nUnderstanding this SPL\n\n**NZ ISM control effectiveness monitoring (NZISM)** — Evidences NZISM obligations for nz ism control effectiveness monitoring using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: nzism:control. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"nzism:control\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by control_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st!=\"Implemented\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**NZ ISM control effectiveness monitoring (NZISM)** — Evidences NZISM obligations for nz ism control effectiveness monitoring using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NZISM"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NZISM",
                  "v": "3.7",
                  "cl": "§16.6.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NZISM §16.6.9 (Event logging requirements) is enforced — Splunk UC-22.31.18: NZ ISM control effectiveness monitoring.",
                  "ea": "Saved search 'UC-22.31.18' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.19",
              "n": "CERT NZ incident reporting timeline compliance (NZISM)",
              "c": "low",
              "f": "expert",
              "v": "Evidences NZISM obligations for cert nz incident reporting timeline compliance using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=incident sourcetype=\"certnz:report\" earliest=-180d\n| where isnull(submitted_at) AND severity=\"significant\"",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=incident sourcetype=\"certnz:report\" earliest=-180d\n| where isnull(submitted_at) AND severity=\"significant\"\n```\n\nUnderstanding this SPL\n\n**CERT NZ incident reporting timeline compliance (NZISM)** — Evidences NZISM obligations for cert nz incident reporting timeline compliance using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: incident; **sourcetype**: certnz:report. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=incident, sourcetype=\"certnz:report\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(submitted_at) AND severity=\"significant\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CERT NZ incident reporting timeline compliance (NZISM)** — Evidences NZISM obligations for cert nz incident reporting timeline compliance using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. 
**App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NZISM"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NZISM",
                  "v": "3.7",
                  "cl": "§17.2.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NZISM §17.2.17 (Information security incident management and response) is enforced — Splunk UC-22.31.19: CERT NZ incident reporting timeline compliance.",
                  "ea": "Saved search 'UC-22.31.19' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.31.20",
              "n": "Protective security requirements evidence for NZ agencies (NZISM)",
              "c": "critical",
              "f": "advanced",
              "v": "Evidences NZISM obligations for protective security requirements evidence for nz agencies using endpoint, identity, and cloud telemetry.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records",
              "q": "index=physical sourcetype=\"nz:protective\" earliest=-90d\n| stats latest(status) as st by site_id, requirement\n| where st!=\"Met\"",
              "m": "(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.",
              "z": "Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CrowdStrike FDR](https://splunkbase.splunk.com/app/5082), [CIM: Endpoint](https://docs.splunk.com/Documentation/CIM/latest/User/Endpoint)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CrowdStrike policy posture via TA 5082 and map hosts to NDB and NZISM tiers. (2) Tune alerts with ACSD Essential Eight and APRA-aligned severities. (3) Coordinate privacy and security ownership for dual-reporting workflows. (4) Monthly export for privacy officer and prudential working groups.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=physical sourcetype=\"nz:protective\" earliest=-90d\n| stats latest(status) as st by site_id, requirement\n| where st!=\"Met\"\n```\n\nUnderstanding this SPL\n\n**Protective security requirements evidence for NZ agencies (NZISM)** — Evidences NZISM obligations for protective security requirements evidence for nz agencies using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: physical; **sourcetype**: nz:protective. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=physical, sourcetype=\"nz:protective\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by site_id, requirement** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st!=\"Met\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Protective security requirements evidence for NZ agencies (NZISM)** — Evidences NZISM obligations for protective security requirements evidence for nz agencies using endpoint, identity, and cloud telemetry.\n\nDocumented **Data sources**: Windows Security Event Logs, endpoint protection logs, cloud audit trails, vulnerability scan outputs, email security logs, network infrastructure logs, identity management systems, incident response records. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CrowdStrike FDR (5082), Splunk ITSI (1841), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Endpoint.Processes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (% compliant hosts), Bar chart (fails by control), Table (worst hosts).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show security and resilience evidence that Australian and New Zealand prudential standards ask for, in one place.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NZISM"
              ],
              "a": [
                "Endpoint",
                "Vulnerabilities"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Endpoint.Processes by Processes.process_name, Processes.user, Processes.dest | sort - count",
              "e": [
                "crowdstrike",
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NZISM",
                  "v": "3.7",
                  "cl": "§16.6.9",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NZISM §16.6.9 (Event logging requirements) is enforced — Splunk UC-22.31.20: Protective security requirements evidence for NZ agencies.",
                  "ea": "Saved search 'UC-22.31.20' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 20,
            "none": 0
          }
        },
        {
          "i": "22.32",
          "n": "Americas Regulations",
          "u": [
            {
              "i": "22.32.1",
              "n": "LGPD consent management audit trail (Lei Geral de Proteção de Dados (LGPD))",
              "c": "high",
              "f": "expert",
              "v": "Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for lgpd consent management audit trail in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=cmp sourcetype=\"lgpd:consent\" earliest=-90d\n| stats latest(consent_status) as cs by data_subject_id, purpose_id\n| where cs!=\"recorded\"",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cmp sourcetype=\"lgpd:consent\" earliest=-90d\n| stats latest(consent_status) as cs by data_subject_id, purpose_id\n| where cs!=\"recorded\"\n```\n\nUnderstanding this SPL\n\n**LGPD consent management audit trail (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for lgpd consent management audit trail in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cmp; **sourcetype**: lgpd:consent. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cmp, sourcetype=\"lgpd:consent\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by data_subject_id, purpose_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cs!=\"recorded\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**LGPD consent management audit trail (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for lgpd consent management audit trail in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
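For the single-value panels suggested under visualizations, a companion search can report the share of data-subject/purpose pairs with consent recorded rather than only listing gaps. A minimal sketch reusing the same index, sourcetype, and fields as the base search (rename them if your deployment differs):

```spl
index=cmp sourcetype="lgpd:consent" earliest=-90d
| stats latest(consent_status) as cs by data_subject_id, purpose_id
| stats count(eval(cs="recorded")) as recorded count as total
| eval pct_consent_recorded=round(100*recorded/total,1)
```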
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR",
                "Lei Geral de Proteção de Dados (LGPD)"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.46 (Security measures) is enforced — Splunk UC-22.32.1: LGPD consent management audit trail.",
                  "ea": "Saved search 'UC-22.32.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.2",
              "n": "Data subject rights fulfillment SLA monitoring (Lei Geral de Proteção de Dados (LGPD))",
              "c": "medium",
              "f": "advanced",
              "v": "Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for data subject rights fulfillment sla monitoring in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=ticketing sourcetype=\"snow:sc_req_item\" cat_item=\"*LGPD*DSAR*\" earliest=-180d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval closed=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| eval age=round((now()-opened)/86400,0)\n| where isnull(closed) AND age>15",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ticketing sourcetype=\"snow:sc_req_item\" cat_item=\"*LGPD*DSAR*\" earliest=-180d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval closed=strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")\n| eval age=round((now()-opened)/86400,0)\n| where isnull(closed) AND age>15\n```\n\nUnderstanding this SPL\n\n**Data subject rights fulfillment SLA monitoring (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for data subject rights fulfillment sla monitoring in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ticketing; **sourcetype**: snow:sc_req_item. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ticketing, sourcetype=\"snow:sc_req_item\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **closed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(closed) AND age>15` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Data subject rights fulfillment SLA monitoring (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for data subject rights fulfillment sla monitoring in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
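Beyond listing breaches, the same data can feed a stacked-bar or single-value panel showing how close open requests are to the statutory window. A minimal sketch using only the fields from the base search; the "at risk" cutoff of 12 days (three days of headroom before the 15-day limit) is an assumption to adjust to your own SLA policy:

```spl
index=ticketing sourcetype="snow:sc_req_item" cat_item="*LGPD*DSAR*" earliest=-180d
| eval opened=strptime(opened_at,"%Y-%m-%d %H:%M:%S")
| eval closed=strptime(closed_at,"%Y-%m-%d %H:%M:%S")
| where isnull(closed)
| eval age=round((now()-opened)/86400,0)
| eval sla_band=case(age>15,"breached", age>=12,"at risk", true(),"on track")
| stats count by sla_band
```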
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR",
                "Lei Geral de Proteção de Dados (LGPD)"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.46 (Security measures) is enforced — Splunk UC-22.32.2: Data subject rights fulfillment SLA monitoring.",
                  "ea": "Saved search 'UC-22.32.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.3",
              "n": "ANPD personal data incident notification evidence (Lei Geral de Proteção de Dados (LGPD))",
              "c": "critical",
              "f": "intermediate",
              "v": "Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for anpd personal data incident notification evidence in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=privacy sourcetype=\"anpd:notification\" earliest=-365d\n| where isnull(anpd_filed_at) AND confirmed=\"true\"",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"anpd:notification\" earliest=-365d\n| where isnull(anpd_filed_at) AND confirmed=\"true\"\n```\n\nUnderstanding this SPL\n\n**ANPD personal data incident notification evidence (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for anpd personal data incident notification evidence in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: anpd:notification. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"anpd:notification\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(anpd_filed_at) AND confirmed=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ANPD personal data incident notification evidence (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for anpd personal data incident notification evidence in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
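The base search surfaces confirmed incidents that have not yet been filed with the ANPD. For evidence that filings which did happen were timely, a companion search can measure the gap between confirmation and filing. A minimal sketch only: the `confirmed_at` field and the timestamp format are assumptions about your notification records and will likely need adjusting:

```spl
index=privacy sourcetype="anpd:notification" earliest=-365d confirmed="true"
| where isnotnull(anpd_filed_at)
| eval filed=strptime(anpd_filed_at,"%Y-%m-%d %H:%M:%S")
| eval confirmed_time=strptime(confirmed_at,"%Y-%m-%d %H:%M:%S")
| eval days_to_file=round((filed-confirmed_time)/86400,1)
| stats avg(days_to_file) as avg_days max(days_to_file) as slowest_days count as filings
```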
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR",
                "Lei Geral de Proteção de Dados (LGPD)"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.48",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.48 (Breach notification) is enforced — Splunk UC-22.32.3: ANPD personal data incident notification evidence.",
                  "ea": "Saved search 'UC-22.32.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.4",
              "n": "DPO statutory compliance and coverage (Lei Geral de Proteção de Dados (LGPD))",
              "c": "critical",
              "f": "beginner",
              "v": "Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for dpo statutory compliance and coverage in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=hr sourcetype=\"lgpd:dpo\" earliest=-730d\n| stats latest(appointed) as a by legal_entity\n| where isnull(a)",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=\"lgpd:dpo\" earliest=-730d\n| stats latest(appointed) as a by legal_entity\n| where isnull(a)\n```\n\nUnderstanding this SPL\n\n**DPO statutory compliance and coverage (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for dpo statutory compliance and coverage in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: lgpd:dpo. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=\"lgpd:dpo\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by legal_entity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where isnull(a)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**DPO statutory compliance and coverage (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for dpo statutory compliance and coverage in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
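Because `stats ... by legal_entity` can only report entities that have at least one event, entities missing from the feed entirely will never appear. A lookup-driven coverage check closes that gap. A minimal sketch, assuming a hypothetical `legal_entities` lookup definition (one row per entity, column `legal_entity`) that you maintain separately:

```spl
| inputlookup legal_entities
| fields legal_entity
| join type=left legal_entity
    [ search index=hr sourcetype="lgpd:dpo" earliest=-730d
      | stats latest(appointed) as appointed by legal_entity ]
| where isnull(appointed)
```

The usual subsearch limits apply to `join`; for large entity counts an `append`/`stats` pattern may scale better.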
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Lei Geral de Proteção de Dados (LGPD)"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.46 (Security measures) is enforced — Splunk UC-22.32.4: DPO statutory compliance and coverage.",
                  "ea": "Saved search 'UC-22.32.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.5",
              "n": "Cross-border transfer legal basis validation (Lei Geral de Proteção de Dados (LGPD))",
              "c": "high",
              "f": "expert",
              "v": "Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for cross-border transfer legal basis validation in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=grc sourcetype=\"lgpd:tia\" earliest=-365d\n| stats latest(status) as st by transfer_id, dest_country\n| where st!=\"Approved\"",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"lgpd:tia\" earliest=-365d\n| stats latest(status) as st by transfer_id, dest_country\n| where st!=\"Approved\"\n```\n\nUnderstanding this SPL\n\n**Cross-border transfer legal basis validation (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for cross-border transfer legal basis validation in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: lgpd:tia. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"lgpd:tia\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by transfer_id, dest_country** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st!=\"Approved\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cross-border transfer legal basis validation (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for cross-border transfer legal basis validation in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
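To help privacy counsel prioritize, unapproved transfers can be enriched with whether the destination country benefits from an adequacy decision. A minimal sketch, assuming a hypothetical `adequacy_countries` lookup definition with columns `country` and `adequate`:

```spl
index=grc sourcetype="lgpd:tia" earliest=-365d
| stats latest(status) as st by transfer_id, dest_country
| where st!="Approved"
| lookup adequacy_countries country as dest_country OUTPUT adequate
| eval review_priority=if(adequate="yes","standard","urgent")
| table transfer_id dest_country st adequate review_priority
```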
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Lei Geral de Proteção de Dados (LGPD)"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.46 (Security measures) is enforced — Splunk UC-22.32.5: Cross-border transfer legal basis validation.",
                  "ea": "Saved search 'UC-22.32.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.6",
              "n": "Privacy impact assessment (RIPD) tracking (Lei Geral de Proteção de Dados (LGPD))",
              "c": "medium",
              "f": "advanced",
              "v": "Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for privacy impact assessment (ripd) tracking in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=privacy sourcetype=\"ropa:line\" earliest=-730d\n| stats dc(processing_purpose) as p by system_id\n| where p=0",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"ropa:line\" earliest=-730d\n| stats dc(processing_purpose) as p by system_id\n| where p=0\n```\n\nUnderstanding this SPL\n\n**Privacy impact assessment (RIPD) tracking (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for privacy impact assessment (ripd) tracking in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: ropa:line. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"ropa:line\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by system_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p=0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Privacy impact assessment (RIPD) tracking (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for privacy impact assessment (ripd) tracking in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
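As supporting evidence that the processing register is actively maintained (not just non-empty), a simple trend over the same data can be added to the dashboard. A minimal sketch using only the fields already referenced by the base search:

```spl
index=privacy sourcetype="ropa:line" earliest=-730d
| timechart span=1mon dc(system_id) as systems_documented dc(processing_purpose) as purposes_documented
```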
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Lei Geral de Proteção de Dados (LGPD)"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.46 (Security measures) is enforced — Splunk UC-22.32.6: Privacy impact assessment (RIPD) tracking.",
                  "ea": "Saved search 'UC-22.32.6' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.7",
              "n": "Legitimate interest assessment record monitoring (Lei Geral de Proteção de Dados (LGPD))",
              "c": "low",
              "f": "intermediate",
              "v": "Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for legitimate interest assessment record monitoring in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=grc sourcetype=\"lgpd:lia\" earliest=-365d\n| stats latest(score) as s by processing_activity\n| where s<70",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"lgpd:lia\" earliest=-365d\n| stats latest(score) as s by processing_activity\n| where s<70\n```\n\nUnderstanding this SPL\n\n**Legitimate interest assessment record monitoring (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for legitimate interest assessment record monitoring in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: lgpd:lia. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"lgpd:lia\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by processing_activity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where s<70` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Legitimate interest assessment record monitoring (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for legitimate interest assessment record monitoring in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Lei Geral de Proteção de Dados (LGPD)"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.46 (Security measures) is enforced — Splunk UC-22.32.7: Legitimate interest assessment record monitoring.",
                  "ea": "Saved search 'UC-22.32.7' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.8",
              "n": "Processing activities register synchronization (Lei Geral de Proteção de Dados (LGPD))",
              "c": "critical",
              "f": "beginner",
              "v": "Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for processing activities register synchronization in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=wiki sourcetype=\"confluence:audit\" object=\"*DPIA*LGPD*\" earliest=-180d\n| stats count by author, action\n| sort - count",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wiki sourcetype=\"confluence:audit\" object=\"*DPIA*LGPD*\" earliest=-180d\n| stats count by author, action\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Processing activities register synchronization (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for processing activities register synchronization in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wiki; **sourcetype**: confluence:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wiki, sourcetype=\"confluence:audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by author, action** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Processing activities register synchronization (Lei Geral de Proteção de Dados (LGPD))** — Supports Lei Geral de Proteção de Dados (LGPD) compliance evidence for processing activities register synchronization in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Lei Geral de Proteção de Dados (LGPD)"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.46 (Security measures) is enforced — Splunk UC-22.32.8: Processing activities register synchronization.",
                  "ea": "Saved search 'UC-22.32.8' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.9",
              "n": "Continuous monitoring metrics for system security plans (FISMA; FedRAMP)",
              "c": "high",
              "f": "expert",
              "v": "Supports FISMA; FedRAMP compliance evidence for continuous monitoring metrics for system security plans in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=grc sourcetype=\"fedramp:poam\" earliest=-90d\n| eval due_epoch=strptime(due_date,\"%Y-%m-%d\")\n| where status!=\"Closed\" AND due_epoch<now()",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"fedramp:poam\" earliest=-90d\n| eval due_epoch=strptime(due_date,\"%Y-%m-%d\")\n| where status!=\"Closed\" AND due_epoch<now()\n```\n\nUnderstanding this SPL\n\n**Continuous monitoring metrics for system security plans (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for continuous monitoring metrics for system security plans in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: fedramp:poam. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"fedramp:poam\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **due_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where status!=\"Closed\" AND due_epoch<now()` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Continuous monitoring metrics for system security plans (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for continuous monitoring metrics for system security plans in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.32.9: Continuous monitoring metrics for system security plans.",
                  "ea": "Saved search 'UC-22.32.9' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.10",
              "n": "Plan of action and milestone (POA&M) aging management (FISMA; FedRAMP)",
              "c": "critical",
              "f": "advanced",
              "v": "Supports FISMA; FedRAMP compliance evidence for plan of action and milestone (poa&m) aging management in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=nessus sourcetype=\"nessus:scan\" tag=fedramp earliest=-14d\n| stats max(severity) as sev by host\n| where sev IN (\"Critical\",\"High\")",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=nessus sourcetype=\"nessus:scan\" tag=fedramp earliest=-14d\n| stats max(severity) as sev by host\n| where sev IN (\"Critical\",\"High\")\n```\n\nUnderstanding this SPL\n\n**Plan of action and milestone (POA&M) aging management (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for plan of action and milestone (poa&m) aging management in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: nessus; **sourcetype**: nessus:scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=nessus, sourcetype=\"nessus:scan\", time bounds, tags. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where sev IN (\"Critical\",\"High\")` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Plan of action and milestone (POA&M) aging management (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for plan of action and milestone (poa&m) aging management in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.32.10: Plan of action and milestone (POA&M) aging management.",
                  "ea": "Saved search 'UC-22.32.10' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.11",
              "n": "ATO boundary enforcement for cloud workloads (FISMA; FedRAMP)",
              "c": "low",
              "f": "intermediate",
              "v": "Supports FISMA; FedRAMP compliance evidence for ato boundary enforcement for cloud workloads in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=aws sourcetype=\"aws:cloudtrail\" earliest=-7d eventName=\"AuthorizeSecurityGroupIngress\"\n| stats count by userIdentity.arn, requestParameters.groupId\n| lookup fedramp_boundary_sg.csv groupId AS `requestParameters.groupId` OUTPUT in_boundary\n| where in_boundary=\"true\"",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=aws sourcetype=\"aws:cloudtrail\" earliest=-7d eventName=\"AuthorizeSecurityGroupIngress\"\n| stats count by userIdentity.arn, requestParameters.groupId\n| lookup fedramp_boundary_sg.csv groupId AS `requestParameters.groupId` OUTPUT in_boundary\n| where in_boundary=\"true\"\n```\n\nUnderstanding this SPL\n\n**ATO boundary enforcement for cloud workloads (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for ato boundary enforcement for cloud workloads in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: aws; **sourcetype**: aws:cloudtrail. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=aws, sourcetype=\"aws:cloudtrail\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by userIdentity.arn, requestParameters.groupId** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_boundary=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**ATO boundary enforcement for cloud workloads (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for ato boundary enforcement for cloud workloads in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.32.11: ATO boundary enforcement for cloud workloads.",
                  "ea": "Saved search 'UC-22.32.11' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.12",
              "n": "Vulnerability remediation SLA tracking (FISMA; FedRAMP)",
              "c": "critical",
              "f": "beginner",
              "v": "Supports FISMA; FedRAMP compliance evidence for vulnerability remediation sla tracking in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=grc sourcetype=\"ato:continuous_monitoring\" earliest=-30d\n| stats latest(metric_value) as v by system_id, metric\n| lookup fedramp_cm_thresholds.csv metric OUTPUT max_v\n| where v>max_v",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"ato:continuous_monitoring\" earliest=-30d\n| stats latest(metric_value) as v by system_id, metric\n| lookup fedramp_cm_thresholds.csv metric OUTPUT max_v\n| where v>max_v\n```\n\nUnderstanding this SPL\n\n**Vulnerability remediation SLA tracking (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for vulnerability remediation sla tracking in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: ato:continuous_monitoring. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"ato:continuous_monitoring\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by system_id, metric** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where v>max_v` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Vulnerability remediation SLA tracking (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for vulnerability remediation sla tracking in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.32.12: Vulnerability remediation SLA tracking.",
                  "ea": "Saved search 'UC-22.32.12' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.13",
              "n": "Security control assessment evidence correlation (FISMA; FedRAMP)",
              "c": "high",
              "f": "expert",
              "v": "Supports FISMA; FedRAMP compliance evidence for security control assessment evidence correlation in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=us_cert sourcetype=\"incident:federal\" earliest=-365d\n| where isnull(us_cert_reported_at) AND classification=\"incident\"",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=us_cert sourcetype=\"incident:federal\" earliest=-365d\n| where isnull(us_cert_reported_at) AND classification=\"incident\"\n```\n\nUnderstanding this SPL\n\n**Security control assessment evidence correlation (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for security control assessment evidence correlation in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: us_cert; **sourcetype**: incident:federal. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=us_cert, sourcetype=\"incident:federal\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(us_cert_reported_at) AND classification=\"incident\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security control assessment evidence correlation (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for security control assessment evidence correlation in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.32.13: Security control assessment evidence correlation.",
                  "ea": "Saved search 'UC-22.32.13' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.14",
              "n": "US-CERT incident reporting completeness (FISMA; FedRAMP)",
              "c": "medium",
              "f": "advanced",
              "v": "Supports FISMA; FedRAMP compliance evidence for us-cert incident reporting completeness in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=ad sourcetype=\"WinEventLog:Security\" EventCode=4768 earliest=-24h\n| search TicketOptions=\"*0x40810000*\"\n| stats count by Account_Name\n| lookup piv_enforced_users.csv Account_Name OUTPUT piv_ok\n| where piv_ok!=\"true\"",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ad sourcetype=\"WinEventLog:Security\" EventCode=4768 earliest=-24h\n| search TicketOptions=\"*0x40810000*\"\n| stats count by Account_Name\n| lookup piv_enforced_users.csv Account_Name OUTPUT piv_ok\n| where piv_ok!=\"true\"\n```\n\nUnderstanding this SPL\n\n**US-CERT incident reporting completeness (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for us-cert incident reporting completeness in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ad; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ad, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by Account_Name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where piv_ok!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**US-CERT incident reporting completeness (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for us-cert incident reporting completeness in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.32.14: US-CERT incident reporting completeness.",
                  "ea": "Saved search 'UC-22.32.14' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.15",
              "n": "PIV and smart card authentication monitoring (FISMA; FedRAMP)",
              "c": "low",
              "f": "intermediate",
              "v": "Supports FISMA; FedRAMP compliance evidence for piv and smart card authentication monitoring in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=vendor sourcetype=\"fedramp:ssp\" earliest=-730d\n| stats latest(supply_chain_review) as scr by component\n| where scr!=\"current\"",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vendor sourcetype=\"fedramp:ssp\" earliest=-730d\n| stats latest(supply_chain_review) as scr by component\n| where scr!=\"current\"\n```\n\nUnderstanding this SPL\n\n**PIV and smart card authentication monitoring (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for piv and smart card authentication monitoring in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vendor; **sourcetype**: fedramp:ssp. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vendor, sourcetype=\"fedramp:ssp\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by component** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where scr!=\"current\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**PIV and smart card authentication monitoring (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for piv and smart card authentication monitoring in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(1) (Information security program) is enforced — Splunk UC-22.32.15: PIV and smart card authentication monitoring.",
                  "ea": "Saved search 'UC-22.32.15' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.16",
              "n": "Supply chain risk management telemetry (FISMA; FedRAMP)",
              "c": "critical",
              "f": "beginner",
              "v": "Supports FISMA; FedRAMP compliance evidence for supply chain risk management telemetry in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=vulns sourcetype=\"tenable:sc\" fedramp_system=\"true\" earliest=-30d\n| stats count by plugin_id, host\n| lookup poam_open_items.csv plugin_id OUTPUT poam_open\n| where poam_open=\"true\"",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vulns sourcetype=\"tenable:sc\" fedramp_system=\"true\" earliest=-30d\n| stats count by plugin_id, host\n| lookup poam_open_items.csv plugin_id OUTPUT poam_open\n| where poam_open=\"true\"\n```\n\nUnderstanding this SPL\n\n**Supply chain risk management telemetry (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for supply chain risk management telemetry in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vulns; **sourcetype**: tenable:sc. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vulns, sourcetype=\"tenable:sc\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by plugin_id, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where poam_open=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Supply chain risk management telemetry (FISMA; FedRAMP)** — Supports FISMA; FedRAMP compliance evidence for supply chain risk management telemetry in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "FISMA"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "FISMA",
                  "v": "2014",
                  "cl": "§3554(b)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FISMA §3554(b)(5) (Security controls and monitoring) is enforced — Splunk UC-22.32.16: Supply chain risk management telemetry.",
                  "ea": "Saved search 'UC-22.32.16' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.17",
              "n": "Controlled unclassified information access control (CMMC)",
              "c": "critical",
              "f": "expert",
              "v": "Supports CMMC compliance evidence for controlled unclassified information access control in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=share sourcetype=\"sharepoint:audit\" OR index=fs sourcetype=\"netapp:audit\" earliest=-24h\n| search tag=cui OR sensitivity=\"CUI\"\n| stats count by user, path\n| sort - count",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=share sourcetype=\"sharepoint:audit\" OR index=fs sourcetype=\"netapp:audit\" earliest=-24h\n| search tag=cui OR sensitivity=\"CUI\"\n| stats count by user, path\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Controlled unclassified information access control (CMMC)** — Supports CMMC compliance evidence for controlled unclassified information access control in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: share, fs; **sourcetype**: sharepoint:audit, netapp:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=share, index=fs, sourcetype=\"sharepoint:audit\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, path** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Controlled unclassified information access control (CMMC)** — Supports CMMC compliance evidence for controlled unclassified information access control in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "AC.L2-3.1.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CMMC AC.L2-3.1.1 (Authorized access to systems) is enforced — Splunk UC-22.32.17: Controlled unclassified information access control.",
                  "ea": "Saved search 'UC-22.32.17' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.18",
              "n": "CMMC practice implementation evidence collection (CMMC)",
              "c": "medium",
              "f": "advanced",
              "v": "Supports CMMC compliance evidence for cmmc practice implementation evidence collection in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=grc sourcetype=\"cmmc:ssp\" practice_id=\"AC.L2-*\" earliest=-365d\n| stats latest(evidence_status) as es by practice_id\n| where es!=\"complete\"",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"cmmc:ssp\" practice_id=\"AC.L2-*\" earliest=-365d\n| stats latest(evidence_status) as es by practice_id\n| where es!=\"complete\"\n```\n\nUnderstanding this SPL\n\n**CMMC practice implementation evidence collection (CMMC)** — Supports CMMC compliance evidence for cmmc practice implementation evidence collection in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: cmmc:ssp. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"cmmc:ssp\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by practice_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where es!=\"complete\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CMMC practice implementation evidence collection (CMMC)** — Supports CMMC compliance evidence for cmmc practice implementation evidence collection in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "AU.L2-3.3.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC AU.L2-3.3.5 is enforced — Splunk UC-22.32.18.",
                  "ea": "Saved search 'UC-22.32.18' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.19",
              "n": "CMMC assessment readiness scoring (CMMC)",
              "c": "low",
              "f": "intermediate",
              "v": "Supports CMMC compliance evidence for cmmc assessment readiness scoring in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=test sourcetype=\"cmmc:readiness\" earliest=-180d\n| stats latest(score) as s by domain\n| where s<85",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=test sourcetype=\"cmmc:readiness\" earliest=-180d\n| stats latest(score) as s by domain\n| where s<85\n```\n\nUnderstanding this SPL\n\n**CMMC assessment readiness scoring (CMMC)** — Supports CMMC compliance evidence for cmmc assessment readiness scoring in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: test; **sourcetype**: cmmc:readiness. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=test, sourcetype=\"cmmc:readiness\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by domain** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where s<85` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CMMC assessment readiness scoring (CMMC)** — Supports CMMC compliance evidence for cmmc assessment readiness scoring in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "AU.L2-3.3.5",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CMMC AU.L2-3.3.5 (Audit reporting and correlation) is enforced — Splunk UC-22.32.19: CMMC assessment readiness scoring.",
                  "ea": "Saved search 'UC-22.32.19' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.20",
              "n": "CUI incident response evidence (CMMC)",
              "c": "critical",
              "f": "beginner",
              "v": "Supports CMMC compliance evidence for cui incident response evidence in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=edr sourcetype=\"crowdstrike:detection\" tag=cui earliest=-7d\n| stats count by tactic, hostname\n| sort - count",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=edr sourcetype=\"crowdstrike:detection\" tag=cui earliest=-7d\n| stats count by tactic, hostname\n| sort - count\n```\n\nUnderstanding this SPL\n\n**CUI incident response evidence (CMMC)** — Supports CMMC compliance evidence for cui incident response evidence in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: edr; **sourcetype**: crowdstrike:detection. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=edr, sourcetype=\"crowdstrike:detection\", time bounds, tags. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by tactic, hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CUI incident response evidence (CMMC)** — Supports CMMC compliance evidence for cui incident response evidence in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "IR.L2-3.6.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CMMC IR.L2-3.6.1 is enforced — Splunk UC-22.32.20.",
                  "ea": "Saved search 'UC-22.32.20' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.21",
              "n": "Continuous monitoring for CMMC practice families (CMMC)",
              "c": "high",
              "f": "expert",
              "v": "Supports CMMC compliance evidence for continuous monitoring for cmmc practice families in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=network sourcetype=\"pan:traffic\" tag=cui earliest=-24h\n| stats sum(bytes) as b by src, dest, app\n| where app=\"unknown-tcp\"",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:traffic\" tag=cui earliest=-24h\n| stats sum(bytes) as b by src, dest, app\n| where app=\"unknown-tcp\"\n```\n\nUnderstanding this SPL\n\n**Continuous monitoring for CMMC practice families (CMMC)** — Supports CMMC compliance evidence for continuous monitoring for cmmc practice families in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:traffic\", time bounds, tags. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src, dest, app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where app=\"unknown-tcp\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Continuous monitoring for CMMC practice families (CMMC)** — Supports CMMC compliance evidence for continuous monitoring for cmmc practice families in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CMMC"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Change.All_Changes by All_Changes.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CMMC",
                  "v": "2.0",
                  "cl": "SI.L2-3.14.6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CMMC SI.L2-3.14.6 (Monitor for attacks) is enforced — Splunk UC-22.32.21: Continuous monitoring for CMMC practice families.",
                  "ea": "Saved search 'UC-22.32.21' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.22",
              "n": "Criminal justice information access logging (CJIS Security Policy)",
              "c": "medium",
              "f": "advanced",
              "v": "Supports CJIS Security Policy compliance evidence for criminal justice information access logging in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4663 earliest=-24h\n| search Object_Name=\"*CJI*\"\n| stats count by user, Object_Name, host",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode=4663 earliest=-24h\n| search Object_Name=\"*CJI*\"\n| stats count by user, Object_Name, host\n```\n\nUnderstanding this SPL\n\n**Criminal justice information access logging (CJIS Security Policy)** — Supports CJIS Security Policy compliance evidence for criminal justice information access logging in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, Object_Name, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Criminal justice information access logging (CJIS Security Policy)** — Supports CJIS Security Policy compliance evidence for criminal justice information access logging in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CJIS Security Policy"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.dest | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CJIS",
                  "v": "v5.9.4",
                  "cl": "5.5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CJIS 5.5.1 (Access control - identification) is enforced — Splunk UC-22.32.22: Criminal justice information access logging.",
                  "ea": "Saved search 'UC-22.32.22' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.23",
              "n": "Advanced authentication for CJI sessions (CJIS Security Policy)",
              "c": "low",
              "f": "intermediate",
              "v": "Supports CJIS Security Policy compliance evidence for advanced authentication for cji sessions in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4648) earliest=-24h\n| lookup cjis_workstation.csv host OUTPUT cjis_flag\n| where cjis_flag=\"true\"\n| eval mfa=if(match(AuthenticationPackage,\"Negotiate\"),1,0)\n| where mfa=0",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" EventCode IN (4624,4648) earliest=-24h\n| lookup cjis_workstation.csv host OUTPUT cjis_flag\n| where cjis_flag=\"true\"\n| eval mfa=if(match(AuthenticationPackage,\"Negotiate\"),1,0)\n| where mfa=0\n```\n\nUnderstanding this SPL\n\n**Advanced authentication for CJI sessions (CJIS Security Policy)** — Supports CJIS Security Policy compliance evidence for advanced authentication for cji sessions in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where cjis_flag=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **mfa** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mfa=0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Advanced authentication for CJI sessions (CJIS Security Policy)** — Supports CJIS Security Policy compliance evidence for advanced authentication for cji sessions in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CJIS Security Policy"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CJIS",
                  "v": "v5.9.4",
                  "cl": "5.5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CJIS 5.5.1 (Access control - identification) is enforced — Splunk UC-22.32.23: Advanced authentication for CJI sessions.",
                  "ea": "Saved search 'UC-22.32.23' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.24",
              "n": "Personnel security screening compliance tracking (CJIS Security Policy)",
              "c": "critical",
              "f": "beginner",
              "v": "Supports CJIS Security Policy compliance evidence for personnel security screening compliance tracking in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=hr sourcetype=\"cjis:personnel\" earliest=-365d\n| stats latest(screening_status) as st by employee_id\n| where st!=\"approved\"",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Change](https://docs.splunk.com/Documentation/CIM/latest/User/Change)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=\"cjis:personnel\" earliest=-365d\n| stats latest(screening_status) as st by employee_id\n| where st!=\"approved\"\n```\n\nUnderstanding this SPL\n\n**Personnel security screening compliance tracking (CJIS Security Policy)** — Supports CJIS Security Policy compliance evidence for personnel security screening compliance tracking in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: cjis:personnel. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=\"cjis:personnel\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by employee_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st!=\"approved\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Personnel security screening compliance tracking (CJIS Security Policy)** — Supports CJIS Security Policy compliance evidence for personnel security screening compliance tracking in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Change.All_Changes` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CJIS Security Policy"
              ],
              "a": [
                "Change"
              ],
              "qs": "| tstats summariesonly=t latest(All_Changes.status) as agg_value from datamodel=Change.All_Changes by All_Changes.action, All_Changes.object_category, All_Changes.user | sort - agg_value",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CJIS",
                  "v": "v5.9.4",
                  "cl": "5.5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CJIS 5.5.1 (Access control - identification) is enforced — Splunk UC-22.32.24: Personnel security screening compliance tracking.",
                  "ea": "Saved search 'UC-22.32.24' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.32.25",
              "n": "CJI media protection and transfer monitoring (CJIS Security Policy)",
              "c": "high",
              "f": "expert",
              "v": "Supports CJIS Security Policy compliance evidence for cji media protection and transfer monitoring in Splunk dashboards and scheduled reports.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621)",
              "d": "Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs",
              "q": "index=removable sourcetype=\"usb:mount\" OR sourcetype=\"dlp:usb\" earliest=-30d\n| search tag=cjis OR host=\"CJIS-*\"\n| stats count by user, device_serial\n| sort - count",
              "m": "(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.",
              "z": "Stacked bar (control family), Table (users and events), Single value (open findings).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for AWS](https://splunkbase.splunk.com/app/1876), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain `amer_authorization_boundary.csv` with ATO, CUI, and CJIS scope flags. (2) Integrate POA&M exports via scheduled output and lookup refresh for ISSM workflows. (3) Separate LGPD and federal datasets with index- and role-level controls. (4) Quarterly control mapping workshop with ISSM, ISO, and privacy counsel.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=removable sourcetype=\"usb:mount\" OR sourcetype=\"dlp:usb\" earliest=-30d\n| search tag=cjis OR host=\"CJIS-*\"\n| stats count by user, device_serial\n| sort - count\n```\n\nUnderstanding this SPL\n\n**CJI media protection and transfer monitoring (CJIS Security Policy)** — Supports CJIS Security Policy compliance evidence for cji media protection and transfer monitoring in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: removable; **sourcetype**: usb:mount, dlp:usb. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=removable, sourcetype=\"usb:mount\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, device_serial** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**CJI media protection and transfer monitoring (CJIS Security Policy)** — Supports CJIS Security Policy compliance evidence for cji media protection and transfer monitoring in Splunk dashboards and scheduled reports.\n\nDocumented **Data sources**: Cloud audit trails, Windows Security Event Logs, vulnerability scan outputs, access control logs, encryption status reports, continuous monitoring feeds, incident response logs, privacy management platforms, law enforcement system logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for AWS (1876), Splunk Add-on for Microsoft Cloud Services (3110), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (control family), Table (users and events), Single value (open findings).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show privacy and sector rules across the Americas with records people can read without deep technical jargon.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CJIS Security Policy"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "aws"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CJIS",
                  "v": "v5.9.4",
                  "cl": "5.5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CJIS 5.5.1 (Access control - identification) is enforced — Splunk UC-22.32.25: CJI media protection and transfer monitoring.",
                  "ea": "Saved search 'UC-22.32.25' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 25,
            "none": 0
          }
        },
        {
          "i": "22.33",
          "n": "Middle East Cybersecurity",
          "u": [
            {
              "i": "22.33.1",
              "n": "National cybersecurity standard compliance monitoring (NESA UAE IAS)",
              "c": "low",
              "f": "advanced",
              "v": "Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for national cybersecurity standard compliance monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=audit sourcetype=\"nesa:control\" earliest=-365d\n| stats latest(evidence_status) as es by control_id, system\n| where es!=\"Collected\"",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"nesa:control\" earliest=-365d\n| stats latest(evidence_status) as es by control_id, system\n| where es!=\"Collected\"\n```\n\nUnderstanding this SPL\n\n**National cybersecurity standard compliance monitoring (NESA UAE IAS)** — Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for national cybersecurity standard compliance monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: nesa:control. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"nesa:control\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by control_id, system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where es!=\"Collected\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**National cybersecurity standard compliance monitoring (NESA UAE IAS)** — Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for national cybersecurity standard compliance monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NESA UAE IAS"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NESA IAS",
                  "v": "v2 (2020)",
                  "cl": "T4.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NESA IAS T4.3 (Audit trails, logging and information-system monitoring) is enforced — Splunk UC-22.33.1: National cybersecurity standard compliance monitoring.",
                  "ea": "Saved search 'UC-22.33.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.2",
              "n": "Critical infrastructure segmentation evidence (NESA UAE IAS)",
              "c": "critical",
              "f": "intermediate",
              "v": "Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for critical infrastructure segmentation evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=ot sourcetype=\"ics:alarm\" country=\"AE\" earliest=-24h\n| stats count by site, alarm_type\n| lookup nesa_critical_sites.csv site OUTPUT tier\n| where tier=\"Tier1\" AND count>100",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ot sourcetype=\"ics:alarm\" country=\"AE\" earliest=-24h\n| stats count by site, alarm_type\n| lookup nesa_critical_sites.csv site OUTPUT tier\n| where tier=\"Tier1\" AND count>100\n```\n\nUnderstanding this SPL\n\n**Critical infrastructure segmentation evidence (NESA UAE IAS)** — Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for critical infrastructure segmentation evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ot; **sourcetype**: ics:alarm. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ot, sourcetype=\"ics:alarm\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by site, alarm_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where tier=\"Tier1\" AND count>100` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Critical infrastructure segmentation evidence (NESA UAE IAS)** — Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for critical infrastructure segmentation evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NESA UAE IAS"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NESA IAS",
                  "v": "v2 (2020)",
                  "cl": "T3.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NESA IAS T3.2 (Access control management) is enforced — Splunk UC-22.33.2: Critical infrastructure segmentation evidence.",
                  "ea": "Saved search 'UC-22.33.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.3",
              "n": "aeCERT incident reporting timeline compliance (NESA UAE IAS)",
              "c": "high",
              "f": "beginner",
              "v": "Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for aecert incident reporting timeline compliance.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=grc sourcetype=\"aecert:incident\" earliest=-180d\n| where isnull(closed_at) AND reported_aecert=\"true\"",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"aecert:incident\" earliest=-180d\n| where isnull(closed_at) AND reported_aecert=\"true\"\n```\n\nUnderstanding this SPL\n\n**aeCERT incident reporting timeline compliance (NESA UAE IAS)** — Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for aecert incident reporting timeline compliance.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: aecert:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"aecert:incident\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(closed_at) AND reported_aecert=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**aeCERT incident reporting timeline compliance (NESA UAE IAS)** — Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for aecert incident reporting timeline compliance.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. 
**App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NESA UAE IAS"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NESA IAS",
                  "v": "v2 (2020)",
                  "cl": "T6.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NESA IAS T6.3 (Information security incident management) is enforced — Splunk UC-22.33.3: aeCERT incident reporting timeline compliance.",
                  "ea": "Saved search 'UC-22.33.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.4",
              "n": "Security assessment evidence tracking (NESA UAE IAS)",
              "c": "medium",
              "f": "expert",
              "v": "Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for security assessment evidence tracking.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=cloud sourcetype=\"azure:activity\" OR sourcetype=\"gws:admin\" region=\"me-central\" earliest=-7d\n| stats count by operation, principal\n| lookup nesa_cloud_controls.csv operation OUTPUT requires_approval\n| where requires_approval=\"true\"",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud sourcetype=\"azure:activity\" OR sourcetype=\"gws:admin\" region=\"me-central\" earliest=-7d\n| stats count by operation, principal\n| lookup nesa_cloud_controls.csv operation OUTPUT requires_approval\n| where requires_approval=\"true\"\n```\n\nUnderstanding this SPL\n\n**Security assessment evidence tracking (NESA UAE IAS)** — Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for security assessment evidence tracking.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud; **sourcetype**: azure:activity, gws:admin. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud, sourcetype=\"azure:activity\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by operation, principal** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where requires_approval=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Security assessment evidence tracking (NESA UAE IAS)** — Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for security assessment evidence tracking.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NESA UAE IAS"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NESA IAS",
                  "v": "v2 (2020)",
                  "cl": "T3.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NESA IAS T3.2 (Access control management) is enforced — Splunk UC-22.33.4: Security assessment evidence tracking.",
                  "ea": "Saved search 'UC-22.33.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.5",
              "n": "Cloud security configuration baseline monitoring (NESA UAE IAS)",
              "c": "critical",
              "f": "advanced",
              "v": "Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for cloud security configuration baseline monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=network sourcetype=\"pan:traffic\" tag=uae_critical earliest=-24h\n| stats sum(bytes) as b by app, dest_zone\n| where app=\"unknown-tcp\"",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:traffic\" tag=uae_critical earliest=-24h\n| stats sum(bytes) as b by app, dest_zone\n| where app=\"unknown-tcp\"\n```\n\nUnderstanding this SPL\n\n**Cloud security configuration baseline monitoring (NESA UAE IAS)** — Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for cloud security configuration baseline monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network; **sourcetype**: pan:traffic. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, sourcetype=\"pan:traffic\", time bounds, tags. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app, dest_zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where app=\"unknown-tcp\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.app | sort - agg_value\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cloud security configuration baseline monitoring (NESA UAE IAS)** — Aligns NESA UAE IAS supervisory expectations with measurable Splunk evidence for cloud security configuration baseline monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NESA UAE IAS"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t sum(All_Traffic.bytes_in) as agg_value from datamodel=Network_Traffic.All_Traffic by All_Traffic.app | sort - agg_value",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "NESA IAS",
                  "v": "v2 (2020)",
                  "cl": "T4.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NESA IAS T4.3 (Audit trails, logging and information-system monitoring) is enforced — Splunk UC-22.33.5: Cloud security configuration baseline monitoring.",
                  "ea": "Saved search 'UC-22.33.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.6",
              "n": "SAMA cybersecurity framework control testing (SAMA Cyber Security Framework)",
              "c": "critical",
              "f": "intermediate",
              "v": "Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for sama cybersecurity framework control testing.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=bank sourcetype=\"sama:csf:event\" earliest=-30d\n| eval reported=strptime(reported_sama_at,\"%Y-%m-%d %H:%M:%S\")\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| where isnull(reported) OR (reported-detected)>4*3600",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=bank sourcetype=\"sama:csf:event\" earliest=-30d\n| eval reported=strptime(reported_sama_at,\"%Y-%m-%d %H:%M:%S\")\n| eval detected=strptime(detected_at,\"%Y-%m-%d %H:%M:%S\")\n| where isnull(reported) OR (reported-detected)>4*3600\n```\n\nUnderstanding this SPL\n\n**SAMA cybersecurity framework control testing (SAMA Cyber Security Framework)** — Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for sama cybersecurity framework control testing.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: bank; **sourcetype**: sama:csf:event. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=bank, sourcetype=\"sama:csf:event\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **reported** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **detected** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(reported) OR (reported-detected)>4*3600` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SAMA cybersecurity framework control testing (SAMA Cyber Security Framework)** — Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for sama cybersecurity framework control testing.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SAMA Cyber Security Framework"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SAMA CSF",
                  "v": "v1.0 (2017)",
                  "cl": "3.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SAMA CSF 3.1.1 (Cyber security governance) is enforced — Splunk UC-22.33.6: SAMA cybersecurity framework control testing.",
                  "ea": "Saved search 'UC-22.33.6' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.7",
              "n": "Third-party cybersecurity assessment monitoring (SAMA Cyber Security Framework)",
              "c": "high",
              "f": "beginner",
              "v": "Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for third-party cybersecurity assessment monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=vendor sourcetype=\"sama:third_party\" earliest=-180d\n| stats latest(cyber_rating) as r by vendor_id\n| where r=\"Weak\"",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vendor sourcetype=\"sama:third_party\" earliest=-180d\n| stats latest(cyber_rating) as r by vendor_id\n| where r=\"Weak\"\n```\n\nUnderstanding this SPL\n\n**Third-party cybersecurity assessment monitoring (SAMA Cyber Security Framework)** — Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for third-party cybersecurity assessment monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vendor; **sourcetype**: sama:third_party. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vendor, sourcetype=\"sama:third_party\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vendor_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where r=\"Weak\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Third-party cybersecurity assessment monitoring (SAMA Cyber Security Framework)** — Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for third-party cybersecurity assessment monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SAMA Cyber Security Framework"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SAMA CSF",
                  "v": "v1.0 (2017)",
                  "cl": "3.3.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SAMA CSF 3.3.5 (Security monitoring) is enforced — Splunk UC-22.33.7: Third-party cybersecurity assessment monitoring.",
                  "ea": "Saved search 'UC-22.33.7' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.8",
              "n": "SAMA cybersecurity incident reporting timeline (SAMA Cyber Security Framework)",
              "c": "medium",
              "f": "expert",
              "v": "Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for sama cybersecurity incident reporting timeline.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=training sourcetype=\"sama:awareness\" earliest=-365d\n| stats latest(completion_pct) as p by employee_class\n| where p<95",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=training sourcetype=\"sama:awareness\" earliest=-365d\n| stats latest(completion_pct) as p by employee_class\n| where p<95\n```\n\nUnderstanding this SPL\n\n**SAMA cybersecurity incident reporting timeline (SAMA Cyber Security Framework)** — Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for sama cybersecurity incident reporting timeline.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: training; **sourcetype**: sama:awareness. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=training, sourcetype=\"sama:awareness\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by employee_class** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where p<95` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SAMA cybersecurity incident reporting timeline (SAMA Cyber Security Framework)** — Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for sama cybersecurity incident reporting timeline.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SAMA Cyber Security Framework"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SAMA CSF",
                  "v": "v1.0 (2017)",
                  "cl": "3.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SAMA CSF 3.1.1 (Cyber security governance) is enforced — Splunk UC-22.33.8: SAMA cybersecurity incident reporting timeline.",
                  "ea": "Saved search 'UC-22.33.8' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.9",
              "n": "Secure system development lifecycle evidence (SAMA Cyber Security Framework)",
              "c": "low",
              "f": "advanced",
              "v": "Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for secure system development lifecycle evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=sdlc sourcetype=\"jira:issue\" project=\"SEC\" earliest=-90d\n| search labels=\"*SAMA*\" AND status!=\"Done\"\n| stats count by assignee, status",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=sdlc sourcetype=\"jira:issue\" project=\"SEC\" earliest=-90d\n| search labels=\"*SAMA*\" AND status!=\"Done\"\n| stats count by assignee, status\n```\n\nUnderstanding this SPL\n\n**Secure system development lifecycle evidence (SAMA Cyber Security Framework)** — Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for secure system development lifecycle evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: sdlc; **sourcetype**: jira:issue. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=sdlc, sourcetype=\"jira:issue\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by assignee, status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Secure system development lifecycle evidence (SAMA Cyber Security Framework)** — Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for secure system development lifecycle evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SAMA Cyber Security Framework"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SAMA CSF",
                  "v": "v1.0 (2017)",
                  "cl": "3.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SAMA CSF 3.1.1 (Cyber security governance) is enforced — Splunk UC-22.33.9: Secure system development lifecycle evidence.",
                  "ea": "Saved search 'UC-22.33.9' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.10",
              "n": "Cybersecurity awareness program completion tracking (SAMA Cyber Security Framework)",
              "c": "critical",
              "f": "intermediate",
              "v": "Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for cybersecurity awareness program completion tracking.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=bank sourcetype=\"sama:secure_dev\" earliest=-730d\n| stats latest(gate_pass) as g by release_id\n| where g!=\"true\"",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=bank sourcetype=\"sama:secure_dev\" earliest=-730d\n| stats latest(gate_pass) as g by release_id\n| where g!=\"true\"\n```\n\nUnderstanding this SPL\n\n**Cybersecurity awareness program completion tracking (SAMA Cyber Security Framework)** — Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for cybersecurity awareness program completion tracking.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: bank; **sourcetype**: sama:secure_dev. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=bank, sourcetype=\"sama:secure_dev\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by release_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where g!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cybersecurity awareness program completion tracking (SAMA Cyber Security Framework)** — Aligns SAMA Cyber Security Framework supervisory expectations with measurable Splunk evidence for cybersecurity awareness program completion tracking.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SAMA Cyber Security Framework"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SAMA CSF",
                  "v": "v1.0 (2017)",
                  "cl": "3.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SAMA CSF 3.1.1 (Cyber security governance) is enforced — Splunk UC-22.33.10: Cybersecurity awareness program completion tracking.",
                  "ea": "Saved search 'UC-22.33.10' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.11",
              "n": "Personal data processing compliance monitoring (Saudi PDPL)",
              "c": "high",
              "f": "beginner",
              "v": "Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for personal data processing compliance monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=privacy sourcetype=\"pdpl:processing\" jurisdiction=\"SA\" earliest=-90d\n| stats dc(purpose_id) as purposes by data_subject_id, lawful_basis\n| where purposes>5 AND lawful_basis!=\"consent\"",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"pdpl:processing\" jurisdiction=\"SA\" earliest=-90d\n| stats dc(purpose_id) as purposes by data_subject_id, lawful_basis\n| where purposes>5 AND lawful_basis!=\"consent\"\n```\n\nUnderstanding this SPL\n\n**Personal data processing compliance monitoring (Saudi PDPL)** — Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for personal data processing compliance monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: pdpl:processing. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"pdpl:processing\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by data_subject_id, lawful_basis** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where purposes>5 AND lawful_basis!=\"consent\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Personal data processing compliance monitoring (Saudi PDPL)** — Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for personal data processing compliance monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR",
                "Saudi PDPL"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SA PDPL",
                  "v": "current",
                  "cl": "Art. 19",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SA PDPL Art. 19 (Data security and protection obligations) is enforced — Splunk UC-22.33.11: Personal data processing compliance monitoring.",
                  "ea": "Saved search 'UC-22.33.11' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.12",
              "n": "Data subject rights implementation evidence (Saudi PDPL)",
              "c": "critical",
              "f": "expert",
              "v": "Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for data subject rights implementation evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=app sourcetype=\"pdpl:dsar\" earliest=-180d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval fulfilled=strptime(fulfilled_at,\"%Y-%m-%d %H:%M:%S\")\n| where isnull(fulfilled) AND (now()-opened)>30*86400",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"pdpl:dsar\" earliest=-180d\n| eval opened=strptime(opened_at,\"%Y-%m-%d %H:%M:%S\")\n| eval fulfilled=strptime(fulfilled_at,\"%Y-%m-%d %H:%M:%S\")\n| where isnull(fulfilled) AND (now()-opened)>30*86400\n```\n\nUnderstanding this SPL\n\n**Data subject rights implementation evidence (Saudi PDPL)** — Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for data subject rights implementation evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app; **sourcetype**: pdpl:dsar. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, sourcetype=\"pdpl:dsar\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **fulfilled** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(fulfilled) AND (now()-opened)>30*86400` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Data subject rights implementation evidence (Saudi PDPL)** — Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for data subject rights implementation evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR",
                "Saudi PDPL"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SA PDPL",
                  "v": "current",
                  "cl": "Art. 19",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SA PDPL Art. 19 (Data security and protection obligations) is enforced — Splunk UC-22.33.12: Data subject rights implementation evidence.",
                  "ea": "Saved search 'UC-22.33.12' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.13",
              "n": "Cross-border personal data transfer controls (Saudi PDPL)",
              "c": "low",
              "f": "advanced",
              "v": "Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for cross-border personal data transfer controls.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=grc sourcetype=\"pdpl:transfer\" earliest=-365d\n| stats latest(mechanism) as m by dest_country\n| where m!=\"approved_mechanism\"",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"pdpl:transfer\" earliest=-365d\n| stats latest(mechanism) as m by dest_country\n| where m!=\"approved_mechanism\"\n```\n\nUnderstanding this SPL\n\n**Cross-border personal data transfer controls (Saudi PDPL)** — Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for cross-border personal data transfer controls.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: pdpl:transfer. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"pdpl:transfer\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest_country** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where m!=\"approved_mechanism\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Cross-border personal data transfer controls (Saudi PDPL)** — Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for cross-border personal data transfer controls.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR",
                "Saudi PDPL"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SA PDPL",
                  "v": "current",
                  "cl": "Art. 19",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SA PDPL Art. 19 (Data security and protection obligations) is enforced — Splunk UC-22.33.13: Cross-border personal data transfer controls.",
                  "ea": "Saved search 'UC-22.33.13' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.14",
              "n": "Personal data breach notification to SDAIA tracking (Saudi PDPL)",
              "c": "critical",
              "f": "intermediate",
              "v": "Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for personal data breach notification to sdaia tracking.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=privacy sourcetype=\"pdpl:breach\" earliest=-365d\n| where isnull(sdaia_notified_at) AND severity=\"major\"",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"pdpl:breach\" earliest=-365d\n| where isnull(sdaia_notified_at) AND severity=\"major\"\n```\n\nUnderstanding this SPL\n\n**Personal data breach notification to SDAIA tracking (Saudi PDPL)** — Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for personal data breach notification to sdaia tracking.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: pdpl:breach. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"pdpl:breach\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnull(sdaia_notified_at) AND severity=\"major\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Personal data breach notification to SDAIA tracking (Saudi PDPL)** — Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for personal data breach notification to sdaia tracking.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR",
                "Saudi PDPL"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SA PDPL",
                  "v": "current",
                  "cl": "Art. 20",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SA PDPL Art. 20 (Personal data breach notification) is enforced — Splunk UC-22.33.14: Personal data breach notification to SDAIA tracking.",
                  "ea": "Saved search 'UC-22.33.14' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.15",
              "n": "Data protection impact assessment completion monitoring (Saudi PDPL)",
              "c": "high",
              "f": "beginner",
              "v": "Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for data protection impact assessment completion monitoring.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=privacy sourcetype=\"pdpl:dpia\" earliest=-730d\n| stats latest(status) as st by processing_id\n| where st!=\"signed_off\"",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=privacy sourcetype=\"pdpl:dpia\" earliest=-730d\n| stats latest(status) as st by processing_id\n| where st!=\"signed_off\"\n```\n\nUnderstanding this SPL\n\n**Data protection impact assessment completion monitoring (Saudi PDPL)** — Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for data protection impact assessment completion monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: privacy; **sourcetype**: pdpl:dpia. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=privacy, sourcetype=\"pdpl:dpia\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by processing_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st!=\"signed_off\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Data protection impact assessment completion monitoring (Saudi PDPL)** — Aligns Saudi PDPL supervisory expectations with measurable Splunk evidence for data protection impact assessment completion monitoring.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR",
                "Saudi PDPL"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SA PDPL",
                  "v": "current",
                  "cl": "Art. 19",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SA PDPL Art. 19 (Data security and protection obligations) is enforced — Splunk UC-22.33.15: Data protection impact assessment completion monitoring.",
                  "ea": "Saved search 'UC-22.33.15' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.16",
              "n": "QCB cybersecurity framework compliance for financial institutions (Qatar Central Bank cybersecurity)",
              "c": "medium",
              "f": "expert",
              "v": "Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for qcb cybersecurity framework compliance for financial institutions.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=bank sourcetype=\"qcb:cyber:metric\" earliest=-30d\n| stats latest(value) as v by metric_name, branch\n| lookup qcb_thresholds.csv metric_name OUTPUT max_allowed\n| where v>max_allowed",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=bank sourcetype=\"qcb:cyber:metric\" earliest=-30d\n| stats latest(value) as v by metric_name, branch\n| lookup qcb_thresholds.csv metric_name OUTPUT max_allowed\n| where v>max_allowed\n```\n\nUnderstanding this SPL\n\n**QCB cybersecurity framework compliance for financial institutions (Qatar Central Bank cybersecurity)** — Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for qcb cybersecurity framework compliance for financial institutions.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: bank; **sourcetype**: qcb:cyber:metric. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=bank, sourcetype=\"qcb:cyber:metric\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by metric_name, branch** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where v>max_allowed` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**QCB cybersecurity framework compliance for financial institutions (Qatar Central Bank cybersecurity)** — Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for qcb cybersecurity framework compliance for financial institutions.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Qatar Central Bank cybersecurity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "QCB Cyber",
                  "v": "2018",
                  "cl": "§3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that QCB Cyber §3.1 (Cybersecurity governance and strategy) is enforced — Splunk UC-22.33.16: QCB cybersecurity framework compliance for financial institutions.",
                  "ea": "Saved search 'UC-22.33.16' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.17",
              "n": "QCB cybersecurity incident reporting evidence (Qatar Central Bank cybersecurity)",
              "c": "low",
              "f": "advanced",
              "v": "Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for qcb cybersecurity incident reporting evidence.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=grc sourcetype=\"qcb:incident\" earliest=-180d\n| eval hrs=(now()-strptime(detected_at,\"%Y-%m-%d %H:%M:%S\"))/3600\n| where isnull(qcb_filed_at) AND hrs>4",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"qcb:incident\" earliest=-180d\n| eval hrs=(now()-strptime(detected_at,\"%Y-%m-%d %H:%M:%S\"))/3600\n| where isnull(qcb_filed_at) AND hrs>4\n```\n\nUnderstanding this SPL\n\n**QCB cybersecurity incident reporting evidence (Qatar Central Bank cybersecurity)** — Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for qcb cybersecurity incident reporting evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: qcb:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"qcb:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **hrs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(qcb_filed_at) AND hrs>4` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**QCB cybersecurity incident reporting evidence (Qatar Central Bank cybersecurity)** — Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for qcb cybersecurity incident reporting evidence.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Qatar Central Bank cybersecurity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "QCB Cyber",
                  "v": "2018",
                  "cl": "§6.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that QCB Cyber §6.2 (Cyber incident management and response) is enforced — Splunk UC-22.33.17: QCB cybersecurity incident reporting evidence.",
                  "ea": "Saved search 'UC-22.33.17' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.33.18",
              "n": "Information security governance metrics (Qatar Central Bank cybersecurity)",
              "c": "critical",
              "f": "intermediate",
              "v": "Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for information security governance metrics.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=board sourcetype=\"qcb:isms\" earliest=-365d\n| stats latest(review_status) as rs by committee\n| where rs!=\"Complete\"",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=board sourcetype=\"qcb:isms\" earliest=-365d\n| stats latest(review_status) as rs by committee\n| where rs!=\"Complete\"\n```\n\nUnderstanding this SPL\n\n**Information security governance metrics (Qatar Central Bank cybersecurity)** — Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for information security governance metrics.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: board; **sourcetype**: qcb:isms. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=board, sourcetype=\"qcb:isms\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by committee** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where rs!=\"Complete\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Information security governance metrics (Qatar Central Bank cybersecurity)** — Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for information security governance metrics.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Qatar Central Bank cybersecurity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "QCB Cyber",
                  "v": "2018",
                  "cl": "§3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that QCB Cyber §3.1 (Cybersecurity governance and strategy) is enforced — Splunk UC-22.33.18: Information security governance metrics.",
                  "ea": "Saved search 'UC-22.33.18' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
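Optional KPI sketch for UC-22.33.18 (a minimal example assuming the same index=board / sourcetype="qcb:isms" feed and review_status field as the saved search above). It converts the per-committee status into a single completion percentage suitable for the weekly leadership digest mentioned in the implementation notes.

```spl
index=board sourcetype="qcb:isms" earliest=-365d
| stats latest(review_status) as rs by committee
| eval complete=if(rs="Complete",1,0)
| stats count as committees sum(complete) as completed
| eval completion_pct=round(100*completed/committees,1)
```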
            {
              "i": "22.33.19",
              "n": "Third-party risk management for banks (Qatar Central Bank cybersecurity)",
              "c": "critical",
              "f": "beginner",
              "v": "Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for third-party risk management for banks.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=vendor sourcetype=\"qcb:third_party\" earliest=-180d\n| stats latest(tier) as t by vendor_name\n| where t=\"High\"",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vendor sourcetype=\"qcb:third_party\" earliest=-180d\n| stats latest(tier) as t by vendor_name\n| where t=\"High\"\n```\n\nUnderstanding this SPL\n\n**Third-party risk management for banks (Qatar Central Bank cybersecurity)** — Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for third-party risk management for banks.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vendor; **sourcetype**: qcb:third_party. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vendor, sourcetype=\"qcb:third_party\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vendor_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where t=\"High\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Third-party risk management for banks (Qatar Central Bank cybersecurity)** — Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for third-party risk management for banks.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Qatar Central Bank cybersecurity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "QCB Cyber",
                  "v": "2018",
                  "cl": "§3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that QCB Cyber §3.1 (Cybersecurity governance and strategy) is enforced — Splunk UC-22.33.19: Third-party risk management for banks.",
                  "ea": "Saved search 'UC-22.33.19' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
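Optional evidence-table sketch for UC-22.33.19 (assumes the same index=vendor / sourcetype="qcb:third_party" feed; no extra lookups). It adds the date each high-tier vendor was last reported on, which makes the table easier to hand to an assessor.

```spl
index=vendor sourcetype="qcb:third_party" earliest=-180d
| stats latest(tier) as tier latest(_time) as last_seen by vendor_name
| where tier="High"
| eval last_seen=strftime(last_seen,"%Y-%m-%d")
| sort vendor_name
```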
            {
              "i": "22.33.20",
              "n": "Business continuity evidence for financial services (Qatar Central Bank cybersecurity)",
              "c": "medium",
              "f": "expert",
              "v": "Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for business continuity evidence for financial services.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence",
              "q": "index=bank sourcetype=\"qcb:bcp\" earliest=-730d\n| stats latest(test_outcome) as o by scenario\n| where o!=\"Pass\"",
              "m": "(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.",
              "z": "Bar chart (sector), Table (top flows), Single value (policy denies).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [CIM: Network_Traffic](https://docs.splunk.com/Documentation/CIM/latest/User/Network_Traffic)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalize bilingual field extractions in props.conf where required. (2) Map IP ranges and business units to national critical infrastructure and banking registers. (3) Route SAMA, QCB, NESA, and PDPL alerts to regional bilingual runbooks. (4) Weekly leadership digest with regulator-specific views.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=bank sourcetype=\"qcb:bcp\" earliest=-730d\n| stats latest(test_outcome) as o by scenario\n| where o!=\"Pass\"\n```\n\nUnderstanding this SPL\n\n**Business continuity evidence for financial services (Qatar Central Bank cybersecurity)** — Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for business continuity evidence for financial services.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: bank; **sourcetype**: qcb:bcp. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=bank, sourcetype=\"qcb:bcp\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by scenario** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where o!=\"Pass\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Business continuity evidence for financial services (Qatar Central Bank cybersecurity)** — Aligns Qatar Central Bank cybersecurity supervisory expectations with measurable Splunk evidence for business continuity evidence for financial services.\n\nDocumented **Data sources**: Windows Security Event Logs, network infrastructure logs, cloud audit trails, national cybersecurity platform feeds, financial system logs, data protection compliance records, incident response logs, security assessment evidence. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk ITSI (1841), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (sector), Table (top flows), Single value (policy denies).",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show national cyber and financial controls for the region are met with the logs and reports you already retain.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "Qatar Central Bank cybersecurity"
              ],
              "a": [
                "Network_Traffic"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic by All_Traffic.action, All_Traffic.src, All_Traffic.dest, All_Traffic.dest_port | sort - count",
              "e": [
                "itsi",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "QCB Cyber",
                  "v": "2018",
                  "cl": "§3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that QCB Cyber §3.1 (Cybersecurity governance and strategy) is enforced — Splunk UC-22.33.20: Business continuity evidence for financial services.",
                  "ea": "Saved search 'UC-22.33.20' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
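Optional staleness sketch for UC-22.33.20 (assumes the same index=bank / sourcetype="qcb:bcp" feed and test_outcome field as the saved search above). Besides failed scenarios it also flags scenarios that have not been exercised in the last 12 months, since both gaps matter for continuity evidence.

```spl
index=bank sourcetype="qcb:bcp" earliest=-730d
| stats latest(test_outcome) as outcome latest(_time) as last_test by scenario
| eval days_since_test=round((now()-last_test)/86400,0)
| where outcome!="Pass" OR days_since_test>365
| table scenario, outcome, days_since_test
```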
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 20,
            "none": 0
          }
        },
        {
          "i": "22.34",
          "n": "SWIFT Customer Security Programme (CSP)",
          "u": [
            {
              "i": "22.34.1",
              "n": "SWIFT secure zone logical isolation monitoring (SWIFT CSCF mandatory)",
              "c": "medium",
              "f": "intermediate",
              "v": "Produces SWIFT Customer Security Programme evidence for swift secure zone logical isolation monitoring ahead of independent assessment.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs",
              "q": "index=swift sourcetype=\"swift:zone:flow\" OR index=network sourcetype=\"pan:traffic\" tag=swift earliest=-24h\n| stats sum(bytes) as b by src_zone, dest_zone\n| where src_zone!=\"SWIFT_DMZ\" AND dest_zone=\"SWIFT_SECURE\"\n| sort - b",
              "m": "(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.",
              "z": "Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=swift sourcetype=\"swift:zone:flow\" OR index=network sourcetype=\"pan:traffic\" tag=swift earliest=-24h\n| stats sum(bytes) as b by src_zone, dest_zone\n| where src_zone!=\"SWIFT_DMZ\" AND dest_zone=\"SWIFT_SECURE\"\n| sort - b\n```\n\nUnderstanding this SPL\n\n**SWIFT secure zone logical isolation monitoring (SWIFT CSCF mandatory)** — Produces SWIFT Customer Security Programme evidence for swift secure zone logical isolation monitoring ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: swift, network; **sourcetype**: swift:zone:flow, pan:traffic. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=swift, index=network, sourcetype=\"swift:zone:flow\", time bounds…. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by src_zone, dest_zone** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where src_zone!=\"SWIFT_DMZ\" AND dest_zone=\"SWIFT_SECURE\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**SWIFT secure zone logical isolation monitoring (SWIFT CSCF mandatory)** — Produces SWIFT Customer Security Programme evidence for swift secure zone logical isolation monitoring ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show SWIFT security programme expectations are met for your payment zone, operators, and controls before the independent check.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SWIFT CSP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 6.4 (Logging and monitoring) is enforced — Splunk UC-22.34.1: SWIFT secure zone logical isolation monitoring.",
                  "ea": "Saved search 'UC-22.34.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
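Optional trending sketch for UC-22.34.1 (assumes the same src_zone, dest_zone, and bytes extractions as the saved search above). An hourly timechart of non-DMZ traffic entering the secure zone makes isolation drift visible on a dashboard rather than only at alert time.

```spl
((index=swift sourcetype="swift:zone:flow") OR (index=network sourcetype="pan:traffic" tag=swift)) earliest=-7d
| where dest_zone="SWIFT_SECURE" AND src_zone!="SWIFT_DMZ"
| timechart span=1h sum(bytes) as bytes by src_zone
```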
            {
              "i": "22.34.2",
              "n": "Operating system privileged account control within SWIFT zone (SWIFT CSCF mandatory)",
              "c": "critical",
              "f": "beginner",
              "v": "Produces SWIFT Customer Security Programme evidence for operating system privileged account control within swift zone ahead of independent assessment.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs",
              "q": "index=wineventlog sourcetype=\"WinEventLog:Security\" host=\"SWIFT-*\" EventCode IN (4720,4722,4738) earliest=-24h\n| stats values(TargetUserName) as tgt by MemberName, EventCode\n| lookup swift_os_account_allowlist.csv tgt OUTPUT allowed\n| where isnull(allowed)",
              "m": "(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.",
              "z": "Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=wineventlog sourcetype=\"WinEventLog:Security\" host=\"SWIFT-*\" EventCode IN (4720,4722,4738) earliest=-24h\n| stats values(TargetUserName) as tgt by MemberName, EventCode\n| lookup swift_os_account_allowlist.csv tgt OUTPUT allowed\n| where isnull(allowed)\n```\n\nUnderstanding this SPL\n\n**Operating system privileged account control within SWIFT zone (SWIFT CSCF mandatory)** — Produces SWIFT Customer Security Programme evidence for operating system privileged account control within swift zone ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: wineventlog; **sourcetype**: WinEventLog:Security; **host** filter: SWIFT-*. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=wineventlog, sourcetype=\"WinEventLog:Security\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by MemberName, EventCode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(allowed)` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Operating system privileged account control within SWIFT zone (SWIFT CSCF mandatory)** — Produces SWIFT Customer Security Programme evidence for operating system privileged account control within swift zone ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show SWIFT security programme expectations are met for your payment zone, operators, and controls before the independent check.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SWIFT CSP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 1.1 (SWIFT environment protection) is enforced — Splunk UC-22.34.2: Operating system privileged account control within SWIFT zone.",
                  "ea": "Saved search 'UC-22.34.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
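Optional validation sketch for UC-22.34.2 (assumes the swift_os_account_allowlist.csv lookup referenced in the saved search has a tgt column). Run it before relying on the detection: an empty or duplicate-ridden allowlist silently turns every account change into an alert, or none at all.

```spl
| inputlookup swift_os_account_allowlist.csv
| stats count as rows dc(tgt) as distinct_accounts
| eval duplicate_rows=rows-distinct_accounts
```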
            {
              "i": "22.34.3",
              "n": "Physical and logical access correlation for SWIFT operators (SWIFT CSCF mandatory)",
              "c": "critical",
              "f": "expert",
              "v": "Produces SWIFT Customer Security Programme evidence for physical and logical access correlation for swift operators ahead of independent assessment.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs",
              "q": "index=pacs sourcetype=\"badge:swipe\" building=\"SWIFT_OPS\" earliest=-24h\n| join type=left user [| search index=swift sourcetype=\"swift:operator:login\" earliest=-24d | stats latest(_time) as last_swift by user]\n| eval gap=now()-last_swift\n| where gap>7776000\n| table user, last_swift",
              "m": "(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.",
              "z": "Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pacs sourcetype=\"badge:swipe\" building=\"SWIFT_OPS\" earliest=-24h\n| join type=left user [| search index=swift sourcetype=\"swift:operator:login\" earliest=-24d | stats latest(_time) as last_swift by user]\n| eval gap=now()-last_swift\n| where gap>7776000\n| table user, last_swift\n```\n\nUnderstanding this SPL\n\n**Physical and logical access correlation for SWIFT operators (SWIFT CSCF mandatory)** — Produces SWIFT Customer Security Programme evidence for physical and logical access correlation for swift operators ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pacs; **sourcetype**: badge:swipe. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pacs, sourcetype=\"badge:swipe\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap>7776000` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Physical and logical access correlation for SWIFT operators (SWIFT CSCF mandatory)**): table user, last_swift\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Physical and logical access correlation for SWIFT operators (SWIFT CSCF mandatory)** — Produces SWIFT Customer Security Programme evidence for physical and logical access correlation for swift operators ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show SWIFT security programme expectations are met for your payment zone, operators, and controls before the independent check.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SWIFT CSP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 6.4 (Logging and monitoring) is enforced — Splunk UC-22.34.3: Physical and logical access correlation for SWIFT operators.",
                  "ea": "Saved search 'UC-22.34.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.34.4",
              "n": "Operator MFA and session integrity monitoring (SWIFT CSCF mandatory)",
              "c": "high",
              "f": "advanced",
              "v": "Produces SWIFT Customer Security Programme evidence for operator mfa and session integrity monitoring ahead of independent assessment.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs",
              "q": "index=cyberark sourcetype=\"cyberark:pta\" OR index=o365 sourcetype=\"ms:o365:management\" earliest=-7d\n| stats count by user, action, src\n| search user=\"swift*\" (action=\"Logon\" OR Operation=\"UserLoggedIn\")\n| eval mfa=if(match(_raw,\"MFA\"),1,0)\n| where mfa=0",
              "m": "(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.",
              "z": "Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cyberark sourcetype=\"cyberark:pta\" OR index=o365 sourcetype=\"ms:o365:management\" earliest=-7d\n| stats count by user, action, src\n| search user=\"swift*\" (action=\"Logon\" OR Operation=\"UserLoggedIn\")\n| eval mfa=if(match(_raw,\"MFA\"),1,0)\n| where mfa=0\n```\n\nUnderstanding this SPL\n\n**Operator MFA and session integrity monitoring (SWIFT CSCF mandatory)** — Produces SWIFT Customer Security Programme evidence for operator mfa and session integrity monitoring ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cyberark, o365; **sourcetype**: cyberark:pta, ms:o365:management. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cyberark, index=o365, sourcetype=\"cyberark:pta\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user, action, src** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **mfa** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mfa=0` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.action, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Operator MFA and session integrity monitoring (SWIFT CSCF mandatory)** — Produces SWIFT Customer Security Programme evidence for operator mfa and session integrity monitoring ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show SWIFT security programme expectations are met for your payment zone, operators, and controls before the independent check.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SWIFT CSP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user, Authentication.action, Authentication.src | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 6.4 (Logging and monitoring) is enforced — Splunk UC-22.34.4: Operator MFA and session integrity monitoring.",
                  "ea": "Saved search 'UC-22.34.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.34.5",
              "n": "System hardening compliance within the SWIFT secure zone (SWIFT CSCF mandatory)",
              "c": "medium",
              "f": "intermediate",
              "v": "Produces SWIFT Customer Security Programme evidence for system hardening compliance within the swift secure zone ahead of independent assessment.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs",
              "q": "index=os sourcetype=package OR index=wineventlog sourcetype=\"WinEventLog:Security\" host=\"SWIFT-*\" earliest=-24h\n| stats latest(version) as ver by package, host\n| lookup swift_hardening_baseline.csv package OUTPUT min_version\n| where ver<min_version",
              "m": "(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.",
              "z": "Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=os sourcetype=package OR index=wineventlog sourcetype=\"WinEventLog:Security\" host=\"SWIFT-*\" earliest=-24h\n| stats latest(version) as ver by package, host\n| lookup swift_hardening_baseline.csv package OUTPUT min_version\n| where ver<min_version\n```\n\nUnderstanding this SPL\n\n**System hardening compliance within the SWIFT secure zone (SWIFT CSCF mandatory)** — Produces SWIFT Customer Security Programme evidence for system hardening compliance within the swift secure zone ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: os, wineventlog; **sourcetype**: package, WinEventLog:Security; **host** filter: SWIFT-*. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=os, index=wineventlog, sourcetype=package, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by package, host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where ver<min_version` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**System hardening compliance within the SWIFT secure zone (SWIFT CSCF mandatory)** — Produces SWIFT Customer Security Programme evidence for system hardening compliance within the swift secure zone ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show SWIFT security programme expectations are met for your payment zone, operators, and controls before the independent check.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SWIFT CSP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 1.1 (SWIFT environment protection) is enforced — Splunk UC-22.34.5: System hardening compliance within the SWIFT secure zone.",
                  "ea": "Saved search 'UC-22.34.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.34.6",
              "n": "Back-office to SWIFT zone data flow security (SWIFT CSCF advisory)",
              "c": "low",
              "f": "beginner",
              "v": "Produces SWIFT Customer Security Programme evidence for back-office to swift zone data flow security ahead of independent assessment.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs",
              "q": "index=app sourcetype=\"backoffice:transfer\" OR index=mq sourcetype=\"ibm:mq:error\" earliest=-24h\n| search dest_queue=\"TO.SWIFT*\" OR swift_zone=\"true\"\n| stats count by user, direction, queue\n| where direction=\"inbound\" AND match(queue,\"BACKOFFICE\")\n| sort - count",
              "m": "(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.",
              "z": "Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app sourcetype=\"backoffice:transfer\" OR index=mq sourcetype=\"ibm:mq:error\" earliest=-24h\n| search dest_queue=\"TO.SWIFT*\" OR swift_zone=\"true\"\n| stats count by user, direction, queue\n| where direction=\"inbound\" AND match(queue,\"BACKOFFICE\")\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Back-office to SWIFT zone data flow security (SWIFT CSCF advisory)** — Produces SWIFT Customer Security Programme evidence for back-office to swift zone data flow security ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app, mq; **sourcetype**: backoffice:transfer, ibm:mq:error. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, index=mq, sourcetype=\"backoffice:transfer\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by user, direction, queue** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where direction=\"inbound\" AND match(queue,\"BACKOFFICE\")` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Back-office to SWIFT zone data flow security (SWIFT CSCF advisory)** — Produces SWIFT Customer Security Programme evidence for back-office to swift zone data flow security ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show SWIFT security programme expectations are met for your payment zone, operators, and controls before the independent check.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SWIFT CSP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.user | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 1.1 (SWIFT environment protection) is enforced — Splunk UC-22.34.6: Back-office to SWIFT zone data flow security.",
                  "ea": "Saved search 'UC-22.34.6' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.34.7",
              "n": "External transmission data protection monitoring (SWIFT CSCF advisory)",
              "c": "critical",
              "f": "expert",
              "v": "Produces SWIFT Customer Security Programme evidence for external transmission data protection monitoring ahead of independent assessment.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs",
              "q": "index=network sourcetype=\"pan:traffic\" OR index=ssl sourcetype=\"stream:tls\" earliest=-24h\n| stats dc(cipher) as ciphers, values(ssl_version) as tls by dest_ip, dest_port\n| where dest_port=443 AND mvfind(tls,\"TLS1_0\")>=0\n| lookup swift_external_endpoints.csv dest_ip OUTPUT approved\n| where approved!=\"true\"",
              "m": "(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.",
              "z": "Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=network sourcetype=\"pan:traffic\" OR index=ssl sourcetype=\"stream:tls\" earliest=-24h\n| stats dc(cipher) as ciphers, values(ssl_version) as tls by dest_ip, dest_port\n| where dest_port=443 AND mvfind(tls,\"TLS1_0\")>=0\n| lookup swift_external_endpoints.csv dest_ip OUTPUT approved\n| where approved!=\"true\"\n```\n\nUnderstanding this SPL\n\n**External transmission data protection monitoring (SWIFT CSCF advisory)** — Produces SWIFT Customer Security Programme evidence for external transmission data protection monitoring ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: network, ssl; **sourcetype**: pan:traffic, stream:tls. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=network, index=ssl, sourcetype=\"pan:traffic\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by dest_ip, dest_port** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where dest_port=443 AND mvfind(tls,\"TLS1_0\")>=0` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where approved!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**External transmission data protection monitoring (SWIFT CSCF advisory)** — Produces SWIFT Customer Security Programme evidence for external transmission data protection monitoring ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show SWIFT security programme expectations are met for your payment zone, operators, and controls before the independent check.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR",
                "SWIFT CSP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.dest | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 6.4 (Logging and monitoring) is enforced — Splunk UC-22.34.7: External transmission data protection monitoring.",
                  "ea": "Saved search 'UC-22.34.7' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.34.8",
              "n": "Operator screening evidence aggregation (SWIFT CSCF advisory)",
              "c": "high",
              "f": "advanced",
              "v": "Produces SWIFT Customer Security Programme evidence for operator screening evidence aggregation ahead of independent assessment.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs",
              "q": "index=hr sourcetype=\"swift:operator:hr\" earliest=-365d\n| stats latest(screening_status) as st, latest(screening_date) as sd by operator_id\n| where st!=\"Cleared\" OR sd<relative_time(now(),\"-365d@d\")\n| table operator_id, st, sd",
              "m": "(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.",
              "z": "Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=\"swift:operator:hr\" earliest=-365d\n| stats latest(screening_status) as st, latest(screening_date) as sd by operator_id\n| where st!=\"Cleared\" OR sd<relative_time(now(),\"-365d@d\")\n| table operator_id, st, sd\n```\n\nUnderstanding this SPL\n\n**Operator screening evidence aggregation (SWIFT CSCF advisory)** — Produces SWIFT Customer Security Programme evidence for operator screening evidence aggregation ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: swift:operator:hr. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=\"swift:operator:hr\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by operator_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st!=\"Cleared\" OR sd<relative_time(now(),\"-365d@d\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Operator screening evidence aggregation (SWIFT CSCF advisory)**): table operator_id, st, sd\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Operator screening evidence aggregation (SWIFT CSCF advisory)** — Produces SWIFT Customer Security Programme evidence for operator screening evidence aggregation ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show SWIFT security programme expectations are met for your payment zone, operators, and controls before the independent check.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SWIFT CSP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 6.4 (Logging and monitoring) is enforced — Splunk UC-22.34.8: Operator screening evidence aggregation.",
                  "ea": "Saved search 'UC-22.34.8' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.34.9",
              "n": "Intrusion detection coverage within SWIFT environment (SWIFT CSCF advisory)",
              "c": "critical",
              "f": "intermediate",
              "v": "Produces SWIFT Customer Security Programme evidence for intrusion detection coverage within swift environment ahead of independent assessment.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs",
              "q": "index=ids sourcetype IN (\"suricata:event\",\"pan:threat\") tag=swift earliest=-24h\n| stats count by signature, dest_ip\n| lookup swift_vlan_map.csv dest_ip OUTPUT in_swift_vlan\n| where in_swift_vlan=\"true\"\n| sort - count",
              "m": "(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.",
              "z": "Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ids sourcetype IN (\"suricata:event\",\"pan:threat\") tag=swift earliest=-24h\n| stats count by signature, dest_ip\n| lookup swift_vlan_map.csv dest_ip OUTPUT in_swift_vlan\n| where in_swift_vlan=\"true\"\n| sort - count\n```\n\nUnderstanding this SPL\n\n**Intrusion detection coverage within SWIFT environment (SWIFT CSCF advisory)** — Produces SWIFT Customer Security Programme evidence for intrusion detection coverage within swift environment ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ids.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ids, time bounds, tags. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by signature, dest_ip** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_swift_vlan=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.signature, Authentication.dest | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Intrusion detection coverage within SWIFT environment (SWIFT CSCF advisory)** — Produces SWIFT Customer Security Programme evidence for intrusion detection coverage within swift environment ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show SWIFT security programme expectations are met for your payment zone, operators, and controls before the independent check.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SWIFT CSP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.signature, Authentication.dest | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 1.1 (SWIFT environment protection) is enforced — Splunk UC-22.34.9: Intrusion detection coverage within SWIFT environment.",
                  "ea": "Saved search 'UC-22.34.9' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.34.10",
              "n": "Annual KYC-SA attestation evidence pack (SWIFT KYC-SA)",
              "c": "low",
              "f": "beginner",
              "v": "Produces SWIFT Customer Security Programme evidence for annual kyc-sa attestation evidence pack ahead of independent assessment.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs",
              "q": "index=grc sourcetype=\"swift:kycsa\" earliest=-400d\n| stats latest(attestation_status) as st, latest(attestation_year) as yr by bic\n| where st!=\"Submitted\" AND yr=strftime(now(),\"%Y\")\n| table bic, st, yr",
              "m": "(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.",
              "z": "Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=\"swift:kycsa\" earliest=-400d\n| stats latest(attestation_status) as st, latest(attestation_year) as yr by bic\n| where st!=\"Submitted\" AND yr=strftime(now(),\"%Y\")\n| table bic, st, yr\n```\n\nUnderstanding this SPL\n\n**Annual KYC-SA attestation evidence pack (SWIFT KYC-SA)** — Produces SWIFT Customer Security Programme evidence for annual kyc-sa attestation evidence pack ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: swift:kycsa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=\"swift:kycsa\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by bic** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where st!=\"Submitted\" AND yr=strftime(now(),\"%Y\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Annual KYC-SA attestation evidence pack (SWIFT KYC-SA)**): table bic, st, yr\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Annual KYC-SA attestation evidence pack (SWIFT KYC-SA)** — Produces SWIFT Customer Security Programme evidence for annual kyc-sa attestation evidence pack ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show SWIFT security programme expectations are met for your payment zone, operators, and controls before the independent check.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SWIFT CSP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 1.1 (SWIFT environment protection) is enforced — Splunk UC-22.34.10: Annual KYC-SA attestation evidence pack.",
                  "ea": "Saved search 'UC-22.34.10' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.34.11",
              "n": "Independent assessment finding remediation tracking (SWIFT KYC-SA)",
              "c": "critical",
              "f": "expert",
              "v": "Produces SWIFT Customer Security Programme evidence for independent assessment finding remediation tracking ahead of independent assessment.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs",
              "q": "index=audit sourcetype=\"swift:independent_assessment\" earliest=-730d\n| stats latest(severity) as sev, latest(remediation_status) as rs by finding_id\n| where rs!=\"Closed\" AND match(sev,\"High|Critical\")\n| table finding_id, sev, rs",
              "m": "(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.",
              "z": "Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit sourcetype=\"swift:independent_assessment\" earliest=-730d\n| stats latest(severity) as sev, latest(remediation_status) as rs by finding_id\n| where rs!=\"Closed\" AND match(sev,\"High|Critical\")\n| table finding_id, sev, rs\n```\n\nUnderstanding this SPL\n\n**Independent assessment finding remediation tracking (SWIFT KYC-SA)** — Produces SWIFT Customer Security Programme evidence for independent assessment finding remediation tracking ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit; **sourcetype**: swift:independent_assessment. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit, sourcetype=\"swift:independent_assessment\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by finding_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where rs!=\"Closed\" AND match(sev,\"High|Critical\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Independent assessment finding remediation tracking (SWIFT KYC-SA)**): table finding_id, sev, rs\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Independent assessment finding remediation tracking (SWIFT KYC-SA)** — Produces SWIFT Customer Security Programme evidence for independent assessment finding remediation tracking ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show SWIFT security programme expectations are met for your payment zone, operators, and controls before the independent check.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SWIFT CSP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 6.4 (Logging and monitoring) is enforced — Splunk UC-22.34.11: Independent assessment finding remediation tracking.",
                  "ea": "Saved search 'UC-22.34.11' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.34.12",
              "n": "Counterparty CSP score monitoring for correspondent risk (SWIFT KYC-SA)",
              "c": "high",
              "f": "advanced",
              "v": "Produces SWIFT Customer Security Programme evidence for counterparty csp score monitoring for correspondent risk ahead of independent assessment.",
              "t": "Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621)",
              "d": "SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs",
              "q": "index=vendor sourcetype=\"swift:csp_score\" earliest=-30d\n| stats latest(score) as s, latest(trend) as t by counterparty_bic\n| where s<75 OR t=\"down\"\n| sort s\n| head 200",
              "m": "(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.",
              "z": "Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "kfp": "Administrative tasks, scheduled jobs or platform updates can match this pattern — correlate with change management, maintenance windows and user role before raising severity.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk Add-on for Microsoft Windows](https://splunkbase.splunk.com/app/742), [Splunk Add-on for CyberArk](https://splunkbase.splunk.com/app/4295), [CIM: Authentication](https://docs.splunk.com/Documentation/CIM/latest/User/Authentication)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621).\n• Ensure the following data sources are available: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Segment SWIFT CSP evidence into dedicated indexes with strict role-based access. (2) Join operators to HR screening and vendor risk lookups refreshed weekly. (3) Validate MFA and PAM sourcetypes are CIM-mapped for Authentication models. (4) Retain evidence for at least 13 months aligned to attestation cycles.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vendor sourcetype=\"swift:csp_score\" earliest=-30d\n| stats latest(score) as s, latest(trend) as t by counterparty_bic\n| where s<75 OR t=\"down\"\n| sort s\n| head 200\n```\n\nUnderstanding this SPL\n\n**Counterparty CSP score monitoring for correspondent risk (SWIFT KYC-SA)** — Produces SWIFT Customer Security Programme evidence for counterparty csp score monitoring for correspondent risk ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vendor; **sourcetype**: swift:csp_score. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vendor, sourcetype=\"swift:csp_score\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by counterparty_bic** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where s<75 OR t=\"down\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n\nOptional CIM / accelerated variant (same use case, normalized fields via Common Information Model):\n\n```spl\n| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count\n```\n\nUnderstanding this CIM / accelerated SPL\n\n**Counterparty CSP score monitoring for correspondent risk (SWIFT KYC-SA)** — Produces SWIFT Customer Security Programme evidence for counterparty csp score monitoring for correspondent risk ahead of independent assessment.\n\nDocumented **Data sources**: SWIFT Alliance/Lite2 transaction logs, operator session audit trails, network infrastructure logs, endpoint security logs, privileged access management, change management records, vulnerability scan outputs, multi-factor authentication logs. **App/TA** (typical add-on context): Splunk Enterprise Security (263), Splunk Add-on for Microsoft Windows (742), Splunk Add-on for CyberArk (4295), Splunk Add-on for Palo Alto Networks (2757), Splunk Common Information Model Add-on (1621). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThis **CIM or accelerated** block uses normalized field names and/or `tstats` over data models. Enable **acceleration** on the referenced models (and correct CIM knowledge objects) or the search may return nothing.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (operators), Single value (sessions), Bar chart (CSCF controls).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show SWIFT security programme expectations are met for your payment zone, operators, and controls before the independent check.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SWIFT CSP"
              ],
              "a": [
                "Authentication"
              ],
              "qs": "| tstats summariesonly=t count from datamodel=Authentication.Authentication by Authentication.action, Authentication.user, Authentication.src | sort - count",
              "e": [
                "cyberark",
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 6.4 (Logging and monitoring) is enforced — Splunk UC-22.34.12: Counterparty CSP score monitoring for correspondent risk.",
                  "ea": "Saved search 'UC-22.34.12' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 12,
            "none": 0
          }
        },
        {
          "i": "22.35",
          "n": "Evidence continuity and log integrity",
          "u": [
            {
              "i": "22.35.1",
              "n": "Audit-log continuity: detect indexing gap indicating lost evidence",
              "c": "critical",
              "f": "intermediate",
              "v": "Without continuous audit logs, compliance claims are unfalsifiable. Auditors across GDPR, HIPAA, PCI DSS, SOC 2, and SOX all treat log-gaps as a finding.",
              "t": "Splunk Enterprise / Splunk Cloud Platform (native; no separate TA required)",
              "d": "_internal metrics.log (group=per_sourcetype_thruput), _audit (action=indexing), or any index-time metric that exposes per-source event rates.",
              "q": "| tstats count as events where index=_internal source=*metrics.log* group=per_sourcetype_thruput earliest=-30m by sourcetype _time span=1m\n| eventstats avg(events) as avg_events, stdev(events) as stdev_events by sourcetype\n| where events=0 AND avg_events>0\n| eval gap_seconds=60\n| table _time sourcetype events avg_events stdev_events gap_seconds\n| sort - _time",
              "m": "(1) Schedule every 5 minutes; (2) route hits to a restricted summary index (compliance_summary); (3) wire to ITSI service health or ES notable depending on deployment; (4) maintain a maintenance-window lookup (maintenance_windows.csv).",
              "z": "Timeline of event rate per sourcetype with red bars for gap periods; heat-map of sources-by-day with green/amber/red cells; single value for '% of last 30d with full coverage'.",
              "kfp": "Planned maintenance windows or expected cold-standby sources that receive low volume. Mitigate with a maintenance_windows.csv lookup and a `not source IN [$lookup]` clause.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [MITRE ATT&CK — T1562.008 Impair Defenses: Disable or Modify Cloud Logs](https://attack.mitre.org/techniques/T1562/008/)",
              "mitre": [
                "T1562.008"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform (native; no separate TA required).\n• Ensure the following data sources are available: _internal metrics.log (group=per_sourcetype_thruput), _audit (action=indexing), or any index-time metric that exposes per-source event rates..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Schedule every 5 minutes; (2) route hits to a restricted summary index (compliance_summary); (3) wire to ITSI service health or ES notable depending on deployment; (4) maintain a maintenance-window lookup (maintenance_windows.csv).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats count as events where index=_internal source=*metrics.log* group=per_sourcetype_thruput earliest=-30m by sourcetype _time span=1m\n| eventstats avg(events) as avg_events, stdev(events) as stdev_events by sourcetype\n| where events=0 AND avg_events>0\n| eval gap_seconds=60\n| table _time sourcetype events avg_events stdev_events gap_seconds\n| sort - _time\n```\n\nUnderstanding this SPL\n\n**Audit-log continuity: detect indexing gap indicating lost evidence** — Without continuous audit logs, compliance claims are unfalsifiable. Auditors across GDPR, HIPAA, PCI DSS, SOC 2, and SOX all treat log-gaps as a finding.\n\nDocumented **Data sources**: _internal metrics.log (group=per_sourcetype_thruput), _audit (action=indexing), or any index-time metric that exposes per-source event rates. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform (native; no separate TA required). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _internal.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against precomputed summaries; ensure the referenced data model is accelerated.\n• `eventstats` rolls up events into metrics; results are split **by sourcetype** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where events=0 AND avg_events>0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **gap_seconds** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Audit-log continuity: detect indexing gap indicating lost evidence**): table _time sourcetype events avg_events stdev_events gap_seconds\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Timeline of event rate per sourcetype with red bars for gap periods; heat-map of sources-by-day with green/amber/red cells; single value for '% of last 30d with full coverage'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for breaks in the audit log stream so you can fix gaps before compliance or security reviews find them first.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "GDPR",
                "HIPAA",
                "PCI-DSS",
                "SOC-2",
                "SOX-ITGC"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32(1)(b)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.32(1)(b) is enforced — Splunk UC-22.35.1: Audit-log continuity: detect indexing gap indicating lost evidence.",
                  "ea": "Saved search 'UC-22.35.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2016/679/oj#d1e2833-1-1"
                },
                {
                  "r": "HIPAA",
                  "v": "2013-final",
                  "cl": "§164.312(b)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA §164.312(b) (Audit controls) is enforced — Splunk UC-22.35.1: Audit-log continuity: detect indexing gap indicating lost evidence.",
                  "ea": "Saved search 'UC-22.35.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-C/section-164.312#p-164.312(b)"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "10.2.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI-DSS 10.2.1 is enforced — Splunk UC-22.35.1: Audit-log continuity: detect indexing gap indicating lost evidence.",
                  "ea": "Saved search 'UC-22.35.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf#page=126"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC7.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC7.2 (System monitoring for anomalies) is enforced — Splunk UC-22.35.1: Audit-log continuity: detect indexing gap indicating lost evidence.",
                  "ea": "Saved search 'UC-22.35.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Logging.Continuity",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of SOX-ITGC ITGC.Logging.Continuity (Audit trail completeness) — Splunk UC-22.35.1: Audit-log continuity: detect indexing gap indicating lost evidence.",
                  "ea": "Saved search 'UC-22.35.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.32(1)(b)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.32(1)(b) is enforced — Splunk UC-22.35.1: Audit-log continuity: detect indexing gap indicating lost evidence.",
                  "ea": "Saved search 'UC-22.35.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.35.2",
              "n": "Log tamper detection via write-once-read-many chain-of-custody",
              "c": "critical",
              "f": "advanced",
              "v": "Produces defensible chain-of-custody evidence: auditors can be shown a 30-day view in which every bucket was either green or had a timestamped remediation ticket.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "| rest /services/cluster/master/buckets splunk_server=* | eval primary_hash=coalesce('primaries{}.bucket_flags.hash','bucket_hash') | stats values(primary_hash) as hashes dc(primary_hash) as uniques by index, bucket_id | where uniques>1\n| eval divergence_detected_at=strftime(now(),\"%Y-%m-%dT%H:%M:%SZ\")\n| table divergence_detected_at index bucket_id hashes",
              "m": "(1) Enable indexer-cluster replication with searchFactor≥2, replicationFactor≥3; (2) schedule the REST probe every 5 minutes; (3) pipe hits to compliance_summary and to ITSI as a KPI; (4) escalate to CISO on any bucket divergence older than 30 minutes.",
              "z": "Heat-map of indexes × buckets over 30d (green/red); single value for '% of buckets verified'; time-chart of open divergences.",
              "kfp": "Planned rolling restarts of indexer peers can briefly show hash mismatches during re-balancing. Use maintenance_windows.csv and a 10-minute grace window.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022), [MITRE ATT&CK — T1070 Indicator Removal](https://attack.mitre.org/techniques/T1070/)",
              "mitre": [
                "T1070"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable indexer-cluster replication with searchFactor≥2, replicationFactor≥3; (2) schedule the REST probe every 5 minutes; (3) pipe hits to compliance_summary and to ITSI as a KPI; (4) escalate to CISO on any bucket divergence older than 30 minutes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/cluster/master/buckets splunk_server=* | eval primary_hash=coalesce('primaries{}.bucket_flags.hash','bucket_hash') | stats values(primary_hash) as hashes dc(primary_hash) as uniques by index, bucket_id | where uniques>1\n| eval divergence_detected_at=strftime(now(),\"%Y-%m-%dT%H:%M:%SZ\")\n| table divergence_detected_at index bucket_id hashes\n```\n\nUnderstanding this SPL\n\n**Log tamper detection via write-once-read-many chain-of-custody** — Produces defensible chain-of-custody evidence: auditors can be shown a 30-day view in which every bucket was either green or had a timestamped remediation ticket.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• `eval` defines or adjusts **primary_hash** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by index, bucket_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where uniques>1` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **divergence_detected_at** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Log tamper detection via write-once-read-many chain-of-custody**): table divergence_detected_at index bucket_id hashes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heat-map of indexes × buckets over 30d (green/red); single value for '% of buckets verified'; time-chart of open divergences.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for breaks in the audit log stream so you can fix gaps before compliance or security reviews find them first.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "GDPR",
                "HIPAA Security",
                "PCI DSS",
                "SOC 2",
                "SOX ITGC"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.23 (Security control action) is enforced — Splunk UC-22.35.2: Log tamper detection via write-once-read-many chain-of-custody.",
                  "ea": "Saved search 'UC-22.35.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.32 (Security of processing) is enforced — Splunk UC-22.35.2: Log tamper detection via write-once-read-many chain-of-custody.",
                  "ea": "Saved search 'UC-22.35.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2016/679/oj#d1e2833-1-1"
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(c)(1)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA Security §164.312(c)(1) (Integrity) is enforced — Splunk UC-22.35.2: Log tamper detection via write-once-read-many chain-of-custody.",
                  "ea": "Saved search 'UC-22.35.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-C/section-164.312#p-164.312(c)(1)"
                },
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.46 (Security measures) is enforced — Splunk UC-22.35.2: Log tamper detection via write-once-read-many chain-of-custody.",
                  "ea": "Saved search 'UC-22.35.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.5",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI DSS 10.5 is enforced — Splunk UC-22.35.2: Log tamper detection via write-once-read-many chain-of-custody.",
                  "ea": "Saved search 'UC-22.35.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf#page=131"
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC7.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC 2 CC7.2 (System monitoring for anomalies) is enforced — Splunk UC-22.35.2: Log tamper detection via write-once-read-many chain-of-custody.",
                  "ea": "Saved search 'UC-22.35.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Logging.Continuity",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of SOX ITGC ITGC.Logging.Continuity (Audit trail completeness) — Splunk UC-22.35.2: Log tamper detection via write-once-read-many chain-of-custody.",
                  "ea": "Saved search 'UC-22.35.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.32",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.32 (Security of processing) is enforced — Splunk UC-22.35.2: Log tamper detection via write-once-read-many chain-of-custody.",
                  "ea": "Saved search 'UC-22.35.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.35.3",
              "n": "Indexer replication lag exposing evidence to single-point failure",
              "c": "high",
              "f": "intermediate",
              "v": "Turns an otherwise-silent SLO breach into evidence that the control was monitored continuously, even across shards that never had an incident.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "| rest /services/cluster/master/indexes splunk_server=* | stats max(replication_factor_met) as rf_met max(search_factor_met) as sf_met by title | eval index=title, rf_met=coalesce(rf_met,0), sf_met=coalesce(sf_met,0) | where rf_met=0 OR sf_met=0",
              "m": "(1) Schedule hourly; (2) parameterise RPO via macro `worm_rpo_seconds`; (3) correlate with maintenance-window lookup to suppress planned reboots; (4) wire to ITSI availability KPI for cross-pillar visibility.",
              "z": "Line chart of replication lag by index; table of out-of-SLA indexes with duration; single-value for '% of time within RPO'.",
              "kfp": "Planned rolling restarts, primary-peer failovers, or newly-added indexes (before first replication cycle) can briefly show lag. Apply maintenance-window suppression.",
              "refs": "[Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Schedule hourly; (2) parameterise RPO via macro `worm_rpo_seconds`; (3) correlate with maintenance-window lookup to suppress planned reboots; (4) wire to ITSI availability KPI for cross-pillar visibility.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/cluster/master/indexes splunk_server=* | stats max(replication_factor_met) as rf_met max(search_factor_met) as sf_met by title | eval index=title, rf_met=coalesce(rf_met,0), sf_met=coalesce(sf_met,0) | where rf_met=0 OR sf_met=0\n```\n\nUnderstanding this SPL\n\n**Indexer replication lag exposing evidence to single-point failure** — Turns an otherwise-silent SLO breach into evidence that the control was monitored continuously, even across shards that never had an incident.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• `stats` rolls up events into metrics; results are split **by title** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **index** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where rf_met=0 OR sf_met=0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart of replication lag by index; table of out-of-SLA indexes with duration; single-value for '% of time within RPO'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for breaks in the audit log stream so you can fix gaps before compliance or security reviews find them first.",
              "mtype": [
                "Compliance",
                "Operations"
              ],
              "regs": [
                "DORA",
                "GDPR",
                "NIST 800-53",
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.23 (Security control action) is enforced — Splunk UC-22.35.3: Indexer replication lag exposing evidence to single-point failure.",
                  "ea": "Saved search 'UC-22.35.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.12",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.12 (Backup policies and recovery methods) is enforced — Splunk UC-22.35.3: Indexer replication lag exposing evidence to single-point failure.",
                  "ea": "Saved search 'UC-22.35.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.32 (Security of processing) is enforced — Splunk UC-22.35.3: Indexer replication lag exposing evidence to single-point failure.",
                  "ea": "Saved search 'UC-22.35.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.46 (Security measures) is enforced — Splunk UC-22.35.3: Indexer replication lag exposing evidence to single-point failure.",
                  "ea": "Saved search 'UC-22.35.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-9",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 AU-9 (Protection of audit information) is enforced — Splunk UC-22.35.3: Indexer replication lag exposing evidence to single-point failure.",
                  "ea": "Saved search 'UC-22.35.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "A1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC 2 A1.2 (Availability commitments) is enforced — Splunk UC-22.35.3: Indexer replication lag exposing evidence to single-point failure.",
                  "ea": "Saved search 'UC-22.35.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.32",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.32 (Security of processing) is enforced — Splunk UC-22.35.3: Indexer replication lag exposing evidence to single-point failure.",
                  "ea": "Saved search 'UC-22.35.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.36",
          "n": "Data subject rights fulfillment",
          "u": [
            {
              "i": "22.36.1",
              "n": "DSAR fulfillment SLA tracker with verification evidence trail",
              "c": "high",
              "f": "intermediate",
              "v": "Replaces the common 'DPO exports a CSV from OneTrust' evidence pattern with a continuously-measured, tamper-evident record that directly supports auditor questions about Art.12/Art.15 response timeliness.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=dsar sourcetype=onetrust:dsar OR sourcetype=servicenow:privacy earliest=-90d\n| stats min(_time) as received max(eval(case(status=\"verified\",_time))) as verified max(eval(case(status=\"fulfilled\",_time))) as fulfilled by ticket_id subject_token\n| eval days_open=round((coalesce(fulfilled,now())-received)/86400,1), sla_breach=if(isnull(fulfilled) AND days_open>25,1,0)\n| where sla_breach=1\n| table ticket_id subject_token received verified days_open",
              "m": "(1) Ingest the DSAR platform (OneTrust, TrustArc, ServiceNow Privacy) via its API; (2) join to an `extensions.csv` for legitimately extended tickets; (3) alert to DPO when breach is imminent; (4) emit KPI to ITSI privacy service.",
              "z": "Funnel (received → verified → responded → fulfilled), aging bar chart, single-value for '% of DSARs fulfilled within SLA (90d)'.",
              "kfp": "Legitimately extended requests (Art.12(3)) look like SLA breaches unless the extension_approved_on field in extensions.csv is populated.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [California Consumer Privacy Act (CCPA/CPRA)](https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=CIV&division=3.&title=1.81.5.&part=4.&chapter=&article=)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest the DSAR platform (OneTrust, TrustArc, ServiceNow Privacy) via its API; (2) join to an `extensions.csv` for legitimately extended tickets; (3) alert to DPO when breach is imminent; (4) emit KPI to ITSI privacy service.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dsar sourcetype=onetrust:dsar OR sourcetype=servicenow:privacy earliest=-90d\n| stats min(_time) as received max(eval(case(status=\"verified\",_time))) as verified max(eval(case(status=\"fulfilled\",_time))) as fulfilled by ticket_id subject_token\n| eval days_open=round((coalesce(fulfilled,now())-received)/86400,1), sla_breach=if(isnull(fulfilled) AND days_open>25,1,0)\n| where sla_breach=1\n| table ticket_id subject_token received verified days_open\n```\n\nUnderstanding this SPL\n\n**DSAR fulfillment SLA tracker with verification evidence trail** — Replaces the common 'DPO exports a CSV from OneTrust' evidence pattern with a continuously-measured, tamper-evident record that directly supports auditor questions about Art.12/Art.15 response timeliness.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dsar; **sourcetype**: onetrust:dsar, servicenow:privacy. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dsar, sourcetype=onetrust:dsar, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by ticket_id subject_token** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_open** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where sla_breach=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DSAR fulfillment SLA tracker with verification evidence trail**): table ticket_id subject_token received verified days_open\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Funnel (received → verified → responded → fulfilled), aging bar chart, single-value for '% of DSARs fulfilled within SLA (90d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when data-subject requests are running late so privacy duties stay on track across systems.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "CCPA/CPRA",
                "GDPR"
              ],
              "a": [
                "Ticket_Management"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.100",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that CCPA/CPRA §1798.100 (Consumer right to know) is enforced — Splunk UC-22.36.1: DSAR fulfillment SLA tracker with verification evidence trail.",
                  "ea": "Saved search 'UC-22.36.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.12",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.12 is enforced — Splunk UC-22.36.1: DSAR fulfillment SLA tracker with verification evidence trail.",
                  "ea": "Saved search 'UC-22.36.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.15",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.15 (Right of access) is enforced — Splunk UC-22.36.1: DSAR fulfillment SLA tracker with verification evidence trail.",
                  "ea": "Saved search 'UC-22.36.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.12",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.12 is enforced — Splunk UC-22.36.1: DSAR fulfillment SLA tracker with verification evidence trail.",
                  "ea": "Saved search 'UC-22.36.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.15",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.15 is enforced — Splunk UC-22.36.1: DSAR fulfillment SLA tracker with verification evidence trail.",
                  "ea": "Saved search 'UC-22.36.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.36.2",
              "n": "Right-to-erasure propagation completeness across downstream systems",
              "c": "critical",
              "f": "advanced",
              "v": "Moves the erasure claim from 'we believe it propagated' to 'we proved it propagated, system by system' — exactly the evidence supervisory authorities ask for on first inspection.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=dsar subject_token=* action=erasure earliest=-180d\n| stats values(system) as acked_systems by subject_token ticket_id\n| lookup ropa_system_map.csv ticket_id OUTPUT systems_in_scope\n| eval missing=mvfilter(NOT match(acked_systems, system))\n| where mvcount(missing)>0\n| table ticket_id subject_token systems_in_scope acked_systems missing",
              "m": "(1) Publish a ropa_system_map.csv lookup keyed by ticket_id → list of systems-in-scope; (2) ingest deletion-ack events from each system (DB TTL jobs, DWH deletion manifests, app-level hooks); (3) run hourly; (4) integrate with SOAR to auto-reopen tickets with missing acks.",
              "z": "Matrix of ticket_id × system with green/red cells; aging histogram; single-value for 'in-scope systems with ack in SLA'.",
              "kfp": "Systems that hold data under a legitimate retention obligation (tax law, medical records) will 'miss' an ack. Use erasure_exceptions.csv to exclude them per ticket.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [California Consumer Privacy Act (CCPA/CPRA)](https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=CIV&division=3.&title=1.81.5.&part=4.&chapter=&article=)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Publish a ropa_system_map.csv lookup keyed by ticket_id → list of systems-in-scope; (2) ingest deletion-ack events from each system (DB TTL jobs, DWH deletion manifests, app-level hooks); (3) run hourly; (4) integrate with SOAR to auto-reopen tickets with missing acks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dsar subject_token=* action=erasure earliest=-180d\n| stats values(system) as acked_systems by subject_token ticket_id\n| lookup ropa_system_map.csv ticket_id OUTPUT systems_in_scope\n| eval missing=mvfilter(NOT match(acked_systems, system))\n| where mvcount(missing)>0\n| table ticket_id subject_token systems_in_scope acked_systems missing\n```\n\nUnderstanding this SPL\n\n**Right-to-erasure propagation completeness across downstream systems** — Moves the erasure claim from 'we believe it propagated' to 'we proved it propagated, system by system' — exactly the evidence supervisory authorities ask for on first inspection.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dsar.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dsar, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by subject_token ticket_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **missing** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mvcount(missing)>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Right-to-erasure propagation completeness across downstream systems**): table ticket_id subject_token systems_in_scope acked_systems missing\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix of ticket_id × system with green/red cells; aging histogram; single-value for 'in-scope systems with ack in SLA'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you see when data-subject requests are running late so privacy duties stay on track across systems.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "CCPA/CPRA",
                "GDPR"
              ],
              "a": [
                "Ticket_Management"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.105",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that CCPA/CPRA §1798.105 (Consumer right to delete) is enforced — Splunk UC-22.36.2: Right-to-erasure propagation completeness across downstream systems.",
                  "ea": "Saved search 'UC-22.36.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.17 (Right to erasure) is enforced — Splunk UC-22.36.2: Right-to-erasure propagation completeness across downstream systems.",
                  "ea": "Saved search 'UC-22.36.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.17(2)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.17(2) is enforced — Splunk UC-22.36.2: Right-to-erasure propagation completeness across downstream systems.",
                  "ea": "Saved search 'UC-22.36.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.17 is enforced — Splunk UC-22.36.2: Right-to-erasure propagation completeness across downstream systems.",
                  "ea": "Saved search 'UC-22.36.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.17(2)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.17(2) is enforced — Splunk UC-22.36.2: Right-to-erasure propagation completeness across downstream systems.",
                  "ea": "Saved search 'UC-22.36.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.36.3",
              "n": "Portability export integrity — signed manifest verification",
              "c": "medium",
              "f": "intermediate",
              "v": "Adds cryptographic evidence to what is otherwise a 'we generated the file' audit trail.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=dsar sourcetype=portability:export action=delivered earliest=-30d\n| join export_id [search index=dsar sourcetype=portability:manifest | fields export_id, manifest_hash]\n| where delivered_hash!=manifest_hash\n| table _time export_id subject_token delivered_hash manifest_hash",
              "m": "(1) Sign manifests at export time with an HSM key scoped to the privacy program; (2) log manifest_hash and delivered_hash at each delivery; (3) run hourly; (4) on hit, auto-open an Art.12 remediation ticket.",
              "z": "Table of failures, single-value for '% verified (30d)', time chart of failures.",
              "kfp": "Content encoding transforms can change the on-wire hash. Require manifest to record pre-compression hash and compare apples to apples.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Sign manifests at export time with an HSM key scoped to the privacy program; (2) log manifest_hash and delivered_hash at each delivery; (3) run hourly; (4) on hit, auto-open an Art.12 remediation ticket.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dsar sourcetype=portability:export action=delivered earliest=-30d\n| join export_id [search index=dsar sourcetype=portability:manifest | fields export_id, manifest_hash]\n| where delivered_hash!=manifest_hash\n| table _time export_id subject_token delivered_hash manifest_hash\n```\n\nUnderstanding this SPL\n\n**Portability export integrity — signed manifest verification** — Adds cryptographic evidence to what is otherwise a 'we generated the file' audit trail.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dsar; **sourcetype**: portability:export. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dsar, sourcetype=portability:export, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where delivered_hash!=manifest_hash` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Portability export integrity — signed manifest verification**): table _time export_id subject_token delivered_hash manifest_hash\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of failures, single-value for '% verified (30d)', time chart of failures.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when data-subject requests are running late so privacy duties stay on track across systems.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.20",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.20 (Right to data portability) is enforced — Splunk UC-22.36.3: Portability export integrity — signed manifest verification.",
                  "ea": "Saved search 'UC-22.36.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.20",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.20 is enforced — Splunk UC-22.36.3: Portability export integrity — signed manifest verification.",
                  "ea": "Saved search 'UC-22.36.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.37",
          "n": "Consent lifecycle and lawful basis",
          "u": [
            {
              "i": "22.37.1",
              "n": "Consent capture evidence freshness — stale-consent alerting",
              "c": "high",
              "f": "intermediate",
              "v": "Prevents the 'set-and-forget' consent anti-pattern that surfaces in nearly every GDPR supervisory-authority enforcement action.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=cmp sourcetype=onetrust:consent earliest=-400d\n| stats max(_time) as last_consent by subject_token purpose\n| eval consent_age_days=round((now()-last_consent)/86400,0)\n| where consent_age_days>365\n| join subject_token [search index=app_activity earliest=-30d | stats count by subject_token | where count>0]\n| table subject_token purpose last_consent consent_age_days",
              "m": "(1) Ingest CMP events with purpose taxonomy; (2) set max_consent_age_days via macro; (3) run daily; (4) pipe to SOAR to drive a re-consent workflow.",
              "z": "Histogram of consent age by purpose; table of stale subjects; single-value for '% of processing with fresh consent'.",
              "kfp": "Long-lived contractual relationships where consent is not the lawful basis. Use a lawful_basis.csv join to filter them out.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest CMP events with purpose taxonomy; (2) set max_consent_age_days via macro; (3) run daily; (4) pipe to SOAR to drive a re-consent workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cmp sourcetype=onetrust:consent earliest=-400d\n| stats max(_time) as last_consent by subject_token purpose\n| eval consent_age_days=round((now()-last_consent)/86400,0)\n| where consent_age_days>365\n| join subject_token [search index=app_activity earliest=-30d | stats count by subject_token | where count>0]\n| table subject_token purpose last_consent consent_age_days\n```\n\nUnderstanding this SPL\n\n**Consent capture evidence freshness — stale-consent alerting** — Prevents the 'set-and-forget' consent anti-pattern that surfaces in nearly every GDPR supervisory-authority enforcement action.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cmp; **sourcetype**: onetrust:consent. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cmp, sourcetype=onetrust:consent, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by subject_token purpose** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **consent_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where consent_age_days>365` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Consent capture evidence freshness — stale-consent alerting**): table subject_token purpose last_consent consent_age_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Histogram of consent age by purpose; table of stale subjects; single-value for '% of processing with fresh consent'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you track consent and choices over time so your marketing and storage stay aligned with what people agreed to.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.6 (Lawful basis) is enforced — Splunk UC-22.37.1: Consent capture evidence freshness — stale-consent alerting.",
                  "ea": "Saved search 'UC-22.37.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.7 (Conditions for consent) is enforced — Splunk UC-22.37.1: Consent capture evidence freshness — stale-consent alerting.",
                  "ea": "Saved search 'UC-22.37.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.6 is enforced — Splunk UC-22.37.1: Consent capture evidence freshness — stale-consent alerting.",
                  "ea": "Saved search 'UC-22.37.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.7 is enforced — Splunk UC-22.37.1: Consent capture evidence freshness — stale-consent alerting.",
                  "ea": "Saved search 'UC-22.37.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.37.2",
              "n": "Consent withdrawal propagation SLA — downstream stop-processing evidence",
              "c": "critical",
              "f": "advanced",
              "v": "Withdrawals that do not propagate are a primary source of supervisory-authority enforcement; this UC turns a weak process control into a tested technical control.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=cmp action=withdraw earliest=-7d\n| join subject_token [search index=app_activity (lawful_basis=consent) earliest=-7d]\n| where _time>=withdrawal_time + 86400\n| stats values(system) as systems_still_processing count by subject_token withdrawal_time\n| where count>0\n| table withdrawal_time subject_token systems_still_processing count",
              "m": "(1) Instrument each consent-governed processing activity to emit subject_token and lawful_basis; (2) ingest withdrawals from the CMP; (3) run every 30 min; (4) on hit, open a SOAR case with the system owner.",
              "z": "Matrix of system × withdrawal-breach count (7d), single-value 'withdrawals propagated within 24h'.",
              "kfp": "Systems that process under non-consent lawful bases must emit the correct lawful_basis value; unmapped systems can appear as FPs.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [California Consumer Privacy Act (CCPA/CPRA)](https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=CIV&division=3.&title=1.81.5.&part=4.&chapter=&article=)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument each consent-governed processing activity to emit subject_token and lawful_basis; (2) ingest withdrawals from the CMP; (3) run every 30 min; (4) on hit, open a SOAR case with the system owner.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cmp action=withdraw earliest=-7d\n| join subject_token [search index=app_activity (lawful_basis=consent) earliest=-7d]\n| where _time>=withdrawal_time + 86400\n| stats values(system) as systems_still_processing count by subject_token withdrawal_time\n| where count>0\n| table withdrawal_time subject_token systems_still_processing count\n```\n\nUnderstanding this SPL\n\n**Consent withdrawal propagation SLA — downstream stop-processing evidence** — Withdrawals that do not propagate are a primary source of supervisory-authority enforcement; this UC turns a weak process control into a tested technical control.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cmp.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cmp, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where _time>=withdrawal_time + 86400` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by subject_token withdrawal_time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where count>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Consent withdrawal propagation SLA — downstream stop-processing evidence**): table withdrawal_time subject_token systems_still_processing count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix of system × withdrawal-breach count (7d), single-value 'withdrawals propagated within 24h'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you track consent and choices over time so your marketing and storage stay aligned with what people agreed to.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "CCPA/CPRA",
                "GDPR"
              ],
              "a": [
                "Authentication"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.100",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CCPA/CPRA §1798.100 (Consumer right to know) is enforced — Splunk UC-22.37.2: Consent withdrawal propagation SLA — downstream stop-processing evidence.",
                  "ea": "Saved search 'UC-22.37.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.7 (Conditions for consent) is enforced — Splunk UC-22.37.2: Consent withdrawal propagation SLA — downstream stop-processing evidence.",
                  "ea": "Saved search 'UC-22.37.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.7 is enforced — Splunk UC-22.37.2: Consent withdrawal propagation SLA — downstream stop-processing evidence.",
                  "ea": "Saved search 'UC-22.37.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.37.3",
              "n": "Global Privacy Control (GPC) signal honoring — server-side audit",
              "c": "high",
              "f": "intermediate",
              "v": "Automates the evidence collection California AG investigations have asked for in recent settlements (Sephora, DoorDash).",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=web sourcetype=access_combined earliest=-24h\n| rex field=_raw \"Sec-GPC:\\s*(?<gpc>\\d)\"\n| where gpc=1\n| join session_id [search index=web sourcetype=adtech_call earliest=-24h | stats count by session_id vendor endpoint]\n| table session_id vendor endpoint count",
              "m": "(1) Ensure `Sec-GPC` is preserved in access logs; (2) list regulated ad-tech partners in an allowlist; (3) run hourly; (4) pipe to privacy-metric KPI.",
              "z": "Table of violating sessions, vendor bar chart, single-value 'sessions honoring GPC (24h)'.",
              "kfp": "Essential functional third parties (payment, fraud) are not 'sale/share'. Allowlist via gpc_exempt_vendors.csv.",
              "refs": "[California Consumer Privacy Act (CCPA/CPRA)](https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=CIV&division=3.&title=1.81.5.&part=4.&chapter=&article=)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure `Sec-GPC` is preserved in access logs; (2) list regulated ad-tech partners in an allowlist; (3) run hourly; (4) pipe to privacy-metric KPI.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=access_combined earliest=-24h\n| rex field=_raw \"Sec-GPC:\\s*(?<gpc>\\d)\"\n| where gpc=1\n| join session_id [search index=web sourcetype=adtech_call earliest=-24h | stats count by session_id vendor endpoint]\n| table session_id vendor endpoint count\n```\n\nUnderstanding this SPL\n\n**Global Privacy Control (GPC) signal honoring — server-side audit** — Automates the evidence collection California AG investigations have asked for in recent settlements (Sephora, DoorDash).\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=access_combined, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Extracts fields with `rex` (regular expression).\n• Filters the current rows with `where gpc=1` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Global Privacy Control (GPC) signal honoring — server-side audit**): table session_id vendor endpoint count\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of violating sessions, vendor bar chart, single-value 'sessions honoring GPC (24h)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you track consent and choices over time so your marketing and storage stay aligned with what people agreed to.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "CCPA/CPRA"
              ],
              "a": [
                "Web"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.135",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that CCPA/CPRA §1798.135 is enforced — Splunk UC-22.37.3: Global Privacy Control (GPC) signal honoring — server-side audit.",
                  "ea": "Saved search 'UC-22.37.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.38",
          "n": "Cross-border transfer controls",
          "u": [
            {
              "i": "22.38.1",
              "n": "Cross-border personal-data flow anomaly — egress to unsanctioned jurisdictions",
              "c": "critical",
              "f": "advanced",
              "v": "Operationalises a control that is otherwise verifiable only via quarterly desk review of the transfer register.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic where All_Traffic.tag=egress earliest=-1h by All_Traffic.dest, All_Traffic.src, All_Traffic.bytes_out\n| `get_geolocation(dest)`\n| lookup sanctioned_transfer_jurisdictions.csv country as dest_country OUTPUT scc_active adequacy_decision\n| where isnull(scc_active) AND isnull(adequacy_decision)\n| join src [search index=dlp tag=pii | stats values(classification) as pii_class by src]\n| table _time src dest dest_country pii_class bytes_out",
              "m": "(1) Ensure DLP emits CIM-compatible PII classification; (2) maintain sanctioned_transfer_jurisdictions.csv driven by the Art.45 adequacy list and the SCC register; (3) schedule every 15 min; (4) escalate via SOAR to DPO.",
              "z": "World-map of egress flows (green=sanctioned, red=finding), table of findings, single-value 'transfers in sanctioned geographies'.",
              "kfp": "Corporate VPN endpoints that present a destination IP in a non-sanctioned country can mis-geolocate; add vpn_exits.csv to suppress.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [MITRE ATT&CK — T1567.002 Exfiltration to Cloud Storage](https://attack.mitre.org/techniques/T1567/002/)",
              "mitre": [
                "T1567.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure DLP emits CIM-compatible PII classification; (2) maintain sanctioned_transfer_jurisdictions.csv driven by the Art.45 adequacy list and the SCC register; (3) schedule every 15 min; (4) escalate via SOAR to DPO.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count from datamodel=Network_Traffic.All_Traffic where All_Traffic.tag=egress earliest=-1h by All_Traffic.dest, All_Traffic.src, All_Traffic.bytes_out\n| `get_geolocation(dest)`\n| lookup sanctioned_transfer_jurisdictions.csv country as dest_country OUTPUT scc_active adequacy_decision\n| where isnull(scc_active) AND isnull(adequacy_decision)\n| join src [search index=dlp tag=pii | stats values(classification) as pii_class by src]\n| table _time src dest dest_country pii_class bytes_out\n```\n\nUnderstanding this SPL\n\n**Cross-border personal-data flow anomaly — egress to unsanctioned jurisdictions** — Operationalises a control that is otherwise verifiable only via quarterly desk review of the transfer register.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Network_Traffic.All_Traffic` — enable acceleration for that model.\n• Invokes macro `get_geolocation(dest)` — in Search, use the UI or expand to inspect the underlying SPL.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(scc_active) AND isnull(adequacy_decision)` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Cross-border personal-data flow anomaly — egress to unsanctioned jurisdictions**): table _time src dest dest_country pii_class bytes_out\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: World-map of egress flows (green=sanctioned, red=finding), table of findings, single-value 'transfers in sanctioned geographies'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you see cross-border data flows against your rules so transfers stay justified and documented.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "GDPR"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.44",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of GDPR Art.44 (International transfers — general principle) — Splunk UC-22.38.1: Cross-border personal-data flow anomaly — egress to unsanctioned jurisdictions.",
                  "ea": "Saved search 'UC-22.38.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.46 (Transfers subject to safeguards) is enforced — Splunk UC-22.38.1: Cross-border personal-data flow anomaly — egress to unsanctioned jurisdictions.",
                  "ea": "Saved search 'UC-22.38.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.44",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of UK GDPR Art.44 — Splunk UC-22.38.1: Cross-border personal-data flow anomaly — egress to unsanctioned jurisdictions.",
                  "ea": "Saved search 'UC-22.38.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.46 is enforced — Splunk UC-22.38.1: Cross-border personal-data flow anomaly — egress to unsanctioned jurisdictions.",
                  "ea": "Saved search 'UC-22.38.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.38.2",
              "n": "SCC / adequacy decision reference freshness — stale-safeguard detector",
              "c": "medium",
              "f": "beginner",
              "v": "Prevents silent expiration of the legal basis for international transfers — the single most common finding in SCC-era enforcement.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "| inputlookup transfer_register.csv\n| lookup scc_reference.csv template_id OUTPUT template_active effective_from superseded_by\n| eval age_days=round((now()-strptime(effective_from,\"%Y-%m-%d\"))/86400,0)\n| where NOT template_active OR age_days>1095 OR isnotnull(superseded_by)\n| table transfer_id destination template_id template_active age_days superseded_by",
              "m": "(1) Maintain scc_reference.csv synced nightly against EDPB / Commission publications; (2) maintain transfer_register.csv as part of Art.30 ROPA; (3) run daily.",
              "z": "Stacked bar of active vs stale vs rescinded, table of findings, single-value '% of transfers with fresh safeguards'.",
              "kfp": "Transfer entries mid-migration (with `migration=in_flight`) should be allow-listed by a start/end date.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain scc_reference.csv synced nightly against EDPB / Commission publications; (2) maintain transfer_register.csv as part of Art.30 ROPA; (3) run daily.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup transfer_register.csv\n| lookup scc_reference.csv template_id OUTPUT template_active effective_from superseded_by\n| eval age_days=round((now()-strptime(effective_from,\"%Y-%m-%d\"))/86400,0)\n| where NOT template_active OR age_days>1095 OR isnotnull(superseded_by)\n| table transfer_id destination template_id template_active age_days superseded_by\n```\n\nUnderstanding this SPL\n\n**SCC / adequacy decision reference freshness — stale-safeguard detector** — Prevents silent expiration of the legal basis for international transfers — the single most common finding in SCC-era enforcement.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where NOT template_active OR age_days>1095 OR isnotnull(superseded_by)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SCC / adequacy decision reference freshness — stale-safeguard detector**): table transfer_id destination template_id template_active age_days superseded_by\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar of active vs stale vs rescinded, table of findings, single-value '% of transfers with fresh safeguards'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you see cross-border data flows against your rules so transfers stay justified and documented.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.45",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.45 (Transfers via adequacy decision) is enforced — Splunk UC-22.38.2: SCC / adequacy decision reference freshness — stale-safeguard detector.",
                  "ea": "Saved search 'UC-22.38.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.46 (Transfers subject to safeguards) is enforced — Splunk UC-22.38.2: SCC / adequacy decision reference freshness — stale-safeguard detector.",
                  "ea": "Saved search 'UC-22.38.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.45",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.45 is enforced — Splunk UC-22.38.2: SCC / adequacy decision reference freshness — stale-safeguard detector.",
                  "ea": "Saved search 'UC-22.38.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.46 is enforced — Splunk UC-22.38.2: SCC / adequacy decision reference freshness — stale-safeguard detector.",
                  "ea": "Saved search 'UC-22.38.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.38.3",
              "n": "Data localization enforcement — regulated-data must-stay-in-region",
              "c": "high",
              "f": "advanced",
              "v": "Replaces 'we think the bucket is in eu-west-1' with 'we proved every access was in-region for the full audit period'.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=cloud_storage sourcetype=aws:s3:objectaccess OR sourcetype=azure:blob:access earliest=-1h\n| lookup data_classification.csv key=object_key OUTPUT classification regulated_scope\n| lookup localization_policy.csv regulated_scope OUTPUT allowed_regions\n| where isnotnull(regulated_scope) AND NOT match(bucket_region, allowed_regions)\n| table _time bucket bucket_region object_key classification regulated_scope allowed_regions",
              "m": "(1) Tag objects with data-classification via the cloud provider's object metadata; (2) maintain localization_policy.csv keyed by regulated_scope; (3) run hourly; (4) escalate breaches via SOAR.",
              "z": "Table of breaches by bucket, bar chart of breaches by scope, single-value '% of accesses in sanctioned region'.",
              "kfp": "Disaster-recovery replication to allowed DR regions appears as a breach if those regions are not in allowed_regions.csv.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag objects with data-classification via the cloud provider's object metadata; (2) maintain localization_policy.csv keyed by regulated_scope; (3) run hourly; (4) escalate breaches via SOAR.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cloud_storage sourcetype=aws:s3:objectaccess OR sourcetype=azure:blob:access earliest=-1h\n| lookup data_classification.csv key=object_key OUTPUT classification regulated_scope\n| lookup localization_policy.csv regulated_scope OUTPUT allowed_regions\n| where isnotnull(regulated_scope) AND NOT match(bucket_region, allowed_regions)\n| table _time bucket bucket_region object_key classification regulated_scope allowed_regions\n```\n\nUnderstanding this SPL\n\n**Data localization enforcement — regulated-data must-stay-in-region** — Replaces 'we think the bucket is in eu-west-1' with 'we proved every access was in-region for the full audit period'.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cloud_storage; **sourcetype**: aws:s3:objectaccess, azure:blob:access. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cloud_storage, sourcetype=aws:s3:objectaccess, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(regulated_scope) AND NOT match(bucket_region, allowed_regions)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Data localization enforcement — regulated-data must-stay-in-region**): table _time bucket bucket_region object_key classification regulated_scope allowed_regions\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of breaches by bucket, bar chart of breaches by scope, single-value '% of accesses in sanctioned region'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see cross-border data flows against your rules so transfers stay justified and documented.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "DORA",
                "GDPR"
              ],
              "a": [
                "Web"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.28",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that DORA Art.28 (ICT third-party risk) is enforced — Splunk UC-22.38.3: Data localization enforcement — regulated-data must-stay-in-region.",
                  "ea": "Saved search 'UC-22.38.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.44",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of GDPR Art.44 (International transfers — general principle) — Splunk UC-22.38.3: Data localization enforcement — regulated-data must-stay-in-region.",
                  "ea": "Saved search 'UC-22.38.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.44",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of UK GDPR Art.44 — Splunk UC-22.38.3: Data localization enforcement — regulated-data must-stay-in-region.",
                  "ea": "Saved search 'UC-22.38.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 33.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.39",
          "n": "Incident notification timeliness",
          "u": [
            {
              "i": "22.39.1",
              "n": "Multi-regulator breach-notification SLA tracker (24h NIS2 / 72h GDPR / 72h HIPAA)",
              "c": "critical",
              "f": "advanced",
              "v": "Most enforcement actions against supervisory breaches (ICO, DPC, BaFin) focus on notification timeliness. This UC turns the obligation into a visible KPI instead of a post-facto audit finding.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "| inputlookup incident_register.csv\n| where classification=\"breach\"\n| eval applicable_regs=split(applicable_regs,\",\")\n| mvexpand applicable_regs\n| lookup sla_catalog.csv regulator as applicable_regs OUTPUT sla_hours\n| eval deadline=strptime(classified_at,\"%Y-%m-%dT%H:%M:%SZ\")+sla_hours*3600\n| join incident_id applicable_regs [search index=notifications sourcetype=regulator:submission | fields incident_id applicable_regs tracking_id submitted_at]\n| eval status=case(isnull(tracking_id) AND now()>deadline, \"breach\", isnull(tracking_id) AND (deadline-now())<5400, \"imminent\", isnotnull(tracking_id), \"submitted\")\n| table incident_id applicable_regs deadline submitted_at status",
              "m": "(1) Ingest classifications from ServiceNow / ES incident index; (2) maintain sla_catalog.csv with regulator → SLA hours; (3) ingest notification-portal tracking IDs; (4) schedule every 15 min; (5) integrate with SOAR to auto-escalate.",
              "z": "Gantt of incidents vs SLA deadlines, single-value 'incidents notified on time (90d)', breakdown by regulator.",
              "kfp": "Incidents whose submission happens offline with post-hoc upload require a timezone-normalised submitted_at; mis-mapped timezones produce false SLA breaches.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest classifications from ServiceNow / ES incident index; (2) maintain sla_catalog.csv with regulator → SLA hours; (3) ingest notification-portal tracking IDs; (4) schedule every 15 min; (5) integrate with SOAR to auto-escalate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup incident_register.csv\n| where classification=\"breach\"\n| eval applicable_regs=split(applicable_regs,\",\")\n| mvexpand applicable_regs\n| lookup sla_catalog.csv regulator as applicable_regs OUTPUT sla_hours\n| eval deadline=strptime(classified_at,\"%Y-%m-%dT%H:%M:%SZ\")+sla_hours*3600\n| join incident_id applicable_regs [search index=notifications sourcetype=regulator:submission | fields incident_id applicable_regs tracking_id submitted_at]\n| eval status=case(isnull(tracking_id) AND now()>deadline, \"breach\", isnull(tracking_id) AND (deadline-now())<5400, \"imminent\", isnotnull(tracking_id), \"submitted\")\n| table incident_id applicable_regs deadline submitted_at status\n```\n\nUnderstanding this SPL\n\n**Multi-regulator breach-notification SLA tracker (24h NIS2 / 72h GDPR / 72h HIPAA)** — Most enforcement actions against supervisory breaches (ICO, DPC, BaFin) focus on notification timeliness. This UC turns the obligation into a visible KPI instead of a post-facto audit finding.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Filters the current rows with `where classification=\"breach\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **applicable_regs** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **deadline** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Multi-regulator breach-notification SLA tracker (24h NIS2 / 72h GDPR / 72h HIPAA)**): table incident_id applicable_regs deadline submitted_at status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Gantt of incidents vs SLA deadlines, single-value 'incidents notified on time (90d)', breakdown by regulator.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you see when incident clocks are tight so you can meet notification duties even after hours.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "GDPR",
                "HIPAA Security",
                "NIS2"
              ],
              "a": [
                "Alerts",
                "Ticket_Management"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.26 (Leakage reporting) is enforced — Splunk UC-22.39.1: Multi-regulator breach-notification SLA tracker (24h NIS2 / 72h GDPR / 72h HIPAA).",
                  "ea": "Saved search 'UC-22.39.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.19",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.19 (Reporting of major ICT-related incidents) is enforced — Splunk UC-22.39.1: Multi-regulator breach-notification SLA tracker (24h NIS2 / 72h GDPR / 72h HIPAA).",
                  "ea": "Saved search 'UC-22.39.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.33",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.33 (Breach notification to supervisory authority) is enforced — Splunk UC-22.39.1: Multi-regulator breach-notification SLA tracker (24h NIS2 / 72h GDPR / 72h HIPAA).",
                  "ea": "Saved search 'UC-22.39.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(6)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Security §164.308(a)(6) (Security incident procedures) is enforced — Splunk UC-22.39.1: Multi-regulator breach-notification SLA tracker (24h NIS2 / 72h GDPR / 72h HIPAA).",
                  "ea": "Saved search 'UC-22.39.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.48",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.48 (Breach notification) is enforced — Splunk UC-22.39.1: Multi-regulator breach-notification SLA tracker (24h NIS2 / 72h GDPR / 72h HIPAA).",
                  "ea": "Saved search 'UC-22.39.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIS2 Art.23 (Reporting obligations) is enforced — Splunk UC-22.39.1: Multi-regulator breach-notification SLA tracker (24h NIS2 / 72h GDPR / 72h HIPAA).",
                  "ea": "Saved search 'UC-22.39.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "Swiss nFADP",
                  "v": "2020 revision",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that Swiss nFADP Art.24 (Data breach notification) is enforced — Splunk UC-22.39.1: Multi-regulator breach-notification SLA tracker (24h NIS2 / 72h GDPR / 72h HIPAA).",
                  "ea": "Saved search 'UC-22.39.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.33",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that UK GDPR Art.33 is enforced — Splunk UC-22.39.1: Multi-regulator breach-notification SLA tracker (24h NIS2 / 72h GDPR / 72h HIPAA).",
                  "ea": "Saved search 'UC-22.39.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.39.2",
              "n": "Regulator-portal submission evidence — one-way API acknowledgement audit",
              "c": "high",
              "f": "intermediate",
              "v": "Addresses the repeatedly-cited 'we submitted but the authority never acknowledged' defence that now carries no weight in supervisory decisions.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=notifications sourcetype=regulator:submission earliest=-72h\n| stats values(tracking_id) as tracking_id min(_time) as submitted by incident_id regulator\n| join type=outer incident_id regulator [search index=notifications sourcetype=regulator:ack earliest=-72h | stats values(tracking_id) as ack_tracking_id max(_time) as acked by incident_id regulator]\n| where isnull(ack_tracking_id) OR (acked-submitted)>900\n| table incident_id regulator tracking_id submitted acked",
              "m": "(1) Instrument submission and ack channels to emit matching tracking_ids; (2) schedule hourly; (3) on miss, SOAR opens an incident-notification retry case.",
              "z": "Funnel (submitted → acked), bar chart of delays by regulator, single-value 'submissions acked within 15 min'.",
              "kfp": "Regulator portals that batch acks daily produce legitimate lag; configure a per-regulator grace window.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument submission and ack channels to emit matching tracking_ids; (2) schedule hourly; (3) on miss, SOAR opens an incident-notification retry case.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=notifications sourcetype=regulator:submission earliest=-72h\n| stats values(tracking_id) as tracking_id min(_time) as submitted by incident_id regulator\n| join type=outer incident_id regulator [search index=notifications sourcetype=regulator:ack earliest=-72h | stats values(tracking_id) as ack_tracking_id max(_time) as acked by incident_id regulator]\n| where isnull(ack_tracking_id) OR (acked-submitted)>900\n| table incident_id regulator tracking_id submitted acked\n```\n\nUnderstanding this SPL\n\n**Regulator-portal submission evidence — one-way API acknowledgement audit** — Addresses the repeatedly-cited 'we submitted but the authority never acknowledged' defence that now carries no weight in supervisory decisions.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: notifications; **sourcetype**: regulator:submission. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=notifications, sourcetype=regulator:submission, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by incident_id regulator** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(ack_tracking_id) OR (acked-submitted)>900` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Regulator-portal submission evidence — one-way API acknowledgement audit**): table incident_id regulator tracking_id submitted acked\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Funnel (submitted → acked), bar chart of delays by regulator, single-value 'submissions acked within 15 min'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you see when incident clocks are tight so you can meet notification duties even after hours.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR",
                "NIS2"
              ],
              "a": [
                "Ticket_Management"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.26 (Leakage reporting) is enforced — Splunk UC-22.39.2: Regulator-portal submission evidence — one-way API acknowledgement audit.",
                  "ea": "Saved search 'UC-22.39.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.33",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.33 (Breach notification to supervisory authority) is enforced — Splunk UC-22.39.2: Regulator-portal submission evidence — one-way API acknowledgement audit.",
                  "ea": "Saved search 'UC-22.39.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.48",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.48 (Breach notification) is enforced — Splunk UC-22.39.2: Regulator-portal submission evidence — one-way API acknowledgement audit.",
                  "ea": "Saved search 'UC-22.39.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIS2 Art.23 (Reporting obligations) is enforced — Splunk UC-22.39.2: Regulator-portal submission evidence — one-way API acknowledgement audit.",
                  "ea": "Saved search 'UC-22.39.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "Swiss nFADP",
                  "v": "2020 revision",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that Swiss nFADP Art.24 (Data breach notification) is enforced — Splunk UC-22.39.2: Regulator-portal submission evidence — one-way API acknowledgement audit.",
                  "ea": "Saved search 'UC-22.39.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.33",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that UK GDPR Art.33 is enforced — Splunk UC-22.39.2: Regulator-portal submission evidence — one-way API acknowledgement audit.",
                  "ea": "Saved search 'UC-22.39.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.39.3",
              "n": "Data-subject breach communication timeline tracker (Art.34 / §164.404)",
              "c": "high",
              "f": "intermediate",
              "v": "Data-subject communication is routinely cited in enforcement decisions (Meta, Uber, British Airways); measuring it is the only credible evidence of 'taken reasonable efforts'.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=comms sourcetype=breach:notification earliest=-90d\n| stats count(eval(status=\"delivered\")) as delivered count(eval(status=\"bounced_handled\")) as fallback count as total by campaign_id\n| eval ok=(delivered+fallback)/total\n| lookup breach_campaigns.csv campaign_id OUTPUT classification classified_at\n| eval age_days=round((now()-strptime(classified_at,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,0)\n| where age_days>=50 AND ok<0.95\n| table campaign_id classified_at age_days total delivered fallback ok",
              "m": "(1) Ingest campaign events from the messaging platform; (2) join to breach_campaigns.csv with classification timestamps; (3) run daily; (4) escalate via SOAR.",
              "z": "Per-campaign completion funnel, aging bar chart, single-value '% of campaigns on track'.",
              "kfp": "Subjects without deliverable contact info require the fall-back channel; bounces without fall-back handling should be followed up manually.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest campaign events from the messaging platform; (2) join to breach_campaigns.csv with classification timestamps; (3) run daily; (4) escalate via SOAR.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=comms sourcetype=breach:notification earliest=-90d\n| stats count(eval(status=\"delivered\")) as delivered count(eval(status=\"bounced_handled\")) as fallback count as total by campaign_id\n| eval ok=(delivered+fallback)/total\n| lookup breach_campaigns.csv campaign_id OUTPUT classification classified_at\n| eval age_days=round((now()-strptime(classified_at,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,0)\n| where age_days>=50 AND ok<0.95\n| table campaign_id classified_at age_days total delivered fallback ok\n```\n\nUnderstanding this SPL\n\n**Data-subject breach communication timeline tracker (Art.34 / §164.404)** — Data-subject communication is routinely cited in enforcement decisions (Meta, Uber, British Airways); measuring it is the only credible evidence of 'taken reasonable efforts'.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: comms; **sourcetype**: breach:notification. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=comms, sourcetype=breach:notification, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by campaign_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>=50 AND ok<0.95` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Data-subject breach communication timeline tracker (Art.34 / §164.404)**): table campaign_id classified_at age_days total delivered fallback ok\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Per-campaign completion funnel, aging bar chart, single-value '% of campaigns on track'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when incident clocks are tight so you can meet notification duties even after hours.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR",
                "HIPAA Security"
              ],
              "a": [
                "Ticket_Management"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.26",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.26 (Leakage reporting) is enforced — Splunk UC-22.39.3: Data-subject breach communication timeline tracker (Art.34 / §164.404).",
                  "ea": "Saved search 'UC-22.39.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.150",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CCPA/CPRA §1798.150 (Private right of action for data breaches) is enforced — Splunk UC-22.39.3: Data-subject breach communication timeline tracker (Art.34 / §164.404).",
                  "ea": "Saved search 'UC-22.39.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.34",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.34 (Breach communication to data subjects) is enforced — Splunk UC-22.39.3: Data-subject breach communication timeline tracker (Art.34 / §164.404).",
                  "ea": "Saved search 'UC-22.39.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.404",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA Security §164.404 is enforced — Splunk UC-22.39.3: Data-subject breach communication timeline tracker (Art.34 / §164.404).",
                  "ea": "Saved search 'UC-22.39.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.48",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.48 (Breach notification) is enforced — Splunk UC-22.39.3: Data-subject breach communication timeline tracker (Art.34 / §164.404).",
                  "ea": "Saved search 'UC-22.39.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.34",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.34 is enforced — Splunk UC-22.39.3: Data-subject breach communication timeline tracker (Art.34 / §164.404).",
                  "ea": "Saved search 'UC-22.39.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.40",
          "n": "Privileged access evidence",
          "u": [
            {
              "i": "22.40.1",
              "n": "Privileged session recording — missing recordings for elevated sessions",
              "c": "critical",
              "f": "intermediate",
              "v": "Converts 'we have PAM' into 'we can prove every privileged session was recorded' — the question auditors actually ask.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=idp sourcetype=azuread:audit OR sourcetype=okta:audit earliest=-1h eventtype=privilege_elevation\n| rename user as admin_user\n| join type=outer admin_user [search index=pam sourcetype=cyberark:session OR sourcetype=beyondtrust:session earliest=-1h | fields admin_user recording_id session_id target_host _time]\n| where isnull(recording_id)\n| table _time admin_user target_host session_id",
              "m": "(1) Ingest IdP PIM/PAM activation events; (2) ingest PAM session registration events; (3) run hourly; (4) on miss, SOAR opens a P2 case for the session.",
              "z": "Time chart of recorded-vs-unrecorded elevations, table of gaps, single-value 'privileged sessions with recording (7d)'.",
              "kfp": "Elevations for emergency service accounts recorded outside the PAM (agent-based) require a secondary allowlist.",
              "refs": "[NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [MITRE ATT&CK — T1078.004 Valid Accounts: Cloud Accounts](https://attack.mitre.org/techniques/T1078/004/)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest IdP PIM/PAM activation events; (2) ingest PAM session registration events; (3) run hourly; (4) on miss, SOAR opens a P2 case for the session.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=idp sourcetype=azuread:audit OR sourcetype=okta:audit earliest=-1h eventtype=privilege_elevation\n| rename user as admin_user\n| join type=outer admin_user [search index=pam sourcetype=cyberark:session OR sourcetype=beyondtrust:session earliest=-1h | fields admin_user recording_id session_id target_host _time]\n| where isnull(recording_id)\n| table _time admin_user target_host session_id\n```\n\nUnderstanding this SPL\n\n**Privileged session recording — missing recordings for elevated sessions** — Converts 'we have PAM' into 'we can prove every privileged session was recorded' — the question auditors actually ask.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: idp; **sourcetype**: azuread:audit, okta:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=idp, sourcetype=azuread:audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(recording_id)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Privileged session recording — missing recordings for elevated sessions**): table _time admin_user target_host session_id\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart of recorded-vs-unrecorded elevations, table of gaps, single-value 'privileged sessions with recording (7d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you show who used powerful accounts and when, so access reviews and investigations have a clear story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53",
                "PCI DSS",
                "SOC 2",
                "SOX ITGC"
              ],
              "a": [
                "Authentication",
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 AC-6 (Least privilege) is enforced — Splunk UC-22.40.1: Privileged session recording — missing recordings for elevated sessions.",
                  "ea": "Saved search 'UC-22.40.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "10.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI DSS 10.2 (Audit logs captured for all system components) is enforced — Splunk UC-22.40.1: Privileged session recording — missing recordings for elevated sessions.",
                  "ea": "Saved search 'UC-22.40.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC6.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC 2 CC6.1 (Logical access controls) is enforced — Splunk UC-22.40.1: Privileged session recording — missing recordings for elevated sessions.",
                  "ea": "Saved search 'UC-22.40.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.Privileged",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX ITGC ITGC.AccessMgmt.Privileged (Privileged access) is enforced — Splunk UC-22.40.1: Privileged session recording — missing recordings for elevated sessions.",
                  "ea": "Saved search 'UC-22.40.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.40.2",
              "n": "Break-glass account usage review with mandatory post-use approval",
              "c": "critical",
              "f": "intermediate",
              "v": "Break-glass misuse is the most audited sub-case of privileged access; a missing approval is a textbook SOX-ITGC deficiency.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=pam breakglass=true earliest=-14d\n| stats min(_time) as started max(_time) as ended values(session_id) as session_id by account initiator\n| join account session_id [search index=grc sourcetype=breakglass:approval | stats values(approver) as approver max(_time) as approved_at by account session_id]\n| where isnull(approved_at) AND (now()-ended)>172800\n| table account initiator started ended session_id approved_at",
              "m": "(1) Tag break-glass accounts at vault time; (2) emit approval events from the GRC tool; (3) schedule every 30 min; (4) on miss, notify CISO and engage SOAR.",
              "z": "Table of break-glass activations with approval status, bar chart of usages per quarter, single-value 'activations with 48h approval'.",
              "kfp": "Approval events tagged against the wrong session_id will look like misses; maintain strict id-matching contracts.",
              "refs": "[NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [ISO/IEC 27001:2022](https://www.iso.org/standard/82875.html), [PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag break-glass accounts at vault time; (2) emit approval events from the GRC tool; (3) schedule every 30 min; (4) on miss, notify CISO and engage SOAR.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam breakglass=true earliest=-14d\n| stats min(_time) as started max(_time) as ended values(session_id) as session_id by account initiator\n| join account session_id [search index=grc sourcetype=breakglass:approval | stats values(approver) as approver max(_time) as approved_at by account session_id]\n| where isnull(approved_at) AND (now()-ended)>172800\n| table account initiator started ended session_id approved_at\n```\n\nUnderstanding this SPL\n\n**Break-glass account usage review with mandatory post-use approval** — Break-glass misuse is the most audited sub-case of privileged access; a missing approval is a textbook SOX-ITGC deficiency.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by account initiator** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(approved_at) AND (now()-ended)>172800` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Break-glass account usage review with mandatory post-use approval**): table account initiator started ended session_id approved_at\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of break-glass activations with approval status, bar chart of usages per quarter, single-value 'activations with 48h approval'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you show who used powerful accounts and when, so access reviews and investigations have a clear story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001",
                "NIST 800-53",
                "SOX ITGC"
              ],
              "a": [
                "Authentication",
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.15",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO 27001 A.5.15 (Access control) is enforced — Splunk UC-22.40.2: Break-glass account usage review with mandatory post-use approval.",
                  "ea": "Saved search 'UC-22.40.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 AC-6 (Least privilege) is enforced — Splunk UC-22.40.2: Break-glass account usage review with mandatory post-use approval.",
                  "ea": "Saved search 'UC-22.40.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.Privileged",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX ITGC ITGC.AccessMgmt.Privileged (Privileged access) is enforced — Splunk UC-22.40.2: Break-glass account usage review with mandatory post-use approval.",
                  "ea": "Saved search 'UC-22.40.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.40.3",
              "n": "Periodic access review SLA — stale certifications by control owner",
              "c": "high",
              "f": "intermediate",
              "v": "Access review is the #1 ITGC deficiency in typical public-company audits; turning it into a green/red KPI changes the operating rhythm.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=grc sourcetype=iam:access_review earliest=-365d\n| stats max(_time) as last_review by app_id reviewer\n| eval days_since=round((now()-last_review)/86400,0)\n| where days_since>90\n| lookup app_inventory.csv app_id OUTPUT owner criticality\n| table owner app_id criticality last_review days_since",
              "m": "(1) Ingest review-completion events from the GRC tool; (2) refresh app_inventory.csv weekly; (3) run daily.",
              "z": "Heatmap of owner × application age, table of overdue, single-value '% of apps reviewed within 90 days'.",
              "kfp": "New applications added in-cycle need a baseline review; track onboarding date.",
              "refs": "[NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [ISO/IEC 27001:2022](https://www.iso.org/standard/82875.html), [PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest review-completion events from the GRC tool; (2) refresh app_inventory.csv weekly; (3) run daily.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=iam:access_review earliest=-365d\n| stats max(_time) as last_review by app_id reviewer\n| eval days_since=round((now()-last_review)/86400,0)\n| where days_since>90\n| lookup app_inventory.csv app_id OUTPUT owner criticality\n| table owner app_id criticality last_review days_since\n```\n\nUnderstanding this SPL\n\n**Periodic access review SLA — stale certifications by control owner** — Access review is the #1 ITGC deficiency in typical public-company audits; turning it into a green/red KPI changes the operating rhythm.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: iam:access_review. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=iam:access_review, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by app_id reviewer** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since>90` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Pipeline stage (see **Periodic access review SLA — stale certifications by control owner**): table owner app_id criticality last_review days_since\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of owner × application age, table of overdue, single-value '% of apps reviewed within 90 days'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you show who used powerful accounts and when, so access reviews and investigations have a clear story.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001",
                "NIST 800-53",
                "SOX ITGC"
              ],
              "a": [
                "Authentication",
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.18",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO 27001 A.5.18 (Access rights review) is enforced — Splunk UC-22.40.3: Periodic access review SLA — stale certifications by control owner.",
                  "ea": "Saved search 'UC-22.40.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AC-2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 AC-2 (Account management) is enforced — Splunk UC-22.40.3: Periodic access review SLA — stale certifications by control owner.",
                  "ea": "Saved search 'UC-22.40.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.Review",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX ITGC ITGC.AccessMgmt.Review (Periodic access review) is enforced — Splunk UC-22.40.3: Periodic access review SLA — stale certifications by control owner.",
                  "ea": "Saved search 'UC-22.40.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.41",
          "n": "Encryption and key management attestation",
          "u": [
            {
              "i": "22.41.1",
              "n": "Encryption-at-rest coverage gap — unencrypted storage with regulated data",
              "c": "critical",
              "f": "intermediate",
              "v": "Unencrypted regulated data is the most common PCI 3.5 / HIPAA finding — preventing it from appearing on the audit spreadsheet is the goal.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "| rest /services/cloud_storage/inventory splunk_server=* | where regulated_data=\"true\"\n| stats values(encryption_state) as state by volume_id region tenant\n| where mvfind(state, \"enabled\")<0\n| lookup data_classification.csv volume_id OUTPUT classification\n| table tenant region volume_id classification state",
              "m": "(1) Ingest cloud-inventory telemetry; (2) tag volumes via data_classification.csv; (3) run daily; (4) on finding, auto-open a CISO review.",
              "z": "Stacked bar of encrypted vs unencrypted by tenant, table of findings, single-value '% of regulated volumes encrypted'.",
              "kfp": "Volumes undergoing pre-encryption migration require a time-bounded allowance (migration_end_date).",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest cloud-inventory telemetry; (2) tag volumes via data_classification.csv; (3) run daily; (4) on finding, auto-open a CISO review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/cloud_storage/inventory splunk_server=* | where regulated_data=\"true\"\n| stats values(encryption_state) as state by volume_id region tenant\n| where mvfind(state, \"enabled\")<0\n| lookup data_classification.csv volume_id OUTPUT classification\n| table tenant region volume_id classification state\n```\n\nUnderstanding this SPL\n\n**Encryption-at-rest coverage gap — unencrypted storage with regulated data** — Unencrypted regulated data is the most common PCI 3.5 / HIPAA finding — preventing it from appearing on the audit spreadsheet is the goal.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Filters the current rows with `where regulated_data=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by volume_id region tenant** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mvfind(state, \"enabled\")<0` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Pipeline stage (see **Encryption-at-rest coverage gap — unencrypted storage with regulated data**): table tenant region volume_id classification state\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar of encrypted vs unencrypted by tenant, table of findings, single-value '% of regulated volumes encrypted'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you see encryption coverage and gaps so you can show protections stay current across systems.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "GDPR",
                "HIPAA Security",
                "NIST 800-53",
                "PCI DSS"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "APPI",
                  "v": "2022 amendments",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that APPI Art.23 (Security control action) is enforced — Splunk UC-22.41.1: Encryption-at-rest coverage gap — unencrypted storage with regulated data.",
                  "ea": "Saved search 'UC-22.41.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.32 (Security of processing) is enforced — Splunk UC-22.41.1: Encryption-at-rest coverage gap — unencrypted storage with regulated data.",
                  "ea": "Saved search 'UC-22.41.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(a)(2)(iv)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA Security §164.312(a)(2)(iv) (Encryption and decryption) is enforced — Splunk UC-22.41.1: Encryption-at-rest coverage gap — unencrypted storage with regulated data.",
                  "ea": "Saved search 'UC-22.41.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.46 (Security measures) is enforced — Splunk UC-22.41.1: Encryption-at-rest coverage gap — unencrypted storage with regulated data.",
                  "ea": "Saved search 'UC-22.41.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SC-13",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 SC-13 (Cryptographic protection) is enforced — Splunk UC-22.41.1: Encryption-at-rest coverage gap — unencrypted storage with regulated data.",
                  "ea": "Saved search 'UC-22.41.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "3.5",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI DSS 3.5 (PAN protection) is enforced — Splunk UC-22.41.1: Encryption-at-rest coverage gap — unencrypted storage with regulated data.",
                  "ea": "Saved search 'UC-22.41.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.32",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.32 (Security of processing) is enforced — Splunk UC-22.41.1: Encryption-at-rest coverage gap — unencrypted storage with regulated data.",
                  "ea": "Saved search 'UC-22.41.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.41.2",
              "n": "Certificate / TLS posture — weak cipher and expired-cert detection",
              "c": "high",
              "f": "intermediate",
              "v": "Moves TLS posture from 'we scan quarterly' to 'we proved every TLS endpoint was compliant for every day of the period'.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=tls sourcetype=tls:scan earliest=-1h\n| eval days_to_expiry=round((expires-now())/86400,0)\n| where days_to_expiry<21 OR protocol!=\"TLSv1.2\" AND protocol!=\"TLSv1.3\" OR NOT match(cipher, \"^TLS_(AES|CHACHA20|ECDHE)\")\n| table host days_to_expiry protocol cipher",
              "m": "(1) Run scanner (SSL Labs CLI, OpenSSL-based scripts, Qualys SSL Labs); (2) feed to Splunk; (3) schedule hourly; (4) on finding, ticket via SOAR.",
              "z": "Table of non-compliant hosts, stacked bar of protocol distribution, single-value '% of endpoints with strong TLS'.",
              "kfp": "Internal / lab endpoints may legitimately use non-standard configurations; maintain scan_scope.csv to include only in-scope systems.",
              "refs": "[PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Run scanner (SSL Labs CLI, OpenSSL-based scripts, Qualys SSL Labs); (2) feed to Splunk; (3) schedule hourly; (4) on finding, ticket via SOAR.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=tls sourcetype=tls:scan earliest=-1h\n| eval days_to_expiry=round((expires-now())/86400,0)\n| where days_to_expiry<21 OR protocol!=\"TLSv1.2\" AND protocol!=\"TLSv1.3\" OR NOT match(cipher, \"^TLS_(AES|CHACHA20|ECDHE)\")\n| table host days_to_expiry protocol cipher\n```\n\nUnderstanding this SPL\n\n**Certificate / TLS posture — weak cipher and expired-cert detection** — Moves TLS posture from 'we scan quarterly' to 'we proved every TLS endpoint was compliant for every day of the period'.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: tls; **sourcetype**: tls:scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=tls, sourcetype=tls:scan, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_to_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_expiry<21 OR protocol!=\"TLSv1.2\" AND protocol!=\"TLSv1.3\" OR NOT match(cipher, \"^TLS_(AES|CHACHA20|ECDHE)\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Certificate / TLS posture — weak cipher and expired-cert detection**): table host days_to_expiry protocol cipher\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of non-compliant hosts, stacked bar of protocol distribution, single-value '% of endpoints with strong TLS'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see encryption coverage and gaps so you can show protections stay current across systems.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA Security",
                "NIS2",
                "NIST 800-53",
                "PCI DSS"
              ],
              "a": [
                "Certificates"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.312(e)(1)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA Security §164.312(e)(1) (Transmission security) is enforced — Splunk UC-22.41.2: Certificate / TLS posture — weak cipher and expired-cert detection.",
                  "ea": "Saved search 'UC-22.41.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(h)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIS2 Art.21(2)(h) (Cryptography and encryption) is enforced — Splunk UC-22.41.2: Certificate / TLS posture — weak cipher and expired-cert detection.",
                  "ea": "Saved search 'UC-22.41.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SC-8",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 SC-8 (Transmission confidentiality and integrity) is enforced — Splunk UC-22.41.2: Certificate / TLS posture — weak cipher and expired-cert detection.",
                  "ea": "Saved search 'UC-22.41.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "4.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI DSS 4.2 (Strong cryptography for CHD in transit) is enforced — Splunk UC-22.41.2: Certificate / TLS posture — weak cipher and expired-cert detection.",
                  "ea": "Saved search 'UC-22.41.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.41.3",
              "n": "Key rotation attestation — KMS/HSM rotation SLA tracker",
              "c": "high",
              "f": "advanced",
              "v": "Gives the cryptographic custodian the ability to defend the 'our keys are rotated' claim with per-key evidence rather than a policy document.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=kms (sourcetype=aws:kms OR sourcetype=azure:keyvault OR sourcetype=gcp:kms) action=rotate\n| stats max(_time) as last_rotation by key_id tenant\n| lookup key_policy.csv key_id OUTPUT cryptoperiod_days key_type\n| where key_type!=\"archive\"\n| eval age_days=round((now()-last_rotation)/86400,0)\n| where age_days>cryptoperiod_days\n| table tenant key_id last_rotation age_days cryptoperiod_days",
              "m": "(1) Ingest KMS audit logs; (2) maintain key_policy.csv; (3) run daily; (4) escalate via SOAR to the crypto custodian.",
              "z": "Table of overdue keys, scatter plot age vs cryptoperiod, single-value '% of keys within cryptoperiod'.",
              "kfp": "Keys under a temporary freeze (key_status=frozen) for investigation should be excluded via a status filter.",
              "refs": "[PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest KMS audit logs; (2) maintain key_policy.csv; (3) run daily; (4) escalate via SOAR to the crypto custodian.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kms (sourcetype=aws:kms OR sourcetype=azure:keyvault OR sourcetype=gcp:kms) action=rotate\n| stats max(_time) as last_rotation by key_id tenant\n| lookup key_policy.csv key_id OUTPUT cryptoperiod_days key_type\n| where key_type!=\"archive\"\n| eval age_days=round((now()-last_rotation)/86400,0)\n| where age_days>cryptoperiod_days\n| table tenant key_id last_rotation age_days cryptoperiod_days\n```\n\nUnderstanding this SPL\n\n**Key rotation attestation — KMS/HSM rotation SLA tracker** — Gives the cryptographic custodian the ability to defend the 'our keys are rotated' claim with per-key evidence rather than a policy document.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kms; **sourcetype**: aws:kms, azure:keyvault, gcp:kms. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kms, sourcetype=aws:kms. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by key_id tenant** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where key_type!=\"archive\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>cryptoperiod_days` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Key rotation attestation — KMS/HSM rotation SLA tracker**): table tenant key_id last_rotation age_days cryptoperiod_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of overdue keys, scatter plot age vs cryptoperiod, single-value '% of keys within cryptoperiod'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you see encryption coverage and gaps so you can show protections stay current across systems.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "DORA",
                "NIST 800-53",
                "PCI DSS"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.9 (Protection and prevention) is enforced — Splunk UC-22.41.3: Key rotation attestation — KMS/HSM rotation SLA tracker.",
                  "ea": "Saved search 'UC-22.41.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SC-13",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 SC-13 (Cryptographic protection) is enforced — Splunk UC-22.41.3: Key rotation attestation — KMS/HSM rotation SLA tracker.",
                  "ea": "Saved search 'UC-22.41.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "3.6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI DSS 3.6 is enforced — Splunk UC-22.41.3: Key rotation attestation — KMS/HSM rotation SLA tracker.",
                  "ea": "Saved search 'UC-22.41.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 33.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.42",
          "n": "Change management and configuration baseline",
          "u": [
            {
              "i": "22.42.1",
              "n": "Unauthorized production change — no approved CR matches the observed change",
              "c": "critical",
              "f": "intermediate",
              "v": "Replaces the standard quarterly-sample audit with continuous evidence of the authorisation gate.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=change sourcetype=prod:change earliest=-1h\n| join type=outer change_id [search index=itsm sourcetype=servicenow:change earliest=-1h state=approved | fields change_id approved_at]\n| where isnull(approved_at)\n| table _time change_id system applied_by system_owner",
              "m": "(1) Force CI/CD tools to emit change events tagged with change_id; (2) ingest CR state from the ITSM platform; (3) run hourly; (4) on finding, escalate via SOAR.",
              "z": "Stacked bar approved vs unauthorised by system, table of findings, single-value '% authorised changes'.",
              "kfp": "Emergency changes executed under break-glass must reference a retro-CR id; without one they look unauthorised.",
              "refs": "[AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [MITRE ATT&CK — T1562.001 Impair Defenses: Disable Tools](https://attack.mitre.org/techniques/T1562/001/)",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Force CI/CD tools to emit change events tagged with change_id; (2) ingest CR state from the ITSM platform; (3) run hourly; (4) on finding, escalate via SOAR.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=change sourcetype=prod:change earliest=-1h\n| join type=outer change_id [search index=itsm sourcetype=servicenow:change earliest=-1h state=approved | fields change_id approved_at]\n| where isnull(approved_at)\n| table _time change_id system applied_by system_owner\n```\n\nUnderstanding this SPL\n\n**Unauthorized production change — no approved CR matches the observed change** — Replaces the standard quarterly-sample audit with continuous evidence of the authorisation gate.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: change; **sourcetype**: prod:change. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=change, sourcetype=prod:change, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(approved_at)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Unauthorized production change — no approved CR matches the observed change**): table _time change_id system applied_by system_owner\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar approved vs unauthorised by system, table of findings, single-value '% authorised changes'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when configs drift from what you baselined, so only approved change moves through.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53",
                "SOC 2",
                "SOX ITGC"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CM-3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 CM-3 is enforced — Splunk UC-22.42.1: Unauthorized production change — no approved CR matches the observed change.",
                  "ea": "Saved search 'UC-22.42.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC8.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC 2 CC8.1 (Change management) is enforced — Splunk UC-22.42.1: Unauthorized production change — no approved CR matches the observed change.",
                  "ea": "Saved search 'UC-22.42.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Authorization",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX ITGC ITGC.ChangeMgmt.Authorization (Change authorised) is enforced — Splunk UC-22.42.1: Unauthorized production change — no approved CR matches the observed change.",
                  "ea": "Saved search 'UC-22.42.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.42.2",
              "n": "Configuration baseline drift — regulated hosts deviating from CIS benchmark",
              "c": "high",
              "f": "advanced",
              "v": "Turns baseline drift from a quarterly scan artefact into a continuous control that auditors can review any day.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=baseline sourcetype=osquery:cis OR sourcetype=ciscat earliest=-24h severity>=medium\n| join host [search index=asset regulated=true | fields host tenant classification]\n| table host tenant classification rule_id severity deviation",
              "m": "(1) Schedule osquery/CIS-CAT agents; (2) publish baselines via config-management tooling; (3) run daily; (4) on finding, open a P2 ticket.",
              "z": "Heatmap of tenants × controls, bar chart of findings by severity, single-value 'regulated hosts in compliance'.",
              "kfp": "Hosts mid-patching cycle can appear out-of-baseline briefly; tolerate a 4-hour grace window.",
              "refs": "[NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Schedule osquery/CIS-CAT agents; (2) publish baselines via config-management tooling; (3) run daily; (4) on finding, open a P2 ticket.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=baseline sourcetype=osquery:cis OR sourcetype=ciscat earliest=-24h severity>=medium\n| join host [search index=asset regulated=true | fields host tenant classification]\n| table host tenant classification rule_id severity deviation\n```\n\nUnderstanding this SPL\n\n**Configuration baseline drift — regulated hosts deviating from CIS benchmark** — Turns baseline drift from a quarterly scan artefact into a continuous control that auditors can review any day.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: baseline; **sourcetype**: osquery:cis, ciscat. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=baseline, sourcetype=osquery:cis, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Configuration baseline drift — regulated hosts deviating from CIS benchmark**): table host tenant classification rule_id severity deviation\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of tenants × controls, bar chart of findings by severity, single-value 'regulated hosts in compliance'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when configs drift from what you baselined, so only approved change moves through.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "NIST 800-53",
                "PCI DSS"
              ],
              "a": [
                "Change",
                "Vulnerabilities"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CM-2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 CM-2 (Baseline configuration) is enforced — Splunk UC-22.42.2: Configuration baseline drift — regulated hosts deviating from CIS benchmark.",
                  "ea": "Saved search 'UC-22.42.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CM-6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 CM-6 (Configuration settings) is enforced — Splunk UC-22.42.2: Configuration baseline drift — regulated hosts deviating from CIS benchmark.",
                  "ea": "Saved search 'UC-22.42.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "1.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI DSS 1.2 (Network security controls configuration) is enforced — Splunk UC-22.42.2: Configuration baseline drift — regulated hosts deviating from CIS benchmark.",
                  "ea": "Saved search 'UC-22.42.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.43",
          "n": "Vulnerability management and patch SLAs",
          "u": [
            {
              "i": "22.43.1",
              "n": "Critical vulnerability SLA tracker — unpatched 30+ days with exploited-in-the-wild indicator",
              "c": "critical",
              "f": "intermediate",
              "v": "A missed critical patch is the #1 root cause of breaches — proving remediation timeliness is the matching audit control.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=vuln sourcetype=tenable:vm OR sourcetype=qualys:vm OR sourcetype=rapid7:vm earliest=-60d cvss_v3_base>=9.0\n| stats min(_time) as first_seen max(_time) as last_seen values(cve) as cves by host\n| eval age_days=round((now()-first_seen)/86400,0)\n| where age_days>30\n| join type=outer host cves [search index=vuln action=remediated]\n| where isnull(remediated_at)\n| table host cves first_seen age_days",
              "m": "(1) Ingest scanner data into Splunk via TA; (2) consume CISA KEV list into exploited_in_wild.csv; (3) run daily; (4) integrate with SOAR to auto-open remediation.",
              "z": "Bar chart of age buckets, table of breaches, single-value '% of critical CVEs remediated within 30 days'.",
              "kfp": "Compensating controls (e.g. WAF rules) can legitimately extend the SLA; capture them in vuln_exceptions.csv.",
              "refs": "[NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [ISO/IEC 27001:2022](https://www.iso.org/standard/82875.html), [MITRE ATT&CK — T1190 Exploit Public-Facing Application](https://attack.mitre.org/techniques/T1190/), [CISA Known Exploited Vulnerabilities Catalog](https://www.cisa.gov/known-exploited-vulnerabilities-catalog)",
              "mitre": [
                "T1190"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest scanner data into Splunk via TA; (2) consume CISA KEV list into exploited_in_wild.csv; (3) run daily; (4) integrate with SOAR to auto-open remediation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln sourcetype=tenable:vm OR sourcetype=qualys:vm OR sourcetype=rapid7:vm earliest=-60d cvss_v3_base>=9.0\n| stats min(_time) as first_seen max(_time) as last_seen values(cve) as cves by host\n| eval age_days=round((now()-first_seen)/86400,0)\n| where age_days>30\n| join type=outer host cves [search index=vuln action=remediated]\n| where isnull(remediated_at)\n| table host cves first_seen age_days\n```\n\nUnderstanding this SPL\n\n**Critical vulnerability SLA tracker — unpatched 30+ days with exploited-in-the-wild indicator** — A missed critical patch is the #1 root cause of breaches — proving remediation timeliness is the matching audit control.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln; **sourcetype**: tenable:vm, qualys:vm, rapid7:vm. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, sourcetype=tenable:vm, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>30` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(remediated_at)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Critical vulnerability SLA tracker — unpatched 30+ days with exploited-in-the-wild indicator**): table host cves first_seen age_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart of age buckets, table of breaches, single-value '% of critical CVEs remediated within 30 days'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see patch timing against your service promises so you can explain delays with evidence.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001",
                "NIS2",
                "NIST 800-53",
                "PCI DSS"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.8.8",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO 27001 A.8.8 is enforced — Splunk UC-22.43.1: Critical vulnerability SLA tracker — unpatched 30+ days with exploited-in-the-wild indicator.",
                  "ea": "Saved search 'UC-22.43.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(e) (Security in acquisition, development and maintenance) is enforced — Splunk UC-22.43.1: Critical vulnerability SLA tracker — unpatched 30+ days with exploited-in-the-wild indicator.",
                  "ea": "Saved search 'UC-22.43.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "RA-5",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 RA-5 (Vulnerability scanning) is enforced — Splunk UC-22.43.1: Critical vulnerability SLA tracker — unpatched 30+ days with exploited-in-the-wild indicator.",
                  "ea": "Saved search 'UC-22.43.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "6.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI DSS 6.3 (Vulnerabilities identified and addressed) is enforced — Splunk UC-22.43.1: Critical vulnerability SLA tracker — unpatched 30+ days with exploited-in-the-wild indicator.",
                  "ea": "Saved search 'UC-22.43.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.43.2",
              "n": "Vulnerability rediscovery after patch — regressed exposures",
              "c": "high",
              "f": "advanced",
              "v": "Prevents the compliance report from overstating the remediation rate by double-counting recurring findings.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=vuln earliest=-90d | sort 0 host cve _time | streamstats latest(action) as last_action by host cve\n| where last_action=\"remediated\"\n| join host cve [search index=vuln action=detected earliest=-7d]\n| table host cve remediated_at _time as rediscovered_at",
              "m": "(1) Maintain history of vulnerability state per host/CVE; (2) run daily; (3) on finding, re-open original remediation ticket.",
              "z": "Table of regressions, trend of regressions per scan cycle, single-value 'patches holding (30d)'.",
              "kfp": "Scanner credential changes can cause false positives that look like regressions; correlate with scanner health.",
              "refs": "[NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain history of vulnerability state per host/CVE; (2) run daily; (3) on finding, re-open original remediation ticket.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln earliest=-90d | sort 0 host cve _time | streamstats latest(action) as last_action by host cve\n| where last_action=\"remediated\"\n| join host cve [search index=vuln action=detected earliest=-7d]\n| table host cve remediated_at _time as rediscovered_at\n```\n\nUnderstanding this SPL\n\n**Vulnerability rediscovery after patch — regressed exposures** — Prevents the compliance report from overstating the remediation rate by double-counting recurring findings.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by host cve** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where last_action=\"remediated\"` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Vulnerability rediscovery after patch — regressed exposures**): table host cve remediated_at _time as rediscovered_at\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of regressions, trend of regressions per scan cycle, single-value 'patches holding (30d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see patch timing against your service promises so you can explain delays with evidence.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIST 800-53",
                "PCI DSS"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "RA-5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST 800-53 RA-5 (Vulnerability scanning) is enforced — Splunk UC-22.43.2: Vulnerability rediscovery after patch — regressed exposures.",
                  "ea": "Saved search 'UC-22.43.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "6.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI DSS 6.3 (Vulnerabilities identified and addressed) is enforced — Splunk UC-22.43.2: Vulnerability rediscovery after patch — regressed exposures.",
                  "ea": "Saved search 'UC-22.43.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.44",
          "n": "Third-party and supply-chain risk",
          "u": [
            {
              "i": "22.44.1",
              "n": "Supplier attestation currency — stale SOC 2 / ISO 27001 reports for critical vendors",
              "c": "high",
              "f": "beginner",
              "v": "Makes vendor-risk currency a live signal rather than an annual gap report.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "| inputlookup vrm_attestations.csv\n| where criticality IN (\"critical\",\"high\")\n| eval days_to_expiry=round((strptime(expires,\"%Y-%m-%d\")-now())/86400,0)\n| where days_to_expiry<=30\n| table vendor attestation_type expires days_to_expiry",
              "m": "(1) Push attestation metadata from OneTrust/ServiceNow VRM into vrm_attestations.csv nightly; (2) run daily; (3) on finding, email supplier contact and assigned procurement lead.",
              "z": "Table of expiring attestations, Sankey of attestation types per vendor, single-value 'critical vendors with current attestation'.",
              "kfp": "Re-certification in progress produces a brief 'expired' window; tolerate with an in-progress flag.",
              "refs": "[Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Push attestation metadata from OneTrust/ServiceNow VRM into vrm_attestations.csv nightly; (2) run daily; (3) on finding, email supplier contact and assigned procurement lead.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup vrm_attestations.csv\n| where criticality IN (\"critical\",\"high\")\n| eval days_to_expiry=round((strptime(expires,\"%Y-%m-%d\")-now())/86400,0)\n| where days_to_expiry<=30\n| table vendor attestation_type expires days_to_expiry\n```\n\nUnderstanding this SPL\n\n**Supplier attestation currency — stale SOC 2 / ISO 27001 reports for critical vendors** — Makes vendor-risk currency a live signal rather than an annual gap report.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Filters the current rows with `where criticality IN (\"critical\",\"high\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **days_to_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_expiry<=30` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Supplier attestation currency — stale SOC 2 / ISO 27001 reports for critical vendors**): table vendor attestation_type expires days_to_expiry\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of expiring attestations, Sankey of attestation types per vendor, single-value 'critical vendors with current attestation'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when third-party risk scores move so you can act before a weak vendor becomes a real problem.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "DORA",
                "NIS2",
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.28",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.28 (ICT third-party risk) is enforced — Splunk UC-22.44.1: Supplier attestation currency — stale SOC 2 / ISO 27001 reports for critical vendors.",
                  "ea": "Saved search 'UC-22.44.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(d)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIS2 Art.21(2)(d) (Supply-chain security) is enforced — Splunk UC-22.44.1: Supplier attestation currency — stale SOC 2 / ISO 27001 reports for critical vendors.",
                  "ea": "Saved search 'UC-22.44.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SR-3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST 800-53 SR-3 (Supply chain controls and processes) is enforced — Splunk UC-22.44.1: Supplier attestation currency — stale SOC 2 / ISO 27001 reports for critical vendors.",
                  "ea": "Saved search 'UC-22.44.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.44.2",
              "n": "Subprocessor inventory change — notification SLA to data controllers",
              "c": "high",
              "f": "intermediate",
              "v": "Notifications to controllers are a standard audit focus; this UC removes the manual book-keeping.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=vrm sourcetype=subprocessor:change earliest=-60d\n| join change_id [search index=vrm sourcetype=controller:notification | stats values(controller_id) as controllers_notified by change_id]\n| lookup subprocessor_scope.csv change_id OUTPUT controllers_in_scope notice_days\n| eval days_since=round((now()-_time)/86400,0), missing=mvfilter(NOT match(controllers_notified, controller_id))\n| where days_since>=notice_days AND mvcount(missing)>0\n| table change_id vendor controllers_in_scope controllers_notified missing",
              "m": "(1) Ingest subprocessor-change events and notification dispatch events; (2) maintain subprocessor_scope.csv; (3) run hourly; (4) escalate missed notices via SOAR.",
              "z": "Matrix controller × change with notification status, aging bar chart, single-value 'notifications on time'.",
              "kfp": "Controllers whose contracts do not require notification should be excluded via scope lookup.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest subprocessor-change events and notification dispatch events; (2) maintain subprocessor_scope.csv; (3) run hourly; (4) escalate missed notices via SOAR.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vrm sourcetype=subprocessor:change earliest=-60d\n| join change_id [search index=vrm sourcetype=controller:notification | stats values(controller_id) as controllers_notified by change_id]\n| lookup subprocessor_scope.csv change_id OUTPUT controllers_in_scope notice_days\n| eval days_since=round((now()-_time)/86400,0), missing=mvfilter(NOT match(controllers_notified, controller_id))\n| where days_since>=notice_days AND mvcount(missing)>0\n| table change_id vendor controllers_in_scope controllers_notified missing\n```\n\nUnderstanding this SPL\n\n**Subprocessor inventory change — notification SLA to data controllers** — Notifications to controllers are a standard audit focus; this UC removes the manual book-keeping.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vrm; **sourcetype**: subprocessor:change. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vrm, sourcetype=subprocessor:change, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since>=notice_days AND mvcount(missing)>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Subprocessor inventory change — notification SLA to data controllers**): table change_id vendor controllers_in_scope controllers_notified missing\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Matrix controller × change with notification status, aging bar chart, single-value 'notifications on time'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when third-party risk scores move so you can act before a weak vendor becomes a real problem.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "DORA",
                "GDPR"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.28",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.28 (ICT third-party risk) is enforced — Splunk UC-22.44.2: Subprocessor inventory change — notification SLA to data controllers.",
                  "ea": "Saved search 'UC-22.44.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.28",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.28 (Processor obligations) is enforced — Splunk UC-22.44.2: Subprocessor inventory change — notification SLA to data controllers.",
                  "ea": "Saved search 'UC-22.44.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.28",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.28 is enforced — Splunk UC-22.44.2: Subprocessor inventory change — notification SLA to data controllers.",
                  "ea": "Saved search 'UC-22.44.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.44.3",
              "n": "Fourth-party concentration risk — shared critical dependencies across vendors",
              "c": "medium",
              "f": "advanced",
              "v": "Addresses the DORA supervisory-priorities theme that regulators ask 'who do your vendors depend on?'.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "| inputlookup vendor_dependency_graph.csv\n| where vendor_criticality IN (\"critical\",\"high\")\n| stats dc(vendor) as dependent_count, values(vendor) as vendors by fourth_party\n| eval concentration_score=min(dependent_count/3,1)\n| where concentration_score>=0.6\n| table fourth_party dependent_count vendors concentration_score",
              "m": "(1) Maintain vendor_dependency_graph.csv enriched monthly; (2) run monthly; (3) surface to risk committee.",
              "z": "Network graph of concentration, bar chart of fourth-parties by score, single-value 'fourth-parties above concentration threshold'.",
              "kfp": "Depth-1 dependencies where the upstream vendor has alternate providers should be discounted (substitutability>0).",
              "refs": "[Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain vendor_dependency_graph.csv enriched monthly; (2) run monthly; (3) surface to risk committee.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup vendor_dependency_graph.csv\n| where vendor_criticality IN (\"critical\",\"high\")\n| stats dc(vendor) as dependent_count, values(vendor) as vendors by fourth_party\n| eval concentration_score=min(dependent_count/3,1)\n| where concentration_score>=0.6\n| table fourth_party dependent_count vendors concentration_score\n```\n\nUnderstanding this SPL\n\n**Fourth-party concentration risk — shared critical dependencies across vendors** — Addresses the DORA supervisory-priorities theme that regulators ask 'who do your vendors depend on?'.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Filters the current rows with `where vendor_criticality IN (\"critical\",\"high\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by fourth_party** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **concentration_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where concentration_score>=0.6` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Fourth-party concentration risk — shared critical dependencies across vendors**): table fourth_party dependent_count vendors concentration_score\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Network graph of concentration, bar chart of fourth-parties by score, single-value 'fourth-parties above concentration threshold'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when third-party risk scores move so you can act before a weak vendor becomes a real problem.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "DORA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.28",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that DORA Art.28 (ICT third-party risk) is enforced — Splunk UC-22.44.3: Fourth-party concentration risk — shared critical dependencies across vendors.",
                  "ea": "Saved search 'UC-22.44.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 31.7,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.45",
          "n": "Backup integrity and recovery testing",
          "u": [
            {
              "i": "22.45.1",
              "n": "Backup restore test evidence — RPO/RTO SLA compliance per tier",
              "c": "critical",
              "f": "intermediate",
              "v": "Makes backup testing a continuous control rather than an annual artefact.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=backup sourcetype=restore:test earliest=-7d\n| stats max(rpo_seconds) as rpo_s, max(rto_seconds) as rto_s by system tier test_id\n| lookup dr_tiers.csv tier OUTPUT rpo_slo rto_slo\n| where rpo_s>rpo_slo OR rto_s>rto_slo\n| table system tier rpo_s rpo_slo rto_s rto_slo",
              "m": "(1) Configure restore test jobs; (2) ingest durations and results; (3) schedule weekly; (4) on miss, escalate to continuity lead.",
              "z": "Table of tests, line chart of RPO/RTO trend, single-value '% of tier-1 systems within SLO'.",
              "kfp": "Tier-2/-3 systems may legitimately miss the tier-1 SLO; ensure tier lookup is current.",
              "refs": "[NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Configure restore test jobs; (2) ingest durations and results; (3) schedule weekly; (4) on miss, escalate to continuity lead.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=restore:test earliest=-7d\n| stats max(rpo_seconds) as rpo_s, max(rto_seconds) as rto_s by system tier test_id\n| lookup dr_tiers.csv tier OUTPUT rpo_slo rto_slo\n| where rpo_s>rpo_slo OR rto_s>rto_slo\n| table system tier rpo_s rpo_slo rto_s rto_slo\n```\n\nUnderstanding this SPL\n\n**Backup restore test evidence — RPO/RTO SLA compliance per tier** — Makes backup testing a continuous control rather than an annual artefact.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: restore:test. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=restore:test, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by system tier test_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where rpo_s>rpo_slo OR rto_s>rto_slo` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Backup restore test evidence — RPO/RTO SLA compliance per tier**): table system tier rpo_s rpo_slo rto_s rto_slo\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of tests, line chart of RPO/RTO trend, single-value '% of tier-1 systems within SLO'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you show backups were tested and restorable when regulators or disaster recovery need proof.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "DORA",
                "NIST 800-53",
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.12",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.12 (Backup policies and recovery methods) is enforced — Splunk UC-22.45.1: Backup restore test evidence — RPO/RTO SLA compliance per tier.",
                  "ea": "Saved search 'UC-22.45.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CP-9",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 CP-9 (System backup) is enforced — Splunk UC-22.45.1: Backup restore test evidence — RPO/RTO SLA compliance per tier.",
                  "ea": "Saved search 'UC-22.45.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "A1.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC 2 A1.2 (Availability commitments) is enforced — Splunk UC-22.45.1: Backup restore test evidence — RPO/RTO SLA compliance per tier.",
                  "ea": "Saved search 'UC-22.45.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.45.2",
              "n": "Backup encryption and air-gap integrity — tamper detection on immutable storage",
              "c": "critical",
              "f": "advanced",
              "v": "Prevents the compliance claim of 'we have immutable backups' from silently becoming untrue.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=backup sourcetype=immutable:events earliest=-1h (event=tamper OR event=lock_violation OR event=checksum_mismatch)\n| lookup backup_lifecycle.csv repo_id OUTPUT policy_id\n| where event!=\"lifecycle_expiry\" OR isnull(policy_id)\n| table _time repo_id event detail policy_id",
              "m": "(1) Enable repository audit logs; (2) emit checksum-verification events on a schedule; (3) run hourly; (4) on finding, escalate to security.",
              "z": "Timeline of events, table of findings, single-value 'hours with no tamper events (30d)'.",
              "kfp": "Planned decommission of repositories should carry a decommission_id in the lookup to avoid noise.",
              "refs": "[NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [MITRE ATT&CK — T1490 Inhibit System Recovery](https://attack.mitre.org/techniques/T1490/)",
              "mitre": [
                "T1490"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enable repository audit logs; (2) emit checksum-verification events on a schedule; (3) run hourly; (4) on finding, escalate to security.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=backup sourcetype=immutable:events earliest=-1h (event=tamper OR event=lock_violation OR event=checksum_mismatch)\n| lookup backup_lifecycle.csv repo_id OUTPUT policy_id\n| where event!=\"lifecycle_expiry\" OR isnull(policy_id)\n| table _time repo_id event detail policy_id\n```\n\nUnderstanding this SPL\n\n**Backup encryption and air-gap integrity — tamper detection on immutable storage** — Prevents the compliance claim of 'we have immutable backups' from silently becoming untrue.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: backup; **sourcetype**: immutable:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=backup, sourcetype=immutable:events, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where event!=\"lifecycle_expiry\" OR isnull(policy_id)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Backup encryption and air-gap integrity — tamper detection on immutable storage**): table _time repo_id event detail policy_id\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline of events, table of findings, single-value 'hours with no tamper events (30d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you show backups were tested and restorable when regulators or disaster recovery need proof.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "HIPAA Security",
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(7)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA Security §164.308(a)(7) (Contingency plan) is enforced — Splunk UC-22.45.2: Backup encryption and air-gap integrity — tamper detection on immutable storage.",
                  "ea": "Saved search 'UC-22.45.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CP-9",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 CP-9 (System backup) is enforced — Splunk UC-22.45.2: Backup encryption and air-gap integrity — tamper detection on immutable storage.",
                  "ea": "Saved search 'UC-22.45.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.45.3",
              "n": "Backup completeness — unprotected workloads with regulated data",
              "c": "high",
              "f": "intermediate",
              "v": "Coverage gaps are the single most common backup finding at audit; continuous evidence eliminates surprise findings.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "| rest /services/backup/inventory splunk_server=* | where regulated=\"true\"\n| stats max(last_success) as last_ok by workload tier\n| lookup dr_tiers.csv tier OUTPUT expected_interval_hours\n| eval hours_since=round((now()-last_ok)/3600,0)\n| where hours_since>expected_interval_hours\n| table tier workload last_ok hours_since expected_interval_hours",
              "m": "(1) Tag regulated workloads; (2) publish dr_tiers.csv; (3) run daily; (4) escalate via SOAR.",
              "z": "Heatmap of workload × day, bar chart of gaps by tier, single-value '% of regulated workloads protected'.",
              "kfp": "Workloads in a DR cutover window can temporarily appear uncovered; tolerate via a maintenance window.",
              "refs": "[NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag regulated workloads; (2) publish dr_tiers.csv; (3) run daily; (4) escalate via SOAR.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/backup/inventory splunk_server=* | where regulated=\"true\"\n| stats max(last_success) as last_ok by workload tier\n| lookup dr_tiers.csv tier OUTPUT expected_interval_hours\n| eval hours_since=round((now()-last_ok)/3600,0)\n| where hours_since>expected_interval_hours\n| table tier workload last_ok hours_since expected_interval_hours\n```\n\nUnderstanding this SPL\n\n**Backup completeness — unprotected workloads with regulated data** — Coverage gaps are the single most common backup finding at audit; continuous evidence eliminates surprise findings.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Filters the current rows with `where regulated=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by workload tier** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **hours_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where hours_since>expected_interval_hours` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Backup completeness — unprotected workloads with regulated data**): table tier workload last_ok hours_since expected_interval_hours\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of workload × day, bar chart of gaps by tier, single-value '% of regulated workloads protected'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you show backups were tested and restorable when regulators or disaster recovery need proof.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "DORA",
                "NIST 800-53",
                "SOX ITGC"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.12",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.12 (Backup policies and recovery methods) is enforced — Splunk UC-22.45.3: Backup completeness — unprotected workloads with regulated data.",
                  "ea": "Saved search 'UC-22.45.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CP-9",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 CP-9 (System backup) is enforced — Splunk UC-22.45.3: Backup completeness — unprotected workloads with regulated data.",
                  "ea": "Saved search 'UC-22.45.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Operations.Backup",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX ITGC ITGC.Operations.Backup (Backup and restore) is enforced — Splunk UC-22.45.3: Backup completeness — unprotected workloads with regulated data.",
                  "ea": "Saved search 'UC-22.45.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 33.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.46",
          "n": "Training and awareness",
          "u": [
            {
              "i": "22.46.1",
              "n": "Mandatory security training — completion SLA by role",
              "c": "medium",
              "f": "beginner",
              "v": "Moves completion from a retrospective export to a continuously-known KPI.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=lms sourcetype=completion earliest=-365d course_id=security_awareness\n| stats max(_time) as last_completion by employee_id\n| lookup hr_roster.csv employee_id OUTPUT role status\n| where status=\"active\" AND (isnull(last_completion) OR (now()-last_completion)>31536000)\n| table employee_id role last_completion",
              "m": "(1) Ingest LMS events; (2) refresh hr_roster.csv nightly; (3) run weekly; (4) on finding, auto-enrol via HR integration.",
              "z": "Bar chart of completion by department, table of overdue, single-value '% completion in cycle'.",
              "kfp": "Employees on long leave (maternity, sabbatical) should be excluded via a leave flag.",
              "refs": "[45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest LMS events; (2) refresh hr_roster.csv nightly; (3) run weekly; (4) on finding, auto-enrol via HR integration.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=lms sourcetype=completion earliest=-365d course_id=security_awareness\n| stats max(_time) as last_completion by employee_id\n| lookup hr_roster.csv employee_id OUTPUT role status\n| where status=\"active\" AND (isnull(last_completion) OR (now()-last_completion)>31536000)\n| table employee_id role last_completion\n```\n\nUnderstanding this SPL\n\n**Mandatory security training — completion SLA by role** — Moves completion from a retrospective export to a continuously-known KPI.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: lms; **sourcetype**: completion. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=lms, sourcetype=completion, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by employee_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where status=\"active\" AND (isnull(last_completion) OR (now()-last_completion)>31536000)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Mandatory security training — completion SLA by role**): table employee_id role last_completion\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart of completion by department, table of overdue, single-value '% completion in cycle'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see who still needs training so awareness stays current across teams and roles.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "HIPAA Security",
                "NIS2",
                "PCI DSS"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.308(a)(5)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA Security §164.308(a)(5) (Security awareness and training) is enforced — Splunk UC-22.46.1: Mandatory security training — completion SLA by role.",
                  "ea": "Saved search 'UC-22.46.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(g)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIS2 Art.21(2)(g) (Cyber-hygiene and training) is enforced — Splunk UC-22.46.1: Mandatory security training — completion SLA by role.",
                  "ea": "Saved search 'UC-22.46.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI DSS 12.6 is enforced — Splunk UC-22.46.1: Mandatory security training — completion SLA by role.",
                  "ea": "Saved search 'UC-22.46.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.46.2",
              "n": "Phishing simulation efficacy — click-rate trend and repeat-clicker detection",
              "c": "medium",
              "f": "intermediate",
              "v": "Turns the awareness programme into a measurable control, not an annual training badge.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=phishing sourcetype=knowbe4:campaign OR sourcetype=proofpoint:psa earliest=-180d\n| bin _time span=30d\n| stats count(eval(action=\"clicked\")) as clicks count as total by _time\n| eval click_rate=round(clicks/total*100,2)\n| stats last(click_rate) as current avg(click_rate) as baseline\n| eval drift=current-baseline\n| where drift>3\n| appendpipe [search index=phishing action=clicked earliest=-180d | stats dc(campaign_id) as clicks by employee | where clicks>=3]",
              "m": "(1) Ingest campaign events from the phishing-sim platform; (2) compute baselines per quarter; (3) run monthly; (4) deliver repeat-clicker lists to HR for targeted training.",
              "z": "Line chart of click rate over time with baseline band, table of repeat clickers, single-value 'current click rate vs baseline'.",
              "kfp": "Newly-onboarded employees inflate click-rate; compute baseline excluding the first 30 days post-start.",
              "refs": "[Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest campaign events from the phishing-sim platform; (2) compute baselines per quarter; (3) run monthly; (4) deliver repeat-clicker lists to HR for targeted training.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=phishing sourcetype=knowbe4:campaign OR sourcetype=proofpoint:psa earliest=-180d\n| bin _time span=30d\n| stats count(eval(action=\"clicked\")) as clicks count as total by _time\n| eval click_rate=round(clicks/total*100,2)\n| stats last(click_rate) as current avg(click_rate) as baseline\n| eval drift=current-baseline\n| where drift>3\n| appendpipe [search index=phishing action=clicked earliest=-180d | stats dc(campaign_id) as clicks by employee | where clicks>=3]\n```\n\nUnderstanding this SPL\n\n**Phishing simulation efficacy — click-rate trend and repeat-clicker detection** — Turns the awareness programme into a measurable control, not an annual training badge.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: phishing; **sourcetype**: knowbe4:campaign, proofpoint:psa. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=phishing, sourcetype=knowbe4:campaign, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **click_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **drift** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drift>3` — typically the threshold or rule expression for this monitoring goal.\n• Appends rows from a subsearch with `append`.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart of click rate over time with baseline band, table of repeat clickers, single-value 'current click rate vs baseline'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see who still needs training so awareness stays current across teams and roles.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NIS2",
                "PCI DSS"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(g)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(g) (Cyber-hygiene and training) is enforced — Splunk UC-22.46.2: Phishing simulation efficacy — click-rate trend and repeat-clicker detection.",
                  "ea": "Saved search 'UC-22.46.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "12.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI DSS 12.6 is enforced — Splunk UC-22.46.2: Phishing simulation efficacy — click-rate trend and repeat-clicker detection.",
                  "ea": "Saved search 'UC-22.46.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.47",
          "n": "Control testing evidence freshness",
          "u": [
            {
              "i": "22.47.1",
              "n": "Control test freshness — evidence older than policy cadence",
              "c": "medium",
              "f": "beginner",
              "v": "Provides internal audit with a continuously-updated testing backlog rather than a pre-audit scramble.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "| inputlookup control_inventory.csv\n| lookup control_test_log.csv control_id OUTPUT last_tested\n| eval age_days=round((now()-strptime(last_tested,\"%Y-%m-%d\"))/86400,0)\n| where age_days>cadence_days\n| table control_id control_family cadence_days last_tested age_days",
              "m": "(1) Maintain control_inventory.csv with cadence per control; (2) emit control_test_log.csv from GRC tooling; (3) run weekly.",
              "z": "Heatmap of control × age, table of overdue, single-value '% of controls with fresh evidence'.",
              "kfp": "Controls mid-transition (ownership change) may legitimately exceed cadence; tag with a transition flag.",
              "refs": "[AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [ISO/IEC 27001:2022](https://www.iso.org/standard/82875.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain control_inventory.csv with cadence per control; (2) emit control_test_log.csv from GRC tooling; (3) run weekly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup control_inventory.csv\n| lookup control_test_log.csv control_id OUTPUT last_tested\n| eval age_days=round((now()-strptime(last_tested,\"%Y-%m-%d\"))/86400,0)\n| where age_days>cadence_days\n| table control_id control_family cadence_days last_tested age_days\n```\n\nUnderstanding this SPL\n\n**Control test freshness — evidence older than policy cadence** — Provides internal audit with a continuously-updated testing backlog rather than a pre-audit scramble.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>cadence_days` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Control test freshness — evidence older than policy cadence**): table control_id control_family cadence_days last_tested age_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of control × age, table of overdue, single-value '% of controls with fresh evidence'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when control evidence is getting old so you refresh proof before the next review.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "ISO 27001",
                "NIST 800-53",
                "SOC 2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.36",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO 27001 A.5.36 is enforced — Splunk UC-22.47.1: Control test freshness — evidence older than policy cadence.",
                  "ea": "Saved search 'UC-22.47.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "PM-1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST 800-53 PM-1 (Information security program plan) is enforced — Splunk UC-22.47.1: Control test freshness — evidence older than policy cadence.",
                  "ea": "Saved search 'UC-22.47.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC5.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC 2 CC5.1 (Control activities) is enforced — Splunk UC-22.47.1: Control test freshness — evidence older than policy cadence.",
                  "ea": "Saved search 'UC-22.47.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.47.2",
              "n": "Repeat audit findings — same control deficiency across consecutive audit cycles",
              "c": "high",
              "f": "intermediate",
              "v": "Converts the 'prior year issue' anecdote into a quantitative signal.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=grc sourcetype=audit:finding earliest=-2y\n| stats dc(cycle_id) as cycles, values(cycle_id) as cycle_ids by control_id\n| where cycles>=2\n| table control_id cycles cycle_ids",
              "m": "(1) Ingest audit findings per cycle; (2) schedule quarterly; (3) escalate to audit committee.",
              "z": "Bar chart of repeat controls, heatmap of control × cycle, single-value 'repeat findings count'.",
              "kfp": "Findings closed with carry-over remediation plans can legitimately appear repeat; validate remediation_status.",
              "refs": "[PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest audit findings per cycle; (2) schedule quarterly; (3) escalate to audit committee.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=audit:finding earliest=-2y\n| stats dc(cycle_id) as cycles, values(cycle_id) as cycle_ids by control_id\n| where cycles>=2\n| table control_id cycles cycle_ids\n```\n\nUnderstanding this SPL\n\n**Repeat audit findings — same control deficiency across consecutive audit cycles** — Converts the 'prior year issue' anecdote into a quantitative signal.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: audit:finding. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=audit:finding, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by control_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where cycles>=2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Repeat audit findings — same control deficiency across consecutive audit cycles**): table control_id cycles cycle_ids\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart of repeat controls, heatmap of control × cycle, single-value 'repeat findings count'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when control evidence is getting old so you refresh proof before the next review.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "SOC 2",
                "SOX ITGC"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SOC 2",
                  "v": "2017 TSC",
                  "cl": "CC3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC 2 CC3.1 (Risk assessment) is enforced — Splunk UC-22.47.2: Repeat audit findings — same control deficiency across consecutive audit cycles.",
                  "ea": "Saved search 'UC-22.47.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Logging.Review",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of SOX ITGC ITGC.Logging.Review (Log review) — Splunk UC-22.47.2: Repeat audit findings — same control deficiency across consecutive audit cycles.",
                  "ea": "Saved search 'UC-22.47.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 32.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.48",
          "n": "Segregation of duties enforcement",
          "u": [
            {
              "i": "22.48.1",
              "n": "Segregation of duties — toxic role combinations in IAM",
              "c": "critical",
              "f": "advanced",
              "v": "SoD failures are the top SOX ITGC finding in the Fortune-1000 cohort; automating them is the only defensible posture.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "| rest /services/iam/principals splunk_server=* | fields principal roles\n| mvexpand roles\n| join principal [| rest /services/iam/principals splunk_server=* | fields principal roles | mvexpand roles | rename roles as other_role]\n| where roles!=other_role\n| lookup sod_matrix.csv role_a=roles role_b=other_role OUTPUT toxic permitted_with_compensating\n| where toxic=\"true\" AND permitted_with_compensating!=\"true\"\n| table principal roles other_role",
              "m": "(1) Maintain sod_matrix.csv with the company's SoD rules; (2) ingest IAM state via REST; (3) run daily; (4) on finding, auto-open a remediation ticket.",
              "z": "Table of violations, network graph of role interactions, single-value 'principals with SoD violations'.",
              "kfp": "Break-glass admin accounts require explicit permitted_with_compensating=true.",
              "refs": "[PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001:2022](https://www.iso.org/standard/82875.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain sod_matrix.csv with the company's SoD rules; (2) ingest IAM state via REST; (3) run daily; (4) on finding, auto-open a remediation ticket.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/iam/principals splunk_server=* | fields principal roles\n| mvexpand roles\n| join principal [| rest /services/iam/principals splunk_server=* | fields principal roles | mvexpand roles | rename roles as other_role]\n| where roles!=other_role\n| lookup sod_matrix.csv role_a=roles role_b=other_role OUTPUT toxic permitted_with_compensating\n| where toxic=\"true\" AND permitted_with_compensating!=\"true\"\n| table principal roles other_role\n```\n\nUnderstanding this SPL\n\n**Segregation of duties — toxic role combinations in IAM** — SoD failures are the top SOX ITGC finding in the Fortune-1000 cohort; automating them is the only defensible posture.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where roles!=other_role` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where toxic=\"true\" AND permitted_with_compensating!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Segregation of duties — toxic role combinations in IAM**): table principal roles other_role\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of violations, network graph of role interactions, single-value 'principals with SoD violations'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when the same person could do two jobs that should be split, so errors and fraud are harder.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO 27001",
                "PCI DSS",
                "SOX",
                "SOX ITGC"
              ],
              "a": [
                "Authentication"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO 27001 A.5.3 is enforced — Splunk UC-22.48.1: Segregation of duties — toxic role combinations in IAM.",
                  "ea": "Saved search 'UC-22.48.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI DSS",
                  "v": "v4.0",
                  "cl": "7.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI DSS 7.2 (Access granted on least privilege) is enforced — Splunk UC-22.48.1: Segregation of duties — toxic role combinations in IAM.",
                  "ea": "Saved search 'UC-22.48.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.SOD",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX ITGC ITGC.AccessMgmt.SOD (Segregation of duties) is enforced — Splunk UC-22.48.1: Segregation of duties — toxic role combinations in IAM.",
                  "ea": "Saved search 'UC-22.48.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.48.2",
              "n": "SoD violations via break-glass usage — emergency role abuse",
              "c": "critical",
              "f": "advanced",
              "v": "Break-glass is often abused to step around SoD; this UC catches that pattern without waiting for audit.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=pam breakglass=true earliest=-24h\n| stats values(action_category) as actions by session_id account\n| mvexpand actions\n| lookup sod_matrix.csv action_a=actions OUTPUT toxic_against\n| mvexpand toxic_against\n| join session_id toxic_against [search index=pam breakglass=true earliest=-24h | eval hit=action_category]\n| join session_id [search index=grc sourcetype=emergency:exception | fields session_id exception_id]\n| where isnull(exception_id)\n| table session_id account actions toxic_against",
              "m": "(1) Tag PAM session actions into SoD categories; (2) maintain sod_matrix.csv; (3) require emergency exceptions to log to GRC; (4) run hourly.",
              "z": "Table of sessions with SoD crossings, single-value 'SoD-compliant emergency sessions'.",
              "kfp": "Sessions whose category mapping is imprecise can look like crossings; refine sod_matrix.csv categories quarterly.",
              "refs": "[PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag PAM session actions into SoD categories; (2) maintain sod_matrix.csv; (3) require emergency exceptions to log to GRC; (4) run hourly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam breakglass=true earliest=-24h\n| stats values(action_category) as actions by session_id account\n| mvexpand actions\n| lookup sod_matrix.csv action_a=actions OUTPUT toxic_against\n| mvexpand toxic_against\n| join session_id toxic_against [search index=pam breakglass=true earliest=-24h | eval hit=action_category]\n| join session_id [search index=grc sourcetype=emergency:exception | fields session_id exception_id]\n| where isnull(exception_id)\n| table session_id account actions toxic_against\n```\n\nUnderstanding this SPL\n\n**SoD violations via break-glass usage — emergency role abuse** — Break-glass is often abused to step around SoD; this UC catches that pattern without waiting for audit.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by session_id account** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(exception_id)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SoD violations via break-glass usage — emergency role abuse**): table session_id account actions toxic_against\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of sessions with SoD crossings, single-value 'SoD-compliant emergency sessions'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when the same person could do two jobs that should be split, so errors and fraud are harder.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOX ITGC"
              ],
              "a": [
                "Authentication",
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.SOD",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOX ITGC ITGC.AccessMgmt.SOD (Segregation of duties) is enforced — Splunk UC-22.48.2: SoD violations via break-glass usage — emergency role abuse.",
                  "ea": "Saved search 'UC-22.48.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.49",
          "n": "Retention and disposal automation",
          "u": [
            {
              "i": "22.49.1",
              "n": "Retention execution evidence — records past retention still present",
              "c": "high",
              "f": "intermediate",
              "v": "Retention failures quickly become GDPR enforcement actions — continuous monitoring is the only real defence.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=datastore sourcetype=retention:inventory earliest=-1d\n| lookup retention_policy.csv data_domain OUTPUT retention_period_days\n| eval age_days=round((now()-strptime(created_at,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,0)\n| where age_days>retention_period_days AND litigation_hold!=\"true\"\n| table store record_id data_domain age_days retention_period_days",
              "m": "(1) Maintain retention_policy.csv per data_domain; (2) ingest store inventory with created_at / litigation_hold flags; (3) run daily.",
              "z": "Heatmap of store × age buckets, table of overdue records, single-value '% of records within retention'.",
              "kfp": "Data under litigation hold must carry the flag; without it, the UC will flag it.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [California Consumer Privacy Act (CCPA/CPRA)](https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=CIV&division=3.&title=1.81.5.&part=4.&chapter=&article=)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain retention_policy.csv per data_domain; (2) ingest store inventory with created_at / litigation_hold flags; (3) run daily.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=datastore sourcetype=retention:inventory earliest=-1d\n| lookup retention_policy.csv data_domain OUTPUT retention_period_days\n| eval age_days=round((now()-strptime(created_at,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,0)\n| where age_days>retention_period_days AND litigation_hold!=\"true\"\n| table store record_id data_domain age_days retention_period_days\n```\n\nUnderstanding this SPL\n\n**Retention execution evidence — records past retention still present** — Retention failures quickly become GDPR enforcement actions — continuous monitoring is the only real defence.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: datastore; **sourcetype**: retention:inventory. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=datastore, sourcetype=retention:inventory, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>retention_period_days AND litigation_hold!=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Retention execution evidence — records past retention still present**): table store record_id data_domain age_days retention_period_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of store × age buckets, table of overdue records, single-value '% of records within retention'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see retention and disposal so data is not kept too long and holds are respected.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "CCPA/CPRA",
                "GDPR",
                "HIPAA Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.100",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CCPA/CPRA §1798.100 (Consumer right to know) is enforced — Splunk UC-22.49.1: Retention execution evidence — records past retention still present.",
                  "ea": "Saved search 'UC-22.49.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.5 (Principles of processing) is enforced — Splunk UC-22.49.1: Retention execution evidence — records past retention still present.",
                  "ea": "Saved search 'UC-22.49.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.310(d)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Security §164.310(d)(1) (Device and media controls) is enforced — Splunk UC-22.49.1: Retention execution evidence — records past retention still present.",
                  "ea": "Saved search 'UC-22.49.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.5 is enforced — Splunk UC-22.49.1: Retention execution evidence — records past retention still present.",
                  "ea": "Saved search 'UC-22.49.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.49.2",
              "n": "Disposal workflow completion — failed disposals requiring manual review",
              "c": "high",
              "f": "intermediate",
              "v": "Ensures the disposal pipeline is as observable as the ingest pipeline.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=datastore sourcetype=disposal:job earliest=-7d\n| stats values(status) as states min(_time) as started max(_time) as last_update by job_id\n| where NOT match(states, \"completed\") AND (now()-started)>172800\n| table job_id started last_update states",
              "m": "(1) Instrument disposal tooling to emit lifecycle events; (2) run daily; (3) on stall, notify the data owner.",
              "z": "Funnel started → in-progress → completed, table of stalled jobs, single-value '% of disposals completed within 48h'.",
              "kfp": "Long-running large-scale disposals legitimately take >48h; parametrise per data_domain.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument disposal tooling to emit lifecycle events; (2) run daily; (3) on stall, notify the data owner.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=datastore sourcetype=disposal:job earliest=-7d\n| stats values(status) as states min(_time) as started max(_time) as last_update by job_id\n| where NOT match(states, \"completed\") AND (now()-started)>172800\n| table job_id started last_update states\n```\n\nUnderstanding this SPL\n\n**Disposal workflow completion — failed disposals requiring manual review** — Ensures the disposal pipeline is as observable as the ingest pipeline.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: datastore; **sourcetype**: disposal:job. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=datastore, sourcetype=disposal:job, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by job_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where NOT match(states, \"completed\") AND (now()-started)>172800` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Disposal workflow completion — failed disposals requiring manual review**): table job_id started last_update states\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Funnel started → in-progress → completed, table of stalled jobs, single-value '% of disposals completed within 48h'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see retention and disposal so data is not kept too long and holds are respected.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "GDPR",
                "HIPAA Security"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.5 (Principles of processing) is enforced — Splunk UC-22.49.2: Disposal workflow completion — failed disposals requiring manual review.",
                  "ea": "Saved search 'UC-22.49.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA Security",
                  "v": "2013-final",
                  "cl": "§164.310(d)(1)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA Security §164.310(d)(1) (Device and media controls) is enforced — Splunk UC-22.49.2: Disposal workflow completion — failed disposals requiring manual review.",
                  "ea": "Saved search 'UC-22.49.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.5 is enforced — Splunk UC-22.49.2: Disposal workflow completion — failed disposals requiring manual review.",
                  "ea": "Saved search 'UC-22.49.2' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.49.3",
              "n": "Litigation-hold override audit — holds applied/released without ticket",
              "c": "high",
              "f": "intermediate",
              "v": "Prevents silent tampering with evidence preservation.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "See subcategory preamble.",
              "q": "index=datastore sourcetype=hold:change earliest=-24h\n| join hold_id [search index=legal sourcetype=hold:ticket | fields hold_id ticket_id requested_by]\n| where isnull(ticket_id)\n| table _time hold_id record_id action ticket_id",
              "m": "(1) Ingest hold-change audit events from data stores and from the legal-hold tool; (2) run hourly.",
              "z": "Table of unauthorised changes, single-value 'hold changes with ticket (30d)'.",
              "kfp": "Bulk hold changes during DR or system migration should carry a maintenance ticket.",
              "refs": "[PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [ISO/IEC 27001:2022](https://www.iso.org/standard/82875.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: See subcategory preamble..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest hold-change audit events from data stores and from the legal-hold tool; (2) run hourly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=datastore sourcetype=hold:change earliest=-24h\n| join hold_id [search index=legal sourcetype=hold:ticket | fields hold_id ticket_id requested_by]\n| where isnull(ticket_id)\n| table _time hold_id record_id action ticket_id\n```\n\nUnderstanding this SPL\n\n**Litigation-hold override audit — holds applied/released without ticket** — Prevents silent tampering with evidence preservation.\n\nDocumented **Data sources**: See subcategory preamble. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: datastore; **sourcetype**: hold:change. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=datastore, sourcetype=hold:change, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(ticket_id)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Litigation-hold override audit — holds applied/released without ticket**): table _time hold_id record_id action ticket_id\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of unauthorised changes, single-value 'hold changes with ticket (30d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see retention and disposal so data is not kept too long and holds are respected.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "ISO 27001",
                "SOX ITGC"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO 27001",
                  "v": "2022",
                  "cl": "A.5.33",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO 27001 A.5.33 is enforced — Splunk UC-22.49.3: Litigation-hold override audit — holds applied/released without ticket.",
                  "ea": "Saved search 'UC-22.49.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Logging.Review",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOX ITGC ITGC.Logging.Review (Log review) is enforced — Splunk UC-22.49.3: Litigation-hold override audit — holds applied/released without ticket.",
                  "ea": "Saved search 'UC-22.49.3' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.35",
          "n": "— additional UCs (Evidence continuity and log integrity)",
          "u": [
            {
              "i": "22.35.4",
              "n": "Log signing chain integrity — cryptographic signature drift on evidence archive",
              "c": "critical",
              "f": "advanced",
              "v": "Replaces the quarterly 'we sampled 10 logs and checked hashes' audit procedure with a continuous control whose output is either 'zero breaks for the period' or a named remediation record. This is the evidence regulators actually ask for.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Evidence archive signature-verification events (e.g., sourcetype=archive:signer emitted by the signer service), WORM repository state REST API, KMS signing audit logs.",
              "q": "index=compliance sourcetype=archive:signer earliest=-1h\n| eval block_parent_hash=coalesce(parent_hash,\"GENESIS\")\n| stats values(block_hash) as block_hashes latest(expected_parent_hash) as expected_parent by block_id archive_id\n| where expected_parent!=block_parent_hash AND block_parent_hash!=\"GENESIS\"\n| table archive_id block_id block_parent_hash expected_parent block_hashes",
              "m": "(1) Ensure the archive signer emits a per-block event with block_id, block_hash, parent_hash, and expected_parent_hash; (2) schedule the UC hourly; (3) route hits to compliance_summary; (4) SOAR auto-opens a P1 with the relevant archive owner; (5) run an automated rebuild drill monthly to ensure verification code itself is not silently broken.",
              "z": "Timeline of verified vs broken blocks, scatter of block_id × hash-mismatch severity, single-value '% of archive blocks verified (30d)'.",
              "kfp": "Archive signer upgrades that rotate the hash function produce a one-off legitimate 'parent-missing' at the rotation boundary — record the rotation in archive_rotations.csv and allow-list by timestamp.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [MITRE ATT&CK — T1070 Indicator Removal](https://attack.mitre.org/techniques/T1070/)",
              "mitre": [
                "T1070"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Evidence archive signature-verification events (e.g., sourcetype=archive:signer emitted by the signer service), WORM repository state REST API, KMS signing audit logs..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure the archive signer emits a per-block event with block_id, block_hash, parent_hash, and expected_parent_hash; (2) schedule the UC hourly; (3) route hits to compliance_summary; (4) SOAR auto-opens a P1 with the relevant archive owner; (5) run an automated rebuild drill monthly to ensure verification code itself is not silently broken.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=compliance sourcetype=archive:signer earliest=-1h\n| eval block_parent_hash=coalesce(parent_hash,\"GENESIS\")\n| stats values(block_hash) as block_hashes latest(expected_parent_hash) as expected_parent by block_id archive_id\n| where expected_parent!=block_parent_hash AND block_parent_hash!=\"GENESIS\"\n| table archive_id block_id block_parent_hash expected_parent block_hashes\n```\n\nUnderstanding this SPL\n\n**Log signing chain integrity — cryptographic signature drift on evidence archive** — Replaces the quarterly 'we sampled 10 logs and checked hashes' audit procedure with a continuous control whose output is either 'zero breaks for the period' or a named remediation record. This is the evidence regulators actually ask for.\n\nDocumented **Data sources**: Evidence archive signature-verification events (e.g., sourcetype=archive:signer emitted by the signer service), WORM repository state REST API, KMS signing audit logs. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: compliance; **sourcetype**: archive:signer. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=compliance, sourcetype=archive:signer, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **block_parent_hash** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by block_id archive_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where expected_parent!=block_parent_hash AND block_parent_hash!=\"GENESIS\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Log signing chain integrity — cryptographic signature drift on evidence archive**): table archive_id block_id block_parent_hash expected_parent block_hashes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timeline of verified vs broken blocks, scatter of block_id × hash-mismatch severity, single-value '% of archive blocks verified (30d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We watch for breaks in the audit log stream so you can fix gaps before compliance or security reviews find them first.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "GDPR",
                "HIPAA",
                "PCI-DSS",
                "SOX-ITGC"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32(1)(b)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.32(1)(b) is enforced — Splunk UC-22.35.4: Log signing chain integrity — cryptographic signature drift on evidence archive.",
                  "ea": "Saved search 'UC-22.35.4' running on sourcetype archive:signer and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2016/679/oj"
                },
                {
                  "r": "HIPAA",
                  "v": "2013-final",
                  "cl": "§164.312(c)(1)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA §164.312(c)(1) (Integrity) is enforced — Splunk UC-22.35.4: Log signing chain integrity — cryptographic signature drift on evidence archive.",
                  "ea": "Saved search 'UC-22.35.4' running on sourcetype archive:signer and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "10.3.4",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 10.3.4 is enforced — Splunk UC-22.35.4: Log signing chain integrity — cryptographic signature drift on evidence archive.",
                  "ea": "Saved search 'UC-22.35.4' running on sourcetype archive:signer and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Logging.Integrity",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of SOX-ITGC ITGC.Logging.Integrity — Splunk UC-22.35.4: Log signing chain integrity — cryptographic signature drift on evidence archive.",
                  "ea": "Saved search 'UC-22.35.4' running on sourcetype archive:signer and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.32(1)(b)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.32(1)(b) is enforced — Splunk UC-22.35.4: Log signing chain integrity — cryptographic signature drift on evidence archive.",
                  "ea": "Saved search 'UC-22.35.4' running on sourcetype archive:signer and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.35.5",
              "n": "Search-head audit-trail completeness — deleted or rewritten search jobs",
              "c": "high",
              "f": "intermediate",
              "v": "Gives the Splunk admin team a defensible 'every search run is accounted for' control rather than the weaker 'we have search logging enabled'.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "_audit (action=search), /services/search/jobs REST for current artifacts, dispatch directory state via Splunk REST.",
              "q": "index=_audit action=search info=completed earliest=-24h\n| stats values(artifact_id) as artifact_ids earliest(_time) as first_seen latest(_time) as last_seen by search_id user\n| mvexpand artifact_ids\n| join type=outer artifact_ids [| rest /services/search/jobs splunk_server=* | fields sid dispatchState | rename sid as artifact_ids]\n| where isnull(dispatchState) AND (now()-last_seen)<86400\n| table user search_id artifact_ids first_seen last_seen",
              "m": "(1) Ensure _audit is retained for ≥90 days on the evidence cluster; (2) adjust dispatch_ttl per risk tier; (3) schedule every 15 min; (4) on hit, SOAR opens a P2 for the Splunk admin team.",
              "z": "Time chart of job-vs-artifact reconciliation, table of missing artifacts by user, single-value 'search jobs with artifact present (7d)'.",
              "kfp": "Cluster re-balancing or search-head restart can transiently remove dispatch artifacts before the next sync; tolerate with a 30-minute post-restart grace via maintenance_windows.csv.",
              "refs": "[45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022), [MITRE ATT&CK — T1070.006 Indicator Removal: Timestomp](https://attack.mitre.org/techniques/T1070/006/)",
              "mitre": [
                "T1070.006"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: _audit (action=search), /services/search/jobs REST for current artifacts, dispatch directory state via Splunk REST..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure _audit is retained for ≥90 days on the evidence cluster; (2) adjust dispatch_ttl per risk tier; (3) schedule every 15 min; (4) on hit, SOAR opens a P2 for the Splunk admin team.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=_audit action=search info=completed earliest=-24h\n| stats values(artifact_id) as artifact_ids earliest(_time) as first_seen latest(_time) as last_seen by search_id user\n| mvexpand artifact_ids\n| join type=outer artifact_ids [| rest /services/search/jobs splunk_server=* | fields sid dispatchState | rename sid as artifact_ids]\n| where isnull(dispatchState) AND (now()-last_seen)<86400\n| table user search_id artifact_ids first_seen last_seen\n```\n\nUnderstanding this SPL\n\n**Search-head audit-trail completeness — deleted or rewritten search jobs** — Gives the Splunk admin team a defensible 'every search run is accounted for' control rather than the weaker 'we have search logging enabled'.\n\nDocumented **Data sources**: _audit (action=search), /services/search/jobs REST for current artifacts, dispatch directory state via Splunk REST. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: _audit.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=_audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by search_id user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Expands multivalue fields with `mvexpand` — use `limit=` to cap row explosion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(dispatchState) AND (now()-last_seen)<86400` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Search-head audit-trail completeness — deleted or rewritten search jobs**): table user search_id artifact_ids first_seen last_seen\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart of job-vs-artifact reconciliation, table of missing artifacts by user, single-value 'search jobs with artifact present (7d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We watch for breaks in the audit log stream so you can fix gaps before compliance or security reviews find them first.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "HIPAA",
                "PCI-DSS",
                "SOC-2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA",
                  "v": "2013-final",
                  "cl": "§164.312(b)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA §164.312(b) (Audit controls) is enforced — Splunk UC-22.35.5: Search-head audit-trail completeness — deleted or rewritten search jobs.",
                  "ea": "Saved search 'UC-22.35.5' running on _audit (action=search), /services/search/jobs REST for current artifacts, dispatch directory state via Splunk REST., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "10.3.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 10.3.2 is enforced — Splunk UC-22.35.5: Search-head audit-trail completeness — deleted or rewritten search jobs.",
                  "ea": "Saved search 'UC-22.35.5' running on _audit (action=search), /services/search/jobs REST for current artifacts, dispatch directory state via Splunk REST., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC7.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC7.2 (System monitoring for anomalies) is enforced — Splunk UC-22.35.5: Search-head audit-trail completeness — deleted or rewritten search jobs.",
                  "ea": "Saved search 'UC-22.35.5' running on _audit (action=search), /services/search/jobs REST for current artifacts, dispatch directory state via Splunk REST., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.36",
          "n": "— additional UCs (Data subject rights fulfillment)",
          "u": [
            {
              "i": "22.36.4",
              "n": "DSAR identity-verification friction — failed-verification anomaly",
              "c": "high",
              "f": "intermediate",
              "v": "Takes the single most common enforcement narrative ('the verification process itself denied the right') and turns it into a measurable metric.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "DSAR platform identity-verification events (OneTrust, TrustArc, ServiceNow Privacy), IdP step-up audit logs, manual review tickets.",
              "q": "index=dsar sourcetype=onetrust:dsar OR sourcetype=servicenow:privacy action=verification earliest=-14d\n| eval pass=if(result=\"success\",1,0), fail=if(result=\"failure\",1,0)\n| stats sum(pass) as passes sum(fail) as failures values(ticket_id) as ticket_ids by subject_token\n| where failures>=3 AND passes<1\n| table subject_token failures ticket_ids",
              "m": "(1) Ensure DSAR verification events emit subject_token and result; (2) run daily; (3) feed findings to a DPO review dashboard; (4) after a fixed threshold, escalate via SOAR for legal review.",
              "z": "Bar chart of verification failure rate per channel, table of repeat-failure subjects, single-value 'repeat-failure subjects (14d)'.",
              "kfp": "Genuine impersonation attempts will (correctly) produce multiple failures for the same subject_token; pair with IdP risk signals before escalating.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [California CCPA/CPRA](https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=CIV&division=3.&title=1.81.5.&part=4.&chapter=&article=)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: DSAR platform identity-verification events (OneTrust, TrustArc, ServiceNow Privacy), IdP step-up audit logs, manual review tickets..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure DSAR verification events emit subject_token and result; (2) run daily; (3) feed findings to a DPO review dashboard; (4) after a fixed threshold, escalate via SOAR for legal review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dsar sourcetype=onetrust:dsar OR sourcetype=servicenow:privacy action=verification earliest=-14d\n| eval pass=if(result=\"success\",1,0), fail=if(result=\"failure\",1,0)\n| stats sum(pass) as passes sum(fail) as failures values(ticket_id) as ticket_ids by subject_token\n| where failures>=3 AND passes<1\n| table subject_token failures ticket_ids\n```\n\nUnderstanding this SPL\n\n**DSAR identity-verification friction — failed-verification anomaly** — Takes the single most common enforcement narrative ('the verification process itself denied the right') and turns it into a measurable metric.\n\nDocumented **Data sources**: DSAR platform identity-verification events (OneTrust, TrustArc, ServiceNow Privacy), IdP step-up audit logs, manual review tickets. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dsar; **sourcetype**: onetrust:dsar, servicenow:privacy. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dsar, sourcetype=onetrust:dsar, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pass** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by subject_token** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where failures>=3 AND passes<1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DSAR identity-verification friction — failed-verification anomaly**): table subject_token failures ticket_ids\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart of verification failure rate per channel, table of repeat-failure subjects, single-value 'repeat-failure subjects (14d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you see when data-subject requests are running late so privacy duties stay on track across systems.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "Ticket_Management"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.130(a)",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of CCPA §1798.130(a) — Splunk UC-22.36.4: DSAR identity-verification friction — failed-verification anomaly.",
                  "ea": "Saved search 'UC-22.36.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.12(6)",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of GDPR Art.12(6) — Splunk UC-22.36.4: DSAR identity-verification friction — failed-verification anomaly.",
                  "ea": "Saved search 'UC-22.36.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.12(6)",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of UK GDPR Art.12(6) — Splunk UC-22.36.4: DSAR identity-verification friction — failed-verification anomaly.",
                  "ea": "Saved search 'UC-22.36.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.36.5",
              "n": "DSAR request-type mix anomaly — zero-deletion skew indicating broken workflow",
              "c": "medium",
              "f": "intermediate",
              "v": "Makes the common 'silent bug in the privacy portal' failure mode visible without waiting for a complaint.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "DSAR ticketing system request-type field, historic aggregates from compliance summary index.",
              "q": "index=dsar sourcetype=onetrust:dsar OR sourcetype=servicenow:privacy earliest=-84d\n| bin _time span=7d\n| stats count by _time request_type\n| eventstats sum(count) as total by _time\n| eval ratio=count/total\n| stats avg(ratio) as baseline, stdev(ratio) as baseline_sd latest(ratio) as current by request_type\n| eval drift=abs(current-baseline)\n| where drift>(3*baseline_sd) AND baseline_sd>0\n| table request_type baseline current drift baseline_sd",
              "m": "(1) Ensure DSAR events emit a canonical request_type from an allow-list; (2) run weekly; (3) on fire, open a DPO + engineering review ticket.",
              "z": "Stacked area of request-type mix over 12 weeks, bar chart of current-vs-baseline deltas, single-value 'drift categories this week'.",
              "kfp": "Seasonal campaigns (e.g., year-end 'delete my data' reminders) legitimately shift the mix; maintain a known-campaign lookup to dampen the signal for those windows.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [California CCPA/CPRA](https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=CIV&division=3.&title=1.81.5.&part=4.&chapter=&article=)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: DSAR ticketing system request-type field, historic aggregates from compliance summary index..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure DSAR events emit a canonical request_type from an allow-list; (2) run weekly; (3) on fire, open a DPO + engineering review ticket.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dsar sourcetype=onetrust:dsar OR sourcetype=servicenow:privacy earliest=-84d\n| bin _time span=7d\n| stats count by _time request_type\n| eventstats sum(count) as total by _time\n| eval ratio=count/total\n| stats avg(ratio) as baseline, stdev(ratio) as baseline_sd latest(ratio) as current by request_type\n| eval drift=abs(current-baseline)\n| where drift>(3*baseline_sd) AND baseline_sd>0\n| table request_type baseline current drift baseline_sd\n```\n\nUnderstanding this SPL\n\n**DSAR request-type mix anomaly — zero-deletion skew indicating broken workflow** — Makes the common 'silent bug in the privacy portal' failure mode visible without waiting for a complaint.\n\nDocumented **Data sources**: DSAR ticketing system request-type field, historic aggregates from compliance summary index. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dsar; **sourcetype**: onetrust:dsar, servicenow:privacy. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dsar, sourcetype=onetrust:dsar, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time request_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **ratio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by request_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **drift** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drift>(3*baseline_sd) AND baseline_sd>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DSAR request-type mix anomaly — zero-deletion skew indicating broken workflow**): table request_type baseline current drift baseline_sd\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked area of request-type mix over 12 weeks, bar chart of current-vs-baseline deltas, single-value 'drift categories this week'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when data-subject requests are running late so privacy duties stay on track across systems.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "Ticket_Management"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.130(a)",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of CCPA §1798.130(a) — Splunk UC-22.36.5: DSAR request-type mix anomaly — zero-deletion skew indicating broken workflow.",
                  "ea": "Saved search 'UC-22.36.5' running on DSAR ticketing system request-type field, historic aggregates from compliance summary index., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.12(2)",
                  "m": "detects-violation-of",
                  "a": "contributing",
                  "co": "Detects violations of GDPR Art.12(2) — Splunk UC-22.36.5: DSAR request-type mix anomaly — zero-deletion skew indicating broken workflow.",
                  "ea": "Saved search 'UC-22.36.5' running on DSAR ticketing system request-type field, historic aggregates from compliance summary index., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.37",
          "n": "— additional UCs (Consent lifecycle and lawful basis)",
          "u": [
            {
              "i": "22.37.4",
              "n": "Purpose-limitation enforcement — processing not matching declared purpose",
              "c": "high",
              "f": "advanced",
              "v": "Converts an otherwise-abstract regulatory principle into a continuously-measurable control with evidentiary value.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Application audit logs carrying the purpose-bearing access token, system-purpose catalogue in purpose_catalogue.csv.",
              "q": "index=app_activity earliest=-1h token_purpose=*\n| lookup purpose_catalogue.csv system_id OUTPUT allowed_purposes\n| eval ok=if(match(allowed_purposes,token_purpose),1,0)\n| where ok=0\n| stats count values(user) as users values(subject_token) as subjects by system_id token_purpose allowed_purposes\n| table system_id token_purpose allowed_purposes count users subjects",
              "m": "(1) Mint OAuth/OIDC tokens with a purpose claim; (2) publish purpose_catalogue.csv keyed by system_id; (3) run hourly; (4) alert DPO on mismatches and drive remediation via SOAR.",
              "z": "Sankey of token-purpose → system, heatmap of mismatches by system, single-value 'mismatch events (24h)'.",
              "kfp": "Shared service accounts without per-request purpose context cannot be evaluated; filter them out via service_account_allowlist.csv.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [California CCPA/CPRA](https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=CIV&division=3.&title=1.81.5.&part=4.&chapter=&article=), [LGPD (Brazil Law 13,709/2018)](https://www.planalto.gov.br/ccivil_03/_ato2015-2018/2018/lei/l13709.htm), [MITRE ATT&CK — T1552 Unsecured Credentials](https://attack.mitre.org/techniques/T1552/)",
              "mitre": [
                "T1552"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Application audit logs carrying the purpose-bearing access token, system-purpose catalogue in purpose_catalogue.csv..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Mint OAuth/OIDC tokens with a purpose claim; (2) publish purpose_catalogue.csv keyed by system_id; (3) run hourly; (4) alert DPO on mismatches and drive remediation via SOAR.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app_activity earliest=-1h token_purpose=*\n| lookup purpose_catalogue.csv system_id OUTPUT allowed_purposes\n| eval ok=if(match(allowed_purposes,token_purpose),1,0)\n| where ok=0\n| stats count values(user) as users values(subject_token) as subjects by system_id token_purpose allowed_purposes\n| table system_id token_purpose allowed_purposes count users subjects\n```\n\nUnderstanding this SPL\n\n**Purpose-limitation enforcement — processing not matching declared purpose** — Converts an otherwise-abstract regulatory principle into a continuously-measurable control with evidentiary value.\n\nDocumented **Data sources**: Application audit logs carrying the purpose-bearing access token, system-purpose catalogue in purpose_catalogue.csv. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app_activity.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app_activity, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **ok** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ok=0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by system_id token_purpose allowed_purposes** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Purpose-limitation enforcement — processing not matching declared purpose**): table system_id token_purpose allowed_purposes count users subjects\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey of token-purpose → system, heatmap of mismatches by system, single-value 'mismatch events (24h)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you track consent and choices over time so your marketing and storage stay aligned with what people agreed to.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR",
                "LGPD"
              ],
              "a": [
                "Authentication"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.100(b)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CCPA §1798.100(b) is enforced — Splunk UC-22.37.4: Purpose-limitation enforcement — processing not matching declared purpose.",
                  "ea": "Saved search 'UC-22.37.4' running on Application audit logs carrying the purpose-bearing access token, system-purpose catalogue in purpose_catalogue.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5(1)(b)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.5(1)(b) is enforced — Splunk UC-22.37.4: Purpose-limitation enforcement — processing not matching declared purpose.",
                  "ea": "Saved search 'UC-22.37.4' running on Application audit logs carrying the purpose-bearing access token, system-purpose catalogue in purpose_catalogue.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "LGPD",
                  "v": "Lei nº 13.709/2018",
                  "cl": "Art.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that LGPD Art.6 is enforced — Splunk UC-22.37.4: Purpose-limitation enforcement — processing not matching declared purpose.",
                  "ea": "Saved search 'UC-22.37.4' running on Application audit logs carrying the purpose-bearing access token, system-purpose catalogue in purpose_catalogue.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.5(1)(b)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.5(1)(b) is enforced — Splunk UC-22.37.4: Purpose-limitation enforcement — processing not matching declared purpose.",
                  "ea": "Saved search 'UC-22.37.4' running on Application audit logs carrying the purpose-bearing access token, system-purpose catalogue in purpose_catalogue.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.37.5",
              "n": "IAB TCF consent string mutation without user interaction",
              "c": "high",
              "f": "advanced",
              "v": "Catches a pattern that has surfaced repeatedly in EDPB enforcement decisions (Meta, Google ad-tech) before it becomes a supervisory finding.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Web access logs capturing Set-Cookie: euconsent-v2, CMP interaction events (onetrust:cmp), client-side analytics.",
              "q": "index=web sourcetype=access_combined earliest=-1h euconsent_v2=*\n| sort 0 session_id _time\n| streamstats current=f last(euconsent_v2) as prior_consent by session_id\n| where isnotnull(prior_consent) AND euconsent_v2!=prior_consent\n| join type=outer session_id _time [search index=cmp sourcetype=onetrust:cmp action=interaction earliest=-1h | fields session_id _time | rename _time as interaction_time]\n| where isnull(interaction_time) OR abs(_time-interaction_time)>60\n| table _time session_id prior_consent euconsent_v2",
              "m": "(1) Preserve Set-Cookie headers in access logs; (2) ingest CMP interaction events; (3) run hourly; (4) on hit, freeze the vendor via ad-tech allow-list and open a DPO ticket.",
              "z": "Table of mutating sessions by vendor, bar chart of mutations per ad-tech partner, single-value 'sessions with genuine consent (24h)'.",
              "kfp": "CMP token refreshes (same purpose mask, new expiry) can legitimately rewrite the TC string; allow-list based on purpose-bitmap equality.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [California CCPA/CPRA](https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=CIV&division=3.&title=1.81.5.&part=4.&chapter=&article=), [MITRE ATT&CK — T1557 Adversary-in-the-Middle](https://attack.mitre.org/techniques/T1557/)",
              "mitre": [
                "T1557"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Web access logs capturing Set-Cookie: euconsent-v2, CMP interaction events (onetrust:cmp), client-side analytics..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Preserve Set-Cookie headers in access logs; (2) ingest CMP interaction events; (3) run hourly; (4) on hit, freeze the vendor via ad-tech allow-list and open a DPO ticket.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=access_combined earliest=-1h euconsent_v2=*\n| sort 0 session_id _time\n| streamstats current=f last(euconsent_v2) as prior_consent by session_id\n| where isnotnull(prior_consent) AND euconsent_v2!=prior_consent\n| join type=outer session_id _time [search index=cmp sourcetype=onetrust:cmp action=interaction earliest=-1h | fields session_id _time | rename _time as interaction_time]\n| where isnull(interaction_time) OR abs(_time-interaction_time)>60\n| table _time session_id prior_consent euconsent_v2\n```\n\nUnderstanding this SPL\n\n**IAB TCF consent string mutation without user interaction** — Catches a pattern that has surfaced repeatedly in EDPB enforcement decisions (Meta, Google ad-tech) before it becomes a supervisory finding.\n\nDocumented **Data sources**: Web access logs capturing Set-Cookie: euconsent-v2, CMP interaction events (onetrust:cmp), client-side analytics. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=access_combined, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by session_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where isnotnull(prior_consent) AND euconsent_v2!=prior_consent` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(interaction_time) OR abs(_time-interaction_time)>60` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **IAB TCF consent string mutation without user interaction**): table _time session_id prior_consent euconsent_v2\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of mutating sessions by vendor, bar chart of mutations per ad-tech partner, single-value 'sessions with genuine consent (24h)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you track consent and choices over time so your marketing and storage stay aligned with what people agreed to.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "CCPA",
                "GDPR"
              ],
              "a": [
                "Web"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.135(b)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that CCPA §1798.135(b) is enforced — Splunk UC-22.37.5: IAB TCF consent string mutation without user interaction.",
                  "ea": "Saved search 'UC-22.37.5' running on Web access logs capturing Set-Cookie: euconsent-v2, CMP interaction events (onetrust:cmp), client-side analytics., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.7(1) is enforced — Splunk UC-22.37.5: IAB TCF consent string mutation without user interaction.",
                  "ea": "Saved search 'UC-22.37.5' running on Web access logs capturing Set-Cookie: euconsent-v2, CMP interaction events (onetrust:cmp), client-side analytics., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.7(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.7(1) is enforced — Splunk UC-22.37.5: IAB TCF consent string mutation without user interaction.",
                  "ea": "Saved search 'UC-22.37.5' running on Web access logs capturing Set-Cookie: euconsent-v2, CMP interaction events (onetrust:cmp), client-side analytics., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.38",
          "n": "— additional UCs (Cross-border transfer controls)",
          "u": [
            {
              "i": "22.38.4",
              "n": "Transfer Impact Assessment currency — stale Schrems II assessments",
              "c": "medium",
              "f": "beginner",
              "v": "Converts a calendar reminder into a demonstrable control with evidence per-transfer.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "TIA register (tia_register.csv), transfer_register.csv, DPO review events.",
              "q": "| inputlookup tia_register.csv\n| eval age_days=round((now()-strptime(last_reviewed,\"%Y-%m-%d\"))/86400,0)\n| where age_days>cadence_days\n| lookup transfer_register.csv tia_id OUTPUT transfer_id destination\n| table tia_id destination last_reviewed age_days cadence_days transfer_id",
              "m": "(1) Maintain tia_register.csv; (2) run weekly; (3) on miss, DPO re-opens the TIA review workflow.",
              "z": "Table of stale TIAs with remediation owner, bar chart of TIAs by age bucket, single-value 'TIAs within cadence'.",
              "kfp": "TIAs under active review (with a workflow_state='in_review' flag) should be excluded via workflow_state.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [ICO International Transfers Guidance](https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/international-transfers/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: TIA register (tia_register.csv), transfer_register.csv, DPO review events..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain tia_register.csv; (2) run weekly; (3) on miss, DPO re-opens the TIA review workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup tia_register.csv\n| eval age_days=round((now()-strptime(last_reviewed,\"%Y-%m-%d\"))/86400,0)\n| where age_days>cadence_days\n| lookup transfer_register.csv tia_id OUTPUT transfer_id destination\n| table tia_id destination last_reviewed age_days cadence_days transfer_id\n```\n\nUnderstanding this SPL\n\n**Transfer Impact Assessment currency — stale Schrems II assessments** — Converts a calendar reminder into a demonstrable control with evidence per-transfer.\n\nDocumented **Data sources**: TIA register (tia_register.csv), transfer_register.csv, DPO review events. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>cadence_days` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Pipeline stage (see **Transfer Impact Assessment currency — stale Schrems II assessments**): table tia_id destination last_reviewed age_days cadence_days transfer_id\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of stale TIAs with remediation owner, bar chart of TIAs by age bucket, single-value 'TIAs within cadence'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see cross-border data flows against your rules so transfers stay justified and documented.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "GDPR",
                "UK-GDPR"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.46 (Transfers subject to safeguards) is enforced — Splunk UC-22.38.4: Transfer Impact Assessment currency — stale Schrems II assessments.",
                  "ea": "Saved search 'UC-22.38.4' running on TIA register (tia_register.csv), transfer_register.csv, DPO review events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.46 is enforced — Splunk UC-22.38.4: Transfer Impact Assessment currency — stale Schrems II assessments.",
                  "ea": "Saved search 'UC-22.38.4' running on TIA register (tia_register.csv), transfer_register.csv, DPO review events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK-GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.46",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that UK-GDPR Art.46 is enforced — Splunk UC-22.38.4: Transfer Impact Assessment currency — stale Schrems II assessments.",
                  "ea": "Saved search 'UC-22.38.4' running on TIA register (tia_register.csv), transfer_register.csv, DPO review events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.38.5",
              "n": "Bulk regulated-data export targeting non-adequate jurisdiction",
              "c": "critical",
              "f": "advanced",
              "v": "Turns post-hoc 'where did our customer dataset go?' incident forensics into real-time prevention.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "DB export audit logs, cloud storage PUT events, DLP bulk-operation classifications, sanctioned_transfer_jurisdictions.csv.",
              "q": "index=db_audit action=export OR (index=cloud_storage action=PutObject bytes>1073741824) earliest=-10m\n| lookup data_classification.csv object_key OUTPUT classification regulated_scope\n| where isnotnull(regulated_scope)\n| `get_geolocation(dest_ip)`\n| lookup sanctioned_transfer_jurisdictions.csv country as dest_country OUTPUT scc_active adequacy_decision\n| where isnull(scc_active) AND isnull(adequacy_decision)\n| table _time user object_key classification regulated_scope dest_country bytes",
              "m": "(1) Ensure DB and cloud storage layers emit size + destination; (2) maintain classification and jurisdiction lookups; (3) schedule every 10 min; (4) SOAR auto-blocks destination if exception policy permits.",
              "z": "World-map of bulk exports, table of findings with destination geography, single-value 'bulk exports within sanctioned geographies'.",
              "kfp": "Bona-fide DR or migration operations authorised under a dedicated exception ID must be logged to a migrations.csv allow-list.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [MITRE ATT&CK — T1567 Exfiltration Over Web Service](https://attack.mitre.org/techniques/T1567/)",
              "mitre": [
                "T1567"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: DB export audit logs, cloud storage PUT events, DLP bulk-operation classifications, sanctioned_transfer_jurisdictions.csv..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure DB and cloud storage layers emit size + destination; (2) maintain classification and jurisdiction lookups; (3) schedule every 10 min; (4) SOAR auto-blocks destination if exception policy permits.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=db_audit action=export OR (index=cloud_storage action=PutObject bytes>1073741824) earliest=-10m\n| lookup data_classification.csv object_key OUTPUT classification regulated_scope\n| where isnotnull(regulated_scope)\n| `get_geolocation(dest_ip)`\n| lookup sanctioned_transfer_jurisdictions.csv country as dest_country OUTPUT scc_active adequacy_decision\n| where isnull(scc_active) AND isnull(adequacy_decision)\n| table _time user object_key classification regulated_scope dest_country bytes\n```\n\nUnderstanding this SPL\n\n**Bulk regulated-data export targeting non-adequate jurisdiction** — Turns post-hoc 'where did our customer dataset go?' incident forensics into real-time prevention.\n\nDocumented **Data sources**: DB export audit logs, cloud storage PUT events, DLP bulk-operation classifications, sanctioned_transfer_jurisdictions.csv. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: db_audit, cloud_storage.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=db_audit, index=cloud_storage, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(regulated_scope)` — typically the threshold or rule expression for this monitoring goal.\n• Invokes macro `get_geolocation(dest_ip)` — in Search, use the UI or expand to inspect the underlying SPL.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(scc_active) AND isnull(adequacy_decision)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Bulk regulated-data export targeting non-adequate jurisdiction**): table _time user object_key classification regulated_scope dest_country bytes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: World-map of bulk exports, table of findings with destination geography, single-value 'bulk exports within sanctioned geographies'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you see cross-border data flows against your rules so transfers stay justified and documented.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "GDPR"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.28",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that DORA Art.28 (ICT third-party risk) is enforced — Splunk UC-22.38.5: Bulk regulated-data export targeting non-adequate jurisdiction.",
                  "ea": "Saved search 'UC-22.38.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.44",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.44 (International transfers — general principle) is enforced — Splunk UC-22.38.5: Bulk regulated-data export targeting non-adequate jurisdiction.",
                  "ea": "Saved search 'UC-22.38.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.45",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.45 (Transfers via adequacy decision) is enforced — Splunk UC-22.38.5: Bulk regulated-data export targeting non-adequate jurisdiction.",
                  "ea": "Saved search 'UC-22.38.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.44",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.44 is enforced — Splunk UC-22.38.5: Bulk regulated-data export targeting non-adequate jurisdiction.",
                  "ea": "Saved search 'UC-22.38.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.45",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.45 is enforced — Splunk UC-22.38.5: Bulk regulated-data export targeting non-adequate jurisdiction.",
                  "ea": "Saved search 'UC-22.38.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 32.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.39",
          "n": "— additional UCs (Incident notification timeliness)",
          "u": [
            {
              "i": "22.39.4",
              "n": "Cross-regulator consistency — divergent material facts across submissions",
              "c": "high",
              "f": "advanced",
              "v": "Prevents the 'regulator cross-reference' problem that has escalated several prominent supervisory cases from technical fines to broader corporate sanction.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "regulator:submission sourcetype records with normalised material-fact fields (affected_subjects, data_classes, root_cause).",
              "q": "index=notifications sourcetype=regulator:submission earliest=-30d\n| stats values(affected_subjects) as subjects_set values(data_classes) as data_set dc(affected_subjects) as subjects_uniques dc(data_classes) as data_uniques by incident_id\n| where subjects_uniques>1 OR data_uniques>1\n| table incident_id subjects_set data_set",
              "m": "(1) Normalise submission events to carry canonical material-fact fields; (2) run 24h after any new submission for the incident_id; (3) alert DPO + legal if divergence found.",
              "z": "Table of divergent incidents, bar chart of divergence magnitude, single-value 'incidents with consistent submissions'.",
              "kfp": "Genuine refinement of scope between the initial 72h notice and the 30-day follow-up is legitimate and expected; parametrise a tolerance window.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: regulator:submission sourcetype records with normalised material-fact fields (affected_subjects, data_classes, root_cause)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalise submission events to carry canonical material-fact fields; (2) run 24h after any new submission for the incident_id; (3) alert DPO + legal if divergence found.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=notifications sourcetype=regulator:submission earliest=-30d\n| stats values(affected_subjects) as subjects_set values(data_classes) as data_set dc(affected_subjects) as subjects_uniques dc(data_classes) as data_uniques by incident_id\n| where subjects_uniques>1 OR data_uniques>1\n| table incident_id subjects_set data_set\n```\n\nUnderstanding this SPL\n\n**Cross-regulator consistency — divergent material facts across submissions** — Prevents the 'regulator cross-reference' problem that has escalated several prominent supervisory cases from technical fines to broader corporate sanction.\n\nDocumented **Data sources**: regulator:submission sourcetype records with normalised material-fact fields (affected_subjects, data_classes, root_cause). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: notifications; **sourcetype**: regulator:submission. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=notifications, sourcetype=regulator:submission, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by incident_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where subjects_uniques>1 OR data_uniques>1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cross-regulator consistency — divergent material facts across submissions**): table incident_id subjects_set data_set\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of divergent incidents, bar chart of divergence magnitude, single-value 'incidents with consistent submissions'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you see when incident clocks are tight so you can meet notification duties even after hours.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "GDPR",
                "NIS2"
              ],
              "a": [
                "Ticket_Management"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.19",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that DORA Art.19 (Reporting of major ICT-related incidents) is enforced — Splunk UC-22.39.4: Cross-regulator consistency — divergent material facts across submissions.",
                  "ea": "Saved search 'UC-22.39.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.33(3)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.33(3) is enforced — Splunk UC-22.39.4: Cross-regulator consistency — divergent material facts across submissions.",
                  "ea": "Saved search 'UC-22.39.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23(4)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.23(4) is enforced — Splunk UC-22.39.4: Cross-regulator consistency — divergent material facts across submissions.",
                  "ea": "Saved search 'UC-22.39.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.33(3)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.33(3) is enforced — Splunk UC-22.39.4: Cross-regulator consistency — divergent material facts across submissions.",
                  "ea": "Saved search 'UC-22.39.4' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 40,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.39.5",
              "n": "Regulator-portal authentication failure during submission window",
              "c": "high",
              "f": "beginner",
              "v": "Converts a class of silent failures (an expired portal API key, a stale MFA token) that has materialised in past incidents into an always-on monitoring pattern.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "IdP audit logs (okta:audit, azuread:audit), incident_register.csv with open_status flag.",
              "q": "index=idp sourcetype=okta:audit OR sourcetype=azuread:audit earliest=-15m result=failure\n| search user IN (`regulator_service_accounts`)\n| join user [| inputlookup incident_register.csv | where status=\"open\" | fields user=regulator_user incident_id]\n| table _time user incident_id src_ip failure_reason",
              "m": "(1) Maintain a named macro regulator_service_accounts listing the IdP principals; (2) run every 5-10 min; (3) SOAR pages the DPO on hit.",
              "z": "Time chart of portal authentications by account, table of failures, single-value 'successful authentications in open windows'.",
              "kfp": "Portal-initiated credential rotation can cause a single expected failure; require ≥2 failures in 5 min.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [MITRE ATT&CK — T1110 Brute Force](https://attack.mitre.org/techniques/T1110/)",
              "mitre": [
                "T1110"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: IdP audit logs (okta:audit, azuread:audit), incident_register.csv with open_status flag..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain a named macro regulator_service_accounts listing the IdP principals; (2) run every 5-10 min; (3) SOAR pages the DPO on hit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=idp sourcetype=okta:audit OR sourcetype=azuread:audit earliest=-15m result=failure\n| search user IN (`regulator_service_accounts`)\n| join user [| inputlookup incident_register.csv | where status=\"open\" | fields user=regulator_user incident_id]\n| table _time user incident_id src_ip failure_reason\n```\n\nUnderstanding this SPL\n\n**Regulator-portal authentication failure during submission window** — Converts a class of silent failures (an expired portal API key, a stale MFA token) that has materialised in past incidents into an always-on monitoring pattern.\n\nDocumented **Data sources**: IdP audit logs (okta:audit, azuread:audit), incident_register.csv with open_status flag. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: idp; **sourcetype**: okta:audit, azuread:audit. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=idp, sourcetype=okta:audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Pipeline stage (see **Regulator-portal authentication failure during submission window**): table _time user incident_id src_ip failure_reason\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Time chart of portal authentications by account, table of failures, single-value 'successful authentications in open windows'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when incident clocks are tight so you can meet notification duties even after hours.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "GDPR",
                "NIS2"
              ],
              "a": [
                "Authentication"
              ],
              "e": [
                "azure",
                "okta"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.33(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that GDPR Art.33(1) is enforced — Splunk UC-22.39.5: Regulator-portal authentication failure during submission window.",
                  "ea": "Saved search 'UC-22.39.5' running on IdP audit logs (okta:audit, azuread:audit), incident_register.csv with open_status flag., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23(1)",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIS2 Art.23(1) is enforced — Splunk UC-22.39.5: Regulator-portal authentication failure during submission window.",
                  "ea": "Saved search 'UC-22.39.5' running on IdP audit logs (okta:audit, azuread:audit), incident_register.csv with open_status flag., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.40",
          "n": "— additional UCs (Privileged access evidence)",
          "u": [
            {
              "i": "22.40.4",
              "n": "Standing-privilege credential vaulting drift — admin accounts outside PAM",
              "c": "critical",
              "f": "advanced",
              "v": "Replaces the quarterly 'access-review export + spreadsheet join' with a daily evidence feed.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "IdP privileged-group membership (Azure AD, Okta, LDAP), PAM vault inventory (CyberArk, BeyondTrust, Delinea).",
              "q": "| rest /services/identity/privileged-groups splunk_server=*\n| fields user group\n| join type=outer user [| rest /services/pam/vault-inventory splunk_server=* | fields user vault_id]\n| where isnull(vault_id)\n| stats values(group) as groups by user\n| table user groups",
              "m": "(1) Expose IdP privileged groups and PAM vault inventory via REST; (2) run daily; (3) on miss, auto-open a PAM onboarding ticket with the CISO.",
              "z": "Table of drift accounts by group, bar chart of drift count by tenant, single-value '% of admins vaulted'.",
              "kfp": "Break-glass accounts are intentionally outside the vault; maintain breakglass_accounts.csv to allow-list.",
              "refs": "[NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [MITRE ATT&CK — T1078.003 Valid Accounts: Local Accounts](https://attack.mitre.org/techniques/T1078/003/)",
              "mitre": [
                "T1078.003"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: IdP privileged-group membership (Azure AD, Okta, LDAP), PAM vault inventory (CyberArk, BeyondTrust, Delinea)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Expose IdP privileged groups and PAM vault inventory via REST; (2) run daily; (3) on miss, auto-open a PAM onboarding ticket with the CISO.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/identity/privileged-groups splunk_server=*\n| fields user group\n| join type=outer user [| rest /services/pam/vault-inventory splunk_server=* | fields user vault_id]\n| where isnull(vault_id)\n| stats values(group) as groups by user\n| table user groups\n```\n\nUnderstanding this SPL\n\n**Standing-privilege credential vaulting drift — admin accounts outside PAM** — Replaces the quarterly 'access-review export + spreadsheet join' with a daily evidence feed.\n\nDocumented **Data sources**: IdP privileged-group membership (Azure AD, Okta, LDAP), PAM vault inventory (CyberArk, BeyondTrust, Delinea). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(vault_id)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **Standing-privilege credential vaulting drift — admin accounts outside PAM**): table user groups\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of drift accounts by group, bar chart of drift count by tenant, single-value '% of admins vaulted'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you show who used powerful accounts and when, so access reviews and investigations have a clear story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIST-800-53",
                "PCI-DSS",
                "SOX-ITGC"
              ],
              "a": [
                "Authentication",
                "Change"
              ],
              "e": [
                "azure",
                "beyondtrust",
                "cyberark",
                "hashicorp",
                "okta"
              ],
              "em": [
                "hashicorp_vault"
              ],
              "cmp": [
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "AC-5",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 AC-5 is enforced — Splunk UC-22.40.4: Standing-privilege credential vaulting drift — admin accounts outside PAM.",
                  "ea": "Saved search 'UC-22.40.4' running on IdP privileged-group membership (Azure AD, Okta, LDAP), PAM vault inventory (CyberArk, BeyondTrust, Delinea)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "7.2.5.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 7.2.5.1 is enforced — Splunk UC-22.40.4: Standing-privilege credential vaulting drift — admin accounts outside PAM.",
                  "ea": "Saved search 'UC-22.40.4' running on IdP privileged-group membership (Azure AD, Okta, LDAP), PAM vault inventory (CyberArk, BeyondTrust, Delinea)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.Privileged",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX-ITGC ITGC.AccessMgmt.Privileged (Privileged access) is enforced — Splunk UC-22.40.4: Standing-privilege credential vaulting drift — admin accounts outside PAM.",
                  "ea": "Saved search 'UC-22.40.4' running on IdP privileged-group membership (Azure AD, Okta, LDAP), PAM vault inventory (CyberArk, BeyondTrust, Delinea)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.40.5",
              "n": "High-risk privileged-session command without JIT approval",
              "c": "critical",
              "f": "advanced",
              "v": "Makes 'we require JIT approval for dangerous commands' an observable, always-on control.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "PAM session keystroke/command logs, GRC JIT ticketing events, high-risk command catalogue (high_risk_commands.csv).",
              "q": "index=pam sourcetype=cyberark:command OR sourcetype=beyondtrust:command earliest=-5m\n| lookup high_risk_commands.csv command_pattern AS command OUTPUT risk_class\n| where risk_class IN (\"destructive\",\"mass-data\",\"security-control\")\n| join session_id [search index=grc sourcetype=jit:approval earliest=-1h | fields session_id ticket_id approver approved_at]\n| where isnull(ticket_id)\n| table _time session_id user command risk_class target_host",
              "m": "(1) Curate high_risk_commands.csv with regex patterns; (2) instrument GRC JIT to emit session_id; (3) run every 5 min; (4) SOAR auto-terminates session on hit.",
              "z": "Table of unapproved high-risk commands, bar chart of commands by risk class, single-value 'high-risk commands approved (7d)'.",
              "kfp": "Automated runbook executions need a runbook_id that looks like a JIT ticket; ensure runbook automation is integrated with the JIT ledger.",
              "refs": "[NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [MITRE ATT&CK — T1078.004 Valid Accounts: Cloud Accounts](https://attack.mitre.org/techniques/T1078/004/)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: PAM session keystroke/command logs, GRC JIT ticketing events, high-risk command catalogue (high_risk_commands.csv)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Curate high_risk_commands.csv with regex patterns; (2) instrument GRC JIT to emit session_id; (3) run every 5 min; (4) SOAR auto-terminates session on hit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=pam sourcetype=cyberark:command OR sourcetype=beyondtrust:command earliest=-5m\n| lookup high_risk_commands.csv command_pattern AS command OUTPUT risk_class\n| where risk_class IN (\"destructive\",\"mass-data\",\"security-control\")\n| join session_id [search index=grc sourcetype=jit:approval earliest=-1h | fields session_id ticket_id approver approved_at]\n| where isnull(ticket_id)\n| table _time session_id user command risk_class target_host\n```\n\nUnderstanding this SPL\n\n**High-risk privileged-session command without JIT approval** — Makes 'we require JIT approval for dangerous commands' an observable, always-on control.\n\nDocumented **Data sources**: PAM session keystroke/command logs, GRC JIT ticketing events, high-risk command catalogue (high_risk_commands.csv). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: pam; **sourcetype**: cyberark:command, beyondtrust:command. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=pam, sourcetype=cyberark:command, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where risk_class IN (\"destructive\",\"mass-data\",\"security-control\")` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where isnull(ticket_id)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **High-risk privileged-session command without JIT approval**): table _time session_id user command risk_class target_host\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of unapproved high-risk commands, bar chart of commands by risk class, single-value 'high-risk commands approved (7d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you show who used powerful accounts and when, so access reviews and investigations have a clear story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIST-800-53",
                "PCI-DSS",
                "SOX-ITGC"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "beyondtrust",
                "cyberark"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "AC-6(9)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 AC-6(9) is enforced — Splunk UC-22.40.5: High-risk privileged-session command without JIT approval.",
                  "ea": "Saved search 'UC-22.40.5' running on PAM session keystroke/command logs, GRC JIT ticketing events, high-risk command catalogue (high_risk_commands.csv)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "10.2.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 10.2.2 is enforced — Splunk UC-22.40.5: High-risk privileged-session command without JIT approval.",
                  "ea": "Saved search 'UC-22.40.5' running on PAM session keystroke/command logs, GRC JIT ticketing events, high-risk command catalogue (high_risk_commands.csv)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Privileged.JIT",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of SOX-ITGC ITGC.Privileged.JIT — Splunk UC-22.40.5: High-risk privileged-session command without JIT approval.",
                  "ea": "Saved search 'UC-22.40.5' running on PAM session keystroke/command logs, GRC JIT ticketing events, high-risk command catalogue (high_risk_commands.csv)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 32.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.41",
          "n": "— additional UCs (Encryption and key management attestation)",
          "u": [
            {
              "i": "22.41.4",
              "n": "TLS downgrade / legacy-cipher handshake spike",
              "c": "high",
              "f": "intermediate",
              "v": "Catches silent downgrade issues introduced by load-balancer config drift before they become an audit finding or data-exposure incident.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "F5/NGINX/HAProxy TLS handshake logs, cloud-provider LB telemetry, TLS scanner longitudinal results.",
              "q": "index=tls sourcetype=f5:tls OR sourcetype=nginx:tls OR sourcetype=haproxy:tls earliest=-1h\n| eval legacy=if(match(protocol,\"TLSv1\\.(0|1)\") OR match(cipher,\"3DES|RC4|NULL|DES|EXPORT\"),1,0)\n| stats sum(legacy) as legacy_hs count as total_hs by host\n| eval legacy_pct=round(legacy_hs/total_hs*100,2)\n| where legacy_pct>0.5 AND total_hs>100\n| table host legacy_hs total_hs legacy_pct",
              "m": "(1) Ensure LBs emit handshake telemetry; (2) schedule hourly; (3) route hits to security ops; (4) block via config-as-code rollback.",
              "z": "Line chart of legacy-handshake percentage per endpoint, table of offending endpoints, single-value '% handshakes on strong TLS (24h)'.",
              "kfp": "Legacy client populations (POS devices) can inflate TLS 1.0 usage in scope of a documented risk-acceptance; allow-list such endpoints.",
              "refs": "[PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [MITRE ATT&CK — T1562.006 Impair Defenses: Indicator Blocking](https://attack.mitre.org/techniques/T1562/006/)",
              "mitre": [
                "T1562.006"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: F5/NGINX/HAProxy TLS handshake logs, cloud-provider LB telemetry, TLS scanner longitudinal results..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure LBs emit handshake telemetry; (2) schedule hourly; (3) route hits to security ops; (4) block via config-as-code rollback.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=tls sourcetype=f5:tls OR sourcetype=nginx:tls OR sourcetype=haproxy:tls earliest=-1h\n| eval legacy=if(match(protocol,\"TLSv1\\.(0|1)\") OR match(cipher,\"3DES|RC4|NULL|DES|EXPORT\"),1,0)\n| stats sum(legacy) as legacy_hs count as total_hs by host\n| eval legacy_pct=round(legacy_hs/total_hs*100,2)\n| where legacy_pct>0.5 AND total_hs>100\n| table host legacy_hs total_hs legacy_pct\n```\n\nUnderstanding this SPL\n\n**TLS downgrade / legacy-cipher handshake spike** — Catches silent downgrade issues introduced by load-balancer config drift before they become an audit finding or data-exposure incident.\n\nDocumented **Data sources**: F5/NGINX/HAProxy TLS handshake logs, cloud-provider LB telemetry, TLS scanner longitudinal results. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: tls; **sourcetype**: f5:tls, nginx:tls, haproxy:tls. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=tls, sourcetype=f5:tls, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **legacy** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **legacy_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where legacy_pct>0.5 AND total_hs>100` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **TLS downgrade / legacy-cipher handshake spike**): table host legacy_hs total_hs legacy_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart of legacy-handshake percentage per endpoint, table of offending endpoints, single-value '% handshakes on strong TLS (24h)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see encryption coverage and gaps so you can show protections stay current across systems.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA",
                "NIST-800-53",
                "PCI-DSS"
              ],
              "a": [
                "Certificates"
              ],
              "e": [
                "haproxy",
                "nginx"
              ],
              "em": [
                "nginx_open"
              ],
              "cmp": [
                {
                  "r": "HIPAA",
                  "v": "2013-final",
                  "cl": "§164.312(e)(1)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA §164.312(e)(1) (Transmission security) is enforced — Splunk UC-22.41.4: TLS downgrade / legacy-cipher handshake spike.",
                  "ea": "Saved search 'UC-22.41.4' running on F5/NGINX/HAProxy TLS handshake logs, cloud-provider LB telemetry, TLS scanner longitudinal results., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "SC-8",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 SC-8 (Transmission confidentiality and integrity) is enforced — Splunk UC-22.41.4: TLS downgrade / legacy-cipher handshake spike.",
                  "ea": "Saved search 'UC-22.41.4' running on F5/NGINX/HAProxy TLS handshake logs, cloud-provider LB telemetry, TLS scanner longitudinal results., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "4.2.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 4.2.1 is enforced — Splunk UC-22.41.4: TLS downgrade / legacy-cipher handshake spike.",
                  "ea": "Saved search 'UC-22.41.4' running on F5/NGINX/HAProxy TLS handshake logs, cloud-provider LB telemetry, TLS scanner longitudinal results., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.41.5",
              "n": "Key custodian SoD — same identity creates AND approves a key",
              "c": "high",
              "f": "intermediate",
              "v": "Provides daily evidence of key-custodian SoD rather than the common annual sample.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "KMS audit logs (aws:kms, azure:keyvault, gcp:kms), HSM audit logs.",
              "q": "index=kms earliest=-30m (action=CreateKey OR action=ScheduleKeyActivation)\n| stats values(user) as users values(action) as actions by key_id\n| eval distinct_users=mvcount(mvdedup(users))\n| where distinct_users<2 AND mvcount(actions)>=2\n| table key_id users actions",
              "m": "(1) Ingest KMS audit logs; (2) run every 30 min; (3) escalate to the crypto custodian.",
              "z": "Table of SoD violations, bar chart of violations by key type, single-value 'SoD-compliant key operations (30d)'.",
              "kfp": "Automated key provisioning pipelines that run under a single service account may trigger this UC; they must be structured as two-stage with distinct approver accounts.",
              "refs": "[PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: KMS audit logs (aws:kms, azure:keyvault, gcp:kms), HSM audit logs..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest KMS audit logs; (2) run every 30 min; (3) escalate to the crypto custodian.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kms earliest=-30m (action=CreateKey OR action=ScheduleKeyActivation)\n| stats values(user) as users values(action) as actions by key_id\n| eval distinct_users=mvcount(mvdedup(users))\n| where distinct_users<2 AND mvcount(actions)>=2\n| table key_id users actions\n```\n\nUnderstanding this SPL\n\n**Key custodian SoD — same identity creates AND approves a key** — Provides daily evidence of key-custodian SoD rather than the common annual sample.\n\nDocumented **Data sources**: KMS audit logs (aws:kms, azure:keyvault, gcp:kms), HSM audit logs. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kms.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kms, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by key_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **distinct_users** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where distinct_users<2 AND mvcount(actions)>=2` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Key custodian SoD — same identity creates AND approves a key**): table key_id users actions\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of SoD violations, bar chart of violations by key type, single-value 'SoD-compliant key operations (30d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see encryption coverage and gaps so you can show protections stay current across systems.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIST-800-53",
                "PCI-DSS",
                "SOX-ITGC"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "azure",
                "hashicorp"
              ],
              "em": [
                "hashicorp_vault"
              ],
              "cmp": [
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "SC-12",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 SC-12 is enforced — Splunk UC-22.41.5: Key custodian SoD — same identity creates AND approves a key.",
                  "ea": "Saved search 'UC-22.41.5' running on KMS audit logs (aws:kms, azure:keyvault, gcp:kms), HSM audit logs., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "3.6.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 3.6.1 is enforced — Splunk UC-22.41.5: Key custodian SoD — same identity creates AND approves a key.",
                  "ea": "Saved search 'UC-22.41.5' running on KMS audit logs (aws:kms, azure:keyvault, gcp:kms), HSM audit logs., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Crypto.SoD",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of SOX-ITGC ITGC.Crypto.SoD — Splunk UC-22.41.5: Key custodian SoD — same identity creates AND approves a key.",
                  "ea": "Saved search 'UC-22.41.5' running on KMS audit logs (aws:kms, azure:keyvault, gcp:kms), HSM audit logs., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.42",
          "n": "— additional UCs (Change management and configuration baseline)",
          "u": [
            {
              "i": "22.42.3",
              "n": "Change rollback execution evidence — declared rollback vs actual",
              "c": "medium",
              "f": "intermediate",
              "v": "Turns rollback-plan existence from a documentation artefact into a tested capability.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "ITSM CR records (ServiceNow), CI/CD pipeline audit (rollback_test events).",
              "q": "| inputlookup itsm_changes.csv\n| where rollback_plan_required=\"true\"\n| lookup rollback_tests.csv change_id OUTPUT last_tested\n| eval age_days=round((now()-strptime(last_tested,\"%Y-%m-%d\"))/86400,0)\n| where isnull(last_tested) OR age_days>180\n| table change_id rollback_plan_required last_tested age_days",
              "m": "(1) Export CR metadata to itsm_changes.csv; (2) have CI/CD emit rollback_test events; (3) run weekly; (4) on miss, schedule a rollback rehearsal.",
              "z": "Aging bar chart of changes without rollback test, table of at-risk CRs, single-value 'CRs with tested rollback'.",
              "kfp": "Emergency hotfix CRs may legitimately have a 'forward-fix-only' rollback strategy; annotate with rollback_type to exclude them.",
              "refs": "[NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022), [PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: ITSM CR records (ServiceNow), CI/CD pipeline audit (rollback_test events)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export CR metadata to itsm_changes.csv; (2) have CI/CD emit rollback_test events; (3) run weekly; (4) on miss, schedule a rollback rehearsal.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup itsm_changes.csv\n| where rollback_plan_required=\"true\"\n| lookup rollback_tests.csv change_id OUTPUT last_tested\n| eval age_days=round((now()-strptime(last_tested,\"%Y-%m-%d\"))/86400,0)\n| where isnull(last_tested) OR age_days>180\n| table change_id rollback_plan_required last_tested age_days\n```\n\nUnderstanding this SPL\n\n**Change rollback execution evidence — declared rollback vs actual** — Turns rollback-plan existence from a documentation artefact into a tested capability.\n\nDocumented **Data sources**: ITSM CR records (ServiceNow), CI/CD pipeline audit (rollback_test events). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Filters the current rows with `where rollback_plan_required=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(last_tested) OR age_days>180` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Change rollback execution evidence — declared rollback vs actual**): table change_id rollback_plan_required last_tested age_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Aging bar chart of changes without rollback test, table of at-risk CRs, single-value 'CRs with tested rollback'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when configs drift from what you baselined, so only approved change moves through.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "NIST-800-53",
                "SOC-2",
                "SOX-ITGC"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "CM-3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 CM-3 is enforced — Splunk UC-22.42.3: Change rollback execution evidence — declared rollback vs actual.",
                  "ea": "Saved search 'UC-22.42.3' running on ITSM CR records (ServiceNow), CI/CD pipeline audit (rollback_test events)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC8.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC8.1 (Change management) is enforced — Splunk UC-22.42.3: Change rollback execution evidence — declared rollback vs actual.",
                  "ea": "Saved search 'UC-22.42.3' running on ITSM CR records (ServiceNow), CI/CD pipeline audit (rollback_test events)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Change.Rollback",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOX-ITGC ITGC.Change.Rollback is enforced — Splunk UC-22.42.3: Change rollback execution evidence — declared rollback vs actual.",
                  "ea": "Saved search 'UC-22.42.3' running on ITSM CR records (ServiceNow), CI/CD pipeline audit (rollback_test events)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.42.4",
              "n": "CAB approval bypass — change pushed before scheduled window",
              "c": "high",
              "f": "intermediate",
              "v": "Turns CAB approval from a calendar event into an operational control with per-change evidence.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "CI/CD pipeline apply events tagged with change_id, ServiceNow CR schedule data.",
              "q": "index=change sourcetype=prod:apply earliest=-1h\n| join change_id [search index=itsm sourcetype=servicenow:change | fields change_id window_start window_end approval_type]\n| eval start_s=strptime(window_start,\"%Y-%m-%dT%H:%M:%SZ\"), end_s=strptime(window_end,\"%Y-%m-%dT%H:%M:%SZ\")\n| where approval_type!=\"emergency\" AND (_time<start_s OR _time>end_s)\n| table _time change_id applied_by window_start window_end",
              "m": "(1) Force CI/CD pipelines to emit change_id; (2) ingest CR schedule data; (3) run hourly; (4) SOAR auto-freezes the pipeline on hit.",
              "z": "Table of out-of-window applies, bar chart of offending pipelines, single-value '% of changes applied in window'.",
              "kfp": "Timezone-normalised window_start/window_end is critical; a TZ mismatch will falsely flag correct changes.",
              "refs": "[AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [MITRE ATT&CK — T1562.001 Impair Defenses: Disable Tools](https://attack.mitre.org/techniques/T1562/001/)",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: CI/CD pipeline apply events tagged with change_id, ServiceNow CR schedule data..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Force CI/CD pipelines to emit change_id; (2) ingest CR schedule data; (3) run hourly; (4) SOAR auto-freezes the pipeline on hit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=change sourcetype=prod:apply earliest=-1h\n| join change_id [search index=itsm sourcetype=servicenow:change | fields change_id window_start window_end approval_type]\n| eval start_s=strptime(window_start,\"%Y-%m-%dT%H:%M:%SZ\"), end_s=strptime(window_end,\"%Y-%m-%dT%H:%M:%SZ\")\n| where approval_type!=\"emergency\" AND (_time<start_s OR _time>end_s)\n| table _time change_id applied_by window_start window_end\n```\n\nUnderstanding this SPL\n\n**CAB approval bypass — change pushed before scheduled window** — Turns CAB approval from a calendar event into an operational control with per-change evidence.\n\nDocumented **Data sources**: CI/CD pipeline apply events tagged with change_id, ServiceNow CR schedule data. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: change; **sourcetype**: prod:apply. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=change, sourcetype=prod:apply, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **start_s** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where approval_type!=\"emergency\" AND (_time<start_s OR _time>end_s)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CAB approval bypass — change pushed before scheduled window**): table _time change_id applied_by window_start window_end\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of out-of-window applies, bar chart of offending pipelines, single-value '% of changes applied in window'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when configs drift from what you baselined, so only approved change moves through.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIST-800-53",
                "SOC-2",
                "SOX-ITGC"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "CM-3",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of NIST-800-53 CM-3 — Splunk UC-22.42.4: CAB approval bypass — change pushed before scheduled window.",
                  "ea": "Saved search 'UC-22.42.4' running on CI/CD pipeline apply events tagged with change_id, ServiceNow CR schedule data., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC8.1",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of SOC-2 CC8.1 (Change management) — Splunk UC-22.42.4: CAB approval bypass — change pushed before scheduled window.",
                  "ea": "Saved search 'UC-22.42.4' running on CI/CD pipeline apply events tagged with change_id, ServiceNow CR schedule data., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Change.Approval",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of SOX-ITGC ITGC.Change.Approval — Splunk UC-22.42.4: CAB approval bypass — change pushed before scheduled window.",
                  "ea": "Saved search 'UC-22.42.4' running on CI/CD pipeline apply events tagged with change_id, ServiceNow CR schedule data., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.42.5",
              "n": "Infrastructure-as-code drift — applied state diverges from merged plan",
              "c": "high",
              "f": "advanced",
              "v": "Turns 'we use Terraform' into a continuously-verified statement.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Terraform/OpenTofu state snapshots, cloud provider inventory (AWS Config, Azure Resource Graph, GCP Asset Inventory), CI/CD merge events.",
              "q": "index=iac sourcetype=tf:state_snapshot earliest=-30m\n| stats latest(plan_checksum) as expected_checksum by resource_id\n| join resource_id [search index=cloud sourcetype=aws:config OR sourcetype=azure:rg OR sourcetype=gcp:asset earliest=-30m | stats latest(state_checksum) as actual_checksum by resource_id]\n| where expected_checksum!=actual_checksum\n| table resource_id expected_checksum actual_checksum",
              "m": "(1) Export per-plan checksums from CI on merge; (2) enable cloud inventory export; (3) run every 30 min; (4) SOAR auto-remediates via `terraform plan && apply` if allowed.",
              "z": "Heatmap of resources × drift, table of drifting resources, single-value '% of resources matching plan'.",
              "kfp": "Cloud-provider control-plane metadata changes (timestamps, auto-generated IDs) can fire; exclude from checksum calculation at plan-emission time.",
              "refs": "[NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022), [ISO/IEC 27001:2022](https://www.iso.org/standard/82875.html), [MITRE ATT&CK — T1562.001 Impair Defenses: Disable Tools](https://attack.mitre.org/techniques/T1562/001/)",
              "mitre": [
                "T1562.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Terraform/OpenTofu state snapshots, cloud provider inventory (AWS Config, Azure Resource Graph, GCP Asset Inventory), CI/CD merge events..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export per-plan checksums from CI on merge; (2) enable cloud inventory export; (3) run every 30 min; (4) SOAR auto-remediates via `terraform plan && apply` if allowed.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iac sourcetype=tf:state_snapshot earliest=-30m\n| stats latest(plan_checksum) as expected_checksum by resource_id\n| join resource_id [search index=cloud sourcetype=aws:config OR sourcetype=azure:rg OR sourcetype=gcp:asset earliest=-30m | stats latest(state_checksum) as actual_checksum by resource_id]\n| where expected_checksum!=actual_checksum\n| table resource_id expected_checksum actual_checksum\n```\n\nUnderstanding this SPL\n\n**Infrastructure-as-code drift — applied state diverges from merged plan** — Turns 'we use Terraform' into a continuously-verified statement.\n\nDocumented **Data sources**: Terraform/OpenTofu state snapshots, cloud provider inventory (AWS Config, Azure Resource Graph, GCP Asset Inventory), CI/CD merge events. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iac; **sourcetype**: tf:state_snapshot. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iac, sourcetype=tf:state_snapshot, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resource_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where expected_checksum!=actual_checksum` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Infrastructure-as-code drift — applied state diverges from merged plan**): table resource_id expected_checksum actual_checksum\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of resources × drift, table of drifting resources, single-value '% of resources matching plan'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when configs drift from what you baselined, so only approved change moves through.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "ISO-27001",
                "NIST-800-53",
                "SOC-2"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "azure",
                "hashicorp"
              ],
              "em": [
                "hashicorp_terraform"
              ],
              "cmp": [
                {
                  "r": "ISO-27001",
                  "v": "2022",
                  "cl": "A.8.9",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO-27001 A.8.9 (Configuration management (2022 new)) is enforced — Splunk UC-22.42.5: Infrastructure-as-code drift — applied state diverges from merged plan.",
                  "ea": "Saved search 'UC-22.42.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "CM-6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 CM-6 (Configuration settings) is enforced — Splunk UC-22.42.5: Infrastructure-as-code drift — applied state diverges from merged plan.",
                  "ea": "Saved search 'UC-22.42.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC8.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC8.1 (Change management) is enforced — Splunk UC-22.42.5: Infrastructure-as-code drift — applied state diverges from merged plan.",
                  "ea": "Saved search 'UC-22.42.5' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 33.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.43",
          "n": "— additional UCs (Vulnerability management and patch SLAs)",
          "u": [
            {
              "i": "22.43.3",
              "n": "Internet-facing asset × unpatched critical CVE",
              "c": "critical",
              "f": "intermediate",
              "v": "Eliminates the gap between scanner output and risk-based patch-SLA tracking.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Vulnerability scanner feeds, external-facing inventory (Shodan, Censys, AWS ALB/NLB, cloud LBs).",
              "q": "index=vuln earliest=-24h cvss_v3_base>=9.0\n| lookup external_asset_inventory.csv host OUTPUT external_exposed\n| where external_exposed=\"true\"\n| stats min(_time) as first_seen values(cve) as cves by host\n| eval age_days=round((now()-first_seen)/86400,0)\n| where age_days>0\n| table host cves age_days",
              "m": "(1) Maintain external_asset_inventory.csv refreshed daily from LB config or external attack-surface tooling; (2) run hourly; (3) SOAR auto-escalates on hit.",
              "z": "Table of Internet-facing hosts with critical CVEs, bar chart of patch-age buckets, single-value 'Internet-facing hosts fully patched'.",
              "kfp": "Honeypot / deliberately-exposed research assets can be excluded via an inventory annotation.",
              "refs": "[Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [MITRE ATT&CK — T1190 Exploit Public-Facing Application](https://attack.mitre.org/techniques/T1190/)",
              "mitre": [
                "T1190"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Vulnerability scanner feeds, external-facing inventory (Shodan, Censys, AWS ALB/NLB, cloud LBs)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain external_asset_inventory.csv refreshed daily from LB config or external attack-surface tooling; (2) run hourly; (3) SOAR auto-escalates on hit.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vuln earliest=-24h cvss_v3_base>=9.0\n| lookup external_asset_inventory.csv host OUTPUT external_exposed\n| where external_exposed=\"true\"\n| stats min(_time) as first_seen values(cve) as cves by host\n| eval age_days=round((now()-first_seen)/86400,0)\n| where age_days>0\n| table host cves age_days\n```\n\nUnderstanding this SPL\n\n**Internet-facing asset × unpatched critical CVE** — Eliminates the gap between scanner output and risk-based patch-SLA tracking.\n\nDocumented **Data sources**: Vulnerability scanner feeds, external-facing inventory (Shodan, Censys, AWS ALB/NLB, cloud LBs). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vuln.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vuln, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where external_exposed=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Internet-facing asset × unpatched critical CVE**): table host cves age_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of Internet-facing hosts with critical CVEs, bar chart of patch-age buckets, single-value 'Internet-facing hosts fully patched'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see patch timing against your service promises so you can explain delays with evidence.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NIS2",
                "NIST-800-53",
                "PCI-DSS"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(e)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIS2 Art.21(2)(e) (Security in acquisition, development and maintenance) is enforced — Splunk UC-22.43.3: Internet-facing asset × unpatched critical CVE.",
                  "ea": "Saved search 'UC-22.43.3' running on Vulnerability scanner feeds, external-facing inventory (Shodan, Censys, AWS ALB/NLB, cloud LBs)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "RA-5",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 RA-5 (Vulnerability scanning) is enforced — Splunk UC-22.43.3: Internet-facing asset × unpatched critical CVE.",
                  "ea": "Saved search 'UC-22.43.3' running on Vulnerability scanner feeds, external-facing inventory (Shodan, Censys, AWS ALB/NLB, cloud LBs)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "6.3.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 6.3.3 is enforced — Splunk UC-22.43.3: Internet-facing asset × unpatched critical CVE.",
                  "ea": "Saved search 'UC-22.43.3' running on Vulnerability scanner feeds, external-facing inventory (Shodan, Censys, AWS ALB/NLB, cloud LBs)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.43.4",
              "n": "Scanner coverage gap — regulated hosts without a recent scan",
              "c": "high",
              "f": "beginner",
              "v": "Surfaces silent scanner onboarding failures before they become audit findings.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Vulnerability scanner inventory, asset_inventory.csv with regulated flag, per-tier SLA lookup.",
              "q": "| inputlookup asset_inventory.csv\n| where regulated=\"true\"\n| lookup scan_history.csv host OUTPUT last_scan\n| eval days_since=round((now()-strptime(last_scan,\"%Y-%m-%d\"))/86400,0)\n| lookup tier_sla.csv tier OUTPUT scan_sla_days\n| where isnull(last_scan) OR days_since>scan_sla_days\n| table tier host last_scan days_since scan_sla_days",
              "m": "(1) Maintain asset_inventory.csv; (2) export scanner state to scan_history.csv; (3) run daily.",
              "z": "Heatmap of hosts × days-since-scan, table of offenders, single-value 'regulated coverage'.",
              "kfp": "Hosts mid-onboarding should carry an onboarding_date; allow-list until scan credentials propagate.",
              "refs": "[PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [ISO/IEC 27001:2022](https://www.iso.org/standard/82875.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Vulnerability scanner inventory, asset_inventory.csv with regulated flag, per-tier SLA lookup..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain asset_inventory.csv; (2) export scanner state to scan_history.csv; (3) run daily.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup asset_inventory.csv\n| where regulated=\"true\"\n| lookup scan_history.csv host OUTPUT last_scan\n| eval days_since=round((now()-strptime(last_scan,\"%Y-%m-%d\"))/86400,0)\n| lookup tier_sla.csv tier OUTPUT scan_sla_days\n| where isnull(last_scan) OR days_since>scan_sla_days\n| table tier host last_scan days_since scan_sla_days\n```\n\nUnderstanding this SPL\n\n**Scanner coverage gap — regulated hosts without a recent scan** — Surfaces silent scanner onboarding failures before they become audit findings.\n\nDocumented **Data sources**: Vulnerability scanner inventory, asset_inventory.csv with regulated flag, per-tier SLA lookup. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Filters the current rows with `where regulated=\"true\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(last_scan) OR days_since>scan_sla_days` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Scanner coverage gap — regulated hosts without a recent scan**): table tier host last_scan days_since scan_sla_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of hosts × days-since-scan, table of offenders, single-value 'regulated coverage'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see patch timing against your service promises so you can explain delays with evidence.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO-27001",
                "NIST-800-53",
                "PCI-DSS"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO-27001",
                  "v": "2022",
                  "cl": "A.8.8",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO-27001 A.8.8 is enforced — Splunk UC-22.43.4: Scanner coverage gap — regulated hosts without a recent scan.",
                  "ea": "Saved search 'UC-22.43.4' running on Vulnerability scanner inventory, asset_inventory.csv with regulated flag, per-tier SLA lookup., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "RA-5(2)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 RA-5(2) is enforced — Splunk UC-22.43.4: Scanner coverage gap — regulated hosts without a recent scan.",
                  "ea": "Saved search 'UC-22.43.4' running on Vulnerability scanner inventory, asset_inventory.csv with regulated flag, per-tier SLA lookup., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "11.3.1.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 11.3.1.1 is enforced — Splunk UC-22.43.4: Scanner coverage gap — regulated hosts without a recent scan.",
                  "ea": "Saved search 'UC-22.43.4' running on Vulnerability scanner inventory, asset_inventory.csv with regulated flag, per-tier SLA lookup., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.43.5",
              "n": "SBOM vendor-component CVE exposure",
              "c": "high",
              "f": "advanced",
              "v": "Closes the gap between the day a CVE is published and the day a vendor responds — crucial for CRA and NIS2 supply-chain narratives.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Vendor SBOMs (CycloneDX/SPDX), CVE feed (NVD, vendor advisories).",
              "q": "| inputlookup vendor_sbom.csv\n| lookup cve_feed.csv component version OUTPUT cve cvss_v3_base\n| where isnotnull(cve) AND cvss_v3_base>=7.0\n| stats values(cve) as cves, max(cvss_v3_base) as max_cvss by vendor component version\n| table vendor component version cves max_cvss",
              "m": "(1) Ingest vendor SBOMs from the procurement workflow; (2) subscribe to NVD feed into cve_feed.csv; (3) run daily; (4) SOAR opens a case per vendor-affected component.",
              "z": "Table of exposed components, bar chart of CVE count by vendor, single-value 'SBOM-indexed components with no open CVE'.",
              "kfp": "Version ranges expressed in SPDX are sometimes imprecise; maintain a normalisation pre-step to avoid double counting.",
              "refs": "[Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [EU Cyber Resilience Act](https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [MITRE ATT&CK — T1195.002 Supply Chain Compromise: Compromise Software Supply Chain](https://attack.mitre.org/techniques/T1195/002/)",
              "mitre": [
                "T1195.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Vendor SBOMs (CycloneDX/SPDX), CVE feed (NVD, vendor advisories)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest vendor SBOMs from the procurement workflow; (2) subscribe to NVD feed into cve_feed.csv; (3) run daily; (4) SOAR opens a case per vendor-affected component.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup vendor_sbom.csv\n| lookup cve_feed.csv component version OUTPUT cve cvss_v3_base\n| where isnotnull(cve) AND cvss_v3_base>=7.0\n| stats values(cve) as cves, max(cvss_v3_base) as max_cvss by vendor component version\n| table vendor component version cves max_cvss\n```\n\nUnderstanding this SPL\n\n**SBOM vendor-component CVE exposure** — Closes the gap between the day a CVE is published and the day a vendor responds — crucial for CRA and NIS2 supply-chain narratives.\n\nDocumented **Data sources**: Vendor SBOMs (CycloneDX/SPDX), CVE feed (NVD, vendor advisories). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnotnull(cve) AND cvss_v3_base>=7.0` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by vendor component version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Pipeline stage (see **SBOM vendor-component CVE exposure**): table vendor component version cves max_cvss\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of exposed components, bar chart of CVE count by vendor, single-value 'SBOM-indexed components with no open CVE'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see patch timing against your service promises so you can explain delays with evidence.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "EU-CRA",
                "NIS2",
                "NIST-800-53"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "EU-CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that EU-CRA Art.13 (Obligations of manufacturers) is enforced — Splunk UC-22.43.5: SBOM vendor-component CVE exposure.",
                  "ea": "Saved search 'UC-22.43.5' running on Vendor SBOMs (CycloneDX/SPDX), CVE feed (NVD, vendor advisories)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(d)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(d) (Supply-chain security) is enforced — Splunk UC-22.43.5: SBOM vendor-component CVE exposure.",
                  "ea": "Saved search 'UC-22.43.5' running on Vendor SBOMs (CycloneDX/SPDX), CVE feed (NVD, vendor advisories)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "SR-11",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST-800-53 SR-11 is enforced — Splunk UC-22.43.5: SBOM vendor-component CVE exposure.",
                  "ea": "Saved search 'UC-22.43.5' running on Vendor SBOMs (CycloneDX/SPDX), CVE feed (NVD, vendor advisories)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 31.7,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.44",
          "n": "— additional UCs (Third-party and supply-chain risk)",
          "u": [
            {
              "i": "22.44.4",
              "n": "Vendor access telemetry — principals active outside contracted hours/geos",
              "c": "high",
              "f": "intermediate",
              "v": "Materialises contractual access controls into operational telemetry.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "IdP authentication events for named vendor principals, vendor contracts lookup with contracted hours/geos.",
              "q": "index=idp earliest=-1h (sourcetype=okta:audit OR sourcetype=azuread:audit) result=success\n| search user IN (`vendor_principals`)\n| `get_geolocation(src_ip)`\n| lookup vendor_contract.csv user OUTPUT allowed_hours allowed_countries\n| eval hour=strftime(_time,\"%H\"), weekday=strftime(_time,\"%A\"), window_ok=if(match(allowed_hours,weekday.\":\".hour),1,0), geo_ok=if(match(allowed_countries,country),1,0)\n| where window_ok=0 OR geo_ok=0\n| table _time user country hour weekday allowed_countries allowed_hours",
              "m": "(1) Maintain vendor_principals macro; (2) extract contracted windows/geos from contracts and publish to vendor_contract.csv; (3) run hourly.",
              "z": "World map of vendor logins with off-geo markers, bar chart of vendors over time, single-value 'vendor logins within contract (24h)'.",
              "kfp": "Planned after-hours maintenance windows need a pre-declared exception with vendor_contract.exception_id.",
              "refs": "[Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [MITRE ATT&CK — T1078.004 Valid Accounts: Cloud Accounts](https://attack.mitre.org/techniques/T1078/004/)",
              "mitre": [
                "T1078.004"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: IdP authentication events for named vendor principals, vendor contracts lookup with contracted hours/geos..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain vendor_principals macro; (2) extract contracted windows/geos from contracts and publish to vendor_contract.csv; (3) run hourly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=idp earliest=-1h (sourcetype=okta:audit OR sourcetype=azuread:audit) result=success\n| search user IN (`vendor_principals`)\n| `get_geolocation(src_ip)`\n| lookup vendor_contract.csv user OUTPUT allowed_hours allowed_countries\n| eval hour=strftime(_time,\"%H\"), weekday=strftime(_time,\"%A\"), window_ok=if(match(allowed_hours,weekday.\":\".hour),1,0), geo_ok=if(match(allowed_countries,country),1,0)\n| where window_ok=0 OR geo_ok=0\n| table _time user country hour weekday allowed_countries allowed_hours\n```\n\nUnderstanding this SPL\n\n**Vendor access telemetry — principals active outside contracted hours/geos** — Materialises contractual access controls into operational telemetry.\n\nDocumented **Data sources**: IdP authentication events for named vendor principals, vendor contracts lookup with contracted hours/geos. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: idp; **sourcetype**: okta:audit, azuread:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=idp, sourcetype=okta:audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Invokes macro `get_geolocation(src_ip)` — in Search, use the UI or expand to inspect the underlying SPL.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **hour** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where window_ok=0 OR geo_ok=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Vendor access telemetry — principals active outside contracted hours/geos**): table _time user country hour weekday allowed_countries allowed_hours\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: World map of vendor logins with off-geo markers, bar chart of vendors over time, single-value 'vendor logins within contract (24h)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when third-party risk scores move so you can act before a weak vendor becomes a real problem.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "NIS2",
                "NIST-800-53"
              ],
              "a": [
                "Authentication"
              ],
              "e": [
                "azure",
                "okta"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.28",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.28 (ICT third-party risk) is enforced — Splunk UC-22.44.4: Vendor access telemetry — principals active outside contracted hours/geos.",
                  "ea": "Saved search 'UC-22.44.4' running on IdP authentication events for named vendor principals, vendor contracts lookup with contracted hours/geos., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(d)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(d) (Supply-chain security) is enforced — Splunk UC-22.44.4: Vendor access telemetry — principals active outside contracted hours/geos.",
                  "ea": "Saved search 'UC-22.44.4' running on IdP authentication events for named vendor principals, vendor contracts lookup with contracted hours/geos., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "SR-6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST-800-53 SR-6 is enforced — Splunk UC-22.44.4: Vendor access telemetry — principals active outside contracted hours/geos.",
                  "ea": "Saved search 'UC-22.44.4' running on IdP authentication events for named vendor principals, vendor contracts lookup with contracted hours/geos., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.44.5",
              "n": "SBOM attestation completeness — critical vendors without signed SBOM",
              "c": "high",
              "f": "beginner",
              "v": "Makes the CRA/NIS2 SBOM requirement a tracked operational metric.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Vendor SBOM register (vendor_sbom.csv), signature-validation tooling outputs, VRM criticality.",
              "q": "| inputlookup vrm_attestations.csv\n| where criticality=\"critical\"\n| lookup vendor_sbom_attestations.csv vendor OUTPUT sbom_sha256 signed signed_by signed_at\n| eval age_days=round((now()-strptime(signed_at,\"%Y-%m-%d\"))/86400,0)\n| where isnull(sbom_sha256) OR signed!=\"true\" OR age_days>180\n| table vendor criticality sbom_sha256 signed signed_by age_days",
              "m": "(1) Onboard vendor SBOMs through procurement workflow; (2) validate signatures with sigstore or equivalent; (3) run weekly.",
              "z": "Table of at-risk vendors, bar chart of signature freshness, single-value 'critical vendors with signed SBOM'.",
              "kfp": "Vendors mid-re-signing can appear stale; allow a 7-day grace after signed_attempted_at.",
              "refs": "[EU Cyber Resilience Act](https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act), [Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Vendor SBOM register (vendor_sbom.csv), signature-validation tooling outputs, VRM criticality..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Onboard vendor SBOMs through procurement workflow; (2) validate signatures with sigstore or equivalent; (3) run weekly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup vrm_attestations.csv\n| where criticality=\"critical\"\n| lookup vendor_sbom_attestations.csv vendor OUTPUT sbom_sha256 signed signed_by signed_at\n| eval age_days=round((now()-strptime(signed_at,\"%Y-%m-%d\"))/86400,0)\n| where isnull(sbom_sha256) OR signed!=\"true\" OR age_days>180\n| table vendor criticality sbom_sha256 signed signed_by age_days\n```\n\nUnderstanding this SPL\n\n**SBOM attestation completeness — critical vendors without signed SBOM** — Makes the CRA/NIS2 SBOM requirement a tracked operational metric.\n\nDocumented **Data sources**: Vendor SBOM register (vendor_sbom.csv), signature-validation tooling outputs, VRM criticality. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Filters the current rows with `where criticality=\"critical\"` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(sbom_sha256) OR signed!=\"true\" OR age_days>180` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SBOM attestation completeness — critical vendors without signed SBOM**): table vendor criticality sbom_sha256 signed signed_by age_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of at-risk vendors, bar chart of signature freshness, single-value 'critical vendors with signed SBOM'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when third-party risk scores move so you can act before a weak vendor becomes a real problem.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "EU-CRA",
                "NIS2",
                "NIST-800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "EU-CRA",
                  "v": "Regulation (EU) 2024/2847",
                  "cl": "Art.13",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that EU-CRA Art.13 (Obligations of manufacturers) is enforced — Splunk UC-22.44.5: SBOM attestation completeness — critical vendors without signed SBOM.",
                  "ea": "Saved search 'UC-22.44.5' running on Vendor SBOM register (vendor_sbom.csv), signature-validation tooling outputs, VRM criticality., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(d)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(d) (Supply-chain security) is enforced — Splunk UC-22.44.5: SBOM attestation completeness — critical vendors without signed SBOM.",
                  "ea": "Saved search 'UC-22.44.5' running on Vendor SBOM register (vendor_sbom.csv), signature-validation tooling outputs, VRM criticality., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "SR-11(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST-800-53 SR-11(1) is enforced — Splunk UC-22.44.5: SBOM attestation completeness — critical vendors without signed SBOM.",
                  "ea": "Saved search 'UC-22.44.5' running on Vendor SBOM register (vendor_sbom.csv), signature-validation tooling outputs, VRM criticality., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 32.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.45",
          "n": "— additional UCs (Backup integrity and recovery testing)",
          "u": [
            {
              "i": "22.45.4",
              "n": "Backup repository TLS posture — aged or weak-cipher endpoints",
              "c": "medium",
              "f": "intermediate",
              "v": "Closes a gap pattern observed in DORA inspections where the backup plane had older TLS than the production plane.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "TLS scanner output for backup endpoint inventory.",
              "q": "index=tls sourcetype=tls:scan earliest=-24h\n| search host IN (`backup_repository_endpoints`)\n| eval days_to_expiry=round((expires-now())/86400,0)\n| where days_to_expiry<45 OR protocol=\"TLSv1.0\" OR protocol=\"TLSv1.1\" OR match(cipher,\"3DES|RC4|NULL|DES|EXPORT\")\n| table host days_to_expiry protocol cipher",
              "m": "(1) Inventory backup endpoints; (2) scan daily; (3) on finding, ticket to platform engineering.",
              "z": "Table of endpoints with weak posture, time chart of scan results, single-value '% of backup endpoints with strong TLS'.",
              "kfp": "Appliance-managed endpoints with vendor-pinned TLS libraries may need a supply-chain exception.",
              "refs": "[Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [MITRE ATT&CK — T1040 Network Sniffing](https://attack.mitre.org/techniques/T1040/)",
              "mitre": [
                "T1040"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: TLS scanner output for backup endpoint inventory..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Inventory backup endpoints; (2) scan daily; (3) on finding, ticket to platform engineering.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=tls sourcetype=tls:scan earliest=-24h\n| search host IN (`backup_repository_endpoints`)\n| eval days_to_expiry=round((expires-now())/86400,0)\n| where days_to_expiry<45 OR protocol=\"TLSv1.0\" OR protocol=\"TLSv1.1\" OR match(cipher,\"3DES|RC4|NULL|DES|EXPORT\")\n| table host days_to_expiry protocol cipher\n```\n\nUnderstanding this SPL\n\n**Backup repository TLS posture — aged or weak-cipher endpoints** — Closes a gap pattern observed in DORA inspections where the backup plane had older TLS than the production plane.\n\nDocumented **Data sources**: TLS scanner output for backup endpoint inventory. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: tls; **sourcetype**: tls:scan. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=tls, sourcetype=tls:scan, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **days_to_expiry** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_expiry<45 OR protocol=\"TLSv1.0\" OR protocol=\"TLSv1.1\" OR match(cipher,\"3DES|RC4|NULL|DES|EXPORT\")` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Backup repository TLS posture — aged or weak-cipher endpoints**): table host days_to_expiry protocol cipher\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of endpoints with weak posture, time chart of scan results, single-value '% of backup endpoints with strong TLS'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you show backups were tested and restorable when regulators or disaster recovery need proof.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "DORA",
                "HIPAA",
                "NIST-800-53"
              ],
              "a": [
                "Certificates"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.12",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that DORA Art.12 (Backup policies and recovery methods) is enforced — Splunk UC-22.45.4: Backup repository TLS posture — aged or weak-cipher endpoints.",
                  "ea": "Saved search 'UC-22.45.4' running on TLS scanner output for backup endpoint inventory., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA",
                  "v": "2013-final",
                  "cl": "§164.312(e)(1)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA §164.312(e)(1) (Transmission security) is enforced — Splunk UC-22.45.4: Backup repository TLS posture — aged or weak-cipher endpoints.",
                  "ea": "Saved search 'UC-22.45.4' running on TLS scanner output for backup endpoint inventory., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "CP-9(3)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST-800-53 CP-9(3) is enforced — Splunk UC-22.45.4: Backup repository TLS posture — aged or weak-cipher endpoints.",
                  "ea": "Saved search 'UC-22.45.4' running on TLS scanner output for backup endpoint inventory., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.45.5",
              "n": "Business-continuity rehearsal evidence — BCP/DR exercise execution logged",
              "c": "medium",
              "f": "beginner",
              "v": "Converts a calendar item into evidence with a signed attestation per rehearsal.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "BCP rehearsal ticketing system, signed rehearsal attestations.",
              "q": "| inputlookup business_units.csv\n| lookup bcp_rehearsals.csv bu_id OUTPUT last_rehearsal_at signed_by\n| eval days_since=round((now()-strptime(last_rehearsal_at,\"%Y-%m-%d\"))/86400,0)\n| where isnull(last_rehearsal_at) OR days_since>cadence_days\n| table bu_id owner last_rehearsal_at days_since cadence_days",
              "m": "(1) Maintain business_units.csv with owner + cadence; (2) emit bcp_rehearsals.csv after each exercise with a signed_by; (3) run quarterly.",
              "z": "Heatmap of BU × days-since-rehearsal, table of at-risk BUs, single-value 'BUs within cadence'.",
              "kfp": "New BUs need an onboarding_date carve-out until the first scheduled rehearsal.",
              "refs": "[Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: BCP rehearsal ticketing system, signed rehearsal attestations..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain business_units.csv with owner + cadence; (2) emit bcp_rehearsals.csv after each exercise with a signed_by; (3) run quarterly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup business_units.csv\n| lookup bcp_rehearsals.csv bu_id OUTPUT last_rehearsal_at signed_by\n| eval days_since=round((now()-strptime(last_rehearsal_at,\"%Y-%m-%d\"))/86400,0)\n| where isnull(last_rehearsal_at) OR days_since>cadence_days\n| table bu_id owner last_rehearsal_at days_since cadence_days\n```\n\nUnderstanding this SPL\n\n**Business-continuity rehearsal evidence — BCP/DR exercise execution logged** — Converts a calendar item into evidence with a signed attestation per rehearsal.\n\nDocumented **Data sources**: BCP rehearsal ticketing system, signed rehearsal attestations. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(last_rehearsal_at) OR days_since>cadence_days` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Business-continuity rehearsal evidence — BCP/DR exercise execution logged**): table bu_id owner last_rehearsal_at days_since cadence_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of BU × days-since-rehearsal, table of at-risk BUs, single-value 'BUs within cadence'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you show backups were tested and restorable when regulators or disaster recovery need proof.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "DORA",
                "NIS2",
                "NIST-800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.24 (Digital operational-resilience testing) is enforced — Splunk UC-22.45.5: Business-continuity rehearsal evidence — BCP/DR exercise execution logged.",
                  "ea": "Saved search 'UC-22.45.5' running on BCP rehearsal ticketing system, signed rehearsal attestations., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(c)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(c) (Business continuity and crisis management) is enforced — Splunk UC-22.45.5: Business-continuity rehearsal evidence — BCP/DR exercise execution logged.",
                  "ea": "Saved search 'UC-22.45.5' running on BCP rehearsal ticketing system, signed rehearsal attestations., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "CP-4",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 CP-4 is enforced — Splunk UC-22.45.5: Business-continuity rehearsal evidence — BCP/DR exercise execution logged.",
                  "ea": "Saved search 'UC-22.45.5' running on BCP rehearsal ticketing system, signed rehearsal attestations., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 32.5,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.46",
          "n": "— additional UCs (Training and awareness)",
          "u": [
            {
              "i": "22.46.3",
              "n": "Privileged-role specialist training — admins lacking annual deep-training",
              "c": "medium",
              "f": "beginner",
              "v": "Makes the 'role-based training' requirement evidence-backed rather than policy-on-paper.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "LMS specialist-course completions, PAM vault inventory with owner mapping.",
              "q": "| rest /services/pam/vault-inventory splunk_server=*\n| fields user account_type\n| where account_type=\"privileged\"\n| join user [search index=lms sourcetype=completion course_id=priv_security_training earliest=-365d | stats max(_time) as last_completion by user]\n| where (now()-last_completion)>31536000 OR isnull(last_completion)\n| table user account_type last_completion",
              "m": "(1) Tag PAM vault entries with the owner; (2) map course completions to users in the LMS; (3) run weekly; (4) auto-enrol overdue owners.",
              "z": "Table of overdue admins, bar chart of departments, single-value 'admins with current specialist training'.",
              "kfp": "New privileged users within 30 days of onboarding have a grace window; allow-list via onboarding_date.",
              "refs": "[45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: LMS specialist-course completions, PAM vault inventory with owner mapping..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag PAM vault entries with the owner; (2) map course completions to users in the LMS; (3) run weekly; (4) auto-enrol overdue owners.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/pam/vault-inventory splunk_server=*\n| fields user account_type\n| where account_type=\"privileged\"\n| join user [search index=lms sourcetype=completion course_id=priv_security_training earliest=-365d | stats max(_time) as last_completion by user]\n| where (now()-last_completion)>31536000 OR isnull(last_completion)\n| table user account_type last_completion\n```\n\nUnderstanding this SPL\n\n**Privileged-role specialist training — admins lacking annual deep-training** — Makes the 'role-based training' requirement evidence-backed rather than policy-on-paper.\n\nDocumented **Data sources**: LMS specialist-course completions, PAM vault inventory with owner mapping. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Filters the current rows with `where account_type=\"privileged\"` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where (now()-last_completion)>31536000 OR isnull(last_completion)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Privileged-role specialist training — admins lacking annual deep-training**): table user account_type last_completion\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of overdue admins, bar chart of departments, single-value 'admins with current specialist training'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see who still needs training so awareness stays current across teams and roles.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "HIPAA",
                "NIST-800-53",
                "PCI-DSS"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hashicorp"
              ],
              "em": [
                "hashicorp_vault"
              ],
              "cmp": [
                {
                  "r": "HIPAA",
                  "v": "2013-final",
                  "cl": "§164.308(a)(5)(ii)(A)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA §164.308(a)(5)(ii)(A) is enforced — Splunk UC-22.46.3: Privileged-role specialist training — admins lacking annual deep-training.",
                  "ea": "Saved search 'UC-22.46.3' running on LMS specialist-course completions, PAM vault inventory with owner mapping., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "AT-3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 AT-3 is enforced — Splunk UC-22.46.3: Privileged-role specialist training — admins lacking annual deep-training.",
                  "ea": "Saved search 'UC-22.46.3' running on LMS specialist-course completions, PAM vault inventory with owner mapping., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "12.6.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI-DSS 12.6.3 is enforced — Splunk UC-22.46.3: Privileged-role specialist training — admins lacking annual deep-training.",
                  "ea": "Saved search 'UC-22.46.3' running on LMS specialist-course completions, PAM vault inventory with owner mapping., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.46.4",
              "n": "Tabletop rehearsal evidence — IR plan exercise frequency",
              "c": "medium",
              "f": "beginner",
              "v": "Shifts IR readiness evidence from 'we rehearse' to 'here is the per-playbook cadence record'.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "IR playbook register, rehearsal signed attestations.",
              "q": "| inputlookup ir_playbooks.csv\n| lookup ir_rehearsals.csv playbook_id OUTPUT last_rehearsal signed_by\n| eval days_since=round((now()-strptime(last_rehearsal,\"%Y-%m-%d\"))/86400,0)\n| where isnull(last_rehearsal) OR days_since>cadence_days\n| table playbook_id owner last_rehearsal days_since cadence_days",
              "m": "(1) Maintain ir_playbooks.csv with cadence; (2) emit rehearsal completion records; (3) run monthly.",
              "z": "Heatmap of playbook × days-since-rehearsal, single-value '% of playbooks within cadence'.",
              "kfp": "Newly-authored playbooks need an onboarding grace window.",
              "refs": "[Directive (EU) 2022/2555 — NIS2](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [Regulation (EU) 2022/2554 — DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: IR playbook register, rehearsal signed attestations..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain ir_playbooks.csv with cadence; (2) emit rehearsal completion records; (3) run monthly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup ir_playbooks.csv\n| lookup ir_rehearsals.csv playbook_id OUTPUT last_rehearsal signed_by\n| eval days_since=round((now()-strptime(last_rehearsal,\"%Y-%m-%d\"))/86400,0)\n| where isnull(last_rehearsal) OR days_since>cadence_days\n| table playbook_id owner last_rehearsal days_since cadence_days\n```\n\nUnderstanding this SPL\n\n**Tabletop rehearsal evidence — IR plan exercise frequency** — Shifts IR readiness evidence from 'we rehearse' to 'here is the per-playbook cadence record'.\n\nDocumented **Data sources**: IR playbook register, rehearsal signed attestations. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(last_rehearsal) OR days_since>cadence_days` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Tabletop rehearsal evidence — IR plan exercise frequency**): table playbook_id owner last_rehearsal days_since cadence_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of playbook × days-since-rehearsal, single-value '% of playbooks within cadence'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see who still needs training so awareness stays current across teams and roles.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "NIS2",
                "NIST-800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that DORA Art.24 (Digital operational-resilience testing) is enforced — Splunk UC-22.46.4: Tabletop rehearsal evidence — IR plan exercise frequency.",
                  "ea": "Saved search 'UC-22.46.4' running on IR playbook register, rehearsal signed attestations., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(b)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(b) (Incident handling) is enforced — Splunk UC-22.46.4: Tabletop rehearsal evidence — IR plan exercise frequency.",
                  "ea": "Saved search 'UC-22.46.4' running on IR playbook register, rehearsal signed attestations., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "IR-3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 IR-3 is enforced — Splunk UC-22.46.4: Tabletop rehearsal evidence — IR plan exercise frequency.",
                  "ea": "Saved search 'UC-22.46.4' running on IR playbook register, rehearsal signed attestations., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.46.5",
              "n": "Developer data-handling training — prod-access engineers lacking training",
              "c": "high",
              "f": "intermediate",
              "v": "Links IdP posture to training posture — the evidence auditors actually ask for.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "IdP prod-access group membership, LMS completion records for DPO-approved course ids.",
              "q": "| rest /services/identity/prod-access-groups splunk_server=*\n| fields user group scope\n| where scope=\"regulated\"\n| join user [search index=lms sourcetype=completion course_id=data_handling_v* earliest=-365d | stats max(_time) as last_completion by user]\n| where (now()-last_completion)>31536000 OR isnull(last_completion)\n| table user group scope last_completion",
              "m": "(1) Tag IdP groups for prod access with regulated scope; (2) publish DPO-approved course IDs; (3) run weekly; (4) auto-revoke access after configured overdue window.",
              "z": "Table of engineers without training, bar chart by team, single-value 'prod-access engineers with current training'.",
              "kfp": "Freshly-onboarded engineers in the 30-day grace window should be allow-listed by onboarding_date.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [ISO/IEC 27001:2022](https://www.iso.org/standard/82875.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: IdP prod-access group membership, LMS completion records for DPO-approved course ids..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag IdP groups for prod access with regulated scope; (2) publish DPO-approved course IDs; (3) run weekly; (4) auto-revoke access after configured overdue window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/identity/prod-access-groups splunk_server=*\n| fields user group scope\n| where scope=\"regulated\"\n| join user [search index=lms sourcetype=completion course_id=data_handling_v* earliest=-365d | stats max(_time) as last_completion by user]\n| where (now()-last_completion)>31536000 OR isnull(last_completion)\n| table user group scope last_completion\n```\n\nUnderstanding this SPL\n\n**Developer data-handling training — prod-access engineers lacking training** — Links IdP posture to training posture — the evidence auditors actually ask for.\n\nDocumented **Data sources**: IdP prod-access group membership, LMS completion records for DPO-approved course ids. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Keeps or drops fields with `fields` to shape columns and size.\n• Filters the current rows with `where scope=\"regulated\"` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where (now()-last_completion)>31536000 OR isnull(last_completion)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Developer data-handling training — prod-access engineers lacking training**): table user group scope last_completion\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of engineers without training, bar chart by team, single-value 'prod-access engineers with current training'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you see who still needs training so awareness stays current across teams and roles.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "GDPR",
                "HIPAA",
                "ISO-27001"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.32(4)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.32(4) is enforced — Splunk UC-22.46.5: Developer data-handling training — prod-access engineers lacking training.",
                  "ea": "Saved search 'UC-22.46.5' running on IdP prod-access group membership, LMS completion records for DPO-approved course ids., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA",
                  "v": "2013-final",
                  "cl": "§164.308(a)(5)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA §164.308(a)(5) (Security awareness and training) is enforced — Splunk UC-22.46.5: Developer data-handling training — prod-access engineers lacking training.",
                  "ea": "Saved search 'UC-22.46.5' running on IdP prod-access group membership, LMS completion records for DPO-approved course ids., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "ISO-27001",
                  "v": "2022",
                  "cl": "A.6.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO-27001 A.6.3 is enforced — Splunk UC-22.46.5: Developer data-handling training — prod-access engineers lacking training.",
                  "ea": "Saved search 'UC-22.46.5' running on IdP prod-access group membership, LMS completion records for DPO-approved course ids., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.32(4)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.32(4) is enforced — Splunk UC-22.46.5: Developer data-handling training — prod-access engineers lacking training.",
                  "ea": "Saved search 'UC-22.46.5' running on IdP prod-access group membership, LMS completion records for DPO-approved course ids., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 33.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.47",
          "n": "— additional UCs (Control testing evidence freshness)",
          "u": [
            {
              "i": "22.47.3",
              "n": "Control owner attestation freshness",
              "c": "medium",
              "f": "beginner",
              "v": "Delivers the internal-audit team a continuous view of 'who has signed what'.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "GRC owner attestation ledger, control_inventory.csv.",
              "q": "| inputlookup control_inventory.csv\n| lookup owner_attestations.csv control_id OUTPUT last_attested signed_by\n| eval age_days=round((now()-strptime(last_attested,\"%Y-%m-%d\"))/86400,0)\n| where age_days>attestation_cadence_days OR isnull(last_attested)\n| table control_id owner last_attested age_days attestation_cadence_days",
              "m": "(1) Emit owner_attestations.csv from GRC workflow; (2) run weekly.",
              "z": "Heatmap of control × age buckets, table of overdue controls, single-value '% of controls with fresh attestation'.",
              "kfp": "New controls in onboarding need a grace window.",
              "refs": "[AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [ISO/IEC 27001:2022](https://www.iso.org/standard/82875.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: GRC owner attestation ledger, control_inventory.csv..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Emit owner_attestations.csv from GRC workflow; (2) run weekly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup control_inventory.csv\n| lookup owner_attestations.csv control_id OUTPUT last_attested signed_by\n| eval age_days=round((now()-strptime(last_attested,\"%Y-%m-%d\"))/86400,0)\n| where age_days>attestation_cadence_days OR isnull(last_attested)\n| table control_id owner last_attested age_days attestation_cadence_days\n```\n\nUnderstanding this SPL\n\n**Control owner attestation freshness** — Delivers the internal-audit team a continuous view of 'who has signed what'.\n\nDocumented **Data sources**: GRC owner attestation ledger, control_inventory.csv. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days>attestation_cadence_days OR isnull(last_attested)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Control owner attestation freshness**): table control_id owner last_attested age_days attestation_cadence_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of control × age buckets, table of overdue controls, single-value '% of controls with fresh attestation'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when control evidence is getting old so you refresh proof before the next review.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "ISO-27001",
                "NIST-800-53",
                "SOC-2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO-27001",
                  "v": "2022",
                  "cl": "A.5.35",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO-27001 A.5.35 is enforced — Splunk UC-22.47.3: Control owner attestation freshness.",
                  "ea": "Saved search 'UC-22.47.3' running on GRC owner attestation ledger, control_inventory.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "CA-2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 CA-2 is enforced — Splunk UC-22.47.3: Control owner attestation freshness.",
                  "ea": "Saved search 'UC-22.47.3' running on GRC owner attestation ledger, control_inventory.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC1.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC1.3 is enforced — Splunk UC-22.47.3: Control owner attestation freshness.",
                  "ea": "Saved search 'UC-22.47.3' running on GRC owner attestation ledger, control_inventory.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.47.4",
              "n": "Evidence-pack drift — auditor-facing vs pre-production evidence",
              "c": "high",
              "f": "advanced",
              "v": "Protects the integrity of the evidence that underpins the external audit.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Evidence-pack build pipeline, publication manifest, auditor-delivery signature events.",
              "q": "index=grc sourcetype=evidence_pack:publish earliest=-30d\n| stats latest(sha256) as published_sha latest(pack_id) as pack by _time\n| join pack [search index=grc sourcetype=evidence_pack:preprod | stats latest(sha256) as preprod_sha by pack]\n| where published_sha!=preprod_sha\n| table pack published_sha preprod_sha",
              "m": "(1) Sign both preprod and published packs; (2) run post-publish; (3) on mismatch, alert internal audit and QA pipeline.",
              "z": "Table of mismatches, single-value 'packs with matching SHA (30d)'.",
              "kfp": "Legitimate last-mile redactions can produce intended SHA changes; capture them in a redaction_log.csv.",
              "refs": "[AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022), [PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [MITRE ATT&CK — T1070 Indicator Removal](https://attack.mitre.org/techniques/T1070/)",
              "mitre": [
                "T1070"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Evidence-pack build pipeline, publication manifest, auditor-delivery signature events..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Sign both preprod and published packs; (2) run post-publish; (3) on mismatch, alert internal audit and QA pipeline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=evidence_pack:publish earliest=-30d\n| stats latest(sha256) as published_sha latest(pack_id) as pack by _time\n| join pack [search index=grc sourcetype=evidence_pack:preprod | stats latest(sha256) as preprod_sha by pack]\n| where published_sha!=preprod_sha\n| table pack published_sha preprod_sha\n```\n\nUnderstanding this SPL\n\n**Evidence-pack drift — auditor-facing vs pre-production evidence** — Protects the integrity of the evidence that underpins the external audit.\n\nDocumented **Data sources**: Evidence-pack build pipeline, publication manifest, auditor-delivery signature events. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: evidence_pack:publish. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=evidence_pack:publish, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where published_sha!=preprod_sha` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Evidence-pack drift — auditor-facing vs pre-production evidence**): table pack published_sha preprod_sha\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of mismatches, single-value 'packs with matching SHA (30d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when control evidence is getting old so you refresh proof before the next review.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "regs": [
                "SOC-2",
                "SOX-ITGC"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC1.5",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC1.5 is enforced — Splunk UC-22.47.4: Evidence-pack drift — auditor-facing vs pre-production evidence.",
                  "ea": "Saved search 'UC-22.47.4' running on Evidence-pack build pipeline, publication manifest, auditor-delivery signature events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Evidence.Integrity",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX-ITGC ITGC.Evidence.Integrity is enforced — Splunk UC-22.47.4: Evidence-pack drift — auditor-facing vs pre-production evidence.",
                  "ea": "Saved search 'UC-22.47.4' running on Evidence-pack build pipeline, publication manifest, auditor-delivery signature events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.47.5",
              "n": "Continuous control monitoring anomaly — failure-rate trending up",
              "c": "high",
              "f": "intermediate",
              "v": "Gives internal audit a quantitative view of control health rather than a point-in-time pass/fail.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "GRC continuous-control-monitoring events (pass/fail per execution).",
              "q": "index=grc sourcetype=ccm:execution earliest=-90d\n| bin _time span=1d\n| stats count(eval(result=\"fail\")) as fails count as total by _time control_id\n| eval fail_rate=fails/total\n| eventstats avg(fail_rate) as avg7d by control_id\n| stats avg(fail_rate) as baseline stdev(fail_rate) as baseline_sd latest(fail_rate) as current latest(avg7d) as avg7d by control_id\n| where avg7d > baseline+3*baseline_sd\n| table control_id baseline baseline_sd avg7d current",
              "m": "(1) Instrument CCM per control; (2) run daily; (3) open a deep-dive ticket for anomalies.",
              "z": "Line charts of failure rate with baseline bands, table of controls above threshold, single-value 'controls within baseline'.",
              "kfp": "Controls with naturally bimodal distributions (business-hours only) benefit from a bucketed baseline.",
              "refs": "[AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [ISO/IEC 27001:2022](https://www.iso.org/standard/82875.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: GRC continuous-control-monitoring events (pass/fail per execution)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument CCM per control; (2) run daily; (3) open a deep-dive ticket for anomalies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=ccm:execution earliest=-90d\n| bin _time span=1d\n| stats count(eval(result=\"fail\")) as fails count as total by _time control_id\n| eval fail_rate=fails/total\n| eventstats avg(fail_rate) as avg7d by control_id\n| stats avg(fail_rate) as baseline stdev(fail_rate) as baseline_sd latest(fail_rate) as current latest(avg7d) as avg7d by control_id\n| where avg7d > baseline+3*baseline_sd\n| table control_id baseline baseline_sd avg7d current\n```\n\nUnderstanding this SPL\n\n**Continuous control monitoring anomaly — failure-rate trending up** — Gives internal audit a quantitative view of control health rather than a point-in-time pass/fail.\n\nDocumented **Data sources**: GRC continuous-control-monitoring events (pass/fail per execution). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: ccm:execution. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=ccm:execution, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time control_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fail_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by control_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` rolls up events into metrics; results are split **by control_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where avg7d > baseline+3*baseline_sd` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Continuous control monitoring anomaly — failure-rate trending up**): table control_id baseline baseline_sd avg7d current\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line charts of failure rate with baseline bands, table of controls above threshold, single-value 'controls within baseline'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when control evidence is getting old so you refresh proof before the next review.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "ISO-27001",
                "NIST-800-53",
                "SOC-2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "controlm"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO-27001",
                  "v": "2022",
                  "cl": "A.5.36",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO-27001 A.5.36 is enforced — Splunk UC-22.47.5: Continuous control monitoring anomaly — failure-rate trending up.",
                  "ea": "Saved search 'UC-22.47.5' running on GRC continuous-control-monitoring events (pass/fail per execution)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "CA-7",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 CA-7 is enforced — Splunk UC-22.47.5: Continuous control monitoring anomaly — failure-rate trending up.",
                  "ea": "Saved search 'UC-22.47.5' running on GRC continuous-control-monitoring events (pass/fail per execution)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC4.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC4.1 is enforced — Splunk UC-22.47.5: Continuous control monitoring anomaly — failure-rate trending up.",
                  "ea": "Saved search 'UC-22.47.5' running on GRC continuous-control-monitoring events (pass/fail per execution)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 33.3,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.48",
          "n": "— additional UCs (Segregation of duties enforcement)",
          "u": [
            {
              "i": "22.48.3",
              "n": "Developer-to-production SoD — same developer submits AND approves merge",
              "c": "high",
              "f": "intermediate",
              "v": "Makes a core deployment-pipeline SoD requirement a continuously-enforced technical control.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Git provider audit logs (GitHub, GitLab, Bitbucket), CI/CD deployment events.",
              "q": "index=git sourcetype=github:audit OR sourcetype=gitlab:audit OR sourcetype=bitbucket:audit action=merge earliest=-1h\n| search target_branch IN (main,master,release*)\n| where author=approver\n| table _time repo target_branch author approver commit_sha",
              "m": "(1) Ensure Git provider captures author + approver; (2) run hourly; (3) on fire, revert merge via pipeline automation.",
              "z": "Table of violations, bar chart by repo, single-value 'merges SoD-compliant (30d)'.",
              "kfp": "Solo-maintainer repos legitimately have author=approver; these must carry a risk-acceptance annotation.",
              "refs": "[PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022), [ISO/IEC 27001:2022](https://www.iso.org/standard/82875.html), [MITRE ATT&CK — T1078 Valid Accounts](https://attack.mitre.org/techniques/T1078/)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Git provider audit logs (GitHub, GitLab, Bitbucket), CI/CD deployment events..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure Git provider captures author + approver; (2) run hourly; (3) on fire, revert merge via pipeline automation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=git sourcetype=github:audit OR sourcetype=gitlab:audit OR sourcetype=bitbucket:audit action=merge earliest=-1h\n| search target_branch IN (main,master,release*)\n| where author=approver\n| table _time repo target_branch author approver commit_sha\n```\n\nUnderstanding this SPL\n\n**Developer-to-production SoD — same developer submits AND approves merge** — Makes a core deployment-pipeline SoD requirement a continuously-enforced technical control.\n\nDocumented **Data sources**: Git provider audit logs (GitHub, GitLab, Bitbucket), CI/CD deployment events. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: git; **sourcetype**: github:audit, gitlab:audit, bitbucket:audit. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=git, sourcetype=github:audit, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• Filters the current rows with `where author=approver` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Developer-to-production SoD — same developer submits AND approves merge**): table _time repo target_branch author approver commit_sha\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of violations, bar chart by repo, single-value 'merges SoD-compliant (30d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when the same person could do two jobs that should be split, so errors and fraud are harder.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO-27001",
                "SOC-2",
                "SOX-ITGC"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "github",
                "gitlab"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO-27001",
                  "v": "2022",
                  "cl": "A.8.30",
                  "m": "detects-violation-of",
                  "a": "partial",
                  "co": "Detects violations of ISO-27001 A.8.30 — Splunk UC-22.48.3: Developer-to-production SoD — same developer submits AND approves merge.",
                  "ea": "Saved search 'UC-22.48.3' running on Git provider audit logs (GitHub, GitLab, Bitbucket), CI/CD deployment events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC6.3",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of SOC-2 CC6.3 — Splunk UC-22.48.3: Developer-to-production SoD — same developer submits AND approves merge.",
                  "ea": "Saved search 'UC-22.48.3' running on Git provider audit logs (GitHub, GitLab, Bitbucket), CI/CD deployment events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Change.SoD",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of SOX-ITGC ITGC.Change.SoD — Splunk UC-22.48.3: Developer-to-production SoD — same developer submits AND approves merge.",
                  "ea": "Saved search 'UC-22.48.3' running on Git provider audit logs (GitHub, GitLab, Bitbucket), CI/CD deployment events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.48.4",
              "n": "Financial SoD — same identity posts AND approves a journal entry",
              "c": "critical",
              "f": "intermediate",
              "v": "Provides continuous evidence of a universally-audited SOX control.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "ERP financial-journal audit events (SAP, Oracle E-Business, Workday Financials).",
              "q": "index=erp sourcetype=sap:gl OR sourcetype=oracle:gl OR sourcetype=workday:gl action=post_and_approve earliest=-1h\n| where poster=approver\n| table _time entry_id poster approver amount",
              "m": "(1) Ensure ERP emits both events; (2) run hourly; (3) on fire, escalate to CFO + controller.",
              "z": "Table of violations, bar chart by business unit, single-value 'SoD-compliant journals (30d)'.",
              "kfp": "System-generated entries (automated accruals) post under a service account that must not be counted; filter by account type.",
              "refs": "[PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [AICPA Trust Services Criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022), [MITRE ATT&CK — T1078 Valid Accounts](https://attack.mitre.org/techniques/T1078/)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: ERP financial-journal audit events (SAP, Oracle E-Business, Workday Financials)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure ERP emits both events; (2) run hourly; (3) on fire, escalate to CFO + controller.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=erp sourcetype=sap:gl OR sourcetype=oracle:gl OR sourcetype=workday:gl action=post_and_approve earliest=-1h\n| where poster=approver\n| table _time entry_id poster approver amount\n```\n\nUnderstanding this SPL\n\n**Financial SoD — same identity posts AND approves a journal entry** — Provides continuous evidence of a universally-audited SOX control.\n\nDocumented **Data sources**: ERP financial-journal audit events (SAP, Oracle E-Business, Workday Financials). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: erp; **sourcetype**: sap:gl, oracle:gl, workday:gl. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=erp, sourcetype=sap:gl, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where poster=approver` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Financial SoD — same identity posts AND approves a journal entry**): table _time entry_id poster approver amount\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of violations, bar chart by business unit, single-value 'SoD-compliant journals (30d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when the same person could do two jobs that should be split, so errors and fraud are harder.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SOC-2",
                "SOX-ITGC"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "oracle"
              ],
              "em": [
                "oracle_oracle_db"
              ],
              "cmp": [
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC6.3",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of SOC-2 CC6.3 — Splunk UC-22.48.4: Financial SoD — same identity posts AND approves a journal entry.",
                  "ea": "Saved search 'UC-22.48.4' running on ERP financial-journal audit events (SAP, Oracle E-Business, Workday Financials)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Financial.SoD",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of SOX-ITGC ITGC.Financial.SoD — Splunk UC-22.48.4: Financial SoD — same identity posts AND approves a journal entry.",
                  "ea": "Saved search 'UC-22.48.4' running on ERP financial-journal audit events (SAP, Oracle E-Business, Workday Financials)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.48.5",
              "n": "Vendor-master SoD — same identity creates vendor AND approves payment",
              "c": "critical",
              "f": "advanced",
              "v": "Continuously enforces a control most organisations only sample.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "ERP vendor-master events, AP payment approval events.",
              "q": "index=erp sourcetype=sap:vendor_master OR sourcetype=oracle:vendor_master action=create earliest=-30d\n| stats min(_time) as created_at values(created_by) as creators by vendor_id\n| join vendor_id [search index=erp sourcetype=*:ap_payment action=approve earliest=-30d | stats values(approved_by) as approvers by vendor_id]\n| eval overlap=mvfilter(match(creators,approvers))\n| where mvcount(overlap)>0\n| table vendor_id created_at creators approvers overlap",
              "m": "(1) Ensure ERP emits both events; (2) run hourly; (3) auto-freeze suspicious vendor payments via AP workflow.",
              "z": "Table of same-person violations, single-value 'vendors with SoD-compliant lifecycle'.",
              "kfp": "Small-organisation setups with a single AP resource must carry a documented compensating control.",
              "refs": "[PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [PCI DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [MITRE ATT&CK — T1078 Valid Accounts](https://attack.mitre.org/techniques/T1078/)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: ERP vendor-master events, AP payment approval events..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure ERP emits both events; (2) run hourly; (3) auto-freeze suspicious vendor payments via AP workflow.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=erp sourcetype=sap:vendor_master OR sourcetype=oracle:vendor_master action=create earliest=-30d\n| stats min(_time) as created_at values(created_by) as creators by vendor_id\n| join vendor_id [search index=erp sourcetype=*:ap_payment action=approve earliest=-30d | stats values(approved_by) as approvers by vendor_id]\n| eval overlap=mvfilter(match(creators,approvers))\n| where mvcount(overlap)>0\n| table vendor_id created_at creators approvers overlap\n```\n\nUnderstanding this SPL\n\n**Vendor-master SoD — same identity creates vendor AND approves payment** — Continuously enforces a control most organisations only sample.\n\nDocumented **Data sources**: ERP vendor-master events, AP payment approval events. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: erp; **sourcetype**: sap:vendor_master, oracle:vendor_master. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=erp, sourcetype=sap:vendor_master, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by vendor_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **overlap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mvcount(overlap)>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Vendor-master SoD — same identity creates vendor AND approves payment**): table vendor_id created_at creators approvers overlap\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of same-person violations, single-value 'vendors with SoD-compliant lifecycle'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see when the same person could do two jobs that should be split, so errors and fraud are harder.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PCI-DSS",
                "SOX-ITGC"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "oracle"
              ],
              "em": [
                "oracle_oracle_db"
              ],
              "cmp": [
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "7.2.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI-DSS 7.2.5 is enforced — Splunk UC-22.48.5: Vendor-master SoD — same identity creates vendor AND approves payment.",
                  "ea": "Saved search 'UC-22.48.5' running on ERP vendor-master events, AP payment approval events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Vendor.SoD",
                  "m": "detects-violation-of",
                  "a": "full",
                  "co": "Detects violations of SOX-ITGC ITGC.Vendor.SoD — Splunk UC-22.48.5: Vendor-master SoD — same identity creates vendor AND approves payment.",
                  "ea": "Saved search 'UC-22.48.5' running on ERP vendor-master events, AP payment approval events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 3,
            "none": 0
          }
        },
        {
          "i": "22.49",
          "n": "— additional UCs (Retention and disposal automation)",
          "u": [
            {
              "i": "22.49.4",
              "n": "Retention policy drift — system config vs policy catalogue",
              "c": "high",
              "f": "intermediate",
              "v": "Turns retention policy adherence into a continuously-measured metric per system.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "System retention configuration snapshots, retention_policy.csv.",
              "q": "index=datastore sourcetype=retention:config earliest=-1d\n| lookup retention_policy.csv data_domain OUTPUT policy_days\n| eval drift_days=abs(configured_days-policy_days)\n| where drift_days>0\n| table store data_domain configured_days policy_days drift_days",
              "m": "(1) Export retention configuration nightly; (2) maintain retention_policy.csv; (3) run daily.",
              "z": "Table of drift stores, bar chart of drift magnitude by data-domain, single-value 'stores aligned to policy'.",
              "kfp": "Legal-hold flagged systems legitimately have longer retention; exclude via hold_in_force=true.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [California CCPA/CPRA](https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=CIV&division=3.&title=1.81.5.&part=4.&chapter=&article=), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: System retention configuration snapshots, retention_policy.csv..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export retention configuration nightly; (2) maintain retention_policy.csv; (3) run daily.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=datastore sourcetype=retention:config earliest=-1d\n| lookup retention_policy.csv data_domain OUTPUT policy_days\n| eval drift_days=abs(configured_days-policy_days)\n| where drift_days>0\n| table store data_domain configured_days policy_days drift_days\n```\n\nUnderstanding this SPL\n\n**Retention policy drift — system config vs policy catalogue** — Turns retention policy adherence into a continuously-measured metric per system.\n\nDocumented **Data sources**: System retention configuration snapshots, retention_policy.csv. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: datastore; **sourcetype**: retention:config. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=datastore, sourcetype=retention:config, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **drift_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drift_days>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Retention policy drift — system config vs policy catalogue**): table store data_domain configured_days policy_days drift_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of drift stores, bar chart of drift magnitude by data-domain, single-value 'stores aligned to policy'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see retention and disposal so data is not kept too long and holds are respected.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "CCPA",
                "GDPR",
                "HIPAA"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.100(c)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that CCPA §1798.100(c) is enforced — Splunk UC-22.49.4: Retention policy drift — system config vs policy catalogue.",
                  "ea": "Saved search 'UC-22.49.4' running on System retention configuration snapshots, retention_policy.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.5(1)(e)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.5(1)(e) is enforced — Splunk UC-22.49.4: Retention policy drift — system config vs policy catalogue.",
                  "ea": "Saved search 'UC-22.49.4' running on System retention configuration snapshots, retention_policy.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA",
                  "v": "2013-final",
                  "cl": "§164.316(b)(2)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA §164.316(b)(2) is enforced — Splunk UC-22.49.4: Retention policy drift — system config vs policy catalogue.",
                  "ea": "Saved search 'UC-22.49.4' running on System retention configuration snapshots, retention_policy.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.5(1)(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.5(1)(e) is enforced — Splunk UC-22.49.4: Retention policy drift — system config vs policy catalogue.",
                  "ea": "Saved search 'UC-22.49.4' running on System retention configuration snapshots, retention_policy.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.49.5",
              "n": "Cryptographic erasure attestation — per-asset destruction evidence",
              "c": "high",
              "f": "advanced",
              "v": "Solves the 'how do I prove we actually erased that data?' question that every DPO faces.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Asset disposal workflow events, KMS key destruction events.",
              "q": "index=assets sourcetype=asset:disposal earliest=-7d method=cryptographic_erasure\n| join asset_id [search index=kms action=DeleteKey earliest=-8d | fields asset_id key_id _time | rename _time as key_deleted_at]\n| eval delta=abs(_time-key_deleted_at)\n| where isnull(key_deleted_at) OR delta>86400\n| table asset_id key_id _time key_deleted_at delta",
              "m": "(1) Tag asset-disposal events with asset_id and key_id; (2) ingest KMS DeleteKey; (3) run daily.",
              "z": "Table of orphan disposal events, single-value '% of disposals with matched key destruction'.",
              "kfp": "Shared-key assets require manual validation; exclude via shared_key_assets.csv.",
              "refs": "[Regulation (EU) 2016/679 — GDPR](https://eur-lex.europa.eu/eli/reg/2016/679/oj), [NIST SP 800-53 Rev. 5](https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final), [45 CFR Part 164 — HIPAA](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Asset disposal workflow events, KMS key destruction events..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag asset-disposal events with asset_id and key_id; (2) ingest KMS DeleteKey; (3) run daily.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=assets sourcetype=asset:disposal earliest=-7d method=cryptographic_erasure\n| join asset_id [search index=kms action=DeleteKey earliest=-8d | fields asset_id key_id _time | rename _time as key_deleted_at]\n| eval delta=abs(_time-key_deleted_at)\n| where isnull(key_deleted_at) OR delta>86400\n| table asset_id key_id _time key_deleted_at delta\n```\n\nUnderstanding this SPL\n\n**Cryptographic erasure attestation — per-asset destruction evidence** — Solves the 'how do I prove we actually erased that data?' question that every DPO faces.\n\nDocumented **Data sources**: Asset disposal workflow events, KMS key destruction events. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: assets; **sourcetype**: asset:disposal. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=assets, sourcetype=asset:disposal, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(key_deleted_at) OR delta>86400` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **Cryptographic erasure attestation — per-asset destruction evidence**): table asset_id key_id _time key_deleted_at delta\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of orphan disposal events, single-value '% of disposals with matched key destruction'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We help you see retention and disposal so data is not kept too long and holds are respected.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "GDPR",
                "HIPAA",
                "NIST-800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CCPA/CPRA",
                  "v": "CPRA (as amended)",
                  "cl": "§1798.105",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CCPA/CPRA §1798.105 (Consumer right to delete) is enforced — Splunk UC-22.49.5: Cryptographic erasure attestation — per-asset destruction evidence.",
                  "ea": "Saved search 'UC-22.49.5' running on Asset disposal workflow events, KMS key destruction events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that GDPR Art.17 (Right to erasure) is enforced — Splunk UC-22.49.5: Cryptographic erasure attestation — per-asset destruction evidence.",
                  "ea": "Saved search 'UC-22.49.5' running on Asset disposal workflow events, KMS key destruction events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "HIPAA",
                  "v": "2013-final",
                  "cl": "§164.310(d)(2)(i)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA §164.310(d)(2)(i) is enforced — Splunk UC-22.49.5: Cryptographic erasure attestation — per-asset destruction evidence.",
                  "ea": "Saved search 'UC-22.49.5' running on Asset disposal workflow events, KMS key destruction events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "NIST-800-53",
                  "v": "Rev. 5",
                  "cl": "MP-6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST-800-53 MP-6 is enforced — Splunk UC-22.49.5: Cryptographic erasure attestation — per-asset destruction evidence.",
                  "ea": "Saved search 'UC-22.49.5' running on Asset disposal workflow events, KMS key destruction events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.17 is enforced — Splunk UC-22.49.5: Cryptographic erasure attestation — per-asset destruction evidence.",
                  "ea": "Saved search 'UC-22.49.5' running on Asset disposal workflow events, KMS key destruction events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 2,
            "none": 0
          }
        },
        {
          "i": "22.3",
          "n": "— DORA (extended clauses)",
          "u": [
            {
              "i": "22.3.41",
              "n": "DORA Art.6 — ICT risk-management framework evidence: control catalogue drift detection",
              "c": "high",
              "f": "advanced",
              "v": "Replaces the annual 'do we still have the framework' attestation with a continuous reconciliation — regulators under DORA Art.6 expect the framework to be 'soundly, comprehensively and well-documented', meaning drift must be detected and remediated, not just discovered at the next audit.",
              "t": "Splunk Enterprise Security, Splunk Add-on for ServiceNow",
              "d": "`index=grc` sourcetype=archer:control, sourcetype=servicenow:grc_control; `index=cmdb` sourcetype=cmdb:ci_ict_service; controlFamily catalogue lookup `dora_control_catalogue.csv`.",
              "q": "index=grc sourcetype IN (archer:control,servicenow:grc_control) earliest=-1d\n| stats latest(control_family) AS control_family latest(state) AS state BY control_id ict_service\n| lookup dora_control_catalogue.csv control_family OUTPUT approved\n| where isnull(approved) OR state=\"inactive\"\n| table ict_service control_id control_family state approved",
              "m": "(1) Export control inventory daily from Archer/ServiceNow GRC into Splunk; (2) maintain dora_control_catalogue.csv as the Board-approved baseline with approval date and approver; (3) schedule UC daily; (4) drift >0 opens a risk-register entry assigned to the CISO with 30-day remediation SLA; (5) quarterly sign-off by the ICT risk committee.",
              "z": "Table of drifted controls, timechart of drift count by day, single value 'days since last zero-drift state'.",
              "kfp": "Catalogue refreshes legitimately introduce new control_family values until the GRC platform syncs — maintain a 24h grace window in the lookup's effective-from date.",
              "refs": "[DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5), [MITRE ATT&CK — T1562](https://attack.mitre.org/techniques/T1562/)",
              "mitre": [
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security, Splunk Add-on for ServiceNow.\n• Ensure the following data sources are available: `index=grc` sourcetype=archer:control, sourcetype=servicenow:grc_control; `index=cmdb` sourcetype=cmdb:ci_ict_service; controlFamily catalogue lookup `dora_control_catalogue.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export control inventory daily from Archer/ServiceNow GRC into Splunk; (2) maintain dora_control_catalogue.csv as the Board-approved baseline with approval date and approver; (3) schedule UC daily; (4) drift >0 opens a risk-register entry assigned to the CISO with 30-day remediation SLA; (5) quarterly sign-off by the ICT risk committee.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype IN (archer:control,servicenow:grc_control) earliest=-1d\n| stats latest(control_family) AS control_family latest(state) AS state BY control_id ict_service\n| lookup dora_control_catalogue.csv control_family OUTPUT approved\n| where isnull(approved) OR state=\"inactive\"\n| table ict_service control_id control_family state approved\n```\n\nUnderstanding this SPL\n\n**DORA Art.6 — ICT risk-management framework evidence: control catalogue drift detection** — Replaces the annual 'do we still have the framework' attestation with a continuous reconciliation — regulators under DORA Art.6 expect the framework to be 'soundly, comprehensively and well-documented', meaning drift must be detected and remediated, not just discovered at the next audit.\n\nDocumented **Data sources**: `index=grc` sourcetype=archer:control, sourcetype=servicenow:grc_control; `index=cmdb` sourcetype=cmdb:ci_ict_service; controlFamily catalogue lookup `dora_control_catalogue.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security, Splunk Add-on for ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by control_id ict_service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved) OR state=\"inactive\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **DORA Art.6 — ICT risk-management framework evidence: control catalogue drift detection**): table ict_service control_id control_family state approved\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table of drifted controls, timechart of drift count by day, single value 'days since last zero-drift state'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We compares the live control inventory from the governance platform against the Board-approved DORA control catalogue (per Art.6 requirement for a documented ICT risk-management framework).",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "NIST 800-53"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.6 (ICT risk-management framework) is enforced — Splunk UC-22.3.41: DORA Art.6 — ICT risk-management framework evidence: control catalogue drift detection.",
                  "ea": "Saved search 'UC-22.3.41' running on sourcetype archer:control and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 A.5.1 is enforced — Splunk UC-22.3.41: DORA Art.6 — ICT risk-management framework evidence: control catalogue drift detection.",
                  "ea": "Saved search 'UC-22.3.41' running on sourcetype archer:control and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "PM-9",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 PM-9 is enforced — Splunk UC-22.3.41: DORA Art.6 — ICT risk-management framework evidence: control catalogue drift detection.",
                  "ea": "Saved search 'UC-22.3.41' running on sourcetype archer:control and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.42",
              "n": "DORA Art.7 — ICT systems inventory completeness: unmanaged endpoints attached to financial services",
              "c": "high",
              "f": "intermediate",
              "v": "Turns the quarterly 'asset inventory reconciliation' into a continuous signal, closing the time window during which an unmanaged host can process financial transactions unnoticed.",
              "t": "Splunk Enterprise Security, Splunk Add-on for Microsoft Windows, Splunk Add-on for Unix and Linux",
              "d": "`index=endpoint` sourcetype=WinEventLog:Security / sourcetype=linux:audit; `index=cmdb` sourcetype=cmdb:ci_server; authoritative financial-service CMDB view `financial_services_servers.csv`.",
              "q": "| tstats summariesonly=t count FROM datamodel=Authentication WHERE Authentication.app IN (\"payments\",\"trading\",\"settlement\") BY Authentication.dest\n| rename Authentication.dest AS hostname\n| lookup financial_services_servers.csv hostname OUTPUT asset_id owner team\n| where isnull(asset_id)\n| eval inventory_gap_reason=\"not in ICT asset inventory\"\n| table hostname inventory_gap_reason count",
              "m": "(1) Onboard auth data to CIM Authentication DM; (2) maintain financial_services_servers.csv from the ICT asset register nightly; (3) schedule UC hourly; (4) hit opens a ServiceNow CMDB task to register the host; (5) SLA: resolve within 72h or isolate; (6) exclude build/imaging windows with an effective-from allowlist.",
              "z": "Bar chart of inventory gaps by financial service, table of hosts with count, single value 'hosts in gap'.",
              "kfp": "Short-lived CI/CD build runners used for payments tests may legitimately appear off-inventory — allow-list by hostname-pattern in financial_services_servers.csv with a 24h TTL.",
              "refs": "[DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIS2 Directive (EU) 2022/2555](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [MITRE ATT&CK — T1078](https://attack.mitre.org/techniques/T1078/)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security, Splunk Add-on for Microsoft Windows, Splunk Add-on for Unix and Linux.\n• Ensure the following data sources are available: `index=endpoint` sourcetype=WinEventLog:Security / sourcetype=linux:audit; `index=cmdb` sourcetype=cmdb:ci_server; authoritative financial-service CMDB view `financial_services_servers.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Onboard auth data to CIM Authentication DM; (2) maintain financial_services_servers.csv from the ICT asset register nightly; (3) schedule UC hourly; (4) hit opens a ServiceNow CMDB task to register the host; (5) SLA: resolve within 72h or isolate; (6) exclude build/imaging windows with an effective-from allowlist.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count FROM datamodel=Authentication WHERE Authentication.app IN (\"payments\",\"trading\",\"settlement\") BY Authentication.dest\n| rename Authentication.dest AS hostname\n| lookup financial_services_servers.csv hostname OUTPUT asset_id owner team\n| where isnull(asset_id)\n| eval inventory_gap_reason=\"not in ICT asset inventory\"\n| table hostname inventory_gap_reason count\n```\n\nUnderstanding this SPL\n\n**DORA Art.7 — ICT systems inventory completeness: unmanaged endpoints attached to financial services** — Turns the quarterly 'asset inventory reconciliation' into a continuous signal, closing the time window during which an unmanaged host can process financial transactions unnoticed.\n\nDocumented **Data sources**: `index=endpoint` sourcetype=WinEventLog:Security / sourcetype=linux:audit; `index=cmdb` sourcetype=cmdb:ci_server; authoritative financial-service CMDB view `financial_services_servers.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security, Splunk Add-on for Microsoft Windows, Splunk Add-on for Unix and Linux. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(asset_id)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **inventory_gap_reason** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **DORA Art.7 — ICT systems inventory completeness: unmanaged endpoints attached to financial services**): table hostname inventory_gap_reason count\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart of inventory gaps by financial service, table of hosts with count, single value 'hosts in gap'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We uses the Authentication data model to find hosts that have been seen handling financial-service traffic but are absent from the official ICT asset CMDB.",
              "mtype": [
                "Compliance",
                "Availability",
                "Security"
              ],
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "NIS2"
              ],
              "a": [
                "Authentication",
                "Inventory"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.7 (ICT systems, protocols and tools) is enforced — Splunk UC-22.3.42: DORA Art.7 — ICT systems inventory completeness: unmanaged endpoints attached to financial services.",
                  "ea": "Saved search 'UC-22.3.42' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.9",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.5.9 is enforced — Splunk UC-22.3.42: DORA Art.7 — ICT systems inventory completeness: unmanaged endpoints attached to financial services.",
                  "ea": "Saved search 'UC-22.3.42' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.21(2)(d)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIS2 Art.21(2)(d) (Supply-chain security) is enforced — Splunk UC-22.3.42: DORA Art.7 — ICT systems inventory completeness: unmanaged endpoints attached to financial services.",
                  "ea": "Saved search 'UC-22.3.42' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "pillar": "both",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.43",
              "n": "DORA Art.8 — ICT risk identification: newly discovered high-severity exposure on critical financial services",
              "c": "critical",
              "f": "intermediate",
              "v": "Produces per-finding evidence that critical-service exposure is identified and logged within the DORA-aligned window, rather than relying on monthly scanner review meetings.",
              "t": "Splunk Enterprise Security, Splunk Add-on for Tenable, Splunk Add-on for Qualys",
              "d": "`index=vm` sourcetype=tenable:sc:vuln OR sourcetype=qualys:host; `index=cmdb` sourcetype=cmdb:ci_ict_service; DORA critical-service lookup `dora_critical_services.csv`.",
              "q": "index=vm sourcetype IN (tenable:sc:vuln,qualys:host) cvss>=7 earliest=-15m\n| rename dest AS hostname\n| lookup dora_critical_services.csv hostname OUTPUT ict_service criticality_tier\n| where criticality_tier=\"critical\"\n| eval registration_sla_met=if(relative_time(now(),\"-24h\")<=_time,1,0)\n| stats count BY ict_service cvss cve hostname registration_sla_met",
              "m": "(1) Onboard scanner output; (2) tag hosts with ict_service + criticality_tier nightly; (3) schedule UC every 15m; (4) each hit opens a risk-register entry (ServiceNow) within 24h; (5) weekly report to ICT risk committee; (6) breach of 24h SLA escalates to the Head of IR and Board/Audit Committee.",
              "z": "Timechart of new criticals per ICT service, table of unresolved findings, single value 'findings past 24h SLA'.",
              "kfp": "Scanner re-baselines after signature updates can produce a one-time spike of 're-discovered' findings — cross-reference against first_seen and exclude.",
              "refs": "[DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5), [PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [MITRE ATT&CK — T1190](https://attack.mitre.org/techniques/T1190/)",
              "mitre": [
                "T1190"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security, Splunk Add-on for Tenable, Splunk Add-on for Qualys.\n• Ensure the following data sources are available: `index=vm` sourcetype=tenable:sc:vuln OR sourcetype=qualys:host; `index=cmdb` sourcetype=cmdb:ci_ict_service; DORA critical-service lookup `dora_critical_services.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Onboard scanner output; (2) tag hosts with ict_service + criticality_tier nightly; (3) schedule UC every 15m; (4) each hit opens a risk-register entry (ServiceNow) within 24h; (5) weekly report to ICT risk committee; (6) breach of 24h SLA escalates to the Head of IR and Board/Audit Committee.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vm sourcetype IN (tenable:sc:vuln,qualys:host) cvss>=7 earliest=-15m\n| rename dest AS hostname\n| lookup dora_critical_services.csv hostname OUTPUT ict_service criticality_tier\n| where criticality_tier=\"critical\"\n| eval registration_sla_met=if(relative_time(now(),\"-24h\")<=_time,1,0)\n| stats count BY ict_service cvss cve hostname registration_sla_met\n```\n\nUnderstanding this SPL\n\n**DORA Art.8 — ICT risk identification: newly discovered high-severity exposure on critical financial services** — Produces per-finding evidence that critical-service exposure is identified and logged within the DORA-aligned window, rather than relying on monthly scanner review meetings.\n\nDocumented **Data sources**: `index=vm` sourcetype=tenable:sc:vuln OR sourcetype=qualys:host; `index=cmdb` sourcetype=cmdb:ci_ict_service; DORA critical-service lookup `dora_critical_services.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security, Splunk Add-on for Tenable, Splunk Add-on for Qualys. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vm.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vm, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where criticality_tier=\"critical\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **registration_sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by ict_service cvss cve hostname registration_sla_met** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Timechart of new criticals per ICT service, table of unresolved findings, single value 'findings past 24h SLA'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We flag newly-discovered CVSS>=7 exposures on hosts supporting a DORA-critical financial service.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "NIST 800-53",
                "PCI-DSS"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "e": [
                "qualys",
                "servicenow",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.8",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.8 (Identification) is enforced — Splunk UC-22.3.43: DORA Art.8 — ICT risk identification: newly discovered high-severity exposure on critical financial services.",
                  "ea": "Saved search 'UC-22.3.43' running on sourcetype tenable:sc:vuln and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.8",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.8 is enforced — Splunk UC-22.3.43: DORA Art.8 — ICT risk identification: newly discovered high-severity exposure on critical financial services.",
                  "ea": "Saved search 'UC-22.3.43' running on sourcetype tenable:sc:vuln and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "RA-5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST 800-53 RA-5 (Vulnerability scanning) is enforced — Splunk UC-22.3.43: DORA Art.8 — ICT risk identification: newly discovered high-severity exposure on critical financial services.",
                  "ea": "Saved search 'UC-22.3.43' running on sourcetype tenable:sc:vuln and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "11.3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI-DSS 11.3.1 is enforced — Splunk UC-22.3.43: DORA Art.8 — ICT risk identification: newly discovered high-severity exposure on critical financial services.",
                  "ea": "Saved search 'UC-22.3.43' running on sourcetype tenable:sc:vuln and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.44",
              "n": "DORA Art.17 — ICT incident classification timeliness: major-incident clock evidence",
              "c": "critical",
              "f": "advanced",
              "v": "Supervisors under DORA request the per-incident clock evidence. This UC produces it automatically, so the entity can prove the notification workflow was operated within the required windows or identify the exact root cause of a miss.",
              "t": "Splunk SOAR, Splunk Add-on for ServiceNow",
              "d": "`index=soar` sourcetype=servicenow:sir_incident OR sourcetype=phantom:incident; `index=ticketing` sourcetype=jira:issue (IR project); regulator notification log `index=compliance` sourcetype=dora:notification.",
              "q": "index=soar sourcetype IN (servicenow:sir_incident,phantom:incident) tag::major_incident=true earliest=-1d\n| stats min(detected_at) AS detected_at min(classified_at) AS classified_at min(notified_at) AS notified_at min(intermediate_at) AS intermediate_at min(final_at) AS final_at BY incident_id\n| eval detect_to_classify_h=round((classified_at-detected_at)/3600,2)\n| eval classify_to_initial_h=round((notified_at-classified_at)/3600,2)\n| eval initial_to_inter_h=round((intermediate_at-notified_at)/3600,2)\n| eval inter_to_final_h=round((final_at-intermediate_at)/3600,2)\n| eval SLA_breached=if(detect_to_classify_h>24 OR classify_to_initial_h>24 OR initial_to_inter_h>72 OR inter_to_final_h>720,1,0)\n| table incident_id detected_at classified_at notified_at intermediate_at final_at detect_to_classify_h classify_to_initial_h initial_to_inter_h inter_to_final_h SLA_breached",
              "m": "(1) Normalise detected_at/classified_at/notified_at/intermediate_at/final_at in the IR tool; (2) tag major incidents at classification; (3) schedule UC every 5m; (4) SLA_breached=1 escalates to CISO + Head of IR; (5) monthly roll-up to Board; (6) annual tabletop exercise validates clock fidelity.",
              "z": "Per-incident Gantt of phase durations, timechart of breaches per quarter, single value 'open incidents at risk of SLA breach in next 4h'.",
              "kfp": "Incidents re-classified as non-major after initial raise can show a misleading 'breach' until tag::major_incident is cleared — dedup on latest tag state before computing SLAs.",
              "refs": "[DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [NIS2 Directive (EU) 2022/2555](https://eur-lex.europa.eu/eli/dir/2022/2555/oj), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [MITRE ATT&CK — T1090](https://attack.mitre.org/techniques/T1090/)",
              "mitre": [
                "T1090"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk SOAR, Splunk Add-on for ServiceNow.\n• Ensure the following data sources are available: `index=soar` sourcetype=servicenow:sir_incident OR sourcetype=phantom:incident; `index=ticketing` sourcetype=jira:issue (IR project); regulator notification log `index=compliance` sourcetype=dora:notification..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalise detected_at/classified_at/notified_at/intermediate_at/final_at in the IR tool; (2) tag major incidents at classification; (3) schedule UC every 5m; (4) SLA_breached=1 escalates to CISO + Head of IR; (5) monthly roll-up to Board; (6) annual tabletop exercise validates clock fidelity.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=soar sourcetype IN (servicenow:sir_incident,phantom:incident) tag::major_incident=true earliest=-1d\n| stats min(detected_at) AS detected_at min(classified_at) AS classified_at min(notified_at) AS notified_at min(intermediate_at) AS intermediate_at min(final_at) AS final_at BY incident_id\n| eval detect_to_classify_h=round((classified_at-detected_at)/3600,2)\n| eval classify_to_initial_h=round((notified_at-classified_at)/3600,2)\n| eval initial_to_inter_h=round((intermediate_at-notified_at)/3600,2)\n| eval inter_to_final_h=round((final_at-intermediate_at)/3600,2)\n| eval SLA_breached=if(detect_to_classify_h>24 OR classify_to_initial_h>24 OR initial_to_inter_h>72 OR inter_to_final_h>720,1,0)\n| table incident_id detected_at classified_at notified_at intermediate_at final_at detect_to_classify_h classify_to_initial_h initial_to_inter_h inter_to_final_h SLA_breached\n```\n\nUnderstanding this SPL\n\n**DORA Art.17 — ICT incident classification timeliness: major-incident clock evidence** — Supervisors under DORA request the per-incident clock evidence. This UC produces it automatically, so the entity can prove the notification workflow was operated within the required windows or identify the exact root cause of a miss.\n\nDocumented **Data sources**: `index=soar` sourcetype=servicenow:sir_incident OR sourcetype=phantom:incident; `index=ticketing` sourcetype=jira:issue (IR project); regulator notification log `index=compliance` sourcetype=dora:notification. **App/TA** (typical add-on context): Splunk SOAR, Splunk Add-on for ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: soar.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=soar, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by incident_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **detect_to_classify_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **classify_to_initial_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **initial_to_inter_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **inter_to_final_h** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **SLA_breached** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **DORA Art.17 — ICT incident classification timeliness: major-incident clock evidence**): table incident_id detected_at classified_at notified_at intermediate_at final_at detect_to_classify_h classify_to_initial_h initial_to_in…\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Per-incident Gantt of phase durations, timechart of breaches per quarter, single value 'open incidents at risk of SLA breach in next 4h'.",
              "script": "",
              "premium": "Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We measures, per major ICT incident, the elapsed time between detection, classification, initial notification, intermediate notification, and final report.",
              "mtype": [
                "Compliance",
                "Security",
                "Availability"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "NIS2"
              ],
              "a": [
                "Alerts",
                "Ticket_Management"
              ],
              "e": [
                "jira",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.17 (ICT-related incident management process) is enforced — Splunk UC-22.3.44: DORA Art.17 — ICT incident classification timeliness: major-incident clock evidence.",
                  "ea": "Saved search 'UC-22.3.44' running on sourcetype servicenow:sir_incident and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.24",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.5.24 (Incident management planning) is enforced — Splunk UC-22.3.44: DORA Art.17 — ICT incident classification timeliness: major-incident clock evidence.",
                  "ea": "Saved search 'UC-22.3.44' running on sourcetype servicenow:sir_incident and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIS2",
                  "v": "Directive (EU) 2022/2555",
                  "cl": "Art.23",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIS2 Art.23 (Reporting obligations) is enforced — Splunk UC-22.3.44: DORA Art.17 — ICT incident classification timeliness: major-incident clock evidence.",
                  "ea": "Saved search 'UC-22.3.44' running on sourcetype servicenow:sir_incident and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.3.45",
              "n": "DORA Art.24 — Digital operational-resilience testing: test-plan execution attestation",
              "c": "high",
              "f": "intermediate",
              "v": "Gives the CISO and the ICT risk committee a single-query attestation that the operational-resilience testing programme executed as planned, replacing an email-based reconciliation that typically slips by weeks.",
              "t": "Splunk Enterprise Security",
              "d": "`index=testing` sourcetype=test:plan_execution (BAS/TLPT/DR runs); `index=ticketing` sourcetype=jira:issue (findings tickets); test-plan register `dora_test_plan.csv`.",
              "q": "| inputlookup dora_test_plan.csv\n| eval planned_date_et=strptime(planned_date,\"%Y-%m-%d\")\n| join type=outer test_id\n  [search index=testing sourcetype=test:plan_execution\n   | stats latest(_time) AS actual_run_time latest(pass_fail) AS pass_fail sum(finding_count) AS finding_count BY test_id]\n| eval evidence_complete=if(isnotnull(actual_run_time),1,0)\n| eval on_time=if(actual_run_time<=planned_date_et+86400*14,1,0)\n| table test_id test_type scope_service planned_date actual_run_time pass_fail finding_count evidence_complete on_time",
              "m": "(1) Maintain dora_test_plan.csv as the approved annual programme; (2) normalise execution output from BAS/DR tools to sourcetype=test:plan_execution; (3) schedule UC daily; (4) evidence_complete=0 past planned_date + 14d escalates to CISO; (5) per-quarter report to Board; (6) missing TLPT for significant entities escalates immediately.",
              "z": "Calendar heatmap of test executions, bar chart of findings per scope service, single value 'tests past due'.",
              "kfp": "Tests rescheduled due to change freeze produce a legitimate gap — record the rescheduled_date and rebase the 14d tolerance.",
              "refs": "[DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5), [MITRE ATT&CK — T1562](https://attack.mitre.org/techniques/T1562/)",
              "mitre": [
                "T1562"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security.\n• Ensure the following data sources are available: `index=testing` sourcetype=test:plan_execution (BAS/TLPT/DR runs); `index=ticketing` sourcetype=jira:issue (findings tickets); test-plan register `dora_test_plan.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain dora_test_plan.csv as the approved annual programme; (2) normalise execution output from BAS/DR tools to sourcetype=test:plan_execution; (3) schedule UC daily; (4) evidence_complete=0 past planned_date + 14d escalates to CISO; (5) per-quarter report to Board; (6) missing TLPT for significant entities escalates immediately.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup dora_test_plan.csv\n| eval planned_date_et=strptime(planned_date,\"%Y-%m-%d\")\n| join type=outer test_id\n  [search index=testing sourcetype=test:plan_execution\n   | stats latest(_time) AS actual_run_time latest(pass_fail) AS pass_fail sum(finding_count) AS finding_count BY test_id]\n| eval evidence_complete=if(isnotnull(actual_run_time),1,0)\n| eval on_time=if(actual_run_time<=planned_date_et+86400*14,1,0)\n| table test_id test_type scope_service planned_date actual_run_time pass_fail finding_count evidence_complete on_time\n```\n\nUnderstanding this SPL\n\n**DORA Art.24 — Digital operational-resilience testing: test-plan execution attestation** — Gives the CISO and the ICT risk committee a single-query attestation that the operational-resilience testing programme executed as planned, replacing an email-based reconciliation that typically slips by weeks.\n\nDocumented **Data sources**: `index=testing` sourcetype=test:plan_execution (BAS/TLPT/DR runs); `index=ticketing` sourcetype=jira:issue (findings tickets); test-plan register `dora_test_plan.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **planned_date_et** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **evidence_complete** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **on_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **DORA Art.24 — Digital operational-resilience testing: test-plan execution attestation**): table test_id test_type scope_service planned_date actual_run_time pass_fail finding_count evidence_complete on_time\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Calendar heatmap of test executions, bar chart of findings per scope service, single value 'tests past due'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We pivots the DORA test-plan register against the actual execution telemetry from the BAS/TLPT/DR tooling.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "NIST 800-53"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "jira"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.24 (Digital operational-resilience testing) is enforced — Splunk UC-22.3.45: DORA Art.24 — Digital operational-resilience testing: test-plan execution attestation.",
                  "ea": "Saved search 'UC-22.3.45' running on sourcetype test:plan_execution and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.29",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 A.5.29 is enforced — Splunk UC-22.3.45: DORA Art.24 — Digital operational-resilience testing: test-plan execution attestation.",
                  "ea": "Saved search 'UC-22.3.45' running on sourcetype test:plan_execution and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CA-8",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CA-8 is enforced — Splunk UC-22.3.45: DORA Art.24 — Digital operational-resilience testing: test-plan execution attestation.",
                  "ea": "Saved search 'UC-22.3.45' running on sourcetype test:plan_execution and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 5,
            "none": 0
          }
        },
        {
          "i": "22.6",
          "n": "— ISO/IEC 27001 (extended clauses)",
          "u": [
            {
              "i": "22.6.46",
              "n": "ISO/IEC 27001:2022 Clause 6.1 — Risk-assessment evidence: live risk register decay",
              "c": "high",
              "f": "intermediate",
              "v": "Produces the evidence the external auditor expects at stage-2: proof that each risk line in the ISMS register was reviewed within its cadence, identifying exact owners of overdue entries.",
              "t": "Splunk Enterprise Security, Splunk Add-on for ServiceNow",
              "d": "`index=grc` sourcetype=archer:risk OR sourcetype=servicenow:grc_risk; ISMS cadence lookup `iso_27001_risk_cadence.csv` (per category, review frequency in days).",
              "q": "index=grc sourcetype IN (archer:risk,servicenow:grc_risk) earliest=-1d\n| stats latest(last_reviewed) AS last_reviewed latest(category) AS category latest(owner) AS owner BY risk_id\n| lookup iso_27001_risk_cadence.csv category OUTPUT cadence_days\n| eval days_overdue=round((now()-last_reviewed)/86400,0)-cadence_days\n| where days_overdue>0\n| table risk_id category owner last_reviewed cadence_days days_overdue",
              "m": "(1) Export GRC risk register daily to Splunk; (2) maintain iso_27001_risk_cadence.csv by risk category; (3) schedule UC daily; (4) overdue > 0 opens a task to the risk owner; (5) Board quarterly report uses the rolling 90-day mean of overdue risks.",
              "z": "Histogram of days_overdue by category, table of top-overdue risks, single value 'total risks overdue today'.",
              "kfp": "Category cadence changes introduced via ISMS committee need a 24h propagation grace — suppress new overdue hits for 24h after the lookup change.",
              "refs": "[ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5), [DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security, Splunk Add-on for ServiceNow.\n• Ensure the following data sources are available: `index=grc` sourcetype=archer:risk OR sourcetype=servicenow:grc_risk; ISMS cadence lookup `iso_27001_risk_cadence.csv` (per category, review frequency in days)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export GRC risk register daily to Splunk; (2) maintain iso_27001_risk_cadence.csv by risk category; (3) schedule UC daily; (4) overdue > 0 opens a task to the risk owner; (5) Board quarterly report uses the rolling 90-day mean of overdue risks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype IN (archer:risk,servicenow:grc_risk) earliest=-1d\n| stats latest(last_reviewed) AS last_reviewed latest(category) AS category latest(owner) AS owner BY risk_id\n| lookup iso_27001_risk_cadence.csv category OUTPUT cadence_days\n| eval days_overdue=round((now()-last_reviewed)/86400,0)-cadence_days\n| where days_overdue>0\n| table risk_id category owner last_reviewed cadence_days days_overdue\n```\n\nUnderstanding this SPL\n\n**ISO/IEC 27001:2022 Clause 6.1 — Risk-assessment evidence: live risk register decay** — Produces the evidence the external auditor expects at stage-2: proof that each risk line in the ISMS register was reviewed within its cadence, identifying exact owners of overdue entries.\n\nDocumented **Data sources**: `index=grc` sourcetype=archer:risk OR sourcetype=servicenow:grc_risk; ISMS cadence lookup `iso_27001_risk_cadence.csv` (per category, review frequency in days). **App/TA** (typical add-on context): Splunk Enterprise Security, Splunk Add-on for ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by risk_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **days_overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_overdue>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ISO/IEC 27001:2022 Clause 6.1 — Risk-assessment evidence: live risk register decay**): table risk_id category owner last_reviewed cadence_days days_overdue\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Histogram of days_overdue by category, table of top-overdue risks, single value 'total risks overdue today'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for ISO/IEC 27001:2022 Clause 6.1 — Risk-assessment evidence: live risk register decay. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "NIST 800-53"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.6 (ICT risk-management framework) is enforced — Splunk UC-22.6.46: ISO/IEC 27001:2022 Clause 6.1 — Risk-assessment evidence: live risk register decay.",
                  "ea": "Saved search 'UC-22.6.46' running on sourcetype archer:risk and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "6.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 6.1 (Risk assessment) is enforced — Splunk UC-22.6.46: ISO/IEC 27001:2022 Clause 6.1 — Risk-assessment evidence: live risk register decay.",
                  "ea": "Saved search 'UC-22.6.46' running on sourcetype archer:risk and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "PM-9",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 PM-9 is enforced — Splunk UC-22.6.46: ISO/IEC 27001:2022 Clause 6.1 — Risk-assessment evidence: live risk register decay.",
                  "ea": "Saved search 'UC-22.6.46' running on sourcetype archer:risk and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.47",
              "n": "ISO/IEC 27001:2022 Clause 6.2 — Information-security objectives: measurable-target attainment",
              "c": "medium",
              "f": "intermediate",
              "v": "Replaces the manual spreadsheet most ISMS programmes use to report objective attainment. Auditors see the live measurement, the target, and the last update in a single reproducible query.",
              "t": "Splunk Enterprise Security",
              "d": "`index=compliance` sourcetype=isms:objective_result (push from ISMS KPIs); objective targets lookup `iso_27001_objectives.csv`.",
              "q": "| inputlookup iso_27001_objectives.csv\n| join type=outer objective_id\n  [search index=compliance sourcetype=isms:objective_result earliest=-30d\n   | stats latest(value) AS current latest(_time) AS measured_at BY objective_id\n   | eval measured_at=strftime(measured_at,\"%Y-%m-%d\")]\n| eval attainment_pct=round(100*current/target,1)\n| eval status=if(attainment_pct>=100,\"met\",\"below_target\")\n| table objective_id target current attainment_pct measured_at status",
              "m": "(1) Push each KPI to index=compliance nightly with objective_id tag; (2) maintain iso_27001_objectives.csv as the ISMS-committee-approved target list; (3) schedule UC daily; (4) status=below_target opens an ISMS action; (5) quarterly Board pack exports the table.",
              "z": "Bullet chart of attainment % vs target, line chart of 30-day trend per objective, single value 'objectives below target'.",
              "kfp": "Objective targets revised by the ISMS committee within the look-back window create an apparent regression — use the effective-from column in the lookup to compute status.",
              "refs": "[ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security.\n• Ensure the following data sources are available: `index=compliance` sourcetype=isms:objective_result (push from ISMS KPIs); objective targets lookup `iso_27001_objectives.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Push each KPI to index=compliance nightly with objective_id tag; (2) maintain iso_27001_objectives.csv as the ISMS-committee-approved target list; (3) schedule UC daily; (4) status=below_target opens an ISMS action; (5) quarterly Board pack exports the table.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup iso_27001_objectives.csv\n| join type=outer objective_id\n  [search index=compliance sourcetype=isms:objective_result earliest=-30d\n   | stats latest(value) AS current latest(_time) AS measured_at BY objective_id\n   | eval measured_at=strftime(measured_at,\"%Y-%m-%d\")]\n| eval attainment_pct=round(100*current/target,1)\n| eval status=if(attainment_pct>=100,\"met\",\"below_target\")\n| table objective_id target current attainment_pct measured_at status\n```\n\nUnderstanding this SPL\n\n**ISO/IEC 27001:2022 Clause 6.2 — Information-security objectives: measurable-target attainment** — Replaces the manual spreadsheet most ISMS programmes use to report objective attainment. Auditors see the live measurement, the target, and the last update in a single reproducible query.\n\nDocumented **Data sources**: `index=compliance` sourcetype=isms:objective_result (push from ISMS KPIs); objective targets lookup `iso_27001_objectives.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **attainment_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **ISO/IEC 27001:2022 Clause 6.2 — Information-security objectives: measurable-target attainment**): table objective_id target current attainment_pct measured_at status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bullet chart of attainment % vs target, line chart of 30-day trend per objective, single value 'objectives below target'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for ISO/IEC 27001:2022 Clause 6.2 — Information-security objectives: measurable-target attainment. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "6.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 6.2 (Information-security objectives) is enforced — Splunk UC-22.6.47: ISO/IEC 27001:2022 Clause 6.2 — Information-security objectives: measurable-target attainment.",
                  "ea": "Saved search 'UC-22.6.47' running on sourcetype isms:objective_result and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "9.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 9.1 (Monitoring, measurement, analysis, evaluation) is enforced — Splunk UC-22.6.47: ISO/IEC 27001:2022 Clause 6.2 — Information-security objectives: measurable-target attainment.",
                  "ea": "Saved search 'UC-22.6.47' running on sourcetype isms:objective_result and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "PM-6",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 PM-6 is enforced — Splunk UC-22.6.47: ISO/IEC 27001:2022 Clause 6.2 — Information-security objectives: measurable-target attainment.",
                  "ea": "Saved search 'UC-22.6.47' running on sourcetype isms:objective_result and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.48",
              "n": "ISO/IEC 27001:2022 Clause 8.2 — Operational risk-assessment: per-change risk-score recalculation",
              "c": "high",
              "f": "advanced",
              "v": "Gives Clause 8.2 an operational verification signal instead of an annual narrative. Audit objection 'where is the evidence the risk was reassessed after the 2024-Q3 PCI change?' becomes a one-query answer.",
              "t": "Splunk Enterprise Security, Splunk Add-on for ServiceNow",
              "d": "`index=change` sourcetype=servicenow:change; `index=grc` sourcetype=archer:risk_score; change-to-risk mapping `iso_27001_change_to_risk.csv`.",
              "q": "index=change sourcetype=servicenow:change state=closed earliest=-14d\n| rename affected_ci AS asset\n| lookup iso_27001_change_to_risk.csv asset OUTPUT risk_id\n| join type=outer risk_id\n  [search index=grc sourcetype=archer:risk_score earliest=-14d\n   | stats latest(score) AS new_score earliest(score) AS old_score latest(_time) AS recalc_time BY risk_id]\n| eval delta=new_score-old_score\n| eval recalc_performed=if(recalc_time>=_time AND recalc_time<=_time+604800,1,0)\n| where recalc_performed=0 AND abs(delta)>0\n| table change_id asset risk_id old_score new_score delta recalc_performed",
              "m": "(1) Link every CMDB change to affected risks in iso_27001_change_to_risk.csv; (2) normalise Archer risk_score events; (3) schedule UC daily; (4) recalc_performed=0 opens an ISMS action to the risk owner; (5) monthly report to risk committee.",
              "z": "Scatter of change impact vs recalc latency, table of changes missing recalc, single value 'changes in 14d missing recalc'.",
              "kfp": "Changes categorised 'standard' (no risk implication) should be excluded — filter on change_category!='standard' in the base search.",
              "refs": "[ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security, Splunk Add-on for ServiceNow.\n• Ensure the following data sources are available: `index=change` sourcetype=servicenow:change; `index=grc` sourcetype=archer:risk_score; change-to-risk mapping `iso_27001_change_to_risk.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Link every CMDB change to affected risks in iso_27001_change_to_risk.csv; (2) normalise Archer risk_score events; (3) schedule UC daily; (4) recalc_performed=0 opens an ISMS action to the risk owner; (5) monthly report to risk committee.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=change sourcetype=servicenow:change state=closed earliest=-14d\n| rename affected_ci AS asset\n| lookup iso_27001_change_to_risk.csv asset OUTPUT risk_id\n| join type=outer risk_id\n  [search index=grc sourcetype=archer:risk_score earliest=-14d\n   | stats latest(score) AS new_score earliest(score) AS old_score latest(_time) AS recalc_time BY risk_id]\n| eval delta=new_score-old_score\n| eval recalc_performed=if(recalc_time>=_time AND recalc_time<=_time+604800,1,0)\n| where recalc_performed=0 AND abs(delta)>0\n| table change_id asset risk_id old_score new_score delta recalc_performed\n```\n\nUnderstanding this SPL\n\n**ISO/IEC 27001:2022 Clause 8.2 — Operational risk-assessment: per-change risk-score recalculation** — Gives Clause 8.2 an operational verification signal instead of an annual narrative. Audit objection 'where is the evidence the risk was reassessed after the 2024-Q3 PCI change?' becomes a one-query answer.\n\nDocumented **Data sources**: `index=change` sourcetype=servicenow:change; `index=grc` sourcetype=archer:risk_score; change-to-risk mapping `iso_27001_change_to_risk.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security, Splunk Add-on for ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: change; **sourcetype**: servicenow:change. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=change, sourcetype=servicenow:change, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **delta** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **recalc_performed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where recalc_performed=0 AND abs(delta)>0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ISO/IEC 27001:2022 Clause 8.2 — Operational risk-assessment: per-change risk-score recalculation**): table change_id asset risk_id old_score new_score delta recalc_performed\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter of change impact vs recalc latency, table of changes missing recalc, single value 'changes in 14d missing recalc'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for ISO/IEC 27001:2022 Clause 8.2 — Operational risk-assessment: per-change risk-score recalculation. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "NIST 800-53"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "6.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 6.1 (Risk assessment) is enforced — Splunk UC-22.6.48: ISO/IEC 27001:2022 Clause 8.2 — Operational risk-assessment: per-change risk-score recalculation.",
                  "ea": "Saved search 'UC-22.6.48' running on sourcetype servicenow:change and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "8.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 8.2 (Information-security risk assessment) is enforced — Splunk UC-22.6.48: ISO/IEC 27001:2022 Clause 8.2 — Operational risk-assessment: per-change risk-score recalculation.",
                  "ea": "Saved search 'UC-22.6.48' running on sourcetype servicenow:change and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "RA-3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST 800-53 RA-3 is enforced — Splunk UC-22.6.48: ISO/IEC 27001:2022 Clause 8.2 — Operational risk-assessment: per-change risk-score recalculation.",
                  "ea": "Saved search 'UC-22.6.48' running on sourcetype servicenow:change and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.49",
              "n": "ISO/IEC 27001:2022 Clause 9.1 — Monitoring programme coverage: KPI telemetry uptime",
              "c": "high",
              "f": "intermediate",
              "v": "Prevents the worst external-audit outcome (monitoring 'was in place' but had silently died) and gives the Head of Platform a single dashboard tile showing ISMS telemetry health.",
              "t": "Splunk Enterprise Security",
              "d": "`index=_internal` sourcetype=splunkd (scheduler logs); `index=compliance` sourcetype=isms:objective_result; KPI freshness lookup `iso_27001_kpi_freshness.csv`.",
              "q": "| inputlookup iso_27001_kpi_freshness.csv\n| join type=outer kpi_id\n  [search index=compliance sourcetype=isms:objective_result earliest=-7d\n   | stats latest(_time) AS last_seen BY kpi_id]\n| eval age_min=round((now()-last_seen)/60,0)\n| eval is_stale=if(age_min>sla_minutes OR isnull(last_seen),1,0)\n| table kpi_id last_seen age_min sla_minutes is_stale",
              "m": "(1) Push each KPI into index=compliance; (2) maintain iso_27001_kpi_freshness.csv per KPI with its SLA; (3) schedule UC every 15m; (4) is_stale=1 pages the data-engineering owner of the KPI; (5) weekly ISMS report attaches the latest state.",
              "z": "Heatmap of KPI staleness, table of stale KPIs, single value 'stale KPIs right now'.",
              "kfp": "KPI suspended during maintenance windows legitimately shows as stale — exclude with a maintenance_calendar.csv lookup.",
              "refs": "[ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security.\n• Ensure the following data sources are available: `index=_internal` sourcetype=splunkd (scheduler logs); `index=compliance` sourcetype=isms:objective_result; KPI freshness lookup `iso_27001_kpi_freshness.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Push each KPI into index=compliance; (2) maintain iso_27001_kpi_freshness.csv per KPI with its SLA; (3) schedule UC every 15m; (4) is_stale=1 pages the data-engineering owner of the KPI; (5) weekly ISMS report attaches the latest state.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup iso_27001_kpi_freshness.csv\n| join type=outer kpi_id\n  [search index=compliance sourcetype=isms:objective_result earliest=-7d\n   | stats latest(_time) AS last_seen BY kpi_id]\n| eval age_min=round((now()-last_seen)/60,0)\n| eval is_stale=if(age_min>sla_minutes OR isnull(last_seen),1,0)\n| table kpi_id last_seen age_min sla_minutes is_stale\n```\n\nUnderstanding this SPL\n\n**ISO/IEC 27001:2022 Clause 9.1 — Monitoring programme coverage: KPI telemetry uptime** — Prevents the worst external-audit outcome (monitoring 'was in place' but had silently died) and gives the Head of Platform a single dashboard tile showing ISMS telemetry health.\n\nDocumented **Data sources**: `index=_internal` sourcetype=splunkd (scheduler logs); `index=compliance` sourcetype=isms:objective_result; KPI freshness lookup `iso_27001_kpi_freshness.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **age_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_stale** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **ISO/IEC 27001:2022 Clause 9.1 — Monitoring programme coverage: KPI telemetry uptime**): table kpi_id last_seen age_min sla_minutes is_stale\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of KPI staleness, table of stale KPIs, single value 'stale KPIs right now'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for ISO/IEC 27001:2022 Clause 9.1 — Monitoring programme coverage: KPI telemetry uptime. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance",
                "Availability"
              ],
              "regs": [
                "ISO/IEC 27001",
                "NIST 800-53",
                "SOC-2"
              ],
              "a": [
                "Performance"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "9.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 9.1 (Monitoring, measurement, analysis, evaluation) is enforced — Splunk UC-22.6.49: ISO/IEC 27001:2022 Clause 9.1 — Monitoring programme coverage: KPI telemetry uptime.",
                  "ea": "Saved search 'UC-22.6.49' running on sourcetype splunkd and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CA-7",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NIST 800-53 CA-7 is enforced — Splunk UC-22.6.49: ISO/IEC 27001:2022 Clause 9.1 — Monitoring programme coverage: KPI telemetry uptime.",
                  "ea": "Saved search 'UC-22.6.49' running on sourcetype splunkd and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC7.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC7.1 (System operations monitoring) is enforced — Splunk UC-22.6.49: ISO/IEC 27001:2022 Clause 9.1 — Monitoring programme coverage: KPI telemetry uptime.",
                  "ea": "Saved search 'UC-22.6.49' running on sourcetype splunkd and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.50",
              "n": "ISO/IEC 27001:2022 Clause 9.2 — Internal audit coverage: control sample rotation",
              "c": "medium",
              "f": "intermediate",
              "v": "Closes the typical gap where internal audit over-samples 'hot' controls and silently skips administrative ones. The rotation evidence is what ISO 9.2 requires and what the registrar looks for.",
              "t": "Splunk Enterprise Security",
              "d": "`index=grc` sourcetype=archer:audit_test OR sourcetype=servicenow:audit_finding; ISMS control catalogue `iso_27001_control_catalog.csv`.",
              "q": "| inputlookup iso_27001_control_catalog.csv\n| join type=outer control_id\n  [search index=grc sourcetype IN (archer:audit_test,servicenow:audit_finding) earliest=-365d\n   | stats latest(_time) AS last_audited count AS audit_count_365d BY control_id]\n| eval days_since=round((now()-last_audited)/86400,0)\n| eval due_for_audit=if(isnull(last_audited) OR days_since>365,1,0)\n| table control_id category last_audited days_since audit_count_365d due_for_audit",
              "m": "(1) Maintain iso_27001_control_catalog.csv; (2) require internal-audit output to index=grc with control_id tag; (3) schedule UC weekly; (4) due_for_audit=1 adds the control to the next quarter audit plan; (5) annual Audit Committee report.",
              "z": "Stacked bar of audit count by category, table of overdue controls, single value 'controls never audited'.",
              "kfp": "Controls retired in-year but still in catalog produce false 'overdue' entries — mark retired=true in the lookup.",
              "refs": "[ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security.\n• Ensure the following data sources are available: `index=grc` sourcetype=archer:audit_test OR sourcetype=servicenow:audit_finding; ISMS control catalogue `iso_27001_control_catalog.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain iso_27001_control_catalog.csv; (2) require internal-audit output to index=grc with control_id tag; (3) schedule UC weekly; (4) due_for_audit=1 adds the control to the next quarter audit plan; (5) annual Audit Committee report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup iso_27001_control_catalog.csv\n| join type=outer control_id\n  [search index=grc sourcetype IN (archer:audit_test,servicenow:audit_finding) earliest=-365d\n   | stats latest(_time) AS last_audited count AS audit_count_365d BY control_id]\n| eval days_since=round((now()-last_audited)/86400,0)\n| eval due_for_audit=if(isnull(last_audited) OR days_since>365,1,0)\n| table control_id category last_audited days_since audit_count_365d due_for_audit\n```\n\nUnderstanding this SPL\n\n**ISO/IEC 27001:2022 Clause 9.2 — Internal audit coverage: control sample rotation** — Closes the typical gap where internal audit over-samples 'hot' controls and silently skips administrative ones. The rotation evidence is what ISO 9.2 requires and what the registrar looks for.\n\nDocumented **Data sources**: `index=grc` sourcetype=archer:audit_test OR sourcetype=servicenow:audit_finding; ISMS control catalogue `iso_27001_control_catalog.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **due_for_audit** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **ISO/IEC 27001:2022 Clause 9.2 — Internal audit coverage: control sample rotation**): table control_id category last_audited days_since audit_count_365d due_for_audit\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar of audit count by category, table of overdue controls, single value 'controls never audited'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for ISO/IEC 27001:2022 Clause 9.2 — Internal audit coverage: control sample rotation. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "NIST 800-53",
                "SOC-2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "9.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 9.2 (Internal audit) is enforced — Splunk UC-22.6.50: ISO/IEC 27001:2022 Clause 9.2 — Internal audit coverage: control sample rotation.",
                  "ea": "Saved search 'UC-22.6.50' running on sourcetype archer:audit_test and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CA-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 CA-2 is enforced — Splunk UC-22.6.50: ISO/IEC 27001:2022 Clause 9.2 — Internal audit coverage: control sample rotation.",
                  "ea": "Saved search 'UC-22.6.50' running on sourcetype archer:audit_test and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC4.1",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that SOC-2 CC4.1 is enforced — Splunk UC-22.6.50: ISO/IEC 27001:2022 Clause 9.2 — Internal audit coverage: control sample rotation.",
                  "ea": "Saved search 'UC-22.6.50' running on sourcetype archer:audit_test and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.51",
              "n": "ISO/IEC 27001:2022 Annex A.5.24 — Incident-management planning: runbook currency attestation",
              "c": "medium",
              "f": "beginner",
              "v": "Satisfies the A.5.24 'planning and preparation' sub-requirement: incident procedures are continuously refreshed, not one-time written.",
              "t": "Splunk SOAR",
              "d": "`index=soar` sourcetype=phantom:playbook OR sourcetype=runbook:metadata; runbook register lookup `ir_runbook_register.csv`.",
              "q": "| inputlookup ir_runbook_register.csv\n| eval last_reviewed=strptime(last_reviewed,\"%Y-%m-%d\")\n| eval age_days=round((now()-last_reviewed)/86400,0)\n| eval is_stale=if(age_days>cadence_days,1,0)\n| where is_stale=1\n| table runbook_id owner last_reviewed age_days cadence_days is_stale",
              "m": "(1) Maintain ir_runbook_register.csv with owner + cadence_days; (2) schedule UC daily; (3) stale runbooks open a recertification task to the owner with 14d SLA; (4) monthly IR committee review.",
              "z": "Table of stale runbooks by owner, single value 'stale runbooks', bar chart of age by runbook family.",
              "kfp": "Runbooks under active revision have an incomplete last_reviewed — exclude where state=in_review.",
              "refs": "[ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk SOAR.\n• Ensure the following data sources are available: `index=soar` sourcetype=phantom:playbook OR sourcetype=runbook:metadata; runbook register lookup `ir_runbook_register.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain ir_runbook_register.csv with owner + cadence_days; (2) schedule UC daily; (3) stale runbooks open a recertification task to the owner with 14d SLA; (4) monthly IR committee review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup ir_runbook_register.csv\n| eval last_reviewed=strptime(last_reviewed,\"%Y-%m-%d\")\n| eval age_days=round((now()-last_reviewed)/86400,0)\n| eval is_stale=if(age_days>cadence_days,1,0)\n| where is_stale=1\n| table runbook_id owner last_reviewed age_days cadence_days is_stale\n```\n\nUnderstanding this SPL\n\n**ISO/IEC 27001:2022 Annex A.5.24 — Incident-management planning: runbook currency attestation** — Satisfies the A.5.24 'planning and preparation' sub-requirement: incident procedures are continuously refreshed, not one-time written.\n\nDocumented **Data sources**: `index=soar` sourcetype=phantom:playbook OR sourcetype=runbook:metadata; runbook register lookup `ir_runbook_register.csv`. **App/TA** (typical add-on context): Splunk SOAR. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **last_reviewed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_stale** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_stale=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ISO/IEC 27001:2022 Annex A.5.24 — Incident-management planning: runbook currency attestation**): table runbook_id owner last_reviewed age_days cadence_days is_stale\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of stale runbooks by owner, single value 'stale runbooks', bar chart of age by runbook family.",
              "script": "",
              "premium": "Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for ISO/IEC 27001:2022 Annex A.5.24 — Incident-management planning: runbook currency attestation. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "NIST 800-53"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that DORA Art.17 (ICT-related incident management process) is enforced — Splunk UC-22.6.51: ISO/IEC 27001:2022 Annex A.5.24 — Incident-management planning: runbook currency attestation.",
                  "ea": "Saved search 'UC-22.6.51' running on index=soar sourcetype=phantom:playbook OR sourcetype=runbook:metadata; runbook register lookup ir_runbook_register.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.24",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.5.24 (Incident management planning) is enforced — Splunk UC-22.6.51: ISO/IEC 27001:2022 Annex A.5.24 — Incident-management planning: runbook currency attestation.",
                  "ea": "Saved search 'UC-22.6.51' running on index=soar sourcetype=phantom:playbook OR sourcetype=runbook:metadata; runbook register lookup ir_runbook_register.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IR-4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IR-4 (Incident handling) is enforced — Splunk UC-22.6.51: ISO/IEC 27001:2022 Annex A.5.24 — Incident-management planning: runbook currency attestation.",
                  "ea": "Saved search 'UC-22.6.51' running on index=soar sourcetype=phantom:playbook OR sourcetype=runbook:metadata; runbook register lookup ir_runbook_register.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.52",
              "n": "ISO/IEC 27001:2022 Annex A.5.25 — Event classification decisions: SIEM-to-incident triage traceability",
              "c": "high",
              "f": "intermediate",
              "v": "Gives the Audit Committee a single number: 'classification SLA adherence %'. Analysts know their backlog, auditors see the proof.",
              "t": "Splunk Enterprise Security, Splunk SOAR",
              "d": "`index=notable` (ES notable events); `index=soar` sourcetype=phantom:incident (classification decisions).",
              "q": "search `notable` severity IN (\"high\",\"critical\") earliest=-7d\n| rename event_id AS alert_id\n| join type=outer alert_id\n  [search index=soar sourcetype=phantom:incident earliest=-7d\n   | stats latest(_time) AS classified_at latest(decision) AS decision BY alert_id]\n| eval decision_sla_met=if(classified_at<=_time+3600,1,0)\n| where isnull(decision) OR decision_sla_met=0\n| table alert_id severity _time classified_at decision decision_sla_met",
              "m": "(1) Ensure SOAR writes classification decisions with alert_id tag; (2) schedule UC every 10m; (3) SLA miss escalates to shift lead; (4) daily SOC report of adherence %; (5) monthly to Audit Committee.",
              "z": "Funnel from total alerts → classified → on-time, line chart of adherence %, single value 'open unclassified alerts'.",
              "kfp": "Alerts suppressed by tuning rules do not need classification decisions — filter out suppression_reason IS NOT NULL.",
              "refs": "[ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5), [MITRE ATT&CK — T1070](https://attack.mitre.org/techniques/T1070/)",
              "mitre": [
                "T1070"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security, Splunk SOAR.\n• Ensure the following data sources are available: `index=notable` (ES notable events); `index=soar` sourcetype=phantom:incident (classification decisions)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure SOAR writes classification decisions with alert_id tag; (2) schedule UC every 10m; (3) SLA miss escalates to shift lead; (4) daily SOC report of adherence %; (5) monthly to Audit Committee.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsearch `notable` severity IN (\"high\",\"critical\") earliest=-7d\n| rename event_id AS alert_id\n| join type=outer alert_id\n  [search index=soar sourcetype=phantom:incident earliest=-7d\n   | stats latest(_time) AS classified_at latest(decision) AS decision BY alert_id]\n| eval decision_sla_met=if(classified_at<=_time+3600,1,0)\n| where isnull(decision) OR decision_sla_met=0\n| table alert_id severity _time classified_at decision decision_sla_met\n```\n\nUnderstanding this SPL\n\n**ISO/IEC 27001:2022 Annex A.5.25 — Event classification decisions: SIEM-to-incident triage traceability** — Gives the Audit Committee a single number: 'classification SLA adherence %'. Analysts know their backlog, auditors see the proof.\n\nDocumented **Data sources**: `index=notable` (ES notable events); `index=soar` sourcetype=phantom:incident (classification decisions). **App/TA** (typical add-on context): Splunk Enterprise Security, Splunk SOAR. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Applies an explicit `search` filter to narrow the current result set.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **decision_sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnull(decision) OR decision_sla_met=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ISO/IEC 27001:2022 Annex A.5.25 — Event classification decisions: SIEM-to-incident triage traceability**): table alert_id severity _time classified_at decision decision_sla_met\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Funnel from total alerts → classified → on-time, line chart of adherence %, single value 'open unclassified alerts'.",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for ISO/IEC 27001:2022 Annex A.5.25 — Event classification decisions: SIEM-to-incident triage traceability. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "NIST 800-53",
                "SOC-2"
              ],
              "a": [
                "Alerts",
                "Ticket_Management"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that DORA Art.17 (ICT-related incident management process) is enforced — Splunk UC-22.6.52: ISO/IEC 27001:2022 Annex A.5.25 — Event classification decisions: SIEM-to-incident triage traceability.",
                  "ea": "Saved search 'UC-22.6.52' running on index=notable (ES notable events); index=soar sourcetype=phantom:incident (classification decisions)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.25",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.5.25 (Assessment and decision on events) is enforced — Splunk UC-22.6.52: ISO/IEC 27001:2022 Annex A.5.25 — Event classification decisions: SIEM-to-incident triage traceability.",
                  "ea": "Saved search 'UC-22.6.52' running on index=notable (ES notable events); index=soar sourcetype=phantom:incident (classification decisions)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IR-4",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 IR-4 (Incident handling) is enforced — Splunk UC-22.6.52: ISO/IEC 27001:2022 Annex A.5.25 — Event classification decisions: SIEM-to-incident triage traceability.",
                  "ea": "Saved search 'UC-22.6.52' running on index=notable (ES notable events); index=soar sourcetype=phantom:incident (classification decisions)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC7.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC7.3 (Evaluated events and incidents) is enforced — Splunk UC-22.6.52: ISO/IEC 27001:2022 Annex A.5.25 — Event classification decisions: SIEM-to-incident triage traceability.",
                  "ea": "Saved search 'UC-22.6.52' running on index=notable (ES notable events); index=soar sourcetype=phantom:incident (classification decisions)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.53",
              "n": "ISO/IEC 27001:2022 Clause 7.2 — Competence evidence: role-based training completion",
              "c": "medium",
              "f": "beginner",
              "v": "Replaces the annual LMS export with a daily compliance signal. HR can notify managers proactively rather than after the audit finding.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=hr` sourcetype=lms:completion; `index=hr` sourcetype=hr:role_assignment; required-module matrix `iso_27001_training_matrix.csv`.",
              "q": "index=hr sourcetype=hr:role_assignment earliest=-30d\n| stats latest(role) AS role BY user\n| lookup iso_27001_training_matrix.csv role OUTPUT required_modules\n| eval required_modules=split(required_modules,\",\")\n| join user\n  [search index=hr sourcetype=lms:completion status=completed earliest=-365d\n   | stats values(module) AS completed_modules BY user]\n| eval covered=mvmap(required_modules, if(isnotnull(mvfind(completed_modules,x)),1,0))\n| eval compliance_pct=round(100*sum(covered)/mvcount(required_modules),0)\n| where compliance_pct<100\n| table user role required_modules completed_modules compliance_pct",
              "m": "(1) Onboard LMS completions; (2) maintain iso_27001_training_matrix.csv per role; (3) schedule UC daily; (4) compliance_pct<100 triggers an HR email; (5) quarterly report to ISMS committee.",
              "z": "Stacked bar by role, table of sub-100% users, single value 'users below 100%'.",
              "kfp": "New hires in their grace window (30d) appear as non-compliant — exclude by hire_date.",
              "refs": "[ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [HIPAA Security Rule 2013-final](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=hr` sourcetype=lms:completion; `index=hr` sourcetype=hr:role_assignment; required-module matrix `iso_27001_training_matrix.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Onboard LMS completions; (2) maintain iso_27001_training_matrix.csv per role; (3) schedule UC daily; (4) compliance_pct<100 triggers an HR email; (5) quarterly report to ISMS committee.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=hr:role_assignment earliest=-30d\n| stats latest(role) AS role BY user\n| lookup iso_27001_training_matrix.csv role OUTPUT required_modules\n| eval required_modules=split(required_modules,\",\")\n| join user\n  [search index=hr sourcetype=lms:completion status=completed earliest=-365d\n   | stats values(module) AS completed_modules BY user]\n| eval covered=mvmap(required_modules, if(isnotnull(mvfind(completed_modules,x)),1,0))\n| eval compliance_pct=round(100*sum(covered)/mvcount(required_modules),0)\n| where compliance_pct<100\n| table user role required_modules completed_modules compliance_pct\n```\n\nUnderstanding this SPL\n\n**ISO/IEC 27001:2022 Clause 7.2 — Competence evidence: role-based training completion** — Replaces the annual LMS export with a daily compliance signal. HR can notify managers proactively rather than after the audit finding.\n\nDocumented **Data sources**: `index=hr` sourcetype=lms:completion; `index=hr` sourcetype=hr:role_assignment; required-module matrix `iso_27001_training_matrix.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: hr:role_assignment. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=hr:role_assignment, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **required_modules** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **covered** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **compliance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where compliance_pct<100` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ISO/IEC 27001:2022 Clause 7.2 — Competence evidence: role-based training completion**): table user role required_modules completed_modules compliance_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar by role, table of sub-100% users, single value 'users below 100%'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for ISO/IEC 27001:2022 Clause 7.2 — Competence evidence: role-based training completion. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "HIPAA Security Rule",
                "ISO/IEC 27001",
                "PCI-DSS"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security Rule",
                  "v": "2013-final",
                  "cl": "§164.308(a)(5)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA Security Rule §164.308(a)(5) (Security awareness and training) is enforced — Splunk UC-22.6.53: ISO/IEC 27001:2022 Clause 7.2 — Competence evidence: role-based training completion.",
                  "ea": "Saved search 'UC-22.6.53' running on sourcetype lms:completion and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "7.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 7.2 (Competence) is enforced — Splunk UC-22.6.53: ISO/IEC 27001:2022 Clause 7.2 — Competence evidence: role-based training completion.",
                  "ea": "Saved search 'UC-22.6.53' running on sourcetype lms:completion and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "12.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI-DSS 12.6 is enforced — Splunk UC-22.6.53: ISO/IEC 27001:2022 Clause 7.2 — Competence evidence: role-based training completion.",
                  "ea": "Saved search 'UC-22.6.53' running on sourcetype lms:completion and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.54",
              "n": "ISO/IEC 27001:2022 Clause 7.5 — Documented information control: policy register approval trail",
              "c": "medium",
              "f": "intermediate",
              "v": "Prevents the common deficiency of policies that 'used to be' approved but whose approval trail has expired — a direct Clause 7.5 non-conformance.",
              "t": "Splunk Enterprise Security",
              "d": "`index=grc` sourcetype=policyhub:policy (change of state); policy cadence lookup `iso_27001_policy_cadence.csv`.",
              "q": "index=grc sourcetype=policyhub:policy action=approved earliest=-400d\n| stats latest(_time) AS approved_at latest(version) AS version latest(approved_by) AS approved_by BY policy_id\n| lookup iso_27001_policy_cadence.csv policy_id OUTPUT cadence_days owner\n| eval days_to_next_review=cadence_days-round((now()-approved_at)/86400,0)\n| where days_to_next_review<=30 OR isnull(approved_at)\n| table policy_id version approved_by approved_at days_to_next_review owner",
              "m": "(1) Route policyhub approval events into index=grc; (2) maintain iso_27001_policy_cadence.csv; (3) schedule UC weekly; (4) days_to_next_review<=30 opens a review task to policy owner; (5) quarterly CISO review.",
              "z": "Table of near-expiry policies, bar chart by owner, single value 'policies expired'.",
              "kfp": "Newly-issued policies within 30 days of issue appear as near-expiry until the first cadence period completes — exclude if age<30d.",
              "refs": "[ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security.\n• Ensure the following data sources are available: `index=grc` sourcetype=policyhub:policy (change of state); policy cadence lookup `iso_27001_policy_cadence.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Route policyhub approval events into index=grc; (2) maintain iso_27001_policy_cadence.csv; (3) schedule UC weekly; (4) days_to_next_review<=30 opens a review task to policy owner; (5) quarterly CISO review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=policyhub:policy action=approved earliest=-400d\n| stats latest(_time) AS approved_at latest(version) AS version latest(approved_by) AS approved_by BY policy_id\n| lookup iso_27001_policy_cadence.csv policy_id OUTPUT cadence_days owner\n| eval days_to_next_review=cadence_days-round((now()-approved_at)/86400,0)\n| where days_to_next_review<=30 OR isnull(approved_at)\n| table policy_id version approved_by approved_at days_to_next_review owner\n```\n\nUnderstanding this SPL\n\n**ISO/IEC 27001:2022 Clause 7.5 — Documented information control: policy register approval trail** — Prevents the common deficiency of policies that 'used to be' approved but whose approval trail has expired — a direct Clause 7.5 non-conformance.\n\nDocumented **Data sources**: `index=grc` sourcetype=policyhub:policy (change of state); policy cadence lookup `iso_27001_policy_cadence.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: policyhub:policy. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=policyhub:policy, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by policy_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **days_to_next_review** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_next_review<=30 OR isnull(approved_at)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ISO/IEC 27001:2022 Clause 7.5 — Documented information control: policy register approval trail**): table policy_id version approved_by approved_at days_to_next_review owner\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table of near-expiry policies, bar chart by owner, single value 'policies expired'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for ISO/IEC 27001:2022 Clause 7.5 — Documented information control: policy register approval trail. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "NIST 800-53",
                "SOC-2"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "7.5",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 7.5 (Documented information) is enforced — Splunk UC-22.6.54: ISO/IEC 27001:2022 Clause 7.5 — Documented information control: policy register approval trail.",
                  "ea": "Saved search 'UC-22.6.54' running on index=grc sourcetype=policyhub:policy (change of state); policy cadence lookup iso_27001_policy_cadence.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "PL-2",
                  "m": "satisfies",
                  "a": "contributing",
                  "co": "Evidence that NIST 800-53 PL-2 is enforced — Splunk UC-22.6.54: ISO/IEC 27001:2022 Clause 7.5 — Documented information control: policy register approval trail.",
                  "ea": "Saved search 'UC-22.6.54' running on index=grc sourcetype=policyhub:policy (change of state); policy cadence lookup iso_27001_policy_cadence.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC1.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC1.4 is enforced — Splunk UC-22.6.54: ISO/IEC 27001:2022 Clause 7.5 — Documented information control: policy register approval trail.",
                  "ea": "Saved search 'UC-22.6.54' running on index=grc sourcetype=policyhub:policy (change of state); policy cadence lookup iso_27001_policy_cadence.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.6.55",
              "n": "ISO/IEC 27001:2022 Clause 8.1 — Operational planning: change advisory board (CAB) approval evidence",
              "c": "medium",
              "f": "intermediate",
              "v": "Turns the 'post-implementation review' into a pre-emptive signal. The Head of IT Operations sees the breach list within the hour.",
              "t": "Splunk Add-on for ServiceNow",
              "d": "`index=change` sourcetype=servicenow:change; approver-role lookup `cab_roles.csv`.",
              "q": "index=change sourcetype=servicenow:change state IN (implemented,closed) earliest=-7d\n| stats latest(type) AS type latest(_time) AS implemented_at latest(approval_status) AS approval_status latest(approver) AS approver latest(approval_date) AS cab_approved_at BY change_id\n| lookup cab_roles.csv approver OUTPUT is_authorised_cab\n| eval approval_met=if(type=\"emergency\",1,if(approval_status=\"approved\" AND is_authorised_cab=1 AND cab_approved_at<=implemented_at,1,0))\n| where approval_met=0\n| table change_id type implemented_at cab_approved_at approver approval_met",
              "m": "(1) Export ServiceNow change events; (2) maintain cab_roles.csv listing authorised approvers; (3) schedule UC hourly; (4) approval_met=0 opens a deficiency record with 7d remediation; (5) monthly Audit Committee report.",
              "z": "Table of unapproved changes, bar chart of breach count by team, single value 'unapproved changes in last 7d'.",
              "kfp": "Emergency changes legitimately bypass CAB; their follow-up retrospective approval must exist separately — this UC excludes type=emergency by design.",
              "refs": "[ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [SOX-ITGC PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow.\n• Ensure the following data sources are available: `index=change` sourcetype=servicenow:change; approver-role lookup `cab_roles.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export ServiceNow change events; (2) maintain cab_roles.csv listing authorised approvers; (3) schedule UC hourly; (4) approval_met=0 opens a deficiency record with 7d remediation; (5) monthly Audit Committee report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=change sourcetype=servicenow:change state IN (implemented,closed) earliest=-7d\n| stats latest(type) AS type latest(_time) AS implemented_at latest(approval_status) AS approval_status latest(approver) AS approver latest(approval_date) AS cab_approved_at BY change_id\n| lookup cab_roles.csv approver OUTPUT is_authorised_cab\n| eval approval_met=if(type=\"emergency\",1,if(approval_status=\"approved\" AND is_authorised_cab=1 AND cab_approved_at<=implemented_at,1,0))\n| where approval_met=0\n| table change_id type implemented_at cab_approved_at approver approval_met\n```\n\nUnderstanding this SPL\n\n**ISO/IEC 27001:2022 Clause 8.1 — Operational planning: change advisory board (CAB) approval evidence** — Turns the 'post-implementation review' into a pre-emptive signal. The Head of IT Operations sees the breach list within the hour.\n\nDocumented **Data sources**: `index=change` sourcetype=servicenow:change; approver-role lookup `cab_roles.csv`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: change; **sourcetype**: servicenow:change. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=change, sourcetype=servicenow:change, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by change_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **approval_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where approval_met=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **ISO/IEC 27001:2022 Clause 8.1 — Operational planning: change advisory board (CAB) approval evidence**): table change_id type implemented_at cab_approved_at approver approval_met\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table of unapproved changes, bar chart of breach count by team, single value 'unapproved changes in last 7d'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for ISO/IEC 27001:2022 Clause 8.1 — Operational planning: change advisory board (CAB) approval evidence. ISO 27001 tells us what good security management looks like, and we document that we follow it.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "ISO/IEC 27001",
                "SOC-2",
                "SOX-ITGC"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "8.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 8.1 (Operational planning) is enforced — Splunk UC-22.6.55: ISO/IEC 27001:2022 Clause 8.1 — Operational planning: change advisory board (CAB) approval evidence.",
                  "ea": "Saved search 'UC-22.6.55' running on index=change sourcetype=servicenow:change; approver-role lookup cab_roles.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC8.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC8.1 (Change management) is enforced — Splunk UC-22.6.55: ISO/IEC 27001:2022 Clause 8.1 — Operational planning: change advisory board (CAB) approval evidence.",
                  "ea": "Saved search 'UC-22.6.55' running on index=change sourcetype=servicenow:change; approver-role lookup cab_roles.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Approval",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX-ITGC ITGC.ChangeMgmt.Approval (Change approved) is enforced — Splunk UC-22.6.55: ISO/IEC 27001:2022 Clause 8.1 — Operational planning: change advisory board (CAB) approval evidence.",
                  "ea": "Saved search 'UC-22.6.55' running on index=change sourcetype=servicenow:change; approver-role lookup cab_roles.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 10,
            "none": 0
          }
        },
        {
          "i": "22.8",
          "n": "— SOC 2 (extended clauses)",
          "u": [
            {
              "i": "22.8.31",
              "n": "SOC 2 CC6.6 — Encryption-in-transit validation: cleartext protocols crossing the trust boundary",
              "c": "high",
              "f": "intermediate",
              "v": "Replaces annual TLS-compliance scans with a continuous signal, so drift (a new container accidentally exposing http) is caught within 30 minutes.",
              "t": "Splunk Stream",
              "d": "`index=stream` sourcetype=stream:tcp OR sourcetype=stream:http OR sourcetype=stream:tls; trust-zone lookup `trust_zones.csv`.",
              "q": "index=stream sourcetype=stream:tls earliest=-30m\n| rename src_ip AS src dest_ip AS dest\n| lookup trust_zones.csv ip=src OUTPUT zone AS src_zone\n| lookup trust_zones.csv ip=dest OUTPUT zone AS dest_zone\n| where src_zone!=dest_zone\n| eval is_deprecated=if(tls_version IN (\"SSLv3\",\"TLS1.0\",\"TLS1.1\"),1,0)\n| where is_deprecated=1 OR protocol IN (\"http\",\"ftp\",\"telnet\",\"smtp\")\n| stats count BY src dest dest_port protocol tls_version is_deprecated",
              "m": "(1) Onboard Splunk Stream TLS data; (2) maintain trust_zones.csv; (3) schedule UC every 30m; (4) each hit opens a Platform ticket with src/dest/dest_port; (5) SLA: remediate within 24h.",
              "z": "Sankey of src_zone → dest_zone by protocol, table of deprecated sessions, single value 'deprecated TLS sessions right now'.",
              "kfp": "Explicit plaintext management LANs (backup target, ILO) are allowlisted in trust_zones.csv via category='management-cleartext-allowed'.",
              "refs": "[SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [HIPAA Security Rule 2013-final](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [MITRE ATT&CK — T1071](https://attack.mitre.org/techniques/T1071/)",
              "mitre": [
                "T1071"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Stream.\n• Ensure the following data sources are available: `index=stream` sourcetype=stream:tcp OR sourcetype=stream:http OR sourcetype=stream:tls; trust-zone lookup `trust_zones.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Onboard Splunk Stream TLS data; (2) maintain trust_zones.csv; (3) schedule UC every 30m; (4) each hit opens a Platform ticket with src/dest/dest_port; (5) SLA: remediate within 24h.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=stream sourcetype=stream:tls earliest=-30m\n| rename src_ip AS src dest_ip AS dest\n| lookup trust_zones.csv ip=src OUTPUT zone AS src_zone\n| lookup trust_zones.csv ip=dest OUTPUT zone AS dest_zone\n| where src_zone!=dest_zone\n| eval is_deprecated=if(tls_version IN (\"SSLv3\",\"TLS1.0\",\"TLS1.1\"),1,0)\n| where is_deprecated=1 OR protocol IN (\"http\",\"ftp\",\"telnet\",\"smtp\")\n| stats count BY src dest dest_port protocol tls_version is_deprecated\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC6.6 — Encryption-in-transit validation: cleartext protocols crossing the trust boundary** — Replaces annual TLS-compliance scans with a continuous signal, so drift (a new container accidentally exposing http) is caught within 30 minutes.\n\nDocumented **Data sources**: `index=stream` sourcetype=stream:tcp OR sourcetype=stream:http OR sourcetype=stream:tls; trust-zone lookup `trust_zones.csv`. **App/TA** (typical add-on context): Splunk Stream. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: stream; **sourcetype**: stream:tls. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=stream, sourcetype=stream:tls, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where src_zone!=dest_zone` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **is_deprecated** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_deprecated=1 OR protocol IN (\"http\",\"ftp\",\"telnet\",\"smtp\")` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src dest dest_port protocol tls_version is_deprecated** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Sankey of src_zone → dest_zone by protocol, table of deprecated sessions, single value 'deprecated TLS sessions right now'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC6.6 — Encryption-in-transit validation: cleartext protocols crossing the trust boundary. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA Security Rule",
                "ISO/IEC 27001",
                "PCI-DSS",
                "SOC-2"
              ],
              "a": [
                "Network_Traffic",
                "Certificates"
              ],
              "e": [
                "stream"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security Rule",
                  "v": "2013-final",
                  "cl": "§164.312(e)(1)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA Security Rule §164.312(e)(1) (Transmission security) is enforced — Splunk UC-22.8.31: SOC 2 CC6.6 — Encryption-in-transit validation: cleartext protocols crossing the trust boundary.",
                  "ea": "Saved search 'UC-22.8.31' running on sourcetype stream:tcp and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.24",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.24 is enforced — Splunk UC-22.8.31: SOC 2 CC6.6 — Encryption-in-transit validation: cleartext protocols crossing the trust boundary.",
                  "ea": "Saved search 'UC-22.8.31' running on sourcetype stream:tcp and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "4.2.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 4.2.1 is enforced — Splunk UC-22.8.31: SOC 2 CC6.6 — Encryption-in-transit validation: cleartext protocols crossing the trust boundary.",
                  "ea": "Saved search 'UC-22.8.31' running on sourcetype stream:tcp and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC6.6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC6.6 (Encryption in transit) is enforced — Splunk UC-22.8.31: SOC 2 CC6.6 — Encryption-in-transit validation: cleartext protocols crossing the trust boundary.",
                  "ea": "Saved search 'UC-22.8.31' running on sourcetype stream:tcp and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.32",
              "n": "SOC 2 CC6.7 — System boundary & data-transmission control: unapproved egress destinations",
              "c": "high",
              "f": "intermediate",
              "v": "Provides the evidence SOC 2 auditors want: proof that outbound flows are governed and deviations are captured. Supports reduction of shadow SaaS risk.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=netfw` sourcetype=pan:traffic; ASN/country enrichment via `asn_country.csv`; approved-egress registry `approved_egress.csv`.",
              "q": "index=netfw sourcetype=pan:traffic action=allow direction=outbound earliest=-15m\n| rename src AS src_ip dest AS dest_ip\n| lookup service_by_src.csv src_ip OUTPUT src_service\n| lookup asn_country.csv ip=dest_ip OUTPUT asn AS dest_asn country AS dest_country\n| lookup approved_egress.csv src_service dest_asn OUTPUT approved\n| where isnull(approved) OR approved!=\"yes\"\n| stats count sum(bytes_out) AS bytes BY src_service dest_ip dest_asn dest_country",
              "m": "(1) Enrich Palo Alto traffic with service/ASN/country; (2) maintain approved_egress.csv; (3) schedule UC every 15m; (4) hit opens a ticket to service owner; (5) 48h SLA; (6) weekly review by Platform team.",
              "z": "Geo map of unapproved destinations, bar chart by service, single value 'unapproved egress flows (1h)'.",
              "kfp": "Public CDNs and package mirrors that use anycast may produce transient country churn — allowlist by ASN in approved_egress.csv.",
              "refs": "[SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [MITRE ATT&CK — T1041](https://attack.mitre.org/techniques/T1041/)",
              "mitre": [
                "T1041"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=netfw` sourcetype=pan:traffic; ASN/country enrichment via `asn_country.csv`; approved-egress registry `approved_egress.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enrich Palo Alto traffic with service/ASN/country; (2) maintain approved_egress.csv; (3) schedule UC every 15m; (4) hit opens a ticket to service owner; (5) 48h SLA; (6) weekly review by Platform team.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype=pan:traffic action=allow direction=outbound earliest=-15m\n| rename src AS src_ip dest AS dest_ip\n| lookup service_by_src.csv src_ip OUTPUT src_service\n| lookup asn_country.csv ip=dest_ip OUTPUT asn AS dest_asn country AS dest_country\n| lookup approved_egress.csv src_service dest_asn OUTPUT approved\n| where isnull(approved) OR approved!=\"yes\"\n| stats count sum(bytes_out) AS bytes BY src_service dest_ip dest_asn dest_country\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC6.7 — System boundary & data-transmission control: unapproved egress destinations** — Provides the evidence SOC 2 auditors want: proof that outbound flows are governed and deviations are captured. Supports reduction of shadow SaaS risk.\n\nDocumented **Data sources**: `index=netfw` sourcetype=pan:traffic; ASN/country enrichment via `asn_country.csv`; approved-egress registry `approved_egress.csv`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw; **sourcetype**: pan:traffic. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, sourcetype=pan:traffic, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where isnull(approved) OR approved!=\"yes\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src_service dest_ip dest_asn dest_country** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Geo map of unapproved destinations, bar chart by service, single value 'unapproved egress flows (1h)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC6.7 — System boundary & data-transmission control: unapproved egress destinations. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "PCI-DSS",
                "SOC-2"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "paloalto"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.7 (ICT systems, protocols and tools) is enforced — Splunk UC-22.8.32: SOC 2 CC6.7 — System boundary & data-transmission control: unapproved egress destinations.",
                  "ea": "Saved search 'UC-22.8.32' running on sourcetype pan:traffic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.23",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 A.8.23 (Web filtering (2022 new)) is enforced — Splunk UC-22.8.32: SOC 2 CC6.7 — System boundary & data-transmission control: unapproved egress destinations.",
                  "ea": "Saved search 'UC-22.8.32' running on sourcetype pan:traffic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "1.3.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI-DSS 1.3.1 is enforced — Splunk UC-22.8.32: SOC 2 CC6.7 — System boundary & data-transmission control: unapproved egress destinations.",
                  "ea": "Saved search 'UC-22.8.32' running on sourcetype pan:traffic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC6.7",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC6.7 (System boundaries and data transmission) is enforced — Splunk UC-22.8.32: SOC 2 CC6.7 — System boundary & data-transmission control: unapproved egress destinations.",
                  "ea": "Saved search 'UC-22.8.32' running on sourcetype pan:traffic and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.33",
              "n": "SOC 2 CC7.1 — System-operations monitoring: uptime attestation and alert-noise governance",
              "c": "high",
              "f": "intermediate",
              "v": "Lets the Head of IT Operations present the Audit Committee with coverage % (target 100% for production) and alert-noise % (target <5/day/CI). The same query replaces three spreadsheets.",
              "t": "Splunk IT Service Intelligence",
              "d": "`index=monitoring` sourcetype=heartbeat:agent; `index=alerts` sourcetype=opsgenie:alert; CMDB lookup `cmdb_ci_list.csv`.",
              "q": "| inputlookup cmdb_ci_list.csv\n| join type=outer ci_id\n  [search index=monitoring sourcetype=heartbeat:agent earliest=-1d\n   | stats latest(_time) AS heartbeat_last_seen BY ci_id]\n| join type=outer ci_id\n  [search index=alerts sourcetype=opsgenie:alert earliest=-1d\n   | stats count AS alerts_24h BY ci_id]\n| eval coverage_status=case(isnull(heartbeat_last_seen),\"no-heartbeat\",heartbeat_last_seen<now()-1800,\"stale\",true(),\"healthy\")\n| table ci_id heartbeat_last_seen alerts_24h coverage_status",
              "m": "(1) Nightly CMDB sync; (2) heartbeat agent installed on every CI; (3) schedule UC daily; (4) no-heartbeat CIs open a task to IT Ops; (5) quarterly Audit Committee report.",
              "z": "Donut of coverage_status, top-10 noisiest CIs, single value 'coverage %'.",
              "kfp": "Decommissioned CIs legitimately stop heartbeats — mark retired=true in CMDB lookup.",
              "refs": "[SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5), [DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk IT Service Intelligence.\n• Ensure the following data sources are available: `index=monitoring` sourcetype=heartbeat:agent; `index=alerts` sourcetype=opsgenie:alert; CMDB lookup `cmdb_ci_list.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Nightly CMDB sync; (2) heartbeat agent installed on every CI; (3) schedule UC daily; (4) no-heartbeat CIs open a task to IT Ops; (5) quarterly Audit Committee report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup cmdb_ci_list.csv\n| join type=outer ci_id\n  [search index=monitoring sourcetype=heartbeat:agent earliest=-1d\n   | stats latest(_time) AS heartbeat_last_seen BY ci_id]\n| join type=outer ci_id\n  [search index=alerts sourcetype=opsgenie:alert earliest=-1d\n   | stats count AS alerts_24h BY ci_id]\n| eval coverage_status=case(isnull(heartbeat_last_seen),\"no-heartbeat\",heartbeat_last_seen<now()-1800,\"stale\",true(),\"healthy\")\n| table ci_id heartbeat_last_seen alerts_24h coverage_status\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC7.1 — System-operations monitoring: uptime attestation and alert-noise governance** — Lets the Head of IT Operations present the Audit Committee with coverage % (target 100% for production) and alert-noise % (target <5/day/CI). The same query replaces three spreadsheets.\n\nDocumented **Data sources**: `index=monitoring` sourcetype=heartbeat:agent; `index=alerts` sourcetype=opsgenie:alert; CMDB lookup `cmdb_ci_list.csv`. **App/TA** (typical add-on context): Splunk IT Service Intelligence. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **coverage_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **SOC 2 CC7.1 — System-operations monitoring: uptime attestation and alert-noise governance**): table ci_id heartbeat_last_seen alerts_24h coverage_status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Donut of coverage_status, top-10 noisiest CIs, single value 'coverage %'.",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC7.1 — System-operations monitoring: uptime attestation and alert-noise governance. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Availability"
              ],
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "NIST 800-53",
                "SOC-2"
              ],
              "a": [
                "Performance",
                "Alerts"
              ],
              "e": [
                "pagerduty"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.10",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.10 (Detection) is enforced — Splunk UC-22.8.33: SOC 2 CC7.1 — System-operations monitoring: uptime attestation and alert-noise governance.",
                  "ea": "Saved search 'UC-22.8.33' running on index=monitoring sourcetype=heartbeat:agent; index=alerts sourcetype=opsgenie:alert; CMDB lookup cmdb_ci_list.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "8.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 8.1 (Operational planning) is enforced — Splunk UC-22.8.33: SOC 2 CC7.1 — System-operations monitoring: uptime attestation and alert-noise governance.",
                  "ea": "Saved search 'UC-22.8.33' running on index=monitoring sourcetype=heartbeat:agent; index=alerts sourcetype=opsgenie:alert; CMDB lookup cmdb_ci_list.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SI-4",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 SI-4 (System monitoring) is enforced — Splunk UC-22.8.33: SOC 2 CC7.1 — System-operations monitoring: uptime attestation and alert-noise governance.",
                  "ea": "Saved search 'UC-22.8.33' running on index=monitoring sourcetype=heartbeat:agent; index=alerts sourcetype=opsgenie:alert; CMDB lookup cmdb_ci_list.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC7.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC7.1 (System operations monitoring) is enforced — Splunk UC-22.8.33: SOC 2 CC7.1 — System-operations monitoring: uptime attestation and alert-noise governance.",
                  "ea": "Saved search 'UC-22.8.33' running on index=monitoring sourcetype=heartbeat:agent; index=alerts sourcetype=opsgenie:alert; CMDB lookup cmdb_ci_list.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.34",
              "n": "SOC 2 CC7.3 — Evaluated events: threshold breaches without documented rationale",
              "c": "medium",
              "f": "intermediate",
              "v": "Gives leadership proof that events are not only closed but justifiably closed. Auditors consider an 'auto-closed without rationale' event a material weakness.",
              "t": "Splunk Enterprise Security, Splunk SOAR",
              "d": "`index=notable` (ES notable); `index=soar` sourcetype=phantom:incident.",
              "q": "search `notable` severity IN (\"medium\",\"high\",\"critical\") earliest=-7d\n| rename event_id AS alert_id\n| join type=outer alert_id\n  [search index=soar sourcetype=phantom:incident state=closed earliest=-7d\n   | stats latest(closed_at) AS closed_at latest(analyst) AS analyst latest(rationale) AS rationale BY alert_id]\n| eval rationale_present=if(len(rationale)>20,1,0)\n| where isnotnull(closed_at) AND rationale_present=0\n| table alert_id severity closed_at analyst rationale_present",
              "m": "(1) Enforce rationale field on SOAR close action; (2) schedule UC hourly; (3) rationale_present=0 reopens the alert for review; (4) weekly SOC report.",
              "z": "Stacked bar of alerts by analyst × rationale_present, single value 'closed-without-rationale'.",
              "kfp": "Alerts closed by automation rules must use rationale='auto-closed:<rule_id>' rather than blank — reject blank in SOAR.",
              "refs": "[SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security, Splunk SOAR.\n• Ensure the following data sources are available: `index=notable` (ES notable); `index=soar` sourcetype=phantom:incident..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enforce rationale field on SOAR close action; (2) schedule UC hourly; (3) rationale_present=0 reopens the alert for review; (4) weekly SOC report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsearch `notable` severity IN (\"medium\",\"high\",\"critical\") earliest=-7d\n| rename event_id AS alert_id\n| join type=outer alert_id\n  [search index=soar sourcetype=phantom:incident state=closed earliest=-7d\n   | stats latest(closed_at) AS closed_at latest(analyst) AS analyst latest(rationale) AS rationale BY alert_id]\n| eval rationale_present=if(len(rationale)>20,1,0)\n| where isnotnull(closed_at) AND rationale_present=0\n| table alert_id severity closed_at analyst rationale_present\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC7.3 — Evaluated events: threshold breaches without documented rationale** — Gives leadership proof that events are not only closed but justifiably closed. Auditors consider an 'auto-closed without rationale' event a material weakness.\n\nDocumented **Data sources**: `index=notable` (ES notable); `index=soar` sourcetype=phantom:incident. **App/TA** (typical add-on context): Splunk Enterprise Security, Splunk SOAR. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Applies an explicit `search` filter to narrow the current result set.\n• Renames fields with `rename` for clarity or joins.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **rationale_present** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(closed_at) AND rationale_present=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 CC7.3 — Evaluated events: threshold breaches without documented rationale**): table alert_id severity closed_at analyst rationale_present\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar of alerts by analyst × rationale_present, single value 'closed-without-rationale'.",
              "script": "",
              "premium": "Splunk Enterprise Security, Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC7.3 — Evaluated events: threshold breaches without documented rationale. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "SOC-2"
              ],
              "a": [
                "Alerts"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that DORA Art.17 (ICT-related incident management process) is enforced — Splunk UC-22.8.34: SOC 2 CC7.3 — Evaluated events: threshold breaches without documented rationale.",
                  "ea": "Saved search 'UC-22.8.34' running on index=notable (ES notable); index=soar sourcetype=phantom:incident., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.25",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 A.5.25 (Assessment and decision on events) is enforced — Splunk UC-22.8.34: SOC 2 CC7.3 — Evaluated events: threshold breaches without documented rationale.",
                  "ea": "Saved search 'UC-22.8.34' running on index=notable (ES notable); index=soar sourcetype=phantom:incident., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC7.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC7.3 (Evaluated events and incidents) is enforced — Splunk UC-22.8.34: SOC 2 CC7.3 — Evaluated events: threshold breaches without documented rationale.",
                  "ea": "Saved search 'UC-22.8.34' running on index=notable (ES notable); index=soar sourcetype=phantom:incident., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.35",
              "n": "SOC 2 CC7.4 — Incident response: post-incident review completion SLA",
              "c": "high",
              "f": "intermediate",
              "v": "Surfaces PIR backlog before it becomes an audit finding. Board sees a rolling '% of PIRs on time' metric.",
              "t": "Splunk SOAR",
              "d": "`index=soar` sourcetype=phantom:incident; `index=ticketing` sourcetype=jira:issue (PIR project).",
              "q": "index=soar sourcetype=phantom:incident state=closed earliest=-30d\n| stats latest(closed_at) AS closed_at BY incident_id\n| join type=outer incident_id\n  [search index=ticketing sourcetype=jira:issue project=PIR earliest=-60d\n   | stats latest(_time) AS pir_completed_at BY incident_id]\n| eval pir_sla_met=if(isnotnull(pir_completed_at) AND pir_completed_at<=closed_at+604800,1,0)\n| where pir_sla_met=0\n| table incident_id closed_at pir_completed_at pir_sla_met",
              "m": "(1) Ensure PIR Jira project is populated; (2) schedule UC daily; (3) pir_sla_met=0 opens a reminder ticket; (4) weekly SOC report; (5) quarterly Board report.",
              "z": "Waterfall of incidents → PIR completed → on time, single value 'open incidents overdue PIR'.",
              "kfp": "Incidents reopened and later re-closed legitimately extend the window — use latest(closed_at).",
              "refs": "[SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk SOAR.\n• Ensure the following data sources are available: `index=soar` sourcetype=phantom:incident; `index=ticketing` sourcetype=jira:issue (PIR project)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure PIR Jira project is populated; (2) schedule UC daily; (3) pir_sla_met=0 opens a reminder ticket; (4) weekly SOC report; (5) quarterly Board report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=soar sourcetype=phantom:incident state=closed earliest=-30d\n| stats latest(closed_at) AS closed_at BY incident_id\n| join type=outer incident_id\n  [search index=ticketing sourcetype=jira:issue project=PIR earliest=-60d\n   | stats latest(_time) AS pir_completed_at BY incident_id]\n| eval pir_sla_met=if(isnotnull(pir_completed_at) AND pir_completed_at<=closed_at+604800,1,0)\n| where pir_sla_met=0\n| table incident_id closed_at pir_completed_at pir_sla_met\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC7.4 — Incident response: post-incident review completion SLA** — Surfaces PIR backlog before it becomes an audit finding. Board sees a rolling '% of PIRs on time' metric.\n\nDocumented **Data sources**: `index=soar` sourcetype=phantom:incident; `index=ticketing` sourcetype=jira:issue (PIR project). **App/TA** (typical add-on context): Splunk SOAR. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: soar; **sourcetype**: phantom:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=soar, sourcetype=phantom:incident, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by incident_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **pir_sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pir_sla_met=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 CC7.4 — Incident response: post-incident review completion SLA**): table incident_id closed_at pir_completed_at pir_sla_met\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Waterfall of incidents → PIR completed → on time, single value 'open incidents overdue PIR'.",
              "script": "",
              "premium": "Splunk SOAR",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC7.4 — Incident response: post-incident review completion SLA. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "SOC-2"
              ],
              "a": [
                "Ticket_Management"
              ],
              "e": [
                "jira"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.17",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that DORA Art.17 (ICT-related incident management process) is enforced — Splunk UC-22.8.35: SOC 2 CC7.4 — Incident response: post-incident review completion SLA.",
                  "ea": "Saved search 'UC-22.8.35' running on index=soar sourcetype=phantom:incident; index=ticketing sourcetype=jira:issue (PIR project)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.26",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.5.26 is enforced — Splunk UC-22.8.35: SOC 2 CC7.4 — Incident response: post-incident review completion SLA.",
                  "ea": "Saved search 'UC-22.8.35' running on index=soar sourcetype=phantom:incident; index=ticketing sourcetype=jira:issue (PIR project)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC7.4",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC7.4 (Incident response) is enforced — Splunk UC-22.8.35: SOC 2 CC7.4 — Incident response: post-incident review completion SLA.",
                  "ea": "Saved search 'UC-22.8.35' running on index=soar sourcetype=phantom:incident; index=ticketing sourcetype=jira:issue (PIR project)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.36",
              "n": "SOC 2 CC1.1 — Integrity and ethical values: code-of-conduct acknowledgement trail",
              "c": "medium",
              "f": "beginner",
              "v": "Removes the audit finding 'CoC acknowledgement trail is incomplete' by making HR's coverage measurable daily.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=hr` sourcetype=hr:employee (active roster); `index=hr` sourcetype=lms:acknowledgement; policy-version lookup `coc_versions.csv`.",
              "q": "index=hr sourcetype=hr:employee status=active earliest=-30d\n| stats latest(hire_date) AS hire_date BY user\n| join user type=outer\n  [search index=hr sourcetype=lms:acknowledgement module=code_of_conduct earliest=-2y\n   | stats latest(version) AS latest_ack_version latest(_time) AS latest_ack_date BY user]\n| lookup coc_versions.csv AS latest OUTPUT current_version\n| eval is_compliant=if(latest_ack_version==current_version AND latest_ack_date>now()-31536000,1,0)\n| where is_compliant=0\n| table user hire_date latest_ack_version latest_ack_date is_compliant",
              "m": "(1) Nightly HR roster sync; (2) maintain coc_versions.csv; (3) schedule UC daily; (4) is_compliant=0 triggers an HR reminder email; (5) quarterly compliance report.",
              "z": "Table of non-acknowledging users, bar chart by department, single value 'non-compliant users'.",
              "kfp": "Employees within their 30-day grace window are excluded via hire_date.",
              "refs": "[SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=hr` sourcetype=hr:employee (active roster); `index=hr` sourcetype=lms:acknowledgement; policy-version lookup `coc_versions.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Nightly HR roster sync; (2) maintain coc_versions.csv; (3) schedule UC daily; (4) is_compliant=0 triggers an HR reminder email; (5) quarterly compliance report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=hr:employee status=active earliest=-30d\n| stats latest(hire_date) AS hire_date BY user\n| join user type=outer\n  [search index=hr sourcetype=lms:acknowledgement module=code_of_conduct earliest=-2y\n   | stats latest(version) AS latest_ack_version latest(_time) AS latest_ack_date BY user]\n| lookup coc_versions.csv AS latest OUTPUT current_version\n| eval is_compliant=if(latest_ack_version==current_version AND latest_ack_date>now()-31536000,1,0)\n| where is_compliant=0\n| table user hire_date latest_ack_version latest_ack_date is_compliant\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC1.1 — Integrity and ethical values: code-of-conduct acknowledgement trail** — Removes the audit finding 'CoC acknowledgement trail is incomplete' by making HR's coverage measurable daily.\n\nDocumented **Data sources**: `index=hr` sourcetype=hr:employee (active roster); `index=hr` sourcetype=lms:acknowledgement; policy-version lookup `coc_versions.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: hr:employee. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=hr:employee, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **is_compliant** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where is_compliant=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 CC1.1 — Integrity and ethical values: code-of-conduct acknowledgement trail**): table user hire_date latest_ack_version latest_ack_date is_compliant\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Table of non-acknowledging users, bar chart by department, single value 'non-compliant users'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC1.1 — Integrity and ethical values: code-of-conduct acknowledgement trail. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "ISO/IEC 27001",
                "SOC-2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "6.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 6.3 is enforced — Splunk UC-22.8.36: SOC 2 CC1.1 — Integrity and ethical values: code-of-conduct acknowledgement trail.",
                  "ea": "Saved search 'UC-22.8.36' running on sourcetype hr:employee and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC1.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC1.1 (Integrity and ethical values) is enforced — Splunk UC-22.8.36: SOC 2 CC1.1 — Integrity and ethical values: code-of-conduct acknowledgement trail.",
                  "ea": "Saved search 'UC-22.8.36' running on sourcetype hr:employee and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.37",
              "n": "SOC 2 CC9.1 — Risk-mitigation activity: vendor-risk action closure SLA",
              "c": "medium",
              "f": "intermediate",
              "v": "Replaces the quarterly Procurement PowerPoint with a continuous signal. Top vendors with overdue actions are the first line of an Audit Committee brief.",
              "t": "Splunk Add-on for ServiceNow",
              "d": "`index=grc` sourcetype=servicenow:vendor_risk; vendor register lookup `vendor_register.csv`.",
              "q": "index=grc sourcetype=servicenow:vendor_risk state IN (open,closed) earliest=-120d\n| stats latest(state) AS state latest(severity) AS severity latest(due_at) AS due_at latest(closed_at) AS closed_at BY action_id vendor\n| eval sla_met=case(state=\"closed\" AND closed_at<=due_at,1,state=\"closed\",0,now()>due_at,0,true(),1)\n| where sla_met=0\n| table action_id vendor severity due_at closed_at sla_met",
              "m": "(1) Route vendor_risk events from ServiceNow GRC; (2) maintain vendor_register.csv; (3) schedule UC daily; (4) sla_met=0 escalates per severity matrix; (5) monthly CISO report.",
              "z": "Table of overdue actions by vendor, bar chart of SLA adherence by severity, single value 'overdue vendor-risk actions'.",
              "kfp": "Actions waiting on vendor response can be legitimately late — mark state='awaiting-vendor' and exclude.",
              "refs": "[SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow.\n• Ensure the following data sources are available: `index=grc` sourcetype=servicenow:vendor_risk; vendor register lookup `vendor_register.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Route vendor_risk events from ServiceNow GRC; (2) maintain vendor_register.csv; (3) schedule UC daily; (4) sla_met=0 escalates per severity matrix; (5) monthly CISO report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=servicenow:vendor_risk state IN (open,closed) earliest=-120d\n| stats latest(state) AS state latest(severity) AS severity latest(due_at) AS due_at latest(closed_at) AS closed_at BY action_id vendor\n| eval sla_met=case(state=\"closed\" AND closed_at<=due_at,1,state=\"closed\",0,now()>due_at,0,true(),1)\n| where sla_met=0\n| table action_id vendor severity due_at closed_at sla_met\n```\n\nUnderstanding this SPL\n\n**SOC 2 CC9.1 — Risk-mitigation activity: vendor-risk action closure SLA** — Replaces the quarterly Procurement PowerPoint with a continuous signal. Top vendors with overdue actions are the first line of an Audit Committee brief.\n\nDocumented **Data sources**: `index=grc` sourcetype=servicenow:vendor_risk; vendor register lookup `vendor_register.csv`. **App/TA** (typical add-on context): Splunk Add-on for ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: servicenow:vendor_risk. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=servicenow:vendor_risk, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by action_id vendor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where sla_met=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 CC9.1 — Risk-mitigation activity: vendor-risk action closure SLA**): table action_id vendor severity due_at closed_at sla_met\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of overdue actions by vendor, bar chart of SLA adherence by severity, single value 'overdue vendor-risk actions'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for CC9.1 — Risk-mitigation activity: vendor-risk action closure SLA. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "SOC-2"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.28",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that DORA Art.28 (ICT third-party risk) is enforced — Splunk UC-22.8.37: SOC 2 CC9.1 — Risk-mitigation activity: vendor-risk action closure SLA.",
                  "ea": "Saved search 'UC-22.8.37' running on index=grc sourcetype=servicenow:vendor_risk; vendor register lookup vendor_register.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.19",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 A.5.19 is enforced — Splunk UC-22.8.37: SOC 2 CC9.1 — Risk-mitigation activity: vendor-risk action closure SLA.",
                  "ea": "Saved search 'UC-22.8.37' running on index=grc sourcetype=servicenow:vendor_risk; vendor register lookup vendor_register.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC9.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC9.1 (Risk mitigation activities) is enforced — Splunk UC-22.8.37: SOC 2 CC9.1 — Risk-mitigation activity: vendor-risk action closure SLA.",
                  "ea": "Saved search 'UC-22.8.37' running on index=grc sourcetype=servicenow:vendor_risk; vendor register lookup vendor_register.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.38",
              "n": "SOC 2 C1.1 — Confidentiality: sensitive-data exposure at the egress boundary",
              "c": "high",
              "f": "advanced",
              "v": "C1.1 confidentiality assurance hinges on continuous detection of data leaving the boundary. This UC gives the DPO a queryable control output.",
              "t": "Splunk Add-on for Microsoft Purview, Splunk Add-on for Symantec DLP",
              "d": "`index=dlp` sourcetype=symantec:dlp OR sourcetype=microsoft:purview:dlp; approved-destination lookup `approved_egress.csv`.",
              "q": "index=dlp sourcetype IN (symantec:dlp,microsoft:purview:dlp) severity IN (\"high\",\"critical\") earliest=-30m\n| rename destination_ip AS dest policy_name AS dlp_policy\n| lookup approved_egress.csv dest OUTPUT approved\n| where approved!=\"yes\" OR isnull(approved)\n| stats count BY dlp_policy severity user dest approved",
              "m": "(1) Onboard DLP; (2) maintain approved_egress.csv; (3) schedule UC every 30m; (4) hit opens a P1 to DPO + user's manager; (5) weekly DPO report.",
              "z": "Table of unapproved egress detections, bar chart by policy, single value 'new unapproved egress (1h)'.",
              "kfp": "Legitimate cross-tenant file shares with approved partners can produce detections — allowlist by partner in approved_egress.csv.",
              "refs": "[SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [HIPAA Security Rule 2013-final](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164), [PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [MITRE ATT&CK — T1048](https://attack.mitre.org/techniques/T1048/)",
              "mitre": [
                "T1048"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Purview, Splunk Add-on for Symantec DLP.\n• Ensure the following data sources are available: `index=dlp` sourcetype=symantec:dlp OR sourcetype=microsoft:purview:dlp; approved-destination lookup `approved_egress.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Onboard DLP; (2) maintain approved_egress.csv; (3) schedule UC every 30m; (4) hit opens a P1 to DPO + user's manager; (5) weekly DPO report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=dlp sourcetype IN (symantec:dlp,microsoft:purview:dlp) severity IN (\"high\",\"critical\") earliest=-30m\n| rename destination_ip AS dest policy_name AS dlp_policy\n| lookup approved_egress.csv dest OUTPUT approved\n| where approved!=\"yes\" OR isnull(approved)\n| stats count BY dlp_policy severity user dest approved\n```\n\nUnderstanding this SPL\n\n**SOC 2 C1.1 — Confidentiality: sensitive-data exposure at the egress boundary** — C1.1 confidentiality assurance hinges on continuous detection of data leaving the boundary. This UC gives the DPO a queryable control output.\n\nDocumented **Data sources**: `index=dlp` sourcetype=symantec:dlp OR sourcetype=microsoft:purview:dlp; approved-destination lookup `approved_egress.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Purview, Splunk Add-on for Symantec DLP. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: dlp.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=dlp, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where approved!=\"yes\" OR isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by dlp_policy severity user dest approved** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of unapproved egress detections, bar chart by policy, single value 'new unapproved egress (1h)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for C1.1 — Confidentiality: sensitive-data exposure at the egress boundary. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA Security Rule",
                "ISO/IEC 27001",
                "PCI-DSS",
                "SOC-2"
              ],
              "a": [
                "Data Loss Prevention"
              ],
              "e": [
                "broadcom_symantec"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Security Rule",
                  "v": "2013-final",
                  "cl": "§164.312(e)(1)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that HIPAA Security Rule §164.312(e)(1) (Transmission security) is enforced — Splunk UC-22.8.38: SOC 2 C1.1 — Confidentiality: sensitive-data exposure at the egress boundary.",
                  "ea": "Saved search 'UC-22.8.38' running on index=dlp sourcetype=symantec:dlp OR sourcetype=microsoft:purview:dlp; approved-destination lookup approved_egress.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.12",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.12 (Data leakage prevention) is enforced — Splunk UC-22.8.38: SOC 2 C1.1 — Confidentiality: sensitive-data exposure at the egress boundary.",
                  "ea": "Saved search 'UC-22.8.38' running on index=dlp sourcetype=symantec:dlp OR sourcetype=microsoft:purview:dlp; approved-destination lookup approved_egress.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "10.6.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI-DSS 10.6.2 is enforced — Splunk UC-22.8.38: SOC 2 C1.1 — Confidentiality: sensitive-data exposure at the egress boundary.",
                  "ea": "Saved search 'UC-22.8.38' running on index=dlp sourcetype=symantec:dlp OR sourcetype=microsoft:purview:dlp; approved-destination lookup approved_egress.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "C1.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 C1.1 (Confidentiality) is enforced — Splunk UC-22.8.38: SOC 2 C1.1 — Confidentiality: sensitive-data exposure at the egress boundary.",
                  "ea": "Saved search 'UC-22.8.38' running on index=dlp sourcetype=symantec:dlp OR sourcetype=microsoft:purview:dlp; approved-destination lookup approved_egress.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "ta_link": {
                "name": "Symantec WSS Add-on for Splunk",
                "id": 3856,
                "url": "https://splunkbase.splunk.com/app/3856"
              },
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.8.39",
              "n": "SOC 2 P1.1 — Privacy notice: consent-record freshness for privacy-notice version changes",
              "c": "medium",
              "f": "intermediate",
              "v": "Prevents P1.1 findings: 'consent records reference a superseded notice'. Satisfies the '<v-current>' regulator question asked at every annual audit.",
              "t": "Splunk Enterprise Security",
              "d": "`index=consent` sourcetype=onetrust:consent; privacy-notice version lookup `privacy_notice_versions.csv`.",
              "q": "index=consent sourcetype=onetrust:consent earliest=-14d\n| stats latest(consent_version) AS consent_version latest(_time) AS consent_time BY subject_id\n| lookup privacy_notice_versions.csv AS latest OUTPUT current_version\n| eval days_stale=round((now()-consent_time)/86400,0)\n| eval needs_refresh=if(consent_version!=current_version,1,0)\n| where needs_refresh=1\n| table subject_id consent_version current_version days_stale needs_refresh",
              "m": "(1) Push consent changes to index=consent; (2) maintain privacy_notice_versions.csv; (3) schedule UC daily; (4) batch subjects needing refresh by cohort; (5) DPO owns the refresh campaign.",
              "z": "Timechart of stale consent backlog, table of cohorts, single value 'subjects needing refresh'.",
              "kfp": "Deceased/closed subject accounts legitimately have stale consent — exclude by account_state.",
              "refs": "[SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [GDPR 2016/679](https://eur-lex.europa.eu/eli/reg/2016/679/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security.\n• Ensure the following data sources are available: `index=consent` sourcetype=onetrust:consent; privacy-notice version lookup `privacy_notice_versions.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Push consent changes to index=consent; (2) maintain privacy_notice_versions.csv; (3) schedule UC daily; (4) batch subjects needing refresh by cohort; (5) DPO owns the refresh campaign.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=consent sourcetype=onetrust:consent earliest=-14d\n| stats latest(consent_version) AS consent_version latest(_time) AS consent_time BY subject_id\n| lookup privacy_notice_versions.csv AS latest OUTPUT current_version\n| eval days_stale=round((now()-consent_time)/86400,0)\n| eval needs_refresh=if(consent_version!=current_version,1,0)\n| where needs_refresh=1\n| table subject_id consent_version current_version days_stale needs_refresh\n```\n\nUnderstanding this SPL\n\n**SOC 2 P1.1 — Privacy notice: consent-record freshness for privacy-notice version changes** — Prevents P1.1 findings: 'consent records reference a superseded notice'. Satisfies the '<v-current>' regulator question asked at every annual audit.\n\nDocumented **Data sources**: `index=consent` sourcetype=onetrust:consent; privacy-notice version lookup `privacy_notice_versions.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: consent; **sourcetype**: onetrust:consent. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=consent, sourcetype=onetrust:consent, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by subject_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **days_stale** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **needs_refresh** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where needs_refresh=1` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOC 2 P1.1 — Privacy notice: consent-record freshness for privacy-notice version changes**): table subject_id consent_version current_version days_stale needs_refresh\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Timechart of stale consent backlog, table of cohorts, single value 'subjects needing refresh'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We keep evidence for P1.1 — Privacy notice: consent-record freshness for privacy-notice version changes. SOC 2 is how we show we run our services responsibly, and we keep evidence that supports that story.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "GDPR",
                "SOC-2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "GDPR",
                  "v": "2016/679",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that GDPR Art.7 (Conditions for consent) is enforced — Splunk UC-22.8.39: SOC 2 P1.1 — Privacy notice: consent-record freshness for privacy-notice version changes.",
                  "ea": "Saved search 'UC-22.8.39' running on index=consent sourcetype=onetrust:consent; privacy-notice version lookup privacy_notice_versions.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2016/679/oj"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "P1.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 P1.1 (Privacy notice) is enforced — Splunk UC-22.8.39: SOC 2 P1.1 — Privacy notice: consent-record freshness for privacy-notice version changes.",
                  "ea": "Saved search 'UC-22.8.39' running on index=consent sourcetype=onetrust:consent; privacy-notice version lookup privacy_notice_versions.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                },
                {
                  "r": "UK GDPR",
                  "v": "post-Brexit",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that UK GDPR Art.7 is enforced — Splunk UC-22.8.39: SOC 2 P1.1 — Privacy notice: consent-record freshness for privacy-notice version changes.",
                  "ea": "Saved search 'UC-22.8.39' running on index=consent sourcetype=onetrust:consent; privacy-notice version lookup privacy_notice_versions.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 9,
            "none": 0
          }
        },
        {
          "i": "22.11",
          "n": "— PCI DSS v4.0 (extended clauses)",
          "u": [
            {
              "i": "22.11.91",
              "n": "PCI-DSS 1.3 — CDE network boundary: unauthorised flows between CDE and untrusted networks",
              "c": "critical",
              "f": "intermediate",
              "v": "Provides QSAs the direct evidence that CDE boundary controls are intact and deviations are captured within 15 minutes.",
              "t": "Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=netfw` sourcetype IN (pan:traffic,cisco:asa); zone lookup `cde_zones.csv`; approved-flows lookup `pci_approved_flows.csv`.",
              "q": "index=netfw sourcetype IN (pan:traffic,cisco:asa) action=allow earliest=-15m\n| lookup cde_zones.csv ip=src OUTPUT zone AS src_zone\n| lookup cde_zones.csv ip=dest OUTPUT zone AS dest_zone\n| where (src_zone=\"CDE\" AND dest_zone=\"UNTRUSTED\") OR (src_zone=\"UNTRUSTED\" AND dest_zone=\"CDE\")\n| lookup pci_approved_flows.csv src_zone dest_zone dest_port protocol OUTPUT approved\n| where approved!=\"yes\" OR isnull(approved)\n| stats count BY src src_zone dest dest_zone dest_port protocol approved",
              "m": "(1) Onboard Palo Alto + ASA traffic; (2) maintain cde_zones.csv (CIDR → zone) and pci_approved_flows.csv; (3) schedule UC every 15m; (4) hit opens a P1 to NSC team; (5) daily report to compliance officer.",
              "z": "Sankey of src_zone→dest_zone with unapproved highlighted, table of unapproved flows, single value 'unapproved CDE flows (1h)'.",
              "kfp": "Cold-failover rules activated during DR days produce ephemeral unapproved flows — add DR-day allowlist with effective-from/to timestamps.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [MITRE ATT&CK — T1021](https://attack.mitre.org/techniques/T1021/)",
              "mitre": [
                "T1021"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=netfw` sourcetype IN (pan:traffic,cisco:asa); zone lookup `cde_zones.csv`; approved-flows lookup `pci_approved_flows.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Onboard Palo Alto + ASA traffic; (2) maintain cde_zones.csv (CIDR → zone) and pci_approved_flows.csv; (3) schedule UC every 15m; (4) hit opens a P1 to NSC team; (5) daily report to compliance officer.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=netfw sourcetype IN (pan:traffic,cisco:asa) action=allow earliest=-15m\n| lookup cde_zones.csv ip=src OUTPUT zone AS src_zone\n| lookup cde_zones.csv ip=dest OUTPUT zone AS dest_zone\n| where (src_zone=\"CDE\" AND dest_zone=\"UNTRUSTED\") OR (src_zone=\"UNTRUSTED\" AND dest_zone=\"CDE\")\n| lookup pci_approved_flows.csv src_zone dest_zone dest_port protocol OUTPUT approved\n| where approved!=\"yes\" OR isnull(approved)\n| stats count BY src src_zone dest dest_zone dest_port protocol approved\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 1.3 — CDE network boundary: unauthorised flows between CDE and untrusted networks** — Provides QSAs the direct evidence that CDE boundary controls are intact and deviations are captured within 15 minutes.\n\nDocumented **Data sources**: `index=netfw` sourcetype IN (pan:traffic,cisco:asa); zone lookup `cde_zones.csv`; approved-flows lookup `pci_approved_flows.csv`. **App/TA** (typical add-on context): Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: netfw.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=netfw, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where (src_zone=\"CDE\" AND dest_zone=\"UNTRUSTED\") OR (src_zone=\"UNTRUSTED\" AND dest_zone=\"CDE\")` — typically the threshold or rule expression for this monitoring goal.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where approved!=\"yes\" OR isnull(approved)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by src src_zone dest dest_zone dest_port protocol approved** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Sankey of src_zone→dest_zone with unapproved highlighted, table of unapproved flows, single value 'unapproved CDE flows (1h)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We detect any CDE-to-untrusted or untrusted-to-CDE flow that was allowed by the NSC but is not on the approved-flows list. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "PCI DSS",
                "PCI-DSS",
                "SOC-2"
              ],
              "a": [
                "Network_Traffic"
              ],
              "e": [
                "cisco",
                "paloalto"
              ],
              "em": [
                "cisco_asa",
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.22",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 A.8.22 is enforced — Splunk UC-22.11.91: PCI-DSS 1.3 — CDE network boundary: unauthorised flows between CDE and untrusted networks.",
                  "ea": "Saved search 'UC-22.11.91' running on index netfw and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "1.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 1.3 (CDE network boundary) is enforced — Splunk UC-22.11.91: PCI-DSS 1.3 — CDE network boundary: unauthorised flows between CDE and untrusted networks.",
                  "ea": "Saved search 'UC-22.11.91' running on index netfw and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC6.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC6.6 (Encryption in transit) is enforced — Splunk UC-22.11.91: PCI-DSS 1.3 — CDE network boundary: unauthorised flows between CDE and untrusted networks.",
                  "ea": "Saved search 'UC-22.11.91' running on index netfw and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.92",
              "n": "PCI-DSS 2.2 — Secure configuration baseline: drift from approved hardening template",
              "c": "high",
              "f": "intermediate",
              "v": "Turns the quarterly 'baseline assessment' into a 30-minute signal. Platform team sees regressions before the QSA.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "`index=configmgmt` sourcetype=chef:compliance OR sourcetype=ansible:callback; baseline lookup `pci_baselines.csv`.",
              "q": "index=configmgmt sourcetype IN (chef:compliance,ansible:callback) earliest=-30m\n| stats latest(role) AS role latest(baseline_version) AS baseline_version latest(resource_failed) AS resource_failed latest(_time) AS last_converged BY hostname\n| eval drift_count=coalesce(resource_failed,0)\n| where drift_count>0 OR last_converged<now()-86400\n| table hostname role baseline_version drift_count last_converged",
              "m": "(1) Enforce Chef/Ansible on all CDE hosts; (2) emit convergence events; (3) maintain pci_baselines.csv with the current approved baseline_version per role; (4) schedule UC every 30m; (5) drift opens a Platform ticket.",
              "z": "Bar of drift_count by role, table of top-drift hosts, single value 'hosts in drift'.",
              "kfp": "Baseline upgrades take effect over a staged rollout — support multiple active baseline_versions in the lookup.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: `index=configmgmt` sourcetype=chef:compliance OR sourcetype=ansible:callback; baseline lookup `pci_baselines.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enforce Chef/Ansible on all CDE hosts; (2) emit convergence events; (3) maintain pci_baselines.csv with the current approved baseline_version per role; (4) schedule UC every 30m; (5) drift opens a Platform ticket.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=configmgmt sourcetype IN (chef:compliance,ansible:callback) earliest=-30m\n| stats latest(role) AS role latest(baseline_version) AS baseline_version latest(resource_failed) AS resource_failed latest(_time) AS last_converged BY hostname\n| eval drift_count=coalesce(resource_failed,0)\n| where drift_count>0 OR last_converged<now()-86400\n| table hostname role baseline_version drift_count last_converged\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 2.2 — Secure configuration baseline: drift from approved hardening template** — Turns the quarterly 'baseline assessment' into a 30-minute signal. Platform team sees regressions before the QSA.\n\nDocumented **Data sources**: `index=configmgmt` sourcetype=chef:compliance OR sourcetype=ansible:callback; baseline lookup `pci_baselines.csv`. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: configmgmt.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=configmgmt, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **drift_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where drift_count>0 OR last_converged<now()-86400` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PCI-DSS 2.2 — Secure configuration baseline: drift from approved hardening template**): table hostname role baseline_version drift_count last_converged\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar of drift_count by role, table of top-drift hosts, single value 'hosts in drift'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We baseline compliance for CDE hosts. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "ISO/IEC 27001",
                "NIST 800-53",
                "PCI DSS",
                "PCI-DSS",
                "SOC-2"
              ],
              "a": [
                "Change",
                "Endpoint"
              ],
              "e": [
                "ansible"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.9",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.9 (Configuration management (2022 new)) is enforced — Splunk UC-22.11.92: PCI-DSS 2.2 — Secure configuration baseline: drift from approved hardening template.",
                  "ea": "Saved search 'UC-22.11.92' running on index=configmgmt sourcetype=chef:compliance OR sourcetype=ansible:callback; baseline lookup pci_baselines.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "CM-6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 CM-6 (Configuration settings) is enforced — Splunk UC-22.11.92: PCI-DSS 2.2 — Secure configuration baseline: drift from approved hardening template.",
                  "ea": "Saved search 'UC-22.11.92' running on index=configmgmt sourcetype=chef:compliance OR sourcetype=ansible:callback; baseline lookup pci_baselines.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "2.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 2.2 (Secure system component configuration) is enforced — Splunk UC-22.11.92: PCI-DSS 2.2 — Secure configuration baseline: drift from approved hardening template.",
                  "ea": "Saved search 'UC-22.11.92' running on index=configmgmt sourcetype=chef:compliance OR sourcetype=ansible:callback; baseline lookup pci_baselines.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC8.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC8.1 (Change management) is enforced — Splunk UC-22.11.92: PCI-DSS 2.2 — Secure configuration baseline: drift from approved hardening template.",
                  "ea": "Saved search 'UC-22.11.92' running on index=configmgmt sourcetype=chef:compliance OR sourcetype=ansible:callback; baseline lookup pci_baselines.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.93",
              "n": "PCI-DSS 3.3 — Sensitive authentication data: cleartext PAN/CVV detection in logs",
              "c": "critical",
              "f": "advanced",
              "v": "Reduces the window between a logging regression introducing a cleartext PAN and the compliance team's discovery from 'next log review' to 10 minutes.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "`index=app` sourcetypes from payment applications; `index=web` access logs; redaction-pipeline attestation events.",
              "q": "index=app OR index=web earliest=-10m\n| eval pan_match=if(match(_raw,\"(?<![0-9])(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|3[47][0-9]{13})(?![0-9])\"),1,0)\n| eval cvv_match=if(match(_raw,\"(?i)cvv=[0-9]{3,4}\"),1,0)\n| where pan_match=1 OR cvv_match=1\n| eval match_type=case(pan_match=1 AND cvv_match=1,\"pan+cvv\",pan_match=1,\"pan\",true(),\"cvv\")\n| stats count BY sourcetype host match_type",
              "m": "(1) Apply a separate restrictive index for hits (ACL); (2) schedule UC every 10m; (3) hit creates a P1 to DPO + app owner; (4) index-time redaction must be fixed + backfill scrub; (5) monthly DPO report.",
              "z": "Timechart of match_count, table of top offending sourcetypes, single value 'detections in last hour'.",
              "kfp": "Synthetic test data may look like a PAN — allow-list via the synthetic_test flag in the application.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [MITRE ATT&CK — T1005](https://attack.mitre.org/techniques/T1005/)",
              "mitre": [
                "T1005"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: `index=app` sourcetypes from payment applications; `index=web` access logs; redaction-pipeline attestation events..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Apply a separate restrictive index for hits (ACL); (2) schedule UC every 10m; (3) hit creates a P1 to DPO + app owner; (4) index-time redaction must be fixed + backfill scrub; (5) monthly DPO report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app OR index=web earliest=-10m\n| eval pan_match=if(match(_raw,\"(?<![0-9])(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|3[47][0-9]{13})(?![0-9])\"),1,0)\n| eval cvv_match=if(match(_raw,\"(?i)cvv=[0-9]{3,4}\"),1,0)\n| where pan_match=1 OR cvv_match=1\n| eval match_type=case(pan_match=1 AND cvv_match=1,\"pan+cvv\",pan_match=1,\"pan\",true(),\"cvv\")\n| stats count BY sourcetype host match_type\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 3.3 — Sensitive authentication data: cleartext PAN/CVV detection in logs** — Reduces the window between a logging regression introducing a cleartext PAN and the compliance team's discovery from 'next log review' to 10 minutes.\n\nDocumented **Data sources**: `index=app` sourcetypes from payment applications; `index=web` access logs; redaction-pipeline attestation events. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app, web.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app, index=web, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **pan_match** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cvv_match** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pan_match=1 OR cvv_match=1` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **match_type** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by sourcetype host match_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Timechart of match_count, table of top offending sourcetypes, single value 'detections in last hour'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We detect cleartext PAN or CVV patterns in application / web logs. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "PCI DSS",
                "PCI-DSS",
                "SOC-2"
              ],
              "a": [
                "N/A"
              ],
              "e": [
                "hardware_bmc"
              ],
              "em": [
                "hardware_bmc_edac"
              ],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.12",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.12 (Data leakage prevention) is enforced — Splunk UC-22.11.93: PCI-DSS 3.3 — Sensitive authentication data: cleartext PAN/CVV detection in logs.",
                  "ea": "Saved search 'UC-22.11.93' running on index=app sourcetypes from payment applications; index=web access logs; redaction-pipeline attestation events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "3.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 3.3 (Sensitive authentication data not stored) is enforced — Splunk UC-22.11.93: PCI-DSS 3.3 — Sensitive authentication data: cleartext PAN/CVV detection in logs.",
                  "ea": "Saved search 'UC-22.11.93' running on index=app sourcetypes from payment applications; index=web access logs; redaction-pipeline attestation events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "C1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 C1.1 (Confidentiality) is enforced — Splunk UC-22.11.93: PCI-DSS 3.3 — Sensitive authentication data: cleartext PAN/CVV detection in logs.",
                  "ea": "Saved search 'UC-22.11.93' running on index=app sourcetypes from payment applications; index=web access logs; redaction-pipeline attestation events., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.94",
              "n": "PCI-DSS 5.2 — Anti-malware: EDR coverage + detection-queue attestation",
              "c": "high",
              "f": "intermediate",
              "v": "Converts the annual 'EDR deployed' claim into a 30-minute ground-truth signal — directly auditable against PCI-DSS 5.2 requirement for deployed anti-malware.",
              "t": "Splunk Add-on for CrowdStrike FDR (5082)",
              "d": "`index=edr` sourcetype IN (crowdstrike:hosts,microsoft:defender:host); `index=cmdb` CDE host list.",
              "q": "| inputlookup cde_hosts.csv\n| join type=outer hostname\n  [search index=edr sourcetype IN (crowdstrike:hosts,microsoft:defender:host) earliest=-1h\n   | stats latest(_time) AS last_event latest(agent_state) AS agent_state BY hostname]\n| eval edr_seen=if(isnotnull(last_event) AND last_event>now()-1800,1,0)\n| where edr_seen=0 OR agent_state!=\"healthy\"\n| table hostname role edr_seen last_event agent_state",
              "m": "(1) Maintain cde_hosts.csv from CMDB; (2) schedule UC every 30m; (3) gap opens a ticket to the host owner; (4) 24h SLA; (5) weekly CISO report.",
              "z": "Bar chart of coverage by role, table of gaps, single value 'CDE hosts without EDR'.",
              "kfp": "Hosts under image-build or maintenance windows legitimately show no EDR — use maintenance_calendar.csv.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CrowdStrike FDR (5082).\n• Ensure the following data sources are available: `index=edr` sourcetype IN (crowdstrike:hosts,microsoft:defender:host); `index=cmdb` CDE host list..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain cde_hosts.csv from CMDB; (2) schedule UC every 30m; (3) gap opens a ticket to the host owner; (4) 24h SLA; (5) weekly CISO report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup cde_hosts.csv\n| join type=outer hostname\n  [search index=edr sourcetype IN (crowdstrike:hosts,microsoft:defender:host) earliest=-1h\n   | stats latest(_time) AS last_event latest(agent_state) AS agent_state BY hostname]\n| eval edr_seen=if(isnotnull(last_event) AND last_event>now()-1800,1,0)\n| where edr_seen=0 OR agent_state!=\"healthy\"\n| table hostname role edr_seen last_event agent_state\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 5.2 — Anti-malware: EDR coverage + detection-queue attestation** — Converts the annual 'EDR deployed' claim into a 30-minute ground-truth signal — directly auditable against PCI-DSS 5.2 requirement for deployed anti-malware.\n\nDocumented **Data sources**: `index=edr` sourcetype IN (crowdstrike:hosts,microsoft:defender:host); `index=cmdb` CDE host list. **App/TA** (typical add-on context): Splunk Add-on for CrowdStrike FDR (5082). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **edr_seen** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where edr_seen=0 OR agent_state!=\"healthy\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PCI-DSS 5.2 — Anti-malware: EDR coverage + detection-queue attestation**): table hostname role edr_seen last_event agent_state\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart of coverage by role, table of gaps, single value 'CDE hosts without EDR'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We coverage attestation for CDE anti-malware: every CDE host has an EDR event within 30 minutes and the agent is healthy. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "NIST 800-53",
                "PCI DSS",
                "PCI-DSS"
              ],
              "a": [
                "Endpoint"
              ],
              "e": [
                "crowdstrike"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.7",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.7 is enforced — Splunk UC-22.11.94: PCI-DSS 5.2 — Anti-malware: EDR coverage + detection-queue attestation.",
                  "ea": "Saved search 'UC-22.11.94' running on index=edr sourcetype IN (crowdstrike:hosts,microsoft:defender:host); index=cmdb CDE host list., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "SI-3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 SI-3 is enforced — Splunk UC-22.11.94: PCI-DSS 5.2 — Anti-malware: EDR coverage + detection-queue attestation.",
                  "ea": "Saved search 'UC-22.11.94' running on index=edr sourcetype IN (crowdstrike:hosts,microsoft:defender:host); index=cmdb CDE host list., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "5.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 5.2 (Anti-malware mechanisms) is enforced — Splunk UC-22.11.94: PCI-DSS 5.2 — Anti-malware: EDR coverage + detection-queue attestation.",
                  "ea": "Saved search 'UC-22.11.94' running on index=edr sourcetype IN (crowdstrike:hosts,microsoft:defender:host); index=cmdb CDE host list., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.95",
              "n": "PCI-DSS 6.2 — Bespoke-software SDLC: code-review + SAST completion before CDE deploy",
              "c": "high",
              "f": "advanced",
              "v": "Replaces the pre-release tick-box with continuous gate-enforcement evidence. Auditors see the commit→gate→deploy chain for every CDE deploy.",
              "t": "Splunk Add-on for GitHub",
              "d": "`index=cicd` sourcetype=github:actions OR sourcetype=gitlab:pipeline; SAST output `index=security` sourcetype=veracode:scan; CDE-service lookup `cde_services.csv`.",
              "q": "index=cicd sourcetype IN (github:actions,gitlab:pipeline) event_type=deployment target_env IN (\"prod-cde\") earliest=-24h\n| stats latest(_time) AS deployed_at latest(commit_sha) AS commit_sha BY artifact service\n| join type=outer commit_sha\n  [search index=security sourcetype=veracode:scan earliest=-30d\n   | stats latest(overall_state) AS sast_state BY commit_sha]\n| join type=outer commit_sha\n  [search index=cicd event_type=pr_review state=approved earliest=-30d\n   | stats latest(reviewer) AS reviewer BY commit_sha\n   | eval review_state=\"approved\"]\n| eval gates_met=if(sast_state=\"pass\" AND review_state=\"approved\",1,0)\n| where gates_met=0\n| table artifact commit_sha service deployed_at sast_state review_state gates_met",
              "m": "(1) Route CI/CD + SAST events; (2) maintain cde_services.csv; (3) schedule UC hourly; (4) gates_met=0 opens a security ticket; (5) pipeline policy revisions add gates.",
              "z": "Table of gate-missing deploys, bar chart by service, single value 'recent deploys missing gates'.",
              "kfp": "Hotfix deploys via break-glass workflow have an alternate review path — exclude with the workflow label hotfix=true when attested separately.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [SOX-ITGC PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [MITRE ATT&CK — T1195](https://attack.mitre.org/techniques/T1195/)",
              "mitre": [
                "T1195"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for GitHub.\n• Ensure the following data sources are available: `index=cicd` sourcetype=github:actions OR sourcetype=gitlab:pipeline; SAST output `index=security` sourcetype=veracode:scan; CDE-service lookup `cde_services.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Route CI/CD + SAST events; (2) maintain cde_services.csv; (3) schedule UC hourly; (4) gates_met=0 opens a security ticket; (5) pipeline policy revisions add gates.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=cicd sourcetype IN (github:actions,gitlab:pipeline) event_type=deployment target_env IN (\"prod-cde\") earliest=-24h\n| stats latest(_time) AS deployed_at latest(commit_sha) AS commit_sha BY artifact service\n| join type=outer commit_sha\n  [search index=security sourcetype=veracode:scan earliest=-30d\n   | stats latest(overall_state) AS sast_state BY commit_sha]\n| join type=outer commit_sha\n  [search index=cicd event_type=pr_review state=approved earliest=-30d\n   | stats latest(reviewer) AS reviewer BY commit_sha\n   | eval review_state=\"approved\"]\n| eval gates_met=if(sast_state=\"pass\" AND review_state=\"approved\",1,0)\n| where gates_met=0\n| table artifact commit_sha service deployed_at sast_state review_state gates_met\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 6.2 — Bespoke-software SDLC: code-review + SAST completion before CDE deploy** — Replaces the pre-release tick-box with continuous gate-enforcement evidence. Auditors see the commit→gate→deploy chain for every CDE deploy.\n\nDocumented **Data sources**: `index=cicd` sourcetype=github:actions OR sourcetype=gitlab:pipeline; SAST output `index=security` sourcetype=veracode:scan; CDE-service lookup `cde_services.csv`. **App/TA** (typical add-on context): Splunk Add-on for GitHub. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: cicd.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=cicd, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by artifact service** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **gates_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gates_met=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PCI-DSS 6.2 — Bespoke-software SDLC: code-review + SAST completion before CDE deploy**): table artifact commit_sha service deployed_at sast_state review_state gates_met\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of gate-missing deploys, bar chart by service, single value 'recent deploys missing gates'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We for every deploy to a CDE service, assert that SAST passed and a PR review was approved before deploy. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "ISO/IEC 27001",
                "PCI DSS",
                "PCI-DSS",
                "SOC-2",
                "SOX-ITGC"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "github",
                "gitlab"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.25",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.25 (Secure development life cycle) is enforced — Splunk UC-22.11.95: PCI-DSS 6.2 — Bespoke-software SDLC: code-review + SAST completion before CDE deploy.",
                  "ea": "Saved search 'UC-22.11.95' running on sourcetype github:actions and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "6.2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 6.2 (Bespoke software developed securely) is enforced — Splunk UC-22.11.95: PCI-DSS 6.2 — Bespoke-software SDLC: code-review + SAST completion before CDE deploy.",
                  "ea": "Saved search 'UC-22.11.95' running on sourcetype github:actions and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC8.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC8.1 (Change management) is enforced — Splunk UC-22.11.95: PCI-DSS 6.2 — Bespoke-software SDLC: code-review + SAST completion before CDE deploy.",
                  "ea": "Saved search 'UC-22.11.95' running on sourcetype github:actions and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOX-ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.11.95: PCI-DSS 6.2 — Bespoke-software SDLC: code-review + SAST completion before CDE deploy.",
                  "ea": "Saved search 'UC-22.11.95' running on sourcetype github:actions and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.96",
              "n": "PCI-DSS 8.3 — Strong authentication: password-only logins against privileged accounts",
              "c": "critical",
              "f": "intermediate",
              "v": "Catches the exact condition that has historically resulted in CDE breaches (privileged password-only sessions). MTTR shrinks from weeks to minutes.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Okta",
              "d": "`index=windows` sourcetype=WinEventLog:Security (4624); `index=linux` sourcetype=auditd; IDP `index=idp` sourcetype=okta:system_log.",
              "q": "search (index=windows sourcetype=WinEventLog:Security EventCode=4624) OR (index=idp sourcetype=okta:system_log eventType=user.session.start) earliest=-10m\n| lookup privileged_accounts.csv user OUTPUT is_privileged\n| where is_privileged=\"yes\"\n| eval auth_method=case(isnotnull(LogonType) AND LogonType IN (\"2\",\"10\"),\"interactive\",isnotnull(factor),factor,true(),\"password\")\n| eval mfa_used=if(auth_method IN (\"mfa_push\",\"mfa_totp\",\"webauthn\",\"smartcard\",\"token:hardware\"),1,0)\n| where mfa_used=0\n| table user src dest auth_method mfa_used",
              "m": "(1) Maintain privileged_accounts.csv; (2) ensure IDP emits factor evidence; (3) schedule UC every 10m; (4) hit opens a P1 and disables the session; (5) monthly CISO review of patterns.",
              "z": "Timechart of non-MFA privileged logins, table of top accounts, single value 'non-MFA priv logins (1h)'.",
              "kfp": "Service-account logins using smart-card-equivalent client cert may not show 'factor'; capture via cert-login sourcetype and mark mfa_used=1.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5), [MITRE ATT&CK — T1078](https://attack.mitre.org/techniques/T1078/)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Okta.\n• Ensure the following data sources are available: `index=windows` sourcetype=WinEventLog:Security (4624); `index=linux` sourcetype=auditd; IDP `index=idp` sourcetype=okta:system_log..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain privileged_accounts.csv; (2) ensure IDP emits factor evidence; (3) schedule UC every 10m; (4) hit opens a P1 and disables the session; (5) monthly CISO review of patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsearch (index=windows sourcetype=WinEventLog:Security EventCode=4624) OR (index=idp sourcetype=okta:system_log eventType=user.session.start) earliest=-10m\n| lookup privileged_accounts.csv user OUTPUT is_privileged\n| where is_privileged=\"yes\"\n| eval auth_method=case(isnotnull(LogonType) AND LogonType IN (\"2\",\"10\"),\"interactive\",isnotnull(factor),factor,true(),\"password\")\n| eval mfa_used=if(auth_method IN (\"mfa_push\",\"mfa_totp\",\"webauthn\",\"smartcard\",\"token:hardware\"),1,0)\n| where mfa_used=0\n| table user src dest auth_method mfa_used\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 8.3 — Strong authentication: password-only logins against privileged accounts** — Catches the exact condition that has historically resulted in CDE breaches (privileged password-only sessions). MTTR shrinks from weeks to minutes.\n\nDocumented **Data sources**: `index=windows` sourcetype=WinEventLog:Security (4624); `index=linux` sourcetype=auditd; IDP `index=idp` sourcetype=okta:system_log. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Okta. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, idp; **sourcetype**: WinEventLog:Security, okta:system_log. Those sourcetypes align with what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Applies an explicit `search` filter to narrow the current result set.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where is_privileged=\"yes\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **auth_method** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mfa_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mfa_used=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PCI-DSS 8.3 — Strong authentication: password-only logins against privileged accounts**): table user src dest auth_method mfa_used\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Timechart of non-MFA privileged logins, table of top accounts, single value 'non-MFA priv logins (1h)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We detect a successful login to a privileged account that did not complete a second factor. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "NIST 800-53",
                "PCI DSS",
                "PCI-DSS",
                "SOC-2"
              ],
              "a": [
                "Authentication"
              ],
              "e": [
                "okta"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.5",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.5 is enforced — Splunk UC-22.11.96: PCI-DSS 8.3 — Strong authentication: password-only logins against privileged accounts.",
                  "ea": "Saved search 'UC-22.11.96' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IA-2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 IA-2 (Identification and authentication (users)) is enforced — Splunk UC-22.11.96: PCI-DSS 8.3 — Strong authentication: password-only logins against privileged accounts.",
                  "ea": "Saved search 'UC-22.11.96' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "8.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 8.3 (Strong authentication) is enforced — Splunk UC-22.11.96: PCI-DSS 8.3 — Strong authentication: password-only logins against privileged accounts.",
                  "ea": "Saved search 'UC-22.11.96' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC6.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC6.1 (Logical access controls) is enforced — Splunk UC-22.11.96: PCI-DSS 8.3 — Strong authentication: password-only logins against privileged accounts.",
                  "ea": "Saved search 'UC-22.11.96' running on sourcetype WinEventLog:Security and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.97",
              "n": "PCI-DSS 8.4 — MFA coverage: administrative access to CDE without MFA",
              "c": "critical",
              "f": "intermediate",
              "v": "The #1 QSA finding in failed assessments. Converting this to a 10-minute detection reduces remediation cost 100×.",
              "t": "Splunk Add-on for CyberArk",
              "d": "`index=windows` EventCode 4624/4672; `index=linux` auditd; bastion/PAM `index=pam` sourcetype=cyberark:pam; CDE-dest lookup `cde_hosts.csv`.",
              "q": "search (index=windows sourcetype=WinEventLog:Security EventCode IN (4624,4672)) OR (index=pam sourcetype=cyberark:pam event_type=session_start) earliest=-10m\n| rename target_host AS dest\n| lookup cde_hosts.csv hostname=dest OUTPUT in_cde\n| lookup privileged_accounts.csv user AS admin OUTPUT is_privileged\n| where in_cde=\"yes\" AND is_privileged=\"yes\"\n| eval mfa_used=if(match(_raw,\"(?i)mfa|2fa|push|webauthn|smartcard\"),1,0)\n| where mfa_used=0\n| table admin dest session_type mfa_used",
              "m": "(1) Integrate PAM/bastion; (2) maintain cde_hosts.csv + privileged_accounts.csv; (3) schedule UC every 10m; (4) hit revokes session; (5) monthly CISO review.",
              "z": "Heatmap of admins × destinations, table of non-MFA sessions, single value 'non-MFA admin in CDE'.",
              "kfp": "Break-glass emergency access with compensating controls (hardware-token-only) must log mfa_used=1 via its own sourcetype — route to the field correctly.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5), [DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [MITRE ATT&CK — T1078](https://attack.mitre.org/techniques/T1078/)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for CyberArk.\n• Ensure the following data sources are available: `index=windows` EventCode 4624/4672; `index=linux` auditd; bastion/PAM `index=pam` sourcetype=cyberark:pam; CDE-dest lookup `cde_hosts.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Integrate PAM/bastion; (2) maintain cde_hosts.csv + privileged_accounts.csv; (3) schedule UC every 10m; (4) hit revokes session; (5) monthly CISO review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsearch (index=windows sourcetype=WinEventLog:Security EventCode IN (4624,4672)) OR (index=pam sourcetype=cyberark:pam event_type=session_start) earliest=-10m\n| rename target_host AS dest\n| lookup cde_hosts.csv hostname=dest OUTPUT in_cde\n| lookup privileged_accounts.csv user AS admin OUTPUT is_privileged\n| where in_cde=\"yes\" AND is_privileged=\"yes\"\n| eval mfa_used=if(match(_raw,\"(?i)mfa|2fa|push|webauthn|smartcard\"),1,0)\n| where mfa_used=0\n| table admin dest session_type mfa_used\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 8.4 — MFA coverage: administrative access to CDE without MFA** — The #1 QSA finding in failed assessments. Converting this to a 10-minute detection reduces remediation cost 100×.\n\nDocumented **Data sources**: `index=windows` EventCode 4624/4672; `index=linux` auditd; bastion/PAM `index=pam` sourcetype=cyberark:pam; CDE-dest lookup `cde_hosts.csv`. **App/TA** (typical add-on context): Splunk Add-on for CyberArk. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, pam; **sourcetype**: WinEventLog:Security, cyberark:pam. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Applies an explicit `search` filter to narrow the current result set.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_cde=\"yes\" AND is_privileged=\"yes\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **mfa_used** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mfa_used=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PCI-DSS 8.4 — MFA coverage: administrative access to CDE without MFA**): table admin dest session_type mfa_used\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap of admins × destinations, table of non-MFA sessions, single value 'non-MFA admin in CDE'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We administrative access to the CDE must use MFA. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "NIST 800-53",
                "PCI DSS",
                "PCI-DSS"
              ],
              "a": [
                "Authentication"
              ],
              "e": [
                "cyberark"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.9",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.9 (Protection and prevention) is enforced — Splunk UC-22.11.97: PCI-DSS 8.4 — MFA coverage: administrative access to CDE without MFA.",
                  "ea": "Saved search 'UC-22.11.97' running on sourcetype cyberark:pam and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.5",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.5 is enforced — Splunk UC-22.11.97: PCI-DSS 8.4 — MFA coverage: administrative access to CDE without MFA.",
                  "ea": "Saved search 'UC-22.11.97' running on sourcetype cyberark:pam and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IA-2(1)",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 IA-2(1) is enforced — Splunk UC-22.11.97: PCI-DSS 8.4 — MFA coverage: administrative access to CDE without MFA.",
                  "ea": "Saved search 'UC-22.11.97' running on sourcetype cyberark:pam and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "8.4",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 8.4 (MFA) is enforced — Splunk UC-22.11.97: PCI-DSS 8.4 — MFA coverage: administrative access to CDE without MFA.",
                  "ea": "Saved search 'UC-22.11.97' running on sourcetype cyberark:pam and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.98",
              "n": "PCI-DSS 8.6 — Application and system accounts: interactive use of a service account",
              "c": "high",
              "f": "intermediate",
              "v": "Removes the common post-audit-finding 'we didn't know that account was used interactively' narrative by making the misuse visible in 10 minutes.",
              "t": "Splunk Add-on for Microsoft Windows (742)",
              "d": "`index=windows` EventCode 4624; `index=linux` auditd; service-account lookup `service_accounts.csv`.",
              "q": "index=windows sourcetype=WinEventLog:Security EventCode=4624 LogonType IN (2,10,11) earliest=-10m\n| rename TargetUserName AS user\n| lookup service_accounts.csv user OUTPUT account_type\n| where account_type=\"service-only\"\n| table user src dest LogonType account_type",
              "m": "(1) Maintain service_accounts.csv with account_type tag; (2) schedule UC every 10m; (3) hit opens a P1 and disables the account; (4) weekly IAM review.",
              "z": "Table of interactive service-account logins, time chart, single value 'events in last hour'.",
              "kfp": "Admin password rotations performed by a service-account-using tool may be reported as interactive — exclude via tool's user-as-tool mapping.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5), [MITRE ATT&CK — T1078.002](https://attack.mitre.org/techniques/T1078/)",
              "mitre": [
                "T1078.002"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742).\n• Ensure the following data sources are available: `index=windows` EventCode 4624; `index=linux` auditd; service-account lookup `service_accounts.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain service_accounts.csv with account_type tag; (2) schedule UC every 10m; (3) hit opens a P1 and disables the account; (4) weekly IAM review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=windows sourcetype=WinEventLog:Security EventCode=4624 LogonType IN (2,10,11) earliest=-10m\n| rename TargetUserName AS user\n| lookup service_accounts.csv user OUTPUT account_type\n| where account_type=\"service-only\"\n| table user src dest LogonType account_type\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 8.6 — Application and system accounts: interactive use of a service account** — Removes the common post-audit-finding 'we didn't know that account was used interactively' narrative by making the misuse visible in 10 minutes.\n\nDocumented **Data sources**: `index=windows` EventCode 4624; `index=linux` auditd; service-account lookup `service_accounts.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows; **sourcetype**: WinEventLog:Security. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=windows, sourcetype=WinEventLog:Security, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Renames fields with `rename` for clarity or joins.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where account_type=\"service-only\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PCI-DSS 8.6 — Application and system accounts: interactive use of a service account**): table user src dest LogonType account_type\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of interactive service-account logins, time chart, single value 'events in last hour'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We detect an interactive session on a service-only account — direct PCI-DSS 8.6 violation, which requires application/system accounts not be used interactively. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "NIST 800-53",
                "PCI DSS",
                "PCI-DSS"
              ],
              "a": [
                "Authentication"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.16",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.5.16 is enforced — Splunk UC-22.11.98: PCI-DSS 8.6 — Application and system accounts: interactive use of a service account.",
                  "ea": "Saved search 'UC-22.11.98' running on index=windows EventCode 4624; index=linux auditd; service-account lookup service_accounts.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "IA-2",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 IA-2 (Identification and authentication (users)) is enforced — Splunk UC-22.11.98: PCI-DSS 8.6 — Application and system accounts: interactive use of a service account.",
                  "ea": "Saved search 'UC-22.11.98' running on index=windows EventCode 4624; index=linux auditd; service-account lookup service_accounts.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "8.6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 8.6 (Application and system accounts) is enforced — Splunk UC-22.11.98: PCI-DSS 8.6 — Application and system accounts: interactive use of a service account.",
                  "ea": "Saved search 'UC-22.11.98' running on index=windows EventCode 4624; index=linux auditd; service-account lookup service_accounts.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.99",
              "n": "PCI-DSS 10.3 — Audit log integrity: tampering/deletion detection on CDE log source",
              "c": "critical",
              "f": "advanced",
              "v": "A cleared Windows Security log on a CDE host is a near-certain indicator of attacker presence. Closing the detection gap to 10 minutes saves forensics time.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833)",
              "d": "`index=windows` EventCode IN (1102,104); `index=linux` auditd log clears, syslog `rotated-by-hand`.",
              "q": "search (index=windows sourcetype=WinEventLog:Security EventCode=1102) OR (index=windows sourcetype=WinEventLog:System EventCode=104) OR (index=linux sourcetype=auditd type=CONFIG_CHANGE op=remove_rule) earliest=-10m\n| eval action=case(EventCode=1102,\"audit_log_cleared\",EventCode=104,\"event_log_cleared\",op=\"remove_rule\",\"auditd_rule_removed\",true(),\"unknown\")\n| rename TargetUserName AS actor host AS host\n| table host action actor target_log",
              "m": "(1) Forward Windows Security + System + Linux auditd to Splunk; (2) schedule UC every 10m; (3) hit opens a P1 to IR + isolates host; (4) quarterly IR drill validates response.",
              "z": "Table of events by action, time chart, single value 'clears in last hour'.",
              "kfp": "Host-retirement scripts may legitimately clear logs — exclude only with scheduled-retirement ticket linked.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [SOX-ITGC PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [MITRE ATT&CK — T1070.001](https://attack.mitre.org/techniques/T1070/)",
              "mitre": [
                "T1070.001"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833).\n• Ensure the following data sources are available: `index=windows` EventCode IN (1102,104); `index=linux` auditd log clears, syslog `rotated-by-hand`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Forward Windows Security + System + Linux auditd to Splunk; (2) schedule UC every 10m; (3) hit opens a P1 to IR + isolates host; (4) quarterly IR drill validates response.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsearch (index=windows sourcetype=WinEventLog:Security EventCode=1102) OR (index=windows sourcetype=WinEventLog:System EventCode=104) OR (index=linux sourcetype=auditd type=CONFIG_CHANGE op=remove_rule) earliest=-10m\n| eval action=case(EventCode=1102,\"audit_log_cleared\",EventCode=104,\"event_log_cleared\",op=\"remove_rule\",\"auditd_rule_removed\",true(),\"unknown\")\n| rename TargetUserName AS actor host AS host\n| table host action actor target_log\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 10.3 — Audit log integrity: tampering/deletion detection on CDE log source** — A cleared Windows Security log on a CDE host is a near-certain indicator of attacker presence. Closing the detection gap to 10 minutes saves forensics time.\n\nDocumented **Data sources**: `index=windows` EventCode IN (1102,104); `index=linux` auditd log clears, syslog `rotated-by-hand`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, linux; **sourcetype**: WinEventLog:Security, WinEventLog:System, auditd. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **action** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Renames fields with `rename` for clarity or joins.\n• Pipeline stage (see **PCI-DSS 10.3 — Audit log integrity: tampering/deletion detection on CDE log source**): table host action actor target_log\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of events by action, time chart, single value 'clears in last hour'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We pCI-DSS 10.3 requires audit logs be protected from modification and deletion. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "regs": [
                "ISO/IEC 27001",
                "PCI DSS",
                "PCI-DSS",
                "SOC-2",
                "SOX-ITGC"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "syslog"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.15",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.15 (Logging) is enforced — Splunk UC-22.11.99: PCI-DSS 10.3 — Audit log integrity: tampering/deletion detection on CDE log source.",
                  "ea": "Saved search 'UC-22.11.99' running on index=windows EventCode IN (1102,104); index=linux auditd log clears, syslog rotated-by-hand., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "10.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 10.3 (Audit logs protected from modification) is enforced — Splunk UC-22.11.99: PCI-DSS 10.3 — Audit log integrity: tampering/deletion detection on CDE log source.",
                  "ea": "Saved search 'UC-22.11.99' running on index=windows EventCode IN (1102,104); index=linux auditd log clears, syslog rotated-by-hand., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC7.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC7.2 (System monitoring for anomalies) is enforced — Splunk UC-22.11.99: PCI-DSS 10.3 — Audit log integrity: tampering/deletion detection on CDE log source.",
                  "ea": "Saved search 'UC-22.11.99' running on index=windows EventCode IN (1102,104); index=linux auditd log clears, syslog rotated-by-hand., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Logging.Integrity",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX-ITGC ITGC.Logging.Integrity is enforced — Splunk UC-22.11.99: PCI-DSS 10.3 — Audit log integrity: tampering/deletion detection on CDE log source.",
                  "ea": "Saved search 'UC-22.11.99' running on index=windows EventCode IN (1102,104); index=linux auditd log clears, syslog rotated-by-hand., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "pillar": "security",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.100",
              "n": "PCI-DSS 10.4 — Time synchronisation: NTP drift on CDE hosts",
              "c": "high",
              "f": "beginner",
              "v": "Prevents the classic forensic nightmare of 'when exactly did the event occur' because the CDE clocks disagreed.",
              "t": "Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833)",
              "d": "`index=windows` sourcetype=w32time; `index=linux` sourcetype=chronyd OR sourcetype=ntpd; CDE lookup `cde_hosts.csv`.",
              "q": "search (index=windows sourcetype=w32time) OR (index=linux sourcetype IN (chronyd,ntpd)) earliest=-15m\n| stats latest(offset_ms) AS offset_ms latest(stratum) AS stratum latest(_time) AS last_ntp_sync BY hostname\n| lookup cde_hosts.csv hostname OUTPUT in_cde\n| where in_cde=\"yes\" AND (abs(offset_ms)>1000 OR last_ntp_sync<now()-3600)\n| table hostname offset_ms stratum last_ntp_sync",
              "m": "(1) Enforce w32time/chrony logs to Splunk; (2) maintain cde_hosts.csv; (3) schedule UC every 15m; (4) drift opens a Platform ticket; (5) quarterly sampling for auditor.",
              "z": "Scatter of offset_ms per host, table of drifted hosts, single value 'hosts over 1s drift'.",
              "kfp": "Virtualisation drift during host migration may cause short bursts — smooth with 5-minute averaging before trigger.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833).\n• Ensure the following data sources are available: `index=windows` sourcetype=w32time; `index=linux` sourcetype=chronyd OR sourcetype=ntpd; CDE lookup `cde_hosts.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enforce w32time/chrony logs to Splunk; (2) maintain cde_hosts.csv; (3) schedule UC every 15m; (4) drift opens a Platform ticket; (5) quarterly sampling for auditor.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nsearch (index=windows sourcetype=w32time) OR (index=linux sourcetype IN (chronyd,ntpd)) earliest=-15m\n| stats latest(offset_ms) AS offset_ms latest(stratum) AS stratum latest(_time) AS last_ntp_sync BY hostname\n| lookup cde_hosts.csv hostname OUTPUT in_cde\n| where in_cde=\"yes\" AND (abs(offset_ms)>1000 OR last_ntp_sync<now()-3600)\n| table hostname offset_ms stratum last_ntp_sync\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 10.4 — Time synchronisation: NTP drift on CDE hosts** — Prevents the classic forensic nightmare of 'when exactly did the event occur' because the CDE clocks disagreed.\n\nDocumented **Data sources**: `index=windows` sourcetype=w32time; `index=linux` sourcetype=chronyd OR sourcetype=ntpd; CDE lookup `cde_hosts.csv`. **App/TA** (typical add-on context): Splunk Add-on for Microsoft Windows (742), Splunk Add-on for Unix and Linux (833). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: windows, linux; **sourcetype**: w32time. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by hostname** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_cde=\"yes\" AND (abs(offset_ms)>1000 OR last_ntp_sync<now()-3600)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PCI-DSS 10.4 — Time synchronisation: NTP drift on CDE hosts**): table hostname offset_ms stratum last_ntp_sync\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter of offset_ms per host, table of drifted hosts, single value 'hosts over 1s drift'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We pCI-DSS 10.4 requires time synchronisation across CDE systems. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "ISO/IEC 27001",
                "NIST 800-53",
                "PCI DSS",
                "PCI-DSS"
              ],
              "a": [
                "Performance"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.17",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.17 (Clock synchronisation) is enforced — Splunk UC-22.11.100: PCI-DSS 10.4 — Time synchronisation: NTP drift on CDE hosts.",
                  "ea": "Saved search 'UC-22.11.100' running on index=windows sourcetype=w32time; index=linux sourcetype=chronyd OR sourcetype=ntpd; CDE lookup cde_hosts.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-8",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 AU-8 (Time stamps) is enforced — Splunk UC-22.11.100: PCI-DSS 10.4 — Time synchronisation: NTP drift on CDE hosts.",
                  "ea": "Saved search 'UC-22.11.100' running on index=windows sourcetype=w32time; index=linux sourcetype=chronyd OR sourcetype=ntpd; CDE lookup cde_hosts.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "10.4",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 10.4 (Time synchronised) is enforced — Splunk UC-22.11.100: PCI-DSS 10.4 — Time synchronisation: NTP drift on CDE hosts.",
                  "ea": "Saved search 'UC-22.11.100' running on index=windows sourcetype=w32time; index=linux sourcetype=chronyd OR sourcetype=ntpd; CDE lookup cde_hosts.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.101",
              "n": "PCI-DSS 10.6 — Log review: daily-review evidence for CDE data sources",
              "c": "high",
              "f": "intermediate",
              "v": "Auditors frequently disallow 'we review daily' without evidence. The review artefact lookup gives the evidence.",
              "t": "Splunk Enterprise Security",
              "d": "`index=compliance` sourcetype=review:daily (from SOC daily review tool); CDE-source list `pci_cde_log_sources.csv`.",
              "q": "| inputlookup pci_cde_log_sources.csv\n| join type=outer source\n  [search index=compliance sourcetype=review:daily earliest=-1d\n   | stats latest(reviewer) AS reviewer latest(_time) AS reviewed_at latest(artefact_id) AS artefact_id BY source]\n| eval review_today=if(reviewed_at>=relative_time(now(),\"@d\"),1,0)\n| where review_today=0\n| table source reviewer reviewed_at artefact_id review_today",
              "m": "(1) SOC tool must write review artefacts to index=compliance; (2) maintain pci_cde_log_sources.csv; (3) schedule UC daily at 23:55 UTC; (4) gap opens a ticket for the next morning's shift; (5) monthly CISO report.",
              "z": "Table of sources missing review, time chart by day, single value 'days with gaps (30d)'.",
              "kfp": "Automated high-fidelity systems (e.g., PAN unified logging with correlation) may substitute for daily review — allow-list via source_type='automated-attest' in the lookup.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security.\n• Ensure the following data sources are available: `index=compliance` sourcetype=review:daily (from SOC daily review tool); CDE-source list `pci_cde_log_sources.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) SOC tool must write review artefacts to index=compliance; (2) maintain pci_cde_log_sources.csv; (3) schedule UC daily at 23:55 UTC; (4) gap opens a ticket for the next morning's shift; (5) monthly CISO report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup pci_cde_log_sources.csv\n| join type=outer source\n  [search index=compliance sourcetype=review:daily earliest=-1d\n   | stats latest(reviewer) AS reviewer latest(_time) AS reviewed_at latest(artefact_id) AS artefact_id BY source]\n| eval review_today=if(reviewed_at>=relative_time(now(),\"@d\"),1,0)\n| where review_today=0\n| table source reviewer reviewed_at artefact_id review_today\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 10.6 — Log review: daily-review evidence for CDE data sources** — Auditors frequently disallow 'we review daily' without evidence. The review artefact lookup gives the evidence.\n\nDocumented **Data sources**: `index=compliance` sourcetype=review:daily (from SOC daily review tool); CDE-source list `pci_cde_log_sources.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **review_today** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where review_today=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PCI-DSS 10.6 — Log review: daily-review evidence for CDE data sources**): table source reviewer reviewed_at artefact_id review_today\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of sources missing review, time chart by day, single value 'days with gaps (30d)'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We pCI-DSS 10.6 requires daily review of logs. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "PCI DSS",
                "PCI-DSS",
                "SOC-2"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.28",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 A.5.28 is enforced — Splunk UC-22.11.101: PCI-DSS 10.6 — Log review: daily-review evidence for CDE data sources.",
                  "ea": "Saved search 'UC-22.11.101' running on index=compliance sourcetype=review:daily (from SOC daily review tool); CDE-source list pci_cde_log_sources.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "10.6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 10.6 (Logs reviewed) is enforced — Splunk UC-22.11.101: PCI-DSS 10.6 — Log review: daily-review evidence for CDE data sources.",
                  "ea": "Saved search 'UC-22.11.101' running on index=compliance sourcetype=review:daily (from SOC daily review tool); CDE-source list pci_cde_log_sources.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC7.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC7.1 (System operations monitoring) is enforced — Splunk UC-22.11.101: PCI-DSS 10.6 — Log review: daily-review evidence for CDE data sources.",
                  "ea": "Saved search 'UC-22.11.101' running on index=compliance sourcetype=review:daily (from SOC daily review tool); CDE-source list pci_cde_log_sources.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.102",
              "n": "PCI-DSS 10.7 — Log retention: CDE data-source retention + immutability attestation",
              "c": "high",
              "f": "intermediate",
              "v": "Prevents the audit-killing discovery that retention was silently reduced during an index rebuild. Daily attestation beats the quarterly spot-check.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Splunk REST /services/data/indexes; `index=compliance` sourcetype=worm:signer.",
              "q": "| rest /services/data/indexes\n| where match(title,\"^(cde|pci)_\")\n| eval configured_days=round(frozenTimePeriodInSecs/86400,0)\n| table title configured_days minTime maxTime\n| rename title AS index\n| eval oldest_event_age_d=round((now()-strptime(minTime,\"%Y-%m-%dT%H:%M:%S%z\"))/86400,0)\n| join type=outer index\n  [search index=compliance sourcetype=worm:signer earliest=-7d\n   | stats latest(_time) AS worm_last_write BY index]\n| where configured_days<365 OR oldest_event_age_d<365 OR worm_last_write<now()-86400\n| table index configured_days oldest_event_age_d worm_last_write",
              "m": "(1) Name CDE indexes with cde_/pci_ prefix; (2) WORM signer writes per-index heartbeat; (3) schedule UC daily; (4) gap opens a Platform ticket; (5) monthly CISO review.",
              "z": "Bar of retention days by index, table of gaps, single value 'indexes below 365d'.",
              "kfp": "New indexes legitimately have oldest_event_age_d<365 until one year passes — mark age_exempt=true in an index-metadata lookup.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5), [SOX-ITGC PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Splunk REST /services/data/indexes; `index=compliance` sourcetype=worm:signer..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Name CDE indexes with cde_/pci_ prefix; (2) WORM signer writes per-index heartbeat; (3) schedule UC daily; (4) gap opens a Platform ticket; (5) monthly CISO review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| rest /services/data/indexes\n| where match(title,\"^(cde|pci)_\")\n| eval configured_days=round(frozenTimePeriodInSecs/86400,0)\n| table title configured_days minTime maxTime\n| rename title AS index\n| eval oldest_event_age_d=round((now()-strptime(minTime,\"%Y-%m-%dT%H:%M:%S%z\"))/86400,0)\n| join type=outer index\n  [search index=compliance sourcetype=worm:signer earliest=-7d\n   | stats latest(_time) AS worm_last_write BY index]\n| where configured_days<365 OR oldest_event_age_d<365 OR worm_last_write<now()-86400\n| table index configured_days oldest_event_age_d worm_last_write\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 10.7 — Log retention: CDE data-source retention + immutability attestation** — Prevents the audit-killing discovery that retention was silently reduced during an index rebuild. Daily attestation beats the quarterly spot-check.\n\nDocumented **Data sources**: Splunk REST /services/data/indexes; `index=compliance` sourcetype=worm:signer. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Calls Splunk `rest` to read configuration or REST-exposed entities.\n• Filters the current rows with `where match(title,\"^(cde|pci)_\")` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **configured_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **PCI-DSS 10.7 — Log retention: CDE data-source retention + immutability attestation**): table title configured_days minTime maxTime\n• Renames fields with `rename` for clarity or joins.\n• `eval` defines or adjusts **oldest_event_age_d** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Filters the current rows with `where configured_days<365 OR oldest_event_age_d<365 OR worm_last_write<now()-86400` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PCI-DSS 10.7 — Log retention: CDE data-source retention + immutability attestation**): table index configured_days oldest_event_age_d worm_last_write\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Bar of retention days by index, table of gaps, single value 'indexes below 365d'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We attests the retention and WORM posture required by PCI-DSS 10.7 (one-year retention, three months immediately available). PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "regs": [
                "ISO/IEC 27001",
                "NIST 800-53",
                "PCI DSS",
                "PCI-DSS",
                "SOX-ITGC"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.33",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.5.33 is enforced — Splunk UC-22.11.102: PCI-DSS 10.7 — Log retention: CDE data-source retention + immutability attestation.",
                  "ea": "Saved search 'UC-22.11.102' running on Splunk REST /services/data/indexes; index=compliance sourcetype=worm:signer., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "AU-11",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 AU-11 is enforced — Splunk UC-22.11.102: PCI-DSS 10.7 — Log retention: CDE data-source retention + immutability attestation.",
                  "ea": "Saved search 'UC-22.11.102' running on Splunk REST /services/data/indexes; index=compliance sourcetype=worm:signer., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "10.7",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 10.7 (Log retention) is enforced — Splunk UC-22.11.102: PCI-DSS 10.7 — Log retention: CDE data-source retention + immutability attestation.",
                  "ea": "Saved search 'UC-22.11.102' running on Splunk REST /services/data/indexes; index=compliance sourcetype=worm:signer., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Logging.Integrity",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOX-ITGC ITGC.Logging.Integrity is enforced — Splunk UC-22.11.102: PCI-DSS 10.7 — Log retention: CDE data-source retention + immutability attestation.",
                  "ea": "Saved search 'UC-22.11.102' running on Splunk REST /services/data/indexes; index=compliance sourcetype=worm:signer., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.103",
              "n": "PCI-DSS 11.3 — Vulnerability programme: overdue scan cadence and unremediated high-severity",
              "c": "high",
              "f": "intermediate",
              "v": "Closes the #1 open-finding reason at QSAs — 'the scan history was incomplete'. Continuous measurement is the intended replacement.",
              "t": "Splunk Add-on for Tenable (4060)",
              "d": "`index=vm` sourcetype=tenable:sc:vuln, qualys:host; scan-scope lookup `pci_scan_scope.csv`.",
              "q": "index=vm sourcetype IN (tenable:sc:vuln,qualys:host) earliest=-120d\n| stats latest(_time) AS last_scan_at count(eval(severity IN (\"high\",\"critical\") AND state=\"open\")) AS open_high_findings avg(eval(if(state=\"open\" AND severity IN (\"high\",\"critical\"),now()-first_seen,null))) AS mean_age_s BY scan_scope\n| eval mean_age_days=round(mean_age_s/86400,0)\n| lookup pci_scan_scope.csv scan_scope OUTPUT cadence_days sla_days\n| eval sla_met=if(last_scan_at>=now()-cadence_days*86400 AND mean_age_days<=sla_days,1,0)\n| where sla_met=0\n| table scan_scope last_scan_at sla_met open_high_findings mean_age_days",
              "m": "(1) Normalise scanner feeds; (2) maintain pci_scan_scope.csv (quarterly external, per-change internal); (3) schedule UC daily; (4) sla_met=0 opens a P2; (5) quarterly QSA report.",
              "z": "Bullet chart of SLA adherence by scope, table of overdue scopes, single value 'scopes overdue'.",
              "kfp": "Immaterial findings whose exception was approved in the risk register show as 'open' — tag state='accepted_risk' and exclude.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [NIST 800-53 Rev. 5](https://doi.org/10.6028/NIST.SP.800-53r5), [DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [MITRE ATT&CK — T1595](https://attack.mitre.org/techniques/T1595/)",
              "mitre": [
                "T1595"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Tenable (4060).\n• Ensure the following data sources are available: `index=vm` sourcetype=tenable:sc:vuln, qualys:host; scan-scope lookup `pci_scan_scope.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Normalise scanner feeds; (2) maintain pci_scan_scope.csv (quarterly external, per-change internal); (3) schedule UC daily; (4) sla_met=0 opens a P2; (5) quarterly QSA report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=vm sourcetype IN (tenable:sc:vuln,qualys:host) earliest=-120d\n| stats latest(_time) AS last_scan_at count(eval(severity IN (\"high\",\"critical\") AND state=\"open\")) AS open_high_findings avg(eval(if(state=\"open\" AND severity IN (\"high\",\"critical\"),now()-first_seen,null))) AS mean_age_s BY scan_scope\n| eval mean_age_days=round(mean_age_s/86400,0)\n| lookup pci_scan_scope.csv scan_scope OUTPUT cadence_days sla_days\n| eval sla_met=if(last_scan_at>=now()-cadence_days*86400 AND mean_age_days<=sla_days,1,0)\n| where sla_met=0\n| table scan_scope last_scan_at sla_met open_high_findings mean_age_days\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 11.3 — Vulnerability programme: overdue scan cadence and unremediated high-severity** — Closes the #1 open-finding reason at QSAs — 'the scan history was incomplete'. Continuous measurement is the intended replacement.\n\nDocumented **Data sources**: `index=vm` sourcetype=tenable:sc:vuln, qualys:host; scan-scope lookup `pci_scan_scope.csv`. **App/TA** (typical add-on context): Splunk Add-on for Tenable (4060). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: vm.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=vm, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by scan_scope** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **mean_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where sla_met=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PCI-DSS 11.3 — Vulnerability programme: overdue scan cadence and unremediated high-severity**): table scan_scope last_scan_at sla_met open_high_findings mean_age_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Bullet chart of SLA adherence by scope, table of overdue scopes, single value 'scopes overdue'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We produces, per scan scope, the evidence required by PCI-DSS 11.3 — scans are on cadence and high/critical findings are remediated within SLA. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "NIST 800-53",
                "PCI DSS",
                "PCI-DSS"
              ],
              "a": [
                "Vulnerabilities"
              ],
              "e": [
                "qualys",
                "tenable"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.8",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that DORA Art.8 (Identification) is enforced — Splunk UC-22.11.103: PCI-DSS 11.3 — Vulnerability programme: overdue scan cadence and unremediated high-severity.",
                  "ea": "Saved search 'UC-22.11.103' running on index=vm sourcetype=tenable:sc:vuln, qualys:host; scan-scope lookup pci_scan_scope.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.8",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.8 is enforced — Splunk UC-22.11.103: PCI-DSS 11.3 — Vulnerability programme: overdue scan cadence and unremediated high-severity.",
                  "ea": "Saved search 'UC-22.11.103' running on index=vm sourcetype=tenable:sc:vuln, qualys:host; scan-scope lookup pci_scan_scope.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "NIST 800-53",
                  "v": "Rev. 5",
                  "cl": "RA-5",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that NIST 800-53 RA-5 (Vulnerability scanning) is enforced — Splunk UC-22.11.103: PCI-DSS 11.3 — Vulnerability programme: overdue scan cadence and unremediated high-severity.",
                  "ea": "Saved search 'UC-22.11.103' running on index=vm sourcetype=tenable:sc:vuln, qualys:host; scan-scope lookup pci_scan_scope.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://doi.org/10.6028/NIST.SP.800-53r5"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "11.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 11.3 (External and internal vulnerabilities identified) is enforced — Splunk UC-22.11.103: PCI-DSS 11.3 — Vulnerability programme: overdue scan cadence and unremediated high-severity.",
                  "ea": "Saved search 'UC-22.11.103' running on index=vm sourcetype=tenable:sc:vuln, qualys:host; scan-scope lookup pci_scan_scope.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.104",
              "n": "PCI-DSS 11.4 — Intrusion detection: IDS signature/health attestation + untuned alert monitoring",
              "c": "high",
              "f": "intermediate",
              "v": "Transforms annual sign-off into 30-minute ground-truth. IR team sees stale signatures before they become an audit issue.",
              "t": "Splunk Add-on for Suricata, Splunk Add-on for Palo Alto Networks (2757)",
              "d": "`index=ids` sourcetype IN (suricata:eve,snort:ids,palo:threat); sensor inventory `pci_ids_sensors.csv`.",
              "q": "| inputlookup pci_ids_sensors.csv\n| join type=outer sensor\n  [search index=ids earliest=-30m\n   | stats latest(_time) AS last_seen latest(sig_version) AS sig_version BY sensor]\n| eval age_d=round((now()-strptime(sig_version,\"%Y-%m-%d\"))/86400,0)\n| eval state=case(isnull(last_seen),\"offline\",last_seen<now()-1800,\"stale\",age_d>7,\"signatures-stale\",true(),\"healthy\")\n| where state!=\"healthy\"\n| table sensor last_seen sig_version age_d state",
              "m": "(1) Maintain pci_ids_sensors.csv; (2) onboard sensor telemetry; (3) schedule UC every 30m; (4) non-healthy state opens a P1/P2; (5) monthly CISO review.",
              "z": "Donut of state, table of non-healthy sensors, single value 'stale sensors'.",
              "kfp": "Sensor maintenance windows legitimately show 'offline' — allowlist with maintenance_calendar.csv.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Suricata, Splunk Add-on for Palo Alto Networks (2757).\n• Ensure the following data sources are available: `index=ids` sourcetype IN (suricata:eve,snort:ids,palo:threat); sensor inventory `pci_ids_sensors.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain pci_ids_sensors.csv; (2) onboard sensor telemetry; (3) schedule UC every 30m; (4) non-healthy state opens a P1/P2; (5) monthly CISO review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup pci_ids_sensors.csv\n| join type=outer sensor\n  [search index=ids earliest=-30m\n   | stats latest(_time) AS last_seen latest(sig_version) AS sig_version BY sensor]\n| eval age_d=round((now()-strptime(sig_version,\"%Y-%m-%d\"))/86400,0)\n| eval state=case(isnull(last_seen),\"offline\",last_seen<now()-1800,\"stale\",age_d>7,\"signatures-stale\",true(),\"healthy\")\n| where state!=\"healthy\"\n| table sensor last_seen sig_version age_d state\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 11.4 — Intrusion detection: IDS signature/health attestation + untuned alert monitoring** — Transforms annual sign-off into 30-minute ground-truth. IR team sees stale signatures before they become an audit issue.\n\nDocumented **Data sources**: `index=ids` sourcetype IN (suricata:eve,snort:ids,palo:threat); sensor inventory `pci_ids_sensors.csv`. **App/TA** (typical add-on context): Splunk Add-on for Suricata, Splunk Add-on for Palo Alto Networks (2757). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **age_d** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **state** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where state!=\"healthy\"` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PCI-DSS 11.4 — Intrusion detection: IDS signature/health attestation + untuned alert monitoring**): table sensor last_seen sig_version age_d state\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Donut of state, table of non-healthy sensors, single value 'stale sensors'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We pCI-DSS 11.4 requires IDS/IPS use. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "PCI DSS",
                "PCI-DSS",
                "SOC-2"
              ],
              "a": [
                "Intrusion_Detection"
              ],
              "e": [
                "paloalto",
                "suricata"
              ],
              "em": [
                "paloalto_pan_firewall"
              ],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.16",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.16 (Monitoring activities) is enforced — Splunk UC-22.11.104: PCI-DSS 11.4 — Intrusion detection: IDS signature/health attestation + untuned alert monitoring.",
                  "ea": "Saved search 'UC-22.11.104' running on index=ids sourcetype IN (suricata:eve,snort:ids,palo:threat); sensor inventory pci_ids_sensors.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "11.4",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 11.4 (Intrusion detection / prevention) is enforced — Splunk UC-22.11.104: PCI-DSS 11.4 — Intrusion detection: IDS signature/health attestation + untuned alert monitoring.",
                  "ea": "Saved search 'UC-22.11.104' running on index=ids sourcetype IN (suricata:eve,snort:ids,palo:threat); sensor inventory pci_ids_sensors.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC7.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC7.1 (System operations monitoring) is enforced — Splunk UC-22.11.104: PCI-DSS 11.4 — Intrusion detection: IDS signature/health attestation + untuned alert monitoring.",
                  "ea": "Saved search 'UC-22.11.104' running on index=ids sourcetype IN (suricata:eve,snort:ids,palo:threat); sensor inventory pci_ids_sensors.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
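              "_evidence_archival_spl_sketch": "Illustrative sketch of the evidence archiving described in the compliance mappings above (assumes an audit_evidence index already exists and the scheduled search's role can write to it): append to the saved search\n| collect index=audit_evidence marker=\"use_case=UC-22.11.104\"\ncollect stores the results as stash events; retention is governed by that index's settings in indexes.conf.",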
              "sapp": [
                {
                  "name": "Splunk App for Palo Alto Networks",
                  "id": 7505,
                  "url": "https://splunkbase.splunk.com/app/7505",
                  "desc": "Dashboards for Palo Alto firewall traffic, threat, and GlobalProtect data",
                  "screenshots": [],
                  "predecessor": [
                    {
                      "name": "Palo Alto Networks App for Splunk",
                      "id": 491,
                      "url": "https://splunkbase.splunk.com/app/491"
                    }
                  ]
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.105",
              "n": "PCI-DSS 12.10 — Incident response: IR readiness — playbook exercise evidence",
              "c": "high",
              "f": "intermediate",
              "v": "Converts the 'we did a tabletop' email into indexed evidence auditors can query directly.",
              "t": "Splunk Enterprise Security",
              "d": "`index=testing` sourcetype=ir:drill; playbook register `ir_playbook_register.csv`.",
              "q": "| inputlookup ir_playbook_register.csv\n| where scope_cde=\"yes\"\n| join type=outer playbook_id\n  [search index=testing sourcetype=ir:drill earliest=-14mon\n   | stats latest(_time) AS last_exercise_at latest(finding_count) AS finding_count latest(exercise_lead) AS exercise_lead BY playbook_id]\n| eval cadence_met=if(last_exercise_at>=now()-365*86400,1,0)\n| table playbook_id scope_cde last_exercise_at finding_count exercise_lead cadence_met",
              "m": "(1) IR exercise tool writes drill events to index=testing; (2) maintain ir_playbook_register.csv; (3) schedule UC weekly; (4) cadence_met=0 schedules a drill; (5) annual Board report.",
              "z": "Table of playbooks with drill history, bar chart of findings per exercise, single value 'playbooks overdue'.",
              "kfp": "Combined multi-playbook drills may link to one of several playbook_ids — maintain a drill_to_playbook mapping.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security.\n• Ensure the following data sources are available: `index=testing` sourcetype=ir:drill; playbook register `ir_playbook_register.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) IR exercise tool writes drill events to index=testing; (2) maintain ir_playbook_register.csv; (3) schedule UC weekly; (4) cadence_met=0 schedules a drill; (5) annual Board report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup ir_playbook_register.csv\n| where scope_cde=\"yes\"\n| join type=outer playbook_id\n  [search index=testing sourcetype=ir:drill earliest=-14mon\n   | stats latest(_time) AS last_exercise_at latest(finding_count) AS finding_count latest(exercise_lead) AS exercise_lead BY playbook_id]\n| eval cadence_met=if(last_exercise_at>=now()-365*86400,1,0)\n| table playbook_id scope_cde last_exercise_at finding_count exercise_lead cadence_met\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 12.10 — Incident response: IR readiness — playbook exercise evidence** — Converts the 'we did a tabletop' email into indexed evidence auditors can query directly.\n\nDocumented **Data sources**: `index=testing` sourcetype=ir:drill; playbook register `ir_playbook_register.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Filters the current rows with `where scope_cde=\"yes\"` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **cadence_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **PCI-DSS 12.10 — Incident response: IR readiness — playbook exercise evidence**): table playbook_id scope_cde last_exercise_at finding_count exercise_lead cadence_met\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of playbooks with drill history, bar chart of findings per exercise, single value 'playbooks overdue'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We pCI-DSS 12.10.2 requires IR plan testing at least annually. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "PCI DSS",
                "PCI-DSS",
                "SOC-2"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that DORA Art.24 (Digital operational-resilience testing) is enforced — Splunk UC-22.11.105: PCI-DSS 12.10 — Incident response: IR readiness — playbook exercise evidence.",
                  "ea": "Saved search 'UC-22.11.105' running on index=testing sourcetype=ir:drill; playbook register ir_playbook_register.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.24",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 A.5.24 (Incident management planning) is enforced — Splunk UC-22.11.105: PCI-DSS 12.10 — Incident response: IR readiness — playbook exercise evidence.",
                  "ea": "Saved search 'UC-22.11.105' running on index=testing sourcetype=ir:drill; playbook register ir_playbook_register.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "12.10",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 12.10 (Security incident response) is enforced — Splunk UC-22.11.105: PCI-DSS 12.10 — Incident response: IR readiness — playbook exercise evidence.",
                  "ea": "Saved search 'UC-22.11.105' running on index=testing sourcetype=ir:drill; playbook register ir_playbook_register.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC7.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC7.4 (Incident response) is enforced — Splunk UC-22.11.105: PCI-DSS 12.10 — Incident response: IR readiness — playbook exercise evidence.",
                  "ea": "Saved search 'UC-22.11.105' running on index=testing sourcetype=ir:drill; playbook register ir_playbook_register.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.11.106",
              "n": "PCI-DSS 12.3 — Targeted risk analysis: frequency adherence for per-requirement TRAs",
              "c": "medium",
              "f": "intermediate",
              "v": "Gives compliance an explicit overdue list — the v4.0 TRA is a common reason for 'in progress' at year-two assessments.",
              "t": "Splunk Enterprise Security",
              "d": "`index=grc` sourcetype=archer:tra_record; TRA register `pci_tra_register.csv`.",
              "q": "| inputlookup pci_tra_register.csv\n| join type=outer tra_id\n  [search index=grc sourcetype=archer:tra_record earliest=-2y\n   | stats latest(_time) AS last_refresh_at BY tra_id]\n| eval overdue_days=round((now()-last_refresh_at)/86400,0)-frequency_days\n| where overdue_days>0 OR isnull(last_refresh_at)\n| table tra_id requirement frequency_days last_refresh_at overdue_days",
              "m": "(1) Maintain pci_tra_register.csv with requirement and frequency_days; (2) GRC writes each refresh event; (3) schedule UC daily; (4) overdue TRAs open a CISO-owned task; (5) quarterly compliance report.",
              "z": "Table of overdue TRAs, bar chart by requirement, single value 'TRAs overdue'.",
              "kfp": "TRAs whose requirement is marked 'not applicable' should be excluded via scope_applicable=false.",
              "refs": "[PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [DORA Regulation (EU) 2022/2554](https://eur-lex.europa.eu/eli/reg/2022/2554/oj)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security.\n• Ensure the following data sources are available: `index=grc` sourcetype=archer:tra_record; TRA register `pci_tra_register.csv`..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain pci_tra_register.csv with requirement and frequency_days; (2) GRC writes each refresh event; (3) schedule UC daily; (4) overdue TRAs open a CISO-owned task; (5) quarterly compliance report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup pci_tra_register.csv\n| join type=outer tra_id\n  [search index=grc sourcetype=archer:tra_record earliest=-2y\n   | stats latest(_time) AS last_refresh_at BY tra_id]\n| eval overdue_days=round((now()-last_refresh_at)/86400,0)-frequency_days\n| where overdue_days>0 OR isnull(last_refresh_at)\n| table tra_id requirement frequency_days last_refresh_at overdue_days\n```\n\nUnderstanding this SPL\n\n**PCI-DSS 12.3 — Targeted risk analysis: frequency adherence for per-requirement TRAs** — Gives compliance an explicit overdue list — the v4.0 TRA is a common reason for 'in progress' at year-two assessments.\n\nDocumented **Data sources**: `index=grc` sourcetype=archer:tra_record; TRA register `pci_tra_register.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **overdue_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where overdue_days>0 OR isnull(last_refresh_at)` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **PCI-DSS 12.3 — Targeted risk analysis: frequency adherence for per-requirement TRAs**): table tra_id requirement frequency_days last_refresh_at overdue_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of overdue TRAs, bar chart by requirement, single value 'TRAs overdue'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We pCI-DSS v4.0 introduced targeted risk analyses (TRAs) at 12.3. PCI DSS is about protecting card and payment data when we run these checks.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "DORA",
                "ISO/IEC 27001",
                "PCI DSS",
                "PCI-DSS"
              ],
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "DORA",
                  "v": "Regulation (EU) 2022/2554",
                  "cl": "Art.6",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that DORA Art.6 (ICT risk-management framework) is enforced — Splunk UC-22.11.106: PCI-DSS 12.3 — Targeted risk analysis: frequency adherence for per-requirement TRAs.",
                  "ea": "Saved search 'UC-22.11.106' running on index=grc sourcetype=archer:tra_record; TRA register pci_tra_register.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://eur-lex.europa.eu/eli/reg/2022/2554/oj"
                },
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "8.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 8.2 (Information-security risk assessment) is enforced — Splunk UC-22.11.106: PCI-DSS 12.3 — Targeted risk analysis: frequency adherence for per-requirement TRAs.",
                  "ea": "Saved search 'UC-22.11.106' running on index=grc sourcetype=archer:tra_record; TRA register pci_tra_register.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "12.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that PCI-DSS 12.3 (Targeted risk analysis) is enforced — Splunk UC-22.11.106: PCI-DSS 12.3 — Targeted risk analysis: frequency adherence for per-requirement TRAs.",
                  "ea": "Saved search 'UC-22.11.106' running on index=grc sourcetype=archer:tra_record; TRA register pci_tra_register.csv., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 16,
            "none": 0
          }
        },
        {
          "i": "22.12",
          "n": "— SOX / ITGC (extended clauses)",
          "u": [
            {
              "i": "22.12.36",
              "n": "SOX-ITGC AccessMgmt.Provisioning — Financial-system user provisioning SLA & workflow adherence",
              "c": "high",
              "f": "intermediate",
              "v": "Closes the #1 SOX deficiency 'access provisioned without an approved ticket'. Internal audit gets a daily report rather than a quarterly spot-check.",
              "t": "Splunk Add-on for Okta, Splunk Add-on for ServiceNow",
              "d": "`index=iam` sourcetype=okta:system_log OR sourcetype=ad:group_change; ticketing `index=ticketing` sourcetype=servicenow:request.",
              "q": "index=iam eventType=group.user_membership.add earliest=-7d\n| lookup financial_systems_groups.csv group OUTPUT in_scope\n| where in_scope=\"yes\"\n| rename actor AS added_by target.id AS user\n| eval added_at=_time\n| join type=outer user group\n  [search index=ticketing sourcetype=servicenow:request request_type=access_request state=approved earliest=-30d\n   | stats latest(ticket_id) AS ticket_id latest(approved_at) AS approved_at BY user target_group\n   | rename target_group AS group]\n| eval sla_met=if(isnotnull(approved_at) AND added_at<=approved_at+24*3600,1,0)\n| where sla_met=0\n| table group user added_at ticket_id approved_at sla_met",
              "m": "(1) Maintain financial_systems_groups.csv; (2) enforce ServiceNow workflow; (3) schedule UC daily; (4) sla_met=0 opens a SOX deficiency record; (5) monthly CFO report.",
              "z": "Table of provisioning without approval, bar chart by group, single value 'unapproved provisions (7d)'.",
              "kfp": "Break-glass adds via approved emergency-access workflow populate approved_at via a separate sourcetype — ensure the join covers both.",
              "refs": "[SOX-ITGC PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [MITRE ATT&CK — T1098](https://attack.mitre.org/techniques/T1098/)",
              "mitre": [
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Okta, Splunk Add-on for ServiceNow.\n• Ensure the following data sources are available: `index=iam` sourcetype=okta:system_log OR sourcetype=ad:group_change; ticketing `index=ticketing` sourcetype=servicenow:request..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain financial_systems_groups.csv; (2) enforce ServiceNow workflow; (3) schedule UC daily; (4) sla_met=0 opens a SOX deficiency record; (5) monthly CFO report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=iam eventType=group.user_membership.add earliest=-7d\n| lookup financial_systems_groups.csv group OUTPUT in_scope\n| where in_scope=\"yes\"\n| rename actor AS added_by target.id AS user\n| eval added_at=_time\n| join type=outer user group\n  [search index=ticketing sourcetype=servicenow:request request_type=access_request state=approved earliest=-30d\n   | stats latest(ticket_id) AS ticket_id latest(approved_at) AS approved_at BY user target_group\n   | rename target_group AS group]\n| eval sla_met=if(isnotnull(approved_at) AND added_at<=approved_at+24*3600,1,0)\n| where sla_met=0\n| table group user added_at ticket_id approved_at sla_met\n```\n\nUnderstanding this SPL\n\n**SOX-ITGC AccessMgmt.Provisioning — Financial-system user provisioning SLA & workflow adherence** — Closes the #1 SOX deficiency 'access provisioned without an approved ticket'. Internal audit gets a daily report rather than a quarterly spot-check.\n\nDocumented **Data sources**: `index=iam` sourcetype=okta:system_log OR sourcetype=ad:group_change; ticketing `index=ticketing` sourcetype=servicenow:request. **App/TA** (typical add-on context): Splunk Add-on for Okta, Splunk Add-on for ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: iam.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=iam, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"yes\"` — typically the threshold or rule expression for this monitoring goal.\n• Renames fields with `rename` for clarity or joins.\n• `eval` defines or adjusts **added_at** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where sla_met=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOX-ITGC AccessMgmt.Provisioning — Financial-system user provisioning SLA & workflow adherence**): table group user added_at ticket_id approved_at sla_met\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of provisioning without approval, bar chart by group, single value 'unapproved provisions (7d)'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We make sure new financial application accounts show up the way our provisioning and control owners expect, using the same tickets and trails auditors review. SOX is about making sure our financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "PCI-DSS",
                "SOC-2",
                "SOX-ITGC"
              ],
              "a": [
                "Change",
                "Authentication"
              ],
              "e": [
                "okta",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.18",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.5.18 (Access rights review) is enforced — Splunk UC-22.12.36: SOX-ITGC AccessMgmt.Provisioning — Financial-system user provisioning SLA & workflow adherence.",
                  "ea": "Saved search 'UC-22.12.36' running on sourcetype okta:system_log and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "8.2.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI-DSS 8.2.4 is enforced — Splunk UC-22.12.36: SOX-ITGC AccessMgmt.Provisioning — Financial-system user provisioning SLA & workflow adherence.",
                  "ea": "Saved search 'UC-22.12.36' running on sourcetype okta:system_log and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC6.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC6.2 is enforced — Splunk UC-22.12.36: SOX-ITGC AccessMgmt.Provisioning — Financial-system user provisioning SLA & workflow adherence.",
                  "ea": "Saved search 'UC-22.12.36' running on sourcetype okta:system_log and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.Provisioning",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX-ITGC ITGC.AccessMgmt.Provisioning (User provisioning) is enforced — Splunk UC-22.12.36: SOX-ITGC AccessMgmt.Provisioning — Financial-system user provisioning SLA & workflow adherence.",
                  "ea": "Saved search 'UC-22.12.36' running on sourcetype okta:system_log and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.37",
              "n": "SOX-ITGC AccessMgmt.Termination — Deprovisioning SLA after HR termination event",
              "c": "critical",
              "f": "intermediate",
              "v": "The most common SOX finding — ex-employees retaining access — becomes measurable hour-by-hour.",
              "t": "Splunk Add-on for Okta, Splunk Add-on for Workday",
              "d": "`index=hr` sourcetype=workday:termination; `index=iam` sourcetype=okta:system_log (user.lifecycle.deactivate + group.user_membership.remove).",
              "q": "index=hr sourcetype=workday:termination earliest=-7d\n| stats latest(_time) AS term_date BY user\n| join user type=outer\n  [search index=iam (eventType=user.lifecycle.deactivate OR eventType=group.user_membership.remove) earliest=-7d\n   | stats max(_time) AS last_access_revoked BY user]\n| eval elapsed_hours=round((last_access_revoked-term_date)/3600,1)\n| eval sla_met=if(elapsed_hours<=24 AND isnotnull(last_access_revoked),1,0)\n| where sla_met=0\n| table user term_date last_access_revoked elapsed_hours sla_met",
              "m": "(1) Route Workday terminations to Splunk; (2) route Okta lifecycle events; (3) schedule UC hourly; (4) sla_met=0 pages the IAM team; (5) monthly CFO report.",
              "z": "Timechart of SLA adherence %, table of breaches, single value 'open SLA breaches'.",
              "kfp": "Contractor terminations with delayed HR events may produce a false breach — backfill term_date from contract_end_date in vendor_register.csv.",
              "refs": "[SOX-ITGC PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [MITRE ATT&CK — T1078](https://attack.mitre.org/techniques/T1078/)",
              "mitre": [
                "T1078"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Okta, Splunk Add-on for Workday.\n• Ensure the following data sources are available: `index=hr` sourcetype=workday:termination; `index=iam` sourcetype=okta:system_log (user.lifecycle.deactivate + group.user_membership.remove)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Route Workday terminations to Splunk; (2) route Okta lifecycle events; (3) schedule UC hourly; (4) sla_met=0 pages the IAM team; (5) monthly CFO report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hr sourcetype=workday:termination earliest=-7d\n| stats latest(_time) AS term_date BY user\n| join user type=outer\n  [search index=iam (eventType=user.lifecycle.deactivate OR eventType=group.user_membership.remove) earliest=-7d\n   | stats max(_time) AS last_access_revoked BY user]\n| eval elapsed_hours=round((last_access_revoked-term_date)/3600,1)\n| eval sla_met=if(elapsed_hours<=24 AND isnotnull(last_access_revoked),1,0)\n| where sla_met=0\n| table user term_date last_access_revoked elapsed_hours sla_met\n```\n\nUnderstanding this SPL\n\n**SOX-ITGC AccessMgmt.Termination — Deprovisioning SLA after HR termination event** — The most common SOX finding — ex-employees retaining access — becomes measurable hour-by-hour.\n\nDocumented **Data sources**: `index=hr` sourcetype=workday:termination; `index=iam` sourcetype=okta:system_log (user.lifecycle.deactivate + group.user_membership.remove). **App/TA** (typical add-on context): Splunk Add-on for Okta, Splunk Add-on for Workday. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hr; **sourcetype**: workday:termination. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hr, sourcetype=workday:termination, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by user** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **elapsed_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where sla_met=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOX-ITGC AccessMgmt.Termination — Deprovisioning SLA after HR termination event**): table user term_date last_access_revoked elapsed_hours sla_met\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Timechart of SLA adherence %, table of breaches, single value 'open SLA breaches'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We make sure new financial application accounts show up the way our provisioning and control owners expect, using the same tickets and trails auditors review. SOX is about making sure our financial systems stay trustworthy for reporting.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "PCI-DSS",
                "SOC-2",
                "SOX-ITGC"
              ],
              "a": [
                "Authentication",
                "Change"
              ],
              "e": [
                "okta"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.18",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.5.18 (Access rights review) is enforced — Splunk UC-22.12.37: SOX-ITGC AccessMgmt.Termination — Deprovisioning SLA after HR termination event.",
                  "ea": "Saved search 'UC-22.12.37' running on sourcetype workday:termination and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "8.2.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI-DSS 8.2.5 is enforced — Splunk UC-22.12.37: SOX-ITGC AccessMgmt.Termination — Deprovisioning SLA after HR termination event.",
                  "ea": "Saved search 'UC-22.12.37' running on sourcetype workday:termination and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC6.3",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC6.3 is enforced — Splunk UC-22.12.37: SOX-ITGC AccessMgmt.Termination — Deprovisioning SLA after HR termination event.",
                  "ea": "Saved search 'UC-22.12.37' running on sourcetype workday:termination and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.AccessMgmt.Termination",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX-ITGC ITGC.AccessMgmt.Termination (Timely deprovisioning) is enforced — Splunk UC-22.12.37: SOX-ITGC AccessMgmt.Termination — Deprovisioning SLA after HR termination event.",
                  "ea": "Saved search 'UC-22.12.37' running on sourcetype workday:termination and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.38",
              "n": "SOX-ITGC ChangeMgmt.Testing — Financial-system change test-evidence completeness",
              "c": "high",
              "f": "intermediate",
              "v": "Converts a common sampling control into full-coverage evidence, reducing audit sampling and cost.",
              "t": "Splunk Add-on for ServiceNow",
              "d": "`index=change` sourcetype=servicenow:change; test evidence sub-table sourcetype=servicenow:change_task (type=UAT).",
              "q": "index=change sourcetype=servicenow:change state=implemented earliest=-7d\n| lookup financial_systems_apps.csv system OUTPUT in_scope\n| where in_scope=\"yes\"\n| stats latest(_time) AS deployed_at latest(change_id) AS change_id BY change_id system\n| join type=outer change_id\n  [search index=change sourcetype=servicenow:change_task task_type=uat state=closed_complete earliest=-60d\n   | stats latest(assigned_to) AS tester count AS uat_tasks BY change_id]\n| eval test_evidence_present=if(uat_tasks>=1,1,0)\n| where test_evidence_present=0\n| table change_id system tester deployed_at test_evidence_present",
              "m": "(1) Maintain financial_systems_apps.csv; (2) configure ServiceNow to require a UAT subtask; (3) schedule UC hourly; (4) test_evidence_present=0 triggers a deficiency record.",
              "z": "Table of missing-test changes, bar chart by system, single value 'missing test evidence'.",
              "kfp": "Automated changes (configuration as code) may use automated test jobs rather than UAT tasks — accept sourcetype=cicd:tests result=pass as equivalent evidence.",
              "refs": "[SOX-ITGC PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow.\n• Ensure the following data sources are available: `index=change` sourcetype=servicenow:change; test evidence sub-table sourcetype=servicenow:change_task (type=UAT)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain financial_systems_apps.csv; (2) configure ServiceNow to require a UAT subtask; (3) schedule UC hourly; (4) test_evidence_present=0 triggers a deficiency record.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=change sourcetype=servicenow:change state=implemented earliest=-7d\n| lookup financial_systems_apps.csv system OUTPUT in_scope\n| where in_scope=\"yes\"\n| stats latest(_time) AS deployed_at latest(change_id) AS change_id BY change_id system\n| join type=outer change_id\n  [search index=change sourcetype=servicenow:change_task task_type=uat state=closed_complete earliest=-60d\n   | stats latest(assigned_to) AS tester count AS uat_tasks BY change_id]\n| eval test_evidence_present=if(uat_tasks>=1,1,0)\n| where test_evidence_present=0\n| table change_id system tester deployed_at test_evidence_present\n```\n\nUnderstanding this SPL\n\n**SOX-ITGC ChangeMgmt.Testing — Financial-system change test-evidence completeness** — Converts a common sampling control into full-coverage evidence, reducing audit sampling and cost.\n\nDocumented **Data sources**: `index=change` sourcetype=servicenow:change; test evidence sub-table sourcetype=servicenow:change_task (type=UAT). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: change; **sourcetype**: servicenow:change. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=change, sourcetype=servicenow:change, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"yes\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by change_id system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **test_evidence_present** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where test_evidence_present=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOX-ITGC ChangeMgmt.Testing — Financial-system change test-evidence completeness**): table change_id system tester deployed_at test_evidence_present\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of missing-test changes, bar chart by system, single value 'missing test evidence'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for sOX-ITGC ChangeMgmt.Testing — Financial-system change test-evidence completeness the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "PCI-DSS",
                "SOC-2",
                "SOX-ITGC"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.32",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that ISO/IEC 27001 A.8.32 is enforced — Splunk UC-22.12.38: SOX-ITGC ChangeMgmt.Testing — Financial-system change test-evidence completeness.",
                  "ea": "Saved search 'UC-22.12.38' running on index=change sourcetype=servicenow:change; test evidence sub-table sourcetype=servicenow:change_task (type=UAT)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "6.4.6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI-DSS 6.4.6 is enforced — Splunk UC-22.12.38: SOX-ITGC ChangeMgmt.Testing — Financial-system change test-evidence completeness.",
                  "ea": "Saved search 'UC-22.12.38' running on index=change sourcetype=servicenow:change; test evidence sub-table sourcetype=servicenow:change_task (type=UAT)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC8.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC8.1 (Change management) is enforced — Splunk UC-22.12.38: SOX-ITGC ChangeMgmt.Testing — Financial-system change test-evidence completeness.",
                  "ea": "Saved search 'UC-22.12.38' running on index=change sourcetype=servicenow:change; test evidence sub-table sourcetype=servicenow:change_task (type=UAT)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Testing",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX-ITGC ITGC.ChangeMgmt.Testing (Change tested) is enforced — Splunk UC-22.12.38: SOX-ITGC ChangeMgmt.Testing — Financial-system change test-evidence completeness.",
                  "ea": "Saved search 'UC-22.12.38' running on index=change sourcetype=servicenow:change; test evidence sub-table sourcetype=servicenow:change_task (type=UAT)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.39",
              "n": "SOX-ITGC ChangeMgmt.Approval — Segregation of duties in financial-system change approval",
              "c": "critical",
              "f": "advanced",
              "v": "Eliminates the two patterns that SOX auditors most frequently cite. The daily report is the control evidence.",
              "t": "Splunk Add-on for ServiceNow",
              "d": "`index=change` sourcetype=servicenow:change (workflow events).",
              "q": "index=change sourcetype=servicenow:change earliest=-7d\n| stats latest(implementer) AS implementer latest(approver) AS approver latest(approval_date) AS approved_at latest(state_date) AS implemented_at BY change_id system\n| lookup financial_systems_apps.csv system OUTPUT in_scope\n| where in_scope=\"yes\"\n| eval sod_violation=if(implementer=approver,1,0)\n| eval approval_before_implementation=if(approved_at<=implemented_at,1,0)\n| where sod_violation=1 OR approval_before_implementation=0\n| table change_id system implementer approver approved_at implemented_at sod_violation approval_before_implementation",
              "m": "(1) Maintain financial_systems_apps.csv; (2) capture ServiceNow workflow events; (3) schedule UC hourly; (4) violations open an immediate deficiency; (5) monthly CFO report.",
              "z": "Table of violations, bar chart by system, single value 'SoD + post-implementation approvals'.",
              "kfp": "Standard-pre-approved changes legitimately have matching implementer/approver — filter on type!='standard'.",
              "refs": "[SOX-ITGC PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001), [PCI-DSS v4.0](https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf), [MITRE ATT&CK — T1098](https://attack.mitre.org/techniques/T1098/)",
              "mitre": [
                "T1098"
              ],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow.\n• Ensure the following data sources are available: `index=change` sourcetype=servicenow:change (workflow events)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain financial_systems_apps.csv; (2) capture ServiceNow workflow events; (3) schedule UC hourly; (4) violations open an immediate deficiency; (5) monthly CFO report.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=change sourcetype=servicenow:change earliest=-7d\n| stats latest(implementer) AS implementer latest(approver) AS approver latest(approval_date) AS approved_at latest(state_date) AS implemented_at BY change_id system\n| lookup financial_systems_apps.csv system OUTPUT in_scope\n| where in_scope=\"yes\"\n| eval sod_violation=if(implementer=approver,1,0)\n| eval approval_before_implementation=if(approved_at<=implemented_at,1,0)\n| where sod_violation=1 OR approval_before_implementation=0\n| table change_id system implementer approver approved_at implemented_at sod_violation approval_before_implementation\n```\n\nUnderstanding this SPL\n\n**SOX-ITGC ChangeMgmt.Approval — Segregation of duties in financial-system change approval** — Eliminates the two patterns that SOX auditors most frequently cite. The daily report is the control evidence.\n\nDocumented **Data sources**: `index=change` sourcetype=servicenow:change (workflow events). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: change; **sourcetype**: servicenow:change. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=change, sourcetype=servicenow:change, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by change_id system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"yes\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **sod_violation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **approval_before_implementation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where sod_violation=1 OR approval_before_implementation=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOX-ITGC ChangeMgmt.Approval — Segregation of duties in financial-system change approval**): table change_id system implementer approver approved_at implemented_at sod_violation approval_before_implementation\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of violations, bar chart by system, single value 'SoD + post-implementation approvals'.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for sOX-ITGC ChangeMgmt.Approval — Segregation of duties in financial-system change approval the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "ISO/IEC 27001",
                "PCI-DSS",
                "SOC-2",
                "SOX",
                "SOX-ITGC"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.5.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 A.5.3 is enforced — Splunk UC-22.12.39: SOX-ITGC ChangeMgmt.Approval — Segregation of duties in financial-system change approval.",
                  "ea": "Saved search 'UC-22.12.39' running on index=change sourcetype=servicenow:change (workflow events)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "PCI-DSS",
                  "v": "v4.0",
                  "cl": "6.4.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PCI-DSS 6.4.4 is enforced — Splunk UC-22.12.39: SOX-ITGC ChangeMgmt.Approval — Segregation of duties in financial-system change approval.",
                  "ea": "Saved search 'UC-22.12.39' running on index=change sourcetype=servicenow:change (workflow events)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://listings.pcisecuritystandards.org/documents/PCI-DSS-v4_0.pdf"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC8.1",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOC-2 CC8.1 (Change management) is enforced — Splunk UC-22.12.39: SOX-ITGC ChangeMgmt.Approval — Segregation of duties in financial-system change approval.",
                  "ea": "Saved search 'UC-22.12.39' running on index=change sourcetype=servicenow:change (workflow events)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.ChangeMgmt.Approval",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX-ITGC ITGC.ChangeMgmt.Approval (Change approved) is enforced — Splunk UC-22.12.39: SOX-ITGC ChangeMgmt.Approval — Segregation of duties in financial-system change approval.",
                  "ea": "Saved search 'UC-22.12.39' running on index=change sourcetype=servicenow:change (workflow events)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.12.40",
              "n": "SOX-ITGC Operations.JobSchedule — Batch-schedule monitoring: financial-job exception visibility",
              "c": "medium",
              "f": "intermediate",
              "v": "Makes the typically-spreadsheet-based batch monitoring queryable — Ops has a live queue, auditors have a history.",
              "t": "Splunk Enterprise Security",
              "d": "`index=batch` sourcetype=autosys:event OR sourcetype=controlm:job; operator ack `index=ticketing` sourcetype=servicenow:incident (category=batch-ack).",
              "q": "index=batch sourcetype IN (autosys:event,controlm:job) earliest=-30m\n| lookup financial_batch_jobs.csv job_id OUTPUT in_scope expected_window\n| where in_scope=\"yes\"\n| stats latest(state) AS state latest(_time) AS run_at BY job_id\n| where state IN (\"FAILED\",\"LATE\",\"WARNING\")\n| join type=outer job_id\n  [search index=ticketing sourcetype=servicenow:incident category=batch-ack earliest=-24h\n   | stats latest(assigned_to) AS ack_operator latest(_time) AS ack_at BY job_id]\n| eval ack_met=if(isnotnull(ack_operator) AND ack_at>=run_at AND ack_at<=run_at+4*3600,1,0)\n| where ack_met=0\n| table job_id run_at state ack_operator ack_at ack_met",
              "m": "(1) Route Autosys/Control-M events; (2) maintain financial_batch_jobs.csv; (3) schedule UC every 30m; (4) unacked exceptions open an incident; (5) monthly CFO + Head of IT Ops review.",
              "z": "Table of unacked exceptions, time chart of exception rate, single value 'open batch exceptions'.",
              "kfp": "Jobs whose expected_window is currently paused (month-end blackout) should be excluded via the lookup's active window.",
              "refs": "[SOX-ITGC PCAOB AS 2201](https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201), [SOC-2 2017 TSC](https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services), [ISO/IEC 27001 2022](https://www.iso.org/standard/27001)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security.\n• Ensure the following data sources are available: `index=batch` sourcetype=autosys:event OR sourcetype=controlm:job; operator ack `index=ticketing` sourcetype=servicenow:incident (category=batch-ack)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Route Autosys/Control-M events; (2) maintain financial_batch_jobs.csv; (3) schedule UC every 30m; (4) unacked exceptions open an incident; (5) monthly CFO + Head of IT Ops review.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=batch sourcetype IN (autosys:event,controlm:job) earliest=-30m\n| lookup financial_batch_jobs.csv job_id OUTPUT in_scope expected_window\n| where in_scope=\"yes\"\n| stats latest(state) AS state latest(_time) AS run_at BY job_id\n| where state IN (\"FAILED\",\"LATE\",\"WARNING\")\n| join type=outer job_id\n  [search index=ticketing sourcetype=servicenow:incident category=batch-ack earliest=-24h\n   | stats latest(assigned_to) AS ack_operator latest(_time) AS ack_at BY job_id]\n| eval ack_met=if(isnotnull(ack_operator) AND ack_at>=run_at AND ack_at<=run_at+4*3600,1,0)\n| where ack_met=0\n| table job_id run_at state ack_operator ack_at ack_met\n```\n\nUnderstanding this SPL\n\n**SOX-ITGC Operations.JobSchedule — Batch-schedule monitoring: financial-job exception visibility** — Makes the typically-spreadsheet-based batch monitoring queryable — Ops has a live queue, auditors have a history.\n\nDocumented **Data sources**: `index=batch` sourcetype=autosys:event OR sourcetype=controlm:job; operator ack `index=ticketing` sourcetype=servicenow:incident (category=batch-ack). **App/TA** (typical add-on context): Splunk Enterprise Security. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: batch.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=batch, time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where in_scope=\"yes\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by job_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where state IN (\"FAILED\",\"LATE\",\"WARNING\")` — typically the threshold or rule expression for this monitoring goal.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **ack_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where ack_met=0` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **SOX-ITGC Operations.JobSchedule — Batch-schedule monitoring: financial-job exception visibility**): table job_id run_at state ack_operator ack_at ack_met\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of unacked exceptions, time chart of exception rate, single value 'open batch exceptions'.",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-16",
              "sver": "",
              "rby": "",
              "ge": "We align the evidence for sOX-ITGC Operations.JobSchedule — Batch-schedule monitoring: financial-job exception visibility the way our SOX and audit work expects, using the data feeds the control owners already have. SOX is about making sure financial systems stay trustworthy for reporting.",
              "mtype": [
                "Compliance",
                "Availability"
              ],
              "regs": [
                "ISO/IEC 27001",
                "SOC-2",
                "SOX-ITGC"
              ],
              "a": [
                "Performance",
                "Alerts"
              ],
              "e": [
                "controlm",
                "servicenow"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "ISO/IEC 27001",
                  "v": "2022",
                  "cl": "A.8.30",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that ISO/IEC 27001 A.8.30 is enforced — Splunk UC-22.12.40: SOX-ITGC Operations.JobSchedule — Batch-schedule monitoring: financial-job exception visibility.",
                  "ea": "Saved search 'UC-22.12.40' running on sourcetype autosys:event and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.iso.org/standard/27001"
                },
                {
                  "r": "SOC-2",
                  "v": "2017 TSC",
                  "cl": "CC7.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SOC-2 CC7.1 (System operations monitoring) is enforced — Splunk UC-22.12.40: SOX-ITGC Operations.JobSchedule — Batch-schedule monitoring: financial-job exception visibility.",
                  "ea": "Saved search 'UC-22.12.40' running on sourcetype autosys:event and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://www.aicpa-cima.com/resources/landing/system-and-organization-controls-soc-suite-of-services"
                },
                {
                  "r": "SOX-ITGC",
                  "v": "PCAOB AS 2201",
                  "cl": "ITGC.Operations.JobSchedule",
                  "m": "satisfies",
                  "a": "full",
                  "co": "Evidence that SOX-ITGC ITGC.Operations.JobSchedule (Batch scheduling and monitoring) is enforced — Splunk UC-22.12.40: SOX-ITGC Operations.JobSchedule — Batch-schedule monitoring: financial-job exception visibility.",
                  "ea": "Saved search 'UC-22.12.40' running on sourcetype autosys:event and supporting catalogue-defined data sources, archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required.",
                  "u": "https://pcaobus.org/oversight/standards/auditing-standards/details/AS2201"
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 5,
            "none": 0
          }
        },
        {
          "i": "22.50",
          "n": "— Tier-2 framework clause coverage",
          "u": [
            {
              "i": "22.50.1",
              "n": "APP 11 personal-information security — continuous evidence of protective controls",
              "c": "high",
              "f": "intermediate",
              "v": "APP 11 requires reasonable steps to protect personal information. Continuous aggregation of protective-control activity provides auditable evidence of those steps.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Authentication events, privileged-access logs, DLP and data-classification events from repositories handling Australian personal information.",
              "q": "index=ident_pii earliest=-24h\n| search (category=\"pii\" OR data_class=\"personal_information\")\n| stats dc(subject_id) AS distinct_subjects count(eval(action=\"access\")) AS reads count(eval(action=\"modify\")) AS writes count(eval(outcome=\"blocked\")) AS blocked_attempts BY system, control\n| eval appears_effective=if(blocked_attempts>0 OR reads>0, \"yes\", \"no\")\n| table system, control, distinct_subjects, reads, writes, blocked_attempts, appears_effective",
              "m": "(1) Tag sources that hold Australian personal information with data_class=personal_information via props/transforms; (2) schedule this search daily; (3) roll up to a 'appears_effective' KPI for the Privacy Officer dashboard.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[Australian Privacy Principles (APP 11)](https://www.oaic.gov.au/privacy/australian-privacy-principles/australian-privacy-principles-quick-reference)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Authentication events, privileged-access logs, DLP and data-classification events from repositories handling Australian personal information..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag sources that hold Australian personal information with data_class=personal_information via props/transforms; (2) schedule this search daily; (3) roll up to a 'appears_effective' KPI for the Privacy Officer dashboard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ident_pii earliest=-24h\n| search (category=\"pii\" OR data_class=\"personal_information\")\n| stats dc(subject_id) AS distinct_subjects count(eval(action=\"access\")) AS reads count(eval(action=\"modify\")) AS writes count(eval(outcome=\"blocked\")) AS blocked_attempts BY system, control\n| eval appears_effective=if(blocked_attempts>0 OR reads>0, \"yes\", \"no\")\n| table system, control, distinct_subjects, reads, writes, blocked_attempts, appears_effective\n```\n\nUnderstanding this SPL\n\n**APP 11 personal-information security — continuous evidence of protective controls** — APP 11 requires reasonable steps to protect personal information. Continuous aggregation of protective-control activity provides auditable evidence of those steps.\n\nDocumented **Data sources**: Authentication events, privileged-access logs, DLP and data-classification events from repositories handling Australian personal information. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ident_pii.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ident_pii, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by system, control** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **appears_effective** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **APP 11 personal-information security — continuous evidence of protective controls**): table system, control, distinct_subjects, reads, writes, blocked_attempts, appears_effective\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "AU Privacy Act"
              ],
              "a": [
                "Authentication",
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "AU Privacy Act",
                  "v": "current",
                  "cl": "APP 11",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that AU Privacy Act APP 11 (Security of personal information) is enforced — Splunk UC-22.50.1: APP 11 personal-information security — continuous evidence of protective controls.",
                  "ea": "Saved search 'UC-22.50.1' running on catalogue-defined data sources (see UC detailedImplementation), archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.2",
              "n": "CJIS §5.13.3 incident response — detection, tracking, and reporting evidence",
              "c": "high",
              "f": "intermediate",
              "v": "§5.13.3 requires agencies to track incidents, notify FBI CJIS ISO within timelines, and retain evidence. Automated cadence tracking keeps the agency audit-ready.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "SIEM incident index, Splunk ES notables, SOAR incident records, CJIS audit logs.",
              "q": "index=notable earliest=-30d\n| search tag=\"incident_response\" (system_tag=\"cji\" OR data_class=\"cji\")\n| stats values(severity) AS severities min(_time) AS first_detected max(_time) AS last_update count AS update_count BY rule_id, incident_id\n| eval mttd_minutes=round((first_detected - strptime(event_time, \"%Y-%m-%dT%H:%M:%SZ\"))/60, 1)\n| where mttd_minutes <= 1440\n| table incident_id, rule_id, severities, first_detected, last_update, update_count, mttd_minutes",
              "m": "(1) Tag CJI-handling systems with system_tag=cji; (2) route notables to a CJIS-only summary index; (3) alert when mttd_minutes > 1440 (24 h) which is the CJIS reporting window.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[CJIS Security Policy v5.9.4](https://le.fbi.gov/cjis-division/cjis-security-policy-resource-center)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: SIEM incident index, Splunk ES notables, SOAR incident records, CJIS audit logs..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag CJI-handling systems with system_tag=cji; (2) route notables to a CJIS-only summary index; (3) alert when mttd_minutes > 1440 (24 h) which is the CJIS reporting window.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=notable earliest=-30d\n| search tag=\"incident_response\" (system_tag=\"cji\" OR data_class=\"cji\")\n| stats values(severity) AS severities min(_time) AS first_detected max(_time) AS last_update count AS update_count BY rule_id, incident_id\n| eval mttd_minutes=round((first_detected - strptime(event_time, \"%Y-%m-%dT%H:%M:%SZ\"))/60, 1)\n| where mttd_minutes <= 1440\n| table incident_id, rule_id, severities, first_detected, last_update, update_count, mttd_minutes\n```\n\nUnderstanding this SPL\n\n**CJIS §5.13.3 incident response — detection, tracking, and reporting evidence** — §5.13.3 requires agencies to track incidents, notify FBI CJIS ISO within timelines, and retain evidence. Automated cadence tracking keeps the agency audit-ready.\n\nDocumented **Data sources**: SIEM incident index, Splunk ES notables, SOAR incident records, CJIS audit logs. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: notable.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=notable, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by rule_id, incident_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **mttd_minutes** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where mttd_minutes <= 1440` — typically the threshold or rule expression for this monitoring goal.\n• Pipeline stage (see **CJIS §5.13.3 incident response — detection, tracking, and reporting evidence**): table incident_id, rule_id, severities, first_detected, last_update, update_count, mttd_minutes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "CJIS"
              ],
              "a": [
                "Alerts"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "CJIS",
                  "v": "v5.9.4",
                  "cl": "5.13.3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that CJIS 5.13.3 (Incident response) is enforced — Splunk UC-22.50.2: CJIS §5.13.3 incident response — detection, tracking, and reporting evidence.",
                  "ea": "Saved search 'UC-22.50.2' running on SIEM incident index, Splunk ES notables, SOAR incident records, CJIS audit logs., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.3",
              "n": "SYSC 3.2 internal-controls evidence — exceptions, approvals and audit trail",
              "c": "high",
              "f": "intermediate",
              "v": "SYSC 3.2 requires firms to maintain adequate internal controls; continuous checks on approval coverage provide the operational evidence.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Change management index (CMDB, ITSM), privileged access logs, approval workflow logs.",
              "q": "index=itsm sourcetype=change_record earliest=-30d\n| eval approved=if(isnotnull(approver) AND approval_status=\"approved\",1,0)\n| stats count AS total_changes sum(approved) AS approved_changes count(eval(approval_status=\"emergency\")) AS emergency_changes BY system_criticality\n| eval approval_rate=round(100*approved_changes/total_changes,1)\n| where system_criticality=\"critical\" AND approval_rate < 100",
              "m": "(1) Onboard ITSM/change approval logs with a 'system_criticality' enrichment; (2) schedule this search weekly; (3) file deviations as a finding in the Senior Manager's responsibility map.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[FCA Handbook — SYSC 3.2](https://www.handbook.fca.org.uk/handbook/SYSC/3/2.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Change management index (CMDB, ITSM), privileged access logs, approval workflow logs..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Onboard ITSM/change approval logs with a 'system_criticality' enrichment; (2) schedule this search weekly; (3) file deviations as a finding in the Senior Manager's responsibility map.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=change_record earliest=-30d\n| eval approved=if(isnotnull(approver) AND approval_status=\"approved\",1,0)\n| stats count AS total_changes sum(approved) AS approved_changes count(eval(approval_status=\"emergency\")) AS emergency_changes BY system_criticality\n| eval approval_rate=round(100*approved_changes/total_changes,1)\n| where system_criticality=\"critical\" AND approval_rate < 100\n```\n\nUnderstanding this SPL\n\n**SYSC 3.2 internal-controls evidence — exceptions, approvals and audit trail** — SYSC 3.2 requires firms to maintain adequate internal controls; continuous checks on approval coverage provide the operational evidence.\n\nDocumented **Data sources**: Change management index (CMDB, ITSM), privileged access logs, approval workflow logs. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: change_record. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=change_record, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **approved** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by system_criticality** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **approval_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where system_criticality=\"critical\" AND approval_rate < 100` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Governance"
              ],
              "pillar": "security",
              "regs": [
                "FCA SM&CR"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SM&CR",
                  "v": "current",
                  "cl": "SYSC 3.2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SM&CR SYSC 3.2 (Internal controls, systems and audit arrangements) is enforced — Splunk UC-22.50.3: SYSC 3.2 internal-controls evidence — exceptions, approvals and audit trail.",
                  "ea": "Saved search 'UC-22.50.3' running on Change management index (CMDB, ITSM), privileged access logs, approval workflow logs., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.4",
              "n": "§164.504(e) Business Associate activity — PHI access by BA principals",
              "c": "high",
              "f": "intermediate",
              "v": "HIPAA §164.504(e) requires BAAs; continuous verification that no BA accesses PHI outside a valid contract is primary evidence.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Access logs from PHI systems enriched with principal classification (workforce vs BA).",
              "q": "index=phi_access earliest=-7d\n| lookup principals.csv principal_id OUTPUT principal_type, ba_contract_id, ba_expiry\n| where principal_type=\"business_associate\"\n| eval contract_valid=if(strptime(ba_expiry, \"%Y-%m-%d\") > now(), \"yes\", \"no\")\n| stats count AS accesses values(action) AS actions min(_time) AS first_seen max(_time) AS last_seen BY principal_id, ba_contract_id, contract_valid\n| where contract_valid=\"no\" OR isnull(ba_contract_id)",
              "m": "(1) Maintain principals.csv mapping principal_id→ba_contract_id/ba_expiry; (2) schedule daily; (3) route findings to the HIPAA Privacy Officer queue.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[45 CFR §164.504(e)](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E/section-164.504)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Access logs from PHI systems enriched with principal classification (workforce vs BA)..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain principals.csv mapping principal_id→ba_contract_id/ba_expiry; (2) schedule daily; (3) route findings to the HIPAA Privacy Officer queue.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=phi_access earliest=-7d\n| lookup principals.csv principal_id OUTPUT principal_type, ba_contract_id, ba_expiry\n| where principal_type=\"business_associate\"\n| eval contract_valid=if(strptime(ba_expiry, \"%Y-%m-%d\") > now(), \"yes\", \"no\")\n| stats count AS accesses values(action) AS actions min(_time) AS first_seen max(_time) AS last_seen BY principal_id, ba_contract_id, contract_valid\n| where contract_valid=\"no\" OR isnull(ba_contract_id)\n```\n\nUnderstanding this SPL\n\n**§164.504(e) Business Associate activity — PHI access by BA principals** — HIPAA §164.504(e) requires BAAs; continuous verification that no BA accesses PHI outside a valid contract is primary evidence.\n\nDocumented **Data sources**: Access logs from PHI systems enriched with principal classification (workforce vs BA). **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: phi_access.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=phi_access, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Filters the current rows with `where principal_type=\"business_associate\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **contract_valid** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by principal_id, ba_contract_id, contract_valid** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where contract_valid=\"no\" OR isnull(ba_contract_id)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA Privacy"
              ],
              "a": [
                "Authentication"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Privacy",
                  "v": "current",
                  "cl": "§164.504(e)",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Privacy §164.504(e) (Business Associate contracts) is enforced — Splunk UC-22.50.4: §164.504(e) Business Associate activity — PHI access by BA principals.",
                  "ea": "Saved search 'UC-22.50.4' running on Access logs from PHI systems enriched with principal classification (workforce vs BA)., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.5",
              "n": "MAS TRM §11.1.1 system resilience — RTO/RPO burn-rate evidence",
              "c": "high",
              "f": "intermediate",
              "v": "MAS TRM §11.1.1 requires FI boards to be satisfied with resilience outcomes; automated RTO/RPO reconciliation is auditor-friendly evidence.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "ITSI KPI service health, infrastructure uptime, DR test records.",
              "q": "| inputlookup rto_rpo_targets.csv\n| eval target_rto_min=coalesce(target_rto_min, 60), target_rpo_min=coalesce(target_rpo_min, 15)\n| map search=\"| rest /services/data/models/$service_id$/summary | eval measured_rto_min=measured_rto_min, measured_rpo_min=measured_rpo_min | eval service_id=\\\"$service_id$\\\"\"\n| eval rto_breach=if(measured_rto_min > target_rto_min, \"yes\", \"no\"), rpo_breach=if(measured_rpo_min > target_rpo_min, \"yes\", \"no\")\n| table service_id, target_rto_min, measured_rto_min, rto_breach, target_rpo_min, measured_rpo_min, rpo_breach",
              "m": "(1) Maintain rto_rpo_targets.csv per service_id; (2) ITSI pushes measured values into service summary; (3) schedule weekly report to the board.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[MAS Technology Risk Management Guidelines (2021)](https://www.mas.gov.sg/regulation/guidelines/technology-risk-management-guidelines)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: ITSI KPI service health, infrastructure uptime, DR test records..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Maintain rto_rpo_targets.csv per service_id; (2) ITSI pushes measured values into service summary; (3) schedule weekly report to the board.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup rto_rpo_targets.csv\n| eval target_rto_min=coalesce(target_rto_min, 60), target_rpo_min=coalesce(target_rpo_min, 15)\n| map search=\"| rest /services/data/models/$service_id$/summary | eval measured_rto_min=measured_rto_min, measured_rpo_min=measured_rpo_min | eval service_id=\\\"$service_id$\\\"\"\n| eval rto_breach=if(measured_rto_min > target_rto_min, \"yes\", \"no\"), rpo_breach=if(measured_rpo_min > target_rpo_min, \"yes\", \"no\")\n| table service_id, target_rto_min, measured_rto_min, rto_breach, target_rpo_min, measured_rpo_min, rpo_breach\n```\n\nUnderstanding this SPL\n\n**MAS TRM §11.1.1 system resilience — RTO/RPO burn-rate evidence** — MAS TRM §11.1.1 requires FI boards to be satisfied with resilience outcomes; automated RTO/RPO reconciliation is auditor-friendly evidence.\n\nDocumented **Data sources**: ITSI KPI service health, infrastructure uptime, DR test records. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **target_rto_min** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Runs a templated search per row with `map`.\n• `eval` defines or adjusts **rto_breach** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **MAS TRM §11.1.1 system resilience — RTO/RPO burn-rate evidence**): table service_id, target_rto_min, measured_rto_min, rto_breach, target_rpo_min, measured_rpo_min, rpo_breach\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Resilience",
                "Compliance"
              ],
              "regs": [
                "MAS TRM"
              ],
              "a": [
                "Performance"
              ],
              "e": [
                "itsi"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "MAS TRM",
                  "v": "2021",
                  "cl": "§11.1.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that MAS TRM §11.1.1 (System resilience) is enforced — Splunk UC-22.50.5: MAS TRM §11.1.1 system resilience — RTO/RPO burn-rate evidence.",
                  "ea": "Saved search 'UC-22.50.5' running on ITSI KPI service health, infrastructure uptime, DR test records., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.6",
              "n": "NERC CIP-008-6 R1 incident response plan — evidence of activation and review",
              "c": "high",
              "f": "intermediate",
              "v": "CIP-008-6 R1 requires an IR plan and evidence of activation / annual review. Automated tracking catches drift before a CIP enforcement action.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "SOC ticketing, CIP incident reports, annual plan-review logs.",
              "q": "index=soc_tickets sourcetype=cip:incident earliest=-365d\n| eval plan_version=coalesce(plan_version,\"unknown\")\n| stats count AS incidents max(eval(event_type=\"plan_review\")) AS last_review earliest(_time) AS first_activation latest(_time) AS last_activation BY bes_function, plan_version\n| eval days_since_review=round((now()-last_review)/86400,0)\n| table bes_function, plan_version, incidents, first_activation, last_activation, days_since_review\n| where days_since_review > 365 OR isnull(last_review)",
              "m": "(1) Index plan-review events with event_type=plan_review; (2) schedule the search quarterly; (3) send findings to the NERC CIP evidence pack.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[NERC CIP-008-6 Cyber Security — Incident Reporting and Response Planning](https://www.nerc.com/pa/Stand/Reliability%20Standards/CIP-008-6.pdf)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: SOC ticketing, CIP incident reports, annual plan-review logs..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Index plan-review events with event_type=plan_review; (2) schedule the search quarterly; (3) send findings to the NERC CIP evidence pack.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=soc_tickets sourcetype=cip:incident earliest=-365d\n| eval plan_version=coalesce(plan_version,\"unknown\")\n| stats count AS incidents max(eval(event_type=\"plan_review\")) AS last_review earliest(_time) AS first_activation latest(_time) AS last_activation BY bes_function, plan_version\n| eval days_since_review=round((now()-last_review)/86400,0)\n| table bes_function, plan_version, incidents, first_activation, last_activation, days_since_review\n| where days_since_review > 365 OR isnull(last_review)\n```\n\nUnderstanding this SPL\n\n**NERC CIP-008-6 R1 incident response plan — evidence of activation and review** — CIP-008-6 R1 requires an IR plan and evidence of activation / annual review. Automated tracking catches drift before a CIP enforcement action.\n\nDocumented **Data sources**: SOC ticketing, CIP incident reports, annual plan-review logs. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: soc_tickets; **sourcetype**: cip:incident. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=soc_tickets, sourcetype=cip:incident, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **plan_version** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by bes_function, plan_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since_review** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **NERC CIP-008-6 R1 incident response plan — evidence of activation and review**): table bes_function, plan_version, incidents, first_activation, last_activation, days_since_review\n• Filters the current rows with `where days_since_review > 365 OR isnull(last_review)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NERC CIP"
              ],
              "a": [
                "Alerts"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NERC CIP",
                  "v": "current",
                  "cl": "CIP-008-6 R1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NERC CIP CIP-008-6 R1 (Incident response) is enforced — Splunk UC-22.50.6: NERC CIP-008-6 R1 incident response plan — evidence of activation and review.",
                  "ea": "Saved search 'UC-22.50.6' running on SOC ticketing, CIP incident reports, annual plan-review logs., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.7",
              "n": "Petroleumsforskriften §11 — emergency preparedness drill evidence",
              "c": "high",
              "f": "intermediate",
              "v": "§11 requires demonstrable preparedness for emergency situations; drill cadence and activation records are auditor-facing evidence.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Emergency drill logs, Edge Hub OT alerts, PSA notifications index.",
              "q": "index=psa_emergency earliest=-180d\n| search (event_type=\"drill\" OR event_type=\"activation\" OR event_type=\"psa_notification\")\n| stats count(eval(event_type=\"drill\")) AS drill_count count(eval(event_type=\"activation\")) AS activation_count count(eval(event_type=\"psa_notification\")) AS psa_notifications BY installation_id, facility_type\n| eval drill_status=if(drill_count>=4,\"compliant\",\"gap\")\n| table installation_id, facility_type, drill_count, activation_count, psa_notifications, drill_status",
              "m": "(1) Ingest drill logs from the offshore operational systems; (2) schedule quarterly; (3) feed results into the PSA (Petroleumstilsynet) evidence pack.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[Petroleumsforskriften (1997)](https://lovdata.no/dokument/SF/forskrift/1997-06-27-653)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Emergency drill logs, Edge Hub OT alerts, PSA notifications index..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest drill logs from the offshore operational systems; (2) schedule quarterly; (3) feed results into the PSA (Petroleumstilsynet) evidence pack.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=psa_emergency earliest=-180d\n| search (event_type=\"drill\" OR event_type=\"activation\" OR event_type=\"psa_notification\")\n| stats count(eval(event_type=\"drill\")) AS drill_count count(eval(event_type=\"activation\")) AS activation_count count(eval(event_type=\"psa_notification\")) AS psa_notifications BY installation_id, facility_type\n| eval drill_status=if(drill_count>=4,\"compliant\",\"gap\")\n| table installation_id, facility_type, drill_count, activation_count, psa_notifications, drill_status\n```\n\nUnderstanding this SPL\n\n**Petroleumsforskriften §11 — emergency preparedness drill evidence** — §11 requires demonstrable preparedness for emergency situations; drill cadence and activation records are auditor-facing evidence.\n\nDocumented **Data sources**: Emergency drill logs, Edge Hub OT alerts, PSA notifications index. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: psa_emergency.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=psa_emergency, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by installation_id, facility_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **drill_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Petroleumsforskriften §11 — emergency preparedness drill evidence**): table installation_id, facility_type, drill_count, activation_count, psa_notifications, drill_status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Resilience",
                "Safety"
              ],
              "regs": [
                "NO Petroleumsforskriften"
              ],
              "a": [
                "Alerts"
              ],
              "e": [
                "edge_hub"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "NO Petroleumsforskriften",
                  "v": "1997 as amended",
                  "cl": "§11",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Petroleumsforskriften §11 (Emergency preparedness and response) is enforced — Splunk UC-22.50.7: Petroleumsforskriften §11 — emergency preparedness drill evidence.",
                  "ea": "Saved search 'UC-22.50.7' running on Emergency drill logs, Edge Hub OT alerts, PSA notifications index., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.8",
              "n": "Sikkerhetsloven §6-1 — preventive control effectiveness across classified systems",
              "c": "high",
              "f": "intermediate",
              "v": "§6-1 requires systematic preventive measures; effectiveness KPIs make the §6-1 duty continuously measurable.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Endpoint security logs, access control logs, encryption/crypto inventory for classified systems.",
              "q": "index=security_classified earliest=-30d\n| eval control_state=coalesce(control_state, \"unknown\")\n| stats count AS events count(eval(control_state=\"active\")) AS active count(eval(control_state=\"failed\")) AS failed BY system_id, classification_level, control_type\n| eval effectiveness=round(100*active/events, 1)\n| where classification_level IN (\"HEMMELIG\", \"KONFIDENSIELT\") AND effectiveness < 99",
              "m": "(1) Classify systems with classification_level at onboarding; (2) ingest preventive-control telemetry; (3) schedule daily; (4) report to NSM evidence pack.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[Sikkerhetsloven (Security Act 2018)](https://lovdata.no/dokument/NL/lov/2018-06-01-24)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Endpoint security logs, access control logs, encryption/crypto inventory for classified systems..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Classify systems with classification_level at onboarding; (2) ingest preventive-control telemetry; (3) schedule daily; (4) report to NSM evidence pack.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=security_classified earliest=-30d\n| eval control_state=coalesce(control_state, \"unknown\")\n| stats count AS events count(eval(control_state=\"active\")) AS active count(eval(control_state=\"failed\")) AS failed BY system_id, classification_level, control_type\n| eval effectiveness=round(100*active/events, 1)\n| where classification_level IN (\"HEMMELIG\", \"KONFIDENSIELT\") AND effectiveness < 99\n```\n\nUnderstanding this SPL\n\n**Sikkerhetsloven §6-1 — preventive control effectiveness across classified systems** — §6-1 requires systematic preventive measures; effectiveness KPIs make the §6-1 duty continuously measurable.\n\nDocumented **Data sources**: Endpoint security logs, access control logs, encryption/crypto inventory for classified systems. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: security_classified.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=security_classified, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **control_state** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by system_id, classification_level, control_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **effectiveness** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where classification_level IN (\"HEMMELIG\", \"KONFIDENSIELT\") AND effectiveness < 99` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NO Sikkerhetsloven"
              ],
              "a": [
                "Change",
                "Authentication"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NO Sikkerhetsloven",
                  "v": "2018",
                  "cl": "§6-1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Sikkerhetsloven §6-1 (General preventive security measures) is enforced — Splunk UC-22.50.8: Sikkerhetsloven §6-1 — preventive control effectiveness across classified systems.",
                  "ea": "Saved search 'UC-22.50.8' running on Endpoint security logs, access control logs, encryption/crypto inventory for classified systems., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.9",
              "n": "NZISM §16.1.32 — user authentication strength & MFA coverage",
              "c": "high",
              "f": "intermediate",
              "v": "§16.1.32 requires agencies to deploy authentication commensurate with classification; continuous MFA KPIs evidence the control.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Authentication CIM-tagged logs; SSO / IdP logs for government systems.",
              "q": "| tstats summariesonly=t count FROM datamodel=Authentication.Authentication WHERE Authentication.app!=\"\" BY Authentication.app, Authentication.authentication_method\n| rename \"Authentication.*\" AS \"*\"\n| eval is_mfa=if(authentication_method IN (\"mfa\",\"fido2\",\"webauthn\",\"push\",\"totp\",\"smart_card\"),1,0)\n| eventstats sum(count) AS app_total sum(eval(count*is_mfa)) AS app_mfa BY app\n| eval mfa_coverage_pct=round(100*app_mfa/app_total,1)\n| stats first(mfa_coverage_pct) AS mfa_coverage_pct values(authentication_method) AS methods BY app\n| where mfa_coverage_pct < 100",
              "m": "(1) Ensure IdP events are CIM-tagged; (2) schedule this tstats hourly; (3) feed non-100% apps to an 'MFA coverage' glass table.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[NZISM v3.7](https://www.nzism.gcsb.govt.nz/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Authentication CIM-tagged logs; SSO / IdP logs for government systems..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure IdP events are CIM-tagged; (2) schedule this tstats hourly; (3) feed non-100% apps to an 'MFA coverage' glass table.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| tstats summariesonly=t count FROM datamodel=Authentication.Authentication WHERE Authentication.app!=\"\" BY Authentication.app, Authentication.authentication_method\n| rename \"Authentication.*\" AS \"*\"\n| eval is_mfa=if(authentication_method IN (\"mfa\",\"fido2\",\"webauthn\",\"push\",\"totp\",\"smart_card\"),1,0)\n| eventstats sum(count) AS app_total sum(eval(count*is_mfa)) AS app_mfa BY app\n| eval mfa_coverage_pct=round(100*app_mfa/app_total,1)\n| stats first(mfa_coverage_pct) AS mfa_coverage_pct values(authentication_method) AS methods BY app\n| where mfa_coverage_pct < 100\n```\n\nUnderstanding this SPL\n\n**NZISM §16.1.32 — user authentication strength & MFA coverage** — §16.1.32 requires agencies to deploy authentication commensurate with classification; continuous MFA KPIs evidence the control.\n\nDocumented **Data sources**: Authentication CIM-tagged logs; SSO / IdP logs for government systems. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Uses `tstats` against accelerated summaries for data model `Authentication.Authentication` — enable acceleration for that model.\n• Renames fields with `rename` for clarity or joins.\n• `eval` defines or adjusts **is_mfa** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **mfa_coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by app** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Filters the current rows with `where mfa_coverage_pct < 100` — typically the threshold or rule expression for this monitoring goal.\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Security",
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "NZISM"
              ],
              "a": [
                "Authentication"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NZISM",
                  "v": "3.7",
                  "cl": "§16.1.32",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NZISM §16.1.32 (User identification, authentication and access management) is enforced — Splunk UC-22.50.9: NZISM §16.1.32 — user authentication strength & MFA coverage.",
                  "ea": "Saved search 'UC-22.50.9' running on Authentication CIM-tagged logs; SSO / IdP logs for government systems., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.10",
              "n": "PIPL Art.51 — information-security measures across PRC personal-data systems",
              "c": "high",
              "f": "intermediate",
              "v": "PIPL Art.51 mandates information-security management; the UC gives data processors Chinese-authority-facing evidence of Art.51 measures.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Endpoint logs, crypto inventory, access control, vulnerability scanner feeds for PRC-resident systems.",
              "q": "index=prc_systems earliest=-7d\n| eval control_family=coalesce(control_family,\"unknown\")\n| stats count AS events count(eval(control_state=\"failed\")) AS failures BY system_id, control_family\n| eval failure_rate=round(100*failures/events,2)\n| where failure_rate > 0 AND control_family IN (\"encryption\",\"access_control\",\"vulnerability_mgmt\",\"logging\")",
              "m": "(1) Tag PRC-resident systems with a 'jurisdiction' field; (2) ingest control telemetry; (3) schedule this search nightly.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[Personal Information Protection Law of the PRC (PIPL)](http://www.npc.gov.cn/npc/c30834/202108/a8c4e3672c74491a80b53a172bb753fe.shtml)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Endpoint logs, crypto inventory, access control, vulnerability scanner feeds for PRC-resident systems..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag PRC-resident systems with a 'jurisdiction' field; (2) ingest control telemetry; (3) schedule this search nightly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=prc_systems earliest=-7d\n| eval control_family=coalesce(control_family,\"unknown\")\n| stats count AS events count(eval(control_state=\"failed\")) AS failures BY system_id, control_family\n| eval failure_rate=round(100*failures/events,2)\n| where failure_rate > 0 AND control_family IN (\"encryption\",\"access_control\",\"vulnerability_mgmt\",\"logging\")\n```\n\nUnderstanding this SPL\n\n**PIPL Art.51 — information-security measures across PRC personal-data systems** — PIPL Art.51 mandates information-security management; the UC gives data processors Chinese-authority-facing evidence of Art.51 measures.\n\nDocumented **Data sources**: Endpoint logs, crypto inventory, access control, vulnerability scanner feeds for PRC-resident systems. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: prc_systems.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=prc_systems, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **control_family** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by system_id, control_family** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **failure_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where failure_rate > 0 AND control_family IN (\"encryption\",\"access_control\",\"vulnerability_mgmt\",\"logging\")` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "PIPL"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "PIPL",
                  "v": "2021",
                  "cl": "Art.51",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that PIPL Art.51 (Information security measures) is enforced — Splunk UC-22.50.10: PIPL Art.51 — information-security measures across PRC personal-data systems.",
                  "ea": "Saved search 'UC-22.50.10' running on Endpoint logs, crypto inventory, access control, vulnerability scanner feeds for PRC-resident systems., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.11",
              "n": "QCB §4.1 — cyber-risk register evidence with treatment progress",
              "c": "high",
              "f": "intermediate",
              "v": "§4.1 requires FIs to identify and manage cyber risk; overdue-risk evidence is hard to fake and is directly auditor-facing.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "GRC risk register, vulnerability management, threat-intelligence feeds.",
              "q": "index=grc sourcetype=risk_register earliest=-180d\n| eval target_ttc_days=case(severity==\"critical\",30,severity==\"high\",60,severity==\"medium\",90,true(),120)\n| eval actual_ttc_days=round((coalesce(closed_at, now())-strptime(opened_at,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,0)\n| eval breaching=if(status!=\"closed\" AND actual_ttc_days>target_ttc_days, \"yes\", \"no\")\n| table risk_id, severity, opened_at, closed_at, actual_ttc_days, target_ttc_days, breaching, treatment_plan\n| where breaching=\"yes\"",
              "m": "(1) Onboard the GRC risk register; (2) Ensure severity/opened_at/closed_at fields are consistent; (3) schedule weekly.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[Qatar Central Bank Cybersecurity Framework (2018)](https://www.qcb.gov.qa/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: GRC risk register, vulnerability management, threat-intelligence feeds..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Onboard the GRC risk register; (2) Ensure severity/opened_at/closed_at fields are consistent; (3) schedule weekly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=grc sourcetype=risk_register earliest=-180d\n| eval target_ttc_days=case(severity==\"critical\",30,severity==\"high\",60,severity==\"medium\",90,true(),120)\n| eval actual_ttc_days=round((coalesce(closed_at, now())-strptime(opened_at,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,0)\n| eval breaching=if(status!=\"closed\" AND actual_ttc_days>target_ttc_days, \"yes\", \"no\")\n| table risk_id, severity, opened_at, closed_at, actual_ttc_days, target_ttc_days, breaching, treatment_plan\n| where breaching=\"yes\"\n```\n\nUnderstanding this SPL\n\n**QCB §4.1 — cyber-risk register evidence with treatment progress** — §4.1 requires FIs to identify and manage cyber risk; overdue-risk evidence is hard to fake and is directly auditor-facing.\n\nDocumented **Data sources**: GRC risk register, vulnerability management, threat-intelligence feeds. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: grc; **sourcetype**: risk_register. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=grc, sourcetype=risk_register, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **target_ttc_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **actual_ttc_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breaching** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **QCB §4.1 — cyber-risk register evidence with treatment progress**): table risk_id, severity, opened_at, closed_at, actual_ttc_days, target_ttc_days, breaching, treatment_plan\n• Filters the current rows with `where breaching=\"yes\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Risk"
              ],
              "pillar": "security",
              "regs": [
                "QCB Cyber"
              ],
              "a": [
                "Alerts"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "QCB Cyber",
                  "v": "2018",
                  "cl": "§4.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that QCB Cyber §4.1 (Cyber risk identification and management) is enforced — Splunk UC-22.50.11: QCB §4.1 — cyber-risk register evidence with treatment progress.",
                  "ea": "Saved search 'UC-22.50.11' running on GRC risk register, vulnerability management, threat-intelligence feeds., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.12",
              "n": "SA PDPL Art. 6 — processing-purpose and lawful-basis evidence",
              "c": "high",
              "f": "intermediate",
              "v": "SA PDPL Art. 6 requires a lawful basis for each processing activity; the UC produces direct evidence of alignment or gaps.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Consent management platform logs, downstream processing system logs tagged with purpose.",
              "q": "index=consent sourcetype=cmp:events earliest=-24h\n| eval lawful_basis=coalesce(lawful_basis,\"unknown\")\n| stats count AS consented BY subject_id, purpose_id, lawful_basis\n| join type=left purpose_id\n    [ search index=app_processing earliest=-24h sourcetype=processing:events | stats count AS processed BY subject_id, purpose_id ]\n| eval unlawful=if(isnotnull(processed) AND consented=0, \"yes\", \"no\")\n| where unlawful=\"yes\" OR lawful_basis=\"unknown\"",
              "m": "(1) Emit events from the CMP with subject_id + purpose_id + lawful_basis; (2) downstream systems must log processing events with the same purpose_id; (3) schedule daily.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[Saudi Personal Data Protection Law](https://sdaia.gov.sa/en/SDAIA/about/Files/PersonalDataEnglish.pdf)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Consent management platform logs, downstream processing system logs tagged with purpose..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Emit events from the CMP with subject_id + purpose_id + lawful_basis; (2) downstream systems must log processing events with the same purpose_id; (3) schedule daily.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=consent sourcetype=cmp:events earliest=-24h\n| eval lawful_basis=coalesce(lawful_basis,\"unknown\")\n| stats count AS consented BY subject_id, purpose_id, lawful_basis\n| join type=left purpose_id\n    [ search index=app_processing earliest=-24h sourcetype=processing:events | stats count AS processed BY subject_id, purpose_id ]\n| eval unlawful=if(isnotnull(processed) AND consented=0, \"yes\", \"no\")\n| where unlawful=\"yes\" OR lawful_basis=\"unknown\"\n```\n\nUnderstanding this SPL\n\n**SA PDPL Art. 6 — processing-purpose and lawful-basis evidence** — SA PDPL Art. 6 requires a lawful basis for each processing activity; the UC produces direct evidence of alignment or gaps.\n\nDocumented **Data sources**: Consent management platform logs, downstream processing system logs tagged with purpose. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: consent; **sourcetype**: cmp:events. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=consent, sourcetype=cmp:events, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **lawful_basis** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by subject_id, purpose_id, lawful_basis** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **unlawful** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where unlawful=\"yes\" OR lawful_basis=\"unknown\"` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SA PDPL"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SA PDPL",
                  "v": "current",
                  "cl": "Art. 6",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SA PDPL Art. 6 (Lawful grounds and consent for processing) is enforced — Splunk UC-22.50.12: SA PDPL Art. 6 — processing-purpose and lawful-basis evidence.",
                  "ea": "Saved search 'UC-22.50.12' running on Consent management platform logs, downstream processing system logs tagged with purpose., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.13",
              "n": "SWIFT CSCF 6.1 — malware protection across the SWIFT secure zone",
              "c": "high",
              "f": "intermediate",
              "v": "CSCF 6.1 is a mandatory control; continuous coverage checks produce annual KYC-SA attestation evidence on demand.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Endpoint detection and response (EDR), antivirus logs from the SWIFT secure zone, CSP attestation evidence.",
              "q": "index=swift_zone earliest=-24h\n| search (sourcetype=\"av\" OR sourcetype=\"edr\")\n| stats count AS events count(eval(signature_coverage_hours <= 24)) AS current_coverage count(eval(action=\"block\")) AS blocks BY host, edr_vendor\n| eval up_to_date=if(current_coverage=events, \"yes\", \"no\")\n| where up_to_date=\"no\" OR blocks > 0",
              "m": "(1) Identify the SWIFT secure zone with a 'zone=swift' asset field; (2) route EDR logs through a dedicated source; (3) schedule daily.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[SWIFT Customer Security Control Framework (CSCF) v2025](https://www.swift.com/myswift/customer-security-programme-csp/security-controls)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Endpoint detection and response (EDR), antivirus logs from the SWIFT secure zone, CSP attestation evidence..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Identify the SWIFT secure zone with a 'zone=swift' asset field; (2) route EDR logs through a dedicated source; (3) schedule daily.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=swift_zone earliest=-24h\n| search (sourcetype=\"av\" OR sourcetype=\"edr\")\n| stats count AS events count(eval(signature_coverage_hours <= 24)) AS current_coverage count(eval(action=\"block\")) AS blocks BY host, edr_vendor\n| eval up_to_date=if(current_coverage=events, \"yes\", \"no\")\n| where up_to_date=\"no\" OR blocks > 0\n```\n\nUnderstanding this SPL\n\n**SWIFT CSCF 6.1 — malware protection across the SWIFT secure zone** — CSCF 6.1 is a mandatory control; continuous coverage checks produce annual KYC-SA attestation evidence on demand.\n\nDocumented **Data sources**: Endpoint detection and response (EDR), antivirus logs from the SWIFT secure zone, CSP attestation evidence. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: swift_zone.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=swift_zone, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by host, edr_vendor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **up_to_date** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where up_to_date=\"no\" OR blocks > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "SWIFT CSP"
              ],
              "a": [
                "Change",
                "Authentication"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SWIFT CSP",
                  "v": "CSCF v2025",
                  "cl": "6.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SWIFT CSP 6.1 (Malware protection) is enforced — Splunk UC-22.50.13: SWIFT CSCF 6.1 — malware protection across the SWIFT secure zone.",
                  "ea": "Saved search 'UC-22.50.13' running on Endpoint detection and response (EDR), antivirus logs from the SWIFT secure zone, CSP attestation evidence., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.14",
              "n": "Swiss nFADP Art.7 — privacy-by-design checkpoints in the SDLC",
              "c": "high",
              "f": "intermediate",
              "v": "nFADP Art.7 explicitly requires privacy-by-design; the UC ties the statutory duty to measurable SDLC KPIs.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "SDLC / GitOps pipeline logs, code-review audit events, DPIA tracker.",
              "q": "index=gitops earliest=-90d\n| search event=\"merge\" AND target_branch=\"main\"\n| eval dpia_completed=if(isnotnull(dpia_id) AND dpia_status=\"signed\", 1, 0), pbd_review=if(reviewer_role=\"dpo\" OR reviewer_role=\"privacy_engineer\",1,0)\n| stats sum(dpia_completed) AS dpias sum(pbd_review) AS pbd_reviews count AS merges BY repo, team\n| eval dpia_rate=round(100*dpias/merges,1), pbd_rate=round(100*pbd_reviews/merges,1)\n| where pbd_rate < 100 OR dpia_rate < 100",
              "m": "(1) Ensure GitOps events carry reviewer_role, dpia_id, dpia_status; (2) schedule weekly; (3) publish per-team glass table.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[Swiss Federal Act on Data Protection (nFADP)](https://www.fedlex.admin.ch/eli/cc/1993/1945_1945_1945/en)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: SDLC / GitOps pipeline logs, code-review audit events, DPIA tracker..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure GitOps events carry reviewer_role, dpia_id, dpia_status; (2) schedule weekly; (3) publish per-team glass table.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=gitops earliest=-90d\n| search event=\"merge\" AND target_branch=\"main\"\n| eval dpia_completed=if(isnotnull(dpia_id) AND dpia_status=\"signed\", 1, 0), pbd_review=if(reviewer_role=\"dpo\" OR reviewer_role=\"privacy_engineer\",1,0)\n| stats sum(dpia_completed) AS dpias sum(pbd_review) AS pbd_reviews count AS merges BY repo, team\n| eval dpia_rate=round(100*dpias/merges,1), pbd_rate=round(100*pbd_reviews/merges,1)\n| where pbd_rate < 100 OR dpia_rate < 100\n```\n\nUnderstanding this SPL\n\n**Swiss nFADP Art.7 — privacy-by-design checkpoints in the SDLC** — nFADP Art.7 explicitly requires privacy-by-design; the UC ties the statutory duty to measurable SDLC KPIs.\n\nDocumented **Data sources**: SDLC / GitOps pipeline logs, code-review audit events, DPIA tracker. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: gitops.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=gitops, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **dpia_completed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by repo, team** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **dpia_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where pbd_rate < 100 OR dpia_rate < 100` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Governance"
              ],
              "regs": [
                "Swiss nFADP"
              ],
              "a": [
                "Change"
              ],
              "e": [
                "checkpoint"
              ],
              "em": [],
              "cmp": [
                {
                  "r": "Swiss nFADP",
                  "v": "2020 revision",
                  "cl": "Art.7",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that Swiss nFADP Art.7 (Privacy by design) is enforced — Splunk UC-22.50.14: Swiss nFADP Art.7 — privacy-by-design checkpoints in the SDLC.",
                  "ea": "Saved search 'UC-22.50.14' running on SDLC / GitOps pipeline logs, code-review audit events, DPIA tracker., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.15",
              "n": "SYSC 4.1 organisational requirements — role-population and responsibilities map",
              "c": "medium",
              "f": "intermediate",
              "v": "SYSC 4.1 requires clear apportionment of responsibilities; drift detection keeps the Responsibilities Map credible with the FCA.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "HR system, SMCR responsibilities map, IAM role assignments.",
              "q": "| inputlookup smcr_responsibilities_map.csv\n| join type=left role_id [ search index=iam earliest=-1d sourcetype=iam:grant | stats latest(principal_id) AS principal_id BY role_id ]\n| eval assignment_status=if(isnull(principal_id), \"vacant\", \"assigned\")\n| where assignment_status=\"vacant\" OR (last_review_date!=\"\" AND strptime(last_review_date, \"%Y-%m-%d\") < relative_time(now(), \"-180d\"))",
              "m": "(1) Publish smcr_responsibilities_map.csv with role_id, responsibility, last_review_date; (2) join IAM data; (3) schedule monthly.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[FCA Handbook — SYSC 4.1](https://www.handbook.fca.org.uk/handbook/SYSC/4/1.html)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: HR system, SMCR responsibilities map, IAM role assignments..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Publish smcr_responsibilities_map.csv with role_id, responsibility, last_review_date; (2) join IAM data; (3) schedule monthly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup smcr_responsibilities_map.csv\n| join type=left role_id [ search index=iam earliest=-1d sourcetype=iam:grant | stats latest(principal_id) AS principal_id BY role_id ]\n| eval assignment_status=if(isnull(principal_id), \"vacant\", \"assigned\")\n| where assignment_status=\"vacant\" OR (last_review_date!=\"\" AND strptime(last_review_date, \"%Y-%m-%d\") < relative_time(now(), \"-180d\"))\n```\n\nUnderstanding this SPL\n\n**SYSC 4.1 organisational requirements — role-population and responsibilities map** — SYSC 4.1 requires clear apportionment of responsibilities; drift detection keeps the Responsibilities Map credible with the FCA.\n\nDocumented **Data sources**: HR system, SMCR responsibilities map, IAM role assignments. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **assignment_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where assignment_status=\"vacant\" OR (last_review_date!=\"\" AND strptime(last_review_date, \"%Y-%m-%d\") < relative_time(now(),…` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Governance"
              ],
              "regs": [
                "FCA SM&CR"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "FCA SM&CR",
                  "v": "current",
                  "cl": "SYSC 4.1",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that FCA SM&CR SYSC 4.1 (General organisational requirements) is enforced — Splunk UC-22.50.15: SYSC 4.1 organisational requirements — role-population and responsibilities map.",
                  "ea": "Saved search 'UC-22.50.15' running on HR system, SMCR responsibilities map, IAM role assignments., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.16",
              "n": "§164.528 accounting-of-disclosures — retention and responsiveness",
              "c": "medium",
              "f": "intermediate",
              "v": "§164.528 requires the ability to produce an accounting of disclosures within 60 days; continuous metrics prevent nasty surprises during a HIPAA audit.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "PHI disclosure ledger, patient-request queue logs.",
              "q": "index=phi_disclosures earliest=-365d\n| eval disclosure_age_days=round((now()-strptime(disclosure_date,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,0)\n| stats count AS total_disclosures count(eval(request_type=\"patient_request\")) AS patient_requests count(eval(response_time_days <= 60)) AS on_time BY covered_entity\n| eval response_rate=round(100*on_time/patient_requests,1)\n| where (total_disclosures=0) OR (patient_requests>0 AND response_rate < 100)",
              "m": "(1) Persist disclosures in a dedicated ledger index; (2) emit patient-request events with response_time_days; (3) schedule monthly.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[45 CFR §164.528](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E/section-164.528)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: PHI disclosure ledger, patient-request queue logs..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Persist disclosures in a dedicated ledger index; (2) emit patient-request events with response_time_days; (3) schedule monthly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=phi_disclosures earliest=-365d\n| eval disclosure_age_days=round((now()-strptime(disclosure_date,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,0)\n| stats count AS total_disclosures count(eval(request_type=\"patient_request\")) AS patient_requests count(eval(response_time_days <= 60)) AS on_time BY covered_entity\n| eval response_rate=round(100*on_time/patient_requests,1)\n| where (total_disclosures=0) OR (patient_requests>0 AND response_rate < 100)\n```\n\nUnderstanding this SPL\n\n**§164.528 accounting-of-disclosures — retention and responsiveness** — §164.528 requires the ability to produce an accounting of disclosures within 60 days; continuous metrics prevent nasty surprises during a HIPAA audit.\n\nDocumented **Data sources**: PHI disclosure ledger, patient-request queue logs. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: phi_disclosures.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=phi_disclosures, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **disclosure_age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by covered_entity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **response_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where (total_disclosures=0) OR (patient_requests>0 AND response_rate < 100)` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-22",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "HIPAA Privacy"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "HIPAA Privacy",
                  "v": "current",
                  "cl": "§164.528",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that HIPAA Privacy §164.528 (Accounting of disclosures) is enforced — Splunk UC-22.50.16: §164.528 accounting-of-disclosures — retention and responsiveness.",
                  "ea": "Saved search 'UC-22.50.16' running on PHI disclosure ledger, patient-request queue logs., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.17",
              "n": "NESA T3.5 cryptographic controls — key-age & HSM inventory evidence",
              "c": "medium",
              "f": "intermediate",
              "v": "T3.5 requires effective cryptographic key-management; key-age metrics expose drift from the control.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "KMS/HSM inventory, certificate-authority logs, crypto-library runtime telemetry.",
              "q": "index=kms earliest=-1d\n| stats min(created_at) AS oldest_key max(last_rotated) AS last_rotation count AS total_keys BY key_store, purpose\n| eval age_days=round((now()-strptime(oldest_key,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,0), rotation_age_days=round((now()-last_rotation)/86400,0)\n| where age_days > 1095 OR rotation_age_days > 730",
              "m": "(1) Onboard KMS/HSM inventory with created_at/last_rotated; (2) schedule weekly; (3) report to the NESA IAS evidence pack.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[NESA UAE IAS v2 (2020)](https://www.nesa.gov.ae/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: KMS/HSM inventory, certificate-authority logs, crypto-library runtime telemetry..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Onboard KMS/HSM inventory with created_at/last_rotated; (2) schedule weekly; (3) report to the NESA IAS evidence pack.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=kms earliest=-1d\n| stats min(created_at) AS oldest_key max(last_rotated) AS last_rotation count AS total_keys BY key_store, purpose\n| eval age_days=round((now()-strptime(oldest_key,\"%Y-%m-%dT%H:%M:%SZ\"))/86400,0), rotation_age_days=round((now()-last_rotation)/86400,0)\n| where age_days > 1095 OR rotation_age_days > 730\n```\n\nUnderstanding this SPL\n\n**NESA T3.5 cryptographic controls — key-age & HSM inventory evidence** — T3.5 requires effective cryptographic key-management; key-age metrics expose drift from the control.\n\nDocumented **Data sources**: KMS/HSM inventory, certificate-authority logs, crypto-library runtime telemetry. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: kms.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=kms, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by key_store, purpose** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where age_days > 1095 OR rotation_age_days > 730` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Security"
              ],
              "pillar": "security",
              "regs": [
                "NESA IAS"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NESA IAS",
                  "v": "v2 (2020)",
                  "cl": "T3.5",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NESA IAS T3.5 (Cryptographic controls and key management) is enforced — Splunk UC-22.50.17: NESA T3.5 cryptographic controls — key-age & HSM inventory evidence.",
                  "ea": "Saved search 'UC-22.50.17' running on KMS/HSM inventory, certificate-authority logs, crypto-library runtime telemetry., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.18",
              "n": "Personopplysningsloven §14 — automated-decision inventory and human-review evidence",
              "c": "medium",
              "f": "intermediate",
              "v": "§14 (mirroring GDPR Art.22) requires safeguards for automated individual decisions; a low review rate triggers an immediate DPO review.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "ML/AI model inventory, production decision logs, DPO review records.",
              "q": "index=ai_models earliest=-30d\n| search scope=\"automated_decision\" AND subject_impact=\"significant\"\n| stats count AS decisions count(eval(human_review=\"yes\")) AS reviewed earliest(_time) AS first_seen BY model_id, purpose\n| eval review_rate=round(100*reviewed/decisions,1)\n| where review_rate < 5",
              "m": "(1) Tag models with scope/subject_impact at registration; (2) emit decision logs with human_review flag; (3) schedule weekly.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[Personopplysningsloven (2018)](https://lovdata.no/dokument/NL/lov/2018-06-15-38)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: ML/AI model inventory, production decision logs, DPO review records..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag models with scope/subject_impact at registration; (2) emit decision logs with human_review flag; (3) schedule weekly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=ai_models earliest=-30d\n| search scope=\"automated_decision\" AND subject_impact=\"significant\"\n| stats count AS decisions count(eval(human_review=\"yes\")) AS reviewed earliest(_time) AS first_seen BY model_id, purpose\n| eval review_rate=round(100*reviewed/decisions,1)\n| where review_rate < 5\n```\n\nUnderstanding this SPL\n\n**Personopplysningsloven §14 — automated-decision inventory and human-review evidence** — §14 (mirroring GDPR Art.22) requires safeguards for automated individual decisions; a low review rate triggers an immediate DPO review.\n\nDocumented **Data sources**: ML/AI model inventory, production decision logs, DPO review records. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: ai_models.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=ai_models, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by model_id, purpose** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **review_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where review_rate < 5` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Risk"
              ],
              "pillar": "security",
              "regs": [
                "NO Personopplysningsloven"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NO Personopplysningsloven",
                  "v": "2018",
                  "cl": "§14",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Personopplysningsloven §14 (Automated individual decision-making restrictions) is enforced — Splunk UC-22.50.18: Personopplysningsloven §14 — automated-decision inventory and human-review evidence.",
                  "ea": "Saved search 'UC-22.50.18' running on ML/AI model inventory, production decision logs, DPO review records., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.19",
              "n": "Personopplysningsloven §2 — territorial/material scope tagging of data flows",
              "c": "medium",
              "f": "intermediate",
              "v": "§2 defines when Norwegian data-protection law applies; missing scope tagging is a silent governance failure and the UC catches it.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Data catalogue, cross-border transfer logs, processing-activity register.",
              "q": "| inputlookup data_flows.csv\n| eval scope_tagged=if(isnotnull(scope_no_pol) AND scope_no_pol!=\"\", 1, 0)\n| stats count AS flows sum(scope_tagged) AS tagged BY dataset, controller\n| eval coverage_pct=round(100*tagged/flows,1)\n| where coverage_pct < 100",
              "m": "(1) Require scope_no_pol on every new dataset; (2) run weekly; (3) backfill via DPO workshops.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[Personopplysningsloven (2018)](https://lovdata.no/dokument/NL/lov/2018-06-15-38)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Data catalogue, cross-border transfer logs, processing-activity register..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require scope_no_pol on every new dataset; (2) run weekly; (3) backfill via DPO workshops.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup data_flows.csv\n| eval scope_tagged=if(isnotnull(scope_no_pol) AND scope_no_pol!=\"\", 1, 0)\n| stats count AS flows sum(scope_tagged) AS tagged BY dataset, controller\n| eval coverage_pct=round(100*tagged/flows,1)\n| where coverage_pct < 100\n```\n\nUnderstanding this SPL\n\n**Personopplysningsloven §2 — territorial/material scope tagging of data flows** — §2 defines when Norwegian data-protection law applies; missing scope tagging is a silent governance failure and the UC catches it.\n\nDocumented **Data sources**: Data catalogue, cross-border transfer logs, processing-activity register. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **scope_tagged** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by dataset, controller** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **coverage_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where coverage_pct < 100` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Governance"
              ],
              "regs": [
                "NO Personopplysningsloven"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NO Personopplysningsloven",
                  "v": "2018",
                  "cl": "§2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Personopplysningsloven §2 (Territorial and material scope) is enforced — Splunk UC-22.50.19: Personopplysningsloven §2 — territorial/material scope tagging of data flows.",
                  "ea": "Saved search 'UC-22.50.19' running on Data catalogue, cross-border transfer logs, processing-activity register., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.20",
              "n": "Petroleumsforskriften §3 — operator safety/security obligation register",
              "c": "medium",
              "f": "intermediate",
              "v": "§3 establishes the operator's general duty of care; continuous monitoring of recurring obligations is good stewardship evidence.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "HSE management system, operator-activity logs, PSA inspection records.",
              "q": "index=hse sourcetype=operator:activity earliest=-180d\n| eval obligation=coalesce(obligation,\"unknown\")\n| stats count AS events count(eval(outcome=\"complete\")) AS complete count(eval(outcome=\"deferred\")) AS deferred BY installation_id, obligation\n| eval completion_pct=round(100*complete/events,1)\n| where completion_pct < 95",
              "m": "(1) Canonicalise obligation labels; (2) emit completion events from the HSE system; (3) schedule monthly.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[Petroleumsforskriften (1997)](https://lovdata.no/dokument/SF/forskrift/1997-06-27-653)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: HSE management system, operator-activity logs, PSA inspection records..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Canonicalise obligation labels; (2) emit completion events from the HSE system; (3) schedule monthly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=hse sourcetype=operator:activity earliest=-180d\n| eval obligation=coalesce(obligation,\"unknown\")\n| stats count AS events count(eval(outcome=\"complete\")) AS complete count(eval(outcome=\"deferred\")) AS deferred BY installation_id, obligation\n| eval completion_pct=round(100*complete/events,1)\n| where completion_pct < 95\n```\n\nUnderstanding this SPL\n\n**Petroleumsforskriften §3 — operator safety/security obligation register** — §3 establishes the operator's general duty of care; continuous monitoring of recurring obligations is good stewardship evidence.\n\nDocumented **Data sources**: HSE management system, operator-activity logs, PSA inspection records. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: hse; **sourcetype**: operator:activity. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=hse, sourcetype=operator:activity, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **obligation** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by installation_id, obligation** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **completion_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where completion_pct < 95` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Safety"
              ],
              "regs": [
                "NO Petroleumsforskriften"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NO Petroleumsforskriften",
                  "v": "1997 as amended",
                  "cl": "§3",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Petroleumsforskriften §3 (General operator obligations for safety and security) is enforced — Splunk UC-22.50.20: Petroleumsforskriften §3 — operator safety/security obligation register.",
                  "ea": "Saved search 'UC-22.50.20' running on HSE management system, operator-activity logs, PSA inspection records., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.21",
              "n": "Sikkerhetsloven §5-2 — annual internal security review activity",
              "c": "medium",
              "f": "intermediate",
              "v": "§5-2 mandates an annual internal security review; a simple overdue-check prevents long silent gaps.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Internal-audit workflow, CISO dashboard inputs, NSM reporting queue.",
              "q": "index=audit_workflow earliest=-400d\n| search scope=\"internal_security_review\"\n| stats max(_time) AS last_run count(eval(outcome=\"approved\")) AS approved count AS runs BY entity_id, entity_type\n| eval days_since=round((now()-last_run)/86400,0)\n| where days_since > 365 OR approved < runs",
              "m": "(1) Log every internal security review with scope, outcome, entity_id; (2) schedule weekly; (3) feed NSM reporting dashboard.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[Sikkerhetsloven (Security Act 2018)](https://lovdata.no/dokument/NL/lov/2018-06-01-24)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Internal-audit workflow, CISO dashboard inputs, NSM reporting queue..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Log every internal security review with scope, outcome, entity_id; (2) schedule weekly; (3) feed NSM reporting dashboard.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=audit_workflow earliest=-400d\n| search scope=\"internal_security_review\"\n| stats max(_time) AS last_run count(eval(outcome=\"approved\")) AS approved count AS runs BY entity_id, entity_type\n| eval days_since=round((now()-last_run)/86400,0)\n| where days_since > 365 OR approved < runs\n```\n\nUnderstanding this SPL\n\n**Sikkerhetsloven §5-2 — annual internal security review activity** — §5-2 mandates an annual internal security review; a simple overdue-check prevents long silent gaps.\n\nDocumented **Data sources**: Internal-audit workflow, CISO dashboard inputs, NSM reporting queue. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: audit_workflow.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=audit_workflow, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `stats` rolls up events into metrics; results are split **by entity_id, entity_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_since > 365 OR approved < runs` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Audit"
              ],
              "pillar": "security",
              "regs": [
                "NO Sikkerhetsloven"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NO Sikkerhetsloven",
                  "v": "2018",
                  "cl": "§5-2",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NO Sikkerhetsloven §5-2 (Internal control and annual security review) is enforced — Splunk UC-22.50.21: Sikkerhetsloven §5-2 — annual internal security review activity.",
                  "ea": "Saved search 'UC-22.50.21' running on Internal-audit workflow, CISO dashboard inputs, NSM reporting queue., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.22",
              "n": "NZISM §12.4 — policy documentation freshness and approval state",
              "c": "medium",
              "f": "intermediate",
              "v": "§12.4 requires policies to be kept current; stale-policy metrics are direct GCSB-facing evidence.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Policy repository, document control system, CISO approval records.",
              "q": "| inputlookup policy_repository.csv\n| eval age_days=round((now()-strptime(last_reviewed, \"%Y-%m-%d\"))/86400, 0)\n| eval stale=if(age_days > 365, \"yes\", \"no\")\n| stats count AS docs sum(eval(stale=\"yes\")) AS stale_count values(owner) AS owners BY policy_domain\n| eval stale_pct=round(100*stale_count/docs,1)\n| where stale_count > 0",
              "m": "(1) Publish policy_repository.csv with last_reviewed dates; (2) schedule monthly; (3) alert CISO on any stale_count > 0.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[NZISM v3.7](https://www.nzism.gcsb.govt.nz/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Policy repository, document control system, CISO approval records..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Publish policy_repository.csv with last_reviewed dates; (2) schedule monthly; (3) alert CISO on any stale_count > 0.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup policy_repository.csv\n| eval age_days=round((now()-strptime(last_reviewed, \"%Y-%m-%d\"))/86400, 0)\n| eval stale=if(age_days > 365, \"yes\", \"no\")\n| stats count AS docs sum(eval(stale=\"yes\")) AS stale_count values(owner) AS owners BY policy_domain\n| eval stale_pct=round(100*stale_count/docs,1)\n| where stale_count > 0\n```\n\nUnderstanding this SPL\n\n**NZISM §12.4 — policy documentation freshness and approval state** — §12.4 requires policies to be kept current; stale-policy metrics are direct GCSB-facing evidence.\n\nDocumented **Data sources**: Policy repository, document control system, CISO approval records. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **stale** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by policy_domain** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **stale_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where stale_count > 0` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance",
                "Governance"
              ],
              "regs": [
                "NZISM"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "NZISM",
                  "v": "3.7",
                  "cl": "§12.4",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that NZISM §12.4 (Information security documentation and policy) is enforced — Splunk UC-22.50.22: NZISM §12.4 — policy documentation freshness and approval state.",
                  "ea": "Saved search 'UC-22.50.22' running on Policy repository, document control system, CISO approval records., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "pillar": "observability",
              "_qs": 30,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "22.50.23",
              "n": "SA PDPL Art. 29 — cross-border transfer inventory and legal-basis evidence",
              "c": "medium",
              "f": "intermediate",
              "v": "Art. 29 requires a documented basis for every outbound transfer; continuous inventory is superior to point-in-time attestations.",
              "t": "Splunk Enterprise / Splunk Cloud Platform",
              "d": "Data transfer ledger, cloud-region event logs, SDAIA notification records.",
              "q": "index=data_transfer earliest=-30d\n| search src_jurisdiction=\"SA\"\n| eval basis_valid=if(legal_basis IN (\"adequacy\",\"scc\",\"consent\",\"exemption\") AND isnotnull(legal_basis_ref),1,0)\n| stats count AS transfers sum(basis_valid) AS with_basis BY dst_jurisdiction, data_class\n| eval compliance_pct=round(100*with_basis/transfers,1)\n| where compliance_pct < 100",
              "m": "(1) Emit every outbound transfer event with src_jurisdiction, dst_jurisdiction, legal_basis, legal_basis_ref; (2) schedule daily.",
              "z": "Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "kfp": "Tuning prerequisites: source field mappings must be in place before the UC will produce reliable metrics; early runs will commonly flag unmapped systems until onboarding is completed.",
              "refs": "[Saudi Personal Data Protection Law](https://sdaia.gov.sa/en/SDAIA/about/Files/PersonalDataEnglish.pdf)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise / Splunk Cloud Platform.\n• Ensure the following data sources are available: Data transfer ledger, cloud-region event logs, SDAIA notification records..\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Emit every outbound transfer event with src_jurisdiction, dst_jurisdiction, legal_basis, legal_basis_ref; (2) schedule daily.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=data_transfer earliest=-30d\n| search src_jurisdiction=\"SA\"\n| eval basis_valid=if(legal_basis IN (\"adequacy\",\"scc\",\"consent\",\"exemption\") AND isnotnull(legal_basis_ref),1,0)\n| stats count AS transfers sum(basis_valid) AS with_basis BY dst_jurisdiction, data_class\n| eval compliance_pct=round(100*with_basis/transfers,1)\n| where compliance_pct < 100\n```\n\nUnderstanding this SPL\n\n**SA PDPL Art. 29 — cross-border transfer inventory and legal-basis evidence** — Art. 29 requires a documented basis for every outbound transfer; continuous inventory is superior to point-in-time attestations.\n\nDocumented **Data sources**: Data transfer ledger, cloud-region event logs, SDAIA notification records. **App/TA** (typical add-on context): Splunk Enterprise / Splunk Cloud Platform. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: data_transfer.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=data_transfer, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Applies an explicit `search` filter to narrow the current result set.\n• `eval` defines or adjusts **basis_valid** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by dst_jurisdiction, data_class** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **compliance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where compliance_pct < 100` — typically the threshold or rule expression for this monitoring goal.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table of flagged records for investigation; single-value KPI of coverage / compliance percentage; time chart of trend over the last 90 days; drill-down per responsible owner.",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-18",
              "sver": "",
              "rby": "",
              "ge": "We help you show how a second-line framework line maps to what you actually monitor, so your gap story stays clear.",
              "mtype": [
                "Compliance"
              ],
              "pillar": "security",
              "regs": [
                "SA PDPL"
              ],
              "a": [
                "Change"
              ],
              "e": [],
              "em": [],
              "cmp": [
                {
                  "r": "SA PDPL",
                  "v": "current",
                  "cl": "Art. 29",
                  "m": "satisfies",
                  "a": "partial",
                  "co": "Evidence that SA PDPL Art. 29 (Cross-border personal data transfers) is enforced — Splunk UC-22.50.23: SA PDPL Art. 29 — cross-border transfer inventory and legal-basis evidence.",
                  "ea": "Saved search 'UC-22.50.23' running on Data transfer ledger, cloud-region event logs, SDAIA notification records., archived to the restricted audit_evidence index (default 7-year retention). Auto-drafted — SME review required."
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 34.1,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 23,
            "none": 0
          }
        }
      ],
      "i": 22,
      "n": "Regulatory and Compliance Frameworks",
      "src": "cat-22-regulatory-compliance.md"
    },
    {
      "s": [
        {
          "i": "23.1",
          "n": "Customer Experience & Digital Analytics",
          "u": [
            {
              "i": "23.1.1",
              "n": "Website Conversion Funnel Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Shows where customers drop off in the purchase or signup journey — landing page, product view, cart, checkout, confirmation. Helps marketing and product teams identify which step loses the most revenue so they can prioritise UX improvements with the biggest business impact.",
              "t": "Splunk Add-on for Apache Web Server (Splunkbase 3186)",
              "d": "`index=web` `sourcetype=\"access_combined\"` (clientip, uri, status, referer, useragent, bytes, _time)",
              "q": "index=web sourcetype=\"access_combined\" status=200 earliest=-7d\n| eval session=clientip.\"_\".useragent\n| eval stage=case(\n    match(uri,\"^/(home|landing|index)\"), \"1_Landing\",\n    match(uri,\"^/products?/\"), \"2_Product_View\",\n    match(uri,\"^/cart\"), \"3_Cart\",\n    match(uri,\"^/checkout\"), \"4_Checkout\",\n    match(uri,\"^/(confirm|thank|order-complete)\"), \"5_Confirmation\",\n    1=1, \"0_Other\")\n| where stage!=\"0_Other\"\n| stats dc(session) as unique_sessions by stage\n| sort stage\n| streamstats current=t window=1 last(unique_sessions) as prev_sessions\n| eval drop_off_pct=if(isnotnull(prev_sessions) AND stage!=\"1_Landing\", round(100*(prev_sessions-unique_sessions)/prev_sessions,1), 0)\n| table stage, unique_sessions, drop_off_pct",
              "m": "(1) Map your site's URL patterns to funnel stages in the `case()` statement; (2) for SPAs, use custom event logging via HEC with page/view identifiers; (3) schedule daily and weekly for trend comparison; (4) add revenue estimates per stage using average order value lookup; (5) segment by traffic source (organic, paid, direct) using referer field.",
              "z": "Funnel chart or stacked bar, Single value (overall conversion rate), Table (drop-off per stage), Line chart (daily conversion trend).",
              "kfp": "Affiliate and paid-media flight changes can spike early-stage volume without a matching lift at purchase, so the relative drop pattern shifts even when nothing broke on the site. A WAF or bot-management challenge that still returns `status=200` with an interstitial page inflates landing counts while human shoppers never see your catalogue. A/B tests that move inventory between `/c/` and `/p/` URL schemes redistribute counts across the middle stages without a real conversion change. B2B organisations that gate PDF downloads behind the same `uri` as the pricing page make the Product stage look busier than human consideration would imply. A regional promotion that routes EU shoppers through a consent wall may suppress rows at ingest, producing a kink in the line chart that is policy-driven rather than creative failure. A catalog migration that renames every PDP slug in one night invalidates the `case()` list until the search author updates the patterns. Flash crowds from television or influencer drops that exceed session affinity limits can fragment the `visit_key` proxy and depress the apparent funnel width without dropping actual demand. Partner pixels that refire the landing tag when an iframe becomes visible can duplicate early-stage reach without duplicate humans. A sudden TLS or HTTP/2 tuning change on the load balancer alters the mix of `304` vs `200` you choose to count; revalidate the `status=200` guard against your caching policy. Internal pen-test scripts that rotate through the storefront with a browser-like `useragent` can bypass naive bot rules unless you exclude their source IPs. Seasonal job postings that point candidates to a consumer home page can spike Landing without a commerce intent, which product marketing should label as out-of-band noise.",
              "refs": "[Splunk Add-on for Apache Web Server](https://splunkbase.splunk.com/app/3186)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Apache Web Server (Splunkbase 3186).\n• Ensure the following data sources are available: `index=web` `sourcetype=\"access_combined\"` (clientip, uri, status, referer, useragent, bytes, _time).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Map your site's URL patterns to funnel stages in the `case()` statement; (2) for SPAs, use custom event logging via HEC with page/view identifiers; (3) schedule daily and weekly for trend comparison; (4) add revenue estimates per stage using average order value lookup; (5) segment by traffic source (organic, paid, direct) using referer field.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\" status=200 earliest=-7d\n| eval session=clientip.\"_\".useragent\n| eval stage=case(\n    match(uri,\"^/(home|landing|index)\"), \"1_Landing\",\n    match(uri,\"^/products?/\"), \"2_Product_View\",\n    match(uri,\"^/cart\"), \"3_Cart\",\n    match(uri,\"^/checkout\"), \"4_Checkout\",\n    match(uri,\"^/(confirm|thank|order-complete)\"), \"5_Confirmation\",\n    1=1, \"0_Other\")\n| where stage!=\"0_Other\"\n| stats dc(session) as unique_sessions by stage\n| sort stage\n| streamstats current=t window=1 last(unique_sessions) as prev_sessions\n| eval drop_off_pct=if(isnotnull(prev_sessions) AND stage!=\"1_Landing\", round(100*(prev_sessions-unique_sessions)/prev_sessions,1), 0)\n| table stage, unique_sessions, drop_off_pct\n```\n\nUnderstanding this SPL\n\n**Website Conversion Funnel Analysis** — Shows where customers drop off in the purchase or signup journey — landing page, product view, cart, checkout, confirmation. Helps marketing and product teams identify which step loses the most revenue so they can prioritise UX improvements with the biggest business impact.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` (clientip, uri, status, referer, useragent, bytes, _time). **App/TA** (typical add-on context): Splunk Add-on for Apache Web Server (Splunkbase 3186). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **session** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **stage** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where stage!=\"0_Other\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by stage** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **drop_off_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Website Conversion Funnel Analysis**): table stage, unique_sessions, drop_off_pct\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Funnel chart or stacked bar, Single value (overall conversion rate), Table (drop-off per stage), Line chart (daily conversion trend).",
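\n\nOptional: SPA event logging sketch (hedged)\nStep 1 suggests custom event logging via HEC for single-page applications, where route changes never reach the Apache access log. A minimal illustration follows; the host name, token placeholder, `app:pageview` sourcetype, and field names (`event_type`, `uri`, `session_id`) are assumptions to adapt to your own telemetry schema, after which the funnel's `case()` and session logic would key off these fields instead of the web log.\n\n```\ncurl -k \"https://splunk.example.com:8088/services/collector/event\" -H \"Authorization: Splunk <hec_token>\" -d '{\"sourcetype\": \"app:pageview\", \"index\": \"app_events\", \"event\": {\"event_type\": \"page_view\", \"uri\": \"/checkout\", \"session_id\": \"a1b2c3\"}}'\n```",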
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We chart how visitors move from the first page through checkout and where they pause, so we improve the right screens instead of guessing which step loses orders.",
              "wv": "crawl",
              "mtype": [
                "Performance",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "apache"
              ],
              "em": [
                "apache_httpd"
              ],
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "23.1.2",
              "n": "Shopping Cart Abandonment Rate and Recovery",
              "c": "high",
              "f": "intermediate",
              "v": "Quantifies revenue left on the table when customers add items to cart but leave without purchasing. Typical abandonment rates are 60-80% — reducing this by even a few points directly increases revenue. Alerts the business team when abandonment spikes, often indicating a payment gateway issue or pricing problem.",
              "t": "Splunk Add-on for Apache Web Server (Splunkbase 3186)",
              "d": "`index=web` (web access logs), `index=app_events` (application event logs via HEC with cart and purchase events)",
              "q": "index=app_events sourcetype=\"app:ecommerce\" event_type IN (\"cart_add\",\"purchase_complete\") earliest=-7d\n| eval session=coalesce(session_id, clientip.\"_\".useragent)\n| stats earliest(_time) as first_action, latest(event_type) as last_action,\n        sum(eval(if(event_type=\"cart_add\",1,0))) as cart_adds,\n        sum(eval(if(event_type=\"purchase_complete\",1,0))) as purchases,\n        sum(eval(if(event_type=\"cart_add\",item_value,0))) as cart_value by session\n| eval abandoned=if(cart_adds>0 AND purchases=0, 1, 0)\n| eval abandoned_value=if(abandoned=1, cart_value, 0)\n| stats count as total_sessions, sum(abandoned) as abandoned_sessions,\n        sum(abandoned_value) as total_abandoned_value,\n        sum(eval(if(purchases>0,1,0))) as purchasing_sessions\n| eval abandonment_rate=round(100*abandoned_sessions/total_sessions, 1)\n| eval conversion_rate=round(100*purchasing_sessions/total_sessions, 1)\n| table total_sessions, abandoned_sessions, abandonment_rate, purchasing_sessions, conversion_rate, total_abandoned_value",
              "m": "(1) Instrument your e-commerce platform to send cart_add and purchase_complete events via HEC with session_id, item_value, and item_sku; (2) alert when daily abandonment rate exceeds baseline by >10 percentage points — this often signals a payment gateway issue; (3) segment by device type (mobile vs desktop) and traffic source; (4) feed abandoned session data to marketing automation for recovery emails.",
              "z": "Single value (abandonment rate + trend), Line chart (daily abandonment trend), Bar chart (abandonment by device/source), Single value (abandoned revenue).",
              "kfp": "Legitimate shopper behaviors inflate abandonment without code defects: mobile shoppers who add on phone but finish payment on desktop with a fresh cookie appear abandoned on the handset cohort until identity stitching joins cart_token. Bots (Googlebot, AhrefsBot, GPTBot, ClaudeBot) occasionally replay checkout URLs during audits—exclude via useragent before trusting funnel denominators. Coupon-extension shoppers (Honey, Capital One Shopping) populate carts under twenty-second sessions for price comparisons—looks like abandonment spikes unless session-duration thresholds filter noise. Guest checkout (Baymard-cited higher abandonment than registered accounts) blended with forced-account journeys mis-states averages unless `customer_type` segmentation exists. BNPL (Klarna, Afterpay) credit-decline surges during inflationary quarters track macro stress, not necessarily misconfigured storefronts. Apple 3DS / PSD2 flows that stall in `requires_action` resemble abandonment even when the PSP will eventually succeed. Mid-checkout promo expiration (SUMMER25 midnight UTC cliff), inventory races (out_of_stock declines), Shop Pay versus legacy checkout cohort tests, EU/US cart-retention timer mismatches, Black Friday sticker-shock browsing, and BOPIS versus ship-home mixes all distort naive daily percentages unless cohort rules align.",
              "refs": "[Splunk Add-on for Apache Web Server](https://splunkbase.splunk.com/app/3186)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Apache Web Server (Splunkbase 3186).\n• Ensure the following data sources are available: `index=web` (web access logs), `index=app_events` (application event logs via HEC with cart and purchase events).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument your e-commerce platform to send cart_add and purchase_complete events via HEC with session_id, item_value, and item_sku; (2) alert when daily abandonment rate exceeds baseline by >10 percentage points — this often signals a payment gateway issue; (3) segment by device type (mobile vs desktop) and traffic source; (4) feed abandoned session data to marketing automation for recovery emails.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app_events sourcetype=\"app:ecommerce\" event_type IN (\"cart_add\",\"purchase_complete\") earliest=-7d\n| eval session=coalesce(session_id, clientip.\"_\".useragent)\n| stats earliest(_time) as first_action, latest(event_type) as last_action,\n        sum(eval(if(event_type=\"cart_add\",1,0))) as cart_adds,\n        sum(eval(if(event_type=\"purchase_complete\",1,0))) as purchases,\n        sum(eval(if(event_type=\"cart_add\",item_value,0))) as cart_value by session\n| eval abandoned=if(cart_adds>0 AND purchases=0, 1, 0)\n| eval abandoned_value=if(abandoned=1, cart_value, 0)\n| stats count as total_sessions, sum(abandoned) as abandoned_sessions,\n        sum(abandoned_value) as total_abandoned_value,\n        sum(eval(if(purchases>0,1,0))) as purchasing_sessions\n| eval abandonment_rate=round(100*abandoned_sessions/total_sessions, 1)\n| eval conversion_rate=round(100*purchasing_sessions/total_sessions, 1)\n| table total_sessions, abandoned_sessions, abandonment_rate, purchasing_sessions, conversion_rate, total_abandoned_value\n```\n\nUnderstanding this SPL\n\n**Shopping Cart Abandonment Rate and Recovery** — Quantifies revenue left on the table when customers add items to cart but leave without purchasing. Typical abandonment rates are 60-80% — reducing this by even a few points directly increases revenue. Alerts the business team when abandonment spikes, often indicating a payment gateway issue or pricing problem.\n\nDocumented **Data sources**: `index=web` (web access logs), `index=app_events` (application event logs via HEC with cart and purchase events). **App/TA** (typical add-on context): Splunk Add-on for Apache Web Server (Splunkbase 3186). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app_events; **sourcetype**: app:ecommerce. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app_events, sourcetype=\"app:ecommerce\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **session** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by session** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **abandoned** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **abandoned_value** — often to normalize units, derive a ratio, or prepare for thresholds.\n• The second `stats` collapses the per-session rows into a single summary row of totals (sessions, abandonments, abandoned value, purchasing sessions).\n• `eval` defines or adjusts **abandonment_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **conversion_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `table` keeps the final columns: total_sessions, abandoned_sessions, abandonment_rate, purchasing_sessions, conversion_rate, total_abandoned_value\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (abandonment rate + trend), Line chart (daily abandonment trend), Bar chart (abandonment by device/source), Single value (abandoned revenue).
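\n\nOptional: baseline-deviation alert sketch (hedged)\nStep 1 recommends alerting when the daily abandonment rate exceeds its recent baseline by more than 10 percentage points. One rough way to express that with a trailing 28-day average as the baseline (window, span, and threshold are assumptions to tune):\n\n```spl\nindex=app_events sourcetype=\"app:ecommerce\" event_type IN (\"cart_add\",\"purchase_complete\") earliest=-28d\n| eval session=coalesce(session_id, clientip.\"_\".useragent)\n| bin _time span=1d\n| stats sum(eval(if(event_type=\"cart_add\",1,0))) as cart_adds,\n        sum(eval(if(event_type=\"purchase_complete\",1,0))) as purchases by _time, session\n| eval abandoned=if(cart_adds>0 AND purchases=0, 1, 0)\n| stats count as sessions, sum(abandoned) as abandoned_sessions by _time\n| eval abandonment_rate=round(100*abandoned_sessions/sessions, 1)\n| eventstats avg(abandonment_rate) as baseline_rate\n| where abandonment_rate > baseline_rate + 10\n```\n\nBecause `eventstats` folds the current day into the average, treat the comparison as approximate; a stricter variant computes the baseline from a separate window that excludes the day being evaluated.",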
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch how often people fill a cart but leave before paying, especially on phones versus computers, and how much money walks away when that happens. When that rate jumps far above normal, we dig into checkout and payments early—because fixing carts often saves more revenue than launching another sale.",
              "mtype": [
                "Business"
              ],
              "ind": "Retail, E-Commerce",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "apache"
              ],
              "em": [
                "apache_httpd"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.1.3",
              "n": "Real-Time Page Load Performance Impact on Revenue",
              "c": "high",
              "f": "intermediate",
              "v": "Research shows every 100ms of page load delay reduces conversion by 1-2%. This use case correlates page response times with business outcomes, quantifying the revenue impact of site performance — giving engineering teams a business case for performance optimization and helping executives understand why infrastructure investment matters.",
              "t": "Splunk Add-on for Apache Web Server (Splunkbase 3186), Splunk RUM",
              "d": "`index=web` `sourcetype=\"access_combined\"` (response_time, uri, status), `index=app_events` (purchase events with session linkage)",
              "q": "index=web sourcetype=\"access_combined\" status=200 earliest=-30d\n| eval session=clientip.\"_\".useragent\n| eval response_ms=response_time*1000\n| eval perf_bucket=case(\n    response_ms<1000, \"Fast (<1s)\",\n    response_ms<3000, \"Moderate (1-3s)\",\n    response_ms<5000, \"Slow (3-5s)\",\n    1=1, \"Very Slow (>5s)\")\n| join type=left session [\n    search index=app_events event_type=\"purchase_complete\" earliest=-30d\n    | eval session=coalesce(session_id, clientip.\"_\".useragent)\n    | stats sum(order_value) as revenue, count as purchases by session\n]\n| fillnull value=0 revenue purchases\n| stats dc(session) as sessions, sum(purchases) as total_purchases,\n        sum(revenue) as total_revenue, avg(response_ms) as avg_response_ms by perf_bucket\n| eval conversion_rate=round(100*total_purchases/sessions, 2)\n| eval revenue_per_session=round(total_revenue/sessions, 2)\n| sort perf_bucket\n| table perf_bucket, sessions, avg_response_ms, total_purchases, conversion_rate, total_revenue, revenue_per_session",
              "m": "(1) Ensure web server logs include response time (Apache: `%D`, Nginx: `$request_time`); (2) link web sessions to purchase events via session ID; (3) run monthly to build the business case for performance investment; (4) alert when average response time degrades past the \"Moderate\" threshold; (5) calculate the revenue uplift from moving sessions from \"Slow\" to \"Fast\" buckets.",
              "z": "Bar chart (conversion rate by speed bucket), Table (revenue per session by performance), Line chart (daily avg response time vs daily revenue), Single value (estimated revenue loss from slow pages).",
              "kfp": "Peak-traffic saturation raises origin time-to-first-byte without any deploy—Poor buckets widen until elastic load balancer queues drain; correlate surge-queue depth and Apache worker occupancy before blaming PDP templates alone. CDN failover exercises reroute around degraded PoPs—latency jumps resemble onsite regressions until operator status timelines align with Splunk timelines. Heavy Google Tag Manager chains inflate Largest Contentful Paint via main-thread contention while origin `$request_time` stays calm—mis-attribution points optimisation at infra when creative tooling dominated paint delay. Safari releases before iOS 15 drop many unload-phase beacon posts—Poor cohort sizes shrink versus Chrome until collectors adopt `pagehide` paths. Synthetic monitors and bots yield artificially crisp paints unless excluded—blend labs uptime metrics cautiously into shopper cohorts. Asia-Pacific propagation adds hundreds of milliseconds versus North America—continent-wide aggregates hide POP tuning priorities unless `geo_continent` splits exist. Chrome back-forward cache and prerender can report near-zero LCP for navigations that never truly cold-loaded—annotate Performance Navigation Timing `notRestoredReasons` / discard flags before celebrating Good saturation. Email link pre-fetch (for example Mail Privacy Protection patterns) seeds phantom sessions without revenue—honour `via_email_prefetch` or suppress those keys in commerce rollups.",
              "refs": "[Splunk Add-on for Apache Web Server](https://splunkbase.splunk.com/app/3186)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Apache Web Server (Splunkbase 3186), Splunk RUM.\n• Ensure the following data sources are available: `index=web` `sourcetype=\"access_combined\"` (response_time, uri, status), `index=app_events` (purchase events with session linkage).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure web server logs include response time (Apache: `%D`, Nginx: `$request_time`); (2) link web sessions to purchase events via session ID; (3) run monthly to build the business case for performance investment; (4) alert when average response time degrades past the \"Moderate\" threshold; (5) calculate the revenue uplift from moving sessions from \"Slow\" to \"Fast\" buckets.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\" status=200 earliest=-30d\n| eval session=clientip.\"_\".useragent\n| eval response_ms=response_time*1000\n| eval perf_bucket=case(\n    response_ms<1000, \"Fast (<1s)\",\n    response_ms<3000, \"Moderate (1-3s)\",\n    response_ms<5000, \"Slow (3-5s)\",\n    1=1, \"Very Slow (>5s)\")\n| join type=left session [\n    search index=app_events event_type=\"purchase_complete\" earliest=-30d\n    | eval session=coalesce(session_id, clientip.\"_\".useragent)\n    | stats sum(order_value) as revenue, count as purchases by session\n]\n| fillnull value=0 revenue purchases\n| stats dc(session) as sessions, sum(purchases) as total_purchases,\n        sum(revenue) as total_revenue, avg(response_ms) as avg_response_ms by perf_bucket\n| eval conversion_rate=round(100*total_purchases/sessions, 2)\n| eval revenue_per_session=round(total_revenue/sessions, 2)\n| sort perf_bucket\n| table perf_bucket, sessions, avg_response_ms, total_purchases, conversion_rate, total_revenue, revenue_per_session\n```\n\nUnderstanding this SPL\n\n**Real-Time Page Load Performance Impact on Revenue** — Research shows every 100ms of page load delay reduces conversion by 1-2%. This use case correlates page response times with business outcomes, quantifying the revenue impact of site performance — giving engineering teams a business case for performance optimization and helping executives understand why infrastructure investment matters.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` (response_time, uri, status), `index=app_events` (purchase events with session linkage). **App/TA** (typical add-on context): Splunk Add-on for Apache Web Server (Splunkbase 3186), Splunk RUM. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **session** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **response_ms** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **perf_bucket** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins purchase totals from a subsearch with `join type=left` on session; on busy sites watch subsearch result limits (the purchase side can be silently truncated) and consider a `stats`-based merge over both indexes instead.\n• Fills null values with `fillnull`.\n• `stats` rolls up events into metrics; results are split **by perf_bucket** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **conversion_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **revenue_per_session** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `table` keeps the final columns: perf_bucket, sessions, avg_response_ms, total_purchases, conversion_rate, total_revenue, revenue_per_session\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (conversion rate by speed bucket), Table (revenue per session by performance), Line chart (daily avg response time vs daily revenue), Single value (estimated revenue loss from slow pages).
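\n\nOptional: response-time field and unit check (hedged)\nThe search computes `response_ms=response_time*1000`, which assumes `response_time` arrives in seconds, as Nginx's `$request_time` does. Apache's `%D` logs microseconds, so with an Apache source you would divide by 1000 instead; confirm against a few raw events before trusting the speed buckets. How the trailing value gets extracted into a `response_time` field is deployment-specific (for example a custom field extraction layered on your access_combined-style sourcetype). Illustrative logging directives; the format name below is an assumption:\n\n```\n# Apache httpd: append %D (request service time in microseconds) to the LogFormat behind your access log\n# Nginx: $request_time is seconds with millisecond resolution\nlog_format main_with_time '$remote_addr - $remote_user [$time_local] \"$request\" '\n                          '$status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $request_time';\naccess_log /var/log/nginx/access.log main_with_time;\n```",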
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We tie how quickly pages load and feel responsive to whether people actually bought in that visit—by page layout—so leaders see where speeding things up likely saves sales, not just where clicks dropped in a funnel.",
              "mtype": [
                "Performance",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "apache"
              ],
              "em": [
                "apache_httpd"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.1.4",
              "n": "Customer Satisfaction Score (CSAT/NPS) Trend Dashboard",
              "c": "medium",
              "f": "beginner",
              "v": "Centralises customer satisfaction metrics — Net Promoter Score, CSAT, CES — alongside operational data, enabling teams to correlate satisfaction dips with specific incidents, releases, or service changes. Executives see customer health at a glance rather than waiting for quarterly survey reports.",
              "t": "Splunk DB Connect (Splunkbase 2686), HEC",
              "d": "`index=business` `sourcetype=\"nps_survey\"` (score, verbatim, channel, product, date), CRM data via DB Connect",
              "q": "index=business sourcetype=\"nps_survey\" earliest=-90d\n| eval category=case(score>=9,\"Promoter\", score>=7,\"Passive\", 1=1,\"Detractor\")\n| bin _time span=1w\n| stats count as responses,\n        sum(eval(if(category=\"Promoter\",1,0))) as promoters,\n        sum(eval(if(category=\"Detractor\",1,0))) as detractors,\n        avg(score) as avg_score by _time\n| eval nps=round(100*(promoters-detractors)/responses, 0)\n| table _time, responses, promoters, detractors, nps, avg_score\n| sort _time",
              "m": "(1) Ingest survey responses via HEC or DB Connect from your survey platform (Qualtrics, Medallia, SurveyMonkey); (2) include product/service and channel fields for segmentation; (3) schedule weekly NPS calculation; (4) alert when NPS drops below threshold or week-over-week decline exceeds 10 points; (5) add text analytics on verbatim comments using `rex` for common complaint themes.",
              "z": "Line chart (NPS trend), Single value (current NPS), Pie chart (Promoter/Passive/Detractor split), Word cloud or bar chart (top complaint themes from verbatims).",
              "kfp": "Sudden promoter-ratio erosion frequently mirrors cadence collisions rather than faulty CX craft—when invitation waves overlap renewal invoices plus incident apologies inside one fortnight, irritated respondents dominate returns while quietly satisfied households skip replies altogether, depressing headline scores until throttle maps restore spacing. Concurrent taxonomy refreshes inside XM-Discover or Lexalytics pipelines may shift legacy SERVICE_QUALITY mentions into LOGISTICS friction buckets without any behavioural shift on the floor, producing imaginary topic surges Splunk surfaces as crises until NLP version identifiers reconcile cohort labels.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HEC.\n• Ensure the following data sources are available: `index=business` `sourcetype=\"nps_survey\"` (score, verbatim, channel, product, date), CRM data via DB Connect.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest survey responses via HEC or DB Connect from your survey platform (Qualtrics, Medallia, SurveyMonkey); (2) include product/service and channel fields for segmentation; (3) schedule weekly NPS calculation; (4) alert when NPS drops below threshold or week-over-week decline exceeds 10 points; (5) add text analytics on verbatim comments using `rex` for common complaint themes.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"nps_survey\" earliest=-90d\n| eval category=case(score>=9,\"Promoter\", score>=7,\"Passive\", 1=1,\"Detractor\")\n| bin _time span=1w\n| stats count as responses,\n        sum(eval(if(category=\"Promoter\",1,0))) as promoters,\n        sum(eval(if(category=\"Detractor\",1,0))) as detractors,\n        avg(score) as avg_score by _time\n| eval nps=round(100*(promoters-detractors)/responses, 0)\n| table _time, responses, promoters, detractors, nps, avg_score\n| sort _time\n```\n\nUnderstanding this SPL\n\n**Customer Satisfaction Score (CSAT/NPS) Trend Dashboard** — Centralises customer satisfaction metrics — Net Promoter Score, CSAT, CES — alongside operational data, enabling teams to correlate satisfaction dips with specific incidents, releases, or service changes. Executives see customer health at a glance rather than waiting for quarterly survey reports.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"nps_survey\"` (score, verbatim, channel, product, date), CRM data via DB Connect. **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: nps_survey. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"nps_survey\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **category** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **nps** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Customer Satisfaction Score (CSAT/NPS) Trend Dashboard**): table _time, responses, promoters, detractors, nps, avg_score\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (NPS trend), Single value (current NPS), Pie chart (Promoter/Passive/Detractor split), Word cloud or bar chart (top complaint themes from verbatims).
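\n\nOptional: verbatim theme extraction sketch (hedged)\nStep 1 suggests using `rex` over the free-text comments to surface complaint themes. A minimal illustration over detractor responses; the keyword list and theme labels are assumptions to replace with your own taxonomy:\n\n```spl\nindex=business sourcetype=\"nps_survey\" score<=6 earliest=-90d\n| rex field=verbatim \"(?i)(?<theme>slow|crash|price|billing|support|delivery)\"\n| fillnull value=\"other\" theme\n| stats count as mentions by theme\n| sort - mentions\n```",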
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We blend star ratings and written remarks folks send after purchases or service chats, watch whether goodwill climbs or slips week by week, and raise the alarm before renewals turn hostile—not before counting clicks or carts alone.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.1.5",
              "n": "Customer Journey Cross-Channel Attribution",
              "c": "high",
              "f": "advanced",
              "v": "Traces a customer's complete journey across web, mobile app, email, call centre, and in-store touchpoints — showing which channels drive engagement and conversion. Replaces siloed channel reporting with a unified view, helping marketing allocate budget to the channels that actually move customers to purchase.",
              "t": "Splunk DB Connect (Splunkbase 2686), Splunk Add-on for Apache Web Server (Splunkbase 3186), HEC",
              "d": "`index=web` (web visits), `index=app_events` (mobile app events), `index=business` (CRM interactions, call centre logs, email engagement, POS transactions)",
              "q": "index=web sourcetype=\"access_combined\" earliest=-30d\n| eval channel=\"Web\", customer_id=coalesce(cookie_customer_id, clientip)\n| eval touchpoint=uri\n| append [search index=app_events sourcetype=\"app:mobile\" earliest=-30d | eval channel=\"Mobile_App\", customer_id=user_id, touchpoint=screen_name]\n| append [search index=business sourcetype=\"email_engagement\" earliest=-30d | eval channel=\"Email\", touchpoint=campaign_name]\n| append [search index=business sourcetype=\"call_centre\" earliest=-30d | eval channel=\"Phone\", touchpoint=call_reason]\n| append [search index=business sourcetype=\"pos_transaction\" earliest=-30d | eval channel=\"In_Store\", touchpoint=\"Purchase\"]\n| where isnotnull(customer_id)\n| sort customer_id _time\n| streamstats values(channel) as journey_channels dc(channel) as channel_count by customer_id\n| stats dc(customer_id) as unique_customers, count as total_touchpoints,\n        dc(eval(if(channel=\"In_Store\" OR match(touchpoint,\"(?i)purchase|confirm\"),customer_id,null()))) as converting_customers by channel\n| eval conversion_contribution=round(100*converting_customers/unique_customers, 1)\n| sort - conversion_contribution\n| table channel, unique_customers, total_touchpoints, converting_customers, conversion_contribution",
              "m": "(1) Unify customer identity across channels using a customer ID or email — use identity resolution lookup if needed; (2) ingest email engagement via HEC from your ESP; (3) import call centre logs from ACD/IVR systems; (4) import POS transactions from retail systems; (5) build multi-touch attribution models (first-touch, last-touch, linear, time-decay) as additional saved searches.",
              "z": "Sankey diagram (channel flow), Bar chart (conversion contribution by channel), Table (customer journey paths), Single value (avg touchpoints before conversion).",
              "kfp": "Retagging taxonomy leaves mid-flight journeys sliced under legacy labels until JDBC replay completes—mistaken dips resemble creative fatigue rather than ingest latency until warehouse watermark audits reconcile cohort labels. Probabilistic bridges occasionally unite unrelated shoppers sharing coworking egress fingerprints—duplicate CRM survivor checks isolate phantom merges faster than heuristic uplift tiles alone. Bayesian cushions stabilise sparse-channel slices yet disguise genuine erosion whenever thirty-day conversion counts fall beneath roughly one thousand events per lane—extend horizons cautiously instead of trusting premature demise narratives from thin cohorts alone. Offline till receipts arriving forty-eight hours after digital discovery misalign credited channels—anchor spend reviews to treasury posting clocks before punishing syndicated displays. Interaction Studio duplicate impression keys after anonymous resets inflate transition tallies until dedupe_keys collapse retried beacons upstream. Deferred SKAdNetwork cohorts paired with ATT opt-in scarcity masquerade as social under-performance when measurement loss dominates true demand motion. W-shaped milestone fields missing from CRM extracts mis-allocate enterprise credit—verify lead-creation and opportunity-creation timestamps land inside the same stitched stream before defending W allocations. Privacy Sandbox redirects strip inbound querystrings—dark-traffic cohorts swell unless server-side collectors preserve campaign metadata before HEC ingestion.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Apache Web Server](https://splunkbase.splunk.com/app/3186)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), Splunk Add-on for Apache Web Server (Splunkbase 3186), HEC.\n• Ensure the following data sources are available: `index=web` (web visits), `index=app_events` (mobile app events), `index=business` (CRM interactions, call centre logs, email engagement, POS transactions).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Unify customer identity across channels using a customer ID or email — use identity resolution lookup if needed; (2) ingest email engagement via HEC from your ESP; (3) import call centre logs from ACD/IVR systems; (4) import POS transactions from retail systems; (5) build multi-touch attribution models (first-touch, last-touch, linear, time-decay) as additional saved searches.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\" earliest=-30d\n| eval channel=\"Web\", customer_id=coalesce(cookie_customer_id, clientip)\n| eval touchpoint=uri\n| append [search index=app_events sourcetype=\"app:mobile\" earliest=-30d | eval channel=\"Mobile_App\", customer_id=user_id, touchpoint=screen_name]\n| append [search index=business sourcetype=\"email_engagement\" earliest=-30d | eval channel=\"Email\", touchpoint=campaign_name]\n| append [search index=business sourcetype=\"call_centre\" earliest=-30d | eval channel=\"Phone\", touchpoint=call_reason]\n| append [search index=business sourcetype=\"pos_transaction\" earliest=-30d | eval channel=\"In_Store\", touchpoint=\"Purchase\"]\n| where isnotnull(customer_id)\n| sort customer_id _time\n| streamstats values(channel) as journey_channels dc(channel) as channel_count by customer_id\n| stats dc(customer_id) as unique_customers, count as total_touchpoints,\n        dc(eval(if(channel=\"In_Store\" OR match(touchpoint,\"(?i)purchase|confirm\"),customer_id,null()))) as converting_customers by channel\n| eval conversion_contribution=round(100*converting_customers/unique_customers, 1)\n| sort - conversion_contribution\n| table channel, unique_customers, total_touchpoints, converting_customers, conversion_contribution\n```\n\nUnderstanding this SPL\n\n**Customer Journey Cross-Channel Attribution** — Traces a customer's complete journey across web, mobile app, email, call centre, and in-store touchpoints — showing which channels drive engagement and conversion. Replaces siloed channel reporting with a unified view, helping marketing allocate budget to the channels that actually move customers to purchase.\n\nDocumented **Data sources**: `index=web` (web visits), `index=app_events` (mobile app events), `index=business` (CRM interactions, call centre logs, email engagement, POS transactions). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), Splunk Add-on for Apache Web Server (Splunkbase 3186), HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. 
If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **channel** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **touchpoint** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Four `append` subsearches bring in the Mobile_App, Email, Phone, and In_Store touchpoints alongside the web events, each tagged with its own channel and touchpoint; make sure every feed carries a usable customer_id, or the later filter will drop those rows.\n• Filters the current rows with `where isnotnull(customer_id)` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` accumulates the running set of channels (`values(channel)`) and the distinct channel count per customer_id as each journey unfolds in time order.\n• `stats` rolls up events into metrics; results are split **by channel** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **conversion_contribution** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `table` keeps the final columns: channel, unique_customers, total_touchpoints, converting_customers, conversion_contribution\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Sankey diagram (channel flow), Bar chart (conversion contribution by channel), Table (customer journey paths), Single value (avg touchpoints before conversion).
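\n\nOptional: identity resolution sketch (hedged)\nStep 1 depends on a shared customer identifier across channels; where the web tier only has a cookie or client IP, a lookup can bridge to the CRM identifier before the journey is stitched. The lookup name `customer_identity_map` and its fields are assumptions for illustration:\n\n```spl\nindex=web sourcetype=\"access_combined\" earliest=-30d\n| eval channel=\"Web\", customer_id=coalesce(cookie_customer_id, clientip)\n| lookup customer_identity_map web_identifier AS customer_id OUTPUTNEW crm_customer_id\n| eval customer_id=coalesce(crm_customer_id, customer_id)\n```\n\nThe same normalization matters for the appended Email, Phone, and In_Store feeds: any leg that cannot supply a customer_id is silently removed by the later `where isnotnull(customer_id)` filter.",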
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We follow one buyer’s trail across email, ads, searches, stores, and apps over weeks, then share credit fairly so the real helpers get noticed instead of whichever touch simply happened last before checkout.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "apache",
                "db_connect"
              ],
              "em": [
                "apache_httpd"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.1.6",
              "n": "Mobile App Crash Rate and User Impact",
              "c": "high",
              "f": "intermediate",
              "v": "Every app crash is a customer experience failure that may lead to churn. This use case tracks crash rates by app version, device, and OS — correlating crashes with user retention and app store ratings. Product managers see which crashes affect the most users and can prioritise fixes based on business impact rather than technical severity alone.",
              "t": "HEC (custom app telemetry)",
              "d": "`index=app_events` `sourcetype=\"app:crash\"` (app_version, os_version, device_model, user_id, crash_type, stack_trace)",
              "q": "index=app_events sourcetype IN (\"app:crash\",\"app:session\") earliest=-7d\n| eval is_crash=if(sourcetype=\"app:crash\", 1, 0)\n| eval is_session=if(sourcetype=\"app:session\", 1, 0)\n| stats sum(is_crash) as crashes, sum(is_session) as sessions, dc(user_id) as affected_users by app_version\n| eval crash_rate=round(100*crashes/sessions, 2)\n| eval users_pct=round(100*affected_users/sessions, 2)\n| sort - crash_rate\n| table app_version, sessions, crashes, crash_rate, affected_users, users_pct",
              "m": "(1) Instrument your mobile app to send crash reports and session start events via HEC; (2) include app version, OS version, device model, and user ID; (3) alert when crash rate for any version exceeds 2%; (4) track crash rate trends after new releases; (5) correlate crash-affected users with churn data to quantify business impact.",
              "z": "Line chart (crash rate by version over time), Bar chart (crashes by device/OS), Single value (current crash-free rate %), Table (top crash types with user impact).",
              "kfp": "Internal dogfood plus TestFlight cohorts shipped with aggressive logging can dominate hourly denominators until `environment=\"production\"` filters align with Play Console track selections—crash-free percentages look artificially bleak even though storefront shoppers remain steady. Crashlytics velocity alerts tuned for tiny handset cohorts fire whenever legacy Galaxy Note fleets spike even though statistically those populations cannot sway quarterly retention forecasts alone.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC (custom app telemetry).\n• Ensure the following data sources are available: `index=app_events` `sourcetype=\"app:crash\"` (app_version, os_version, device_model, user_id, crash_type, stack_trace).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument your mobile app to send crash reports and session start events via HEC; (2) include app version, OS version, device model, and user ID; (3) alert when crash rate for any version exceeds 2%; (4) track crash rate trends after new releases; (5) correlate crash-affected users with churn data to quantify business impact.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app_events sourcetype IN (\"app:crash\",\"app:session\") earliest=-7d\n| eval is_crash=if(sourcetype=\"app:crash\", 1, 0)\n| eval is_session=if(sourcetype=\"app:session\", 1, 0)\n| stats sum(is_crash) as crashes, sum(is_session) as sessions, dc(user_id) as affected_users by app_version\n| eval crash_rate=round(100*crashes/sessions, 2)\n| eval users_pct=round(100*affected_users/sessions, 2)\n| sort - crash_rate\n| table app_version, sessions, crashes, crash_rate, affected_users, users_pct\n```\n\nUnderstanding this SPL\n\n**Mobile App Crash Rate and User Impact** — Every app crash is a customer experience failure that may lead to churn. This use case tracks crash rates by app version, device, and OS — correlating crashes with user retention and app store ratings. Product managers see which crashes affect the most users and can prioritise fixes based on business impact rather than technical severity alone.\n\nDocumented **Data sources**: `index=app_events` `sourcetype=\"app:crash\"` (app_version, os_version, device_model, user_id, crash_type, stack_trace). **App/TA** (typical add-on context): HEC (custom app telemetry). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app_events.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app_events, time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_crash** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_session** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by app_version** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **crash_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **users_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Mobile App Crash Rate and User Impact**): table app_version, sessions, crashes, crash_rate, affected_users, users_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (crash rate by version over time), Bar chart (crashes by device/OS), Single value (current crash-free rate %), Table (top crash types with user impact).
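\n\nOptional: release-threshold alert sketch (hedged)\nStep 1 sets a 2% crash-rate threshold per app version. A rough daily alert form of the documented search (span and threshold are assumptions to tune per release cadence):\n\n```spl\nindex=app_events sourcetype IN (\"app:crash\",\"app:session\") earliest=-30d\n| bin _time span=1d\n| stats sum(eval(if(sourcetype=\"app:crash\",1,0))) as crashes,\n        sum(eval(if(sourcetype=\"app:session\",1,0))) as sessions by _time, app_version\n| eval crash_rate=round(100*crashes/sessions, 2)\n| where crash_rate > 2\n```\n\nSchedule it daily and alert on any row returned; adding `| where sessions > 100` (or a similar floor) keeps tiny canary cohorts from paging the team.",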
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "When pocket software freezes or shuts itself down we tally how widespread that trouble is—not just totals—so fixes chase flaws that genuinely spoil everyday trips instead of reacting to meaningless spikes.",
              "mtype": [
                "Performance",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.1.7",
              "n": "Site Search Effectiveness and Zero-Result Rate",
              "c": "medium",
              "f": "beginner",
              "v": "Measures how often on-site searches return no results or no clicks, which usually means frustrated shoppers or support deflection failure. We help merchandising and content teams fix synonyms and catalog gaps before abandonment rises.",
              "t": "Splunk Add-on for Apache Web Server (Splunkbase 3186), HEC",
              "d": "`index=web` `sourcetype=\"access_combined\"` (uri, status, query), `index=app_events` `sourcetype=\"site_search\"` (search_term, result_count, clicked_result)",
              "q": "index=app_events sourcetype=\"site_search\" earliest=-14d\n| eval zero_results=if(result_count=0 OR isnull(result_count),1,0)\n| eval no_click=if(clicked_result=\"none\" OR isnull(clicked_result),1,0)\n| stats count as searches, sum(zero_results) as zero_hits, sum(no_click) as no_click by search_term\n| eval zero_rate_pct=round(100*zero_hits/searches,1)\n| eval no_click_rate_pct=round(100*no_click/searches,1)\n| where searches>=20\n| sort - zero_rate_pct\n| head 40\n| table search_term, searches, zero_hits, zero_rate_pct, no_click_rate_pct",
              "m": "(1) Log each search with term, count of results, and whether the user clicked a hit using HEC from your storefront or portal; (2) filter bots from web logs if you also infer search from query strings; (3) send the top twenty high-volume zero-result terms weekly to the product content owner with suggested catalog checks.",
              "z": "Bar chart (zero-result rate by term), Table (top problem searches), Line chart (overall zero-result trend), Single value (searches with no click percent).",
              "kfp": "Storefront-wide **seasonal peaks** inflate **reviews_per_day** while **rating** medians remain flat, triggering **velocity** alarms until **retail_calendar** exclusions land. **Apple** or **Google** featuring shifts countries first, so **territory** skew looks like crises though code never regressed—trellis by **ISO-3166** before executive readouts. **Phased rollout** leaves **canary** cohorts louder about a **crash** fixed days later on **stable**, so **aspect_sentiment_drift** on **performance** misleads until **release_calendar** percentages annotate mix. **NLP upgrades** retag **aspect_topic** labels overnight, inventing **topic-frequency surges** absent behavioural shifts—partition by **`model_version`**. **Mandatory OS updates** plus **credential migrations** spike **login** vitriol unrelated to feature backlogs—correlate with **SDK** adoption curves. **Paid acquisition cohorts** show harsher **rating** averages per public **AppsFlyer** marketing studies yet reflect targeting mistakes—not core craft—until **campaign** joins succeed. **Aggregated ASN** clustering flags **campus Wi-Fi** during orientation week as faux brigades—validate **account_age** signals before Legal paging. **Dual pollers** (RSS plus API) double-count Apple reviews until **dedup** macros align. **Google policy purges** removing spam waves reduce daily volume resembling product wins without merges. **Cross-app news** (**airline outage**, **bank failure**) sparks one-star solidarity unrelated to release quality—monitor **named-entity** spikes. **Localization double-translation** pipelines distort **sentiment_bucket** thresholds even when native reviewers felt neutral—inspect **language_chain** metadata.",
              "refs": "[Splunk Add-on for Apache Web Server](https://splunkbase.splunk.com/app/3186)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Apache Web Server (Splunkbase 3186), HEC.\n• Ensure the following data sources are available: `index=web` `sourcetype=\"access_combined\"` (uri, status, query), `index=app_events` `sourcetype=\"site_search\"` (search_term, result_count, clicked_result).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Log each search with term, count of results, and whether the user clicked a hit using HEC from your storefront or portal; (2) filter bots from web logs if you also infer search from query strings; (3) send the top twenty high-volume zero-result terms weekly to the product content owner with suggested catalog checks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app_events sourcetype=\"site_search\" earliest=-14d\n| eval zero_results=if(result_count=0 OR isnull(result_count),1,0)\n| eval no_click=if(clicked_result=\"none\" OR isnull(clicked_result),1,0)\n| stats count as searches, sum(zero_results) as zero_hits, sum(no_click) as no_click by search_term\n| eval zero_rate_pct=round(100*zero_hits/searches,1)\n| eval no_click_rate_pct=round(100*no_click/searches,1)\n| where searches>=20\n| sort - zero_rate_pct\n| head 40\n| table search_term, searches, zero_hits, zero_rate_pct, no_click_rate_pct\n```\n\nUnderstanding this SPL\n\n**Site Search Effectiveness and Zero-Result Rate** — Measures how often on-site searches return no results or no clicks, which usually means frustrated shoppers or support deflection failure. We help merchandising and content teams fix synonyms and catalog gaps before abandonment rises.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` (uri, status, query), `index=app_events` `sourcetype=\"site_search\"` (search_term, result_count, clicked_result). **App/TA** (typical add-on context): Splunk Add-on for Apache Web Server (Splunkbase 3186), HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app_events; **sourcetype**: site_search. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app_events, sourcetype=\"site_search\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **zero_results** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **no_click** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by search_term** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **zero_rate_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **no_click_rate_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where searches>=20` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Pipeline stage (see **Site Search Effectiveness and Zero-Result Rate**): table search_term, searches, zero_hits, zero_rate_pct, no_click_rate_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (zero-result rate by term), Table (top problem searches), Line chart (overall zero-result trend), Single value (searches with no click percent).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We keep an eye on the public star scores and comments people leave on Apple and Android shop pages—not private survey forms—and we notice when unhappy notes pile up after a new version ships. We connect those complaint waves to phones being uninstalled or ad traffic that brought the wrong audience so teams can fix the product story before the listing tanks.",
              "mtype": [
                "Performance",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "apache"
              ],
              "em": [
                "apache_httpd"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.1.8",
              "n": "Form Abandonment and Field-Level Drop-Off",
              "c": "high",
              "f": "intermediate",
              "v": "Shows which form fields correlate with users leaving before submit so teams can shorten lead flows and improve compliance without guessing. We help growth leaders recover more sign-ups from the same traffic volume.",
              "t": "HEC (form analytics)",
              "d": "`index=app_events` `sourcetype=\"form_analytics\"` (form_id, field_name, event_type, session_id, dwell_ms)",
              "q": "index=app_events sourcetype=\"form_analytics\" earliest=-30d\n| where event_type=\"abandon\"\n| stats count as abandons by form_id, field_name\n| sort - abandons\n| head 25\n| table form_id, field_name, abandons",
              "m": "(1) Instrument blur, focus, submit, and abandon events with field names and session identifiers through HEC; (2) define abandon as leaving the page without submit after interacting with the form; (3) prioritise redesign of the top three field-and-form pairs every sprint until abandon counts fall by the agreed target.",
              "z": "Bar chart (abandons by last field), Table (form and field ranking), Funnel chart (started vs submitted), Single value (overall form completion rate).",
              "kfp": "Instrumentation parity drift—not seasonal whims—is what bites: when Safari Intelligent Tracking Prevention blocks first-party scripts until late interaction, early blur timestamps disappear while submit receipts still arrive, so abandonment math exaggerates curiosity exits on Apple-heavy cohorts despite unchanged creative assets. Conditional-display logic may reveal hidden salary-range controls after unrelated radio picks; Splunk logs focus events before respondents perceive those widgets, falsely blaming invisible blanks until DOM snapshots corroborate visibility windows.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC (form analytics).\n• Ensure the following data sources are available: `index=app_events` `sourcetype=\"form_analytics\"` (form_id, field_name, event_type, session_id, dwell_ms).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument blur, focus, submit, and abandon events with field names and session identifiers through HEC; (2) define abandon as leaving the page without submit after interacting with the form; (3) prioritise redesign of the top three field-and-form pairs every sprint until abandon counts fall by the agreed target.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app_events sourcetype=\"form_analytics\" earliest=-30d\n| where event_type=\"abandon\"\n| stats count as abandons by form_id, field_name\n| sort - abandons\n| head 25\n| table form_id, field_name, abandons\n```\n\nUnderstanding this SPL\n\n**Form Abandonment and Field-Level Drop-Off** — Shows which form fields correlate with users leaving before submit so teams can shorten lead flows and improve compliance without guessing. We help growth leaders recover more sign-ups from the same traffic volume.\n\nDocumented **Data sources**: `index=app_events` `sourcetype=\"form_analytics\"` (form_id, field_name, event_type, session_id, dwell_ms). **App/TA** (typical add-on context): HEC (form analytics). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app_events; **sourcetype**: form_analytics. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app_events, sourcetype=\"form_analytics\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where event_type=\"abandon\"` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by form_id, field_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Pipeline stage (see **Form Abandonment and Field-Level Drop-Off**): table form_id, field_name, abandons\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (abandons by last field), Table (form and field ranking), Funnel chart (started vs submitted), Single value (overall form completion rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We spot when someone begins answering questions on an enrollment or inquiry screen but leaves before sending it, and which blanks lined up with that exit—so teams reorder fields and shorten wording instead of arguing from instinct alone.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.1.9",
              "n": "Third-Party Tag and API Latency Impact on Engagement",
              "c": "medium",
              "f": "advanced",
              "v": "Correlates slow marketing tags or partner application programming interface calls with shorter sessions and fewer conversions. We help digital teams decide which vendors to keep, async load, or remove based on customer impact, not vendor promises alone.",
              "t": "HEC (RUM or browser telemetry), Splunk Add-on for Apache Web Server (Splunkbase 3186)",
              "d": "`index=app_events` `sourcetype=\"rum:resource\"` (session_id, resource_host, duration_ms, initiator_type), `index=app_events` `sourcetype=\"app:ecommerce\"` (session_id, event_type)",
              "q": "index=app_events sourcetype=\"rum:resource\" initiator_type=\"script\" earliest=-7d\n| stats perc95(duration_ms) as p95_ms, avg(duration_ms) as avg_ms, count as loads,\n        dc(session_id) as affected_sessions by resource_host\n| eval slow_tag=if(p95_ms>500,\"SLOW\",\"OK\")\n| sort - p95_ms\n| head 25\n| table resource_host, affected_sessions, loads, avg_ms, p95_ms, slow_tag",
              "m": "(1) Capture real user monitoring resource timings with host and initiator type via HEC; (2) map session identifiers consistently with commerce events; (3) review monthly with marketing technology owners and defer or replace any third-party host above the latency budget for two consecutive weeks.",
              "z": "Bar chart (ninety-fifth percentile duration by host), Table (slow tag list), Scatter plot (loads vs latency), Single value (count of hosts over budget).",
              "kfp": "Holiday staffing freezes intentionally reopen dormant **`zendesk:ticket_audit`** threads once skeleton crews return—fourteen-day **`reopen_gap_s`** alarms spike without underlying workmanship faults until **`routing_calendar`** exclusions annotate Splunk drilldowns. **VIP concierge** pods proactively **`pending`** VIP threads nightly even though underlying defects vanished—conversation churn resembles rework though **`cause_bucket`** stays **`customer_expectation_gap`** rather than **`wrong_fix`**. Salesforce **`RecordMerge`** duplicates flatten **`CaseHistory`** timelines until **`MasterRecordId`** joins reconcile blended **`tid`** strings—Splunk **`stats`** temporarily double-count **`resolved_like_volume`**. Freshdesk **`freshdesk:ticket`** **`spam`** folders reopen automated **`ticket`** IDs whenever SOC forwards phishing attempts—noise masks legitimate **`reopen_rate_pct`** spikes unless **`spam`** tags feed **`where`** clauses. Zendesk **`merged_ticket`** macros rename **`ticket_id`** mid-history—without **`merged_into`** lookups **`streamstats`** reads **`prev_solved`** gaps spanning unrelated narratives. Enterprise **`snow:csm_case`** rows mirrored beside **`snow:incident`** duplicates accidentally ingest **`INC`** numbering conventions—generic **`tid`** filters must differentiate **`sn_customerservice_case`** prefixes before **`channel`** overlays compare apples-to-oranges against Zendesk queues. Third-party **`bots`** answering FAQs toggle **`pending`** hundreds of times nightly inflating **`interaction_segments`** yet leaving **`reopen_gap_s`** zero— **`agent_quality_idx`** sinks despite cheerful **`CSAT`** surveys arriving elsewhere.",
              "refs": "[Splunk Add-on for Apache Web Server](https://splunkbase.splunk.com/app/3186), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC (RUM or browser telemetry), Splunk Add-on for Apache Web Server (Splunkbase 3186).\n• Ensure the following data sources are available: `index=app_events` `sourcetype=\"rum:resource\"` (session_id, resource_host, duration_ms, initiator_type), `index=app_events` `sourcetype=\"app:ecommerce\"` (session_id, event_type).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Capture real user monitoring resource timings with host and initiator type via HEC; (2) map session identifiers consistently with commerce events; (3) review monthly with marketing technology owners and defer or replace any third-party host above the latency budget for two consecutive weeks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app_events sourcetype=\"rum:resource\" initiator_type=\"script\" earliest=-7d\n| stats perc95(duration_ms) as p95_ms, avg(duration_ms) as avg_ms, count as loads,\n        dc(session_id) as affected_sessions by resource_host\n| eval slow_tag=if(p95_ms>500,\"SLOW\",\"OK\")\n| sort - p95_ms\n| head 25\n| table resource_host, affected_sessions, loads, avg_ms, p95_ms, slow_tag\n```\n\nUnderstanding this SPL\n\n**Third-Party Tag and API Latency Impact on Engagement** — Correlates slow marketing tags or partner application programming interface calls with shorter sessions and fewer conversions. We help digital teams decide which vendors to keep, async load, or remove based on customer impact, not vendor promises alone.\n\nDocumented **Data sources**: `index=app_events` `sourcetype=\"rum:resource\"` (session_id, resource_host, duration_ms, initiator_type), `index=app_events` `sourcetype=\"app:ecommerce\"` (session_id, event_type). **App/TA** (typical add-on context): HEC (RUM or browser telemetry), Splunk Add-on for Apache Web Server (Splunkbase 3186). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app_events; **sourcetype**: rum:resource. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app_events, sourcetype=\"rum:resource\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by resource_host** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **slow_tag** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Pipeline stage (see **Third-Party Tag and API Latency Impact on Engagement**): table resource_host, affected_sessions, loads, avg_ms, p95_ms, slow_tag\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) 
as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (ninety-fifth percentile duration by host), Table (slow tag list), Scatter plot (loads vs latency), Single value (count of hosts over budget).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We notice when someone marks a help ticket solved yet the customer has to send it back soon after because the trouble was not really fixed. We tally those repeats, study the wording to learn why fixes fail, and tell coaches where agents need sharper answers—not faster chatter alone.",
              "mtype": [
                "Performance",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "apache"
              ],
              "em": [
                "apache_httpd"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 37.8,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 8,
            "none": 0
          }
        },
        {
          "i": "23.2",
          "n": "Revenue & Sales Operations",
          "u": [
            {
              "i": "23.2.1",
              "n": "Sales Pipeline Velocity and Forecast Accuracy",
              "c": "critical",
              "f": "intermediate",
              "v": "Measures how fast deals move through the pipeline and how accurately the team forecasts revenue. Executives see whether the pipeline is healthy enough to hit quarterly targets, which stages are bottlenecks, and whether forecasts consistently over- or under-predict — enabling data-driven sales leadership rather than gut-feel forecasting.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:crm_opportunities\"` (opportunity_id, stage, amount, close_date, created_date, owner, probability, forecast_category)",
              "q": "index=business sourcetype=\"dbx:crm_opportunities\" earliest=-90d\n| eval days_in_pipeline=round((now()-strptime(created_date,\"%Y-%m-%d\"))/86400, 0)\n| eval is_won=if(stage=\"Closed Won\", 1, 0)\n| eval is_lost=if(stage=\"Closed Lost\", 1, 0)\n| eval weighted_value=amount*probability/100\n| stats sum(amount) as total_pipeline, sum(weighted_value) as weighted_pipeline,\n        sum(eval(if(is_won=1,amount,0))) as won_revenue,\n        sum(eval(if(is_lost=1,amount,0))) as lost_revenue,\n        avg(days_in_pipeline) as avg_days_in_pipeline,\n        dc(opportunity_id) as total_deals,\n        sum(is_won) as won_deals, sum(is_lost) as lost_deals by forecast_category\n| eval win_rate=round(100*won_deals/(won_deals+lost_deals), 1)\n| eval pipeline_coverage=round(total_pipeline/won_revenue, 1)\n| table forecast_category, total_deals, total_pipeline, weighted_pipeline, won_revenue, win_rate, avg_days_in_pipeline, pipeline_coverage\n| sort forecast_category",
              "m": "(1) Use DB Connect to query CRM opportunities table on a schedule (hourly or daily); (2) map CRM stage names to your pipeline stages; (3) build quarter-over-quarter comparison for forecast accuracy; (4) alert when pipeline coverage drops below 3x target; (5) segment by sales team, region, and product line for management reviews.",
              "z": "Funnel (pipeline by stage), Single value (weighted pipeline, win rate, avg deal cycle), Bar chart (pipeline by forecast category), Line chart (pipeline trend over time).",
              "kfp": "Quarter-end sandbagging is the classic CRM pattern where reps move **CloseDate** out of the current fiscal quarter in the last few business days to protect recognized quota, which temporarily shrinks `pipeline_amt` and makes **pipeline_coverage** look worse even though the quarter is still on track. A mid-quarter **OwnerId** mass-change from territory realignment (account-team or sharing-rule churn) reassigns hundreds of opps in one batch; for two to three days your per-rep and per-region cut of the same SPL can swing while territories settle. A deal that went **Closed Lost** and was later re-opened to **Negotiation/Review** then **Closed Won** looks like a brand-new long-cycle opp if you only read point-in-time `Opportunity` rows without **OpportunityFieldHistory**, so `avg_days_in_pipeline` and velocity can look inflated. Subscription-plus-services quotes that explode into many **OpportunityLineItem** children can also distort booking-like totals if you accidentally count line-level ingest rows against parent **Amount**. Forecasting users sometimes override the category that **StageName** would imply; the visible **ForecastCategoryName** on the **Opportunity** can lag what **ForecastingItem** shows during a forecast-commit window, so Splunk and the in-app forecast subtotal will disagree for a few days. A sandbox refresh that replays ETL or bulk-updates `LastModifiedDate` on every record looks like a sudden spike in “changed” opps, and a VP-sales pipeline-scrub session on Friday afternoons can spike stage-transition rates without a genuine demand drop.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:crm_opportunities\"` (opportunity_id, stage, amount, close_date, created_date, owner, probability, forecast_category).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Use DB Connect to query CRM opportunities table on a schedule (hourly or daily); (2) map CRM stage names to your pipeline stages; (3) build quarter-over-quarter comparison for forecast accuracy; (4) alert when pipeline coverage drops below 3x target; (5) segment by sales team, region, and product line for management reviews.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:crm_opportunities\" earliest=-90d\n| eval days_in_pipeline=round((now()-strptime(created_date,\"%Y-%m-%d\"))/86400, 0)\n| eval is_won=if(stage=\"Closed Won\", 1, 0)\n| eval is_lost=if(stage=\"Closed Lost\", 1, 0)\n| eval weighted_value=amount*probability/100\n| stats sum(amount) as total_pipeline, sum(weighted_value) as weighted_pipeline,\n        sum(eval(if(is_won=1,amount,0))) as won_revenue,\n        sum(eval(if(is_lost=1,amount,0))) as lost_revenue,\n        avg(days_in_pipeline) as avg_days_in_pipeline,\n        dc(opportunity_id) as total_deals,\n        sum(is_won) as won_deals, sum(is_lost) as lost_deals by forecast_category\n| eval win_rate=round(100*won_deals/(won_deals+lost_deals), 1)\n| eval pipeline_coverage=round(total_pipeline/won_revenue, 1)\n| table forecast_category, total_deals, total_pipeline, weighted_pipeline, won_revenue, win_rate, avg_days_in_pipeline, pipeline_coverage\n| sort forecast_category\n```\n\nUnderstanding this SPL\n\n**Sales Pipeline Velocity and Forecast Accuracy** — Measures how fast deals move through the pipeline and how accurately the team forecasts revenue. Executives see whether the pipeline is healthy enough to hit quarterly targets, which stages are bottlenecks, and whether forecasts consistently over- or under-predict — enabling data-driven sales leadership rather than gut-feel forecasting.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:crm_opportunities\"` (opportunity_id, stage, amount, close_date, created_date, owner, probability, forecast_category). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:crm_opportunities. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:crm_opportunities\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_in_pipeline** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_won** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_lost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **weighted_value** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by forecast_category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **win_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **pipeline_coverage** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Sales Pipeline Velocity and Forecast Accuracy**): table forecast_category, total_deals, total_pipeline, weighted_pipeline, won_revenue, win_rate, avg_days_in_pipeline, pipeline_coverage\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Funnel (pipeline by stage), Single value (weighted pipeline, win rate, avg deal cycle), Bar chart (pipeline by forecast category), Line chart (pipeline trend over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We look at the Salesforce list of open deals, how much each is worth, and which forecast bucket it sits in, then compare that picture to what already closed. When the open pile is too thin for the revenue we still need, we say so in plain language before the quarter surprises the CEO.",
              "wv": "crawl",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "23.2.2",
              "n": "Revenue Recognition and Booking Trend",
              "c": "critical",
              "f": "intermediate",
              "v": "Tracks revenue bookings in near-real-time against targets — daily, weekly, and monthly — giving finance and sales leadership an up-to-the-day view of where the business stands relative to plan. Replaces end-of-month surprises with continuous visibility, enabling mid-course corrections.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:erp_orders\"` (order_id, order_date, revenue, product_line, region, customer_id, order_status)",
              "q": "index=business sourcetype=\"dbx:erp_orders\" order_status=\"booked\" earliest=-30d@d\n| bin _time span=1d\n| stats sum(revenue) as daily_revenue, dc(order_id) as orders, dc(customer_id) as unique_customers by _time\n| sort _time\n| streamstats sum(daily_revenue) as mtd_revenue, sum(orders) as mtd_orders\n| lookup monthly_targets.csv month AS month OUTPUT target_revenue\n| eval month=strftime(_time,\"%Y-%m\")\n| lookup monthly_targets.csv month OUTPUT target_revenue\n| eval pct_of_target=if(isnotnull(target_revenue), round(100*mtd_revenue/target_revenue,1), null())\n| eval days_elapsed=tonumber(strftime(_time,\"%d\"))\n| eval days_in_month=tonumber(strftime(relative_time(now(),\"+1mon@mon-1d\"),\"%d\"))\n| eval run_rate=round(mtd_revenue/days_elapsed*days_in_month, 0)\n| table _time, daily_revenue, mtd_revenue, pct_of_target, run_rate, mtd_orders, unique_customers",
              "m": "(1) Connect to ERP/billing system via DB Connect; (2) create `monthly_targets.csv` with month and target_revenue columns; (3) schedule every 4 hours for near-real-time visibility; (4) alert when run rate projects a miss of >10% against target; (5) segment by product line, region, and customer segment for management drill-down.",
              "z": "Line chart (daily revenue + cumulative MTD), Single value (MTD revenue, % of target, run rate), Bar chart (daily revenue vs same day last month), Gauge (MTD progress to target).",
              "kfp": "Reopening the FI or MM posting period after month-end can let late BUDAT FARR lines land in a prior MONAT in ACDOCA even when VBRK FKDAT still shows the old billing day, which widens book_vs_recog_delta under an otherwise clean policy. AB reversals post a new G/L document that offsets RV lines; a feed that omits XREVERSED and XREVERSAL on ACDOCA, or the XBLNR chain on VBRK, will double-count. Intercompany KUNAG lines inflate gross NETWR until an elimination view applies. A fiscal year variant such as V3 (April–March) puts 31 December BUDAT into the next GJAHR, so calendar strftime on FKDAT mis-buckets unless you join to T009B. PIT recognition versus over-time POBs can post RAR in a different month from the F2, which looks like a miss when the board expects a one-month booking-to-recognize match on long service contracts.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:erp_orders\"` (order_id, order_date, revenue, product_line, region, customer_id, order_status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Connect to ERP/billing system via DB Connect; (2) create `monthly_targets.csv` with month and target_revenue columns; (3) schedule every 4 hours for near-real-time visibility; (4) alert when run rate projects a miss of >10% against target; (5) segment by product line, region, and customer segment for management drill-down.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:erp_orders\" order_status=\"booked\" earliest=-30d@d\n| bin _time span=1d\n| stats sum(revenue) as daily_revenue, dc(order_id) as orders, dc(customer_id) as unique_customers by _time\n| sort _time\n| streamstats sum(daily_revenue) as mtd_revenue, sum(orders) as mtd_orders\n| lookup monthly_targets.csv month AS month OUTPUT target_revenue\n| eval month=strftime(_time,\"%Y-%m\")\n| lookup monthly_targets.csv month OUTPUT target_revenue\n| eval pct_of_target=if(isnotnull(target_revenue), round(100*mtd_revenue/target_revenue,1), null())\n| eval days_elapsed=tonumber(strftime(_time,\"%d\"))\n| eval days_in_month=tonumber(strftime(relative_time(now(),\"+1mon@mon-1d\"),\"%d\"))\n| eval run_rate=round(mtd_revenue/days_elapsed*days_in_month, 0)\n| table _time, daily_revenue, mtd_revenue, pct_of_target, run_rate, mtd_orders, unique_customers\n```\n\nUnderstanding this SPL\n\n**Revenue Recognition and Booking Trend** — Tracks revenue bookings in near-real-time against targets — daily, weekly, and monthly — giving finance and sales leadership an up-to-the-day view of where the business stands relative to plan. Replaces end-of-month surprises with continuous visibility, enabling mid-course corrections.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:erp_orders\"` (order_id, order_date, revenue, product_line, region, customer_id, order_status). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:erp_orders. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:erp_orders\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **month** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **pct_of_target** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_elapsed** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_in_month** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **run_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Revenue Recognition and Booking Trend**): table _time, daily_revenue, mtd_revenue, pct_of_target, run_rate, mtd_orders, unique_customers\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (daily revenue + cumulative MTD), Single value (MTD revenue, % of target, run rate), Bar chart (daily revenue vs same day last month), Gauge (MTD progress to target).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We show two clocks side by side: the day we printed customer invoices, and the day finance officially counted that money under revenue rules. When those clocks disagree by more than the business allows, we raise it early—like noticing your bank balance and your budget notebook no longer line up before the tax filing.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.2.3",
              "n": "Customer Churn Prediction and Early Warning",
              "c": "critical",
              "f": "advanced",
              "v": "Identifies customers showing churn risk signals — declining usage, reduced logins, support ticket spikes, late payments — before they cancel. Customer success teams get an actionable watchlist so they can intervene while there's still time to save the account. Retaining an existing customer costs 5-25x less than acquiring a new one.",
              "t": "Splunk DB Connect (Splunkbase 2686), HEC (product usage telemetry)",
              "d": "`index=app_events` (product usage/login events), `index=business` (subscription data, support tickets, payment history)",
              "q": "index=app_events sourcetype=\"app:login\" earliest=-90d\n| stats count as logins_90d, latest(_time) as last_login by customer_id\n| eval days_since_login=round((now()-last_login)/86400, 0)\n| join type=left customer_id [\n    search index=app_events sourcetype=\"app:feature_use\" earliest=-90d\n    | stats count as feature_uses, dc(feature_name) as features_used by customer_id\n]\n| join type=left customer_id [\n    search index=business sourcetype=\"support_ticket\" earliest=-90d\n    | stats count as tickets_90d, avg(eval(if(priority IN (\"high\",\"critical\"),1,0))) as pct_high_priority by customer_id\n]\n| join type=left customer_id [\n    search index=business sourcetype=\"dbx:billing\" earliest=-90d\n    | stats sum(eval(if(payment_status=\"late\",1,0))) as late_payments by customer_id\n]\n| fillnull value=0 feature_uses features_used tickets_90d late_payments pct_high_priority\n| eval churn_score=0\n| eval churn_score=churn_score + if(days_since_login > 30, 25, 0)\n| eval churn_score=churn_score + if(logins_90d < 10, 20, 0)\n| eval churn_score=churn_score + if(features_used < 3, 15, 0)\n| eval churn_score=churn_score + if(tickets_90d > 5, 20, 0)\n| eval churn_score=churn_score + if(late_payments > 0, 20, 0)\n| eval risk_level=case(churn_score >= 60, \"HIGH\", churn_score >= 30, \"MEDIUM\", 1=1, \"LOW\")\n| where churn_score >= 30\n| sort - churn_score\n| table customer_id, risk_level, churn_score, days_since_login, logins_90d, features_used, tickets_90d, late_payments",
              "m": "(1) Instrument product usage logging via HEC (logins, feature usage, session duration); (2) import billing and subscription data via DB Connect; (3) import support ticket data from ITSM; (4) tune churn score weights based on historical churn analysis; (5) alert customer success managers daily on new high-risk accounts; (6) track intervention outcomes to refine the scoring model.",
              "z": "Table (at-risk customers sorted by churn score), Gauge (portfolio health — % low risk), Bar chart (churn risk distribution), Line chart (weekly churn score trend).",
              "kfp": "**Seasonal contract exits** — fixed-scope engagements conclude on schedule so login trails evaporate without billing distress.**SKU migrations** — leadership directs adoption onto a successor offering while legacy telemetry flatlines though wallet share persists.**Finance-led mergers** — acquiring firms consolidate billing identities overnight producing simultaneous churn-and-add pseudo-motion.**CRM duplicate remediation** — duplicate merges rewrite identifiers yielding fictitious churn bursts.**Quarter-close timezone seams** — ninety-day fences anchored UTC versus Pacific slip engagement tiers one sidereal day.**Instrumentation pivots** — new structured logging inflates feature breadth without habits improving.**Enterprise pilots without invoicing** — procurement extends evaluations while billing extracts stay silent despite heavy traffic.**Portfolio sunsets at holding-company level** — child logos churn while enterprise ARR grows elsewhere.**Marketing-led login spikes** — bursts suppress staleness penalties hiding dormant cohorts.**Collections courtesy credits** — goodwill adjustments erase severe payment flags yet sentiment stays brittle.**Warehouse latency** — nightly billing snapshots omit intra-day retries clearing balances.**Alternate ticketing channels** — email-only disputes bypass incident ingestion.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HEC (product usage telemetry).\n• Ensure the following data sources are available: `index=app_events` (product usage/login events), `index=business` (subscription data, support tickets, payment history).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Instrument product usage logging via HEC (logins, feature usage, session duration); (2) import billing and subscription data via DB Connect; (3) import support ticket data from ITSM; (4) tune churn score weights based on historical churn analysis; (5) alert customer success managers daily on new high-risk accounts; (6) track intervention outcomes to refine the scoring model.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=app_events sourcetype=\"app:login\" earliest=-90d\n| stats count as logins_90d, latest(_time) as last_login by customer_id\n| eval days_since_login=round((now()-last_login)/86400, 0)\n| join type=left customer_id [\n    search index=app_events sourcetype=\"app:feature_use\" earliest=-90d\n    | stats count as feature_uses, dc(feature_name) as features_used by customer_id\n]\n| join type=left customer_id [\n    search index=business sourcetype=\"support_ticket\" earliest=-90d\n    | stats count as tickets_90d, avg(eval(if(priority IN (\"high\",\"critical\"),1,0))) as pct_high_priority by customer_id\n]\n| join type=left customer_id [\n    search index=business sourcetype=\"dbx:billing\" earliest=-90d\n    | stats sum(eval(if(payment_status=\"late\",1,0))) as late_payments by customer_id\n]\n| fillnull value=0 feature_uses features_used tickets_90d late_payments pct_high_priority\n| eval churn_score=0\n| eval churn_score=churn_score + if(days_since_login > 30, 25, 0)\n| eval churn_score=churn_score + if(logins_90d < 10, 20, 0)\n| eval churn_score=churn_score + if(features_used < 3, 15, 0)\n| eval churn_score=churn_score + if(tickets_90d > 5, 20, 0)\n| eval churn_score=churn_score + if(late_payments > 0, 20, 0)\n| eval risk_level=case(churn_score >= 60, \"HIGH\", churn_score >= 30, \"MEDIUM\", 1=1, \"LOW\")\n| where churn_score >= 30\n| sort - churn_score\n| table customer_id, risk_level, churn_score, days_since_login, logins_90d, features_used, tickets_90d, late_payments\n```\n\nUnderstanding this SPL\n\n**Customer Churn Prediction and Early Warning** — Identifies customers showing churn risk signals — declining usage, reduced logins, support ticket spikes, late payments — before they cancel. Customer success teams get an actionable watchlist so they can intervene while there's still time to save the account. Retaining an existing customer costs 5-25x less than acquiring a new one.\n\nDocumented **Data sources**: `index=app_events` (product usage/login events), `index=business` (subscription data, support tickets, payment history). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HEC (product usage telemetry). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: app_events; **sourcetype**: app:login. 
If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=app_events, sourcetype=\"app:login\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by customer_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **days_since_login** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **churn_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **churn_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **churn_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **churn_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **churn_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **churn_score** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **risk_level** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where churn_score >= 30` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Customer Churn Prediction and Early Warning**): table customer_id, risk_level, churn_score, days_since_login, logins_90d, features_used, tickets_90d, late_payments\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (at-risk customers sorted by churn score), Gauge (portfolio health — % low risk), Bar chart (churn risk distribution), Line chart (weekly churn score trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We blend login rhythm, breadth of product use, noisy support queues, and serious billing slips — when several weaken together, customer-success teammates get a ranked heads-up before subscriptions quietly disappear.",
              "wv": "crawl",
              "mtype": [
                "Business"
              ],
              "ind": "SaaS, Subscription, Telecom",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "23.2.4",
              "n": "Subscription Renewal and Expansion Pipeline",
              "c": "high",
              "f": "intermediate",
              "v": "Shows the upcoming renewal pipeline — which subscriptions are due for renewal, their current health, and expansion potential based on usage. Account managers see a prioritised renewal list with risk indicators, helping them focus on the accounts most likely to churn or most ripe for upsell.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:subscriptions\"` (customer_id, subscription_id, renewal_date, arr, product_tier, usage_pct)",
              "q": "index=business sourcetype=\"dbx:subscriptions\" earliest=-1d@d latest=now()\n| eval renewal_epoch=strptime(renewal_date, \"%Y-%m-%d\")\n| eval days_to_renewal=round((renewal_epoch-now())/86400, 0)\n| where days_to_renewal <= 90 AND days_to_renewal >= 0\n| eval renewal_urgency=case(days_to_renewal <= 30, \"URGENT\", days_to_renewal <= 60, \"APPROACHING\", 1=1, \"UPCOMING\")\n| eval expansion_signal=case(usage_pct >= 80, \"EXPAND — high usage\", usage_pct >= 50, \"HEALTHY\", 1=1, \"AT RISK — low usage\")\n| stats sum(arr) as total_arr_at_risk, dc(subscription_id) as subscriptions by renewal_urgency, expansion_signal\n| sort renewal_urgency\n| table renewal_urgency, expansion_signal, subscriptions, total_arr_at_risk",
              "m": "(1) Import subscription records via DB Connect including renewal dates, ARR, and product tier; (2) join with product usage data to calculate usage_pct against entitlement; (3) schedule daily and feed to CRM/account management tools; (4) alert when total ARR at risk in the 30-day window exceeds threshold; (5) create a detailed drill-down showing individual accounts with health scores.",
              "z": "Table (renewals by urgency and signal), Single value (total ARR renewing in 30/60/90 days), Stacked bar (renewal pipeline by health), Timeline (renewal schedule).",
              "kfp": "**Silent period rollovers** — Stripe advances **`current_period_end`** immediately after successful auto-charges, temporarily removing logos from an urgent slicer until the next horizon unless you explicitly bucket lifecycle exits.**Finance-grade ARR mismatch** — treasury-approved **`committed_arr`** rarely equals naive **`unit_amount × quantity × 12`** when ramp deals, vouchers, FX hedges, or prepaid waterfalls apply.**Zuora / CPQ amendment forks** — subscription-version splits emit distinct **`TermEndDate`** rows tied to one **`OriginalSubscriptionId`**, so aggregate dashboards can hide intra-quarter mini-renewals unless you pivot at **version × term-end**.**CRM overlay staleness** — Salesforce **`LastModifiedDate`** lag lets **`expansion_pipeline_usd`** read zero while Desk-less quoting progresses offline.**Usage troughs** — holiday weekends or fiscal-year invoice freezes shrink **`usage_pct`** without adoption loss.**Mixed tiers** — parent accounts bundle sandbox SKUs whose telemetry stays idle beside paid workloads.**Legal holds / procurement pauses** — contractual moratoriums postpone signatures yet billing dates remain visible.**Acquisition absorbs** — acquired subsidiaries remap **`customer_id`** mid-quarter producing phantom churn-expansion pairs.**Non-production tenants** — QA stacks indexed beside prod inflate faux urgency unless gated by **`environment`**.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:subscriptions\"` (customer_id, subscription_id, renewal_date, arr, product_tier, usage_pct).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import subscription records via DB Connect including renewal dates, ARR, and product tier; (2) join with product usage data to calculate usage_pct against entitlement; (3) schedule daily and feed to CRM/account management tools; (4) alert when total ARR at risk in the 30-day window exceeds threshold; (5) create a detailed drill-down showing individual accounts with health scores.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:subscriptions\" earliest=-1d@d latest=now()\n| eval renewal_epoch=strptime(renewal_date, \"%Y-%m-%d\")\n| eval days_to_renewal=round((renewal_epoch-now())/86400, 0)\n| where days_to_renewal <= 90 AND days_to_renewal >= 0\n| eval renewal_urgency=case(days_to_renewal <= 30, \"URGENT\", days_to_renewal <= 60, \"APPROACHING\", 1=1, \"UPCOMING\")\n| eval expansion_signal=case(usage_pct >= 80, \"EXPAND — high usage\", usage_pct >= 50, \"HEALTHY\", 1=1, \"AT RISK — low usage\")\n| stats sum(arr) as total_arr_at_risk, dc(subscription_id) as subscriptions by renewal_urgency, expansion_signal\n| sort renewal_urgency\n| table renewal_urgency, expansion_signal, subscriptions, total_arr_at_risk\n```\n\nUnderstanding this SPL\n\n**Subscription Renewal and Expansion Pipeline** — Shows the upcoming renewal pipeline — which subscriptions are due for renewal, their current health, and expansion potential based on usage. Account managers see a prioritised renewal list with risk indicators, helping them focus on the accounts most likely to churn or most ripe for upsell.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:subscriptions\"` (customer_id, subscription_id, renewal_date, arr, product_tier, usage_pct). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:subscriptions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:subscriptions\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **renewal_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_renewal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where days_to_renewal <= 90 AND days_to_renewal >= 0` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **renewal_urgency** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **expansion_signal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by renewal_urgency, expansion_signal** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Subscription Renewal and Expansion Pipeline**): table renewal_urgency, expansion_signal, subscriptions, total_arr_at_risk\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (renewals by urgency and signal), Single value (total ARR renewing in 30/60/90 days), Stacked bar (renewal pipeline by health), Timeline (renewal schedule).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We line up whose agreements end soon with how strongly people actually use the product and whether a bigger deal is already moving, so customer-facing teams chase renewals in fair order.",
              "mtype": [
                "Business"
              ],
              "ind": "SaaS, Subscription",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.2.5",
              "n": "Pricing and Discount Effectiveness Analysis",
              "c": "medium",
              "f": "intermediate",
              "v": "Analyses whether discounts actually increase win rates or just erode margin. Shows average selling price vs list price by product, region, and sales rep — identifying where discounting is excessive and where it's effective. Helps sales leadership set discount guardrails backed by data.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:crm_opportunities\"` (opportunity_id, amount, list_price, discount_pct, stage, product, region, owner)",
              "q": "index=business sourcetype=\"dbx:crm_opportunities\" stage IN (\"Closed Won\",\"Closed Lost\") earliest=-180d\n| eval discount_pct=if(isnotnull(discount_pct), discount_pct, round(100*(list_price-amount)/list_price,1))\n| eval discount_band=case(\n    discount_pct=0, \"No Discount\",\n    discount_pct<=10, \"1-10%\",\n    discount_pct<=20, \"11-20%\",\n    discount_pct<=30, \"21-30%\",\n    1=1, \">30%\")\n| eval is_won=if(stage=\"Closed Won\", 1, 0)\n| stats count as deals, sum(is_won) as wins, avg(amount) as avg_deal_size, avg(discount_pct) as avg_discount by discount_band\n| eval win_rate=round(100*wins/deals, 1)\n| eval effective=if(win_rate > 50 AND avg_discount < 15, \"Effective\", if(win_rate < 30, \"Ineffective — not winning\", \"Margin erosion\"))\n| sort discount_band\n| table discount_band, deals, wins, win_rate, avg_deal_size, avg_discount, effective",
              "m": "(1) Import opportunity data including list price and actual selling price; (2) calculate discount percentage at deal level; (3) run quarterly for pricing reviews; (4) alert when any rep's average discount exceeds the policy threshold; (5) segment by product, region, and deal size for nuanced analysis.",
              "z": "Bar chart (win rate by discount band), Table (discount effectiveness), Scatter plot (discount % vs deal size), Single value (overall average discount).",
              "kfp": "A single **list-price uplift** pushed at week five of the quarter silently re-bases every downstream `discount_pct` even when deal points did not move, so Margin-erosion alarms spike without behavioral change until Finance publishes both old and new pricebook IDs side by side. **Multi-product bundles** frequently carry five percent haircut on commodity SKUs alongside forty percent on shelfware in the same Opportunity — averaging to twenty-two percent trips the Margin erosion verdict while each line obeyed policy. **Loss-leader whale hunts** record near-one-hundred percent concession on pilots intentionally structured as paid pilots elsewhere on paper, lifting win-rate inside the highest discount band without repeatable economics. **Regional pricebooks** (USD versus EUR versus JPY lists at PAR FX snapshots) mean identical Salesforce percentages erase different pocket-margin outcomes — Splunk sees percentage, not treasury-adjusted contribution. **Partner-led resale** injects distributor margin already carved out of net `amount`, mimicking thirty-plus percent discounts versus corporate list though revenue-share cleared Deal Desk. **End-of-quarter surge approvals** cluster inside the final ten days and inflate average concession versus quiet mid-quarter weeks independent of seller skill. **Ramp or prepaid multi-year constructs** sometimes stuff five-year TCV into Opportunity `amount` while list references Year-One SKU rows — computed `discount_pct` looks wildly generous versus Year-One ARR truth. **Free professional-services bundles** negotiated verbally shrink realized ASP yet rarely capture into CPQ-derived percentage fields. Each scenario requires pairing Splunk bands with ERP gross-margin proxies before withholding payouts.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:crm_opportunities\"` (opportunity_id, amount, list_price, discount_pct, stage, product, region, owner).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import opportunity data including list price and actual selling price; (2) calculate discount percentage at deal level; (3) run quarterly for pricing reviews; (4) alert when any rep's average discount exceeds the policy threshold; (5) segment by product, region, and deal size for nuanced analysis.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:crm_opportunities\" stage IN (\"Closed Won\",\"Closed Lost\") earliest=-180d\n| eval discount_pct=if(isnotnull(discount_pct), discount_pct, round(100*(list_price-amount)/list_price,1))\n| eval discount_band=case(\n    discount_pct=0, \"No Discount\",\n    discount_pct<=10, \"1-10%\",\n    discount_pct<=20, \"11-20%\",\n    discount_pct<=30, \"21-30%\",\n    1=1, \">30%\")\n| eval is_won=if(stage=\"Closed Won\", 1, 0)\n| stats count as deals, sum(is_won) as wins, avg(amount) as avg_deal_size, avg(discount_pct) as avg_discount by discount_band\n| eval win_rate=round(100*wins/deals, 1)\n| eval effective=if(win_rate > 50 AND avg_discount < 15, \"Effective\", if(win_rate < 30, \"Ineffective — not winning\", \"Margin erosion\"))\n| sort discount_band\n| table discount_band, deals, wins, win_rate, avg_deal_size, avg_discount, effective\n```\n\nUnderstanding this SPL\n\n**Pricing and Discount Effectiveness Analysis** — Analyses whether discounts actually increase win rates or just erode margin. Shows average selling price vs list price by product, region, and sales rep — identifying where discounting is excessive and where it's effective. Helps sales leadership set discount guardrails backed by data.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:crm_opportunities\"` (opportunity_id, amount, list_price, discount_pct, stage, product, region, owner). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:crm_opportunities. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:crm_opportunities\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **discount_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **discount_band** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_won** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by discount_band** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **win_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **effective** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Pricing and Discount Effectiveness Analysis**): table discount_band, deals, wins, win_rate, avg_deal_size, avg_discount, effective\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (win rate by discount band), Table (discount effectiveness), Scatter plot (discount % vs deal size), Single value (overall average discount).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We group closed deals by how big the markdown was and check whether those markdowns usually went hand-in-hand with wins. When teams cut prices steeply but still lose often, we say money walked out the door; when modest trims keep wins flowing, pricing stays sensible.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.2.6",
              "n": "Quota Attainment and Capacity Coverage",
              "c": "critical",
              "f": "intermediate",
              "v": "Compares booked revenue to quota by rep and region so leaders see who is on track before the quarter ends. We help you move deals, coaching, and territory support to the teams with the largest gap to plan.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:crm_opportunities\"` (owner, region, stage, amount, close_date), `sales_quota.csv` (owner, quarter, quota_amount)",
              "q": "index=business sourcetype=\"dbx:crm_opportunities\" stage=\"Closed Won\" earliest=-90d\n| eval qn=ceil(tonumber(strftime(strptime(close_date,\"%Y-%m-%d\"),\"%m\"))/3)\n| eval quarter=strftime(strptime(close_date,\"%Y-%m-%d\"),\"%Y\").\"-Q\".tostring(qn)\n| stats sum(amount) as booked by owner, region, quarter\n| lookup sales_quota.csv owner quarter OUTPUT quota_amount\n| eval attainment_pct=if(quota_amount>0, round(100*booked/quota_amount,1), null())\n| eval gap_to_quota=quota_amount-booked\n| where gap_to_quota>0\n| sort - gap_to_quota\n| table owner, region, quarter, quota_amount, booked, attainment_pct, gap_to_quota",
              "m": "(1) Export closed-won revenue and owner keys from your customer relationship management system on a nightly DB Connect schedule; (2) maintain `sales_quota.csv` with fiscal quarter and approved quota amounts; (3) email sales leadership each Monday with reps below eighty percent attainment entering the final month of the quarter.",
              "z": "Bar chart (attainment by rep), Table (gap to quota), Heatmap (region × quarter), Single value (team blended attainment).",
              "kfp": "RevOps publishes revised quarterly denominators inside Xactly while Anaplan still streams yesterday’s baseline—Splunk attainment denominators disagree until both planners stamp the identical quota_version token on Monday morning exports. Forecast-category edits sliding deals from Commit back down into Pipeline shrink weighted_pipeline overnight even though sellers merely tightened optimism rather than surrendering dollars—coverage_ratio dips without incremental churn in StageName progression. Acquisition hires imported without quota_plan rows inherit phantom zero denominators until HR feeding catches up, flashing synthetic zero attainment despite negotiated carve-outs inside spreadsheets outside Salesforce. Extended parental-leave cohorts inherit standard tenure clocks unless People Ops pushes LOA-adjusted ramp overrides through `hire_epoch`, producing punitive attainment_pct swings entirely procedural. Legacy opportunities lingering beyond twelve months inflate weighted_pipeline totals despite AE skepticism—coverage_ratio exaggerates readiness unless hygiene filters purge stalled identifiers credibly excluded by CRM governance rules.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:crm_opportunities\"` (owner, region, stage, amount, close_date), `sales_quota.csv` (owner, quarter, quota_amount).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export closed-won revenue and owner keys from your customer relationship management system on a nightly DB Connect schedule; (2) maintain `sales_quota.csv` with fiscal quarter and approved quota amounts; (3) email sales leadership each Monday with reps below eighty percent attainment entering the final month of the quarter.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:crm_opportunities\" stage=\"Closed Won\" earliest=-90d\n| eval qn=ceil(tonumber(strftime(strptime(close_date,\"%Y-%m-%d\"),\"%m\"))/3)\n| eval quarter=strftime(strptime(close_date,\"%Y-%m-%d\"),\"%Y\").\"-Q\".tostring(qn)\n| stats sum(amount) as booked by owner, region, quarter\n| lookup sales_quota.csv owner quarter OUTPUT quota_amount\n| eval attainment_pct=if(quota_amount>0, round(100*booked/quota_amount,1), null())\n| eval gap_to_quota=quota_amount-booked\n| where gap_to_quota>0\n| sort - gap_to_quota\n| table owner, region, quarter, quota_amount, booked, attainment_pct, gap_to_quota\n```\n\nUnderstanding this SPL\n\n**Quota Attainment and Capacity Coverage** — Compares booked revenue to quota by rep and region so leaders see who is on track before the quarter ends. We help you move deals, coaching, and territory support to the teams with the largest gap to plan.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:crm_opportunities\"` (owner, region, stage, amount, close_date), `sales_quota.csv` (owner, quarter, quota_amount). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:crm_opportunities. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:crm_opportunities\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **qn** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **quarter** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by owner, region, quarter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **attainment_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **gap_to_quota** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where gap_to_quota>0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Quota Attainment and Capacity Coverage**): table owner, region, quarter, quota_amount, booked, attainment_pct, gap_to_quota\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (attainment by rep), Table (gap to quota), Heatmap (region × quarter), Single value (team blended attainment).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "Leadership compares each seller’s booked dollars against the quota they truly owe this quarter—including starter ramps—and checks whether remaining open deals, weighted realistically, stack high enough versus what is left to carry. That pairing exposes who is pacing safely versus who merely hoards stale pipe.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.2.7",
              "n": "Average Contract Value and Deal Size Mix",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks whether new business is trending larger or smaller so product and packaging teams can adjust offers before average contract value drifts. We help finance stress-test forecasts when the mix shifts toward many small deals or a few giants.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:crm_opportunities\"` (opportunity_id, deal_type, amount, stage, product_line, close_date)",
              "q": "index=business sourcetype=\"dbx:crm_opportunities\" stage=\"Closed Won\" deal_type=\"New Business\" earliest=-180d\n| eval close_epoch=strptime(close_date,\"%Y-%m-%d\")\n| eval month=strftime(close_epoch,\"%Y-%m\")\n| eval deal_band=case(\n    amount < 10000, \"<10k\",\n    amount < 50000, \"10k-50k\",\n    amount < 250000, \"50k-250k\",\n    1=1, \"250k+\")\n| stats count as deals, sum(amount) as revenue, avg(amount) as acv by month, deal_band\n| sort month, deal_band\n| table month, deal_band, deals, revenue, acv",
              "m": "(1) Require deal type and product line on closed opportunities in your source system; (2) import history for at least six months to see mix shifts; (3) review monthly with revenue operations and adjust campaign targeting when small-deal share spikes unexpectedly.",
              "z": "Stacked bar (revenue by deal band over time), Line chart (average contract value trend), Table (mix percentages), Single value (current quarter average contract value).",
              "kfp": "Asp surges when a solitary **nine-figure strategic** record books in a thin quarter—the mean moves while the median stays flat until you filter that OppKey out. Ramp cohorts stack many sub-rung transactions; mean deal size slides even when Enterprise win quality held because headcount—not offer—shifted. Account industry labels refresh mid-quarter after data-steward scrub; yesterday’s Retail row becomes Healthcare today, so segment-mix bars jump without any offer change. A multi-year signature posts in January while revenue recognition spreads across thirty-six months; Splunk tied to close-month still shows one fat event. Treasury FX snapshots for CurrencyIsoCode EUR or JPY lag the board’s daily rate, so USD ASP can wiggle though local list prices stayed put. Packaging teams re-bundle three add-ons into one SKU; booked_amount climbs while seats held flat, mimicking intentional ASP lift. Partner co-sell rows sometimes record distributor margin already folded into net Amount, overstating headline ASP versus direct SKUs when you blend channels. Probability-weighted Stage filters you may use in other searches will drop late-stage-but-lost pipe and should never be mixed here—this panel is Closed Won truth only. Sandbox refreshes duplicate OppKey hashes for a weekend and can double counts until incremental jobs reset watermarks. CloseDate edits that walk a record back into a prior fiscal week break cohort_month YoY compares for that slice only. Partial quarters compared to full prior-year quarters exaggerate YoY swings; constrain close_quarter to complete fiscal periods before publishing. Creation-date cohorts versus CloseDate cohorts diverge six to nine months on long cycles—keep one definition in the subtitle to stop false trajectory reads.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:crm_opportunities\"` (opportunity_id, deal_type, amount, stage, product_line, close_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Require deal type and product line on closed opportunities in your source system; (2) import history for at least six months to see mix shifts; (3) review monthly with revenue operations and adjust campaign targeting when small-deal share spikes unexpectedly.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:crm_opportunities\" stage=\"Closed Won\" deal_type=\"New Business\" earliest=-180d\n| eval close_epoch=strptime(close_date,\"%Y-%m-%d\")\n| eval month=strftime(close_epoch,\"%Y-%m\")\n| eval deal_band=case(\n    amount < 10000, \"<10k\",\n    amount < 50000, \"10k-50k\",\n    amount < 250000, \"50k-250k\",\n    1=1, \"250k+\")\n| stats count as deals, sum(amount) as revenue, avg(amount) as acv by month, deal_band\n| sort month, deal_band\n| table month, deal_band, deals, revenue, acv\n```\n\nUnderstanding this SPL\n\n**Average Contract Value and Deal Size Mix** — Tracks whether new business is trending larger or smaller so product and packaging teams can adjust offers before average contract value drifts. We help finance stress-test forecasts when the mix shifts toward many small deals or a few giants.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:crm_opportunities\"` (opportunity_id, deal_type, amount, stage, product_line, close_date). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:crm_opportunities. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:crm_opportunities\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **close_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **month** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **deal_band** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by month, deal_band** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Average Contract Value and Deal Size Mix**): table month, deal_band, deals, revenue, acv\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (revenue by deal band over time), Line chart (average contract value trend), Table (mix percentages), Single value (current quarter average contract value).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch whether typical closed-won dollars shrink or swell across quarters, and whether the blend tilts toward bigger customer tiers, longer subscriptions, or different product families. That tells finance if revenue quality is climbing or quietly sliding toward smaller baskets even when win counts look busy.",
              "mtype": [
                "Business"
              ],
              "ind": "SaaS, Professional Services",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.2.8",
              "n": "Win–Loss Reason Coding and Competitive Rate",
              "c": "high",
              "f": "intermediate",
              "v": "Summarises why deals are won or lost and how often named competitors appear so product and enablement invest in the right battlecards. We help you reduce repeated losses to the same objection without relying on anecdotal win stories alone.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:crm_opportunities\"` (stage, loss_reason, win_reason, competitor, amount, region)",
              "q": "index=business sourcetype=\"dbx:crm_opportunities\" stage=\"Closed Lost\" earliest=-180d\n| where isnotnull(loss_reason)\n| stats count as deals, sum(amount) as pipeline by loss_reason, competitor\n| eventstats sum(deals) as total_lost\n| eval share_pct=round(100*deals/total_lost,1)\n| sort - deals\n| head 30\n| table loss_reason, competitor, deals, pipeline, share_pct",
              "m": "(1) Enforce structured loss and competitor picklists in your customer relationship management close workflow; (2) replicate closed opportunity rows including reasons via DB Connect; (3) run a bi-weekly review with product marketing when any single loss reason exceeds ten percent of losses for two periods in a row.",
              "z": "Bar chart (top loss reasons), Table (competitor × reason), Pie chart (loss reason mix), Single value (losses with competitor tagged percent).",
              "kfp": "**Multi-segment quota books** assign one seller both **renewal** and **new-logo** targets yet Splunk collapses them into a single **`quota_amt`**, overstating **`coverage_ratio`** when **`Type`** filters exclude **renewal** pipe from the numerator only. **Partner-sourced overlays** credit **same `AccountId`** opps to **channel managers** whose **Quota** rows still show **enterprise** bottoms-up numbers, so **`coverage_band`** flashes **two_x** though the **account** is genuinely well covered through **distribution**. **ForecastCategoryName overrides** lag **Collaborative Forecasting** **ForecastingItem** rows during weekly **forecast review** locks; Splunk reads **Opportunity** picklist text while managers commit in the **forecast grid**, inflating **`attainment_fcst_pct`** until nightly **Einstein** sync completes. **CurrencyIsoCode** mismatches (**USD quota** vs **EUR opps**) compress **`coverage_ratio`** after **FX** tables slip a day versus treasury marks. **Territory2 mass moves** orphan **`Territory2Id`** joins mid-quarter — **`lookup`** tables referencing stale **`territory_label`** buckets mis-attribute **`tier_stride`** coaching cues until **`salesforce:territory_assignment`** catches up. **Split-opportunity overlays** (**overlay AE + core AE**) duplicate **`Amount`** rows across **`OwnerId`** copies unless ingest collapses via **`SplitPercentage`**, distorting **`rep_book_pct`** Pareto tails. **Historical blend coefficients** (**`hist_stage_blend_pct`**) anchored to **InsightSquared** benchmarks drift when **Salesforce OpportunityStage** probability knobs reset during **Sales Path** edits — **`hist_blend`** stays stale and **`coverage_band`** reads artificially optimistic until CSV refresh. **Quarter-start sandbox refreshes** replay **`CloseDate`** edits so **`open_pipeline_fq`** briefly doubles until **`dedup`** checkpoints reset.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for Apache Web Server](https://splunkbase.splunk.com/app/3186)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:crm_opportunities\"` (stage, loss_reason, win_reason, competitor, amount, region).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Enforce structured loss and competitor picklists in your customer relationship management close workflow; (2) replicate closed opportunity rows including reasons via DB Connect; (3) run a bi-weekly review with product marketing when any single loss reason exceeds ten percent of losses for two periods in a row.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:crm_opportunities\" stage=\"Closed Lost\" earliest=-180d\n| where isnotnull(loss_reason)\n| stats count as deals, sum(amount) as pipeline by loss_reason, competitor\n| eventstats sum(deals) as total_lost\n| eval share_pct=round(100*deals/total_lost,1)\n| sort - deals\n| head 30\n| table loss_reason, competitor, deals, pipeline, share_pct\n```\n\nUnderstanding this SPL\n\n**Win–Loss Reason Coding and Competitive Rate** — Summarises why deals are won or lost and how often named competitors appear so product and enablement invest in the right battlecards. We help you reduce repeated losses to the same objection without relying on anecdotal win stories alone.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:crm_opportunities\"` (stage, loss_reason, win_reason, competitor, amount, region). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:crm_opportunities. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:crm_opportunities\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(loss_reason)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by loss_reason, competitor** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **share_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Pipeline stage (see **Win–Loss Reason Coding and Competitive Rate**): table loss_reason, competitor, deals, pipeline, share_pct\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. 
Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (top loss reasons), Table (competitor × reason), Pie chart (loss reason mix), Single value (losses with competitor tagged percent).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We weigh still-open deals scheduled this quarter against each seller's official quota so thin cushion shows before forecasts bite. We map who carries most booked dollars versus who stalls one stage for months so coaches rebalance territories fairly.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 41.2,
          "qd": {
            "gold": 0,
            "silver": 2,
            "bronze": 6,
            "none": 0
          }
        },
        {
          "i": "23.3",
          "n": "Marketing Performance & Attribution",
          "u": [
            {
              "i": "23.3.1",
              "n": "Marketing Campaign ROI by Channel",
              "c": "high",
              "f": "intermediate",
              "v": "Calculates return on investment for each marketing channel — paid search, social, email, events, content — by connecting campaign spend to pipeline generated and revenue closed. CMOs see which channels deliver positive ROI and which are burning budget, enabling real-time budget reallocation rather than waiting for quarterly reports.",
              "t": "Splunk DB Connect (Splunkbase 2686), HEC",
              "d": "`index=business` (CRM opportunities with campaign source, ad spend data), `index=web` (UTM-tagged traffic)",
              "q": "index=business sourcetype=\"dbx:crm_opportunities\" earliest=-90d\n| stats sum(amount) as pipeline_generated, sum(eval(if(stage=\"Closed Won\",amount,0))) as revenue_closed, dc(opportunity_id) as deals by campaign_source\n| join type=left campaign_source [\n    | inputlookup marketing_spend.csv\n    | stats sum(spend) as total_spend by campaign_source\n]\n| fillnull value=0 total_spend\n| eval roi=if(total_spend>0, round((revenue_closed-total_spend)/total_spend*100, 1), null())\n| eval cost_per_deal=if(deals>0, round(total_spend/deals, 0), null())\n| eval pipeline_to_spend=if(total_spend>0, round(pipeline_generated/total_spend, 1), null())\n| sort - roi\n| table campaign_source, total_spend, deals, pipeline_generated, revenue_closed, roi, cost_per_deal, pipeline_to_spend",
              "m": "(1) Tag CRM opportunities with campaign source using UTM parameters or CRM campaign membership; (2) maintain `marketing_spend.csv` with monthly spend by channel — update from finance/marketing ops; (3) schedule monthly for marketing reviews; (4) add time-to-revenue calculation by comparing opportunity creation date to close date; (5) segment by customer segment (enterprise, mid-market, SMB).",
              "z": "Bar chart (ROI by channel), Table (full metrics per channel), Bubble chart (spend vs revenue, bubble size = deals), Single value (blended ROI, total marketing-sourced revenue).",
              "kfp": "ROI swings when RevOps narrows or widens the attribution window day-count without renaming dashboards — leadership reads the shift as channel efficiency rather than bookkeeping; brand lifts from podcasts or outdoor sponsorship inflate downstream channels because modeled halo lift rarely feeds Splunk alongside GA-assisted conversions; offline conversions uploaded two to four weeks later via Google Offline Conversion Import or LinkedIn CAPI spike perceived CPA before catching closed-won dollars in CRM; inconsistent UTM casing across regions duplicates **campaign_key** rows until governance collapses synonyms; finance posts mid-month media true-ups after pacing spreadsheets clear, pushing spend into the wrong fiscal month versus CRM opportunity-create cohorts B2B teams rely on; EUR invoice spikes translated at month-end FX rates diverge from CRM functional currency totals recognized earlier in the quarter; paid versus organic splits blur when prospects bookmark URLs stripped of UTMs after clicking ads; Salesforce Campaign influence totals disagree with warehouse snapshots when Bulk loads omit CampaignMember deletes during membership pruning; GDPR-heavy regions lose consent-grade referrer detail so modeled strip-rate benchmarks mislead versus North America panels; migrations between HubSpot lifecycle automation and Salesforce Opportunity stages reorder attribution credits without touching finance chargebacks for legacy quarters.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HEC.\n• Ensure the following data sources are available: `index=business` (CRM opportunities with campaign source, ad spend data), `index=web` (UTM-tagged traffic).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag CRM opportunities with campaign source using UTM parameters or CRM campaign membership; (2) maintain `marketing_spend.csv` with monthly spend by channel — update from finance/marketing ops; (3) schedule monthly for marketing reviews; (4) add time-to-revenue calculation by comparing opportunity creation date to close date; (5) segment by customer segment (enterprise, mid-market, SMB).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:crm_opportunities\" earliest=-90d\n| stats sum(amount) as pipeline_generated, sum(eval(if(stage=\"Closed Won\",amount,0))) as revenue_closed, dc(opportunity_id) as deals by campaign_source\n| join type=left campaign_source [\n    | inputlookup marketing_spend.csv\n    | stats sum(spend) as total_spend by campaign_source\n]\n| fillnull value=0 total_spend\n| eval roi=if(total_spend>0, round((revenue_closed-total_spend)/total_spend*100, 1), null())\n| eval cost_per_deal=if(deals>0, round(total_spend/deals, 0), null())\n| eval pipeline_to_spend=if(total_spend>0, round(pipeline_generated/total_spend, 1), null())\n| sort - roi\n| table campaign_source, total_spend, deals, pipeline_generated, revenue_closed, roi, cost_per_deal, pipeline_to_spend\n```\n\nUnderstanding this SPL\n\n**Marketing Campaign ROI by Channel** — Calculates return on investment for each marketing channel — paid search, social, email, events, content — by connecting campaign spend to pipeline generated and revenue closed. CMOs see which channels deliver positive ROI and which are burning budget, enabling real-time budget reallocation rather than waiting for quarterly reports.\n\nDocumented **Data sources**: `index=business` (CRM opportunities with campaign source, ad spend data), `index=web` (UTM-tagged traffic). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:crm_opportunities. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:crm_opportunities\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by campaign_source** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **roi** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cost_per_deal** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **pipeline_to_spend** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Marketing Campaign ROI by Channel**): table campaign_source, total_spend, deals, pipeline_generated, revenue_closed, roi, cost_per_deal, pipeline_to_spend\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (ROI by channel), Table (full metrics per channel), Bubble chart (spend vs revenue, bubble size = deals), Single value (blended ROI, total marketing-sourced revenue).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We compare what we spent on marketing programs to the revenue those touches helped bring in across every major channel, so the team can shift money toward the mixes that truly pay back instead of trusting a single ad site’s story.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.3.2",
              "n": "Lead-to-Revenue Funnel Conversion Rates",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks conversion rates at every stage of the marketing-sales funnel — visitor to lead, lead to MQL, MQL to SQL, SQL to opportunity, opportunity to closed-won. Identifies where the biggest leaks are and whether marketing is delivering quality leads that sales can close — the perennial question in every marketing-sales alignment meeting.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:crm_leads\"` and `sourcetype=\"dbx:crm_opportunities\"`",
              "q": "index=business sourcetype=\"dbx:crm_leads\" earliest=-90d\n| eval status_norm=lower(status)\n| stats dc(lead_id) as total_leads,\n        dc(eval(if(match(status_norm,\"qualified|mql|marketing.qualified\"),lead_id,null()))) as mqls,\n        dc(eval(if(match(status_norm,\"sql|sales.qualified|accepted\"),lead_id,null()))) as sqls,\n        dc(eval(if(isnotnull(converted_opportunity_id),lead_id,null()))) as opportunities,\n        dc(eval(if(match(status_norm,\"closed.won|won\"),lead_id,null()))) as closed_won\n| eval lead_to_mql=round(100*mqls/total_leads, 1)\n| eval mql_to_sql=round(100*sqls/mqls, 1)\n| eval sql_to_opp=round(100*opportunities/sqls, 1)\n| eval opp_to_won=round(100*closed_won/opportunities, 1)\n| eval overall=round(100*closed_won/total_leads, 2)\n| table total_leads, mqls, lead_to_mql, sqls, mql_to_sql, opportunities, sql_to_opp, closed_won, opp_to_won, overall",
              "m": "(1) Import lead and opportunity records via DB Connect; (2) map your CRM status values to the standard funnel stages; (3) schedule weekly; (4) alert when any stage conversion rate drops below historical baseline; (5) segment by lead source, geography, and product interest for actionable insights.",
              "z": "Funnel chart, Single values (conversion rate per stage), Line chart (weekly conversion trends), Bar chart (conversion by lead source).",
              "kfp": "Quarterly MAP scoring-model refreshes compress everyone below the “new MQL” bar until nurture catches up — Splunk shows a sharp lead→MQL dip that mirrors governance, not macro demand; seasonal webinar bursts inflate early-stage counts while downstream acceptance queues stay flat, resembling leakage though reps simply triaged backlog Monday morning; Salesforce sandbox refreshes replay historic Lead IDs with fresh `_time`, briefly doubling funnel denominators until dedupe keys activate; duplicate Lead rows after CSV imports without `Email` uniqueness lift totals_leads without matched lifecycle progression; territory round-robin freezes during holidays mimic SQL slump though inbound quality held steady; ABM accounts that bypass lead stages entirely leave MAP-derived funnel silent while Opportunities still appear — exclude ABM cohort in parallel panels when comparing MAP-centric ratios.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:crm_leads\"` and `sourcetype=\"dbx:crm_opportunities\"`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import lead and opportunity records via DB Connect; (2) map your CRM status values to the standard funnel stages; (3) schedule weekly; (4) alert when any stage conversion rate drops below historical baseline; (5) segment by lead source, geography, and product interest for actionable insights.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:crm_leads\" earliest=-90d\n| eval status_norm=lower(status)\n| stats dc(lead_id) as total_leads,\n        dc(eval(if(match(status_norm,\"qualified|mql|marketing.qualified\"),lead_id,null()))) as mqls,\n        dc(eval(if(match(status_norm,\"sql|sales.qualified|accepted\"),lead_id,null()))) as sqls,\n        dc(eval(if(isnotnull(converted_opportunity_id),lead_id,null()))) as opportunities,\n        dc(eval(if(match(status_norm,\"closed.won|won\"),lead_id,null()))) as closed_won\n| eval lead_to_mql=round(100*mqls/total_leads, 1)\n| eval mql_to_sql=round(100*sqls/mqls, 1)\n| eval sql_to_opp=round(100*opportunities/sqls, 1)\n| eval opp_to_won=round(100*closed_won/opportunities, 1)\n| eval overall=round(100*closed_won/total_leads, 2)\n| table total_leads, mqls, lead_to_mql, sqls, mql_to_sql, opportunities, sql_to_opp, closed_won, opp_to_won, overall\n```\n\nUnderstanding this SPL\n\n**Lead-to-Revenue Funnel Conversion Rates** — Tracks conversion rates at every stage of the marketing-sales funnel — visitor to lead, lead to MQL, MQL to SQL, SQL to opportunity, opportunity to closed-won. Identifies where the biggest leaks are and whether marketing is delivering quality leads that sales can close — the perennial question in every marketing-sales alignment meeting.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:crm_leads\"` and `sourcetype=\"dbx:crm_opportunities\"`. **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:crm_leads. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:crm_leads\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **status_norm** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• `eval` defines or adjusts **lead_to_mql** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **mql_to_sql** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sql_to_opp** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **opp_to_won** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **overall** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Lead-to-Revenue Funnel Conversion Rates**): table total_leads, mqls, lead_to_mql, sqls, mql_to_sql, opportunities, sql_to_opp, closed_won, opp_to_won, overall\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Funnel chart, Single values (conversion rate per stage), Line chart (weekly conversion trends), Bar chart (conversion by lead source).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch how many interested people move step by step from first raising their hand through marketing vetting, sales vetting, a real deal, and finally a signed win — broken out by where they came from — so we see where the pinch is instead of arguing from gut feelings.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.3.3",
              "n": "Email Campaign Performance and Engagement",
              "c": "medium",
              "f": "beginner",
              "v": "Consolidates email marketing metrics — open rates, click rates, unsubscribes, bounces — across all campaigns into a single dashboard. Marketing teams see which subject lines and content drive engagement and which cause unsubscribes, enabling continuous optimisation of email as a revenue channel.",
              "t": "HEC (email platform webhooks)",
              "d": "`index=business` `sourcetype=\"email_engagement\"` (campaign_id, event_type, recipient, timestamp)",
              "q": "index=business sourcetype=\"email_engagement\" earliest=-30d\n| stats dc(eval(if(event_type=\"sent\",recipient,null()))) as sent,\n        dc(eval(if(event_type=\"delivered\",recipient,null()))) as delivered,\n        dc(eval(if(event_type=\"opened\",recipient,null()))) as opened,\n        dc(eval(if(event_type=\"clicked\",recipient,null()))) as clicked,\n        dc(eval(if(event_type=\"unsubscribed\",recipient,null()))) as unsubscribed,\n        dc(eval(if(event_type=\"bounced\",recipient,null()))) as bounced by campaign_id\n| eval delivery_rate=round(100*delivered/sent, 1)\n| eval open_rate=round(100*opened/delivered, 1)\n| eval click_rate=round(100*clicked/delivered, 1)\n| eval unsub_rate=round(100*unsubscribed/delivered, 2)\n| eval bounce_rate=round(100*bounced/sent, 2)\n| sort - click_rate\n| table campaign_id, sent, delivered, delivery_rate, open_rate, click_rate, unsub_rate, bounce_rate",
              "m": "(1) Configure your email platform (Mailchimp, Marketo, HubSpot, Salesforce Marketing Cloud) to send engagement events via webhooks to Splunk HEC; (2) include campaign ID, event type, and recipient; (3) schedule daily summaries; (4) alert on bounce rates >5% or unsubscribe rates >1%; (5) compare A/B test variants using campaign_id segmentation.",
              "z": "Table (campaign metrics), Bar chart (open/click rates by campaign), Line chart (engagement trends over time), Single value (avg open rate, avg click rate).",
              "kfp": "Privacy-enhanced mailbox vendors prefetch invisible pixels minutes post-send — Splunk logs mimic curiosity bursts absent incremental link taps unless analysts overlay click-centric companion tiles beside headline charts. TLS handshake retries inside Asian transit carriers occasionally duplicate webhook deliveries seconds apart — transient bounce_evt duplication resembles harsh reputation swings until dedupe macros keyed on sg_message_id collapse repeats. Bulk seasonal sends aligned to retail peaks inflate baseline comparisons against SaaS-heavy Splunk cohorts — dashboards resemble ESP outages although cohort mixes shifted toward bargain hunters opening everything yet clicking nothing substantive. Warehouse CDC lag replaying ninety-day unsubscribe horizons backward collapses unsubscribe velocity histograms tied to legal holds unrelated to creative merit — Splunk timelines resemble orchestrated churn rather than dated audience fatigue unless immutable snapshots isolate immutable weekly extracts before Legal revises CRM consent artifacts retroactively.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC (email platform webhooks).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"email_engagement\"` (campaign_id, event_type, recipient, timestamp).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Configure your email platform (Mailchimp, Marketo, HubSpot, Salesforce Marketing Cloud) to send engagement events via webhooks to Splunk HEC; (2) include campaign ID, event type, and recipient; (3) schedule daily summaries; (4) alert on bounce rates >5% or unsubscribe rates >1%; (5) compare A/B test variants using campaign_id segmentation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"email_engagement\" earliest=-30d\n| stats dc(eval(if(event_type=\"sent\",recipient,null()))) as sent,\n        dc(eval(if(event_type=\"delivered\",recipient,null()))) as delivered,\n        dc(eval(if(event_type=\"opened\",recipient,null()))) as opened,\n        dc(eval(if(event_type=\"clicked\",recipient,null()))) as clicked,\n        dc(eval(if(event_type=\"unsubscribed\",recipient,null()))) as unsubscribed,\n        dc(eval(if(event_type=\"bounced\",recipient,null()))) as bounced by campaign_id\n| eval delivery_rate=round(100*delivered/sent, 1)\n| eval open_rate=round(100*opened/delivered, 1)\n| eval click_rate=round(100*clicked/delivered, 1)\n| eval unsub_rate=round(100*unsubscribed/delivered, 2)\n| eval bounce_rate=round(100*bounced/sent, 2)\n| sort - click_rate\n| table campaign_id, sent, delivered, delivery_rate, open_rate, click_rate, unsub_rate, bounce_rate\n```\n\nUnderstanding this SPL\n\n**Email Campaign Performance and Engagement** — Consolidates email marketing metrics — open rates, click rates, unsubscribes, bounces — across all campaigns into a single dashboard. Marketing teams see which subject lines and content drive engagement and which cause unsubscribes, enabling continuous optimisation of email as a revenue channel.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"email_engagement\"` (campaign_id, event_type, recipient, timestamp). **App/TA** (typical add-on context): HEC (email platform webhooks). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: email_engagement. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"email_engagement\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by campaign_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **delivery_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **open_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **click_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **unsub_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **bounce_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Email Campaign Performance and Engagement**): table campaign_id, sent, delivered, delivery_rate, open_rate, click_rate, unsub_rate, bounce_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (campaign metrics), Bar chart (open/click rates by campaign), Line chart (engagement trends over time), Single value (avg open rate, avg click rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We follow ordinary newsletters people asked for—whether anyone bothers tapping buttons, quietly walks away from lists, or warns providers mail feels shady—so storytellers freshen wording before noisy gripes pile up.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.3.4",
              "n": "Website Traffic Source and SEO Performance",
              "c": "medium",
              "f": "beginner",
              "v": "Breaks down website traffic by source — organic search, paid search, social, direct, referral — showing which channels drive the most visitors and which produce the highest quality engagement (measured by pages per session and time on site). Marketing teams see SEO effectiveness alongside paid performance.",
              "t": "Splunk Add-on for Apache Web Server (Splunkbase 3186)",
              "d": "`index=web` `sourcetype=\"access_combined\"` (clientip, uri, referer, status, bytes)",
              "q": "index=web sourcetype=\"access_combined\" status=200 NOT uri IN (\"/favicon.ico\",\"robots.txt\",\"/health\") earliest=-7d\n| eval session=clientip.\"_\".useragent\n| eval source=case(\n    match(referer,\"(?i)google\\.(com|co\\.\\w+)/search\"), \"Organic — Google\",\n    match(referer,\"(?i)bing\\.com/search\"), \"Organic — Bing\",\n    match(referer,\"(?i)(facebook|linkedin|twitter|instagram)\\.com\"), \"Social\",\n    match(uri,\"[?&]utm_medium=cpc\"), \"Paid Search\",\n    match(uri,\"[?&]utm_medium=email\"), \"Email\",\n    isnull(referer) OR referer=\"-\", \"Direct\",\n    1=1, \"Referral\")\n| stats dc(session) as sessions, dc(clientip) as visitors, count as pageviews by source\n| eval pages_per_session=round(pageviews/sessions, 1)\n| sort - sessions\n| table source, visitors, sessions, pageviews, pages_per_session",
              "m": "(1) Ensure web server logs capture referer and full URI including query strings; (2) customise source classification for your UTM conventions; (3) schedule daily and weekly comparisons; (4) alert on significant organic traffic drops (possible SEO issue or algorithm change); (5) add landing page analysis by cross-referencing source with first URI per session.",
              "z": "Pie chart (sessions by source), Bar chart (visitors by source), Table (quality metrics per source), Line chart (daily traffic by source).",
              "kfp": "A broad Google Search algorithm update can depress clicks and impressions in Search Console while your first-party Splunk session counts stay flat — the causal story is ranking and SERP volatility off-site, not a broken web-server feed or Splunk outage. Earned-media spikes (major press, broadcast, podcast) lift brand queries; organic Splunk rows rise with no matching on-site content change, and bounce can look high if readers land once and leave satisfied. Geo-IP database refreshes sometimes re-home mobile carrier ranges; analytics geography mixes swing while Apache sees identical subnets — resembling a regional acquisition shift without marketing intention. Country-domain redirect chains (`google.de` versus `google.com`) strip or reshape referers during tests — Direct rises briefly until hostname dictionaries absorb both ccTLDs and parent domains. Aggressive privacy modes null referers or strip query strings — Paid and Organic buckets shrink while Direct grows though invoices still reflect spend; reconcile against ad consoles rather than paging infra. Browser extensions and DNS-layer filters occasionally strip UTMs before navigation completes — Paid classification empties independent of bid strategy. Email or messaging opens using stripped preview frames yield clicks without meaningful referer — Splunk reads Direct though UTMs occasionally survive — blended tests confuse unless tagging lands on final HTTPS URLs. Mobile deep links jump straight into apps — landing URIs omit UTMs entirely unless SDK handlers append parameters server-side — Splunk understates attributed campaigns versus vendor dashboards. Single-page-app navigations after the first HTML fetch generate zero extra combined-log rows — engagement ratios diverge from analytics suites counting route transitions.",
              "refs": "[Splunk Add-on for Apache Web Server](https://splunkbase.splunk.com/app/3186)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Apache Web Server (Splunkbase 3186).\n• Ensure the following data sources are available: `index=web` `sourcetype=\"access_combined\"` (clientip, uri, referer, status, bytes).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure web server logs capture referer and full URI including query strings; (2) customise source classification for your UTM conventions; (3) schedule daily and weekly comparisons; (4) alert on significant organic traffic drops (possible SEO issue or algorithm change); (5) add landing page analysis by cross-referencing source with first URI per session.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\" status=200 NOT uri IN (\"/favicon.ico\",\"robots.txt\",\"/health\") earliest=-7d\n| eval session=clientip.\"_\".useragent\n| eval source=case(\n    match(referer,\"(?i)google\\.(com|co\\.\\w+)/search\"), \"Organic — Google\",\n    match(referer,\"(?i)bing\\.com/search\"), \"Organic — Bing\",\n    match(referer,\"(?i)(facebook|linkedin|twitter|instagram)\\.com\"), \"Social\",\n    match(uri,\"[?&]utm_medium=cpc\"), \"Paid Search\",\n    match(uri,\"[?&]utm_medium=email\"), \"Email\",\n    isnull(referer) OR referer=\"-\", \"Direct\",\n    1=1, \"Referral\")\n| stats dc(session) as sessions, dc(clientip) as visitors, count as pageviews by source\n| eval pages_per_session=round(pageviews/sessions, 1)\n| sort - sessions\n| table source, visitors, sessions, pageviews, pages_per_session\n```\n\nUnderstanding this SPL\n\n**Website Traffic Source and SEO Performance** — Breaks down website traffic by source — organic search, paid search, social, direct, referral — showing which channels drive the most visitors and which produce the highest quality engagement (measured by pages per session and time on site). Marketing teams see SEO effectiveness alongside paid performance.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` (clientip, uri, referer, status, bytes). **App/TA** (typical add-on context): Splunk Add-on for Apache Web Server (Splunkbase 3186). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **session** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **source** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by source** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pages_per_session** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Website Traffic Source and SEO Performance**): table source, visitors, sessions, pageviews, pages_per_session\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Pie chart (sessions by source), Bar chart (visitors by source), Table (quality metrics per source), Line chart (daily traffic by source).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We count real visits from your web-server diary lines and sort them by whether people came from search sites, clicked tagged ads, followed social links, or typed the address with no referral. That tells you where audiences originate before you spend more on ads or fixing pages.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "apache"
              ],
              "em": [
                "apache_httpd"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.3.5",
              "n": "Paid Media Cost Per Acquisition and Quality Score",
              "c": "high",
              "f": "intermediate",
              "v": "Divides advertising spend by attributed sign-ups or qualified leads so paid media teams stop optimising for cheap clicks that never buy. We help you reallocate budget toward campaigns that bring customers who actually convert downstream.",
              "t": "HEC (ad platform), Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"ad_spend\"` (campaign_id, spend, impressions, clicks, date), `index=business` `sourcetype=\"dbx:crm_leads\"` (lead_id, campaign_id, status)",
              "q": "index=business sourcetype=\"ad_spend\" earliest=-30d\n| stats sum(spend) as spend, sum(clicks) as clicks, sum(impressions) as impressions by campaign_id\n| join type=left campaign_id [\n    search index=business sourcetype=\"dbx:crm_leads\" earliest=-30d\n    | eval qualified=if(match(lower(status),\"qualified|mql\"),1,0)\n    | stats dc(lead_id) as leads, sum(qualified) as qualified_leads by campaign_id\n]\n| fillnull value=0 leads qualified_leads\n| eval cpa_spend=if(leads>0, round(spend/leads,2), null())\n| eval cpq=if(qualified_leads>0, round(spend/qualified_leads,2), null())\n| eval ctr_pct=if(impressions>0, round(100*clicks/impressions,2), null())\n| sort - spend\n| table campaign_id, spend, clicks, ctr_pct, leads, qualified_leads, cpa_spend, cpq",
              "m": "(1) Land daily campaign cost and click files from your advertising APIs into Splunk using HEC; (2) join leads on a shared campaign identifier from your marketing automation or customer relationship management system; (3) alert marketing when cost per qualified lead doubles week over week for any active campaign.",
              "z": "Scatter plot (spend vs qualified leads), Table (campaign efficiency), Bar chart (cost per qualified lead), Single value (blended cost per acquisition).",
              "kfp": "Treasury closes invoices using quarter-end FX prints while Splunk applies daily treasury curves — CPA ladders spike although bids never shifted until FX harmonisation reruns overnight. Concurrent UTC-boundary pulls from LinkedIn plus account-local snapshots from auction consoles duplicate conversions near midnight—CPA dips abruptly though budgets stayed steady until dedupe macros collapse overlapping pulls.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC (ad platform), Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"ad_spend\"` (campaign_id, spend, impressions, clicks, date), `index=business` `sourcetype=\"dbx:crm_leads\"` (lead_id, campaign_id, status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Land daily campaign cost and click files from your advertising APIs into Splunk using HEC; (2) join leads on a shared campaign identifier from your marketing automation or customer relationship management system; (3) alert marketing when cost per qualified lead doubles week over week for any active campaign.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"ad_spend\" earliest=-30d\n| stats sum(spend) as spend, sum(clicks) as clicks, sum(impressions) as impressions by campaign_id\n| join type=left campaign_id [\n    search index=business sourcetype=\"dbx:crm_leads\" earliest=-30d\n    | eval qualified=if(match(lower(status),\"qualified|mql\"),1,0)\n    | stats dc(lead_id) as leads, sum(qualified) as qualified_leads by campaign_id\n]\n| fillnull value=0 leads qualified_leads\n| eval cpa_spend=if(leads>0, round(spend/leads,2), null())\n| eval cpq=if(qualified_leads>0, round(spend/qualified_leads,2), null())\n| eval ctr_pct=if(impressions>0, round(100*clicks/impressions,2), null())\n| sort - spend\n| table campaign_id, spend, clicks, ctr_pct, leads, qualified_leads, cpa_spend, cpq\n```\n\nUnderstanding this SPL\n\n**Paid Media Cost Per Acquisition and Quality Score** — Divides advertising spend by attributed sign-ups or qualified leads so paid media teams stop optimising for cheap clicks that never buy. We help you reallocate budget toward campaigns that bring customers who actually convert downstream.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"ad_spend\"` (campaign_id, spend, impressions, clicks, date), `index=business` `sourcetype=\"dbx:crm_leads\"` (lead_id, campaign_id, status). **App/TA** (typical add-on context): HEC (ad platform), Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: ad_spend. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"ad_spend\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by campaign_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **cpa_spend** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cpq** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ctr_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Paid Media Cost Per Acquisition and Quality Score**): table campaign_id, spend, clicks, ctr_pct, leads, qualified_leads, cpa_spend, cpq\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Scatter plot (spend vs qualified leads), Table (campaign efficiency), Bar chart (cost per qualified lead), Single value (blended cost per acquisition).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch paid placements so pricey auctions cannot silently drain budgets before finance notices shrinking returns. We compare invoice-grade spend against measured conversions and auction-quality grades so teams rebalance bids before quarters slip.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.3.6",
              "n": "Content Engagement and Lead Conversion Lift",
              "c": "medium",
              "f": "intermediate",
              "v": "Links blog and resource centre engagement to lead creation so editorial investments can be judged like performance channels. We help you retire low-traffic pages that consume effort without pipeline impact.",
              "t": "Splunk Add-on for Apache Web Server (Splunkbase 3186), Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=web` `sourcetype=\"access_combined\"` (uri, clientip, status), `index=business` `sourcetype=\"dbx:crm_leads\"` (lead_id, landing_page, created_date)",
              "q": "index=web sourcetype=\"access_combined\" status=200 uri=\"/blog/*\" earliest=-30d\n| eval session=clientip.\"_\".useragent\n| stats dc(session) as sessions, count as views by uri\n| join type=left uri [\n    search index=business sourcetype=\"dbx:crm_leads\" earliest=-30d\n    | stats dc(lead_id) as leads by landing_page\n    | rename landing_page as uri\n]\n| fillnull value=0 leads\n| eval views_per_lead=if(leads>0, round(views/leads,0), null())\n| sort - views\n| head 25\n| table uri, sessions, views, leads, views_per_lead",
              "m": "(1) Standardise landing page URLs on forms so they match blog paths where possible; (2) import lead timestamps and landing page from customer relationship management nightly; (3) review monthly with content marketing to double down on topics with strong lead lift and archive thin content.",
              "z": "Bar chart (views by article), Table (lead conversion proxy), Line chart (sessions vs leads), Single value (blog-sourced leads).",
              "kfp": "Marketing operations renames lifecyclestage values (subscriber becomes marketingqualifiedlead) without rebuilding Splunk macros—mql_cnt jumps overnight although reading behavior never changed; freeze taxonomy edits monthly or snapshot HubSpot property history extracts next to workflow_event feeds. Hotjar suppresses recordings where EU sampling toggles tighten while GA4 continues counting hits—qualified_read ratios skew toward regions with permissive analytics though onsite UX stayed identical. Ahrefs keyword cannibalization splits rankings across duplicate asset_key URLs (http versus https, trailing slash drift); rank_delta spikes while Google Search Console impressions barely move until canonical tags unify across locales. Vidyard autoplay increments percent_viewed before humans engage—tutorial asset_family tiles inflate engagement_score until auto_play=false filters isolate deliberate viewers. Marketo progressive profiling emits successive Fill Out Form activity_id rows whenever reps tweak progressive fields mid-quarter—form_evt duplicates inflate denominators unless program_member_id dedupe filters remain enabled. Podcast podcast:download_event bursts after Apple Podcasts featuring spikes unrelated to refreshed blogs—organic attribution jumps although resource_center edits stayed untouched; isolate RSS cohorts before interpreting pillar_ratio ratios inside cluster dashboards. Freelance invoices consolidate twelve SME-hour adjustments into one lump freelancer_usd payment—cost_per_mql tiles spike although editorial throughput never slowed until amortization lookups split fees weekly across calendar months. Adobe Target personalization hides hero cta_hit variants behind experience IDs—Splunk captures optimizely:experiment_event exposures without suppressed DOM renders whenever anti-flicker snippets delay Hotjar snapshots; reconcile using Adobe Target tokens before blaming copywriters. Squarespace or Wix headless previews sometimes emit wix:form_event payloads using staging domains that never match production asset_key normalization—Splunk flags thin engagement even while production traffic healthy until preview IP allowlists exclude internal editors.",
              "refs": "[Splunk Add-on for Apache Web Server](https://splunkbase.splunk.com/app/3186), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for Apache Web Server (Splunkbase 3186), Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=web` `sourcetype=\"access_combined\"` (uri, clientip, status), `index=business` `sourcetype=\"dbx:crm_leads\"` (lead_id, landing_page, created_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Standardise landing page URLs on forms so they match blog paths where possible; (2) import lead timestamps and landing page from customer relationship management nightly; (3) review monthly with content marketing to double down on topics with strong lead lift and archive thin content.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\" status=200 uri=\"/blog/*\" earliest=-30d\n| eval session=clientip.\"_\".useragent\n| stats dc(session) as sessions, count as views by uri\n| join type=left uri [\n    search index=business sourcetype=\"dbx:crm_leads\" earliest=-30d\n    | stats dc(lead_id) as leads by landing_page\n    | rename landing_page as uri\n]\n| fillnull value=0 leads\n| eval views_per_lead=if(leads>0, round(views/leads,0), null())\n| sort - views\n| head 25\n| table uri, sessions, views, leads, views_per_lead\n```\n\nUnderstanding this SPL\n\n**Content Engagement and Lead Conversion Lift** — Links blog and resource centre engagement to lead creation so editorial investments can be judged like performance channels. We help you retire low-traffic pages that consume effort without pipeline impact.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` (uri, clientip, status), `index=business` `sourcetype=\"dbx:crm_leads\"` (lead_id, landing_page, created_date). **App/TA** (typical add-on context): Splunk Add-on for Apache Web Server (Splunkbase 3186), Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **session** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by uri** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **views_per_lead** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Pipeline stage (see **Content Engagement and Lead Conversion Lift**): table uri, sessions, views, leads, views_per_lead\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (views by article), Table (lead conversion proxy), Line chart (sessions vs leads), Single value (blog-sourced leads).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We track which guides and downloads people finish, whether they hit the signup button, and whether those contacts truly become qualified leads worth sales attention. When interest quietly fades or older guides slip down search results, we notice early enough to rewrite instead of guessing.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "apache",
                "db_connect"
              ],
              "em": [
                "apache_httpd"
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.3.7",
              "n": "Webinar and Event Pipeline Contribution",
              "c": "medium",
              "f": "beginner",
              "v": "Attributes opportunities and revenue to webinars and field events so field marketing proves return beyond attendance counts. We help leadership compare expensive programmes to simpler digital motions using the same pipeline currency.",
              "t": "HEC (event platform), Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"event_registration\"` (event_id, registrant_id, attended_flag), `index=business` `sourcetype=\"dbx:crm_opportunities\"` (opportunity_id, campaign_source, amount, stage)",
              "q": "index=business sourcetype=\"event_registration\" earliest=-90d\n| stats dc(registrant_id) as registrations, sum(eval(if(attended_flag=\"yes\",1,0))) as attendees by event_id\n| join type=left event_id [\n    search index=business sourcetype=\"dbx:crm_opportunities\" earliest=-90d\n    | eval from_event=if(match(campaign_source,\"(?i)webinar|event|field\"),1,0)\n    | where from_event=1\n    | stats sum(amount) as pipeline, sum(eval(if(stage=\"Closed Won\",amount,0))) as won_revenue by campaign_source\n    | rename campaign_source as event_id\n]\n| fillnull value=0 pipeline won_revenue\n| eval attendance_rate=if(registrations>0, round(100*attendees/registrations,1), null())\n| sort - won_revenue\n| table event_id, registrations, attendees, attendance_rate, pipeline, won_revenue",
              "m": "(1) Send registration and attendance webhooks from your event tool into Splunk with identifiers that match customer relationship management campaigns; (2) require opportunities to carry the originating programme code; (3) publish a quarterly event portfolio review sorted by won revenue and pipeline efficiency.",
              "z": "Bar chart (won revenue by event), Table (attendance and pipeline), Single value (events-sourced pipeline), Line chart (attendance rate trend).",
              "kfp": "Aggregate **DMARC** XML arriving on staggered forty-eight-to-seventy-two hour publisher cadences makes week-to-week pass rates dip when reporting domains batch weekend files although Mail Transfer Agent transports stayed steady. **Inbox-placement** seed tests referencing obsolete Litmus cohorts diverge fifteen to twenty-five points from consumer reality because recycled **spam-trap** pools or purchased lists were purged upstream before Splunk summarized **`litmus:inbox_placement`**. **AWS SES** configuration-set publishing duplicates identical **`message_id`** rows when SNS subscribers redrive after SQS visibility timeouts, doubling **`bounce_evt`** counts until **`dedup`** macros keyed on **`message_id`** plus **`notification_type`** suppress repeats. **Dedicated IP** migrations advertising fresh **`PTR`** hosts lag DNS caches for several days—Talos lookups temporarily flag poor reputation despite mailbox providers seeing aligned **SPF** includes. **Google Postmaster Tools** spam-rate telemetry trails webhook complaints two reporting intervals during peak retail bursts—Splunk alarms first while Postmaster dashboards remain green until nightly snapshots ingest. **BIMI** logo retrieval failures tied to oversized **SVG Tiny PSV** variants rarely alter **`DMARC`** disposition in aggregate seeds though Gmail clips logos—teams mistakenly chase **DKIM** rotations unrelated to avatar rendering. **`MTA-STS`** **`mode=testing`** paired with noisy **`TLS-RPT`** ingestion raises benign Opportunistic TLS downgrade rows across APAC carriers that never affect marketing streams. **Suppression-list** hygiene jobs that scrub **`role`** mailboxes suppress billing aliases still expecting invoices—false alarms fade only when transactional **`configuration_set_name`** dimensions split bulk from finance mail. **Spamhaus CSS** composite listings hit neighboring subnets in abusive `/24` ranges—Splunk spikes imply collateral guilt despite **`bounce_complaint`** ratios staying compliant until Abuse desks relocate egress pools.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC (event platform), Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"event_registration\"` (event_id, registrant_id, attended_flag), `index=business` `sourcetype=\"dbx:crm_opportunities\"` (opportunity_id, campaign_source, amount, stage).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Send registration and attendance webhooks from your event tool into Splunk with identifiers that match customer relationship management campaigns; (2) require opportunities to carry the originating programme code; (3) publish a quarterly event portfolio review sorted by won revenue and pipeline efficiency.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"event_registration\" earliest=-90d\n| stats dc(registrant_id) as registrations, sum(eval(if(attended_flag=\"yes\",1,0))) as attendees by event_id\n| join type=left event_id [\n    search index=business sourcetype=\"dbx:crm_opportunities\" earliest=-90d\n    | eval from_event=if(match(campaign_source,\"(?i)webinar|event|field\"),1,0)\n    | where from_event=1\n    | stats sum(amount) as pipeline, sum(eval(if(stage=\"Closed Won\",amount,0))) as won_revenue by campaign_source\n    | rename campaign_source as event_id\n]\n| fillnull value=0 pipeline won_revenue\n| eval attendance_rate=if(registrations>0, round(100*attendees/registrations,1), null())\n| sort - won_revenue\n| table event_id, registrations, attendees, attendance_rate, pipeline, won_revenue\n```\n\nUnderstanding this SPL\n\n**Webinar and Event Pipeline Contribution** — Attributes opportunities and revenue to webinars and field events so field marketing proves return beyond attendance counts. We help leadership compare expensive programmes to simpler digital motions using the same pipeline currency.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"event_registration\"` (event_id, registrant_id, attended_flag), `index=business` `sourcetype=\"dbx:crm_opportunities\"` (opportunity_id, campaign_source, amount, stage). **App/TA** (typical add-on context): HEC (event platform), Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: event_registration. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"event_registration\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by event_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **attendance_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Webinar and Event Pipeline Contribution**): table event_id, registrations, attendees, attendance_rate, pipeline, won_revenue\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (won revenue by event), Table (attendance and pipeline), Single value (events-sourced pipeline), Line chart (attendance rate trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch whether mailbox providers accept our mail, trust our signatures, and whether scores from reputation feeds slip before anyone notices quieter inboxes—not merely whether folks tapped a headline in a newsletter.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 7,
            "none": 0
          }
        },
        {
          "i": "23.4",
          "n": "HR & People Analytics",
          "u": [
            {
              "i": "23.4.1",
              "n": "Employee Attrition Analysis and Flight Risk",
              "c": "high",
              "f": "intermediate",
              "v": "Analyses attrition patterns by department, tenure, role, and demographics — identifying where the organisation is losing people fastest and what the common factors are. HR leaders see flight risk indicators (recent role change, tenure milestones, team attrition clusters) so they can proactively engage at-risk employees before resignation.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:hris_employees\"` (employee_id, department, hire_date, termination_date, role, manager, location, tenure_months)",
              "q": "index=business sourcetype=\"dbx:hris_employees\" earliest=-365d\n| eval is_terminated=if(isnotnull(termination_date), 1, 0)\n| eval tenure_months=round((coalesce(strptime(termination_date,\"%Y-%m-%d\"),now())-strptime(hire_date,\"%Y-%m-%d\"))/(86400*30), 0)\n| eval tenure_band=case(\n    tenure_months < 6, \"0-6 months\",\n    tenure_months < 12, \"6-12 months\",\n    tenure_months < 24, \"1-2 years\",\n    tenure_months < 48, \"2-4 years\",\n    1=1, \"4+ years\")\n| stats count as headcount, sum(is_terminated) as departures by department, tenure_band\n| eval attrition_rate=round(100*departures/headcount, 1)\n| where departures > 0\n| sort - attrition_rate\n| table department, tenure_band, headcount, departures, attrition_rate",
              "m": "(1) Import employee records via DB Connect from HRIS; (2) anonymise personal data — use employee IDs, not names; (3) schedule monthly for HR leadership reviews; (4) alert when any department's annual attrition exceeds 20%; (5) add manager-level rollup for people manager coaching; (6) compare voluntary vs involuntary terminations.",
              "z": "Heatmap (department × tenure band), Bar chart (attrition by department), Line chart (monthly attrition trend), Single value (organisation-wide attrition rate).",
              "kfp": "A supervisory-organization rename or a large M&A mapping exercise can shift `Supervisory_Organization` or `Business_Unit` on effective-dated rows so last week’s “spike” in predicted exit load is a geography label change, not a new behavioral signal—reconcile to Workday’s org-hierarchy effective dates before you fund interventions. A worker transferred with `Worker_Status_Detail=Transferred` is not a termination, but a thin snapshot that only reads `Active_Status=0` can look like a loss of headcount. Retirement rows often carry `Termination_Category` or reason taxonomies outside the Voluntary bucket; folding retirement into the voluntary model swells controllable High and Critical counts by 10–20% unless you keep a separate bucket. Integration System User pulls can accidentally mix system_user test accounts with human Worker_ID if domain security is too broad, which pollutes compa-ratio and manager-loss stats until you filter `Worker_Type` or worker-class fields at ingest.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:hris_employees\"` (employee_id, department, hire_date, termination_date, role, manager, location, tenure_months).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import employee records via DB Connect from HRIS; (2) anonymise personal data — use employee IDs, not names; (3) schedule monthly for HR leadership reviews; (4) alert when any department's annual attrition exceeds 20%; (5) add manager-level rollup for people manager coaching; (6) compare voluntary vs involuntary terminations.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:hris_employees\" earliest=-365d\n| eval is_terminated=if(isnotnull(termination_date), 1, 0)\n| eval tenure_months=round((coalesce(strptime(termination_date,\"%Y-%m-%d\"),now())-strptime(hire_date,\"%Y-%m-%d\"))/(86400*30), 0)\n| eval tenure_band=case(\n    tenure_months < 6, \"0-6 months\",\n    tenure_months < 12, \"6-12 months\",\n    tenure_months < 24, \"1-2 years\",\n    tenure_months < 48, \"2-4 years\",\n    1=1, \"4+ years\")\n| stats count as headcount, sum(is_terminated) as departures by department, tenure_band\n| eval attrition_rate=round(100*departures/headcount, 1)\n| where departures > 0\n| sort - attrition_rate\n| table department, tenure_band, headcount, departures, attrition_rate\n```\n\nUnderstanding this SPL\n\n**Employee Attrition Analysis and Flight Risk** — Analyses attrition patterns by department, tenure, role, and demographics — identifying where the organisation is losing people fastest and what the common factors are. HR leaders see flight risk indicators (recent role change, tenure milestones, team attrition clusters) so they can proactively engage at-risk employees before resignation.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:hris_employees\"` (employee_id, department, hire_date, termination_date, role, manager, location, tenure_months). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:hris_employees. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:hris_employees\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **is_terminated** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **tenure_months** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **tenure_band** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by department, tenure_band** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **attrition_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where departures > 0` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Employee Attrition Analysis and Flight Risk**): table department, tenure_band, headcount, departures, attrition_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (department × tenure band), Bar chart (attrition by department), Line chart (monthly attrition trend), Single value (organisation-wide attrition rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We look at who still works for the company, how fair their pay is compared to the job’s midpoint, how long it has been since a raise, and which managers have already lost several people on purpose, then add those signals up to say which big teams look shaky. It is a bit like checking which neighborhoods have had a lot of “for sale” signs before you find out the whole market moved overnight.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.4.2",
              "n": "Time-to-Hire and Recruiting Pipeline Health",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks how long it takes to fill open positions — from requisition to offer acceptance — by department, role level, and recruiter. Helps talent acquisition leaders identify bottlenecks (slow hiring managers, long interview stages) and predict capacity risks when critical roles remain unfilled too long.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:ats_requisitions\"` (req_id, title, department, opened_date, filled_date, stage, recruiter)",
              "q": "index=business sourcetype=\"dbx:ats_requisitions\" earliest=-180d\n| eval opened_epoch=strptime(opened_date, \"%Y-%m-%d\")\n| eval filled_epoch=if(isnotnull(filled_date), strptime(filled_date, \"%Y-%m-%d\"), null())\n| eval days_to_fill=if(isnotnull(filled_epoch), round((filled_epoch-opened_epoch)/86400, 0), null())\n| eval age_days=round((now()-opened_epoch)/86400, 0)\n| eval status=case(isnotnull(filled_date),\"Filled\", age_days>90,\"Stale (>90d)\", age_days>60,\"Aging (60-90d)\", 1=1,\"Active\")\n| stats avg(days_to_fill) as avg_days_to_fill, median(days_to_fill) as median_days,\n        dc(eval(if(status=\"Filled\",req_id,null()))) as filled,\n        dc(eval(if(status=\"Stale (>90d)\",req_id,null()))) as stale,\n        dc(req_id) as total_reqs by department\n| eval fill_rate=round(100*filled/total_reqs, 1)\n| sort - stale\n| table department, total_reqs, filled, fill_rate, stale, avg_days_to_fill, median_days",
              "m": "(1) Import requisition data from ATS (Greenhouse, Lever, Workday Recruiting) via DB Connect; (2) include stage timestamps for stage-level analysis; (3) schedule weekly for talent acquisition reviews; (4) alert when any critical role is open >60 days; (5) compare recruiter performance (time-to-fill, offer acceptance rate).",
              "z": "Bar chart (avg time-to-fill by department), Table (stale requisitions), Line chart (monthly hiring velocity), Single value (overall median time-to-fill).",
              "kfp": "Corporate hiring freezes inflate naive days-open because ATS clocks tick while sourcing idles — overlay finance freeze calendars before blaming TA throughput. Reopened reqs that recycle identifiers after Cancelled–Not-Filled closures distort longitudinal fill-rate unless reopen timestamps split lifetimes across closures. Executive searches spanning twelve-plus months skew department averages — isolate VP-plus cohorts so median KPI panels exclude deliberate slow slate-building. Multi-headcount openings modeled as one parent req versus multiple child openings scramble denominators — reconcile ATS quota semantics before comparing recruiters. Large Workday RaaS prompts sometimes truncate silently — chart uniqueness hourly alongside ingestion alarms whenever Splunk totals diverge materially from warehouse totals. Harvest repeat applicants inflate application-centric widgets versus candidate-centric funnel breadth — caption dashboards with which cardinality each tile counts. Warehouse UTC timestamps versus recruiter-local UI assumptions inject boundary ghosts — publish timezone SPL macros shared across TA analytics apps. Ghost ownership after recruiter exits keeps reqs Open despite dormant pipelines — HR hygiene precedes escalation. Acquisition migrations stamping legacy backlog as opened today create spikes — suppress governance-tagged synthetic migrations.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:ats_requisitions\"` (req_id, title, department, opened_date, filled_date, stage, recruiter).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import requisition data from ATS (Greenhouse, Lever, Workday Recruiting) via DB Connect; (2) include stage timestamps for stage-level analysis; (3) schedule weekly for talent acquisition reviews; (4) alert when any critical role is open >60 days; (5) compare recruiter performance (time-to-fill, offer acceptance rate).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:ats_requisitions\" earliest=-180d\n| eval opened_epoch=strptime(opened_date, \"%Y-%m-%d\")\n| eval filled_epoch=if(isnotnull(filled_date), strptime(filled_date, \"%Y-%m-%d\"), null())\n| eval days_to_fill=if(isnotnull(filled_epoch), round((filled_epoch-opened_epoch)/86400, 0), null())\n| eval age_days=round((now()-opened_epoch)/86400, 0)\n| eval status=case(isnotnull(filled_date),\"Filled\", age_days>90,\"Stale (>90d)\", age_days>60,\"Aging (60-90d)\", 1=1,\"Active\")\n| stats avg(days_to_fill) as avg_days_to_fill, median(days_to_fill) as median_days,\n        dc(eval(if(status=\"Filled\",req_id,null()))) as filled,\n        dc(eval(if(status=\"Stale (>90d)\",req_id,null()))) as stale,\n        dc(req_id) as total_reqs by department\n| eval fill_rate=round(100*filled/total_reqs, 1)\n| sort - stale\n| table department, total_reqs, filled, fill_rate, stale, avg_days_to_fill, median_days\n```\n\nUnderstanding this SPL\n\n**Time-to-Hire and Recruiting Pipeline Health** — Tracks how long it takes to fill open positions — from requisition to offer acceptance — by department, role level, and recruiter. Helps talent acquisition leaders identify bottlenecks (slow hiring managers, long interview stages) and predict capacity risks when critical roles remain unfilled too long.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:ats_requisitions\"` (req_id, title, department, opened_date, filled_date, stage, recruiter). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:ats_requisitions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:ats_requisitions\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **filled_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_to_fill** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by department** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fill_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Time-to-Hire and Recruiting Pipeline Health**): table department, total_reqs, filled, fill_rate, stale, avg_days_to_fill, median_days\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (avg time-to-fill by department), Table (stale requisitions), Line chart (monthly hiring velocity), Single value (overall median time-to-fill).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch each open job crawl through recruiting stages, count how many weeks until someone accepts, and point out hiring-manager bottlenecks before backlog slows launches everyone counted on.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
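Step 4 of the time-to-hire implementation asks for an alert when a critical role stays open longer than 60 days. A minimal sketch under the assumption that the requisition extract carries a `priority` field flagging critical roles (not in the documented field list); a lookup of critical job titles would work equally well.

```spl
index=business sourcetype="dbx:ats_requisitions" earliest=-365d
| where isnull(filled_date) AND priority="critical"
| eval age_days=round((now()-strptime(opened_date, "%Y-%m-%d"))/86400, 0)
| where age_days > 60
| table req_id, title, department, recruiter, opened_date, age_days
| sort - age_days
```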
            {
              "i": "23.4.3",
              "n": "Diversity and Inclusion Metrics Dashboard",
              "c": "medium",
              "f": "beginner",
              "v": "Provides a real-time view of workforce composition by gender, ethnicity, age band, and role level — tracking representation trends over time and measuring progress against diversity goals. HR and executive leadership can see whether hiring and promotion practices are moving the needle on inclusion commitments.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:hris_employees\"` (employee_id, gender, ethnicity, age_band, role_level, department, hire_date, promotion_date)",
              "q": "index=business sourcetype=\"dbx:hris_employees\" status=\"active\"\n| stats dc(employee_id) as headcount by gender, role_level\n| eventstats sum(headcount) as level_total by role_level\n| eval representation_pct=round(100*headcount/level_total, 1)\n| sort role_level, gender\n| table role_level, gender, headcount, representation_pct",
              "m": "(1) Import anonymised demographic data from HRIS; (2) handle self-reported data sensitively — include \"Prefer not to say\"; (3) schedule monthly; (4) track representation changes over time with timechart; (5) compare new hire diversity vs existing workforce diversity; (6) measure promotion rates by demographic group to identify glass ceiling patterns.",
              "z": "Stacked bar (representation by role level), Line chart (diversity trend over quarters), Table (representation vs targets), Single value (% representation by group).",
              "kfp": "Annual Q3–Q4 Workday self-ID refresh campaigns often spike measured diversity percentages versus stale Q1–Q2 snapshots — interpret jumps as survey artefacts unless corroborated across quarters. Heavy EU populations inflate aggregate opt-out (“Prefer not to say”) versus US-heavy cohorts (~8% vs ~18%), so blended enterprise percentages dip without behaviour change. Small teams oscillate wildly after single hires or exits (promotion-rate ratios swing past eighty-percent thresholds mathematically despite benign intent). Post-close acquisitions inject hundreds of workers from distinct demographic mixes overnight — representation deltas mimic crises yet trace to portfolio integration. Contractor inclusion toggles shift denominators fifteen to thirty percent versus FTE-only OFCCP AAP lenses; reconciliation tables eliminate bogus adverse-impact alarms. HR reorganisations that rename grades without updating historic mappings create fictitious glass-ceiling slopes until lineage tables realign IC4 ↔ Senior Engineer ladders.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:hris_employees\"` (employee_id, gender, ethnicity, age_band, role_level, department, hire_date, promotion_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import anonymised demographic data from HRIS; (2) handle self-reported data sensitively — include \"Prefer not to say\"; (3) schedule monthly; (4) track representation changes over time with timechart; (5) compare new hire diversity vs existing workforce diversity; (6) measure promotion rates by demographic group to identify glass ceiling patterns.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:hris_employees\" status=\"active\"\n| stats dc(employee_id) as headcount by gender, role_level\n| eventstats sum(headcount) as level_total by role_level\n| eval representation_pct=round(100*headcount/level_total, 1)\n| sort role_level, gender\n| table role_level, gender, headcount, representation_pct\n```\n\nUnderstanding this SPL\n\n**Diversity and Inclusion Metrics Dashboard** — Provides a real-time view of workforce composition by gender, ethnicity, age band, and role level — tracking representation trends over time and measuring progress against diversity goals. HR and executive leadership can see whether hiring and promotion practices are moving the needle on inclusion commitments.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:hris_employees\"` (employee_id, gender, ethnicity, age_band, role_level, department, hire_date, promotion_date). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:hris_employees. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:hris_employees\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by gender, role_level** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by role_level** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **representation_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Diversity and Inclusion Metrics Dashboard**): table role_level, gender, headcount, representation_pct\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (representation by role level), Line chart (diversity trend over quarters), Table (representation vs targets), Single value (% representation by group).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We add up how many people sit at each job level — women, men, and anyone who prefers privacy — and compare those shares to the goals our leaders promised. That way we see whether promotions thin out toward the top like a glass ceiling and whether government fairness reports line up without exposing anyone’s private answers row by row.",
              "mtype": [
                "Business",
                "Compliance"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
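The D&I implementation notes suggest comparing new-hire diversity with the existing workforce. A minimal sketch that splits active employees into a last-12-months hiring cohort and everyone else, assuming `hire_date` parses as `%Y-%m-%d` and that the snapshot carries the same `status` field used in the documented search.

```spl
index=business sourcetype="dbx:hris_employees" status="active"
| eval hire_epoch=strptime(hire_date, "%Y-%m-%d")
| eval cohort=if(hire_epoch >= relative_time(now(), "-365d"), "Hired last 12 months", "Existing workforce")
| stats dc(employee_id) as headcount by cohort, gender
| eventstats sum(headcount) as cohort_total by cohort
| eval representation_pct=round(100*headcount/cohort_total, 1)
| sort cohort, gender
| table cohort, gender, headcount, representation_pct
```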
            {
              "i": "23.4.4",
              "n": "Training Completion and Compliance Tracking",
              "c": "high",
              "f": "beginner",
              "v": "Tracks mandatory and optional training completion rates across the organisation — compliance training, security awareness, leadership development, technical certifications. HR and compliance teams see who hasn't completed required training before audit deadlines and can target reminders to specific groups.",
              "t": "Splunk DB Connect (Splunkbase 2686), HEC (LMS integration)",
              "d": "`index=business` `sourcetype=\"lms_completion\"` (employee_id, course_id, course_name, completion_date, due_date, status, mandatory)",
              "q": "index=business sourcetype=\"lms_completion\" mandatory=\"yes\"\n| eval due_epoch=strptime(due_date, \"%Y-%m-%d\")\n| eval days_until_due=round((due_epoch-now())/86400, 0)\n| eval compliance_status=case(\n    status=\"completed\", \"Completed\",\n    days_until_due < 0, \"OVERDUE\",\n    days_until_due <= 14, \"Due Soon\",\n    1=1, \"Not Started\")\n| stats dc(employee_id) as employees by course_name, compliance_status\n| eventstats sum(employees) as total_assigned by course_name\n| eval pct=round(100*employees/total_assigned, 1)\n| sort course_name, compliance_status\n| table course_name, compliance_status, employees, pct, total_assigned",
              "m": "(1) Import LMS completion data via HEC webhooks or DB Connect; (2) mark courses as mandatory/optional; (3) alert managers when team members have overdue mandatory training; (4) schedule daily for compliance reporting; (5) produce audit-ready reports showing completion rates by department and deadline.",
              "z": "Stacked bar (completion status by course), Table (overdue employees), Single value (overall compliance rate %), Gauge (mandatory training completion).",
              "kfp": "Sharable Content Object relaunch retries duplicate unfinished shells until nightly LMS reconciliation merges attempts—Splunk duplicate-detection absent upstream yields phantom overdue shells despite learner mastery elsewhere inside Experience API telemetry yet unstitched. Warehouse watermark stalls replay yesterday identical transcripts causing Splunk freshness dashboards imply widespread lapse though SaaS portals already stamped completions minutes later. Curriculum rebrands mint replacement catalog identifiers mid-semester leaving Splunk joins referencing obsolete course_surrogate_key tuples until taxonomy lookups refresh weekly. Experience API Tin Can verbs marked experienced versus completed diverge within niche authoring packs inflating overdue cohorts absent supplemental mastery_score predicates. Cornerstone OAuth pagination resumes mid-stream duplicating partially enumerated trainings whenever modular-input heartbeat timeouts retry identical offsets producing inflated incomplete denominators absent deterministic paging tokens persisted locally. Batch migrations stamping identical enrollment_epoch alongside completion_epoch collapse perceived velocity KPIs toward zero-day artefacts unrelated behavioural urgency. Single-sign-on broker outages postpone HTTPS POST callbacks leaving Splunk overdue tallies swollen temporarily despite learners finishing offline remediation screenshots awaiting manual curator uploads. Localization queues delaying Portuguese harassment bundles strand Iberian cohorts appearing derelict versus Anglo-ready completions despite procurement—not refusal—drivers. Instructor-led spreadsheet confirmations uploaded forty-eight hours late contradict instantaneous Sharable Content Object completions triggering contradictory managerial narratives absent modality-aware lookups. Records-retention obliteration removing decade-old completions resurrects overdue prompts despite archived diploma scans residing solely offline outside Splunk indexes. Contractor curricula referencing abbreviated remediation timelines clash accidentally against enterprise mandatory timers whenever warehouse schemas omit tier_multiplier joins generating exaggerated escalations targeting contingent cohorts unjustly. Quarterly bulk-auto-enrollment automation enrolling entire divisions simultaneously crater aggregate percentages purely denominator shocks absent cohort-age smoothing overlays referencing automation_batch identifiers surfacing ingestion telemetry. SAML federation jitter emitting learner_subject churn across dormant mailbox identifiers sporadically duplicates overdue intersections absent deterministic HR identifier precedence merges authoritative downstream.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HEC (LMS integration).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"lms_completion\"` (employee_id, course_id, course_name, completion_date, due_date, status, mandatory).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import LMS completion data via HEC webhooks or DB Connect; (2) mark courses as mandatory/optional; (3) alert managers when team members have overdue mandatory training; (4) schedule daily for compliance reporting; (5) produce audit-ready reports showing completion rates by department and deadline.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"lms_completion\" mandatory=\"yes\"\n| eval due_epoch=strptime(due_date, \"%Y-%m-%d\")\n| eval days_until_due=round((due_epoch-now())/86400, 0)\n| eval compliance_status=case(\n    status=\"completed\", \"Completed\",\n    days_until_due < 0, \"OVERDUE\",\n    days_until_due <= 14, \"Due Soon\",\n    1=1, \"Not Started\")\n| stats dc(employee_id) as employees by course_name, compliance_status\n| eventstats sum(employees) as total_assigned by course_name\n| eval pct=round(100*employees/total_assigned, 1)\n| sort course_name, compliance_status\n| table course_name, compliance_status, employees, pct, total_assigned\n```\n\nUnderstanding this SPL\n\n**Training Completion and Compliance Tracking** — Tracks mandatory and optional training completion rates across the organisation — compliance training, security awareness, leadership development, technical certifications. HR and compliance teams see who hasn't completed required training before audit deadlines and can target reminders to specific groups.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"lms_completion\"` (employee_id, course_id, course_name, completion_date, due_date, status, mandatory). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HEC (LMS integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: lms_completion. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"lms_completion\". 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **due_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_until_due** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **compliance_status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by course_name, compliance_status** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by course_name** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Training Completion and Compliance Tracking**): table course_name, compliance_status, employees, pct, total_assigned\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (completion status by course), Table (overdue employees), Single value (overall compliance rate %), Gauge (mandatory training completion).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch whether required lessons finish before their deadlines so privacy and safety coursework stays provable when auditors ask questions later on. People leaders see tidy overdue queues for their teams instead of hunting inside classroom spreadsheets.",
              "mtype": [
                "Compliance",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
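Step 3 of the training-compliance implementation asks for manager notifications on overdue mandatory training. A minimal per-employee rollup using only the documented LMS fields; the documented sourcetype has no manager field, so routing to managers would need an HRIS lookup, as noted after the block.

```spl
index=business sourcetype="lms_completion" mandatory="yes" status!="completed"
| eval days_overdue=round((now()-strptime(due_date, "%Y-%m-%d"))/86400, 0)
| where days_overdue > 0
| stats values(course_name) as overdue_courses, count as overdue_count,
        max(days_overdue) as max_days_overdue by employee_id
| sort - max_days_overdue
```

If an HRIS lookup keyed on `employee_id` is available (for example a hypothetical `hris_employees_lookup`), append `| lookup hris_employees_lookup employee_id OUTPUT manager department` before wiring the alert action.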
            {
              "i": "23.4.5",
              "n": "Absence and Leave Pattern Monitoring",
              "c": "medium",
              "f": "beginner",
              "v": "Surfaces unusual absence spikes by team or location so managers can offer support or adjust rosters before service levels suffer. We help people leaders spot burnout or local illness trends early while respecting privacy aggregation.",
              "t": "Splunk DB Connect (Splunkbase 2686), HEC",
              "d": "`index=business` `sourcetype=\"dbx:time_attendance\"` (employee_id, department, absence_hours, date, absence_type)",
              "q": "index=business sourcetype=\"dbx:time_attendance\" absence_hours>0 earliest=-90d\n| eval week=strftime(strptime(date,\"%Y-%m-%d\"),\"%Y-%U\")\n| stats sum(absence_hours) as total_absence_hours, dc(employee_id) as absentees by department, week\n| eventstats avg(total_absence_hours) as org_avg\n| eval variance_pct=if(org_avg>0, round(100*(total_absence_hours-org_avg)/org_avg,1), null())\n| where variance_pct>25\n| sort - total_absence_hours\n| table department, week, absentees, total_absence_hours, variance_pct",
              "m": "(1) Import anonymised time and attendance extracts without attaching medical reasons unless legally approved; (2) roll up to department and week by default; (3) alert human resources when any department exceeds twenty-five percent above the company average for two consecutive weeks.",
              "z": "Line chart (absence hours trend by department), Table (outlier weeks), Heatmap (department × week), Single value (organisation absence hours).",
              "kfp": "Legitimate influenza or norovirus waves elevate weekly absent curves exactly when CDC FluView ILI percentages crest yet morale stays intact—annotate dashboards with epidemiological overlays rather than punitive workflows. Regional school-board closures dumping childcare burdens onto rotating crews mimic contagion hotspots absent microbiological corroboration—blend district SMS feeds before interpreting spikes as misconduct clusters. Heavy snowfall paired with NOAA winter advisories aligns simultaneous sick-coded stacks across warehouses handling dock staffing interchangeably—suppress escalation macros whenever NOAA CSV stamps confirm hazardous-road commuting envelopes. Post-long-weekend Tuesdays rebound after phantom Friday spikes recorded Saturday overtime swaps distort baseline averages until payroll accountants reconcile swing-shift calendars referencing collective bargaining rider clauses. County courthouse jury summons batches concentrating jurors inside overlapping supervisory hierarchies inflate Bradford denominators temporarily despite lawful civic-duty protections unrelated to absentee abuse narratives. March Madness weekday tournaments historically elevate casual sick Fridays coupled with recovery Mondays inside collegiate-heavy metros—tie athletics calendars sourced via ESPN feeds commentary panels outside punitive SPL predicates. Election-cycle civic-duty clusters cite overlapping precinct staffing swaps misconstrued as chronic tardiness absent municipal clerk confirmations acknowledging lawful civic-leave carve-outs mirrored locally inside Sedgwick narratives. Intermittent FMLA certifications coded mistakenly inside casual buckets inflate frequent-short Bradford footprints nevertheless immunized legally until HR remediations toggle leave_program_flag joins HR analytics lookups referencing intermittent eligibility attestations. Managers approving vacation verbally yet deferring BambooHR approvals convert benign planned stretches into payroll-visible emergency stacks resembling reactive absenteeism absent contemporaneous payroll reconciliation timestamps. Regional payroll desks inconsistently transcribing Sick versus Personal taxonomy shift Bradford tiers materially despite benign intent until absence_pay_normalize KV dictionaries synchronize quarterly Benefits symposium clarifications. Spring-forward daylight-saving Sundays bifurcate spell-gap arithmetic doubling discrete spells absent TZ normalization macros aligning UTC offsets spanning multinational footprints consolidated beneath singular geo_cluster identifiers. Localized religious observance calendars clustering abstentions geographically resemble morale crises absent chaplaincy confirmations validating lawful dietary-fast exemptions enumerated separately inside ethics attestations unrelated to Bradford ladders.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HEC.\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:time_attendance\"` (employee_id, department, absence_hours, date, absence_type).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import anonymised time and attendance extracts without attaching medical reasons unless legally approved; (2) roll up to department and week by default; (3) alert human resources when any department exceeds twenty-five percent above the company average for two consecutive weeks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:time_attendance\" absence_hours>0 earliest=-90d\n| eval week=strftime(strptime(date,\"%Y-%m-%d\"),\"%Y-%U\")\n| stats sum(absence_hours) as total_absence_hours, dc(employee_id) as absentees by department, week\n| eventstats avg(total_absence_hours) as org_avg\n| eval variance_pct=if(org_avg>0, round(100*(total_absence_hours-org_avg)/org_avg,1), null())\n| where variance_pct>25\n| sort - total_absence_hours\n| table department, week, absentees, total_absence_hours, variance_pct\n```\n\nUnderstanding this SPL\n\n**Absence and Leave Pattern Monitoring** — Surfaces unusual absence spikes by team or location so managers can offer support or adjust rosters before service levels suffer. We help people leaders spot burnout or local illness trends early while respecting privacy aggregation.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:time_attendance\"` (employee_id, department, absence_hours, date, absence_type). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:time_attendance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:time_attendance\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **week** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by department, week** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• `eval` defines or adjusts **variance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where variance_pct>25` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Absence and Leave Pattern Monitoring**): table department, week, absentees, total_absence_hours, variance_pct\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (absence hours trend by department), Table (outlier weeks), Heatmap (department × week), Single value (organisation absence hours).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch BambooHR sick buckets bunch across neighborhoods before staffing planners reshuffle crews so checkout lanes stay staffed—nothing here tallies overtime coins like finance dashboards elsewhere.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
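The absence-monitoring implementation asks HR to be alerted when a department runs more than 25% above the company average for two consecutive weeks, while the documented search flags single weeks. A minimal sketch of the consecutive-week condition; note it computes the company average per week, whereas the documented search averages across all department-weeks, so thresholds may shift slightly.

```spl
index=business sourcetype="dbx:time_attendance" absence_hours>0 earliest=-90d
| eval week=strftime(strptime(date,"%Y-%m-%d"),"%Y-%U")
| stats sum(absence_hours) as total_absence_hours by department, week
| eventstats avg(total_absence_hours) as org_avg by week
| eval above_threshold=if(total_absence_hours > org_avg*1.25, 1, 0)
| sort 0 department, week
| streamstats window=2 sum(above_threshold) as weeks_above by department
| where weeks_above=2
| table department, week, total_absence_hours, org_avg
```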
            {
              "i": "23.4.6",
              "n": "Internal Mobility and Promotion Velocity",
              "c": "medium",
              "f": "intermediate",
              "v": "Tracks how often people move roles internally and how long promotions take after eligibility. We help talent leaders prove whether career paths are real or blocked, which affects engagement and retention.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:hris_employees\"` (employee_id, department, role, hire_date, promotion_date, job_change_type)",
              "q": "index=business sourcetype=\"dbx:hris_employees\" earliest=-730d\n| where isnotnull(promotion_date)\n| eval promo_epoch=strptime(promotion_date,\"%Y-%m-%d\")\n| eval hire_epoch=strptime(hire_date,\"%Y-%m-%d\")\n| eval months_to_promo=round((promo_epoch-hire_epoch)/(86400*30),1)\n| eval internal_move=if(job_change_type IN (\"Transfer\",\"Promotion\",\"Lateral\"),1,0)\n| stats count as events, sum(internal_move) as internal_moves, avg(months_to_promo) as avg_months_to_promo by department\n| eval internal_move_rate=round(100*internal_moves/events,1)\n| sort - internal_move_rate\n| table department, events, internal_moves, internal_move_rate, avg_months_to_promo",
              "m": "(1) Ensure job change events carry a type flag from your human resources information system feed; (2) refresh monthly after payroll close; (3) review with business unit heads when promotion velocity lengthens materially versus the prior year.",
              "z": "Bar chart (internal move rate by department), Table (promotion timing), Line chart (average months to promotion), Single value (company internal move rate).",
              "kfp": "Large-scale Workday **`job_family`** remap initiatives temporarily inflate **`lat_prm`** because lateral postings absorb renamed ladders before promotions catch up — reconcile effective-dated **`Job_Family_Reference`** histories before accusing Talent Marketplace stagnation. Finance-funded interim gig assignments spike **`stretch_assignment_hours`** without altering **`prm`**, mimicking mobility vibrancy though succession benches remain hollow — cross-reference **`stretch_assignment_hours`** approvals flagged **`succession_critical`** versus exploratory rotations. Eightfold **`path_score`** dips whenever **`skills_inference_refresh`** rebuilds taxonomy nodes overnight so **`avg_ef_score`** swoons absent behavioural change — annotate dashboards whenever **`skills_cloud_version`** increments. SAP **`CareerDevelopmentPlan`** OData retries duplicate **`SF_CDP`** rows inflating mentorship-ready narratives despite stagnant pairing chemistry — dedupe via **`etag`** hashes plus **`changed_on`** timestamps from BizX exports. Campus-heavy cohorts exhibit elongated **`med_m1`** simply because rotational programs postpone promotions versus seasoned hires — isolate **`hire_program`** markers before benchmarking Mercer SaaS timelines. Collective bargaining mandates posting union roles externally even when filled internally, depressing naive **`fill_pct`** readings unless **`union_notice`** booleans suppress denominators tied to **`job_profile_union`** lookups. Executive **`succession_plan_update`** snapshots replay unchanged **`Ready_Now`** tiers nightly, implying churn in **`prm`** counters despite frozen nominations — collapse duplicates via **`snapshot_seq`** keys supplied by SAP Integration Center extracts.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:hris_employees\"` (employee_id, department, role, hire_date, promotion_date, job_change_type).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure job change events carry a type flag from your human resources information system feed; (2) refresh monthly after payroll close; (3) review with business unit heads when promotion velocity lengthens materially versus the prior year.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:hris_employees\" earliest=-730d\n| where isnotnull(promotion_date)\n| eval promo_epoch=strptime(promotion_date,\"%Y-%m-%d\")\n| eval hire_epoch=strptime(hire_date,\"%Y-%m-%d\")\n| eval months_to_promo=round((promo_epoch-hire_epoch)/(86400*30),1)\n| eval internal_move=if(job_change_type IN (\"Transfer\",\"Promotion\",\"Lateral\"),1,0)\n| stats count as events, sum(internal_move) as internal_moves, avg(months_to_promo) as avg_months_to_promo by department\n| eval internal_move_rate=round(100*internal_moves/events,1)\n| sort - internal_move_rate\n| table department, events, internal_moves, internal_move_rate, avg_months_to_promo\n```\n\nUnderstanding this SPL\n\n**Internal Mobility and Promotion Velocity** — Tracks how often people move roles internally and how long promotions take after eligibility. We help talent leaders prove whether career paths are real or blocked, which affects engagement and retention.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:hris_employees\"` (employee_id, department, role, hire_date, promotion_date, job_change_type). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:hris_employees. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:hris_employees\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Filters the current rows with `where isnotnull(promotion_date)` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **promo_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **hire_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **months_to_promo** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **internal_move** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by department** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **internal_move_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Internal Mobility and Promotion Velocity**): table department, events, internal_moves, internal_move_rate, avg_months_to_promo\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (internal move rate by department), Table (promotion timing), Line chart (average months to promotion), Single value (company internal move rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We notice how often coworkers browse internal openings, how long someone waits before their first step up, whether sideways moves outnumber promotions in healthy proportions, and whether leaders mostly lift people already inside rather than importing strangers at every turn. That plain scorecard tells executives if posted career ladders truly move people or only decorate slideshows.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
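The mobility implementation notes call for a review when promotion velocity lengthens materially versus the prior year. A minimal year-over-year sketch that buckets promotions into the last 12 months and the 12 months before; the 2-month slowdown threshold is illustrative, not part of the documented use case.

```spl
index=business sourcetype="dbx:hris_employees" earliest=-730d
| where isnotnull(promotion_date)
| eval promo_epoch=strptime(promotion_date,"%Y-%m-%d")
| where promo_epoch >= relative_time(now(), "-730d")
| eval hire_epoch=strptime(hire_date,"%Y-%m-%d")
| eval months_to_promo=round((promo_epoch-hire_epoch)/(86400*30),1)
| eval period=if(promo_epoch >= relative_time(now(), "-365d"), "last_12_months", "prior_12_months")
| chart avg(months_to_promo) over department by period
| eval change_months=round('last_12_months' - 'prior_12_months', 1)
| where change_months > 2
| sort - change_months
```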
            {
              "i": "23.4.7",
              "n": "Overtime Cost and Burnout Risk Indicator",
              "c": "high",
              "f": "intermediate",
              "v": "Aggregates overtime hours and premium pay by cost centre so finance controls labour inflation while people teams watch burnout signals. We help you intervene when a few teams carry an unsustainable load.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:time_attendance\"` (employee_id, cost_centre, overtime_hours, pay_week, hourly_rate)",
              "q": "index=business sourcetype=\"dbx:time_attendance\" overtime_hours>0 earliest=-56d\n| eval ot_pay=overtime_hours*hourly_rate*1.5\n| stats sum(overtime_hours) as total_ot_hours, sum(ot_pay) as total_ot_pay,\n        dc(employee_id) as employees_with_ot by cost_centre, pay_week\n| eventstats perc90(total_ot_pay) as p90_pay by pay_week\n| eval high_cost=if(total_ot_pay>=p90_pay,\"YES\",\"NO\")\n| where high_cost=\"YES\"\n| sort - total_ot_pay\n| table cost_centre, pay_week, employees_with_ot, total_ot_hours, total_ot_pay",
              "m": "(1) Load approved time cards with overtime and base rates through DB Connect using payroll rules your controller validates; (2) group by cost centre and pay week for privacy-preserving views; (3) trigger a fortnightly review when any cost centre repeatedly lands in the top decile of overtime pay.",
              "z": "Bar chart (overtime pay by cost centre), Table (high-cost weeks), Line chart (total overtime hours trend), Single value (total overtime pay period).",
              "kfp": "Manufacturing shutdown weeks compress payroll calendars so overtime disappears mathematically while crews remain exhausted from upstream contractor slips — variance reviewers must overlay plant-maintenance blackout calendars before trusting silence. Collective bargaining clauses mandate voluntary overtime pools where elevated hour totals evidence negotiated premiums rather than unmanaged staffing trauma — skipping union roster joins miscasts solidarity queues as crises.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:time_attendance\"` (employee_id, cost_centre, overtime_hours, pay_week, hourly_rate).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Load approved time cards with overtime and base rates through DB Connect using payroll rules your controller validates; (2) group by cost centre and pay week for privacy-preserving views; (3) trigger a fortnightly review when any cost centre repeatedly lands in the top decile of overtime pay.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:time_attendance\" overtime_hours>0 earliest=-56d\n| eval ot_pay=overtime_hours*hourly_rate*1.5\n| stats sum(overtime_hours) as total_ot_hours, sum(ot_pay) as total_ot_pay,\n        dc(employee_id) as employees_with_ot by cost_centre, pay_week\n| eventstats perc90(total_ot_pay) as p90_pay by pay_week\n| eval high_cost=if(total_ot_pay>=p90_pay,\"YES\",\"NO\")\n| where high_cost=\"YES\"\n| sort - total_ot_pay\n| table cost_centre, pay_week, employees_with_ot, total_ot_hours, total_ot_pay\n```\n\nUnderstanding this SPL\n\n**Overtime Cost and Burnout Risk Indicator** — Aggregates overtime hours and premium pay by cost centre so finance controls labour inflation while people teams watch burnout signals. We help you intervene when a few teams carry an unsustainable load.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:time_attendance\"` (employee_id, cost_centre, overtime_hours, pay_week, hourly_rate). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:time_attendance. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:time_attendance\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **ot_pay** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by cost_centre, pay_week** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` rolls up events into metrics; results are split **by pay_week** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **high_cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where high_cost=\"YES\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Overtime Cost and Burnout Risk Indicator**): table cost_centre, pay_week, employees_with_ot, total_ot_hours, total_ot_pay\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (overtime pay by cost centre), Table (high-cost weeks), Line chart (total overtime hours trend), Single value (total overtime pay period).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We tally premium-paid extra hours beside streaks of many consecutive workdays and dwindling paid leave so stretched crews surface before paycheck totals alarm finance or fatigue threatens safety.",
              "mtype": [
                "Business"
              ],
              "ind": "Manufacturing, Retail, Healthcare",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
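Step 3 of the overtime implementation triggers a fortnightly review when a cost centre repeatedly lands in the top decile of overtime pay. A minimal sketch that counts, per cost centre, how many pay weeks in the lookback sat at or above the weekly 90th percentile; it reuses only the documented fields, and the two-week repeat threshold is illustrative.

```spl
index=business sourcetype="dbx:time_attendance" overtime_hours>0 earliest=-56d
| eval ot_pay=overtime_hours*hourly_rate*1.5
| stats sum(ot_pay) as total_ot_pay by cost_centre, pay_week
| eventstats perc90(total_ot_pay) as p90_pay by pay_week
| eval high_cost_week=if(total_ot_pay>=p90_pay, 1, 0)
| stats sum(high_cost_week) as weeks_in_top_decile, dc(pay_week) as weeks_observed by cost_centre
| where weeks_in_top_decile >= 2
| sort - weeks_in_top_decile
```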
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 7,
            "none": 0
          }
        },
        {
          "i": "23.5",
          "n": "Supply Chain & Operations",
          "u": [
            {
              "i": "23.5.1",
              "n": "Order-to-Cash Cycle Time and Bottleneck Analysis",
              "c": "high",
              "f": "intermediate",
              "v": "Measures the complete order-to-cash cycle — order placement, picking, packing, shipping, delivery, invoicing, payment receipt — identifying which stages take longest and where delays cluster. Operations leaders see exactly where the fulfilment process breaks down and can target improvements to reduce working capital tied up in the cycle.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:erp_orders\"` (order_id, customer, stage, stage_timestamp, value)",
              "q": "index=business sourcetype=\"dbx:erp_orders\" earliest=-90d\n| eval stage_time=strptime(stage_timestamp, \"%Y-%m-%d %H:%M:%S\")\n| sort order_id, stage_time\n| streamstats latest(stage_time) as prev_time latest(stage) as prev_stage by order_id\n| eval stage_duration_hours=if(isnotnull(prev_time), round((stage_time-prev_time)/3600, 1), 0)\n| stats avg(stage_duration_hours) as avg_hours, perc95(stage_duration_hours) as p95_hours, count as transitions by stage\n| eval avg_days=round(avg_hours/24, 1)\n| sort stage\n| table stage, avg_hours, avg_days, p95_hours, transitions",
              "m": "(1) Import order lifecycle events from ERP via DB Connect; (2) ensure each stage transition is logged with timestamp; (3) define standard stages (Ordered → Confirmed → Picked → Packed → Shipped → Delivered → Invoiced → Paid); (4) schedule weekly for operations reviews; (5) alert when average cycle time exceeds target; (6) segment by product category, customer tier, and warehouse for targeted improvement.",
              "z": "Waterfall chart (time per stage), Bar chart (avg vs P95 by stage), Line chart (cycle time trend), Single value (overall avg order-to-cash days).",
              "kfp": "Ocean freight lanes elongate Delivered timestamps versus dock Ship timestamps without implying dock inefficiency—carrier variability masks Fulfillment KPI honesty unless segmented by Incoterms. Planned ERP blackouts freeze postings yet inventory physically moves—duration spikes reflect maintenance calendars not Operations regression. SAP finance accelerators replay aged CDHDR pointers during backlog drains near German fiscal-year-end producing duplicate-looking dwell spikes absent genuine rework. Same-second RFID tunnel scans collapse pick versus pack deltas despite hourly labour elapsed—parallel-stage anomalies inflate pick→pack KPI optimism when timestamps collide. Partial shipments duplicate lifecycle tuples without netting residual quantities—Splunk aggregates mis-count throughput intensity unless keyed by delivered quantity tiers. Customer-driven pickup lanes bypass carrier milestones entirely yet analysts mistake skipped legs for systemic latency. Heavy month-end billing batches lengthen invoice lag though fulfilment stayed steady—Finance rhythm dominates invoice→cash variance temporarily. Extended AR collection stretches unrelated to fulfilment excellence appear when treasury grants ninety-day enterprise Net terms deliberately—misroutes Ops accountability without segmentation by payment_term tier. EDI ASN batching delays Delivered scans though product left the building hours earlier—true supply-chain noise confuses last-mile posture but is not dock failure. Freight-broker handoffs create limbo timestamps between Shipped versus Invoiced splits when documentation passes through non-integrated 3PL portals. Credit-hold pause-and-release sequences stretch Confirmed→Allocated gaps while warehouse capacity sits idle—misread as labour shortage absent credit status flags. Multi-currency FX revaluation postings near quarter close disturb naive cash-application math versus functional-currency expectations even when physical delivery performance was sound.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:erp_orders\"` (order_id, customer, stage, stage_timestamp, value).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import order lifecycle events from ERP via DB Connect; (2) ensure each stage transition is logged with timestamp; (3) define standard stages (Ordered → Confirmed → Picked → Packed → Shipped → Delivered → Invoiced → Paid); (4) schedule weekly for operations reviews; (5) alert when average cycle time exceeds target; (6) segment by product category, customer tier, and warehouse for targeted improvement.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:erp_orders\" earliest=-90d\n| eval stage_time=strptime(stage_timestamp, \"%Y-%m-%d %H:%M:%S\")\n| sort order_id, stage_time\n| streamstats latest(stage_time) as prev_time latest(stage) as prev_stage by order_id\n| eval stage_duration_hours=if(isnotnull(prev_time), round((stage_time-prev_time)/3600, 1), 0)\n| stats avg(stage_duration_hours) as avg_hours, perc95(stage_duration_hours) as p95_hours, count as transitions by stage\n| eval avg_days=round(avg_hours/24, 1)\n| sort stage\n| table stage, avg_hours, avg_days, p95_hours, transitions\n```\n\nUnderstanding this SPL\n\n**Order-to-Cash Cycle Time and Bottleneck Analysis** — Measures the complete order-to-cash cycle — order placement, picking, packing, shipping, delivery, invoicing, payment receipt — identifying which stages take longest and where delays cluster. Operations leaders see exactly where the fulfilment process breaks down and can target improvements to reduce working capital tied up in the cycle.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:erp_orders\"` (order_id, customer, stage, stage_timestamp, value). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:erp_orders. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:erp_orders\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **stage_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` rolls up events into metrics; results are split **by order_id** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **stage_duration_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by stage** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **avg_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Order-to-Cash Cycle Time and Bottleneck Analysis**): table stage, avg_hours, avg_days, p95_hours, transitions\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Waterfall chart (time per stage), Bar chart (avg vs P95 by stage), Line chart (cycle time trend), Single value (overall avg order-to-cash days).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We follow each customer order through its business steps—from confirming the order to shipping the goods to getting paid—and measure the hours between those steps so we can see whether delays pile up in the warehouse, on the road, or in billing before cash arrives.",
              "mtype": [
                "Business"
              ],
              "ind": "Manufacturing, Retail, Distribution",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.5.2",
              "n": "Inventory Level Monitoring and Stockout Risk",
              "c": "critical",
              "f": "intermediate",
              "v": "Monitors inventory levels against reorder points and forecasted demand — flagging products at risk of stockout before it happens. A stockout means lost sales and disappointed customers. Equally, excess inventory ties up cash and risks obsolescence. This gives supply chain managers a balanced, exception-based view of where to act.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:inventory\"` (sku, warehouse, qty_on_hand, reorder_point, daily_demand_avg, lead_time_days)",
              "q": "index=business sourcetype=\"dbx:inventory\" earliest=-1d@d latest=now()\n| eval days_of_stock=if(daily_demand_avg>0, round(qty_on_hand/daily_demand_avg, 0), 999)\n| eval status=case(\n    qty_on_hand <= 0, \"STOCKOUT\",\n    days_of_stock <= lead_time_days, \"CRITICAL — below lead time\",\n    qty_on_hand <= reorder_point, \"REORDER NOW\",\n    days_of_stock > 180, \"OVERSTOCK\",\n    1=1, \"OK\")\n| where status!=\"OK\"\n| eval revenue_at_risk=if(status IN (\"STOCKOUT\",\"CRITICAL — below lead time\"), daily_demand_avg * unit_price * lead_time_days, 0)\n| sort status, - revenue_at_risk\n| table sku, product_name, warehouse, qty_on_hand, reorder_point, days_of_stock, lead_time_days, status, revenue_at_risk",
              "m": "(1) Import inventory snapshot via DB Connect daily from ERP/WMS; (2) calculate rolling average daily demand from sales history; (3) include supplier lead times in the lookup; (4) alert purchasing team immediately on STOCKOUT and CRITICAL items; (5) generate a weekly overstock report for markdown/clearance decisions; (6) integrate with demand forecasting model outputs for improved accuracy.",
              "z": "Table (exception list), Single value (items in stockout, total revenue at risk), Gauge (% of SKUs at healthy levels), Bar chart (stockout risk by category).",
              "kfp": "Merchant-led assortment resets ahead of promotional peaks deliberately drain regional bins while inbound ASN-backed receipts arrive inside ten calendar days—the JDBC snapshot moment flags STOCKOUT even though planners staged replenishment aligned with vendor trailers rather than reorder-point breaches alone. Goods-receipt postings queued behind customs clearance hold inbound quantities off unrestricted stock until SAP MM documents post even though warehouse supervisors visually scanned pallets onto docks (transaction MB51 backlog versus MB52 readable balances). Frozen fiscal-cycle cycle-count postings swing unrestricted quantities ±8 percent overnight without consumption shifts until nightly rolling-average jobs rebuild demand signals, flashing transient CRITICAL tiers solely from ledger adjustments. Heavy holiday returns reopen pallets into bulk reserve locations without attaching outbound shipments that informed the trailing thirty-day consumption statistic—qty_on_hand balloons while rolling averages remain depressed from peak outbound weeks left in the denominator. Procurement edits lengthen supplier calendars during peak ocean seasons yet JDBC extracts keep contractual forty-five-day defaults until MM master maintains MD04/MM03 pushes—risk dollars inflate purely from stale `lead_time_days` literals versus SAP `/SAPAPO/` forecasts already rerouted by planners. Forty-eight-hour lightning promotions spike outbound picks faster than trailing thirty-day averages imply once lifts roll past moving-average windows—Splunk reads tame consumption until spike demand enters averaged rows days later while planners assumed buffers offline.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:inventory\"` (sku, warehouse, qty_on_hand, reorder_point, daily_demand_avg, lead_time_days).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import inventory snapshot via DB Connect daily from ERP/WMS; (2) calculate rolling average daily demand from sales history; (3) include supplier lead times in the lookup; (4) alert purchasing team immediately on STOCKOUT and CRITICAL items; (5) generate a weekly overstock report for markdown/clearance decisions; (6) integrate with demand forecasting model outputs for improved accuracy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:inventory\" earliest=-1d@d latest=now()\n| eval days_of_stock=if(daily_demand_avg>0, round(qty_on_hand/daily_demand_avg, 0), 999)\n| eval status=case(\n    qty_on_hand <= 0, \"STOCKOUT\",\n    days_of_stock <= lead_time_days, \"CRITICAL — below lead time\",\n    qty_on_hand <= reorder_point, \"REORDER NOW\",\n    days_of_stock > 180, \"OVERSTOCK\",\n    1=1, \"OK\")\n| where status!=\"OK\"\n| eval revenue_at_risk=if(status IN (\"STOCKOUT\",\"CRITICAL — below lead time\"), daily_demand_avg * unit_price * lead_time_days, 0)\n| sort status, - revenue_at_risk\n| table sku, product_name, warehouse, qty_on_hand, reorder_point, days_of_stock, lead_time_days, status, revenue_at_risk\n```\n\nUnderstanding this SPL\n\n**Inventory Level Monitoring and Stockout Risk** — Monitors inventory levels against reorder points and forecasted demand — flagging products at risk of stockout before it happens. A stockout means lost sales and disappointed customers. Equally, excess inventory ties up cash and risks obsolescence. This gives supply chain managers a balanced, exception-based view of where to act.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:inventory\"` (sku, warehouse, qty_on_hand, reorder_point, daily_demand_avg, lead_time_days). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:inventory\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **days_of_stock** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where status!=\"OK\"` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **revenue_at_risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Inventory Level Monitoring and Stockout Risk**): table sku, product_name, warehouse, qty_on_hand, reorder_point, days_of_stock, lead_time_days, status, revenue_at_risk\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (exception list), Single value (items in stockout, total revenue at risk), Gauge (% of SKUs at healthy levels), Bar chart (stockout risk by category).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch how much usable stock sits in each warehouse once safety cushions come off the top, compare what is left with how fast sales burn through units and how many days suppliers need to refill shelves. When it looks like we could sell out before the next shipment lands, we flag those lines early so buyers fix shortages—not hours chasing orders step by step.",
              "mtype": [
                "Business"
              ],
              "ind": "Retail, Manufacturing, Distribution",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.5.3",
              "n": "Supplier On-Time Delivery Performance",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks whether suppliers deliver on time, in full (OTIF) — the single most important supplier performance metric. Procurement teams see which suppliers consistently miss delivery dates and can use the data in contract negotiations, sourcing decisions, and supplier development programmes.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:purchase_orders\"` (po_number, supplier, promised_date, actual_delivery_date, qty_ordered, qty_received)",
              "q": "index=business sourcetype=\"dbx:purchase_orders\" actual_delivery_date=* earliest=-180d\n| eval promised=strptime(promised_date, \"%Y-%m-%d\")\n| eval actual=strptime(actual_delivery_date, \"%Y-%m-%d\")\n| eval days_late=round((actual-promised)/86400, 0)\n| eval on_time=if(days_late <= 0, 1, 0)\n| eval in_full=if(qty_received >= qty_ordered, 1, 0)\n| eval otif=if(on_time=1 AND in_full=1, 1, 0)\n| stats count as deliveries, sum(on_time) as on_time_count, sum(in_full) as in_full_count, sum(otif) as otif_count, avg(days_late) as avg_days_late by supplier\n| eval otif_pct=round(100*otif_count/deliveries, 1)\n| eval on_time_pct=round(100*on_time_count/deliveries, 1)\n| eval in_full_pct=round(100*in_full_count/deliveries, 1)\n| sort - deliveries\n| table supplier, deliveries, on_time_pct, in_full_pct, otif_pct, avg_days_late",
              "m": "(1) Import purchase order data including promised and actual delivery dates via DB Connect; (2) define your OTIF tolerance (e.g., ±1 day for \"on time\"); (3) schedule monthly for supplier reviews; (4) alert procurement when any strategic supplier's OTIF drops below 90%; (5) share supplier scorecards via scheduled PDF reports.",
              "z": "Table (supplier scorecard), Bar chart (OTIF by supplier), Line chart (OTIF trend over months), Single value (overall OTIF rate).",
              "kfp": "Buyer-initiated PO quantity bumps leave **EKET** promised dates stale while **EKBE** receipts track new totals — OTIF reads falsely late until confirmations replay. Dock clerks batch **MIGO** postings one to three days after physical receipt so GR timestamps drift though trailers arrived on schedule. Supplier acknowledgement categories (**AB** versus **LA**) stamp different baseline dates — dashboards anchored only on order acknowledgement mis-score OTIF versus logistics-aware notices. Mixed pallets through ocean consolidation hubs tie receipts to hub arrival windows rather than promised PO-line milestones once freight-forward data absent. Materials parked in SAP quality inspection buckets appear unreleased-to-stock though pallets landed — procurement argues OTIF satisfied while planners still starved downstream when QC gates linger. Multiple GR documents against one PO line demand summed quantities — Splunk tallies fall short when reversal movements (**MOVEMENT_TYPE** 102) net incompletely across polls. Drop-ship lanes bypass hub GR entirely — receipts never arrive where dashboards expect warehouse postings; mirror Coupa receipt confirmations instead of ERP GR rows only. Vendor-managed inventory programs replace discrete PO cycles — OTIF denominators evaporate though service stays acceptable. Buyer calendars declare holidays supplier calendars ignore — dock closures skew perceived lateness near weekends. Midnight timezone normalization compares UTC-cut promises against plant-local receipts creating ± one-day jitter at fiscal boundaries. Split-line shipments score blended OTIF when line two slips though line one carries the headline vendor KPI. Quarter-end pushes concentrating receipts inflate OTIF optics versus steady rhythm buyers expected. SaaS lifecycle labels (**Coupa** Closed versus Partially Received versus **Ariba** Received) mismatch ERP GR counts until normalization lookups reconcile shipment states. ASN (**EDI 856**) arrival timestamps diverge from dock sign-offs — variance signals paperwork friction rather than carrier punctuality alone. FX snapshots comparing promised-value-only checks versus GR-period valuation disturb full-dollar fulfillment logic though quantities reconcile. Macro disruptions such as Suez congestion or fleet-wide canal delays crater cohort OTIF regardless of supplier tactics — segregate force-majeure buckets before punitive reviews.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:purchase_orders\"` (po_number, supplier, promised_date, actual_delivery_date, qty_ordered, qty_received).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import purchase order data including promised and actual delivery dates via DB Connect; (2) define your OTIF tolerance (e.g., ±1 day for \"on time\"); (3) schedule monthly for supplier reviews; (4) alert procurement when any strategic supplier's OTIF drops below 90%; (5) share supplier scorecards via scheduled PDF reports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:purchase_orders\" actual_delivery_date=* earliest=-180d\n| eval promised=strptime(promised_date, \"%Y-%m-%d\")\n| eval actual=strptime(actual_delivery_date, \"%Y-%m-%d\")\n| eval days_late=round((actual-promised)/86400, 0)\n| eval on_time=if(days_late <= 0, 1, 0)\n| eval in_full=if(qty_received >= qty_ordered, 1, 0)\n| eval otif=if(on_time=1 AND in_full=1, 1, 0)\n| stats count as deliveries, sum(on_time) as on_time_count, sum(in_full) as in_full_count, sum(otif) as otif_count, avg(days_late) as avg_days_late by supplier\n| eval otif_pct=round(100*otif_count/deliveries, 1)\n| eval on_time_pct=round(100*on_time_count/deliveries, 1)\n| eval in_full_pct=round(100*in_full_count/deliveries, 1)\n| sort - deliveries\n| table supplier, deliveries, on_time_pct, in_full_pct, otif_pct, avg_days_late\n```\n\nUnderstanding this SPL\n\n**Supplier On-Time Delivery Performance** — Tracks whether suppliers deliver on time, in full (OTIF) — the single most important supplier performance metric. Procurement teams see which suppliers consistently miss delivery dates and can use the data in contract negotiations, sourcing decisions, and supplier development programmes.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:purchase_orders\"` (po_number, supplier, promised_date, actual_delivery_date, qty_ordered, qty_received). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:purchase_orders. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:purchase_orders\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **promised** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **actual** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_late** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **on_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **in_full** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **otif** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by supplier** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **otif_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **on_time_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **in_full_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Supplier On-Time Delivery Performance**): table supplier, deliveries, on_time_pct, in_full_pct, otif_pct, avg_days_late\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (supplier scorecard), Bar chart (OTIF by supplier), Line chart (OTIF trend over months), Single value (overall OTIF rate).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We check whether suppliers keep their delivery promises — both on the day they said and with the full amount we ordered — and show which ones miss often enough to hurt production. That way we negotiate fair contracts and help the right vendors before shortages cost big money.",
              "mtype": [
                "Business"
              ],
              "ind": "Manufacturing, Retail, Distribution",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.5.4",
              "n": "Delivery SLA Compliance and Last-Mile Performance",
              "c": "high",
              "f": "intermediate",
              "v": "Measures whether customer orders are delivered within promised timeframes — next-day, 2-day, standard — by carrier and region. Logistics managers see which carriers and routes are failing SLAs, while customer experience teams understand the impact on satisfaction. A single percentage-point improvement in delivery SLA compliance can significantly reduce customer complaints.",
              "t": "Splunk DB Connect (Splunkbase 2686), HEC (carrier tracking data)",
              "d": "`index=business` `sourcetype=\"dbx:shipments\"` (shipment_id, carrier, promised_delivery, actual_delivery, origin, destination, service_level)",
              "q": "index=business sourcetype=\"dbx:shipments\" actual_delivery=* earliest=-30d\n| eval promised_epoch=strptime(promised_delivery, \"%Y-%m-%d\")\n| eval actual_epoch=strptime(actual_delivery, \"%Y-%m-%d\")\n| eval days_variance=round((actual_epoch-promised_epoch)/86400, 0)\n| eval sla_met=if(days_variance <= 0, 1, 0)\n| stats count as shipments, sum(sla_met) as on_time, avg(days_variance) as avg_variance by carrier, service_level\n| eval sla_compliance=round(100*on_time/shipments, 1)\n| sort carrier, service_level\n| table carrier, service_level, shipments, on_time, sla_compliance, avg_variance",
              "m": "(1) Import shipment tracking data from TMS or carrier APIs via HEC; (2) map carrier service levels to your customer-facing delivery promises; (3) schedule daily for logistics team; (4) alert when any carrier/service_level combination drops below 95% compliance; (5) calculate financial impact of late deliveries (refunds, credits, lost customers).",
              "z": "Bar chart (SLA compliance by carrier), Table (carrier scorecard), Heatmap (carrier × region), Single value (overall SLA compliance %).",
              "kfp": "Carrier handheld queues occasionally upload Delivered legs ahead of Out-for-Delivery milestones—Splunk timelines sorted solely by ingestion order inflate punctuality unless scans reorder by canonical scan_ts tied to Memphis hub clocks versus shopper curb expectations. Checkout banners promising truncated transit windows still cite fulfillment-delay buffers baked two seasons earlier—Splunk compares storefront countdown modules rather than refreshed parcel-commit matrices published internally by carriers. Weather-throttled hubs suspend published guarantees regionally yet JDBC-fed promised_delivery_at stamps ignore disaster carve-outs until planners paste advisory overrides—noise accumulates unless lookup rows ingest USPS/FedEx published suspension notices.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HEC (carrier tracking data).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:shipments\"` (shipment_id, carrier, promised_delivery, actual_delivery, origin, destination, service_level).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import shipment tracking data from TMS or carrier APIs via HEC; (2) map carrier service levels to your customer-facing delivery promises; (3) schedule daily for logistics team; (4) alert when any carrier/service_level combination drops below 95% compliance; (5) calculate financial impact of late deliveries (refunds, credits, lost customers).\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:shipments\" actual_delivery=* earliest=-30d\n| eval promised_epoch=strptime(promised_delivery, \"%Y-%m-%d\")\n| eval actual_epoch=strptime(actual_delivery, \"%Y-%m-%d\")\n| eval days_variance=round((actual_epoch-promised_epoch)/86400, 0)\n| eval sla_met=if(days_variance <= 0, 1, 0)\n| stats count as shipments, sum(sla_met) as on_time, avg(days_variance) as avg_variance by carrier, service_level\n| eval sla_compliance=round(100*on_time/shipments, 1)\n| sort carrier, service_level\n| table carrier, service_level, shipments, on_time, sla_compliance, avg_variance\n```\n\nUnderstanding this SPL\n\n**Delivery SLA Compliance and Last-Mile Performance** — Measures whether customer orders are delivered within promised timeframes — next-day, 2-day, standard — by carrier and region. Logistics managers see which carriers and routes are failing SLAs, while customer experience teams understand the impact on satisfaction. A single percentage-point improvement in delivery SLA compliance can significantly reduce customer complaints.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:shipments\"` (shipment_id, carrier, promised_delivery, actual_delivery, origin, destination, service_level). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HEC (carrier tracking data). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:shipments. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:shipments\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **promised_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **actual_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_variance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by carrier, service_level** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **sla_compliance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Delivery SLA Compliance and Last-Mile Performance**): table carrier, service_level, shipments, on_time, sla_compliance, avg_variance\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (SLA compliance by carrier), Table (carrier scorecard), Heatmap (carrier × region), Single value (overall SLA compliance %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We line up what people were told about when a box would arrive with what record shows happened at the door—street by street and lane by lane—without mixing that picture with dock receipts or invoice bundles studied elsewhere.",
              "mtype": [
                "Performance",
                "Business"
              ],
              "ind": "Retail, Logistics, E-Commerce",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.5.5",
              "n": "Perfect Order Rate and Customer Impact",
              "c": "high",
              "f": "intermediate",
              "v": "Combines on-time, in-full, and damage-free delivery into one perfect order score so sales and operations share one customer-facing metric. We help you see when service looks acceptable on paper but customers still receive wrong or damaged goods.",
              "t": "Splunk DB Connect (Splunkbase 2686), HEC",
              "d": "`index=business` `sourcetype=\"dbx:shipments\"` (order_id, customer_tier, sla_met, damage_flag, short_ship_flag), `index=business` `sourcetype=\"returns_log\"` (order_id, return_reason)",
              "q": "index=business sourcetype=\"dbx:shipments\" earliest=-30d\n| eval on_time=sla_met\n| eval in_full=if(short_ship_flag=\"no\",1,0)\n| eval no_damage=if(damage_flag=\"no\",1,0)\n| join type=left order_id [\n    search index=business sourcetype=\"returns_log\" earliest=-30d\n    | stats count as return_count by order_id\n]\n| fillnull value=0 return_count\n| eval perfect=if(on_time=1 AND in_full=1 AND no_damage=1 AND return_count=0,1,0)\n| stats count as orders, sum(perfect) as perfect_orders by customer_tier\n| eval perfect_order_rate=round(100*perfect_orders/orders,1)\n| sort - perfect_order_rate\n| table customer_tier, orders, perfect_orders, perfect_order_rate",
              "m": "(1) Align shipment, quality, and returns feeds on a common order identifier ingested through DB Connect or HEC; (2) define “perfect” with commercial and logistics leaders; (3) publish weekly to account teams when any customer tier drops below the agreed perfect-order threshold.",
              "z": "Bar chart (perfect order rate by tier), Single value (overall perfect order %), Table (tier detail), Line chart (weekly trend).",
              "kfp": "Merchant storefront countdown edits shorten shopper-visible arrival timers without rewriting JDBC promised landmarks sourced hours earlier — Splunk punctuality misreads shopper-approved slack after kiosk banners refreshed overnight. Consolidator steamship manifests stitch unrelated purchaser commitments beneath one vessel identifier — parcel proofs satisfy maritime milestones yet disguise which storefront pledge slipped once drums unload amid unrelated receipts staged shifts apart. Reverse-logistics refurbishment credits resurrect fulfillment tuples previously stamped fulfilled — dormant identifiers collide unless lineage surrogate keys fingerprint refurbishment waves distinctly from maiden voyages. Boutique replenishment cohorts skew dormant-buyer churn proxies whenever ninety-day silence reflects negotiated annual replenishment pauses—not abandonment inferred blindly absent contractual clauses. Parcel gateways spanning datelines stamp midday arrivals while dock scanners register dusk carton closes twelve diary hours earlier — tardiness horns sound despite pragmatic dock adherence reconciled manually inside transportation bridges weekly. Sequential tractor appointments splitting pallets duplicate denominators whenever JDBC emits two trailer rows though storefront charters insist singular trophy scoring aligned with consolidated manifests planners circulate privately. Curbside showroom pickups omit courier milestones entirely — tardiness predicates treating absent mileage proofs like interstate misses malign associates honoring verbal pickup pledges shoppers applaud though timestamps remain invisible inside parcel consoles altogether. Signature detention narratives annotate carrier timestamps Splunk punctuality treats like blemishes irrespective of courteous shopper-approved postponements negotiated verbally beyond ticketing hierarchies surfaced digitally overnight. Treasury interim accrual freezes withhold invoicing identifiers despite carton completeness milestones logistics satisfied punctually — documentation predicates referencing ERP invoices alone mis-flag immaculate shipments whenever accountants deliberate ASC606 narratives unrelated to dock throughput excellence nightly ops reviews emphasize confidently despite CFO dashboards highlighting provisional deferrals auditors reconcile afterward collaboratively.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HEC.\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:shipments\"` (order_id, customer_tier, sla_met, damage_flag, short_ship_flag), `index=business` `sourcetype=\"returns_log\"` (order_id, return_reason).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align shipment, quality, and returns feeds on a common order identifier ingested through DB Connect or HEC; (2) define “perfect” with commercial and logistics leaders; (3) publish weekly to account teams when any customer tier drops below the agreed perfect-order threshold.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:shipments\" earliest=-30d\n| eval on_time=sla_met\n| eval in_full=if(short_ship_flag=\"no\",1,0)\n| eval no_damage=if(damage_flag=\"no\",1,0)\n| join type=left order_id [\n    search index=business sourcetype=\"returns_log\" earliest=-30d\n    | stats count as return_count by order_id\n]\n| fillnull value=0 return_count\n| eval perfect=if(on_time=1 AND in_full=1 AND no_damage=1 AND return_count=0,1,0)\n| stats count as orders, sum(perfect) as perfect_orders by customer_tier\n| eval perfect_order_rate=round(100*perfect_orders/orders,1)\n| sort - perfect_order_rate\n| table customer_tier, orders, perfect_orders, perfect_order_rate\n```\n\nUnderstanding this SPL\n\n**Perfect Order Rate and Customer Impact** — Combines on-time, in-full, and damage-free delivery into one perfect order score so sales and operations share one customer-facing metric. We help you see when service looks acceptable on paper but customers still receive wrong or damaged goods.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:shipments\"` (order_id, customer_tier, sla_met, damage_flag, short_ship_flag), `index=business` `sourcetype=\"returns_log\"` (order_id, return_reason). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:shipments. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:shipments\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **on_time** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **in_full** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **no_damage** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **perfect** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by customer_tier** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **perfect_order_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Perfect Order Rate and Customer Impact**): table customer_tier, orders, perfect_orders, perfect_order_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (perfect order rate by tier), Single value (overall perfect order %), Table (tier detail), Line chart (weekly trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We bundle arrival timing, shipment completeness, intact merchandise, and invoice-ready paperwork into one grade per outbound sale, then pair dips with shopper replies when panels respond so crews chase stories shoppers told—not guesses from spreadsheets.",
              "mtype": [
                "Business"
              ],
              "ind": "Retail, Manufacturing, Distribution",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.5.6",
              "n": "Capacity Utilisation vs Demand Forecast",
              "c": "high",
              "f": "intermediate",
              "v": "Compares production or warehouse throughput to forecast demand so planners see under-used lines before capital is wasted or overloaded sites before service fails. We help you align staffing and shifts with expected volume swings.",
              "t": "Splunk DB Connect (Splunkbase 2686), HEC",
              "d": "`index=business` `sourcetype=\"mes:throughput\"` (line_id, units_produced, shift_date), `demand_forecast.csv` (line_id, week, forecast_units)",
              "q": "index=business sourcetype=\"mes:throughput\" earliest=-14d\n| eval week=strftime(_time,\"%Y-%U\")\n| stats sum(units_produced) as actual_units by line_id, week\n| lookup demand_forecast.csv line_id week OUTPUT forecast_units\n| eval utilisation_pct=if(forecast_units>0, round(100*actual_units/forecast_units,1), null())\n| eval gap_units=forecast_units-actual_units\n| where utilisation_pct<85 OR utilisation_pct>115\n| eval abs_gap=abs(gap_units)\n| sort - abs_gap\n| table line_id, week, forecast_units, actual_units, utilisation_pct, gap_units",
              "m": "(1) Ingest manufacturing execution system or warehouse throughput events on each shift via HEC; (2) refresh `demand_forecast.csv` from planning each week with matching line and week keys; (3) review exceptions daily with planning and plant managers to rebalance loads or adjust forecasts.",
              "z": "Line chart (utilisation vs one hundred percent by line), Table (under and over capacity), Single value (lines out of band), Bar chart (gap units by line).",
              "kfp": "Authorized tooling changeovers logged inside CMMS under `SETUP` families occasionally inherit `UNCATEG` downtime buckets inside `factorytalk:downtime_event` extracts — Availability plunges despite planners approving forty-five-minute inserts aligned with Lean cadences until `reason_family` lookups reconcile coded `AFVV` routing strings from SAP `AFKO`. Micro-interruption classification macros flipping `<=120` versus `<=90` seconds thresholds materially rearrange `Performance` five-to-twelve percentage points because jitter near arbitrary boundaries separates `IDLE_STARVED` buckets — freeze thresholds alongside PLC ladder freeze tags published quarterly. `Counter_Good` OPC NodeIds renamed after Allen-Bradley firmware refreshes silently zero Splunk deltas until `FIELDALIAS` catch `PART_GOOD_ACCUM` synonyms OT programmers checked into GitLab weeks earlier — dashboards mimic starvation absent mechanical faults. Parallel historians (**Ignition Tag Historian** versus **FactoryTalk Metrics**) occasionally disagree **150 ms** on `State_Code` transitions causing `Performance` numerator denominators to lag `MES` pallet scans though conveyors stayed synchronous — widen `bin` buckets cautiously before accusing rate-loss clusters. MQTT `NBIRTH` certificate churn across HiveMQ clusters emits fifteen-to-thirty-second telemetry gaps resembling starvation despite uninterrupted spindle currents until `sparkplug` `seq` gaps flagged benign. Demand-plan snapshots referencing `scenario=W17_PREEXEC` mismatch Splunk `lookup` rows frozen at `scenario=W17_PUBLISHED` — `variance_pct` spikes purely Planning reconciliation artefact until nightly `fc_ver` watermark monitors succeed. Commercial `pull-forward` incentives inflate `forecast_units` temporarily separate from mechanical reliability yet `actual_good` falls short whenever distributors postpone pickups unrelated to shop-floor `OEE`. Quality supervisors mid-shift relabel `scrap` baskets into `rework` queues — `Quality` denominators oscillate unless nightly `parts_scrapped` reconciliations merge rework routers Saturday overtime bursts omitted from `planned_runtime_sec` defaults inflate `Availability` alarms absent supplemental `shift_calendar` overlays Facilities publishes Fridays. `Demand_Response` curtailment programmes deliberately throttle `State_Code` despite intact tooling — `Performance` dips benign absent Facilities `demand_response_active` annotation bridging SPL through `lookup`. Predictive `MAINT` windows counted identical to stochastic breakdown `DT` inflate `Availability` misses unless `planned_pm` booleans subtract numerator adjustments sourced from SAP `IW39` extracts Splunk optionally joins IT `MES` outages suppress `downtime_reason` altogether leaving Splunk `Availability` numerator reliant solely on PLC `Alarm` caches buffered offline until MQTT redundancy heals.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HEC.\n• Ensure the following data sources are available: `index=business` `sourcetype=\"mes:throughput\"` (line_id, units_produced, shift_date), `demand_forecast.csv` (line_id, week, forecast_units).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest manufacturing execution system or warehouse throughput events on each shift via HEC; (2) refresh `demand_forecast.csv` from planning each week with matching line and week keys; (3) review exceptions daily with planning and plant managers to rebalance loads or adjust forecasts.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"mes:throughput\" earliest=-14d\n| eval week=strftime(_time,\"%Y-%U\")\n| stats sum(units_produced) as actual_units by line_id, week\n| lookup demand_forecast.csv line_id week OUTPUT forecast_units\n| eval utilisation_pct=if(forecast_units>0, round(100*actual_units/forecast_units,1), null())\n| eval gap_units=forecast_units-actual_units\n| where utilisation_pct<85 OR utilisation_pct>115\n| eval abs_gap=abs(gap_units)\n| sort - abs_gap\n| table line_id, week, forecast_units, actual_units, utilisation_pct, gap_units\n```\n\nUnderstanding this SPL\n\n**Capacity Utilisation vs Demand Forecast** — Compares production or warehouse throughput to forecast demand so planners see under-used lines before capital is wasted or overloaded sites before service fails. We help you align staffing and shifts with expected volume swings.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"mes:throughput\"` (line_id, units_produced, shift_date), `demand_forecast.csv` (line_id, week, forecast_units). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: mes:throughput. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"mes:throughput\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **week** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by line_id, week** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **utilisation_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **gap_units** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where utilisation_pct<85 OR utilisation_pct>115` — typically the threshold or rule expression for this monitoring goal.\n• `eval` defines or adjusts **abs_gap** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Capacity Utilisation vs Demand Forecast**): table line_id, week, forecast_units, actual_units, utilisation_pct, gap_units\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (utilisation vs one hundred percent by line), Table (under and over capacity), Single value (lines out of band), Bar chart (gap units by line).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We tally what each assembly line truly produced compared with what planners scheduled for that week, and when those totals disagree we trace whether machines stopped too long, ran slower than designed, or threw away parts—well before shortage talks drift toward trucking lanes studied elsewhere.",
              "mtype": [
                "Business"
              ],
              "ind": "Manufacturing, Logistics",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.5.7",
              "n": "Returns Rate and Reverse Logistics Cost",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks return counts and refund value against shipped units so merchandising sees which products drive margin leakage. We help you trigger quality reviews or sizing guides before return rates damage the brand.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"returns_log\"` (sku, return_date, refund_amount, reason_code), `index=business` `sourcetype=\"dbx:shipments\"` (sku, shipped_qty, ship_date)",
              "q": "index=business sourcetype=\"returns_log\" earliest=-30d\n| stats count as returns, sum(refund_amount) as refund_total by sku, reason_code\n| join type=left sku [\n    search index=business sourcetype=\"dbx:shipments\" earliest=-30d\n    | stats sum(shipped_qty) as shipped_units by sku\n]\n| eval return_rate_pct=if(shipped_units>0, round(100*returns/shipped_units,2), null())\n| sort - refund_total\n| table sku, reason_code, shipped_units, returns, return_rate_pct, refund_total",
              "m": "(1) Load returns authorisations and shipment facts from order management with a shared stock keeping unit key; (2) normalise reason codes to a small taxonomy for reporting; (3) schedule weekly and invite category managers when any stock keeping unit exceeds the agreed return rate.",
              "z": "Bar chart (return rate by SKU), Table (reason code breakdown), Single value (portfolio return %), Pie chart (reason mix).",
              "kfp": "January apparel waves spike **`bracketing_lines`** once holiday gifts swap sizes though SKU defect rates remain steady — Splunk ratios resembling wardrobing fraud per Appriss wardrobing percentages (**nine-point-seven** percent fraud taxonomy category) flag morally benign exchanges absent investigator overlays tying persona histories. Extended **`gift_return`** calendars overlapping retailer-hosted ninety-day goodwill policies inflate **`customer_changed_mind`** tuples purely because beneficiaries postpone mall visits rather than signalling SKU toxicity deserving QC containment holds — differentiate CSR-coded narratives referencing **`gift_recipient`** chatter versus preventable defects. Lightning-fast promotional markdown calendars synchronize **`promotional_arbitrage`** bursts coinciding with lawful price-protection refunds retailers intentionally honour — Splunk **`promo_arb_lines`** climbs despite morally permissible shopper behaviours CFO offices accept contractually — annotate Shopify Flow **`bonus_credit`** automation parity before accusing shoppers of misconduct. Heavy footwear spring-training brackets elevate **`bracketing_lines`** absent fraudulent intent whenever runners reserve indoor versus outdoor soles concurrently — mimic fraud dashboards yet stem from benign athlete experimentation referenced orthopaedics counsels approve clinically — overlay SKU descriptions referencing cushion tiers rather than Splunk thresholds alone. Regional parcel-loss clusters flagged **`quality_defect`** attributions purely because corrugated **`carrier_damage`** manifests mimic merchandise faults inside **`wrong_item`** queues — correlate UC-23.5.4 courier **`normalized_exception`** overlays referencing FedEx Memphis sorting anomalies versus SKU rework defects inside QA ticketing bridges — Splunk splits blame incorrectly whenever taxonomy lumps logistics abrasions alongside supplier workmanship escapes unless **`carrier_claim`** identifiers augment SPL macros explicitly. Enterprise **`B2B`** blanket **`accept_all_returns`** clauses inflate **`margin_leak_pct`** versus disciplined **`B2C`** storefront economics tracked independently — CFO councils intentionally tolerate divergent denominators reflecting contractual allowances unrelated to consumer impulse behaviours analysed casually inside naive SPL aggregates keyed solely on SKU families lacking **`customer_segment`** overlays sourced from **`customer_tier.csv`** lookups inherited loosely from UC-23.5.5 segmentation macros albeit interpreted distinctly here because refunds reconcile differently across negotiated allowances rather than debating outbound pallet punctuality altogether. Fitness-category seasonal swings elevate **`bracketing_lines`** whenever marathon-training spikes collide with apparel markdown calendars misclassified purely as **`promotional_arbitrage`** absent storefront banners documenting deliberate athlete cohort allowances HR wellness stipends reimburse cooperatively — annotate payroll **`wellness_credit`** overlays weekly so Splunk thresholds honour contractual carve-outs unrelated to SKU workmanship regressions deserving QA containment boards referenced upstream.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Enterprise Security](https://splunkbase.splunk.com/app/263)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"returns_log\"` (sku, return_date, refund_amount, reason_code), `index=business` `sourcetype=\"dbx:shipments\"` (sku, shipped_qty, ship_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Load returns authorisations and shipment facts from order management with a shared stock keeping unit key; (2) normalise reason codes to a small taxonomy for reporting; (3) schedule weekly and invite category managers when any stock keeping unit exceeds the agreed return rate.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"returns_log\" earliest=-30d\n| stats count as returns, sum(refund_amount) as refund_total by sku, reason_code\n| join type=left sku [\n    search index=business sourcetype=\"dbx:shipments\" earliest=-30d\n    | stats sum(shipped_qty) as shipped_units by sku\n]\n| eval return_rate_pct=if(shipped_units>0, round(100*returns/shipped_units,2), null())\n| sort - refund_total\n| table sku, reason_code, shipped_units, returns, return_rate_pct, refund_total\n```\n\nUnderstanding this SPL\n\n**Returns Rate and Reverse Logistics Cost** — Tracks return counts and refund value against shipped units so merchandising sees which products drive margin leakage. We help you trigger quality reviews or sizing guides before return rates damage the brand.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"returns_log\"` (sku, return_date, refund_amount, reason_code), `index=business` `sourcetype=\"dbx:shipments\"` (sku, shipped_qty, ship_date). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: returns_log. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"returns_log\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by sku, reason_code** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **return_rate_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Returns Rate and Reverse Logistics Cost**): table sku, reason_code, shipped_units, returns, return_rate_pct, refund_total\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. 
Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (return rate by SKU), Table (reason code breakdown), Single value (portfolio return %), Pie chart (reason mix).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We tally parcels shoppers ship back alongside refunds and warehouse fees tied to labels and inspections. That picture tells finance where goodwill leaks money without confusing shelf counts or dock receipts studied under other chapters.",
              "mtype": [
                "Business"
              ],
              "ind": "Retail, E-Commerce",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 7,
            "none": 0
          }
        },
        {
          "i": "23.6",
          "n": "Financial Operations & Procurement",
          "u": [
            {
              "i": "23.6.1",
              "n": "Accounts Receivable Aging and Cash Collection",
              "c": "high",
              "f": "intermediate",
              "v": "Shows outstanding receivables by aging bucket — current, 30-day, 60-day, 90-day, 120+ day — with customer-level drill-down. CFOs and controllers see cash collection health at a glance, identify customers drifting into bad debt territory, and measure Days Sales Outstanding (DSO) against targets. Early visibility enables proactive collection before debts become uncollectable.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:ar_invoices\"` (invoice_id, customer, amount, invoice_date, due_date, payment_date, status)",
              "q": "index=business sourcetype=\"dbx:ar_invoices\" status=\"open\" earliest=-1y\n| eval due_epoch=strptime(due_date, \"%Y-%m-%d\")\n| eval days_overdue=round((now()-due_epoch)/86400, 0)\n| eval aging_bucket=case(\n    days_overdue <= 0, \"Current\",\n    days_overdue <= 30, \"1-30 days\",\n    days_overdue <= 60, \"31-60 days\",\n    days_overdue <= 90, \"61-90 days\",\n    1=1, \"90+ days\")\n| stats sum(amount) as total_outstanding, dc(invoice_id) as invoice_count, dc(customer) as customers by aging_bucket\n| eventstats sum(total_outstanding) as grand_total\n| eval pct_of_total=round(100*total_outstanding/grand_total, 1)\n| sort aging_bucket\n| table aging_bucket, invoice_count, customers, total_outstanding, pct_of_total",
              "m": "(1) Import open AR invoices via DB Connect from ERP; (2) schedule daily; (3) alert collections team when any customer exceeds 60 days overdue; (4) calculate DSO: (AR balance / revenue) × days in period; (5) trend DSO monthly for CFO reporting; (6) segment by customer segment and region for targeted collection strategies.",
              "z": "Stacked bar (AR by aging bucket), Table (top overdue customers), Single value (total overdue, DSO), Line chart (DSO trend).",
              "kfp": "Cash applied in the bank on T but posted to FI on T+2 frequently produces a short phantom overdue spike at month boundaries until the clearing batch mirrors treasury reality—indexed rows still show open AR though collectors already hold remittance advice. Invoices parked fully in SAP **FSCM-DM** dispute cases while the FI line remains technically open inflate overdue dollars versus collection priority until dispute resolution closes or writes down amounts. Intercompany billing remains aged in a subsidiary ledger until elimination posts even though consolidation nets to zero group-wide. Credit memos issued against invoices before cancellation postings settle can swing buckets negatively until netting completes. Without-recourse factoring removes balances upstream; with-recourse factoring leaves balances flagged—mis-tagged `factoring_status` sends collectors toward invoices the factor already owns. Quarter-end **F.05** foreign-currency valuation runs shift translated balances without customer payments changing when functional currency differs from transaction currency. ERP nightly re-aging jobs started between 02:00–04:00 UTC often leave Monday morning dashboards one refresh behind treasury spreadsheets until Finance confirms weekend receipt postings finished. Statistical sampling programmes that randomly delay cash-application QA batches near audit windows can widen Splunk-versus-bank variance without any customer paying late. Deferred-revenue schedules that park contra balances beside gross AR sometimes resemble overdue invoices until accountants move amounts into contract-liability accounts—nothing here diagnoses revenue timing misclassification alone. Rolling ninety-day revenue denominators that include heavy returns during recall programmes shrink apparent DSO versus operational expectation until FP&A revises the credit-sales definition feeding `dbx:finance_revenue_daily`. Consolidation-only netting entries booked after subsidiary snapshots replicate zero cash risk at group level yet leave operational dashboards flashing tier-two buckets until Corporate posts eliminations.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:ar_invoices\"` (invoice_id, customer, amount, invoice_date, due_date, payment_date, status).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import open AR invoices via DB Connect from ERP; (2) schedule daily; (3) alert collections team when any customer exceeds 60 days overdue; (4) calculate DSO: (AR balance / revenue) × days in period; (5) trend DSO monthly for CFO reporting; (6) segment by customer segment and region for targeted collection strategies.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:ar_invoices\" status=\"open\" earliest=-1y\n| eval due_epoch=strptime(due_date, \"%Y-%m-%d\")\n| eval days_overdue=round((now()-due_epoch)/86400, 0)\n| eval aging_bucket=case(\n    days_overdue <= 0, \"Current\",\n    days_overdue <= 30, \"1-30 days\",\n    days_overdue <= 60, \"31-60 days\",\n    days_overdue <= 90, \"61-90 days\",\n    1=1, \"90+ days\")\n| stats sum(amount) as total_outstanding, dc(invoice_id) as invoice_count, dc(customer) as customers by aging_bucket\n| eventstats sum(total_outstanding) as grand_total\n| eval pct_of_total=round(100*total_outstanding/grand_total, 1)\n| sort aging_bucket\n| table aging_bucket, invoice_count, customers, total_outstanding, pct_of_total\n```\n\nUnderstanding this SPL\n\n**Accounts Receivable Aging and Cash Collection** — Shows outstanding receivables by aging bucket — current, 30-day, 60-day, 90-day, 120+ day — with customer-level drill-down. CFOs and controllers see cash collection health at a glance, identify customers drifting into bad debt territory, and measure Days Sales Outstanding (DSO) against targets. Early visibility enables proactive collection before debts become uncollectable.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:ar_invoices\"` (invoice_id, customer, amount, invoice_date, due_date, payment_date, status). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:ar_invoices. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:ar_invoices\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **due_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **days_overdue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **aging_bucket** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by aging_bucket** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eventstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• `eval` defines or adjusts **pct_of_total** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Accounts Receivable Aging and Cash Collection**): table aging_bucket, invoice_count, customers, total_outstanding, pct_of_total\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (AR by aging bucket), Table (top overdue customers), Single value (total overdue, DSO), Line chart (DSO trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We line up every unpaid customer bill by how late it is versus its due date, add up what share sits in each lateness band, and highlight who owes the most so collectors call the right accounts first instead of guessing from one big total.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.6.2",
              "n": "Expense Report Anomaly Detection",
              "c": "high",
              "f": "intermediate",
              "v": "Detects unusual expense patterns that may indicate policy violations or fraud — round-number claims just below approval thresholds, duplicate submissions, weekend expenses, excessive entertainment spend, or outliers compared to peers. Finance teams get an exception list rather than reviewing every expense, dramatically improving audit efficiency.",
              "t": "Splunk DB Connect (Splunkbase 2686), HEC",
              "d": "`index=business` `sourcetype=\"expense_reports\"` (report_id, employee_id, department, category, amount, expense_date, merchant, receipt_attached)",
              "q": "index=business sourcetype=\"expense_reports\" earliest=-90d\n| eval anomaly_flags=mvappend(\n    if(amount=round(amount,0) AND amount>=100, \"ROUND_NUMBER\", null()),\n    if(amount>=490 AND amount<500, \"JUST_BELOW_THRESHOLD\", null()),\n    if(tonumber(strftime(strptime(expense_date,\"%Y-%m-%d\"),\"%u\"))>=6, \"WEEKEND\", null()),\n    if(receipt_attached=\"no\" AND amount>25, \"NO_RECEIPT\", null()))\n| eventstats avg(amount) as dept_avg, stdev(amount) as dept_stdev by department, category\n| eval statistical_outlier=if(amount > dept_avg + 2*dept_stdev, \"STATISTICAL_OUTLIER\", null())\n| eval anomaly_flags=mvappend(anomaly_flags, statistical_outlier)\n| where isnotnull(anomaly_flags)\n| stats count as flagged_expenses, sum(amount) as total_flagged_amount, values(anomaly_flags) as reasons by employee_id, department\n| sort - total_flagged_amount\n| table employee_id, department, flagged_expenses, total_flagged_amount, reasons",
              "m": "(1) Import expense data from expense management system (Concur, Expensify, SAP) via DB Connect or HEC; (2) tune approval threshold amounts to match your policy (e.g., $500); (3) schedule weekly for finance review; (4) compare duplicate merchant/date/amount combinations across employees; (5) build a peer comparison model by role and department for more accurate outlier detection.",
              "z": "Table (flagged employees with reasons), Bar chart (anomaly types), Single value (% of expenses flagged), Scatter plot (amount vs department average).",
              "kfp": "Sales kickoffs and distributor conferences routinely produce legitimate Saturday-night entertainment rows flagged by WEEKEND_DISCRETIONARY_CATEGORY despite approved agendas filed outside Splunk. International travelers submitting EUR-equivalent rows against USD approval thresholds appear as JUST_BELOW_APPROVAL_THRESHOLD noise until treasury publishes FX rounding rules beside Splunk thresholds. Tokyo or Manhattan meal totals clustering seventy-three to seventy-four dollars reflect policy-compliant high-cost-locality per-diem bands rather than receipt-evasion — geography lookups missing from Concur profiles amplify this. Duplicate-detection logic colliding with shared corporate-card merchant descriptors across unrelated teammates during conference hotel clusters raises collisions until AP passes canonical ERP invoice keys through DB Connect joins. Round-dollar taxi vouchers at exactly two hundred fifty dollars after vendor rounding frequently resemble ROUND_NUMBER_SPLIT_RULE without intent. Insight Premium or AppZen may already reroute severe offenders inside Concur workflows while Splunk still mirrors residual narrative rows — reviewers interpret Splunk hits as backlog hygiene signals, not standalone accusations.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HEC.\n• Ensure the following data sources are available: `index=business` `sourcetype=\"expense_reports\"` (report_id, employee_id, department, category, amount, expense_date, merchant, receipt_attached).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import expense data from expense management system (Concur, Expensify, SAP) via DB Connect or HEC; (2) tune approval threshold amounts to match your policy (e.g., $500); (3) schedule weekly for finance review; (4) compare duplicate merchant/date/amount combinations across employees; (5) build a peer comparison model by role and department for more accurate outlier detection.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"expense_reports\" earliest=-90d\n| eval anomaly_flags=mvappend(\n    if(amount=round(amount,0) AND amount>=100, \"ROUND_NUMBER\", null()),\n    if(amount>=490 AND amount<500, \"JUST_BELOW_THRESHOLD\", null()),\n    if(tonumber(strftime(strptime(expense_date,\"%Y-%m-%d\"),\"%u\"))>=6, \"WEEKEND\", null()),\n    if(receipt_attached=\"no\" AND amount>25, \"NO_RECEIPT\", null()))\n| eventstats avg(amount) as dept_avg, stdev(amount) as dept_stdev by department, category\n| eval statistical_outlier=if(amount > dept_avg + 2*dept_stdev, \"STATISTICAL_OUTLIER\", null())\n| eval anomaly_flags=mvappend(anomaly_flags, statistical_outlier)\n| where isnotnull(anomaly_flags)\n| stats count as flagged_expenses, sum(amount) as total_flagged_amount, values(anomaly_flags) as reasons by employee_id, department\n| sort - total_flagged_amount\n| table employee_id, department, flagged_expenses, total_flagged_amount, reasons\n```\n\nUnderstanding this SPL\n\n**Expense Report Anomaly Detection** — Detects unusual expense patterns that may indicate policy violations or fraud — round-number claims just below approval thresholds, duplicate submissions, weekend expenses, excessive entertainment spend, or outliers compared to peers. Finance teams get an exception list rather than reviewing every expense, dramatically improving audit efficiency.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"expense_reports\"` (report_id, employee_id, department, category, amount, expense_date, merchant, receipt_attached). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: expense_reports. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"expense_reports\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **anomaly_flags** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eventstats` rolls up events into metrics; results are split **by department, category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **statistical_outlier** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **anomaly_flags** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where isnotnull(anomaly_flags)` — typically the threshold or rule expression for this monitoring goal.\n• `stats` rolls up events into metrics; results are split **by employee_id, department** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Expense Report Anomaly Detection**): table employee_id, department, flagged_expenses, total_flagged_amount, reasons\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (flagged employees with reasons), Bar chart (anomaly types), Single value (% of expenses flagged), Scatter plot (amount vs department average).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We scan employee expense receipts for patterns that look like cheating or mistakes — duplicate dinners, sneaky amounts just under approval limits, or weekends charged like weekdays — so Finance reviews a short suspicious list instead of every reimbursement request coming through payroll.",
              "mtype": [
                "Security",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "security",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.6.3",
              "n": "Budget vs Actual Variance Tracking",
              "c": "high",
              "f": "intermediate",
              "v": "Compares actual spending against budget at the cost centre and GL account level — highlighting where departments are over or under budget. Finance teams and department heads see variances in near-real-time rather than waiting for month-end close, enabling earlier corrective action on runaway costs.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:gl_transactions\"` (cost_centre, gl_account, amount, period), `budget_plan.csv`",
              "q": "index=business sourcetype=\"dbx:gl_transactions\" earliest=-1mon@mon latest=@mon\n| stats sum(amount) as actual_spend by cost_centre, gl_account\n| lookup budget_plan.csv cost_centre gl_account OUTPUT budget_amount\n| eval variance=actual_spend - budget_amount\n| eval variance_pct=if(budget_amount!=0, round(100*variance/budget_amount, 1), null())\n| eval status=case(\n    variance_pct > 10, \"OVER BUDGET\",\n    variance_pct > 5, \"WATCH\",\n    variance_pct < -20, \"UNDERSPEND\",\n    1=1, \"ON TRACK\")\n| where status!=\"ON TRACK\"\n| sort - variance\n| table cost_centre, gl_account, budget_amount, actual_spend, variance, variance_pct, status",
              "m": "(1) Import GL transaction data via DB Connect from ERP; (2) maintain `budget_plan.csv` with approved budget by cost centre and GL account; (3) schedule monthly after period close; (4) add YTD cumulative view alongside monthly; (5) alert department heads when any cost centre exceeds budget by >10%; (6) enable drill-down to individual transactions for variance investigation.",
              "z": "Bar chart (variance by cost centre), Table (over-budget items), Gauge (department spend vs budget), Line chart (cumulative spend vs budget over months).",
              "kfp": "SAP allocation cycles tied to KSU5 or KSV5 often land around Day three of close; variance snapshots taken Day one or two attribute overhead before cycles finish so centres look artificially unfavourable until postings settle. Universal-journal extracts filtered only by period sometimes mix planning sandbox rows when DB Connect pulls reuse dev schemas — totals oscillate until SQL predicates exclude client two-two-two test ledgers. Frozen Oracle `GL_BUDGET_VERSIONS` marked open versus frozen confuse Splunk readers when interim uploads overlap frozen annual envelopes; dashboards spike until controllers publish final freeze timestamps. SaaS rolling forecast pulls (`VERSN` `101`–`104`) refreshed asynchronously versus nightly ERP polls produce phantom drift identical to FX noise until timestamps align on `_time`. Multi-currency portfolios comparing `HSL` variance against USD-only floors from lookup yield misleading breach counts though economic substance sits inside statutory translation buffers—mirror treasury rate-type policies (`TCURR` type `M` versus `E`) before reacting. Capital-project invoices parked temporarily on operating expense accounts ahead of asset capitalization (`ANEK`/`ANEP` not yet tied) inflate opex variance without operations truly overspending. Mid-year reorganizations that remap `RCNTR` through CSKS hierarchy without back-allocating prior months make merged centres resemble runaway spend versus true year-to-date continuity. Reverse-audit top-side journals in special periods (`MEC1` thirteen through sixteen equivalents) absent from operational subledgers swing variance for one day only. Driver-based forecast lines (headcount linked to bonus accruals) may show favourability simply because payroll accruals reverse on a different business day than planned, not because talent costs structurally declined.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:gl_transactions\"` (cost_centre, gl_account, amount, period), `budget_plan.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import GL transaction data via DB Connect from ERP; (2) maintain `budget_plan.csv` with approved budget by cost centre and GL account; (3) schedule monthly after period close; (4) add YTD cumulative view alongside monthly; (5) alert department heads when any cost centre exceeds budget by >10%; (6) enable drill-down to individual transactions for variance investigation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:gl_transactions\" earliest=-1mon@mon latest=@mon\n| stats sum(amount) as actual_spend by cost_centre, gl_account\n| lookup budget_plan.csv cost_centre gl_account OUTPUT budget_amount\n| eval variance=actual_spend - budget_amount\n| eval variance_pct=if(budget_amount!=0, round(100*variance/budget_amount, 1), null())\n| eval status=case(\n    variance_pct > 10, \"OVER BUDGET\",\n    variance_pct > 5, \"WATCH\",\n    variance_pct < -20, \"UNDERSPEND\",\n    1=1, \"ON TRACK\")\n| where status!=\"ON TRACK\"\n| sort - variance\n| table cost_centre, gl_account, budget_amount, actual_spend, variance, variance_pct, status\n```\n\nUnderstanding this SPL\n\n**Budget vs Actual Variance Tracking** — Compares actual spending against budget at the cost centre and GL account level — highlighting where departments are over or under budget. Finance teams and department heads see variances in near-real-time rather than waiting for month-end close, enabling earlier corrective action on runaway costs.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:gl_transactions\"` (cost_centre, gl_account, amount, period), `budget_plan.csv`. **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:gl_transactions. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:gl_transactions\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by cost_centre, gl_account** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **variance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **variance_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **status** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Filters the current rows with `where status!=\"ON TRACK\"` — typically the threshold or rule expression for this monitoring goal.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Budget vs Actual Variance Tracking**): table cost_centre, gl_account, budget_amount, actual_spend, variance, variance_pct, status\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (variance by cost centre), Table (over-budget items), Gauge (department spend vs budget), Line chart (cumulative spend vs budget over months).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We compare what each department planned to spend against what really posted each month so managers see meaningful overshoots early—before leadership asks why numbers shifted during the formal close meeting.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.6.4",
              "n": "Payment Processing Success Rate and Revenue Leakage",
              "c": "critical",
              "f": "intermediate",
              "v": "Monitors payment gateway success, decline, and error rates in real time — every declined payment is potentially lost revenue. Business teams see which payment methods, card types, and regions have the highest failure rates, while engineering teams get immediate alerts on gateway outages. A 1% improvement in payment success rate directly increases revenue.",
              "t": "HEC (payment gateway logs)",
              "d": "`index=business` `sourcetype=\"payment_gateway\"` (transaction_id, amount, currency, payment_method, status, decline_reason, country)",
              "q": "index=business sourcetype=\"payment_gateway\" earliest=-24h\n| eval success=if(status=\"approved\", 1, 0)\n| eval declined=if(status=\"declined\", 1, 0)\n| eval errored=if(status=\"error\", 1, 0)\n| stats sum(success) as approved, sum(declined) as declined, sum(errored) as errors,\n        sum(eval(if(success=1,amount,0))) as approved_revenue,\n        sum(eval(if(declined=1,amount,0))) as declined_revenue,\n        count as total by payment_method\n| eval success_rate=round(100*approved/total, 2)\n| eval revenue_lost=declined_revenue\n| sort - revenue_lost\n| table payment_method, total, approved, declined, errors, success_rate, approved_revenue, revenue_lost",
              "m": "(1) Forward payment gateway events to Splunk via HEC; (2) include transaction details, decline codes, and amounts; (3) alert immediately when success rate drops below 95% — likely a gateway issue; (4) analyse decline reasons to identify recoverable declines (e.g., retry logic, alternative payment methods); (5) track by country for regional payment method optimisation.",
              "z": "Single value (overall success rate, revenue lost today), Line chart (success rate over time), Bar chart (decline reasons), Table (performance by payment method).",
              "kfp": "Buy-online-return-in-store workflows occasionally duplicate shopper intents across rails such that two webhook retries bear identical checkout fingerprints until middleware collapses omnichannel retries — refusal counters spike although alternate rails cleared moments later without PSP faults. Sports-and-ticketing surge windows localize issuer-coded declines to stadium-heavy BIN cohorts during flash on-sales — portfolio dashboards resemble outages unless geography facets isolate burst geography from sustained regressions. Cross-border travellers flagged transiently inside adaptive-acceleration experiments endure wider stepped-up authentication prompts — abandonment resembles issuer declines absent cohort annotations referencing experimentation banners. Gift-card-first split tenders emit refusal-success sandwiches where only stored-value rails refuse — blended ratios crater though primary Visa rails succeeded beside tangled tender sequencing. Merchant-category promotions overriding acquirer routing tables mid-sale temporarily mismatch issuer authorization profiles shared across BIN peers — sharply synchronous BIN cohort swings mimic gateway regressions despite steady latency telemetry.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC (payment gateway logs).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"payment_gateway\"` (transaction_id, amount, currency, payment_method, status, decline_reason, country).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Forward payment gateway events to Splunk via HEC; (2) include transaction details, decline codes, and amounts; (3) alert immediately when success rate drops below 95% — likely a gateway issue; (4) analyse decline reasons to identify recoverable declines (e.g., retry logic, alternative payment methods); (5) track by country for regional payment method optimisation.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"payment_gateway\" earliest=-24h\n| eval success=if(status=\"approved\", 1, 0)\n| eval declined=if(status=\"declined\", 1, 0)\n| eval errored=if(status=\"error\", 1, 0)\n| stats sum(success) as approved, sum(declined) as declined, sum(errored) as errors,\n        sum(eval(if(success=1,amount,0))) as approved_revenue,\n        sum(eval(if(declined=1,amount,0))) as declined_revenue,\n        count as total by payment_method\n| eval success_rate=round(100*approved/total, 2)\n| eval revenue_lost=declined_revenue\n| sort - revenue_lost\n| table payment_method, total, approved, declined, errors, success_rate, approved_revenue, revenue_lost\n```\n\nUnderstanding this SPL\n\n**Payment Processing Success Rate and Revenue Leakage** — Monitors payment gateway success, decline, and error rates in real time — every declined payment is potentially lost revenue. Business teams see which payment methods, card types, and regions have the highest failure rates, while engineering teams get immediate alerts on gateway outages. A 1% improvement in payment success rate directly increases revenue.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"payment_gateway\"` (transaction_id, amount, currency, payment_method, status, decline_reason, country). **App/TA** (typical add-on context): HEC (payment gateway logs). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: payment_gateway. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"payment_gateway\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **success** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **declined** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **errored** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by payment_method** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **success_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **revenue_lost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Payment Processing Success Rate and Revenue Leakage**): table payment_method, total, approved, declined, errors, success_rate, approved_revenue, revenue_lost\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (overall success rate, revenue lost today), Line chart (success rate over time), Bar chart (decline reasons), Table (performance by payment method).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch whether shopper card payments approve right when someone clicks pay—or stall on bank checks meant to stop stolen cards—so teams notice when honest buyers slip away quietly and can tune routing before checkout sheds revenue nobody counted as fraud.",
              "mtype": [
                "Performance",
                "Business"
              ],
              "ind": "E-Commerce, SaaS, Financial Services",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.6.5",
              "n": "Purchase Order Cycle Time and Maverick Spend",
              "c": "high",
              "f": "intermediate",
              "v": "Measures how long purchase requests take from submission to approval and flags orders placed outside preferred suppliers. We help procurement protect negotiated savings and shorten delays that stall projects.",
              "t": "Splunk DB Connect (Splunkbase 2686), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=business` `sourcetype=\"dbx:purchase_orders\"` (po_number, request_date, approved_date, supplier, preferred_flag, amount)",
              "q": "index=business sourcetype=\"dbx:purchase_orders\" earliest=-90d\n| eval req=strptime(request_date,\"%Y-%m-%d\")\n| eval appr=strptime(approved_date,\"%Y-%m-%d\")\n| eval cycle_days=if(isnotnull(appr), round((appr-req)/86400,0), null())\n| eval maverick=if(preferred_flag=\"no\",1,0)\n| stats avg(cycle_days) as avg_cycle_days, perc95(cycle_days) as p95_cycle_days,\n        sum(amount) as total_spend, sum(eval(if(maverick=1,amount,0))) as maverick_spend,\n        count as po_count by supplier\n| eval maverick_pct=if(total_spend>0, round(100*maverick_spend/total_spend,1), null())\n| sort - maverick_spend\n| table supplier, po_count, avg_cycle_days, p95_cycle_days, total_spend, maverick_spend, maverick_pct",
              "m": "(1) Replicate purchase order lifecycle fields from enterprise resource planning or procurement workflow into Splunk using DB Connect; (2) maintain a preferred supplier flag on each vendor record; (3) alert procurement when maverick spend exceeds policy for two consecutive weeks.",
              "z": "Bar chart (maverick spend by supplier), Table (cycle time and maverick detail), Single value (overall maverick %), Line chart (average cycle days trend).",
              "kfp": "Standing blanket schedules intentionally stretch header lifetimes across fiscal quarters so cycle timelines balloon despite disciplined releases baked into procurement calendars — interpreting header-age alone misses planned blanket mechanics until planners annotate blanket identifiers carried alongside Splunk lookups. Punch-out storefront carts inherit contracted pricing negotiated months earlier yet snapshot approvals appear instantaneous inside SaaS timelines because sourcing diligence preceded the punch-out session — Splunk clocks compress versus negotiation reality whenever upstream catalogs absorbed negotiations quietly.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:purchase_orders\"` (po_number, request_date, approved_date, supplier, preferred_flag, amount).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Replicate purchase order lifecycle fields from enterprise resource planning or procurement workflow into Splunk using DB Connect; (2) maintain a preferred supplier flag on each vendor record; (3) alert procurement when maverick spend exceeds policy for two consecutive weeks.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:purchase_orders\" earliest=-90d\n| eval req=strptime(request_date,\"%Y-%m-%d\")\n| eval appr=strptime(approved_date,\"%Y-%m-%d\")\n| eval cycle_days=if(isnotnull(appr), round((appr-req)/86400,0), null())\n| eval maverick=if(preferred_flag=\"no\",1,0)\n| stats avg(cycle_days) as avg_cycle_days, perc95(cycle_days) as p95_cycle_days,\n        sum(amount) as total_spend, sum(eval(if(maverick=1,amount,0))) as maverick_spend,\n        count as po_count by supplier\n| eval maverick_pct=if(total_spend>0, round(100*maverick_spend/total_spend,1), null())\n| sort - maverick_spend\n| table supplier, po_count, avg_cycle_days, p95_cycle_days, total_spend, maverick_spend, maverick_pct\n```\n\nUnderstanding this SPL\n\n**Purchase Order Cycle Time and Maverick Spend** — Measures how long purchase requests take from submission to approval and flags orders placed outside preferred suppliers. We help procurement protect negotiated savings and shorten delays that stall projects.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:purchase_orders\"` (po_number, request_date, approved_date, supplier, preferred_flag, amount). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:purchase_orders. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:purchase_orders\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **req** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **appr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cycle_days** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **maverick** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by supplier** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **maverick_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Purchase Order Cycle Time and Maverick Spend**): table supplier, po_count, avg_cycle_days, p95_cycle_days, total_spend, maverick_spend, maverick_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (maverick spend by supplier), Table (cycle time and maverick detail), Single value (overall maverick %), Line chart (average cycle days trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We clock how quickly buying requests win approvals and whether purchases wander outside negotiated supplier paths, so leadership notices stalled buys and leaked discounts before quarterly closes rather than relying on gut feel alone.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.6.6",
              "n": "Intercompany Reconciliation Exception Queue",
              "c": "high",
              "f": "advanced",
              "v": "Surfaces journal entries that fail intercompany matching rules so the close team clears exceptions before statutory reporting deadlines. We help finance reduce manual spreadsheet chasing during month-end.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:ic_reconciliation\"` (ic_pair_id, entity_a, entity_b, amount_a, amount_b, status, posting_date)",
              "q": "index=business sourcetype=\"dbx:ic_reconciliation\" status!=\"matched\" earliest=-60d\n| eval variance=abs(amount_a-amount_b)\n| eval severity=case(variance>10000,\"HIGH\", variance>1000,\"MEDIUM\", 1=1,\"LOW\")\n| stats count as open_items, sum(variance) as total_variance, max(variance) as max_variance by entity_a, entity_b, severity\n| sort - total_variance\n| table entity_a, entity_b, severity, open_items, total_variance, max_variance",
              "m": "(1) Export unmatched intercompany lines nightly from the general ledger or reconciliation tool via DB Connect; (2) classify severity bands with your controller; (3) assign a daily saved search that emails the shared services inbox when high severity open items exceed the agreed cap.",
              "z": "Table (exceptions by entity pair), Bar chart (open items by severity), Single value (total unmatched variance), Pie chart (severity mix).",
              "kfp": "Dividend corridors routinely split gross-leg postings plus withheld fractions on payer ledgers versus net-leg postings on receivers until treaty vouchers finalize—Splunk variance climbs despite OECD-aligned schedules once treasury attaches withholding proofs beside statutory filings rather than correcting ERP timestamps Splunk snapshots hourly. Midnight consolidation batches sometimes publish SAP Universal Journal rows between extractor polls so Splunk thinks elimination gaps persist although SAP FS-PCE dashboards already refreshed thirty minutes later once Basis reruns nightly BW chains controllers orchestrate deliberately.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:ic_reconciliation\"` (ic_pair_id, entity_a, entity_b, amount_a, amount_b, status, posting_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Export unmatched intercompany lines nightly from the general ledger or reconciliation tool via DB Connect; (2) classify severity bands with your controller; (3) assign a daily saved search that emails the shared services inbox when high severity open items exceed the agreed cap.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:ic_reconciliation\" status!=\"matched\" earliest=-60d\n| eval variance=abs(amount_a-amount_b)\n| eval severity=case(variance>10000,\"HIGH\", variance>1000,\"MEDIUM\", 1=1,\"LOW\")\n| stats count as open_items, sum(variance) as total_variance, max(variance) as max_variance by entity_a, entity_b, severity\n| sort - total_variance\n| table entity_a, entity_b, severity, open_items, total_variance, max_variance\n```\n\nUnderstanding this SPL\n\n**Intercompany Reconciliation Exception Queue** — Surfaces journal entries that fail intercompany matching rules so the close team clears exceptions before statutory reporting deadlines. We help finance reduce manual spreadsheet chasing during month-end.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:ic_reconciliation\"` (ic_pair_id, entity_a, entity_b, amount_a, amount_b, status, posting_date). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:ic_reconciliation. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:ic_reconciliation\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **variance** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **severity** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by entity_a, entity_b, severity** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Intercompany Reconciliation Exception Queue**): table entity_a, entity_b, severity, open_items, total_variance, max_variance\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Table (exceptions by entity pair), Bar chart (open items by severity), Single value (total unmatched variance), Pie chart (severity mix).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "When two divisions mirror each other's entries across borders, clocks and currencies rarely agree at once; we line up what still disagrees after translation rules run so accountants finish pairing before filings slip rather than chasing ghosts days later.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 6,
            "none": 0
          }
        },
        {
          "i": "23.7",
          "n": "Customer Support & Service Excellence",
          "u": [
            {
              "i": "23.7.1",
              "n": "Support Ticket Volume and Resolution SLA Dashboard",
              "c": "high",
              "f": "beginner",
              "v": "Shows support ticket volume, backlog, and SLA compliance in real time — how many tickets are open, how fast they're being resolved, and whether the team is meeting response and resolution targets. Support leaders see whether they need to add staff, redistribute workload, or investigate a spike in a specific issue category.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:incident\"` (number, opened_at, closed_at, state, priority, assignment_group, category, short_description)",
              "q": "index=itsm sourcetype=\"snow:incident\" earliest=-30d\n| eval opened_epoch=strptime(opened_at, \"%Y-%m-%d %H:%M:%S\")\n| eval closed_epoch=if(isnotnull(closed_at), strptime(closed_at, \"%Y-%m-%d %H:%M:%S\"), null())\n| eval resolution_hours=if(isnotnull(closed_epoch), round((closed_epoch-opened_epoch)/3600, 1), null())\n| eval sla_target_hours=case(priority=\"1\",4, priority=\"2\",8, priority=\"3\",24, 1=1,72)\n| eval sla_met=if(isnotnull(resolution_hours) AND resolution_hours <= sla_target_hours, 1, 0)\n| eval is_open=if(isnull(closed_at) OR state IN (\"New\",\"In Progress\",\"On Hold\"), 1, 0)\n| stats count as total_tickets,\n        sum(is_open) as open_tickets,\n        sum(sla_met) as within_sla,\n        avg(resolution_hours) as avg_resolution_h,\n        median(resolution_hours) as median_resolution_h by assignment_group\n| eval sla_pct=round(100*within_sla/(total_tickets-open_tickets), 1)\n| sort - open_tickets\n| table assignment_group, total_tickets, open_tickets, avg_resolution_h, median_resolution_h, sla_pct",
              "m": "(1) Configure ServiceNow TA for incident ingestion; (2) map your SLA targets by priority; (3) schedule every 4 hours for team leads; (4) alert when any team's backlog exceeds capacity threshold; (5) add first-response time tracking alongside resolution time.",
              "z": "Single value (open backlog, SLA %), Bar chart (volume by team), Line chart (daily ticket trend), Table (team performance).",
              "kfp": "A case moved from P4 to P1 mid-flight rebinds a new **task_sla** row and the breach flag on the earlier SLA definition can look like a false miss until you read **stage** and **start_time** on the new row. **Account** and **company** M2M fields sometimes fan out to multiple `account` values per ticket when territories overlap; **stats** without `mv` handling then inflates or splits row counts. When an incident reopens from Resolved to In Progress, the SLA engine can emit a fresh clock while Splunk still holds the first resolved snapshot—breach and compliance figures diverge for one poll until the **sys_updated_on** watermark catches the reopen. Parent/child CSM work where **child** rows still show Resolved will hide real backlog if you filter **parent** only; conversely, counting both parent and children doubles volume—your filter must match how your operating model prices SLAs. Finally, a **cmn_schedule** change (holidays, DST, or a new 24×7 **calendar_id**) rewrites **business_duration** for tickets straddling the change even though the wall-clock **calendar_duration** is unchanged.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:incident\"` (number, opened_at, closed_at, state, priority, assignment_group, category, short_description).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Configure ServiceNow TA for incident ingestion; (2) map your SLA targets by priority; (3) schedule every 4 hours for team leads; (4) alert when any team's backlog exceeds capacity threshold; (5) add first-response time tracking alongside resolution time.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" earliest=-30d\n| eval opened_epoch=strptime(opened_at, \"%Y-%m-%d %H:%M:%S\")\n| eval closed_epoch=if(isnotnull(closed_at), strptime(closed_at, \"%Y-%m-%d %H:%M:%S\"), null())\n| eval resolution_hours=if(isnotnull(closed_epoch), round((closed_epoch-opened_epoch)/3600, 1), null())\n| eval sla_target_hours=case(priority=\"1\",4, priority=\"2\",8, priority=\"3\",24, 1=1,72)\n| eval sla_met=if(isnotnull(resolution_hours) AND resolution_hours <= sla_target_hours, 1, 0)\n| eval is_open=if(isnull(closed_at) OR state IN (\"New\",\"In Progress\",\"On Hold\"), 1, 0)\n| stats count as total_tickets,\n        sum(is_open) as open_tickets,\n        sum(sla_met) as within_sla,\n        avg(resolution_hours) as avg_resolution_h,\n        median(resolution_hours) as median_resolution_h by assignment_group\n| eval sla_pct=round(100*within_sla/(total_tickets-open_tickets), 1)\n| sort - open_tickets\n| table assignment_group, total_tickets, open_tickets, avg_resolution_h, median_resolution_h, sla_pct\n```\n\nUnderstanding this SPL\n\n**Support Ticket Volume and Resolution SLA Dashboard** — Shows support ticket volume, backlog, and SLA compliance in real time — how many tickets are open, how fast they're being resolved, and whether the team is meeting response and resolution targets. Support leaders see whether they need to add staff, redistribute workload, or investigate a spike in a specific issue category.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (number, opened_at, closed_at, state, priority, assignment_group, category, short_description). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **closed_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **resolution_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_target_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_met** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **is_open** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by assignment_group** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **sla_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Support Ticket Volume and Resolution SLA Dashboard**): table assignment_group, total_tickets, open_tickets, avg_resolution_h, median_resolution_h, sla_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (open backlog, SLA %), Bar chart (volume by team), Line chart (daily ticket trend), Table (team performance).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We time how well support keeps its promises for each important customer—like checking whether every pizza arrived while it was still hot—so leaders can fix staffing or contracts when too many “late” outcomes pile up, before renewals and penalties become the story.",
              "mtype": [
                "Performance",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.7.2",
              "n": "First Contact Resolution Rate and Escalation Patterns",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures how often customer issues are resolved on first contact without escalation or transfer — the gold standard of support efficiency. High FCR means happy customers and lower support costs. Tracking escalation patterns reveals training gaps (topics that always escalate), staffing issues (times when escalation spikes), and product problems (features that generate repeated escalations).",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928), HEC",
              "d": "`index=itsm` `sourcetype=\"snow:incident\"`, `index=business` `sourcetype=\"support_interaction\"` (interaction_id, ticket_id, channel, agent, transferred, escalated)",
              "q": "index=itsm sourcetype=\"snow:incident\" state=\"Closed\" earliest=-30d\n| eval reassignment_count=if(isnotnull(reassignment_count), reassignment_count, 0)\n| eval fcr=if(reassignment_count=0, 1, 0)\n| eval escalated=if(reassignment_count >= 2 OR match(lower(work_notes),\"(?i)escalat\"), 1, 0)\n| stats count as resolved_tickets, sum(fcr) as first_contact, sum(escalated) as escalated_tickets by category\n| eval fcr_rate=round(100*first_contact/resolved_tickets, 1)\n| eval escalation_rate=round(100*escalated_tickets/resolved_tickets, 1)\n| sort - escalation_rate\n| table category, resolved_tickets, first_contact, fcr_rate, escalated_tickets, escalation_rate",
              "m": "(1) Track reassignment count and escalation events in your ticketing system; (2) define FCR criteria (resolved by original assignee, no reopens within 48h); (3) schedule weekly; (4) identify top escalation categories for targeted training; (5) compare FCR by channel (phone vs chat vs email) to understand channel effectiveness.",
              "z": "Bar chart (FCR rate by category), Table (top escalation categories), Single value (overall FCR rate), Line chart (FCR trend over time).",
              "kfp": "Governance tweaks re-label **Resolved** versus **Closed Complete** overnight, swinging FCR ten to twenty points when nothing about agents changed—version-control state lists. Flow Designer or **business rules** that auto-route L1→L2 on insert bump **`reassignment_count`** without a human transfer, mimicking collapsed FCR after a routing refresh. New lines of business shift **`category`**/**`assignment_group`** taxonomies for a quarter, distorting denominators mid-transition. **P1**–**P2** bridge work legitimately reassigns—segment severities instead of blending into one headline. **SLA pause** windows shrink **business_duration** while wall **handle-time** stays long, so FCR beside **`avg_aht`** looks bimodal until pause policy is explicit. Imported tickets may lack **`reassignment_count`**/**`reopen_count`**, inflating FCR when nulls read as zero hops. Regex on journals can still snag template boilerplate—sample quarterly. Reopens filed as **new** incidents (same **`caller_id`**, same CI) bypass **`reopen_count`**—add duplicate-detection heuristics when CMDB quality permits.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928), HEC.\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:incident\"`, `index=business` `sourcetype=\"support_interaction\"` (interaction_id, ticket_id, channel, agent, transferred, escalated).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Track reassignment count and escalation events in your ticketing system; (2) define FCR criteria (resolved by original assignee, no reopens within 48h); (3) schedule weekly; (4) identify top escalation categories for targeted training; (5) compare FCR by channel (phone vs chat vs email) to understand channel effectiveness.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" state=\"Closed\" earliest=-30d\n| eval reassignment_count=if(isnotnull(reassignment_count), reassignment_count, 0)\n| eval fcr=if(reassignment_count=0, 1, 0)\n| eval escalated=if(reassignment_count >= 2 OR match(lower(work_notes),\"(?i)escalat\"), 1, 0)\n| stats count as resolved_tickets, sum(fcr) as first_contact, sum(escalated) as escalated_tickets by category\n| eval fcr_rate=round(100*first_contact/resolved_tickets, 1)\n| eval escalation_rate=round(100*escalated_tickets/resolved_tickets, 1)\n| sort - escalation_rate\n| table category, resolved_tickets, first_contact, fcr_rate, escalated_tickets, escalation_rate\n```\n\nUnderstanding this SPL\n\n**First Contact Resolution Rate and Escalation Patterns** — Measures how often customer issues are resolved on first contact without escalation or transfer — the gold standard of support efficiency. High FCR means happy customers and lower support costs. Tracking escalation patterns reveals training gaps (topics that always escalate), staffing issues (times when escalation spikes), and product problems (features that generate repeated escalations).\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"`, `index=business` `sourcetype=\"support_interaction\"` (interaction_id, ticket_id, channel, agent, transferred, escalated). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928), HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **reassignment_count** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **fcr** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **escalated** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by category** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **fcr_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **escalation_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **First Contact Resolution Rate and Escalation Patterns**): table category, resolved_tickets, first_contact, fcr_rate, escalated_tickets, escalation_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (FCR rate by category), Table (top escalation categories), Single value (overall FCR rate), Line chart (FCR trend over time).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We measure how often a case gets solved on the first try without handing it to another queue, so managers can fix training and routing before people have to call back.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.7.3",
              "n": "Customer Effort Score and Support Channel Effectiveness",
              "c": "medium",
              "f": "intermediate",
              "v": "Measures how much effort customers expend to get their issues resolved — combining post-interaction survey scores with operational metrics like transfers, repeat contacts, and channel switches. Support leaders see which channels and issue types create the most friction, guiding investments in self-service, chatbots, and process simplification.",
              "t": "HEC, Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=business` `sourcetype=\"support_survey\"` (ticket_id, ces_score, channel, issue_type), `index=itsm` (ticket lifecycle data)",
              "q": "index=business sourcetype=\"support_survey\" earliest=-90d\n| stats avg(ces_score) as avg_ces, count as surveys by channel, issue_type\n| join type=left channel issue_type [\n    search index=itsm sourcetype=\"snow:incident\" state=\"Closed\" earliest=-90d\n    | stats avg(reassignment_count) as avg_transfers, avg(eval(round((strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")-strptime(opened_at,\"%Y-%m-%d %H:%M:%S\"))/3600,1))) as avg_resolution_h by channel, category\n    | rename category as issue_type\n]\n| eval effort_index=round(avg_ces + avg_transfers*0.5 + if(avg_resolution_h>24,1,0), 1)\n| sort - effort_index\n| table channel, issue_type, surveys, avg_ces, avg_transfers, avg_resolution_h, effort_index",
              "m": "(1) Ingest post-interaction CES surveys via HEC; (2) use a 1-7 scale (lower = less effort = better); (3) correlate with operational data from ticketing system; (4) schedule monthly; (5) identify high-effort combinations (e.g., \"Billing + Phone = high effort\") for process redesign; (6) track CES trend after implementing improvements.",
              "z": "Heatmap (channel × issue type), Bar chart (CES by channel), Table (highest effort combinations), Line chart (CES trend).",
              "kfp": "**Polite-email inflation** — asynchronous channels reward softer wording; **`avg(ces_normalized)`** can paint email rosier than voice even when **`reassignment_count`** proves the opposite workload story—compare distribution tails, not means alone. **Sparse-response bias** — survey completion below ten percent overweight vocal detractors or promoters; **`stats dc(case_key_norm)`** denominators shrink without warning when reminders stop sending. **Timing pivots** — surveys dispatched seconds after closure versus twenty-four hours later swing CES independent of handling quality when incidents reopen silently overnight. **Seasonality noise** — holiday travel or fiscal-year closes spike effort independently of coaching gaps; overlay **`cases`** volume before blaming agents. **Locale softness** — certain regions normalize toward top-box scores; **`repeat_within_7d`** still exposes workload truth when psychology lies. **Translation drift** — multilingual CES prompts subtly alter Likert spacing quarter to quarter. **Join hygiene ghosts** — mismatched **`INC`** prefixes strand surveys without **`snow:incident`** enrichment yet **`effort_index`** still ranks noise—monitor **`pct_dropped`** KPIs. **Product-incident masking** — widespread outages inflate **`handle_time_minutes`** everywhere; slice by **`cmdb_ci`** clusters before naming squads. **Chatbot containment optics** — bot-first journeys alter CES sampling frames unless segmented separately from assisted queues.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC, Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"support_survey\"` (ticket_id, ces_score, channel, issue_type), `index=itsm` (ticket lifecycle data).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest post-interaction CES surveys via HEC; (2) use a 1-7 scale (lower = less effort = better); (3) correlate with operational data from ticketing system; (4) schedule monthly; (5) identify high-effort combinations (e.g., \"Billing + Phone = high effort\") for process redesign; (6) track CES trend after implementing improvements.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"support_survey\" earliest=-90d\n| stats avg(ces_score) as avg_ces, count as surveys by channel, issue_type\n| join type=left channel issue_type [\n    search index=itsm sourcetype=\"snow:incident\" state=\"Closed\" earliest=-90d\n    | stats avg(reassignment_count) as avg_transfers, avg(eval(round((strptime(closed_at,\"%Y-%m-%d %H:%M:%S\")-strptime(opened_at,\"%Y-%m-%d %H:%M:%S\"))/3600,1))) as avg_resolution_h by channel, category\n    | rename category as issue_type\n]\n| eval effort_index=round(avg_ces + avg_transfers*0.5 + if(avg_resolution_h>24,1,0), 1)\n| sort - effort_index\n| table channel, issue_type, surveys, avg_ces, avg_transfers, avg_resolution_h, effort_index\n```\n\nUnderstanding this SPL\n\n**Customer Effort Score and Support Channel Effectiveness** — Measures how much effort customers expend to get their issues resolved — combining post-interaction survey scores with operational metrics like transfers, repeat contacts, and channel switches. Support leaders see which channels and issue types create the most friction, guiding investments in self-service, chatbots, and process simplification.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"support_survey\"` (ticket_id, ces_score, channel, issue_type), `index=itsm` (ticket lifecycle data). **App/TA** (typical add-on context): HEC, Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: support_survey. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"support_survey\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by channel, issue_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **effort_index** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Customer Effort Score and Support Channel Effectiveness**): table channel, issue_type, surveys, avg_ces, avg_transfers, avg_resolution_h, effort_index\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (channel × issue type), Bar chart (CES by channel), Table (highest effort combinations), Line chart (CES trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We blend quick survey answers about how hard getting help felt with ticket facts like transfers and repeats, so leaders see which kinds of problems through which channels annoy customers most.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.7.4",
              "n": "Backlog Age and Breach Risk Forecast",
              "c": "high",
              "f": "intermediate",
              "v": "Highlights tickets that have been open longer than your policy allows before they breach service promises. We help capacity planners see how much work is aging so they can add shifts or shift topics before customers feel ignored.",
              "t": "Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=itsm` `sourcetype=\"snow:incident\"` (number, opened_at, closed_at, priority, assignment_group)",
              "q": "index=itsm sourcetype=\"snow:incident\" state!=\"Closed\" earliest=-30d\n| eval opened_epoch=strptime(opened_at, \"%Y-%m-%d %H:%M:%S\")\n| eval age_hours=round((now()-opened_epoch)/3600,1)\n| eval sla_target_hours=case(priority=\"1\",4, priority=\"2\",8, priority=\"3\",24, 1=1,72)\n| eval pct_of_sla=round(100*age_hours/sla_target_hours,0)\n| eval breach_risk=case(pct_of_sla>=100,\"BREACHED\", pct_of_sla>=80,\"AT RISK\", 1=1,\"OK\")\n| stats count as tickets, avg(age_hours) as avg_age_h, max(age_hours) as max_age_h by assignment_group, breach_risk\n| sort assignment_group, breach_risk\n| table assignment_group, breach_risk, tickets, avg_age_h, max_age_h",
              "m": "(1) Ingest open incidents with accurate opened timestamps from ServiceNow; (2) align `sla_target_hours` with your contractual response and resolve clocks; (3) schedule every two hours and route “AT RISK” queues to team leads before breaches hit customer reports.",
              "z": "Stacked bar (tickets by risk band and team), Table (oldest tickets), Single value (count at risk), Line chart (backlog age trend).",
              "kfp": "Weekend-and-holiday drift: Splunk wall-clock pct_of_sla can look crisis-level while ServiceNow task_sla.business_time_left still shows comfortable room because nights and bank holidays freeze the engine clock your raw age math ignores—reconcile with snow:task_sla before reassigning squads Friday evening. On-hold pause blind spot: awaiting-caller vendor hold states stop the contractual clock in Snow but Splunk bh_age_hours keeps climbing, so breach_risk reads AT_RISK while the related list shows stage paused—add hold reason fields or child wait tables when triage escalations depend on fairness. Same ticket reopened after “fix” loops: multiple resolve cycles reset task_sla rows; Splunk opened_at spans the full multi-month story so forecasted breach_eta overstates urgency relative to the active clock—exclude chronic accounts or parent splitters per policy. Handoff-heavy groups: backlog age tied to current assignment_group misallocates blame when ninety percent of elapsed time sat with a predecessor queue—pair with reassignment histograms before capacity funding moves. Planned long-run P4 roadmap items parked for two quarters rarely breach even with high wall hours—filter category u_type for enhancement versus break-fix so the watch list does not shame product planning tickets. Release weekend surges that clear in forty-eight hours spike row volume without sustained breach velocity—alert on consecutive poll increases, not single batch imports. Parent incident with unfinished child tasks: closing subtasks does not retire the parent sys_id Splunk ages; breach_risk stays red on the umbrella while children look green unless you model parent filters for MIM exclusions already in place.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=itsm` `sourcetype=\"snow:incident\"` (number, opened_at, closed_at, priority, assignment_group).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ingest open incidents with accurate opened timestamps from ServiceNow; (2) align `sla_target_hours` with your contractual response and resolve clocks; (3) schedule every two hours and route “AT RISK” queues to team leads before breaches hit customer reports.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=itsm sourcetype=\"snow:incident\" state!=\"Closed\" earliest=-30d\n| eval opened_epoch=strptime(opened_at, \"%Y-%m-%d %H:%M:%S\")\n| eval age_hours=round((now()-opened_epoch)/3600,1)\n| eval sla_target_hours=case(priority=\"1\",4, priority=\"2\",8, priority=\"3\",24, 1=1,72)\n| eval pct_of_sla=round(100*age_hours/sla_target_hours,0)\n| eval breach_risk=case(pct_of_sla>=100,\"BREACHED\", pct_of_sla>=80,\"AT RISK\", 1=1,\"OK\")\n| stats count as tickets, avg(age_hours) as avg_age_h, max(age_hours) as max_age_h by assignment_group, breach_risk\n| sort assignment_group, breach_risk\n| table assignment_group, breach_risk, tickets, avg_age_h, max_age_h\n```\n\nUnderstanding this SPL\n\n**Backlog Age and Breach Risk Forecast** — Highlights tickets that have been open longer than your policy allows before they breach service promises. We help capacity planners see how much work is aging so they can add shifts or shift topics before customers feel ignored.\n\nDocumented **Data sources**: `index=itsm` `sourcetype=\"snow:incident\"` (number, opened_at, closed_at, priority, assignment_group). **App/TA** (typical add-on context): Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: itsm; **sourcetype**: snow:incident. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=itsm, sourcetype=\"snow:incident\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **opened_epoch** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **age_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **sla_target_hours** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **pct_of_sla** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **breach_risk** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by assignment_group, breach_risk** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Backlog Age and Breach Risk Forecast**): table assignment_group, breach_risk, tickets, avg_age_h, max_age_h\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (tickets by risk band and team), Table (oldest tickets), Single value (count at risk), Line chart (backlog age trend).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We look at open tickets and estimate which ones will cross the promised finish window next, using how fast each team usually closes similar work. That way managers can move people onto the hottest queues before customers feel ignored or weekly leadership decks show another breach spike.",
              "mtype": [
                "Performance",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.7.5",
              "n": "Agent Occupancy and Schedule Adherence",
              "c": "medium",
              "f": "intermediate",
              "v": "Relates logged-in and productive handle time to published schedules so workforce leaders see understaffing before service levels collapse. We help you balance labour cost with customer wait times using facts instead of anecdotal busy signals.",
              "t": "HEC (contact centre platform), Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"acd:agent_state\"` (agent_id, state, duration_sec, queue), `agent_schedule.csv` (agent_id, scheduled_seconds, work_date)",
              "q": "index=business sourcetype=\"acd:agent_state\" earliest=-7d\n| eval work_states=if(state IN (\"On_Call\",\"After_Call_Work\",\"Busy\"),1,0)\n| eval work_date=strftime(_time,\"%Y-%m-%d\")\n| stats sum(eval(if(work_states=1,duration_sec,0))) as productive_sec,\n        sum(duration_sec) as logged_sec by agent_id, work_date\n| lookup agent_schedule.csv agent_id work_date OUTPUT scheduled_seconds\n| fillnull value=28800 scheduled_seconds\n| eval occupancy_pct=if(logged_sec>0, round(100*productive_sec/logged_sec,1), null())\n| eval adherence_pct=if(scheduled_seconds>0, round(100*logged_sec/scheduled_seconds,1), null())\n| stats avg(occupancy_pct) as avg_occupancy, avg(adherence_pct) as avg_adherence, dc(agent_id) as agents\n| table agents, avg_occupancy, avg_adherence",
              "m": "(1) Stream agent state changes from your automatic call distributor into Splunk using HEC with consistent state names; (2) publish `agent_schedule.csv` with per-agent scheduled seconds per work date from workforce management; (3) review weekly with operations and adjust forecasts when adherence drifts more than five points from target.",
              "z": "Bar chart (occupancy by agent), Table (adherence exceptions), Single value (team average occupancy), Heatmap (hour-of-day occupancy).",
              "kfp": "Blended outbound dialing lifts **`busy`** dwell without inbound **`lambda_per_hour`**, so **`occupancy_pct`** climbs while Erlang **`traffic_erlangs`** stays flat until you segment dialer buckets separately. Omnichannel concurrency stacks voice plus parallel chats — naive **`busy_like_s`** sums can exceed rostered minute ceilings unless **`sess_uid`** splits concurrent contracts per modality. Supervisor-documented coaching logged only inside Learning LMS leaves **`system_unavailable`** in CCaaS while WFM marks **`scheduled_customer_ready`**, shaving adherence though productivity stayed positive. Carrier trunk seizures strand reps **`Available`** without **`busy`** ticks — occupancy reads artificially low despite callers receiving busy-tone overflow. Precision workforce snapshots arriving ninety minutes late mis-align **`hour_bucket`** joins against streaming CCaaS slices — adherence crosses appear until **`scheduled_customer_seconds`** timestamps reconcile ingestion offsets.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC (contact centre platform), Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"acd:agent_state\"` (agent_id, state, duration_sec, queue), `agent_schedule.csv` (agent_id, scheduled_seconds, work_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Stream agent state changes from your automatic call distributor into Splunk using HEC with consistent state names; (2) publish `agent_schedule.csv` with per-agent scheduled seconds per work date from workforce management; (3) review weekly with operations and adjust forecasts when adherence drifts more than five points from target.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"acd:agent_state\" earliest=-7d\n| eval work_states=if(state IN (\"On_Call\",\"After_Call_Work\",\"Busy\"),1,0)\n| eval work_date=strftime(_time,\"%Y-%m-%d\")\n| stats sum(eval(if(work_states=1,duration_sec,0))) as productive_sec,\n        sum(duration_sec) as logged_sec by agent_id, work_date\n| lookup agent_schedule.csv agent_id work_date OUTPUT scheduled_seconds\n| fillnull value=28800 scheduled_seconds\n| eval occupancy_pct=if(logged_sec>0, round(100*productive_sec/logged_sec,1), null())\n| eval adherence_pct=if(scheduled_seconds>0, round(100*logged_sec/scheduled_seconds,1), null())\n| stats avg(occupancy_pct) as avg_occupancy, avg(adherence_pct) as avg_adherence, dc(agent_id) as agents\n| table agents, avg_occupancy, avg_adherence\n```\n\nUnderstanding this SPL\n\n**Agent Occupancy and Schedule Adherence** — Relates logged-in and productive handle time to published schedules so workforce leaders see understaffing before service levels collapse. We help you balance labour cost with customer wait times using facts instead of anecdotal busy signals.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"acd:agent_state\"` (agent_id, state, duration_sec, queue), `agent_schedule.csv` (agent_id, scheduled_seconds, work_date). **App/TA** (typical add-on context): HEC (contact centre platform), Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: acd:agent_state. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"acd:agent_state\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **work_states** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **work_date** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by agent_id, work_date** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **occupancy_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **adherence_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• Pipeline stage (see **Agent Occupancy and Schedule Adherence**): table agents, avg_occupancy, avg_adherence\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (occupancy by agent), Table (adherence exceptions), Single value (team average occupancy), Heatmap (hour-of-day occupancy).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We watch phone-queue staffing like air-traffic control: people should be ready when calls arrive, stick to the shift plan so buses stay predictable, yet avoid cramming folks past ninety-percent occupancy because burnout creeps in fast.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.7.6",
              "n": "Knowledge Base Deflection and Self-Service ROI",
              "c": "medium",
              "f": "beginner",
              "v": "Compares article views and successful searches to ticket volume so content owners see which topics actually reduce contacts. We help you justify investment in help articles by linking usage to fewer paid support minutes.",
              "t": "HEC (help centre analytics), Splunk Add-on for ServiceNow (Splunkbase 1928)",
              "d": "`index=web` `sourcetype=\"access_combined\"` (uri, status, clientip), `index=itsm` `sourcetype=\"snow:incident\"` (category, opened_at)",
              "q": "index=web sourcetype=\"access_combined\" status=200 uri=\"/help/*\" earliest=-30d\n| eval session=clientip.\"_\".useragent\n| stats dc(session) as help_sessions, count as article_views by uri\n| appendcols [\n    search index=itsm sourcetype=\"snow:incident\" earliest=-30d\n    | stats count as tickets_30d\n]\n| sort - article_views\n| head 20\n| table uri, help_sessions, article_views, tickets_30d",
              "m": "(1) Ensure help centre URLs are structured so `/help/` paths are easy to filter in web logs; (2) join or correlate weekly ticket counts by category with top article topics using a shared topic tag if available; (3) publish a monthly readout to the knowledge team listing articles with high views and categories where tickets remain high.",
              "z": "Bar chart (top articles by views), Table (URI performance), Line chart (help sessions vs tickets), Single value (help sessions per thousand tickets).",
              "kfp": "Coordinated **launch-week hype** lifts article reads sharply without comparable incident suppression—finance-grade ROI packs must **`utm_campaign`**-segment hype bursts before trusting numerator deltas. **Ranking-engine synonym pushes** erase zero-hit percentages days ahead of editorial publishes— **`snow:kb_knowledge`** **`published`** timestamps remain flat despite KPI optimism. Tablet seekers abandon lookups sooner than workstation browsers— **`device_class`** skew exaggerates abandonment independent of prose quality. Staff rehearsal cohorts browsing external-facing libraries inflate **`distinct_search_sessions`** versus paying customer denominators until SSO **`INTERNAL`** cohort lookups suppress rehearsal hashes. Dual-platform wins arise when **`Ada`** transcripts resolve chats yet **`case_resolved_via_kb`** toggles fire simultaneously— **`explicit`** tallies double-count until **`conversation_id`** collapse arrives. Supervisor warm-transfer timers occasionally stamp **`bot_resolved`** alongside **`opened_case`** rows— **`resolution_owner`** narratives reconcile transcripts nightly. Visitors chaining FAQ reads, **`Intercom`** chats, then **`INC`** inserts stack fractional attribution noise absent **`journey_hash`** stitching. **`Iframe`** embed traffic misses ticketing **`session_bridge`** correlation— **`Syndicated`** clicks appear orphaned versus portal **`caller_id`** joins. Overnight **search-cluster rebuilds** reorder snippets without **`article`** edits—CTR leaps mimic editorial victories erroneously. Training **`DEV`** **`kb_use`** telemetry forwarded through prod HEC tokens raises numerator counts ahead of GA—split panels by **`instance`** plus **`tenant`** to isolate sandbox drains.",
              "refs": "[Splunk Add-on for ServiceNow](https://splunkbase.splunk.com/app/1928), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC (help centre analytics), Splunk Add-on for ServiceNow (Splunkbase 1928).\n• Ensure the following data sources are available: `index=web` `sourcetype=\"access_combined\"` (uri, status, clientip), `index=itsm` `sourcetype=\"snow:incident\"` (category, opened_at).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Ensure help centre URLs are structured so `/help/` paths are easy to filter in web logs; (2) join or correlate weekly ticket counts by category with top article topics using a shared topic tag if available; (3) publish a monthly readout to the knowledge team listing articles with high views and categories where tickets remain high.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=web sourcetype=\"access_combined\" status=200 uri=\"/help/*\" earliest=-30d\n| eval session=clientip.\"_\".useragent\n| stats dc(session) as help_sessions, count as article_views by uri\n| appendcols [\n    search index=itsm sourcetype=\"snow:incident\" earliest=-30d\n    | stats count as tickets_30d\n]\n| sort - article_views\n| head 20\n| table uri, help_sessions, article_views, tickets_30d\n```\n\nUnderstanding this SPL\n\n**Knowledge Base Deflection and Self-Service ROI** — Compares article views and successful searches to ticket volume so content owners see which topics actually reduce contacts. We help you justify investment in help articles by linking usage to fewer paid support minutes.\n\nDocumented **Data sources**: `index=web` `sourcetype=\"access_combined\"` (uri, status, clientip), `index=itsm` `sourcetype=\"snow:incident\"` (category, opened_at). **App/TA** (typical add-on context): HEC (help centre analytics), Splunk Add-on for ServiceNow (Splunkbase 1928). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: web; **sourcetype**: access_combined. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=web, sourcetype=\"access_combined\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **session** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by uri** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Adds columns from a subsearch with `appendcols`.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Limits the number of rows with `head`.\n• Pipeline stage (see **Knowledge Base Deflection and Self-Service ROI**): table uri, help_sessions, article_views, tickets_30d\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. 
Consider visualizations: Bar chart (top articles by views), Table (URI performance), Line chart (help sessions vs tickets), Single value (help sessions per thousand tickets).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We estimate how often customers settle issues using published answers instead of asking agents for repeat steps; we translate that into dollars kept off pricey channels and flag lookups that dead-end so editors know what to write next.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "servicenow"
              ],
              "em": [],
              "sapp": [
                {
                  "name": "Splunk Add-on for ServiceNow",
                  "id": 1928,
                  "url": "https://splunkbase.splunk.com/app/1928",
                  "desc": "Collects ServiceNow incidents, events, change and CMDB data via REST APIs",
                  "screenshots": []
                }
              ],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 35.0,
          "qd": {
            "gold": 0,
            "silver": 0,
            "bronze": 6,
            "none": 0
          }
        },
        {
          "i": "23.8",
          "n": "Executive Dashboards & Business KPIs",
          "u": [
            {
              "i": "23.8.1",
              "n": "CEO/CFO Business Health Scorecard",
              "c": "critical",
              "f": "intermediate",
              "v": "A single-page dashboard showing the 8-12 metrics that matter most to the executive team — revenue vs target, customer acquisition cost, churn rate, NPS, employee headcount, operational margin, cash position, and key risk indicators. Replaces the monthly board pack with a live, always-current view that executives can check any time.",
              "t": "Splunk DB Connect (Splunkbase 2686), Splunk ITSI (Splunkbase 1841)",
              "d": "`index=business` (aggregated from revenue, customer, HR, and operational data), `executive_kpi_targets.csv`",
              "q": "| inputlookup executive_kpi_targets.csv\n| join type=left kpi_name [\n    | makeresults count=1\n    | eval kpi_data=mvappend(\"revenue_mtd:\".mvindex(split(\"placeholder\",\":\"),0), \"nps_current:0\", \"churn_rate:0\", \"headcount:0\")\n    | mvexpand kpi_data\n    | rex field=kpi_data \"^(?<kpi_name>[^:]+):(?<current_value>\\d+)\"\n]\n| append [\n    search index=business sourcetype=\"dbx:erp_orders\" order_status=\"booked\" earliest=-1mon@mon latest=now()\n    | stats sum(revenue) as current_value\n    | eval kpi_name=\"revenue_mtd\"\n]\n| append [\n    search index=business sourcetype=\"nps_survey\" earliest=-30d\n    | eval category=case(score>=9,\"Promoter\", score>=7,\"Passive\", 1=1,\"Detractor\")\n    | stats sum(eval(if(category=\"Promoter\",1,0))) as p, sum(eval(if(category=\"Detractor\",1,0))) as d, count as n\n    | eval current_value=round(100*(p-d)/n, 0)\n    | eval kpi_name=\"nps_current\"\n]\n| append [\n    search index=business sourcetype=\"dbx:hris_employees\" status=\"active\"\n    | stats dc(employee_id) as current_value\n    | eval kpi_name=\"headcount\"\n]\n| lookup executive_kpi_targets.csv kpi_name OUTPUT target_value, kpi_label, unit\n| eval vs_target=if(target_value>0, round(100*current_value/target_value,1), null())\n| eval health=case(vs_target>=95,\"GREEN\", vs_target>=80,\"AMBER\", 1=1,\"RED\")\n| table kpi_label, current_value, target_value, unit, vs_target, health\n| sort kpi_label",
              "m": "(1) Define 8-12 executive KPIs in `executive_kpi_targets.csv` with name, target, label, and unit; (2) build individual saved searches for each KPI sourcing from the relevant business data; (3) combine into a unified scorecard using append; (4) schedule refresh every 4 hours; (5) provide drill-down links from each KPI to the detailed dashboard; (6) distribute via scheduled PDF to the executive team.",
              "z": "Single value tiles (one per KPI with traffic light colors), Table (KPI vs target), Gauge or bullet charts for each metric.",
              "kfp": "Multi-entity **scorecard** **tiles** can disagree with the **quarterly** **board** **presentation** when one feed still posts under a **divested** **BUKRS** after **close** while **FP&A** already moved the **plan** to **continuing** **operations** only, which is a **scope** **change** not a **parser** **fault**. **Equity** **settlement** on **NRR** and **churn** can lag **one** **full** **close** when **Subscription** **Asset** **history** is **nightly** but **G/L** **revenue** is **real**-**time** **accrual**, so the **CFO** **briefing** and **CRM**-**sourced** **KPIs** will **diverge** for **days** on **SaaS** **cohorts** with **true**-**up** **journals**. **Interim** **headcount** from **Workday** **RaaS** may **include** **contingent** **workers** your **board** **pack** **excludes**; treat **reclassification** in **the** **HRIS** as a **rebaseline** **trigger**. **Earnings** **release** **weeks** that **restate** **prior**-**year** **comparables** make **Y/Y** **growth** **KPIs** look **like** **misses** until **the** **lookup** **and** **narrative** **footnote** both **adopt** **the** **restated** **baseline**.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), Splunk ITSI (Splunkbase 1841).\n• Ensure the following data sources are available: `index=business` (aggregated from revenue, customer, HR, and operational data), `executive_kpi_targets.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define 8-12 executive KPIs in `executive_kpi_targets.csv` with name, target, label, and unit; (2) build individual saved searches for each KPI sourcing from the relevant business data; (3) combine into a unified scorecard using append; (4) schedule refresh every 4 hours; (5) provide drill-down links from each KPI to the detailed dashboard; (6) distribute via scheduled PDF to the executive team.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup executive_kpi_targets.csv\n| join type=left kpi_name [\n    | makeresults count=1\n    | eval kpi_data=mvappend(\"revenue_mtd:\".mvindex(split(\"placeholder\",\":\"),0), \"nps_current:0\", \"churn_rate:0\", \"headcount:0\")\n    | mvexpand kpi_data\n    | rex field=kpi_data \"^(?<kpi_name>[^:]+):(?<current_value>\\d+)\"\n]\n| append [\n    search index=business sourcetype=\"dbx:erp_orders\" order_status=\"booked\" earliest=-1mon@mon latest=now()\n    | stats sum(revenue) as current_value\n    | eval kpi_name=\"revenue_mtd\"\n]\n| append [\n    search index=business sourcetype=\"nps_survey\" earliest=-30d\n    | eval category=case(score>=9,\"Promoter\", score>=7,\"Passive\", 1=1,\"Detractor\")\n    | stats sum(eval(if(category=\"Promoter\",1,0))) as p, sum(eval(if(category=\"Detractor\",1,0))) as d, count as n\n    | eval current_value=round(100*(p-d)/n, 0)\n    | eval kpi_name=\"nps_current\"\n]\n| append [\n    search index=business sourcetype=\"dbx:hris_employees\" status=\"active\"\n    | stats dc(employee_id) as current_value\n    | eval kpi_name=\"headcount\"\n]\n| lookup executive_kpi_targets.csv kpi_name OUTPUT target_value, kpi_label, unit\n| eval vs_target=if(target_value>0, round(100*current_value/target_value,1), null())\n| eval health=case(vs_target>=95,\"GREEN\", vs_target>=80,\"AMBER\", 1=1,\"RED\")\n| table kpi_label, current_value, target_value, unit, vs_target, health\n| sort kpi_label\n```\n\nUnderstanding this SPL\n\n**CEO/CFO Business Health Scorecard** — A single-page dashboard showing the 8-12 metrics that matter most to the executive team — revenue vs target, customer acquisition cost, churn rate, NPS, employee headcount, operational margin, cash position, and key risk indicators. Replaces the monthly board pack with a live, always-current view that executives can check any time.\n\nDocumented **Data sources**: `index=business` (aggregated from revenue, customer, HR, and operational data), `executive_kpi_targets.csv`. **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), Splunk ITSI (Splunkbase 1841). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Appends rows from a subsearch with `append`.\n• Appends rows from a subsearch with `append`.\n• Appends rows from a subsearch with `append`.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **vs_target** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **health** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **CEO/CFO Business Health Scorecard**): table kpi_label, current_value, target_value, unit, vs_target, health\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value tiles (one per KPI with traffic light colors), Table (KPI vs target), Gauge or bullet charts for each metric.",
              "script": "",
              "premium": "Splunk IT Service Intelligence (ITSI)",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We line up the few headline numbers leadership watches against the annual plan in one screen, so the team sees the same story the finance chief will tell the board before the printed book lands.",
              "wv": "crawl",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "_qs": 85,
              "_qt": "gold",
              "_qg": [
                "validation step should reference vendor UI for comparison",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "23.8.2",
              "n": "Operational Efficiency and Productivity Metrics",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks operational efficiency metrics that COOs care about — revenue per employee, cost per transaction, automation rate, throughput, and error rates across business processes. Shows whether the organisation is getting more productive over time and where manual processes are creating bottlenecks.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` (operational data), `index=app_events` (process automation logs)",
              "q": "index=business sourcetype=\"dbx:erp_orders\" order_status=\"booked\" earliest=-1mon@mon latest=@mon\n| stats sum(revenue) as monthly_revenue, count as transactions\n| appendcols [\n    search index=business sourcetype=\"dbx:hris_employees\" status=\"active\"\n    | stats dc(employee_id) as headcount\n]\n| appendcols [\n    search index=app_events sourcetype=\"process_automation\" earliest=-1mon@mon latest=@mon\n    | stats sum(eval(if(automated=\"yes\",1,0))) as automated, count as total_processes\n]\n| eval revenue_per_employee=round(monthly_revenue/headcount, 0)\n| eval cost_per_transaction=round(monthly_revenue*0.15/transactions, 2)\n| eval automation_rate=if(total_processes>0, round(100*automated/total_processes, 1), null())\n| table monthly_revenue, headcount, revenue_per_employee, transactions, cost_per_transaction, automated, total_processes, automation_rate",
              "m": "(1) Combine revenue, headcount, and process data into a unified view; (2) define \"cost per transaction\" calculation based on your cost structure; (3) track process automation by tagging automated vs manual processes in application logs; (4) schedule monthly for operations reviews; (5) trend over quarters to show efficiency improvements.",
              "z": "Single value (revenue per employee, automation rate), Bar chart (metrics over time), Gauge (automation rate vs target), Table (efficiency metrics).",
              "kfp": "Productivity optics swing when payroll-classified workers sit on unpaid leave yet remain coded active in HR snapshots—denominators inflate while revenue stays flat until staffing catches up. Badge-access feeds sometimes lag termination dates by weeks; pairing revenue-per-worker ratios with badge-headcount inadvertently celebrates phantom leverage until reconciliation closes the gap. Vendor-defined automation labels jump after reorganizing Orchestrator folders even when bots execute unchanged workloads—classification drift mimics breakthrough gains unless lookups gate taxonomy. Mergers fold acquired headcounts ahead of revenue synergies—mixing legacy enterprise resource planning scope makes efficiency ratios look hollow until finance aligns definitions across entities. Contractors billed outside HR worker IDs distort denominators when procurement staffing spikes during seasonal peaks—finance must declare contingent inclusion versus exclusion explicitly each quarter. Severance or restructuring accelerators posting through operations cost centers inflate operating-expense-per-transaction math though throughput stayed steady—finance-supplied flags belong in numerator exclusions. Idle clock minutes versus ERP-backed completions also misstate automation leverage when only vendor consoles are compared.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` (operational data), `index=app_events` (process automation logs).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Combine revenue, headcount, and process data into a unified view; (2) define \"cost per transaction\" calculation based on your cost structure; (3) track process automation by tagging automated vs manual processes in application logs; (4) schedule monthly for operations reviews; (5) trend over quarters to show efficiency improvements.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:erp_orders\" order_status=\"booked\" earliest=-1mon@mon latest=@mon\n| stats sum(revenue) as monthly_revenue, count as transactions\n| appendcols [\n    search index=business sourcetype=\"dbx:hris_employees\" status=\"active\"\n    | stats dc(employee_id) as headcount\n]\n| appendcols [\n    search index=app_events sourcetype=\"process_automation\" earliest=-1mon@mon latest=@mon\n    | stats sum(eval(if(automated=\"yes\",1,0))) as automated, count as total_processes\n]\n| eval revenue_per_employee=round(monthly_revenue/headcount, 0)\n| eval cost_per_transaction=round(monthly_revenue*0.15/transactions, 2)\n| eval automation_rate=if(total_processes>0, round(100*automated/total_processes, 1), null())\n| table monthly_revenue, headcount, revenue_per_employee, transactions, cost_per_transaction, automated, total_processes, automation_rate\n```\n\nUnderstanding this SPL\n\n**Operational Efficiency and Productivity Metrics** — Tracks operational efficiency metrics that COOs care about — revenue per employee, cost per transaction, automation rate, throughput, and error rates across business processes. Shows whether the organisation is getting more productive over time and where manual processes are creating bottlenecks.\n\nDocumented **Data sources**: `index=business` (operational data), `index=app_events` (process automation logs). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:erp_orders. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:erp_orders\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• Adds columns from a subsearch with `appendcols`.\n• Adds columns from a subsearch with `appendcols`.\n• `eval` defines or adjusts **revenue_per_employee** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cost_per_transaction** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **automation_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Operational Efficiency and Productivity Metrics**): table monthly_revenue, headcount, revenue_per_employee, transactions, cost_per_transaction, automated, total_processes, automation_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (revenue per employee, automation rate), Bar chart (metrics over time), Gauge (automation rate vs target), Table (efficiency metrics).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We blend sales going out the door, how many people are on staff, how much work runs without hand-holding, and how often processes fail—so leaders see if the team truly gets more done per person, not just busier days.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.8.3",
              "n": "Business Risk Heatmap and Early Warning System",
              "c": "high",
              "f": "advanced",
              "v": "Aggregates business risks from multiple domains — financial (revenue miss, cash flow), operational (SLA breaches, supply chain), customer (churn spike, NPS drop), people (attrition spike, hiring delays), and cyber (security incidents) — into a single risk heatmap. Executives see a consolidated risk picture rather than siloed department reports, enabling faster decision-making on risk mitigation.",
              "t": "Splunk Enterprise Security (Splunkbase 263), Splunk ITSI (Splunkbase 1841), Splunk DB Connect (Splunkbase 2686)",
              "d": "All business indexes (aggregated), `business_risk_thresholds.csv`",
              "q": "| inputlookup business_risk_thresholds.csv\n| append [\n    search index=business sourcetype=\"dbx:erp_orders\" earliest=-30d\n    | stats sum(revenue) as value | eval risk_area=\"Financial\", risk_metric=\"Monthly Revenue\", unit=\"currency\"\n]\n| append [\n    search index=business sourcetype=\"nps_survey\" earliest=-30d\n    | stats avg(score) as value | eval risk_area=\"Customer\", risk_metric=\"NPS Score\", unit=\"points\"\n]\n| append [\n    search index=business sourcetype=\"dbx:hris_employees\" earliest=-30d\n    | eval termed=if(isnotnull(termination_date),1,0) | stats sum(termed) as terminations, dc(employee_id) as headcount\n    | eval value=round(100*terminations/headcount*12, 1) | eval risk_area=\"People\", risk_metric=\"Annualised Attrition %\", unit=\"pct\"\n]\n| append [\n    search `notable` urgency IN (\"critical\",\"high\") earliest=-30d | stats count as value | eval risk_area=\"Cyber\", risk_metric=\"Critical Security Incidents\", unit=\"count\"\n]\n| lookup business_risk_thresholds.csv risk_metric OUTPUT green_threshold, amber_threshold\n| eval risk_level=case(\n    unit=\"currency\" AND value >= green_threshold, \"GREEN\",\n    unit=\"currency\" AND value >= amber_threshold, \"AMBER\",\n    unit IN (\"pct\",\"count\") AND value <= green_threshold, \"GREEN\",\n    unit IN (\"pct\",\"count\") AND value <= amber_threshold, \"AMBER\",\n    1=1, \"RED\")\n| table risk_area, risk_metric, value, unit, risk_level\n| sort risk_area",
              "m": "(1) Define `business_risk_thresholds.csv` with green/amber/red thresholds per metric; (2) add risk domains relevant to your business; (3) schedule daily for executive team; (4) alert the executive team when any risk moves to RED; (5) add drill-down to the domain-specific dashboard for investigation; (6) maintain risk register notes for board reporting.",
              "z": "Heatmap (risk areas × risk level), Single value tiles per domain, Table (risk register with current status), Trend chart (risk movement over time).",
              "kfp": "Band colours can flip when the lookup tightens (for example revenue amber floor from nine million to nine point five million) even though operations did not change—always pair heatmap tiles with `threshold_version` and `effective_from` in the footnote. Splunk Security Content deployments can temporarily raise critical and high notable counts because new detections increase coverage, not because adversary activity spiked—compare to Enterprise Security Incident Review disposition rates and escape-hatch exclusions before declaring cyber regression. Quarter-end revenue recognition pulls can swing Monthly Revenue from amber to green the final week then back without a structural fix. NPS based on fewer than thirty responses in the window is statistically thin; guard with `where response_count>=30` in a sibling panel. M&A headcount integration lags make Annualised Attrition rate look artificially low until acquired employees appear in the HRIS feed. Multi-currency ERP roll-ups that translate at month-end spot versus daily snapshots inject two-to-five percent noise into Financial rows—Finance must own the FX policy referenced in the risk register. ServiceNow SLA logic changes (pause conditions) alter SLA breach rate without a process change on the shop floor.",
              "refs": "[Splunk Enterprise Security](https://splunkbase.splunk.com/app/263), [Splunk ITSI](https://splunkbase.splunk.com/app/1841), [Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk Enterprise Security (Splunkbase 263), Splunk ITSI (Splunkbase 1841), Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: All business indexes (aggregated), `business_risk_thresholds.csv`.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Define `business_risk_thresholds.csv` with green/amber/red thresholds per metric; (2) add risk domains relevant to your business; (3) schedule daily for executive team; (4) alert the executive team when any risk moves to RED; (5) add drill-down to the domain-specific dashboard for investigation; (6) maintain risk register notes for board reporting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup business_risk_thresholds.csv\n| append [\n    search index=business sourcetype=\"dbx:erp_orders\" earliest=-30d\n    | stats sum(revenue) as value | eval risk_area=\"Financial\", risk_metric=\"Monthly Revenue\", unit=\"currency\"\n]\n| append [\n    search index=business sourcetype=\"nps_survey\" earliest=-30d\n    | stats avg(score) as value | eval risk_area=\"Customer\", risk_metric=\"NPS Score\", unit=\"points\"\n]\n| append [\n    search index=business sourcetype=\"dbx:hris_employees\" earliest=-30d\n    | eval termed=if(isnotnull(termination_date),1,0) | stats sum(termed) as terminations, dc(employee_id) as headcount\n    | eval value=round(100*terminations/headcount*12, 1) | eval risk_area=\"People\", risk_metric=\"Annualised Attrition %\", unit=\"pct\"\n]\n| append [\n    search `notable` urgency IN (\"critical\",\"high\") earliest=-30d | stats count as value | eval risk_area=\"Cyber\", risk_metric=\"Critical Security Incidents\", unit=\"count\"\n]\n| lookup business_risk_thresholds.csv risk_metric OUTPUT green_threshold, amber_threshold\n| eval risk_level=case(\n    unit=\"currency\" AND value >= green_threshold, \"GREEN\",\n    unit=\"currency\" AND value >= amber_threshold, \"AMBER\",\n    unit IN (\"pct\",\"count\") AND value <= green_threshold, \"GREEN\",\n    unit IN (\"pct\",\"count\") AND value <= amber_threshold, \"AMBER\",\n    1=1, \"RED\")\n| table risk_area, risk_metric, value, unit, risk_level\n| sort risk_area\n```\n\nUnderstanding this SPL\n\n**Business Risk Heatmap and Early Warning System** — Aggregates business risks from multiple domains — financial (revenue miss, cash flow), operational (SLA breaches, supply chain), customer (churn spike, NPS drop), people (attrition spike, hiring delays), and cyber (security incidents) — into a single risk heatmap. Executives see a consolidated risk picture rather than siloed department reports, enabling faster decision-making on risk mitigation.\n\nDocumented **Data sources**: All business indexes (aggregated), `business_risk_thresholds.csv`. **App/TA** (typical add-on context): Splunk Enterprise Security (Splunkbase 263), Splunk ITSI (Splunkbase 1841), Splunk DB Connect (Splunkbase 2686). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• Appends rows from a subsearch with `append`.\n• Appends rows from a subsearch with `append`.\n• Appends rows from a subsearch with `append`.\n• Appends rows from a subsearch with `append`.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **risk_level** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Business Risk Heatmap and Early Warning System**): table risk_area, risk_metric, value, unit, risk_level\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Heatmap (risk areas × risk level), Single value tiles per domain, Table (risk register with current status), Trend chart (risk movement over time).",
              "script": "",
              "premium": "Splunk Enterprise Security",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We blend money, customer mood, staffing changes, service slip-ups, and serious security warnings into one simple red-yellow-green picture so leaders see where trouble is building before five different departments send five different stories.",
              "mtype": [
                "Risk",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "security",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect",
                "itsi"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.8.4",
              "n": "Rule-of-40 and SaaS Unit Economics",
              "c": "high",
              "f": "intermediate",
              "v": "Combines revenue growth with profit margin into a single investor-friendly score so boards can see whether the company balances growth and discipline. We help finance and strategy teams spot quarters where efficiency slips without waiting for the full close package.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:erp_orders\"` (revenue, order_date, order_status), `index=business` `sourcetype=\"dbx:gl_transactions\"` (ebitda_amount, revenue_amount, period)",
              "q": "index=business sourcetype=\"dbx:erp_orders\" order_status=\"booked\" earliest=-270d latest=now()\n| eval qn=ceil(tonumber(strftime(_time,\"%m\"))/3)\n| eval quarter=strftime(_time,\"%Y\").\"-Q\".tostring(qn)\n| stats sum(revenue) as q_revenue by quarter\n| sort quarter\n| streamstats window=1 current=f last(q_revenue) as prev_rev\n| eval growth_pct=if(isnotnull(prev_rev) AND prev_rev>0, round(100*(q_revenue-prev_rev)/prev_rev,1), null())\n| join type=left quarter [\n    search index=business sourcetype=\"dbx:gl_transactions\" earliest=-270d latest=now()\n    | eval pe=coalesce(strptime(period,\"%Y-%m-%d\"),strptime(period.\"-01\",\"%Y-%m-%d\"))\n    | eval qn=ceil(tonumber(strftime(pe,\"%m\"))/3)\n    | eval quarter=strftime(pe,\"%Y\").\"-Q\".tostring(qn)\n    | stats sum(ebitda_amount) as q_ebitda, sum(revenue_amount) as q_rev_gl by quarter\n    | eval margin_pct=if(q_rev_gl>0, round(100*q_ebitda/q_rev_gl,1), null())\n    | fields quarter, margin_pct\n]\n| eval rule_of_40=round(coalesce(growth_pct,0)+coalesce(margin_pct,0),1)\n| table quarter, q_revenue, growth_pct, margin_pct, rule_of_40",
              "m": "(1) Align revenue and profit data from your enterprise resource planning system on the same fiscal calendar via DB Connect; (2) map the earnings before interest accounts used for margin; (3) refresh each quarter and compare rule-of-40 to board targets and peer benchmarks from your planning lookup.",
              "z": "Line chart (rule-of-40 over quarters), Table (growth and margin components), Single value (latest rule-of-40), Bar chart (margin vs growth stacked).",
              "kfp": "Mosaic automated metric bundles occasionally replay superseded quarterly snapshots after FP&A revises free-cash-flow margin narratives inside Adaptive cubes — Splunk dashboards briefly duplicate Rule-of-forty plateaus unrelated to valuation multiples until collectors stamp metric_version_hash dedupe keys. ChartMogul churn-dollar spikes triggered by Stripe Radar disputes resemble churn leakage suppressing Quick Ratio numerators despite unchanged cohort logos once reconciliation clears disputed invoices inside Zuora RevPro staging branches. Public comps lookups keyed exclusively on ticker_peer collide whenever dual-listed ordinaries diverge materially from ADR liquidity pools — valuation_story_gap swings fifty-plus EV/Revenue turns attributable to equity-structure mismatch rather than SaaS fundamentals alone. NetSuite saved-search extracts labeling recognized revenue inclusive of professional-services implementation fees inflate YoY revenue growth versus subscription-only narratives cited during scripted remarks — Splunk faithfully totals chartered GAAP feeds yet diverges from investor-facing ARR pacing absent reconciliation overlays. Sage Intacct multi-entity eliminations arriving one fiscal day later than Zuora cohort totals distort Net Dollar Retention numerators until subsidiary_scope tags propagate across mosaic:metric_event payloads sourced differently between ERP bridges and billing hubs. Chargebee Revenue Recognition batches defer expansion dollars onto deferred revenue ledgers while Mosaic pulls invoice-backed Quick Ratio expansions concurrently — numerator spikes mimic leakage improvements absent durable cohort economics until deferred waterfalls settle through treasury-validated postings.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:erp_orders\"` (revenue, order_date, order_status), `index=business` `sourcetype=\"dbx:gl_transactions\"` (ebitda_amount, revenue_amount, period).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Align revenue and profit data from your enterprise resource planning system on the same fiscal calendar via DB Connect; (2) map the earnings before interest accounts used for margin; (3) refresh each quarter and compare rule-of-40 to board targets and peer benchmarks from your planning lookup.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:erp_orders\" order_status=\"booked\" earliest=-270d latest=now()\n| eval qn=ceil(tonumber(strftime(_time,\"%m\"))/3)\n| eval quarter=strftime(_time,\"%Y\").\"-Q\".tostring(qn)\n| stats sum(revenue) as q_revenue by quarter\n| sort quarter\n| streamstats window=1 current=f last(q_revenue) as prev_rev\n| eval growth_pct=if(isnotnull(prev_rev) AND prev_rev>0, round(100*(q_revenue-prev_rev)/prev_rev,1), null())\n| join type=left quarter [\n    search index=business sourcetype=\"dbx:gl_transactions\" earliest=-270d latest=now()\n    | eval pe=coalesce(strptime(period,\"%Y-%m-%d\"),strptime(period.\"-01\",\"%Y-%m-%d\"))\n    | eval qn=ceil(tonumber(strftime(pe,\"%m\"))/3)\n    | eval quarter=strftime(pe,\"%Y\").\"-Q\".tostring(qn)\n    | stats sum(ebitda_amount) as q_ebitda, sum(revenue_amount) as q_rev_gl by quarter\n    | eval margin_pct=if(q_rev_gl>0, round(100*q_ebitda/q_rev_gl,1), null())\n    | fields quarter, margin_pct\n]\n| eval rule_of_40=round(coalesce(growth_pct,0)+coalesce(margin_pct,0),1)\n| table quarter, q_revenue, growth_pct, margin_pct, rule_of_40\n```\n\nUnderstanding this SPL\n\n**Rule-of-40 and SaaS Unit Economics** — Combines revenue growth with profit margin into a single investor-friendly score so boards can see whether the company balances growth and discipline. We help finance and strategy teams spot quarters where efficiency slips without waiting for the full close package.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:erp_orders\"` (revenue, order_date, order_status), `index=business` `sourcetype=\"dbx:gl_transactions\"` (ebitda_amount, revenue_amount, period). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:erp_orders. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:erp_orders\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **qn** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **quarter** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by quarter** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• `streamstats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) into fewer rows.\n• `eval` defines or adjusts **growth_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• `eval` defines or adjusts **rule_of_40** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Rule-of-40 and SaaS Unit Economics**): table quarter, q_revenue, growth_pct, margin_pct, rule_of_40\n\nEnable Data Model Acceleration (and metric indexes for `mstats`) for the models or datasets referenced above; otherwise `tstats`/`mstats` may return no results from summaries.\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (rule-of-40 over quarters), Table (growth and margin components), Single value (latest rule-of-40), Bar chart (margin vs growth stacked).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We stitch revenue pace, spare cash margins, marketing payback on new bookings, burn versus fresh recurring dollars, buyer retention swings, and how public peers trade—so leadership sees whether operating facts match shareholder-facing remarks.",
              "mtype": [
                "Business"
              ],
              "ind": "SaaS, Technology",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.8.5",
              "n": "Customer Acquisition Cost and Payback Period",
              "c": "high",
              "f": "intermediate",
              "v": "Divides sales and marketing spend by new customers won so leadership sees whether growth is efficient or expensive. We help you estimate how many months a typical customer must stay to repay that investment, which guides budget cuts and pricing decisions.",
              "t": "Splunk DB Connect (Splunkbase 2686), HEC",
              "d": "`index=business` `sourcetype=\"dbx:crm_opportunities\"` (stage, customer_id, close_date, amount), `marketing_spend.csv` (month, spend)",
              "q": "index=business sourcetype=\"dbx:crm_opportunities\" stage=\"Closed Won\" earliest=-90d\n| eval month=strftime(strptime(close_date,\"%Y-%m-%d\"),\"%Y-%m\")\n| stats dc(customer_id) as new_customers, sum(amount) as booked_revenue by month\n| join type=left month [\n    | inputlookup marketing_spend.csv\n    | stats sum(marketing_spend) as marketing_spend, sum(sales_spend) as sales_spend by month\n]\n| fillnull value=0 marketing_spend sales_spend\n| eval total_spend=marketing_spend+sales_spend\n| eval cac=if(new_customers>0, round(total_spend/new_customers,0), null())\n| eval avg_first_year_revenue=if(new_customers>0, round(booked_revenue/new_customers,2), null())\n| eval payback_months=if(avg_first_year_revenue>0 AND cac>0, round(cac/(avg_first_year_revenue/12),1), null())\n| sort month\n| table month, new_customers, total_spend, cac, avg_first_year_revenue, payback_months",
              "m": "(1) Load closed-won customers with first-order dates from customer relationship management via DB Connect; (2) combine marketing and allocated sales costs in `marketing_spend.csv` or separate lookups by month; (3) review monthly with the chief marketing officer and finance partner and set alert thresholds when payback months exceed policy.",
              "z": "Line chart (CAC and payback trend), Table (monthly detail), Single value (blended CAC), Bar chart (spend vs new customers).",
              "kfp": "Zuora subscription amendments can temporarily duplicate normalised MRR rows until amendment streams reconcile inside nightly Zuora connectors—Splunk paying_accounts denominators swell briefly versus CRM wins causing phantom ARPU softness unrelated to packaging economics. Partners resale motions booking vendor marketplace fees outside chartered demand-gen centres leave acquisition numerator understated while reseller-attributed logos inflate denominators sourced purely from CRM campaign IDs—dual-definition distortion until alliance ledger mappings arrive. Stripe Radar disputes netting against gross cash receipts shift ERP commission accruals yet leave CRM Closed Won untouched creating faux worsening payback weeks until dispute-resolution postings settle. Lightweight discounted cash-flow lifetime-value variants disagree sharply with simple_ltv whenever finance lifts gross-retention assumptions inside valuation decks—Splunk simple_ltv remains bookkeeping-style while board models use weighted discount curves. Inbound webinar surges counted as acquisition spend spikes when event deposits post one week and delegate pipeline closes two quarters later—single-month numerator spike with delayed logos mislabels go-to-market regression until expenses align to acquisition cohort windows.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HEC.\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:crm_opportunities\"` (stage, customer_id, close_date, amount), `marketing_spend.csv` (month, spend).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Load closed-won customers with first-order dates from customer relationship management via DB Connect; (2) combine marketing and allocated sales costs in `marketing_spend.csv` or separate lookups by month; (3) review monthly with the chief marketing officer and finance partner and set alert thresholds when payback months exceed policy.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:crm_opportunities\" stage=\"Closed Won\" earliest=-90d\n| eval month=strftime(strptime(close_date,\"%Y-%m-%d\"),\"%Y-%m\")\n| stats dc(customer_id) as new_customers, sum(amount) as booked_revenue by month\n| join type=left month [\n    | inputlookup marketing_spend.csv\n    | stats sum(marketing_spend) as marketing_spend, sum(sales_spend) as sales_spend by month\n]\n| fillnull value=0 marketing_spend sales_spend\n| eval total_spend=marketing_spend+sales_spend\n| eval cac=if(new_customers>0, round(total_spend/new_customers,0), null())\n| eval avg_first_year_revenue=if(new_customers>0, round(booked_revenue/new_customers,2), null())\n| eval payback_months=if(avg_first_year_revenue>0 AND cac>0, round(cac/(avg_first_year_revenue/12),1), null())\n| sort month\n| table month, new_customers, total_spend, cac, avg_first_year_revenue, payback_months\n```\n\nUnderstanding this SPL\n\n**Customer Acquisition Cost and Payback Period** — Divides sales and marketing spend by new customers won so leadership sees whether growth is efficient or expensive. We help you estimate how many months a typical customer must stay to repay that investment, which guides budget cuts and pricing decisions.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:crm_opportunities\"` (stage, customer_id, close_date, amount), `marketing_spend.csv` (month, spend). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:crm_opportunities. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:crm_opportunities\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **month** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by month** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Joins to a subsearch with `join` — set `max=` to match cardinality and avoid silent truncation.\n• Fills null values with `fillnull`.\n• `eval` defines or adjusts **total_spend** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cac** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **avg_first_year_revenue** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **payback_months** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Customer Acquisition Cost and Payback Period**): table month, new_customers, total_spend, cac, avg_first_year_revenue, payback_months\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Line chart (CAC and payback trend), Table (monthly detail), Single value (blended CAC), Bar chart (spend vs new customers).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We keep a clear picture of what it costs to win each paying customer and how many months of profit it takes before that spend pays for itself, so leadership stops guessing from mismatched spreadsheets.",
              "mtype": [
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.8.6",
              "n": "Working Capital and Cash Conversion Cycle",
              "c": "critical",
              "f": "advanced",
              "v": "Combines days inventory outstanding, days sales outstanding, and days payables outstanding into a cash conversion view so treasury can see how long cash is tied up in operations. We help you prioritise collections, stock, and supplier terms when liquidity is tight.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"dbx:inventory\"` (sku, qty_on_hand, unit_cost), `index=business` `sourcetype=\"dbx:ar_invoices\"` (amount, status), `index=business` `sourcetype=\"dbx:ap_invoices\"` (amount, status), `index=business` `sourcetype=\"dbx:gl_transactions\"` (revenue_amount, period)",
              "q": "index=business sourcetype=\"dbx:inventory\" earliest=-1d@d latest=now()\n| stats sum(eval(qty_on_hand*unit_cost)) as inventory_value\n| appendcols [\n    search index=business sourcetype=\"dbx:ar_invoices\" status=\"open\" earliest=-1d@d\n    | stats sum(amount) as ar_open\n]\n| appendcols [\n    search index=business sourcetype=\"dbx:ap_invoices\" status=\"open\" earliest=-1d@d\n    | stats sum(amount) as ap_open\n]\n| appendcols [\n    search index=business sourcetype=\"dbx:gl_transactions\" earliest=-30d\n    | stats sum(revenue_amount) as revenue_30d\n]\n| eval dio=if(revenue_30d>0, round(inventory_value/(revenue_30d/30),0), null())\n| eval dso=if(revenue_30d>0, round(ar_open/(revenue_30d/30),0), null())\n| eval dpo=if(revenue_30d>0, round(ap_open/(revenue_30d/30),0), null())\n| eval ccc=round(dio+dso-dpo,0)\n| table inventory_value, ar_open, ap_open, revenue_30d, dio, dso, dpo, ccc",
              "m": "(1) Schedule a daily snapshot from inventory, receivables, and payables tables through DB Connect using consistent valuation rules; (2) use trailing thirty-day revenue as the activity denominator unless your controller specifies otherwise; (3) alert treasury when the cash conversion cycle moves more than five days away from the rolling average.",
              "z": "Single value (CCC, DIO, DSO, DPO), Waterfall (components), Line chart (CCC trend), Table (daily snapshot history).",
              "kfp": "Rolling **CCC** can improve when finance reclassifies consignment stock between owner and vendor accounts even though pallets never moved, which is a legitimate policy change rather than a collections win. A sharp **DSO** decline may appear after intercompany netting removes mirror **AR** rows in one company code while revenue still posts at the selling entity, so the ratio looks cleaner while external cash is unchanged. Retail **DIO** often rises before peak season on planned **inventory** prebuild; compare the same retail week year over year before paging **treasury**. **Covenant** definitions that use ninety-day revenue in the denominator while this search uses thirty will disagree by design—reconcile to the lender worksheet, not to the board slide. Duplicate-payment recovery sweeps can temporarily inflate **open AP** until reversal documents post, stretching **DPO** for a few days without a true supplier-term change. **FX** revaluation of **BSID**/**BSIK** balances at month-end can move **DSO**/**DPO** several points with no operational shift if **HSL** and **ACDOCA** use different rate types. A **flash** that freezes at **period close** while Splunk keeps ingesting live clears will diverge after the close **window**—expected, not a parser bug. One-time **write-offs** hitting **AR** move open balance and can look like faster collection unless you exclude those **BLART** paths. **Standard** versus **moving-average** cost flips on a **material** category change **average** inventory **value** without a physical count shift, which changes **DIO** even when foot traffic is flat.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686), [Splunk ITSI](https://splunkbase.splunk.com/app/1841)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"dbx:inventory\"` (sku, qty_on_hand, unit_cost), `index=business` `sourcetype=\"dbx:ar_invoices\"` (amount, status), `index=business` `sourcetype=\"dbx:ap_invoices\"` (amount, status), `index=business` `sourcetype=\"dbx:gl_transactions\"` (revenue_amount, period).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Schedule a daily snapshot from inventory, receivables, and payables tables through DB Connect using consistent valuation rules; (2) use trailing thirty-day revenue as the activity denominator unless your controller specifies otherwise; (3) alert treasury when the cash conversion cycle moves more than five days away from the rolling average.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"dbx:inventory\" earliest=-1d@d latest=now()\n| stats sum(eval(qty_on_hand*unit_cost)) as inventory_value\n| appendcols [\n    search index=business sourcetype=\"dbx:ar_invoices\" status=\"open\" earliest=-1d@d\n    | stats sum(amount) as ar_open\n]\n| appendcols [\n    search index=business sourcetype=\"dbx:ap_invoices\" status=\"open\" earliest=-1d@d\n    | stats sum(amount) as ap_open\n]\n| appendcols [\n    search index=business sourcetype=\"dbx:gl_transactions\" earliest=-30d\n    | stats sum(revenue_amount) as revenue_30d\n]\n| eval dio=if(revenue_30d>0, round(inventory_value/(revenue_30d/30),0), null())\n| eval dso=if(revenue_30d>0, round(ar_open/(revenue_30d/30),0), null())\n| eval dpo=if(revenue_30d>0, round(ap_open/(revenue_30d/30),0), null())\n| eval ccc=round(dio+dso-dpo,0)\n| table inventory_value, ar_open, ap_open, revenue_30d, dio, dso, dpo, ccc\n```\n\nUnderstanding this SPL\n\n**Working Capital and Cash Conversion Cycle** — Combines days inventory outstanding, days sales outstanding, and days payables outstanding into a cash conversion view so treasury can see how long cash is tied up in operations. We help you prioritise collections, stock, and supplier terms when liquidity is tight.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"dbx:inventory\"` (sku, qty_on_hand, unit_cost), `index=business` `sourcetype=\"dbx:ar_invoices\"` (amount, status), `index=business` `sourcetype=\"dbx:ap_invoices\"` (amount, status), `index=business` `sourcetype=\"dbx:gl_transactions\"` (revenue_amount, period). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: dbx:inventory. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"dbx:inventory\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` aggregates the pipeline (counts, distinct values, sums, percentiles, etc.) 
into fewer rows.\n• Adds columns from a subsearch with `appendcols`.\n• Adds columns from a subsearch with `appendcols`.\n• Adds columns from a subsearch with `appendcols`.\n• `eval` defines or adjusts **dio** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dso** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **dpo** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **ccc** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **Working Capital and Cash Conversion Cycle**): table inventory_value, ar_open, ap_open, revenue_30d, dio, dso, dpo, ccc\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Single value (CCC, DIO, DSO, DPO), Waterfall (components), Line chart (CCC trend), Table (daily snapshot history).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We watch how long money stays tied up in stock, what customers still owe, and what we owe suppliers, so we notice when the full cash round-trip stretches longer than finance expected last quarter.",
              "mtype": [
                "Business"
              ],
              "ind": "Manufacturing, Retail, Distribution",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 43.3,
          "qd": {
            "gold": 1,
            "silver": 0,
            "bronze": 5,
            "none": 0
          }
        },
        {
          "i": "23.9",
          "n": "ESG & Sustainability Reporting",
          "u": [
            {
              "i": "23.9.1",
              "n": "Carbon Footprint Tracking and Reduction Progress",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks Scope 1, 2, and 3 carbon emissions from energy consumption, fleet operations, business travel, and cloud usage — measuring progress against net-zero commitments. Sustainability officers see which emission sources are largest and whether reduction initiatives are working, while the board gets quarterly ESG reporting data automatically calculated rather than manually assembled.",
              "t": "Splunk DB Connect (Splunkbase 2686), HEC (BMS/IoT data)",
              "d": "`index=facilities` (power meter readings, gas consumption), `index=business` (fleet fuel purchases, travel bookings, cloud usage), `carbon_factors.csv` (emission conversion factors)",
              "q": "index=facilities sourcetype=\"power_meter\" earliest=-1mon@mon latest=@mon\n| stats sum(kwh_consumed) as total_kwh by site\n| eval scope=\"Scope_2\", source=\"Electricity\"\n| append [\n    search index=facilities sourcetype=\"gas_meter\" earliest=-1mon@mon latest=@mon\n    | stats sum(cubic_metres) as total_gas by site\n    | eval scope=\"Scope_1\", source=\"Natural_Gas\", total_kwh=total_gas*11.1\n]\n| append [\n    search index=business sourcetype=\"fleet_fuel\" earliest=-1mon@mon latest=@mon\n    | stats sum(litres) as total_litres by site\n    | eval scope=\"Scope_1\", source=\"Fleet_Fuel\", total_kwh=total_litres*9.7\n]\n| append [\n    search index=business sourcetype=\"travel_booking\" earliest=-1mon@mon latest=@mon\n    | stats sum(distance_km) as total_km by site\n    | eval scope=\"Scope_3\", source=\"Business_Travel\", total_kwh=total_km*0.255\n]\n| lookup carbon_factors.csv source OUTPUT kg_co2_per_kwh\n| eval tonnes_co2=round(total_kwh * kg_co2_per_kwh / 1000, 2)\n| stats sum(tonnes_co2) as total_tonnes by scope, source\n| sort scope, source\n| table scope, source, total_tonnes",
              "m": "(1) Install power and gas meters with data logging to Splunk via BMS/IoT integration; (2) maintain `carbon_factors.csv` with region-specific emission factors (update annually); (3) import fleet fuel and travel data from procurement/booking systems; (4) schedule monthly calculation; (5) compare against annual targets and prior year; (6) generate quarterly ESG report data for board and external disclosure.",
              "z": "Stacked bar (emissions by scope), Pie chart (emission sources), Line chart (monthly trend vs target), Single value (total CO2e, % reduction YoY).",
              "kfp": "Totals swing after the annual BEIS/DESNZ June conversion-factor workbook replaces last year’s coefficients while CDP questionnaires mid-cycle still instruct reporters to freeze factors until fiscal-year governance votes—Splunk shows movement analysts mis-label as operational drift. Regional grid-average intensities shift when ENTSO-E or EPA eGRID rebaseline sub-region boundaries (RFC versus your utility footprint versus balancing-authority splits), inflating apparent variance unrelated to onsite behaviour. Legacy GHG Protocol Scope 2 Guidance dual-reporting versus ISO 14064-1 consolidation clauses yields conflicting tonnes when procurement insists market-based residual mix imports lag supplier certificates by a quarter. Scope 3 Category 6 spreadsheet proxies sometimes approximate rail legs as generic passenger-km factors while traveller itineraries mix international segments requiring ICAO-tier splits—Splunk faithfully totals whichever simplistic keys you coded. Operational-control consolidation captures entire leased-building Scope 1 gas while equity-share reporters expect proration—inventory spikes near lease classification reviews without operational change. Acquisition doubles Scope 1 diesel exposure mid-year unless you apply contractual cut-over dates aligned with acquisition accounting teams; divestitures symmetrically deflate totals in ways YoY dashboards miscount as savings. Carbon-offset retirement timestamps rarely align with meter-month attributions—finance books tonnes retired in December against November operational stacks—Splunk surfaces temporal mismatch risk under scrutiny. Renewable Energy Certificate procurement teams occasionally retire more certificates than megawatt-hours justified by billed meters—market-based Scope 2 rows risk optimism bias unless procurement confirms REC serial linkage per invoice month. Diesel litres accidentally sourced with imperial gallons rather than litres silently distort tonnes until fleet masters reconcile tank throughput conversions—conversion-factor drift hides inside procurement staging joins (fleet diesel-vs-electric mix migrations amplify uncertainty once electrification percentages shift quarter-to-quarter). Business-travel extracts sourced from expense reimbursement batches lag ticket flown-date by four-to-eight weeks—near-real-time Sustainability KPI commentary diverges from SAP Concur/Egencia CSV snapshots Splunk indexes nightly.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HEC (BMS/IoT data).\n• Ensure the following data sources are available: `index=facilities` (power meter readings, gas consumption), `index=business` (fleet fuel purchases, travel bookings, cloud usage), `carbon_factors.csv` (emission conversion factors).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Install power and gas meters with data logging to Splunk via BMS/IoT integration; (2) maintain `carbon_factors.csv` with region-specific emission factors (update annually); (3) import fleet fuel and travel data from procurement/booking systems; (4) schedule monthly calculation; (5) compare against annual targets and prior year; (6) generate quarterly ESG report data for board and external disclosure.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=facilities sourcetype=\"power_meter\" earliest=-1mon@mon latest=@mon\n| stats sum(kwh_consumed) as total_kwh by site\n| eval scope=\"Scope_2\", source=\"Electricity\"\n| append [\n    search index=facilities sourcetype=\"gas_meter\" earliest=-1mon@mon latest=@mon\n    | stats sum(cubic_metres) as total_gas by site\n    | eval scope=\"Scope_1\", source=\"Natural_Gas\", total_kwh=total_gas*11.1\n]\n| append [\n    search index=business sourcetype=\"fleet_fuel\" earliest=-1mon@mon latest=@mon\n    | stats sum(litres) as total_litres by site\n    | eval scope=\"Scope_1\", source=\"Fleet_Fuel\", total_kwh=total_litres*9.7\n]\n| append [\n    search index=business sourcetype=\"travel_booking\" earliest=-1mon@mon latest=@mon\n    | stats sum(distance_km) as total_km by site\n    | eval scope=\"Scope_3\", source=\"Business_Travel\", total_kwh=total_km*0.255\n]\n| lookup carbon_factors.csv source OUTPUT kg_co2_per_kwh\n| eval tonnes_co2=round(total_kwh * kg_co2_per_kwh / 1000, 2)\n| stats sum(tonnes_co2) as total_tonnes by scope, source\n| sort scope, source\n| table scope, source, total_tonnes\n```\n\nUnderstanding this SPL\n\n**Carbon Footprint Tracking and Reduction Progress** — Tracks Scope 1, 2, and 3 carbon emissions from energy consumption, fleet operations, business travel, and cloud usage — measuring progress against net-zero commitments. Sustainability officers see which emission sources are largest and whether reduction initiatives are working, while the board gets quarterly ESG reporting data automatically calculated rather than manually assembled.\n\nDocumented **Data sources**: `index=facilities` (power meter readings, gas consumption), `index=business` (fleet fuel purchases, travel bookings, cloud usage), `carbon_factors.csv` (emission conversion factors). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HEC (BMS/IoT data). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: facilities; **sourcetype**: power_meter. If that sourcetype is not mentioned in Data sources, double-check parsing or update the documentation to match the feed you actually ingest.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=facilities, sourcetype=\"power_meter\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `stats` rolls up events into metrics; results are split **by site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **scope** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Appends rows from a subsearch with `append`.\n• Appends rows from a subsearch with `append`.\n• Appends rows from a subsearch with `append`.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **tonnes_co2** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by scope, source** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Carbon Footprint Tracking and Reduction Progress**): table scope, source, total_tonnes\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (emissions by scope), Pie chart (emission sources), Line chart (monthly trend vs target), Single value (total CO2e, % reduction YoY).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We add up pollution from company trucks and boilers, purchased power using both grid-average and green-contract math, and business flights together so bosses see one credible climate footprint—not piecemeal guesses split across spreadsheets.",
              "wv": "crawl",
              "mtype": [
                "Compliance",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 60,
              "_qt": "silver",
              "_qg": [
                "add at least 2 references for Gold tier",
                "troubleshooting should mention product-specific failure modes"
              ]
            },
            {
              "i": "23.9.2",
              "n": "Energy Consumption and Efficiency by Facility",
              "c": "medium",
              "f": "intermediate",
              "v": "Monitors energy consumption per building, floor, and system (HVAC, lighting, compute) — showing energy use intensity (EUI) and identifying facilities that are consuming more than expected. Facilities managers see which buildings need efficiency upgrades, while finance sees the cost impact. Reducing energy consumption directly reduces both costs and carbon emissions.",
              "t": "HEC (BMS/IoT data), Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=facilities` `sourcetype=\"power_meter\"` (meter_id, site, system, kwh, timestamp), `facility_details.csv` (site, square_metres)",
              "q": "index=facilities sourcetype=\"power_meter\" earliest=-30d\n| bin _time span=1d\n| stats sum(kwh) as daily_kwh by _time, site, system\n| stats avg(daily_kwh) as avg_daily_kwh, sum(daily_kwh) as total_kwh by site, system\n| lookup facility_details.csv site OUTPUT square_metres, occupancy\n| eval eui=if(square_metres>0, round(total_kwh/square_metres, 2), null())\n| eval cost_estimate=round(total_kwh * 0.12, 2)\n| stats sum(total_kwh) as site_total_kwh, sum(cost_estimate) as site_total_cost, values(eui) as eui by site, square_metres\n| sort - site_total_kwh\n| table site, square_metres, site_total_kwh, site_total_cost, eui",
              "m": "(1) Install sub-metering at system level (HVAC, lighting, compute) where possible; (2) ingest meter data via BACnet/Modbus through Edge Hub or BMS integration; (3) maintain `facility_details.csv` with building size and occupancy; (4) schedule daily; (5) alert when any site's EUI exceeds benchmark by >20%; (6) compare weekday vs weekend consumption to identify waste.",
              "z": "Bar chart (EUI by facility), Heatmap (energy by system × site), Line chart (daily consumption trend), Single value (total monthly cost, EUI benchmark).",
              "kfp": "A cold January in Toronto can push the same metered kilowatt-hours as a July heat wave in Phoenix through entirely different HVAC drivers — heating-degree-day versus cooling-degree-day dominance means EUI rises in both cities without a sudden efficiency collapse, so treat raw kilowatt-hours per square metre spikes next to seasonal degree-day overlays before blaming controls. Hybrid return-to-office shifts headcount without floor-area changes: energy per occupant swings while kilowatt-hours per square metre stays flat, so workforce analytics can scream waste when the building simply hosts fewer desks. Large unattributed buckets appear when only the main switchboard is metered but lighting circuits remain estimated from breaker labels — Splunk faithfully shows other above forty percent while the gap is instrumentation, not mystery load. Demand-response programmes (ERCOT emergency response service, CAISO capacity bids, PJM economic load response) legitimately shed one hundred to five hundred kilowatts for hours — EUI drops are compliance with grid stress, not proof of deeper savings. Electric-vehicle chargers behind plug-load meters add fifty to two hundred kilowatt-hours per day as a deliberate sustainability investment; weekend curves change without inefficiency. Multi-tenant sites see step-changes when a restaurant kitchen or colocation cage energises — intensity jumps with no chiller fault. Current-transformer clamps drift three to eight percent high after five tropical years without recalibration, inflating EUI until maintenance replaces CTs or applies correction factors.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC (BMS/IoT data), Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=facilities` `sourcetype=\"power_meter\"` (meter_id, site, system, kwh, timestamp), `facility_details.csv` (site, square_metres).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Install sub-metering at system level (HVAC, lighting, compute) where possible; (2) ingest meter data via BACnet/Modbus through Edge Hub or BMS integration; (3) maintain `facility_details.csv` with building size and occupancy; (4) schedule daily; (5) alert when any site's EUI exceeds benchmark by >20%; (6) compare weekday vs weekend consumption to identify waste.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=facilities sourcetype=\"power_meter\" earliest=-30d\n| bin _time span=1d\n| stats sum(kwh) as daily_kwh by _time, site, system\n| stats avg(daily_kwh) as avg_daily_kwh, sum(daily_kwh) as total_kwh by site, system\n| lookup facility_details.csv site OUTPUT square_metres, occupancy\n| eval eui=if(square_metres>0, round(total_kwh/square_metres, 2), null())\n| eval cost_estimate=round(total_kwh * 0.12, 2)\n| stats sum(total_kwh) as site_total_kwh, sum(cost_estimate) as site_total_cost, values(eui) as eui by site, square_metres\n| sort - site_total_kwh\n| table site, square_metres, site_total_kwh, site_total_cost, eui\n```\n\nUnderstanding this SPL\n\n**Energy Consumption and Efficiency by Facility** — Monitors energy consumption per building, floor, and system (HVAC, lighting, compute) — showing energy use intensity (EUI) and identifying facilities that are consuming more than expected. Facilities managers see which buildings need efficiency upgrades, while finance sees the cost impact. Reducing energy consumption directly reduces both costs and carbon emissions.\n\nDocumented **Data sources**: `index=facilities` `sourcetype=\"power_meter\"` (meter_id, site, system, kwh, timestamp), `facility_details.csv` (site, square_metres). **App/TA** (typical add-on context): HEC (BMS/IoT data), Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: facilities; **sourcetype**: power_meter. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=facilities, sourcetype=\"power_meter\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, site, system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` rolls up events into metrics; results are split **by site, system** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **eui** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **cost_estimate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by site, square_metres** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Energy Consumption and Efficiency by Facility**): table site, square_metres, site_total_kwh, site_total_cost, eui\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (EUI by facility), Heatmap (energy by system × site), Line chart (daily consumption trend), Single value (total monthly cost, EUI benchmark).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-27",
              "sver": "",
              "rby": "",
              "ge": "We look at each building’s electricity use against its floor size so we know which ones are heavy for what they are. We split that use into air-and-heat, lights, computers, and everyday plugs, and compare quiet weekends to busy weekdays so people can see where schedules waste power without turning it into a carbon accounting lecture.",
              "mtype": [
                "Performance",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.9.3",
              "n": "Waste Diversion and Recycling Rate Tracking",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks waste generation by type (landfill, recycling, composting, hazardous) and calculates diversion rates against zero-waste targets. Operations and sustainability teams see which facilities and waste streams need attention, while the data feeds directly into ESG disclosure requirements — many organisations now report waste metrics to investors and rating agencies.",
              "t": "Splunk DB Connect (Splunkbase 2686), HEC",
              "d": "`index=business` `sourcetype=\"waste_manifest\"` (site, waste_type, weight_kg, disposal_method, collection_date)",
              "q": "index=business sourcetype=\"waste_manifest\" earliest=-90d\n| eval diverted=if(disposal_method IN (\"recycled\",\"composted\",\"reused\"), 1, 0)\n| stats sum(weight_kg) as total_kg, sum(eval(if(diverted=1,weight_kg,0))) as diverted_kg by site\n| eval diversion_rate=round(100*diverted_kg/total_kg, 1)\n| eval landfill_kg=total_kg-diverted_kg\n| sort - landfill_kg\n| table site, total_kg, diverted_kg, landfill_kg, diversion_rate",
              "m": "(1) Import waste manifests from waste management provider via CSV upload or HEC; (2) classify disposal methods consistently; (3) schedule monthly; (4) set diversion rate targets by facility; (5) alert when any site's diversion rate drops below target; (6) trend quarterly for ESG reporting.",
              "z": "Bar chart (diversion rate by site), Pie chart (waste by type), Line chart (monthly diversion trend), Single value (overall diversion rate %).",
              "kfp": "Volume-based estimated weights from Rubicon or WM routes can diverge ±15–30% from scale tickets during container-only pickups, inflating diversion_rate until quarterly weighbridge reconciliations land. Transfer-station double posts make two sites claim the same tonnes when `transfer_station_flag` is absent. High contamination in single-stream recycling can force MRF reroute to landfill while the manifest still reads recycling until `disposal_method_actual` backfills. Material taxonomy drift (OCC versus cardboard) splits rows until synonym lookups merge streams. EPA WARM June refreshes swing Scope 3 Category 5 kilograms 3–8% without operational change if `factor_year` is not stamped. Renovation spikes from construction and demolition waste dominate one quarter and look like chronic landfill regression unless baselines segment C&D. Medical-waste manifests may show zero hazardous in Splunk when sites still ship paper manifests not ingested into RCRAInfo. Donated furniture coded as recycling understates reuse, while anaerobic digestion may be classified as recycling or composting depending on hauler schema, shifting diversion_rate between UK, California, and federal reporting lenses. Seasonal cafeteria organics swing back-to-school peaks that resemble programme backsliding absent school-calendar normalization. Bin sensors report false empty when Compology lenses are occluded or Sensoneo ultrasonics miss odd bag shapes. Tipping-fee spikes follow commodity markets, not mass, so cost panels can scream while diversion_rate is flat. Basel international loads without `Annex_VII_form_id` look compliance-clean in Splunk yet fail customs rules—this UC flags data holes, not legal clearance.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HEC.\n• Ensure the following data sources are available: `index=business` `sourcetype=\"waste_manifest\"` (site, waste_type, weight_kg, disposal_method, collection_date).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Import waste manifests from waste management provider via CSV upload or HEC; (2) classify disposal methods consistently; (3) schedule monthly; (4) set diversion rate targets by facility; (5) alert when any site's diversion rate drops below target; (6) trend quarterly for ESG reporting.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"waste_manifest\" earliest=-90d\n| eval diverted=if(disposal_method IN (\"recycled\",\"composted\",\"reused\"), 1, 0)\n| stats sum(weight_kg) as total_kg, sum(eval(if(diverted=1,weight_kg,0))) as diverted_kg by site\n| eval diversion_rate=round(100*diverted_kg/total_kg, 1)\n| eval landfill_kg=total_kg-diverted_kg\n| sort - landfill_kg\n| table site, total_kg, diverted_kg, landfill_kg, diversion_rate\n```\n\nUnderstanding this SPL\n\n**Waste Diversion and Recycling Rate Tracking** — Tracks waste generation by type (landfill, recycling, composting, hazardous) and calculates diversion rates against zero-waste targets. Operations and sustainability teams see which facilities and waste streams need attention, while the data feeds directly into ESG disclosure requirements — many organisations now report waste metrics to investors and rating agencies.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"waste_manifest\"` (site, waste_type, weight_kg, disposal_method, collection_date). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HEC. The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: waste_manifest. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"waste_manifest\", time bounds. Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **diverted** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **diversion_rate** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **landfill_kg** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Waste Diversion and Recycling Rate Tracking**): table site, total_kg, diverted_kg, landfill_kg, diversion_rate\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. 
Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (diversion rate by site), Pie chart (waste by type), Line chart (monthly diversion trend), Single value (overall diversion rate %).",
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We weigh what leaves each building—recycling, food scraps, landfill, hazardous—and compare it to targets so leaders see whether we are keeping stuff out of dumps and staying honest with hauler receipts. When something drifts, we spot it before the yearly sustainability report goes to investors or regulators.",
              "mtype": [
                "Compliance",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.9.4",
              "n": "Water Consumption Monitoring and Conservation",
              "c": "medium",
              "f": "beginner",
              "v": "Tracks water consumption by facility and usage type (cooling, process, domestic) — identifying sites that consume disproportionately and detecting leaks or waste. Water is increasingly a material ESG metric, especially for data centres (PUE) and manufacturing (process water). Reducing consumption lowers costs and demonstrates environmental responsibility.",
              "t": "HEC (IoT/BMS integration)",
              "d": "`index=facilities` `sourcetype=\"water_meter\"` (meter_id, site, usage_type, litres, timestamp)",
              "q": "index=facilities sourcetype=\"water_meter\" earliest=-30d\n| bin _time span=1d\n| stats sum(litres) as daily_litres by _time, site\n| stats avg(daily_litres) as avg_daily, stdev(daily_litres) as stdev_daily, sum(daily_litres) as monthly_total by site\n| eval upper_threshold=avg_daily + 2*stdev_daily\n| eval anomaly=if(avg_daily > upper_threshold, \"INVESTIGATE\", \"Normal\")\n| eval monthly_cost=round(monthly_total * 0.003, 2)\n| sort - monthly_total\n| table site, monthly_total, avg_daily, monthly_cost, anomaly",
              "m": "(1) Install smart water meters with data logging; (2) ingest readings via Edge Hub or HEC; (3) schedule daily with anomaly detection for leak identification; (4) compare consumption against production output for water intensity metrics; (5) trend quarterly for ESG reporting; (6) set reduction targets by facility.",
              "z": "Bar chart (consumption by site), Line chart (daily trend with anomaly markers), Single value (monthly total, cost), Table (sites with anomalies).",
              "kfp": "Radio-dark cellar AMI radios flatten overnight gallon deltas across twelve-hour LTE gaps while an open hose bib drains steadily—layer acoustic collars or strap-on ultrasonics before trusting silent telemetry nights alone. Johnson Controls versus Honeywell exported BACnet analogInput indexes collide so irrigation pulses inherit district_feed tags—phantom inlet surpluses read as theft until Facilities re-maps browse tables to as-built riser diagrams. Groundskeepers lawfully punch extra irrigation cycles during civic turf-brown moratoria—Splunk spikes look like noncompliance unless ordinance_calendar.csv stamps temporary variances beside _time. Conductivity-governed cooling-tower blowdown solenoids chatter two-hundred-second on-off patterns that mimic hidden pipe breaks—pair chem-probe historian strips before tagging Ops leaks on makeup metres alone. Hybrid shifts trimmed weekday badge counts sixty percent yet denominators stayed frozen—gallons-per-desk plunge reads virtuous until IWMS occupancy refreshes quarterly. Quiet Saturday continuous-flow fractions above idle thresholds still trace worn lavatory flappers or stuck rotor sprinklers deserving plumber dispatch—not Sustainability paging executives prematurely. July-August evaporative makeup routinely stacks fivefold above January lows along lawful chill-duty curves—cooling-degree-day overlays distinguish benign peaks from cracked basin seams. Nine-decimal cumulative registers snapping past ninety-nine-million roll negative deltas Splunk theft logic catches until ingestion pipelines clamp rollover arithmetic exactly like meter OEM manuals prescribe. Cracked diaphragms inside vault-mounted PRVs periodically flip instantaneous vectors backward—retrograde totals resemble bypass theft despite lawful evaporative credits elsewhere along the campus spine. Mandatory NFPA diesel-engine discharge headers purge hundreds of gallons during weekly churn trials resembling bursts—suppress Splunk alarms via CMMS-derived nfpa_flow_window tokens keyed facility-by-facility. Purple reclaimed headers plumbed parallel to bronze bundles duplicate pulses when coefficient tables mismatch painted sleeve colours—maintain reclaimed_water.csv splitting flows before potable KPI decks inflate. Ignoring tower drift fractions near two percent while modelling evaporation-only withdrawals shifts blame toward phantom mains leakage whenever drift stacks atop intentional bleed schedules. Upstream municipal unmetered acres sipping wholesale unmetered hydrants downstream of campus boundaries inflate campus-side apparent-loss buckets until GIS unmetered_landscape subtraction CSV subtracts sanctioned turf withdrawals before auditors argue NRW narratives. Ordinance maths quarrel uniform percentage-cut dashboards versus personalised baseline allotments—Splunk tier greens persist while regulators cite breaches whenever ordinance_version CSV omits freshly amended proportional clauses adopted mid-season.",
              "refs": "[Splunk Lantern — use case library](https://lantern.splunk.com/)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC (IoT/BMS integration).\n• Ensure the following data sources are available: `index=facilities` `sourcetype=\"water_meter\"` (meter_id, site, usage_type, litres, timestamp).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Install smart water meters with data logging; (2) ingest readings via Edge Hub or HEC; (3) schedule daily with anomaly detection for leak identification; (4) compare consumption against production output for water intensity metrics; (5) trend quarterly for ESG reporting; (6) set reduction targets by facility.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=facilities sourcetype=\"water_meter\" earliest=-30d\n| bin _time span=1d\n| stats sum(litres) as daily_litres by _time, site\n| stats avg(daily_litres) as avg_daily, stdev(daily_litres) as stdev_daily, sum(daily_litres) as monthly_total by site\n| eval upper_threshold=avg_daily + 2*stdev_daily\n| eval anomaly=if(avg_daily > upper_threshold, \"INVESTIGATE\", \"Normal\")\n| eval monthly_cost=round(monthly_total * 0.003, 2)\n| sort - monthly_total\n| table site, monthly_total, avg_daily, monthly_cost, anomaly\n```\n\nUnderstanding this SPL\n\n**Water Consumption Monitoring and Conservation** — Tracks water consumption by facility and usage type (cooling, process, domestic) — identifying sites that consume disproportionately and detecting leaks or waste. Water is increasingly a material ESG metric, especially for data centres (PUE) and manufacturing (process water). Reducing consumption lowers costs and demonstrates environmental responsibility.\n\nDocumented **Data sources**: `index=facilities` `sourcetype=\"water_meter\"` (meter_id, site, usage_type, litres, timestamp). **App/TA** (typical add-on context): HEC (IoT/BMS integration). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: facilities; **sourcetype**: water_meter. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=facilities, sourcetype=\"water_meter\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` rolls up events into metrics; results are split **by site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **upper_threshold** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **anomaly** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **monthly_cost** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Water Consumption Monitoring and Conservation**): table site, monthly_total, avg_daily, monthly_cost, anomaly\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (consumption by site), Line chart (daily trend with anomaly markers), Single value (monthly total, cost), Table (sites with anomalies).",
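The summary table above only flags a site whose single worst day broke the two-sigma band over the month; for the daily leak detection in Step 1 item (3) you generally want to test each day against its own site baseline. A minimal daily variant using only the documented `litres` and `site` fields (the two-sigma threshold mirrors the main search and is a starting point to tune):

```spl
index=facilities sourcetype="water_meter" earliest=-30d
| bin _time span=1d
| stats sum(litres) as daily_litres by _time, site
| eventstats avg(daily_litres) as avg_daily, stdev(daily_litres) as stdev_daily by site
| where daily_litres > avg_daily + 2*stdev_daily
| sort - _time
```

Scheduled each morning, the newest rows are the days worth a plumber's visit; suppress known fire-pump churn-test and cooling-tower blowdown windows noted in the false positives before paging anyone.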
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We tally drinking-water gallons moving through meters week by week, compare quiet-night flows to sleepy-building expectations, and line totals beside posted irrigation limits so crews mend silent leaks early and drought notices stay credible.",
              "mtype": [
                "Performance",
                "Business"
              ],
              "ind": "Manufacturing, Data Centers, Hospitality",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.9.5",
              "n": "ESG Disclosure Readiness and Data Completeness",
              "c": "high",
              "f": "intermediate",
              "v": "Tracks whether the organisation has complete, auditable data for all required ESG disclosure metrics — from emissions and energy to diversity and governance. As ESG reporting becomes mandatory in many jurisdictions (CSRD, SEC Climate Disclosure), this use case ensures that data collection gaps are identified well before reporting deadlines rather than during last-minute scrambles.",
              "t": "Splunk DB Connect (Splunkbase 2686)",
              "d": "`esg_metric_registry.csv` (required metrics, data sources, owners), all business and facilities indexes",
              "q": "| inputlookup esg_metric_registry.csv\n| eval data_available=case(\n    data_source=\"power_meter\" AND [search index=facilities sourcetype=\"power_meter\" earliest=-90d | stats count | return $count],  \"YES\",\n    data_source=\"waste_manifest\" AND [search index=business sourcetype=\"waste_manifest\" earliest=-90d | stats count | return $count], \"YES\",\n    data_source=\"hris\" AND [search index=business sourcetype=\"dbx:hris_employees\" earliest=-90d | stats count | return $count], \"YES\",\n    1=1, \"MISSING\")\n| eval readiness=if(data_available=\"YES\" AND isnotnull(last_verified), \"READY\", \"GAP\")\n| stats count as total_metrics, sum(eval(if(readiness=\"READY\",1,0))) as ready, sum(eval(if(readiness=\"GAP\",1,0))) as gaps by reporting_framework\n| eval completeness_pct=round(100*ready/total_metrics, 1)\n| table reporting_framework, total_metrics, ready, gaps, completeness_pct",
              "m": "(1) Build `esg_metric_registry.csv` listing all required ESG metrics by framework (CSRD, GRI, SASB, TCFD); (2) map each metric to its Splunk data source and data owner; (3) schedule quarterly readiness checks; (4) alert data owners when their metrics have data gaps; (5) generate audit trail showing when each metric was last validated; (6) produce a readiness report for the sustainability committee.",
              "z": "Table (metric readiness by framework), Gauge (overall completeness %), Bar chart (gaps by framework), Single value (metrics with gaps).",
              "kfp": "When ESRS amendment packages shrink mandatory datapoint inventories overnight, completeness dives although ingestion stayed steady until Knowledge Managers bump framework_version stamps aligned with EFRAG freeze notices — dashboards otherwise scream MISSING across metres that never broke. Fiscal EU closes stacked beside Securities and Exchange Commission calendar filings reuse identical Schneider totals yet Splunk variance detectors flag mismatched anchors unless reporting_deadline tokens harmonise multinational periods referencing GHG Protocol consolidation choices documented beside statutory ledgers. Participation bursts inside EcoVadis or Sedex portals widen respondent pools — readiness percentages swing upward without firmer supplier metre corroboration beneath Scope 3 Category 1 narratives tied to procurement diligence cadences.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `esg_metric_registry.csv` (required metrics, data sources, owners), all business and facilities indexes.\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Build `esg_metric_registry.csv` listing all required ESG metrics by framework (CSRD, GRI, SASB, TCFD); (2) map each metric to its Splunk data source and data owner; (3) schedule quarterly readiness checks; (4) alert data owners when their metrics have data gaps; (5) generate audit trail showing when each metric was last validated; (6) produce a readiness report for the sustainability committee.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\n| inputlookup esg_metric_registry.csv\n| eval data_available=case(\n    data_source=\"power_meter\" AND [search index=facilities sourcetype=\"power_meter\" earliest=-90d | stats count | return $count],  \"YES\",\n    data_source=\"waste_manifest\" AND [search index=business sourcetype=\"waste_manifest\" earliest=-90d | stats count | return $count], \"YES\",\n    data_source=\"hris\" AND [search index=business sourcetype=\"dbx:hris_employees\" earliest=-90d | stats count | return $count], \"YES\",\n    1=1, \"MISSING\")\n| eval readiness=if(data_available=\"YES\" AND isnotnull(last_verified), \"READY\", \"GAP\")\n| stats count as total_metrics, sum(eval(if(readiness=\"READY\",1,0))) as ready, sum(eval(if(readiness=\"GAP\",1,0))) as gaps by reporting_framework\n| eval completeness_pct=round(100*ready/total_metrics, 1)\n| table reporting_framework, total_metrics, ready, gaps, completeness_pct\n```\n\nUnderstanding this SPL\n\n**ESG Disclosure Readiness and Data Completeness** — Tracks whether the organisation has complete, auditable data for all required ESG disclosure metrics — from emissions and energy to diversity and governance. As ESG reporting becomes mandatory in many jurisdictions (CSRD, SEC Climate Disclosure), this use case ensures that data collection gaps are identified well before reporting deadlines rather than during last-minute scrambles.\n\nDocumented **Data sources**: `esg_metric_registry.csv` (required metrics, data sources, owners), all business and facilities indexes. **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686). 
The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\n**Pipeline walkthrough**\n\n• Loads rows via `inputlookup` (KV store or CSV lookup) for enrichment or reporting.\n• `eval` defines or adjusts **data_available** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **readiness** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by reporting_framework** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **completeness_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Pipeline stage (see **ESG Disclosure Readiness and Data Completeness**): table reporting_framework, total_metrics, ready, gaps, completeness_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Table (metric readiness by framework), Gauge (overall completeness %), Bar chart (gaps by framework), Single value (metrics with gaps).",
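Step 1 items (4) and (5) ask you to alert data owners and keep an audit trail of when each metric was last validated. A small owner-facing gap list drawn from the same registry, assuming it carries the owner and last-verified columns described in the data sources; the exact column names `metric`, `owner`, and `last_verified` are assumptions (rename to match your file), and `last_verified` is assumed to be stored as epoch seconds:

```spl
| inputlookup esg_metric_registry.csv
| where isnull(last_verified) OR last_verified < relative_time(now(), "-90d@d")
| table reporting_framework, metric, data_source, owner, last_verified
| sort reporting_framework, owner
```

Sending this table per owner (for example via a scheduled email alert split by the `owner` field) turns the readiness percentage above into concrete follow-up actions well before the filing deadline.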
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "Before filings freeze, we compare each promised disclosure row against live feeds and steward sign-offs so gaps surface while calendars still flex. That avoids frantic hunts through spreadsheets when auditors already expect every cited figure backed by dated lineage.",
              "mtype": [
                "Compliance",
                "Business"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.9.6",
              "n": "Renewable Energy Share and Green Tariff Attribution",
              "c": "medium",
              "f": "intermediate",
              "v": "Shows what share of your electricity came from renewable contracts or on-site generation versus grid mix, so leaders can prove progress to investors and regulators. We help you tie tariff decisions to reported carbon outcomes instead of guessing after the fact.",
              "t": "Splunk DB Connect (Splunkbase 2686), HEC (utility data)",
              "d": "`index=facilities` `sourcetype=\"power_meter\"` (site, kwh, contract_type, timestamp), `renewable_contracts.csv` (site, renewable_pct, period)",
              "q": "index=facilities sourcetype=\"power_meter\" earliest=-30d\n| bin _time span=1d\n| stats sum(kwh) as daily_kwh by _time, site, contract_type\n| stats sum(daily_kwh) as total_kwh by site, contract_type\n| lookup renewable_contracts.csv site OUTPUT renewable_pct\n| eval renewable_kwh=round(total_kwh * coalesce(renewable_pct,0) / 100, 2)\n| eval grid_kwh=total_kwh-renewable_kwh\n| stats sum(renewable_kwh) as sum_renewable, sum(grid_kwh) as sum_grid, sum(total_kwh) as sum_total by site\n| eval renewable_share_pct=if(sum_total>0, round(100*sum_renewable/sum_total, 1), null())\n| sort - renewable_share_pct\n| table site, sum_total, sum_renewable, sum_grid, renewable_share_pct",
              "m": "(1) Tag each meter or site with contract type and ingest utility invoices or supplier files via DB Connect; (2) maintain `renewable_contracts.csv` with the certified renewable percentage per site and period; (3) schedule monthly and compare renewable share to your science-based or net-zero milestones.",
              "z": "Stacked bar (renewable vs grid kWh by site), Single value (portfolio renewable %), Table (site-level mix), Line chart (renewable share trend).",
              "kfp": "Utility normalizers lag thirty-to-ninety-day billing closes so Splunk monthly buckets paint dips when invoices slip across fiscal boundaries unrelated to procurement shifts — treasury accruals versus Splunk `_time` drift explains phantom declines until revised bills replay.\nREC retirement timestamps aligned to fiscal calendars mismatch hourly settlement feeds — annual booklet claims hundred-percent renewable electricity while WattTime marginal peaks still trace thermal generators — dashboards clash until Sustainability publishes disclosure-scope boundaries referencing EnergyTag hourly guidance versus vintage-year bundles.\nResidual mix coefficients refreshed July-first per Association of Issuing Bodies tables swing reported Scope **market_based** intensities plus-five-to-minus-twelve percent absent behavioral shifts — analysts confuse coefficient revisions with procurement setbacks unless methodology footnotes cite frozen **`factor_year`** windows.\nBehind-the-meter SolarEdge gateway LTE dropout yields **`ac_kwh`** zeros mistaken for halted renewables despite steady campus consumption masked by revenue-meter gaps — correlate inverter heartbeat alarms before accusing procurement gaps.\nRetail supplier tariff reshuffles rebalance **`green_tariff_pct`** riders mid-quarter without contract amendments — Splunk spikes trace regulatory filings rather than slipped diligence — annotate **`tariff_version`** tokens sourced from tariff IDs returned via Arcadia **`tariff_code`** metadata when accessible.\nQuarter-end virtual PPA **true_up_adjustment_q** postings retroactively rewrite **`ppa_generation_kwh`** months backward — Splunk dashboards imply officers hid shortfalls though settlements lag Independent System Operator meter corrections — freeze immutable archived summaries beside rolling searches for auditor parity.\nBundled PPAs embedding REC volumes collide with voluntary **`unbundled_REC_match_kwh`** purchases when procurement departments duplicate instruments — **`renewable_share_pct`** overstated until **`instrument_overlap_rules.csv`** suppresses overlapping serial namespaces spanning **`serial_range`** intersections.\nAdditionality narratives fracture when corporations procure vintage REC carry-forwards satisfying contractual megawatt-hours yet failing Science Based Targets initiative additionality screens — Splunk totals cheer while verifier spreadsheets disqualify offsets — maintain **`additionality_flag`** overlays referencing initiative correspondence identifiers.\nCommercial aviation rebound shifts headquarter load curves versus locked baseline renewable percentages established during muted travel eras — normalized **`renewable_share_pct`** dips solely because denominators climbed faster than onsite arrays though procurement volumes unchanged — overlay travel-return KPI splits referencing UC siblings selectively without merging unrelated scopes.\nFleet electrification corridors funnel megawatt-hours through depot chargers categorized separately from legacy buildings — Splunk **`facility_id`** omissions strand Scope **2** megawatt-hours outside **`usage_kwh`** feeds until ChargePoint JDBC extracts integrate — dashboards temporarily underrate procurement workload breadth.\nWashington climate filings, Brussels statutory extracts, and questionnaire portals disagree whether offsets belong inside renewable-percent numerators — Splunk arithmetic mirrors whichever stewardship worksheet Procurement canonized until Legal publishes authoritative denominator rules 
reconciling investor narratives with statutory scopes.\nGranular hourly certificate vendors disagree when **`renewable_supply_kwh`** references offshore wind **`forecast`** feeds versus realised **`meter`** proofs — **`hourly_match_score`** volatility stems from vendor variance not dishonesty — freeze **`vendor_revision`** stamps beside Splunk calculations.\nCertificate fraud allegations circulating European Guarantee of Origin liquidity prompted heightened scrutiny — Splunk duplicates issuer disclosures yet cannot adjudicate courtroom-grade authenticity — escalate forensic audits whenever **`serial_range`** duplicates appear across unrelated **`retirement_accounts`** within identical **`vintage_period`** slices.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: Splunk DB Connect (Splunkbase 2686), HEC (utility data).\n• Ensure the following data sources are available: `index=facilities` `sourcetype=\"power_meter\"` (site, kwh, contract_type, timestamp), `renewable_contracts.csv` (site, renewable_pct, period).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Tag each meter or site with contract type and ingest utility invoices or supplier files via DB Connect; (2) maintain `renewable_contracts.csv` with the certified renewable percentage per site and period; (3) schedule monthly and compare renewable share to your science-based or net-zero milestones.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=facilities sourcetype=\"power_meter\" earliest=-30d\n| bin _time span=1d\n| stats sum(kwh) as daily_kwh by _time, site, contract_type\n| stats sum(daily_kwh) as total_kwh by site, contract_type\n| lookup renewable_contracts.csv site OUTPUT renewable_pct\n| eval renewable_kwh=round(total_kwh * coalesce(renewable_pct,0) / 100, 2)\n| eval grid_kwh=total_kwh-renewable_kwh\n| stats sum(renewable_kwh) as sum_renewable, sum(grid_kwh) as sum_grid, sum(total_kwh) as sum_total by site\n| eval renewable_share_pct=if(sum_total>0, round(100*sum_renewable/sum_total, 1), null())\n| sort - renewable_share_pct\n| table site, sum_total, sum_renewable, sum_grid, renewable_share_pct\n```\n\nUnderstanding this SPL\n\n**Renewable Energy Share and Green Tariff Attribution** — Shows what share of your electricity came from renewable contracts or on-site generation versus grid mix, so leaders can prove progress to investors and regulators. We help you tie tariff decisions to reported carbon outcomes instead of guessing after the fact.\n\nDocumented **Data sources**: `index=facilities` `sourcetype=\"power_meter\"` (site, kwh, contract_type, timestamp), `renewable_contracts.csv` (site, renewable_pct, period). **App/TA** (typical add-on context): Splunk DB Connect (Splunkbase 2686), HEC (utility data). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: facilities; **sourcetype**: power_meter. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=facilities, sourcetype=\"power_meter\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• Discretizes time or numeric ranges with `bin`/`bucket`.\n• `stats` rolls up events into metrics; results are split **by _time, site, contract_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `stats` rolls up events into metrics; results are split **by site, contract_type** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **renewable_kwh** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `eval` defines or adjusts **grid_kwh** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by site** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **renewable_share_pct** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Renewable Energy Share and Green Tariff Attribution**): table site, sum_total, sum_renewable, sum_grid, renewable_share_pct\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Stacked bar (renewable vs grid kWh by site), Single value (portfolio renewable %), Table (site-level mix), Line chart (renewable share trend).",
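Optional alert variant for Step 4 — a minimal sketch only: the 60% floor and the monthly schedule are illustrative assumptions, not catalog defaults, and the search keeps the same index, sourcetype, and `renewable_contracts.csv` lookup documented above.

```spl
index=facilities sourcetype="power_meter" earliest=-30d
| bin _time span=1d
| stats sum(kwh) as daily_kwh by _time, site, contract_type
| stats sum(daily_kwh) as total_kwh by site, contract_type
| lookup renewable_contracts.csv site OUTPUT renewable_pct
| eval renewable_kwh=round(total_kwh * coalesce(renewable_pct,0) / 100, 2)
| stats sum(renewable_kwh) as sum_renewable, sum(total_kwh) as sum_total by site
| eval renewable_share_pct=if(sum_total>0, round(100*sum_renewable/sum_total, 1), null())
| where renewable_share_pct < 60
| table site, sum_total, sum_renewable, renewable_share_pct
```

Saved as a monthly scheduled alert, this variant triggers only for sites below the floor; adjust the threshold to whatever your science-based or net-zero milestone actually requires.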
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We follow supplier invoices, rooftop inverter totals, marketplace megawatt-hours, and cancellation receipts side by side against what buildings truly drew from the wires—so bosses defending renewable-percent pledges stay aligned with certificate paperwork rather than guessing from meter totals alone.",
              "mtype": [
                "Business",
                "Compliance"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            },
            {
              "i": "23.9.7",
              "n": "Scope 3 Commuting and Hybrid Work Emissions",
              "c": "low",
              "f": "beginner",
              "v": "Estimates emissions from employee commuting and hybrid office attendance using badge and survey data, which many disclosure frameworks now ask for under Scope 3. We help workplace and sustainability teams see which locations and commute modes drive the largest footprint so travel and office policies can be adjusted fairly.",
              "t": "HEC (badge/HR systems), Splunk DB Connect (Splunkbase 2686)",
              "d": "`index=business` `sourcetype=\"commute_survey\"` (employee_id, site, mode, distance_km, days_per_week), `commute_emission_factors.csv` (mode, kg_co2_per_km)",
              "q": "index=business sourcetype=\"commute_survey\" earliest=-90d\n| eval weekly_km=distance_km*days_per_week\n| lookup commute_emission_factors.csv mode OUTPUT kg_co2_per_km\n| eval weekly_kg_co2=round(weekly_km * coalesce(kg_co2_per_km, 0.12), 2)\n| stats sum(weekly_kg_co2) as total_kg_co2, dc(employee_id) as respondents, avg(weekly_km) as avg_weekly_km by site, mode\n| eval tonnes_co2=round(total_kg_co2/1000, 2)\n| sort - tonnes_co2\n| table site, mode, respondents, avg_weekly_km, tonnes_co2",
              "m": "(1) Send anonymised commute surveys or badge-based attendance summaries to Splunk via HEC on a schedule employees understand; (2) keep `commute_emission_factors.csv` aligned with your country’s published factors; (3) run quarterly and review results with facilities and people leaders before publishing ESG narratives.",
              "z": "Bar chart (tonnes CO2 by commute mode), Table (site and mode breakdown), Pie chart (mode share), Single value (total estimated commuting tonnes).",
              "kfp": "Upstream hotspot dashboards spike when Procurement reroutes PO flows through reseller hubs **without** adjusting **`supplier_vendor_id`** master keys — Splunk interprets reseller anonymity as new mega-suppliers though physical flows stayed intact until **`mdm_vendor_bridge.csv`** harmonises ERP hierarchies referencing canonical **`UNSPSC_BUCKET`** mappings tied to spend taxonomy stewards.\n\nSeasonal EEIO coefficient refreshes (**DEFRA June** drops versus locked **inventory_period** reporting ) rearrange **`spend_based`** tails absent behavioural procurement shifts — **`supplier_total_kg`** swings mimic supplier misconduct though finance merely adopted refreshed **`EXIOBASE`** vintage coefficients mirrored inside **`eeio_factors.csv`** governance commits.\n\nFinanced-emissions (**Category 15**) ingestion bursts when treasury uploads quarterly loan tapes denominated in **`USD`** alongside Procurement **`Cat01`** aluminium purchases keyed to identical surrogate IDs — **`cats_reported`** inflates dramatically though atmospheric exposures unchanged until namespaces separate **`bank_book`** versus **`supply_chain`** prefixes inside **`supplier_key`** enrichment macros referencing **PCAF** attribution guides.\n\n**GLEC** logistics calculators feeding **`Cat04`** **`activity_based`** tonnes occasionally double-count tonne-kilometres already embedded inside **`emitwise`** supplier-specific factors — **`supplier_total_kg`** duplicates until **`dedupe_shipment_id`** lookups reconcile multimodal bills-of-lading hashed beside **`watershed:reduction_initiative`** storytelling artefacts.\n\n**Climate TRACE** satellite-derived intensities blended through modular REST pulls reorder hotspots ±35% quarter-over-quarter purely because **`Climate TRACE`** **`api/v6`** asset revisions rebaseline maritime **`Cat04`** allocations unrelated to freight procurement diligence — annotate **`ClimateTRACE_schema_version`** tokens beside dashboards before accusing logistics leads.\n\n**Joint venture** procurement feeds attributing **`Cat01`** purchased goods while JV minority partner simultaneously reports mirrored **`Scope 1+2`** totals keep **`double_count_symmetry_ok`** FALSE legitimately until Legal publishes contractual elimination memos referencing **GHG Protocol** boundary symmetry tables — Splunk persists WARNING hues though no malpractice occurred beyond consolidation optics.\n\n**Intercompany** **`invoice_usd`** rows eliminated inside treasury consolidation feeds yet mirrored inside **`persefoni:inventory_event`** staging replicas inflate **`supplier_total_kg`** until **`intercompany_elim.csv`** suppresses mirrored **`PO_ID`** tuples keyed identical **`supplier_key`** aliases flagged **Finance close** references.\n\n**Commodity hedge** tagging jitter reorganises **`UNSPSC_BUCKET`** quarterly mappings rolling **`lookup`** joins — **`supplier_total_kg`** reshuffles tail vendors absent commodity consumption shifts until Procurement taxonomy councils freeze **`UNSPSC_BUCKET`** crosswalk tables referencing **SAP Ariba** commodity dictionaries.\n\n**Parallel pilots** landing **`workiva:carbon_event`** duplicates **`persefoni:inventory_event`** totals during Sustainability tooling migrations masquerade as doubled hotspots until modular-input **`disabled=1`** flags propagate across Heavy Forwarders — **`tier`** badges oscillate weak-strong absent atmospheric drift purely due to ingest duplication noise.",
              "refs": "[Splunk DB Connect](https://splunkbase.splunk.com/app/2686)",
              "mitre": [],
              "dtype": "",
              "sdomain": "",
              "reqf": "",
              "md": "Prerequisites\n• Install and configure the required add-on or app: HEC (badge/HR systems), Splunk DB Connect (Splunkbase 2686).\n• Ensure the following data sources are available: `index=business` `sourcetype=\"commute_survey\"` (employee_id, site, mode, distance_km, days_per_week), `commute_emission_factors.csv` (mode, kg_co2_per_km).\n• For app installation, inputs.conf, and Splunk directory layout, see the Implementation guide: docs/implementation-guide.md\n\nStep 1 — Configure data collection\n(1) Send anonymised commute surveys or badge-based attendance summaries to Splunk via HEC on a schedule employees understand; (2) keep `commute_emission_factors.csv` aligned with your country’s published factors; (3) run quarterly and review results with facilities and people leaders before publishing ESG narratives.\n\nStep 2 — Create the search and alert\nRun the following SPL in Search (then save as report or alert; adjust time range and threshold as needed):\n\n```spl\nindex=business sourcetype=\"commute_survey\" earliest=-90d\n| eval weekly_km=distance_km*days_per_week\n| lookup commute_emission_factors.csv mode OUTPUT kg_co2_per_km\n| eval weekly_kg_co2=round(weekly_km * coalesce(kg_co2_per_km, 0.12), 2)\n| stats sum(weekly_kg_co2) as total_kg_co2, dc(employee_id) as respondents, avg(weekly_km) as avg_weekly_km by site, mode\n| eval tonnes_co2=round(total_kg_co2/1000, 2)\n| sort - tonnes_co2\n| table site, mode, respondents, avg_weekly_km, tonnes_co2\n```\n\nUnderstanding this SPL\n\n**Scope 3 Commuting and Hybrid Work Emissions** — Estimates emissions from employee commuting and hybrid office attendance using badge and survey data, which many disclosure frameworks now ask for under Scope 3. We help workplace and sustainability teams see which locations and commute modes drive the largest footprint so travel and office policies can be adjusted fairly.\n\nDocumented **Data sources**: `index=business` `sourcetype=\"commute_survey\"` (employee_id, site, mode, distance_km, days_per_week), `commute_emission_factors.csv` (mode, kg_co2_per_km). **App/TA** (typical add-on context): HEC (badge/HR systems), Splunk DB Connect (Splunkbase 2686). The SPL below should target the same indexes and sourcetypes you configured for that feed—rename `index=` / `sourcetype=` if your deployment differs.\n\nThe first pipeline stage scopes events using **index**: business; **sourcetype**: commute_survey. That sourcetype matches what this use case lists under Data sources.\n\n**Pipeline walkthrough**\n\n• Scopes the data: index=business, sourcetype=\"commute_survey\", time bounds. 
Cross-check against **Data sources** above so indexes and sourcetypes match your ingestion.\n• `eval` defines or adjusts **weekly_km** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Enriches events using `lookup` (lookup definition + optional OUTPUT fields).\n• `eval` defines or adjusts **weekly_kg_co2** — often to normalize units, derive a ratio, or prepare for thresholds.\n• `stats` rolls up events into metrics; results are split **by site, mode** so each row reflects one combination of those dimensions (useful for per-host, per-user, or per-entity comparisons for this use case).\n• `eval` defines or adjusts **tonnes_co2** — often to normalize units, derive a ratio, or prepare for thresholds.\n• Orders rows with `sort` — combine with `head`/`tail` for top-N patterns.\n• Pipeline stage (see **Scope 3 Commuting and Hybrid Work Emissions**): table site, mode, respondents, avg_weekly_km, tonnes_co2\n\n\nStep 3 — Validate\nConfirm that events are present in the index and that the search returns expected results. Compare with known good/bad scenarios if applicable. Verify field extractions and index permissions.\n\nStep 4 — Operationalize\nAdd the search to a dashboard or set up alert actions (email, webhook, PagerDuty, etc.) as required. Document the use case in your runbook and assign an owner. Consider visualizations: Bar chart (tonnes CO2 by commute mode), Table (site and mode breakdown), Pie chart (mode share), Single value (total estimated commuting tonnes).",
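Optional data-quality check — a minimal sketch only: the `factor_source` field name and the 10% alert threshold are illustrative assumptions; the 0.12 kg CO2/km fallback comes from the `coalesce` in the documented SPL, so this search simply measures how often that fallback is being used instead of a real factor from `commute_emission_factors.csv`.

```spl
index=business sourcetype="commute_survey" earliest=-90d
| lookup commute_emission_factors.csv mode OUTPUT kg_co2_per_km
| eval factor_source=if(isnull(kg_co2_per_km), "default_0.12", "lookup")
| stats count as responses by factor_source, mode
| eventstats sum(responses) as total_responses
| eval pct_of_total=round(100*responses/total_responses, 1)
| where factor_source="default_0.12" AND pct_of_total > 10
| table mode, responses, pct_of_total
```

Run it as a quarterly report so facilities and people leaders can extend the factor lookup before the next ESG publication rather than after.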
              "script": "",
              "premium": "",
              "hw": "",
              "dma": "",
              "schema": "",
              "status": "",
              "reviewed": "2026-04-28",
              "sver": "",
              "rby": "",
              "ge": "We trace purchases and shipments across long vendor chains, spotlight which partners concentrate climate harm, and compare disclosure clues so footprint totals stay honest across buyer and seller books without counting the same tonne twice.",
              "mtype": [
                "Business",
                "Compliance"
              ],
              "ind": "Cross-industry",
              "pillar": "observability",
              "a": [
                "N/A"
              ],
              "e": [
                "db_connect"
              ],
              "em": [],
              "_qs": 35,
              "_qt": "bronze",
              "_qg": [
                "add wave for Silver tier"
              ]
            }
          ],
          "qa": 38.6,
          "qd": {
            "gold": 0,
            "silver": 1,
            "bronze": 6,
            "none": 0
          }
        }
      ],
      "i": 23,
      "n": "Business Analytics & Executive Intelligence",
      "src": "cat-23-business-analytics.md"
    }
  ],
  "CAT_META": {
    "1": {
      "icon": "monitor",
      "desc": "Linux, Windows, macOS endpoint and server monitoring — CPU, memory, disk, processes, security events, and compliance.",
      "quick": "Deploy Splunk_TA_nix or Splunk_TA_windows on forwarders to start collecting OS metrics immediately."
    },
    "2": {
      "icon": "cloudNodes",
      "desc": "VMware vSphere, Hyper-V, and KVM virtual infrastructure — host contention, VM sprawl, and capacity planning.",
      "quick": "Install Splunk Add-on for VMware and connect to vCenter to pull ESXi host and VM performance data."
    },
    "3": {
      "icon": "container",
      "desc": "Docker, Kubernetes, OpenShift container platforms — crash loops, OOM kills, resource limits, and orchestration health.",
      "quick": "Deploy Splunk Connect for Kubernetes (SCK) to ingest container logs and cluster events."
    },
    "4": {
      "icon": "globe",
      "desc": "AWS, Azure, GCP cloud infrastructure — API auditing, cost anomalies, resource drift, and security posture.",
      "quick": "Enable CloudTrail/Activity Log and use the respective Splunk TA to start collecting API audit events."
    },
    "5": {
      "icon": "networkDevices",
      "desc": "Routers, switches, firewalls, load balancers, wireless (Cisco C9800, Meraki MR, HPE Aruba), SD-WAN (Cisco, Fortinet, VeloCloud, Aruba EdgeConnect, Versa, Cato SASE), DNS/DHCP/DDI (BlueCat, Infoblox, Windows/BIND), network flow & packet analytics (NetFlow, Zeek, SPAN/TAP), network management platforms, CDN monitoring (CloudFront, Akamai, Fastly), ThousandEyes DEM, carrier signaling, gNMI streaming telemetry, and telecom CDR — MPLS/IS-IS/BFD, multicast, QoS, IPv6, NTP, topology discovery, and network assurance.",
      "quick": "Configure syslog from network devices to Splunk. Install Splunk Add-on for Cisco or vendor-specific TA."
    },
    "6": {
      "icon": "layersTriple",
      "desc": "SAN, NAS, object storage, and backup systems — capacity trends, latency, IOPS, and backup job monitoring.",
      "quick": "Install vendor TAs (NetApp, Pure Storage, etc.) and configure REST API or syslog collection."
    },
    "7": {
      "icon": "table",
      "desc": "SQL Server, Oracle, PostgreSQL, MongoDB, and data platforms — slow queries, deadlocks, replication, and connection pools.",
      "quick": "Install Splunk DB Connect or vendor TA to collect database logs and performance metrics."
    },
    "8": {
      "icon": "cog",
      "desc": "Web servers, application servers, message queues, CDNs, and DNS — HTTP errors, response times, and SSL certificates.",
      "quick": "Forward web server access/error logs and install the appropriate TA for structured field extraction."
    },
    "9": {
      "icon": "key",
      "desc": "Active Directory, Entra ID, LDAP, MFA, and PAM — authentication failures, privilege escalation, and identity governance.",
      "quick": "Enable Windows Security Event Log collection from DCs with Splunk_TA_windows for immediate AD visibility."
    },
    "10": {
      "icon": "shield",
      "desc": "Next-gen firewalls, IDS/IPS, endpoint protection, email security, web security, vulnerability management, SIEM & SOAR, and certificate/PKI — threat detection and SecOps. ESCU detections are distributed across subcategories 10.1–10.8.",
      "quick": "Forward firewall logs (syslog) and install the vendor TA (Palo Alto, Fortinet, etc.). Use `import_sse_detections.py` to import ESCU detections, then `redistribute_sse_ucs.py` to place them in the right subcategories."
    },
    "11": {
      "icon": "envelope",
      "desc": "Microsoft 365, Exchange, Teams, and collaboration platforms — mail flow, audit logging, and DLP events.",
      "quick": "Configure Splunk Add-on for Microsoft 365 with Management Activity API for audit events."
    },
    "12": {
      "icon": "nodeBranch",
      "desc": "Source control, CI/CD pipelines, artifact management, and IaC — build failures, deployment frequency, and secret exposure.",
      "quick": "Forward CI/CD logs (Jenkins, GitHub Actions) via webhook or log file monitoring to Splunk."
    },
    "13": {
      "icon": "monitorChart",
      "desc": "Splunk platform health, APM, synthetic monitoring, and log aggregation — indexer queues, search performance, and forwarder health.",
      "quick": "Use the Monitoring Console (MC) built into Splunk and supplement with _internal index searches."
    },
    "14": {
      "icon": "factory",
      "desc": "Building management, industrial control, Splunk Edge Hub, and IoT platforms — sensor data, anomaly detection, and OT security.",
      "quick": "Deploy Splunk Edge Hub with built-in sensors or configure MQTT/OPC-UA/Modbus protocol collection."
    },
    "15": {
      "icon": "buildings",
      "desc": "Power/UPS, cooling/CRAC, and environmental monitoring — battery health, thermal management, and physical security.",
      "quick": "Integrate DCIM or BMS platforms via SNMP or API to collect environmental and power data."
    },
    "16": {
      "icon": "clipboard",
      "desc": "Ticketing systems and CMDB — incident trends, SLA compliance, MTTR, and change management correlation.",
      "quick": "Use Splunk Add-on for ServiceNow or REST API integration to pull ticket and CMDB data."
    },
    "17": {
      "icon": "lock",
      "desc": "NAC (802.1X), micro-segmentation, and SASE — network access control, posture assessment, and zero trust enforcement.",
      "quick": "Collect ISE/NAC RADIUS accounting logs and install Splunk_TA_cisco-ise for structured data."
    },
    "18": {
      "icon": "nodeNetwork",
      "desc": "Cisco ACI, NSX-T, and software-defined networking — fabric health, policy compliance, and endpoint tracking.",
      "quick": "Install Splunk Add-on for Cisco ACI and connect to APIC for fault, event, and audit data."
    },
    "19": {
      "icon": "servers",
      "desc": "Cisco UCS, Nutanix, and hyper-converged infrastructure — blade health, service profiles, and hardware faults.",
      "quick": "Install vendor TA (UCS Manager, Nutanix Prism) and configure XML API or REST collection."
    },
    "20": {
      "icon": "dollarMark",
      "desc": "Cloud cost monitoring and capacity planning — spend trends, idle resources, rightsizing, and budget alerts.",
      "quick": "Ingest cloud billing data (AWS CUR, Azure Cost Management) and use Splunk for trend analysis."
    },
    "21": {
      "icon": "globe",
      "desc": "Industry-specific operational monitoring — energy, manufacturing, healthcare, transportation, oil & gas, retail, aviation, telecom, water utilities, and insurance.",
      "quick": "Combine standard infrastructure TAs with industry-specific data sources (SCADA historians, HL7 feeds, fleet telematics, POS systems) for vertical-specific observability."
    },
    "22": {
      "icon": "shield",
      "desc": "Cross-industry regulatory compliance monitoring — GDPR, NIS2, DORA, CCPA, MiFID II, ISO 27001, NIST CSF, and SOC 2. Deployable SPL for PII detection, breach notification timelines, data subject rights tracking, ICT risk management, and continuous control evidence.",
      "quick": "Map ES correlation searches and risk scores to specific regulatory articles for auditable, data-driven compliance evidence."
    },
    "23": {
      "icon": "chart",
      "desc": "Business-aligned analytics for non-technical stakeholders — customer experience, revenue & sales, marketing ROI, HR & people, supply chain, finance, customer support, executive KPIs, and ESG sustainability reporting.",
      "quick": "Use DB Connect to ingest CRM, ERP, and HRIS data alongside web logs and app telemetry for unified business intelligence in Splunk."
    }
  },
  "CAT_GROUPS": {
    "infra": [
      1,
      2,
      5,
      6,
      15,
      18,
      19
    ],
    "security": [
      9,
      10,
      17
    ],
    "cloud": [
      3,
      4,
      20
    ],
    "app": [
      7,
      8,
      11,
      12,
      13,
      14,
      16
    ],
    "industry": [
      21
    ],
    "compliance": [
      22
    ],
    "business": [
      23
    ]
  },
  "EQUIPMENT": [
    {
      "id": "linux",
      "label": "Linux / Unix servers",
      "tas": [
        "Splunk_TA_nix"
      ]
    },
    {
      "id": "windows",
      "label": "Windows servers & workstations",
      "tas": [
        "Splunk_TA_windows"
      ]
    },
    {
      "id": "macos",
      "label": "macOS",
      "tas": [
        "macOS",
        "Splunk UF for macOS"
      ]
    },
    {
      "id": "vmware",
      "label": "VMware",
      "tas": [
        "Splunk_TA_vmware",
        "TA-vmware",
        "VMware",
        "vSphere",
        "ESXi",
        "vCenter"
      ],
      "models": [
        {
          "id": "vsphere",
          "label": "vSphere",
          "tas": [
            "vSphere",
            "vsphere"
          ]
        },
        {
          "id": "esxi",
          "label": "ESXi",
          "tas": [
            "ESXi",
            "esxi"
          ]
        },
        {
          "id": "vcenter",
          "label": "vCenter",
          "tas": [
            "vCenter",
            "vcenter"
          ]
        },
        {
          "id": "ta_vmware",
          "label": "Splunk TA for VMware",
          "tas": [
            "Splunk_TA_vmware",
            "TA-vmware"
          ]
        }
      ]
    },
    {
      "id": "hyperv",
      "label": "Microsoft Hyper-V",
      "tas": [
        "Hyper-V",
        "hyperv",
        "HyperV",
        "Perfmon:HyperV"
      ]
    },
    {
      "id": "proxmox",
      "label": "Proxmox VE",
      "tas": [
        "Proxmox",
        "proxmox"
      ]
    },
    {
      "id": "ovirt",
      "label": "oVirt / Red Hat Virtualization",
      "tas": [
        "oVirt",
        "ovirt",
        "RHV",
        "rhv"
      ]
    },
    {
      "id": "openstack",
      "label": "OpenStack",
      "tas": [
        "OpenStack",
        "openstack"
      ]
    },
    {
      "id": "nutanix",
      "label": "Nutanix",
      "tas": [
        "Nutanix",
        "nutanix",
        "TA-nutanix",
        "Prism"
      ],
      "models": [
        {
          "id": "prism_central",
          "label": "Prism Central",
          "tas": [
            "Prism Central"
          ]
        },
        {
          "id": "prism_element",
          "label": "Prism Element",
          "tas": [
            "Prism Element"
          ]
        }
      ]
    },
    {
      "id": "vxrail",
      "label": "Dell VxRail",
      "tas": [
        "VxRail",
        "vxrail"
      ]
    },
    {
      "id": "aws",
      "label": "Amazon Web Services (AWS)",
      "tas": [
        "Splunk_TA_aws",
        "AWS",
        "CloudTrail",
        "CloudWatch"
      ]
    },
    {
      "id": "azure",
      "label": "Microsoft Azure",
      "tas": [
        "Splunk_TA_microsoft-cloudservices",
        "Azure",
        "Azure Monitor",
        "Azure Activity"
      ]
    },
    {
      "id": "gcp",
      "label": "Google Cloud Platform (GCP)",
      "tas": [
        "Splunk_TA_google-cloudplatform",
        "GCP",
        "Google Cloud"
      ]
    },
    {
      "id": "kubernetes",
      "label": "Kubernetes",
      "tas": [
        "Kubernetes",
        "Splunk Connect for Kubernetes",
        "SCK",
        "kube-state-metrics",
        "kubelet"
      ],
      "models": [
        {
          "id": "k8s",
          "label": "Kubernetes clusters",
          "tas": [
            "Kubernetes",
            "kube-state-metrics",
            "kubelet"
          ]
        },
        {
          "id": "openshift",
          "label": "OpenShift",
          "tas": [
            "OpenShift",
            "openshift"
          ]
        },
        {
          "id": "helm",
          "label": "Helm",
          "tas": [
            "Helm",
            "helm"
          ]
        }
      ]
    },
    {
      "id": "docker",
      "label": "Docker",
      "tas": [
        "Docker",
        "docker",
        "Splunk Connect for Docker"
      ]
    },
    {
      "id": "argocd",
      "label": "ArgoCD",
      "tas": [
        "ArgoCD",
        "argocd",
        "Argo CD"
      ]
    },
    {
      "id": "cisco",
      "label": "Cisco",
      "tas": [
        "Splunk_TA_cisco",
        "Cisco",
        "cisco-firepower",
        "cisco-asa",
        "cisco-ios",
        "cisco-ise",
        "cisco_meraki",
        "Meraki",
        "cisco:ucs",
        "cisco:aci",
        "cisco:sdwan",
        "cisco:ucm",
        "cisco:wlc",
        "Webex",
        "TA-cisco_ios",
        "Cisco Catalyst Add-on",
        "Cisco Meraki Add-on",
        "Cisco Secure Firewall",
        "ThousandEyes",
        "Cisco ThousandEyes App",
        "Cisco Intersight Add-on",
        "cisco:intersight",
        "Cisco DC Networking",
        "cisco:mds",
        "cisco:nexus",
        "cisco:ndfc"
      ],
      "models": [
        {
          "id": "firepower",
          "label": "Cisco Firepower / Secure Firewall",
          "tas": [
            "Cisco Firepower",
            "cisco-firepower",
            "cisco:firepower",
            "Cisco Secure Firewall"
          ]
        },
        {
          "id": "asa",
          "label": "Cisco ASA",
          "tas": [
            "Splunk_TA_cisco-asa",
            "Cisco ASA",
            "cisco-asa",
            "cisco:asa"
          ]
        },
        {
          "id": "ios",
          "label": "Cisco IOS / Catalyst / ISR / ASR",
          "tas": [
            "TA-cisco_ios",
            "Cisco IOS",
            "cisco-ios",
            "cisco:ios"
          ]
        },
        {
          "id": "ise",
          "label": "Cisco ISE",
          "tas": [
            "Splunk_TA_cisco-ise",
            "Cisco ISE",
            "cisco-ise",
            "cisco:ise"
          ]
        },
        {
          "id": "meraki",
          "label": "Cisco Meraki",
          "tas": [
            "Cisco Meraki Add-on",
            "Cisco Meraki",
            "Meraki",
            "cisco_meraki",
            "meraki"
          ]
        },
        {
          "id": "ucs",
          "label": "Cisco UCS",
          "tas": [
            "Splunk_TA_cisco-ucs",
            "Cisco UCS",
            "cisco:ucs",
            "UCS Manager"
          ]
        },
        {
          "id": "intersight",
          "label": "Cisco Intersight",
          "tas": [
            "Cisco Intersight Add-on",
            "cisco:intersight",
            "Intersight"
          ]
        },
        {
          "id": "aci",
          "label": "Cisco ACI",
          "tas": [
            "cisco:aci",
            "Cisco ACI",
            "ACI",
            "APIC",
            "TA_cisco-ACI"
          ]
        },
        {
          "id": "nexus",
          "label": "Cisco Nexus / NDFC / MDS",
          "tas": [
            "cisco:nexus",
            "cisco:mds",
            "cisco:ndfc",
            "Cisco DC Networking",
            "NDFC",
            "NX-OS"
          ]
        },
        {
          "id": "sdwan",
          "label": "Cisco SD-WAN / Catalyst Center",
          "tas": [
            "cisco:sdwan",
            "Cisco SD-WAN",
            "vManage",
            "Cisco Catalyst Add-on"
          ]
        },
        {
          "id": "wlc",
          "label": "Cisco WLC / Catalyst 9800",
          "tas": [
            "cisco:wlc",
            "Cisco WLC",
            "WLC"
          ]
        },
        {
          "id": "ucm",
          "label": "Cisco UCM / Unified Communications",
          "tas": [
            "cisco:ucm",
            "Cisco UCM",
            "UCM CDR",
            "CUCM"
          ]
        },
        {
          "id": "webex",
          "label": "Cisco Webex",
          "tas": [
            "Webex",
            "webex",
            "ta_cisco_webex"
          ]
        },
        {
          "id": "spaces",
          "label": "Cisco Spaces",
          "tas": [
            "Cisco Spaces",
            "cisco:spaces",
            "cisco_spaces",
            "Spaces Add-On"
          ]
        },
        {
          "id": "thousandeyes",
          "label": "Cisco ThousandEyes",
          "tas": [
            "ThousandEyes",
            "thousandeyes",
            "Cisco ThousandEyes App"
          ]
        }
      ]
    },
    {
      "id": "paloalto",
      "label": "Palo Alto Networks",
      "tas": [
        "Splunk_TA_paloalto",
        "Palo Alto",
        "GlobalProtect",
        "Prisma"
      ],
      "models": [
        {
          "id": "pan_firewall",
          "label": "Palo Alto Firewall / PAN-OS",
          "tas": [
            "Splunk_TA_paloalto",
            "Palo Alto",
            "PAN-OS",
            "paloalto"
          ]
        },
        {
          "id": "globalprotect",
          "label": "GlobalProtect",
          "tas": [
            "GlobalProtect",
            "globalprotect"
          ]
        },
        {
          "id": "prisma",
          "label": "Prisma Access",
          "tas": [
            "Prisma",
            "prisma"
          ]
        }
      ]
    },
    {
      "id": "fortinet",
      "label": "Fortinet",
      "tas": [
        "Fortinet",
        "FortiGate",
        "Splunk_TA_fortinet",
        "TA-fortinet_fortigate"
      ],
      "models": [
        {
          "id": "fortigate",
          "label": "FortiGate",
          "tas": [
            "FortiGate",
            "fortigate",
            "Splunk_TA_fortinet",
            "TA-fortinet_fortigate"
          ]
        },
        {
          "id": "fortianalyzer",
          "label": "FortiAnalyzer",
          "tas": [
            "FortiAnalyzer",
            "fortianalyzer"
          ]
        }
      ]
    },
    {
      "id": "f5",
      "label": "F5",
      "tas": [
        "Splunk_TA_f5-bigip",
        "F5",
        "BIG-IP",
        "f5-bigip"
      ],
      "models": [
        {
          "id": "bigip",
          "label": "F5 BIG-IP",
          "tas": [
            "Splunk_TA_f5-bigip",
            "BIG-IP",
            "f5-bigip",
            "bigip"
          ]
        },
        {
          "id": "asm",
          "label": "F5 ASM",
          "tas": [
            "ASM",
            "f5-bigip (ASM)"
          ]
        }
      ]
    },
    {
      "id": "citrix",
      "label": "Citrix",
      "tas": [
        "Splunk_TA_citrix-netscaler",
        "citrix",
        "NetScaler",
        "uberAgent",
        "uberAgent UXM",
        "TA-XD7-Broker",
        "TA-XD7-VDA",
        "citrix:broker",
        "citrix:vda",
        "citrix:netscaler",
        "Citrix Monitor Service",
        "citrix:pvs",
        "citrix:cloudconnector"
      ],
      "models": [
        {
          "id": "netscaler",
          "label": "Citrix NetScaler / ADC",
          "tas": [
            "Splunk_TA_citrix-netscaler",
            "citrix:netscaler",
            "NetScaler",
            "netscaler",
            "Citrix ADC",
            "NITRO API"
          ]
        },
        {
          "id": "cvad",
          "label": "Citrix Virtual Apps & Desktops",
          "tas": [
            "TA-XD7-Broker",
            "TA-XD7-VDA",
            "citrix:broker",
            "citrix:vda",
            "CVAD",
            "XenDesktop",
            "XenApp",
            "Citrix Monitor Service"
          ]
        },
        {
          "id": "uberagent",
          "label": "uberAgent (Citrix)",
          "tas": [
            "uberAgent",
            "uberAgent UXM",
            "uberAgent ESA",
            "uberAgent:Session",
            "uberAgent:Logon",
            "uberAgent:Application",
            "uberAgent:Process",
            "uberAgent:Browser",
            "uberAgent:CitrixSite",
            "uberAgent:CitrixADC",
            "uberAgent:ESA"
          ]
        }
      ]
    },
    {
      "id": "checkpoint",
      "label": "Check Point",
      "tas": [
        "Check Point",
        "checkpoint",
        "CheckPoint",
        "cp_log"
      ]
    },
    {
      "id": "nsx",
      "label": "VMware NSX",
      "tas": [
        "NSX",
        "nsx",
        "vmware_nsx_addon",
        "NSX-T"
      ]
    },
    {
      "id": "infoblox",
      "label": "Infoblox",
      "tas": [
        "Splunk_TA_infoblox",
        "Infoblox",
        "infoblox"
      ],
      "models": [
        {
          "id": "dns",
          "label": "Infoblox DNS",
          "tas": [
            "Infoblox DNS"
          ]
        },
        {
          "id": "dhcp",
          "label": "Infoblox DHCP",
          "tas": [
            "Infoblox DHCP"
          ]
        }
      ]
    },
    {
      "id": "netflow",
      "label": "NetFlow / sFlow",
      "tas": [
        "NetFlow",
        "netflow",
        "sFlow",
        "sflow"
      ],
      "models": [
        {
          "id": "netflow",
          "label": "NetFlow",
          "tas": [
            "NetFlow",
            "netflow"
          ]
        },
        {
          "id": "sflow",
          "label": "sFlow",
          "tas": [
            "sFlow",
            "sflow"
          ]
        }
      ]
    },
    {
      "id": "snmp",
      "label": "SNMP",
      "tas": [
        "SNMP",
        "snmp"
      ],
      "models": [
        {
          "id": "generic",
          "label": "SNMP (generic)",
          "tas": [
            "SNMP",
            "snmp"
          ]
        },
        {
          "id": "pdu",
          "label": "PDU / power",
          "tas": [
            "PDU",
            "PDU-MIB",
            "pdu"
          ]
        },
        {
          "id": "ups",
          "label": "UPS",
          "tas": [
            "UPS",
            "UPS-MIB",
            "ups"
          ]
        },
        {
          "id": "apc",
          "label": "APC / Schneider Electric",
          "tas": [
            "APC",
            "PowerNet-MIB",
            "InRow",
            "AirIR"
          ]
        }
      ]
    },
    {
      "id": "syslog",
      "label": "Syslog (generic)",
      "tas": [
        "Splunk_TA_syslog",
        "Syslog",
        "syslog"
      ]
    },
    {
      "id": "apache",
      "label": "Apache HTTP Server",
      "tas": [
        "Splunk_TA_apache",
        "apache"
      ],
      "models": [
        {
          "id": "httpd",
          "label": "Apache httpd",
          "tas": [
            "Splunk_TA_apache",
            "apache",
            "httpd"
          ]
        }
      ]
    },
    {
      "id": "nginx",
      "label": "NGINX",
      "tas": [
        "TA-nginx",
        "nginx",
        "NGINX"
      ],
      "models": [
        {
          "id": "open",
          "label": "NGINX Open Source",
          "tas": [
            "TA-nginx",
            "nginx",
            "NGINX"
          ]
        },
        {
          "id": "plus",
          "label": "NGINX Plus",
          "tas": [
            "NGINX Plus",
            "nginx plus"
          ]
        }
      ]
    },
    {
      "id": "iis",
      "label": "Microsoft IIS",
      "tas": [
        "IIS",
        "Microsoft IIS",
        "Splunk Add-on for Microsoft IIS"
      ]
    },
    {
      "id": "haproxy",
      "label": "HAProxy",
      "tas": [
        "HAProxy",
        "haproxy"
      ]
    },
    {
      "id": "traefik",
      "label": "Traefik",
      "tas": [
        "Traefik",
        "traefik"
      ]
    },
    {
      "id": "tomcat",
      "label": "Apache Tomcat",
      "tas": [
        "Tomcat",
        "tomcat",
        "Catalina"
      ]
    },
    {
      "id": "jboss",
      "label": "WildFly / JBoss",
      "tas": [
        "WildFly",
        "JBoss",
        "wildfly",
        "jboss"
      ]
    },
    {
      "id": "phpfpm",
      "label": "PHP-FPM",
      "tas": [
        "PHP-FPM",
        "php-fpm",
        "phpfpm"
      ]
    },
    {
      "id": "varnish",
      "label": "Varnish Cache",
      "tas": [
        "Varnish",
        "varnish"
      ]
    },
    {
      "id": "squid",
      "label": "Squid Proxy",
      "tas": [
        "Squid",
        "squid"
      ]
    },
    {
      "id": "memcached",
      "label": "Memcached",
      "tas": [
        "Memcached",
        "memcached"
      ]
    },
    {
      "id": "envoy",
      "label": "Envoy Proxy",
      "tas": [
        "Envoy",
        "envoy"
      ]
    },
    {
      "id": "db_connect",
      "label": "Splunk DB Connect",
      "tas": [
        "DB Connect",
        "splunk_app_db_connect"
      ]
    },
    {
      "id": "mssql",
      "label": "Microsoft SQL Server",
      "tas": [
        "Splunk_TA_microsoft-sqlserver",
        "microsoft-sqlserver",
        "SQL Server"
      ]
    },
    {
      "id": "oracle",
      "label": "Oracle Database",
      "tas": [
        "Splunk_TA_oracle",
        "Oracle"
      ],
      "models": [
        {
          "id": "oracle_db",
          "label": "Oracle Database",
          "tas": [
            "Oracle",
            "oracle",
            "tablespace"
          ]
        }
      ]
    },
    {
      "id": "postgresql",
      "label": "PostgreSQL",
      "tas": [
        "PostgreSQL",
        "postgresql",
        "PgBouncer",
        "pgbouncer"
      ],
      "models": [
        {
          "id": "pg",
          "label": "PostgreSQL",
          "tas": [
            "PostgreSQL",
            "postgresql"
          ]
        },
        {
          "id": "pgbouncer",
          "label": "PgBouncer",
          "tas": [
            "PgBouncer",
            "pgbouncer"
          ]
        }
      ]
    },
    {
      "id": "mysql",
      "label": "MySQL / MariaDB",
      "tas": [
        "MySQL",
        "mysql",
        "MariaDB",
        "mariadb",
        "InnoDB"
      ]
    },
    {
      "id": "mongodb",
      "label": "MongoDB",
      "tas": [
        "MongoDB",
        "mongodb",
        "mongosh"
      ],
      "models": [
        {
          "id": "mongod",
          "label": "MongoDB Server",
          "tas": [
            "MongoDB",
            "mongodb",
            "mongosh"
          ]
        },
        {
          "id": "wiredtiger",
          "label": "WiredTiger",
          "tas": [
            "WiredTiger",
            "wiredtiger"
          ]
        }
      ]
    },
    {
      "id": "redis",
      "label": "Redis",
      "tas": [
        "Redis",
        "redis"
      ]
    },
    {
      "id": "elasticsearch",
      "label": "Elasticsearch / OpenSearch",
      "tas": [
        "Elasticsearch",
        "elasticsearch",
        "ES REST API",
        "OpenSearch"
      ],
      "models": [
        {
          "id": "es",
          "label": "Elasticsearch",
          "tas": [
            "Elasticsearch",
            "elasticsearch",
            "ES REST API"
          ]
        },
        {
          "id": "opensearch",
          "label": "OpenSearch",
          "tas": [
            "OpenSearch",
            "opensearch"
          ]
        }
      ]
    },
    {
      "id": "clickhouse",
      "label": "ClickHouse",
      "tas": [
        "ClickHouse",
        "clickhouse"
      ]
    },
    {
      "id": "cassandra",
      "label": "Apache Cassandra",
      "tas": [
        "Cassandra",
        "cassandra",
        "nodetool"
      ]
    },
    {
      "id": "snowflake",
      "label": "Snowflake",
      "tas": [
        "Snowflake",
        "snowflake"
      ]
    },
    {
      "id": "kafka",
      "label": "Apache Kafka",
      "tas": [
        "TA-kafka",
        "Kafka",
        "kafka",
        "Splunk Connect for Kafka"
      ]
    },
    {
      "id": "rabbitmq",
      "label": "RabbitMQ",
      "tas": [
        "RabbitMQ",
        "rabbitmq"
      ]
    },
    {
      "id": "activemq",
      "label": "Apache ActiveMQ",
      "tas": [
        "ActiveMQ",
        "activemq"
      ]
    },
    {
      "id": "zookeeper",
      "label": "Apache ZooKeeper",
      "tas": [
        "ZooKeeper",
        "zookeeper"
      ]
    },
    {
      "id": "hashicorp",
      "label": "HashiCorp",
      "tas": [
        "Vault",
        "Consul",
        "Nomad",
        "HashiCorp",
        "Terraform"
      ],
      "models": [
        {
          "id": "vault",
          "label": "HashiCorp Vault",
          "tas": [
            "Vault"
          ]
        },
        {
          "id": "consul",
          "label": "HashiCorp Consul",
          "tas": [
            "Consul"
          ]
        },
        {
          "id": "nomad",
          "label": "HashiCorp Nomad",
          "tas": [
            "Nomad"
          ]
        },
        {
          "id": "terraform",
          "label": "Terraform",
          "tas": [
            "Terraform",
            "terraform"
          ]
        }
      ]
    },
    {
      "id": "netapp",
      "label": "NetApp",
      "tas": [
        "TA-netapp_ontap",
        "NetApp",
        "netapp",
        "ONTAP"
      ]
    },
    {
      "id": "pure_storage",
      "label": "Pure Storage",
      "tas": [
        "Pure Storage",
        "FlashArray",
        "FlashBlade"
      ]
    },
    {
      "id": "dell_emc",
      "label": "Dell EMC Storage",
      "tas": [
        "Dell EMC",
        "Isilon",
        "PowerStore",
        "Unity",
        "EqualLogic"
      ]
    },
    {
      "id": "truenas",
      "label": "TrueNAS / FreeNAS",
      "tas": [
        "TrueNAS",
        "truenas",
        "FreeNAS",
        "freenas"
      ]
    },
    {
      "id": "ceph",
      "label": "Ceph",
      "tas": [
        "Ceph",
        "ceph"
      ]
    },
    {
      "id": "veeam",
      "label": "Veeam",
      "tas": [
        "Veeam",
        "veeam"
      ]
    },
    {
      "id": "commvault",
      "label": "Commvault",
      "tas": [
        "Commvault",
        "commvault"
      ]
    },
    {
      "id": "okta",
      "label": "Okta",
      "tas": [
        "Splunk_TA_okta",
        "okta"
      ]
    },
    {
      "id": "cyberark",
      "label": "CyberArk",
      "tas": [
        "Splunk_TA_cyberark",
        "CyberArk",
        "cyberark"
      ]
    },
    {
      "id": "beyondtrust",
      "label": "BeyondTrust",
      "tas": [
        "BeyondTrust",
        "beyondtrust"
      ]
    },
    {
      "id": "m365",
      "label": "Microsoft 365 / Entra ID",
      "tas": [
        "Splunk_TA_MS_O365",
        "MS_O365",
        "Office 365",
        "Entra",
        "M365",
        "microsoft-cloudservices"
      ]
    },
    {
      "id": "exchange",
      "label": "Microsoft Exchange",
      "tas": [
        "Splunk_TA_microsoft-exchange",
        "microsoft-exchange",
        "Exchange"
      ]
    },
    {
      "id": "sharepoint",
      "label": "Microsoft SharePoint",
      "tas": [
        "SharePoint",
        "sharepoint",
        "SPOSite"
      ]
    },
    {
      "id": "security_essentials",
      "label": "Splunk Security Essentials (ESCU)",
      "tas": [
        "Security Essentials",
        "ESCU"
      ]
    },
    {
      "id": "crowdstrike",
      "label": "CrowdStrike Falcon",
      "tas": [
        "CrowdStrike",
        "crowdstrike",
        "Falcon"
      ]
    },
    {
      "id": "defender",
      "label": "Microsoft Defender",
      "tas": [
        "Microsoft Defender",
        "Defender for"
      ]
    },
    {
      "id": "tenable",
      "label": "Tenable / Nessus",
      "tas": [
        "Tenable",
        "tenable",
        "Nessus"
      ]
    },
    {
      "id": "qualys",
      "label": "Qualys",
      "tas": [
        "Qualys",
        "qualys"
      ]
    },
    {
      "id": "proofpoint",
      "label": "Proofpoint",
      "tas": [
        "Proofpoint",
        "proofpoint",
        "TA-proofpoint"
      ]
    },
    {
      "id": "suricata",
      "label": "Suricata / Snort (IDS/IPS)",
      "tas": [
        "Suricata",
        "suricata",
        "TA-suricata",
        "Snort",
        "snort"
      ]
    },
    {
      "id": "zscaler",
      "label": "Zscaler",
      "tas": [
        "Zscaler",
        "zscaler"
      ]
    },
    {
      "id": "netskope",
      "label": "Netskope",
      "tas": [
        "Netskope",
        "netskope"
      ]
    },
    {
      "id": "cloudflare",
      "label": "Cloudflare",
      "tas": [
        "Cloudflare",
        "cloudflare"
      ]
    },
    {
      "id": "guardicore",
      "label": "Akamai Guardicore",
      "tas": [
        "Guardicore",
        "guardicore"
      ]
    },
    {
      "id": "broadcom_symantec",
      "label": "Broadcom / Symantec SSE",
      "tas": [
        "Symantec",
        "symantec",
        "Broadcom",
        "bluecoat",
        "Blue Coat"
      ]
    },
    {
      "id": "forcepoint",
      "label": "Forcepoint ONE",
      "tas": [
        "Forcepoint",
        "forcepoint"
      ]
    },
    {
      "id": "sonicwall",
      "label": "SonicWall",
      "tas": [
        "SonicWall",
        "sonicwall",
        "dell:sonicwall"
      ]
    },
    {
      "id": "jenkins",
      "label": "Jenkins",
      "tas": [
        "Jenkins",
        "jenkins"
      ]
    },
    {
      "id": "github",
      "label": "GitHub",
      "tas": [
        "GitHub",
        "github"
      ]
    },
    {
      "id": "gitlab",
      "label": "GitLab",
      "tas": [
        "GitLab",
        "gitlab"
      ]
    },
    {
      "id": "ansible",
      "label": "Ansible",
      "tas": [
        "Ansible",
        "ansible"
      ]
    },
    {
      "id": "controlm",
      "label": "Control-M",
      "tas": [
        "Control-M",
        "control-m"
      ]
    },
    {
      "id": "itsi",
      "label": "Splunk ITSI",
      "tas": [
        "ITSI",
        "Splunk ITSI"
      ]
    },
    {
      "id": "stream",
      "label": "Splunk Stream",
      "tas": [
        "Splunk Stream",
        "Splunk App for Stream"
      ]
    },
    {
      "id": "opentelemetry",
      "label": "OpenTelemetry",
      "tas": [
        "OpenTelemetry",
        "OTel Collector",
        "Splunk_TA_otel",
        "otelcol"
      ]
    },
    {
      "id": "prometheus",
      "label": "Prometheus",
      "tas": [
        "Prometheus",
        "prometheus"
      ]
    },
    {
      "id": "grafana",
      "label": "Grafana",
      "tas": [
        "Grafana",
        "grafana"
      ]
    },
    {
      "id": "log_pipeline",
      "label": "Log Pipeline (Fluentd / Fluent Bit)",
      "tas": [
        "Fluentd",
        "fluentd",
        "Fluent Bit",
        "fluent bit"
      ]
    },
    {
      "id": "servicenow",
      "label": "ServiceNow",
      "tas": [
        "Splunk_TA_snow",
        "snow",
        "ServiceNow"
      ]
    },
    {
      "id": "jira",
      "label": "Atlassian Jira",
      "tas": [
        "Jira",
        "jira"
      ]
    },
    {
      "id": "pagerduty",
      "label": "PagerDuty / Opsgenie",
      "tas": [
        "PagerDuty",
        "pagerduty",
        "Opsgenie",
        "opsgenie"
      ]
    },
    {
      "id": "edge_hub",
      "label": "Splunk Edge Hub",
      "tas": [
        "Splunk Edge Hub",
        "Edge Hub"
      ]
    },
    {
      "id": "modbus",
      "label": "Modbus (TCP/RTU)",
      "tas": [
        "Modbus",
        "modbus"
      ]
    },
    {
      "id": "opcua",
      "label": "OPC-UA",
      "tas": [
        "OPC-UA",
        "opc-ua",
        "OPC UA",
        "opcua"
      ]
    },
    {
      "id": "mqtt",
      "label": "MQTT",
      "tas": [
        "MQTT",
        "mqtt",
        "Mosquitto",
        "HiveMQ"
      ]
    },
    {
      "id": "aranet",
      "label": "Aranet Sensors",
      "tas": [
        "Aranet",
        "aranet"
      ]
    },
    {
      "id": "asterisk",
      "label": "Asterisk / FreePBX",
      "tas": [
        "Asterisk",
        "asterisk",
        "FreePBX",
        "freepbx",
        "AMI"
      ]
    },
    {
      "id": "hardware_bmc",
      "label": "Hardware / BMC",
      "tas": [
        "ipmitool",
        "iDRAC",
        "iLO",
        "smartctl",
        "storcli",
        "megacli",
        "BMC",
        "edac-util",
        "ssacli",
        "dmidecode",
        "perccli",
        "hpssacli"
      ],
      "models": [
        {
          "id": "idrac",
          "label": "Dell iDRAC",
          "tas": [
            "iDRAC",
            "idrac"
          ]
        },
        {
          "id": "ilo",
          "label": "HPE iLO",
          "tas": [
            "iLO",
            "ilo"
          ]
        },
        {
          "id": "ipmi",
          "label": "IPMI (generic)",
          "tas": [
            "ipmitool",
            "IPMI",
            "ipmi"
          ]
        },
        {
          "id": "smartctl",
          "label": "Disks (SMART / smartctl)",
          "tas": [
            "smartctl"
          ]
        },
        {
          "id": "storcli",
          "label": "LSI MegaRAID (storcli)",
          "tas": [
            "storcli"
          ]
        },
        {
          "id": "megacli",
          "label": "LSI MegaRAID (megacli)",
          "tas": [
            "megacli",
            "MegaCli"
          ]
        },
        {
          "id": "perccli",
          "label": "Dell PERC (perccli)",
          "tas": [
            "perccli"
          ]
        },
        {
          "id": "ssacli",
          "label": "HPE Smart Array (ssacli)",
          "tas": [
            "ssacli",
            "hpssacli"
          ]
        },
        {
          "id": "edac",
          "label": "Memory / EDAC (edac-util)",
          "tas": [
            "edac-util",
            "edac"
          ]
        },
        {
          "id": "dmidecode",
          "label": "System info (dmidecode)",
          "tas": [
            "dmidecode"
          ]
        }
      ]
    },
    {
      "id": "apc_dc",
      "label": "APC / Schneider Electric",
      "tas": [
        "APC",
        "PowerNet-MIB",
        "InRow",
        "AirIR"
      ]
    },
    {
      "id": "cctv",
      "label": "CCTV / IP Cameras",
      "tas": [
        "NVR",
        "ONVIF",
        "Hikvision",
        "CCTV"
      ]
    }
  ],
  "implementationRoadmap": {
    "1": {
      "crawl": [
        "UC-1.1.1",
        "UC-1.1.2",
        "UC-1.1.23",
        "UC-1.1.36",
        "UC-1.1.58",
        "UC-1.1.69",
        "UC-1.1.70"
      ],
      "walk": [
        "UC-1.1.3",
        "UC-1.1.4"
      ],
      "run": [
        "UC-1.1.5"
      ],
      "unassigned": [
        "UC-1.1.6",
        "UC-1.1.7",
        "UC-1.1.8",
        "UC-1.1.9",
        "UC-1.1.10",
        "UC-1.1.11",
        "UC-1.1.12",
        "UC-1.1.13",
        "UC-1.1.14",
        "UC-1.1.15",
        "UC-1.1.16",
        "UC-1.1.17",
        "UC-1.1.18",
        "UC-1.1.19",
        "UC-1.1.20",
        "UC-1.1.21",
        "UC-1.1.22",
        "UC-1.1.24",
        "UC-1.1.25",
        "UC-1.1.26",
        "UC-1.1.27",
        "UC-1.1.28",
        "UC-1.1.29",
        "UC-1.1.30",
        "UC-1.1.31",
        "UC-1.1.32",
        "UC-1.1.33",
        "UC-1.1.34",
        "UC-1.1.35",
        "UC-1.1.37",
        "UC-1.1.38",
        "UC-1.1.39",
        "UC-1.1.40",
        "UC-1.1.41",
        "UC-1.1.42",
        "UC-1.1.43",
        "UC-1.1.44",
        "UC-1.1.45",
        "UC-1.1.46",
        "UC-1.1.47",
        "UC-1.1.48",
        "UC-1.1.49",
        "UC-1.1.50",
        "UC-1.1.51",
        "UC-1.1.52",
        "UC-1.1.53",
        "UC-1.1.54",
        "UC-1.1.55",
        "UC-1.1.56",
        "UC-1.1.57",
        "UC-1.1.59",
        "UC-1.1.60",
        "UC-1.1.61",
        "UC-1.1.62",
        "UC-1.1.63",
        "UC-1.1.64",
        "UC-1.1.65",
        "UC-1.1.66",
        "UC-1.1.67",
        "UC-1.1.68",
        "UC-1.1.71",
        "UC-1.1.72",
        "UC-1.1.73",
        "UC-1.1.74",
        "UC-1.1.75",
        "UC-1.1.76",
        "UC-1.1.77",
        "UC-1.1.78",
        "UC-1.1.79",
        "UC-1.1.80",
        "UC-1.1.81",
        "UC-1.1.82",
        "UC-1.1.83",
        "UC-1.1.84",
        "UC-1.1.85",
        "UC-1.1.86",
        "UC-1.1.87",
        "UC-1.1.88",
        "UC-1.1.89",
        "UC-1.1.90",
        "UC-1.1.91",
        "UC-1.1.92",
        "UC-1.1.93",
        "UC-1.1.94",
        "UC-1.1.95",
        "UC-1.1.96",
        "UC-1.1.97",
        "UC-1.1.98",
        "UC-1.1.99",
        "UC-1.1.100",
        "UC-1.1.101",
        "UC-1.1.102",
        "UC-1.1.103",
        "UC-1.1.104",
        "UC-1.1.105",
        "UC-1.1.106",
        "UC-1.1.107",
        "UC-1.1.108",
        "UC-1.1.109",
        "UC-1.1.110",
        "UC-1.1.111",
        "UC-1.1.112",
        "UC-1.1.113",
        "UC-1.1.114",
        "UC-1.1.115",
        "UC-1.1.116",
        "UC-1.1.117",
        "UC-1.1.118",
        "UC-1.1.119",
        "UC-1.1.120",
        "UC-1.1.121",
        "UC-1.1.122",
        "UC-1.1.123",
        "UC-1.1.124",
        "UC-1.1.125",
        "UC-1.1.126",
        "UC-1.1.127",
        "UC-1.1.128",
        "UC-1.1.129",
        "UC-1.1.130",
        "UC-1.1.131",
        "UC-1.2.1",
        "UC-1.2.2",
        "UC-1.2.3",
        "UC-1.2.4",
        "UC-1.2.5",
        "UC-1.2.6",
        "UC-1.2.7",
        "UC-1.2.8",
        "UC-1.2.9",
        "UC-1.2.10",
        "UC-1.2.11",
        "UC-1.2.12",
        "UC-1.2.13",
        "UC-1.2.15",
        "UC-1.2.16",
        "UC-1.2.17",
        "UC-1.2.19",
        "UC-1.2.20",
        "UC-1.2.21",
        "UC-1.2.22",
        "UC-1.2.23",
        "UC-1.2.24",
        "UC-1.2.25",
        "UC-1.2.26",
        "UC-1.2.27",
        "UC-1.2.28",
        "UC-1.2.29",
        "UC-1.2.30",
        "UC-1.2.31",
        "UC-1.2.32",
        "UC-1.2.33",
        "UC-1.2.34",
        "UC-1.2.35",
        "UC-1.2.36",
        "UC-1.2.37",
        "UC-1.2.38",
        "UC-1.2.39",
        "UC-1.2.40",
        "UC-1.2.41",
        "UC-1.2.42",
        "UC-1.2.43",
        "UC-1.2.44",
        "UC-1.2.45",
        "UC-1.2.46",
        "UC-1.2.47",
        "UC-1.2.48",
        "UC-1.2.49",
        "UC-1.2.50",
        "UC-1.2.51",
        "UC-1.2.52",
        "UC-1.2.53",
        "UC-1.2.54",
        "UC-1.2.55",
        "UC-1.2.56",
        "UC-1.2.57",
        "UC-1.2.58",
        "UC-1.2.59",
        "UC-1.2.60",
        "UC-1.2.61",
        "UC-1.2.62",
        "UC-1.2.63",
        "UC-1.2.64",
        "UC-1.2.65",
        "UC-1.2.66",
        "UC-1.2.67",
        "UC-1.2.68",
        "UC-1.2.69",
        "UC-1.2.70",
        "UC-1.2.71",
        "UC-1.2.72",
        "UC-1.2.73",
        "UC-1.2.76",
        "UC-1.2.77",
        "UC-1.2.78",
        "UC-1.2.79",
        "UC-1.2.81",
        "UC-1.2.82",
        "UC-1.2.83",
        "UC-1.2.84",
        "UC-1.2.86",
        "UC-1.2.87",
        "UC-1.2.88",
        "UC-1.2.89",
        "UC-1.2.90",
        "UC-1.2.91",
        "UC-1.2.92",
        "UC-1.2.93",
        "UC-1.2.94",
        "UC-1.2.95",
        "UC-1.2.96",
        "UC-1.2.97",
        "UC-1.2.98",
        "UC-1.2.100",
        "UC-1.2.101",
        "UC-1.2.102",
        "UC-1.2.103",
        "UC-1.2.104",
        "UC-1.2.105",
        "UC-1.2.106",
        "UC-1.2.107",
        "UC-1.2.108",
        "UC-1.2.109",
        "UC-1.2.110",
        "UC-1.2.111",
        "UC-1.2.112",
        "UC-1.2.113",
        "UC-1.2.114",
        "UC-1.2.115",
        "UC-1.2.116",
        "UC-1.2.117",
        "UC-1.2.118",
        "UC-1.2.119",
        "UC-1.2.120",
        "UC-1.2.121",
        "UC-1.2.122",
        "UC-1.2.123",
        "UC-1.2.124",
        "UC-1.2.125",
        "UC-1.2.126",
        "UC-1.2.127",
        "UC-1.2.128",
        "UC-1.2.129",
        "UC-1.2.130",
        "UC-1.2.131",
        "UC-1.2.132",
        "UC-1.2.133",
        "UC-1.2.134",
        "UC-1.3.1",
        "UC-1.3.2",
        "UC-1.3.3",
        "UC-1.3.4",
        "UC-1.3.5",
        "UC-1.3.6",
        "UC-1.4.1",
        "UC-1.4.2",
        "UC-1.4.3",
        "UC-1.4.4",
        "UC-1.4.5",
        "UC-1.4.6",
        "UC-1.4.7",
        "UC-1.4.8",
        "UC-1.4.9",
        "UC-1.4.10",
        "UC-1.4.11"
      ]
    },
    "2": {
      "crawl": [
        "UC-2.1.3",
        "UC-2.1.7",
        "UC-2.1.10",
        "UC-2.1.11",
        "UC-2.2.3"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-2.1.1",
        "UC-2.1.2",
        "UC-2.1.4",
        "UC-2.1.5",
        "UC-2.1.6",
        "UC-2.1.8",
        "UC-2.1.9",
        "UC-2.1.12",
        "UC-2.1.13",
        "UC-2.1.14",
        "UC-2.1.15",
        "UC-2.1.16",
        "UC-2.1.17",
        "UC-2.1.18",
        "UC-2.1.19",
        "UC-2.1.20",
        "UC-2.1.21",
        "UC-2.1.22",
        "UC-2.1.23",
        "UC-2.1.24",
        "UC-2.1.25",
        "UC-2.1.26",
        "UC-2.1.27",
        "UC-2.1.28",
        "UC-2.1.29",
        "UC-2.1.30",
        "UC-2.1.31",
        "UC-2.1.32",
        "UC-2.1.33",
        "UC-2.1.34",
        "UC-2.1.35",
        "UC-2.1.36",
        "UC-2.1.37",
        "UC-2.1.38",
        "UC-2.1.39",
        "UC-2.1.40",
        "UC-2.1.41",
        "UC-2.1.42",
        "UC-2.1.43",
        "UC-2.1.44",
        "UC-2.1.45",
        "UC-2.1.46",
        "UC-2.1.47",
        "UC-2.1.48",
        "UC-2.2.1",
        "UC-2.2.2",
        "UC-2.2.4",
        "UC-2.2.5",
        "UC-2.2.6",
        "UC-2.2.7",
        "UC-2.2.8",
        "UC-2.2.9",
        "UC-2.2.10",
        "UC-2.2.11",
        "UC-2.2.12",
        "UC-2.2.13",
        "UC-2.2.14",
        "UC-2.2.15",
        "UC-2.3.1",
        "UC-2.3.2",
        "UC-2.3.3",
        "UC-2.3.4",
        "UC-2.3.5",
        "UC-2.3.6",
        "UC-2.3.7",
        "UC-2.3.8",
        "UC-2.3.9",
        "UC-2.3.10",
        "UC-2.3.11",
        "UC-2.3.12",
        "UC-2.3.13",
        "UC-2.3.14",
        "UC-2.3.15",
        "UC-2.3.16",
        "UC-2.3.17",
        "UC-2.4.1",
        "UC-2.4.2",
        "UC-2.4.3",
        "UC-2.4.4",
        "UC-2.4.5",
        "UC-2.4.6",
        "UC-2.4.7",
        "UC-2.5.1",
        "UC-2.5.2",
        "UC-2.5.3",
        "UC-2.5.4",
        "UC-2.5.5",
        "UC-2.5.6",
        "UC-2.5.7",
        "UC-2.5.8",
        "UC-2.5.9",
        "UC-2.5.10",
        "UC-2.6.1",
        "UC-2.6.2",
        "UC-2.6.3",
        "UC-2.6.4",
        "UC-2.6.5",
        "UC-2.6.6",
        "UC-2.6.7",
        "UC-2.6.8",
        "UC-2.6.9",
        "UC-2.6.10",
        "UC-2.6.11",
        "UC-2.6.12",
        "UC-2.6.13",
        "UC-2.6.14",
        "UC-2.6.15",
        "UC-2.6.16",
        "UC-2.6.17",
        "UC-2.6.18",
        "UC-2.6.19",
        "UC-2.6.20",
        "UC-2.6.21",
        "UC-2.6.22",
        "UC-2.6.23",
        "UC-2.6.24",
        "UC-2.6.25",
        "UC-2.6.26",
        "UC-2.6.27",
        "UC-2.6.28",
        "UC-2.6.29",
        "UC-2.6.30",
        "UC-2.6.31",
        "UC-2.6.32",
        "UC-2.6.33",
        "UC-2.6.34",
        "UC-2.6.35",
        "UC-2.6.36",
        "UC-2.6.37",
        "UC-2.6.38",
        "UC-2.6.39",
        "UC-2.6.40",
        "UC-2.6.41",
        "UC-2.6.42",
        "UC-2.6.43",
        "UC-2.6.44",
        "UC-2.6.45",
        "UC-2.6.46",
        "UC-2.6.47",
        "UC-2.6.48",
        "UC-2.6.49",
        "UC-2.6.50",
        "UC-2.6.51",
        "UC-2.6.52",
        "UC-2.6.53",
        "UC-2.6.54",
        "UC-2.6.55",
        "UC-2.6.56",
        "UC-2.6.57",
        "UC-2.6.58",
        "UC-2.6.59",
        "UC-2.6.60",
        "UC-2.6.61",
        "UC-2.6.62",
        "UC-2.6.63",
        "UC-2.6.64",
        "UC-2.6.65",
        "UC-2.6.66",
        "UC-2.6.67",
        "UC-2.6.68",
        "UC-2.6.69",
        "UC-2.6.70",
        "UC-2.6.71",
        "UC-2.6.72",
        "UC-2.6.73",
        "UC-2.6.74",
        "UC-2.6.75",
        "UC-2.6.76",
        "UC-2.6.77",
        "UC-2.6.78",
        "UC-2.6.79"
      ]
    },
    "3": {
      "crawl": [
        "UC-3.1.1",
        "UC-3.1.2",
        "UC-3.2.1",
        "UC-3.2.7",
        "UC-3.2.8"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-3.1.3",
        "UC-3.1.4",
        "UC-3.1.5",
        "UC-3.1.6",
        "UC-3.1.7",
        "UC-3.1.8",
        "UC-3.1.9",
        "UC-3.1.10",
        "UC-3.1.11",
        "UC-3.1.12",
        "UC-3.1.13",
        "UC-3.1.14",
        "UC-3.1.15",
        "UC-3.1.16",
        "UC-3.1.17",
        "UC-3.1.18",
        "UC-3.1.19",
        "UC-3.1.20",
        "UC-3.1.21",
        "UC-3.1.22",
        "UC-3.1.23",
        "UC-3.1.24",
        "UC-3.1.25",
        "UC-3.1.26",
        "UC-3.1.27",
        "UC-3.1.28",
        "UC-3.1.29",
        "UC-3.2.2",
        "UC-3.2.3",
        "UC-3.2.4",
        "UC-3.2.5",
        "UC-3.2.6",
        "UC-3.2.9",
        "UC-3.2.10",
        "UC-3.2.11",
        "UC-3.2.12",
        "UC-3.2.13",
        "UC-3.2.14",
        "UC-3.2.15",
        "UC-3.2.16",
        "UC-3.2.17",
        "UC-3.2.18",
        "UC-3.2.19",
        "UC-3.2.20",
        "UC-3.2.21",
        "UC-3.2.22",
        "UC-3.2.23",
        "UC-3.2.24",
        "UC-3.2.25",
        "UC-3.2.26",
        "UC-3.2.27",
        "UC-3.2.28",
        "UC-3.2.29",
        "UC-3.2.30",
        "UC-3.2.31",
        "UC-3.2.32",
        "UC-3.2.33",
        "UC-3.2.34",
        "UC-3.2.35",
        "UC-3.2.36",
        "UC-3.2.37",
        "UC-3.2.38",
        "UC-3.2.39",
        "UC-3.2.40",
        "UC-3.2.41",
        "UC-3.2.42",
        "UC-3.2.43",
        "UC-3.2.44",
        "UC-3.2.45",
        "UC-3.2.46",
        "UC-3.3.1",
        "UC-3.3.2",
        "UC-3.3.3",
        "UC-3.3.4",
        "UC-3.3.5",
        "UC-3.3.6",
        "UC-3.3.7",
        "UC-3.3.8",
        "UC-3.3.9",
        "UC-3.3.10",
        "UC-3.3.11",
        "UC-3.3.12",
        "UC-3.3.13",
        "UC-3.3.14",
        "UC-3.3.15",
        "UC-3.3.16",
        "UC-3.3.17",
        "UC-3.3.18",
        "UC-3.3.19",
        "UC-3.3.20",
        "UC-3.3.21",
        "UC-3.3.22",
        "UC-3.3.23",
        "UC-3.3.24",
        "UC-3.3.25",
        "UC-3.4.1",
        "UC-3.4.2",
        "UC-3.4.3",
        "UC-3.4.4",
        "UC-3.4.5",
        "UC-3.4.6",
        "UC-3.4.7",
        "UC-3.4.8",
        "UC-3.4.9",
        "UC-3.5.1",
        "UC-3.5.2",
        "UC-3.5.3",
        "UC-3.5.7",
        "UC-3.5.8",
        "UC-3.5.9",
        "UC-3.5.10",
        "UC-3.5.11",
        "UC-3.5.12",
        "UC-3.5.13",
        "UC-3.5.14",
        "UC-3.5.15",
        "UC-3.5.16",
        "UC-3.5.17",
        "UC-3.6.1",
        "UC-3.6.2",
        "UC-3.6.3",
        "UC-3.6.4",
        "UC-3.6.5",
        "UC-3.6.6"
      ]
    },
    "4": {
      "crawl": [
        "UC-4.1.2",
        "UC-4.1.7",
        "UC-4.1.8",
        "UC-4.2.9",
        "UC-4.3.2"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-4.1.1",
        "UC-4.1.3",
        "UC-4.1.4",
        "UC-4.1.5",
        "UC-4.1.6",
        "UC-4.1.9",
        "UC-4.1.10",
        "UC-4.1.11",
        "UC-4.1.12",
        "UC-4.1.13",
        "UC-4.1.14",
        "UC-4.1.15",
        "UC-4.1.16",
        "UC-4.1.17",
        "UC-4.1.18",
        "UC-4.1.19",
        "UC-4.1.20",
        "UC-4.1.21",
        "UC-4.1.22",
        "UC-4.1.23",
        "UC-4.1.24",
        "UC-4.1.25",
        "UC-4.1.26",
        "UC-4.1.27",
        "UC-4.1.28",
        "UC-4.1.29",
        "UC-4.1.30",
        "UC-4.1.31",
        "UC-4.1.32",
        "UC-4.1.33",
        "UC-4.1.34",
        "UC-4.1.35",
        "UC-4.1.36",
        "UC-4.1.37",
        "UC-4.1.38",
        "UC-4.1.39",
        "UC-4.1.40",
        "UC-4.1.41",
        "UC-4.1.42",
        "UC-4.1.43",
        "UC-4.1.44",
        "UC-4.1.45",
        "UC-4.1.46",
        "UC-4.1.47",
        "UC-4.1.48",
        "UC-4.1.49",
        "UC-4.1.50",
        "UC-4.1.51",
        "UC-4.1.52",
        "UC-4.1.53",
        "UC-4.1.54",
        "UC-4.1.55",
        "UC-4.1.56",
        "UC-4.1.57",
        "UC-4.1.58",
        "UC-4.1.59",
        "UC-4.1.60",
        "UC-4.1.61",
        "UC-4.1.62",
        "UC-4.1.63",
        "UC-4.1.64",
        "UC-4.1.65",
        "UC-4.1.66",
        "UC-4.1.67",
        "UC-4.1.68",
        "UC-4.1.69",
        "UC-4.1.70",
        "UC-4.1.71",
        "UC-4.1.72",
        "UC-4.1.73",
        "UC-4.1.74",
        "UC-4.1.75",
        "UC-4.1.76",
        "UC-4.1.77",
        "UC-4.2.1",
        "UC-4.2.2",
        "UC-4.2.3",
        "UC-4.2.4",
        "UC-4.2.5",
        "UC-4.2.6",
        "UC-4.2.7",
        "UC-4.2.8",
        "UC-4.2.10",
        "UC-4.2.11",
        "UC-4.2.12",
        "UC-4.2.13",
        "UC-4.2.14",
        "UC-4.2.15",
        "UC-4.2.16",
        "UC-4.2.17",
        "UC-4.2.18",
        "UC-4.2.19",
        "UC-4.2.20",
        "UC-4.2.21",
        "UC-4.2.22",
        "UC-4.2.23",
        "UC-4.2.24",
        "UC-4.2.25",
        "UC-4.2.26",
        "UC-4.2.27",
        "UC-4.2.28",
        "UC-4.2.29",
        "UC-4.2.30",
        "UC-4.2.31",
        "UC-4.2.32",
        "UC-4.2.33",
        "UC-4.2.34",
        "UC-4.2.35",
        "UC-4.2.36",
        "UC-4.2.37",
        "UC-4.2.38",
        "UC-4.2.39",
        "UC-4.2.40",
        "UC-4.2.41",
        "UC-4.2.42",
        "UC-4.2.43",
        "UC-4.2.44",
        "UC-4.2.45",
        "UC-4.2.46",
        "UC-4.2.47",
        "UC-4.2.48",
        "UC-4.2.49",
        "UC-4.2.50",
        "UC-4.2.51",
        "UC-4.2.52",
        "UC-4.2.53",
        "UC-4.2.54",
        "UC-4.2.55",
        "UC-4.2.56",
        "UC-4.2.57",
        "UC-4.3.1",
        "UC-4.3.3",
        "UC-4.3.4",
        "UC-4.3.5",
        "UC-4.3.6",
        "UC-4.3.7",
        "UC-4.3.8",
        "UC-4.3.9",
        "UC-4.3.10",
        "UC-4.3.11",
        "UC-4.3.12",
        "UC-4.3.13",
        "UC-4.3.14",
        "UC-4.3.15",
        "UC-4.3.16",
        "UC-4.3.17",
        "UC-4.3.18",
        "UC-4.3.19",
        "UC-4.3.20",
        "UC-4.3.21",
        "UC-4.3.22",
        "UC-4.3.23",
        "UC-4.3.24",
        "UC-4.3.25",
        "UC-4.3.26",
        "UC-4.3.27",
        "UC-4.3.28",
        "UC-4.3.29",
        "UC-4.3.30",
        "UC-4.3.31",
        "UC-4.3.32",
        "UC-4.3.33",
        "UC-4.3.34",
        "UC-4.3.35",
        "UC-4.3.36",
        "UC-4.3.37",
        "UC-4.3.38",
        "UC-4.3.39",
        "UC-4.3.40",
        "UC-4.4.1",
        "UC-4.4.2",
        "UC-4.4.3",
        "UC-4.4.4",
        "UC-4.4.5",
        "UC-4.4.6",
        "UC-4.4.7",
        "UC-4.4.8",
        "UC-4.4.9",
        "UC-4.4.10",
        "UC-4.4.11",
        "UC-4.4.12",
        "UC-4.4.13",
        "UC-4.4.14",
        "UC-4.4.15",
        "UC-4.4.16",
        "UC-4.4.17",
        "UC-4.4.18",
        "UC-4.4.19",
        "UC-4.4.20",
        "UC-4.4.21",
        "UC-4.4.22",
        "UC-4.4.23",
        "UC-4.4.24",
        "UC-4.4.25",
        "UC-4.4.26",
        "UC-4.4.27",
        "UC-4.4.28",
        "UC-4.4.29",
        "UC-4.4.30",
        "UC-4.4.31",
        "UC-4.4.32",
        "UC-4.5.1",
        "UC-4.5.2",
        "UC-4.5.3",
        "UC-4.5.4",
        "UC-4.5.5",
        "UC-4.5.6",
        "UC-4.5.7",
        "UC-4.5.8",
        "UC-4.5.9",
        "UC-4.5.10",
        "UC-4.5.11",
        "UC-4.5.12",
        "UC-4.5.13",
        "UC-4.5.14",
        "UC-4.5.15",
        "UC-4.6.1",
        "UC-4.6.2",
        "UC-4.6.3",
        "UC-4.6.4",
        "UC-4.6.5",
        "UC-4.6.6"
      ]
    },
    "5": {
      "crawl": [
        "UC-5.1.1",
        "UC-5.1.4",
        "UC-5.1.5",
        "UC-5.1.11",
        "UC-5.2.2"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-5.1.2",
        "UC-5.1.3",
        "UC-5.1.6",
        "UC-5.1.7",
        "UC-5.1.8",
        "UC-5.1.9",
        "UC-5.1.10",
        "UC-5.1.12",
        "UC-5.1.13",
        "UC-5.1.14",
        "UC-5.1.15",
        "UC-5.1.16",
        "UC-5.1.17",
        "UC-5.1.18",
        "UC-5.1.19",
        "UC-5.1.20",
        "UC-5.1.21",
        "UC-5.1.22",
        "UC-5.1.23",
        "UC-5.1.24",
        "UC-5.1.25",
        "UC-5.1.26",
        "UC-5.1.27",
        "UC-5.1.28",
        "UC-5.1.29",
        "UC-5.1.30",
        "UC-5.1.31",
        "UC-5.1.32",
        "UC-5.1.33",
        "UC-5.1.34",
        "UC-5.1.35",
        "UC-5.1.36",
        "UC-5.1.37",
        "UC-5.1.38",
        "UC-5.1.39",
        "UC-5.1.40",
        "UC-5.1.41",
        "UC-5.1.42",
        "UC-5.1.43",
        "UC-5.1.44",
        "UC-5.1.45",
        "UC-5.1.46",
        "UC-5.1.47",
        "UC-5.1.48",
        "UC-5.1.49",
        "UC-5.1.50",
        "UC-5.1.51",
        "UC-5.1.52",
        "UC-5.1.53",
        "UC-5.1.54",
        "UC-5.1.55",
        "UC-5.1.56",
        "UC-5.1.57",
        "UC-5.1.58",
        "UC-5.1.59",
        "UC-5.1.60",
        "UC-5.1.61",
        "UC-5.1.62",
        "UC-5.1.63",
        "UC-5.1.64",
        "UC-5.1.65",
        "UC-5.1.66",
        "UC-5.1.67",
        "UC-5.1.68",
        "UC-5.1.69",
        "UC-5.1.70",
        "UC-5.1.71",
        "UC-5.1.72",
        "UC-5.1.73",
        "UC-5.1.74",
        "UC-5.1.75",
        "UC-5.2.1",
        "UC-5.2.3",
        "UC-5.2.4",
        "UC-5.2.5",
        "UC-5.2.6",
        "UC-5.2.7",
        "UC-5.2.8",
        "UC-5.2.9",
        "UC-5.2.10",
        "UC-5.2.11",
        "UC-5.2.12",
        "UC-5.2.13",
        "UC-5.2.14",
        "UC-5.2.15",
        "UC-5.2.16",
        "UC-5.2.17",
        "UC-5.2.18",
        "UC-5.2.19",
        "UC-5.2.20",
        "UC-5.2.21",
        "UC-5.2.22",
        "UC-5.2.23",
        "UC-5.2.24",
        "UC-5.2.25",
        "UC-5.2.26",
        "UC-5.2.27",
        "UC-5.2.28",
        "UC-5.2.29",
        "UC-5.2.30",
        "UC-5.2.31",
        "UC-5.2.32",
        "UC-5.2.33",
        "UC-5.2.34",
        "UC-5.2.35",
        "UC-5.2.36",
        "UC-5.2.37",
        "UC-5.2.38",
        "UC-5.2.39",
        "UC-5.2.40",
        "UC-5.2.41",
        "UC-5.2.42",
        "UC-5.2.43",
        "UC-5.2.44",
        "UC-5.2.45",
        "UC-5.2.46",
        "UC-5.2.47",
        "UC-5.2.48",
        "UC-5.2.49",
        "UC-5.2.50",
        "UC-5.2.51",
        "UC-5.2.52",
        "UC-5.2.53",
        "UC-5.2.54",
        "UC-5.3.1",
        "UC-5.3.2",
        "UC-5.3.3",
        "UC-5.3.4",
        "UC-5.3.5",
        "UC-5.3.6",
        "UC-5.3.7",
        "UC-5.3.8",
        "UC-5.3.9",
        "UC-5.3.10",
        "UC-5.3.11",
        "UC-5.3.12",
        "UC-5.3.13",
        "UC-5.3.14",
        "UC-5.3.15",
        "UC-5.3.16",
        "UC-5.3.17",
        "UC-5.3.18",
        "UC-5.3.19",
        "UC-5.3.20",
        "UC-5.3.21",
        "UC-5.3.22",
        "UC-5.4.1",
        "UC-5.4.2",
        "UC-5.4.3",
        "UC-5.4.4",
        "UC-5.4.5",
        "UC-5.4.6",
        "UC-5.4.7",
        "UC-5.4.8",
        "UC-5.4.9",
        "UC-5.4.10",
        "UC-5.4.11",
        "UC-5.4.12",
        "UC-5.4.13",
        "UC-5.4.14",
        "UC-5.4.15",
        "UC-5.4.16",
        "UC-5.4.17",
        "UC-5.4.18",
        "UC-5.4.19",
        "UC-5.4.20",
        "UC-5.4.21",
        "UC-5.4.22",
        "UC-5.4.23",
        "UC-5.4.24",
        "UC-5.4.25",
        "UC-5.4.26",
        "UC-5.4.27",
        "UC-5.4.28",
        "UC-5.4.29",
        "UC-5.4.30",
        "UC-5.4.31",
        "UC-5.4.32",
        "UC-5.4.33",
        "UC-5.4.34",
        "UC-5.4.35",
        "UC-5.4.36",
        "UC-5.4.37",
        "UC-5.4.38",
        "UC-5.4.39",
        "UC-5.4.40",
        "UC-5.5.1",
        "UC-5.5.2",
        "UC-5.5.3",
        "UC-5.5.4",
        "UC-5.5.5",
        "UC-5.5.6",
        "UC-5.5.7",
        "UC-5.5.8",
        "UC-5.5.9",
        "UC-5.5.10",
        "UC-5.5.11",
        "UC-5.5.12",
        "UC-5.5.13",
        "UC-5.5.14",
        "UC-5.5.15",
        "UC-5.5.16",
        "UC-5.5.17",
        "UC-5.5.18",
        "UC-5.5.19",
        "UC-5.5.20",
        "UC-5.5.21",
        "UC-5.5.22",
        "UC-5.5.23",
        "UC-5.5.24",
        "UC-5.5.25",
        "UC-5.6.1",
        "UC-5.6.2",
        "UC-5.6.3",
        "UC-5.6.4",
        "UC-5.6.5",
        "UC-5.6.6",
        "UC-5.6.7",
        "UC-5.6.8",
        "UC-5.6.9",
        "UC-5.6.10",
        "UC-5.6.11",
        "UC-5.6.12",
        "UC-5.6.13",
        "UC-5.6.14",
        "UC-5.6.15",
        "UC-5.6.16",
        "UC-5.6.17",
        "UC-5.6.18",
        "UC-5.6.19",
        "UC-5.7.1",
        "UC-5.7.2",
        "UC-5.7.3",
        "UC-5.7.4",
        "UC-5.7.5",
        "UC-5.7.6",
        "UC-5.7.7",
        "UC-5.7.8",
        "UC-5.7.9",
        "UC-5.7.10",
        "UC-5.7.11",
        "UC-5.7.12",
        "UC-5.8.1",
        "UC-5.8.2",
        "UC-5.8.3",
        "UC-5.8.4",
        "UC-5.8.5",
        "UC-5.8.7",
        "UC-5.8.8",
        "UC-5.8.9",
        "UC-5.8.10",
        "UC-5.8.11",
        "UC-5.8.12",
        "UC-5.8.13",
        "UC-5.8.14",
        "UC-5.8.15",
        "UC-5.8.16",
        "UC-5.8.17",
        "UC-5.8.18",
        "UC-5.8.19",
        "UC-5.8.20",
        "UC-5.8.21",
        "UC-5.8.22",
        "UC-5.8.23",
        "UC-5.8.24",
        "UC-5.8.25",
        "UC-5.8.26",
        "UC-5.8.27",
        "UC-5.9.1",
        "UC-5.9.2",
        "UC-5.9.3",
        "UC-5.9.4",
        "UC-5.9.5",
        "UC-5.9.6",
        "UC-5.9.7",
        "UC-5.9.8",
        "UC-5.9.9",
        "UC-5.9.10",
        "UC-5.9.11",
        "UC-5.9.12",
        "UC-5.9.13",
        "UC-5.9.14",
        "UC-5.9.15",
        "UC-5.9.16",
        "UC-5.9.17",
        "UC-5.9.18",
        "UC-5.9.19",
        "UC-5.9.20",
        "UC-5.9.21",
        "UC-5.9.22",
        "UC-5.9.23",
        "UC-5.9.24",
        "UC-5.9.25",
        "UC-5.9.26",
        "UC-5.9.27",
        "UC-5.9.28",
        "UC-5.9.29",
        "UC-5.9.30",
        "UC-5.9.31",
        "UC-5.9.32",
        "UC-5.9.33",
        "UC-5.9.34",
        "UC-5.9.35",
        "UC-5.9.36",
        "UC-5.9.37",
        "UC-5.9.38",
        "UC-5.9.39",
        "UC-5.9.40",
        "UC-5.9.41",
        "UC-5.9.42",
        "UC-5.9.43",
        "UC-5.9.44",
        "UC-5.9.45",
        "UC-5.9.46",
        "UC-5.9.47",
        "UC-5.9.48",
        "UC-5.9.49",
        "UC-5.9.50",
        "UC-5.9.51",
        "UC-5.9.52",
        "UC-5.9.53",
        "UC-5.9.54",
        "UC-5.10.1",
        "UC-5.10.2",
        "UC-5.10.3",
        "UC-5.10.4",
        "UC-5.10.5",
        "UC-5.10.6",
        "UC-5.11.1",
        "UC-5.11.2",
        "UC-5.11.3",
        "UC-5.11.4",
        "UC-5.11.5",
        "UC-5.11.6",
        "UC-5.11.7",
        "UC-5.11.8",
        "UC-5.11.9",
        "UC-5.11.10",
        "UC-5.11.11",
        "UC-5.3.23",
        "UC-5.3.24",
        "UC-5.3.25",
        "UC-5.3.26",
        "UC-5.3.27",
        "UC-5.3.28",
        "UC-5.3.29",
        "UC-5.3.30",
        "UC-5.3.31",
        "UC-5.3.32",
        "UC-5.3.33",
        "UC-5.3.34",
        "UC-5.3.35",
        "UC-5.3.36",
        "UC-5.3.37",
        "UC-5.3.38",
        "UC-5.3.39",
        "UC-5.3.40",
        "UC-5.3.41",
        "UC-5.3.42",
        "UC-5.12.1",
        "UC-5.12.2",
        "UC-5.12.3",
        "UC-5.12.4",
        "UC-5.12.5",
        "UC-5.12.6",
        "UC-5.12.7",
        "UC-5.12.8",
        "UC-5.12.9",
        "UC-5.12.10"
      ]
    },
    "6": {
      "crawl": [
        "UC-6.1.1",
        "UC-6.1.2",
        "UC-6.1.4",
        "UC-6.1.6",
        "UC-6.2.3"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-6.1.3",
        "UC-6.1.5",
        "UC-6.1.7",
        "UC-6.1.8",
        "UC-6.1.9",
        "UC-6.1.10",
        "UC-6.1.11",
        "UC-6.1.12",
        "UC-6.1.13",
        "UC-6.1.14",
        "UC-6.1.15",
        "UC-6.1.16",
        "UC-6.1.17",
        "UC-6.1.18",
        "UC-6.1.19",
        "UC-6.1.20",
        "UC-6.1.21",
        "UC-6.1.22",
        "UC-6.1.23",
        "UC-6.1.24",
        "UC-6.1.25",
        "UC-6.1.26",
        "UC-6.1.27",
        "UC-6.1.28",
        "UC-6.1.29",
        "UC-6.1.30",
        "UC-6.1.31",
        "UC-6.1.32",
        "UC-6.2.1",
        "UC-6.2.2",
        "UC-6.2.4",
        "UC-6.2.5",
        "UC-6.2.6",
        "UC-6.2.7",
        "UC-6.2.8",
        "UC-6.2.9",
        "UC-6.2.10",
        "UC-6.2.11",
        "UC-6.2.12",
        "UC-6.3.1",
        "UC-6.3.2",
        "UC-6.3.3",
        "UC-6.3.4",
        "UC-6.3.5",
        "UC-6.3.6",
        "UC-6.3.7",
        "UC-6.3.8",
        "UC-6.3.9",
        "UC-6.3.10",
        "UC-6.3.11",
        "UC-6.3.12",
        "UC-6.3.13",
        "UC-6.3.14",
        "UC-6.3.15",
        "UC-6.3.16",
        "UC-6.3.17",
        "UC-6.3.18",
        "UC-6.3.19",
        "UC-6.3.20",
        "UC-6.3.21",
        "UC-6.3.22",
        "UC-6.3.23",
        "UC-6.3.24",
        "UC-6.4.1",
        "UC-6.4.2",
        "UC-6.4.3",
        "UC-6.4.4",
        "UC-6.4.5",
        "UC-6.4.6",
        "UC-6.4.12",
        "UC-6.4.13",
        "UC-6.4.14",
        "UC-6.4.15",
        "UC-6.4.16",
        "UC-6.4.17",
        "UC-6.4.18"
      ]
    },
    "7": {
      "crawl": [
        "UC-7.1.2",
        "UC-7.1.3",
        "UC-7.1.12",
        "UC-7.1.15",
        "UC-7.2.1"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-7.1.1",
        "UC-7.1.4",
        "UC-7.1.5",
        "UC-7.1.6",
        "UC-7.1.7",
        "UC-7.1.8",
        "UC-7.1.9",
        "UC-7.1.10",
        "UC-7.1.11",
        "UC-7.1.13",
        "UC-7.1.14",
        "UC-7.2.2",
        "UC-7.2.3",
        "UC-7.2.4",
        "UC-7.2.5",
        "UC-7.2.6",
        "UC-7.2.7",
        "UC-7.2.8",
        "UC-7.2.9",
        "UC-7.2.10",
        "UC-7.2.11",
        "UC-7.2.12",
        "UC-7.2.13",
        "UC-7.2.14",
        "UC-7.2.15",
        "UC-7.2.16",
        "UC-7.2.17",
        "UC-7.2.18",
        "UC-7.2.19",
        "UC-7.2.20",
        "UC-7.2.21",
        "UC-7.2.22",
        "UC-7.2.23",
        "UC-7.3.1",
        "UC-7.3.2",
        "UC-7.3.3",
        "UC-7.3.4",
        "UC-7.3.5",
        "UC-7.3.6",
        "UC-7.3.7",
        "UC-7.3.8",
        "UC-7.3.9",
        "UC-7.3.10",
        "UC-7.3.11",
        "UC-7.3.12",
        "UC-7.3.13",
        "UC-7.3.14",
        "UC-7.3.15",
        "UC-7.3.16",
        "UC-7.3.17",
        "UC-7.1.16",
        "UC-7.1.17",
        "UC-7.1.18",
        "UC-7.1.19",
        "UC-7.1.20",
        "UC-7.1.21",
        "UC-7.1.22",
        "UC-7.1.23",
        "UC-7.1.24",
        "UC-7.1.25",
        "UC-7.1.26",
        "UC-7.1.27",
        "UC-7.1.28",
        "UC-7.1.29",
        "UC-7.1.30",
        "UC-7.1.31",
        "UC-7.1.32",
        "UC-7.1.33",
        "UC-7.1.34",
        "UC-7.1.35",
        "UC-7.1.36",
        "UC-7.1.37",
        "UC-7.1.38",
        "UC-7.1.39",
        "UC-7.1.40",
        "UC-7.4.1",
        "UC-7.4.2",
        "UC-7.4.3",
        "UC-7.4.4",
        "UC-7.4.5",
        "UC-7.4.6",
        "UC-7.4.7",
        "UC-7.4.8",
        "UC-7.4.9",
        "UC-7.4.10",
        "UC-7.4.11",
        "UC-7.4.12",
        "UC-7.4.13",
        "UC-7.4.14",
        "UC-7.4.15",
        "UC-7.4.16",
        "UC-7.5.1",
        "UC-7.5.2",
        "UC-7.5.3",
        "UC-7.5.4",
        "UC-7.5.5",
        "UC-7.5.6",
        "UC-7.5.7",
        "UC-7.5.8",
        "UC-7.5.9",
        "UC-7.5.10",
        "UC-7.5.11",
        "UC-7.5.12",
        "UC-7.5.13",
        "UC-7.5.14",
        "UC-7.5.15",
        "UC-7.5.16",
        "UC-7.5.17",
        "UC-7.5.18",
        "UC-7.5.19",
        "UC-7.5.20",
        "UC-7.5.21",
        "UC-7.6.1",
        "UC-7.6.2",
        "UC-7.6.3",
        "UC-7.6.4",
        "UC-7.6.5"
      ]
    },
    "8": {
      "crawl": [
        "UC-8.1.1",
        "UC-8.1.5",
        "UC-8.2.1",
        "UC-8.3.1",
        "UC-8.3.3"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-8.1.2",
        "UC-8.1.3",
        "UC-8.1.4",
        "UC-8.1.6",
        "UC-8.1.7",
        "UC-8.1.8",
        "UC-8.1.9",
        "UC-8.1.10",
        "UC-8.1.11",
        "UC-8.1.12",
        "UC-8.1.13",
        "UC-8.1.14",
        "UC-8.1.15",
        "UC-8.1.16",
        "UC-8.1.17",
        "UC-8.1.18",
        "UC-8.2.2",
        "UC-8.2.3",
        "UC-8.2.4",
        "UC-8.2.5",
        "UC-8.2.6",
        "UC-8.2.7",
        "UC-8.2.8",
        "UC-8.2.9",
        "UC-8.2.10",
        "UC-8.2.11",
        "UC-8.2.12",
        "UC-8.2.13",
        "UC-8.2.14",
        "UC-8.2.15",
        "UC-8.2.16",
        "UC-8.2.17",
        "UC-8.2.18",
        "UC-8.2.19",
        "UC-8.2.20",
        "UC-8.2.21",
        "UC-8.2.22",
        "UC-8.2.23",
        "UC-8.3.2",
        "UC-8.3.4",
        "UC-8.3.5",
        "UC-8.3.6",
        "UC-8.3.7",
        "UC-8.3.8",
        "UC-8.3.9",
        "UC-8.3.10",
        "UC-8.3.11",
        "UC-8.3.12",
        "UC-8.3.13",
        "UC-8.3.14",
        "UC-8.3.15",
        "UC-8.3.16",
        "UC-8.3.17",
        "UC-8.3.18",
        "UC-8.3.19",
        "UC-8.3.20",
        "UC-8.3.21",
        "UC-8.4.1",
        "UC-8.4.2",
        "UC-8.4.3",
        "UC-8.4.4",
        "UC-8.4.5",
        "UC-8.4.6",
        "UC-8.4.7",
        "UC-8.4.8",
        "UC-8.4.9",
        "UC-8.4.10",
        "UC-8.4.11",
        "UC-8.4.12",
        "UC-8.4.13",
        "UC-8.4.14",
        "UC-8.4.15",
        "UC-8.4.16",
        "UC-8.5.1",
        "UC-8.5.2",
        "UC-8.5.3",
        "UC-8.5.4",
        "UC-8.5.5",
        "UC-8.5.6",
        "UC-8.5.7",
        "UC-8.5.8",
        "UC-8.5.9",
        "UC-8.5.10",
        "UC-8.5.11",
        "UC-8.5.12",
        "UC-8.6.1",
        "UC-8.6.2",
        "UC-8.6.10",
        "UC-8.6.11",
        "UC-8.6.12",
        "UC-8.6.13",
        "UC-8.6.14",
        "UC-8.6.16",
        "UC-8.6.17",
        "UC-8.6.18",
        "UC-8.6.19",
        "UC-8.7.1",
        "UC-8.7.2",
        "UC-8.7.3",
        "UC-8.7.4",
        "UC-8.7.5"
      ]
    },
    "9": {
      "crawl": [
        "UC-9.1.3",
        "UC-9.1.5",
        "UC-9.1.7",
        "UC-9.2.3",
        "UC-9.3.5"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-9.1.1",
        "UC-9.1.2",
        "UC-9.1.4",
        "UC-9.1.6",
        "UC-9.1.8",
        "UC-9.1.9",
        "UC-9.1.10",
        "UC-9.1.11",
        "UC-9.1.12",
        "UC-9.1.13",
        "UC-9.1.14",
        "UC-9.1.15",
        "UC-9.1.16",
        "UC-9.1.17",
        "UC-9.1.18",
        "UC-9.1.19",
        "UC-9.1.20",
        "UC-9.1.21",
        "UC-9.1.22",
        "UC-9.1.23",
        "UC-9.1.24",
        "UC-9.1.25",
        "UC-9.1.26",
        "UC-9.1.27",
        "UC-9.1.28",
        "UC-9.2.1",
        "UC-9.2.2",
        "UC-9.2.4",
        "UC-9.2.5",
        "UC-9.2.6",
        "UC-9.2.7",
        "UC-9.2.8",
        "UC-9.2.9",
        "UC-9.2.10",
        "UC-9.2.11",
        "UC-9.2.12",
        "UC-9.3.1",
        "UC-9.3.2",
        "UC-9.3.3",
        "UC-9.3.4",
        "UC-9.3.6",
        "UC-9.3.7",
        "UC-9.3.8",
        "UC-9.3.9",
        "UC-9.3.10",
        "UC-9.3.11",
        "UC-9.3.12",
        "UC-9.3.13",
        "UC-9.3.14",
        "UC-9.3.15",
        "UC-9.3.16",
        "UC-9.4.1",
        "UC-9.4.2",
        "UC-9.4.3",
        "UC-9.4.4",
        "UC-9.4.5",
        "UC-9.4.6",
        "UC-9.4.7",
        "UC-9.4.8",
        "UC-9.4.9",
        "UC-9.4.10",
        "UC-9.4.11",
        "UC-9.4.12",
        "UC-9.4.13",
        "UC-9.4.14",
        "UC-9.4.15",
        "UC-9.4.16",
        "UC-9.4.17",
        "UC-9.4.18",
        "UC-9.4.19",
        "UC-9.4.20",
        "UC-9.5.1",
        "UC-9.5.2",
        "UC-9.5.3",
        "UC-9.5.4",
        "UC-9.5.5",
        "UC-9.5.6",
        "UC-9.5.7",
        "UC-9.5.8",
        "UC-9.5.9",
        "UC-9.5.10",
        "UC-9.5.11",
        "UC-9.5.12",
        "UC-9.5.13",
        "UC-9.5.14",
        "UC-9.5.15",
        "UC-9.6.1",
        "UC-9.6.2",
        "UC-9.6.3",
        "UC-9.6.4",
        "UC-9.6.5",
        "UC-9.6.6",
        "UC-9.7.1",
        "UC-9.7.2",
        "UC-9.7.3",
        "UC-9.7.4",
        "UC-9.7.5",
        "UC-9.7.6",
        "UC-9.7.7"
      ]
    },
    "10": {
      "crawl": [
        "UC-10.1.2",
        "UC-10.1.4",
        "UC-10.3.5",
        "UC-10.4.2",
        "UC-10.4.3",
        "UC-10.7.1"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-10.1.1",
        "UC-10.1.3",
        "UC-10.1.5",
        "UC-10.1.6",
        "UC-10.1.7",
        "UC-10.1.8",
        "UC-10.1.9",
        "UC-10.1.10",
        "UC-10.1.11",
        "UC-10.1.12",
        "UC-10.1.13",
        "UC-10.1.14",
        "UC-10.1.15",
        "UC-10.1.16",
        "UC-10.1.17",
        "UC-10.1.18",
        "UC-10.1.19",
        "UC-10.1.20",
        "UC-10.1.21",
        "UC-10.1.22",
        "UC-10.1.23",
        "UC-10.1.24",
        "UC-10.1.25",
        "UC-10.1.26",
        "UC-10.1.27",
        "UC-10.1.28",
        "UC-10.1.29",
        "UC-10.1.30",
        "UC-10.1.31",
        "UC-10.1.32",
        "UC-10.1.33",
        "UC-10.1.34",
        "UC-10.1.35",
        "UC-10.1.36",
        "UC-10.1.37",
        "UC-10.1.38",
        "UC-10.1.39",
        "UC-10.1.40",
        "UC-10.1.41",
        "UC-10.1.42",
        "UC-10.1.43",
        "UC-10.1.44",
        "UC-10.1.45",
        "UC-10.1.46",
        "UC-10.1.47",
        "UC-10.1.48",
        "UC-10.1.49",
        "UC-10.1.50",
        "UC-10.1.51",
        "UC-10.1.52",
        "UC-10.1.53",
        "UC-10.1.54",
        "UC-10.1.55",
        "UC-10.1.56",
        "UC-10.1.57",
        "UC-10.1.58",
        "UC-10.1.59",
        "UC-10.1.60",
        "UC-10.1.61",
        "UC-10.1.62",
        "UC-10.1.63",
        "UC-10.1.64",
        "UC-10.1.65",
        "UC-10.1.66",
        "UC-10.1.67",
        "UC-10.1.68",
        "UC-10.1.69",
        "UC-10.1.70",
        "UC-10.1.71",
        "UC-10.1.72",
        "UC-10.1.73",
        "UC-10.1.74",
        "UC-10.1.75",
        "UC-10.1.76",
        "UC-10.1.77",
        "UC-10.2.1",
        "UC-10.2.2",
        "UC-10.2.3",
        "UC-10.2.4",
        "UC-10.2.5",
        "UC-10.2.6",
        "UC-10.2.7",
        "UC-10.2.8",
        "UC-10.2.9",
        "UC-10.2.10",
        "UC-10.2.11",
        "UC-10.2.12",
        "UC-10.2.13",
        "UC-10.2.14",
        "UC-10.2.15",
        "UC-10.2.16",
        "UC-10.2.17",
        "UC-10.2.18",
        "UC-10.2.19",
        "UC-10.2.20",
        "UC-10.2.21",
        "UC-10.2.22",
        "UC-10.2.23",
        "UC-10.2.24",
        "UC-10.2.25",
        "UC-10.2.26",
        "UC-10.2.27",
        "UC-10.2.28",
        "UC-10.2.29",
        "UC-10.2.30",
        "UC-10.2.31",
        "UC-10.2.32",
        "UC-10.2.33",
        "UC-10.2.34",
        "UC-10.2.35",
        "UC-10.2.36",
        "UC-10.2.37",
        "UC-10.2.38",
        "UC-10.2.39",
        "UC-10.2.40",
        "UC-10.2.41",
        "UC-10.2.42",
        "UC-10.2.43",
        "UC-10.2.44",
        "UC-10.2.45",
        "UC-10.2.46",
        "UC-10.2.47",
        "UC-10.2.48",
        "UC-10.2.49",
        "UC-10.2.50",
        "UC-10.2.51",
        "UC-10.2.52",
        "UC-10.2.53",
        "UC-10.2.54",
        "UC-10.2.55",
        "UC-10.2.56",
        "UC-10.2.57",
        "UC-10.2.58",
        "UC-10.2.59",
        "UC-10.2.60",
        "UC-10.2.61",
        "UC-10.2.62",
        "UC-10.2.63",
        "UC-10.2.64",
        "UC-10.2.65",
        "UC-10.2.66",
        "UC-10.2.67",
        "UC-10.2.68",
        "UC-10.2.69",
        "UC-10.2.70",
        "UC-10.2.71",
        "UC-10.2.72",
        "UC-10.2.73",
        "UC-10.2.74",
        "UC-10.2.75",
        "UC-10.2.76",
        "UC-10.2.77",
        "UC-10.2.78",
        "UC-10.2.79",
        "UC-10.2.80",
        "UC-10.2.81",
        "UC-10.2.82",
        "UC-10.2.83",
        "UC-10.2.84",
        "UC-10.2.85",
        "UC-10.2.86",
        "UC-10.2.87",
        "UC-10.2.88",
        "UC-10.2.89",
        "UC-10.2.90",
        "UC-10.2.91",
        "UC-10.2.92",
        "UC-10.2.93",
        "UC-10.2.94",
        "UC-10.2.95",
        "UC-10.2.96",
        "UC-10.2.97",
        "UC-10.2.98",
        "UC-10.2.99",
        "UC-10.2.100",
        "UC-10.2.101",
        "UC-10.2.102",
        "UC-10.2.103",
        "UC-10.2.104",
        "UC-10.2.105",
        "UC-10.2.106",
        "UC-10.2.107",
        "UC-10.2.108",
        "UC-10.2.109",
        "UC-10.2.110",
        "UC-10.2.111",
        "UC-10.2.112",
        "UC-10.2.113",
        "UC-10.2.114",
        "UC-10.2.115",
        "UC-10.2.116",
        "UC-10.2.117",
        "UC-10.2.118",
        "UC-10.2.119",
        "UC-10.2.120",
        "UC-10.2.121",
        "UC-10.2.122",
        "UC-10.2.123",
        "UC-10.2.124",
        "UC-10.2.125",
        "UC-10.2.126",
        "UC-10.2.127",
        "UC-10.2.128",
        "UC-10.2.129",
        "UC-10.2.130",
        "UC-10.2.131",
        "UC-10.2.132",
        "UC-10.2.133",
        "UC-10.2.134",
        "UC-10.2.135",
        "UC-10.2.136",
        "UC-10.2.137",
        "UC-10.2.138",
        "UC-10.2.139",
        "UC-10.2.140",
        "UC-10.2.141",
        "UC-10.2.142",
        "UC-10.2.143",
        "UC-10.2.144",
        "UC-10.2.145",
        "UC-10.2.146",
        "UC-10.2.147",
        "UC-10.2.148",
        "UC-10.2.149",
        "UC-10.2.150",
        "UC-10.2.151",
        "UC-10.2.152",
        "UC-10.2.153",
        "UC-10.2.154",
        "UC-10.2.155",
        "UC-10.2.156",
        "UC-10.2.157",
        "UC-10.2.158",
        "UC-10.2.159",
        "UC-10.2.160",
        "UC-10.2.161",
        "UC-10.2.162",
        "UC-10.2.163",
        "UC-10.2.164",
        "UC-10.2.165",
        "UC-10.2.166",
        "UC-10.2.167",
        "UC-10.2.168",
        "UC-10.3.1",
        "UC-10.3.2",
        "UC-10.3.3",
        "UC-10.3.4",
        "UC-10.3.6",
        "UC-10.3.7",
        "UC-10.3.8",
        "UC-10.3.9",
        "UC-10.3.10",
        "UC-10.3.11",
        "UC-10.3.12",
        "UC-10.3.13",
        "UC-10.3.14",
        "UC-10.3.15",
        "UC-10.3.16",
        "UC-10.3.17",
        "UC-10.3.18",
        "UC-10.3.19",
        "UC-10.3.20",
        "UC-10.3.21",
        "UC-10.3.22",
        "UC-10.3.23",
        "UC-10.3.24",
        "UC-10.3.25",
        "UC-10.3.26",
        "UC-10.3.27",
        "UC-10.3.28",
        "UC-10.3.29",
        "UC-10.3.30",
        "UC-10.3.31",
        "UC-10.3.32",
        "UC-10.3.33",
        "UC-10.3.34",
        "UC-10.3.35",
        "UC-10.3.36",
        "UC-10.3.37",
        "UC-10.3.38",
        "UC-10.3.39",
        "UC-10.3.40",
        "UC-10.3.41",
        "UC-10.3.42",
        "UC-10.3.43",
        "UC-10.3.44",
        "UC-10.3.45",
        "UC-10.3.46",
        "UC-10.3.47",
        "UC-10.3.48",
        "UC-10.3.49",
        "UC-10.3.50",
        "UC-10.3.51",
        "UC-10.3.52",
        "UC-10.3.53",
        "UC-10.3.54",
        "UC-10.3.55",
        "UC-10.3.56",
        "UC-10.3.57",
        "UC-10.3.58",
        "UC-10.3.59",
        "UC-10.3.60",
        "UC-10.3.61",
        "UC-10.3.62",
        "UC-10.3.63",
        "UC-10.3.64",
        "UC-10.3.65",
        "UC-10.3.66",
        "UC-10.3.67",
        "UC-10.3.68",
        "UC-10.3.69",
        "UC-10.3.70",
        "UC-10.3.71",
        "UC-10.3.72",
        "UC-10.3.73",
        "UC-10.3.74",
        "UC-10.3.75",
        "UC-10.3.76",
        "UC-10.3.77",
        "UC-10.3.78",
        "UC-10.3.79",
        "UC-10.3.80",
        "UC-10.3.81",
        "UC-10.3.82",
        "UC-10.3.83",
        "UC-10.3.84",
        "UC-10.3.85",
        "UC-10.3.86",
        "UC-10.3.87",
        "UC-10.3.88",
        "UC-10.3.89",
        "UC-10.3.90",
        "UC-10.3.91",
        "UC-10.3.92",
        "UC-10.3.93",
        "UC-10.3.94",
        "UC-10.3.95",
        "UC-10.3.96",
        "UC-10.3.97",
        "UC-10.3.98",
        "UC-10.3.99",
        "UC-10.3.100",
        "UC-10.3.101",
        "UC-10.3.102",
        "UC-10.3.103",
        "UC-10.3.104",
        "UC-10.3.105",
        "UC-10.3.106",
        "UC-10.3.107",
        "UC-10.3.108",
        "UC-10.3.109",
        "UC-10.3.110",
        "UC-10.3.111",
        "UC-10.3.112",
        "UC-10.3.113",
        "UC-10.3.114",
        "UC-10.3.115",
        "UC-10.3.116",
        "UC-10.3.117",
        "UC-10.3.118",
        "UC-10.3.119",
        "UC-10.3.120",
        "UC-10.3.121",
        "UC-10.3.122",
        "UC-10.3.123",
        "UC-10.3.124",
        "UC-10.3.125",
        "UC-10.3.126",
        "UC-10.3.127",
        "UC-10.3.128",
        "UC-10.3.129",
        "UC-10.3.130",
        "UC-10.3.131",
        "UC-10.3.132",
        "UC-10.3.133",
        "UC-10.3.134",
        "UC-10.3.135",
        "UC-10.3.136",
        "UC-10.3.137",
        "UC-10.3.138",
        "UC-10.3.139",
        "UC-10.3.140",
        "UC-10.3.141",
        "UC-10.3.142",
        "UC-10.3.143",
        "UC-10.3.144",
        "UC-10.3.145",
        "UC-10.3.146",
        "UC-10.3.147",
        "UC-10.3.148",
        "UC-10.3.149",
        "UC-10.3.150",
        "UC-10.3.151",
        "UC-10.3.152",
        "UC-10.3.153",
        "UC-10.3.154",
        "UC-10.3.155",
        "UC-10.3.156",
        "UC-10.3.157",
        "UC-10.3.158",
        "UC-10.3.159",
        "UC-10.3.160",
        "UC-10.3.161",
        "UC-10.3.162",
        "UC-10.3.163",
        "UC-10.3.164",
        "UC-10.3.165",
        "UC-10.3.166",
        "UC-10.3.167",
        "UC-10.3.168",
        "UC-10.3.169",
        "UC-10.3.170",
        "UC-10.3.171",
        "UC-10.3.172",
        "UC-10.3.173",
        "UC-10.3.174",
        "UC-10.3.175",
        "UC-10.3.176",
        "UC-10.3.177",
        "UC-10.3.178",
        "UC-10.3.179",
        "UC-10.3.180",
        "UC-10.3.181",
        "UC-10.3.182",
        "UC-10.3.183",
        "UC-10.3.184",
        "UC-10.3.185",
        "UC-10.3.186",
        "UC-10.3.187",
        "UC-10.3.188",
        "UC-10.3.189",
        "UC-10.3.190",
        "UC-10.3.191",
        "UC-10.3.192",
        "UC-10.3.193",
        "UC-10.3.194",
        "UC-10.3.195",
        "UC-10.3.196",
        "UC-10.3.197",
        "UC-10.3.198",
        "UC-10.3.199",
        "UC-10.3.200",
        "UC-10.3.201",
        "UC-10.3.202",
        "UC-10.3.203",
        "UC-10.3.204",
        "UC-10.3.205",
        "UC-10.3.206",
        "UC-10.3.207",
        "UC-10.3.208",
        "UC-10.3.209",
        "UC-10.3.210",
        "UC-10.3.211",
        "UC-10.3.212",
        "UC-10.3.213",
        "UC-10.3.214",
        "UC-10.3.215",
        "UC-10.3.216",
        "UC-10.3.217",
        "UC-10.3.218",
        "UC-10.3.219",
        "UC-10.3.220",
        "UC-10.3.221",
        "UC-10.3.222",
        "UC-10.3.223",
        "UC-10.3.224",
        "UC-10.3.225",
        "UC-10.3.226",
        "UC-10.3.227",
        "UC-10.3.228",
        "UC-10.3.229",
        "UC-10.3.230",
        "UC-10.3.231",
        "UC-10.3.232",
        "UC-10.3.233",
        "UC-10.3.234",
        "UC-10.3.235",
        "UC-10.3.236",
        "UC-10.3.237",
        "UC-10.3.238",
        "UC-10.3.239",
        "UC-10.3.240",
        "UC-10.3.241",
        "UC-10.3.242",
        "UC-10.3.243",
        "UC-10.3.244",
        "UC-10.3.245",
        "UC-10.3.246",
        "UC-10.3.247",
        "UC-10.3.248",
        "UC-10.3.249",
        "UC-10.3.250",
        "UC-10.3.251",
        "UC-10.3.252",
        "UC-10.3.253",
        "UC-10.3.254",
        "UC-10.3.255",
        "UC-10.3.256",
        "UC-10.3.257",
        "UC-10.3.258",
        "UC-10.3.259",
        "UC-10.3.260",
        "UC-10.3.261",
        "UC-10.3.262",
        "UC-10.3.263",
        "UC-10.3.264",
        "UC-10.3.265",
        "UC-10.3.266",
        "UC-10.3.267",
        "UC-10.3.268",
        "UC-10.3.269",
        "UC-10.3.270",
        "UC-10.3.271",
        "UC-10.3.272",
        "UC-10.3.273",
        "UC-10.3.274",
        "UC-10.3.275",
        "UC-10.3.276",
        "UC-10.3.277",
        "UC-10.3.278",
        "UC-10.3.279",
        "UC-10.3.280",
        "UC-10.3.281",
        "UC-10.3.282",
        "UC-10.3.283",
        "UC-10.3.284",
        "UC-10.3.285",
        "UC-10.3.286",
        "UC-10.3.287",
        "UC-10.3.288",
        "UC-10.3.289",
        "UC-10.3.290",
        "UC-10.3.291",
        "UC-10.3.292",
        "UC-10.3.293",
        "UC-10.3.294",
        "UC-10.3.295",
        "UC-10.3.296",
        "UC-10.3.297",
        "UC-10.3.298",
        "UC-10.3.299",
        "UC-10.3.300",
        "UC-10.3.301",
        "UC-10.3.302",
        "UC-10.3.303",
        "UC-10.3.304",
        "UC-10.3.305",
        "UC-10.3.306",
        "UC-10.3.307",
        "UC-10.3.308",
        "UC-10.3.309",
        "UC-10.3.310",
        "UC-10.3.311",
        "UC-10.3.312",
        "UC-10.3.313",
        "UC-10.3.314",
        "UC-10.3.315",
        "UC-10.3.316",
        "UC-10.3.317",
        "UC-10.3.318",
        "UC-10.3.319",
        "UC-10.3.320",
        "UC-10.3.321",
        "UC-10.3.322",
        "UC-10.3.323",
        "UC-10.3.324",
        "UC-10.3.325",
        "UC-10.3.326",
        "UC-10.3.327",
        "UC-10.3.328",
        "UC-10.3.329",
        "UC-10.3.330",
        "UC-10.3.331",
        "UC-10.3.332",
        "UC-10.3.333",
        "UC-10.3.334",
        "UC-10.3.335",
        "UC-10.3.336",
        "UC-10.3.337",
        "UC-10.3.338",
        "UC-10.3.339",
        "UC-10.3.340",
        "UC-10.3.341",
        "UC-10.3.342",
        "UC-10.3.343",
        "UC-10.3.344",
        "UC-10.3.345",
        "UC-10.3.346",
        "UC-10.3.347",
        "UC-10.3.348",
        "UC-10.3.349",
        "UC-10.3.350",
        "UC-10.3.351",
        "UC-10.3.352",
        "UC-10.3.353",
        "UC-10.3.354",
        "UC-10.3.355",
        "UC-10.3.356",
        "UC-10.3.357",
        "UC-10.3.358",
        "UC-10.3.359",
        "UC-10.3.360",
        "UC-10.3.361",
        "UC-10.3.362",
        "UC-10.3.363",
        "UC-10.3.364",
        "UC-10.3.365",
        "UC-10.3.366",
        "UC-10.3.367",
        "UC-10.3.368",
        "UC-10.3.369",
        "UC-10.3.370",
        "UC-10.3.371",
        "UC-10.3.372",
        "UC-10.3.373",
        "UC-10.3.374",
        "UC-10.3.375",
        "UC-10.3.376",
        "UC-10.3.377",
        "UC-10.3.378",
        "UC-10.3.379",
        "UC-10.3.380",
        "UC-10.3.381",
        "UC-10.3.382",
        "UC-10.3.383",
        "UC-10.3.384",
        "UC-10.3.385",
        "UC-10.3.386",
        "UC-10.3.387",
        "UC-10.3.388",
        "UC-10.3.389",
        "UC-10.3.390",
        "UC-10.3.391",
        "UC-10.3.392",
        "UC-10.3.393",
        "UC-10.3.394",
        "UC-10.3.395",
        "UC-10.3.396",
        "UC-10.3.397",
        "UC-10.3.398",
        "UC-10.3.399",
        "UC-10.3.400",
        "UC-10.3.401",
        "UC-10.3.402",
        "UC-10.3.403",
        "UC-10.3.404",
        "UC-10.3.405",
        "UC-10.3.406",
        "UC-10.3.407",
        "UC-10.3.408",
        "UC-10.3.409",
        "UC-10.3.410",
        "UC-10.3.411",
        "UC-10.3.412",
        "UC-10.3.413",
        "UC-10.3.414",
        "UC-10.3.415",
        "UC-10.3.416",
        "UC-10.3.417",
        "UC-10.3.418",
        "UC-10.3.419",
        "UC-10.3.420",
        "UC-10.3.421",
        "UC-10.3.422",
        "UC-10.3.423",
        "UC-10.3.424",
        "UC-10.3.425",
        "UC-10.3.426",
        "UC-10.3.427",
        "UC-10.3.428",
        "UC-10.3.429",
        "UC-10.3.430",
        "UC-10.3.431",
        "UC-10.3.432",
        "UC-10.3.433",
        "UC-10.3.434",
        "UC-10.3.435",
        "UC-10.3.436",
        "UC-10.3.437",
        "UC-10.3.438",
        "UC-10.3.439",
        "UC-10.3.440",
        "UC-10.3.441",
        "UC-10.3.442",
        "UC-10.3.443",
        "UC-10.3.444",
        "UC-10.3.445",
        "UC-10.3.446",
        "UC-10.3.447",
        "UC-10.3.448",
        "UC-10.3.449",
        "UC-10.3.450",
        "UC-10.3.451",
        "UC-10.3.452",
        "UC-10.3.453",
        "UC-10.3.454",
        "UC-10.3.455",
        "UC-10.3.456",
        "UC-10.3.457",
        "UC-10.3.458",
        "UC-10.3.459",
        "UC-10.3.460",
        "UC-10.3.461",
        "UC-10.3.462",
        "UC-10.3.463",
        "UC-10.3.464",
        "UC-10.3.465",
        "UC-10.3.466",
        "UC-10.3.467",
        "UC-10.3.468",
        "UC-10.3.469",
        "UC-10.3.470",
        "UC-10.3.471",
        "UC-10.3.472",
        "UC-10.3.473",
        "UC-10.3.474",
        "UC-10.3.475",
        "UC-10.3.476",
        "UC-10.3.477",
        "UC-10.3.478",
        "UC-10.3.479",
        "UC-10.3.480",
        "UC-10.3.481",
        "UC-10.3.482",
        "UC-10.3.483",
        "UC-10.3.484",
        "UC-10.3.485",
        "UC-10.3.486",
        "UC-10.3.487",
        "UC-10.3.488",
        "UC-10.3.489",
        "UC-10.3.490",
        "UC-10.3.491",
        "UC-10.3.492",
        "UC-10.3.493",
        "UC-10.3.494",
        "UC-10.3.495",
        "UC-10.3.496",
        "UC-10.3.497",
        "UC-10.3.498",
        "UC-10.3.499",
        "UC-10.3.500",
        "UC-10.3.501",
        "UC-10.3.502",
        "UC-10.3.503",
        "UC-10.3.504",
        "UC-10.3.505",
        "UC-10.3.506",
        "UC-10.3.507",
        "UC-10.3.508",
        "UC-10.3.509",
        "UC-10.3.510",
        "UC-10.3.511",
        "UC-10.3.512",
        "UC-10.3.513",
        "UC-10.3.514",
        "UC-10.3.515",
        "UC-10.3.516",
        "UC-10.3.517",
        "UC-10.3.518",
        "UC-10.3.519",
        "UC-10.3.520",
        "UC-10.3.521",
        "UC-10.3.522",
        "UC-10.3.523",
        "UC-10.3.524",
        "UC-10.3.525",
        "UC-10.3.526",
        "UC-10.3.527",
        "UC-10.3.528",
        "UC-10.3.529",
        "UC-10.3.530",
        "UC-10.3.531",
        "UC-10.3.532",
        "UC-10.3.533",
        "UC-10.3.534",
        "UC-10.3.535",
        "UC-10.3.536",
        "UC-10.3.537",
        "UC-10.3.538",
        "UC-10.3.539",
        "UC-10.3.540",
        "UC-10.3.541",
        "UC-10.3.542",
        "UC-10.3.543",
        "UC-10.3.544",
        "UC-10.3.545",
        "UC-10.3.546",
        "UC-10.3.547",
        "UC-10.3.548",
        "UC-10.3.549",
        "UC-10.3.550",
        "UC-10.3.551",
        "UC-10.3.552",
        "UC-10.3.553",
        "UC-10.3.554",
        "UC-10.3.555",
        "UC-10.3.556",
        "UC-10.3.557",
        "UC-10.3.558",
        "UC-10.3.559",
        "UC-10.3.560",
        "UC-10.3.561",
        "UC-10.3.562",
        "UC-10.3.563",
        "UC-10.3.564",
        "UC-10.3.565",
        "UC-10.3.566",
        "UC-10.3.567",
        "UC-10.3.568",
        "UC-10.3.569",
        "UC-10.3.570",
        "UC-10.3.571",
        "UC-10.3.572",
        "UC-10.3.573",
        "UC-10.3.574",
        "UC-10.3.575",
        "UC-10.3.576",
        "UC-10.3.577",
        "UC-10.3.578",
        "UC-10.3.579",
        "UC-10.3.580",
        "UC-10.3.581",
        "UC-10.3.582",
        "UC-10.3.583",
        "UC-10.3.584",
        "UC-10.3.585",
        "UC-10.3.586",
        "UC-10.3.587",
        "UC-10.3.588",
        "UC-10.3.589",
        "UC-10.3.590",
        "UC-10.3.591",
        "UC-10.3.592",
        "UC-10.3.593",
        "UC-10.3.594",
        "UC-10.3.595",
        "UC-10.3.596",
        "UC-10.3.597",
        "UC-10.3.598",
        "UC-10.3.599",
        "UC-10.3.600",
        "UC-10.3.601",
        "UC-10.3.602",
        "UC-10.3.603",
        "UC-10.3.604",
        "UC-10.3.605",
        "UC-10.3.606",
        "UC-10.3.607",
        "UC-10.3.608",
        "UC-10.3.609",
        "UC-10.3.610",
        "UC-10.3.611",
        "UC-10.3.612",
        "UC-10.3.613",
        "UC-10.3.614",
        "UC-10.3.615",
        "UC-10.3.616",
        "UC-10.3.617",
        "UC-10.3.618",
        "UC-10.3.619",
        "UC-10.3.620",
        "UC-10.3.621",
        "UC-10.3.622",
        "UC-10.3.623",
        "UC-10.3.624",
        "UC-10.3.625",
        "UC-10.3.626",
        "UC-10.3.627",
        "UC-10.3.628",
        "UC-10.3.629",
        "UC-10.3.630",
        "UC-10.3.631",
        "UC-10.3.632",
        "UC-10.3.633",
        "UC-10.3.634",
        "UC-10.3.635",
        "UC-10.3.636",
        "UC-10.3.637",
        "UC-10.3.638",
        "UC-10.3.639",
        "UC-10.3.640",
        "UC-10.3.641",
        "UC-10.3.642",
        "UC-10.3.643",
        "UC-10.3.644",
        "UC-10.3.645",
        "UC-10.3.646",
        "UC-10.3.647",
        "UC-10.3.648",
        "UC-10.3.649",
        "UC-10.3.650",
        "UC-10.3.651",
        "UC-10.3.652",
        "UC-10.3.653",
        "UC-10.3.654",
        "UC-10.3.655",
        "UC-10.3.656",
        "UC-10.3.657",
        "UC-10.3.658",
        "UC-10.3.659",
        "UC-10.3.660",
        "UC-10.3.661",
        "UC-10.3.662",
        "UC-10.3.663",
        "UC-10.3.664",
        "UC-10.3.665",
        "UC-10.3.666",
        "UC-10.3.667",
        "UC-10.3.668",
        "UC-10.3.669",
        "UC-10.3.670",
        "UC-10.3.671",
        "UC-10.3.672",
        "UC-10.3.673",
        "UC-10.3.674",
        "UC-10.3.675",
        "UC-10.3.676",
        "UC-10.3.677",
        "UC-10.3.678",
        "UC-10.3.679",
        "UC-10.3.680",
        "UC-10.3.681",
        "UC-10.4.1",
        "UC-10.4.4",
        "UC-10.4.5",
        "UC-10.4.6",
        "UC-10.4.7",
        "UC-10.4.8",
        "UC-10.4.9",
        "UC-10.4.10",
        "UC-10.4.11",
        "UC-10.4.12",
        "UC-10.4.13",
        "UC-10.4.14",
        "UC-10.4.15",
        "UC-10.4.16",
        "UC-10.4.17",
        "UC-10.4.18",
        "UC-10.4.19",
        "UC-10.4.20",
        "UC-10.4.21",
        "UC-10.4.22",
        "UC-10.4.23",
        "UC-10.4.24",
        "UC-10.4.25",
        "UC-10.4.26",
        "UC-10.4.27",
        "UC-10.4.28",
        "UC-10.4.29",
        "UC-10.4.30",
        "UC-10.4.31",
        "UC-10.4.32",
        "UC-10.4.33",
        "UC-10.4.34",
        "UC-10.4.35",
        "UC-10.4.36",
        "UC-10.4.37",
        "UC-10.4.38",
        "UC-10.4.39",
        "UC-10.4.40",
        "UC-10.4.41",
        "UC-10.4.42",
        "UC-10.4.43",
        "UC-10.4.44",
        "UC-10.4.45",
        "UC-10.4.46",
        "UC-10.4.47",
        "UC-10.4.48",
        "UC-10.4.49",
        "UC-10.4.50",
        "UC-10.4.51",
        "UC-10.4.52",
        "UC-10.4.53",
        "UC-10.4.54",
        "UC-10.4.55",
        "UC-10.4.56",
        "UC-10.4.57",
        "UC-10.4.58",
        "UC-10.4.59",
        "UC-10.4.60",
        "UC-10.4.61",
        "UC-10.4.62",
        "UC-10.4.63",
        "UC-10.4.64",
        "UC-10.4.65",
        "UC-10.4.66",
        "UC-10.4.67",
        "UC-10.4.68",
        "UC-10.4.69",
        "UC-10.4.70",
        "UC-10.4.71",
        "UC-10.4.72",
        "UC-10.4.73",
        "UC-10.4.74",
        "UC-10.4.75",
        "UC-10.4.76",
        "UC-10.4.77",
        "UC-10.4.78",
        "UC-10.4.79",
        "UC-10.4.80",
        "UC-10.4.81",
        "UC-10.4.82",
        "UC-10.4.83",
        "UC-10.4.84",
        "UC-10.4.85",
        "UC-10.4.86",
        "UC-10.4.87",
        "UC-10.4.88",
        "UC-10.4.89",
        "UC-10.4.90",
        "UC-10.4.91",
        "UC-10.4.92",
        "UC-10.4.93",
        "UC-10.4.94",
        "UC-10.4.95",
        "UC-10.4.96",
        "UC-10.4.97",
        "UC-10.4.98",
        "UC-10.4.99",
        "UC-10.4.100",
        "UC-10.4.101",
        "UC-10.4.102",
        "UC-10.4.103",
        "UC-10.4.104",
        "UC-10.4.105",
        "UC-10.4.106",
        "UC-10.4.107",
        "UC-10.4.108",
        "UC-10.4.109",
        "UC-10.4.110",
        "UC-10.4.111",
        "UC-10.4.112",
        "UC-10.4.113",
        "UC-10.4.114",
        "UC-10.4.115",
        "UC-10.4.116",
        "UC-10.4.117",
        "UC-10.4.118",
        "UC-10.4.119",
        "UC-10.4.120",
        "UC-10.4.121",
        "UC-10.4.122",
        "UC-10.4.123",
        "UC-10.4.124",
        "UC-10.4.125",
        "UC-10.4.126",
        "UC-10.4.127",
        "UC-10.4.128",
        "UC-10.4.129",
        "UC-10.4.130",
        "UC-10.4.131",
        "UC-10.4.132",
        "UC-10.4.133",
        "UC-10.4.134",
        "UC-10.4.135",
        "UC-10.4.136",
        "UC-10.4.137",
        "UC-10.4.138",
        "UC-10.4.139",
        "UC-10.4.140",
        "UC-10.4.141",
        "UC-10.4.142",
        "UC-10.4.143",
        "UC-10.4.144",
        "UC-10.4.145",
        "UC-10.4.146",
        "UC-10.4.147",
        "UC-10.4.148",
        "UC-10.4.149",
        "UC-10.4.150",
        "UC-10.5.1",
        "UC-10.5.2",
        "UC-10.5.3",
        "UC-10.5.4",
        "UC-10.5.5",
        "UC-10.5.6",
        "UC-10.5.7",
        "UC-10.5.8",
        "UC-10.5.9",
        "UC-10.5.10",
        "UC-10.5.11",
        "UC-10.5.12",
        "UC-10.5.13",
        "UC-10.5.14",
        "UC-10.5.15",
        "UC-10.5.16",
        "UC-10.5.17",
        "UC-10.5.18",
        "UC-10.5.19",
        "UC-10.5.20",
        "UC-10.6.1",
        "UC-10.6.2",
        "UC-10.6.3",
        "UC-10.6.4",
        "UC-10.6.5",
        "UC-10.6.6",
        "UC-10.6.7",
        "UC-10.6.8",
        "UC-10.6.9",
        "UC-10.6.10",
        "UC-10.6.11",
        "UC-10.6.12",
        "UC-10.6.13",
        "UC-10.6.14",
        "UC-10.6.15",
        "UC-10.6.16",
        "UC-10.6.17",
        "UC-10.6.18",
        "UC-10.6.19",
        "UC-10.6.20",
        "UC-10.6.21",
        "UC-10.6.22",
        "UC-10.6.23",
        "UC-10.6.24",
        "UC-10.6.25",
        "UC-10.6.26",
        "UC-10.6.27",
        "UC-10.6.28",
        "UC-10.6.29",
        "UC-10.6.30",
        "UC-10.6.31",
        "UC-10.6.32",
        "UC-10.6.33",
        "UC-10.6.34",
        "UC-10.6.35",
        "UC-10.6.36",
        "UC-10.6.37",
        "UC-10.6.38",
        "UC-10.6.39",
        "UC-10.6.40",
        "UC-10.6.41",
        "UC-10.6.42",
        "UC-10.6.43",
        "UC-10.6.44",
        "UC-10.6.45",
        "UC-10.6.46",
        "UC-10.6.47",
        "UC-10.6.48",
        "UC-10.6.49",
        "UC-10.6.50",
        "UC-10.6.51",
        "UC-10.6.52",
        "UC-10.6.53",
        "UC-10.6.54",
        "UC-10.6.55",
        "UC-10.6.56",
        "UC-10.6.57",
        "UC-10.6.58",
        "UC-10.6.59",
        "UC-10.6.60",
        "UC-10.6.61",
        "UC-10.6.62",
        "UC-10.6.63",
        "UC-10.6.64",
        "UC-10.6.65",
        "UC-10.6.66",
        "UC-10.6.67",
        "UC-10.6.68",
        "UC-10.6.69",
        "UC-10.6.70",
        "UC-10.6.71",
        "UC-10.6.72",
        "UC-10.6.73",
        "UC-10.6.74",
        "UC-10.6.75",
        "UC-10.6.76",
        "UC-10.6.77",
        "UC-10.6.78",
        "UC-10.6.79",
        "UC-10.6.80",
        "UC-10.6.81",
        "UC-10.6.82",
        "UC-10.6.83",
        "UC-10.6.84",
        "UC-10.6.85",
        "UC-10.6.86",
        "UC-10.6.87",
        "UC-10.6.88",
        "UC-10.6.89",
        "UC-10.6.90",
        "UC-10.6.91",
        "UC-10.6.92",
        "UC-10.6.93",
        "UC-10.6.94",
        "UC-10.6.95",
        "UC-10.6.96",
        "UC-10.6.97",
        "UC-10.6.98",
        "UC-10.6.99",
        "UC-10.6.100",
        "UC-10.6.101",
        "UC-10.6.102",
        "UC-10.6.103",
        "UC-10.6.104",
        "UC-10.6.105",
        "UC-10.6.106",
        "UC-10.6.107",
        "UC-10.6.108",
        "UC-10.6.109",
        "UC-10.6.110",
        "UC-10.6.111",
        "UC-10.6.112",
        "UC-10.6.113",
        "UC-10.6.114",
        "UC-10.6.115",
        "UC-10.6.116",
        "UC-10.6.117",
        "UC-10.6.118",
        "UC-10.6.119",
        "UC-10.6.120",
        "UC-10.6.121",
        "UC-10.6.122",
        "UC-10.6.123",
        "UC-10.6.124",
        "UC-10.6.125",
        "UC-10.6.126",
        "UC-10.6.127",
        "UC-10.6.128",
        "UC-10.6.129",
        "UC-10.6.130",
        "UC-10.6.131",
        "UC-10.6.132",
        "UC-10.6.133",
        "UC-10.6.134",
        "UC-10.6.135",
        "UC-10.6.136",
        "UC-10.6.137",
        "UC-10.6.138",
        "UC-10.6.139",
        "UC-10.6.140",
        "UC-10.6.141",
        "UC-10.6.142",
        "UC-10.6.143",
        "UC-10.6.144",
        "UC-10.6.145",
        "UC-10.6.146",
        "UC-10.6.147",
        "UC-10.6.148",
        "UC-10.6.149",
        "UC-10.6.150",
        "UC-10.6.151",
        "UC-10.6.152",
        "UC-10.6.153",
        "UC-10.6.154",
        "UC-10.6.155",
        "UC-10.6.156",
        "UC-10.6.157",
        "UC-10.6.158",
        "UC-10.6.159",
        "UC-10.6.160",
        "UC-10.6.161",
        "UC-10.6.162",
        "UC-10.6.163",
        "UC-10.6.164",
        "UC-10.6.165",
        "UC-10.6.166",
        "UC-10.6.167",
        "UC-10.6.168",
        "UC-10.6.169",
        "UC-10.6.170",
        "UC-10.6.171",
        "UC-10.6.172",
        "UC-10.6.173",
        "UC-10.6.174",
        "UC-10.6.175",
        "UC-10.6.176",
        "UC-10.6.177",
        "UC-10.6.178",
        "UC-10.6.179",
        "UC-10.6.180",
        "UC-10.6.181",
        "UC-10.6.182",
        "UC-10.6.183",
        "UC-10.6.184",
        "UC-10.6.185",
        "UC-10.6.186",
        "UC-10.6.187",
        "UC-10.6.188",
        "UC-10.6.189",
        "UC-10.6.190",
        "UC-10.6.191",
        "UC-10.6.192",
        "UC-10.6.193",
        "UC-10.7.2",
        "UC-10.7.3",
        "UC-10.7.4",
        "UC-10.7.5",
        "UC-10.7.6",
        "UC-10.7.7",
        "UC-10.7.8",
        "UC-10.7.9",
        "UC-10.7.10",
        "UC-10.7.11",
        "UC-10.7.12",
        "UC-10.7.13",
        "UC-10.7.14",
        "UC-10.7.15",
        "UC-10.7.16",
        "UC-10.7.17",
        "UC-10.7.18",
        "UC-10.7.19",
        "UC-10.7.20",
        "UC-10.7.21",
        "UC-10.7.22",
        "UC-10.7.23",
        "UC-10.7.24",
        "UC-10.7.25",
        "UC-10.7.26",
        "UC-10.7.27",
        "UC-10.7.28",
        "UC-10.7.29",
        "UC-10.7.30",
        "UC-10.7.31",
        "UC-10.7.32",
        "UC-10.7.33",
        "UC-10.7.34",
        "UC-10.7.35",
        "UC-10.7.36",
        "UC-10.7.37",
        "UC-10.7.38",
        "UC-10.7.39",
        "UC-10.7.40",
        "UC-10.7.41",
        "UC-10.7.42",
        "UC-10.7.43",
        "UC-10.7.44",
        "UC-10.7.45",
        "UC-10.7.46",
        "UC-10.7.47",
        "UC-10.7.48",
        "UC-10.7.49",
        "UC-10.7.50",
        "UC-10.7.51",
        "UC-10.7.52",
        "UC-10.7.53",
        "UC-10.7.54",
        "UC-10.7.55",
        "UC-10.7.56",
        "UC-10.7.57",
        "UC-10.7.58",
        "UC-10.7.59",
        "UC-10.7.60",
        "UC-10.7.61",
        "UC-10.7.62",
        "UC-10.7.63",
        "UC-10.7.64",
        "UC-10.7.65",
        "UC-10.7.66",
        "UC-10.7.67",
        "UC-10.7.68",
        "UC-10.7.69",
        "UC-10.7.70",
        "UC-10.7.71",
        "UC-10.7.72",
        "UC-10.7.73",
        "UC-10.7.74",
        "UC-10.7.75",
        "UC-10.7.76",
        "UC-10.7.77",
        "UC-10.7.78",
        "UC-10.7.79",
        "UC-10.7.80",
        "UC-10.7.81",
        "UC-10.7.82",
        "UC-10.7.83",
        "UC-10.7.84",
        "UC-10.7.85",
        "UC-10.7.86",
        "UC-10.7.87",
        "UC-10.7.88",
        "UC-10.7.89",
        "UC-10.7.90",
        "UC-10.7.91",
        "UC-10.7.92",
        "UC-10.7.93",
        "UC-10.7.94",
        "UC-10.7.95",
        "UC-10.7.96",
        "UC-10.7.97",
        "UC-10.7.98",
        "UC-10.7.99",
        "UC-10.7.100",
        "UC-10.7.101",
        "UC-10.7.102",
        "UC-10.7.103",
        "UC-10.7.104",
        "UC-10.7.105",
        "UC-10.7.106",
        "UC-10.7.107",
        "UC-10.7.108",
        "UC-10.7.109",
        "UC-10.7.110",
        "UC-10.7.111",
        "UC-10.7.112",
        "UC-10.7.113",
        "UC-10.7.114",
        "UC-10.7.115",
        "UC-10.7.116",
        "UC-10.7.117",
        "UC-10.7.118",
        "UC-10.7.119",
        "UC-10.7.120",
        "UC-10.7.121",
        "UC-10.7.122",
        "UC-10.7.123",
        "UC-10.7.124",
        "UC-10.7.125",
        "UC-10.7.126",
        "UC-10.7.127",
        "UC-10.7.128",
        "UC-10.7.129",
        "UC-10.7.130",
        "UC-10.7.131",
        "UC-10.7.132",
        "UC-10.7.133",
        "UC-10.7.134",
        "UC-10.7.135",
        "UC-10.7.136",
        "UC-10.7.137",
        "UC-10.7.138",
        "UC-10.7.139",
        "UC-10.7.140",
        "UC-10.7.141",
        "UC-10.7.142",
        "UC-10.7.143",
        "UC-10.7.144",
        "UC-10.7.145",
        "UC-10.7.146",
        "UC-10.7.147",
        "UC-10.7.148",
        "UC-10.7.149",
        "UC-10.7.150",
        "UC-10.7.151",
        "UC-10.7.152",
        "UC-10.7.153",
        "UC-10.7.154",
        "UC-10.7.155",
        "UC-10.7.156",
        "UC-10.7.157",
        "UC-10.7.158",
        "UC-10.7.159",
        "UC-10.7.160",
        "UC-10.7.161",
        "UC-10.7.162",
        "UC-10.7.163",
        "UC-10.7.164",
        "UC-10.7.165",
        "UC-10.7.166",
        "UC-10.7.167",
        "UC-10.7.168",
        "UC-10.7.169",
        "UC-10.7.170",
        "UC-10.7.171",
        "UC-10.7.172",
        "UC-10.7.173",
        "UC-10.7.174",
        "UC-10.7.175",
        "UC-10.7.176",
        "UC-10.7.177",
        "UC-10.7.178",
        "UC-10.7.179",
        "UC-10.7.180",
        "UC-10.7.181",
        "UC-10.7.182",
        "UC-10.7.183",
        "UC-10.7.184",
        "UC-10.7.185",
        "UC-10.7.186",
        "UC-10.7.187",
        "UC-10.7.188",
        "UC-10.7.189",
        "UC-10.7.190",
        "UC-10.7.191",
        "UC-10.7.192",
        "UC-10.7.193",
        "UC-10.7.194",
        "UC-10.7.195",
        "UC-10.7.196",
        "UC-10.7.197",
        "UC-10.7.198",
        "UC-10.7.199",
        "UC-10.7.200",
        "UC-10.7.201",
        "UC-10.7.202",
        "UC-10.7.203",
        "UC-10.7.204",
        "UC-10.7.205",
        "UC-10.7.206",
        "UC-10.7.207",
        "UC-10.7.208",
        "UC-10.7.209",
        "UC-10.7.210",
        "UC-10.7.211",
        "UC-10.7.212",
        "UC-10.7.213",
        "UC-10.7.214",
        "UC-10.7.215",
        "UC-10.7.216",
        "UC-10.7.217",
        "UC-10.7.218",
        "UC-10.7.219",
        "UC-10.7.220",
        "UC-10.7.221",
        "UC-10.7.222",
        "UC-10.7.223",
        "UC-10.7.224",
        "UC-10.7.225",
        "UC-10.7.226",
        "UC-10.7.227",
        "UC-10.7.228",
        "UC-10.7.229",
        "UC-10.7.230",
        "UC-10.7.231",
        "UC-10.7.232",
        "UC-10.7.233",
        "UC-10.7.234",
        "UC-10.7.235",
        "UC-10.7.236",
        "UC-10.7.237",
        "UC-10.7.238",
        "UC-10.7.239",
        "UC-10.7.240",
        "UC-10.7.241",
        "UC-10.7.242",
        "UC-10.7.243",
        "UC-10.7.244",
        "UC-10.7.245",
        "UC-10.7.246",
        "UC-10.7.247",
        "UC-10.7.248",
        "UC-10.7.249",
        "UC-10.7.250",
        "UC-10.7.251",
        "UC-10.7.252",
        "UC-10.7.253",
        "UC-10.7.254",
        "UC-10.7.255",
        "UC-10.7.256",
        "UC-10.7.257",
        "UC-10.7.258",
        "UC-10.7.259",
        "UC-10.7.260",
        "UC-10.7.261",
        "UC-10.7.262",
        "UC-10.7.263",
        "UC-10.7.264",
        "UC-10.7.265",
        "UC-10.7.266",
        "UC-10.7.267",
        "UC-10.7.268",
        "UC-10.7.269",
        "UC-10.7.270",
        "UC-10.7.271",
        "UC-10.7.272",
        "UC-10.7.273",
        "UC-10.7.274",
        "UC-10.7.275",
        "UC-10.7.276",
        "UC-10.7.277",
        "UC-10.7.278",
        "UC-10.7.279",
        "UC-10.7.280",
        "UC-10.7.281",
        "UC-10.7.282",
        "UC-10.7.283",
        "UC-10.7.284",
        "UC-10.7.285",
        "UC-10.7.286",
        "UC-10.7.287",
        "UC-10.7.288",
        "UC-10.7.289",
        "UC-10.7.290",
        "UC-10.7.291",
        "UC-10.7.292",
        "UC-10.7.293",
        "UC-10.7.294",
        "UC-10.7.295",
        "UC-10.7.296",
        "UC-10.7.297",
        "UC-10.7.298",
        "UC-10.7.299",
        "UC-10.7.300",
        "UC-10.7.301",
        "UC-10.7.302",
        "UC-10.7.303",
        "UC-10.7.304",
        "UC-10.7.305",
        "UC-10.7.306",
        "UC-10.7.307",
        "UC-10.7.308",
        "UC-10.7.309",
        "UC-10.7.310",
        "UC-10.7.311",
        "UC-10.7.312",
        "UC-10.7.313",
        "UC-10.7.314",
        "UC-10.7.315",
        "UC-10.7.316",
        "UC-10.7.317",
        "UC-10.7.318",
        "UC-10.7.319",
        "UC-10.7.320",
        "UC-10.7.321",
        "UC-10.7.322",
        "UC-10.7.323",
        "UC-10.7.324",
        "UC-10.7.325",
        "UC-10.7.326",
        "UC-10.7.327",
        "UC-10.7.328",
        "UC-10.7.329",
        "UC-10.7.330",
        "UC-10.7.331",
        "UC-10.7.332",
        "UC-10.7.333",
        "UC-10.7.334",
        "UC-10.7.335",
        "UC-10.7.336",
        "UC-10.7.337",
        "UC-10.7.338",
        "UC-10.7.339",
        "UC-10.7.340",
        "UC-10.7.341",
        "UC-10.7.342",
        "UC-10.7.343",
        "UC-10.7.344",
        "UC-10.7.345",
        "UC-10.7.346",
        "UC-10.7.347",
        "UC-10.7.348",
        "UC-10.7.349",
        "UC-10.7.350",
        "UC-10.7.351",
        "UC-10.7.352",
        "UC-10.7.353",
        "UC-10.7.354",
        "UC-10.7.355",
        "UC-10.7.356",
        "UC-10.7.357",
        "UC-10.7.358",
        "UC-10.7.359",
        "UC-10.7.360",
        "UC-10.7.361",
        "UC-10.7.362",
        "UC-10.7.363",
        "UC-10.7.364",
        "UC-10.7.365",
        "UC-10.7.366",
        "UC-10.7.367",
        "UC-10.7.368",
        "UC-10.7.369",
        "UC-10.7.370",
        "UC-10.7.371",
        "UC-10.7.372",
        "UC-10.7.373",
        "UC-10.7.374",
        "UC-10.7.375",
        "UC-10.7.376",
        "UC-10.7.377",
        "UC-10.7.378",
        "UC-10.7.379",
        "UC-10.7.380",
        "UC-10.7.381",
        "UC-10.7.382",
        "UC-10.7.383",
        "UC-10.7.384",
        "UC-10.7.385",
        "UC-10.7.386",
        "UC-10.7.387",
        "UC-10.7.388",
        "UC-10.7.389",
        "UC-10.7.390",
        "UC-10.7.391",
        "UC-10.7.392",
        "UC-10.7.393",
        "UC-10.7.394",
        "UC-10.7.395",
        "UC-10.7.396",
        "UC-10.7.397",
        "UC-10.7.398",
        "UC-10.7.399",
        "UC-10.7.400",
        "UC-10.7.401",
        "UC-10.7.402",
        "UC-10.7.403",
        "UC-10.7.404",
        "UC-10.7.405",
        "UC-10.7.406",
        "UC-10.7.407",
        "UC-10.7.408",
        "UC-10.7.409",
        "UC-10.7.410",
        "UC-10.7.411",
        "UC-10.7.412",
        "UC-10.7.413",
        "UC-10.7.414",
        "UC-10.7.415",
        "UC-10.7.416",
        "UC-10.7.417",
        "UC-10.7.418",
        "UC-10.7.419",
        "UC-10.7.420",
        "UC-10.7.421",
        "UC-10.7.422",
        "UC-10.7.423",
        "UC-10.7.424",
        "UC-10.7.425",
        "UC-10.7.426",
        "UC-10.7.427",
        "UC-10.7.428",
        "UC-10.7.429",
        "UC-10.7.430",
        "UC-10.7.431",
        "UC-10.7.432",
        "UC-10.7.433",
        "UC-10.7.434",
        "UC-10.7.435",
        "UC-10.7.436",
        "UC-10.7.437",
        "UC-10.7.438",
        "UC-10.7.439",
        "UC-10.7.440",
        "UC-10.7.441",
        "UC-10.7.442",
        "UC-10.7.443",
        "UC-10.7.444",
        "UC-10.7.445",
        "UC-10.7.446",
        "UC-10.7.447",
        "UC-10.7.448",
        "UC-10.7.449",
        "UC-10.7.450",
        "UC-10.7.451",
        "UC-10.7.452",
        "UC-10.7.453",
        "UC-10.7.454",
        "UC-10.7.455",
        "UC-10.7.456",
        "UC-10.7.457",
        "UC-10.7.458",
        "UC-10.7.459",
        "UC-10.7.460",
        "UC-10.7.461",
        "UC-10.7.462",
        "UC-10.7.463",
        "UC-10.7.464",
        "UC-10.7.465",
        "UC-10.7.466",
        "UC-10.7.467",
        "UC-10.7.468",
        "UC-10.7.469",
        "UC-10.7.470",
        "UC-10.7.471",
        "UC-10.7.472",
        "UC-10.7.473",
        "UC-10.7.474",
        "UC-10.7.475",
        "UC-10.7.476",
        "UC-10.7.477",
        "UC-10.7.478",
        "UC-10.7.479",
        "UC-10.7.480",
        "UC-10.7.481",
        "UC-10.7.482",
        "UC-10.7.483",
        "UC-10.7.484",
        "UC-10.7.485",
        "UC-10.7.486",
        "UC-10.7.487",
        "UC-10.7.488",
        "UC-10.7.489",
        "UC-10.7.490",
        "UC-10.7.491",
        "UC-10.7.492",
        "UC-10.7.493",
        "UC-10.7.494",
        "UC-10.7.495",
        "UC-10.7.496",
        "UC-10.7.497",
        "UC-10.7.498",
        "UC-10.7.499",
        "UC-10.7.500",
        "UC-10.7.501",
        "UC-10.7.502",
        "UC-10.7.503",
        "UC-10.7.504",
        "UC-10.7.505",
        "UC-10.7.506",
        "UC-10.7.507",
        "UC-10.7.508",
        "UC-10.7.509",
        "UC-10.7.510",
        "UC-10.7.511",
        "UC-10.7.512",
        "UC-10.7.513",
        "UC-10.7.514",
        "UC-10.7.515",
        "UC-10.7.516",
        "UC-10.7.517",
        "UC-10.7.518",
        "UC-10.7.519",
        "UC-10.7.520",
        "UC-10.7.521",
        "UC-10.7.522",
        "UC-10.7.523",
        "UC-10.7.524",
        "UC-10.7.525",
        "UC-10.7.526",
        "UC-10.7.527",
        "UC-10.7.528",
        "UC-10.7.529",
        "UC-10.7.530",
        "UC-10.7.531",
        "UC-10.7.532",
        "UC-10.7.533",
        "UC-10.7.534",
        "UC-10.7.535",
        "UC-10.7.536",
        "UC-10.7.537",
        "UC-10.7.538",
        "UC-10.7.539",
        "UC-10.7.540",
        "UC-10.7.541",
        "UC-10.7.542",
        "UC-10.7.543",
        "UC-10.7.544",
        "UC-10.7.545",
        "UC-10.7.546",
        "UC-10.7.547",
        "UC-10.7.548",
        "UC-10.7.549",
        "UC-10.7.550",
        "UC-10.7.551",
        "UC-10.7.552",
        "UC-10.7.553",
        "UC-10.7.554",
        "UC-10.7.555",
        "UC-10.7.556",
        "UC-10.7.557",
        "UC-10.7.558",
        "UC-10.7.559",
        "UC-10.7.560",
        "UC-10.7.561",
        "UC-10.7.562",
        "UC-10.7.563",
        "UC-10.7.564",
        "UC-10.7.565",
        "UC-10.7.566",
        "UC-10.7.567",
        "UC-10.7.568",
        "UC-10.7.569",
        "UC-10.7.570",
        "UC-10.7.571",
        "UC-10.7.572",
        "UC-10.7.573",
        "UC-10.7.574",
        "UC-10.7.575",
        "UC-10.7.576",
        "UC-10.7.577",
        "UC-10.7.578",
        "UC-10.7.579",
        "UC-10.7.580",
        "UC-10.7.581",
        "UC-10.7.582",
        "UC-10.7.583",
        "UC-10.7.584",
        "UC-10.7.585",
        "UC-10.7.586",
        "UC-10.7.587",
        "UC-10.7.588",
        "UC-10.7.589",
        "UC-10.7.590",
        "UC-10.7.591",
        "UC-10.7.592",
        "UC-10.7.593",
        "UC-10.7.594",
        "UC-10.7.595",
        "UC-10.7.596",
        "UC-10.7.597",
        "UC-10.7.598",
        "UC-10.7.599",
        "UC-10.7.600",
        "UC-10.7.601",
        "UC-10.7.602",
        "UC-10.7.603",
        "UC-10.7.604",
        "UC-10.7.605",
        "UC-10.7.606",
        "UC-10.7.607",
        "UC-10.7.608",
        "UC-10.7.609",
        "UC-10.7.610",
        "UC-10.7.611",
        "UC-10.7.612",
        "UC-10.7.613",
        "UC-10.7.614",
        "UC-10.7.615",
        "UC-10.7.616",
        "UC-10.7.617",
        "UC-10.7.618",
        "UC-10.7.619",
        "UC-10.7.620",
        "UC-10.7.621",
        "UC-10.7.622",
        "UC-10.7.623",
        "UC-10.7.624",
        "UC-10.7.625",
        "UC-10.7.626",
        "UC-10.7.627",
        "UC-10.7.628",
        "UC-10.7.629",
        "UC-10.7.630",
        "UC-10.7.631",
        "UC-10.7.632",
        "UC-10.7.633",
        "UC-10.7.634",
        "UC-10.7.635",
        "UC-10.7.636",
        "UC-10.7.637",
        "UC-10.7.638",
        "UC-10.7.639",
        "UC-10.7.640",
        "UC-10.7.641",
        "UC-10.7.642",
        "UC-10.7.643",
        "UC-10.7.644",
        "UC-10.7.645",
        "UC-10.7.646",
        "UC-10.7.647",
        "UC-10.7.648",
        "UC-10.7.649",
        "UC-10.7.650",
        "UC-10.7.651",
        "UC-10.7.652",
        "UC-10.7.653",
        "UC-10.7.654",
        "UC-10.7.655",
        "UC-10.7.656",
        "UC-10.7.657",
        "UC-10.7.658",
        "UC-10.7.659",
        "UC-10.7.660",
        "UC-10.7.661",
        "UC-10.7.662",
        "UC-10.7.663",
        "UC-10.7.664",
        "UC-10.7.665",
        "UC-10.7.666",
        "UC-10.7.667",
        "UC-10.7.668",
        "UC-10.7.669",
        "UC-10.7.670",
        "UC-10.7.671",
        "UC-10.7.672",
        "UC-10.7.673",
        "UC-10.7.674",
        "UC-10.7.675",
        "UC-10.7.676",
        "UC-10.7.677",
        "UC-10.7.678",
        "UC-10.7.679",
        "UC-10.7.680",
        "UC-10.7.681",
        "UC-10.7.682",
        "UC-10.7.683",
        "UC-10.7.684",
        "UC-10.7.685",
        "UC-10.7.686",
        "UC-10.7.687",
        "UC-10.7.688",
        "UC-10.7.689",
        "UC-10.7.690",
        "UC-10.7.691",
        "UC-10.7.692",
        "UC-10.7.693",
        "UC-10.7.694",
        "UC-10.7.695",
        "UC-10.7.696",
        "UC-10.7.697",
        "UC-10.7.698",
        "UC-10.7.699",
        "UC-10.7.700",
        "UC-10.7.701",
        "UC-10.7.702",
        "UC-10.7.703",
        "UC-10.7.704",
        "UC-10.7.705",
        "UC-10.7.706",
        "UC-10.7.707",
        "UC-10.7.708",
        "UC-10.7.709",
        "UC-10.7.710",
        "UC-10.7.711",
        "UC-10.7.712",
        "UC-10.7.713",
        "UC-10.7.714",
        "UC-10.7.715",
        "UC-10.7.716",
        "UC-10.7.717",
        "UC-10.7.718",
        "UC-10.7.719",
        "UC-10.8.1",
        "UC-10.8.2",
        "UC-10.8.3",
        "UC-10.8.4",
        "UC-10.8.5",
        "UC-10.8.6",
        "UC-10.8.7",
        "UC-10.8.8",
        "UC-10.8.9",
        "UC-10.8.10",
        "UC-10.8.11",
        "UC-10.8.12",
        "UC-10.8.13",
        "UC-10.8.14",
        "UC-10.8.15",
        "UC-10.8.16",
        "UC-10.8.17",
        "UC-10.8.18",
        "UC-10.8.19",
        "UC-10.8.20",
        "UC-10.8.21",
        "UC-10.8.22",
        "UC-10.8.23",
        "UC-10.8.24",
        "UC-10.8.25",
        "UC-10.8.26",
        "UC-10.8.27",
        "UC-10.8.28",
        "UC-10.8.29",
        "UC-10.8.30",
        "UC-10.8.31",
        "UC-10.8.32",
        "UC-10.8.33",
        "UC-10.8.34",
        "UC-10.8.35",
        "UC-10.8.36",
        "UC-10.8.37",
        "UC-10.8.38",
        "UC-10.8.39",
        "UC-10.9.1",
        "UC-10.9.2",
        "UC-10.9.3",
        "UC-10.9.4",
        "UC-10.9.5",
        "UC-10.9.6",
        "UC-10.9.7",
        "UC-10.9.8",
        "UC-10.9.9",
        "UC-10.9.10",
        "UC-10.9.11",
        "UC-10.9.12",
        "UC-10.9.13",
        "UC-10.9.14",
        "UC-10.9.15",
        "UC-10.9.16",
        "UC-10.9.17",
        "UC-10.9.18",
        "UC-10.9.19",
        "UC-10.9.20",
        "UC-10.9.21",
        "UC-10.9.22",
        "UC-10.9.23",
        "UC-10.9.24",
        "UC-10.9.25",
        "UC-10.9.26",
        "UC-10.9.27",
        "UC-10.9.28",
        "UC-10.9.29",
        "UC-10.9.30",
        "UC-10.9.31",
        "UC-10.9.32",
        "UC-10.9.33",
        "UC-10.9.34",
        "UC-10.9.35",
        "UC-10.9.36",
        "UC-10.9.37",
        "UC-10.9.38",
        "UC-10.9.39",
        "UC-10.9.40",
        "UC-10.9.41",
        "UC-10.9.42",
        "UC-10.9.43",
        "UC-10.9.44",
        "UC-10.9.45",
        "UC-10.9.46",
        "UC-10.9.47",
        "UC-10.9.48",
        "UC-10.9.49",
        "UC-10.9.50",
        "UC-10.9.51",
        "UC-10.9.52",
        "UC-10.9.53",
        "UC-10.9.54",
        "UC-10.9.55",
        "UC-10.9.56",
        "UC-10.9.57",
        "UC-10.9.58",
        "UC-10.9.59",
        "UC-10.9.60",
        "UC-10.9.61",
        "UC-10.9.62",
        "UC-10.9.63",
        "UC-10.9.64",
        "UC-10.9.65",
        "UC-10.9.66",
        "UC-10.9.67",
        "UC-10.9.68",
        "UC-10.9.69",
        "UC-10.9.70",
        "UC-10.9.71",
        "UC-10.9.72",
        "UC-10.9.73",
        "UC-10.9.74",
        "UC-10.9.75",
        "UC-10.9.76",
        "UC-10.9.77",
        "UC-10.9.78",
        "UC-10.9.79",
        "UC-10.9.80",
        "UC-10.10.1",
        "UC-10.10.2",
        "UC-10.10.3",
        "UC-10.10.4",
        "UC-10.10.5",
        "UC-10.10.6",
        "UC-10.10.7",
        "UC-10.10.8",
        "UC-10.10.9",
        "UC-10.10.10",
        "UC-10.10.11",
        "UC-10.10.12",
        "UC-10.10.13",
        "UC-10.10.14",
        "UC-10.10.15",
        "UC-10.11.1",
        "UC-10.11.2",
        "UC-10.11.3",
        "UC-10.11.4",
        "UC-10.11.5",
        "UC-10.11.6",
        "UC-10.11.7",
        "UC-10.11.8",
        "UC-10.11.9",
        "UC-10.11.10",
        "UC-10.11.11",
        "UC-10.11.12",
        "UC-10.11.13",
        "UC-10.11.14",
        "UC-10.11.15",
        "UC-10.11.16",
        "UC-10.11.17",
        "UC-10.11.18",
        "UC-10.11.19",
        "UC-10.11.20",
        "UC-10.11.21",
        "UC-10.11.22",
        "UC-10.11.23",
        "UC-10.11.24",
        "UC-10.11.25",
        "UC-10.11.26",
        "UC-10.11.27",
        "UC-10.11.28",
        "UC-10.11.29",
        "UC-10.11.30",
        "UC-10.11.31",
        "UC-10.11.32",
        "UC-10.11.33",
        "UC-10.11.34",
        "UC-10.11.35",
        "UC-10.11.36",
        "UC-10.11.37",
        "UC-10.11.38",
        "UC-10.11.39",
        "UC-10.11.40",
        "UC-10.11.41",
        "UC-10.11.42",
        "UC-10.11.43",
        "UC-10.11.44",
        "UC-10.11.45",
        "UC-10.11.46",
        "UC-10.11.47",
        "UC-10.11.48",
        "UC-10.11.49",
        "UC-10.11.50",
        "UC-10.11.51",
        "UC-10.11.52",
        "UC-10.11.53",
        "UC-10.11.54",
        "UC-10.11.55",
        "UC-10.11.56",
        "UC-10.11.57",
        "UC-10.11.58",
        "UC-10.11.59",
        "UC-10.11.60",
        "UC-10.11.61",
        "UC-10.11.62",
        "UC-10.11.63",
        "UC-10.11.64",
        "UC-10.11.65",
        "UC-10.11.66",
        "UC-10.11.67",
        "UC-10.11.68",
        "UC-10.11.69",
        "UC-10.11.70",
        "UC-10.11.71",
        "UC-10.11.72",
        "UC-10.11.73",
        "UC-10.11.74",
        "UC-10.11.75",
        "UC-10.11.76",
        "UC-10.11.77",
        "UC-10.11.78",
        "UC-10.11.79",
        "UC-10.11.80",
        "UC-10.11.81",
        "UC-10.11.82",
        "UC-10.11.83",
        "UC-10.11.84",
        "UC-10.11.85",
        "UC-10.11.86",
        "UC-10.11.87",
        "UC-10.11.88",
        "UC-10.11.89",
        "UC-10.11.90",
        "UC-10.11.91",
        "UC-10.11.92",
        "UC-10.11.93",
        "UC-10.11.94",
        "UC-10.11.95",
        "UC-10.11.96",
        "UC-10.11.97",
        "UC-10.11.98",
        "UC-10.11.99",
        "UC-10.11.100",
        "UC-10.11.101",
        "UC-10.11.102",
        "UC-10.11.103",
        "UC-10.11.104",
        "UC-10.11.105",
        "UC-10.11.106",
        "UC-10.11.107",
        "UC-10.11.108",
        "UC-10.11.109",
        "UC-10.11.110",
        "UC-10.11.111",
        "UC-10.11.112",
        "UC-10.11.113",
        "UC-10.11.114",
        "UC-10.11.115",
        "UC-10.11.116",
        "UC-10.11.117",
        "UC-10.11.118",
        "UC-10.11.119",
        "UC-10.11.120",
        "UC-10.11.121",
        "UC-10.11.122",
        "UC-10.11.123",
        "UC-10.11.124",
        "UC-10.11.125",
        "UC-10.11.126",
        "UC-10.11.127",
        "UC-10.11.128",
        "UC-10.11.129",
        "UC-10.11.130",
        "UC-10.12.1",
        "UC-10.12.2",
        "UC-10.12.3",
        "UC-10.12.4",
        "UC-10.12.5",
        "UC-10.12.6",
        "UC-10.12.7",
        "UC-10.12.8",
        "UC-10.12.9",
        "UC-10.12.10",
        "UC-10.12.11",
        "UC-10.12.12",
        "UC-10.12.13",
        "UC-10.12.14",
        "UC-10.12.15",
        "UC-10.12.16",
        "UC-10.12.17",
        "UC-10.12.18",
        "UC-10.12.19",
        "UC-10.12.20",
        "UC-10.12.21",
        "UC-10.12.22",
        "UC-10.12.23",
        "UC-10.12.24",
        "UC-10.12.25",
        "UC-10.12.26",
        "UC-10.12.27",
        "UC-10.12.28",
        "UC-10.12.29",
        "UC-10.12.30",
        "UC-10.12.31",
        "UC-10.12.32",
        "UC-10.12.33",
        "UC-10.12.34",
        "UC-10.12.35",
        "UC-10.12.36",
        "UC-10.12.37",
        "UC-10.12.38",
        "UC-10.12.39",
        "UC-10.12.40",
        "UC-10.12.41",
        "UC-10.12.42",
        "UC-10.12.43",
        "UC-10.12.44",
        "UC-10.12.45",
        "UC-10.13.1",
        "UC-10.13.2",
        "UC-10.13.3",
        "UC-10.13.4",
        "UC-10.13.5",
        "UC-10.13.6",
        "UC-10.13.7",
        "UC-10.13.8",
        "UC-10.13.9",
        "UC-10.13.10",
        "UC-10.13.11",
        "UC-10.13.12",
        "UC-10.13.13",
        "UC-10.13.14",
        "UC-10.13.15",
        "UC-10.13.16",
        "UC-10.13.17",
        "UC-10.13.18",
        "UC-10.13.19",
        "UC-10.13.20",
        "UC-10.13.21",
        "UC-10.13.22",
        "UC-10.13.23",
        "UC-10.13.24",
        "UC-10.13.25",
        "UC-10.13.26",
        "UC-10.13.27",
        "UC-10.13.28",
        "UC-10.13.29",
        "UC-10.13.30",
        "UC-10.13.31",
        "UC-10.13.32",
        "UC-10.13.33",
        "UC-10.13.34",
        "UC-10.13.35",
        "UC-10.13.36",
        "UC-10.13.37",
        "UC-10.13.38",
        "UC-10.13.39",
        "UC-10.13.40",
        "UC-10.13.41",
        "UC-10.13.42",
        "UC-10.13.43",
        "UC-10.13.44",
        "UC-10.13.45",
        "UC-10.13.46",
        "UC-10.13.47",
        "UC-10.13.48",
        "UC-10.13.49",
        "UC-10.14.1",
        "UC-10.14.2",
        "UC-10.14.3",
        "UC-10.14.4",
        "UC-10.14.5",
        "UC-10.14.6",
        "UC-10.14.7",
        "UC-10.14.8",
        "UC-10.14.9",
        "UC-10.14.10",
        "UC-10.14.11",
        "UC-10.14.12",
        "UC-10.14.13",
        "UC-10.14.14",
        "UC-10.14.15",
        "UC-10.14.16",
        "UC-10.14.17",
        "UC-10.14.18",
        "UC-10.14.19",
        "UC-10.14.20",
        "UC-10.15.1",
        "UC-10.15.2",
        "UC-10.15.3",
        "UC-10.15.4",
        "UC-10.15.5",
        "UC-10.15.6",
        "UC-10.15.7",
        "UC-10.15.8",
        "UC-10.6.194",
        "UC-10.6.195",
        "UC-10.6.196",
        "UC-10.6.197",
        "UC-10.6.198",
        "UC-10.6.199",
        "UC-10.6.200",
        "UC-10.16.1",
        "UC-10.16.2",
        "UC-10.16.3",
        "UC-10.16.4",
        "UC-10.16.5",
        "UC-10.16.6",
        "UC-10.16.7",
        "UC-10.16.8"
      ]
    },
    "11": {
      "crawl": [
        "UC-11.1.1",
        "UC-11.1.2",
        "UC-11.1.8",
        "UC-11.2.4",
        "UC-11.3.6"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-11.1.3",
        "UC-11.1.4",
        "UC-11.1.5",
        "UC-11.1.6",
        "UC-11.1.7",
        "UC-11.1.9",
        "UC-11.1.10",
        "UC-11.1.11",
        "UC-11.1.12",
        "UC-11.2.1",
        "UC-11.2.2",
        "UC-11.2.3",
        "UC-11.2.5",
        "UC-11.2.6",
        "UC-11.2.7",
        "UC-11.2.8",
        "UC-11.2.9",
        "UC-11.2.10",
        "UC-11.2.11",
        "UC-11.2.12",
        "UC-11.2.13",
        "UC-11.2.14",
        "UC-11.2.15",
        "UC-11.2.16",
        "UC-11.2.17",
        "UC-11.2.18",
        "UC-11.3.1",
        "UC-11.3.2",
        "UC-11.3.3",
        "UC-11.3.4",
        "UC-11.3.5",
        "UC-11.3.7",
        "UC-11.3.8",
        "UC-11.3.9",
        "UC-11.3.10",
        "UC-11.3.11",
        "UC-11.3.12",
        "UC-11.3.13",
        "UC-11.3.14",
        "UC-11.3.15",
        "UC-11.3.16",
        "UC-11.3.17",
        "UC-11.3.18",
        "UC-11.3.19",
        "UC-11.3.20",
        "UC-11.3.21",
        "UC-11.3.22",
        "UC-11.3.23",
        "UC-11.3.25",
        "UC-11.3.26",
        "UC-11.3.27",
        "UC-11.3.28",
        "UC-11.3.29",
        "UC-11.3.30",
        "UC-11.3.31",
        "UC-11.3.32",
        "UC-11.3.33",
        "UC-11.3.34",
        "UC-11.3.35",
        "UC-11.3.36",
        "UC-11.3.37",
        "UC-11.3.38",
        "UC-11.3.39",
        "UC-11.3.40",
        "UC-11.3.41",
        "UC-11.3.42",
        "UC-11.3.43",
        "UC-11.3.44",
        "UC-11.3.45",
        "UC-11.3.46",
        "UC-11.3.47",
        "UC-11.3.48",
        "UC-11.3.49",
        "UC-11.3.50",
        "UC-11.3.51",
        "UC-11.3.52",
        "UC-11.3.53",
        "UC-11.3.54",
        "UC-11.3.55",
        "UC-11.3.56",
        "UC-11.3.57",
        "UC-11.3.58",
        "UC-11.4.1",
        "UC-11.4.2",
        "UC-11.4.3",
        "UC-11.4.4",
        "UC-11.4.5",
        "UC-11.4.6",
        "UC-11.4.7",
        "UC-11.4.8",
        "UC-11.5.1",
        "UC-11.5.2",
        "UC-11.5.3",
        "UC-11.5.4",
        "UC-11.5.5",
        "UC-11.5.6",
        "UC-11.5.7",
        "UC-11.5.8",
        "UC-11.5.9",
        "UC-11.5.10",
        "UC-11.5.11",
        "UC-11.5.12"
      ]
    },
    "12": {
      "crawl": [
        "UC-12.1.2",
        "UC-12.1.4",
        "UC-12.2.5",
        "UC-12.2.8",
        "UC-12.3.2"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-12.1.1",
        "UC-12.1.3",
        "UC-12.1.5",
        "UC-12.1.6",
        "UC-12.1.7",
        "UC-12.1.8",
        "UC-12.1.9",
        "UC-12.1.10",
        "UC-12.1.11",
        "UC-12.1.12",
        "UC-12.1.13",
        "UC-12.1.14",
        "UC-12.1.15",
        "UC-12.1.16",
        "UC-12.1.17",
        "UC-12.1.18",
        "UC-12.1.19",
        "UC-12.1.20",
        "UC-12.2.1",
        "UC-12.2.2",
        "UC-12.2.3",
        "UC-12.2.4",
        "UC-12.2.6",
        "UC-12.2.7",
        "UC-12.2.9",
        "UC-12.2.10",
        "UC-12.2.11",
        "UC-12.2.12",
        "UC-12.2.14",
        "UC-12.2.15",
        "UC-12.2.16",
        "UC-12.2.17",
        "UC-12.2.18",
        "UC-12.2.19",
        "UC-12.2.20",
        "UC-12.2.21",
        "UC-12.2.22",
        "UC-12.2.23",
        "UC-12.2.24",
        "UC-12.2.25",
        "UC-12.2.26",
        "UC-12.2.27",
        "UC-12.3.1",
        "UC-12.3.3",
        "UC-12.3.4",
        "UC-12.3.5",
        "UC-12.3.6",
        "UC-12.3.7",
        "UC-12.3.8",
        "UC-12.3.9",
        "UC-12.3.10",
        "UC-12.3.11",
        "UC-12.3.12",
        "UC-12.4.1",
        "UC-12.4.2",
        "UC-12.4.3",
        "UC-12.4.4",
        "UC-12.4.5",
        "UC-12.4.6",
        "UC-12.4.7",
        "UC-12.4.8",
        "UC-12.4.9",
        "UC-12.4.10",
        "UC-12.4.11",
        "UC-12.4.12",
        "UC-12.4.13",
        "UC-12.4.14",
        "UC-12.4.15",
        "UC-12.4.16",
        "UC-12.5.1",
        "UC-12.5.2",
        "UC-12.5.3",
        "UC-12.5.4",
        "UC-12.5.5",
        "UC-12.5.6",
        "UC-12.5.7",
        "UC-12.5.8",
        "UC-12.5.9",
        "UC-12.5.10",
        "UC-12.6.1",
        "UC-12.6.2",
        "UC-12.6.3",
        "UC-12.6.4"
      ]
    },
    "13": {
      "crawl": [
        "UC-13.1.1",
        "UC-13.1.3",
        "UC-13.1.10",
        "UC-13.2.1",
        "UC-13.2.6"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-13.1.2",
        "UC-13.1.4",
        "UC-13.1.5",
        "UC-13.1.6",
        "UC-13.1.7",
        "UC-13.1.8",
        "UC-13.1.9",
        "UC-13.1.11",
        "UC-13.1.12",
        "UC-13.1.13",
        "UC-13.1.14",
        "UC-13.1.15",
        "UC-13.1.16",
        "UC-13.1.17",
        "UC-13.1.18",
        "UC-13.1.19",
        "UC-13.1.20",
        "UC-13.1.21",
        "UC-13.1.22",
        "UC-13.1.23",
        "UC-13.1.24",
        "UC-13.1.25",
        "UC-13.1.26",
        "UC-13.1.27",
        "UC-13.1.28",
        "UC-13.1.29",
        "UC-13.1.30",
        "UC-13.1.31",
        "UC-13.1.32",
        "UC-13.1.33",
        "UC-13.1.34",
        "UC-13.1.35",
        "UC-13.1.36",
        "UC-13.1.37",
        "UC-13.1.38",
        "UC-13.1.39",
        "UC-13.1.40",
        "UC-13.1.41",
        "UC-13.1.42",
        "UC-13.1.43",
        "UC-13.1.44",
        "UC-13.1.45",
        "UC-13.1.46",
        "UC-13.1.47",
        "UC-13.1.48",
        "UC-13.1.49",
        "UC-13.1.50",
        "UC-13.1.51",
        "UC-13.2.2",
        "UC-13.2.3",
        "UC-13.2.4",
        "UC-13.2.5",
        "UC-13.2.7",
        "UC-13.2.8",
        "UC-13.2.9",
        "UC-13.2.10",
        "UC-13.2.11",
        "UC-13.2.12",
        "UC-13.2.13",
        "UC-13.2.14",
        "UC-13.2.15",
        "UC-13.2.16",
        "UC-13.2.17",
        "UC-13.2.18",
        "UC-13.2.19",
        "UC-13.2.20",
        "UC-13.2.21",
        "UC-13.2.23",
        "UC-13.2.24",
        "UC-13.2.25",
        "UC-13.2.26",
        "UC-13.2.27",
        "UC-13.2.28",
        "UC-13.2.29",
        "UC-13.2.30",
        "UC-13.2.31",
        "UC-13.2.32",
        "UC-13.2.33",
        "UC-13.2.34",
        "UC-13.2.35",
        "UC-13.2.36",
        "UC-13.2.37",
        "UC-13.2.38",
        "UC-13.3.1",
        "UC-13.3.2",
        "UC-13.3.3",
        "UC-13.3.4",
        "UC-13.3.5",
        "UC-13.3.6",
        "UC-13.3.7",
        "UC-13.3.8",
        "UC-13.3.9",
        "UC-13.3.10",
        "UC-13.3.11",
        "UC-13.3.12",
        "UC-13.3.13",
        "UC-13.3.14",
        "UC-13.3.15",
        "UC-13.3.16",
        "UC-13.3.17",
        "UC-13.3.18",
        "UC-13.3.19",
        "UC-13.4.1",
        "UC-13.4.2",
        "UC-13.4.3",
        "UC-13.4.4",
        "UC-13.4.5",
        "UC-13.4.6",
        "UC-13.4.7",
        "UC-13.4.8",
        "UC-13.4.9",
        "UC-13.4.10",
        "UC-13.4.11",
        "UC-13.4.12",
        "UC-13.4.13",
        "UC-13.4.14",
        "UC-13.4.15",
        "UC-13.5.1",
        "UC-13.5.2",
        "UC-13.5.3",
        "UC-13.5.4",
        "UC-13.5.5",
        "UC-13.5.6",
        "UC-13.5.7",
        "UC-13.5.8",
        "UC-13.5.9",
        "UC-13.5.10",
        "UC-13.5.11",
        "UC-13.5.12",
        "UC-13.5.13",
        "UC-13.5.14",
        "UC-13.5.15",
        "UC-13.5.16",
        "UC-13.5.17",
        "UC-13.5.18",
        "UC-13.5.19",
        "UC-13.5.20",
        "UC-13.5.21"
      ]
    },
    "14": {
      "crawl": [
        "UC-14.1.6",
        "UC-14.2.1",
        "UC-14.2.2",
        "UC-14.2.3"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-14.1.1",
        "UC-14.1.5",
        "UC-14.1.7",
        "UC-14.1.8",
        "UC-14.1.9",
        "UC-14.1.10",
        "UC-14.1.11",
        "UC-14.1.12",
        "UC-14.1.13",
        "UC-14.1.14",
        "UC-14.1.15",
        "UC-14.1.16",
        "UC-14.1.17",
        "UC-14.1.18",
        "UC-14.1.19",
        "UC-14.1.20",
        "UC-14.1.21",
        "UC-14.1.22",
        "UC-14.1.23",
        "UC-14.1.24",
        "UC-14.1.25",
        "UC-14.1.26",
        "UC-14.1.27",
        "UC-14.1.28",
        "UC-14.1.29",
        "UC-14.1.30",
        "UC-14.1.31",
        "UC-14.1.32",
        "UC-14.1.33",
        "UC-14.1.34",
        "UC-14.1.35",
        "UC-14.1.36",
        "UC-14.1.37",
        "UC-14.1.38",
        "UC-14.1.39",
        "UC-14.1.40",
        "UC-14.1.41",
        "UC-14.1.42",
        "UC-14.1.43",
        "UC-14.1.44",
        "UC-14.1.45",
        "UC-14.1.46",
        "UC-14.1.47",
        "UC-14.1.48",
        "UC-14.1.49",
        "UC-14.1.50",
        "UC-14.2.4",
        "UC-14.2.5",
        "UC-14.2.6",
        "UC-14.2.7",
        "UC-14.2.8",
        "UC-14.2.9",
        "UC-14.2.10",
        "UC-14.2.11",
        "UC-14.2.12",
        "UC-14.2.13",
        "UC-14.2.14",
        "UC-14.2.15",
        "UC-14.2.16",
        "UC-14.2.17",
        "UC-14.2.18",
        "UC-14.2.19",
        "UC-14.2.20",
        "UC-14.2.21",
        "UC-14.2.22",
        "UC-14.2.23",
        "UC-14.2.24",
        "UC-14.2.25",
        "UC-14.2.26",
        "UC-14.2.27",
        "UC-14.2.28",
        "UC-14.3.1",
        "UC-14.3.2",
        "UC-14.3.3",
        "UC-14.3.4",
        "UC-14.3.5",
        "UC-14.3.6",
        "UC-14.3.7",
        "UC-14.3.9",
        "UC-14.3.10",
        "UC-14.3.11",
        "UC-14.3.12",
        "UC-14.3.13",
        "UC-14.3.14",
        "UC-14.3.15",
        "UC-14.3.16",
        "UC-14.3.17",
        "UC-14.3.18",
        "UC-14.3.19",
        "UC-14.3.20",
        "UC-14.3.21",
        "UC-14.3.22",
        "UC-14.3.23",
        "UC-14.3.24",
        "UC-14.3.25",
        "UC-14.3.26",
        "UC-14.3.27",
        "UC-14.3.28",
        "UC-14.3.29",
        "UC-14.3.30",
        "UC-14.3.31",
        "UC-14.3.32",
        "UC-14.3.33",
        "UC-14.3.34",
        "UC-14.3.35",
        "UC-14.3.36",
        "UC-14.3.37",
        "UC-14.3.38",
        "UC-14.3.39",
        "UC-14.3.40",
        "UC-14.3.41",
        "UC-14.3.42",
        "UC-14.3.43",
        "UC-14.3.44",
        "UC-14.3.45",
        "UC-14.3.46",
        "UC-14.3.47",
        "UC-14.3.48",
        "UC-14.3.49",
        "UC-14.3.50",
        "UC-14.3.51",
        "UC-14.3.52",
        "UC-14.3.53",
        "UC-14.3.54",
        "UC-14.3.55",
        "UC-14.3.56",
        "UC-14.3.57",
        "UC-14.4.1",
        "UC-14.4.2",
        "UC-14.4.3",
        "UC-14.4.4",
        "UC-14.4.5",
        "UC-14.4.6",
        "UC-14.4.7",
        "UC-14.4.8",
        "UC-14.4.9",
        "UC-14.4.10",
        "UC-14.4.11",
        "UC-14.4.12",
        "UC-14.4.13",
        "UC-14.4.14",
        "UC-14.4.15",
        "UC-14.4.16",
        "UC-14.4.17",
        "UC-14.4.18",
        "UC-14.4.19",
        "UC-14.5.1",
        "UC-14.5.2",
        "UC-14.5.3",
        "UC-14.5.4",
        "UC-14.5.5",
        "UC-14.5.6",
        "UC-14.5.7",
        "UC-14.5.8",
        "UC-14.5.9",
        "UC-14.5.10",
        "UC-14.5.11",
        "UC-14.5.12",
        "UC-14.5.13",
        "UC-14.5.14",
        "UC-14.5.15",
        "UC-14.5.16",
        "UC-14.5.17",
        "UC-14.5.18",
        "UC-14.5.19",
        "UC-14.5.20",
        "UC-14.5.21",
        "UC-14.5.22",
        "UC-14.6.1",
        "UC-14.6.2",
        "UC-14.6.3",
        "UC-14.6.4",
        "UC-14.6.5",
        "UC-14.6.6",
        "UC-14.6.7",
        "UC-14.6.8",
        "UC-14.6.9",
        "UC-14.6.10",
        "UC-14.6.11",
        "UC-14.6.12",
        "UC-14.6.13",
        "UC-14.6.14",
        "UC-14.6.15",
        "UC-14.6.16",
        "UC-14.6.17",
        "UC-14.6.18",
        "UC-14.6.19",
        "UC-14.6.20",
        "UC-14.7.1",
        "UC-14.7.2",
        "UC-14.7.3",
        "UC-14.7.4",
        "UC-14.7.5",
        "UC-14.7.6",
        "UC-14.7.7",
        "UC-14.7.8",
        "UC-14.7.9",
        "UC-14.8.1",
        "UC-14.8.2",
        "UC-14.8.3",
        "UC-14.8.4",
        "UC-14.9.1",
        "UC-14.9.2",
        "UC-14.9.3",
        "UC-14.9.4",
        "UC-14.9.5",
        "UC-14.9.6",
        "UC-14.9.7",
        "UC-14.9.8",
        "UC-14.9.9",
        "UC-14.9.10",
        "UC-14.9.11",
        "UC-14.9.12",
        "UC-14.9.13",
        "UC-14.9.14",
        "UC-14.9.15",
        "UC-14.9.16",
        "UC-14.9.17",
        "UC-14.9.18",
        "UC-14.9.19",
        "UC-14.9.20",
        "UC-14.9.21",
        "UC-14.9.22",
        "UC-14.9.23",
        "UC-14.9.24",
        "UC-14.9.25"
      ]
    },
    "15": {
      "crawl": [
        "UC-15.1.1",
        "UC-15.1.3",
        "UC-15.1.6",
        "UC-15.2.1",
        "UC-15.2.3"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-15.1.2",
        "UC-15.1.4",
        "UC-15.1.5",
        "UC-15.1.7",
        "UC-15.1.8",
        "UC-15.1.9",
        "UC-15.1.10",
        "UC-15.1.11",
        "UC-15.1.12",
        "UC-15.1.13",
        "UC-15.1.14",
        "UC-15.1.15",
        "UC-15.1.16",
        "UC-15.1.17",
        "UC-15.1.18",
        "UC-15.1.19",
        "UC-15.1.20",
        "UC-15.1.21",
        "UC-15.1.22",
        "UC-15.1.23",
        "UC-15.1.24",
        "UC-15.1.25",
        "UC-15.2.2",
        "UC-15.2.4",
        "UC-15.2.5",
        "UC-15.2.6",
        "UC-15.2.7",
        "UC-15.2.8",
        "UC-15.2.9",
        "UC-15.2.10",
        "UC-15.2.11",
        "UC-15.2.12",
        "UC-15.2.13",
        "UC-15.2.14",
        "UC-15.2.15",
        "UC-15.2.16",
        "UC-15.2.17",
        "UC-15.2.18",
        "UC-15.2.19",
        "UC-15.3.1",
        "UC-15.3.2",
        "UC-15.3.3",
        "UC-15.3.4",
        "UC-15.3.5",
        "UC-15.3.7",
        "UC-15.3.8",
        "UC-15.3.10",
        "UC-15.3.11",
        "UC-15.3.12",
        "UC-15.3.13",
        "UC-15.3.14",
        "UC-15.3.15",
        "UC-15.3.16",
        "UC-15.3.17",
        "UC-15.3.18",
        "UC-15.3.20",
        "UC-15.3.21",
        "UC-15.3.22",
        "UC-15.3.23",
        "UC-15.3.24",
        "UC-15.3.25",
        "UC-15.3.26",
        "UC-15.3.27",
        "UC-15.3.28",
        "UC-15.3.29",
        "UC-15.3.30",
        "UC-15.3.31",
        "UC-15.3.32",
        "UC-15.3.33",
        "UC-15.3.34",
        "UC-15.3.35",
        "UC-15.3.36",
        "UC-15.3.37",
        "UC-15.3.38",
        "UC-15.3.39",
        "UC-15.3.40"
      ]
    },
    "16": {
      "crawl": [
        "UC-16.1.2",
        "UC-16.1.3",
        "UC-16.1.4",
        "UC-16.1.9",
        "UC-16.2.1"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-16.1.1",
        "UC-16.1.5",
        "UC-16.1.6",
        "UC-16.1.7",
        "UC-16.1.8",
        "UC-16.1.10",
        "UC-16.1.11",
        "UC-16.1.12",
        "UC-16.1.13",
        "UC-16.1.14",
        "UC-16.1.15",
        "UC-16.1.16",
        "UC-16.1.17",
        "UC-16.1.18",
        "UC-16.1.19",
        "UC-16.1.20",
        "UC-16.1.21",
        "UC-16.1.22",
        "UC-16.1.23",
        "UC-16.1.24",
        "UC-16.1.25",
        "UC-16.1.26",
        "UC-16.1.27",
        "UC-16.2.2",
        "UC-16.2.3",
        "UC-16.2.4",
        "UC-16.2.5",
        "UC-16.2.6",
        "UC-16.2.7",
        "UC-16.2.8",
        "UC-16.2.9",
        "UC-16.2.10",
        "UC-16.2.12",
        "UC-16.2.13",
        "UC-16.2.14",
        "UC-16.2.15",
        "UC-16.2.16",
        "UC-16.2.17",
        "UC-16.2.18",
        "UC-16.2.19",
        "UC-16.3.1",
        "UC-16.3.2",
        "UC-16.3.3",
        "UC-16.3.4",
        "UC-16.3.5",
        "UC-16.3.6",
        "UC-16.3.7",
        "UC-16.3.8",
        "UC-16.3.9",
        "UC-16.3.10",
        "UC-16.3.11",
        "UC-16.3.12",
        "UC-16.3.13",
        "UC-16.3.14",
        "UC-16.3.15",
        "UC-16.3.16",
        "UC-16.4.1",
        "UC-16.4.2",
        "UC-16.4.3",
        "UC-16.4.4",
        "UC-16.4.5",
        "UC-16.4.6",
        "UC-16.4.7",
        "UC-16.4.8",
        "UC-16.4.9",
        "UC-16.4.10",
        "UC-16.4.11",
        "UC-16.4.12",
        "UC-16.5.1",
        "UC-16.5.2",
        "UC-16.5.3",
        "UC-16.5.4",
        "UC-16.5.5",
        "UC-16.5.6",
        "UC-16.5.7",
        "UC-16.5.8"
      ]
    },
    "17": {
      "crawl": [
        "UC-17.1.2",
        "UC-17.1.8",
        "UC-17.2.1",
        "UC-17.2.3",
        "UC-17.2.8"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-17.1.1",
        "UC-17.1.3",
        "UC-17.1.4",
        "UC-17.1.5",
        "UC-17.1.6",
        "UC-17.1.7",
        "UC-17.1.9",
        "UC-17.1.10",
        "UC-17.1.11",
        "UC-17.1.12",
        "UC-17.1.13",
        "UC-17.1.14",
        "UC-17.1.15",
        "UC-17.1.16",
        "UC-17.1.17",
        "UC-17.1.18",
        "UC-17.1.19",
        "UC-17.1.20",
        "UC-17.1.21",
        "UC-17.1.22",
        "UC-17.2.2",
        "UC-17.2.4",
        "UC-17.2.5",
        "UC-17.2.6",
        "UC-17.2.7",
        "UC-17.2.9",
        "UC-17.2.10",
        "UC-17.2.11",
        "UC-17.2.12",
        "UC-17.2.13",
        "UC-17.2.14",
        "UC-17.2.15",
        "UC-17.2.16",
        "UC-17.2.17",
        "UC-17.2.18",
        "UC-17.2.19",
        "UC-17.2.20",
        "UC-17.2.21",
        "UC-17.2.22",
        "UC-17.2.23",
        "UC-17.3.1",
        "UC-17.3.2",
        "UC-17.3.3",
        "UC-17.3.4",
        "UC-17.3.5",
        "UC-17.3.6",
        "UC-17.3.7",
        "UC-17.3.8",
        "UC-17.3.11",
        "UC-17.3.12",
        "UC-17.3.13",
        "UC-17.3.14",
        "UC-17.3.15",
        "UC-17.3.16",
        "UC-17.3.17",
        "UC-17.3.18",
        "UC-17.3.19",
        "UC-17.3.20",
        "UC-17.3.21",
        "UC-17.3.22",
        "UC-17.3.23",
        "UC-17.3.24",
        "UC-17.3.25",
        "UC-17.3.26",
        "UC-17.3.27",
        "UC-17.3.28",
        "UC-17.3.29",
        "UC-17.3.30",
        "UC-17.3.31",
        "UC-17.3.32",
        "UC-17.3.33",
        "UC-17.3.34",
        "UC-17.3.35",
        "UC-17.3.36",
        "UC-17.3.37",
        "UC-17.3.38",
        "UC-17.3.39",
        "UC-17.3.40",
        "UC-17.3.41",
        "UC-17.3.42",
        "UC-17.3.43",
        "UC-17.3.44",
        "UC-17.3.45",
        "UC-17.3.46",
        "UC-17.3.47",
        "UC-17.3.48",
        "UC-17.3.49",
        "UC-17.3.50",
        "UC-17.3.51",
        "UC-17.3.52",
        "UC-17.3.53",
        "UC-17.3.54",
        "UC-17.3.55",
        "UC-17.3.56",
        "UC-17.3.57",
        "UC-17.3.58",
        "UC-17.3.59",
        "UC-17.3.60",
        "UC-17.3.61",
        "UC-17.3.62"
      ]
    },
    "18": {
      "crawl": [
        "UC-18.1.2",
        "UC-18.1.3",
        "UC-18.1.4",
        "UC-18.2.1",
        "UC-18.2.5"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-18.1.1",
        "UC-18.1.5",
        "UC-18.1.6",
        "UC-18.1.7",
        "UC-18.1.8",
        "UC-18.1.9",
        "UC-18.1.10",
        "UC-18.1.11",
        "UC-18.1.12",
        "UC-18.1.13",
        "UC-18.1.14",
        "UC-18.1.15",
        "UC-18.1.16",
        "UC-18.1.17",
        "UC-18.1.18",
        "UC-18.1.19",
        "UC-18.1.20",
        "UC-18.1.21",
        "UC-18.1.22",
        "UC-18.1.23",
        "UC-18.2.2",
        "UC-18.2.3",
        "UC-18.2.4",
        "UC-18.2.6",
        "UC-18.2.7",
        "UC-18.2.8",
        "UC-18.2.9",
        "UC-18.2.10",
        "UC-18.2.11",
        "UC-18.2.12",
        "UC-18.2.13",
        "UC-18.2.14",
        "UC-18.2.15",
        "UC-18.2.16",
        "UC-18.2.17",
        "UC-18.2.18",
        "UC-18.3.1",
        "UC-18.3.2",
        "UC-18.3.3",
        "UC-18.3.4",
        "UC-18.3.5",
        "UC-18.3.6",
        "UC-18.3.7",
        "UC-18.3.8",
        "UC-18.3.9",
        "UC-18.3.10",
        "UC-18.3.11",
        "UC-18.3.12",
        "UC-18.3.13",
        "UC-18.3.14",
        "UC-18.3.15",
        "UC-18.3.16",
        "UC-18.3.17",
        "UC-18.3.18",
        "UC-18.3.19",
        "UC-18.3.20",
        "UC-18.3.21",
        "UC-18.3.22",
        "UC-18.4.1",
        "UC-18.4.2",
        "UC-18.4.3",
        "UC-18.4.4",
        "UC-18.4.5",
        "UC-18.4.6",
        "UC-18.4.7",
        "UC-18.4.8",
        "UC-18.4.9",
        "UC-18.4.10",
        "UC-18.4.11",
        "UC-18.4.12",
        "UC-18.4.13"
      ]
    },
    "19": {
      "crawl": [
        "UC-19.1.1",
        "UC-19.1.5",
        "UC-19.2.1",
        "UC-19.2.3",
        "UC-19.2.5"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-19.1.2",
        "UC-19.1.3",
        "UC-19.1.4",
        "UC-19.1.6",
        "UC-19.1.10",
        "UC-19.1.11",
        "UC-19.1.12",
        "UC-19.1.13",
        "UC-19.1.14",
        "UC-19.1.15",
        "UC-19.1.16",
        "UC-19.1.17",
        "UC-19.1.18",
        "UC-19.1.19",
        "UC-19.1.20",
        "UC-19.1.21",
        "UC-19.1.22",
        "UC-19.1.23",
        "UC-19.1.24",
        "UC-19.1.25",
        "UC-19.1.26",
        "UC-19.1.27",
        "UC-19.1.28",
        "UC-19.1.29",
        "UC-19.1.30",
        "UC-19.1.31",
        "UC-19.1.32",
        "UC-19.1.33",
        "UC-19.1.34",
        "UC-19.1.35",
        "UC-19.1.36",
        "UC-19.2.2",
        "UC-19.2.4",
        "UC-19.2.6",
        "UC-19.2.7",
        "UC-19.2.8",
        "UC-19.2.9",
        "UC-19.2.10",
        "UC-19.2.11",
        "UC-19.2.12",
        "UC-19.2.13",
        "UC-19.2.14",
        "UC-19.2.15",
        "UC-19.2.16",
        "UC-19.2.17",
        "UC-19.2.18",
        "UC-19.2.19",
        "UC-19.2.20",
        "UC-19.2.21",
        "UC-19.2.22",
        "UC-19.2.23",
        "UC-19.2.24",
        "UC-19.2.25",
        "UC-19.2.26",
        "UC-19.2.27",
        "UC-19.2.28",
        "UC-19.2.29",
        "UC-19.2.30",
        "UC-19.2.31",
        "UC-19.2.32",
        "UC-19.3.1",
        "UC-19.3.2",
        "UC-19.3.3",
        "UC-19.3.4",
        "UC-19.3.5",
        "UC-19.3.6",
        "UC-19.3.7"
      ]
    },
    "20": {
      "crawl": [
        "UC-20.1.1",
        "UC-20.1.4",
        "UC-20.1.5",
        "UC-20.2.1",
        "UC-20.2.2"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-20.1.2",
        "UC-20.1.3",
        "UC-20.1.6",
        "UC-20.1.7",
        "UC-20.1.8",
        "UC-20.1.9",
        "UC-20.1.10",
        "UC-20.1.11",
        "UC-20.1.12",
        "UC-20.1.13",
        "UC-20.1.14",
        "UC-20.1.15",
        "UC-20.1.16",
        "UC-20.1.17",
        "UC-20.1.18",
        "UC-20.1.19",
        "UC-20.1.20",
        "UC-20.1.21",
        "UC-20.1.22",
        "UC-20.1.23",
        "UC-20.1.24",
        "UC-20.1.25",
        "UC-20.1.26",
        "UC-20.1.27",
        "UC-20.2.3",
        "UC-20.2.4",
        "UC-20.2.5",
        "UC-20.2.6",
        "UC-20.2.7",
        "UC-20.2.8",
        "UC-20.2.9",
        "UC-20.2.10",
        "UC-20.2.11",
        "UC-20.2.12",
        "UC-20.2.13",
        "UC-20.2.14",
        "UC-20.2.15",
        "UC-20.2.16",
        "UC-20.2.17",
        "UC-20.2.18",
        "UC-20.2.19",
        "UC-20.2.20",
        "UC-20.2.21",
        "UC-20.2.22",
        "UC-20.2.23",
        "UC-20.2.24",
        "UC-20.2.25",
        "UC-20.2.26",
        "UC-20.2.27",
        "UC-20.2.28",
        "UC-20.2.29",
        "UC-20.2.30",
        "UC-20.2.31",
        "UC-20.2.32",
        "UC-20.2.33",
        "UC-20.3.1",
        "UC-20.3.2",
        "UC-20.3.3",
        "UC-20.3.4",
        "UC-20.3.5",
        "UC-20.3.6",
        "UC-20.3.7",
        "UC-20.3.8",
        "UC-20.3.9",
        "UC-20.3.10",
        "UC-20.3.11",
        "UC-20.3.12",
        "UC-20.3.13",
        "UC-20.3.14",
        "UC-20.3.15",
        "UC-20.3.16",
        "UC-20.3.17"
      ]
    },
    "21": {
      "crawl": [
        "UC-21.1.1",
        "UC-21.2.1",
        "UC-21.3.1",
        "UC-21.4.1",
        "UC-21.10.1"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-21.1.2",
        "UC-21.1.3",
        "UC-21.1.4",
        "UC-21.1.5",
        "UC-21.1.6",
        "UC-21.1.7",
        "UC-21.1.8",
        "UC-21.1.9",
        "UC-21.1.10",
        "UC-21.1.11",
        "UC-21.1.12",
        "UC-21.1.13",
        "UC-21.1.14",
        "UC-21.1.15",
        "UC-21.2.2",
        "UC-21.2.3",
        "UC-21.2.4",
        "UC-21.2.5",
        "UC-21.2.6",
        "UC-21.2.7",
        "UC-21.2.8",
        "UC-21.2.9",
        "UC-21.2.10",
        "UC-21.2.11",
        "UC-21.2.12",
        "UC-21.2.13",
        "UC-21.2.14",
        "UC-21.2.15",
        "UC-21.3.2",
        "UC-21.3.3",
        "UC-21.3.4",
        "UC-21.3.5",
        "UC-21.3.6",
        "UC-21.3.7",
        "UC-21.3.8",
        "UC-21.3.9",
        "UC-21.3.10",
        "UC-21.3.11",
        "UC-21.3.12",
        "UC-21.3.13",
        "UC-21.3.14",
        "UC-21.3.15",
        "UC-21.3.16",
        "UC-21.3.17",
        "UC-21.3.18",
        "UC-21.3.19",
        "UC-21.3.20",
        "UC-21.3.21",
        "UC-21.3.22",
        "UC-21.3.23",
        "UC-21.3.24",
        "UC-21.3.25",
        "UC-21.3.26",
        "UC-21.3.27",
        "UC-21.4.2",
        "UC-21.4.3",
        "UC-21.4.4",
        "UC-21.4.5",
        "UC-21.4.6",
        "UC-21.4.7",
        "UC-21.4.8",
        "UC-21.4.9",
        "UC-21.4.10",
        "UC-21.4.11",
        "UC-21.4.12",
        "UC-21.5.1",
        "UC-21.5.2",
        "UC-21.5.3",
        "UC-21.5.4",
        "UC-21.5.5",
        "UC-21.5.6",
        "UC-21.5.7",
        "UC-21.5.8",
        "UC-21.5.9",
        "UC-21.5.10",
        "UC-21.5.11",
        "UC-21.5.12",
        "UC-21.6.1",
        "UC-21.6.2",
        "UC-21.6.3",
        "UC-21.6.4",
        "UC-21.6.5",
        "UC-21.6.6",
        "UC-21.6.7",
        "UC-21.6.8",
        "UC-21.6.9",
        "UC-21.6.10",
        "UC-21.6.11",
        "UC-21.6.12",
        "UC-21.7.1",
        "UC-21.7.2",
        "UC-21.7.3",
        "UC-21.7.4",
        "UC-21.7.5",
        "UC-21.7.6",
        "UC-21.7.7",
        "UC-21.7.8",
        "UC-21.7.9",
        "UC-21.7.10",
        "UC-21.8.1",
        "UC-21.8.2",
        "UC-21.8.3",
        "UC-21.8.4",
        "UC-21.8.5",
        "UC-21.8.6",
        "UC-21.8.7",
        "UC-21.8.8",
        "UC-21.8.9",
        "UC-21.8.10",
        "UC-21.9.1",
        "UC-21.9.2",
        "UC-21.9.3",
        "UC-21.9.4",
        "UC-21.9.5",
        "UC-21.9.6",
        "UC-21.9.7",
        "UC-21.9.8",
        "UC-21.10.2",
        "UC-21.10.3",
        "UC-21.10.4",
        "UC-21.10.5",
        "UC-21.10.6",
        "UC-21.10.7",
        "UC-21.10.8"
      ]
    },
    "22": {
      "crawl": [
        "UC-22.1.1",
        "UC-22.2.1",
        "UC-22.3.1",
        "UC-22.4.1",
        "UC-22.8.1"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-22.1.2",
        "UC-22.1.3",
        "UC-22.1.4",
        "UC-22.1.5",
        "UC-22.1.6",
        "UC-22.1.7",
        "UC-22.1.8",
        "UC-22.1.9",
        "UC-22.1.10",
        "UC-22.1.11",
        "UC-22.1.12",
        "UC-22.1.13",
        "UC-22.1.14",
        "UC-22.1.15",
        "UC-22.1.16",
        "UC-22.1.17",
        "UC-22.1.18",
        "UC-22.1.19",
        "UC-22.1.20",
        "UC-22.1.21",
        "UC-22.1.22",
        "UC-22.1.23",
        "UC-22.1.24",
        "UC-22.1.25",
        "UC-22.1.26",
        "UC-22.1.27",
        "UC-22.1.28",
        "UC-22.1.29",
        "UC-22.1.30",
        "UC-22.1.31",
        "UC-22.1.32",
        "UC-22.1.33",
        "UC-22.1.34",
        "UC-22.1.35",
        "UC-22.1.36",
        "UC-22.1.37",
        "UC-22.1.38",
        "UC-22.1.39",
        "UC-22.1.40",
        "UC-22.1.41",
        "UC-22.1.42",
        "UC-22.1.43",
        "UC-22.1.44",
        "UC-22.1.45",
        "UC-22.1.46",
        "UC-22.1.47",
        "UC-22.1.48",
        "UC-22.1.49",
        "UC-22.1.50",
        "UC-22.2.2",
        "UC-22.2.3",
        "UC-22.2.4",
        "UC-22.2.5",
        "UC-22.2.6",
        "UC-22.2.7",
        "UC-22.2.8",
        "UC-22.2.9",
        "UC-22.2.10",
        "UC-22.2.11",
        "UC-22.2.12",
        "UC-22.2.13",
        "UC-22.2.14",
        "UC-22.2.15",
        "UC-22.2.16",
        "UC-22.2.17",
        "UC-22.2.18",
        "UC-22.2.19",
        "UC-22.2.20",
        "UC-22.2.21",
        "UC-22.2.22",
        "UC-22.2.23",
        "UC-22.2.24",
        "UC-22.2.25",
        "UC-22.2.26",
        "UC-22.2.27",
        "UC-22.2.28",
        "UC-22.2.29",
        "UC-22.2.30",
        "UC-22.2.31",
        "UC-22.2.32",
        "UC-22.2.33",
        "UC-22.2.34",
        "UC-22.2.35",
        "UC-22.2.36",
        "UC-22.2.37",
        "UC-22.2.38",
        "UC-22.2.39",
        "UC-22.2.40",
        "UC-22.2.41",
        "UC-22.2.42",
        "UC-22.2.43",
        "UC-22.2.44",
        "UC-22.2.45",
        "UC-22.3.2",
        "UC-22.3.3",
        "UC-22.3.4",
        "UC-22.3.5",
        "UC-22.3.6",
        "UC-22.3.7",
        "UC-22.3.8",
        "UC-22.3.9",
        "UC-22.3.10",
        "UC-22.3.11",
        "UC-22.3.12",
        "UC-22.3.13",
        "UC-22.3.14",
        "UC-22.3.15",
        "UC-22.3.16",
        "UC-22.3.17",
        "UC-22.3.18",
        "UC-22.3.19",
        "UC-22.3.20",
        "UC-22.3.21",
        "UC-22.3.22",
        "UC-22.3.23",
        "UC-22.3.24",
        "UC-22.3.25",
        "UC-22.3.26",
        "UC-22.3.27",
        "UC-22.3.28",
        "UC-22.3.29",
        "UC-22.3.30",
        "UC-22.3.31",
        "UC-22.3.32",
        "UC-22.3.33",
        "UC-22.3.34",
        "UC-22.3.35",
        "UC-22.3.36",
        "UC-22.3.37",
        "UC-22.3.38",
        "UC-22.3.39",
        "UC-22.3.40",
        "UC-22.3.41",
        "UC-22.3.42",
        "UC-22.3.43",
        "UC-22.3.44",
        "UC-22.3.45",
        "UC-22.4.2",
        "UC-22.4.3",
        "UC-22.4.4",
        "UC-22.4.5",
        "UC-22.4.6",
        "UC-22.4.7",
        "UC-22.4.8",
        "UC-22.4.9",
        "UC-22.4.10",
        "UC-22.4.11",
        "UC-22.4.12",
        "UC-22.4.13",
        "UC-22.4.14",
        "UC-22.4.15",
        "UC-22.4.16",
        "UC-22.4.17",
        "UC-22.4.18",
        "UC-22.4.19",
        "UC-22.4.20",
        "UC-22.4.21",
        "UC-22.4.22",
        "UC-22.4.23",
        "UC-22.4.24",
        "UC-22.4.25",
        "UC-22.5.1",
        "UC-22.5.2",
        "UC-22.5.3",
        "UC-22.5.4",
        "UC-22.5.5",
        "UC-22.5.6",
        "UC-22.5.7",
        "UC-22.5.8",
        "UC-22.5.9",
        "UC-22.5.10",
        "UC-22.5.11",
        "UC-22.5.12",
        "UC-22.5.13",
        "UC-22.5.14",
        "UC-22.5.15",
        "UC-22.5.16",
        "UC-22.5.17",
        "UC-22.5.18",
        "UC-22.5.19",
        "UC-22.5.20",
        "UC-22.5.21",
        "UC-22.5.22",
        "UC-22.5.23",
        "UC-22.5.24",
        "UC-22.5.25",
        "UC-22.6.1",
        "UC-22.6.2",
        "UC-22.6.3",
        "UC-22.6.4",
        "UC-22.6.5",
        "UC-22.6.6",
        "UC-22.6.7",
        "UC-22.6.8",
        "UC-22.6.9",
        "UC-22.6.10",
        "UC-22.6.11",
        "UC-22.6.12",
        "UC-22.6.13",
        "UC-22.6.14",
        "UC-22.6.15",
        "UC-22.6.16",
        "UC-22.6.17",
        "UC-22.6.18",
        "UC-22.6.19",
        "UC-22.6.20",
        "UC-22.6.21",
        "UC-22.6.22",
        "UC-22.6.23",
        "UC-22.6.24",
        "UC-22.6.25",
        "UC-22.6.26",
        "UC-22.6.27",
        "UC-22.6.28",
        "UC-22.6.29",
        "UC-22.6.30",
        "UC-22.6.31",
        "UC-22.6.32",
        "UC-22.6.33",
        "UC-22.6.34",
        "UC-22.6.35",
        "UC-22.6.36",
        "UC-22.6.37",
        "UC-22.6.38",
        "UC-22.6.39",
        "UC-22.6.40",
        "UC-22.6.41",
        "UC-22.6.42",
        "UC-22.6.43",
        "UC-22.6.44",
        "UC-22.6.45",
        "UC-22.6.46",
        "UC-22.6.47",
        "UC-22.6.48",
        "UC-22.6.49",
        "UC-22.6.50",
        "UC-22.6.51",
        "UC-22.6.52",
        "UC-22.6.53",
        "UC-22.6.54",
        "UC-22.6.55",
        "UC-22.7.1",
        "UC-22.7.2",
        "UC-22.7.3",
        "UC-22.7.4",
        "UC-22.7.5",
        "UC-22.7.6",
        "UC-22.7.7",
        "UC-22.7.8",
        "UC-22.7.9",
        "UC-22.7.10",
        "UC-22.7.11",
        "UC-22.7.12",
        "UC-22.7.13",
        "UC-22.7.14",
        "UC-22.7.15",
        "UC-22.7.16",
        "UC-22.7.17",
        "UC-22.7.18",
        "UC-22.7.19",
        "UC-22.7.20",
        "UC-22.7.21",
        "UC-22.7.22",
        "UC-22.7.23",
        "UC-22.7.24",
        "UC-22.7.25",
        "UC-22.7.26",
        "UC-22.7.27",
        "UC-22.7.28",
        "UC-22.7.29",
        "UC-22.7.30",
        "UC-22.7.31",
        "UC-22.7.32",
        "UC-22.7.33",
        "UC-22.7.34",
        "UC-22.7.35",
        "UC-22.7.36",
        "UC-22.7.37",
        "UC-22.7.38",
        "UC-22.7.39",
        "UC-22.7.40",
        "UC-22.7.41",
        "UC-22.7.42",
        "UC-22.7.43",
        "UC-22.7.44",
        "UC-22.7.45",
        "UC-22.7.46",
        "UC-22.7.47",
        "UC-22.7.48",
        "UC-22.7.49",
        "UC-22.7.50",
        "UC-22.8.2",
        "UC-22.8.3",
        "UC-22.8.4",
        "UC-22.8.5",
        "UC-22.8.6",
        "UC-22.8.7",
        "UC-22.8.8",
        "UC-22.8.9",
        "UC-22.8.10",
        "UC-22.8.11",
        "UC-22.8.12",
        "UC-22.8.13",
        "UC-22.8.14",
        "UC-22.8.15",
        "UC-22.8.16",
        "UC-22.8.17",
        "UC-22.8.18",
        "UC-22.8.19",
        "UC-22.8.20",
        "UC-22.8.21",
        "UC-22.8.22",
        "UC-22.8.23",
        "UC-22.8.24",
        "UC-22.8.25",
        "UC-22.8.26",
        "UC-22.8.27",
        "UC-22.8.28",
        "UC-22.8.29",
        "UC-22.8.30",
        "UC-22.8.31",
        "UC-22.8.32",
        "UC-22.8.33",
        "UC-22.8.34",
        "UC-22.8.35",
        "UC-22.8.36",
        "UC-22.8.37",
        "UC-22.8.38",
        "UC-22.8.39",
        "UC-22.9.1",
        "UC-22.9.2",
        "UC-22.9.3",
        "UC-22.9.4",
        "UC-22.9.5",
        "UC-22.9.6",
        "UC-22.9.7",
        "UC-22.9.8",
        "UC-22.9.9",
        "UC-22.9.10",
        "UC-22.10.1",
        "UC-22.10.2",
        "UC-22.10.3",
        "UC-22.10.4",
        "UC-22.10.5",
        "UC-22.10.6",
        "UC-22.10.7",
        "UC-22.10.8",
        "UC-22.10.9",
        "UC-22.10.10",
        "UC-22.10.11",
        "UC-22.10.12",
        "UC-22.10.13",
        "UC-22.10.14",
        "UC-22.10.15",
        "UC-22.10.16",
        "UC-22.10.17",
        "UC-22.10.18",
        "UC-22.10.19",
        "UC-22.10.20",
        "UC-22.10.21",
        "UC-22.10.22",
        "UC-22.10.23",
        "UC-22.10.24",
        "UC-22.10.25",
        "UC-22.10.26",
        "UC-22.10.27",
        "UC-22.10.28",
        "UC-22.10.29",
        "UC-22.10.30",
        "UC-22.10.31",
        "UC-22.10.32",
        "UC-22.10.33",
        "UC-22.10.34",
        "UC-22.10.35",
        "UC-22.10.36",
        "UC-22.10.37",
        "UC-22.10.38",
        "UC-22.10.39",
        "UC-22.10.40",
        "UC-22.10.41",
        "UC-22.10.42",
        "UC-22.10.43",
        "UC-22.10.44",
        "UC-22.10.45",
        "UC-22.10.46",
        "UC-22.10.47",
        "UC-22.10.48",
        "UC-22.10.49",
        "UC-22.10.50",
        "UC-22.10.51",
        "UC-22.10.52",
        "UC-22.10.53",
        "UC-22.10.54",
        "UC-22.10.55",
        "UC-22.11.1",
        "UC-22.11.2",
        "UC-22.11.3",
        "UC-22.11.4",
        "UC-22.11.5",
        "UC-22.11.6",
        "UC-22.11.7",
        "UC-22.11.8",
        "UC-22.11.9",
        "UC-22.11.10",
        "UC-22.11.11",
        "UC-22.11.12",
        "UC-22.11.13",
        "UC-22.11.14",
        "UC-22.11.15",
        "UC-22.11.16",
        "UC-22.11.17",
        "UC-22.11.18",
        "UC-22.11.19",
        "UC-22.11.20",
        "UC-22.11.21",
        "UC-22.11.22",
        "UC-22.11.23",
        "UC-22.11.24",
        "UC-22.11.25",
        "UC-22.11.26",
        "UC-22.11.27",
        "UC-22.11.28",
        "UC-22.11.29",
        "UC-22.11.30",
        "UC-22.11.31",
        "UC-22.11.32",
        "UC-22.11.33",
        "UC-22.11.34",
        "UC-22.11.35",
        "UC-22.11.36",
        "UC-22.11.37",
        "UC-22.11.38",
        "UC-22.11.39",
        "UC-22.11.40",
        "UC-22.11.41",
        "UC-22.11.42",
        "UC-22.11.43",
        "UC-22.11.44",
        "UC-22.11.45",
        "UC-22.11.46",
        "UC-22.11.47",
        "UC-22.11.48",
        "UC-22.11.49",
        "UC-22.11.50",
        "UC-22.11.51",
        "UC-22.11.52",
        "UC-22.11.53",
        "UC-22.11.54",
        "UC-22.11.55",
        "UC-22.11.56",
        "UC-22.11.57",
        "UC-22.11.58",
        "UC-22.11.59",
        "UC-22.11.60",
        "UC-22.11.61",
        "UC-22.11.62",
        "UC-22.11.63",
        "UC-22.11.64",
        "UC-22.11.65",
        "UC-22.11.66",
        "UC-22.11.67",
        "UC-22.11.68",
        "UC-22.11.69",
        "UC-22.11.70",
        "UC-22.11.71",
        "UC-22.11.72",
        "UC-22.11.73",
        "UC-22.11.74",
        "UC-22.11.75",
        "UC-22.11.76",
        "UC-22.11.77",
        "UC-22.11.78",
        "UC-22.11.79",
        "UC-22.11.80",
        "UC-22.11.81",
        "UC-22.11.82",
        "UC-22.11.83",
        "UC-22.11.84",
        "UC-22.11.85",
        "UC-22.11.86",
        "UC-22.11.87",
        "UC-22.11.88",
        "UC-22.11.89",
        "UC-22.11.90",
        "UC-22.11.91",
        "UC-22.11.92",
        "UC-22.11.93",
        "UC-22.11.94",
        "UC-22.11.95",
        "UC-22.11.96",
        "UC-22.11.97",
        "UC-22.11.98",
        "UC-22.11.99",
        "UC-22.11.100",
        "UC-22.11.101",
        "UC-22.11.102",
        "UC-22.11.103",
        "UC-22.11.104",
        "UC-22.11.105",
        "UC-22.11.106",
        "UC-22.12.1",
        "UC-22.12.2",
        "UC-22.12.3",
        "UC-22.12.4",
        "UC-22.12.5",
        "UC-22.12.6",
        "UC-22.12.7",
        "UC-22.12.8",
        "UC-22.12.9",
        "UC-22.12.10",
        "UC-22.12.11",
        "UC-22.12.12",
        "UC-22.12.13",
        "UC-22.12.14",
        "UC-22.12.15",
        "UC-22.12.16",
        "UC-22.12.17",
        "UC-22.12.18",
        "UC-22.12.19",
        "UC-22.12.20",
        "UC-22.12.21",
        "UC-22.12.22",
        "UC-22.12.23",
        "UC-22.12.24",
        "UC-22.12.25",
        "UC-22.12.26",
        "UC-22.12.27",
        "UC-22.12.28",
        "UC-22.12.29",
        "UC-22.12.30",
        "UC-22.12.31",
        "UC-22.12.32",
        "UC-22.12.33",
        "UC-22.12.34",
        "UC-22.12.35",
        "UC-22.12.36",
        "UC-22.12.37",
        "UC-22.12.38",
        "UC-22.12.39",
        "UC-22.12.40",
        "UC-22.13.1",
        "UC-22.13.2",
        "UC-22.13.3",
        "UC-22.13.4",
        "UC-22.13.5",
        "UC-22.13.6",
        "UC-22.13.7",
        "UC-22.13.8",
        "UC-22.13.9",
        "UC-22.13.10",
        "UC-22.13.11",
        "UC-22.13.12",
        "UC-22.13.13",
        "UC-22.13.14",
        "UC-22.13.15",
        "UC-22.13.16",
        "UC-22.13.17",
        "UC-22.13.18",
        "UC-22.13.19",
        "UC-22.13.20",
        "UC-22.13.21",
        "UC-22.13.22",
        "UC-22.13.23",
        "UC-22.13.24",
        "UC-22.13.25",
        "UC-22.13.26",
        "UC-22.13.27",
        "UC-22.13.28",
        "UC-22.13.29",
        "UC-22.13.30",
        "UC-22.13.31",
        "UC-22.13.32",
        "UC-22.13.33",
        "UC-22.13.34",
        "UC-22.13.35",
        "UC-22.13.36",
        "UC-22.13.37",
        "UC-22.13.38",
        "UC-22.13.39",
        "UC-22.13.40",
        "UC-22.13.41",
        "UC-22.13.42",
        "UC-22.13.43",
        "UC-22.13.44",
        "UC-22.13.45",
        "UC-22.13.46",
        "UC-22.13.47",
        "UC-22.13.48",
        "UC-22.13.49",
        "UC-22.13.50",
        "UC-22.13.51",
        "UC-22.13.52",
        "UC-22.13.53",
        "UC-22.13.54",
        "UC-22.13.55",
        "UC-22.13.56",
        "UC-22.13.57",
        "UC-22.13.58",
        "UC-22.13.59",
        "UC-22.13.60",
        "UC-22.13.61",
        "UC-22.13.62",
        "UC-22.13.63",
        "UC-22.13.64",
        "UC-22.13.65",
        "UC-22.13.66",
        "UC-22.13.67",
        "UC-22.13.68",
        "UC-22.13.69",
        "UC-22.13.70",
        "UC-22.14.1",
        "UC-22.14.2",
        "UC-22.14.3",
        "UC-22.14.4",
        "UC-22.14.5",
        "UC-22.14.6",
        "UC-22.14.7",
        "UC-22.14.8",
        "UC-22.14.9",
        "UC-22.14.10",
        "UC-22.14.11",
        "UC-22.14.12",
        "UC-22.14.13",
        "UC-22.14.14",
        "UC-22.14.15",
        "UC-22.14.16",
        "UC-22.14.17",
        "UC-22.14.18",
        "UC-22.14.19",
        "UC-22.14.20",
        "UC-22.14.21",
        "UC-22.14.22",
        "UC-22.14.23",
        "UC-22.14.24",
        "UC-22.14.25",
        "UC-22.14.26",
        "UC-22.14.27",
        "UC-22.14.28",
        "UC-22.14.29",
        "UC-22.14.30",
        "UC-22.14.31",
        "UC-22.14.32",
        "UC-22.14.33",
        "UC-22.14.34",
        "UC-22.14.35",
        "UC-22.14.36",
        "UC-22.14.37",
        "UC-22.14.38",
        "UC-22.14.39",
        "UC-22.14.40",
        "UC-22.14.41",
        "UC-22.14.42",
        "UC-22.14.43",
        "UC-22.14.44",
        "UC-22.14.45",
        "UC-22.14.46",
        "UC-22.14.47",
        "UC-22.14.48",
        "UC-22.14.49",
        "UC-22.14.50",
        "UC-22.14.51",
        "UC-22.14.52",
        "UC-22.14.53",
        "UC-22.14.54",
        "UC-22.14.55",
        "UC-22.14.56",
        "UC-22.14.57",
        "UC-22.14.58",
        "UC-22.14.59",
        "UC-22.14.60",
        "UC-22.14.61",
        "UC-22.14.62",
        "UC-22.14.63",
        "UC-22.14.64",
        "UC-22.14.65",
        "UC-22.14.66",
        "UC-22.14.67",
        "UC-22.14.68",
        "UC-22.14.69",
        "UC-22.14.70",
        "UC-22.14.71",
        "UC-22.14.72",
        "UC-22.14.73",
        "UC-22.14.74",
        "UC-22.14.75",
        "UC-22.14.76",
        "UC-22.14.77",
        "UC-22.14.78",
        "UC-22.14.79",
        "UC-22.14.80",
        "UC-22.15.1",
        "UC-22.15.2",
        "UC-22.15.3",
        "UC-22.15.4",
        "UC-22.15.5",
        "UC-22.15.6",
        "UC-22.15.7",
        "UC-22.15.8",
        "UC-22.15.9",
        "UC-22.15.10",
        "UC-22.15.11",
        "UC-22.15.12",
        "UC-22.15.13",
        "UC-22.15.14",
        "UC-22.15.15",
        "UC-22.15.16",
        "UC-22.15.17",
        "UC-22.15.18",
        "UC-22.15.19",
        "UC-22.15.20",
        "UC-22.15.21",
        "UC-22.15.22",
        "UC-22.15.23",
        "UC-22.15.24",
        "UC-22.15.25",
        "UC-22.15.26",
        "UC-22.15.27",
        "UC-22.15.28",
        "UC-22.15.29",
        "UC-22.15.30",
        "UC-22.15.31",
        "UC-22.15.32",
        "UC-22.15.33",
        "UC-22.15.34",
        "UC-22.15.35",
        "UC-22.15.36",
        "UC-22.15.37",
        "UC-22.15.38",
        "UC-22.15.39",
        "UC-22.15.40",
        "UC-22.15.41",
        "UC-22.15.42",
        "UC-22.15.43",
        "UC-22.15.44",
        "UC-22.15.45",
        "UC-22.15.46",
        "UC-22.15.47",
        "UC-22.15.48",
        "UC-22.15.49",
        "UC-22.15.50",
        "UC-22.15.51",
        "UC-22.15.52",
        "UC-22.15.53",
        "UC-22.15.54",
        "UC-22.15.55",
        "UC-22.16.1",
        "UC-22.16.2",
        "UC-22.16.3",
        "UC-22.16.4",
        "UC-22.16.5",
        "UC-22.16.6",
        "UC-22.16.7",
        "UC-22.16.8",
        "UC-22.16.9",
        "UC-22.16.10",
        "UC-22.16.11",
        "UC-22.16.12",
        "UC-22.16.13",
        "UC-22.16.14",
        "UC-22.16.15",
        "UC-22.16.16",
        "UC-22.16.17",
        "UC-22.16.18",
        "UC-22.16.19",
        "UC-22.16.20",
        "UC-22.16.21",
        "UC-22.16.22",
        "UC-22.16.23",
        "UC-22.16.24",
        "UC-22.16.25",
        "UC-22.16.26",
        "UC-22.16.27",
        "UC-22.16.28",
        "UC-22.16.29",
        "UC-22.16.30",
        "UC-22.17.1",
        "UC-22.17.2",
        "UC-22.17.3",
        "UC-22.17.4",
        "UC-22.17.5",
        "UC-22.17.6",
        "UC-22.17.7",
        "UC-22.17.8",
        "UC-22.17.9",
        "UC-22.17.10",
        "UC-22.17.11",
        "UC-22.17.12",
        "UC-22.17.13",
        "UC-22.17.14",
        "UC-22.17.15",
        "UC-22.17.16",
        "UC-22.17.17",
        "UC-22.17.18",
        "UC-22.17.19",
        "UC-22.17.20",
        "UC-22.17.21",
        "UC-22.17.22",
        "UC-22.17.23",
        "UC-22.17.24",
        "UC-22.17.25",
        "UC-22.18.1",
        "UC-22.18.2",
        "UC-22.18.3",
        "UC-22.18.4",
        "UC-22.18.5",
        "UC-22.18.6",
        "UC-22.18.7",
        "UC-22.18.8",
        "UC-22.18.9",
        "UC-22.18.10",
        "UC-22.18.11",
        "UC-22.18.12",
        "UC-22.18.13",
        "UC-22.18.14",
        "UC-22.18.15",
        "UC-22.18.16",
        "UC-22.18.17",
        "UC-22.18.18",
        "UC-22.18.19",
        "UC-22.18.20",
        "UC-22.18.21",
        "UC-22.18.22",
        "UC-22.18.23",
        "UC-22.18.24",
        "UC-22.18.25",
        "UC-22.18.26",
        "UC-22.18.27",
        "UC-22.18.28",
        "UC-22.18.29",
        "UC-22.18.30",
        "UC-22.18.31",
        "UC-22.18.32",
        "UC-22.18.33",
        "UC-22.18.34",
        "UC-22.18.35",
        "UC-22.19.1",
        "UC-22.19.2",
        "UC-22.19.3",
        "UC-22.19.4",
        "UC-22.19.5",
        "UC-22.19.6",
        "UC-22.19.7",
        "UC-22.19.8",
        "UC-22.19.9",
        "UC-22.19.10",
        "UC-22.19.11",
        "UC-22.19.12",
        "UC-22.19.13",
        "UC-22.19.14",
        "UC-22.19.15",
        "UC-22.19.16",
        "UC-22.19.17",
        "UC-22.19.18",
        "UC-22.19.19",
        "UC-22.19.20",
        "UC-22.19.21",
        "UC-22.19.22",
        "UC-22.19.23",
        "UC-22.19.24",
        "UC-22.19.25",
        "UC-22.20.1",
        "UC-22.20.2",
        "UC-22.20.3",
        "UC-22.20.4",
        "UC-22.20.5",
        "UC-22.20.6",
        "UC-22.20.7",
        "UC-22.20.8",
        "UC-22.20.9",
        "UC-22.20.10",
        "UC-22.20.11",
        "UC-22.20.12",
        "UC-22.20.13",
        "UC-22.20.14",
        "UC-22.20.15",
        "UC-22.20.16",
        "UC-22.20.17",
        "UC-22.20.18",
        "UC-22.20.19",
        "UC-22.20.20",
        "UC-22.21.1",
        "UC-22.21.2",
        "UC-22.21.3",
        "UC-22.21.4",
        "UC-22.21.5",
        "UC-22.21.6",
        "UC-22.21.7",
        "UC-22.21.8",
        "UC-22.21.9",
        "UC-22.21.10",
        "UC-22.21.11",
        "UC-22.21.12",
        "UC-22.21.13",
        "UC-22.21.14",
        "UC-22.21.15",
        "UC-22.21.16",
        "UC-22.21.17",
        "UC-22.21.18",
        "UC-22.21.19",
        "UC-22.21.20",
        "UC-22.21.21",
        "UC-22.21.22",
        "UC-22.21.23",
        "UC-22.21.24",
        "UC-22.21.25",
        "UC-22.22.1",
        "UC-22.22.2",
        "UC-22.22.3",
        "UC-22.22.4",
        "UC-22.22.5",
        "UC-22.22.6",
        "UC-22.22.7",
        "UC-22.22.8",
        "UC-22.22.9",
        "UC-22.22.10",
        "UC-22.22.11",
        "UC-22.22.12",
        "UC-22.22.13",
        "UC-22.22.14",
        "UC-22.22.15",
        "UC-22.22.16",
        "UC-22.22.17",
        "UC-22.22.18",
        "UC-22.22.19",
        "UC-22.22.20",
        "UC-22.22.21",
        "UC-22.22.22",
        "UC-22.22.23",
        "UC-22.22.24",
        "UC-22.22.25",
        "UC-22.22.26",
        "UC-22.22.27",
        "UC-22.22.28",
        "UC-22.22.29",
        "UC-22.22.30",
        "UC-22.23.1",
        "UC-22.23.2",
        "UC-22.23.3",
        "UC-22.23.4",
        "UC-22.23.5",
        "UC-22.23.6",
        "UC-22.23.7",
        "UC-22.23.8",
        "UC-22.23.9",
        "UC-22.23.10",
        "UC-22.23.11",
        "UC-22.23.12",
        "UC-22.23.13",
        "UC-22.23.14",
        "UC-22.23.15",
        "UC-22.23.16",
        "UC-22.23.17",
        "UC-22.23.18",
        "UC-22.23.19",
        "UC-22.23.20",
        "UC-22.24.1",
        "UC-22.24.2",
        "UC-22.24.3",
        "UC-22.24.4",
        "UC-22.24.5",
        "UC-22.24.6",
        "UC-22.24.7",
        "UC-22.24.8",
        "UC-22.24.9",
        "UC-22.24.10",
        "UC-22.24.11",
        "UC-22.24.12",
        "UC-22.24.13",
        "UC-22.24.14",
        "UC-22.24.15",
        "UC-22.25.1",
        "UC-22.25.2",
        "UC-22.25.3",
        "UC-22.25.4",
        "UC-22.25.5",
        "UC-22.25.6",
        "UC-22.25.7",
        "UC-22.25.8",
        "UC-22.25.9",
        "UC-22.25.10",
        "UC-22.25.11",
        "UC-22.25.12",
        "UC-22.25.13",
        "UC-22.25.14",
        "UC-22.25.15",
        "UC-22.25.16",
        "UC-22.25.17",
        "UC-22.25.18",
        "UC-22.25.19",
        "UC-22.25.20",
        "UC-22.25.21",
        "UC-22.25.22",
        "UC-22.25.23",
        "UC-22.25.24",
        "UC-22.25.25",
        "UC-22.25.26",
        "UC-22.25.27",
        "UC-22.25.28",
        "UC-22.25.29",
        "UC-22.25.30",
        "UC-22.25.31",
        "UC-22.25.32",
        "UC-22.25.33",
        "UC-22.25.34",
        "UC-22.25.35",
        "UC-22.26.1",
        "UC-22.26.2",
        "UC-22.26.3",
        "UC-22.26.4",
        "UC-22.26.5",
        "UC-22.26.6",
        "UC-22.26.7",
        "UC-22.26.8",
        "UC-22.26.9",
        "UC-22.26.10",
        "UC-22.26.11",
        "UC-22.26.12",
        "UC-22.26.13",
        "UC-22.26.14",
        "UC-22.26.15",
        "UC-22.26.16",
        "UC-22.26.17",
        "UC-22.26.18",
        "UC-22.26.19",
        "UC-22.26.20",
        "UC-22.27.1",
        "UC-22.27.2",
        "UC-22.27.3",
        "UC-22.27.4",
        "UC-22.27.5",
        "UC-22.27.6",
        "UC-22.27.7",
        "UC-22.27.8",
        "UC-22.27.9",
        "UC-22.27.10",
        "UC-22.27.11",
        "UC-22.27.12",
        "UC-22.27.13",
        "UC-22.27.14",
        "UC-22.27.15",
        "UC-22.27.16",
        "UC-22.27.17",
        "UC-22.27.18",
        "UC-22.27.19",
        "UC-22.27.20",
        "UC-22.27.21",
        "UC-22.27.22",
        "UC-22.27.23",
        "UC-22.27.24",
        "UC-22.27.25",
        "UC-22.27.26",
        "UC-22.27.27",
        "UC-22.27.28",
        "UC-22.27.29",
        "UC-22.27.30",
        "UC-22.28.1",
        "UC-22.28.2",
        "UC-22.28.3",
        "UC-22.28.4",
        "UC-22.28.5",
        "UC-22.28.6",
        "UC-22.28.7",
        "UC-22.28.8",
        "UC-22.28.9",
        "UC-22.28.10",
        "UC-22.28.11",
        "UC-22.28.12",
        "UC-22.28.13",
        "UC-22.28.14",
        "UC-22.28.15",
        "UC-22.28.16",
        "UC-22.28.17",
        "UC-22.28.18",
        "UC-22.28.19",
        "UC-22.28.20",
        "UC-22.29.1",
        "UC-22.29.2",
        "UC-22.29.3",
        "UC-22.29.4",
        "UC-22.29.5",
        "UC-22.29.6",
        "UC-22.29.7",
        "UC-22.29.8",
        "UC-22.29.9",
        "UC-22.29.10",
        "UC-22.29.11",
        "UC-22.29.12",
        "UC-22.29.13",
        "UC-22.29.14",
        "UC-22.29.15",
        "UC-22.29.16",
        "UC-22.29.17",
        "UC-22.29.18",
        "UC-22.29.19",
        "UC-22.29.20",
        "UC-22.29.21",
        "UC-22.29.22",
        "UC-22.29.23",
        "UC-22.29.24",
        "UC-22.29.25",
        "UC-22.29.26",
        "UC-22.29.27",
        "UC-22.29.28",
        "UC-22.29.29",
        "UC-22.29.30",
        "UC-22.30.1",
        "UC-22.30.2",
        "UC-22.30.3",
        "UC-22.30.4",
        "UC-22.30.5",
        "UC-22.30.6",
        "UC-22.30.7",
        "UC-22.30.8",
        "UC-22.30.9",
        "UC-22.30.10",
        "UC-22.30.11",
        "UC-22.30.12",
        "UC-22.30.13",
        "UC-22.30.14",
        "UC-22.30.15",
        "UC-22.30.16",
        "UC-22.30.17",
        "UC-22.30.18",
        "UC-22.30.19",
        "UC-22.30.20",
        "UC-22.30.21",
        "UC-22.30.22",
        "UC-22.30.23",
        "UC-22.30.24",
        "UC-22.30.25",
        "UC-22.31.1",
        "UC-22.31.2",
        "UC-22.31.3",
        "UC-22.31.4",
        "UC-22.31.5",
        "UC-22.31.6",
        "UC-22.31.7",
        "UC-22.31.8",
        "UC-22.31.9",
        "UC-22.31.10",
        "UC-22.31.11",
        "UC-22.31.12",
        "UC-22.31.13",
        "UC-22.31.14",
        "UC-22.31.15",
        "UC-22.31.16",
        "UC-22.31.17",
        "UC-22.31.18",
        "UC-22.31.19",
        "UC-22.31.20",
        "UC-22.32.1",
        "UC-22.32.2",
        "UC-22.32.3",
        "UC-22.32.4",
        "UC-22.32.5",
        "UC-22.32.6",
        "UC-22.32.7",
        "UC-22.32.8",
        "UC-22.32.9",
        "UC-22.32.10",
        "UC-22.32.11",
        "UC-22.32.12",
        "UC-22.32.13",
        "UC-22.32.14",
        "UC-22.32.15",
        "UC-22.32.16",
        "UC-22.32.17",
        "UC-22.32.18",
        "UC-22.32.19",
        "UC-22.32.20",
        "UC-22.32.21",
        "UC-22.32.22",
        "UC-22.32.23",
        "UC-22.32.24",
        "UC-22.32.25",
        "UC-22.33.1",
        "UC-22.33.2",
        "UC-22.33.3",
        "UC-22.33.4",
        "UC-22.33.5",
        "UC-22.33.6",
        "UC-22.33.7",
        "UC-22.33.8",
        "UC-22.33.9",
        "UC-22.33.10",
        "UC-22.33.11",
        "UC-22.33.12",
        "UC-22.33.13",
        "UC-22.33.14",
        "UC-22.33.15",
        "UC-22.33.16",
        "UC-22.33.17",
        "UC-22.33.18",
        "UC-22.33.19",
        "UC-22.33.20",
        "UC-22.34.1",
        "UC-22.34.2",
        "UC-22.34.3",
        "UC-22.34.4",
        "UC-22.34.5",
        "UC-22.34.6",
        "UC-22.34.7",
        "UC-22.34.8",
        "UC-22.34.9",
        "UC-22.34.10",
        "UC-22.34.11",
        "UC-22.34.12",
        "UC-22.35.1",
        "UC-22.35.2",
        "UC-22.35.3",
        "UC-22.35.4",
        "UC-22.35.5",
        "UC-22.36.1",
        "UC-22.36.2",
        "UC-22.36.3",
        "UC-22.36.4",
        "UC-22.36.5",
        "UC-22.37.1",
        "UC-22.37.2",
        "UC-22.37.3",
        "UC-22.37.4",
        "UC-22.37.5",
        "UC-22.38.1",
        "UC-22.38.2",
        "UC-22.38.3",
        "UC-22.38.4",
        "UC-22.38.5",
        "UC-22.39.1",
        "UC-22.39.2",
        "UC-22.39.3",
        "UC-22.39.4",
        "UC-22.39.5",
        "UC-22.40.1",
        "UC-22.40.2",
        "UC-22.40.3",
        "UC-22.40.4",
        "UC-22.40.5",
        "UC-22.41.1",
        "UC-22.41.2",
        "UC-22.41.3",
        "UC-22.41.4",
        "UC-22.41.5",
        "UC-22.42.1",
        "UC-22.42.2",
        "UC-22.42.3",
        "UC-22.42.4",
        "UC-22.42.5",
        "UC-22.43.1",
        "UC-22.43.2",
        "UC-22.43.3",
        "UC-22.43.4",
        "UC-22.43.5",
        "UC-22.44.1",
        "UC-22.44.2",
        "UC-22.44.3",
        "UC-22.44.4",
        "UC-22.44.5",
        "UC-22.45.1",
        "UC-22.45.2",
        "UC-22.45.3",
        "UC-22.45.4",
        "UC-22.45.5",
        "UC-22.46.1",
        "UC-22.46.2",
        "UC-22.46.3",
        "UC-22.46.4",
        "UC-22.46.5",
        "UC-22.47.1",
        "UC-22.47.2",
        "UC-22.47.3",
        "UC-22.47.4",
        "UC-22.47.5",
        "UC-22.48.1",
        "UC-22.48.2",
        "UC-22.48.3",
        "UC-22.48.4",
        "UC-22.48.5",
        "UC-22.49.1",
        "UC-22.49.2",
        "UC-22.49.3",
        "UC-22.49.4",
        "UC-22.49.5",
        "UC-22.50.1",
        "UC-22.50.2",
        "UC-22.50.3",
        "UC-22.50.4",
        "UC-22.50.5",
        "UC-22.50.6",
        "UC-22.50.7",
        "UC-22.50.8",
        "UC-22.50.9",
        "UC-22.50.10",
        "UC-22.50.11",
        "UC-22.50.12",
        "UC-22.50.13",
        "UC-22.50.14",
        "UC-22.50.15",
        "UC-22.50.16",
        "UC-22.50.17",
        "UC-22.50.18",
        "UC-22.50.19",
        "UC-22.50.20",
        "UC-22.50.21",
        "UC-22.50.22",
        "UC-22.50.23"
      ]
    },
    "23": {
      "crawl": [
        "UC-23.1.1",
        "UC-23.2.1",
        "UC-23.2.3",
        "UC-23.8.1",
        "UC-23.9.1"
      ],
      "walk": [],
      "run": [],
      "unassigned": [
        "UC-23.1.2",
        "UC-23.1.3",
        "UC-23.1.4",
        "UC-23.1.5",
        "UC-23.1.6",
        "UC-23.1.7",
        "UC-23.1.8",
        "UC-23.1.9",
        "UC-23.2.2",
        "UC-23.2.4",
        "UC-23.2.5",
        "UC-23.2.6",
        "UC-23.2.7",
        "UC-23.2.8",
        "UC-23.3.1",
        "UC-23.3.2",
        "UC-23.3.3",
        "UC-23.3.4",
        "UC-23.3.5",
        "UC-23.3.6",
        "UC-23.3.7",
        "UC-23.4.1",
        "UC-23.4.2",
        "UC-23.4.3",
        "UC-23.4.4",
        "UC-23.4.5",
        "UC-23.4.6",
        "UC-23.4.7",
        "UC-23.5.1",
        "UC-23.5.2",
        "UC-23.5.3",
        "UC-23.5.4",
        "UC-23.5.5",
        "UC-23.5.6",
        "UC-23.5.7",
        "UC-23.6.1",
        "UC-23.6.2",
        "UC-23.6.3",
        "UC-23.6.4",
        "UC-23.6.5",
        "UC-23.6.6",
        "UC-23.7.1",
        "UC-23.7.2",
        "UC-23.7.3",
        "UC-23.7.4",
        "UC-23.7.5",
        "UC-23.7.6",
        "UC-23.8.2",
        "UC-23.8.3",
        "UC-23.8.4",
        "UC-23.8.5",
        "UC-23.8.6",
        "UC-23.9.2",
        "UC-23.9.3",
        "UC-23.9.4",
        "UC-23.9.5",
        "UC-23.9.6",
        "UC-23.9.7"
      ]
    }
  }
}